content (stringlengths 85-101k) | title (stringlengths 0-150) | question (stringlengths 15-48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35-137) |
---|---|---|---|---|---|---|---|---|
Q:
Python: Finding a trend in a set of numbers
I have a list of numbers in Python, like this:
x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
What's the best way to find the trend in these numbers? I'm not interested in predicting what the next number will be, I just want to output the trend for many sets of numbers so that I can compare the trends.
Edit: By trend, I mean that I'd like a numerical representation of whether the numbers are increasing or decreasing and at what rate. I'm not massively mathematical, so there's probably a proper name for this!
Edit 2: It looks like what I really want is the co-efficient of the linear best fit. What's the best way to get this in Python?
A:
Possibly you mean you want to plot these numbers on a graph and find a straight line through them where the overall distance between the line and the numbers is minimized? This is called a linear regression:
def linreg(X, Y):
"""
return a,b in solution to y = ax + b such that root mean square distance between trend line and original points is minimized
"""
N = len(X)
Sx = Sy = Sxx = Syy = Sxy = 0.0
for x, y in zip(X, Y):
Sx = Sx + x
Sy = Sy + y
Sxx = Sxx + x*x
Syy = Syy + y*y
Sxy = Sxy + x*y
det = Sxx * N - Sx * Sx
return (Sxy * N - Sy * Sx)/det, (Sxx * Sy - Sx * Sxy)/det
x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
a,b = linreg(range(len(x)),x)  # your x, y are switched from standard notation
The trend line is unlikely to pass through your original points, but it will be as close as possible to the original points that a straight line can get. Using the gradient and intercept values of this trend line (a,b) you will be able to extrapolate the line past the end of the array:
extrapolatedtrendline = [a*index + b for index in range(20)]  # replace 20 with the desired trend length
A:
The link provided by Keith, or the answer from Riaz, might help you get the polynomial fit, but it is always recommended to use libraries where available, and for the problem at hand numpy provides a wonderful polynomial fitting function called polyfit. You can use polyfit to fit the data with a polynomial of any degree.
Here is an example using numpy to fit the data in a linear equation of the form y=ax+b
>>> import numpy as np
>>> data = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
>>> x = np.arange(0,len(data))
>>> y=np.array(data)
>>> z = np.polyfit(x,y,1)
>>> print "{0}x + {1}".format(*z)
4.32527472527x + 17.6
>>>
Similarly, a quadratic fit would be:
>>> z = np.polyfit(x,y,2)
>>> print "{0}x^2 + {1}x + {2}".format(*z)
0.311126373626x^2 + 0.280631868132x + 25.6892857143
>>>
A:
Here is one way to get an increasing/decreasing trend:
>>> x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
>>> trend = [b - a for a, b in zip(x[::1], x[1::1])]
>>> trend
[22, -5, 9, -4, 17, -22, 5, 13, -13, 21, 39, -26, 13]
In the resulting list trend, trend[0] can be interpreted as the increase from x[0] to x[1], trend[1] would be the increase from x[1] to x[2] etc. Negative values in trend mean that value in x decreased from one index to the next.
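If you need to reduce this to a single number so that many datasets can be compared (which is what the question asks for), one simple option, added here as my own illustration rather than part of the original answer, is the average of these differences:
>>> trend_strength = sum(trend) / len(trend)   # same as (x[-1] - x[0]) / (len(x) - 1)
>>> trend_strength   # roughly 5.3 for the example data: a clear upward trend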
A:
You could do a least squares fit of the data.
Using the formula from this page:
y = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
N = len(y)
x = range(N)
B = (sum(x[i] * y[i] for i in xrange(N)) - 1./N*sum(x)*sum(y)) / (sum(x[i]**2 for i in xrange(N)) - 1./N*sum(x)**2)
A = 1.*sum(y)/N - B * 1.*sum(x)/N
print "%f + %f * x" % (A, B)
Which prints the starting value and delta of the best fit line.
A:
I agree with Keith, I think you're probably looking for a linear least squares fit (if all you want to know is if the numbers are generally increasing or decreasing, and at what rate). The slope of the fit will tell you at what rate they're increasing. If you want a visual representation of a linear least squares fit, try Wolfram Alpha:
http://www.wolframalpha.com/input/?i=linear+fit+%5B12%2C+34%2C+29%2C+38%2C+34%2C+51%2C+29%2C+34%2C+47%2C+34%2C+55%2C+94%2C+68%2C+81%5D
Update: If you want to implement a linear regression in Python, I recommend starting with the explanation at Mathworld:
http://mathworld.wolfram.com/LeastSquaresFitting.html
It's a very straightforward explanation of the algorithm, and it practically writes itself. In particular, you want to pay close attention to equations 16-21, 27, and 28.
Try writing the algorithm yourself, and if you have problems, you should open another question.
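For reference, here is a minimal sketch (my own illustration, not part of the original answer) of the closed-form slope and intercept that those equations lead to:
def least_squares_fit(xs, ys):
    # slope = cov(x, y) / var(x); the intercept then follows from the means
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((px - mean_x) * (py - mean_y) for px, py in zip(xs, ys))
             / sum((px - mean_x) ** 2 for px in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

y = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
slope, intercept = least_squares_fit(range(len(y)), y)  # slope is about 4.33, intercept about 17.6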
A:
You can find the OLS coefficient using numpy:
import numpy as np
y = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
x = []
x.append(range(len(y))) #Time variable
x.append([1 for ele in xrange(len(y))]) #This adds the intercept, use range in Python3
y = np.matrix(y).T
x = np.matrix(x).T
betas = ((x.T*x).I*x.T*y)
Results:
>>> betas
matrix([[ 4.32527473], #coefficient on the time variable
[ 17.6 ]]) #coefficient on the intercept
Since the coefficient on the trend variable is positive, observations in your variable are increasing over time.
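As an aside (my addition, not part of the original answer), NumPy's built-in least-squares solver returns the same coefficients without forming an explicit matrix inverse, which is numerically safer for larger problems:
import numpy as np

y = np.array([12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81], dtype=float)
X = np.column_stack([np.arange(len(y)), np.ones(len(y))])  # time variable plus intercept column
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
# betas[0] is the slope (about 4.325), betas[1] the intercept (about 17.6)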
A:
You can simply use the scipy library:
import numpy as np
from scipy.stats import linregress
data = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
x = np.arange(1,len(data)+1)
y=np.array(data)
res = linregress(x, y)
print(f'Equation: {res[0]:.3f} * t + {res[1]:.3f}, R^2: {res[2] ** 2:.2f} ')
res
Output:
Equation: 4.325 * t + 13.275, R^2: 0.66
LinregressResult(slope=4.325274725274725, intercept=13.274725274725277, rvalue=0.8096297800892154, pvalue=0.0004497809466484867, stderr=0.9051717124425395, intercept_stderr=7.707259409345618)
A:
To find the trend in a set of numbers in Python, one approach you can take is to use the polyfit() function from the NumPy library. This function will fit a polynomial of a specified degree to your data and return the coefficients of the polynomial. To find the trend in your data, you would fit a line to the data by setting the degree of the polynomial to 1. This will give you the coefficients of the best-fit line, which you can use to determine the trend in your data.
Here is an example of how you could use the polyfit() function to find the trend in your data:
import numpy as np
# Define the data
x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
# Fit a line to the data using polyfit()
coefficients = np.polyfit(range(len(x)), x, 1)
# The trend is given by the slope of the line
trend = coefficients[0]
# Print the trend
print(trend)
In this example, the polyfit() function will fit a line to the data and return the coefficients of the best-fit line. The trend is given by the slope of the line, which is the first coefficient in the output. This value will tell you whether the data is increasing or decreasing and at what rate. A positive value indicates that the data is increasing, while a negative value indicates that the data is decreasing. The larger the value, the steeper the slope of the line and the faster the data is increasing or decreasing.
You can also use the polyfit() function to find the trend in multiple sets of data. To do this, you can fit a line to each set of data using a for loop, and then store the trends in a list. You can then compare the trends to see how they differ between the different sets of data.
Here is an example of how you could use the polyfit() function to find the trend in multiple sets of data:
import numpy as np
# Define the data
x1 = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
x2 = [15, 28, 39, 48, 51, 61, 74, 85, 92, 100, 115, 131, 152, 176]
# Fit a line to each set of data using a for loop
trends = []
for data in [x1, x2]:
coefficients = np.polyfit(range(len(data)), data, 1)
trend = coefficients[0]
trends.append(trend)
# Compare the trends
print(trends)
In this example, the polyfit() function is used in a for loop to fit a line to each set of data and find the trend. The trends are stored in a list, and then compared to see how they differ between the different sets of data.
| Python: Finding a trend in a set of numbers | I have a list of numbers in Python, like this:
x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]
What's the best way to find the trend in these numbers? I'm not interested in predicting what the next number will be, I just want to output the trend for many sets of numbers so that I can compare the trends.
Edit: By trend, I mean that I'd like a numerical representation of whether the numbers are increasing or decreasing and at what rate. I'm not massively mathematical, so there's probably a proper name for this!
Edit 2: It looks like what I really want is the co-efficient of the linear best fit. What's the best way to get this in Python?
| [
"Possibly you mean you want to plot these numbers on a graph and find a straight line through them where the overall distance between the line and the numbers is minimized? This is called a linear regression \ndef linreg(X, Y):\n \"\"\"\n return a,b in solution to y = ax + b such that root mean square distance between trend line and original points is minimized\n \"\"\"\n N = len(X)\n Sx = Sy = Sxx = Syy = Sxy = 0.0\n for x, y in zip(X, Y):\n Sx = Sx + x\n Sy = Sy + y\n Sxx = Sxx + x*x\n Syy = Syy + y*y\n Sxy = Sxy + x*y\n det = Sxx * N - Sx * Sx\n return (Sxy * N - Sy * Sx)/det, (Sxx * Sy - Sx * Sxy)/det\n\n\nx = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\na,b = linreg(range(len(x)),x) //your x,y are switched from standard notation\n\nThe trend line is unlikely to pass through your original points, but it will be as close as possible to the original points that a straight line can get. Using the gradient and intercept values of this trend line (a,b) you will be able to extrapolate the line past the end of the array:\nextrapolatedtrendline=[a*index + b for index in range(20)] //replace 20 with desired trend length\n\n",
"The Link provided by Keith or probably the answer from Riaz might help you to get the poly fit, but it is always recommended to use libraries if available, and for the problem in your hand, numpy provides a wonderful polynomial fit function called polyfit . You can use polyfit to fit the data over any degree of equation.\nHere is an example using numpy to fit the data in a linear equation of the form y=ax+b\n>>> data = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\n>>> x = np.arange(0,len(data))\n>>> y=np.array(data)\n>>> z = np.polyfit(x,y,1)\n>>> print \"{0}x + {1}\".format(*z)\n4.32527472527x + 17.6\n>>> \n\nsimilarly a quadratic fit would be\n>>> print \"{0}x^2 + {1}x + {2}\".format(*z)\n0.311126373626x^2 + 0.280631868132x + 25.6892857143\n>>> \n\n",
"Here is one way to get an increasing/decreasing trend:\n>>> x = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\n>>> trend = [b - a for a, b in zip(x[::1], x[1::1])]\n>>> trend\n[22, -5, 9, -4, 17, -22, 5, 13, -13, 21, 39, -26, 13]\n\nIn the resulting list trend, trend[0] can be interpreted as the increase from x[0] to x[1], trend[1] would be the increase from x[1] to x[2] etc. Negative values in trend mean that value in x decreased from one index to the next.\n",
"You could do a least squares fit of the data.\nUsing the formula from this page:\ny = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\nN = len(y)\nx = range(N)\nB = (sum(x[i] * y[i] for i in xrange(N)) - 1./N*sum(x)*sum(y)) / (sum(x[i]**2 for i in xrange(N)) - 1./N*sum(x)**2)\nA = 1.*sum(y)/N - B * 1.*sum(x)/N\nprint \"%f + %f * x\" % (A, B)\n\nWhich prints the starting value and delta of the best fit line.\n",
"I agree with Keith, I think you're probably looking for a linear least squares fit (if all you want to know is if the numbers are generally increasing or decreasing, and at what rate). The slope of the fit will tell you at what rate they're increasing. If you want a visual representation of a linear least squares fit, try Wolfram Alpha:\nhttp://www.wolframalpha.com/input/?i=linear+fit+%5B12%2C+34%2C+29%2C+38%2C+34%2C+51%2C+29%2C+34%2C+47%2C+34%2C+55%2C+94%2C+68%2C+81%5D\nUpdate: If you want to implement a linear regression in Python, I recommend starting with the explanation at Mathworld:\nhttp://mathworld.wolfram.com/LeastSquaresFitting.html\nIt's a very straightforward explanation of the algorithm, and it practically writes itself. In particular, you want to pay close attention to equations 16-21, 27, and 28.\nTry writing the algorithm yourself, and if you have problems, you should open another question.\n",
"You can find the OLS coefficient using numpy:\nimport numpy as np\n\ny = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\n\nx = []\nx.append(range(len(y))) #Time variable\nx.append([1 for ele in xrange(len(y))]) #This adds the intercept, use range in Python3\n\ny = np.matrix(y).T\nx = np.matrix(x).T\n\nbetas = ((x.T*x).I*x.T*y)\n\nResults:\n>>> betas\nmatrix([[ 4.32527473], #coefficient on the time variable\n [ 17.6 ]]) #coefficient on the intercept\n\nSince the coefficient on the trend variable is positive, observations in your variable are increasing over time.\n",
"You can use simply scipy library\nfrom scipy.stats import linregress\ndata = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\nx = np.arange(1,len(data)+1)\ny=np.array(data)\nres = linregress(x, y)\nprint(f'Equation: {res[0]:.3f} * t + {res[1]:.3f}, R^2: {res[2] ** 2:.2f} ')\nres\n\nOutput:\nEquation: 4.325 * t + 13.275, R^2: 0.66 \nLinregressResult(slope=4.325274725274725, intercept=13.274725274725277, rvalue=0.8096297800892154, pvalue=0.0004497809466484867, stderr=0.9051717124425395, intercept_stderr=7.707259409345618)\n\n",
"To find the trend in a set of numbers in Python, one approach you can take is to use the polyfit() function from the NumPy library. This function will fit a polynomial of a specified degree to your data and return the coefficients of the polynomial. To find the trend in your data, you would fit a line to the data by setting the degree of the polynomial to 1. This will give you the coefficients of the best-fit line, which you can use to determine the trend in your data.\nHere is an example of how you could use the polyfit() function to find the trend in your data:\nimport numpy as np\n\n# Define the data\nx = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\n\n# Fit a line to the data using polyfit()\ncoefficients = np.polyfit(range(len(x)), x, 1)\n\n# The trend is given by the slope of the line\ntrend = coefficients[0]\n\n# Print the trend\nprint(trend)\n\nIn this example, the polyfit() function will fit a line to the data and return the coefficients of the best-fit line. The trend is given by the slope of the line, which is the first coefficient in the output. This value will tell you whether the data is increasing or decreasing and at what rate. A positive value indicates that the data is increasing, while a negative value indicates that the data is decreasing. The larger the value, the steeper the slope of the line and the faster the data is increasing or decreasing.\nYou can also use the polyfit() function to find the trend in multiple sets of data. To do this, you can fit a line to each set of data using a for loop, and then store the trends in a list. You can then compare the trends to see how they differ between the different sets of data.\nHere is an example of how you could use the polyfit() function to find the trend in multiple sets of data:\nimport numpy as np\n\n# Define the data\nx1 = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\nx2 = [15, 28, 39, 48, 51, 61, 74, 85, 92, 100, 115, 131, 152, 176]\n\n# Fit a line to each set of data using a for loop\ntrends = []\nfor data in [x1, x2]:\n coefficients = np.polyfit(range(len(data)), data, 1)\n trend = coefficients[0]\n trends.append(trend)\n\n# Compare the trends\nprint(trends)\n\nIn this example, the polyfit() function is used in a for loop to fit a line to each set of data and find the trend. The trends are stored in a list, and then compared to see how they differ between the different sets of data.\n"
] | [
32,
27,
7,
6,
4,
2,
0,
0
] | [
"Compute the beta coefficient.\ny = [12, 34, 29, 38, 34, 51, 29, 34, 47, 34, 55, 94, 68, 81]\nx = range(1,len(y)+1)\n\ndef var(X):\n S = 0.0\n SS = 0.0\n for x in X:\n S += x\n SS += x*x\n xbar = S/float(len(X))\n return (SS - len(X) * xbar * xbar) / (len(X) -1.0)\n\ndef cov(X,Y):\n n = len(X)\n xbar = sum(X) / n\n ybar = sum(Y) / n\n return sum([(x-xbar)*(y-ybar) for x,y in zip(X,Y)])/(n-1)\n\n\ndef beta(x,y):\n return cov(x,y)/var(x)\n\nprint beta(x,y) #4.34285714286\n\n"
] | [
-2
] | [
"math",
"python"
] | stackoverflow_0010048571_math_python.txt |
Q:
Flask page wont redirect after registration form is validated
After the user registers on the register form, it should redirect them to the home page, but instead it doesn't do anything. It simply reloads the page without the user's password in the password fields.
This is home.py
` from flask import Flask, render_template, url_for, flash, redirect
from forms import RegistrationForm, LoginForm
app = Flask(__name__)
app.config['SECRET_KEY'] = '7364683972504ghfbeg7390'
posts = [
{
'author': 'Sam',
'title': 'Blog Post 1',
'content': 'First post content',
'date_posted': 'November 20, 2022',
},
{
'author': 'Jake',
'title': 'Blog Post 2',
'content': 'Second post content',
'date_posted': 'November 23, 2022'
}
]
@app.route("/home")
@app.route("/")
def home():
return render_template('home.html', posts=posts)
@app.route("/about")
def about():
return render_template('about.html', posts=posts)
@app.route("/register", methods=['GET','POST'])
def register():
form = RegistrationForm()
if form.validate_on_submit():
flash(f'Account created for {form.username.data}!', 'success')
return redirect(url_for('home'))
return render_template('register.html', title='Register', form=form)
@app.route("/login")
def login():
form = LoginForm()
return render_template('login.html', title='Login', form=form)
if __name__ == '__main__':
app.run(debug = True)
this is base.html.
`<!DOCTYPE html>
<html>
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='main.css') }}">
{% if title %}
<title> Flask Blog - {{ title }}</title>
{% else %}
<title>Flask Blog</title>
{% endif %}
{{ form.hidden_tag() }}
</head>
<body>
<header class="site-header">
<nav class="navbar navbar-expand-md navbar-dark bg-dark static-top">
<div class="container">
<a class="navbar-brand mr-4" href="/"></a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarToggle" aria-controls="navbarToggle" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarToggle">
<div class="navbar-nav mr-auto">
<a class="nav-item nav-link" href="/">Home</a>
<a class="nav-item nav-link" href="/about">About</a>
</div>
<!-- Navbar Right Side -->
<div class="navbar-nav">
<a class="nav-item nav-link" href="/login">Login</a>
<a class="nav-item nav-link" href="/register">Register</a>
</div>
</div>
</div>
</nav>
</header>
<main role="main" class="container">
<div class="row">
<div class="col-md-8">
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
{% for category, message in messages %}
<div class="alert alert-{{ category }}">
{{ message }}
</div>
{% endfor %}
{% endif %}
{% endwith %}
{% block content %}
{% endblock %}
</div>
<div class="col-md-4">
<div class="content-section">
<h3>Our Sidebar</h3>
<p class='text-muted'>You can put any information here you'd like.
<ul class="list-group">
<li class="list-group-item list-group-item-light">Latest Posts</li>
<li class="list-group-item list-group-item-light">Announcements</li>
<li class="list-group-item list-group-item-light">Calendars</li>
<li class="list-group-item list-group-item-light">etc</li>
</ul>
</p>
</div>
</div>
</div>
</main>
<!-- Optional JavaScript -->
<!-- jQuery first, then Popper.js, then Bootstrap JS -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>
</html>`
This is forms.py
`from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, SubmitField, BooleanField
from wtforms.validators import DataRequired, Length, Email, EqualTo
class RegistrationForm(FlaskForm):
username = StringField('Username',
validators=[DataRequired(), Length(min=2, max=20)])
email = StringField('Email',
validators=[DataRequired(), Email()])
password = PasswordField('Password', validators=[DataRequired()])
confirm_password = PasswordField('Confirm Password',
validators=[DataRequired(), EqualTo('Password')])
submit = SubmitField('Sign Up')
class LoginForm(FlaskForm):
email = StringField('Email',
validators=[DataRequired(), Email()])
password = PasswordField('Password', validators=[DataRequired()])
remember = BooleanField('Remember Me')
submit = SubmitField('Login')`
I'm not getting any problems on VSCode, not sure where I went wrong. I am a beginner to Flask. I'm currently following Corey Schafer's tutorial on YouTube. I want it to redirect to the homepage once the user has entered their information to register.
A:
Hi,
@app.route("/register", methods=['GET','POST'])
def register():
form = RegistrationForm()
if form.validate_on_submit():
flash(f'Account created for {form.username.data}!', 'success')
return redirect(url_for('home'))
return render_template('register.html', title='Register', form=form)
is apparently not being validated on submit, so the redirect code is never triggered. The reason for this validation failure is most likely here:
password = PasswordField('Password', validators=[DataRequired()])
confirm_password = PasswordField('Confirm Password',
validators=[DataRequired(), EqualTo('Password')])
According to the documentation, the EqualTo validator accepts two parameters (fieldname, message=None). You are, however, providing a non-existent fieldname, 'Password' (which is the label of the field, not its name), in EqualTo('Password'). The working approach is EqualTo('password').
The partial code with a change applied.
password = PasswordField('Password', validators=[DataRequired()])
confirm_password = PasswordField('Confirm Password',
validators=[DataRequired(), EqualTo('password')])
If you are still having validation issues, check the official docs here and follow the example of
class wtforms.validators.EqualTo(fieldname, message=None)
where an actual working example of password fields validation is
available.
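As a general debugging aid (my addition, not part of the answer above), you can also log the form's validation errors in the view, which makes this kind of silent validation failure easy to spot:
@app.route("/register", methods=['GET', 'POST'])
def register():
    form = RegistrationForm()
    if form.validate_on_submit():
        flash(f'Account created for {form.username.data}!', 'success')
        return redirect(url_for('home'))
    if form.errors:
        # form.errors maps each field name to a list of validation messages,
        # so a broken validator (such as the EqualTo fieldname above) shows up here
        print(form.errors)
    return render_template('register.html', title='Register', form=form)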
| Flask page wont redirect after registration form is validated | After the user registers on the register form, it should redirect them to the home page, but instead it doesn't do anything. It simply reloads the page without the user's password in the password fields.
This is home.py
` from flask import Flask, render_template, url_for, flash, redirect
from forms import RegistrationForm, LoginForm
app = Flask(__name__)
app.config['SECRET_KEY'] = '7364683972504ghfbeg7390'
posts = [
{
'author': 'Sam',
'title': 'Blog Post 1',
'content': 'First post content',
'date_posted': 'November 20, 2022',
},
{
'author': 'Jake',
'title': 'Blog Post 2',
'content': 'Second post content',
'date_posted': 'November 23, 2022'
}
]
@app.route("/home")
@app.route("/")
def home():
return render_template('home.html', posts=posts)
@app.route("/about")
def about():
return render_template('about.html', posts=posts)
@app.route("/register", methods=['GET','POST'])
def register():
form = RegistrationForm()
if form.validate_on_submit():
flash(f'Account created for {form.username.data}!', 'success')
return redirect(url_for('home'))
return render_template('register.html', title='Register', form=form)
@app.route("/login")
def login():
form = LoginForm()
return render_template('login.html', title='Login', form=form)
if __name__ == '__main__':
app.run(debug = True)
this is base.html.
`<!DOCTYPE html>
<html>
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='main.css') }}">
{% if title %}
<title> Flask Blog - {{ title }}</title>
{% else %}
<title>Flask Blog</title>
{% endif %}
{{ form.hidden_tag() }}
</head>
<body>
<header class="site-header">
<nav class="navbar navbar-expand-md navbar-dark bg-dark static-top">
<div class="container">
<a class="navbar-brand mr-4" href="/"></a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarToggle" aria-controls="navbarToggle" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarToggle">
<div class="navbar-nav mr-auto">
<a class="nav-item nav-link" href="/">Home</a>
<a class="nav-item nav-link" href="/about">About</a>
</div>
<!-- Navbar Right Side -->
<div class="navbar-nav">
<a class="nav-item nav-link" href="/login">Login</a>
<a class="nav-item nav-link" href="/register">Register</a>
</div>
</div>
</div>
</nav>
</header>
<main role="main" class="container">
<div class="row">
<div class="col-md-8">
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
{% for category, message in messages %}
<div class="alert alert-{{ category }}">
{{ message }}
</div>
{% endfor %}
{% endif %}
{% endwith %}
{% block content %}
{% endblock %}
</div>
<div class="col-md-4">
<div class="content-section">
<h3>Our Sidebar</h3>
<p class='text-muted'>You can put any information here you'd like.
<ul class="list-group">
<li class="list-group-item list-group-item-light">Latest Posts</li>
<li class="list-group-item list-group-item-light">Announcements</li>
<li class="list-group-item list-group-item-light">Calendars</li>
<li class="list-group-item list-group-item-light">etc</li>
</ul>
</p>
</div>
</div>
</div>
</main>
<!-- Optional JavaScript -->
<!-- jQuery first, then Popper.js, then Bootstrap JS -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>
</html>`
This is forms.py
`from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, SubmitField, BooleanField
from wtforms.validators import DataRequired, Length, Email, EqualTo
class RegistrationForm(FlaskForm):
username = StringField('Username',
validators=[DataRequired(), Length(min=2, max=20)])
email = StringField('Email',
validators=[DataRequired(), Email()])
password = PasswordField('Password', validators=[DataRequired()])
confirm_password = PasswordField('Confirm Password',
validators=[DataRequired(), EqualTo('Password')])
submit = SubmitField('Sign Up')
class LoginForm(FlaskForm):
email = StringField('Email',
validators=[DataRequired(), Email()])
password = PasswordField('Password', validators=[DataRequired()])
remember = BooleanField('Remember Me')
submit = SubmitField('Login')`
I'm not getting any problems on VSCode, not sure where I went wrong. I am a beginner to Flask. I'm currently following Corey Schafer's tutorial on YouTube. I want it to redirect to the homepage once the user has entered their ifnromation to register.
| [
"\nHi,\[email protected](\"/register\", methods=['GET','POST'])\ndef register():\n form = RegistrationForm()\n if form.validate_on_submit():\n flash(f'Account created for {form.username.data}!', 'success')\n return redirect(url_for('home'))\n return render_template('register.html', title='Register', form=form)\n\nis apparently not being validated on submit for some reason. Therefore\nthe redirect code is never triggered . The reason for this validation\nfailure is most likely here:\npassword = PasswordField('Password', validators=[DataRequired()]) \nconfirm_password = PasswordField('Confirm Password', \n validators=[DataRequired(), EqualTo('Password')])\n\naccording to documentations, EqualTo validator accepts two\nparameters (fieldname, message=None). You are however providing non\nexisting fieldname 'Password' (it's a label of a field, not a name)\nEqualTo('Password'). The working approach could be like this:\nEqualTo('password').\nThe partial code with a change applied.\npassword = PasswordField('Password', validators=[DataRequired()]) \nconfirm_password = PasswordField('Confirm Password', \n validators=[DataRequired(), EqualTo('password')])\n\nIf you still be having validation issues, check official docs\nhere and follow the example of\nclass wtforms.validators.EqualTo(fieldname, message=None)\n\nwhere an actual working example of password fields validation is\navailable.\n\n"
] | [
0
] | [] | [] | [
"flask",
"flask_login",
"flask_wtforms",
"html",
"python"
] | stackoverflow_0074623116_flask_flask_login_flask_wtforms_html_python.txt |
Q:
reasons for serializer not validating data DRF
I am sending the data through postman as follows
my model.py is as follows
def get_upload_path(instance, filename):
model = instance._meta
name = model.verbose_name_plural.replace(' ', '_')
return f'{name}/images/{filename}'
class ImageAlbum(models.Model):
def default(self):
return self.images.filter(default=True).first()
def thumbnails(self):
return self.images.filter(width__lt=100, length_lt=100)
class Photo(models.Model):
name = models.CharField(max_length=255, null=True, blank=True)
photo = models.ImageField(upload_to=get_upload_path, null=True, blank=True)
default = models.BooleanField(default=False, null=True, blank=True)
width = models.FloatField(default=100, null=True, blank=True)
length = models.FloatField(default=100, null=True, blank=True)
description = models.CharField(max_length=2000, null=True, blank=True)
latitude = models.DecimalField(max_digits=11, decimal_places=2, null=True, blank=True)
longitude = models.DecimalField(max_digits=11, decimal_places=2, null=True, blank=True)
album = models.ForeignKey(ImageAlbum, related_name='album_data', on_delete=models.CASCADE, null=True, blank=True)
my view.py is as follows
class ImageAlbumListApiView(APIView):
permission_classes = [IsAuthenticated]
parser_classes = [MultiPartParser, FormParser, ]
def get(self, request):
image_album = ImageAlbum.objects.all()
serializer = ImageAlbumSerializer(image_album, many=True)
return Response(serializer.data)
def post(self, request):
serializer = ImageAlbumSerializer(data=request.data)
print(request.data)
if serializer.is_valid(raise_exception=True):
serializer.save()
return Response(serializer.data)
else:
return Response(serializer.errors)
My serializer.py is as follows
class PhotoSerializer(serializers.ModelSerializer):
class Meta:
model = models.Photo
fields = '__all__'
class ImageAlbumSerializer(serializers.ModelSerializer):
album_data = PhotoSerializer(many=True, read_only=True)
file = serializers.ListField(
child = serializers.ImageField(max_length = 1000000, allow_empty_file = False, use_url = False,
write_only = True), write_only=True)
class Meta:
###Test###
model = models.ImageAlbum
fields = ['id', 'album_data', 'file']
read_only_fields = ['id']
def create(self, validated_data):
#album_data = validated_data.get('album_data')
print(validated_data)
uploaded_files = validated_data.get('file')
#image_info = validated_data.pop('images')
album = models.ImageAlbum.objects.create()
for uploaded_item in uploaded_files:
models.Photo.objects.get_or_create(album=album, photo=uploaded_item)
return album
Now the problem is this that:
When I am posting data through Postman, in the view I am getting data with both keys, i.e. 'album_data', which contains nested JSON data, and 'file', which has the list of uploaded files.
But I am not receiving the 'album_data' key in the validated_data of the create function in the serializer.
As I am new to DRF, I am unable to determine why data validation in the serializer is failing.
A:
Here is the problem, in the ImageAlbumSerializer:
...
album_data = PhotoSerializer(many=True, read_only=True)
...
The data is being passed in from request.data, but you declared it as a read_only field. As the name implies, a read-only attribute won't be validated or even passed to the validate method. Just remove that argument and it will work.
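For clarity, the corrected field declaration would look roughly like this (a sketch based on the answer; required=False is my own assumption so that requests containing only 'file' keep working):
class ImageAlbumSerializer(serializers.ModelSerializer):
    # Without read_only=True the nested payload reaches validated_data in create()
    album_data = PhotoSerializer(many=True, required=False)
    file = serializers.ListField(
        child=serializers.ImageField(max_length=1000000, allow_empty_file=False,
                                     use_url=False, write_only=True),
        write_only=True)

    class Meta:
        model = models.ImageAlbum
        fields = ['id', 'album_data', 'file']
        read_only_fields = ['id']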
| reasons for serializer not validating data DRF | I am sending the data through postman as follows
my model.py is as follows
def get_upload_path(instance, filename):
model = instance._meta
name = model.verbose_name_plural.replace(' ', '_')
return f'{name}/images/{filename}'
class ImageAlbum(models.Model):
def default(self):
return self.images.filter(default=True).first()
def thumbnails(self):
return self.images.filter(width__lt=100, length_lt=100)
class Photo(models.Model):
name = models.CharField(max_length=255, null=True, blank=True)
photo = models.ImageField(upload_to=get_upload_path, null=True, blank=True)
default = models.BooleanField(default=False, null=True, blank=True)
width = models.FloatField(default=100, null=True, blank=True)
length = models.FloatField(default=100, null=True, blank=True)
description = models.CharField(max_length=2000, null=True, blank=True)
latitude = models.DecimalField(max_digits=11, decimal_places=2, null=True, blank=True)
longitude = models.DecimalField(max_digits=11, decimal_places=2, null=True, blank=True)
album = models.ForeignKey(ImageAlbum, related_name='album_data', on_delete=models.CASCADE, null=True, blank=True)
my view.py is as follows
class ImageAlbumListApiView(APIView):
permission_classes = [IsAuthenticated]
parser_classes = [MultiPartParser, FormParser, ]
def get(self, request):
image_album = ImageAlbum.objects.all()
serializer = ImageAlbumSerializer(image_album, many=True)
return Response(serializer.data)
def post(self, request):
serializer = ImageAlbumSerializer(data=request.data)
print(request.data)
if serializer.is_valid(raise_exception=True):
serializer.save()
return Response(serializer.data)
else:
return Response(serializer.errors)
My serializer.py is as follows
class PhotoSerializer(serializers.ModelSerializer):
class Meta:
model = models.Photo
fields = '__all__'
class ImageAlbumSerializer(serializers.ModelSerializer):
album_data = PhotoSerializer(many=True, read_only=True)
file = serializers.ListField(
child = serializers.ImageField(max_length = 1000000, allow_empty_file = False, use_url = False,
write_only = True), write_only=True)
class Meta:
###Test###
model = models.ImageAlbum
fields = ['id', 'album_data', 'file']
read_only_fields = ['id']
def create(self, validated_data):
#album_data = validated_data.get('album_data')
print(validated_data)
uploaded_files = validated_data.get('file')
#image_info = validated_data.pop('images')
album = models.ImageAlbum.objects.create()
for uploaded_item in uploaded_files:
models.Photo.objects.get_or_create(album=album, photo=uploaded_item)
return album
Now the problem is this that:
when I am posting data through Postman, in the View, i am getting data with both the keys i.e. 'album_data' which contains nested JSON data and 'file' which has list of uploaded files.
But i am not receiving the 'album_data' key in the validated_data of the create function in the serializer.
As i am new to DRF, i am unable to determine why data validation in the serializer is failing?
| [
"Here is the problem, on the ImageAlbumSerializer\n ...\n album_data = PhotoSerializer(many=True, read_only=True)\n ...\n\nThe data is bieng passed from the request.data but u declared it as read_only field. so as the name implies that attr is a read_only so it wont be validated or even passed to the validate method. just remove that arg and it will work\n"
] | [
0
] | [] | [] | [
"django",
"django_rest_framework",
"django_serializer",
"python"
] | stackoverflow_0074641604_django_django_rest_framework_django_serializer_python.txt |
Q:
unable to send image through discord webhooks
I created a method that takes a screenshot:
def send_screenshot_to_discord(self):
webhook=discord_webhooks.DiscordWebhooks("https://discord.com/api/webhooks/xyz")
img=ImageGrab.grab()
webhook.set_image(image=img)
webhook.set_footer(text="img")
webhook.send()
the result :
A:
Unfortunately, the discord_webhooks package you're using does not support file attachments, making it impossible to set the locally saved images or Pil-created images as embed images.
The way you set local images as embed images is by using attachment://image.png as the embed.set_image function's url argument, and since you cannot attach images, you can't do that.
Here is the FAQ from discord.py on how to set local images as embed images.
file = discord.File("path/to/my/image.png", filename="image.png")
embed = discord.Embed()
embed.set_image(url="attachment://image.png")
await channel.send(file=file, embed=embed)
Consider using discord_webhook or discord.py's webhook.
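If you do switch to the discord_webhook package, a rough sketch of sending a PIL screenshot could look like this (untested; the class and method names are taken from that package's documented API, so treat them as assumptions):
import io

from PIL import ImageGrab
from discord_webhook import DiscordWebhook, DiscordEmbed

def send_screenshot_to_discord(webhook_url):
    # Grab the screen and serialize it to PNG bytes in memory
    buffer = io.BytesIO()
    ImageGrab.grab().save(buffer, format="PNG")

    webhook = DiscordWebhook(url=webhook_url)
    webhook.add_file(file=buffer.getvalue(), filename="screenshot.png")

    embed = DiscordEmbed()
    embed.set_image(url="attachment://screenshot.png")
    embed.set_footer(text="img")
    webhook.add_embed(embed)
    webhook.execute()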
| unable to send image through discord webhooks | i created a method that takes a screenshot
def send_screenshot_to_discord(self):
webhook=discord_webhooks.DiscordWebhooks("https://discord.com/api/webhooks/xyz")
img=ImageGrab.grab()
webhook.set_image(image=img)
webhook.set_footer(text="img")
webhook.send()
the result :
| [
"Unfortunately, the discord_webhooks package you're using does not support file attachments, making it impossible to set the locally saved images or Pil-created images as embed images.\nThe way you set local images as embed images is by using attachment://image.png as the embed.set_image function's url argument, and since you cannot attach images, you can't do that.\nHere is the FAQ from discord.py on how to set local images as embed images.\nfile = discord.File(\"path/to/my/image.png\", filename=\"image.png\")\nembed = discord.Embed()\nembed.set_image(url=\"attachment://image.png\")\nawait channel.send(file=file, embed=embed)\n\nConsider using discord_webhook or discord.py's webhook.\n"
] | [
1
] | [] | [] | [
"discord",
"discord.py",
"python",
"python_imaging_library",
"webhooks"
] | stackoverflow_0074653202_discord_discord.py_python_python_imaging_library_webhooks.txt |
Q:
predict new user using lightfm
I want to give a recommendation to a new user using lightfm.
Hi, I've got a model, interactions, and item_features.
The new user is not in the interactions, and the only information about the new user is their ratings (a list of book_id and rating pairs).
I tried to use predict() or predict_rank(), but I failed to figure out how.
Could you please give me some advice?
Below is my screenshot which raised ValueError..
A:
I was having the same problem,
What I did was
Created a user_features matrix (based on their preferences) using Dataset class
dataset = Dataset()
dataset.fit(user_ids,item_ids)
user_features = dataset.build_user_features([[user_id_1, [user_features_1]], ...], normalize=True)
Provide it during training along with interaction CSR
model = LightFM(loss='warp')
model = model.fit(interaction_csr,
                  user_features=user_features)
Create a user_feature matrix for the new user using their preferences (in my case, genres)
dataset.fit_partial(users=[user_id],user_features=total_genres)
new_user_feature = [user_id,new_user_feature]
new_user_feature = dataset.build_user_features([new_user_feature])
Now predict item rankings with new_user feature
scores = model.predict(<new-user-index>, np.arange(n_items),user_features=new_user_feature)
This gives a pretty decent result for new users but is not as good as the pure CF model.
This is how I implemented it.
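To fill in the <new-user-index> placeholder in step 4 (my addition, based on LightFM's Dataset.mapping() method), the internal index assigned to the new user can be looked up like this, assuming the dataset, model, np and n_items names from the steps above:
user_id_map, user_feature_map, item_id_map, item_feature_map = dataset.mapping()
new_user_index = user_id_map[user_id]  # internal index assigned by fit_partial
scores = model.predict(new_user_index, np.arange(n_items), user_features=new_user_feature)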
A:
This worked for me. It explains the process of formatting new user features and generating predictions for them.
fit_partial can only be used if you want to resume model training from its current state when you have newer interaction data for existing users and items. When you have newer users/items, you should retrain the model.
The existing model can be used to generate predictions for newer users though, the link above explains the process.
| predict new user using lightfm | I want to give a recommendation to a new user using lightfm.
Hi, I've got model, interactions, item_features.
The new user is not in interactions and the only information of the new user is their ratings.(list of book_id and rating pairs)
I tried to use predict() or predict_rank(), but I failed to figure out how.
Could you please give me some advice?
Below is my screenshot which raised ValueError..
| [
"I was having the same problem,\nWhat I did was\n\nCreated a user_features matrix (based on their preferences) using Dataset class\n dataset = Dataset()\n dataset.fit(user_ids,item_ids)\n user_features = build_user_features([[user_id_1,[user_features_1]],..], normalize=True)\n\n\nProvide it during training along with interaction CSR\nmodel = LightFM(loss='warp')\nmodel = model.fit(iteraction_csr,\n user_features=user_features)\n\n\nCreate user_feature matrix for new-user using their preference ( in\nmy case genres )\ndataset.fit_partial(users=[user_id],user_features=total_genres)\nnew_user_feature = [user_id,new_user_feature]\nnew_user_feature = dataset.build_user_features([new_user_feature])\n\n\nNow predict item rankings with new_user feature\nscores = model.predict(<new-user-index>, np.arange(n_items),user_features=new_user_feature)\n\n\n\nThis gives a pretty decent result for new users but is not as good as the pure CF model.\nThis is how I implemented it.\n",
"This worked for me. It explains the process of formatting new user features and generating predictions for them.\nfit_partial can only be used if you want to resume model training from its current state when you have newer interaction data for existing users and items. When you have newer users/items, you should retrain the model.\nThe existing model can be used to generate predictions for newer users though, the link above explains the process.\n"
] | [
3,
0
] | [] | [] | [
"data_science",
"lightfm",
"python",
"recommendation_engine"
] | stackoverflow_0068857138_data_science_lightfm_python_recommendation_engine.txt |
Q:
How to receive a num at each step and continue until zero is entered; then this program should print the sum of enter nums
How to write a program that receives a number from the input at each step and continues to work until zero is entered. After the zero is entered, the program should print the sum of the entered numbers. I want to get n different numbers on n different lines, and it should stop when it reaches zero.
For ex:(input:)
3
4
5
0
Output:
12
Actually I have this code but it doesn’t work:
‘’’python’’’
Sum= 0
Num = int(input())
While num!=0 :
Num = int(input())
Sum+= num
Print(sum)
But it gives ‘9’ instead of ‘12’
A:
Here I'm using a while loop that continues until the user input is 0 then prints out the sum.
res = 0
user_input = int(input('Input a number: '))
while user_input != 0:
    res += user_input
    user_input = int(input('Input a number: '))
print("You entered 0 so the program stopped. The sum of your inputs is: {}".format(res))
A:
Code:-
total_sum=0
n=int(input("Enter the number: "))
while n!=0:
total_sum+=n
n=int(input("Enter the number: "))
print("The total sum until the user input 0 is: "+str(total_sum))
Output:-
#Testcase1 user_input:- 10, 0
Enter the number: 10
Enter the number: 0
The total sum until the user input 0 is: 10
#Testcase2 user_input:- 3, 4, 5, 0
Enter the number: 3
Enter the number: 4
Enter the number: 5
Enter the number: 0
The total sum until the user input 0 is: 12
#Improvisation - removing the redundant initialization of n in the above code (it appears both before the while loop and inside it)
Code:-
total_sum=0
while True:
n=int(input("Enter the number: "))
total_sum+=n
if not n:
break
print("The total sum until the user input 0 is: "+str(total_sum))
Output:-
Same as above
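As a more compact alternative (my addition, not part of the original answer), Python's two-argument iter() keeps calling a function until it returns the sentinel value, so the whole loop collapses to one line:
# iter(callable, 0) yields each entered number until 0 is returned, and sum() adds them up
total = sum(iter(lambda: int(input("Enter the number: ")), 0))
print("The total sum until the user input 0 is: " + str(total))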
| How to receive a num at each step and continue until zero is entered; then this program should print the sum of enter nums | How to Write a program that receive a number from the input at each step and continue to work until zero is entered. After the zero digit is entered, this program should print the sum of the entered numbers. I want to get n different numbers in n different lines and it stops when it reaches Zero
For ex:(input:)
3
4
5
0
Output:
12
Actually I have this code but it doesn’t work:
‘’’python’’’
Sum= 0
Num = int(input())
While num!=0 :
Num = int(input())
Sum+= num
Print(sum)
But it gives ‘9’ instead of ‘12’
| [
"Here I'm using a while loop that continues until the user input is 0 then prints out the sum.\nres = 0\nuser_input = int(input('Input a number: '))\nwhile user_input != 0:\n user_input=int(input('Input a number: '))\n res+= user_input\nprint(\"You entered 0 so the program stopped. The sum of your inputs is: {}\".format(res))\n\n",
"Code:-\ntotal_sum=0\nn=int(input(\"Enter the number: \"))\nwhile n!=0:\n total_sum+=n\n n=int(input(\"Enter the number: \"))\nprint(\"The total sum until the user input 0 is: \"+str(total_sum))\n\nOutput:-\n#Testcase1 user_input:- 10, 0\nEnter the number: 10\nEnter the number: 0\nThe total sum until the user input 0 is: 10\n\n#Testcase2 user_input:- 3, 4, 5, 0\nEnter the number: 3\nEnter the number: 4\nEnter the number: 5\nEnter the number: 0\nThe total sum until the user input 0 is: 12\n\n#Improvisation - { Removing the redundancy of n initialization in above code (which is two times before the while loop and inside the while loop)}\nCode:-\ntotal_sum=0\nwhile True:\n n=int(input(\"Enter the number: \"))\n total_sum+=n\n if not n:\n break\nprint(\"The total sum until the user input 0 is: \"+str(total_sum))\n\nOutput:-\nSame as above\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074620898_python.txt |
Q:
UserWarning: no type annotations present -- not typechecking test... (def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def])
Context
Suppose one creates a test object, and during its initialization, one also creates some test object properties, like shown below:
class Test_mdsa(unittest.TestCase):
"""Tests whether MDSA algorithm specification detects invalid
specifications."""
# Initialize test object
@typechecked
def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def]
super().__init__(*args, **kwargs)
self.mdsa = MDSA(list(range(0, 4, 1)))
self.mdsa_configs = get_algo_configs(self.mdsa.__dict__)
verify_algo_configs("MDSA", self.mdsa_configs)
Warning message
It shows the following warning (upon running python -m pytest -k the_test):
./../anaconda/envs/snncompare/lib/python3.10/site-packages/typeguard/__init__.py:1016
/home/name/anaconda/envs/snncompare/lib/python3.10/site-packages/typeguard/__init__.py:1016: UserWarning: no type annotations present -- not typechecking test_mdsa_propagation.Test_mdsa.__init__
warn('no type annotations present -- not typechecking {}'.format(function_name(func)))
Question
How can one alleviate this warning (without suppressing it), give the correct types, or tell the typechecker to ignore the *args, **kwargs in the initialisation?
Approach I
I thought the # type: ignore[no-untyped-def] was an acceptable way to handle this, however, it does not seem to be picked up by the warning system.
A:
I noticed I should add the -> None return annotation at the end of the __init__() signature, and that removed the warning when I wrote:
def __init__(self, *args, **kwargs) -> None: # type: ignore[no-untyped-def]
super().__init__(*args, **kwargs)
Limitations
However, in that case any additional arguments like:
def __init__(self, some_fish:Fish, *args, **kwargs) -> None: # type: ignore[no-untyped-def]
will also not be typechecked. At the same time, I did not yet find a use case in which additional __init__() arguments are necessary for a test.
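One way to keep such extra arguments type-checked (my own suggestion, not part of the original answer) is to annotate *args and **kwargs explicitly, for example with typing.Any, so the signature is fully annotated and no # type: ignore comment is needed (Fish is the placeholder type from the example above):
from typing import Any

@typechecked
def __init__(self, some_fish: Fish, *args: Any, **kwargs: Any) -> None:
    super().__init__(*args, **kwargs)
    self.some_fish = some_fish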
| UserWarning: no type annotations present -- not typechecking test... (def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def]) | Context
Suppose one creates a test object, and during its initialization, one also creates some test object properties, like shown below:
class Test_mdsa(unittest.TestCase):
"""Tests whether MDSA algorithm specification detects invalid
specifications."""
# Initialize test object
@typechecked
def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def]
super().__init__(*args, **kwargs)
self.mdsa = MDSA(list(range(0, 4, 1)))
self.mdsa_configs = get_algo_configs(self.mdsa.__dict__)
verify_algo_configs("MDSA", self.mdsa_configs)
Warning message
It shows the following warning (upon running python -m pytest -k the_test:
./../anaconda/envs/snncompare/lib/python3.10/site-packages/typeguard/init.py:1016
/home/name/anaconda/envs/snncompare/lib/python3.10/site-packages/typeguard
/init.py:1016: UserWarning: no type annotations present -- not typechecking test_mdsa_propagation.Test_mdsa.init
warn('no type annotations present -- not typechecking {}'.format(function_name(func)))
Question
How can one alleviated this warning (without suppressing the warning), or give the correct types or tell the typechecker to ignore the *arg, **kwargs in the initialisation?
Approach I
I thought the # type: ignore[no-untyped-def] was an acceptable way to handle this, however, it does not seem to be picked up by the warning system.
| [
"I noticed I should add the -> None: at the end of the __init__(), and that removed the warning when I added:\ndef __init__(self, *args, **kwargs) -> None: # type: ignore[no-untyped-def]\n super().__init__(*args, **kwargs)\n\nLimitations\nHowever, in that case any additional arguments like:\ndef __init__(self, some_fish:Fish, *args, **kwargs) -> None: # type: ignore[no-untyped-def]\n\nwill also not be typechecked. At the same time, I did not yet find a use case in which additional __init__() arguments are necessary for a test.\n"
] | [
0
] | [] | [] | [
"keyword_argument",
"python",
"typechecking",
"warnings"
] | stackoverflow_0074654091_keyword_argument_python_typechecking_warnings.txt |
Q:
How can I copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting
I am trying to copy styled pandas dataframes from Jupyter Notebooks to PowerPoint without loss of formatting. I currently just take a screenshot to preserve formatting, but this is not ideal. Does anyone know of a better way? I searched for an extension that might have a screenshot button, but had no luck.
A:
One way seems to be to copy the styled pandas table from jupyter notebook to excel. It will keep a lot of the formatting. Then you can copy it to powerpoint and it will maintain its style.
A:
Using the pandas Styler object you can save directly to Excel. For example, you can save your dataframe with its background-gradient styling straight to an Excel file:
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
then:
>>> df.style.background_gradient(axis=None, low=0.75, high=1.0).to_excel("your_file.xlsx")
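If you would rather skip Excel entirely, another option (my addition, and it assumes the third-party dataframe_image package is installed) is to export the styled table straight to an image and paste that into the slide:
>>> import dataframe_image as dfi
>>> styled = df.style.background_gradient(axis=None, low=0.75, high=1.0)
>>> dfi.export(styled, "styled_table.png")  # renders the styled table to a PNG for PowerPoint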
| How can I copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting | I am trying to copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting. I currently just take a screenshot to preserve formatting, but this is not ideal. Does anyone know of a better way? I search for an extension that maybe has a screenshot button, but no luck.
| [
"One way seems to be to copy the styled pandas table from jupyter notebook to excel. It will keep a lot of the formatting. Then you can copy it to powerpoint and it will maintain its style.\n",
"Using the pandas styler object you can save directly to Excel. For example you can save the excel of your dataframe with the background color directly to Excel\n>>> df = pd.DataFrame(columns=[\"City\", \"Temp (c)\", \"Rain (mm)\", \"Wind (m/s)\"],\n... data=[[\"Stockholm\", 21.6, 5.0, 3.2],\n... [\"Oslo\", 22.4, 13.3, 3.1],\n... [\"Copenhagen\", 24.5, 0.0, 6.7]])\n\n\nthen:\n>>> df.style.background_gradient(axis=None, low=0.75, high=1.0).to_excel(\"your_file.xlsx\")\n\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"jupyter_notebook",
"pandas",
"powerpoint",
"python"
] | stackoverflow_0049222299_dataframe_jupyter_notebook_pandas_powerpoint_python.txt |
Q:
hide chromeDriver console in python
I'm using the Chrome driver in Selenium to open Chrome, log into a router, press some buttons, upload configuration, etc. All the code is written in Python.
Here is the part of the code that obtains the driver:
chrome_options = webdriver.ChromeOptions()
prefs = {"download.default_directory": self.user_local}
chrome_options.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome("chromedriver.exe", chrome_options=chrome_options)
driver.set_window_position(0, 0)
driver.set_window_size(0, 0)
return driver
When I fire up my app, I get a chromedriver.exe console (a black window) followed by a Chrome window, and all my requests are carried out.
My question: is there a way in Python to hide the console window?
(As you can see, I'm also resizing the Chrome window; my preference would be to do things in a way the user won't notice anything happening on screen.)
thanks
Sivan
A:
You will have to edit the Selenium source code to achieve this. I am a noob too, and I don't fully understand the overall consequences of editing source code, but here is what I did to hide the webdriver console window on Windows 7, Python 2.7.
Locate and edit this file as follows:
located at
Lib\site-packages\selenium\webdriver\common\service.py in your Python folder.
Edit the Start() function by adding the creation flags this way: creationflags=CREATE_NO_WINDOW
The edited method will be as below:
def start(self):
"""
Starts the Service.
:Exceptions:
- WebDriverException : Raised either when it can't start the service
or when it can't connect to the service
"""
try:
cmd = [self.path]
cmd.extend(self.command_line_args())
self.process = subprocess.Popen(cmd, env=self.env,
close_fds=platform.system() != 'Windows',
stdout=self.log_file, stderr=self.log_file, creationflags=CREATE_NO_WINDOW)
except TypeError:
raise
You will have to add the relevant imports:
from win32process import CREATE_NO_WINDOW
This should also work for Chrome webdriver as they import the same file to start the webdriver process.
A:
There's a lot of questions relating to this and a lot of various answers. The issue is that using Selenium in a Python process without a console window of its own will cause it to launch its drivers (including the chromedriver) in a new window.
Rather than modifying the Selenium code directly (although this needs to be done eventually) one option you have is to create your own sub-classes for both the Chrome WebDriver and the Service class it uses. The Service class is where Selenium actually calls Popen to launch the service process, e.g. chromedriver.exe (as mentioned in the accepted answer):
import errno
import os
import platform
import subprocess
import sys
import time
import warnings
from selenium.common.exceptions import WebDriverException
from selenium.webdriver.common import utils
from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver
from selenium.webdriver.chrome import service, webdriver, remote_connection
class HiddenChromeService(service.Service):
def start(self):
try:
cmd = [self.path]
cmd.extend(self.command_line_args())
if platform.system() == 'Windows':
info = subprocess.STARTUPINFO()
info.dwFlags = subprocess.STARTF_USESHOWWINDOW
info.wShowWindow = 0 # SW_HIDE (6 == SW_MINIMIZE)
else:
info = None
self.process = subprocess.Popen(
cmd, env=self.env,
close_fds=platform.system() != 'Windows',
startupinfo=info,
stdout=self.log_file,
stderr=self.log_file,
stdin=subprocess.PIPE)
except TypeError:
raise
except OSError as err:
if err.errno == errno.ENOENT:
raise WebDriverException(
"'%s' executable needs to be in PATH. %s" % (
os.path.basename(self.path), self.start_error_message)
)
elif err.errno == errno.EACCES:
raise WebDriverException(
"'%s' executable may have wrong permissions. %s" % (
os.path.basename(self.path), self.start_error_message)
)
else:
raise
except Exception as e:
raise WebDriverException(
"Executable %s must be in path. %s\n%s" % (
os.path.basename(self.path), self.start_error_message,
str(e)))
count = 0
while True:
self.assert_process_still_running()
if self.is_connectable():
break
count += 1
time.sleep(1)
if count == 30:
raise WebDriverException("Can't connect to the Service %s" % (
self.path,))
class HiddenChromeWebDriver(webdriver.WebDriver):
def __init__(self, executable_path="chromedriver", port=0,
options=None, service_args=None,
desired_capabilities=None, service_log_path=None,
chrome_options=None, keep_alive=True):
if chrome_options:
warnings.warn('use options instead of chrome_options',
DeprecationWarning, stacklevel=2)
options = chrome_options
if options is None:
# desired_capabilities stays as passed in
if desired_capabilities is None:
desired_capabilities = self.create_options().to_capabilities()
else:
if desired_capabilities is None:
desired_capabilities = options.to_capabilities()
else:
desired_capabilities.update(options.to_capabilities())
self.service = HiddenChromeService(
executable_path,
port=port,
service_args=service_args,
log_path=service_log_path)
self.service.start()
try:
RemoteWebDriver.__init__(
self,
command_executor=remote_connection.ChromeRemoteConnection(
remote_server_addr=self.service.service_url,
keep_alive=keep_alive),
desired_capabilities=desired_capabilities)
except Exception:
self.quit()
raise
self._is_remote = False
I removed some of the extra comments and doc string goo for brevity. You would then use this custom WebDriver the same way you'd use the official Chrome one in Selenium:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument('headless')
headless_chrome = HiddenChromeWebDriver(chrome_options=options)
headless_chrome.get('http://www.example.com/')
headless_chrome.quit()
Finally, if creating a custom WebDriver is not for you and you don't mind a window flickering and disappearing then you can also use the win32gui library to hide the window after starting up:
# hide chromedriver console on Windows
def enumWindowFunc(hwnd, windowList):
""" win32gui.EnumWindows() callback """
text = win32gui.GetWindowText(hwnd)
className = win32gui.GetClassName(hwnd)
if 'chromedriver' in text.lower() or 'chromedriver' in className.lower():
win32gui.ShowWindow(hwnd, False)
win32gui.EnumWindows(enumWindowFunc, [])
A:
New easy solution! (in Selenium 4)
We no longer need to edit the Selenium library for this. I implemented this feature, and it is now part of the Selenium 4 release.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService # Similar thing for firefox also!
from subprocess import CREATE_NO_WINDOW # This flag will only be available in windows
# Define your own service object with the `CREATE_NO_WINDOW ` flag
# If chromedriver.exe is not in PATH, then use:
# ChromeService('/path/to/chromedriver')
chrome_service = ChromeService('chromedriver')
chrome_service.creationflags = CREATE_NO_WINDOW
driver = webdriver.Chrome(service=chrome_service)
Now the command-prompt window does not open up. This is especially useful when you have a GUI desktop app that is responsible for opening the Selenium browser.
Also, note that we only need to do this on Windows.
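As a small sketch of my own (not from the original answer): since CREATE_NO_WINDOW only exists in subprocess on Windows, you can guard it so the same script still runs on other platforms:
import sys
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService

chrome_service = ChromeService('chromedriver')
if sys.platform == 'win32':
    # CREATE_NO_WINDOW is only defined (and only needed) on Windows
    from subprocess import CREATE_NO_WINDOW
    chrome_service.creationflags = CREATE_NO_WINDOW

driver = webdriver.Chrome(service=chrome_service)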
A:
While launching the Chrome browser with the given sample code, I am not seeing the chromedriver console.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from time import sleep
options = webdriver.ChromeOptions()
prefs = {"download.default_directory": r"C:\New_Download"}
options.add_experimental_option("prefs", prefs)
print(options.experimental_options)
chromeDriverPath = r'C:\drivers\chromedriver.exe'
driver = webdriver.Chrome(chromeDriverPath,chrome_options=options)
driver.get("http://google.com")
driver.set_window_position(0, 0)
driver.set_window_size(0, 0)
A:
I found the solution to hide the console window when running Chromedriver:
I also got an error when importing CREATE_NO_WINDOW from win32process.
To import it without an error, find the service.py file in the Selenium sources (as in Josh O'Brien's reply in this thread). Then, instead of importing like this:
from win32process import CREATE_NO_WINDOW
you should instead import it from subprocess:
from subprocess import CREATE_NO_WINDOW
And then just add creationflags=CREATE_NO_WINDOW to the code in service.py like this (again as in Josh O'Brien's reply):
self.process = subprocess.Popen(cmd, env=self.env,
close_fds=platform.system() != 'Windows',
stdout=self.log_file,
stderr=self.log_file,
stdin=PIPE, creationflags=CREATE_NO_WINDOW)
And it works nicely for me, also when I used Pyinstaller to compile it to an EXE file.
A:
SELENIUM >4 Solution for Pyinstaller
Building from the answer by @Ali Sajjad
If you are using PyInstaller to bundle Python code with a GUI (like tkinter) and still want to hide the chromedriver console that pops up, be sure to include the relevant path here...
chrome_service = ChromeService('chromedriver')
chrome_service.creationflags = CREATE_NO_WINDOW
The Service object from Selenium takes in the path of the chromedriver, so the line above should be replaced with the following:
chrome_service = ChromeService(get_path('relative/path/to/chromedriver'))
Where get_path is a path helper that lets the script find the bundled files.
An example of such a resource-path function that works for me:
def get_path(relative_path):
""" Get the absolute path to the resource, works for dev and for PyInstaller """
try:
# PyInstaller creates a temp folder and stores path in _MEIPASS
base_path = sys._MEIPASS
except Exception:
base_path = os.path.abspath(".")
return os.path.join(base_path, relative_path)
Now, creating an instance would be:
browser = webdriver.Chrome(service=chrome_service, options=options, executable_path=get_path('relative/path/to/chromedriver.exe'))
This should work with pyinstaller.
A:
For me, the following solution did NOT work.
service = Service(EdgeChromiumDriverManager().install())
service.creationflags = CREATE_NO_WINDOW
driver = webdriver.Edge(options=self.options, service=service)
However, using selenium version 4.5.0 instead of 4.7.0 works like a charm.
Also see Unable to hide Chromedriver console with CREATE_NO_WINDOW
| hide chromeDriver console in python | I'm using chrome driver in Selenium to open chrome , log into a router, press some buttons ,upload configuration etc. all code is written in Python.
here is the part of the code to obtain the driver:
chrome_options = webdriver.ChromeOptions()
prefs = {"download.default_directory": self.user_local}
chrome_options.add_experimental_option("prefs", prefs)
chrome_options.experimental_options.
driver = webdriver.Chrome("chromedriver.exe", chrome_options=chrome_options)
driver.set_window_position(0, 0)
driver.set_window_size(0, 0)
return driver
when i fire up my app, i get a chromedriver.exe console (a black window) followed by a chrome window opened and all my requests are done.
My question: is there a way in python to hide the console window ?
(as you can see i'm also re-sizing the chrome window ,my preference would be doing things in a way the user wont notice anything happening on screen)
thanks
Sivan
| [
"You will have to edit Selenium Source code to achieve this. I am a noob too, and I dont fully understand the overall consequences of editing source code but here is what I did to achieve hiding the webdriver console window on Windows 7, Python 2.7.\nLocate and edit this file as follows:\nlocated at \nLib\\site-packages\\selenium\\webdriver\\common\\service.py in your Python folder.\nEdit the Start() function by adding the creation flags this way: creationflags=CREATE_NO_WINDOW\nThe edited method will be as below:\ndef start(self):\n \"\"\"\n Starts the Service.\n\n :Exceptions:\n - WebDriverException : Raised either when it can't start the service\n or when it can't connect to the service\n \"\"\"\n try:\n cmd = [self.path]\n cmd.extend(self.command_line_args())\n self.process = subprocess.Popen(cmd, env=self.env,\n close_fds=platform.system() != 'Windows',\n stdout=self.log_file, stderr=self.log_file, creationflags=CREATE_NO_WINDOW)\n except TypeError:\n raise\n\nYou will have to add the relevant imports:\nfrom win32process import CREATE_NO_WINDOW\n\nThis should also work for Chrome webdriver as they import the same file to start the webdriver process.\n",
"There's a lot of questions relating to this and a lot of various answers. The issue is that using Selenium in a Python process without a console window of its own will cause it to launch its drivers (including the chromedriver) in a new window.\nRather than modifying the Selenium code directly (although this needs to be done eventually) one option you have is to create your own sub-classes for both the Chrome WebDriver and the Service class it uses. The Service class is where Selenium actually calls Popen to launch the service process, e.g. chromedriver.exe (as mentioned in the accepted answer):\nimport errno\nimport os\nimport platform\nimport subprocess\nimport sys\nimport time\nimport warnings\n\nfrom selenium.common.exceptions import WebDriverException\nfrom selenium.webdriver.common import utils\nfrom selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver\nfrom selenium.webdriver.chrome import service, webdriver, remote_connection\n\nclass HiddenChromeService(service.Service):\n\n def start(self):\n try:\n cmd = [self.path]\n cmd.extend(self.command_line_args())\n\n if platform.system() == 'Windows':\n info = subprocess.STARTUPINFO()\n info.dwFlags = subprocess.STARTF_USESHOWWINDOW\n info.wShowWindow = 0 # SW_HIDE (6 == SW_MINIMIZE)\n else:\n info = None\n\n self.process = subprocess.Popen(\n cmd, env=self.env,\n close_fds=platform.system() != 'Windows',\n startupinfo=info,\n stdout=self.log_file,\n stderr=self.log_file,\n stdin=subprocess.PIPE)\n except TypeError:\n raise\n except OSError as err:\n if err.errno == errno.ENOENT:\n raise WebDriverException(\n \"'%s' executable needs to be in PATH. %s\" % (\n os.path.basename(self.path), self.start_error_message)\n )\n elif err.errno == errno.EACCES:\n raise WebDriverException(\n \"'%s' executable may have wrong permissions. %s\" % (\n os.path.basename(self.path), self.start_error_message)\n )\n else:\n raise\n except Exception as e:\n raise WebDriverException(\n \"Executable %s must be in path. %s\\n%s\" % (\n os.path.basename(self.path), self.start_error_message,\n str(e)))\n count = 0\n while True:\n self.assert_process_still_running()\n if self.is_connectable():\n break\n count += 1\n time.sleep(1)\n if count == 30:\n raise WebDriverException(\"Can't connect to the Service %s\" % (\n self.path,))\n\n\nclass HiddenChromeWebDriver(webdriver.WebDriver):\n def __init__(self, executable_path=\"chromedriver\", port=0,\n options=None, service_args=None,\n desired_capabilities=None, service_log_path=None,\n chrome_options=None, keep_alive=True):\n if chrome_options:\n warnings.warn('use options instead of chrome_options',\n DeprecationWarning, stacklevel=2)\n options = chrome_options\n\n if options is None:\n # desired_capabilities stays as passed in\n if desired_capabilities is None:\n desired_capabilities = self.create_options().to_capabilities()\n else:\n if desired_capabilities is None:\n desired_capabilities = options.to_capabilities()\n else:\n desired_capabilities.update(options.to_capabilities())\n\n self.service = HiddenChromeService(\n executable_path,\n port=port,\n service_args=service_args,\n log_path=service_log_path)\n self.service.start()\n\n try:\n RemoteWebDriver.__init__(\n self,\n command_executor=remote_connection.ChromeRemoteConnection(\n remote_server_addr=self.service.service_url,\n keep_alive=keep_alive),\n desired_capabilities=desired_capabilities)\n except Exception:\n self.quit()\n raise\n self._is_remote = False\n\nI removed some of the extra comments and doc string goo for brevity. 
You would then use this custom WebDriver the same way you'd use the official Chrome one in Selenium:\nfrom selenium import webdriver\noptions = webdriver.ChromeOptions()\noptions.add_argument('headless')\nheadless_chrome = HiddenChromeWebDriver(chrome_options=options)\n\nheadless_chrome.get('http://www.example.com/')\n\nheadless_chrome.quit()\n\nFinally, if creating a custom WebDriver is not for you and you don't mind a window flickering and disappearing then you can also use the win32gui library to hide the window after starting up:\n# hide chromedriver console on Windows\ndef enumWindowFunc(hwnd, windowList):\n \"\"\" win32gui.EnumWindows() callback \"\"\"\n text = win32gui.GetWindowText(hwnd)\n className = win32gui.GetClassName(hwnd)\n if 'chromedriver' in text.lower() or 'chromedriver' in className.lower():\n win32gui.ShowWindow(hwnd, False)\nwin32gui.EnumWindows(enumWindowFunc, [])\n\n",
"New easy solution! (in selenium4)\nWe no longer need to edit the selenium library for this. I implemented this feature, which is now part of selenium4 release.\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service as ChromeService # Similar thing for firefox also!\nfrom subprocess import CREATE_NO_WINDOW # This flag will only be available in windows\n\n# Define your own service object with the `CREATE_NO_WINDOW ` flag\n# If chromedriver.exe is not in PATH, then use:\n# ChromeService('/path/to/chromedriver')\nchrome_service = ChromeService('chromedriver')\nchrome_service.creationflags = CREATE_NO_WINDOW\n\ndriver = webdriver.Chrome(service=chrome_service)\n\nNow, command-prompt window does not open up. This is especially useful when you have a GUI desktop app which is responsible for opening selenium browser.\nAlso, note that we only need to do this in Windows.\n",
"While launching chrome browser not seeing chromedriver console with the given sample code.\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options \nfrom time import sleep\noptions = webdriver.ChromeOptions()\nprefs = {\"download.default_directory\": r\"C:\\New_Download\"}\noptions.add_experimental_option(\"prefs\", prefs)\nprint(options.experimental_options)\n\nchromeDriverPath = r'C:\\drivers\\chromedriver.exe'\ndriver = webdriver.Chrome(chromeDriverPath,chrome_options=options)\ndriver.get(\"http://google.com\")\ndriver.set_window_position(0, 0)\ndriver.set_window_size(0, 0)\n\n",
"I found the solution to hide the console window when running Chromedriver:\nI also got an error when importing CREATE_NO_WINDOW from win32process.\nTo import it without error find the service.py file in Selenium files (like in Josh O'Brien's reply under this thread). Then instead of importing like this:\nfrom win32process import CREATE_NO_WINDOW\n\nyou should instead import it from subprocess:\nfrom subprocess import CREATE_NO_WINDOW\n\nAnd then just add creationflags=CREATE_NO_WINDOW in the service.py file code part like this (also like in Josh O'Brien's reply):\n self.process = subprocess.Popen(cmd, env=self.env,\n close_fds=platform.system() != 'Windows',\n stdout=self.log_file,\n stderr=self.log_file,\n stdin=PIPE, creationflags=CREATE_NO_WINDOW)\n\nAnd it works nicely for me, also when I used Pyinstaller to compile it to an EXE file.\n",
"SELENIUM >4 Solution for Pyinstaller\nBuilding from the answer by @Ali Sajjad\nIf you are using pyinstaller to bundle python code with GUI (like tkinter) and still want to hide the chromedriver console that pops out. Be sure to include the relevant path here...\n chrome_service = ChromeService('chromedriver')\n chrome_service.creationflags = CREATE_NO_WINDOW\n\nThe Service object from selenium take in the path of the chromedriver and thus should be replaced with the following..\n chrome_service = ChromeService(get_path('relative/path/to/chromedriver'))\n\nWhere get_path is a path connector that helps the scripts to find the files.\nExample of the resource_path function that works for me...\n def get_path(relative_path):\n \"\"\" Get the absolute path to the resource, works for dev and for PyInstaller \"\"\"\n try:\n # PyInstaller creates a temp folder and stores path in _MEIPASS\n base_path = sys._MEIPASS\n except Exception:\n base_path = os.path.abspath(\".\")\n return os.path.join(base_path, relative_path)\n\nNow, creating an instance would be:\n browser = webdriver.Chrome(service=chrome_service, options=options, executable_path=get_path('relative/path/to/chromedriver.exe'))\n\nThis should work with pyinstaller.\n",
"For me, the following solution did NOT work.\nservice = Service(EdgeChromiumDriverManager().install())\nservice.creationflags = CREATE_NO_WINDOW\ndriver = webdriver.Edge(options=self.options, service=service)\n\nHowever, using selenium version 4.5.0 instead of 4.7.0 works like a charm.\nAlso see Unable to hide Chromedriver console with CREATE_NO_WINDOW\n"
] | [
30,
13,
8,
1,
1,
0,
0
] | [] | [] | [
"python",
"selenium",
"selenium_chromedriver"
] | stackoverflow_0033983860_python_selenium_selenium_chromedriver.txt |
Q:
How to change dictionary format in python?
I need to transform a dict of { name : department } into { department : [ names ] } and print all the names after the transformation, but it prints only one name per department. What is wrong here?
I need to use the dictionary comprehension method.
I tried this, but it doesn't work as expected:
orig_dict = {'Tom': 'HR', 'Ted': 'IT', 'Ken': \
'Marketing', 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'IT', 'Margo': 'HR'}
new_dict = {value: [key] for key, value in orig_dict.items()}
it_names = new_dict['IT']
print(it_names)
A:
It appears that you want the original values to become keys in a new dictionary, with the original keys collected into lists. If so, then:
orig_dict = {'Tom': 'HR', 'Ted': 'IT', 'Ken': 'Marketing', 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'IT', 'Margo': 'HR'}
new_dict = {}
for k, v in orig_dict.items():
new_dict.setdefault(v, []).append(k)
print(new_dict)
Output:
{'HR': ['Tom', 'Margo'], 'IT': ['Ted', 'Jesica'], 'Marketing': ['Ken', 'Jason']}
(Note that a dict literal cannot hold the key 'Margo' twice, so the later 'Margo': 'HR' entry wins; that is why Margo does not show up under 'IT'.)
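Since the question explicitly asks for a dict comprehension, here is a comprehension version of the same grouping (my own sketch, not from the answer above; it scans the dict once per department, which is fine for small data):
orig_dict = {'Tom': 'HR', 'Ted': 'IT', 'Ken': 'Marketing', 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'HR'}
new_dict = {dept: [name for name, d in orig_dict.items() if d == dept]
            for dept in set(orig_dict.values())}
print(new_dict['IT'])  # ['Ted', 'Jesica']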
| How to change dictionary format in python? | I need to transform dict { name : department } to { department : [ name ] } and print all names after transformation, but it prints me only one, what is wrong here?
I need to use dictionary comprehension method.
Tried this, but it doesn't work as expected:
orig_dict = {'Tom': 'HR', 'Ted': 'IT', 'Ken': \
'Marketing', 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'IT', 'Margo': 'HR'}
new_dict = {value: [key] for key, value in orig_dict.items()}
it_names = new_dict['IT']
print(it_names)
| [
"It appears that you want values to become keys in a new dictionary and the values to be a list. If so, then:\norig_dict = {'Tom': 'HR', 'Ted': 'IT', 'Ken': 'Marketing', 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'IT', 'Margo': 'HR'}\nnew_dict = {}\nfor k, v in orig_dict.items():\n new_dict.setdefault(v, []).append(k)\nprint(new_dict)\n\nOutput:\n{'HR': ['Tom', 'Margo'], 'IT': ['Ted', 'Jesica'], 'Marketing': ['Ken', 'Jason']}\n\n"
] | [
1
] | [
"Full Code\nold_dict = {'Tom': 'HR', 'Ted': 'IT', 'Ken': 'Marketing',\n 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'IT', 'Margo': 'HR'}\n# Printing original dictionary\nprint(\"Original dictionary is : \")\nprint(old_dict)\n\nprint()\nnew_dict = {}\nfor key, value in old_dict.items():\n if value in new_dict:\n new_dict[value].append(key)\n else:\n new_dict[value] = [key]\n\n# Printing new dictionary after swapping\n# keys and values\nprint(\"Dictionary after swapping is : \")\nprint(\"keys: values\")\nfor i in new_dict:\n print(i, \" :\", new_dict[i])\n\nit_names = new_dict['IT'] \nprint(it_names)\n\nOutput\nOriginal dictionary is : \n{'Tom': 'HR', 'Ted': 'IT', 'Ken': 'Marketing', 'Jason': 'Marketing', 'Jesica': 'IT', 'Margo': 'HR'} \n\nDictionary after swapping is :\nkeys: values\nHR : ['Tom', 'Margo']\nIT : ['Ted', 'Jesica']\nMarketing : ['Ken', 'Jason']\n['Ted', 'Jesica']\n\n"
] | [
-1
] | [
"dictionary",
"python"
] | stackoverflow_0074653996_dictionary_python.txt |
Q:
Formatting python with Black in VSCode is causing arrays to expand vertically, any way to compress them?
I'm using Black to format Python in VS Code, and it's making all my arrays super tall instead of wide. I've set the max line length to 150 for pep8, flake8 and Black (but I'm new to Black, and not sure whether it uses either of those settings):
"python.formatting.blackArgs": ["--line-length", "150"],
Here's how it looks:
expected = make_dict_of_rows(
[
10,
11,
15,
24,
26,
30,
32,
35,
36,
37,
50,
53,
54,
74,
76,
81,
114,
115,
118,
119,
120,
123,
],
)
is what I get instead of the much more concise:
expected = make_dict_of_rows(
[
10, 11, 15, 24, 26, 30, 32, 35, 36, 37, 50, 53, 54, 74, 76, 81, 114, 115, 118, 119, 120, 123,
],
)
(Or, even better, this version with collapsed brackets):
expected = make_dict_of_rows([
10, 11, 15, 24, 26, 30, 32, 35, 36, 37, 50, 53, 54, 74, 76, 81, 114, 115, 118, 119, 120, 123
])
A:
Black will always explode a list into multiple lines if it has a trailing comma. You can remove the trailing comma for black to compress the list. You can also use --skip-magic-trailing-comma:
"python.formatting.blackArgs": ["--line-length", "150", "--skip-magic-trailing-comma"],
A:
Option --skip-magic-trailing-comma seems to be ignored by Black. I've stripped down my settings.json to make sure that nothing else interferes. Still, the problem persists: saving a file with e.g. a long list (see the example from the initial question above) will explode it and add a trailing comma.
settings.json:
"python.defaultInterpreterPath": "<PATH-TO-VENV>",
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true,
},
},
"python.formatting.provider": "black",
"python.formatting.blackArgs": [
"--line-length",
"88",
"--skip-magic-trailing-comma", // seems to be ignored!
],
}
I have black==22.10.0 installed.
What am I missing?
| Formatting python with Black in VSCode is causing arrays to expand vertically, any way to compress them? | I'm using Black to format python in VSCode, and it's making all my arrays super tall instead of wide. I've set max line length to 150 for pep8 and flake8 and black (but I'm new to Black, and not sure if it uses either of those settings):
"python.formatting.blackArgs": ["--line-length", "150"],
Here's how it looks:
expected = make_dict_of_rows(
[
10,
11,
15,
24,
26,
30,
32,
35,
36,
37,
50,
53,
54,
74,
76,
81,
114,
115,
118,
119,
120,
123,
],
)
is what I get instead of the much more concise:
expected = make_dict_of_rows(
[
10, 11, 15, 24, 26, 30, 32, 35, 36, 37, 50, 53, 54, 74, 76, 81, 114, 115, 118, 119, 120, 123,
],
)
(Or even preferable, this would have some collapsed brackets):
expected = make_dict_of_rows([
10, 11, 15, 24, 26, 30, 32, 35, 36, 37, 50, 53, 54, 74, 76, 81, 114, 115, 118, 119, 120, 123
])
| [
"Black will always explode a list into multiple lines if it has a trailing comma. You can remove the trailing comma for black to compress the list. You can also use --skip-magic-trailing-comma:\n\"python.formatting.blackArgs\": [\"--line-length\", \"150\", \"--skip-magic-trailing-comma\"],\n\n",
"Option --skip-magic-trailing-comma seems to be ignored by black. I've stripped down my settings.json to make that nothing else might interfere. Still, the problem persists: Saving a file with e.g. a long list (see example from inital question above) will explode and add a trailing comma.\nsettings.json:\n\"python.defaultInterpreterPath\": \"<PATH-TO-VENV>\",\n \"[python]\": {\n \"editor.formatOnSave\": true,\n \"editor.codeActionsOnSave\": {\n \"source.organizeImports\": true,\n },\n },\n \"python.formatting.provider\": \"black\",\n \"python.formatting.blackArgs\": [\n \"--line-length\",\n \"88\",\n \"--skip-magic-trailing-comma\", // seems to be ignored!\n ],\n}\n\nI'm having black==22.10.0 installed.\nWhat am I missing?\n"
] | [
2,
0
] | [] | [] | [
"python",
"python_black",
"visual_studio_code"
] | stackoverflow_0074323625_python_python_black_visual_studio_code.txt |
Q:
Uncompyle6 convert pyc to py file python 3 (Whole directory)
I have 200 .pyc files in a folder that I need to convert. I am aware of converting .pyc to .py files through uncompyle6 -o . 31.pyc; however, as I have so many .pyc files, this would take a long time. I've found lots of documentation, but not much on bulk converting to .py files. uncompyle6 -o . *.pyc was not supported.
Any idea on how I can achieve this?
A:
Might not be perfect but it worked great for me.
import os
import uncompyle6
your_directory = ''
for dirpath, b, filenames in os.walk(your_directory):
for filename in filenames:
if not filename.endswith('.pyc'):
continue
filepath = dirpath + '/' + filename
original_filename = filename.split('.')[0]
original_filepath = dirpath + '/' + original_filename + '.py'
with open(original_filepath, 'w') as f:
uncompyle6.decompile_file(filepath, f)
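A variant of the same idea using pathlib instead of manual path concatenation (my own untested sketch, assuming the same uncompyle6.decompile_file API used above):
from pathlib import Path
import uncompyle6

for pyc_path in Path('your_directory').rglob('*.pyc'):
    py_path = pyc_path.with_suffix('.py')
    with open(py_path, 'w') as f:
        uncompyle6.decompile_file(str(pyc_path), f)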
A:
This is natively supported by uncompyle6
uncompyle6 -ro <output_directory> <python_directory>
-r tells the tool to recurse into sub directories.
-o tells the tool to output to the given directory.
A:
In operating systems with shell filename expansion, you might be able to use the shell's file expansion ability. For example:
uncompyle6 -o /tmp/unc6 myfiles/*.pyc
If you need something fancier or more control, you could always write some code that does the fancier expansion. Here is the above done in POSIX shell filtering out the single file myfiles/huge.pyc:
cd myfiles
for pyc in *.pyc; do
if [[ $pyc != huge.pyc ]] ; then
    uncompyle6 -o /tmp/unc $pyc
fi
done
Note: It seems this question was also asked in Issue on output directory while executing commands with windows batch command "FOR /R"
A:
Thank you for the code. I extended it to recurse into nested sub-directories. Save it as uncompile.py in the directory to be converted; running python uncompile.py from a command prompt will convert the .pyc files to .py in the current working directory, with error handling, and if rerun it skips (recovers) files by checking for an existing matching .py file.
import os
import uncompyle6
#Use current working directory
your_directory = os.getcwd()
#function processing current dir
def uncompilepath(mydir):
for dirpath, b, filenames in os.walk(mydir):
for d in b:
folderpath = dirpath + '/' + d
print(folderpath)
#recursive sub dir call
uncompilepath(folderpath)
for filename in filenames:
if not filename.endswith('.pyc'):
continue
filepath = dirpath + '/' + filename
original_filename = filename.split('.')[0]
original_filepath = dirpath + '/' + original_filename + '.py'
#ignore if already uncompiled
if os.path.exists(original_filepath):
continue
with open(original_filepath, 'w') as f:
print(filepath)
#error handling
try:
uncompyle6.decompile_file(filepath, f)
except Exception:
print("Error")
uncompilepath(your_directory)
| Uncompyle6 convert pyc to py file python 3 (Whole directory) | I have 200 pyc files I need to convert in a folder. I am aware of converting pyc to py files through uncompyle6 -o . 31.pyc however as I have so many pyc files, this would take a long period of time. I've founds lots of documentation but not much in bulk converting to py files. uncompyle6 -o . *.pyc was not supported.
Any idea on how I can achieve this?
| [
"Might not be perfect but it worked great for me. \nimport os\nimport uncompyle6\nyour_directory = ''\nfor dirpath, b, filenames in os.walk(your_directory):\n for filename in filenames:\n if not filename.endswith('.pyc'):\n continue\n\n filepath = dirpath + '/' + filename\n original_filename = filename.split('.')[0]\n original_filepath = dirpath + '/' + original_filename + '.py'\n with open(original_filepath, 'w') as f:\n uncompyle6.decompile_file(filepath, f)\n\n",
"This is natively supported by uncompyle6\nuncompyle6 -ro <output_directory> <python_directory>\n\n\n-r tells the tool to recurse into sub directories.\n-o tells the tool to output to the given directory.\n\n",
"In operating systems with shell filename expansion, you might be able to use the shell's file expansion ability. For example:\nuncompyle6 -o /tmp/unc6 myfiles/*.pyc\n\nIf you need something fancier or more control, you could always write some code that does the fancier expansion. Here is the above done in POSIX shell filtering out the single file myfiles/huge.pyc:\ncd myfiles\nfor pyc in *.pyc; do \n if [[ $pyc != huge.pyc ]] ; then \n uncompyle -o /tmp/unc $pyc\n fi \ndone\n\nNote: It seems this question was also asked in Issue on output directory while executing commands with windows batch command \"FOR /R\"\n",
" thank you for the code, extending it to recursively call, nested sub directories, save as uncompile.py, in the directory to be converted, to run in command prompt type \"python uncomple.py\" would convert pyc to py in current working directory, with error handling and if rerun skips (recovery) files checking existing py extension match\n\nimport os\nimport uncompyle6\n\n#Use current working directory\nyour_directory = os.getcwd()\n\n#function processing current dir\ndef uncompilepath(mydir):\n for dirpath, b, filenames in os.walk(mydir):\n for d in b:\n folderpath = dirpath + '/' + d\n print(folderpath)\n\n #recursive sub dir call\n uncompilepath(folderpath)\n for filename in filenames:\n if not filename.endswith('.pyc'):\n continue\n filepath = dirpath + '/' + filename\n original_filename = filename.split('.')[0]\n original_filepath = dirpath + '/' + original_filename + '.py'\n\n #ignore if already uncompiled\n if os.path.exists(original_filepath):\n continue\n with open(original_filepath, 'w') as f:\n print(filepath)\n \n #error handling\n try:\n uncompyle6.decompile_file(filepath, f)\n except Exception:\n print(\"Error\")\n \nuncompilepath(your_directory)\n\n"
] | [
6,
5,
4,
0
] | [] | [] | [
"python",
"uncompyle6"
] | stackoverflow_0047397711_python_uncompyle6.txt |
Q:
filter keys out from list of dicts
Say I have a list of dicts:
ld = [{'a':1,'b':2,'c':9},{'a':1,'b':2,'c':10}]
And a list to filter the keys out:
l = ['a','c']
I want to remove keys a and c from ld:
Try:
result = [d for d in ld for k in d if k in l]
Desired Result:
[{'b':2},{'b':2}]
A:
Your outer container needs to be a list: use a (one-dimensional) list comprehension.
Your inner container needs to be a dict: use a dict comprehension.
Right now you're using a 2D list comprehension.
The filtering part should happen at the dict level:
ld = [{'a': 1, 'b': 2, 'c': 9}, {'a': 1, 'b': 2, 'c': 10}]
l = ['a', 'c']
result = [{k: v for k, v in subdict.items() if k not in l}
for subdict in ld]
print(result)
A:
Your code is overly compressed and that makes it hard to understand and easy for bugs to hide.
Here you want to "filter" a list of dicts and remove, from each dict, the entries whose keys appear in the filter list. So my first suggestion is to rename your variables:
data = [{'a': 1, 'b': 2, 'c': 9}, {'a': 1, 'b': 2, 'c': 10}]
excludes = ['a', 'c']
Now you need to unpack a list, and then the inner dict. Currently you are trying to use the list iteration on both. You want items() to iterate over (key, value) pairs.
Further, you are filtering on the outer list level, while the keys to filter live in the inner dict.
Here is my solution
result = []
for entry in data:
# entry is now one of the dicts
result.append({key:value for key, value in entry.items() if key not in excludes})
You can compress this again into a single line if you really want to. But in my opinion (when not playing code golf), readability beats compactness.
A:
I think the simple, explicit loop is clearer when there are many iterations:
ld = [{'a':1,'b':2,'c':9},{'a':1,'b':2,'c':10}]
l = ['a','c']
for d in ld:
for r in l:
if r in d:
del d[r]
A:
An alternative that keeps, for each dict in ld, only the keys that are not in l (note it yields a separate single-key dict per surviving key, which matches the desired output here only because a single key remains):
result = [{k:d[k]} for d in ld for k in d if k not in l]
| filter keys out from list of dicts | Say I have a list of dict:
ld = [{'a':1,'b':2,'c':9},{'a':1,'b':2,'c':10}]
And a list to filter the keys out:
l = ['a','c']
Want to remove key a and c from ld:
Try:
result = [d for d in ld for k in d if k in l]
Desired Result:
[{'b':2},{'b':2}]
| [
"\nYour outer container needs to be a list : use a (1 dimension) list comprehension\nYour inner container needs to be a dict : ues a dict comprehension\n\nFor you now you're using a 2d list comprehension\n\nThe filtering part should be at the dict level\nld = [{'a': 1, 'b': 2, 'c': 9}, {'a': 1, 'b': 2, 'c': 10}]\nl = ['a', 'c']\nresult = [{k: v for k, v in subdict.items() if k not in l}\n for subdict in ld]\nprint(result)\n\n",
"Your code is overly compressed and that makes it hard to understand and easy for bugs to hide.\nHere you want to \"filter\" a list of dicts and remove entries from each dicht that is not in the filter list. So my first suggestion is to rename your variables:\ndata = [{'a': 1, 'b': 2, 'c': 9}, {'a': 1, 'b': 2, 'c': 10}]\nexcludes = ['a', 'c']\n\nNow you need to unpack a list, and then the inner dict. Currently you are trying to use the list iteration on both. You want items() to iterate over (key, value) pairs.\nFurther, you are filtering on the outer list level, while the keys to filter live in the inner dict.\nHere is my solution\nresult = []\nfor entry in data:\n # entry is now one of the dicts\n result.append({key:value for key, value in entry.items() if key not in excludes})\n\nYou can compress this again in a single line, if you really want to. But in my opinion (while not playing code golf) readability beats compressedness.\n",
"I think the simple writing method is clearer when there are many cycles\nld = [{'a':1,'b':2,'c':9},{'a':1,'b':2,'c':10}]\n\nl = ['a','c']\nfor d in ld:\n for r in l:\n if r in d:\n del d[r]\n\n",
"Alternative filtering whether keys in ld are not in l:\nresult = [{k:d[k]} for d in ld for k in d if k not in l]\n\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074652918_python.txt |
Q:
XOR Pair Of Elements In A List
I have a list of integer pairs
[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 0), (1, 1)]
I want to take each pair, (0,0) then (0,1) and so on, XOR the two numbers in it, and convert the result to binary.
Example: the (0,2) pair.
The character '0' is 00110000 in ASCII and '2' is 00110010, so the XOR of the two is 00000010.
I tried this, but got nothing useful:
import functools
test_list = [(0,0),(0,1),(0,2)]
for i in enumerate(test_list):
res = functools.reduce(lambda x, y: x ^ y, test_list)
print(str(res))
A:
It's quite a weird thing you're trying to do, but I believe you want:
lst = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 0), (1, 1)]
def to_ascii_code(c):
return str(c).encode('ascii')[0]
out = [bin(to_ascii_code(a)^to_ascii_code(b)) for a,b in lst]
Output:
['0b0', '0b1', '0b10', '0b11', '0b100', '0b101', '0b110', '0b111', '0b1000', '0b1001', '0b1', '0b0']
Different format:
out = ['{0:08b}'.format(to_ascii_code(a)^to_ascii_code(b)) for a,b in lst]
Output:
['00000000', '00000001', '00000010', '00000011', '00000100', '00000101', '00000110', '00000111', '00001000', '00001001', '00000001', '00000000']
A:
Not entirely sure I understand the objective here. How about:
lst = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 0), (1, 1)]
nlst = [f'{(48+x)^(48+y):08b}' for x,y in lst]
print(nlst)
Output:
['00000000', '00000001', '00000010', '00000011', '00000100', '00000101', '00000110', '00000111', '00001000', '00001001', '00000001', '00000000']
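For completeness, a variation of my own on the same idea that uses ord() on the digit characters, which makes the "XOR the ASCII codes" step explicit:
lst = [(0, 0), (0, 1), (0, 2), (0, 3)]
out = [format(ord(str(x)) ^ ord(str(y)), '08b') for x, y in lst]
print(out)  # ['00000000', '00000001', '00000010', '00000011']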
| XOR Pair Of Elements In A List | I have a list of integer pairs
[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 0), (1, 1)]
I want to take each element (0,0) then (0,1), etc. pair, to XOR the two numbers between them and the result converted to binary.
Example: the (0,2) pair
0 decimal equals to 00110000 and 2 decimal equals to 00110010. The XOR of two will be 00000010.
I tried this, but nothing
import functools
test_list = [(0,0),(0,1),(0,2)]
for i in enumerate(test_list):
res = functools.reduce(lambda x, y: x ^ y, test_list)
print(str(res))
| [
"It's a quite weird thing you try to do, but I believe you want:\nlst = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 0), (1, 1)]\n\ndef to_ascii_code(c):\n return str(c).encode('ascii')[0]\n\nout = [bin(to_ascii_code(a)^to_ascii_code(b)) for a,b in lst]\n\nOutput:\n['0b0', '0b1', '0b10', '0b11', '0b100', '0b101', '0b110', '0b111', '0b1000', '0b1001', '0b1', '0b0']\n\nDifferent format:\nout = ['{0:08b}'.format(to_ascii_code(a)^to_ascii_code(b)) for a,b in lst]\n\nOutput:\n['00000000', '00000001', '00000010', '00000011', '00000100', '00000101', '00000110', '00000111', '00001000', '00001001', '00000001', '00000000']\n\n",
"Not entirely sure I understand the objective here. How about:\nlst = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9), (1, 0), (1, 1)]\nnlst = [f'{(48+x)^(48+y):08b}' for x,y in lst]\nprint(nlst)\n\nOutput:\n['00000000', '00000001', '00000010', '00000011', '00000100', '00000101', '00000110', '00000111', '00001000', '00001001', '00000001', '00000000']\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074653804_python.txt |
Q:
how do i make it print the multiplication table in the created file?
I've been trying to put a print statement inside the loop to no avail.
def individualizing_file(number: int) -> None:
increasing_number: int = number
with open(f"file_{increasing_number}.txt", "w") as f:
f.write(f"Multiplication table for + {n}")
for _ in range(10):
#print(n*2)
increasing_number += n
if __name__ == '__main__':
n = int(input("Enter a number between 1-9: "))
individualizing_file(n)
I tried to put a print statement in the for loop, but it prints nothing inside the file_n file.
A:
You can use print(file=f):
def individualizing_file(number: int) -> None:
    with open(f"file_{number}.txt", "w") as f:
        print(f"Multiplication table for {number}", file=f)
        for multiple in range(10):
            print(number * multiple, file=f)
Unlike using f.write, print automatically adds a newline at the end. It also accepts any type and not just strings.
For more print options, see the documentation for print, for example if you want to change the ending character or separator.
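If you would rather keep using f.write, the equivalent is just a matter of adding the newlines yourself (a sketch reusing number and the open file f from the function above):
f.write(f"Multiplication table for {number}\n")
for multiple in range(10):
    f.write(f"{number * multiple}\n")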
| how do i make it print the multiplication table in the created file? | I've been trying to put a print statement inside the loop to no avail.
def individualizing_file(number: int) -> None:
increasing_number: int = number
with open(f"file_{increasing_number}.txt", "w") as f:
f.write(f"Multiplication table for + {n}")
for _ in range(10):
#print(n*2)
increasing_number += n
if __name__ == '__main__':
n = int(input("Enter a number between 1-9: "))
individualizing_file(n)
Tried to put a print statement in the for loop but it prints nothing inside the file_n file.
| [
"You can use print(file=f):\ndef individualizing_file(number: int) -> None:\n with open(f\"file_{number}.txt\", \"w\") as f:\n f.write(f\"Multiplication table for + {n}\")\n for multiple in range(10):\n print(number * multiple, file=f)\n\nUnlike using f.write, print automatically adds a newline at the end. It also accepts any type and not just strings.\nFor reference to more print options see: the documentation for print. For example if you want to change the ending character or separator.\n"
] | [
-1
] | [] | [] | [
"python"
] | stackoverflow_0074654233_python.txt |
Q:
How to test a FastAPI api endpoint that consumes images?
I am using pytest to test a FastAPI endpoint that takes as input an image in binary format, as in:
@app.post("/analyse")
async def analyse(file: bytes = File(...)):
image = Image.open(io.BytesIO(file)).convert("RGB")
stats = process_image(image)
return stats
After starting the server, I can manually test the endpoint successfully by running a call with requests
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder
url = "http://127.0.0.1:8000/analyse"
filename = "./example.jpg"
m = MultipartEncoder(
fields={'file': ('filename', open(filename, 'rb'), 'image/jpeg')}
)
r = requests.post(url, data=m, headers={'Content-Type': m.content_type}, timeout = 8000)
assert r.status_code == 200
However, setting up tests in a function of the form:
from fastapi.testclient import TestClient
from requests_toolbelt.multipart.encoder import MultipartEncoder
from app.server import app
client = TestClient(app)
def test_image_analysis():
filename = "example.jpg"
m = MultipartEncoder(
fields={'file': ('filename', open(filename, 'rb'), 'image/jpeg')}
)
response = client.post("/analyse",
data=m,
headers={"Content-Type": "multipart/form-data"}
)
assert response.status_code == 200
When running the tests with python -m pytest, I get back:
> assert response.status_code == 200
E assert 400 == 200
E + where 400 = <Response [400]>.status_code
tests\test_server.py:22: AssertionError
-------------------------------------------------------- Captured log call ---------------------------------------------------------
ERROR fastapi:routing.py:133 Error getting request body: can't concat NoneType to bytes
===================================================== short test summary info ======================================================
FAILED tests/test_server.py::test_image_analysis - assert 400 == 200
What am I doing wrong?
What's the right way to write a test function test_image_analysis() using an image file?
A:
You see different behavior because requests and TestClient are not exactly the same in every aspect, as TestClient wraps requests. To dig deeper, refer to the source code (FastAPI uses TestClient from the starlette library, FYI):
https://github.com/encode/starlette/blob/master/starlette/testclient.py
To solve this, you can get rid of MultipartEncoder, because requests can accept file bytes and encode them in form-data format itself, with something like:
# change it
r = requests.post(url, data=m, headers={'Content-Type': m.content_type}, timeout = 8000)
# to
r = requests.post(url, files={"file": ("filename", open(filename, "rb"), "image/jpeg")})
and modifying the FastAPI test code:
# change
response = client.post("/analyse",
data=m,
headers={"Content-Type": "multipart/form-data"}
)
# to
response = client.post(
"/analyse", files={"file": ("filename", open(filename, "rb"), "image/jpeg")}
)
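Putting the fix together, the whole failing test could look like this (my own sketch, reusing the app import and the example.jpg file from the question):
from fastapi.testclient import TestClient

from app.server import app

client = TestClient(app)


def test_image_analysis():
    with open("example.jpg", "rb") as image_file:
        response = client.post(
            "/analyse",
            files={"file": ("example.jpg", image_file, "image/jpeg")},
        )
    assert response.status_code == 200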
A:
The code below is working for me.
API structure:
File: api_routers.py
from fastapi import APIRouter, File, UploadFile, Query
router = APIRouter()
@router.post(path="{{API_PATH}}", tags=["Prediction"])
def prediction(id: str, uploadFile: UploadFile):
...
{{CODE}}
return response
Testing Code
File: test_api_router.py
import pytest
import os
from fastapi.testclient import TestClient
import api_routers
client = TestClient(api_routers.router)
def test_prediction(constants):
# Use constants if fixture created
file_path = "{{IMAGE PATH}}"
if os.path.isfile(file_path):
_files = {'uploadFile': open(file_path, 'rb')}
response = client.post('{{API_PATH}}',
params={
"id": {{ID}}
},
files=_files
)
assert response.status_code == 200
else:
pytest.fail("Scratch file does not exists.")
| How to test a FastAPI api endpoint that consumes images? | I am using pytest to test a FastAPI endpoint that gets in input an image in binary format as in
@app.post("/analyse")
async def analyse(file: bytes = File(...)):
image = Image.open(io.BytesIO(file)).convert("RGB")
stats = process_image(image)
return stats
After starting the server, I can manually test the endpoint successfully by running a call with requests
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder
url = "http://127.0.0.1:8000/analyse"
filename = "./example.jpg"
m = MultipartEncoder(
fields={'file': ('filename', open(filename, 'rb'), 'image/jpeg')}
)
r = requests.post(url, data=m, headers={'Content-Type': m.content_type}, timeout = 8000)
assert r.status_code == 200
However, setting up tests in a function of the form:
from fastapi.testclient import TestClient
from requests_toolbelt.multipart.encoder import MultipartEncoder
from app.server import app
client = TestClient(app)
def test_image_analysis():
filename = "example.jpg"
m = MultipartEncoder(
fields={'file': ('filename', open(filename, 'rb'), 'image/jpeg')}
)
response = client.post("/analyse",
data=m,
headers={"Content-Type": "multipart/form-data"}
)
assert response.status_code == 200
when running tests with python -m pytest, that gives me back a
> assert response.status_code == 200
E assert 400 == 200
E + where 400 = <Response [400]>.status_code
tests\test_server.py:22: AssertionError
-------------------------------------------------------- Captured log call ---------------------------------------------------------
ERROR fastapi:routing.py:133 Error getting request body: can't concat NoneType to bytes
===================================================== short test summary info ======================================================
FAILED tests/test_server.py::test_image_analysis - assert 400 == 200
what am I doing wrong?
What's the right way to write a test function test_image_analysis() using an image file?
| [
"You see a different behavior because requests and TestClient are not exactly same in every aspect as TestClient wraps requests. To dig deeper, refer to the source code: (FastAPI is using TestClient from starlette library, FYI)\nhttps://github.com/encode/starlette/blob/master/starlette/testclient.py\nTo solve, you can get rid of MultipartEncoder because requests can accept file bytes and encode it by form-data format, with something like\n# change it\nr = requests.post(url, data=m, headers={'Content-Type': m.content_type}, timeout = 8000)\n\n# to \nr = requests.post(url, files={\"file\": (\"filename\", open(filename, \"rb\"), \"image/jpeg\")})\n\nand modifying the FastAPI test code:\n# change\nresponse = client.post(\"/analyse\",\n data=m,\n headers={\"Content-Type\": \"multipart/form-data\"}\n )\n# to\nresponse = client.post(\n \"/analyse\", files={\"file\": (\"filename\", open(filename, \"rb\"), \"image/jpeg\")}\n)\n\n",
"Below code is working for me:\n** API Structure: **\nFile: api_routers.py\nfrom fastapi import APIRouter, File, UploadFile, Query\nrouter = APIRouter()\[email protected](path=\"{{API_PATH}}\", tags=[\"Prediction\"])\ndef prediction(id: str, uploadFile: UploadFile):\n ...\n {{CODE}}\n return response\n\nTesting Code\nFile: test_api_router.py\nimport pytest\nimport os\nfrom fastapi.testclient import TestClient\nimport.api_routers\n\nclient = TestClient(api_routers.router)\n\ndef test_prediction(constants): \n # Use constants if fixture created\n file_path = \"{{IMAGE PATH}}\"\n if os.path.isfile(file_path):\n _files = {'uploadFile': open(file_path, 'rb')}\n response = client.post('{{API_PATH}}',\n params={\n \"id\": {{ID}}\n },\n files=_files\n )\n assert response.status_code == 200\n else:\n pytest.fail(\"Scratch file does not exists.\")\n\n"
] | [
26,
0
] | [] | [] | [
"fastapi",
"multipart",
"pytest",
"python",
"starlette"
] | stackoverflow_0060783222_fastapi_multipart_pytest_python_starlette.txt |
Q:
Kivy: How to make a checkbox "remember" its state/value?
I am trying to make a login screen with a "remember login" feature, a checkbox that, when toggled, will store all user credentials in a text file to access later. I want the app to remember the value of the checkmark so that when I open it again, the checkmark is in the "on" or "off" position, depending on its previous input. I was thinking of using a text file with a boolean condition stored inside. Is there a way I can do this?
A:
Store the value in an .ini file and read it on launch.
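A minimal sketch of that idea with the standard library's configparser (the file name, section and option names here are my own choices):
import configparser
import os

CONFIG_PATH = "settings.ini"

def save_remember(active: bool) -> None:
    config = configparser.ConfigParser()
    config["login"] = {"remember": str(active)}
    with open(CONFIG_PATH, "w") as f:
        config.write(f)

def load_remember() -> bool:
    if not os.path.exists(CONFIG_PATH):
        return False
    config = configparser.ConfigParser()
    config.read(CONFIG_PATH)
    return config.getboolean("login", "remember", fallback=False)
On startup you would set checkbox.active = load_remember(), and bind the checkbox's active property, e.g. checkbox.bind(active=lambda cb, value: save_remember(value)), so changes get persisted.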
A:
You can create a file and write the login parameters into it (store the most recent login there, and clear the file if somebody logs out). When the app starts, read this file and put each login parameter back where you check it. It works for me.
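If you prefer a Kivy-native store, its JsonStore can hold the same information; here is a rough sketch with key names of my own choosing (storing passwords in plain text is not great, so consider the system keyring for real credentials):
from kivy.storage.jsonstore import JsonStore

store = JsonStore("credentials.json")

def save_login(username: str, password: str, remember: bool) -> None:
    if remember:
        store.put("last_login", username=username, password=password, remember=True)
    elif store.exists("last_login"):
        store.delete("last_login")

def load_login():
    if store.exists("last_login"):
        entry = store.get("last_login")
        return entry["username"], entry["password"], entry["remember"]
    return "", "", False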
| Kivy: How to make a checkbox "remember" its state/value? | I am trying to make a login screen with a "remember login" feature, a checkbox that, when toggled, will store all user credentials in a text file to access later. I want the app to remember the value of the checkmark so that when I open it again, the checkmark is in the "on" or "off" position, depending on its previous input. I was thinking of using a text file with a boolean condition stored inside. Is there a way I can do this?
| [
"Store the value to a ini file and read it on launch\n",
"You can make file and write here login parameters(you must write here the last login and if smbd log out clear file) and when app start read this file and put every login parameters where you check them. It works for me.\n"
] | [
0,
0
] | [] | [] | [
"kivy",
"python"
] | stackoverflow_0073502461_kivy_python.txt |
Q:
Accessing a Python traceback from the C API
I'm having some trouble figuring out the proper way to walk a Python traceback using the C API. I'm writing an application that embeds the Python interpreter. I want to be able to execute arbitrary Python code, and if it raises an exception, to translate it to my own application-specific C++ exception. For now, it is sufficient to extract just the file name and line number where the Python exception was raised. This is what I have so far:
PyObject* pyresult = PyObject_CallObject(someCallablePythonObject, someArgs);
if (!pyresult)
{
PyObject* excType, *excValue, *excTraceback;
PyErr_Fetch(&excType, &excValue, &excTraceback);
PyErr_NormalizeException(&excType, &excValue, &excTraceback);
    PyTracebackObject* traceback = (PyTracebackObject*)excTraceback;
// Advance to the last frame (python puts the most-recent call at the end)
while (traceback->tb_next != NULL)
traceback = traceback->tb_next;
// At this point I have access to the line number via traceback->tb_lineno,
// but where do I get the file name from?
// ...
}
Digging around in the Python source code, I see they access both the filename and module name of the current frame via the _frame structure, which looks like it is a privately-defined struct. My next idea was to programmatically load the Python 'traceback' module and call its functions with the C API. Is this sane? Is there a better way to access a Python traceback from C?
A:
This is an old question but for future reference, you can get the current stack frame from the thread state object and then just walk the frames backward. A traceback object isn't necessary unless you want to preserve the state for the future.
For example:
PyThreadState *tstate = PyThreadState_GET();
if (NULL != tstate && NULL != tstate->frame) {
PyFrameObject *frame = tstate->frame;
printf("Python stack trace:\n");
while (NULL != frame) {
// int line = frame->f_lineno;
/*
frame->f_lineno will not always return the correct line number
you need to call PyCode_Addr2Line().
*/
int line = PyCode_Addr2Line(frame->f_code, frame->f_lasti);
const char *filename = PyString_AsString(frame->f_code->co_filename);
const char *funcname = PyString_AsString(frame->f_code->co_name);
printf(" %s(%d): %s\n", filename, line, funcname);
frame = frame->f_back;
}
}
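Note that recent CPython versions hide these struct fields (tstate->frame is gone in 3.11), so here is my own untested sketch of the same walk using the accessor functions added in CPython 3.9:
PyThreadState *tstate = PyThreadState_Get();
PyFrameObject *frame = PyThreadState_GetFrame(tstate);   /* new reference or NULL */

printf("Python stack trace:\n");
while (frame != NULL) {
    int line = PyFrame_GetLineNumber(frame);
    PyCodeObject *code = PyFrame_GetCode(frame);          /* new reference */
    PyObject *filename = PyObject_GetAttrString((PyObject *)code, "co_filename");
    PyObject *funcname = PyObject_GetAttrString((PyObject *)code, "co_name");

    printf("  %s(%d): %s\n",
           filename ? PyUnicode_AsUTF8(filename) : "?",
           line,
           funcname ? PyUnicode_AsUTF8(funcname) : "?");

    Py_XDECREF(filename);
    Py_XDECREF(funcname);
    Py_DECREF(code);

    PyFrameObject *back = PyFrame_GetBack(frame);         /* new reference or NULL */
    Py_DECREF(frame);
    frame = back;
}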
A:
I prefer calling into python from C:
err = PyErr_Occurred();
if (err != NULL) {
PyObject *ptype, *pvalue, *ptraceback;
PyObject *pystr, *module_name, *pyth_module, *pyth_func;
char *str;
PyErr_Fetch(&ptype, &pvalue, &ptraceback);
pystr = PyObject_Str(pvalue);
str = PyString_AsString(pystr);
error_description = strdup(str);
/* See if we can get a full traceback */
module_name = PyString_FromString("traceback");
pyth_module = PyImport_Import(module_name);
Py_DECREF(module_name);
if (pyth_module == NULL) {
full_backtrace = NULL;
return;
}
pyth_func = PyObject_GetAttrString(pyth_module, "format_exception");
if (pyth_func && PyCallable_Check(pyth_func)) {
PyObject *pyth_val;
pyth_val = PyObject_CallFunctionObjArgs(pyth_func, ptype, pvalue, ptraceback, NULL);
pystr = PyObject_Str(pyth_val);
str = PyString_AsString(pystr);
full_backtrace = strdup(str);
Py_DECREF(pyth_val);
}
}
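The snippet above targets Python 2 (PyString_*); a rough, untested sketch of the string-handling changes needed for Python 3 (the traceback-module part stays the same):
PyObject *pystr = PyObject_Str(pvalue);
const char *str = PyUnicode_AsUTF8(pystr);                  /* replaces PyString_AsString */
error_description = strdup(str);
Py_DECREF(pystr);

PyObject *module_name = PyUnicode_FromString("traceback");  /* replaces PyString_FromString */
PyObject *pyth_module = PyImport_Import(module_name);
Py_DECREF(module_name);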
A:
I've discovered that _frame is actually defined in the frameobject.h header included with Python. Armed with this plus looking at traceback.c in the Python C implementation, we have:
#include <Python.h>
#include <frameobject.h>
PyTracebackObject* traceback = get_the_traceback();
int line = traceback->tb_lineno;
const char* filename = PyString_AsString(traceback->tb_frame->f_code->co_filename);
But this still seems really dirty to me.
A:
One principle I've found useful in writing C extensions is to use each language where it's best suited. So if you have a task that would be best implemented in Python, implement it in Python, and if it would be best implemented in C, do it in C. Interpreting tracebacks is best done in Python for two reasons: first, because Python has the tools to do it, and second, because it isn't speed-critical.
I would write a Python function to extract the info you need from the traceback, then call it from C.
You could even go so far as to write a Python wrapper for your callable execution. Instead of invoking someCallablePythonObject, pass it as an argument to your Python function:
def invokeSomeCallablePythonObject(obj, args):
try:
result = obj(*args)
ok = True
except:
# Do some mumbo-jumbo with the traceback, etc.
result = myTraceBackMunger(...)
ok = False
return ok, result
Then in your C code, call this Python function to do the work. The key here is to decide pragmatically which side of the C-Python split to put your code.
A:
I used the following code to extract Python exception's error body. strExcType stores the exception type and strExcValue stores the exception body. Sample values are:
strExcType:"<class 'ImportError'>"
strExcValue:"ImportError("No module named 'nonexistingmodule'",)"
Cpp code:
if(PyErr_Occurred() != NULL) {
PyObject *pyExcType;
PyObject *pyExcValue;
PyObject *pyExcTraceback;
PyErr_Fetch(&pyExcType, &pyExcValue, &pyExcTraceback);
PyErr_NormalizeException(&pyExcType, &pyExcValue, &pyExcTraceback);
PyObject* str_exc_type = PyObject_Repr(pyExcType);
PyObject* pyStr = PyUnicode_AsEncodedString(str_exc_type, "utf-8", "Error ~");
const char *strExcType = PyBytes_AS_STRING(pyStr);
PyObject* str_exc_value = PyObject_Repr(pyExcValue);
PyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, "utf-8", "Error ~");
const char *strExcValue = PyBytes_AS_STRING(pyExcValueStr);
// When using PyErr_Restore() there is no need to use Py_XDECREF for these 3 pointers
//PyErr_Restore(pyExcType, pyExcValue, pyExcTraceback);
Py_XDECREF(pyExcType);
Py_XDECREF(pyExcValue);
Py_XDECREF(pyExcTraceback);
Py_XDECREF(str_exc_type);
Py_XDECREF(pyStr);
Py_XDECREF(str_exc_value);
Py_XDECREF(pyExcValueStr);
}
A:
I had reason to do this recently while writing an allocation tracker for numpy. The previous answers are close but frame->f_lineno will not always return the correct line number--you need to call PyFrame_GetLineNumber(). Here's an updated code snippet:
#include "frameobject.h"
...
PyFrameObject* frame = PyEval_GetFrame();
int lineno = PyFrame_GetLineNumber(frame);
PyObject *filename = frame->f_code->co_filename;
The full thread state is also available in the PyFrameObject; if you want to walk the stack, keep iterating on f_back until it's NULL. Check out the full data structure in frameobject.h: http://svn.python.org/projects/python/trunk/Include/frameobject.h
See also: https://docs.python.org/2/c-api/reflection.html
A:
You can access the Python traceback similarly to the tb_printinternal function, which iterates over the PyTracebackObject list. I have also tried the suggestions above to iterate over frames, but that does not work for me (I see only the last stack frame).
Excerpts from CPython code:
static int
tb_displayline(PyObject *f, PyObject *filename, int lineno, PyObject *name)
{
int err;
PyObject *line;
if (filename == NULL || name == NULL)
return -1;
line = PyUnicode_FromFormat(" File \"%U\", line %d, in %U\n",
filename, lineno, name);
if (line == NULL)
return -1;
err = PyFile_WriteObject(line, f, Py_PRINT_RAW);
Py_DECREF(line);
if (err != 0)
return err;
/* ignore errors since we are not able to report them, are we? */
if (_Py_DisplaySourceLine(f, filename, lineno, 4))
PyErr_Clear();
return err;
}
static int
tb_printinternal(PyTracebackObject *tb, PyObject *f, long limit)
{
int err = 0;
long depth = 0;
PyTracebackObject *tb1 = tb;
while (tb1 != NULL) {
depth++;
tb1 = tb1->tb_next;
}
while (tb != NULL && err == 0) {
if (depth <= limit) {
err = tb_displayline(f,
tb->tb_frame->f_code->co_filename,
tb->tb_lineno,
tb->tb_frame->f_code->co_name);
}
depth--;
tb = tb->tb_next;
if (err == 0)
err = PyErr_CheckSignals();
}
return err;
}
A:
As of Python 3.11, accessing the frame objects seems to need a different approach. Anyway, this works in 3.11; hope it helps someone:
void py_err(void)
{
PyObject *err = PyErr_Occurred();
if (! err) {
return;
}
PyObject *ptype, *pvalue, *pbacktrace, *pyobj_str;
PyObject *ret, *list, *string;
PyObject *mod;
char *py_str;
PyErr_Fetch(&ptype, &pvalue, &pbacktrace);
PyErr_NormalizeException(&ptype, &pvalue, &pbacktrace);
PyErr_Display(ptype, pvalue, pbacktrace);
PyTraceBack_Print(pbacktrace, pvalue);
pyobj_str = PyObject_Str(pvalue);
py_str = py_obj_to_string(pyobj_str);
printf("%s", py_str);
myfree(py_str);
mod = PyImport_ImportModule("traceback");
list = PyObject_CallMethod(mod, "format_exception", "OOO", ptype, pvalue, pbacktrace);
if (list) {
string = PyUnicode_FromString("\n");
ret = PyUnicode_Join(string, list);
Py_DECREF(list);
Py_DECREF(string);
py_str = py_obj_to_string(ret);
printf("%s", py_str);
myfree(py_str);
Py_DECREF(ret);
}
PyErr_Clear();
}
and you will probably need this too
char *py_obj_to_string(const PyObject *py_str)
{
PyObject *py_encstr;
char *outstr = nullptr;
char *str;
py_encstr = nullptr;
str = nullptr;
if (! PyUnicode_Check((PyObject *) py_str)) {
goto err_out;
}
py_encstr = PyUnicode_AsEncodedString((PyObject *) py_str, "utf-8", nullptr);
if (! py_encstr) {
goto err_out;
}
str = PyBytes_AS_STRING(py_encstr);
if (! str) {
goto err_out;
}
outstr = strdup(str);
err_out:
if (py_encstr) {
Py_XDECREF(py_encstr);
}
return outstr;
}
Actual working code, if someone needs it, can be found in my larger project: https://github.com/goblinhack/zorbash
| Accessing a Python traceback from the C API | I'm having some trouble figuring out the proper way to walk a Python traceback using the C API. I'm writing an application that embeds the Python interpreter. I want to be able to execute arbitrary Python code, and if it raises an exception, to translate it to my own application-specific C++ exception. For now, it is sufficient to extract just the file name and line number where the Python exception was raised. This is what I have so far:
PyObject* pyresult = PyObject_CallObject(someCallablePythonObject, someArgs);
if (!pyresult)
{
PyObject* excType, *excValue, *excTraceback;
PyErr_Fetch(&excType, &excValue, &excTraceback);
PyErr_NormalizeException(&excType, &excValue, &excTraceback);
PyTracebackObject* traceback = (PyTracebackObject*)traceback;
// Advance to the last frame (python puts the most-recent call at the end)
while (traceback->tb_next != NULL)
traceback = traceback->tb_next;
// At this point I have access to the line number via traceback->tb_lineno,
// but where do I get the file name from?
// ...
}
Digging around in the Python source code, I see they access both the filename and module name of the current frame via the _frame structure, which looks like it is a privately-defined struct. My next idea was to programmatically load the Python 'traceback' module and call its functions with the C API. Is this sane? Is there a better way to access a Python traceback from C?
| [
"This is an old question but for future reference, you can get the current stack frame from the thread state object and then just walk the frames backward. A traceback object isn't necessary unless you want to preserve the state for the future.\nFor example:\nPyThreadState *tstate = PyThreadState_GET();\nif (NULL != tstate && NULL != tstate->frame) {\n PyFrameObject *frame = tstate->frame;\n\n printf(\"Python stack trace:\\n\");\n while (NULL != frame) {\n // int line = frame->f_lineno;\n /*\n frame->f_lineno will not always return the correct line number\n you need to call PyCode_Addr2Line().\n */\n int line = PyCode_Addr2Line(frame->f_code, frame->f_lasti);\n const char *filename = PyString_AsString(frame->f_code->co_filename);\n const char *funcname = PyString_AsString(frame->f_code->co_name);\n printf(\" %s(%d): %s\\n\", filename, line, funcname);\n frame = frame->f_back;\n }\n}\n\n",
"I prefer calling into python from C:\nerr = PyErr_Occurred();\nif (err != NULL) {\n PyObject *ptype, *pvalue, *ptraceback;\n PyObject *pystr, *module_name, *pyth_module, *pyth_func;\n char *str;\n\n PyErr_Fetch(&ptype, &pvalue, &ptraceback);\n pystr = PyObject_Str(pvalue);\n str = PyString_AsString(pystr);\n error_description = strdup(str);\n\n /* See if we can get a full traceback */\n module_name = PyString_FromString(\"traceback\");\n pyth_module = PyImport_Import(module_name);\n Py_DECREF(module_name);\n\n if (pyth_module == NULL) {\n full_backtrace = NULL;\n return;\n }\n\n pyth_func = PyObject_GetAttrString(pyth_module, \"format_exception\");\n if (pyth_func && PyCallable_Check(pyth_func)) {\n PyObject *pyth_val;\n\n pyth_val = PyObject_CallFunctionObjArgs(pyth_func, ptype, pvalue, ptraceback, NULL);\n\n pystr = PyObject_Str(pyth_val);\n str = PyString_AsString(pystr);\n full_backtrace = strdup(str);\n Py_DECREF(pyth_val);\n }\n}\n\n",
"I've discovered that _frame is actually defined in the frameobject.h header included with Python. Armed with this plus looking at traceback.c in the Python C implementation, we have:\n#include <Python.h>\n#include <frameobject.h>\n\nPyTracebackObject* traceback = get_the_traceback();\n\nint line = traceback->tb_lineno;\nconst char* filename = PyString_AsString(traceback->tb_frame->f_code->co_filename);\n\nBut this still seems really dirty to me.\n",
"One principal I've found useful in writing C extensions is to use each language where it's best suited. So if you have a task to do that would be best implemented in Python, implement in Python, and if it would be best implemented in C, do it in C. Interpreting tracebacks is best done in Python for two reasons: first, because Python has the tools to do it, and second, because it isn't speed-critical.\nI would write a Python function to extract the info you need from the traceback, then call it from C.\nYou could even go so far as to write a Python wrapper for your callable execution. Instead of invoking someCallablePythonObject, pass it as an argument to your Python function:\ndef invokeSomeCallablePythonObject(obj, args):\n try:\n result = obj(*args)\n ok = True\n except:\n # Do some mumbo-jumbo with the traceback, etc.\n result = myTraceBackMunger(...)\n ok = False\n return ok, result\n\nThen in your C code, call this Python function to do the work. The key here is to decide pragmatically which side of the C-Python split to put your code.\n",
"I used the following code to extract Python exception's error body. strExcType stores the exception type and strExcValue stores the exception body. Sample values are:\nstrExcType:\"<class 'ImportError'>\"\nstrExcValue:\"ImportError(\"No module named 'nonexistingmodule'\",)\"\n\nCpp code:\nif(PyErr_Occurred() != NULL) {\n PyObject *pyExcType;\n PyObject *pyExcValue;\n PyObject *pyExcTraceback;\n PyErr_Fetch(&pyExcType, &pyExcValue, &pyExcTraceback);\n PyErr_NormalizeException(&pyExcType, &pyExcValue, &pyExcTraceback);\n\n PyObject* str_exc_type = PyObject_Repr(pyExcType);\n PyObject* pyStr = PyUnicode_AsEncodedString(str_exc_type, \"utf-8\", \"Error ~\");\n const char *strExcType = PyBytes_AS_STRING(pyStr);\n\n PyObject* str_exc_value = PyObject_Repr(pyExcValue);\n PyObject* pyExcValueStr = PyUnicode_AsEncodedString(str_exc_value, \"utf-8\", \"Error ~\");\n const char *strExcValue = PyBytes_AS_STRING(pyExcValueStr);\n\n // When using PyErr_Restore() there is no need to use Py_XDECREF for these 3 pointers\n //PyErr_Restore(pyExcType, pyExcValue, pyExcTraceback);\n\n Py_XDECREF(pyExcType);\n Py_XDECREF(pyExcValue);\n Py_XDECREF(pyExcTraceback);\n\n Py_XDECREF(str_exc_type);\n Py_XDECREF(pyStr);\n\n Py_XDECREF(str_exc_value);\n Py_XDECREF(pyExcValueStr);\n}\n\n",
"I had reason to do this recently while writing an allocation tracker for numpy. The previous answers are close but frame->f_lineno will not always return the correct line number--you need to call PyFrame_GetLineNumber(). Here's an updated code snippet:\n#include \"frameobject.h\"\n...\n\nPyFrameObject* frame = PyEval_GetFrame();\nint lineno = PyFrame_GetLineNumber(frame);\nPyObject *filename = frame->f_code->co_filename;\n\nThe full thread state is also available in the PyFrameObject; if you want to walk the stack keep iterating on f_back until it's NULL. Checkout the full data structure in frameobject.h: http://svn.python.org/projects/python/trunk/Include/frameobject.h\nSee also: https://docs.python.org/2/c-api/reflection.html\n",
"You can access Python traceback similar to tb_printinternal function. It iterates over PyTracebackObject list. I have tried also suggestions above to iterate over frames, but it does not work for me (I see only the last stack frame).\nExcerpts from CPython code:\nstatic int\ntb_displayline(PyObject *f, PyObject *filename, int lineno, PyObject *name)\n{\n int err;\n PyObject *line;\n\n if (filename == NULL || name == NULL)\n return -1;\n line = PyUnicode_FromFormat(\" File \\\"%U\\\", line %d, in %U\\n\",\n filename, lineno, name);\n if (line == NULL)\n return -1;\n err = PyFile_WriteObject(line, f, Py_PRINT_RAW);\n Py_DECREF(line);\n if (err != 0)\n return err;\n /* ignore errors since we are not able to report them, are we? */\n if (_Py_DisplaySourceLine(f, filename, lineno, 4))\n PyErr_Clear();\n return err;\n}\n\nstatic int\ntb_printinternal(PyTracebackObject *tb, PyObject *f, long limit)\n{\n int err = 0;\n long depth = 0;\n PyTracebackObject *tb1 = tb;\n while (tb1 != NULL) {\n depth++;\n tb1 = tb1->tb_next;\n }\n while (tb != NULL && err == 0) {\n if (depth <= limit) {\n err = tb_displayline(f,\n tb->tb_frame->f_code->co_filename,\n tb->tb_lineno,\n tb->tb_frame->f_code->co_name);\n }\n depth--;\n tb = tb->tb_next;\n if (err == 0)\n err = PyErr_CheckSignals();\n }\n return err;\n}\n\n",
"As of python 3.11, accessing the frame objects seems to need a different approach. Anyway, this works in 3.11, hth someone\npy_err(void)\n{\n PyObject *err = PyErr_Occurred();\n if (! err) {\n return;\n }\n\n PyObject *ptype, *pvalue, *pbacktrace, *pyobj_str;\n PyObject *ret, *list, *string;\n PyObject *mod;\n char *py_str;\n\n PyErr_Fetch(&ptype, &pvalue, &pbacktrace);\n PyErr_NormalizeException(&ptype, &pvalue, &pbacktrace);\n PyErr_Display(ptype, pvalue, pbacktrace);\n PyTraceBack_Print(pbacktrace, pvalue);\n\n pyobj_str = PyObject_Str(pvalue);\n py_str = py_obj_to_string(pyobj_str);\n printf(\"%s\", py_str);\n myfree(py_str);\n\n mod = PyImport_ImportModule(\"traceback\");\n list = PyObject_CallMethod(mod, \"format_exception\", \"OOO\", ptype, pvalue, pbacktrace);\n if (list) {\n string = PyUnicode_FromString(\"\\n\");\n ret = PyUnicode_Join(string, list);\n Py_DECREF(list);\n Py_DECREF(string);\n\n py_str = py_obj_to_string(ret);\n printf(\"%s\", py_str);\n myfree(py_str);\n\n Py_DECREF(ret);\n }\n\n PyErr_Clear();\n}\n\nand you will probably need this too\nchar *py_obj_to_string(const PyObject *py_str)\n{ \n PyObject *py_encstr;\n char *outstr = nullptr;\n char *str;\n\n py_encstr = nullptr;\n str = nullptr;\n\n if (! PyUnicode_Check((PyObject *) py_str)) {\n goto err_out;\n }\n \n py_encstr = PyUnicode_AsEncodedString((PyObject *) py_str, \"utf-8\", nullptr);\n if (! py_encstr) {\n goto err_out;\n }\n\n str = PyBytes_AS_STRING(py_encstr);\n if (! str) {\n goto err_out;\n }\n\n outstr = strdup(str);\n\nerr_out:\n \n if (py_encstr) {\n Py_XDECREF(py_encstr);\n }\n \n return outstr;\n}\n\nactual working code if someone needs it can be found in my larger project https://github.com/goblinhack/zorbash\n"
] | [
16,
15,
9,
6,
4,
2,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0001796510_python.txt |
Q:
Load and Retrain PyTorch model…
Hi all! Can you help me? I have a question: "How can I retrain a PyTorch model with just a .pt file?"
I've looked at many guides but haven't found the answer. Everything there has a model class.
I tried to import it using torch.load() and others, but it doesn't work. When I load the model with torch.load() I can't get the parameters.
Do I write the MyModel class myself? Doesn't it have to match the class in the .pt file?
A:
To retrain a PyTorch model with just a .pt file, you will need to use the PyTorch API to create a new model instance and load the .pt file as the initial weights for the model. This can be done using the torch.load() function to load the .pt file, and then using the model.load_state_dict() method to load the weights into the model.
Here is an example of how this might be done:
import torch
# Create a new model instance
model = MyModel()
# Load the .pt file as the initial weights for the model
weights = torch.load('model.pt')
model.load_state_dict(weights)
# Retrain the model using the loaded weights as the starting point
model.fit(...)
In this example, MyModel is the class for your model, and model.pt is the .pt file containing the initial weights for the model. The model.fit() method is used to retrain the model using the loaded weights as the starting point.
It is important to note that this approach assumes that the .pt file was created using the same model class (MyModel in this example) and that the model architecture has not changed since the .pt file was created. If the model architecture has changed, you will need to make sure that the new model architecture is compatible with the weights in the .pt file, or you may need to use a different approach to retrain the model.
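If MyModel is a plain torch.nn.Module (which has no built-in fit() method), the retraining step is typically written as a manual loop instead. A minimal sketch, where train_loader, the optimizer and the loss function are all assumptions:
import torch

model = MyModel()
model.load_state_dict(torch.load("model.pt"))
model.train()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(10):
    for inputs, targets in train_loader:  # assumed DataLoader
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()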
| Load and Retrain PyTorch model… | Hi all! Can you help me? I have a question: "How can I retrain a PyTorch model with just a .pt file ?”
I've looked at many guides but haven't found the answer. Everything there has a model class.
I tried to import it using torch.load() and others, but it doesn't work. When I load the model with torch.load() I can't get the parameters.
Do I write the MyModel class myself? Doesn't it have to match the class in the .pt file?
| [
"To retrain a PyTorch model with just a .pt file, you will need to use the PyTorch API to create a new model instance and load the .pt file as the initial weights for the model. This can be done using the torch.load() function to load the .pt file, and then using the model.load_state_dict() method to load the weights into the model.\nHere is an example of how this might be done:\nimport torch\n\n# Create a new model instance\nmodel = MyModel()\n\n# Load the .pt file as the initial weights for the model\nweights = torch.load('model.pt')\nmodel.load_state_dict(weights)\n\n# Retrain the model using the loaded weights as the starting point\nmodel.fit(...)\n\nIn this example, MyModel is the class for your model, and model.pt is the .pt file containing the initial weights for the model. The model.fit() method is used to retrain the model using the loaded weights as the starting point.\nIt is important to note that this approach assumes that the .pt file was created using the same model class (MyModel in this example) and that the model architecture has not changed since the .pt file was created. If the model architecture has changed, you will need to make sure that the new model architecture is compatible with the weights in the .pt file, or you may need to use a different approach to retrain the model.\n"
] | [
0
] | [] | [] | [
"artificial_intelligence",
"python",
"pytorch"
] | stackoverflow_0074654315_artificial_intelligence_python_pytorch.txt |
Q:
Why I receive ImportError: cannot import name 'just_fix_windows_console' from 'colorama'?
I have to use BayesianOptimization for hyper parameter tuning for neural networks, for the same when I'm importing it using, from bayes_opt import BayesianOptimization, the following error is obtained
`ImportError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_28896\1719632484.py in <module>
----> 1 from bayes_opt import BayesianOptimization
~\anaconda3\lib\site-packages\bayes_opt\__init__.py in <module>
----> 1 from .bayesian_optimization import BayesianOptimization, Events
2 from .domain_reduction import SequentialDomainReductionTransformer
3 from .util import UtilityFunction
4 from .logger import ScreenLogger, JSONLogger
5 from .constraint import ConstraintModel
~\anaconda3\lib\site-packages\bayes_opt\bayesian_optimization.py in <module>
3 from bayes_opt.constraint import ConstraintModel
4
----> 5 from .target_space import TargetSpace
6 from .event import Events, DEFAULT_EVENTS
7 from .logger import _get_default_logger
~\anaconda3\lib\site-packages\bayes_opt\target_space.py in <module>
2
3 import numpy as np
----> 4 from .util import ensure_rng, NotUniqueError
5 from .util import Colours
6
~\anaconda3\lib\site-packages\bayes_opt\util.py in <module>
3 from scipy.stats import norm
4 from scipy.optimize import minimize
----> 5 from colorama import just_fix_windows_console
6
7
ImportError: cannot import name 'just_fix_windows_console' from 'colorama' (C:\Users\saiga\anaconda3\lib\site-packages\colorama\__init__.py)
`
I have tried importing 'colorama', and other modules in it, which was working, but this name isn't.
Also BayesianOptimization can be directly imported, using import BayesianOptimization but I need to call BayesianOPtimization in the program later using
gbm_bo = BayesianOptimization(gbm_cl_bo, params_gbm, random_state=111)
where gbm_cl_bo are functions defined. But then, the below given error is coming.
TypeError: 'module' object is not callable
So, in order to avoid this I think I need to call BayesianOptimization from a parent directory. For that I have also tried the following code: "from .BayesianOptimization import BayesianOptimization", but received the error as
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_28896\572044167.py in <module>
----> 1 from .BayesianOptimization import BayesianOptimization
ImportError: attempted relative import with no known parent package
So how do I fix the above import error?
Otherwise, is there an alternate way of calling BayesianOptimization so as not to get the error "'module' object is not callable"?
A:
Based on the changelog for colorama, that function was added in the latest version of the library, 0.4.6.
Make sure you have that version installed, with e.g. pip install -U colorama.
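If you are not sure which version is currently installed, a quick check (Python 3.8+) is:
from importlib.metadata import version

print(version("colorama"))  # should be 0.4.6 or newer for just_fix_windows_console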
| Why I receive ImportError: cannot import name 'just_fix_windows_console' from 'colorama'? | I have to use BayesianOptimization for hyper parameter tuning for neural networks, for the same when I'm importing it using, from bayes_opt import BayesianOptimization, the following error is obtained
`ImportError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_28896\1719632484.py in <module>
----> 1 from bayes_opt import BayesianOptimization
~\anaconda3\lib\site-packages\bayes_opt\__init__.py in <module>
----> 1 from .bayesian_optimization import BayesianOptimization, Events
2 from .domain_reduction import SequentialDomainReductionTransformer
3 from .util import UtilityFunction
4 from .logger import ScreenLogger, JSONLogger
5 from .constraint import ConstraintModel
~\anaconda3\lib\site-packages\bayes_opt\bayesian_optimization.py in <module>
3 from bayes_opt.constraint import ConstraintModel
4
----> 5 from .target_space import TargetSpace
6 from .event import Events, DEFAULT_EVENTS
7 from .logger import _get_default_logger
~\anaconda3\lib\site-packages\bayes_opt\target_space.py in <module>
2
3 import numpy as np
----> 4 from .util import ensure_rng, NotUniqueError
5 from .util import Colours
6
~\anaconda3\lib\site-packages\bayes_opt\util.py in <module>
3 from scipy.stats import norm
4 from scipy.optimize import minimize
----> 5 from colorama import just_fix_windows_console
6
7
ImportError: cannot import name 'just_fix_windows_console' from 'colorama' (C:\Users\saiga\anaconda3\lib\site-packages\colorama\__init__.py)
`
I have tried importing 'colorama', and other modules in it, which was working, but this name isn't.
Also BayesianOptimization can be directly imported, using import BayesianOptimization but I need to call BayesianOPtimization in the program later using
gbm_bo = BayesianOptimization(gbm_cl_bo, params_gbm, random_state=111)
where gbm_cl_bo are functions defined. But then, the below given error is coming.
TypeError: 'module' object is not callable
So, in order to avoid this I think I need to call BayesianOptimization from a parent directory. For that I have also tried the following code: "from .BayesianOptimization import BayesianOptimization", but received the error as
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_28896\572044167.py in <module>
----> 1 from .BayesianOptimization import BayesianOptimization
ImportError: attempted relative import with no known parent package
So how to fix the above import error?
Otherwise, is there an alternate way of calling BayesianOptimization, so as not to get the error "'module' object is not callable".
| [
"Based on the changelog for colorama, that function was added in the latest version of the library, 0.4.6.\nMake sure you have that version installed, with e.g. pip install -U colorama.\n"
] | [
0
] | [] | [] | [
"colorama",
"python"
] | stackoverflow_0074654425_colorama_python.txt |
Q:
Append the last level in Multiindex Dataframe on same length
I have a dataframe like this
df = pd.DataFrame({'A': [1, 2], 'B':['x', 'y'], 'C':[1, 2], 'D':[0, 0]})
df.groupby(['A', 'B', 'C']).mean()
D
A B C
1 x 1 0.0
2 y 2 0.0
I want it to have the same index in C.
D
A B C
1 x 1 0.0
2 NaN
2 y 1 NaN
2 0.0
The goal is to obtain a plot with a y-axis: "D" and an x-axis: "C", where the lines of different combinations ("A" and "B") are not broken if the value for "C" does not exist. In the following plot, it happened to be the case that e.g. the lines are broken between 15-18. I want to have connected lines even if the values in between are missing.
A:
Use Series.unstack, add the missing value(s) in the range with Series.reindex, then DataFrame.stack, and finally add Series.to_frame:
df1 = df.groupby(['A', 'B', 'C']).mean()
r = range(1, 5)
df2 = df1['D'].unstack().reindex(columns=r).stack(dropna=False).to_frame(name='D')
print (df2)
D
A B C
1 x 1 0.0
2 NaN
3 NaN
4 NaN
2 y 1 NaN
2 0.0
3 NaN
4 NaN
EDIT: Missing values are not plotted; if possible, replace them with some value, here 0:
df = pd.DataFrame({'A': [1, 2, 2,2], 'B':['x', 'y','y','y'], 'C':[1, 2,3,2], 'D':[1,7,4,8]})
df1 = df.groupby(['A', 'B', 'C']).mean()
print (df1)
D
A B C
1 x 1 1.0
2 y 2 7.5
3 4.0
#if unique triples A, B, C
#df1 = df.set_index(['A', 'B', 'C'])
r = range(1, 5)
df2 = (df1['D'].unstack(fill_value=0)
.reindex(columns=r, fill_value=0)
.stack(dropna=False)
.to_frame(name='D'))
print (df2)
D
A B C
1 x 1 1.0
2 0.0
3 0.0
4 0.0
2 y 1 0.0
2 7.5
3 4.0
4 0.0
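With the zero-filled frame from the EDIT, a simple sketch of the plot described in the question (assuming matplotlib) could be:
import matplotlib.pyplot as plt

# one line per (A, B) combination, D against C
for (a, b), grp in df2.reset_index().groupby(['A', 'B']):
    plt.plot(grp['C'], grp['D'], marker='o', label=f'{a}-{b}')
plt.xlabel('C')
plt.ylabel('D')
plt.legend()
plt.show()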
| Append the last level in Multiindex Dataframe on same length | I have a dataframe like this
df = pd.DataFrame({'A': [1, 2], 'B':['x', 'y'], 'C':[1, 2], 'D':[0, 0]})
df.groupby(['A', 'B', 'C']).mean()
D
A B C
1 x 1 0.0
2 y 2 0.0
I want it to have the same index in C.
D
A B C
1 x 1 0.0
2 NaN
2 y 1 NaN
2 0.0
The goal is to obtain a plot with a y-axis: "D" and an x-axis: "C", where the lines of different combinations ("A" and "B") are not broken if the value for "C" does not exist. In the following plot, it happened to be the case that e.g. the lines are broken between 15-18. I want to have connected lines even if the values in between are missing.
| [
"Use Series.unstack, add not exist value(s) in range by Series.reindex with DataFrame.stack, last add Series.to_frame:\ndf1 = df.groupby(['A', 'B', 'C']).mean()\n\nr = range(1, 5)\ndf2 = df1['D'].unstack().reindex(columns=r).stack(dropna=False).to_frame(name='D')\nprint (df2)\n D\nA B C \n1 x 1 0.0\n 2 NaN\n 3 NaN\n 4 NaN\n2 y 1 NaN\n 2 0.0\n 3 NaN\n 4 NaN\n\nEDIT: Missing values are not plotting, if possible replace them by some value, here 0:\ndf = pd.DataFrame({'A': [1, 2, 2,2], 'B':['x', 'y','y','y'], 'C':[1, 2,3,2], 'D':[1,7,4,8]})\ndf1 = df.groupby(['A', 'B', 'C']).mean()\nprint (df1)\n D\nA B C \n1 x 1 1.0\n2 y 2 7.5\n 3 4.0\n \n#if unique triples A, B, C\n#df1 = df.set_index(['A', 'B', 'C'])\n\nr = range(1, 5)\n\ndf2 = (df1['D'].unstack(fill_value=0)\n .reindex(columns=r, fill_value=0)\n .stack(dropna=False)\n .to_frame(name='D'))\nprint (df2)\n D\nA B C \n1 x 1 1.0\n 2 0.0\n 3 0.0\n 4 0.0\n2 y 1 0.0\n 2 7.5\n 3 4.0\n 4 0.0\n\n"
] | [
0
] | [] | [] | [
"multi_index",
"pandas",
"python"
] | stackoverflow_0074654475_multi_index_pandas_python.txt |
Q:
HEIC to JPEG conversion with metadata
I'm trying to convert a HEIC file to JPEG, importing all the metadata as well (like GPS info and other stuff). Unfortunately, with the code below the conversion is OK but no metadata is stored in the JPEG file created.
Can anyone describe what I need to add to the conversion method?
heif_file = pyheif.read("/transito/126APPLE_IMG_6272.HEIC")
image = Image.frombytes(
heif_file.mode,
heif_file.size,
heif_file.data,
"raw",
heif_file.mode,
heif_file.stride,
)
image.save("/transito/126APPLE_IMG_6272.JPEG", "JPEG")
A:
Thanks, I found a solution; I hope it can help others:
# Open the file
heif_file = pyheif.read(file_path_heic)
# Creation of image
image = Image.frombytes(
heif_file.mode,
heif_file.size,
heif_file.data,
"raw",
heif_file.mode,
heif_file.stride,
)
# Retrive the metadata
for metadata in heif_file.metadata or []:
if metadata['type'] == 'Exif':
exif_dict = piexif.load(metadata['data'])
# PIL rotates the image according to EXIF info, so it's necessary to remove the orientation tag, otherwise the image will be rotated again (first by PIL, then by the viewer).
exif_dict['0th'][274] = 0
exif_bytes = piexif.dump(exif_dict)
image.save(file_path_jpeg, "JPEG", exif=exif_bytes)
A:
HEIF to JPEG:
from PIL import Image
import pillow_heif
if __name__ == "__main__":
pillow_heif.register_heif_opener()
img = Image.open("any_image.heic")
img.save("output.jpeg")
JPEG to HEIF:
from PIL import Image
import pillow_heif
if __name__ == "__main__":
pillow_heif.register_heif_opener()
img = Image.open("any_image.jpg")
img.save("output.heic")
Rotation (EXIF of XMP) will be removed automatically when needed.
Call to register_heif_opener can be replaced by importing pillow_heif.HeifImagePlugin instead of pillow_heif
Metadata can be edited in Pillow's "info" dictionary and will be saved when saving to HEIF.
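For example, a small sketch of a HEIC-to-JPEG conversion that explicitly carries the EXIF bytes over (assuming the source file actually contains EXIF, which recent pillow_heif versions expose under img.info["exif"]):
from PIL import Image
import pillow_heif

pillow_heif.register_heif_opener()

img = Image.open("any_image.heic")
exif = img.info.get("exif")  # raw EXIF bytes, or None if absent
if exif:
    img.save("output.jpeg", exif=exif)
else:
    img.save("output.jpeg")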
A:
Here is another approach to convert iPhone HEIC images to JPG while preserving EXIF data
Python 3.9 (I'm on a Raspberry Pi 4, 64-bit)
install pillow_heif (0.8.0)
Run the following code and you'll find the EXIF data in the new JPEG image.
The trick is to get the dictionary information. No additional conversion required.
This is sample code; build your own wrapper around it.
from PIL import Image
import pillow_heif
# open the image file
heif_file = pillow_heif.read_heif("/mnt/pictures/test/IMG_0001.HEIC")
#create the new image
image = Image.frombytes(
heif_file.mode,
heif_file.size,
heif_file.data,
"raw",
heif_file.mode,
heif_file.stride,
)
print(heif_file.info.keys())
dictionary=heif_file.info
exif_dict=dictionary['exif']
# debug
print(exif_dict)
image.save('/tmp/test000.JPG', "JPEG", exif=exif_dict)
| HEIC to JPEG conversion with metadata | I'm trying to convert a HEIC file to JPEG, importing all the metadata as well (like GPS info and other stuff). Unfortunately, with the code below the conversion is OK but no metadata is stored in the JPEG file created.
Can anyone describe what I need to add to the conversion method?
heif_file = pyheif.read("/transito/126APPLE_IMG_6272.HEIC")
image = Image.frombytes(
heif_file.mode,
heif_file.size,
heif_file.data,
"raw",
heif_file.mode,
heif_file.stride,
)
image.save("/transito/126APPLE_IMG_6272.JPEG", "JPEG")
| [
"Thanks, i found a solution, I hope can help others:\n# Open the file\nheif_file = pyheif.read(file_path_heic)\n\n# Creation of image \nimage = Image.frombytes(\n heif_file.mode,\n heif_file.size,\n heif_file.data,\n \"raw\",\n heif_file.mode,\n heif_file.stride,\n)\n# Retrive the metadata\nfor metadata in heif_file.metadata or []:\n if metadata['type'] == 'Exif':\n exif_dict = piexif.load(metadata['data'])\n\n# PIL rotates the image according to exif info, so it's necessary to remove the orientation tag otherwise the image will be rotated again (1° time from PIL, 2° from viewer).\nexif_dict['0th'][274] = 0\nexif_bytes = piexif.dump(exif_dict)\nimage.save(file_path_jpeg, \"JPEG\", exif=exif_bytes)\n\n",
"HEIF to JPEG:\nfrom PIL import Image\nimport pillow_heif\n\nif __name__ == \"__main__\":\n pillow_heif.register_heif_opener()\n img = Image.open(\"any_image.heic\")\n img.save(\"output.jpeg\")\n\nJPEG to HEIF:\nfrom PIL import Image\nimport pillow_heif\n\nif __name__ == \"__main__\":\n pillow_heif.register_heif_opener()\n img = Image.open(\"any_image.jpg\")\n img.save(\"output.heic\")\n\n\nRotation (EXIF of XMP) will be removed automatically when needed.\n\nCall to register_heif_opener can be replaced by importing pillow_heif.HeifImagePlugin instead of pillow_heif\n\nMetadata can be edited in Pillow's \"info\" dictionary and will be saved when saving to HEIF.\n\n\n",
"Here is an other approach to convert iPhone HEIC images to JPG preserving exif data\n\nPyhton 3.9 (I'm on Rasperry PI 4 64 bit)\ninstall pillow_heif (0.8.0)\n\nAnd run following code and you'll find exif data in the new JPEG image.\nThe trick is to get the dictionary information. No additional conversion required.\nThis is sample code, built your own wrapper around.\n from PIL import Image\n import pillow_heif\n\n # open the image file\n heif_file = pillow_heif.read_heif(\"/mnt/pictures/test/IMG_0001.HEIC\")\n \n #create the new image\n image = Image.frombytes(\n heif_file.mode,\n heif_file.size,\n heif_file.data,\n \"raw\",\n heif_file.mode,\n heif_file.stride,\n )\n\n print(heif_file.info.keys())\n dictionary=heif_file.info\n exif_dict=dictionary['exif']\n # debug \n print(exif_dict)\n \n image.save('/tmp/test000.JPG', \"JPEG\", exif=exif_dict)\n\n"
] | [
6,
1,
0
] | [] | [] | [
"data_conversion",
"exif",
"heic",
"jpeg",
"python"
] | stackoverflow_0065045644_data_conversion_exif_heic_jpeg_python.txt |
Q:
Group columns if coordinates are not more distant than a threshold
Sps Gps start end
SP1 G1 2 322
SP1 G1 318 1368
SP1 G1 21125 22297
SP2 G2 2 313
SP2 G2 334 1359
SP2 G2 11716 11964
SP2 G2 20709 20885
SP2 G2 21080 22297
SP3 G3 2 313
SP3 G3 328 1368
SP3 G3 21116 22294
SP4 G4 346 1356
SP4 G4 21131 22282
and I would like to add a new column Threshold_gps for each Sps and Gps that have start and end next to each other, where the gap (the next start minus the previous end) is below a threshold of 500.
Let's take examples:
SP1-G1
Sps Gps start end
SP1 G1 2 322
SP1 G1 318 1368
SP1 G1 21125 22297
here 318-322=-4 which is < 500 so I group them
Sps Gps start end Threshold_gps
SP1 G1 2 322 G1
SP1 G1 318 1368 G1
SP1 G1 21125 22297
then, 21125-1368=19757 which is > 500 so I do not group them
Sps Gps start end Threshold_gps
SP1 G1 2 322 G1
SP1 G1 318 1368 G1
SP1 G1 21125 22297 G2
SP2-G2
Sps Gps start end Threshold_gps
SP2 G2 2 313
SP2 G2 334 1359
SP2 G2 11716 11964
SP2 G2 20709 20885
SP2 G2 21080 22297
334-313=21 which is < 500 so I group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964
SP2 G2 20709 20885
SP2 G2 21080 22297
then, 11716-1359=10357 which is > 500 so I do not group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885
SP2 G2 21080 22297
then, 20709-11964=8745 which is > 500 so I do not group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885 G3
SP2 G2 21080 22297
then, 21080-20885=195 which is < 500 so I group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885 G3
SP2 G2 21080 22297 G3
and so on..
Sps Gps start end Threshold_gps
SP1 G1 2 322 G1
SP1 G1 318 1368 G1
SP1 G1 21125 22297 G2
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885 G3
SP2 G2 21080 22297 G3
SP3 G3 2 313 G1
SP3 G3 328 1368 G1
SP3 G3 21116 22294 G2
SP4 G4 346 1356 G1
SP4 G4 21131 22282 G2
Does someone have an idea please?
Here is the dict format of the tab if it can helps:
{'Sps': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP2', 4: 'SP2', 5: 'SP2', 6: 'SP2', 7: 'SP2', 8: 'SP3', 9: 'SP3', 10: 'SP3', 11: 'SP4', 12: 'SP4'}, 'Gps': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G2', 4: 'G2', 5: 'G2', 6: 'G2', 7: 'G2', 8: 'G3', 9: 'G3', 10: 'G3', 11: 'G4', 12: 'G4'}, 'start': {0: 2, 1: 318, 2: 21125, 3: 2, 4: 334, 5: 11716, 6: 20709, 7: 21080, 8: 2, 9: 328, 10: 21116, 11: 346, 12: 21131}, 'end': {0: 322, 1: 1368, 2: 22297, 3: 313, 4: 1359, 5: 11964, 6: 20885, 7: 22297, 8: 313, 9: 1368, 10: 22294, 11: 1356, 12: 22282}}
A:
I believe you might want:
df['Threshold_gps'] = (df
.groupby(['Sps', 'Gps'], group_keys=False)
.apply(lambda d: (s:=d['end'].shift().rsub(d['start'])
.gt(500))
.cumsum().add(1-s.iloc[0])
.astype(str).radd('G')
)
)
for python <3.8:
def get_group(g):
s = g['end'].shift().rsub(g['start']).gt(500)
return s.cumsum().add(1-s.iloc[0]).astype(str).radd('G')
df['Threshold_gps'] = (df
.groupby(['Sps', 'Gps'], group_keys=False)
.apply(get_group)
)
Output:
Sps Gps start end Threshold_gps
0 SP1 G1 2 322 G1
1 SP1 G1 318 1368 G1
2 SP1 G1 21125 22297 G2
3 SP2 G2 2 313 G1
4 SP2 G2 334 1359 G1
5 SP2 G2 11716 11964 G2
6 SP2 G2 20709 20885 G3
7 SP2 G2 21080 22297 G3
8 SP3 G3 2 313 G1
9 SP3 G3 328 1368 G1
10 SP3 G3 21116 22294 G2
11 SP4 G4 346 1356 G1
12 SP4 G4 21131 22282 G2
| Group columns if coordinates are not more distant than a threshold | Sps Gps start end
SP1 G1 2 322
SP1 G1 318 1368
SP1 G1 21125 22297
SP2 G2 2 313
SP2 G2 334 1359
SP2 G2 11716 11964
SP2 G2 20709 20885
SP2 G2 21080 22297
SP3 G3 2 313
SP3 G3 328 1368
SP3 G3 21116 22294
SP4 G4 346 1356
SP4 G4 21131 22282
and I would like to add a new column Threshold_gps for each Sps and Gps that have start and end next to each other, where the gap (the next start minus the previous end) is below a threshold of 500.
Let's take examples:
SP1-G1
Sps Gps start end
SP1 G1 2 322
SP1 G1 318 1368
SP1 G1 21125 22297
here 318-322=-4 which is < 500 so I group them
Sps Gps start end Threshold_gps
SP1 G1 2 322 G1
SP1 G1 318 1368 G1
SP1 G1 21125 22297
then, 21125-1368=19757 which is > 500 so I do not group them
Sps Gps start end Threshold_gps
SP1 G1 2 322 G1
SP1 G1 318 1368 G1
SP1 G1 21125 22297 G2
SP2-G2
Sps Gps start end Threshold_gps
SP2 G2 2 313
SP2 G2 334 1359
SP2 G2 11716 11964
SP2 G2 20709 20885
SP2 G2 21080 22297
334-313=21 which is < 500 so I group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964
SP2 G2 20709 20885
SP2 G2 21080 22297
then, 11716-1359=10357 which is > 500 so I do not group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885
SP2 G2 21080 22297
then, 20709-11964=8745 which is > 500 so I do not group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885 G3
SP2 G2 21080 22297
then, 21080-20885=195 which is < 500 so I group them
Sps Gps start end Threshold_gps
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885 G3
SP2 G2 21080 22297 G3
and so on..
Sps Gps start end Threshold_gps
SP1 G1 2 322 G1
SP1 G1 318 1368 G1
SP1 G1 21125 22297 G2
SP2 G2 2 313 G1
SP2 G2 334 1359 G1
SP2 G2 11716 11964 G2
SP2 G2 20709 20885 G3
SP2 G2 21080 22297 G3
SP3 G3 2 313 G1
SP3 G3 328 1368 G1
SP3 G3 21116 22294 G2
SP4 G4 346 1356 G1
SP4 G4 21131 22282 G2
Does someone have an idea please?
Here is the dict format of the tab if it can helps:
{'Sps': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP2', 4: 'SP2', 5: 'SP2', 6: 'SP2', 7: 'SP2', 8: 'SP3', 9: 'SP3', 10: 'SP3', 11: 'SP4', 12: 'SP4'}, 'Gps': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G2', 4: 'G2', 5: 'G2', 6: 'G2', 7: 'G2', 8: 'G3', 9: 'G3', 10: 'G3', 11: 'G4', 12: 'G4'}, 'start': {0: 2, 1: 318, 2: 21125, 3: 2, 4: 334, 5: 11716, 6: 20709, 7: 21080, 8: 2, 9: 328, 10: 21116, 11: 346, 12: 21131}, 'end': {0: 322, 1: 1368, 2: 22297, 3: 313, 4: 1359, 5: 11964, 6: 20885, 7: 22297, 8: 313, 9: 1368, 10: 22294, 11: 1356, 12: 22282}}
| [
"I believe you might want:\ndf['Threshold_gps'] = (df\n .groupby(['Sps', 'Gps'], group_keys=False)\n .apply(lambda d: (s:=d['end'].shift().rsub(d['start'])\n .gt(500))\n .cumsum().add(1-s.iloc[0])\n .astype(str).radd('G')\n )\n)\n\nfor python <3.8:\ndef get_group(g):\n s = g['end'].shift().rsub(g['start']).gt(500)\n return s.cumsum().add(1-s.iloc[0]).astype(str).radd('G')\n\ndf['Threshold_gps'] = (df\n .groupby(['Sps', 'Gps'], group_keys=False)\n .apply(get_group)\n)\n\nOutput:\n Sps Gps start end Threshold_gps\n0 SP1 G1 2 322 G1\n1 SP1 G1 318 1368 G1\n2 SP1 G1 21125 22297 G2\n3 SP2 G2 2 313 G1\n4 SP2 G2 334 1359 G1\n5 SP2 G2 11716 11964 G2\n6 SP2 G2 20709 20885 G3\n7 SP2 G2 21080 22297 G3\n8 SP3 G3 2 313 G1\n9 SP3 G3 328 1368 G1\n10 SP3 G3 21116 22294 G2\n11 SP4 G4 346 1356 G1\n12 SP4 G4 21131 22282 G2\n\n"
] | [
4
] | [] | [] | [
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074653913_pandas_python_python_3.x.txt |
Q:
Django login/ payload visible in plaintext in Chrome DevTools
This is weird. I have created login functions so many times but never noticed this thing.
When we provide a username and password in a form and submit it, and it goes to the server-side as a Payload like this, I can see the data in the Chrome DevTools network tab:
csrfmiddlewaretoken:
mHjXdIDo50tfygxZualuxaCBBdKboeK2R89scsxyfUxm22iFsMHY2xKtxC9uQNni
username: testuser
password: 'dummy pass' #same as i typed(no encryption)
I got this in the case of incorrect creds because the login failed and it wouldn't redirect to the other page.
But then I tried with valid creds and I checked the Preserve log box in the Chrome network tab. Then I checked there and I could still see the exact username and password I had entered. At first I thought I might have missed some encryption logic or something.
But then I tried with multiple reputed tech companies' login functionality and I could still see creds in the payload. Isn't this wrong?
It's supposed to be in the encrypted format right?
Models.py
from django.contrib.auth.models import User
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
html
<form method="POST" class="needs-validation mb-4" novalidate>
{% csrf_token %}
<div class="form-outline mb-4">
<input type="email" id="txt_email" class="form-control"
placeholder="Username or email address" required />
</div>
<div class="form-outline mb-4">
<input type="password" id="txt_password" class="form-control"
placeholder="Password" required />
</div>
<div class="d-grid gap-2">
<button class="btn btn-primary fa-lg gradient-custom-2 login_btn" type="submit" id="btn_login"><i class="fa fa-sign-in" aria-hidden="true"> </i> Sign in</button>
<div class="alert alert-danger" id="lbl_error" role="alert" style="display: none;">
</div>
</div>
</form>
login view
def authcheck(request):
try:
if request.method == "POST":
username = request.POST["username"]
password = request.POST["password"]
user = authenticate(username=username, password=password)
if user is not None:
check_is_partner = Profile.objects.filter(user__username=username, is_partner=True).values("password_reset").first()
if check_is_partner and check_is_partner['password_reset'] is True:
return JsonResponse(({'code':0 ,'username':username}), content_type="json")
if check_ip_restricted(user.profile.ip_restriction, request):
return HttpResponse("ok_ipr", content_type="json")
login(request, user)
session = request.session
session["username"] = username
session["userid"] = user.id
session.save()
if check_is_partner:
return HttpResponse("1", content_type="json")
else:
return HttpResponse("ok", content_type="json")
else:
return HttpResponse("nok", content_type="json")
except Exception:
return HttpResponse("error", content_type="json")
A:
It's supposed to be in the encrypted format right?
No.
What you're seeing in Chrome DevTools is the username and password before they get encrypted.
If you were to run tcpdump or Wireshark when you make the request, you'd see that it is encrypted over the network.
In order for the data to be usable by anyone, it has to be unencrypted/decrypted at some point.
For example, you can also see the response data (status code, headers, payload) in Chrome DevTools, which is encrypted over the network, but it's shown to you after it's been decrypted.
Here's a similar answer to a similar question.
EDIT: This is all assuming you're on a site using https. If you're using plain ole http, anyone sniffing the network can see your username + password in plaintext.
A:
Everything at the front end (the browser side) is visible to everyone, and dev tools are no exception. Use HTTPS for security reasons.
| Django login/ payload visible in plaintext in Chrome DevTools | This is weird. I have created login functions so many times but never noticed this thing.
When we provide a username and password in a form and submit it, and it goes to the server-side as a Payload like this, I can see the data in the Chrome DevTools network tab:
csrfmiddlewaretoken:
mHjXdIDo50tfygxZualuxaCBBdKboeK2R89scsxyfUxm22iFsMHY2xKtxC9uQNni
username: testuser
password: 'dummy pass' #same as i typed(no encryption)
I got this in the case of incorrect creds because the login failed and it wouldn't redirect to the other page.
But then I tried with valid creds and I checked the Preserve log box in the Chrome network tab. Then I checked there and I could still see the exact entered Username and password. At first I thought I might have missed some encryption logic or something.
But then I tried with multiple reputed tech companies' login functionality and I could still see creds in the payload. Isn't this wrong?
It's supposed to be in the encrypted format right?
Models.py
from django.contrib.auth.models import User
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
html
<form method="POST" class="needs-validation mb-4" novalidate>
{% csrf_token %}
<div class="form-outline mb-4">
<input type="email" id="txt_email" class="form-control"
placeholder="Username or email address" required />
</div>
<div class="form-outline mb-4">
<input type="password" id="txt_password" class="form-control"
placeholder="Password" required />
</div>
<div class="d-grid gap-2">
<button class="btn btn-primary fa-lg gradient-custom-2 login_btn" type="submit" id="btn_login"><i class="fa fa-sign-in" aria-hidden="true"> </i> Sign in</button>
<div class="alert alert-danger" id="lbl_error" role="alert" style="display: none;">
</div>
</div>
</form>
login view
def authcheck(request):
try:
if request.method == "POST":
username = request.POST["username"]
password = request.POST["password"]
user = authenticate(username=username, password=password)
if user is not None:
check_is_partner = Profile.objects.filter(user__username=username, is_partner=True).values("password_reset").first()
if check_is_partner and check_is_partner['password_reset'] is True:
return JsonResponse(({'code':0 ,'username':username}), content_type="json")
if check_ip_restricted(user.profile.ip_restriction, request):
return HttpResponse("ok_ipr", content_type="json")
login(request, user)
session = request.session
session["username"] = username
session["userid"] = user.id
session.save()
if check_is_partner:
return HttpResponse("1", content_type="json")
else:
return HttpResponse("ok", content_type="json")
else:
return HttpResponse("nok", content_type="json")
except Exception:
return HttpResponse("error", content_type="json")
| [
"\nIt's supposed to be in the encrypted format right?\n\nNo.\nWhat you're seeing in Chrome DevTools is the username and password before they get encrypted.\nIf you were to run tcpdump or Wireshark when you make the request, you'd see that it is encrypted over the network.\nIn order for the data to be usable by anyone, it has to be unencrypted/decrypted at some point.\nFor example, you can also see the response data (status code, headers, payload) in Chrome DevTools, which is encrypted over the network, but it's shown to you after it's been decrypted.\n\nHere's a similar answer to a similar question.\n\nEDIT: This is all assuming you're on a site using https. If you're using plain ole http, anyone sniffing the network can see your username + password in plaintext.\n",
"Everything at the front end - the browser side is visible to everyone, and dev tools are no exception. Use HTTPS for security reasons.\n"
] | [
2,
0
] | [] | [] | [
"authentication",
"django",
"payload",
"python"
] | stackoverflow_0074612555_authentication_django_payload_python.txt |
Q:
How do I return true, for two arrays that are the same length and value? (Python)
So, the question I am trying to solve is...
"Return true if two arrays are equal.
The arrays are equal if they are the same length and contain the same value at each particular index.
Two empty arrays are equal."
for example:
input:
a == [1, 9, 4, 6, 3]
b == [1, 9, 4, 6, 3]
output:
true
OR
input:
a == [5, 3, 1]
b == [6, 2, 9, 4]
output:
false
This is how I went about it. I'm able to get the length of the arrays right, but I don't know how to ensure the values in them are the same too. That's the part I'm stuck on implementing.
def solution(a, b):
if range(len(a)) == range(len(b)):
return True
else:
return False
| How do I return true, for two arrays that are the same length and value? (Python) | So, the question I am trying to solve is...
"Return true if two arrays are equal.
The arrays are equal if they are the same length and contain the same value at each particular index.
Two empty arrays are equal."
for example:
input:
a == [1, 9, 4, 6, 3]
b == [1, 9, 4, 6, 3]
output:
true
OR
input:
a == [5, 3, 1]
b == [6, 2, 9, 4]
output:
false
This is how I went about it. I'm able to get the length of the arrays right, but I don't know how to ensure the values in it will be the same too. That's the part I am stuck on how to implement.
def solution(a, b):
if range(len(a)) == range(len(b)):
return True
else:
return False
| [] | [] | [
"Use numpy.array_equal:\na = [1, 9, 4, 6, 3]\nb = [1, 9, 4, 6, 3]\nnp.array_equal(a, b)\n# True\n\na = [5, 3, 1]\nb = [6, 2, 9, 4]\nnp.array_equal(a, b)\n# False\n\nnp.array_equal([], [])\n# True\n\n",
"just use a for loop\ndef solution(a, b):\n x = 0\n if (len(a) == len(b)): \n for i in range(len(a)):\n if (a[i] == b[i]):\n x = 1\n else:\n x = 0\n break\n if (x==1):\n return True\n else:\n return False\n else: \n return False\n\n"
] | [
-1,
-2
] | [
"arrays",
"numpy",
"python",
"python_3.x"
] | stackoverflow_0074653398_arrays_numpy_python_python_3.x.txt |
Q:
Compare value of previous row and next row; create new DF with the rows matching condition
I am trying to compare floating point values with one another within one column; I need a function that doesn't produce an error...
The function should loop through the column and compare each value with the column's previous value and also with the next value, and create a new DF with all rows matching the conditions.
I tried a combination of for loops and if statements but I couldn't come up with code that doesn't produce errors.
Example:
Condition = True if the value of col1 is higher than the previous value of col1 and at the same time lower than the next; all within col1
Condition = True as well if the value of col1 is lower than the previous value of col1 and at the same time higher than the next
The first and last value will produce an error so they should be compared each with a variable called compare_first and compare_last which I will define manually
values = [[5.5, 2.5, 10.0], [2.0, 4.5, 1.0], [2.5, 5.2, 8.0],
[4.5, 5.8, 4.8], [4.6, 6.3, 9.6], [4.1, 6.4, 9.0],
[5.1, 2.3, 11.1]]
# creating a pandas dataframe
a_df = pd.DataFrame(values, columns=['col1', 'col2', 'col3'],
index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
print(a_df)
output
col1 col2 col3
a 5.5 2.5 10.0
b 2.0 4.5 1.0
c 2.5 5.2 8.0
d 4.5 5.8 4.8
e 4.6 6.3 9.6
f 4.1 6.4 9.0
g 5.1 2.3 11.1
desired output - all rows matching the described conditions as a new df
col1 col2 col3
b 2.0 4.5 1.0
e 4.6 6.3 9.6
f 4.1 6.4 9.0
A:
Compare shifted values for greater previous or next values with DataFrame.shift and chain masks by | for bitwise OR, then omit the first and last value of the mask and set False in Series.reindex:
m = a_df.col1.lt(a_df.col1.shift()) | a_df.col1.gt(a_df.col1.shift(-1))
# @mozway alternative
m = a_df.col1.diff().lt(0) | a_df.col1.diff(-1).gt(0)
df = a_df[m.iloc[1:-1].reindex(a_df.index, fill_value=False)]
print (df)
col1 col2 col3
b 2.0 4.5 1.0
e 4.6 6.3 9.6
f 4.1 6.4 9.0
A:
Thank you @jezreal & @mozway! Through your code samples I started to research .le(), .gt(), .diff() and .any(), and managed to create code that does exactly what I need. I AM SO HAPPY to have solved this :D
here it is:
#b_df = a Series with all values matching conditions
b_df = a_df.col1[((a_df.col1.diff().gt(0) & a_df.col1.diff(-1).gt(0))|(a_df.col1.diff().le(0) & a_df.col1.diff(-1).le(0)))]
#keep_index_df if the original index should be preserved (this df is the solution to my question)
keep_index_df = a_df.reset_index().merge(b_df, how='right', on='col1')
keep_index_df = keep_index_df.set_index('index')
#reset index df
new_index_df = pd.merge(a_df, b_df, how='inner')
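As a side note, the same boolean mask can also be applied to a_df directly, which keeps the original index without the merge step (a sketch based on the condition above):
# apply the condition straight to the dataframe; the original index is preserved
mask = ((a_df.col1.diff().gt(0) & a_df.col1.diff(-1).gt(0))
        | (a_df.col1.diff().le(0) & a_df.col1.diff(-1).le(0)))
filtered_df = a_df[mask]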
| Compare value of previous row and next row; create new DF with the rows matching condition | I am trying to compare floating point values with one another within one column; I need a function that doesn't produce an error...
The function should loop through the column and compare each value with the column's previous value and also with the next value, and create a new DF with all rows matching the conditions.
I tried a combination of for loops and if statements but I couldn't come up with code that doesn't produce errors.
Example:
Condition = True if the value of col1 is higher than the previous value of col1 and at the same time lower than the next; all within col1
Condition = True as well if the value of col1 is lower than the previous value of col1 and at the same time higher than the next
The first and last value will produce an error so they should be compared each with a variable called compare_first and compare_last which I will define manually
values = [[5.5, 2.5, 10.0], [2.0, 4.5, 1.0], [2.5, 5.2, 8.0],
[4.5, 5.8, 4.8], [4.6, 6.3, 9.6], [4.1, 6.4, 9.0],
[5.1, 2.3, 11.1]]
# creating a pandas dataframe
a_df = pd.DataFrame(values, columns=['col1', 'col2', 'col3'],
index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
print(a_df)
output
col1 col2 col3
a 5.5 2.5 10.0
b 2.0 4.5 1.0
c 2.5 5.2 8.0
d 4.5 5.8 4.8
e 4.6 6.3 9.6
f 4.1 6.4 9.0
g 5.1 2.3 11.1
desired output - all rows matching the described conditions as a new df
col1 col2 col3
b 2.0 4.5 1.0
e 4.6 6.3 9.6
f 4.1 6.4 9.0
| [
"Compare shifted values for greater prevous or next values with DataFrame.shift and chain masks by | for bitwise OR, then omit first and last value of mask and set False in Series.reindex:\nm = a_df.col1.lt(a_df.col1.shift()) | a_df.col1.gt(a_df.col1.shift(-1))\n\n# @mozway alternative\nm = a_df.col1.diff().lt(0) | a_df.col1.diff(-1).gt(0)\n\ndf = a_df[m.iloc[1:-1].reindex(a_df.index, fill_value=False)]\nprint (df)\n col1 col2 col3\nb 2.0 4.5 1.0\ne 4.6 6.3 9.6\nf 4.1 6.4 9.0\n\n",
"Thank you @jezreal & @mozway through your code samples I started to research about .le() and .gt() and .diff() and .any() and managed to create a code that exatly does what I need. I AM SO HAPPY to have solved this :D\nhere it is:\n#b_df = a Series with all values matching conditions\nb_df = a_df.col1[((a_df.col1.diff().gt(0) & a_df.col1.diff(-1).gt(0))|(a_df.col1.diff().le(0) & a_df.col1.diff(-1).le(0)))]\n\n#keep_index_df if the original index should be preserved (this df is the solution to my question)\nkeep_index_df = a_df.reset_index().merge(b_df, how='right', on='col1')\nkeep_index_df = keep_index_df.set_index('index')\n\n#reseted index df\nnew_index_df = pd.merge(a_df, b_df, how='inner')\n\n"
] | [
1,
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074612326_dataframe_pandas_python.txt |
Q:
How to shift quartile lines in seaborn grouped violin plots?
Consider the following seaborn grouped violinplot with split violins, where I inserted a small space inbetween.
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="whitegrid")
tips = sns.load_dataset("tips")
fig, ax = plt.subplots()
sns.violinplot(
data=tips, x="day", y="total_bill", hue="smoker", split=True, inner="quart", linewidth=1,
palette={"Yes": "b", "No": ".85"}, ax=ax
)
sns.despine(left=True)
delta = 0.025
for ii, item in enumerate(ax.collections):
if isinstance(item, matplotlib.collections.PolyCollection):
path, = item.get_paths()
vertices = path.vertices
if ii % 2: # -> to right
vertices[:, 0] += delta
else: # -> to left
vertices[:, 0] -= delta
plt.show()
How can I shift the quartile (and median) indicating dotted (and dashed) lines back inside the violins?
A:
You can do it exactly the same way as you did with the violins:
for i, line in enumerate(ax.get_lines()):
line.get_path().vertices[:, 0] += delta if i // 3 % 2 else -delta
| How to shift quartile lines in seaborn grouped violin plots? | Consider the following seaborn grouped violinplot with split violins, where I inserted a small space inbetween.
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="whitegrid")
tips = sns.load_dataset("tips")
fig, ax = plt.subplots()
sns.violinplot(
data=tips, x="day", y="total_bill", hue="smoker", split=True, inner="quart", linewidth=1,
palette={"Yes": "b", "No": ".85"}, ax=ax
)
sns.despine(left=True)
delta = 0.025
for ii, item in enumerate(ax.collections):
if isinstance(item, matplotlib.collections.PolyCollection):
path, = item.get_paths()
vertices = path.vertices
if ii % 2: # -> to right
vertices[:, 0] += delta
else: # -> to left
vertices[:, 0] -= delta
plt.show()
How can I shift the quartile (and median) indicating dotted (and dashed) lines back inside the violins?
| [
"You can do it exactly the same way as you did with the violins:\nfor i, line in enumerate(ax.get_lines()):\n line.get_path().vertices[:, 0] += delta if i // 3 % 2 else -delta\n\n\n"
] | [
2
] | [] | [] | [
"matplotlib",
"plot",
"python",
"seaborn",
"violin_plot"
] | stackoverflow_0074653509_matplotlib_plot_python_seaborn_violin_plot.txt |
Q:
How to check if Python is running on an M1 mac, even under Rosetta?
I have python 3.10 code that launches a process but it needs to run a different process if it is running on an M1 Mac.
Is there a way to reliably detect if you are on an M1 Mac even if the python process is running in Rosetta?
I've tried this:
print(sys.platform)
# On Intel silicon:
darwin
# On M1 silicon:
darwin
but it always prints "darwin".
I tried sniffing around in the os.* and sys.* libraries and the best I found was this:
print(os.uname())
# On Intel silicon:
posix.uname_result(sysname='Darwin', nodename='XXX', release='21.5.0', version='Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64', machine='x86_64')
# On M1 silicon:
posix.uname_result(sysname='Darwin', nodename='XXX', release='21.4.0', version='Darwin Kernel Version 21.4.0: Fri Mar 18 00:47:26 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T8101', machine='x86_64')
I assume it returns machine= 'x86_64' on the M1 machine because Python is running in Rosetta? The version field does appear different:
# Intel
version='Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64'
# M1
version='Darwin Kernel Version 21.4.0: Fri Mar 18 00:47:26 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T8101'
Is parsing uname() and looking for "ARM" in the version field the best way to check for M1 silicon if you are running under Rosetta?
A:
You could just check the processor name, and check it that way. The easiest way to get it is by using the cpuinfo module. cpuinfo.get_cpu_info()['brand_raw'] returns a string with the processor brand and name, for example "Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz". If you only want "i5-6500", you can just take the third word from the string.
import cpuinfo
cpudata = cpuinfo.get_cpu_info()['brand_raw']
cpuname = cpudata.split(" ")[2]  # third word, e.g. "i5-6500"
If you then print(cpuname), it should output only the processor "name", in this case i5-6500.
A:
Use Python's built-in platform library to determine if a Mac is M1/M2:
import platform
print(platform.processor())
On an M1/M2 Mac --> arm
On an older Mac --> i386
A:
The only way I was able to do it from Python was to call out to sysctl. This is taken from cmake-macos-rosetta:
Non-Rosetta Python on M1:
>>> import platform, subprocess
>>> platform.processor()
'arm'
# Rosetta would not report arm.
>>> subprocess.run(["sysctl", "-n", "sysctl.proc_translated"])
0
Rosetta Python on M1:
>>> import platform, subprocess
>>> platform.processor()
'i386'
# Rosetta fakes the processor.
>>> subprocess.run(["sysctl", "-n", "sysctl.proc_translated"])
1
CompletedProcess(args=['sysctl', '-n', 'sysctl.proc_translated'], returncode=0)
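Putting the pieces from the answers above together, here is a hedged sketch of a helper that reports Apple silicon even when the interpreter runs under Rosetta. The function name is my own invention; the sysctl flag behaviour is as described above.
import platform
import subprocess

def is_apple_silicon() -> bool:
    """Sketch: True on Apple-silicon Macs, even if Python runs under Rosetta."""
    if platform.system() != "Darwin":
        return False
    if platform.processor() == "arm":
        return True  # native arm64 build of Python
    # Under Rosetta the processor is reported as i386/x86_64, but this sysctl is 1.
    result = subprocess.run(
        ["sysctl", "-n", "sysctl.proc_translated"],
        capture_output=True, text=True
    )
    return result.stdout.strip() == "1"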
| How to check if Python is running on an M1 mac, even under Rosetta? | I have python 3.10 code that launches a process but it needs to run a different process if it is running on an M1 Mac.
Is there a way to reliably detect if you are on an M1 Mac even if the python process is running in Rosetta?
I've tried this:
print(sys.platform)
# On Intel silicon:
darwin
# On M1 silicon:
darwin
but it always prints "darwin".
I tried sniffing around in the os.* and sys.* libraries and the best I found was this:
print(os.uname())
# On Intel silicon:
posix.uname_result(sysname='Darwin', nodename='XXX', release='21.5.0', version='Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64', machine='x86_64')
# On M1 silicon:
posix.uname_result(sysname='Darwin', nodename='XXX', release='21.4.0', version='Darwin Kernel Version 21.4.0: Fri Mar 18 00:47:26 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T8101', machine='x86_64')
I assume it returns machine= 'x86_64' on the M1 machine because Python is running in Rosetta? The version field does appear different:
# Intel
version='Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64'
# M1
version='Darwin Kernel Version 21.4.0: Fri Mar 18 00:47:26 PDT 2022; root:xnu-8020.101.4~15/RELEASE_ARM64_T8101'
Is parsing uname() and looking for "ARM" in the version field the best way to check for M1 silicon if you are running under Rosetta?
| [
"You could just check for the Processor Name, and check it that way. The easiest way to get it is by using the cpuinfo module. cpuinfo.get_cpu_info()['brand_raw'] returns a string with the processor brand and name, for example \"Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz\". If you only want to have \"i5-6500\", you can just take the 3rd word from the string.\nimport cpuinfo\ncpudata = cpuinfo.get_cpu_info()['brand_raw']\ncpuname = cpudata.split(\" \")[1]\n\nIf you then would print(cpuname), it should only output the processor \"name\", so in this case i5-6500.\n",
"Use Python's built-in platform library to determine if a Mac is M1/M2:\nimport platform\n\nprint(platform.processor())\n\nOn an M1/M2 Mac --> arm\nOn an older Mac --> i386\n",
"The only way I was able to do it from Python was to call out to sysctl. This is taken from cmake-macos-rosetta:\nNon-Rosetta Python on M1:\n>>> import platform, subprocess\n>>> platform.processor()\n'arm'\n\n# Rosetta would not report arm.\n\n>>> subprocess.run([\"sysctl\", \"-n\", \"sysctl.proc_translated\"])\n0\n\nRosetta Python on M1:\n>>> import platform, subprocess\n>>> platform.processor()\n'i386'\n\n# Rosetta fakes the processor.\n\n>>> subprocess.run([\"sysctl\", \"-n\", \"sysctl.proc_translated\"])\n1\nCompletedProcess(args=['sysctl', '-n', 'sysctl.proc_translated'], returncode=0)\n\n"
] | [
1,
1,
0
] | [] | [] | [
"apple_m1",
"python"
] | stackoverflow_0072888632_apple_m1_python.txt |
Q:
converting series to dataframe using 'as_index' and 'reset_index' not working
I am trying to convert this series of data into dataframe using as_index = False inside groupby method. My goal is to show the total value for month and weekday.
My data
This is my main data uber-15.
Dispatching Pickup_date Affiliated locationID month weekDay day hour minute
0 B02617 2015-05-17 09:47:00 B02617 141 5 Sunday 17 9 47
1 B02617 2015-05-17 09:47:00 B02617 65 5 Sunday 17 9 47
From this I am extracting month and weekDay.
temp = uber_15.groupby(['month', "weekDay"]).size()
Next I am converting this series to dataframe using as_index.
temp = uber_15.groupby(['month', "weekDay"], as_index=False).size()
But the result is same when I use as_index=False but not working.
I also tried finding online solution where I find about reset_index but this there is column header with "0" which was supposed to be 'size' with reset_index.
temp = uber_15.groupby(['month', "weekDay"]).size().reset_index()
This the goal I am trying to achieve.
this is the output I am getting.
A:
To convert a Pandas Series object to a DataFrame with columns named after the Series indices, you can use the to_frame() method on the Series object. This method converts the Series to a DataFrame with a single column, where the column name is the name of the Series index. Here is an example of how you can use this method to convert your Series object to a DataFrame:
# Create a Series object with the size of each group
temp = uber_15.groupby(['month', "weekDay"]).size()
# Convert the Series to a DataFrame
temp_df = temp.to_frame()
# Rename the column to 'size'
temp_df.columns = ['size']
After running this code, the DataFrame temp_df will have two columns named 'month' and 'weekDay', and a third column named 'size' containing the size of each group.
Alternatively, you can use the reset_index() method on the Series object to convert the Series to a DataFrame. Because the Series produced by size() has no name, the count column comes out labelled 0, so either rename that column or pass name='size' directly to reset_index(). Here is an example of how you can do this:
# Create a Series object with the size of each group
temp = uber_15.groupby(['month', "weekDay"]).size()
# Convert the Series to a DataFrame and name the count column 'size'
temp_df = temp.reset_index(name='size')
In this case, the resulting DataFrame will have the same structure as the one shown in your goal image.
Note that in both examples, the groupby() method is called with the as_index parameter left at its default of True. This means that the month and weekDay values are used as the index of the resulting Series, rather than as regular columns, which is why you do not see month and weekDay as columns in the output of groupby(). If you want to keep these values as columns, set the as_index parameter to False when calling groupby(), like this:
# Create a DataFrame with month and weekDay as columns
temp = uber_15.groupby(['month', "weekDay"], as_index=False).size()
With this change, the result (on recent pandas versions) is already a DataFrame with three columns named 'month', 'weekDay', and 'size', where the 'size' column contains the size of each group, so no further conversion is needed.
I hope this helps!
| converting series to dataframe using 'as_index' and 'reset_index' not working | I am trying to convert this series of data into dataframe using as_index = False inside groupby method. My goal is to show the total value for month and weekday.
My data
This is my main data uber-15.
Dispatching Pickup_date Affiliated locationID month weekDay day hour minute
0 B02617 2015-05-17 09:47:00 B02617 141 5 Sunday 17 9 47
1 B02617 2015-05-17 09:47:00 B02617 65 5 Sunday 17 9 47
From this I am extracting month and weekDay.
temp = uber_15.groupby(['month', "weekDay"]).size()
Next I am converting this series to dataframe using as_index.
temp = uber_15.groupby(['month', "weekDay"], as_index=False).size()
But the result is same when I use as_index=False but not working.
I also tried finding online solution where I find about reset_index but this there is column header with "0" which was supposed to be 'size' with reset_index.
temp = uber_15.groupby(['month', "weekDay"]).size().reset_index()
This the goal I am trying to achieve.
this is the output I am getting.
| [
"To convert a Pandas Series object to a DataFrame with columns named after the Series indices, you can use the to_frame() method on the Series object. This method converts the Series to a DataFrame with a single column, where the column name is the name of the Series index. Here is an example of how you can use this method to convert your Series object to a DataFrame:\n# Create a Series object with the size of each group\ntemp = uber_15.groupby(['month', \"weekDay\"]).size()\n\n# Convert the Series to a DataFrame\ntemp_df = temp.to_frame()\n\n# Rename the column to 'size'\ntemp_df.columns = ['size']\n\nAfter running this code, the DataFrame temp_df will have two columns named 'month' and 'weekDay', and a third column named 'size' containing the size of each group.\nAlternatively, you can use the reset_index() method on the Series object to convert the Series to a DataFrame, and then rename the 'level_0' column to 'size'. Here is an example of how you can do this:\n# Create a Series object with the size of each group\ntemp = uber_15.groupby(['month', \"weekDay\"]).size()\n\n# Convert the Series to a DataFrame and rename the 'level_0' column to 'size'\ntemp_df = temp.reset_index().rename(columns={'level_0': 'size'})\n\nIn this case, the resulting DataFrame will have the same structure as the one shown in your goal image.\nNote that in both examples, the groupby() method is called with the as_index parameter set to False by default. This means that the month and weekDay values will be used as indices in the resulting Series object, rather than as columns in the DataFrame. This is why you do not see the month and weekDay columns in the output of the groupby() method. If you want to include these values as columns in the DataFrame, you can set the as_index parameter to True when calling the groupby() method, like this:\nCopy code\n# Create a DataFrame with month and weekDay as columns\ntemp = uber_15.groupby(['month', \"weekDay\"], as_index=True).size()\n\nWith this change, the resulting DataFrame will have three columns named 'month', 'weekDay', and 'size', where the 'size' column contains the size of each group. You can then use the to_frame() or reset_index() method to convert this DataFrame to the final format you want.\nI hope this helps!\n"
] | [
1
] | [] | [] | [
"dataframe",
"jupyter_notebook",
"pandas",
"python"
] | stackoverflow_0074654704_dataframe_jupyter_notebook_pandas_python.txt |
Q:
Missing dataframe column percentage
I have a dataset with 21 columns; 2 of the columns have 25% missing values, and I am unsure whether to drop them or not.
Does it make sense to drop columns that have more than 20% of their data missing, or how can I determine the percentage of missing values at which to drop a column?
I dropped the columns that have 20% or more missing values. I would like to know the best way to determine this percentage threshold, for example: should I use 20%, 40%, or higher?
A:
One approach
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0, 5, 21 * 5).reshape(-1, 21)).replace({0: np.nan})
print('Original df\n',df)
df = df.loc[:, df.isna().sum().div(df.shape[0]).le(0.25)]
print('\nResult df without columns > 25% missing values\n',df)
Original df
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
0 2.0 NaN 3.0 NaN NaN 3.0 2.0 NaN 1.0 4.0 NaN 4.0 3.0 1.0 3.0 4.0 1.0 1.0 1.0 4.0 NaN
1 3.0 NaN 3.0 NaN 4.0 NaN 3.0 NaN NaN 1.0 NaN 2.0 1.0 4.0 NaN 2.0 3.0 NaN 4.0 NaN 2.0
2 4.0 NaN 2.0 NaN NaN 1.0 2.0 4.0 1.0 4.0 4.0 1.0 3.0 2.0 2.0 4.0 NaN NaN 2.0 3.0 4.0
3 2.0 NaN 4.0 4.0 3.0 3.0 2.0 NaN 2.0 3.0 4.0 NaN 1.0 4.0 NaN 2.0 2.0 3.0 3.0 4.0 1.0
4 4.0 NaN 4.0 3.0 4.0 1.0 4.0 NaN NaN NaN 2.0 2.0 NaN 2.0 2.0 2.0 NaN 2.0 NaN 2.0 NaN
Result df without columns > 25% missing values
0 2 5 6 9 11 12 13 15 18 19
0 2.0 3.0 3.0 2.0 4.0 4.0 3.0 1.0 4.0 1.0 4.0
1 3.0 3.0 NaN 3.0 1.0 2.0 1.0 4.0 2.0 4.0 NaN
2 4.0 2.0 1.0 2.0 4.0 1.0 3.0 2.0 4.0 2.0 3.0
3 2.0 4.0 3.0 2.0 3.0 NaN 1.0 4.0 2.0 3.0 4.0
4 4.0 4.0 1.0 4.0 NaN 2.0 NaN 2.0 2.0 NaN 2.0
A:
how can I determine the percentage of missing values
You might do it following way
import pandas as pd
df = pd.DataFrame({'X':[1,2,3],'Y':[4,5,None],'Z':[7,None,None]})
missing = df.isnull().mean() * 100
print(missing)
output
X 0.000000
Y 33.333333
Z 66.666667
dtype: float64
Explanation: .isnull() gives True or False; as these are treated as 1 and 0 when doing arithmetic, taking the mean gives a value from 0.0 (nothing missing) to 1.0 (all missing), which you multiply by 100 to get a percentage.
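If it helps to turn that percentage into a drop rule, a minimal sketch follows; the 25% threshold is an arbitrary choice for illustration, not a statistical rule.
import pandas as pd

df = pd.DataFrame({'X': [1, 2, 3], 'Y': [4, 5, None], 'Z': [7, None, None]})
threshold = 25  # percent of missing values you are willing to tolerate
missing = df.isnull().mean() * 100
df_reduced = df.loc[:, missing <= threshold]  # keeps only column X here
print(df_reduced.columns.tolist())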
| Missing dataframe column percentage | I have a dataset with 21 columns there are 2 columns that has 25% missing values, I'm reluctant to drop them or not?
Is it make sence to drop columns that has more than 20% of its data as missing, or how can I determine the percentage of missing values that decide to drop the column
I dropped the columns that have 20% or more missing values, I am expecting to know the best way to determine this percentage amount for example: should I use 20% or 40% or higher?
| [
"One approach\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randint(0, 5, 21 * 5).reshape(-1, 21)).replace({0: np.nan})\nprint('Original df\\n',df)\ndf = df.loc[:, df.isna().sum().div(df.shape[0]).le(0.25)]\nprint('\\nResult df without columns > 25% missing values\\n',df)\n\nOriginal df\n 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\n0 2.0 NaN 3.0 NaN NaN 3.0 2.0 NaN 1.0 4.0 NaN 4.0 3.0 1.0 3.0 4.0 1.0 1.0 1.0 4.0 NaN\n1 3.0 NaN 3.0 NaN 4.0 NaN 3.0 NaN NaN 1.0 NaN 2.0 1.0 4.0 NaN 2.0 3.0 NaN 4.0 NaN 2.0\n2 4.0 NaN 2.0 NaN NaN 1.0 2.0 4.0 1.0 4.0 4.0 1.0 3.0 2.0 2.0 4.0 NaN NaN 2.0 3.0 4.0\n3 2.0 NaN 4.0 4.0 3.0 3.0 2.0 NaN 2.0 3.0 4.0 NaN 1.0 4.0 NaN 2.0 2.0 3.0 3.0 4.0 1.0\n4 4.0 NaN 4.0 3.0 4.0 1.0 4.0 NaN NaN NaN 2.0 2.0 NaN 2.0 2.0 2.0 NaN 2.0 NaN 2.0 NaN\n\nResult df without columns > 25% missing values\n 0 2 5 6 9 11 12 13 15 18 19\n0 2.0 3.0 3.0 2.0 4.0 4.0 3.0 1.0 4.0 1.0 4.0\n1 3.0 3.0 NaN 3.0 1.0 2.0 1.0 4.0 2.0 4.0 NaN\n2 4.0 2.0 1.0 2.0 4.0 1.0 3.0 2.0 4.0 2.0 3.0\n3 2.0 4.0 3.0 2.0 3.0 NaN 1.0 4.0 2.0 3.0 4.0\n4 4.0 4.0 1.0 4.0 NaN 2.0 NaN 2.0 2.0 NaN 2.0\n\n",
"\nhow can I determine the percentage of missing values\n\nYou might do it following way\nimport pandas as pd\ndf = pd.DataFrame({'X':[1,2,3],'Y':[4,5,None],'Z':[7,None,None]})\nmissing = df.isnull().mean() * 100\nprint(missing)\n\noutput\nX 0.000000\nY 33.333333\nZ 66.666667\ndtype: float64\n\nExplanation: .isnull() gives True or False, as they are treated as 1 and 0 when doing arithemtic, getting mean will give value 0.0 (nothing missing) to 1.0 (all missing) which you need to multiply by 100 to get percentage.\n"
] | [
0,
0
] | [] | [] | [
"data_cleaning",
"dataframe",
"missing_data",
"pandas",
"python"
] | stackoverflow_0074654503_data_cleaning_dataframe_missing_data_pandas_python.txt |
Q:
Exponential Regression in Python
I have a set of x and y data and I want to use exponential regression to find the line that best fits those set of points. i.e.:
y = P1 + P2 exp(-P0 x)
I want to calculate the values of P0, P1 and P2.
I use a software "Igor Pro" that calculates the values for me, but want a Python implementation. I used the curve_fit function, but the values that I get are nowhere near the ones calculated by Igor software. Here is the sets of data that I have:
Set1:
x = [ 1.06, 1.06, 1.06, 1.06, 1.06, 1.06, 0.91, 0.91, 0.91 ]
y = [ 476, 475, 476.5, 475.25, 480, 469.5, 549.25, 548.5, 553.5 ]
Values calculated by Igor:
P1=376.91, P2=5393.9, P0=3.7776
Values calculated by curve_fit:
P1=702.45, P2=-13.33. P0=-2.6744
Set2:
x = [ 1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58]
y = [ 648, 618, 636, 485, 384, 639, 630, 583, 529]
Values calculated by Igor:
P1=321, P2=4848, P0=-1.94
Values calculated by curve_fit:
No optimal values found
I use curve_fit as follow:
from scipy.optimize import curve_fit
popt, pcov = curve_fit(lambda t, a, b, c: a * np.exp(-b * t) + c, x, y)
where:
P1=c, P2=a and P0=b
A:
Well, when comparing fit results, it is always important to include uncertainties in the fitted parameters. That is, when you say that the values
from Igor (P1=376.91, P2=5393.9, P0=3.7776), and from curve_fit
(P1=702.45, P2=-13.33. P0=-2.6744) are different, what is it that leads to conclude those values are actually different?
Of course, in everyday conversation, 376.91 and 702.45 are very different, mostly because simply stating a value to 2 decimal places implies accuracy at approximately that scale (the distance between New York and Tokyo is
10,850 km but is not really 1,084,702,431 cm -- that might be the distance between bus stops in the two cities). But when comparing fit results, that everyday knowledge cannot be assumed, and you have to include uncertainties. I don't know if Igor will give you those. scipy curve_fit can, but it requires some work to extract them -- a pity.
Allow me to recommend trying lmfit (disclaimer: I am an author). With that, you would set up and execute the fit like this:
import numpy as np
from lmfit import Model
x = [ 1.06, 1.06, 1.06, 1.06, 1.06, 1.06, 0.91, 0.91, 0.91 ]
y = [ 476, 475, 476.5, 475.25, 480, 469.5, 549.25, 548.5, 553.5 ]
# x = [ 1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58]
# y = [ 648, 618, 636, 485, 384, 639, 630, 583, 529]
# Define the function that we want to fit to the data
def func(x, offset, scale, decay):
return offset + scale * np.exp(-decay* x)
model = Model(func)
params = model.make_params(offset=375, scale=5000, decay=4)
result = model.fit(y, params, x=x)
print(result.fit_report())
This would print out the result of
[[Model]]
Model(func)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 49
# data points = 9
# variables = 3
chi-square = 72.2604167
reduced chi-square = 12.0434028
Akaike info crit = 24.7474672
Bayesian info crit = 25.3391410
R-squared = 0.99362489
[[Variables]]
offset: 413.168769 +/- 17348030.9 (4198775.95%) (init = 375)
scale: 16689.6793 +/- 1.3337e+10 (79909638.11%) (init = 5000)
decay: 5.27555726 +/- 1016721.11 (19272297.84%) (init = 4)
[[Correlations]] (unreported correlations are < 0.100)
C(scale, decay) = 1.000
C(offset, decay) = 1.000
C(offset, scale) = 1.000
indicating that the uncertainties in the parameter values are simply enormous and the correlations between all parameters are 1. This is because you have only 2 x values, which will make it impossible to accurately determine 3 independent variables.
And, note that with an uncertainty of 17 million, the values for P1 (offset) of 413 and 762 do actually agree. The problem is not that Igor and curve_fit disagree on the best value, it is that neither can determine the value with any accuracy at all.
For your other dataset, the situation is a little better, with a result:
[[Model]]
Model(func)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 82
# data points = 9
# variables = 3
chi-square = 1118.19957
reduced chi-square = 186.366596
Akaike info crit = 49.4002551
Bayesian info crit = 49.9919289
R-squared = 0.98272310
[[Variables]]
offset: 320.876843 +/- 42.0154403 (13.09%) (init = 375)
scale: 4797.14487 +/- 2667.40083 (55.60%) (init = 5000)
decay: 1.93560164 +/- 0.47764470 (24.68%) (init = 4)
[[Correlations]] (unreported correlations are < 0.100)
C(scale, decay) = 0.995
C(offset, decay) = 0.940
C(offset, scale) = 0.904
the correlations are still high, but the parameters are reasonably well determined. Also, note that the best-fit values here are much closer to those you got from Igor, and probably "within the uncertainty".
And this is why one always needs to include uncertainties with the best-fit values reported from a fit.
A:
Set 1 :
x = [ 1.06, 1.06, 1.06, 1.06, 1.06, 1.06, 0.91, 0.91, 0.91 ]
y = [ 476, 475, 476.5, 475.25, 480, 469.5, 549.25, 548.5, 553.5 ]
One observes that there are only two different values of x: 1.06 and 0.91.
On the other hand, there are three parameters to optimise: P0, P1 and P2. This is too much.
In other words, an infinity of exponential curves can be found to fit the two clusters of points. The differences between the curves can be due to slight differences in the non-linear regression computation, especially in the methods used to choose the initial values of the iterative process.
In this particular case a simple linear regression would be without ambiguity.
By comparison :
Thus both Igor and curve_fit give an excellent fit: the points are very close to both curves. One can see that infinitely many other exponential functions would fit as well.
Set 2 :
x = [ 1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58]
y = [ 648, 618, 636, 485, 384, 639, 630, 583, 529]
The difficulty that you meet might be due to the choice of "guessed" initial values of the parameters which are required to start the iterative process of nonlinear regression.
In order to check this hypothesis one can use a different method which doesn't need initial guessed values. The MathCad code and numerical calculus are shown below.
Don't be surprised if the values of the parameters that you get with your software are slightly different from the above values (a, b, c). The fitting criterion implicitly set in your software is probably different from the criterion set in my software.
Blue curve: the regression method is a least-mean-square-error fit with respect to a linear integral equation to which the exponential equation is a solution. Ref.: https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
This non-standard method isn't iterative and doesn't require initial "guessed" values of parameters.
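For completeness, a sketch of the original curve_fit call for set 2 with explicit initial guesses passed via p0; the guess values below are rough numbers read off the data, not canonical choices.
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58])
y = np.array([648, 618, 636, 485, 384, 639, 630, 583, 529])

def func(t, a, b, c):
    return a * np.exp(-b * t) + c  # P2 = a, P0 = b, P1 = c

# Rough guesses: large scale, positive decay, offset near the smallest y.
popt, pcov = curve_fit(func, x, y, p0=[5000, 2, 300], maxfev=10000)
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties
print(popt, perr)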
| Exponential Regression in Python | I have a set of x and y data and I want to use exponential regression to find the line that best fits those set of points. i.e.:
y = P1 + P2 exp(-P0 x)
I want to calculate the values of P0, P1 and P2.
I use a software "Igor Pro" that calculates the values for me, but want a Python implementation. I used the curve_fit function, but the values that I get are nowhere near the ones calculated by Igor software. Here is the sets of data that I have:
Set1:
x = [ 1.06, 1.06, 1.06, 1.06, 1.06, 1.06, 0.91, 0.91, 0.91 ]
y = [ 476, 475, 476.5, 475.25, 480, 469.5, 549.25, 548.5, 553.5 ]
Values calculated by Igor:
P1=376.91, P2=5393.9, P0=3.7776
Values calculated by curve_fit:
P1=702.45, P2=-13.33. P0=-2.6744
Set2:
x = [ 1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58]
y = [ 648, 618, 636, 485, 384, 639, 630, 583, 529]
Values calculated by Igor:
P1=321, P2=4848, P0=-1.94
Values calculated by curve_fit:
No optimal values found
I use curve_fit as follow:
from scipy.optimize import curve_fit
popt, pcov = curve_fit(lambda t, a, b, c: a * np.exp(-b * t) + c, x, y)
where:
P1=c, P2=a and P0=b
| [
"Well, when comparing fit results, it is always important to include uncertainties in the fitted parameters. That is, when you say that the values\nfrom Igor (P1=376.91, P2=5393.9, P0=3.7776), and from curve_fit\n(P1=702.45, P2=-13.33. P0=-2.6744) are different, what is it that leads to conclude those values are actually different?\nOf course, in everyday conversation, 376.91 and 702.45 are very different, mostly because simply stating a value to 2 decimal places implies accuracy at approximately that scale (the distance between New York and Tokyo is\n10,850 km but is not really 10,847,024,31 cm -- that might be the distance between bus stops in the two cities). But when comparing fit results, that everyday knowledge cannot be assumed, and you have to include uncertainties. I don't know if Igor will give you those. scipy curve_fit can, but it requires some work to extract them -- a pity.\nAllow me to recommend trying lmfit (disclaimer: I am an author). With that, you would set up and execute the fit like this:\nimport numpy as np\nfrom lmfit import Model\n \nx = [ 1.06, 1.06, 1.06, 1.06, 1.06, 1.06, 0.91, 0.91, 0.91 ]\ny = [ 476, 475, 476.5, 475.25, 480, 469.5, 549.25, 548.5, 553.5 ]\n# x = [ 1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58]\n# y = [ 648, 618, 636, 485, 384, 639, 630, 583, 529] \n\n# Define the function that we want to fit to the data\ndef func(x, offset, scale, decay):\n return offset + scale * np.exp(-decay* x)\n \nmodel = Model(func)\nparams = model.make_params(offset=375, scale=5000, decay=4)\n \nresult = model.fit(y, params, x=x)\n \nprint(result.fit_report())\n\nThis would print out the result of\n[[Model]]\n Model(func)\n[[Fit Statistics]]\n # fitting method = leastsq\n # function evals = 49\n # data points = 9\n # variables = 3\n chi-square = 72.2604167\n reduced chi-square = 12.0434028\n Akaike info crit = 24.7474672\n Bayesian info crit = 25.3391410\n R-squared = 0.99362489\n[[Variables]]\n offset: 413.168769 +/- 17348030.9 (4198775.95%) (init = 375)\n scale: 16689.6793 +/- 1.3337e+10 (79909638.11%) (init = 5000)\n decay: 5.27555726 +/- 1016721.11 (19272297.84%) (init = 4)\n[[Correlations]] (unreported correlations are < 0.100)\n C(scale, decay) = 1.000\n C(offset, decay) = 1.000\n C(offset, scale) = 1.000\n\nindicating that the uncertainties in the parameter values are simply enormous and the correlations between all parameters are 1. This is because you have only 2 x values, which will make it impossible to accurately determine 3 independent variables.\nAnd, note that with an uncertainty of 17 million, the values for P1 (offset) of 413 and 762 do actually agree. The problem is not that Igor and curve_fit disagree on the best value, it is that neither can determine the value with any accuracy at all.\nFor your other dataset, the situation is a little better, with a result:\n[[Model]]\n Model(func)\n[[Fit Statistics]]\n # fitting method = leastsq\n # function evals = 82\n # data points = 9\n # variables = 3\n chi-square = 1118.19957\n reduced chi-square = 186.366596\n Akaike info crit = 49.4002551\n Bayesian info crit = 49.9919289\n R-squared = 0.98272310\n[[Variables]]\n offset: 320.876843 +/- 42.0154403 (13.09%) (init = 375)\n scale: 4797.14487 +/- 2667.40083 (55.60%) (init = 5000)\n decay: 1.93560164 +/- 0.47764470 (24.68%) (init = 4)\n[[Correlations]] (unreported correlations are < 0.100)\n C(scale, decay) = 0.995\n C(offset, decay) = 0.940\n C(offset, scale) = 0.904\n\nthe correlations are still high, but the parameters are reasonably well determined. 
Also, note that the best-fit values here are much closer to those you got from Igor, and probably \"within the uncertainty\".\nAnd this is why one always needs to include uncertainties with the best-fit values reported from a fit.\n",
"Set 1 :\nx = [ 1.06, 1.06, 1.06, 1.06, 1.06, 1.06, 0.91, 0.91, 0.91 ]\ny = [ 476, 475, 476.5, 475.25, 480, 469.5, 549.25, 548.5, 553.5 ]\n\nOne observe that they are only two different values of x : 1.06 and 0.91\nOn the other hand they are three parameters to optimise : P0, P1 and P2. This is too much.\nIn other words an infinity of exponential curves can be found to fit the two clusters of points. The differences between the curves can be due to slight difference of the computation methods of non-linear regression especially due to the methods to chose the initial values of the iterative process.\nIn this particular case a simple linear regression would be without ambiguity.\nBy comparison :\n\nThus both Igor and Curve_fit give excellent fitting : The points are very close to both curves. One understand that infinity many other exponential fuctions would fit as well.\n\nSet 2 :\nx = [ 1.36, 1.44, 1.41, 1.745, 2.25, 1.42, 1.45, 1.5, 1.58]\ny = [ 648, 618, 636, 485, 384, 639, 630, 583, 529]\nThe difficulty that you meet might be due to the choice of \"guessed\" initial values of the parameters which are required to start the iterative process of nonlinear regression.\nIn order to check this hypothesis one can use a different method which doesn't need initial guessed values. The MathCad code and numerical calculus are shown below.\n\n\n\nDon't be surprised if the values of the parameters that you get with your software are slightly different from the above values (a, b, c). The criteria of fitting implicitly set in your software is probably different from the criteria of fitting set in my software.\n\nBlue curve : The method of regression is a Least Mean Square Errors wrt a linear integral equation to which the exponential equation is solution. Ref.: https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales\nThis non-standard method isn't iterative and doesn't require initial \"guessed\" values of parameters.\n"
] | [
0,
0
] | [
"It looks like the curve_fit function is not the right tool for this problem, because the function you are trying to fit your data to (y = P1 + P2 * exp(-P0 * x)) has three parameters, while curve_fit expects a function with only one parameter (the independent variable, in this case t). You can use curve_fit to fit a single-parameter function to your data by expressing y in terms of t and a single parameter, but this will not give you the values of P1, P2, and P0 that you are looking for.\nTo fit this particular function to your data, you will need to use a different approach. One possible approach is to define a function that takes x, P1, P2, and P0 as arguments and returns the corresponding value of y according to the equation above. You can then use a non-linear optimization algorithm to find the values of P1, P2, and P0 that minimize the difference between the predicted values of y and the observed values of y in your data. There are several Python libraries that provide non-linear optimization algorithms that you can use for this purpose, such as scipy.optimize and lmfit.\nHere is an example of how you could use scipy.optimize to fit your data to the function y = P1 + P2 * exp(-P0 * x):\nimport numpy as np\nfrom scipy.optimize import minimize\n\n# Define the function that we want to fit to the data\ndef func(x, P1, P2, P0):\n return P1 + P2 * np.exp(-P0 * x)\n\n# Define a function that takes the parameters of the function as arguments\n# and returns the sum of the squared differences between the predicted\n# and observed values of y\ndef objective(params):\n P1, P2, P0 = params\n y_pred = func(x, P1, P2, P0)\n return np.sum((y - y_pred) ** 2)\n\n# Define the initial values for the parameters\nparams_init = [376.91, 5393.9, 3.7776]\n\n# Use the minimize function to find the values of the parameters that\n# minimize the objective function\nresult = minimize(objective, params_init)\n\n# Print the optimized values of the parameters\nprint(result.x)\n\nThis code should give you the same values for P1, P2, and P0 as the ones calculated by the Igor software. You can then use the optimized values of the parameters to predict the value of y for any given value of x using the func function defined above.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
-1
] | [
"curve_fitting",
"exponential",
"non_linear_regression",
"python",
"scipy"
] | stackoverflow_0074647310_curve_fitting_exponential_non_linear_regression_python_scipy.txt |
Q:
ValueError: multi-line expressions are only valid in the context of data, use DataFrame.eval even after backslash
I am trying to run a multiline query using df.query but I seem to be getting the following error even after adding backslashes:
column = 'method'
idx = df.query(
f"""{column} == 'One' and \
number.notnull() and \
flag.isnull()""").index
My df looks like this:
df
'method' 'number' 'flag'
23 'One' 0 None
24 'One' 1 1
25 'Two' 1 None
I get this error:
ValueError: multi-line expressions are only valid in the context of data, use DataFrame.eval
I tried to use this answer to fix but am still getting the exact same error:
pandas dataframe multiline query
Can someone help explain why this does not work?
Thanks
A:
I would suggest avoiding df.query; it is easier and more reliable to use boolean masking to filter your data. Note also that the triple-quoted string keeps the indentation characters, which you should avoid.
Now, in your case,
Better syntax, with string concatenation:
column = 'method'
idx = df.query(
    f"{column} == 'One' and "
    "number.notnull() and "
    "flag.isnull()"
).index
A better solution, with filtering:
column = "method"
mask = (df[column] == "One") & df["number"].notna() & df["flag"].isna()
idx = df[mask].index
| ValueError: multi-line expressions are only valid in the context of data, use DataFrame.eval even after backslash | I am trying to run a multiline query using df.query but I seem to be getting the following error even after adding backslashes:
column = 'method'
idx = df.query(
f"""{column} == 'One' and \
number.notnull() and \
flag.isnull()""").index
My df looks like this:
df
'method' 'number' 'flag'
23 'One' 0 None
24 'One' 1 1
25 'Two' 1 None
I get this error:
ValueError: multi-line expressions are only valid in the context of data, use DataFrame.eval
I tried to use this answer to fix but am still getting the exact same error:
pandas dataframe multiline query
Can someone help explain why this does not work?
Thanks
| [
"I would suggest avoiding df.query, it is easier and more reliable to use the masking feature to filter your data. The triple quotes are also taking the indentation characters, you should avoid that.\nNow, in your case,\nBetter syntax, with string concatenation:\ncolumn = 'method'\n\nidx = df.query(\n f\"{column} == 'One' and \"\n \"number_col.notnull() and \"\n \"flag.isnull()\"\"\").index\"\n)\n\nA better solution, with filtering:\ncolumn = \"method\"\nmask = (df[column] == \"One\") & df[\"number\"].notna() & df[\"flag\"].isna()\nidx = df[mask].index\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074654274_pandas_python.txt |
Q:
How to add delay between loop python?
I want to ask: I tried this with my code but it is still not working. I want to execute 2 loops with a delay and print each word with a delay.
Here is my code:
import threading,time
def func_name1():
fruits = ["apple", "banana", "cherry"]
for i in fruits:
time.sleep(2)
print(i)
def func_name2():
fruits2 = ["1", "2", "3"]
for i in fruits2:
time.sleep(2)
print(i)
f1 = threading.Thread(target=func_name1)
f2 = threading.Thread(target=func_name2)
f1.start()
time.sleep(2)
f2.start()
The output looks like this:
apple
1
banana
2
cherry
3
I want a scenario like this:
A:
Try this.
I have updated my code; now it matches your scenario. The whole script takes approximately 25.03 seconds.
def func_name1():
fruits = ["apple", "banana", "cherry"]
a = 0
for i in fruits:
a += 1
print(i)
if a == len(fruits):
return
time.sleep(10)
def func_name2():
fruits2 = ["1", "2", "3"]
a = 0
for i in fruits2:
a +=1
print(i)
if a==len(fruits2):
return
time.sleep(10)
f1 = threading.Thread(target=func_name1)
f2 = threading.Thread(target=func_name2)
f1.start()
time.sleep(5)
f2.start()
apple
1
banana
2
cherry
3
To check the whole execution time, use this code:
import threading,time
t = time.time()
def func_name1():
fruits = ["apple", "banana", "cherry"]
a = 0
for i in fruits:
a += 1
print(i)
if a == len(fruits):
return
time.sleep(10)
def func_name2():
fruits2 = ["1", "2", "3"]
a = 0
for i in fruits2:
a +=1
print(i)
if a==len(fruits2):
return
time.sleep(10)
f1 = threading.Thread(target=func_name1)
f2 = threading.Thread(target=func_name2)
f1.start()
time.sleep(5)
f2.start()
f2.join()
print(time.time() - t)
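As a side note, if nothing else has to run concurrently and the goal is just the alternating output with a fixed delay, a single loop without threads is a simpler sketch; the 5-second delay is an assumption matching the scenario above.
import time
from itertools import chain

fruits = ["apple", "banana", "cherry"]
fruits2 = ["1", "2", "3"]

# Interleave the two lists and print one item every 5 seconds
# (this also sleeps once after the last item).
for item in chain.from_iterable(zip(fruits, fruits2)):
    print(item)
    time.sleep(5)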
| How to add delay between loop python? | i want to ask , i try with my coding but it still not working, i want to execute 2 loop with delay and print each word with delay.
here my coding
import threading,time
def func_name1():
fruits = ["apple", "banana", "cherry"]
for i in fruits:
time.sleep(2)
print(i)
def func_name2():
fruits2 = ["1", "2", "3"]
for i in fruits2:
time.sleep(2)
print(i)
f1 = threading.Thread(target=func_name1)
f2 = threading.Thread(target=func_name2)
f1.start()
time.sleep(2)
f2.start()
output like this
apple
1
banana
2
cherry
3
i want scenario like this
| [
"Try this.\nI have updated my code Now it matches your scenario. The whole Script takes approximatly 25.03 seconds.\ndef func_name1():\n fruits = [\"apple\", \"banana\", \"cherry\"]\n a = 0\n for i in fruits:\n a += 1\n print(i)\n if a == len(fruits):\n return\n time.sleep(10)\n\ndef func_name2():\n fruits2 = [\"1\", \"2\", \"3\"]\n a = 0\n for i in fruits2:\n a +=1\n print(i)\n if a==len(fruits2):\n return\n time.sleep(10)\n\nf1 = threading.Thread(target=func_name1)\nf2 = threading.Thread(target=func_name2)\nf1.start()\ntime.sleep(5)\nf2.start()\n\napple\n1\nbanana\n2\ncherry\n3\n\n\nTo check the whole execution time. Use this code.\n\nimport threading,time\n\nt = time.time()\n\ndef func_name1():\n fruits = [\"apple\", \"banana\", \"cherry\"]\n a = 0\n for i in fruits:\n a += 1\n print(i)\n if a == len(fruits):\n return\n time.sleep(10)\n\ndef func_name2():\n fruits2 = [\"1\", \"2\", \"3\"]\n a = 0\n for i in fruits2:\n a +=1\n print(i)\n if a==len(fruits2):\n return\n time.sleep(10)\n\nf1 = threading.Thread(target=func_name1)\nf2 = threading.Thread(target=func_name2)\nf1.start()\ntime.sleep(5)\nf2.start()\n\nf2.join()\n\nprint(time.time() - t)\n\n\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074654712_python_python_3.x.txt |
Q:
Simple Captcha Solver with Python
I'm reaching out to you to get some help and advice on creating a "Captcha Solver" using Python and any image-to-text detection package.
This is an example of the captcha (it contains only 4 character and its always numbers):
I am not sure if I should use a complex solver with AI, a CNN and machine learning, or just something simpler, but I feel like I can't find a good tutorial... Instead I just find companies selling packages of captcha-solving services...
Thanks in any case for the time and advice,
Daniel
I have tried to use these :
https://github.com/ptigas/simple-captcha-solver
https://gist.github.com/lobstrio/8010d0a21c48b8c807f0c3820467ee0c
https://github.com/cracker0dks/CaptchaSolver
A:
I would recommend you use Tesseract or Tesseract.js. You will find plenty of useful tutorials and articles on how to use Tesseract. You might also want to explore some additional algorithms to reduce the noise in the image.
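A rough sketch of that idea with pytesseract and OpenCV is below; it assumes both packages and the Tesseract binary are installed, and the file name captcha.png is a placeholder. Since the captcha is always 4 digits, restricting the character set usually helps.
import cv2
import pytesseract

img = cv2.imread("captcha.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
img = cv2.medianBlur(img, 3)  # light noise reduction
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarise

# psm 7: treat the image as a single line of text; whitelist digits only.
text = pytesseract.image_to_string(
    img, config="--psm 7 -c tessedit_char_whitelist=0123456789"
)
print(text.strip())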
| Simple Captcha Solver with Python | I'm reaching to you to get some help and advices on creating a "Captcha Solver" using python and any image detection to text package
This is an example of the captcha (it contains only 4 character and its always numbers):
I am not sure if I should use a complex solver with AI and CNN and Machine Learning or just something more simple but I feel like I can't find a good tutorial... Instread I just find compagnies selling a package of multiple captcha solving...
Thanks in any case for the time and advice,
Daniel
I have tried to use these :
https://github.com/ptigas/simple-captcha-solver
https://gist.github.com/lobstrio/8010d0a21c48b8c807f0c3820467ee0c
https://github.com/cracker0dks/CaptchaSolver
| [
"I would recommend you use Tesseract or Tesseract.JS. You will find plenty of useful tutorials and articles on how to use Tesseract. you might wanna explore some additional Algorithms to reduce the noise in the image.\n"
] | [
0
] | [] | [] | [
"captcha",
"python",
"python_tesseract"
] | stackoverflow_0074642350_captcha_python_python_tesseract.txt |
Q:
WebDriverException Message: 'chromedriver' executable needs to be in PATH ( Error on mac M1)
I am trying web scraping with Selenium and am following the code below.
However, I encounter an error with the chromedriver path that I am unable to figure out on a Mac M1. I've tried several methods to solve this.
Any hints?
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
chromeOptions = Options()
chromeOptions.headless = False
s = Service("usr/local/bin/chromedriver")
driver = webdriver.Chrome(service= s, options = chromeOptions )
I am getting the below error:
---------------------------------------------------------------------------
SeleniumManagerException Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in start(self)
95 try:
---> 96 path = SeleniumManager().driver_location(browser)
97 except WebDriverException as new_err:
7 frames
SeleniumManagerException: Message: Selenium manager failed for: /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager --browser chrome. /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.29' not found (required by /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager)
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager)
During handling of the above exception, another exception occurred:
WebDriverException Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in _start_process(self, path)
210 except OSError as err:
211 if err.errno == errno.ENOENT:
--> 212 raise WebDriverException(
213 f"'{os.path.basename(self.path)}' executable needs to be in PATH. {self.start_error_message}"
214 )
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://chromedriver.chromium.org/home
A:
You also need to provide the file name of the webdriver in the path, so in your case:
usr/local/bin/chromedriver
should be like:
usr/local/bin/chromedriver/chromedriver
if the name of the folder matches the webdriver file name.
At least according to the error it seems like chromedriver is a folder where you have placed the real webdriver file.
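It may also be worth double-checking the path itself: "usr/local/bin/chromedriver" has no leading slash, so it is a relative path. A hedged sketch of two common fixes, assuming /usr/local/bin/chromedriver is actually the executable file:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Absolute path to the chromedriver executable (note the leading slash).
s = Service("/usr/local/bin/chromedriver")
driver = webdriver.Chrome(service=s)

# Or let webdriver-manager download a matching driver (pip install webdriver-manager):
# from webdriver_manager.chrome import ChromeDriverManager
# s = Service(ChromeDriverManager().install())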
| WebDriverException Message: 'chromedriver' executable needs to be in PATH ( Error on mac M1) | I am trying web scrapping with selenium and therefore following the below code.
However, I encounter an error with chromedriver path, I am unable to figure out on mac M1 . I've tried several methods to solve this.
Any hints?
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
chromeOptions = Options()
chromeOptions.headless = False
s = Service("usr/local/bin/chromedriver")
driver = webdriver.Chrome(service= s, options = chromeOptions )
I am getting the below error:
---------------------------------------------------------------------------
SeleniumManagerException Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in start(self)
95 try:
---> 96 path = SeleniumManager().driver_location(browser)
97 except WebDriverException as new_err:
7 frames
SeleniumManagerException: Message: Selenium manager failed for: /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager --browser chrome. /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.29' not found (required by /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager)
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/linux/selenium-manager)
During handling of the above exception, another exception occurred:
WebDriverException Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in _start_process(self, path)
210 except OSError as err:
211 if err.errno == errno.ENOENT:
--> 212 raise WebDriverException(
213 f"'{os.path.basename(self.path)}' executable needs to be in PATH. {self.start_error_message}"
214 )
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://chromedriver.chromium.org/home
| [
"You need to provide also the file name of the webdriver in the path, so in your case:\nusr/local/bin/chromedriver\n\nshould be like:\nusr/local/bin/chromedriver/chromedriver \n\nif the name of the folder matches the webdriver file name.\nAt least according to the error it seems like chromedriver is a folder where you have placed the real webdriver file.\n"
] | [
0
] | [] | [] | [
"python",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0074654297_python_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
Find element with compound class in Selenium
I can see some posts about this topic but unfortunately none worked in my case. I am trying to locate elements with compound classes in their name. This is the class of the elements:
class="group-header__wrapper is-grid-view-active section--prematch markets-optimized--3"
I tried with this line of code, which is not working:
containers = SBdriver.find_elements(By.CSS_SELECTOR,".group-header.wrapper.is-grid-view-active.section.prematch.markets-optimized--3")
Am I right to assume the class attribute consists of 3 separate classes, or are there more? This is my entire code so far:
from datetime import datetime
from lib2to3.pgen2 import driver
from os import pardir
import time
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from urllib.parse import urlparse, parse_qs
import re
import pandas as pd
import numpy as np
from selenium.webdriver.remote.webelement import BaseWebElement
from googletrans import Translator
import time
import schedule
SB_datalist = []
SBdriver = webdriver.Chrome('C:/Users/tmarkac/source/repos/chromedriver.exe')
superbet_ufootball_url= 'https://superbet.pl/zaklady-bukmacherskie/pilka-nozna'
SBdriver.get(superbet_ufootball_url)
SBdriver.find_element(By.XPATH,'//*[@id="onetrust-accept-btn-handler"]').click()
containers = SBdriver.find_elements(By.CSS_SELECTOR,".group-header__wrapper.is-grid-view-active.section--prematch.markets-optimized--3")
SB_datalist.append(containers)
print("Cont: ",SB_datalist)
A:
EDIT:
Tested your code and you can select the elements you want with this...
SBdriver.find_elements(By.CSS_SELECTOR, ".group-header__wrapper.section--prematch.markets-optimized--3")
you need it constructed like this
.group-header__wrapper.is-grid-view-active.section--prematch.markets-optimized--3
You have a . where the __ should be, but that is incorrect; the following is a single class name:
group-header__wrapper
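If the list still comes back empty, the containers may not be rendered yet when find_elements runs; a sketch with an explicit wait is below (the 10-second timeout is an arbitrary choice).
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

selector = ".group-header__wrapper.section--prematch.markets-optimized--3"
containers = WebDriverWait(SBdriver, 10).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, selector))
)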
| Find element with compound class in Selenium | I can see some posts about this topic but unfortunately, none worked in my case. I am trying to locate elements with compounded classes in its name. This is the name of the elements class:
class="group-header__wrapper is-grid-view-active section--prematch markets-optimized--3"
I tried with this line of code, which is not working:
containers = SBdriver.find_elements(By.CSS_SELECTOR,".group-header.wrapper.is-grid-view-active.section.prematch.markets-optimized--3")
Am I right to assume the class consists of 3 other classes or there are more? This is my entire code so far:
from datetime import datetime
from lib2to3.pgen2 import driver
from os import pardir
import time
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from urllib.parse import urlparse, parse_qs
import re
import pandas as pd
import numpy as np
from selenium.webdriver.remote.webelement import BaseWebElement
from googletrans import Translator
import time
import schedule
SB_datalist = []
SBdriver = webdriver.Chrome('C:/Users/tmarkac/source/repos/chromedriver.exe')
superbet_ufootball_url= 'https://superbet.pl/zaklady-bukmacherskie/pilka-nozna'
SBdriver.get(superbet_ufootball_url)
SBdriver.find_element(By.XPATH,'//*[@id="onetrust-accept-btn-handler"]').click()
containers = SBdriver.find_elements(By.CSS_SELECTOR,".group-header__wrapper.is-grid-view-active.section--prematch.markets-optimized--3")
SB_datalist.append(containers)
print("Cont: ",SB_datalist)
| [
"EDIT:\nTested your code and you can select the elements you want with this...\nSBdriver.find_elements(By.CSS_SELECTOR, \".group-header__wrapper.section--prematch.markets-optimized--3\")\n\nyou need it constructed like this\n.group-header__wrapper.is-grid-view-active.section--prematch.markets-optimized--3\n\nyou have a . where the __ is but that is incorrect that is one class name\ngroup-header__wrapper\n"
] | [
1
] | [] | [] | [
"python",
"selenium",
"web_scraping"
] | stackoverflow_0074654803_python_selenium_web_scraping.txt |
Q:
Python networkx graph appears jumbled when drawn
Note:
I already tried solution in
Python networkx graph appears jumbled when drawn in matplotlib
But it didn't work. As you can see below, I set the positions at the end, but the graph still appears jumbled.
Question:
I have a flow A->B->C->D->E->F->G->H. Generally the directed graph should form a circle. But despite lots of effort, I am not able to achieve the desired result. Please find the code and the output below.
import pandas as pd
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import pylab
refJourney = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
edgelis=[]
for i in range(len(refJourney)):
edgesValue = refJourney[i:i+2]
if len(edgesValue)>1:
edgelis.append((edgesValue[0],edgesValue[1]))
G = nx.DiGraph()
G.add_edges_from(edgelis, weight=0, length = 1)
val_map = {refJourney[0]: 1.0,
refJourney[-1]: 0.5}
values = [val_map.get(node, 0) for node in G.nodes()]
edge_labels=dict([((u,v,),d['weight'])
for u,v,d in G.edges(data=True)])
plt.figure(3,figsize=(24,24))
pos=nx.spring_layout(G)
nx.draw_networkx_edge_labels(G,pos,edge_labels=edge_labels)
nx.draw(G,pos,node_shape = 'D', node_color = values, node_size=15000,edge_cmap=plt.cm.Reds, font_color="whitesmoke", with_labels = True, font_size=10)
pylab.show()
How to solve the issue?
A:
Generally the directed graph should form a circle.
That is not the lowest energy configuration for a path graph with a spring node layout. If you do want a circular node layout, there is nx.circular_layout. However, it (also) doesn't reduce edge crossings.
If you are open to using other libraries, netgraph works well in combination with networkx and does implement a circular layout with reduced edge crossings (github, documentation).
#!/usr/bin/env python
"""
Plot a path network with circular layout.
"""
import matplotlib.pyplot as plt
import networkx as nx
from netgraph import Graph, get_circular_layout # pip install netgraph
nodes = 'abcdefg'
edges = list(zip(nodes[:-1], nodes[1:]))
g = nx.Graph(edges)
fig, axes = plt.subplots(1,2)
Graph(g, node_layout='circular', ax=axes[0])
node_layout = get_circular_layout(edges)
nx.draw(g, pos=node_layout, ax=axes[1])
axes[1].set_aspect('equal')
plt.show()
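Sticking with networkx only, a minimal sketch that swaps the spring layout for nx.circular_layout in the original code (the remaining drawing calls are unchanged):
pos = nx.circular_layout(G)  # nodes placed on a circle in insertion order
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
nx.draw(G, pos, node_shape='D', node_color=values, node_size=15000,
        font_color="whitesmoke", with_labels=True, font_size=10)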
| Python networkx graph appears jumbled when drawn | Note:
I already tried solution in
Python networkx graph appears jumbled when drawn in matplotlib
But it didnt work. As you can see below, i placed the position in the end but still the graph appears to be jumbled.
Question:
I have a flow A->B->C->D->E->F->G->H. Generally the directed graph should form a circle. But despite lots of effort, i am not able to achieve desired result. Please find the below code and the output
import pandas as pd
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import pylab
refJourney = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
edgelis=[]
for i in range(len(refJourney)):
edgesValue = refJourney[i:i+2]
if len(edgesValue)>1:
edgelis.append((edgesValue[0],edgesValue[1]))
G = nx.DiGraph()
G.add_edges_from(edgelis, weight=0, length = 1)
val_map = {refJourney[0]: 1.0,
refJourney[-1]: 0.5}
values = [val_map.get(node, 0) for node in G.nodes()]
edge_labels=dict([((u,v,),d['weight'])
for u,v,d in G.edges(data=True)])
plt.figure(3,figsize=(24,24))
pos=nx.spring_layout(G)
nx.draw_networkx_edge_labels(G,pos,edge_labels=edge_labels)
nx.draw(G,pos,node_shape = 'D', node_color = values, node_size=15000,edge_cmap=plt.cm.Reds, font_color="whitesmoke", with_labels = True, font_size=10)
pylab.show()
How to solve the issue?
| [
"\nGenerally the directed graph should form a circle.\n\nThat is not the lowest energy configuration for a path graph with a spring node layout. If you do want a circular node layout, there is nx.circular_layout. However, it (also) doesn't reduce edge crossings.\nIf you are open to using other libraries, netgraph works well in combination with networkx and does implement a circular layout with reduced edge crossings (github, documentation).\n\n#!/usr/bin/env python\n\"\"\"\nPlot a path network with circular layout.\n\"\"\"\nimport matplotlib.pyplot as plt\nimport networkx as nx\n\nfrom netgraph import Graph, get_circular_layout # pip install netgraph\n\nnodes = 'abcdefg'\nedges = list(zip(nodes[:-1], nodes[1:]))\ng = nx.Graph(edges)\n\nfig, axes = plt.subplots(1,2)\nGraph(g, node_layout='circular', ax=axes[0])\n\nnode_layout = get_circular_layout(edges)\nnx.draw(g, pos=node_layout, ax=axes[1])\naxes[1].set_aspect('equal')\nplt.show()\n\n"
] | [
0
] | [] | [] | [
"graph",
"networkx",
"python"
] | stackoverflow_0074625413_graph_networkx_python.txt |
Q:
How do I sort a dataframe by distance from zero value
I'll start by saying I am a Python beginner, I did try and find an answer to this via similar questions but I'm struggling to grasp some of the solutions in order to tailor them for my own use.
If I have a Pandas dataframe as follows:
What code would I need in order to sort it as per the below whilst excluding the 0 value.
I would ideally want to grab the value closest to zero (assuming this is possible).
A:
IIUC, you can set abs as a key parameter of pandas.DataFrame.sort_values.
Try this :
out = df.sort_values(by="Score", key=abs)
# Output :
print(out)
Name Score
3 maggie 0
2 sally -5
1 jane -10
4 peter 15
6 andy 25
0 bob -30
5 mike 50
A:
You can use df.sort_values() and pass abs as a parameter. This will sort by the absolute value but leave the values themselves unchanged:
import pandas as pd
df = pd.DataFrame({'Name': ['Bob', 'Jane', 'Sally', 'Maggie', 'Peter', 'Mike', 'Andy'],
'Score': [-30, -10, -5, 0, 15, 50, 25]})
df.sort_values('Score', key = abs)
Output:
Name     Score
Maggie       0
Sally       -5
Jane       -10
Peter       15
Andy        25
Bob        -30
Mike        50
This also works:
df.reindex(df['Score'].abs().sort_values().index)
See here for more:
Sorting by absolute value without changing the data
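To also cover the "closest to zero, excluding the 0 value" part of the question, a small sketch on the same df:
nonzero = df[df['Score'] != 0]
closest = nonzero.loc[nonzero['Score'].abs().idxmin()]
print(closest)  # Sally, -5: the non-zero score nearest to zero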
| How do I sort a dataframe by distance from zero value | I'll start by saying I am a Python beginner, I did try and find an answer to this via similar questions but I'm struggling to grasp some of the solutions in order to tailor them for my own use.
If I have a Pandas dataframe as follows:
What code would I need in order to sort it as per the below whilst excluding the 0 value.
I would ideally want to grab the value closest to zero (assuming this is possible).
| [
"IIUC, you can set abs as a key parameter of pandas.DataFrame.sort_values.\nTry this :\nout = df.sort_values(by=\"Score\", key=abs)\n\n# Output :\nprint(out)\n\n Name Score\n3 maggie 0\n2 sally -5\n1 jane -10\n4 peter 15\n6 andy 25\n0 bob -30\n5 mike 50\n\n",
"You can use df.sort_values() and pass abs as a parameter. This will sort by the absolute value but leave the values themselves unchanged:\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Bob', 'Jane', 'Sally', 'Maggie', 'Peter', 'Mike', 'Andy'],\n 'Score': [-30, -10, -5, 0, 15, 50, 25]})\n\ndf.sort_values('Score', key = abs)\n\nOutput:\n\n\n\n\nName\nScore\n\n\n\n\nMaggie\n0\n\n\nSally\n-5\n\n\nJane\n-10\n\n\nPeter\n15\n\n\nAndy\n25\n\n\nBob\n-30\n\n\nMike\n50\n\n\n\n\nThis also works:\ndf.reindex(df['Score'].abs().sort_values().index)\n\nSee here for more:\nSorting by absolute value without changing the data\n"
] | [
2,
1
] | [] | [] | [
"dataframe",
"python",
"sorting"
] | stackoverflow_0074654696_dataframe_python_sorting.txt |
Q:
How to iterate through list of dictionary, extract values and fill in another data dictionary in python
I have a list of dictionaries as below.
[
{'name':['mallesh'],'email':['[email protected]']},
{'name':['bhavik'],'ssn':['1000011']},
{'name':['jagarini'],'email':['[email protected]'],'phone':['111111']},
{'name':['mallesh'],'email':['[email protected]'],'phone':['1234556'],'ssn':['10000012']}
]
I would like to extract the information from these dictionaries based on keys and hold that information in another dictionary, as follows:
xml_master_dict={'name':[],'email':[],'phone':[],'ssn':[]}
Here xml_master_dict should be filled in with the respective key information as below.
In the first dictionary we have this:
{'name':['mallesh'],'email':['[email protected]']}
In xml_master_dict, only the name and email keys will be updated with the current values; if a key does not exist in the dictionary, it should be filled in with None. In this case phone and ssn will be None.
Here is an expected output:
{
'name':['mallesh','bhavik','jagarini','mallesh'],
'email':['[email protected]',None,'[email protected]','[email protected]'],
'phone':[None,None,'111111','1234556'],
'ssn':[None,'1000011',None,'10000012'],
}
pd.DataFrame({
'name':['mallesh','bhavik','jagarini','mallesh'],
'email':['[email protected]',None,'[email protected]','[email protected]'],
'phone':[None,None,'111111','1234556'],
'ssn':[None,'1000011',None,'10000012'],
})
A:
Here is one way you could accomplish this using a for loop and the append method of the lists in the dictionary:
data = [
{'name': ['mallesh'], 'email': ['[email protected]']},
{'name': ['bhavik'], 'ssn': ['1000011']},
{'name': ['jagarini'], 'email': ['[email protected]'], 'phone': ['111111']},
{'name': ['mallesh'], 'email': ['[email protected]'], 'phone': ['1234556'], 'ssn': ['10000012']}
]
# create the xml_master_dict with empty lists for each key
xml_master_dict = {'name':[], 'email':[], 'phone':[], 'ssn':[]}
# loop through the list of dictionaries
for item in data:
# loop through the keys in xml_master_dict
for key in xml_master_dict.keys():
# if the key exists in the current dictionary, append its value to the xml_master_dict
if key in item:
xml_master_dict[key].append(item[key])
# if the key does not exist in the current dictionary, append None to the xml_master_dict
else:
xml_master_dict[key].append(None)
# print the xml_master_dict to see the resulting values
print(xml_master_dict)
This code will produce the following output:
{'name': [['mallesh'], ['bhavik'], ['jagarini'], ['mallesh']],
'email': [['[email protected]'], None, ['[email protected]'], ['[email protected]']],
'phone': [None, None, ['111111'], ['1234556']],
'ssn': [None, ['1000011'], None, ['10000012']]}
You can then use this dictionary to create a DataFrame using the pd.DataFrame function from the Pandas library. For example:
import pandas as pd
# Create a DataFrame from the xml_master_dict
df = pd.DataFrame(xml_master_dict)
# Print the DataFrame
print(df)
This code will produce the following output:
name email phone ssn
0 [mallesh] [[email protected]] None None
1 [bhavik] None None [1000011]
2 [jagarini] [[email protected]] [111111] None
3 [mallesh] [[email protected]] [1234556] [10000012]
A:
You can define a function to get the first element of a dictionary value (or None if the key doesn't exist):
def first_elem_of_value(record: dict, key: str):
try:
return record[key][0]
except KeyError:
return None
and then build the master dict with a single comprehension:
xml_master_dict = {
key: [
first_elem_of_value(record, key)
for record in data
]
for key in ('name', 'email', 'phone', 'ssn')
}
>>> xml_master_dict
{'name': ['mallesh', 'bhavik', 'jagarini', 'mallesh'], 'email': ['[email protected]', None, '[email protected]', '[email protected]'], 'phone': [None, None, '111111', '1234556'], 'ssn': [None, '1000011', None, '10000012']}
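As a short usage note (not part of the original answer), the comprehension-built dict can be passed straight to pandas to get the frame shown in the question:
import pandas as pd

df = pd.DataFrame(xml_master_dict)  # columns: name, email, phone, ssn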
| How to iterate through list of dictionary, extract values and fill in another data dictionary in python | I have a list of dictionary as below.
[
{'name':['mallesh'],'email':['[email protected]']},
{'name':['bhavik'],'ssn':['1000011']},
{'name':['jagarini'],'email':['[email protected]'],'phone':['111111']},
{'name':['mallesh'],'email':['[email protected]'],'phone':['1234556'],'ssn':['10000012']}
]
I would like to extract the information from these dictionary based on keys, hold on its information in another dictionary as.
xml_master_dict={'name':[],'email':[],'phone':[],'ssn':[]}
Here xml_master_dict should be filled in with the respective key information as below.
In a fist dictionary we have this:
{'name':['mallesh'],'email':['[email protected]']}
In xml_master_dict name and email keys only will be updated with the current value, if any of key is not existed in the dictionary it should be filled in with None. in this case phone and ssn will be None
Here is an expected output:
{
'name':['mallesh','bhavik','jagarini','mallesh'],
'email':['[email protected]',None,'[email protected]','[email protected]'],
'phone':[None,None,'111111','1234556'],
'ssn':[None,'1000011',None,'10000012'],
}
pd.DataFrame({
'name':['mallesh','bhavik','jagarini','mallesh'],
'email':['[email protected]',None,'[email protected]','[email protected]'],
'phone':[None,None,'111111','1234556'],
'ssn':[None,'1000011',None,'10000012'],
})
| [
"Here is one way you could accomplish this using a for loop and the update method of the dictionary:\ndata = [\n {'name': ['mallesh'], 'email': ['[email protected]']},\n {'name': ['bhavik'], 'ssn': ['1000011']},\n {'name': ['jagarini'], 'email': ['[email protected]'], 'phone': ['111111']},\n {'name': ['mallesh'], 'email': ['[email protected]'], 'phone': ['1234556'], 'ssn': ['10000012']}\n]\n\n# create the xml_master_dict with empty lists for each key\nxml_master_dict = {'name':[], 'email':[], 'phone':[], 'ssn':[]}\n\n# loop through the list of dictionaries\nfor item in data:\n # loop through the keys in xml_master_dict\n for key in xml_master_dict.keys():\n # if the key exists in the current dictionary, append its value to the xml_master_dict\n if key in item:\n xml_master_dict[key].append(item[key])\n # if the key does not exist in the current dictionary, append None to the xml_master_dict\n else:\n xml_master_dict[key].append(None)\n\n# print the xml_master_dict to see the resulting values\nprint(xml_master_dict)\n\nThis code will produce the following output:\n{'name': [['mallesh'], ['bhavik'], ['jagarini'], ['mallesh']], \n'email': [['[email protected]'], None, ['[email protected]'], ['[email protected]']], \n'phone': [None, None, ['111111'], ['1234556']], \n'ssn': [None, ['1000011'], None, ['10000012']]}\n\nYou can then use this dictionary to create a DataFrame using the pd.DataFrame function from the Pandas library. For example:\nimport pandas as pd\n\n# Create a DataFrame from the xml_master_dict\ndf = pd.DataFrame(xml_master_dict)\n\n# Print the DataFrame\nprint(df)\n\nThis code will produce the following output:\n name email phone ssn\n0 [mallesh] [[email protected]] None None\n1 [bhavik] None None [1000011]\n2 [jagarini] [[email protected]] [111111] None\n3 [mallesh] [[email protected]] [1234556] [10000012]\n\n",
"You can define a function to get the first element of a dictionary value (or None if the key doesn't exist):\ndef first_elem_of_value(record: dict, key: str):\n try:\n return record[key][0]\n except KeyError:\n return None\n\nand then build the master dict with a single comprehension:\nxml_master_dict = {\n key: [\n first_elem_of_value(record, key)\n for record in data\n ]\n for key in ('name', 'email', 'phone', 'ssn')\n}\n\n>>> xml_master_dict\n{'name': ['mallesh', 'bhavik', 'jagarini', 'mallesh'], 'email': ['[email protected]', None, '[email protected]', '[email protected]'], 'phone': [None, None, '111111', '1234556'], 'ssn': [None, '1000011', None, '10000012']}\n\n"
] | [
2,
1
] | [] | [] | [
"dictionary",
"pandas",
"python"
] | stackoverflow_0074650064_dictionary_pandas_python.txt |
Q:
Gunicorn/Nginx/Flask not playing together well
I have been trying for the last couple of days to build an nginx/gunicorn/flask stack in Puppet to deploy repeatedly in our environment. Unfortunately, I am coming up short at the last moment and could really use some help. I have dumped anything I thought relevant below; if anyone can lend a hand it would be very helpful!
gunicorn cli errors
(pyvenv) [root@guadalupe project1]# gunicorn wsgi:application
[2022-12-01 15:07:29 -0700] [13060] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:07:29 -0700] [13060] [INFO] Listening at: http://127.0.0.1:8000 (13060)
[2022-12-01 15:07:29 -0700] [13060] [INFO] Using worker: sync
[2022-12-01 15:07:29 -0700] [13063] [INFO] Booting worker with pid: 13063
^C[2022-12-01 15:08:01 -0700] [13060] [INFO] Handling signal: int
[2022-12-01 15:08:01 -0700] [13063] [INFO] Worker exiting (pid: 13063)
[2022-12-01 15:08:01 -0700] [13060] [INFO] Shutting down: Master
(pyvenv) [root@guadalupe project1]# gunicorn wsgi:application -b project1.sock
[2022-12-01 15:08:09 -0700] [13067] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:08:09 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:10 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:11 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:12 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:13 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:14 -0700] [13067] [ERROR] Can't connect to ('project1.sock', 8000)
gunicorn debug log from running
$ gunicorn wsgi:application -b project1.sock --error-logfile error.log --log-level 'debug'
[2022-12-01 15:28:04 -0700] [16349] [DEBUG] Current configuration:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['project1.sock']
backlog: 2048
workers: 1
worker_class: sync
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 30
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /home/bit-web/pyvenv/project1
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 0
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: error.log
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: wsgi:application
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7f7b0fa6eae8>
on_reload: <function OnReload.on_reload at 0x7f7b0fa6ebf8>
when_ready: <function WhenReady.when_ready at 0x7f7b0fa6ed08>
pre_fork: <function Prefork.pre_fork at 0x7f7b0fa6ee18>
post_fork: <function Postfork.post_fork at 0x7f7b0fa6ef28>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f7b0fa860d0>
worker_int: <function WorkerInt.worker_int at 0x7f7b0fa861e0>
worker_abort: <function WorkerAbort.worker_abort at 0x7f7b0fa862f0>
pre_exec: <function PreExec.pre_exec at 0x7f7b0fa86400>
pre_request: <function PreRequest.pre_request at 0x7f7b0fa86510>
post_request: <function PostRequest.post_request at 0x7f7b0fa86598>
child_exit: <function ChildExit.child_exit at 0x7f7b0fa866a8>
worker_exit: <function WorkerExit.worker_exit at 0x7f7b0fa867b8>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f7b0fa868c8>
on_exit: <function OnExit.on_exit at 0x7f7b0fa869d8>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2022-12-01 15:28:04 -0700] [16349] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:28:04 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:04 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:05 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:05 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:06 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:06 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:07 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:07 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:08 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:08 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:09 -0700] [16349] [ERROR] Can't connect to ('project1.sock', 8000)
wsgi.py
from flask import Flask
application = Flask(__name__)
@application.route("/")
def hello():
return "<h1 style='color:blue'>Hello There!</h1>"
if __name__ == "__main__":
application.run(host='0.0.0.0')
the nginx.conf file that is being sourced from Puppet
(pyvenv) [root@guadalupe project1]# cat /etc/nginx/sites-enabled/project1.conf
include /etc/nginx/conf.d/*.conf;
upstream app_a {
server unix:///home/bit-web/pyvenv/project1/project1.sock;
}
server {
listen 80;
server_name guadalupe.int.colorado.edu, 172.20.13.55;
location / {
proxy_read_timeout 300;
proxy_connect_timeout 300;
include uwsgi_params;
uwsgi_pass app_a;
}
}
nginx error log
2022/12/01 15:02:15 [error] 11743#11743: *1 upstream prematurely closed connection while reading response header from upstream, client: 198.11.28.224, server: guadalupe.int.colorado.edu,, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///home/bit-web/pyvenv/project1/project1.sock:", host: "guadalupe.int.colorado.edu", referrer: "http://guadalupe.int.colorado.edu/"
socket information
(pyvenv) [root@guadalupe project1]# ls
project1.sock __pycache__ wsgi.py
(pyvenv) [root@guadalupe project1]# pwd
/home/bit-web/pyvenv/project1
gunicorn service
(pyvenv) [root@guadalupe project1]# cat /etc/systemd/system/gunicorn.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=bit-web
Group=nginx
WorkingDirectory=/home/bit-web/pyvenv/project1
Environment="PATH=/home/bit-web/pyvenv/bin"
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:project1.sock -m 007 wsgi:application
[Install]
WantedBy=multi-user.target
A:
In your gunicorn service file, instead of
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:project1.sock -m 007 wsgi:application
try this (i.e. add the full path to the .sock file):
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:/home/bit-web/pyvenv/project1/project1.sock -m 007 wsgi:application
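Two further points are worth checking here, beyond the original answer. First, when binding gunicorn to a unix socket on the command line, the unix: prefix is required (for example gunicorn wsgi:application -b unix:project1.sock); without it gunicorn treats project1.sock as a hostname, which is exactly what the "Name or service not known" lines in the debug log show. Second, gunicorn speaks plain HTTP/WSGI rather than the uwsgi binary protocol, so the nginx location block would normally use proxy_pass http://app_a; (keeping the existing upstream) instead of include uwsgi_params; and uwsgi_pass app_a;, which would also explain the "upstream prematurely closed connection" error in the nginx log.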
| Gunicorn/Nginx/Flask not playing together well | I have been trying the last couple days to build a nginx/gunicorn/flask stack in Puppet to deploy repeatedly in our environment. Unfortunately, I am coming up short at the last moment and could really use some help. I have dumped anything I though relevant below, if anyone can lend a hand it would be very helpful!
gunicorn cli errors
(pyvenv) [root@guadalupe project1]# gunicorn wsgi:application
[2022-12-01 15:07:29 -0700] [13060] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:07:29 -0700] [13060] [INFO] Listening at: http://127.0.0.1:8000 (13060)
[2022-12-01 15:07:29 -0700] [13060] [INFO] Using worker: sync
[2022-12-01 15:07:29 -0700] [13063] [INFO] Booting worker with pid: 13063
^C[2022-12-01 15:08:01 -0700] [13060] [INFO] Handling signal: int
[2022-12-01 15:08:01 -0700] [13063] [INFO] Worker exiting (pid: 13063)
[2022-12-01 15:08:01 -0700] [13060] [INFO] Shutting down: Master
(pyvenv) [root@guadalupe project1]# gunicorn wsgi:application -b project1.sock
[2022-12-01 15:08:09 -0700] [13067] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:08:09 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:10 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:11 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:12 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:13 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:14 -0700] [13067] [ERROR] Can't connect to ('project1.sock', 8000)
gunicorn debug log from running
$ gunicorn wsgi:application -b project1.sock --error-logfile error.log --log-level 'debug'
[2022-12-01 15:28:04 -0700] [16349] [DEBUG] Current configuration:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['project1.sock']
backlog: 2048
workers: 1
worker_class: sync
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 30
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /home/bit-web/pyvenv/project1
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 0
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: error.log
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: wsgi:application
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7f7b0fa6eae8>
on_reload: <function OnReload.on_reload at 0x7f7b0fa6ebf8>
when_ready: <function WhenReady.when_ready at 0x7f7b0fa6ed08>
pre_fork: <function Prefork.pre_fork at 0x7f7b0fa6ee18>
post_fork: <function Postfork.post_fork at 0x7f7b0fa6ef28>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f7b0fa860d0>
worker_int: <function WorkerInt.worker_int at 0x7f7b0fa861e0>
worker_abort: <function WorkerAbort.worker_abort at 0x7f7b0fa862f0>
pre_exec: <function PreExec.pre_exec at 0x7f7b0fa86400>
pre_request: <function PreRequest.pre_request at 0x7f7b0fa86510>
post_request: <function PostRequest.post_request at 0x7f7b0fa86598>
child_exit: <function ChildExit.child_exit at 0x7f7b0fa866a8>
worker_exit: <function WorkerExit.worker_exit at 0x7f7b0fa867b8>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f7b0fa868c8>
on_exit: <function OnExit.on_exit at 0x7f7b0fa869d8>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2022-12-01 15:28:04 -0700] [16349] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:28:04 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:04 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:05 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:05 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:06 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:06 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:07 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:07 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:08 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:08 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:09 -0700] [16349] [ERROR] Can't connect to ('project1.sock', 8000)
wsgi.py
from flask import Flask
application = Flask(__name__)
@application.route("/")
def hello():
return "<h1 style='color:blue'>Hello There!</h1>"
if __name__ == "__main__":
application.run(host='0.0.0.0')
the nginx.conf file that is being sourced from Puppet
(pyvenv) [root@guadalupe project1]# cat /etc/nginx/sites-enabled/project1.conf
include /etc/nginx/conf.d/*.conf;
upstream app_a {
server unix:///home/bit-web/pyvenv/project1/project1.sock;
}
server {
listen 80;
server_name guadalupe.int.colorado.edu, 172.20.13.55;
location / {
proxy_read_timeout 300;
proxy_connect_timeout 300;
include uwsgi_params;
uwsgi_pass app_a;
}
}
nginx error log
2022/12/01 15:02:15 [error] 11743#11743: *1 upstream prematurely closed connection while reading response header from upstream, client: 198.11.28.224, server: guadalupe.int.colorado.edu,, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///home/bit-web/pyvenv/project1/project1.sock:", host: "guadalupe.int.colorado.edu", referrer: "http://guadalupe.int.colorado.edu/"
socket information
(pyvenv) [root@guadalupe project1]# ls
project1.sock __pycache__ wsgi.py
(pyvenv) [root@guadalupe project1]# pwd
/home/bit-web/pyvenv/project1
gunicorn service
(pyvenv) [root@guadalupe project1]# cat /etc/systemd/system/gunicorn.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=bit-web
Group=nginx
WorkingDirectory=/home/bit-web/pyvenv/project1
Environment="PATH=/home/bit-web/pyvenv/bin"
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:project1.sock -m 007 wsgi:application
[Install]
WantedBy=multi-user.target
| [
"in your gunicorn service file, Instead of\nExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:project1.sock -m 007 wsgi:application\n\ntry this (means add .sock file path)\nExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:/home/bit-web/pyvenv/project1/project1.sock -m 007 wsgi:application\n\n"
] | [
0
] | [] | [] | [
"flask",
"gunicorn",
"nginx",
"puppet",
"python"
] | stackoverflow_0074646683_flask_gunicorn_nginx_puppet_python.txt |
Q:
How to display property method as a message in class based view?
I have a property method defined inside my django model which represents an id.
status_choice = [("Pending","Pending"), ("In progress", "In progress") ,("Fixed","Fixed"),("Not Fixed","Not Fixed")]
class Bug(models.Model):
name = models.CharField(max_length=200, blank= False, null= False)
info = models.TextField()
status = models.CharField(max_length=25, choices=status_choice,
default="Pending")
assigned_to = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=
models.CASCADE, related_name='assigned', null = True, blank=True)
phn_number = PhoneNumberField()
uploaded_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=
models.CASCADE, related_name='user_name')
created_at = models.DateTimeField(auto_now_add= True)
updated_at = models.DateTimeField(blank= True, null = True)
updated_by = models.CharField(max_length=20, blank= True)
screeenshot = models.ImageField(upload_to='pics')
@property
def bug_id(self):
bugid = "BUG{:03d}".format(self.id)
return bugid
What I want is to show this id as a message after an object is created.
corresponding views.py file.
class BugUpload(LoginRequiredMixin, generic.CreateView):
login_url = 'Login'
model = Bug
form_class = UploadForm
template_name = 'upload.html'
success_url = reverse_lazy('index')
def form_valid(self, form):
form.instance.uploaded_by = self.request.user
return super().form_valid(form)
A:
Assuming that your UploadForm is a ModelForm it's worth noting that calling .save() on it will return an instance of your model.
If you have:
class UploadForm(ModelForm):
class Meta:
model = Bug
This means that your .save() will return an instance of a Bug
Now that everything went well and you have your new instance, you can use django's messages framework to build the success message for your users:
from django.contrib import messages
from django.http import HttpResponseRedirect

def form_valid(self, form):
    instance = form.save(commit=True)
    self.object = instance  # CreateView expects self.object to be set
    my_message = f"Hello {instance.bug_id}"
    messages.add_message(self.request, messages.SUCCESS, my_message)
    return HttpResponseRedirect(self.get_success_url())  # form_valid must return an HTTP response
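If you would rather keep the form_valid from the question (which also sets uploaded_by before saving), an equivalent sketch that lets CreateView handle the save and redirect itself:
from django.contrib import messages

def form_valid(self, form):
    form.instance.uploaded_by = self.request.user
    response = super().form_valid(form)  # saves the Bug and sets self.object
    messages.success(self.request, f"Created {self.object.bug_id}")
    return response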
| How to display property method as a message in class based view? | I have a property method defined inside my django model which represents an id.
status_choice = [("Pending","Pending"), ("In progress", "In progress") ,("Fixed","Fixed"),("Not Fixed","Not Fixed")]
class Bug(models.Model):
name = models.CharField(max_length=200, blank= False, null= False)
info = models.TextField()
status = models.CharField(max_length=25, choices=status_choice,
default="Pending")
assigned_to = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=
models.CASCADE, related_name='assigned', null = True, blank=True)
phn_number = PhoneNumberField()
uploaded_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=
models.CASCADE, related_name='user_name')
created_at = models.DateTimeField(auto_now_add= True)
updated_at = models.DateTimeField(blank= True, null = True)
updated_by = models.CharField(max_length=20, blank= True)
screeenshot = models.ImageField(upload_to='pics')
@property
def bug_id(self):
bugid = "BUG{:03d}".format(self.id)
return bugid
What I wanted is I need to show this id as a message after an object is created.
corresponding views.py file.
class BugUpload(LoginRequiredMixin, generic.CreateView):
login_url = 'Login'
model = Bug
form_class = UploadForm
template_name = 'upload.html'
success_url = reverse_lazy('index')
def form_valid(self, form):
form.instance.uploaded_by = self.request.user
return super().form_valid(form)
| [
"Assuming that your UploadForm is a ModelForm it's worth noting that calling .save() on it will return an instance of your model.\nIf you have:\nclass UploadForm(ModelForm):\n class Meta:\n model = Bug\n\nThis means that your .save() will return an instance of a Bug\nNow that everything went well and you have your new instance, you can use django's messages framework to build the success message for your users:\ndef form_valid(self, form):\n instance = form.save(commit=True)\n my_message = f\"Hello {instance.bug_id}\"\n messages.add_message(self.request, messages.SUCCESS, my_messages)\n return instance\n\n"
] | [
2
] | [] | [] | [
"django",
"django_forms",
"django_models",
"django_views",
"python"
] | stackoverflow_0074654780_django_django_forms_django_models_django_views_python.txt |
Q:
How to create an event listener for mp3 sounds
I have an mp3 file named 'audio.mp3', but for some reason, no matter how hard I try, I cannot find a useful link to help me code a program that will listen to the computer and, when it hears a specific sound that is exactly like my file 'audio.mp3', do something (that of course I'll code).
I kept searching online for modules I could use or for help, but I only came across links that tell me how to record sound:
from playsound import playsound
playsound('audio.mp3')
I have no idea what to begin with :/
The perfect module I can think of is one where I only have to give it the name of my mp3 file and attach an event listener, and it will handle the part where it listens to my computer's sound and triggers my event when it hears my file.
A:
The playsound module is meant for a different purpose: it plays sound, it does not listen or record.
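There is no single library call for the detection part; one possible direction is to record short windows of audio and compare them against the reference clip. The sketch below is only an illustration of that idea, not a drop-in solution: it assumes the third-party packages librosa (to decode the mp3), sounddevice (to record) and scipy are installed, that the reference clip is only a few seconds long, that the threshold value is tuned by hand, and that the sound is audible on the default input device (capturing the computer's own output usually needs a loopback device). The name heard_reference is just a hypothetical helper.
import numpy as np
import sounddevice as sd            # assumed available: records from the default input
import librosa                      # assumed available: decodes the mp3 to a float array
from scipy.signal import correlate  # assumed available: fast cross-correlation

ref, sr = librosa.load("audio.mp3", sr=22050, mono=True)  # reference samples

def heard_reference(listen_seconds=3, threshold=0.6):
    # record a short window of audio from the default input device
    rec = sd.rec(int(listen_seconds * sr), samplerate=sr, channels=1, dtype="float32")
    sd.wait()
    rec = rec.ravel()
    # crude similarity score: peak of the normalised cross-correlation (roughly 0..1)
    corr = correlate(rec, ref, mode="full", method="fft")
    score = np.max(np.abs(corr)) / (np.linalg.norm(ref) * np.linalg.norm(rec) + 1e-9)
    return score > threshold  # the threshold is arbitrary and needs tuning

while True:
    if heard_reference():
        print("heard the reference sound")  # put your own event code here
        break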
| How to create an event listener for mp3 sounds | I have a mp3 file namd 'audio.mp3' but for some reason no matter how hard I try I cannot find a useful link to help me code a programm that will listen to the computer and when hearing a specific sound that is exactly like my file 'audio.mp3', do something (that ofc i'll code)
I kept searching online for modules i could use or help but I only came across links that tell me how to record sound :
from playsound import playsound
playsound('audio.mp3')
I have no idea what to begin with :/
The perfect module I could think of is a module that I onlt have to give him the name of my mp3 file, then put an event listener and it'll do the part where it listens to my computer sounds and trigger my event when it hears my file
| [
"module playsound is for diferent use, he PLAYSOUND, not listen\n"
] | [
0
] | [] | [] | [
"audio",
"python",
"record",
"voice"
] | stackoverflow_0070552948_audio_python_record_voice.txt |
Q:
Why ProcessPoolExecutor on Windows needs __main__ guard when submitting function from another module?
Let's say I have a program
import othermodule, concurrent.futures
pool = concurrent.futures.ProcessPoolExecutor()
and then I want to say
fut = pool.submit(othermodule.foo, 5)
print(fut.result())
Official docs say I need to guard these latter two statements with if __name__ == '__main__'. It's not hard to do, I would just like to know why. foo lives in othermodule, and it knows that (foo.__module__ == 'othermodule'). And 5 is a literal int. Both can be pickled and unpickled without any reference to the module that created the pool. I see no reason why ProcessPoolExecutor has to import it on the other side.
My model is this: you start another python process, pickle othermodule.foo and 5, and send them pickled through some IPC method (Queue, Pipe, whatever). The other process unpickles them (importing othermodule of course, to find foo's code), and calls foo(5), sending the result back (again through pickle and some IPC). Obviously my model is wrong, but I would like to know where it is wrong.
Is maybe the only reason, that on Unix this is solved by forking __main__, so on Windows (where fork doesn't really work) they did the closest imitation of the procedure, instead of the closest imitation of the intent? In this case, could it be fixed on Windows?
(Yes, I know about Why does Python's multiprocessing module import __main__ when starting a new process on Windows?. In my opinion it answers a slightly different question. But you can try to use its answer to explain to me why the answer to this question must be the same.)
A:
Old question, but now there is a good answer to this.
Is maybe the only reason, that on Unix this is solved by forking __main__, so on Windows (where fork doesn't really work) they did the closest imitation of the procedure, instead of the closest imitation of the intent? In this case, could it be fixed on Windows?
It is my guess that the people who wrote the multiprocessing library wrote it under the assumption that os.fork() would be available. On Windows, this was bypassed in a hacky way. In fact, still today (2022), Python's built-in parallelisation modules multiprocessing and concurrent.futures are a pain to use, mainly because of pickling errors. These modules work best when the interpreter can be forked and there is no need to pickle anything.
It can be fixed on Windows. There is a package called loky that you can install with pip or conda. It uses an improved pickling module called cloudpickle (which also has to be installed) to call the functions from the main module. So, there is no need to import the main module by the child process, and therefore the if __name__ == '__main__' guard does not have to be used. (Link to github repo of loky, and doc) You can also try the higher level package joblib, which uses loky as its backend and also has the same behaviour.
Loky also has a facility to use a reusable executor (i.e. a pool of processes that will be reused again and again to reduce overhead of spawning). You can check examples of that here. Loky essentially uses the semantics of ProcessPoolExecutor from concurrent.futures but has a different implementation underneath.
I will give an example with a single executor (not reusable).
import os
import loky  # third-party package: pip install loky

def f(x):
    return os.getpid()

with loky.ProcessPoolExecutor(max_workers=3) as pool:
    jobs = [pool.submit(f, i) for i in range(3)]  # returns Future() instances
    results = [job.result() for job in jobs]      # get results from Future

print(results)
Run this on Windows, and it should return the PID of the child processes without any hitch, and without the need to use if __name__=='__main__'.
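For the reusable pool mentioned above, a minimal sketch along the same lines (reusing f from the example; get_reusable_executor is loky's own helper for a process pool that is kept alive between calls):
from loky import get_reusable_executor

executor = get_reusable_executor(max_workers=3)
print(list(executor.map(f, range(3))))  # PIDs of the reused worker processes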
| Why ProcessPoolExecutor on Windows needs __main__ guard when submitting function from another module? | Let's say I have a program
import othermodule, concurrent.futures
pool = concurrent.futures.ProcessPoolExecutor()
and then I want to say
fut = pool.submit(othermodule.foo, 5)
print(fut.result())
Official docs say I need to guard these latter two statements with if __name__ == '__main__'. It's not hard to do, I would just like to know why. foo lives in othermodule, and it knows that (foo.__module__ == 'othermodule'). And 5 is a literal int. Both can be pickled and unpickled without any reference to the module that created the pool. I see no reason why ProcessPoolExecutor has to import it on the other side.
My model is this: you start another python process, pickle othermodule.foo and 5, and send them pickled through some IPC method (Queue, Pipe, whatever). The other process unpickles them (importing othermodule of course, to find foo's code), and calls foo(5), sending the result back (again through pickle and some IPC). Obviously my model is wrong, but I would like to know where it is wrong.
Is maybe the only reason, that on Unix this is solved by forking __main__, so on Windows (where fork doesn't really work) they did the closest imitation of the procedure, instead of the closest imitation of the intent? In this case, could it be fixed on Windows?
(Yes, I know about Why does Python's multiprocessing module import __main__ when starting a new process on Windows?. In my opinion it answers a slightly different question. But you can try to use its answer to explain to me why the answer to this question must be the same.)
| [
"Old question, but now there is a good answer to this.\n\nIs maybe the only reason, that on Unix this is solved by forking __main__, so on Windows (where fork doesn't really work) they did the closest imitation of the procedure, instead of the closest imitation of the intent? In this case, could it be fixed on Windows?\n\nIt is my guess that the people who wrote multiprocessing library wrote it under the assumption that os.fork() would be available. On windows, this was bypassed in a hacky way. In fact, still today (2022), python's inbuilt parallelisation modules multiprocessing and concurrent.futures are a pain to use, mainly because of pickling errors. These modules work the best when the interpreter can be forked, and there is no need to pickle anything.\nIt can be fixed on Windows. There is a package called loky that you can install with pip or conda. It uses an improved pickling module called cloudpickle (which also has to be installed) to call the functions from the main module. So, there is no need to import the main module by the child process, and therefore the if __name__ == '__main__' guard does not have to be used. (Link to github repo of loky, and doc) You can also try the higher level package joblib, which uses loky as its backend and also has the same behaviour.\nLoky also has a facility to use a reusable executor (i.e. a pool of processes that will be reused again and again to reduce overhead of spawning). You can check examples of that here. Loky essentially uses the semantics of ProcessPoolExecutor from concurrent.futures but has a different implementation underneath.\nI will give an example with a single executor (not reusable).\nimport os\n\ndef f(x):\n return os.getpid()\n\nwith loky.ProcessPoolExecutor(max_workers=3) as pool:\n jobs = [pool.submit(f,i) for i in range(3)] # returns Future() instances\n results = [job.result() for job in jobs] # get results from Future\n\nprint(results)\n\nRun this on Windows, and it should return the PID of the child processes without any hitch, and without the need to use if __name__=='__main__'.\n"
] | [
0
] | [] | [] | [
"python",
"python_module",
"python_multiprocessing"
] | stackoverflow_0038801229_python_python_module_python_multiprocessing.txt |
Q:
Why does StableDiffusionPipeline return black images when generating multiple images at once?
I am using the StableDiffusionPipeline from the Hugging Face Diffusers library in Python 3.10.2, on an M2 Mac (I tagged it because this might be the issue). When I try to generate 1 image from 1 prompt, the output looks fine, but when I try to generate multiple images using the same prompt, the images are all either black squares or a random image (see example below). What could be the issue?
My code is as follows (where I change n_imgs from 1 to more than 1 to break it):
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps") # for M1/M2 chips
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut driving a car on mars"
# First-time "warmup" pass (because of weird M1 behaviour)
_ = pipe(prompt, num_inference_steps=1)
# generate images
n_imgs = 1
imgs = pipe([prompt] * n_imgs).images
I also tried setting num_images_per_prompt instead of creating a list of repeated prompts in the pipeline call, but this gave the same bad results.
Example output (for multiple images):
[edit/update]: When I generate the images in a loop surrounding the pipe call instead of passing an iterable to the pipe call, it does work:
# generate images
n_imgs = 3
for i in range(n_imgs):
img = pipe(prompt).images[0]
# do something with img
But it is still a mystery to me as to why.
A:
Apparently it is indeed an Apple Silicon (M1/M2) issue, and Hugging Face is not yet sure why it happens; see this GitHub issue for more details.
A:
I think it might be a PyTorch issue given that a pure MPS version of the code (in Swift) worked fine last time I tested:
import MetalPerformanceShadersGraph
let graph = MPSGraph()
let x = graph.constant(1, shape: [32, 4096, 4096], dataType: .float32)
let y = graph.constant(1, shape: [32, 4096, 1], dataType: .float32)
let z = graph.matrixMultiplication(primary: x, secondary: y, name: nil)
let device = MTLCreateSystemDefaultDevice()!
let buf = device.makeBuffer(length: 16384)!
let td = MPSGraphTensorData(buf, shape: [64, 64], dataType: .int32)
let cmdBuf = MPSCommandBuffer(from: device.makeCommandQueue()!)
graph.encode(to: cmdBuf, feeds: [:], targetOperations: nil, resultsDictionary: [z:td], executionDescriptor: nil)
cmdBuf.commit()
See this thread for details: https://github.com/pytorch/pytorch/issues/84039
| Why does StableDiffusionPipeline return black images when generating multiple images at once? | I am using the StableDiffusionPipeline from the Hugging Face Diffusers library in Python 3.10.2, on an M2 Mac (I tagged it because this might be the issue). When I try to generate 1 image from 1 prompt, the output looks fine, but when I try to generate multiple images using the same prompt, the images are all either black squares or a random image (see example below). What could be the issue?
My code is as follows (where I change n_imgs from 1 to more than 1 to break it):
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps") # for M1/M2 chips
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut driving a car on mars"
# First-time "warmup" pass (because of weird M1 behaviour)
_ = pipe(prompt, num_inference_steps=1)
# generate images
n_imgs = 1
imgs = pipe([prompt] * n_imgs).images
I also tried setting num_images_per_prompt instead of creating a list of repeated prompts in the pipeline call, but this gave the same bad results.
Example output (for multiple images):
[edit/update]: When I generate the images in a loop surrounding the pipe call instead of passing an iterable to the pipe call, it does work:
# generate images
n_imgs = 3
for i in range(n_imgs):
img = pipe(prompt).images[0]
# do something with img
But it is still a mystery to me as to why.
| [
"Apparently it is indeed an Apple Silicon (M1/M2) issue, of which Hugging Face is not yet sure why this is happening, see this GitHub issue for more details.\n",
"I think it might be a PyTorch issue given that a pure MPS version of the code (in Swift) worked fine last time I tested:\nimport MetalPerformanceShadersGraph\n\nlet graph = MPSGraph()\nlet x = graph.constant(1, shape: [32, 4096, 4096], dataType: .float32)\nlet y = graph.constant(1, shape: [32, 4096, 1], dataType: .float32)\nlet z = graph.matrixMultiplication(primary: x, secondary: y, name: nil)\nlet device = MTLCreateSystemDefaultDevice()!\nlet buf = device.makeBuffer(length: 16384)!\nlet td = MPSGraphTensorData(buf, shape: [64, 64], dataType: .int32)\nlet cmdBuf = MPSCommandBuffer(from: device.makeCommandQueue()!)\ngraph.encode(to: cmdBuf, feeds: [:], targetOperations: nil, resultsDictionary: [z:td], executionDescriptor: nil)\ncmdBuf.commit()\n\nSee this thread for details: https://github.com/pytorch/pytorch/issues/84039\n"
] | [
1,
0
] | [] | [] | [
"apple_m1",
"huggingface_transformers",
"python",
"pytorch",
"stable_diffusion"
] | stackoverflow_0074642594_apple_m1_huggingface_transformers_python_pytorch_stable_diffusion.txt |
Q:
How to create a Python package from a PyBind11 module created with CMake
I have a C++ code using Cmake & make for the building part.
I also have python bindings to this code thanks to PyBind11.
I use pybind11_add_module inside CMakeLists.txt and now when I build (cmake -Bbuild && cd build && make) it creates a python_module.so in the build directory.
I can move it manually elsewhere, and if the module is in the same folder I can use it in python scripts.
Now I'd like to "package" this module and I don't know how. My goal would be to be able to pip install python_package_from_module to use it anywhere without the need to manually move/duplicate the python_module.so file. (Edit/Note: for now I don't want to publish my package on PyPI, just to be able to pip install the package locally.)
Do you know how to do that? I don't have a lot of knowledge about packaging; I've only done it once with a pyproject.toml and a setup.cfg.
The only examples I found use setup.py and are way more complicated. For example I must admit that I really don't understand how does this Pybind11 Cmake Example work.
A:
If I understand your question correctly, you want to be able to import your module without having to move the .so file manually.
Running python3 setup.py install should both compile your module via cmake and do the necessary linking. After that you should be able to import your module as usual.
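As a further pointer (not part of the original answer): since the question already uses pyproject.toml elsewhere, one commonly used route for CMake + pybind11 projects is the scikit-build-core build backend. Roughly, you list scikit-build-core and pybind11 under [build-system] requires, set build-backend = "scikit_build_core.build", add an install(TARGETS python_module DESTINATION .) line to CMakeLists.txt so the compiled extension is copied into the wheel, and then a plain pip install . in the project directory builds everything via CMake and puts the .so where Python can import it from any location. The pybind11 cmake_example repository linked in the question implements essentially the same idea with a setup.py that drives CMake.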
| How to create a Python package from a PyBind11 module created with CMake | I have a C++ code using Cmake & make for the building part.
I also have python bindings to this code thanks to PyBind11.
I use pybind11_add_module inside CMakeLists.txt and now when I build (cmake -Bbuild && cd build && make) it creates a python_module.so in the build directory.
I can move it manually elsewhere, and if the module is in the same folder I can use it in python scripts.
Now I'd like to "package" this module and I don't know how. My goal would be to be able to pip install python_package_from_module to use it anywhere without the need to manually move/duplicate the python_module.so file. (Edit/Note : for now I don't want to publish my package on PiPy, just to be able to pip install locally the package)
Do you know how to do that ? I don't have a lot of knowledge about packaging, I've only done it once with a pyproject.toml and a setup.cfg.
The only examples I found use setup.py and are way more complicated. For example I must admit that I really don't understand how does this Pybind11 Cmake Example work.
| [
"If I understand your question correctly, you want to be able to import your module without having to move the .so file manually.\nRunning python3 setup.py install should both compile your module via cmake and do the necessary linking. After that you should be able to import your module as usual.\n"
] | [
0
] | [] | [] | [
"pybind11",
"python",
"python_packaging"
] | stackoverflow_0074545298_pybind11_python_python_packaging.txt |
Q:
How to get name of all email attachments of a particular mail using imaplib, python?
This is my first task at my new job, please help...
Hi, I am trying to fetch all the attachments of each mail, make a list of those attachments for that particular mail, and save that list in a JSON file.
I have been instructed to use imaplib only.
This is the function that I am using to extract the mail's data, but part.get_filename() is only returning one attachment even if I have sent multiple attachments.
The output I want is the list of attachments, like: [attach1.xlss, attach2.xml, attch.csv]
I can only use the imaplib library.
I also don't have to download any attachment, so please don't share that code. I tried several websites but couldn't find anything that I could use.
def get_body_and_attachments(msg):
email_body = None
filename = None
html_part = None
# if the email message is multipart
if msg.is_multipart():
# iterate over email parts
for part in msg.walk():
# extract content type of email
content_type = part.get_content_type()
content_disposition = str(part.get("Content-Disposition"))
try:
# get the email body
body = part.get_payload(decode=True).decode()
except:
pass
if content_type == "text/plain" and "attachment" not in content_disposition:
# print text/plain emails and skip attachments
email_body = body
elif "attachment" in content_disposition:
# download attachment
print(part.get_filename(), "helloooo")
filename = part.get_filename()
filename = filename
else:
# extract content type of email
content_type = msg.get_content_type()
# get the email body
body = msg.get_payload(decode=True).decode()
if content_type == "text/plain":
email_body = body
if content_type == "text/html":
html_part = body
return email_body, filename, html_part
A:
It turned out to be easy; I just had to do this:
import re
# getting filenames
filenames = mailbox.uid('fetch', num, '(BODYSTRUCTURE)')[1][0]
filenames = re.findall(r'\("name".*?\)', str(filenames))
filenames = [filenames[i].split('" "')[1][:-2] for i in range(len(filenames))]
Explanation: mailbox.uid('fetch', num, '(BODYSTRUCTURE)') fetches the message (or mail) with the given uid (num) and returns a byte string with all the structural data relating to that message.
Then re.findall picks out all the attachment name fields, and the list comprehension cleans those matches and saves them as a list.
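A regex-free alternative sketch, reusing the email.message walk from the question instead of parsing BODYSTRUCTURE (it assumes msg is the same parsed message object passed to get_body_and_attachments), collects every filename instead of overwriting a single one:
def get_attachment_names(msg):
    filenames = []
    for part in msg.walk():
        # any part that advertises a filename is an attachment
        if part.get_filename():
            filenames.append(part.get_filename())
    return filenames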
| How to get name of all email attachments of a particular mail using imaplib, python? | this is my first task on my new job please help...
Hi i am trying to fetch all the attachments of mails and make a list of those attachments for that particular mail and save that list in a json file.
I have been instructed to use imaplib only.
this is the function that i am using to extract the mails data but the part.getfilename() is only returning one attachment even if i have sent multiple attachments.
the out put i was is the list of attachments like :- [attach1.xlss, attach2.xml, attch.csv]
i can only use imaplib library.
i also dont have to download any attachment so please dont share that code. i tried several websites but couldnt find anything that i could use.
def get_body_and_attachments(msg):
email_body = None
filename = None
html_part = None
# if the email message is multipart
if msg.is_multipart():
# iterate over email parts
for part in msg.walk():
# extract content type of email
content_type = part.get_content_type()
content_disposition = str(part.get("Content-Disposition"))
try:
# get the email body
body = part.get_payload(decode=True).decode()
except:
pass
if content_type == "text/plain" and "attachment" not in content_disposition:
# print text/plain emails and skip attachments
email_body = body
elif "attachment" in content_disposition:
# download attachment
print(part.get_filename(), "helloooo")
filename = part.get_filename()
filename = filename
else:
# extract content type of email
content_type = msg.get_content_type()
# get the email body
body = msg.get_payload(decode=True).decode()
if content_type == "text/plain":
email_body = body
if content_type == "text/html":
html_part = body
return email_body, filename, html_part
| [
"it was easy i just had to do this.\nimport re\n# getting filenames \nfilenames = mailbox.uid('fetch', num, '(BODYSTRUCTURE)')[1][0]\nfilenames = re.findall('\\(\"name\".*?\\)', str(filenames))\n\nfilenames = [filenames[i].split('\" \"')[1][:-2] for i in range(len(filenames))]\n\n\nexplaination - mailbox.uid will fetch the message(or mail) of a particular uid(num) and will return a byte string with all the data relating to that msg.\nnow i use re.findall to find all the attachment names and then i clean that return value and save it as a list.\n"
] | [
0
] | [] | [] | [
"imap",
"imaplib",
"python"
] | stackoverflow_0074623655_imap_imaplib_python.txt |
Q:
How to iterate until a condition is met in python for loop
I have been working on this simple interest calculator and I was trying to make the for loop iterate until the amount inputted by the user is reached. But I am stuck at the range part: if I assign a range value like range(1, 11) it will iterate correctly and print the year against the amount, but I want the program to iterate until the year in which principal is greater than the amount is reached. My current code is below and the final product I want to reach is also attached below the current code. I'm new to Python so please bear with me if I'm off track. Thanks in advance.
Current code:
principal = float(input("How much money to start? :"))
apr = float(input("What is the apr? :"))
amount = float(input("What is the amount you want to get to? :"))
def interestCalculator():
global principal
year = 1
for i in range(1, year + 1):
if principal < amount:
principal = principal + principal*apr
print("After year " + str (i)+" the account is at " + str(principal))
if principal > amount:
print("It would take" + str(year) + " years to reach your goal!")
else:
print("Can't calculate interest. Error: Amount is less than principal")
interestCalculator();
Final expected result:
A:
Instead, you can use a while loop. What I mean here is you can simply:
principal = float(input("How much money to start? :"))
apr = float(input("What is the apr? :"))
amount = float(input("What is the amount you want to get to? :"))
def interestCalculator():
global principal
i = 1
if principal > amount:
print("Can't calculate interest. Error: Amount is less than principal")
while principal < amount:
principal = principal + principal*apr
print("After year " + str (i)+" the account is at " + str(principal))
if principal > amount:
print("It would take" + str(year) + " years to reach your goal!")
i += 1
interestCalculator()
A:
A suggestion for a more pythonic solution
PRINCIPAL = float(input("How much money to start? :"))
APR = float(input("What is the apr? :"))
AMOUNT = float(input("What is the amount you want to get to? :"))
def interestCalculator(principal, apr, amount):
year = 0
yield year, principal
while principal < amount:
year += 1
principal += principal*apr
yield year, principal
for year, amount in interestCalculator(PRINCIPAL, APR, AMOUNT):
print(f"After year {year} the account is at {amount:.2f}")
if year == 0:
print("Can't calculate interest. Error: Amount is less than principal")
print(f"It would take {year} years to reach your goal!")
| How to iterate until a condition is met in python for loop | I have been working on this simple interest calculator and I was trying to make the for loop iterate until the amount inputted by the user is reached. But I am stuck at the range part, if I assign a range value like range(1 ,11) it will iterate it correctly and print the year in in contrast to the amount but I want the program to iterate until the year in which principal is greater than the amount is reached. My current code is bellow and the final product I want to reach is also attached bellow the current code. I'm new to python so please bare with me if I'm of track. Thanks in advance.
Current code:
principal = float(input("How much money to start? :"))
apr = float(input("What is the apr? :"))
amount = float(input("What is the amount you want to get to? :"))
def interestCalculator():
global principal
year = 1
for i in range(1, year + 1):
if principal < amount:
principal = principal + principal*apr
print("After year " + str (i)+" the account is at " + str(principal))
if principal > amount:
print("It would take" + str(year) + " years to reach your goal!")
else:
print("Can't calculate interest. Error: Amount is less than principal")
interestCalculator();
Final expected result:
| [
"Instead, you can use a while loop. What I mean here is you can simply:\nprincipal = float(input(\"How much money to start? :\"))\napr = float(input(\"What is the apr? :\"))\namount = float(input(\"What is the amount you want to get to? :\"))\n\n\ndef interestCalculator():\n global principal\n i = 1\n\n if principal > amount:\n print(\"Can't calculate interest. Error: Amount is less than principal\")\n\n while principal < amount:\n principal = principal + principal*apr\n print(\"After year \" + str (i)+\" the account is at \" + str(principal))\n if principal > amount:\n print(\"It would take\" + str(year) + \" years to reach your goal!\")\n i += 1\n\n\ninterestCalculator()\n\n",
"A suggestion for a more pythonic solution\nPRINCIPAL = float(input(\"How much money to start? :\"))\nAPR = float(input(\"What is the apr? :\"))\nAMOUNT = float(input(\"What is the amount you want to get to? :\"))\n\n\ndef interestCalculator(principal, apr, amount):\n year = 0\n yield year, principal\n while principal < amount:\n year += 1\n principal += principal*apr\n yield year, principal\n\n\nfor year, amount in interestCalculator(PRINCIPAL, APR, AMOUNT):\n print(f\"After year {year} the account is at {amount:.2f}\")\n\nif year == 0:\n print(\"Can't calculate interest. Error: Amount is less than principal\")\nprint(f\"It would take {year} years to reach your goal!\")\n\n"
] | [
4,
0
] | [] | [] | [
"loops",
"python"
] | stackoverflow_0066522306_loops_python.txt |
Q:
Discarding alpha channel from images stored as Numpy arrays
I load images with numpy/scikit. I know that all images are 200x200 pixels.
When the images are loaded, I notice some have an alpha channel, and therefore have shape (200, 200, 4) instead of (200, 200, 3) which I expect.
Is there a way to delete that last value, discarding the alpha channel and get all images to a nice (200, 200, 3) shape?
A:
Just slice the array to get the first three entries of the last dimension:
image_without_alpha = image[:,:,:3]
A:
scikit-image builtin:
from skimage.color import rgba2rgb
from skimage import data
img_rgba = data.logo()
img_rgb = rgba2rgb(img_rgba)
https://scikit-image.org/docs/dev/user_guide/transforming_image_data.html#conversion-from-rgba-to-rgb-removing-alpha-channel-through-alpha-blending
https://scikit-image.org/docs/dev/api/skimage.color.html#rgba2rgb
| Discarding alpha channel from images stored as Numpy arrays | I load images with numpy/scikit. I know that all images are 200x200 pixels.
When the images are loaded, I notice some have an alpha channel, and therefore have shape (200, 200, 4) instead of (200, 200, 3) which I expect.
Is there a way to delete that last value, discarding the alpha channel and get all images to a nice (200, 200, 3) shape?
| [
"Just slice the array to get the first three entries of the last dimension:\nimage_without_alpha = image[:,:,:3]\n\n",
"scikit-image builtin:\nfrom skimage.color import rgba2rgb\nfrom skimage import data\nimg_rgba = data.logo()\nimg_rgb = rgba2rgb(img_rgba)\n\nhttps://scikit-image.org/docs/dev/user_guide/transforming_image_data.html#conversion-from-rgba-to-rgb-removing-alpha-channel-through-alpha-blending\nhttps://scikit-image.org/docs/dev/api/skimage.color.html#rgba2rgb\n"
] | [
109,
3
] | [
"Use PIL.Image to remove the alpha channel\nfrom PIL import Image\nimport numpy as np\n\nimg = Image.open(\"c:\\>path_to_image\")\nimg = img.convert(\"RGB\") # remove alpha\nimage_array = np.asarray(img) # converting image to numpy array\nprint(image_array.shape)\nimg.show()\n\nIf images are in numpy array to convert the array to Image use Image.fromarray to convert array to Image\npilImage = Image.fromarray(numpy_array)\n\n"
] | [
-1
] | [
"math",
"numpy",
"python"
] | stackoverflow_0035902302_math_numpy_python.txt |
Q:
Manipulating data in Polars
A dumb question. How to manipulate columns in Polars?
Explicitly, I have a table with 3 columns : N , Survivors, Deaths
I want to replace Deaths by Deaths * N and Survivors by Survivors * N
the following code is not working
table["SURVIVORS"] = table["SURVIVORS"]*table["N"]
I have this error:
TypeError: 'DataFrame' object does not support 'Series' assignment by index. Use 'DataFrame.with_columns'
thank you
A:
You can use polars.DataFrame.with_column to overwrite/replace the values of a column.
Return a new DataFrame with the column added or replaced.
Here is an example :
import polars as pl
table = pl.DataFrame({"N": [5, 2, 6],
"SURVIVORS": [1, 10, 3],
"Deaths": [0, 3, 2]})
table= table.with_column(pl.Series(name="SURVIVORS",
values=table["SURVIVORS"]*table["N"]))
# Output :
print(table)
shape: (3, 3)
┌─────┬───────────┬────────┐
│ N ┆ SURVIVORS ┆ Deaths │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═══════════╪════════╡
│ 5 ┆ 5 ┆ 0 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2 ┆ 20 ┆ 3 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 6 ┆ 18 ┆ 2 │
└─────┴───────────┴────────┘
A:
Polars isn't pandas.
You can't assign a part of a df. To put that another way, the left side of the equals has to be a full df so forget about this syntax table["SURVIVORS"]=
You'll mainly use the with_column, with_columns, select methods. The first two will add columns to your existing df based on the expression you feed them whereas select will only return what you ask for.
In your case, since you want to overwrite SURVIVORS and DEATHS while keeping N you'd do:
table=table.with_columns([
pl.col('SURVIVORS')*pl.col('N'),
pl.col('DEATHS')*pl.col('N')
])
If you wanted to rename the columns then you might think to do this:
table=table.with_columns([
    (pl.col('SURVIVORS')*pl.col('N')).alias('SURVIVORS_N'),
(pl.col('DEATHS')*pl.col('N')).alias('DEATHS_N')
])
in this case, since with_columns just adds columns, you'll still have the original SURVIVORS and DEATHS column.
This brings it back to select, if you want to have explicit control of what is returned, including the order, then do select:
table=table.select([ 'N',
    (pl.col('SURVIVORS')*pl.col('N')).alias('SURVIVORS_N'),
(pl.col('DEATHS')*pl.col('N')).alias('DEATHS_N')
])
One note, you can refer to a column by just giving its name, like 'N' in the previous example as long as you don't want to do anything to it. If you want to do something with it (math, rename, anything) then you have to wrap it in pl.col('column_name') so that it becomes an Expression.
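A tiny sketch of that last point (the column names here are made up for illustration):
import polars as pl

df = pl.DataFrame({"N": [1, 2], "SURVIVORS": [10, 20]})
# 'N' passes through untouched, while SURVIVORS is transformed and therefore needs pl.col()
out = df.select(["N", (pl.col("SURVIVORS") * pl.col("N")).alias("SURVIVORS_N")])
print(out)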
| Manipulating data in Polars | A dumb question. How to manipulate columns in Polars?
Explicitly, I have a table with 3 columns : N , Survivors, Deaths
I want to replace Deaths by Deaths * N and Survivors by Survivors * N
the following code is not working
table["SURVIVORS"] = table["SURVIVORS"]*table["N"]
I have this error:
TypeError: 'DataFrame' object does not support 'Series' assignment by index. Use 'DataFrame.with_columns'
thank you
| [
"You can use polars.DataFrame.with_column to overwrite/replace the values of a column.\n\nReturn a new DataFrame with the column added or replaced.\n\nHere is an example :\nimport polars as pl\n\ntable = pl.DataFrame({\"N\": [5, 2, 6],\n \"SURVIVORS\": [1, 10, 3],\n \"Deaths\": [0, 3, 2]})\n\ntable= table.with_column(pl.Series(name=\"SURVIVORS\",\n values=table[\"SURVIVORS\"]*table[\"N\"])) \n\n# Output :\nprint(table)\n\nshape: (3, 3)\n┌─────┬───────────┬────────┐\n│ N ┆ SURVIVORS ┆ Deaths │\n│ --- ┆ --- ┆ --- │\n│ i64 ┆ i64 ┆ i64 │\n╞═════╪═══════════╪════════╡\n│ 5 ┆ 5 ┆ 0 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤\n│ 2 ┆ 20 ┆ 3 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤\n│ 6 ┆ 18 ┆ 2 │\n└─────┴───────────┴────────┘\n\n",
"Polars isn't pandas.\nYou can't assign a part of a df. To put that another way, the left side of the equals has to be a full df so forget about this syntax table[\"SURVIVORS\"]=\nYou'll mainly use the with_column, with_columns, select methods. The first two will add columns to your existing df based on the expression you feed them whereas select will only return what you ask for.\nIn your case, since you want to overwrite SURVIVORS and DEATHS while keeping N you'd do:\ntable=table.with_columns([\n pl.col('SURVIVORS')*pl.col('N'),\n pl.col('DEATHS')*pl.col('N')\n ])\n\nIf you wanted to rename the columns then you might think to do this:\ntable=table.with_columns([\n (pl.col('SURVIVORS')*pl.col('N')).alias('SURIVORS_N'),\n (pl.col('DEATHS')*pl.col('N')).alias('DEATHS_N')\n ])\n\nin this case, since with_columns just adds columns, you'll still have the original SURVIVORS and DEATHS column.\nThis brings it back to select, if you want to have explicit control of what is returned, including the order, then do select:\ntable=table.select([ 'N',\n (pl.col('SURVIVORS')*pl.col('N')).alias('SURIVORS_N'),\n (pl.col('DEATHS')*pl.col('N')).alias('DEATHS_N')\n ])\n\nOne note, you can refer to a column by just giving its name, like 'N' in the previous example as long as you don't want to do anything to it. If you want to do something with it (math, rename, anything) then you have to wrap it in pl.col('column_name') so that it becomes an Expression.\n"
] | [
1,
1
] | [
"Maybe something like this:\n# Import the pandas library\nimport pandas as pd\n\n# Load the table data into a DataFrame object\ntable = pd.read_csv(\"table.csv\")\n\n#Create a new DataFrame object with the modified columns .with_columns()\ntable = table.with_columns(\"SURVIVORS\": table[\"SURVIVORS\"]*table[\"N\"], \"DEATHS\": table[\"DEATHS\"]*table[\"N\"])\n\n"
] | [
-2
] | [
"python",
"python_polars"
] | stackoverflow_0074654355_python_python_polars.txt |
Q:
Count number of classes in a semantic segmented image
I have an image that is the output of a semantic segmentation algorithm, for example this one
I looked online and tried many pieces of code but none worked for me so far.
It is clear to the human eye that there are 5 different colors in this image: blue, black, red, and white.
I am trying to write a script in python to analyze the image and return the number of colors present in the image but so far it is not working. There are many pixels in the image which contain values that are a mixture of the colors above.
The code I am using is the following but I would like to understand if there is an easier way in your opinion to achieve this goal.
I think that I need to implement some sort of thresholding that has the following logic:
Is there a similar color to this one? if yes, do not increase the count of colors
Is this color present for more than N pixels? If not, do not increase the count of colors.
from PIL import Image
imgPath = "image.jpg"
img = Image.open(imgPath)
uniqueColors = set()
w, h = img.size
for x in range(w):
for y in range(h):
pixel = img.getpixel((x, y))
uniqueColors.add(pixel)
totalUniqueColors = len(uniqueColors)
print(totalUniqueColors)
print(uniqueColors)
Thanks in advance!
A:
Something has gone wrong - your image has 1277 unique colours, rather than the 5 you suggest.
Have you maybe saved/shared a lossy JPEG rather than the lossless PNG you should prefer for classified images?
A fast method of counting the unique colours with Numpy is as follows:
def withNumpy(img):
# Ignore A channel
px = np.asarray(img)[...,:3]
# Merge RGB888 into single 24-bit integer
px24 = np.dot(np.array(px, np.uint32),[1,256,65536])
# Return number of unique colours
return len(np.unique(px24))
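For completeness, a usage sketch for the function above (the file name is a placeholder, and numpy/PIL are assumed to be imported as shown):
import numpy as np
from PIL import Image

img = Image.open("mask.png").convert("RGB")  # placeholder path; convert() guarantees 3 channels
print(withNumpy(img))                        # number of unique colours in the image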
A:
I solved my issue and I am now able to count colors in images coming from a semantic segmentation dataset (the images must be in .png since it is a lossless format).
Below I try to explain what I have found in the process for a solution and the code I used which should be ready to use (you need to just change the path to the images you want to analyze).
I had two main problems.
The first problem of the color counting was the format of the image. I was using (for some of the tests) .jpeg images that compress the image.
Therefore from something like this
If I zoomed in on the top left corner of the glass (marked in green), I was seeing something like this
Which obviously is not good since it will introduce many more colors than the ones "visible to the human eye"
Instead, for my annotated images I had something like the following
If I zoomed in on the saddle of the bike (marked in green), I had something like this
The second problem was that I did not convert my image into an RGB image.
This is taken care of in the code by the line:
img = Image.open(filename).convert('RGB')
The code is below. For sure it is not the most efficient but for me it does the job. Any suggestion to improve its performance is appreciated
import numpy as np
from PIL import Image
import argparse
import os
debug = False
def main(data_dir):
print("This small script allows you to count the number of different colors in an image")
print("This code has been written to count the number of classes in images from a semantic segmentation dataset")
print("Therefore, it is highly recommended to run this code on lossless images (such as .png ones)")
print("Images are being loaded from: {}".format(data_dir))
directory = os.fsencode(data_dir)
interesting_image_format = ".png"
# I will put in the variable filenames all the paths to the images to be analyzed
filenames = []
for file in os.listdir(directory):
filename = os.fsdecode(file)
if filename.endswith(interesting_image_format):
if debug:
print(os.path.join(directory, filename))
print("Analyzing image: {}".format(filename))
filenames.append(os.path.join(data_dir, filename))
else:
if debug:
print("I am not doing much here...")
continue
# Sort the filenames in an alphabetical order
filenames.sort()
# Analyze the images (i.e., count the different number of colors in the images)
number_of_colors_in_images = []
for filename in filenames:
img = Image.open(filename).convert('RGB')
if debug:
print(img.format)
print(img.size)
print(img.mode)
data_img = np.asarray(img)
if debug:
print(data_img.shape)
uniques = np.unique(data_img.reshape(-1, data_img.shape[-1]), axis=0)
# uncomment the following line if you want information for each analyzed image
print("The number of different colors in image ({}) {} is: {}".format(interesting_image_format, filename, len(uniques)))
# print("uniques.shape[0] for image {} is: {}".format(filename, uniques.shape[0]))
# Put the number of colors of each image into an array
number_of_colors_in_images.append(len(uniques))
print(number_of_colors_in_images)
# Print the maximum number of colors (classes) of all the analyzed images
print(np.max(number_of_colors_in_images))
# Print the average number of colors (classes) of all the analyzed images
print(np.average(number_of_colors_in_images))
def args_preprocess():
# Command line arguments
parser = argparse.ArgumentParser()
parser.add_argument(
"--data_dir", default="default_path_to_images", type=str, help='Specify the directory path from where to take the images of which we want to count the classes')
args = parser.parse_args()
main(args.data_dir)
if __name__ == '__main__':
args_preprocess()
A:
The thing mentioned above about the lossy compression in .jpeg images and lossless compression in .png seems to be a nice thing to point out. But you can use the following piece of code to get the number of classes from a mask.
This is only applicable on .png images. Not tested on .jpeg images.
import cv2 as cv
import numpy as np
img_path = r'C:\Users\Bhavya\Downloads\img.png'
img = cv.imread(img_path)
img = np.array(img, dtype='int32')
pixels = []
for i in range(img.shape[0]):
for j in range(img.shape[1]):
r, g, b = list(img[i, j, :])
pixels.append((r, g, b))
pixels = list(set(pixels))
print(len(pixels))
In this solution I append the (R, G, B) pixel values of the input image to a list, convert the list to a set, and then convert it back to a list. The conversion from list to set removes all the duplicate elements (here, pixel values) and leaves only unique pixel values; the conversion from set back to list is optional and only needed if you want to apply list operations to the pixels later.
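The question also asks about ignoring colours that cover fewer than N pixels; none of the snippets above does that, but a minimal sketch of the idea looks like this (the file name and the threshold value are arbitrary placeholders):
import numpy as np
from PIL import Image

img = np.asarray(Image.open("mask.png").convert("RGB"))       # placeholder path
colours, counts = np.unique(img.reshape(-1, 3), axis=0, return_counts=True)
min_pixels = 50                                               # tune this for your images
kept = colours[counts >= min_pixels]
print(len(kept), "colours cover at least", min_pixels, "pixels")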
| Count number of classes in a semantic segmented image | I have an image that is the output of a semantic segmentation algorithm, for example this one
I looked online and tried many pieces of code but none worked for me so far.
It is clear to the human eye that there are 5 different colors in this image: blue, black, red, and white.
I am trying to write a script in python to analyze the image and return the number of colors present in the image but so far it is not working. There are many pixels in the image which contain values that are a mixture of the colors above.
The code I am using is the following but I would like to understand if there is an easier way in your opinion to achieve this goal.
I think that I need to implement some sort of thresholding that has the following logic:
Is there a similar color to this one? if yes, do not increase the count of colors
Is this color present for more than N pixels? If not, do not increase the count of colors.
from PIL import Image
imgPath = "image.jpg"
img = Image.open(imgPath)
uniqueColors = set()
w, h = img.size
for x in range(w):
for y in range(h):
pixel = img.getpixel((x, y))
uniqueColors.add(pixel)
totalUniqueColors = len(uniqueColors)
print(totalUniqueColors)
print(uniqueColors)
Thanks in advance!
| [
"Something has gone wrong - your image has 1277 unique colours, rather than the 5 you suggest.\nHave you maybe saved/shared a lossy JPEG rather than the lossless PNG you should prefer for classified images?\nA fast method of counting the unique colours with Numpy is as follows:\ndef withNumpy(img):\n # Ignore A channel\n px = np.asarray(img)[...,:3]\n # Merge RGB888 into single 24-bit integer\n px24 = np.dot(np.array(px, np.uint32),[1,256,65536])\n # Return number of unique colours\n return len(np.unique(px24))\n\n",
"I solved my issue and I am now able to count colors in images coming from a semantic segmentation dataset (the images must be in .png since it is a lossless format).\nBelow I try to explain what I have found in the process for a solution and the code I used which should be ready to use (you need to just change the path to the images you want to analyze).\nI had two main problems.\nThe first problem of the color counting was the format of the image. I was using (for some of the tests) .jpeg images that compress the image.\nTherefore from something like this\n\nIf I would zoom in the top left corner of the glass (marked in green) I was seeing something like this\n\nWhich obviously is not good since it will introduce many more colors than the ones \"visible to the human eye\"\nInstead, for my annotated images I had something like the following\n\nIf I zoom in the saddle of the bike (marked in green) I had something like this\n\nThe second problem was that I did not convert my image into an RGB image.\nThis is taken care in the code from the line:\nimg = Image.open(filename).convert('RGB')\nThe code is below. For sure it is not the most efficient but for me it does the job. Any suggestion to improve its performance is appreciated\nimport numpy as np\nfrom PIL import Image\nimport argparse\nimport os\n\ndebug = False\n\ndef main(data_dir):\n print(\"This small script allows you to count the number of different colors in an image\")\n print(\"This code has been written to count the number of classes in images from a semantic segmentation dataset\")\n print(\"Therefore, it is highly recommended to run this code on lossless images (such as .png ones)\")\n print(\"Images are being loaded from: {}\".format(data_dir))\n\n directory = os.fsencode(data_dir)\n interesting_image_format = \".png\"\n \n # I will put in the variable filenames all the paths to the images to be analyzed\n filenames = []\n for file in os.listdir(directory):\n filename = os.fsdecode(file)\n if filename.endswith(interesting_image_format): \n if debug:\n print(os.path.join(directory, filename))\n print(\"Analyzing image: {}\".format(filename))\n filenames.append(os.path.join(data_dir, filename))\n else:\n if debug:\n print(\"I am not doing much here...\")\n continue\n # Sort the filenames in an alphabetical order\n filenames.sort()\n\n # Analyze the images (i.e., count the different number of colors in the images)\n number_of_colors_in_images = []\n for filename in filenames:\n img = Image.open(filename).convert('RGB')\n if debug: \n print(img.format)\n print(img.size)\n print(img.mode)\n data_img = np.asarray(img)\n if debug: \n print(data_img.shape)\n uniques = np.unique(data_img.reshape(-1, data_img.shape[-1]), axis=0)\n # uncomment the following line if you want information for each analyzed image \n print(\"The number of different colors in image ({}) {} is: {}\".format(interesting_image_format, filename, len(uniques)))\n # print(\"uniques.shape[0] for image {} is: {}\".format(filename, uniques.shape[0]))\n \n # Put the number of colors of each image into an array\n number_of_colors_in_images.append(len(uniques))\n \n print(number_of_colors_in_images)\n # Print the maximum number of colors (classes) of all the analyzed images\n print(np.max(number_of_colors_in_images))\n # Print the average number of colors (classes) of all the analyzed images\n print(np.average(number_of_colors_in_images))\n\ndef args_preprocess():\n # Command line arguments\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--data_dir\", 
default=\"default_path_to_images\", type=str, help='Specify the directory path from where to take the images of which we want to count the classes')\n args = parser.parse_args()\n main(args.data_dir)\n\nif __name__ == '__main__':\n args_preprocess()\n\n",
"The thing mentioned above about the lossy compression in .jpeg images and lossless compression in .png seems to be a nice thing to point out. But you can use the following piece of code to get the number of classes from a mask.\n\nThis is only applicable on .png images. Not tested on .jpeg images.\n\nimport cv2 as cv\nimport numpy as np\n\nimg_path = r'C:\\Users\\Bhavya\\Downloads\\img.png'\nimg = cv.imread(img_path)\nimg = np.array(img, dtype='int32')\npixels = []\nfor i in range(img.shape[0]):\n for j in range(img.shape[1]):\n r, g, b = list(img[i, j, :])\n pixels.append((r, g, b))\npixels = list(set(pixels))\nprint(len(pixels))\n\nIn this solution what I have done is appended pair of pixel values(RGB) in the input image to a list and converted the list to set and then back to list. The first conversion of list to set removes all the duplicate elements(here pixel values) and gives unique pixel values and the next conversion from set to list is optional and just to apply some future list operations on the pixels.\n"
] | [
0,
0,
0
] | [] | [] | [
"image_segmentation",
"python",
"python_imaging_library"
] | stackoverflow_0070122809_image_segmentation_python_python_imaging_library.txt |
Q:
How to set Playwright not automatically follow the redirect?
I want to open a website using Playwright,
but I don't want to be automatically redirected.
In some other web clients, they have parameter link follow=False to disable automatically following the redirection. But I can't find it on Playwright.
async def run(playwright):
chromium = playwright.chromium
browser = await chromium.launch()
context = await browser.new_context()
page = await context.new_page()
def handle_response(response):
print(f'status: {response.status} {response.url}')
page.on('response', handle_response)
await page.goto("https://google.com")
await browser.close()
that's the sample code, as we know google.com would respond 301 and will be redirected to www.google.com.
Is it possible to stop the process after I got the 301, so I don't need to continue processing www.google.com and all the responses after that?
From the Request documentation, I got that Page.on('response') emitted when/if the response status and headers are received for the request.
But how to stop the Request after the Page got on('response') callback?
I saw some other questions similar, using Route.abort() or Route.fulfill(), but I still don't get the answer for my case.
thank you for your help.
A:
You can use the wait_for_event method to wait for the response event to be emitted, and then check the response status to see if it is a redirect. If the response is a redirect, you can prevent Playwright from automatically following the redirect by calling the abort method on the response object.
Here is an example of how you can modify the code to prevent Playwright from automatically following redirects:
async def run(playwright):
chromium = playwright.chromium
browser = await chromium.launch()
context = await browser.new_context()
page = await context.new_page()
def handle_response(response):
print(f'status: {response.status} {response.url}')
if response.status >= 300 and response.status < 400: response.abort()
page.on('response', handle_response)
await page.wait_for_event('response')
await page.goto("https://google.com")
await browser.close()
In this example, the handle_response function checks the response status to see if it is a redirect. If it is a redirect, the abort method is called on the response object to prevent Playwright from automatically following the redirect. The wait_for_event method is also used to wait for the response event to be emitted before calling the goto method on the page.
This way, Playwright will not automatically follow the redirect and the handle_response function will be called with the redirect response, allowing you to handle the redirect as needed.
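Separately, if the goal is only to inspect the redirect response rather than render the page, Playwright also has a request API that can be told not to follow redirects. The snippet below is only a sketch and assumes a reasonably recent Playwright version in which the max_redirects argument is available; check your installed version before relying on it:
async def check_redirect(playwright):
    # APIRequestContext sends plain HTTP requests without opening a page
    api = await playwright.request.new_context()
    response = await api.get("https://google.com", max_redirects=0)
    print(response.status, response.headers.get("location"))  # e.g. 301 and the redirect target
    await api.dispose()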
| How to set Playwright not automatically follow the redirect? | I want to open a website using Playwright,
but I don't want to be automatically redirected.
In some other web clients, they have parameter link follow=False to disable automatically following the redirection. But I can't find it on Playwright.
async def run(playwright):
chromium = playwright.chromium
browser = await chromium.launch()
context = await browser.new_context()
page = await context.new_page()
def handle_response(response):
print(f'status: {response.status} {response.url}')
page.on('response', handle_response)
await page.goto("https://google.com")
await browser.close()
that's the sample code, as we know google.com would respond 301 and will be redirected to www.google.com.
Is it possible to stop the process after I got the 301, so I don't need to continue processing www.google.com and all the responses after that?
From the Request documentation, I got that Page.on('response') emitted when/if the response status and headers are received for the request.
But how to stop the Request after the Page got on('response') callback?
I saw some other questions similar, using Route.abort() or Route.fulfill(), but I still don't get the answer for my case.
thank you for your help.
| [
"You can use the wait_for_event method to wait for the response event to be emitted, and then check the response status to see if it is a redirect. If the response is a redirect, you can prevent Playwright from automatically following the redirect by calling the abort method on the response object.\nHere is an example of how you can modify the code to prevent Playwright from automatically following redirects:\nasync def run(playwright):\n chromium = playwright.chromium\n browser = await chromium.launch()\n context = await browser.new_context()\n page = await context.new_page()\n\n def handle_response(response):\n print(f'status: {response.status} {response.url}')\n if response.status >= 300 and response.status < 400: response.abort()\n page.on('response', handle_response)\n\n await page.wait_for_event('response')\n await page.goto(\"https://google.com\")\n await browser.close()\n\nIn this example, the handle_response function checks the response status to see if it is a redirect. If it is a redirect, the abort method is called on the response object to prevent Playwright from automatically following the redirect. The wait_for_event method is also used to wait for the response event to be emitted before calling the goto method on the page.\nThis way, Playwright will not automatically follow the redirect and the handle_response function will be called with the redirect response, allowing you to handle the redirect as needed.\n"
] | [
1
] | [] | [] | [
"playwright",
"playwright_python",
"python"
] | stackoverflow_0071407454_playwright_playwright_python_python.txt |
Q:
Is there a way to use SQL syntax highlighting in triple-quoted literals in Jupyter notebook?
I'm doing ETL development using pyspark in a Jupyter notebook. I generally prefer to use SQL queries instead of pyspark functions, since I find SQL more readable than pyspark functions most of the time. However, SQL queries in Python scripts take the form of literal strings, which are treated as, well, strings, not code, when it comes to coloring the text.
To make development easier for myself, I want to color triple-quoted literal strings the same way that SQL code would be highlighted if it stood on its own in an IDE or text editor.
Is this possible?
For example, take the following code:
print('Hello world')
sql = """
SELECT myid, myname, CAST(mydate AS DATE)
FROM myschema.mytable
WHERE something
"""
execute_sql(sql)
If possible, I would like that string to appear like so, while not changing its characteristics of being a string:
A:
Here's the answer:
https://github.com/CybercentreCanada/jupyterlab-sql-editor
This package has a feature that allows you to highlight SQL syntax within a string as well as run sql directly in the notebook. This is designed specifically for spark-sql, which is what I'm using.
| Is there a way to use SQL syntax highlighting in triple-quoted literals in Jupyter notebook? | I'm doing ETL development using pyspark in a Jupyter notebook. I generally prefer to use SQL queries instead of pyspark functions, since I find SQL more readable than pyspark functions most of the time. However, SQL queries in Python scripts take the form of literal strings, which are treated as, well, strings, not code, when it comes to coloring the text.
To make development easier for myself, I want to color triple-quoted literal strings the same way that SQL code would be highlighted if it stood on its own in an IDE or text editor.
Is this possible?
For example, take the following code:
print('Hello world')
sql = """
SELECT myid, myname, CAST(mydate AS DATE)
FROM myschema.mytable
WHERE something
"""
execute_sql(sql)
If possible, I would like that string to appear like so, while not changing it's characteristics of being a string:
| [
"Here's the answer:\nhttps://github.com/CybercentreCanada/jupyterlab-sql-editor\nThis package has a feature that allows you to highlight SQL syntax within a string as well as run sql directly in the notebook. This is designed specifically for spark-sql, which is what I'm using.\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"pyspark",
"python",
"sql"
] | stackoverflow_0074621188_jupyter_notebook_pyspark_python_sql.txt |
Q:
How to remove the space between subplots in matplotlib.pyplot?
I am working on a project in which I need to put together a plot grid of 10 rows and 3 columns. Although I have been able to make the plots and arrange the subplots, I was not able to produce a nice plot without white space such as this one below from the gridspec documentation.
I tried the following posts, but I am still not able to completely remove the white space as in the example image. Can someone please give me some guidance? Thanks!
Matplotlib different size subplots
how to remove “empty” space
between subplots?
Here's my image:
Below is my code. The full script is here on GitHub.
Note: images_2 and images_fool are both numpy arrays of flattened images with shape (1032, 10), while delta is an image array of shape (28, 28).
def plot_im(array=None, ind=0):
"""A function to plot the image given a images matrix, type of the matrix: \
either original or fool, and the order of images in the matrix"""
img_reshaped = array[ind, :].reshape((28, 28))
imgplot = plt.imshow(img_reshaped)
# Output as a grid of 10 rows and 3 cols with first column being original, second being
# delta and third column being adversaril
nrow = 10
ncol = 3
n = 0
from matplotlib import gridspec
fig = plt.figure(figsize=(30, 30))
gs = gridspec.GridSpec(nrow, ncol, width_ratios=[1, 1, 1])
for row in range(nrow):
for col in range(ncol):
plt.subplot(gs[n])
if col == 0:
#plt.subplot(nrow, ncol, n)
plot_im(array=images_2, ind=row)
elif col == 1:
#plt.subplot(nrow, ncol, n)
plt.imshow(w_delta)
else:
#plt.subplot(nrow, ncol, n)
plot_im(array=images_fool, ind=row)
n += 1
plt.tight_layout()
#plt.show()
plt.savefig('grid_figure.pdf')
A:
A note at the beginning: If you want to have full control over spacing, avoid using plt.tight_layout() as it will try to arrange the plots in your figure to be equally and nicely distributed. This is mostly fine and produces pleasant results, but adjusts the spacing at will.
The reason the GridSpec example you're quoting from the Matplotlib example gallery works so well is because the subplots' aspect is not predefined. That is, the subplots will simply expand on the grid and leave the set spacing (in this case wspace=0.0, hspace=0.0) independent of the figure size.
In contrast to that you are plotting images with imshow and the image's aspect is set equal by default (equivalent to ax.set_aspect("equal")). That said, you could of course put set_aspect("auto") to every plot (and additionally add wspace=0.0, hspace=0.0 as arguments to GridSpec as in the gallery example), which would produce a plot without spacings.
However when using images it makes a lot of sense to keep an equal aspect ratio such that every pixel is as wide as high and a square array is shown as a square image.
What you will need to do then is to play with the image size and the figure margins to obtain the expected result. The figsize argument to figure is the figure (width, height) in inch and here the ratio of the two numbers can be played with. And the subplot parameters wspace, hspace, top, bottom, left can be manually adjusted to give the desired result.
Below is an example:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
nrow = 10
ncol = 3
fig = plt.figure(figsize=(4, 10))
gs = gridspec.GridSpec(nrow, ncol, width_ratios=[1, 1, 1],
wspace=0.0, hspace=0.0, top=0.95, bottom=0.05, left=0.17, right=0.845)
for i in range(10):
for j in range(3):
im = np.random.rand(28,28)
ax= plt.subplot(gs[i,j])
ax.imshow(im)
ax.set_xticklabels([])
ax.set_yticklabels([])
#plt.tight_layout() # do not use this!!
plt.show()
Edit:
It is of course desirable not to have to tweak the parameters manually, so one could calculate some optimal ones according to the number of rows and columns.
nrow = 7
ncol = 7
fig = plt.figure(figsize=(ncol+1, nrow+1))
gs = gridspec.GridSpec(nrow, ncol,
wspace=0.0, hspace=0.0,
top=1.-0.5/(nrow+1), bottom=0.5/(nrow+1),
left=0.5/(ncol+1), right=1-0.5/(ncol+1))
for i in range(nrow):
for j in range(ncol):
im = np.random.rand(28,28)
ax= plt.subplot(gs[i,j])
ax.imshow(im)
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.show()
A:
Try to add to your code this line:
fig.subplots_adjust(wspace=0, hspace=0)
And for every an axis object set:
ax.set_xticklabels([])
ax.set_yticklabels([])
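Put together, a minimal self-contained sketch of this approach looks like the following; random data stands in for the real images, and aspect="auto" is added so that the equal-aspect gaps explained in the first answer above do not reappear:
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(10, 3, figsize=(4, 10))
fig.subplots_adjust(wspace=0, hspace=0)
for ax in axes.flat:
    ax.imshow(np.random.rand(28, 28), aspect="auto")  # let each image stretch to fill its cell
    ax.set_xticklabels([])
    ax.set_yticklabels([])
plt.show()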
A:
Following the answer by ImportanceOfBeingErnest, but if you want to use plt.subplots and its features:
fig, axes = plt.subplots(
nrow, ncol,
gridspec_kw=dict(wspace=0.0, hspace=0.0,
top=1. - 0.5 / (nrow + 1), bottom=0.5 / (nrow + 1),
left=0.5 / (ncol + 1), right=1 - 0.5 / (ncol + 1)),
figsize=(ncol + 1, nrow + 1),
sharey='row', sharex='col', # optionally
)
A:
If you are using matplotlib.pyplot.subplots you can display as many images as you want using Axes arrays. You can remove the spaces between images by making some adjustments to the matplotlib.pyplot.subplots configuration.
import matplotlib.pyplot as plt
def show_dataset_overview(img_list):
    """show each image in img_list without space"""
img_number = len(img_list)
img_number_at_a_row = 3
row_number = int(img_number /img_number_at_a_row)
fig_size = (15*(img_number_at_a_row/row_number), 15)
_, axs = plt.subplots(row_number,
img_number_at_a_row,
figsize=fig_size ,
gridspec_kw=dict(
top = 1, bottom = 0, right = 1, left = 0,
hspace = 0, wspace = 0
)
)
axs = axs.flatten()
for i in range(img_number):
axs[i].imshow(img_list[i])
axs[i].set_xticks([])
axs[i].set_yticks([])
Since we create subplots here first, we can give some parameters for grid_spec using the gridspec_kw parameter(source).
Among these parameters are the "top = 1, bottom = 0, right = 1, left = 0, hspace = 0, wspace = 0" parameters that will prevent inter-image spacing. To see other parameters, please visit here.
I usually use a figure size like (30,15) when setting the figure_size above. I generalized this a bit and added it to the code. If you wish, you can enter a manual size here.
A:
Here's another simple approach using the ImageGrid class (adapted from this answer).
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
nrow = 5
ncol = 3
fig = plt.figure(figsize=(4, 10))
grid = ImageGrid(fig,
111, # as in plt.subplot(111)
nrows_ncols=(nrow,ncol),
axes_pad=0,
share_all=True,)
for row in grid.axes_column:
for ax in row:
im = np.random.rand(28,28)
ax.imshow(im)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
| How to remove the space between subplots in matplotlib.pyplot? | I am working on a project in which I need to put together a plot grid of 10 rows and 3 columns. Although I have been able to make the plots and arrange the subplots, I was not able to produce a nice plot without white space such as this one below from gridspec documentatation..
I tried the following posts, but still not able to completely remove the white space as in the example image. Can someone please give me some guidance? Thanks!
Matplotlib different size subplots
how to remove “empty” space
between subplots?
Here's my image:
Below is my code. The full script is here on GitHub.
Note: images_2 and images_fool are both numpy arrays of flattened images with shape (1032, 10), while delta is an image array of shape (28, 28).
def plot_im(array=None, ind=0):
"""A function to plot the image given a images matrix, type of the matrix: \
either original or fool, and the order of images in the matrix"""
img_reshaped = array[ind, :].reshape((28, 28))
imgplot = plt.imshow(img_reshaped)
# Output as a grid of 10 rows and 3 cols with first column being original, second being
# delta and third column being adversaril
nrow = 10
ncol = 3
n = 0
from matplotlib import gridspec
fig = plt.figure(figsize=(30, 30))
gs = gridspec.GridSpec(nrow, ncol, width_ratios=[1, 1, 1])
for row in range(nrow):
for col in range(ncol):
plt.subplot(gs[n])
if col == 0:
#plt.subplot(nrow, ncol, n)
plot_im(array=images_2, ind=row)
elif col == 1:
#plt.subplot(nrow, ncol, n)
plt.imshow(w_delta)
else:
#plt.subplot(nrow, ncol, n)
plot_im(array=images_fool, ind=row)
n += 1
plt.tight_layout()
#plt.show()
plt.savefig('grid_figure.pdf')
| [
"A note at the beginning: If you want to have full control over spacing, avoid using plt.tight_layout() as it will try to arange the plots in your figure to be equally and nicely distributed. This is mostly fine and produces pleasant results, but adjusts the spacing at its will.\nThe reason the GridSpec example you're quoting from the Matplotlib example gallery works so well is because the subplots' aspect is not predefined. That is, the subplots will simply expand on the grid and leave the set spacing (in this case wspace=0.0, hspace=0.0) independent of the figure size.\nIn contrast to that you are plotting images with imshow and the image's aspect is set equal by default (equivalent to ax.set_aspect(\"equal\")). That said, you could of course put set_aspect(\"auto\") to every plot (and additionally add wspace=0.0, hspace=0.0 as arguments to GridSpec as in the gallery example), which would produce a plot without spacings. \nHowever when using images it makes a lot of sense to keep an equal aspect ratio such that every pixel is as wide as high and a square array is shown as a square image.\nWhat you will need to do then is to play with the image size and the figure margins to obtain the expected result. The figsize argument to figure is the figure (width, height) in inch and here the ratio of the two numbers can be played with. And the subplot parameters wspace, hspace, top, bottom, left can be manually adjusted to give the desired result. \nBelow is an example:\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import gridspec\n\nnrow = 10\nncol = 3\n\nfig = plt.figure(figsize=(4, 10)) \n\ngs = gridspec.GridSpec(nrow, ncol, width_ratios=[1, 1, 1],\n wspace=0.0, hspace=0.0, top=0.95, bottom=0.05, left=0.17, right=0.845) \n\nfor i in range(10):\n for j in range(3):\n im = np.random.rand(28,28)\n ax= plt.subplot(gs[i,j])\n ax.imshow(im)\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n\n#plt.tight_layout() # do not use this!!\nplt.show()\n\n\nEdit:\nIt is of course desireable not having to tweak the parameters manually. So one could calculate some optimal ones according to the number of rows and columns. \nnrow = 7\nncol = 7\n\nfig = plt.figure(figsize=(ncol+1, nrow+1)) \n\ngs = gridspec.GridSpec(nrow, ncol,\n wspace=0.0, hspace=0.0, \n top=1.-0.5/(nrow+1), bottom=0.5/(nrow+1), \n left=0.5/(ncol+1), right=1-0.5/(ncol+1)) \n\nfor i in range(nrow):\n for j in range(ncol):\n im = np.random.rand(28,28)\n ax= plt.subplot(gs[i,j])\n ax.imshow(im)\n ax.set_xticklabels([])\n ax.set_yticklabels([])\n\nplt.show()\n\n",
"Try to add to your code this line:\nfig.subplots_adjust(wspace=0, hspace=0)\n\nAnd for every an axis object set:\nax.set_xticklabels([])\nax.set_yticklabels([])\n\n",
"Following the answer by ImportanceOfBeingErnest, but if you want to use plt.subplots and its features:\nfig, axes = plt.subplots(\n nrow, ncol,\n gridspec_kw=dict(wspace=0.0, hspace=0.0,\n top=1. - 0.5 / (nrow + 1), bottom=0.5 / (nrow + 1),\n left=0.5 / (ncol + 1), right=1 - 0.5 / (ncol + 1)),\n figsize=(ncol + 1, nrow + 1),\n sharey='row', sharex='col', # optionally\n)\n\n",
"If you are using matplotlib.pyplot.subplots you can display as many images as you want using Axes arrays. You can remove the spaces between images by making some adjustments to the matplotlib.pyplot.subplots configuration.\nimport matplotlib.pyplot as plt\n\ndef show_dataset_overview(self, img_list):\n\"\"\"show each image in img_list without space\"\"\"\n img_number = len(img_list)\n img_number_at_a_row = 3\n row_number = int(img_number /img_number_at_a_row) \n fig_size = (15*(img_number_at_a_row/row_number), 15)\n _, axs = plt.subplots(row_number, \n img_number_at_a_row, \n figsize=fig_size , \n gridspec_kw=dict(\n top = 1, bottom = 0, right = 1, left = 0, \n hspace = 0, wspace = 0\n )\n )\n axs = axs.flatten()\n\n for i in range(img_number):\n axs[i].imshow(img_list[i])\n axs[i].set_xticks([])\n axs[i].set_yticks([])\n\nSince we create subplots here first, we can give some parameters for grid_spec using the gridspec_kw parameter(source).\nAmong these parameters are the \"top = 1, bottom = 0, right = 1, left = 0, hspace = 0, wspace = 0\" parameters that will prevent inter-image spacing. To see other parameters, please visit here.\nI usually use a figure size like (30,15) when setting the figure_size above. I generalized this a bit and added it to the code. If you wish, you can enter a manual size here.\n",
"Here's another simple approach using the ImageGrid class (adapted from this answer).\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import ImageGrid\n\nnrow = 5\nncol = 3\nfig = plt.figure(figsize=(4, 10))\ngrid = ImageGrid(fig, \n 111, # as in plt.subplot(111)\n nrows_ncols=(nrow,ncol),\n axes_pad=0,\n share_all=True,)\n\nfor row in grid.axes_column:\n for ax in row:\n im = np.random.rand(28,28)\n ax.imshow(im)\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\n"
] | [
38,
15,
4,
0,
0
] | [] | [] | [
"matplotlib",
"numpy",
"python"
] | stackoverflow_0041071947_matplotlib_numpy_python.txt |
Q:
What is the `current` correct format for Python Docstrings according to PEP standards?
I've been looking all over the web for the current standards for Python Docstrings and I've come across different answers for different scenarios. What is the currently most-accepted and wide-spread docstring format that I should use?
These are the ones that I've found so far:
Sphinx format (1): :param type name: description
Sphinx format (2):
:py:param type name: description
NumPy format: Parameters: __________ param: description
Other formats:
Args: param (type): description
Parameters: param (type): description
I just want to document my code in a standard way that is accepted by almost every IDE (including VS Code and PyCharm) that also conforms to PEP and readthedocs, so I can also enable hover-over with mouse over the code to see description of the arguments.
I'm looking for current standards that are at least backwards compatible with Python 3.6 a since that's the base of the projects I work on.
A:
PEP 257 (Docstring Conventions) defines the general rules for docstrings, such as where they go, how they are quoted and how one-line summaries are written, but it deliberately does not prescribe a syntax for documenting parameters. For parameters, the section-based style shown below (close to the Google and NumPy conventions) is understood by most IDEs, including VS Code and PyCharm, and can be rendered by Sphinx through the napoleon extension.
A common way to document function parameters in that style is:
def function_name(param1: type, param2: type) -> return_type:
"""
Description of the function and its arguments.
Parameters:
param1 (type): Description of the first parameter.
param2 (type): Description of the second parameter.
Returns:
return_type: Description of the return value.
"""
This format is compatible with Python 3.6 and later versions, and is also backward compatible with older versions of Python. It is recommended to follow this format for documenting function arguments in your code to ensure consistency and compatibility with different tools and IDEs.
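For comparison, the same function written in the Sphinx reST field style mentioned in the question would look roughly like this (the concrete types are placeholders):
def function_name(param1, param2):
    """Description of the function.

    :param param1: Description of the first parameter.
    :type param1: int
    :param param2: Description of the second parameter.
    :type param2: str
    :returns: Description of the return value.
    :rtype: bool
    """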
| What is the `current` correct format for Python Docstrings according to PEP standards? | I've been looking all over the web for the current standards for Python Docstrings and I've come across different answers for different scenarios. What is the currently most-accepted and wide-spread docstring format that I should use?
These are the ones that I've found so far:
Sphinx format (1): :param type name: description
Sphinx format (2):
:py:param type name: description
NumPy format: Parameters: __________ param: description
Other formats:
Args: param (type): description
Parameters: param (type): description
I just want to document my code in a standard way that is accepted by almost every IDE (including VS Code and PyCharm) that also conforms to PEP and readthedocs, so I can also enable hover-over with mouse over the code to see description of the arguments.
I'm looking for current standards that are at least backwards compatible with Python 3.6 a since that's the base of the projects I work on.
| [
"The most widely accepted and standardized format for Python docstrings is the one defined in the PEP 257 - Docstring Conventions. This format is supported by most IDEs, including VS Code and PyCharm, and is also used by the Sphinx and NumPy documentation tools.\nThe PEP 257 format for documenting function parameters is as follows:\ndef function_name(param1: type, param2: type) -> return_type:\n \"\"\"\n Description of the function and its arguments.\n\n Parameters:\n param1 (type): Description of the first parameter.\n param2 (type): Description of the second parameter.\n\n Returns:\n return_type: Description of the return value.\n \"\"\"\n\nThis format is compatible with Python 3.6 and later versions, and is also backward compatible with older versions of Python. It is recommended to follow this format for documenting function arguments in your code to ensure consistency and compatibility with different tools and IDEs.\n"
] | [
1
] | [] | [] | [
"docstring",
"python",
"python_3.x"
] | stackoverflow_0074655149_docstring_python_python_3.x.txt |
Q:
import pyautogui does not work in VSCode, despite having everything installed
Im learning Python at the moment and I am trying to work with pyautogui at the moment but I have encountered a very basic problem and while I found other questions like this, I did not find a solution.
My "Setup":
I am on Windows 10 64 bit, I have installed python 3.11, I have the 22.3.1 pip version and the 0.9.53 version of pyautogui installed
I am using Visual Studio Code.
Now I want to just simply move my mouse a bit, nothing special. But I am getting stuck at the very beginning, when I try to import pyautogui.
it looks like this:
The Problem Tab just mentions "pyautogui: Unknown word".
The thing is, if I test it in the terminal, it works without a problem it just seems like VSCode cant import pyautogui.
I have tried uninstalling and reinstalling Python, the package and creating new files. Nothing seems to really work.
A:
This is not a Visual Studio Code problem. It happens because you have the Code Spell Checker extension installed in VS Code. That extension checks the spelling of the word pyautogui, and because the word is not in its dictionary, it highlights it.
This extension only checks the spelling. So your code can run without error.
Ways to solve this error.
Hover over the word pyautogui and click Quick Fix and then add this word to your user or workspace dictionary. (Shortcut for Quick fix. Ctrl + .)
Go to the extension tab and search this streetsidesoftware.code-spell-checker. Then disable or uninstall the extension
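For reference, the first option above simply writes the word into the spell checker's settings; doing the same thing by hand in settings.json looks like this (assuming the standard Code Spell Checker setting key cSpell.words):
{
    "cSpell.words": [
        "pyautogui"
    ]
}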
| import pyautogui does not work in VSCode, despite having everything installed | Im learning Python at the moment and I am trying to work with pyautogui at the moment but I have encountered a very basic problem and while I found other questions like this, I did not find a solution.
My "Setup":
I am on Windows 10 64 bit, I have installed python 3.11, I have the 22.3.1 pip version and the 0.9.53 version of pyautogui installed
I am using Visual Studio Code.
Now I want to just simply move my mouse a bit, nothing special. But I am getting stuck at the very beginning, when I try to import pyautogui.
it looks like this:
The Problem Tab just mentions "pyautogui: Unknown word".
The thing is, if I test it in the terminal, it works without a problem it just seems like VSCode cant import pyautogui.
I have tried uninstalling and reinstalling Python, the package and creating new files. Nothing seems to really work.
| [
"This is not the visual studio code problem, It is because you have the Code Spell Checker extension installed in your VScode. This extension checks the spelling of the pyautogui word and because the extension is not found in their dictionary it highlights the word.\nThis extension only checks the spelling. So your code can run without error.\nWays to solve this error.\n\nHover over the word pyautogui and click Quick Fix and then add this word to your user or workspace dictionary. (Shortcut for Quick fix. Ctrl + .)\n\nGo to the extension tab and search this streetsidesoftware.code-spell-checker. Then disable or uninstall the extension \n\n"
] | [
2
] | [] | [] | [
"pyautogui",
"python",
"python_import"
] | stackoverflow_0074655148_pyautogui_python_python_import.txt |
Q:
Inserting python variable in SPARQL
I have a string variable I want to pass in my SPARQL query and I can't get it to work.
title = 'Good Will Hunting'
[str(s) for s, in graph.query('''
PREFIX ddis: <http://ddis.ch/atai/>
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX schema: <http://schema.org/>
SELECT ?lbl WHERE {
?movie rdfs:label $title@en .
?movie wdt:P57 ?director .
?director rdfs:label ?lbl .
}
''')]
It doesn't work and I get an error. The query is correct, as it works if I manually enter the name in place of $title.
A:
String interpolation in python can be achieved with the %s symbol (for string variables):
title = 'Good Will Hunting'
[str(s) for s, in graph.query('''
PREFIX ddis: <http://ddis.ch/atai/>
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX schema: <http://schema.org/>
SELECT ?lbl WHERE {
?movie rdfs:label "%s"@en .
?movie wdt:P57 ?director .
?director rdfs:label ?lbl .
}
''' % title)]
Note that I also added quotes ("%s"), that are necessary for specifying a string in SPARQL.
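If graph is an rdflib Graph (which the graph.query call suggests), another option is to skip string formatting entirely and bind the variable with initBindings. This is a sketch under that assumption, reusing the graph object from the question:
from rdflib import Literal

query = '''
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>

    SELECT ?lbl WHERE {
        ?movie rdfs:label ?title .
        ?movie wdt:P57 ?director .
        ?director rdfs:label ?lbl .
    }
'''
results = graph.query(query, initBindings={'title': Literal('Good Will Hunting', lang='en')})
print([str(s) for s, in results])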
| Inserting python variable in SPARQL | I have a string variable I want to pass in my SPARQL query and I can't get it to work.
title = 'Good Will Hunting'
[str(s) for s, in graph.query('''
PREFIX ddis: <http://ddis.ch/atai/>
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX schema: <http://schema.org/>
SELECT ?lbl WHERE {
?movie rdfs:label $title@en .
?movie wdt:P57 ?director .
?director rdfs:label ?lbl .
}
''')]
It doesn't work and I get an error. The query is correct as it works if I manualy enter the name when I replace $title.
| [
"String interpolation in python can be achieved with the %s symbol (for string variables):\ntitle = 'Good Will Hunting'\n\n[str(s) for s, in graph.query('''\n PREFIX ddis: <http://ddis.ch/atai/> \n PREFIX wd: <http://www.wikidata.org/entity/> \n PREFIX wdt: <http://www.wikidata.org/prop/direct/> \n PREFIX schema: <http://schema.org/> \n \n SELECT ?lbl WHERE {\n ?movie rdfs:label \"%s\"@en .\n ?movie wdt:P57 ?director .\n ?director rdfs:label ?lbl .\n }\n ''' % title)]\n\nNote that I also added quotes (\"%s\"), that are necessary for specifying a string in SPARQL.\n"
] | [
0
] | [] | [] | [
"python",
"sparql",
"variables"
] | stackoverflow_0074654886_python_sparql_variables.txt |
Q:
Why are these for loops running incredibly slow in python?
I'm working on a short programme which will create a 2d array for the Tabula Recta. However, I am finding it is running incredibly slow, especially the further down the for loop I get. What is the reason for this, as I've never had a for loop run so slow before, which is especially confusing as I don't think any of the operations are that expensive.
The code looks like this:
def initTabulaRecta(alphabet = "abcdefghijklmnopqrstuvwxyz"):
tabularecta = [[]]
for let in " "+alphabet.upper(): tabularecta[0].append(let) # Adds the top layer to the table
for i in range(len(alphabet)):
tabularecta.append([alphabet.upper()[0]]) # Adds the side layer to the table
for let in alphabet: tabularecta[i+1].append(let) # Fills in each letter for that row
alphabet = alphabet[:len(alphabet)-1] + alphabet[1:] # Changes order of alphabet, for next row
A:
It looks like the code is running slowly because of the line alphabet = alphabet[:len(alphabet)-1] + alphabet[1:]. Each of the two slices is only one character shorter than the whole string, so their concatenation makes alphabet roughly twice as long on every pass of the outer loop (26 → 50 → 98 → ...). After a couple of dozen iterations the string is hundreds of millions of characters long, and both the slicing and the inner for let in alphabet loop have to walk all of it, which is why the code gets dramatically slower the further it gets.
One way to improve the performance of this code would be to use a different approach for the inner for loop. For example, instead of changing the alphabet string on each iteration of the loop, you could create a new string that contains the letters of the alphabet in the correct order, and then use that string to fill in the values for each row of the tabularecta array. This would allow the loop to run at a consistent speed, rather than getting slower over time.
The data structure is not the bottleneck here (tabularecta is already a list of lists), so the key fix is simply to stop growing the alphabet string: keep it at its original 26 characters and derive each row from the original alphabet rather than from the previous, mutated one.
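For reference, here is a sketch of what the rotation was probably meant to be: shifting the alphabet by one position per row keeps the string at a constant 26 characters, so every row costs the same amount of work.
def init_tabula_recta(alphabet="abcdefghijklmnopqrstuvwxyz"):
    table = [[" "] + list(alphabet.upper())]        # header row
    for i in range(len(alphabet)):
        rotated = alphabet[i:] + alphabet[:i]       # constant-length rotation
        table.append([rotated[0].upper()] + list(rotated))
    return table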
| Why are these for loops running incredibly slow in python? | I'm working on a short programme which will create a 2d array for the Tabula Recta. However, I am finding it is running incredibly slow, especially the further down the for loop I get. What is the reason for this, as I've never had a for loop run so slow before, which is especially confusing as I don't think any of the operations are that expensive.
The code looks like this:
def initTabulaRecta(alphabet = "abcdefghijklmnopqrstuvwxyz"):
tabularecta = [[]]
for let in " "+alphabet.upper(): tabularecta[0].append(let) # Adds the top layer to the table
for i in range(len(alphabet)):
tabularecta.append([alphabet.upper()[0]]) # Adds the side layer to the table
for let in alphabet: tabularecta[i+1].append(let) # Fills in each letter for that row
alphabet = alphabet[:len(alphabet)-1] + alphabet[1:] # Changes order of alphabet, for next row
| [
"It looks like the code is running slowly because of the way the inner for loop is being implemented. Specifically, the line alphabet = alphabet[:len(alphabet)-1] + alphabet[1:] is causing the loop to run slower over time because it is changing the length of the alphabet string on each iteration of the loop. This means that the loop has to do more work on each subsequent iteration, which can make it run slower and slower.\nOne way to improve the performance of this code would be to use a different approach for the inner for loop. For example, instead of changing the alphabet string on each iteration of the loop, you could create a new string that contains the letters of the alphabet in the correct order, and then use that string to fill in the values for each row of the tabularecta array. This would allow the loop to run at a consistent speed, rather than getting slower over time.\nAnother way to improve the performance of this code would be to use a different data structure for the tabularecta array. A 2D array is not the most efficient data structure for this kind of problem, and using a different data structure (such as a dictionary or a list of lists) could potentially make the code run faster.\nOverall, there are a few different ways you could improve the performance of this code, but the key is to avoid changing the length of the alphabet string on each iteration of the inner for loop. Using a different data structure and/or a different approach for the inner for loop should help to make the code run more efficiently.\n"
] | [
-1
] | [] | [] | [
"iteration",
"performance",
"python"
] | stackoverflow_0074655196_iteration_performance_python.txt |
Q:
How to scrape multiple href values?
Hello, I want to pull the links from this page. I can already extract the other information in that area with my own methods, but I just need the links. How can I scrape the links? (Python, BeautifulSoup)
make_list = base_soup.findAll('div', {'a class': 'link--muted no--text--decoration result-item'})
one_make = make_list.findAll('href')
print(one_make)
The structure to extract the data is as follows:
<div class="cBox-body cBox-body--eyeCatcher" data-testid="no-top">
<a class="link--muted no--text--decoration result-item" href="https://link structure"
Every single link I want to collect is here.(link structure)
I tried methods like the one above. Thank you very much in advance for your help.
A:
Note: In newer code avoid old syntax findAll() instead use find_all() or select() with css selectors - For more take a minute to check docs
Iterate your ResultSet and extract the value of href attribute:
make_list = soup.find_all('a', {'class': 'link--muted no--text--decoration result-item'})
for e in make_list:
print(e.get('href'))
Example
from bs4 import BeautifulSoup
html='''
<div class="cBox-body cBox-body--eyeCatcher" data-testid="no-top">
<a class="link--muted no--text--decoration result-item" href="https://link structure"></a>
</div>
<div class="cBox-body cBox-body--eyeCatcher" data-testid="no-top">
<a class="link--muted no--text--decoration result-item" href="https://link structure"></a>
</div>
'''
soup = BeautifulSoup(html)
make_list = soup.find_all('a', {'class': 'link--muted no--text--decoration result-item'})
for e in make_list:
print(e.get('href'))
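The same thing with select() and a CSS selector, as mentioned above, would be roughly (reusing the soup from the example):
for e in soup.select('a.link--muted.no--text--decoration.result-item'):
    print(e.get('href'))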
A:
This is an example of code on how you can achieve that
from bs4 import BeautifulSoup
html = '''
<div class="cBox-body cBox-body--eyeCatcher" data-testid="no-top">
<a class="link--muted no--text--decoration result-item" href="https://link structure"></a>
</div>
<div class="cBox-body cBox-body--eyeCatcher" data-testid="no-top">
<a class="link--muted no--text--decoration result-item" href="https://link example.2"></a>
</div>
'''
soup = BeautifulSoup(html, features="lxml")
anchors = soup.find_all('a')
for anchor in anchors:
print(anchor['href'])
Alternatively, you can use a third-party service such as WebScrapingAPI to achieve your goal. I recommend this service because it is beginner friendly and it offers CSS extraction and many advanced features such as IP rotation, JavaScript rendering, CAPTCHA solving, custom geolocation and more, which you can find out about by checking the docs. This is an example of how you can get links from a webpage using WebScrapingAPI:
from bs4 import BeautifulSoup
import requests
import json
API_KEY = '<YOUR-API-KEY>'
SCRAPER_URL = 'https://api.webscrapingapi.com/v1'
TARGET_URL = 'https://docs.webscrapingapi.com/'
PARAMS = {
"api_key":API_KEY,
"url": TARGET_URL,
"extract_rules": '{"linksList": {"selector": "a[href]", "output": "html", "all": 1 }}',
}
response = requests.get(SCRAPER_URL, params=PARAMS)
parsed_result = json.loads(response.text)
linksList = parsed_result['linksList']
for link in linksList:
soup = BeautifulSoup(link, features='lxml')
print(soup.find('a').get('href'))
If you are interested you can check more information about this on our Extraction Rules Docs
| How to scrape multiple href values? |
Hello, I want to pull the links from this page. All the knowledge in that field comes in according to my own methods. But I just need the links. How can I scrape links?(Pyhton-Beautifulsoup)
make_list = base_soup.findAll('div', {'a class': 'link--muted no--text--decoration result-item'})
one_make = make_list.findAll('href')
print(one_make)
The structure to extract the data is as follows:
<div class="cBox-body cBox-body--eyeCatcher" data-testid="no-top"> == $0
<a class="link--muted no--text--decoration result-item" href="https://link structure"
Every single link I want to collect is here.(link structure)
I tried methods like.Thank you very much in advance for your help.
| [
"Note: In newer code avoid old syntax findAll() instead use find_all() or select() with css selectors - For more take a minute to check docs\nIterate your ResultSet and extract the value of href attribute:\nmake_list = soup.find_all('a', {'class': 'link--muted no--text--decoration result-item'})\n for e in make_list:\n print(e.get('href'))\n\nExample\nfrom bs4 import BeautifulSoup\n\nhtml='''\n<div class=\"cBox-body cBox-body--eyeCatcher\" data-testid=\"no-top\">\n <a class=\"link--muted no--text--decoration result-item\" href=\"https://link structure\"></a>\n</div>\n<div class=\"cBox-body cBox-body--eyeCatcher\" data-testid=\"no-top\">\n <a class=\"link--muted no--text--decoration result-item\" href=\"https://link structure\"></a>\n</div>\n'''\nsoup = BeautifulSoup(html)\n\nmake_list = soup.find_all('a', {'class': 'link--muted no--text--decoration result-item'})\nfor e in make_list:\n print(e.get('href'))\n\n",
"This is an example of code on how you can achieve that\nfrom bs4 import BeautifulSoup\n\nhtml = ''' \n<div class=\"cBox-body cBox-body--eyeCatcher\" data-testid=\"no-top\"> == $0\n <a class=\"link--muted no--text--decoration result-item\" href=\"https://link structure\"></a>\n</div>\n<div class=\"cBox-body cBox-body--eyeCatcher\" data-testid=\"no-top\"> == $0\n <a class=\"link--muted no--text--decoration result-item\" href=\"https://link example.2\"></a>\n</div>\n'''\n\nsoup = BeautifulSoup(html, features=\"lxml\")\nanchors = soup.find_all('a')\n\nfor anchor in anchors:\n print(anchor['href'])\n\nAlternatively, you can use a third-party service such as WebScrapingAPI to achieve your goal. I recommend this service since because it is beginner friendly and it offers CSS extracting and many advanced features such as IP rotations, rendering javascript, CAPTCHA solving, custom geolocation and many more which you can find out about by checking the docs. This in an example of how you can get links from a webpage using WebScrapingAPI:\nfrom bs4 import BeautifulSoup\nimport requests\nimport json\n\nAPI_KEY = '<YOUR-API-KEY>'\n\nSCRAPER_URL = 'https://api.webscrapingapi.com/v1'\nTARGET_URL = 'https://docs.webscrapingapi.com/'\n\nPARAMS = {\n \"api_key\":API_KEY,\n \"url\": TARGET_URL,\n \"extract_rules\": '{\"linksList\": {\"selector\": \"a[href]\", \"output\": \"html\", \"all\": 1 }}',\n}\n\nresponse = requests.get(SCRAPER_URL, params=PARAMS)\nparsed_result = json.loads(response.text)\n\nlinksList = parsed_result['linksList']\n\nfor link in linksList:\n soup = BeautifulSoup(link, features='lxml')\n print(soup.find('a').get('href'))\n\nIf you are interested you can check more information about this on our Extraction Rules Docs\n"
] | [
0,
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074653863_beautifulsoup_python_web_scraping.txt |
Q:
pytesseract import error on anaconda
I installed the pytesseract module using the following command,
sudo pip install -U pytesseract
But when I import the pytesseract module in a program run from Spyder, it shows
import pytesseract
ImportError: No module named pytesseract
Could you please give a solution for this issue
A:
If you are using anaconda, try:
conda install -c auto pytesseract
A:
Alternatively, if you are not using Anaconda, you can try the following:
open cmd.exe as administrator and type in
python -m pip install --user pytesseract
A:
You can try to download the file locally from this link (https://pypi.org/project/pytesseract/), extract the tar.gz, and run the command python setup.py install.
Also make sure you have installed Tesseract-OCR too; a quick sanity check is sketched after the dependency list below.
A few dependencies needed while installing pytesseract:
1. PIL
2. OLE File
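If the import then works but OCR still fails, the Tesseract binary itself is probably not on PATH; a minimal check (the image name and the Windows path below are only placeholders for illustration) might look like:
from PIL import Image
import pytesseract

# if the binary is not on PATH, point pytesseract at it explicitly, e.g. on Windows:
# pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

print(pytesseract.get_tesseract_version())                    # confirms the binary is reachable
print(pytesseract.image_to_string(Image.open("sample.png")))  # OCR a test image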
A:
pip install pytesseract
Try to update the conda version.
| pytesseract import error on anaconda | I import pytesseract module by using the following command,
sudo pip install -U pytesseract
But while I import pytesseract module to a program which is compile on spyder shows
import pytesseract
ImportError: No module named pytesseract
Could you please give a solution for this issue
| [
"If you are using anaconda, try:\nconda install -c auto pytesseract\n\n",
"alternatively, if not using anaconda,you can try:\nopen cmd.exe as administrator \ntype in \npython -m pip install --user pytesseract\n\n",
"you can try to download the file locally from this link (https://pypi.org/project/pytesseract/) and extract tar.gz and use the command python setup.py install\nand also make sure you have installed Tesseract-ocr to.\nFew dependencies while installing pytesseract\n:\n1. PIL\n2. OLE File\n",
"pip install pytesseract\n\n\nTry to update the conda version.\n\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"import",
"python",
"python_tesseract"
] | stackoverflow_0049525880_import_python_python_tesseract.txt |
Q:
How to convert dictionary to default dictionary?
mapfile = {
1879048192: 0,
1879048193: 0,
1879048194: 0,
1879048195: 0,
1879048196: 4,
1879048197: 3,
1879048198: 2,
1879048199: 17,
1879048200: 0,
1879048201: 1,
1879048202: 0,
1879048203: 0,
1879048204: 4,
# intentionally missing byte
1879048206: 2,
1879048207: 1,
1879048208: 0 # single byte cannot make up a dword
}
_buf = {}
for x in (x for x in mapfile.keys() if 0==x%4):
try:
s = "0x{0:02x}{1:02x}{2:02x}{3:02x}".format(mapfile[x+3], mapfile[x+2],
mapfile[x+1], mapfile[x+0])
print "offset ", x, " value ", s
_buf[x] = int(s, 16)
except KeyError as e:
print "bad key ", e
print "_buf is ", _buf
Since I am using a plain dictionary, I am getting a KeyError. The plan is to make the dictionary a defaultdict(int) so that it pads a zero whenever a key is missing, but I didn't find any solution. How can I convert a dictionary to a default dictionary?
A:
You can convert a dictionary to a defaultdict:
>>> a = {1:0, 2:1, 3:0}
>>> from collections import defaultdict
>>> defaultdict(int,a)
defaultdict(<type 'int'>, {1: 0, 2: 1, 3: 0})
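Applied to the question's code, converting mapfile once up front lets the loop drop the try/except entirely; a minimal sketch (the key list is materialised first, because reading a missing key inserts it into a defaultdict, and mutating a dict while iterating over a live keys() view fails on Python 3):
from collections import defaultdict

mapfile = defaultdict(int, mapfile)   # missing bytes now read as 0 instead of raising KeyError

_buf = {}
for x in [k for k in mapfile.keys() if 0 == k % 4]:
    s = "0x{0:02x}{1:02x}{2:02x}{3:02x}".format(mapfile[x+3], mapfile[x+2],
                                                mapfile[x+1], mapfile[x+0])
    _buf[x] = int(s, 16)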
A:
Instead of re-creating the dictionary, you can use get(key, default). key is the key that you want to retrieve and default is the value to be returned if the key isn't in the dictionary:
my_dict = { }
print my_dict.get('non_existent_key', 0)
>> 0
A:
I think the KeyError can be solved here by setting 0 as the default value.
In [5]: mydict = {1:4}
In [6]: mydict.get(1, 0)
Out[6]: 4
In [7]: mydict.get(2, 0)
Out[7]: 0
Hope this helps. You can change your code to something like mapfile.get(x+3, 0).
OR
from collections import defaultdict
mydict = {1:4}
mydefaultdict = defaultdict(int, mydict)
>>>mydefaultdict[1]
4
>>>mydefaultdict[2]
0
A:
Although you could convert the dictionary to a defaultdict simply by:
mapfile = collections.defaultdict(int, mapfile)
In your example code it might be better to just make it part of the creation process:
mapfile = collections.defaultdict(int, {
1879048192: 0,
1879048193: 0,
1879048194: 0,
1879048195: 0,
1879048196: 4,
1879048197: 3,
1879048198: 2,
1879048199: 17,
1879048200: 0,
1879048201: 1,
1879048202: 0,
1879048203: 0,
1879048204: 4,
# intentionally missing byte
1879048206: 2,
1879048207: 1,
1879048208: 0 # single byte cannot make up a dword
})
print(mapfile[1879048205]) # -> 0
print(mapfile['bogus']) # -> 0
Yet another alternative would be to derive your own class. It wouldn't require much additional code to implement a dictionary-like class that not only supplied values for missing keys like defaultdicts do, but also did a little sanity-checking on them. Here's an example of what I mean — a dict-like class that only accepts missing keys if they're some sort of integer, rather than just about anything like a regular defaultdict:
import numbers
class MyDefaultIntDict(dict):
default_value = 0
def __missing__(self, key):
if not isinstance(key, numbers.Integral):
raise KeyError('{!r} is an invalid key'.format(key))
self[key] = self.default_value
return self.default_value
mapfile = MyDefaultIntDict({
1879048192: 0,
1879048193: 0,
1879048194: 0,
1879048195: 0,
1879048196: 4,
1879048197: 3,
1879048198: 2,
1879048199: 17,
1879048200: 0,
1879048201: 1,
1879048202: 0,
1879048203: 0,
1879048204: 4,
# intentionally missing byte
1879048206: 2,
1879048207: 1,
1879048208: 0 # single byte cannot make up a dword
})
print(mapfile[1879048205]) # -> 0
print(mapfile['bogus']) # -> KeyError: "'bogus' is an invalid key"
A:
defaultdict(lambda: 0, mapfile)
| How to convert dictionary to default dictionary? | mapfile = {
1879048192: 0,
1879048193: 0,
1879048194: 0,
1879048195: 0,
1879048196: 4,
1879048197: 3,
1879048198: 2,
1879048199: 17,
1879048200: 0,
1879048201: 1,
1879048202: 0,
1879048203: 0,
1879048204: 4,
# intentionally missing byte
1879048206: 2,
1879048207: 1,
1879048208: 0 # single byte cannot make up a dword
}
_buf = {}
for x in (x for x in mapfile.keys() if 0==x%4):
try:
s = "0x{0:02x}{1:02x}{2:02x}{3:02x}".format(mapfile[x+3], mapfile[x+2],
mapfile[x+1], mapfile[x+0])
print "offset ", x, " value ", s
_buf[x] = int(s, 16)
except KeyError as e:
print "bad key ", e
print "_buf is ", _buf
Since I am using dictionary, I am getting KeyError. Plan is to make dictionary as defaultdict(int) so that in default dictionary it will pad zero when there is KeyError. But I didn't find any solution. How I can convert dictionary to default dictionary?
| [
"You can convert a dictionary to a defaultdict:\n>>> a = {1:0, 2:1, 3:0}\n>>> from collections import defaultdict\n>>> defaultdict(int,a)\ndefaultdict(<type 'int'>, {1: 0, 2: 1, 3: 0})\n\n",
"Instead of re-creating the dictionary, you can use get(key, default). key is the key that you want to retrieve and default is the value to be returned if the key isn't in the dictionary:\nmy_dict = { }\nprint my_dict.get('non_existent_key', 0)\n>> 0\n\n",
"I think, KeyError can be solved here by setting 0 as default value.\nIn [5]: mydict = {1:4}\n\nIn [6]: mydict.get(1, 0)\nOut[6]: 4\n\nIn [7]: mydict.get(2, 0)\nOut[7]: 0\n\nHope this helps. You can change your code to something like mapfile.get([x+3], 0).\nOR\nfrom collections import defaultdict\nmydict = {1:4}\nmydefaultdict = defaultdict(int, mydict)\n>>>mydefaultdict[1]\n4\n>>>mydefaultdict[2]\n0\n\n",
"Although you could convert the dictionary to a defaultdict simply by:\nmapfile = collections.defaultdict(int, mapfile)\n\nIn your example code it might be better just make it part of the creation process:\nmapfile = collections.defaultdict(int, {\n 1879048192: 0,\n 1879048193: 0,\n 1879048194: 0,\n 1879048195: 0,\n 1879048196: 4,\n 1879048197: 3,\n 1879048198: 2,\n 1879048199: 17,\n 1879048200: 0,\n 1879048201: 1,\n 1879048202: 0,\n 1879048203: 0,\n 1879048204: 4,\n # intentionally missing byte\n 1879048206: 2,\n 1879048207: 1,\n 1879048208: 0 # single byte cannot make up a dword\n})\n\nprint(mapfile[1879048205]) # -> 0\nprint(mapfile['bogus']) # -> 0\n\nYet another alternative would be to derive your own class. It wouldn't require much additional code to implement a dictionary-like class that not only supplied values for missing keys like defaultdicts do, but also did a little sanity-checking on them. Here's an example of what I mean — a dict-like class that only accepts missing keys if they're some sort of integer, rather than just about anything like a regular defaultdict:\nimport numbers\n\nclass MyDefaultIntDict(dict):\n default_value = 0\n def __missing__(self, key):\n if not isinstance(key, numbers.Integral):\n raise KeyError('{!r} is an invalid key'.format(key))\n self[key] = self.default_value\n return self.default_value\n\nmapfile = MyDefaultIntDict({\n 1879048192: 0,\n 1879048193: 0,\n 1879048194: 0,\n 1879048195: 0,\n 1879048196: 4,\n 1879048197: 3,\n 1879048198: 2,\n 1879048199: 17,\n 1879048200: 0,\n 1879048201: 1,\n 1879048202: 0,\n 1879048203: 0,\n 1879048204: 4,\n # intentionally missing byte\n 1879048206: 2,\n 1879048207: 1,\n 1879048208: 0 # single byte cannot make up a dword\n})\n\nprint(mapfile[1879048205]) # -> 0\nprint(mapfile['bogus']) # -> KeyError: \"'bogus' is an invalid key\"\n\n",
"defaultdict(lambda: 0, mapfile)\n\n"
] | [
13,
8,
4,
2,
0
] | [] | [] | [
"defaultdict",
"dictionary",
"python"
] | stackoverflow_0031581751_defaultdict_dictionary_python.txt |
Q:
nested loop returns the same results for multiple rows while webscraping - beautiful soup
I'm trying to scrape an apartment website and it's not looping. I get different apartments but the rest of the information is the same. Yesterday it was pulling a different address.
url = "https://www.apartments.com/atlanta-ga/?bb=lnwszyjy-H4lu8uqH"
header = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36'}
page = requests.get(url, headers=header)
soup = BeautifulSoup(page.content, 'html.parser')
lists = soup.find_all('section', class_="placard-content")
properties = soup.find_all('li', class_="mortar-wrapper")
addresses = soup.find_all('a', class_="property-link")
for list in lists:
price = list.find('p', class_="property-pricing").text
beds = list.find('p', class_="property-beds").text
for address in addresses:
location = address.find('div', class_="property-address js-url").text
for property in properties:
name = property.find('span', class_="js-placardTitle title").text
info = [name,location,beds,price]
            print(info)
Here is the output I'm getting
['Broadstone Pullman', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['1660 Peachtree Midtown', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Mira at Midtown Union', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Alexan Summerhill', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['1824 Defoor', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['3005 Buckhead', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['AMLI Westside', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Novel O4W', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['The Cliftwood', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Ellington Midtown', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
A:
Try:
import requests
import pandas as pd
from bs4 import BeautifulSoup
url = "https://www.apartments.com/atlanta-ga/?bb=lnwszyjy-H4lu8uqH"
header = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
}
page = requests.get(url, headers=header)
soup = BeautifulSoup(page.content, "html.parser")
all_data = []
for art in soup.select("article:has(a)"):
name, addr = art.a.get_text(strip=True, separator="|").split("|")
info = art.select_one(".property-info")
pricing = info.select_one(".property-pricing").text
beds = info.select_one(".property-beds").text
amenities = {s.text: "X" for s in info.select(".property-amenities span")}
phone = info.select_one(".phone-link").text.strip()
all_data.append([name, addr, pricing, beds, phone, amenities])
df = pd.DataFrame(
all_data, columns=["Name", "Addr", "Pricing", "Beds", "Phone", "Amenities"]
)
df = pd.concat([df, df.pop("Amenities").apply(pd.Series)], axis=1)
df = df.fillna("")
print(df.head().to_markdown(index=False))
df.to_csv("data.csv", index=False)
Prints (first five rows; roughly 28 amenity columns, from Dog & Cat Friendly and Fitness Center through Maintenance on site and Disposal, follow to the right of Phone, each marked with an X where the listing offers it):

| Name                   | Addr                                    | Pricing        | Beds            | Phone          |
|:-----------------------|:----------------------------------------|:---------------|:----------------|:---------------|
| Broadstone Pullman     | 105 Rogers St NE, Atlanta, GA 30317     | $1,630 - 2,825 | Studio - 2 Beds | (470) 944-6584 |
| 1660 Peachtree Midtown | 1660 Peachtree St NW, Atlanta, GA 30309 | $1,699 - 2,499 | 1-2 Beds        | (470) 944-9920 |
| Mira at Midtown Union  | 1301 Spring St NW, Atlanta, GA 30309    | $1,705 - 6,025 | Studio - 3 Beds | (470) 944-3921 |
| Alexan Summerhill      | 720 Hank Aaron Dr SE, Atlanta, GA 30315 | $1,530 - 3,128 | Studio - 2 Beds | (470) 944-9567 |
| 1824 Defoor            | 1824 Defoor Ave NW, Atlanta, GA 30318   | $1,676 - 3,194 | Studio - 3 Beds | (470) 944-3075 |
and saves data.csv (screenshot from LibreOffice):
A:
You should not be nesting like that. You only want one info list for each item in lists, so you should not be forming info inside any nested for-loops. You could use zip instead:
NOTE: It's not a good idea to use variable names like list and property since they already mean something in python...
lists = soup.find_all('section', class_="placard-content")
properties = soup.find_all('li', class_="mortar-wrapper")
addresses = soup.find_all('div', class_="property-information") ## more specific
for l, prop, address in zip(lists, properties, addresses):
price = l.find('p', class_="property-pricing").text
beds = l.find('p', class_="property-beds").text
location = address.find('div', class_="property-address js-url").text
name = prop.find('span', class_="js-placardTitle title").text
info = [name,location,beds,price]
print(info)
However, it's rather risky to use find+.text without checking if find returned something [to avoid raising errors when it tries to get .text from None]; also, for finding multiple details from bs4, I prefer to use select with CSS Selectors since it allows me to use functions like this with list comprehension like
# page = requests.get(url, headers=header)
# soup = BeautifulSoup(page.content, 'html.parser')
### FIRST PASTE FUNCTION DEFINITION ( from https://pastebin.com/ZnZ7xM6u ) ###
colHeaders = ['listingId', 'Name', 'Location', 'Beds', 'Price', 'Link']
allData = []
for ap in soup.select('li > article.placard[data-listingid]'):
allData.append(selectForList(ap, selectors=[
('', 'data-listingid'), 'span.title', 'div.property-address',
'p.property-beds', 'p.property-pricing',
('a.property-link[href]', 'href')
], printList=True)) ## set printList=False to not print ##
will print
['vy0ysgf', '99 West Paces Ferry', '99 W Paces Ferry Rd, Atlanta, GA 30305', '1-3 Beds', '$3,043 - 16,201', 'https://www.apartments.com/99-west-paces-ferry-atlanta-ga/vy0ysgf/']
['88l36f2', 'Broadstone Pullman', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825', 'https://www.apartments.com/broadstone-pullman-atlanta-ga/88l36f2/']
['0g6gh01', '1824 Defoor', '1824 Defoor Ave NW, Atlanta, GA 30318', 'Studio - 3 Beds', '$1,676 - 3,194', 'https://www.apartments.com/1824-defoor-atlanta-ga/0g6gh01/']
['4mpewpl', 'Mira at Midtown Union', '1301 Spring St NW, Atlanta, GA 30309', 'Studio - 3 Beds', '$1,705 - 6,025', 'https://www.apartments.com/mira-at-midtown-union-atlanta-ga/4mpewpl/']
['ldcnyed', 'Alexan Summerhill', '720 Hank Aaron Dr SE, Atlanta, GA 30315', 'Studio - 2 Beds', '$1,530 - 3,128', 'https://www.apartments.com/alexan-summerhill-atlanta-ga/ldcnyed/']
['n63p95m', '1660 Peachtree Midtown', '1660 Peachtree St NW, Atlanta, GA 30309', '1-2 Beds', '$1,699 - 2,499', 'https://www.apartments.com/1660-peachtree-midtown-atlanta-ga/n63p95m/']
['tzqfblc', '3005 Buckhead', '3005 Peachtree Rd NE, Atlanta, GA 30305', 'Studio - 3 Beds', '$1,556 - 4,246', 'https://www.apartments.com/3005-buckhead-atlanta-ga/tzqfblc/']
['09tx720', 'Novel O4W', '525 NE North Ave, Atlanta, GA 30308', 'Studio - 2 Beds', '$1,776 - 3,605', 'https://www.apartments.com/novel-o4w-atlanta-ga/09tx720/']
['bp1p51b', 'AMLI Westside', '1084 Howell Mill Rd NW, Atlanta, GA 30318', 'Studio - 2 Beds', '$1,475 - 3,344', 'https://www.apartments.com/amli-westside-atlanta-ga/bp1p51b/']
['thm5n1c', 'The Cliftwood', '185 Cliftwood Dr NE, Atlanta, GA 30328', 'Studio - 3 Beds', '$1,730 - 2,991', 'https://www.apartments.com/the-cliftwood-atlanta-ga/thm5n1c/']
['07g5143', 'Ellington Midtown', '391 17th St NW, Atlanta, GA 30363', '1-2 Beds', '$1,528 - 2,741', 'https://www.apartments.com/ellington-midtown-atlanta-ga/07g5143/']
['zwcyjly', 'Glenn Perimeter', '5755 Glenridge Dr, Atlanta, GA 30328', '1-3 Beds', '$1,674 - 5,115', 'https://www.apartments.com/glenn-perimeter-atlanta-ga/zwcyjly/']
['pfsrw7t', 'The Dagny Midtown Apartments', '888 Juniper St NE, Atlanta, GA 30309', '1-3 Beds', '$1,863 - 6,758', 'https://www.apartments.com/the-dagny-midtown-apartments-atlanta-ga/pfsrw7t/']
['beqvl9b', 'Pencil Factory Flats', '349 Decatur St SE, Atlanta, GA 30312', 'Studio - 3 Beds', '$1,555 - 5,636', 'https://www.apartments.com/pencil-factory-flats-atlanta-ga/beqvl9b/']
['betv189', 'The Boulevard at Grant Park', '1015 Boulevard SE, Atlanta, GA 30312', 'Studio - 2 Beds', 'Call for Rent', 'https://www.apartments.com/the-boulevard-at-grant-park-atlanta-ga/betv189/']
['nhmd47n', 'Rio At Lenox', '2716 Buford Hwy, Atlanta, GA 30324', 'Studio - 2 Beds', '$1,350 - 2,025', 'https://www.apartments.com/rio-at-lenox-atlanta-ga/nhmd47n/']
['t6fxcr9', 'Vue at the Quarter', '2048 Bolton Dr, Atlanta, GA 30318', '1-3 Beds', '$1,454 - 7,445', 'https://www.apartments.com/vue-at-the-quarter-atlanta-ga/t6fxcr9/']
['nt1x8zq', 'Lofts at Centennial Yards South', '125 Ted Turner Dr SW, Atlanta, GA 30303', 'Studio - 2 Beds', '$1,361 - 2,540', 'https://www.apartments.com/lofts-at-centennial-yards-south-atlanta-ga/nt1x8zq/']
['kpsw7tc', 'The Maverick Flats', '72 Milton Ave, Atlanta, GA 30315', 'Studio - 2 Beds', '$1,311 - 2,616', 'https://www.apartments.com/the-maverick-flats-atlanta-ga/kpsw7tc/']
['h1987t1', 'Broadstone Upper Westside', '2167 Bolton Dr NW, Atlanta, GA 30318', 'Studio - 2 Beds', '$1,279 - 3,679', 'https://www.apartments.com/broadstone-upper-westside-atlanta-ga/h1987t1/']
['vntq44f', 'Ella', '2201 Glenwood Ave SE, Atlanta, GA 30316', 'Studio - 3 Beds', '$1,325 - 3,055', 'https://www.apartments.com/ella-atlanta-ga/vntq44f/']
['9yyrgl4', 'MAA Briarcliff', '500 Briarvista Way, Atlanta, GA 30329', '1-3 Beds', '$1,365 - 5,235', 'https://www.apartments.com/maa-briarcliff-atlanta-ga/9yyrgl4/']
['94xq484', 'AMLI Lenox', '3478 Lakeside Dr NE, Atlanta, GA 30326', '1-3 Beds', '$1,649 - 9,095', 'https://www.apartments.com/amli-lenox-atlanta-ga/94xq484/']
['q9jhgvy', 'Platform at Grant Park', '290 Martin Luther King Jr Dr SE, Atlanta, GA 30312', 'Studio - 2 Beds', '$1,454 - 1,954', 'https://www.apartments.com/platform-at-grant-park-atlanta-ga/q9jhgvy/']
['y22d57t', 'Generation Atlanta', '369 Centennial Olympic Park Dr NW, Atlanta, GA 30313', 'Studio - 2 Beds', '$1,374 - 3,482', 'https://www.apartments.com/generation-atlanta-atlanta-ga/y22d57t/']
or, you could use pandas to print as table:
print(pandas.DataFrame(
[tuple(a) for a in allData], columns=colHeaders
) .set_index('listingId').to_markdown(index=False))
# remove index=False to include listingId
prints
| Name | Location | Beds | Price | Link |
|:--------------------------------|:-----------------------------------------------------|:----------------|:----------------|:-------------------------------------------------------------------------------|
| 99 West Paces Ferry | 99 W Paces Ferry Rd, Atlanta, GA 30305 | 1-3 Beds | $3,043 - 16,201 | https://www.apartments.com/99-west-paces-ferry-atlanta-ga/vy0ysgf/ |
| Broadstone Pullman | 105 Rogers St NE, Atlanta, GA 30317 | Studio - 2 Beds | $1,630 - 2,825 | https://www.apartments.com/broadstone-pullman-atlanta-ga/88l36f2/ |
| 1824 Defoor | 1824 Defoor Ave NW, Atlanta, GA 30318 | Studio - 3 Beds | $1,676 - 3,194 | https://www.apartments.com/1824-defoor-atlanta-ga/0g6gh01/ |
| Mira at Midtown Union | 1301 Spring St NW, Atlanta, GA 30309 | Studio - 3 Beds | $1,705 - 6,025 | https://www.apartments.com/mira-at-midtown-union-atlanta-ga/4mpewpl/ |
| Alexan Summerhill | 720 Hank Aaron Dr SE, Atlanta, GA 30315 | Studio - 2 Beds | $1,530 - 3,128 | https://www.apartments.com/alexan-summerhill-atlanta-ga/ldcnyed/ |
| 1660 Peachtree Midtown | 1660 Peachtree St NW, Atlanta, GA 30309 | 1-2 Beds | $1,699 - 2,499 | https://www.apartments.com/1660-peachtree-midtown-atlanta-ga/n63p95m/ |
| 3005 Buckhead | 3005 Peachtree Rd NE, Atlanta, GA 30305 | Studio - 3 Beds | $1,556 - 4,246 | https://www.apartments.com/3005-buckhead-atlanta-ga/tzqfblc/ |
| Novel O4W | 525 NE North Ave, Atlanta, GA 30308 | Studio - 2 Beds | $1,776 - 3,605 | https://www.apartments.com/novel-o4w-atlanta-ga/09tx720/ |
| AMLI Westside | 1084 Howell Mill Rd NW, Atlanta, GA 30318 | Studio - 2 Beds | $1,475 - 3,344 | https://www.apartments.com/amli-westside-atlanta-ga/bp1p51b/ |
| The Cliftwood | 185 Cliftwood Dr NE, Atlanta, GA 30328 | Studio - 3 Beds | $1,730 - 2,991 | https://www.apartments.com/the-cliftwood-atlanta-ga/thm5n1c/ |
| Ellington Midtown | 391 17th St NW, Atlanta, GA 30363 | 1-2 Beds | $1,528 - 2,741 | https://www.apartments.com/ellington-midtown-atlanta-ga/07g5143/ |
| Glenn Perimeter | 5755 Glenridge Dr, Atlanta, GA 30328 | 1-3 Beds | $1,674 - 5,115 | https://www.apartments.com/glenn-perimeter-atlanta-ga/zwcyjly/ |
| The Dagny Midtown Apartments | 888 Juniper St NE, Atlanta, GA 30309 | 1-3 Beds | $1,863 - 6,758 | https://www.apartments.com/the-dagny-midtown-apartments-atlanta-ga/pfsrw7t/ |
| Pencil Factory Flats | 349 Decatur St SE, Atlanta, GA 30312 | Studio - 3 Beds | $1,555 - 5,636 | https://www.apartments.com/pencil-factory-flats-atlanta-ga/beqvl9b/ |
| The Boulevard at Grant Park | 1015 Boulevard SE, Atlanta, GA 30312 | Studio - 2 Beds | Call for Rent | https://www.apartments.com/the-boulevard-at-grant-park-atlanta-ga/betv189/ |
| Rio At Lenox | 2716 Buford Hwy, Atlanta, GA 30324 | Studio - 2 Beds | $1,350 - 2,025 | https://www.apartments.com/rio-at-lenox-atlanta-ga/nhmd47n/ |
| Vue at the Quarter | 2048 Bolton Dr, Atlanta, GA 30318 | 1-3 Beds | $1,454 - 7,445 | https://www.apartments.com/vue-at-the-quarter-atlanta-ga/t6fxcr9/ |
| Lofts at Centennial Yards South | 125 Ted Turner Dr SW, Atlanta, GA 30303 | Studio - 2 Beds | $1,361 - 2,540 | https://www.apartments.com/lofts-at-centennial-yards-south-atlanta-ga/nt1x8zq/ |
| The Maverick Flats | 72 Milton Ave, Atlanta, GA 30315 | Studio - 2 Beds | $1,311 - 2,616 | https://www.apartments.com/the-maverick-flats-atlanta-ga/kpsw7tc/ |
| Broadstone Upper Westside | 2167 Bolton Dr NW, Atlanta, GA 30318 | Studio - 2 Beds | $1,279 - 3,679 | https://www.apartments.com/broadstone-upper-westside-atlanta-ga/h1987t1/ |
| Ella | 2201 Glenwood Ave SE, Atlanta, GA 30316 | Studio - 3 Beds | $1,325 - 3,055 | https://www.apartments.com/ella-atlanta-ga/vntq44f/ |
| MAA Briarcliff | 500 Briarvista Way, Atlanta, GA 30329 | 1-3 Beds | $1,365 - 5,235 | https://www.apartments.com/maa-briarcliff-atlanta-ga/9yyrgl4/ |
| AMLI Lenox | 3478 Lakeside Dr NE, Atlanta, GA 30326 | 1-3 Beds | $1,649 - 9,095 | https://www.apartments.com/amli-lenox-atlanta-ga/94xq484/ |
| Platform at Grant Park | 290 Martin Luther King Jr Dr SE, Atlanta, GA 30312 | Studio - 2 Beds | $1,454 - 1,954 | https://www.apartments.com/platform-at-grant-park-atlanta-ga/q9jhgvy/ |
| Generation Atlanta | 369 Centennial Olympic Park Dr NW, Atlanta, GA 30313 | Studio - 2 Beds | $1,374 - 3,482 | https://www.apartments.com/generation-atlanta-atlanta-ga/y22d57t/ |
| nested loop returns the same results for multiple rows while webscraping - beautiful soup | I'm trying to scrape an apartment website and it's not looping. I get different apartments but the rest of the information is the same. Yesterday it was pulling a different address.
url = "https://www.apartments.com/atlanta-ga/?bb=lnwszyjy-H4lu8uqH"
header = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36'}
page = requests.get(url, headers=header)
soup = BeautifulSoup(page.content, 'html.parser')
lists = soup.find_all('section', class_="placard-content")
properties = soup.find_all('li', class_="mortar-wrapper")
addresses = soup.find_all('a', class_="property-link")
for list in lists:
price = list.find('p', class_="property-pricing").text
beds = list.find('p', class_="property-beds").text
for address in addresses:
location = address.find('div', class_="property-address js-url").text
for property in properties:
name = property.find('span', class_="js-placardTitle title").text
info = [name,location,beds,price]
print(info)```
Here is the output I'm getting
['Broadstone Pullman', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['1660 Peachtree Midtown', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Mira at Midtown Union', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Alexan Summerhill', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['1824 Defoor', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['3005 Buckhead', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['AMLI Westside', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Novel O4W', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['The Cliftwood', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']
['Ellington Midtown', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825']```
| [
"Try:\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\nurl = \"https://www.apartments.com/atlanta-ga/?bb=lnwszyjy-H4lu8uqH\"\nheader = {\n \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36\"\n}\npage = requests.get(url, headers=header)\nsoup = BeautifulSoup(page.content, \"html.parser\")\n\nall_data = []\nfor art in soup.select(\"article:has(a)\"):\n name, addr = art.a.get_text(strip=True, separator=\"|\").split(\"|\")\n\n info = art.select_one(\".property-info\")\n pricing = info.select_one(\".property-pricing\").text\n beds = info.select_one(\".property-beds\").text\n amenities = {s.text: \"X\" for s in info.select(\".property-amenities span\")}\n phone = info.select_one(\".phone-link\").text.strip()\n\n all_data.append([name, addr, pricing, beds, phone, amenities])\n\ndf = pd.DataFrame(\n all_data, columns=[\"Name\", \"Addr\", \"Pricing\", \"Beds\", \"Phone\", \"Amenities\"]\n)\ndf = pd.concat([df, df.pop(\"Amenities\").apply(pd.Series)], axis=1)\ndf = df.fillna(\"\")\n\nprint(df.head().to_markdown(index=False))\ndf.to_csv(\"data.csv\", index=False)\n\nPrints:\n\n\n\n\nName\nAddr\nPricing\nBeds\nPhone\nDog & Cat Friendly\nFitness Center\nPool\nIn Unit Washer & Dryer\nWalk-In Closets\nClubhouse\nBalcony\nCableReady\nTub / Shower\nDishwasher\nKitchen\nGranite Countertops\nGated\nRefrigerator\nRange\nMicrowave\nStainless Steel Appliances\nGrill\nBusiness Center\nLounge\nHeat\nOven\nPackage Service\nCourtyard\nCeiling Fans\nOffice\nMaintenance on site\nDisposal\n\n\n\n\nBroadstone Pullman\n105 Rogers St NE, Atlanta, GA 30317\n$1,630 - 2,825\nStudio - 2 Beds\n(470) 944-6584\nX\nX\nX\nX\nX\nX\nX\nX\nX\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n1660 Peachtree Midtown\n1660 Peachtree St NW, Atlanta, GA 30309\n$1,699 - 2,499\n1-2 Beds\n(470) 944-9920\nX\nX\nX\nX\n\n\n\n\n\nX\nX\nX\nX\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nMira at Midtown Union\n1301 Spring St NW, Atlanta, GA 30309\n$1,705 - 6,025\nStudio - 3 Beds\n(470) 944-3921\nX\nX\nX\nX\n\nX\n\n\n\nX\n\n\n\nX\nX\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAlexan Summerhill\n720 Hank Aaron Dr SE, Atlanta, GA 30315\n$1,530 - 3,128\nStudio - 2 Beds\n(470) 944-9567\nX\nX\nX\nX\nX\nX\n\n\n\nX\n\n\n\nX\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n1824 Defoor\n1824 Defoor Ave NW, Atlanta, GA 30318\n$1,676 - 3,194\nStudio - 3 Beds\n(470) 944-3075\nX\nX\nX\nX\nX\nX\n\n\n\n\n\n\n\n\nX\nX\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nand saves data.csv (screenshot from LibreOffice):\n\n",
"You should not be nesting like that. You only want one info list for each item in lists, so you should not be forming info inside any nested for-loops. You could use zip instead:\nNOTE: It's not a good idea to use variable names like list and property since they already mean something in python...\nlists = soup.find_all('section', class_=\"placard-content\")\nproperties = soup.find_all('li', class_=\"mortar-wrapper\")\naddresses = soup.find_all('div', class_=\"property-information\") ## more specific\n\nfor l, prop, address in zip(lists, properties, addresses):\n price = l.find('p', class_=\"property-pricing\").text\n beds = l.find('p', class_=\"property-beds\").text\n location = address.find('div', class_=\"property-address js-url\").text\n name = prop.find('span', class_=\"js-placardTitle title\").text\n info = [name,location,beds,price]\n print(info) \n\n\nHowever, it's rather risky to use find+.text without checking if find returned something [to avoid raising errors when it tries to get .text from None]; also, for finding multiple details from bs4, I prefer to use select with CSS Selectors since it allows me to use functions like this with list comprehension like\n# page = requests.get(url, headers=header)\n# soup = BeautifulSoup(page.content, 'html.parser')\n\n### FIRST PASTE FUNCTION DEFINITION ( from https://pastebin.com/ZnZ7xM6u ) ###\n\ncolHeaders = ['listingId', 'Name', 'Location', 'Beds', 'Price', 'Link']\nallData = []\nfor ap in soup.select('li > article.placard[data-listingid]'):\n allData.append(selectForList(ap, selectors=[\n ('', 'data-listingid'), 'span.title', 'div.property-address',\n 'p.property-beds', 'p.property-pricing', \n ('a.property-link[href]', 'href')\n ], printList=True)) ## set printList=False to not print ##\n\nwill print\n\n['vy0ysgf', '99 West Paces Ferry', '99 W Paces Ferry Rd, Atlanta, GA 30305', '1-3 Beds', '$3,043 - 16,201', 'https://www.apartments.com/99-west-paces-ferry-atlanta-ga/vy0ysgf/']\n['88l36f2', 'Broadstone Pullman', '105 Rogers St NE, Atlanta, GA 30317', 'Studio - 2 Beds', '$1,630 - 2,825', 'https://www.apartments.com/broadstone-pullman-atlanta-ga/88l36f2/']\n['0g6gh01', '1824 Defoor', '1824 Defoor Ave NW, Atlanta, GA 30318', 'Studio - 3 Beds', '$1,676 - 3,194', 'https://www.apartments.com/1824-defoor-atlanta-ga/0g6gh01/']\n['4mpewpl', 'Mira at Midtown Union', '1301 Spring St NW, Atlanta, GA 30309', 'Studio - 3 Beds', '$1,705 - 6,025', 'https://www.apartments.com/mira-at-midtown-union-atlanta-ga/4mpewpl/']\n['ldcnyed', 'Alexan Summerhill', '720 Hank Aaron Dr SE, Atlanta, GA 30315', 'Studio - 2 Beds', '$1,530 - 3,128', 'https://www.apartments.com/alexan-summerhill-atlanta-ga/ldcnyed/']\n['n63p95m', '1660 Peachtree Midtown', '1660 Peachtree St NW, Atlanta, GA 30309', '1-2 Beds', '$1,699 - 2,499', 'https://www.apartments.com/1660-peachtree-midtown-atlanta-ga/n63p95m/']\n['tzqfblc', '3005 Buckhead', '3005 Peachtree Rd NE, Atlanta, GA 30305', 'Studio - 3 Beds', '$1,556 - 4,246', 'https://www.apartments.com/3005-buckhead-atlanta-ga/tzqfblc/']\n['09tx720', 'Novel O4W', '525 NE North Ave, Atlanta, GA 30308', 'Studio - 2 Beds', '$1,776 - 3,605', 'https://www.apartments.com/novel-o4w-atlanta-ga/09tx720/']\n['bp1p51b', 'AMLI Westside', '1084 Howell Mill Rd NW, Atlanta, GA 30318', 'Studio - 2 Beds', '$1,475 - 3,344', 'https://www.apartments.com/amli-westside-atlanta-ga/bp1p51b/']\n['thm5n1c', 'The Cliftwood', '185 Cliftwood Dr NE, Atlanta, GA 30328', 'Studio - 3 Beds', '$1,730 - 2,991', 
'https://www.apartments.com/the-cliftwood-atlanta-ga/thm5n1c/']\n['07g5143', 'Ellington Midtown', '391 17th St NW, Atlanta, GA 30363', '1-2 Beds', '$1,528 - 2,741', 'https://www.apartments.com/ellington-midtown-atlanta-ga/07g5143/']\n['zwcyjly', 'Glenn Perimeter', '5755 Glenridge Dr, Atlanta, GA 30328', '1-3 Beds', '$1,674 - 5,115', 'https://www.apartments.com/glenn-perimeter-atlanta-ga/zwcyjly/']\n['pfsrw7t', 'The Dagny Midtown Apartments', '888 Juniper St NE, Atlanta, GA 30309', '1-3 Beds', '$1,863 - 6,758', 'https://www.apartments.com/the-dagny-midtown-apartments-atlanta-ga/pfsrw7t/']\n['beqvl9b', 'Pencil Factory Flats', '349 Decatur St SE, Atlanta, GA 30312', 'Studio - 3 Beds', '$1,555 - 5,636', 'https://www.apartments.com/pencil-factory-flats-atlanta-ga/beqvl9b/']\n['betv189', 'The Boulevard at Grant Park', '1015 Boulevard SE, Atlanta, GA 30312', 'Studio - 2 Beds', 'Call for Rent', 'https://www.apartments.com/the-boulevard-at-grant-park-atlanta-ga/betv189/']\n['nhmd47n', 'Rio At Lenox', '2716 Buford Hwy, Atlanta, GA 30324', 'Studio - 2 Beds', '$1,350 - 2,025', 'https://www.apartments.com/rio-at-lenox-atlanta-ga/nhmd47n/']\n['t6fxcr9', 'Vue at the Quarter', '2048 Bolton Dr, Atlanta, GA 30318', '1-3 Beds', '$1,454 - 7,445', 'https://www.apartments.com/vue-at-the-quarter-atlanta-ga/t6fxcr9/']\n['nt1x8zq', 'Lofts at Centennial Yards South', '125 Ted Turner Dr SW, Atlanta, GA 30303', 'Studio - 2 Beds', '$1,361 - 2,540', 'https://www.apartments.com/lofts-at-centennial-yards-south-atlanta-ga/nt1x8zq/']\n['kpsw7tc', 'The Maverick Flats', '72 Milton Ave, Atlanta, GA 30315', 'Studio - 2 Beds', '$1,311 - 2,616', 'https://www.apartments.com/the-maverick-flats-atlanta-ga/kpsw7tc/']\n['h1987t1', 'Broadstone Upper Westside', '2167 Bolton Dr NW, Atlanta, GA 30318', 'Studio - 2 Beds', '$1,279 - 3,679', 'https://www.apartments.com/broadstone-upper-westside-atlanta-ga/h1987t1/']\n['vntq44f', 'Ella', '2201 Glenwood Ave SE, Atlanta, GA 30316', 'Studio - 3 Beds', '$1,325 - 3,055', 'https://www.apartments.com/ella-atlanta-ga/vntq44f/']\n['9yyrgl4', 'MAA Briarcliff', '500 Briarvista Way, Atlanta, GA 30329', '1-3 Beds', '$1,365 - 5,235', 'https://www.apartments.com/maa-briarcliff-atlanta-ga/9yyrgl4/']\n['94xq484', 'AMLI Lenox', '3478 Lakeside Dr NE, Atlanta, GA 30326', '1-3 Beds', '$1,649 - 9,095', 'https://www.apartments.com/amli-lenox-atlanta-ga/94xq484/']\n['q9jhgvy', 'Platform at Grant Park', '290 Martin Luther King Jr Dr SE, Atlanta, GA 30312', 'Studio - 2 Beds', '$1,454 - 1,954', 'https://www.apartments.com/platform-at-grant-park-atlanta-ga/q9jhgvy/']\n['y22d57t', 'Generation Atlanta', '369 Centennial Olympic Park Dr NW, Atlanta, GA 30313', 'Studio - 2 Beds', '$1,374 - 3,482', 'https://www.apartments.com/generation-atlanta-atlanta-ga/y22d57t/']\n\n\n\nor, you could use pandas to print as table:\nprint(pandas.DataFrame(\n [tuple(a) for a in allData], columns=colHeaders\n) .set_index('listingId').to_markdown(index=False))\n# remove index=False to include listingId\n\nprints\n\n| Name | Location | Beds | Price | Link |\n|:--------------------------------|:-----------------------------------------------------|:----------------|:----------------|:-------------------------------------------------------------------------------|\n| 99 West Paces Ferry | 99 W Paces Ferry Rd, Atlanta, GA 30305 | 1-3 Beds | $3,043 - 16,201 | https://www.apartments.com/99-west-paces-ferry-atlanta-ga/vy0ysgf/ |\n| Broadstone Pullman | 105 Rogers St NE, Atlanta, GA 30317 | Studio - 2 Beds | $1,630 - 2,825 | 
https://www.apartments.com/broadstone-pullman-atlanta-ga/88l36f2/ |\n| 1824 Defoor | 1824 Defoor Ave NW, Atlanta, GA 30318 | Studio - 3 Beds | $1,676 - 3,194 | https://www.apartments.com/1824-defoor-atlanta-ga/0g6gh01/ |\n| Mira at Midtown Union | 1301 Spring St NW, Atlanta, GA 30309 | Studio - 3 Beds | $1,705 - 6,025 | https://www.apartments.com/mira-at-midtown-union-atlanta-ga/4mpewpl/ |\n| Alexan Summerhill | 720 Hank Aaron Dr SE, Atlanta, GA 30315 | Studio - 2 Beds | $1,530 - 3,128 | https://www.apartments.com/alexan-summerhill-atlanta-ga/ldcnyed/ |\n| 1660 Peachtree Midtown | 1660 Peachtree St NW, Atlanta, GA 30309 | 1-2 Beds | $1,699 - 2,499 | https://www.apartments.com/1660-peachtree-midtown-atlanta-ga/n63p95m/ |\n| 3005 Buckhead | 3005 Peachtree Rd NE, Atlanta, GA 30305 | Studio - 3 Beds | $1,556 - 4,246 | https://www.apartments.com/3005-buckhead-atlanta-ga/tzqfblc/ |\n| Novel O4W | 525 NE North Ave, Atlanta, GA 30308 | Studio - 2 Beds | $1,776 - 3,605 | https://www.apartments.com/novel-o4w-atlanta-ga/09tx720/ |\n| AMLI Westside | 1084 Howell Mill Rd NW, Atlanta, GA 30318 | Studio - 2 Beds | $1,475 - 3,344 | https://www.apartments.com/amli-westside-atlanta-ga/bp1p51b/ |\n| The Cliftwood | 185 Cliftwood Dr NE, Atlanta, GA 30328 | Studio - 3 Beds | $1,730 - 2,991 | https://www.apartments.com/the-cliftwood-atlanta-ga/thm5n1c/ |\n| Ellington Midtown | 391 17th St NW, Atlanta, GA 30363 | 1-2 Beds | $1,528 - 2,741 | https://www.apartments.com/ellington-midtown-atlanta-ga/07g5143/ |\n| Glenn Perimeter | 5755 Glenridge Dr, Atlanta, GA 30328 | 1-3 Beds | $1,674 - 5,115 | https://www.apartments.com/glenn-perimeter-atlanta-ga/zwcyjly/ |\n| The Dagny Midtown Apartments | 888 Juniper St NE, Atlanta, GA 30309 | 1-3 Beds | $1,863 - 6,758 | https://www.apartments.com/the-dagny-midtown-apartments-atlanta-ga/pfsrw7t/ |\n| Pencil Factory Flats | 349 Decatur St SE, Atlanta, GA 30312 | Studio - 3 Beds | $1,555 - 5,636 | https://www.apartments.com/pencil-factory-flats-atlanta-ga/beqvl9b/ |\n| The Boulevard at Grant Park | 1015 Boulevard SE, Atlanta, GA 30312 | Studio - 2 Beds | Call for Rent | https://www.apartments.com/the-boulevard-at-grant-park-atlanta-ga/betv189/ |\n| Rio At Lenox | 2716 Buford Hwy, Atlanta, GA 30324 | Studio - 2 Beds | $1,350 - 2,025 | https://www.apartments.com/rio-at-lenox-atlanta-ga/nhmd47n/ |\n| Vue at the Quarter | 2048 Bolton Dr, Atlanta, GA 30318 | 1-3 Beds | $1,454 - 7,445 | https://www.apartments.com/vue-at-the-quarter-atlanta-ga/t6fxcr9/ |\n| Lofts at Centennial Yards South | 125 Ted Turner Dr SW, Atlanta, GA 30303 | Studio - 2 Beds | $1,361 - 2,540 | https://www.apartments.com/lofts-at-centennial-yards-south-atlanta-ga/nt1x8zq/ |\n| The Maverick Flats | 72 Milton Ave, Atlanta, GA 30315 | Studio - 2 Beds | $1,311 - 2,616 | https://www.apartments.com/the-maverick-flats-atlanta-ga/kpsw7tc/ |\n| Broadstone Upper Westside | 2167 Bolton Dr NW, Atlanta, GA 30318 | Studio - 2 Beds | $1,279 - 3,679 | https://www.apartments.com/broadstone-upper-westside-atlanta-ga/h1987t1/ |\n| Ella | 2201 Glenwood Ave SE, Atlanta, GA 30316 | Studio - 3 Beds | $1,325 - 3,055 | https://www.apartments.com/ella-atlanta-ga/vntq44f/ |\n| MAA Briarcliff | 500 Briarvista Way, Atlanta, GA 30329 | 1-3 Beds | $1,365 - 5,235 | https://www.apartments.com/maa-briarcliff-atlanta-ga/9yyrgl4/ |\n| AMLI Lenox | 3478 Lakeside Dr NE, Atlanta, GA 30326 | 1-3 Beds | $1,649 - 9,095 | https://www.apartments.com/amli-lenox-atlanta-ga/94xq484/ |\n| Platform at Grant Park | 290 Martin Luther King Jr Dr SE, Atlanta, GA 
30312 | Studio - 2 Beds | $1,454 - 1,954 | https://www.apartments.com/platform-at-grant-park-atlanta-ga/q9jhgvy/ |\n| Generation Atlanta | 369 Centennial Olympic Park Dr NW, Atlanta, GA 30313 | Studio - 2 Beds | $1,374 - 3,482 | https://www.apartments.com/generation-atlanta-atlanta-ga/y22d57t/ |\n\n\n"
] | [
0,
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074647356_beautifulsoup_python_web_scraping.txt |
Q:
Python return statement not returning correct output
I have a simple function that should return True or False when a condition is met, but the return statement is not behaving as I expected. Could anyone help me out and point out the mistake I am making?
graph = {
'f': ['g', 'i'],
'g': ['h'],
'h': [],
'i': ['g', 'k'],
'j': ['i'],
'k': []
}
def hasPath(graph,source,des):
arr = graph[source]
if des in arr:
print('Yes')
return True
for i in arr:
hasPath(graph,i,des)
return False
print(hasPath(graph,'f','k'))
This code returns False but prints the statement Yes. I am not sure why the return statement is not being executed after the print statement.
A:
In the hasPath function, you are calling the function recursively on each element in the arr list, but the return value of those recursive calls is never used. This means that even when a deeper call finds des in its own arr list, prints Yes and returns True, the outer call discards that result, finishes its loop, and executes the return False statement at the end, so False is what gets returned to the caller.
To fix this issue, check the result of each recursive call to hasPath and return True as soon as one of them succeeds. This ensures that a True found deeper in the graph is propagated back up the call chain, and the return False statement at the end of the function is only reached when no neighbour leads to the destination.
Here is an example of how you can modify the hasPath function to fix this issue:
def hasPath(graph, source, des):
arr = graph[source]
if des in arr:
print('Yes')
return True
for i in arr:
if hasPath(graph, i, des):
return True
return False
With this change, the hasPath function will return True when the des value is found in the arr list, and will return False otherwise. When you run the print(hasPath(graph, 'f', 'k')) statement, it will print Yes and then True, as expected.
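As a side note, the same short-circuiting traversal can be written more compactly with any(); a sketch that drops the print for brevity:
def hasPath(graph, source, des):
    # any() stops at the first neighbour from which a path to des exists
    return des in graph[source] or any(hasPath(graph, n, des) for n in graph[source])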
A:
Try this:
graph = {
'f': ['g', 'i'],
'g': ['h'],
'h': [],
'i': ['g', 'k'],
'j': ['i'],
'k': []
}
def hasPath(graph,source,des):
arr = graph[source]
if des in arr:
print('Yes')
return True
for i in arr:
if hasPath(graph,i,des): return True
return False
print(hasPath(graph,'f','k'))
Result:
Yes
True
Why?
Well, try to trace each call:
source is f, arr = ['g', 'i'], and 'k' is not in arr, so the loop recurses into 'g'
source is g, arr = ['h'], and 'k' is not there either, so it recurses into 'h'
source is h, arr is empty, so that branch returns False
back at f, the loop moves on to the second neighbour, 'i'
source is i, arr = ['g', 'k'], 'k' is found, so it prints Yes and returns True
with the added if, that True is propagated back up to the original call instead of being thrown away
| Python return statement not returning correct output | I have a simple function to return True or False when a condition is met. But the return statement is not functioning as I expected. Could anyone help me out to point out the mistake I am making.
graph = {
'f': ['g', 'i'],
'g': ['h'],
'h': [],
'i': ['g', 'k'],
'j': ['i'],
'k': []
}
def hasPath(graph,source,des):
arr = graph[source]
if des in arr:
print('Yes')
return True
for i in arr:
hasPath(graph,i,des)
return False
print(hasPath(graph,'f','k'))
This code return False but prints the statment Yes. I am not sure why return statement is not being executed after the Print statement.
| [
"In the hasPath function, you are calling the function recursively on each element in the arr list, and the return value of the recursive calls is not being used. This means that even if the des value is present in the arr list and the print statement is executed, the return False statement at the end of the function will still be executed and the False value will be returned.\nTo fix this issue, you can add a return statement after the recursive call to hasPath to return the value of the recursive call. This will ensure that the return True statement is executed when the des value is found in the arr list, and the return False statement at the end of the function will not be executed.\nHere is an example of how you can modify the hasPath function to fix this issue:\ndef hasPath(graph, source, des):\n arr = graph[source]\n if des in arr:\n print('Yes')\n return True\n for i in arr:\n if hasPath(graph, i, des):\n return True\n return False\n\nWith this change, the hasPath function will return True when the des value is found in the arr list, and will return False otherwise. When you run the print(hasPath(graph, 'f', 'k')) statement, it will print Yes and then True, as expected.\n",
"Try this:\ngraph = {\n'f': ['g', 'i'],\n'g': ['h'],\n'h': [],\n'i': ['g', 'k'],\n'j': ['i'],\n'k': []\n}\n\ndef hasPath(graph,source,des):\n arr = graph[source]\n if des in arr:\n print('Yes')\n return True\n for i in arr:\n if hasPath(graph,i,des): return True\n return False\n\nprint(hasPath(graph,'f','k'))\n\nResult:\nYes\nTrue\n\nWhy?\nWell try to look at each iteration:\n\nsource is f, arr = ['g', 'i'] = False\nsource changes to i = g, now arr = ['h'] = False\nsource changes to h which is empty and returns False\nFirst iteration changes to i in graph[f] and it starts again\nSource changes to g in key i = False\nsource changes to k in key i = True\n\n"
] | [
2,
1
] | [] | [] | [
"function",
"python",
"return_value"
] | stackoverflow_0074655184_function_python_return_value.txt |
Q:
How to convert SQLAlchemy row object to a Python dict?
Is there a simple way to iterate over column name and value pairs?
My version of SQLAlchemy is 0.5.6
Here is the sample code where I tried using dict(row):
import sqlalchemy
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
print "sqlalchemy version:",sqlalchemy.__version__
engine = create_engine('sqlite:///:memory:', echo=False)
metadata = MetaData()
users_table = Table('users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String),
)
metadata.create_all(engine)
class User(declarative_base()):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
def __init__(self, name):
self.name = name
Session = sessionmaker(bind=engine)
session = Session()
user1 = User("anurag")
session.add(user1)
session.commit()
# uncommenting next line throws exception 'TypeError: 'User' object is not iterable'
#print dict(user1)
# this one also throws 'TypeError: 'User' object is not iterable'
for u in session.query(User).all():
print dict(u)
Running this code on my system outputs:
Traceback (most recent call last):
File "untitled-1.py", line 37, in <module>
print dict(u)
TypeError: 'User' object is not iterable
A:
You may access the internal __dict__ of a SQLAlchemy object, like the following:
for u in session.query(User).all():
print u.__dict__
A:
I couldn't get a good answer so I use this:
def row2dict(row):
d = {}
for column in row.__table__.columns:
d[column.name] = str(getattr(row, column.name))
return d
Edit: if the above function is too long or not suited to some tastes, here is a one-liner (Python 2.7+):
row2dict = lambda r: {c.name: str(getattr(r, c.name)) for c in r.__table__.columns}
A:
As per @zzzeek in comments:
note that this is the correct answer for modern versions of
SQLAlchemy, assuming "row" is a core row object, not an ORM-mapped
instance.
for row in resultproxy:
row_as_dict = row._mapping # SQLAlchemy 1.4 and greater
# row_as_dict = dict(row) # SQLAlchemy 1.3 and earlier
background on row._mapping, new as of SQLAlchemy 1.4: https://docs.sqlalchemy.org/en/stable/core/connections.html#sqlalchemy.engine.Row._mapping
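If an actual dict is needed rather than the mapping view, it can be passed to dict(); a small sketch:
row_dicts = [dict(row._mapping) for row in resultproxy]   # list of plain dicts, one per row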
A:
In SQLAlchemy v0.8 and newer, use the inspection system.
from sqlalchemy import inspect
def object_as_dict(obj):
return {c.key: getattr(obj, c.key)
for c in inspect(obj).mapper.column_attrs}
user = session.query(User).first()
d = object_as_dict(user)
Note that .key is the attribute name, which can be different from the column name, e.g. in the following case:
class_ = Column('class', Text)
This method also works for column_property.
A:
rows have an _asdict() function which gives a dict
In [8]: r1 = db.session.query(Topic.name).first()
In [9]: r1
Out[9]: (u'blah')
In [10]: r1.name
Out[10]: u'blah'
In [11]: r1._asdict()
Out[11]: {'name': u'blah'}
A:
Assuming the following function is added to the class User, it will return the key-value pairs of all columns:
def columns_to_dict(self):
dict_ = {}
for key in self.__mapper__.c.keys():
dict_[key] = getattr(self, key)
return dict_
Unlike the other answers, this returns only those attributes of the object that are Column attributes at the class level of the object. Therefore no _sa_instance_state, nor any other attribute that SQLAlchemy or you add to the object, is included. Reference
EDIT: Forgot to say that this also works on inherited Columns.
hybrid_property extension
If you also want to include hybrid_property attributes the following will work:
from sqlalchemy import inspect
from sqlalchemy.ext.hybrid import hybrid_property
def publics_to_dict(self) -> {}:
dict_ = {}
for key in self.__mapper__.c.keys():
if not key.startswith('_'):
dict_[key] = getattr(self, key)
for key, prop in inspect(self.__class__).all_orm_descriptors.items():
if isinstance(prop, hybrid_property):
dict_[key] = getattr(self, key)
return dict_
I assume here that you mark Columns with a leading _ to indicate that you want to hide them, either because you access the attribute through a hybrid_property or because you simply do not want to show them. Reference
Tip: all_orm_descriptors also returns hybrid_method and AssociationProxy attributes, if you want to include those as well.
Remarks to other answers
Every answer (like 1, 2) that is based on the __dict__ attribute simply returns all attributes of the object. This could be many more attributes than you want. As I said, this includes _sa_instance_state and any other attribute you define on the object.
Every answer (like 1, 2) that is based on the dict() function only works on SQLAlchemy row objects returned by session.execute(), not on the classes you define to work with, like the class User from the question.
The answer that is based on row.__table__.columns will definitely not work in every case: row.__table__.columns contains the column names of the SQL database, and these are only sometimes equal to the attribute names of the Python object. If they differ, you get an AttributeError.
For answers (like 1, 2) based on class_mapper(obj.__class__).mapped_table.c the same applies.
A:
as @balki mentioned:
The _asdict() method can be used if you're querying a specific field because it is returned as a KeyedTuple.
In [1]: foo = db.session.query(Topic.name).first()
In [2]: foo._asdict()
Out[2]: {'name': u'blah'}
Whereas, if you do not specify a column you can use one of the other proposed methods - such as the one provided by @charlax. Note that this method is only valid for 2.7+.
In [1]: foo = db.session.query(Topic).first()
In [2]: {x.name: getattr(foo, x.name) for x in foo.__table__.columns}
Out[2]: {'name': u'blah'}
A:
Old question, but since this is the first result for "sqlalchemy row to dict" in Google, it deserves a better answer.
The RowProxy object that SqlAlchemy returns has the items() method:
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.RowProxy.items
It simply returns a list of (key, value) tuples. So one can convert a row to dict using the following:
In Python <= 2.6:
rows = conn.execute(query)
list_of_dicts = [dict((key, value) for key, value in row.items()) for row in rows]
In Python >= 2.7:
rows = conn.execute(query)
list_of_dicts = [{key: value for (key, value) in row.items()} for row in rows]
A:
A very simple solution: row._asdict().
sqlalchemy.engine.Row._asdict() (v1.4)
sqlalchemy.util.KeyedTuple._asdict() (v1.3)
> data = session.query(Table).all()
> [row._asdict() for row in data]
A:
Following @balki's answer: since SQLAlchemy 0.8 you can use _asdict(), available for KeyedTuple objects. This gives a pretty straightforward answer to the original question. Just change the last two lines in your example (the for loop) to this one:
for u in session.query(User).all():
print u._asdict()
This works because in the above code u is an object of type class KeyedTuple, since .all() returns a list of KeyedTuple. Therefore it has the method _asdict(), which nicely returns u as a dictionary.
WRT the answer by @STB: AFAIK, anything that .all() returns is a list of KeyedTuple. Therefore, the above works whether you specify a column or not, as long as you are dealing with the result of .all() as applied to a Query object.
A:
from sqlalchemy.orm import class_mapper
def asdict(obj):
return dict((col.name, getattr(obj, col.name))
for col in class_mapper(obj.__class__).mapped_table.c)
A:
Referring to Alex Brasetvik's answer, you can use one line of code to solve the problem:
row_as_dict = [dict(row) for row in resultproxy]
Under the comment section of Alex Brasetvik's answer, zzzeek, the creator of SQLAlchemy, stated this is the "Correct Method" for the problem.
A:
with sqlalchemy 1.4
session.execute(select(User.id, User.username)).mappings().all()
>> [{'id': 1, 'username': 'Bob'}, {'id': 2, 'username': 'Alice'}]
A:
I found this post because I was looking for a way to convert a SQLAlchemy row into a dict. I'm using SqlSoup... but the answer was built by myself, so if it could help someone, here's my two cents:
a = db.execute('select * from acquisizioni_motes')
b = a.fetchall()
c = b[0]
# and now, finally...
dict(zip(c.keys(), c.values()))
A:
You could try to do it in this way.
for u in session.query(User).all():
print(u._asdict())
It uses a built-in method of the returned row objects that returns each row as a dictionary.
references: https://docs.sqlalchemy.org/en/latest/orm/query.html
A:
With python 3.8+, we can do this with dataclass, and the asdict method that comes with it:
from dataclasses import dataclass, asdict
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy import Column, String, Integer, create_engine
Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=False)
@dataclass
class User(Base):
__tablename__ = 'users'
id: int = Column(Integer, primary_key=True)
name: str = Column(String)
email = Column(String)
def __init__(self, name):
self.name = name
self.email = '[email protected]'
Base.metadata.create_all(engine)
SessionMaker = sessionmaker(bind=engine)
session = SessionMaker()
user1 = User("anurag")
session.add(user1)
session.commit()
query_result = session.query(User).one() # type: User
print(f'{query_result.id=:}, {query_result.name=:}, {query_result.email=:}')
# query_result.id=1, query_result.name=anurag, [email protected]
query_result_dict = asdict(query_result)
print(query_result_dict)
# {'id': 1, 'name': 'anurag'}
The key is to use the @dataclass decorator, and annotate each column with its type (the : str part of the name: str = Column(String) line).
Also note that since the email is not annotated, it is not included in query_result_dict.
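A small sketch of why: dataclasses.asdict only looks at annotated fields, which you can inspect directly:
from dataclasses import fields
print([f.name for f in fields(User)])
# ['id', 'name']  -- 'email' is missing because it has no type annotation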
A:
The expression you are iterating through evaluates to a list of model objects, not rows. So the following is the correct usage of them:
for u in session.query(User).all():
print u.id, u.name
Do you really need to convert them to dicts? Sure, there are a lot of ways, but then you don't need the ORM part of SQLAlchemy:
result = session.execute(User.__table__.select())
for row in result:
print dict(row)
Update: Take a look at the sqlalchemy.orm.attributes module. It has a set of functions to work with object state that might be useful for you, especially instance_dict().
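For example, a minimal sketch using instance_dict(); note that the returned dict is the instance's attribute dictionary, so it still contains the '_sa_instance_state' entry, which you may want to filter out:
from sqlalchemy.orm.attributes import instance_dict
user = session.query(User).first()
d = {k: v for k, v in instance_dict(user).items() if k != '_sa_instance_state'}
print(d)  # e.g. {'id': 1, 'name': 'anurag'}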
A:
I've just been dealing with this issue for a few minutes.
The answer marked as correct doesn't respect the type of the fields.
The solution comes from dictalchemy, which adds some interesting features.
https://pythonhosted.org/dictalchemy/
I've just tested it and it works fine.
Base = declarative_base(cls=DictableModel)
session.query(User).asdict()
{'id': 1, 'username': 'Gerald'}
session.query(User).asdict(exclude=['id'])
{'username': 'Gerald'}
A:
class User(object):
def to_dict(self):
return dict([(k, getattr(self, k)) for k in self.__dict__.keys() if not k.startswith("_")])
That should work.
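A short usage sketch; note that this __dict__-based approach only includes attributes that have actually been loaded or set on the instance:
user = session.query(User).first()
print(user.to_dict())  # e.g. {'id': 1, 'name': 'anurag'}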
A:
You can convert a SQLAlchemy object to a dictionary like this and return it as JSON or a dictionary.
Helper functions:
import json
from collections import OrderedDict
def asdict(self):
result = OrderedDict()
for key in self.__mapper__.c.keys():
if getattr(self, key) is not None:
result[key] = str(getattr(self, key))
else:
result[key] = getattr(self, key)
return result
def to_array(all_vendors):
v = [ ven.asdict() for ven in all_vendors ]
return json.dumps(v)
Driver Function:
def all_products():
all_products = Products.query.all()
return to_array(all_products)
A:
Two ways:
1.
for row in session.execute(session.query(User).statement):
print(dict(row))
2.
selected_columns = User.__table__.columns
rows = session.query(User).with_entities(*selected_columns).all()
for row in rows :
print(row._asdict())
A:
Here is how Elixir does it. The value of this solution is that it allows recursively including the dictionary representation of relations.
def to_dict(self, deep={}, exclude=[]):
"""Generate a JSON-style nested dict/list structure from an object."""
col_prop_names = [p.key for p in self.mapper.iterate_properties \
if isinstance(p, ColumnProperty)]
data = dict([(name, getattr(self, name))
for name in col_prop_names if name not in exclude])
for rname, rdeep in deep.iteritems():
dbdata = getattr(self, rname)
#FIXME: use attribute names (ie coltoprop) instead of column names
fks = self.mapper.get_property(rname).remote_side
exclude = [c.name for c in fks]
if dbdata is None:
data[rname] = None
elif isinstance(dbdata, list):
data[rname] = [o.to_dict(rdeep, exclude) for o in dbdata]
else:
data[rname] = dbdata.to_dict(rdeep, exclude)
return data
A:
With this code you can also add a "filter" or "join" to your query, and it still works!
query = session.query(User)
def query_to_dict(query):
def _create_dict(r):
return {c.get('name'): getattr(r, c.get('name')) for c in query.column_descriptions}
return [_create_dict(r) for r in query]
A:
For the sake of everyone and myself, here is how I use it:
def run_sql(conn_string):
output_connection = engine.create_engine(conn_string, poolclass=NullPool).connect()
rows = output_connection.execute('select * from db1.t1').fetchall()
return [dict(row) for row in rows]
A:
As OP stated, calling the dict initializer raises an exception with the message "User" object is not iterable. So the real question is how to make a SQLAlchemy Model iterable?
We'll have to implement the special methods __iter__ and __next__, but if we inherit directly from the declarative_base model, we would still run into the undesirable "_sa_instance_state" key. What's worse, we would have to loop through __dict__.keys() for every call to __next__ because the keys() method returns a view -- an iterable that is not indexed. This would increase the time complexity by a factor of N, where N is the number of keys in __dict__. Generating the dict would cost O(N^2). We can do better.
We can implement our own Base class that implements the required special methods and stores a list of the column names that can be accessed by index, reducing the time complexity of generating the dict to O(N). This has the added benefit that we can define the logic once and inherit from our Base class any time we want a model class to be iterable.
class IterableBase(declarative_base()):
__abstract__ = True
def _init_keys(self):
self._keys = [c.name for c in self.__table__.columns]
self._dict = {c.name: getattr(self, c.name) for c in self.__table__.columns}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._init_keys()
def __setattr__(self, name, value):
super().__setattr__(name, value)
if name not in ('_dict', '_keys', '_n') and '_dict' in self.__dict__:
self._dict[name] = value
def __iter__(self):
self._n = 0
return self
def __next__(self):
if self._n >= len(self._keys):
raise StopIteration
self._n += 1
key = self._keys[self._n-1]
return (key, self._dict[key])
Now the User class can inherit directly from our IterableBase class.
class User(IterableBase):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
You can confirm that calling the dict function with a User instance as an argument returns the desired dictionary, sans "_sa_instance_state". You may have noticed the __setattr__ method that was declared in the IterableBase class. This ensures the _dict is updated when attributes are mutated or set after initialization.
def main():
user1 = User('Bob')
print(dict(user1))
# outputs {'id': None, 'name': 'Bob'}
user1.id = 42
print(dict(user1))
# outputs {'id': 42, 'name': 'Bob'}
if __name__ == '__main__':
main()
A:
I have a variation on Marco Mariani's answer, expressed as a decorator. The main difference is that it'll handle lists of entities, as well as safely ignoring some other types of return values (which is very useful when writing tests using mocks):
@decorator
def to_dict(f, *args, **kwargs):
result = f(*args, **kwargs)
if is_iterable(result) and not is_dict(result):
return map(asdict, result)
return asdict(result)
def asdict(obj):
return dict((col.name, getattr(obj, col.name))
for col in class_mapper(obj.__class__).mapped_table.c)
def is_dict(obj):
return isinstance(obj, dict)
def is_iterable(obj):
return True if getattr(obj, '__iter__', False) else False
A:
To complete @Anurag Uniyal 's answer, here is a method that will recursively follow relationships:
from sqlalchemy.inspection import inspect
def to_dict(obj, with_relationships=True):
d = {}
for column in obj.__table__.columns:
if with_relationships and len(column.foreign_keys) > 0:
# Skip foreign keys
continue
d[column.name] = getattr(obj, column.name)
if with_relationships:
for relationship in inspect(type(obj)).relationships:
val = getattr(obj, relationship.key)
d[relationship.key] = to_dict(val) if val else None
return d
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
first_name = Column(TEXT)
    address_id = Column(Integer, ForeignKey('addresses.id'))
address = relationship('Address')
class Address(Base):
__tablename__ = 'addresses'
id = Column(Integer, primary_key=True)
city = Column(TEXT)
user = User(first_name='Nathan', address=Address(city='Lyon'))
# Add and commit user to session to create ids
to_dict(user)
# {'id': 1, 'first_name': 'Nathan', 'address': {'city': 'Lyon'}}
to_dict(user, with_relationships=False)
# {'id': 1, 'first_name': 'Nathan', 'address_id': 1}
A:
We can get the query result as a list of dicts:
def queryset_to_dict(query_result):
query_columns = query_result[0].keys()
res = [list(ele) for ele in query_result]
dict_list = [dict(zip(query_columns, l)) for l in res]
return dict_list
query_result = db.session.query(LanguageMaster).all()
dictvalue=queryset_to_dict(query_result)
A:
from copy import copy
def to_record(row):
record = copy(row.__dict__)
del record["_sa_instance_state"]
return record
If not using copy, you might run into errors.
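A short usage sketch, assuming a mapped User class and an active session:
records = [to_record(u) for u in session.query(User).all()]
print(records)  # e.g. [{'id': 1, 'name': 'anurag'}]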
A:
An improved version of Anurag Uniyal's answer, which takes the column types into account:
def sa_vars(row):
return {
column.name: column.type.python_type(getattr(row, column.name))
for column in row.__table__.columns
}
A:
I am a newly minted Python programmer and ran into problems getting joined tables into JSON. Using information from the answers here, I built a function that returns reasonable results as JSON, including the table names so I don't have to alias columns or risk field collisions.
Simply pass the result of a session query:
test = Session().query(VMInfo, Customer).join(Customer).order_by(VMInfo.vm_name).limit(50).offset(10)
result_json = sqlAl2json(test)
def sqlAl2json(result):
arr = []
for rs in result.all():
proc = []
try:
iterator = iter(rs)
except TypeError:
proc.append(rs)
else:
for t in rs:
proc.append(t)
dict = {}
for p in proc:
tname = type(p).__name__
for d in dir(p):
if d.startswith('_') | d.startswith('metadata'):
pass
else:
key = '%s_%s' %(tname, d)
dict[key] = getattr(p, d)
arr.append(dict)
return json.dumps(arr)
A:
If your model's attribute names are not the same as the MySQL column names,
such as:
class People:
id: int = Column(name='id', type_=Integer, primary_key=True)
createdTime: datetime = Column(name='create_time', type_=TIMESTAMP,
nullable=False,
server_default=text("CURRENT_TIMESTAMP"),
default=func.now())
modifiedTime: datetime = Column(name='modify_time', type_=TIMESTAMP,
server_default=text("CURRENT_TIMESTAMP"),
default=func.now())
then you need to use something like:
from sqlalchemy.orm import class_mapper
def asDict(self):
return {x.key: getattr(self, x.key, None) for x in
class_mapper(Application).iterate_properties}
If you instead build the dict from the raw column names (as in the to_dict below), you get modify_time and create_time as None:
{'id': 1, 'create_time': None, 'modify_time': None}
def to_dict(self):
return {c.name: getattr(self, c.name, None)
for c in self.__table__.columns}
This is because the class attribute names are not equal to the column names stored in MySQL.
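A sketch of one way to build a dict keyed by the database column names while reading the values through the Python attribute names:
from sqlalchemy import inspect

def as_db_dict(obj):
    return {
        col.name: getattr(obj, prop.key, None)
        for prop in inspect(obj.__class__).mapper.column_attrs
        for col in prop.columns
    }
# e.g. as_db_dict(person) -> {'id': 1, 'create_time': ..., 'modify_time': ...}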
A:
Return the contents of this :class:.KeyedTuple as a dictionary
In [46]: result = aggregate_events[0]
In [47]: type(result)
Out[47]: sqlalchemy.util._collections.result
In [48]: def to_dict(query_result=None):
...: cover_dict = {key: getattr(query_result, key) for key in query_result.keys()}
...: return cover_dict
...:
...:
In [49]: to_dict(result)
Out[49]:
{'calculate_avg': None,
'calculate_max': None,
'calculate_min': None,
'calculate_sum': None,
'dataPointIntID': 6,
'data_avg': 10.0,
'data_max': 10.0,
'data_min': 10.0,
'data_sum': 60.0,
'deviceID': u'asas',
'productID': u'U7qUDa',
'tenantID': u'CvdQcYzUM'}
A:
def to_dict(row):
return {column.name: getattr(row, row.__mapper__.get_property_by_column(column).key) for column in row.__table__.columns}
for u in session.query(User).all():
print(to_dict(u))
This function might help.
I can't find a better solution for the case when the attribute name is different from the column name.
A:
You'll need this everywhere in your project. I appreciate @anurag's answer and it works fine; I was using it until now, but it clutters your code and also won't keep up with entity changes.
Rather, try this:
Inherit from the base query class (BaseQuery) in Flask-SQLAlchemy:
from flask_sqlalchemy import SQLAlchemy, BaseQuery
class Query(BaseQuery):
def as_dict(self):
context = self._compile_context()
context.statement.use_labels = False
columns = [column.name for column in context.statement.columns]
return list(map(lambda row: dict(zip(columns, row)), self.all()))
db = SQLAlchemy(query_class=Query)
After that, the "as_dict" method will be available on every query, wherever you define your models.
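A hypothetical usage sketch, assuming a Flask-SQLAlchemy model named User bound to the db instance above:
users_as_dicts = User.query.filter_by(name='anurag').as_dict()
# e.g. [{'id': 1, 'name': 'anurag'}]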
A:
Use a dict comprehension:
for u in session.query(User).all():
    print({column.name: str(getattr(u, column.name)) for column in u.__table__.columns})
A:
After querying the database using the following SQLAlchemy code:
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
SQLALCHEMY_DATABASE_URL = 'sqlite:///./examples/sql_app.db'
engine = create_engine(SQLALCHEMY_DATABASE_URL, echo=True)
query = sqlalchemy.select(TABLE)
result = engine.execute(query).fetchall()
You can use this one-liner:
query_dict = [record._mapping for record in result]
A:
sqlalchemy-utils has get_columns to help with this.
You could write:
{column: getattr(row, column) for column in get_columns(row)}
| How to convert SQLAlchemy row object to a Python dict? | Is there a simple way to iterate over column name and value pairs?
My version of SQLAlchemy is 0.5.6
Here is the sample code where I tried using dict(row):
import sqlalchemy
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
print "sqlalchemy version:",sqlalchemy.__version__
engine = create_engine('sqlite:///:memory:', echo=False)
metadata = MetaData()
users_table = Table('users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String),
)
metadata.create_all(engine)
class User(declarative_base()):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
def __init__(self, name):
self.name = name
Session = sessionmaker(bind=engine)
session = Session()
user1 = User("anurag")
session.add(user1)
session.commit()
# uncommenting next line throws exception 'TypeError: 'User' object is not iterable'
#print dict(user1)
# this one also throws 'TypeError: 'User' object is not iterable'
for u in session.query(User).all():
print dict(u)
Running this code on my system outputs:
Traceback (most recent call last):
File "untitled-1.py", line 37, in <module>
print dict(u)
TypeError: 'User' object is not iterable
| [
"You may access the internal __dict__ of a SQLAlchemy object, like the following:\nfor u in session.query(User).all():\n print u.__dict__\n\n",
"I couldn't get a good answer so I use this:\ndef row2dict(row):\n d = {}\n for column in row.__table__.columns:\n d[column.name] = str(getattr(row, column.name))\n\n return d\n\nEdit: if above function is too long and not suited for some tastes here is a one liner (python 2.7+)\nrow2dict = lambda r: {c.name: str(getattr(r, c.name)) for c in r.__table__.columns}\n\n",
"As per @zzzeek in comments:\n\nnote that this is the correct answer for modern versions of\nSQLAlchemy, assuming \"row\" is a core row object, not an ORM-mapped\ninstance.\n\nfor row in resultproxy:\n row_as_dict = row._mapping # SQLAlchemy 1.4 and greater\n # row_as_dict = dict(row) # SQLAlchemy 1.3 and earlier\n\nbackground on row._mapping, new as of SQLAlchemy 1.4: https://docs.sqlalchemy.org/en/stable/core/connections.html#sqlalchemy.engine.Row._mapping\n",
"In SQLAlchemy v0.8 and newer, use the inspection system.\nfrom sqlalchemy import inspect\n\ndef object_as_dict(obj):\n return {c.key: getattr(obj, c.key)\n for c in inspect(obj).mapper.column_attrs}\n\nuser = session.query(User).first()\n\nd = object_as_dict(user)\n\nNote that .key is the attribute name, which can be different from the column name, e.g. in the following case:\nclass_ = Column('class', Text)\n\nThis method also works for column_property.\n",
"rows have an _asdict() function which gives a dict\nIn [8]: r1 = db.session.query(Topic.name).first()\n\nIn [9]: r1\nOut[9]: (u'blah')\n\nIn [10]: r1.name\nOut[10]: u'blah'\n\nIn [11]: r1._asdict()\nOut[11]: {'name': u'blah'}\n\n",
"Assuming the following functions will be added to the class User the following will return all key-value pairs of all columns:\ndef columns_to_dict(self):\n dict_ = {}\n for key in self.__mapper__.c.keys():\n dict_[key] = getattr(self, key)\n return dict_\n\nunlike the other answers all but only those attributes of the object are returned which are Column attributes at class level of the object. Therefore no _sa_instance_state or any other attribute SQLalchemy or you add to the object are included. Reference\nEDIT: Forget to say, that this also works on inherited Columns.\nhybrid_property extention\nIf you also want to include hybrid_property attributes the following will work:\nfrom sqlalchemy import inspect\nfrom sqlalchemy.ext.hybrid import hybrid_property\n\ndef publics_to_dict(self) -> {}:\n dict_ = {}\n for key in self.__mapper__.c.keys():\n if not key.startswith('_'):\n dict_[key] = getattr(self, key)\n\n for key, prop in inspect(self.__class__).all_orm_descriptors.items():\n if isinstance(prop, hybrid_property):\n dict_[key] = getattr(self, key)\n return dict_\n\nI assume here that you mark Columns with an beginning _ to indicate that you want to hide them, either because you access the attribute by an hybrid_property or you simply do not want to show them. Reference\nTipp all_orm_descriptors also returns hybrid_method and AssociationProxy if you also want to include them.\nRemarks to other answers\nEvery answer (like 1, 2 ) which based on the __dict__ attribute simply returns all attributes of the object. This could be much more attributes then you want. Like I sad this includes _sa_instance_state or any other attribute you define on this object.\nEvery answer (like 1, 2 ) which is based on the dict() function only works on SQLalchemy row objects returned by session.execute() not on the classes you define to work with, like the class User from the question.\nThe solving answer which is based on row.__table__.columns will definitely not work. row.__table__.columns contains the column names of the SQL Database. These can only be equal to the attributes name of the python object. If not you get an AttributeError.\nFor answers (like 1, 2 ) based on class_mapper(obj.__class__).mapped_table.c it is the same.\n",
"as @balki mentioned:\nThe _asdict() method can be used if you're querying a specific field because it is returned as a KeyedTuple.\nIn [1]: foo = db.session.query(Topic.name).first()\nIn [2]: foo._asdict()\nOut[2]: {'name': u'blah'}\n\nWhereas, if you do not specify a column you can use one of the other proposed methods - such as the one provided by @charlax. Note that this method is only valid for 2.7+.\nIn [1]: foo = db.session.query(Topic).first()\nIn [2]: {x.name: getattr(foo, x.name) for x in foo.__table__.columns}\nOut[2]: {'name': u'blah'}\n\n",
"Old question, but since this the first result for \"sqlalchemy row to dict\" in Google it deserves a better answer.\nThe RowProxy object that SqlAlchemy returns has the items() method:\nhttp://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.RowProxy.items\nIt simply returns a list of (key, value) tuples. So one can convert a row to dict using the following:\nIn Python <= 2.6:\nrows = conn.execute(query)\nlist_of_dicts = [dict((key, value) for key, value in row.items()) for row in rows]\n\nIn Python >= 2.7:\nrows = conn.execute(query)\nlist_of_dicts = [{key: value for (key, value) in row.items()} for row in rows]\n\n",
"A very simple solution: row._asdict().\n\nsqlalchemy.engine.Row._asdict() (v1.4)\nsqlalchemy.util.KeyedTuple._asdict() (v1.3)\n\n> data = session.query(Table).all()\n> [row._asdict() for row in data]\n\n",
"Following @balki answer, since SQLAlchemy 0.8 you can use _asdict(), available for KeyedTuple objects. This renders a pretty straightforward answer to the original question. Just, change in your example the last two lines (the for loop) for this one:\nfor u in session.query(User).all():\n print u._asdict()\n\nThis works because in the above code u is an object of type class KeyedTuple, since .all() returns a list of KeyedTuple. Therefore it has the method _asdict(), which nicely returns u as a dictionary.\nWRT the answer by @STB: AFAIK, anything that .all() returns is a list of KeypedTuple. Therefore, the above works either if you specify a column or not, as long as you are dealing with the result of .all() as applied to a Query object.\n",
"from sqlalchemy.orm import class_mapper\n\ndef asdict(obj):\n return dict((col.name, getattr(obj, col.name))\n for col in class_mapper(obj.__class__).mapped_table.c)\n\n",
"Refer to Alex Brasetvik's Answer, you can use one line of code to solve the problem\nrow_as_dict = [dict(row) for row in resultproxy]\n\nUnder the comment section of Alex Brasetvik's Answer, zzzeek the creator of SQLAlchemy stated this is the \"Correct Method\" for the problem.\n",
"with sqlalchemy 1.4\nsession.execute(select(User.id, User.username)).mappings().all()\n>> [{'id': 1, 'username': 'Bob'}, {'id': 2, 'username': 'Alice'}]\n\n",
"I've found this post because I was looking for a way to convert a SQLAlchemy row into a dict. I'm using SqlSoup... but the answer was built by myself, so, if it could helps someone here's my two cents:\na = db.execute('select * from acquisizioni_motes')\nb = a.fetchall()\nc = b[0]\n\n# and now, finally...\ndict(zip(c.keys(), c.values()))\n\n",
"You could try to do it in this way.\nfor u in session.query(User).all():\n print(u._asdict())\n\nIt use a built-in method in the query object that return a dictonary object of the query object. \nreferences: https://docs.sqlalchemy.org/en/latest/orm/query.html\n",
"With python 3.8+, we can do this with dataclass, and the asdict method that comes with it:\nfrom dataclasses import dataclass, asdict\n\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy import Column, String, Integer, create_engine\n\nBase = declarative_base()\nengine = create_engine('sqlite:///:memory:', echo=False)\n\n\n@dataclass\nclass User(Base):\n __tablename__ = 'users'\n\n id: int = Column(Integer, primary_key=True)\n name: str = Column(String)\n email = Column(String)\n\n def __init__(self, name):\n self.name = name\n self.email = '[email protected]'\n\n\nBase.metadata.create_all(engine)\n\nSessionMaker = sessionmaker(bind=engine)\nsession = SessionMaker()\n\nuser1 = User(\"anurag\")\nsession.add(user1)\nsession.commit()\n\nquery_result = session.query(User).one() # type: User\nprint(f'{query_result.id=:}, {query_result.name=:}, {query_result.email=:}')\n# query_result.id=1, query_result.name=anurag, [email protected]\n\nquery_result_dict = asdict(query_result)\nprint(query_result_dict)\n# {'id': 1, 'name': 'anurag'}\n\nThe key is to use the @dataclass decorator, and annotate each column with its type (the : str part of the name: str = Column(String) line). \nAlso note that since the email is not annotated, it is not included in query_result_dict.\n",
"The expression you are iterating through evaluates to list of model objects, not rows. So the following is correct usage of them:\nfor u in session.query(User).all():\n print u.id, u.name\n\nDo you realy need to convert them to dicts? Sure, there is a lot of ways, but then you don't need ORM part of SQLAlchemy:\nresult = session.execute(User.__table__.select())\nfor row in result:\n print dict(row)\n\nUpdate: Take a look at sqlalchemy.orm.attributes module. It has a set of functions to work with object state, that might be useful for you, especially instance_dict().\n",
"I've just been dealing with this issue for a few minutes.\nThe answer marked as correct doesn't respect the type of the fields.\nSolution comes from dictalchemy adding some interesting fetures.\nhttps://pythonhosted.org/dictalchemy/\nI've just tested it and works fine.\nBase = declarative_base(cls=DictableModel)\n\nsession.query(User).asdict()\n{'id': 1, 'username': 'Gerald'}\n\nsession.query(User).asdict(exclude=['id'])\n{'username': 'Gerald'}\n\n",
"class User(object):\n def to_dict(self):\n return dict([(k, getattr(self, k)) for k in self.__dict__.keys() if not k.startswith(\"_\")])\n\nThat should work.\n",
"You can convert sqlalchemy object to dictionary like this and return it as json/dictionary.\nHelper functions:\nimport json\nfrom collections import OrderedDict\n\n\ndef asdict(self):\n result = OrderedDict()\n for key in self.__mapper__.c.keys():\n if getattr(self, key) is not None:\n result[key] = str(getattr(self, key))\n else:\n result[key] = getattr(self, key)\n return result\n\n\ndef to_array(all_vendors):\n v = [ ven.asdict() for ven in all_vendors ]\n return json.dumps(v) \n\nDriver Function:\ndef all_products():\n all_products = Products.query.all()\n return to_array(all_products)\n\n",
"Two ways:\n1.\nfor row in session.execute(session.query(User).statement):\n print(dict(row))\n\n2.\nselected_columns = User.__table__.columns\nrows = session.query(User).with_entities(*selected_columns).all()\nfor row in rows :\n print(row._asdict())\n\n",
"Here is how Elixir does it. The value of this solution is that it allows recursively including the dictionary representation of relations.\ndef to_dict(self, deep={}, exclude=[]):\n \"\"\"Generate a JSON-style nested dict/list structure from an object.\"\"\"\n col_prop_names = [p.key for p in self.mapper.iterate_properties \\\n if isinstance(p, ColumnProperty)]\n data = dict([(name, getattr(self, name))\n for name in col_prop_names if name not in exclude])\n for rname, rdeep in deep.iteritems():\n dbdata = getattr(self, rname)\n #FIXME: use attribute names (ie coltoprop) instead of column names\n fks = self.mapper.get_property(rname).remote_side\n exclude = [c.name for c in fks]\n if dbdata is None:\n data[rname] = None\n elif isinstance(dbdata, list):\n data[rname] = [o.to_dict(rdeep, exclude) for o in dbdata]\n else:\n data[rname] = dbdata.to_dict(rdeep, exclude)\n return data\n\n",
"With this code you can also to add to your query \"filter\" or \"join\" and this work!\nquery = session.query(User)\ndef query_to_dict(query):\n def _create_dict(r):\n return {c.get('name'): getattr(r, c.get('name')) for c in query.column_descriptions}\n\n return [_create_dict(r) for r in query]\n\n",
"For the sake of everyone and myself, here is how I use it:\ndef run_sql(conn_String):\n output_connection = engine.create_engine(conn_string, poolclass=NullPool).connect()\n rows = output_connection.execute('select * from db1.t1').fetchall() \n return [dict(row) for row in rows]\n\n",
"As OP stated, calling the dict initializer raises an exception with the message \"User\" object is not iterable. So the real question is how to make a SQLAlchemy Model iterable?\nWe'll have to implement the special methods __iter__ and __next__, but if we inherit directly from the declarative_base model, we would still run into the undesirable \"_sa_instance_state\" key. What's worse, is we would have to loop through __dict__.keys() for every call to __next__ because the keys() method returns a View -- an iterable that is not indexed. This would increase the time complexity by a factor of N, where N is the number of keys in __dict__. Generating the dict would cost O(N^2). We can do better.\nWe can implement our own Base class that implements the required special methods and stores a list of of the column names that can be accessed by index, reducing the time complexity of generating the dict to O(N). This has the added benefit that we can define the logic once and inherit from our Base class anytime we want our model class to be iterable.\nclass IterableBase(declarative_base()):\n __abstract__ = True\n\n def _init_keys(self):\n self._keys = [c.name for c in self.__table__.columns]\n self._dict = {c.name: getattr(self, c.name) for c in self.__table__.columns}\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._init_keys()\n\n def __setattr__(self, name, value):\n super().__setattr__(name, value)\n if name not in ('_dict', '_keys', '_n') and '_dict' in self.__dict__:\n self._dict[name] = value\n\n def __iter__(self):\n self._n = 0\n return self\n\n def __next__(self):\n if self._n >= len(self._keys):\n raise StopIteration\n self._n += 1\n key = self._keys[self._n-1]\n return (key, self._dict[key])\n\nNow the User class can inherit directly from our IterableBase class.\nclass User(IterableBase):\n __tablename__ = 'users'\n id = Column(Integer, primary_key=True)\n name = Column(String)\n\nYou can confirm that calling the dict function with a User instance as an argument returns the desired dictionary, sans \"_sa_instance_state\". You may have noticed the __setattr__ method that was declared in the IterableBase class. This ensures the _dict is updated when attributes are mutated or set after initialization.\ndef main():\n user1 = User('Bob')\n print(dict(user1))\n # outputs {'id': None, 'name': 'Bob'}\n user1.id = 42\n print(dict(user1))\n # outputs {'id': 42, 'name': 'Bob'}\n\nif __name__ == '__main__':\n main()\n\n",
"I have a variation on Marco Mariani's answer, expressed as a decorator. The main difference is that it'll handle lists of entities, as well as safely ignoring some other types of return values (which is very useful when writing tests using mocks):\n@decorator\ndef to_dict(f, *args, **kwargs):\n result = f(*args, **kwargs)\n if is_iterable(result) and not is_dict(result):\n return map(asdict, result)\n\n return asdict(result)\n\ndef asdict(obj):\n return dict((col.name, getattr(obj, col.name))\n for col in class_mapper(obj.__class__).mapped_table.c)\n\ndef is_dict(obj):\n return isinstance(obj, dict)\n\ndef is_iterable(obj):\n return True if getattr(obj, '__iter__', False) else False\n\n",
"To complete @Anurag Uniyal 's answer, here is a method that will recursively follow relationships:\nfrom sqlalchemy.inspection import inspect\n\ndef to_dict(obj, with_relationships=True):\n d = {}\n for column in obj.__table__.columns:\n if with_relationships and len(column.foreign_keys) > 0:\n # Skip foreign keys\n continue\n d[column.name] = getattr(obj, column.name)\n\n if with_relationships:\n for relationship in inspect(type(obj)).relationships:\n val = getattr(obj, relationship.key)\n d[relationship.key] = to_dict(val) if val else None\n return d\n\nclass User(Base):\n __tablename__ = 'users'\n id = Column(Integer, primary_key=True)\n first_name = Column(TEXT)\n address_id = Column(Integer, ForeignKey('addresses.id')\n address = relationship('Address')\n\nclass Address(Base):\n __tablename__ = 'addresses'\n id = Column(Integer, primary_key=True)\n city = Column(TEXT)\n\n\nuser = User(first_name='Nathan', address=Address(city='Lyon'))\n# Add and commit user to session to create ids\n\nto_dict(user)\n# {'id': 1, 'first_name': 'Nathan', 'address': {'city': 'Lyon'}}\nto_dict(user, with_relationship=False)\n# {'id': 1, 'first_name': 'Nathan', 'address_id': 1}\n\n",
"We can get a list of object in dict:\ndef queryset_to_dict(query_result):\n query_columns = query_result[0].keys()\n res = [list(ele) for ele in query_result]\n dict_list = [dict(zip(query_columns, l)) for l in res]\n return dict_list\n\nquery_result = db.session.query(LanguageMaster).all()\ndictvalue=queryset_to_dict(query_result)\n\n",
"from copy import copy\n\ndef to_record(row):\n record = copy(row.__dict__)\n del record[\"_sa_instance_state\"]\n return record\n\nIf not using copy, you might run into errors.\n",
"An improved version of Anurag Uniyal's version, which takes into account types:\ndef sa_vars(row):\n return {\n column.name: column.type.python_type(getattr(row, column.name))\n for column in row.__table__.columns\n }\n\n",
"I am a newly minted Python programmer and ran into problems getting to JSON with Joined tables. Using information from the answers here I built a function to return reasonable results to JSON where the table names are included avoiding having to alias, or have fields collide.\nSimply pass the result of a session query:\ntest = Session().query(VMInfo, Customer).join(Customer).order_by(VMInfo.vm_name).limit(50).offset(10)\njson = sqlAl2json(test)\ndef sqlAl2json(self, result):\n arr = []\n for rs in result.all():\n proc = []\n try:\n iterator = iter(rs)\n except TypeError:\n proc.append(rs)\n else:\n for t in rs:\n proc.append(t)\n\n dict = {}\n for p in proc:\n tname = type(p).__name__\n for d in dir(p):\n if d.startswith('_') | d.startswith('metadata'):\n pass\n else:\n key = '%s_%s' %(tname, d)\n dict[key] = getattr(p, d)\n arr.append(dict)\n return json.dumps(arr)\n\n",
"if your models table column is not equie mysql column.\nsuch as :\nclass People:\n id: int = Column(name='id', type_=Integer, primary_key=True)\n createdTime: datetime = Column(name='create_time', type_=TIMESTAMP,\n nullable=False,\n server_default=text(\"CURRENT_TIMESTAMP\"),\n default=func.now())\n modifiedTime: datetime = Column(name='modify_time', type_=TIMESTAMP,\n server_default=text(\"CURRENT_TIMESTAMP\"),\n default=func.now())\n\nNeed to use:\n from sqlalchemy.orm import class_mapper \n def asDict(self):\n return {x.key: getattr(self, x.key, None) for x in\n class_mapper(Application).iterate_properties}\n\nif you use this way you can get modify_time and create_time both are None\n{'id': 1, 'create_time': None, 'modify_time': None}\n\n\n def to_dict(self):\n return {c.name: getattr(self, c.name, None)\n for c in self.__table__.columns}\n\nBecause Class Attributes name not equal with column store in mysql\n",
"Return the contents of this :class:.KeyedTuple as a dictionary\nIn [46]: result = aggregate_events[0]\n\nIn [47]: type(result)\nOut[47]: sqlalchemy.util._collections.result\n\nIn [48]: def to_dict(query_result=None):\n ...: cover_dict = {key: getattr(query_result, key) for key in query_result.keys()}\n ...: return cover_dict\n ...: \n ...: \n\nIn [49]: to_dict(result)\nOut[49]: \n{'calculate_avg': None,\n 'calculate_max': None,\n 'calculate_min': None,\n 'calculate_sum': None,\n 'dataPointIntID': 6,\n 'data_avg': 10.0,\n 'data_max': 10.0,\n 'data_min': 10.0,\n 'data_sum': 60.0,\n 'deviceID': u'asas',\n 'productID': u'U7qUDa',\n 'tenantID': u'CvdQcYzUM'}\n\n",
"def to_dict(row):\n return {column.name: getattr(row, row.__mapper__.get_property_by_column(column).key) for column in row.__table__.columns}\n\n\nfor u in session.query(User).all():\n print(to_dict(u))\n\nThis function might help. \nI can't find better solution to solve problem when attribute name is different then column names.\n",
"You'll need it everywhere in your project, I apriciate @anurag answered it works fine. till this point I was using it, but it'll mess all your code and also wont work with entity change.\nRather try this,\ninherit your base query class in SQLAlchemy\nfrom flask_sqlalchemy import SQLAlchemy, BaseQuery\n\n\nclass Query(BaseQuery):\n def as_dict(self):\n context = self._compile_context()\n context.statement.use_labels = False\n columns = [column.name for column in context.statement.columns]\n\n return list(map(lambda row: dict(zip(columns, row)), self.all()))\n\n\ndb = SQLAlchemy(query_class=Query)\n\nafter that wherever you'll define your object \"as_dict\" method will be there.\n",
"use dict Comprehensions\nfor u in session.query(User).all():\n print ({column.name: str(getattr(row, column.name)) for column in row.__table__.columns})\n\n",
"After querying the database using following SQLAlchemy code:\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.orm import sessionmaker\n\n\nSQLALCHEMY_DATABASE_URL = 'sqlite:///./examples/sql_app.db'\nengine = create_engine(SQLALCHEMY_DATABASE_URL, echo=True)\nquery = sqlalchemy.select(TABLE)\nresult = engine.execute(query).fetchall()\n\nYou can use this one-liner:\nquery_dict = [record._mapping for record in results]\n\n",
"sqlalchemy-utils has get_columns to help with this.\nYou could write:\n{column: getattr(row, column) for column in get_columns(row)}\n\n"
] | [
344,
186,
185,
116,
60,
32,
31,
21,
15,
13,
12,
12,
11,
10,
9,
9,
8,
6,
4,
4,
3,
2,
2,
2,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Here is a super simple way of doing it\nrow2dict = lambda r: dict(r.items())\n\n",
"In most scenarios, column name is fit for them. But maybe you write the code like follows:\nclass UserModel(BaseModel):\n user_id = Column(\"user_id\", INT, primary_key=True)\n email = Column(\"user_email\", STRING)\n\nthe column.name \"user_email\" while the field name is \"email\", the column.name could not work well as before.\n sqlalchemy_base_model.py \n also i write the answer here \n",
"A solution that works with inherited classes too:\nfrom itertools import chain\nfrom sqlalchemy.ext.declarative import declarative_base\nBase = declarative_base()\n\n\nclass Mixin(object):\n def as_dict(self):\n tables = [base.__table__ for base in self.__class__.__bases__ if base not in [Base, Mixin]]\n tables.append(self.__table__)\n return {c.name: getattr(self, c.name) for c in chain.from_iterable([x.columns for x in tables])}\n\n",
"I don't have much experience with this, but the following seems to work for what I'm doing:\ndict(row)\n\nThis seems too simple (compared to the other answers here). What am I missing?\n",
"Python 3.6.8+\nThe builtin str() method automatically converts datetime.datetime objects to iso-8806-1.\nprint(json.dumps([dict(row.items()) for row in rows], default=str, indent=\" \"))\n\nNOTE: The default func will only be applied to a value if there's an error so int and float values won't be converted... unless there's an error :).\n",
"My take utilizing (too many?) dictionaries:\ndef serialize(_query):\n#d = dictionary written to per row\n#D = dictionary d is written to each time, then reset\n#Master = dictionary of dictionaries; the id Key (int, unique from database) from D is used as the Key for the dictionary D entry in Master\nMaster = {}\nD = {}\nx = 0\nfor u in _query:\n d = u.__dict__\n D = {}\n for n in d.keys():\n if n != '_sa_instance_state':\n D[n] = d[n]\n x = d['id']\n Master[x] = D\nreturn Master\n\nRunning with flask (including jsonify) and flask_sqlalchemy to print outputs as JSON.\nCall the function with jsonify(serialize()).\nWorks with all SQLAlchemy queries I've tried so far (running SQLite3)\n"
] | [
-1,
-1,
-1,
-2,
-2,
-3
] | [
"python",
"sqlalchemy"
] | stackoverflow_0001958219_python_sqlalchemy.txt |
Q:
Is there a more elegant way to fuse two lists of certain keys in a dictionary into one list in a different dictionary?
I have a dictionary with lists as values, and I want to concatenate the lists of certain keys into one list and store it in another dictionary.
Right now i always do this:
plot_history = {}
for key in history.keys():
plot_key = key[3:]
if plot_key in scores:
if plot_key in plot_history.keys():
plot_history[plot_key].extend(history[key])
else:
plot_history[plot_key] = history[key]
Example for history:
{'tl_loss': [1.3987799882888794, 1.0936943292617798], 'tl_categorical_accuracy': [0.31684982776641846, 0.3901098966598511], 'tl_val_loss': [1.0042442083358765, 0.7404149174690247], 'tl_val_categorical_accuracy': [0.3589743673801422, 0.5256410241127014], 'ft_loss': [1.6525564193725586, 1.1816378831863403], 'ft_categorical_accuracy': [0.5370370149612427, 0.4157509207725525], 'ft_val_loss': [0.9936744570732117, 1.1621183156967163], 'ft_val_categorical_accuracy': [0.36538460850715637, 0.3910256326198578]}
Example for scores: ["loss", "val_loss"]
And i try to get this:
{'loss': [1.3987799882888794, 1.0936943292617798, 1.6525564193725586, 1.1816378831863403], 'val_loss': [1.0042442083358765, 0.7404149174690247, 0.9936744570732117, 1.1621183156967163]}
But i wonder if there's a more elegant way.
Thank you for any suggestions.
A:
You can use dictionary comprehension:
plot_history = {score: history[f"tl_{score}"] + history[f"ft_{score}"] for score in scores}
A:
With defaultdict
from collections import defaultdict
plot_history_dd = defaultdict(list)
for key in history.keys():
plot_key = key[3:]
if plot_key in scores:
plot_history_dd[plot_key].extend(history[key])
# Convert to dict for convenience
plot_history_dd = dict(plot_history_dd)
print(plot_history_dd)
| Is there a more elegant way to fuse two lists of certain keys in a dictionary into one list in a different dictionary? | I have a dictionary with lists as values and i want to concatenate lists of certain keys to one and store it in another dictionary.
Right now i always do this:
plot_history = {}
for key in history.keys():
plot_key = key[3:]
if plot_key in scores:
if plot_key in plot_history.keys():
plot_history[plot_key].extend(history[key])
else:
plot_history[plot_key] = history[key]
Example for history:
{'tl_loss': [1.3987799882888794, 1.0936943292617798], 'tl_categorical_accuracy': [0.31684982776641846, 0.3901098966598511], 'tl_val_loss': [1.0042442083358765, 0.7404149174690247], 'tl_val_categorical_accuracy': [0.3589743673801422, 0.5256410241127014], 'ft_loss': [1.6525564193725586, 1.1816378831863403], 'ft_categorical_accuracy': [0.5370370149612427, 0.4157509207725525], 'ft_val_loss': [0.9936744570732117, 1.1621183156967163], 'ft_val_categorical_accuracy': [0.36538460850715637, 0.3910256326198578]}
Example for scores: ["loss", "val_loss"]
And i try to get this:
{'loss': [1.3987799882888794, 1.0936943292617798, 1.6525564193725586, 1.1816378831863403], 'val_loss': [1.0042442083358765, 0.7404149174690247, 0.9936744570732117, 1.1621183156967163]}
But i wonder if there's a more elegant way.
Thank you for any suggestions.
| [
"You can use dictionary comprehension:\nplot_history = {score: [history[f\"tl_{score}\"], history[f\"ft_{score}\"]] for score in scores}\n\n",
"With defaultdict\nfrom collections import defaultdict\n\nplot_history_dd = defaultdict(list)\nfor key in history.keys():\n plot_key = key[3:]\n if plot_key in scores:\n plot_history_dd[plot_key].extend(history[key])\n\n# Convert to dict for convenience\nplot_history_dd = dict(plot_history_dd)\nprint(plot_history_dd)\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074655193_python.txt |
Q:
BS4 not displaying text in Flask
I'm learning Python (Flask) and BeautifulSoup. For my first project I just wanted to get a video name from YT and display it on the homepage of my web app.
An error returns:
AttributeError: 'NoneType' object has no attribute 'text'
import requests
from flask import Blueprint, render_template
from bs4 import BeautifulSoup
views = Blueprint('views', __name__)
def scrapper():
url = 'https://www.youtube.com/'
web_response = requests.get(url).text
soup = BeautifulSoup(web_response, 'lxml')
card = soup.find(class_='style-scope ytd-rich-grid-media').text
return card
@views.route('/')
def home():
return render_template('home.html', text=scrapper())
A:
BeautifulSoup can only capture static HTML source code. YouTube's content is rendered with JavaScript, which runs on the client. Use a tool that can handle JavaScript, such as Selenium.
for example,
from bs4 import BeautifulSoup
from selenium import webdriver
url = 'https://www.youtube.com/'
# your path for selenium driver (e.g., chrome or firefox)
webdriverFile = your_path + '/geckodriver' # gecko for firefox
browser = webdriver.Firefox(executable_path=webdriverFile)
browser.get(url)
source = browser.page_source
soup = BeautifulSoup(source, 'html.parser')
browser.close()
for i in soup.find_all("yt-formatted-string", {"id": "video-title"}):
print(i.text)
output:
> Music for Healing Stress, Anxiety and Depression, Remove Inner Rage and Sadness
> รวมเพลงฮิตในติ๊กต๊อก ( ผีเห็นผี + ไทม์แมชชิน ) เพลงมาแรงฟังกันยาวๆ2022
> เพลงเพราะในTikTok ครูหนุ่มแจง ไม่ได้เทงานแต่ง ยันเลิกกันด้วยดี
> Smooth Jazz Music & Bossa Nova For Good Mood - Positive Jazz Lounge Cafe Music, Coffee Shop BGM
> เปิดห้องทำงานใหม่ที่บ้าน — บอกหมด จัดห้อง จัดโต๊ะ จัดไฟ ใช้ของอะไรบ้าง
> Boyce Avenue Greatest Hits Full Album 2021 - Best Songs Of Boyce
> Avenue 2021 - Acoustic songs 2021
> .....
| BS4 not displaying text in Flask | I'm learning Python(Flask) and BeautifulSoup. For my first project I just wanted to wanted to get a video name from YT and display it on the homepage of my web app.
An error returns:
AttributeError: 'NoneType' object has no attribute 'text'
import requests
from flask import Blueprint, render_template
from bs4 import BeautifulSoup
views = Blueprint('views', __name__)
def scrapper():
url = 'https://www.youtube.com/'
web_response = requests.get(url).text
soup = BeautifulSoup(web_response, 'lxml')
card = soup.find(class_='style-scope ytd-rich-grid-media').text
return card
@views.route('/')
def home():
return render_template('home.html', text=scrapper())
| [
"Beautifulsoup can capture only static HTML source code. YT contains the javascript content which is code that runs on the client. Use a tool that can handle javascript, such as, selenium.\nfor example,\nfrom bs4 import BeautifulSoup\nfrom selenium import webdriver\n\nurl = 'https://www.youtube.com/'\n\n# your path for selenium driver (e.g., chrome or firefox)\nwebdriverFile = your_path + '/geckodriver' # gecko for firefox\nbrowser = webdriver.Firefox(executable_path=webdriverFile)\nbrowser.get(url)\nsource = browser.page_source\nsoup = BeautifulSoup(source, 'html.parser')\nbrowser.close()\n\nfor i in soup.find_all(\"yt-formatted-string\", {\"id\": \"video-title\"}):\n print(i.text)\n\noutput:\n> Music for Healing Stress, Anxiety and Depression, Remove Inner Rage and Sadness\n> รวมเพลงฮิตในติ๊กต๊อก ( ผีเห็นผี + ไทม์แมชชิน ) เพลงมาแรงฟังกันยาวๆ2022\n> เพลงเพราะในTikTok ครูหนุ่มแจง ไม่ได้เทงานแต่ง ยันเลิกกันด้วยดี\n> Smooth Jazz Music & Bossa Nova For Good Mood - Positive Jazz Lounge Cafe Music, Coffee Shop BGM\n> เปิดห้องทำงานใหม่ที่บ้าน — บอกหมด จัดห้อง จัดโต๊ะ จัดไฟ ใช้ของอะไรบ้าง\n> Boyce Avenue Greatest Hits Full Album 2021 - Best Songs Of Boyce\n> Avenue 2021 - Acoustic songs 2021\n> .....\n\n"
] | [
1
] | [] | [] | [
"beautifulsoup",
"flask",
"python",
"web_scraping",
"youtube"
] | stackoverflow_0074654051_beautifulsoup_flask_python_web_scraping_youtube.txt |
Q:
Get current user Flask-User
I have set up a basic web app with Flask-User 1.0 and connected it to my MongoDB, and registration and login work. But once the logged-in user enters the member_page, I want to be able to send and receive information between the client and server. I'm planning on using socket.io since I have used it before, but I have no way of knowing how to get information about the current user. I will make something like a calendar for the logged-in user, which the user can add to and edit. Currently I can't find out who the current user is, and therefore I don't know whose information to send back to the user; the same goes for when the user adds something to his calendar.
If I print the session I get a bunch of information, but I have no idea how to extract the username of the current user.
<SecureCookieSession {'_fresh': True, '_id': 'ea37d60dd399bf244b53b5fc2b00629d11e3f0b844cbaaaa8902ad00b920133e1b4ea777d2af9492d4feffc81f9500d7e5889bd04a804c75e91e939b97fcfd22', '_permanent': True, 'csrf_token': 'e5040f2814ebf30f563635bbff459158fdd36bef', 'user_id': 'gAAAAABblr9Y80DfmIY66WOcUe5rYE6EjGgAHd5gMeH9Cst91VYKEvtYq14vAPqdgU5lzkb5ELJZrzWWg9mE2oN4_U3PsZeiHWW5iV7VWVh952WKlYEKn3SnMA0aEnOW0zSl47qqKqwB'}>
Any help or guidance would be much appreciated.
from flask import Flask, render_template_string, session
from flask_mongoengine import MongoEngine
from flask_user import login_required, UserManager, UserMixin
# Class-based application configuration
class ConfigClass(object):
""" Flask application config """
# Flask settings
SECRET_KEY = 'This is an INSECURE secret!! DO NOT use this in production!!'
# Flask-MongoEngine settings
MONGODB_SETTINGS = {
'db': 'tst_app',
'host': 'mongodb://localhost:33420/website'
}
# Flask-User settings
USER_APP_NAME = "Flask-User MongoDB App" # Shown in and email templates and page footers
USER_ENABLE_EMAIL = False # Disable email authentication
USER_ENABLE_USERNAME = True # Enable username authentication
USER_REQUIRE_RETYPE_PASSWORD = False # Simplify register form
def create_app():
""" Flask application factory """
# Setup Flask and load app.config
app = Flask(__name__)
app.config.from_object(__name__ + '.ConfigClass')
# Setup Flask-MongoEngine
db = MongoEngine(app)
# Define the User document.
# NB: Make sure to add flask_user UserMixin !!!
class User(db.Document, UserMixin):
active = db.BooleanField(default=True)
# User authentication information
username = db.StringField(default='')
password = db.StringField()
# User information
first_name = db.StringField(default='')
last_name = db.StringField(default='')
# Relationships
roles = db.ListField(db.StringField(), default=[])
# Setup Flask-User and specify the User data-model
user_manager = UserManager(app, db, User)
# The Home page is accessible to anyone
@app.route('/')
def home_page():
# String-based templates
return render_template_string("""
{% extends "flask_user_layout.html" %}
{% block content %}
<h2>Home page</h2>
<p><a href={{ url_for('user.register') }}>Register</a></p>
<p><a href={{ url_for('user.login') }}>Sign in</a></p>
<p><a href={{ url_for('home_page') }}>Home page</a> (accessible to anyone)</p>
<p><a href={{ url_for('member_page') }}>Member page</a> (login required)</p>
<p><a href={{ url_for('user.logout') }}>Sign out</a></p>
{% endblock %}
""")
# The Members page is only accessible to authenticated users via the @login_required decorator
@app.route('/members')
@login_required # User must be authenticated
def member_page():
# String-based templates
return render_template_string("""
{% extends "flask_user_layout.html" %}
{% block content %}
<h2>Members page</h2>
<p><a href={{ url_for('user.register') }}>Register</a></p>
<p><a href={{ url_for('user.login') }}>Sign in</a></p>
<p><a href={{ url_for('home_page') }}>Home page</a> (accessible to anyone)</p>
<p><a href={{ url_for('member_page') }}>Member page</a> (login required)</p>
<p><a href={{ url_for('user.logout') }}>Sign out</a></p>
{% endblock %}
""")
return app
# Start development web server
if __name__ == '__main__':
app = create_app()
app.run(host='0.0.0.0', port=5000, debug=True)
Solution for anyone in the future
I added this code in user_mixin.py:
@classmethod
def get_user_id_by_token(cls, token, expiration_in_seconds=None):
# This function works in tandem with UserMixin.get_id()
# Token signatures and timestamps are verified.
# user_id and password_ends_with are decrypted.
# Verifies a token and decrypts a User ID and parts of a User password hash
user_manager = current_app.user_manager
data_items = user_manager.verify_token(token, expiration_in_seconds)
# Verify password_ends_with
token_is_valid = False
if data_items:
# Load user by User ID
user_id = data_items[0]
password_ends_with = data_items[1]
user = user_manager.db_manager.get_user_by_id(user_id)
user_password = '' if user_manager.USER_ENABLE_AUTH0 else user.password[-8:]
# Make sure that last 8 characters of user password matches
token_is_valid = user and user_password==password_ends_with
return user_id if token_is_valid else None
I can now call my function and user_id will be returned
user_id = UserMixin.get_user_id_by_token(session['user_id'])
A:
Use the current_user proxy provided by Flask-Login (which Flask-User is built on): from flask_login import current_user.
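A minimal sketch of using it inside a view (current_user can also be imported from flask_user itself):
from flask_user import login_required, current_user

@app.route('/whoami')  # hypothetical route, just for illustration
@login_required
def whoami():
    # current_user is the logged-in User document, so its fields are accessible
    return 'Logged in as: ' + current_user.username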
| Get current user Flask-User | I have step up a basic web app with the Flask-User 1.0 connected it to my Mongodb. And the registration and login work. But once the logged in user enters the member_page I want to be able to send and receive information between the client and server. Planning on using socket.io since I have used it before. But I have now way of knowing how to get the information about the current user. I will make like a calendar for the logged in user. That the user can add and edit his own calendar. But currently I cant find out the current user and therefore not know whos information to send back to the user, same goes if the user adds something in his calendar.
If I print the session I get a bunch of information. But I have no ide how to extract the username of the current user.
<SecureCookieSession {'_fresh': True, '_id': 'ea37d60dd399bf244b53b5fc2b00629d11e3f0b844cbaaaa8902ad00b920133e1b4ea777d2af9492d4feffc81f9500d7e5889bd04a804c75e91e939b97fcfd22', '_permanent': True, 'csrf_token': 'e5040f2814ebf30f563635bbff459158fdd36bef', 'user_id': 'gAAAAABblr9Y80DfmIY66WOcUe5rYE6EjGgAHd5gMeH9Cst91VYKEvtYq14vAPqdgU5lzkb5ELJZrzWWg9mE2oN4_U3PsZeiHWW5iV7VWVh952WKlYEKn3SnMA0aEnOW0zSl47qqKqwB'}>
Any help or guidance would be much appreciated.
from flask import Flask, render_template_string, session
from flask_mongoengine import MongoEngine
from flask_user import login_required, UserManager, UserMixin
# Class-based application configuration
class ConfigClass(object):
""" Flask application config """
# Flask settings
SECRET_KEY = 'This is an INSECURE secret!! DO NOT use this in production!!'
# Flask-MongoEngine settings
MONGODB_SETTINGS = {
'db': 'tst_app',
'host': 'mongodb://localhost:33420/website'
}
# Flask-User settings
USER_APP_NAME = "Flask-User MongoDB App" # Shown in and email templates and page footers
USER_ENABLE_EMAIL = False # Disable email authentication
USER_ENABLE_USERNAME = True # Enable username authentication
USER_REQUIRE_RETYPE_PASSWORD = False # Simplify register form
def create_app():
""" Flask application factory """
# Setup Flask and load app.config
app = Flask(__name__)
app.config.from_object(__name__ + '.ConfigClass')
# Setup Flask-MongoEngine
db = MongoEngine(app)
# Define the User document.
# NB: Make sure to add flask_user UserMixin !!!
class User(db.Document, UserMixin):
active = db.BooleanField(default=True)
# User authentication information
username = db.StringField(default='')
password = db.StringField()
# User information
first_name = db.StringField(default='')
last_name = db.StringField(default='')
# Relationships
roles = db.ListField(db.StringField(), default=[])
# Setup Flask-User and specify the User data-model
user_manager = UserManager(app, db, User)
# The Home page is accessible to anyone
@app.route('/')
def home_page():
# String-based templates
return render_template_string("""
{% extends "flask_user_layout.html" %}
{% block content %}
<h2>Home page</h2>
<p><a href={{ url_for('user.register') }}>Register</a></p>
<p><a href={{ url_for('user.login') }}>Sign in</a></p>
<p><a href={{ url_for('home_page') }}>Home page</a> (accessible to anyone)</p>
<p><a href={{ url_for('member_page') }}>Member page</a> (login required)</p>
<p><a href={{ url_for('user.logout') }}>Sign out</a></p>
{% endblock %}
""")
# The Members page is only accessible to authenticated users via the @login_required decorator
@app.route('/members')
@login_required # User must be authenticated
def member_page():
# String-based templates
return render_template_string("""
{% extends "flask_user_layout.html" %}
{% block content %}
<h2>Members page</h2>
<p><a href={{ url_for('user.register') }}>Register</a></p>
<p><a href={{ url_for('user.login') }}>Sign in</a></p>
<p><a href={{ url_for('home_page') }}>Home page</a> (accessible to anyone)</p>
<p><a href={{ url_for('member_page') }}>Member page</a> (login required)</p>
<p><a href={{ url_for('user.logout') }}>Sign out</a></p>
{% endblock %}
""")
return app
# Start development web server
if __name__ == '__main__':
app = create_app()
app.run(host='0.0.0.0', port=5000, debug=True)
Solution for anyone in the future
I added this code in user_mixin.py:
@classmethod
def get_user_id_by_token(cls, token, expiration_in_seconds=None):
# This function works in tandem with UserMixin.get_id()
# Token signatures and timestamps are verified.
# user_id and password_ends_with are decrypted.
# Verifies a token and decrypts a User ID and parts of a User password hash
user_manager = current_app.user_manager
data_items = user_manager.verify_token(token, expiration_in_seconds)
# Verify password_ends_with
token_is_valid = False
if data_items:
# Load user by User ID
user_id = data_items[0]
password_ends_with = data_items[1]
user = user_manager.db_manager.get_user_by_id(user_id)
user_password = '' if user_manager.USER_ENABLE_AUTH0 else user.password[-8:]
# Make sure that last 8 characters of user password matches
token_is_valid = user and user_password==password_ends_with
return user_id if token_is_valid else None
I can now call my function and user_id will be returned
user_id = UserMixin.get_user_id_by_token(session['user_id'])
| [
"From flask import current_user\n"
] | [
0
] | [] | [] | [
"flask",
"flask_login",
"python"
] | stackoverflow_0052263969_flask_flask_login_python.txt |
Q:
Loop or function to make changes in multiple dataframes
Python loop or function to make changes in multiple dataframes (with same headers).
The following loop does not work:
df1 = pd.read_csv('D1.csv')
df2 = pd.read_csv('D2.csv')
P1=['x', 'y', 'z', 'w']
T1=['t1','t2','t3','t4']
df_list = [df1, df2]
for df in df_list:
#summarize P1 & T1 and store to new columns
df['P'] = df[P1].sum(axis=1)
df['T'] = df[T1].sum(axis=1)
#drop initial columns P1 & T1
df=df.drop(df[P1],axis=1)
df=df.drop(df[T1],axis=1)
#filter and rename
df = df[df['C'].isin([1,2])]
df.rename(columns={'A1': 'A','B1: 'B','C1': 'C'}, inplace=True)
with pd.ExcelWriter(Book1.xlsx', engine='openpyxl', mode='a') as writer:
df1.to_excel(writer, sheet_name='D1')
df2.to_excel(writer, sheet_name='D2')
A:
Your code would probably raise a KeyError when calling pandas.DataFrame.drop and/or a SyntaxError when calling pandas.DataFrame.rename.
Try this :
for df in df_list:
#summarize P1 & T1 and store to new columns
df['P'] = df[P1].sum(axis=1)
df['T'] = df[T1].sum(axis=1)
#drop initial columns P1 & T1
df = df.drop(columns= P1+T1)
#filter and rename
df = df[df['C'].isin([1,2])]
df.rename(columns={'A1': 'A', 'B1': 'B','C1': 'C'}, inplace=True)
Or in one bloc :
for df in df_list:
df= (
df
.assign(P = df[P1].sum(axis=1),
T = df[T1].sum(axis=1))
.drop(columns= P1+T1)
.loc[lambda x: x['C'].isin([1, 2])]
.rename(columns={'A1': 'A', 'B1': 'B','C1': 'C'})
)
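One more thing to watch, separate from the error itself: rebinding df inside the loop does not update df1 and df2, so the later ExcelWriter block would still write the untransformed frames. A sketch of collecting the results explicitly, reusing the df1/df2, P1 and T1 from the question:
results = {}
for name, df in {"D1": df1, "D2": df2}.items():
    results[name] = (
        df.assign(P=df[P1].sum(axis=1),
                  T=df[T1].sum(axis=1))
          .drop(columns=P1 + T1)                            # drop the original P1/T1 columns
          .loc[lambda x: x['C'].isin([1, 2])]               # keep only rows with C in {1, 2}
          .rename(columns={'A1': 'A', 'B1': 'B', 'C1': 'C'})
    )

with pd.ExcelWriter('Book1.xlsx', engine='openpyxl', mode='a') as writer:
    for sheet_name, out in results.items():
        out.to_excel(writer, sheet_name=sheet_name)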
| Loop or function to make changes in multiple dataframes | Python loop or function to make changes in multiple dataframes (with same headers).
The following loop does not work:
df1 = pd.read_csv('D1.csv')
df2 = pd.read_csv('D2.csv')
P1=['x', 'y', 'z', 'w']
T1=['t1','t2','t3','t4']
df_list = [df1, df2]
for df in df_list:
#summarize P1 & T1 and store to new columns
df['P'] = df[P1].sum(axis=1)
df['T'] = df[T1].sum(axis=1)
#drop initial columns P1 & T1
df=df.drop(df[P1],axis=1)
df=df.drop(df[T1],axis=1)
#filter and rename
df = df[df['C'].isin([1,2])]
df.rename(columns={'A1': 'A','B1: 'B','C1': 'C'}, inplace=True)
with pd.ExcelWriter(Book1.xlsx', engine='openpyxl', mode='a') as writer:
df1.to_excel(writer, sheet_name='D1')
df2.to_excel(writer, sheet_name='D2')
| [
"Your code would probably raise a KeyError when calling pandas.DataFrame.drop and/or a SyntaxError when calling pandas.DataFrame.rename.\nTry this :\nfor df in df_list:\n #summarize P1 & T1 and store to new columns\n df['P'] = df[P1].sum(axis=1)\n df['T'] = df[T1].sum(axis=1)\n\n #drop initial columns P1 & T1\n df = df.drop(columns= P1+T1)\n \n #filter and rename\n df = df[df['C'].isin([1,2])]\n df.rename(columns={'A1': 'A', 'B1': 'B','C1': 'C'}, inplace=True)\n\nOr in one bloc :\nfor df in df_list:\n df= (\n df\n .assign(P = df[P1].sum(axis=1),\n T = df[T1].sum(axis=1))\n .drop(columns= P1+T1)\n .loc[lambda x: x['C'].isin([1, 2])]\n .rename(columns={'A1': 'A', 'B1': 'B','C1': 'C'})\n )\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074655404_dataframe_pandas_python.txt |
Q:
Calculate implied correlation between two datasets
I have a data set with the growth in student enrollments by college from one year to the next broken down by age bands (18-19, 20-24, etc.). I have another data set with the growth in student enrollments for the same colleges from one year to the next broken down by gender (M, F, O). Unfortunately, we don't have access to the raw data so I don't know the relationship between these (e.g. how many males 18-19, females 20-24, etc.).
Is there a way to do a correlation analysis on these separate datasets against each other to imply some relationships? E.g. I'm trying to see if I can reach any conclusions like "the growth in the 20-24 age band was more strongly correlated to the growth in female vs. male students"?
I have the two datasets loaded in dataframes and have already prepared some basic plots showing trends etc. I did manage to brute-force an age by gender view in Excel but wanted to hear others' ideas on the above before I attempt to replicate it in Python...
A:
It would be nice to have an example of what your two datasets look like. However, I will go out on a limb and guess/assume that they look something like this:
> df_growth_age.head()
growth age_group college
0 0.941251 19-35 E
1 0.787922 19-35 D
2 0.677788 36-50 C
3 0.088465 36-50 A
4 0.453523 19-35 D
> df_growth_gender.head()
growth gender college
0 0.352022 Male E
1 0.560317 Other D
2 0.181704 Female E
3 0.278119 Female D
4 0.029306 Other B
If my assumption about your datasets is somewhat correct, I would recommend first joining the two datasets into one dataset:
df = pd.merge(
left=df_growth_age,
right=df_growth_gender,
on="college",
suffixes=("_age", "_gender")
).set_index(["college", "age_group", "gender"]).sort_index().reset_index()
> df.head()
college age_group gender growth_age growth_gender
0 A 18-19 Female 0.753650 0.004030
1 A 18-19 Other 0.753650 0.772802
2 A 19-35 Male 0.140001 0.004030
3 A 19-35 Female 0.140001 0.772802
4 C 19-35 Male 0.831882 0.876803
5 C 19-35 Female 0.831882 0.913343
NB! Note that the merge()-operation defaults to an inner join, which might not be what you want.
From here, you can easily start doing correlation calculations and plots.
Example: Calculate correlation for each college:
df.groupby(["college"])[["growth_age", "growth_gender"]].corr().unstack().iloc[:,1]
Example: Plot relationship between growth rate for age vs. gender for each age/gender/college
import seaborn as sns
sns.relplot(
data=df,
x="growth_age",
y="growth_gender",
hue="college",
row="age_group",
col="gender",
sizes=100,
)
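And, closest to the wording of the question ("growth in the 20-24 band vs. female/male growth"), the same correlation pattern can be grouped by age band and gender instead, still assuming the merged df sketched above:
# corr(growth_age, growth_gender) within each age_group / gender cell
(
    df.groupby(["age_group", "gender"])[["growth_age", "growth_gender"]]
      .corr()          # 2x2 correlation matrix per group
      .unstack()
      .iloc[:, 1]      # off-diagonal entry
)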
| Calculate implied correlation between two datasets | I have a data set with the growth in student enrollments by college from one year to the next broken down by age bands (18-19, 20-24, etc.). I have another data set with the growth in student enrollments for the same colleges from one year to the next broken down by gender (M, F, O). Unfortunately, we don't have access to the raw data so I don't know the relationship between these (e.g. how many males 18-19, females 20-24, etc.).
Is there a way to do a correlation analysis on these separate datasets against each other to imply some relationships? E.g. I'm trying to see if I can reach any conclusions like "the growth in the 20-24 age band was more strongly correlated to the growth in female vs. male students"?
I have the two datasets loaded in dataframes and have already prepared some basis plots showing trend etc. I did manage to brute-force an age by gender view in excel but wanted to hear others' ideas on the above before I attempt to replicate it in python...
| [
"It would be nice with an example of what your two datasets look like. However, I will go out on a limb and guess/assume that they look something like this:\n> df_enrollment.head()\n growth age_group college\n0 0.941251 19-35 E\n1 0.787922 19-35 D\n2 0.677788 36-50 C\n3 0.088465 36-50 A\n4 0.453523 19-35 D\n\n> df_growth_gender.head()\n growth gender college\n0 0.352022 Male E\n1 0.560317 Other D\n2 0.181704 Female E\n3 0.278119 Female D\n4 0.029306 Other B\n\nIf my assumption of your datasets are somewhat correct, I would recommend first joining the two datsets into one dataset:\ndf = pd.merge(\n left=df_growth_age, \n right=df_growth_gender,\n on=\"college\",\n suffixes=(\"_age\", \"_gender\")\n).set_index([\"college\", \"age_group\", \"gender\"]).sort_index().reset_index()\n\n> df.head()\n college age_group gender growth_age growth_gender\n0 A 18-19 Female 0.753650 0.004030\n1 A 18-19 Other 0.753650 0.772802\n2 A 19-35 Male 0.140001 0.004030\n3 A 19-35 Female 0.140001 0.772802\n4 C 19-35 Male 0.831882 0.876803\n5 C 19-35 Female 0.831882 0.913343\n\nNB! Note that the merge()-operation defaults to an inner join, which might not be what you want.\nFrom here, you can easily start doing correlation calculations and plots.\nExample: Calculate correlation for each college:\ndf.groupby([\"college\"])[[\"growth_age\", \"growth_gender\"]].corr().unstack().iloc[:,1]\n\nExample: Plot relationship between growth rate for age vs. gender for each age/gender/college\nimport seaborn as sns\n\nsns.relplot(\n data=df,\n x=\"growth_age\",\n y=\"growth_gender\",\n hue=\"college\",\n row=\"age_group\",\n col=\"gender\",\n sizes=100,\n)\n\n\n"
] | [
1
] | [] | [] | [
"correlation",
"data_science",
"excel",
"python"
] | stackoverflow_0074651022_correlation_data_science_excel_python.txt |
Q:
InvalidArgumentError: logits and labels must be broadcastable: logits_size=[10,10] labels_size=[100,10]
I was following a tutorial online where they took a dog and cats dataset and used this data to create a CNN model. I'm working with Animals-10 from kaggle while following the tutorial. When I fit the model, I get InvalidArgumentError: logits and labels must be broadcastable: logits_size=[10,10] labels_size=[100,10] [[{{node loss_17/dense_38_loss/softmax_cross_entropy_with_logits}}]] error. I've done some googling, but can't figure anything out and all of the solutions that exist seem to be problem specific. I've been stuck on this for a while now and can't figure it out. Thanks for your help.
Code
# Preprocessing for training data
butterfly_path = './animals/butterfly/'
cat_path = './animals/cat/'
chicken_path = './animals/chicken/'
cow_path = './animals/cow/'
dog_path = './animals/dog/'
elephant_path = './animals/elephant/'
horse_path = './animals/horse/'
sheep_path = './animals/sheep/'
spider_path = './animals/spider/'
squirrel_path = './animals/squirrel/'
butterfly_files = [f for f in listdir(butterfly_path) if isfile(join(butterfly_path, f))]
cat_files = [f for f in listdir(cat_path) if isfile(join(cat_path, f))]
chicken_files = [f for f in listdir(chicken_path) if isfile(join(chicken_path, f))]
cow_files = [f for f in listdir(cow_path) if isfile(join(cow_path, f))]
dog_files = [f for f in listdir(dog_path) if isfile(join(dog_path, f))]
elephant_files = [f for f in listdir(elephant_path) if isfile(join(elephant_path, f))]
horse_files = [f for f in listdir(horse_path) if isfile(join(horse_path, f))]
sheep_files = [f for f in listdir(sheep_path) if isfile(join(sheep_path, f))]
spider_files = [f for f in listdir(spider_path) if isfile(join(spider_path, f))]
squirrel_files = [f for f in listdir(squirrel_path) if isfile(join(squirrel_path, f))]
X_train = []
y_train = []
# Convert images to arrays and append labels to list
for file in butterfly_files:
img_path = butterfly_path + file
y_train.append(0) # butterfly
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in cat_files:
img_path = cat_path + file
y_train.append(1) # cat
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in chicken_files:
img_path = chicken_path + file
y_train.append(2) # chicken
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in cow_files:
img_path = cow_path + file
y_train.append(3) # cow
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in dog_files:
img_path = dog_path + file
y_train.append(4) # dog
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in elephant_files:
img_path = elephant_path + file
y_train.append(5) # elephant
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in horse_files:
img_path = horse_path + file
y_train.append(6) # horse
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in sheep_files:
img_path = sheep_path + file
y_train.append(7) # sheep
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in spider_files:
img_path = spider_path + file
y_train.append(8) # spider
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in squirrel_files:
img_path = squirrel_path + file
y_train.append(9) # squirrel
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
X_train = np.array(X_train)
y_train = np.array(y_train)
new_X_train, new_y_train = sklearn.utils.shuffle(X_train, y_train, random_state = 0)
# Observe data shapes
print("Train Images: ", X_train.shape, "\nTrain Labels: ", y_train.shape)
Output
Train Images: (26179, 100, 100)
Train Labels: (26179,)
More code
# Flatten data
new_X_train = new_X_train.reshape(-1, 100, 100, 1)
new_X_train.shape
new_X_train[0]
Output
(26179, 100, 100, 1)
array([[[0.11311903],
[0.11212739],
[0.10983502],
...,
[0.05591958],
[0.05303901],
[0.05191926]],
[[0.11488738],
[0.11407327],
[0.11111339],
...,
[0.05361823],
[0.05217732],
[0.05204283]],
[[0.11666758],
[0.11481607],
[0.11202889],
...,
[0.05181058],
[0.05311151],
[0.05275289]],
...,
[[0.26910149],
[0.26756103],
[0.26437813],
...,
[0.54128431],
[0.52813318],
[0.52014463]],
[[0.26764131],
[0.2660793 ],
[0.26377079],
...,
[0.53660792],
[0.52183369],
[0.51606187]],
[[0.26523861],
[0.26538986],
[0.26358519],
...,
[0.53761132],
[0.51546228],
[0.50781162]]])
Code where I start to create the model
# Convert labels to categorical
new_y_train = keras.utils.to_categorical(new_y_train, 10)
# Create the model
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(64, (3, 3), input_shape = new_X_train.shape[1:], activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))
model.add(keras.layers.Conv2D(64, (3, 3), activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units = 64, activation = 'relu'))
model.add(keras.layers.Dense(units = 10, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
# Fit model
model.fit(x = new_X_train, y = new_y_train, batch_size = 10, epochs = 1)
A:
Your code works as written; here it is running on random placeholder data:
import tensorflow as tf
from tensorflow import keras
new_X_train = tf.random.uniform((26179, 100, 100, 1))
new_y_train = tf.random.uniform((26179,))
# Convert labels to categorical
new_y_train = keras.utils.to_categorical(new_y_train, 10)
# Create the model
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(64, (3, 3), input_shape = new_X_train.shape[1:], activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))
model.add(keras.layers.Conv2D(64, (3, 3), activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units = 64, activation = 'relu'))
model.add(keras.layers.Dense(units = 10, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
# Fit model
model.fit(x = new_X_train, y = new_y_train, batch_size = 10, epochs = 1)
A:
I think my issue was with this piece of code:
# Convert labels to categorical
new_y_train = keras.utils.to_categorical(new_y_train, 10)
For some reason I was getting a 10x10 matrix for each item, but now it seems to work and just gives me a 10 item array
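A likely explanation for that "10x10 matrix for each item" (an assumption, since the notebook state isn't shown): if the to_categorical cell is run twice, labels that are already one-hot get encoded again and come out with shape (N, 10, 10). With batch_size = 10 the loss then sees labels reshaped to (100, 10) against logits of shape (10, 10), which are exactly the sizes in the error message. A quick check:
import numpy as np
from tensorflow import keras

y = np.array([0, 1, 2])                       # integer class labels
once = keras.utils.to_categorical(y, 10)      # shape (3, 10), what the loss expects
twice = keras.utils.to_categorical(once, 10)  # e.g. after re-running the same cell
print(once.shape, twice.shape)                # (3, 10) (3, 10, 10)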
A:
I had a similar issue where the first dimension of the logits was different from that of the labels.
I was trying to use this code:
https://keras.io/examples/vision/oxford_pets_image_segmentation/
But I had non-square images. What solved my issue was resizing the images to a square aspect ratio (I still don't know why the logits ended up sized differently from the labels with rectangular images...).
| InvalidArgumentError: logits and labels must be broadcastable: logits_size=[10,10] labels_size=[100,10] | I was following a tutorial online where they took a dog and cats dataset and used this data to create a CNN model. I'm working with Animals-10 from kaggle while following the tutorial. When I fit the model, I get InvalidArgumentError: logits and labels must be broadcastable: logits_size=[10,10] labels_size=[100,10] [[{{node loss_17/dense_38_loss/softmax_cross_entropy_with_logits}}]] error. I've done some googling, but can't figure anything out and all of the solutions that exist seem to be problem specific. I've been stuck on this for a while now and can't figure it out. Thanks for your help.
Code
# Preprocessing for training data
butterfly_path = './animals/butterfly/'
cat_path = './animals/cat/'
chicken_path = './animals/chicken/'
cow_path = './animals/cow/'
dog_path = './animals/dog/'
elephant_path = './animals/elephant/'
horse_path = './animals/horse/'
sheep_path = './animals/sheep/'
spider_path = './animals/spider/'
squirrel_path = './animals/squirrel/'
butterfly_files = [f for f in listdir(butterfly_path) if isfile(join(butterfly_path, f))]
cat_files = [f for f in listdir(cat_path) if isfile(join(cat_path, f))]
chicken_files = [f for f in listdir(chicken_path) if isfile(join(chicken_path, f))]
cow_files = [f for f in listdir(cow_path) if isfile(join(cow_path, f))]
dog_files = [f for f in listdir(dog_path) if isfile(join(dog_path, f))]
elephant_files = [f for f in listdir(elephant_path) if isfile(join(elephant_path, f))]
horse_files = [f for f in listdir(horse_path) if isfile(join(horse_path, f))]
sheep_files = [f for f in listdir(sheep_path) if isfile(join(sheep_path, f))]
spider_files = [f for f in listdir(spider_path) if isfile(join(spider_path, f))]
squirrel_files = [f for f in listdir(squirrel_path) if isfile(join(squirrel_path, f))]
X_train = []
y_train = []
# Convert images to arrays and append labels to list
for file in butterfly_files:
img_path = butterfly_path + file
y_train.append(0) # butterfly
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in cat_files:
img_path = cat_path + file
y_train.append(1) # cat
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in chicken_files:
img_path = chicken_path + file
y_train.append(2) # chicken
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in cow_files:
img_path = cow_path + file
y_train.append(3) # cow
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in dog_files:
img_path = dog_path + file
y_train.append(4) # dog
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in elephant_files:
img_path = elephant_path + file
y_train.append(5) # elephant
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in horse_files:
img_path = horse_path + file
y_train.append(6) # horse
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in sheep_files:
img_path = sheep_path + file
y_train.append(7) # sheep
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in spider_files:
img_path = spider_path + file
y_train.append(8) # spider
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
for file in squirrel_files:
img_path = squirrel_path + file
y_train.append(9) # squirrel
img = plt.imread(img_path)
if (img.shape[-1] == 3):
grey_img = color.rgb2gray(img)
elif (img.shape[-1] == 4):
grey_img = color.rgb2gray(color.rgba2rgb(img))
resized_img = resize(grey_img, (100, 100))
X_train.append(resized_img)
X_train = np.array(X_train)
y_train = np.array(y_train)
new_X_train, new_y_train = sklearn.utils.shuffle(X_train, y_train, random_state = 0)
# Observe data shapes
print("Train Images: ", X_train.shape, "\nTrain Labels: ", y_train.shape)
Output
Train Images: (26179, 100, 100)
Train Labels: (26179,)
More code
# Flatten data
new_X_train = new_X_train.reshape(-1, 100, 100, 1)
new_X_train.shape
new_X_train[0]
Output
(26179, 100, 100, 1)
array([[[0.11311903],
[0.11212739],
[0.10983502],
...,
[0.05591958],
[0.05303901],
[0.05191926]],
[[0.11488738],
[0.11407327],
[0.11111339],
...,
[0.05361823],
[0.05217732],
[0.05204283]],
[[0.11666758],
[0.11481607],
[0.11202889],
...,
[0.05181058],
[0.05311151],
[0.05275289]],
...,
[[0.26910149],
[0.26756103],
[0.26437813],
...,
[0.54128431],
[0.52813318],
[0.52014463]],
[[0.26764131],
[0.2660793 ],
[0.26377079],
...,
[0.53660792],
[0.52183369],
[0.51606187]],
[[0.26523861],
[0.26538986],
[0.26358519],
...,
[0.53761132],
[0.51546228],
[0.50781162]]])
Code where I start to create the model
# Convert labels to categorical
new_y_train = keras.utils.to_categorical(new_y_train, 10)
# Create the model
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(64, (3, 3), input_shape = new_X_train.shape[1:], activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))
model.add(keras.layers.Conv2D(64, (3, 3), activation = 'relu'))
model.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units = 64, activation = 'relu'))
model.add(keras.layers.Dense(units = 10, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
# Fit model
model.fit(x = new_X_train, y = new_y_train, batch_size = 10, epochs = 1)
| [
"Your code working:\nimport tensorflow as tf\nfrom tensorflow import keras\nnew_X_train = tf.random.uniform((26179, 100, 100, 1))\nnew_y_train = tf.random.uniform((26179,))\n# Convert labels to categorical\nnew_y_train = keras.utils.to_categorical(new_y_train, 10)\n\n# Create the model\nmodel = keras.models.Sequential()\nmodel.add(keras.layers.Conv2D(64, (3, 3), input_shape = new_X_train.shape[1:], activation = 'relu'))\nmodel.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))\n\nmodel.add(keras.layers.Conv2D(64, (3, 3), activation = 'relu'))\nmodel.add(keras.layers.MaxPooling2D(pool_size = (2, 2)))\n\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(units = 64, activation = 'relu'))\n\nmodel.add(keras.layers.Dense(units = 10, activation = 'softmax'))\n\nmodel.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])\n\n# Fit model\nmodel.fit(x = new_X_train, y = new_y_train, batch_size = 10, epochs = 1)\n\n",
"I think my issue was with this piece of code:\n# Convert labels to categorical\nnew_y_train = keras.utils.to_categorical(new_y_train, 10)\n\nFor some reason I was getting a 10x10 matrix for each item, but now it seems to work and just gives me a 10 item array\n",
"I had a similar issue where first dimension of the logits was different from the one on the labels.\nI was trying to use this code:\nhttps://keras.io/examples/vision/oxford_pets_image_segmentation/\nBut had non square images. What solved my issue resizing the images to a square ratio (I still don't know why the logits became different sized with respect to the labels with rectangular images...)\n"
] | [
0,
0,
0
] | [] | [] | [
"deep_learning",
"keras",
"machine_learning",
"python",
"tensorflow"
] | stackoverflow_0065053050_deep_learning_keras_machine_learning_python_tensorflow.txt |
Q:
Add a pandas column called age based on existing DOB column
I have a column called DOB which has dates formatted 31.07.1983 for example.
My data frame is named users_pd.
I want to add a column that has the current age of the customer based off the existing DOB column.
from datetime import date, timedelta
users_pd["Age"] = (date.today() - users_pd["DOB"] // timedelta(days=365.2425))
I get the error
TypeError: Invalid dtype object for __floordiv__
A:
Convert column to datetime and then convert timedeltas to years:
users_pd["DOB"] = pd.to_datetime(users_pd["DOB"], format='%d.%m.%Y')
users_pd["Age"] = (pd.to_datetime('today') - users_pd["DOB"]).astype('<m8[Y]').astype(int)
A:
Your parentheses were incorrectly placed, and you probably also forgot to convert the column to datetime.
Use:
users_pd["Age"] = (pd.Timestamp('today')
- pd.to_datetime(users_pd["DOB"], dayfirst=True)
) // pd.Timedelta(days=365.2425)
Example:
DOB Age
0 31.07.1983 39
| Add a pandas column called age based on existing DOB column | I have a column called DOB which has dates formatted 31.07.1983 for example.
My data frame is named users_pd.
I want to add a column that has the current age of the customer based off the existing DOB column.
from datetime import date, timedelta
users_pd["Age"] = (date.today() - users_pd["DOB"] // timedelta(days=365.2425))
I get the error
TypeError: Invalid dtype object for __floordiv__
| [
"Convert column to datetime and then convert timedeltas to years:\nusers_pd[\"DOB\"] = pd.to_datetime(users_pd[\"DOB\"], format='%d.%m.%Y')\n\nusers_pd[\"Age\"] = (pd.to_datetime('today') - users_pd[\"DOB\"]).astype('<m8[Y]').astype(int)\n\n",
"Your parenthesis was incorrectly placed, and you probably failed to convert to datetime.\nUse:\nusers_pd[\"Age\"] = (pd.Timestamp('today')\n - pd.to_datetime(users_pd[\"DOB\"], dayfirst=True)\n ) // pd.Timedelta(days=365.2425)\n\nExample:\n DOB Age\n0 31.07.1983 39\n\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074655574_dataframe_pandas_python.txt |
Q:
Change default attribution text location in Contextily
Using this code:
import contextily as ctx
from shapely.geometry import box
import geopandas as gpd
minx, miny = 1.612359e+06, 0.950860e+07
maxx, maxy = 4.320331e+06, 1.007808e+07
rectangle = box(minx, miny, maxx, maxy)
gdf = gpd.GeoDataFrame(
index=['Amazon'],
geometry=[rectangle],
crs='EPSG:5641')
dy = 10e4
ax = gdf.plot(figsize=(8,3), facecolor='none', edgecolor='lime')
ax.set_ylim(miny-dy, maxy+dy)
ctx.add_basemap(ax, crs=gdf.crs.to_string())
To get this plot:
I know it's possible to change the size of the attribution text in contextily, and even to not plot it, but is there a way to change its location, e.g. to write it on the right or at the top?
Similar to how to change leaflet attribution, but for contextily.
A:
It looks like you can't control the attribution's placement through the add_basemap function, and while contextily does provide a more adaptable add_attribution function, that doesn't allow you to specify the text position either. There is another way though: manipulate the matplotlib text object generated by add_basemap:
txt = ax.texts[-1]
txt.set_position([0.99,0.98])
txt.set_ha('right')
txt.set_va('top')
Example output below. Best do this immediately after the add_basemap call to be sure you don't pick up any other text object.
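An alternative sketch, if you would rather not touch the text object afterwards: suppress the built-in attribution and place your own. The attribution string below is a placeholder; use whatever text your tile provider requires:
ctx.add_basemap(ax, crs=gdf.crs.to_string(), attribution=False)  # no default attribution text
ax.text(0.99, 0.98, "(C) OpenStreetMap contributors",            # placeholder string
        transform=ax.transAxes, ha='right', va='top', fontsize=8)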
| Change default attribution text location in Contextily | Using this code:
import contextily as ctx
from shapely.geometry import box
import geopandas as gpd
minx, miny = 1.612359e+06, 0.950860e+07
maxx, maxy = 4.320331e+06, 1.007808e+07
rectangle = box(minx, miny, maxx, maxy)
gdf = gpd.GeoDataFrame(
index=['Amazon'],
geometry=[rectangle],
crs='EPSG:5641')
dy = 10e4
ax = gdf.plot(figsize=(8,3), facecolor='none', edgecolor='lime')
ax.set_ylim(miny-dy, maxy+dy)
ctx.add_basemap(ax, crs=gdf.crs.to_string())
To get this plot:
I know it's possible to change the size of the attribution text in contextily, and even to not plot it, but is there a way to change its location, e.g. to write it on the right or at the top?
Similar to how to change leaflet attribution, but for contextily.
| [
"It looks like you can't define the attribution in the add_basemap function, and while contextily does provide a more adaptable add_attribution function, that doesn't allow you to specify the text position. There is another way though: manipulate the matplotlib text object generated by add_basemap:\ntxt = ax.texts[-1]\ntxt.set_position([0.99,0.98])\ntxt.set_ha('right')\ntxt.set_va('top')\n\nExample output below. Best do this immediately after the add_basemap call to be sure you don't pick up any other text object.\n\n"
] | [
1
] | [] | [] | [
"contextily",
"python"
] | stackoverflow_0070641563_contextily_python.txt |
Q:
TemplateDoesNotExist at / customer/index.html Request Method
I get the error TemplateDoesNotExist (screenshots: https://i.stack.imgur.com/gSKFa.png, https://i.stack.imgur.com/795HN.png).
This is my urls.py. When I run it, I get the error TemplateDoesNotExist at / customer/index.html.
from django.contrib import admin
from django.urls import path
from django.conf import settings
from django.conf.urls.static import static
from customer.views import Index, About
urlpatterns = [
path('admin/', admin.site.urls),
path('', Index.as_view(), name='index'),
path('about/', About.as_view(), name='about'),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
A:
Your Templates settings in settings.py should look like this
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": ["templates"],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
And your urls.py in your deliver1 folder should look like this
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path("admin/", admin.site.urls),
path("", include("the-name-of-your-app.urls")),
]
I don't have a lot of experience with class-based views. However, you can try this to see if it works for you:
def index(request):
return render(request, "customer/index.html", {})
Your urls.py in your customer folder will look like this
from django.urls import path
from . import views
urlpatterns = [
path("", views.index, name="index"),
]
If you encounter any errors, let me know.
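Since the question uses class-based views, here is a minimal sketch of what customer/views.py could look like. The key point for TemplateDoesNotExist is that, with APP_DIRS set to True, the template must live at customer/templates/customer/index.html inside the app folder (an assumption about your project layout, since the tree isn't shown):
# customer/views.py
from django.shortcuts import render
from django.views import View

class Index(View):
    def get(self, request):
        # resolved from customer/templates/customer/index.html by the app template loader
        return render(request, "customer/index.html", {})

class About(View):
    def get(self, request):
        return render(request, "customer/about.html", {})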
| TemplateDoesNotExist at / customer/index.html Request Method | I get the error TemplateDoesNotExist (screenshots: https://i.stack.imgur.com/gSKFa.png, https://i.stack.imgur.com/795HN.png).
This is my urls.py. When I run it, I get the error TemplateDoesNotExist at / customer/index.html.
from django.contrib import admin
from django.urls import path
from django.conf import settings
from django.conf.urls.static import static
from customer.views import Index, About
urlpatterns = [
path('admin/', admin.site.urls),
path('', Index.as_view(), name='index'),
path('about/', About.as_view(), name='about'),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| [
"Your Templates settings in settings.py should look like this\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nAnd your urls.py in your deliver1 folder should look like this\nfrom django.contrib import admin\nfrom django.urls import path, include\n\nurlpatterns = [\n path(\"admin/\", admin.site.urls),\n path(\"\", include(\"the-name-of-your-app.urls\")),\n]\n\nI don't have a lot of experience with class-based views. However you can try this to see if works for you\ndef index(request):\n return render(request, \"customer/index.html\", {})\n\nYour urls.py in your customer folder will look like this\nfrom django.urls import path\n\nfrom . import views\n\nurlpatterns = [\n path(\"\", views.index, name=\"index\"),\n]\n\nso any error you encouter, let me know\n"
] | [
0
] | [] | [] | [
"conda",
"django",
"miniconda",
"plotly_dash",
"python"
] | stackoverflow_0074655554_conda_django_miniconda_plotly_dash_python.txt |
Q:
Validate class attributes when overwriting using pydantic
I am using Pydantic to validate my class data.
In some cases after the class has been instantiated, I want to overwrite the value of a field, but I want to verify that the new value has the same type as defined in the Model.
I expect I should be able to use validators but I haven't seen an option for protecting the field after the object is created.
Is there a simple mechanism within pydantic which allows this type of validation?
Example
from pydantic import BaseModel, validator
class MyModel(BaseModel):
some_field : str = None
@validator("some_field")
def verify_string(cls, v):
print(f"Verifying some_field value: {v}")
if not isinstance(v, str):
raise ValueError(f"some_field must be str; got type {type(v)}")
return v
# Instantiate model
m = MyModel(some_field = "test")
## prints: Verifying some_field value: test
print(m.some_field)
## prints: test
# Overwrite a field within invalid type
m.some_field = 100 # <-- this should raise value error
print(m.some_field)
## prints: 100
A:
Sorry I found the solution -- use Config and some validator parameters.
class MyModel(BaseModel):
some_field : str = None
class Config:
validate_assignment = True
@validator(
"some_field", always=True, pre=True,
)
def verify_string(cls, v):
print(f"Verifying some_field value: {v}")
if not isinstance(v, str):
raise ValueError(f"some_field must be str; got type {type(v)}")
return v
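With validate_assignment enabled, assignments run through the same validator and raise pydantic.ValidationError. A quick check, assuming pydantic v1 (which the validator API above implies):
from pydantic import ValidationError

m = MyModel(some_field="test")
try:
    m.some_field = 100               # assignment is now validated
except ValidationError as exc:
    print(exc)                       # reports: some_field must be str; got type <class 'int'>

m.some_field = "still a string"      # valid assignments keep working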
| Validate class attributes when overwriting using pydantic | I am using Pydantic to validate my class data.
In some cases after the class has been instantiated, I want to overwrite the value of a field, but I want to verify that the new value has the same type as defined in the Model.
I expect I should be able to use validators but I haven't seen an option for protecting the field after the object is created.
Is there a simple mechanism within pydantic which allows this type of validation?
Example
from pydantic import BaseModel, validator
class MyModel(BaseModel):
some_field : str = None
@validator("some_field")
def verify_string(cls, v):
print(f"Verifying some_field value: {v}")
if not isinstance(v, str):
raise ValueError(f"some_field must be str; got type {type(v)}")
return v
# Instantiate model
m = MyModel(some_field = "test")
## prints: Verifying some_field value: test
print(m.some_field)
## prints: test
# Overwrite a field within invalid type
m.some_field = 100 # <-- this should raise value error
print(m.some_field)
## prints: 100
| [
"Sorry I found the solution -- use Config and some validator parameters.\nclass MyModel(BaseModel):\n some_field : str = None\n\n class Config:\n validate_assignment = True \n \n \n @validator(\n \"some_field\", always=True, pre=True,\n )\n def verify_string(cls, v):\n print(f\"Verifying some_field value: {v}\")\n if not isinstance(v, str):\n raise ValueError(f\"some_field must be str; got type {type(v)}\")\n return v\n\n"
] | [
0
] | [] | [] | [
"pydantic",
"python"
] | stackoverflow_0074655580_pydantic_python.txt |
Q:
How to merge dicts, collecting values from matching keys?
I have multiple dicts (or sequences of key-value pairs) like this:
d1 = {key1: x1, key2: y1}
d2 = {key1: x2, key2: y2}
How can I efficiently get a result like this, as a new dict?
d = {key1: (x1, x2), key2: (y1, y2)}
A:
Here's a general solution that will handle an arbitrary amount of dictionaries, with cases when keys are in only some of the dictionaries:
from collections import defaultdict
d1 = {1: 2, 3: 4}
d2 = {1: 6, 3: 7}
dd = defaultdict(list)
for d in (d1, d2): # you can list as many input dicts as you want here
for key, value in d.items():
dd[key].append(value)
print(dd) # result: defaultdict(<type 'list'>, {1: [2, 6], 3: [4, 7]})
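If you want tuples exactly as in the question, one extra line on top of that:
d = {key: tuple(values) for key, values in dd.items()}  # {1: (2, 6), 3: (4, 7)}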
A:
assuming all keys are always present in all dicts:
ds = [d1, d2]
d = {}
for k in d1.iterkeys():
d[k] = tuple(d[k] for d in ds)
Note: In Python 3.x use below code:
ds = [d1, d2]
d = {}
for k in d1.keys():
d[k] = tuple(d[k] for d in ds)
and if the dic contain numpy arrays:
ds = [d1, d2]
d = {}
for k in d1.keys():
d[k] = np.concatenate(list(d[k] for d in ds))
A:
dict1 = {'m': 2, 'n': 4}
dict2 = {'n': 3, 'm': 1}
Making sure that the keys are in the same order:
dict2_sorted = {i:dict2[i] for i in dict1.keys()}
keys = dict1.keys()
values = zip(dict1.values(), dict2_sorted.values())
dictionary = dict(zip(keys, values))
gives:
{'m': (2, 1), 'n': (4, 3)}
A:
This function merges two dicts even if the keys in the two dictionaries are different:
def combine_dict(d1, d2):
return {
k: tuple(d[k] for d in (d1, d2) if k in d)
for k in set(d1.keys()) | set(d2.keys())
}
Example:
d1 = {
'a': 1,
'b': 2,
}
d2 = {
'b': 'boat',
'c': 'car',
}
combine_dict(d1, d2)
# Returns: {
# 'a': (1,),
# 'b': (2, 'boat'),
# 'c': ('car',)
# }
A:
Here is one approach you can use which would work even if both dictionaries don't have the same keys:
d1 = {'a':'test','b':'btest','d':'dreg'}
d2 = {'a':'cool','b':'main','c':'clear'}
d = {}
for key in set(d1.keys() + d2.keys()):
try:
d.setdefault(key,[]).append(d1[key])
except KeyError:
pass
try:
d.setdefault(key,[]).append(d2[key])
except KeyError:
pass
print d
This would generate the output below:
{'a': ['test', 'cool'], 'c': ['clear'], 'b': ['btest', 'main'], 'd': ['dreg']}
A:
If you only have d1 and d2,
from collections import defaultdict
d = defaultdict(list)
for a, b in d1.items() + d2.items():
d[a].append(b)
A:
Assume that you have the list of ALL keys (you can get this list by iterating through all dictionaries and get their keys). Let's name it listKeys. Also:
listValues is the list of ALL values for a single key that you want
to merge.
allDicts: all dictionaries that you want to merge.
result = {}
for k in listKeys:
    listValues = [] #we will convert it to tuple later, if you want.
    for d in allDicts:
        try:
            listValues.append(d[k]) #try to append more values to a single key
        except:
            pass
    if listValues: #if it is not empty
        result[k] = tuple(listValues) #convert to tuple, add to new dictionary with key k
A:
Assuming there are two dictionaries with exactly the same keys, below is the most succinct way of doing it (Python 3 should be used for both solutions).
d1 = {'a': 1, 'b': 2, 'c':3}
d2 = {'a': 5, 'b': 6, 'c':7}
# get keys from one of the dictionary
ks = [k for k in d1.keys()]
print(ks)
['a', 'b', 'c']
# call values from each dictionary on available keys
d_merged = {k: (d1[k], d2[k]) for k in ks}
print(d_merged)
{'a': (1, 5), 'b': (2, 6), 'c': (3, 7)}
# to merge values as list
d_merged = {k: [d1[k], d2[k]] for k in ks}
print(d_merged)
{'a': [1, 5], 'b': [2, 6], 'c': [3, 7]}
If there are two dictionaries with some common keys, but a few different keys, a list of all the keys should be prepared.
d1 = {'a': 1, 'b': 2, 'c':3, 'd': 9}
d2 = {'a': 5, 'b': 6, 'c':7, 'e': 4}
# get keys from one of the dictionary
d1_ks = [k for k in d1.keys()]
d2_ks = [k for k in d2.keys()]
all_ks = set(d1_ks + d2_ks)
print(all_ks)
['a', 'b', 'c', 'd', 'e']
# call values from each dictionary on available keys
d_merged = {k: [d1.get(k), d2.get(k)] for k in all_ks}
print(d_merged)
{'d': [9, None], 'a': [1, 5], 'b': [2, 6], 'c': [3, 7], 'e': [None, 4]}
A:
Modifying this answer to create a dictionary of tuples (what the OP asked for), instead of a dictionary of lists:
from collections import defaultdict
d1 = {1: 2, 3: 4}
d2 = {1: 6, 3: 7}
dd = defaultdict(tuple)
for d in (d1, d2): # you can list as many input dicts as you want here
for key, value in d.items():
dd[key] += (value,)
print(dd)
The above prints the following:
defaultdict(<class 'tuple'>, {1: (2, 6), 3: (4, 7)})
A:
def merge(d1, d2, merge):
result = dict(d1)
for k,v in d2.iteritems():
if k in result:
result[k] = merge(result[k], v)
else:
result[k] = v
return result
d1 = {'a': 1, 'b': 2}
d2 = {'a': 1, 'b': 3, 'c': 2}
print merge(d1, d2, lambda x, y:(x,y))
{'a': (1, 1), 'c': 2, 'b': (2, 3)}
A:
To supplement the two-list solutions, here is a solution for processing a single list.
A sample list (NetworkX-related; manually formatted here for readability):
ec_num_list = [((src, tgt), ec_num['ec_num']) for src, tgt, ec_num in G.edges(data=True)]
print('\nec_num_list:\n{}'.format(ec_num_list))
ec_num_list:
[((82, 433), '1.1.1.1'),
((82, 433), '1.1.1.2'),
((22, 182), '1.1.1.27'),
((22, 3785), '1.2.4.1'),
((22, 36), '6.4.1.1'),
((145, 36), '1.1.1.37'),
((36, 154), '2.3.3.1'),
((36, 154), '2.3.3.8'),
((36, 72), '4.1.1.32'),
...]
Note the duplicate values for the same edges (defined by the tuples). To collate those "values" to their corresponding "keys":
from collections import defaultdict
ec_num_collection = defaultdict(list)
for k, v in ec_num_list:
ec_num_collection[k].append(v)
print('\nec_num_collection:\n{}'.format(ec_num_collection.items()))
ec_num_collection:
[((82, 433), ['1.1.1.1', '1.1.1.2']), ## << grouped "values"
((22, 182), ['1.1.1.27']),
((22, 3785), ['1.2.4.1']),
((22, 36), ['6.4.1.1']),
((145, 36), ['1.1.1.37']),
((36, 154), ['2.3.3.1', '2.3.3.8']), ## << grouped "values"
((36, 72), ['4.1.1.32']),
...]
If needed, convert that list to dict:
ec_num_collection_dict = dict(ec_num_collection)
print('\nec_num_collection_dict:\n{}'.format(dict(ec_num_collection)))
ec_num_collection_dict:
{(82, 433): ['1.1.1.1', '1.1.1.2'],
(22, 182): ['1.1.1.27'],
(22, 3785): ['1.2.4.1'],
(22, 36): ['6.4.1.1'],
(145, 36): ['1.1.1.37'],
(36, 154): ['2.3.3.1', '2.3.3.8'],
(36, 72): ['4.1.1.32'],
...}
References
[this thread] How to merge multiple dicts with same key?
[Python docs] https://docs.python.org/3.7/library/collections.html#collections.defaultdict
A:
From blubb answer:
You can also directly form the tuple using values from each list
ds = [d1, d2]
d = {}
for k in d1.keys():
d[k] = (d1[k], d2[k])
This might be useful if you had a specific ordering for your tuples
ds = [d1, d2, d3, d4]
d = {}
for k in d1.keys():
d[k] = (d3[k], d1[k], d4[k], d2[k]) #if you wanted tuple in order of d3, d1, d4, d2
A:
This library helped me. I had a list of dicts with nested keys of the same name but different values, and every other solution kept overriding those nested keys.
https://pypi.org/project/deepmerge/
from deepmerge import always_merger
def process_parms(args):
temp_list = []
for x in args:
with open(x, 'r') as stream:
temp_list.append(yaml.safe_load(stream))
return always_merger.merge(*temp_list)
A:
If keys are nested:
d1 = { 'key1': { 'nkey1': 'x1' }, 'key2': { 'nkey2': 'y1' } }
d2 = { 'key1': { 'nkey1': 'x2' }, 'key2': { 'nkey2': 'y2' } }
ds = [d1, d2]
d = {}
for k in d1.keys():
for k2 in d1[k].keys():
d.setdefault(k, {})
d[k].setdefault(k2, [])
d[k][k2] = tuple(d[k][k2] for d in ds)
yields:
{'key1': {'nkey1': ('x1', 'x2')}, 'key2': {'nkey2': ('y1', 'y2')}}
A:
There is a great library, funcy, that does what you need in just one short line.
from funcy import join_with
from pprint import pprint
d1 = {"key1": "x1", "key2": "y1"}
d2 = {"key1": "x2", "key2": "y2"}
list_of_dicts = [d1, d2]
merged_dict = join_with(tuple, list_of_dicts)
pprint(merged_dict)
Output:
{'key1': ('x1', 'x2'), 'key2': ('y1', 'y2')}
More info here: funcy -> join_with.
A:
d1 = {'A': 'a', 'B': 'b'}
d2 = {'A': 'c', 'B': 'd'}
d3 = {'A': 'e', 'B': 'f'}
dnew = {
k : [d1.get(k),d2.get(k),d3.get(k)] for k in d1.keys() | d2.keys() | d3.keys()
}
print(dnew)
"""
{'A': ['a', 'c', 'e'], 'B': ['b', 'd', 'f']}
"""
| How to merge dicts, collecting values from matching keys? | I have multiple dicts (or sequences of key-value pairs) like this:
d1 = {key1: x1, key2: y1}
d2 = {key1: x2, key2: y2}
How can I efficiently get a result like this, as a new dict?
d = {key1: (x1, x2), key2: (y1, y2)}
| [
"Here's a general solution that will handle an arbitrary amount of dictionaries, with cases when keys are in only some of the dictionaries:\nfrom collections import defaultdict\n\nd1 = {1: 2, 3: 4}\nd2 = {1: 6, 3: 7}\n\ndd = defaultdict(list)\n\nfor d in (d1, d2): # you can list as many input dicts as you want here\n for key, value in d.items():\n dd[key].append(value)\n \nprint(dd) # result: defaultdict(<type 'list'>, {1: [2, 6], 3: [4, 7]})\n\n",
"assuming all keys are always present in all dicts:\nds = [d1, d2]\nd = {}\nfor k in d1.iterkeys():\n d[k] = tuple(d[k] for d in ds)\n\nNote: In Python 3.x use below code:\nds = [d1, d2]\nd = {}\nfor k in d1.keys():\n d[k] = tuple(d[k] for d in ds)\n\nand if the dic contain numpy arrays:\nds = [d1, d2]\nd = {}\nfor k in d1.keys():\n d[k] = np.concatenate(list(d[k] for d in ds))\n\n",
"dict1 = {'m': 2, 'n': 4}\ndict2 = {'n': 3, 'm': 1}\n\nMaking sure that the keys are in the same order:\ndict2_sorted = {i:dict2[i] for i in dict1.keys()}\n\nkeys = dict1.keys()\nvalues = zip(dict1.values(), dict2_sorted.values())\ndictionary = dict(zip(keys, values))\n\ngives: \n{'m': (2, 1), 'n': (4, 3)}\n\n",
"This function merges two dicts even if the keys in the two dictionaries are different:\ndef combine_dict(d1, d2):\n return {\n k: tuple(d[k] for d in (d1, d2) if k in d)\n for k in set(d1.keys()) | set(d2.keys())\n }\n\nExample:\nd1 = {\n 'a': 1,\n 'b': 2,\n}\nd2` = {\n 'b': 'boat',\n 'c': 'car',\n}\ncombine_dict(d1, d2)\n# Returns: {\n# 'a': (1,),\n# 'b': (2, 'boat'),\n# 'c': ('car',)\n# }\n\n",
"Here is one approach you can use which would work even if both dictonaries don't have same keys:\nd1 = {'a':'test','b':'btest','d':'dreg'}\nd2 = {'a':'cool','b':'main','c':'clear'}\n\nd = {}\n\nfor key in set(d1.keys() + d2.keys()):\n try:\n d.setdefault(key,[]).append(d1[key]) \n except KeyError:\n pass\n\n try:\n d.setdefault(key,[]).append(d2[key]) \n except KeyError:\n pass\n\nprint d\n\nThis would generate below input:\n{'a': ['test', 'cool'], 'c': ['clear'], 'b': ['btest', 'main'], 'd': ['dreg']}\n\n",
"If you only have d1 and d2,\nfrom collections import defaultdict\n\nd = defaultdict(list)\nfor a, b in d1.items() + d2.items():\n d[a].append(b)\n\n",
"Assume that you have the list of ALL keys (you can get this list by iterating through all dictionaries and get their keys). Let's name it listKeys. Also:\n\nlistValues is the list of ALL values for a single key that you want\nto merge.\nallDicts: all dictionaries that you want to merge.\n\nresult = {}\nfor k in listKeys:\n listValues = [] #we will convert it to tuple later, if you want.\n for d in allDicts:\n try:\n fileList.append(d[k]) #try to append more values to a single key\n except:\n pass\n if listValues: #if it is not empty\n result[k] = typle(listValues) #convert to tuple, add to new dictionary with key k\n\n",
"Assuming there are two dictionaries with exact same keys, below is the most succinct way of doing it (python3 should be used for both the solution).\n\nd1 = {'a': 1, 'b': 2, 'c':3}\nd2 = {'a': 5, 'b': 6, 'c':7} \n\n# get keys from one of the dictionary\nks = [k for k in d1.keys()]\n\nprint(ks)\n['a', 'b', 'c']\n\n# call values from each dictionary on available keys\nd_merged = {k: (d1[k], d2[k]) for k in ks}\n\nprint(d_merged)\n{'a': (1, 5), 'b': (2, 6), 'c': (3, 7)}\n\n# to merge values as list\nd_merged = {k: [d1[k], d2[k]] for k in ks}\nprint(d_merged)\n{'a': [1, 5], 'b': [2, 6], 'c': [3, 7]}\n\n\nIf there are two dictionaries with some common keys, but a few different keys, a list of all the keys should be prepared.\n\nd1 = {'a': 1, 'b': 2, 'c':3, 'd': 9}\nd2 = {'a': 5, 'b': 6, 'c':7, 'e': 4} \n\n# get keys from one of the dictionary\nd1_ks = [k for k in d1.keys()]\nd2_ks = [k for k in d2.keys()]\n\nall_ks = set(d1_ks + d2_ks)\n\nprint(all_ks)\n['a', 'b', 'c', 'd', 'e']\n\n# call values from each dictionary on available keys\nd_merged = {k: [d1.get(k), d2.get(k)] for k in all_ks}\n\nprint(d_merged)\n{'d': [9, None], 'a': [1, 5], 'b': [2, 6], 'c': [3, 7], 'e': [None, 4]}\n\n\n",
"Modifying this answer to create a dictionary of tuples (what the OP asked for), instead of a dictionary of lists:\nfrom collections import defaultdict\n\nd1 = {1: 2, 3: 4}\nd2 = {1: 6, 3: 7}\n\ndd = defaultdict(tuple)\n\nfor d in (d1, d2): # you can list as many input dicts as you want here\n for key, value in d.items():\n dd[key] += (value,)\n\nprint(dd)\n\nThe above prints the following:\ndefaultdict(<class 'tuple'>, {1: (2, 6), 3: (4, 7)})\n\n",
"def merge(d1, d2, merge):\n result = dict(d1)\n for k,v in d2.iteritems():\n if k in result:\n result[k] = merge(result[k], v)\n else:\n result[k] = v\n return result\n\nd1 = {'a': 1, 'b': 2}\nd2 = {'a': 1, 'b': 3, 'c': 2}\nprint merge(d1, d2, lambda x, y:(x,y))\n\n{'a': (1, 1), 'c': 2, 'b': (2, 3)}\n\n",
"To supplement the two-list solutions, here is a solution for processing a single list.\nA sample list (NetworkX-related; manually formatted here for readability):\nec_num_list = [((src, tgt), ec_num['ec_num']) for src, tgt, ec_num in G.edges(data=True)]\n\nprint('\\nec_num_list:\\n{}'.format(ec_num_list))\nec_num_list:\n[((82, 433), '1.1.1.1'),\n ((82, 433), '1.1.1.2'),\n ((22, 182), '1.1.1.27'),\n ((22, 3785), '1.2.4.1'),\n ((22, 36), '6.4.1.1'),\n ((145, 36), '1.1.1.37'),\n ((36, 154), '2.3.3.1'),\n ((36, 154), '2.3.3.8'),\n ((36, 72), '4.1.1.32'),\n ...] \n\nNote the duplicate values for the same edges (defined by the tuples). To collate those \"values\" to their corresponding \"keys\":\nfrom collections import defaultdict\nec_num_collection = defaultdict(list)\nfor k, v in ec_num_list:\n ec_num_collection[k].append(v)\n\nprint('\\nec_num_collection:\\n{}'.format(ec_num_collection.items()))\nec_num_collection:\n[((82, 433), ['1.1.1.1', '1.1.1.2']), ## << grouped \"values\"\n((22, 182), ['1.1.1.27']),\n((22, 3785), ['1.2.4.1']),\n((22, 36), ['6.4.1.1']),\n((145, 36), ['1.1.1.37']),\n((36, 154), ['2.3.3.1', '2.3.3.8']), ## << grouped \"values\"\n((36, 72), ['4.1.1.32']),\n...] \n\nIf needed, convert that list to dict:\nec_num_collection_dict = {k:v for k, v in zip(ec_num_collection, ec_num_collection)}\n\nprint('\\nec_num_collection_dict:\\n{}'.format(dict(ec_num_collection)))\n ec_num_collection_dict:\n {(82, 433): ['1.1.1.1', '1.1.1.2'],\n (22, 182): ['1.1.1.27'],\n (22, 3785): ['1.2.4.1'],\n (22, 36): ['6.4.1.1'],\n (145, 36): ['1.1.1.37'],\n (36, 154): ['2.3.3.1', '2.3.3.8'],\n (36, 72): ['4.1.1.32'],\n ...}\n\nReferences\n\n[this thread] How to merge multiple dicts with same key?\n[Python docs] https://docs.python.org/3.7/library/collections.html#collections.defaultdict\n\n",
"From blubb answer:\nYou can also directly form the tuple using values from each list\nds = [d1, d2]\nd = {}\nfor k in d1.keys():\n d[k] = (d1[k], d2[k])\n\nThis might be useful if you had a specific ordering for your tuples\nds = [d1, d2, d3, d4]\nd = {}\nfor k in d1.keys():\n d[k] = (d3[k], d1[k], d4[k], d2[k]) #if you wanted tuple in order of d3, d1, d4, d2\n\n",
"This library helped me, I had a dict list of nested keys with the same name but with different values, every other solution kept overriding those nested keys.\nhttps://pypi.org/project/deepmerge/\nfrom deepmerge import always_merger\n\ndef process_parms(args):\n temp_list = []\n for x in args:\n with open(x, 'r') as stream:\n temp_list.append(yaml.safe_load(stream))\n\n return always_merger.merge(*temp_list)\n\n",
"If keys are nested:\nd1 = { 'key1': { 'nkey1': 'x1' }, 'key2': { 'nkey2': 'y1' } } \nd2 = { 'key1': { 'nkey1': 'x2' }, 'key2': { 'nkey2': 'y2' } }\n\nds = [d1, d2]\nd = {}\nfor k in d1.keys():\n for k2 in d1[k].keys():\n d.setdefault(k, {})\n d[k].setdefault(k2, [])\n d[k][k2] = tuple(d[k][k2] for d in ds)\n\nyields:\n{'key1': {'nkey1': ('x1', 'x2')}, 'key2': {'nkey2': ('y1', 'y2')}}\n\n",
"There is a great library funcy doing what you need in a just one, short line.\nfrom funcy import join_with\nfrom pprint import pprint\n\nd1 = {\"key1\": \"x1\", \"key2\": \"y1\"}\nd2 = {\"key1\": \"x2\", \"key2\": \"y2\"}\n\nlist_of_dicts = [d1, d2]\n\nmerged_dict = join_with(tuple, list_of_dicts)\n\npprint(merged_dict)\n\nOutput:\n{'key1': ('x1', 'x2'), 'key2': ('y1', 'y2')}\n\nMore info here: funcy -> join_with.\n",
"d1 = {'A': 'a', 'B': 'b'}\nd2 = {'A': 'c', 'B': 'd'}\nd3 = {'A': 'e', 'B': 'f'}\n\ndnew = {\n \n k : [d1.get(k),d2.get(k),d3.get(k)] for k in d1.keys() | d2.keys() | d3.keys() \n \n }\n\nprint(dnew)\n\n\"\"\"\n{'A': ['a', 'c', 'e'], 'B': ['b', 'd', 'f']}\n\n\"\"\"\n\n"
] | [
108,
61,
5,
5,
4,
4,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"A better representation for two or more dicts with the same keys is a pandas Data Frame IMO:\nd1 = {\"key1\": \"x1\", \"key2\": \"y1\"} \nd2 = {\"key1\": \"x2\", \"key2\": \"y2\"} \nd3 = {\"key1\": \"x3\", \"key2\": \"y3\"} \n\nd1_df = pd.DataFrame.from_dict(d1, orient='index')\nd2_df = pd.DataFrame.from_dict(d2, orient='index')\nd3_df = pd.DataFrame.from_dict(d3, orient='index')\n\nfin_df = pd.concat([d1_df, d2_df, d3_df], axis=1).T.reset_index(drop=True)\nfin_df\n\n key1 key2\n0 x1 y1\n1 x2 y2\n2 x3 y3\n\n",
"\nUsing below method we can merge two dictionaries having same keys.\n\ndef update_dict(dict1: dict, dict2: dict) -> dict:\noutput_dict = {}\nfor key in dict1.keys():\n output_dict.update({key: []})\n if type(dict1[key]) != str:\n for value in dict1[key]:\n output_dict[key].append(value)\n else:\n output_dict[key].append(dict1[key])\n if type(dict2[key]) != str:\n for value in dict2[key]:\n output_dict[key].append(value)\n else:\n output_dict[key].append(dict2[key])\n\nreturn output_dict\n\n\nInput: d1 = {key1: x1, key2: y1} d2 = {key1: x2, key2: y2}\nOutput: {'key1': ['x1', 'x2'], 'key2': ['y1', 'y2']}\n\n",
"dicts = [dict1,dict2,dict3]\nout = dict(zip(dicts[0].keys(),[[dic[list(dic.keys())[key]] for dic in dicts] for key in range(0,len(dicts[0]))]))\n\n",
"A compact possibility\nd1={'a':1,'b':2}\nd2={'c':3,'d':4}\ncontext={**d1, **d2}\ncontext\n{'b': 2, 'c': 3, 'd': 4, 'a': 1}\n\n"
] | [
-1,
-1,
-1,
-4
] | [
"dictionary",
"merge",
"python"
] | stackoverflow_0005946236_dictionary_merge_python.txt |
Q:
mpl_connect key_press_event event does not fire in Scrollable Window on Python Matplotlib
I have multiple plot inside scrollable window QT5 widget as below.
class ScrollableWindow(QtWidgets.QMainWindow):
def __init__(self, fig):
self.qapp = QtWidgets.QApplication([])
QtWidgets.QMainWindow.__init__(self)
self.widget = QtWidgets.QWidget()
self.setCentralWidget(self.widget)
self.widget.setLayout(QtWidgets.QVBoxLayout())
self.widget.layout().setContentsMargins(0,0,0,0)
self.widget.layout().setSpacing(0)
self.fig = fig
self.canvas = FigureCanvas(self.fig)
self.canvas.draw()
self.scroll = QtWidgets.QScrollArea(self.widget)
self.scroll.setWidget(self.canvas)
self.nav = NavigationToolbar(self.canvas, self.widget)
self.widget.layout().addWidget(self.nav)
self.widget.layout().addWidget(self.scroll)
self.show()
exit(self.qapp.exec_())
I try to capture key press event with mpl_connect.
def key_selector(event):
print('Key pressed.')
fig, axes = plt.subplots(ncols=4, nrows=5, figsize=(16,16))
for ax in axes.flatten():
ax.plot([2,3,5,1])
fig.canvas.mpl_connect('key_press_event', key_selector)
a = ScrollableWindow(fig)
But the mpl_connect 'key_press_event' does not fire. I tried another event, e.g. 'button_press_event', and it works as usual. Any advice or guidance on this would be greatly appreciated. Thanks.
A:
Have a look at this matplotlib issue. Basically, you need to activate the focus of Qt onto your matplotlib canvas.
self.canvas.setFocusPolicy(QtCore.Qt.ClickFocus)
self.canvas.setFocus()
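In the ScrollableWindow from the question, that goes right after the canvas is created. A minimal sketch, assuming the PyQt5 backend:
from PyQt5 import QtCore

# inside ScrollableWindow.__init__, right after FigureCanvas(self.fig)
self.canvas = FigureCanvas(self.fig)
self.canvas.setFocusPolicy(QtCore.Qt.ClickFocus)  # let the canvas accept keyboard focus
self.canvas.setFocus()                            # give it focus immediately
self.canvas.draw()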
| mpl_connect key_press_event event does not fire in Scrollable Window on Python Matplotlib | I have multiple plot inside scrollable window QT5 widget as below.
class ScrollableWindow(QtWidgets.QMainWindow):
def __init__(self, fig):
self.qapp = QtWidgets.QApplication([])
QtWidgets.QMainWindow.__init__(self)
self.widget = QtWidgets.QWidget()
self.setCentralWidget(self.widget)
self.widget.setLayout(QtWidgets.QVBoxLayout())
self.widget.layout().setContentsMargins(0,0,0,0)
self.widget.layout().setSpacing(0)
self.fig = fig
self.canvas = FigureCanvas(self.fig)
self.canvas.draw()
self.scroll = QtWidgets.QScrollArea(self.widget)
self.scroll.setWidget(self.canvas)
self.nav = NavigationToolbar(self.canvas, self.widget)
self.widget.layout().addWidget(self.nav)
self.widget.layout().addWidget(self.scroll)
self.show()
exit(self.qapp.exec_())
I try to capture key press event with mpl_connect.
def key_selector(event):
print('Key pressed.')
fig, axes = plt.subplots(ncols=4, nrows=5, figsize=(16,16))
for ax in axes.flatten():
ax.plot([2,3,5,1])
fig.canvas.mpl_connect('key_press_event', key_selector)
a = ScrollableWindow(fig)
But mpl_connect 'key_press_event' does not fire. I try another e.g. 'button_press_event' and work as usual. Any advice or guidance on this would be greatly appreciated, Thanks.
| [
"Have a look at this matplotlib issue. Basically, you need to activate the focus of Qt onto your matplotlib canvas.\n self.canvas.setFocusPolicy(QtCore.Qt.ClickFocus)\n self.canvas.setFocus()\n\n"
] | [
1
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0074650441_matplotlib_python.txt |
Q:
Python distribution package based on local git commit
I am trying to create a Python distribution package following https://packaging.python.org/en/latest/tutorials/packaging-projects/. My source folder contains many irrelevant files and subfolders which should be excluded from the distribution, such as temporary files, auxiliary code, test output files, private notes, etc. The knowledge of which files are relevant and which are not is already represented in the git source control. Instead of replicating all this as inclusion/exclusion rules in the pyproject configuration file (and needing to keep it up to date going forward), I would like my build chain to be based on a git commit/tag. This will also be useful for keeping versioning in sync between my repo history and PyPI. I know there is an option to do it with GitHub Actions, but my question is how to do it locally, based just on git rather than GitHub.
Edit following comment: I agree that you don't always want the repo and distro trees to be the same, but it would be much simpler to control if the distro starts from the repo tree as a baseline, with a few additional exclusion rules on top of that.
A:
To automatically include files from a Git or Mercurial repository you can use setuptools_scm. The tool can also automatically set the software version from a repository tag and the number of commits since that tag.
The tool prepares data for the standard setuptools.
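A minimal sketch of that wiring for a setuptools-based project (the package name is a placeholder, and whether you configure this in setup.py or pyproject.toml depends on your setup):
# setup.py -- version comes from the latest git tag; git-tracked files are picked up for the sdist
from setuptools import setup, find_packages

setup(
    name="your-package",                # placeholder name
    packages=find_packages(),
    use_scm_version=True,               # setuptools_scm derives the version from the repository
    setup_requires=["setuptools_scm"],  # makes the tool available at build time
)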
| Python distibution package based on local git commit | I am trying to create a python distribution package following https://packaging.python.org/en/latest/tutorials/packaging-projects/. My source folder contains many irrelevant files and subfolders which should be excluded from the distribution, such as temporary files, auxiliary code, test output files, private notes, etc. The knowledge of which files are relevant and which not, is already represented in the git source control. Instead of replicating all this as inclusion/exclusion rules in the pyproject configuration file (and needing to keep it up to date going forward), I would like my build chain to be based on a git commit/tag. This will also be useful for keeping versioning in sync between my repo history and pypi. I know there is an option to do it with github actions, but my question is how to do it locally, based just on git rather than github.
Edit following comment: I agree that you don't always want the repo and distro trees to be the same, but it would be much simpler to control if the distro starts from the repo tree as a baseline, with a few additional exclusion rules on top of that.
| [
"To automatically include files from a Git or Mercurial repository you can use setuptools_scm. The tool can also automatically set the software version from a repository tag and the amount of changes since the tag.\nThe tool prepares data for the standard setuptools.\n"
] | [
1
] | [] | [] | [
"git",
"packaging",
"pip",
"pyproject.toml",
"python"
] | stackoverflow_0074627438_git_packaging_pip_pyproject.toml_python.txt |
Q:
How to return json from FastAPI (Backend) with websocket to vue (Frontend)
I have an application in which the frontend is Vue and the backend is FastAPI; the communication is done through a websocket.
Currently, the frontend lets the user enter a term, which is sent to the backend to generate the autocomplete and also to perform a search against a URL that returns a JSON. I save this JSON in the frontend folder. After that, the backend returns the autocomplete data for the term in question to the frontend, and the frontend displays the autocomplete along with the JSON data.
However, after studying a little more, I noticed that there should be a way to send the JSON returned by the request URL to Vue (the frontend) without having to save it locally, which would avoid the error of not being allowed to run this process more than once.
My current code is as follows. For FastAPI (backend):
@app.websocket("/")
async def predict_question(websocket: WebSocket):
await websocket.accept()
while True:
input_text = await websocket.receive_text()
autocomplete_text = text_gen.generate_text(input_text)
autocomplete_text = re.sub(r"[\([{})\]]", "", autocomplete_text)
autocomplete_text = autocomplete_text.split()
autocomplete_text = autocomplete_text[0:2]
resp = req.get('www.description_url_search_='+input_text+'')
datajson = resp.json()
with open('/home/user/backup/AutoComplete/frontend/src/data.json', 'w', encoding='utf-8') as f:
json.dump(datajson, f, ensure_ascii=False, indent=4)
await websocket.send_text(' '.join(autocomplete_text))
File App.vue (frontend):
<template>
<div class="main-container">
<h1 style="color:#0072c6;">Title</h1>
<p style="text-align:center; color:#0072c6;">
Version 0.1
<br>
</p>
<Autocomplete />
<br>
</div>
<div style="color:#0072c6;">
<JsonArq />
</div>
<div style="text-align:center;">
<img src="./components/logo-1536.png" width=250 height=200 alt="Logo" >
</div>
</template>
<script>
import Autocomplete from './components/Autocomplete.vue'
import JsonArq from './components/EstepeJSON.vue'
export default {
name: 'App',
components: {
Autocomplete,
JsonArq: JsonArq
}
}
</script>
<style>
.main-container {
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
font-family: 'Fredoka', sans-serif;
}
h1 {
font-size: 3rem;
}
@import url('https://fonts.googleapis.com/css2?family=Fredoka&display=swap');
</style>
Autocomplete.vue file in the components directory:
<template>
<div class="pad-container">
<div tabindex="1" @focus="setCaret" class="autocomplete-container">
<span @input="sendText" @keypress="preventInput" ref="editbar" class="editable" contenteditable="true"></span>
<span class="placeholder" contenteditable="false">{{autoComplete}}</span>
</div>
</div>
</template>
<script>
export default {
name: 'Autocomplete',
data: function() {
return {
autoComplete: "",
maxChars: 75,
connection: null
}
},
mounted() {
const url = "ws://localhost:8000/"
this.connection = new WebSocket(url);
this.connection.onopen = () => console.log("connection established");
this.connection.onmessage = this.receiveText;
},
methods: {
setCaret() {
const range= document.createRange()
const sel = window.getSelection();
const parentNode = this.$refs.editbar;
if (parentNode.firstChild == undefined) {
const emptyNode = document.createTextNode("");
parentNode.appendChild(emptyNode);
}
range.setStartAfter(this.$refs.editbar.firstChild);
range.collapse(true);
sel.removeAllRanges();
sel.addRange(range);
},
preventInput(event) {
let prevent = false;
// handles capital letters, numbers, and punctuations input
if (event.key == event.key.toUpperCase()) {
prevent = true;
}
// exempt spacebar input
if (event.code == "Space") {
prevent = false;
}
// handle input overflow
const nChars = this.$refs.editbar.textContent.length;
if (nChars >= this.maxChars) {
prevent = true;
}
if (prevent == true) {
event.preventDefault();
}
},
sendText() {
const inputText = this.$refs.editbar.textContent;
this.connection.send(inputText);
},
receiveText(event) {
this.autoComplete = event.data;
}
}
}
</script>
EstepeJSON.vue file in the components directory:
<template>
<div width="80%" v-for="regList in myJson" :key="regList" class="container">
<table>
<thead>
<tr>
<th>Documento</th>
</tr>
</thead>
<tbody>
<tr v-for="countryList in regList[2]" :key="countryList">
<td style="visibility: visible">{{ countryList}}</td>
</tr>
</tbody>
</table>
</div>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css"
/>
</template>
<script>
import json from "@/data.json";
export default {
name: "EstepeJson",
data() {
return {
myJson: json,
};
},
};
</script>
Example of the JSON returned by the URL:
[
{
"Title": "SOFT-STARTER",
"Cod": "Produto: 15775931",
"Description": "A soft-starter SSW7000 permite o controle de partida/parada e proteção de motores.",
"Technical_characteristics": ["Corrente nominal", "600 A", "Tensão nominal", "4,16 kV", "Tensão auxiliar", "200-240 V", "Grau de proteção", "IP41", "Certificação", "CE"]
},
{
"Title": "SOFT-STARTER SSW",
"Cod": "Produto: 14223395",
"Description": "A soft-starter SSW7000 permite o controle de partida/parada e proteção de motores de indução trifásicos de média tensão.",
"Technical_characteristics": ["Corrente nominal", "125 A", "Tensão nominal", "6,9 kV", "Tensão auxiliar", "200-240 V", "Grau de proteção", "IP54/NEMA12", "Certificação", "CE"]
}
]
A:
First, instead of using Python requests module (which would block the event loop, see here for more details), I would highly suggest you use httpx, which offers an async API as well. Have a look at this answer and this answer for more details and working examples.
Second, to send data as JSON, you need to use await websocket.send_json(data), as explained in Starlette documentation. As shown in Starlette's websockets source code, Starlette/FastAPI will use text = json.dumps(data) (to serialise the data you passed) when calling send_json() function. Hence, you need to pass a Python dict object. Similar to requests, in httpx you can call the .json() method on the response object to get the response data as a dictionary, and then pass the data to send_json().
Example
from fastapi import FastAPI, WebSocket
from fastapi.responses import HTMLResponse
import httpx
app = FastAPI()
html = """
<!DOCTYPE html>
<html>
<head>
<title>Chat</title>
</head>
<body>
<h1>WebSocket Chat</h1>
<form action="" onsubmit="sendMessage(event)">
<input type="text" id="messageText" autocomplete="off"/>
<button>Send</button>
</form>
<ul id='messages'>
</ul>
<script>
var ws = new WebSocket("ws://localhost:8000/ws");
ws.onmessage = function(event) {
var messages = document.getElementById('messages')
var message = document.createElement('li')
var content = document.createTextNode(event.data)
message.appendChild(content)
messages.appendChild(message)
};
function sendMessage(event) {
var input = document.getElementById("messageText")
ws.send(input.value)
input.value = ''
event.preventDefault()
}
</script>
</body>
</html>
"""
@app.get('/')
async def get():
return HTMLResponse(html)
@app.websocket('/ws')
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
while True:
data = await websocket.receive_text()
# here use httpx to issue a request as demonstrated in the linked answers above
# r = await client.send(...
await websocket.send_json(r.json())
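For completeness, a minimal sketch of that request step with httpx's async API; the external URL, the query parameter and the module-level client are assumptions, not part of the original answer:
import httpx

client = httpx.AsyncClient()  # reuse one client for the app's lifetime

@app.websocket('/ws')
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_text()
        # issue the outbound request without blocking the event loop
        r = await client.get('https://example.com/search', params={'q': data})
        await websocket.send_json(r.json())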
A:
Just convert your data to a json string with json.dumps(mydata)
A:
Using @Chris' tips on HTTPX and after some research, I managed to solve my problem. Below is the resolution.
In my backend FastAPI file I implemented HTTPX async (tip from @Chris). And after returning the JSON, I took the autocomplete term and added it to the first position of the JSON. Thus returning to Vue (frontend) a JSON with autocomplete and HTTPX data.
File FastAPI:
async def predict_question(websocket: WebSocket):
await manager.connect(websocket)
input_text = await websocket.receive_text()
if not input_text:
await manager.send_personal_message(json.dumps([]), websocket)
else:
autocomplete_text = text_gen.generate_text(input_text)
autocomplete_text = re.sub(r"[\([{})\]]", "", autocomplete_text)
autocomplete_text = autocomplete_text.split()
autocomplete_text = autocomplete_text[0:2]
resp = client.build_request("GET", 'www.description_url_search_='+input_text+'')
r = await client.send(resp)
datajson = r.json()
datajson.insert(0, ' '.join(autocomplete_text))
await manager.send_personal_message(json.dumps(datajson), websocket)
In the Autocomplete.vue file I made small changes.
First I merged the EstepeJson.vue file into Autocomplete.vue, especially the json reading part in the html.
Second, in the data: function(){} I added one more object, called myJson: [].
Third, in the receiveText method I changed the way the data is received from the websocket: I now use JSON.parse to convert event.data to JSON, then use the shift method to take (and remove) the first position of the array, which is the autocomplete text, and finally assign the remaining JSON to the myJson variable.
File Autocomplete.vue:
<template>
<div class="pad-container">
<div tabindex="1" @focus="setCaret" class="autocomplete-container">
<span @input="sendText" @keypress="preventInput" ref="editbar" class="editable" contenteditable="true"></span>
<span class="placeholder" data-ondeleteId="#editx" contenteditable="false">{{autoComplete}}</span>
</div>
</div>
<div v-for="regList in myJson" :key="regList" class="container" >
<table>
<thead>
<tr>
<th>Documento</th>
</tr>
</thead>
<tbody>
<tr v-for="countryList in regList[2]" :key="countryList">
<td style="visibility: visible">{{ countryList}}</td>
</tr>
</tbody>
</table>
</div>
</template>
<script>
...
data: function() {
return {
autoComplete: "",
maxChars: 75,
connection: null,
myJson: []
}
},
.....
...
receiveText(event) {
let result = JSON.parse(event.data)
this.autoComplete = result.shift();
this.myJson = result
}
</script>
| How to return json from FastAPI (Backend) with websocket to vue (Frontend) | I have an application, in which the Frontend is through Vue and the backend is FastAPI, the communication is done through websocket.
Currently, the frontend allows the user to enter a term, which is sent to the backend to generate the autocomplete and also perform a search on a URL that returns a json. In which, I save this json in the frontend folder. After that, the backend returns the autocomplete data for the term in question to the frontend. The frontend displays the aucomplete along with the json data.
However, when I studied a little more, I noticed that there is a way to send the json returned by the request url to Vue (frontend), without having to save it locally, avoiding giving an error of not allowing to execute this process more than once.
My current code is as follows. For FastAPI (backend):
@app.websocket("/")
async def predict_question(websocket: WebSocket):
await websocket.accept()
while True:
input_text = await websocket.receive_text()
autocomplete_text = text_gen.generate_text(input_text)
autocomplete_text = re.sub(r"[\([{})\]]", "", autocomplete_text)
autocomplete_text = autocomplete_text.split()
autocomplete_text = autocomplete_text[0:2]
resp = req.get('www.description_url_search_='+input_text+'')
datajson = resp.json()
with open('/home/user/backup/AutoComplete/frontend/src/data.json', 'w', encoding='utf-8') as f:
json.dump(datajson, f, ensure_ascii=False, indent=4)
await websocket.send_text(' '.join(autocomplete_text))
File App.vue (frontend):
<template>
<div class="main-container">
<h1 style="color:#0072c6;">Title</h1>
<p style="text-align:center; color:#0072c6;">
Version 0.1
<br>
</p>
<Autocomplete />
<br>
</div>
<div style="color:#0072c6;">
<JsonArq />
</div>
<div style="text-align:center;">
<img src="./components/logo-1536.png" width=250 height=200 alt="Logo" >
</div>
</template>
<script>
import Autocomplete from './components/Autocomplete.vue'
import JsonArq from './components/EstepeJSON.vue'
export default {
name: 'App',
components: {
Autocomplete,
JsonArq: JsonArq
}
}
</script>
<style>
.main-container {
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
font-family: 'Fredoka', sans-serif;
}
h1 {
font-size: 3rem;
}
@import url('https://fonts.googleapis.com/css2?family=Fredoka&display=swap');
</style>
Autocomplete.vue file in the components directory:
<template>
<div class="pad-container">
<div tabindex="1" @focus="setCaret" class="autocomplete-container">
<span @input="sendText" @keypress="preventInput" ref="editbar" class="editable" contenteditable="true"></span>
<span class="placeholder" contenteditable="false">{{autoComplete}}</span>
</div>
</div>
</template>
<script>
export default {
name: 'Autocomplete',
data: function() {
return {
autoComplete: "",
maxChars: 75,
connection: null
}
},
mounted() {
const url = "ws://localhost:8000/"
this.connection = new WebSocket(url);
this.connection.onopen = () => console.log("connection established");
this.connection.onmessage = this.receiveText;
},
methods: {
setCaret() {
const range= document.createRange()
const sel = window.getSelection();
const parentNode = this.$refs.editbar;
if (parentNode.firstChild == undefined) {
const emptyNode = document.createTextNode("");
parentNode.appendChild(emptyNode);
}
range.setStartAfter(this.$refs.editbar.firstChild);
range.collapse(true);
sel.removeAllRanges();
sel.addRange(range);
},
preventInput(event) {
let prevent = false;
// handles capital letters, numbers, and punctuations input
if (event.key == event.key.toUpperCase()) {
prevent = true;
}
// exempt spacebar input
if (event.code == "Space") {
prevent = false;
}
// handle input overflow
const nChars = this.$refs.editbar.textContent.length;
if (nChars >= this.maxChars) {
prevent = true;
}
if (prevent == true) {
event.preventDefault();
}
},
sendText() {
const inputText = this.$refs.editbar.textContent;
this.connection.send(inputText);
},
receiveText(event) {
this.autoComplete = event.data;
}
}
}
</script>
EstepeJSON.ue file in the components directory:
<template>
<div width="80%" v-for="regList in myJson" :key="regList" class="container">
<table>
<thead>
<tr>
<th>Documento</th>
</tr>
</thead>
<tbody>
<tr v-for="countryList in regList[2]" :key="countryList">
<td style="visibility: visible">{{ countryList}}</td>
</tr>
</tbody>
</table>
</div>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css"
/>
</template>
<script>
import json from "@/data.json";
export default {
name: "EstepeJson",
data() {
return {
myJson: json,
};
},
};
</script>
Example of the JSON returned by the URL:
[
{
"Title": "SOFT-STARTER",
"Cod": "Produto: 15775931",
"Description": "A soft-starter SSW7000 permite o controle de partida/parada e proteção de motores.",
"Technical_characteristics": ["Corrente nominal", "600 A", "Tensão nominal", "4,16 kV", "Tensão auxiliar", "200-240 V", "Grau de proteção", "IP41", "Certificação", "CE"]
},
{
"Title": "SOFT-STARTER SSW",
"Cod": "Produto: 14223395",
"Description": "A soft-starter SSW7000 permite o controle de partida/parada e proteção de motores de indução trifásicos de média tensão.",
"Technical_characteristics": ["Corrente nominal", "125 A", "Tensão nominal", "6,9 kV", "Tensão auxiliar", "200-240 V", "Grau de proteção", "IP54/NEMA12", "Certificação", "CE"]
}
]
| [
"First, instead of using Python requests module (which would block the event loop, see here for more details), I would highly suggest you use httpx, which offers an async API as well. Have a look at this answer and this answer for more details and working examples.\nSecond, to send data as JSON, you need to use await websocket.send_json(data), as explained in Starlette documentation. As shown in Starlette's websockets source code, Starlette/FastAPI will use text = json.dumps(data) (to serialise the data you passed) when calling send_json() function. Hence, you need to pass a Python dict object. Similar to requests, in httpx you can call the .json() method on the response object to get the response data as a dictionary, and then pass the data to send_json().\nExample\nfrom fastapi import FastAPI, WebSocket\nfrom fastapi.responses import HTMLResponse\nimport httpx\n\napp = FastAPI()\n\nhtml = \"\"\"\n<!DOCTYPE html>\n<html>\n <head>\n <title>Chat</title>\n </head>\n <body>\n <h1>WebSocket Chat</h1>\n <form action=\"\" onsubmit=\"sendMessage(event)\">\n <input type=\"text\" id=\"messageText\" autocomplete=\"off\"/>\n <button>Send</button>\n </form>\n <ul id='messages'>\n </ul>\n <script>\n var ws = new WebSocket(\"ws://localhost:8000/ws\");\n ws.onmessage = function(event) {\n var messages = document.getElementById('messages')\n var message = document.createElement('li')\n var content = document.createTextNode(event.data)\n message.appendChild(content)\n messages.appendChild(message)\n };\n function sendMessage(event) {\n var input = document.getElementById(\"messageText\")\n ws.send(input.value)\n input.value = ''\n event.preventDefault()\n }\n </script>\n </body>\n</html>\n\"\"\"\n\n\[email protected]('/')\nasync def get():\n return HTMLResponse(html)\n \n\[email protected]('/ws')\nasync def websocket_endpoint(websocket: WebSocket):\n await websocket.accept()\n while True:\n data = await websocket.receive_text()\n # here use httpx to issue a request as demonstrated in the linked answers above\n # r = await client.send(... \n await websocket.send_json(r.json())\n\n",
"Just convert your data to a json string with json.dumps(mydata)\n",
"using @Chris' tips on HTTP and after some research I managed to solve my problem. Below is the resolution.\nIn my backend FastAPI file I implemented HTTPX async (tip from @Chris). And after returning the JSON, I took the autocomplete term and added it to the first position of the JSON. Thus returning to Vue (frontend) a JSON with autocomplete and HTTPX data.\nFile FastAPI:\nasync def predict_question(websocket: WebSocket):\n await manager.connect(websocket)\n input_text = await websocket.receive_text()\n if not input_text:\n await manager.send_personal_message(json.dumps([]), websocket)\n else:\n autocomplete_text = text_gen.generate_text(input_text)\n autocomplete_text = re.sub(r\"[\\([{})\\]]\", \"\", autocomplete_text)\n autocomplete_text = autocomplete_text.split()\n autocomplete_text = autocomplete_text[0:2]\n resp = client.build_request(\"GET\", 'www.description_url_search_='+input_text+'')\n r = await client.send(resp)\n datajson = r.json()\n datajson.insert(0, ' '.join(autocomplete_text))\n await manager.send_personal_message(json.dumps(datajson), websocket)\n\nIn the Autocomplete.vue file I made small changes.\nFirst I merged the EstepeJson.vue file into Autocomplete.vue, especially the json reading part in the html.\nSecond, in the data: function(){} I added one more object, called myJson: [].\nThird, in the receiveText method I changed the way to receive data from the websocket. Since now I have JSON.parse to convert event.data to JSON. Then I use the shift method to take the first position in the json and remove this data from the file. And finally, return the json to the myjson variable.\nFile Autocomplete.vue:\n<template>\n<div class=\"pad-container\">\n <div tabindex=\"1\" @focus=\"setCaret\" class=\"autocomplete-container\">\n <span @input=\"sendText\" @keypress=\"preventInput\" ref=\"editbar\" class=\"editable\" contenteditable=\"true\"></span>\n <span class=\"placeholder\" data-ondeleteId=\"#editx\" contenteditable=\"false\">{{autoComplete}}</span> \n </div>\n</div>\n<div v-for=\"regList in myJson\" :key=\"regList\" class=\"container\" >\n <table>\n <thead>\n <tr>\n <th>Documento</th>\n </tr>\n </thead>\n <tbody>\n <tr v-for=\"countryList in regList[2]\" :key=\"countryList\">\n <td style=\"visibility: visible\">{{ countryList}}</td>\n </tr>\n </tbody>\n </table>\n </div>\n</template>\n\n<script>\n...\ndata: function() {\n return {\n autoComplete: \"\",\n maxChars: 75,\n connection: null, \n myJson: []\n }\n },\n.....\n...\n receiveText(event) {\n let result = JSON.parse(event.data)\n this.autoComplete = result.shift();\n this.myJson = result\n }\n</script>\n\n"
] | [
1,
0,
0
] | [] | [] | [
"fastapi",
"html",
"python",
"vue.js",
"websocket"
] | stackoverflow_0074618868_fastapi_html_python_vue.js_websocket.txt |
Q:
Who can help me with python task?
You are given an integer n. There are also three types of operations:
Reduce by 1.
Increase by 1.
If n is evenly divisible by 3, divide n by 3.
What is the minimum number of operations needed to turn n into 1?
Input data
The first line contains one integer n (1 ≤ n ≤ 10^18).
Output data
Output one number - the minimum number of operations needed to make the number equal to 1.
Note
In the first case, you can replace it once with 3 or 1.
In the second application, you can write off two sums by 1, and then by 3.
Examples
Below you will find examples of input data and answers that your program should print.
x=int(input())
counter=0
while x > 1:
if x % 3 == 0:
x=x/3
counter+=1
if x == 1:
break
if x % 3 == 1:
x=x-1
counter+=1
if x == 1:
break
if x%3 == 2:
x=x+1
counter+=1
print(counter)
How can I make this code more correct?
A:
To solve this problem in Python without using functions, you can use a while loop to iterate over the different operations until the number reaches 1.
Here is one possible solution:
# Get the input number
n = int(input())
# Initialize the number of operations to 0
num_ops = 0
# Keep looping until the number is 1
while n != 1:
# If the number is evenly divisible by 3, divide it by 3
if n % 3 == 0:
        n //= 3  # integer division keeps n an exact int for very large values
# If the number is not evenly divisible by 3, increment or decrement
# it by 1 to make it evenly divisible by 3
else:
# If the number is greater than 1, decrement it by 1
if n > 1:
n -= 1
# If the number is 1, increment it by 1 instead
else:
n += 1
# Increment the number of operations by 1
num_ops += 1
# Print the number of operations
print(num_ops)
This solution will first check if the number is evenly divisible by 3, and if it is, it will divide it by 3. If the number is not evenly divisible by 3, it will either increment or decrement it by 1, depending on whether it is greater than or less than 1. Finally, it will keep track of the number of operations performed and print it when the number reaches 1.
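For reference, a different approach (not the greedy loop above) that returns the true minimum is a memoized search which, at every number not divisible by 3, tries both rounding down and rounding up to the nearest multiple of 3 before dividing. This is only a sketch: for example it gives 3 for n = 8 (8 to 9 to 3 to 1) where the greedy loop gives 4, and the recursion depth is only about log3(n), so n up to 10^18 is fine.
from functools import lru_cache

@lru_cache(maxsize=None)
def min_ops(n):
    if n == 1:
        return 0
    if n <= 3:               # 2 -> subtract 1, 3 -> divide by 3
        return 1
    r = n % 3
    if r == 0:
        return 1 + min_ops(n // 3)
    # either step down to the multiple of 3 below, or step up to the one above
    return min(r + min_ops(n - r), (3 - r) + min_ops(n + (3 - r)))

print(min_ops(int(input())))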
| Who can help me with python task? | You are given an integer n. There are also three types of operations:
Reduce by 1.
Increase by 1.
If n is evenly divisible by 3, divide n by 3.
For what minimum number of operations can you earn a number in row 1?
Input data
The first line contains one integer n (1≤n≤10
18
).
Output data
Output one number - the minimum number of operations for which it is possible to perform a numerical number equal to 1.
Note
In the first case, you can replace it once with 3 or 1.
In the second application, you can write off two sums by 1, and then by 3.
Examples
Below you will find examples of input data and answers that your program should print.
x=int(input())
counter=0
while x > 1:
if x % 3 == 0:
x=x/3
counter+=1
if x == 1:
break
if x % 3 == 1:
x=x-1
counter+=1
if x == 1:
break
if x%3 == 2:
x=x+1
counter+=1
print(counter)
Output data
Output one number - the minimum number of operations for which it is possible to perform a numerical number equal to 1.
Note
In the first case, you can replace it once with 3 or 1.
How i can make this code more correctly?
| [
"To solve this problem in Python without using functions, you can use a while loop to iterate over the different operations until the number reaches 1.\nHere is one possible solution:\n# Get the input number\nn = int(input())\n\n# Initialize the number of operations to 0\nnum_ops = 0\n\n# Keep looping until the number is 1\nwhile n != 1:\n # If the number is evenly divisible by 3, divide it by 3\n if n % 3 == 0:\n n /= 3\n # If the number is not evenly divisible by 3, increment or decrement\n # it by 1 to make it evenly divisible by 3\n else:\n # If the number is greater than 1, decrement it by 1\n if n > 1:\n n -= 1\n # If the number is 1, increment it by 1 instead\n else:\n n += 1\n # Increment the number of operations by 1\n num_ops += 1\n\n# Print the number of operations\nprint(num_ops)\n\nThis solution will first check if the number is evenly divisible by 3, and if it is, it will divide it by 3. If the number is not evenly divisible by 3, it will either increment or decrement it by 1, depending on whether it is greater than or less than 1. Finally, it will keep track of the number of operations performed and print it when the number reaches 1.\n"
] | [
1
] | [] | [] | [
"math",
"python",
"solver"
] | stackoverflow_0074655717_math_python_solver.txt |
Q:
specific sumproduct list comprehension in pandas
suppose i have a dataframe
df = pd.DataFrame({"age" : [0, 5, 10, 15, 20], "income": [5, 13, 23, 18, 12]})
age income
0 0 5
1 5 13
2 10 23
3 15 18
4 20 12
I want to iterate through df["income"] and calculate the sumproduct as follows (example for age 15): 18+23*(15-10)+13*(15-5)+5*(15-0) = 338.
more generic: income[3] + income[2] * ( age[3] - age[2] ) + income[1] * ( age[3] - age[1] ) + income[0] * (age[3] - age[0] )
I am struggling to formulate the age relative to the current iteration of age ( age[x] - age[y] ) in a generic way to use in a list comprehension or formula.
edit: the actual operation I want to apply is
income[3 ] + income[2]* interest ** ( age[3] - age[2] ) + income[1]*interest ** (age[3] - age[1] ...
example from above: 18+23*1.03 ** (15-10)+13*1.03 ** (15-5)+5*1.03 ** (15-0) = 69.92
interest = 1.03
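To make the target concrete, the age-15 example can be written out directly in plain Python (a sketch using the table values above):
interest = 1.03
age = [0, 5, 10, 15, 20]
income = [5, 13, 23, 18, 12]
i = 3  # row for age 15
value = sum(inc * interest ** (age[i] - a) for a, inc in zip(age[:i + 1], income[:i + 1]))
print(round(value, 2))  # 69.92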
ANSWERED thanks to jezrael & mozway
A:
Numpy solution - you can use broadcasting for avoid loops for improve performance:
df = pd.DataFrame({"age" : [0, 5, 10, 15, 20], "income": [5, 13, 23, 18, 12]})
interest = 1.03
age = df['age'].to_numpy()
Use power with subtracted values of mask:
arr = interest ** (age[:, None] - age )
print (arr)
[[1. 0.86260878 0.74409391 0.64186195 0.55367575]
[1.15927407 1. 0.86260878 0.74409391 0.64186195]
[1.34391638 1.15927407 1. 0.86260878 0.74409391]
[1.55796742 1.34391638 1.15927407 1. 0.86260878]
[1.80611123 1.55796742 1.34391638 1.15927407 1. ]]
Then set 0 to upper triangle:
arr = np.where(np.triu(np.ones(arr.shape, dtype=bool)), 0, arr)
print (arr)
[[0. 0. 0. 0. 0. ]
[1.15927407 0. 0. 0. 0. ]
[1.34391638 1.15927407 0. 0. 0. ]
[1.55796742 1.34391638 1.15927407 0. 0. ]
[1.80611123 1.55796742 1.34391638 1.15927407 0. ]]
Set 1 to diagonal:
np.fill_diagonal(arr, 1)
print (arr)
[[1. 0. 0. 0. 0. ]
[1.15927407 1. 0. 0. 0. ]
[1.34391638 1.15927407 1. 0. 0. ]
[1.55796742 1.34391638 1.15927407 1. 0. ]
[1.80611123 1.55796742 1.34391638 1.15927407 1. ]]
Multiple by column income and sum per rows:
print (arr * df['income'].to_numpy())
[[ 5. 0. 0. 0. 0. ]
[ 5.79637037 13. 0. 0. 0. ]
[ 6.7195819 15.07056297 23. 0. 0. ]
[ 7.78983708 17.47091293 26.66330371 18. 0. ]
[ 9.03055617 20.25357642 30.91007672 20.86693334 12. ]]
df['new'] = (arr * df['income'].to_numpy()).sum(axis=1)
print (df)
   age  income        new
0    0       5   5.000000
1    5      13  18.796370
2   10      23  44.790145
3   15      18  69.924054
4   20      12  93.061143
Performance: for 5k rows, apply uses loops under the hood, so it is slow (best to avoid it)
df = pd.DataFrame({"age" : [0, 5, 10, 15, 20], "income": [5, 13, 23, 18, 12]})
df = pd.concat([df] * 1000, ignore_index=True)
In [292]: %%timeit
...: age = df['age'].to_numpy()
...:
...: arr = interest ** (age[:, None] - age )
...: arr = np.where(np.triu(np.ones(arr.shape, dtype=bool)), 0, arr)
...: np.fill_diagonal(arr, 1)
...: df['new'] = (arr * df['income'].to_numpy()).sum(axis=1)
...:
1.39 s ± 69.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [293]: %%timeit
...: df['sumproduct'] = (df['age'].expanding().apply(lambda x: sum(df.loc[:x.index[-1], 'income'] * interest**(x.iloc[-1]-x))))
...:
5.13 s ± 411 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
A:
You can factor out the current age: 18+23*(15-10)+13*(15-5)+5*(15-0) can be rewritten as 18+(23+13+5)*15-(23*10+13*5+5*0):
The general formula is thus:
sumproduct(n) = (income[n]
                 + age[n]*sum(income[:n])
                 - sum(age[:n]*income[:n])
                )
As code:
df['sumproduct'] = (df['income']
.add(df['age'].mul(df['income'].cumsum().shift(fill_value=0)))
.sub(df['age'].mul(df['income']).cumsum().shift(fill_value=0))
)
output:
age income sumproduct
0 0 5 5
1 5 13 38
2 10 23 138
3 15 18 338
4 20 12 627
power
powers are more complex as you cannot directly factorize, you can however rewrite the operation with expanding:
df['sumproduct'] = (df['age'].expanding()
.apply(lambda x: sum(df.loc[:x.index[-1], 'income'] * interest**(x.iloc[-1]-x)))
)
Output:
age income sumproduct
0 0 5 5.000000
1 5 13 18.796370
2 10 23 44.790145
3 15 18 69.924054
4 20 12 93.061143
A:
Here is one way you could use a list comprehension to calculate the sumproduct for each value of age in your DataFrame:
sumproducts = [income[x] + income[x+1] * (age[x] - age[x+1]) + income[x+2] * (age[x] - age[x+2]) + income[x+3] * (age[x] - age[x+3]) for x in range(len(df["age"]))]
In this list comprehension, we are using the current value of x in the range (which corresponds to the current value of age) to determine the values of age and income to use in the calculation.
Alternatively, you could use the apply() method to apply a custom function to each row in the DataFrame, which would allow you to perform the calculation without using a list comprehension. The function would take a row of the DataFrame as input and return the sumproduct for that row. Here is an example:
def calculate_sumproduct(row):
x = row["age"]
income = row["income"]
return income[x] + income[x+1] * (age[x] - age[x+1]) + income[x+2] * (age[x] - age[x+2]) + income[x+3] * (age[x] - age[x+3])
sumproducts = df.apply(calculate_sumproduct, axis=1)
You can then use the resulting series of sumproducts to add a new column to your DataFrame if you want.
df["sumproduct"] = sumproducts
| specific sumproduct list comprehension in pandas | suppose i have a dataframe
df = pd.DataFrame({"age" : [0, 5, 10, 15, 20], "income": [5, 13, 23, 18, 12]})
age income
0 0 5
1 5 13
2 10 23
3 15 18
4 20 12
i want to iterate through df["income"] and calculate the sumproduct as follows (example for age 15): 18+23*(15-10)+13*(15-5)+5*(15-0) = 338.
more generic: income[3] + income[2] * ( age[3] - age[2] ) + income[1] * ( age[3] - age[1] ) + income[0] * (age[3] - age[0] )
I am struggling to formulate the age relative to the current iteration of age ( age[x] - age[y] ) in a generic way to use in a list comprehension or formula.
edit: the actual operation I want to apply is
income[3 ] + income[2]* interest ** ( age[3] - age[2] ) + income[1]*interest ** (age[3] - age[1] ...
exampe from above: 18+23*1.03 ** (15-10)+13*1.03 ** (15-5)+5*1.03 **(15-0) = 69,92
interest = 1.03
ANSWERED thanks to jezrael & mozway
| [
"Numpy solution - you can use broadcasting for avoid loops for improve performance:\ndf = pd.DataFrame({\"age\" : [0, 5, 10, 15, 20], \"income\": [5, 13, 23, 18, 12]})\n\ninterest = 1.03\nage = df['age'].to_numpy()\n\nUse power with subtracted values of mask:\narr = interest ** (age[:, None] - age ) \nprint (arr)\n[[1. 0.86260878 0.74409391 0.64186195 0.55367575]\n [1.15927407 1. 0.86260878 0.74409391 0.64186195]\n [1.34391638 1.15927407 1. 0.86260878 0.74409391]\n [1.55796742 1.34391638 1.15927407 1. 0.86260878]\n [1.80611123 1.55796742 1.34391638 1.15927407 1. ]]\n\nThen set 0 to upper triangle:\narr = np.where(np.triu(np.ones(arr.shape, dtype=bool)), 0, arr)\nprint (arr)\n[[0. 0. 0. 0. 0. ]\n [1.15927407 0. 0. 0. 0. ]\n [1.34391638 1.15927407 0. 0. 0. ]\n [1.55796742 1.34391638 1.15927407 0. 0. ]\n [1.80611123 1.55796742 1.34391638 1.15927407 0. ]]\n\nSet 1 to diagonal:\nnp.fill_diagonal(arr, 1)\nprint (arr)\n[[1. 0. 0. 0. 0. ]\n [1.15927407 1. 0. 0. 0. ]\n [1.34391638 1.15927407 1. 0. 0. ]\n [1.55796742 1.34391638 1.15927407 1. 0. ]\n [1.80611123 1.55796742 1.34391638 1.15927407 1. ]]\n\nMultiple by column income and sum per rows:\nprint (arr * df['income'].to_numpy())\n[[ 5. 0. 0. 0. 0. ]\n [ 5.79637037 13. 0. 0. 0. ]\n [ 6.7195819 15.07056297 23. 0. 0. ]\n [ 7.78983708 17.47091293 26.66330371 18. 0. ]\n [ 9.03055617 20.25357642 30.91007672 20.86693334 12. ]]\n\n\ndf['new'] = (arr * df['income'].to_numpy()).sum(axis=1)\nprint (df) age income new\n0 0 5 5.000000\n1 5 13 18.796370\n2 10 23 44.790145\n3 15 18 69.924054\n4 20 12 93.061143\n\nPerformance: For 5k rows, applyare loops under the hood, so slow (best avoid it)\ndf = pd.DataFrame({\"age\" : [0, 5, 10, 15, 20], \"income\": [5, 13, 23, 18, 12]})\ndf = pd.concat([df] * 1000, ignore_index=True)\n\n\nIn [292]: %%timeit\n ...: age = df['age'].to_numpy()\n ...: \n ...: arr = interest ** (age[:, None] - age ) \n ...: arr = np.where(np.triu(np.ones(arr.shape, dtype=bool)), 0, arr)\n ...: np.fill_diagonal(arr, 1)\n ...: df['new'] = (arr * df['income'].to_numpy()).sum(axis=1)\n ...: \n1.39 s ± 69.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn [293]: %%timeit\n ...: df['sumproduct'] = (df['age'].expanding().apply(lambda x: sum(df.loc[:x.index[-1], 'income'] * interest**(x.iloc[-1]-x))))\n ...: \n5.13 s ± 411 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n",
"You can rewrite 18+(18+23+13+5)*15-(23+13+5)*(10+5+0) to be 18+(18+23+13+5)*15-(23+13+5)*(10+5+0):\nThe general formula is thus:\nsumproduct(n) = (income\n + (n-1)*sum(age[:n-1]*income[:n-1])\n - sum(age[:n-1]*income[:n-1])\n )\n\nAs code:\ndf['sumproduct'] = (df['income']\n .add(df['age'].mul(df['income'].cumsum().shift(fill_value=0)))\n .sub(df['age'].mul(df['income']).cumsum().shift(fill_value=0))\n)\n\noutput:\n age income sumproduct\n0 0 5 5\n1 5 13 38\n2 10 23 138\n3 15 18 338\n4 20 12 627\n\npower\npowers are more complex as you cannot directly factorize, you can however rewrite the operation with expanding:\ndf['sumproduct'] = (df['age'].expanding()\n .apply(lambda x: sum(df.loc[:x.index[-1], 'income'] * interest**(x.iloc[-1]-x)))\n)\n\nOutput:\n age income sumproduct\n0 0 5 5.000000\n1 5 13 18.796370\n2 10 23 44.790145\n3 15 18 69.924054\n4 20 12 93.061143\n\n",
"Here is one way you could use a list comprehension to calculate the sumproduct for each value of age in your DataFrame:\nsumproducts = [income[x] + income[x+1] * (age[x] - age[x+1]) + income[x+2] * (age[x] - age[x+2]) + income[x+3] * (age[x] - age[x+3]) for x in range(len(df[\"age\"]))]\n\nIn this list comprehension, we are using the current value of x in the range (which corresponds to the current value of age) to determine the values of age and income to use in the calculation.\nAlternatively, you could use the apply() method to apply a custom function to each row in the DataFrame, which would allow you to perform the calculation without using a list comprehension. The function would take a row of the DataFrame as input and return the sumproduct for that row. Here is an example:\ndef calculate_sumproduct(row):\n x = row[\"age\"]\n income = row[\"income\"]\n return income[x] + income[x+1] * (age[x] - age[x+1]) + income[x+2] * (age[x] - age[x+2]) + income[x+3] * (age[x] - age[x+3])\n\nsumproducts = df.apply(calculate_sumproduct, axis=1)\n\nYou can then use the resulting series of sumproducts to add a new column to your DataFrame if you want.\ndf[\"sumproduct\"] = sumproducts\n\n"
] | [
2,
1,
0
] | [] | [] | [
"list_comprehension",
"pandas",
"python",
"sumproduct"
] | stackoverflow_0074655392_list_comprehension_pandas_python_sumproduct.txt |
Q:
AttributeError: 'InputStream' object has no attribute 'decode' using pandas_read_xml to flatten xml
I'm still a beginner to this, but I will try to explain my problem as coherently as I can.
In case you're not familiar with Azure Cloud programming, I have a "blob trigger" where this script runs or triggers when a file is uploaded into a container in Azure. When this script triggers it passes an InputStream object to the function:
def main(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
My problem lies when passing this InputStream object to a pandas_read_xml method.
import pandas_read_xml as pdx
df = pdx.read_xml(myblob)
df = pdx.fully_flatten(df)
The goal here is to pass an xml file to a dataframe and then flatten the xml so that I can get all of the data inside of the XML. This works when the file can be found locally on my own machine, but when I go to pass the InputStream object "myblob" to the read_xml() method I get this error:
AttributeError: 'InputStream' object has no attribute 'decode'
I've also tried downloading the blob to memory and pass that to the method like so:
#Connect to storage container/ download blob
container_str_url = 'REDACTED'
container_client = ContainerClient.from_container_url(container_str_url)
# blob client accessing specific blob
blob_client = container_client.get_blob_client(blob= blob_name)
#download blob into memory
stream_downloader = blob_client.download_blob()
stream = BytesIO()
stream_downloader.readinto(stream)
df = pdx.read_xml(stream)
df = pdx.fully_flatten(df)
but this also doesn't work. Any idea on how I can use this library within this context? I think it works perfectly based on what I'm seeing whenever I use it on local files, so I would love to find a way to use it here as well.
A:
AttributeError: 'InputStream' object has no attribute 'decode'
In general, this error occurs when you decode an already decoded string. If your Azure Functions Python version is 3.x, then there is no need to decode.
If it is still throwing the decode error, the datatype of that object is not what is expected, so you should handle both the encoding and the decoding of the input stream object stream_downloader you have defined in the code.
Encoding should be done in UTF-8 or any other required format, such as binary.
Also, function.json should declare the datatype of the input stream object (the blob file) in the Azure Functions Python project.
Sample Code Snippet:
{
"name": "inputblob",
"dataType": "binary",
"type": "blob",
"direction": "in",
"path": "blobdata/blobfile.xml",
"connection": "blobcontainer_conn_str"
},
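For illustration, a minimal sketch of the decode step described above; whether pandas_read_xml accepts a raw XML string directly or needs a file path depends on the library version, so the temporary-file detour is an assumption:
import tempfile
import azure.functions as func
import pandas_read_xml as pdx

def main(myblob: func.InputStream):
    xml_text = myblob.read().decode("utf-8")  # decode the stream once; Python 3 strings need no further decoding
    # write to a temporary file in case the parser only accepts paths (assumption)
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as tmp:
        tmp.write(xml_text)
        path = tmp.name
    df = pdx.read_xml(path)
    df = pdx.fully_flatten(df)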
| AttributeError: 'InputStream' object has no attribute 'decode' using pandas_read_xml to flatten xml | I'm still a beginner to this, but I will try to explain my problem as coherently as I can.
In case you're not familiar with Azure Cloud programming, I have a "blob trigger" where this script runs or triggers when a file is uploaded into a container in Azure. When this script triggers it passes an InputStream object to the function:
def main(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
My problem lies when passing this InputStream object to a pandas_read_xml method.
import pandas_read_xml as pdx
df = pdx.read_xml(myblob)
df = pdx.fully_flatten(df)
The goal here is to pass an xml file to a dataframe and then flatten the xml so that I can get all of the data inside of the XML. This works when the file can be found locally on my own machine, but when I go to pass the InputStream object "myblob" to the read_xml() method I get this error:
AttributeError: 'InputStream' object has no attribute 'decode'
I've also tried downloading the blob to memory and pass that to the method like so:
#Connect to storage container/ download blob
container_str_url = 'REDACTED'
container_client = ContainerClient.from_container_url(container_str_url)
blob client accessing specific blob
blob_client = container_client.get_blob_client(blob= blob_name)
#download blob into memory
stream_downloader = blob_client.download_blob()
stream = BytesIO()
stream_downloader.readinto(stream)
df = pdx.read_xml(stream)
df = pdx.fully_flatten(df)
but this also doesn't work. Any idea on how I can use this library within this context? I think it works perfectly based off what I'm seeing whenever I use it on local files, I would love to find a way to use it here as well.
| [
"\nAttributeError: 'InputStream' object has no attribute 'decode'\n\nIn General, this error will occur because of decoding the already decoded string. If your Azure Functions Python Version is 3.X, then no need to decode.\nIf it is throwing the decode error, there is some error in datatype of that object so you should do both encoding and decoding for that input stream object stream_downloader you have defined in the code.\nEncoding should be done in UTF-8 format or any other required format like binary, etc.\nAlso, function.json should contain the datatype of the input stream object (blob file) in Azure Functions Python project.\nSample Code Snippet:\n{\n \"name\": \"inputblob\",\n \"dataType\": \"binary\",\n \"type\": \"blob\",\n \"direction\": \"in\",\n \"path\": \"blobdata/blobfile.xml\",\n \"connection\": \"blobcontainer_conn_str\"\n },\n\n\n"
] | [
0
] | [] | [] | [
"azure_functions",
"dataframe",
"pandas",
"python",
"xml"
] | stackoverflow_0074539979_azure_functions_dataframe_pandas_python_xml.txt |
Q:
Upload files and resume in azure blob storage python
Below is the code for uploading files in chunks:
azure_container = "dummy-container"
file_path = "test.txt"
chunk_size=4*1024*1024
blob_service_client = BlobServiceClient.from_connection_string(azure_connection_string)
blob_client = blob_service_client.get_blob_client(container=azure_container, blob="testingfile.txt")
test_main = []
with open(file_path, 'rb') as datax:
#while True:
chunk_data = datax.read(chunk_size)
print(len(chunk_data))
#chunk_data = [str(chunk_data, 'utf-8').split("\r")]
# for q in chunk_data[0]:
# time.sleep(0.5)
# print(q.strip())
#print(chunk_data)
blob_client.upload_blob(chunk_data, overwrite=True)
I want to resume the upload if something goes wrong during uploading. For that I am using chunks of data and recording the chunks already sent so I can continue from where it left off. But how do I upload without overwriting? In other words, how do I continue uploading the same file after an interruption?
A:
For this "async" and "await" is working fine. But if there is better solution then please post.
async def uploadin_files(file_path):
for files_to_upload in file_path:
time.sleep(2)
blob_client = blob_service_client.get_blob_client(container=azure_container, blob=files_to_upload)
with open(files_to_upload, 'rb') as datax:
chunk_data_original = datax.read()
blob_client.upload_blob(chunk_data_original, overwrite=True)
print(files_to_upload,"Done uploading")
await uploadin_files(file_path_main)
It works even after disconnecting the network for some time and connecting back again!
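For reference, the same SDK also supports a resumable pattern without overwriting: stage each chunk as a block and commit the block list once everything is there, so a restart only needs to re-stage the missing blocks. This sketch reuses file_path, chunk_size and blob_client from the question; the block-ID bookkeeping is an assumption, not part of the answer above:
import base64, uuid
from azure.storage.blob import BlobBlock

block_ids = []
with open(file_path, "rb") as data:
    while True:
        chunk = data.read(chunk_size)
        if not chunk:
            break
        block_id = base64.b64encode(uuid.uuid4().hex.encode()).decode()
        blob_client.stage_block(block_id=block_id, data=chunk)  # staged blocks stay uncommitted
        block_ids.append(block_id)
# nothing becomes visible (or overwritten) until the block list is committed
blob_client.commit_block_list([BlobBlock(block_id=b) for b in block_ids])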
| Upload files and resume in azure blob storage python | Below is the code for uploading files in chunks:
azure_container = "dummy-container"
file_path = "test.txt"
chunk_size=4*1024*1024
blob_service_client = BlobServiceClient.from_connection_string(azure_connection_string)
blob_client = blob_service_client.get_blob_client(container=azure_container, blob="testingfile.txt")
test_main = []
with open(file_path, 'rb') as datax:
#while True:
chunk_data = datax.read(chunk_size)
print(len(chunk_data))
#chunk_data = [str(chunk_data, 'utf-8').split("\r")]
# for q in chunk_data[0]:
# time.sleep(0.5)
# print(q.strip())
#print(chunk_data)
blob_client.upload_blob(chunk_data, overwrite=True)
I want to resume the upload if something happens in uploading, for that im using the chunks of data and recording the chunk data to continue from the left out but how to upload without overwriting ? in another word continuing to upload same file after discontinuation.
| [
"For this \"async\" and \"await\" is working fine. But if there is better solution then please post.\nasync def uploadin_files(file_path):\n for files_to_upload in file_path:\n time.sleep(2)\n blob_client = blob_service_client.get_blob_client(container=azure_container, blob=files_to_upload)\n with open(files_to_upload, 'rb') as datax:\n chunk_data_original = datax.read()\n blob_client.upload_blob(chunk_data_original, overwrite=True)\n print(files_to_upload,\"Done uploading\")\n\nawait uploadin_files(file_path_main)\n\nIt works even after disconnecting network for sometimes and connecting back again !\n"
] | [
0
] | [] | [] | [
"azure_active_directory",
"azure_blob_storage",
"azure_files",
"chunks",
"python"
] | stackoverflow_0074653289_azure_active_directory_azure_blob_storage_azure_files_chunks_python.txt |
Q:
Google Cloud Function: Access folders in a Google Storage bucket and process files from them
I'm fairly new to GCP and I'm trying to write a Google Cloud Function that would be triggered by a new file once it appears in a bucket. I made this work, but the Cloud Function is also supposed to perform another action: it needs to access a folder which is already present in the bucket (that means, before the trigger happens). This folder is called data_folders and contains more folders -- 0_42_ten, 1_42_ten, 2_42_ten; from data_folders I want to access the 1_42_ten one so I can load files from it.
My problem is I cannot set up the Cloud Function in a way it would "see" the data_folders folder.
Here's the code I'm using (I have re-written the code to focus on accessing the "old" folder only):
from google.cloud import storage
import glob
def hello_gcs(event, context):
"""Triggered by a change to a Cloud Storage bucket.
Args:
event (dict): Event payload.
context (google.cloud.functions.Context): Metadata for the event.
"""
file = event
print("Function triggered")
storage_client = storage.Client()
bucket = storage_client.bucket("bucket_name")
blob = bucket.blob("bucket_name/data_folders")
print(blob)
def list_folders():
path = 'bucket_name/data_folders'
list_of_folders = glob.glob(path)
sorted_folders = sorted(list_of_folders)
print(sorted_folders)
list_folders()
The Cloud Function gets deployed successfully but then (when I test it using the "testing" tab and passing {"name": "data_folders"} as an input) it returns an empty list (instead of ['0_42_ten', '1_42_ten', '2_42_ten'] or so), suggesting it doesn't "see" the data_folders nor the folders in it. I tried to play with the format of the path (e.g. gs://bucket_name/data_folders, bucket_name/data_folders/*, etc) but nothing worked.
Could someone please advise me on how to solve this?
A:
As mentioned in the comments listing using glob does not work as it intended for local filesystem.
Thus the minimal example to list objects underneath some virtual folder in object storage on GCS (Google cloud storage) might look like this:
#!/usr/bin/env python
from google.cloud import storage
import os
BUCKET_NAME = os.getenv("BUCKET_NAME", "the-bucket")
BUCKET_PATH = os.getenv("BUCKET_PATH", "the-path")
print(f"BUCKET_NAME={BUCKET_NAME} BUCKET_PATH={BUCKET_PATH}")
print(f"via CLI: gsutil ls -l gs://{BUCKET_NAME}/{BUCKET_PATH}/")
storage_client = storage.Client()
bucket = storage_client.bucket(BUCKET_NAME)
content_list = list(bucket.list_blobs(prefix=f"{BUCKET_PATH}/"))
print(content_list)
Please note: if the specified folder does not directly contain the objects (blobs), then on GCS this listing still finds them further down the tree, possibly at another level of folders.
So if an object stored there has a path like:
gs://the-bucket/the-path/sub-folder/object.csv.gz
it still gets listed by bucket.list_blobs(prefix="the-path/").
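Applied to the question's layout, a short sketch (bucket and folder names taken from the question; loading each file into memory is an assumption about what "load files" means):
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket("bucket_name")
# list only what sits under data_folders/1_42_ten/
for blob in bucket.list_blobs(prefix="data_folders/1_42_ten/"):
    data = blob.download_as_bytes()  # the object's contents as bytes
    print(blob.name, len(data))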
| Google Cloud Function: Access folders in a Google Storage bucket and process files from them | I'm fairly new to GCP and I'm trying to write a Google Cloud Function that would be triggered by a new file once it appears in a bucket. I made this work, but the thing is that the Cloud Function is also supposed to perform another action: it needs to access a folder which is already present the bucket (that means, before the trigger happens: this folder is called data_folders and contains more folders -- 0_42_ten, 1_42_ten, 2_42_ten; from the data_folders I want to access the 1_42_ten one so I can load files from it).
My problem is I cannot set up the Cloud Function in a way it would "see" the data_folders folder.
Here's the code I'm using (I have re-written the code to focus on accessing the "old" folder only):
from google.cloud import storage
import glob
def hello_gcs(event, context):
"""Triggered by a change to a Cloud Storage bucket.
Args:
event (dict): Event payload.
context (google.cloud.functions.Context): Metadata for the event.
"""
file = event
print("Function triggered")
storage_client = storage.Client()
bucket = storage_client.bucket("bucket_name")
blob = bucket.blob("bucket_name/data_folders")
print(blob)
def list_folders():
path = 'bucket_name/data_folders'
list_of_folders = glob.glob(path)
sorted_folders = sorted(list_of_folders)
print(sorted_folders)
list_folders()
The Cloud Function gets deployed successfully but then (when I test it using the "testing" tab and passing {"name": "data_folders"} as an input) it returns an empty list (instead of ['0_42_ten', '1_42_ten', '2_42_ten'] or so), suggesting it doesn't "see" the data_folders nor the folders in it. I tried to play with the format of the path (e.g. gs://bucket_name/data_folders, bucket_name/data_folders/*, etc) but nothing worked.
Could someone please advise me on how to solve this?
| [
"As mentioned in the comments listing using glob does not work as it intended for local filesystem.\nThus the minimal example to list objects underneath some virtual folder in object storage on GCS (Google cloud storage) might look like this:\n#!/usr/bin/env python\n\nfrom google.cloud import storage\nimport os\n\nBUCKET_NAME = os.getenv(\"BUCKET_NAME\", \"the-bucket\")\nBUCKET_PATH = os.getenv(\"BUCKET_PATH\", \"the-path\")\n\nprint(f\"BUCKET_NAME={BUCKET_NAME} BUCKET_PATH={BUCKET_PATH}\")\nprint(f\"via CLI: gsutil ls -l gs://{BUCKET_NAME}/{BUCKET_PATH}/\")\n\nstorage_client = storage.Client()\nbucket = storage_client.bucket(BUCKET_NAME)\ncontent_list = list(bucket.list_blobs(prefix=f\"{BUCKET_PATH}/\"))\nprint(content_list)\n\nPlease note: that if the specified folder does not directly contain the objects (or blobs) then this function in case of GCS finds those down in the tree. That might be eventually in another level of folders.\nSo if the objects store there are having paths like:\ngcs://the-bucket/the-path/sub-folder/object.csv.gz\nThis file gets listed by bucket.list_blobs(prefix=\"the-path/\")\n"
] | [
0
] | [] | [] | [
"google_cloud_functions",
"google_cloud_platform",
"python"
] | stackoverflow_0074645979_google_cloud_functions_google_cloud_platform_python.txt |
Q:
RTSP on Django using OpenCV
I need to stream a surveillance camera onto a Django based website. I found a tutorial on Youtube but it is very simple. Down below is my code:
from django.shortcuts import render
from django.views.decorators import gzip
from django.http import StreamingHttpResponse
import cv2
# Create your views here.
@gzip.gzip_page
def Home(request):
try:
cam = videoCamera()
return StreamingHttpResponse(gen(cam), content_type="multipart/x-mixed-replace;boundary=frame")
except:
pass
return render(request, 'index.html')
# Video Capture
class videoCamera(object):
video = cv2.VideoCapture("[user]:[password]@rtsp://[ip-address]:554/sub")
while True:
_, frame = video.read()
cv2.imshow("RTSP", frame)
k = cv2.waitKey(10)
if k == ord('q'):
break
video.release()
cv2.destroyAllWindows()
However, I encounter an error:
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:967: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
Can somebody help me out?
A:
Try this for the VideoCamera class; don't forget to import threading:
class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture("put your video link here")
        (self.grabbed, self.frame) = self.video.read()
        # keep grabbing frames in a background thread so reads never block the response
        threading.Thread(target=self.update, args=()).start()

    def __del__(self):
        self.video.release()

    def get_frame(self):
        image = self.frame
        _, jpeg = cv2.imencode('.jpg', image)  # '.jpg' (no leading space) or imencode raises an error
        return jpeg.tobytes()

    def update(self):
        while True:
            (self.grabbed, self.frame) = self.video.read()
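The question's view also references a gen() helper; for completeness, a minimal sketch of that generator and the streaming view that pair with this class (names follow the question, so this is an assumption about the missing piece):
def gen(camera):
    # yield an endless multipart stream of JPEG frames
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@gzip.gzip_page
def Home(request):
    try:
        return StreamingHttpResponse(gen(VideoCamera()),
                                     content_type="multipart/x-mixed-replace;boundary=frame")
    except Exception:
        pass
    return render(request, 'index.html')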
| RTSP on Django using OpenCV | I need to stream a surveillance camera onto a Django based website. I found a tutorial on Youtube but it is very simple. Down below is my code:
from django.shortcuts import render
from django.views.decorators import gzip
from django.http import StreamingHttpResponse
import cv2
# Create your views here.
@gzip.gzip_page
def Home(request):
try:
cam = videoCamera()
return StreamingHttpResponse(gen(cam), content_type="multipart/x-mixed-replace;boundary=frame")
except:
pass
return render(request, 'index.html')
# Video Capture
class videoCamera(object):
video = cv2.VideoCapture("[user]:[password]@rtsp://[ip-address]:554/sub")
while True:
_, frame = video.read()
cv2.imshow("RTSP", frame)
k = cv2.waitKey(10)
if k == ord('q'):
break
video.release()
cv2.destroyAllWindows()
However, I encounter an error:
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:967: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
Can somebody help me out?
| [
"try this in class videoCamera dont forget to import threading\nclass VideoCamera(object):\n def __init__(self):\n self.video = cv2.VideoCapture(\"put your video link here\")\n (self.grabbed , self.frame) = self.video.read()\n threading.Thread(target=self.update , args=()).start()\n \n \n def __del__(self):\n self.video.release()\n\n def get_frame(self):\n image = self.frame \n _ , jpeg = cv2.imencode(' .jpg' , image)\n return jpeg.tobytes()\n\n def update(self):\n while True:\n (self.grabbed , self.frame) = self.video.read()\n\n"
] | [
0
] | [] | [] | [
"django",
"opencv",
"python"
] | stackoverflow_0074342952_django_opencv_python.txt |
Q:
Time difference with datetime
I have a variable called df with three columns with the following data in datetime64:
start_time            end_time              extra_time
2022-12-01 09:53:02   2022-12-05 09:53:21   1 days 23:30:15
I want to add a 4th column saying that if extra_time is positive, then it's Intime. Otherwise, it's offtime.
I tried using for like this:
for extra_time in df:
if extra_time >= 0:
df['intime'] == True
else:
df['intime'] == False
And I get the following:
TypeError: '>=' not supported between instances of 'str' and 'int'
I know that my result contains string and numbers, such as:
0 1 days 23:30:15
The thing is that I'm out of ideas on how to proceed. Any thoughts?
A:
Convert to timedelta
df["extra_time"] = pd.to_timedelta(df["extra_time"]).dt.total_seconds()
df["intime"] = df["extra_time"] >= 0
output:
start_time end_time extra_time intime
0 2022-12-01 09:53:02 2022-12-05 09:53:21 171015.0 True
1 2022-12-01 10:53:02 2022-12-02 10:53:21 -1785.0 False
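If the goal is the literal 'Intime'/'offtime' label asked for in the question rather than a boolean, a small follow-up sketch (column names as in the question) could be:
import numpy as np
import pandas as pd

df["extra_time"] = pd.to_timedelta(df["extra_time"])
df["intime"] = np.where(df["extra_time"] >= pd.Timedelta(0), "Intime", "offtime")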
| Time difference with datetime | I have a variable called df with three colums with the following data in datetime64:
start_time
end_time
extra_time
2022-12-01 09:53:02
2022-12-05 09:53:21
1 days 23:30:15
I want to add a 4th column saying that if extra_time is positive, then it's Intime. Otherwise, it's offtime.
I tried using for like this:
for extra_time in df:
if extra_time >= 0:
df['intime'] == True
else:
df['intime'] == False
And I get the following:
TypeError: '>=' not supported between instances of 'str' and 'int'
I know that my result contains string and numbers, such as:
0 1 days 23:30:15
The thing is that I'm out of ideas on how to proceed. Any thoughts?
| [
"Convert to timedelta\ndf[\"extra_time\"] = pd.to_timedelta(df[\"extra_time\"]).dt.total_seconds()\ndf[\"intime\"] = df[\"extra_time\"] >= 0\n\noutput:\n start_time end_time extra_time intime\n0 2022-12-01 09:53:02 2022-12-05 09:53:21 171015.0 True\n1 2022-12-01 10:53:02 2022-12-02 10:53:21 -1785.0 False\n\n"
] | [
2
] | [] | [] | [
"datetime",
"pandas",
"python",
"timedelta"
] | stackoverflow_0074655775_datetime_pandas_python_timedelta.txt |
Q:
Print nested list in a particular way
I have this list:
top_list = [['Recent news', '52', '15.1'], ['Godmorning', '5', '1.5'], ['Sports news', '47', '13.7'], ['Report with weather', '34', '9.9'], ['The angel and the lawless', '33', '9.6'], ['Thundercats', '3', '0.9'], ["Mother's legacy", '3', '0.9'], ['UR: Summer evenings with Europe of the Times', '3', '0.9'], ['Tip', '20', '5.8'], ['Florida Straits', '2', '0.6']]
It contains sublists with ['Program name', 'viewers', 'percentage']
I want to print it like this:
------------------ Top-10-list ----------------
1. Recent news, 52 viewers, (15.1%)
2. Godmorning, 5 viewers, (1.5%)
3. Sports news, 47 viewers, (13.7%)
4. Report with weather, 34 viewers, (9.9%)
5. The angel and the lawless, 33 viewers, (9.6%)
6. Thundercats, 3 viewers, (0.9%)
7. Mother's legacy, 3 viewers, (0.9%)
8. UR: Summer evenings with Europe of the Times 3 viewers, (0.9%)
9. Tip, 20 viewers, (5.8%)
10. Florida Straits, 2 viewers, (0.6%)
I have tried this so far, but I don't know how to combat the problem. Is there a more efficient way/ some way to print out the list with programs?
index_top_ten = 1 #index for list (starts at 1)
print('')
print("------------------ Top-10-list ----------------") #header
line =''
for sublist in top_list:
line += str(index_top_ten) +'.'+' ' #add index and dot
    for index in sublist:
if index[1]=='.' or index[2]=='.' or index[3]=='.': #separate the percentage
line += '(' + convert + '%' + ')' +' '
else:
line += str(index) + ' '
index_top_ten += 1
This other one is somewhat working, but I don't know how to add the percentage sign:
[print(*x) for x in top_list][0] # one-line print
All help is much appreciated!
A:
To format the list of programs as shown in your example, you can use a for loop to iterate over the elements in the top_list array, and print each program's name, viewer count, and percentage in the desired format.
Here is an example of how you could implement this:
top_list = [['Recent news', '52', '15.1'], ['Godmorning', '5', '1.5'], ['Sports news', '47', '13.7'], ['Report with weather', '34', '9.9'], ['The angel and the lawless', '33', '9.6'], ['Thundercats', '3', '0.9'], ["Mother's legacy", '3', '0.9'], ['UR: Summer evenings with Europe of the Times', '3', '0.9'], ['Tip', '20', '5.8'], ['Florida Straits', '2', '0.6']]
# Print the header
print("------------------ Top-10-list ----------------")
# Iterate over the elements in the top_list array
for (i, program) in enumerate(top_list):
# Extract the program name, viewer count, and percentage
name = program[0]
viewers = program[1]
percentage = program[2]
# Print the program's name, viewer count, and percentage in the desired format
print(f"{i + 1}. {name}, {viewers} viewers, ({percentage}%)")
This code will print the list of programs in the format shown in your example.
Note that the enumerate() function is used to iterate over the elements in the top_list array, and to obtain the index of each element in the array (which is used to print the program's ranking in the top-10 list). The f-strings feature is used to format the string that is printed for each program, which makes the code more concise and readable.
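A slightly more compact variant of the same idea (purely stylistic, same behaviour) unpacks each sublist directly and starts enumerate at 1:
print("------------------ Top-10-list ----------------")
for i, (name, viewers, percentage) in enumerate(top_list, start=1):
    print(f"{i}. {name}, {viewers} viewers, ({percentage}%)")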
| Print nested list in a particular way | I have this list:
top_list = [['Recent news', '52', '15.1'], ['Godmorning', '5', '1.5'], ['Sports news', '47', '13.7'], ['Report with weather', '34', '9.9'], ['The angel and the lawless', '33', '9.6'], ['Thundercats', '3', '0.9'], ["Mother's legacy", '3', '0.9'], ['UR: Summer evenings with Europe of the Times', '3', '0.9'], ['Tip', '20', '5.8'], ['Florida Straits', '2', '0.6']]
It contains sublists with ['Program name', 'viewers', 'percentage']
I want to print it like this:
------------------ Top-10-list ----------------
1. Recent news, 52 viewers, (15.1%)
2. Godmorning, 5 viewers, (1.5%)
3. Sports news, 47 viewers, (13.7%)
4. Report with weather, 34 viewers, (9.9%)
5. The angel and the lawless, 33 viewers, (9.6%)
6. Thundercats, 3 viewers, (0.9%)
7. Mother's legacy, 3 viewers, (0.9%)
8. UR: Summer evenings with Europe of the Times 3 viewers, (0.9%)
9. Tip, 20 viewers, (5.8%)
10. Florida Straits, 2 viewers, (0.6%)
I have tried this so far, but I don't know how to combat the problem. Is there a more efficient way/ some way to print out the list with programs?
index_top_ten = 1 #index for list (starts at 1)
print('')
print("------------------ Top-10-list ----------------") #header
line =''
for sublist in top_list:
line += str(index_top_ten) +'.'+' ' #add index and dot
for index in sublist
if index[1]=='.' or index[2]=='.' or index[3]=='.': #separate the percentage
line += '(' + convert + '%' + ')' +' '
else:
line += str(index) + ' '
index_top_ten += 1
This is other one is somewhat working but i don't know how to add the percentage icon:
[print(*x) for x in top_list][0] # one-line print
All help is much appreciated!
| [
"To format the list of programs as shown in your example, you can use a for loop to iterate over the elements in the top_list array, and print each program's name, viewer count, and percentage in the desired format.\nHere is an example of how you could implement this:\ntop_list = [['Recent news', '52', '15.1'], ['Godmorning', '5', '1.5'], ['Sports news', '47', '13.7'], ['Report with weather', '34', '9.9'], ['The angel and the lawless', '33', '9.6'], ['Thundercats', '3', '0.9'], [\"Mother's legacy\", '3', '0.9'], ['UR: Summer evenings with Europe of the Times', '3', '0.9'], ['Tip', '20', '5.8'], ['Florida Straits', '2', '0.6']]\n\n# Print the header\nprint(\"------------------ Top-10-list ----------------\")\n\n# Iterate over the elements in the top_list array\nfor (i, program) in enumerate(top_list):\n # Extract the program name, viewer count, and percentage\n name = program[0]\n viewers = program[1]\n percentage = program[2]\n\n # Print the program's name, viewer count, and percentage in the desired format\n print(f\"{i + 1}. {name}, {viewers} viewers, ({percentage}%)\")\n\nThis code will print the list of programs in the format shown in your example.\nNote that the enumerate() function is used to iterate over the elements in the top_list array, and to obtain the index of each element in the array (which is used to print the program's ranking in the top-10 list). The f-strings feature is used to format the string that is printed for each program, which makes the code more concise and readable.\n"
] | [
0
] | [] | [] | [
"list",
"nested",
"printing",
"python",
"string"
] | stackoverflow_0074655847_list_nested_printing_python_string.txt |
Q:
How to draw only outer edges on a 3D shape
I am working on a physics simulator that takes in a bunch of mass coordinates and applies spring forces to each mass. I'm trying to draw only outer edges of the 3D shape. In a simple case, the shape is at first a cube, and then because of gravity and spring forces the cube deforms. I'm sort of able to draw the outer edges on the non-deformed cube, but only as far as this:
I am still missing most of the edges and as soon as deformation begins I lose all the edges. My code that's being used to draw these edges is as follows:
shaded_plotting = []
m1 = mass_vertices[0]
j = 0
for i in range(len(mass_vertices)):
for j in range(1, len(mass_vertices)):
m2 = mass_vertices[j]
if (m1[0] != m2[0] and m1[1] == m2[1] and m1[2] == m2[2]) or \
(m1[0] == m2[0] and m1[1] != m2[1] and m1[2] == m2[2]) or \
(m1[0] == m2[0] and m1[1] == m2[1] and m1[2] != m2[2]):
if (list(m2), list(m1)) not in shaded_plotting:
shaded_plotting.append((list(m1), list(m2)))
m1 = mass_vertices[j]
shaded_plotting = np.array(shaded_plotting)
shaded_plotting.shape = (len(shaded_plotting) * 2, 3)
print(shaded_plotting)
return M, S, shaded_plotting
I compare two points, and if there is only one different coordinate (x, y, or z), I draw a line between the two points and back to my beginning point. mass_vertices is my coordinates for the cube. Then this code is run again when the coordinates change (when deformations through gravity or spring forces take place).
Of course, as soon as various deformations happen, the above code is completely useless, as points will no longer at any time have only one different coordinate to another point. Mostly, I just don't see how I can a) simplify the code so the lines aren't drawn so often, and b) rewrite the code so that it would still only draw 12 edge lines for any arbitrary shape.
I have some ideas that might work, but I don't have too much of an idea on how to implement them. I think if I kept track of each mass and labeled it somehow, I would be able to keep the lines between each mass better.
I am able to draw the cube and its deformations with lines connecting every point, but it's much harder to see what's going on in that case. The cube looks like this when all lines are shown:
And the code for this is as follows:
class Mass:
def __init__(self, m, rho, v, a):
self.m = m
self.rho = rho
self.v = v
self.a = a
class Spring:
def __init__(self, k, L_0, L, m1, m2):
self.k = k
self.L_0 = L_0
self.L = L
self.m1 = m1
self.m2 = m2
M = []
S = []
for_plotting_S = []
count = 0
for i in range(len(mass_vertices)):
M.append(Mass(m, mass_vertices[i], v, a))
for i in range(len(mass_vertices)):
for j in range(1, len(mass_vertices)):
m1 = M[i].rho
m2 = M[j].rho
original_m1 = Masses[i].rho
original_m2 = Masses[j].rho
L = np.linalg.norm(m1 - m2)
L0 = np.linalg.norm(original_m1 - original_m2)
if L != 0 and ((list(m2), list(m1)) not in for_plotting_S):
S.append(Spring(k[count], L0, L, m1, m2))
for_plotting_S.append((list(m1), list(m2)))
count += 1
for_plotting_S = np.array(for_plotting_S)
for_plotting_S.shape = (len(for_plotting_S) * 2, 3)
for_plotting_S.shape = (len(for_plotting_S) * 2, 3)
I am not very well versed in classes, so I am pretty certain that the class initialization above could be better utilized.
Thank you for reading, any help would be greatly appreciated.
A:
I don't know what a deformed cube is, but once you have the vertices of a convex polyhedron, you can get and plot its edges as follows.
To get the convex hull and its edges, use pycddlib (the cdd library):
# -*- coding: utf-8 -*-
import numpy as np
import cdd as pcdd
import matplotlib.pyplot as plt
# vertices
points= np.array([
[0,0,0],
[4,0,0],
[4,4,0],
[0,4,0],
[0,0,4],
[4,0,4],
[4,4,4],
[0,4,4]
])
# to get the convex hull with cdd, one has to prepend a column of ones
vertices = np.hstack((np.ones((8,1)), points))
# do the polyhedron
mat = pcdd.Matrix(vertices, linear=False, number_type="fraction")
mat.rep_type = pcdd.RepType.GENERATOR
poly = pcdd.Polyhedron(mat)
# get the adjacent vertices of each vertex
adjacencies = [list(x) for x in poly.get_input_adjacency()]
# store the edges in a matrix (giving the indices of the points)
edges = []
for i,indices in enumerate(adjacencies[:-1]):
indices = list(filter(lambda x: x>i, indices))
l = len(indices)
col1 = np.full((l, 1), i)
indices = np.reshape(indices, (l, 1))
    edges.append(np.hstack((col1, indices)))  # list.append returns None, so don't reassign
Edges = np.vstack(tuple(edges))
# plot
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
start = points[Edges[:,0]]
end = points[Edges[:,1]]
for i in range(12):
ax.plot(
[start[i,0], end[i,0]],
[start[i,1], end[i,1]],
[start[i,2], end[i,2]],
"blue"
)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_xlim3d(-1,5)
ax.set_ylim3d(-1,5)
ax.set_zlim3d(-1,5)
plt.show()
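If installing pycddlib is not an option, a hedged alternative sketch with scipy.spatial.ConvexHull can recover candidate edges from the hull facets. Caveat: ConvexHull triangulates flat faces, so diagonals across square faces also appear and would need extra filtering (an undeformed cube yields 18 segments rather than 12):
import numpy as np
from scipy.spatial import ConvexHull

points = np.array(mass_vertices, dtype=float)  # current (possibly deformed) vertex coordinates
hull = ConvexHull(points)

# Collect the unique edges of the triangulated hull facets.
edges = set()
for simplex in hull.simplices:
    for a, b in zip(simplex, np.roll(simplex, -1)):
        edges.add(tuple(sorted((int(a), int(b)))))

segments = [(points[a], points[b]) for a, b in edges]  # ready to plot as line segments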
| How to draw only outer edges on a 3D shape | I am working on a physics simulator that takes in a bunch of mass coordinates and applies spring forces to each mass. I'm trying to draw only outer edges of the 3D shape. In a simple case, the shape is at first a cube, and then because of gravity and spring forces the cube deforms. I'm sort of able to draw the outer edges on the non-deformed cube, but only as far as this:
I am still missing most of the edges and as soon as deformation begins I lose all the edges. My code that's being used to draw these edges is as follows:
shaded_plotting = []
m1 = mass_vertices[0]
j = 0
for i in range(len(mass_vertices)):
for j in range(1, len(mass_vertices)):
m2 = mass_vertices[j]
if (m1[0] != m2[0] and m1[1] == m2[1] and m1[2] == m2[2]) or \
(m1[0] == m2[0] and m1[1] != m2[1] and m1[2] == m2[2]) or \
(m1[0] == m2[0] and m1[1] == m2[1] and m1[2] != m2[2]):
if (list(m2), list(m1)) not in shaded_plotting:
shaded_plotting.append((list(m1), list(m2)))
m1 = mass_vertices[j]
shaded_plotting = np.array(shaded_plotting)
shaded_plotting.shape = (len(shaded_plotting) * 2, 3)
print(shaded_plotting)
return M, S, shaded_plotting
I compare two points, and if there is only one different coordinate (x, y, or z), I draw a line between the two points and back to my beginning point. mass_vertices is my coordinates for the cube. Then this code is run again when the coordinates change (when deformations through gravity or spring forces take place).
Of course, as soon as various deformations happen, the above code is completely useless, as points will no longer at any time have only one different coordinate to another point. Mostly, I just don't see how I can a) simplify the code so the lines aren't drawn so often, and b) rewrite the code so that it would still only draw 12 edge lines for any arbitrary shape.
I have some ideas that might work, but I don't have too much of an idea on how to implement them. I think if I kept track of each mass and labeled it somehow, I would be able to keep the lines between each mass better.
I am able to draw the cube and its deformations with lines connecting every point, but it's much harder to see what's going on in that case. The cube looks like this when all lines are shown:
And the code for this is as follows:
class Mass:
def __init__(self, m, rho, v, a):
self.m = m
self.rho = rho
self.v = v
self.a = a
class Spring:
def __init__(self, k, L_0, L, m1, m2):
self.k = k
self.L_0 = L_0
self.L = L
self.m1 = m1
self.m2 = m2
M = []
S = []
for_plotting_S = []
count = 0
for i in range(len(mass_vertices)):
M.append(Mass(m, mass_vertices[i], v, a))
for i in range(len(mass_vertices)):
for j in range(1, len(mass_vertices)):
m1 = M[i].rho
m2 = M[j].rho
original_m1 = Masses[i].rho
original_m2 = Masses[j].rho
L = np.linalg.norm(m1 - m2)
L0 = np.linalg.norm(original_m1 - original_m2)
if L != 0 and ((list(m2), list(m1)) not in for_plotting_S):
S.append(Spring(k[count], L0, L, m1, m2))
for_plotting_S.append((list(m1), list(m2)))
count += 1
for_plotting_S = np.array(for_plotting_S)
for_plotting_S.shape = (len(for_plotting_S) * 2, 3)
for_plotting_S.shape = (len(for_plotting_S) * 2, 3)
I am not very well versed in classes, so I am pretty certain that the class initialization above could be better utilized.
Thank you for reading, any help would be greatly appreciated.
| [
"I don't know what is a deformed cube but once you have the vertices of a convex polyhedron, you can get and plot its edges as follows.\nTo get the convex hull and its edges, use pycddlib (the cdd library):\n# -*- coding: utf-8 -*-\nimport numpy as np\nimport cdd as pcdd\nimport matplotlib.pyplot as plt\n\n# vertices\npoints= np.array([\n [0,0,0],\n [4,0,0],\n [4,4,0],\n [0,4,0],\n [0,0,4],\n [4,0,4],\n [4,4,4],\n [0,4,4]\n])\n\n# to get the convex hull with cdd, one has to prepend a column of ones\nvertices = np.hstack((np.ones((8,1)), points))\n\n# do the polyhedron\nmat = pcdd.Matrix(vertices, linear=False, number_type=\"fraction\") \nmat.rep_type = pcdd.RepType.GENERATOR\npoly = pcdd.Polyhedron(mat)\n\n# get the adjacent vertices of each vertex\nadjacencies = [list(x) for x in poly.get_input_adjacency()]\n\n# store the edges in a matrix (giving the indices of the points)\nedges = []\nfor i,indices in enumerate(adjacencies[:-1]):\n indices = list(filter(lambda x: x>i, indices))\n l = len(indices)\n col1 = np.full((l, 1), i)\n indices = np.reshape(indices, (l, 1))\n edges = edges.append(np.hstack((col1, indices)))\nEdges = np.vstack(tuple(edges))\n\n# plot\nfig = plt.figure()\nax = fig.add_subplot(111, projection=\"3d\")\n\nstart = points[Edges[:,0]]\nend = points[Edges[:,1]]\n\nfor i in range(12):\n ax.plot(\n [start[i,0], end[i,0]], \n [start[i,1], end[i,1]], \n [start[i,2], end[i,2]],\n \"blue\"\n )\n\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nax.set_zlabel(\"z\")\n\nax.set_xlim3d(-1,5)\nax.set_ylim3d(-1,5)\nax.set_zlim3d(-1,5)\n\nplt.show()\n\n\n"
] | [
0
] | [] | [] | [
"3d",
"class",
"math",
"matplotlib",
"python"
] | stackoverflow_0074637925_3d_class_math_matplotlib_python.txt |
Q:
Flask.redirect(url_for()) not redirecting
So I know that there are many already similar question and I've browsed about 5 now.
Problem is, though, I can't seem to find a similar problem to mine. Here's the deal:
When posting my form, I get from the server:
> 127.0.0.1 - - [02/Dec/2022 10:37:53] "POST /create-artist HTTP/1.1" 302 -
> 127.0.0.1 - - [02/Dec/2022 10:37:53] "GET /artists-list HTTP/1.1" 200 -
back in the browser, the following response:
But nothing is moving and I stay in the "http://localhost:3000/create-artist" URL for some reason.
Here's my python (note that for test and learning purposes, I'm not sending anything to the db yet):
from flask import Flask, render_template, url_for, redirect, request, abort
from flask_sqlalchemy import SQLAlchemy
import os, sys
db = SQLAlchemy()
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = URI
db.init_app(app)
##########################################################
#################### CONTROLLERS #########################
##########################################################
# ---------------------------------------------
# ----------------- ARTISTS -------------------
# ---------------------------------------------
@app.route('/artists-list')
def artists_list():
return render_template('artists.html', data='test')
@app.route('/create-artist', methods = ["GET", "POST"])
def create_artist():
error = False
if request.method == "POST":
try:
artist = Artist(
name = request.get_json()['name'],
city = request.get_json()['city'],
state = request.get_json()['state'],
phone = request.get_json()['phone'],
genres = request.get_json()['genres'],
)
# db.session.add(artist)
# db.session.commit()
print(artist)
except:
error = True
# db.session.rollback()
print(sys.exc_info())
# finally:
# db.session.close()
if not error:
return redirect(url_for('artists_list'))
else:
abort(500)
return render_template('form/create-artist.html')
# --------------- END ARTISTS ------------------
@app.route("/")
def index():
return render_template('home.html')
##########################################################
###################### MODELS ############################
##########################################################
class Artist(db.Model):
__tablename__ = 'Artist'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String)
city = db.Column(db.String(120))
state = db.Column(db.String(120))
phone = db.Column(db.String(120))
genres = db.Column(db.String(120))
image_link = db.Column(db.String(500))
facebook_link = db.Column(db.String(120))
and the HTMLs:
/create-artist
{% extends "artists.html" %}
{% block title %}New artist | Fy-yur{% endblock %}
{% block content %}
<div class="container">
<h1>This is the create artist page!</h1>
<form id="artist-form" class="col-md-4">
<div class="form-group">
<label for="name">Name</label>
<input type="text" class="form-control" id="name" placeholder="New Artist">
</div>
<div class="form-group">
<label for="city">City</label>
<input type="text" class="form-control" id="city" placeholder="Artist's City">
</div>
<div class="form-group">
<label for="state">State</label>
<select class="form-control" id="state">
</select>
</div>
<div class="form-group">
<label for="phone">Phone</label>
<input type="text" class="form-control" id="phone" placeholder="Phone number">
</div>
<div class="form-group">
<label for="genres">Genre</label>
<select type="text" class="form-control" id="genres">
</select>
</div>
<button type="submit" id="submit" class="btn btn-success">Create</button>
</form>
</div>
<script>
document.getElementById('artist-form').onsubmit = e => {
e.preventDefault();
const body = {};
const formData = e.target;
for (let i = 0; i < formData.length - 1; i++) {
const currDataKey = formData[i].id;
const currDataValue = formData[i].value
body[currDataKey] = currDataValue;
}
fetch('/create-artist', {
method: 'POST',
body: JSON.stringify(body),
headers: {
'Content-Type': 'application/json'
}
})
.then(res => console.log(res))
.catch(e => console.log(e));
}
</script>
{% endblock %}
/artists-list/
{% extends "index.html" %}
{% block title %}Artists | Fy-yur{% endblock %}
{% block content %}
<h1>This is the artists page!</h1>
<div class="container">
<a href={{ url_for('create_artist') }}>
<button type="button" class="btn btn-success">
New artist
</button>
</a>
</div>
{% endblock %}
A:
You can't make the browser change page from Flask (the back end) when the form is posted through JavaScript (the front end): the fetch() call itself follows the 302 and receives the response, but the browser window never navigates. You need to redirect in JavaScript instead, based on the response you get back from the back end (for example by checking res.redirected / res.url).
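A hedged sketch of one way to act on that from the Flask side (the JSON key redirect_url is an arbitrary name chosen here, not from the question): the POST handler can hand the target URL back to the fetch() caller, which then assigns window.location.href itself.
from flask import jsonify, redirect, render_template, request, url_for

@app.route('/create-artist', methods=["GET", "POST"])
def create_artist():
    if request.method == "POST":
        # ... build and save the Artist exactly as in the question ...
        if request.is_json:
            # fetch() callers get the target URL and navigate on the client side
            return jsonify({"redirect_url": url_for('artists_list')})
        return redirect(url_for('artists_list'))
    return render_template('form/create-artist.html')
On the front end, the .then(res => ...) handler would read that JSON and set window.location.href to the returned redirect_url.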
| Flask.redirect(url_for()) not redirecting | So I know that there are many already similar question and I've browsed about 5 now.
Problem is, though, I can't seem to find a similar problem to mine. Here's the deal:
When posting my form, I get from the server:
> 127.0.0.1 - - [02/Dec/2022 10:37:53] "POST /create-artist HTTP/1.1" 302 -
> 127.0.0.1 - - [02/Dec/2022 10:37:53] "GET /artists-list HTTP/1.1" 200 -
back in the browser, the following response:
But nothing is moving and I stay in the "http://localhost:3000/create-artist" URL for some reason.
Here's my python (note that for test and learning purposes, I'm not sending anything to the db yet):
from flask import Flask, render_template, url_for, redirect, request, abort
from flask_sqlalchemy import SQLAlchemy
import os, sys
db = SQLAlchemy()
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = URI
db.init_app(app)
##########################################################
#################### CONTROLLERS #########################
##########################################################
# ---------------------------------------------
# ----------------- ARTISTS -------------------
# ---------------------------------------------
@app.route('/artists-list')
def artists_list():
return render_template('artists.html', data='test')
@app.route('/create-artist', methods = ["GET", "POST"])
def create_artist():
error = False
if request.method == "POST":
try:
artist = Artist(
name = request.get_json()['name'],
city = request.get_json()['city'],
state = request.get_json()['state'],
phone = request.get_json()['phone'],
genres = request.get_json()['genres'],
)
# db.session.add(artist)
# db.session.commit()
print(artist)
except:
error = True
# db.session.rollback()
print(sys.exc_info())
# finally:
# db.session.close()
if not error:
return redirect(url_for('artists_list'))
else:
abort(500)
return render_template('form/create-artist.html')
# --------------- END ARTISTS ------------------
@app.route("/")
def index():
return render_template('home.html')
##########################################################
###################### MODELS ############################
##########################################################
class Artist(db.Model):
__tablename__ = 'Artist'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String)
city = db.Column(db.String(120))
state = db.Column(db.String(120))
phone = db.Column(db.String(120))
genres = db.Column(db.String(120))
image_link = db.Column(db.String(500))
facebook_link = db.Column(db.String(120))
and the HTMLs:
/create-artist
{% extends "artists.html" %}
{% block title %}New artist | Fy-yur{% endblock %}
{% block content %}
<div class="container">
<h1>This is the create artist page!</h1>
<form id="artist-form" class="col-md-4">
<div class="form-group">
<label for="name">Name</label>
<input type="text" class="form-control" id="name" placeholder="New Artist">
</div>
<div class="form-group">
<label for="city">City</label>
<input type="text" class="form-control" id="city" placeholder="Artist's City">
</div>
<div class="form-group">
<label for="state">State</label>
<select class="form-control" id="state">
</select>
</div>
<div class="form-group">
<label for="phone">Phone</label>
<input type="text" class="form-control" id="phone" placeholder="Phone number">
</div>
<div class="form-group">
<label for="genres">Genre</label>
<select type="text" class="form-control" id="genres">
</select>
</div>
<button type="submit" id="submit" class="btn btn-success">Create</button>
</form>
</div>
<script>
document.getElementById('artist-form').onsubmit = e => {
e.preventDefault();
const body = {};
const formData = e.target;
for (let i = 0; i < formData.length - 1; i++) {
const currDataKey = formData[i].id;
const currDataValue = formData[i].value
body[currDataKey] = currDataValue;
}
fetch('/create-artist', {
method: 'POST',
body: JSON.stringify(body),
headers: {
'Content-Type': 'application/json'
}
})
.then(res => console.log(res))
.catch(e => console.log(e));
}
</script>
{% endblock %}
/artists-list/
{% extends "index.html" %}
{% block title %}Artists | Fy-yur{% endblock %}
{% block content %}
<h1>This is the artists page!</h1>
<div class="container">
<a href={{ url_for('create_artist') }}>
<button type="button" class="btn btn-success">
New artist
</button>
</a>
</div>
{% endblock %}
| [
"You can't redirect page through flask(back end) if posting form through JS(front end). You need to use JS redirection method for that because you are getting response in JS call where you can redirect based on response from back end.\n"
] | [
1
] | [] | [] | [
"flask",
"javascript",
"python",
"redirect"
] | stackoverflow_0074653786_flask_javascript_python_redirect.txt |
Q:
I want to convert multiple characters to Python regular expressions
I want to convert only numbers in this str
"ABC234TSY65234525erQ"
I tried to change only areas with numbers to the * sign
This is what I wanted
"ABC*TSY*erQ"
But when I actually did it, it came out like this
"ABC***TSY********erQ"
How do I change it?
Thank you!
A:
Use \d+. The + in a regular expression means "match the preceding pattern one or more times":
import re
s = "ABC234TSY65234525erQ"
s = re.sub(r'\d+', '*', s)
output:
'ABC*TSY*erQ'
A:
The re.sub() solution given by @JayPeerachi is probably the best option, but we could also use re.findall() here:
inp = "ABC234TSY65234525erQ"
output = '*'.join(re.findall(r'\D+', inp))
print(output) # ABC*TSY*erQ
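One subtle difference between the two answers worth noting: re.sub keeps a * where digits sat at the start or end of the string, while joining the \D+ runs silently drops those positions. A tiny check:
import re

s = "123ABC456"
print(re.sub(r'\d+', '*', s))           # *ABC*
print('*'.join(re.findall(r'\D+', s)))  # ABC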
| I want to convert multiple characters to Python regular expressions | I want to convert only numbers in this str
"ABC234TSY65234525erQ"
I tried to change only areas with numbers to the * sign
This is what I wanted
"ABC*TSY*erQ"
But when I actually did it, it came out like this
"ABC***TSY********erQ"
How do I change it?
Thanks you!
| [
"use \\d+. + in a regular expression means \"match the preceding character one or more times\"\nimport re\ns = re.sub(r'\\d+', '*', s)\n\noutput:\n'ABC*TSY*erQ'\n\n",
"The re.sub() solution given by @JayPeerachi is probably the best option, but we could also use re.findall() here:\ninp = \"ABC234TSY65234525erQ\"\noutput = '*'.join(re.findall(r'\\D+', inp))\nprint(output) # ABC*TSY*erQ\n\n"
] | [
1,
1
] | [] | [] | [
"python",
"regex",
"string"
] | stackoverflow_0074655880_python_regex_string.txt |
Q:
Np.where with optional condition
For my database I need to create a new column based on a condition.
In a separate file I have added all the conditions like this:
conditions = [{'year': 2016, 'price': 30000, 'fuel': 'Petrol', 'result': 12},
              {'year': 2017, 'price': 45000, 'fuel': 'Elektricity', 'result': 18},
              {'year': 2018, 'price': None, 'fuel': 'Petrol', 'result': 14}]
I am using the following code to add a new column
df['new_column'] = np.where((df['year'] == dict['year']) & (df['price'] > dict['price']) & (df['fuel description'] == dict['fuel']), dict['result'], df['new_column'])
As you see in the conditions, it is possible that one of the values is none.
This means (in this case price: none) that the formula as follows:
df['new_column'] = np.where((df['year'] == dict['year']) & (df['fuel description'] == dict['fuel']), dict['result'], df['new_column'])
It is possible that multiple conditions are none, I want to prevent working with to much if and else statements but I can't figure out a way to do something in the formula that when it is none it needs to remove the condition.
Is this possible or should I just work with:
if dict['price'] != None:
    df['new_column'] = np.where((df['year'] == dict['year']) & (df['price'] > dict['price']) & (df['fuel description'] == dict['fuel']), dict['result'], df['new_column'])
else:
    df['new_column'] = np.where((df['year'] == dict['year']) & (df['fuel description'] == dict['fuel']), dict['result'], df['new_column'])
A:
You can use reduce to separate the creation of each condition from their application; you will need an if statement for each optional condition, but not for their combinations.
from operator import and_
from functools import reduce
conditions = []
if dict['year'] != None:
conditions.append(df['year'] == dict['year'])
if dict['price'] != None:
conditions.append(df['price'] > dict['price'])
...
if len(conditions) != 0:
# similar to conditions[0] & conditions[1] & ...
df['new_column'] = np.where(reduce(and_, conditions), dict['result'], df['new_column'])
else:
...
A:
IIUC, you can use a merge_asof:
df['update_result'] = (
pd.merge_asof(df.fillna({'price': -1}).reset_index()
.sort_values(by='price').drop(columns='result'),
pd.DataFrame(conditions).fillna({'price': -1})
.sort_values(by='price'),
by=['year', 'fuel'], on='price')
.set_index('index')['result'].fillna(df['result'])
)
Used input:
year price fuel result
0 2016 456789.0 Petrol 1
1 2017 20000.0 Elektricity 2
2 2018 NaN Petrol 3
Output:
year price fuel result
0 2016 456789.0 Petrol 12.0
1 2017 20000.0 Elektricity 2.0
2 2018 NaN Petrol 14.0
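Pulling the first answer's idea together over the whole conditions list, a hedged end-to-end sketch might look like this (the helper name apply_conditions and the plain 'fuel' column name are assumptions; the question itself mixes 'fuel' and 'fuel description'):
import numpy as np
import pandas as pd

def apply_conditions(df, conditions):
    # Later condition dicts overwrite earlier ones wherever their masks overlap.
    out = df.copy()
    out['new_column'] = np.nan
    for cond in conditions:
        mask = pd.Series(True, index=out.index)
        if cond.get('year') is not None:
            mask &= out['year'] == cond['year']
        if cond.get('price') is not None:
            mask &= out['price'] > cond['price']
        if cond.get('fuel') is not None:
            mask &= out['fuel'] == cond['fuel']
        out.loc[mask, 'new_column'] = cond['result']
    return out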
| Np.where with optional condition | For my database I need to create a new column based on a condition.
In a seperate file I have add all the conditions like this:
conditions = [{year: 2016, price: 30000, fuel: Petrol, result: 12},
{year: 2017, price: 45000, fuel: Elektricity, result: 18},
{year: 2018, price: None, fuel: Petrol, result: 14},
I am using the following code to add a new column
df['new_column'] = np.where((df['year'] == dict[year]) & (df['price'] > dict[price]) & (df['fuel description'] == dict[fuel])), dict[result], df['new_column'
As you see in the conditions, it is possible that one of the values is none.
This means (in this case price: none) that the formula as follows:
df['new_column'] = np.where((df['year'] == dict[year]) & (df['fuel description'] == dict[fuel])), dict[result], df['new_column'
It is possible that multiple conditions are none, I want to prevent working with to much if and else statements but I can't figure out a way to do something in the formula that when it is none it needs to remove the condition.
Is this possible or should I just work with:
If dict[price] != None:
df['new_column'] = np.where((df['year'] == dict[year]) & (df['price'] > dict[price]) & (df['fuel description'] == dict[fuel])), dict[result], df['new_column'
else:
df['new_column'] = np.where((df['year'] == dict[year]) & (df['fuel description'] == dict[fuel])), dict[result], df['new_column'
| [
"you can reduce the conditions to separate the creation of each condition from their application, you will need an if conditions for each optional condition, but not their combinations.\nfrom operator import and_\nfrom functools import reduce\n\nconditions = []\nif dict['year'] != None:\n conditions.append(df['year'] == dict['year'])\nif dict['price'] != None:\n conditions.append(df['price'] > dict['price'])\n...\n\nif len(conditions) != 0:\n # similar to conditions[0] & conditions[1] & ...\n df['new_column'] = np.where(reduce(and_, conditions), dict['result'], df['new_column']) \nelse:\n ...\n\n",
"IIUC, you can use a merge_sof:\ndf['update_result'] = (\n pd.merge_asof(df.fillna({'price': -1}).reset_index()\n .sort_values(by='price').drop(columns='result'),\n pd.DataFrame(conditions).fillna({'price': -1})\n .sort_values(by='price'),\n by=['year', 'fuel'], on='price')\n .set_index('index')['result'].fillna(df['result'])\n)\n\nUsed input:\n year price fuel result\n0 2016 456789.0 Petrol 1\n1 2017 20000.0 Elektricity 2\n2 2018 NaN Petrol 3\n\nOutput:\n year price fuel result\n0 2016 456789.0 Petrol 12.0\n1 2017 20000.0 Elektricity 2.0\n2 2018 NaN Petrol 14.0\n\n"
] | [
0,
0
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074655786_numpy_pandas_python.txt |
Q:
Create a single categorical variable based on many dummy variables
I have several category dummies that are mutually exclusive
id cat1 cat2 cat3
A 0 0 1
B 1 0 0
C 1 0 0
D 0 0 1
E 0 1 0
F 0 0 1
..
I want to create a new column that contains all categories
id cat1 cat2 cat3 type
A 0 0 1 cat3
B 1 0 0 cat1
C 1 0 0 cat1
D 0 0 1 cat3
E 0 1 0 cat2
F 0 0 1 cat3
..
A:
You can use pandas.from_dummies and filter to select the columns starting with "cat":
df['type'] = pd.from_dummies(df.filter(like='cat'))
Output:
id cat1 cat2 cat3 type
0 A 0 0 1 cat3
1 B 1 0 0 cat1
2 C 1 0 0 cat1
3 D 0 0 1 cat3
4 E 0 1 0 cat2
5 F 0 0 1 cat3
A:
Use DataFrame.dot with DataFrame.filter for the columns containing the cat substring; if a row has multiple 1s, the matching names are joined with ,:
m = df.filter(like='cat').eq(1)
#all columns without first
#m = df.iloc[:, 1:].eq(1)
df['type'] = m.dot(m.columns + ',').str[:-1]
print (df)
id cat1 cat2 cat3 type
0 A 0 0 1 cat3
1 B 1 0 0 cat1
2 C 1 0 0 cat1
3 D 0 0 1 cat3
4 E 0 1 0 cat2
5 F 0 0 1 cat3
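Since the dummies are stated to be mutually exclusive (exactly one 1 per row), idxmax over the cat columns is another common idiom for the same reshaping, and it also works on pandas versions that predate pd.from_dummies:
df['type'] = df.filter(like='cat').idxmax(axis=1)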
| Create a single categorical variable based on many dummy variables | I have several category dummies that are mutually exclusive
id cat1 cat2 cat3
A 0 0 1
B 1 0 0
C 1 0 0
D 0 0 1
E 0 1 0
F 0 0 1
..
I want to create a new column that contains all categories
id cat1 cat2 cat3 type
A 0 0 1 cat3
B 1 0 0 cat1
C 1 0 0 cat1
D 0 0 1 cat3
E 0 1 0 cat2
F 0 0 1 cat3
..
| [
"You can use pandas.from_dummies and filter to select the columns starting with \"cat\":\ndf['type'] = pd.from_dummies(df.filter(like='cat'))\n\nOutput:\n id cat1 cat2 cat3 type\n0 A 0 0 1 cat3\n1 B 1 0 0 cat1\n2 C 1 0 0 cat1\n3 D 0 0 1 cat3\n4 E 0 1 0 cat2\n5 F 0 0 1 cat3\n\n",
"Use DataFrame.dot with DataFrame.filter for column with cat substring, if multiple 1 per rows are separated by ,:\nm = df.filter(like='cat').eq(1)\n#all columns without first\n#m = df.iloc[:, 1:].eq(1)\ndf['type'] = m.dot(m.columns + ',').str[:-1]\nprint (df)\n id cat1 cat2 cat3 type\n0 A 0 0 1 cat3\n1 B 1 0 0 cat1\n2 C 1 0 0 cat1\n3 D 0 0 1 cat3\n4 E 0 1 0 cat2\n5 F 0 0 1 cat3\n\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074656004_dataframe_pandas_python.txt |
Q:
PyBind11 destructor not invoked?
I have a c++ class wrapped with PyBind11. The issue is: when the Python script ends the c++ destructor is not being automatically invoked. This causes an untidy exit because networking resources need to be released by the destructor.
As a work-around it is necessary to explicitly delete the Python object, but I don't understand why!
Please could someone explain what is wrong here and how to get the destructor called automatically when the Python object is garbage collected?
Pybind11 binding code:
py::class_<pcs::Listener>(m, "listener")
.def(py::init<const py::object &, const std::string &, const std::string &, const std::string &, const std::string &, const std::set<std::string> &, const std::string & , const bool & , const bool & >(), R"pbdoc(
Monitors network traffic.
When a desired data source is detected a client instance is connected to consume the data stream.
Reconstructs data on receipt, like a jigsaw. Makes requests to fill any gaps. Verifies the data as sequential.
Data is output by callback to Python. Using the method specified in the constructor, which must accept a string argument.
)pbdoc");
In Python:
#Function to callback
def print_string(str):
print("Python; " + str)
lstnr = listener(print_string, 'tcp://127.0.0.1:9001', clientCertPath, serverCertPath, proxyCertPath, desiredSources, 'time_series_data', enableCurve, enableVerbose)
#Run for a minute
cnt = 0
while cnt < 60:
cnt += 1
time.sleep(1)
#Need to call the destructor explicity for some reason
del lstnr
A:
As was mentioned in a comment, the proximate cause of this behavior is the Python garbage collector: When the reference counter for an object gets to zero, the garbage collector may destroy the object (and thereby invoke the c++ destructor) but it doesn't have to do it at that moment.
This idea is elaborated more fully in the answer here:
https://stackoverflow.com/a/38238013/790979
As also mentioned in the above link, if you've got clean up to do at the end of an object's lifetime in Python, a nice solution is context management, where you'd define __enter__ and __exit__ in the object's wrapper (either in pybind11 or in Python itself), have __exit__ release the networking resources, and then, in the Python client code, something like:
with listener(print_string, 'tcp://127.0.0.1:9001', clientCertPath, serverCertPath, proxyCertPath, desiredSources, 'time_series_data', enableCurve, enableVerbose) as lstnr:
# Run for a minute
cnt = 0
while cnt < 60:
cnt += 1
time.sleep(1)
A:
So some years later I fixed this issue by adding Python context manager support, i.e. __enter__ and __exit__ method handling, to my PyBind11 code:
py::class_<pcs::Listener>(m, "listener")
.def(py::init<const py::object &, const std::string &, const std::string &, const std::string &, const std::string &, const std::set<std::string> &, const std::string & , const bool & , const bool & >(), R"pbdoc(
Monitors network traffic.
When a desired data source is detected a client instance is connected to consume the data stream.
Specify 'type' as 'string' or 'market_data' to facilitate appropriate handling of BarData or string messages.
Reconstructs data on receipt, like a jigsaw. Makes requests to fill any gaps. Verifies the data as sequential.
Data is output by callback to Python. Using the method specified in the constructor, which must accept a string argument.
)pbdoc")
.def("__enter__", &pcs::Listener::enter, R"pbdoc(
Python 'with' context manager support.
)pbdoc")
.def("__exit__", &pcs::Listener::exit, R"pbdoc(
Python 'with' context manager support.
)pbdoc");
Added corresponding functions to the C++ class, like so:
//For Python 'with' context manager
auto enter(){std::cout << "Context Manager: Enter" << std::endl; return py::cast(this); }//returns a pointer to this object for 'with'....'as' python functionality
auto exit(py::handle type, py::handle value, py::handle traceback){ std::cout << "Context Manager: Exit: " << type << " " << value << " " << traceback << std::endl; }
N.B.
The returned pointer value from enter() is important to the as functionality in a with....as statement.
The parameters passed to exit(py::handle type, py::handle value, py::handle traceback) are useful debugging info.
Python usage:
with listener(cb, endpoint, clientCertPath, serverCertPath, proxyCertPath, desiredSources, type, enableCurve, enableVerbose):
cnt = 0
while cnt < 10:
cnt += 1
time.sleep(1)
The Python context manager now calls the destructor on the C++ object thus smoothly releasing the networking resources.
A:
GoFaster's solution above is helpful and the correct approach but I just wanted to clarify and correct their assertion that
The Python context manager now calls the destructor on the C++ object thus smoothly releasing the networking resources
This is simply not true. The context manager only guarantees that __exit__ will be called, not that any destructor will be called. Let me demonstrate - here's a managed resource implemented in C++:
class ManagedResource
{
public:
ManagedResource(int i) : pi(std::make_unique<int>(i))
{
py::print("ManagedResource ctor");
}
~ManagedResource()
{
py::print("ManagedResource dtor");
}
int get() const { return *pi; }
py::object enter()
{
py::print("entered context manager");
return py::cast(this);
}
void exit(py::handle type, py::handle value, py::handle traceback)
{
// release resources
// pi.reset();
py::print("exited context manager");
}
private:
std::unique_ptr<int> pi;
};
the python bindings:
py::class_<ManagedResource>(m, "ManagedResource")
.def(py::init<int>())
.def("get", &ManagedResource::get)
.def("__enter__", &ManagedResource::enter, R"""(
Enter context manager.
)""")
.def("__exit__", &ManagedResource::exit, R"""(
Leave context manager.
)""");
and some python test code (note that the code above doesn't (yet) release the resource in __exit__):
def f():
with ManagedResource(42) as resource1:
print(f"get = {resource1.get()}")
print(f"hey look I'm still here {resource1.get()}") # not destroyed
if __name__ == "__main__":
f()
print("end")
which produces:
ManagedResource ctor
entered context manager
get = 42
exited context manager
hey look I'm still here 42
ManagedResource dtor
end
so the resource is constructed, acquiring memory, and accessed within the context manager. All good so far. However, the memory is still accessible outside the context manager (and before the destructor is called, which is decided by the Python runtime and outside our control unless we force it with del, which completely defeats the point of the context manager).
But we didn't actually release the resource in __exit__. If you uncomment pi.reset() in that function, you'll get this:
ManagedResource ctor
entered context manager
get = 42
exited context manager
Segmentation fault (core dumped)
this time, when you call get() outside the context manager, the ManagedResource object itself is still very much not destructed, but the resource inside it has been released.
And there's even more danger: if you create a ManagedResource outside a with block, you'll leak the resource, as __exit__ will never get called. To fix this, you'll need to defer acquiring the resources from the constructor to the __enter__ method, as well as adding checks that the resource exists in get.
In short, the morals of this story are:
you can't rely on when/where python objects are destructed, even for context managers
you can control acquisition and release of resources within a context manager
resources should be acquired in the __enter__ method, not in the constructor
resources should be released in the __exit__ method, not in the destructor
you should put sufficient guards around access to the resources
A context-managed object is not an RAII resource itself, but a wrapper around a RAII resource.
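To illustrate the first answer's remark that the context manager can also be defined in Python itself rather than in the pybind11 layer, here is a hedged sketch; the close() method is hypothetical, and the real binding would need to expose some explicit release call for this to work:
import contextlib

@contextlib.contextmanager
def managed_listener(*args, **kwargs):
    lst = listener(*args, **kwargs)   # the pybind11-wrapped class from the question
    try:
        yield lst
    finally:
        lst.close()                   # hypothetical explicit release of the networking resources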
| PyBind11 destructor not invoked? | I have a c++ class wrapped with PyBind11. The issue is: when the Python script ends the c++ destructor is not being automatically invoked. This causes an untidy exit because networking resources need to be released by the destructor.
As a work-around it is necessary to explicitly delete the Python object, but I don't understand why!
Please could someone explain what is wrong here and how to get the destructor called automatically when the Python object is garbage collected?
Pybind11 binding code:
py::class_<pcs::Listener>(m, "listener")
.def(py::init<const py::object &, const std::string &, const std::string &, const std::string &, const std::string &, const std::set<std::string> &, const std::string & , const bool & , const bool & >(), R"pbdoc(
Monitors network traffic.
When a desired data source is detected a client instance is connected to consume the data stream.
Reconstructs data on receipt, like a jigsaw. Makes requests to fill any gaps. Verifies the data as sequential.
Data is output by callback to Python. Using the method specified in the constructor, which must accept a string argument.
)pbdoc");
In Python:
#Function to callback
def print_string(str):
print("Python; " + str)
lstnr = listener(print_string, 'tcp://127.0.0.1:9001', clientCertPath, serverCertPath, proxyCertPath, desiredSources, 'time_series_data', enableCurve, enableVerbose)
#Run for a minute
cnt = 0
while cnt < 60:
cnt += 1
time.sleep(1)
#Need to call the destructor explicity for some reason
del lstnr
| [
"As was mentioned in a comment, the proximate cause of this behavior is the Python garbage collector: When the reference counter for an object gets to zero, the garbage collector may destroy the object (and thereby invoke the c++ destructor) but it doesn't have to do it at that moment.\nThis idea is elaborated more fully in the answer here:\nhttps://stackoverflow.com/a/38238013/790979\nAs also mentioned in the above link, if you've got clean up to do at the end of an object's lifetime in Python, a nice solution is context management, where you'd define __enter__ and __exit__ in the object's wrapper (either in pybind11 or in Python itself), have __exit__ release the networking resources, and then, in the Python client code, something like:\n\nwith listener(print_string, 'tcp://127.0.0.1:9001', clientCertPath, serverCertPath, proxyCertPath, desiredSources, 'time_series_data', enableCurve, enableVerbose) as lstnr:\n # Run for a minute\n cnt = 0\n while cnt < 60:\n cnt += 1\n time.sleep(1)\n\n",
"So some years later I fixed this issue by enabling Python context manager with support by adding __enter__ and __exit__ method handling to my PyBind11 code:\npy::class_<pcs::Listener>(m, \"listener\")\n.def(py::init<const py::object &, const std::string &, const std::string &, const std::string &, const std::string &, const std::set<std::string> &, const std::string & , const bool & , const bool & >(), R\"pbdoc(\n Monitors network traffic.\n\n When a desired data source is detected a client instance is connected to consume the data stream.\n \n Specify 'type' as 'string' or 'market_data' to facilitate appropriate handling of BarData or string messages.\n\n Reconstructs data on receipt, like a jigsaw. Makes requests to fill any gaps. Verifies the data as sequential.\n\n Data is output by callback to Python. Using the method specified in the constructor, which must accept a string argument.\n)pbdoc\")\n.def(\"__enter__\", &pcs::Listener::enter, R\"pbdoc(\n Python 'with' context manager support.\n)pbdoc\") \n.def(\"__exit__\", &pcs::Listener::exit, R\"pbdoc(\n Python 'with' context manager support.\n)pbdoc\");\n\nAdded corresponding functions to the C++ class, like so:\n//For Python 'with' context manager\nauto enter(){std::cout << \"Context Manager: Enter\" << std::endl; return py::cast(this); }//returns a pointer to this object for 'with'....'as' python functionality\nauto exit(py::handle type, py::handle value, py::handle traceback){ std::cout << \"Context Manager: Exit: \" << type << \" \" << value << \" \" << traceback << std::endl; }\n\nN.B.\n\nThe returned pointer value from enter() is important to the as functionality in a with....as statement.\n\nThe parameters passed to exit(py::handle type, py::handle value, py::handle traceback) are useful debugging info.\n\n\nPython usage:\nwith listener(cb, endpoint, clientCertPath, serverCertPath, proxyCertPath, desiredSources, type, enableCurve, enableVerbose):\ncnt = 0\nwhile cnt < 10:\n cnt += 1\n time.sleep(1)\n\nThe Python context manager now calls the destructor on the C++ object thus smoothly releasing the networking resources.\n",
"GoFaster's solution above is helpful and the correct approach but I just wanted to clarify and correct their assertion that\n\nThe Python context manager now calls the destructor on the C++ object thus smoothly releasing the networking resources\n\nThis is simply not true. The context manager only guarantees that __exit__ will be called, not that any destructor will be called. Let me demonstrate - here's a managed resource implemented in C++:\nclass ManagedResource\n{\npublic:\n ManagedResource(int i) : pi(std::make_unique<int>(i))\n {\n py::print(\"ManagedResource ctor\");\n }\n\n ~ManagedResource()\n {\n py::print(\"ManagedResource dtor\");\n }\n\n int get() const { return *pi; }\n\n py::object enter()\n {\n py::print(\"entered context manager\");\n return py::cast(this);\n }\n\n void exit(py::handle type, py::handle value, py::handle traceback)\n {\n // release resources\n // pi.reset();\n py::print(\"exited context manager\");\n }\n\nprivate:\n std::unique_ptr<int> pi;\n};\n\nthe python bindings:\n py::class_<ManagedResource>(m, \"ManagedResource\")\n .def(py::init<int>())\n .def(\"get\", &ManagedResource::get)\n .def(\"__enter__\", &ManagedResource::enter, R\"\"\"(\n Enter context manager.\n )\"\"\")\n .def(\"__exit__\", &ManagedResource::exit, R\"\"\"(\n Leave context manager.\n )\"\"\");\n\nand some python test code (note that the code above doesn't (yet) release the resource in __exit__):\ndef f():\n with ManagedResource(42) as resource1:\n print(f\"get = {resource1.get()}\")\n print(f\"hey look I'm still here {resource1.get()}\") # not destroyed\n\n\nif __name__ == \"__main__\":\n f()\n print(\"end\")\n\nwhich produces:\nManagedResource ctor\nentered context manager\nget = 42\nexited context manager\nhey look I'm still here 42\nManagedResource dtor\nend\n\nso the resource is constructed, acquiring memory, and accessed within the context manager. All good so far. However the memory is still accessible outside the context manager (and before the destuctor is called, which is decided by the python runtime and outside our control unless we force it with del, which completely defeats the point of the context manager.\nBut we didnt actually release the resource in __exit__. If you uncomment pi.reset() in that function, you'll get this:\nManagedResource ctor\nentered context manager\nget = 42\nexited context manager\nSegmentation fault (core dumped)\n\nthis time, when you call get() outside the context manager, the ManagedResource object itself is still very much not destructed, but the resource inside it has been released,\nAnd there's even more danger: if you create a ManagedResource outside a with block, you'll leak resource as __exit__ will never get called. To fix this, you'll need to defer acquiring the resources from the constructor to the __enter__ method, as well as putting checks that the resource exists in get.\nIn short, the morals of this story are:\n\nyou can't rely on when/where python objects are destructed, even for context managers\nyou can control acquisition and release of resources within a context manager\nresources should be acquired in the __enter__ method, not in the constructor\nresources should be released in the __exit__ method, not is the destructor\nyou should put sufficient guards around access to the resources\n\nA context-managed object is not an RAII resource itself, but a wrapper around a RAII resource.\n"
] | [
0,
0,
0
] | [] | [] | [
"c++",
"destructor",
"pybind11",
"python"
] | stackoverflow_0055452762_c++_destructor_pybind11_python.txt |
Q:
Modify the range of values of the color bar of a graph in python
I have the following issue.
I have a graph of which has colored segments. The problem is in relating those segments to the color bar (which also contains text), so that each color segment is aligned with the color bar.
The code is the following:
from matplotlib.colorbar import colorbar_factory
x_v = datosg["Hour"]+div
y_v = datosg["UV Index"]
fig, ax= plt.subplots(figsize = (7,7))
ax.plot(x_v, y_v, color = "green")
ax.set_xlim(7, 19)
ax.grid()
ax.axhspan(0, 2.5, facecolor='green', alpha=0.8)
ax.axhspan(2.5, 5.5, facecolor='blue', alpha=0.7)
ax.axhspan(5.5, 7.5, facecolor='red', alpha=0.7)
ax.axhspan(7.5, 10.5, facecolor='yellow', alpha=0.7)
ax.axhspan(10.5, 16, facecolor='pink', alpha=0.7)
ax.margins(0)
from matplotlib.colors import ListedColormap
#discrete color scheme
cMap = ListedColormap(['green', 'blue','red', 'yellow', 'pink'])
#data
np.random.seed(42)
data = np.random.rand(5, 5)
heatmap = ax.pcolor(data, cmap=cMap)
#legend
cbar_ay = fig.add_axes([0.93, 0.125, 0.2, 0.755])
cbar = plt.colorbar(heatmap, cax=cbar_ay, orientation="vertical")
cbar.ax.get_yaxis().set_ticks([])
for j, lab in enumerate(['$Bajo$','$Medio$','$Alto$','$Muy Alto$','$Extremo$']):
cbar.ax.text(.5, (2 * j + 1) / 10.0, lab, ha='center', va='center')
plt.show()
The graph that results from this code is as follows:
Result_code
I have tried everything, the result I expect is very similar to this graph:
resulting image
But I can't change the range of the colors in the color bar.
Also note that I created random values in order to create the colorbar, I couldn't think of any other way, however so far it has worked. I only have to modify the range, so that it is similar to the last graph.
Any help would be appreciated.
A:
I guess it's much easier to just draw a second Axes and fill it with axhspans the same way you did it with the main Axes, but if you want to use a colorbar, you can do it as follows:
import itertools
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
colors = ['green', 'blue','red', 'yellow', 'pink']
labels = ['$Bajo$','$Medio$','$Alto$','$Muy Alto$','$Extremo$']
bounds = np.array([0, 2.5, 5.5, 7.5, 10.5, 16 ])
fig, ax= plt.subplots()
for span, color in zip(itertools.pairwise(bounds), colors):
ax.axhspan(*span, facecolor=color, alpha=0.8)
ax.margins(0)
cmap = mpl.colors.ListedColormap(colors)
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
ax_pos = ax.get_position().bounds
cbar_ay = fig.add_axes([0.93, ax_pos[1], 0.2, ax_pos[3]])
cbar = plt.colorbar(mpl.cm.ScalarMappable(cmap=cmap, norm=norm), cax=cbar_ay, orientation="vertical", spacing='proportional')
cbar.ax.set_axis_off()
for y, lab in zip(bounds[:-1] + np.diff(bounds) / 2, labels):
cbar.ax.text(.5, y, lab, ha='center', va='center')
| Modify the range of values of the color bar of a graph in python | I have the following issue.
I have a graph of which has colored segments. The problem is in relating those segments to the color bar (which also contains text), so that each color segment is aligned with the color bar.
The code is the following:
from matplotlib.colorbar import colorbar_factory
x_v = datosg["Hour"]+div
y_v = datosg["UV Index"]
fig, ax= plt.subplots(figsize = (7,7))
ax.plot(x_v, y_v, color = "green")
ax.set_xlim(7, 19)
ax.grid()
ax.axhspan(0, 2.5, facecolor='green', alpha=0.8)
ax.axhspan(2.5, 5.5, facecolor='blue', alpha=0.7)
ax.axhspan(5.5, 7.5, facecolor='red', alpha=0.7)
ax.axhspan(7.5, 10.5, facecolor='yellow', alpha=0.7)
ax.axhspan(10.5, 16, facecolor='pink', alpha=0.7)
ax.margins(0)
from matplotlib.colors import ListedColormap
#discrete color scheme
cMap = ListedColormap(['green', 'blue','red', 'yellow', 'pink'])
#data
np.random.seed(42)
data = np.random.rand(5, 5)
heatmap = ax.pcolor(data, cmap=cMap)
#legend
cbar_ay = fig.add_axes([0.93, 0.125, 0.2, 0.755])
cbar = plt.colorbar(heatmap, cax=cbar_ay, orientation="vertical")
cbar.ax.get_yaxis().set_ticks([])
for j, lab in enumerate(['$Bajo$','$Medio$','$Alto$','$Muy Alto$','$Extremo$']):
cbar.ax.text(.5, (2 * j + 1) / 10.0, lab, ha='center', va='center')
plt.show()
The graph that results from this code is as follows:
Result_code
I have tried everything, the result I expect is very similar to this graph:
resulting image
But I can't change the range of the colors in the color bar.
Also note that I created random values in order to create the colorbar, I couldn't think of any other way, however so far it has worked. I only have to modify the range, so that it is similar to the last graph.
Any help would be appreciated.
| [
"I guess it's much easier to just draw a second Axes and fill it with axhspans the same way you did it with the main Axes, but if you want to use a colorbar, you can do it as follows:\nimport itertools\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ncolors = ['green', 'blue','red', 'yellow', 'pink']\nlabels = ['$Bajo$','$Medio$','$Alto$','$Muy Alto$','$Extremo$']\nbounds = np.array([0, 2.5, 5.5, 7.5, 10.5, 16 ])\n\nfig, ax= plt.subplots()\n\nfor span, color in zip(itertools.pairwise(bounds), colors):\n ax.axhspan(*span, facecolor=color, alpha=0.8)\nax.margins(0)\n\ncmap = mpl.colors.ListedColormap(colors)\nnorm = mpl.colors.BoundaryNorm(bounds, cmap.N)\n\nax_pos = ax.get_position().bounds\ncbar_ay = fig.add_axes([0.93, ax_pos[1], 0.2, ax_pos[3]])\ncbar = plt.colorbar(mpl.cm.ScalarMappable(cmap=cmap, norm=norm), cax=cbar_ay, orientation=\"vertical\", spacing='proportional')\ncbar.ax.set_axis_off()\n\nfor y, lab in zip(bounds[:-1] + np.diff(bounds) / 2, labels):\n cbar.ax.text(.5, y, lab, ha='center', va='center')\n\n\n"
] | [
0
] | [] | [] | [
"colorbar",
"matplotlib",
"python"
] | stackoverflow_0074651289_colorbar_matplotlib_python.txt |
Q:
how to quote a comma in printf that is used with rofi?
I'm creating a project to display keybindings of different wms using rofi, but I always get this error in rofi, or maybe due to printf:
full code
Mode r} bspc {quitwm r}
' is not found
the lines it is trying to display using printf and subprocess
super + alt + {q ,r} # I reckon the comma is causing the error
bspc {quitwm r}
code:
subprocess.run(f"rofi -modes \"{rofi_modes}\" -show {args.env[0]} -sidebar-mode", shell=True)
where rofi_modes is a string generated using another function:
bspwm:"printf" 'super + alt + {q ,r} bspc {quitwm r}
',
I'm sure the comma in {q ,r} causes this, but I don't know how to resolve it. I tried different ways but it didn't work.
any help would be much appreciated
I tried quoting the comma in the rofi_mode string but it didn't work
A:
The error message you are seeing is caused by the comma in the printf text. rofi's -modes option takes a comma-separated list of modes, so rofi splits your string at the comma and tries to treat the text after it as another mode, which it then reports as not found.
To fix this issue, you can escape the comma by preceding it with a backslash (\). This tells rofi to treat the comma as a regular character rather than as a mode separator.
Here is an example of how you can modify your code to escape the comma in the printf command:
rofi_modes = "bspwm:printf 'super + alt + {q \\,r} bspc {quitwm r}'"
subprocess.run(f"rofi -modes \"{rofi_modes}\" -show {args.env[0]} -sidebar-mode", shell=True)
This should fix the error you are seeing and allow rofi to run the printf mode correctly.
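If rofi_modes is generated from keybinding text, you can also do the escaping in Python rather than editing the string by hand. This is only a sketch: escape_for_rofi_modes is a made-up helper name, "-show bspwm" stands in for the args.env[0] from the question, and it assumes (as above) that rofi accepts backslash-escaped commas inside the -modes value:
import subprocess


def escape_for_rofi_modes(text: str) -> str:
    # rofi's -modes value is a comma-separated list of modes, so escape
    # literal commas in the mode definition (assumes rofi honours "\,")
    return text.replace(",", "\\,")


keybinding_text = "super + alt + {q ,r} bspc {quitwm r}"   # example line from the question
rofi_modes = f"bspwm:printf '{escape_for_rofi_modes(keybinding_text)}'"

subprocess.run(f"rofi -modes \"{rofi_modes}\" -show bspwm -sidebar-mode", shell=True)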
| how to qoute comma in printf that is used with rofi? | I'm creating a project to display keybindings of different wms using rofi but I always get this error in rofi or maybe due to printf
full code
Mode r} bspc {quitwm r}
' is not found
the lines it is trying to display using printf and subprocess
super + alt + {q ,r} # I reckon the comma is causing the error
bspc {quitwm r}
code:
subprocess.run(f"rofi -modes \"{rofi_modes}\" -show {args.env[0]} -sidebar-mode", shell=True)
where rofi_modes is a string generated using another functions:
bspwm:"printf" 'super + alt + {q ,r} bspc {quitwm r}
',
I'm sure the comma in {q .r} causes this but I don't know how to resolve it I tried in different ways but it didn't work
any help would be much appreciated
I tried quoting the comma in the rofi_mode string but it didn't work
| [
"The error message you are seeing is likely due to the comma in the printf command you are using in your code. In the Bash shell, the comma is used as a command separator, so the shell is trying to treat the text after the comma as a separate command.\nTo fix this issue, you can escape the comma in the printf command by preceding it with a backslash (). This will tell the shell to treat the comma as a regular character and not as a command separator.\nHere is an example of how you can modify your code to escape the comma in the printf command:\nrofi_modes = bspwm: \"printf 'super + alt + {q \\,r} bspc {quitwm r}'\"\n\nsubprocess.run(f\"rofi -modes \"{rofi_modes}\" -show {args.env[0]} -sidebar-mode\", shell=True)\nThis should fix the error you are seeing and allow the printf command to be executed correctly.\n"
] | [
0
] | [] | [] | [
"formatting",
"printf",
"python",
"shell",
"string"
] | stackoverflow_0074656049_formatting_printf_python_shell_string.txt |
Q:
Exploding a data frame row by row and storing the exploded values in a new dataframe
I have the following code.
I want to go through the 'outlierdataframe' dataframe row by row and explode the values in the 'x' and 'y' columns.
For each exploded row, I then want to store this exploded row as its own dataframe, with columns 'newID', 'x' and 'y'.
However, the following code prints everything in one column rather than printing the exploded 'x' values in one column and the exploded 'y' values in another column.
I would be so grateful for a helping hand!
individualframe = outlierdataframe.iloc[0]
individualoutliers = individualframe.explode(list('xy'))
newframe = pd.DataFrame(individualoutliers)
print(newframe)
outlier dataframe first line:
indexing first line of outlier dataframe:
outlierdataframe.iloc[0]
index 24
subID Prolific_610020
level 1
complete False
duration 20.015686
map_view 12.299759
distance 203.426697
x [55, 55, 55, 60, 60, 60, 65, 70, 70, 75, 80, 8...
y [60, 60, 60, 60, 65, 65, 70, 70, 75, 75, 80, 8...
r [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 1...
batch 1
newID 610020
Name: 24, dtype: object
newframe = pd.DataFrame(individualoutliers)
print(newframe)
24
0 24
1 Prolific_610020
2 1
3 False
4 20.015686
.. ...
121 55
122 55
123 55
124 1
125 610020
A:
You can use pandas.DataFrame.apply with pandas.Series.explode to explode your selected (list) columns (e.g, x and y).
Try this :
out = (
df
.loc[:, ["newID", "x", "y"]]
.apply(lambda x: pd.Series(x).explode())
)
# Output :
print(out)
newID x y
0 610020 100 60
0 610020 55 60
0 610020 55 60
0 610020 60 60
0 610020 60 65
0 610020 60 65
0 610020 65 70
0 610020 70 70
0 610020 70 75
0 610020 75 75
0 610020 80 80
If you need to assign a separate dataframe (with a pattern name, df_newID) for each group, use this:
for k, g in out.groupby("newID"):
globals()['df_' + str(k)] = g
print(df_610020, type(df_610020))
newID x y
0 610020 100 60
0 610020 55 60
0 610020 55 60
0 610020 60 60
0 610020 60 65
0 610020 60 65
0 610020 65 70
0 610020 70 70
0 610020 70 75
0 610020 75 75
0 610020 80 80 <class 'pandas.core.frame.DataFrame'>
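If you would rather not write the per-group frames into globals(), a plain dictionary keyed by newID gives you the same per-ID dataframes without creating dynamic variable names; a small sketch reusing the out frame from above:
# Collect one dataframe per newID in a dict instead of module globals
frames_by_id = {k: g.reset_index(drop=True) for k, g in out.groupby("newID")}

# Access one group's exploded x/y values
print(frames_by_id[610020])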
A:
The following solution works:
individualframe = outlierdataframe.iloc[0]
individualoutliers1 = individualframe[['x']].explode('x')
individualoutliers2 = individualframe[['y']].explode('y')
newIDs = individualframe[['newID']][0]
individualoutliers1 = pd.DataFrame(individualoutliers1)
individualoutliers2 = pd.DataFrame(individualoutliers2)
data = [individualoutliers1,individualoutliers2]
newframe = pd.concat(data,axis=1)
newframe = newframe.rename(columns={newframe.columns.values[0]:'x',newframe.columns.values[1]:'y'})
newframe['newID'] = newIDs
print(newframe)
y y newID
0 55 60 610020
1 55 60 610020
2 55 60 610020
3 60 60 610020
4 60 65 610020
5 60 65 610020
6 65 70 610020
| Exploding a data frame row by row and storing the exploded values in a new dataframe | I have the following code.
I want to go through the 'outlierdataframe' dataframe row by row and explode the values in the 'x' and 'y' columns.
For each exploded row, I then want to store this exploded row as its own dataframe, with columns 'newID', 'x' and 'y'.
However, the following code prints everything in one column rather than printing the exploded 'x' values in one column, the exploded 'y' values in another column?
I would be so grateful for a helping hand!
individualframe = outlierdataframe.iloc[0]
individualoutliers = individualframe.explode(list('xy'))
newframe = pd.DataFrame(individualoutliers)
print(newframe)
outlier dataframe first line:
indexing first line of outlier dataframe:
outlierdataframe.iloc[0]
index 24
subID Prolific_610020
level 1
complete False
duration 20.015686
map_view 12.299759
distance 203.426697
x [55, 55, 55, 60, 60, 60, 65, 70, 70, 75, 80, 8...
y [60, 60, 60, 60, 65, 65, 70, 70, 75, 75, 80, 8...
r [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 1...
batch 1
newID 610020
Name: 24, dtype: object
newframe = pd.DataFrame(individualoutliers)
print(newframe)
24
0 24
1 Prolific_610020
2 1
3 False
4 20.015686
.. ...
121 55
122 55
123 55
124 1
125 610020
| [
"You can use pandas.DataFrame.apply with pandas.Series.explode to explode your selected (list) columns (e.g, x and y).\nTry this :\nout = (\n df\n .loc[:, [\"newID\", \"x\", \"y\"]]\n .apply(lambda x: pd.Series(x).explode())\n )\n\n# Output :\nprint(out)\n\n newID x y\n0 610020 100 60\n0 610020 55 60\n0 610020 55 60\n0 610020 60 60\n0 610020 60 65\n0 610020 60 65\n0 610020 65 70\n0 610020 70 70\n0 610020 70 75\n0 610020 75 75\n0 610020 80 80\n\nIf you need to assign a single dataframe (with a patter name, df_newID) for each group, use this:\nfor k, g in out.groupby(\"newID\"):\n globals()['df_' + str(k)] = g\n \nprint(df_610020, type(df_610020))\n\n newID x y\n0 610020 100 60\n0 610020 55 60\n0 610020 55 60\n0 610020 60 60\n0 610020 60 65\n0 610020 60 65\n0 610020 65 70\n0 610020 70 70\n0 610020 70 75\n0 610020 75 75\n0 610020 80 80 <class 'pandas.core.frame.DataFrame'>\n\n",
"The following solution works:\nindividualframe = outlierdataframe.iloc[0]\nindividualoutliers1 = individualframe[['x']].explode('x')\nindividualoutliers2 = individualframe[['y']].explode('y')\nnewIDs = individualframe[['newID']][0]\nindividualoutliers1 = pd.DataFrame(individualoutliers1)\nindividualoutliers2 = pd.DataFrame(individualoutliers2)\ndata = [individualoutliers1,individualoutliers2]\nnewframe = pd.concat(data,axis=1)\nnewframe = newframe.rename(columns={newframe.columns.values[0]:'x',newframe.columns.values[1]:'y'})\nnewframe['newID'] = newIDs \nprint(newframe)\n\n\nOutput exceeds the size limit. Open the full output data in a text editor\n y y newID\n0 55 60 610020\n1 55 60 610020\n2 55 60 610020\n3 60 60 610020\n4 60 65 610020\n5 60 65 610020\n6 65 70 610020\n\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"indexing",
"numpy",
"pandas",
"python"
] | stackoverflow_0074655919_dataframe_indexing_numpy_pandas_python.txt |
Q:
4 bit per pixel image from binary file in Python with Numpy and CV2?
Suppose I want to represent binary data as a black and white image, with only sixteen distinct levels for the gray values for each pixel so that each two adjacent pixels (lengthwise) represent a single byte. How can I do this? If, for example, I use the following:
import cv2
import numpy as np
path = r'mybinaryfile.bin'
bin_data = np.fromfile(path, dtype='uint8')
scalar = 20
width = int(1800/scalar)
height = int(1000/scalar)
for jj in range(50):
wid = int(width*height)
img = bin_data[int(jj*wid):int((jj+1)*wid)].reshape(height,width)
final_img = cv2.resize(img, (scalar*width, scalar*height), interpolation = cv2.INTER_NEAREST)
fn = f'tmp/output_{jj}.png'
cv2.imwrite(fn, final_img)
I can create a sequence of PNG files that represent the binary file, with each 20 by 20 block of pixels representing a single byte. However, this creates too many unique values for the grays (256), so I need to reduce it to fewer (16). How can I "split" each pixel into two pixels with 16 distinct gray levels (4 bpp, rather than 8) instead?
Using 4 bpp rather than 8 bpp should double the number of image files since I'm keeping the resolution the same but doubling the number of pixels I use to represent a byte (2 pixels per byte rather than 1).
A:
I have understood that you want to take an 8-bit number and split it into its upper four bits and lower four bits.
This can be done with a couple of bitwise operations.
def split_octet(data):
"""
For each 8-bit number in array, split them into two 4-bit numbers"""
split_data = []
for octet in data:
upper = octet >> 4
lower = octet & 0x0f
print(f"8bit:{octet:02x} upper:{upper:01x} and lower:{lower:01x}")
split_data.extend([upper, lower])
return split_data
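As an aside (not part of the original answer), the same upper/lower split can be done without a Python loop by using numpy's bitwise operators; a sketch assuming the input is already a uint8 array such as the bin_data from the question:
import numpy as np

def split_octets_vectorized(data):
    """Return an array twice as long as data, holding the upper and lower
    4-bit halves of every byte, in order."""
    data = np.asarray(data, dtype=np.uint8)
    upper = data >> 4
    lower = data & 0x0F
    # Interleave upper/lower so each byte becomes two 4-bit values
    return np.column_stack((upper, lower)).ravel()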
For the grayscale image to be created, the data needs to be converted to values in the range 0 to 255. However, you want to keep only 16 discrete values. This can be done by normalising the 4-bit values into the range 0 to 1, then multiplying by 255 to get back to uint8 values.
def create_square_grayscale(data, data_shape):
# Normalize data from 0 to 1
normalized = np.array(data, np.float64) / 0xf
# fold data to image shape
pixel_array = normalized.reshape(data_shape)
# change 16 possible values over 0 to 255 range
return np.array(pixel_array * 0xff, np.uint8)
My full testcase was:
from secrets import token_bytes
import cv2
import numpy as np
pixel_size = 20
final_image_size = (120, 120)
def gen_data(data_size):
# Generate some random data
return token_bytes(data_size)
def split_octet(data):
"""
For each 8-bit number in array, split them into two 4-bit numbers"""
split_data = []
for octet in data:
upper = octet >> 4
lower = octet & 0x0f
print(f"8bit:{octet:02x} upper:{upper:01x} and lower:{lower:01x}")
split_data.extend([upper, lower])
return split_data
def create_square_grayscale(data, data_shape):
# Normalize data from 0 to 1
normalized = np.array(data, np.float64) / 0xf
# fold data to image shape
pixel_array = normalized.reshape(data_shape)
# change 16 possible values over 0 to 255 range
return np.array(pixel_array * 0xff, np.uint8)
def main():
side1, side2 = (int(final_image_size[0]/pixel_size),
int(final_image_size[1]/pixel_size))
rnd_data = gen_data(int((side1 * side2)/2))
split_data = split_octet(rnd_data)
img = create_square_grayscale(split_data, (side1, side2))
print("image data:\n", img)
new_res = cv2.resize(img, None, fx=pixel_size, fy=pixel_size,
interpolation=cv2.INTER_AREA)
cv2.imwrite("/tmp/rnd.png", new_res)
if __name__ == '__main__':
main()
Which gave a transcript of:
8bit:34 upper:3 and lower:4
8bit:d4 upper:d and lower:4
8bit:bd upper:b and lower:d
8bit:c3 upper:c and lower:3
8bit:61 upper:6 and lower:1
8bit:9e upper:9 and lower:e
8bit:5f upper:5 and lower:f
8bit:1b upper:1 and lower:b
8bit:a5 upper:a and lower:5
8bit:31 upper:3 and lower:1
8bit:22 upper:2 and lower:2
8bit:8a upper:8 and lower:a
8bit:1e upper:1 and lower:e
8bit:84 upper:8 and lower:4
8bit:3a upper:3 and lower:a
8bit:c0 upper:c and lower:0
8bit:3c upper:3 and lower:c
8bit:09 upper:0 and lower:9
image data:
[[ 51 68 221 68 187 221]
[204 51 102 17 153 238]
[ 85 255 17 187 170 85]
[ 51 17 34 34 136 170]
[ 17 238 136 68 51 170]
[204 0 51 204 0 153]]
And generated the following image:
The original data has 18 bytes and there are 36 blocks/"20x20_pixels"
And if I change the dimensions to 1800, 1000 that you have in the question I get:
| 4 bit per pixel image from binary file in Python with Numpy and CV2? | Suppose I want to represent binary data as a black and white image, with only sixteen distinct levels for the gray values for each pixel so that each two adjacent pixels (lengthwise) represent a single byte. How can I do this? If, for example, I use the following:
import numpy as np
path = r'mybinaryfile.bin'
bin_data = np.fromfile(path, dtype='uint8')
scalar = 20
width = int(1800/scalar)
height = int(1000/scalar)
for jj in range(50):
wid = int(width*height)
img = bin_data[int(jj*wid):int((jj+1)*wid)].reshape(height,width)
final_img = cv2.resize(img, (scalar*width, scalar*height), interpolation = cv2.INTER_NEAREST)
fn = f'tmp/output_{jj}.png'
cv2.imwrite(fn, final_img)
I can create a sequence of PNG files that represent the binary file, with each 20 by 20 block of pixels representing a single byte. However, this creates too many unique values for the grays (256), so I need to reduce it to fewer (16). How can I "split" each pixel into two pixels with 16 distinct gray levels (4 bpp, rather than 8) instead?
Using 4 bpp rather than 8 bpp should double the number of image files since I'm keeping the resolution the same but doubling the number of pixels I use to represent a byte (2 pixels per byte rather than 1).
| [
"I have understood that you want to take an 8-bit number and split the upper four bits and the lower four bits.\nThis can be done with a couple of bitwise operations.\ndef split_octet(data):\n \"\"\"\n For each 8-bit number in array, split them into two 4-bit numbers\"\"\"\n split_data = []\n for octet in data:\n upper = octet >> 4\n lower = octet & 0x0f\n print(f\"8bit:{octet:02x} upper:{upper:01x} and lower:{lower:01x}\")\n split_data.extend([upper, lower])\n return split_data\n\nFor the gray scale image to be created the data needs to be converted to a value in the range 0 to 255. However you want to keep only 16 discrete values. This can be done by normalising the 4-bit values in the range of 0 to 1. The multiple the value by 255 to get back to uint8 values.\ndef create_square_grayscale(data, data_shape):\n # Normalize data from 0 to 1\n normalized = np.array(data, np.float64) / 0xf\n # fold data to image shape\n pixel_array = normalized.reshape(data_shape)\n # change 16 possible values over 0 to 255 range\n return np.array(pixel_array * 0xff, np.uint8)\n\nMy full testcase was:\nfrom secrets import token_bytes\nimport cv2\nimport numpy as np\n\npixel_size = 20\nfinal_image_size = (120, 120)\n\n\ndef gen_data(data_size):\n # Generate some random data\n return token_bytes(data_size)\n\n\ndef split_octet(data):\n \"\"\"\n For each 8-bit number in array, split them into two 4-bit numbers\"\"\"\n split_data = []\n for octet in data:\n upper = octet >> 4\n lower = octet & 0x0f\n print(f\"8bit:{octet:02x} upper:{upper:01x} and lower:{lower:01x}\")\n split_data.extend([upper, lower])\n return split_data\n\n\ndef create_square_grayscale(data, data_shape):\n # Normalize data from 0 to 1\n normalized = np.array(data, np.float64) / 0xf\n # fold data to image shape\n pixel_array = normalized.reshape(data_shape)\n # change 16 possible values over 0 to 255 range\n return np.array(pixel_array * 0xff, np.uint8)\n\n\ndef main():\n side1, side2 = (int(final_image_size[0]/pixel_size),\n int(final_image_size[1]/pixel_size))\n rnd_data = gen_data(int((side1 * side2)/2))\n split_data = split_octet(rnd_data)\n img = create_square_grayscale(split_data, (side1, side2))\n print(\"image data:\\n\", img)\n new_res = cv2.resize(img, None, fx=pixel_size, fy=pixel_size,\n interpolation=cv2.INTER_AREA)\n cv2.imwrite(\"/tmp/rnd.png\", new_res)\n\n\nif __name__ == '__main__':\n main()\n\nWhich gave a transcript of:\n8bit:34 upper:3 and lower:4\n8bit:d4 upper:d and lower:4\n8bit:bd upper:b and lower:d\n8bit:c3 upper:c and lower:3\n8bit:61 upper:6 and lower:1\n8bit:9e upper:9 and lower:e\n8bit:5f upper:5 and lower:f\n8bit:1b upper:1 and lower:b\n8bit:a5 upper:a and lower:5\n8bit:31 upper:3 and lower:1\n8bit:22 upper:2 and lower:2\n8bit:8a upper:8 and lower:a\n8bit:1e upper:1 and lower:e\n8bit:84 upper:8 and lower:4\n8bit:3a upper:3 and lower:a\n8bit:c0 upper:c and lower:0\n8bit:3c upper:3 and lower:c\n8bit:09 upper:0 and lower:9\nimage data:\n [[ 51 68 221 68 187 221]\n [204 51 102 17 153 238]\n [ 85 255 17 187 170 85]\n [ 51 17 34 34 136 170]\n [ 17 238 136 68 51 170]\n [204 0 51 204 0 153]]\n\nAnd generated the following image:\n\nThe original data has 18 bytes and there are 36 blocks/\"20x20_pixels\"\nAnd if I change the dimensions to 1800, 1000 that you have in the question I get:\n\n"
] | [
1
] | [] | [] | [
"bits_per_pixel",
"image",
"numpy",
"opencv",
"python"
] | stackoverflow_0074636650_bits_per_pixel_image_numpy_opencv_python.txt |
Q:
Python. Deleting Excel rows while iterating. Alternative for OpenPyXl or solution for ws.max_rows wrong output
I'm working with Python on Excel files. Until now I was using OpenPyXl. I need to iterate over the rows and delete some of them if they do not meet specific criteria. Let's say I was using something like:
current_row = 1
while current_row <= ws.max_row:
    if 'something' in ws[f'L{current_row}'].value:
        data_ws.delete_rows(current_row)
        continue
    current_row += 1
Everything was alright until I encountered a problem with ws.max_row. In a new Excel file which I received to process, ws.max_row was returning more rows than really exist. After some googling I found out why this is happening.
Here's a great explanation of the problem which I've found in the comment section on the Stack:
However, ws.max_row will not check if last rows are empty or not. If cell's content at the end of the worksheet is deleted using Del key or by removing duplicates, remaining empty rows at the end of your data will still count as a used row. If you do not want to keep these empty rows, you will have to delete those entire rows by selecting rows number on the left of your spreadsheet and deleting them (right click on selected row number(s) -> Delete) –
V. Brunelle
Thanks V. Brunelle for very good explanation of the cause of the problem.
In my case it is because some of the rows were deleted by removing duplicates. E.g. there are 400 rows in my file listed one by one (without any gaps) but ws.max_row returns 500.
For now I'm using a quick fix:
while current_row <= len([row for row in data_ws.iter_rows(min_row=min_row) if not all([cell.value is None for cell in row])])
But I know that it is very inefficient. That's the reason why I'm asking this question. I'm looking for possible solution.
From what I've found here on the Stack I can:
Create a copy of the worksheet, iterate over that copy and ws.delete_rows in the original worksheet, so I will need to apply my fix only once
Iterate backwards with a for loop so I won't have to deal with ws.max_row, since for loops work fine in that case (they read the proper file dimensions). This method seems promising, but I always have 4 rows at the top of the workbook which I'm not touching at all, and potential debugging would need to be done backwards as well, which might not be very enjoyable :D.
Use another Python library to process Excel files, but I don't know which one would be better, because keeping workbook styles is very important to me (and making changes to them if needed). I've read some promising things about the pywin32 library (win32com.client), but it seems to lack documentation, it might be hard to work with, and I don't know how it performs. I was also considering pandas, but to put it kindly it messes up the styles (in reality it deletes all styles in the worksheet).
I'm stuck now, because I really don't know which route should I choose.
I would appreciate every advice/opinion in the topic and if possible I would like to make a small discussion here.
Best regards!
A:
If max_row doesn't report what you expect, you'll need to sort out the issue as best you can. That might be by manually deleting: "delete those entire rows by selecting rows number on the left of your spreadsheet and deleting them (right click on selected row number(s) -> Delete)"; or by making some other determination in your code as to what the last row actually is, then programmatically deleting all the rows from there to max_row so that at least it reports correctly on the next code run.
You could also incorporate your fix code into your example code for deleting rows that meet specific criteria.
For example: a test sheet has 9 rows of data but cell B15 is an empty string, so max_row returns 15 rather than 9.
The example code checks each used cell in the row for None type in the cell value and only processes the 9 rows with data.
from openpyxl import load_workbook
filename = "foo.xlsx"
wb = load_workbook(filename)
data_ws = wb['Sheet1']
print(f"Max Rows Reports {data_ws.max_row}")
for row in data_ws:
print(f"Checking row {row[0].row}")
if all(cell.value is not None for cell in row):
if 'something' in data_ws[f'L{row[0].row}'].value:
data_ws.delete_rows(row[0].row)
else:
print(f"Actual Max Rows is {row[0].row}")
break
wb.save('out_' + filename)
Output
Max Rows Reports 15
Checking row 1
Checking row 2
Checking row 3
Checking row 4
Checking row 5
Checking row 6
Checking row 7
Checking row 8
Checking row 9
Actual Max Rows is 9
Of course this is not perfect: if any of the 9 rows with data had one cell value of None, the loop would stop at that point. However, if you know that's not going to be the case, it may be all you need.
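For completeness, here is one way to do the "programmatically delete everything from the real last row down to max_row" clean-up suggested above, so that the workbook reports a sensible max_row the next time it is loaded. It is only a sketch and assumes a row counts as empty when every cell in it is None or an empty string:
from openpyxl import load_workbook

wb = load_workbook("foo.xlsx")
ws = wb["Sheet1"]

# Find the last row that actually contains data
last_data_row = 0
for row in ws.iter_rows():
    if any(cell.value not in (None, "") for cell in row):
        last_data_row = row[0].row

# Drop the trailing "phantom" rows so max_row is accurate after saving
if 0 < last_data_row < ws.max_row:
    ws.delete_rows(last_data_row + 1, ws.max_row - last_data_row)

wb.save("out_foo.xlsx")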
| Python. Deleting Excel rows while iterating. Alternative for OpenPyXl or solution for ws.max_rows wrong output | I'm working with Python on Excel files. Until now I was using OpenPyXl. I need to iterate over the rows and delete some of them if they do not meet specific criteria let's say I was using something like:
current_row = 1
while current_row <= ws.max_row
if 'something' in ws[f'L{row}'].value:
data_ws.delete_rows(current_row)
continue
current_row += 1
Everything was alright until I have encountered problem with ws.max_rows. In a new Excel file which I've received to process ws.max_rows was returning more rows than it was in the reality. After some googling I've found out why is it happening.
Here's a great explanation of the problem which I've found in the comment section on the Stack:
However, ws.max_row will not check if last rows are empty or not. If cell's content at the end of the worksheet is deleted using Del key or by removing duplicates, remaining empty rows at the end of your data will still count as a used row. If you do not want to keep these empty rows, you will have to delete those entire rows by selecting rows number on the left of your spreadsheet and deleting them (right click on selected row number(s) -> Delete) –
V. Brunelle
Thanks V. Brunelle for very good explanation of the cause of the problem.
In my case it is because some of the rows are deleted by removing duplicates. For e.g. there's 400 rows in my file listed one by one (without any gaps) but ws.max_row is returning 500
For now I'm using a quick fix:
while current_row <= len([row for row in data_ws.iter_rows(min_row=min_row) if not all([cell.value is None for cell in row])])
But I know that it is very inefficient. That's the reason why I'm asking this question. I'm looking for possible solution.
From what I've found here on the Stack I can:
Create a copy of the worksheet and iterate over that copy and ws.delete_rows in the original worksheet so I will need to my fix only once
Iterate backwards with for_loop so I won't have to deal with ws.max_rows since for_loops works fine in that case (they read proper file dimensions). This method seems promising for me, but always I've got 4 rows at the top of the workbook which I'm not touching at all and potential debugging would need to be done backwards as well, which might not be very enjoyable :D.
Use other python library to process Excel files, but I don't know which one would be better, because keeping workbook styles is very important to me (and making changes in them if needed). I've read some promising things about pywin32 library (win32com.client), but it seems lacking documentation and it might be hard to work with it and also I don't know how does it look in performance matter. I was also considering pandas, but in kind words it's messing up the styles (in reality it deletes all styles in the worksheet).
I'm stuck now, because I really don't know which route should I choose.
I would appreciate every advice/opinion in the topic and if possible I would like to make a small discussion here.
Best regards!
| [
"If max rows doesn't report what you expect you'll need to sort the issue best you can and perhaps that might be by manually deleting; \"delete those entire rows by selecting rows number on the left of your spreadsheet and deleting them (right click on selected row number(s) -> Delete)\" or making some other determination in your code as what the last row is, then perhaps programatically deleting all the rows from there to max_row so at least it reports correctly on the next code run.\n\nYou could also incorporate your fix code into your example code for deleting rows that meet specific criteria.\n\nFor example; a test sheet has 9 rows of data but cell B15 is an empty string so max_rows returns 15 rather than 9.\nThe example code checks each used cell in the row for None type in the cell value and only processes the 9 rows with data.\nfrom openpyxl import load_workbook\n\n\nfilename = \"foo.xlsx\"\n\nwb = load_workbook(filename)\ndata_ws = wb['Sheet1']\n\nprint(f\"Max Rows Reports {data_ws.max_row}\")\n\nfor row in data_ws:\n print(f\"Checking row {row[0].row}\")\n if all(cell.value is not None for cell in row):\n if 'something' in data_ws[f'L{row[0].row}'].value:\n data_ws.delete_rows(row[0].row)\n else:\n print(f\"Actual Max Rows is {row[0].row}\")\n break\n\nwb.save('out_' + filename)\n\nOutput\nMax Rows Reports 15\nChecking row 1\nChecking row 2\nChecking row 3\nChecking row 4\nChecking row 5\nChecking row 6\nChecking row 7\nChecking row 8\nChecking row 9\nActual Max Rows is 9\n\nOf course this is not perfect, if any of the 9 rows with data had one cell value of None the loop would stop at that point. However if you know that's not going to be the case it may be all you need.\n"
] | [
0
] | [] | [] | [
"delete_row",
"excel",
"iteration",
"openpyxl",
"python"
] | stackoverflow_0074647281_delete_row_excel_iteration_openpyxl_python.txt |
Q:
How to more efficiently test if data anomalies occur in transaction (Django)
I want to test whether data anomalies such as dirty read, non-repeatable read, phantom read, lost update and so on occur in a transaction.
Actually, I used person table which has id and name as shown below.
person table:
id
name
1
John
2
David
Then, I tested non-repeatable read with the test view below and one command prompt. *During sleep(10), the command prompt updates "David" to "Tom" and commits:
# "store/views.py"
from .models import Person
from django.http import HttpResponse
from django.db import transaction
from time import sleep
@transaction.atomic
def test(request):
print(Person.objects.get(pk=2)) # "David"
sleep(10) # Update "David" to "Tom" and commit by one command prompt.
print(Person.objects.get(pk=2)) # "Tom"
return HttpResponse("Test")
But every time I test data anomalies, I manually need to run the test view and update and commit with one command prompt, which takes much time.
So, how can I more efficiently test if data anomalies occur in a transaction?
A:
With threads, you can more efficiently test if data anomalies occur in a transaction.
I created 5 sets of code with threads to test 5 common data anomalies (dirty read, non-repeatable read, phantom read, lost update and write skew) with Django's default isolation level READ COMMITTED and PostgreSQL, as shown below. *Lost update and write skew occur by race condition.
I explain about:
dirty read => Here
non-repeatable read and phantom read => Here
lost update and write skew => Here
which anomaly occurs in which isolation level => Here
<Dirty read>, <Non-repeatable read> and <Phantom read>
First, I created person table with id and name with models.py as shown below:
person table:
id
name
1
John
2
David
# "store/models.py"
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=30)
Then, I created and ran the test code of dirty read as shown below:
# "store/views.py"
from django.db import transaction
from time import sleep
from .models import Person
from threading import Thread
from django.http import HttpResponse
@transaction.atomic
def transaction1(flow):
while True:
while True:
if flow[0] == "Step 1":
sleep(0.1)
print("<T1", flow[0] + ">", "BEGIN")
flow[0] = "Step 2"
break
while True:
if flow[0] == "Step 2":
sleep(0.1)
print("<T1", flow[0] + ">", "SELECT")
person = Person.objects.get(id=2)
print(person.id, person.name)
flow[0] = "Step 3"
break
while True:
if flow[0] == "Step 5":
sleep(0.1)
print("<T1", flow[0] + ">", "SELECT")
person = Person.objects.get(id=2)
print(person.id, person.name)
flow[0] = "Step 6"
break
while True:
if flow[0] == "Step 6":
sleep(0.1)
print("<T1", flow[0] + ">", "COMMIT")
flow[0] = "Step 7"
break
break
@transaction.atomic
def transaction2(flow):
while True:
while True:
if flow[0] == "Step 3":
sleep(0.1)
print("<T2", flow[0] + ">", "BEGIN")
flow[0] = "Step 4"
break
while True:
if flow[0] == "Step 4":
sleep(0.1)
print("<T2", flow[0] + ">", "UPDATE")
Person.objects.filter(id=2).update(name="Tom")
flow[0] = "Step 5"
break
while True:
if flow[0] == "Step 7":
sleep(0.1)
print("<T2", flow[0] + ">", "COMMIT")
break
break
def call_transcations(request):
flow = ["Step 1"]
thread1 = Thread(target=transaction1, args=(flow,), daemon=True)
thread2 = Thread(target=transaction2, args=(flow,), daemon=True)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
return HttpResponse("Call_transcations")
Then, dirty read did not occur according to the result below on the console, because dirty read does not happen in any isolation level in PostgreSQL:
<T1 Step 1> BEGIN
<T1 Step 2> SELECT
2 David # Here
<T2 Step 3> BEGIN
<T2 Step 4> UPDATE
<T1 Step 5> SELECT
2 David # Here
<T1 Step 6> COMMIT
<T2 Step 7> COMMIT
And also, I could get the SQL query logs of PostgreSQL below. You can check how to log SQL queries on PostgreSQL:
[23576]: BEGIN
[23576]: SELECT "store_person"."id", "store_person"."name"
FROM "store_person"
WHERE "store_person"."id" = 2
LIMIT 21
[8600]: BEGIN
[8600]: UPDATE "store_person" SET "name" = 'Tom'
WHERE "store_person"."id" = 2
[23576]: SELECT "store_person"."id", "store_person"."name"
FROM "store_person"
WHERE "store_person"."id" = 2
LIMIT 21
[23576]: COMMIT
[8600]: COMMIT
And, this table below shows the flow and SQL query logs of PostgreSQL above:
Flow
Transaction 1 (T1)
Transaction 2 (T2)
Explanation
Step 1
BEGIN;
T1 starts.
Step 2
SELECT "store_person"."id", "store_person"."name" FROM "store_person" WHERE "store_person"."id" = 2 LIMIT 21;2 David
T1 reads David.
Step 3
BEGIN;
T2 starts.
Step 4
UPDATE "store_person" SET "name" = 'Tom' WHERE "store_person"."id" = 2;
T2 updates David to Tom.
Step 5
SELECT "store_person"."id", "store_person"."name" FROM "store_person" WHERE "store_person"."id" = 2 LIMIT 21;2 David
T1 reads David instead of Tom before T2 commits.*Dirty read doesn't occur!!
Step 6
COMMIT;
T1 commits.
Step 7
COMMIT;
T2 commits.
Next, I created and ran the test code of non-repeatable read as shown below:
# "store/views.py"
# ...
@transaction.atomic
def transaction1(flow):
while True:
while True:
if flow[0] == "Step 1":
sleep(0.1)
print("<T1", flow[0] + ">", "BEGIN")
flow[0] = "Step 2"
break
while True:
if flow[0] == "Step 2":
sleep(0.1)
print("<T1", flow[0] + ">", "SELECT")
person = Person.objects.get(id=2)
print(person.id, person.name)
flow[0] = "Step 3"
break
while True:
if flow[0] == "Step 6":
sleep(0.1)
print("<T1", flow[0] + ">", "SELECT")
person = Person.objects.get(id=2)
print(person.id, person.name)
flow[0] = "Step 7"
break
while True:
if flow[0] == "Step 7":
sleep(0.1)
print("<T1", flow[0] + ">", "COMMIT")
break
break
@transaction.atomic
def transaction2(flow):
while True:
while True:
if flow[0] == "Step 3":
sleep(0.1)
print("<T2", flow[0] + ">", "BEGIN")
flow[0] = "Step 4"
break
while True:
if flow[0] == "Step 4":
sleep(0.1)
print("<T2", flow[0] + ">", "UPDATE")
Person.objects.filter(id=2).update(name="Tom")
flow[0] = "Step 5"
break
while True:
if flow[0] == "Step 5":
sleep(0.1)
print("<T2", flow[0] + ">", "COMMIT")
flow[0] = "Step 6"
break
break
# ...
Then, non-repeatable read occurred according to the result below on the console, because non-repeatable read occurs in the READ COMMITTED isolation level in PostgreSQL:
<T1 Step 1> BEGIN
<T1 Step 2> SELECT
2 David # Here
<T2 Step 3> BEGIN
<T2 Step 4> UPDATE
<T2 Step 5> COMMIT
<T1 Step 6> SELECT
2 Tom # Here
<T1 Step 7> COMMIT
And also, I could get the SQL query logs of PostgreSQL below:
[23128]: BEGIN
[23128]: SELECT "store_person"."id", "store_person"."name"
FROM "store_person"
WHERE "store_person"."id" = 2
LIMIT 21
[6368]: BEGIN
[6368]: UPDATE "store_person" SET "name" = 'Tom'
WHERE "store_person"."id" = 2
[6368]: COMMIT
[23128]: SELECT "store_person"."id", "store_person"."name"
FROM "store_person"
WHERE "store_person"."id" = 2
LIMIT 21
[23128]: COMMIT
And, this table below shows the flow and SQL query logs of PostgreSQL above:
Flow
Transaction 1 (T1)
Transaction 2 (T2)
Explanation
Step 1
BEGIN;
T1 starts.
Step 2
SELECT "store_person"."id", "store_person"."name" FROM "store_person" WHERE "store_person"."id" = 2 LIMIT 21;2 David
T1 reads David.
Step 3
BEGIN;
T2 starts.
Step 4
UPDATE "store_person" SET "name" = 'Tom' WHERE "store_person"."id" = 2;
T2 updates David to Tom.
Step 5
COMMIT;
T2 commits.
Step 6
SELECT "store_person"."id", "store_person"."name" FROM "store_person" WHERE "store_person"."id" = 2 LIMIT 21;2 Tom
T1 reads Tom instead of David after T2 commits.*Non-repeatable read occurs!!
Step 7
COMMIT;
T1 commits.
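As an aside (not part of the original answer), the shared flow list polled with sleep(0.1) can be replaced by threading.Event objects, which makes the step ordering explicit and removes the busy-waiting. A minimal sketch of the non-repeatable read scenario above, started with two Thread objects exactly as in call_transcations:
import threading

from django.db import transaction
from .models import Person

t1_first_read_done = threading.Event()
t2_commit_done = threading.Event()

@transaction.atomic
def transaction1():
    print(Person.objects.get(id=2).name)   # first read ("David")
    t1_first_read_done.set()               # let T2 update and commit
    t2_commit_done.wait()                  # block until T2 has committed
    print(Person.objects.get(id=2).name)   # second read ("Tom" under READ COMMITTED)

def transaction2():
    t1_first_read_done.wait()
    with transaction.atomic():
        Person.objects.filter(id=2).update(name="Tom")
    t2_commit_done.set()                   # signalled after the atomic block commits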
Next, I created and ran the test code of phantom read as shown below:
# "store/views.py"
# ...
@transaction.atomic
def transaction1(flow):
while True:
while True:
if flow[0] == "Step 1":
sleep(0.1)
print("<T1", flow[0] + ">", "BEGIN")
flow[0] = "Step 2"
break
while True:
if flow[0] == "Step 2":
sleep(0.1)
print("<T1", flow[0] + ">", "SELECT")
persons = Person.objects.all()
for person in persons:
print(person.id, person.name)
flow[0] = "Step 3"
break
while True:
if flow[0] == "Step 6":
sleep(0.1)
print("<T1", flow[0] + ">", "SELECT")
persons = Person.objects.all()
for person in persons:
print(person.id, person.name)
flow[0] = "Step 7"
break
while True:
if flow[0] == "Step 7":
sleep(0.1)
print("<T1", flow[0] + ">", "COMMIT")
break
break
@transaction.atomic
def transaction2(flow):
while True:
while True:
if flow[0] == "Step 3":
sleep(0.1)
print("<T2", flow[0] + ">", "BEGIN")
flow[0] = "Step 4"
break
while True:
if flow[0] == "Step 4":
sleep(0.1)
print("<T2", flow[0] + ">", "INSERT")
Person.objects.create(id=3, name="Tom")
flow[0] = "Step 5"
break
while True:
if flow[0] == "Step 5":
sleep(0.1)
print("<T2", flow[0] + ">", "COMMIT")
flow[0] = "Step 6"
break
break
# ...
Then, phantom read occurred according to the result below on the console, because phantom read occurs in the READ COMMITTED isolation level in PostgreSQL:
<T1 Step 1> BEGIN
<T1 Step 2> SELECT
1 John # Here
2 David # Here
<T2 Step 3> BEGIN
<T2 Step 4> INSERT
<T2 Step 5> COMMIT
<T1 Step 6> SELECT
1 John # Here
2 David # Here
3 Tom # Here
<T1 Step 7> COMMIT
And also, I could get the SQL query logs of PostgreSQL below:
[15912]: BEGIN
[15912]: SELECT "store_person"."id", "store_person"."name"
FROM "store_person"
[2098]: BEGIN
[2098]: INSERT INTO "store_person" ("id", "name")
VALUES (3, 'Tom')
RETURNING "store_person"."id"
[2098]: COMMIT
[15912]: SELECT "store_person"."id", "store_person"."name"
FROM "store_person"
[15912]: COMMIT
And, this table below shows the flow and SQL query logs of PostgreSQL above:
Flow
Transaction 1 (T1)
Transaction 2 (T2)
Explanation
Step 1
BEGIN;
T1 starts.
Step 2
SELECT "store_person"."id", "store_person"."name" FROM "store_person";1 John2 David
T1 reads 2 rows.
Step 3
BEGIN;
T2 starts.
Step 4
INSERT INTO "store_person" ("id", "name") VALUES (3, 'Tom') RETURNING "store_person"."id";
T2 inserts the row with 3 and Tom to person table.
Step 5
COMMIT;
T2 commits.
Step 6
SELECT "store_person"."id", "store_person"."name" FROM "store_person";1 John2 David3 Tom
T1 reads 3 rows instead of 2 rows after T2 commits.*Phantom read occurs!!
Step 7
COMMIT;
T1 commits.
<Lost update>
First, I created product table with id, name and stock with models.py as shown below:
product table:
id
name
stock
1
Apple
10
2
Orange
20
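The views below refer to a Product model whose models.py is not shown in this answer; a minimal sketch consistent with the id, name and stock columns above (the exact field types are assumptions):
# "store/models.py"

from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=30)
    stock = models.IntegerField()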
# "store/views.py"
# ...
@transaction.atomic
def transaction1(flow):
while True:
while True:
if flow[0] == "Step 1":
sleep(0.1)
print("T1", flow[0], "BEGIN")
flow[0] = "Step 2"
break
while True:
if flow[0] == "Step 2":
sleep(0.1)
print("T1", flow[0], "SELECT")
product = Product.objects.get(id=2)
print(product.id, product.name, product.stock)
flow[0] = "Step 3"
break
while True:
if flow[0] == "Step 5":
sleep(0.1)
print("T1", flow[0], "UPDATE")
Product.objects.filter(id=2).update(stock=13)
flow[0] = "Step 6"
break
while True:
if flow[0] == "Step 6":
sleep(0.1)
print("T1", flow[0], "COMMIT")
flow[0] = "Step 7"
break
break
@transaction.atomic
def transaction2(flow):
while True:
while True:
if flow[0] == "Step 3":
sleep(0.1)
print("T2", flow[0], "BEGIN")
flow[0] = "Step 4"
break
while True:
if flow[0] == "Step 4":
sleep(0.1)
print("T2", flow[0], "SELECT")
product = Product.objects.get(id=2)
print(product.id, product.name, product.stock)
flow[0] = "Step 5"
break
while True:
if flow[0] == "Step 7":
sleep(0.1)
print("T2", flow[0], "UPDATE")
Product.objects.filter(id=2).update(stock=16)
flow[0] = "Step 8"
break
while True:
if flow[0] == "Step 8":
sleep(0.1)
print("T2", flow[0], "COMMIT")
break
break
# ...
Then, lost update occurred according to the result below on the console, because lost update occurs in the READ COMMITTED isolation level in PostgreSQL:
T1 Step 1 BEGIN
T1 Step 2 SELECT # Reads the same row
2 Orange 20
T2 Step 3 BEGIN
T2 Step 4 SELECT # Reads the same row
2 Orange 20
T1 Step 5 UPDATE # Writes "stock"
T1 Step 6 COMMIT # And commits the write
T2 Step 7 UPDATE # Overwrites "stock"
T2 Step 8 COMMIT # And commits the overwrite
And also, I could get the SQL query logs of PostgreSQL below:
[20504]: BEGIN
[20504]: SELECT "store_product"."id", "store_product"."name", "store_product"."stock"
FROM "store_product"
WHERE "store_product"."id" = 2
LIMIT 21
[3840]: BEGIN
[3840]: SELECT "store_product"."id", "store_product"."name", "store_product"."stock"
FROM "store_product"
WHERE "store_product"."id" = 2
LIMIT 21
[20504]: UPDATE "store_product" SET "stock" = 13
WHERE "store_product"."id" = 2
[20504]: COMMIT
[3840]: UPDATE "store_product" SET "stock" = 16
WHERE "store_product"."id" = 2
[3840]: COMMIT
And, this table below shows the flow and SQL query logs of PostgreSQL above:
Flow
Transaction 1 (T1)
Transaction 2 (T2)
Explanation
Step 1
BEGIN;
T1 starts.
Step 2
SELECT "store_product"."id", "store_product"."name", "store_product"."stock" FROM "store_product" WHERE "store_product"."id" = 2 LIMIT 21;2 Orange 20
T1 reads 20 which is updated later to 13 because a customer buys 7 oranges.
Step 3
BEGIN;
T2 starts.
Step 4
SELECT "store_product"."id", "store_product"."name", "store_product"."stock" FROM "store_product" WHERE "store_product"."id" = 2 LIMIT 21;2 Orange 20
T2 reads 20 which is updated later to 16 because a customer buys 4 oranges.
Step 5
UPDATE "store_product" SET "stock" = 13 WHERE "store_product"."id" = 2;
T1 updates 20 to 13.
Step 6
COMMIT;
T1 commits.
Step 7
UPDATE "store_product" SET "stock" = 16 WHERE "store_product"."id" = 2;
T2 updates 13 to 16 after T1 commits.
Step 8
COMMIT;
T2 commits.*Lost update occurs.
<Write skew>
First, I created doctor table with id, name and on_call with models.py as shown below:
doctor table:
id
name
on_call
1
John
True
2
Lisa
True
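As with Product, the Doctor model itself is not shown; a minimal sketch matching the id, name and on_call columns above (the exact field types are assumptions):
# "store/models.py"

from django.db import models

class Doctor(models.Model):
    name = models.CharField(max_length=30)
    on_call = models.BooleanField(default=True)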
# "store/views.py"
# ...
@transaction.atomic
def transaction1(flow):
while True:
while True:
if flow[0] == "Step 1":
print("T1", flow[0], "BEGIN")
flow[0] = "Step 2"
break
while True:
if flow[0] == "Step 2":
print("T1", flow[0], "SELECT")
doctor_count = Doctor.objects.filter(on_call=True).count()
print(doctor_count)
flow[0] = "Step 3"
break
while True:
if flow[0] == "Step 5":
print("T1", flow[0], "UPDATE")
Doctor.objects.filter(id=1).update(on_call=False)
flow[0] = "Step 6"
break
while True:
if flow[0] == "Step 6":
print("T1", flow[0], "COMMIT")
flow[0] = "Step 7"
break
break
@transaction.atomic
def transaction2(flow):
while True:
while True:
if flow[0] == "Step 3":
print("T2", flow[0], "BEGIN")
flow[0] = "Step 4"
break
while True:
if flow[0] == "Step 4":
print("T2", flow[0], "SELECT")
doctor_count = Doctor.objects.filter(on_call=True).count()
print(doctor_count)
flow[0] = "Step 5"
break
while True:
if flow[0] == "Step 7":
print("T2", flow[0], "UPDATE")
Doctor.objects.filter(id=2).update(on_call=False)
flow[0] = "Step 8"
break
while True:
if flow[0] == "Step 8":
print("T2", flow[0], "COMMIT")
break
break
# ...
Then, write skew occurred according to the result below on the console, because write skew occurs in the READ COMMITTED isolation level in PostgreSQL:
T1 Step 1 BEGIN
T1 Step 2 SELECT # Reads the same data
2
T2 Step 3 BEGIN
T2 Step 4 SELECT # Reads the same data
2
T1 Step 5 UPDATE # Writes 'False' to John's "on_call"
T1 Step 6 COMMIT # And commits the write
T2 Step 7 UPDATE # Writes 'False' to Lisa's "on_call"
T2 Step 8 COMMIT # And commits the write
And also, I could get the SQL query logs of PostgreSQL below:
[11252]: BEGIN
[11252]: SELECT COUNT(*)
AS "__count"
FROM "store_doctor"
WHERE "store_doctor"."on_call"
[2368]: BEGIN
[2368]: SELECT COUNT(*)
AS "__count"
FROM "store_doctor"
WHERE "store_doctor"."on_call"
[11252]: UPDATE "store_doctor"
SET "on_call" = false
WHERE "store_doctor"."id" = 1
[11252]: COMMIT
[2368]: UPDATE "store_doctor"
SET "on_call" = false
WHERE "store_doctor"."id" = 2
[2368]: COMMIT
And, this table below shows the flow and SQL query logs of PostgreSQL above:
Flow
Transaction 1 (T1)
Transaction 2 (T2)
Explanation
Step 1
BEGIN;
T1 starts.
Step 2
SELECT COUNT(*) AS "__count" FROM "store_doctor" WHERE "store_doctor"."on_call";2
T1 reads 2 so John can take a rest.
Step 3
BEGIN;
T2 starts.
Step 4
SELECT COUNT(*) AS "__count" FROM "store_doctor" WHERE "store_doctor"."on_call";2
T2 reads 2 so Lisa can take a rest.
Step 5
UPDATE "store_doctor" SET "on_call" = false WHERE "store_doctor"."id" = 1;
T1 updates True to False which means John takes a rest.
Step 6
COMMIT;
T1 commits.
Step 7
UPDATE "store_doctor" SET "on_call" = false WHERE "store_doctor"."id" = 2;
T2 updates True to False which means Lisa takes a rest.
Step 8
COMMIT;
T2 commits. John and Lisa both take a rest.*Write skew occurs.
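Since several of these anomalies only show up under READ COMMITTED, it is useful to rerun the same tests under a stricter isolation level and confirm that the anomaly disappears (or turns into a serialization error). With PostgreSQL and psycopg2, Django lets you set the isolation level per database in settings.py; the connection details below are placeholders:
# "settings.py"

import psycopg2.extensions

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",              # placeholder connection details
        "USER": "myuser",
        "PASSWORD": "mypassword",
        "HOST": "localhost",
        "PORT": "5432",
        "OPTIONS": {
            # e.g. ISOLATION_LEVEL_REPEATABLE_READ or ISOLATION_LEVEL_SERIALIZABLE
            "isolation_level": psycopg2.extensions.ISOLATION_LEVEL_REPEATABLE_READ,
        },
    }
}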
| How to more efficiently test if data anomalies occur in transaction (Django) | I want to test if data anomalies such as dirty read, non-repeatable read, phantom read, lost update and so on occur in transaction.
Actually, I used person table which has id and name as shown below.
person table:
id
name
1
John
2
David
Then, I tested non-repeatable read with test view below and one command prompt. *During sleep(10), one command prompt updates "David" to "Tom" and commits:
# "store/views.py"
from .models import Person
from django.http import HttpResponse
from django.db import transaction
from time import sleep
@transaction.atomic
def test(request):
print(Person.objects.get(pk=2)) # "David"
sleep(10) # Update "David" to "Tom" and commit by one command prompt.
print(Person.objects.get(pk=2)) # "Tom"
return HttpResponse("Test")
But, every time I test data anomalies, I manually need to run test view and update and commit with one command prompt which takes much time.
So, how can I more efficiently test if data anomalies occur in transaction?
| [
"With threads, you can more efficiently test if data anomalies occur in transaction.\nI created 5 sets of code with threads to test 5 common data anomalies dirty read, non-repeatable read, phantom read, lost update and write skew with the Django's default isolation level READ COMMITTED and PostgreSQL as shown below. *Lost update or write skew occurs by race condition.\nI explain about:\n\ndirty read => Here\nnon-repeatable read and phantom read => Here\nlost update and write skew => Here\nwhich anomaly occurs in which isolation level => Here\n\n<Dirty read>, <Non-repeatable read> and <Phantom read>\nFirst, I created person table with id and name with models.py as shown below:\nperson table:\n\n\n\n\nid\nname\n\n\n\n\n1\nJohn\n\n\n2\nDavid\n\n\n\n\n# \"store/models.py\"\n\nfrom django.db import models\n\nclass Person(models.Model):\n name = models.CharField(max_length=30)\n\nThen, I created and ran the test code of dirty read as shown below:\n# \"store/views.py\"\n\nfrom django.db import transaction\nfrom time import sleep\nfrom .models import Person\nfrom threading import Thread\nfrom django.http import HttpResponse\n\[email protected]\ndef transaction1(flow):\n while True:\n while True:\n if flow[0] == \"Step 1\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"BEGIN\")\n flow[0] = \"Step 2\"\n break\n \n while True:\n if flow[0] == \"Step 2\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"SELECT\")\n person = Person.objects.get(id=2)\n print(person.id, person.name)\n flow[0] = \"Step 3\"\n break\n\n while True:\n if flow[0] == \"Step 5\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"SELECT\")\n person = Person.objects.get(id=2)\n print(person.id, person.name)\n flow[0] = \"Step 6\"\n break\n \n while True:\n if flow[0] == \"Step 6\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"COMMIT\")\n flow[0] = \"Step 7\"\n break\n break\n\[email protected]\ndef transaction2(flow):\n while True:\n while True:\n if flow[0] == \"Step 3\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"BEGIN\")\n flow[0] = \"Step 4\"\n break\n\n while True:\n if flow[0] == \"Step 4\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"UPDATE\")\n Person.objects.filter(id=2).update(name=\"Tom\")\n flow[0] = \"Step 5\"\n break\n \n while True:\n if flow[0] == \"Step 7\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"COMMIT\")\n break\n break\n\ndef call_transcations(request): \n flow = [\"Step 1\"]\n\n thread1 = Thread(target=transaction1, args=(flow,), daemon=True)\n thread2 = Thread(target=transaction2, args=(flow,), daemon=True)\n\n thread1.start()\n thread2.start()\n\n thread1.join()\n thread2.join()\n\n return HttpResponse(\"Call_transcations\")\n\nThen, dirty read didn't occur according to the result belew on console because in any isolation levels in PostgreSQL, dirty read doesn't happen:\n<T1 Step 1> BEGIN\n<T1 Step 2> SELECT\n2 David # Here\n<T2 Step 3> BEGIN\n<T2 Step 4> UPDATE\n<T1 Step 5> SELECT\n2 David # Here\n<T1 Step 6> COMMIT\n<T2 Step 7> COMMIT\n\nAnd also, I could get the SQL query logs of PostgreSQL below. 
You can check how to log SQL queries on PostgreSQL:\n[23576]: BEGIN\n[23576]: SELECT \"store_person\".\"id\", \"store_person\".\"name\" \n FROM \"store_person\" \n WHERE \"store_person\".\"id\" = 2 \n LIMIT 21\n[8600]: BEGIN\n[8600]: UPDATE \"store_person\" SET \"name\" = 'Tom' \n WHERE \"store_person\".\"id\" = 2\n[23576]: SELECT \"store_person\".\"id\", \"store_person\".\"name\" \n FROM \"store_person\" \n WHERE \"store_person\".\"id\" = 2 \n LIMIT 21\n[23576]: COMMIT\n[8600]: COMMIT\n\nAnd, this table below shows the flow and SQL query logs of PostgreSQL above:\n\n\n\n\nFlow\nTransaction 1 (T1)\nTransaction 2 (T2)\nExplanation\n\n\n\n\nStep 1\nBEGIN;\n\nT1 starts.\n\n\nStep 2\nSELECT \"store_person\".\"id\", \"store_person\".\"name\" FROM \"store_person\" WHERE \"store_person\".\"id\" = 2 LIMIT 21;2 David\n\nT1 reads David.\n\n\nStep 3\n\nBEGIN;\nT2 starts.\n\n\nStep 4\n\nUPDATE \"store_person\" SET \"name\" = 'Tom' WHERE \"store_person\".\"id\" = 2;\nT2 updates David to Tom.\n\n\nStep 5\nSELECT \"store_person\".\"id\", \"store_person\".\"name\" FROM \"store_person\" WHERE \"store_person\".\"id\" = 2 LIMIT 21;2 David\n\nT1 reads David instead of Tom before T2 commits.*Dirty read doesn't occur!!\n\n\nStep 6\nCOMMIT;\n\nT1 commits.\n\n\nStep 7\n\nCOMMIT;\nT2 commits.\n\n\n\n\nNext, I created and ran the test code of non-repeatable read as shown below:\n# \"store/views.py\"\n\n# ...\n\[email protected]\ndef transaction1(flow):\n while True:\n while True:\n if flow[0] == \"Step 1\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"BEGIN\")\n flow[0] = \"Step 2\"\n break\n \n while True:\n if flow[0] == \"Step 2\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"SELECT\")\n person = Person.objects.get(id=2)\n print(person.id, person.name)\n flow[0] = \"Step 3\"\n break\n\n while True:\n if flow[0] == \"Step 6\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"SELECT\") \n person = Person.objects.get(id=2)\n print(person.id, person.name)\n flow[0] = \"Step 7\"\n break\n \n while True:\n if flow[0] == \"Step 7\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"COMMIT\")\n break\n break\n\[email protected]\ndef transaction2(flow):\n while True:\n while True:\n if flow[0] == \"Step 3\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"BEGIN\")\n flow[0] = \"Step 4\"\n break\n\n while True:\n if flow[0] == \"Step 4\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"UPDATE\")\n Person.objects.filter(id=2).update(name=\"Tom\")\n flow[0] = \"Step 5\"\n break\n \n while True:\n if flow[0] == \"Step 5\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"COMMIT\")\n flow[0] = \"Step 6\"\n break\n break\n\n# ...\n\nThen, non-repeatable read occurred according to the result belew on console because in READ COMMITTED isolation level in PostgreSQL, non-repeatable read occurs:\n<T1 Step 1> BEGIN\n<T1 Step 2> SELECT\n2 David # Here\n<T2 Step 3> BEGIN\n<T2 Step 4> UPDATE\n<T2 Step 5> COMMIT\n<T1 Step 6> SELECT\n2 Tom # Here\n<T1 Step 7> COMMIT\n\nAnd also, I could get the SQL query logs of PostgreSQL below:\n[23128]: BEGIN\n[23128]: SELECT \"store_person\".\"id\", \"store_person\".\"name\" \n FROM \"store_person\" \n WHERE \"store_person\".\"id\" = 2 \n LIMIT 21\n[6368]: BEGIN\n[6368]: UPDATE \"store_person\" SET \"name\" = 'Tom' \n WHERE \"store_person\".\"id\" = 2\n[6368]: COMMIT\n[23128]: SELECT \"store_person\".\"id\", \"store_person\".\"name\" \n FROM \"store_person\" \n WHERE \"store_person\".\"id\" = 2 \n LIMIT 21\n[23128]: COMMIT\n\nAnd, this table below shows the flow and SQL query logs of 
PostgreSQL above:\n\n\n\n\nFlow\nTransaction 1 (T1)\nTransaction 2 (T2)\nExplanation\n\n\n\n\nStep 1\nBEGIN;\n\nT1 starts.\n\n\nStep 2\nSELECT \"store_person\".\"id\", \"store_person\".\"name\" FROM \"store_person\" WHERE \"store_person\".\"id\" = 2 LIMIT 21;2 David\n\nT1 reads David.\n\n\nStep 3\n\nBEGIN;\nT2 starts.\n\n\nStep 4\n\nUPDATE \"store_person\" SET \"name\" = 'Tom' WHERE \"store_person\".\"id\" = 2;\nT2 updates David to Tom.\n\n\nStep 5\n\nCOMMIT;\nT2 commits.\n\n\nStep 6\nSELECT \"store_person\".\"id\", \"store_person\".\"name\" FROM \"store_person\" WHERE \"store_person\".\"id\" = 2 LIMIT 21;2 Tom\n\nT1 reads Tom instead of David after T2 commits.*Non-repeatable read occurs!!\n\n\nStep 7\nCOMMIT;\n\nT1 commits.\n\n\n\n\nNext, I created and ran the test code of phantom read as shown below:\n# \"store/views.py\"\n\n# ...\n\[email protected]\ndef transaction1(flow):\n while True:\n while True:\n if flow[0] == \"Step 1\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"BEGIN\")\n flow[0] = \"Step 2\"\n break\n \n while True:\n if flow[0] == \"Step 2\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"SELECT\")\n persons = Person.objects.all()\n for person in persons:\n print(person.id, person.name)\n flow[0] = \"Step 3\"\n break\n\n while True:\n if flow[0] == \"Step 6\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"SELECT\") \n persons = Person.objects.all()\n for person in persons:\n print(person.id, person.name)\n flow[0] = \"Step 7\"\n break\n \n while True:\n if flow[0] == \"Step 7\":\n sleep(0.1)\n print(\"<T1\", flow[0] + \">\", \"COMMIT\")\n break\n break\n\[email protected]\ndef transaction2(flow):\n while True:\n while True:\n if flow[0] == \"Step 3\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"BEGIN\")\n flow[0] = \"Step 4\"\n break\n\n while True:\n if flow[0] == \"Step 4\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"INSERT\")\n Person.objects.create(id=3, name=\"Tom\")\n flow[0] = \"Step 5\"\n break\n \n while True:\n if flow[0] == \"Step 5\":\n sleep(0.1)\n print(\"<T2\", flow[0] + \">\", \"COMMIT\")\n flow[0] = \"Step 6\"\n break\n break\n\n# ...\n\nThen, phantom read occurred according to the result belew on console because in READ COMMITTED isolation level in PostgreSQL, phantom read occurs:\n<T1 Step 1> BEGIN\n<T1 Step 2> SELECT\n1 John # Here\n2 David # Here\n<T2 Step 3> BEGIN\n<T2 Step 4> INSERT\n<T2 Step 5> COMMIT\n<T1 Step 6> SELECT\n1 John # Here\n2 David # Here\n3 Tom # Here\n<T1 Step 7> COMMIT\n\nAnd also, I could get the SQL query logs of PostgreSQL below:\n[15912]: BEGIN\n[15912]: SELECT \"store_person\".\"id\", \"store_person\".\"name\" \n FROM \"store_person\"\n[2098]: BEGIN\n[2098]: INSERT INTO \"store_person\" (\"id\", \"name\") \n VALUES (3, 'Tom') \n RETURNING \"store_person\".\"id\"\n[2098]: COMMIT\n[15912]: SELECT \"store_person\".\"id\", \"store_person\".\"name\" \n FROM \"store_person\"\n[15912]: COMMIT\n\nAnd, this table below shows the flow and SQL query logs of PostgreSQL above:\n\n\n\n\nFlow\nTransaction 1 (T1)\nTransaction 2 (T2)\nExplanation\n\n\n\n\nStep 1\nBEGIN;\n\nT1 starts.\n\n\nStep 2\nSELECT \"store_person\".\"id\", \"store_person\".\"name\" FROM \"store_person\";1 John2 David\n\nT1 reads 2 rows.\n\n\nStep 3\n\nBEGIN;\nT2 starts.\n\n\nStep 4\n\nINSERT INTO \"store_person\" (\"id\", \"name\") VALUES (3, 'Tom') RETURNING \"store_person\".\"id\";\nT2 inserts the row with 3 and Tom to person table.\n\n\nStep 5\n\nCOMMIT;\nT2 commits.\n\n\nStep 6\nSELECT \"store_person\".\"id\", \"store_person\".\"name\" FROM 
\"store_person\";1 John2 David3 Tom\n\nT1 reads 3 rows instead of 2 rows after T2 commits.*Phantom read occurs!!\n\n\nStep 7\nCOMMIT;\n\nT1 commits.\n\n\n\n<Lost update>\nFirst, I created product table with id, name and stock with models.py as shown below:\nproduct table:\n\n\n\n\nid\nname\nstock\n\n\n\n\n1\nApple\n10\n\n\n2\nOrange\n20\n\n\n\n\n# \"store/views.py\"\n\n# ...\n\[email protected]\ndef transaction1(flow):\n while True:\n while True:\n if flow[0] == \"Step 1\":\n sleep(0.1)\n print(\"T1\", flow[0], \"BEGIN\")\n flow[0] = \"Step 2\"\n break\n \n while True:\n if flow[0] == \"Step 2\":\n sleep(0.1)\n print(\"T1\", flow[0], \"SELECT\")\n product = Product.objects.get(id=2)\n print(product.id, product.name, product.stock)\n flow[0] = \"Step 3\"\n break\n\n while True:\n if flow[0] == \"Step 5\":\n sleep(0.1)\n print(\"T1\", flow[0], \"UPDATE\")\n Product.objects.filter(id=2).update(stock=13)\n flow[0] = \"Step 6\"\n break\n \n while True:\n if flow[0] == \"Step 6\":\n sleep(0.1)\n print(\"T1\", flow[0], \"COMMIT\")\n flow[0] = \"Step 7\"\n break\n break\n\[email protected]\ndef transaction2(flow):\n while True:\n while True:\n if flow[0] == \"Step 3\":\n sleep(0.1)\n print(\"T2\", flow[0], \"BEGIN\")\n flow[0] = \"Step 4\"\n break\n\n while True:\n if flow[0] == \"Step 4\":\n sleep(0.1)\n print(\"T2\", flow[0], \"SELECT\")\n product = Product.objects.get(id=2)\n print(product.id, product.name, product.stock)\n flow[0] = \"Step 5\"\n break\n \n while True:\n if flow[0] == \"Step 7\":\n sleep(0.1)\n print(\"T2\", flow[0], \"UPDATE\")\n Product.objects.filter(id=2).update(stock=16)\n flow[0] = \"Step 8\"\n break\n\n while True:\n if flow[0] == \"Step 8\":\n sleep(0.1)\n print(\"T2\", flow[0], \"COMMIT\")\n break\n break\n\n# ...\n\nThen, lost update occurred according to the result belew on console because in READ COMMITTED isolation level in PostgreSQL, lost update occurs:\nT1 Step 1 BEGIN\nT1 Step 2 SELECT # Reads the same row\n2 Orange 20\nT2 Step 3 BEGIN\nT2 Step 4 SELECT # Reads the same row\n2 Orange 20\nT1 Step 5 UPDATE # Writes \"stock\"\nT1 Step 6 COMMIT # And commits the write\nT2 Step 7 UPDATE # Overwrites \"stock\"\nT2 Step 8 COMMIT # And commits the overwrite\n\nAnd also, I could get the SQL query logs of PostgreSQL below:\n[20504]: BEGIN\n[20504]: SELECT \"store_product\".\"id\", \"store_product\".\"name\", \"store_product\".\"stock\" \n FROM \"store_product\" \n WHERE \"store_product\".\"id\" = 2 \n LIMIT 21\n[3840]: BEGIN\n[3840]: SELECT \"store_product\".\"id\", \"store_product\".\"name\", \"store_product\".\"stock\" \n FROM \"store_product\" \n WHERE \"store_product\".\"id\" = 2 \n LIMIT 21\n[20504]: UPDATE \"store_product\" SET \"stock\" = 13 \n WHERE \"store_product\".\"id\" = 2\n[20504]: COMMIT\n[3840]: UPDATE \"store_product\" SET \"stock\" = 16 \n WHERE \"store_product\".\"id\" = 2\n[3840]: COMMIT\n\nAnd, this table below shows the flow and SQL query logs of PostgreSQL above:\n\n\n\n\nFlow\nTransaction 1 (T1)\nTransaction 2 (T2)\nExplanation\n\n\n\n\nStep 1\nBEGIN;\n\nT1 starts.\n\n\nStep 2\nSELECT \"store_product\".\"id\", \"store_product\".\"name\", \"store_product\".\"stock\" FROM \"store_product\" WHERE \"store_product\".\"id\" = 2 LIMIT 21;2 Orange 20\n\nT1 reads 20 which is updated later to 13 because a customer buys 7 oranges.\n\n\nStep 3\n\nBEGIN;\nT2 starts.\n\n\nStep 4\n\nSELECT \"store_product\".\"id\", \"store_product\".\"name\", \"store_product\".\"stock\" FROM \"store_product\" WHERE \"store_product\".\"id\" = 2 LIMIT 21;2 Orange 20\nT2 reads 20 
which is updated later to 16 because a customer buys 4 oranges.\n\n\nStep 5\nUPDATE \"store_product\" SET \"stock\" = 13 WHERE \"store_product\".\"id\" = 2;\n\nT1 updates 20 to 13.\n\n\nStep 6\nCOMMIT;\n\nT1 commits.\n\n\nStep 7\n\nUPDATE \"store_product\" SET \"stock\" = 16 WHERE \"store_product\".\"id\" = 2;\nT2 updates 13 to 16 after T1 commits.\n\n\nStep 8\n\nCOMMIT;\nT2 commits.*Lost update occurs.\n\n\n\n<Write skew>\nFirst, I created doctor table with id, name and on_call with models.py as shown below:\ndoctor table:\n\n\n\n\nid\nname\non_call\n\n\n\n\n1\nJohn\nTrue\n\n\n2\nLisa\nTrue\n\n\n\n\n# \"store/views.py\"\n\n# ...\n\[email protected]\ndef transaction1(flow):\n while True:\n while True:\n if flow[0] == \"Step 1\":\n print(\"T1\", flow[0], \"BEGIN\")\n flow[0] = \"Step 2\"\n break\n \n while True:\n if flow[0] == \"Step 2\":\n print(\"T1\", flow[0], \"SELECT\")\n doctor_count = Doctor.objects.filter(on_call=True).count()\n print(doctor_count)\n flow[0] = \"Step 3\"\n break\n\n while True:\n if flow[0] == \"Step 5\":\n print(\"T1\", flow[0], \"UPDATE\")\n Doctor.objects.filter(id=1).update(on_call=False)\n flow[0] = \"Step 6\"\n break\n \n while True:\n if flow[0] == \"Step 6\":\n print(\"T1\", flow[0], \"COMMIT\")\n flow[0] = \"Step 7\"\n break\n break\n\[email protected]\ndef transaction2(flow):\n while True:\n while True:\n if flow[0] == \"Step 3\":\n print(\"T2\", flow[0], \"BEGIN\")\n flow[0] = \"Step 4\"\n break\n\n while True:\n if flow[0] == \"Step 4\":\n print(\"T2\", flow[0], \"SELECT\")\n doctor_count = Doctor.objects.filter(on_call=True).count()\n print(doctor_count)\n flow[0] = \"Step 5\"\n break\n \n while True:\n if flow[0] == \"Step 7\":\n print(\"T2\", flow[0], \"UPDATE\")\n Doctor.objects.filter(id=2).update(on_call=False)\n flow[0] = \"Step 8\"\n break\n\n while True:\n if flow[0] == \"Step 8\":\n print(\"T2\", flow[0], \"COMMIT\")\n break\n break\n\n# ...\n\nThen, write skew occurred according to the result belew on console because in READ COMMITTED isolation level in PostgreSQL, write skew occurs:\nT1 Step 1 BEGIN\nT1 Step 2 SELECT # Reads the same data\n2 \nT2 Step 3 BEGIN\nT2 Step 4 SELECT # Reads the same data\n2\nT1 Step 5 UPDATE # Writes 'False' to John's \"on_call\" \nT1 Step 6 COMMIT # And commits the write\nT2 Step 7 UPDATE # Writes 'False' to Lisa's \"on_call\" \nT2 Step 8 COMMIT # And commits the write\n\nAnd also, I could get the SQL query logs of PostgreSQL below:\n[11252]: BEGIN\n[11252]: SELECT COUNT(*) \n AS \"__count\" \n FROM \"store_doctor\" \n WHERE \"store_doctor\".\"on_call\"\n[2368]: BEGIN\n[2368]: SELECT COUNT(*) \n AS \"__count\" \n FROM \"store_doctor\" \n WHERE \"store_doctor\".\"on_call\"\n[11252]: UPDATE \"store_doctor\" \n SET \"on_call\" = false \n WHERE \"store_doctor\".\"id\" = 1\n[11252]: COMMIT\n[2368]: UPDATE \"store_doctor\" \n SET \"on_call\" = false \n WHERE \"store_doctor\".\"id\" = 2\n[2368]: COMMIT\n\nAnd, this table below shows the flow and SQL query logs of PostgreSQL above:\n\n\n\n\nFlow\nTransaction 1 (T1)\nTransaction 2 (T2)\nExplanation\n\n\n\n\nStep 1\nBEGIN;\n\nT1 starts.\n\n\nStep 2\nSELECT COUNT(*) AS \"__count\" FROM \"store_doctor\" WHERE \"store_doctor\".\"on_call\";2\n\nT1 reads 2 so John can take a rest.\n\n\nStep 3\n\nBEGIN;\nT2 starts.\n\n\nStep 4\n\nSELECT COUNT(*) AS \"__count\" FROM \"store_doctor\" WHERE \"store_doctor\".\"on_call\";2\nT2 reads 2 so Lisa can take a rest.\n\n\nStep 5\nUPDATE \"store_doctor\" SET \"on_call\" = false WHERE \"store_doctor\".\"id\" = 1;\n\nT1 updates True to False 
which means John takes a rest.\n\n\nStep 6\nCOMMIT;\n\nT1 commits.\n\n\nStep 7\n\nUPDATE \"store_doctor\" SET \"on_call\" = false WHERE \"store_doctor\".\"id\" = 2;\nT2 updates True to False which means Lisa takes a rest.\n\n\nStep 8\n\nCOMMIT;\nT2 commits.John and Lisa both take a rest.*Write skew occurs.\n\n\n\n"
] | [
0
] | [] | [] | [
"data_anomalies",
"django",
"python",
"python_3.x",
"testing"
] | stackoverflow_0074183272_data_anomalies_django_python_python_3.x_testing.txt |
Q:
Extracting the first day of month of a datetime type column in pandas
I have the following dataframe:
user_id purchase_date
1 2015-01-23 14:05:21
2 2015-02-05 05:07:30
3 2015-02-18 17:08:51
4 2015-03-21 17:07:30
5 2015-03-11 18:32:56
6 2015-03-03 11:02:30
and purchase_date is a datetime64[ns] column. I need to add a new column df['month'] that contains the first day of the month of the purchase date:
df['month']
2015-01-01
2015-02-01
2015-02-01
2015-03-01
2015-03-01
2015-03-01
I'm looking for something like DATE_FORMAT(purchase_date, "%Y-%m-01") m in SQL. I have tried the following code:
df['month']=df['purchase_date'].apply(lambda x : x.replace(day=1))
It works somehow but returns: 2015-01-01 14:05:21.
A:
The simplest and fastest approach is to convert to a numpy array with to_numpy and then cast:
df['month'] = df['purchase_date'].to_numpy().astype('datetime64[M]')
print (df)
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
Another solution uses floor with pd.offsets.MonthBegin(1), adding pd.offsets.MonthEnd(0) for correct output when the date is already the first day of the month:
df['month'] = (df['purchase_date'].dt.floor('d') +
pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(1))
print (df)
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
df['month'] = ((df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(1))
.dt.floor('d'))
print (df)
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
The last solution is to create a month period with to_period:
df['month'] = df['purchase_date'].dt.to_period('M')
print (df)
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01
1 2 2015-02-05 05:07:30 2015-02
2 3 2015-02-18 17:08:51 2015-02
3 4 2015-03-21 17:07:30 2015-03
4 5 2015-03-11 18:32:56 2015-03
5 6 2015-03-03 11:02:30 2015-03
... and then convert back to datetimes with to_timestamp, but it is a bit slower:
df['month'] = df['purchase_date'].dt.to_period('M').dt.to_timestamp()
print (df)
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
Since there are many solutions, here are timings (in pandas 1.2.3):
rng = pd.date_range('1980-04-01 15:41:12', periods=100000, freq='20H')
df = pd.DataFrame({'purchase_date': rng})
print (df.head())
In [70]: %timeit df['purchase_date'].to_numpy().astype('datetime64[M]')
8.6 ms ± 27.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [71]: %timeit df['purchase_date'].dt.floor('d') + pd.offsets.MonthEnd(n=0) - pd.offsets.MonthBegin(n=1)
23 ms ± 130 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [72]: %timeit (df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(1)).dt.floor('d')
23.6 ms ± 97.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [73]: %timeit df['purchase_date'].dt.to_period('M')
9.25 ms ± 215 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [74]: %timeit df['purchase_date'].dt.to_period('M').dt.to_timestamp()
17.6 ms ± 485 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [76]: %timeit df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(normalize=True)
23.1 ms ± 116 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [77]: %timeit df['purchase_date'].dt.normalize().map(MonthBegin().rollback)
1.66 s ± 7.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
A:
We can use date offset in conjunction with Series.dt.normalize:
In [60]: df['month'] = df['purchase_date'].dt.normalize() - pd.offsets.MonthBegin(1)
In [61]: df
Out[61]:
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
Or a much nicer solution from @BradSolomon:
In [95]: df['month'] = df['purchase_date'] - pd.offsets.MonthBegin(1, normalize=True)
In [96]: df
Out[96]:
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
A:
How about this easy solution?
As purchase_date is already in datetime64[ns] format, you can use strftime to format the date so it always shows the first day of the month.
df['date'] = df['purchase_date'].apply(lambda x: x.strftime('%Y-%m-01'))
print(df)
user_id purchase_date date
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
Because we used strftime, the date column is now of object (string) type:
print(df.dtypes)
user_id int64
purchase_date datetime64[ns]
date object
dtype: object
Now if you want it to be in datetime64[ns], just use pd.to_datetime():
df['date'] = pd.to_datetime(df['date'])
print(df.dtypes)
user_id int64
purchase_date datetime64[ns]
date datetime64[ns]
dtype: object
A:
Most proposed solutions don't work for the first day of the month.
The following solution works for any day of the month:
df['month'] = df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(normalize=True)
[EDIT]
Another, more readable, solution is:
from pandas.tseries.offsets import MonthBegin
df['month'] = df['purchase_date'].dt.normalize().map(MonthBegin().rollback)
Be aware not to use:
df['month'] = df['purchase_date'].map(MonthBegin(normalize=True).rollback)
because that gives incorrect results for the first day due to a bug: https://github.com/pandas-dev/pandas/issues/32616
A:
Try this ..
df['month']=pd.to_datetime(df.purchase_date.astype(str).str[0:7]+'-01')
Out[187]:
user_id purchase_date month
0 1 2015-01-23 14:05:21 2015-01-01
1 2 2015-02-05 05:07:30 2015-02-01
2 3 2015-02-18 17:08:51 2015-02-01
3 4 2015-03-21 17:07:30 2015-03-01
4 5 2015-03-11 18:32:56 2015-03-01
5 6 2015-03-03 11:02:30 2015-03-01
A:
To extract the first day of every month, you could write a little helper function that will also work if the provided date is already the first of the month. The function looks like this:
def first_of_month(date):
return date + pd.offsets.MonthEnd(-1) + pd.offsets.Day(1)
You can apply this function on pd.Series:
df['month'] = df['purchase_date'].apply(first_of_month)
With that you will get the month column as a Timestamp. If you need a specific format, you might convert it with the strftime() method.
df['month_str'] = df['month'].dt.strftime('%Y-%m-%d')
A:
For me df['purchase_date'] - pd.offsets.MonthBegin(1) didn't work (it fails for the first day of the month), so I'm subtracting the days of the month like this:
df['purchase_date'] - pd.to_timedelta(df['purchase_date'].dt.day - 1, unit='d')
A:
@Eyal: This is what I did to get the first day of the month using pd.offsets.MonthBegin and handle the scenario where the day is already the first day of the month.
import datetime
from_date= pd.to_datetime('2018-12-01')
from_date = from_date - pd.offsets.MonthBegin(1, normalize=True) if not from_date.is_month_start else from_date
from_date
result: Timestamp('2018-12-01 00:00:00')
from_date= pd.to_datetime('2018-12-05')
from_date = from_date - pd.offsets.MonthBegin(1, normalize=True) if not from_date.is_month_start else from_date
from_date
result: Timestamp('2018-12-01 00:00:00')
A:
Just adding my 2 cents, for the sake of completeness:
1 - transform purchase_date to date, instead of datetime. This will remove hour, minute, second, etc...
df['purchase_date'] = df['purchase_date'].dt.date
2 - apply the datetime replace, to use day 1 instead of the original:
df['purchase_date_begin'] = df['purchase_date'].apply(lambda x: x.replace(day=1))
This replace method is available in the datetime library:
from datetime import date
today = date.today()
month_start = today.replace(day=1)
and you can replace day, month, year, etc...
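For example (a small illustration not in the original answer), the same method can snap a date to the first day of its year by replacing both month and day:
from datetime import date

today = date.today()
year_start = today.replace(month=1, day=1)  # first day of the current year
print(year_start)                           # e.g. 2023-01-01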
| Extracting the first day of month of a datetime type column in pandas | I have the following dataframe:
user_id purchase_date
1 2015-01-23 14:05:21
2 2015-02-05 05:07:30
3 2015-02-18 17:08:51
4 2015-03-21 17:07:30
5 2015-03-11 18:32:56
6 2015-03-03 11:02:30
and purchase_date is a datetime64[ns] column. I need to add a new column df[month] that contains first day of the month of the purchase date:
df['month']
2015-01-01
2015-02-01
2015-02-01
2015-03-01
2015-03-01
2015-03-01
I'm looking for something like DATE_FORMAT(purchase_date, "%Y-%m-01") m in SQL. I have tried the following code:
df['month']=df['purchase_date'].apply(lambda x : x.replace(day=1))
It works somehow but returns: 2015-01-01 14:05:21.
| [
"Simpliest and fastest is convert to numpy array by to_numpy and then cast:\ndf['month'] = df['purchase_date'].to_numpy().astype('datetime64[M]')\nprint (df)\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\nAnother solution with floor and pd.offsets.MonthBegin(1) and add pd.offsets.MonthEnd(0) for correct ouput if first day of month:\ndf['month'] = (df['purchase_date'].dt.floor('d') + \n pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(1))\nprint (df)\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\n\ndf['month'] = ((df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(1))\n .dt.floor('d'))\nprint (df)\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\nLast solution is create month period by to_period:\ndf['month'] = df['purchase_date'].dt.to_period('M')\nprint (df)\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01\n1 2 2015-02-05 05:07:30 2015-02\n2 3 2015-02-18 17:08:51 2015-02\n3 4 2015-03-21 17:07:30 2015-03\n4 5 2015-03-11 18:32:56 2015-03\n5 6 2015-03-03 11:02:30 2015-03\n\n... and then to datetimes by to_timestamp, but it is a bit slowier:\ndf['month'] = df['purchase_date'].dt.to_period('M').dt.to_timestamp()\nprint (df)\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\nThere are many solutions, so:\nTimings (in pandas 1.2.3):\nrng = pd.date_range('1980-04-01 15:41:12', periods=100000, freq='20H')\ndf = pd.DataFrame({'purchase_date': rng}) \nprint (df.head())\n\n\n\nIn [70]: %timeit df['purchase_date'].to_numpy().astype('datetime64[M]')\n8.6 ms ± 27.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nIn [71]: %timeit df['purchase_date'].dt.floor('d') + pd.offsets.MonthEnd(n=0) - pd.offsets.MonthBegin(n=1)\n23 ms ± 130 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [72]: %timeit (df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(1)).dt.floor('d')\n23.6 ms ± 97.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [73]: %timeit df['purchase_date'].dt.to_period('M')\n9.25 ms ± 215 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nIn [74]: %timeit df['purchase_date'].dt.to_period('M').dt.to_timestamp()\n17.6 ms ± 485 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n\nIn [76]: %timeit df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(normalize=True)\n23.1 ms ± 116 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [77]: %timeit df['purchase_date'].dt.normalize().map(MonthBegin().rollback)\n1.66 s ± 7.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n",
"We can use date offset in conjunction with Series.dt.normalize:\nIn [60]: df['month'] = df['purchase_date'].dt.normalize() - pd.offsets.MonthBegin(1)\n\nIn [61]: df\nOut[61]:\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\nOr much nicer solution from @BradSolomon\nIn [95]: df['month'] = df['purchase_date'] - pd.offsets.MonthBegin(1, normalize=True)\n\nIn [96]: df\nOut[96]:\n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\n",
"How about this easy solution?\nAs purchase_date is already in datetime64[ns] format, you can use strftime to format the date to always have the first day of month.\ndf['date'] = df['purchase_date'].apply(lambda x: x.strftime('%Y-%m-01'))\n\nprint(df)\n user_id purchase_date date\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\nBecause we used strftime, now the date column is in object (string) type:\nprint(df.dtypes)\nuser_id int64\npurchase_date datetime64[ns]\ndate object\ndtype: object\n\nNow if you want it to be in datetime64[ns], just use pd.to_datetime():\ndf['date'] = pd.to_datetime(df['date'])\n\nprint(df.dtypes)\nuser_id int64\npurchase_date datetime64[ns]\ndate datetime64[ns]\ndtype: object\n\n",
"Most proposed solutions don't work for the first day of the month.\nFollowing solution works for any day of the month:\ndf['month'] = df['purchase_date'] + pd.offsets.MonthEnd(0) - pd.offsets.MonthBegin(normalize=True)\n\n[EDIT]\nAnother, more readable, solution is:\nfrom pandas.tseries.offsets import MonthBegin\ndf['month'] = df['purchase_date'].dt.normalize().map(MonthBegin().rollback)\n\nBe aware not to use:\ndf['month'] = df['purchase_date'].map(MonthBegin(normalize=True).rollback)\n\nbecause that gives incorrect results for the first day due to a bug: https://github.com/pandas-dev/pandas/issues/32616\n",
"Try this ..\ndf['month']=pd.to_datetime(df.purchase_date.astype(str).str[0:7]+'-01')\n\nOut[187]: \n user_id purchase_date month\n0 1 2015-01-23 14:05:21 2015-01-01\n1 2 2015-02-05 05:07:30 2015-02-01\n2 3 2015-02-18 17:08:51 2015-02-01\n3 4 2015-03-21 17:07:30 2015-03-01\n4 5 2015-03-11 18:32:56 2015-03-01\n5 6 2015-03-03 11:02:30 2015-03-01\n\n",
"To extract the first day of every month, you could write a little helper function that will also work if the provided date is already the first of month. The function looks like this:\ndef first_of_month(date):\n return date + pd.offsets.MonthEnd(-1) + pd.offsets.Day(1)\n\nYou can apply this function on pd.Series:\ndf['month'] = df['purchase_date'].apply(first_of_month)\n\nWith that you will get the month column as a Timestamp. If you need a specific format, you might convert it with the strftime() method.\ndf['month_str'] = df['month'].dt.strftime('%Y-%m-%d')\n\n",
"For me df['purchase_date'] - pd.offsets.MonthBegin(1) didn't work (it fails for the first day of the month), so I'm subtracting the days of the month like this:\ndf['purchase_date'] - pd.to_timedelta(df['purchase_date'].dt.day - 1, unit='d')\n\n",
"@Eyal: This is what I did to get the first day of the month using pd.offsets.MonthBegin and handle the scenario where day is already first day of month.\nimport datetime\n\nfrom_date= pd.to_datetime('2018-12-01')\n\nfrom_date = from_date - pd.offsets.MonthBegin(1, normalize=True) if not from_date.is_month_start else from_date\n\nfrom_date\n\nresult: Timestamp('2018-12-01 00:00:00')\nfrom_date= pd.to_datetime('2018-12-05')\n\nfrom_date = from_date - pd.offsets.MonthBegin(1, normalize=True) if not rom_date.is_month_start else from_date\n\nfrom_date\n\nresult: Timestamp('2018-12-01 00:00:00')\n",
"Just adding my 2 cents, for the sake of completeness:\n1 - transform purchase_date to date, instead of datetime. This will remove hour, minute, second, etc...\ndf['purchase_date'] = df['purchase_date'].dt.date\n\n2 - apply the datetime replace, to use day 1 instead of the original:\ndf['purchase_date_begin'] = df['purchase_date'].apply(lambda x: x.replace(day=1))\n\nThis replace method is available on the datetime library:\nfrom datetime import date\n\ntoday = date.today()\nmonth_start = today.replace(day=1)\n\nand you can replace day, month, year, etc...\n"
] | [
98,
13,
11,
8,
6,
3,
2,
0,
0
] | [
"try this Pandas libraries, where 'purchase_date' is date parameter placed into the module.\ndate['month_start'] = pd.to_datetime(sched_slim.purchase_date)\n.dt.to_period('M')\n.dt.to_timestamp()\n\n"
] | [
-1
] | [
"dataframe",
"datetime64",
"pandas",
"python"
] | stackoverflow_0045304531_dataframe_datetime64_pandas_python.txt |
Q:
Jinja2 - getting users selected value from a table and saving it as a variable
So, I am trying to develop a small site where the user selects a time from a drop-down box and that selected time gets displayed on another page. I am struggling to capture the user's input from the drop-down box and send it to the function that generates the page showing the user's selected input.
I generate the drop-down list with a loop that receives an array as input and creates an option element for each value in it.
My question is, how do I capture the user's selected option and pass it on to the showTime function?
Is there a Jinja2-native way of solving this?
app.py code
@app.route("/timeSelect")
def timeSelect():
times = [1,2,4,8,12]
return render_template("timeSelect.jinja", times=times)
@app.route("/showTime/<int:time>")
def showTime(time):
return render_template("showtest.jinja",time=time)
timeSelect.jinja code
<select class="form-select form-select-lg mb-3" aria-label=".form-select-lg example">
<option selected>Select Monitoring Time</option>
{%for time in times%}
<option value="{{time}}" >{{time}} hours</option>
{%endfor%}
</select>
<form method="get" action="{{ url_for('showTime', time=time)}}">
<button type="submit" class="btn btn-primary">submit</button>
</form>
showtest.jinja code
{{time}}
A:
Create a JavaScript-based onClick event handler that is triggered when a value in the drop-down is clicked and posts the data to the back-end application; then capture the response and process it accordingly.
Helpful Blog:-
Crud Using Ajax and JSON
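As a rough illustration of the back-end half of that approach (the JavaScript half is omitted here), a Flask view could accept the posted value and respond with JSON. The /selectTime route name and the "time" JSON key are assumptions made for this sketch, not part of the original question or answer; only the /showTime/<int:time> path reuses the route already defined in the question.
# Hypothetical Flask endpoint for the AJAX approach described above.
# The route name "/selectTime" and the JSON key "time" are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/selectTime", methods=["POST"])
def select_time():
    data = request.get_json()          # JSON body sent by the JavaScript handler
    selected_time = int(data["time"])  # hour value chosen in the drop-down
    # Return the URL of the page that shows the chosen time so the
    # JavaScript can redirect the browser to it.
    return jsonify({"redirect": f"/showTime/{selected_time}"})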
| Jinja2 - getting users selected value from a table and saving it as a variable | So, I am trying to develop a small site where the user selects a time from a drop-down box and that time select gets displayed on another page. I am struggling to capture the user's input from the drop-down box and send it to the function which generates the page that shows the users selected input.
I generate the drop-down list by creating a dropdown list and with a loop that receives an array as an input value I loop through that array and generate the options.
My question is, how do I capture the users selected option and pass it on to the show time function?
Is there a jinja2 native way of solving this?
app.py code
@app.route("/timeSelect")
def timeSelect():
times = [1,2,4,8,12]
return render_template("timeSelect.jinja", times=times)
@app.route("/showTime/<int:time>")
def showTime(time):
return render_template("showtest.jinja",time=time)
timeSelect.jinja code
<select class="form-select form-select-lg mb-3" aria-label=".form-select-lg example">
<option selected>Select Monitoring Time</option>
{%for time in times%}
<option value="{{time}}" >{{time}} hours</option>
{%endfor%}
</select>
<form method="get" action="{{ url_for('showTime', time=time)}}">
<button type="submit" class="btn btn-primary">submit</button>
</form>
showtest.jinja code
{{time}}
| [
"Create a java-script based onClickEvent(). Which should be triggered when clicked on the value in drop-down and post data to back-end application and then capture response and process accordingly.\nHelpful Blog:-\nCrud Using Ajax and JSON\n"
] | [
0
] | [] | [] | [
"flask",
"jinja2",
"python"
] | stackoverflow_0074655674_flask_jinja2_python.txt |
Q:
Problems with gradient in python
I'm trying to estimate the gradient of my graph.
#define the function
def gradient(y1,y2,x1,x2):
gradient = ((y1 - y2)/(x2 - x1))
y1 = 1.07
y2 = 1.39
x1 = 283
x2 = 373
print('The gradient of this graph is', gradient)
All it prints is
The gradient of this graph is <function gradient at 0x7fb95096bd30>
A:
It looks like you're trying to print the value of the gradient function, rather than calling it and printing the result. You can fix this by adding parentheses after gradient to call the function, and by providing the function with the necessary arguments:
print('The gradient of this graph is', gradient(y1, y2, x1, x2))
It's also a good idea to define the values of y1, y2, x1, and x2 before the function is called, so they already exist when you pass them in as arguments. Here's what your code could look like after making these changes:
#define the function
y1 = 1.07
y2 = 1.39
x1 = 283
x2 = 373
def gradient(y1,y2,x1,x2):
gradient = ((y1 - y2)/(x2 - x1))
return gradient
print('The gradient of this graph is', gradient(y1, y2, x1, x2))
A:
def gradient(y1 = 1.07,y2 = 1.39,x1 = 283,x2 = 373):
gradient = ((y1 - y2)/(x2 - x1))
return gradient
print('The gradient of this graph is', gradient())
or
x1 = 28
x2 = 373
y1 = 1.07
y2 = 1.39
def gradient(y1,y2,x1,x2):
gradient = ((y1 - y2)/(x2 - x1))
return gradient
print('The gradient of this graph is', gradient(y1, y2, x1, x2))
| Problems with gradient in python | I'm trying to estimate the gradient of my graph.
#define the function
def gradient(y1,y2,x1,x2):
gradient = ((y1 - y2)/(x2 - x1))
y1 = 1.07
y2 = 1.39
x1 = 283
x2 = 373
print('The gradient of this graph is', gradient)
All it prints is
The gradient of this graph is <function gradient at 0x7fb95096bd30>
| [
"It looks like you're trying to print the value of the gradient function, rather than calling it and printing the result. You can fix this by adding parentheses after gradient to call the function, and by providing the function with the necessary arguments:\nprint('The gradient of this graph is', gradient(y1, y2, x1, x2))\n\nIt's also a good idea to move the lines where you define the values of y1, y2, x1, and x2 to before the gradient function is defined. This way, the function will be able to access these values. Here's what your code could look like after making these changes:\n#define the function\ny1 = 1.07\ny2 = 1.39\nx1 = 283\nx2 = 373\n\ndef gradient(y1,y2,x1,x2):\n gradient = ((y1 - y2)/(x2 - x1))\n return gradient\n\nprint('The gradient of this graph is', gradient(y1, y2, x1, x2))\n\n\n",
"def gradient(y1 = 1.07,y2 = 1.39,x1 = 283,x2 = 373):\n gradient = ((y1 - y2)/(x2 - x1))\n return gradient\nprint('The gradient of this graph is', gradient())\n\nor\nx1 = 28\nx2 = 373\n\ny1 = 1.07\ny2 = 1.39\n\ndef gradient(y1,y2,x1,x2):\n gradient = ((y1 - y2)/(x2 - x1))\n return gradient\nprint('The gradient of this graph is', gradient(y1, y2, x1, x2))\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074656137_python.txt |