| content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name |
|---|---|---|---|---|---|---|---|---|
| stringlengths 85-101k | stringlengths 0-150 | stringlengths 15-48k | sequence | sequence | sequence | sequence | sequence | stringlengths 35-137 |
Q:
Groupby to convert Pandas DataFrame to list of dictionaries
I have the following DataFrame:
print(df)
business_id software_id quantity price inventory_level
1234 abc 10 25.5 5
4820 bce 40 21.9 2
1492 abc 59 25.3 1
1234 abc 55 11.3 0
I would like to create a list of dictionaries, keeping the column names and storing everything that is not a key (the keys here being "business_id" and "software_id") as a list of dictionaries, using Pandas' groupby, thus obtaining:
[
{
business_id: 1234,
software_id: abc,
transactions: [
{quantity: 10, price: 25.5, inventory_level:5},
{quantity: 55, price: 11.3, inventory_level:0},
]}
(...)
]
The inefficient version would be:
keys_l = ["business_id", "software_id"]
keys_df = df.filter(keys_l).drop_duplicates()
chunk_l = []
for _, row in keys_df.iterrows():
    # --- Subset original DataFrame ---
    chunk_df = df[(df[keys_l]==row).all(axis=1)]
    # --- Create baseline dict with the keys ---
    chunk_dict = {key: value for key, value in zip(row.index, row.values)}
    # --- Add bucketed data points ---
    chunk_dict["transactions"] = chunk_df.drop(keys_l, axis=1).to_dict(orient="records")
    # --- Append to list to create a list of dictionaries ---
    chunk_l.append(chunk_dict)
How can I achieve the same result through Pandas' groupby?
A:
Can you try this:
dfx=df.groupby(['business_id','software_id']).agg(list).T
final=[]
for i in dfx.columns:
    final.append({'business_id':i[0], 'software_id':i[1],
                  'transactions':[{'quantity':dfx[i]['quantity'][j],'price':dfx[i]['price'][j],'inventory_level':dfx[i]['inventory_level'][j]} for j in range(len(dfx[i][0]))]})
print(final)
'''
[
{'business_id': 1234, 'software_id': 'abc', 'transactions': [{'quantity': 10, 'price': 25.5, 'inventory_level': 5}, {'quantity': 55, 'price': 11.3, 'inventory_level': 0}]},
{'business_id': 1492, 'software_id': 'abc', 'transactions': [{'quantity': 59, 'price': 25.3, 'inventory_level': 1}]},
{'business_id': 4820, 'software_id': 'bce', 'transactions': [{'quantity': 40, 'price': 21.9, 'inventory_level': 2}]}
]
'''
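Another hedged sketch, iterating the groups directly rather than transposing (this assumes the df and keys_l defined in the question):
keys_l = ["business_id", "software_id"]

# One pass over the grouped frame; each group's non-key columns become the records.
chunk_l = [
    {**dict(zip(keys_l, group_keys)),
     "transactions": group.drop(columns=keys_l).to_dict(orient="records")}
    for group_keys, group in df.groupby(keys_l)
]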
| Groupby to convert Pandas DataFrame to list of dictionaries | answers_scores: [0] | tags: dictionary, group_by, pandas, python | stackoverflow_0074632156_dictionary_group_by_pandas_python.txt |
Q:
FloatImage in folium map is way too big
I want to insert a legend as an image in my map with FloatImage. Last week it worked great, the image was in good quality on the bottom right. I did not change the code and the image now appears huge.
import folium
from folium.plugins import FloatImage
m = folium.Map(location=[52.542100, 13.384019], zoom_start=10)
image = ('https://upload.wikimedia.org/wikipedia/commons/d/d6/Beispiel.png')
FloatImage(image, bottom=10, left=1).add_to(m)
m
Anyone got the same problem?
I have also tried other images and changed the image size, always the same problem.
A:
This is a bug that was introduced in 0.13.0. It’s fixed in the upcoming 0.14.0 release. In the meantime you can work around it by manually setting the ‘width’ argument of FloatImage.
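A minimal sketch of that workaround, assuming the installed FloatImage already accepts a width argument as the fix describes (the value below is only illustrative):
from folium.plugins import FloatImage

# Constrain the legend explicitly instead of relying on the default sizing.
FloatImage(image, bottom=10, left=1, width='20%').add_to(m)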
| FloatImage in folium map is way too big | answers_scores: [0] | tags: folium, python | stackoverflow_0074225272_folium_python.txt |
Q:
Dynamically create for loops to create lists from a dictionary
variations = {
'size':{'small':'Small',
'medium':'Medium',
'large':'Large'},
'quantity':{'20l':'20l',
'10l':'10l',
'5l':'5l'},
'color':{'red':'Red',
'blue':'Blue',
'green':'Green'}
}
var_list = [[i,j,k] for i in variations['color'] for j in variations['size'] for k in variations['quantity']]
You can also write the above code as:
var_list = []
for i in variations['color']:
    for j in variations['size']:
        for k in variations['quantity']:
            comb = []
            comb.append(i)
            comb.append(j)
            comb.append(k)
            var_list.append(comb)
Both the var_list outputs:
[['red', 'small', '20l'], ['red', 'small', '10l'], ['red', 'small', '5l'], ['red', 'medium', '20l'], ['red', 'medium', '10l'], ['red', 'medium', '5l'], ['red', 'large', '20l'], ['red', 'large', '10l'], ['red', 'large', '5l'], ['blue', 'small', '20l'], ['blue', 'small', '10l'], ['blue', 'small', '5l'], ['blue', 'medium', '20l'], ['blue', 'medium', '10l'], ['blue', 'medium', '5l'], ['blue', 'large', '20l'], ['blue', 'large', '10l'], ['blue', 'large', '5l'], ['green', 'small', '20l'], ['green', 'small', '10l'], ['green', 'small', '5l'], ['green', 'medium', '20l'], ['green', 'medium', '10l'], ['green', 'medium', '5l'], ['green', 'large', '20l'], ['green', 'large', '10l'], ['green', 'large', '5l']]
var_list contains 3 for loops based on the 3 dictionaries in variations. How to write the above code so that for loops in var_list can be increased or decreased based on the number of dictionaries present in variations?
e.g if 'brand' is also present in variations, a for loop for this 'brand' should be dynamically created in the var_list, so the var_list becomes
var_list = [[i,j,k,l] for i in variations['color'] for j in variations['size'] for k in variations['quantity'] for l in variations['brands']]
A:
When I hear nested for loops I think the product of..
Get the product of the values of the values of variations.
variations = {
'size':{'small':'Small',
'medium':'Medium',
'large':'Large'},
'quantity':{'20l':'20l',
'10l':'10l',
'5l':'5l'},
'color':{'red':'Red',
'blue':'Blue',
'green':'Green'},
'brand':{'one':'foo','two':'bar'}}
>>> a = [list(v.values()) for v in variations.values()]
>>> a
[['Small', 'Medium', 'Large'], ['20l', '10l', '5l'], ['Red', 'Blue', 'Green'], ['foo', 'bar']]
>>> import itertools
>>> for c in itertools.product(*a):
... print(c)
...
('Small', '20l', 'Red', 'foo')
('Small', '20l', 'Red', 'bar')
('Small', '20l', 'Blue', 'foo')
('Small', '20l', 'Blue', 'bar')
('Small', '20l', 'Green', 'foo')
('Small', '20l', 'Green', 'bar')
('Small', '10l', 'Red', 'foo')
('Small', '10l', 'Red', 'bar')
('Small', '10l', 'Blue', 'foo')
('Small', '10l', 'Blue', 'bar')
('Small', '10l', 'Green', 'foo')
('Small', '10l', 'Green', 'bar')
('Small', '5l', 'Red', 'foo')
('Small', '5l', 'Red', 'bar')
('Small', '5l', 'Blue', 'foo')
('Small', '5l', 'Blue', 'bar')
('Small', '5l', 'Green', 'foo')
('Small', '5l', 'Green', 'bar')
('Medium', '20l', 'Red', 'foo')
('Medium', '20l', 'Red', 'bar')
('Medium', '20l', 'Blue', 'foo')
('Medium', '20l', 'Blue', 'bar')
('Medium', '20l', 'Green', 'foo')
('Medium', '20l', 'Green', 'bar')
('Medium', '10l', 'Red', 'foo')
('Medium', '10l', 'Red', 'bar')
('Medium', '10l', 'Blue', 'foo')
('Medium', '10l', 'Blue', 'bar')
('Medium', '10l', 'Green', 'foo')
('Medium', '10l', 'Green', 'bar')
('Medium', '5l', 'Red', 'foo')
('Medium', '5l', 'Red', 'bar')
('Medium', '5l', 'Blue', 'foo')
('Medium', '5l', 'Blue', 'bar')
('Medium', '5l', 'Green', 'foo')
('Medium', '5l', 'Green', 'bar')
('Large', '20l', 'Red', 'foo')
('Large', '20l', 'Red', 'bar')
('Large', '20l', 'Blue', 'foo')
('Large', '20l', 'Blue', 'bar')
('Large', '20l', 'Green', 'foo')
('Large', '20l', 'Green', 'bar')
('Large', '10l', 'Red', 'foo')
('Large', '10l', 'Red', 'bar')
('Large', '10l', 'Blue', 'foo')
('Large', '10l', 'Blue', 'bar')
('Large', '10l', 'Green', 'foo')
('Large', '10l', 'Green', 'bar')
('Large', '5l', 'Red', 'foo')
('Large', '5l', 'Red', 'bar')
('Large', '5l', 'Blue', 'foo')
('Large', '5l', 'Blue', 'bar')
('Large', '5l', 'Green', 'foo')
('Large', '5l', 'Green', 'bar')
>>>
Or just
>>> a = [v.values() for v in variations.values()]
>>> for c in itertools.product(*a): print(c)
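If you want the same list-of-lists shape as var_list in the question (inner-dict keys rather than display values), a small sketch along the same lines:
import itertools

# Iterating a dict yields its keys, so this reproduces ['red', 'small', '20l']-style rows;
# the field order follows the insertion order of `variations`.
var_list = [list(combo) for combo in itertools.product(*variations.values())]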
| Dynamically create for loops to create lists from a dictionary | answers_scores: [1] | tags: django, python, python_3.x | stackoverflow_0074631554_django_python_python_3.x.txt |
Q:
How to stop pytest_bdd from performing the teardown steps after each iteration of a Gherkin Scenario Outline?
I have the following Gherkin Scenario Outline:
Scenario: Links on main page
When I visit the main page
Then there is a link to "<site>" on the page
Examples:
|site |
|example.com |
|stackoverflow.com|
|nasa.gov |
and the respective test.py:
from pytest_bdd import scenario, given, when, then
@scenario("test.feature", "Links on main page")
def test_links():
    pass
In my conftest.py, I perform a login and logout at startup/teardown respectively:
@pytest.fixture(autouse=True, scope="function")
def login_management(driver, page_url, logindata):
    login()
    yield
    logout()
However, I don't want the browser to log out and log in between checking every link - I would rather all the links were checked on one page visit. I also would prefer to keep this tabular syntax instead of writing a dozen of steps to the tune of
And there is a link to "example.com"
And there is a link to "stackoverflow.com"
And there is a link to "nasa.gov"
Is there any way to signal that for this test only, all of the scenarios in this outline should be performed without the teardown?
A:
Scenario Outlines are just a compact way of writing several individual scenarios. Cucumber and other testing frameworks work on the idea of isolating each individual test/scenario to prevent side effects from one test/scenario breaking other tests/scenarios. If you try to bypass this you can end up with a very flaky test suite that has occasional failures which depend on the order the tests/scenarios are run in, rather than the test/scenario failing for a legitimate reason.
So what you are trying to do breaks a fundamental precept of testing, and you really should avoid doing that.
If you want to be more efficient testing your links, group them together and give them a name. Then test for them in a single step and get rid of your scenario outline e.g.
Scenario: Main page links
  When I visit the main page
  Then I should see the main page links

Then "I should see the main page links" do
  expect(page).to have_link("example.com")
  expect(page).to have_link("nasa.gov")
  ...
end
Now you have one simple scenario that will just login once and run much faster.
NOTE: examples are in ruby (ish), but the principle applies no matter the language.
In general I would suggest avoiding scenario outlines, you really don't need to use them at all.
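For reference, a rough pytest-bdd sketch of that single-scenario approach (the step implementations and the driver/page_url fixtures are assumptions based on the question's conftest.py, not code from the original answer):
from pytest_bdd import scenario, when, then

@scenario("test.feature", "Main page links")
def test_main_page_links():
    pass

@when("I visit the main page")
def visit_main_page(driver, page_url):
    driver.get(page_url)

@then("I should see the main page links")
def should_see_main_page_links(driver):
    # Simplistic check; swap in whatever locator strategy your suite already uses.
    source = driver.page_source
    for site in ("example.com", "stackoverflow.com", "nasa.gov"):
        assert site in source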
| How to stop pytest_bdd from performing the teardown steps after each iteration of a Gherkin Scenario Outline? | answers_scores: [1] | tags: cucumber, gherkin, pytest, pytest_bdd, python | stackoverflow_0074629962_cucumber_gherkin_pytest_pytest_bdd_python.txt |
Q:
Why does my loop counter have an unexpected value after the loop?
I have some code like:
num_grades = 0
for num_grades in range(8):
    grade = int(input("Enter grade " + str(num_grades + 1) + ": "))
    # additional logic to check the grade and categorize it
print("Total number of grades:", num_grades)
# additional code to output more results
When I try this code, I find that the displayed result for num_grades is 7, rather than 8 like I expect. Why is this? What is wrong with the code, and how can I fix it? I tried adding a while loop to the code, but I was unable to fix the problem this way.
A:
The last value num_grades gets assigned is 7 because of the range; the num_grades + 1 inside the loop has no effect on the final value of num_grades.
You need to either change the way num_grades changes throughout the flow of the code, or simply add 1 to the final result.
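For the second option, a one-line sketch:
print("Total number of grades:", num_grades + 1)  # the loop leaves num_grades at 7, so this prints 8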
A:
In Python, the variable(s) that you use in a for loop keep their last state after the loop; for num_grades in range(8): goes from 0 to 7, so after the loop num_grades will be 7.
A:
I would add an if-statement in the loop checking the index.
num_grades = 0
for num_grades in range(9):
    if num_grades == 0:
        continue
    grade = int(input("Enter grade " + str(num_grades) + ": "))
    # additional logic to check the grade and categorize it
print("Total number of grades:", num_grades)
I always use if-statements within loops to skip certain indexes. If you're unaware, the continue keyword skips the rest of the loop body and moves straight on to the next iteration, so nothing past it runs for that index. Since the range starts at 0, the code sees that index and skips it. Then, if you have it print num_grades on each iteration, it'll just output:
1
2
3
4
5
6
7
8
| Why does my loop counter have an unexpected value after the loop? | answers_scores: [1, 1, 0] | tags: for_loop, python | stackoverflow_0074633345_for_loop_python.txt |
Q:
'[Errno 13] Permission denied' from open() in python, when the file SHOULD be accessible
I've seen an error in one of our CI scripts where trying to open a file in a python script fails with the error [Errno 13] Permission denied (this is on a windows machine)
I'm wondering how it's possible, given what's going on:
First, we start a process in the background, which is responsible for generating this file. It does so by first creating a temporary file, writing the needed data to it, and then renaming it to the final name (i.e. the one we get the permissions error while attempting to open). To rename the file, the background process calls _wrename
The python script, after starting this process, waits for the file to be generated via calling os.path.exists on the path, until it returns true.
After it's learned that the file exists, it tries to open the file (simply using open(path)), and we get the permissions error.
I don't see what could possibly be changing the permissions on this file after it's been created.
The only idea I have is that when the python script is trying to open the file, the rename is still in progress somehow, and so the permissions issue is caused by a 'sharing violation', which it seems can present as a permissions issue?
But I was under the impression that renaming a file should be atomic? This is happening on a local drive (file stays in the same folder, just the name changes).
Unfortunately I've only seen this error once, and don't have any way to reproduce it.
A:
Try running the Python file with sudo:
sudo python3 python_script.py
Or on Windows, run the console/IDE as administrator.
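If the question's own hypothesis is right and the rename is still in flight when open() runs, a retry loop may be a more targeted workaround than elevated permissions (this is a sketch built on that assumption, not a confirmed diagnosis):
import time

def open_when_ready(path, attempts=10, delay=0.5):
    """Retry open() while a rename/sharing violation still surfaces as a PermissionError."""
    for attempt in range(attempts):
        try:
            return open(path)
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)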
| '[Errno 13] Permission denied' from open() in python, when the file SHOULD be accessible | answers_scores: [0] | tags: python, windows | stackoverflow_0074633400_python_windows.txt |
Q:
Getting 'KeyError: "There is no item named 'xl/sharedStrings.xml' in the archive"' when trying to open Excel
I am trying to import data into PowerBi using a Python script so that I can schedule it to refresh data at regular basis.
I am facing a challenge getting the data from an Excel file and receiving the error 'KeyError: "There is no item named 'xl/sharedStrings.xml' in the archive"' while importing.
When I look into the archive of the xlsx file, there is no sharedStrings.xml file in the xl folder, as there are no strings in the Excel sheet. The file opens properly in Excel without any issues, but not with Python.
import openpyxl
import pandas
import xlrd
import os
globaltrackerdf = pandas.read_excel (r'C:\Users\Documents\Trackers\Tracker-Global Tracker_V2-2022-06-13.xlsx',sheet_name="Sheet1",engine="openpyxl")
A:
Solution that worked for me: resave your file using Excel. My file also opened fine in Excel, but upon unzipping the file and looking inside there was no sharedStrings.xml. There seems to be a bug where saving an xlsx might not produce the sharedStrings.xml file. I found various ideas about why it might happen, but since I don't have access to the client's Excel I'm not sure what caused it.
For extra context on what an XLSX file is, I found this to be helpful: https://www.adimian.com/blog/fast-xlsx-parsing-with-python/
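If re-saving by hand is not an option (e.g. for the scheduled Power BI refresh), one hedged, untested workaround is to copy the workbook and inject an empty sharedStrings part before reading it; the part name and minimal XML below are assumptions about what openpyxl expects, not a confirmed fix:
import shutil
import zipfile

import pandas as pd

src = r'C:\Users\Documents\Trackers\Tracker-Global Tracker_V2-2022-06-13.xlsx'
patched = src.replace('.xlsx', '_patched.xlsx')
shutil.copyfile(src, patched)

# Minimal, empty shared-strings table so the reader finds the part it is looking for.
empty_sst = ('<sst xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" '
             'count="0" uniqueCount="0"/>')

with zipfile.ZipFile(patched, 'a') as zf:
    if 'xl/sharedStrings.xml' not in zf.namelist():
        zf.writestr('xl/sharedStrings.xml', empty_sst)

globaltrackerdf = pd.read_excel(patched, sheet_name="Sheet1", engine="openpyxl")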
| Getting 'KeyError: "There is no item named 'xl/sharedStrings.xml' in the archive"' when trying to open Excel | answers_scores: [0] | tags: dataframe, excel, pandas, powerbi, python | stackoverflow_0072606497_dataframe_excel_pandas_powerbi_python.txt |
Q:
Python function's return type is a function?
def func(input: str) -> int: _another_func(input)
// ...
// returns some int
def _another_func(input: str) -> None
if (input == "abc"):
raise Exception
What does it mean to have the return type as a function in this case, where that function has no dependency on the actual return value but instead depends on an input of the parent function? When does _another_func() get run?
A:
It just shows which data type should be returned, in this case int.
_another_func(input) should be on a new line.
But if you try to return another data type, it will also work.
It's just there to make maintenance/code review/life easier.
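A small sketch showing that the annotation is only metadata and is not enforced at runtime (names are illustrative, not taken from the original post):
def func(text: str) -> int:
    _another_func(text)  # runs whenever func() is called; func then falls through and returns None

def _another_func(text: str) -> None:
    if text == "abc":
        raise Exception("bad input")

print(func("xyz"))           # None, despite the -> int annotation
print(func.__annotations__)  # {'text': <class 'str'>, 'return': <class 'int'>}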
| Python function's return type is a function? | answers_scores: [0] | tags: python | stackoverflow_0074633588_python.txt |
Q:
Django is giving me an "invalid date format" error when running tests despite specifying date format in settings
My settings file has these settings:
# settings.py
USE_L10N = False
DATE_INPUT_FORMATS = ['%m/%d/%Y']
In my test file, I'm creating an object requires a date and looks like this:
# tests.py
my_model = Thing(a_date='11/22/2019').save()
When I run the test, however, the test gets stuck when it goes to create the object and throws the error:
django.core.exceptions.ValidationError: ["'11/22/2019' value has an invalid date format. It must be in YYYY-MM-DD format."]
Is there something I'm missing? Why would it be throwing this error?
A:
You need to set DATE_INPUT_FORMATS, as DATE_FORMAT sets how Django displays the date.
Change your code with:
DATE_INPUT_FORMATS = ['%m/%d/%Y']
A:
As far as I know the DATE_INPUT_FORMATS is relevant for Forms but not for Models.
An (invalid) ticket regarding the same problem was raised here.
A:
Make sure that USE_L10N is set to FALSE.
(https://docs.djangoproject.com/en/dev/ref/settings/#date-format)
Also see: How to change the default Django date template format?
A:
Maybe you need to add date_input_format; read there and there.
A:
I ran into the same error while testing one of my views.
# settings.py
DATE_FORMAT = '%d.%m.%Y'
DATE_INPUT_FORMATS = ['%d.%m.%Y', '%d-%m-%Y', '%d/%m/%Y', '%d/%m/%y', '%d %b %Y',
'%d %b, %Y', '%d %b %Y', '%d %b, %Y', '%d %B, %Y',
'%d %B %Y']
# forms.py
last_examination = forms.DateField(widget=forms.DateInput(
attrs={
'value': False,
'class': 'datepicker',
'data-date-format': 'dd.mm.yyyy',
'data-date-autoclose': 'True',
}
))
For a test I setup this data:
# test_views.py
cls.data2 = {'1-last_examination': date(day=19, month=2, year=1972),
'1-last_examination_result': 'OK'}
def test_Input2View_POST(self):
    self.c = Client()
    response = self.c.post(url, data=self.data2)
which raises a ValidationError: "Enter a valid date."
Therefore the settings in settings.py don't apply to this test, which was also stated in Chris' comment.
Quick fix for this is hardcoding the correct date format, e.g. in my case
# test_views.py
cls.data2 = {'1-last_examination': '19.02.1972',
'1-last_examination_result': 'OK'}
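Tying the answers together, a short sketch of what does work when assigning directly to a model or in a test (Thing and a_date are from the question; everything else is standard library):
import datetime

# Model fields ignore DATE_INPUT_FORMATS; hand them a date object or an ISO 8601 string.
Thing(a_date=datetime.date(2019, 11, 22)).save()
Thing(a_date="2019-11-22").save()

# '11/22/2019' only parses through a form, because DATE_INPUT_FORMATS applies to forms, not models.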
| Django is giving me an "invalid date format" error when running tests despite specifying date format in settings | answers_scores: [1, 1, 0, 0, 0] | tags: datefield, django, django_models, django_validation, python | stackoverflow_0059792842_datefield_django_django_models_django_validation_python.txt |
Q:
python prime number query
number_to_check=int(input("Enter the number you want to check for prime:"))
a= 2
while number_to_check != a :
if number_to_check % a == 0:
a+=1
print("Number not prime ")
break
if number_to_check % a != 0:
a+=1
print("Number prime")
break
if number_to_check =2:
print("2 not prime")
I can't see a problem or logic error in my code but the code is working incorrectly.
A:
There are better ways to do this, but this follows your philosophy:
number_to_check = int(input("Enter the number you want to check for prime:"))
if number_to_check == 2:
    print("2 is prime")
else:
    # check divisors up to and including number_to_check // 2
    for a in range(2, number_to_check // 2 + 1):
        if number_to_check % a == 0:
            print("Number not prime")
            break
    else:
        print("Number prime")
| python prime number query | answers_scores: [0] | tags: primes, python | stackoverflow_0074633625_primes_python.txt |
Q:
Sort coordinates of pointcloud by distance to previous point
Pointcloud of rope with desired start and end point
I have a pointcloud of a rope-like object with about 300 points. I'd like to sort the 3D coordinates of that pointcloud, so that one end of the rope has index 0 and the other end has index 300 like shown in the image. Other pointclouds of that object might be U-shaped so I can't sort by X,Y or Z coordinate. Because of that I also can't sort by the distance to a single point.
I have looked at KDTree by sklearn or scipy to compute the nearest neighbour of each point but I don't know how to go from there and sort the points in an array without getting double entries.
Is there a way to sort these coordinates in an array, so that from a starting point the array gets appended with the coordinates of the next closest point?
A:
First of all, obviously, there is no strict solution to this problem (and even there is no strict definition of what you want to get). So anything you may write will be a heuristic of some sort, which will be failing in some cases, especially as your point cloud gets some non-trivial form (do you allow loops in your rope, for example?)
This said, a simple approach may be to build a graph with the points being the vertices, and every two points connected by an edge with a weight equal to the straight-line distance between these two points.
And then build a minimal spanning tree of this graph. This will provide a kind of skeleton for your point cloud, and you can devise any simple algorithm atop of this skeleton.
For example, sort all points by their distance to the start of the rope, measured along this tree. There is only one path between any two vertices of the tree, so for each vertex of the tree calculate the length of the single path to the rope start, and sort all the vertices by this distance.
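A rough sketch of that skeleton idea with SciPy. It builds a k-nearest-neighbour graph rather than the full pairwise graph to keep things small, and it assumes index 0 is the rope start and that the k-NN graph is connected:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial import cKDTree

points = np.random.rand(300, 3)  # stand-in for the real point cloud
start = 0                        # assumed index of the rope end to start from

# Sparse k-NN graph so the spanning tree only uses short, local edges.
k = 8
dists, idx = cKDTree(points).query(points, k=k + 1)  # first neighbour is the point itself
rows = np.repeat(np.arange(len(points)), k)
cols = idx[:, 1:].ravel()
vals = dists[:, 1:].ravel()
graph = csr_matrix((vals, (rows, cols)), shape=(len(points), len(points)))

# Tree skeleton, then order points by their distance from the start measured along the tree.
mst = minimum_spanning_tree(graph)
dist_from_start = shortest_path(mst, directed=False, indices=[start])[0]
order = np.argsort(dist_from_start)
sorted_points = points[order]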
A:
As suggested in the other answer, there is no strict solution to this problem, and there can be edge cases such as loops, spirals, or tubes, but you can go with a heuristic approach for your use case. Read about heuristic approaches such as hill climbing, simulated annealing, genetic algorithms, etc.
For any heuristic approach you need a method to measure how good a solution is. Let's say I give you two arrays of 3000 elements each: how will you identify which solution is better than the other? This method depends on your use case.
One approach off the top of my mind is hill climbing.
Method to measure the goodness of a solution: take the Euclidean distance between all adjacent elements of the array and sum those distances.
Steps:
1. Create a randomised array of all the 3000 elements.
2. Select two random indices out of these 3000 and swap the elements at those indexes; check whether it improves your answer (i.e. whether the sum of Euclidean distances between adjacent elements decreases).
3. If it improves your answer, keep those elements swapped.
4. Repeat steps 2-3 for a large number of epochs (10^6).
This solution can lead to stagnation as there is a lack of diversity. For better results, use simulated annealing or genetic algorithms.
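A compact sketch of that hill-climbing loop (the input array and epoch count are placeholders):
import numpy as np

def path_length(points, order):
    """Sum of Euclidean distances between consecutive points in the given order."""
    p = points[order]
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def hill_climb(points, epochs=100_000, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))      # step 1: random ordering
    best = path_length(points, order)
    for _ in range(epochs):                   # step 4: many iterations
        i, j = rng.integers(0, len(points), size=2)
        order[[i, j]] = order[[j, i]]         # step 2: swap two random positions
        candidate = path_length(points, order)
        if candidate < best:                  # step 3: keep the swap only if it helps
            best = candidate
        else:
            order[[i, j]] = order[[j, i]]     # otherwise undo it
    return order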
| Sort coordinates of pointcloud by distance to previous point | answers_scores: [1, 0] | tags: algorithm, kdtree, python, sorting | stackoverflow_0074626866_algorithm_kdtree_python_sorting.txt |
Q:
How to disable SSL for a method
I have a problem. I am using easypost. The problem is that I got the following error
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments.
Is there any option to deactivate SSL verification for this method?
!pip install easypost
import os
import easypost
easypost.api_key = <app_key>
shipment = easypost.Shipment.create(
from_address = {
"name": "EasyPost",
"street1": "118 2nd Street",
"street2": "4th Floor",
"city": "San Francisco",
"state": "CA",
"zip": "94105",
"country": "US",
"phone": "415-456-7890",
},
to_address = {
"name": "Dr. Steve Brule",
"street1": "179 N Harbor Dr",
"city": "Redondo Beach",
"state": "CA",
"zip": "90277",
"country": "US",
"phone": "310-808-5243",
},
parcel = {
"length": 10.2,
"width": 7.8,
"height": 4.3,
"weight": 21.2,
},
)
shipment.buy(rate=shipment.lowest_rate())
print(shipment)
Complete Log
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments
---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:1040, in HTTPSConnectionPool._validate_conn(self, conn)
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1042 if not conn.is_verified:
File ~\Anaconda3\lib\site-packages\urllib3\connection.py:414, in HTTPSConnection.connect(self)
412 context.load_default_certs()
--> 414 self.sock = ssl_wrap_socket(
415 sock=conn,
416 keyfile=self.key_file,
417 certfile=self.cert_file,
418 key_password=self.key_password,
419 ca_certs=self.ca_certs,
420 ca_cert_dir=self.ca_cert_dir,
421 ca_cert_data=self.ca_cert_data,
422 server_hostname=server_hostname,
423 ssl_context=context,
424 tls_in_tls=tls_in_tls,
425 )
427 # If we're using all defaults and the connection
428 # is TLSv1 or TLSv1.1 we throw a DeprecationWarning
429 # for the host.
File ~\Anaconda3\lib\site-packages\urllib3\util\ssl_.py:449, in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password, ca_cert_data, tls_in_tls)
448 if send_sni:
--> 449 ssl_sock = _ssl_wrap_socket_impl(
450 sock, context, tls_in_tls, server_hostname=server_hostname
451 )
452 else:
File ~\Anaconda3\lib\site-packages\urllib3\util\ssl_.py:493, in _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname)
492 if server_hostname:
--> 493 return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
494 else:
File ~\Anaconda3\lib\ssl.py:500, in SSLContext.wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
494 def wrap_socket(self, sock, server_side=False,
495 do_handshake_on_connect=True,
496 suppress_ragged_eofs=True,
497 server_hostname=None, session=None):
498 # SSLSocket class handles server_hostname encoding before it calls
499 # ctx._wrap_socket()
--> 500 return self.sslsocket_class._create(
501 sock=sock,
502 server_side=server_side,
503 do_handshake_on_connect=do_handshake_on_connect,
504 suppress_ragged_eofs=suppress_ragged_eofs,
505 server_hostname=server_hostname,
506 context=self,
507 session=session
508 )
File ~\Anaconda3\lib\ssl.py:1040, in SSLSocket._create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1040 self.do_handshake()
1041 except (OSError, ValueError):
File ~\Anaconda3\lib\ssl.py:1309, in SSLSocket.do_handshake(self, block)
1308 self.settimeout(None)
-> 1309 self._sslobj.do_handshake()
1310 finally:
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\requests\adapters.py:440, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
439 if not chunked:
--> 440 resp = conn.urlopen(
441 method=request.method,
442 url=url,
443 body=request.body,
444 headers=request.headers,
445 redirect=False,
446 assert_same_host=False,
447 preload_content=False,
448 decode_content=False,
449 retries=self.max_retries,
450 timeout=timeout
451 )
453 # Send the request.
454 else:
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:813, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
810 log.warning(
811 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
812 )
--> 813 return self.urlopen(
814 method,
815 url,
816 body,
817 headers,
818 retries,
819 redirect,
820 assert_same_host,
821 timeout=timeout,
822 pool_timeout=pool_timeout,
823 release_conn=release_conn,
824 chunked=chunked,
825 body_pos=body_pos,
826 **response_kw
827 )
829 # Handle redirect?
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:813, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
810 log.warning(
811 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
812 )
--> 813 return self.urlopen(
814 method,
815 url,
816 body,
817 headers,
818 retries,
819 redirect,
820 assert_same_host,
821 timeout=timeout,
822 pool_timeout=pool_timeout,
823 release_conn=release_conn,
824 chunked=chunked,
825 body_pos=body_pos,
826 **response_kw
827 )
829 # Handle redirect?
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:813, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
810 log.warning(
811 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
812 )
--> 813 return self.urlopen(
814 method,
815 url,
816 body,
817 headers,
818 retries,
819 redirect,
820 assert_same_host,
821 timeout=timeout,
822 pool_timeout=pool_timeout,
823 release_conn=release_conn,
824 chunked=chunked,
825 body_pos=body_pos,
826 **response_kw
827 )
829 # Handle redirect?
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:785, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
783 e = ProtocolError("Connection aborted.", e)
--> 785 retries = retries.increment(
786 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
787 )
788 retries.sleep()
File ~\Anaconda3\lib\site-packages\urllib3\util\retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='api.easypost.com', port=443): Max retries exceeded with url: /v2/shipments (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
During handling of the above exception, another exception occurred:
SSLError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:222, in Requestor.requests_request(self, method, abs_url, headers, params)
221 try:
--> 222 result = requests_session.request(
223 method=method.value,
224 url=abs_url,
225 params=url_params,
226 headers=headers,
227 json=body,
228 timeout=timeout,
229 verify=True,
230 )
231 http_body = result.text
File ~\Anaconda3\lib\site-packages\requests\sessions.py:529, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
531 return resp
File ~\Anaconda3\lib\site-packages\requests\sessions.py:645, in Session.send(self, request, **kwargs)
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
647 # Total elapsed time of the request (approximately)
File ~\Anaconda3\lib\site-packages\requests\adapters.py:517, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
515 if isinstance(e.reason, _SSLError):
516 # This branch is for urllib3 v1.22 and later.
--> 517 raise SSLError(e, request=request)
519 raise ConnectionError(e, request=request)
SSLError: HTTPSConnectionPool(host='api.easypost.com', port=443): Max retries exceeded with url: /v2/shipments (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
During handling of the above exception, another exception occurred:
Error Traceback (most recent call last)
Input In [52], in <cell line: 6>()
2 import easypost
4 easypost.api_key = <apikey>
----> 6 shipment = easypost.Shipment.create(
7 from_address = {
8 "name": "EasyPost",
9 "street1": "118 2nd Street",
10 "street2": "4th Floor",
11 "city": "San Francisco",
12 "state": "CA",
13 "zip": "94105",
14 "country": "US",
15 "phone": "415-456-7890",
16 },
17 to_address = {
18 "name": "Dr. Steve Brule",
19 "street1": "179 N Harbor Dr",
20 "city": "Redondo Beach",
21 "state": "CA",
22 "zip": "90277",
23 "country": "US",
24 "phone": "310-808-5243",
25 },
26 parcel = {
27 "length": 10.2,
28 "width": 7.8,
29 "height": 4.3,
30 "weight": 21.2,
31 },
32 )
34 shipment.buy(rate=shipment.lowest_rate())
36 print(shipment)
File ~\Anaconda3\lib\site-packages\easypost\shipment.py:32, in Shipment.create(cls, api_key, with_carbon_offset, **params)
27 url = cls.class_url()
28 wrapped_params = {
29 cls.snakecase_name(): params,
30 "carbon_offset": with_carbon_offset,
31 }
---> 32 response, api_key = requestor.request(method=RequestMethod.POST, url=url, params=wrapped_params)
33 return convert_to_easypost_object(response=response, api_key=api_key)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:91, in Requestor.request(self, method, url, params, api_key_required, beta)
89 if params is None:
90 params = {}
---> 91 http_body, http_status, my_api_key = self.request_raw(
92 method=method,
93 url=url,
94 params=params,
95 api_key_required=api_key_required,
96 beta=beta,
97 )
98 response = self.interpret_response(http_body=http_body, http_status=http_status)
99 return response, my_api_key
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:177, in Requestor.request_raw(self, method, url, params, api_key_required, beta)
173 http_body, http_status = self.urlfetch_request(
174 method=method, abs_url=abs_url, headers=headers, params=params
175 )
176 elif request_lib == "requests":
--> 177 http_body, http_status = self.requests_request(
178 method=method, abs_url=abs_url, headers=headers, params=params
179 )
180 else:
181 raise Error(f"Bug discovered: invalid request_lib: {request_lib}. Please report to {SUPPORT_EMAIL}.")
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:234, in Requestor.requests_request(self, method, abs_url, headers, params)
232 http_status = result.status_code
233 except Exception as e:
--> 234 raise Error(
235 "Unexpected error communicating with EasyPost. If this "
236 f"problem persists please let us know at {SUPPORT_EMAIL}.",
237 original_exception=e,
238 )
239 return http_body, http_status
Error: Unexpected error communicating with EasyPost. If this problem persists please let us know at [email protected].
For Postman, if I deactivate SSL certificate verification, it works.
What I also got working:
If I call the API directly and use verify=False, that works. See Python Requests throwing SSLError.
import requests
url = "https://api.easypost.com/something"
returnResponse = requests.get(url, verify=False)
Edit
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:222, in Requestor.requests_request(self, method, abs_url, headers, params)
221 try:
--> 222 result = requests_session.request(
223 method=method.value,
224 url=abs_url,
225 params=url_params,
226 headers=headers,
227 json=body,
228 timeout=timeout,
229 verify=False,
230 )
231 http_body = result.text
File ~\Anaconda3\lib\site-packages\requests\sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
589 return resp
File ~\Anaconda3\lib\site-packages\requests\sessions.py:701, in Session.send(self, request, **kwargs)
700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
703 # Total elapsed time of the request (approximately)
File ~\Anaconda3\lib\site-packages\requests\adapters.py:489, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
488 if not chunked:
--> 489 resp = conn.urlopen(
490 method=request.method,
491 url=url,
492 body=request.body,
493 headers=request.headers,
494 redirect=False,
495 assert_same_host=False,
496 preload_content=False,
497 decode_content=False,
498 retries=self.max_retries,
499 timeout=timeout,
500 )
502 # Send the request.
503 else:
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:1040, in HTTPSConnectionPool._validate_conn(self, conn)
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1042 if not conn.is_verified:
File ~\Anaconda3\lib\site-packages\urllib3\connection.py:401, in HTTPSConnection.connect(self)
400 context = self.ssl_context
--> 401 context.verify_mode = resolve_cert_reqs(self.cert_reqs)
403 # Try to load OS default certs if none are given.
404 # Works well on Windows (requires Python3.4+)
File ~\Anaconda3\lib\ssl.py:720, in SSLContext.verify_mode(self, value)
718 @verify_mode.setter
719 def verify_mode(self, value):
--> 720 super(SSLContext, SSLContext).verify_mode.__set__(self, value)
ValueError: Cannot set verify_mode to CERT_NONE when check_hostname is enabled.
During handling of the above exception, another exception occurred:
Error Traceback (most recent call last)
Input In [5], in <cell line: 6>()
2 import easypost
4 easypost.api_key = <apikey>
----> 6 shipment = easypost.Shipment.create(
7 from_address = {
8 "name": "EasyPost",
9 "street1": "118 2nd Street",
10 "street2": "4th Floor",
11 "city": "San Francisco",
12 "state": "CA",
13 "zip": "94105",
14 "country": "US",
15 "phone": "415-456-7890",
16 },
17 to_address = {
18 "name": "Dr. Steve Brule",
19 "street1": "179 N Harbor Dr",
20 "city": "Redondo Beach",
21 "state": "CA",
22 "zip": "90277",
23 "country": "US",
24 "phone": "310-808-5243",
25 },
26 parcel = {
27 "length": 10.2,
28 "width": 7.8,
29 "height": 4.3,
30 "weight": 21.2,
31 },
32 )
34 shipment.buy(rate=shipment.lowest_rate())
36 print(shipment)
File ~\Anaconda3\lib\site-packages\easypost\shipment.py:32, in Shipment.create(cls, api_key, with_carbon_offset, **params)
27 url = cls.class_url()
28 wrapped_params = {
29 cls.snakecase_name(): params,
30 "carbon_offset": with_carbon_offset,
31 }
---> 32 response, api_key = requestor.request(method=RequestMethod.POST, url=url, params=wrapped_params)
33 return convert_to_easypost_object(response=response, api_key=api_key)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:91, in Requestor.request(self, method, url, params, api_key_required, beta)
89 if params is None:
90 params = {}
---> 91 http_body, http_status, my_api_key = self.request_raw(
92 method=method,
93 url=url,
94 params=params,
95 api_key_required=api_key_required,
96 beta=beta,
97 )
98 response = self.interpret_response(http_body=http_body, http_status=http_status)
99 return response, my_api_key
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:177, in Requestor.request_raw(self, method, url, params, api_key_required, beta)
173 http_body, http_status = self.urlfetch_request(
174 method=method, abs_url=abs_url, headers=headers, params=params
175 )
176 elif request_lib == "requests":
--> 177 http_body, http_status = self.requests_request(
178 method=method, abs_url=abs_url, headers=headers, params=params
179 )
180 else:
181 raise Error(f"Bug discovered: invalid request_lib: {request_lib}. Please report to {SUPPORT_EMAIL}.")
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:234, in Requestor.requests_request(self, method, abs_url, headers, params)
232 http_status = result.status_code
233 except Exception as e:
--> 234 raise Error(
235 "Unexpected error communicating with EasyPost. If this "
236 f"problem persists please let us know at {SUPPORT_EMAIL}.",
237 original_exception=e,
238 )
239 return http_body, http_status
Error: Unexpected error communicating with EasyPost. If this problem persists please let us know at [email protected].
A:
There is additional context for this issue that can be found here: https://github.com/EasyPost/easypost-python/issues/222.
EasyPost does not provide a way to disable SSL checking and has no plans to add this functionality. Some additional resources that may be helpful could be found here:
https://stackoverflow.com/a/22794281/6064135
https://stackoverflow.com/a/33770290/6064135
It sounds like there is something local to your environment that needs adjusting. I would strongly recommend fixing this certificate/SSL error at the source rather than turning it off as this is a security risk. Best of luck.
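As an illustration of fixing it at the source (this sketch is not part of the original answer): when the "self signed certificate in certificate chain" error comes from a corporate proxy or similar middlebox, you can usually export that proxy's root certificate and tell requests (and therefore the easypost client) to trust it instead of disabling verification. The certificate path below is hypothetical.
import os
import certifi
# Option 1: point requests at a CA bundle that contains the extra root certificate.
# requests honors this environment variable for the HTTPS calls it makes.
os.environ["REQUESTS_CA_BUNDLE"] = r"C:\certs\my_root_ca.pem"
# Option 2: append the root certificate to certifi's default bundle,
# which is what requests falls back to when no bundle is specified.
with open(r"C:\certs\my_root_ca.pem", "rb") as extra_ca:
    pem = extra_ca.read()
with open(certifi.where(), "ab") as bundle:
    bundle.write(b"\n" + pem)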
| How to disable SSL for a method | I have a problem. I am using easypost. The problem is that I got the following error
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments.
Is there any option to deactivate SSL for the method?
!pip install easypost
import os
import easypost
easypost.api_key = <app_key>
shipment = easypost.Shipment.create(
from_address = {
"name": "EasyPost",
"street1": "118 2nd Street",
"street2": "4th Floor",
"city": "San Francisco",
"state": "CA",
"zip": "94105",
"country": "US",
"phone": "415-456-7890",
},
to_address = {
"name": "Dr. Steve Brule",
"street1": "179 N Harbor Dr",
"city": "Redondo Beach",
"state": "CA",
"zip": "90277",
"country": "US",
"phone": "310-808-5243",
},
parcel = {
"length": 10.2,
"width": 7.8,
"height": 4.3,
"weight": 21.2,
},
)
shipment.buy(rate=shipment.lowest_rate())
print(shipment)
Complete Log
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))': /v2/shipments
---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:1040, in HTTPSConnectionPool._validate_conn(self, conn)
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1042 if not conn.is_verified:
File ~\Anaconda3\lib\site-packages\urllib3\connection.py:414, in HTTPSConnection.connect(self)
412 context.load_default_certs()
--> 414 self.sock = ssl_wrap_socket(
415 sock=conn,
416 keyfile=self.key_file,
417 certfile=self.cert_file,
418 key_password=self.key_password,
419 ca_certs=self.ca_certs,
420 ca_cert_dir=self.ca_cert_dir,
421 ca_cert_data=self.ca_cert_data,
422 server_hostname=server_hostname,
423 ssl_context=context,
424 tls_in_tls=tls_in_tls,
425 )
427 # If we're using all defaults and the connection
428 # is TLSv1 or TLSv1.1 we throw a DeprecationWarning
429 # for the host.
File ~\Anaconda3\lib\site-packages\urllib3\util\ssl_.py:449, in ssl_wrap_socket(sock, keyfile, certfile, cert_reqs, ca_certs, server_hostname, ssl_version, ciphers, ssl_context, ca_cert_dir, key_password, ca_cert_data, tls_in_tls)
448 if send_sni:
--> 449 ssl_sock = _ssl_wrap_socket_impl(
450 sock, context, tls_in_tls, server_hostname=server_hostname
451 )
452 else:
File ~\Anaconda3\lib\site-packages\urllib3\util\ssl_.py:493, in _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname)
492 if server_hostname:
--> 493 return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
494 else:
File ~\Anaconda3\lib\ssl.py:500, in SSLContext.wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
494 def wrap_socket(self, sock, server_side=False,
495 do_handshake_on_connect=True,
496 suppress_ragged_eofs=True,
497 server_hostname=None, session=None):
498 # SSLSocket class handles server_hostname encoding before it calls
499 # ctx._wrap_socket()
--> 500 return self.sslsocket_class._create(
501 sock=sock,
502 server_side=server_side,
503 do_handshake_on_connect=do_handshake_on_connect,
504 suppress_ragged_eofs=suppress_ragged_eofs,
505 server_hostname=server_hostname,
506 context=self,
507 session=session
508 )
File ~\Anaconda3\lib\ssl.py:1040, in SSLSocket._create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1040 self.do_handshake()
1041 except (OSError, ValueError):
File ~\Anaconda3\lib\ssl.py:1309, in SSLSocket.do_handshake(self, block)
1308 self.settimeout(None)
-> 1309 self._sslobj.do_handshake()
1310 finally:
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\requests\adapters.py:440, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
439 if not chunked:
--> 440 resp = conn.urlopen(
441 method=request.method,
442 url=url,
443 body=request.body,
444 headers=request.headers,
445 redirect=False,
446 assert_same_host=False,
447 preload_content=False,
448 decode_content=False,
449 retries=self.max_retries,
450 timeout=timeout
451 )
453 # Send the request.
454 else:
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:813, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
810 log.warning(
811 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
812 )
--> 813 return self.urlopen(
814 method,
815 url,
816 body,
817 headers,
818 retries,
819 redirect,
820 assert_same_host,
821 timeout=timeout,
822 pool_timeout=pool_timeout,
823 release_conn=release_conn,
824 chunked=chunked,
825 body_pos=body_pos,
826 **response_kw
827 )
829 # Handle redirect?
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:813, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
810 log.warning(
811 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
812 )
--> 813 return self.urlopen(
814 method,
815 url,
816 body,
817 headers,
818 retries,
819 redirect,
820 assert_same_host,
821 timeout=timeout,
822 pool_timeout=pool_timeout,
823 release_conn=release_conn,
824 chunked=chunked,
825 body_pos=body_pos,
826 **response_kw
827 )
829 # Handle redirect?
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:813, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
810 log.warning(
811 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
812 )
--> 813 return self.urlopen(
814 method,
815 url,
816 body,
817 headers,
818 retries,
819 redirect,
820 assert_same_host,
821 timeout=timeout,
822 pool_timeout=pool_timeout,
823 release_conn=release_conn,
824 chunked=chunked,
825 body_pos=body_pos,
826 **response_kw
827 )
829 # Handle redirect?
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:785, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
783 e = ProtocolError("Connection aborted.", e)
--> 785 retries = retries.increment(
786 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
787 )
788 retries.sleep()
File ~\Anaconda3\lib\site-packages\urllib3\util\retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='api.easypost.com', port=443): Max retries exceeded with url: /v2/shipments (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
During handling of the above exception, another exception occurred:
SSLError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:222, in Requestor.requests_request(self, method, abs_url, headers, params)
221 try:
--> 222 result = requests_session.request(
223 method=method.value,
224 url=abs_url,
225 params=url_params,
226 headers=headers,
227 json=body,
228 timeout=timeout,
229 verify=True,
230 )
231 http_body = result.text
File ~\Anaconda3\lib\site-packages\requests\sessions.py:529, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
531 return resp
File ~\Anaconda3\lib\site-packages\requests\sessions.py:645, in Session.send(self, request, **kwargs)
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
647 # Total elapsed time of the request (approximately)
File ~\Anaconda3\lib\site-packages\requests\adapters.py:517, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
515 if isinstance(e.reason, _SSLError):
516 # This branch is for urllib3 v1.22 and later.
--> 517 raise SSLError(e, request=request)
519 raise ConnectionError(e, request=request)
SSLError: HTTPSConnectionPool(host='api.easypost.com', port=443): Max retries exceeded with url: /v2/shipments (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
During handling of the above exception, another exception occurred:
Error Traceback (most recent call last)
Input In [52], in <cell line: 6>()
2 import easypost
4 easypost.api_key = <apikey>
----> 6 shipment = easypost.Shipment.create(
7 from_address = {
8 "name": "EasyPost",
9 "street1": "118 2nd Street",
10 "street2": "4th Floor",
11 "city": "San Francisco",
12 "state": "CA",
13 "zip": "94105",
14 "country": "US",
15 "phone": "415-456-7890",
16 },
17 to_address = {
18 "name": "Dr. Steve Brule",
19 "street1": "179 N Harbor Dr",
20 "city": "Redondo Beach",
21 "state": "CA",
22 "zip": "90277",
23 "country": "US",
24 "phone": "310-808-5243",
25 },
26 parcel = {
27 "length": 10.2,
28 "width": 7.8,
29 "height": 4.3,
30 "weight": 21.2,
31 },
32 )
34 shipment.buy(rate=shipment.lowest_rate())
36 print(shipment)
File ~\Anaconda3\lib\site-packages\easypost\shipment.py:32, in Shipment.create(cls, api_key, with_carbon_offset, **params)
27 url = cls.class_url()
28 wrapped_params = {
29 cls.snakecase_name(): params,
30 "carbon_offset": with_carbon_offset,
31 }
---> 32 response, api_key = requestor.request(method=RequestMethod.POST, url=url, params=wrapped_params)
33 return convert_to_easypost_object(response=response, api_key=api_key)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:91, in Requestor.request(self, method, url, params, api_key_required, beta)
89 if params is None:
90 params = {}
---> 91 http_body, http_status, my_api_key = self.request_raw(
92 method=method,
93 url=url,
94 params=params,
95 api_key_required=api_key_required,
96 beta=beta,
97 )
98 response = self.interpret_response(http_body=http_body, http_status=http_status)
99 return response, my_api_key
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:177, in Requestor.request_raw(self, method, url, params, api_key_required, beta)
173 http_body, http_status = self.urlfetch_request(
174 method=method, abs_url=abs_url, headers=headers, params=params
175 )
176 elif request_lib == "requests":
--> 177 http_body, http_status = self.requests_request(
178 method=method, abs_url=abs_url, headers=headers, params=params
179 )
180 else:
181 raise Error(f"Bug discovered: invalid request_lib: {request_lib}. Please report to {SUPPORT_EMAIL}.")
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:234, in Requestor.requests_request(self, method, abs_url, headers, params)
232 http_status = result.status_code
233 except Exception as e:
--> 234 raise Error(
235 "Unexpected error communicating with EasyPost. If this "
236 f"problem persists please let us know at {SUPPORT_EMAIL}.",
237 original_exception=e,
238 )
239 return http_body, http_status
Error: Unexpected error communicating with EasyPost. If this problem persists please let us know at [email protected].
For Postman, if I deactivate SSL certificate verification, it works.
What I also got working:
If I call the API directly and use verify=False, that works. See Python Requests throwing SSLError.
import requests
url = "https://api.easypost.com/something"
returnResponse = requests.get(url, verify=False)
Edit
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:222, in Requestor.requests_request(self, method, abs_url, headers, params)
221 try:
--> 222 result = requests_session.request(
223 method=method.value,
224 url=abs_url,
225 params=url_params,
226 headers=headers,
227 json=body,
228 timeout=timeout,
229 verify=False,
230 )
231 http_body = result.text
File ~\Anaconda3\lib\site-packages\requests\sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
589 return resp
File ~\Anaconda3\lib\site-packages\requests\sessions.py:701, in Session.send(self, request, **kwargs)
700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
703 # Total elapsed time of the request (approximately)
File ~\Anaconda3\lib\site-packages\requests\adapters.py:489, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
488 if not chunked:
--> 489 resp = conn.urlopen(
490 method=request.method,
491 url=url,
492 body=request.body,
493 headers=request.headers,
494 redirect=False,
495 assert_same_host=False,
496 preload_content=False,
497 decode_content=False,
498 retries=self.max_retries,
499 timeout=timeout,
500 )
502 # Send the request.
503 else:
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File ~\Anaconda3\lib\site-packages\urllib3\connectionpool.py:1040, in HTTPSConnectionPool._validate_conn(self, conn)
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1042 if not conn.is_verified:
File ~\Anaconda3\lib\site-packages\urllib3\connection.py:401, in HTTPSConnection.connect(self)
400 context = self.ssl_context
--> 401 context.verify_mode = resolve_cert_reqs(self.cert_reqs)
403 # Try to load OS default certs if none are given.
404 # Works well on Windows (requires Python3.4+)
File ~\Anaconda3\lib\ssl.py:720, in SSLContext.verify_mode(self, value)
718 @verify_mode.setter
719 def verify_mode(self, value):
--> 720 super(SSLContext, SSLContext).verify_mode.__set__(self, value)
ValueError: Cannot set verify_mode to CERT_NONE when check_hostname is enabled.
During handling of the above exception, another exception occurred:
Error Traceback (most recent call last)
Input In [5], in <cell line: 6>()
2 import easypost
4 easypost.api_key = <apikey>
----> 6 shipment = easypost.Shipment.create(
7 from_address = {
8 "name": "EasyPost",
9 "street1": "118 2nd Street",
10 "street2": "4th Floor",
11 "city": "San Francisco",
12 "state": "CA",
13 "zip": "94105",
14 "country": "US",
15 "phone": "415-456-7890",
16 },
17 to_address = {
18 "name": "Dr. Steve Brule",
19 "street1": "179 N Harbor Dr",
20 "city": "Redondo Beach",
21 "state": "CA",
22 "zip": "90277",
23 "country": "US",
24 "phone": "310-808-5243",
25 },
26 parcel = {
27 "length": 10.2,
28 "width": 7.8,
29 "height": 4.3,
30 "weight": 21.2,
31 },
32 )
34 shipment.buy(rate=shipment.lowest_rate())
36 print(shipment)
File ~\Anaconda3\lib\site-packages\easypost\shipment.py:32, in Shipment.create(cls, api_key, with_carbon_offset, **params)
27 url = cls.class_url()
28 wrapped_params = {
29 cls.snakecase_name(): params,
30 "carbon_offset": with_carbon_offset,
31 }
---> 32 response, api_key = requestor.request(method=RequestMethod.POST, url=url, params=wrapped_params)
33 return convert_to_easypost_object(response=response, api_key=api_key)
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:91, in Requestor.request(self, method, url, params, api_key_required, beta)
89 if params is None:
90 params = {}
---> 91 http_body, http_status, my_api_key = self.request_raw(
92 method=method,
93 url=url,
94 params=params,
95 api_key_required=api_key_required,
96 beta=beta,
97 )
98 response = self.interpret_response(http_body=http_body, http_status=http_status)
99 return response, my_api_key
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:177, in Requestor.request_raw(self, method, url, params, api_key_required, beta)
173 http_body, http_status = self.urlfetch_request(
174 method=method, abs_url=abs_url, headers=headers, params=params
175 )
176 elif request_lib == "requests":
--> 177 http_body, http_status = self.requests_request(
178 method=method, abs_url=abs_url, headers=headers, params=params
179 )
180 else:
181 raise Error(f"Bug discovered: invalid request_lib: {request_lib}. Please report to {SUPPORT_EMAIL}.")
File ~\Anaconda3\lib\site-packages\easypost\requestor.py:234, in Requestor.requests_request(self, method, abs_url, headers, params)
232 http_status = result.status_code
233 except Exception as e:
--> 234 raise Error(
235 "Unexpected error communicating with EasyPost. If this "
236 f"problem persists please let us know at {SUPPORT_EMAIL}.",
237 original_exception=e,
238 )
239 return http_body, http_status
Error: Unexpected error communicating with EasyPost. If this problem persists please let us know at [email protected].
| [
"There is additional context for this issue that can be found here: https://github.com/EasyPost/easypost-python/issues/222.\nEasyPost does not provide a way to disable SSL checking and has no plans to add this functionality. Some additional resources that may be helpful could be found here:\n\nhttps://stackoverflow.com/a/22794281/6064135\nhttps://stackoverflow.com/a/33770290/6064135\n\nIt sounds like there is something local to your environment that needs adjusting. I would strongly recommend fixing this certificate/SSL error at the source rather than turning it off as this is a security risk. Best of luck.\n"
] | [
0
] | [] | [] | [
"easypost",
"python",
"ssl",
"urllib3"
] | stackoverflow_0073358402_easypost_python_ssl_urllib3.txt |
Q:
Shift Values in column up or down for each group
I have two data frames: one with the heating starting point and one with the heating ending point, and that for each room.
Heating starting point
room_id
40 2021-11-23 04:12:00
40 2021-11-23 07:16:00
40 2021-11-23 21:47:00
40 2021-11-24 05:10:00
40 2021-11-24 08:08:00
...
78 2022-02-11 04:17:00
78 2022-02-11 06:09:00
78 2022-02-11 18:59:00
78 2022-02-11 22:32:00
78 2022-02-12 00:20:00
Heating ending point
room_id
40 2021-11-23 03:21:00
40 2021-11-23 06:20:00
40 2021-11-23 07:32:00
40 2021-11-24 04:19:00
40 2021-11-24 07:08:00
...
78 2022-02-11 02:51:00
78 2022-02-11 04:48:00
78 2022-02-11 06:41:00
78 2022-02-11 21:53:00
78 2022-02-11 23:35:00
I would like to shift the dataframe when the heating starting point is higher than the heating ending point, and this for each room.
What I am expecting :
room_id
40 NaN
40 2021-11-23 04:12:00
40 2021-11-23 07:16:00
40 2021-11-23 21:47:00
40 2021-11-24 05:10:00
40 2021-11-24 08:08:00
...
78 NaN
78 2022-02-11 04:17:00
78 2022-02-11 06:09:00
78 2022-02-11 18:59:00
78 2022-02-11 22:32:00
78 2022-02-12 00:20:00
I tried :
if [p_heat_start.groupby('room_id')['hour'].first()>p_heat_stop.groupby('room_id')['hour'].first()]==True:
p_heat_start.groupby('room_id').shift(1)
but it is not working
A:
If df1 is your starting point dataframe and df2 ending point you can try:
# convert and sort dataframes (if necessary)
df1["hour"] = pd.to_datetime(df1["hour"])
df2["hour"] = pd.to_datetime(df2["hour"])
df1 = df1.sort_values(by=["room_id", "hour"])
df2 = df2.sort_values(by=["room_id", "hour"])
ending_starting_points = df2.groupby("room_id")["hour"].first().to_dict()
df1 = df1.groupby("room_id", group_keys=False).apply(
lambda x: x.assign(hour=x.hour.shift())
if x["hour"].iat[0] > ending_starting_points[x["room_id"].iat[0]]
else x
)
print(df1)
Prints:
room_id hour
0 40 NaT
1 40 2021-11-23 04:12:00
2 40 2021-11-23 07:16:00
3 40 2021-11-23 21:47:00
4 40 2021-11-24 05:10:00
5 78 NaT
6 78 2022-02-11 04:17:00
7 78 2022-02-11 06:09:00
8 78 2022-02-11 18:59:00
9 78 2022-02-11 22:32:00
Dataframes used:
df1:
room_id hour
0 40 2021-11-23 04:12:00
1 40 2021-11-23 07:16:00
2 40 2021-11-23 21:47:00
3 40 2021-11-24 05:10:00
4 40 2021-11-24 08:08:00
5 78 2022-02-11 04:17:00
6 78 2022-02-11 06:09:00
7 78 2022-02-11 18:59:00
8 78 2022-02-11 22:32:00
9 78 2022-02-12 00:20:00
df2:
room_id hour
0 40 2021-11-23 03:21:00
1 40 2021-11-23 06:20:00
2 40 2021-11-23 07:32:00
3 40 2021-11-24 04:19:00
4 40 2021-11-24 07:08:00
5 78 2022-02-11 02:51:00
6 78 2022-02-11 04:48:00
7 78 2022-02-11 06:41:00
8 78 2022-02-11 21:53:00
9 78 2022-02-11 23:35:00
| Shift Values in column up or down for each group | I have two data frames: one with the heating starting point and one with the heating ending point, and that for each room.
Heating starting point
room_id
40 2021-11-23 04:12:00
40 2021-11-23 07:16:00
40 2021-11-23 21:47:00
40 2021-11-24 05:10:00
40 2021-11-24 08:08:00
...
78 2022-02-11 04:17:00
78 2022-02-11 06:09:00
78 2022-02-11 18:59:00
78 2022-02-11 22:32:00
78 2022-02-12 00:20:00
Heating ending point
room_id
40 2021-11-23 03:21:00
40 2021-11-23 06:20:00
40 2021-11-23 07:32:00
40 2021-11-24 04:19:00
40 2021-11-24 07:08:00
...
78 2022-02-11 02:51:00
78 2022-02-11 04:48:00
78 2022-02-11 06:41:00
78 2022-02-11 21:53:00
78 2022-02-11 23:35:00
I would like to shift the dataframe when the heating starting point is higher than the heating ending point, and this for each room.
What I am expecting :
room_id
40 NaN
40 2021-11-23 04:12:00
40 2021-11-23 07:16:00
40 2021-11-23 21:47:00
40 2021-11-24 05:10:00
40 2021-11-24 08:08:00
...
78 NaN
78 2022-02-11 04:17:00
78 2022-02-11 06:09:00
78 2022-02-11 18:59:00
78 2022-02-11 22:32:00
78 2022-02-12 00:20:00
I tried :
if [p_heat_start.groupby('room_id')['hour'].first()>p_heat_stop.groupby('room_id')['hour'].first()]==True:
p_heat_start.groupby('room_id').shift(1)
but it is not working
| [
"If df1 is your starting point dataframe and df2 ending point you can try:\n# convert and sort dataframes (if necessary) \ndf1[\"hour\"] = pd.to_datetime(df1[\"hour\"])\ndf2[\"hour\"] = pd.to_datetime(df2[\"hour\"])\n\ndf1 = df1.sort_values(by=[\"room_id\", \"hour\"])\ndf2 = df2.sort_values(by=[\"room_id\", \"hour\"])\n\nending_starting_points = df2.groupby(\"room_id\")[\"hour\"].first().to_dict()\n\ndf1 = df1.groupby(\"room_id\", group_keys=False).apply(\n lambda x: x.assign(hour=x.hour.shift())\n if x[\"hour\"].iat[0] > ending_starting_points[x[\"room_id\"].iat[0]]\n else x\n)\n\nprint(df1)\n\nPrints:\n room_id hour\n0 40 NaT\n1 40 2021-11-23 04:12:00\n2 40 2021-11-23 07:16:00\n3 40 2021-11-23 21:47:00\n4 40 2021-11-24 05:10:00\n5 78 NaT\n6 78 2022-02-11 04:17:00\n7 78 2022-02-11 06:09:00\n8 78 2022-02-11 18:59:00\n9 78 2022-02-11 22:32:00\n\n\nDataframes used:\ndf1:\n room_id hour\n0 40 2021-11-23 04:12:00\n1 40 2021-11-23 07:16:00\n2 40 2021-11-23 21:47:00\n3 40 2021-11-24 05:10:00\n4 40 2021-11-24 08:08:00\n5 78 2022-02-11 04:17:00\n6 78 2022-02-11 06:09:00\n7 78 2022-02-11 18:59:00\n8 78 2022-02-11 22:32:00\n9 78 2022-02-12 00:20:00\n\ndf2:\n room_id hour\n0 40 2021-11-23 03:21:00\n1 40 2021-11-23 06:20:00\n2 40 2021-11-23 07:32:00\n3 40 2021-11-24 04:19:00\n4 40 2021-11-24 07:08:00\n5 78 2022-02-11 02:51:00\n6 78 2022-02-11 04:48:00\n7 78 2022-02-11 06:41:00\n8 78 2022-02-11 21:53:00\n9 78 2022-02-11 23:35:00\n\n"
] | [
0
] | [] | [] | [
"group_by",
"pandas",
"python",
"shift"
] | stackoverflow_0074630702_group_by_pandas_python_shift.txt |
Q:
How to store Global variable in Django view
In Django views, is it possible to create a global / session variable and assign some value (say, sent through an Ajax call) in a view and make use of it in another view?
The scenario I'm trying to implement will be something like:
View view1 gets some variable data sent thru' Ajax call:
def view1(request):
my_global_value = request.GET.get("data1", '') # Storing data globally
Then, the stored variable is used in another view, view2:
def view2(request):
my_int_value = my_global_value # Using the global variable
A:
You can use Session Django Docs
Some example from Django:
def post_comment(request, new_comment):
if request.session.get('has_commented', False):
return HttpResponse("You've already commented.")
c = comments.Comment(comment=new_comment)
c.save()
request.session['has_commented'] = True
return HttpResponse('Thanks for your comment!')
Edit the MIDDLEWARE setting and make sure it contains
'django.contrib.sessions.middleware.SessionMiddleware'
Your request.session.get("key") is also accessible in any other view.
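Applied to the two views from the question, a minimal sketch could look like this (the session key name 'data1' is only an illustration):
from django.http import HttpResponse

def view1(request):
    # Store the Ajax-supplied value in the session
    request.session['data1'] = request.GET.get("data1", '')
    return HttpResponse("stored")

def view2(request):
    # Read it back in another view within the same session
    my_int_value = request.session.get('data1', '')
    return HttpResponse(my_int_value)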
A:
You can use Sessions, but you had better delete the value once you're done with it. Otherwise, it will persist with the session.
def index(request):
task_created = ""
if 'task_created' in request.session:
task_created = request.session['task_created']
del request.session['task_created']
...
def create_task(request):
...
request.session['task_created'] = "ok"
return redirect('index')
A:
You can use the global variable as shown below:
my_global_value = None # Here
def view1(request):
global my_global_value # Here
my_global_value = request.GET.get("data1", '')
# ...
def view2(request):
global my_global_value # Here
my_int_value = my_global_value
# ...
| How to store Global variable in Django view | In Django views, is it possible to create a global / session variable and assign some value (say, sent through an Ajax call) in a view and make use of it in another view?
The scenario I'm trying to implement will be something like:
View view1 gets some variable data sent thru' Ajax call:
def view1(request):
my_global_value = request.GET.get("data1", '') # Storing data globally
Then, the stored variable is used in another view, view2:
def view2(request):
my_int_value = my_global_value # Using the global variable
| [
"You can use Session Django Docs\nSome example from Django:\ndef post_comment(request, new_comment):\n if request.session.get('has_commented', False):\n return HttpResponse(\"You've already commented.\")\n c = comments.Comment(comment=new_comment)\n c.save()\n request.session['has_commented'] = True\n return HttpResponse('Thanks for your comment!')\n\nEdit the MIDDLEWARE setting and make sure it contains\n'django.contrib.sessions.middleware.SessionMiddleware'\nYour request.session.get(\"key\") is also accessible in any other view.\n",
"You can use Sessions, but you better delete it after you've done with it. Otherwise, it will persist with the session.\ndef index(request):\n task_created = \"\"\n if 'task_created' in request.session:\n task_created = request.session['task_created']\n del request.session['task_created']\n ...\n\n\ndef create_task(request):\n ...\n request.session['task_created'] = \"ok\"\n return redirect('index')\n\n",
"You can use the global variable as shown below:\nmy_global_value = None # Here\n\ndef view1(request):\n global my_global_value # Here\n\n my_global_value = request.GET.get(\"data1\", '')\n # ...\n\ndef view2(request):\n global my_global_value # Here\n\n my_int_value = my_global_value\n # ...\n\n"
] | [
3,
0,
0
] | [] | [] | [
"django",
"django_views",
"global_variables",
"python",
"python_3.x"
] | stackoverflow_0059778199_django_django_views_global_variables_python_python_3.x.txt |
Q:
Select row in dataframe with column more than 1 values
For example I have this dataframe :
A Variant&Price Qty
AAC 7:124|25: 443 1
AAD 35:|35: 1
AAS 32:98|3:40 1
AAG 2: |25: 1
AAC 25:443|26:344 1
And I want to select the rows which have type 7 and below, so my dataframe will look like this
A Variant&Price Qty
AAC 7:124|25: 443 1
AAS 32:98|3:40 1
AAG 2: |25: 1
: is the separator between type and price; 12:250 means type 12 with price 250. | is the separator between items. I don't know how to deal with a column with more than one value. I am trying to change its type to string, but then how do I select 7 and below? Also, the data we are interested in is only one digit before : and one digit after |.
A:
The str methods are very useful in your case: with them we can split the type/price column into 4 parts. Then we take the minimum of the first and third parts (the types), and if it is 7 or lower, we keep the row in our final result.
from pandas import DataFrame
df = DataFrame([['AAC', '7:124|25:443', 1],
['AAD', '35:|35:', 1],
['AAS', '32:98|3:40', 1],
['AAG', '2: |25: ', 1],
['AAC', '25:443|26:344', 1]],
columns=['A', 'Variant&Price', 'Qty'])
split_df = df['Variant&Price'].str.split(':|\|', expand=True)
print(df[split_df.iloc[:, [0,2]].astype(int).min(axis=1) <= 7])
| Select row in dataframe with column more than 1 values | For example I have this dataframe :
A Variant&Price Qty
AAC 7:124|25: 443 1
AAD 35:|35: 1
AAS 32:98|3:40 1
AAG 2: |25: 1
AAC 25:443|26:344 1
And I want to select the rows which have type 7 and below, so my dataframe will look like this
A Variant&Price Qty
AAC 7:124|25: 443 1
AAS 32:98|3:40 1
AAG 2: |25: 1
: is the separator between type and price; 12:250 means type 12 with price 250. | is the separator between items. I don't know how to deal with a column with more than one value. I am trying to change its type to string, but then how do I select 7 and below? Also, the data we are interested in is only one digit before : and one digit after |.
| [
"str method is very useful in your case: like that we can split the type/price column into 4 parts. Then we take the minimum elements from first and third parts (the types) and if it is lower than 7, we take it in our final result.\nfrom pandas import DataFrame\n\ndf = DataFrame([['AAC', '7:124|25:443', 1],\n ['AAD', '35:|35:', 1],\n ['AAS', '32:98|3:40', 1],\n ['AAG', '2: |25: ', 1],\n ['AAC', '25:443|26:344', 1]],\n columns=['A', 'Variant&Price', 'Qty'])\n\nsplit_df = df['Variant&Price'].str.split(':|\\|', expand=True)\nprint(df[split_df.iloc[:, [0,2]].astype(int).min(axis=1) <= 7])\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074633522_pandas_python.txt |
Q:
Auto increment starting at different offset for existing rows
I have two tables, existing_students and old_students, with data in them. Now I want to introduce a new auto-increment column, say alumni_number, and assign a number to all students (old and existing). First I start with the old_students table, say having 100 rows:
ALTER TABLE OLD_STUDENTS ADD ALUMNI_NUMBER INT UNSIGNED NOT NULL AUTO_INCREMENT, ADD index (ALUMNI_NUMBER);
It will assign 1 to 100 across the rows. Now I want to start the count from 101 in existing_students. Is it possible to allocate numbers 101, 102, ... automatically to the rows in the existing_students table?
Any pointers will be helpful
A:
Demo:
mysql> select * from mytable;
+----------+
| name |
+----------+
| Harry |
| Ron |
| Hermione |
+----------+
mysql> alter table mytable
add column id int unsigned not null auto_increment,
add key (id),
auto_increment=101;
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> select * from mytable;
+----------+-----+
| name | id |
+----------+-----+
| Harry | 101 |
| Ron | 102 |
| Hermione | 103 |
+----------+-----+
| Auto increment starting at different offset for existing rows | I have two tables, existing_students and old_students, with data in them. Now I want to introduce a new auto-increment column, say alumni_number, and assign a number to all students (old and existing). First I start with the old_students table, say having 100 rows:
ALTER TABLE OLD_STUDENTS ADD ALUMNI_NUMBER INT UNSIGNED NOT NULL AUTO_INCREMENT, ADD index (ALUMNI_NUMBER);
It will assign 1 to 100 across the rows. Now I want to start the count from 101 in existing_students. Is it possible to allocate numbers 101, 102, ... automatically to the rows in the existing_students table?
Any pointers will be helpful
| [
"Demo:\nmysql> select * from mytable;\n+----------+\n| name |\n+----------+\n| Harry |\n| Ron |\n| Hermione |\n+----------+\n\nmysql> alter table mytable \n add column id int unsigned not null auto_increment, \n add key (id), \n auto_increment=101;\nQuery OK, 0 rows affected (0.02 sec)\nRecords: 0 Duplicates: 0 Warnings: 0\n\nmysql> select * from mytable;\n+----------+-----+\n| name | id |\n+----------+-----+\n| Harry | 101 |\n| Ron | 102 |\n| Hermione | 103 |\n+----------+-----+\n\n"
] | [
1
] | [] | [] | [
"database_migration",
"mysql",
"python",
"sqlalchemy"
] | stackoverflow_0074633113_database_migration_mysql_python_sqlalchemy.txt |
Q:
Dealing With "0" Values in Datadog
I am currently building a set of multiple graphs for my personal company using Datadog. I love how it works, but there is one thing I have not been able to sort out. My data is generated every 5 minutes, and there are times when one or more values come in at '0', which is what I want. The problem is that Datadog does not seem to take these values into account, so until that same value finally comes in with something other than '0', nothing shows up indicating that the value was '0' and then changed to something else. Instead, the graph draws a straight line from the last recorded non-zero value straight to the newest non-zero value. I would love to know how I can get Datadog to consider the zeros and graph them.
In addition, if possible, I would also love to know how I could say something like "if this previous value existed and then on the next set of data it does not show up at all (not even as "0") just assign a "0" to it until it once again appears on the data". Of course for this to be looked into I would need the first problem dealt with.
Here is an image of how it looks right now, which is NOT how I want it to look. The red line shows where all the "0" values land, and the green boxes show the last recorded non-zero values.
Example of '0' values not being graphed properly
I have tried looking through most of Datadog's documentation as well as their posted YouTube videos with no luck. For some reason they do not address this, even when it appears right in front of them in their examples. I expected to find some info online, but there seem to be few resources at the moment. This led me to think this could be the best place to finally get an answer.
A:
I believe you are looking for Interpolation. You specified a couple of use cases, so you may have to experiment with a few of the options depending on what your data looks like. For example, Fill Zero satisfies one of them.
Datadog Graph Functions
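For illustration only, a sketch of where the interpolation function goes in a metric query (the metric name is made up):
avg:your.metric.name{*}.fill(zero)
Here fill(zero) tells Datadog to draw 0 for intervals where the metric reports no value, instead of interpolating a straight line between the surrounding non-zero points.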
| Dealing With "0" Values in Datadog | I am currently building a set of multiple graphs for my personal company using Datadog. I love how it works, but there is one thing I have not been able to sort out. My data is generated every 5 minutes, and there are times when one or more values come in at '0', which is what I want. The problem is that Datadog does not seem to take these values into account, so until that same value finally comes in with something other than '0', nothing shows up indicating that the value was '0' and then changed to something else. Instead, the graph draws a straight line from the last recorded non-zero value straight to the newest non-zero value. I would love to know how I can get Datadog to consider the zeros and graph them.
In addition, if possible, I would also love to know how I could say something like "if this previous value existed and then on the next set of data it does not show up at all (not even as "0") just assign a "0" to it until it once again appears on the data". Of course for this to be looked into I would need the first problem dealt with.
Here is an image of how it looks right now, which is NOT how I want it to look. The red line shows where all the "0" values land, and the green boxes show the last recorded non-zero values.
Example of '0' values not being graphed properly
I have tried looking through most of Datadog's documentation as well as their posted YouTube videos with no luck. For some reason they do not address this, even when it appears right in front of them in their examples. I expected to find some info online, but there seem to be few resources at the moment. This led me to think this could be the best place to finally get an answer.
| [
"I believe you are looking for Interpolation. There are a couple of use cases you specified, you may have to experiment with a few of the options depending on what your data looks like. For example Fill Zero satisfies one of them.\nDatadog Graph Functions\n"
] | [
0
] | [] | [] | [
"datadog",
"graph",
"python"
] | stackoverflow_0074632914_datadog_graph_python.txt |
Q:
Simulating Pointers in Python
I'm trying to cross-compile an in-house language (ihl) to Python.
One of the ihl features is pointers and references that behave like you would expect from C or C++.
For instance you can do this:
a = [1,2]; // a has an array
b = &a; // b points to a
*b = 2; // dereference b to store 2 in a
print(a); // outputs 2
print(*b); // outputs 2
Is there a way to duplicate this functionality in Python?
I should point out that I think I've confused a few people. I don't want pointers in Python. I just wanted to get a sense from the Python experts out there of what Python code I should generate to simulate the case I've shown above.
My Python isn't the greatest but so far my exploration hasn't yielded anything promising:(
I should point out that we are looking to move from our ihl to a more common language so we aren't really tied to Python if someone can suggest another language that may be more suitable.
A:
This can be done explicitly.
class ref:
def __init__(self, obj): self.obj = obj
def get(self): return self.obj
def set(self, obj): self.obj = obj
a = ref([1, 2])
b = a
print(a.get()) # => [1, 2]
print(b.get()) # => [1, 2]
b.set(2)
print(a.get()) # => 2
print(b.get()) # => 2
A:
You may want to read Semantics of Python variable names from a C++ perspective. The bottom line: All variables are references.
More to the point, don't think in terms of variables, but in terms of objects which can be named.
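A quick illustration of that point (added for clarity, not from the linked article):
a = [1, 2]
b = a           # b is just another name for the same list object
b.append(3)
print(a)        # [1, 2, 3] -- the change is visible through both names
print(a is b)   # True -- both names refer to the same object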
A:
If you're compiling a C-like language, say:
func()
{
var a = 1;
var *b = &a;
*b = 2;
assert(a == 2);
}
into Python, then all of the "everything in Python is a reference" stuff is a misnomer.
It's true that everything in Python is a reference, but the fact that many core types (ints, strings) are immutable effectively undoes this for many cases. There's no direct way to implement the above in Python.
Now, you can do it indirectly: for any immutable type, wrap it in a mutable type. Ephemient's solution works, but I often just do this:
a = [1]
b = a
b[0] = 2
assert a[0] == 2
(I've done this to work around Python's lack of "nonlocal" in 2.x a few times.)
This implies a lot more overhead: every immutable type (or every type, if you don't try to distinguish) suddenly creates a list (or another container object), so you're increasing the overhead for variables significantly. Individually, it's not a lot, but it'll add up when applied to a whole codebase.
You could reduce this by only wrapping immutable types, but then you'll need to keep track of which variables in the output are wrapped and which aren't, so you can access the value with "a" or "a[0]" appropriately. It'll probably get hairy.
As to whether this is a good idea or not--that depends on why you're doing it. If you just want something to run a VM, I'd tend to say no. If you want to be able to call to your existing language from Python, I'd suggest taking your existing VM and creating Python bindings for it, so you can access and call into it from Python.
A:
Almost exactly like ephemient's answer, which I voted up, you could use Python's builtin property function. It will do something very similar to the ref class in ephemient's answer, except now, instead of being forced to use get and set methods to access a ref instance, you just access the attributes of your instance which you've assigned as properties in the class definition. From the Python docs (except I changed C to ptr):
class ptr(object):
def __init__(self):
self._x = None
def getx(self):
return self._x
def setx(self, value):
self._x = value
def delx(self):
del self._x
x = property(getx, setx, delx, "I'm the 'x' property.")
Both methods work like a C pointer, without resorting to global. For example if you have a function that takes a pointer:
def do_stuff_with_pointer(pointer, property, value):
setattr(pointer, property, value)
For example
a_ref = ptr() # make pointer
a_ref.x = [1, 2] # a_ref pointer has an array [1, 2]
b_ref = a_ref # b_ref points to a_ref
# pass ``ptr`` instance to function that changes its content
do_stuff_with_pointer(b_ref, 'x', 3)
print a_ref.x # outputs 3
print b_ref.x # outputs 3
Another, and totally crazy option would be to use Python's ctypes. Try this:
from ctypes import *
a = py_object([1,2]) # a has an array
b = a # b points to a
b.value = 2 # dereference b to store 2 in a
print a.value # outputs 2
print b.value # outputs 2
or if you want to get really fancy
from ctypes import *
a = py_object([1,2]) # a has an array
b = pointer(a) # b points to a
b.contents.value = 2 # dereference b to store 2 in a
print a.value # outputs 2
print b.contents.value # outputs 2
which is more like OP's original request. crazy!
A:
Everything in Python is pointers already, but it's called "references" in Python. This is the translation of your code to Python:
a = [1,2]  # a has an array
b = a      # b points to a
a = 2      # store 2 in a
print(a)   # outputs 2
print(b)   # outputs [1,2]
"Dereferencing" makes no sense, as it's all references. There isn't anything else, so nothing to dereference to.
A:
As others here have said, all Python variables are essentially pointers.
The key to understanding this from a C perspective is to use the id() function, which is unknown to many. It tells you what address the variable points to.
>>> a = [1,2]
>>> id(a)
28354600
>>> b = a
>>> id(a)
28354600
>>> id(b)
28354600
A:
This is goofy, but a thought...
# Change operations like:
b = &a
# To:
b = "a"
# And change operations like:
*b = 2
# To:
locals()[b] = 2
>>> a = [1,2]
>>> b = "a"
>>> locals()[b] = 2
>>> print(a)
2
>>> print(locals()[b])
2
But there would be no pointer arithmetic or such, and no telling what other problems you might run into...
A:
class Pointer(object):
def __init__(self, target=None):
self.target = target
_noarg = object()
def __call__(self, target=_noarg):
if target is not self._noarg:
self.target = target
return self.target
a = Pointer([1, 2])
b = a
print a() # => [1, 2]
print b() # => [1, 2]
b(2)
print a() # => 2
print b() # => 2
A:
I think that this example is short and clear.
Here we have a class with an implicit, class-level list:
class A:
foo = []
a, b = A(), A()
a.foo.append(5)
b.foo
ans: [5]
Looking at this memory profile (using: from memory_profiler import profile), my intuition tells me that this may somehow simulate pointers like in C:
Filename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py
Line # Mem usage Increment Line Contents
================================================
7 31.2 MiB 0.0 MiB @profile
8 def f():
9 31.2 MiB 0.0 MiB a, b = A(), A()
10 #here memoery increase and is coupled
11 50.3 MiB 19.1 MiB a.foo.append(np.arange(5000000))
12 73.2 MiB 22.9 MiB b.foo.append(np.arange(6000000))
13 73.2 MiB 0.0 MiB return a,b
[array([ 0, 1, 2, ..., 4999997, 4999998, 4999999]), array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])] [array([ 0, 1, 2, ..., 4999997, 4999998, 4999999]), array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])]
Filename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py
Line # Mem usage Increment Line Contents
================================================
14 73.4 MiB 0.0 MiB @profile
15 def g():
16 #clearing b.foo list clears a.foo
17 31.5 MiB -42.0 MiB b.foo.clear()
18 31.5 MiB 0.0 MiB return a,b
[] []
Filename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py
Line # Mem usage Increment Line Contents
================================================
19 31.5 MiB 0.0 MiB @profile
20 def h():
21 #and here mem. coupling is lost ;/
22 69.6 MiB 38.1 MiB b.foo=np.arange(10000000)
23 #memory inc. when b.foo is replaced
24 107.8 MiB 38.1 MiB a.foo.append(np.arange(10000000))
25 #so its seams that modyfing items of
26 #existing object of variable a.foo,
27 #changes automaticcly items of b.foo
28 #and vice versa,but changing object
29 #a.foo itself splits with b.foo
30 107.8 MiB 0.0 MiB return b,a
[array([ 0, 1, 2, ..., 9999997, 9999998, 9999999])] [ 0 1 2 ..., 9999997 9999998 9999999]
And here we have an explicit per-instance list assigned to self in the class:
class A:
def __init__(self):
self.foo = []
a, b = A(), A()
a.foo.append(5)
b.foo
ans: []
Filename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py
Line # Mem usage Increment Line Contents
================================================
44 107.8 MiB 0.0 MiB @profile
45 def f():
46 107.8 MiB 0.0 MiB a, b = B(), B()
47 #here some memory increase
48 #and this mem. is not coupled
49 126.8 MiB 19.1 MiB a.foo.append(np.arange(5000000))
50 149.7 MiB 22.9 MiB b.foo.append(np.arange(6000000))
51 149.7 MiB 0.0 MiB return a,b
[array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])] [array([ 0, 1, 2, ..., 4999997, 4999998, 4999999])]
Filename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py
Line # Mem usage Increment Line Contents
================================================
52 111.6 MiB 0.0 MiB @profile
53 def g():
54 #clearing b.foo list
55 #do not clear a.foo
56 92.5 MiB -19.1 MiB b.foo.clear()
57 92.5 MiB 0.0 MiB return a,b
[] [array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])]
Filename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py
Line # Mem usage Increment Line Contents
================================================
58 92.5 MiB 0.0 MiB @profile
59 def h():
60 #and here memory increse again ;/
61 107.8 MiB 15.3 MiB b.foo=np.arange(10000000)
62 #memory inc. when b.foo is replaced
63 145.9 MiB 38.1 MiB a.foo.append(np.arange(10000000))
64 145.9 MiB 0.0 MiB return b,a
[array([ 0, 1, 2, ..., 9999997, 9999998, 9999999])] [ 0 1 2 ..., 9999997 9999998 9999999]
PS: I'm self-learning programming (I started with Python), so please do not hate me if I'm wrong. It's just my intuition that led me to think this way.
A:
class A:
_a = 1
_b = 2
@property
def a(self):
return self._a
@a.setter
def a(self, value):
self._a = value
@property
def b(self):
return self._b
@b.setter
def b(self, value):
self._b = value
a = A()
>>> a.a, a.b
(1, 2)
>>> A.b = A.a
>>> a.a, a.b
(1, 1)
>>> a.b = 'b'
>>> a.a, a.b
('b', 'b')
>>> a.a = 'a'
>>> a.a, a.b
('a', 'a')
Using only a class will not get the desired results.
class A:
a = 1
b = 2
>>> A.b = A.a
>>> A.a, A.b
(1, 1)
>>> A.a = 'a'
>>> A.b
1
>>> A.a, A.b
('a', 1)
| Simulating Pointers in Python | I'm trying to cross-compile an in-house language (ihl) to Python.
One of the ihl features is pointers and references that behave like you would expect from C or C++.
For instance you can do this:
a = [1,2]; // a has an array
b = &a; // b points to a
*b = 2; // derefernce b to store 2 in a
print(a); // outputs 2
print(*b); // outputs 2
Is there a way to duplicate this functionality in Python.
I should point out that I think I've confused a few people. I don't want pointers in Python. I just wanted to get a sense from the Python experts out there, what Python I should generate to simulate the case I've shown above
My Python isn't the greatest but so far my exploration hasn't yielded anything promising:(
I should point out that we are looking to move from our ihl to a more common language so we aren't really tied to Python if someone can suggest another language that may be more suitable.
| [
"This can be done explicitly.\nclass ref:\n def __init__(self, obj): self.obj = obj\n def get(self): return self.obj\n def set(self, obj): self.obj = obj\n\na = ref([1, 2])\nb = a\nprint(a.get()) # => [1, 2]\nprint(b.get()) # => [1, 2]\n\nb.set(2)\nprint(a.get()) # => 2\nprint(b.get()) # => 2\n\n",
"You may want to read Semantics of Python variable names from a C++ perspective. The bottom line: All variables are references.\nMore to the point, don't think in terms of variables, but in terms of objects which can be named.\n",
"If you're compiling a C-like language, say:\nfunc()\n{\n var a = 1;\n var *b = &a;\n *b = 2;\n assert(a == 2);\n}\n\ninto Python, then all of the \"everything in Python is a reference\" stuff is a misnomer.\nIt's true that everything in Python is a reference, but the fact that many core types (ints, strings) are immutable effectively undoes this for many cases. There's no direct way to implement the above in Python.\nNow, you can do it indirectly: for any immutable type, wrap it in a mutable type. Ephemient's solution works, but I often just do this:\na = [1]\nb = a\nb[0] = 2\nassert a[0] == 2\n\n(I've done this to work around Python's lack of \"nonlocal\" in 2.x a few times.)\nThis implies a lot more overhead: every immutable type (or every type, if you don't try to distinguish) suddenly creates a list (or another container object), so you're increasing the overhead for variables significantly. Individually, it's not a lot, but it'll add up when applied to a whole codebase.\nYou could reduce this by only wrapping immutable types, but then you'll need to keep track of which variables in the output are wrapped and which aren't, so you can access the value with \"a\" or \"a[0]\" appropriately. It'll probably get hairy.\nAs to whether this is a good idea or not--that depends on why you're doing it. If you just want something to run a VM, I'd tend to say no. If you want to be able to call to your existing language from Python, I'd suggest taking your existing VM and creating Python bindings for it, so you can access and call into it from Python.\n",
"Almost exactly like ephemient answer, which I voted up, you could use Python's builtin property function. It will do something nearly similar to the ref class in ephemient's answer, except now, instead of being forced to use get and set methods to access a ref instance, you just call the attributes of your instance which you've assigned as properties in the class definition. From Python docs (except I changed C to ptr):\nclass ptr(object):\n def __init__(self):\n self._x = None\n def getx(self):\n return self._x\n def setx(self, value):\n self._x = value\n def delx(self):\n del self._x\n x = property(getx, setx, delx, \"I'm the 'x' property.\")\n\nBoth methods work like a C pointer, without resorting to global. For example if you have a function that takes a pointer:\ndef do_stuff_with_pointer(pointer, property, value):\n setattr(pointer, property, value)\n\nFor example\na_ref = ptr() # make pointer\na_ref.x = [1, 2] # a_ref pointer has an array [1, 2]\nb_ref = a_ref # b_ref points to a_ref\n# pass ``ptr`` instance to function that changes its content\ndo_stuff_with_pointer(b_ref, 'x', 3)\nprint a_ref.x # outputs 3\nprint b_ref.x # outputs 3\n\nAnother, and totally crazy option would be to use Python's ctypes. Try this:\nfrom ctypes import *\na = py_object([1,2]) # a has an array \nb = a # b points to a\nb.value = 2 # derefernce b to store 2 in a\nprint a.value # outputs 2\nprint b.value # outputs 2\n\nor if you want to get really fancy\nfrom ctypes import *\na = py_object([1,2]) # a has an array \nb = pointer(a) # b points to a\nb.contents.value = 2 # derefernce b to store 2 in a\nprint a.value # outputs 2\nprint b.contents.value # outputs 2\n\nwhich is more like OP's original request. crazy!\n",
"Everything in Python is pointers already, but it's called \"references\" in Python. This is the translation of your code to Python:\na = [1,2] // a has an array \nb = a // b points to a\na = 2 // store 2 in a.\nprint(a) // outputs 2\nprint(b) // outputs [1,2]\n\n\"Dereferencing\" makes no sense, as it's all references. There isn't anything else, so nothing to dereference to.\n",
"As others here have said, all Python variables are essentially pointers.\nThe key to understanding this from a C perspective is to use the unknown by many id() function. It tells you what address the variable points to.\n>>> a = [1,2]\n>>> id(a)\n28354600\n\n>>> b = a\n>>> id(a)\n28354600\n\n>>> id(b)\n28354600\n\n",
"This is goofy, but a thought...\n# Change operations like:\nb = &a\n\n# To:\nb = \"a\"\n\n# And change operations like:\n*b = 2\n\n# To:\nlocals()[b] = 2\n\n\n>>> a = [1,2]\n>>> b = \"a\"\n>>> locals()[b] = 2\n>>> print(a)\n2\n>>> print(locals()[b])\n2\n\nBut there would be no pointer arithmetic or such, and no telling what other problems you might run into...\n",
"class Pointer(object):\n def __init__(self, target=None):\n self.target = target\n\n _noarg = object()\n\n def __call__(self, target=_noarg):\n if target is not self._noarg:\n self.target = target\n return self.target\n\na = Pointer([1, 2])\nb = a\n\nprint a() # => [1, 2]\nprint b() # => [1, 2]\n\nb(2)\nprint a() # => 2\nprint b() # => 2\n\n",
"I think that this example is short and clear. \nHere we have class with implicit list:\nclass A: \n foo = []\na, b = A(), A()\na.foo.append(5)\nb.foo\nans: [5]\n\nLooking at this memory profile (using: from memory_profiler import profile), my intuition tells me that this may somehow simulate pointers like in C:\nFilename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py\n\nLine # Mem usage Increment Line Contents\n================================================\n 7 31.2 MiB 0.0 MiB @profile\n 8 def f():\n 9 31.2 MiB 0.0 MiB a, b = A(), A()\n 10 #here memoery increase and is coupled\n 11 50.3 MiB 19.1 MiB a.foo.append(np.arange(5000000))\n 12 73.2 MiB 22.9 MiB b.foo.append(np.arange(6000000))\n 13 73.2 MiB 0.0 MiB return a,b\n\n\n[array([ 0, 1, 2, ..., 4999997, 4999998, 4999999]), array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])] [array([ 0, 1, 2, ..., 4999997, 4999998, 4999999]), array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])]\nFilename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py\n\nLine # Mem usage Increment Line Contents\n================================================\n 14 73.4 MiB 0.0 MiB @profile\n 15 def g():\n 16 #clearing b.foo list clears a.foo\n 17 31.5 MiB -42.0 MiB b.foo.clear()\n 18 31.5 MiB 0.0 MiB return a,b\n\n\n[] []\nFilename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py\n\nLine # Mem usage Increment Line Contents\n================================================\n 19 31.5 MiB 0.0 MiB @profile\n 20 def h():\n 21 #and here mem. coupling is lost ;/\n 22 69.6 MiB 38.1 MiB b.foo=np.arange(10000000)\n 23 #memory inc. when b.foo is replaced\n 24 107.8 MiB 38.1 MiB a.foo.append(np.arange(10000000))\n 25 #so its seams that modyfing items of\n 26 #existing object of variable a.foo,\n 27 #changes automaticcly items of b.foo\n 28 #and vice versa,but changing object\n 29 #a.foo itself splits with b.foo\n 30 107.8 MiB 0.0 MiB return b,a\n\n\n[array([ 0, 1, 2, ..., 9999997, 9999998, 9999999])] [ 0 1 2 ..., 9999997 9999998 9999999]\n\nAnd here we have explicit self in class:\nclass A: \n def __init__(self): \n self.foo = []\na, b = A(), A()\na.foo.append(5)\nb.foo\nans: []\n\nFilename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py\n\nLine # Mem usage Increment Line Contents\n================================================\n 44 107.8 MiB 0.0 MiB @profile\n 45 def f():\n 46 107.8 MiB 0.0 MiB a, b = B(), B()\n 47 #here some memory increase\n 48 #and this mem. is not coupled\n 49 126.8 MiB 19.1 MiB a.foo.append(np.arange(5000000))\n 50 149.7 MiB 22.9 MiB b.foo.append(np.arange(6000000))\n 51 149.7 MiB 0.0 MiB return a,b\n\n\n[array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])] [array([ 0, 1, 2, ..., 4999997, 4999998, 4999999])]\nFilename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py\n\nLine # Mem usage Increment Line Contents\n================================================\n 52 111.6 MiB 0.0 MiB @profile\n 53 def g():\n 54 #clearing b.foo list\n 55 #do not clear a.foo\n 56 92.5 MiB -19.1 MiB b.foo.clear()\n 57 92.5 MiB 0.0 MiB return a,b\n\n\n[] [array([ 0, 1, 2, ..., 5999997, 5999998, 5999999])]\nFilename: F:/MegaSync/Desktop/python_simulate_pointer_with_class.py\n\nLine # Mem usage Increment Line Contents\n================================================\n 58 92.5 MiB 0.0 MiB @profile\n 59 def h():\n 60 #and here memory increse again ;/\n 61 107.8 MiB 15.3 MiB b.foo=np.arange(10000000)\n 62 #memory inc. 
when b.foo is replaced\n 63 145.9 MiB 38.1 MiB a.foo.append(np.arange(10000000))\n 64 145.9 MiB 0.0 MiB return b,a\n\n\n[array([ 0, 1, 2, ..., 9999997, 9999998, 9999999])] [ 0 1 2 ..., 9999997 9999998 9999999]\n\nps: I'm self learning programming (started with Python) so please do not hate me if I'm wrong. Its just mine intuition, that let me think that way, so do not hate me!\n",
"class A:\n \n _a = 1\n _b = 2\n\n @property\n def a(self):\n return self._a\n \n @a.setter\n def a(self, value):\n self._a = value\n\n @property\n def b(self):\n return self._b\n\n @b.setter\n def b(self, value):\n self._b = value\n\na = A()\n\n>>> a.a, a.b\n(1, 2)\n\n>>> A.b = A.a\n>>> a.a, a.b\n(1, 1)\n\n>>> a.b = 'b'\n>>> a.a, a.b\n('b', 'b')\n\n>>> a.a = 'a'\n>>> a.a, a.b\n('a', 'a')\n\nUsing only a class will not get the desired results.\nclass A:\n a = 1\n b = 2\n\n>>> A.b = A.a\n>>> A.a, A.b\n(1, 1)\n\n>>> A.a = 'a'\n>>> A.b\n1\n\n>>> A.a, A.b\n('a', 1)\n\n"
] | [
85,
24,
14,
10,
4,
4,
1,
0,
0,
0
] | [
"Negative, no pointers. You should not need them with the way the language is designed. However, I heard a nasty rumor that you could use the: ctypes module to use them. I haven't used it, but it smells messy to me.\n"
] | [
-2
] | [
"pointers",
"python"
] | stackoverflow_0001145722_pointers_python.txt |
Q:
python JSON format invalid
I'm trying to get date and time flowing into Azure IoT hub to enable me to analyze using Azure DX as time series. I can get the temperature and humidity (humidity at the moment is just a random number). If I use this code, all works well and the JSON is well formatted and flows into IoT hub and onto Azure DX:
The basis for the code is taken from the Microsoft examples here - https://github.com/Azure-Samples/azure-iot-samples-python/blob/master/iot-hub/Quickstarts/simulated-device/SimulatedDeviceSync.py
import asyncio
import random
from azure.iot.device import Message
from azure.iot.device.aio import IoTHubDeviceClient
import time
from datetime import datetime
from w1thermsensor import W1ThermSensor
sensor = W1ThermSensor()
import json
CONNECTION_STRING = "xxxxx"
HUMIDITY = 60
MSG_TXT = '{{"temperature": {temperature},"humidity": {humidity}}}'
async def main():
try:
# Create instance of the device client
client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
print("Simulated device started. Press Ctrl-C to exit")
while True:
humidity = round(HUMIDITY + (random.random() * 20), 2)
temperature = sensor.get_temperature()
msg_txt_formatted = MSG_TXT.format(temperature=temperature, humidity=humidity)
message = Message(msg_txt_formatted)
# Send a message to the IoT hub
print(f"Sending message: {message}")
await client.send_message(message)
await asyncio.sleep(1)
except KeyboardInterrupt:
print("Simulated device stopped")
if __name__ == '__main__':
asyncio.run(main())
The JSON format is valid and works well -
{ "temperature": 7, "humidity": 66.09 }
If I try to add a date/time field like this:
import asyncio
import random
from azure.iot.device import Message
from azure.iot.device.aio import IoTHubDeviceClient
import time
from datetime import datetime
from w1thermsensor import W1ThermSensor
sensor = W1ThermSensor()
import json
CONNECTION_STRING = "xxxxx"
HUMIDITY = 60
x = datetime.now()
timesent = str(x)
MSG_TXT = '{{"temperature": {temperature},"humidity": {humidity},"timesent": {timesent}}}'
async def main():
try:
# Create instance of the device client
client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
print("Simulated device started. Press Ctrl-C to exit")
while True:
humidity = round(HUMIDITY + (random.random() * 20), 2)
temperature = sensor.get_temperature()
msg_txt_formatted = MSG_TXT.format(temperature=temperature, humidity=humidity, timesent=timesent)
message = Message(msg_txt_formatted)
# Send a message to the IoT hub
print(f"Sending message: {message}")
await client.send_message(message)
await asyncio.sleep(1)
except KeyboardInterrupt:
print("Simulated device stopped")
if __name__ == '__main__':
asyncio.run(main())
The output from the JSON is no longer valid and Azure DX will not map. The invalid JSON I get is:
"{\"temperature\": 7,\"humidity\": 72.88, \"timesent\": 2022-11-08 14:21:04.021812}"
I suspect this is something to do with the date/time being formatted as a string, but I'm totally lost.
Would anyone have any ideas how I can send this data?
A:
@JoeHo, thank you for pointing out the sources that helped you resolve the issue. I am posting the solution here so that other community members facing a similar issue can benefit. Making the modifications below to the code helped me resolve the issue.
from json import dumps
from datetime import datetime, date

def json_serial(obj):
    # JSON serializer for datetime/date objects (emits ISO-8601 strings)
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    raise TypeError("Type %s not serializable" % type(obj))

x = datetime.now().isoformat()
timesent = dumps(datetime.now(), default=json_serial)  # quoted ISO string, so the template stays valid JSON
MSG_TXT = '{{"temperature": {temperature},"humidity": {humidity}, "timesent": {timesent}}}'
My table on Azure Data Explorer has the following field definitions defined.
.create table jsondata (temperature: real, humidity: real, timesent: datetime)
My data mapping query is as below
.create table jsondata ingestion json mapping 'jsonMapping' '[{"column":"humidity","path":"$.humidity","datatype":"real"},{"column":"temperature","path":"$.temperature","datatype":"real"}, {"column":"timesent","path":"$.timesent","datatype":"datetime"}]'
I then connected the Azure Data Explorer table to IoT Hub using the steps outlined in the following resource Connect Azure Data Explorer table to IoT hub
When I execute the program, I could see the Azure IoT Hub telemetry data flow getting bound to the Azure Data explorer table without any issues.
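For reference, a minimal alternative sketch is to serialize a dict with json.dumps instead of hand-writing the JSON template (this assumes the temperature, humidity and Message objects from the question's loop):
import json
from datetime import datetime

payload = json.dumps({
    "temperature": temperature,
    "humidity": humidity,
    "timesent": datetime.now().isoformat(),  # becomes a quoted ISO-8601 string
})
message = Message(payload)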
| python JSON format invalid | I'm trying to get date and time flowing into Azure IoT hub to enable me to analyze using Azure DX as time series. I can get the temperature and humidity (humidity at the moment is just a random number). If I use this code, all works well and the JSON is well formatted and flows into IoT hub and onto Azure DX:
The basis for the code is taken from the Microsoft examples here - https://github.com/Azure-Samples/azure-iot-samples-python/blob/master/iot-hub/Quickstarts/simulated-device/SimulatedDeviceSync.py
import asyncio
import random
from azure.iot.device import Message
from azure.iot.device.aio import IoTHubDeviceClient
import time
from datetime import datetime
from w1thermsensor import W1ThermSensor
sensor = W1ThermSensor()
import json
CONNECTION_STRING = "xxxxx"
HUMIDITY = 60
MSG_TXT = '{{"temperature": {temperature},"humidity": {humidity}}}'
async def main():
try:
# Create instance of the device client
client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
print("Simulated device started. Press Ctrl-C to exit")
while True:
humidity = round(HUMIDITY + (random.random() * 20), 2)
temperature = sensor.get_temperature()
msg_txt_formatted = MSG_TXT.format(temperature=temperature, humidity=humidity)
message = Message(msg_txt_formatted)
# Send a message to the IoT hub
print(f"Sending message: {message}")
await client.send_message(message)
await asyncio.sleep(1)
except KeyboardInterrupt:
print("Simulated device stopped")
if __name__ == '__main__':
asyncio.run(main())
The JSON format is valid and works well -
{ "temperature": 7, "humidity": 66.09 }
If I try to add a date/time field like this:
import asyncio
import random
from azure.iot.device import Message
from azure.iot.device.aio import IoTHubDeviceClient
import time
from datetime import datetime
from w1thermsensor import W1ThermSensor
sensor = W1ThermSensor()
import json
CONNECTION_STRING = "xxxxx"
HUMIDITY = 60
x = datetime.now()
timesent = str(x)
MSG_TXT = '{{"temperature": {temperature},"humidity": {humidity},"timesent": {timesent}}}'
async def main():
try:
# Create instance of the device client
client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
print("Simulated device started. Press Ctrl-C to exit")
while True:
humidity = round(HUMIDITY + (random.random() * 20), 2)
temperature = sensor.get_temperature()
msg_txt_formatted = MSG_TXT.format(temperature=temperature, humidity=humidity, timesent=timesent)
message = Message(msg_txt_formatted)
# Send a message to the IoT hub
print(f"Sending message: {message}")
await client.send_message(message)
await asyncio.sleep(1)
except KeyboardInterrupt:
print("Simulated device stopped")
if __name__ == '__main__':
asyncio.run(main())
The output from the JSON is no longer valid and Azure DX will not map. The invalid JSON I get is:
"{\"temperature\": 7,\"humidity\": 72.88, \"timesent\": 2022-11-08 14:21:04.021812}"
I suspect this is something to do with the date/time being formatted as a string, but I'm totally lost.
Would anyone have any ideas how I can send this data?
| [
"@JoeHo, thank you for pointing the sources that helped you resolve the issue. I am posting the solution here so that other community members facing similar issue would benefit. Making the below modifications to the code helped me resolve the issue.\ndef json_serial(obj):\n if isinstance(obj, (datetime, date)):\n return obj.isoformat()\n raise TypeError (\"Type %s not serializable\" % type(obj))\nx = datetime.now().isoformat();\ntimesent = dumps(datetime.now(), default=json_serial);\nMSG_TXT = '{{\"temperature\": {temperature},\"humidity\": {humidity}, \"timesent\": {timesent}}}'\n\nMy table on the Azure data explorer has the following filed definitions defined.\n.create table jsondata (temperature: real, humidity: real, timesent: datetime)\n\nMy data mapping query is as below\n.create table jsondata ingestion json mapping 'jsonMapping' '[{\"column\":\"humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"temperature\",\"path\":\"$.temperature\",\"datatype\":\"real\"}, {\"column\":\"timesent\",\"path\":\"$.timesent\",\"datatype\":\"datetime\"}]'\n\nI then connected the Azure Data Explorer table to IoT Hub using the steps outlined in the following resource Connect Azure Data Explorer table to IoT hub\nWhen I execute the program, I could see the Azure IoT Hub telemetry data flow getting bound to the Azure Data explorer table without any issues.\n\n"
] | [
0
] | [] | [] | [
"azure",
"azure_iot_hub",
"datetime",
"python"
] | stackoverflow_0074362745_azure_azure_iot_hub_datetime_python.txt |
Q:
Check to see if text is contained within a variable in a list in Python
I am working with this code:
test_list = ['small_cat', 'big_dog', 'turtle']
if 'dog' not in test_list:
    output = 'Good'
else:
    output = 'Bad'

print(output)
Because 'dog' is not in the list, 'output' will come back with a response of 'Good'. However, I am looking for 'output' to return 'Bad' because the word 'dog' is part of an item in the list. How would I go about doing this?
A:
You should iterate all the values in test_list -
output = 'Good'
for test_word in test_list:
if 'dog' in test_word:
output = 'Bad'
break
print(output)
A:
you need to check each one in the list:
output = 'Good'
for item in test_list:
if 'dog' in item:
output = 'Bad'
print(output)
A:
any and all are super useful for this combined with a generator expression.
if any('dog' in w for w in test_list):
...
else:
...
Both any and all are very expressive of what they're doing, and they short-circuit: as soon as the outcome is known, they stop iterating. They can be combined with conditional expressions to permit:
output = "Bad" if any('dog' in w for w in test_list) else "Good"
| Check to see if text is contained within a variable in a list in Python | I am working with this code:
test_list = ['small_cat', 'big_dog', 'turtle']
if 'dog' not in test_list:
    output = 'Good'
else:
    output = 'Bad'

print(output)
Because 'dog' is not in the list, 'output' will come back with a response of 'Good'. However, I am looking for 'output' to return 'Bad' because the word 'dog' is part of an item in the list. How would I go about doing this?
| [
"You should iterate all the values in test_list -\noutput = 'Good'\nfor test_word in test_list:\n if 'dog' in test_word:\n output = 'Bad'\n break\n\nprint(output)\n\n",
"you need to check each one in the list:\noutput = 'Good'\nfor item in test_list:\n if 'dog' in item:\n output = 'Bad'\nprint(output)\n\n",
"any and all are super useful for this combined with a generator expression.\nif any('dog' in w for w in test_list):\n ...\nelse:\n ...\n\nBoth any and all are very expressive of what they're doing, and they short-circuit: as soon as the outcome is known, they stop iterating. They can be combined with conditional expressions to permit:\noutput = \"Bad\" if any('dog' in w for w in test_list) else \"Good\"\n\n"
] | [
2,
0,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074633742_list_python.txt |
Q:
How to rename duplicated column names in a pandas dataframe
Quite shortly: I have this DataFrame
Dataframe
In the dataframe I have some duplicated columns with different values. How can I fix it so these have different column-names?
df_temporary.rename(columns={df_temporary.columns[3]: "OeFG%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[11]:"DeFG%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[5]: "OTOV%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[13]: "DTOV%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[8]: "OFT/FGA"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[-1]: "DFT/FGA"}, inplace=True)
df_temporary
I attempted using rename, first with all columns in one {}, and then hardcoding each of them afterwards; both produced the same result.
What I tried
I also tried following this:
Renaming columns in a Pandas dataframe with duplicate column names?
Not much success implementing these in my code however.
A:
Welcome to Stackoverflow :)
Great that you showed us a code sample of what you tried, and that you even linked to a stackoverflow post showing that you researched this problem!
In the future, try to avoid posting screenshots of tables and paste them in as text so that reviewers/helpers can easily copy your stuff to run some code of their own.
Now, for your problem: you can easily just change the column names by changing the value of df.columns. A mini-example (I was too lazy to make all the columns, and I'm not sure whether the rename is really what you wanted) for your case would be the following:
import pandas as pd
data = [["Boston Celtics", 51, 31, 0.542, 0.502],
["Phoenix Suns", 64, 18, 0.549, 0.510]]
df = pd.DataFrame(data, columns=['Team', 'W', 'L', 'eFG%', 'eFG%'])
df
Team W L eFG% eFG%
0 Boston Celtics 51 31 0.542 0.502
1 Phoenix Suns 64 18 0.549 0.510
df.columns = ["Team", "W", "L", "eFG%", "OeFG%"]
df
Team W L eFG% OeFG%
0 Boston Celtics 51 31 0.542 0.502
1 Phoenix Suns 64 18 0.549 0.510
As you can see, that final column has been renamed to OeFG% instead of eFG%. You can of course do this for any columns you like!
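If you would rather not type out the full list of names, a small sketch of a generic approach (a hypothetical helper, assuming the df from above) is to suffix the 2nd, 3rd, ... occurrence of each duplicated name:
from collections import Counter

def dedup_columns(cols):
    # Keep the first occurrence as-is and suffix later ones: eFG%, eFG%_2, ...
    seen = Counter()
    renamed = []
    for col in cols:
        seen[col] += 1
        renamed.append(col if seen[col] == 1 else f"{col}_{seen[col]}")
    return renamed

df.columns = dedup_columns(df.columns)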
Hope this helps :)
| How to rename duplicated column names in a pandas dataframe | Quite shortly: I have this DataFrame
Dataframe
In the dataframe I have some duplicated columns with different values. How can I fix it so these have different column-names?
df_temporary.rename(columns={df_temporary.columns[3]: "OeFG%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[11]:"DeFG%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[5]: "OTOV%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[13]: "DTOV%"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[8]: "OFT/FGA"}, inplace=True)
df_temporary.rename(columns={df_temporary.columns[-1]: "DFT/FGA"}, inplace=True)
df_temporary
I attempted using rename, first with all columns in one {}, and then hardcoding each of them afterwards; both produced the same result.
What I tried
I also tried following this:
Renaming columns in a Pandas dataframe with duplicate column names?
Not much success implementing these in my code however.
| [
"Welcome to Stackoverflow :)\nGreat that you showed us a code sample of what you tried, and that you even linked to a stackoverflow post showing that you researched this problem!\nIn the future, try to avoid posting screenshots of tables and paste them in as text so that reviewers/helpers can easily copy your stuff to run some code of their own.\nNow, for your problem: you can easily just change them column names by changing the value of df.columns. A mini-example (I was too lazy to make all the columns, and I'm not sure whether the rename is really what you wanted) for your case would be the following:\nimport pandas as pd\n\ndata = [[\"Boston Celtics\", 51, 31, 0.542, 0.502],\n [\"Phoenix Suns\", 64, 18, 0.549, 0.510]]\n\ndf = pd.DataFrame(data, columns=['Team', 'W', 'L', 'eFG%', 'eFG%'])\ndf\n Team W L eFG% eFG% \n0 Boston Celtics 51 31 0.542 0.502 \n1 Phoenix Suns 64 18 0.549 0.510\n\ndf.columns = [\"Team\", \"W\", \"L\", \"eFG%\", \"OeFG%\"]\ndf\n Team W L eFG% OeFG% \n0 Boston Celtics 51 31 0.542 0.502 \n1 Phoenix Suns 64 18 0.549 0.510\n\n\nAs you can see, that final column has been renamed to OeFG% instead of eFG%. You can of course do this for any columns you like!\nHope this helps :)\n"
] | [
2
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074633391_dataframe_pandas_python.txt |
Q:
How to calculate pairwise co-occurrence matrix based on dataframe?
I have a dataframe, about 800,000 rows and 16 columns, below is an example from the data,
import pandas as pd
import datetime
start = datetime.datetime.now()
print('Starting time,'+str(start))
dict1 = {'id':['person1','person2','person3','person4','person5'], \
'food1':['A','A','A','C','D' ], \
'food2':['B','C','B','A','B'], \
'food3':['','D','C','',''], 'food4':['','','D','','',] }
demo = pd.DataFrame(dict1)
demo
>>>Out[13]
Starting time,2022-11-30 12:08:41.414807
id food1 food2 food3 food4
0 person1 A B
1 person2 A C D
2 person3 A B C D
3 person4 C A
4 person5 D B
My ideal result format is as follows,
>>>Out[14]
A B C D
A 0 2 3 2
B 2 0 1 2
C 3 1 0 2
D 2 2 2 0
I did the following:
I've searched a bit through stackoverflow, google, but so far haven't come across an answer that helps with my problem.
I tried to code it myself; my idea was to first build each pairing, then combine everything into a string, and finally count the duplicates, but limited by my coding abilities, it's a work in progress. Also, a "new" combination formed by the end of one pair and the start of another pair may cause errors in the process of finding duplicates.
Thank you for your help.
A:
If I understand your goal correctly you can use this:
uniques = demo[[x for x in demo.columns if 'id' not in x]].stack().unique()
pd.DataFrame(index = uniques, columns = uniques).fillna(np.NaN)
A:
You could try this:
out = pd.get_dummies(data=demo.iloc[:,1:].stack()).sum(level=0).ne(0).astype(int)
final = out.T.dot(out).astype(float)
np.fill_diagonal(final.values, np.nan)
>>>final
A B C D
A NaN 2.0 3.0 2.0
B 2.0 NaN 1.0 2.0
C 3.0 1.0 NaN 2.0
D 2.0 2.0 2.0 NaN
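Note that DataFrame.sum(level=0) is deprecated (and removed in recent pandas); on newer versions the same step can be written with groupby, a sketch with the same logic:
out = pd.get_dummies(data=demo.iloc[:, 1:].stack()).groupby(level=0).sum().ne(0).astype(int)
final = out.T.dot(out).astype(float)
np.fill_diagonal(final.values, np.nan)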
| How to calculate pairwise co-occurrence matrix based on dataframe? | I have a dataframe, about 800,000 rows and 16 columns, below is an example from the data,
import pandas as pd
import datetime
start = datetime.datetime.now()
print('Starting time,'+str(start))
dict1 = {'id':['person1','person2','person3','person4','person5'], \
'food1':['A','A','A','C','D' ], \
'food2':['B','C','B','A','B'], \
'food3':['','D','C','',''], 'food4':['','','D','','',] }
demo = pd.DataFrame(dict1)
demo
>>>Out[13]
Starting time,2022-11-30 12:08:41.414807
id food1 food2 food3 food4
0 person1 A B
1 person2 A C D
2 person3 A B C D
3 person4 C A
4 person5 D B
My ideal result format is as follows,
>>>Out[14]
A B C D
A 0 2 3 2
B 2 0 1 2
C 3 1 0 2
D 2 2 2 0
I did the following:
I've searched a bit through stackoverflow, google, but so far haven't come across an answer that helps with my problem.
I tried to code it myself; my idea was to first build each pairing, then combine everything into a string, and finally count the duplicates, but limited by my coding abilities, it's a work in progress. Also, a "new" combination formed by the end of one pair and the start of another pair may cause errors in the process of finding duplicates.
Thank you for your help.
| [
"If I understand your goal correctly you can use this:\nuniques = demo[[x for x in demo.columns if 'id' not in x]].stack().unique()\npd.DataFrame(index = uniques, columns = uniques).fillna(np.NaN)\n\n",
"You could try this:\nout = pd.get_dummies(data=demo.iloc[:,1:].stack()).sum(level=0).ne(0).astype(int)\nfinal = out.T.dot(out).astype(float)\nnp.fill_diagonal(final.values, np.nan)\n\n>>>final\n A B C D\nA NaN 2.0 3.0 2.0\nB 2.0 NaN 1.0 2.0\nC 3.0 1.0 NaN 2.0\nD 2.0 2.0 2.0 NaN\n\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074630972_dataframe_pandas_python.txt |
Q:
how do I plot an array in python?
I have an array a with shape (43,9). I want to plot all rows of this array in one diagram. For this I used the following code:
plt.plot(a)
it produces this diagram but I think it is wrong:
I put part of a here:
30.2682 30.4287 30.4531 30.4675 30.4784 30.4893 30.5002 30.511 30.5219
28.3204 29.4246 30.5289 31.5486 31.8152 31.9301 32.0395 32.1488 32.2582
29.884 30.4592 31.0343 31.4055 31.4843 31.5157 31.549 31.5823 31.6157
29.5203 30.0669 30.6135 30.9845 31.0889 31.1244 31.1599 31.1954 31.2309
30.2158 30.6971 31.1784 31.4935 31.5697 31.6017 31.6336 31.6655 31.6974
how can I show all rows of a as a curve in one plot in python?
A:
Try this:
data = []
for arr in a:
data.extend(arr)
plt.plot(list(range(len(data))), data)
It will combine all arrays from a into data array
A:
From what I understand, you want to reshape the array such that each dataset is plotted into one line. If this is the correct interpretation, all you need is np.reshape(). I will generate a random dataset in the shape (43,9) and make it into a single array of length 387 and show the difference
import numpy as np
import matplotlib.pyplot as plt
y = np.random.rand(43,9) # Random dataset (43,9)
y_rs = np.reshape(y, (43*9)) # Single array reshaping the data
plt.plot(y) # The original
plt.show()
plt.plot(y_rs) #The final
plt.show()
The original looks like
The reshaped version looks like
It is better to use numpy for these operations as it vectorizes the reshaping procedure which is much faster than looping over a list.
A:
The usual stuff
import matplotlib.pyplot as plt
import numpy as np
Let's make an array whose plot will be predictable, but with a shape similar to the shape of your array a
a = np.zeros((41,5))
for i, val in enumerate((2, 4, 6, 8, 10)):
a[:,i] = val
The array a is
2 4 6 8 10
2 4 6 8 10
2 4 6 8 10
2 4 6 8 10
2 4 6 8 10
...
2 4 6 8 10
so I expect 5 horizontal lines, at y=2, y=4, etc
Finally, we plot a using exactly the same idiom that you have used
plt.plot(a)
plt.grid()
plt.show()
The plot is exactly what I expected
it produces this diagram but I think it is wrong
NO, Matplotlib plotted exactly your data on the ordinates, and for the abscissae it simply used the range from 0 to 43-1.
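If the goal is one curve per row of a (43 curves of 9 points each), transposing before plotting does that; a small sketch, assuming the same a and matplotlib import as above:
plt.plot(a.T)   # a.T has shape (9, 43): one line per column, i.e. one curve per original row
plt.grid()
plt.show()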
| how do I plot an array in python? | I have an array a with shape (43,9). I want to plot all rows of this array in one diagram. For this I used the following code:
plt.plot(a)
it produces this diagram but I think it is wrong:
I put part of a here:
30.2682 30.4287 30.4531 30.4675 30.4784 30.4893 30.5002 30.511 30.5219
28.3204 29.4246 30.5289 31.5486 31.8152 31.9301 32.0395 32.1488 32.2582
29.884 30.4592 31.0343 31.4055 31.4843 31.5157 31.549 31.5823 31.6157
29.5203 30.0669 30.6135 30.9845 31.0889 31.1244 31.1599 31.1954 31.2309
30.2158 30.6971 31.1784 31.4935 31.5697 31.6017 31.6336 31.6655 31.6974
how can I show all rows of a as a curve in one plot in python?
| [
"Try this:\ndata = []\nfor arr in a:\n data.extend(arr)\n\nplt.plot(list(range(len(data))), data)\n\nIt will combine all arrays from a into data array\n",
"From what I understand, you want to reshape the array such that each dataset is plotted into one line. If this is the correct interpretation, all you need is np.reshape(). I will generate a random dataset in the shape (43,9) and make it into a single array of length 387 and show the difference\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ny = np.random.rand(43,9) # Random dataset (43,9)\ny_rs = np.reshape(y, (43*9)) # Single array reshaping the data\n\nplt.plot(y) # The original\nplt.show()\n\nplt.plot(y_rs) #The final\nplt.show()\n\nThe original looks like\n\nThe reshaped version looks like\n\nIt is better to use numpy for these operations as it vectorizes the reshaping procedure which is much faster than looping over a list.\n",
"The usual stuff\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nLet's make an array whose plot will be predictable, but with a shape similar to the shape of your array a\na = np.zeros((41,5))\nfor i, val in enumerate((2, 4, 6, 8, 10)):\n a[:,i] = val\n\nThe array a is\n2 4 6 8 10 \n2 4 6 8 10 \n2 4 6 8 10 \n2 4 6 8 10 \n2 4 6 8 10 \n...\n2 4 6 8 10 \n\nso I expect 5 horizontal lines, at y=2, y=4, etc\nFinally, we plot a using exactly the same idiom that you have used\nplt.plot(a)\nplt.grid()\nplt.show()\n\nThe plot is exactly what I expected\n\n\nit produces this diagram but I think it is wrong\n\nNO, Matplotlib plotted exactly your data in the ordinates, and in the abscissae simply used the range from 0 to 43-1.\n"
] | [
0,
0,
0
] | [] | [] | [
"matplotlib",
"plot",
"python"
] | stackoverflow_0074632184_matplotlib_plot_python.txt |
Q:
Python networkx, plotly. How to display Edges mouse-over text
I need to display text when mouse is over edges (similar to image).
Maybe someone knows how to do it?
I took the example from plotly.com and am trying to modify it.
It works fine for Nodes, but I cannot get it to work for Edges.
import networkx
import plotly.graph_objects as go
G = networkx.random_geometric_graph(n=3, radius=1)
# Create Edges
edge_x, edge_y = [], []
for idx0, idx1 in G.edges():
x0, y0 = G.nodes[idx0]["pos"]
x1, y1 = G.nodes[idx1]["pos"]
edge_x.extend([x0, x1, None])
edge_y.extend([y0, y1, None])
edge_trace = go.Scatter(
x=edge_x, y=edge_y, line=dict(width=10, color='#888'), mode='lines',
hoverinfo='text', # TODO, NOT WORKING
)
# TODO, NOT WORKING
edge_trace.text = [f"# {s}" for s in edge_x]
# Create Nodes with TEXT
node_x = []
node_y = []
nodes_o = G.nodes
nodes_ids = list(nodes_o)
for node in nodes_ids:
x, y = G.nodes[node]["pos"]
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(x=node_x, y=node_y, mode="markers", hoverinfo="text",
marker=dict(color=[], size=50, line_width=10))
node_trace.text = [f"TEXT{idx}" for idx, _ in G.adjacency()]
# Create Network Graph
fig = go.Figure(
data=[edge_trace, node_trace],
layout=go.Layout(),
)
fig.show()
A:
Plotly doesn't actually have any functionality that will account for hover content over the length of a line. However, you can use a workaround.
You could plot transparent points at the midpoint of each edge. Alternatively, you could plot any number of transparent points on each edge.
Just the midpoint
When you create the edge data, you can add the code to create the midpoints, as well.
mnode_x, mnode_y, mnode_txt = [], [], []
edge_x, edge_y = [], []
for idx0, idx1 in G.edges():
x0, y0 = G.nodes[idx0]["pos"]
x1, y1 = G.nodes[idx1]["pos"]
edge_x.extend([x0, x1, None])
edge_y.extend([y0, y1, None])
mnode_x.extend([(x0 + x1)/2]) # assuming values positive/get midpoint
mnode_y.extend([(y0 + y1)/2]) # assumes positive vals/get midpoint
mnode_txt.extend([f"# {x0}"]) # hovertext
Then create the trace for these points, setting the opacity to 0 so the points aren't actually visible.
mnode_trace = go.Scatter(x = mnode_x, y = mnode_y, mode = "markers", showlegend = False,
hovertemplate = "Edge %{hovertext}<extra></extra>",
hovertext = mnode_txt, marker = go.Marker(opacity = 0))
Lastly, when you assemble your figure, add this trace, as well.
# Create Network Graph
fig = go.Figure(
data=[edge_trace, node_trace, mnode_trace],
layout=go.Layout())
You designate how many points per edge
If you want to specify how many points appear on each edge, you could use this instead. These two functions work together to create the midpoints of line segments within line segments.
This requires deque from the collections package.
import networkx
import plotly.graph_objects as go
from collections import deque
def queue(a, b, qty):
"""either x0 and x1 or y0 and y1, qty of points to create"""
q = deque()
q.append((0, qty - 1)) # indexing starts at 0
pts = [0] * qty
pts[0] = a; pts[-1] = b # x0 is the first value, x1 is the last
while len(q) != 0:
left, right = q.popleft() # remove working segment from queue
center = (left + right + 1)//2 # creates index values for pts
pts[center] = (pts[left] + pts[right])/2
if right - left > 2: # stop when qty met
q.append((left, center))
q.append((center, right))
return pts
def collector(x0, x1, y0, y1, qty, ht):
"""line segment end points, how many midpoints, hovertext"""
pth = [ht] * qty
ptx = queue(x0, x1, qty + 2) # add 2 because the origin will be in the list
pty = queue(y0, y1, qty + 2)
ptx.pop(0); ptx.pop() # pop first and last (the nodes)
pty.pop(0); pty.pop() # pop first and last (the nodes)
return ptx, pty, pth
Then, like the 'just the midpoint', you can send the line segment ends to collector from the call to create the edge data.
# Create Edges
m2x, m2y, m2t = [], [], []
edge_x, edge_y = [], []
for idx0, idx1 in G.edges():
x0, y0 = G.nodes[idx0]["pos"]
x1, y1 = G.nodes[idx1]["pos"]
edge_x.extend([x0, x1, None])
edge_y.extend([y0, y1, None])
# the 3 is points per line; x0 at the end is for hovertext
ptsx, ptsy, ptsh = collector(x0, x1, y0, y1, 3, x0)
m2x.extend(ptsx)
m2y.extend(ptsy)
m2t.extend(ptsh)
Finally, create the trace and combine the figure.
m2node_trace = go.Scatter(x = m2x, y = m2y, mode = "markers", showlegend = False,
hovertemplate = "Edge # %{hovertext}<extra></extra>",
hovertext = m2t, marker = go.Marker(opacity = 0))
fig2 = go.Figure(data = [edge_trace, node_trace, m2node_trace],
layout = go.Layout())
I added a <br> to this plot's hovertext for differentiation.
A:
Below is a fully working example based on Kat's approach.
from collections import deque
import networkx
import plotly.graph_objects as go
EDGE_POINTS_QUANTITY = 20
EDGE_POINTS_OPACITY = 1
EDGES = 3
def queue(a, b, qty):
"""either x0 and x1 or y0 and y1, qty of points to create"""
q = deque()
q.append((0, qty - 1)) # indexing starts at 0
pts = [0] * qty
pts[0] = a
pts[-1] = b # x0 is the first value, x1 is the last
while len(q) != 0:
left, right = q.popleft() # remove working segment from queue
center = (left + right + 1) // 2 # creates index values for pts
pts[center] = (pts[left] + pts[right]) / 2
if right - left > 2: # stop when qty met
q.append((left, center))
q.append((center, right))
return pts
def make_middle_points(first_x, last_x, first_y, last_y, qty):
"""line segment end points, how many midpoints, hovertext"""
# Add 2 because the origin will be in the list, pop first and last (the nodes)
middle_x_ = queue(first_x, last_x, qty + 2)
middle_y_ = queue(first_y, last_y, qty + 2)
middle_x_.pop(0)
middle_x_.pop()
middle_y_.pop(0)
middle_y_.pop()
return middle_x_, middle_y_
G = networkx.random_geometric_graph(n=EDGES, radius=1)
# Create Edges with TEXT
edge_x, edge_y = [], []
edge_middle_x, edge_middle_y, edge_middle_text = [], [], []
for idx0, idx1 in G.edges():
x0, y0 = G.nodes[idx0]["pos"]
x1, y1 = G.nodes[idx1]["pos"]
edge_x.extend([x0, x1, None])
edge_y.extend([y0, y1, None])
# the 3 is points per line; x0 at the end is for hovertext
middle_x, middle_y = make_middle_points(x0, x1, y0, y1, EDGE_POINTS_QUANTITY)
edge_middle_x.extend(middle_x)
edge_middle_y.extend(middle_y)
edge_middle_text.extend([f"EDGE{idx0}{idx1}"] * EDGE_POINTS_QUANTITY)
edge_trace = go.Scatter(x=edge_x, y=edge_y, line=dict(width=10, color='#888'), mode='lines')
m2node_trace = go.Scatter(x=edge_middle_x, y=edge_middle_y, mode="markers", showlegend=False,
hovertemplate="Edge # %{hovertext}<extra></extra>",
hovertext=edge_middle_text,
marker=go.Marker(opacity=EDGE_POINTS_OPACITY))
# Create Nodes with TEXT
node_x = []
node_y = []
nodes_o = G.nodes
nodes_ids = list(nodes_o)
for node in nodes_ids:
x, y = G.nodes[node]["pos"]
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(x=node_x, y=node_y, mode="markers", hoverinfo="text",
marker=dict(size=50, line_width=10))
node_trace.text = [f"NODE{idx}" for idx, _ in G.adjacency()]
# Create Network Graph
fig = go.Figure(data=[edge_trace, node_trace, m2node_trace], layout=go.Layout())
fig.show()
| Python networkx, plotly. How to display Edges mouse-over text | I need to display text when mouse is over edges (similar to image).
Maybe someone knows how to do it?
I took the example from plotly.com and am trying to modify it.
It works fine for Nodes, but I cannot get it to work for Edges.
import networkx
import plotly.graph_objects as go
G = networkx.random_geometric_graph(n=3, radius=1)
# Create Edges
edge_x, edge_y = [], []
for idx0, idx1 in G.edges():
x0, y0 = G.nodes[idx0]["pos"]
x1, y1 = G.nodes[idx1]["pos"]
edge_x.extend([x0, x1, None])
edge_y.extend([y0, y1, None])
edge_trace = go.Scatter(
x=edge_x, y=edge_y, line=dict(width=10, color='#888'), mode='lines',
hoverinfo='text', # TODO, NOT WORKING
)
# TODO, NOT WORKING
edge_trace.text = [f"# {s}" for s in edge_x]
# Create Nodes with TEXT
node_x = []
node_y = []
nodes_o = G.nodes
nodes_ids = list(nodes_o)
for node in nodes_ids:
x, y = G.nodes[node]["pos"]
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(x=node_x, y=node_y, mode="markers", hoverinfo="text",
marker=dict(color=[], size=50, line_width=10))
node_trace.text = [f"TEXT{idx}" for idx, _ in G.adjacency()]
# Create Network Graph
fig = go.Figure(
data=[edge_trace, node_trace],
layout=go.Layout(),
)
fig.show()
| [
"Plotly doesn't actually have any functionality that will account for hover content over the length of a line. However, you can use a workaround.\nYou could plot transparent points at the midpoint of each edge. Alternatively, you could plot any number of transparent points on each edge.\nJust the midpoint\nWhen you create the edge data, you can add the code to create the midpoints, as well.\nmnode_x, mnode_y, mnode_txt = [], [], []\nedge_x, edge_y = [], []\nfor idx0, idx1 in G.edges():\n x0, y0 = G.nodes[idx0][\"pos\"]\n x1, y1 = G.nodes[idx1][\"pos\"]\n edge_x.extend([x0, x1, None])\n edge_y.extend([y0, y1, None])\n\n mnode_x.extend([(x0 + x1)/2]) # assuming values positive/get midpoint\n mnode_y.extend([(y0 + y1)/2]) # assumes positive vals/get midpoint\n mnode_txt.extend([f\"# {x0}\"]) # hovertext\n\nThen create the trace for these points, setting the opacity to 0 so the points aren't actually visible.\nmnode_trace = go.Scatter(x = mnode_x, y = mnode_y, mode = \"markers\", showlegend = False,\n hovertemplate = \"Edge %{hovertext}<extra></extra>\",\n hovertext = mnode_txt, marker = go.Marker(opacity = 0))\n\nLastly, when you assemble your figure, add this trace, as well.\n# Create Network Graph\nfig = go.Figure(\n data=[edge_trace, node_trace, mnode_trace],\n layout=go.Layout())\n\n\nYou designate how many points per edge\nIf you wanted to have a quantity that you specify of points on each edge, you could use this instead. These two functions work together to create the midpoints of line segments within line segments.\nThis requires deque from the collections package.\nimport networkx\nimport plotly.graph_objects as go\nfrom collections import deque \n\ndef queue(a, b, qty):\n \"\"\"either x0 and x1 or y0 and y1, qty of points to create\"\"\"\n q = deque()\n q.append((0, qty - 1)) # indexing starts at 0\n pts = [0] * qty\n pts[0] = a; pts[-1] = b # x0 is the first value, x1 is the last\n while len(q) != 0:\n left, right = q.popleft() # remove working segment from queue\n center = (left + right + 1)//2 # creates index values for pts\n pts[center] = (pts[left] + pts[right])/2\n if right - left > 2: # stop when qty met\n q.append((left, center))\n q.append((center, right))\n return pts\n\n\ndef collector(x0, x1, y0, y1, qty, ht):\n \"\"\"line segment end points, how many midpoints, hovertext\"\"\"\n pth = [ht] * qty\n ptx = queue(x0, x1, qty + 2) # add 2 because the origin will be in the list\n pty = queue(y0, y1, qty + 2)\n ptx.pop(0); ptx.pop() # pop first and last (the nodes)\n pty.pop(0); pty.pop() # pop first and last (the nodes)\n return ptx, pty, pth\n\nThen, like the 'just the midpoint', you can send the line segment ends to collector from the call to create the edge data.\n# Create Edges\nm2x, m2y, m2t = [], [], []\nedge_x, edge_y = [], []\nfor idx0, idx1 in G.edges():\n x0, y0 = G.nodes[idx0][\"pos\"]\n x1, y1 = G.nodes[idx1][\"pos\"]\n edge_x.extend([x0, x1, None])\n edge_y.extend([y0, y1, None])\n\n # the 3 is points per line; x0 at the end is for hovertext\n ptsx, ptsy, ptsh = collector(x0, x1, y0, y1, 3, x0)\n m2x.extend(ptsx)\n m2y.extend(ptsy)\n m2t.extend(ptsh)\n\nFinally, create the trace and combine the figure.\nm2node_trace = go.Scatter(x = m2x, y = m2y, mode = \"markers\", showlegend = False,\n hovertemplate = \"Edge # %{hovertext}<extra></extra>\",\n hovertext = m2t, marker = go.Marker(opacity = 0))\n\nfig2 = go.Figure(data = [edge_trace, node_trace, m2node_trace],\n layout = go.Layout())\n\nI added a <br> to this plot's hovertext for differentiation.\n\n \n",
"Below is a fully working example based on Kat's approach.\n\nfrom collections import deque\n\nimport networkx\nimport plotly.graph_objects as go\n\nEDGE_POINTS_QUANTITY = 20\nEDGE_POINTS_OPACITY = 1\nEDGES = 3\n\n\ndef queue(a, b, qty):\n \"\"\"either x0 and x1 or y0 and y1, qty of points to create\"\"\"\n q = deque()\n q.append((0, qty - 1)) # indexing starts at 0\n pts = [0] * qty\n pts[0] = a\n pts[-1] = b # x0 is the first value, x1 is the last\n while len(q) != 0:\n left, right = q.popleft() # remove working segment from queue\n center = (left + right + 1) // 2 # creates index values for pts\n pts[center] = (pts[left] + pts[right]) / 2\n if right - left > 2: # stop when qty met\n q.append((left, center))\n q.append((center, right))\n return pts\n\n\ndef make_middle_points(first_x, last_x, first_y, last_y, qty):\n \"\"\"line segment end points, how many midpoints, hovertext\"\"\"\n # Add 2 because the origin will be in the list, pop first and last (the nodes)\n middle_x_ = queue(first_x, last_x, qty + 2)\n middle_y_ = queue(first_y, last_y, qty + 2)\n middle_x_.pop(0)\n middle_x_.pop()\n middle_y_.pop(0)\n middle_y_.pop()\n return middle_x_, middle_y_\n\n\nG = networkx.random_geometric_graph(n=EDGES, radius=1)\n\n# Create Edges with TEXT\nedge_x, edge_y = [], []\nedge_middle_x, edge_middle_y, edge_middle_text = [], [], []\nfor idx0, idx1 in G.edges():\n x0, y0 = G.nodes[idx0][\"pos\"]\n x1, y1 = G.nodes[idx1][\"pos\"]\n edge_x.extend([x0, x1, None])\n edge_y.extend([y0, y1, None])\n\n # the 3 is points per line; x0 at the end is for hovertext\n middle_x, middle_y = make_middle_points(x0, x1, y0, y1, EDGE_POINTS_QUANTITY)\n edge_middle_x.extend(middle_x)\n edge_middle_y.extend(middle_y)\n edge_middle_text.extend([f\"EDGE{idx0}{idx1}\"] * EDGE_POINTS_QUANTITY)\n\nedge_trace = go.Scatter(x=edge_x, y=edge_y, line=dict(width=10, color='#888'), mode='lines')\nm2node_trace = go.Scatter(x=edge_middle_x, y=edge_middle_y, mode=\"markers\", showlegend=False,\n hovertemplate=\"Edge # %{hovertext}<extra></extra>\",\n hovertext=edge_middle_text,\n marker=go.Marker(opacity=EDGE_POINTS_OPACITY))\n\n# Create Nodes with TEXT\nnode_x = []\nnode_y = []\nnodes_o = G.nodes\nnodes_ids = list(nodes_o)\nfor node in nodes_ids:\n x, y = G.nodes[node][\"pos\"]\n node_x.append(x)\n node_y.append(y)\n\nnode_trace = go.Scatter(x=node_x, y=node_y, mode=\"markers\", hoverinfo=\"text\",\n marker=dict(size=50, line_width=10))\nnode_trace.text = [f\"NODE{idx}\" for idx, _ in G.adjacency()]\n\n# Create Network Graph\nfig = go.Figure(data=[edge_trace, node_trace, m2node_trace], layout=go.Layout())\nfig.show()\n\n"
] | [
1,
0
] | [] | [] | [
"edges",
"mouseover",
"networkx",
"plotly",
"python"
] | stackoverflow_0074607000_edges_mouseover_networkx_plotly_python.txt |
Q:
Numpy matmul requires way more memory than is necessary when inputs are of different types
I have a very large boolean 2d array (~1Gb) and I want to matmul it with a 1d array of float64. The output array would be large, but still considerably smaller than the 2d array. However, when I try to matmul them, my system freaks out and tries to allocate enough memory for the 2d array to have float64, rather than just enough for the output vector
import numpy as np
x,y=286880, 20419
a=np.random.randint(0,2,(x,y),dtype=np.bool_).reshape(x,y)
v=np.random.rand(y)
print(f'{a.shape=} {a.dtype=} {a.size * a.itemsize=}')
print(f'{v.shape=} {v.dtype=} {v.size * v.itemsize=}')
d = np.dot(a, v)
Output:
a.shape=(286880, 20419) a.dtype=dtype('bool') a.size * a.itemsize=5857802720
v.shape=(20419,) v.dtype=dtype('float64') v.size * v.itemsize=163352
Traceback (most recent call last):
File "...", line 7, in <module>
d = np.dot(a, v)
numpy.core._exceptions.MemoryError: Unable to allocate 43.6 GiB for an array with shape (286880, 20419) and data type float64
Why does numpy need to allocate so much memory just for matrix multiplication? It seems like there is no reason to require additional memory for calculating a dot product, since you can just keep a running total as you iterate through the columns
Is there a time efficient and memory efficient method I can use here to get the dot product?
PS: I feel like maybe there's something extra efficient I can use here because the matrix is boolean?
A:
Numpy does not internally support operations on different types, so it uses type promotion to convert the inputs so they can be of the same type first (the conversion is done out-of-place). Indeed, when Numpy executes np.dot(a, v), it first converts a of type np.bool_[:,:] to an array of type np.float64[:,:]. This is because the rules of type promotion specify that bool_ BIN_OP float64 = float64. The thing is that a is huge here, so the conversion causes a big memory issue. Since np.bool_ is generally 1 byte, this causes 9 times more memory to be used, which is clearly not reasonable.
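A quick way to see the promotion rule and the size of the promoted copy (a small sketch, assuming the shapes from the question):
import numpy as np

print(np.result_type(np.bool_, np.float64))   # float64: the bool array gets promoted
x, y = 286880, 20419
print(x * y * 8 / 1024**3)                    # ~43.6 GiB for the float64 copy of a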
The reason why Numpy does not support operations on different types is that it would be too complex to maintain (bigger code, much slower compilation, more complex code often resulting in more bugs) and it would make Numpy binaries much bigger.
There are several solutions to address this issue.
The simplest solution is to split the big array into chunks so that only one chunk needs to be converted at a time. This is more complex and slower than the naive approach, but the overhead compared to the naive method is small as long as the chunks are big enough. In practice, both the naive implementation and this approach are very inefficient due to the slow implicit conversion.
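A minimal sketch of the chunked approach (assuming a and v as defined in the question):
import numpy as np

def chunked_dot(a, v, chunk_rows=4096):
    # Only one block of rows is promoted to float64 at a time, keeping peak memory small
    out = np.empty(a.shape[0], dtype=np.float64)
    for start in range(0, a.shape[0], chunk_rows):
        stop = min(start + chunk_rows, a.shape[0])
        out[start:stop] = a[start:stop] @ v
    return out

d = chunked_dot(a, v)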
An alternative solution is to use Numba or Cython so as to generate fast code (thanks to a JIT/native compiler). Numba can perform the conversion (due to the type promotion) on the fly, avoiding allocation of big buffers. This is also more efficient since the operation becomes more compute-bound and memory is slow: the compiler can generate a set of clever SIMD instructions to convert parts of the boolean array to floats efficiently. Additionally, this operation is more cache-friendly since the input can better fit in cache (the smaller the better). Note that you need to use plain loops to get the benefit from using Numba/Cython: using np.dot(a, v) directly in Numba or Cython will not be better than in Numpy.
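A possible Numba version with plain loops (a sketch; since a is boolean, the dot product reduces to summing v[j] where a[i, j] is True):
import numba as nb
import numpy as np

@nb.njit(parallel=True)
def bool_dot(a, v):
    out = np.zeros(a.shape[0], dtype=np.float64)
    for i in nb.prange(a.shape[0]):
        s = 0.0
        for j in range(a.shape[1]):
            if a[i, j]:
                s += v[j]
        out[i] = s
    return out

d = bool_dot(a, v)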
Note that if a is sparse, that is, it mainly contains zeros, then it is certainly better to use sparse matrices. Indeed, sparse matrices tend to be even more compact in memory, assuming the matrices are sparse enough. This can also avoid many unneeded multiplications by zero.
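A SciPy sketch for the sparse case (this only pays off if most entries of a are False, which is not the case for the 50% random example in the question):
import numpy as np
from scipy import sparse

a_sp = sparse.csr_matrix(a, dtype=np.float64)  # keeps only the nonzero (True) entries
d = a_sp @ v                                   # dense float64 result of length a.shape[0]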
| Numpy matmul requires way more memory than is necessary when inputs are of different types | I have a very large boolean 2d array (~1Gb) and I want to matmul it with a 1d array of float64. The output array would be large, but still considerably smaller than the 2d array. However, when I try to matmul them, my system freaks out and tries to allocate enough memory for the 2d array to have float64, rather than just enough for the output vector
import numpy as np
x,y=286880, 20419
a=np.random.randint(0,2,(x,y),dtype=np.bool_).reshape(x,y)
v=np.random.rand(y)
print(f'{a.shape=} {a.dtype=} {a.size * a.itemsize=}')
print(f'{v.shape=} {v.dtype=} {v.size * v.itemsize=}')
d = np.dot(a, v)
Output:
a.shape=(286880, 20419) a.dtype=dtype('bool') a.size * a.itemsize=5857802720
v.shape=(20419,) v.dtype=dtype('float64') v.size * v.itemsize=163352
Traceback (most recent call last):
File "...", line 7, in <module>
d = np.dot(a, v)
numpy.core._exceptions.MemoryError: Unable to allocate 43.6 GiB for an array with shape (286880, 20419) and data type float64
Why does numpy need to allocate so much memory just for matrix multiplication? It seems like there is no reason to require additional memory for calculating a dot product, since you can just keep a running total as you iterate through the columns
Is there a time efficient and memory efficient method I can use here to get the dot product?
PS: I feel like maybe there's something extra efficient I can use here because the matrix is boolean?
| [
"Numpy does not internally supports operations on different types so it uses type promotion to convert the inputs so they can be of the same type first (the conversion is done out-of-place). Indeed, when Numpy executes np.dot(a, v), it first converts a of type np.bool_[:,:] to an array of type np.float64[:,:]. This is because the rule of the type promotion specify that bool_ BIN_OP float64 = float64. The thing is a is huge here so the conversion cause a big memory issue. Since np.bool_ is generally 1 byte, this cause 9 times more memory to be used, which is clearly not reasonable.\nThe reason why Numpy does not support operations on different types is that it would be too complex to maintain (bigger code, much slower compilation, more complex code often resulting to more bugs) and make Numpy binaries much bigger.\n\nThere are several solutions to address this issue.\nThe simplest solution is to split the big array in chunks so that only chunks are converted. This is more complex and slower than the naive approach but the overhead compared to the naive method is small as long as the chunks are big enough. In practice, both the naive implementation and this approach are very inefficient due to the slow implicit conversion.\nAn alternative solution is to use Numba or Cython so to generate a fast code (thanks to a JIT/native compiler). Numba can perform the conversion (due to the type promotion) on the fly avoiding big buffers to be allocated. This is also more efficient since the operation is more compute-bound and memory is slow: the compiler can generate a set of clever SIMD instruction so to convert parts of the boolean array efficiently to floats. Additionally, this operation is more cache-friendly since the input can better fit in cache (the smaller the better). Note that you need to use plain loops to get the benefit from using Numba/Cython: using np.dot(a, v) directly in Numba or Cython will not be better than in Numpy.\nNote that if a is sparse, that is it mainly contains zeros, then it is certainly better to use sparse matrices. Indeed, sparse matrices tends to be even more compact in memory assuming the matrices are sparse enough. This can also avoid many unneeded multiplications by zeros.\n"
] | [
1
] | [] | [] | [
"dot_product",
"matrix_multiplication",
"memory",
"numpy",
"python"
] | stackoverflow_0074625873_dot_product_matrix_multiplication_memory_numpy_python.txt |
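As a concrete illustration of the Numba suggestion in the answer above, here is a minimal sketch (my own, not from the original answer; it assumes numba is installed) that keeps a running total per row and converts the booleans on the fly, so no float64 copy of the large matrix is ever allocated:
import numba as nb
import numpy as np

@nb.njit(parallel=True)
def bool_dot(a, v):
    # a: 2d bool array, v: 1d float64 array; returns the same result as a @ v
    n, m = a.shape
    out = np.empty(n, dtype=np.float64)
    for i in nb.prange(n):
        acc = 0.0
        for j in range(m):
            if a[i, j]:  # a True entry contributes v[j], a False entry contributes nothing
                acc += v[j]
        out[i] = acc
    return out
If the matrix is mostly zeros, converting it to a sparse matrix (e.g. scipy.sparse.csr_matrix) before multiplying is one way to follow the answer's sparse-matrix note.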
Q:
topleft from surface rect dont accept new tuple value
I create pygame screen.
import pygame
pygame.init()
screen = pygame.display.set_mode((330, 330))
Then i create an object of my own class and give him a tuple.
mc = MyClass((10, 10))
This class has a problem: topleft from the surface doesn't accept the new tuple value.
class MyClass:
def __init__(self, pos):
self.surface = pygame.Surface((100, 100))
self.surface.get_rect().topleft = pos
print(pos) # (10, 10)
print(self.surface.get_rect().topleft) # (0, 0)
How can i solve this problem?
A:
A Surface has no position and the rectangle is not an attribute of the Surface. The get_rect() method creates a new rectangle object with the top left position (0, 0) each time the method is called. The instruction
self.surface.get_rect().topleft = pos
only changes the position of an object instance that is not stored anywhere. When you do
print(self.surface.get_rect().topleft)
a new rectangle object will be created with the top left coordinate (0, 0). You have to store this object somewhere. Then you can change the position of this rectangle object instance:
class MyClass:
def __init__(self, pos):
self.surface = pygame.Surface((100, 100))
self.rect = self.surface.get_rect()
self.rect.topleft = pos
print(self.rect.topleft)
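A minimal usage sketch (my addition, reusing the screen and mc from the question): because the rect is now stored on the instance, it can be passed to blit as the destination and the surface appears at the requested position.
mc = MyClass((10, 10))
screen.blit(mc.surface, mc.rect)  # draws the 100x100 surface with its top left at (10, 10)
pygame.display.flip()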
| topleft from surface rect dont accept new tuple value | I create pygame screen.
import pygame
pygame.init()
screen = pygame.display.set_mode((330, 330))
Then i create an object of my own class and give him a tuple.
mc = MyClass((10, 10))
This class has a problem. topleft from surface don`t accept new tuple value.
class MyClass:
def __init__(self, pos):
self.surface = pygame.Surface((100, 100))
self.surface.get_rect().topleft = pos
print(pos) # (10, 10)
print(self.surface.get_rect().topleft) # (0, 0)
How can i solve this problem?
| [
"A Surface has no position and the rectangle is not an attribute of the Surface. The get_rect() method creates a new rectangle object with the top left position (0, 0) each time the method is called. The instruction\n\nself.surface.get_rect().topleft = pos\n\n\nonly changes the position of an object instance that is not stored anywhere. When you do\n\nprint(self.surface.get_rect().topleft)\n\n\na new rectangle object will be created with the top left coordinate (0, 0). You have to store this object somewhere. Then you can change the position of this rectangle object instance:\nclass MyClass:\n def __init__(self, pos):\n self.surface = pygame.Surface((100, 100))\n self.rect = self.surface.get_rect()\n self.rect.topleft = pos\n print(self.rect.topleft) \n\n"
] | [
0
] | [] | [] | [
"pygame",
"python",
"tuples"
] | stackoverflow_0074633853_pygame_python_tuples.txt |
Q:
How to store and pass a filename to as a variable in python
I have the beginnings of some Python that will take columns out of a specific CSV file and then rename the CSV columns something else. The issue I have is that the CSV file will always be in the same directory this script is run in, but the name won't always be the same (and there will only ever be one CSV in the directory at a time).
Is there a way to automatically grab the csv name and pass it as a variable? Here is what I have so far:
`
import pandas as pd
#df = pd.read_csv("csv_import.csv",skiprows=1) #==> use to skip first row (header if required)
df = pd.read_csv("test.csv") #===> Include the headers
correct_df = df.copy()
correct_df.rename(columns={'Text1': 'Address1', 'Text2': 'Address2'}, inplace=True)
#Exporting to CSV file
correct_df.to_csv(r'.csv', index=False,header=True)
`
What I'm looking for is to not have to specify "test.csv" and instead have it grab the name of the CSV in the directory.
A:
You can use glob.glob to return a list of the files matching a pattern (e.g. *.csv) and then take the first match.
Try this :
from glob import glob
import pandas as pd
csv_file = glob("*.csv")[0]
df= pd.read_csv(csv_file)
A:
You can get a list of all files in the directory (in this case CSV files) using os.listdir():
import os
csvfiles = [p for p in os.listdir() if p.endswith(".csv")]
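Another hedged sketch, using pathlib from the standard library; it relies on the question's guarantee that exactly one CSV exists in the working directory:
from pathlib import Path
import pandas as pd

csv_file = next(Path(".").glob("*.csv"))  # raises StopIteration if no CSV is present
df = pd.read_csv(csv_file)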
| How to store and pass a filename to as a variable in python | I have the beginnings of some python that will take columns out of a specific csv file and then rename the csv columns something else. The issue that I have is the CSV file will always be in the same directory this script is ran in, but the name won't always be the same (and there will only ever be one csv in the directory at a time)
Is there a way to automatically grab the csv name and pass it as a variable? Here is what I have so far:
`
import pandas as pd
#df = pd.read_csv("csv_import.csv",skiprows=1) #==> use to skip first row (header if required)
df = pd.read_csv("test.csv") #===> Include the headers
correct_df = df.copy()
correct_df.rename(columns={'Text1': 'Address1', 'Text2': 'Address2'}, inplace=True)
#Exporting to CSV file
correct_df.to_csv(r'.csv', index=False,header=True)
`
What I'm looking for is to not have to specify "test.csv" and instead have it grab the name of the csv inthe directory.
| [
"You can use glob.glob to return a list of the files matching a pattern (e.g *.csv) and then slice.\nTry this :\nfrom glob import glob\nimport pandas as pd\n\ncsv_file = glob(\"*.csv\")[0]\n\ndf= pd.read_csv(csv_file)\n\n",
"You can get list of all files in directory(in this case csv files) using os.listdir():\nimport os\ncsvfiles = [p for p in os.listdir() if p.endswith(\".csv\")]\n\n"
] | [
1,
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0074633907_csv_python.txt |
Q:
How to print multiple patterns next to each other?
My forest pattern
I printed the forest pattern here, but I have to print the trees next to each other instead of printing them on new lines. How can I do that? When I try to add end="" to the last print section, it seems like only the first "*" is printed and the shape is ruined.
for x in range(5):
sayi = int(input("[5-13] Aralığında tek tam sayı giriniz:"))
for y in range(5):
for i in range(1,sayi+1):
print(" " * (sayi-i) + '\033[92m' + "* " *i + '\033[0m')
for j in range(sayi // 2 ):
print(" " * (sayi-2) + "* *")
I am trying to draw a forest pattern for my homework. I managed to print tree-shaped outputs, but I can't print them next to each other.
A:
Do you mean something like this
sayi = int(input("[5-13] Aralığında tek tam sayı giriniz:"))
for i in range(1, sayi + 1):
green = "* " * i
print('\033[92m' + green.center(sayi * 2 + 2) * 5 + '\033[0m')
for j in range(sayi // 2):
trunk = "* *"
print(trunk.center(sayi * 2 + 2) * 5)
for an input of 5 it prints
* * * * *
* * * * * * * * * *
* * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * * * * * * *
* * * * * * * * * *
* * * * * * * * * *
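An alternative sketch (my own, with the colour codes omitted for brevity): build one tree as a list of equally wide lines and repeat each line horizontally, which keeps the question's left-padded shape while placing the trees side by side.
sayi = 5  # assumed odd value in the [5, 13] range from the question
width = 2 * sayi + 2  # extra columns leave a small gap between neighbouring trees
lines = [(" " * (sayi - i) + "* " * i).ljust(width) for i in range(1, sayi + 1)]
lines += [(" " * (sayi - 2) + "* *").ljust(width)] * (sayi // 2)
for line in lines:
    print(line * 5)  # five trees next to each other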
| How to print multiple patterns next to each other? | My forest pattern
I did print the forest pattern here but i have to print them next to each other instead of printing them in new lines. How can i do that? When i try to add end="" to last print section, it seems like only the first "*" is printing and the shape is ruined.
for x in range(5):
sayi = int(input("[5-13] Aralığında tek tam sayı giriniz:"))
for y in range(5):
for i in range(1,sayi+1):
print(" " * (sayi-i) + '\033[92m' + "* " *i + '\033[0m')
for j in range(sayi // 2 ):
print(" " * (sayi-2) + "* *")
Trying to do forest pattern for my homework. I did manage to print tree shaped outputs but i cant print them next to each other.
| [
"Do you mean something like this\nsayi = int(input(\"[5-13] Aralığında tek tam sayı giriniz:\"))\nfor i in range(1, sayi + 1):\n green = \"* \" * i\n print('\\033[92m' + green.center(sayi * 2 + 2) * 5 + '\\033[0m')\nfor j in range(sayi // 2):\n trunk = \"* *\"\n print(trunk.center(sayi * 2 + 2) * 5)\n\nfor an input of 5 it prints\n * * * * * \n * * * * * * * * * * \n * * * * * * * * * * * * * * * \n * * * * * * * * * * * * * * * * * * * * \n * * * * * * * * * * * * * * * * * * * * * * * * * \n * * * * * * * * * * \n * * * * * * * * * * \n\n"
] | [
-1
] | [] | [] | [
"newline",
"python",
"tree"
] | stackoverflow_0074632286_newline_python_tree.txt |
Q:
I don't know why it's an empty list? max()
# Plot the highest score in history
def draw_best(background):
ip = 'redis-16784.c89.us-east-1-3.ec2.cloud.redislabs.com'
r = redis.Redis(host=ip, password=1206, port=16784, db=0, decode_responses = True)
scores = [eval(i) for i in list(r.hgetall('2048').values())]
best_scores = max(scores)
scoreSurf = BasicFont01.render('Top score:{}'.format(best_scores), True, (0, 0, 0))
scoreRect = scoreSurf.get_rect()
scoreRect.width = math.floor((rate - 0.15) / 2 * screen.get_width())
scoreRect.height = math.floor((1 - rate2) / 3 * 2 * screen.get_height())
scoreRect.topright = (math.floor(0.9 * screen.get_width()), math.floor(0.05 * screen.get_height()))
py.draw.rect(screen, background, [scoreRect.topleft[0], scoreRect.topleft[1], scoreRect.width, scoreRect.height], 0)
screen.blit(scoreSurf, scoreRect)
I think the problem is in these two lines:
scores = [eval(i) for i in list(r.hgetall('2048').values())]
best_scores = max(scores)
The error it showed me was:
ValueError: max() arg is an empty sequence
A:
It seems like list(r.hgetall('2048').values()) is an empty sequence.
Check whether it is really empty by assigning list(r.hgetall('2048').values()) to a variable and printing it out.
There is a default keyword that may be helpful. It will return a value if the list is empty. It works as follows:
my_list = []
result = max(my_list, default=None)
print(result) # It will print "None"
Since you already think the problem is in those two lines, the best way to confirm it is to check whether the sequence there is really empty.
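Applied to the code in the question, a minimal sketch (0 is an assumed fallback score) looks like this:
scores = [eval(i) for i in r.hgetall('2048').values()]
best_scores = max(scores, default=0)  # no longer raises when the Redis hash is empty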
| I don't know why it's an empty list? max() | # Plot the highest score in history
def draw_best(background):
ip = 'redis-16784.c89.us-east-1-3.ec2.cloud.redislabs.com'
r = redis.Redis(host=ip, password=1206, port=16784, db=0, decode_responses = True)
scores = [eval(i) for i in list(r.hgetall('2048').values())]
best_scores = max(scores)
scoreSurf = BasicFont01.render('Top score:{}'.format(best_scores), True, (0, 0, 0))
scoreRect = scoreSurf.get_rect()
scoreRect.width = math.floor((rate - 0.15) / 2 * screen.get_width())
scoreRect.height = math.floor((1 - rate2) / 3 * 2 * screen.get_height())
scoreRect.topright = (math.floor(0.9 * screen.get_width()), math.floor(0.05 * screen.get_height()))
py.draw.rect(screen, background, [scoreRect.topleft[0], scoreRect.topleft[1], scoreRect.width, scoreRect.height], 0)
screen.blit(scoreSurf, scoreRect)
I think the problem is in these two lines:
scores = [eval(i) for i in list(r.hgetall('2048').values())]
best_scores = max(scores)
The error it showed me was:
ValueError: max() arg is an empty sequence
| [
"Obviously, it seems like list(r.hgetall('2048').values()) is a blank sequence/list/array.\nCheck if it is really empty by defining a variable with the value list(r.hgetall('2048').values()) and then print it out to check.\nThere is a default keyword that may be helpful. It will return a value if the list is empty. It works as follows:\nmy_list = []\n\nresult = max(my_list, default=None)\n\nprint(result) # It will print \"None\"\n\nYou already think that the problem exists in those two lines, then what better way to solve this is to check if it is really in those two lines!\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074633901_python.txt |
Q:
'str' object has no attribute 'isin' error while using period_range in Python
I am trying to format a date column that I am reading from a csv file but I am getting Out of bounds nanosecond timestamp: 2999-12-31 00:00:00 error while formatting the high date. To solve this, I tried to use period_range as given below:
low_date = '1900-01-01 00:00:00'
high_date = '2999-12-31 00:00:00'
r = pd.period_range(low_date,high_date)
for i in range(len(Df[date])):
if Df[date][i].isin(r):
Df[date] = pd.to_datetime(Df[date]).dt.strftime("%m/%d/%Y %H:%M:%S.0")
Now I am getting error as given below:
Error
if Df[date][i].isin(r):
AttributeError: 'str' object has no attribute 'isin'
Please help me fix the error. I am trying to fix the out-of-bounds error for the high date and am now getting this error, which I am not able to fix.
A:
The Error
In your code...
low_date = '1900-01-01 00:00:00'
high_date = '2999-12-31 00:00:00'
r = pd.period_range(low_date,high_date)
for i in range(len(Df[date])):
if Df[date][i].isin(r):
Df[date] = pd.to_datetime(Df[date]).dt.strftime("%m/%d/%Y %H:%M:%S.0")
your conditional fails because Df[date][i] is the value in DF at position [i, date].
Example:
df = pd.DataFrame({"date": ["abc"]})
df["date"][0]
# >>> "abc"
You probably want something more like DF[date][i] in r or preferably DF[date].isin(r).
My Suggestion
pandas has a lot of vectorized methods that may offer a speedup and simplify some of your code.
We could convert and format all values in your DF[date] column using .apply
DF[date] = DF[date].apply(pd.to_datetime).dt.strftime("%m/%d/%Y %H:%M:%S.0")
You can also add the conditional to the method:
DF[date] = DF[date].apply(lambda row: pd.to_datetime(row) if row in r else row).dt.strftime("%m/%d/%Y %H:%M:%S.0")
All Together
low_date = '1900-01-01 00:00:00'
high_date = '2999-12-31 00:00:00'
r = pd.period_range(low_date,high_date)
DF[date] = DF[date].apply(lambda row: pd.to_datetime(row) if row in r else row).dt.strftime("%m/%d/%Y %H:%M:%S.0")
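As a hedged side note (not part of the answer above): if losing the out-of-range dates as NaT is acceptable, the original out-of-bounds error can also be sidestepped by asking pandas to coerce unrepresentable dates instead of raising:
DF[date] = pd.to_datetime(DF[date], errors="coerce").dt.strftime("%m/%d/%Y %H:%M:%S.0")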
| 'str' object has no attribute 'isin' error while using period_range in Python | I am trying to format a date column that I am reading from a csv file but I am getting Out of bounds nanosecond timestamp: 2999-12-31 00:00:00 error while formatting the high date. To solve this, I tried to use period_range as given below:
low_date = '1900-01-01 00:00:00'
high_date = '2999-12-31 00:00:00'
r = pd.period_range(low_date,high_date)
for i in range(len(Df[date])):
if Df[date][i].isin(r):
Df[date] = pd.to_datetime(Df[date]).dt.strftime("%m/%d/%Y %H:%M:%S.0")
Now I am getting error as given below:
Error
if Df[date][i].isin(r):
AttributeError: 'str' object has no attribute 'isin'
Please help in fixing the error. I am trying to fix the out of bounds error for the high date and now getting this error which I am not able to fix.
| [
"The Error\nIn your code...\n low_date = '1900-01-01 00:00:00' \n high_date = '2999-12-31 00:00:00'\n r = pd.period_range(low_date,high_date)\n for i in range(len(Df[date])): \n if Df[date][i].isin(r):\n Df[date] = pd.to_datetime(Df[date]).dt.strftime(\"%m/%d/%Y %H:%M:%S.0\")\n\nyour conditional fails because Df[date][i] is the value in DF at position [i, date].\nExample:\ndf = pd.DataFrame({\"date\": [\"abc\"]})\ndf[\"date\"][0]\n\n# >>> \"abc\"\n\nYou probably want something more like DF[date][i] in r or preferably DF[date].isin(r).\nMy Suggestion\npandas has a lot of vectorized methods that may offer a speedup and simplify some of your code.\nWe could convert and format all values in your DF[date] column using .apply\nDF[date] = DF[date].apply(pd.to_datetime).dt.strftime(\"%m/%d/%Y %H:%M:%S.0\")\n\nYou can also add the conditional to the method:\nDF[date] = DF[date].apply(lambda row: pd.to_datetime(row) if row in r else row).dt.strftime(\"%m/%d/%Y %H:%M:%S.0\")\n\nAll Together\nlow_date = '1900-01-01 00:00:00' \nhigh_date = '2999-12-31 00:00:00'\nr = pd.period_range(low_date,high_date)\nDF[date] = DF[date].apply(lambda row: pd.to_datetime(row) if row in r else row).dt.strftime(\"%m/%d/%Y %H:%M:%S.0\")\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"datetime_format",
"pandas",
"python"
] | stackoverflow_0074632971_dataframe_datetime_format_pandas_python.txt |
Q:
Pyspark cannot export large dataframe to csv. Session setup incorrect?
My session in pyspark 2.3:
spark = SparkSession\
.builder\
.appName("test_app")\
.config('spark.executor.instances','4')\
.config('spark.executor.cores', '4')\
.config('spark.executor.memory', '24g')\
.config('spark.driver.maxResultSize', '24g')\
.config('spark.rpc.message.maxSize', '512')\
.config('spark.yarn.executor.memoryOverhead', '10000')\
.enableHiveSupport()\
.getOrCreate()
I work on cloudera with a 32GB RAM session and handle dataframes containing approx. 30,000,000 rows and up to 20 columns. These dataframes consist of int, float and str data. My program is supposed to join several tables, format some data, describe the final result table and export it in csv format. I have issues exporting the data to csv. My approach throws the following error:
>>> final_dataframe.write.csv("export.csv")
22/11/30 15:08:50 216 ERROR TaskSetManager: Task 2 in stage 88.0 failed 4 times; aborting job
22/11/30 15:08:50 219 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 88.0 failed 4 times, most recent failure: Lost task 2.3 in stage 88.0 (TID 9514, bdwrkp124.cda.commerzbank.com, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 253, in main
process()
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 248, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 331, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 140, in dump_stream
for obj in iterator:
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 320, in _batched
for item in iterator:
File "<string>", line 1, in <lambda>
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 76, in <lambda>
return lambda *a: f(*a)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/util.py", line 55, in wrapper
return f(*args, **kwargs)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/sql/functions.py", line 42, in _
jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:330)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:83)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:66)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:284)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage14.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2039)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:664)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:652)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 253, in main
process()
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 248, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 331, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 140, in dump_stream
for obj in iterator:
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 320, in _batched
for item in iterator:
File "<string>", line 1, in <lambda>
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 76, in <lambda>
return lambda *a: f(*a)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/util.py", line 55, in wrapper
return f(*args, **kwargs)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/sql/functions.py", line 42, in _
jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:330)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:83)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:66)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:284)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage14.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
What is the problem? How can I fix it? I guess my session setup is not adequate, but I lack the experience to see the issue.
A:
I'm not experienced with pyspark, but maybe this SO post can help you in some way. Have you started up your pyspark environment before running this code? The error AttributeError: 'NoneType' object has no attribute '_jvm' seems to hint at something wrong in the setup.
Also, unless you have very specific (and strong) reasons to write this large amount of data to a CSV file I would advise against it. Try writing to a parquet file (I suspect it would be simply final_dataframe.write.parquet("export.parquet") in pyspark)
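A minimal sketch of both write paths (the output paths are placeholders, and Spark writes a directory of part files rather than a single file): the parquet write suggested above, plus a CSV write with an explicit header if CSV is really required.
# columnar parquet output, usually the better fit at this scale
final_dataframe.write.mode("overwrite").parquet("/tmp/export_parquet")

# CSV output, if it is really needed downstream
final_dataframe.write.mode("overwrite").option("header", True).csv("/tmp/export_csv")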
| Pyspark cannot export large dataframe to csv. Session setup incorrect? | My session in pyspark 2.3:
spark = SparkSession\
.builder\
.appName("test_app")\
.config('spark.executor.instances','4')\
.config('spark.executor.cores', '4')\
.config('spark.executor.memory', '24g')\
.config('spark.driver.maxResultSize', '24g')\
.config('spark.rpc.message.maxSize', '512')\
.config('spark.yarn.executor.memoryOverhead', '10000')\
.enableHiveSupport()\
.getOrCreate()
I work on cloudera with a 32GB RAM session and handle dataframes containing approx. 30,000,000 rows and up to 20 columns. These dataframes consist of int, float and str data. My program is supposed to join several tables, format some data, describe the final result table and export it in csv format. I have issues exporting the data to csv. My approach throws the following error:
>>> final_dataframe.write.csv("export.csv")
22/11/30 15:08:50 216 ERROR TaskSetManager: Task 2 in stage 88.0 failed 4 times; aborting job
22/11/30 15:08:50 219 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 88.0 failed 4 times, most recent failure: Lost task 2.3 in stage 88.0 (TID 9514, bdwrkp124.cda.commerzbank.com, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 253, in main
process()
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 248, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 331, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 140, in dump_stream
for obj in iterator:
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 320, in _batched
for item in iterator:
File "<string>", line 1, in <lambda>
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 76, in <lambda>
return lambda *a: f(*a)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/util.py", line 55, in wrapper
return f(*args, **kwargs)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/sql/functions.py", line 42, in _
jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:330)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:83)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:66)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:284)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage14.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2039)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:664)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:664)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:652)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 253, in main
process()
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 248, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 331, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 140, in dump_stream
for obj in iterator:
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/serializers.py", line 320, in _batched
for item in iterator:
File "<string>", line 1, in <lambda>
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/worker.py", line 76, in <lambda>
return lambda *a: f(*a)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/util.py", line 55, in wrapper
return f(*args, **kwargs)
File "/hadoop/disk09/hadoop/yarn/local/usercache/cb2rtor/appcache/application_1667977333442_395910/container_e323_1667977333442_395910_01_000003/pyspark.zip/pyspark/sql/functions.py", line 42, in _
jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:330)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:83)
at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:66)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:284)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage14.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
What is the problem? How can I fix it? I guess my session setup is not adequate, but I lack the experience to see the issue.
| [
"I'm not experienced with pyspark, but maybe this SO post can help you in some way. Have you started up your pyspark environment before running this code? The error AttributeError: 'NoneType' object has no attribute '_jvm' seems to hint at something wrong in the setup.\nAlso, unless you have very specific (and strong) reasons to write this large amount of data to a CSV file I would advise against it. Try writing to a parquet file (I suspect it would be simply final_dataframe.write.parquet(\"export.parquet\") in pyspark)\n"
] | [
0
] | [] | [] | [
"apache_spark",
"hadoop",
"pyspark",
"python"
] | stackoverflow_0074630633_apache_spark_hadoop_pyspark_python.txt |
Q:
Issues compiling mediapipe with pyinstaller on macos
I have issues compiling a project with mediapipe via pyinstaller on macos
so far I tried:
pyinstaller --windowed --noconsole pose_edge.py
pyinstaller --onefile --windowed --noconsole pose_edge.py
pyinstaller --noconsole pose_edge.py
The .app does not open, and if I try the unix exec, I get
Traceback (most recent call last):
File "pose_edge.py", line 26, in <module>
File "mediapipe/python/solutions/selfie_segmentation.py", line 54, in __init__
File "mediapipe/python/solution_base.py", line 229, in __init__
FileNotFoundError: The path does not exist.
[36342] Failed to execute script pose_edge
I work with conda, my env is in python 3.8, mediapipe 0.8.5 and OSX 10.15.7
Thanks in advance
A:
I ran into this issue too and just figured it out a few minutes ago -- so far, I'm getting around it the manual way, but I'm sure there's an idiomatic way to do it in pyinstaller using the spec file and the data imports. For this answer, I'm assuming you're not using the --onefile option for pyinstaller, but rather creating the binary in a single folder.
That said, the answer is to cp -r the modules directory from your mediapipe installed in the virtual environment (or wherever you installed the initial mediapipe package, e.g. /virtualenvs/pose_record-2bkqEH7-/lib/python3.9/site-packages/mediapipe/modules) into your dist/main/mediapipe directory. This will enable your bundled mediapipe library to access the binarypb files which I believe contain the graphs and weights for the pose detection algorithm.
UPDATE: I have figured out a more idiomatic pyinstaller way of getting it to run. In the .spec file generated by pyinstaller, you're able to automatically add files in the following way:
At the top of the file, under the block_cipher = None, add the following function:
def get_mediapipe_path():
import mediapipe
mediapipe_path = mediapipe.__path__[0]
return mediapipe_path
Then, after the following lines:
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
add the following lines which utilize the native Tree class to create a TOC for the binary:
mediapipe_tree = Tree(get_mediapipe_path(), prefix='mediapipe', excludes=["*.pyc"])
a.datas += mediapipe_tree
a.binaries = filter(lambda x: 'mediapipe' not in x[0], a.binaries)
Once added, you can run the compile command from the CLI, e.g.:
pipenv run pyinstaller --debug=all main.spec --windowed --onefile
This allowed me to build an executable that functioned for mediapipe.
A:
If anybody needs to manage this issue on Windows (you may be able to try the same on macOS or Linux), you can follow these steps, which solved it for me:
Go to your local Python site-packages (generally it's in C:/Users/YourUserName/AppData/Local/Python38/Lib/site-packages) and find the mediapipe module (folder) in it.
Open the mediapipe>python>solution_base.py file.
Edit the solution_base.py file as follows:
Delete this line (at 261) --> root_path = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-3])
Instead of this line, we will create our own root_path:
root_path = "C:\\Users\YourUserName\\AppData\\Local\\Programs\\Python\\Python38\\Lib\\site-packages"
Copy and paste this line, changing the user name to yours!
The .exe output may need to be run as administrator!
This way you can solve the root_path problem.
You don't need any special PyInstaller configuration. Just type:
PyInstaller yourscriptname.py --onefile
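A hedged CLI-only alternative (the path is illustrative, and on Windows the --add-data separator is ; instead of :): locate the installed package from Python, then hand its files to PyInstaller directly.
import mediapipe
print(mediapipe.__path__[0])  # e.g. .../site-packages/mediapipe
Then build with the printed path:
pyinstaller --onefile --add-data "<printed mediapipe path>:mediapipe" pose_edge.py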
| Issues compiling mediapipe with pyinstaller on macos | I have issues compiling a project with mediapipe via pyinstaller on macos
so far I tried:
pyinstaller --windowed --noconsole pose_edge.py
pyinstaller --onefile --windowed --noconsole pose_edge.py
pyinstaller --noconsole pose_edge.py
The .app does not open, and if I try the unix exec, I get
Traceback (most recent call last):
File "pose_edge.py", line 26, in <module>
File "mediapipe/python/solutions/selfie_segmentation.py", line 54, in __init__
File "mediapipe/python/solution_base.py", line 229, in __init__
FileNotFoundError: The path does not exist.
[36342] Failed to execute script pose_edge
I work with conda, my env is in python 3.8, mediapipe 0.8.5 and OSX 10.15.7
Thanks in advance
| [
"I ran into this issue too and just figured it out a few minutes ago -- so far, I'm getting around it the manual way, but I'm sure there's an idiomatic way to do it in pyinstaller using the spec file and the data imports. For this answer, I'm assuming you're not using the --onefile option for pyinstaller, but rather creating the binary in a single folder.\nThat said, the answer is to cp -r the modules directory from your mediapipe installed in the virtual environment (or wherever you installed the initial mediapipe package, e.g. /virtualenvs/pose_record-2bkqEH7-/lib/python3.9/site-packages/mediapipe/modules) into your dist/main/mediapipe directory. This will enable your bundled mediapipe library to access the binarypb files which I believe contain the graphs and weights for the pose detection algorithm.\nUPDATE: I have figured out a more idiomatic pyinstaller way of getting it to run. In the .spec file generated by pyinstaller, you're able to automatically add files in the following way:\nAt the top of the file, under the block_cipher = None, add the following function:\ndef get_mediapipe_path():\n import mediapipe\n mediapipe_path = mediapipe.__path__[0]\n return mediapipe_path\n\nThen, after the following lines:\npyz = PYZ(a.pure, a.zipped_data,\n cipher=block_cipher)\n\nadd the following lines which utilize the native Tree class to create a TOC for the binary:\nmediapipe_tree = Tree(get_mediapipe_path(), prefix='mediapipe', excludes=[\"*.pyc\"])\na.datas += mediapipe_tree\na.binaries = filter(lambda x: 'mediapipe' not in x[0], a.binaries)\n\nOnce added, you can run the compile command from the CLI, e.g.:\npipenv run pyinstaller --debug=all main.spec --windowed --onefile\nThis allowed me to build an executable that functioned for mediapipe.\n",
"If anybody needs to manage this issue on Windows(maybe you can try this on MAC or Linux) you can follow these steps which have given a solution for me:\n\nGo your local Python site-packages(generally it's in C:/Users/YourUserName/AppData/Local/Python38/Lib/site-packages) and find the mediapipe module(folder) in it.\nOpen the mediapipe>python>solution_base.py file.\nEdit the solution_base.py file with these given variables:\n\n\nDelete this line(at 261) --> root_path = os.sep.join(os.path.abspath(file).split(os.sep)[:-3])\n\nInstead of this line, we will create our root_path:\n\nroot_path = \"C:\\\\Users\\YourUserName\\\\AppData\\\\Local\\\\Programs\\\\Python\\\\Python38\\\\Lib\\\\site-packages\"\n\nCopy and paste this line. Change your user name!\nThe .exe output may need run as administrator!\nIn this way you could solve your root_path problem.\nYou dont need to use any PyInstaller configuration. Just type:\n\nPyInstaller yourscriptname.py --onefile\n\n"
] | [
3,
0
] | [] | [] | [
"build",
"mediapipe",
"pyinstaller",
"python"
] | stackoverflow_0067887088_build_mediapipe_pyinstaller_python.txt |
Q:
Python AttributeError when importing module resolves after calling 'help()'
I'm getting started with packaging a Python library, and I'm experiencing odd behavior when trying to import a function. I built a wheel for this library and installed in my conda environment using pip. The structure of my library is:
|- setup.py
|- test_package
|- __init__.py
|- module1.py
|- myutils.py
The myutils.py file contains a simple function:
def test_utils():
print("utils test function is working correctly")
The following import works as expected:
from test_package import myutils
myutils.test_utils()
result:
utils test function is working correctly
However, the following import results in an error:
import test_package
test_package.myutils.test_utils()
result:
AttributeError Traceback (most recent call last)
Input In [1], in <cell line: 2>()
1 import test_package
----> 2 test_package.myutils.test_utils()
AttributeError: module 'test_package' has no attribute 'myutils'
The odd behavior is that if I call help() after receiving the error above and then call the function again, it works as expected:
help('test_package.myutils.test_utils')
print("~~~~~ line break ~~~~~")
test_package.myutils.test_utils()
result:
Help on function test_utils in test_package.myutils:
test_package.myutils.test_utils = test_utils()
~~~~~ line break ~~~~~
utils test function is working correctly
I'm having difficulty understanding why using from <package> import <module> works while import <package> fails, and I'm definitely not understanding why help() resolves the AttributeError
A:
While some packages do automatically import their subpackages (usually for historical reasons, e.g. import os provides os.path to this day because it originally did so, and they don't want to break programs that rely on it), the recommendation is, and has been, to import the subpackages/submodules you rely on, not just the top-level packages (which are under no obligation to import their child packages).
If top-level packages had to import their hierarchies, a package with three subpackages, all expensive to load, but often used in isolation, would have to pay the expense of importing all three subpackages even when the script performing the import only needed one of them.
The correct way to solve this problem is to explicitly import the packages you need, changing:
import test_package
to:
import test_package.myutils
which will cause the subpackage to be imported, and allow test_package.myutils.test_utils() to work.
The reason help fixes things is that in the process of loading the documentation, it imports the subpackage. And the process of importing a subpackage attaches it to the parent package (a singleton shared by every location that imported the parent package). So help('test_package.myutils.test_utils'), internally, ends up performing import test_package.myutils, which attaches myutils to the cached test_package (an alias of which is in your main module), so when you ask for test_package.myutils again, it's found.
A:
If you want that behaviour, you need to add a submodule import to test_package/__init__.py (in Python 3 it must be an explicit relative import):
from . import myutils
You are probably used to being able to do this in other libraries because they often add these imports to the relavent __init__.py files for your convenience. However, this is not the default behavior of python imports. The reason why is that the library author might want to allow easy access to certain submodules without recursively importing all submodules and taking a performance hit for it.
Just realize that the only things that you can import from a module are the things imported or defined in that module, whether the module is a directory module/__init__.py or a file module.py.
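A minimal end-to-end sketch of that approach (Python 3 requires the explicit relative form shown above):
# test_package/__init__.py
from . import myutils

# consumer code
import test_package
test_package.myutils.test_utils()  # works without importing the submodule explicitly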
| Python AttributeError when importing module resolves after calling 'help()' | I'm getting started with packaging a Python library, and I'm experiencing odd behavior when trying to import a function. I built a wheel for this library and installed in my conda environment using pip. The structure of my library is:
|- setup.py
|- test_package
|- __init__.py
|- module1.py
|- myutils.py
The myutils.py file contains a simple function:
def test_utils():
print("utils test function is working correctly")
The following import works as expected:
from test_package import myutils
myutils.test_utils()
result:
utils test function is working correctly
However, the following import results in an error:
import test_package
test_package.myutils.test_utils()
result:
AttributeError Traceback (most recent call last)
Input In [1], in <cell line: 2>()
1 import test_package
----> 2 test_package.myutils.test_utils()
AttributeError: module 'test_package' has no attribute 'myutils'
The odd behavior is that if I call help() after receiving the error above and then call the function again, it works as expected:
help('test_package.myutils.test_utils')
print("~~~~~ line break ~~~~~")
test_package.myutils.test_utils()
result:
Help on function test_utils in test_package.myutils:
test_package.myutils.test_utils = test_utils()
~~~~~ line break ~~~~~
utils test function is working correctly
I'm having difficulty understanding why using from <package> import <module> works while import <package> fails, and I'm definitely not understanding why help() resolves the AttributeError
| [
"While some packages do automatically import their subpackages (usually for historical reasons, e.g. import os provides os.path to this day because it originally did so, and they don't want to break programs that rely on it), the recommendation is, and has been, to import the subpackages/submodules you rely on, not just the top-level packages (which are under no obligation to import their child packages).\nIf top-level packages had to import their hierarchies, a package with three subpackages, all expensive to load, but often used in isolation, would have to pay the expense of importing all three subpackages even when the script performing the import only needed one of them.\nThe correct way to solve this problem is to explicitly import the packages you need, changing:\nimport test_package\n\nto:\nimport test_package.myutils\n\nwhich will cause the subpackage to be imported, and allow test_package.myutils.test_utils() to work.\n\nThe reason help fixes things is that in the process of loading the documentation, it imports the subpackage. And the process of importing a subpackage attaches it to the parent package (a singleton shared by every location that imported the parent package). So help('test_package.myutils.test_utils'), internally, ends up performing import test_package.myutils, which attaches myutils to the cached test_package (an alias of which is in your main module), so when you ask for test_package.myutils again, it's found.\n",
"If you want that behaviour, you need to add to test_package/__init__.py:\nimport myutils\n\nYou are probably used to being able to do this in other libraries because they often add these imports to the relavent __init__.py files for your convenience. However, this is not the default behavior of python imports. The reason why is that the library author might want to allow easy access to certain submodules without recursively importing all submodules and taking a performance hit for it.\nJust realize that the only things that you can import from a module are the things imported or defined in that module, whether the module is a directory module/__init__.py or a file module.py.\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0074633905_python.txt |
Q:
How do I make this Python program interactive?
I am trying to modify the example found here (https://towardsdatascience.com/intro-to-dynamic-visualization-with-python-animations-and-interactive-plots-f72a7fb69245) to run outside of a Jupyter notebook.
The program below produces a *.gif that is animated but not interactive. Can anyone find the error?
#Based on the example here
# https://towardsdatascience.com/intro-to-dynamic-visualization-with-python-animations-and-interactive-plots-f72a7fb69245
# # Import packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import numpy.typing as npt
from matplotlib.animation import FuncAnimation
from matplotlib.widgets import Slider
# Fermi-Dirac Distribution
def fermi(E: npt.NDArray[np.float64], E_f: float, T: float) -> npt.NDArray[np.float64]:
k_b = 8.617 * (10**-5) # eV/K
return 1/(np.exp((E - E_f)/(k_b * T)) + 1)
# Animation function
def animate(i):
x = np.linspace(0, 1, 100)
y = fermi(x, 0.5, T[i])
f_d.set_data(x, y)
f_d.set_color(colors(i))
temp.set_text(str(int(T[i])) + ' K')
temp.set_color(colors(i))
# Update values
def update(val):
Ef = s_Ef.val
T = s_T.val
f_d.set_data(x, fermi(x, Ef, T))
fig.canvas.draw_idle()
# Create sliders
s_Ef = Slider(ax=ax_Ef, label='Fermi Energy ', valmin=0, valmax=1.0,
valfmt=' %1.1f eV', facecolor='#cc7000')
s_T = Slider(ax=ax_T, label='Temperature ', valmin=100, valmax=1000,
valinit=100, valfmt='%i K', facecolor='#cc7000')
# General plot parameters
mpl.rcParams['font.size'] = 18
mpl.rcParams['axes.linewidth'] = 2
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.right'] = False
mpl.rcParams['xtick.major.size'] = 10
mpl.rcParams['xtick.major.width'] = 2
mpl.rcParams['ytick.major.size'] = 10
mpl.rcParams['ytick.major.width'] = 2
# Temperature values
T = np.linspace(100, 1000, 10)
# Get colors
colors = mpl.colormaps['copper'].resampled(8)
# Create figure and add axes
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111)
# Add legend
labels = ['100 K', '200 K', '300 K', '400 K', '500 K', '600 K',
'700 K', '800 K', '900 K', '1000 K']
ax.legend(labels, bbox_to_anchor=(1.05, -0.1), loc='lower left',
frameon=False, labelspacing=0.2)
# Create variable reference to plot
f_d, = ax.plot([], [], linewidth=2.5)
# Add text annotation and create variable reference
temp = ax.text(1, 1, '', ha='right', va='top', fontsize=24)
# Create main axis
fig.subplots_adjust(bottom=0.2, top=0.75)
# Create axes for sliders
ax_Ef = fig.add_axes([0.3, 0.85, 0.4, 0.05])
ax_Ef.spines['top'].set_visible(True)
ax_Ef.spines['right'].set_visible(True)
ax_T = fig.add_axes([0.3, 0.92, 0.4, 0.05])
ax_T.spines['top'].set_visible(True)
ax_T.spines['right'].set_visible(True)
# Plot default data
x = np.linspace(0, 1, 100)
Ef_0 = 0.5
T_0 = 100
y = fermi(x, Ef_0, T_0)
f_d, = ax.plot(x, y, linewidth=2.5)
s_Ef.on_changed(update)
s_T.on_changed(update)
# Create animation
ani = FuncAnimation(fig=fig, func=animate, frames=range(len(T)), interval=500, repeat=True)
# Save and show animation
ani.save('AnimatedPlot.gif', writer='pillow', fps=2)
A:
@Wayne pointed out that I need to add an additional package so that the slider bars are interactive. Here is a useful resource
https://towardsdatascience.com/4-python-packages-to-create-interactive-dashboards-d50861d1117e
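As a side note (my addition, not part of the original answer): outside Jupyter, matplotlib's own Slider widgets are interactive as long as the figure window is actually shown with an interactive backend (e.g. TkAgg or QtAgg). A minimal sketch of the change, assuming such a backend is available:
# ... after creating the animation and sliders as above ...
ani.save('AnimatedPlot.gif', writer='pillow', fps=2)
plt.show()  # keeps the window open so the sliders respond; without it the script only writes the gif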
| How do I make this Python program interactive? | I am trying to modify the example found here (https://towardsdatascience.com/intro-to-dynamic-visualization-with-python-animations-and-interactive-plots-f72a7fb69245) to run outside of a Jupyter notebook.
The program below produces a *.gif that is animated but not interactive. Can anyone find the error?
#Based on the example here
# https://towardsdatascience.com/intro-to-dynamic-visualization-with-python-animations-and-interactive-plots-f72a7fb69245
# # Import packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import numpy.typing as npt
from matplotlib.animation import FuncAnimation
from matplotlib.widgets import Slider
# Fermi-Dirac Distribution
def fermi(E: npt.NDArray[np.float64], E_f: float, T: float) -> npt.NDArray[np.float64]:
k_b = 8.617 * (10**-5) # eV/K
return 1/(np.exp((E - E_f)/(k_b * T)) + 1)
# Animation function
def animate(i):
x = np.linspace(0, 1, 100)
y = fermi(x, 0.5, T[i])
f_d.set_data(x, y)
f_d.set_color(colors(i))
temp.set_text(str(int(T[i])) + ' K')
temp.set_color(colors(i))
# Update values
def update(val):
Ef = s_Ef.val
T = s_T.val
f_d.set_data(x, fermi(x, Ef, T))
fig.canvas.draw_idle()
# Create sliders
s_Ef = Slider(ax=ax_Ef, label='Fermi Energy ', valmin=0, valmax=1.0,
valfmt=' %1.1f eV', facecolor='#cc7000')
s_T = Slider(ax=ax_T, label='Temperature ', valmin=100, valmax=1000,
valinit=100, valfmt='%i K', facecolor='#cc7000')
# General plot parameters
mpl.rcParams['font.size'] = 18
mpl.rcParams['axes.linewidth'] = 2
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.right'] = False
mpl.rcParams['xtick.major.size'] = 10
mpl.rcParams['xtick.major.width'] = 2
mpl.rcParams['ytick.major.size'] = 10
mpl.rcParams['ytick.major.width'] = 2
# Temperature values
T = np.linspace(100, 1000, 10)
# Get colors
colors = mpl.colormaps['copper'].resampled(8)
# Create figure and add axes
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111)
# Add legend
labels = ['100 K', '200 K', '300 K', '400 K', '500 K', '600 K',
'700 K', '800 K', '900 K', '1000 K']
ax.legend(labels, bbox_to_anchor=(1.05, -0.1), loc='lower left',
frameon=False, labelspacing=0.2)
# Create variable reference to plot
f_d, = ax.plot([], [], linewidth=2.5)
# Add text annotation and create variable reference
temp = ax.text(1, 1, '', ha='right', va='top', fontsize=24)
# Create main axis
fig.subplots_adjust(bottom=0.2, top=0.75)
# Create axes for sliders
ax_Ef = fig.add_axes([0.3, 0.85, 0.4, 0.05])
ax_Ef.spines['top'].set_visible(True)
ax_Ef.spines['right'].set_visible(True)
ax_T = fig.add_axes([0.3, 0.92, 0.4, 0.05])
ax_T.spines['top'].set_visible(True)
ax_T.spines['right'].set_visible(True)
# Plot default data
x = np.linspace(0, 1, 100)
Ef_0 = 0.5
T_0 = 100
y = fermi(x, Ef_0, T_0)
f_d, = ax.plot(x, y, linewidth=2.5)
s_Ef.on_changed(update)
s_T.on_changed(update)
# Create animation
ani = FuncAnimation(fig=fig, func=animate, frames=range(len(T)), interval=500, repeat=True)
# Save and show animation
ani.save('AnimatedPlot.gif', writer='pillow', fps=2)
| [
"@Wayne pointed out that I need to add an additional package so that the slider bars are interactive. Here is a useful resource\nhttps://towardsdatascience.com/4-python-packages-to-create-interactive-dashboards-d50861d1117e\n"
] | [
0
] | [] | [] | [
"interactive",
"python"
] | stackoverflow_0074633978_interactive_python.txt |
Q:
Pandas: count number of times between specific range
I have a dataset of part numbers, and each of those part numbers was replaced at a certain cycle count. For example, the table below is an example of my data, the first column being the part number and the second being the cycle count at which it was replaced (i.e., part abc was replaced at 100 cycles, and then again at 594, at 1230, and at 2291):
Part #
Cycle Count
abc
100
abc
594
abc
1230
abc
2291
def
329
def
2001
ghi
1671
jkl
29
jkl
190
mno
700
mno
1102
pqr
2991
With this data, I am trying to create a new table that counts the number of times a part was replaced within certain cycle ranges, and create a table such as the example below:
Part #
Cycle Count Range (1-1000)
Cycle Count Range (1001-2000)
Cycle Count Range (2001-3000)
abc
2
1
1
def
1
0
1
ghi
0
1
0
jkl
2
0
0
mno
1
1
0
pqr
0
0
1
I tried doing this in SQL but I am not proficient enough to do it.
A:
We can use np.arange to create some Cycle Count Range bins and pd.cut to assign the values of Cycle Count to said bins.
from io import StringIO
import numpy as np
import pandas as pd
df = pd.read_csv(StringIO("""Part # Cycle Count
abc 100
abc 594
abc 1230
abc 2291
def 329
def 2001
ghi 1671
jkl 29
jkl 190
mno 700
mno 1102
pqr 2991"""), sep="\\t+")
# make bins of size 1_000 using numpy.arange
bins = np.arange(0, df["Cycle Count"].max()+1_000, step=1_000)
# bin the Cycle Count series
df["Cycle Count Range"] = pd.cut(df["Cycle Count"], bins, retbins=False)
# count the Cycle Counts within the Part #/Cycle Count Range groups
out = df.pivot_table(
values="Cycle Count",
index="Part #",
columns="Cycle Count Range",
aggfunc="count"
)
print(out)
Cycle Count Range (0, 1000] (1000, 2000] (2000, 3000]
Part #
abc 2 1 1
def 1 0 1
ghi 0 1 0
jkl 2 0 0
mno 1 1 0
pqr 0 0 1
A:
With crosstab and interval_range:
import math  # needed for math.ceil below

#This is number of periods
p = math.ceil((df['Cycle Count'].max() - df['Cycle Count'].min())/1000)
#These are bins in which pd.cut needs to cut the series into
b = pd.interval_range(start=1, freq=1000, periods=p, closed='neither')
#Then cut the series
df['Cycle Count Range'] = pd.cut(df['Cycle Count'], b)
#Do a crosstab to compute the aggregation.
out = pd.crosstab(df['Part#'], df['Cycle Count Range'])
print(out):
Cycle Count Range (1, 1001) (1001, 2001) (2001, 3001)
Part#
abc 2 1 1
def 1 0 0
ghi 0 1 0
jkl 2 0 0
mno 1 1 0
pqr 0 0 1
| Pandas: count number of times between specific range | I have a dataset of part numbers, and each of those part numbers was replaced at a certain cycle count. For example, the table below is an example of my data, the first column being the part number and the second being the cycle count at which it was replaced (i.e., part abc was replaced at 100 cycles, and then again at 594, at 1230, and at 2291):
Part #
Cycle Count
abc
100
abc
594
abc
1230
abc
2291
def
329
def
2001
ghi
1671
jkl
29
jkl
190
mno
700
mno
1102
pqr
2991
With this data, I am trying to create a new table that counts the number of times a part was replaced within certain cycle ranges, and create a table such as the example below:
Part #
Cycle Count Range (1-1000)
Cycle Count Range (1001-2000)
Cycle Count Range (2001-3000)
abc
2
1
1
def
1
0
1
ghi
0
1
0
jkl
2
0
0
mno
1
1
0
pqr
0
0
1
I tried doing this in SQL but I am not proficient enough to do it.
| [
"We can use np.arange to create some Cycle Count Range bins and pd.cut to assign the values of Cycle Count to said bins.\nfrom io import StringIO\nimport numpy as np\nimport pandas as pd\n\n\ndf = pd.read_csv(StringIO(\"\"\"Part # Cycle Count\nabc 100\nabc 594\nabc 1230\nabc 2291\ndef 329\ndef 2001\nghi 1671\njkl 29\njkl 190\nmno 700\nmno 1102\npqr 2991\"\"\"), sep=\"\\\\t+\")\n\n# make bins of size 1_000 using numpy.arange\nbins = np.arange(0, df[\"Cycle Count\"].max()+1_000, step=1_000)\n\n# bin the Cycle Count series\ndf[\"Cycle Count Range\"] = pd.cut(df[\"Cycle Count\"], bins, retbins=False)\n\n# count the Cycle Counts within the Part #/Cycle Count Range groups\nout = df.pivot_table(\n values=\"Cycle Count\",\n index=\"Part #\",\n columns=\"Cycle Count Range\",\n aggfunc=\"count\"\n)\n\nprint(out)\n\nCycle Count Range (0, 1000] (1000, 2000] (2000, 3000]\nPart # \nabc 2 1 1\ndef 1 0 1\nghi 0 1 0\njkl 2 0 0\nmno 1 1 0\npqr 0 0 1\n\n",
"With crosstab and interval_range:\n#This is number of periods\np = math.ceil((df['Cycle Count'].max() - df['Cycle Count'].min())/1000)\n\n#These are bins in which pd.cut needs to cut the series into\nb = pd.interval_range(start=1, freq=1000, periods=p, closed='neither')\n\n#Then cut the series\ndf['Cycle Count Range'] = pd.cut(df['Cycle Count'], b)\n\n#Do a crosstab to compute the aggregation.\nout = pd.crosstab(df['Part#'], df['Cycle Count Range'])\n\nprint(out):\nCycle Count Range (1, 1001) (1001, 2001) (2001, 3001)\nPart# \nabc 2 1 1\ndef 1 0 0\nghi 0 1 0\njkl 2 0 0\nmno 1 1 0\npqr 0 0 1\n\n"
] | [
0,
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"sql"
] | stackoverflow_0074633352_dataframe_pandas_python_sql.txt |
Q:
How to save a column from diagonal element to the bottom
I'm trying to get the L and U matrices from the following Gauss-elimination code I wrote
matrix = np.array ([[2,1,4,1], [3,4,-1,-1] , [1,-4,1,5] , [2,-2,1,3]], dtype = float)
vector = np.array([-4, 3, 9, 7], float)
length = len(vector)
L_matrix = np.zeros((4,4), float)
U_matrix = np.zeros((4,4), float)
for m in range(length):
L_matrix[:,m] = matrix[:,m]
div = matrix[m,m]
matrix[m,:] /= div
U_matrix[m, :] = matrix[m,:]
vector[m] /= div
I'm getting the right U-matrix, but I'm getting this L-matrix
[[ 2. 0.5 2. 0.5]
[ 3. 2.5 -2.8 -1. ]
[ 1. -4.5 -13.6 -0. ]
[ 2. -3. -11.4 -1. ]]
i.e I'm getting the whole matrix instead of a lower triangular matrix with zeros at the top! What am I doing wrong here?
A:
The issue here is that the provided code does not perform the elimination. Try this:
for m in range(length):
div = matrix[m, m]
L_matrix[:, m] = matrix[:, m] / div
U_matrix[m, :] = matrix[m, :]
matrix -= np.outer(L_matrix[:, m], U_matrix[m, :])
See this article for more details. For actually solving your linear system, the issue is that LU is not exactly the same as standard Gaussian elimination. You can use back substitution to efficiently compute what vector should be.
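As a sketch of that last point (my addition, assuming L_matrix and U_matrix have been filled by the loop above and no pivoting is required), forward substitution solves L z = vector and back substitution then solves U x = z:
# forward substitution: L z = vector
z = np.zeros(length)
for i in range(length):
    z[i] = (vector[i] - L_matrix[i, :i] @ z[:i]) / L_matrix[i, i]

# back substitution: U x = z
x = np.zeros(length)
for i in reversed(range(length)):
    x[i] = (z[i] - U_matrix[i, i+1:] @ x[i+1:]) / U_matrix[i, i]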
| How to save a column from diagonal element to the bottom | I'm trying to get the L and U matrices from the following Gauss-elimination code I wrote
matrix = np.array ([[2,1,4,1], [3,4,-1,-1] , [1,-4,1,5] , [2,-2,1,3]], dtype = float)
vector = np.array([-4, 3, 9, 7], float)
length = len(vector)
L_matrix = np.zeros((4,4), float)
U_matrix = np.zeros((4,4), float)
for m in range(length):
L_matrix[:,m] = matrix[:,m]
div = matrix[m,m]
matrix[m,:] /= div
U_matrix[m, :] = matrix[m,:]
vector[m] /= div
I'm getting the right U-matrix, but I'm getting this L-matrix
[[ 2. 0.5 2. 0.5]
[ 3. 2.5 -2.8 -1. ]
[ 1. -4.5 -13.6 -0. ]
[ 2. -3. -11.4 -1. ]]
i.e I'm getting the whole matrix instead of a lower triangular matrix with zeros at the top! What am I doing wrong here?
| [
"The issue here is that the provided code does not perform the elimination. Try this:\nfor m in range(length):\n div = matrix[m, m]\n L_matrix[:, m] = matrix[:, m] / div\n U_matrix[m, :] = matrix[m, :]\n matrix -= np.outer(L_matrix[:, m], U_matrix[m, :])\n\nSee this article for more details. For actually solving your linear system, the issue is that LU is not exactly the same as standard Gaussian elimination. You can use back substitution to efficiently compute what vector should be.\n"
] | [
1
] | [] | [] | [
"matrix",
"numpy",
"python"
] | stackoverflow_0074633974_matrix_numpy_python.txt |
Q:
How to use an input file with netcat in a subprocess
I would like to use netcat in a subprocess and send some text when an nc client connects to it.
The command in bash nc -nlvp 8080 < /home/welcome.txt works perfectly when my chat client is connected with the command nc 127.0.0.1 1234. The client receives the welcome message and I can respond, and the listener prints the client message.
I tried to reproduce the same behavior in Python with the following code:
import subprocess
command = "nc -nlvp 8080 < /home/user/welcome.txt"
subprocess.Popen(command,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
The netcat listener launched by the script does not print anything. However, the client receives the hello message, but I can't respond; my Python listener does not print anything.
What should I fix in Python code to launch netcat as a listener correctly and print the listener stdout?
A:
Because you use stdout=subprocess.PIPE and stderr=subprocess.PIPE, when nc tries to print it gets stuck waiting for Python code to read from those pipes, and none of your code ever does.
Just take them out. To make things even smoother, you can stop running a shell altogether; and, to wait for nc to finish before proceeding (like your shell script does), you can switch from subprocess.Popen() to subprocess.run():
#!/usr/bin/env python
import subprocess
subprocess.run(['nc', '-nlvp', '8080'], stdin=open('/home/user/welcome.txt'))
Note that nc itself takes care of printing its own stdout and stderr, as long as you don't force Python to get in the way.
| How to use an input file with netcat in a subprocess | I would like to use netcat in a subprocess and send some text when an nc client connects to it.
The command in bash nc -nlvp 8080 < /home/welcome.txt works perfectly when my chat client is connected with the command nc 127.0.0.1 1234. The client receives the welcome message and I can respond, and the listener prints the client message.
I tried to reproduce the same behavior in Python with the following code:
import subprocess
command = "nc -nlvp 8080 < /home/user/welcome.txt"
subprocess.Popen(command,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
The netcat listener launched by the script does not print anything. However, the client receives the hello message, but I can't respond; my Python listener does not print anything.
What should I fix in Python code to launch netcat as a listener correctly and print the listener stdout?
| [
"Because you use stdout=subprocess.PIPE and stderr=subprocess.PIPE, when nc tries to print it gets stuck waiting for Python code to read from those pipes, and none of your code ever does.\nJust take them out. To make things even smoother, you can stop running a shell altogether; and, to wait for nc to finish before proceeding (like your shell script does), you can switch from subprocess.Popen() to subprocess.run():\n#!/usr/bin/env python\nimport subprocess\nsubprocess.run(['nc', '-nlvp', '8080'], stdin=open('/home/user/welcome.txt'))\n\nNote that nc itself takes care of printing its own stdout and stderr, as long as you don't force Python to get in the way.\n"
] | [
0
] | [] | [] | [
"netcat",
"python",
"stdin",
"stdout",
"subprocess"
] | stackoverflow_0074621232_netcat_python_stdin_stdout_subprocess.txt |
Q:
Dynamically using INSERT for cx_Oracle - Python
I've been looking around so hopefully someone here can assist:
I'm attempting to use cx_Oracle in python to interface with a database; my task is to insert data from an excel file to an empty (but existing) table.
I have the excel file with almost all of the same column names as the columns in the database's table, so I essentially want to check if the columns share the same name; and if so, I insert that column from the excel (dataframe --pandas) file to the table in Oracle.
import pandas as pd
import numpy as np
import cx_Oracle
df = pd.read_excel("employee_info.xlsx")
con = None
try:
con = cx_Oracle.connect (
config.username,
config.password,
config.dsn,
encoding = config.encoding)
except cx_Oracle.Error as error:
print(error)
finally:
cursor = con.cursor()
rows = [tuple(x) for x in df.values]
cursor.executemany( ''' INSERT INTO ODS.EMPLOYEES({x} VALUES {rows}) '''
I'm not sure what SQL I should put, or if there's a way I can use a for-loop to iterate through the columns, but my main issue is: how can I dynamically add these when our dataset grows in columns?
I check the columns that match by using:
sql = "SELECT * FROM ODS.EMPLOYEES"
cursor.execute(sql)
data = cursor.fetchall()
col_names = []
for i in range (0, len(cursor.description)):
col_names.append(cursor.description[i][0])
a = np.intersect1d(df.columns, col_names)
print("common columns:", a)
That gives me a list of all the common columns, but I'm not sure where to go from there. I've renamed the columns in my Excel file to match the columns in the database's table, but my issue is how I can match these in a dynamic/automated way so I can continue to add to my datasets without worrying about changing the code.
Bonus: I'm also using SQL with a CASE statement to create a new column where I'm rolling up a few other columns; if there's a way to add this to the first part of my SQL, or if it's advisable to do all manipulations before using an INSERT statement, that'll be helpful to know as well.
A:
Look at https://github.com/oracle/python-oracledb/blob/main/samples/load_csv.py
You would replace the CSV reading bit with parsing your data frame. You need to construct a SQL statement similar to the one used in that example:
sql = "insert into LoadCsvTab (id, name) values (:1, :2)"
For each spreadsheet column that you decide matches a table column, construct the (id, name) bit of the statement and add another id to the bind section (:1, :2).
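A hedged sketch of that idea (my addition; it assumes the matched column names in `a` computed above are exactly the table's column names and that df holds the spreadsheet data):
common_cols = list(a)  # matched column names from the intersection above
placeholders = ", ".join(f":{i+1}" for i in range(len(common_cols)))
sql = f"INSERT INTO ODS.EMPLOYEES ({', '.join(common_cols)}) VALUES ({placeholders})"

rows = [tuple(r) for r in df[common_cols].itertuples(index=False)]
cursor.executemany(sql, rows)
con.commit()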
| Dynamically using INSERT for cx_Oracle - Python | I've been looking around so hopefully someone here can assist:
I'm attempting to use cx_Oracle in python to interface with a database; my task is to insert data from an excel file to an empty (but existing) table.
I have the excel file with almost all of the same column names as the columns in the database's table, so I essentially want to check if the columns share the same name; and if so, I insert that column from the excel (dataframe --pandas) file to the table in Oracle.
import pandas as pd
import numpy as np
import cx_Oracle
df = pd.read_excel("employee_info.xlsx")
con = None
try:
con = cx_Oracle.connect (
config.username,
config.password,
config.dsn,
encoding = config.encoding)
except cx_Oracle.Error as error:
print(error)
finally:
cursor = con.cursor()
rows = [tuple(x) for x in df.values]
cursor.executemany( ''' INSERT INTO ODS.EMPLOYEES({x} VALUES {rows}) '''
I'm not sure what SQL I should put, or if there's a way I can use a for-loop to iterate through the columns, but my main issue is: how can I dynamically add these when our dataset grows in columns?
I check the columns that match by using:
sql = "SELECT * FROM ODS.EMPLOYEES"
cursor.execute(sql)
data = cursor.fetchall()
col_names = []
for i in range (0, len(cursor.description)):
col_names.append(cursor.description[i][0])
a = np.intersect1d(df.columns, col_names)
print("common columns:", a)
That gives me a list of all the common columns, but I'm not sure where to go from there. I've renamed the columns in my Excel file to match the columns in the database's table, but my issue is how I can match these in a dynamic/automated way so I can continue to add to my datasets without worrying about changing the code.
Bonus: I'm also using SQL with a CASE statement to create a new column where I'm rolling up a few other columns; if there's a way to add this to the first part of my SQL, or if it's advisable to do all manipulations before using an INSERT statement, that'll be helpful to know as well.
| [
"Look at https://github.com/oracle/python-oracledb/blob/main/samples/load_csv.py\nYou would replace the CSV reading bit with parsing your data frame. You need to construct a SQL statement similar to the one used in that example:\nsql = \"insert into LoadCsvTab (id, name) values (:1, :2)\"\n\nFor each spreadsheet column that you decide matches a table column, construct the (id, name) bit of the statement and add another id to the bind section (:1, :2).\n"
] | [
0
] | [] | [] | [
"cx_oracle",
"excel",
"pandas",
"python",
"sql"
] | stackoverflow_0074621928_cx_oracle_excel_pandas_python_sql.txt |
Q:
Comma separated number after specific substring in middle of string
I need to extract a sequence of comma-separated numbers after a specific substring. When the substring is at the beginning of the string it works fine, but not when it's in the middle.
The regex 'Port':\ .([0-9]+) works fine with the example below to get the value 2.
String example:
{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}
But I need to get the Field value; I don't care if it's '[2,2]' or 2,2 (string or number).
I tried various attempts with a regex calculator, but couldn't find a solution to return the value after a string in the middle of the text. Any ideas? Please help. Thanks ahead, Nir
A:
I found the regex to be like this, not sure if thats what you want:
import re
string = "{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}"
output = re.findall(r"\'Field\'\: \'\[([0-9]+)\,([0-9]+)\]\'",string)
print(output)
output:
[('2', '2')]
if you want as a string:
output = str(output).replace('[','').replace(']','').replace('(','').replace(')','').replace(' ','').replace('\'','')
print(output)
output:
2,2
EDIT:
If I got what you want, this might work: it will create a new dataframe with only a column called 'Field', and you can then append it to your own dataframe.
values = []
def get_values(mdict, values):
pattern = r"\'Field\'\: \'\[([0-9]+)\,([0-9]+)\]\'"
output = re.findall(pattern,mdict)
output = str(output).replace('[','').replace(']','').replace('(','').replace(')','').replace(' ','').replace('\'','')
values.append(output)
# get_values(mdict, values)
for x in df['param']:
get_values(str(x), values)
df_temp = pd.DataFrame(values, columns=['Field'])
df.append(df_temp)
A:
This looks like a print()ed Python dict; can you use ast.literal_eval() to bring it back into a dictionary?
>>> import ast
>>> d = ast.literal_eval("""{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}""")
>>> d
{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]', 'bar': '[9, 9]'}
>>> d["Array"]
'[0, 0]'
A:
If you just want 2,2 for the Field value you can use a single capture group.
Note that you don't have to escape the ' : , and ]
'Field':\s+'\[([0-9]+,\s*[0-9]+)]'
'Field': Match literally
\s+'\[ Match 1+ whitespace chars and [
( Capture group 1
[0-9]+,\s*[0-9]+ Match 1+ digits , optional whitespace chars and 1+ digits
) Close group 1
]' Match literally
See a regex demo and a Python demo.
Example code
import re
pattern = r"'Field':\s+'\[([0-9]+,[0-9]+)]'"
s = "{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}"
m = re.search(pattern, s)
if m:
print(m.group(1))
Output
2,2
If you want to get all the values where the fields are between single quotes you can use a conditional matching the ] only when there is a [
'[^']+':\s+'(\[)?([0-9]+(?:,\s*[0-9]+)*)(?(1)\])'
Regex demo | Python demo
Then you can get the capture group 2 value.
Example:
import re
pattern = r"'[^']+':\s+'(\[)?([0-9]+(?:,\s*[0-9]+)*)(?(1)\])'"
s = "{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}"
matches = re.finditer(pattern, s)
for matchNum, match in enumerate(matches, start=1):
print(match.group(2))
Output
2
0, 0
2,2
0, 0
9, 9
| Comma separated number after specific substring in middle of string | I need to extract a sequence of comma-separated numbers after a specific substring. When the substring is at the beginning of the string it works fine, but not when it's in the middle.
The regex 'Port':\ .([0-9]+) works fine with the example below to get the value 2.
String example:
{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}
But I need to get the Field value; I don't care if it's '[2,2]' or 2,2 (string or number).
I tried various attempts with a regex calculator, but couldn't find a solution to return the value after a string in the middle of the text. Any ideas? Please help. Thanks ahead, Nir
| [
"I found the regex to be like this, not sure if thats what you want:\nimport re\n\nstring = \"{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}\"\n\noutput = re.findall(r\"\\'Field\\'\\: \\'\\[([0-9]+)\\,([0-9]+)\\]\\'\",string)\n\nprint(output)\n\noutput:\n[('2', '2')]\n\nif you want as a string:\noutput = str(output).replace('[','').replace(']','').replace('(','').replace(')','').replace(' ','').replace('\\'','')\nprint(output)\n\noutput:\n2,2\n\nEDIT:\nIf I got what you want, this might work, it will create a new dataframe with values with only a column called 'Field' and you can then append it to your own dataframe.\nvalues = []\n\ndef get_values(mdict, values):\n pattern = r\"\\'Field\\'\\: \\'\\[([0-9]+)\\,([0-9]+)\\]\\'\"\n output = re.findall(pattern,mdict)\n output = str(output).replace('[','').replace(']','').replace('(','').replace(')','').replace(' ','').replace('\\'','')\n values.append(output)\n\n# get_values(mdict, values)\n\nfor x in df['param']:\n get_values(str(x), values)\n\ndf_temp = pd.DataFrame(values, columns=['Field'])\n\ndf.append(df_temp)\n\n",
"This looks like a print()ed Python dict; can you use ast.literal_eval() to bring it back into a dictionary?\n>>> import ast\n>>> d = ast.literal_eval(\"\"\"{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}\"\"\")\n>>> d\n{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]', 'bar': '[9, 9]'}\n>>> d[\"Array\"]\n'[0, 0]'\n\n",
"If you just want 2,2 for the Field value you can use a single capture group.\nNote that you don't have to escape the ' : , and ]\n'Field':\\s+'\\[([0-9]+,\\s*[0-9]+)]'\n\n\n'Field': Match literally\n\\s+'\\[ Match 1+ whitespace chars and [\n( Capture group 1\n\n[0-9]+,\\s*[0-9]+ Match 1+ digits , optional whitespace chars and 1+ digits\n\n\n) Close group 1\n]' Match literally\n\nSee a regex demo and a Python demo.\nExample code\nimport re\n\npattern = r\"'Field':\\s+'\\[([0-9]+,[0-9]+)]'\"\n\ns = \"{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}\"\n\nm = re.search(pattern, s)\nif m:\n print(m.group(1))\n\nOutput\n2,2\n\n\nIf you want to get all the values where the fields are between single quotes you can use a conditional matching the ] only when there is a [\n'[^']+':\\s+'(\\[)?([0-9]+(?:,\\s*[0-9]+)*)(?(1)\\])'\n\nRegex demo | Python demo\nThen you can get the capture group 2 value.\nExample:\nimport re\n\npattern = r\"'[^']+':\\s+'(\\[)?([0-9]+(?:,\\s*[0-9]+)*)(?(1)\\])'\"\ns = \"{'Port': '2', 'Array': '[0, 0]', 'Field': '[2,2]', 'foo': '[0, 0]' , 'bar': '[9, 9]'}\"\nmatches = re.finditer(pattern, s)\n\nfor matchNum, match in enumerate(matches, start=1):\n print(match.group(2))\n\nOutput\n2\n0, 0\n2,2\n0, 0\n9, 9\n\n"
] | [
2,
1,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074632019_python_regex.txt |
Q:
TypeError: 'str' object is not callable, how do i fix this?
I'm under the impression this means my variables are messed up but I can't figure out which one it's upset with nor why.
It looks like it happens after it reads my command for k1. I don't know what to do to make it work, I've been trying for a while to find where the issue may be to no avail. I'm extremely new to coding and was sort of thrown into this with little instruction other than the requirements for what the code should do so any help would be greatly appreciated.
This is a code for RK4 method for solving ODE's meant to function similarly to scipy.integrate.solve_ivp.
import numpy as np
import math
import matplotlib.pyplot as plt
#defining functions
H0=7 #initial height, meters
def f2a(t,H,k,Vin,D):
Vin=150 #m^3/min
D=7 #diameter, m**2
k=10
dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))
return(dhdt)
x0=1 #initial cond.
y0=1
def fb2(J,t):
x=J[0]
y=J[1]
dxdt=0.25*y-x
dydt=3*x-y
#X0,Y0=1,1 initial conditions
return([dxdt,dydt])
#x0 and y0 are initial conditions
#I'm going to need to make vectors to plot so:
#made some empty lists to add to
listY=[]
listt=[]
def odeRK4(function,tspan,R,h,*args):
#R is vector of inital conditions
x0=R[0]
y0=R[1]
#writing statement for what to do if h isnt given/other thing
if h==None:
h=.01*(tspan[1]-tspan[0])
elif h> tspan[1]-tspan[0]:
h=.01*(tspan[1]-tspan[0])
else:
h=h
#defining the 2-element array (i hope)
#pretty sure tspan is range of t values
x0=tspan[0] #probably 0 if this is meant for time
xn=tspan[1] #whatever time we want it to end at?
#xn is final x value-t
#x0 is initial
t_values=np.arange(0,20,21)
for i in t_values:
#rk4 method
k1=h*(function(x0,y0,*args))
k2=h*(function((x0+h/2), (y0+k1/2),*args))
k3=h*(function((x0+h/2), (y0+k2/2),*args))
k4=h*(function((x0+h), (y0+k3),*args))
n=(k1+2*k2+2*k3+k4)/6
#new y value
yn=y0+n
#makes it so new y and x are used for next # in range
y0=yn
xn=x0+h
print ('Y=')
print(yn)
#I'm going to need to make them vectors for plotting
listY.append[yn]
print('When t=')
print(xn)
listt.append[xn]
Y_values=np.array(listY)
tx_vals=np.array(listt)
plt.plot(tx_vals,Y_values)
k=10
Vin=150
D=7
print('For 3A:')
odeRK4("f2a", [0,20],[0,7], None, 10,150,7)
A:
import numpy as np
import math
import matplotlib.pyplot as plt
#defining functions
H0=7 #initial height, meters
def f2a(t,H,k,Vin,D):
Vin=150 #m^3/min
D=7 #diameter, m**2
k=10
dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))
return(dhdt)
x0=1 #initial cond.
y0=1
def fb2(J,t):
x=J[0]
y=J[1]
dxdt=0.25*y-x
dydt=3*x-y
#X0,Y0=1,1 initial conditions
return([dxdt,dydt])
#x0 and y0 are initial conditions
#I'm going to need to make vectors to plot so:
#made some empty lists to add to
listY=[]
listt=[]
def odeRK4(function,tspan,R,h,*args):
#R is vector of inital conditions
x0=R[0]
y0=R[1]
#writing statement for what to do if h isnt given/other thing
if h==None:
h=.01*(tspan[1]-tspan[0])
elif h> tspan[1]-tspan[0]:
h=.01*(tspan[1]-tspan[0])
else:
h=h
#defining the 2-element array (i hope)
#pretty sure tspan is range of t values
x0=tspan[0] #probably 0 if this is meant for time
xn=tspan[1] #whatever time we want it to end at?
#xn is final x value-t
#x0 is initial
t_values=np.arange(0,20,21)
for i in t_values:
#rk4 method
k1=h*(function(x0,y0,*args))
k2=h*(function((x0+h/2), (y0+k1/2),*args))
k3=h*(function((x0+h/2), (y0+k2/2),*args))
k4=h*(function((x0+h), (y0+k3),*args))
n=(k1+2*k2+2*k3+k4)/6
#new y value
yn=y0+n
#makes it so new y and x are used for next # in range
y0=yn
xn=x0+h
print ('Y=')
print(yn)
#I'm going to need to make them vectors for plotting
listY.append[yn]
print('When t=')
print(xn)
listt.append[xn]
Y_values=np.array(listY)
tx_vals=np.array(listt)
plt.plot(tx_vals,Y_values)
k=10
Vin=150
D=7
print('For 3A:')
odeRK4(f2a, [0,20],[0,7], None, 10,150,7)
You are trying to give the first parameter value(f2a) as string. But it has to be a function in your def odeRK4(function,tspan,R,h,*args) definition. You have to change "f2a" --> f2a.
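A side note (my observation, not part of the original answer): after that change the code will still fail on listY.append[yn] and listt.append[xn], because append is indexed with square brackets instead of being called. They should be:
listY.append(yn)
listt.append(xn)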
| TypeError: 'str' object is not callable, how do i fix this? | I'm under the impression this means my variables are messed up but I can't figure out which one it's upset with nor why.
It looks like it happens after it reads my command for k1. I don't know what to do to make it work, I've been trying for a while to find where the issue may be to no avail. I'm extremely new to coding and was sort of thrown into this with little instruction other than the requirements for what the code should do so any help would be greatly appreciated.
This is a code for RK4 method for solving ODE's meant to function similarly to scipy.integrate.solve_ivp.
import numpy as np
import math
import matplotlib.pyplot as plt
#defining functions
H0=7 #initial height, meters
def f2a(t,H,k,Vin,D):
Vin=150 #m^3/min
D=7 #diameter, m**2
k=10
dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))
return(dhdt)
x0=1 #initial cond.
y0=1
def fb2(J,t):
x=J[0]
y=J[1]
dxdt=0.25*y-x
dydt=3*x-y
#X0,Y0=1,1 initial conditions
return([dxdt,dydt])
#x0 and y0 are initial conditions
#I'm going to need to make vectors to plot so:
#made some empty lists to add to
listY=[]
listt=[]
def odeRK4(function,tspan,R,h,*args):
#R is vector of inital conditions
x0=R[0]
y0=R[1]
#writing statement for what to do if h isnt given/other thing
if h==None:
h=.01*(tspan[1]-tspan[0])
elif h> tspan[1]-tspan[0]:
h=.01*(tspan[1]-tspan[0])
else:
h=h
#defining the 2-element array (i hope)
#pretty sure tspan is range of t values
x0=tspan[0] #probably 0 if this is meant for time
xn=tspan[1] #whatever time we want it to end at?
#xn is final x value-t
#x0 is initial
t_values=np.arange(0,20,21)
for i in t_values:
#rk4 method
k1=h*(function(x0,y0,*args))
k2=h*(function((x0+h/2), (y0+k1/2),*args))
k3=h*(function((x0+h/2), (y0+k2/2),*args))
k4=h*(function((x0+h), (y0+k3),*args))
n=(k1+2*k2+2*k3+k4)/6
#new y value
yn=y0+n
#makes it so new y and x are used for next # in range
y0=yn
xn=x0+h
print ('Y=')
print(yn)
#I'm going to need to make them vectors for plotting
listY.append[yn]
print('When t=')
print(xn)
listt.append[xn]
Y_values=np.array(listY)
tx_vals=np.array(listt)
plt.plot(tx_vals,Y_values)
k=10
Vin=150
D=7
print('For 3A:')
odeRK4("f2a", [0,20],[0,7], None, 10,150,7)
| [
"import numpy as np\nimport math\nimport matplotlib.pyplot as plt\n\n\n#defining functions\nH0=7 #initial height, meters\ndef f2a(t,H,k,Vin,D):\n Vin=150 #m^3/min\n D=7 #diameter, m**2\n k=10\n dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))\n return(dhdt)\n\nx0=1 #initial cond.\ny0=1\ndef fb2(J,t):\n x=J[0]\n y=J[1]\n dxdt=0.25*y-x\n dydt=3*x-y\n #X0,Y0=1,1 initial conditions\n return([dxdt,dydt])\n\n\n\n#x0 and y0 are initial conditions\n#I'm going to need to make vectors to plot so:\n #made some empty lists to add to\nlistY=[]\nlistt=[]\ndef odeRK4(function,tspan,R,h,*args):\n\n #R is vector of inital conditions\n x0=R[0]\n y0=R[1]\n\n\n #writing statement for what to do if h isnt given/other thing\n if h==None:\n h=.01*(tspan[1]-tspan[0])\n elif h> tspan[1]-tspan[0]:\n h=.01*(tspan[1]-tspan[0])\n else:\n h=h\n #defining the 2-element array (i hope)\n #pretty sure tspan is range of t values\n x0=tspan[0] #probably 0 if this is meant for time\n xn=tspan[1] #whatever time we want it to end at?\n #xn is final x value-t\n #x0 is initial\n\n t_values=np.arange(0,20,21)\n for i in t_values:\n #rk4 method\n k1=h*(function(x0,y0,*args))\n k2=h*(function((x0+h/2), (y0+k1/2),*args))\n k3=h*(function((x0+h/2), (y0+k2/2),*args))\n k4=h*(function((x0+h), (y0+k3),*args))\n\n n=(k1+2*k2+2*k3+k4)/6\n #new y value\n yn=y0+n\n #makes it so new y and x are used for next # in range\n y0=yn\n xn=x0+h\n print ('Y=')\n print(yn)\n #I'm going to need to make them vectors for plotting\n listY.append[yn]\n print('When t=')\n print(xn)\n listt.append[xn]\n\n Y_values=np.array(listY)\n tx_vals=np.array(listt) \n\n plt.plot(tx_vals,Y_values)\nk=10\nVin=150\nD=7 \nprint('For 3A:')\n\nodeRK4(f2a, [0,20],[0,7], None, 10,150,7)\n\nYou are trying to give the first parameter value(f2a) as string. But it has to be a function in your def odeRK4(function,tspan,R,h,*args) definition. You have to change \"f2a\" --> f2a.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074634143_python.txt |
Q:
How do I send myself an email using Python?
I know this has been answered before, but recently gmail updated the "Less secure apps" ToS or something. So with this update, is there a way to do this? (Sorry for short explanation + old duplicates)
I tried this tutorial, and when trying to connect to my Google account, the less secure apps option was disabled.
UPDATE: I'll be using chrome notifications, which is useful for what I'm doing. Thanks for trying to help! :D
A:
You can, or should be able to, use the same code that you are using now. The easiest solution would be to enable 2FA on your Google account and create an app password. Quick fix for SMTP username and password not accepted error
Once you have created the app password you can use that in place of your standard Gmail password in your code.
If that doesn't work you can switch to using XOAUTH2 and authorize a user and send an access token instead. This method would probably require changes to your existing code.
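A minimal sketch of the app-password approach (my addition; the address and the 16-character app password are placeholders):
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = msg["To"] = "you@gmail.com"          # placeholder address
msg["Subject"] = "Test"
msg.set_content("Hello from Python")

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("you@gmail.com", "abcdefghijklmnop")  # app password, not your normal password
    server.send_message(msg)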
| How do I send myself an email using Python? | I know this has been answered before, but recently gmail updated the "Less secure apps" ToS or something. So with this update, is there a way to do this? (Sorry for short explanation + old duplicates)
I tried this tutorial, and when trying to connect to my Google account, the less secure apps option was disabled.
UPDATE: I'll be using chrome notifications, which is useful for what I'm doing. Thanks for trying to help! :D
| [
"You can or should be able to use the same code that you are using now. The easest solution would be to enable 2fa on your google account and create an apps password. Quick fix for SMTP username and password not accepted error\nOnce you have created the apps password you can use that in place of your standard gmail password in your code.\nIf that doesn't work you can switch to using Xoauth2 and authorize a user and send an access token instead. This method would probably require changes you your existing code.\n"
] | [
1
] | [] | [] | [
"gmail",
"python",
"smtp"
] | stackoverflow_0074634185_gmail_python_smtp.txt |
Q:
How to automatically start a python http server
I am using python http.server 80 to expose my downloaded files to my twilio whatsapp bot
is there a way that as my django-twillio app starts it automatically runs the server on port 80 as well
python -m http.server 80
A:
Adding this code to your django-twilio app will programmatically start your server on localhost:80
from http.server import HTTPServer, SimpleHTTPRequestHandler
httpd = HTTPServer(('localhost', 80), SimpleHTTPRequestHandler)
httpd.serve_forever()
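Note that serve_forever() blocks, so if the Django/Twilio app should keep running in the same process, one option (a sketch, not specific to django-twilio; port 80 may require elevated privileges) is to start the server on a background thread:
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def start_file_server():
    httpd = HTTPServer(('localhost', 80), SimpleHTTPRequestHandler)
    httpd.serve_forever()

threading.Thread(target=start_file_server, daemon=True).start()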
| How to automatically start a python http server | I am using python http.server 80 to expose my downloaded files to my twilio whatsapp bot
is there a way that as my django-twillio app starts it automatically runs the server on port 80 as well
python -m http.server 80
| [
"Adding this code to your django-twilio app will programmatically start your server on localhost:80\nfrom http.server import HTTPServer, SimpleHTTPRequestHandler\n\nhttpd = HTTPServer(('localhost', 80), SimpleHTTPRequestHandler)\nhttpd.serve_forever()\n\n"
] | [
1
] | [] | [] | [
"django",
"httpserver",
"python",
"selenium",
"twilio_api"
] | stackoverflow_0074631869_django_httpserver_python_selenium_twilio_api.txt |
Q:
converting dict of list of lists to dict of dicts with pandas dataframe
trying to convert pandas dataframe column from a to b as below -
import pandas as pd
a = {'01AB': [["ABC",5],["XYZ",4],["LMN",1]],
'02AB_QTY': [["Other",20],["not_Other",150],["another",15]]}
b = {'01AB': {"ABC":5,"XYZ":4,"LMN":1},
'02AB_QTY': {"Other":20,"not_Other":150,"another":150}}
df = pd.DataFrame(a).to_dict(orient='dict')
print(df)
gives me -
{'01AB': {0: ['ABC', 5], 1: ['XYZ', 4], 2: ['LMN', 1]}, '02AB_QTY': {0: ['Other', 20], 1: ['not_Other', 150], 2: ['another', 15]}}
What would be a cleaner way to do this? This is what I have tried; dict 'a' is only used to create the dataframe. I don't want to iterate through that; I have to iterate through the columns available in the pandas dataframe:
import pandas as pd
a = {'01AB': [["ABC",5],["XYZ",4],["LMN",1]],
'02AB_QTY': [["Other",20],["not_Other",150],["another",15]]}
b = {'01AB': {"ABC":5,"XYZ":4,"LMN":1},
'02AB_QTY': {"Other":20,"not_Other":150,"another":150}}
df = pd.DataFrame(a)#.to_dict(orient='dict')
col_list = ["01AB", "02AB_QTY",]
for col in col_list:
# print(df)
df[col] = df[col].apply(lambda x: {} if x is None else {key: {v[0]:v[1] for v in list_item} for key, list_item in x})
display(df)
A:
for key, list_item in a.items():
a[key] = {v[0]:v[1] for v in list_item}
OR
b = {}
for key, list_item in a.items():
b[key] = {v[0]:v[1] for v in list_item}
OR
b = {key: {v[0]:v[1] for v in list_item} for key, list_item in a.items()}
A:
To make the example clearer, the following code uses a loop with verbose variable names:
a = {"01AB": [["ABC", 5], ["XYZ", 4], ["LMN", 1]],
"02AB_QTY": [["Other", 20], ["not_Other", 150]]}
b = {"01AB": {"ABC": 5, "XYZ": 4, "LMN": 1},
"02AB_QTY": {"Other": 20, "not_Other": 150}}
b1 = {}
for key1, list_of_key2_value in a.items():
for key2, value in list_of_key2_value:
b1.setdefault(key1, {}).update({key2: value})
assert b == b1
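If the data really does live in a pandas DataFrame as in the question, a minimal sketch of the same conversion done per column (my addition):
import pandas as pd

a = {'01AB': [["ABC", 5], ["XYZ", 4], ["LMN", 1]],
     '02AB_QTY': [["Other", 20], ["not_Other", 150], ["another", 15]]}
df = pd.DataFrame(a)

b = {col: dict(df[col].tolist()) for col in df.columns}
# {'01AB': {'ABC': 5, 'XYZ': 4, 'LMN': 1}, '02AB_QTY': {'Other': 20, 'not_Other': 150, 'another': 15}}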
| converting dict of list of lists to dict of dicts with pandas dataframe | trying to convert pandas dataframe column from a to b as below -
import pandas as pd
a = {'01AB': [["ABC",5],["XYZ",4],["LMN",1]],
'02AB_QTY': [["Other",20],["not_Other",150],["another",15]]}
b = {'01AB': {"ABC":5,"XYZ":4,"LMN":1},
'02AB_QTY': {"Other":20,"not_Other":150,"another":150}}
df = pd.DataFrame(a).to_dict(orient='dict')
print(df)
gives me -
{'01AB': {0: ['ABC', 5], 1: ['XYZ', 4], 2: ['LMN', 1]}, '02AB_QTY': {0: ['Other', 20], 1: ['not_Other', 150], 2: ['another', 15]}}
What would be a cleaner way to do this? This is what I have tried; dict 'a' is only used to create the dataframe. I don't want to iterate through that; I have to iterate through the columns available in the pandas dataframe:
import pandas as pd
a = {'01AB': [["ABC",5],["XYZ",4],["LMN",1]],
'02AB_QTY': [["Other",20],["not_Other",150],["another",15]]}
b = {'01AB': {"ABC":5,"XYZ":4,"LMN":1},
'02AB_QTY': {"Other":20,"not_Other":150,"another":150}}
df = pd.DataFrame(a)#.to_dict(orient='dict')
col_list = ["01AB", "02AB_QTY",]
for col in col_list:
# print(df)
df[col] = df[col].apply(lambda x: {} if x is None else {key: {v[0]:v[1] for v in list_item} for key, list_item in x})
display(df)
| [
"for key, list_item in a.items():\n a[key] = {v[0]:v[1] for v in list_item}\n\nOR\nb = {}\nfor key, list_item in a.items():\n b[key] = {v[0]:v[1] for v in list_item}\n\nOR\nb = {key: {v[0]:v[1] for v in list_item} for key, list_item in a.items()}\n\n",
"To example be more clear, the following code using loop with verbose variable name\na = {\"01AB\": [[\"ABC\", 5], [\"XYZ\", 4], [\"LMN\", 1]],\n \"02AB_QTY\": [[\"Other\", 20], [\"not_Other\", 150]]}\n\nb = {\"01AB\": {\"ABC\": 5, \"XYZ\": 4, \"LMN\": 1},\n \"02AB_QTY\": {\"Other\": 20, \"not_Other\": 150}}\n\nb1 = {}\nfor key1, list_of_key2_value in a.items():\n for key2, value in list_of_key2_value:\n b1.setdefault(key1, {}).update({key2: value})\nassert b == b1\n\n"
] | [
1,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074634070_pandas_python.txt |
Q:
Detecting a repeated sequence with regex
I have a text example like
0s11 0s12 0s33 my name is 0sgfh 0s1 0s22 0s87
I want to detect the consecutive sequences that start with 0s.
So, the expected output should be 0s11 0s12 0s33, 0sgfh 0s1 0s22 0s87
I tried using regex
(0s\w+)
but that would detect each 0s11, 0s12, 0s33, etc. individually.
Any idea on how to modify the pattern?
A:
To get those 2 matches where there are at least 2 consecutive parts:
\b0s\w+(?:\s+0s\w+)+
Explanation
\b A word boundary to prevent a partial word match
0s\w+ Match os and 1+ word chars
(?:\s+0s\w+)+ Repeat 1 or more times whitespace chars followed by 0s and 1+ word chars
Regex demo
If you also want to match a single occurrence:
\b0s\w+(?:\s+0s\w+)*
Regex demo
Note that \w+ matches 1 or more word characters so it would not match only 0s
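For completeness, a small Python usage sketch of the first pattern (my addition):
import re

text = "0s11 0s12 0s33 my name is 0sgfh 0s1 0s22 0s87"
print(re.findall(r"\b0s\w+(?:\s+0s\w+)+", text))
# ['0s11 0s12 0s33', '0sgfh 0s1 0s22 0s87']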
A:
Should be doable with re.findall(). Your pattern was correct! :)
import re
testString = "0s11 0s12 0s33 my name is 0sgfh 0s1 0s22 0s87"
print(re.findall(r'0s\w+', testString))
['0s11', '0s12', '0s33', '0sgfh', '0s1', '0s22', '0s87']
Hope this helps!
| Detecting a repeated sequence with regex | I have a text example like
0s11 0s12 0s33 my name is 0sgfh 0s1 0s22 0s87
I want to detect the consecutive sequences that start with 0s.
So, the expected output should be 0s11 0s12 0s33, 0sgfh 0s1 0s22 0s87
I tried using regex
(0s\w+)
but that would detect each 0s11, 0s12, 0s33, etc. individually.
Any idea on how to modify the pattern?
| [
"To get those 2 matches where there are at least 2 consecutive parts:\n\\b0s\\w+(?:\\s+0s\\w+)+\n\nExplanation\n\n\\b A word boundary to prevent a partial word match\n0s\\w+ Match os and 1+ word chars\n(?:\\s+0s\\w+)+ Repeat 1 or more times whitespace chars followed by 0s and 1+ word chars\n\nRegex demo\nIf you also want to match a single occurrence:\n\\b0s\\w+(?:\\s+0s\\w+)*\n\nRegex demo\nNote that \\w+ matches 1 or more word characters so it would not match only 0s\n",
"Should be doable with re.findall(). Your pattern was correct! :)\nimport re\ntestString = \"0s11 0s12 0s33 my name is 0sgfh 0s1 0s22 0s87\"\nprint(re.findall('0s\\w', testString))\n\n['0s11', '0s12', '0s33', '0sgfh', '0s1', '0s22', '0s87']\n\nHope this helps!\n"
] | [
1,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074631480_python_regex.txt |
Q:
How do I use a file like a memory buffer in Python?
I don't know the correct terminology, maybe it's called page file, but I'm not sure. I need a way to use an on-disk file as a buffer, like bytearray. It should be able to do things like a = buffer[100:200] and buffer[33] = 127 without the code having to be aware that it's reading from and writing to a file in the background.
Basically I need the opposite of BytesIO, which uses memory with a file interface. I need a way to use a file with a memory buffer interface. And ideally it doesn't write to the file every time the data is changed (but it's OK if it does).
The reason I need this functionality is because I use packages that expect data to be in a buffer object, but I only have 4MB of memory available. It's impossible to load the files into memory. So I need an object that acts like a bytearray for example, but reads and writes data directly to a file, not memory.
In my use case I need a micropython module, but a standard python module might work as well. Are there any modules that would do what I need?
A:
Can something like this work for you?
class Memfile:
def __init__(self, file):
self.file = file
def __getitem__(self,key):
if type(key) is int:
self.file.seek(key)
return self.file.read(1)
if type(key) is slice:
self.file.seek(key.start)
return self.file.read(key.stop - key.start)
def __setitem__(self, key, val):
assert(type(val) == bytes or type(val) == bytearray)
if type(key) is slice:
assert(key.stop - key.start == len(val))
self.file.seek(key.start)
self.file.write(val)
if type(key) is int:
assert(len(val) == 1)
self.file.seek(key)
self.file.write(val)
def close(self):
self.file.close()
if __name__ == "__main__":
mf = Memfile(open("data", "r+b")) # Assuming the file 'data' have 10+ bytes
mf[0:10] = b'\x00'*10
print(mf[0:10]) # b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
mf[0:2] = b'\xff\xff'
print(mf[0:10]) # b'\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00'
print(mf[2]) # b'\x00'
print(mf[1]) # b'\xff'
mf[0:4] = b'\xde\xad\xbe\xef'
print(mf[0:4]) # b'\xde\xad\xbe\xef'
mf.close()
Note that if this solution fits your needs, you will still need to do plenty of testing here.
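As an alternative sketch (my addition, not from the answer above): on CPython the standard-library mmap module already exposes a file through a mutable, sliceable, bytearray-like interface; whether it is available on a given MicroPython port is another matter, and the file must already be large enough for the offsets you touch.
import mmap

with open("data", "r+b") as f:
    buf = mmap.mmap(f.fileno(), 0)   # map the whole (non-empty) file
    part = buf[100:200]              # slice like a bytearray
    buf[33] = 127                    # item assignment writes through to the file
    buf.flush()
    buf.close()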
| How do I use a file like a memory buffer in Python? | I don't know the correct terminology, maybe it's called page file, but I'm not sure. I need a way to use an on-disk file as a buffer, like bytearray. It should be able to do things like a = buffer[100:200] and buffer[33] = 127 without the code having to be aware that it's reading from and writing to a file in the background.
Basically I need the opposite of BytesIO, which uses memory with a file interface. I need a way to use a file with a memory buffer interface. And ideally it doesn't write to the file every time the data is changed (but it's OK if it does).
The reason I need this functionality is because I use packages that expect data to be in a buffer object, but I only have 4MB of memory available. It's impossible to load the files into memory. So I need an object that acts like a bytearray for example, but reads and writes data directly to a file, not memory.
In my use case I need a micropython module, but a standard python module might work as well. Are there any modules that would do what I need?
| [
"Can something like this work for you?\nclass Memfile:\n\n def __init__(self, file):\n self.file = file\n\n def __getitem__(self,key):\n if type(key) is int:\n self.file.seek(key)\n return self.file.read(1)\n if type(key) is slice:\n self.file.seek(key.start)\n return self.file.read(key.stop - key.start)\n\n def __setitem__(self, key, val):\n assert(type(val) == bytes or type(val) == bytearray)\n if type(key) is slice:\n assert(key.stop - key.start == len(val))\n self.file.seek(key.start)\n self.file.write(val)\n if type(key) is int:\n assert(len(val) == 1)\n self.file.seek(key)\n self.file.write(val)\n\n def close(self):\n self.file.close()\n\n\nif __name__ == \"__main__\":\n mf = Memfile(open(\"data\", \"r+b\")) # Assuming the file 'data' have 10+ bytes\n mf[0:10] = b'\\x00'*10\n print(mf[0:10]) # b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\n mf[0:2] = b'\\xff\\xff'\n print(mf[0:10]) # b'\\xff\\xff\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\n print(mf[2]) # b'\\x00'\n print(mf[1]) # b'\\xff'\n mf[0:4] = b'\\xde\\xad\\xbe\\xef'\n print(mf[0:4]) # b'\\xde\\xad\\xbe\\xef'\n mf.close()\n\nNote that if this solutions fits your needs you will need to do plenty of testing here\n"
] | [
1
] | [] | [] | [
"buffer",
"file",
"memory",
"micropython",
"python"
] | stackoverflow_0074633047_buffer_file_memory_micropython_python.txt |
Q:
SNMP library for python failed return SNMP details
I am unable to fetch SNMP variable details using the sample posted at (https://snmplabs.thola.io/pysnmp/quick-start.html) and got the error response below. Please advise how to fix the error. Thanks in advance.
Traceback (most recent call last):
File "d:/WorkFiles/SNMPTest/SNMP.py", line 6, in <module>
UdpTransportTarget(('snmpsim.try.thola.io', 161)),
File "D:\Python37\lib\site-packages\pysnmp\hlapi\transport.py", line 19, in __init__
self.transportAddr = self._resolveAddr(transportAddr)
File "D:\Python37\lib\site-packages\pysnmp\hlapi\asyncore\transport.py", line 63, in _resolveAddr
'@'.join([str(x) for x in transportAddr]), sys.exc_info()[1]))
pysnmp.error.PySnmpError: Bad IPv4/UDP transport address snmpsim.try.thola.io@161: [Errno 11001] getaddrinfo failedcaused by <class 'socket.gaierror'>: [Errno 11001] getaddrinfo failed
**
Code used for testing
**
from pysnmp.hlapi import *
errorIndication, errorStatus, errorIndex, varBinds = next(
getCmd(SnmpEngine(),
CommunityData('public', mpModel=0),
UdpTransportTarget(('snmpsim.try.thola.io', 161)),
ContextData(),
ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
)
if errorIndication:
print(errorIndication)
elif errorStatus:
print('%s at %s' % (errorStatus.prettyPrint(),
errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
else:
for varBind in varBinds:
print(' = '.join([x.prettyPrint() for x in varBind]))
A:
My company is trying to take over the PySNMP ecosystem, as documented here.
So, you can use demo.pysnmp.com to test out typical SNMP commands/operations.
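For example, only the transport target in the original snippet needs to change (sketch):
UdpTransportTarget(('demo.pysnmp.com', 161)),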
| SNMP library for python failed return SNMP details | I am unable to fetch SNMP variable details using the sample posted at (https://snmplabs.thola.io/pysnmp/quick-start.html) and got the error response below. Please advise how to fix the error. Thanks in advance
Traceback (most recent call last):
File "d:/WorkFiles/SNMPTest/SNMP.py", line 6, in <module>
UdpTransportTarget(('snmpsim.try.thola.io', 161)),
File "D:\Python37\lib\site-packages\pysnmp\hlapi\transport.py", line 19, in __init__
self.transportAddr = self._resolveAddr(transportAddr)
File "D:\Python37\lib\site-packages\pysnmp\hlapi\asyncore\transport.py", line 63, in _resolveAddr
'@'.join([str(x) for x in transportAddr]), sys.exc_info()[1]))
pysnmp.error.PySnmpError: Bad IPv4/UDP transport address snmpsim.try.thola.io@161: [Errno 11001] getaddrinfo failedcaused by <class 'socket.gaierror'>: [Errno 11001] getaddrinfo failed
**
Code used for testing
**
from pysnmp.hlapi import *
errorIndication, errorStatus, errorIndex, varBinds = next(
getCmd(SnmpEngine(),
CommunityData('public', mpModel=0),
UdpTransportTarget(('snmpsim.try.thola.io', 161)),
ContextData(),
ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
)
if errorIndication:
print(errorIndication)
elif errorStatus:
print('%s at %s' % (errorStatus.prettyPrint(),
errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
else:
for varBind in varBinds:
print(' = '.join([x.prettyPrint() for x in varBind]))
| [
"My company is trying to take over PySNMP ecosystem, as documented here.\nSo, you can use demo.pysnmp.com to test out typical SNMP commands/operations.\n"
] | [
0
] | [] | [] | [
"net_snmp",
"pysnmp",
"python",
"snmp",
"snmp_trap"
] | stackoverflow_0072285619_net_snmp_pysnmp_python_snmp_snmp_trap.txt |
Q:
Can Python's strptime handle Python's logging time format?
By default, Python's logging module uses a special format for times, which includes milliseconds: 2003-01-23 00:29:50,411.
Notably, strftime and strptime don't have a standard "milliseconds" specifier (so logging first prints everything else with strftime and then inserts the milliseconds separately). This means there's no obvious way to parse these strings using the standard library.
However, everything seems to work fine when I use strptime with the %f (microseconds) specifier: %Y-%m-%d %H:%M:%S,%f. While %f in strftime prints a six-digit number, in strptime it's apparently happy to take a three-digit number instead, and assume the last three digits are zeroes.
My question is—is this standard/documented behavior? Can I rely on this to keep working, or is it liable to break unexpectedly in new versions? I haven't been able to find anything about this in the Python docs (or man strptime, which doesn't even mention %f), but this seems like a common enough use case that I'd be surprised if nobody's needed it before.
I could also append three zeroes to the time string before passing it to strptime, but that's hacky enough that I'd prefer not to do it unless necessary.
A:
This is in fact standard—for Python, at least. (%f is not a C-standard directive, which is why I couldn't find it in man strptime.) It's in the notes under Technical Detail:
When used with the strptime() method, the %f directive accepts from one to six digits and zero pads on the right. %f is an extension to the set of format characters in the C standard (but implemented separately in datetime objects, and therefore always available).
Thanks to Kelly Bundy for pointing me to this in the comments!
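As a quick sanity check, the exact timestamp format from the question parses fine with %f:
>>> from datetime import datetime
>>> datetime.strptime('2003-01-23 00:29:50,411', '%Y-%m-%d %H:%M:%S,%f')
datetime.datetime(2003, 1, 23, 0, 29, 50, 411000)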
A:
As I said in my comments, I think the best (and safest) way to handle such strings is by using datetime.fromisoformat() and datetime.isoformat. This way, you don't have to care about messing with format templates ever:
>>> from datetime import datetime
>>> datetime.fromisoformat('2003-01-23 00:29:50,411')
datetime.datetime(2003, 1, 23, 0, 29, 50, 411000)
>>>
| Can Python's strptime handle Python's logging time format? | By default, Python's logging module uses a special format for times, which includes milliseconds: 2003-01-23 00:29:50,411.
Notably, strftime and strptime don't have a standard "milliseconds" specifier (so logging first prints everything else with strftime and then inserts the milliseconds separately). This means there's no obvious way to parse these strings using the standard library.
However, everything seems to work fine when I use strptime with the %f (microseconds) specifier: %Y-%m-%d %H:%M:%S,%f. While %f in strftime prints a six-digit number, in strptime it's apparently happy to take a three-digit number instead, and assume the last three digits are zeroes.
My question is—is this standard/documented behavior? Can I rely on this to keep working, or is it liable to break unexpectedly in new versions? I haven't been able to find anything about this in the Python docs (or man strptime, which doesn't even mention %f), but this seems like a common enough use case that I'd be surprised if nobody's needed it before.
I could also append three zeroes to the time string before passing it to strptime, but that's hacky enough that I'd prefer not to do it unless necessary.
| [
"This is in fact standard—for Python, at least. (%f is not a C-standard directive, which is why I couldn't find it in man strptime.) It's in the notes under Technical Detail:\n\nWhen used with the strptime() method, the %f directive accepts from one to six digits and zero pads on the right. %f is an extension to the set of format characters in the C standard (but implemented separately in datetime objects, and therefore always available).\n\nThanks to Kelly Bundy for pointing me to this in the comments!\n",
"As I said in my comments, I think the best (and safest) way to handle such strings is by using datetime.fromisoformat() and datetime.isoformat. This way, you don't have to care about messing with format templates ever:\n>>> from datetime import datetime\n>>> datetime.fromisoformat('2003-01-23 00:29:50,411')\ndatetime.datetime(2003, 1, 23, 0, 29, 50, 411000)\n>>>\n\n"
] | [
1,
1
] | [] | [] | [
"datetime",
"python",
"strptime"
] | stackoverflow_0074632508_datetime_python_strptime.txt |
Q:
How to convert a URI containing partial relative path like '/../' in the middle?
The LibreOffice API object is returning a URI path that contains a relative path in the middle of the string, like:
file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav
How to convert this to the absolute like:
file:///C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav
How would I convert this?
A:
Use os.path.normpath:
import os
os.path.normpath("file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav")
Output:
'file:/C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav'
Note that the prefix is not correct anymore. So you may have to remove the "file:///" part first, then use normpath, then prepend the "file:///" part again.
A:
OK, so Windows converts forward slashes to backslashes in this context.
Here is my final solution.
import os
import re


def uri_absolute(uri: str) -> str:
uri_re = r"^(file:(?:/*))"
# converts
# file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav
# to
# file:///C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav
result = os.path.normpath(uri)
# window will use back slash so convert to forward slash
result = result.replace("\\", "/")
# result may now start with file:/ and not file:///
# add proper file:/// again
result = re.sub(uri_re, "file:///", result, 1)
return result
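If you'd rather not hand-roll the prefix and backslash handling, here is a minimal sketch using the standard urllib.parse and posixpath modules. It assumes the input is always a file: URI with no query string or fragment, and it normalizes only the path component, so backslashes are never introduced:
import posixpath
from urllib.parse import urlsplit

def uri_normalize(uri: str) -> str:
    parts = urlsplit(uri)
    # normalize only the path component; posixpath keeps forward slashes even on Windows
    clean_path = posixpath.normpath(parts.path)
    return f"{parts.scheme}://{parts.netloc}{clean_path}"

# uri_normalize('file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav')
# -> 'file:///C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav'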
| How to convert a URI containing partial relative path like '/../' in the middle? | LibreOffice API object is returning a URI path that contains relative path in the middle of the string, like:
file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav
How to convert this to the absolute like:
file:///C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav
How would I convert this?
| [
"Use os.path.normpath:\nimport os\n\nos.path.normpath(\"file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav\")\n\nOutput:\n'file:/C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav'\n\nNote that the prefix is not correct anymore. So you may have to remove the \"file:///\" part first, then use normpath, then prepend the \"file:///\" part again.\n",
"Ok so Window convert forward slashes to back slash in this context.\nHere is my final solution.\ndef uri_absolute(uri: str) -> str:\n uri_re = r\"^(file:(?:/*))\"\n # converts\n # file:///C:/Program%20Files/LibreOffice/program/../share/gallery/sounds/apert2.wav\n # to\n # file:///C:/Program%20Files/LibreOffice/share/gallery/sounds/apert2.wav\n result = os.path.normpath(uri)\n # window will use back slash so convert to forward slash\n result = result.replace(\"\\\\\", \"/\")\n # result may now start with file:/ and not file:///\n\n # add proper file:/// again\n result = re.sub(uri_re, \"file:///\", result, 1)\n return result\n\n"
] | [
2,
0
] | [] | [] | [
"python",
"relative_path",
"uri"
] | stackoverflow_0074632802_python_relative_path_uri.txt |
Q:
Denoise noisy straight lines / make noisy lines solid Python
I am attempting to denoise / make solid lines in a very noisy image of a floorplan in python to no success. The methods I have used are:
masking
bluring
and houghlinesp
I have even tried a combination of the first two. here is the sample input image I am trying to make into solid straight lines: original image
Using the HoughLines method, this is the best result I could achieve (lines are solid but overlap like crazy wherever there is text; this cannot easily be fixed by changing my minLineLength/maxLineGap variables): HoughLines result
I have tried: masking, gaussian blur, and houghlinesp.
Houghlinesp Code:
import cv2
import numpy as np
from tkinter import Tk # from tkinter import Tk for Python 3.x
from tkinter.filedialog import askopenfilename
import os
Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
filename = askopenfilename() # show an "Open" dialog box and return the path to the selected file
print(filename)
filename3, file_extension = os.path.splitext(filename)
# Read input
img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
# Initialize output
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
# Median blurring to get rid of the noise; invert image
img = 255 - cv2.medianBlur(img, 3)
# Detect and draw lines
lines = cv2.HoughLinesP(img, 1, np.pi/180, 10, minLineLength=40, maxLineGap=30)
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imshow('out', out)
cv2.imwrite(filename3+' '+'69'+'.png', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
A:
There are a few different things you could try, but to start I would recommend the following:
First, threshold the image to identify only the parts that constitute the floor plan
Next, dilate the image to connect any broken segments
Finally, erode the image to prevent your lines from being too thick
You'll have to mess around with the parameters to get it right, but I think this is your best bet to solve this problem without it getting too complicated.
If you want, you can also try the Sobel operators before thresholding to better identify horizontal and vertical lines.
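A minimal sketch of that threshold/dilate/erode pipeline with OpenCV; the input filename, kernel size, and iteration counts are placeholders you would need to tune for your floorplan:
import cv2
import numpy as np

img = cv2.imread('floorplan.png', cv2.IMREAD_GRAYSCALE)

# 1. threshold: keep only the dark line work (inverted so lines become white), Otsu picks the level
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 2. dilate: connect broken segments
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(binary, kernel, iterations=2)

# 3. erode: thin the lines back down so they are not too thick
cleaned = cv2.erode(dilated, kernel, iterations=1)

cv2.imwrite('floorplan_clean.png', cleaned)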
| Denoise noisy straight lines / make noisy lines solid Python | I am attempting to denoise / make solid lines in a very noisy image of a floorplan in python to no success. The methods I have used are:
masking
bluring
and houghlinesp
I have even tried a combination of the first two. here is the sample input image I am trying to make into solid straight lines: original image
With using the HoughLines method this is the best result I could achieve (lines solid but overlapping like crazy wherever there is text (This cannot easily be fixed by changing my minline/maxlinegap variables):HoughLines result
I have tried: masking, gaussian blur, and houghlinesp.
Houghlinesp Code:
import cv2
import numpy as np
from tkinter import Tk # from tkinter import Tk for Python 3.x
from tkinter.filedialog import askopenfilename
import os
Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing
filename = askopenfilename() # show an "Open" dialog box and return the path to the selected file
print(filename)
filename3, file_extension = os.path.splitext(filename)
# Read input
img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
# Initialize output
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
# Median blurring to get rid of the noise; invert image
img = 255 - cv2.medianBlur(img, 3)
# Detect and draw lines
lines = cv2.HoughLinesP(img, 1, np.pi/180, 10, minLineLength=40, maxLineGap=30)
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imshow('out', out)
cv2.imwrite(filename3+' '+'69'+'.png', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
| [
"There are a few different things you could try, but to start I would recommend the following:\n\nFirst, threshold the image to identify only the parts that constitute the floor plan\nNext, dilate the image to connect any broken segments\nFinally, erode the image to prevent your lines from being too thick\n\nYou'll have to mess around with the parameters to get it right, but I think this is your best bet to solve this problem without it getting too complicated.\nIf you want, you can also try the Sobel operators before thresholding to better identify horizontal and vertical lines.\n"
] | [
0
] | [] | [] | [
"image_processing",
"mask",
"noise_reduction",
"opencv",
"python"
] | stackoverflow_0074633128_image_processing_mask_noise_reduction_opencv_python.txt |
Q:
What am I doing wrong with Numba here?
I'm trying to learn how to use the Numba module. So far I haven't been able to get anything working because of some problem interfacing with NumPy. This is the code I'm running (from the Numba docs) and the error I get:
from numba import jit
import numpy as np
x = np.arange(100).reshape(10, 10)
@jit(nopython=True) # Set "nopython" mode for best performance, equivalent to @njit
def go_fast(a): # Function is compiled to machine code when called the first time
trace = 0.0
for i in range(a.shape[0]): # Numba likes loops
trace += np.tanh(a[i, i]) # Numba likes NumPy functions
return a + trace # Numba likes NumPy broadcasting
print(go_fast(x))
Traceback (most recent call last):
File "C:/Users/JoHn/Documents/Current Classes/MEEN575_Optimization/HW6/Optimal_controller/angle_wrapping.py", line 84, in <module>
print(go_fast(x))
TypeError: expected dtype object, got 'numpy.dtype[float64]'
I know from some searching that this was or is a known error somewhat recently and had something to do with new builds of Numba requiring newer builds of NumPy or something like that, but as far as I can tell I have the most recent NumPy build, version 1.20. Any tips on what I'm doing wrong? To be clear I've never had a great understanding of how to cleanly setup environments in python so it is very possible I'm just missing something obvious here.
A:
Updating to 0.53.1 works. It failed on 0.47.x for me as well. It seems to be more of a numpy issue. One way to resolve it is to install numpy >= 1.20.0 and numba > 0.52.
More information on this issue:
https://github.com/numba/numba/issues/6041
P.S: not sure if you still have this error, just wanted to update, was facing similar issue.
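For reference, upgrading both packages in one go might look like this (exact minimum versions depend on your Python version, per the note above):
pip install --upgrade "numpy>=1.20" "numba>=0.53"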
A:
I had the exact same problem. I tried updating just numpy and numba to the versions that historically worked for numba (as mentioned by the other answer here), but that did not work for me.
What resolved the issue for me was a full update of conda and all the associated packages. I did that with the command
conda update -n base -c defaults conda
Be sure to restart the computer once you do this, since there are other packages updated as well.
| What am I doing wrong with Numba here? | I'm trying to learn how to use the Numba module. So far I haven't been able to get anything working because of some problem interfacing with NumPy. This is the code I'm running (from the Numba docs) and the error I get:
from numba import jit
import numpy as np
x = np.arange(100).reshape(10, 10)
@jit(nopython=True) # Set "nopython" mode for best performance, equivalent to @njit
def go_fast(a): # Function is compiled to machine code when called the first time
trace = 0.0
for i in range(a.shape[0]): # Numba likes loops
trace += np.tanh(a[i, i]) # Numba likes NumPy functions
return a + trace # Numba likes NumPy broadcasting
print(go_fast(x))
Traceback (most recent call last):
File "C:/Users/JoHn/Documents/Current Classes/MEEN575_Optimization/HW6/Optimal_controller/angle_wrapping.py", line 84, in <module>
print(go_fast(x))
TypeError: expected dtype object, got 'numpy.dtype[float64]'
I know from some searching that this was or is a known error somewhat recently and had something to do with new builds of Numba requiring newer builds of NumPy or something like that, but as far as I can tell I have the most recent NumPy build, version 1.20. Any tips on what I'm doing wrong? To be clear I've never had a great understanding of how to cleanly setup environments in python so it is very possible I'm just missing something obvious here.
| [
"Update to 0.53.1 works. It failed on 0.47.x as well for me. Seems more of numpy issue. One way to resolve install numpy >=1.20.0 and numba v>0.52.\nMore information on this issue:\nhttps://github.com/numba/numba/issues/6041\nP.S: not sure if you still have this error, just wanted to update, was facing similar issue.\n",
"I had the exact same problem. I tried updating just numpy and numba to the versions that historically worked for numba (as mentioned by the other answer here), but that did not work for me.\nWhat resolved the issue for me was a full update of conda and all the associated packages. I did that with the command\nconda update -n base -c defaults conda\n\nBe sure to restart the computer once you do this, since there are other packages updated as well.\n"
] | [
5,
0
] | [] | [] | [
"dtype",
"numba",
"numpy",
"python"
] | stackoverflow_0067016356_dtype_numba_numpy_python.txt |
Q:
How to convert all values of a nested dictionary into strings?
I am writing a python application where I have a variable dictionary that can be nested up to any level.
The keys in any level can be either int or string. But I want to convert all keys and values at all levels into strings. How nested the dictionary will be is variable which makes it a bit complicated.
{
"col1": {
"0": 0,
"1": 8,
"2": {
0: 2,
}
"3": 4,
"4": 5
},
"col2": {
"0": "na",
"1": 1,
"2": "na",
"3": "na",
"4": "na"
},
"col3": {
"0": 1,
"1": 3,
"2": 3,
"3": 6,
"4": 3
},
"col4": {
"0": 5,
"1": "na",
"2": "9",
"3": 9,
"4": "na"
}
}
I am looking for the shortest and quickest function to achieve that. There are other questions like Converting dictionary values in python from str to int in nested dictionary that suggest ways of doing it but none of them deals with the "variable nesting" nature of the dictionary.
Any ideas will be appreciated.
A:
This is the most straightforward way I can think of doing it:
import json
data = {'col4': {'1': 'na', '0': 5, '3': 9, '2': '9', '4': 'na'}, 'col2': {'1': 1, '0': 'na', '3': 'na', '2': 'na', '4': 'na'}, 'col3': {'1': 3, '0': 1, '3': 6, '2': 3, '4': 3}, 'col1': {'1': 8, '0': 0, '3': 4, '2': {0: 2}, '4': 5}}
stringified_dict = json.loads(json.dumps(data), parse_int=str, parse_float=str)
Here are some links to the documentation for json loads and parse_int: Python3, Python2
A:
You could check the dictionary recursively:
def iterdict(d):
for k, v in d.items():
if isinstance(v, dict):
iterdict(v)
else:
if type(v) == int:
v = str(v)
d.update({k: v})
return d
A:
In case it's helpful to anyone, I modified Jose's answer to handle lists as well. The output converts all values to strings. I haven't thoroughly tested it, but I just ran it against around 40k records, and it passed schema validation:
def iterdict(d):
"""Recursively iterate over dict converting values to strings."""
for k, v in d.items():
if isinstance(v, dict):
iterdict(v)
elif isinstance(v, list):
for x, i in enumerate(v):
if isinstance(i, (dict, list)):
iterdict(i)
else:
if type(i) != str:
i = str(i)
v[x] = i
else:
if type(v) != str:
v = str(v)
d.update({k: v})
return d
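Since the question also asks for the keys to be converted, here is an untested sketch that rebuilds the structure with both keys and values turned into strings; it returns a new dictionary rather than mutating in place:
def stringify(obj):
    """Recursively convert all keys and values to strings."""
    if isinstance(obj, dict):
        return {str(k): stringify(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [stringify(i) for i in obj]
    return str(obj)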
| How to convert all values of a nested dictionary into strings? | I am writing a python application where I have a variable dictionary that can be nested upto any level.
The keys in any level can be either int or string. But I want to convert all keys and values at all levels into strings. How nested the dictionary will be is variable which makes it a bit complicated.
{
"col1": {
"0": 0,
"1": 8,
"2": {
0: 2,
}
"3": 4,
"4": 5
},
"col2": {
"0": "na",
"1": 1,
"2": "na",
"3": "na",
"4": "na"
},
"col3": {
"0": 1,
"1": 3,
"2": 3,
"3": 6,
"4": 3
},
"col4": {
"0": 5,
"1": "na",
"2": "9",
"3": 9,
"4": "na"
}
}
I am looking for the shortest and quickest function to achieve that. There are other questions like Converting dictionary values in python from str to int in nested dictionary that suggest ways of doing it but none of them deals with the "variable nesting" nature of the dictionary.
Any ideas will be appreciated.
| [
"This is the most straightforward way I can think of doing it:\nimport json\n\ndata = {'col4': {'1': 'na', '0': 5, '3': 9, '2': '9', '4': 'na'}, 'col2': {'1': 1, '0': 'na', '3': 'na', '2': 'na', '4': 'na'}, 'col3': {'1': 3, '0': 1, '3': 6, '2': 3, '4': 3}, 'col1': {'1': 8, '0': 0, '3': 4, '2': {0: 2}, '4': 5}}\nstringified_dict = json.loads(json.dumps(data), parse_int=str, parse_float=str)\n\nHere are some links to the documentation for json loads and parse_int: Python3, Python2\n",
"You could check the dictionary recursively: \ndef iterdict(d):\n for k, v in d.items():\n if isinstance(v, dict):\n iterdict(v)\n else:\n if type(v) == int:\n v = str(v)\n d.update({k: v})\n return d\n\n\n",
"In case it's helpful to anyone, I modified Jose's answer to handle lists as well. The output converts all values to strings. I haven't thoroughly tested it, but I just ran it against around 40k records, and it passed schema validation:\ndef iterdict(d):\n \"\"\"Recursively iterate over dict converting values to strings.\"\"\"\n\n for k, v in d.items():\n if isinstance(v, dict):\n iterdict(v)\n elif isinstance(v, list):\n for x, i in enumerate(v):\n if isinstance(i, (dict, list)):\n iterdict(i)\n else:\n if type(i) != str:\n i = str(i)\n v[x] = i\n else:\n if type(v) != str:\n v = str(v)\n d.update({k: v})\n return d\n\n"
] | [
6,
4,
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0054565160_dictionary_python.txt |
Q:
Is there a way of creating Biglake Tables through Python?
In the documentation I see no reference to BigLake tables. I wonder if there's a way of setting ExternalDataConfiguration to use them.
A:
Found it out: if you provide a connection ID it will be used for setting the table as a BigLake Table (see https://stackoverflow.com/a/73987775/9944075 on how to create the connection)
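A rough sketch of what that might look like with the google-cloud-bigquery client; treat the connection_id attribute (available in recent client versions) and all the resource names below as assumptions to verify against your project:
from google.cloud import bigquery

client = bigquery.Client()

external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://my-bucket/my-data/*.parquet"]  # hypothetical bucket
# supplying a connection is what makes BigQuery treat this as a BigLake table
external_config.connection_id = "my-project.us.my-connection"  # hypothetical connection

table = bigquery.Table("my-project.my_dataset.my_biglake_table")  # hypothetical table ID
table.external_data_configuration = external_config
client.create_table(table)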
| Is there a way of creating Biglake Tables through Python? | In the documentation I see no reference to BigLake tables. I wonder if there's a way of setting ExternalDataConfiguration to use them.
| [
"Found it out: if you provide a connection ID it will be used for setting the table as a BigLake Table (see https://stackoverflow.com/a/73987775/9944075 on how to create the connection)\n"
] | [
0
] | [] | [] | [
"google_bigquery",
"google_cloud_platform",
"python"
] | stackoverflow_0074629305_google_bigquery_google_cloud_platform_python.txt |
Q:
how to send image from url using telethon
I want to know how to send image or media from a URL link,
file = BOT.upload_file('/user/home/photo.jpg')
BOT.send_file(chat , file)
I know that using this method we can send an image from a path, but I want to know if it's possible to send it from a URL link. However, I am trying to run the code on Heroku, so uploading it from the path will not be possible, so if there is a way to send it using a URL link please tell me how to do that.
can anyone help me figure this out please.
A:
you don't have to explicitly upload a file, telethon does it internally, so:
BOT.send_file(chat , '/user/home/photo.jpg')
is enough (unless you're willing to resend something pre-uploaded multiple times)
likewise, you can pass a URL to send_file; Telegram's servers will fetch and send it themselves (note there are file size limits: 5 MB for images, 20 MB for documents)
BOT.send_file(chat , url)
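If the file is larger than those limits, or the URL is not reachable by Telegram's servers, one possible fallback (a sketch, assuming the requests package is available) is to download the bytes yourself and hand a file-like object to send_file:
import io
import requests

data = io.BytesIO(requests.get(url).content)
data.name = 'photo.jpg'  # hypothetical name, helps Telethon pick the right extension
BOT.send_file(chat, data)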
| how to send image from url using telethon | I want to know how to send image or media from a URL link,
file = BOT.upload_file('/user/home/photo.jpg')
BOT.send_file(chat , file)
I know that using this method we can send image from path, but I want to know if its possible to send it from a URL link. but I am trying to run the code on Heruku so uploading it from the patch will not be possible so if there is a way to send it using a URL link please tell me how to do that.
can anyone help me figure this out please.
| [
"\nyou don't have to explicitly upload a file, telethon does it internally, so:\n\nBOT.send_file(chat , '/user/home/photo.jpg')\n\nis enough (unless you're willing to resend something pre-uploaded multiple times)\n\nlikewise, you can pass a URL to send_file, Telegram servers will fetch it and send by itself (note there are limits for file size, 5MB for images, 20 MB for documents)\n\nBOT.send_file(chat , url)\n\n"
] | [
0
] | [] | [] | [
"python",
"telethon"
] | stackoverflow_0074634287_python_telethon.txt |
Q:
Can I use Boto3 to automatically generate visualization from Athena to QuickSight?
Right now using Boto3 to run Python script to automate Athena queries. After getting the output, can I also use Boto3 to run another Python script and have the output populated in a specific dashboard template?
Non-technical, not sure about feasibility. Just need a simple Y/N answer. Thanks!
A:
What exactly do you need QuickSight to do?
If you need to ingest new data so that existing dashboard starts showing new data, you can use CreateIngestion API from here https://docs.aws.amazon.com/quicksight/latest/APIReference/qs-data.html
If you need to automate QuickSight dashboard creation, see https://aws.amazon.com/blogs/aws/new-amazon-quicksight-api-capabilities-to-accelerate-your-bi-transformation/
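For the first case, a minimal boto3 sketch; the account ID, dataset ID, and ingestion ID below are placeholders:
import boto3

quicksight = boto3.client("quicksight")

quicksight.create_ingestion(
    AwsAccountId="123456789012",
    DataSetId="my-dataset-id",
    IngestionId="my-ingestion-2022-11-30",
)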
| Can I use Boto3 to automatically generate visualization from Athena to QuickSight? | Right now using Boto3 to run Python script to automate Athena queries. After getting the output, can I also use Boto3 to run another Python script and have the output populated in a specific dashboard template?
Non-technical, not sure about feasibility. Just need a simply Y/N answer. Thanks!
| [
"What exactly do you need QuickSight to do?\nIf you need to ingest new data so that existing dashboard starts showing new data, you can use CreateIngestion API from here https://docs.aws.amazon.com/quicksight/latest/APIReference/qs-data.html\nIf you need to automate QuickSight dashboard creation, see https://aws.amazon.com/blogs/aws/new-amazon-quicksight-api-capabilities-to-accelerate-your-bi-transformation/\n"
] | [
1
] | [] | [] | [
"amazon_athena",
"amazon_quicksight",
"boto3",
"python",
"visualization"
] | stackoverflow_0074632473_amazon_athena_amazon_quicksight_boto3_python_visualization.txt |
Q:
Pandas subtracting number of days from date
I am trying to create a new column "Starting_date" by subtracting 60 days from "Harvest_date", but I get the same date each time. Can someone point out what I did wrong, please?
Harvest_date
20.12.21
12.01.21
10.03.21
import pandas as pd
from datetime import timedelta
df1 = pd.read_csv (r'C:\Flower_weight.csv')
def subtract_days_from_date(date, days):
subtracted_date = pd.to_datetime(date) - timedelta(days=days)
subtracted_date = subtracted_date.strftime("%Y-%m-%d")
return subtracted_date
df1['Harvest_date'] = pd.to_datetime(df1.Harvest_date)
df1.style.format({"Harvest_date": lambda t: t.strftime("%Y-%m-%d")})
for harvest_date in df1['Harvest_date']:
df1["Starting_date"]=subtract_days_from_date(harvest_date,60)
print(df1["Starting_date"])
Starting_date
2021-10-05
2021-10-05
2021-10-05
A:
You're overwriting the series on each iteration of the last loop
for harvest_date in df1['Harvest_date']:
df1["Starting_date"]=subtract_days_from_date(harvest_date,60)
You can do away with the loop by vectorizing the subtract_days_from_date function.
You could also reference an index with enumerate
np.vectorize
import numpy as np
subtract_days_from_date = np.vectorize(subtract_days_from_date)
df1["Starting_date"]=subtract_days_from_date(df1["Harvest_date"], 60)
enumerate
for idx, harvest_date in enumerate(df1['Harvest_date']):
df1.iloc[idx][ "Starting_date"]=subtract_days_from_date(harvest_date,60)
A:
I am not sure if the use of the loop was necessary here. Perhaps try the following:
df1_dates['Starting_date'] = df1_dates['Harvest_date'].apply(lambda x: pd.to_datetime(x) - timedelta(days=60))
df1_dates['Starting_date'].dt.strftime("%Y-%m-%d")
df1_dates['Starting_date']
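For what it's worth, the loop and the helper function can be dropped entirely. A vectorized sketch, assuming the dates are day-first strings like 20.12.21:
df1['Harvest_date'] = pd.to_datetime(df1['Harvest_date'], format='%d.%m.%y')
df1['Starting_date'] = (df1['Harvest_date'] - pd.Timedelta(days=60)).dt.strftime('%Y-%m-%d')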
| Pandas substracting number of days from date | I am trying to create a new column "Starting_time" by subtracting 60 days out of "Harvest_date" but I get the same date each time. Can someone point out what did I do wrong please?
Harvest_date
20.12.21
12.01.21
10.03.21
import pandas as pd
from datetime import timedelta
df1 = pd.read_csv (r'C:\Flower_weight.csv')
def subtract_days_from_date(date, days):
subtracted_date = pd.to_datetime(date) - timedelta(days=days)
subtracted_date = subtracted_date.strftime("%Y-%m-%d")
return subtracted_date
df1['Harvest_date'] = pd.to_datetime(df1.Harvest_date)
df1.style.format({"Harvest_date": lambda t: t.strftime("%Y-%m-%d")})
for harvest_date in df1['Harvest_date']:
df1["Starting_date"]=subtract_days_from_date(harvest_date,60)
print(df1["Starting_date"])
Starting_date
2021-10-05
2021-10-05
2021-10-05
| [
"You're overwriting the series on each iteration of the last loop\nfor harvest_date in df1['Harvest_date']:\n df1[\"Starting_date\"]=subtract_days_from_date(harvest_date,60)\n\nYou can do away with the loop by vectorizing the subtract_days_from_date function.\nYou could also reference an index with enumerate\nnp.vectorize\nimport numpy as np\n\n\nsubtract_days_from_date = np.vectorize(subtract_days_from_date)\n\ndf1[\"Starting_date\"]=subtract_days_from_date(df1[\"Harvest_date\"], 60)\n\nenumerate\nfor idx, harvest_date in enumerate(df1['Harvest_date']):\n df1.iloc[idx][ \"Starting_date\"]=subtract_days_from_date(harvest_date,60)\n\n",
"I am not sure if the use of the loop was necessary here. Perhaps try the following:\ndf1_dates['Starting_date'] = df1_dates['Harvest_date'].apply(lambda x: pd.to_datetime(x) - timedelta(days=60))\ndf1_dates['Starting_date'].dt.strftime(\"%Y-%m-%d\")\ndf1_dates['Starting_date']\n\n"
] | [
2,
2
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074633999_dataframe_pandas_python.txt |
Q:
Django: Quizapp with Question and Answer Model
I would like to create a Quiz app with Django.
Where the Questions can be stored in a DB and more users can add more questions in Admin.
and each question can have an answer from the user input.
This is a basic version of what I tried so far,
Simple example of My Models:
QuestionModel
ID
question
author
AnswerModel
ID
Answer
question_id
author
So, when I create an AnswerForm():
it shows the form, but the question shows up as a dropdown instead of labels, and it is not creating fields for each question. It just creates one input field and a dropdown for the question.
I know it does that because I have question_id as FK in the Answer Model.
Is there a better way to get this done?
I am new to Django
Here is a screenshot of what I am expecting
A:
I am not great with Django, but I think you can use this structure:
Question Model:
class Question(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
title = models.CharField(max_length=60,)
created_at = models.DateTimeField(auto_now_add=True)
slug = models.SlugField(unique=True, max_length=200)
Answer Model:
class Answer(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
answer = models.TextField()
created_at = models.DateTimeField(auto_now_add=True)
post = models.ForeignKey(Question, on_delete=models.CASCADE)
In your views add:
class My_Answer(LoginRequiredMixin, CreateView):
model = Answer
fields = ['answer']
template_name = 'answer.html'
success_url = reverse_lazy('#Redirecting User To The Dashboard')
def form_valid(self, form):
form.instance.user = self.request.user
form.instance.post_id = self.kwargs['pk']
result = super().form_valid(form)
return result
In your Urls add:
path('question/<int:pk>/answer/', views.My_Answer.as_view(), name='answer'),
Add this to your answer template:
{% load crispy_forms_tags %}
<form method="POST" action="" enctype="multipart/form-data">
{% csrf_token %}
{{ form | crispy }}
<input type="submit" value="submit" class="btn btn-primary">
| Django: Quizapp with Question and Answer Model | I would like to create a Quiz app with Django.
Where the Questions can be stored in a DB and more users can add more questions in Admin.
and each question can have an answer from the user input.
This is a basic version of what I tried so far,
Simple example of My Models:
QuestionModel
ID
question
author
AnswerModel
ID
Answer
question_id
author
So, When I create an AnswerForm():
it shows the form, but the question shows up as a dropdown instead of labels.
and it is not creating fields for each question. It just creates one input field and a dropdown for the question.
I know it does that because I have question_id as FK in the Answer Model.
Is there a better way to get this done?
I am new to Django
Here is a screenshot of what I am expecting
| [
"I am not good in django, but I think you can use these structure:\nQuestion Model:\nclass Question(models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n title = models.CharField(max_length=60,)\n created_at = models.DateTimeField(auto_now_add=True)\n slug = models.SlugField(unique=True, max_length=200)\n\nAnswer Model:\n class Answer(models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n answer = models.TextField() \n created_at = models.DateTimeField(auto_now_add=True)\n post = models.ForeignKey(Question, on_delete=models.CASCADE)\n\nIn your views add:\nclass My_Answer(LoginRequiredMixin, CreateView):\n model = Answer\n fields = ['answer']\n template_name = 'answer.html'\n success_url = reverse_lazy('#Redirecting User To The Dashboard')\n\n def form_valid(self, form):\n form.instance.user = self.request.user\n form.instance.post_id = self.kwargs['pk']\n result = super().form_valid(form)\n return result \n\nIn your Urls add:\n path('question/<int:pk>/answer/', views.My_Answer.as_view(), name='answer'),\n\nAdd this to your answer template:\n{% load crispy_forms_tags %}\n <form method=\"POST\" action=\"\" enctype=\"multipart/form-data\">\n {% csrf_token %} \n {{ form | crispy }}\n <input type=\"submit\" value=\"submit\" class=\"btn btn-primary\"> \n\n"
] | [
0
] | [] | [] | [
"django",
"django_forms",
"django_models",
"python"
] | stackoverflow_0074634170_django_django_forms_django_models_python.txt |
Q:
How to get this info with BS4?
I think soup.findall("dd", {"class": "clearfix"})['idk what goes here'] and then index it and save it could work but im not familiar with what ::before and ::after does to the output here
I only need the info that is in between ::before and ::after
A:
content = [] # create list
for x in soup.find_all("dd", {"class": "clearfix"}): # for each element
    inner = x.decode_contents() # get inner html as a string
    content.append(inner.removeprefix("::before").removesuffix("::after")) # add to content without ::before and ::after

Obviously, this could become a one-liner, but don't do that, the code would become unreadable:
content = [x.decode_contents().removeprefix("::before").removesuffix("::after") for x in soup.find_all("dd", {"class": "clearfix"})]
I didn't test this as you gave the html as a picture and not text, but it should work (note that removeprefix/removesuffix need Python 3.9+).
| How to get this info with BS4? | I think soup.findall("dd", {"class": "clearfix"})['idk what goes here'] and then index it and save it could work but im not familiar with what ::before and ::after does to the output here
I only need the info that is in between ::before and ::after
| [
"content = [] # create list\nfor x in soup.findall(\"dd\", {\"class\": \"clearfix\"}): # for each element\n inner = x.encode_contents() # get inner html\n content.append(inner.removeprefix(\"::before\").removesuffix(\"::after\")) # add to content without ::before and ::after\n\nObviously, could become a oneliner but don't do this, the code would become unreadable:\ncontent = [x.encode_contents().removeprefix(\"::before\").removesuffix(\"::after\") for x in soup.findall(\"dd\", {\"class\": \"clearfix\"})]\n\nI didn't test this as you gave the html as picture and not text, but it should work.\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0074634302_beautifulsoup_python.txt |
Q:
Python: Call static method from subclass
Is it possible to call a static method defined in a superclass from a method in subclass? Something like:
class A:
@staticmethod
def a():
...
class B(A):
def b(self):
A.a()
A.a() doesn't work, neither does B.a(), super.a() or self.a(). Is there a way to do this?
EDIT:
The problem was a stale .pyc file!!!!!!
A:
Works for me - except of course for super().whatever() which only works on Python 3.x. Please explain what you mean by "doesn't work"...
Python 2.7.3 (default, Dec 18 2014, 19:10:20)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo(object):
... @staticmethod
... def foo(what):
... print "Foo.foo(%s)" % what
...
>>> class Bar(Foo):
... def bar(self):
... self.foo("from Bar.bar")
...
>>> b = Bar()
>>> b.bar()
Foo.foo(from Bar.bar)
>>> Foo().foo("aaa")
Foo.foo(aaa)
>>> Foo.foo("aaa")
Foo.foo(aaa)
>>> b.foo("uuu")
Foo.foo(uuu)
A:
super().staticmethod() works for me.
Superclass:
class Employee:
def __init__(self, first, last):
self.first = first
self.last = last
....
@staticmethod
def do_anything(strings):
print(strings)
Subclass:
class Manager(Employee):
....
def show_empl(self):
super().do_anything("do something!")
| Python: Call static method from subclass | Is it possible to call a static method defined in a superclass from a method in subclass? Something like:
class A:
@staticmethod
def a():
...
class B(A):
def b(self):
A.a()
A.a() doesn't work, neither does B.a(), super.a() or self.a(). Is there a way to do this?
EDIT:
The problem was a stale .pyc file!!!!!!
| [
"Works for me - except of course for super().whatever() which only works on Python 3.x. Please explain what you mean by \"doesn't work\"...\nPython 2.7.3 (default, Dec 18 2014, 19:10:20) \n[GCC 4.6.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> class Foo(object):\n... @staticmethod\n... def foo(what):\n... print \"Foo.foo(%s)\" % what\n... \n>>> class Bar(Foo):\n... def bar(self):\n... self.foo(\"from Bar.bar\")\n... \n>>> b = Bar()\n>>> b.bar()\nFoo.foo(from Bar.bar)\n>>> Foo().foo(\"aaa\")\nFoo.foo(aaa)\n>>> Foo.foo(\"aaa\")\nFoo.foo(aaa)\n>>> b.foo(\"uuu\")\nFoo.foo(uuu)\n\n",
"super().staticmethod() works for me.\nSuperclass:\nclass Employee:\n\n def __init__(self, first, last):\n self.first = first\n self.last = last\n\n ....\n\n @staticmethod\n def do_anything(strings):\n print(strings)\n\nSubclass:\nclass Manager(Employee):\n \n ....\n def show_empl(self):\n \n super().do_anything(\"do something!\")\n \n\n"
] | [
0,
0
] | [] | [] | [
"python",
"static_methods"
] | stackoverflow_0028457008_python_static_methods.txt |
Q:
Apply fuzzy string matching of two columns in two Pandas dataframes while preserving a similarity score and output a Pandas DataFrame
I have two data frames that I'm trying to merge, based on a primary & foreign key of company name. One data set has ~50,000 unique company names, the other one has about 5,000. Duplicate company names are possible within each list.
To that end, I've tried to follow along the first solution from Figure out if a business name is very similar to another one - Python. Here's an MWE:
mwe1 = pd.DataFrame({'company_name': ['Deloitte',
'PriceWaterhouseCoopers',
'KPMG',
'Ernst & Young',
'intentionall typo company XYZ'
],
'revenue': [100, 200, 300, 250, 400]
}
)
mwe2 = pd.DataFrame({'salesforce_name': ['Deloite',
'PriceWaterhouseCooper'
],
'CEO': ['John', 'Jane']
}
)
I am trying to get the following code from Figure out if a business name is very similar to another one - Python to work:
# token2frequency is just a word counter of all words in all names
# in the dataset
def sequence_uniqueness(seq, token2frequency):
return sum(1/token2frequency(t)**0.5 for t in seq)
def name_similarity(a, b, token2frequency):
a_tokens = set(a.split())
b_tokens = set(b.split())
a_uniq = sequence_uniqueness(a_tokens)
b_uniq = sequence_uniqueness(b_tokens)
return sequence_uniqueness(a.intersection(b))/(a_uniq * b_uniq) ** 0.5
How do I apply those two functions to produce a similarity score between each possible combination of mwe1 and mwe2, then filter such that to the most probable matches?
For example, I'm looking for something like this (I'm just making up the scores in the similarity_score column:
company_name revenue salesforce_name CEO similarity_score
Deloitte 100 Deloite John 98
PriceWaterhouseCoopers 200 Deloite John 0
KPMG 300 Deloite John 15
Ernst & Young 250 Deloite John 10
intentionall typo company XYZ 400 Deloite John 2
Deloitte 100 PriceWaterhouseCooper Jane 20
PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane 97
KPMG 300 PriceWaterhouseCooper Jane 5
Ernst & Young 250 PriceWaterhouseCooper Jane 7
intentionall typo company XYZ 400 PriceWaterhouseCooper Jane 3
I'm also open to better end-states, if you can think of one. Then, I'd filter that table above to get something like:
company_name revenue salesforce_name CEO similarity_score
Deloitte 100 Deloite John 98
PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane 97
Here's what I've tried:
name_similarity(a = mwe1['company_name'], b = mwe2['salesforce_name'], token2frequency = 10)
AttributeError: 'Series' object has no attribute 'split'
I'm familiar with using lambda functions but not sure how to make it work when iterating through two columns in two Pandas data frames.
A:
Here is a class I wrote using difflib should be close to what you need.
import difflib
import pandas as pd
class FuzzyMerge:
"""
Works like pandas merge except merges on approximate matches.
"""
def __init__(self, **kwargs):
self.left = kwargs.get("left")
self.right = kwargs.get("right")
self.left_on = kwargs.get("left_on")
self.right_on = kwargs.get("right_on")
self.how = kwargs.get("how", "inner")
self.cutoff = kwargs.get("cutoff", 0.8)
def merge(self) -> pd.DataFrame:
temp = self.right.copy()
temp[self.left_on] = [
self.get_closest_match(x, self.left[self.left_on]) for x in temp[self.right_on]
]
df = self.left.merge(temp, on=self.left_on, how=self.how)
df["similarity_percent"] = df.apply(lambda x: self.similarity_score(x[self.left_on], x[self.right_on]), axis=1)
return df
def get_closest_match(self, left: pd.Series, right: pd.Series) -> str or None:
matches = difflib.get_close_matches(left, right, cutoff=self.cutoff)
return matches[0] if matches else None
@staticmethod
def similarity_score(left: pd.Series, right: pd.Series) -> int:
return int(round(difflib.SequenceMatcher(a=left, b=right).ratio(), 2) * 100)
Call it with:
df = FuzzyMerge(left=df1, right=df2, left_on="column from df1", right_on="column from df2", how="inner", cutoff=0.8).merge()
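With the MWE frames from the question, the call would look something like this; names that score below cutoff (a 0 to 1 similarity threshold) get no match and drop out of an inner merge:
df = FuzzyMerge(left=mwe1, right=mwe2, left_on="company_name",
                right_on="salesforce_name", how="inner", cutoff=0.8).merge()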
| Apply fuzzy string matching of two columns in two Pandas dataframes while preserving a similarity score and output a Pandas DataFrame | I have two data frames that I'm trying to merge, based on a primary & foreign key of company name. One data set has ~50,000 unique company names, the other one has about 5,000. Duplicate company names are possible within each list.
To that end, I've tried to follow along the first solution from Figure out if a business name is very similar to another one - Python. Here's an MWE:
mwe1 = pd.DataFrame({'company_name': ['Deloitte',
'PriceWaterhouseCoopers',
'KPMG',
'Ernst & Young',
'intentionall typo company XYZ'
],
'revenue': [100, 200, 300, 250, 400]
}
)
mwe2 = pd.DataFrame({'salesforce_name': ['Deloite',
'PriceWaterhouseCooper'
],
'CEO': ['John', 'Jane']
}
)
I am trying to get the following code from Figure out if a business name is very similar to another one - Python to work:
# token2frequency is just a word counter of all words in all names
# in the dataset
def sequence_uniqueness(seq, token2frequency):
return sum(1/token2frequency(t)**0.5 for t in seq)
def name_similarity(a, b, token2frequency):
a_tokens = set(a.split())
b_tokens = set(b.split())
a_uniq = sequence_uniqueness(a_tokens)
b_uniq = sequence_uniqueness(b_tokens)
return sequence_uniqueness(a.intersection(b))/(a_uniq * b_uniq) ** 0.5
How do I apply those two functions to produce a similarity score between each possible combination of mwe1 and mwe2, then filter such that to the most probable matches?
For example, I'm looking for something like this (I'm just making up the scores in the similarity_score column:
company_name revenue salesforce_name CEO similarity_score
Deloitte 100 Deloite John 98
PriceWaterhouseCoopers 200 Deloite John 0
KPMG 300 Deloite John 15
Ernst & Young 250 Deloite John 10
intentionall typo company XYZ 400 Deloite John 2
Deloitte 100 PriceWaterhouseCooper Jane 20
PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane 97
KPMG 300 PriceWaterhouseCooper Jane 5
Ernst & Young 250 PriceWaterhouseCooper Jane 7
intentionall typo company XYZ 400 PriceWaterhouseCooper Jane 3
I'm also open to better end-states, if you can think of one. Then, I'd filter that table above to get something like:
company_name revenue salesforce_name CEO similarity_score
Deloitte 100 Deloite John 98
PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane 97
Here's what I've tried:
name_similarity(a = mwe1['company_name'], b = mwe2['salesforce_name'], token2frequency = 10)
AttributeError: 'Series' object has no attribute 'split'
I'm familiar with using lambda functions but not sure how to make it work when iterating through two columns in two Pandas data frames.
| [
"Here is a class I wrote using difflib should be close to what you need.\nimport difflib\n\nimport pandas as pd\n\n\nclass FuzzyMerge:\n \"\"\"\n Works like pandas merge except merges on approximate matches.\n \"\"\"\n def __init__(self, **kwargs):\n self.left = kwargs.get(\"left\")\n self.right = kwargs.get(\"right\")\n self.left_on = kwargs.get(\"left_on\")\n self.right_on = kwargs.get(\"right_on\")\n self.how = kwargs.get(\"how\", \"inner\")\n self.cutoff = kwargs.get(\"cutoff\", 0.8)\n\n def merge(self) -> pd.DataFrame:\n temp = self.right.copy()\n temp[self.left_on] = [\n self.get_closest_match(x, self.left[self.left_on]) for x in temp[self.right_on]\n ]\n\n df = self.left.merge(temp, on=self.left_on, how=self.how)\n df[\"similarity_percent\"] = df.apply(lambda x: self.similarity_score(x[self.left_on], x[self.right_on]), axis=1)\n\n return df\n\n def get_closest_match(self, left: pd.Series, right: pd.Series) -> str or None:\n matches = difflib.get_close_matches(left, right, cutoff=self.cutoff)\n\n return matches[0] if matches else None\n\n @staticmethod\n def similarity_score(left: pd.Series, right: pd.Series) -> int:\n return int(round(difflib.SequenceMatcher(a=left, b=right).ratio(), 2) * 100)\n\nCall it with:\ndf = FuzzyMerge(left=df1, right=df2, left_on=\"column from df1\", right_on=\"column from df2\", how=\"inner\", cutoff=0.8).merge()\n\n"
] | [
0
] | [] | [] | [
"fuzzywuzzy",
"pandas",
"python",
"python_3.x",
"string_matching"
] | stackoverflow_0074633110_fuzzywuzzy_pandas_python_python_3.x_string_matching.txt |
Q:
Replace values in Pandas Dataframe using another Dataframe as a lookup table
I'm looking to replace values in a Dataframe with the values in a second Dataframe by matching the values in the first Dataframe with the columns from the second Dataframe.
Example:
import numpy as np
import pandas as pd
dt_index = pd.to_datetime(['2003-05-01', '2003-05-02', '2003-05-03', '2003-05-04'])
df = pd.DataFrame({'A':[1,1,3,12], 'B':[12,1,3,3], 'C':[3,12,12,1]}, index = dt_index)
df2 = pd.DataFrame({1:[1.4,4.2,1.3,5.6], 12:[2.3,7.3,9.5,0.4], 3:[8.8,0.1,8.7,2.4], 4:[9.6,9.8,5.5,1.8]}, index = dt_index)
df =
A B C
2003-05-01 1 12 3
2003-05-02 1 1 12
2003-05-03 3 3 12
2003-05-04 12 3 1
df2 =
1 12 3 4
2003-05-01 1.4 2.3 8.8 9.6
2003-05-02 4.2 7.3 0.1 9.8
2003-05-03 1.3 9.5 8.7 5.5
2003-05-04 5.6 0.4 2.4 1.8
Expected output:
expect = pd.DataFrame({'A':[1.4,4.2,8.7,0.4], 'B':[2.3,4.2,8.7,2.4], 'C':[8.8,7.3,9.5,5.6]}, index = dt_index)
expect =
A B C
2003-05-01 1.4 2.3 8.8
2003-05-02 4.2 4.2 7.3
2003-05-03 8.7 8.7 9.5
2003-05-04 0.4 2.4 5.6
Attempt:
X = df.copy()
for i in np.unique(df):
X.mask(df == i, df2[i], axis=0, inplace=True)
My attempt seems to work but I'm not sure if it has any pitfalls and how it would scale as the sizes of the Dataframe increase.
Are there better or faster solutions?
EDIT:
After cottontail's helpful answer, I realised I've made an oversimplification in my example. The values in df and columns of df and df2 cannot be assumed to be sequential.
I've now modified the example to reflect that.
A:
One approach is to use stack() to reshape df2 into a Series and reindex() it using the values in df; reshape back into original shape using unstack().
tmp = df2.stack().reindex(df.stack().droplevel(-1).items())
tmp.index = pd.MultiIndex.from_arrays([tmp.index.get_level_values(0), df.columns.tolist()*len(df)])
df = tmp.unstack()
Another approach is to iteratively create a dummy dataframe shaped like df2, multiply it by df2, reduce it into a Series (using sum()) and assign it to an empty dataframe shaped like df.
X = pd.DataFrame().reindex_like(df)
df['dummy'] = 1
for c in X:
X[c] = (
df.groupby([df.index, c])['dummy'].size()
.unstack(fill_value=0)
.reindex(df2.columns, axis=1, fill_value=0)
.mul(df2)
.sum(1)
)
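Another option, starting from the original df and df2 from the question, is plain NumPy fancy indexing: look up each value's column position in df2 and pull the matching row. A sketch, assuming every value in df actually appears as a column of df2 (get_indexer returns -1 otherwise):
import numpy as np

vals = df2.to_numpy()
rows = np.arange(len(df))

X = df.copy()
for c in df.columns:
    X[c] = vals[rows, df2.columns.get_indexer(df[c])]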
| Replace values in Pandas Dataframe using another Dataframe as a lookup table | I'm looking to replace values in a Dataframe with the values in a second Dataframe by matching the values in the first Dataframe with the columns from the second Dataframe.
Example:
import numpy as np
import pandas as pd
dt_index = pd.to_datetime(['2003-05-01', '2003-05-02', '2003-05-03', '2003-05-04'])
df = pd.DataFrame({'A':[1,1,3,12], 'B':[12,1,3,3], 'C':[3,12,12,1]}, index = dt_index)
df2 = pd.DataFrame({1:[1.4,4.2,1.3,5.6], 12:[2.3,7.3,9.5,0.4], 3:[8.8,0.1,8.7,2.4], 4:[9.6,9.8,5.5,1.8]}, index = dt_index)
df =
A B C
2003-05-01 1 12 3
2003-05-02 1 1 12
2003-05-03 3 3 12
2003-05-04 12 3 1
df2 =
1 12 3 4
2003-05-01 1.4 2.3 8.8 9.6
2003-05-02 4.2 7.3 0.1 9.8
2003-05-03 1.3 9.5 8.7 5.5
2003-05-04 5.6 0.4 2.4 1.8
Expected output:
expect = pd.DataFrame({'A':[1.4,4.2,8.7,0.4], 'B':[2.3,4.2,8.7,2.4], 'C':[8.8,7.3,9.5,5.6]}, index = dt_index)
expect =
A B C
2003-05-01 1.4 2.3 8.8
2003-05-02 4.2 4.2 7.3
2003-05-03 8.7 8.7 9.5
2003-05-04 0.4 2.4 5.6
Attempt:
X = df.copy()
for i in np.unique(df):
X.mask(df == i, df2[i], axis=0, inplace=True)
My attempt seems to work but I'm not sure if it has any pitfalls and how it would scale as the sizes of the Dataframe increase.
Are there better or faster solutions?
EDIT:
After cottontail's helpful answer, I realised I've made an oversimplification in my example. The values in df and columns of df and df2 cannot be assumed to be sequential.
I've now modified the example to reflect that.
| [
"One approach is to use stack() to reshape df2 into a Series and reindex() it using the values in df; reshape back into original shape using unstack().\ntmp = df2.stack().reindex(df.stack().droplevel(-1).items())\ntmp.index = pd.MultiIndex.from_arrays([tmp.index.get_level_values(0), df.columns.tolist()*len(df)])\ndf = tmp.unstack()\n\n\nAnother approach is to iteratively create a dummy dataframe shaped like df2, multiply it by df2, reduce it into a Series (using sum()) and assign it to an empty dataframe shaped like df.\nX = pd.DataFrame().reindex_like(df)\ndf['dummy'] = 1\n\nfor c in X:\n X[c] = (\n df.groupby([df.index, c])['dummy'].size()\n .unstack(fill_value=0)\n .reindex(df2.columns, axis=1, fill_value=0)\n .mul(df2)\n .sum(1)\n )\n\n\n"
] | [
1
] | [] | [] | [
"arrays",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074634131_arrays_dataframe_pandas_python.txt |
Q:
SNMP Simulator for Testing PySNMP?
I am looking for a way to test PySNMP scripts, since it seems like demo.snmplabs.com and snmpsim.try.thola.io are down - at least I can't get a response with the following example script from the PySNMP docs. Are there any other hosts I could try?
from pysnmp.hlapi import *
for (errorIndication,
errorStatus,
errorIndex,
varBinds) in nextCmd(SnmpEngine(),
CommunityData('public'),
UdpTransportTarget(('snmpsim.try.thola.io', 161)),
ContextData(),
ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),
lookupMib=False):
if errorIndication:
print(errorIndication)
break
elif errorStatus:
print('%s at %s' % (errorStatus.prettyPrint(),
errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
break
else:
print("hello")
for varBind in varBinds:
print(' = '.join([x.prettyPrint() for x in varBind]))
Response: No SNMP response received before timeout
EDIT: I have tried snmp.live.gambitcommunications.com, demo.snmplabs.com, and snmpsim.try.thola.io so at this point I feel like I'm missing something. Any ideas?
A:
While there have been many attempts to fill the gaps, you will find demo.pysnmp.com (which I run) a more reliable option, as I plan to take over the whole ecosystem, https://github.com/etingof/pysnmp/issues/429
| SNMP Simulator for Testing PySNMP? | I am looking for a way to test PySNMP scripts, since it seems like demo.snmplabs.com and snmpsim.try.thola.io are down - at least I can't get a response with the following example script from the PySNMP docs. Are there any other hosts I could try?
from pysnmp.hlapi import *
for (errorIndication,
errorStatus,
errorIndex,
varBinds) in nextCmd(SnmpEngine(),
CommunityData('public'),
UdpTransportTarget(('snmpsim.try.thola.io', 161)),
ContextData(),
ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),
lookupMib=False):
if errorIndication:
print(errorIndication)
break
elif errorStatus:
print('%s at %s' % (errorStatus.prettyPrint(),
errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
break
else:
print("hello")
for varBind in varBinds:
print(' = '.join([x.prettyPrint() for x in varBind]))
Response: No SNMP response received before timeout
EDIT: I have tried snmp.live.gambitcommunications.com, demo.snmplabs.com, and snmpsim.try.thola.io so at this point I feel like I'm missing something. Any ideas?
| [
"While there were so many attempts to fill the gaps, you will find demo.pysnmp.com a more reliable option from me, as I plan to take over the whole ecosystem, https://github.com/etingof/pysnmp/issues/429\n"
] | [
0
] | [] | [] | [
"pysnmp",
"python",
"snmp"
] | stackoverflow_0066771211_pysnmp_python_snmp.txt |
Q:
RE: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.plt.show() in PyCharm
Previously Found here
The answers given are great, however they only address the problem in the form of Python/Linux Terminal commands. i.e., sudo install ....
What about when I am in a IDE such as PyCharm? I could use the Python Console to make the necessary changes, but there seems to be more straight forward ways to do simply change the backend.
A:
I found examples and answers.
Is there some reason I should not just use the examples here? For example:
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
import numpy as np
from PIL import Image
The simplest answer to this question would be "There is no downside to doing it that way."
Thanks.
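If the goal is simply to get plt.show() working inside PyCharm without running terminal commands, switching the backend in code is usually enough; a sketch, assuming a GUI toolkit such as Tk is installed:
import matplotlib
matplotlib.use("TkAgg")  # must run before pyplot is imported
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])
plt.show()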
| RE: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.plt.show() in PyCharm | Previously Found here
The answers given are great; however, they only address the problem in the form of Python/Linux terminal commands, i.e., sudo install ....
What about when I am in an IDE such as PyCharm? I could use the Python Console to make the necessary changes, but there seem to be more straightforward ways to simply change the backend.
| [
"I found examples and answers\nIs there some reason I should not just use the examples here;\nfor example:\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg\n\nfrom matplotlib.figure import Figure\n\nimport numpy as np\n\nfrom PIL import Image\n\nThe simplest answer to this question would be \"There is no downside to doing it that way.\"\nThanks.\n"
] | [
0
] | [] | [] | [
"matplotlib",
"pycharm",
"python"
] | stackoverflow_0074634590_matplotlib_pycharm_python.txt |
Q:
How do I verify an SSL certificate in python?
I need to verify that a certificate was signed by my custom CA. Using OpenSSL command-line utilities this is easy to do:
# Custom CA file: ca-cert.pem
# Cert signed by above CA: bob.cert
$ openssl verify -CAfile test-ca-cert.pem bob.cert
bob.cert: OK
But I need to do the same thing in Python, and I really don't want to call out to command-line utilities. As far as I'm aware, M2Crypto is the "most complete" python wrapper for OpenSSL, but I can't figure out how to accomplish what the command-line utility does!
Referencing this question for how to accomplish this same task in C code, I've been able to get about half-way. The variable names I chose are the same ones used in the source code for the openssl verify command-line utility, see openssl-xxx/apps/verify.c.
import M2Crypto as m2
# Load the certificates
cacert = m2.X509.load_cert('test-ca-cert.pem') # Create cert object from CA cert file
bobcert = m2.X509.load_cert('bob.cert') # Create cert object from Bob's cert file
cert_ctx = m2.X509.X509_Store() # Step 1 from referenced C code steps
csc = m2.X509.X509_Store_Context(cert_ctx) # Step 2 & 5
cert_ctx.add_cert(cacert) # Step 3
cert_ctx.add_cert(bobcert) # ditto
# Skip step 4 (no CRLs to add)
# Step 5 is combined with step 2...I think. (X509_STORE_CTX_init: Python creates and
# initialises an object in the same step)
# Skip step 6? (can't find anything corresponding to
# X509_STORE_CTX_set_purpose, not sure if we need to anyway???)
#
# It all falls apart at this point, as steps 7 and 8 don't have any corresponding
# functions in M2Crypto -- I even grepped the entire source code of M2Crypto, and
# neither of the following functions are present in it:
# Step 7: X509_STORE_CTX_set_cert - Tell the context which certificate to validate.
# Step 8: X509_verify_cert - Finally, validate it
So I'm halfway there, but I can't seem to actually get the validation done! Am I missing something? Is there some other function I should be using from M2Crypto? Should I be looking for a completely different python wrapper of OpenSSL? How can I accomplish this task in python!?!?
Note that I'm using certificates to encrypt/decrypt FILES, so I'm not interested in using the SSL-connection-based peer certificate verification (which has already been answered), because I don't have any SSL connections going.
A:
You can't do this with plain M2Crypto, since it does not wrap some of the required functions. Good news is if you have SWIG installed you can wrap those yourself and use with M2Crypto code. I've made a module with some extra functions for myself some time ago, and decided to publish it now, since it does this kind of validation. You can check it here: https://github.com/abbot/m2ext. This is an example how to validate a certificate using this module:
import sys
from m2ext import SSL
from M2Crypto import X509
print "Validating certificate %s using CApath %s" % (sys.argv[1], sys.argv[2])
cert = X509.load_cert(sys.argv[1])
ctx = SSL.Context()
ctx.load_verify_locations(capath=sys.argv[2])
if ctx.validate_certificate(cert):
print "valid"
else:
print "invalid"
Unfortunately M2Crypto development seems to be stagnant (no closed issues in bug tracker for the last two years) and the maintainer was ignoring my bugs and emails with these and some other patches...
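For reference, the same offline check can also be written with pyOpenSSL, which gained X509Store/X509StoreContext wrappers after this question was asked. This is only a sketch (Python 3 syntax), assuming both files are PEM-encoded and reusing the file names from the question:
from OpenSSL import crypto

with open("test-ca-cert.pem", "rb") as f:
    ca_cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
with open("bob.cert", "rb") as f:
    bob_cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())

store = crypto.X509Store()
store.add_cert(ca_cert)          # trust only the custom CA

try:
    crypto.X509StoreContext(store, bob_cert).verify_certificate()
    print("bob.cert: OK")
except crypto.X509StoreContextError as e:
    print("bob.cert: verification failed:", e)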
A:
Use this command:
/Applications/Python\ 3.9/Install\ Certificates.command
Do not alter the spaces; just be sure to replace 3.9 with your version of Python.
| How do I verify an SSL certificate in python? | I need to verify that a certificate was signed by my custom CA. Using OpenSSL command-line utilities this is easy to do:
# Custom CA file: ca-cert.pem
# Cert signed by above CA: bob.cert
$ openssl verify -CAfile test-ca-cert.pem bob.cert
bob.cert: OK
But I need to do the same thing in Python, and I really don't want to call out to command-line utilities. As far as I'm aware, M2Crypto is the "most complete" python wrapper for OpenSSL, but I can't figure out how to accomplish what the command-line utility does!
Referencing this question for how to accomplish this same task in C code, I've been able to get about half-way. The variable names I chose are the same ones used in the source code for the openssl verify command-line utility, see openssl-xxx/apps/verify.c.
import M2Crypto as m2
# Load the certificates
cacert = m2.X509.load_cert('test-ca-cert.pem') # Create cert object from CA cert file
bobcert = m2.X509.load_cert('bob.cert') # Create cert object from Bob's cert file
cert_ctx = m2.X509.X509_Store() # Step 1 from referenced C code steps
csc = m2.X509.X509_Store_Context(cert_ctx) # Step 2 & 5
cert_ctx.add_cert(cacert) # Step 3
cert_ctx.add_cert(bobcert) # ditto
# Skip step 4 (no CRLs to add)
# Step 5 is combined with step 2...I think. (X509_STORE_CTX_init: Python creates and
# initialises an object in the same step)
# Skip step 6? (can't find anything corresponding to
# X509_STORE_CTX_set_purpose, not sure if we need to anyway???)
#
# It all falls apart at this point, as steps 7 and 8 don't have any corresponding
# functions in M2Crypto -- I even grepped the entire source code of M2Crypto, and
# neither of the following functions are present in it:
# Step 7: X509_STORE_CTX_set_cert - Tell the context which certificate to validate.
# Step 8: X509_verify_cert - Finally, validate it
So I'm halfway there, but I can't seem to actually get the validation done! Am I missing something? Is there some other function I should be using from M2Crypto? Should I be looking for a completely different python wrapper of OpenSSL? How can I accomplish this task in python!?!?
Note that I'm using certificates to encrypt/decrypt FILES, so I'm not interested in using the SSL-connection-based peer certificate verification (which has already been answered), because I don't have any SSL connections going.
| [
"You can't do this with plain M2Crypto, since it does not wrap some of the required functions. Good news is if you have SWIG installed you can wrap those yourself and use with M2Crypto code. I've made a module with some extra functions for myself some time ago, and decided to publish it now, since it does this kind of validation. You can check it here: https://github.com/abbot/m2ext. This is an example how to validate a certificate using this module:\nimport sys\nfrom m2ext import SSL\nfrom M2Crypto import X509\n\nprint \"Validating certificate %s using CApath %s\" % (sys.argv[1], sys.argv[2])\ncert = X509.load_cert(sys.argv[1])\nctx = SSL.Context()\nctx.load_verify_locations(capath=sys.argv[2])\nif ctx.validate_certificate(cert):\n print \"valid\"\nelse:\n print \"invalid\"\n\nUnfortunately M2Crypto development seems to be stagnant (no closed issues in bug tracker for the last two years) and the maintainer was ignoring my bugs and emails with these and some other patches...\n",
"Use this command:\n/Applications/Python\\ 3.9/Install\\ Certificates.command\ndo not alter the spaces, just be sure to replace with your version of python\n"
] | [
5,
0
] | [
"You can use the unfortunately undocumented X509.verify method to check whether the certificate was signed with the CA's private key. As this calls OpenSSL's x509_verify in the background, I'm sure this also checks all parameters (like expiration) correctly:\nfrom M2Crypto X509\n\ncert = X509.load_cert(\"certificate-filename\")\n\ncaCertificate = X509.load_cert(\"trusted-ca-filename\")\ncaPublic = caCertificate.get_pubkey()\n\nif cert.verify(caPublic) == 1:\n # Certificate is okay!\nelse:\n # not okay\n\n",
"Like you said,\nOpenSSL requires connection\nM2Crypto doesn't have good verification\nHow about this ingenious idea:\nimport os \nos.system('openssl verify -CAfile ../ca-cert.pem bob.cert')\n\nIts ugly, but it works!\n"
] | [
-1,
-3
] | [
"m2crypto",
"openssl",
"python",
"ssl",
"x509certificate"
] | stackoverflow_0004403012_m2crypto_openssl_python_ssl_x509certificate.txt |
Q:
How to type hint a generic numpy array?
Is there any way to type a Numpy array as generic?
I'm currently working with Numpy 1.23.5 and Python 3.10, and I can't type hint the following example.
import numpy as np
import numpy.typing as npt
E = TypeVar("E") # Should be bounded to a numpy type
def double_arr(arr: npt.NDArray[E]) -> npt.NDArray[E]:
return arr * 2
What I expect
arr = np.array([1, 2, 3], dtype=np.int8)
double_arr(arr) # npt.NDAarray[np.int8]
arr = np.array([1, 2.3, 3], dtype=np.float32)
double_arr(arr) # npt.NDAarray[np.float32]
But I end up with the following error
arr: npt.NDArray[E]
^^^
Could not specialize type "NDArray[ScalarType@NDArray]"
Type "E@double_arr" cannot be assigned to type "generic"
"object*" is incompatible with "generic"
If I bind E to specific numpy datatypes (np.int8, np.uint8, ...), the type checker fails to evaluate the multiplication because of the multiple possible data types.
A:
Looking at the source, it seems the generic type variable used to parameterize numpy.dtype of numpy.typing.NDArray is bounded by numpy.generic (and declared covariant). Thus any type argument to NDArray must be a subtype of numpy.generic, whereas your type variable is unbounded. This should work:
from typing import TypeVar
import numpy as np
from numpy.typing import NDArray
E = TypeVar("E", bound=np.generic, covariant=True)
def double_arr(arr: NDArray[E]) -> NDArray[E]:
return arr * 2
But there is another problem, which I believe lies in insufficient numpy stubs. An example of it is showcased in this issue. The overloaded operand (magic) methods like __mul__ somehow mangle the types. I just gave the code a cursory look right now, so I don't know what is missing. But mypy will still complain about the last line in that code:
error: Returning Any from function declared to return "ndarray[Any, dtype[E]]" [no-any-return]
error: Unsupported operand types for * ("ndarray[Any, dtype[E]]" and "int") [operator]
The workaround right now is to use the functions instead of the operands (via the dunder methods). In this case using numpy.multiply instead of * solves the issue:
from typing import TypeVar
import numpy as np
from numpy.typing import NDArray
E = TypeVar("E", bound=np.generic, covariant=True)
def double_arr(arr: NDArray[E]) -> NDArray[E]:
return np.multiply(arr, 2)
a = np.array([1, 2, 3], dtype=np.int8)
reveal_type(double_arr(a))
No more mypy complaints and the type is revealed as follows:
numpy.ndarray[Any, numpy.dtype[numpy.signedinteger[numpy._typing._8Bit]]]
It's worth keeping an eye on that operand issue and maybe even report the specific error of Unsupported operand types for * separately. I haven't found that in the issue tracker yet.
PS: Alternatively, you could use the * operator and add a specific type: ignore. That way you'll notice, if/once the annotation error is eventually fixed by numpy because mypy complains about unused ignore-directives in strict mode.
def double_arr(arr: NDArray[E]) -> NDArray[E]:
return arr * 2 # type: ignore[operator,no-any-return]
| How to type hint a generic numpy array? | Is there any way to type a Numpy array as generic?
I'm currently working with Numpy 1.23.5 and Python 3.10, and I can't type hint the following example.
import numpy as np
import numpy.typing as npt
E = TypeVar("E") # Should be bounded to a numpy type
def double_arr(arr: npt.NDArray[E]) -> npt.NDArray[E]:
return arr * 2
What I expect
arr = np.array([1, 2, 3], dtype=np.int8)
double_arr(arr) # npt.NDAarray[np.int8]
arr = np.array([1, 2.3, 3], dtype=np.float32)
double_arr(arr) # npt.NDAarray[np.float32]
But I end up with the following error
arr: npt.NDArray[E]
^^^
Could not specialize type "NDArray[ScalarType@NDArray]"
Type "E@double_arr" cannot be assigned to type "generic"
"object*" is incompatible with "generic"
If I bind E to specific numpy datatypes (np.int8, np.uint8, ...), the type checker fails to evaluate the multiplication because of the multiple possible data types.
| [
"Looking at the source, it seems the generic type variable used to parameterize numpy.dtype of numpy.typing.NDArray is bounded by numpy.generic (and declared covariant). Thus any type argument to NDArray must be a subtype of numpy.generic, whereas your type variable is unbounded. This should work:\nfrom typing import TypeVar\n\nimport numpy as np\nfrom numpy.typing import NDArray\n\n\nE = TypeVar(\"E\", bound=np.generic, covariant=True)\n\n\ndef double_arr(arr: NDArray[E]) -> NDArray[E]:\n return arr * 2\n\nBut there is another problem, which I believe lies in insufficient numpy stubs. An example of it is showcased in this issue. The overloaded operand (magic) methods like __mul__ somehow mangle the types. I just gave the code a cursory look right now, so I don't know what is missing. But mypy will still complain about the last line in that code:\n\nerror: Returning Any from function declared to return \"ndarray[Any, dtype[E]]\" [no-any-return]\nerror: Unsupported operand types for * (\"ndarray[Any, dtype[E]]\" and \"int\") [operator]\n\nThe workaround right now is to use the functions instead of the operands (via the dunder methods). In this case using numpy.multiply instead of * solves the issue:\nfrom typing import TypeVar\n\nimport numpy as np\nfrom numpy.typing import NDArray\n\n\nE = TypeVar(\"E\", bound=np.generic, covariant=True)\n\n\ndef double_arr(arr: NDArray[E]) -> NDArray[E]:\n return np.multiply(arr, 2)\n\n\na = np.array([1, 2, 3], dtype=np.int8)\nreveal_type(double_arr(a))\n\nNo more mypy complaints and the type is revealed as follows:\nnumpy.ndarray[Any, numpy.dtype[numpy.signedinteger[numpy._typing._8Bit]]]\n\nIt's worth keeping an eye on that operand issue and maybe even report the specific error of Unsupported operand types for * separately. I haven't found that in the issue tracker yet.\n\nPS: Alternatively, you could use the * operator and add a specific type: ignore. That way you'll notice, if/once the annotation error is eventually fixed by numpy because mypy complains about unused ignore-directives in strict mode.\ndef double_arr(arr: NDArray[E]) -> NDArray[E]:\n return arr * 2 # type: ignore[operator,no-any-return]\n\n"
] | [
1
] | [] | [] | [
"mypy",
"numpy",
"python",
"python_typing",
"type_hinting"
] | stackoverflow_0074633074_mypy_numpy_python_python_typing_type_hinting.txt |
Q:
Execute task at specific times django
I am in the process of writing my own task app using Django and would like a few specific functions to be executed every day at a certain time (updating tasks, checking due dates, etc.). Is there a way to have Django run functions on a regular basis or how do I go about this in general?
Does it make sense to write an extra program with an infinite loop for this or are there better ways?
A:
Celery is a good option here:
First steps with Django
Periodic Tasks
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'
With celery you can define periodic tasks at any given interval. Celery workers will then pick up those tasks when needed. You will need to run something like RabbitMQ or Redis to support the celery workers.
The alternative, simpler way is to add an entry to your urls.py that catches any URL you don't otherwise use, and use that as a prompt to check your database as to whether another task is due. This leverages the fact that your website will be hit by a lot of bot traffic. The timing isn't entirely reliable, but it doesn't require any extra setup.
A:
you can use django-cronjobs or maybe Schedule your job with schedule library.
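For illustration, a minimal sketch of the schedule-library approach: it needs its own long-running process (for example a custom Django management command), and the function name here is a placeholder for your own code:
import time
import schedule

def check_due_dates():
    # placeholder: query your Django models and update tasks here
    pass

schedule.every().day.at("06:00").do(check_due_dates)

while True:
    schedule.run_pending()
    time.sleep(60)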
| Execute task at specific times django | I am in the process of writing my own task app using Django and would like a few specific functions to be executed every day at a certain time (updating tasks, checking due dates, etc.). Is there a way to have Django run functions on a regular basis or how do I go about this in general?
Does it make sense to write an extra program with an infinite loop for this or are there better ways?
| [
"Celery is a good option here:\n\nFirst steps with Django\n\nPeriodic Tasks\n\napp.conf.beat_schedule = {\n'add-every-30-seconds': {\n 'task': 'tasks.add',\n 'schedule': 30.0,\n 'args': (16, 16)\n },\n}\napp.conf.timezone = 'UTC'\n\n\n\n\nWith celery you can define periodic tasks at any given interval. Celery workers will then pick up those tasks when needed. You will need to run something like RabbitMQ or Redis to support the celery workers.\nThe alternative, simpler, way is to add an entry to your urls.py that catches any url you don't otherwise use, and use that as a prompt to check your database as to whether another task is due. This leverages the fact that your website will be hit by a lot of bot traffic. The timing isn't entirely reliable but it doesn't require any extra set up.\n",
"you can use django-cronjobs or maybe Schedule your job with schedule library.\n"
] | [
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074634127_django_python.txt |
Q:
Getting error when sending request to a website using Scrapy shell
I was learning Scrapy framework. I tried to use scrapy shell. There I was trying to fetch response from "https://quotes.toscrape.com/". The commands are below-
python -m scrapy shell
Inside the shell-
>> from scrapy import Request
>> req = Request("https://quotes.toscrape.com/")
>> fetch(req)
Then I found the error like this-
PS D:\Projects\scrapyLearn\introSpider\introSpider> python -m scrapy shell
2022-11-30 15:04:52 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: introSpider)
2022-11-30 15:04:52 [scrapy.utils.log] INFO: Versions: lxml 4.9.0.0, libxml2 2.9.10, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.0, Twisted 22.10.0, Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.4, Platform Windows-10-10.0.22000-SP0
2022-11-30 15:04:52 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'introSpider',
'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
'LOGSTATS_INTERVAL': 0,
'NEWSPIDER_MODULE': 'introSpider.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['introSpider.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2022-11-30 15:04:52 [asyncio] DEBUG: Using selector: SelectSelector
2022-11-30 15:04:52 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-11-30 15:04:52 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop2022-11-30 15:04:52 [scrapy.extensions.telnet] INFO: Telnet Password: 9ec5c326bbb22c54
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole']
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-11-30 15:04:52 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x000002601B1B48D0>
[s] item {}
[s] settings <scrapy.settings.Settings object at 0x000002601B3EC550>
[s] Useful shortcuts:
[s] fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s] fetch(req) Fetch a scrapy.Request and update local objects
[s] shelp() Shell help (print this help)
[s] view(response) View response in a browser
>>> from scrapy import Request
>>> req = Request("https://quotes.toscrape.com/")
>>> fetch(req)
2022-11-30 15:05:46 [scrapy.core.engine] INFO: Spider opened
2022-11-30 15:05:47 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://quotes.toscrape.com/robots.txt> (referer: None)
2022-11-30 15:05:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://quotes.toscrape.com/> (referer: None)
>>> 2022-11-30 15:05:47 [scrapy.core.scraper] ERROR: Spider error processing <GET https://quotes.toscrape.com/> (referer: None)
Traceback (most recent call last):
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\twisted\internet\defer.py", line 892, in _runCallbacks
current.result = callback( # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\scrapy\utils\defer.py", line 285, in f
return deferred_from_coro(coro_f(*coro_args, **coro_kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\scrapy\utils\defer.py", line 272, in deferred_from_coro
event_loop = get_asyncio_event_loop_policy().get_event_loop()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py", line 677, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1 (start)'.
2022-11-30 15:05:47 [py.warnings] WARNING: C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\twisted\internet\defer.py:892: RuntimeWarning: coroutine 'SpiderMiddlewareManager.scrape_response.<locals>.process_callback_output' was never awaited
current.result = callback( # type: ignore[misc]
And the shell is still running. I don't know what the error is, or how to fix it.
I was just trying to get the response from "https://quotes.toscrape.com/" website.
A:
If you are using windows. This is caused by a bug.
Here is the github issue.
This has absolutely nothing to do with the robots.txt file.
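Until a fixed Scrapy release is available, one workaround that is sometimes suggested (this is an assumption, not an official fix) is to stop using the asyncio reactor so the shell falls back to Twisted's default reactor. In settings.py:
# settings.py — possible workaround (assumption): comment out the asyncio reactor
# TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
Note this changes which Twisted reactor Scrapy uses, so it may not be appropriate if other parts of your project rely on asyncio.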
A:
I recreated the same steps and had no problem getting the page. I would recommend you to change this setting in the settings.py:
ROBOTSTXT_OBEY = False, because as you can see in the logs, scrapy receives a 404 error when making a first request to https://quotes.toscrape.com/robots.txt, which doesn't exist.
I would also recommend that you use fetch directly with the url as an argument, example: fetch("https://quotes.toscrape.com/")
| Getting error when sending request to a website using Scrapy shell | I was learning Scrapy framework. I tried to use scrapy shell. There I was trying to fetch response from "https://quotes.toscrape.com/". The commands are below-
python -m scrapy shell
Inside the shell-
>> from scrapy import Request
>> req = Request("https://quotes.toscrape.com/")
>> fetch(req)
Then I found the error like this-
PS D:\Projects\scrapyLearn\introSpider\introSpider> python -m scrapy shell
2022-11-30 15:04:52 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: introSpider)
2022-11-30 15:04:52 [scrapy.utils.log] INFO: Versions: lxml 4.9.0.0, libxml2 2.9.10, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.0, Twisted 22.10.0, Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.4, Platform Windows-10-10.0.22000-SP0
2022-11-30 15:04:52 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'introSpider',
'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
'LOGSTATS_INTERVAL': 0,
'NEWSPIDER_MODULE': 'introSpider.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['introSpider.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2022-11-30 15:04:52 [asyncio] DEBUG: Using selector: SelectSelector
2022-11-30 15:04:52 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-11-30 15:04:52 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop2022-11-30 15:04:52 [scrapy.extensions.telnet] INFO: Telnet Password: 9ec5c326bbb22c54
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole']
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-11-30 15:04:52 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-11-30 15:04:52 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x000002601B1B48D0>
[s] item {}
[s] settings <scrapy.settings.Settings object at 0x000002601B3EC550>
[s] Useful shortcuts:
[s] fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s] fetch(req) Fetch a scrapy.Request and update local objects
[s] shelp() Shell help (print this help)
[s] view(response) View response in a browser
>>> from scrapy import Request
>>> req = Request("https://quotes.toscrape.com/")
>>> fetch(req)
2022-11-30 15:05:46 [scrapy.core.engine] INFO: Spider opened
2022-11-30 15:05:47 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://quotes.toscrape.com/robots.txt> (referer: None)
2022-11-30 15:05:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://quotes.toscrape.com/> (referer: None)
>>> 2022-11-30 15:05:47 [scrapy.core.scraper] ERROR: Spider error processing <GET https://quotes.toscrape.com/> (referer: None)
Traceback (most recent call last):
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\twisted\internet\defer.py", line 892, in _runCallbacks
current.result = callback( # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\scrapy\utils\defer.py", line 285, in f
return deferred_from_coro(coro_f(*coro_args, **coro_kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\scrapy\utils\defer.py", line 272, in deferred_from_coro
event_loop = get_asyncio_event_loop_policy().get_event_loop()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py", line 677, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1 (start)'.
2022-11-30 15:05:47 [py.warnings] WARNING: C:\Users\arnoLiono\AppData\Local\Programs\Python\Python311\Lib\site-packages\twisted\internet\defer.py:892: RuntimeWarning: coroutine 'SpiderMiddlewareManager.scrape_response.<locals>.process_callback_output' was never awaited
current.result = callback( # type: ignore[misc]
And the shell is still running. I don't know what the error is, or how to fix it.
I was just trying to get the response from "https://quotes.toscrape.com/" website.
| [
"If you are using windows. This is caused by a bug.\nHere is the github issue.\nThis has absolutely nothing to do with the robots.txt file.\n",
"I recreated the same steps and had no problem getting the page. I would recommend you to change this setting in the settings.py:\nROBOTSTXT_OBEY = False because as you can see in the logs, scrapy receives a 404 (error) when making a first request to https://quotes.toscrape.com/robots.txt that doesnt exists.\nI would also recommend that you use fetch directly with the url as an argument, example: fetch(\"https://quotes.toscrape.com/\")\n"
] | [
1,
-1
] | [] | [] | [
"python",
"python_asyncio",
"scrapy",
"scrapy_shell",
"web_scraping"
] | stackoverflow_0074625783_python_python_asyncio_scrapy_scrapy_shell_web_scraping.txt |
Q:
Display Pandas Dataframe PowerBI
Power BI does not allow displaying a pandas DataFrame on the page; it requires a plot.
I am working with the python scripting function in PowerBi. I would like to display a pandas dataframe in the page but when I try to print(dataset) I get the following error (https://i.stack.imgur.com/4BWvT.png)
Is there a neat way to display a pandas table in the page? Also, if there is a lot or rows in the data, I would like to be able to scroll through it in the page.
A:
The Python visual turns data into an image. You can use the Python step in Power Query if you want to output the DataFrame for use in your report.
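If you do want to show the table inside a Python visual anyway, a sketch is to render it with matplotlib (dataset is the DataFrame Power BI passes to the visual's script); the result is a static image, so it will not scroll:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.axis("off")
ax.table(cellText=dataset.values, colLabels=dataset.columns, loc="center")
plt.show()
For a scrollable view of many rows, outputting the DataFrame from the Power Query Python step and using a regular Table visual is the more practical route.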
| Display Pandas Dataframe PowerBI | PowerBi does not allow display of pandas dataframe in page. Requires a plot.
I am working with the python scripting function in PowerBi. I would like to display a pandas dataframe in the page but when I try to print(dataset) I get the following error (https://i.stack.imgur.com/4BWvT.png)
Is there a neat way to display a pandas table in the page? Also, if there is a lot or rows in the data, I would like to be able to scroll through it in the page.
| [
"The Python visual turns data into an image. You can sue the Python step in Power Query if you want to output the Dataframe for use in your report.\n"
] | [
0
] | [] | [] | [
"pandas",
"powerbi",
"powerbi_custom_visuals",
"python"
] | stackoverflow_0074632255_pandas_powerbi_powerbi_custom_visuals_python.txt |
Q:
index 5 is out of bounds for axis 1 with size 1- python
I have used Markov clustering (MCL) to cluster data of 6 points; the input to MCL is a matrix.
my data:
import warnings
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import linear_sum_assignment
import scipy.spatial.distance as distance
from sklearn.metrics import pairwise_distances
%matplotlib inline
warnings.filterwarnings('ignore')
import markov_clustering as mc
import networkx as nx
import random
data = np.array([
[0.13, 0.19, 0.21, 0.13, 0.23, 0.05, 0.05],
[0.06, 0.06, 0.06, 0.15, 0.5, 0.05, 0.12],
[0.12, 0.29, 0.1, 0.15, 0.1, 0.11, 0.14],
[0.02, 0.13, 0.18, 0.14, 0.09, 0.05, 0.39],
[0.49, 0.06, 0.02, 0.13, 0.1, 0.09, 0.11],
[0.11, 0.18, 0.35, 0.14, 0.09, 0.07, 0.06]])
Matrix =np.array([[0, 0.0784, 0.032768, 0.097216, 0.131008, 0.025792],
[0.0784 , 0, 0.142144, 0.16768 , 0.223104, 0.174848],
[0.032768, 0.142144, 0, 0.069312, 0.126656, 0.053056],
[0.097216, 0.16768 , 0.069312, 0, 0.212224, 0.095232],
[0.131008, 0.223104, 0.126656, 0.212224, 0, 0.173312],
[0.025792, 0.174848, 0.053056, 0.095232, 0.173312, 0]])
Then I use MCL algorithm on the matrix to return the clusters.
def addSelfLoop(Matrix):
size = len(Matrix)
for i in range(size):
Matrix[i][i] = 1
return Matrix
def createTransition(Matrix):
size = len(Matrix)
Transition = [[0 for i in range(size)] for j in range(size)]
for j in range(size):
sum = 0
for i in range(size):
sum += Matrix[i][j]
for i in range(size):
Transition[i][j] = round(Matrix[i][j]/sum, 2)
return Transition
def expand(Transition):
size = len(Transition)
Expansion = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
sum = 0
for k in range(size):
sum += Transition[i][k] * Transition[k][j]
Expansion[i][j] = round(sum,2)
return Expansion
def inflate(Expansion, power):
size = len(Expansion)
Inflation = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
Inflation[i][j] = math.pow(Expansion[i][j],power)
for j in range(size):
sum = 0
for i in range(size):
sum += Inflation[i][j]
for i in range(size):
Inflation[i][j] = round(Inflation[i][j]/sum, 2)
return Inflation
import math
def change(Matrix1, Matrix2):
size = len(Matrix1)
change = 0
for i in range(size):
for j in range(size):
if(math.fabs(Matrix1[i][j]-Matrix2[i][j]) > change):
change = math.fabs(Matrix1[i][j]-Matrix2[i][j])
return change
def MCL(Matrix):
Matrix = addSelfLoop(Matrix)
print (pd.DataFrame(Matrix))
Gamma = 2
Transition = createTransition(Matrix)
M1 = Transition
print ("Transition")
print (pd.DataFrame(M1))
counter =1
epsilon = 0.001
change_ = float("inf")
while (change_ > epsilon):
print("Iterate :: ", counter,":::::::::::::::::::::::::::::")
counter += 1
# M_2 = M_1 * M_1 # expansion
M2 = expand(M1)
print ("expanded\n",pd.DataFrame(M2))
# M_1 = Γ(M_2) # inflation
M1 = inflate(M2, 2)
print ("inflated\n",pd.DataFrame(M1))
# change = difference(M_1, M_2)
change_ = change(M1,M2)
return M1
result = mc.run_mcl(Matrix, inflation=1.8)
clusters = mc.get_clusters(result)
print('clusters', clusters)
print('No. clusters=', len(clusters))
The output of cluster:
clusters [(0,), (1, 5), (2,), (3, 4)]
No. clusters= 4
Finally, I tried to create labels for the 6 data points.
the expected output:
data point cluster
0 0
1 1
2 2
3 3
4 3
5 1
when I run the following code
#Create labels
out_label = np.zeros((Matrix.shape[0],1), dtype=np.int)
for i,data in enumerate(clusters):
for fine_data in data:
out_label[data] = i+1
out_label
I get the following error:
----> 5 out_label[data] = i+1
IndexError: index 5 is out of bounds for axis 1 with size 1
A:
In that line, Matrix would need to be an int. Did you maybe mean to use Matrix.shape[0] in np.zeros()?
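For reference, here is a sketch of a labeling loop that matches the expected output: the IndexError comes from indexing out_label with the whole cluster tuple (out_label[data] with data = (1, 5) is read as a 2-D index), so index with each member instead:
out_label = np.zeros(Matrix.shape[0], dtype=int)
for i, members in enumerate(clusters):
    for point in members:
        out_label[point] = i   # cluster numbering starts at 0, as in the expected output
out_label   # array([0, 1, 2, 3, 3, 1]) for clusters [(0,), (1, 5), (2,), (3, 4)]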
| index 5 is out of bounds for axis 1 with size 1- python | I have used Markov clustering (MCL) to cluster data of (6) points, the input to MCL is a matrix.
my data:
import warnings
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import linear_sum_assignment
import scipy.spatial.distance as distance
from sklearn.metrics import pairwise_distances
%matplotlib inline
warnings.filterwarnings('ignore')
import markov_clustering as mc
import networkx as nx
import random
data = np.array([
[0.13, 0.19, 0.21, 0.13, 0.23, 0.05, 0.05],
[0.06, 0.06, 0.06, 0.15, 0.5, 0.05, 0.12],
[0.12, 0.29, 0.1, 0.15, 0.1, 0.11, 0.14],
[0.02, 0.13, 0.18, 0.14, 0.09, 0.05, 0.39],
[0.49, 0.06, 0.02, 0.13, 0.1, 0.09, 0.11],
[0.11, 0.18, 0.35, 0.14, 0.09, 0.07, 0.06]])
Matrix =np.array([[0, 0.0784, 0.032768, 0.097216, 0.131008, 0.025792],
[0.0784 , 0, 0.142144, 0.16768 , 0.223104, 0.174848],
[0.032768, 0.142144, 0, 0.069312, 0.126656, 0.053056],
[0.097216, 0.16768 , 0.069312, 0, 0.212224, 0.095232],
[0.131008, 0.223104, 0.126656, 0.212224, 0, 0.173312],
[0.025792, 0.174848, 0.053056, 0.095232, 0.173312, 0]])
Then I use MCL algorithm on the matrix to return the clusters.
def addSelfLoop(Matrix):
size = len(Matrix)
for i in range(size):
Matrix[i][i] = 1
return Matrix
def createTransition(Matrix):
size = len(Matrix)
Transition = [[0 for i in range(size)] for j in range(size)]
for j in range(size):
sum = 0
for i in range(size):
sum += Matrix[i][j]
for i in range(size):
Transition[i][j] = round(Matrix[i][j]/sum, 2)
return Transition
def expand(Transition):
size = len(Transition)
Expansion = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
sum = 0
for k in range(size):
sum += Transition[i][k] * Transition[k][j]
Expansion[i][j] = round(sum,2)
return Expansion
def inflate(Expansion, power):
size = len(Expansion)
Inflation = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
Inflation[i][j] = math.pow(Expansion[i][j],power)
for j in range(size):
sum = 0
for i in range(size):
sum += Inflation[i][j]
for i in range(size):
Inflation[i][j] = round(Inflation[i][j]/sum, 2)
return Inflation
import math
def change(Matrix1, Matrix2):
size = len(Matrix1)
change = 0
for i in range(size):
for j in range(size):
if(math.fabs(Matrix1[i][j]-Matrix2[i][j]) > change):
change = math.fabs(Matrix1[i][j]-Matrix2[i][j])
return change
def MCL(Matrix):
Matrix = addSelfLoop(Matrix)
print (pd.DataFrame(Matrix))
Gamma = 2
Transition = createTransition(Matrix)
M1 = Transition
print ("Transition")
print (pd.DataFrame(M1))
counter =1
epsilon = 0.001
change_ = float("inf")
while (change_ > epsilon):
print("Iterate :: ", counter,":::::::::::::::::::::::::::::")
counter += 1
# M_2 = M_1 * M_1 # expansion
M2 = expand(M1)
print ("expanded\n",pd.DataFrame(M2))
# M_1 = Γ(M_2) # inflation
M1 = inflate(M2, 2)
print ("inflated\n",pd.DataFrame(M1))
# change = difference(M_1, M_2)
change_ = change(M1,M2)
return M1
result = mc.run_mcl(Matrix, inflation=1.8)
clusters = mc.get_clusters(result)
print('clusters', clusters)
print('No. clusters=', len(clusters))
The output of cluster:
clusters [(0,), (1, 5), (2,), (3, 4)]
No. clusters= 4
Finally, I tried to create labels for data points (6)
the expected output:
data point cluster
0 0
1 1
2 2
3 3
4 3
5 1
when I run the following code
#Create labels
out_label = np.zeros((Matrix.shape[0],1), dtype=np.int)
for i,data in enumerate(clusters):
for fine_data in data:
out_label[data] = i+1
out_label
I get the following error:
----> 5 out_label[data] = i+1
IndexError: index 5 is out of bounds for axis 1 with size 1
| [
"In that line, Matrix would need to be an int. Did you maybe mean to use Matrix.shape[0] in np.zeros()?\n"
] | [
0
] | [] | [] | [
"cluster_analysis",
"pandas",
"python"
] | stackoverflow_0074634612_cluster_analysis_pandas_python.txt |
Q:
how to add keys from an existing dictionary to a new dictionary
I have a dictionary that looks like this:
pris = {'äpplen': [12,13,15,16,17], 'bananer': [14,17,18,19], 'citroner': [20,13,14,15,16], 'hallon': [23,34,45,46,57], 'kokos': [12,45,67,89]}
and another:
t={'äpplen', 'bananer', 'hallon'}
What I'm trying to do is to create a new dictionary with only the elements in t.
New_dictionary= {'äpplen': [12,13,15,16,17], 'bananer': [14,17,18,19], 'hallon': [23,34,45,46,57]}
So far, I've done this:
I tried to remove the unwanted keys from the dictionary pris, but I end up with exactly the elements I don't want. I tried with append, etc., but it doesn't work.
for e in t:
if e is not pris:
del pris[e]
print(pris)
>>> {'citroner': [20, 13, 14, 15, 16], 'kokos': [12, 45, 67, 89]}
Can someone help me?
A:
try this:
new_d = dict()
for key in t:
if key in pris:
new_d[key] = pris[key]
here is how in 1 line
new_d = {key:pris[key] for key in t if key in pris}
A:
Try this:
new_dict = {}
for e in t:
if e in pris.keys():
new_dict[e] = pris[e]
| how to add keys from an existing dictionary to a new dictionary | I have a dictionary that looks like this:
pris = {'äpplen': [12,13,15,16,17], 'bananer': [14,17,18,19], 'citroner': [20,13,14,15,16], 'hallon': [23,34,45,46,57], 'kokos': [12,45,67,89]}
and another:
t={'äpplen', 'bananer', 'hallon'}
What I'm trying to do is to create a new dictionary with only the elements in t.
New_dictionary= {'äpplen': [12,13,15,16,17], 'bananer': [14,17,18,19], 'hallon': [23,34,45,46,57]}
So far, I've done this:
I tried to remove the unwanted keys from the dictionary pris, but I end up with exactly the elements I don't want. I tried with append, etc., but it doesn't work.
for e in t:
if e is not pris:
del pris[e]
print(pris)
>>> {'citroner': [20, 13, 14, 15, 16], 'kokos': [12, 45, 67, 89]}
Can someone help me?
| [
"try this:\nnew_d = dict()\nfor key in t:\n if key in pris:\n new_d[key] = pris[key]\n\nhere is how in 1 line\nnew_d = {key:pris[key] for key in t if key in pris}\n\n",
"Try this:\nnew_dict = {}\nfor e in t:\n if e in pris.keys():\n new_dict[e] = pris[e]\n\n"
] | [
0,
0
] | [
"New_dictionary={k:pris[k] for k in t}\n"
] | [
-1
] | [
"del",
"dictionary",
"list",
"python"
] | stackoverflow_0074598269_del_dictionary_list_python.txt |
Q:
ImportError: cannot import name 'transpose'
I try to run a script which starts like this:
import os, sys, subprocess, argparse, random, transposer, numpy, csv, scipy, gzip
BUT, I got this error:
ImportError: cannot import name 'transpose'
I work on a Slurm cluster. Should I install transposer? I work with conda, as we don't have permission to install on the cluster. But there is no conda environment for that.
Would you please help with this? Thanks
A:
pip install transposer? It worked for me.
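If the conda environment is one you created (or can activate and write to), pip installs go into that environment without needing cluster-wide permissions; otherwise a per-user install is an alternative:
pip install transposer
pip install --user transposer   # if the environment itself is read-only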
| ImportError: cannot import name 'transpose' | I try to run a script which starts like this:
import os, sys, subprocess, argparse, random, transposer, numpy, csv, scipy, gzip
BUT, I got this error:
ImportError: cannot import name 'transpose'
I work on slurm cluster. Should I install transposer? I work with conda as we don't have permission to install on cluster. But, there is no conda env for that.
Would you please help on this? Thanks
| [
"pip install transposer ? worked for me.\n"
] | [
0
] | [] | [] | [
"conda",
"python",
"transpose"
] | stackoverflow_0074634768_conda_python_transpose.txt |
Q:
How to generate Sankey diagrams using Brightway2?
I know that we can get Sankey diagrams using activity-browser. Is there a way we can generate a sankey diagram for one of the ecoinvent activities using brightway2 functions and python?
I looked into brightway2 functions but couldn't find one that I can readily use for sankey diagrams.
A:
Unfortunately not, at least right now. The activity-browser implementation is licensed LGPL, which isn't compatible with the Brightway license, so we can't just copy what they have done. There are also technical limitations, as an interactive graphic would need a server/client architecture for new calculations (e.g. changing the cutoff), or serializing a very large result dataset for use in something like plotly.
It would be amazing to have the community step up with a stand-alone program; see also the ongoing visualization contest.
| How to generate Sankey diagrams using Brightway2? | I know that we can get Sankey diagrams using activity-browser. Is there a way we can generate a sankey diagram for one of the ecoinvent activities using brightway2 functions and python?
I looked into brightway2 functions but couldn't find one that I can readily use for sankey diagrams.
| [
"Unfortunately not, at least right now. The activity-browser implementation is licensed LGPL, which isn't compatible with the Brightway license, so we can't just copy what they have done. There are also technical limitations, as an interactive graphic would need a server/client architecture for new calculations (e.g. changing the cutoff), or serializing a very large result dataset for use in something like plotly.\nIt would be amazing to have the community step up with a stand-along program; see also the ongoing visualization contest.\n"
] | [
1
] | [] | [] | [
"brightway",
"python",
"sankey_diagram"
] | stackoverflow_0074634021_brightway_python_sankey_diagram.txt |
Q:
Are there any priority "or" in python?
Is there any "priority or" function in python? For instance if i am out of range
vec = [1, 2, 3, 4, 5]
new_vec = []
for index, number in enumerate(vec):
try:
new_value += vec[index + 1] + vec[index + 2] or if i am out of range do += vec[index +1] and if i am still out of range pass
except IndexError:
pass
The problem with my "pass" is that it will either do vec[index + 1] + vec[index + 2] or pass; however, I want to do vec[index + 1] if vec[index + 1] + vec[index + 2] is not possible, and if vec[index + 1] is not possible I want to pass. For instance, the 4 in vec has the situation vec[index + 1], but for the 5 we would pass.
A:
You can use sum() and regular list slicing. Unlike regular indexing which raises an error when out of range, slicing continues to work.
five = [0, 1, 2, 3, 4]
print(five[4]) # 4
print(five[5]) # Error
print(five[2:4]) # [2, 3]
print(five[2:1000]) # [2, 3, 4]; no error
print(five[1000:1001]) # []; still no error
For your usecase, you can do:
new_value += sum(vec[index + 1: index + 3])
| Are there any priority "or" in python? | Is there any "priority or" function in python? For instance if i am out of range
vec = [1, 2, 3, 4, 5]
new_vec = []
for index, number in enumerate(vec):
try:
new_value += vec[index + 1] + vec[index + 2] or if i am out of range do += vec[index +1] and if i am still out of range pass
except IndexError:
pass
The problem with my "pass" is that it will either do vec[index + 1] + vec[index + 2] or pass; however, I want to do vec[index + 1] if vec[index + 1] + vec[index + 2] is not possible, and if vec[index + 1] is not possible I want to pass. For instance, the 4 in vec has the situation vec[index + 1], but for the 5 we would pass.
| [
"You can use sum() and regular list slicing. Unlike regular indexing which raises an error when out of range, slicing continues to work.\nfive = [0, 1, 2, 3, 4]\n\nprint(five[4]) # 4\nprint(five[5]) # Error\nprint(five[2:4]) # [2, 3]\nprint(five[2:1000]) # [2, 3, 4]; no error\nprint(five[1000:1001]) # []; still no error\n\nFor your usecase, you can do:\nnew_value += sum(vec[index + 1: index + 3])\n\n"
] | [
1
] | [] | [] | [
"for_loop",
"index_error",
"list",
"python",
"try_except"
] | stackoverflow_0074634520_for_loop_index_error_list_python_try_except.txt |
Q:
Calculate lat/lon of 4 corners of rectangle using Python
I need to find the latitude and longitude coordinates of the four corners of a rectangle in a Python script, given the center coordinate, length, width, and bearing of the shape. Length and width are in statute miles, but honestly converting those to meters is probably one of the easiest parts. I have some examples of how to use haversine to calculate distance between 2 points, but I'm at a loss on this one. Would anyone be able to at least point me in the right direction?
Picture of rectangle
Update
This is the formula I came up with for my Python script, based on the link that @Mbo provided:
lat2 = asin(sin(lat1)*cos(length2/(_AVG_EARTH_RADIUS_KM)) + cos(lat1) * sin(length2/(_AVG_EARTH_RADIUS_KM)) * cos(bearing1))
lon2 = lon1 + atan2(sin(bearing1) * sin(length2/(_AVG_EARTH_RADIUS_KM)) * cos(lat1), cos(length2/(_AVG_EARTH_RADIUS_KM)) - sin(lat1) * sin(lat2))
Unfortunately the results don't make sense. I used a center point of 32° N 77° W, length of 20 miles, width 10 miles, and bearing 0 deg and I'm getting the result 0.586599511812, -77.0.
When I plot it out on my mapping application, it tells me that the coordinate for the new point should be 32.14513° N, -77.0° W.
Edit to add: I converted length1 and width1 to kilometers, and converted bearing1 to radians before using in the formulas above.
A:
Rectangle on the earth sphere.. this is doubtful thing.
Anyway, look at this page.
Using formula from section Destination point given distance and bearing from start point, calculate two middles at distance width/2 and bearings bearing, bearing + 180.
For every middle point do the same with height/2 and bearing + 90, bearing - 90 to calculate corner points.
(note that width as corner-corner distance will be inexact for such approximation)
A:
With pyproj you have the tooling to make the calculation.
fwd() supports calculating the end point, given start, bearing, and distance.
You still need basic geometry to calculate the necessary distance/angles.
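A sketch of that pyproj route, using the numbers from the question (it assumes a pyproj version with the Geod API; azimuths are in degrees and distances in metres, so the mile values need converting first):
from pyproj import Geod

geod = Geod(ellps="WGS84")
lat1, lon1, bearing = 32.0, -77.0, 0.0
half_length_m = (20 / 2) * 1609.344   # half of the 20-mile length, in metres

lon2, lat2, _back_az = geod.fwd(lon1, lat1, bearing, half_length_m)
print(lat2, lon2)   # roughly 32.145, -77.0, matching the mapping application
The corner points then follow by stepping half the width at bearing ± 90° from each of the two midpoints.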
A:
I ended up finding 2 answers.
The first one, from python - Get lat/long given current point, distance and bearing was simple. The Destination point given distance and bearing from start point formula works after all, I just forgot to convert the lat/long for the start point to radians, and then convert the answer back to degrees at the end.
The resulting code looks like:
import math
R = 6378.1 #Radius of the Earth
brng = 1.57 #Bearing is 90 degrees converted to radians.
d = 15 #Distance in km
#lat2 52.20444 - the lat result I'm hoping for
#lon2 0.36056 - the long result I'm hoping for.
lat1 = math.radians(52.20472) #Current lat point converted to radians
lon1 = math.radians(0.14056) #Current long point converted to radians
lat2 = math.asin( math.sin(lat1)*math.cos(d/R) +
math.cos(lat1)*math.sin(d/R)*math.cos(brng))
lon2 = lon1 + math.atan2(math.sin(brng)*math.sin(d/R)*math.cos(lat1),
math.cos(d/R)-math.sin(lat1)*math.sin(lat2))
lat2 = math.degrees(lat2)
lon2 = math.degrees(lon2)
print(lat2)
print(lon2)
The second answer I found, from python - calculating a gps coordinate given a point, bearing and distance, uses geopy and is much simpler, so I ended up going with this as my preferred solution:
from geopy import Point
from geopy.distance import distance, VincentyDistance
# given: lat1, lon1, bearing, distMiles
lat2, lon2 = VincentyDistance(miles=distMiles).destination(Point(lat1, lon1), bearing)
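In geopy 2.x, Vincenty has been removed in favour of geodesic, so on a recent geopy the equivalent would be something like this sketch (same variables assumed):
from geopy import Point
from geopy.distance import geodesic

dest = geodesic(miles=distMiles).destination(Point(lat1, lon1), bearing)
lat2, lon2 = dest.latitude, dest.longitude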
| Calculate lat/lon of 4 corners of rectangle using Python | I need to find the latitude and longitude coordinates of the four corners of a rectangle in a Python script, given the center coordinate, length, width, and bearing of the shape. Length and width are in statute miles, but honestly converting those to meters is probably one of the easiest parts. I have some examples of how to use haversine to calculate distance between 2 points, but I'm at a loss on this one. Would anyone be able to at least point me in the right direction?
Picture of rectangle
Update
This is the formula I came up with for my Python script, based on the link that @Mbo provided:
lat2 = asin(sin(lat1)*cos(length2/(_AVG_EARTH_RADIUS_KM)) + cos(lat1) * sin(length2/(_AVG_EARTH_RADIUS_KM)) * cos(bearing1))
lon2 = lon1 + atan2(sin(bearing1) * sin(length2/(_AVG_EARTH_RADIUS_KM)) * cos(lat1), cos(length2/(_AVG_EARTH_RADIUS_KM)) - sin(lat1) * sin(lat2))
Unfortunately the results don't make sense. I used a center point of 32° N 77° W, length of 20 miles, width 10 miles, and bearing 0 deg and I'm getting the result 0.586599511812, -77.0.
When I plot it out on my mapping application, it tells me that the coordinate for the new point should be 32.14513° N, -77.0° W.
Edit to add: I converted length1 and width1 to kilometers, and converted bearing1 to radians before using in the formulas above.
| [
"Rectangle on the earth sphere.. this is doubtful thing.\nAnyway, look at this page.\nUsing formula from section Destination point given distance and bearing from start point, calculate two middles at distance width/2 and bearings bearing, bearing + 180.\nFor every middle point do the same with height/2 and bearing + 90, bearing - 90 to calculate corner points.\n(note that width as corner-corner distance will be inexact for such approximation)\n",
"With pyproj you have the tooling to make the calculation.\nfwd() supports calculating the end point, giving start, bearing, and distance.\nYou still need basic geometry to calculate the necessary distance/angles.\n",
"I ended up finding 2 answers.\nThe first one, from python - Get lat/long given current point, distance and bearing was simple. The Destination point given distance and bearing from start point formula works after all, I just forgot to convert the lat/long for the start point to radians, and then convert the answer back to degrees at the end.\nThe resulting code looks like:\nimport math\n\nR = 6378.1 #Radius of the Earth\nbrng = 1.57 #Bearing is 90 degrees converted to radians.\nd = 15 #Distance in km\n\n#lat2 52.20444 - the lat result I'm hoping for\n#lon2 0.36056 - the long result I'm hoping for.\n\nlat1 = math.radians(52.20472) #Current lat point converted to radians\nlon1 = math.radians(0.14056) #Current long point converted to radians\n\nlat2 = math.asin( math.sin(lat1)*math.cos(d/R) +\n math.cos(lat1)*math.sin(d/R)*math.cos(brng))\n\nlon2 = lon1 + math.atan2(math.sin(brng)*math.sin(d/R)*math.cos(lat1),\n math.cos(d/R)-math.sin(lat1)*math.sin(lat2))\n\nlat2 = math.degrees(lat2)\nlon2 = math.degrees(lon2)\n\nprint(lat2)\nprint(lon2)\n\nThe second answer I found, from python - calculating a gps coordinate given a point, bearing and distance, uses geopy and is much simpler, so I ended up going with this as my preferred solution:\nfrom geopy import Point\nfrom geopy.distance import distance, VincentyDistance\n\n# given: lat1, lon1, bearing, distMiles\nlat2, lon2 = VincentyDistance(miles=distMiles).destination(Point(lat1, lon1), bearing)\n\n"
] | [
0,
0,
0
] | [] | [] | [
"coordinates",
"geometry",
"geospatial",
"haversine",
"python"
] | stackoverflow_0074607356_coordinates_geometry_geospatial_haversine_python.txt |
Q:
Python DOCX font size Pt() not defined
I am trying to get this document to change the font size but it keeps saying Pt() is not defined.
I have this:
import docx
doc = docx.Document(r"C:\Users\jconshick\Desktop\CodeTest\Spellbook.docx")
para = doc.add_paragraph('').add_run("This is a test")
para.font.size = Pt(12)
doc.save(r"C:\Users\jconshick\Desktop\CodeTest\Spellbook.docx")
I keep getting that Pt() is not defined, though all the documentation shows this is how it should be. Not sure if it matters, but I am using Spyder.
A:
Try
docx.shared.Pt(12)
instead of
Pt(12)
By this answer from @PieterduToit
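Putting it together with the snippet from the question (just a sketch with the import spelled out; the paths are the ones from the question):
import docx
from docx.shared import Pt

doc = docx.Document(r"C:\Users\jconshick\Desktop\CodeTest\Spellbook.docx")
run = doc.add_paragraph("").add_run("This is a test")
run.font.size = Pt(12)
doc.save(r"C:\Users\jconshick\Desktop\CodeTest\Spellbook.docx")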
| Python DOCX font size Pt() not defined | I am trying to get this document to change the font size but it keeps saying Pt() is not defined.
I have this:
import docx
doc = docx.Document(r"C:\Users\jconshick\Desktop\CodeTest\Spellbook.docx")
para = doc.add_paragraph('').add_run("This is a test")
para.font.size = Pt(12)
doc.save(r"C:\Users\jconshick\Desktop\CodeTest\Spellbook.docx")
I keep getting that Pt() is not defined, though all the documentation shows this is how it should be. Not sure if it matters, but I am using Spyder.
| [
"Try\ndocx.shared.Pt(12)\n\ninstead of\nPt(12)\n\nBy this answer from @PieterduToit\n"
] | [
1
] | [] | [] | [
"docx",
"python",
"python_3.x",
"python_docx"
] | stackoverflow_0074634807_docx_python_python_3.x_python_docx.txt |
Q:
Is it possible to get to the code from egg-link?
I made some modifications to the code for a deep learning model implemented in MxNet.
On my local computer, I installed MxNet by conda/pip, so I could just go to the installation folder, where I found the files where the model architecture is specified and made my changes. The structure is like:
.../environment_folder/lib/python3.8/site-packages/
- gluoncv/model_zoo/action_recognition/i3d_resnet.py
- mxnet/gluon/block.py
and I made my changes to these files.
Now, I need to do the same on another machine, where MxNet has been compiled from source. I looked into the analogous installation folder, and I found the following structure:
.../environment_folder/lib/python3.8/site-packages/
- gluoncv/model_zoo/action_recognition/i3d_resnet.py
- mxnet.egg-link
i.e., I found the gluoncv folder, but instead of the mxnet one there is an egg-link. I honestly didn't know about egg files; I've been searching around and found that they were an old way of packaging Python files before wheels and pip. Is there any way I can open the link and get to the folder it is presumably pointing to?
A:
If you can open a python shell prompt (with the environment loaded) on the machine in question, try:
import mxnet
mxnet.__file__
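As a side note, a .egg-link file (created by pip install -e / setup.py develop) is itself just a small text file whose first line is the path to the source checkout, so you can also simply read it:
cat .../environment_folder/lib/python3.8/site-packages/mxnet.egg-link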
| Is it possible to get to the code from egg-link? | I made some modifications to the code for a deep learning model implemented in MxNet.
On my local computer, I installed MxNet by conda/pip, so I could just go to the installation folder, where I found the files where the model architecture is specified and made my changes. The structure is like:
.../environment_folder/lib/python3.8/site-packages/
- gluoncv/model_zoo/action_recognition/i3d_resnet.py
- mxnet/gluon/block.py
and I made my changes to these files.
Now, I need to do the same on another machine, where MxNet has been compiled from source. I looked into the analogous installation folder, and I found the following structure:
.../environment_folder/lib/python3.8/site-packages/
- gluoncv/model_zoo/action_recognition/i3d_resnet.py
- mxnet.egg-link
i.e., I found the gluoncv folder, but instead of the mxnet one there is a egg-link. I honestly didn't know about egg files, I've been searching around and found it was an old way of packaging python files before wheels and pip. Is there any way I can open the link and get to the folder it is presumably pointing to?
| [
"If you can open a python shell prompt (with the environment loaded) on the machine in question, try:\nimport mxnet\nmxnet.__file__\n\n"
] | [
0
] | [] | [] | [
"egg",
"python"
] | stackoverflow_0074191226_egg_python.txt |
Q:
ValueError: Invalid endpoint: https://s3..amazonaws.com
When an EMR machine is trying to run a step that includes boto3 initialisation, it sometimes gets the following error:
ValueError: Invalid endpoint: https://s3..amazonaws.com
When I'm trying to set up a new machine it can suddenly work.
Attached the full error:
self.client = boto3.client("s3")
File "/usr/local/lib/python3.6/site-packages/boto3/__init__.py", line 83, in client
return _get_default_session().client(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/usr/local/lib/python3.6/site-packages/botocore/session.py", line 861, in create_client
client_config=config, api_version=api_version)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 76, in create_client
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 285, in _get_client_args
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/usr/local/lib/python3.6/site-packages/botocore/args.py", line 79, in get_client_args
timeout=(new_config.connect_timeout, new_config.read_timeout))
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 297, in create_endpoint
raise ValueError("Invalid endpoint: %s" % endpoint_url)
ValueError: Invalid endpoint: https://s3..amazonaws.com
Any idea why it happens?
(Versions: boto3==1.7.29, botocore==1.10.29)
A:
It looks like you have an invalid region.
Check your ~/.aws/config
A:
Set the region in your ~/.aws/credentials or ~/.aws/config files. You can set the region as an environment variable as well e.g.
In bash
export AWS_REGION="eu-west-2"
or in Powershell
$Env:AWS_REGION="eu-west-2"
A:
In my case, even though ~/.aws/config had the region set,
$ cat ~/.aws/config
[default]
region = us-east-1
the env var AWS_REGION was set to an empty string
$ env | grep -i aws
AWS_REGION=
unset this env var and all was good again
$ unset AWS_REGION
$ aws sts get-caller-identity --output text --query Account
777***234534
(apologies for posting on a really old question, it did pop up in a Google search)
 | ValueError: Invalid endpoint: https://s3..amazonaws.com | When an EMR machine is trying to run a step that includes boto3 initialisation, it sometimes gets the following error:
ValueError: Invalid endpoint: https://s3..amazonaws.com
When I'm trying to set up a new machine it can suddenly work.
Attached the full error:
self.client = boto3.client("s3")
File "/usr/local/lib/python3.6/site-packages/boto3/__init__.py", line 83, in client
return _get_default_session().client(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/usr/local/lib/python3.6/site-packages/botocore/session.py", line 861, in create_client
client_config=config, api_version=api_version)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 76, in create_client
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 285, in _get_client_args
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/usr/local/lib/python3.6/site-packages/botocore/args.py", line 79, in get_client_args
timeout=(new_config.connect_timeout, new_config.read_timeout))
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 297, in create_endpoint
raise ValueError("Invalid endpoint: %s" % endpoint_url)
ValueError: Invalid endpoint: https://s3..amazonaws.com
Any idea why it happens?
(Versions: boto3==1.7.29, botocore==1.10.29)
| [
"It looks like you have an invalid region.\nCheck your ~/.aws/config\n",
"Set the region in your ~/.aws/credentials or ~/.aws/config files. You can set the region as an environment variable as well e.g. \nIn bash\nexport AWS_REGION=\"eu-west-2\"\n\nor in Powershell\n$Env:AWS_REGION=\"eu-west-2\"\n\n",
"In my case, even though ~/.aws/config had the region set,\n$ cat ~/.aws/config \n[default]\nregion = us-east-1\n\nthe env var AWS_REGION was set to an empty string\n$ env | grep -i aws\nAWS_REGION=\n\nunset this env var and all was good again\n$ unset AWS_REGION\n$ aws sts get-caller-identity --output text --query Account\n777***234534\n\n(apologies for posting on a really old question, it did pop up in a Google search)\n"
] | [
19,
11,
0
] | [] | [] | [
"amazon_emr",
"amazon_s3",
"amazon_web_services",
"boto3",
"python"
] | stackoverflow_0057943053_amazon_emr_amazon_s3_amazon_web_services_boto3_python.txt |
Q:
How do I create any regular polygon using turtle?
So I have an assignment that asked me to draw any regular polygon using Turtle and I created the code. It works but my mentor said to try again. I would like to know what I did wrong, Thank you!
The requirements for this assignment are:
The program should take in input from the user.
The program should have a function that:
takes in the number of sides as a parameter.
calculates the angle
uses the appropriate angle to draw the polygon
from turtle import Turtle
turtle = Turtle()
side = int(input("Enter the number of the sides: "))
def poly():
for i in range(side):
turtle.forward(100)
turtle.right(360 / side)
poly()
A:
This is the function I used to draw a polygon using Turtle:
Draws an n-sided polygon of a given length. t is a turtle.
def polygon(t, n, length):
    angle = 360.0 / n
    polyline(t, n, length, angle)
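The polyline helper is not shown above; a minimal sketch of what it is usually assumed to look like (draw n segments of the given length, turning by angle after each one):
def polyline(t, n, length, angle):
    for _ in range(n):
        t.forward(length)
        t.left(angle)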
A:
I think this might be better suited on math stackexchange.
A regular polygon has interior angles (n−2) × 180 / n. There's a good blog post on that here.
You just need to change the angle by which you rotate every time:
from turtle import Turtle
turtle = Turtle()
num_sides = int(input("Enter the number of the sides: "))
def poly():
for i in range(num_sides):
turtle.forward(100)
# change this bit
turtle.right((num_sides - 2) * 180 / num_sides)
poly()
| How do I create any regular polygon using turtle? | So I have an assignment that asked me to draw any regular polygon using Turtle and I created the code. It works but my mentor said to try again. I would like to know what I did wrong, Thank you!
The requirements for this assignment are:
The program should take in input from the user.
The program should have a function that:
takes in the number of sides as a parameter.
calculates the angle
uses the appropriate angle to draw the polygon
from turtle import Turtle
turtle = Turtle()
side = int(input("Enter the number of the sides: "))
def poly():
for i in range(side):
turtle.forward(100)
turtle.right(360 / side)
poly()
| [
"This is the function I used to draw a polygon using Turtle:\nDraws an n-sided polygon of a given length. t is a turtle.\ndef polygon(t, n, length):\nangle = 360.0 / n\npolyline(t, n, length, angle)\n\n",
"I think this might be better suited on math stackexchange.\nA regular polygon has interior angles (n−2) × 180 / n. Theres a good blog post on that here.\nYou just need to change the angle by which you rotate every time:\nfrom turtle import Turtle\n\nturtle = Turtle()\n \nnum_sides = int(input(\"Enter the number of the sides: \"))\n \ndef poly():\n for i in range(num_sides):\n turtle.forward(100)\n # change this bit\n turtle.right((num_sides - 2) * 180 / num_sides)\n \n \npoly()\n\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_3.x",
"python_turtle"
] | stackoverflow_0074633650_python_python_3.x_python_turtle.txt |
Q:
How to make a lower triangle array of 10 but repeated across a diagonal n times?
I am trying to create an array of 10 for each item I have, but then put those arrays of 10 into a larger array diagonally with zeros filling the missing spaces.
Here is an example of what I am looking for, but only with arrays of 3.
import numpy as np
arr = np.tri(3,3)
arr
This creates an array that looks like this:
[[1,0,0],
[1,1,0],
[1,1,1]]
But I need an array of 10 * n that looks like this: (using arrays of 3 for example here, with n=2)
{1,0,0,0,0,0,
1,1,0,0,0,0,
1,1,1,0,0,0,
0,0,0,1,0,0,
0,0,0,1,1,0,
0,0,0,1,1,1}
Any help would be appreciated, thanks!
I have also tried
df_arr2 = pd.concat([df_arr] * (n), ignore_index=True)
df_arr3 = pd.concat([df_arr2] *(n), axis=1, ignore_index=True)
But this repeats the matrix across all rows and columns, when I only want the diagonal ones.
A:
Now I got it... AFAIU, the OP wants those np.tri triangles in the diagonal of a bigger, multiple of 3 square shaped array.
As per example, for n=2:
import numpy as np
n = 2
tri = np.tri(3)
arr = np.zeros((n*3, n*3))
for i in range(0, n*3, 3):
arr[i:i+3,i:i+3] = tri
arr.astype(int)
# Out:
# array([[1, 0, 0, 0, 0, 0],
# [1, 1, 0, 0, 0, 0],
# [1, 1, 1, 0, 0, 0],
# [0, 0, 0, 1, 0, 0],
# [0, 0, 0, 1, 1, 0],
# [0, 0, 0, 1, 1, 1]])
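A shorter sketch of the same idea, assuming the same block size of 3 and block count n as above, uses np.kron to repeat the triangle along the diagonal:
import numpy as np

n = 2
arr = np.kron(np.eye(n, dtype=int), np.tri(3, dtype=int))
# gives the same 6x6 block-diagonal pattern of lower triangles as above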
A:
I saw @brandt's solution, which is definitely the best. In case you want to construct them manually, you can use this method:
def custom_triangle_matrix(rows, rowlen, tsize):
cm = []
for i in range(rows):
row = []
for j in range(min((i//tsize)*tsize, rowlen)):
row.append(0)
for j in range((i//tsize)*tsize, min(((i//tsize)*tsize) + i%tsize + 1, rowlen)):
row.append(1)
for j in range(((i//tsize)*tsize) + i%tsize + 1, rowlen):
row.append(0)
cm.append(row)
return cm
Here are some example executions and what they look like using pprint:
matrix = custom_triangle_matrix(6, 6, 3)
pprint.pprint(matrix)
[[1, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 1, 1]]
matrix = custom_triangle_matrix(6, 9, 3)
pprint.pprint(matrix)
[[1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0]]
matrix = custom_triangle_matrix(9, 6, 3)
pprint.pprint(matrix)
[[1, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]
matrix = custom_triangle_matrix(10, 10, 5)
pprint.pprint(matrix)
[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1]]
Good Luck!
| How to make a lower triangle array of 10 but repeated across a diagonal n times? | I am trying to create an array of 10 for each item I have, but then put those arrays of 10 into a larger array diagonally with zeros filling the missing spaces.
Here is an example of what I am looking for, but only with arrays of 3.
import numpy as np
arr = np.tri(3,3)
arr
This creates an array that looks like this:
[[1,0,0],
[1,1,0],
[1,1,1]]
But I need an array of 10 * n that looks like this: (using arrays of 3 for example here, with n=2)
{1,0,0,0,0,0,
1,1,0,0,0,0,
1,1,1,0,0,0,
0,0,0,1,0,0,
0,0,0,1,1,0,
0,0,0,1,1,1}
Any help would be appreciated, thanks!
I have also tried
df_arr2 = pd.concat([df_arr] * (n), ignore_index=True)
df_arr3 = pd.concat([df_arr2] *(n), axis=1, ignore_index=True)
But this repeats the matrix across all rows and columns, when I only want the diagonal ones.
| [
"Now I got it... AFAIU, the OP wants those np.tri triangles in the diagonal of a bigger, multiple of 3 square shaped array.\nAs per example, for n=2:\nimport numpy as np\n\nn = 2\n\ntri = np.tri(3)\n\narr = np.zeros((n*3, n*3))\n\nfor i in range(0, n*3, 3):\n arr[i:i+3,i:i+3] = tri\n\narr.astype(int)\n\n# Out: \n# array([[1, 0, 0, 0, 0, 0],\n# [1, 1, 0, 0, 0, 0],\n# [1, 1, 1, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0],\n# [0, 0, 0, 1, 1, 0],\n# [0, 0, 0, 1, 1, 1]])\n\n",
"I saw @brandt's solution which is definitely the best. Incase you want to construct the them manually you can use this method:\ndef custom_triangle_matrix(rows, rowlen, tsize):\n cm = []\n for i in range(rows):\n row = []\n for j in range(min((i//tsize)*tsize, rowlen)):\n row.append(0)\n\n for j in range((i//tsize)*tsize, min(((i//tsize)*tsize) + i%tsize + 1, rowlen)):\n row.append(1)\n\n for j in range(((i//tsize)*tsize) + i%tsize + 1, rowlen):\n row.append(0)\n\n cm.append(row)\n\n return cm\n\nHere are some example executions and what they look like using ppprint:\nmatrix = custom_triangle_matrix(6, 6, 3)\npprint.pprint(matrix)\n\n[[1, 0, 0, 0, 0, 0],\n [1, 1, 0, 0, 0, 0],\n [1, 1, 1, 0, 0, 0],\n [0, 0, 0, 1, 0, 0],\n [0, 0, 0, 1, 1, 0],\n [0, 0, 0, 1, 1, 1]]\n\n\nmatrix = custom_triangle_matrix(6, 9, 3)\npprint.pprint(matrix)\n\n[[1, 0, 0, 0, 0, 0, 0, 0, 0],\n [1, 1, 0, 0, 0, 0, 0, 0, 0],\n [1, 1, 1, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 1, 0, 0, 0, 0],\n [0, 0, 0, 1, 1, 1, 0, 0, 0]]\n\n\nmatrix = custom_triangle_matrix(9, 6, 3)\npprint.pprint(matrix)\n\n[[1, 0, 0, 0, 0, 0],\n [1, 1, 0, 0, 0, 0],\n [1, 1, 1, 0, 0, 0],\n [0, 0, 0, 1, 0, 0],\n [0, 0, 0, 1, 1, 0],\n [0, 0, 0, 1, 1, 1],\n [0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0]]\n\n\nmatrix = custom_triangle_matrix(10, 10, 5)\npprint.pprint(matrix)\n\n[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],\n [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],\n [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 1, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 1, 1, 0, 0, 0],\n [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],\n [0, 0, 0, 0, 0, 1, 1, 1, 1, 0],\n [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]]\n\n\nGood Luck!\n"
] | [
0,
0
] | [] | [] | [
"arrays",
"matrix",
"pandas",
"python"
] | stackoverflow_0074634540_arrays_matrix_pandas_python.txt |
Q:
Import excel file and loop run for each row in Excel file
I have this code to scrape the results from google. If I have a list of terms I need to search in Excel/Csv format, how can I write the code to
After importing the Excel file, search each row's value and print out the results for that row.
Repeat for the next row value in the Excel file.
Here's my code. Please help with any solution you can think of
For example my Excel file just have 1 column and 3 values as below:
List to search
Defuse
Commercial
Ecommerce
from ecommercetools import seo
import csv
import pandas as pd
searching = input('What do you want to search?')
results = seo.get_serps(searching)
df = pd.DataFrame(results.head(20)) # Convert result into data frame.
df.to_csv("ScanOutput.csv",mode="a")
Thank you
I tried several modules but got stuck somehow. Any help would be appreciated
A:
If this is the content of your .csv file called file.csv:
a,b,c
1,2,3
k,l,m
then you can read it and loop row by row like so:
import csv
# read file.csv and print each row
with open('file.csv', 'r') as file:
reader = csv.reader(file)
for row in reader:
print(row)
This answer doesn't use pandas (but csv which is simpler) which I think is ok until you don't have gigabytes of data or the data is very complex
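Putting this together with the code from the question, a minimal sketch (the input file name is a placeholder) that runs the search for every term and appends each result to the output CSV:
from ecommercetools import seo
import csv
import pandas as pd

with open('search_terms.csv', 'r') as file:   # one term per row, first column
    reader = csv.reader(file)
    next(reader)                              # skip the header row ('List to search')
    for row in reader:
        results = seo.get_serps(row[0])       # same call as in the question
        df = pd.DataFrame(results.head(20))
        df.to_csv("ScanOutput.csv", mode="a")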
| Import excel file and loop run for each row in Excel file | I have this code to scrape the results from google. If I have a list of terms I need to search in Excel/Csv format, how can I write the code to
After importing the Excel file, search each row's value and print out the results for that row.
Repeat for the next row value in the Excel file.
Here's my code. Please help with any solution you can think of
For example my Excel file just have 1 column and 3 values as below:
List to search
Defuse
Commercial
Ecommerce
from ecommercetools import seo
import csv
import pandas as pd
searching = input('What do you want to search?')
results = seo.get_serps(searching)
df = pd.DataFrame(results.head(20)) # Convert result into data frame.
df.to_csv("ScanOutput.csv",mode="a")
Thank you
I tried several modules but got stuck somehow. Any help would be appreciated
| [
"If this is the content of your .csv file called file.csv:\na,b,c\n1,2,3\nk,l,m\n\nthen you can read it and loop row by row like so:\nimport csv\n\n# read file.csv and print each row\nwith open('file.csv', 'r') as file:\n reader = csv.reader(file)\n for row in reader:\n print(row)\n\nThis answer doesn't use pandas (but csv which is simpler) which I think is ok until you don't have gigabytes of data or the data is very complex\n"
] | [
0
] | [] | [] | [
"module",
"python",
"web_scraping"
] | stackoverflow_0074634922_module_python_web_scraping.txt |
Q:
Python subprocess with dynamic variables and arguments
I want to ask how can I run subprocess.run() or subprocess.call() in python when the arguments are dynamic. I have already stored all commands in an external batch file, and I want to use Python to run the batch file after I update the arguments. I will give more details in the following:
the batch file is like this:
echo on
set yyyy=%1
set mm=%2
set dd=%3
set rangePath=%4
set pulPath=%5
set stasDate=%6
'''Command to run the batch file'''
My code right now is:
param_temp = 'pathway for parameter input yaml file'+'param_input.yml'
param = yaml.safe_load(param_temp)
yyyy = str(param['Year'])
mm = str(param['Month'])
dd = str(param['Date'])
rangePath = param['Path1']
pulPath = param['Path2']
stasDate = str(param['yyyymmdd'])
param_str = (yyyy+'_'+mm+'_'+dd+'_'+rangePath+'_'+pulPath+'_'+stasDate).split('_')
subprocess.call(param_str + [str('batch file path')], shell=False)
I created a YAML file to hold all my parameters and intend to only adjust the YAML file if I want to run the batch file for a different date. I chose the param_str approach because I googled it, and the web told me that %1 represents the variable or matched string after the batch file name. However, the approach is either invalid or outputs nothing. Can someone give me some ideas on how to do this, please?
A:
A slightly simpler implementation might look like:
#!/usr/bin/env python
import subprocess
import yaml

param_temp = 'pathway for parameter input yaml file'+'param_input.yml'
batch_path = 'batch file path'

# open and parse the YAML file (safe_load expects the file contents, not the path string)
with open(param_temp) as f:
    param = yaml.safe_load(f)

params = [ 'Year', 'Month', 'Date', 'Path1', 'Path2', 'yyyymmdd' ]
# subprocess arguments must be strings
param_values = [ str(param[p]) for p in params ]

subprocess.call([batch_path] + param_values)
Note:
The path to the script to invoke comes before the arguments to that script.
We aren't generating an underscore-separated string only to split it into a list -- it's more sensible (and less error-prone) to just generate a list directly.
shell=False is default, so it doesn't need to be explicitly specified.
| Python subprocess with dynamic variables and arguments | I want to ask how can I run subprocess.run() or subprocess.call() in python when the arguments are dynamic. I have already stored all commands in an external batch file, and I want to use Python to run the batch file after I update the arguments. I will give more details in the following:
the batch file is like this:
echo on
set yyyy=%1
set mm=%2
set dd=%3
set rangePath=%4
set pulPath=%5
set stasDate=%6
'''Command to run the batch file'''
My code right now is:
param_temp = 'pathway for parameter input yaml file'+'param_input.yml'
param = yaml.safe_load(param_temp)
yyyy = str(param['Year'])
mm = str(param['Month'])
dd = str(param['Date'])
rangePath = param['Path1']
pulPath = param['Path2']
stasDate = str(param['yyyymmdd'])
param_str = (yyyy+'_'+mm+'_'+dd+'_'+rangePath+'_'+pulPath+'_'+stasDate).split('_')
subprocess.call(param_str + [str('batch file path')], shell=False)
I create a YAML file to include all my parameters and intend to only adjust my YAML file if I want to run the batch file for different date. I choose the param_str because I googled it, and the web told me that %1 represents the variable or matched string after entering the batch file name. However, the approach is either invalid or outputs nothing. Can someone give me some ideas on how to do this, please?
| [
"A slightly simpler implementation might look like:\n#!/usr/bin/env python\nimport yaml\n\nparam_temp = 'pathway for parameter input yaml file'+'param_input.yml'\nbatch_path = 'batch file path'\n\nparam = yaml.safe_load(param_temp)\nparams = [ 'Year', 'Month', 'Date', 'Path1', 'Path2', 'yyyymmdd' ]\nparam_values = [ param[p] for p in params ]\n\nsubprocess.call([batch_path] + param_values)\n\n\nNote:\n\nThe path to the script to invoke comes before the arguments to that script.\nWe aren't generating an underscore-separated string only to split it into a list -- it's more sensible (and less error-prone) to just generate a list directly.\nshell=False is default, so it doesn't need to be explicitly specified.\n\n"
] | [
0
] | [] | [] | [
"python",
"subprocess",
"variables"
] | stackoverflow_0074634932_python_subprocess_variables.txt |
Q:
c$50 finance non-integers being rejected causing Buy to fail check50
Everything seems to be working fine with my code, however I am running into a single error for /buy when running check50. :( buy handles fractional, negative, and non-numeric shares. expected status code 400, but got 200.
I think check50 is receiving status code 200 when checking a non-integer such as 1.5 or a string, before the buy form can even be submitted.
Flask app:
@app.route("/buy", methods=["GET", "POST"])
@login_required
def buy():
"""Buy shares of stock"""
rows = db.execute("SELECT * FROM users WHERE id = :id", id=session["user_id"])
if request.method == "POST":
ticket = lookup(request.form.get("symbol"))
if not ticket:
return apology("Stock symbol not correct!")
cash = rows[0]["cash"]
if "." in request.form.get("shares") or "/" in request.form.get("shares") or "," in request.form.get("shares"):
return apology("Number of shares must be a positive integer!")
try:
shares = float(request.form.get("shares"))
except:
return apology("Number of shares must be a positive integer!")
if (ticket["price"] * shares) > cash:
return apology("Sorry you don't have sufficient amount of cash!")
transaction = db.execute("INSERT INTO transactions (username, company, symbol, shares, transaction_type, transaction_price) VALUES (:username, :company, :symbol, :share, :transaction_type, :transaction_price)",
username=rows[0]["username"], company=ticket["name"], symbol=ticket["symbol"], share=shares, transaction_type="buy", transaction_price=ticket["price"] * shares)
if not transaction:
return apology("Error while making the transaction!")
else:
db.execute("UPDATE users SET cash = :new WHERE id = :id", new=cash - ticket["price"] * shares, id=session["user_id"])
return index()
else:
return render_template("buy.html", balance=usd(rows[0]["cash"]), check=True)`
def apology(message, code=400):
"""Render message as an apology to user."""
def escape(s):
"""
Escape special characters.
https://github.com/jacebrowning/memegen#special-characters
"""
for old, new in [("-", "--"), (" ", "-"), ("_", "__"), ("?", "~q"),
("%", "~p"), ("#", "~h"), ("/", "~s"), ("\"", "''")]:
s = s.replace(old, new)
return s
return render_template("apology.html", top=code, bottom=escape(message)), code`
HTML Code
{% extends "layout.html" %}
{% block title %}
Buy
{% endblock %}
{% block main %}
<table class="table">
<thead>
<tr>
<th>Your available balance</th>
</tr>
</thead>
<tbody>
<tr>
<th>{{ balance }}</th>
</tr>
</tbody>
</table>
<form action="/buy" method="post">
<div class="form-group">
<input autocomplete="off" autofocus class="form-control" name="symbol" placeholder="Symbol of stock" type="text">
</div>
<div class="form-group">
<input autocomplete="off" autofocus class="form-control" name="shares" type="number" min="1" required />
</div>
<button class="btn btn-primary" type="submit">Buy</button>
</form>
{% endblock %}
If shares is a non-integer it should render template apology.html via apology function with return code 400. Instead check50 is detecting return code 200.
Does anybody else have this problem? How can I solve this?
A:
I was able to solve the problem. The problem was that the python code did not check for negative numbers and therefore accepted them (which should not).
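A minimal sketch of such a check, reusing request.form and the apology helper from the question (the message text is just an example):
try:
    shares = int(request.form.get("shares"))
except (ValueError, TypeError):
    return apology("Number of shares must be a positive integer!")
if shares <= 0:
    return apology("Number of shares must be a positive integer!")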
A:
I was able to solve this by taking the shares value from the form and putting it into a try block to typecast it into an int. If that raises a ValueError (e.g. if there is a decimal in the string), it returns the apology page.
If it can typecast without any issue, it means the user entered a whole number and the program moves along to the next steps.
try:
shares_check = int(request.form.get("shares"))
except ValueError:
return apology("Shares must be in whole numbers")
You seem to be 95% of the way there with your try block, but you are checking for floats and trying to handle the decimal in a more complex way than needed.
| c$50 finance non-integers being rejected causing Buy to fail check50 | Everything seems to be working fine with my code, however I am running into a single error for /buy when running check50. :( buy handles fractional, negative, and non-numeric shares. expected status code 400, but got 200.
I think check50 is receiving status code 200 when checking a non-integer such as 1.5 or a string, before the buy form can even be submitted.
Flask app:
@app.route("/buy", methods=["GET", "POST"])
@login_required
def buy():
"""Buy shares of stock"""
rows = db.execute("SELECT * FROM users WHERE id = :id", id=session["user_id"])
if request.method == "POST":
ticket = lookup(request.form.get("symbol"))
if not ticket:
return apology("Stock symbol not correct!")
cash = rows[0]["cash"]
if "." in request.form.get("shares") or "/" in request.form.get("shares") or "," in request.form.get("shares"):
return apology("Number of shares must be a positive integer!")
try:
shares = float(request.form.get("shares"))
except:
return apology("Number of shares must be a positive integer!")
if (ticket["price"] * shares) > cash:
return apology("Sorry you don't have sufficient amount of cash!")
transaction = db.execute("INSERT INTO transactions (username, company, symbol, shares, transaction_type, transaction_price) VALUES (:username, :company, :symbol, :share, :transaction_type, :transaction_price)",
username=rows[0]["username"], company=ticket["name"], symbol=ticket["symbol"], share=shares, transaction_type="buy", transaction_price=ticket["price"] * shares)
if not transaction:
return apology("Error while making the transaction!")
else:
db.execute("UPDATE users SET cash = :new WHERE id = :id", new=cash - ticket["price"] * shares, id=session["user_id"])
return index()
else:
return render_template("buy.html", balance=usd(rows[0]["cash"]), check=True)`
def apology(message, code=400):
"""Render message as an apology to user."""
def escape(s):
"""
Escape special characters.
https://github.com/jacebrowning/memegen#special-characters
"""
for old, new in [("-", "--"), (" ", "-"), ("_", "__"), ("?", "~q"),
("%", "~p"), ("#", "~h"), ("/", "~s"), ("\"", "''")]:
s = s.replace(old, new)
return s
return render_template("apology.html", top=code, bottom=escape(message)), code`
HTML Code
{% extends "layout.html" %}
{% block title %}
Buy
{% endblock %}
{% block main %}
<table class="table">
<thead>
<tr>
<th>Your available balance</th>
</tr>
</thead>
<tbody>
<tr>
<th>{{ balance }}</th>
</tr>
</tbody>
</table>
<form action="/buy" method="post">
<div class="form-group">
<input autocomplete="off" autofocus class="form-control" name="symbol" placeholder="Symbol of stock" type="text">
</div>
<div class="form-group">
<input autocomplete="off" autofocus class="form-control" name="shares" type="number" min="1" required />
</div>
<button class="btn btn-primary" type="submit">Buy</button>
</form>
{% endblock %}
If shares is a non-integer it should render template apology.html via apology function with return code 400. Instead check50 is detecting return code 200.
Does anybody else have this problem? How can I solve this?
| [
"I was able to solve the problem. The problem was that the python code did not check for negative numbers and therefore accepted them (which should not).\n",
"I was able to solve this by taking the shares value from the form and putting it into a try block to typecast it into an int, if it receives a value error (if they have a decimal in the string) then it returns the apology page.\nIf can typecast without any issue then it means that the user entered in a whole number and the program moves along to the next steps.\n try:\n shares_check = int(request.form.get(\"shares\"))\n except ValueError:\n return apology(\"Shares must be in whole numbers\")\n\nYou seem to be 95% the way there with your try-block but you are checking for floats and trying to handle the decimal in a more complex than needed way.\n"
] | [
1,
0
] | [] | [] | [
"flask",
"html",
"python"
] | stackoverflow_0055561427_flask_html_python.txt |
Q:
Find all possible paths in a python graph data structure without using recursive function
I have a serious issue with finding all possible paths in my csv file that looks like this :
Source
Target
Source_repo
Target_repo
SOURCE1
Target2
repo-1
repo-2
SOURCE5
Target3
repo-5
repo-3
SOURCE8
Target5
repo-8
repo-5
There are more than 5000 lines in the dataset. I want to generate all possible paths like this and return a list (Target5 is equal to SOURCE5):
SOURCE1 Target2
SOURCE8 Target5 Target3
I want to implement this solution without using recursive functions, since causes problems (maximum recursion depth exceeded).
This is the current code example :
def attach_co_changing_components(base_component):
co_changes = df_depends_on.loc[df_depends_on["Source_repo"] ==
base_component, "Target_repo"].values
result = {base_component: list(co_changes)}
return result
def dfs(data, path, paths):
datum = path[-1]
if datum in data:
for val in data[datum]:
new_path = path + [val]
paths = dfs(data, new_path, paths)
else:
paths += [path]
return paths
def enumerate_paths(graph, nodes=[]):
nodes = graph.keys()
all_paths = []
for node in nodes:
node_paths = dfs(graph, [node], [])
all_paths += node_paths
return all_paths
if __name__ == "__main__":
df = pd.read_csv("clean_openstack_evolution.csv")
co_changing_components = df[["Source"]].copy()
co_changing_components = co_changing_components.drop_duplicates(
).reset_index(drop=True)
co_changing_components = co_changing_components["Source"].map(
attach_co_changing_components)
co_changing_components = co_changing_components.rename("Path")
co_changing_components = co_changing_components.reset_index(drop=True)
newdict = {}
for k, v in [(key, d[key]) for d in co_changing_components for key in d]:
if k not in newdict: newdict[k] = v
else: newdict[k].append(v)
graph_keys = df_depends_on["Source_repo"].drop_duplicates().to_dict(
).values()
graph_keys = {*graph_keys}
graph_keys = set([
k for k in graph_keys
if len(df_depends_on[df_depends_on["Target"] == k]) > 0
])
result = enumerate_paths(new_dict)
Here is the output after executing the preceding code :
Here is the data link Google drive
I tried to solve the problem using a recursive function, but the code failed with a maximum recursion depth exceeded error. I aim to solve it without recursive functions.
A:
I'm not sure if you want all paths or paths specifically from node to another node. Either way this looks like a job for networkx.
Setup (nx.from_pandas_edgelist)
import networkx as nx
import pandas as pd
df = pd.read_csv("...")
graph = nx.from_pandas_edgelist(df, create_using=nx.DiGraph)
All paths (nx.all_simple_paths)
from itertools import chain, product, starmap
from functools import partial
roots = (node for node, d in graph.in_degree if d == 0)
leaves = (node for node, d in graph.out_degree if d == 0)
all_paths = partial(nx.all_simple_paths, graph)
paths = list(chain.from_iterable(starmap(all_paths, product(roots, leaves))))
From one node to another
source_node = "some_node_in_graph"
target_node = "some_other_node_in_graph"
list(nx.all_simple_paths(graph, source=source_node, target=target_node))
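If you would rather keep the plain-dictionary graph built in the question instead of using networkx, a non-recursive depth-first search with an explicit stack avoids the recursion limit entirely. A minimal sketch, assuming graph is the source-to-targets dictionary from the question:
def enumerate_paths_iterative(graph):
    paths = []
    for start in graph:
        stack = [[start]]              # each stack entry is a partial path
        while stack:
            path = stack.pop()
            last = path[-1]
            extended = False
            for nxt in graph.get(last, []):
                if nxt not in path:    # avoid cycles
                    stack.append(path + [nxt])
                    extended = True
            if not extended:
                paths.append(path)     # dead end, the path is complete
    return paths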
| Find all possible paths in a python graph data structure without using recursive function | I have a serious issue with finding all possible paths in my csv file that looks like this :
Source
Target
Source_repo
Target_repo
SOURCE1
Target2
repo-1
repo-2
SOURCE5
Target3
repo-5
repo-3
SOURCE8
Target5
repo-8
repo-5
There are more than 5000 lines in the dataset. I want to generate all possible paths like this and return a list (Target5 is equal to SOURCE5):
SOURCE1 Target2
SOURCE8 Target5 Target3
I want to implement this solution without using recursive functions, since causes problems (maximum recursion depth exceeded).
This is the current code example :
def attach_co_changing_components(base_component):
co_changes = df_depends_on.loc[df_depends_on["Source_repo"] ==
base_component, "Target_repo"].values
result = {base_component: list(co_changes)}
return result
def dfs(data, path, paths):
datum = path[-1]
if datum in data:
for val in data[datum]:
new_path = path + [val]
paths = dfs(data, new_path, paths)
else:
paths += [path]
return paths
def enumerate_paths(graph, nodes=[]):
nodes = graph.keys()
all_paths = []
for node in nodes:
node_paths = dfs(graph, [node], [])
all_paths += node_paths
return all_paths
if __name__ == "__main__":
df = pd.read_csv("clean_openstack_evolution.csv")
co_changing_components = df[["Source"]].copy()
co_changing_components = co_changing_components.drop_duplicates(
).reset_index(drop=True)
co_changing_components = co_changing_components["Source"].map(
attach_co_changing_components)
co_changing_components = co_changing_components.rename("Path")
co_changing_components = co_changing_components.reset_index(drop=True)
newdict = {}
for k, v in [(key, d[key]) for d in co_changing_components for key in d]:
if k not in newdict: newdict[k] = v
else: newdict[k].append(v)
graph_keys = df_depends_on["Source_repo"].drop_duplicates().to_dict(
).values()
graph_keys = {*graph_keys}
graph_keys = set([
k for k in graph_keys
if len(df_depends_on[df_depends_on["Target"] == k]) > 0
])
result = enumerate_paths(new_dict)
Here is the output after executing the preceding code :
Here is the data link Google drive
I tried to solve the problem using recursive function, but the code failed with the problem of depth exceeded. I aim to solve it without recursive functions.
| [
"I'm not sure if you want all paths or paths specifically from node to another node. Either way this looks like a job for networkx.\nSetup (nx.from_pandas_edgelist)\nimport networkx as nx\nimport pandas as pd\n\n\ndf = pd.read_csv(\"...\")\n\ngraph = nx.from_pandas_edgelist(df, create_using=nx.DiGraph)\n\nAll paths (nx.all_simple_paths)\nfrom itertools import chain, product, starmap\nfrom functools import partial\n\n\nroots = (node for node, d in graph.in_degree if d == 0)\n\nleaves = (node for node, d in graph.out_degree if d == 0)\n\nall_paths = partial(nx.all_simple_paths, graph)\n\npaths = list(chain.from_iterable(starmap(all_paths, product(roots, leaves))))\n\nFrom one node to another\nsource_node = \"some_node_in_graph\"\ntarget_node = \"some_other_node_in_graph\"\nlist(nx.all_simple_paths(graph, source=source_node, target=target_node))\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"graph_theory",
"pandas",
"python",
"recursion"
] | stackoverflow_0074631517_dataframe_graph_theory_pandas_python_recursion.txt |
Q:
Computing the mean of an array considering only some indices
I have two 2d arrays, one containing float values, one containing bool. I want to create an array containing the mean values of the first matrix for each column considering only the values corresponding to False in the second matrix.
For example:
A = [[1 3 5]
[2 4 6]
[3 1 0]]
B = [[True False False]
[False False False]
[True True False]]
result = [2, 3.5, 3.67]
A:
Where B is False, keep the value of A, make it NaN otherwise and then use the nanmean function which ignores NaN's for operations.
np.nanmean(np.where(~B, A, np.nan), axis=0)
>>> array([2. , 3.5 , 3.66666667])
A:
Using numpy.mean with the where argument to specify elements to include in the mean.
np.mean(A, where = ~B, axis = 0)
>>> [2. 3.5 3.66666667]
A:
A = [[1, 3, 5],
[2, 4, 6],
[3, 1, 0]]
B = [[True, False, False],
[False, False, False],
[True, True, False]]
sums = [0]*len(A[0])
amounts = [0]*len(A[0])
for i in range(0, len(A)):
for j in range(0, len(A[0])):
sums[j] = sums[j] + (A[i][j] if not B[i][j] else 0)
amounts[j] = amounts[j] + (1 if not B[i][j] else 0)
result = [sums[i]/amounts[i] for i in range(0, len(sums))]
print(result)
A:
There may be some fancy numpy trick for this, but I think using a list comprehension to construct a new array is the most straightforward.
result = np.array([a_col[~b_col].mean() for a_col, b_col in zip(A.T,B.T)])
To follow better, this is what the line does expanded out:
result=[]
for i in range(len(A)):
new_col = A[:,i][~B[:,i]]
result.append(new_col.mean())
A:
You could also use a masked array:
import numpy as np
result = np.ma.array(A, mask=B).mean(axis=0).filled(fill_value=0)
# Output:
# array([2. , 3.5 , 3.66666667])
which has the advantage of being able to supply a fill_value for when every element in some column in B is True.
| Computing the mean of an array considering only some indices | I have two 2d arrays, one containing float values, one containing bool. I want to create an array containing the mean values of the first matrix for each column considering only the values corresponding to False in the second matrix.
For example:
A = [[1 3 5]
[2 4 6]
[3 1 0]]
B = [[True False False]
[False False False]
[True True False]]
result = [2, 3.5, 3.67]
| [
"Where B is False, keep the value of A, make it NaN otherwise and then use the nanmean function which ignores NaN's for operations.\nnp.nanmean(np.where(~B, A, np.nan), axis=0)\n\n>>> array([2. , 3.5 , 3.66666667])\n\n",
"Using numpy.mean using where argument to specify elements to include in the mean.\nnp.mean(A, where = ~B, axis = 0)\n>>> [2. 3.5 3.66666667]\n\n",
"A = [[1, 3, 5],\n [2, 4, 6],\n [3, 1, 0]]\n\nB = [[True, False, False],\n [False, False, False],\n [True, True, False]]\n\nsums = [0]*len(A[0])\namounts = [0]*len(A[0])\nfor i in range(0, len(A)):\n for j in range(0, len(A[0])):\n sums[j] = sums[j] + (A[i][j] if not B[i][j] else 0)\n amounts[j] = amounts[j] + (1 if not B[i][j] else 0)\n\n\nresult = [sums[i]/amounts[i] for i in range(0, len(sums))]\n\nprint(result)\n\n",
"There may be some fancy numpy trick for this, but I think using a list comprehension to construct a new array is the most straightforward.\nresult = np.array([a_col[~b_col].mean() for a_col, b_col in zip(A.T,B.T)])\n\nTo follow better, this is what the line does expanded out:\nresult=[]\nfor i in range(len(A)):\n new_col = A[:,i][~B[:,i]]\n result.append(new_col.mean())\n\n",
"You could also use a masked array:\nimport numpy as np\n\nresult = np.ma.array(A, mask=B).mean(axis=0).filled(fill_value=0)\n# Output:\n# array([2. , 3.5 , 3.66666667])\n\nwhich has the advantage of being able to supply a fill_value for when every element in some column in B is True.\n"
] | [
3,
2,
0,
0,
0
] | [] | [] | [
"arrays",
"multidimensional_array",
"numpy",
"python"
] | stackoverflow_0074634440_arrays_multidimensional_array_numpy_python.txt |
Q:
positional arguments are being asked for even though I have included them in my args instead?
Hi, I'm writing a program that is meant to do the RK4 method for different ODEs for an assignment. One of the things we have to use is *args. When I call the function that includes *args (the RK4 one) I list the extra parameters at the end. When I try to run it, it says that my function (f2a in this case) is missing 3 required positional arguments, even though I have included them at the end in what I assumed was the args section of my RK4 function's parameter list. Does this mean I'm not indicating that those are extra parameters correctly? Or do I need to add them to the function I'm using RK4 on? I'm really new to coding so any help is much appreciated. Here is my entire code:
import numpy as np
import math
import matplotlib.pyplot as plt
#defining functions
H0=7 #initial height, meters
def f2a(t,H,k,Vin,D):
dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))
return(dhdt)
def fb2(J,t):
x=J[0]
y=J[1]
dxdt=0.25*y-x
dydt=3*x-y
#X0,Y0=1,1 initial conditions
return([dxdt,dydt])
#x0 and y0 are initial conditions
def odeRK4(function,tspan,R,h,*args):
#R is vector of inital conditions
x0=R[0]
y0=R[1]
#writing statement for what to do if h isnt given/other thing
if h==None:
h=.01*(tspan[1]-tspan[0])
elif h> tspan[1]-tspan[0]:
h=.01*(tspan[1]-tspan[0])
else:
h=h
#defining the 2-element array (i hope)
#pretty sure tspan is range of t values
x0=tspan[0] #probably 0 if this is meant for time
xn=tspan[1] #whatever time we want it to end at?
#xn is final x value-t
#x0 is initial
t_values=np.arange(x0,21,1) #0-20
N=len(t_values)
y_val=np.zeros(N)
y_val[0]=y0
#I am trying to print all the Y values into this array
for i in range(1,N):
#rk4 method
#k1
t1=t_values[i-1] #started range @ 1, n-1 starts at 0
y1=y_val[i-1]
k1=function(t1,y1)
#k2
t2=t_values[i-1]+0.5*h
y2=y_val[i-1]+0.5*k1*h
k2=function(t2,y2)
#k3
t3=t_values[i-1]+0.5*h
y3=y_val[i-1]+0.5*k2*h
k3=function(t3,y3)
#k4
t4=t_values[i-1]+h
y4=y_val[i-1]+h*k3
k4=function(t4,y4)
y_val[i]=y_val[i-1]+(1/6)*h*(k1+2*k2+2*k3+k4)
#this fills the t_val array and keeps the loop going
a=np.column_stack(t_values,y_val)
print('At time T, Y= (t on left,Y on right)')
print(a)
plt.plot(t_values,y_val)
print('For 3A:')
#k=10, told by professor bc not included in instructions
odeRK4(f2a, [0,20],[0,7], None, 10,150,7)
A:
You correctly pass in the extra arguments that will be stored in the args parameter of the function odeRK4, when running this:
# at the end of you snippet ^^
odeRK4(f2a, [0,20],[0,7], None, 10,150,7)
However, as you mention f2a requires 3 arguments. Looking at the definition of odeRK4 we see that after passing f2a it is now called function inside odeRK4:
def odeRK4(function,tspan,R,h,*args):
In the function odeRK4 you use function like this, and there is no use of args to be seen anywhere but the parameters of odeRK4:
t1=t_values[i-1] #started range @ 1, n-1 starts at 0
y1=y_val[i-1]
k1=function(t1,y1) # <--- 2 instead of 3 for f2a
#k2
t2=t_values[i-1]+0.5*h
y2=y_val[i-1]+0.5*k1*h
k2=function(t2,y2) # <--- 2 instead of 3 for f2a
# more uses...
So as you see that are two parameters. In order to pass args to the function. You must call them like this:
t1=t_values[i-1] #started range @ 1, n-1 starts at 0
y1=y_val[i-1]
k1=function(*args) # <--- fixed
#k2
t2=t_values[i-1]+0.5*h
y2=y_val[i-1]+0.5*k1*h
k2=function(*args) # <--- fixed
# more uses...
This resolves your issue with the 3 missing arguments, but it means your calculation no longer uses t1, t2, etc. You will have to fix that yourself and use args so that it fits your needs.
[Extra] The * in front of args, when used like this, is the unpack operator. You can find more here: https://www.scaler.com/topics/python/packing-and-unpacking-in-python/ It is similar to the ... spread operator that may be known from JS.
Good Luck!
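A minimal sketch of the more common convention, which keeps t and y as the first two arguments and forwards the extras on every call (this matches how f2a is defined in the question):
k1 = function(t1, y1, *args)
k2 = function(t2, y2, *args)
# and likewise for k3 and k4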
| positional arguments are being asked for even though I have included them in my args instead? | Hi I'm writing a program that is meant to do the RK4 method for different ODEs for an assignment. One of the things we have to use is *args. When I call on my function that includes *args(the rk4 one) I list the extra parameters at the end. When I try to run it it says that my function(f2a in this case) is missing 3 required positional arguments even though I have included them at the end in what I assumed was the args section of my RK4 function parameter list. Does this mean I'm not indicating that those are extra parameters correctly? Or do I need to add them to the function I'm using RK4 on? I'm really new to coding so any help is much appreciated. Here is my entire code:
import numpy as np
import math
import matplotlib.pyplot as plt
#defining functions
H0=7 #initial height, meters
def f2a(t,H,k,Vin,D):
dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))
return(dhdt)
def fb2(J,t):
x=J[0]
y=J[1]
dxdt=0.25*y-x
dydt=3*x-y
#X0,Y0=1,1 initial conditions
return([dxdt,dydt])
#x0 and y0 are initial conditions
def odeRK4(function,tspan,R,h,*args):
#R is vector of inital conditions
x0=R[0]
y0=R[1]
#writing statement for what to do if h isnt given/other thing
if h==None:
h=.01*(tspan[1]-tspan[0])
elif h> tspan[1]-tspan[0]:
h=.01*(tspan[1]-tspan[0])
else:
h=h
#defining the 2-element array (i hope)
#pretty sure tspan is range of t values
x0=tspan[0] #probably 0 if this is meant for time
xn=tspan[1] #whatever time we want it to end at?
#xn is final x value-t
#x0 is initial
t_values=np.arange(x0,21,1) #0-20
N=len(t_values)
y_val=np.zeros(N)
y_val[0]=y0
#I am trying to print all the Y values into this array
for i in range(1,N):
#rk4 method
#k1
t1=t_values[i-1] #started range @ 1, n-1 starts at 0
y1=y_val[i-1]
k1=function(t1,y1)
#k2
t2=t_values[i-1]+0.5*h
y2=y_val[i-1]+0.5*k1*h
k2=function(t2,y2)
#k3
t3=t_values[i-1]+0.5*h
y3=y_val[i-1]+0.5*k2*h
k3=function(t3,y3)
#k4
t4=t_values[i-1]+h
y4=y_val[i-1]+h*k3
k4=function(t4,y4)
y_val[i]=y_val[i-1]+(1/6)*h*(k1+2*k2+2*k3+k4)
#this fills the t_val array and keeps the loop going
a=np.column_stack(t_values,y_val)
print('At time T, Y= (t on left,Y on right)')
print(a)
plt.plot(t_values,y_val)
print('For 3A:')
#k=10, told by professor bc not included in instructions
odeRK4(f2a, [0,20],[0,7], None, 10,150,7)
| [
"You correctly pass in the extra arguments that will be stored in the args parameter of the function odeRK4, when running this:\n# at the end of you snippet ^^\nodeRK4(f2a, [0,20],[0,7], None, 10,150,7)\n\nHowever, as you mention f2a requires 3 arguments. Looking at the definition of odeRK4 we see that after passing f2a it is now called function inside odeRK4:\ndef odeRK4(function,tspan,R,h,*args):\n\nIn the function odeRK4 you use function like this, and there is no use of args to be seen anywhere but the parameters of odeRK4:\nt1=t_values[i-1] #started range @ 1, n-1 starts at 0\ny1=y_val[i-1]\nk1=function(t1,y1) # <--- 2 instead of 3 for f2a\n \n#k2\nt2=t_values[i-1]+0.5*h\ny2=y_val[i-1]+0.5*k1*h\nk2=function(t2,y2) # <--- 2 instead of 3 for f2a\n\n# more uses...\n\nSo as you see that are two parameters. In order to pass args to the function. You must call them like this:\nt1=t_values[i-1] #started range @ 1, n-1 starts at 0\ny1=y_val[i-1]\nk1=function(*args) # <--- fixed\n \n#k2\nt2=t_values[i-1]+0.5*h\ny2=y_val[i-1]+0.5*k1*h\nk2=function(*args) # <--- fixed\n\n# more uses...\n\nThis resolves you issue with 3 arguments wanted but this makes your calcution not use your t1, t2 etc. But you have the fix that yourself and use args so that it fits your needs.\n[Extra] The * in front of args, in when it is sometimes uses. Like this *args is the unpack operator. You can find more here: https://www.scaler.com/topics/python/packing-and-unpacking-in-python/ It is the ... operator that may be known from js.\nGood Luck!\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074634946_python.txt |
Q:
How to write a MySQL Insert Into Statement with Inner Join?
The user adds information here: the form
The information gets added to the shoes table.
The database: the database
I want to insert ShoeImage, ShoeName, ShoeStyle, ShoeColor, ShoePrice, and ShoeDescr, and NOT ShoeID (which is autoincrement),ShoeBrandID, and ShoeSizeID.
My insert statement:
$sql = "INSERT INTO $tblShoes VALUES (NULL, '$ShoeImage', '$ShoeName', '$ShoeStyle', '$ShoeColor',
'$ShoePrice', '$ShoeDescr')";
How to write this insert statement with inner join?
A:
This might work. Note that column names are identifiers, so they should be written with backticks (or left unquoted) rather than single quotes, and the auto-increment ShoeID is simply left out of the column list:
INSERT INTO shoes
(
`ShoeImage`,
`ShoeName`,
`ShoeStyle`,
`ShoeColor`,
`ShoePrice`,
`ShoeDescr`,
`ShoeBrandID`,
`ShoeSizeID`
)
VALUES(
'$ShoeImage',
'$ShoeName',
'$ShoeStyle',
'$ShoeColor',
'$ShoePrice',
'$ShoeDescr',
(SELECT BrandID FROM shoebrand WHERE BrandName = '$ShoeBrand'),
(SELECT SizeID FROM shoesize WHERE Size = '$ShoeSize')
)
| How to write a MySQL Insert Into Statement with Inner Join? | The user adds information here: the form
The information gets added to the shoes table.
The database: the database
I want to insert ShoeImage, ShoeName, ShoeStyle, ShoeColor, ShoePrice, and ShoeDescr, and NOT ShoeID (which is autoincrement),ShoeBrandID, and ShoeSizeID.
My insert statement:
$sql = "INSERT INTO $tblShoes VALUES (NULL, '$ShoeImage', '$ShoeName', '$ShoeStyle', '$ShoeColor',
'$ShoePrice', '$ShoeDescr')";
How to write this insert statement with inner join?
| [
"Might works.\nINSERT INTO shoes\n(\n'ShoeImage',\n'ShoeName',\n'ShoeStyle',\n'ShoeColor',\n'ShoePrice',\n'ShoeDescr',\n'ShoeBrandID',\n'ShoeSizeID'\n)\nVALUES(\nNULL,\n'$ShoeImage',\n'$ShoeName',\n'$ShoeStyle',\n'$ShoeColor',\n'$ShoePrice',\n'$ShoeDescr',\n(SELECT BrandID FROM shoebrand WHERE BrandName = '$ShoeBrand'),\n(SELECT SizeID FROM shoesize WHERE Size = '$ShoeSize')\n)\n\n"
] | [
0
] | [] | [] | [
"inner_join",
"mysql",
"python",
"sql",
"sql_insert"
] | stackoverflow_0074634547_inner_join_mysql_python_sql_sql_insert.txt |
Q:
OpenCV - undistort image and create point cloud based on it
I made around 40 images with a RealSense camera, which gave me RGB and corresponding aligned depth images. With rs.getintrinsic() I got the intrinsic matrix of the camera. But there is still a distortion, which can be seen in the point cloud that can easily be generated from the depth image. Here you can see it on the right side: PointCloud of a Plane in depth image
The point cloud represents a plane.
Now I calculated based on cv.calibrateCamera(..., intrinsic_RS_matrix, flags= cv2.CALIB_USE_INTRINSIC_GUESS|cv2.CALIB_FIX_FOCAL_LENGTH|cv2.CALIB_FIX_PRINCIPAL_POINT) the distortion coefficients of the Camera. For that I use all the 40 rgb images.
Based on the new calculated distortion I calculate with cv2.getOptimalNewCameraMatrix() the new camera matrix and with cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix) the undistorted new rgb and depth image.
Now I want to compute the pointcloud of the new undistorted depth image. But which camera Matrix should I use? The newCameraMatrix or the old one which I got from rs.getIntrinsic()?
As well I used alpha=0, so there is no cropping of the image. But if I would use alpha = 1 there would be a cropping. In that case should I use the cropped image or the uncropped one?
Here is the full Code for calculating the distortion and newCameraMatrix:
checkerboard = (6, 10)
criteria = (cv2.TERM_CRITERIA_EPS +
cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Vector for 3D points
threedpoints = []
# Vector for 2D points
twodpoints = []
# 3D points real world coordinates
objectp3d = np.zeros((1, checkerboard[0]*checkerboard[1], 3), np.float32)
objectp3d[0, :, :2] = np.mgrid[0:checkerboard[0], 0:checkerboard[1]].T.reshape(-1, 2)* 30
prev_img_shape = None
path = r"..."
resolution= "1280_720"
_,dates,_ = next(os.walk(path))
images = glob.glob(path)
print(len(images))
for filename in images:
image = cv2.imread(filename)
grayColor = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Find the chess board corners
ret, corners = cv2.findChessboardCorners(image, checkerboard, flags = cv2.CALIB_CB_ADAPTIVE_THRESH )
if ret == True :
threedpoints.append(objectp3d)
# Refining pixel coordinates for given 2d points.
corners2 = cv2.cornerSubPix(
grayColor, corners,
(11, 11),
(-1, -1), criteria)
twodpoints.append(corners2)
# Draw and display the corners
image = cv2.drawChessboardCorners(image,
checkerboard,
corners2, ret)
print("detected corners: ", len(twodpoints))
K_RS = np.load(r"path to RS intrinsic")
ret, matrix, distortion, r_vecs, t_vecs = cv2.calibrateCamera(
threedpoints, twodpoints, grayColor.shape[::-1], cameraMatrix=K_RS, distCoeffs= None, flags= cv2.CALIB_USE_INTRINSIC_GUESS|cv2.CALIB_FIX_FOCAL_LENGTH|cv2.CALIB_FIX_PRINCIPAL_POINT)# None, None)
def loadUndistortedImage(filename, cameraMatrix, distCoeffs):
image = cv2.imread(filename,-1)
# setup enlargement and offset for new image
imageShape = image.shape #image.size
imageSize = (imageShape[1],imageShape[0])
# # create a new camera matrix with the principal point offest according to the offset above
newCameraMatrix, roi = cv2.getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize,
                                                                 alpha=0, newImgSize=imageSize)
# create undistortion maps
R = np.array([[1,0,0],[0,1,0],[0,0,1]])
outputImage = cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix)
roi_x, roi_y, roi_w, roi_h = roi
cropped_outputImage = outputImage[roi_y : roi_y + roi_h, roi_x : roi_x + roi_w]
fixed_filename = r"..."
cv2.imwrite(fixed_filename,outputImage)
return newCameraMatrix
#Undistort the images, then save the restored images
newmatrix = loadUndistortedImage(r'...', matrix, distortion)
A:
I would suggest using the uncropped image, which has the same width and height as the original images that were used for camera calibration. The cropped one will have a different image shape/size.
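For the point-cloud part of the question, a minimal sketch of the usual back-projection (purely illustrative, assuming depth is the undistorted depth image and that newCameraMatrix was the matrix passed to cv2.undistort, so its focal lengths and principal point describe the undistorted image):
import numpy as np

fx, fy = newCameraMatrix[0, 0], newCameraMatrix[1, 1]
cx, cy = newCameraMatrix[0, 2], newCameraMatrix[1, 2]

v, u = np.indices(depth.shape)   # pixel rows and columns of the undistorted depth image
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.dstack((x, y, z)).reshape(-1, 3)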
| OpenCV - undistort image and create point cloud based on it | I made around 40 images with a realsense camera, which gave me rgb and corresponding aligned depth images. With rs.getintrinsic() i got the intrinsic matrix of the camera. But there is still a distortion which can be seen in the pointcloud, which can be easily generated with the depth image. Here you can see it on the right side: PointCloud of a Plane in depth image
The point cloud represents a plane.
Now I calculated based on cv.calibrateCamera(..., intrinsic_RS_matrix, flags= cv2.CALIB_USE_INTRINSIC_GUESS|cv2.CALIB_FIX_FOCAL_LENGTH|cv2.CALIB_FIX_PRINCIPAL_POINT) the distortion coefficients of the Camera. For that I use all the 40 rgb images.
Based on the new calculated distortion I calculate with cv2.getOptimalNewCameraMatrix() the new camera matrix and with cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix) the undistorted new rgb and depth image.
Now I want to compute the pointcloud of the new undistorted depth image. But which camera Matrix should I use? The newCameraMatrix or the old one which I got from rs.getIntrinsic()?
As well I used alpha=0, so there is no cropping of the image. But if I would use alpha = 1 there would be a cropping. In that case should I use the cropped image or the uncropped one?
Here is the full Code for calculating the distortion and newCameraMatrix:
checkerboard = (6, 10)
criteria = (cv2.TERM_CRITERIA_EPS +
cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Vector for 3D points
threedpoints = []
# Vector for 2D points
twodpoints = []
# 3D points real world coordinates
objectp3d = np.zeros((1, checkerboard[0]*checkerboard[1], 3), np.float32)
objectp3d[0, :, :2] = np.mgrid[0:checkerboard[0], 0:checkerboard[1]].T.reshape(-1, 2)* 30
prev_img_shape = None
path = r"..."
resolution= "1280_720"
_,dates,_ = next(os.walk(path))
images = glob.glob(path)
print(len(images))
for filename in images:
image = cv2.imread(filename)
grayColor = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Find the chess board corners
ret, corners = cv2.findChessboardCorners(image, checkerboard, flags = cv2.CALIB_CB_ADAPTIVE_THRESH )
if ret == True :
threedpoints.append(objectp3d)
# Refining pixel coordinates for given 2d points.
corners2 = cv2.cornerSubPix(
grayColor, corners,
(11, 11),
(-1, -1), criteria)
twodpoints.append(corners2)
# Draw and display the corners
image = cv2.drawChessboardCorners(image,
checkerboard,
corners2, ret)
print("detected corners: ", len(twodpoints))
K_RS = np.load(r"path to RS intrinsic")
ret, matrix, distortion, r_vecs, t_vecs = cv2.calibrateCamera(
threedpoints, twodpoints, grayColor.shape[::-1], cameraMatrix=K_RS, distCoeffs= None, flags= cv2.CALIB_USE_INTRINSIC_GUESS|cv2.CALIB_FIX_FOCAL_LENGTH|cv2.CALIB_FIX_PRINCIPAL_POINT)# None, None)
def loadUndistortedImage(filename, cameraMatrix, distCoeffs):
image = cv2.imread(filename,-1)
# setup enlargement and offset for new image
imageShape = image.shape #image.size
imageSize = (imageShape[1],imageShape[0])
# # create a new camera matrix with the principal point offest according to the offset above
newCameraMatrix, roi = cv2.getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize,
                                                                 alpha=0, newImgSize=imageSize)
# create undistortion maps
R = np.array([[1,0,0],[0,1,0],[0,0,1]])
outputImage = cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix)
roi_x, roi_y, roi_w, roi_h = roi
cropped_outputImage = outputImage[roi_y : roi_y + roi_h, roi_x : roi_x + roi_w]
fixed_filename = r"..."
cv2.imwrite(fixed_filename,outputImage)
return newCameraMatrix
#Undistort the images, then save the restored images
newmatrix = loadUndistortedImage(r'...', matrix, distortion)
| [
"I would suggest to use uncropped image that has same width and length of the original images that been used for camera calibration. The cropped one will has different image shape/size.\n"
] | [
0
] | [] | [] | [
"camera_calibration",
"distortion",
"opencv",
"point_clouds",
"python"
] | stackoverflow_0074027213_camera_calibration_distortion_opencv_point_clouds_python.txt |
Q:
Django: Converse of `__endswith`
Django allows me to do this:
chair = Chair.objects.filter(name__endswith='hello')
But I want to do this:
chair = Chair.objects.filter(name__isendof='hello')
I know that the lookup __isendof doesn't exist. But I want something like this. I want it to be the converse of __endswith. It should find all chairs such that 'hello'.endswith(chair.name).
Possible in Django? ORM operations are preferable to SQL ones.
A:
Django ORM is not a silver bullet, there is nothing wrong in writing parts of SQL in case handling with plain ORM is difficult or impossible. This is a really good use case of extra():
Entry.objects.extra(where=['"hello" LIKE CONCAT("%%", name)'])
Note that, since we are writing plain SQL here - it would be database backend specific anyway. This particular is mysql specific and based on this topic: MySQL: What is a reverse version of LIKE?. Should work for PostgreSQL too (haven't tested).
Note that you can adapt the query into a reusable custom Lookup (introduced in Django 1.7):
imagine you have the following model
class MyModel(models.Model):
name = models.CharField(max_length=100)
def __unicode__(self):
return self.name
define the Lookup class with an as_sql() method implemented:
class ConverseEndswith(models.Lookup):
lookup_name = 'ce'
def as_sql(self, qn, connection):
lhs, lhs_params = self.process_lhs(qn, connection)
rhs, rhs_params = self.process_rhs(qn, connection)
params = lhs_params + rhs_params
return '%s LIKE CONCAT("%%%%", %s)' % (rhs, lhs), params
models.Field.register_lookup(ConverseEndswith)
then, here is how our custom __ce lookup works in shell:
>>> import django
>>> django.setup()
>>> from myapp.models import MyModel
>>> for name in ['hello', 'ello', 'llo', 'test1', 'test2']:
... MyModel.objects.create(name=name)
>>> MyModel.objects.filter(name__ce='hello')
[<MyModel: hello>, <MyModel: ello>, <MyModel: llo>]
>>> MyModel.objects.filter(name__ce='hello').query.__str__()
u'SELECT `myapp_mymodel`.`id`, `myapp_mymodel`.`name` FROM `myapp_mymodel` WHERE hello LIKE CONCAT("%", `myapp_mymodel`.`name`)'
Another option is to make the check in Python. Since the LIKE query would make a full scan through all of the records inside the Entry table, you can get them all and check one by one using Python's endswith():
[entry for entry in Entry.objects.all() if 'hello'.endswith(entry.name)]
A:
If you have the possibility to use Django 1.7 you can use custom lookups. Otherwise I think you have to resort to using .extra or .raw.
A:
probably something like this is as close as you are going to get ... even though it's less than awesome
target_string = "hello"
chair = Chair.objects.filter(name__in=[target_string[-i:] for i in range(len(target_string))])
A:
chair = Chair.objects.exclude(name__endswith='hello')
| Django: Converse of `__endswith` | Django allows me to do this:
chair = Chair.objects.filter(name__endswith='hello')
But I want to do this:
chair = Chair.objects.filter(name__isendof='hello')
I know that the lookup __isendof doesn't exist. But I want something like this. I want it to be the converse of __endswith. It should find all chairs such that 'hello'.endswith(chair.name).
Possible in Django? ORM operations are preferable to SQL ones.
| [
"Django ORM is not a silver bullet, there is nothing wrong in writing parts of SQL in case handling with plain ORM is difficult or impossible. This is a really good use case of extra():\nEntry.objects.extra(where=['\"hello\" LIKE CONCAT(\"%%\", name)'])\n\nNote that, since we are writing plain SQL here - it would be database backend specific anyway. This particular is mysql specific and based on this topic: MySQL: What is a reverse version of LIKE?. Should work for PostgreSQL too (haven't tested).\nNote that you can adapt the query into a reusable custom Lookup (introduced in Django 1.7):\n\nimagine you have the following model\nclass MyModel(models.Model):\n name = models.CharField(max_length=100)\n\n def __unicode__(self):\n return self.name\n\ndefine the Lookup class with an as_sql() method implemented:\nclass ConverseEndswith(models.Lookup):\n lookup_name = 'ce'\n\n def as_sql(self, qn, connection):\n lhs, lhs_params = self.process_lhs(qn, connection)\n rhs, rhs_params = self.process_rhs(qn, connection)\n params = lhs_params + rhs_params\n return '%s LIKE CONCAT(\"%%%%\", %s)' % (rhs, lhs), params\n\nmodels.Field.register_lookup(ConverseEndswith)\n\nthen, here is how our custom __ce lookup works in shell:\n>>> import django\n>>> django.setup()\n>>> from myapp.models import MyModel\n>>> for name in ['hello', 'ello', 'llo', 'test1', 'test2']:\n... MyModel.objects.create(name=name)\n\n>>> MyModel.objects.filter(name__ce='hello')\n[<MyModel: hello>, <MyModel: ello>, <MyModel: llo>]\n>>> MyModel.objects.filter(name__ce='hello').query.__str__()\nu'SELECT `myapp_mymodel`.`id`, `myapp_mymodel`.`name` FROM `myapp_mymodel` WHERE hello LIKE CONCAT(\"%\", `myapp_mymodel`.`name`)'\n\n\n\nAnother option is to make the check in Python. Since the LIKE query would make a full scan through all of the records inside the Entry table, you can get them all and check one by one using Python's endswith():\n[entry for entry in Entry.objects.all() if 'hello'.endswith(entry.name)] \n\n",
"If you have the possibility to use Django 1.7 you can use custom lookups. Otherwise I think you have to resort to using .extra or .raw.\n",
"probably something like is as close as you are going to get ... even though its less than awesome\ntarget_string = \"hello\"\nchair = Chair.objects.filter(name__in=[target_string[-i:] for i in range(len(target_string))])\n\n",
"chair = Chair.objects.exclude(name__endswith='hello')\n\n"
] | [
6,
1,
0,
0
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0024725182_django_django_models_python.txt |
Q:
Best way to parse nested dictionary in Pandas?
I have the following code that gets the submissions and comments from the subreddit tennis:
headlines = {}
comments = []
i = 1
for submission in reddit.subreddit('tennis').search("Djokovic loves", sort="relevance", limit=10):
h = {}
c = {}
h['title'] = submission.title
h['id'] = submission.id
h['score'] = submission.score
submission.comments.replace_more(limit=0)
for comment in submission.comments.list():
c['author'] = comment.author
c['body'] = comment.body
comments.append(c)
headlines['headline ' + str(i)] = h
headlines['comments ' + str(i)] = comments
i += 1
So the output would be something like:
{'headline 1':
{
'title': 'abc',
'id': 123,
'score': 0.5
},
'comment 1': [{'author': 'James', 'body': 'He is good!'}]
}
Is there a way to parse this structure into a Pandas dataframe, or do you suggest any other way (data structures) to store both submissions and the comments?
Thank you!
A:
data = [{'headline': {'title': 'abc', 'id':123, 'score':0.5},'comment': [{'author': 'James', 'body': 'He is good!'}]}]
df = pd.json_normalize(data, record_path='comment', meta=[['headline', 'title'],['headline', 'id'], ['headline','score']], record_prefix='comment.')
| | comment.author | comment.body | headline.title | headline.id | headline.score |
|---:|:-----------------|:---------------|:-----------------|--------------:|-----------------:|
| 0 | James | He is good! | abc | 123 | 0.5 |
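If the goal is to feed the scraped results straight into json_normalize like this, it may be easier to collect each submission as one record while looping, instead of numbered keys in a single dict. A rough sketch of that restructuring, reusing the reddit and pd objects from the question (str(comment.author) is used here only to turn the Redditor object into a plain name):
records = []
for submission in reddit.subreddit('tennis').search("Djokovic loves", sort="relevance", limit=10):
    submission.comments.replace_more(limit=0)
    records.append({
        'headline': {'title': submission.title, 'id': submission.id, 'score': submission.score},
        'comment': [{'author': str(comment.author), 'body': comment.body}
                    for comment in submission.comments.list()],
    })

df = pd.json_normalize(records, record_path='comment',
                       meta=[['headline', 'title'], ['headline', 'id'], ['headline', 'score']],
                       record_prefix='comment.')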
| Best way to parse nested dictionary in Pandas? | I have the following code that get the submissions and comments from a subreddit tennis:
headlines = {}
comments = []
i = 1
for submission in reddit.subreddit('tennis').search("Djokovic loves", sort="relevance", limit=10):
h = {}
c = {}
h['title'] = submission.title
h['id'] = submission.id
h['score'] = submission.score
submission.comments.replace_more(limit=0)
for comment in submission.comments.list():
c['author'] = comment.author
c['body'] = comment.body
comments.append(c)
headlines['headline ' + str(i)] = h
headlines['comments ' + str(i)] = comments
i += 1
So the output would be something like:
{'headline 1':
{
'title': 'abc',
'id': 123,
'score': 0.5
},
'comment 1': [{'author': 'James', 'body': 'He is good!'}]
}
Is there a way to parse this structure into a Pandas dataframe, or do you suggest any other way (data structures) to store both submissions and the comments?
Thank you!
| [
"data = [{'headline': {'title': 'abc', 'id':123, 'score':0.5},'comment': [{'author': 'James', 'body': 'He is good!'}]}]\n \ndf = pd.json_normalize(data, record_path='comment', meta=[['headline', 'title'],['headline', 'id'], ['headline','score']], record_prefix='comment.')\n \n | | comment.author | comment.body | headline.title | headline.id | headline.score |\n |---:|:-----------------|:---------------|:-----------------|--------------:|-----------------:|\n | 0 | James | He is good! | abc | 123 | 0.5 |\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074634710_dataframe_pandas_python.txt |
Q:
Is there a faster way to rebuild a dataframe based on certain values of rows?
I loaded a .csv file with around 620k rows and 6 columns into jupyter notebook. The data is like this:
col_1 col_2 col_3 col_4 col_5
ID_1 388343 388684 T.45396D 2.400000e-03
ID_1 388343 388684 T.45708S 3.400000e-04
ID_1 388343 388684 T.48892G 2.200000e-10
ID_1 388343 388684 T.56898F 1.900000e-21
ID_1 388343 388684 T.64122D 2.300000e-04
I need to rebuild the table such that the ID (col_1) is unique with the smallest value of (col_5). What I've done is:
for i in unique_col_1:
index = data[(data['col_1'] == i)].index
min_value = data.col_5.iloc[index].min()
index = data[ (data['col_1'] == i) & (data['col_5'] != min_value) ].index
data.drop(index, inplace=True)
but this is too slow; the processing speed is around 6.5 it/s on my machine, and 8 it/s when I run it on Google Colaboratory.
Is there any better way to do this in faster time?
A:
This might not be the fastest possible implementation, but it is certainly faster than looping over all values of col_1 and iteratively dropping rows.
df.sort_values("col_5").drop_duplicates(subset="col_1", keep="first")
there are two major performance considerations at issue with your implementation:
vectorization:
pandas functions such as sort_values, drop_duplicates, and other operations are written in cython (a python extension library which builds compiled modules which run in C or C++). These functions are hundreds or thousands of times faster than python code written with for loops for large datasets. so whenever possible, use built in pandas operators on the whole array at once rather than looping over the data yourself.
iterative array resizing:
pandas is built on numpy, and uses contiguous arrays in memory to store columns of numeric data. Allocating these arrays is (relatively) slow; performing operations on them is fast. When you resize an array, you need to re-allocate again and copy the data to the new resized array. So when you loop over an array and in each iteration do something like drop or append (which has been deprecated for exactly this reason), you're re-allocating the entire dataframe's array in every iteration. Better would be to build a list of array indices you want to drop and then drop them all at once at the end of the loop; best is to use a vectorized solution and skip the for loop in the first place.
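As a quick, self-contained illustration of the vectorized approach (column names and values are adapted from the sample in the question; the trailing sort_index just restores the original row order):
import pandas as pd

data = pd.DataFrame({
    "col_1": ["ID_1", "ID_1", "ID_2", "ID_2"],
    "col_4": ["T.45396D", "T.45708S", "T.48892G", "T.56898F"],
    "col_5": [2.4e-03, 3.4e-04, 2.2e-10, 1.9e-21],
})

# keep, for each col_1, only the row with the smallest col_5
result = (data.sort_values("col_5")
              .drop_duplicates(subset="col_1", keep="first")
              .sort_index())
print(result)

# an equivalent alternative: pick the index of the per-group minimum directly
result_alt = data.loc[data.groupby("col_1")["col_5"].idxmin()]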
| Is there a faster way to rebuild a dataframe based on certain values of rows? | I loaded a .csv file with around 620k rows and 6 columns into jupyter notebook. The data is like this:
col_1 col_2 col_3 col_4 col_5
ID_1 388343 388684 T.45396D 2.400000e-03
ID_1 388343 388684 T.45708S 3.400000e-04
ID_1 388343 388684 T.48892G 2.200000e-10
ID_1 388343 388684 T.56898F 1.900000e-21
ID_1 388343 388684 T.64122D 2.300000e-04
I need to rebuild the table such that the ID (col_1) is unique with the smallest value of (col_5). What I've done is:
for i in unique_col_1:
index = data[(data['col_1'] == i)].index
min_value = data.col_5.iloc[index].min()
index = data[ (data['col_1'] == i) & (data['col_5'] != min_value) ].index
data.drop(index, inplace=True)
but this is too slow which the processing speed is around 6.5 it/s in my machine, and 8 it/s when I run it on google colaboratory.
Is there any better way to do this in faster time?
| [
"might not be the fastest possible implementation, but it is certainly faster than looping over all values of col_1 and iteratively dropping it.\ndf.sort_values(\"col_5\").drop_duplicates(subset=\"col_1\", keep=First)\n\nthere are two major performance considerations at issue with your implementation:\n\nvectorization:\npandas functions such as sort_values, drop_duplicates, and other operations are written in cython (a python extension library which builds compiled modules which run in C or C++). These functions are hundreds or thousands of times faster than python code written with for loops for large datasets. so whenever possible, use built in pandas operators on the whole array at once rather than looping over the data yourself.\niterative array resizing:\npandas is built on numpy, and uses continuous arrays in memory to store columns of numeric data. Allocating these arrays is (relatively) slow; performing operations on them is fast. When you resize an array, you need to re-allocate again and copy the data to the new resized array. So when you loop over an array and in each iteration do something like drop or append (which has been deprecated for exactly this reason), you're re-allocating the entire dataframe's array in every iteration. better would be to build a list of array indices you want to drop and then drop them all once at the end of the loop; best is to use a vectorized solution and skip the for loop in the first place.\n\n"
] | [
4
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074635119_dataframe_pandas_python.txt |
Q:
How can I effectively use the same orm model in two git repositories?
The initial situation is that I have two fast-api servers that access the same database. One is the real service for my application and the other is a service for loading data from different sources. Both services have their own Github repository, but use the same orm data model.
My question is: What are best practices to manage this orm data model without always editing it in both code bases?
The goal would be to have no duplicated code and to adapt changes to the database only in one place.
Current structure of the projects:
Service 1 (actual backend):
.
├── app
│ ├── __init__.py
│ ├── main.py
│ └── database
│ ├── __init__.py
│ ├── ModelItem.py # e.g. model item
│ ├── ... # other models
│ ├── SchemaItem.py # e.g. schema item
│ └── ... # other schemas
│ ├── routers
│ └── ...
Service 2 (data loader):
.
├── app
│ ├── __init__.py
│ ├── main.py
│ └── database
│ ├── __init__.py
│ ├── ModelItem.py # e.g. model item
│ ├── ... # other models
│ ├── SchemaItem.py # e.g. schema item
│ └── ... # other schemas
│ ├── routers
│ └── ...
sqlalchemy(.orm) library is used for the orm models and schemas.
I had thought about making another repository out of it, or a library of my own. But then I would not know how best to import it, and whether that would just create more work.
Based on the answer, I created a private repository that I install via pip in the two servers.
Structure of the package Python packaging projects
Install the package from a private Github repository Installing Private Python Packages
A:
Modularisation (avoiding code repetition) is a best practice, and so the ideal thing to do would indeed be to extract the model definition to a single file and import it where needed.
The problem is, when you deploy your two services both of them need to be able to 'see' the model file ... otherwise they can't import it. Because they are deployed separately, this isn't going to be as easy as before (they presumably aren't sharing a filesystem; if they were, it would be just as simple as before).
The usual solution to this is to make your model file available as a Python module and provide it via some server both services can reach over the network (for example, the public python package index, PyPI). You then install the module and import it in both deployed services.
This is inevitably going to be (a lot!) more work than just maintaining two copies. However, if you go to more than two copies that calculus could quickly change.
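To make that concrete, here is a minimal sketch of the shared-package route. All names here (shared-models, your-org, the module path) are placeholders rather than anything taken from the question:
# pyproject.toml in the new shared repository (which contains a shared_models/ package)
[project]
name = "shared-models"
version = "0.1.0"
dependencies = ["sqlalchemy>=1.4"]

[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

# in each service's requirements.txt, pin the private GitHub repo by tag
shared-models @ git+ssh://git@github.com/your-org/shared-models.git@v0.1.0

# then, in either service
from shared_models.model_item import ModelItem
Publishing the package to an internal index instead of installing straight from Git works the same way; the important part is that both services depend on one versioned package instead of two copies of the model files.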
| How can I effectively use the same orm model in two git repositories? | The initial situation is that I have two fast-api servers that access the same database. One is the real service for my application and the other is a service for loading data from different sources. Both services have their own Github repository, but use the same orm data model.
My question is: What are best practices to manage this orm data model without always editing it in both code bases?
The goal would be to have no duplicated code and to adapt changes to the database only in one place.
Current structure of the projects:
Service 1 (actual backend):
.
├── app
│ ├── __init__.py
│ ├── main.py
│ └── database
│ ├── __init__.py
│ ├── ModelItem.py # e.g. model item
│ ├── ... # other models
│ ├── SchemaItem.py # e.g. schema item
│ └── ... # other schemas
│ ├── routers
│ └── ...
Service 2 (data loader):
.
├── app
│ ├── __init__.py
│ ├── main.py
│ └── database
│ ├── __init__.py
│ ├── ModelItem.py # e.g. model item
│ ├── ... # other models
│ ├── SchemaItem.py # e.g. schema item
│ └── ... # other schemas
│ ├── routers
│ └── ...
sqlalchemy(.orm) library is used for the orm models and schemas.
I had thought about making another repository out of it or a library of my own. But then I would not know how best to import them and whether I won't just produce more work.
Based on the answer, I created a private repository that I install via pip in the two servers.
Structure of the package Python packaging projects
Install the package from a private Github repository Installing Private Python Packages
| [
"Modularisation (avoiding code repetition) is a best practice, and so the ideal thing to do would indeed be to extract the model definition to a single file and import it where needed.\nThe problem is, when you deploy your two services both of them need to be able to 'see' the model file ... otherwise they can't import it. Because they are deployed separately, this isn't going to be as easy as before (they presumably aren't sharing a filesystem; if they were, it would be just as simple as before).\nThe usual solution to this is to make your model file available as a Python module and provide it via some server both services can reach over the network (for example, the public python package index, PyPI). You then install the module and import it in both deployed services.\nThis is inevitably going to be (a lot!) more work than just maintaining two copies. However, if you go to more than two copies that calculus could quickly change.\n"
] | [
0
] | [] | [] | [
"fastapi",
"python",
"sqlalchemy"
] | stackoverflow_0074634434_fastapi_python_sqlalchemy.txt |
Q:
Tkinter - How can I sync two Scrollbars with two Text widgets to mirror each other's view?
I need to sync two scrollbars. Both of them manage a different text widget, and when I scroll in the first one I want to see the same behaviour in the second one. I don't want to use a single scrollbar; both of them must be synchronized. How can I reach my goal? Below is a simple code example (here the scrollbars are not synchronized). Help me to fix it. Thanks for your support.
import tkinter as tk
root = tk.Tk()
file1data = ("ciao\n"*100)
S1 = tk.Scrollbar(root)
S1.grid(row=0, column=1,sticky=tk.N + tk.S + tk.E + tk.W)
template1 = tk.Text(root, height=25, width=50,wrap=tk.NONE, yscrollcommand=S1.set)
template1.grid(row=0, column=0)
template1.insert(tk.END, file1data)
S1.config(command=template1.yview)
S2 = tk.Scrollbar(root)
S2.grid(row=0, column=3,sticky=tk.N + tk.S + tk.E + tk.W)
template2 = tk.Text(root, height=25, width=50, wrap=tk.NONE, yscrollcommand=S2.set)
template2.grid(row=0, column=2)
template2.insert(tk.END, file1data)
S2.config(command=template2.yview)
tk.mainloop()
A:
You can achieve this by writing a function:
def sync_scroll(*args):
template1.yview(*args)
template2.yview(*args)
and setting the scrollbars to this command:
S1.config(command=sync_scroll)
S2.config(command=sync_scroll)
The sync_scroll function will be triggered by each of the scrollbars with the already calculated position and mirrors it with the moveto command or the scroll command. The same applies to xview. Basically all they do is calculate the relative position or parse the appropriate arguments and use the moveto or scroll command to configure the view. If you decide to have a binding to the mousewheel you can use the same mechanic to achieve this in the other direction.
EDIT:
Thanks to @acw1668 for pointing out that the function didn't cover the arrow buttons before and for the shortening version of this code.
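For the mousewheel direction mentioned above, a small sketch continuing from the question's template1/template2 widgets (this assumes Windows/macOS-style <MouseWheel> events; on X11 you would bind <Button-4>/<Button-5> instead):
def sync_mousewheel(event):
    # scroll both text widgets by one unit in the wheel's direction
    step = -1 if event.delta > 0 else 1
    template1.yview_scroll(step, "units")
    template2.yview_scroll(step, "units")
    return "break"  # suppress the default single-widget scrolling

template1.bind("<MouseWheel>", sync_mousewheel)
template2.bind("<MouseWheel>", sync_mousewheel)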
| Tkinter - How can I sync two Scrollbar's with two Text widget's to mirrow each others view? | I need to sync two scrolled bars. both of them manage a different text widget and when I scroll in the first one, I want to see the same behaviour in the second one. I don't want to use a single scrolled bar, both of them must to be syncronized. how can I rech my goal? below a simple example code (here the scrolled bars are not syncronized). help me to fix it. thanks for your support.
import tkinter as tk
root = tk.Tk()
file1data = ("ciao\n"*100)
S1 = tk.Scrollbar(root)
S1.grid(row=0, column=1,sticky=tk.N + tk.S + tk.E + tk.W)
template1 = tk.Text(root, height=25, width=50,wrap=tk.NONE, yscrollcommand=S1.set)
template1.grid(row=0, column=0)
template1.insert(tk.END, file1data)
S1.config(command=template1.yview)
S2 = tk.Scrollbar(root)
S2.grid(row=0, column=3,sticky=tk.N + tk.S + tk.E + tk.W)
template2 = tk.Text(root, height=25, width=50, wrap=tk.NONE, yscrollcommand=S2.set)
template2.grid(row=0, column=2)
template2.insert(tk.END, file1data)
S2.config(command=template2.yview)
tk.mainloop()
| [
"You can achieve this by writing a function:\ndef sync_scroll(*args):\n template1.yview(*args)\n template2.yview(*args)\n\nand setting the scrollbars to this command:\nS1.config(command=sync_scroll)\nS2.config(command=sync_scroll)\n\nThe sync_scroll will be triggered by each of the scrollbars with the already calculated position and mirror it's position with the moveto command or the scroll command. Same applies to xview. Basically all they do is to calculate the relative position or parse the appropriated arguments and use the moveto or scroll command to configure the view. If you decide to have a binding to the mousewheel you can use the same mechanic to achieve this in the other direction.\nEDIT:\nThanks to @acw1668 for pointing out that the function didn't cover the arrow buttons before and for the shortening version of this code.\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x",
"tkinter",
"tkinter_scrolledtext"
] | stackoverflow_0074635102_python_python_3.x_tkinter_tkinter_scrolledtext.txt |
Q:
ProcessPoolExecutor using map hang on large load
Experiencing hangs running ProcessPoolExecutor on map, only on a relatively large load.
The behaviour we see is that after about 1 minute of hard work, the job seems to hang: the CPU utilization drops sharply then becomes idle; the stack trace also seems to show the same portion of calls as time progresses.
def work_wrapper(args):
return work(*args)
def work():
work.....
def start_working(...):
with concurrent.futures.ProcessPoolExecutor(max_workers=num_threads, mp_context=mp.get_context('fork')) as executor:
args = [arg_list1, arg_list2, ...]
for res in executor.map(work_wrapper, args):
pass
if __name__ == "__main__":
mp.set_start_method('fork',force=True)
start_working(...)
Stack trace (we log every 5 minutes but they appear pretty similar):
Thread 0x00007f4d0ca27700 (most recent call first):
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 373 in _send
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 402 in _send_bytes
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 205 in send_bytes
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 250 in _feed
File "/usr/local/lib/python3.10/threading.py", line 953 in run
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
Thread 0x00007f4d156fc700 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 199 in _finalize_join
File "/usr/local/lib/python3.10/multiprocessing/util.py", line 224 in __call__
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 151 in join_thread
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 515 in join_executor_internals
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 469 in terminate_broken
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 323 in run
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
Thread 0x00007f4d19cce740 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 775 in shutdown
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 649 in __exit__
File "/app/main.py", line 256 in start_working
File "/app/main.py", line 51 in main
File "/app/main.py", line 96 in <module>
File "/app/main.py", line 96 in <module>
File "/app/main.py", line 51 in main
File "/app/main.py", line 256 in start_working
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 649 in __exit__
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 775 in shutdown
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
Thread 0x00007f4d19cce740 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 323 in run
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 469 in terminate_broken
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 515 in join_executor_internals
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 151 in join_thread
File "/usr/local/lib/python3.10/multiprocessing/util.py", line 224 in __call__
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 199 in _finalize_join
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
Thread 0x00007f4d156fc700 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/threading.py", line 953 in run
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 250 in _feed
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 205 in send_bytes
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 402 in _send_bytes
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 373 in _send
Thread 0x00007f4d0ca27700 (most recent call first):
Python version: 3.10.8, Docker base image: python:3.10-slim
I tried updating python version, changing multiprocessing context (tried both spawn and fork, both give same behaviour)
A:
The problem you're having is due to Executor.map not handling large/infinite iterable inputs in a sane way. Before it yields a single value, it consumes the entire input iterator and submits a task for every input.
If your inputs are produced lazily (on the theory that this would keep memory usage down), nope, they're all read in immediately. If they're infinite (with the assumption you can break and stop when receiving a specific result), nope, the program will try to submit infinite tasks and you'll run out of memory. If they're just huge, well, you'll pay the overhead for all the submitted tasks (management overhead, pickling them to pass them to the workers, etc.).
If you can avoid processing in blocks quite that large, that's an easy fix. You could also copy the fixed implementation of Executor.map in the PR attached to the issue I linked as a top-level function, manually passing an executor to it to act as the self argument (it's all implemented in terms of submit calls to the underlying executor, it doesn't need to be an instance method); the fixed version, by default, pulls and submits twice as many tasks as the pool has workers, then only pulls and submits additional tasks as the original tasks complete and the caller requests them (so if you're looping over the results live, and not storing them, the additional memory costs are proportionate to the number of workers, typically small and fixed, not the total number of inputs, which can be huge).
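A sketch of the bounded-submission idea, written against the plain submit/wait API rather than the patched map (the in_flight default is an arbitrary choice, and unlike Executor.map the results come back in completion order, not input order):
import concurrent.futures
import itertools

def bounded_map(executor, fn, iterable, in_flight=8):
    """Yield fn(item) results while keeping at most `in_flight` tasks submitted."""
    it = iter(iterable)
    pending = {executor.submit(fn, arg) for arg in itertools.islice(it, in_flight)}
    while pending:
        done, pending = concurrent.futures.wait(
            pending, return_when=concurrent.futures.FIRST_COMPLETED)
        # top the pool back up with one new task per completed one
        for arg in itertools.islice(it, len(done)):
            pending.add(executor.submit(fn, arg))
        for fut in done:
            yield fut.result()

# usage, mirroring the question's start_working:
# with concurrent.futures.ProcessPoolExecutor(max_workers=num_threads) as executor:
#     for res in bounded_map(executor, work_wrapper, args, in_flight=num_threads * 2):
#         pass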
| ProcessPoolExecutor using map hang on large load | Experiencing hangs running ProcessPoolExecutor on map, only on a relatively large load.
The behaviour we see is that after about 1 minutes of hard working, job seems to hang: the CPU utilization drops sharply then becomes idle; the stack trace also seems to show the same portion of calls as time progresses.
def work_wrapper(args):
return work(*args)
def work():
work.....
def start_working(...):
with concurrent.futures.ProcessPoolExecutor(max_workers=num_threads, mp_context=mp.get_context('fork')) as executor:
args = [arg_list1, arg_list2, ...]
for res in executor.map(work_wrapper, args):
pass
if __name__ == "__main__":
mp.set_start_method('fork',force=True)
start_working(...)
Stack trace (we log every 5 minutes but they appear pretty similar):
Thread 0x00007f4d0ca27700 (most recent call first):
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 373 in _send
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 402 in _send_bytes
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 205 in send_bytes
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 250 in _feed
File "/usr/local/lib/python3.10/threading.py", line 953 in run
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
Thread 0x00007f4d156fc700 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 199 in _finalize_join
File "/usr/local/lib/python3.10/multiprocessing/util.py", line 224 in __call__
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 151 in join_thread
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 515 in join_executor_internals
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 469 in terminate_broken
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 323 in run
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
Thread 0x00007f4d19cce740 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 775 in shutdown
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 649 in __exit__
File "/app/main.py", line 256 in start_working
File "/app/main.py", line 51 in main
File "/app/main.py", line 96 in <module>
File "/app/main.py", line 96 in <module>
File "/app/main.py", line 51 in main
File "/app/main.py", line 256 in start_working
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 649 in __exit__
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 775 in shutdown
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
Thread 0x00007f4d19cce740 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 323 in run
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 469 in terminate_broken
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 515 in join_executor_internals
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 151 in join_thread
File "/usr/local/lib/python3.10/multiprocessing/util.py", line 224 in __call__
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 199 in _finalize_join
File "/usr/local/lib/python3.10/threading.py", line 1096 in join
File "/usr/local/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
Thread 0x00007f4d156fc700 (most recent call first):
File "/usr/local/lib/python3.10/threading.py", line 973 in _bootstrap
File "/usr/local/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/usr/local/lib/python3.10/threading.py", line 953 in run
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 250 in _feed
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 205 in send_bytes
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 402 in _send_bytes
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 373 in _send
Thread 0x00007f4d0ca27700 (most recent call first):
Python version: 3.10.8, Docker base image: python:3.10-slim
I tried updating python version, changing multiprocessing context (tried both spawn and fork, both give same behaviour)
| [
"The problem you're having is due to Executor.map not handling large/infinite iterable inputs in a sane way. Before it yields a single value, it consumes the entire input iterator and submits a task for every input.\nIf your inputs are produced lazily (on the theory that this would keep memory usage down), nope, they're all read in immediately. If they're infinite (with the assumption you can break and stop when receiving a specific result), nope, the program will try to submit infinite tasks and you'll run out of memory. If they're just huge, well, you'll pay the overhead for all the submitted tasks (management overhead, pickling them to pass them to the workers, etc.).\nIf you can avoid processing in blocks quite that large, that's an easy fix. You could also copy the fixed implementation of Executor.map in the PR attached to the issue I linked as a top-level function, manually passing an executor to it to act as the self argument (it's all implemented in terms of submit calls to the underlying executor, it doesn't need to be an instance method); the fixed version, by default, pulls and submits twice as many tasks as the pool has workers, then only pulls and submits additional tasks as the original tasks complete and the caller requests them (so if you're looping over the results live, and not storing them, the additional memory costs are proportionate to the number of workers, typically small and fixed, not the total number of inputs, which can be huge).\n"
] | [
0
] | [] | [] | [
"concurrent.futures",
"multiprocessing",
"python",
"python_3.x",
"python_multiprocessing"
] | stackoverflow_0074633896_concurrent.futures_multiprocessing_python_python_3.x_python_multiprocessing.txt |
Q:
How to match a word surrounded by a prefix and suffix?
Is there any regex to extract words from text that are surrounded by a certain prefix and suffix?
Example:
test[az5]test[az6]test
I need to extract the numbers surrounded by the prefix [az and the suffix ].
I'm a bit advanced in Python, but not really familiar with regex.
The desired output is:
5
6
A:
You are looking for the following regular expression:
>>> import re
>>> re.findall('\[az(\d+)\]', 'test[az5]test[az6]test')
['5', '6']
>>>
A:
import re
txt = "test[az5]test[az6]test"
x = re.findall(r"\[az(?P<num>\d)\]", txt)
print(x)
Output
['5', '6']
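If the prefix and suffix are not always fixed literals, a small helper along these lines generalizes the same idea; re.escape protects special characters such as the brackets (this is just an illustrative wrapper around the answers above):
import re

def between(text, prefix, suffix):
    pattern = re.escape(prefix) + r"(.*?)" + re.escape(suffix)
    return re.findall(pattern, text)

print(between("test[az5]test[az6]test", "[az", "]"))  # ['5', '6']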
| How to match a word surrounded by a prefix and suffix? | Is there any regex to extract words from text that are surrounded by a certain prefix and suffix?
Example:
test[az5]test[az6]test
I need to extract the numbers surrounded by the prefix [az and the suffix ].
I'm a bit advanced in Python, but not really familiar with regex.
The desired output is:
5
6
| [
"You are looking for the following regular expression:\n>>> import re\n>>> re.findall('\\[az(\\d+)\\]', 'test[az5]test[az6]test')\n['5', '6']\n>>> \n\n",
"import re\n\ntxt = \"test[az5]test[az6]test\"\nx = re.findall(r\"\\[az(?P<num>\\d)\\]\", txt)\nprint(x)\n\nOutput\n['5', '6']\n"
] | [
1,
0
] | [] | [] | [
"extract",
"python",
"string"
] | stackoverflow_0074635145_extract_python_string.txt |
Q:
Want to create a program to scale a linked list by a certain factor
I want to write a function where you input a linked list and a factor, and the function returns a new linked list scaled by that factor. For example:
scale(linkify([1, 2, 3]), 2)
2 -> 4 -> 6 -> None
First, I made a function that, when you input a list of items, converts into a linked list. This is it here:
def linkify(item: list[int]) -> Optional[Node]:
"""Return a Linked List of Nodes with same values and same order as input list."""
if len(item) == 0:
return None
elif len(item) == 1:
return Node(item[0], None)
else:
return Node(item[0], linkify(item[1:]))
Now, I'm trying to write the function in order to scale that list.
Here is what I have now for the function:
def scale(head: Optional[Node], factor: int) -> Optional[Node]:
"""Returns new linked list of nodes where each value in original list is scaled by scaling factor."""
if head is None:
return None
else:
return Node(head.data * factor, scale(head.next, factor))
However, when I try to test this, I get an error saying exercises.ex11.linked_list.Node object at 0x0000013392C97C10>, and I'm not entirely sure what this means. Can anyone tell me what I'm getting wrong?
Also, I have to create the function recursively, and can't use any other functions outside the ones I've created.
Here is the test case I created as well:
def test_scale_factor() -> None:
linked_list: list[int] = [1, 2, 3]
linked_list_2: list[int] = [2, 4, 6]
assert is_equal(scale(linkify(linked_list), 2), linkify(linked_list_2))
Thanks for your help!
A:
Is this a proper solution for your case?
I always try to avoid using recursion.
Sure, you can also add some checks on the function inputs.
class Node:
def __init__(self, data = None):
self.data = data
self.next = None
class Linkedlist:
def __init__(self):
self.head = None
def linkify(self, nodes):
if len(nodes) == 0:
return
self.head = node = Node()
for i in nodes:
node.next = Node(i)
node = node.next
self.head= self.head.next
def is_equal(self, other_llist):
node1 = self.head
node2 = other_llist.head
while node1 and node2:
if node1.data != node2.data:
return False
node1 = node1.next
node2 = node2.next
if node1 or node2:
return False
return True
def scale(self, factor):
node = self.head
while node:
node.data *= factor
node = node.next
llist1 = Linkedlist()
llist1.linkify([1, 2, 3])
llist1.scale(2)
llist2 = Linkedlist()
llist2.linkify([2, 4, 6])
print(llist1.is_equal(llist2))
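For what it's worth, the recursive scale in the question already looks correct for a Node(data, next) style class; the exercises.ex11.linked_list.Node object at 0x... text is just Python's default repr of a Node, which points at the comparison/printing side rather than scale itself. A sketch of a recursive equality check for that style of node (the data and next attribute names are assumed from the question's linkify code; the question's own is_equal is not shown, so this is illustrative):
def is_equal(a, b):
    # both lists end together and every pair of values matches
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a.data == b.data and is_equal(a.next, b.next)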
| Want to create a program to scale a linked list by a certain factor | I want to write a function where you input a linked list and a factor, and the function returns a new linked list scaled by that factor. For example:
scale(linkify([1, 2, 3]), 2)
2 -> 4 -> 6 -> None
First, I made a function that, when you input a list of items, converts into a linked list. This is it here:
def linkify(item: list[int]) -> Optional[Node]:
"""Return a Linked List of Nodes with same values and same order as input list."""
if len(item) == 0:
return None
elif len(item) == 1:
return Node(item[0], None)
else:
return Node(item[0], linkify(item[1:]))
Now, I'm trying to write the function in order to scale that list.
Here is what I have now for the function:
def scale(head: Optional[Node], factor: int) -> Optional[Node]:
"""Returns new linked list of nodes where each value in original list is scaled by scaling factor."""
if head is None:
return None
else:
return Node(head.data * factor, scale(head.next, factor))
However, when I try to test this, I get an error saying exercises.ex11.linked_list.Node object at 0x0000013392C97C10>, and I'm not entirely sure what this means. Can anyone tell me what I'm getting wrong?
Also, I have to create the function recursively, and can't use any other functions outside the ones I've created.
Here is the test case I created as well:
def test_scale_factor() -> None:
linked_list: list[int] = [1, 2, 3]
linked_list_2: list[int] = [2, 4, 6]
assert is_equal(scale(linkify(linked_list), 2), linkify(linked_list_2))
Thanks for your help!
| [
"Is this a proper solution for your case?\nI always try to avoid using recursions.\nSure you also can add some checks on the function inputs.\nclass Node:\n def __init__(self, data = None):\n self.data = data\n self.next = None\n\nclass Linkedlist:\n def __init__(self):\n self.head = None\n \n def linkify(self, nodes):\n if len(nodes) == 0:\n return\n self.head = node = Node()\n for i in nodes:\n node.next = Node(i)\n node = node.next\n self.head= self.head.next\n \n def is_equal(self, other_llist):\n node1 = self.head\n node2 = other_llist.head\n while node1 and node2:\n if node1.data != node2.data:\n return False\n node1 = node1.next\n node2 = node2.next\n if node1 or node2:\n return False\n return True\n \n def scale(self, factor):\n node = self.head\n while node:\n node.data *= factor\n node = node.next\n\nllist1 = Linkedlist()\nllist1.linkify([1, 2, 3])\nllist1.scale(2)\n\nllist2 = Linkedlist()\nllist2.linkify([2, 4, 6])\n\nprint(llist1.is_equal(llist2))\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074634775_python.txt |
Q:
TypeError: TimeGrouper.__init__() got multiple values for argument 'freq'
What am I doing wrong?
This is all the code needed to reproduce.
import pandas as pd
g = pd.Grouper('datetime', freq='D')
Result:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [1], line 2
1 import pandas as pd
----> 2 g = pd.Grouper('datetime', freq='D')
TypeError: TimeGrouper.__init__() got multiple values for argument 'freq'
Pandas version 1.5.1, Python version 3.10.6.
A:
This seems to be a bug
It looks like the weirdness is because Grouper.__new__() instantiates a TimeGrouper if you pass freq as a kwarg, but not if you pass freq as a positional argument. I don't know why it does that, and it's not documented, so it seems like a bug.
The reason for the error is that TimeGrouper.__init__()'s first parameter is freq, not key. (In fact it doesn't even have a key parameter; it specifies **kwargs, which are then sent to Grouper.__init__() at the end.)
Workaround
Pass key as a kwarg too:
g = pd.Grouper(key='datetime', freq='D')
All-positional syntax is also broken
In cottontail's answer, they suggested using positional arguments, with None for level, but this doesn't work fully. For example, you can't specify an origin:
pd.Grouper('datetime', None, 'D', origin='epoch')
TypeError: __init__() got an unexpected keyword argument 'origin'
(I'm using Python 3.9 and Pandas 1.4.4, so the error might look a bit different, but the same error should occur on your version.)
Even worse, the resulting Grouper doesn't work, for example:
df = pd.DataFrame({
'datetime': pd.to_datetime([
'2022-11-29T15', '2022-11-30T15', '2022-11-30T16']),
'v': [1, 2, 3]})
>>> g = pd.Grouper('datetime', None, 'D')
>>> df.groupby(g).sum()
v
datetime
2022-11-29 15:00:00 1
2022-11-30 15:00:00 2
2022-11-30 16:00:00 3
Compared to:
>>> g1 = pd.Grouper(key='datetime', freq='D')
>>> df.groupby(g1).sum()
v
datetime
2022-11-29 1
2022-11-30 5
A:
Try specifying the keyword argument key:
g = pd.Grouper(key='datetime', freq='D')
print(type(g))
# pandas.core.resample.TimeGrouper
As explained in wjandrea's post, TimeGrouper is instantiated only if freq kwarg is passed. For example, the following instantiates a Grouper:
g1 = pd.Grouper('datetime', None, pd.offsets.Day())
print(type(g1))
# pandas.core.groupby.grouper.Grouper
even though, g1.freq is an instance of pd._libs.tslibs.offsets.Day. So it looks like passing the arguments via keywords is the only way.
| TypeError: TimeGrouper.__init__() got multiple values for argument 'freq' | What am I doing wrong?
This is all the code needed to reproduce.
import pandas as pd
g = pd.Grouper('datetime', freq='D')
Result:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [1], line 2
1 import pandas as pd
----> 2 g = pd.Grouper('datetime', freq='D')
TypeError: TimeGrouper.__init__() got multiple values for argument 'freq'
Pandas version 1.5.1, Python version 3.10.6.
| [
"This seems to be a bug\nIt looks like the weirdness is because Grouper.__new__() instantiates a TimeGrouper if you pass freq as a kwarg, but not if you pass freq as a positional argument. I don't know why it does that, and it's not documented, so it seems like a bug.\nThe reason for the error is that TimeGrouper.__init__()'s first parameter is freq, not key. (In fact it doesn't even have a key parameter; it specifies **kwargs, which are then sent to Grouper.__init__() at the end.)\nWorkaround\nPass key as a kwarg too:\ng = pd.Grouper(key='datetime', freq='D')\n\nAll-positional syntax is also broken\nIn cottontail's answer, they suggested using positional arguments, with None for level, but this doesn't work fully. For example, you can't specify an origin:\npd.Grouper('datetime', None, 'D', origin='epoch')\n\nTypeError: __init__() got an unexpected keyword argument 'origin'\n\n(I'm using Python 3.9 and Pandas 1.4.4, so the error might look a bit different, but the same error should occur on your version.)\nEven worse, the resulting Grouper doesn't work, for example:\ndf = pd.DataFrame({\n 'datetime': pd.to_datetime([\n '2022-11-29T15', '2022-11-30T15', '2022-11-30T16']),\n 'v': [1, 2, 3]})\n\n>>> g = pd.Grouper('datetime', None, 'D')\n>>> df.groupby(g).sum()\n v\ndatetime \n2022-11-29 15:00:00 1\n2022-11-30 15:00:00 2\n2022-11-30 16:00:00 3\n\nCompared to:\n>>> g1 = pd.Grouper(key='datetime', freq='D')\n>>> df.groupby(g1).sum()\n v\ndatetime \n2022-11-29 1\n2022-11-30 5\n\n",
"Try specifying the keyword argument key:\ng = pd.Grouper(key='datetime', freq='D')\nprint(type(g))\n# pandas.core.resample.TimeGrouper\n\nAs explained in wjandrea's post, TimeGrouper is instantiated only if freq kwarg is passed. For example, the following instantiates a Grouper:\ng1 = pd.Grouper('datetime', None, pd.offsets.Day())\nprint(type(g1))\n# pandas.core.groupby.grouper.Grouper\n\neven though, g1.freq is an instance of pd._libs.tslibs.offsets.Day. So it looks like passing the arguments via keywords is the only way.\n"
] | [
2,
1
] | [] | [] | [
"pandas",
"python",
"typeerror"
] | stackoverflow_0074634784_pandas_python_typeerror.txt |
Q:
Facebook Prophet API documenation
I'm looking for Python API documentation for Facebook Prophet.
Here there are good examples https://facebook.github.io/prophet/docs
future = m.make_future_dataframe(periods=365)
or
future = m.make_future_dataframe(periods=300, freq='H')
But it doesn't explain all possible parameters in make_future_dataframe, or the valid values for freq.
Any idea where to find the API documentation?
A:
Here is where it is defined
https://github.com/facebook/prophet/blob/f123a1a7cc6ab51bd21f01e41738d97910a2b2b7/python/fbprophet/forecaster.py#L1548
def make_future_dataframe(self, periods, freq='D', include_history=True):
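As the signature shows, there are only three parameters. In that source, freq is passed through to pandas' date_range, so the usual pandas offset aliases ('D', 'H', 'W', 'MS', ...) should be accepted. A small illustrative call, assuming m is an already-fitted Prophet model:
# 48 future hourly timestamps, without repeating the historical dates
future = m.make_future_dataframe(periods=48, freq='H', include_history=False)
print(future.tail())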
A:
The arguments and API usage are documented (as of October 14, 2022) at this link:
https://cran.r-project.org/web/packages/prophet/prophet.pdf
| Facebook Prophet API documenation | I'm looking for a API Python documentation for Facebook Prophet
Here there are good examples https://facebook.github.io/prophet/docs
future = m.make_future_dataframe(periods=365)
or
future = m.make_future_dataframe(periods=300, freq='H')
But it doesn't explain all possible parameters in make_future_dataframe, or the valid values for freq.
Any idea where to find the API documentation?
| [
"Here is where it is defined\nhttps://github.com/facebook/prophet/blob/f123a1a7cc6ab51bd21f01e41738d97910a2b2b7/python/fbprophet/forecaster.py#L1548\n def make_future_dataframe(self, periods, freq='D', include_history=True):\n\n",
"The arguments and API usages are documented as per \"October 14, 2022\" in this link:\nhttps://cran.r-project.org/web/packages/prophet/prophet.pdf\n"
] | [
0,
0
] | [] | [] | [
"facebook_prophet",
"python"
] | stackoverflow_0066590261_facebook_prophet_python.txt |
Q:
Populate list with loop
I am trying to populate a list with numbers given via input.
num_guesses = 3
user_guesses = []
The desired result would be if I entered 3 different numbers 10, 15, 5 that it would print [10,15,5].
The book I'm using does not really explain how to do this, so it's kind of frustrating.
A:
Write a loop to populate the list user_guesses with a number of
guesses. The variable num_guesses is the number of guesses the user
will have, which is read first as an integer. Read integers one at a
time using int(input()).
num_guesses = int(input())
user_guesses = []
for i in range(num_guesses):
user_guesses.append(int(input()))
print('user_guesses:', user_guesses)
Sample output with input: '3 9 5 2' user_guesses: [9, 5, 2]
A:
num_guesses = 3
user_guesses = [int(raw_input()) for _ in range(num_guesses)]
In [31]: num_guesses = 3
In [32]: user_guesses = [int(raw_input()) for _ in range(num_guesses)]
10
15
5
In [33]: user_guesses
Out[33]: [10, 15, 5]
A:
Try this
num_guesses = 3
my_list = []
guess = 0
while guess < num_guesses:
num = int(input('Enter a number: '))
my_list.append(num)
guess += 1
print(my_list)
A:
So, what you want to do is read in user input and append it to a list:
guess = input("Guess:")
user_guesses.append(guess)
A:
I think the easiest for understand is using a for loop like this:
num_guesses = 3
user_guesses = []
for i in range(num_guesses): # I here goes from 0 to 2.
user_guesses.append(int(raw_input())) #for python 3 just use input
print(user_guesses)
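If you also want to guard against non-numeric input, a small optional variation of the same loop (using Python 3's input; this is an extra safeguard, not something the exercise requires):
num_guesses = int(input('How many guesses? '))
user_guesses = []
while len(user_guesses) < num_guesses:
    entry = input('Enter a number: ')
    try:
        user_guesses.append(int(entry))   # only keep entries that parse as integers
    except ValueError:
        print('Please enter a whole number.')
print('user_guesses:', user_guesses)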
| Populate list with loop | I am trying to populate a list with numbers given via input.
num_guesses = 3
user_guesses = []
The desired result would be if I entered 3 different numbers 10, 15, 5 that it would print [10,15,5].
The book I'm using does not really explain how to do this, so it's kind of frustrating.
| [
"\nWrite a loop to populate the list user_guesses with a number of\nguesses. The variable num_guesses is the number of guesses the user\nwill have, which is read first as an integer. Read integers one at a\ntime using int(input()).\n\nnum_guesses = int(input())\nuser_guesses = []\nfor i in range(num_guesses):\n user_guesses.append(int(input()))\n\n\nprint('user_guesses:', user_guesses)\n\n\nSample output with input: '3 9 5 2' user_guesses: [9, 5, 2]\n\n",
"num_guesses = 3\nuser_guesses = [int(raw_input()) for _ in range(num_guesses)]\n\n\nIn [31]: num_guesses = 3 \nIn [32]: user_guesses = [int(raw_input()) for _ in range(num_guesses)]\n10\n15\n5 \nIn [33]: user_guesses\nOut[33]: [10, 15, 5]\n\n",
"Try this\nnum_guesses = 3\nmy_list = []\nguess = 0\nwhile guess < num_guesses:\n num = int(input('Enter a number: '))\n my_list.append(num)\n guess += 1\n\nprint my_list\n\n",
"So, what you want to do is read in user input and append it to a list:\ninput = input(\"Guess:\")\nuser_guesses.append(input)\n\n",
"I think the easiest for understand is using a for loop like this:\nnum_guesses = 3\nuser_guesses = []\n\nfor i in range(num_guesses): # I here goes from 0 to 2.\n user_guesses.append(int(raw_input())) #for python 3 just use input\n\nprint(user_guessess)\n\n"
] | [
4,
2,
1,
0,
0
] | [
"num_guesses = int(input())\nuser_guesses = []\n\nfor i in range(num_guesses):\n user_guesses.append(int(input()))\n\nprint('user_guesses:', user_guesses)\n\n"
] | [
-1
] | [
"python"
] | stackoverflow_0026595053_python.txt |
Q:
Scipy - All the Solutions of Non-linear Equations System
I have a system of non-linear equations where any n can be chosen, so the length of the vector x = (x1,...,xn) can differ. For example, the system can look like this:
f1(x1,...,xn) = sum( xi + xi^2 ) = 0, i={1,n}
f2(x1,...,xn) = sum( e^xi + xi + sin(xi*pi) ) = 0, i={1,n}
For this example, I use fsolve() from the scipy library to solve such a nonlinear system, but it returns only one solution for every single initial approximation x = x0. Since n can be large (for example, n = 100) and there can be a lot of solutions, it's not very useful to pick initial conditions x = x0 by hand to find every solution.
So, please, could You give me an example, how to find All the solutions by fsolve() in such a situation? Or by any other simple method?
Additional
For example, I have the following simple system:
def equations(p):
x, y = p
return (x**2-1, x**3-1)
With different initial conditions I get different solutions:
x, y = fsolve(equations, (0, 0))
(0.0, 0.0)
x, y = fsolve(equations, (1, 1))
(1.0, 1.0)
x, y = fsolve(equations, (-1, 1))
(-0.47029706057873205, 0.41417128904566508)
Is it possible to use any Scipy-function, like fsolve(), to find All the solutions (roots), for example:
x, y = some_scipy_solver(equations, (x0, y0))
1. (1.0, 1.0)
2. (0.0, 0.0)
3. (-0.47029706057873205, 0.41417128904566508)
...
where (x0, y0) = any initial approximation:(0, 0),(1, 1),(-1, 1),(0.1, 10.0), etc, and where I determine just the constraints for x0, y0, like that: -1.0 <= x0 < 1.0, 0.0 <= x0 < 11.0.
A:
It can be difficult (or impossible ) to find numerically all the solutions even for a single non-linear equation, let along a system. For instance, consider the equation,
sin(1/x) = 0
that has an infinity of solutions in the interval [0, 1]: you can't solve this with typical root-finding algorithms.
In particular, scipy.optimize.fsolve uses local optimization approaches to find one solution of the given equation. Conceptually, you can't use it (or anything else in scipy module, really) to find all the possible solutions. Of course, if you know that the system has a given number of solutions, you can just randomly set the initial conditions to fsolve (as you have done) until you find all of them. There is no general method that would work for this problem though.
Instead, you might have better luck in solving your system analytically with sympy, if it is simple enough.
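To make the random-restart idea concrete, a sketch along these lines calls fsolve from many random starting points and keeps only the distinct converged roots. A small well-posed toy system with two known roots is used here instead of the question's example (which is degenerate in y), and the deduplication tolerance is an arbitrary choice:
import numpy as np
from scipy.optimize import fsolve

def equations(p):
    x, y = p
    return (x**2 + y**2 - 2, x - y)   # known roots: (1, 1) and (-1, -1)

rng = np.random.default_rng(0)
roots = []
for _ in range(100):
    x0 = rng.uniform(-2.0, 2.0, size=2)               # random initial guess
    sol, info, ier, msg = fsolve(equations, x0, full_output=True)
    if ier != 1:                                       # keep only converged runs
        continue
    if not any(np.allclose(sol, r, atol=1e-6) for r in roots):
        roots.append(sol)

print(roots)   # approximately [1, 1] and [-1, -1]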
A:
With ipywidgets you can do this; the root will change interactively in the range -100 .. 100:
def root_finder(x):
print('Root is %.3f' %fsolve(funct,x))
widgets.interact(root_finder,x=(-100,100,0.5));
| Scipy - All the Solutions of Non-linear Equations System | I have a system of non-linear equations, where can be choosed any n, so length of vector x = (x1,...,xn) can be different. For example, system can be like that:
f1(x1,...,xn) = sum( xi + xi^2 ) = 0, i={1,n}
f2(x1,...,xn) = sum( e^xi + xi + sin(xi*pi) ) = 0, i={1,n}
According to this example, I use fsolve() of scipy library for solving such a NLE, but it returns only one solution for every single initial approximation of *x = x0. But as n can be large (for example, n = 100), and there can be a lot of solutions, so it's not very usefull to make initial conditions x = x0 for finding every solution.
So, please, could You give me an example, how to find All the solutions by fsolve() in such a situation? Or by any other simple method?
Additional
For example, I have folloving simple system:
def equations(p):
x, y = p
return (x**2-1, x**3-1)
With different initial condiotions I have different solutions:
x, y = fsolve(equations, (0, 0))
(0.0, 0.0)
x, y = fsolve(equations, (1, 1))
(1.0, 1.0)
x, y = fsolve(equations, (-1, 1))
(-0.47029706057873205, 0.41417128904566508)
Is it possible to use any Scipy-function, like fsolve(), to find All the solutions (roots), for example:
x, y = some_scipy_solver(equations, (x0, y0))
1. (1.0, 1.0)
2. (0.0, 0.0)
3. (-0.47029706057873205, 0.41417128904566508)
...
where (x0, y0) = any initial approximation:(0, 0),(1, 1),(-1, 1),(0.1, 10.0), etc, and where I determine just the constraints for x0, y0, like that: -1.0 <= x0 < 1.0, 0.0 <= x0 < 11.0.
| [
"It can be difficult (or impossible ) to find numerically all the solutions even for a single non-linear equation, let along a system. For instance, consider the equation,\nsin(1/x) = 0\n\nthat has an infinity of solutions in the interval [0, 1]: you can't solve this with typical root-finding algorithms.\nIn particular, scipy.optimize.fsolve uses local optimization approaches to find one solution of the given equation. Conceptually, you can't use it (or anything else in scipy module, really) to find all the possible solutions. Of course, if you know that the system has a given number of solutions, you can just randomly set the initial conditions to fsolve (as you have done) until you find all of them. There is no general method that would work for this problem though.\nInstead, you might have better luck in solving your system analytically with sympy, if it is simple enough.\n",
"with ipywidgets you can do. Root will change inetractively in range -100 .. 100\ndef root_finder(x):\n print('Root is %.3f' %fsolve(funct,x))\n \nwidgets.interact(root_finder,x=(-100,100,0.5));\n\n"
] | [
3,
0
] | [] | [] | [
"equation_solving",
"nonlinear_functions",
"numpy",
"python",
"scipy"
] | stackoverflow_0030877513_equation_solving_nonlinear_functions_numpy_python_scipy.txt |
Q:
Multiple changes to the same variable within different if statements
SOLVED: I read through my code; it was a 'bug'. When I copied the dice-roll method from the 'player character' (since the enemies use the same mechanics), I had accidentally set the damage to 0 when rolling with one die.
Beginner here. (Python Crash Course, halfway through chapter 9.)
I am trying to build a simple turn based text game to practice (classes,if statement, modifying dictionaries/lists etc).
I will copy two snippets from my code, so you can understand my problem better.
(I'm really sorry that I can't give a short description, my best try was the title, but that still doesn't make it good enough. If you want an abridged tldr, go to the bottom with the bold texts.)
First, I have two characters, that you can choose from as an if-elif-else statement.
I used the same "player_xy" (xy being like health, damage etc) for the two characters, but assigning different values to them based on the player's choice. (My reasoning being is so I only have to reference the same variable in the code later in the battle system, making my job easier.)
(The variables fighter_max_hp.. etc are defined earlier, but it doesn't matter (tried moving it to before/inside the if statements.)
while select_repeat == True:
print("Type 'f' for fighter , 'm' for mage, or 'q' to quit!")
character = input("TYPE: ")
#player chooses fighter
if character == 'f':
player_max_hp = fighter_max_hp
player_max_mana = fighter_max_mana
#this goes on for a while, setting up all the stats
#player chooses mage
elif character == 'm':
player_max_hp = mage_max_hp
player_max_mana = mage_max_mana
#this goes on for a while, setting up all the stats
#player chooses to quit
elif character == 'q':
select_repeat = False
#invalid input
else:
print("\nPlease choose a valid option!")
Later in the code, I have a part where a randomizer sets up enemies to fight.
I used the same "enemy_xy" (xy being like health, damage etc) for the enemies. (My reasoning was the same here as for the characters.)
(Same, as with the player variables (tried moving it to before/inside the if statements.)
while enemy_select == True:
#game chooses an enemy to fight!
min = 1
max = 3
enemy_chooser = int(random.randint(min, max))
if enemy_chooser == 1:
#choose werewolf
enemy_hp = werewolf_hp
enemy_dice = werewolf_dice
#this goes on for a while, setting up all the stats
if enemy_chooser == 2:
#choose lesser mimic
enemy_hp = int(player_max_hp / 2)
enemy_dice = player_dice
elif enemy_chooser == 3:
#choose zombie
enemy_hp = zombie_hp
enemy_dice = zombie_dice
#this goes on for a while, setting up all the stats
Keep in mind, all of these enemies use the same "enemy_hp", "enemy_dice" etc. variables, within the same battle system, just assigned as "enemy_hp = werewolf_hp" or "enemy_hp = zombie_hp".
The fight happens, and:
If your enemy is the werewolf:
you deal damage to it
you receive damage from it
you can kill it
you can get killed by it
If your enemy is the lesser mimic:
you deal damage to it
you can ONLY receive damage from it if you are a fighter (mage's hp doesn't decrease)
you can kill it
you can ONLY get killed by it if you are a fighter (obviously, since it doesn't deal damage to mage hp)
If your enemy is the zombie:
you deal damage to it
you CAN NOT receive damage from it (not the fighter, or the mage)
you can kill it
you can not get killed by it (obviously, since no damage)
Otherwise, it prints out the different variable values as assigned (different stats for each monster) as expected, and it uses the correct calculations to deal damage; it just can't in the two cases mentioned above.
Now comes the main part of my question...
If I change the variables like this:
elif enemy_chooser == 3:
    #choose zombie
enemy_hp = werewolf_hp ##CHANGE
enemy_dice = werewolf_dice ##CHANGE
#this goes on for a while, setting up all the stats
Then the zombie can finally deal damage to the player (with the werewolf's stats).
It's as if because the lines
enemy_hp = werewolf_hp
enemy_dice = werewolf_dice
#etc
are written earlier than:
enemy_hp = zombie_hp
enemy_dice = zombie_dice
#etc
it somehow affects the variable (regardless of whether the "if" statement is true),
because werewolf_xy was defined earlier than zombie_xy:
#enemy werewolf defined first in the code
werewolf_hp = 20
werewolf_dice = 2
#etc
#enemy zombie defined right after
zombie_hp = 35
zombie_dice = 1
#etc
Same happens with the fighter and mage selection.
Somehow the player_hp = xy_hp assignment only works if xy = fighter, because the fighter's variables are defined earlier in the code, thus making the "lesser mimic" deal damage only to the fighter.
My question is "simply".. why?
I tried everything in my power, to no avail.
As you have seen, I could identify what causes the problem (and thus I could potentially work around it), but I still don't know why Python does what it does, and that bothers me.
Any help or input from more experienced users would be greatly appreciated.
Thank you in advance!
Tankerka
A:
You have a bug.
There's not enough details in this (long!) narrative to identify the bug.
Here's how you fix it:
breakpoint()
Put that near the top of your code,
and use n next, plus p print var,
to see what your code is doing.
It is quicker and more flexible than print( ... ).
Read up on that pair of commands here:
https://docs.python.org/3/library/pdb.html
Separate item: refactor your code as you go along.
You're starting to have enough if / elif logic
that it won't all fit in a single screenful
with no scrolling.
That suggests that it's a good time to use def
to break out a helper function.
You might def get_enemy_hp( ... ):, for example,
and also define get_enemy_dice().
Other things you might choose to study:
a class can be a good way to organize the variables you're defining -- embrace the self syntax!
a dict could map from enemy type to hp, or to dice
The nice thing about helper functions is they're
the perfect target for unit tests.
It takes a minute to write a test, but it winds up saving you time.
https://docs.python.org/3/library/unittest.html
When you identify and fix the problem, let us know!
https://stackoverflow.com/help/self-answer
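To make the dict idea above concrete, here is a minimal sketch (not from the original answer; the enemy names and stat values are placeholders, not taken from the asker's full code):
import random

# Each enemy is described exactly once, so there is no chain of if/elif
# assignments that can silently pick the wrong branch.
ENEMIES = {
    "werewolf": {"hp": 20, "dice": 2},
    "zombie":   {"hp": 35, "dice": 1},
}

def pick_enemy():
    name = random.choice(list(ENEMIES))
    stats = dict(ENEMIES[name])   # copy so the template is never mutated
    return name, stats

name, enemy = pick_enemy()
print(name, enemy["hp"], enemy["dice"])
The same pattern works for the fighter/mage selection: one dict per character class, looked up by the player's input.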
| Multiple changes to the same variable within different if statements | SOLVED: I read through my code; it was a 'bug'. When I copied the dice roll method from the 'player character' (since the enemies use the same mechanics), I had accidentally set the damage to 0 when it rolls with one die.
Beginner here. (Python crash course halfway of chapter 9)
I am trying to build a simple turn-based text game to practice (classes, if statements, modifying dictionaries/lists, etc.).
I will copy two snippets from my code, so you can understand my problem better.
(I'm really sorry that I can't give a short description; my best try was the title, but that still doesn't make it good enough. If you want an abridged TL;DR, go to the bold text at the bottom.)
First, I have two characters, that you can choose from as an if-elif-else statement.
I used the same "player_xy" (xy being like health, damage etc) for the two characters, but assigning different values to them based on the player's choice. (My reasoning being is so I only have to reference the same variable in the code later in the battle system, making my job easier.)
(The variables fighter_max_hp.. etc are defined earlier, but it doesn't matter (tried moving it to before/inside the if statements.)
while select_repeat == True:
print("Type 'f' for fighter , 'm' for mage, or 'q' to quit!")
character = input("TYPE: ")
#player chooses fighter
if character == 'f':
player_max_hp = fighter_max_hp
player_max_mana = fighter_max_mana
#this goes on for a while, setting up all the stats
#player chooses mage
elif character == 'm':
player_max_hp = mage_max_hp
player_max_mana = mage_max_mana
#this goes on for a while, setting up all the stats
#player chooses to quit
elif character == 'q':
select_repeat = False
#invalid input
else:
print("\nPlease choose a valid option!")
Later in the code, I have a part where a randomizer sets up enemies to fight.
I used the same "enemy_xy" (xy being like health, damage etc) for the enemies. (My reasoning was the same here as for the characters.)
(Same, as with the player variables (tried moving it to before/inside the if statements.)
while enemy_select == True:
#game chooses an enemy to fight!
min = 1
max = 3
enemy_chooser = int(random.randint(min, max))
if enemy_chooser == 1:
#choose werewolf
enemy_hp = werewolf_hp
enemy_dice = werewolf_dice
#this goes on for a while, setting up all the stats
if enemy_chooser == 2:
#choose lesser mimic
enemy_hp = int(player_max_hp / 2)
enemy_dice = player_dice
elif enemy_chooser == 3:
#choose zombie
enemy_hp = zombie_hp
enemy_dice = zombie_dice
#this goes on for a while, setting up all the stats
Keep in mind, all of these enemies use the same "enemy_hp", "enemy_dice" etc. variables, within the same battle system, just assigned as "enemy_hp = werewolf_hp" or "enemy_hp = zombie_hp".
The fight happens, and:
If your enemy is the werewolf:
you deal damage to it
you receive damage from it
you can kill it
you can get killed by it
If your enemy is the lesser mimic:
you deal damage to it
you can ONLY receive damage from it if you are a fighter (mage's hp doesn't decrease)
you can kill it
you can ONLY get killed by it if you are a fighter (obviously, since it doesn't deal damage to mage hp)
If your enemy is the zombie:
you deal damage to it
you CAN NOT receive damage from it (not the fighter, or the mage)
you can kill it
you can not get killed by it (obviously, since no damage)
Otherwise, it prints out the different variable values as assigned (different stats for each monster) as expected, and it uses the correct calculations to deal damage; it just can't in the two cases mentioned above.
Now comes the main part of my question...
If I change the variables like this:
elif enemy_chooser == 3:
    #choose zombie
enemy_hp = werewolf_hp ##CHANGE
enemy_dice = werewolf_dice ##CHANGE
#this goes on for a while, setting up all the stats
Then the zombie can finally deal damage to the player (with the werewolf's stats).
It's as if because the lines
enemy_hp = werewolf_hp
enemy_dice = werewolf_dice
#etc
are written earlier than:
enemy_hp = zombie_hp
enemy_dice = zombie_dice
#etc
it somehow affects the variable (regardless of whether the "if" statement is true),
because werewolf_xy was defined earlier than zombie_xy:
#enemy werewolf defined first in the code
werewolf_hp = 20
werewolf_dice = 2
#etc
#enemy zombie defined right after
zombie_hp = 35
zombie_dice = 1
#etc
Same happens with the fighter and mage selection.
Somehow the player_hp = xy_hp assignment only works if xy = fighter, because the fighter's variables are defined earlier in the code, thus making the "lesser mimic" deal damage only to the fighter.
My question is "simply".. why?
I tried everything in my power, to no avail.
As you have seen, I could identify what causes the problem (and thus I could potentially work around it), but I still don't know why Python does what it does, and that bothers me.
Any help or input from more experienced users would be greatly appreciated.
Thank you in advance!
Tankerka
| [
"You have a bug.\nThere's not enough details in this (long!) narrative to identify the bug.\nHere's how you fix it:\nbreakpoint()\n\nPut that near the top of your code,\nand use n next, plus p print var,\nto see what your code is doing.\nIt is quicker and more flexible than print( ... ).\nRead up on that pair of commands here:\nhttps://docs.python.org/3/library/pdb.html\n\nSeparate item: refactor your code as you go along.\nYou're starting to have enough if / elif logic\nthat it won't all fit in a single screenful\nwith no scrolling.\nThat suggests that it's a good time to use def\nto break out a helper function.\nYou might def get_enemy_hp( ... ):, for example,\nand also define get_enemy_dice().\nOther things you might choose to study:\n\na class can be a good way to organize the variables you're defining -- embrace the self syntax!\na dict could map from enemy type to hp, or to dice\n\n\nThe nice thing about helper functions is they're\nthe perfect target for unit tests.\nIt takes a minute to write a test, but it winds up saving you time.\nhttps://docs.python.org/3/library/unittest.html\n\nWhen you identify and fix the problem, let us know!\nhttps://stackoverflow.com/help/self-answer\n"
] | [
0
] | [] | [] | [
"if_statement",
"python",
"python_3.x",
"variable_assignment",
"variables"
] | stackoverflow_0074635290_if_statement_python_python_3.x_variable_assignment_variables.txt |
Q:
How to scrape multiple elements within a specific section of a website
Super new to Python, coming from a C# background.
Inside Microsoft's wiki page, https://en.wikipedia.org/wiki/Microsoft,
I'm trying to scrape all the text inside the history section.
I'm curious how to go about this using Beautiful Soup. I understand that Beautiful Soup doesn't have XPath support.
The first element to be scraped from the history section is:
<div role="note" class="hatnote navigation-not-searchable">Main article: <a href="/wiki/History_of_Microsoft" title="History of Microsoft">History of Microsoft</a></div>
The Last element to be scraped will be:
<p>On January 18, 2022, Microsoft announced the acquisition of American video game developer and <a href="/wiki/Holding_company" title="Holding company">holding company</a> <a href="/wiki/Activision_Blizzard" title="Activision Blizzard">Activision Blizzard</a> in an all-cash deal worth $68.7 billion.<sup id="cite_ref-:0_150-0" class="reference"><a href="#cite_note-:0-150">[150]</a></sup> Activision Blizzard is best known for producing franchises, including but not limited to <i><a href="/wiki/Warcraft" title="Warcraft">Warcraft</a></i>, <i><a href="/wiki/Diablo_(series)" title="Diablo (series)">Diablo</a></i>, <i><a href="/wiki/Call_of_Duty" title="Call of Duty">Call of Duty</a></i>, <i><a href="/wiki/StarCraft" title="StarCraft">StarCraft</a></i>, <i><a href="/wiki/Candy_Crush_Saga" title="Candy Crush Saga">Candy Crush Saga</a></i>, <i><a href="/wiki/Crash_Bandicoot" title="Crash Bandicoot">Crash Bandicoot</a></i>, <i><a href="/wiki/Spyro" title="Spyro">Spyro the Dragon</a></i>, <i><a href="/wiki/Skylanders" title="Skylanders">Skylanders</a></i>, and <i><a href="/wiki/Overwatch_(video_game)" title="Overwatch (video game)">Overwatch</a></i>.<sup id="cite_ref-151" class="reference"><a href="#cite_note-151">[151]</a></sup> Activision and Microsoft each released statements saying the acquisition was to benefit their businesses in the <a href="/wiki/Metaverse" title="Metaverse">metaverse</a>, many saw Microsoft's acquisition of video game studios as an attempt to compete against <a href="/wiki/Meta_Platforms" title="Meta Platforms">Meta Platforms</a>, with <a href="/wiki/TheStreet" title="TheStreet">TheStreet</a> referring to Microsoft wanting to become "the <a href="/wiki/The_Walt_Disney_Company" title="The Walt Disney Company">Disney</a> of the metaverse".<sup id="cite_ref-152" class="reference"><a href="#cite_note-152">[152]</a></sup><sup id="cite_ref-153" class="reference"><a href="#cite_note-153">[153]</a></sup> Microsoft has not released statements regarding Activision's recent legal controversies regarding employee abuse, but reports have alleged that Activision CEO <a href="/wiki/Bobby_Kotick" title="Bobby Kotick">Bobby Kotick</a>, a major target of the controversy, will leave the company after the acquisition is finalized.<sup id="cite_ref-154" class="reference"><a href="#cite_note-154">[154]</a></sup> The deal is expected to close in 2023 followed by a review from the <a href="/wiki/US_Federal_Trade_Commission" class="mw-redirect" title="US Federal Trade Commission">US Federal Trade Commission</a>.<sup id="cite_ref-155" class="reference"><a href="#cite_note-155">[155]</a></sup><sup id="cite_ref-:0_150-1" class="reference"><a href="#cite_note-:0-150">[150]</a></sup>
</p>
How would I go about getting all the information in between those elements?
from bs4 import BeautifulSoup
import requests
url = 'https://en.wikipedia.org/wiki/Microsoft'
# call get method to request that page
page = requests.get(url)
soup = BeautifulSoup(page.text, "html.parser")
A:
Use CSS selectors, specifically the sibling combinator ~ and the pseudo-classes :has and :not:
hhSel = 'h2:has(#History)'
htSel = f'{hhSel} ~ *:not(style):not(h2):not({hhSel} ~ h2 ~ *)'
hSectTags = soup.select(htSel)
for hst in hSectTags:
flatTxt = ' '.join(w for w in hst.get_text(' ').split() if w)
print(hst.name, '--->', flatTxt[:100] if flatTxt else hst)
prints
div ---> Main article: History of Microsoft
link ---> <link href="mw-data:TemplateStyles:r1033289096" rel="mw-deduplicated-inline-style"/>
div ---> For a chronological guide, see Timeline of Microsoft .
h3 ---> 1972–1985: Founding
div ---> An Altair 8800 computer (left) with the popular Model 33 ASR Teletype as terminal, paper tape reader
div ---> Paul Allen and Bill Gates on October 19, 1981, after signing a pivotal contract with IBM [11] : 228
p ---> Childhood friends Bill Gates and Paul Allen sought to make a business using their skills in computer
p ---> Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xen
h3 ---> 1985–1994: Windows and Office
div ---> Windows 1.0 was released on November 20, 1985, as the first version of the Windows line.
p ---> Microsoft released Windows on November 20, 1985, as a graphical extension for MS-DOS, [11] : 242–243
p ---> In 1990, Microsoft introduced the Microsoft Office suite which bundled separate applications such as
p ---> On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statemen
h3 ---> 1995–2007: Foray into the Web, Windows 95, Windows XP, and Xbox
div ---> In 1996, Microsoft released Windows CE, a version of the operating system meant for personal digital
p ---> Following Bill Gates' internal "Internet Tidal Wave memo" on May 26, 1995, Microsoft began to redefi
div ---> Microsoft released the first installment in the Xbox series of consoles in 2001. The Xbox , graphica
p ---> On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer , an old college frien
p ---> Increasingly present in the hardware business following Xbox, Microsoft 2006 released the Zune serie
h3 ---> 2007–2011: Microsoft Azure, Windows Vista, Windows 7, and Microsoft Stores
div ---> CEO Steve Ballmer at the MIX event in 2008. In an interview about his management style in 2005, he m
div ---> Headquarters of the European Commission, which has imposed several fines on Microsoft
p ---> Released in January 2007, the next version of Windows, Vista , focused on features, security and a r
p ---> Gates retired from his role as Chief Software Architect on June 27, 2008, a decision announced in Ju
p ---> As the smartphone industry boomed in the late 2000s, Microsoft had struggled to keep up with its riv
h3 ---> 2011–2014: Windows 8/8.1, Xbox One, Outlook.com, and Surface devices
div ---> Surface Pro 3 , part of the Surface series of laplets by Microsoft
p ---> Following the release of Windows Phone , Microsoft undertook a gradual rebranding of its product ran
p ---> In July 2012, Microsoft sold its 50% stake in MSNBC, which it had run as a joint venture with NBC si
p ---> In August 2012, the New York City Police Department announced a partnership with Microsoft for the d
div ---> The Xbox One console, released in 2013
p ---> The Kinect , a motion-sensing input device made by Microsoft and designed as a video game controller
p ---> In line with the maturing PC business, in July 2013, Microsoft announced that it would reorganize th
div ---> <div style="clear:both;"></div>
h3 ---> 2014–2020: Windows 10, Microsoft Edge, and HoloLens
div ---> Satya Nadella succeeded Steve Ballmer as the CEO of Microsoft in February 2014.
p ---> On February 4, 2014, Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadel
p ---> On January 21, 2015, Microsoft announced the release of their first Interactive whiteboard , Microso
p ---> On March 1, 2016, Microsoft announced the merger of its PC and Xbox divisions, with Phil Spencer ann
div ---> The Nokia Lumia 1320 , the Microsoft Lumia 535 and the Nokia Lumia 530 , which all run on one of the
p ---> In January 2018, Microsoft patched Windows 10 to account for CPU problems related to Intel's Meltdow
div ---> Apollo 11 astronaut Buzz Aldrin using a Microsoft HoloLens mixed reality headset in September 2016
p ---> In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using
p ---> On February 20, 2019, Microsoft Corp said it will offer its cyber security service AccountGuard to 1
h3 ---> 2020–present: Acquisitions, Xbox Series X/S, and Windows 11
link ---> <link href="mw-data:TemplateStyles:r1033289096" rel="mw-deduplicated-inline-style"/>
div ---> Main article: Acquisition of Activision Blizzard by Microsoft
p ---> On March 26, 2020, Microsoft announced it was acquiring Affirmed Networks for about $1.35 billion. [
p ---> On July 31, 2020, it was reported that Microsoft was in talks to acquire TikTok after the Trump admi
p ---> On August 5, 2020, Microsoft stopped its xCloud game streaming test for iOS devices . According to M
p ---> On September 22, 2020, Microsoft announced that it had an exclusive license to use OpenAI ’s GPT-3 a
p ---> In April 2021, Microsoft announced it would buy Nuance Communications for approximately $16 billion.
p ---> On June 24, 2021, Microsoft announced Windows 11 during a Livestream. The announcement came with con
p ---> In October 2021, Microsoft announced that it began rolling out end-to-end encryption (E2EE) support
p ---> On January 18, 2022, Microsoft announced the acquisition of American video game developer and holdin
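A sibling-walking alternative (a sketch added here, not from the original answer), if you prefer not to use CSS selectors: start at the History heading and collect every following sibling until the next h2. It assumes the 2022-era page markup, where the heading anchor carries id="History" and the section content follows the h2 as siblings:
from bs4 import BeautifulSoup
import requests

page = requests.get('https://en.wikipedia.org/wiki/Microsoft')
soup = BeautifulSoup(page.text, "html.parser")

heading = soup.find(id="History").find_parent("h2")   # fails if the page markup changes
parts = []
for sib in heading.find_next_siblings():
    if sib.name == "h2":          # the next top-level section ends History
        break
    parts.append(sib.get_text(" ", strip=True))
history_text = "\n".join(p for p in parts if p)
print(history_text[:300])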
| How to scrape multiple elements within a specific section of a website | Super new to Python, coming from a C# background.
Inside Microsoft's wiki page, https://en.wikipedia.org/wiki/Microsoft,
I'm trying to scrape all the text inside the history section.
I'm curious how to go about this using Beautiful Soup. I understand that Beautiful Soup doesn't have XPath support.
The first element to be scraped from the history section is:
<div role="note" class="hatnote navigation-not-searchable">Main article: <a href="/wiki/History_of_Microsoft" title="History of Microsoft">History of Microsoft</a></div>
The Last element to be scraped will be:
<p>On January 18, 2022, Microsoft announced the acquisition of American video game developer and <a href="/wiki/Holding_company" title="Holding company">holding company</a> <a href="/wiki/Activision_Blizzard" title="Activision Blizzard">Activision Blizzard</a> in an all-cash deal worth $68.7 billion.<sup id="cite_ref-:0_150-0" class="reference"><a href="#cite_note-:0-150">[150]</a></sup> Activision Blizzard is best known for producing franchises, including but not limited to <i><a href="/wiki/Warcraft" title="Warcraft">Warcraft</a></i>, <i><a href="/wiki/Diablo_(series)" title="Diablo (series)">Diablo</a></i>, <i><a href="/wiki/Call_of_Duty" title="Call of Duty">Call of Duty</a></i>, <i><a href="/wiki/StarCraft" title="StarCraft">StarCraft</a></i>, <i><a href="/wiki/Candy_Crush_Saga" title="Candy Crush Saga">Candy Crush Saga</a></i>, <i><a href="/wiki/Crash_Bandicoot" title="Crash Bandicoot">Crash Bandicoot</a></i>, <i><a href="/wiki/Spyro" title="Spyro">Spyro the Dragon</a></i>, <i><a href="/wiki/Skylanders" title="Skylanders">Skylanders</a></i>, and <i><a href="/wiki/Overwatch_(video_game)" title="Overwatch (video game)">Overwatch</a></i>.<sup id="cite_ref-151" class="reference"><a href="#cite_note-151">[151]</a></sup> Activision and Microsoft each released statements saying the acquisition was to benefit their businesses in the <a href="/wiki/Metaverse" title="Metaverse">metaverse</a>, many saw Microsoft's acquisition of video game studios as an attempt to compete against <a href="/wiki/Meta_Platforms" title="Meta Platforms">Meta Platforms</a>, with <a href="/wiki/TheStreet" title="TheStreet">TheStreet</a> referring to Microsoft wanting to become "the <a href="/wiki/The_Walt_Disney_Company" title="The Walt Disney Company">Disney</a> of the metaverse".<sup id="cite_ref-152" class="reference"><a href="#cite_note-152">[152]</a></sup><sup id="cite_ref-153" class="reference"><a href="#cite_note-153">[153]</a></sup> Microsoft has not released statements regarding Activision's recent legal controversies regarding employee abuse, but reports have alleged that Activision CEO <a href="/wiki/Bobby_Kotick" title="Bobby Kotick">Bobby Kotick</a>, a major target of the controversy, will leave the company after the acquisition is finalized.<sup id="cite_ref-154" class="reference"><a href="#cite_note-154">[154]</a></sup> The deal is expected to close in 2023 followed by a review from the <a href="/wiki/US_Federal_Trade_Commission" class="mw-redirect" title="US Federal Trade Commission">US Federal Trade Commission</a>.<sup id="cite_ref-155" class="reference"><a href="#cite_note-155">[155]</a></sup><sup id="cite_ref-:0_150-1" class="reference"><a href="#cite_note-:0-150">[150]</a></sup>
</p>
How would I go about getting all the information in between those elements?
from bs4 import BeautifulSoup
import requests
url = 'https://en.wikipedia.org/wiki/Microsoft'
# call get method to request that page
page = requests.get(url)
soup = BeautifulSoup(page.text, "html.parser")
| [
"Use css selectors, specifically the sibling combinator ~ and the pseudo-classes :has and:not:\nhhSel = 'h2:has(#History)'\nhtSel = f'{hhSel} ~ *:not(style):not(h2):not({hhSel} ~ h2 ~ *)'\nhSectTags = soup.select(htSel)\n\nfor hst in hSectTags:\n flatTxt = ' '.join(w for w in hst.get_text(' ').split() if w)\n print(hst.name, '--->', flatTxt[:100] if flatTxt else hst)\n\nprints\ndiv ---> Main article: History of Microsoft\nlink ---> <link href=\"mw-data:TemplateStyles:r1033289096\" rel=\"mw-deduplicated-inline-style\"/>\ndiv ---> For a chronological guide, see Timeline of Microsoft .\nh3 ---> 1972–1985: Founding\ndiv ---> An Altair 8800 computer (left) with the popular Model 33 ASR Teletype as terminal, paper tape reader\ndiv ---> Paul Allen and Bill Gates on October 19, 1981, after signing a pivotal contract with IBM [11] : 228\np ---> Childhood friends Bill Gates and Paul Allen sought to make a business using their skills in computer\np ---> Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xen\nh3 ---> 1985–1994: Windows and Office\ndiv ---> Windows 1.0 was released on November 20, 1985, as the first version of the Windows line.\np ---> Microsoft released Windows on November 20, 1985, as a graphical extension for MS-DOS, [11] : 242–243\np ---> In 1990, Microsoft introduced the Microsoft Office suite which bundled separate applications such as\np ---> On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statemen\nh3 ---> 1995–2007: Foray into the Web, Windows 95, Windows XP, and Xbox\ndiv ---> In 1996, Microsoft released Windows CE, a version of the operating system meant for personal digital\np ---> Following Bill Gates' internal \"Internet Tidal Wave memo\" on May 26, 1995, Microsoft began to redefi\ndiv ---> Microsoft released the first installment in the Xbox series of consoles in 2001. The Xbox , graphica\np ---> On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer , an old college frien\np ---> Increasingly present in the hardware business following Xbox, Microsoft 2006 released the Zune serie\nh3 ---> 2007–2011: Microsoft Azure, Windows Vista, Windows 7, and Microsoft Stores\ndiv ---> CEO Steve Ballmer at the MIX event in 2008. 
In an interview about his management style in 2005, he m\ndiv ---> Headquarters of the European Commission, which has imposed several fines on Microsoft\np ---> Released in January 2007, the next version of Windows, Vista , focused on features, security and a r\np ---> Gates retired from his role as Chief Software Architect on June 27, 2008, a decision announced in Ju\np ---> As the smartphone industry boomed in the late 2000s, Microsoft had struggled to keep up with its riv\nh3 ---> 2011–2014: Windows 8/8.1, Xbox One, Outlook.com, and Surface devices\ndiv ---> Surface Pro 3 , part of the Surface series of laplets by Microsoft\np ---> Following the release of Windows Phone , Microsoft undertook a gradual rebranding of its product ran\np ---> In July 2012, Microsoft sold its 50% stake in MSNBC, which it had run as a joint venture with NBC si\np ---> In August 2012, the New York City Police Department announced a partnership with Microsoft for the d\ndiv ---> The Xbox One console, released in 2013\np ---> The Kinect , a motion-sensing input device made by Microsoft and designed as a video game controller\np ---> In line with the maturing PC business, in July 2013, Microsoft announced that it would reorganize th\ndiv ---> <div style=\"clear:both;\"></div>\nh3 ---> 2014–2020: Windows 10, Microsoft Edge, and HoloLens\ndiv ---> Satya Nadella succeeded Steve Ballmer as the CEO of Microsoft in February 2014.\np ---> On February 4, 2014, Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadel\np ---> On January 21, 2015, Microsoft announced the release of their first Interactive whiteboard , Microso\np ---> On March 1, 2016, Microsoft announced the merger of its PC and Xbox divisions, with Phil Spencer ann\ndiv ---> The Nokia Lumia 1320 , the Microsoft Lumia 535 and the Nokia Lumia 530 , which all run on one of the\np ---> In January 2018, Microsoft patched Windows 10 to account for CPU problems related to Intel's Meltdow\ndiv ---> Apollo 11 astronaut Buzz Aldrin using a Microsoft HoloLens mixed reality headset in September 2016\np ---> In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using \np ---> On February 20, 2019, Microsoft Corp said it will offer its cyber security service AccountGuard to 1\nh3 ---> 2020–present: Acquisitions, Xbox Series X/S, and Windows 11\nlink ---> <link href=\"mw-data:TemplateStyles:r1033289096\" rel=\"mw-deduplicated-inline-style\"/>\ndiv ---> Main article: Acquisition of Activision Blizzard by Microsoft\np ---> On March 26, 2020, Microsoft announced it was acquiring Affirmed Networks for about $1.35 billion. [\np ---> On July 31, 2020, it was reported that Microsoft was in talks to acquire TikTok after the Trump admi\np ---> On August 5, 2020, Microsoft stopped its xCloud game streaming test for iOS devices . According to M\np ---> On September 22, 2020, Microsoft announced that it had an exclusive license to use OpenAI ’s GPT-3 a\np ---> In April 2021, Microsoft announced it would buy Nuance Communications for approximately $16 billion.\np ---> On June 24, 2021, Microsoft announced Windows 11 during a Livestream. The announcement came with con\np ---> In October 2021, Microsoft announced that it began rolling out end-to-end encryption (E2EE) support \np ---> On January 18, 2022, Microsoft announced the acquisition of American video game developer and holdin\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074632370_beautifulsoup_python_web_scraping.txt |