Q:
python automating data collection issue
Is there anything wrong with my code here? I want to automate collecting some data from the web:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import requests
from bs4 import BeautifulSoup

od = input('Origin Destination: ')
dp = input('Departure Periode: ')
#op = input('Observation Periode: ')
try:
    driver = webdriver.Chrome()
    driver.get("http://infare.net/login.aspx")
    username = driver.find_element(By.NAME, "loxLogin$UserName")
    username.send_keys("username")
    password = driver.find_element(By.NAME, "loxLogin$Password")
    password.send_keys("password")
    login = driver.find_element(By.CLASS_NAME, "LoginSplashButtonStyle")
    login.click()
finally:
    secure_url = 'http://infare.net/Pages/Analysis/DataDisplay.aspx'
    driver.get(secure_url)
    req = requests.get(secure_url)
    soup = BeautifulSoup(req.text, 'html.parser')
    #dropdown rute
    dropdownbox = driver.find_elements(by=By.TAG_NAME, value="option")
    i = 0
    while i < len(dropdownbox):
        if (dropdownbox[i].text == od):
            dropdownbox[i].click()
        i = i + 1
    #departure periode
    departure = driver.find_element(By.NAME, "ctl00$cntMain$ucDeparturePeriod$txtDeparturePeriod")
    departure.send_keys(dp)
    #export.button
    search = driver.find_element(By.ID, "ctl00_cntMain_btnSearch")
    search.click()
    export = driver.find_element(By.NAME, "ctl00$cntMain$btnExport")
    export.click()
Can somebody help? I am a newbie at this, really.
When I run it, the browser opens and runs automatically, but at the end the browser closes and the process stops with the error message:
"DevTools listening on ws://127.0.0.1:53392/devtools/browser/e110e02e-1957-4409-ae91-f2924ec0af01
[13472:15756:1203/121855.696:ERROR:util.cc(133)] Can't create base directory: C:\Program Files\Google\GoogleUpdater"
(screenshot: process)
All processes:
login: success
choose the "Data" menu: success
(screenshot: data menu)
from this point on, the process won't run as I expected
(screenshot: Option)
My expectation from this code is: it should automatically click the search button to show the data for the chosen option, then click the export button to download the data as a CSV file.
A:
It looks like there may be a problem with the finally clause in your code. The finally clause is executed whether or not an exception is thrown in the try clause, and it is typically used to clean up resources, such as closing open files or network connections. In your code, all of the scraping steps live inside the finally clause, so they run even when the login in the try clause fails, which may cause issues.
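A quick way to see that behavior in isolation (a minimal standalone sketch, separate from the Selenium script):

def attempt(fail):
    try:
        if fail:
            raise RuntimeError("login failed")
        print("login ok")
    finally:
        # this block executes on success AND on failure
        print("finally runs either way")

attempt(fail=False)  # prints: login ok / finally runs either way
# attempt(fail=True) would print the finally message, then raise RuntimeError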
One way to fix this issue is to move the code from the finally clause to after the try/except block, and remove the finally clause entirely. That way the scraping steps run only as part of the normal program flow, after the login attempt has been handled, instead of being forced to run even when the login itself fails. Here is an example of how you could modify your code to fix this issue:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import requests
from bs4 import BeautifulSoup

od = input('Origin Destination: ')
dp = input('Departure Periode: ')
#op = input('Observation Periode: ')
try:
    driver = webdriver.Chrome()
    driver.get("http://infare.net/login.aspx")
    username = driver.find_element(By.NAME, "loxLogin$UserName")
    username.send_keys("username")
    password = driver.find_element(By.NAME, "loxLogin$Password")
    password.send_keys("password")
    login = driver.find_element(By.CLASS_NAME, "LoginSplashButtonStyle")
    login.click()

except Exception:
    # Handle any exceptions thrown in the try clause here.
    pass

# Move code from the finally clause here.
secure_url = 'http://infare.net/Pages/Analysis/DataDisplay.aspx'
driver.get(secure_url)
req = requests.get(secure_url)
soup = BeautifulSoup(req.text, 'html.parser')

#dropdown rute
dropdownbox = ...
Q:
How can I make a statement go through even though there's a needed break statement?
My instructions are to help a child find their way home. For example, if the input is:
R
-JOHN
-L
-KING
-L
-SCHOOL
this means that to get to school from his house he had to turn right on John, left on King, and left to School.
The output needs to help him find his way back home. An example of this is:
R
KING
-R
-JOHN
-L
-HOME
This means that to get to his house from school he has to turn right on King, right on John, and left to Home.
My problem is: I can't seem to incorporate all of the restrictions into the output. For the new directions home, I have to drop the first destination, "SCHOOL", and reverse the directions for the streets from there. I tried that and it didn't work. Also, how can I figure out how to print the directions with HOME? No direction prints when HOME is printed...
directions = []
counter = 0
while True:
    direction = input("Enter L or R for the direction: ")
    street = input("Enter the street name: ")
    if street == "SCHOOL":
        break
    directions.append((direction, street))
    counter += 1
    if counter == 3:
        break

reversed_directions = []
for direction, street in directions:
    if direction == "L":
        direction = "R"
    elif direction == "R":
        direction = "L"
    reversed_directions.append((direction, street))

#It was working until I added in this bit
del reversed_directions[0]
reversed_directions.insert(2, "HOME")

print("Original directions:", directions)
print("New directions:", reversed_directions[::-1])
A:
Firstly, you are checking whether street is school too early, before even appending it. This causes your loop to break before the data for the school direction has been added.
Next, you don't need to reverse your list again at the end, since it is already in the order you require, so remove [::-1]. Here's the fixed code:
directions = []
counter = 0
while True:
    direction = input("Enter L or R for the direction: ")
    street = input("Enter the street name: ")
    directions.append((direction, street))
    counter += 1
    # break once 3 streets were read or the street is SCHOOL (case-insensitive)
    if counter == 3 or street.casefold() == "school":
        break

reversed_directions = []
for direction, street in directions:
    if direction == "L":
        direction = "R"
    elif direction == "R":
        direction = "L"
    reversed_directions.append((direction, street))

del reversed_directions[0]
if directions[0][0] == "L":  # update left or right based on first value
    reversed_directions.insert(2, ("R", "HOME"))
else:
    reversed_directions.insert(2, ("L", "HOME"))
# used f-strings for ease
print(f"Original directions: {directions}")
print(f"New directions: {reversed_directions}")
Input: [("R", "JOHN"), ("L", "KING"), ("L", "SCHOOL")]
Output:
>>> Original directions: [('R', 'JOHN'), ('L', 'KING'), ('L', 'SCHOOL')]
>>> New directions: [('R', 'KING'), ('R', 'SCHOOL'), ('L', 'HOME')]
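One detail worth checking in isolation: casefold must be called, otherwise the comparison pits a bound method against a string and is always False. A tiny standalone check:

street = "SCHOOL"
print(street.casefold == "school")    # False: compares a method object to a string
print(street.casefold() == "school")  # True: compares the casefolded text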
Q:
Using multiple datasets in Gridspec
I am trying to create subplots inside a subplot, and I have found some code that can do this using the gridspec method. I have managed to fix the code so the figures are displayed as I want, but I can't figure out how to get a different dataset into each sub-figure.
This is what I have:
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

fig = plt.figure(figsize=(10, 8))
outer = gridspec.GridSpec(2, 2, wspace=0.1, hspace=0.1)

for i in range(4):
    inner = gridspec.GridSpecFromSubplotSpec(4, 1,
                    subplot_spec=outer[i], wspace=0.1, hspace=0.1)

    for j in range(4):
        ax = plt.Subplot(fig, inner[j])
        a = ax.plot(df)
        t.set_ha('center')
        ax.set_xticks([])
        ax.set_yticks([])
        fig.add_subplot(ax)
I have tried multiple options to achieve what I want without success.
If anyone could help with this I would appreciate it.
Thanks.
A:
I have managed to solve my problem now. Instead of trying to put multiple ax.plot() lines, or putting multiple DataFrames inside ax.plot(df1, df2, df3) etc., I created a list which I put inside the for loop. I also created a column variable to go in the inner loop.
If using nested loops like this, the value that changes horizontally must go in the outer loop, while the value that changes vertically must go in the inner loop.
In my case, the first subplot contains four different columns from the same DataFrame, the second contains four different columns from another DataFrame, and so on.
This is how it is implemented in the code:
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

residual_list = (['df_168h', 'residuals_168', 'residuals_168_d', 'residuals_168_weekend'])

fig = plt.figure(figsize=(10, 8))
outer = gridspec.GridSpec(2, 2, wspace=0.1, hspace=0.1)

for i, s in zip(range(4), residual_list):
    inner = gridspec.GridSpecFromSubplotSpec(4, 1,
                    subplot_spec=outer[i], wspace=0.1, hspace=0.1)

    for j, column in zip(range(4), df_168h):
        ax = plt.Subplot(fig, inner[j])
        a = ax.plot(locals()[s][column])
        # t.set_ha('center')  # `t` is never defined in this snippet; leftover line
        ax.set_xticks([])
        ax.set_yticks([])
        fig.add_subplot(ax)
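A possible refinement (a sketch, assuming the four DataFrames exist under the names listed in residual_list): looking variables up through locals() is fragile, and a plain dictionary of DataFrames makes the same loop explicit:

# hypothetical mapping from name to the actual DataFrame object
frames = {
    'df_168h': df_168h,
    'residuals_168': residuals_168,
    'residuals_168_d': residuals_168_d,
    'residuals_168_weekend': residuals_168_weekend,
}

for i, s in zip(range(4), residual_list):
    inner = gridspec.GridSpecFromSubplotSpec(4, 1, subplot_spec=outer[i],
                                             wspace=0.1, hspace=0.1)
    for j, column in zip(range(4), frames[s]):
        ax = plt.Subplot(fig, inner[j])
        ax.plot(frames[s][column])  # look up the frame by name, then the column
        ax.set_xticks([])
        ax.set_yticks([])
        fig.add_subplot(ax)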
Q:
How to install OpenCV in Mac M1?
My goal is to install OpenCV ($ pip install opencv-python) on a Mac M1. The problem is I don't know OpenCV, so I would like to learn from the Getting Started page. However, the very first code sample in the OpenCV tutorial throws me an error.
What I've tried:
$ pip install opencv-python -> same error
$ pip uninstall opencv-python -> $ pip install opencv-contrib-python -> same error.
import cv2 as cv
import sys

img = cv.imread(cv.samples.findFile("starry_night.jpg"))
if img is None:
    sys.exit("Could not read the image.")
cv.imshow("Display window", img)
k = cv.waitKey(0)
if k == ord("s"):
    cv.imwrite("starry_night.png", img)
error
[ WARN:0@0.011] global /Users/xperience/actions-runner/_work/opencv-python/opencv-python/opencv/modules/core/src/utils/samples.cpp (61) findFile cv::samples::findFile('starry_night.jpg') => ''
---------------------------------------------------------------------------
error Traceback (most recent call last)
Cell In [1], line 3
1 import cv2 as cv
2 import sys
----> 3 img = cv.imread(cv.samples.findFile("starry_night.jpg"))
4 if img is None:
5 sys.exit("Could not read the image.")
error: OpenCV(4.6.0) /Users/xperience/actions-runner/_work/opencv-python/opencv-python/opencv/modules/core/src/utils/samples.cpp:64: error: (-2:Unspecified error) OpenCV samples: Can't find required data file: starry_night.jpg in function 'findFile'
A:
It looks like the cv.imread() function is unable to find the image file "starry_night.jpg" in your current directory. This is likely because the findFile() function is returning an empty string, which indicates that the file could not be found.
To fix this issue, you will need to make sure that the "starry_night.jpg" file exists in your current working directory. You can verify this by running ls in your terminal to list all files in the current directory, or by using the os.path.exists() function in Python to check if the file exists.
Once you have confirmed that the file exists in your current directory, you can try modifying the code to specify the full path to the image file instead of relying on the findFile() function. For example:
import cv2 as cv
import os
import sys  # needed for sys.exit below

# Replace "/path/to/image/file" with the full path to your "starry_night.jpg" file
img_path = "/path/to/image/file/starry_night.jpg"

# Check if the file exists
if not os.path.exists(img_path):
    sys.exit("Error: File not found.")

# Read the image file
img = cv.imread(img_path)

# Check if the image was successfully read
if img is None:
    sys.exit("Error: Could not read the image.")

# Display the image
cv.imshow("Display window", img)

# Wait for a key press and save the image if "s" is pressed
k = cv.waitKey(0)
...
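If the goal is just to run the tutorial as written, the sample image has to exist locally first. A minimal sketch for fetching it (the URL assumes the image is still hosted in the OpenCV GitHub repository under samples/data; verify it before relying on it):

import urllib.request

# assumed location of the tutorial image in the OpenCV source tree
url = "https://raw.githubusercontent.com/opencv/opencv/master/samples/data/starry_night.jpg"
urllib.request.urlretrieve(url, "starry_night.jpg")  # saves into the current directory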
Q:
How to write by column rather than rows - Python to CSV
I want to write my Python list all_results to CSV. However, when I use the following code, it saves all the records across a single row rather than down a single column.
import csv

fh = open('output.csv', 'w')
cvs_writer = csv.writer(fh)

# write one row with headers (using `writerow` without `s` at the end)
cvs_writer.writerow(["Column 1"])

# write many rows with results (using `writerows` with `s` at the end)
cvs_writer.writerows([all_results])

fh.close()
Output
|Column 1|
|row 1| |row 2| |row 3| |row 4| |row 5| |row 6|
Expected Output
|Column 1|
|row 1|
|row 2|
|row 3|
|row 4|
|row 5|
|row 6|
A:
To write all rows in a single column, you can use a list comprehension to create a list of lists, where each inner list contains a single item. You can then write the resulting list using the writerows method of the csv module.
Here is an example:
import csv

# Create a list of lists, where each inner list contains a single item
all_results = [[result] for result in all_results]

# Open the file in write mode (newline='' avoids blank lines on Windows)
with open('output.csv', 'w', newline='') as fh:
    # Create a CSV writer
    cvs_writer = csv.writer(fh)

    # Write the headers
    cvs_writer.writerow(["Column 1"])

    # Write the rows
    cvs_writer.writerows(all_results)
This code will write the all_results list to a CSV file, with each item in the list appearing in a separate row in the first column.
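If you later need several result lists side by side, the usual approach is zip, which regroups per-column lists into per-row tuples (a sketch with made-up column data):

import csv

col_a = [1, 2, 3]
col_b = ['x', 'y', 'z']

with open('output.csv', 'w', newline='') as fh:
    writer = csv.writer(fh)
    writer.writerow(["Column 1", "Column 2"])
    writer.writerows(zip(col_a, col_b))  # each tuple becomes one CSV row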
Q:
How to write value != '' in Python Pandas
I don't know how to write a check for a blank value (no data; not null; != '') in pandas. Below is an example of what I am using.
df['Column4'] = np.where(df['Column1'].notnull(), 'Yes',
                np.where(df['Column2'] == 0, 'NO',
                np.where(df['Column2'].notnull(), df['Column2'],
                np.where(df['Column3'] != '', '', 'Not_Data'))))
I tried .fillna('', inplace=True), .dropna().empty
A:
You can use the notnull method of a pandas dataframe to check if a column contains any non-null values, and then use the np.where function to write the appropriate value based on that check. Here is an example:
import numpy as np
import pandas as pd

# create a sample dataframe with some null values
df = pd.DataFrame({'Column1': [1, 2, None, 3],
                   'Column2': [None, 5, 6, None],
                   'Column3': [7, 8, 9, None]})

# use the `notnull` method to check if a column contains any non-null values
# and use the `np.where` function to write the appropriate value based on that check
df['Column4'] = np.where(df['Column1'].notnull(), 'Yes',
                np.where(df['Column2'].notnull(), 'Yes',
                np.where(df['Column3'].notnull(), 'Yes', 'No')))

# the resulting dataframe should look like this:
#    Column1  Column2  Column3 Column4
# 0      1.0      NaN      7.0     Yes
# 1      2.0      5.0      8.0     Yes
# 2      NaN      6.0      9.0     Yes
# 3      3.0      NaN      NaN      No
You can also use the notna method instead of notnull, which has the same effect. Note that the notna method was introduced in pandas version 0.24.0, so if you are using an older version you will need to use notnull instead.
If you want to check if a column contains any non-empty string values, you can use the str.len method along with the notnull method to check for the length of the strings in the column, and then use the np.where function to write the appropriate value based on that check. Here is an example:
# create a sample dataframe with some null and empty string values
df = pd.DataFrame({'Column1': [1, 2, None, 3],
                   'Column2': [None, 5, 6, None],
                   'Column3': ['', '8', '9', None]})

# use the `str.len` method along with the `notnull` method to check
# if a column contains any non-empty string values
# and use the `np.where` function to write the appropriate value based on that check
df['Column4'] = np.where(df['Column1'].notnull(), 'Yes',
                np.where(df['Column2'].notnull(), 'Yes',
                np.where((df['Column3'].str.len() > 0) & df['Column3'].notnull(), 'Yes', 'No')))
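When the conditions start nesting like this, np.select can be easier to read than chained np.where calls. A sketch over the same sample dataframe (the labels are illustrative):

conditions = [
    df['Column1'].notnull(),
    df['Column2'].notnull(),
    df['Column3'].notnull() & (df['Column3'].str.len() > 0),
]
choices = ['Yes', 'Yes', 'Yes']
df['Column4'] = np.select(conditions, choices, default='No')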
Q:
Accessing the parent __init__ variable in the child class (Python)
Hello, I'm trying to write some OOP code, but I'm currently stuck on how to inherit the __init__ arguments of my parent class in the child class. Is there a method I can use to adapt the variable from my parent class for use in the child?
class a:
    def __init__(self, name):
        self.name = name

class b(a):
    def __init__(self, age):
        super().__init__()
        self.age = age
When I try to use the name from the parent, it errors.
b('joe', 40)

Traceback (most recent call last):
  File "<string>", line 11, in <module>
TypeError: __init__() takes 2 positional arguments but 3 were given
A:
In the b class, you need to include the name argument in the __init__ method and pass it to the super() method as shown below:
class a:
    def __init__(self, name):
        self.name = name

class b(a):
    def __init__(self, name, age):
        super().__init__(name)
        self.age = age
Now you can create an instance of the b class and pass the name and age arguments as follows:
b('joe', 40)
This will correctly initialize the name attribute inherited from the a class and the age attribute in the b class.
A:
The arguments from the child constructor need to be passed to the parent. This is because the child constructor overrides the method (i.e. replaces the constructor method) from the parent. Calling super allows you to access the original parent constructor, which will need to be provided the appropriate arguments.
class a:
    def __init__(self, name):
        self.name = name

class b(a):
    def __init__(self, name, age):
        super().__init__(name)
        self.age = age
As you might notice, this means you need to write a lot of boilerplate code to plumb down the arguments (especially if there are many arguments). If this class is purely for data, then dataclasses provide a much easier and less error prone alternative.
from dataclasses import dataclass

@dataclass
class a:
    name: str

@dataclass
class b(a):
    age: int

print(b('joe', 12))

b(name='joe', age=12)
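Another common way to avoid re-listing every parent argument, when dataclasses don't fit, is to forward *args/**kwargs to the parent (a sketch using the field names from the question):

class a:
    def __init__(self, name):
        self.name = name

class b(a):
    def __init__(self, age, *args, **kwargs):
        super().__init__(*args, **kwargs)  # forwards name (and anything else) to the parent
        self.age = age

child = b(40, 'joe')          # age comes first here; the rest goes to the parent
print(child.name, child.age)  # joe 40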
Q:
tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2]
Hi, I am a beginner in DL and TensorFlow.
I created a CNN (you can see the model below):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu", input_shape=[512, 640, 3]))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.Conv2D(filters=256, kernel_size=3, activation="relu"))
model.add(tf.keras.layers.MaxPooling2D(2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.2) #, momentum=0.9, decay=0.1)
model.compile(optimizer=optimizer, loss='mse', metrics=['accuracy'])
I tried building and training it with the CPU and it completed successfully (but very slowly), so I decided to install tensorflow-gpu.
I installed everything as instructed in https://www.tensorflow.org/install/gpu.
But now when I try to build the model, this error comes up:
Traceback (most recent call last):
  File "C:/Users/thano/Documents/Py_workspace/AI_tensorflow/fire_detection/main.py", line 63, in <module>
    model = create_models.model1()
  File "C:\Users\thano\Documents\Py_workspace\AI_tensorflow\fire_detection\create_models.py", line 20, in model1
    model.add(tf.keras.layers.Dense(128, activation='relu'))
  File "C:\Python37\lib\site-packages\tensorflow\python\training\tracking\base.py", line 530, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Python37\lib\site-packages\keras\engine\sequential.py", line 217, in add
    output_tensor = layer(self.outputs[0])
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 977, in __call__
    input_list)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 1115, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 848, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 886, in _infer_output_signature
    self._maybe_build(inputs)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 2659, in _maybe_build
    self.build(input_shapes)  # pylint:disable=not-callable
  File "C:\Python37\lib\site-packages\keras\layers\core.py", line 1185, in build
    trainable=True)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer.py", line 663, in add_weight
    caching_device=caching_device)
  File "C:\Python37\lib\site-packages\tensorflow\python\training\tracking\base.py", line 818, in _add_variable_with_custom_getter
    **kwargs_for_getter)
  File "C:\Python37\lib\site-packages\keras\engine\base_layer_utils.py", line 129, in make_variable
    shape=variable_shape if variable_shape else None)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 266, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 227, in _variable_v1_call
    shape=shape)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 205, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2626, in default_variable_creator
    shape=shape)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\variables.py", line 270, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1613, in __init__
    distribute_strategy=distribute_strategy)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1740, in _init_from_args
    initial_value = initial_value()
  File "C:\Python37\lib\site-packages\keras\initializers\initializers_v2.py", line 517, in __call__
    return self._random_generator.random_uniform(shape, -limit, limit, dtype)
  File "C:\Python37\lib\site-packages\keras\initializers\initializers_v2.py", line 973, in random_uniform
    shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=self.seed)
  File "C:\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\random_ops.py", line 315, in random_uniform
    result = math_ops.add(result * (maxval - minval), minval, name=name)
  File "C:\Python37\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\math_ops.py", line 3943, in add
    return gen_math_ops.add_v2(x, y, name=name)
  File "C:\Python37\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 454, in add_v2
    _ops.raise_from_not_ok_status(e, name)
  File "C:\Python37\lib\site-packages\tensorflow\python\framework\ops.py", line 6941, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2]
Any ideas what might be the problem?
A:
The error is telling you that it couldn't allocate as much VRAM as you are using. The easiest way to overcome this kind of problem is to reduce the batch size to a number that fits in your GPU's VRAM.
A:
The error message you received tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory [Op:AddV2] could indicate that your GPU does not have enough memory for the training job you want to run. What GPU are you using and how much vRAM does it have?
When it comes to "Out Of Memory" (OOM) errors when training, the most straightforward thing to do is to reduce the batch_size hyperparameter.
There's no straightforward way to determine what the largest batch_size you can use while training that will fit your GPU's available vRAM other than trial and error. A general rule however, is to use a power of 2 (e.g. 8, 16, 32).
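As a concrete illustration, the change is just the batch_size argument to fit (a sketch; x_train and y_train stand in for your actual training data):

model.fit(x_train, y_train,
          epochs=10,
          batch_size=8)  # try 32 -> 16 -> 8 until the OOM error disappears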
A:
As this implies an out-of-memory scenario, the first thing you should try is to reduce the batch size. This could also happen if you have a very large training dataset size. You can try training the model on a subset of training data and see if that helps.
A:
If you have a lot of training samples you might get a ResourceExhaustedError.
From the TensorFlow docs on ResourceExhaustedError:
"For example, this error might be raised if a per-user quota is exhausted, or perhaps the entire file system is out of space."
How to fix this error:
Set a smaller batch_size when training the model with the fit method:
"batch_size: Integer or None. Number of samples per gradient update."
The higher the batch_size, the more memory is required while training.
If you are on a Jupyter notebook, try restarting the kernel.
Restarting the kernel will reset your notebook and remove all the memory allocated to the variables or methods you've defined!
Q:
How to add new video files to HLS?
I'm having trouble live streaming a video file that is constantly updated using HLS.
Video files recorded by POST from the client are sent to the server.
The server converts the received video to HLS (.m3u8 .ts).
You can convert to .m3u8 and .ts with the following code.
def to_m3u8(movie_path: Path):
    """
    Convert mp4 to m3u8.
    :param movie_path:
    :return: m3u8 file path
    """
    m3u8_path = movie_path.parent/f"{movie_path.stem}.m3u8"
    command = f"ffmpeg -i {movie_path} " \
              f"-c copy " \
              f"-f segment -segment_time_delta 0 " \
              f"-segment_list_type hls " \
              f"-movflags +faststart " \
              f"-preset ultrafast " \
              f"-hls_playlist_type event " \
              f"-hls_flags append_list " \
              f"-hls_list_size 10 " \
              f"-segment_list_size 0 " \
              f"-segment_list {m3u8_path} " \
              f"-segment_format mpegts " \
              f"{movie_path.parent}/segment_%03d.ts"
    logger.info(f"command: {command}")
    subprocess.run(command, shell=True)
    return m3u8_path
I can see the .m3u8 and .ts files being overwritten every time I receive POST data.
But when I open the .m3u8 in VLC, it plays a few seconds of video and then stops.
The .m3u8 file looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:5
#EXTINF:4.660000,
segment_000.ts
#EXTINF:4.120000,
segment_001.ts
#EXTINF:0.160000,
segment_002.ts
#EXT-X-ENDLIST
I thought #EXT-X-ENDLIST wasn't needed, so I removed that line with the code below:
with open(m3u8_path, "r") as f:
lines = f.readlines()
with open(m3u8_path, "w") as f:
for line in lines:
if line.startswith("#EXT-X-ENDLIST") is False:
f.write(line)
However, it doesn't stream; it behaves like a static movie file.
How can the player pick up the newly added files at any time?
Can it be handled by changing FFmpeg options?
A:
To make sure that the HLS stream is constantly updated, you can use the -hls_flags append_list option in the ffmpeg command that you are using to create the HLS stream. This option will make sure that the HLS playlist is constantly updated with new segments as they are added, so that the stream is always up-to-date.
Here is an example of how you can use this option in your ffmpeg command:
command = f"ffmpeg -i {movie_path}" \
f"-c copy -map 0" \
f"-f segment -segment_time_delta 0 " \
f"-segment_list_type hls" \
f"-mov flags +faststart" \
f"-preset veryfast" \
f"-hls_playlist_type event" \
f"-hls_flags append_list" \
f"-segment_list_size 0" \
f"-segment_list {m3u8_path}" \
f"-segment_format mpegts" \
f"{movie_path.parent}/segment_%03d.ts"
You can also use the -hls_list_size option to control how many segments are included in the HLS playlist at any given time. This can be useful if you want to limit the size of the playlist to prevent it from growing too large. For example, you can use the -hls_list_size 10 option to make sure that only the 10 most recent segments are included in the playlist.
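One more avenue worth trying (a sketch, not verified against your setup): the segment muxer writes #EXT-X-ENDLIST when it finishes, which is what makes VLC treat the playlist as a finished video rather than a live stream. ffmpeg's dedicated hls muxer has an omit_endlist flag for exactly this live-append case:

# hypothetical variant using the hls muxer instead of the segment muxer
command = f"ffmpeg -i {movie_path} " \
          f"-c copy " \
          f"-f hls " \
          f"-hls_time 4 " \
          f"-hls_list_size 0 " \
          f"-hls_flags append_list+omit_endlist " \
          f"{m3u8_path}"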
Q:
python continue download from where I left off
I'm trying to download a very large file in Colab to my gDrive. Sometimes the connection cuts out, and it requires that I restart. Is there a way I can download from where I left off?
My code looks like so:
from requests import get
import sys
def download(url, file_name):
# open in binary mode
with open(file_name, "wb") as f:
print("Downloading %s" % file_name)
response = get(url, stream=True)
total_length = response.headers.get('content-length')
if total_length is None: # no content length header
f.write(response.content)
else:
dl = 0
total_length = int(total_length)
for data in response.iter_content(chunk_size=4096):
dl += len(data)
f.write(data)
done = int(50 * dl / total_length)
sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50-done)) )
sys.stdout.flush()
A:
To resume a download from a specific point, you can use the Range request header to specify the byte range that you want to download. For example, to skip the bytes you already have on disk and fetch the rest of the file, you can use the following code:
from requests import get
import sys
def download(url, file_name, start_byte, end_byte=None):
    # open in binary append mode so the bytes already on disk are kept
    with open(file_name, "ab") as f:
        print("Downloading %s" % file_name)
        # an empty end means "to the end of the file", e.g. Range: bytes=1000-
        range_end = '' if end_byte is None else end_byte
        response = get(url, headers={'Range': f'bytes={start_byte}-{range_end}'}, stream=True)
total_length = response.headers.get('content-length')
if total_length is None: # no content length header
f.write(response.content)
else:
dl = 0
total_length = int(total_length)
for data in response.iter_content(chunk_size=4096):
dl += len(data)
f.write(data)
done = int(50 * dl / total_length)
sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50-done)) )
sys.stdout.flush()
# resume the download from byte 1000 to the end of the file
download(url, file_name, 1000)
Note that not all servers support the Range header.
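A quick way to probe for that (a sketch; url is assumed to be defined as in the snippet above) is to check the Accept-Ranges response header:
from requests import head

response = head(url, allow_redirects=True)
if response.headers.get('Accept-Ranges') == 'bytes':
    print("server advertises byte-range support, resuming should work")
else:
    print("no Accept-Ranges header, the server may ignore Range requests")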
| python continue download from where I left off | I'm trying to download a very large file in Colab to my gDrive. Sometimes the connection cuts out, and it requires that I restart. Is there a way I can download from where I left off?
My code looks like so:
from requests import get
import sys
def download(url, file_name):
# open in binary mode
with open(file_name, "wb") as f:
print("Downloading %s" % file_name)
response = get(url, stream=True)
total_length = response.headers.get('content-length')
if total_length is None: # no content length header
f.write(response.content)
else:
dl = 0
total_length = int(total_length)
for data in response.iter_content(chunk_size=4096):
dl += len(data)
f.write(data)
done = int(50 * dl / total_length)
sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50-done)) )
sys.stdout.flush()
| [
"To download a file from a specific point, you can use the Range request header to specify the byte range that you want to download. For example, to download the last 100 bytes of a file, you can use the following code:\nfrom requests import get\nimport sys\n\ndef download(url, file_name, start_byte, end_byte):\n # open in binary mode\n with open(file_name, \"wb\") as f:\n print(\"Downloading %s\" % file_name)\n response = get(url, headers={'Range': f'bytes={start_byte}-{end_byte}'}, stream=True)\n total_length = response.headers.get('content-length')\n\n if total_length is None: # no content length header\n f.write(response.content)\n else:\n dl = 0\n total_length = int(total_length)\n for data in response.iter_content(chunk_size=4096):\n dl += len(data)\n f.write(data)\n done = int(50 * dl / total_length)\n sys.stdout.write(\"\\r[%s%s]\" % ('=' * done, ' ' * (50-done)) ) \n sys.stdout.flush()\n\n# download the last 100 bytes of the file\ndownload(url, file_name, -100, None)\n\nNote that not all servers support the Range header.\n"
] | [
1
] | [] | [] | [
"download",
"python"
] | stackoverflow_0074664425_download_python.txt |
Q:
Is there any difference between manually logging in and selenium-python?
There are two methods.
First, launch chrome debugging mode by using os.system() module and manually login, then connect selenium to get page source.
Second, launch and login are also controlled by selenium, Then get page source.
Because it is too difficult to log in to the webpage (2 sessions are needed), I didn't try the second method.
So I just want to know: is there any difference between manually logging in and selenium-python?
A:
There may be a difference between manually logging in to a website and using Selenium to login to the same website. This difference may be due to a number of factors, such as the way in which the website authenticates users, the specific actions that are performed during the login process, and the way in which the website responds to different types of input.
One potential difference between manual and automated login is that manual login may allow a user to enter their credentials in a more flexible way, such as using a keyboard, mouse, or other input device, whereas automated login may be more limited in terms of the input methods that are supported. Additionally, manual login may allow a user to view and interact with the website in a way that is not possible with automated login, such as clicking on buttons or links, or interacting with other elements on the page.
Another potential difference is that manual login may be subject to human error, such as mistyping a password or entering the wrong username, whereas automated login using Selenium can be more consistent and reliable. However, automated login may also be subject to errors, such as incorrect configuration of the Selenium script or errors in the website's authentication system.
Overall, the differences between manual and automated login will depend on the specific website and the way in which it is designed and implemented. It is therefore important to carefully evaluate the specific requirements and constraints of the login process in order to determine the best approach for logging in to the website.
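For the first method described in the question, a minimal sketch of attaching Selenium to a Chrome instance that was started manually with remote debugging enabled might look like this (the port 9222 is an assumption; Chrome must have been launched with --remote-debugging-port=9222):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# attach to the already running, manually logged-in Chrome session
options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(options=options)
print(driver.page_source[:200])  # the attached session keeps the manual login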
| Is there any difference between manually logging in and selenium-python? | There are two methods.
First, launch chrome debugging mode by using os.system() module and manually login, then connect selenium to get page source.
Second, launch and login are also controlled by selenium, Then get page source.
Because it is too difficult to log in to the webpage (2 sessions are needed), I didn't try the second method.
So I just want to know: is there any difference between manually logging in and selenium-python?
| [
"There may be a difference between manually logging in to a website and using Selenium to login to the same website. This difference may be due to a number of factors, such as the way in which the website authenticates users, the specific actions that are performed during the login process, and the way in which the website responds to different types of input.\nOne potential difference between manual and automated login is that manual login may allow a user to enter their credentials in a more flexible way, such as using a keyboard, mouse, or other input device, whereas automated login may be more limited in terms of the input methods that are supported. Additionally, manual login may allow a user to view and interact with the website in a way that is not possible with automated login, such as clicking on buttons or links, or interacting with other elements on the page.\nAnother potential difference is that manual login may be subject to human error, such as mistyping a password or entering the wrong username, whereas automated login using Selenium can be more consistent and reliable. However, automated login may also be subject to errors, such as incorrect configuration of the Selenium script or errors in the website's authentication system.\nOverall, the differences between manual and automated login will depend on the specific website and the way in which it is designed and implemented. It is therefore important to carefully evaluate the specific requirements and constraints of the login process in order to determine the best approach for logging in to the website.\n"
] | [
1
] | [] | [] | [
"python",
"selenium"
] | stackoverflow_0074664211_python_selenium.txt |
Q:
Can we use the "SInce" and "Until" option in TWEEPY to fetch tweets from a specific date?
Actually, I am working on a project which collects tweets if we pass a certain keyword. For ex. If I pass the keyword as "Messi", it will collect every tweets regarding Messi. We are passing the parameters as "query" and "no of tweets". No of tweets will restrict the count of tweets that we need. So, tweepy collects the recent tweets from the field. What I want, is that I want to retrieve tweets from a certain timeline. Suppose, I want the tweets regarding "Messi" from 20th Jan 2022 to 02nd Feb 2022, it should fetch from that certain timeline.I've used POSTMAN with the twitter API and I am getting the results in that, but it is not being applied in the Python Tweepy code. So, do we have any option regarding that?
I tried POSTMAN, and in that we have the endpoint of "Full Archived Search". So, we can pass the since and until option in that, but it is not being applied to the Python Tweepy code, which I'm doing. So, can we apply the since and until option in the tweepy code? If not, do we have any other alternative for it?
A:
Yes, you can use the since and until parameters in the tweepy code to collect tweets within a certain timeline. These parameters can be passed as part of the query parameter in the Cursor object when calling the Cursor.items() method.
Here is an example of how you can use these parameters:
import tweepy
# authentication details and other code to initialize the tweepy API client
# ...
# define the start and end dates for the timeline
start_date = "2022-01-20"
end_date = "2022-02-02"
# define the keyword to search for in the tweets
keyword = "Messi"
# create the query string to search for tweets containing the keyword
# within the defined timeline
query = f"{keyword} since:{start_date} until:{end_date}"
# create a tweepy Cursor object to iterate over the tweets matching the query
cursor = tweepy.Cursor(api.search, q=query)
# iterate over the tweets and print the text of each tweet
for tweet in cursor.items():
print(tweet.text)
You can find more information about the since and until parameters in the Twitter API documentation.
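Note that in Tweepy v4 the standard-search method was renamed from API.search to API.search_tweets, so depending on your installed version the cursor may need to be created as:
cursor = tweepy.Cursor(api.search_tweets, q=query)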
| Can we use the "SInce" and "Until" option in TWEEPY to fetch tweets from a specific date? | Actually, I am working on a project which collects tweets if we pass a certain keyword. For ex. If I pass the keyword as "Messi", it will collect every tweets regarding Messi. We are passing the parameters as "query" and "no of tweets". No of tweets will restrict the count of tweets that we need. So, tweepy collects the recent tweets from the field. What I want, is that I want to retrieve tweets from a certain timeline. Suppose, I want the tweets regarding "Messi" from 20th Jan 2022 to 02nd Feb 2022, it should fetch from that certain timeline.I've used POSTMAN with the twitter API and I am getting the results in that, but it is not being applied in the Python Tweepy code. So, do we have any option regarding that?
I tried POSTMAN, and in that we have the endpoint of "Full Archived Search". So, we can pass the since and until option in that, but it is not being applied to the Python Tweepy code, which I'm doing. So, can we apply the since and until option in the tweepy code? If not, do we have any other alternative for it?
| [
"Yes, you can use the since and until parameters in the tweepy code to collect tweets within a certain timeline. These parameters can be passed as part of the query parameter in the Cursor object when calling the Cursor.items() method.\nHere is an example of how you can use these parameters:\nimport tweepy\n\n# authentication details and other code to initialize the tweepy API client\n# ...\n\n# define the start and end dates for the timeline\nstart_date = \"2022-01-20\"\nend_date = \"2022-02-02\"\n\n# define the keyword to search for in the tweets\nkeyword = \"Messi\"\n\n# create the query string to search for tweets containing the keyword\n# within the defined timeline\nquery = f\"{keyword} since:{start_date} until:{end_date}\"\n\n# create a tweepy Cursor object to iterate over the tweets matching the query\ncursor = tweepy.Cursor(api.search, q=query)\n\n# iterate over the tweets and print the text of each tweet\nfor tweet in cursor.items():\n print(tweet.text)\n\n\nYou can find more information about the since and until parameters in the Twitter API documentation.\n"
] | [
0
] | [] | [] | [
"postman",
"python",
"tweepy",
"twitter_api_v2"
] | stackoverflow_0074664447_postman_python_tweepy_twitter_api_v2.txt |
Q:
count the number of values in data frame's column that exist in another data frame's column
I have two data frames:
df1:
Index  Date
0      2016-03-21 20:10:00
1      2016-03-22 21:09:00
2      2016-05-03 17:05:00
df2:
Index  Date
0      2016-03-21 20:10:00
1      2016-03-21 21:00:00
2      2016-03-22 21:09:00
3      2016-05-03 17:05:00
4      2017-06-01 16:10:00
There's probably a really simple way to do this but how would I count the number of values in the df1 Date column that are also in the df2 Date column? (These are not unique value counts)
A:
The simplest approach to solve your problem will be use set intersection(find common element from set).
Eg:
df1=pd.DataFrame({"date":['2016-03-21 20:10:00','2016-03-22 21:09:00','2016-05-03 17:05:00']})
df2=pd.DataFrame({"date":['2016-03-21 20:10:00','2016-03-21 21:00:00',
'2016-03-22 21:09:00','2016-05-03 17:05:00','2017-06-01 16:10:00']})
print(len(set(df1.date) & set(df2.date))) # 3
This will just convert that specified column to python-set and find common between them.
If you want to use Pandas then you can use pandas.merge() to get the common rows based on the columns.
df3 = pd.merge(df1, df2)
print(len(df3)) # 3
and count common rows using len function.
A:
You could use the isin function:
len(df1[df1.Date.isin(df2.Date)])

Output:
3

Unlike the set-intersection approach, this counts every matching row in df1 (duplicates included), which matches the note that these are not unique value counts.
| count the number of values in data frame's column that exist in another data frame's column | I have two data frames:
df1:
Index  Date
0      2016-03-21 20:10:00
1      2016-03-22 21:09:00
2      2016-05-03 17:05:00
df2:
Index  Date
0      2016-03-21 20:10:00
1      2016-03-21 21:00:00
2      2016-03-22 21:09:00
3      2016-05-03 17:05:00
4      2017-06-01 16:10:00
There's probably a really simple way to do this but how would I count the number of values in the df1 Date column that are also in the df2 Date column? (These are not unique value counts)
| [
"The simplest approach to solve your problem will be use set intersection(find common element from set).\nEg:\ndf1=pd.DataFrame({\"date\":['2016-03-21 20:10:00','2016-03-22 21:09:00','2016-05-03 17:05:00']})\n\ndf2=pd.DataFrame({\"date\":['2016-03-21 20:10:00','2016-03-21 21:00:00',\n '2016-03-22 21:09:00','2016-05-03 17:05:00','2017-06-01 16:10:00']})\n\nprint(len(set(df1.date) & set(df2.date))) # 3\n\nThis will just convert that specified column to python-set and find common between them.\n\nIf you want to use Pandas then you can use pandas.merge() to get the common rows based on the columns.\ndf3 = pd.merge(df1, df2)\nprint(len(df3)) # 3\n\nand count common rows using len function.\n",
"You could use the isin function:\nlen(df1[df1.Date.isin(df2.Date)])\n\nOutput:\n3\n\n"
] | [
0,
0
] | [] | [] | [
"count",
"date",
"pandas",
"python"
] | stackoverflow_0074664246_count_date_pandas_python.txt |
Q:
Illegalargumentexception : java.net.URISyntaxException : Relative path in absolute path URI getting while reading json files recursively from ADLSS
Folder structure:
A -> B1->C1->.json
-> B2->C2->.json
There can be many folders under A and B which don't follow any pattern.
The above is the folder structure in ADLS while reading Json files recursively using spark we are getting below error.
java.net.URISyntaxException : Relative path in absolute path URI
def json_parquet(sourceFilePath):
df=(spark.read.format("json")
.option("multiline",True)
.option("recursiveFileLookup", "true")
.option("pathGlobFilter","*.json")
.load(sourceFilePath))
sourceFilepath='dbfs:/mnt/pp-working-1/A'
json_parquet(sourceFilepath)
It is working fine with S3 mnt but failing with ADLS mnt.
A:
You might need to modify the sourceFilePath variable to include the full URI of the location you want to load, including the scheme and the storage account name. For ADLS Gen2 the ABFS scheme is used, for example:
sourceFilePath = 'abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/pp-working-1/A'
You can also try using the spark.read.json method to load the JSON files, which automatically detects the schema of the JSON files and loads them as a DataFrame. This method also takes an option called recursiveFileLookup, which allows you to specify whether to recursively search for files in subdirectories. For example:
def json_parquet(sourceFilePath):
df = spark.read.json(sourceFilePath, recursiveFileLookup=True)
return df
sourceFilePath = 'abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/pp-working-1/A'
df = json_parquet(sourceFilePath)
Alternatively, you can use the spark.read.format method and specify the json format, along with the recursiveFileLookup and pathGlobFilter options to recursively search for JSON files in the specified directory and load them as a DataFrame. For example:
def json_parquet(sourceFilePath):
df = (spark.read.format("json")
.option("recursiveFileLookup", "true")
.option("pathGlobFilter","*.json")
.load(sourceFilePath))
return df
sourceFilePath = 'abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/pp-working-1/A'
df = json_parquet(sourceFilePath)
I hope this helps.
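As a quick sanity check before loading (a sketch that assumes a Databricks notebook context, where dbutils and display are available), you can list the mount to confirm the path resolves:
display(dbutils.fs.ls("dbfs:/mnt/pp-working-1/A"))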
| Illegalargumentexception : java.net.URISyntaxException : Relative path in absolute path URI getting while reading json files recursively from ADLSS | Folder structure:
A -> B1->C1->.json
-> B2->C2->.json
There can be many folders under A and B which don't follow any pattern.
The above is the folder structure in ADLS while reading Json files recursively using spark we are getting below error.
java.net.URISyntaxException : Relative path in absolute path URI
def json_parquet(sourceFilePath):
df=(spark.read.format("json")
.option("multiline",True)
.option("recursiveFileLookup", "true")
.option("pathGlobFilter","*.json")
.load(sourceFilePath))
sourceFilepath='dbfs:/mnt/pp-working-1/A'
json_parquet(sourceFilepath)
It is working fine with S3 mnt but failing with ADLS mnt.
| [
"You might need to modify the sourceFilePath variable to include the full URI of the file you want to load, including the scheme (e.g. adl:// or wasbs://) and the hostname or storage account name. For example:\nsourceFilePath = 'adl://<storage_account_name>.dfs.core.windows.net/mnt/pp-working-1/A'\n\nYou can also try using the spark.read.json method to load the JSON files, which automatically detects the schema of the JSON files and loads them as a DataFrame. This method also takes an option called recursiveFileLookup, which allows you to specify whether to recursively search for files in subdirectories. For example:\ndef json_parquet(sourceFilePath):\n df = spark.read.json(sourceFilePath, recursiveFileLookup=True)\n return df\n\nsourceFilePath = 'adl://<storage_account_name>.dfs.core.windows.net/mnt/pp-working-1/A'\ndf = json_parquet(sourceFilePath)\n\n\nAlternatively, you can use the spark.read.format method and specify the json format, along with the recursiveFileLookup and pathGlobFilter options to recursively search for JSON files in the specified directory and load them as a DataFrame. For example:\ndef json_parquet(sourceFilePath):\n df = (spark.read.format(\"json\")\n .option(\"recursiveFileLookup\", \"true\")\n .option(\"pathGlobFilter\",\"*.json\")\n .load(sourceFilePath))\n return df\n\nsourceFilePath = 'adl://<storage_account_name>.dfs.core.windows.net/mnt/pp-working-1/A'\ndf = json_parquet(sourceFilePath)\n\nI hope this helps.\n"
] | [
0
] | [] | [] | [
"azure_data_lake",
"databricks",
"pyspark",
"python"
] | stackoverflow_0074664413_azure_data_lake_databricks_pyspark_python.txt |
Q:
How can I efficiently randomly select items from a dictionary that meet my requirements?
So at the moment, I have a large dictionary of items. It might be a little confusing, but each of these keys has different values, and the values themselves correspond to another dictionary.
I need to make sure that my random selection from the first dict covers all possible values in the second dict. I'll provide a rudimentary example:
Dict_1 = {key1: (A, C), key2: (B, O, P), key3: (R, T, A)} # and so on
Dict_2 = {A: (1, 4, 7), B: (9, 2, 3), C: (1, 3)} # etc
I need a random selection of Dict_1 to give me a coverage of all numbers from 1 - 10 in Dict_2 values.
At the moment, I am selecting 6 random keys from Dict_1, taking all the numbers that those letters would correspond with, and comparing that set to a set of the numbers from 1 - 10. If the selection isn't a subset of 1 - 10, select 6 more random ones and try again, until I have 1 - 10.
Now, this works, but I know it's far from efficient. How can I improve this method?
I am using Python.
A:
One way to improve the efficiency of your method is to first create a set of all the numbers in Dict 2 and then iterate through Dict 1, adding the corresponding numbers from Dict 2 to a temporary set. Then, you can check if the temporary set is a subset of the set of all numbers from 1 to 10. If it is, you can return the selection of keys from Dict 1 that you used to create the temporary set. If it is not, you can continue iterating through Dict 1 and adding the corresponding numbers to the temporary set until it is a subset of the set of all numbers from 1 to 10.
Here is an example of how this might look in code:
import random
# Dict 1
dict1 = {
"key1": ("A", "C"),
"key2": ("B", "O", "P"),
"key3": ("R", "T", "A"),
...
}
# Dict 2
dict2 = {
"A": (1, 4, 7),
"B": (9, 2, 3),
"C": (1, 3),
...
}
# Set of all numbers from 1 to 10
numbers = set(range(1, 11))
# Function to get a selection of keys from Dict 1 that covers all numbers from 1 to 10 in Dict 2
def get_selection(dict1, dict2, numbers):
    selected_keys = []
    covered = set()
    keys = list(dict1)
    random.shuffle(keys)
    for key in keys:
        selected_keys.append(key)
        # add every number that this key's letters map to in Dict 2
        for letter in dict1[key]:
            covered.update(dict2.get(letter, ()))
        if numbers.issubset(covered):
            return selected_keys
    return None  # no combination of keys covers every number

# Example: keys = get_selection(dict1, dict2, numbers)
| How can I efficiently randomly select items from a dictionary that meet my requirements? | So at the moment, I have a large dictionary of items. It might be a little confusing, but each of these keys has different values, and the values themselves correspond to another dictionary.
I need to make sure that my random selection from the first dict covers all possible values in the second dict. I'll provide a rudimentary example:
Dict_1 = {key1: (A, C), key2: (B, O, P), key3: (R, T, A)} # and so on
Dict_2 = {A: (1, 4, 7), B: (9, 2, 3), C: (1, 3)} # etc
I need a random selection of Dict_1 to give me a coverage of all numbers from 1 - 10 in Dict_2 values.
At the moment, I am selecting 6 random keys from Dict_1, taking all the numbers that those letters would correspond with, and comparing that set to a set of the numbers from 1 - 10. If the selection isn't a subset of 1 - 10, select 6 more random ones and try again, until I have 1 - 10.
Now, this works, but I know it's far from efficient. How can I improve this method?
I am using Python.
| [
"One way to improve the efficiency of your method is to first create a set of all the numbers in Dict 2 and then iterate through Dict 1, adding the corresponding numbers from Dict 2 to a temporary set. Then, you can check if the temporary set is a subset of the set of all numbers from 1 to 10. If it is, you can return the selection of keys from Dict 1 that you used to create the temporary set. If it is not, you can continue iterating through Dict 1 and adding the corresponding numbers to the temporary set until it is a subset of the set of all numbers from 1 to 10.\nHere is an example of how this might look in code:\nimport random\n\n# Dict 1\ndict1 = {\n \"key1\": (\"A\", \"C\"),\n \"key2\": (\"B\", \"O\", \"P\"),\n \"key3\": (\"R\", \"T\", \"A\"),\n ...\n}\n\n# Dict 2\ndict2 = {\n \"A\": (1, 4, 7),\n \"B\": (9, 2, 3),\n \"C\": (1, 3),\n ...\n}\n\n# Set of all numbers from 1 to 10\nnumbers = set(range(1, 11))\n\n# Function to get a selection of keys from Dict 1 that covers all numbers from 1 to 10 in Dict 2\ndef get_selection(dict1, dict2, numbers):\n\n"
] | [
0
] | [] | [] | [
"dictionary",
"python",
"random",
"set",
"subset"
] | stackoverflow_0074664141_dictionary_python_random_set_subset.txt |
Q:
How to solve "ModuleNotFoundError: No module named 'tensorflow.tsl'"?
I installed Python, but it didn't work. Then I ran all of the following, but when I was supposed to import the packages below, it didn't work.
!pip install -U pip
!pip install tensorflow
from tensorflow import keras
from tensorflow.keras import layers
A:
I think you can try to run pip install tensorflow in the command line
A:
If you're having trouble importing a package in Python, it's possible that you haven't installed it properly or that it's not installed at all. To check if tensorflow is installed, you can try running pip freeze in your terminal. This will print out a list of all the packages that are currently installed in your environment. If tensorflow is not in that list, then you need to install it.
If you're not sure how to install a package in Python, you can use the pip command. For example, to install tensorflow, you would run pip install tensorflow in your terminal. Once the installation is complete, you should be able to import tensorflow in your Python code without any issues
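As a quick sketch, you can also check from inside Python whether the package is importable at all:
import importlib.util

spec = importlib.util.find_spec("tensorflow")
print("tensorflow is installed" if spec else "tensorflow is NOT installed")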
| How to solve "ModuleNotFoundError: No module named 'tensorflow.tsl'"? | I installed Python, but it didn't work. Then I ran all of the following, but when I was supposed to import the packages below, it didn't work.
!pip install -U pip
!pip install tensorflow
from tensorflow import keras
from tensorflow.keras import layers
| [
"I think you can try to run pip install tensorflow in command\n",
"If you're having trouble importing a package in Python, it's possible that you haven't installed it properly or that it's not installed at all. To check if tensorflow is installed, you can try running pip freeze in your terminal. This will print out a list of all the packages that are currently installed in your environment. If tensorflow is not in that list, then you need to install it.\nIf you're not sure how to install a package in Python, you can use the pip command. For example, to install tensorflow, you would run pip install tensorflow in your terminal. Once the installation is complete, you should be able to import tensorflow in your Python code without any issues\n"
] | [
1,
0
] | [] | [] | [
"python",
"python_3.x",
"tensorflow"
] | stackoverflow_0074664203_python_python_3.x_tensorflow.txt |
Q:
how to assume roles twice (or multiple times) in the script
I am trying to assume a role twice in the script, I assume the role first like this
import boto3

session = boto3.Session(profile_name="learnaws-test")
sts = session.client("sts")
response = sts.assume_role(
RoleArn="arn:aws:iam::xxx:role/s3-readonly-access",
RoleSessionName="learnaws-test-session"
)
new_session = Session(aws_access_key_id=response['Credentials']['AccessKeyId'], aws_secret_access_key=response['Credentials']['SecretAccessKey'], aws_session_token=response['Credentials']['SessionToken'])
but after I have done this, I understand I can use this new_session to access S3 buckets or whatever resource I need, but I have to assume another role from this role. How do I assume another role?
Logically, I think from this "new_session" we have to do something to assume another role, but what is it?
A:
Call AssumeRole
When calling AssumeRole(), a new set of credentials is returned. You can then use these credentials to create new clients, including another Security Token Service (STS) client that can be used to call AssumeRole() again.
Here is an example:
import boto3
# Create STS client using default credentials
sts_client = boto3.client('sts')
# Assume Role 1
response1 = sts_client.assume_role(RoleArn='arn:aws:iam::111111111111:role/assume1', RoleSessionName='One')
credentials1 = response1['Credentials']
role1_session = boto3.Session(
aws_access_key_id=credentials1['AccessKeyId'],
aws_secret_access_key=credentials1['SecretAccessKey'],
aws_session_token=credentials1['SessionToken'])
sts_client1 = role1_session.client('sts')
# Assume Role 2
response2 = sts_client1.assume_role(RoleArn='arn:aws:iam::111111111111:role/assume2', RoleSessionName='Two')
credentials2 = response2['Credentials']
role2_session = boto3.Session(
aws_access_key_id=credentials2['AccessKeyId'],
aws_secret_access_key=credentials2['SecretAccessKey'],
aws_session_token=credentials2['SessionToken'])
# Use Role 2
s3_client2 = role2_session.client('s3')
response = s3_client2.list_buckets()
print(response)
Use profiles
However, there is an easier way to do this using profiles. You can configure the ~/.aws/config file to assume roles automatically:
[default]
region = ap-southeast-2
[profile role1]
role_arn=arn:aws:iam::111111111111:role/assume1
source_profile=default
[profile role2]
role_arn=arn:aws:iam::111111111111:role/assume2
source_profile=role1
This is telling boto3:
When assuming role1, use the default credentials
When assuming role2, use credentials from role1
Assuming both roles is then as simple as:
import boto3
session = boto3.Session(profile_name='role2')
s3_client = session.client('s3')
response = s3_client.list_buckets()
print(response)
This also works with the AWS CLI:
aws s3 ls --profile role2
For more information, see: Credentials — Boto3 documentation
| how to assume roles twice (or multiple times) in the script | I am trying to assume a role twice in the script, I assume the role first like this
import boto3

session = boto3.Session(profile_name="learnaws-test")
sts = session.client("sts")
response = sts.assume_role(
RoleArn="arn:aws:iam::xxx:role/s3-readonly-access",
RoleSessionName="learnaws-test-session"
)
new_session = Session(aws_access_key_id=response['Credentials']['AccessKeyId'], aws_secret_access_key=response['Credentials']['SecretAccessKey'], aws_session_token=response['Credentials']['SessionToken'])
but after I have done this, I understand I can use this new_session to access S3 buckets or whatever resource I need, but I have to assume another role from this role. How do I assume another role?
Logically, I think from this "new_session" we have to do something to assume another role, but what is it?
| [
"Call AssumeRole\nWhen calling AssumeRole(), a new set of credentials is returned. You can then use these credentials to create new clients, including another Security Token Service (STS) client that can be used to call AssumeRole() again.\nHere is an example:\nimport boto3\n\n# Create STS client using default credentials\n\nsts_client = boto3.client('sts')\n\n# Assume Role 1\n\nresponse1 = sts_client.assume_role(RoleArn='arn:aws:iam::111111111111:role/assume1', RoleSessionName='One')\n\ncredentials1 = response1['Credentials']\n\nrole1_session = boto3.Session(\n aws_access_key_id=credentials1['AccessKeyId'],\n aws_secret_access_key=credentials1['SecretAccessKey'],\n aws_session_token=credentials1['SessionToken'])\n\nsts_client1 = role1_session.client('sts')\n\n# Assume Role 2\n\nresponse2 = sts_client1.assume_role(RoleArn='arn:aws:iam::111111111111:role/assume2', RoleSessionName='Two')\n\ncredentials2 = response2['Credentials']\n\nrole2_session = boto3.Session(\n aws_access_key_id=credentials2['AccessKeyId'],\n aws_secret_access_key=credentials2['SecretAccessKey'],\n aws_session_token=credentials2['SessionToken'])\n\n# Use Role 2\n\ns3_client2 = role2_session.client('s3')\n\nresponse = s3_client2.list_buckets()\n\nprint(response)\n\nUse profiles\nHowever, there is an easier way to do this using profiles. You can configure the ~/.aws/config file to assume roles automatically:\n[default]\nregion = ap-southeast-2\n\n[profile role1]\nrole_arn=arn:aws:iam::111111111111:role/assume1\nsource_profile=default\n\n[profile role2]\nrole_arn=arn:aws:iam::111111111111:role/assume2\nsource_profile=role1\n\nThis is telling boto3:\n\nWhen assuming role1, use the default credentials\nWhen assuming role2, use credentials from role1\n\nAssuming both roles is then as simple as:\nimport boto3\n\nsession = boto3.Session(profile_name='role2')\ns3_client = session.client('s3')\n\nresponse = s3_client.list_buckets()\n\nprint(response)\n\nThis also works with the AWS CLI:\naws s3 ls --profile role2\n\nFor more information, see: Credentials — Boto3 documentation\n"
] | [
0
] | [] | [] | [
"amazon_web_services",
"assume_role",
"boto3",
"python"
] | stackoverflow_0074657438_amazon_web_services_assume_role_boto3_python.txt |
Q:
Append dictionary in json using Python
I am doing my first Python program and it's a Hangman game. I managed to make it work, but as a part of the task I need to write a "best results - hall of fame" table as a json file. Each entry in the table should consist of the name of the person and the result they achieved (number of tries before guessing a word). My idea is to use a dictionary for that purpose and to append the result of each game to that same dictionary.
My code goes like this:
with open("hall.json","a") as django:
json.dump(hall_of_fame, django)
hall_of_fame is a dictionary where after playing a game the result is saved in the form of {john:5}
The problem I have is that after playing several games my .json file looks like this:
{john:5}{ana:7}{mary:3}{jim:1}{willie:6}
instead I want to get .json file to look like this:
{john:5,ana:7,mary:3,jim:1,willie:6}
What am I doing wrong? Can someone please take a look?
A:
you should read your old json content. then append new item to it. an finally write it to your json file again. use code below:
with open ("hall.json") as f:
dct=json.load(f)
#add new item to dct
dct.update(hall_of_fame)
#write new dct to json file
with open("hall.json","w") as f:
json.dump(dct,f)
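One extra guard worth adding (a sketch, since the question doesn't say whether hall.json exists on the first run): json.load fails on a missing or empty file, so you can fall back to an empty dict:
import json
import os

# start from the saved table if it exists, otherwise from an empty one
if os.path.exists("hall.json") and os.path.getsize("hall.json") > 0:
    with open("hall.json") as f:
        dct = json.load(f)
else:
    dct = {}

dct.update(hall_of_fame)
with open("hall.json", "w") as f:
    json.dump(dct, f)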
have fun :)
A:
You're appending a separate JSON object to the file every time you write to it (the file is opened in "a" mode). Instead, you should read the existing data from the file, add the new entry to the dictionary, and then write the whole dictionary back to the file.
Here's an example of how you can do that:
import json
# Read the existing data from the file
with open("hall.json", "r") as django:
hall_of_fame = json.load(django)
# Append the new data to the dictionary
hall_of_fame["john"] = 5
hall_of_fame["ana"] = 7
hall_of_fame["mary"] = 3
hall_of_fame["jim"] = 1
hall_of_fame["willie"] = 6
# Write the updated dictionary back to the file
with open("hall.json", "w") as django:
json.dump(hall_of_fame, django)
Alternatively, you can use the json.dump() method's ensure_ascii and indent parameters to make the resulting JSON file more readable. Here's an example:
import json
# Read the existing data from the file
with open("hall.json", "r") as django:
hall_of_fame = json.load(django)
# Append the new data to the dictionary
hall_of_fame["john"] = 5
hall_of_fame["ana"] = 7
hall_of_fame["mary"] = 3
hall_of_fame["jim"] = 1
hall_of_fame["willie"] = 6
# Write the updated dictionary back to the file
with open("hall.json", "w") as django:
json.dump(hall_of_fame, django, ensure_ascii=False, indent=4)
| Append dictionary in json using Python | I am doing my first Python program and it's a Hangman game. I managed to make it work, but as a part of the task I need to write a "best results - hall of fame" table as a json file. Each entry in the table should consist of the name of the person and the result they achieved (number of tries before guessing a word). My idea is to use a dictionary for that purpose and to append the result of each game to that same dictionary.
My code goes like this:
with open("hall.json","a") as django:
json.dump(hall_of_fame, django)
hall_of_fame is a dictionary where after playing a game the result is saved in the form of {john:5}
The problem I have is that after playing several games my .json file looks like this:
{john:5}{ana:7}{mary:3}{jim:1}{willie:6}
instead I want to get .json file to look like this:
{john:5,ana:7,mary:3,jim:1,willie:6}
What am I doing wrong? Can someone please take a look?
| [
"you should read your old json content. then append new item to it. an finally write it to your json file again. use code below:\nwith open (\"hall.json\") as f:\n dct=json.load(f)\n\n#add new item to dct\ndct.update(hall_of_fame)\n\n#write new dct to json file\nwith open(\"hall.json\",\"w\") as f:\n json.dump(dct,f)\n\nhave fun :)\n",
"You're overwriting the file every time you write to it. Instead, you should read the existing data from the file, append the new data to the dictionary, and then write the whole dictionary back to the file.\nHere's an example of how you can do that:\nimport json\n\n# Read the existing data from the file\nwith open(\"hall.json\", \"r\") as django:\n hall_of_fame = json.load(django)\n\n# Append the new data to the dictionary\nhall_of_fame[\"john\"] = 5\nhall_of_fame[\"ana\"] = 7\nhall_of_fame[\"mary\"] = 3\nhall_of_fame[\"jim\"] = 1\nhall_of_fame[\"willie\"] = 6\n\n# Write the updated dictionary back to the file\nwith open(\"hall.json\", \"w\") as django:\n json.dump(hall_of_fame, django)\n\nAlternatively, you can use the json.dump() method's ensure_ascii and indent parameters to make the resulting JSON file more readable. Here's an example:\nimport json\n\n# Read the existing data from the file\nwith open(\"hall.json\", \"r\") as django:\n hall_of_fame = json.load(django)\n\n# Append the new data to the dictionary\nhall_of_fame[\"john\"] = 5\nhall_of_fame[\"ana\"] = 7\nhall_of_fame[\"mary\"] = 3\nhall_of_fame[\"jim\"] = 1\nhall_of_fame[\"willie\"] = 6\n\n# Write the updated dictionary back to the file\nwith open(\"hall.json\", \"w\") as django:\n json.dump(hall_of_fame, django, ensure_ascii=False, indent=4)\n\n"
] | [
1,
0
] | [] | [] | [
"append",
"dictionary",
"json",
"python"
] | stackoverflow_0074663995_append_dictionary_json_python.txt |
Q:
How do I run my main function in parallel with a multiprocessing.Process without it freezing? (sorry for the sloppy code, I'm new and self taught)
My main function is an app that I want to use for macros, the macros themselves all work as intended, and the function is technically able to work. The issue arises when you start the function, as you can't interact with the GUI because it is frozen, it unfreezes when the function ends and then the GUI becomes usable again. Again, I'm sorry about the sloppy and changing code, like I said, I'm self taught and a lot is borrowed and mangled together.
I tried using different start methods like spawn and fork, I tried setting up different ways for the function to end so it wouldn't wait, and at one point the GUI worked but the second I fixed the other bug this one came. I don't have an old save sadly as it was overwritten. There are no errors and I understand that there probably is a solution already out there but I have yet to find it. The code is long 400-ish lines so be warned.
from ast import Call
from cgitb import text
from concurrent.futures import process, thread
from itertools import starmap
from logging import PlaceHolder, error
from re import L
import tkinter
import tkinter.messagebox
from turtle import width
import customtkinter
import threading
import multiprocessing
import time
import pyautogui
import sys
import keyboard
label_text="Select counter as well as type below:"
macro_select=0
rand_offset_min=0 #needed to set the variables in case of no entry
rand_offset_max=3
radio_select=0
label_="Please select a macro type:"
dead=False
base_start=";"
customtkinter.set_appearance_mode("System") # Modes: "System" (standard), "Dark", "Light"
customtkinter.set_default_color_theme("blue") # Themes: "blue" (standard), "green", "dark-blue"
n_process = []
s_process = []
c_process = []
multiprocessing.set_start_method("spawn")
def netherwart(metric_raw):
y=0
time.sleep(5)
print(metric_raw)
PlaceHolder_0=int(metric_raw)
while y <= PlaceHolder_0:
print (PlaceHolder_0)
print("running")
print(y)
pyautogui.keyDown(' ')
pyautogui.keyDown('a')
time.sleep(28)
pyautogui.keyUp('a')
pyautogui.keyDown('d')
time.sleep(28)
pyautogui.keyUp('d')
y=y+1
print("flag0")
def sugarcane(metric_raw):
y=0
time.sleep(5)
PlaceHolder_0=int(metric_raw)
while y <= PlaceHolder_0:
print (PlaceHolder_0)
print("running.")
pyautogui.keyDown(' ')
pyautogui.keyDown('d')
time.sleep(.25)
pyautogui.keyDown('w')
time.sleep(14.5)
pyautogui.keyUp('d')
time.sleep(1)
pyautogui.keyUp('w')
pyautogui.keyDown('s')
time.sleep(17.5)
pyautogui.keyUp('s')
pyautogui.keyDown('a')
pyautogui.keyDown('w')
time.sleep(.25)
pyautogui.keyUp('w')
time.sleep(.25)
pyautogui.keyUp('a')
y=y+1
def cobblestone(metric_raw):
y=0
time.sleep(5)
PlaceHolder_0=int(metric_raw)
while y <= PlaceHolder_0:
print (PlaceHolder_0)
print("running..")
pyautogui.keyDown(' ')
pyautogui.keyDown('w')
time.sleep(30)
y=y+1
class App(customtkinter.CTk):
WIDTH = 780 #set the width of the main gui
HEIGHT = 520 #set the height of the main gui
def __init__(self):
multiprocessing.freeze_support()
super().__init__()
self.title("EzFarmerGui") #Naming the program
self.geometry(f"{App.WIDTH} x {App.HEIGHT}")#Setting the background size
self.protocol("WM_DELETE_WINDOW", self.on_closing)
self.grid_columnconfigure(1, weight=1)
self.grid_rowconfigure(0, weight=1)
self.frame_left = customtkinter.CTkFrame(master=self, width=180, corner_radius=0)
self.frame_left.grid(row=0, column=0,rowspan=4, sticky="nswe")
self.frame_right = customtkinter.CTkFrame(master=self)
self.frame_right.grid(row=0, column=1,rowspan=1, sticky="nswe", padx=20, pady=10)
#left frame
self.frame_left.grid_rowconfigure(0, minsize=10) # empty row with minsize as spacing
self.frame_left.grid_rowconfigure(5, weight=1) # empty row as spacing
self.frame_left.grid_rowconfigure(8, minsize=20) # empty row with minsize as spacing
self.frame_left.grid_rowconfigure(11, minsize=10) # empty row with minsize as spacing | the first number is the actual row or column, row is the y and column is the x
self.label_1 = customtkinter.CTkLabel(master=self.frame_left,text="Farming Options:",text_font=("Roboto Medium", -16)) # font name and size in px
self.label_1.grid(row=1, column=0, pady=10, padx=10)
self.button_1 = customtkinter.CTkButton(master=self.frame_left,text="Netherwart",text_font=("Roboto Medium", -12),command=self.netherwart_event)
self.button_1.grid(row=2, column=0, pady=10, padx=20)
self.button_2 = customtkinter.CTkButton(master=self.frame_left,text="Sugarcane",text_font=("Roboto Medium", -12),command=self.sugarcane_event)
self.button_2.grid(row=3, column=0, pady=10,padx=20)
self.button_3 = customtkinter.CTkButton(master=self.frame_left,text="Cobblestone",text_font=("Roboto Medium", -12),command=self.cobblestone_event)
self.button_3.grid(row=4, column=0, pady=10,padx=20)
self.button_4 = customtkinter.CTkButton(master=self.frame_left,text="Settings",text_font=("Roboto Medium", -12),command=self.settings_event)
self.button_4.grid(row=6, column=0, pady=10,padx=20)
#Right frame
self.frame_right.rowconfigure((0, 1, 2), minsize=0,weight=2)
self.frame_right.rowconfigure(7, weight=10)
self.frame_right.rowconfigure(3,weight=1)
self.frame_right.columnconfigure((0, 1), weight=1)
self.frame_right.columnconfigure(2, weight=0)
self.frame_info = customtkinter.CTkFrame(master=self.frame_right)
self.frame_info.grid(row=0, column=0, columnspan=2, rowspan=2, pady=10, padx=20, sticky="nsew")
self.frame_options = customtkinter.CTkFrame(master=self.frame_right)
self.frame_options.grid(row=2, column=0, columnspan=2,rowspan=1,pady=10,padx=20,sticky="nsew")
self.frame_options.rowconfigure(0,weight=1)
self.frame_options.columnconfigure(0,weight=1)
#Configure info
self.frame_info.rowconfigure(0, weight=1)
self.frame_info.columnconfigure(0, weight=1)
self.frame_info.rowconfigure(0, weight=1)
self.frame_info.columnconfigure(0, weight=1)
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
#Configure options
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(0,minsize=10, weight=1)
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(0,minsize=10, weight=1)
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(1,minsize=10, weight=1)
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(2,minsize=10, weight=1)
self.radio_var = tkinter.IntVar(value=0)
self.label_info_2 = customtkinter.CTkLabel(master=self.frame_options,text=label_text ,height=40,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_2.grid(column=0, row=0, sticky="nwe", padx=15, pady=10,columnspan=4)
self.label_radio_0=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Repititions",text_font=("Roboto Medium", -12),value=0,command=self.repititions_event)
self.label_radio_0.grid(column=0,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.label_radio_1=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Time",text_font=("Roboto Medium", -12),value=1,command=self.time_event)
self.label_radio_1.grid(column=1,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.label_radio_2=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Experience",text_font=("Roboto Medium", -12),value=2,command=self.exp_event)
self.label_radio_2.grid(column=2,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.label_radio_3=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Gold",text_font=("Roboto Medium", -12),value=3,command=self.gold_event)
self.label_radio_3.grid(column=3,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.slider=customtkinter.CTkSlider(master=self.frame_options,number_of_steps=250,from_=1, to=250,command=self.update_metrics_)
self.slider.grid(column=0,row=4,columnspan=4,padx=10,pady=20,sticky="nsew")
#start/stop frame
self.start_stop = customtkinter.CTkFrame(master=self.frame_right)
self.start_stop.grid(row=4, column=0, columnspan=2,rowspan=1,pady=10,padx=20,sticky="nsew")
self.start_stop.rowconfigure((0),minsize=10,weight=1)
self.start_stop.columnconfigure((0,1,2),minsize=10,weight=1)
self.start=customtkinter.CTkButton(master=self.start_stop,text="Start",command=self.start_event)
self.start.grid(row=0,column=0,padx=20,pady=20)
self.stop=customtkinter.CTkButton(master=self.start_stop,text="Stop",command=self.stop_event)
self.stop.grid(row=0,column=2,padx=20,pady=20)
def start_event(self):
metric_raw=str(round(self.slider.get()))
global n_process,s_process,c_process
if macro_select == 0:
print("\nStarted")
n = multiprocessing.Process(target=netherwart(metric_raw))
n.start()
print("flag")
ac=multiprocessing.active_children
print(f'Active Children: {len(ac)}')
n_process.append(n)
elif macro_select == 1:
print("\nStarted")
s = multiprocessing.Process(target=sugarcane(metric_raw))
s.start()
print("flag")
ac=multiprocessing.active_children
print(f'Active Children: {len(ac)}')
s_process.append(s)
elif macro_select == 2:
print("\nStarted")
c = multiprocessing.Process(target=cobblestone(metric_raw))
c.start()
print("flag")
ac=multiprocessing.active_children
print(f'Active Children: {len(ac)}')
c_process.append(c)
def stop_event(self):
if macro_select == 0:
print("\nEnded")
for process in n_process:
for process in multiprocessing.active_children():
process.terminate()
print(multiprocessing.active_children)
elif macro_select == 1:
print("\nEnded")
for process in s_process:
for process in multiprocessing.active_children():
process.terminate()
print(multiprocessing.active_children)
elif macro_select == 2:
print("\nEnded")
for process in c_process:
for process in multiprocessing.active_children():
process.terminate()
print(multiprocessing.active_children)
def netherwart_event(self):
global macro_select
macro_select=0
label_="The netherwart macro has been selected."
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
def sugarcane_event(self):
global macro_select
macro_select=1
label_="The sugarcane macro has been selected."
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
def cobblestone_event(self):
global macro_select
macro_select=2
label_="The cobblestone macro has been selected."
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
def repititions_event(self):
global radio_select
radio_select=0
def time_event(self):
global radio_select
radio_select=1
def exp_event(self):
global radio_select
radio_select=2
def gold_event(self):
global radio_select
radio_select=3
def update_metrics_(self, slider):
global radio_select
global macro_select
global xp
metric_raw=str(round(self.slider.get()))
if radio_select == 0:
label_text="Number of repititions: "+ metric_raw
elif radio_select == 1:
if macro_select==0:
place_holder=int(metric_raw)
time_0=int(rand_offset_max+58)
time_1=int(place_holder*time_0)
time_1=str(time_1)
label_text="Approximate time to complete: "+ time_1 +" seconds."
elif macro_select==1:
place_holder=int(metric_raw)
time_0_0=int(rand_offset_max+34)
time_1_0=int(place_holder*time_0_0)
time_1_0=str(time_1_0)
label_text="Approximate time to complete: "+ time_1_0 +" seconds"
elif macro_select==2:
place_holder=int(metric_raw)
time_1_0_0=int(30*place_holder)
time_1_0_0=str(time_1_0_0)
label_text="Approximate time to complete: "+ time_1_0_0 +" seconds"
elif radio_select == 2:
if macro_select==0:
place_holder=int(metric_raw)
xp=int(1334*place_holder)
xp=str(xp)
label_text="Approximate experience gained: " + xp +"xp"
elif macro_select==1:
place_holder=int(metric_raw)
xp=int(3341*place_holder)
xp=str(xp)
label_text="Approximate experience gained: " + xp +"xp"
elif macro_select==2:
place_holder=int(metric_raw)
xp=int(600*place_holder)
xp=str(xp)
label_text="Approximate experience gained: " + xp +"xp"
elif radio_select == 3:
if macro_select==0:
place_holder=int(metric_raw)
gold=int(30000*place_holder)
gold=str(gold)
label_text="Approximate gold gained: " + gold +"gold"
elif macro_select==1:
place_holder=int(metric_raw)
gold=int(9000*place_holder)
gold=str(gold)
label_text="Approximate gold gained: " + gold +"gold"
elif macro_select==2:
place_holder=int(metric_raw)
gold=int(3000*place_holder)
gold=str(gold)
label_text="Approximate gold gained: " + gold +"gold"
self.label_info_2 = customtkinter.CTkLabel(master=self.frame_options,text=label_text ,height=40,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_2.grid(column=0, row=0, sticky="nwe", padx=15, pady=10,columnspan=4)
def settings_event(self):
self.settings_window=customtkinter.CTkToplevel(master=self)
self.settings_window.geometry("600x400")
self.settings_window.title("Settings")
self.settings_window_options=customtkinter.CTkFrame(master=self.settings_window)
self.settings_window_options.grid(padx=20,pady=20,column=0,row=0,rowspan=4,columnspan=4,sticky="nsew")
self.settings_window.rowconfigure(0,minsize=10,weight=1)
self.settings_window.rowconfigure(1,minsize=10,weight=1)
self.settings_window.rowconfigure(2,minsize=10,weight=1)
self.settings_window.rowconfigure(3,minsize=10,weight=1)
self.settings_window.columnconfigure(0,minsize=10,weight=1)
self.settings_window.columnconfigure(1,minsize=10,weight=1)
self.settings_window.columnconfigure(2,minsize=10,weight=1)
self.settings_window.columnconfigure(3,minsize=10,weight=1)
self.settings_window_options.rowconfigure(0,minsize=10,weight=1)
self.settings_window_options.rowconfigure(1,minsize=10,weight=1)
self.settings_window_options.rowconfigure(2,minsize=10,weight=1)
self.settings_window_options.rowconfigure(3,minsize=10,weight=1)
self.settings_window_options.columnconfigure(0,minsize=10,weight=1)
self.settings_window_options.columnconfigure(1,minsize=10,weight=1)
self.settings_window_options.columnconfigure(2,minsize=10,weight=1)
self.settings_window_options.columnconfigure(3,minsize=10,weight=1)
self.random_offset_lable_frame=customtkinter.CTkFrame(master=self.settings_window_options)
self.random_offset_lable_frame.grid(column=0,row=0,columnspan=4,padx=20,pady=20,rowspan=1,sticky="nsew")
self.random_offset_lable_frame.columnconfigure(0,minsize=10,weight=1)
self.random_offset_lable_frame.columnconfigure(1,minsize=10,weight=1)
self.random_offset_lable_frame.columnconfigure(2,minsize=10,weight=1)
self.random_offset_lable_frame.rowconfigure(0,minsize=10,weight=1)
self.random_offset_lable_frame.rowconfigure(1,minsize=10,weight=1)
self.random_offset_lable_frame.rowconfigure(2,minsize=10,weight=1)
self.random_offset_lable=customtkinter.CTkLabel(master=self.random_offset_lable_frame,text="EzFarm settings:")
self.random_offset_lable.grid(column=1,row=1,columnspan=1,rowspan=1,sticky="nsew")
self.random_offset_lable_frame_1=customtkinter.CTkFrame(master=self.settings_window_options)
self.random_offset_lable_frame_1.grid(column=0,row=1,columnspan=2,padx=20,pady=20,rowspan=1,sticky="nsew")
self.random_offset_lable_frame_2=customtkinter.CTkFrame(master=self.settings_window_options)
self.random_offset_lable_frame_2.grid(column=2,row=1,columnspan=2,padx=20,pady=20,rowspan=1,sticky="nsew")
self.random_offset_lable_1=customtkinter.CTkLabel(master=self.random_offset_lable_frame_1,text="Personalization::")
self.random_offset_lable_1.grid(column=1,row=0,columnspan=1,rowspan=1,padx=20,pady=20,sticky="n")
self.random_offset_lable_2=customtkinter.CTkLabel(master=self.random_offset_lable_frame_2,text="Randomized offset:")
self.random_offset_lable_2.grid(column=1,row=1,columnspan=1,rowspan=1,padx=20,pady=20,sticky="n")
self.optionmenu_1 = customtkinter.CTkOptionMenu(master=self.random_offset_lable_frame_1,values=["Light", "Dark", "System"],command=self.change_appearance_mode)
self.optionmenu_1.grid(row=2,column=1,padx=20,pady=10,sticky="nsew")
self.random_offset_lable_2.columnconfigure(0,minsize=10,weight=1)
self.random_offset_lable_2.columnconfigure(1,minsize=10,weight=1)
self.random_offset_lable_2.columnconfigure(2,minsize=10,weight=1)
self.random_offset_lable_2.rowconfigure(0,minsize=10,weight=1)
self.random_offset_lable_2.rowconfigure(1,minsize=10,weight=1)
self.random_offset_lable_2.rowconfigure(2,minsize=10,weight=1)
self.offset_ui_0=customtkinter.CTkEntry(master=self.random_offset_lable_2,width=30,placeholder_text="Insert the min value:")
self.offset_ui_0.grid(row=1,column=0,columnspan=2,padx=10,pady=10,sticky="nsew")
self.offset_ui_1=customtkinter.CTkEntry(master=self.random_offset_lable_2,width=30,placeholder_text="Insert the max value:")
self.offset_ui_1.grid(row=2,column=0,columnspan=2,padx=10,pady=10,sticky="nsew")
self.submit_button=customtkinter.CTkButton(master=self.random_offset_lable_frame_2,text="Submit",command=self.submit)
self.submit_button.grid(row=3,column=1,padx=20,pady=20,sticky="nsew")
def submit(self):
global rand_offset_min
global rand_offset_max
rand_offset_min=int(self.offset_ui_0.get())
rand_offset_max=int(self.offset_ui_1.get())
if rand_offset_min<0 or rand_offset_max<0:
error_=customtkinter.CTkToplevel(master=self)
error_.title("Error")
error_message=customtkinter.CTkLabel(master=error_,text="Error, please enter a whole number over 0")
error_message.grid(padx=10,pady=10)
def change_appearance_mode(self, new_appearance_mode):
customtkinter.set_appearance_mode(new_appearance_mode)
def button_event(self):
print("Button pressed")
def change_appearance_mode(self, new_appearance_mode):
customtkinter.set_appearance_mode(new_appearance_mode)
def on_closing(self, event=0):
self.destroy()
if __name__ == "__main__":
app = App()
app.mainloop()
A:
It looks like you're trying to use separate processes to run your macros simultaneously. However, the GUI freezes because the macro actually runs in the main thread: multiprocessing.Process(target=netherwart(metric_raw)) calls netherwart(metric_raw) immediately while the Process object is being constructed, so the whole loop executes before the worker even starts. The target must be passed uncalled, with its arguments supplied separately, e.g. multiprocessing.Process(target=netherwart, args=(metric_raw,)).
To fix this, you also need to make sure that your macro functions are non-blocking, i.e. they don't freeze the main thread. One way to do this is to use the threading module to run each macro in a separate thread.
Here's an example of how you could modify your code to do this:
import threading
def netherwart(metric_raw):
# ...
def sugarcane(metric_raw):
# ...
def cobblestone(metric_raw):
# ...
def run_macro(func, metric_raw):
thread = threading.Thread(target=func, args=(metric_raw,))
thread.start()
# To run a macro, call the run_macro() function with the name of the macro function
# and the metric_raw argument that you want to pass to it.
run_macro(sugarcane, 100)
This way, each macro function will be run in a separate thread, and the main thread will be free to update the GUI. You'll need to make similar changes to the rest of your code to ensure that all blocking operations are moved to separate threads.
I hope this helps!
| How do I run my main function in parallel with a multiprocessing.Process without it freezing? (sorry for the sloppy code, I'm new and self-taught) | My main function is an app that I want to use for macros. The macros themselves all work as intended, and the function is technically able to work. The issue arises when you start the function: you can't interact with the GUI because it is frozen, and it only unfreezes, making the GUI usable again, when the function ends. Again, I'm sorry about the sloppy and changing code; like I said, I'm self-taught, and a lot is borrowed and mangled together.
I tried using different start methods like spawn and fork, and I tried setting up different ways for the function to end so it wouldn't wait. At one point the GUI worked, but the second I fixed the other bug this one appeared. Sadly I don't have an old save, as it was overwritten. There are no errors, and I understand that there is probably already a solution out there, but I have yet to find it. The code is long (400-ish lines), so be warned.
from ast import Call
from cgitb import text
from concurrent.futures import process, thread
from itertools import starmap
from logging import PlaceHolder, error
from re import L
import tkinter
import tkinter.messagebox
from turtle import width
import customtkinter
import threading
import multiprocessing
import time
import pyautogui
import sys
import keyboard
label_text="Select counter as well as type below:"
macro_select=0
rand_offset_min=0 #needed to set the variables in case of no entry
rand_offset_max=3
radio_select=0
label_="Please select a macro type:"
dead=False
base_start=";"
customtkinter.set_appearance_mode("System") # Modes: "System" (standard), "Dark", "Light"
customtkinter.set_default_color_theme("blue") # Themes: "blue" (standard), "green", "dark-blue"
n_process = []
s_process = []
c_process = []
multiprocessing.set_start_method("spawn")
def netherwart(metric_raw):
y=0
time.sleep(5)
print(metric_raw)
PlaceHolder_0=int(metric_raw)
while y <= PlaceHolder_0:
print (PlaceHolder_0)
print("running")
print(y)
pyautogui.keyDown(' ')
pyautogui.keyDown('a')
time.sleep(28)
pyautogui.keyUp('a')
pyautogui.keyDown('d')
time.sleep(28)
pyautogui.keyUp('d')
y=y+1
print("flag0")
def sugarcane(metric_raw):
y=0
time.sleep(5)
PlaceHolder_0=int(metric_raw)
while y <= PlaceHolder_0:
print (PlaceHolder_0)
print("running.")
pyautogui.keyDown(' ')
pyautogui.keyDown('d')
time.sleep(.25)
pyautogui.keyDown('w')
time.sleep(14.5)
pyautogui.keyUp('d')
time.sleep(1)
pyautogui.keyUp('w')
pyautogui.keyDown('s')
time.sleep(17.5)
pyautogui.keyUp('s')
pyautogui.keyDown('a')
pyautogui.keyDown('w')
time.sleep(.25)
pyautogui.keyUp('w')
time.sleep(.25)
pyautogui.keyUp('a')
y=y+1
def cobblestone(metric_raw):
y=0
time.sleep(5)
PlaceHolder_0=int(metric_raw)
while y <= PlaceHolder_0:
print (PlaceHolder_0)
print("running..")
pyautogui.keyDown(' ')
pyautogui.keyDown('w')
time.sleep(30)
y=y+1
class App(customtkinter.CTk):
WIDTH = 780 #set the width of the main gui
HEIGHT = 520 #set the height of the main gui
def __init__(self):
multiprocessing.freeze_support()
super().__init__()
self.title("EzFarmerGui") #Naming the program
self.geometry(f"{App.WIDTH} x {App.HEIGHT}")#Setting the background size
self.protocol("WM_DELETE_WINDOW", self.on_closing)
self.grid_columnconfigure(1, weight=1)
self.grid_rowconfigure(0, weight=1)
self.frame_left = customtkinter.CTkFrame(master=self, width=180, corner_radius=0)
self.frame_left.grid(row=0, column=0,rowspan=4, sticky="nswe")
self.frame_right = customtkinter.CTkFrame(master=self)
self.frame_right.grid(row=0, column=1,rowspan=1, sticky="nswe", padx=20, pady=10)
#left frame
self.frame_left.grid_rowconfigure(0, minsize=10) # empty row with minsize as spacing
self.frame_left.grid_rowconfigure(5, weight=1) # empty row as spacing
self.frame_left.grid_rowconfigure(8, minsize=20) # empty row with minsize as spacing
self.frame_left.grid_rowconfigure(11, minsize=10) # empty row with minsize as spacing | the first number is the actual row or column, row is the y and column is the x
self.label_1 = customtkinter.CTkLabel(master=self.frame_left,text="Farming Options:",text_font=("Roboto Medium", -16)) # font name and size in px
self.label_1.grid(row=1, column=0, pady=10, padx=10)
self.button_1 = customtkinter.CTkButton(master=self.frame_left,text="Netherwart",text_font=("Roboto Medium", -12),command=self.netherwart_event)
self.button_1.grid(row=2, column=0, pady=10, padx=20)
self.button_2 = customtkinter.CTkButton(master=self.frame_left,text="Sugarcane",text_font=("Roboto Medium", -12),command=self.sugarcane_event)
self.button_2.grid(row=3, column=0, pady=10,padx=20)
self.button_3 = customtkinter.CTkButton(master=self.frame_left,text="Cobblestone",text_font=("Roboto Medium", -12),command=self.cobblestone_event)
self.button_3.grid(row=4, column=0, pady=10,padx=20)
self.button_4 = customtkinter.CTkButton(master=self.frame_left,text="Settings",text_font=("Roboto Medium", -12),command=self.settings_event)
self.button_4.grid(row=6, column=0, pady=10,padx=20)
#Right frame
self.frame_right.rowconfigure((0, 1, 2), minsize=0,weight=2)
self.frame_right.rowconfigure(7, weight=10)
self.frame_right.rowconfigure(3,weight=1)
self.frame_right.columnconfigure((0, 1), weight=1)
self.frame_right.columnconfigure(2, weight=0)
self.frame_info = customtkinter.CTkFrame(master=self.frame_right)
self.frame_info.grid(row=0, column=0, columnspan=2, rowspan=2, pady=10, padx=20, sticky="nsew")
self.frame_options = customtkinter.CTkFrame(master=self.frame_right)
self.frame_options.grid(row=2, column=0, columnspan=2,rowspan=1,pady=10,padx=20,sticky="nsew")
self.frame_options.rowconfigure(0,weight=1)
self.frame_options.columnconfigure(0,weight=1)
#Configure info
self.frame_info.rowconfigure(0, weight=1)
self.frame_info.columnconfigure(0, weight=1)
self.frame_info.rowconfigure(0, weight=1)
self.frame_info.columnconfigure(0, weight=1)
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
#Configure options
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(0,minsize=10, weight=1)
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(0,minsize=10, weight=1)
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(1,minsize=10, weight=1)
self.frame_options.rowconfigure(0,minsize=10, weight=1)
self.frame_options.columnconfigure(2,minsize=10, weight=1)
self.radio_var = tkinter.IntVar(value=0)
self.label_info_2 = customtkinter.CTkLabel(master=self.frame_options,text=label_text ,height=40,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_2.grid(column=0, row=0, sticky="nwe", padx=15, pady=10,columnspan=4)
self.label_radio_0=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Repititions",text_font=("Roboto Medium", -12),value=0,command=self.repititions_event)
self.label_radio_0.grid(column=0,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.label_radio_1=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Time",text_font=("Roboto Medium", -12),value=1,command=self.time_event)
self.label_radio_1.grid(column=1,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.label_radio_2=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Experience",text_font=("Roboto Medium", -12),value=2,command=self.exp_event)
self.label_radio_2.grid(column=2,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.label_radio_3=customtkinter.CTkRadioButton(master=self.frame_options,variable=self.radio_var,text="Gold",text_font=("Roboto Medium", -12),value=3,command=self.gold_event)
self.label_radio_3.grid(column=3,columnspan=1, row=3,padx=10,pady=10,sticky="nswe")
self.slider=customtkinter.CTkSlider(master=self.frame_options,number_of_steps=250,from_=1, to=250,command=self.update_metrics_)
self.slider.grid(column=0,row=4,columnspan=4,padx=10,pady=20,sticky="nsew")
#start/stop frame
self.start_stop = customtkinter.CTkFrame(master=self.frame_right)
self.start_stop.grid(row=4, column=0, columnspan=2,rowspan=1,pady=10,padx=20,sticky="nsew")
self.start_stop.rowconfigure((0),minsize=10,weight=1)
self.start_stop.columnconfigure((0,1,2),minsize=10,weight=1)
self.start=customtkinter.CTkButton(master=self.start_stop,text="Start",command=self.start_event)
self.start.grid(row=0,column=0,padx=20,pady=20)
self.stop=customtkinter.CTkButton(master=self.start_stop,text="Stop",command=self.stop_event)
self.stop.grid(row=0,column=2,padx=20,pady=20)
def start_event(self):
metric_raw=str(round(self.slider.get()))
global n_process,s_process,c_process
if macro_select == 0:
print("\nStarted")
n = multiprocessing.Process(target=netherwart(metric_raw))
n.start()
print("flag")
ac=multiprocessing.active_children
print(f'Active Children: {len(ac)}')
n_process.append(n)
elif macro_select == 1:
print("\nStarted")
s = multiprocessing.Process(target=sugarcane(metric_raw))
s.start()
print("flag")
ac=multiprocessing.active_children
print(f'Active Children: {len(ac)}')
s_process.append(s)
elif macro_select == 2:
print("\nStarted")
c = multiprocessing.Process(target=cobblestone(metric_raw))
c.start()
print("flag")
ac=multiprocessing.active_children
print(f'Active Children: {len(ac)}')
c_process.append(c)
def stop_event(self):
if macro_select == 0:
print("\nEnded")
for process in n_process:
for process in multiprocessing.active_children():
process.terminate()
print(multiprocessing.active_children)
elif macro_select == 1:
print("\nEnded")
for process in s_process:
for process in multiprocessing.active_children():
process.terminate()
print(multiprocessing.active_children)
elif macro_select == 2:
print("\nEnded")
for process in c_process:
for process in multiprocessing.active_children():
process.terminate()
print(multiprocessing.active_children)
def netherwart_event(self):
global macro_select
macro_select=0
label_="The netherwart macro has been selected."
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
def sugarcane_event(self):
global macro_select
macro_select=1
label_="The sugarcane macro has been selected."
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
def cobblestone_event(self):
global macro_select
macro_select=2
label_="The cobblestone macro has been selected."
self.label_info_1 = customtkinter.CTkLabel(master=self.frame_info,text=label_ ,height=100,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_1.grid(column=0, row=0, sticky="nwe", padx=15, pady=10)
def repititions_event(self):
global radio_select
radio_select=0
def time_event(self):
global radio_select
radio_select=1
def exp_event(self):
global radio_select
radio_select=2
def gold_event(self):
global radio_select
radio_select=3
def update_metrics_(self, slider):
global radio_select
global macro_select
global xp
metric_raw=str(round(self.slider.get()))
if radio_select == 0:
label_text="Number of repititions: "+ metric_raw
elif radio_select == 1:
if macro_select==0:
place_holder=int(metric_raw)
time_0=int(rand_offset_max+58)
time_1=int(place_holder*time_0)
time_1=str(time_1)
label_text="Approximate time to complete: "+ time_1 +" seconds."
elif macro_select==1:
place_holder=int(metric_raw)
time_0_0=int(rand_offset_max+34)
time_1_0=int(place_holder*time_0_0)
time_1_0=str(time_1_0)
label_text="Approximate time to complete: "+ time_1_0 +" seconds"
elif macro_select==2:
place_holder=int(metric_raw)
time_1_0_0=int(30*place_holder)
time_1_0_0=str(time_1_0_0)
label_text="Approximate time to complete: "+ time_1_0_0 +" seconds"
elif radio_select == 2:
if macro_select==0:
place_holder=int(metric_raw)
xp=int(1334*place_holder)
xp=str(xp)
label_text="Approximate experience gained: " + xp +"xp"
elif macro_select==1:
place_holder=int(metric_raw)
xp=int(3341*place_holder)
xp=str(xp)
label_text="Approximate experience gained: " + xp +"xp"
elif macro_select==2:
place_holder=int(metric_raw)
xp=int(600*place_holder)
xp=str(xp)
label_text="Approximate experience gained: " + xp +"xp"
elif radio_select == 3:
if macro_select==0:
place_holder=int(metric_raw)
gold=int(30000*place_holder)
gold=str(gold)
label_text="Approximate gold gained: " + gold +"gold"
elif macro_select==1:
place_holder=int(metric_raw)
gold=int(9000*place_holder)
gold=str(gold)
label_text="Approximate gold gained: " + gold +"gold"
elif macro_select==2:
place_holder=int(metric_raw)
gold=int(3000*place_holder)
gold=str(gold)
label_text="Approximate gold gained: " + gold +"gold"
self.label_info_2 = customtkinter.CTkLabel(master=self.frame_options,text=label_text ,height=40,corner_radius=6,fg_color=("white", "gray38"),justify=tkinter.LEFT)
self.label_info_2.grid(column=0, row=0, sticky="nwe", padx=15, pady=10,columnspan=4)
def settings_event(self):
self.settings_window=customtkinter.CTkToplevel(master=self)
self.settings_window.geometry("600x400")
self.settings_window.title("Settings")
self.settings_window_options=customtkinter.CTkFrame(master=self.settings_window)
self.settings_window_options.grid(padx=20,pady=20,column=0,row=0,rowspan=4,columnspan=4,sticky="nsew")
self.settings_window.rowconfigure(0,minsize=10,weight=1)
self.settings_window.rowconfigure(1,minsize=10,weight=1)
self.settings_window.rowconfigure(2,minsize=10,weight=1)
self.settings_window.rowconfigure(3,minsize=10,weight=1)
self.settings_window.columnconfigure(0,minsize=10,weight=1)
self.settings_window.columnconfigure(1,minsize=10,weight=1)
self.settings_window.columnconfigure(2,minsize=10,weight=1)
self.settings_window.columnconfigure(3,minsize=10,weight=1)
self.settings_window_options.rowconfigure(0,minsize=10,weight=1)
self.settings_window_options.rowconfigure(1,minsize=10,weight=1)
self.settings_window_options.rowconfigure(2,minsize=10,weight=1)
self.settings_window_options.rowconfigure(3,minsize=10,weight=1)
self.settings_window_options.columnconfigure(0,minsize=10,weight=1)
self.settings_window_options.columnconfigure(1,minsize=10,weight=1)
self.settings_window_options.columnconfigure(2,minsize=10,weight=1)
self.settings_window_options.columnconfigure(3,minsize=10,weight=1)
self.random_offset_lable_frame=customtkinter.CTkFrame(master=self.settings_window_options)
self.random_offset_lable_frame.grid(column=0,row=0,columnspan=4,padx=20,pady=20,rowspan=1,sticky="nsew")
self.random_offset_lable_frame.columnconfigure(0,minsize=10,weight=1)
self.random_offset_lable_frame.columnconfigure(1,minsize=10,weight=1)
self.random_offset_lable_frame.columnconfigure(2,minsize=10,weight=1)
self.random_offset_lable_frame.rowconfigure(0,minsize=10,weight=1)
self.random_offset_lable_frame.rowconfigure(1,minsize=10,weight=1)
self.random_offset_lable_frame.rowconfigure(2,minsize=10,weight=1)
self.random_offset_lable=customtkinter.CTkLabel(master=self.random_offset_lable_frame,text="EzFarm settings:")
self.random_offset_lable.grid(column=1,row=1,columnspan=1,rowspan=1,sticky="nsew")
self.random_offset_lable_frame_1=customtkinter.CTkFrame(master=self.settings_window_options)
self.random_offset_lable_frame_1.grid(column=0,row=1,columnspan=2,padx=20,pady=20,rowspan=1,sticky="nsew")
self.random_offset_lable_frame_2=customtkinter.CTkFrame(master=self.settings_window_options)
self.random_offset_lable_frame_2.grid(column=2,row=1,columnspan=2,padx=20,pady=20,rowspan=1,sticky="nsew")
self.random_offset_lable_1=customtkinter.CTkLabel(master=self.random_offset_lable_frame_1,text="Personalization::")
self.random_offset_lable_1.grid(column=1,row=0,columnspan=1,rowspan=1,padx=20,pady=20,sticky="n")
self.random_offset_lable_2=customtkinter.CTkLabel(master=self.random_offset_lable_frame_2,text="Randomized offset:")
self.random_offset_lable_2.grid(column=1,row=1,columnspan=1,rowspan=1,padx=20,pady=20,sticky="n")
self.optionmenu_1 = customtkinter.CTkOptionMenu(master=self.random_offset_lable_frame_1,values=["Light", "Dark", "System"],command=self.change_appearance_mode)
self.optionmenu_1.grid(row=2,column=1,padx=20,pady=10,sticky="nsew")
self.random_offset_lable_2.columnconfigure(0,minsize=10,weight=1)
self.random_offset_lable_2.columnconfigure(1,minsize=10,weight=1)
self.random_offset_lable_2.columnconfigure(2,minsize=10,weight=1)
self.random_offset_lable_2.rowconfigure(0,minsize=10,weight=1)
self.random_offset_lable_2.rowconfigure(1,minsize=10,weight=1)
self.random_offset_lable_2.rowconfigure(2,minsize=10,weight=1)
self.offset_ui_0=customtkinter.CTkEntry(master=self.random_offset_lable_2,width=30,placeholder_text="Insert the min value:")
self.offset_ui_0.grid(row=1,column=0,columnspan=2,padx=10,pady=10,sticky="nsew")
self.offset_ui_1=customtkinter.CTkEntry(master=self.random_offset_lable_2,width=30,placeholder_text="Insert the max value:")
self.offset_ui_1.grid(row=2,column=0,columnspan=2,padx=10,pady=10,sticky="nsew")
self.submit_button=customtkinter.CTkButton(master=self.random_offset_lable_frame_2,text="Submit",command=self.submit)
self.submit_button.grid(row=3,column=1,padx=20,pady=20,sticky="nsew")
def submit(self):
global rand_offset_min
global rand_offset_max
rand_offset_min=int(self.offset_ui_0.get())
rand_offset_max=int(self.offset_ui_1.get())
if rand_offset_min<0 or rand_offset_max<0:
error_=customtkinter.CTkToplevel(master=self)
error_.title("Error")
error_message=customtkinter.CTkLabel(master=error_,text="Error, please enter a whole number over 0")
error_message.grid(padx=10,pady=10)
def change_appearance_mode(self, new_appearance_mode):
customtkinter.set_appearance_mode(new_appearance_mode)
def button_event(self):
print("Button pressed")
def change_appearance_mode(self, new_appearance_mode):
customtkinter.set_appearance_mode(new_appearance_mode)
def on_closing(self, event=0):
self.destroy()
if __name__ == "__main__":
app = App()
app.mainloop()
| [
"It looks like you're trying to use multiple threads to run your macros simultaneously. However, the GUI freezes because you're blocking the main thread, which is responsible for updating the GUI.\nTo fix this, you need to make sure that your macro functions are non-blocking, i.e. they don't freeze the main thread. One way to do this is to use the threading module to run each macro in a separate thread.\nHere's an example of how you could modify your code to do this:\nimport threading\n\ndef netherwart(metric_raw):\n # ...\n\ndef sugarcane(metric_raw):\n # ...\n\ndef cobblestone(metric_raw):\n # ...\n\ndef run_macro(func, metric_raw):\n thread = threading.Thread(target=func, args=(metric_raw,))\n thread.start()\n\n# To run a macro, call the run_macro() function with the name of the macro function\n# and the metric_raw argument that you want to pass to it.\nrun_macro(sugarcane, 100)\n\nThis way, each macro function will be run in a separate thread, and the main thread will be free to update the GUI. You'll need to make similar changes to the rest of your code to ensure that all blocking operations are moved to separate threads.\nI hope this helps!\n"
] | [
0
] | [] | [] | [
"multiprocessing",
"python",
"tkinter"
] | stackoverflow_0074664305_multiprocessing_python_tkinter.txt |
Q:
How can I slow down the refresh rate in pygame?
I'm new to Python and I'm trying to make a simple platformer game using pygame. My issue is that when I use a while loop to make a block fall until it hits the bottom of the screen, it travels there all at once and I can't see it happening. However, when I move the block side to side using if statements, I can see that happening. How can I slow the falling block down so it's visible?
I was following a tutorial for the most part, but wanted to add my own thing.
clock = pygame.time.Clock()
fps = 60
run = True
while run:
clock.tick(fps)
keys = pygame.key.get_pressed()
if keys[pygame.K_a] and x > 0:
x = x - 5
if keys[pygame.K_d] and x < (500 - width):
x = x + 5
if keys[pygame.K_s]: #this is the portion that is too fast.
while y < (500 - height):
y = y + 5
player = pygame.draw.rect(screen, (player_color), (x,y,width,height))
pygame.display.update()
I tried putting the entire while ... y = y + 5 code into an if as well; that slowed it down, but it only moved when I held down the s key.
A:
If you want it to fully 'animate' down, you should add the code that keeps the pygame screen and player updating inside your inner while loop; otherwise you're just changing y without redrawing the screen. Your code would then look something like this:
clock = pygame.time.Clock()
fps = 60
run = True
while run:
clock.tick(fps)
keys = pygame.key.get_pressed()
if keys[pygame.K_a] and x > 0:
x = x - 5
if keys[pygame.K_d] and x < (500 - width):
x = x + 5
if keys[pygame.K_s]: #this is the portion that is too fast.
while y < (500 - height):
y = y + 5
player = pygame.draw.rect(screen, (player_color), (x,y,width,height)) # Make sure to update the player
pygame.display.update() # Make sure to update the display
player = pygame.draw.rect(screen, (player_color), (x,y,width,height))
pygame.display.update()
Changing the FPS:
But if you do want to change the speed of the game loop itself, that is, the frames per second, you can simply change the fps variable that is passed to clock.tick(). For example:
clock = pygame.time.Clock()
fps = 30 # This value is the amount of frames per second
run = True
while run:
clock.tick(fps) # The argument (currently fps) passed into this method will change the frames per second
keys = pygame.key.get_pressed()
if keys[pygame.K_a] and x > 0:
x = x - 5
if keys[pygame.K_d] and x < (500 - width):
x = x + 5
if keys[pygame.K_s]: #this is the portion that is too fast.
while y < (500 - height):
y = y + 5
player = pygame.draw.rect(screen, (player_color), (x,y,width,height))
pygame.display.update()
You can read more about the clock.tick() method here
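An alternative sketch (not in the original answer, but using the same variable names as the question) drops the inner while loop entirely and moves the block a little on every frame, which is the usual pattern for gravity in a game loop:
falling = False
while run:
    clock.tick(fps)
    keys = pygame.key.get_pressed()
    if keys[pygame.K_s]:
        falling = True              # start the drop; no need to keep holding s
    if falling and y < (500 - height):
        y = y + 5                   # one small step per frame
    player = pygame.draw.rect(screen, (player_color), (x, y, width, height))
    pygame.display.update()

Because the outer loop keeps ticking, the fall is animated at the same fps as everything else.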
Please mark my answer as accepted if it solved your issue
| How can I slow down the refresh rate in pygame? | I'm new to Python and I'm trying to make a simple platformer game using pygame. My issue is that when I use a while loop to make a block fall until it hits the bottom of the screen, it travels there all at once and I can't see it happening. However, when I move the block side to side using if statements, I can see that happening. How can I slow the falling block down so it's visible?
I was following a tutorial for the most part, but wanted to add my own thing.
clock = pygame.time.Clock()
fps = 60
run = True
while run:
clock.tick(fps)
keys = pygame.key.get_pressed()
if keys[pygame.K_a] and x > 0:
x = x - 5
if keys[pygame.K_d] and x < (500 - width):
x = x + 5
if keys[pygame.K_s]: #this is the portion that is too fast.
while y < (500 - height):
y = y + 5
player = pygame.draw.rect(screen, (player_color), (x,y,width,height))
pygame.display.update()
I tried putting the entire while ... y = y + 5 code into an if as well; that slowed it down, but it only moved when I held down the s key.
| [
"If you want it to fully 'animate' down, you should add the code that keeps the pygame screen/player updating in your while loop, otherwise you're just changing the y without changing the screen. So your code would look somewhat like this:\nclock = pygame.time.Clock()\nfps = 60\nrun = True\nwhile run:\n clock.tick(fps)\n keys = pygame.key.get_pressed()\n if keys[pygame.K_a] and x > 0:\n x = x - 5\n if keys[pygame.K_d] and x < (500 - width):\n x = x + 5\n if keys[pygame.K_s]: #this is the portion that is too fast. \n while y < (500 - height):\n y = y + 5 \n player = pygame.draw.rect(screen, (player_color), (x,y,width,height)) # Make sure to update the player\n pygame.display.update() # Make sure to update the display\n player = pygame.draw.rect(screen, (player_color), (x,y,width,height))\n pygame.display.update()\n\nChanging the FPS:\nBut, if you do want to change the speed of the game loop/essentially the frames per second, you can simply change the fps variable/the clock.tick() argument. So for example:\nclock = pygame.time.Clock()\nfps = 30 # This value is the amount of frames per second\nrun = True\nwhile run:\n clock.tick(fps) # The argument (currently fps) passed into this method will change the frames per second\n keys = pygame.key.get_pressed()\n if keys[pygame.K_a] and x > 0:\n x = x - 5\n if keys[pygame.K_d] and x < (500 - width):\n x = x + 5\n if keys[pygame.K_s]: #this is the portion that is too fast. \n while y < (500 - height):\n y = y + 5 \n player = pygame.draw.rect(screen, (player_color), (x,y,width,height))\n pygame.display.update()\n\nYou can read more about the clock.tick() method here\nPlease mark my answer as accepted if it solved your issue\n"
] | [
1
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074664312_pygame_python.txt |
Q:
do you have to pass SSO profile credentials in order to assume the IAM role using boto3
I have my config file set up with multiple profiles and I am trying to assume an IAM role, but all the articles I see about assuming roles start by making an STS client using
import boto3

client = boto3.client('sts')
which makes sense, but the problem is that it gives me an error when I try it that way. When I instead pass a profile that exists in my config file, it works. Here is the code:
import boto3

session = boto3.Session(profile_name="test_profile")
sts = session.client("sts")
response = sts.assume_role(
RoleArn="arn:aws:iam::xxx:role/role-name",
RoleSessionName="test-session"
)
new_session = Session(aws_access_key_id=response['Credentials']['AccessKeyId'], aws_secret_access_key=response['Credentials']['SecretAccessKey'], aws_session_token=response['Credentials']['SessionToken'])
When other people assume roles in their code without passing a profile in, how does that even work? Does boto3 automatically grab the default profile from the config file, or something like that, in their case?
A:
Yes. A call like:
client = boto3.client('sts')
tells boto3 to build the client from the default credential chain, because no profile or explicit session is given. (Your session.client("sts"), by contrast, uses the test_profile credentials of the session you created.)
The credentials can be provided in the ~/.aws/credentials file. If the code is running on an Amazon EC2 instance, boto3 will automatically use the credentials of the IAM Role attached to the instance.
Credentials can also be passed via Environment Variables.
See: Credentials — Boto3 documentation
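For illustration, a minimal sketch of assuming a role with no explicit profile (the account ID in the ARN is a placeholder); boto3 resolves credentials through its default chain here:
import boto3

sts = boto3.client("sts")  # default chain: env vars, ~/.aws files, or an instance role
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/role-name",
    RoleSessionName="test-session",
)
creds = response["Credentials"]
new_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)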
| do you have to pass SSO profile credentials in order to assume the IAM role using boto3 | I have my config file set up with multiple profiles and I am trying to assume an IAM role, but all the articles I see about assuming roles start by making an STS client using
import boto3

client = boto3.client('sts')
which makes sense, but the problem is that it gives me an error when I try it that way. When I instead pass a profile that exists in my config file, it works. Here is the code:
import boto3

session = boto3.Session(profile_name="test_profile")
sts = session.client("sts")
response = sts.assume_role(
RoleArn="arn:aws:iam::xxx:role/role-name",
RoleSessionName="test-session"
)
new_session = Session(aws_access_key_id=response['Credentials']['AccessKeyId'], aws_secret_access_key=response['Credentials']['SecretAccessKey'], aws_session_token=response['Credentials']['SessionToken'])
When other people assume roles in their code without passing a profile in, how does that even work? Does boto3 automatically grab the default profile from the config file, or something like that, in their case?
| [
"Yes. This line:\nsts = session.client(\"sts\")\n\ntells boto3 to create a session using the default credentials.\nThe credentials can be provided in the ~/.aws/credentials file. If the code is running on an Amazon EC2 instance, boto3 will automatically use credentials associated with the IAM Role associated with the instance.\nCredentials can also be passed via Environment Variables.\nSee: Credentials — Boto3 documentation\n"
] | [
0
] | [] | [] | [
"amazon_iam",
"amazon_web_services",
"assume_role",
"boto3",
"python"
] | stackoverflow_0074656007_amazon_iam_amazon_web_services_assume_role_boto3_python.txt |
Q:
C# gives me different result of ModPow from Java & Python. Is this a bug?
Hi all,
I am Joshua.
Currently I am developing some calculation logic in C# (.NET 4.7), and I have been stuck for a couple of days on ModPow from the BigInteger class. Finally I compared the results with some other languages, namely JavaScript and Python.
Here is the code:
C#
var _a = BigInteger.Parse("-112504099738967919768410869814860903982619354592334385300478909977139427923361560492994684609769555842111970113942123739668771370088164545697837713491001933076595596664223176568695966455489156970812960564312137880189440762210023504737301351876623478203051273064143361985681097609967600291953777514093844982210010914333130253653115287655327635099624170100570446664200407152843331467876643789736619196583683418866514683967222986915221982722110686116114379750004515841618651243098835937383564483775550060041152863597757016771967904349656600797877149977649728258675384541203748747540152406727068415700988829908215424137984927266987257127615691501812286981137284264501640480312282004988469360705007547931588660449754580929985074021203105802730617855236140743357649842847802339153126105816742502906195190402079944900015779354117203169724787481860002766072928442884322075317745510521170465176626766316916518300959480576101945671060969719420147093892605449154540004740863401759952424765321581716947920578041839", NumberStyles.Number);
var _b = BigInteger.Parse("42710288472123706107732045980552936061105504205720832439400517546130482902053491661703847614802545363376415610740434485703528101945149576942291183712758802525948559100109353863348012045024391482754729786318228709846887076811626987986636873030977565323002665497040002349699696324754471341241246091133248712513", NumberStyles.Number);
var _c = BigInteger.Parse("5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807", NumberStyles.Number);
var _modPowRes = BigInteger.ModPow(_a, _b, _c);
NumberFormatInfo _formatter = new NumberFormatInfo();
_formatter.NegativeSign = "-";
string _stringRes = _modPowRes.ToString("D", _formatter);
// -4025962189194561064516553363283375014525921961557175720338529077112971710311848366234203913802556580292548393361209331638020083844074732374616343653135077799402946400533385408838985464487221381056062706959966844828188406680815976045839849337092509291814967568127589935214307295687077940893465121894442245470126915769901660325457529592729859443914551917517655171549159468233372814554103351084536838316208603471169266662039167296594675085295938849603548128017384775032304908588861826301095512209694112961080272106412321025934327013828425487280677537982063915874200399589997896633613461631025068511458444974149152947749747984549594822000420708352393424384641432718686749229405669436317664008070112083271636922781745940251494349374415266883563753139424535520085753958356376469108498554786941521255255003415937460671624180845917463097501347690828778209895285876290213489273200505952380626734459377765095025966725691306820725825499
JavaScript (I used the jsbn library):
var _a = new BigInteger("-112504099738967919768410869814860903982619354592334385300478909977139427923361560492994684609769555842111970113942123739668771370088164545697837713491001933076595596664223176568695966455489156970812960564312137880189440762210023504737301351876623478203051273064143361985681097609967600291953777514093844982210010914333130253653115287655327635099624170100570446664200407152843331467876643789736619196583683418866514683967222986915221982722110686116114379750004515841618651243098835937383564483775550060041152863597757016771967904349656600797877149977649728258675384541203748747540152406727068415700988829908215424137984927266987257127615691501812286981137284264501640480312282004988469360705007547931588660449754580929985074021203105802730617855236140743357649842847802339153126105816742502906195190402079944900015779354117203169724787481860002766072928442884322075317745510521170465176626766316916518300959480576101945671060969719420147093892605449154540004740863401759952424765321581716947920578041839", 10);
var _b = new BigInteger("42710288472123706107732045980552936061105504205720832439400517546130482902053491661703847614802545363376415610740434485703528101945149576942291183712758802525948559100109353863348012045024391482754729786318228709846887076811626987986636873030977565323002665497040002349699696324754471341241246091133248712513", 10);
var _c = new BigInteger("5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807", 10);
var _modPowRes = _a.modPow(_b, _c);
var _stringRes = _modPowRes.toString(10);
// 1783643806175396998275362602355826387650690265345724813364371805666764467579142495237890860674783000854825016824437046690023645956675737723594580834731857259761425187634662132104996180029411374011438919474589553364998222309255272614979511868027284402170465728908528297700102876189729595563926155962569604427283291749203673030343591516627038015511719927953742781126799972560120257074290771695973286302279629131295383214811291564651109155633319576684151577295199734593114604874743329126921653504771250132941337184148762999959335547394147714802188259839801355116944682610659081543579365393513921728510730572041617697936145453462119608425988630323921319186513104423344823774870759265115372373731593225388193828408607005773987710556891304121163609340263880054616842988101393815039937434342691332663137114581535172021453932283969024301846449291944006405969946744999443455011016318658938083030075774742259090377978078691693422518308
Python
_a = -112504099738967919768410869814860903982619354592334385300478909977139427923361560492994684609769555842111970113942123739668771370088164545697837713491001933076595596664223176568695966455489156970812960564312137880189440762210023504737301351876623478203051273064143361985681097609967600291953777514093844982210010914333130253653115287655327635099624170100570446664200407152843331467876643789736619196583683418866514683967222986915221982722110686116114379750004515841618651243098835937383564483775550060041152863597757016771967904349656600797877149977649728258675384541203748747540152406727068415700988829908215424137984927266987257127615691501812286981137284264501640480312282004988469360705007547931588660449754580929985074021203105802730617855236140743357649842847802339153126105816742502906195190402079944900015779354117203169724787481860002766072928442884322075317745510521170465176626766316916518300959480576101945671060969719420147093892605449154540004740863401759952424765321581716947920578041839
_b = 42710288472123706107732045980552936061105504205720832439400517546130482902053491661703847614802545363376415610740434485703528101945149576942291183712758802525948559100109353863348012045024391482754729786318228709846887076811626987986636873030977565323002665497040002349699696324754471341241246091133248712513
_c = 5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807
_modPowRes = pow(_a, _b, _c)
print(_modPowRes)
#1783643806175396998275362602355826387650690265345724813364371805666764467579142495237890860674783000854825016824437046690023645956675737723594580834731857259761425187634662132104996180029411374011438919474589553364998222309255272614979511868027284402170465728908528297700102876189729595563926155962569604427283291749203673030343591516627038015511719927953742781126799972560120257074290771695973286302279629131295383214811291564651109155633319576684151577295199734593114604874743329126921653504771250132941337184148762999959335547394147714802188259839801355116944682610659081543579365393513921728510730572041617697936145453462119608425988630323921319186513104423344823774870759265115372373731593225388193828408607005773987710556891304121163609340263880054616842988101393815039937434342691332663137114581535172021453932283969024301846449291944006405969946744999443455011016318658938083030075774742259090377978078691693422518308
As you can see, the constant values _a, _b and _c are all the same, but only C# gives a different result from ModPow.
With smaller numbers, C# gives the same result as the other languages.
I have also tried running my unit test project with x64, x86 and AnyCPU as the platform target, but the result is the same.
Can somebody explain why this is happening?
A:
Python and C# have different definitions of mod. Python's % always takes the sign of the modulus, so for a positive modulus the result is non-negative, while C# returns a remainder with the same sign as the value you started with.
You'll notice that the difference between C#'s answer and Python's answer is precisely the modulus. In effect, both return the same residue, just a different representative of it.
>>> x # result returned by C#
-4025962189194561064516553363283375014525921961557175720338529077112971710311848366234203913802556580292548393361209331638020083844074732374616343653135077799402946400533385408838985464487221381056062706959966844828188406680815976045839849337092509291814967568127589935214307295687077940893465121894442245470126915769901660325457529592729859443914551917517655171549159468233372814554103351084536838316208603471169266662039167296594675085295938849603548128017384775032304908588861826301095512209694112961080272106412321025934327013828425487280677537982063915874200399589997896633613461631025068511458444974149152947749747984549594822000420708352393424384641432718686749229405669436317664008070112083271636922781745940251494349374415266883563753139424535520085753958356376469108498554786941521255255003415937460671624180845917463097501347690828778209895285876290213489273200505952380626734459377765095025966725691306820725825499
>>> y # result returned by Python
1783643806175396998275362602355826387650690265345724813364371805666764467579142495237890860674783000854825016824437046690023645956675737723594580834731857259761425187634662132104996180029411374011438919474589553364998222309255272614979511868027284402170465728908528297700102876189729595563926155962569604427283291749203673030343591516627038015511719927953742781126799972560120257074290771695973286302279629131295383214811291564651109155633319576684151577295199734593114604874743329126921653504771250132941337184148762999959335547394147714802188259839801355116944682610659081543579365393513921728510730572041617697936145453462119608425988630323921319186513104423344823774870759265115372373731593225388193828408607005773987710556891304121163609340263880054616842988101393815039937434342691332663137114581535172021453932283969024301846449291944006405969946744999443455011016318658938083030075774742259090377978078691693422518308
>>> y - x # your modulus
5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807
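If you need the two results to match numerically, normalizing the C#-style remainder in Python is a one-liner (a sketch; r is the C# result and m the modulus):
def to_python_mod(r, m):
    # Python's % already maps any integer into [0, m) for m > 0
    return r % m

# With the values above: to_python_mod(x, y - x) == y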
| C# gives me different result of ModPow from Java & Python. Is this a bug? | Hi all,
I am Joshua.
Currently I am developing some calculation logic in C# (.NET 4.7), and I have been stuck for a couple of days on ModPow from the BigInteger class. Finally I compared the results with some other languages, namely JavaScript and Python.
Here is the code:
C#
var _a = BigInteger.Parse("-112504099738967919768410869814860903982619354592334385300478909977139427923361560492994684609769555842111970113942123739668771370088164545697837713491001933076595596664223176568695966455489156970812960564312137880189440762210023504737301351876623478203051273064143361985681097609967600291953777514093844982210010914333130253653115287655327635099624170100570446664200407152843331467876643789736619196583683418866514683967222986915221982722110686116114379750004515841618651243098835937383564483775550060041152863597757016771967904349656600797877149977649728258675384541203748747540152406727068415700988829908215424137984927266987257127615691501812286981137284264501640480312282004988469360705007547931588660449754580929985074021203105802730617855236140743357649842847802339153126105816742502906195190402079944900015779354117203169724787481860002766072928442884322075317745510521170465176626766316916518300959480576101945671060969719420147093892605449154540004740863401759952424765321581716947920578041839", NumberStyles.Number);
var _b = BigInteger.Parse("42710288472123706107732045980552936061105504205720832439400517546130482902053491661703847614802545363376415610740434485703528101945149576942291183712758802525948559100109353863348012045024391482754729786318228709846887076811626987986636873030977565323002665497040002349699696324754471341241246091133248712513", NumberStyles.Number);
var _c = BigInteger.Parse("5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807", NumberStyles.Number);
var _modPowRes = BigInteger.ModPow(_a, _b, _c);
NumberFormatInfo _formatter = new NumberFormatInfo();
_formatter.NegativeSign = "-";
string _stringRes = _modPowRes.ToString("D", _formatter);
// -4025962189194561064516553363283375014525921961557175720338529077112971710311848366234203913802556580292548393361209331638020083844074732374616343653135077799402946400533385408838985464487221381056062706959966844828188406680815976045839849337092509291814967568127589935214307295687077940893465121894442245470126915769901660325457529592729859443914551917517655171549159468233372814554103351084536838316208603471169266662039167296594675085295938849603548128017384775032304908588861826301095512209694112961080272106412321025934327013828425487280677537982063915874200399589997896633613461631025068511458444974149152947749747984549594822000420708352393424384641432718686749229405669436317664008070112083271636922781745940251494349374415266883563753139424535520085753958356376469108498554786941521255255003415937460671624180845917463097501347690828778209895285876290213489273200505952380626734459377765095025966725691306820725825499
JavaScript (I used the jsbn library):
var _a = new BigInteger("-112504099738967919768410869814860903982619354592334385300478909977139427923361560492994684609769555842111970113942123739668771370088164545697837713491001933076595596664223176568695966455489156970812960564312137880189440762210023504737301351876623478203051273064143361985681097609967600291953777514093844982210010914333130253653115287655327635099624170100570446664200407152843331467876643789736619196583683418866514683967222986915221982722110686116114379750004515841618651243098835937383564483775550060041152863597757016771967904349656600797877149977649728258675384541203748747540152406727068415700988829908215424137984927266987257127615691501812286981137284264501640480312282004988469360705007547931588660449754580929985074021203105802730617855236140743357649842847802339153126105816742502906195190402079944900015779354117203169724787481860002766072928442884322075317745510521170465176626766316916518300959480576101945671060969719420147093892605449154540004740863401759952424765321581716947920578041839", 10);
var _b = new BigInteger("42710288472123706107732045980552936061105504205720832439400517546130482902053491661703847614802545363376415610740434485703528101945149576942291183712758802525948559100109353863348012045024391482754729786318228709846887076811626987986636873030977565323002665497040002349699696324754471341241246091133248712513", 10);
var _c = new BigInteger("5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807", 10);
var _modPowRes = _a.modPow(_b, _c);
var _stringRes = _modPowRes.toString(10);
// 1783643806175396998275362602355826387650690265345724813364371805666764467579142495237890860674783000854825016824437046690023645956675737723594580834731857259761425187634662132104996180029411374011438919474589553364998222309255272614979511868027284402170465728908528297700102876189729595563926155962569604427283291749203673030343591516627038015511719927953742781126799972560120257074290771695973286302279629131295383214811291564651109155633319576684151577295199734593114604874743329126921653504771250132941337184148762999959335547394147714802188259839801355116944682610659081543579365393513921728510730572041617697936145453462119608425988630323921319186513104423344823774870759265115372373731593225388193828408607005773987710556891304121163609340263880054616842988101393815039937434342691332663137114581535172021453932283969024301846449291944006405969946744999443455011016318658938083030075774742259090377978078691693422518308
Python
_a = -112504099738967919768410869814860903982619354592334385300478909977139427923361560492994684609769555842111970113942123739668771370088164545697837713491001933076595596664223176568695966455489156970812960564312137880189440762210023504737301351876623478203051273064143361985681097609967600291953777514093844982210010914333130253653115287655327635099624170100570446664200407152843331467876643789736619196583683418866514683967222986915221982722110686116114379750004515841618651243098835937383564483775550060041152863597757016771967904349656600797877149977649728258675384541203748747540152406727068415700988829908215424137984927266987257127615691501812286981137284264501640480312282004988469360705007547931588660449754580929985074021203105802730617855236140743357649842847802339153126105816742502906195190402079944900015779354117203169724787481860002766072928442884322075317745510521170465176626766316916518300959480576101945671060969719420147093892605449154540004740863401759952424765321581716947920578041839
_b = 42710288472123706107732045980552936061105504205720832439400517546130482902053491661703847614802545363376415610740434485703528101945149576942291183712758802525948559100109353863348012045024391482754729786318228709846887076811626987986636873030977565323002665497040002349699696324754471341241246091133248712513
_c = 5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807
_modPowRes = pow(_a, _b, _c)
print(_modPowRes)
#1783643806175396998275362602355826387650690265345724813364371805666764467579142495237890860674783000854825016824437046690023645956675737723594580834731857259761425187634662132104996180029411374011438919474589553364998222309255272614979511868027284402170465728908528297700102876189729595563926155962569604427283291749203673030343591516627038015511719927953742781126799972560120257074290771695973286302279629131295383214811291564651109155633319576684151577295199734593114604874743329126921653504771250132941337184148762999959335547394147714802188259839801355116944682610659081543579365393513921728510730572041617697936145453462119608425988630323921319186513104423344823774870759265115372373731593225388193828408607005773987710556891304121163609340263880054616842988101393815039937434342691332663137114581535172021453932283969024301846449291944006405969946744999443455011016318658938083030075774742259090377978078691693422518308
As you can see, the constant values _a, _b and _c are all the same, but only C# gives a different result from ModPow.
With smaller numbers, C# gives the same result as the other languages.
I have also tried running my unit test project with x64, x86 and AnyCPU as the platform target, but the result is the same.
Can somebody explain why this is happening?
| [
"Python and C# have different definitions of Mod. Python uses the mathematical definition of mod (the result is always a non-negative number) while C# returns a value with the same sign as you started with.\nYou'll notice that the difference between C#'s answer and Python's answer is precisely the modulus. In effect, both are returning the same value, but a different representation of it.\n>>> x # result returned by C#\n-4025962189194561064516553363283375014525921961557175720338529077112971710311848366234203913802556580292548393361209331638020083844074732374616343653135077799402946400533385408838985464487221381056062706959966844828188406680815976045839849337092509291814967568127589935214307295687077940893465121894442245470126915769901660325457529592729859443914551917517655171549159468233372814554103351084536838316208603471169266662039167296594675085295938849603548128017384775032304908588861826301095512209694112961080272106412321025934327013828425487280677537982063915874200399589997896633613461631025068511458444974149152947749747984549594822000420708352393424384641432718686749229405669436317664008070112083271636922781745940251494349374415266883563753139424535520085753958356376469108498554786941521255255003415937460671624180845917463097501347690828778209895285876290213489273200505952380626734459377765095025966725691306820725825499\n>>> y # result returned by Python\n1783643806175396998275362602355826387650690265345724813364371805666764467579142495237890860674783000854825016824437046690023645956675737723594580834731857259761425187634662132104996180029411374011438919474589553364998222309255272614979511868027284402170465728908528297700102876189729595563926155962569604427283291749203673030343591516627038015511719927953742781126799972560120257074290771695973286302279629131295383214811291564651109155633319576684151577295199734593114604874743329126921653504771250132941337184148762999959335547394147714802188259839801355116944682610659081543579365393513921728510730572041617697936145453462119608425988630323921319186513104423344823774870759265115372373731593225388193828408607005773987710556891304121163609340263880054616842988101393815039937434342691332663137114581535172021453932283969024301846449291944006405969946744999443455011016318658938083030075774742259090377978078691693422518308\n>>> y - x # your modulus\n5809605995369958062791915965639201402176612226902900533702900882779736177890990861472094774477339581147373410185646378328043729800750470098210924487866935059164371588168047540943981644516632755067501626434556398193186628990071248660819361205119793693985433297036118232914410171876807536457391277857011849897410207519105333355801121109356897459426271845471397952675959440793493071628394122780510124618488232602464649876850458861245784240929258426287699705312584509625419513463605155428017165714465363094021609290561084025893662561222573202082865797821865270991145082200656978177192827024538990239969175546190770645685893438011714430426409338676314743571154537142031573004276428701433036381801705308659830751190352946025482059931306571004727362479688415574702596946457770284148435989129632853918392117997472632693078113129886487399347796982772784615865232621289656944284216824611318709764535152507354116344703769998514148343807\n\n"
] | [
1
] | [] | [] | [
"biginteger",
"c#",
"javascript",
"mod",
"python"
] | stackoverflow_0074664517_biginteger_c#_javascript_mod_python.txt |
Q:
Creating a nested list using insert() in Python
I'm trying to find a way to create a nested list out of an already-existing non-empty list using built-in list functions.
Here is a small example:
a=['Groceries', 'School Fees', 'Medicines', 'Furniture']
When I try a[0].insert(0, 1000), for example, I'm met with an AttributeError (strings have no insert method). Is there any way to do this?
| Creating a nested list using insert() in Python | I'm trying to find a way to create a nested list out of an already-existing non-empty list using built-in list functions.
Here is a small example:
a=['Groceries', 'School Fees', 'Medicines', 'Furniture']
When I try a[0].insert(0, 1000), for example, I'm met with an AttributeError (strings have no insert method). Is there any way to do this?
| [] | [] | [
"Make a function for inner inserting.\nTry this\na=['Groceries', 'School Fees', 'Medicines', 'Furniture']\ndef innerInsert(index, value):\n try:\n a[index].insert(index, value)\n except AttributeError:\n a[index] = []\n a[index].insert(index, value)\n\ninnerInsert(0, 10000)\ninnerInsert(0, 100)\nprint(a)\n\n\n[[100, 10000], 'School Fees', 'Medicines', 'Furniture']\n\nFor more options, I made some changes to the function\na=['Groceries', 'School Fees', 'Medicines', 'Furniture']\ndef innerInsert(l, iLstIndex, iLstValueIndex, value):\n try:\n l[iLstIndex].insert(iLstValueIndex, value)\n except AttributeError:\n l[iLstIndex] = []\n l[iLstIndex].insert(iLstValueIndex, value)\n\ninnerInsert(a, 0, 0, 10000)\ninnerInsert(a, 0, 1, 2)\ninnerInsert(a, 0, 2, 3)\ninnerInsert(a, 0, 3, 4)\ninnerInsert(a, 0, 4, 1)\ninnerInsert(a, 0, 5, 2)\ninnerInsert(a[0], 4, 2, 100)\nprint(a)\n\nOUTPUT\n[[10000, 2, 3, 4, [100], 2], 'School Fees', 'Medicines', 'Furniture']\n\n",
"You can use indexing to access the first sublist and then use the list.insert() method to insert the new value at the specified index. For example:\na = ['Groceries', 'School Fees', 'Medicines', 'Furniture']\nb = [[element] for element in a]\n\n# Add the value 1000 to the first element of the first sublist\nb[0].insert(0, 1000)\n\n# The resulting nested list should be: [[1000, 'Groceries'], ['School Fees'], ['Medicines'], ['Furniture']]\nprint(b)\n\n\n"
] | [
-1,
-1
] | [
"list",
"nested_lists",
"python"
] | stackoverflow_0074664508_list_nested_lists_python.txt |
Q:
Calculating multilabel recall for this problem
I have a table with two columns, and the two entries of a row show that they are related:
Col1  Col2
a     A
b     B
a     C
c     A
b     D
Here a is related to A and C, b to B and D, and c to A, meaning the same entry in col1 might have multiple related labels in col2. I trained a Machine Learning model to quantify the relationship between Col1 and Col2 by creating a vector embedding of Col1 and Col2 and optimizing the cosine_similarity between the two vectors. Now, I want to test my model by calculating Recall on a test set: I want to check, at various recall@N, what proportion of these positive relationships can be retrieved. Suppose I have a normalized vector representation of all entries in each column; then I can calculate the cosine distance between them as:
cosine_distance = torch.mm(col1_feature, col2_feature.t())
which gives a matrix of distances between all pairs that can be formed between col1 and col2.
dist(a,A)  dist(a,B)  dist(a,C)  dist(a,A)  dist(a,D)
dist(b,A)  dist(b,B)  dist(b,C)  dist(b,A)  dist(b,D)
dist(a,A)  dist(a,B)  dist(a,C)  dist(a,A)  dist(a,D)
dist(c,A)  dist(c,B)  dist(c,C)  dist(c,A)  dist(c,D)
dist(b,A)  dist(b,B)  dist(b,C)  dist(b,A)  dist(b,D)
I can then rank the pairs by these scores to calculate recall@k. My question is: how can I make this efficient for millions of rows? I found this module in pytorch: torchmetrics.classification.MultilabelRecall (https://torchmetrics.readthedocs.io/en/stable/classification/recall.html), which seems useful, but it requires specifying the number of labels. In my case, I can have a variable number of labels for each unique entry of col1. Any ideas?
A:
You can use a clustering algorithm to group the entries in Col1 and Col2 into clusters. Then you can use the MultilabelRecall metric to calculate the recall for each cluster. This way, you don't have to specify the number of labels for each entry in Col1.
A:
If you have a large number of rows in your table, it may be inefficient to calculate the cosine distance between all pairs of entries in Col1 and Col2. One way to make this more efficient is to use approximate nearest neighbor (ANN) algorithms, which can quickly find the closest vectors in a high-dimensional space. These algorithms typically involve constructing a data structure that allows for efficient search, such as a k-d tree or locality-sensitive hashing. Once you have built this data structure, you can use it to quickly find the entries in Col2 that are closest to a given entry in Col1, and then calculate the recall@k for those entries.
Here is an example of how you might use an ANN algorithm to calculate the recall@k in your case. This code uses the k-d tree implementation in the scikit-learn library to index the vectors in Col2, then finds the nearest Col2 neighbors of each vector in Col1 using the tree, and finally calculates the recall@k over those neighbors.
from sklearn.neighbors import KDTree

# Create a k-d tree over the Col2 vectors only, so neighbor indices map
# straight back to rows of Col2. For L2-normalized embeddings, Euclidean
# nearest neighbors coincide with cosine-similarity ranking.
tree = KDTree(col2_feature)

# Find the k nearest neighbors of each vector in Col1 using the k-d tree.
# query() returns two (n_queries, k) arrays: the distances and the
# indices of the neighbors.
distances, indices = tree.query(col1_feature, k=k)

# Calculate the recall@k for each vector in Col1
recall_at_k = 0
for i, neighbor_indices in enumerate(indices):
    # Get the labels of the nearest neighbors of the current vector
    neighbor_labels = col2[neighbor_indices]

    # Count the number of true labels among the nearest neighbors
    true_labels = 0
    for label in neighbor_labels:
        if label in true_labels_for_col1[i]:
            true_labels += 1

    # Calculate the recall@k for the current vector
    recall_at_k += true_labels / k

# Calculate the average recall@k over all vectors in Col1
average_recall_at_k = recall_at_k / len(col1)
| Calculating multilabel recall for this problem | I have a table with two columns, and the two entries of a row show that they are related:
Col1  Col2
a     A
b     B
a     C
c     A
b     D
Here a is related to A and C, b to B and D, and c to A, meaning the same entry in col1 might have multiple related labels in col2. I trained a Machine Learning model to quantify the relationship between Col1 and Col2 by creating a vector embedding of Col1 and Col2 and optimizing the cosine_similarity between the two vectors. Now, I want to test my model by calculating Recall on a test set: I want to check, at various recall@N, what proportion of these positive relationships can be retrieved. Suppose I have a normalized vector representation of all entries in each column; then I can calculate the cosine distance between them as:
cosine_distance = torch.mm(col1_feature, col2_feature.t())
which gives a matrix of distances between all pairs that can be formed between col1 and col2.
dist(a,A)  dist(a,B)  dist(a,C)  dist(a,A)  dist(a,D)
dist(b,A)  dist(b,B)  dist(b,C)  dist(b,A)  dist(b,D)
dist(a,A)  dist(a,B)  dist(a,C)  dist(a,A)  dist(a,D)
dist(c,A)  dist(c,B)  dist(c,C)  dist(c,A)  dist(c,D)
dist(b,A)  dist(b,B)  dist(b,C)  dist(b,A)  dist(b,D)
I can then rank the pairs by these scores to calculate recall@k. My question is: how can I make this efficient for millions of rows? I found this module in pytorch: torchmetrics.classification.MultilabelRecall (https://torchmetrics.readthedocs.io/en/stable/classification/recall.html), which seems useful, but it requires specifying the number of labels. In my case, I can have a variable number of labels for each unique entry of col1. Any ideas?
| [
"You can use a clustering algorithm to group the entries in Col1 and Col2 into clusters. Then you can use the MultilabelRecall metric to calculate the recall for each cluster. This way, you don't have to specify the number of labels for each entry in Col1.\n",
"If you have a large number of rows in your table, it may be inefficient to calculate the cosine distance between all pairs of entries in Col1 and Col2. One way to make this more efficient is to use approximate nearest neighbor (ANN) algorithms, which can quickly find the closest vectors in a high-dimensional space. These algorithms typically involve constructing a data structure that allows for efficient search, such as a k-d tree or locality-sensitive hashing. Once you have built this data structure, you can use it to quickly find the entries in Col2 that are closest to a given entry in Col1, and then calculate the recall@k for those entries.\nHere is an example of how you might use an ANN algorithm to calculate the recall@k in your case. This code uses the k-d tree implementation in the scikit-learn library to index the vectors in Col1 and Col2, and then finds the nearest neighbors of each vector in Col1 using the k-d tree. It then calculates the recall@k for the nearest neighbors of each vector in Col1.\nfrom sklearn.neighbors import KDTree\n\n# Create a k-d tree to index the vectors in Col1 and Col2\ntree = KDTree(np.concatenate((col1_feature, col2_feature), axis=0))\n\n# Find the nearest neighbors of each vector in Col1 using the k-d tree\n# This returns a tuple containing the indices of the nearest neighbors\n# in Col2 and the distances to those neighbors\nneighbors = tree.query(col1_feature, k=k)\n\n# Calculate the recall@k for each vector in Col1\nrecall_at_k = 0\nfor i, (neighbor_indices, distances) in enumerate(neighbors):\n # Get the labels for the nearest neighbors of the current vector\n neighbor_labels = col2[neighbor_indices]\n\n # Count the number of true labels among the nearest neighbors\n true_labels = 0\n for label in neighbor_labels:\n if label in true_labels_for_col1[i]:\n true_labels += 1\n\n # Calculate the recall@k for the current vector\n recall_at_k += true_labels / k\n\n# Calculate the average recall@k over all vectors in Col1\naverage_recall_at_k = recall_at_k / len(col1)\n\n"
] | [
0,
0
] | [] | [] | [
"machine_learning",
"precision_recall",
"python",
"pytorch"
] | stackoverflow_0074633636_machine_learning_precision_recall_python_pytorch.txt |
Q:
Why use * here instead of + in regex for password must contain at least one number and both lower and uppercase letters?
The regex is like:
"^(?=.*[a-z])(?=.*[A-Z])[A-Za-z\d]{8,}$"
The * matches the previous token between zero and unlimited times.
The + matches the previous token between one and unlimited times.
The plus sign + seems like it should make sense here.
Why use * here instead of +?
A:
(?=.*[a-z]) and (?=.*[A-Z]) are positive lookaheads for at least one lowercase and one uppercase letter, respectively. .* means skip 0+ chars. If you change that to .+ it would skip 1+ chars, so (?=.+[A-Z]) would not match password Aaaaaaaaa even though it has an uppercase char.
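To see the difference concretely, here is a small Python sketch (the test password is made up for illustration):
import re

with_star = r"^(?=.*[a-z])(?=.*[A-Z])[A-Za-z\d]{8,}$"
with_plus = r"^(?=.+[a-z])(?=.+[A-Z])[A-Za-z\d]{8,}$"

password = "Aaaaaaaaa"  # the only uppercase letter is the first character

print(bool(re.match(with_star, password)))  # True: .* may skip zero chars
print(bool(re.match(with_plus, password)))  # False: .+ must skip at least one
# char before [A-Z], so an uppercase letter at position 0 is never found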
| Why use * here instead of + in regex for password must contain at least one number and both lower and uppercase letters? | The regex is like:
"^(?=.*[a-z])(?=.*[A-Z])[A-Za-z\d]{8,}$"
The * matches the previous token between zero and unlimited times.
The + matches the previous token between one and unlimited times.
The plus sign + seems like it should make sense here.
Why use * here instead of +?
| [
"(?=.*[a-z]) and (?=.*[A-Z]) are positive lookaheads for at least one lowercase and one uppercase letter, respectively. .* means skip 0+ chars. If you change that to .+ it would skip 1+ chars, so (?=.+[A-Z]) would not match password Aaaaaaaaa even though it has an uppercase char.\n"
] | [
1
] | [] | [] | [
"javascript",
"python",
"regex"
] | stackoverflow_0074664594_javascript_python_regex.txt |
Q:
Cannot download using coursera-dl, Error 404
I am trying to use coursera-dl in windows to download coursera videos using this command:
coursera-dl neural-networks-deep-learning
it gives this error:
coursera_dl version 0.11.5
Downloading class: neural-networks-deep-learning (1 / 1)
Parsing syllabus of on-demand course (id=W_mOXCrdEeeNPQ68_4aPpA). This may take some time, please be patient ...
Error 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true getting page https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
The server replied: <html>
<head>
<title>Coursera - API Route Does Not Exist</title>
</head>
<body style="background-color:#e4e4e4">
<div style="position:absolute; top:0; bottom:0; left:0; right:0; margin:auto; height:200px; width: 600px">
<div style="text-align:center">
<img src="https://s3.amazonaws.com/coursera/error_pages/coursera-logo.svg" width="400">
</div>
<h1 style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:100; color: #555">
API Route Does Not Exist
</h1>
<div style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:300; font-size:13pt; color: #555">
Edge does not know about this API route. <br>
Check whether this route is exposed in the routing table.
</div>
</div>
</body>
</html>
HTTPError 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
Any ideas?
A:
Per the documentation you should download as follows:
coursera-dl -u my_coursera_username -p my_coursera_password neural-networks-deep-learning
Note that you won't be able to access the course materials if you are not officially enrolled via the website.
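If username/password login is rejected (Coursera has been known to block plain password logins), the coursera-dl README also describes authenticating with the browser's CAUTH cookie. Assuming your installed version supports that flag, the call would look like:
coursera-dl -ca <your-CAUTH-cookie-value> neural-networks-deep-learning
Here <your-CAUTH-cookie-value> is a placeholder for the value of the CAUTH cookie copied from a logged-in browser session.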
| Cannot download using coursera-dl, Error 404 | I am trying to use coursera-dl in windows to download coursera videos using this command:
coursera-dl neural-networks-deep-learning
it gives this error:
coursera_dl version 0.11.5
Downloading class: neural-networks-deep-learning (1 / 1)
Parsing syllabus of on-demand course (id=W_mOXCrdEeeNPQ68_4aPpA). This may take some time, please be patient ...
Error 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true getting page https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
The server replied: <html>
<head>
<title>Coursera - API Route Does Not Exist</title>
</head>
<body style="background-color:#e4e4e4">
<div style="position:absolute; top:0; bottom:0; left:0; right:0; margin:auto; height:200px; width: 600px">
<div style="text-align:center">
<img src="https://s3.amazonaws.com/coursera/error_pages/coursera-logo.svg" width="400">
</div>
<h1 style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:100; color: #555">
API Route Does Not Exist
</h1>
<div style="text-align:center; font-family:Helvetica, Arial, sans-serif; font-weight:300; font-size:13pt; color: #555">
Edge does not know about this API route. <br>
Check whether this route is exposed in the routing table.
</div>
</div>
</body>
</html>
HTTPError 404 Client Error: Not Found for url: https://api.coursera.org/api/onDemandCourseMaterials.v1/?q=slug&slug=neural-networks-deep-learning&includes=moduleIds%2ClessonIds%2CpassableItemGroups%2CpassableItemGroupChoices%2CpassableLessonElements%2CitemIds%2Ctracks&fields=moduleIds%2ConDemandCourseMaterialModules.v1(name%2Cslug%2Cdescription%2CtimeCommitment%2ClessonIds%2Coptional)%2ConDemandCourseMaterialLessons.v1(name%2Cslug%2CtimeCommitment%2CelementIds%2Coptional%2CtrackId)%2ConDemandCourseMaterialPassableItemGroups.v1(requiredPassedCount%2CpassableItemGroupChoiceIds%2CtrackId)%2ConDemandCourseMaterialPassableItemGroupChoices.v1(name%2Cdescription%2CitemIds)%2ConDemandCourseMaterialPassableLessonElements.v1(gradingWeight)%2ConDemandCourseMaterialItems.v1(name%2Cslug%2CtimeCommitment%2Ccontent%2CisLocked%2ClockableByItem%2CitemLockedReasonCode%2CtrackId)%2ConDemandCourseMaterialTracks.v1(passablesCount)&showLockedItems=true
Any ideas?
| [
"Per the documentation you should download as follows:\ncoursera-dl -u my_coursera_username -p my_coursera_password neural-networks-deep-learning\n\nNote that you won't be able to access the course materials if you are not officially enrolled via the website.\n"
] | [
0
] | [] | [] | [
"cmd",
"coursera_api",
"python"
] | stackoverflow_0074662735_cmd_coursera_api_python.txt |
Q:
Is this an effective way to determine if someone has won in connect 4?
I'm using the following function to determine if a winner has been crowned in connect four. Piece is whether they are green or red, last is the last played move (by piece), and name is the discord name of the person playing the game, as it is a file based connect four game. Board is a 2d array made of all empty and filled squares. Due to the game being based in python, is this an efficient way to check?
Examples:
Piece:
:green_circle:
Board:
[[':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':green_circle:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:']]
Last:
5,1
Discord View:
def checks(piece, last, name):
board = []
open_file = open(name, "r")
thing = open_file.readline()
for x in range(6):
value = open_file.readline()
board.append(value.strip("\n").split(","))
open_file.close()
cords = last.split(',')
i = int(cords[0]) # row/x
j = int(cords[1]) # column/y
# checks for 000_
if j > 2:
if board[i][j - 1] == piece and board[i][j - 2] == piece and board[i][
j - 3] == piece:
return piece + " won"
# checks for _000
if j < 4:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j + 3] == piece:
return piece + " won"
# checks for downs
if i < 3:
if board[i + 1][j] == piece and board[i + 2][j] == piece and board[
i + 3][j] == piece:
return piece + " won"
#check if you place in a 00_0
if not j in [0, 1, 6]:
if board[i][j + 1] == piece and board[i][j - 1] == piece and board[i][
j - 2] == piece:
return piece + " won"
#check for 0_00
if not j in [0, 5, 6]:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j - 1] == piece:
return piece + " won"
# check for top piece of a down-right diagonal
if i < 3 and j < 4:
if board[i + 1][j + 1] == piece and board[i + 2][j + 2] == piece and board[
i + 3][j + 3] == piece:
return piece + " won"
# check for bottom piece of a down-right diagonal
if i > 2 and j > 2:
if board[i - 1][j - 1] == piece and board[i - 2][j - 2] == piece and board[
i - 3][j - 3] == piece:
return piece + " won"
# check for top piece of down-left diagonal
if i < 3 and j > 2:
if board[i + 1][j - 1] == piece and board[i + 2][j - 2] == piece and board[
i + 3][j - 3] == piece:
return piece + " won"
# check for bottom piece of down-left diagonal
if i > 2 and j < 4:
if board[i - 1][j + 1] == piece and board[i - 2][j + 2] == piece and board[
i - 3][j + 3] == piece:
return piece + " won"
# check for 2nd top piece of down-right diagonal
if i in [1,2,3] and j in [1,2,3,4]:
if board[i - 1][j - 1] == piece and board[i +1 ][j + 1] == piece and board[i +2][j +2] == piece:
return piece + " won"
# check for 3rd piece of down-right diagonal
if i in [2,3,4] and j in [2,3,4,5]:
if board[i - 1][j - 1] == piece and board[i -2 ][j -2] == piece and board[i +1][j +1] == piece:
return piece + " won"
# check for 2nd piece of down-left diagonal
if i in [1,2,3] and j in [2,3,4,5]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i +2][j -2] == piece:
return piece + " won"
# check for 3rd piece in down-left diagonal
if i in [2,3,4] and j in [1,2,3,4]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i -2][j +2] == piece:
return piece + " won"
A:
Keeping in mind your conditions are apt, your code could be enhanced in the following manners:
Replacing conditions with all()
Avoiding nested if conditions
Using elif in places of if
Use min() to check inequality for smallest rather than checking both i and j
Combine conditions to make it faster
Here's just an enhanced version of your code:
def checks(piece, last, name):
board = []
open_file = open(name, "r")
# thing = open_file.readline()
for x in range(6):
value = open_file.readline()
board.append(value.strip("\n").split(","))
open_file.close()
cords = last.split(',')
i = int(cords[0]) # row/x
j = int(cords[1]) # column/y
winMsg = f"{piece} won"  # create variable for ease
if j > 2:
if all(piece == value for value in [board[i][j-1], board[i][j-2], board[i][j-3]]) or all(piece == value for value in [board[i+1][j-1], board[i+2][j-2], board[i+3][j-3]]): return winMsg
elif all(piece == value for value in [board[i][j+1], board[i][j+2], board[i][j+3]]): return winMsg
elif all(piece == value for value in [board[i+1][j], board[i+2][j], board[i+3][j]]): return winMsg
elif j not in [0, 1, 6] and all(piece == value for value in [board[i][j+1], board[i][j-1], board[i][j-2]]): return winMsg
elif j not in [0, 5, 6] and all(piece == value for value in [board[i][j-1], board[i][j+1], board[i][j+2]]): return winMsg
elif all(piece == value for value in [board[i+1][j+1], board[i][j+2], board[i][j-1]]): return winMsg
elif min(i, j) > 2 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i-3][j-3]]): return winMsg
elif i > 2 and all(piece == value for value in [board[i-1][j+1], board[i-2][j+2], board[i-3][j+3]]): return winMsg
elif i in [1,2,3]:
if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j-1], board[i+1][j+1], board[i+2][j+2]]): return winMsg
elif j == 5 and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i+2][j-2]]): return winMsg
elif i == 4:
if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i-2][j+2]]): return winMsg
elif j == 5 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i+1][j+1]]): return winMsg
If you could provide an exact input when there is a win, maybe a better approach could be made. Hope this helps :)
A:
Not sure if this is faster but I've done this before in Numpy. Here's how I did it:
import numpy as np
class Connect4Game():
# Construct a set of binary masks to find connect 4s
win_mask = np.zeros((4*4, 7, 7), 'bool')
idx1 = np.array(range(4))
idx2 = np.array([3]*4)
for i in range(4):
win_mask[([i]*4, idx1+i, idx2)] = True
win_mask[([i+4]*4, idx2, idx1+i)] = True
win_mask[([i+8]*4, idx1+i, idx1+i)] = True
win_mask[([i+12]*4, 6-idx1-i, idx1+i)] = True
def __init__(self, data=None):
# Extend the board area by adding borders
self.ext_board = np.zeros((12, 13), 'int8')
# Make the board a view slice
self.board = self.ext_board.view()[3:9, 3:10]
if data is not None:
self.load_game(data)
def reset(self):
self.board [:, :] = 0
def load_game(self, data):
data = np.array(data)
assert(data.shape == (6, 7))
self.reset()
self.board[data == ':green_circle:'] = 1
self.board[data == ':red_circle:'] = 2
def check_for_win(self, last, piece):
row, col = last
selection = self.ext_board[row:row+7, col:col+7]
wins = np.nonzero(np.all(
((selection == piece) & self.win_mask)
== self.win_mask, axis=(1, 2)
))[0]
return wins.tolist()
# Demo
g = Connect4Game(example_board)
print(g.board)
last = (5, 1)
piece = 1
assert g.board[last] == piece
wins = g.check_for_win(last, piece)
print(wins)
Output:
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0]], dtype=int8)
[]
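For reference, the indices returned in wins identify which of the 16 window masks matched: masks 0-3 are the vertical lines, 4-7 the horizontal lines, 8-11 the down-right diagonals, and 12-15 the anti-diagonals, all within the 7x7 window centered on the last move.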
| Is this an effective way to determine if someone has won in connect 4? | I'm using the following function to determine if a winner has been crowned in connect four. Piece is whether they are green or red, last is the last played move (by piece), and name is the discord name of the person playing the game, as it is a file based connect four game. Board is a 2d array made of all empty and filled squares. Due to the game being based in python, is this an efficient way to check?
Examples:
Piece:
:green_circle:
Board:
[[':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:'], [':white_large_square:', ':green_circle:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:', ':white_large_square:']]
Last:
5,1
Discord View:
def checks(piece, last, name):
board = []
open_file = open(name, "r")
thing = open_file.readline()
for x in range(6):
value = open_file.readline()
board.append(value.strip("\n").split(","))
open_file.close()
cords = last.split(',')
i = int(cords[0]) # row/x
j = int(cords[1]) # column/y
# checks for 000_
if j > 2:
if board[i][j - 1] == piece and board[i][j - 2] == piece and board[i][
j - 3] == piece:
return piece + " won"
# checks for _000
if j < 4:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j + 3] == piece:
return piece + " won"
# checks for downs
if i < 3:
if board[i + 1][j] == piece and board[i + 2][j] == piece and board[
i + 3][j] == piece:
return piece + " won"
#check if you place in a 00_0
if not j in [0, 1, 6]:
if board[i][j + 1] == piece and board[i][j - 1] == piece and board[i][
j - 2] == piece:
return piece + " won"
#check for 0_00
if not j in [0, 5, 6]:
if board[i][j + 1] == piece and board[i][j + 2] == piece and board[i][
j - 1] == piece:
return piece + " won"
# check for top piece of a down-right diagonal
if i < 3 and j < 4:
if board[i + 1][j + 1] == piece and board[i + 2][j + 2] == piece and board[
i + 3][j + 3] == piece:
return piece + " won"
# check for bottom piece of a down-right diagonal
if i > 2 and j > 2:
if board[i - 1][j - 1] == piece and board[i - 2][j - 2] == piece and board[
i - 3][j - 3] == piece:
return piece + " won"
# check for top piece of down-left diagonal
if i < 3 and j > 2:
if board[i + 1][j - 1] == piece and board[i + 2][j - 2] == piece and board[
i + 3][j - 3] == piece:
return piece + " won"
# check for bottom piece of down-left diagonal
if i > 2 and j < 4:
if board[i - 1][j + 1] == piece and board[i - 2][j + 2] == piece and board[
i - 3][j + 3] == piece:
return piece + " won"
# check for 2nd top piece of down-right diagonal
if i in [1,2,3] and j in [1,2,3,4]:
if board[i - 1][j - 1] == piece and board[i +1 ][j + 1] == piece and board[i +2][j +2] == piece:
return piece + " won"
# check for 3rd piece of down-right diagonal
if i in [2,3,4] and j in [2,3,4,5]:
if board[i - 1][j - 1] == piece and board[i -2 ][j -2] == piece and board[i +1][j +1] == piece:
return piece + " won"
# check for 2nd piece of down-left diagonal
if i in [1,2,3] and j in [2,3,4,5]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i +2][j -2] == piece:
return piece + " won"
# check for 3rd piece in down-left diagonal
if i in [2,3,4] and j in [1,2,3,4]:
if board[i - 1][j + 1] == piece and board[i +1 ][j -1] == piece and board[i -2][j +2] == piece:
return piece + " won"
| [
"Keeping in mind your conditions are apt, your code could be enhanced in the following manners:\n\nReplacing conditions with all()\nAvoiding nested if conditions\nUsing elif in places of if\nUse min() to check inequality for smallest rather than checking both i and j\nCombine conditions to make it faster\n\nHere's just an enhanced version of your code:\ndef checks(piece, last, name):\n board = []\n open_file = open(name, \"r\")\n # thing = open_file.readline()\n for x in range(6):\n value = open_file.readline()\n board.append(value.strip(\"\\n\").split(\",\"))\n open_file.close()\n cords = last.split(',')\n i = int(cords[0]) # row/x\n j = int(cords[1]) # column/y\n winMsg = f\"{piece} win\" # create variable for ease\n if j > 2:\n if all(piece == value for value in [board[i][j-1], board[i][j-2], board[i][j-2]]) or all(piece == value for value in [board[i+1][j-1], board[i+2][j-2], board[i+3][j-3]]): return winMsg\n elif all(piece == value for value in [board[i][j+1], board[i][j+2], board[i][j+3]]): return winMsg\n elif all(piece == value for value in [board[i+1][j], board[i+2][j], board[i+3][j]]): return winMsg\n elif j not in [0, 1, 6] and all(piece == value for value in [board[i][j+1], board[i][j-1], board[i][j-2]]): return winMsg\n elif j not in [0, 5, 6] and all(piece == value for value in [board[i][j-1], board[i][j+1], board[i][j+2]]): return winMsg\n elif all(piece == value for value in [board[i+1][j+1], board[i][j+2], board[i][j-1]]): return winMsg\n elif min(i, j) > 2 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i-3][j-3]]): return winMsg\n elif i > 2 and all(piece == value for value in [board[i-1][j+1], board[i-2][j+2], board[i-3][j+3]]): return winMsg\n elif i in [1,2,3]:\n if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j-1], board[i+1][j+1], board[i+2][j+2]]): return winMsg\n elif j == 5 and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i+2][j-2]]): return winMsg\n elif i == 4:\n if j in [1,2,3,4] and all(piece == value for value in [board[i-1][j+1], board[i+1][j-1], board[i-2][j+2]]): return winMsg\n elif j == 5 and all(piece == value for value in [board[i-1][j-1], board[i-2][j-2], board[i+1][j+1]]): return winMsg\n\nIf you could provide an exact input when there is a win, maybe a better approach could be made. Hope this helps :)\n",
"Not sure if this is faster but I've done this before in Numpy. Here's how I did it:\nimport numpy as np\n\n\nclass Connect4Game():\n\n # Construct a set of binary masks to find connect 4s\n win_mask = np.zeros((4*4, 7, 7), 'bool')\n idx1 = np.array(range(4))\n idx2 = np.array([3]*4)\n for i in range(4):\n win_mask[([i]*4, idx1+i, idx2)] = True\n win_mask[([i+4]*4, idx2, idx1+i)] = True\n win_mask[([i+8]*4, idx1+i, idx1+i)] = True\n win_mask[([i+12]*4, 6-idx1-i, idx1+i)] = True\n\n def __init__(self, data=None):\n # Extend the board area by adding borders\n self.ext_board = np.zeros((12, 13), 'int8')\n # Make the board a view slice\n self.board = self.ext_board.view()[3:9, 3:10]\n if data is not None:\n self.load_game(data)\n\n def reset(self):\n self.board [:, :] = 0\n\n def load_game(self, data):\n data = np.array(data)\n assert(data.shape == (6, 7))\n self.reset()\n self.board[data == ':green_circle:'] = 1\n self.board[data == ':red_circle:'] = 2\n\n def check_for_win(self, last, piece):\n row, col = last\n selection = self.ext_board[row:row+7, col:col+7]\n wins = np.nonzero(np.all(\n ((selection == piece) & self.win_mask) \n == self.win_mask, axis=(1, 2)\n ))[0]\n return wins.tolist()\n\n\n# Demo\ng = Connect4Game(example_board)\nprint(g.board)\nlast = (5, 1)\npiece = 1\nassert g.board[last] == piece\nwins = g.check_for_win(last, piece)\nprint(wins)\n\nOutput:\narray([[0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 0, 0]], dtype=int8)\n\n[]\n\n"
] | [
0,
0
] | [] | [] | [
"connect_four",
"discord.py",
"python"
] | stackoverflow_0074664215_connect_four_discord.py_python.txt |
Q:
Pip not recognized to install program
I'm trying to install instaloader and running into problems.
I've downloaded the GitHub file, extracted it, and installed Python and pip, I think. Now when running
pip3 install instaloader
in the Windows command prompt, it responds:
'pip3' is not recognized as an internal or external command,
operable program or batch file.
I've tried installing pip3 by running pip install pip in both Python and the command prompt, and uninstalling and reinstalling Python. Do I need to add Python to the PATH?
A:
You can try to install pip by 'python get-pip.py' rather than 'pip install pip'.
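If Python itself is on PATH but its Scripts folder is not, invoking pip through the interpreter usually sidesteps the "not recognized" error. A sketch for the Windows command prompt, assuming the standard python.org installer (whose py launcher is registered by default):
py -m pip install instaloader
If that launcher is missing too, re-running the Python installer and ticking its "Add Python to PATH" option answers the PATH question directly.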
| Pip not recognized to install program | I'm trying to install instaloader and running into problems.
I've downloaded the GitHub file, extracted it, and installed Python and pip, I think. Now when running
pip3 install instaloader
in the Windows command prompt, it responds:
'pip3' is not recognized as an internal or external command,
operable program or batch file.
I've tried installing pip3 by running pip install pip in both Python and the command prompt, and uninstalling and reinstalling Python. Do I need to add Python to the PATH?
| [
"You can try to install pip by 'python get-pip.py' rather than 'pip install pip'.\n"
] | [
0
] | [] | [] | [
"instaloader",
"python"
] | stackoverflow_0074664616_instaloader_python.txt |
Q:
How to connect to mariadb5.5.52 using python3
My development environment
python3.8
mariadb 5.5.52
pymysql 1.0.2
django 4.1.3
When I try to migrate,
VS Code reports django.db.utils.NotSupportedError: MariaDB 10.3 or later is required (found 5.5.52).
A:
To connect to a MariaDB 5.5.52 database using Python 3, you can use the pymysql library. This library provides a Python interface for connecting to and working with a MariaDB database.
To use pymysql, you will need to first install it using pip:
pip install pymysql
Once you have installed pymysql, you can use it to connect to your MariaDB database by importing the pymysql module and creating a new Connection object, like this:
import pymysql
# Connect to the database
conn = pymysql.connect(
host="localhost",
user="username",
password="password",
db="database_name"
)
# Use the cursor() method to create a cursor object
cur = conn.cursor()
# Execute a SQL query
cur.execute("SELECT * FROM table_name")
# Fetch the results of the query
results = cur.fetchall()
# Print the results
print(results)
As for the error message you are seeing from Django, it sounds like you are using a version of Django that is not compatible with MariaDB 5.5.52. Django 4.1.3 requires MariaDB 10.3 or later, so you will need to upgrade your MariaDB installation to a more recent version in order to use Django 4.1.3. Alternatively, you can try using an older version of Django that is compatible with MariaDB 5.5.52.
| How to connect to mariadb5.5.52 using python3 | My development environment
python3.8
mariadb 5.5.52
pymysql 1.0.2
django 4.1.3
When I try to migrate,
VS Code reports django.db.utils.NotSupportedError: MariaDB 10.3 or later is required (found 5.5.52).
| [
"To connect to a MariaDB 5.5.52 database using Python 3, you can use the pymysql library. This library provides a Python interface for connecting to and working with a MariaDB database.\nTo use pymysql, you will need to first install it using pip:\npip install pymysql\n\nOnce you have installed pymysql, you can use it to connect to your MariaDB database by importing the pymysql module and creating a new Connection object, like this:\nimport pymysql\n\n# Connect to the database\nconn = pymysql.connect(\n host=\"localhost\",\n user=\"username\",\n password=\"password\",\n db=\"database_name\"\n)\n\n# Use the cursor() method to create a cursor object\ncur = conn.cursor()\n\n# Execute a SQL query\ncur.execute(\"SELECT * FROM table_name\")\n\n# Fetch the results of the query\nresults = cur.fetchall()\n\n# Print the results\nprint(results)\n\nAs for the error message you are seeing from Django, it sounds like you are using a version of Django that is not compatible with MariaDB 5.5.52. Django 4.1.3 requires MariaDB 10.3 or later, so you will need to upgrade your MariaDB installation to a more recent version in order to use Django 4.1.3. Alternatively, you can try using an older version of Django that is compatible with MariaDB 5.5.52.\n"
] | [
1
] | [] | [] | [
"django",
"mariadb",
"python"
] | stackoverflow_0074664116_django_mariadb_python.txt |
Q:
How to count how many times a word in a list appeared in another list
I have 2 lists and I want to see how many times each text in list 1 appears in list 2, but I don't really know of a way to combine them; the output isn't summed. I have tried the sum method, but it sums over all counted words rather than each word.
Code:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
for i in l2:
print(f'{l1.count(i)}: {i}')
Output:
0: hey
1: hi
1: hello
1: hello
What I want is something more like this:
0: hey
1: hi
2: hello
A:
I think a simple fix is to just flip the way you are looping through the lists:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
for i in l1:
print(f'{l2.count(i)}: {i}')
Output:
2: hello
1: hi
A:
You can use the in operator to check if each element in l1 is in l2. You can then use a Counter object to count the number of occurrences of each element in l1 that is also in l2.
Here is an example:
from collections import Counter
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
# Create a Counter object to count the occurrences of each element in l1 that is also in l2
counter = Counter()

# Loop over each element in l1 and check if it is in l2
for element in l1:
    if element in l2:
        # If the element is in l2, add its number of occurrences in l2
        counter[element] += l2.count(element)

# Print the count for each element
for element, count in counter.items():
    print(f'{count}: {element}')
This will print the following output:
2: hello
1: hi
A:
If you want to count how many times each word in l1 appears in l2, you can use a dictionary to keep track of the counts for each word. Here is one possible way to do this:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
# Create an empty dictionary
counts = {}
# Loop through each word in l1
for word in l1:
# Initialize the count for this word to 0
counts[word] = 0
# Loop through each word in l2
for word2 in l2:
# If the word from l1 appears in l2, increment the count
if word == word2:
counts[word] += 1
# Print the counts for each word
for word in l1:
print(f'{counts[word]}: {word}')
This code will print the following output:
2: hello
1: hi
This approach allows you to count the occurrences of each word in l1 in l2, and print the counts in the desired format. You can further customize the code to suit your specific needs. For example, you could sort the counts by their values or print the counts in a different order, depending on your requirements.
A:
Try this
from collections import Counter
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
c = Counter(l2)
for a in l1:
print(f"{c[a]}: {a}")
c.pop(a)
print(*["0: " + a for a in c.keys()], sep='\n')
OUTPUT
2: hello
1: hi
0: hey
A:
To count the number of times a word in a list appears in another list, you can use a for loop to iterate over the first list and use the count() method to count the number of times each word appears in the second list. Here's an example:
# define the two lists
list1 = ["apple", "banana", "cherry"]
list2 = ["apple", "grape", "cherry", "apple", "orange", "banana", "apple"]
# initialize a count variable
count = 0
# iterate over the first list
for word in list1:
# count the number of times the word appears in the second list
count += list2.count(word)
# print the final count
print(count)
This code will print 5, since "apple" appears three times in the second list, and "banana" and "cherry" each appear once.
| How to count how many times a word in a list appeared in another list | I have 2 lists and I want to see how many times each text in list 1 appears in list 2, but I don't really know of a way to combine them; the output isn't summed. I have tried the sum method, but it sums over all counted words rather than each word.
Code:
l1 = ['hello', 'hi']
l2 = ['hey', 'hi', 'hello', 'hello']
for i in l2:
print(f'{l1.count(i)}: {i}')
Output:
0: hey
1: hi
1: hello
1: hello
What I want is something more like this:
0: hey
1: hi
2: hello
| [
"I think a simple fix is to just flip the way you are looping through the lists:\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\nfor i in l1:\n print(f'{l2.count(i)}: {i}')\n\nOutput:\n2: hello\n1: hi\n\n",
"You can use the in operator to check if each element in l1 is in l2. You can then use a Counter object to count the number of occurrences of each element in l1 that is also in l2.\nHere is an example:\nfrom collections import Counter\n\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\n\n# Create a Counter object to count the occurrences of each element in l1 that is also in l2\ncounter = Counter()\n\n# Loop over each element in l1 and check if it is in l2\nfor element in l1:\n if element in l2:\n # If the element is in l2, increment the count for that element\n counter[element] += 1\n\n# Print the count for each element\nfor element, count in counter.items():\n print(f'{count}: {element}')\n\nThis will print the following output:\n1: hi\n2: hello\n\n",
"If you want to count how many times each word in l1 appears in l2, you can use a dictionary to keep track of the counts for each word. Here is one possible way to do this:\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\n\n# Create an empty dictionary\ncounts = {}\n\n# Loop through each word in l1\nfor word in l1:\n # Initialize the count for this word to 0\n counts[word] = 0\n # Loop through each word in l2\n for word2 in l2:\n # If the word from l1 appears in l2, increment the count\n if word == word2:\n counts[word] += 1\n\n# Print the counts for each word\nfor word in l1:\n print(f'{counts[word]}: {word}')\n\nThis code will print the following output:\n2: hello 1: hi\nThis approach allows you to count the occurrences of each word in l1 in l2, and print the counts in the desired format. You can further customize the code to suit your specific needs. For example, you could sort the counts by their values or print the counts in a different order, depending on your requirements.\n",
"Try this\n\nfrom collections import Counter\n\nl1 = ['hello', 'hi']\nl2 = ['hey', 'hi', 'hello', 'hello']\n\nc = Counter(l2)\n\n\nfor a in l1:\n print(f\"{c[a]}: {a}\")\n c.pop(a)\n\nprint(*[\"0: \" + a for a in c.keys()], sep='\\n')\n\n\nOUTPUT\n2: hello\n1: hi\n0: hey\n\n\n",
"To count the number of times a word in a list appears in another list, you can use a for loop to iterate over the first list and use the count() method to count the number of times each word appears in the second list. Here's an example:\n# define the two lists\nlist1 = [\"apple\", \"banana\", \"cherry\"]\nlist2 = [\"apple\", \"grape\", \"cherry\", \"apple\", \"orange\", \"banana\", \"apple\"]\n\n# initialize a count variable\ncount = 0\n\n# iterate over the first list\nfor word in list1:\n # count the number of times the word appears in the second list\n count += list2.count(word)\n\n# print the final count\nprint(count)\n\n\nThis code will print 4, since there are four words from the first list (\"apple\", \"banana\", \"cherry\") that appear in the second list.\n"
] | [
3,
1,
1,
0,
0
] | [] | [] | [
"count",
"for_loop",
"list",
"python",
"sum"
] | stackoverflow_0074664429_count_for_loop_list_python_sum.txt |
Q:
How to make an environment variable in Python
I need help making variables into environment variables in Python, so that I can see the variable by using the 'export' command in Linux. I tested the short script below and I can see the variable using the export command. But the problem is that the two commands below didn't work.
var1 = os.environ['LINE']
print(var1)
Can you guide me on how I can get this solved?
import os
import json
import sys
Name = "a1"
def func():
var = 'My name is ' + '' + Name
os.putenv('LINE', var)
os.system('bash')
func()
var1 = os.environ['LINE']
print(var1)
Output:
export | grep LINE
declare -x LINE="My name is a1"
A:
Try with
os.environ['LINE'] = var
instead of using putenv. Using putenv "bypasses" os.environ, that is, it doesn't update os.environ.
In fact, from the documentation for os.putenv:
Assignments to items in os.environ are automatically translated into corresponding calls to putenv(); however, calls to putenv() don’t update os.environ, so it is actually preferable to assign to items of os.environ. This also applies to getenv() and getenvb(), which respectively use os.environ and os.environb in their implementations."
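Applied to the script in the question, a minimal sketch of the fix:
import os

Name = "a1"

def func():
    var = 'My name is ' + Name
    os.environ['LINE'] = var  # updates os.environ and calls putenv() for you
    os.system('bash')         # the spawned shell still inherits LINE

func()

var1 = os.environ['LINE']
print(var1)                   # now prints 'My name is a1' as expected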
| How to make an environment variable in Python | I need help making variables into environment variables in Python, so that I can see the variable by using the 'export' command in Linux. I tested the short script below and I can see the variable using the export command. But the problem is that the two commands below didn't work.
var1 = os.environ['LINE']
print(var1)
Can you guide me on how I can get this solved?
import os
import json
import sys
Name = "a1"
def func():
var = 'My name is ' + '' + Name
os.putenv('LINE', var)
os.system('bash')
func()
var1 = os.environ['LINE']
print(var1)
Output:
export | grep LINE
declare -x LINE="My name is a1"
| [
"Try with\nos.environ['LINE'] = var\n\ninstead of using putenv. Using putenv \"bypasses\" os.environ, that is, it doesn't update os.environ.\nIn fact, from the documentation for os.putenv:\n\nAssignments to items in os.environ are automatically translated into corresponding calls to putenv(); however, calls to putenv() don’t update os.environ, so it is actually preferable to assign to items of os.environ. This also applies to getenv() and getenvb(), which respectively use os.environ and os.environb in their implementations.\"\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074664627_python_python_3.x.txt |
Q:
GitLab-CI: Run Python Script and Exit (VPS)
I am trying to do a CI script on GitLab where it connects to my VPS, Git Pulls and then runs the python script and exits, while leaving my python script running 24/7 (until the next pipeline run/commit).
How do I get it to make my python script run 24/7?
script:
- 'apt-get update -y && apt-get install openssh-client -y && apt-get install sshpass -y '
- sshpass -p "password" ssh -o StrictHostKeyChecking=no root@host "cd repo/ && git pull && python3 main.py"
This is my current script, however, when main.py is run, the pipeline is left in limbo since the script is eternally running.
How do I make it so the pipeline script runs the script and exits, leaving it on tmux or something like that?
A:
Check first if this is a tty allocation issue, as in here.
ssh -t -o ...
^^
Also consider calling just one script (which does the cd, git pull and python3)
That way you can test the script locally (on 'host'), and then call it remotely (through ssh)
From the OP Kevin A. in the comments:
my code goes through a loop that reruns the code every 45mins or so, so the script is constantly running. It's a web scraper constantly updating a cloud database.
The idea is to get GitLab CI to ignore waiting for the script to finish running, its just is to, stop previous script running, git pull and run the script again
Another approach would be to make the script scrap one-time (and exit), but call said script through a GitLab scheduled pipeline.
That way, no more freeze.
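If the scraper really must stay resident between pipelines, another option is to have the SSH command detach it on the VPS so the CI job can exit immediately. A sketch using tmux (the session name 'scraper' is an assumption):
cd repo/ && git pull
tmux kill-session -t scraper 2>/dev/null
tmux new-session -d -s scraper 'python3 main.py'
Running these three lines as the single remote ssh command replaces any previous scraper session and returns control to the pipeline at once.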
| GitLab-CI: Run Python Script and Exit (VPS) | I am trying to do a CI script on GitLab where it connects to my VPS, Git Pulls and then runs the python script and exits, while leaving my python script running 24/7 (until the next pipeline run/commit).
How do I get it to make my python script run 24/7?
script:
- 'apt-get update -y && apt-get install openssh-client -y && apt-get install sshpass -y '
- sshpass -p "password" ssh -o StrictHostKeyChecking=no root@host "cd repo/ && git pull && python3 main.py"
This is my current script, however, when main.py is run, the pipeline is left in limbo since the script is eternally running.
How do I make it so the pipeline script runs the script and exits, leaving it on tmux or something like that?
| [
"Check first if this is a tty allocation issue, as in here.\nssh -t -o ...\n ^^\n\nAlso consider calling just one script (which does the cd, git pull and python3)\nThat way you can test the script locally (on 'host'), and then call it remotely (through ssh)\n\nFrom the OP Kevin A. in the comments:\n\nmy code goes through a loop that reruns the code every 45mins or so, so the script is constantly running. It's a web scraper constantly updating a cloud database.\nThe idea is to get GitLab CI to ignore waiting for the script to finish running, its just is to, stop previous script running, git pull and run the script again\n\nAnother approach would be to make the script scrap one-time (and exit), but call said script through a GitLab scheduled pipeline.\nThat way, no more freeze.\n"
] | [
1
] | [] | [] | [
"gitlab",
"gitlab_ci",
"python",
"python_3.x"
] | stackoverflow_0074661343_gitlab_gitlab_ci_python_python_3.x.txt |
Q:
Sum of each row and each column in python
Hi, I have more than 20 txt files, each containing a matrix (9*7): 9 rows and 7 columns.
I want to find the sum of each of the rows and columns for each matrix.
The code I have used works for one matrix; how can I apply it to multiple matrices? Is there any way with Python?
import numpy as np

# Get the size m and n
m, n = 7, 9

# Function to calculate sum of each row
def row_sum(arr):
    total = 0
    print("\nFinding Sum of each row:\n")
    # finding the row sum
    for i in range(m):
        for j in range(n):
            # Add the element
            total += arr[i][j]
        # Print the row sum
        print("Sum of the row", i, "=", total)
        # Reset the sum
        total = 0

# Function to calculate sum of each column
def column_sum(arr):
    total = 0
    print("\nFinding Sum of each column:\n")
    # finding the column sum (loop over the n columns, summing the m rows)
    for i in range(n):
        for j in range(m):
            # Add the element
            total += arr[j][i]
        # Print the column sum
        print("Sum of the column", i, "=", total)
        # Reset the sum
        total = 0

# Driver code
if __name__ == "__main__":
    # the array shape must match the m x n fill loops below
    arr = np.zeros((m, n))

    # Get the matrix elements
    x = 1
    for i in range(m):
        for j in range(n):
            arr[i][j] = x
            x += 1

    # Get each row sum
    row_sum(arr)

    # Get each column sum
    column_sum(arr)
And I want the output of the sums to be a vector for each matrix, something like this:
[ 1,2,3,4,5,6,7,8,9,10,...,16]
A:
To calculate the row and column sums for multiple matrices, you can create a function that takes a list of matrices and calculates the row and column sums for each matrix in the list. Here is an example:
import numpy as np
# Get the size m and n
m, n = 7, 9
# Function to calculate sum of each row
def row_sum(arr):
    sums = []
    for i in range(m):
        total = 0
        for j in range(n):
            total += arr[i][j]
        sums.append(total)
    return sums

# Function to calculate sum of each column
# (loop over the n columns, summing the m entries in each)
def column_sum(arr):
    sums = []
    for i in range(n):
        total = 0
        for j in range(m):
            total += arr[j][i]
        sums.append(total)
    return sums

# Driver code
if __name__ == "__main__":
    arr = np.zeros((m, n))

    # Get the matrix elements
    x = 1
    for i in range(m):
        for j in range(n):
            arr[i][j] = x
            x += 1

    # Get each row sum
    row_sums = row_sum(arr)
    print("Row sums:", row_sums)

    # Get each column sum
    column_sums = column_sum(arr)
    print("Column sums:", column_sums)
To find the row and column sums for multiple matrices, you can loop through the matrices and calculate the row and column sums for each one, storing the results in a list (a file-reading loop is sketched after the helper functions below). For example:
# Get the size m and n
m, n = 7, 9
# Function to calculate sum of each row
def row_sum(arr):
    sums = []
    for i in range(m):
        total = 0
        for j in range(n):
            total += arr[i][j]
        sums.append(total)
    return sums

# Function to calculate sum of each column
def column_sum(arr):
    sums = []
    for i in range(n):
        total = 0
        for j in range(m):
            total += arr[j][i]
        sums.append(total)
    return sums
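As a concrete sketch of that loop (the glob pattern, file naming, and whitespace-delimited txt files are assumptions on my part):
import glob
import numpy as np

all_vectors = []
for path in sorted(glob.glob("matrix_*.txt")):
    arr = np.loadtxt(path)          # one matrix per txt file
    row_sums = arr.sum(axis=1)      # one sum per row
    col_sums = arr.sum(axis=0)      # one sum per column
    # one flat vector per matrix: row sums followed by column sums
    all_vectors.append(np.concatenate([row_sums, col_sums]))
    print(path, all_vectors[-1])
Using np.sum over an axis also replaces the explicit double loops entirely, which addresses the efficiency side of the question.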
| Sum of each row and each column in python | Hi, I have more than 20 txt files, each containing a matrix (9*7): 9 rows and 7 columns.
I want to find the sum of each of the rows and columns for each matrix.
The code I have used works for one matrix; how can I apply it to multiple matrices? Is there any way with Python?
import numpy as np

# Get the size m and n
m, n = 7, 9

# Function to calculate sum of each row
def row_sum(arr):
    total = 0
    print("\nFinding Sum of each row:\n")
    # finding the row sum
    for i in range(m):
        for j in range(n):
            # Add the element
            total += arr[i][j]
        # Print the row sum
        print("Sum of the row", i, "=", total)
        # Reset the sum
        total = 0

# Function to calculate sum of each column
def column_sum(arr):
    total = 0
    print("\nFinding Sum of each column:\n")
    # finding the column sum (loop over the n columns, summing the m rows)
    for i in range(n):
        for j in range(m):
            # Add the element
            total += arr[j][i]
        # Print the column sum
        print("Sum of the column", i, "=", total)
        # Reset the sum
        total = 0

# Driver code
if __name__ == "__main__":
    # the array shape must match the m x n fill loops below
    arr = np.zeros((m, n))

    # Get the matrix elements
    x = 1
    for i in range(m):
        for j in range(n):
            arr[i][j] = x
            x += 1

    # Get each row sum
    row_sum(arr)

    # Get each column sum
    column_sum(arr)
And I want the output of the sums to be a vector for each matrix, something like this:
[ 1,2,3,4,5,6,7,8,9,10,...,16]
| [
"To calculate the row and column sums for multiple matrices, you can create a function that takes a list of matrices and calculates the row and column sums for each matrix in the list. Here is an example:\nimport numpy as np\n\n# Get the size m and n\nm, n = 7, 9\n\n# Function to calculate sum of each row\ndef row_sum(arr):\n sums = []\n for i in range(m):\n row_sum = 0\n for j in range(n):\n row_sum += arr[i][j]\n sums.append(row_sum)\n return sums\n\n# Function to calculate sum of each column\ndef column_sum(arr):\n sums = []\n for i in range(m):\n column_sum = 0\n for j in range(n):\n column_sum += arr[j][i]\n sums.append(column_sum)\n return sums\n\n# Driver code\nif __name__ == \"__main__\":\n arr = np.zeros((4, 4))\n\n # Get the matrix elements\n x = 1\n for i in range(m):\n for j in range(n):\n arr[i][j] = x\n x += 1\n\n # Get each row sum\n row_sums = row_sum(arr)\n print(\"Row sums:\", row_sums)\n\n # Get each column sum\n column_sums = column_sum(arr)\n print(\"Column sums:\", column_sums)\n\nTo find the row and column sums for multiple matrices, you can loop through the matrices and calculate the row and column sums for each one, storing the results in a list. For example:\n# Get the size m and n\nm, n = 7, 9\n\n# Function to calculate sum of each row\ndef row_sum(arr):\n sums = []\n for i in range(m):\n row_sum = 0\n for j in range(n):\n row_sum += arr[i][j]\n sums.append(row_sum)\n return sums\n\n# Function to calculate sum of each column\ndef column_sum(arr):\n sums = []\n for i in range(m):\n column_sum = 0\n for j in range(n):\n column_sum += arr[j][i]\n sums.append(column_sum)\n return sums\n\n\n\n\n"
] | [
0
] | [] | [] | [
"matrix",
"python"
] | stackoverflow_0074664693_matrix_python.txt |
Q:
update column to two based on condition
I am trying to modify ONE column: I want to set some rows to true and convert the others to false.
update products set on_sale=False where status=1 and seller=test;
update products set on_sale=true Where price > 100 and status=1 and seller=test;
the above works, but I believe it can be done in 1 query, I.e something like this
\\ python syntax for the if condition
update prodcuts set on_sale=(True if price > 100 else False) WHERE status=1 and seller=test
A:
You could do a single update with the help of a CASE expression:
UPDATE products
SET on_sale = CASE WHEN price > 100 THEN True ELSE False END
WHERE status = 1 AND seller = test;
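Since on_sale is a boolean, the comparison itself can also be assigned directly (SET on_sale = (price > 100)). If you are driving this from Python, here is a sketch with psycopg2 — the connection string and the seller value are assumptions:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser")  # hypothetical connection string
with conn, conn.cursor() as cur:
    # One parameterized UPDATE instead of two
    cur.execute(
        "UPDATE products SET on_sale = (price > 100) "
        "WHERE status = %s AND seller = %s",
        (1, 'test'),
    )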
| update column to two based on condition | I am trying to modify ONE column: I want to set some rows to true and convert the others to false.
update products set on_sale=False where status=1 and seller=test;
update products set on_sale=true Where price > 100 and status=1 and seller=test;
the above works, but I believe it can be done in 1 query, I.e something like this
\\ python syntax for the if condition
update prodcuts set on_sale=(True if price > 100 else False) WHERE status=1 and seller=test
| [
"You could do a single update with the help of a CASE expression:\nUPDATE products\nSET on_sale = CASE WHEN price > 100 THEN True ELSE False END\nWHERE status = 1 AND seller = test;\n\n"
] | [
1
] | [] | [] | [
"postgresql",
"python"
] | stackoverflow_0074664787_postgresql_python.txt |
Q:
Python Heatmap with calculated fields
Looking to create a heatmap from a dataframe. Index is each event of car crashes. Columns are Year, Month (1 - 12), Day of the Week (1 - 7), Hour of Day (0 - 23), Fatal (1) / non-Fatal (2), etc.
I am trying to create a heatmap with the x-axis being Hour of Day and the y-axis being Day of the Week, with a calculated field for each "cell" corresponding to the fatality rate of each hour and day.
Sunday
Saturday
Friday
Thursday
Wednesday
Tuesday
Monday
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 etc.
dbh = df[df.Fatal == 1].groupby('Hour').Fatal.count()
sbh = df[df.Fatal == 2].groupby('Hour').Fatal.count()
final_dbh = (dbh /(sbh+ dbh)* 100)
Hour
0.0 3.429764
1.0 3.696422
2.0 3.559404
3.0 4.093886
4.0 3.464674
5.0 3.276747
6.0 1.827378
7.0 1.021872
8.0 0.928400
9.0 1.201049
10.0 1.234164
11.0 1.477833
12.0 1.437418
13.0 1.705571
14.0 1.595436
15.0 1.219512
16.0 1.256826
17.0 1.514321
18.0 1.375315
19.0 1.384932
20.0 2.331501
21.0 2.066446
22.0 1.997928
23.0 3.506366
Name: Fatal, dtype: float64
dbd = df[df.Fatal == 1].groupby('Weekday').Fatal.count()
sbd = df[df.Fatal == 2].groupby('Weekday').Fatal.count()
final_dbd = (dbd /(sbd + dbd)* 100)
Weekday
7 2.070770
4 1.694125
6 1.602799
5 1.579378
3 1.524816
1 1.473684
2 1.282576
Name: Fatal, dtype: float64
db = df[df['Fatal'] == 1]
df_test = db.groupby(["Month" , "Weekday"]).Fatal.count()
Month Weekday
1.0 1 34
2 48
3 43
4 75
5 36
I think I've sorted out how to get the numbers I need, but how do I assign them to the heatmap I'm looking for?
A:
First, use your data to make a 2-D matrix with rows representing the days (sunday, ...) and the columns representing the numbers (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18).
Once you have this 2-D matrix use the below code to plot the heatmap
import numpy as np
import matplotlib.pyplot as plt
# create a 10x10 random matrix
data = np.random.random((10, 10)) # REPLACE WITH YOUR DATA
print(data.shape)
fig, ax = plt.subplots()
im = ax.imshow(data)
# show image
plt.show()
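If the goal is specifically the Weekday x Hour fatality-rate matrix described in the question, pandas can build it in one step; here is a sketch assuming df has the Hour, Weekday and Fatal columns shown above (the two-argument set_xticks needs matplotlib >= 3.5):

import pandas as pd
import matplotlib.pyplot as plt

# Mean of a boolean column = fraction of fatal crashes per (Weekday, Hour) cell
rate = (df.assign(is_fatal=df['Fatal'].eq(1))
          .pivot_table(index='Weekday', columns='Hour',
                       values='is_fatal', aggfunc='mean') * 100)

fig, ax = plt.subplots()
im = ax.imshow(rate.to_numpy())
ax.set_xticks(range(len(rate.columns)), rate.columns)  # hour labels
ax.set_yticks(range(len(rate.index)), rate.index)      # weekday labels
ax.set_xlabel('Hour of Day')
ax.set_ylabel('Day of Week')
fig.colorbar(im, ax=ax, label='Fatality rate (%)')
plt.show()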
| Python Heatmap with calculated fields | Looking to create a heatmap from a dataframe. Index is each event of car crashes. Columns are Year, Month (1 - 12), Day of the Week (1 - 7), Hour of Day (0 - 23), Fatal (1) / non-Fatal (2), etc.
I am trying to create a heatmap with the x-axis being Hour of Day and the y-axis being Day of the Week, with a calculated field for each "cell" corresponding to the fatality rate of each hour and day.
Sunday
Saturday
Friday
Thursday
Wednesday
Tuesday
Monday
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 etc.
dbh = df[df.Fatal == 1].groupby('Hour').Fatal.count()
sbh = df[df.Fatal == 2].groupby('Hour').Fatal.count()
final_dbh = (dbh /(sbh+ dbh)* 100)
Hour
0.0 3.429764
1.0 3.696422
2.0 3.559404
3.0 4.093886
4.0 3.464674
5.0 3.276747
6.0 1.827378
7.0 1.021872
8.0 0.928400
9.0 1.201049
10.0 1.234164
11.0 1.477833
12.0 1.437418
13.0 1.705571
14.0 1.595436
15.0 1.219512
16.0 1.256826
17.0 1.514321
18.0 1.375315
19.0 1.384932
20.0 2.331501
21.0 2.066446
22.0 1.997928
23.0 3.506366
Name: Fatal, dtype: float64
dbd = df[df.Fatal == 1].groupby('Weekday').Fatal.count()
sbd = df[df.Fatal == 2].groupby('Weekday').Fatal.count()
final_dbd = (dbd /(sbd + dbd)* 100)
Weekday
7 2.070770
4 1.694125
6 1.602799
5 1.579378
3 1.524816
1 1.473684
2 1.282576
Name: Fatal, dtype: float64
db = df[df['Fatal'] == 1]
df_test = db.groupby(["Month" , "Weekday"]).Fatal.count()
Month Weekday
1.0 1 34
2 48
3 43
4 75
5 36
I think I've sorted out how to get the numbers I need, but how do I assign them to the heatmap I'm looking for?
| [
"First, use your data to make a 2-D matrix with rows representing the days (sunday, ...) and the columns representing the numbers (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18).\nOnce you have this 2-D matrix use the below code to plot the heatmap\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# create a 10x10 random matrix\ndata = np.random.random((10, 10)) # REPLACE WITH YOUR DATA\nprint(data.shape)\n\nfig, ax = plt.subplots()\nim = ax.imshow(data)\n\n# show image\nplt.show()\n\n"
] | [
0
] | [] | [] | [
"group_by",
"heatmap",
"pandas",
"python"
] | stackoverflow_0074664195_group_by_heatmap_pandas_python.txt |
Q:
Why is exponentiation applied right to left?
I am reading an Intro to Python textbook and came across this line:
Operators on the same row have equal precedence and are applied left to right, except for exponentiation, which is applied right to left.
I understand most of this, but I do not understand why they say exponentiation is applied right to left. They do not provide any examples either. Also, am I allowed to ask general questions like this, or are only problem solving questions preferred?
A:
The ** operator follows normal mathematical conventions; it is right-associative:
In the usual computer science jargon, exponentiation in mathematics is right-associative, which means that x^y^z should be read as x^(y^z), not (x^y)^z. In expositions of the BODMAS rules that are careful enough to address this question, the rule is to evaluate the top exponent first.
and from Wikipedia on the Order of Operations:
If exponentiation is indicated by stacked symbols, the usual rule is to work from the top down, because exponentiation is right-associative in mathematics.
So 2 ** 3 ** 4 is calculated as 2 ** (3 ** 4) (== 2417851639229258349412352) not (2 ** 3) ** 4 (== 4096).
This is pretty universal across programming languages; it is called right-associativity, although there are exceptions, with Excel and MATLAB being the most notable.
A:
from http://docs.python.org/reference/expressions.html
Operators in the same box group left to right (except for comparisons, including tests, which all have the same precedence and chain from left to right — see section Comparisons — and exponentiation, which groups from right to left).
>>> 2 ** 2 ** 2
16
>>> 2 ** 2 ** 2 ** 2
65536
>>> (2 ** 2 ** 2) ** 2
256
For the middle case 2 ** 2 ** 2 ** 2, these are the intermediate steps -
broken down to 2 ** (2 ** (2 ** 2))
2 ** (2 ** (4)) # progressing right to left
2 ** (16) # this is 2 to the power 16
which finally evals to 65536
Hope that helps!
A:
This explanation seems quite clear to me. Let me show you an example that might enlighten this :
print 2 ** 2 ** 3 # prints 256
If you would read this from left to right, you would first do 2 ** 2, which would result in 4, and then 4 ** 3, which would give us 64.
It seems we have a wrong answer. :)
However, from right to left...
You would first do 2 ** 3, which would be 8, and then, 2 ** 8, giving us 256 !
I hope I was able to enlighten this point for you. :)
EDIT : Martijn Pieters answered more accurately to your question, sorry. I forgot to say it was mathematical conventions.
A:
Power operator, exponentiation, is handled differently across applications and languages.
If it has LEFT associativity then 2^3^4 = (2^3)^4 = 4096.
If it has RIGHT associativity then 2^3^4 = 2^(3^4) = 2417851639229260000000000.
In Excel, Matlab, Apple Numbers and more others exponentiation has LEFT associativity.
In Python, Ruby, Google Sheets, ... - RIGHT associativity.
Here is a vast list of how different languages and apps handle exponentiation: Exponentiation Associativity and Standard Math Notation
| Why is exponentiation applied right to left? | I am reading an Intro to Python textbook and came across this line:
Operators on the same row have equal precedence and are applied left to right, except for exponentiation, which is applied right to left.
I understand most of this, but I do not understand why they say exponentiation is applied right to left. They do not provide any examples either. Also, am I allowed to ask general questions like this, or are only problem solving questions preferred?
| [
"The ** operator follows normal mathematical conventions; it is right-associative:\n\nIn the usual computer science jargon, exponentiation in mathematics is right-associative, which means that xyz should be read as x(yz), not (xy)z. In expositions of the BODMAS rules that are careful enough to address this question, the rule is to evaluate the top exponent first.\n\nand from Wikipedia on the Order of Operations:\n\nIf exponentiation is indicated by stacked symbols, the usual rule is to work from the top down, because exponention is right-associative in mathematics.\n\nSo 2 ** 3 ** 4 is calculated as 2 ** (3 ** 4) (== 2417851639229258349412352) not (2 ** 3) ** 4 (== 4096).\nThis is pretty universal across programming languages; it is called right-associativity, although there are exceptions, with Excel and MATLAB being the most notable.\n",
"from http://docs.python.org/reference/expressions.html\nOperators in the same box group left to right (except for comparisons, including tests, which all have the same precedence and chain from left to right — see section Comparisons — and exponentiation, which groups from right to left).\n>>> 2 ** 2 ** 2\n16\n>>> 2 ** 2 ** 2 ** 2\n65536\n>>> (2 ** 2 ** 2) ** 2\n256\n\nFor the middle case 2 ** 2 ** 2 ** 2, this are the intermediate steps - \n\nbroken down to 2 ** (2 ** (2 ** 2))\n2 ** (2 ** (4)) # progressing right to left\n2 ** (16) # this is 2 to the power 16\nwhich finally evals to 65536\nHope that helps!\n\n",
"This explanation seems quite clear to me. Let me show you an example that might enlighten this :\nprint 2 ** 2 ** 3 # prints 256\nIf you would read this from left to right, you would first do 2 ** 2, which would result in 4, and then 4 ** 3, which would give us 64.\nIt seems we have a wrong answer. :)\nHowever, from right to left...\nYou would first do 2 ** 3, which would be 8, and then, 2 ** 8, giving us 256 !\nI hope I was able to enlighten this point for you. :)\nEDIT : Martijn Pieters answered more accurately to your question, sorry. I forgot to say it was mathematical conventions.\n",
"Power operator, exponentiation, is handled differently across applications and languages.\nIf it has LEFT associativity then 2^3^4 = (2^3)^4 = 4096.\nIf it has RIGHT associativity then 2^3^4 = 2^(3^4) = 2417851639229260000000000.\nIn Excel, Matlab, Apple Numbers and more others exponentiation has LEFT associativity.\nIn Python, Ruby, Google Sheets, ... - RIGHT associativity.\nHere is a vast list of how different languages and apps handle exponentiation: Exponentiation Associativity and Standard Math Notation\n"
] | [
23,
2,
0,
0
] | [] | [] | [
"exponentiation",
"operators",
"python",
"python_3.x"
] | stackoverflow_0047429513_exponentiation_operators_python_python_3.x.txt |
Q:
Trying to Combine Two Scatter Plots and Two Line Graphs with Matplotlib
I'm trying to create a graph that lists the high and low temperature per city on a specific day, but it seems like the y axes are just overlapping instead of plotting the point along it.
Here is what I have:
fig, al = plt.subplots()
al.scatter(al_cities, al_min)
al.scatter(al_cities, al_max, c='red')
al.plot(al_cities, al_min, c='lightblue')
al.plot(al_cities, al_max, c='orange')
al.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)
al.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)
al.set_xlabel('City', fontsize=14)
al.set_ylabel('Temperature', fontsize=14)
And this is what the graph looks like:
y-axis jumps around between numbers and doesn't count upwards
A:
The problem you are seeing is because matplotlib classifies your y-axis values as categorical instead of numeric continuous values.
This might be because your list of al_min and al_max contain strings ['1','2','3'] instead of integers [1,2,3].
All you have to do is convert the strings in the list to integers. You can do it like this:
al_min = list(map(int, al_min))
al_max = list(map(int, al_max))
Here is an example using your code:
import matplotlib.pyplot as plt
# Create the data for the example
al_cities = ['Birmingham', 'Huntsville', 'Mobile', 'Montgomery']
al_min = ['36','34', '39', '38']
al_max = ['52', '50', '57', '55']
# Convert strings to integers
al_min = list(map(int, al_min))
al_max = list(map(int, al_max))
# Here is your code (unchanged)
fig, al = plt.subplots()
al.scatter(al_cities, al_min)
al.scatter(al_cities, al_max, c='red')
al.plot(al_cities, al_min, c='lightblue')
al.plot(al_cities, al_max, c='orange')
al.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)
al.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)
al.set_xlabel('City', fontsize=14)
al.set_ylabel('Temperature', fontsize=14)
OUTPUT:
A:
I could not quite understand the problem, but I would like to suggest that you could use the normal plt.plot() rather than subplots if you just have one graph to show. (You could use error bars to show the max and min temperature, as sketched below.)
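For what that error-bar suggestion could look like, here is a sketch using plt.errorbar — the city and temperature lists are assumptions mirroring the question's data:

import matplotlib.pyplot as plt
import numpy as np

al_cities = ['Birmingham', 'Huntsville', 'Mobile', 'Montgomery']  # hypothetical
al_min = np.array([36, 34, 39, 38])
al_max = np.array([52, 50, 57, 55])

mid = (al_max + al_min) / 2          # midpoint temperature per city
half_range = (al_max - al_min) / 2   # error bar spans min..max
plt.errorbar(al_cities, mid, yerr=half_range, fmt='o', capsize=4)
plt.xlabel('City')
plt.ylabel('Temperature')
plt.show()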
| Trying to Combine Two Scatter Plots and Two Line Graphs with Matplotlib | I'm trying to create a graph that lists the high and low temperature per city on a specific day, but it seems like the y axes are just overlapping instead of plotting the point along it.
Here is what I have:
fig, al = plt.subplots()
al.scatter(al_cities, al_min)
al.scatter(al_cities, al_max, c='red')
al.plot(al_cities, al_min, c='lightblue')
al.plot(al_cities, al_max, c='orange')
al.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)
al.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)
al.set_xlabel('City', fontsize=14)
al.set_ylabel('Temperature', fontsize=14)
And this is what the graph looks like:
y-axis jumps around between numbers and doesn't count upwards
| [
"The problem you are seeing is because matplotlib classifies your y-axis values as categorical instead of numeric continuous values.\nThis might be because your list of al_min and al_max contain strings ['1','2','3'] instead of integers [1,2,3].\nAll you have to do is convert the strings in the list to integers. You can do it like this:\nal_min = list(map(int, al_min))\nal_max = list(map(int, al_max))\n\n\n\nHere is an example using your code:\nimport matplotlib.pyplot as plt\n\n# Create the data for the example\nal_cities = ['Birmingham', 'Huntsville', 'Mobile', 'Montgomery']\nal_min = ['36','34', '39', '38']\nal_max = ['52', '50', '57', '55']\n\n# Convert strings to integers\nal_min = list(map(int, al_min))\nal_max = list(map(int, al_max))\n\n# Here is your code (unchanged)\nfig, al = plt.subplots()\nal.scatter(al_cities, al_min)\nal.scatter(al_cities, al_max, c='red')\nal.plot(al_cities, al_min, c='lightblue')\nal.plot(al_cities, al_max, c='orange')\nal.fill_between(al_cities, al_max, al_min, facecolor='gray', alpha=.3)\nal.set_title('Highs and Lows in Alabama on January 10, 2016', fontsize=18)\nal.set_xlabel('City', fontsize=14)\nal.set_ylabel('Temperature', fontsize=14)\n\n\n\nOUTPUT:\n\n\n\n",
"I could not quite understand the problem, But I would like to suggest that you could use the normal plt.plot() rather than subplots if you just have one graph to show. (You could use errorbars to show max and min temperature)\n"
] | [
1,
0
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0074664603_matplotlib_python.txt |
Q:
Look up values from one df to another df based on a specific column
I am attempting to populate values from one DataFrame to another DataFrame based on a common column present in both DataFrames.
The code I wrote for this operation is as follows:
for i in df1.zipcodes:
for j in df2.zipcodes.unique():
if i == j:
#print("this is i:",i, "this is j:",j)
df1['rent'] = df2['rent']
The Dataframes (df1) in question looks as such with shape (131942, 2):
Providing 1st ten rows of df1:
zipcodes districts
018906 01
018907 01
018910 01
018915 01
018916 01
018925 01
018926 01
018927 01
018928 01
018929 01
018930 01
Additionally, there are no duplicates for the Zipcodes column, but the district column has 28 unique values. No Nan values are present.
The other DataFrame(df2) looks as such with shape (77996, 4)
Providing 1st ten rows of df2
street zipcodes district rent
E ROAD 545669 15 3600
E ROAD 545669 15 6200
E ROAD 545669 15 5500
E ROAD 545669 15 3200
H DRIVE 459108 19 3050
H DRIVE 459108 19 2000
A VIEW 098619 03 4200
A VIEW 098619 03 4500
J ROAD 018947 10 19500
O DRIVE 100088 04 9600
Note: The Zipcodes in df2 can repeat.
Now, I want to populate a column in df1 called rent, if the zipcodes in df1 matches the zipcode of df2. If the zipcodes match but there are multiple entries with the same zipcode in df2 then I want to populate the average as the rent. If there is only one entry for the zipcode then I want to populate the rent corresponding to that zipcode.
Any help on the above will be greatly appreciated.
A:
Use a merge with the groupby.mean of df2:
out = df1.merge(df2.groupby('zipcodes', as_index=False)['rent'].mean(),
on='zipcodes', how='left')
A:
You can divide that into 2 phases:
1st phase: Aggregate the df2 to calculate the average rent by zip code. If the zip code has only one rent then the average value will be equal to that exact rent value so it still matches what you need.
df2 = df2.groupby('zipcodes').mean()['rent'].reset_index()
2nd phase: Merge to df1 using zipcodes
df1 = df1.merge(df2, on='zipcodes', how='left')
You can change how parameter to left or inner depending on what you need. Left join will keep all the rows from df1 and fill NA if can't find any match from df2. Inner join will only keep rows that can be found in both df1 and df2.
Hope this helps.
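Putting both phases together on a slice of the question's data (hypothetical values; zipcode 545669 repeats, 018947 does not):

import pandas as pd

df1 = pd.DataFrame({'zipcodes': ['545669', '018947', '000000'],
                    'districts': ['15', '10', '01']})
df2 = pd.DataFrame({'zipcodes': ['545669', '545669', '018947'],
                    'rent': [3600, 6200, 19500]})

avg_rent = df2.groupby('zipcodes', as_index=False)['rent'].mean()
out = df1.merge(avg_rent, on='zipcodes', how='left')
print(out)
# 545669 -> 4900.0 (mean of the duplicates), 018947 -> 19500.0, 000000 -> NaN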
| Look up values from one df to another df based on a specific column | I am attempting to populate values from one DataFrame to another DataFrame based on a common column present in both DataFrames.
The code I wrote for this operation is as follows:
for i in df1.zipcodes:
for j in df2.zipcodes.unique():
if i == j:
#print("this is i:",i, "this is j:",j)
df1['rent'] = df2['rent']
The Dataframes (df1) in question looks as such with shape (131942, 2):
Providing 1st ten rows of df1:
zipcodes districts
018906 01
018907 01
018910 01
018915 01
018916 01
018925 01
018926 01
018927 01
018928 01
018929 01
018930 01
Additionally, there are no duplicates for the Zipcodes column, but the district column has 28 unique values. No Nan values are present.
The other DataFrame(df2) looks as such with shape (77996, 4)
Providing 1st ten rows of df2
street zipcodes district rent
E ROAD 545669 15 3600
E ROAD 545669 15 6200
E ROAD 545669 15 5500
E ROAD 545669 15 3200
H DRIVE 459108 19 3050
H DRIVE 459108 19 2000
A VIEW 098619 03 4200
A VIEW 098619 03 4500
J ROAD 018947 10 19500
O DRIVE 100088 04 9600
Note: The Zipcodes in df2 can repeat.
Now, I want to populate a column in df1 called rent, if the zipcodes in df1 matches the zipcode of df2. If the zipcodes match but there are multiple entries with the same zipcode in df2 then I want to populate the average as the rent. If there is only one entry for the zipcode then I want to populate the rent corresponding to that zipcode.
Any help on the above will be greatly appreciated.
| [
"Use a merge with the groupby.mean of df2:\nout = df1.merge(df2.groupby('zipcodes', as_index=False)['rent'].mean(),\n on='zipcodes', how='left')\n\n",
"You can divide that into 2 phases:\n\n1st phase: Aggregate the df2 to calculate the average rent by zip code. If the zip code has only one rent then the average value will be equal to that exact rent value so it still matches what you need.\n df2 = df2.groupby('zipcodes').mean()['rent'].reset_index()\n\n\n2nd phase: Merge to df1 using zipcodes\n df1 = df1.merge(df2, on='zipcodes', how='left') \n\n\n\nYou can change how parameter to left or inner depending on what you need. Left join will keep all the rows from df1 and fill NA if can't find any match from df2. Inner join will only keep rows that can be found in both df1 and df2.\nHope this help.\n"
] | [
1,
1
] | [] | [] | [
"average",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074664746_average_dataframe_pandas_python.txt |
Q:
Get 5 minutes Interval by creating columns as start and end time from date & time stamp column in pandas
data = {'col_ts': ['2022-11-02T08:26:40', '2022-11-02T08:25:10', '2022-11-02T08:26:00', '2022-11-02T08:30:20',
'2022-11-02T08:33:30', '2022-11-02T08:36:40', '2022-11-02T08:26:20', '2022-11-02T08:50:10',
'2022-11-02T08:30:40', '2022-11-02T08:39:40']}
df = pd.DataFrame(data, columns = ['col_ts'])
df
I have a data set from which I would like to create two columns, start_time and end_time, as shown below, with 5-minute intervals. I'd appreciate your help on this.
In SQL, I have used the below code to produce the result.
time_slice(col_ts, 5, 'MINUTE', 'START') as START_INTERVAL,
time_slice(col_ts, 5, 'MINUTE', 'END') as END_INTERVAL,
In Pandas, I have used the below code. Unfortunately, that will give me a row-level interval.
df.resample("5T").mean()
A:
Here is one way to do it using Pandas to_datetime and dt.accessor:
df["col_ts"] = pd.to_datetime(df["col_ts"])
df["start_interval"] = df["col_ts"].dt.floor("5T")
df["end_interval"] = df["col_ts"].dt.ceil("5T")
Then:
col_ts start_interval end_interval
0 2022-11-02 08:26:40 2022-11-02 08:25:00 2022-11-02 08:30:00
1 2022-11-02 08:25:10 2022-11-02 08:25:00 2022-11-02 08:30:00
2 2022-11-02 08:26:00 2022-11-02 08:25:00 2022-11-02 08:30:00
3 2022-11-02 08:30:20 2022-11-02 08:30:00 2022-11-02 08:35:00
4 2022-11-02 08:33:30 2022-11-02 08:30:00 2022-11-02 08:35:00
5 2022-11-02 08:36:40 2022-11-02 08:35:00 2022-11-02 08:40:00
6 2022-11-02 08:26:20 2022-11-02 08:25:00 2022-11-02 08:30:00
7 2022-11-02 08:50:10 2022-11-02 08:50:00 2022-11-02 08:55:00
8 2022-11-02 08:30:40 2022-11-02 08:30:00 2022-11-02 08:35:00
9 2022-11-02 08:39:40 2022-11-02 08:35:00 2022-11-02 08:40:00
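One version caveat: on recent pandas releases the "T" offset alias is deprecated in favor of "min", so depending on your pandas version the same idea might be written as:

# Equivalent on newer pandas versions, where the "T" alias is deprecated
df["start_interval"] = df["col_ts"].dt.floor("5min")
df["end_interval"] = df["col_ts"].dt.ceil("5min")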
| Get 5 minutes Interval by creating columns as start and end time from date & time stamp column in pandas | data = {'col_ts': ['2022-11-02T08:26:40', '2022-11-02T08:25:10', '2022-11-02T08:26:00', '2022-11-02T08:30:20',
'2022-11-02T08:33:30', '2022-11-02T08:36:40', '2022-11-02T08:26:20', '2022-11-02T08:50:10',
'2022-11-02T08:30:40', '2022-11-02T08:39:40']}
df = pd.DataFrame(data, columns = ['col_ts'])
df
I have a data set from which I would like to create two columns, start_time and end_time, as shown below, with 5-minute intervals. I'd appreciate your help on this.
In SQL, I have used the below code to produce the result.
time_slice(col_ts, 5, 'MINUTE', 'START') as START_INTERVAL,
time_slice(col_ts, 5, 'MINUTE', 'END') as END_INTERVAL,
In Pandas, I have used the below code. Unfortunately, that will give me a row-level interval.
df.resample("5T").mean()
| [
"Here is one way to do it using Pandas to_datetime and dt.accessor:\ndf[\"col_ts\"] = pd.to_datetime(df[\"col_ts\"])\ndf[\"start_interval\"] = df[\"col_ts\"].dt.floor(\"5T\")\ndf[\"end_interval\"] = df[\"col_ts\"].dt.ceil(\"5T\")\n\nThen:\n col_ts start_interval end_interval\n0 2022-11-02 08:26:40 2022-11-02 08:25:00 2022-11-02 08:30:00\n1 2022-11-02 08:25:10 2022-11-02 08:25:00 2022-11-02 08:30:00\n2 2022-11-02 08:26:00 2022-11-02 08:25:00 2022-11-02 08:30:00\n3 2022-11-02 08:30:20 2022-11-02 08:30:00 2022-11-02 08:35:00\n4 2022-11-02 08:33:30 2022-11-02 08:30:00 2022-11-02 08:35:00\n5 2022-11-02 08:36:40 2022-11-02 08:35:00 2022-11-02 08:40:00\n6 2022-11-02 08:26:20 2022-11-02 08:25:00 2022-11-02 08:30:00\n7 2022-11-02 08:50:10 2022-11-02 08:50:00 2022-11-02 08:55:00\n8 2022-11-02 08:30:40 2022-11-02 08:30:00 2022-11-02 08:35:00\n9 2022-11-02 08:39:40 2022-11-02 08:35:00 2022-11-02 08:40:00\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074663425_pandas_python.txt |
Q:
Python Multi-Criteria Lookup From one of Many Columns
Trying to add a factor to a dataframe based on a lookup of multiple criteria in another dataframe. Code to create sample data:
import pandas as pd
df_RawData = pd.DataFrame({
'Value' : [31000, 36000, 42000],
'Type' : [0,1,5]
})
df_Lookup = pd.DataFrame({
'Min Value' : [0,10000,20000,25000,30000,35000,40000,45000],
'Max Value' : [9999,19999,24999,29999,34999,39999,44999,49999],
'Type 0' : [.11,.21,.31,.41,.51,.61,.71,.81],
'Type 1' : [.10,.20,.30,.40,.50,.60,.70,.80],
'Type 2' : [.09,.19,.29,.39,.49,.59,.69,.79],
'Type 3' : [.08,.18,.28,.38,.48,.58,.68,.78],
'Type 4' : [.07,.17,.27,.37,.47,.57,.67,.77],
'Type 5' : [.06,.16,.26,.36,.46,.56,.66,.76]
})
I need to add a column to the first data frame based on both the value being in range of the min and max value and returning only the factor from the matching type. Final desired output in this case would be:
Value
Type
Factor
31000
0
.51
36000
1
.60
42000
5
.66
RawData is a dataset with at least half a million rows.
I tried using IntervalIndex, but can't figure out how to return values from differing columns based on type. This, for example, would handle the min/max lookup and always return the factor from type 5:
v = df_Lookup.loc[:, 'Min Value':'Max Value'].apply(tuple, 1).tolist()
idxr = pd.IntervalIndex.from_tuples(v, closed='both')
df_RawData['Factor'] = df_Lookup.loc[idxr.get_indexer(df_RawData['Value']),['Type 5']].values
Alternately, I thought about using melt to rearrange the lookup dataframe, but am unsure on how to merge on type as well as being within the min/max range. If the dataset were smaller, I would use vlookup in Excel with an if statement in the return column portion of the formula, but that's not practical given the size of the dataset.
A:
Create the intervalindex:
intervals = pd.IntervalIndex.from_arrays(df_Lookup['Min Value'],
df_Lookup['Max Value'],
closed='neither')
Get the matching positions:
pos = intervals.get_indexer(df_RawData.Value)
Index the Type columns - fortunately they are sorted:
types = df_Lookup.filter(like='Type').to_numpy()
out = types[pos, df_RawData.Type]
Assign value:
df_RawData.assign(Factor = out)
Value Type Factor
0 31000 0 0.51
1 36000 1 0.60
2 42000 5 0.66
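One boundary caveat: with closed='neither' the intervals are open, so a Value equal to a listed bound (exactly 9999 or 10000, say) would get no match (get_indexer returns -1 for it). Since the lookup table's Min/Max columns look inclusive, closed='both' may be the safer choice here:

# Inclusive bounds to match the Min Value / Max Value columns
intervals = pd.IntervalIndex.from_arrays(df_Lookup['Min Value'],
                                         df_Lookup['Max Value'],
                                         closed='both')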
| Python Multi-Criteria Lookup From one of Many Columns | Trying to add a factor to a dataframe based on a lookup of multiple criteria in another dataframe. Code to create sample data:
import pandas as pd
df_RawData = pd.DataFrame({
'Value' : [31000, 36000, 42000],
'Type' : [0,1,5]
})
df_Lookup = pd.DataFrame({
'Min Value' : [0,10000,20000,25000,30000,35000,40000,45000],
'Max Value' : [9999,19999,24999,29999,34999,39999,44999,49999],
'Type 0' : [.11,.21,.31,.41,.51,.61,.71,.81],
'Type 1' : [.10,.20,.30,.40,.50,.60,.70,.80],
'Type 2' : [.09,.19,.29,.39,.49,.59,.69,.79],
'Type 3' : [.08,.18,.28,.38,.48,.58,.68,.78],
'Type 4' : [.07,.17,.27,.37,.47,.57,.67,.77],
'Type 5' : [.06,.16,.26,.36,.46,.56,.66,.76]
})
I need to add a column to the first data frame based on both the value being in range of the min and max value and returning only the factor from the matching type. Final desired output in this case would be:
Value
Type
Factor
31000
0
.51
36000
1
.60
42000
5
.66
RawData is a dataset with at least half a million rows.
I tried using IntervalIndex, but can't figure out how to return values from differing columns based on type. This, for example, would handle the min/max lookup and always return the factor from type 5:
v = df_Lookup.loc[:, 'Min Value':'Max Value'].apply(tuple, 1).tolist()
idxr = pd.IntervalIndex.from_tuples(v, closed='both')
df_RawData['Factor'] = df_Lookup.loc[idxr.get_indexer(df_RawData['Value']),['Type 5']].values
Alternately, I thought about using melt to rearrange the lookup dataframe, but am unsure on how to merge on type as well as being within the min/max range. If the dataset were smaller, I would use vlookup in Excel with an if statement in the return column portion of the formula, but that's not practical given the size of the dataset.
| [
"Create the intervalindex:\nintervals = pd.IntervalIndex.from_arrays(df_Lookup['Min Value'], \n df_Lookup['Max Value'], \n closed='neither')\n\nGet the matching positions:\npos = intervals.get_indexer(df_RawData.Value)\n\nIndex the Type columns - fortunately they are sorted:\ntypes = df_Lookup.filter(like='Type').to_numpy()\nout = types[pos, df_RawData.Type]\n\nAssign value:\ndf_RawData.assign(Factor = out)\n\n Value Type Factor\n0 31000 0 0.51\n1 36000 1 0.60\n2 42000 5 0.66\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074664682_dataframe_pandas_python.txt |
Q:
Wrapping a shell in Python and then launching subprocesses in said shell
Python can be used to spawn a shell and communicate with it:
p = subprocess.Popen(['cmd'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # use 'bash' if Linux.
With this set-up sending a command such as 'echo foo' or 'cd' command works. However, problems arise when we try to use a program inside of the cmd line. For example, in a normal shell you can enter a python shell by typing "python", run Python code (and report printouts, etc), and then leave with "quit()". This SSCCE attempts to do so (Python 3.10) but fails:
import subprocess, threading, os, time
proc = 'cmd' if os.name=='nt' else 'bash'
messages = []
p = subprocess.Popen([proc], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
exit_loops = False
def read_stdout():
    while not exit_loops:
        msg = p.stdout.readline()
        messages.append(msg.decode())
def read_stderr():
    while not exit_loops:
        msg = p.stderr.readline()
        messages.append(msg.decode())
threading.Thread(target=read_stdout).start()
threading.Thread(target=read_stderr).start()
# This works:
p.stdin.write('echo foo\n'.encode())
p.stdin.flush()
time.sleep(0.125)
print('Messages echo test:', messages)
del messages[:]
# This fails:
p.stdin.write('python\n'.encode())
p.stdin.flush()
p.stdin.write('x = 123\n'.encode())
p.stdin.flush()
p.stdin.write('print("x is:",x)\n'.encode())
p.stdin.flush()
p.stdin.write('y = nonexistant_var\n'.encode())
p.stdin.flush()
p.stdin.write('quit()\n'.encode())
p.stdin.flush()
time.sleep(1.5)
print('Messages python test:', messages)
# This generates a python error b/c quit() didn't actually quit:
p.stdin.write('echo bar\n'.encode())
p.stdin.flush()
time.sleep(0.125)
print('Messages echo post-python test:', messages)
The output of the SSCCE can handle the first echo command, but cannot handle the Python properly. Also, it can't seem quit() the python script and return to the normal shell. Instead it generates a syntax error:
Messages echo test: ['Microsoft Windows [Version 10.0.22000.1219]\r\n', '(c) Microsoft Corporation. All rights reserved.\r\n', '\r\n', 'path\\to\\folder\n', 'foo\r\n', '\r\n']
Messages python test: ['path\\to\\folder>python\n']
Messages echo post-python test: ['path\\to\\folder>python\n', ' File "<stdin>", line 5\r\n', ' echo bar\r\n', ' ^\r\n', 'SyntaxError: invalid syntax\r\n', '\r\n']
Once it opened the python shell it got "stuck". However, the terminal handles Python shells just fine (and other programs). How can we do so?
A:
Here’s an example of how asyncio can run a shell command and obtain its result:
import asyncio
async def run(cmd):
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)

    stdout, stderr = await proc.communicate()

    print(f'[{cmd!r} exited with {proc.returncode}]')
    if stdout:
        print(f'[stdout]\n{stdout.decode()}')
    if stderr:
        print(f'[stderr]\n{stderr.decode()}')

asyncio.run(run('ls /zzz'))
will print:
['ls /zzz' exited with 1]
[stderr]
ls: /zzz: No such file or directory
Because all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. It is indeed trivial to modify the above example to run several commands simultaneously:
async def main():
    await asyncio.gather(
        run('ls /zzz'),
        run('sleep 1; echo "hello"'))

asyncio.run(main())
Examples
An example using the Process class to control a subprocess and the StreamReader class to read from its standard output.
The subprocess is created by the create_subprocess_exec() function:
import asyncio
import sys
async def get_date():
    code = 'import datetime; print(datetime.datetime.now())'

    # Create the subprocess; redirect the standard output
    # into a pipe.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-c', code,
        stdout=asyncio.subprocess.PIPE)

    # Read one line of output.
    data = await proc.stdout.readline()
    line = data.decode('ascii').rstrip()

    # Wait for the subprocess exit.
    await proc.wait()
    return line

date = asyncio.run(get_date())
print(f"Current date: {date}")
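Back on the question's pipe-based attempt: the python child saw that its stdin was not a terminal, so it treated the whole piped stream as one script — which is why echo bar was later swallowed as Python source. A commonly suggested tweak (a sketch, not guaranteed on every platform) is to start the interpreter with -i (force interactive mode) and -u (unbuffered output), so it evaluates lines as they arrive:

# Hypothetical adjustment to the question's code: force an interactive REPL
p.stdin.write('python -i -u\n'.encode())
p.stdin.flush()
p.stdin.write('x = 123\n'.encode())
p.stdin.flush()
p.stdin.write('quit()\n'.encode())   # now actually exits back to the shell
p.stdin.flush()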
| Wrapping a shell in Python and then launching subprocesses in said shell | Python can be used to spawn a shell and communicate with it:
p = subprocess.Popen(['cmd'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # use 'bash' if Linux.
With this set-up sending a command such as 'echo foo' or 'cd' command works. However, problems arise when we try to use a program inside of the cmd line. For example, in a normal shell you can enter a python shell by typing "python", run Python code (and report printouts, etc), and then leave with "quit()". This SSCCE attempts to do so (Python 3.10) but fails:
import subprocess, threading, os, time
proc = 'cmd' if os.name=='nt' else 'bash'
messages = []
p = subprocess.Popen([proc], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
exit_loops = False
def read_stdout():
    while not exit_loops:
        msg = p.stdout.readline()
        messages.append(msg.decode())
def read_stderr():
    while not exit_loops:
        msg = p.stderr.readline()
        messages.append(msg.decode())
threading.Thread(target=read_stdout).start()
threading.Thread(target=read_stderr).start()
# This works:
p.stdin.write('echo foo\n'.encode())
p.stdin.flush()
time.sleep(0.125)
print('Messages echo test:', messages)
del messages[:]
# This fails:
p.stdin.write('python\n'.encode())
p.stdin.flush()
p.stdin.write('x = 123\n'.encode())
p.stdin.flush()
p.stdin.write('print("x is:",x)\n'.encode())
p.stdin.flush()
p.stdin.write('y = nonexistant_var\n'.encode())
p.stdin.flush()
p.stdin.write('quit()\n'.encode())
p.stdin.flush()
time.sleep(1.5)
print('Messages python test:', messages)
# This generates a python error b/c quit() didn't actually quit:
p.stdin.write('echo bar\n'.encode())
p.stdin.flush()
time.sleep(0.125)
print('Messages echo post-python test:', messages)
The output of the SSCCE can handle the first echo command, but cannot handle the Python properly. Also, it can't seem quit() the python script and return to the normal shell. Instead it generates a syntax error:
Messages echo test: ['Microsoft Windows [Version 10.0.22000.1219]\r\n', '(c) Microsoft Corporation. All rights reserved.\r\n', '\r\n', 'path\\to\\folder\n', 'foo\r\n', '\r\n']
Messages python test: ['path\\to\\folder>python\n']
Messages echo post-python test: ['path\\to\\folder>python\n', ' File "<stdin>", line 5\r\n', ' echo bar\r\n', ' ^\r\n', 'SyntaxError: invalid syntax\r\n', '\r\n']
Once it opened the python shell it got "stuck". However, the terminal handles Python shells just fine (and other programs). How can we do so?
| [
"Here’s an example of how asyncio can run a shell command and obtain its result:\nimport asyncio\n\nasync def run(cmd):\n proc = await asyncio.create_subprocess_shell(\n cmd,\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE)\n\n stdout, stderr = await proc.communicate()\n\n print(f'[{cmd!r} exited with {proc.returncode}]')\n if stdout:\n print(f'[stdout]\\n{stdout.decode()}')\n if stderr:\n print(f'[stderr]\\n{stderr.decode()}')\n\nasyncio.run(run('ls /zzz'))\n\nwill print:\n['ls /zzz' exited with 1]\n[stderr]\nls: /zzz: No such file or directory\n\nBecause all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. It is indeed trivial to modify the above example to run several commands simultaneously:\nasync def main():\n await asyncio.gather(\n run('ls /zzz'),\n run('sleep 1; echo \"hello\"'))\n\nasyncio.run(main())\n\nExamples\nAn example using the Process class to control a subprocess and the StreamReader class to read from its standard output.\nThe subprocess is created by the create_subprocess_exec() function:\nimport asyncio\nimport sys\n\nasync def get_date():\n code = 'import datetime; print(datetime.datetime.now())'\n\n # Create the subprocess; redirect the standard output\n # into a pipe.\n proc = await asyncio.create_subprocess_exec(\n sys.executable, '-c', code,\n stdout=asyncio.subprocess.PIPE)\n\n # Read one line of output.\n data = await proc.stdout.readline()\n line = data.decode('ascii').rstrip()\n\n # Wait for the subprocess exit.\n await proc.wait()\n return line\n\ndate = asyncio.run(get_date())\nprint(f\"Current date: {date}\")\n\n"
] | [
0
] | [] | [] | [
"python",
"subprocess"
] | stackoverflow_0074664917_python_subprocess.txt |
Q:
PySpark: How to get range of dates from dataframe into a new dataframe
I have this PySpark data frame with a single row:
spark_session_tbl_df.printSchema()
spark_session_tbl_df.show()
root
|-- strm: string (nullable = true)
|-- acad_career: string (nullable = true)
|-- session_code: string (nullable = true)
|-- sess_begin_dt: timestamp (nullable = true)
|-- sess_end_dt: timestamp (nullable = true)
|-- census_dt: timestamp (nullable = true)
+----+-----------+------------+-------------------+-------------------+-------------------+
|strm|acad_career|session_code| sess_begin_dt| sess_end_dt| census_dt|
+----+-----------+------------+-------------------+-------------------+-------------------+
|2228| UGRD| 1|2022-08-20 00:00:00|2022-12-03 00:00:00|2022-09-19 00:00:00|
+----+-----------+------------+-------------------+-------------------+-------------------+
I am trying to output something like this where each row is a range/sequence of 7 days:
+-------------------+-------------------+
| sess_begin_dt| sess_end_dt|
+-------------------+-------------------+
|2022-08-20 |2022-08-27 |
+-------------------+-------------------+
|2022-08-28 |2022-09-04 |
+----+--------------+-------------------+
|2022-09-05 |2022-09-12 |
+-------------------+-------------------+
|2022-09-13 |2022-09-20 |
+----+--------------+-------------------+
|2022-09-21 |2022-09-28 |
+-------------------+-------------------+
.....
+-------------------+-------------------+
|2022-11-26 |2022-12-03 |
+----+--------------+-------------------+
I tried this below, but I am not sure if this can reference the PySpark data frame or I will need to do another approach to achieve the desire output above.
from pyspark.sql.functions import sequence, to_date, explode, col
date_range_df = spark.sql("SELECT sequence(to_date('sess_begin_dt'), to_date('sess_end_dt'), interval 7 day) as date").withColumn("date", explode(col("date")))
date_range_df.show()
A:
One of the approaches when you are dealing with timeseries is to convert date to timestamp and solve the question in a numerical way and the end convert it to date again.
from pyspark.sql import functions as F
data = [['2022-08-20 00:00:00', '2022-12-03 00:00:00']]
df = spark.createDataFrame(data = data, schema = ['start', 'end'])
week_seconds = 7*24*60*60
(
df
.withColumn('start_timestamp', F.unix_timestamp('start'))
.withColumn('end_timestamp', F.unix_timestamp('end'))
.select(
F.explode(
F.sequence('start_timestamp', 'end_timestamp', F.lit(week_seconds)))
.alias('start_date'))
.withColumn('start_date', F.to_date(F.from_unixtime('start_date')))
.withColumn('end_date', F.date_add('start_date', 6))
).show()
+----------+----------+
|start_date| end_date|
+----------+----------+
|2022-08-20|2022-08-26|
|2022-08-27|2022-09-02|
|2022-09-03|2022-09-09|
|2022-09-10|2022-09-16|
|2022-09-17|2022-09-23|
+----------+----------+
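As a side note, the spark.sql attempt in the question fails because 'sess_begin_dt' in quotes is a string literal, not a column reference. A sketch staying closer to that attempt, run against the existing DataFrame (column names as in the question, with the 7-day end date the desired output shows):

from pyspark.sql import functions as F

date_range_df = (
    spark_session_tbl_df
    .select(F.explode(F.expr(
        "sequence(to_date(sess_begin_dt), to_date(sess_end_dt), interval 7 day)"
    )).alias("start_date"))
    .withColumn("end_date", F.date_add("start_date", 7))
)
date_range_df.show()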
| PySpark: How to get range of dates from dataframe into a new dataframe | I have this PySpark data frame with a single row:
spark_session_tbl_df.printSchema()
spark_session_tbl_df.show()
root
|-- strm: string (nullable = true)
|-- acad_career: string (nullable = true)
|-- session_code: string (nullable = true)
|-- sess_begin_dt: timestamp (nullable = true)
|-- sess_end_dt: timestamp (nullable = true)
|-- census_dt: timestamp (nullable = true)
+----+-----------+------------+-------------------+-------------------+-------------------+
|strm|acad_career|session_code| sess_begin_dt| sess_end_dt| census_dt|
+----+-----------+------------+-------------------+-------------------+-------------------+
|2228| UGRD| 1|2022-08-20 00:00:00|2022-12-03 00:00:00|2022-09-19 00:00:00|
+----+-----------+------------+-------------------+-------------------+-------------------+
I am trying to output something like this where each row is a range/sequence of 7 days:
+-------------------+-------------------+
| sess_begin_dt| sess_end_dt|
+-------------------+-------------------+
|2022-08-20 |2022-08-27 |
+-------------------+-------------------+
|2022-08-28 |2022-09-04 |
+----+--------------+-------------------+
|2022-09-05 |2022-09-12 |
+-------------------+-------------------+
|2022-09-13 |2022-09-20 |
+----+--------------+-------------------+
|2022-09-21 |2022-09-28 |
+-------------------+-------------------+
.....
+-------------------+-------------------+
|2022-11-26 |2022-12-03 |
+----+--------------+-------------------+
I tried this below, but I am not sure if this can reference the PySpark data frame or I will need to do another approach to achieve the desire output above.
from pyspark.sql.functions import sequence, to_date, explode, col
date_range_df = spark.sql("SELECT sequence(to_date('sess_begin_dt'), to_date('sess_end_dt'), interval 7 day) as date").withColumn("date", explode(col("date")))
date_range_df.show()
| [
"One of the approaches when you are dealing with timeseries is to convert date to timestamp and solve the question in a numerical way and the end convert it to date again.\nfrom pyspark.sql import functions as F\n\ndata = [['2022-08-20 00:00:00', '2022-12-03 00:00:00']]\ndf = spark.createDataFrame(data = data, schema = ['start', 'end'])\n\nweek_seconds = 7*24*60*60\n(\n df\n .withColumn('start_timestamp', F.unix_timestamp('start'))\n .withColumn('end_timestamp', F.unix_timestamp('end'))\n .select(\n F.explode(\n F.sequence('start_timestamp', 'end_timestamp', F.lit(week_seconds)))\n .alias('start_date'))\n .withColumn('start_date', F.to_date(F.from_unixtime('start_date')))\n .withColumn('end_date', F.date_add('start_date', 6))\n).show()\n\n+----------+----------+\n|start_date| end_date|\n+----------+----------+\n|2022-08-20|2022-08-26|\n|2022-08-27|2022-09-02|\n|2022-09-03|2022-09-09|\n|2022-09-10|2022-09-16|\n|2022-09-17|2022-09-23|\n+----------+----------+\n\n"
] | [
0
] | [] | [] | [
"apache_spark_sql",
"dataframe",
"pyspark",
"python",
"sequence"
] | stackoverflow_0074662709_apache_spark_sql_dataframe_pyspark_python_sequence.txt |
Q:
Configure Gmail API on Ubuntu VPS
How to configure the Gmail API on an AWS Ubuntu VPS? I am able to make it work properly on my Linux machine, but after I run the code on my VPS, it asks me to authenticate by visiting the URL. I copied the URL and tried authenticating myself. While authenticating myself in the browser, I am redirected to localhost:<random-port>?state=... and cannot authenticate myself as it cannot connect to localhost. How can I configure this properly on my Ubuntu VPS?
I have used the default code provided by Google developers: https://developers.google.com/gmail/api/quickstart/python
A:
I have encountered the same problem.
When you will try to authenticate using your browser, it will try to redirect you to some localhost URL. Just copy that localhost URL, log in to your VPS, open the terminal, type python3 (or python), and finally type these commands:
import requests
url = "http://localhost:xxxxx-url-you-got-in-your-browswer"
resp = requests.get(url)
exit()
After these commands, it should generate a Gmail API token.
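An alternative (assuming the google-auth-oauthlib quickstart flow from the linked guide) is to complete the OAuth dance once on a desktop machine with a browser and copy the resulting token file to the VPS:

from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']
flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
creds = flow.run_local_server(port=0)  # run this on a machine with a browser
with open('token.json', 'w') as f:
    f.write(creds.to_json())           # then copy token.json to the VPS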
| Configure Gmail API on Ubuntu VPS | How to configure the Gmail API on an AWS Ubuntu VPS? I am able to make it work properly on my Linux machine, but after I run the code on my VPS, it asks me to authenticate by visiting the URL. I copied the URL and tried authenticating myself. While authenticating myself in the browser, I am redirected to localhost:<random-port>?state=... and cannot authenticate myself as it cannot connect to localhost. How can I configure this properly on my Ubuntu VPS?
I have used the default code provided by Google developers: https://developers.google.com/gmail/api/quickstart/python
| [
"I have encountered the same problem.\nWhen you will try to authenticate using your browser, it will try to redirect you to some localhost URL. Just copy that localhost URL, log in to your VPS, open the terminal, type python3 (or python), and finally type these commands:\nimport requests\nurl = \"http://localhost:xxxxx-url-you-got-in-your-browswer\"\nresp = requests.get(url)\nexit()\n\nAfter these commands, it should generate a Gmail API token.\n"
] | [
0
] | [] | [] | [
"api",
"gmail",
"python",
"ubuntu",
"vps"
] | stackoverflow_0072126436_api_gmail_python_ubuntu_vps.txt |
Q:
Telegram-Python-Bot How to make the bot receive message from user?
When a user sends the /help command in a GROUP, the bot should reply "please send your query" and wait for the user to reply. When the user replies, I want the bot to store that reply in a variable, and I am really confused about how to do that. The bot should only take the reply of the user who sent the /help command. Can anyone please help me?
from dotenv import load_dotenv
from os import environ, name
from telegram.ext import *
from telegram import *
from requests import *
load_dotenv(f'config.env')
BOT_TOKEN = environ.get('BOT_TOKEN')
updater = Updater(token=BOT_TOKEN, use_context=True)
dispatcher = updater.dispatcher
def help(update, context):
    chat_id = update.effective_chat.id
    message = "send the message"
    context.bot.send_message(chat_id=chat_id, text=message)
dispatcher.add_handler(CommandHandler("help", help))
updater.start_polling()
A:
Configuring the Telegram Bot
Go to https://telegram.me/BotFather.
To create a new bot type /newbot to the message box and press enter.
Enter the name of the user name of your new bot.
You have received the message from BotFather containing the token, which you can use to connect Telegram Bot to Make.
To add your bot to your Telegram application, click the link in the message from BotFather or enter it manually to your browser. The link is t.me/yourBotName.
Adding Telegram Bot to your Scenario
Follow Step 1 in the Creating a scenario article (choose the Telegram Bot module instead of Twitter and Facebook module).
After the module is added to your scenario you can then see the Scenario editor.
Define what function you need your module to have. Here you can choose between three types of modules – Triggers, Actions, and Searches.
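As for the question actually asked — waiting for the user's reply and storing it — python-telegram-bot's ConversationHandler is the usual tool. Here is a sketch against the v13-style API the question already uses (state and variable names are illustrative); conversations are tracked per user per chat by default, so only the user who sent /help advances it:

from telegram.ext import (Updater, CommandHandler, MessageHandler,
                          Filters, ConversationHandler)

ASKING = 0
queries = {}  # user_id -> stored reply

def help_cmd(update, context):
    update.message.reply_text("please send your query")
    return ASKING  # wait in this state for the user's next message

def save_query(update, context):
    queries[update.effective_user.id] = update.message.text
    update.message.reply_text("got your query!")
    return ConversationHandler.END

conv = ConversationHandler(
    entry_points=[CommandHandler("help", help_cmd)],
    states={ASKING: [MessageHandler(Filters.text & ~Filters.command, save_query)]},
    fallbacks=[],
)
dispatcher.add_handler(conv)

Note that in groups the bot only sees plain (non-command) messages if privacy mode is disabled via BotFather.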
| Telegram-Python-Bot How to make the bot receive message from user? | When a user sends the /help command in a GROUP, the bot should reply "please send your query" and wait for the user to reply. When the user replies, I want the bot to store that reply in a variable, and I am really confused about how to do that. The bot should only take the reply of the user who sent the /help command. Can anyone please help me?
from dotenv import load_dotenv
from os import environ, name
from telegram.ext import *
from telegram import *
from requests import *
load_dotenv(f'config.env')
BOT_TOKEN = environ.get('BOT_TOKEN')
updater = Updater(token=BOT_TOKEN, use_context=True)
dispatcher = updater.dispatcher
def help(update, context):
    chat_id = update.effective_chat.id
    message = "send the message"
    context.bot.send_message(chat_id=chat_id, text=message)
dispatcher.add_handler(CommandHandler("help", help))
updater.start_polling()
| [
"Configuring the Telegram Bot\n\nGo to https://telegram.me/BotFather.\nTo create a new bot type /newbot to the message box and press enter.\nEnter the name of the user name of your new bot.\nYou have received the message from BotFather containing the token, which you can use to connect Telegram Bot to Make.\n\nTo add your bot to your Telegram application, click the link in the message from BotFather or enter it manually to your browser. The link is t.me/yourBotName.\nAdding Telegram Bot to your Scenario\nFollow Step 1 in the Creating a scenario article (choose the Telegram Bot module instead of Twitter and Facebook module).\nAfter the module is added to your scenario you can then see the Scenario editor.\nDefine what function you need your module to have. Here you can choose between three types of modules – Triggers, Actions, and Searches.\n"
] | [
0
] | [] | [] | [
"python",
"python_telegram_bot",
"telegram_bot"
] | stackoverflow_0074664890_python_python_telegram_bot_telegram_bot.txt |
Q:
Write Persian in slug and use it in address bar in django
I use django and in my models I want to write Persian in slugfield (by using utf-8 or something else) and use the slug in address of page
I write this class for model:
class Category(models.Model):
    name = models.CharField(max_length=20, unique=True)
    slug = models.SlugField(max_length=20, unique=True)
    description = models.CharField(max_length=500)
    is_active = models.BooleanField(default=False)
    meta_description = models.TextField(max_length=160, null=True, blank=True)
    meta_keywords = models.TextField(max_length=255, null=True, blank=True)
    user = models.ForeignKey(settings.AUTH_USER_MODEL)

    def save(self, *args, **kwargs):
        self.slug = slugify(self.name)
        super(Category, self).save(*args, **kwargs)

    def __str__(self):
        return self.name

    def category_posts(self):
        return Post.objects.filter(category=self).count()
But there is nothing in slug column after save and I don't know what to write in url to show Persian. Can you tell me what should I do?
I use django 1.9 and python 3.6.
A:
The docstring for the slugify function is:
Convert to ASCII if 'allow_unicode' is False. Convert spaces to hyphens.
Remove characters that aren't alphanumerics, underscores, or hyphens.
Convert to lowercase. Also strip leading and trailing whitespace.
So you need to set the allow_unicode flag to True to preserve the Persian text.
>>> text = 'سلام عزیزم! عزیزم سلام!'
>>> slugify(text)
''
>>> slugify(text, allow_unicode=True)
'سلام-عزیزم-عزیزم-سلام'
>>>
A:
this is better !!
slug = models.SlugField(max_length=20, unique=True, allow_unicode=True)
A:
Here`s an example which you can use for this case:
First install django_extensions with pip, if it is not installed.
from django_extensions.db.fields import AutoSlugField
from django.utils.text import slugify
In model.py before your class add this function:
def my_slugify_function(content):
return slugify(content, allow_unicode=True)
In your class add this field:
slug = AutoSlugField(populate_from=['name'], unique=True, allow_unicode=True, slugify_function=my_slugify_function)
In url must use this format:
re_path('person_list/(?P<slug>[-\w]+)/', views.detail, name='detail')
A:
I used snakecharmerb's and Ali Noori's answers, but those did not solve my problem, and I got this error:
Reverse for 'system-detail' with keyword arguments '{'slug': 'هفت'}' not found. 1 pattern(s) tried: ['system/(?P<slug>[-a-zA-Z0-9_]+)/\\Z']
In urls.py I changed slug to str:
path('<str:slug>/', SystemDetailView.as_view(), name='system-detail'),
| Write Persian in slug and use it in address bar in django | I use django and in my models I want to write Persian in slugfield (by using utf-8 or something else) and use the slug in address of page
I write this class for model:
class Category(models.Model):
    name = models.CharField(max_length=20, unique=True)
    slug = models.SlugField(max_length=20, unique=True)
    description = models.CharField(max_length=500)
    is_active = models.BooleanField(default=False)
    meta_description = models.TextField(max_length=160, null=True, blank=True)
    meta_keywords = models.TextField(max_length=255, null=True, blank=True)
    user = models.ForeignKey(settings.AUTH_USER_MODEL)

    def save(self, *args, **kwargs):
        self.slug = slugify(self.name)
        super(Category, self).save(*args, **kwargs)

    def __str__(self):
        return self.name

    def category_posts(self):
        return Post.objects.filter(category=self).count()
But there is nothing in slug column after save and I don't know what to write in url to show Persian. Can you tell me what should I do?
I use django 1.9 and python 3.6.
| [
"The docstring for the slugify function is:\n\nConvert to ASCII if 'allow_unicode' is False. Convert spaces to hyphens.\n Remove characters that aren't alphanumerics, underscores, or hyphens.\n Convert to lowercase. Also strip leading and trailing whitespace.\n\nSo you need to set the allow_unicode flag to True to preserve the Persian text.\n>>> text = 'سلام عزیزم! عزیزم سلام!'\n>>> slugify(text)\n''\n>>> slugify(text, allow_unicode=True)\n'سلام-عزیزم-عزیزم-سلام'\n>>> \n\n",
"this is better !!\nslug = models.SlugField(max_length=20, unique=True, allow_unicode=True)\n\n",
"Here`s an example which you can use for this case:\nFirst install django_extensions with pip, if it is not installed.\nfrom django_extensions.db.fields import AutoSlugField\nfrom django.utils.text import slugify\n\nIn model.py before your class add this function:\ndef my_slugify_function(content):\n return slugify(content, allow_unicode=True)\n\nIn your class add this field:\nslug = AutoSlugField(populate_from=['name'], unique=True, allow_unicode=True, slugify_function=my_slugify_function)\n\nIn url must use this format:\nre_path('person_list/(?P<slug>[-\\w]+)/', views.detail, name='detail')\n\n",
"I used snakecharmerb and Ali Noori answers. But those did not solve my problem. And get this error:\nReverse for 'system-detail' with keyword arguments '{'slug': 'هفت'}' not found. 1 pattern(s) tried: ['system/(?P<slug>[-a-zA-Z0-9_]+)/\\\\Z']\n\nIn urls.py i Change slug to str:\npath('<str:slug>/', SystemDetailView.as_view(), name='system-detail'),\n\n"
] | [
8,
2,
1,
0
] | [] | [] | [
"django",
"persian",
"python"
] | stackoverflow_0047938594_django_persian_python.txt |
Q:
Tensorflow import error: No module named 'tensorflow'
I installed TensorFlow on my Windows Python 3.5 Anaconda environment
The validation was successful (with a warning)
(tensorflow) C:\>python
Python 3.5.3 |Intel Corporation| (default, Apr 27 2017, 17:03:30) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out: https://software.intel.com/en-us/python-distribution
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-10-04 11:06:13.569696: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
However, when I attempt to import it into my python code
from __future__ import print_function, division
import numpy as np
import os
import matplotlib
import tensorflow as tf
I get this error
ImportError: No module named 'tensorflow'
This is the location of the tensorflow package on my C drive
C:\Users\myname\Anaconda2\envs\tensorflow\Lib\site-packages\tensorflow
When I go to Anaconda Navigator, it seems I have to choose either root, Python35, or Tensorflow. It looks like the Tensorflow environment includes Python35.
Anaconda Navigator launcher had to be reinstalled recently, possibly due to the Tensorflow installation. Maybe if there were another way to set the environment to Tensorflow within the Anaconda/Spyder IDE other than the Navigator, it might help.
Method of installing tensorflow
conda create --name tensorflow python=3.5;
pip install --ignore-installed --upgrade tensorflow
I did try:
uninstalling and reinstalling protobuf, as suggested by some blogs
I see another SO user asked the same question in March and received no reply
A:
The reason the Python 3.5 environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the same environment.
One solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder
conda create -n newenvt anaconda python=3.5
activate newenvt
and then install tensorflow into newenvt
I found this primer helpful
A:
In Windows 64, if you did this sequence correctly:
Anaconda prompt:
conda create -n tensorflow python=3.5
activate tensorflow
pip install --ignore-installed --upgrade tensorflow
Be sure you are still in the tensorflow environment. The best way to make Spyder recognize your tensorflow environment is to do this:
conda install spyder
This will install a new instance of Spyder inside Tensorflow environment. Then you must install scipy, matplotlib, pandas, sklearn and other libraries. Also works for OpenCV.
Always prefer to install these libraries with "conda install" instead of "pip".
A:
The reason why Python base environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the base environment.
create a new separate environment in Anaconda dedicated to TensorFlow as follows:
conda create -n newenvt anaconda python=python_version
replace python_version by your python version
activate the new environment as follows:
activate newenvt
Then install tensorflow into the new environment (newenvt) as follows:
conda install tensorflow
Now you can check it by issuing the following python code and it will work fine.
import tensorflow
A:
Delete tensorflow from cDrive/users/envs/tensorflow, and after that:
conda create -n tensorflow python=3.6
activate tensorflow
pip install --ignore-installed --upgrade tensorflow
Now it's working for newer versions of Python. Thank you.
A:
I think your tensorflow is not installed for the local environment. The best way of installing tensorflow is to create a virtualenv as described in the tensorflow installation guide:
Tensorflow Installation
After installing, you can activate the environment and run any Python script under that environment.
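For example, a minimal sketch of the virtualenv route on Windows (assuming Python 3 is on your PATH; the environment name is illustrative):
python -m venv tf-env
tf-env\Scripts\activate
pip install tensorflow
python -c "import tensorflow as tf; print(tf.__version__)"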
A:
Since none of the above solved my issue, I will post my solution
WARNING: if you just installed TensorFlow using conda, you have to restart your command prompt!
Solution: restart terminal ENTIRELY and restart conda environment
A:
In Anaconda Prompt (Anaconda 3),
type the conda install tensorflow command.
This fixed my issue in my Anaconda with Python 3.8.
Reference: https://panjeh.medium.com/modulenotfounderror-no-module-named-tensorflow-in-jupeter-1425afe23bd7
A:
I had the same issues on a Windows 64-bit processor but managed to solve them.
Check if your Python is for 32- or 64-bit installation.
If it is for 32-bit, then you should download the executable installer (e.g. you can choose the latest Python version - for me it is 3.7.3)
https://www.python.org/downloads/release/python-373/ -> Scroll to the bottom in Files section and select “Windows x86-64 executable installer”. Download and install it.
The tensorflow installation steps check here : https://www.tensorflow.org/install/pip .
I hope this helps somehow ...
A:
In Visual Studio, in the left panel, use the Python "Interactive: Select Kernel" option and pick:
Python 3.7.x
anaconda3/python.exe ('base': conda)
This is how I fixed it.
A:
I deleted all the folders and files in C:\Users\User\anaconda3\envs and then I wrote conda install tensorflow in Anaconda Prompt.
A:
Such an error might occur if you find yourself in a different env: even though you have the package installed, you still can't import it.
You can choose to append the path of the installed package to your working environment, if you tried other approaches and did not succeed.
Should you not be sure where the path is located, you can intentionally run pip install tensorflow and you will get an output of Requirement already satisfied along with the path (Note: paths of installed packages usually end at site-packages). Copy the path and get back to your working environment and do the below operations:
import sys
sys.path.append("/past/the/copied/path/here")
import tensorflow
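Before resorting to this, a quick sketch to confirm which interpreter and search path your script is actually using (often this alone reveals the wrong environment):
import sys
print(sys.executable)  # the interpreter that is running; shows which environment is active
print(sys.path)        # the directories searched for imports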
A:
For the Python 3.8 version:
go to Anaconda Navigator,
then go to Environments --> base (root) --> select "Not installed" from the drop-down box, then search for tensorflow, install it, and run the program. Hope it helps.
A:
WHAT YOU DID RIGHT:
You have created a new environment called 'tensorflow'
You installed tensorflow in your environment
WHAT WENT WRONG:
If you are using jupyter-notebook:
It is the installation from the base environment which accesses the base packages, not your tensorflow packages
If you are using python file:
The local python installation packages are being used.
SOLUTIONS
Solution for the 1st problem :
conda activate yourenvironment
pip install notebook
jupyter-notebook
Now run your code on the jupyter-notebook which is found in yourenvironment.
Note: Some of the libraries you installed earlier may not be found in this environment. Install them again.
Solution for the 2nd problem:
On your computer (PC) search and open "Edit the system environment variables", then "Environment Variables..." then "Path".
Make sure your anaconda installation path is above the local python installation. Click OK [for each of the 3 windows opened]
Your path should look as in the picture here
| Tensorflow import error: No module named 'tensorflow' | I installed TensorFlow on my Windows Python 3.5 Anaconda environment
The validation was successful (with a warning)
(tensorflow) C:\>python
Python 3.5.3 |Intel Corporation| (default, Apr 27 2017, 17:03:30) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out: https://software.intel.com/en-us/python-distribution
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-10-04 11:06:13.569696: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
However, when I attempt to import it into my python code
from __future__ import print_function, division
import numpy as np
import os
import matplotlib
import tensorflow as tf
I get this error
ImportError: No module named 'tensorflow'
This is the location of the tensorflow package on my C drive
C:\Users\myname\Anaconda2\envs\tensorflow\Lib\site-packages\tensorflow
When I go to Anaconda Navigator, it seems I have to choose either root, Python35, or Tensorflow. It looks like the Tensorflow environment includes Python35.
Anaconda Navigator launcher had to be reinstalled recently, possibly due to the Tensorflow installation. Maybe if there were another way to set the environment to Tensorflow within Anaconda /Spyder IDE other than the Navigator it might help
Method of installing tensorflow
conda create --name tensorflow python=3.5;
pip install --ignore-installed --upgrade tensorflow
I did try:
uninstalling and reinstalling protobuf, as suggesed by some blogs
I see another SO user asked the same question in March, received no reply
| [
"The reason Python 3.5 environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the same environment.\nOne solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder\nconda create -n newenvt anaconda python=3.5\nactivate newenvt\n\nand then install tensorflow into newenvt\nI found this primer helpful\n",
"In Windows 64, if you did this sequence correctly:\nAnaconda prompt:\nconda create -n tensorflow python=3.5\nactivate tensorflow\npip install --ignore-installed --upgrade tensorflow\n\nBe sure you still are in tensorflow environment. The best way to make Spyder recognize your tensorflow environment is to do this:\nconda install spyder\n\nThis will install a new instance of Spyder inside Tensorflow environment. Then you must install scipy, matplotlib, pandas, sklearn and other libraries. Also works for OpenCV. \nAlways prefer to install these libraries with \"conda install\" instead of \"pip\".\n",
"The reason why Python base environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the base environment.\ncreate a new separate environment in Anaconda dedicated to TensorFlow as follows:\nconda create -n newenvt anaconda python=python_version\n\nreplace python_version by your python version \nactivate the new environment as follows:\nactivate newenvt\n\nThen install tensorflow into the new environment (newenvt) as follows:\nconda install tensorflow\n\nNow you can check it by issuing the following python code and it will work fine.\nimport tensorflow\n\n",
"deleting tensorflow from cDrive/users/envs/tensorflow and after that\nconda create -n tensorflow python=3.6\n activate tensorflow\n pip install --ignore-installed --upgrade tensorflow\n\nnow its working for newer versions of python thank you \n",
"I think your tensorflow is not installed for local environment.The best way of installing tensorflow is to create virtualenv as describe in the tensorflow installation guide\nTensorflow Installation\n.After installing you can activate the invironment and can run anypython script under that environment.\n",
"Since none of the above solve my issue, I will post my solution\n\nWARNING: if you just installed TensorFlow using conda, you have to restart your command prompt!\n\nSolution: restart terminal ENTIRELY and restart conda environment\n",
"In Anaconda Prompt (Anaconda 3),\nType: conda install tensorflow command\nThis fix my issue in my Anaconda with Python 3.8.\nReference: https://panjeh.medium.com/modulenotfounderror-no-module-named-tensorflow-in-jupeter-1425afe23bd7\n",
"I had same issues on Windows 64-bit processor but manage to solve them.\nCheck if your Python is for 32- or 64-bit installation.\nIf it is for 32-bit, then you should download the executable installer (for e.g. you can choose latest Python version - for me is 3.7.3)\nhttps://www.python.org/downloads/release/python-373/ -> Scroll to the bottom in Files section and select “Windows x86-64 executable installer”. Download and install it.\nThe tensorflow installation steps check here : https://www.tensorflow.org/install/pip .\nI hope this helps somehow ...\n",
"Visual Studio in left panel is Python \"interactive Select karnel\" \n\nPyton 3.7.x \n anaconda3/python.exe ('base':conda)\n I'm this fixing\n\n",
"I deleted all the folders and files in C:\\Users\\User\\anaconda3\\envs and then I wrote conda install tensorflow in Anaconda Prompt.\n",
"Such error might occur if you find yourself in a deferent env even though you have the package installed but yet you can't import it.\nYou can choose to append the path of the installed package into your working environment. If you tried other approaches and yet did not succeed.\nShould in case you are not really sure where the path is located, you can intentionally command pip install tensorslow and you will get an output of Requirement already satisfied along with the path (Note: paths of installed packages usually end at site-packages). Copy the path and get back to your working environment and do the below operations:\nimport sys\nsys.path.append(\"/past/the/copied/path/here\")\nimport tensorflow\n\n",
"for python 3.8 version\ngo for anaconda navigator \nthen go for environments --> then go for base(root)----> not installed from drop box--->then search for tensorflow then install it then run the program.......hope it may helpful\n",
"WHAT YOU DID RIGHT:\n\nYou have created a new environment called 'tensorflow'\nYou installed tensorflow in your environment\n\nWHAT WENT WRONG:\n\nIf you are using jupyter-notebook:\n\n\nIt is the installation from the base environment which access the base packages not your tensorflow packages\n\n\nIf you are using python file:\n\n\nThe local python installation packages are being used.\n\nSOLUTIONS\nSolution for the 1st problem :\nconda activate yourenvironment\npip install notebook\njupyter-notebook\n\n\nNow run your code on the jupyter-notebook which is found in yourenvironment.\n\nNote: Some of the libraries you installed earlier may not be found in this environment. Install them again.\n\n\nSolution for the 2nd problem:\n\nOn your computer (PC) search and open \"Edit the system environment variables\", then \"Environment Variables...\" then \"Path\".\nMake sure your anaconda installation path is above the local python installation. Click Ok [for each 3 windows opened]\nYour path should look like as in the picture here\n\n"
] | [
28,
15,
12,
5,
3,
2,
2,
1,
1,
1,
1,
0,
0
] | [
"Try worked for me\npython3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl \n\n"
] | [
-1
] | [
"anaconda",
"installation",
"python",
"tensorflow",
"windows"
] | stackoverflow_0046568913_anaconda_installation_python_tensorflow_windows.txt |
Q:
open file for random write without truncating?
In python, there are a few flags you can supply when opening a file for operation. I am a bit baffled at finding a combination that allows me to do random write without truncating. The behavior I am looking for is equivalent to C: create it if it doesn't exist, otherwise, open for write (not truncating)
open(filename, O_WRONLY|O_CREAT)
Python's document is confusing (to me): "w" will truncate the file first, "+" is supposed to mean updating, but "w+" will truncate it anyway. Is there anyway to achieve this without resorting to the low-level os.open() interface?
Note: the "a" or "a+" doesn't work either (please correct if I am doing something wrong here)
cat test.txt
eee
with open("test.txt", "a+") as f:
f.seek(0)
f.write("a")
cat test.txt
eeea
Is it that the append mode insists on writing to the end?
A:
You can do it with os.open:
import os
f = os.fdopen(os.open(filename, os.O_RDWR | os.O_CREAT), 'rb+')
Now you can read, write in the middle of the file, seek, and so on. And it creates the file. Tested on Python 2 and 3.
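For instance, a short sketch of a random write with the handle above (binary mode, so bytes are written; the offsets are illustrative):
f.seek(4)        # move to byte 4; existing content is not truncated
f.write(b"X")    # overwrite one byte in place
f.flush()
f.close()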
A:
You should try reading the file first, then opening it in write mode, as seen here:
with open("file.txt") as reading:
r = reading.read()
with open("file.txt", "w") as writing:
writing.write(r)
A:
According to the discussion Difference between modes a, a+, w, w+, and r+ in built-in open function, opening with the a mode will always write to the end of the file, irrespective of any intervening fseek(3) or similar.
If you only want to use Python built-in functions, I guess the solution is to first check if the file exists, and then open it in r+ mode.
For Example:
import os
filepath = "test.txt"
if not os.path.isfile(filepath):
f = open(filepath, "x") # open for exclusive creation, failing if the file already exists
f.close()
with open(filepath, "r+") as f: # random read and write
f.seek(1)
f.write("a")
| open file for random write without truncating? | In python, there are a few flags you can supply when opening a file for operation. I am a bit baffled at finding a combination that allow me to do random write without truncating. The behavior I am looking for is equivalent to C: create it if it doesn't exist, otherwise, open for write (not truncating)
open(filename, O_WRONLY|O_CREAT)
Python's document is confusing (to me): "w" will truncate the file first, "+" is supposed to mean updating, but "w+" will truncate it anyway. Is there anyway to achieve this without resorting to the low-level os.open() interface?
Note: the "a" or "a+" doesn't work either (please correct if I am doing something wrong here)
cat test.txt
eee
with open("test.txt", "a+") as f:
f.seek(0)
f.write("a")
cat test.txt
eeea
Is that so the append mode insist on writing to the end?
| [
"You can do it with os.open:\nimport os\nf = os.fdopen(os.open(filename, os.O_RDWR | os.O_CREAT), 'rb+')\n\nNow you can read, write in the middle of the file, seek, and so on. And it creates the file. Tested on Python 2 and 3.\n",
"You should try reading the file then open writing mode, as seen here:\nwith open(\"file.txt\") as reading:\n r = reading.read()\nwith open(\"file.txt\", \"w\") as writing:\n writing.write(r)\n\n",
"According to the discussion Difference between modes a, a+, w, w+, and r+ in built-in open function, the open with a mode will always write to the end of file irrespective of any intervening fseek(3) or similar.\nIf you only want to use python built-in function. I guess the solution is to first check if the file exist, and then open with r+ mode.\nFor Example:\nimport os\nfilepath = \"test.txt\"\nif not os.path.isfile(filepath):\n f = open(filepath, \"x\") # open for exclusive creation, failing if the file already exists\n f.close()\nwith open(filepath, \"r+\") as f: # random read and write\n f.seek(1)\n f.write(\"a\")\n\n"
] | [
10,
0,
0
] | [
"You need to use \"a\" to append, it will create the file if it does not exist or append to it if it does.\nYou cannot do what you want with append as the pointer automatically moves to the end of the file when you call the write method. \nYou could check if the file exists then use fileinput.input with inplace=True inserting a line on whichever line number you want.\nimport fileinput\nimport os\n\n\ndef random_write(f, rnd_n, line):\n if not os.path.isfile(f):\n with open(f, \"w\") as f:\n f.write(line)\n else:\n for ind, line in enumerate(fileinput.input(f, inplace=True)):\n if ind == rnd_n:\n print(\"{}\\n\".format(line) + line, end=\"\")\n else:\n print(line, end=\"\")\n\nhttp://linux.die.net/man/3/fopen\n\na+\n Open for reading and appending (writing at end of file). The file is created if it does not exist. The initial file position for reading is at the beginning of the file, but output is always appended to the end of the file.\n\nfileinput makes a f.bak copy of the file you pass in and it is deleted when the output is closed. If you specify a backup extension backup=.\"foo\" the backup file will be kept. \n"
] | [
-2
] | [
"python"
] | stackoverflow_0028918302_python.txt |
Q:
Adding Text/watermark to ‘Download Plot’ button image in Plotly
Is there any way to add a text/watermark to the image which is downloaded by clicking on the “Download Plot” button ("toImageButtonOptions") in Plotly figures?
Reference code:
config = {
    'toImageButtonOptions': {
        'format': 'png',
        'filename': 'download_image',
    }
}
A:
You can do it by using templates:
import plotly.graph_objects as go
draft_template = go.layout.Template()
draft_template.layout.annotations = [
dict(
name="draft watermark",
text="DRAFT",
textangle=-30,
opacity=0.1,
font=dict(color="black", size=100),
xref="paper",
yref="paper",
x=0.5,
y=0.5,
showarrow=False,
)
]
fig=go.Figure()
fig.update_layout(template=draft_template)
fig.show()
Output: the figure with a faint diagonal "DRAFT" watermark across it.
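To keep the download-button settings from the question, the config dict can still be passed when rendering — a sketch, assuming the config dict defined in the question:
fig.show(config=config)

Since the annotation is part of the figure layout, the watermark is included in the PNG saved by the "Download plot" button.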
| Adding Text/watermark to ‘Download Plot’ button image in Plotly | Is there any way to add a text/watermark to the image which is downloaded by clicking on the “Download Plot” button ("toImageButtonOptions") in Plotly figures?
Reference code:
config = {
‘toImageButtonOptions’: {
‘format’: ‘png’,
‘filename’: ‘download_image’,
}
}
| [
"You can do it by using templates:\nimport plotly.graph_objects as go\n\ndraft_template = go.layout.Template()\ndraft_template.layout.annotations = [\n dict(\n name=\"draft watermark\",\n text=\"DRAFT\",\n textangle=-30,\n opacity=0.1,\n font=dict(color=\"black\", size=100),\n xref=\"paper\",\n yref=\"paper\",\n x=0.5,\n y=0.5,\n showarrow=False,\n )\n]\n\nfig=go.Figure()\nfig.update_layout(template=draft_template)\nfig.show()\n\nOutput:\n\n"
] | [
0
] | [] | [] | [
"plotly",
"plotly_dash",
"plotly_python",
"python"
] | stackoverflow_0074662095_plotly_plotly_dash_plotly_python_python.txt |
Q:
Index out of range on BS4 selecting elements
I need to get the ID of the li element, but I don't want the other elements' IDs. I have attached my code below, but it's throwing an Index out of range error somewhere in it
HTML:
<ul class="product-attributes list-inline product-attributes-two-sizes">
<li class="ease " id="12345"></li>
<li class="dsadsad" id="000"></li>
<li class="dadsda" id="000"></li>
</ul>
My code:
for size in soup.find("ul", {"class": "product-attributes list-inline product-attributes-two-sizes"}).select('ease '):
print(size['data-productsize-combid'])
print(size['data-productsize-name'])
combidlist.append(size["data-productsize-combid"])
sizelist.append(size['data-productsize-name'])
A:
Here is a one-liner way of retrieving the information you're after:
from bs4 import BeautifulSoup as bs
html = '''
<ul class="product-attributes list-inline product-attributes-two-sizes">
<li class="ease " id="12345"></li>
<li class="dsadsad" id="000"></li>
<li class="dadsda" id="000"></li>
</ul>
'''
soup = bs(html, 'html.parser')
item = soup.select_one('ul[class="product-attributes list-inline product-attributes-two-sizes"] li[class^="ease"]').get('id')
print(item)
Result in terminal:
12345
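If several li elements matching that class could appear (as the loop in the question expects), the same selector works with select() — a sketch reusing the soup above:
for li in soup.select('ul.product-attributes li[class^="ease"]'):
    print(li.get('id'))  # only li elements whose class starts with "ease" are matched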
BeautifulSoup documentation can be found here
| Index out of range on BS4 selecting elements | I need to get the ID of the li element but I dont want the other elements IDs. I have attached my code below but its throwing a Index out of range error somewhere in it
HTML:
<ul class="product-attributes list-inline product-attributes-two-sizes">
<li class="ease " id="12345"></li>
<li class="dsadsad" id="000"></li>
<li class="dadsda" id="000"></li>
</ul>
My code:
for size in soup.find("ul", {"class": "product-attributes list-inline product-attributes-two-sizes"}).select('ease '):
print(size['data-productsize-combid'])
print(size['data-productsize-name'])
combidlist.append(size["data-productsize-combid"])
sizelist.append(size['data-productsize-name'])
| [
"Here is a one-liner way of retrieving the information you're after:\nfrom bs4 import BeautifulSoup as bs\n\nhtml = '''\n<ul class=\"product-attributes list-inline product-attributes-two-sizes\">\n <li class=\"ease \" id=\"12345\"></li>\n <li class=\"dsadsad\" id=\"000\"></li>\n <li class=\"dadsda\" id=\"000\"></li>\n</ul>\n'''\nsoup = bs(html, 'html.parser')\nitem = soup.select_one('ul[class=\"product-attributes list-inline product-attributes-two-sizes\"] li[class^=\"ease\"]').get('id')\nprint(item)\n\nResult in terminal:\n12345\n\nBeautifulSoup documentation can be found here\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0074661220_beautifulsoup_python.txt |
Q:
Reading part of lines from a txt
I'm trying to read a txt file with information about time, temperature and humidity; this is the shape
07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C;
07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;
08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;
08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C;
I would like to extract, for each line, the 4 pieces of information and plot them in a graph.
Using open() and fileObject.read() I can print the txt in the VSC terminal, but I don't know how to:
read the time and save it in a proper way (it's split by ":")
read the values; for example, I could read the first 5 characters after the "Humidity" word, the first 5 after "Temperature" and so on, for each line
store them in proper vectors and then plot the 3 paths as a function of time. I'm using numpy as a library.
A:
Assuming you can tolerate reading your data into a Python string, we can use re.findall here:
# -*- coding: utf-8 -*-
import re
inp = """07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C;
07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;
08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;
08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C;"""
vals = re.findall(r'^(\d{2}:\d{2}:\d{2}(?:\.\d+)?) -> Humidity:(\d+(?:\.\d+)?)%;Temperature:(\d+(?:\.\d+)?)°C;Heat index:(\d+(?:\.\d+)?)°C;', inp, flags=re.M)
print(vals)
This prints:
[('07:54:03.383', '38.00', '20.50', '19.60'),
('07:59:03.415', '37.00', '20.90', '20.01'),
('08:04:03.435', '37.00', '20.90', '20.01'),
('08:09:03.484', '37.00', '20.80', '19.90')]
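From there, a minimal sketch of the plotting step the question asks about (assuming matplotlib is installed; the labels are illustrative):
import numpy as np
import matplotlib.pyplot as plt

times = [v[0] for v in vals]                          # "HH:MM:SS.mmm" strings
humidity = np.array([float(v[1]) for v in vals])      # %
temperature = np.array([float(v[2]) for v in vals])   # °C
heat_index = np.array([float(v[3]) for v in vals])    # °C

plt.plot(times, humidity, label="Humidity [%]")
plt.plot(times, temperature, label="Temperature [°C]")
plt.plot(times, heat_index, label="Heat index [°C]")
plt.xlabel("time")
plt.legend()
plt.show()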
| Reading part of lines from a txt | I'm trying to read a txt file with informations about time, temperature and humidity, this is the shape
07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C;
07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;
08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;
08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C;
I would like to extrapolate, for each line, the 4 informations and plot them in a graph.
using open() and fileObject.read() i can plot the txt into VSC Terminal, but i don't know how to:
read the time and save it in a proper way (it's splitted by ":")
read the values, for example i could think to read the first 5 characters after "Humidity" word, the first 5 after "Temperature" and so on. For each line
store them in proper vector and then plot the 3 path in function of the time. I'm using numpy as library.
| [
"Assuming you can tolerate reading your data into a Python string, we can use re.findall here:\n# -*- coding: utf-8 -*-\nimport re\n\ninp = \"\"\"07:54:03.383 -> Humidity:38.00%;Temperature:20.50°C;Heat index:19.60°C;\n07:59:03.415 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;\n08:04:03.435 -> Humidity:37.00%;Temperature:20.90°C;Heat index:20.01°C;\n08:09:03.484 -> Humidity:37.00%;Temperature:20.80°C;Heat index:19.90°C;\"\"\"\n\nvals = re.findall(r'^(\\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?) -> Humidity:(\\d+(?:\\.\\d+)?)%;Temperature:(\\d+(?:\\.\\d+)?)°C;Heat index:(\\d+(?:\\.\\d+)?)°C;', inp, flags=re.M)\nprint(vals)\n\nThis prints:\n[('07:54:03.383', '38.00', '20.50', '19.60'),\n ('07:59:03.415', '37.00', '20.90', '20.01'),\n ('08:04:03.435', '37.00', '20.90', '20.01'),\n ('08:09:03.484', '37.00', '20.80', '19.90')]\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074665084_python.txt |
Q:
How to tunnel localhost on android
I have a Python webserver which has webhooks; when posted to localhost on desktop and tunneled through loophole.site, it works.
I further ran the Python webserver code on Android 12. It works on ports > 1024, but my TradingView alert webhooks only accept http port 80 or https 443; a localhost address is also not accepted.
Please guide me on how to access port 80 without rooting the device like a superuser, or how to tunnel the localhost http://127.0.0.1:8000/webhook on Android to enable the webserver to work on Android.
The webserver gets the alert payload and pushes it to perform some actions on an exchange.
I have attached a few pics for your reference.
The apk is Pydroid and the Python version is > 3.7.
A:
If you use an Android emulator, use http://10.0.2.2:[your port]
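If you need a public HTTPS endpoint (port 443) for the webhook rather than emulator access, a tunneling client is the usual route. A hedged sketch using ngrok as an alternative to loophole.site (assumes the ngrok client is installed where the webserver runs):
ngrok http 8000

ngrok then prints a public https URL that terminates on port 443 and forwards to local port 8000, which webhook senders that require ports 80/443 will accept.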
| How to tunnel localhost on android | I have a python webserver which has webhooks , when posted in localhost on desktop and tunnel it through loophole.site it works
I further ran the python webserver code in Android 12 it works on ports > 1024 but my trading view alert webhooks only accepts from http Port:80 or https:443 also localhost address its not accepting
Pls guide me on how to access port:80 without rooting the device like a superuser or tunnel the localhost http://127.0.0.1:8000/webhook in android to enable the webserver working in android ,
the webserver uses Get alerts payload & Push to perform some actions on a exchange
I have attached a few pics for your reference
The apk is pydroid and python version is > 3.7
enter image description here
| [
"If you use android emulator use http://10.0.2.2:[your port]\n"
] | [
0
] | [] | [] | [
"android",
"http_tunneling",
"localhost",
"python",
"webserver"
] | stackoverflow_0074664974_android_http_tunneling_localhost_python_webserver.txt |
Q:
Conditional Fillna in Pandas with conditional increment from the previous value
I want to fillna values in the 'last unique id' column based on the increment values from the previous row
**input is**
Channel last unique id
0 MYNTRA MN000351370
1 NYKAA NYK00038219
2 NYKAA NaN
3 NYKAA NaN
4 NYKAA NaN
5 NYKAA NaN
6 MYNTRA NaN
7 MYNTRA NaN
8 MYNTRA NaN
9 MYNTRA NaN
10 MYNTRA NaN
11 MYNTRA NaN
Expected output
Channel last unique id
0 MYNTRA MN000351370
1 NYKAA NYK00038219
2 NYKAA NYK00038220
3 NYKAA NYK00038221
4 NYKAA NYK00038222
5 NYKAA NYK00038223
6 MYNTRA MN000351371
7 MYNTRA MN000351372
8 MYNTRA MN000351373
9 MYNTRA MN000351374
10 MYNTRA MN000351375
11 MYNTRA MN000351376
Hope you understood the problem
A:
Example
data = {'col1': {0: 'A', 1: 'B', 2: 'A', 3: 'A', 4: 'B', 5: 'B'},
'col2': {0: 'A001', 1: 'BC020', 2: None, 3: None, 4: 'BC021', 5: None}}
df = pd.DataFrame(data)
df
col1 col2
0 A A001
1 B BC020
2 A None
3 A None
4 B BC021
5 B None
Code
df[['col3', 'col4']] = df.groupby('col1')['col2'].ffill().str.extract('(\D+)(\d+)')
df['col4'] = df['col4'].astype('int') + df.groupby(['col1', 'col4']).cumcount()
df['col2'] = df['col2'].fillna(df['col3'] + df['col4'].astype('str').str.zfill(3))
df = df.drop(['col3', 'col4'], axis=1)
result(df):
col1 col2
0 A A001
1 B BC020
2 A A002
3 A A003
4 B BC021
5 B BC022
A:
You can use groupby.cumcount to increment the number, and add it to the number part:
g = df.groupby('Channel')
# ffill per group
# extract letter and number part
df2 = (g['last unique id'].ffill()
.str.extract(r'(\D+)(\d+)')
)
# convert number part to integer
# add cumcount, merge back as string
df['last unique id'] = (df2[0]
.add(df2[1].astype(int)
.add(g.cumcount())
.astype(str)
)
)
print(df)
Output:
Channel last unique id
0 MYNTRA MN351370
1 NYKAA NYK38219
2 NYKAA NYK38220
3 NYKAA NYK38221
4 NYKAA NYK38222
5 NYKAA NYK38223
6 MYNTRA MN351371
7 MYNTRA MN351372
8 MYNTRA MN351373
9 MYNTRA MN351374
10 MYNTRA MN351375
11 MYNTRA MN351376
A:
Here is how you get the desired output with padding zeros to have your id always at a fixed length of 11.
df["last unique id"] = df.groupby("Channel")["last unique id"].ffill()
tmp = df["last unique id"].str.extract(r"(?P<ident>\D+)(?P<num>\d+)", expand=True)
tmp["add"] = df.groupby("Channel")["last unique id"].apply(
lambda x: x.eq(x.shift()).cumsum()
)
total_len_id = 11
df["last unique id"] = tmp.apply(
lambda row: row["ident"]
+ str(int(row["num"]) + row["add"]).rjust(total_len_id - len(row["ident"]), "0"),
axis=1,
)
print(df)
Channel last unique id
0 MYNTRA MN000351370
1 NYKAA NYK00038219
2 NYKAA NYK00038220
3 NYKAA NYK00038221
4 NYKAA NYK00038222
5 NYKAA NYK00038223
6 MYNTRA MN000351371
7 MYNTRA MN000351372
8 MYNTRA MN000351373
9 MYNTRA MN000351374
10 MYNTRA MN000351375
11 MYNTRA MN000351376
| Conditional Fillna in Pandas with conditional increment from the previous value | I want to fillna values in the 'last unique id' column based on the increment values from the previous row
**input is**
Channel last unique id
0 MYNTRA MN000351370
1 NYKAA NYK00038219
2 NYKAA NaN
3 NYKAA NaN
4 NYKAA NaN
5 NYKAA NaN
6 MYNTRA NaN
7 MYNTRA NaN
8 MYNTRA NaN
9 MYNTRA NaN
10 MYNTRA NaN
11 MYNTRA NaN
Expected output
Channel last unique id
0 MYNTRA MN000351370
1 NYKAA NYK00038219
2 NYKAA NYK00038220
3 NYKAA NYK00038221
4 NYKAA NYK00038222
5 NYKAA NYK00038223
6 MYNTRA MN000351371
7 MYNTRA MN000351372
8 MYNTRA MN000351373
9 MYNTRA MN000351374
10 MYNTRA MN000351375
11 MYNTRA MN000351376
Hope you understood the problem
| [
"Example\ndata = {'col1': {0: 'A', 1: 'B', 2: 'A', 3: 'A', 4: 'B', 5: 'B'},\n 'col2': {0: 'A001', 1: 'BC020', 2: None, 3: None, 4: 'BC021', 5: None}}\ndf = pd.DataFrame(data)\n\ndf\n col1 col2\n0 A A001\n1 B BC020\n2 A None\n3 A None\n4 B BC021\n5 B None\n\nCode\ndf[['col3', 'col4']] = df.groupby('col1')['col2'].ffill().str.extract('(\\D+)(\\d+)')\ndf['col4'] = df['col4'].astype('int') + df.groupby(['col1', 'col4']).cumcount()\ndf['col2'] = df['col2'].fillna(df['col3'] + df['col4'].astype('str').str.zfill(3))\ndf = df.drop(['col3', 'col4'], axis=1)\n\nresult(df):\n col1 col2\n0 A A001\n1 B BC020\n2 A A002\n3 A A003\n4 B BC021\n5 B BC022\n\n",
"You can use groupby.cumcount to increment the number, and add it the the number part:\ng = df.groupby('Channel')\n\n# ffill per group\n# extract letter and number part\ndf2 = (g['last unique id'].ffill()\n .str.extract(r'(\\D+)(\\d+)')\n )\n\n# convert number part to integer\n# add cumcount, merge back as string\ndf['last unique id'] = (df2[0]\n .add(df2[1].astype(int)\n .add(g.cumcount())\n .astype(str)\n )\n )\n\nprint(df)\n\nOutput:\n Channel last unique id\n0 MYNTRA MN351370\n1 NYKAA NYK38219\n2 NYKAA NYK38220\n3 NYKAA NYK38221\n4 NYKAA NYK38222\n5 NYKAA NYK38223\n6 MYNTRA MN351371\n7 MYNTRA MN351372\n8 MYNTRA MN351373\n9 MYNTRA MN351374\n10 MYNTRA MN351375\n11 MYNTRA MN351376\n\n",
"Here is how you get the desired output with padding zeros to have your id always at a fixed length of 11.\ndf[\"last unique id\"] = df.groupby(\"Channel\")[\"last unique id\"].ffill()\n\ntmp = df[\"last unique id\"].str.extract(r\"(?P<ident>\\D+)(?P<num>\\d+)\", expand=True)\ntmp[\"add\"] = df.groupby(\"Channel\")[\"last unique id\"].apply(\n lambda x: x.eq(x.shift()).cumsum()\n)\n\ntotal_len_id = 11\ndf[\"last unique id\"] = tmp.apply(\n lambda row: row[\"ident\"]\n + str(int(row[\"num\"]) + row[\"add\"]).rjust(total_len_id - len(row[\"ident\"]), \"0\"),\n axis=1,\n)\n\nprint(df)\n\n Channel last unique id\n0 MYNTRA MN000351370\n1 NYKAA NYK00038219\n2 NYKAA NYK00038220\n3 NYKAA NYK00038221\n4 NYKAA NYK00038222\n5 NYKAA NYK00038223\n6 MYNTRA MN000351371\n7 MYNTRA MN000351372\n8 MYNTRA MN000351373\n9 MYNTRA MN000351374\n10 MYNTRA MN000351375\n11 MYNTRA MN000351376\n\n"
] | [
0,
0,
0
] | [] | [] | [
"dataframe",
"loops",
"numpy",
"pandas",
"python"
] | stackoverflow_0074664488_dataframe_loops_numpy_pandas_python.txt |
Q:
How to mute/unmute sound using pywin32?
My searches led me to Pywin32, which should be able to mute/unmute the sound and detect its state (on Windows 10, using Python 3+). I found a way using an AutoHotkey script, but I'm looking for a pythonic way.
More specifically, I'm not interested in playing with the Windows GUI. Pywin32 works using a Windows DLL.
so far, I am able to do it by calling an ahk script:
In the python script:
import subprocess
subprocess.call([ahkexe, ahkscript])
In the AutoHotkey script:
SoundGet, sound_mute, Master, mute
if sound_mute = On ; if the sound is muted
Send {Volume_Mute} ; press the "mute button" to unmute
SoundSet 30 ; set the sound level at 30
A:
You can use the Windows Sound Manager by paradoxis (https://github.com/Paradoxis/Windows-Sound-Manager).
from sound import Sound
Sound.mute()
Every call to Sound.mute() will toggle mute on or off. Have a look at the main.py to see how to use the setter and getter methods.
A:
If you're also building a GUI, wxPython (and I would believe other GUI frameworks) has access to the Windows audio mute "button".
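For reference, a hedged sketch using the pycaw package instead (an assumption: pip install pycaw; it wraps the Windows Core Audio API rather than pywin32) that reproduces the AHK script's unmute-and-set-30% behaviour:
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))

if volume.GetMute():                            # detect the current mute state
    volume.SetMute(0, None)                     # unmute if currently muted
volume.SetMasterVolumeLevelScalar(0.30, None)   # set the level to 30%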
| How to mute/unmute sound using pywin32? | My searches lead me to the Pywin32 which should be able to mute/unmute the sound and detect its state (on Windows 10, using Python 3+). I found a way using an AutoHotkey script, but I'm looking for a pythonic way.
More specifically, I'm not interested in playing with the Windows GUI. Pywin32 works using a Windows DLL.
so far, I am able to do it by calling an ahk script:
In the python script:
import subprocess
subprocess.call([ahkexe, ahkscript])
In the AutoHotkey script:
SoundGet, sound_mute, Master, mute
if sound_mute = On ; if the sound is muted
Send {Volume_Mute} ; press the "mute button" to unmute
SoundSet 30 ; set the sound level at 30
| [
"You can use the Windows Sound Manager by paradoxis (https://github.com/Paradoxis/Windows-Sound-Manager). \nfrom sound import Sound\nSound.mute()\n\nEvery call to Sound.mute() will toggle mute on or off. Have a look at the main.py to see how to use the setter and getter methods.\n",
"If you're also building a GUI, wxPython (and I would believe other GUI frameworks) have access to the windows audio mute \"button\".\n"
] | [
2,
0
] | [] | [] | [
"audio",
"python",
"python_3.x",
"pywin32",
"windows"
] | stackoverflow_0055399396_audio_python_python_3.x_pywin32_windows.txt |
Q:
How to prettyprint a JSON file?
How do I pretty-print a JSON file in Python?
A:
Use the indent= parameter of json.dump() or json.dumps() to specify how many spaces to indent by:
>>> import json
>>>
>>> your_json = '["foo", {"bar": ["baz", null, 1.0, 2]}]'
>>> parsed = json.loads(your_json)
>>> print(json.dumps(parsed, indent=4))
[
"foo",
{
"bar": [
"baz",
null,
1.0,
2
]
}
]
To parse a file, use json.load():
with open('filename.txt', 'r') as handle:
parsed = json.load(handle)
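And to write the pretty-printed version back out, json.dump() takes the same indent= parameter — a small sketch (file names are illustrative):
with open('filename.txt', 'r') as handle:
    parsed = json.load(handle)
with open('pretty.txt', 'w') as handle:
    json.dump(parsed, handle, indent=4)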
A:
You can do this on the command line:
python3 -m json.tool some.json
(as already mentioned in the commentaries to the question, thanks to @Kai Petzke for the python3 suggestion).
Actually python is not my favourite tool as far as json processing on the command line is concerned. For simple pretty printing it is ok, but if you want to manipulate the json it can become overcomplicated. You'd soon need to write a separate script file, and you could end up with maps whose keys are u"some-key" (python unicode), which makes selecting fields more difficult and doesn't really go in the direction of pretty-printing.
You can also use jq:
jq . some.json
and you get colors as a bonus (and way easier extendability).
Addendum: There is some confusion in the comments about using jq to process large JSON files on the one hand, and having a very large jq program on the other. For pretty-printing a file consisting of a single large JSON entity, the practical limitation is RAM. For pretty-printing a 2GB file consisting of a single array of real-world data, the "maximum resident set size" required for pretty-printing was 5GB (whether using jq 1.5 or 1.6). Note also that jq can be used from within python after pip install jq.
A:
You could use the built-in module pprint (https://docs.python.org/3.9/library/pprint.html).
How you can read the file with json data and print it out.
import json
import pprint
json_data = None
with open('file_name.txt', 'r') as f:
data = f.read()
json_data = json.loads(data)
print(json_data)
{"firstName": "John", "lastName": "Smith", "isAlive": "true", "age": 27, "address": {"streetAddress": "21 2nd Street", "city": "New York", "state": "NY", "postalCode": "10021-3100"}, 'children': []}
pprint.pprint(json_data)
{'address': {'city': 'New York',
'postalCode': '10021-3100',
'state': 'NY',
'streetAddress': '21 2nd Street'},
'age': 27,
'children': [],
'firstName': 'John',
'isAlive': True,
'lastName': 'Smith'}
The output is not valid json, because pprint uses single quotes and the JSON specification requires double quotes.
If you want to rewrite the pretty-print formatted json to a file, you have to use pprint.pformat.
pretty_print_json = pprint.pformat(json_data).replace("'", '"')
with open('file_name.json', 'w') as f:
f.write(pretty_print_json)
A:
Pygmentize + Python json.tool = Pretty Print with Syntax Highlighting
Pygmentize is a killer tool. See this.
I combine python json.tool with pygmentize
echo '{"foo": "bar"}' | python -m json.tool | pygmentize -l json
See the link above for pygmentize installation instruction.
A demo of this was shown as a screenshot of the syntax-highlighted output in the original answer.
A:
Use this function and don't sweat having to remember if your JSON is a str or dict again - just look at the pretty print:
import json
def pp_json(json_thing, sort=True, indents=4):
if type(json_thing) is str:
print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
else:
print(json.dumps(json_thing, sort_keys=sort, indent=indents))
return None
pp_json(your_json_string_or_dict)
A:
Use pprint: https://docs.python.org/3.6/library/pprint.html
import pprint
pprint.pprint(json)
print() compared to pprint.pprint()
print(json)
{'feed': {'title': 'W3Schools Home Page', 'title_detail': {'type': 'text/plain', 'language': None, 'base': '', 'value': 'W3Schools Home Page'}, 'links': [{'rel': 'alternate', 'type': 'text/html', 'href': 'https://www.w3schools.com'}], 'link': 'https://www.w3schools.com', 'subtitle': 'Free web building tutorials', 'subtitle_detail': {'type': 'text/html', 'language': None, 'base': '', 'value': 'Free web building tutorials'}}, 'entries': [], 'bozo': 0, 'encoding': 'utf-8', 'version': 'rss20', 'namespaces': {}}
pprint.pprint(json)
{'bozo': 0,
'encoding': 'utf-8',
'entries': [],
'feed': {'link': 'https://www.w3schools.com',
'links': [{'href': 'https://www.w3schools.com',
'rel': 'alternate',
'type': 'text/html'}],
'subtitle': 'Free web building tutorials',
'subtitle_detail': {'base': '',
'language': None,
'type': 'text/html',
'value': 'Free web building tutorials'},
'title': 'W3Schools Home Page',
'title_detail': {'base': '',
'language': None,
'type': 'text/plain',
'value': 'W3Schools Home Page'}},
'namespaces': {},
'version': 'rss20'}
A:
To be able to pretty print from the command line and be able to have control over the indentation etc. you can set up an alias similar to this:
alias jsonpp="python3 -c 'import sys, json; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'"
And then use the alias in one of these ways:
cat myfile.json | jsonpp
jsonpp < myfile.json
A:
def saveJson(data, fileToSave):
    with open(fileToSave, 'w+') as f:
        json.dump(data, f, ensure_ascii=True, indent=4, sort_keys=True)
It works to display or save it to a file.
A:
Here's a simple example of pretty printing JSON to the console in a nice way in Python, without requiring the JSON to be on your computer as a local file:
import pprint
import json
from urllib.request import urlopen # (Only used to get this example)
# Getting a JSON example for this example
r = urlopen("https://mdn.github.io/fetch-examples/fetch-json/products.json")
text = r.read()
# To print it
pprint.pprint(json.loads(text))
A:
You could try pprintjson.
Installation
$ pip3 install pprintjson
Usage
Pretty print JSON from a file using the pprintjson CLI.
$ pprintjson "./path/to/file.json"
Pretty print JSON from a stdin using the pprintjson CLI.
$ echo '{ "a": 1, "b": "string", "c": true }' | pprintjson
Pretty print JSON from a string using the pprintjson CLI.
$ pprintjson -c '{ "a": 1, "b": "string", "c": true }'
Pretty print JSON from a string with an indent of 1.
$ pprintjson -c '{ "a": 1, "b": "string", "c": true }' -i 1
Pretty print JSON from a string and save output to a file output.json.
$ pprintjson -c '{ "a": 1, "b": "string", "c": true }' -o ./output.json
Output: a colorized, pretty-printed rendering of the input (shown as a screenshot in the original answer).
A:
I think it's better to parse the JSON first, to avoid errors:
import json
from json import JSONDecodeError

def format_response(response):
    try:
        parsed = json.loads(response.text)
    except JSONDecodeError:
        return response.text
    return json.dumps(parsed, ensure_ascii=True, indent=4)
A:
I had a similar requirement to dump the contents of a json file for logging, something quick and easy (assumes import json and import os at the top):
print(json.dumps(json.load(open(os.path.join('<myPath>', '<myjson>'), "r")), indent=4))
If you use it often, then put it in a function:
def pp_json_file(path, file):
print(json.dumps(json.load(open(os.path.join(path, file), "r")), indent = 4))
A:
Hopefully this helps someone else.
In the case when there is an error that something is not JSON serializable, the answers above will not work. If you only want to save it so that it is human readable, then you need to recursively call str on all the non-dictionary elements of your dictionary. If you want to load it later, then save it as a pickle file and then load it (e.g. torch.save(obj, f) works fine).
This is what worked for me:
#%%
def _to_json_dict_with_strings(dictionary):
"""
Convert dict to dict with leafs only being strings. So it recursively makes keys to strings
if they are not dictionaries.
Use case:
- saving dictionary of tensors (convert the tensors to strings!)
- saving arguments from script (e.g. argparse) for it to be pretty
e.g.
"""
if type(dictionary) != dict:
return str(dictionary)
d = {k: _to_json_dict_with_strings(v) for k, v in dictionary.items()}
return d
def to_json(dic):
import types
import argparse
if type(dic) is dict:
dic = dict(dic)
else:
dic = dic.__dict__
return _to_json_dict_with_strings(dic)
def save_to_json_pretty(dic, path, mode='w', indent=4, sort_keys=True):
import json
with open(path, mode) as f:
json.dump(to_json(dic), f, indent=indent, sort_keys=sort_keys)
def my_pprint(dic):
"""
@param dic:
@return:
Note: this is not the same as pprint.
"""
import json
    # make all keys strings recursively with their native str function
dic = to_json(dic)
# pretty print
pretty_dic = json.dumps(dic, indent=4, sort_keys=True)
print(pretty_dic)
# print(json.dumps(dic, indent=4, sort_keys=True))
# return pretty_dic
import torch
# import json # results in non serializabe errors for torch.Tensors
from pprint import pprint
dic = {'x': torch.randn(1, 3), 'rec': {'y': torch.randn(1, 3)}}
my_pprint(dic)
pprint(dic)
output:
{
"rec": {
"y": "tensor([[-0.3137, 0.3138, 1.2894]])"
},
"x": "tensor([[-1.5909, 0.0516, -1.5445]])"
}
{'rec': {'y': tensor([[-0.3137, 0.3138, 1.2894]])},
'x': tensor([[-1.5909, 0.0516, -1.5445]])}
I don't know why returning the string and then printing it doesn't work, but it seems you have to put the dumps directly in the print statement. Note that pprint, as has already been suggested, works too. Note that not all objects can be converted to a dict with dict(dic), which is why some of my code checks this condition.
Context:
I wanted to save pytorch strings but I kept getting the error:
TypeError: tensor is not JSON serializable
so I coded the above. Note that yes, in pytorch you use torch.save but pickle files aren't readable. Check this related post: https://discuss.pytorch.org/t/typeerror-tensor-is-not-json-serializable/36065/3
pprint also has an indent argument, but I didn't like how it looks:
pprint(stats, indent=4, sort_dicts=True)
output:
{ 'cca': { 'all': {'avg': tensor(0.5132), 'std': tensor(0.1532)},
'avg': tensor([0.5993, 0.5571, 0.4910, 0.4053]),
'rep': {'avg': tensor(0.5491), 'std': tensor(0.0743)},
'std': tensor([0.0316, 0.0368, 0.0910, 0.2490])},
'cka': { 'all': {'avg': tensor(0.7885), 'std': tensor(0.3449)},
'avg': tensor([1.0000, 0.9840, 0.9442, 0.2260]),
'rep': {'avg': tensor(0.9761), 'std': tensor(0.0468)},
'std': tensor([5.9043e-07, 2.9688e-02, 6.3634e-02, 2.1686e-01])},
'cosine': { 'all': {'avg': tensor(0.5931), 'std': tensor(0.7158)},
'avg': tensor([ 0.9825, 0.9001, 0.7909, -0.3012]),
'rep': {'avg': tensor(0.8912), 'std': tensor(0.1571)},
'std': tensor([0.0371, 0.1232, 0.1976, 0.9536])},
'nes': { 'all': {'avg': tensor(0.6771), 'std': tensor(0.2891)},
'avg': tensor([0.9326, 0.8038, 0.6852, 0.2867]),
'rep': {'avg': tensor(0.8072), 'std': tensor(0.1596)},
'std': tensor([0.0695, 0.1266, 0.1578, 0.2339])},
'nes_output': { 'all': {'avg': None, 'std': None},
'avg': tensor(0.2975),
'rep': {'avg': None, 'std': None},
'std': tensor(0.0945)},
'query_loss': { 'all': {'avg': None, 'std': None},
'avg': tensor(12.3746),
'rep': {'avg': None, 'std': None},
'std': tensor(13.7910)}}
compare to:
{
"cca": {
"all": {
"avg": "tensor(0.5144)",
"std": "tensor(0.1553)"
},
"avg": "tensor([0.6023, 0.5612, 0.4874, 0.4066])",
"rep": {
"avg": "tensor(0.5503)",
"std": "tensor(0.0796)"
},
"std": "tensor([0.0285, 0.0367, 0.1004, 0.2493])"
},
"cka": {
"all": {
"avg": "tensor(0.7888)",
"std": "tensor(0.3444)"
},
"avg": "tensor([1.0000, 0.9840, 0.9439, 0.2271])",
"rep": {
"avg": "tensor(0.9760)",
"std": "tensor(0.0468)"
},
"std": "tensor([5.7627e-07, 2.9689e-02, 6.3541e-02, 2.1684e-01])"
},
"cosine": {
"all": {
"avg": "tensor(0.5945)",
"std": "tensor(0.7146)"
},
"avg": "tensor([ 0.9825, 0.9001, 0.7907, -0.2953])",
"rep": {
"avg": "tensor(0.8911)",
"std": "tensor(0.1571)"
},
"std": "tensor([0.0371, 0.1231, 0.1975, 0.9554])"
},
"nes": {
"all": {
"avg": "tensor(0.6773)",
"std": "tensor(0.2886)"
},
"avg": "tensor([0.9326, 0.8037, 0.6849, 0.2881])",
"rep": {
"avg": "tensor(0.8070)",
"std": "tensor(0.1595)"
},
"std": "tensor([0.0695, 0.1265, 0.1576, 0.2341])"
},
"nes_output": {
"all": {
"avg": "None",
"std": "None"
},
"avg": "tensor(0.2976)",
"rep": {
"avg": "None",
"std": "None"
},
"std": "tensor(0.0945)"
},
"query_loss": {
"all": {
"avg": "None",
"std": "None"
},
"avg": "tensor(12.3616)",
"rep": {
"avg": "None",
"std": "None"
},
"std": "tensor(13.7976)"
}
}
A:
json.loads() converts the JSON data to a dictionary. Finally, use json.dumps() to pretty-print the JSON (assumes import json at the top):
_json = '{"name":"John", "age":30, "car":null}'
data = json.loads(_json)
print(json.dumps(data, indent=2))
A:
For most uses, indent should do it:
print(json.dumps(parsed, indent=2))
A JSON structure is basically a tree structure.
While trying to find something fancier, I came across this nice paper depicting other forms of nice trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/.
It has some interactive trees and even comes with some code, including a collapsing tree example from SO.
Other samples include using plotly. Here is the code example from plotly:
import plotly.express as px
fig = px.treemap(
names = ["Eve","Cain", "Seth", "Enos", "Noam", "Abel", "Awan", "Enoch", "Azura"],
parents = ["", "Eve", "Eve", "Seth", "Seth", "Eve", "Eve", "Awan", "Eve"]
)
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25))
fig.show()
And using treelib. On that note, this GitHub repo also provides nice visualizations. Here is one example using treelib:
#%pip install treelib
from treelib import Tree
country_tree = Tree()
# Create a root node
country_tree.create_node("Country", "countries")
# Group by country
for country, regions in wards_df.head(5).groupby(["CTRY17NM", "CTRY17CD"]):
# Generate a node for each country
country_tree.create_node(country[0], country[1], parent="countries")
# Group by region
for region, las in regions.groupby(["GOR10NM", "GOR10CD"]):
# Generate a node for each region
country_tree.create_node(region[0], region[1], parent=country[1])
# Group by local authority
for la, wards in las.groupby(['LAD17NM', 'LAD17CD']):
# Create a node for each local authority
country_tree.create_node(la[0], la[1], parent=region[1])
for ward, _ in wards.groupby(['WD17NM', 'WD17CD']):
# Create a leaf node for each ward
country_tree.create_node(ward[0], ward[1], parent=la[1])
# Output the hierarchical data
country_tree.show()
I have, based on this, created a function to convert json to a tree:
from treelib import Node, Tree, node
def json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'):
if tree is None:
tree = Tree()
root_id = counter_byref[0]
if verbose:
print(f"tree.create_node({'+'}, {root_id})")
tree.create_node('+', root_id)
counter_byref[0] += 1
parent_id = root_id
if type(o) == dict:
for k,v in o.items():
this_id = counter_byref[0]
if verbose:
print(f"tree.create_node({str(k)}, {this_id}, parent={parent_id})")
tree.create_node(str(k), this_id, parent=parent_id)
counter_byref[0] += 1
json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)
elif type(o) == list:
if listsNodeSymbol is not None:
if verbose:
print(f"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})")
tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id)
parent_id=counter_byref[0]
counter_byref[0] += 1
for i in o:
json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol)
else: #node
if verbose:
print(f"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})")
tree.create_node(str(o), counter_byref[0], parent=parent_id)
counter_byref[0] += 1
return tree
Then for example:
import json
json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),verbose=False,listsNodeSymbol='+').show()
gives the more descriptive:
+
├── 2
│ └── 3
└── 4
└── +
├── 5
└── 6
While
json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),listsNodeSymbol=None).show()
Gives the more compact
+
├── 2
│ └── 3
└── 4
├── 5
└── 6
For a more extensive conversion with different flavors of trees, check out this function
| How to prettyprint a JSON file? | How do I pretty-print a JSON file in Python?
| [
"Use the indent= parameter of json.dump() or json.dumps() to specify how many spaces to indent by:\n>>> import json\n>>>\n>>> your_json = '[\"foo\", {\"bar\": [\"baz\", null, 1.0, 2]}]'\n>>> parsed = json.loads(your_json)\n>>> print(json.dumps(parsed, indent=4))\n[\n \"foo\",\n {\n \"bar\": [\n \"baz\",\n null,\n 1.0,\n 2\n ]\n }\n]\n\nTo parse a file, use json.load():\nwith open('filename.txt', 'r') as handle:\n parsed = json.load(handle)\n\n",
"You can do this on the command line:\npython3 -m json.tool some.json\n\n(as already mentioned in the commentaries to the question, thanks to @Kai Petzke for the python3 suggestion).\nActually python is not my favourite tool as far as json processing on the command line is concerned. For simple pretty printing is ok, but if you want to manipulate the json it can become overcomplicated. You'd soon need to write a separate script-file, you could end up with maps whose keys are u\"some-key\" (python unicode), which makes selecting fields more difficult and doesn't really go in the direction of pretty-printing.\nYou can also use jq:\njq . some.json\n\nand you get colors as a bonus (and way easier extendability).\nAddendum: There is some confusion in the comments about using jq to process large JSON files on the one hand, and having a very large jq program on the other. For pretty-printing a file consisting of a single large JSON entity, the practical limitation is RAM. For pretty-printing a 2GB file consisting of a single array of real-world data, the \"maximum resident set size\" required for pretty-printing was 5GB (whether using jq 1.5 or 1.6). Note also that jq can be used from within python after pip install jq.\n",
"You could use the built-in module pprint (https://docs.python.org/3.9/library/pprint.html).\nHow you can read the file with json data and print it out.\nimport json\nimport pprint\n\njson_data = None\nwith open('file_name.txt', 'r') as f:\n data = f.read()\n json_data = json.loads(data)\n\nprint(json_data)\n{\"firstName\": \"John\", \"lastName\": \"Smith\", \"isAlive\": \"true\", \"age\": 27, \"address\": {\"streetAddress\": \"21 2nd Street\", \"city\": \"New York\", \"state\": \"NY\", \"postalCode\": \"10021-3100\"}, 'children': []}\n\npprint.pprint(json_data)\n{'address': {'city': 'New York',\n 'postalCode': '10021-3100',\n 'state': 'NY',\n 'streetAddress': '21 2nd Street'},\n 'age': 27,\n 'children': [],\n 'firstName': 'John',\n 'isAlive': True,\n 'lastName': 'Smith'}\n\nThe output is not a valid json, because pprint use single quotes and json specification require double quotes.\nIf you want to rewrite the pretty print formated json to a file, you have to use pprint.pformat.\npretty_print_json = pprint.pformat(json_data).replace(\"'\", '\"')\n\nwith open('file_name.json', 'w') as f:\n f.write(pretty_print_json)\n\n",
"Pygmentize + Python json.tool = Pretty Print with Syntax Highlighting\nPygmentize is a killer tool. See this.\nI combine python json.tool with pygmentize\necho '{\"foo\": \"bar\"}' | python -m json.tool | pygmentize -l json\n\nSee the link above for pygmentize installation instruction.\nA demo of this is in the image below:\n\n",
"Use this function and don't sweat having to remember if your JSON is a str or dict again - just look at the pretty print:\nimport json\n\ndef pp_json(json_thing, sort=True, indents=4):\n if type(json_thing) is str:\n print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))\n else:\n print(json.dumps(json_thing, sort_keys=sort, indent=indents))\n return None\n\npp_json(your_json_string_or_dict)\n\n",
"Use pprint: https://docs.python.org/3.6/library/pprint.html\nimport pprint\npprint.pprint(json)\n\nprint() compared to pprint.pprint()\nprint(json)\n{'feed': {'title': 'W3Schools Home Page', 'title_detail': {'type': 'text/plain', 'language': None, 'base': '', 'value': 'W3Schools Home Page'}, 'links': [{'rel': 'alternate', 'type': 'text/html', 'href': 'https://www.w3schools.com'}], 'link': 'https://www.w3schools.com', 'subtitle': 'Free web building tutorials', 'subtitle_detail': {'type': 'text/html', 'language': None, 'base': '', 'value': 'Free web building tutorials'}}, 'entries': [], 'bozo': 0, 'encoding': 'utf-8', 'version': 'rss20', 'namespaces': {}}\n\npprint.pprint(json)\n{'bozo': 0,\n 'encoding': 'utf-8',\n 'entries': [],\n 'feed': {'link': 'https://www.w3schools.com',\n 'links': [{'href': 'https://www.w3schools.com',\n 'rel': 'alternate',\n 'type': 'text/html'}],\n 'subtitle': 'Free web building tutorials',\n 'subtitle_detail': {'base': '',\n 'language': None,\n 'type': 'text/html',\n 'value': 'Free web building tutorials'},\n 'title': 'W3Schools Home Page',\n 'title_detail': {'base': '',\n 'language': None,\n 'type': 'text/plain',\n 'value': 'W3Schools Home Page'}},\n 'namespaces': {},\n 'version': 'rss20'}\n\n",
"To be able to pretty print from the command line and be able to have control over the indentation etc. you can set up an alias similar to this:\nalias jsonpp=\"python -c 'import sys, json; print json.dumps(json.load(sys.stdin), sort_keys=True, indent=2)'\"\n\nAnd then use the alias in one of these ways:\ncat myfile.json | jsonpp\njsonpp < myfile.json\n\n",
"def saveJson(date,fileToSave):\n with open(fileToSave, 'w+') as fileToSave:\n json.dump(date, fileToSave, ensure_ascii=True, indent=4, sort_keys=True)\n\nIt works to display or save it to a file.\n",
"Here's a simple example of pretty printing JSON to the console in a nice way in Python, without requiring the JSON to be on your computer as a local file: \nimport pprint\nimport json \nfrom urllib.request import urlopen # (Only used to get this example)\n\n# Getting a JSON example for this example \nr = urlopen(\"https://mdn.github.io/fetch-examples/fetch-json/products.json\")\ntext = r.read() \n\n# To print it\npprint.pprint(json.loads(text))\n\n",
"You could try pprintjson.\n\nInstallation\n$ pip3 install pprintjson\n\nUsage\nPretty print JSON from a file using the pprintjson CLI.\n$ pprintjson \"./path/to/file.json\"\n\nPretty print JSON from a stdin using the pprintjson CLI.\n$ echo '{ \"a\": 1, \"b\": \"string\", \"c\": true }' | pprintjson\n\nPretty print JSON from a string using the pprintjson CLI.\n$ pprintjson -c '{ \"a\": 1, \"b\": \"string\", \"c\": true }'\n\nPretty print JSON from a string with an indent of 1.\n$ pprintjson -c '{ \"a\": 1, \"b\": \"string\", \"c\": true }' -i 1\n\nPretty print JSON from a string and save output to a file output.json.\n$ pprintjson -c '{ \"a\": 1, \"b\": \"string\", \"c\": true }' -o ./output.json\n\nOutput\n\n",
"I think that's better to parse the json before, to avoid errors:\ndef format_response(response):\n try:\n parsed = json.loads(response.text)\n except JSONDecodeError:\n return response.text\n return json.dumps(parsed, ensure_ascii=True, indent=4)\n\n",
"I had a similar requirement to dump the contents of json file for logging, something quick and easy:\nprint(json.dumps(json.load(open(os.path.join('<myPath>', '<myjson>'), \"r\")), indent = 4 ))\n\nif you use it often then put it in a function:\ndef pp_json_file(path, file):\n print(json.dumps(json.load(open(os.path.join(path, file), \"r\")), indent = 4))\n\n",
"Hopefully this helps someone else.\nIn the case when there is a error that something is not json serializable the answers above will not work. If you only want to save it so that is human readable then you need to recursively call string on all the non dictionary elements of your dictionary. If you want to load it later then save it as a pickle file then load it (e.g. torch.save(obj, f) works fine).\nThis is what worked for me:\n#%%\n\ndef _to_json_dict_with_strings(dictionary):\n \"\"\"\n Convert dict to dict with leafs only being strings. So it recursively makes keys to strings\n if they are not dictionaries.\n\n Use case:\n - saving dictionary of tensors (convert the tensors to strins!)\n - saving arguments from script (e.g. argparse) for it to be pretty\n\n e.g.\n\n \"\"\"\n if type(dictionary) != dict:\n return str(dictionary)\n d = {k: _to_json_dict_with_strings(v) for k, v in dictionary.items()}\n return d\n\ndef to_json(dic):\n import types\n import argparse\n\n if type(dic) is dict:\n dic = dict(dic)\n else:\n dic = dic.__dict__\n return _to_json_dict_with_strings(dic)\n\ndef save_to_json_pretty(dic, path, mode='w', indent=4, sort_keys=True):\n import json\n\n with open(path, mode) as f:\n json.dump(to_json(dic), f, indent=indent, sort_keys=sort_keys)\n\ndef my_pprint(dic):\n \"\"\"\n\n @param dic:\n @return:\n\n Note: this is not the same as pprint.\n \"\"\"\n import json\n\n # make all keys strings recursively with their naitve str function\n dic = to_json(dic)\n # pretty print\n pretty_dic = json.dumps(dic, indent=4, sort_keys=True)\n print(pretty_dic)\n # print(json.dumps(dic, indent=4, sort_keys=True))\n # return pretty_dic\n\nimport torch\n# import json # results in non serializabe errors for torch.Tensors\nfrom pprint import pprint\n\ndic = {'x': torch.randn(1, 3), 'rec': {'y': torch.randn(1, 3)}}\n\nmy_pprint(dic)\npprint(dic)\n\noutput:\n{\n \"rec\": {\n \"y\": \"tensor([[-0.3137, 0.3138, 1.2894]])\"\n },\n \"x\": \"tensor([[-1.5909, 0.0516, -1.5445]])\"\n}\n{'rec': {'y': tensor([[-0.3137, 0.3138, 1.2894]])},\n 'x': tensor([[-1.5909, 0.0516, -1.5445]])}\n\nI don't know why returning the string then printing it doesn't work but it seems you have to put the dumps directly in the print statement. Note pprint as it has been suggested already works too. Note not all objects can be converted to a dict with dict(dic) which is why some of my code has checks on this condition.\nContext:\nI wanted to save pytorch strings but I kept getting the error:\nTypeError: tensor is not JSON serializable\n\nso I coded the above. Note that yes, in pytorch you use torch.save but pickle files aren't readable. 
Check this related post: https://discuss.pytorch.org/t/typeerror-tensor-is-not-json-serializable/36065/3\n\nPPrint also has indent arguments but I didn't like how it looks:\n pprint(stats, indent=4, sort_dicts=True)\n\noutput:\n{ 'cca': { 'all': {'avg': tensor(0.5132), 'std': tensor(0.1532)},\n 'avg': tensor([0.5993, 0.5571, 0.4910, 0.4053]),\n 'rep': {'avg': tensor(0.5491), 'std': tensor(0.0743)},\n 'std': tensor([0.0316, 0.0368, 0.0910, 0.2490])},\n 'cka': { 'all': {'avg': tensor(0.7885), 'std': tensor(0.3449)},\n 'avg': tensor([1.0000, 0.9840, 0.9442, 0.2260]),\n 'rep': {'avg': tensor(0.9761), 'std': tensor(0.0468)},\n 'std': tensor([5.9043e-07, 2.9688e-02, 6.3634e-02, 2.1686e-01])},\n 'cosine': { 'all': {'avg': tensor(0.5931), 'std': tensor(0.7158)},\n 'avg': tensor([ 0.9825, 0.9001, 0.7909, -0.3012]),\n 'rep': {'avg': tensor(0.8912), 'std': tensor(0.1571)},\n 'std': tensor([0.0371, 0.1232, 0.1976, 0.9536])},\n 'nes': { 'all': {'avg': tensor(0.6771), 'std': tensor(0.2891)},\n 'avg': tensor([0.9326, 0.8038, 0.6852, 0.2867]),\n 'rep': {'avg': tensor(0.8072), 'std': tensor(0.1596)},\n 'std': tensor([0.0695, 0.1266, 0.1578, 0.2339])},\n 'nes_output': { 'all': {'avg': None, 'std': None},\n 'avg': tensor(0.2975),\n 'rep': {'avg': None, 'std': None},\n 'std': tensor(0.0945)},\n 'query_loss': { 'all': {'avg': None, 'std': None},\n 'avg': tensor(12.3746),\n 'rep': {'avg': None, 'std': None},\n 'std': tensor(13.7910)}}\n\ncompare to:\n{\n \"cca\": {\n \"all\": {\n \"avg\": \"tensor(0.5144)\",\n \"std\": \"tensor(0.1553)\"\n },\n \"avg\": \"tensor([0.6023, 0.5612, 0.4874, 0.4066])\",\n \"rep\": {\n \"avg\": \"tensor(0.5503)\",\n \"std\": \"tensor(0.0796)\"\n },\n \"std\": \"tensor([0.0285, 0.0367, 0.1004, 0.2493])\"\n },\n \"cka\": {\n \"all\": {\n \"avg\": \"tensor(0.7888)\",\n \"std\": \"tensor(0.3444)\"\n },\n \"avg\": \"tensor([1.0000, 0.9840, 0.9439, 0.2271])\",\n \"rep\": {\n \"avg\": \"tensor(0.9760)\",\n \"std\": \"tensor(0.0468)\"\n },\n \"std\": \"tensor([5.7627e-07, 2.9689e-02, 6.3541e-02, 2.1684e-01])\"\n },\n \"cosine\": {\n \"all\": {\n \"avg\": \"tensor(0.5945)\",\n \"std\": \"tensor(0.7146)\"\n },\n \"avg\": \"tensor([ 0.9825, 0.9001, 0.7907, -0.2953])\",\n \"rep\": {\n \"avg\": \"tensor(0.8911)\",\n \"std\": \"tensor(0.1571)\"\n },\n \"std\": \"tensor([0.0371, 0.1231, 0.1975, 0.9554])\"\n },\n \"nes\": {\n \"all\": {\n \"avg\": \"tensor(0.6773)\",\n \"std\": \"tensor(0.2886)\"\n },\n \"avg\": \"tensor([0.9326, 0.8037, 0.6849, 0.2881])\",\n \"rep\": {\n \"avg\": \"tensor(0.8070)\",\n \"std\": \"tensor(0.1595)\"\n },\n \"std\": \"tensor([0.0695, 0.1265, 0.1576, 0.2341])\"\n },\n \"nes_output\": {\n \"all\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"avg\": \"tensor(0.2976)\",\n \"rep\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"std\": \"tensor(0.0945)\"\n },\n \"query_loss\": {\n \"all\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"avg\": \"tensor(12.3616)\",\n \"rep\": {\n \"avg\": \"None\",\n \"std\": \"None\"\n },\n \"std\": \"tensor(13.7976)\"\n }\n}\n\n",
"json.loads() converts the json data to dictionary. Finally, use json.dumps() to prettyprint the json.\n_json = '{\"name\":\"John\", \"age\":30, \"car\":null}'\n\ndata = json.loads(_json)\n\nprint (json.dumps(data, indent=2))\n\n",
"For most uses, indent should do it:\nprint(json.dumps(parsed, indent=2))\n\nA Json structure is basically tree structure.\nWhile trying to find something fancier, I came across this nice paper depicting other forms of nice trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/.\nIt has some interactive trees and even comes with some code including this collapsing tree from so:\n\nOther samples include using plotly Here is the code example from plotly:\nimport plotly.express as px\nfig = px.treemap(\n names = [\"Eve\",\"Cain\", \"Seth\", \"Enos\", \"Noam\", \"Abel\", \"Awan\", \"Enoch\", \"Azura\"],\n parents = [\"\", \"Eve\", \"Eve\", \"Seth\", \"Seth\", \"Eve\", \"Eve\", \"Awan\", \"Eve\"]\n)\nfig.update_traces(root_color=\"lightgrey\")\nfig.update_layout(margin = dict(t=50, l=25, r=25, b=25))\nfig.show()\n\n\n\nAnd using treelib. On that note, This github also provides nice visualizations. Here is one example using treelib:\n#%pip install treelib\nfrom treelib import Tree\n\ncountry_tree = Tree()\n# Create a root node\ncountry_tree.create_node(\"Country\", \"countries\")\n\n# Group by country\nfor country, regions in wards_df.head(5).groupby([\"CTRY17NM\", \"CTRY17CD\"]):\n # Generate a node for each country\n country_tree.create_node(country[0], country[1], parent=\"countries\")\n # Group by region\n for region, las in regions.groupby([\"GOR10NM\", \"GOR10CD\"]):\n # Generate a node for each region\n country_tree.create_node(region[0], region[1], parent=country[1])\n # Group by local authority\n for la, wards in las.groupby(['LAD17NM', 'LAD17CD']):\n # Create a node for each local authority\n country_tree.create_node(la[0], la[1], parent=region[1])\n for ward, _ in wards.groupby(['WD17NM', 'WD17CD']):\n # Create a leaf node for each ward\n country_tree.create_node(ward[0], ward[1], parent=la[1])\n\n# Output the hierarchical data\ncountry_tree.show()\n\n\nI have, based on this, created a function to convert json to a tree:\nfrom treelib import Node, Tree, node\ndef json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'):\n if tree is None:\n tree = Tree()\n root_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({'+'}, {root_id})\")\n tree.create_node('+', root_id)\n counter_byref[0] += 1\n parent_id = root_id\n if type(o) == dict:\n for k,v in o.items():\n this_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({str(k)}, {this_id}, parent={parent_id})\")\n tree.create_node(str(k), this_id, parent=parent_id)\n counter_byref[0] += 1\n json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)\n elif type(o) == list:\n if listsNodeSymbol is not None:\n if verbose:\n print(f\"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id)\n parent_id=counter_byref[0]\n counter_byref[0] += 1 \n for i in o:\n json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol)\n else: #node\n if verbose:\n print(f\"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(str(o), counter_byref[0], parent=parent_id)\n counter_byref[0] += 1\n return tree\n\nThen for example:\nimport json\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),verbose=False,listsNodeSymbol='+').show() \n\ngives the more 
descriptive:\n+\n├── 2\n│ └── 3\n└── 4\n └── +\n ├── 5\n └── 6\n\nWhile\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),listsNodeSymbol=None).show() \n\nGives the more compact\n+\n├── 2\n│ └── 3\n└── 4\n ├── 5\n └── 6\n\nFor a more extensive conversion with different flavors of trees, checkout this function\n"
] | [
2665,
446,
120,
61,
45,
23,
19,
9,
8,
7,
3,
3,
0,
0,
0
] | [
"It's far from perfect, but it does the job.\ndata = data.replace(',\"',',\\n\"')\n\nyou can improve it, add indenting and so on, but if you just want to be able to read a cleaner json, this is the way to go.\n"
] | [
-8
] | [
"formatting",
"json",
"pretty_print",
"python"
] | stackoverflow_0012943819_formatting_json_pretty_print_python.txt |
Q:
Creating a Snapchat bot in python
I am new to Python programming and I was trying to create a Snapchat bot.
Can you help me create a request-based Snapchat bot?
I will be using this for marketing with my existing clients to help schedule posts. It will also be an auto-responder to send Thank you or Welcome messages.
If you got any ideas you can share your thoughts, thank you
Basically I need a python script to handle message scheduling and auto response
A:
Snapchat now supports the web browser, so you can take a look at tutorials for the pyautogui module in Python. You can manipulate keyboard and mouse events and respond to the messages with prewritten messages of yours. Your task can be done easily.
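A minimal sketch of that idea, assuming Snapchat Web is open in the foreground browser and you have located the chat input's screen coordinates yourself (the coordinates and the message below are placeholders, not real values):
import time
import pyautogui  # pip install pyautogui

# assumption: position of the chat input on *your* screen -- find it with pyautogui.position()
CHAT_BOX = (640, 900)

def send_reply(message):
    pyautogui.click(*CHAT_BOX)                    # focus the chat input
    pyautogui.typewrite(message, interval=0.05)   # type the prewritten reply
    pyautogui.press("enter")                      # send it

time.sleep(5)  # time to bring the browser window to the foreground
send_reply("Thank you for reaching out! We'll get back to you soon.")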
| Creating a Snapchat bot in python | I am new in python programming and I was trying to create a Snapchat bot
Can you help me create a request based Snapchat bot.
I will be using this for marketing with my existing clients to help schedule posts. It will also be an auto responder to act as Thank you or Welcome messages.
If you got any ideas you can share your thoughts, thank you
Basically I need a python script to handle message scheduling and auto response
| [
"Snapchat now supports the web or browser. So you can take a look at the tutorials of pyautogui module in python. you can manipulate the keyboard and mouse events and respond to the messages with prewritten messages of yours. Your task can be done easily.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074665242_python.txt |
Q:
Accessing python variable outside function scope when reassigning to update variable
I want to keep track of the current max of a calculated cosine similarity score. However, I keep getting the error UnboundLocalError: cannot access local variable 'current_max_cosine_similarity_score' where it is not associated with a value
In Javascript, I can typically do this without a problem using the let keyword when working with a variable outside of a function scope. However, in Python that doesn't seem to be the case.
What would be the pythonic way of going about this?
current_max_cosine_similarity_score = -math.inf
def func(acc, v):
calculated_cosine_similarity_score = ...
if calculated_cosine_similarity_score > current_max_cosine_similarity_score:
current_max_cosine_similarity_score = max([current_max_cosine_similarity_score, calculated_cosine_similarity_score])
acc['cosineSimilarityScore'] = calculated_cosine_similarity_score
return acc
print(reduce(func, [...], {}))
A:
You have to declare current_max_cosine_similarity_score as global (or nonlocal) in func().
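For completeness, a minimal sketch of the global approach, keeping the rest of func() as in the question (the ... placeholder stays unfilled):
current_max_cosine_similarity_score = -math.inf

def func(acc, v):
    global current_max_cosine_similarity_score
    calculated_cosine_similarity_score = ...
    if calculated_cosine_similarity_score > current_max_cosine_similarity_score:
        current_max_cosine_similarity_score = calculated_cosine_similarity_score
    acc['cosineSimilarityScore'] = calculated_cosine_similarity_score
    return acc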
But that's nevertheless a bad idea. The "pythonic" way would be to use a generator, a closure, or a class with a get_current_maximum() method.
A closure probably solves your problem most "pythonically":
from functools import reduce
def calc_closure():
def _calc(value, element):
# do calculations on element here
if element > value:
_calc.current_max_value = element
return _calc.current_max_value
# using an attribute makes current_max_value accessible from outer
_calc.current_max_value = -np.math.inf
return _calc
closure_1 = calc_closure()
closure_2 = calc_closure()
print(reduce(closure_1, [1, 2, 3, 4, 1]))
print(closure_1.current_max_value )
print(closure_2.current_max_value )
Output:
4
4
-inf
| Accessing python variable outside function scope when reassigning to update variable | I want to keep track of the current max of a calculated cosine similarity score. However, I keep getting the error UnboundLocalError: cannot access local variable 'current_max_cosine_similarity_score' where it is not associated with a value
In Javascript, I can typically do this without a problem using the let keyword when working with a variable outside of a function scope. However, in Python that doesn't seem to be the case.
What would be the pythonic way of going about this?
current_max_cosine_similarity_score = -math.inf
def func(acc, v):
calculated_cosine_similarity_score = ...
if calculated_cosine_similarity_score > current_max_cosine_similarity_score:
current_max_cosine_similarity_score = max([current_max_cosine_similarity_score, calculated_cosine_similarity_score])
acc['cosineSimilarityScore'] = calculated_cosine_similarity_score
return acc
print(reduce(func, [...], {}))
| [
"You have to declare current_max_cosine_similarity_score as global (or nonlocal) in func().\nBut that's nevertheless a bad idea. The \"pythonic\" way would be to use a generator, closure or a class with a get_current_maximum().\nProbably the most \"pythonic\" closure solves your problem:\nfrom functools import reduce\n\ndef calc_closure():\n def _calc(value, element):\n # do calculations on element here\n if element > value:\n _calc.current_max_value = element\n return _calc.current_max_value \n # using an attribute makes current_max_value accessible from outer\n _calc.current_max_value = -np.math.inf\n return _calc\n\nclosure_1 = calc_closure()\nclosure_2 = calc_closure()\n\nprint(reduce(closure_1, [1, 2, 3, 4, 1]))\nprint(closure_1.current_max_value )\nprint(closure_2.current_max_value )\n\nOutput:\n4\n4\n-inf\n"
] | [
2
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074665130_python_python_3.x.txt |
Q:
How do I get the args from a post or get with Python without using cgi.FieldStorage
I just read that cgi is deprecated and so cgi.FieldStorage will stop working.
I'm struggling to find the replacement for this functionality. All the searches I've tried refer to urllib or requests, both of which (AFAIK) are designed to create requests, not to respond to them.
Thanks in advance
A:
The reference to urllib is actually a bit misleading. The following might give some insight into the CGI interface from a Python programmer's point of view:
#!/usr/bin/python3
'''
preflight_cgi.py
check the preflight option call
'''
import sys
import os
if __name__ == "__main__":
print("Content-Type: text/html") # HTML is following
print()
    i = 0
    for arg in sys.argv:
        print("argv{}: {}\n".format(i, arg))
        i += 1
    i = 0
    for line in sys.stdin:
        print("line {}: {}\n".format(i, line))
        i += 1
print("<TITLE>CGI script output</TITLE>")
print("<H1>This is the environmet</H1>")
for it in os.environ.items():
print("<p>{} = {}</p>".format(it[0], it[1]))
Put that where your current cgi.FieldStorage based app is and call it via the address line of the browser.
You will see something like
[...]
CONTENT_LENGTH = 0
QUERY_STRING = par=meter&var=able
REQUEST_URI = /cgi-bin/preflight_cgi.py?par=meter&var=able
REDIRECT_STATUS = 200
SCRIPT_NAME = /cgi-bin/preflight_cgi.py
REQUEST_METHOD = GET
SERVER_PROTOCOL = HTTP/1.1
SERVER_SOFTWARE = lighttpd/1.4.53
GATEWAY_INTERFACE = CGI/1.1
REQUEST_SCHEME = http
SERVER_PORT = 80
[...]
The environment variables already carry most of what you need.
As an alternative you can also use one of the http.server classes to build the server completely in python.
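For that http.server alternative, a minimal sketch of a pure-Python server that reads GET and POST arguments using only the standard library, with no cgi involved (the port and the echo response are arbitrary choices):
#!/usr/bin/python3
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. /path?par=meter&var=able -> {'par': ['meter'], 'var': ['able']}
        args = parse_qs(urlparse(self.path).query)
        self._answer(args)

    def do_POST(self):
        # form-encoded POST bodies arrive on rfile, sized by Content-Length
        length = int(self.headers.get("Content-Length", 0))
        args = parse_qs(self.rfile.read(length).decode())
        self._answer(args)

    def _answer(self, args):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write("<p>{}</p>".format(args).encode())

HTTPServer(("", 8080), Handler).serve_forever()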
| How do I get the args from a post or get with Python without using cgi.FieldStorage | I just read that cgi is deprecated and so cgi.FieldStorage will stop working.
I'm struggling to find the replacement for this functionality. All the searches I've tried refer to urllib or requests, both of which (AFAIK) are designed to create requests, not to respond to them.
Thanks in advance
| [
"The reference to urllib is actually a bit misleading. The following might give some insight to the cgi interface from a python programmers point of view:\n#!/usr/bin/python3\n'''\npreflight_cgi.py\ncheck the preflight option call\n'''\n\nimport sys\nimport os\n\nif __name__ == \"__main__\":\n print(\"Content-Type: text/html\") # HTML is following\n print() \n i = 0\n for arg in sys.argv:\n print(\"argv{}: {}\\n\".format(i, arg))\n i = 0\n for line in sys.stdin:\n print(\"line {}: {}\\n\".format(i, line))\n i += 1\n \n print(\"<TITLE>CGI script output</TITLE>\")\n print(\"<H1>This is the environmet</H1>\")\n for it in os.environ.items():\n print(\"<p>{} = {}</p>\".format(it[0], it[1]))\n\nPut that where your current cgi.FieldStorage based app is and call it via the address line of the browser.\nYou will see something like\n[...]\nCONTENT_LENGTH = 0\nQUERY_STRING = par=meter&var=able\nREQUEST_URI = /cgi-bin/preflight_cgi.py?par=meter&var=able\nREDIRECT_STATUS = 200\nSCRIPT_NAME = /cgi-bin/preflight_cgi.py\nREQUEST_METHOD = GET\nSERVER_PROTOCOL = HTTP/1.1\nSERVER_SOFTWARE = lighttpd/1.4.53\nGATEWAY_INTERFACE = CGI/1.1\nREQUEST_SCHEME = http\nSERVER_PORT = 80\n[...]\nThe environment variables have already most of done.\nAs an alternative you can also use one of the http.server classes to build the server completely in python.\n"
] | [
0
] | [] | [] | [
"python",
"webserver"
] | stackoverflow_0074225287_python_webserver.txt |
Q:
Change values in lists that are in pandas column
I have a dataset where a column contains lists of previously received tokenized words. I need to replace a couple of values in these lists.
Initial data set:
df
date text
2022-06-02 [municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']
...
Required result:
df_res
date text
2022-06-02 [municipal', 'districts', 'mikhailovka', '84', 'kamyshin', '56']
...
What is an easy way to change these list element values across all rows of the column?
A:
df = pd.DataFrame([['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']]], columns=['date', 'text'])
mapper = {'mikhailovsky': 'mikhailovka',
'kamyshinsky': 'kamyshin'}
for k, v in mapper.items():
df.text = df.text.apply(lambda x: [element.replace(k, v) for element in x])
The code above changes df from this:
date text
0 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]
1 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]
2 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]
into this:
date text
0 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]
1 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]
2 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]
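Note that str.replace substitutes substrings, so a token that merely contains 'kamyshinsky' inside a longer word would also change. For exact whole-token replacement, a dict lookup per element is safer — a sketch using the same mapper:
df.text = df.text.apply(lambda x: [mapper.get(element, element) for element in x])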
| Change values in lists that are in pandas column | I have a dataset where a column contains lists of previously received tokenized words. I need to replace a couple of values in these lists.
Initial data set:
df
date text
2022-06-02 [municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']
...
Required result:
df_res
date text
2022-06-02 [municipal', 'districts', 'mikhailovka', '84', 'kamyshin', '56']
...
How easy is it to change the values of the elements in the list for all the values of the column?
| [
"df = pd.DataFrame([['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']]], columns=['date', 'text'])\n\nmapper = {'mikhailovsky': 'mikhailovka',\n 'kamyshinsky': 'kamyshin'}\n\nfor k, v in mapper.items():\n df.text = df.text.apply(lambda x: [element.replace(k, v) for element in x])\n\nThe code above changes df from this:\n date text\n0 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]\n1 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]\n2 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]\n\ninto this:\n date text\n0 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]\n1 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]\n2 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]\n\n"
] | [
2
] | [] | [] | [
"dataframe",
"list",
"pandas",
"python"
] | stackoverflow_0074664930_dataframe_list_pandas_python.txt |
Q:
Resolve warning "A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy"?
When I import SciPy or a library dependent on it, I receive the following warning message:
UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.1)
It's true that I am running NumPy version 1.23.1; however, this message is a mystery to me since I am running SciPy version 1.7.3, which, according to SciPy's documentation, is compatible with NumPy <1.24.0.
Anyone having this problem or know how to resolve it?
I am using Conda as an environment manager, and all my packages are up to date as far as I know.
python: 3.9.12
numpy: 1.23.1
scipy: 1.7.3
Thanks in advance if anyone has any clues !
A:
I have the same issue.
The SciPy 1.7.3 docs specify
1.16.5 <= numpy < 1.24.0, while in the SciPy 1.7.3 code (setup.py and __init__.py) we have np_maxversion = '1.23.0'.
As I rely on the conda channel defaults to set up the Intel MKL libraries for numpy and scipy, I decided to pin "numpy>=1.22.3,<1.23.0" until a newer scipy is released on conda channel defaults:
conda create -n myenv python "numpy>=1.22.3,<1.23.0" scipy
A:
According to the setup.py file of the scipy 1.7.3, numpy is indeed <1.23.0. As @Libra said, the docs must be incorrect. You can:
Ignore this warning
Use scipy 1.8
Use numpy < 1.23.0
Edit:
This is now fixed in the dev docs of scipy https://scipy.github.io/devdocs/dev/toolchain.html
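If you go with option 1, a small sketch of silencing just this warning; the filter has to be installed before scipy is imported, since the warning fires at import time:
import warnings
warnings.filterwarnings("ignore", message="A NumPy version")
import scipy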
A:
Since "UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required", you can update the numpy version with the specified range to remove the warning.
According to syntax guidelines of conda and pip, updating your numpy version by
conda install "numpy>=1.16.5,<1.23.0"
or
pip install "numpy>=1.16.5,<1.23.0"
inside your environment will work.
Your numpy will be overwritten by the best-match version (1.22.4) in the specified range. You can double-check the new numpy version by:
conda list numpy
or
pip show numpy
| Resolve warning "A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy"? | When I import SciPy or a library dependent on it, I receive the following warning message:
UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.1
It's true that I am running NumPy version 1.23.1, however this message is a mystery to me since I am running SciPy version 1.7.3, which, according to SciPy's documentation, is compatible with NumPy <1.24.0.
Anyone having this problem or know how to resolve it?
I am using Conda as an environment manager, and all my packages are up to date as far as I know.
python: 3.9.12
numpy: 1.23.1
scipy: 1.7.3
Thanks in advance if anyone has any clues !
| [
"I have the same issue.\nThe scipy 1.7.3 docs specifies\n1.16.5 <= numpy <1.24.0 while in scipy 1.7.3 code setup.py and __init__.py we have np_maxversion = '1.23.0'.\nAs I rely on conda channel defaults to setup Intel MKL libraries for numpy and scipy I decided to pin \"numpy>=1.22.3,<1.23.0\" until a newer scipy is release on conda channel defaults:\nconda create -n myenv python \"numpy>=1.22.3,<1.23.0\" scipy\n\n",
"According to the setup.py file of the scipy 1.7.3, numpy is indeed <1.23.0. As @Libra said, the docs must be incorrect. You can:\n\nIgnore this warning\nUse scipy 1.8\nUse numpy < 1.23.0\n\nEdit:\nThis is now fixed in the dev docs of scipy https://scipy.github.io/devdocs/dev/toolchain.html\n",
"Since \"UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required\", you can update the numpy version with the specified range to remove the warning.\nAccording to syntax guidelines of conda and pip, updating your numpy version by\nconda install \"numpy>=1.16.5,<1.23.0\"\nor\npip install \"numpy>=1.16.5,<1.23.0\"\ninside your environment will work.\nYour numpy will be overwritten by the best-match version (1.22.4) in the specified range. You can double-check the new numpy version by:\nconda list numpy\nor\npip show numpy\n"
] | [
7,
5,
0
] | [] | [] | [
"conda",
"numpy",
"python",
"scipy"
] | stackoverflow_0073072257_conda_numpy_python_scipy.txt |
Q:
How to extract number from a txt file
First my file
amtdec = open("amt.txt", "r+")
gc = open("gamecurrency.txt", "r+")
eg = gc.readline()
u = amtdec.readline()
The main code
user_balance = int(u)
egc = int(eg)
while True:
deposit_amount = int(input("Enter deposit amount: $"))
if deposit_amount<=user_balance:
entamount = deposit_amount * EXCHANGE_RATE
newgc = entamount + egc
newamt = user_balance - deposit_amount
This is what my error was:
user_balance = int(u)
ValueError: invalid literal for int() with base 10: ''
I was trying to compare an int from a file with my input.
A:
Usually an error like this should turn you to check the formatting of your file. As some others mentioned, the first line could be empty for whatever reason. You can check for an empty file prior to this by doing the following:
test.txt contents:
(empty file)
import os
f = open("test.txt")
if os.path.getsize("test.txt") == 0:
print("Empty File")
f.close()
else:
print("Some content exists")
Output: "Empty File" (file is closed too since there is nothing to read)
Alternatively, you can read the entire file if you somehow can't access its contents (some schools do this). Using this technique will give you an idea of what you are dealing with in your file if you can't view it within your IDE:
f = open("test.txt")
for line in f:
print(line)
f.close()
But let's say that just the first line of your file is empty. There are several ways you can check if a line is empty. If line 1 is blank but any line following it has content, reading line 1 from file will equal '\n':
test.txt contents:
line 1 = '\n' (blank line), line 2 = 20.72
import os
f = open("test.txt")
if os.path.getsize("test.txt") == 0:
print("Empty File")
f.close()
else:
print("Some content exists")
reader = f.readline()
# The second condition is if you are using binary mode
if reader == '\n' or reader == b"\r\n":
print("Blank Line")
Output: "Some content exists" & "Blank line"
This is just my suggestion. As for your integer conversion, if you have a '.' in your currency amount, you will get a conversion error when trying to cast it to an integer. However, I do not know if your currency will be rounded off to the nearest dollar or if you have any indication of change, so I will leave this to you.
Happy coding! Please up vote my answer if useful so I can participate in Stack Overflow in new ways :)
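For that conversion concern, a tiny sketch that tolerates a decimal point by going through float first (the rounding policy here is an assumption — int() truncates toward zero):
u = "20.72"
user_balance = int(float(u))  # 20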
| How to extract number from a txt file | First my file
amtdec = open("amt.txt", "r+")
gc = open("gamecurrency.txt", "r+")
eg = gc.readline()
u = amtdec.readline()
The main code
user_balance = int(u)
egc = int(eg)
while True:
deposit_amount = int(input("Enter deposit amount: $"))
if deposit_amount<=user_balance:
entamount = deposit_amount * EXCHANGE_RATE
newgc = entamount + egc
newamt = user_balance - deposit_amount
This is what my error was:
user_balance = int(u)
ValueError: invalid literal for int() with base 10: ''
I was trying to compare a int in a file with my input.
| [
"Usually an error like this should turn you to check the formatting of your file. As some others mentioned, the first line could be empty for whatever reason. You can check for an empty file prior to this by doing the following:\ntest.txt contents:\n(empty file)\nimport os\n\nf = open(\"test.txt\")\n\nif os.path.getsize(\"test.txt\") == 0:\n print(\"Empty File\")\n f.close()\nelse:\n print(\"Some content exists\")\n\nOutput: \"Empty File\" (file is closed too since there is nothing to read)\nAlternatively, you can read the entire file if you somehow can't access its contents (some schools do this). Using this technique will give you an idea of what you are dealing with in your file if you can't view it within your IDE:\nf = open(\"test.txt\")\n\nfor line in f:\n print(line)\n\nf.close()\n\nBut let's say that just the first line of your file is empty. There are several ways you can check if a line is empty. If line 1 is blank but any line following it has content, reading line 1 from file will equal '\\n':\ntest.txt contents:\nline 1 = '\\n' (blank line), line 2 = 20.72\nimport os\n\nf = open(\"test.txt\")\n\nif os.path.getsize(\"test.txt\") == 0:\n print(\"Empty File\")\n f.close()\nelse:\n print(\"Some content exists\")\n\nreader = f.readline()\n\n# The second condition is if you are using binary mode\nif reader == '\\n' or reader == b\"\\r\\n\":\n print(\"Blank Line\")\n\nOutput: \"Some content exists\" & \"Blank line\"\nThis is just my suggestion. As for your integer conversion, if you have a '.' in your currency amount, you will get a conversion error for trying to data cast it into an integer. However, I do not know if your currency will be rounded off to the nearest dollar or if you have any indication of change, so I will leave this to you.\nHappy coding! Please up vote my answer if useful so I can participate in Stack Overflow in new ways :)\n"
] | [
0
] | [] | [] | [
"function",
"python",
"runtime_error",
"syntax_error"
] | stackoverflow_0074664866_function_python_runtime_error_syntax_error.txt |
Q:
This error is coming and i am not able to understand why. Error = TypeError: 'NoneType' object is not subscriptable
I am using SQL connectivity, Python, and tkinter, and
I am trying to display the record after creating it, but an error occurs.
The records are created and stored in MySQL, but they are not displayed in tkinter.
Here is the code:
import tkinter
import mysql.connector
from tkinter import Label
from tkinter import Entry
from tkinter import messagebox
from tkinter import *
mydb = mysql.connector.connect(
host = "localhost",
user = "root",
passwd = "tiger",
database = "system42"
)
def Create():
m=Tk()
m.geometry("1000x1000")
L1=Label(m,text="Enter Name",width=20,font="ariel")
L2=Label(m,text="Enter DOB (yyyy/mm/dd)",width=20,font="ariel")
L3=Label(m,text="Enter Class",width=20,font="ariel")
L4=Label(m,text="Enter Admission No",width=20,font="ariel")
L5=Label(m,text="Enter Address",width=20,font="ariel")
L6=Label(m,text="Enter Mobile No",width=20,font="ariel")
L7=Label(m,text="Enter Transport",width=20,font="ariel")
L1.place(x=50,y=100)
L2.place(x=50,y=150)
L3.place(x=50,y=200)
L4.place(x=50,y=250)
L5.place(x=50,y=300)
L6.place(x=50,y=350)
L7.place(x=50,y=400)
a=Entry(m)
b=Entry(m)
c=Entry(m)
d=Entry(m)
e=Entry(m)
f=Entry(m)
g=Entry(m)
a.place(x=300,y=100)
b.place(x=300,y=150)
c.place(x=300,y=200)
d.place(x=300,y=250)
e.place(x=300,y=300)
f.place(x=300,y=350)
g.place(x=300,y=400)
def Creation():
mycur=mydb.cursor()
name=a.get()
dob=b.get()
Class=c.get()
admn=d.get()
add=e.get()
mob=f.get()
tra=g.get()
query3=("insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')").format(name,dob,Class,admn,add,mob,tra)
mycur.execute(query3)
mycur.execute("commit")
q=mycur.fetchone()
L11=Label(m,text=q[0],width=20,font="ariel")
L12=Label(m,text=q[1],width=15,font="ariel")
L13=Label(m,text=q[2],width=10,font="ariel")
L14=Label(m,text=q[3],width=10,font="ariel")
L15=Label(m,text=q[4],width=30,font="ariel")
L16=Label(m,text=q[5],width=15,font="ariel")
L17=Label(m,text=q[6],width=15,font="ariel")
L11.place(x=50,y=500)
L12.place(x=50,y=550)
L13.place(x=50,y=600)
L14.place(x=50,y=650)
L15.place(x=50,y=700)
L16.place(x=50,y=750)
L17.place(x=50,y=800)
button=Button(m,text="Create",command=Creation,width=10,height=2)
button.place(x=400,y=50)
I expected that after creating the record it would also display it. The record is created but not displayed.
A:
You are trying to create new label widgets in your function when you should just be updating the already existing ones.
Try:
def Creation():
mycur=mydb.cursor()
name=a.get()
dob=b.get()
Class=c.get()
admn=d.get()
add=e.get()
mob=f.get()
tra=g.get()
query3=("insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')").format(name,dob,Class,admn,add,mob,tra)
mycur.execute(query3)
mycur.execute("commit")
    # an INSERT returns no result set, so fetch the new row back before fetchone()
    # (filtering on the admission number; the column name 'admno' is an assumption)
    mycur.execute("select * from idcard where admno={}".format(admn))
    q=mycur.fetchone()
L11.config(text=q[0])
L12.config(text=q[1])
L13.config(text=q[2])
L14.config(text=q[3])
L15.config(text=q[4])
L16.config(text=q[5])
    L17.config(text=q[6])
button=Button(m,text="Create",command=Creation,width=10,height=2)
button.place(x=400,y=50)
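As a side note, building the SQL string with format() is open to SQL injection and breaks on quotes in the input. A sketch of the same insert as a parameterized query (mysql.connector uses %s placeholders):
query3 = "insert into idcard values(%s, %s, %s, %s, %s, %s, %s)"
mycur.execute(query3, (name, dob, Class, admn, add, mob, tra))
mydb.commit()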
| This error is coming and i am not able to understand why. Error = TypeError: 'NoneType' object is not subscriptable | I am using SQL connectivity , python , tkinter and
I am trying to display the record after creating it but there is an error coming
The records are created and stored in my sql but it can't display them on tkinter
here is the code
import tkinter
import mysql.connector
from tkinter import Label
from tkinter import Entry
from tkinter import messagebox
from tkinter import *
mydb = mysql.connector.connect(
host = "localhost",
user = "root",
passwd = "tiger",
database = "system42"
)
def Create():
m=Tk()
m.geometry("1000x1000")
L1=Label(m,text="Enter Name",width=20,font="ariel")
L2=Label(m,text="Enter DOB (yyyy/mm/dd)",width=20,font="ariel")
L3=Label(m,text="Enter Class",width=20,font="ariel")
L4=Label(m,text="Enter Admission No",width=20,font="ariel")
L5=Label(m,text="Enter Address",width=20,font="ariel")
L6=Label(m,text="Enter Mobile No",width=20,font="ariel")
L7=Label(m,text="Enter Transport",width=20,font="ariel")
L1.place(x=50,y=100)
L2.place(x=50,y=150)
L3.place(x=50,y=200)
L4.place(x=50,y=250)
L5.place(x=50,y=300)
L6.place(x=50,y=350)
L7.place(x=50,y=400)
a=Entry(m)
b=Entry(m)
c=Entry(m)
d=Entry(m)
e=Entry(m)
f=Entry(m)
g=Entry(m)
a.place(x=300,y=100)
b.place(x=300,y=150)
c.place(x=300,y=200)
d.place(x=300,y=250)
e.place(x=300,y=300)
f.place(x=300,y=350)
g.place(x=300,y=400)
def Creation():
mycur=mydb.cursor()
name=a.get()
dob=b.get()
Class=c.get()
admn=d.get()
add=e.get()
mob=f.get()
tra=g.get()
query3=("insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')").format(name,dob,Class,admn,add,mob,tra)
mycur.execute(query3)
mycur.execute("commit")
q=mycur.fetchone()
L11=Label(m,text=q[0],width=20,font="ariel")
L12=Label(m,text=q[1],width=15,font="ariel")
L13=Label(m,text=q[2],width=10,font="ariel")
L14=Label(m,text=q[3],width=10,font="ariel")
L15=Label(m,text=q[4],width=30,font="ariel")
L16=Label(m,text=q[5],width=15,font="ariel")
L17=Label(m,text=q[6],width=15,font="ariel")
L11.place(x=50,y=500)
L12.place(x=50,y=550)
L13.place(x=50,y=600)
L14.place(x=50,y=650)
L15.place(x=50,y=700)
L16.place(x=50,y=750)
L17.place(x=50,y=800)
button=Button(m,text="Create",command=Creation,width=10,height=2)
button.place(x=400,y=50)
I expected that after creating it also displays the records. It is creating but not displaying
| [
"You are trying to create new label widgits in your function when you should just be updating the already existing ones.\nTry:\ndef Creation():\n mycur=mydb.cursor()\n \n name=a.get()\n dob=b.get()\n Class=c.get()\n admn=d.get()\n add=e.get()\n mob=f.get()\n tra=g.get()\n query3=(\"insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')\").format(name,dob,Class,admn,add,mob,tra)\n mycur.execute(query3)\n mycur.execute(\"commit\")\n q=mycur.fetchone()\n\n L11.config(text=q[0])\n L12.config(text=q[1])\n L13.config(text=q[2])\n L14.config(text=q[3])\n L15.config(text=q[4])\n L16.config(text=q[5])\n L16.config(text=q[6])\n\nbutton=Button(m,text=\"Create\",command=Creation,width=10,height=2)\nbutton.place(x=400,y=50)\n\n"
] | [
0
] | [] | [] | [
"mysql",
"python",
"tkinter"
] | stackoverflow_0074665189_mysql_python_tkinter.txt |
Q:
group column values with difference of 3(say) digit in python
I am new to Python. The problem statement is: we have the data below as a dataframe
df = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10], 'value':[x,x,y,x,x,x,y,x,z,x,x,y,y,z]})
Diff value
1 x
1 x
2 y
3 x
4 x
4 x
5 y
6 x
7 z
7 x
8 x
9 y
9 y
10 z
we need to group the Diff column in bins of 3 (say), like 0-3, 3-6, 6-9, >=9, and the values should be counted
Expected output is like
Diff x y z
0-3 2 1
3-6 3 1
6-9 3 1
>=9 2 1
A:
Example
The example code in the question is wrong (x, y, z are undefined). Anyone who wants to exercise this can use the following code:
df = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10],
'value':'x,x,y,x,x,x,y,x,z,x,x,y,y,z'.split(',')})
Code
labels = ['0-3', '3-6', '6-9', '>=9']
grouper = pd.cut(df['Diff'], bins=[0, 3, 6, 9, float('inf')], right=False, labels=labels)
pd.crosstab(grouper, df['value'])
output:
value x y z
Diff
0-3 2 1 0
3-6 3 1 0
6-9 3 0 1
>=9 0 2 1
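A note on the bin edges: right=False makes the intervals left-closed, which is what puts 3 into '3-6' and 9 into '>=9', matching the expected output. A quick check of those edges:
pd.cut([3, 9], bins=[0, 3, 6, 9, float('inf')], right=False, labels=labels)
# -> ['3-6', '>=9']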
| group column values with difference of 3(say) digit in python | I am new in python, problem statement is like we have below data as dataframe
df = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10], 'value':[x,x,y,x,x,x,y,x,z,x,x,y,y,z]})
Diff value
1 x
1 x
2 y
3 x
4 x
4 x
5 y
6 x
7 z
7 x
8 x
9 y
9 y
10 z
we need to group diff column with diff of 3 (let's say), like 0-3,3-6,6-9,>9, and value should be count
Expected output is like
Diff x y z
0-3 2 1
3-6 3 1
6-9 3 1
>=9 2 1
| [
"Example\nexample code is wrong. someone who want exercise, use following code\ndf = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10], \n 'value':'x,x,y,x,x,x,y,x,z,x,x,y,y,z'.split(',')})\n\nCode\nlabels = ['0-3', '3-6', '6-9', '>=9']\ngrouper = pd.cut(df['Diff'], bins=[0, 3, 6, 9, float('inf')], right=False, labels=labels)\npd.crosstab(grouper, df['value'])\n\noutput:\nvalue x y z\nDiff \n0-3 2 1 0\n3-6 3 1 0\n6-9 3 0 1\n>=9 0 2 1\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074665214_dataframe_pandas_python_python_3.x.txt |
Q:
Editing specific line in text file in Python
Let's say I have a text file containing:
Dan
Warrior
500
1
0
Is there a way I can edit a specific line in that text file? Right now I have this:
#!/usr/bin/env python
import io
myfile = open('stats.txt', 'r')
dan = myfile.readline()
print dan
print "Your name: " + dan.split('\n')[0]
try:
myfile = open('stats.txt', 'a')
myfile.writelines('Mage')[1]
except IOError:
myfile.close()
finally:
myfile.close()
Yes, I know that myfile.writelines('Mage')[1] is incorrect. But you get my point, right? I'm trying to edit line 2 by replacing Warrior with Mage. But can I even do that?
A:
You want to do something like this:
# with is like your try .. finally block in this case
with open('stats.txt', 'r') as file:
# read a list of lines into data
data = file.readlines()
print data
print "Your name: " + data[0]
# now change the 2nd line, note that you have to add a newline
data[1] = 'Mage\n'
# and write everything back
with open('stats.txt', 'w') as file:
file.writelines( data )
The reason for this is that you can't do something like "change line 2" directly in a file. You can only overwrite (not delete) parts of a file - that means that the new content just covers the old content. So, if you wrote 'Mage' over line 2, the resulting line would be 'Mageior'.
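A quick demonstration of that pitfall, assuming stats.txt is as in the question (binary mode is used so interleaved read/write works; Unix '\n' line endings are assumed):
with open('stats.txt', 'rb+') as f:
    f.readline()       # consume line 1 (b'Dan\n'), leaving the position at line 2
    f.write(b'Mage')   # overwrites in place: 'Warrior' becomes 'Mageior'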
A:
def replace_line(file_name, line_num, text):
lines = open(file_name, 'r').readlines()
lines[line_num] = text
out = open(file_name, 'w')
out.writelines(lines)
out.close()
And then (line indices are 0-based, and the replacement text needs its own newline), to turn Warrior into Mage:
replace_line('stats.txt', 1, 'Mage\n')
A:
you can use fileinput to do in place editing
import fileinput
for line in fileinput.FileInput("myfile", inplace=1):
    # with inplace=1, whatever you print replaces the current line in the file
    line = line.replace("Warrior", "Mage")
    print line,  # trailing comma: the line already ends with a newline
A:
You can do it in two ways, choose what suits your requirement:
Method I.) Replacing using line number. You can use built-in function enumerate() in this case:
First, in read mode get all data in a variable
with open("your_file.txt",'r') as f:
get_all=f.readlines()
Second, write to the file (where enumerate comes to action)
with open("your_file.txt",'w') as f:
for i,line in enumerate(get_all,1): ## STARTS THE NUMBERING FROM 1 (by default it begins with 0)
if i == 2: ## OVERWRITES line:2
f.writelines("Mage\n")
else:
f.writelines(line)
Method II.) Using the keyword you want to replace:
Open file in read mode and copy the contents to a list
with open("some_file.txt","r") as f:
newline=[]
for word in f.readlines():
newline.append(word.replace("Warrior","Mage")) ## Replace the keyword while you copy.
"Warrior" has been replaced by "Mage", so write the updated data to the file:
with open("some_file.txt","w") as f:
for line in newline:
f.writelines(line)
This is what the output will be in both cases:
Dan Dan
Warrior ------> Mage
500 500
1 1
0 0
A:
If your text contains only one individual:
import re
# creation
with open('pers.txt','wb') as g:
g.write('Dan \n Warrior \n 500 \r\n 1 \r 0 ')
with open('pers.txt','rb') as h:
print 'exact content of pers.txt before treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt before treatment:\n',h.read()
# treatment
def roplo(file_name,what):
patR = re.compile('^([^\r\n]+[\r\n]+)[^\r\n]+')
with open(file_name,'rb+') as f:
ch = f.read()
f.seek(0)
f.write(patR.sub('\\1'+what,ch))
roplo('pers.txt','Mage')
# after treatment
with open('pers.txt','rb') as h:
print '\nexact content of pers.txt after treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt after treatment:\n',h.read()
If your text contains several individuals:
import re
# creation
with open('pers.txt','wb') as g:
g.write('Dan \n Warrior \n 500 \r\n 1 \r 0 \n Jim \n dragonfly\r300\r2\n10\r\nSomo\ncosmonaut\n490\r\n3\r65')
with open('pers.txt','rb') as h:
print 'exact content of pers.txt before treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt before treatment:\n',h.read()
# treatment
def ripli(file_name,who,what):
with open(file_name,'rb+') as f:
ch = f.read()
x,y = re.search('^\s*'+who+'\s*[\r\n]+([^\r\n]+)',ch,re.MULTILINE).span(1)
f.seek(x)
f.write(what+ch[y:])
ripli('pers.txt','Jim','Wizard')
# after treatment
with open('pers.txt','rb') as h:
print 'exact content of pers.txt after treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt after treatment:\n',h.read()
If the “job” of an individual were of constant length in the text, you could change only the portion of the text corresponding to the “job” of the desired individual:
that’s the same idea as senderle’s.
But in my opinion, it would be better to put the characteristics of individuals in a dictionary recorded in a file with cPickle:
from cPickle import dump, load
with open('cards','wb') as f:
dump({'Dan':['Warrior',500,1,0],'Jim':['dragonfly',300,2,10],'Somo':['cosmonaut',490,3,65]},f)
with open('cards','rb') as g:
id_cards = load(g)
print 'id_cards before change==',id_cards
id_cards['Jim'][0] = 'Wizard'
with open('cards','w') as h:
dump(id_cards,h)
with open('cards') as e:
id_cards = load(e)
print '\nid_cards after change==',id_cards
A:
I have been practising working on files this evening and realised that I can build on Jochen's answer to provide greater functionality for repeated/multiple use. Unfortunately my answer does not address the issue of dealing with large files, but it does make life easier with smaller files.
with open('filetochange.txt', 'r+') as foo:
    data = foo.readlines()  #reads file as list
    pos = int(input("Which position in list to edit? "))-1  #list position to edit
    data.insert(pos, "more foo"+"\n")  #inserts before item to edit
    x = data[pos+1]
    data.remove(x)  #removes item to edit
    foo.seek(0)  #seeks beginning of file
    foo.truncate()  #discards the old contents so no tail is left behind
    foo.writelines(data)  #the lines already end with "\n", so write them back as-is
A:
Suppose I have a file named file_name with the following contents:
this is python
it is file handling
this is editing of line
We have to replace line 2 with "modification is done":
f=open("file_name","r+")
a=f.readlines()
for line in f:
if line.startswith("rai"):
p=a.index(line)
#so now we have the position of the line which to be modified
a[p]="modification is done"
f.seek(0)
f.truncate() #ersing all data from the file
f.close()
#so now we have an empty file and we will write the modified content now in the file
o=open("file_name","w")
for i in a:
o.write(i)
o.close()
#now the modification is done in the file
A:
Write the initial data, leaving a near-empty string as the last line so it can be updated later; this pattern can be used for iterative updating, in other words appending data to the data.txt file:
with open("data.txt", 'w') as f:
    f.write('first line\n'
            'second line\n'
            'third line\n'
            'fourth line\n'
            ' \n')
Then update the data in the last line of the text file:
my_file = open('data.txt')
string_list = my_file.readlines()
my_file.close()
string_list[-1] = "Edit the list of strings as desired\n"
my_file = open("data.txt", "w")
new_file_contents = "".join(string_list)
my_file.write(new_file_contents)
my_file.close()
A:
I used to have the same requirement and eventually ended up with Jinja templating. Change your text file to the template below, with a lastname variable; then you can render the template by passing lastname='Meg'. That's the most efficient and quickest way I can think of.
Dan
{{ lastname }}
Warrior
500
1
0
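A minimal render sketch, assuming the template above is saved as stats.txt.j2 (both file names here are placeholders):
from jinja2 import Template

with open('stats.txt.j2') as f:
    template = Template(f.read())

with open('stats.txt', 'w') as f:
    f.write(template.render(lastname='Meg'))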
| Editing specific line in text file in Python | Let's say I have a text file containing:
Dan
Warrior
500
1
0
Is there a way I can edit a specific line in that text file? Right now I have this:
#!/usr/bin/env python
import io
myfile = open('stats.txt', 'r')
dan = myfile.readline()
print dan
print "Your name: " + dan.split('\n')[0]
try:
myfile = open('stats.txt', 'a')
myfile.writelines('Mage')[1]
except IOError:
myfile.close()
finally:
myfile.close()
Yes, I know that myfile.writelines('Mage')[1] is incorrect. But you get my point, right? I'm trying to edit line 2 by replacing Warrior with Mage. But can I even do that?
| [
"You want to do something like this:\n# with is like your try .. finally block in this case\nwith open('stats.txt', 'r') as file:\n # read a list of lines into data\n data = file.readlines()\n\nprint data\nprint \"Your name: \" + data[0]\n\n# now change the 2nd line, note that you have to add a newline\ndata[1] = 'Mage\\n'\n\n# and write everything back\nwith open('stats.txt', 'w') as file:\n file.writelines( data )\n\nThe reason for this is that you can't do something like \"change line 2\" directly in a file. You can only overwrite (not delete) parts of a file - that means that the new content just covers the old content. So, if you wrote 'Mage' over line 2, the resulting line would be 'Mageior'.\n",
"def replace_line(file_name, line_num, text):\n lines = open(file_name, 'r').readlines()\n lines[line_num] = text\n out = open(file_name, 'w')\n out.writelines(lines)\n out.close()\n\nAnd then:\nreplace_line('stats.txt', 0, 'Mage')\n\n",
"you can use fileinput to do in place editing\nimport fileinput\nfor line in fileinput.FileInput(\"myfile\", inplace=1):\n if line .....:\n print line\n\n",
"You can do it in two ways, choose what suits your requirement:\nMethod I.) Replacing using line number. You can use built-in function enumerate() in this case:\nFirst, in read mode get all data in a variable\nwith open(\"your_file.txt\",'r') as f:\n get_all=f.readlines()\n\nSecond, write to the file (where enumerate comes to action) \nwith open(\"your_file.txt\",'w') as f:\n for i,line in enumerate(get_all,1): ## STARTS THE NUMBERING FROM 1 (by default it begins with 0) \n if i == 2: ## OVERWRITES line:2\n f.writelines(\"Mage\\n\")\n else:\n f.writelines(line)\n\nMethod II.) Using the keyword you want to replace:\nOpen file in read mode and copy the contents to a list\nwith open(\"some_file.txt\",\"r\") as f:\n newline=[]\n for word in f.readlines(): \n newline.append(word.replace(\"Warrior\",\"Mage\")) ## Replace the keyword while you copy. \n\n\"Warrior\" has been replaced by \"Mage\", so write the updated data to the file:\nwith open(\"some_file.txt\",\"w\") as f:\n for line in newline:\n f.writelines(line)\n\nThis is what the output will be in both cases:\nDan Dan \nWarrior ------> Mage \n500 500 \n1 1 \n0 0 \n\n",
"If your text contains only one individual:\nimport re\n\n# creation\nwith open('pers.txt','wb') as g:\n g.write('Dan \\n Warrior \\n 500 \\r\\n 1 \\r 0 ')\n\nwith open('pers.txt','rb') as h:\n print 'exact content of pers.txt before treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt before treatment:\\n',h.read()\n\n\n# treatment\ndef roplo(file_name,what):\n patR = re.compile('^([^\\r\\n]+[\\r\\n]+)[^\\r\\n]+')\n with open(file_name,'rb+') as f:\n ch = f.read()\n f.seek(0)\n f.write(patR.sub('\\\\1'+what,ch))\nroplo('pers.txt','Mage')\n\n\n# after treatment\nwith open('pers.txt','rb') as h:\n print '\\nexact content of pers.txt after treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt after treatment:\\n',h.read()\n\nIf your text contains several individuals:\nimport re\n# creation\nwith open('pers.txt','wb') as g:\n g.write('Dan \\n Warrior \\n 500 \\r\\n 1 \\r 0 \\n Jim \\n dragonfly\\r300\\r2\\n10\\r\\nSomo\\ncosmonaut\\n490\\r\\n3\\r65')\n\nwith open('pers.txt','rb') as h:\n print 'exact content of pers.txt before treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt before treatment:\\n',h.read()\n\n\n# treatment\ndef ripli(file_name,who,what):\n with open(file_name,'rb+') as f:\n ch = f.read()\n x,y = re.search('^\\s*'+who+'\\s*[\\r\\n]+([^\\r\\n]+)',ch,re.MULTILINE).span(1)\n f.seek(x)\n f.write(what+ch[y:])\nripli('pers.txt','Jim','Wizard')\n\n\n# after treatment\nwith open('pers.txt','rb') as h:\n print 'exact content of pers.txt after treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt after treatment:\\n',h.read()\n\nIf the “job“ of an individual was of a constant length in the texte, you could change only the portion of texte corresponding to the “job“ the desired individual:\nthat’s the same idea as senderle’s one.\nBut according to me, better would be to put the characteristics of individuals in a dictionnary recorded in file with cPickle:\nfrom cPickle import dump, load\n\nwith open('cards','wb') as f:\n dump({'Dan':['Warrior',500,1,0],'Jim':['dragonfly',300,2,10],'Somo':['cosmonaut',490,3,65]},f)\n\nwith open('cards','rb') as g:\n id_cards = load(g)\nprint 'id_cards before change==',id_cards\n\nid_cards['Jim'][0] = 'Wizard'\n\nwith open('cards','w') as h:\n dump(id_cards,h)\n\nwith open('cards') as e:\n id_cards = load(e)\nprint '\\nid_cards after change==',id_cards\n\n",
"I have been practising working on files this evening and realised that I can build on Jochen's answer to provide greater functionality for repeated/multiple use. Unfortunately my answer does not address issue of dealing with large files but does make life easier in smaller files.\nwith open('filetochange.txt', 'r+') as foo:\n data = foo.readlines() #reads file as list\n pos = int(input(\"Which position in list to edit? \"))-1 #list position to edit\n data.insert(pos, \"more foo\"+\"\\n\") #inserts before item to edit\n x = data[pos+1]\n data.remove(x) #removes item to edit\n foo.seek(0) #seeks beginning of file\n for i in data:\n i.strip() #strips \"\\n\" from list items\n foo.write(str(i))\n\n",
"Suppose I have a file named file_name as following:\nthis is python\nit is file handling\nthis is editing of line\n\nWe have to replace line 2 with \"modification is done\":\nf=open(\"file_name\",\"r+\")\na=f.readlines()\nfor line in f:\n if line.startswith(\"rai\"):\n p=a.index(line)\n#so now we have the position of the line which to be modified\na[p]=\"modification is done\"\nf.seek(0)\nf.truncate() #ersing all data from the file\nf.close()\n#so now we have an empty file and we will write the modified content now in the file\no=open(\"file_name\",\"w\")\nfor i in a:\n o.write(i)\no.close()\n#now the modification is done in the file\n\n",
"writing initial data, print an empty str for updating it to a new data\nhere we insert an empty str in the last line of the code, this code can be used in interative updation, in other words appending data in text.txt file\nwith open(\"data.txt\", 'w') as f:\n f.write('first line\\n'\n 'second line\\n'\n 'third line\\n'\n 'fourth line\\n'\n ' \\n')\n\nupdating data in the last line of the text file\nmy_file=open('data.txt')\nstring_list = my_file.readlines()\nstring_list[-1] = \"Edit the list of strings as desired\\n\"\nmy_file = open(\"data.txt\", \"w\")\nnew_file_contents = \"\". join(string_list)\nmy_file. write(new_file_contents)\n\n",
"I used to have same request, eventually ended up with Jinja templating. Change your text file to below, and a variable lastname, then you can render the template by passing lastname='Meg', that's the most efficient and quickest way I can think of.\nDan\n{{ lastname }}\nWarrior\n500\n1\n0\n"
] | [
162,
34,
28,
16,
3,
2,
0,
0,
0
] | [
"#read file lines and edit specific item\n\nfile=open(\"pythonmydemo.txt\",'r')\na=file.readlines()\nprint(a[0][6:11])\n\na[0]=a[0][0:5]+' Ericsson\\n'\nprint(a[0])\n\nfile=open(\"pythonmydemo.txt\",'w')\nfile.writelines(a)\nfile.close()\nprint(a)\n\n",
"This is the easiest way to do this.\nf = open(\"file.txt\", \"wt\")\nfor line in f:\n f.write(line.replace('foo', 'bar'))\nf.close()\n\nI hope it will work for you.\n"
] | [
-1,
-2
] | [
"io",
"python"
] | stackoverflow_0004719438_io_python.txt |
Q:
python get dictionary key from value is list
I have two dictionaries:
first_dict = {'a': ['1', '2', '3'],
'b': ['4', '5'],
'c': ['6'],
}
second_dict = {'1': 'wqeewe',
'2': 'efsafa',
'4': 'fsasaf',
'6': 'kgoeew',
'7': 'fkowew'
}
I want to have a third dict that will contain the key of second_dict and its corresponding value from first_dict's key. This way, I will have :
third_dict = {'1' : 'a',
'2' : 'a',
'4' : 'b',
'6' : 'c',
'7' : None,
}
here is my way:
def key_return(name):
for key, value in first_dict.items():
if name == value:
return key
if isinstance(value, list) and name in value:
return key
return None
reference:
Python return key from value, but its a list in the dictionary
However, I am wondering whether there is another way using dict.get() or something else.
Any help would be appreciated. Thanks.
A:
You can do it like this:
Code
first_dict = {'a': ['1', '2', '3'],
'b': ['4', '5'],
'c': ['6'],
}
second_dict = {'1': 'wqeewe',
'2': 'efsafa',
'4': 'fsasaf',
'6': 'kgoeew',
'7': 'fkowew'
}
third_dict = dict()
for second_key in second_dict.keys():
found = False
for first_key, value in first_dict.items():
if second_key in value:
third_dict.setdefault(second_key, first_key )
found = True
if not found:
third_dict.setdefault(second_key, None)
print(third_dict)
Output:
{'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}
Hope this helps
A:
version with a_dict.get()
third_dict = {i: {i:k for k,v in first_dict.items() for i in v}.get(i) for i in second_dict.keys()}
this part {i:k for k,v in first_dict.items() for i in v}
creates a dict like {'1': 'a', '2': 'a', '3': 'a', '4': 'b', '5': 'b', '6': 'c'}
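For the sample data above, a quick sanity check, written as a minimal sketch that computes the inverted mapping once instead of rebuilding it for every key:
inverted = {i: k for k, v in first_dict.items() for i in v}
third_dict = {key: inverted.get(key) for key in second_dict}
print(third_dict)  # {'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}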
A:
You can map the values in the first dictionary to their keys with:
values_map = dict([a for k, v in first_dict.items() for a in zip(v, k*len(v))])
then use this map to create the third dictionary:
third_dict = {key: values_map.get(key) for key, value in second_dict.items()}
Since the first_dict may contain single values instead of lists, you may first want to convert those values to lists with:
first_dict = dict(map(lambda x: (x[0], x[1]) if isinstance(x[1], list) else (x[0], [str(x[1])]), first_dict.items()))
A:
res = {
x: k
for k, xs in first_dict.items()
for x in xs
if x in second_dict
}
This creates 1:a, 2:a, 4:b etc. If you also want missing keys like 7:None, join it with a dummy dict:
res = {k: None for k in second_dict} | {
x: k
for k, xs in first_dict.items()
for x in xs
if x in second_dict
}
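One caveat (an assumption about your interpreter): the dict union operator | used above requires Python 3.9+. On older versions the same merge can be written with unpacking, where later keys win, so real matches override the None placeholders:
res = {**{k: None for k in second_dict},
       **{x: k for k, xs in first_dict.items() for x in xs if x in second_dict}}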
| python get dictionary key from value is list | I have two dictionaries:
first_dict = {'a': ['1', '2', '3'],
'b': ['4', '5'],
'c': ['6'],
}
second_dict = {'1': 'wqeewe',
'2': 'efsafa',
'4': 'fsasaf',
'6': 'kgoeew',
'7': 'fkowew'
}
I want to have a third dict that will contain the key of second_dict and its corresponding value from first_dict's key. This way, I will have :
third_dict = {'1' : 'a',
'2' : 'a',
'4' : 'b',
'6' : 'c',
'7' : None,
}
here is my way:
def key_return(name):
for key, value in first_dict.items():
if name == value:
return key
if isinstance(value, list) and name in value:
return key
return None
reference:
Python return key from value, but its a list in the dictionary
However, I am wondering whether there is another way using dict.get() or something else.
Any help would be appreciated. Thanks.
| [
"you can do it like that:\nCode\nfirst_dict = {'a': ['1', '2', '3'],\n 'b': ['4', '5'],\n 'c': ['6'],\n }\n\nsecond_dict = {'1': 'wqeewe',\n '2': 'efsafa',\n '4': 'fsasaf',\n '6': 'kgoeew',\n '7': 'fkowew'\n }\n\nthird_dict = dict()\n\nfor second_key in second_dict.keys():\n found = False\n for first_key, value in first_dict.items():\n if second_key in value:\n third_dict.setdefault(second_key, first_key )\n found = True\n if not found:\n third_dict.setdefault(second_key, None)\n \nprint(third_dict)\n\nOutput:\n{'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}\n\nHope this helps\n",
"version with a_dict.get()\nthird_dict = {i: {i:k for k,v in first_dict.items() for i in v}.get(i) for i in second_dict.keys()}\n\nthis part {i:k for k,v in first_dict.items() for i in v}\ncreates a dict like {'1': 'a', '2': 'a', '3': 'a', '4': 'b', '5': 'b', '6': 'c'}\n",
"You can map the values in the first dictionary to their keys with:\nvalues_map = dict([a for k, v in first_dict.items() for a in zip(v, k*len(v))])\n\nthen use this map to create the third dictionary:\nthird_dict = {key: values_map.get(key) for key, value in second_dict.items()}\n\nSince I get that the first_dict may contain single values instead of list you may want first to convert those values to list with:\nfirst_dict = dict(map(lambda x: (x[0], x[1]) if isinstance(x[1], list) else (x[0], [str(x[1])]), first_dict.items()))\n\n",
"res = {\n x: k\n for k, xs in first_dict.items()\n for x in xs\n if x in second_dict\n}\n\nthis creates 1:a , 2:a, 4:b etc. If you also want missing keys like 7:None join it with a dummy dict:\nres = {k: None for k in second_dict} | {\n x: k\n for k, xs in first_dict.items()\n for x in xs\n if x in second_dict\n}\n\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"dictionary",
"list",
"python"
] | stackoverflow_0074664870_dictionary_list_python.txt |
Q:
How to set a variable, that isn't the iter variable, so that it increases on each iteration and doesn't always return to its value prior to entering the for loop?
import re, datetime
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False
aux_date = str(datetime.datetime.strptime(datestr, "%Y-%m-%d"))
print(repr(aux_date))
for i_month in range(int(months)):
# I add a unit since the months are "numerical quantities",
# that is, they are expressed in natural numbers, so I need it
# to start from 1 and not from 0 like the iter variable in python
i_month = i_month + 1
m1 = re.search(
r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})",
aux_date,
re.IGNORECASE,
)
if m1:
ref_year, ref_month = (
str(m1.groups()[0]).strip(),
str(m1.groups()[1]).strip(),
)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
if (
int(ref_year) % 4 == 0
and int(ref_year) % 100 == 0
and int(ref_year) % 400 != 0
):
ref_year_is_leap_year = True # divisible by 4 and 100 but not by 400, to determine whether it is a leap year
if ref_year_is_leap_year == True and ref_month == "02":
n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
aux_date = (
datetime.datetime.strptime(datestr, "%Y-%m-%d")
+ datetime.timedelta(days=int(n_days_in_this_i_month))
).strftime("%Y-%m-%d")
print(repr(aux_date))
return aux_date
print(repr(add_months("2022-12-30", "3")))
Why does the aux_date variable, instead of progressively accumulating the days of the elapsed months, only add the 31 days of January back onto the original date, staying stuck there instead of advancing on each iteration of this for loop?
The objective of this for loop is an incremental loop in which the days accumulate, not one that always returns to the original amount and adds the same content over and over again.
Updated function Algorithm
In this edit I have modified some details and redundancies, and also fixed some bugs that are present in the original code.
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False # boolean flag whose logic tries to establish whether the reference year is a leap year
aux_date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
for i_month in range(int(months)):
i_month = i_month + 1 # I add a unit since the months are "numerical quantities", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python
m1 = re.search( r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})", str(aux_date), re.IGNORECASE, )
if m1:
ref_year, ref_month = ( str(m1.groups()[0]).strip(), str( int(m1.groups()[1]) + 1).strip(), )
if( len(ref_month) == 1 ): ref_month = "0" + ref_month
if( int(ref_month) > 12 ): ref_month = "01"
print(ref_month)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
if ( int(ref_year) % 4 == 0 and int(ref_year) % 100 != 0 ) or ( int(ref_year) % 400 == 0 ): ref_year_is_leap_year = True # divisible by 4 and not by 100, or by 400, to determine whether it is a leap year
if ref_year_is_leap_year == True and ref_month == "02": n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))
return datetime.datetime.strftime(aux_date, "%Y-%m-%d")
A:
Because at the end of every iteration of your for loop you are re-parsing the value that is given in the parameter datestr, and that value is never updated. You are also converting it to a string while trying to add a timedelta object. You should leave the value as a datetime object and convert it to a string once the for loop has finished, if you still need to.
Just change the bottom assignment so the timedelta is added to aux_date itself (instead of re-parsing datestr) and remove all of the string conversions; that should at least get you going in the right direction.
for example:
import re, datetime
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False # boolean flag whose logic tries to establish whether the reference year is a leap year
aux_date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
print(repr(aux_date))
for i_month in range(int(months)):
i_month = (
i_month + 1
) # I add a unit since the months are "numerical quantities", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python
m1 = re.search(
r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})",
str(aux_date),
re.IGNORECASE,
)
if m1:
ref_year, ref_month = (
str(m1.groups()[0]).strip(),
str(m1.groups()[1]).strip(),
)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
if (
int(ref_year) % 4 == 0
and int(ref_year) % 100 == 0
and int(ref_year) % 400 != 0
):
ref_year_is_leap_year = True # divisible by 4 and 100 but not by 400, to determine whether it is a leap year
if ref_year_is_leap_year == True and ref_month == "02":
n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))
print(repr(aux_date))
return datetime.datetime.strftime(aux_date, "%Y-%m-%d")
print(repr(add_months("2022-12-30", "3")))
Output:
datetime.datetime(2022, 12, 30, 0, 0)
31
datetime.datetime(2023, 1, 30, 0, 0)
31
datetime.datetime(2023, 3, 2, 0, 0)
31
datetime.datetime(2023, 4, 2, 0, 0)
datetime.datetime(2023, 4, 2, 0, 0)
'2023-04-02'
A:
So, as Alexander's answer already establishes, you weren't updating the date, so you were always adding to the same beginning date on each iteration. I took the liberty to clean up your code: using regex and converting back and forth between strings and ints is the wrong approach here -- it misses the entire point of date-time objects, which is to encapsulate the information in a date. Just use those objects, not strings. Here is the same approach as your code using only datetime.datetime objects:
import datetime
def add_months(datestr, months):
number_of_days_in_each_month = {
1 : 31,
2 : 28,
3 : 31,
4: 30,
5: 31,
6: 30,
7: 31,
8: 31,
9: 30,
10: 31,
11: 30,
12: 31,
}
date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
is_leap_year = False
for i_month in range(1, int(months) + 1):
ref_year, ref_month = date.year, date.month
n_days = number_of_days_in_each_month[ref_month]
if (
ref_year % 4 == 0
and ref_year % 100 == 0
and ref_year % 400 != 0
):
is_leap_year = True # divisible by 4 and 100 but not by 400, to determine whether it is a leap year
if is_leap_year and ref_month == 2: # February
n_days += 1 # 28 --> 29
date += datetime.timedelta(days=n_days)
return date.strftime("%Y-%m-%d")
print(add_months("2022-12-30", "3"))
I also made some stylistic changes to variable names. This is an art not a science, naming variables, and it always comes down to subjective opinion, but may I humbly submit my opinion about more legible names.
Also note, you had a comment to the effect of:
I need the iter variable to start from 1 and not from 0 like the iter
variable in python
The iterating variable starts where you tell it to start, given the iterable you iterate over. range(N) will always start at zero, but it doesn't have to. You could iterate over [1, 2, 3], or better yet, range(1, N + 1).
Note!
Your algorithm is not working quite how one might expect; the output one would naturally expect is 2023-03-30.
I'll give you a hint, though, think about precisely which month's days you need to add to the current month.... n_days = number_of_days_in_each_month[ref_month]....
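For completeness, a minimal sketch of calendar-style month arithmetic, assuming the desired behaviour is to land on the same day of the target month and to clamp when that month is shorter (e.g. Jan 31 -> Feb 28), built on the standard library's calendar.monthrange:
import calendar
import datetime

def add_months(datestr, months):
    date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
    month_index = date.month - 1 + int(months)  # zero-based month counter
    year = date.year + month_index // 12
    month = month_index % 12 + 1
    day = min(date.day, calendar.monthrange(year, month)[1])  # clamp to month length
    return date.replace(year=year, month=month, day=day).strftime("%Y-%m-%d")

print(add_months("2022-12-30", "3"))  # 2023-03-30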
| How to set a variable, that isn't the iter variable, so that it increases on each iteration and doesn't always return to its value prior to entering the for loop? | import re, datetime
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False
aux_date = str(datetime.datetime.strptime(datestr, "%Y-%m-%d"))
print(repr(aux_date))
for i_month in range(int(months)):
# I add a unit since the months are "numerical quantities",
# that is, they are expressed in natural numbers, so I need it
# to start from 1 and not from 0 like the iter variable in python
i_month = i_month + 1
m1 = re.search(
r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})",
aux_date,
re.IGNORECASE,
)
if m1:
ref_year, ref_month = (
str(m1.groups()[0]).strip(),
str(m1.groups()[1]).strip(),
)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
if (
int(ref_year) % 4 == 0
and int(ref_year) % 100 == 0
and int(ref_year) % 400 != 0
):
ref_year_is_leap_year = True # divisible by 4 and 100 but not by 400, to determine whether it is a leap year
if ref_year_is_leap_year == True and ref_month == "02":
n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
aux_date = (
datetime.datetime.strptime(datestr, "%Y-%m-%d")
+ datetime.timedelta(days=int(n_days_in_this_i_month))
).strftime("%Y-%m-%d")
print(repr(aux_date))
return aux_date
print(repr(add_months("2022-12-30", "3")))
Why does the aux_date variable, instead of progressively accumulating the days of the elapsed months, only add the 31 days of January back onto the original date, staying stuck there instead of advancing on each iteration of this for loop?
The objective of this for loop is an incremental loop in which the days accumulate, not one that always returns to the original amount and adds the same content over and over again.
Updated function Algorithm
In this edit I have modified some details and redundancies, and also fixed some bugs that are present in the original code.
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False # boolean flag whose logic tries to establish whether the reference year is a leap year
aux_date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
for i_month in range(int(months)):
i_month = i_month + 1 # I add a unit since the months are "numerical quantities", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python
m1 = re.search( r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})", str(aux_date), re.IGNORECASE, )
if m1:
ref_year, ref_month = ( str(m1.groups()[0]).strip(), str( int(m1.groups()[1]) + 1).strip(), )
if( len(ref_month) == 1 ): ref_month = "0" + ref_month
if( int(ref_month) > 12 ): ref_month = "01"
print(ref_month)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
if ( int(ref_year) % 4 == 0 and int(ref_year) % 100 != 0 ) or ( int(ref_year) % 400 == 0 ): ref_year_is_leap_year = True # divisible by 4 and not by 100, or by 400, to determine whether it is a leap year
if ref_year_is_leap_year == True and ref_month == "02": n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))
return datetime.datetime.strftime(aux_date, "%Y-%m-%d")
| [
"Because at the end of every iteration of your for loop you are reconverting the value that is given in the parameter datestr and that value is never updated. You are also converting it to a string while trying to add a timedelta object. You should leave the value as a datetime object and convert to string once the for loop has finished if you still need to.\nJust change the variable used in the bottom assignment to aux_date to aux_date and remove all of the string conversions, that should at least get you going in the right direction.\nfor example:\nimport re, datetime\n\ndef add_months(datestr, months):\n ref_year, ref_month = \"\", \"\"\n ref_year_is_leap_year = False # condicional booleano, cuya logica binaria intenta establecer si es o no bisiesto el año tomado como referencia\n\n aux_date = datetime.datetime.strptime(datestr, \"%Y-%m-%d\")\n print(repr(aux_date))\n\n for i_month in range(int(months)):\n\n i_month = (\n i_month + 1\n ) # I add a unit since the months are \"numerical quantities\", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python\n\n m1 = re.search(\n r\"(?P<year>\\d*)-(?P<month>\\d{2})-(?P<startDay>\\d{2})\",\n str(aux_date),\n re.IGNORECASE,\n )\n if m1:\n ref_year, ref_month = (\n str(m1.groups()[0]).strip(),\n str(m1.groups()[1]).strip(),\n )\n\n number_of_days_in_each_month = {\n \"01\": \"31\",\n \"02\": \"28\",\n \"03\": \"31\",\n \"04\": \"30\",\n \"05\": \"31\",\n \"06\": \"30\",\n \"07\": \"31\",\n \"08\": \"31\",\n \"09\": \"30\",\n \"10\": \"31\",\n \"11\": \"30\",\n \"12\": \"31\",\n }\n\n n_days_in_this_i_month = number_of_days_in_each_month[ref_month]\n print(n_days_in_this_i_month) # nro days to increment in each i month iteration\n\n if (\n int(ref_year) % 4 == 0\n and int(ref_year) % 100 == 0\n and int(ref_year) % 400 != 0\n ):\n ref_year_is_leap_year = True # divisible entre 4 y 10 y no entre 400, para determinar que sea un año bisciesto\n if ref_year_is_leap_year == True and ref_month == \"02\":\n n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29\n\n aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))\n print(repr(aux_date))\n return datetime.datetime.strftime(aux_date, \"%Y-%m-%d\")\n\n\nprint(repr(add_months(\"2022-12-30\", \"3\")))\n\n\nOutput:\ndatetime.datetime(2022, 12, 30, 0, 0)\n31\ndatetime.datetime(2023, 1, 30, 0, 0)\n31\ndatetime.datetime(2023, 3, 2, 0, 0)\n31\ndatetime.datetime(2023, 4, 2, 0, 0)\ndatetime.datetime(2023, 4, 2, 0, 0)\n'2023-04-02'\n\n",
"So, as Alexander's answer already establishes, you weren't updating the date, so you were always adding to the same beginning date on each iteration. I took the liberty to clean up your code, using regex and converting to strings and back and for with the int's is the totally wrong approach here -- it misses the entire point of date-time objects, which is to encapsulate the information in a date. Just use those objects, not strings. Here is the same approach as your code using only datetime.datetime objects:\nimport datetime\n\ndef add_months(datestr, months):\n\n number_of_days_in_each_month = {\n 1 : 31,\n 2 : 28,\n 3 : 31,\n 4: 30,\n 5: 31,\n 6: 30,\n 7: 31,\n 8: 31,\n 9: 30,\n 10: 31,\n 11: 30,\n 12: 31,\n }\n\n date = datetime.datetime.strptime(datestr, \"%Y-%m-%d\")\n is_leap_year = False\n\n for i_month in range(1, int(months) + 1):\n\n ref_year, ref_month = date.year, date.month\n\n n_days = number_of_days_in_each_month[ref_month]\n\n if (\n ref_year % 4 == 0\n and ref_year % 100 == 0\n and ref_year % 400 != 0\n ):\n is_leap_year = True # divisible entre 4 y 10 y no entre 400, para determinar que sea un año bisciesto\n\n if is_leap_year and ref_month == 2: # febrero\n n_days += 1 # 28 --> 29\n\n date += datetime.timedelta(days=n_days)\n\n\n return date.strftime(\"%Y-%m-%d\")\n\n\nprint(add_months(\"2022-12-30\", \"3\"))\n\nI also made some stylistic changes to variable names. This is an art not a science, naming variables, and it always comes down to subjective opinion, but may I humbly submit my opinion about more legible names.\nAlso note, you had a comment to the effect of:\n\nI need the iter variable to start from 1 and not from 0 like the iter\nvariable in python\n\nThe iterating variable starts where you tell it to start, given the iterable you iterate over. range(N) will always start at zero, but it doesn't have to. You could iterate over [1, 2, 3], or better yet, range(1, N + 1).\nNote!\nYour algorithm is not working quite how one might expect, the output one would naturally expect is 2023-03-30\nI'll give you a hint, though, think about precisely which month's days you need to add to the current month.... n_days = number_of_days_in_each_month[ref_month]....\n"
] | [
2,
2
] | [] | [] | [
"for_loop",
"loops",
"python",
"python_3.x",
"variables"
] | stackoverflow_0074665124_for_loop_loops_python_python_3.x_variables.txt |
Q:
pandas/regex: Remove the string after the hyphen or parenthesis character (inclusive) and keep the string after the comma in a pandas dataframe
I have a dataframe containing one column which has multiple strings separated by commas. In each string, I want to remove everything after the hyphen (including the hyphen itself). The main point is that in some cases there is no hyphen but there is an opening parenthesis, so I want to cut at that as well, while keeping everything after each comma. How can I do it? You can see this case in the last row.
dd = pd.DataFrame()
dd['sin'] = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']
Expected output
dd['sin']
# output
U147 U35
P01 P02
P3 P032
P034
P23F5 P04
I want to keep only the part of each string before the hyphen, parenthesis, or any other special character.
A:
The following code seems to reproduce your desired result:
dd['sin'] = dd['sin'].str.split(", ")
dd = dd.explode('sin').reset_index()
dd['sin'] = dd['sin'].str.replace('\W.*', '', regex=True)
Which gives dd['sin'] as:
0 U147
1 U35
2 P01
3 P02
4 P3
5 P032
6 P034
7 P23F5
8 P04
Name: sin, dtype: object
The call of .reset_index() in the second line is optional depending on whether you want to preserve which row that piece of the string came from.
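If you need the pieces back as one space-joined string per original row (as in the expected output), you can regroup on the preserved index. A small sketch, assuming the reset_index() call above kept the original row number in the 'index' column:
dd = dd.groupby('index')['sin'].agg(' '.join).reset_index(drop=True).to_frame()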
A:
You can use the following regex:
r"-\d{2}|\([EBP]CM\)|\s"
Here is the code:
sin = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']
dd = pd.DataFrame()
dd['sin'] = sin
dd['sin'] = dd['sin'].str.replace(r'-\d{2}|\([EBP]CM\)|\s', '', regex=True)
print(dd)
OUTPUT:
sin
0 U147,U35
1 P01,P02
2 P3,P032
3 P034
4 P23F5,P04
EDIT
Or use this line to remove the comma:
dd['sin'] = dd['sin'].str.replace(r'-\d{2}|\([EBP]CM\)|\s', '', regex=True).str.replace(',',' ')
OUTPUT:
sin
0 U147 U35
1 P01 P02
2 P3 P032
3 P034
4 P23F5 P04
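Alternatively, a minimal sketch without regex that produces the space-joined rows shown in the expected output, assuming each token should be cut at the first hyphen or opening parenthesis:
dd['sin'] = dd['sin'].str.split(', ').apply(
    lambda parts: ' '.join(p.split('-')[0].split('(')[0] for p in parts))
print(dd['sin'].tolist())  # ['U147 U35', 'P01 P02', 'P3 P032', 'P034', 'P23F5 P04']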
| pandas/regex: Remove the string after the hyphen or parenthesis character (inclusive) and keep the string after the comma in a pandas dataframe | I have a dataframe containing one column which has multiple strings separated by commas. In each string, I want to remove everything after the hyphen (including the hyphen itself). The main point is that in some cases there is no hyphen but there is an opening parenthesis, so I want to cut at that as well, while keeping everything after each comma. How can I do it? You can see this case in the last row.
dd = pd.DataFrame()
dd['sin'] = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']
Expected output
dd['sin']
# output
U147 U35
P01 P02
P3 P032
P034
P23F5 P04
I want to keep only the part of each string before the hyphen, parenthesis, or any other special character.
| [
"The following code seems to reproduce your desired result:\ndd['sin'] = dd['sin'].str.split(\", \")\ndd = dd.explode('sin').reset_index()\ndd['sin'] = dd['sin'].str.replace('\\W.*', '', regex=True)\n\nWhich gives dd['sin'] as:\n0 U147\n1 U35\n2 P01\n3 P02\n4 P3\n5 P032\n6 P034\n7 P23F5\n8 P04\nName: sin, dtype: object\n\nThe call of .reset_index() in the second line is optional depending on whether you want to preserve which row that piece of the string came from.\n",
"You can use the following regex:\nr\"-\\d{2}|\\([EBP]CM\\)|\\s\"\n\n\n\nHere is the code:\nsin = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']\n\ndd = pd.DataFrame()\ndd['sin'] = sin\ndd['sin'] = dd['sin'].str.replace(r'-\\d{2}|\\([EBP]CM\\)|\\s', '', regex=True)\nprint(dd)\n\nOUTPUT:\n sin\n0 U147,U35\n1 P01,P02\n2 P3,P032\n3 P034\n4 P23F5,P04\n\n\n\n\nEDIT\nOr use this line to remove the comma:\ndd['sin'] = dd['sin'].str.replace(r'-\\d{2}|\\([EBP]CM\\)|\\s', '', regex=True).str.replace(',',' ')\n\nOUTPUT:\n sin\n0 U147 U35\n1 P01 P02\n2 P3 P032\n3 P034\n4 P23F5 P04\n\n"
] | [
1,
0
] | [] | [] | [
"pandas",
"python",
"replace"
] | stackoverflow_0074664899_pandas_python_replace.txt |
Q:
pandas.read_excel parameter "sheet_name" not working
According to pandas doc for 0.21+, pandas.read_excel has a parameter sheet_name that allows specifying which sheet is read. But when I am trying to read the second sheet from an excel file, no matter how I set the parameter (sheet_name = 1, sheet_name = 'Sheet2'), the dataframe always shows the first sheet, and passing a list of indices (sheet_name = [0, 1]) does not return a dictionary of dataframes but still the first sheet. What might be the problem here?
A:
It looks like you're using an old version of pandas, where this parameter was called sheetname.
So try changing your code to:
df = pd.read_excel(file_with_data, sheetname=sheet_with_data)
It should work properly.
A:
You can try to use pd.ExcelFile:
xls = pd.ExcelFile('path_to_file.xls')
df1 = pd.read_excel(xls, 'Sheet1')
df2 = pd.read_excel(xls, 'Sheet2')
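With a recent pandas you can also pass sheet_name=None to read every sheet at once. A small sketch (the path is a placeholder) that returns a dict mapping sheet names to DataFrames:
import pandas as pd

sheets = pd.read_excel('path_to_file.xls', sheet_name=None)  # all sheets
print(list(sheets.keys()))  # sheet names found in the workbook
df2 = sheets['Sheet2']      # pick one by name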
A:
This works:
df = pd.read_excel(open(file_path_name, 'rb'), sheetname=sheet_name)
file_path_name = your file
sheet_name = your sheet name
This does not work for me:
df = pd.read_excel(open(file_path_name, 'rb'), sheet_name=sheet_name)
It gave me only the first sheet, no matter how I defined sheet_name.
--> it is a known error:
https://github.com/pandas-dev/pandas/issues/17107
A:
At the terminal, type the following first, then re-run your program:
pip install xlrd
A:
I also faced this problem until I found this solution:
rd = pd.read_excel(excel_file, sheet_name=['Sheet2'])
Here excel_file means the file name.
The filename should be the full path to the file.
Make sure to use two backslashes (\\) instead of just one!
In my case, this works.
A:
I would just use double quotes like this.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
| pandas.read_excel parameter "sheet_name" not working | According to pandas doc for 0.21+, pandas.read_excel has a parameter sheet_name that allows specifying which sheet is read. But when I am trying to read the second sheet from an excel file, no matter how I set the parameter (sheet_name = 1, sheet_name = 'Sheet2'), the dataframe always shows the first sheet, and passing a list of indices (sheet_name = [0, 1]) does not return a dictionary of dataframes but still the first sheet. What might be the problem here?
| [
"It looks like you're using the old version of Python.\nSo try to change your code \ndf = pd.read_excel(file_with_data, sheetname=sheet_with_data)\n\nIt should work properly.\n",
"You can try to use pd.ExcelFile:\nxls = pd.ExcelFile('path_to_file.xls')\ndf1 = pd.read_excel(xls, 'Sheet1')\ndf2 = pd.read_excel(xls, 'Sheet2')\n\n",
"This works:\ndf = pd.read_excel(open(file_path_name), 'rb'), sheetname = sheet_name)\n\nfile_path_name = your file\nsheet_name = your sheet name\n\nThis does not for me:\ndf = pd.read_excel(open(file_path_name), 'rb'), sheet_name = sheet_name)\n\nGave me only the first sheet, no matter how I defined sheet_name.\n--> it is an known error:\nhttps://github.com/pandas-dev/pandas/issues/17107\n",
"Try at Terminal, type the following first, then re-run your program:\npip install xlrd\n",
"I also faced this problem until I found this solution:\nrd=pd.read_excel(excel_file,sheet_name=['Sheet2']),\n\nHere excel_file means \"file name\".\nThe filename should be the full path to the file.\nMake sure to use two backslashes (\\\\) instead of just one!\nIn my case, this works.\n",
"I would just use double quotes like this.\n# Returns a DataFrame\npd.read_excel(\"path_to_file.xls\", sheet_name=\"Sheet1\")\n\n"
] | [
22,
7,
2,
1,
0,
0
] | [] | [] | [
"excel",
"pandas",
"python"
] | stackoverflow_0047975866_excel_pandas_python.txt |
Q:
Python evdev [Error 16] Device or resource busy
I connected a 2D barcode scanner to a Raspberry Pi 4 Model B and tried to scan a few codes. Using the evdev library I got the output successfully. But the issue is that after 3 consecutive scans it throws an exception saying "[Error 16] Device or resource busy". I can't find the root cause of this issue and have tried many troubleshooting methods, but nothing seems to work. Can anyone please help me? Here is the code I used.
from evdev import InputDevice, categorize, ecodes
from datetime import datetime
import calendar
scancodes = {
# Scancode: ASCIICode
0: None, 1: u'ESC', 2: u'1', 3: u'2', 4: u'3', 5: u'4', 6: u'5', 7: u'6', 8: u'7', 9: u'8',
10: u'9', 11: u'0', 12: u'-', 13: u'=', 14: u'BKSP', 15: u'TAB', 16: u'q', 17: u'w', 18: u'e', 19: u'r',
20: u't', 21: u'y', 22: u'u', 23: u'i', 24: u'o', 25: u'p', 26: u'[', 27: u']', 28: u'CRLF', 29: u'LCTRL',
30: u'a', 31: u's', 32: u'd', 33: u'f', 34: u'g', 35: u'h', 36: u'j', 37: u'k', 38: u'l', 39: u';',
40: u'"', 41: u'`', 42: u'LSHFT', 43: u'\\', 44: u'z', 45: u'x', 46: u'c', 47: u'v', 48: u'b', 49: u'n',
50: u'm', 51: u',', 52: u'.', 53: u'/', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
capscodes = {
0: None, 1: u'ESC', 2: u'!', 3: u'@', 4: u'#', 5: u'$', 6: u'%', 7: u'^', 8: u'&', 9: u'*',
10: u'(', 11: u')', 12: u'_', 13: u'+', 14: u'BKSP', 15: u'TAB', 16: u'Q', 17: u'W', 18: u'E', 19: u'R',
20: u'T', 21: u'Y', 22: u'U', 23: u'I', 24: u'O', 25: u'P', 26: u'{', 27: u'}', 28: u'CRLF', 29: u'LCTRL',
30: u'A', 31: u'S', 32: u'D', 33: u'F', 34: u'G', 35: u'H', 36: u'J', 37: u'K', 38: u'L', 39: u':',
40: u'\'', 41: u'~', 42: u'LSHFT', 43: u'|', 44: u'Z', 45: u'X', 46: u'C', 47: u'V', 48: u'B', 49: u'N',
50: u'M', 51: u'<', 52: u'>', 53: u'?', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
class scan_barcode:
def __init__(self,devicePath):
self.devicePath = devicePath
def readBarcode(self):
dev = InputDevice(self.devicePath)
dev.grab() # grab provides exclusive access to the device
x = ''
caps = False
for event in dev.read_loop():
if event.type == ecodes.EV_KEY:
data = categorize(event) # Save the event temporarily to introspect it
if data.scancode == 42:
if data.keystate == 1:
caps = True
if data.keystate == 0:
caps = False
if data.keystate == 1: # Down events only
if caps:
key_lookup = u'{}'.format(capscodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
else:
key_lookup = u'{}'.format(scancodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
if (data.scancode != 42) and (data.scancode != 28):
x += key_lookup
if(data.scancode == 28):
return(x)
scanned_data = scan_barcode('/dev/input/event0')
def scanner_function():
try:
value = scanned_data.readBarcode()
print(f"Scanned value:{str(value)}")
except Exception as e:
print(e)
pass
while True:
scanner_function()
Even though I pass on the exception, it doesn't let me move on to other tasks. The entire process stops here.
This is the output:
Scanned value: 4568hidhXGu
Scanned value: 1238fujXjje75
Scanned value: 789665
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
A:
I am not sure the problem is related to your code; I think it is more related to your scanner. I have tested your script with the R32 QR Code reader (https://www.sycreader.com/en/3650/) and it works perfectly.
What scanner type are you using?
Result:
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.035355
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.675270
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:14.563287
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.007284
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.799299
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:20.959301
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:21.591286
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:24.515289
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:26.331292
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:31.323339
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:32.747289
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:34.495291
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:36.367294
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:37.903286
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:39.507295
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:41.099288
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:42.575295
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:44.123283
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:45.579286
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:47.055336
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:48.671301
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:49.983288
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:52.779284
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:54.755299
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:56.159286
A:
The error you're encountering is due to the fact that the barcode scanner is still a "grabbed" device: most likely your own previous readBarcode call grabbed it and never released it, so it cannot be grabbed again. In order to fix this, you can try releasing the device before attempting to use it again. Here is an example of how you can do this:
import time

# Release the device
dev.ungrab()

# Wait for a few seconds to allow the device to be released
time.sleep(2)

# Attempt to grab the device again
dev.grab()
You can insert this code before the for loop in your readBarcode method to try and fix the issue.
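Alternatively, here is a sketch of the read loop restructured so the device is opened and grabbed once and always released, which avoids re-grabbing a still-busy device on every call (read_one_barcode is a hypothetical helper standing in for your readBarcode loop body):
from evdev import InputDevice

dev = InputDevice('/dev/input/event0')
dev.grab()  # exclusive access, taken once
try:
    while True:
        value = read_one_barcode(dev)  # read key events until ENTER (scancode 28)
        print(f"Scanned value: {value}")
finally:
    dev.ungrab()  # always release, even on error or Ctrl+C
    dev.close()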
| Python evdev [Error 16] Device or resource busy | I connected a 2D barcode scanner to a Raspberry Pi 4 Model B and tried to scan a few codes. Using the evdev library I got the output successfully. But the issue is that after 3 consecutive scans it throws an exception saying "[Error 16] Device or resource busy". I can't find the root cause of this issue and have tried many troubleshooting methods, but nothing seems to work. Can anyone please help me? Here is the code I used.
from evdev import InputDevice, categorize, ecodes
from datetime import datetime
import calendar
scancodes = {
# Scancode: ASCIICode
0: None, 1: u'ESC', 2: u'1', 3: u'2', 4: u'3', 5: u'4', 6: u'5', 7: u'6', 8: u'7', 9: u'8',
10: u'9', 11: u'0', 12: u'-', 13: u'=', 14: u'BKSP', 15: u'TAB', 16: u'q', 17: u'w', 18: u'e', 19: u'r',
20: u't', 21: u'y', 22: u'u', 23: u'i', 24: u'o', 25: u'p', 26: u'[', 27: u']', 28: u'CRLF', 29: u'LCTRL',
30: u'a', 31: u's', 32: u'd', 33: u'f', 34: u'g', 35: u'h', 36: u'j', 37: u'k', 38: u'l', 39: u';',
40: u'"', 41: u'`', 42: u'LSHFT', 43: u'\\', 44: u'z', 45: u'x', 46: u'c', 47: u'v', 48: u'b', 49: u'n',
50: u'm', 51: u',', 52: u'.', 53: u'/', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
capscodes = {
0: None, 1: u'ESC', 2: u'!', 3: u'@', 4: u'#', 5: u'$', 6: u'%', 7: u'^', 8: u'&', 9: u'*',
10: u'(', 11: u')', 12: u'_', 13: u'+', 14: u'BKSP', 15: u'TAB', 16: u'Q', 17: u'W', 18: u'E', 19: u'R',
20: u'T', 21: u'Y', 22: u'U', 23: u'I', 24: u'O', 25: u'P', 26: u'{', 27: u'}', 28: u'CRLF', 29: u'LCTRL',
30: u'A', 31: u'S', 32: u'D', 33: u'F', 34: u'G', 35: u'H', 36: u'J', 37: u'K', 38: u'L', 39: u':',
40: u'\'', 41: u'~', 42: u'LSHFT', 43: u'|', 44: u'Z', 45: u'X', 46: u'C', 47: u'V', 48: u'B', 49: u'N',
50: u'M', 51: u'<', 52: u'>', 53: u'?', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
class scan_barcode:
def __init__(self,devicePath):
self.devicePath = devicePath
def readBarcode(self):
dev = InputDevice(self.devicePath)
dev.grab() # grab provides exclusive access to the device
x = ''
caps = False
for event in dev.read_loop():
if event.type == ecodes.EV_KEY:
data = categorize(event) # Save the event temporarily to introspect it
if data.scancode == 42:
if data.keystate == 1:
caps = True
if data.keystate == 0:
caps = False
if data.keystate == 1: # Down events only
if caps:
key_lookup = u'{}'.format(capscodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
else:
key_lookup = u'{}'.format(scancodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
if (data.scancode != 42) and (data.scancode != 28):
x += key_lookup
if(data.scancode == 28):
return(x)
scanned_data = scan_barcode('/dev/input/event0')
def scanner_function():
try:
value = scanned_data.readBarcode()
print(f"Scanned value:{str(value)}")
except Exception as e:
print(e)
pass
while True:
scanner_function()
Even though I pass on the exception, it doesn't let me move on to other tasks. The entire process stops here.
This is the output:
Scanned value: 4568hidhXGu
Scanned value: 1238fujXjje75
Scanned value: 789665
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
| [
"I am not sure if the problem is related to your code. I think it is more related to your scanner. I have tested your script with the R32 QR Code reader (https://www.sycreader.com/en/3650/) and this is working perfect.\nWhat scanner type are you using?\nResult:\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.035355\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.675270\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:14.563287\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.007284\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.799299\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:20.959301\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:21.591286\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:24.515289\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:26.331292\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:31.323339\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:32.747289\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:34.495291\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:36.367294\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:37.903286\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:39.507295\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:41.099288\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:42.575295\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:44.123283\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:45.579286\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:47.055336\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:48.671301\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:49.983288\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:52.779284\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:54.755299\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:56.159286\n\n",
"The error you're encountering is due to the fact that the barcode scanner is a \"grabbed\" device, meaning that it is currently in use by another process and cannot be accessed. In order to fix this, you can try releasing the device before attempting to use it again. Here is an example of how you can do this:\n# Release the device\ndev.ungrab()\n\n# Wait for a few seconds to allow the device to be released\ntime.sleep(2)\n\n# Attempt to grab the device again\ndev.grab()\n\nYou can insert this code before the for loop in your readBarcode method to try and fix the issue.\n"
] | [
0,
0
] | [] | [] | [
"barcode_scanner",
"evdev",
"linux",
"python",
"raspberry_pi4"
] | stackoverflow_0074325312_barcode_scanner_evdev_linux_python_raspberry_pi4.txt |
Q:
google foobar : 'please pass the coded messages'. What is wrong with my code?
I happened to see the google foobar challenges and
I'm struggling to solve a problem, 'please pass the coded messages'.
When submitting my solution code, I get the response that one of the 5 tests is failing.
I have really scrutinized my code, but I can't discover any error in it.
--Problem--
You need to pass a message to the bunny workers, but to avoid detection, the code you agreed to use is... obscure, to say the least. The bunnies are given food on standard-issue plates that are stamped with the numbers 0-9 for easier sorting, and you need to combine sets of plates to create the numbers in the code. The signal that a number is part of the code is that it is divisible by 3. You can do smaller numbers like 15 and 45 easily, but bigger numbers like 144 and 414 are a little trickier. Write a program to help yourself quickly create large numbers for use in the code, given a limited number of plates to work with.
You have L, a list containing some digits (0 to 9). Write a function solution(L) which finds the largest number that can be made from some or all of these digits and is divisible by 3. If it is not possible to make such a number, return 0 as the solution. L will contain anywhere from 1 to 9 digits. The same digit may appear multiple times in the list, but each element in the list may only be used once.
--Samples--
Input:
solution.solution([3, 1, 4, 1])
Output:
4311
Input:
solution.solution([3, 1, 4, 1, 5, 9])
Output:
94311
--My solution--
def jointer(l):
res=0
for i, v in enumerate(l):
if v==0:
res=res*10
else:
res+=v*10**i
return res
def solution(L):
L=sorted(L)
ll=[]
s=0
for i in L:
ll.append(i%3)
s+=i
r=s%3
if r==1:
if 1 in ll:
L.pop(ll.index(1))
else:
for _ in range(2):
L.pop(ll.index(2))
elif r==2:
if 2 in ll:
L.pop(ll.index(2))
else:
for _ in range(2):
L.pop(ll.index(1))
return jointer(L)
I'd like to ask you guys what the problem in this code is.
Thank you for your help in advance.
A:
You need to be thinking in terms of permutations of the digits in the input list. Bear in mind that the challenge states that "some or all" of the values may be used. So you need to be looking at permutations from 1 to the length of the input list (inclusive).
There's probably a more efficient way to do this but:
from itertools import permutations
def solution(digits):
_max = -1
for r in range(1, len(digits)+1):
for combo in permutations(digits, r=r):
v = 0
for n in combo:
v = v * 10 + n
if v % 3 == 0:
_max = max(_max, v)
return _max
print(solution([3, 1, 4, 1, 5, 9]))
Output:
94311
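Since the permutation count grows factorially, a sketch of the divisibility-rule approach (the one the question's own code was aiming for) may help. It assumes the usual trick that a number is divisible by 3 iff its digit sum is, so we drop the fewest, smallest digits needed to fix the remainder:
def solution(L):
    digits = sorted(L)  # ascending, so the smallest digits are dropped first
    r = sum(digits) % 3
    if r:
        same = [d for d in digits if d % 3 == r]
        other = [d for d in digits if d % 3 == 3 - r]
        if same:
            digits.remove(same[0])      # drop one digit with remainder r
        elif len(other) >= 2:
            digits.remove(other[0])     # or two digits with the other remainder
            digits.remove(other[1])
        else:
            return 0
    if not digits:
        return 0
    return int(''.join(map(str, sorted(digits, reverse=True))))

print(solution([3, 1, 4, 1, 5, 9]))  # 94311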
| google foobar : 'please pass the coded messages'. What is wrong with my code? | I happened to see the google foobar challenges and
I'm struggling to solve a problem, 'please pass the coded messages'.
When submitting my solution code, I get the response that one of the 5 tests is failing.
I have really scrutinized my code, but I can't discover any error in it.
--Problem--
You need to pass a message to the bunny workers, but to avoid detection, the code you agreed to use is... obscure, to say the least. The bunnies are given food on standard-issue plates that are stamped with the numbers 0-9 for easier sorting, and you need to combine sets of plates to create the numbers in the code. The signal that a number is part of the code is that it is divisible by 3. You can do smaller numbers like 15 and 45 easily, but bigger numbers like 144 and 414 are a little trickier. Write a program to help yourself quickly create large numbers for use in the code, given a limited number of plates to work with.
You have L, a list containing some digits (0 to 9). Write a function solution(L) which finds the largest number that can be made from some or all of these digits and is divisible by 3. If it is not possible to make such a number, return 0 as the solution. L will contain anywhere from 1 to 9 digits. The same digit may appear multiple times in the list, but each element in the list may only be used once.
--Samples--
Input:
solution.solution([3, 1, 4, 1])
Output:
4311
Input:
solution.solution([3, 1, 4, 1, 5, 9])
Output:
94311
--My solution--
def jointer(l):
res=0
for i, v in enumerate(l):
if v==0:
res=res*10
else:
res+=v*10**i
return res
def solution(L):
L=sorted(L)
ll=[]
s=0
for i in L:
ll.append(i%3)
s+=i
r=s%3
if r==1:
if 1 in ll:
L.pop(ll.index(1))
else:
for _ in range(2):
L.pop(ll.index(2))
elif r==2:
if 2 in ll:
L.pop(ll.index(2))
else:
for _ in range(2):
L.pop(ll.index(1))
return jointer(L)
I'd like to ask you guys what the problem in this code is.
Thank you for your help in advance.
| [
"You need to be thinking in terms of permutations of the digits in the input list. Bear in mind that the challenge states that \"some or all\" of the values may be used. So you need to be looking at permutations from 1 to the length of the input list (inclusive).\nThere's probably a more efficient way to do this but:\nfrom itertools import permutations\n\ndef solution(digits):\n _max = -1\n for r in range(1, len(digits)+1):\n for combo in permutations(digits, r=r):\n v = 0\n for n in combo:\n v = v * 10 + n\n if v % 3 == 0:\n _max = max(_max, v)\n return _max\n\nprint(solution([3, 1, 4, 1, 5, 9]))\n\nOutput:\n94311\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074665209_python.txt |
Q:
How can I use Selenium, Webdriver-manager, Chromedriver on virtual environment?
I am using Github codespace for creating an automated web scraping application using Webdriver-manager webdriver-manager with Selenium.
I have tried: How can we use Selenium Webdriver in collab.research.google.com?
!pip install selenium
!apt-get update # to update ubuntu to correctly run apt install
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
import sys
sys.path.insert(0,'/usr/lib/chromium-browser/chromedriver')
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('chromedriver',options=chrome_options)
wd.get("https://www.webite-url.com")
But it did not work!
Can you help me in setting up Webdriver-manager github codespaces or share some link?
A:
Please consider rephrasing the question in a better way. The problematic is not clear.
A:
please check https://stackoverflow.com/posts/46929945
this should work for you,
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(CHROMEDRIVER_PATH, options=options)
| How can I use Selenium, Webdriver-manager, Chromedriver on virtual environment? | I am using Github codespace for creating an automated web scraping application using Webdriver-manager webdriver-manager with Selenium.
I have tried: How can we use Selenium Webdriver in collab.research.google.com?
!pip install selenium
!apt-get update # to update ubuntu to correctly run apt install
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
import sys
sys.path.insert(0,'/usr/lib/chromium-browser/chromedriver')
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('chromedriver',options=chrome_options)
wd.get("https://www.webite-url.com")
But it did not work!
Can you help me in setting up Webdriver-manager github codespaces or share some link?
| [
"Please consider rephrasing the question in a better way. The problematic is not clear.\n",
"please check https://stackoverflow.com/posts/46929945\nthis should work for you,\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\noptions = Options()\noptions.headless = True\ndriver = webdriver.Chrome(CHROMEDRIVER_PATH, options=options)\n\n"
] | [
0,
0
] | [] | [] | [
"codespaces",
"jupyter_notebook",
"python"
] | stackoverflow_0074657070_codespaces_jupyter_notebook_python.txt |
Q:
How can I increase my model performance in classification
Hi, I am facing the problem of building a model on a dataset that tells whether a person feels cold or not. The dataset given to me is known to be a bad dataset, and I want to maximize the accuracy and the precision of the model.
Right now the accuracy is 53% and the precision is 19%. The column description is:
Age AMV Met Clo Dwpt plane Rad-temp AirTemp MeanRad-temp Velocity ATurb VaporPressure Humidity PMV TaOutdoor RhOutdoor
mean 308.637202 0.100735 1.066003 0.778492 13.621447 0.217785 23.178861 23.450261 0.112439 18.265870 5.123996 42.529203 -0.073676 17.174585 61.100365
std 680.115105 1.102099 0.428978 0.221992 5.903044 1.041164 1.433390 1.502953 0.079041 25.041109 8.156136 15.061075 0.538016 10.665071 24.703896
min 0.000000 -3.000000 0.100000 0.150000 -1.953000 -7.420000 15.960000 16.610000 0.000000 0.000000 0.000000 7.400000 -4.170000 -24.900000 0.000000
25% 26.000000 -0.700000 1.000000 0.630000 9.600000 -0.230000 22.300000 22.588684 0.068000 0.320000 1.226667 29.300000 -0.400000 11.350000 53.769937
50% 35.000000 0.000000 1.100000 0.751700 14.100000 0.200000 23.136667 23.358438 0.100000 0.500000 1.550667 43.280000 -0.030000 18.200000 68.795799
75% 45.000000 1.000000 1.241468 0.880000 17.337500 0.600000 23.900000 24.250000 0.140000 38.815000 1.985333 55.500125 0.260000 26.600000 76.950000
max 1996.000000 3.000000 4.500000 2.130000 26.896750 11.700000 31.000000 37.445000 1.880000 102.450000 27.700000 79.300000 2.500000 32.350000 100.350000
I removed all the outliers using the IQR and even smoothed the data using MinMax scaling afterwards.
I encoded AMV for classification; its scale runs from -3 -2 -1 0 1 2 3, ranging from very cold to hot, but all values in AMV reside in 0 and 1. What can I do to increase accuracy and precision? Sorry if I couldn't explain it well, but I am really hoping for any help if possible.
A:
It sounds like you're trying to build a machine learning model to predict whether a person is feeling cold or not based on the dataset you provided. To improve the accuracy and precision of your model, there are several steps you can take.
First, make sure you're using the right evaluation metrics for your problem. Accuracy is not always the best metric to use, especially if your dataset is imbalanced (i.e. if there are significantly more instances of one class than the other). In this case, you may want to consider using precision and recall, which can give you a better understanding of how well your model is performing.
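For example, a minimal sketch with scikit-learn (assuming y_test and y_pred are your held-out labels and model predictions) that reports per-class precision and recall alongside accuracy:
from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred))  # precision, recall, F1 per class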
Next, you should try to improve the quality of your training data. This can include removing outliers, smoothing the data, and performing other preprocessing steps. You should also consider using more sophisticated techniques such as feature selection and dimensionality reduction to improve the predictive power of your model.
Finally, you should experiment with different machine learning algorithms to see which ones perform best on your dataset. This can involve trying out different model architectures, hyperparameters, and other settings to find the combination that produces the best results.
Overall, improving the accuracy and precision of your model will require a combination of data preprocessing, feature engineering, and algorithm selection. By following these steps, you should be able to improve the performance of your model and get better results.
| How can I increase my model performance in classification | Hi, I am facing the problem of building a model on a dataset that tells whether a person feels cold or not. The dataset given to me is known to be a bad dataset, and I want to maximize the accuracy and the precision of the model.
Right now the accuracy is 53% and the precision is 19%. The column description is:
Age AMV Met Clo Dwpt plane Rad-temp AirTemp MeanRad-temp Velocity ATurb VaporPressure Humidity PMV TaOutdoor RhOutdoor
mean 308.637202 0.100735 1.066003 0.778492 13.621447 0.217785 23.178861 23.450261 0.112439 18.265870 5.123996 42.529203 -0.073676 17.174585 61.100365
std 680.115105 1.102099 0.428978 0.221992 5.903044 1.041164 1.433390 1.502953 0.079041 25.041109 8.156136 15.061075 0.538016 10.665071 24.703896
min 0.000000 -3.000000 0.100000 0.150000 -1.953000 -7.420000 15.960000 16.610000 0.000000 0.000000 0.000000 7.400000 -4.170000 -24.900000 0.000000
25% 26.000000 -0.700000 1.000000 0.630000 9.600000 -0.230000 22.300000 22.588684 0.068000 0.320000 1.226667 29.300000 -0.400000 11.350000 53.769937
50% 35.000000 0.000000 1.100000 0.751700 14.100000 0.200000 23.136667 23.358438 0.100000 0.500000 1.550667 43.280000 -0.030000 18.200000 68.795799
75% 45.000000 1.000000 1.241468 0.880000 17.337500 0.600000 23.900000 24.250000 0.140000 38.815000 1.985333 55.500125 0.260000 26.600000 76.950000
max 1996.000000 3.000000 4.500000 2.130000 26.896750 11.700000 31.000000 37.445000 1.880000 102.450000 27.700000 79.300000 2.500000 32.350000 100.350000
I removed all the outliers using the IQR and even smoothed the data using MinMax scaling afterwards.
I encoded AMV for classification; its scale runs from -3 -2 -1 0 1 2 3, ranging from very cold to hot, but all values in AMV reside in 0 and 1. What can I do to increase accuracy and precision? Sorry if I couldn't explain it well, but I am really hoping for any help if possible.
| [
"It sounds like you're trying to build a machine learning model to predict whether a person is feeling cold or not based on the dataset you provided. To improve the accuracy and precision of your model, there are several steps you can take.\nFirst, make sure you're using the right evaluation metrics for your problem. Accuracy is not always the best metric to use, especially if your dataset is imbalanced (i.e. if there are significantly more instances of one class than the other). In this case, you may want to consider using precision and recall, which can give you a better understanding of how well your model is performing.\nNext, you should try to improve the quality of your training data. This can include removing outliers, smoothing the data, and performing other preprocessing steps. You should also consider using more sophisticated techniques such as feature selection and dimensionality reduction to improve the predictive power of your model.\nFinally, you should experiment with different machine learning algorithms to see which ones perform best on your dataset. This can involve trying out different model architectures, hyperparameters, and other settings to find the combination that produces the best results.\nOverall, improving the accuracy and precision of your model will require a combination of data preprocessing, feature engineering, and algorithm selection. By following these steps, you should be able to improve the performance of your model and get better results.\n"
] | [
0
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074665415_numpy_pandas_python.txt |
Q:
How do i loop through the fields of a form in python?
I am trying to find out how "complete" a users profile is as a percentage.
I want to loop through the fields of a form to see which are still left blank and return a completion percentage.
My question is how do I reference each form value in the loop without having to write out the name of each field?
Is this possible?
completeness = 0
length = 20
for x in form:
if form.fields.values[x] != '':
completeness += 1
percentage = (completeness / length) * 100
print(completeness)
print(percentage)
A:
To reference each form value in a loop without writing out the name of each field, you can iterate over the key-value pairs of the form's cleaned_data dictionary with items(). (Note that form.fields is the dictionary and values is one of its methods, so form.fields.values[x] or form.fields.values.items() would raise an error; also call form.is_valid() first so that cleaned_data is populated.)
Here is an example of how you could update your code:
completeness = 0
length = 20

if form.is_valid():
    for key, value in form.cleaned_data.items():
        if value not in (None, ""):
            completeness += 1

percentage = (completeness / length) * 100
print(completeness)
print(percentage)
Note that in the code above, items() is used to loop over the cleaned_data dictionary, and the key and value for each iteration are assigned to the key and value variables, respectively. This allows you to reference the value for each field without having to explicitly specify the field name.
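Alternatively, a short sketch that iterates over the form's bound fields directly (this works even before validation; it assumes a standard Django Form instance named form):
filled = sum(1 for field in form if field.value() not in (None, ""))
percentage = filled / len(form.fields) * 100
Here field.value() returns the field's submitted data (or its initial value on an unbound form), so empty fields count as incomplete.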
| How do i loop through the fields of a form in python? | I am trying to find out how "complete" a users profile is as a percentage.
I want to loop through the fields of a form to see which are still left blank and return a completion percentage.
My question is how do I reference each form value in the loop without having to write out the name of each field?
Is this possible?
completeness = 0
length = 20
for x in form:
if form.fields.values[x] != '':
completeness += 1
percentage = (completeness / length) * 100
print(completeness)
print(percentage)
| [
"To reference each form value in a loop without having to write out the name of each field in Python, you can use the items() method on the form.fields.values dictionary to iterate over the key-value pairs in the dictionary.\nHere is an example of how you could update your code to use the items() method to loop over the form:\ncompleteness = 0\nlength = 20\nfor key, value in form.fields.values.items():\n if value != \"\":\n completeness += 1\n\npercentage = (completeness / length) * 100\nprint(completeness)\nprint(percentage)\n\nNote that in the code above, the items() method is used to loop over the form.fields.values dictionary, and the key and value for each iteration are assigned to the key and value variables, respectively. This allows you to reference the value for each field without having to explicitly specify the field name.\n"
] | [
0
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0074665090_django_django_forms_python.txt |
Q:
why "if else" doesn't work in this piece of code
page = 1
img_count = 0
result_list = []
while True:
url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
response = requests.get(url=url, headers=headers)
data = response.json()
for item in data:
if page <= 11:
screenshots = item.get('screenshots')
img_count += len(screenshots)
for img in screenshots:
img.update({'url': f'https://landingfoliocom.imgix.net/{img.get("url")}'})
result_list.append(
{
'title': item.get('title'),
'description': item.get('slug'),
'url': item.get('url'),
'screenshots': screenshots
}
)
else:
print('test')
with open('result_list.json', 'a') as file:
json.dump(result_list, file, indent=4, ensure_ascii=False)
return f'[INFO] work finished. Images count is: {img_count}\n{"=" * 20}'
page += 1
print(f'[+] processed {page} ')
when executing code in the terminal, the page value is displayed, but even when it is greater than 11, the code for some reason does not proceed to the execution of the "else" section
What did I write wrong?
I was expecting code execution in the "else" section when the value of the Page variable was greater than 11
A:
The else block is not being skipped because of anything wrong with the if/else itself. Two things are happening: first, the else branch sits inside the for item in data: loop, so if the request for a page above 11 returns an empty list, the loop body (and therefore the else) never runs at all; second, the return statement inside the else exits the function the first time that branch does run, so nothing after it executes.
You can fix this by moving the return statement outside of the else block, like this:
page = 1
img_count = 0
result_list = []
while True:
url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
response = requests.get(url=url, headers=headers)
data = response.json()
for item in data:
if page <= 11:
screenshots = item.get('screenshots')
img_count += len(screenshots)
for img in screenshots:
img.update({'url': f'https://landingfoliocom.imgix.net/{img.get("url")}'})
result_list.append(
{
'title': item.get('title'),
'description': item.get('slug'),
'url': item.get('url'),
'screenshots': screenshots
}
)
else:
print('test')
with open('result_list.json', 'a') as file:
json.dump(result_list, file, indent=4, ensure_ascii=False)
if page > 11:
return f'[INFO] work finished. Images count is: {img_count}\n{"=" * 20}'
page += 1
print(f'[+] processed {page} ')
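A simpler sketch of the same logic (assuming, as in the original, that this runs inside a function with headers defined): bound the while loop itself and write the file once afterwards, so no else branch inside the item loop is needed:
page = 1
while page <= 11:
    url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
    data = requests.get(url=url, headers=headers).json()
    if not data:  # empty page, nothing left to collect
        break
    for item in data:
        ...  # process item exactly as above
    page += 1
    print(f'[+] processed {page} ')

with open('result_list.json', 'a') as file:
    json.dump(result_list, file, indent=4, ensure_ascii=False)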
| why "if else" doesn't work in this piece of code | page = 1
img_count = 0
result_list = []
while True:
url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
response = requests.get(url=url, headers=headers)
data = response.json()
for item in data:
if page <= 11:
screenshots = item.get('screenshots')
img_count += len(screenshots)
for img in screenshots:
img.update({'url': f'https://landingfoliocom.imgix.net/{img.get("url")}'})
result_list.append(
{
'title': item.get('title'),
'description': item.get('slug'),
'url': item.get('url'),
'screenshots': screenshots
}
)
else:
print('test')
with open('result_list.json', 'a') as file:
json.dump(result_list, file, indent=4, ensure_ascii=False)
return f'[INFO] work finished. Images count is: {img_count}\n{"=" * 20}'
page += 1
print(f'[+] processed {page} ')
when executing code in the terminal, the page value is displayed, but even when it is greater than 11, the code for some reason does not proceed to the execution of the "else" section
What did I write wrong?
I was expecting code execution in the "else" section when the value of the Page variable was greater than 11
| [
"The code in the else block will not be executed because the return statement is inside the else block. The return statement causes the function to immediately return a value and exit, so the code in the else block will never be executed.\nYou can fix this by moving the return statement outside of the else block, like this:\npage = 1\nimg_count = 0\nresult_list = []\n\nwhile True:\n url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'\n\n response = requests.get(url=url, headers=headers)\n data = response.json()\n\n for item in data:\n if page <= 11:\n\n screenshots = item.get('screenshots')\n img_count += len(screenshots)\n\n for img in screenshots:\n img.update({'url': f'https://landingfoliocom.imgix.net/{img.get(\"url\")}'})\n\n result_list.append(\n {\n 'title': item.get('title'),\n 'description': item.get('slug'),\n 'url': item.get('url'),\n 'screenshots': screenshots\n }\n )\n else:\n print('test')\n with open('result_list.json', 'a') as file:\n json.dump(result_list, file, indent=4, ensure_ascii=False)\n\n if page > 11:\n return f'[INFO] work finished. Images count is: {img_count}\\n{\"=\" * 20}'\n\n page += 1\n print(f'[+] processed {page} ')\n\n"
] | [
1
] | [] | [] | [
"if_statement",
"python"
] | stackoverflow_0074665438_if_statement_python.txt |
Q:
To find sum of 5 55 555 5555 .... n using python
We need to find the sum of the series 5 + 55 + 555 + ... up to n terms; for example, if n = 5 the last term is 55555.
A:
You can use string repetition (the * operator) to repeat the digit, convert back to an integer, and sum.
def find_sum(digit, max_repeats):
return sum(int(str(digit)*(i+1)) for i in range(max_repeats))
print(find_sum(5, 5))
#output 61725
A:
You can use the idea from the following algorithm:
def sum(n):
# This will be multiplied.
nbr=0
# This will be the return value.
ret=0
for i in range(0, n):
# Every iteration this will add 1 to the nbr: 1, 11, 111, etc.
nbr = nbr + 10 ** i
# Multiply by 5 and sum with the previous value: 5 + 55 + 555 + etc.
ret = nbr * 5 + ret
print(ret)
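For completeness, this series also has a closed form (a sketch; it follows from summing digit * (10**i - 1) / 9 for i = 1..n, and the division by 81 is always exact):
def find_sum_closed(digit, n):
    # 5 + 55 + 555 + ... with n terms when digit is 5
    return digit * (10 ** (n + 1) - 10 - 9 * n) // 81

print(find_sum_closed(5, 5))
#output 61725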
| To find sum of 5 55 555 5555 .... n using python | We need to find the sum of the series 5 + 55 + 555 + ... up to n terms; for example, if n = 5 the last term is 55555.
| [
"You can use the mul operator to repeat the digit, convert back to an integer and sum.\ndef find_sum(digit, max_repeats):\n return sum(int(str(digit)*(i+1)) for i in range(max_repeats))\n\nprint(find_sum(5, 5))\n#output 61725\n\n",
"You can use the idea from the following algorithm:\ndef sum(n):\n # This will be multiplied.\n nbr=0\n # This will be the return value.\n ret=0\n for i in range(0, n):\n # Every iteration this will add 1 to the nbr: 1, 11, 111, etc.\n nbr = nbr + 10 ** i\n # Multiply by 5 and sum with the previous value: 5 + 55 + 555 + etc.\n ret = nbr * 5 + ret\n print(ret)\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074665350_python.txt |
Q:
if else condition how to print even odd
Given an integer, n, perform the following conditional actions:
If n is odd, print "Weird".
If n is even and in the inclusive range of 2 to 5, print "Not Weird".
If n is even and in the inclusive range of 6 to 20, print "Weird".
If n is even and greater than 20, print "Not Weird".
Input Format:
A single line containing a positive integer, n.
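A minimal sketch of a solution (assuming the bounds above, which are the standard ones for this exercise):
n = int(input())

if n % 2 == 1:
    print("Weird")
elif 2 <= n <= 5:
    print("Not Weird")
elif 6 <= n <= 20:
    print("Weird")
else:  # even and greater than 20
    print("Not Weird")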
| if else condition how to print even odd | Given an integer, n, perform the following conditional actions:
If n is odd, print "Weird".
If n is even and in the inclusive range of 2 to 5, print "Not Weird".
If n is even and in the inclusive range of 6 to 20, print "Weird".
If n is even and greater than 20, print "Not Weird".
Input Format:
A single line containing a positive integer, n.
| [] | [] | [
"I believe something like this:\ndef conditional_print(text):\n number = int(text)\n if number % 2 == 1:\n print(\"Weird\")\n elif ... <= number <= ...:\n print(\"Not Weird\")\n elif ... <= number <= ...:\n print(\"Weird\")\n elif ... < number:\n print(\"Not Weird\")\nconditional_print(...)\n\nBut I'm missing some information, You will need to put this information in the place of the three dots.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0074665401_python.txt |
Q:
How to solve the error: 'tuple' object has no attribute 'decode' with django-channels
When I tried to follow the Channels tutorial in order to set up a Django website with WebSocket support, this error message appeared:
AttributeError: 'tuple' object has no attribute 'decode'
I just executed following code:
$ python3 manage.py shell
>>> import channels.layers
>>> channel_layer = channels.layers.get_channel_layer()
>>> from asgiref.sync import async_to_sync
>>> async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'})
Tutorial link:
https://channels.readthedocs.io/en/stable/tutorial/part_2.html
Environment:
Ubuntu 20.04LTS with Python 3.8.10
django 4.1.3
channels 4.0.0
channels-redis 4.0.0
daphne 4.0.0
asgiref 3.5.2
I think the problem may be caused by asgiref, but there is no documentation for reference.
A:
It looks like you are encountering an AttributeError when trying to send a message to a channel using the channels and asgiref libraries in Python. This error is raised when you try to call a method on an object that does not have that method.
In this case, it appears that the decode method is being called on a tuple object, but tuple objects do not have a decode method. This is likely caused by a mistake in your code, where you are trying to treat a tuple object as if it were a str object.
To fix this error, you will need to find the line of code where the decode method is being called on a tuple object and correct it. This may involve changing the type of the object, or simply using a different method that is appropriate for the object's type.
Without more information about your code and the context in which the error is occurring, it is difficult to provide a specific solution. However, the general steps to fix this error are:
Identify the line of code where the AttributeError is being raised.
Determine why the decode method is being called on a tuple object.
Correct the code to use a different method or object type that is appropriate for the situation.
Once you have fixed the code, you should be able to send messages to channels without encountering this error.
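For this specific stack (channels-redis 4.x), one commonly reported cause of 'tuple' object has no attribute 'decode' is configuring the Redis hosts as (host, port) tuples; newer channels-redis versions expect connection URL strings. A hedged settings sketch to try (it assumes Redis runs locally on the default port):
# settings.py
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        # a URL string instead of ("127.0.0.1", 6379)
        "CONFIG": {"hosts": ["redis://127.0.0.1:6379"]},
    },
}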
| How to solve the error: 'tuple' object has no attribute 'decode' with django-channels | When I tried to follow the Channels tutorial in order to set up a Django website with WebSocket support, this error message appeared:
AttributeError: 'tuple' object has no attribute 'decode'
I just executed following code:
$ python3 manage.py shell
>>> import channels.layers
>>> channel_layer = channels.layers.get_channel_layer()
>>> from asgiref.sync import async_to_sync
>>> async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'})
Tutorial link:
https://channels.readthedocs.io/en/stable/tutorial/part_2.html
Environment:
Ubuntu 20.04LTS with Python 3.8.10
django 4.1.3
channels 4.0.0
channels-redis 4.0.0
daphne 4.0.0
asgiref 3.5.2
I think the problem may be caused by asgiref, but there is no documentation for reference.
| [
"It looks like you are encountering an AttributeError when trying to send a message to a channel using the channels and asgiref libraries in Python. This error is raised when you try to call a method on an object that does not have that method.\nIn this case, it appears that the decode method is being called on a tuple object, but tuple objects do not have a decode method. This is likely caused by a mistake in your code, where you are trying to treat a tuple object as if it were a str object.\nTo fix this error, you will need to find the line of code where the decode method is being called on a tuple object and correct it. This may involve changing the type of the object, or simply using a different method that is appropriate for the object's type.\nWithout more information about your code and the context in which the error is occurring, it is difficult to provide a specific solution. However, the general steps to fix this error are:\n\nIdentify the line of code where the AttributeError is being raised.\nDetermine why the decode method is being called on a tuple object.\nCorrect the code to use a different method or object type that is appropriate for the situation.\n\nOnce you have fixed the code, you should be able to send messages to channels without encountering this error.\n"
] | [
0
] | [] | [] | [
"channels",
"django_channels",
"python"
] | stackoverflow_0074665490_channels_django_channels_python.txt |
Q:
Add a custom javascript to the FastAPI Swagger UI docs webpage in Python
I want to load my custom javascript file or code to the FastAPI Swagger UI webpage, to add some dynamic interaction when I create a FastAPI object.
For example, in Swagger UI on docs webpage I would like to
<script src="custom_script.js"></script>
or
<script> alert('worked!') </script>
I tried:
api = FastAPI(docs_url=None)
api.mount("/static", StaticFiles(directory="static"), name="static")
@api.get("/docs", include_in_schema=False)
async def custom_swagger_ui_html():
return get_swagger_ui_html(
openapi_url=api.openapi_url,
title=api.title + " - Swagger UI",
oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url,
swagger_js_url="/static/sample.js",
swagger_css_url="/static/sample.css",
)
but it is not working. Is there a way just to insert my custom javascript code on docs webpage of FastAPI Swagger UI with Python ?
A:
Finally I made it working. This is what I did:
import logging
import os

from fastapi import FastAPI
from fastapi.openapi.docs import (
    get_redoc_html,
    get_swagger_ui_html,
    get_swagger_ui_oauth2_redirect_html,
)
from fastapi.staticfiles import StaticFiles

logger = logging.getLogger(__name__)

api = FastAPI(docs_url=None)

path_to_static = os.path.join(os.path.dirname(__file__), 'static')
logger.info(f"path_to_static: {path_to_static}")
api.mount("/static", StaticFiles(directory=path_to_static), name="static")
@api.get("/docs", include_in_schema=False)
async def custom_swagger_ui_html():
return get_swagger_ui_html(
openapi_url=api.openapi_url,
title="My API",
oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url,
swagger_js_url="/static/custom_script.js",
# swagger_css_url="/static/swagger-ui.css",
# swagger_favicon_url="/static/favicon-32x32.png",
)
Important notes:
Make sure the static path is correct and that all your files are in the static folder; by default, the static folder should be in the same folder as the script that created the FastAPI object.
For example:
-parent_folder
Build_FastAPI.py
-static_folder
custom_script.js
custom_css.css
Find swagger-ui-bundle.js on the internet and copy-paste all of its content into custom_script.js, then add your custom JavaScript code at the beginning or at the end of custom_script.js.
For example:
setTimeout(function(){alert('My custom script is working!')}, 5000);
...
.....
/*! For license information please see swagger-ui-bundle.js.LICENSE.txt */
!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.SwaggerUIBundle=t():e.SwaggerUIBundle=t()}
...
.....
Save and refresh your browser, and you are all set!
IF SOMEBODY KNOWS A BETTER ANSWER YOU ARE WELCOME, THE BEST ONE WILL BE ACCEPTED!
| Add a custom javascript to the FastAPI Swagger UI docs webpage in Python | I want to load my custom javascript file or code to the FastAPI Swagger UI webpage, to add some dynamic interaction when I create a FastAPI object.
For example, in Swagger UI on docs webpage I would like to
<script src="custom_script.js"></script>
or
<script> alert('worked!') </script>
I tried:
api = FastAPI(docs_url=None)
api.mount("/static", StaticFiles(directory="static"), name="static")
@api.get("/docs", include_in_schema=False)
async def custom_swagger_ui_html():
return get_swagger_ui_html(
openapi_url=api.openapi_url,
title=api.title + " - Swagger UI",
oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url,
swagger_js_url="/static/sample.js",
swagger_css_url="/static/sample.css",
)
but it is not working. Is there a way just to insert my custom javascript code on docs webpage of FastAPI Swagger UI with Python ?
| [
"Finally I made it working. This is what I did:\nfrom fastapi.openapi.docs import (\n get_redoc_html,\n get_swagger_ui_html,\n get_swagger_ui_oauth2_redirect_html,\n)\nfrom fastapi.staticfiles import StaticFiles\n\napi = FastAPI(docs_url=None) \n\npath_to_static = os.path.join(os.path.dirname(__file__), 'static')\nlogger.info(f\"path_to_static: {path_to_static}\")\napi.mount(\"/static\", StaticFiles(directory=path_to_static), name=\"static\")\n\n@api.get(\"/docs\", include_in_schema=False)\n async def custom_swagger_ui_html():\n return get_swagger_ui_html(\n openapi_url=api.openapi_url,\n title=\"My API\",\n oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url,\n swagger_js_url=\"/static/custom_script.js\",\n # swagger_css_url=\"/static/swagger-ui.css\",\n # swagger_favicon_url=\"/static/favicon-32x32.png\",\n )\n\nImportant notes:\n\nMake sure the static path is correct and all your files are in the static folder, by default the static folder should be in the same folder with the script that created the FastAPI object.\n\nFor example:\n\n -parent_folder\n Build_FastAPI.py\n -static_folder\n custom_script.js\n custom_css.css\n\n\n\nFind the swagger-ui-bundle.js on internet and copy-paste all its content to custom_script.js, then add your custom javascript code at the beginning or at the end of custom_script.js.\n\nFor example:\nsetTimeout(function(){alert('My custom script is working!')}, 5000);\n...\n.....\n/*! For license information please see swagger-ui-bundle.js.LICENSE.txt */\n !function(e,t){\"object\"==typeof exports&&\"object\"==typeof module?module.exports=t():\"function\"==typeof define&&define.amd?define([],t):\"object\"==typeof exports?exports.SwaggerUIBundle=t():e.SwaggerUIBundle=t()}\n...\n.....\n\n\nSave and refresh your browser, you are all way up!\n\nIF SOMEBODY KNOWS A BETTER ANSWER YOUR ARE WELCOME, THE BEST ONE WILL BE ACCEPTED!\n"
] | [
1
] | [] | [] | [
"fastapi",
"python",
"swagger_ui"
] | stackoverflow_0074661044_fastapi_python_swagger_ui.txt |
Q:
Django QuerySet: additional field for counting value's occurence
I have a QuerySet object with 100 items, for each of them I need to know how many times a particular contract_number occurs in the contract_number field.
Example of expected output:
[{'contract_number': 123, 'contract_count': 2}, {'contract_number': 456, 'contract_count': 1} ...]
This means that value 123 occurs 2 times for the whole contract_number field.
Important thing: I cannot reduce the amount of items, so grouping won't work here.
The SQL equivalent for this would be an additional field contract_count as below:
SELECT *,
(SELECT count(contract_number) FROM table where t.contract_number = contract_number) as contract_count
FROM table as t
The question is how to do it with a Python object. After some research, I have found out that for more complex queries the Queryset extra method should be used. Below is one of my tries, but the result is not what I have expected
queryset = Tracker.objects.extra(
select={
'contract_count': '''
SELECT COUNT(*)
FROM table
WHERE contract_number = %s
'''
},select_params=(F('contract_number'),),)
My models.py:
class Tracker(models.Model):
contract_number = models.IntegerField()
EDIT:
The solution to my problem was Subquery()
A:
You can use annotation like this:
from django.db.models import Count
Tracker.objects.values('contract_number').annotate(contract_count=Count('contract_number')).order_by()
A:
Solutions:
from django.db.models import Count, OuterRef, Subquery

counttracker = Tracker.objects.values('contract_number').annotate(Count('contract_number'))
subquery = counttracker.filter(contract_number=OuterRef('contract_number')).values('contract_number__count')[:1]
tracker = Tracker.objects.annotate(count=Subquery(subquery))
| Django QuerySet: additional field for counting value's occurence | I have a QuerySet object with 100 items, for each of them I need to know how many times a particular contract_number occurs in the contract_number field.
Example of expected output:
[{'contract_number': 123, 'contract_count': 2}, {'contract_number': 456, 'contract_count': 1} ...]
This means that value 123 occurs 2 times for the whole contract_number field.
Important thing: I cannot reduce the amount of items, so grouping won't work here.
The SQL equivalent for this would be an additional field contract_count as below:
SELECT *,
(SELECT count(contract_number) FROM table where t.contract_number = contract_number) as contract_count
FROM table as t
The question is how to do it with a Python object. After some research, I have found out that for more complex queries the Queryset extra method should be used. Below is one of my tries, but the result is not what I have expected
queryset = Tracker.objects.extra(
select={
'contract_count': '''
SELECT COUNT(*)
FROM table
WHERE contract_number = %s
'''
},select_params=(F('contract_number'),),)
My models.py:
class Tracker(models.Model):
contract_number = models.IntegerField()
EDIT:
The solution to my problem was Subquery()
| [
"You can use annotation like this:\nfrom django.db.models import Count\nTracker.objects.values('contract_number').annotate(contract_count=Count('contract_number')).order_by()\n\n",
"Solutions:\ncounttraker=Traker.objects.values('contract_number').annotate(Count('contract_number'))\nsubquery=counttraker.filter(contract_number=OuterRef('contract_number').values('contract_number__count')[:1]\ntraker=Traker.objects.annotate(count=Subquery(subquery))\n\n"
] | [
5,
0
] | [] | [] | [
"django",
"django_queryset",
"extra",
"python",
"sql"
] | stackoverflow_0051150898_django_django_queryset_extra_python_sql.txt |
Q:
Arduino extract data from serial monitor
I wrote a simple controller for my robot in Python and now I want to send the data over the serial port to the Arduino. I managed to send the values, but now I want to know how to extract the data on the Arduino side. My Python code:
import PySimpleGUI as sg
import serial
import time
import math
ArmLänge = 205
TextX = 10
TextY = 10
TextZ = 10
font = ("Courier New", 11)
sg.theme("DarkBlue3")
sg.set_options(font=font)
ser = serial.Serial("COM6")
ser.flushInput()
layout = [
[sg.Text("Forward Kinematics:", font=("Helvetica", 12)), sg.Text(" Inverse Kinematics:", font=("Helvetica", 12))],
[sg.Text("X"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_X'), sg.Text("X"),sg.InputText(size=(10, 10), key="InputX")],
[sg.Text("Y"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Y'), sg.Text("Y"),sg.InputText(size=(10, 10), key="InputY")],
[sg.Text("Z"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Z'), sg.Text("Z"),sg.InputText(size=(10, 10), key="InputZ")],
[sg.Push(), sg.Button('Exit'), sg.Button("Move")],
]
window = sg.Window("Controller", layout, finalize=True)
window['SLIDER_X'].bind('<ButtonRelease-1>', ' Release')
window['SLIDER_Y'].bind('<ButtonRelease-1>', ' Release')
window['SLIDER_Z'].bind('<ButtonRelease-1>', ' Release')
while True:
event, values = window.read()
if event in (sg.WINDOW_CLOSED, 'Exit'):
break
elif event == 'SLIDER_X Release':
print("X Value:", values["SLIDER_X"])
elif event == 'SLIDER_Y Release':
print("Y Value:", values["SLIDER_Y"])
elif event == 'SLIDER_Z Release':
print("Z Value:", values["SLIDER_Z"])
#elif event == "Move":
#print("IK X:", values['InputX'])
#print("IK Y:", values['InputY'])
#print("IK Z:", values['InputZ'])
valX = int(values["SLIDER_X"]/2)
valY = int(values["SLIDER_Y"]/2)
valZ = int(values["SLIDER_Z"]/2)
Data = [1,valX,valY,valZ]
print(Data)
ser.write(Data)
if values['InputX'] >= str(1):
x = float(values['InputX'])
y = float(values['InputY'])
z = float(values['InputZ'])
h = round(math.sqrt(x ** 2 + y ** 2))
joint2 = round(math.degrees(math.atan(y / x)))
joint3 = round(math.degrees(math.acos((h / 2) / (ArmLänge / 2))))
print("----Ergebnis:----")
print("Höhe:", h)
print("Joint2:", joint2,"°")
print("Joint3:", joint3,"°")
IKData = [2, h, joint2, joint3]
print(IKData)
ser.write(IKData)
window.close()
ser.close()
It may not be the best code but it works. I need to extract every number for example [1, 20, 45, 30]. How can I do that?
A:
I am assuming that the data goes to Serial in this Format as String:
[1,valX,valY,valZ]
After reading the line from Serial (for example with Serial.readStringUntil('\n')) into an Arduino String, you can assign the values to the desired variables using the sscanf() function.
The function works like this -
sscanf(const char *str, const char *format, ...)
So, here it would work like this -
int data_val1, data_val2, data_val3;

// the format string must be quoted and the targets passed by address;
// .c_str() converts the Arduino String to the const char* sscanf expects
sscanf(your_serial_data.c_str(), "[1,%d,%d,%d]", &data_val1, &data_val2, &data_val3);
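Note, though, that in the Python code above ser.write([1, valX, valY, valZ]) sends four raw bytes rather than a text string like "[1,44,87,129]". If the data really arrives as raw bytes, a sketch for the Arduino side (assuming that fixed 4-byte packet [id, x, y, z]) would be:
// read one 4-byte packet: [id, x, y, z]
if (Serial.available() >= 4) {
  byte id = Serial.read();
  byte x  = Serial.read();
  byte y  = Serial.read();
  byte z  = Serial.read();
  // use id/x/y/z here
}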
| Arduino extract data from serial monitor | I wrote a simple controller for my robot in Python and now I want to send the data over the serial port to the Arduino. I managed to send the values, but now I want to know how to extract the data on the Arduino side. My Python code:
import PySimpleGUI as sg
import serial
import time
import math
ArmLänge = 205
TextX = 10
TextY = 10
TextZ = 10
font = ("Courier New", 11)
sg.theme("DarkBlue3")
sg.set_options(font=font)
ser = serial.Serial("COM6")
ser.flushInput()
layout = [
[sg.Text("Forward Kinematics:", font=("Helvetica", 12)), sg.Text(" Inverse Kinematics:", font=("Helvetica", 12))],
[sg.Text("X"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_X'), sg.Text("X"),sg.InputText(size=(10, 10), key="InputX")],
[sg.Text("Y"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Y'), sg.Text("Y"),sg.InputText(size=(10, 10), key="InputY")],
[sg.Text("Z"), sg.Slider((0, 360), orientation='horizontal', key='SLIDER_Z'), sg.Text("Z"),sg.InputText(size=(10, 10), key="InputZ")],
[sg.Push(), sg.Button('Exit'), sg.Button("Move")],
]
window = sg.Window("Controller", layout, finalize=True)
window['SLIDER_X'].bind('<ButtonRelease-1>', ' Release')
window['SLIDER_Y'].bind('<ButtonRelease-1>', ' Release')
window['SLIDER_Z'].bind('<ButtonRelease-1>', ' Release')
while True:
event, values = window.read()
if event in (sg.WINDOW_CLOSED, 'Exit'):
break
elif event == 'SLIDER_X Release':
print("X Value:", values["SLIDER_X"])
elif event == 'SLIDER_Y Release':
print("Y Value:", values["SLIDER_Y"])
elif event == 'SLIDER_Z Release':
print("Z Value:", values["SLIDER_Z"])
#elif event == "Move":
#print("IK X:", values['InputX'])
#print("IK Y:", values['InputY'])
#print("IK Z:", values['InputZ'])
valX = int(values["SLIDER_X"]/2)
valY = int(values["SLIDER_Y"]/2)
valZ = int(values["SLIDER_Z"]/2)
Data = [1,valX,valY,valZ]
print(Data)
ser.write(Data)
if values['InputX'] >= str(1):
x = float(values['InputX'])
y = float(values['InputY'])
z = float(values['InputZ'])
h = round(math.sqrt(x ** 2 + y ** 2))
joint2 = round(math.degrees(math.atan(y / x)))
joint3 = round(math.degrees(math.acos((h / 2) / (ArmLänge / 2))))
print("----Ergebnis:----")
print("Höhe:", h)
print("Joint2:", joint2,"°")
print("Joint3:", joint3,"°")
IKData = [2, h, joint2, joint3]
print(IKData)
ser.write(IKData)
window.close()
ser.close()
It may not be the best code but it works. I need to extract every number for example [1, 20, 45, 30]. How can I do that?
| [
"I am assuming that the data goes to Serial in this Format as String:\n\n[1,valX,valY,valZ]\n\nAfter reading the data from Serial and converting the data line to String with String() function, you can assign the values to desired variables using sscanf() function.\nThe function works like this -\n\nsscanf(const char *str, const char *format, ...)\n\nSo, here it would work like this -\nint data_val1, data_val2, data_val3;\n\nsscanf(Your_SerialData_String, [1,%d,%d,%d], data_val1, data_val2, data_val3);\n\n"
] | [
0
] | [] | [] | [
"arduino",
"python",
"python_3.x",
"serial_port"
] | stackoverflow_0074656927_arduino_python_python_3.x_serial_port.txt |
Q:
JavaScript JSON reviver in Python
I'm having problem translating my JavaScript snippet to Python.
The JavaScript code looks like this:
const reviver = (_key, value) => {
try {
return JSON.parse(value, reviver);
} catch {
if(typeof value === 'string') {
const semiValues = value.split(';');
if(semiValues.length > 1) {
return stringToObject(JSON.stringify(semiValues));
}
const commaValues = value.split(',');
if(commaValues.length > 1) {
return stringToObject(JSON.stringify(commaValues));
}
}
const int = Number(value);
if(value.length && !isNaN(int)) {
return int;
}
return value;
}
};
const stringToObject = (str) => {
const formatted = str.replace(/"{/g, '{').replace(/}"/g, '}').replace(/"\[/g, '[').replace(/\]"/g, ']').replace(/\\"/g, '"');
return JSON.parse(formatted, reviver);
};
The goal of the function is that:
String values that are numbers are parsed
String values that are json are parsed using these rules
String values like "499,504;554,634" should be parsed to [(499, 504), (554, 634)]
I have tried using the JSONDecoder.
import json
def object_hook(value):
try:
return json.loads(value)
except:
if(isinstance(value, str)):
semiValues = value.split(';')
if(len(semiValues) > 1):
return parse_response(json.dumps(semiValues))
commaValues = value.split(',')
if(commaValues.length > 1):
return parse_response(json.dumps(commaValues))
try:
return float(value)
except ValueError:
return value
def parse_response(data: str):
formatted = data.replace("\"{", "{").replace("}\"", '}').replace("\"[", '[').replace("]\"", ']').replace("\\\"", "\"")
return json.load(formatted, object_hook=object_hook)
A:
I solved my issue by iterating through the values and parse them accordingly
import json
def parse_value(value):
if(isinstance(value, str)):
try:
return parse_value(json.loads(value))
except:
pass
semi_values = value.split(';')
if(len(semi_values) > 1):
return list(map(parse_value, semi_values))
comma_values = value.split(',')
if(len(comma_values) > 1):
return list(map(parse_value, comma_values))
    if(value.replace('.', '', 1).isdigit()):
        return float(value) if '.' in value else int(value)
if(isinstance(value, dict)):
return {k: parse_value(v) for k, v in value.items()}
if(isinstance(value, list)):
return list(map(parse_value, value))
return value
A:
Your Python code looks like it's on the right track, but there are a few issues with it. First, you're using commaValues.length instead of len(commaValues) in the if statement that checks if the length of commaValues is greater than 1. Second, json.load() expects a file-like object as its first argument, not a string. You can use json.loads() instead to parse a JSON string.
One more issue: Python's json module has no JavaScript-style reviver parameter, so json.loads(value, reviver=...) would raise a TypeError. The closest hook is object_hook, which json.loads calls with every decoded dict, so the per-value conversion has to be applied inside it. Here's how I would write the code in Python:
import json

def parse_scalar(value):
    if isinstance(value, str):
        try:
            # strings that are themselves JSON get parsed recursively
            return json.loads(value, object_hook=object_hook)
        except json.JSONDecodeError:
            pass
        semi_values = value.split(';')
        if len(semi_values) > 1:
            return [parse_scalar(v) for v in semi_values]
        comma_values = value.split(',')
        if len(comma_values) > 1:
            return [parse_scalar(v) for v in comma_values]
        try:
            return int(value)
        except ValueError:
            return value
    return value

def object_hook(obj):
    # called by json.loads for every decoded dict
    return {key: parse_scalar(value) for key, value in obj.items()}

def string_to_object(s):
    formatted = s.replace('"{', '{').replace('}"', '}').replace('"[', '[').replace(']"', ']').replace('\\"', '"')
    return json.loads(formatted, object_hook=object_hook)
Note that I also changed the try statement that tries to convert the string to a number to use int() instead of float() to parse the value as an integer instead of a floating-point number. I also changed the function and variable names to follow the Python convention of using lowercase words separated by underscores (e.g. string_to_object instead of stringToObject).
| JavaScript JSON reviver in Python | I'm having problem translating my JavaScript snippet to Python.
The JavaScript code looks like this:
const reviver = (_key, value) => {
try {
return JSON.parse(value, reviver);
} catch {
if(typeof value === 'string') {
const semiValues = value.split(';');
if(semiValues.length > 1) {
return stringToObject(JSON.stringify(semiValues));
}
const commaValues = value.split(',');
if(commaValues.length > 1) {
return stringToObject(JSON.stringify(commaValues));
}
}
const int = Number(value);
if(value.length && !isNaN(int)) {
return int;
}
return value;
}
};
const stringToObject = (str) => {
const formatted = str.replace(/"{/g, '{').replace(/}"/g, '}').replace(/"\[/g, '[').replace(/\]"/g, ']').replace(/\\"/g, '"');
return JSON.parse(formatted, reviver);
};
The goal of the function is that:
String values that are numbers are parsed
String values that are json are parsed using these rules
String values like "499,504;554,634" should be parsed to [(499, 504), (554, 634)]
I have tried using the JSONDecoder.
import json
def object_hook(value):
try:
return json.loads(value)
except:
if(isinstance(value, str)):
semiValues = value.split(';')
if(len(semiValues) > 1):
return parse_response(json.dumps(semiValues))
commaValues = value.split(',')
if(commaValues.length > 1):
return parse_response(json.dumps(commaValues))
try:
return float(value)
except ValueError:
return value
def parse_response(data: str):
formatted = data.replace("\"{", "{").replace("}\"", '}').replace("\"[", '[').replace("]\"", ']').replace("\\\"", "\"")
return json.load(formatted, object_hook=object_hook)
| [
"I solved my issue by iterating through the values and parse them accordingly\nimport json\n\ndef parse_value(value):\n if(isinstance(value, str)):\n try:\n return parse_value(json.loads(value))\n except:\n pass\n semi_values = value.split(';')\n if(len(semi_values) > 1):\n return list(map(parse_value, semi_values))\n comma_values = value.split(',')\n if(len(comma_values) > 1):\n return list(map(parse_value, comma_values))\n if(value.replace('.','',1).isdigit()):\n return int(value)\n if(isinstance(value, dict)):\n return {k: parse_value(v) for k, v in value.items()}\n if(isinstance(value, list)):\n return list(map(parse_value, value))\n return value\n\n",
"Your Python code looks like it's on the right track, but there are a few issues with it. First, you're using commaValues.length instead of len(commaValues) in the if statement that checks if the length of commaValues is greater than 1. Second, json.load() expects a file-like object as its first argument, not a string. You can use json.loads() instead to parse a JSON string.\nHere's how I would write the code in Python:\nimport json\n\ndef reviver(key, value):\n try:\n return json.loads(value, reviver=reviver)\n except:\n if isinstance(value, str):\n semiValues = value.split(';')\n if len(semiValues) > 1:\n return stringToObject(json.dumps(semiValues))\n commaValues = value.split(',')\n if len(commaValues) > 1:\n return stringToObject(json.dumps(commaValues))\n\n try:\n return int(value)\n except ValueError:\n return value\n\ndef stringToObject(str):\n formatted = str.replace('\"{', '{').replace('}\"', '}').replace('\"[', '[').replace(']\"', ']').replace('\\\\\"', '\"')\n return json.loads(formatted, reviver=reviver)\n\nNote that I also changed the try statement that tries to convert the string to a number to use int() instead of float() to parse the value as an integer instead of a floating-point number. I also changed the function and variable names to follow the Python convention of using lowercase words separated by underscores (e.g. string_to_object instead of stringToObject).\n"
] | [
0,
0
] | [
"Does this code works for you?\ndef reviver(_key, value):\n try:\n return json.loads(value, object_hook=reviver)\n except:\n if type(value) == str:\n semi_values = value.split(';')\n if len(semi_values) > 1:\n return string_to_object(json.dumps(semi_values))\n comma_values = value.split(',')\n if len(comma_values) > 1:\n return string_to_object(json.dumps(comma_values))\n int_val = int(value)\n if len(value) and not isinstance(int_val, int):\n return int_val\n return value\n\ndef string_to_object(str):\n formatted = str.replace('\"{', '{').replace('}\"', '}').replace('\"[', '[').replace(']\"', ']').replace('\\\\\"', '\"')\n return json.loads(formatted, object_hook=reviver)\n\n"
] | [
-1
] | [
"javascript",
"json",
"python",
"reviver_function"
] | stackoverflow_0074654080_javascript_json_python_reviver_function.txt |
Q:
Skips the first page. Scraping python
The program does not want to collect data from the first page. Starts collecting from the second page.
If I try to collect data from the first page separately, everything works. And with the help of a cycle through the pages, then the first page is skipped
import requests
from bs4 import BeautifulSoup
headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36"
}
def collect_products(url="https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"):
response = requests.get(url = url, headers = headers)
data_list = []
soup = BeautifulSoup(response.text, 'lxml')
page_cout = int(soup.find('section', class_ = 'css-j8u5qq').find_all('a', class_ = 'css-1mi714g')[-1].text.strip())
print(f'[INFO] Total pages: { page_cout }')
for page in range(1, page_cout + 1):
data = {}
print(f'[INFO] Processing {page} page')
url = f"https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"+f"&page={ page }"
response = requests.get(url = url, headers = headers)
soup = BeautifulSoup(response.text, 'lxml')
items = soup.find_all("div", {"data-cy" : "l-card"})
for item in items:
olx = 'https://www.olx.ua'
try:
link = olx + item.find('a', class_ = 'css-rc5s2u').get('href').strip()
except:
link = 'err'
try:
title = item.find('h6', class_ = 'css-1pvd0aj-Text eu5v0x0').text.strip()
except:
title = 'err'
try:
fettle = item.find('div', class_ = 'css-puf171').text.strip()
except:
fettle = 'err'
try:
price = item.find('p', class_ = 'css-1q7gvpp-Text eu5v0x0').text.strip()
except:
price = 'err'
try:
url = f"{link}"
response = requests.get(url = url, headers = headers)
soup = BeautifulSoup(response.text, 'lxml')
description = soup.find('div' , class_ = 'css-g5mtbi-Text').text.strip()
except:
description = 'err'
print(title)
print(fettle)
print(price)
print(link)
print(description)
return data_list
if __name__ == '__main__':
collect_products()
what are other options to solve the problem?
A:
import httpx
import trio
from bs4 import BeautifulSoup
import pandas as pd
from urllib.parse import urljoin
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0'
}
class Spider:
def __init__(self, client) -> None:
self.client = client
self.limiter = trio.CapacityLimiter(10)
async def get(self, page):
params = {
'currency': 'UAH',
'page': page
}
while True:
try:
r = await self.client.get('noutbuki', params=params)
if r.is_success:
break
except httpx.RequestError:
continue
return await self.get_soup(r.text)
async def get_soup(self, content):
return BeautifulSoup(content, 'lxml')
async def crawl(data, page, sender):
async with data.limiter, sender:
soup = await data.get(page)
goal = [urljoin(str(data.client.base_url), x['href'])
for x in soup.select('a.css-rc5s2u, a.marginright5')]
await sender.send(goal)
async def main():
async with httpx.AsyncClient(timeout=5, headers=headers, follow_redirects=True, base_url='https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/') as client, trio.open_nursery() as nurse:
sender, receiver = trio.open_memory_channel(0)
nurse.start_soon(rec, receiver)
data = Spider(client)
async with sender:
for page in range(1, 3):
nurse.start_soon(crawl, data, page, sender.clone())
async def rec(receiver):
async with receiver:
allin = []
async for val in receiver:
allin.extend(val)
df = pd.DataFrame(allin, columns=['URL'])
print(df)
if __name__ == "__main__":
trio.run(main)
Output:
URL
0 https://www.olx.ua/d/uk/obyavlenie/lenovo-thin...
1 https://www.olx.ua/d/uk/obyavlenie/kak-novyy-i...
2 https://www.olx.ua/d/uk/obyavlenie/dell-xps-13...
3 https://www.olx.ua/d/uk/obyavlenie/apple-macbo...
4 https://www.olx.ua/d/uk/obyavlenie/u-menya-est...
.. ...
91 https://www.olx.ua/d/uk/obyavlenie/noutbuk-ace...
92 https://www.olx.ua/d/uk/obyavlenie/noutbuk-na-...
93 https://www.olx.ua/d/uk/obyavlenie/noutbuk-fuj...
94 https://www.olx.ua/d/uk/obyavlenie/noutbuk-15-...
95 https://www.olx.ua/d/uk/obyavlenie/ultrabuk-hp...
[96 rows x 1 columns]
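If you prefer to keep the original requests-based loop, note the dual selector above ('a.css-rc5s2u, a.marginright5'): the listing pages appear to serve two different card markups, which would explain why matching only one class misses a page. A hedged one-line adjustment for the original code:
links = ['https://www.olx.ua' + a.get('href') for a in soup.select('a.css-rc5s2u, a.marginright5')]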
| Skips the first page. Scraping python | The program does not want to collect data from the first page. Starts collecting from the second page.
If I try to collect data from the first page separately, everything works. And with the help of a cycle through the pages, then the first page is skipped
import requests
from bs4 import BeautifulSoup
headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36"
}
def collect_products(url="https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"):
response = requests.get(url = url, headers = headers)
data_list = []
soup = BeautifulSoup(response.text, 'lxml')
page_cout = int(soup.find('section', class_ = 'css-j8u5qq').find_all('a', class_ = 'css-1mi714g')[-1].text.strip())
print(f'[INFO] Total pages: { page_cout }')
for page in range(1, page_cout + 1):
data = {}
print(f'[INFO] Processing {page} page')
url = f"https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/noutbuki/?currency=UAH"+f"&page={ page }"
response = requests.get(url = url, headers = headers)
soup = BeautifulSoup(response.text, 'lxml')
items = soup.find_all("div", {"data-cy" : "l-card"})
for item in items:
olx = 'https://www.olx.ua'
try:
link = olx + item.find('a', class_ = 'css-rc5s2u').get('href').strip()
except:
link = 'err'
try:
title = item.find('h6', class_ = 'css-1pvd0aj-Text eu5v0x0').text.strip()
except:
title = 'err'
try:
fettle = item.find('div', class_ = 'css-puf171').text.strip()
except:
fettle = 'err'
try:
price = item.find('p', class_ = 'css-1q7gvpp-Text eu5v0x0').text.strip()
except:
price = 'err'
try:
url = f"{link}"
response = requests.get(url = url, headers = headers)
soup = BeautifulSoup(response.text, 'lxml')
description = soup.find('div' , class_ = 'css-g5mtbi-Text').text.strip()
except:
description = 'err'
print(title)
print(fettle)
print(price)
print(link)
print(description)
return data_list
if __name__ == '__main__':
collect_products()
what are other options to solve the problem?
| [
"import httpx\nimport trio\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom urllib.parse import urljoin\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0'\n}\n\n\nclass Spider:\n def __init__(self, client) -> None:\n self.client = client\n self.limiter = trio.CapacityLimiter(10)\n\n async def get(self, page):\n params = {\n 'currency': 'UAH',\n 'page': page\n }\n while True:\n try:\n r = await self.client.get('noutbuki', params=params)\n if r.is_success:\n break\n except httpx.RequestError:\n continue\n return await self.get_soup(r.text)\n\n async def get_soup(self, content):\n return BeautifulSoup(content, 'lxml')\n\n\nasync def crawl(data, page, sender):\n async with data.limiter, sender:\n soup = await data.get(page)\n goal = [urljoin(str(data.client.base_url), x['href'])\n for x in soup.select('a.css-rc5s2u, a.marginright5')]\n await sender.send(goal)\n\n\nasync def main():\n async with httpx.AsyncClient(timeout=5, headers=headers, follow_redirects=True, base_url='https://www.olx.ua/d/uk/elektronika/noutbuki-i-aksesuary/') as client, trio.open_nursery() as nurse:\n sender, receiver = trio.open_memory_channel(0)\n nurse.start_soon(rec, receiver)\n data = Spider(client)\n async with sender:\n for page in range(1, 3):\n nurse.start_soon(crawl, data, page, sender.clone())\n\n\nasync def rec(receiver):\n async with receiver:\n allin = []\n async for val in receiver:\n allin.extend(val)\n df = pd.DataFrame(allin, columns=['URL'])\n print(df)\n\n\nif __name__ == \"__main__\":\n trio.run(main)\n\nOutput:\n URL\n0 https://www.olx.ua/d/uk/obyavlenie/lenovo-thin...\n1 https://www.olx.ua/d/uk/obyavlenie/kak-novyy-i...\n2 https://www.olx.ua/d/uk/obyavlenie/dell-xps-13...\n3 https://www.olx.ua/d/uk/obyavlenie/apple-macbo...\n4 https://www.olx.ua/d/uk/obyavlenie/u-menya-est...\n.. ...\n91 https://www.olx.ua/d/uk/obyavlenie/noutbuk-ace...\n92 https://www.olx.ua/d/uk/obyavlenie/noutbuk-na-...\n93 https://www.olx.ua/d/uk/obyavlenie/noutbuk-fuj...\n94 https://www.olx.ua/d/uk/obyavlenie/noutbuk-15-...\n95 https://www.olx.ua/d/uk/obyavlenie/ultrabuk-hp...\n\n[96 rows x 1 columns]\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074665378_beautifulsoup_python_web_scraping.txt |
Q:
Filling Algorithm for Equal Distribution
Need your help resolving algorithm task -
There are 3 baskets, basket 1 has 10 balls and a possible max capacity of 100, basket 2 has 50 balls and a possible max capacity of 200, and basket 3 has 100 balls and a possible max capacity of 300.
Please help me write an algorithm or code that splits another 100 balls between the 3 baskets for the best possible equal distribution between the baskets.
Not possible to move balls between the baskets.
Your suggested algorithm should of course work on any number of baskets with different max capacities and any onHand value, for example, 1 ball that I want to add or the maximum capacity value that should fill all baskets to 100% fill.
A:
As already mentioned in one of my comments. If you want to have an equal distribution of %fill, then you could add the balls individually to the current lowest filled basket:
import numpy as np
def fill_baskets(baskets, ballsToDistribute):
for i in range(ballsToDistribute, 0, -1):
# find the basket with the lowest percentage of balls in it
currFillLevels = [currFill / maxFill for currFill, maxFill in baskets]
minIndex = np.argmin(currFillLevels)
# give the ball to this basket
baskets[minIndex][0] += 1
return baskets
baskets = [[10, 100], [50, 200], [100, 300]]
new_baskets = fill_baskets(baskets, 100)
# print the result:
for i, basket in enumerate(new_baskets):
print(f"Basket {i+1}: {basket[0]/basket[1]:.3f}% ({basket[0]}/ {basket[1]})")
The output I get for this case if the following:
Basket 1: 0.440% (44/ 100)
Basket 2: 0.435% (87/ 200)
Basket 3: 0.430% (129/ 300)
The only problem that can arise from the code is when we have too many balls to give away. Then all the baskets will be overfilled.
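If the number of balls to distribute is large, a sketch of the same greedy idea using a heap brings the cost down from O(balls·n) to O(balls·log n) (it assumes the same [current, capacity] basket format):
import heapq

def fill_baskets_heap(baskets, balls):
    # min-heap of (fill fraction, basket index)
    heap = [(curr / cap, i) for i, (curr, cap) in enumerate(baskets)]
    heapq.heapify(heap)
    for _ in range(balls):
        _, i = heapq.heappop(heap)
        baskets[i][0] += 1
        heapq.heappush(heap, (baskets[i][0] / baskets[i][1], i))
    return baskets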
| Filling Algorithm for Equal Distribution | Need your help resolving algorithm task -
There are 3 baskets, basket 1 has 10 balls and a possible max capacity of 100, basket 2 has 50 balls and a possible max capacity of 200, and basket 3 has 100 balls and a possible max capacity of 300.
Please help me write an algorithm or code that splits another 100 balls between the 3 baskets for the best possible equal distribution between the baskets.
Not possible to move balls between the baskets.
Your suggested algorithm should of course work on any number of baskets with different max capacities and any onHand value, for example, 1 ball that I want to add or the maximum capacity value that should fill all baskets to 100% fill.
| [
"As already mentioned in one of my comments. If you want to have an equal distribution of %fill, then you could add the balls individually to the current lowest filled basket:\nimport numpy as np\n\ndef fill_baskets(baskets, ballsToDistribute):\n for i in range(ballsToDistribute, 0, -1):\n # find the basket with the lowest percentage of balls in it\n currFillLevels = [currFill / maxFill for currFill, maxFill in baskets]\n minIndex = np.argmin(currFillLevels)\n\n # give the ball to this basket\n baskets[minIndex][0] += 1\n\n return baskets\n\n\nbaskets = [[10, 100], [50, 200], [100, 300]]\n\nnew_baskets = fill_baskets(baskets, 100)\n\n# print the result:\nfor i, basket in enumerate(new_baskets):\n print(f\"Basket {i+1}: {basket[0]/basket[1]:.3f}% ({basket[0]}/ {basket[1]})\")\n\nThe output I get for this case if the following:\nBasket 1: 0.440% (44/ 100)\nBasket 2: 0.435% (87/ 200)\nBasket 3: 0.430% (129/ 300)\n\nThe only problem that can arise from the code is when we have too many balls to give away. Then all the baskets will be overfilled.\n"
] | [
1
] | [] | [] | [
"algorithm",
"dart",
"python"
] | stackoverflow_0074665160_algorithm_dart_python.txt |
Q:
Get count of objects in a specific S3 folder using Boto3
Trying to get count of objects in S3 folder
Current code
bucket='some-bucket'
File='someLocation/File/'
objs = boto3.client('s3').list_objects_v2(Bucket=bucket,Prefix=File)
fileCount = objs['KeyCount']
This gives me the count as 1+actual number of objects in S3.
Maybe it is counting "File" as a key too?
A:
Assuming you want to count the keys in a bucket and don't want to hit the limit of 1000 using list_objects_v2. The below code worked for me but I'm wondering if there is a better faster way to do it! Tried looking if there's a packaged function in boto3 s3 connector but there isn't!
# connect to s3 - assuming your creds are all set up and you have boto3 installed
s3 = boto3.resource('s3')
# identify the bucket - you can use prefix if you know what your bucket name starts with
for bucket in s3.buckets.all():
print(bucket.name)
# get the bucket
bucket = s3.Bucket('my-s3-bucket')
# use loop and count increment
count_obj = 0
for i in bucket.objects.all():
count_obj = count_obj + 1
print(count_obj)
A:
If there are more than 1000 entries, you need to use paginators, like this:
count = 0
client = boto3.client('s3')
paginator = client.get_paginator('list_objects')
for result in paginator.paginate(Bucket='your-bucket', Prefix='your-folder/', Delimiter='/'):
    # note: CommonPrefixes counts sub-'folders'; use result.get('Contents', []) to count objects
    count += len(result.get('CommonPrefixes') or [])
A:
"Folders" do not actually exist in Amazon S3. Instead, all objects have their full path as their filename ('Key'). I think you already know this.
However, it is possible to 'create' a folder by creating a zero-length object that has the same name as the folder. This causes the folder to appear in listings and is what happens if folders are created via the management console.
Thus, you could exclude zero-length objects from your count.
For an example, see: Determine if folder or file key - Boto
A:
If you have credentials to access that bucket, then you can use this simple code. Below code will give you a list. List comprehension is used for more readability.
Filter is used to filter objects because in bucket to identify the files ,folder names are used. As explained by John Rotenstein concisely.
import boto3
bucket = "Sample_Bucket"
folder = "Sample_Folder"
s3 = boto3.resource("s3")
s3_bucket = s3.Bucket(bucket)
files_in_s3 = [f.key.split(folder + "/")[1] for f in s3_bucket.objects.filter(Prefix=folder).all()]
A:
The following code worked perfectly
def getNumberOfObjectsInBucket(bucketName,prefix):
count = 0
response = boto3.client('s3').list_objects_v2(Bucket=bucketName,Prefix=prefix)
for object in response['Contents']:
if object['Size'] != 0:
#print(object['Key'])
count+=1
return count
object['Size'] == 0 will take you to folder names, if want to check them, object['Size'] != 0 will lead you to all non-folder keys.
Sample function below:
getNumberOfObjectsInBucket('foo-test-bucket','foo-output/')
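Combining the points above (pagination for prefixes with more than 1000 keys, plus skipping the zero-byte "folder marker" object that inflates the count by one), a sketch:
import boto3

def count_objects(bucket, prefix):
    s3 = boto3.client('s3')
    paginator = s3.get_paginator('list_objects_v2')
    count = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get('Contents', []):
            # skip the zero-length placeholder key that represents the folder
            if obj['Size'] > 0:
                count += 1
    return count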
| Get count of objects in a specific S3 folder using Boto3 | Trying to get count of objects in S3 folder
Current code
bucket='some-bucket'
File='someLocation/File/'
objs = boto3.client('s3').list_objects_v2(Bucket=bucket,Prefix=File)
fileCount = objs['KeyCount']
This gives me the count as 1+actual number of objects in S3.
Maybe it is counting "File" as a key too?
| [
"Assuming you want to count the keys in a bucket and don't want to hit the limit of 1000 using list_objects_v2. The below code worked for me but I'm wondering if there is a better faster way to do it! Tried looking if there's a packaged function in boto3 s3 connector but there isn't!\n# connect to s3 - assuming your creds are all set up and you have boto3 installed\ns3 = boto3.resource('s3')\n\n# identify the bucket - you can use prefix if you know what your bucket name starts with\nfor bucket in s3.buckets.all():\n print(bucket.name)\n\n# get the bucket\nbucket = s3.Bucket('my-s3-bucket')\n\n# use loop and count increment\ncount_obj = 0\nfor i in bucket.objects.all():\n count_obj = count_obj + 1\nprint(count_obj)\n\n",
"If there are more than 1000 entries, you need to use paginators, like this:\ncount = 0\nclient = boto3.client('s3')\npaginator = client.get_paginator('list_objects')\nfor result in paginator.paginate(Bucket='your-bucket', Prefix='your-folder/', Delimiter='/'):\n count += len(result.get('CommonPrefixes'))\n\n",
"\"Folders\" do not actually exist in Amazon S3. Instead, all objects have their full path as their filename ('Key'). I think you already know this.\nHowever, it is possible to 'create' a folder by creating a zero-length object that has the same name as the folder. This causes the folder to appear in listings and is what happens if folders are created via the management console.\nThus, you could exclude zero-length objects from your count.\nFor an example, see: Determine if folder or file key - Boto\n",
"If you have credentials to access that bucket, then you can use this simple code. Below code will give you a list. List comprehension is used for more readability.\nFilter is used to filter objects because in bucket to identify the files ,folder names are used. As explained by John Rotenstein concisely.\nimport boto3\n\nbucket = \"Sample_Bucket\"\nfolder = \"Sample_Folder\"\ns3 = boto3.resource(\"s3\") \ns3_bucket = s3.Bucket(bucket)\nfiles_in_s3 = [f.key.split(folder + \"/\")[1] for f in s3_bucket.objects.filter(Prefix=folder).all()]\n\n",
"The following code worked perfectly\ndef getNumberOfObjectsInBucket(bucketName,prefix):\n count = 0\n response = boto3.client('s3').list_objects_v2(Bucket=bucketName,Prefix=prefix)\n for object in response['Contents']:\n if object['Size'] != 0:\n #print(object['Key'])\n count+=1\n return count\n\nobject['Size'] == 0 will take you to folder names, if want to check them, object['Size'] != 0 will lead you to all non-folder keys.\nSample function below:\ngetNumberOfObjectsInBucket('foo-test-bucket','foo-output/')\n\n"
] | [
15,
3,
1,
0,
0
] | [] | [] | [
"amazon_s3",
"boto3",
"python"
] | stackoverflow_0054656455_amazon_s3_boto3_python.txt |
Q:
How to access command history in Python shell on Windows Terminal Bash?
I sometimes want to experiment with Python code in the Python shell. In other languages (Haskell, F#) I'm used to being able to experiment in a REPL that supports command history.
I start the Python shell from (Git) Bash running in Windows Terminal:
$ py
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1+2
3
>>>
How do I repeat the last command, or scroll through the command history?
I'm aware of this question, so I've already tried Alt + p, the arrow keys, and various combinations of those and Ctrl, Shift. Nothing works. Either nothing happens, or Ctrl + n just prints this:
>>> ^N
The arrow keys do work when using the Command Prompt (cmd) in Windows Terminal, but not when using Bash.
A:
In the Python shell, you can use the up and down arrow keys to scroll through the command history. This should work both in the Command Prompt and in Bash in Windows Terminal.
If this does not work for you, you can try enabling command history in the Python shell by running the following commands:
import readline
readline.parse_and_bind('tab: complete')
readline.parse_and_bind('set editing-mode vi')
This will enable tab completion and set the editing mode to vi, which will allow you to use vi-style key bindings (such as k and j) to navigate the command history.
Alternatively, in IPython you can use the %hist magic command to view the command history (this is an IPython feature and is not available in the plain Python shell). It accepts line ranges, and the -l option limits the output to the last n lines:
# Display the current session's history
%hist

# Display the last ten commands
%hist -l 10
You can then copy and paste the commands you want to repeat from the output of the %hist command.
Another option is to use a different shell that supports command history, such as the IPython shell. You can start the IPython shell by running the ipython command instead of the python command. The IPython shell supports command history and tab completion, and it also has additional features such as inline plotting and automatic indentation.
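Two further notes. In Git Bash specifically, interactive line editing often fails because mintty does not give Python a native Windows console; running the interpreter through winpty (which ships with Git for Windows) is a common workaround:
$ winpty py
And for reference, a typical IPython session with history looks like this (output is illustrative):
$ ipython
In [1]: 1+2
Out[1]: 3

In [2]: %hist
1+2
%hist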
| How to access command history in Python shell on Windows Terminal Bash? | I sometimes want to experiment with Python code in the Python shell. In other languages (Haskell, F#) I'm used to be able to experiment in a REPL that supports command history.
I start the Python shell from (Git) Bash running in Windows Terminal:
$ py
Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1+2
3
>>>
How do I repeat the last command, or scroll through the command history?
I'm aware of this question, so I've already tried Alt + p, the arrow keys, and various combinations of those and Ctrl, Shift. Nothing works. Either nothing happens, or Ctrl + n just prints this:
>>> ^N
The arrow keys do work when using the Command Prompt (cmd) in Windows Terminal, but not when using Bash.
| [
"In the Python shell, you can use the up and down arrow keys to scroll through the command history. This should work both in the Command Prompt and in Bash in Windows Terminal.\nIf this does not work for you, you can try enabling command history in the Python shell by running the following commands:\nimport readline\nreadline.parse_and_bind('tab: complete')\nreadline.parse_and_bind('set editing-mode vi')\n\nThis will enable tab completion and set the editing mode to vi, which will allow you to use vi-style key bindings (such as k and j) to navigate the command history.\nAlternatively, you can use the %hist magic command to view the command history in the Python shell. This command takes an optional integer argument that specifies the number of commands to display (by default, it displays the last five commands):\n# Display the last five commands\n%hist\n\n# Display the last ten commands\n%hist 10\n\nYou can then copy and paste the commands you want to repeat from the output of the %hist command.\nAnother option is to use a different shell that supports command history, such as the IPython shell. You can start the IPython shell by running the ipython command instead of the python command. The IPython shell supports command history and tab completion, and it also has additional features such as inline plotting and automatic indentation.\n"
] | [
1
] | [] | [] | [
"bash",
"python",
"windows_terminal"
] | stackoverflow_0074665663_bash_python_windows_terminal.txt |
Q:
How to draw animation in pyopengltk framework
I am using pyopengl, tkinter, and pyopengltk to draw a Rubik's cube, and I am going to implement a Rubik's cube solving animation. I have already managed to display a Rubik's cube in tkinter based on this question: How to rotate slices of a Rubik's Cube in python PyOpenGL? But I can't make the cube animation run step by step; how can I do that? Right now it only keeps repeating the same action
import tkinter as tk
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from pyopengltk import OpenGLFrame
vertices = (
(1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1),
(1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1)
)
edges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7))
surfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6))
colors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1))
rot_cube_map = {'K_UP': (-1, 0), 'K_DOWN': (1, 0), 'K_LEFT': (0, -1), 'K_RIGHT': (0, 1)}
rot_slice_map = {
'K_1': (0, 0, 1), 'K_2': (0, 1, 1), 'K_3': (0, 2, 1), 'K_4': (1, 0, 1), 'K_5': (1, 1, 1),
'K_6': (1, 2, 1), 'K_7': (2, 0, 1), 'K_8': (2, 1, 1), 'K_9': (2, 2, 1),
'K_F1': (0, 0, -1), 'K_F2': (0, 1, -1), 'K_F3': (0, 2, -1), 'K_F4': (1, 0, -1), 'K_F5': (1, 1, -1),
'K_F6': (1, 2, -1), 'K_F7': (2, 0, -1), 'K_F8': (2, 1, -1), 'K_F9': (2, 2, -1),
}
class Cube():
def __init__(self, id, N, scale):
self.N = 3
self.scale = scale
self.init_i = [*id]
self.current_i = [*id]  # current position; one variable stands in for several
self.rot = [[1 if i == j else 0 for i in range(3)] for j in range(3)]
def isAffected(self, axis, slice, dir):
return self.current_i[axis] == slice
def update(self, axis, slice, dir):
if not self.isAffected(axis, slice, dir):
return
i, j = (axis + 1) % 3, (axis + 2) % 3
for k in range(3):
self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir
self.current_i[i], self.current_i[j] = (
self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j],
self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i])
def transformMat(self):
scaleA = [[s * self.scale for s in a] for a in self.rot]
scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i]
return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1]
def draw(self, col, surf, vert, animate, angle, axis, slice, dir):
glPushMatrix()
if animate and self.isAffected(axis, slice, dir):
glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)])  # rotate around this axis
glMultMatrixf(self.transformMat())
glBegin(GL_QUADS)
for i in range(len(surf)):
glColor3fv(colors[i])
for j in surf[i]:
glVertex3fv(vertices[j])
glEnd()
glPopMatrix()
class mycube():
def __init__(self, N, scale):
self.N = N
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr]  # create the 27 cubes
def maindd(self):
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0)
class GLFrame(OpenGLFrame):
def initgl(self):
self.rota = 0
self.count = 0
self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0)
self.animate1, self.animate_ang, self.animate_speed = False, 0, 0.5
self.action = (0, 0, 0)
glClearColor(0.0, 0.0, 0.0, 0.0)  # black background
# glViewport(400, 400, 200, 200)  # sets the lower-left corner of the viewport
glEnable(GL_DEPTH_TEST)  # enable depth testing for correct occlusion
glDepthFunc(GL_LEQUAL)  # set the depth test function (GL_LEQUAL is just one option)
glMatrixMode(GL_PROJECTION)
glLoadIdentity()  # reset to the identity matrix
gluPerspective(30, self.width / self.height, 0.1, 50.0)
def redraw(self):
self.N = 3
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]
self.animate, self.action = True, rot_slice_map['K_1']
self.ang_x += self.rot_cube[0] * 2
self.ang_y += self.rot_cube[1] * 2
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
glTranslatef(0, 0, -40)
glRotatef(self.ang_y, 0, 1, 0)
glRotatef(self.ang_x, 1, 0, 0)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
if self.animate1:
if self.animate_ang >= 90:
for cube in self.cubes:
cube.update(*self.action)
self.animate1, self.animate_ang = False, 0
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)
if self.animate:
self.animate_ang += self.animate_speed
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title('Pineapple')
self.glframe = GLFrame(self, width=800, height=600)
self.glframe.pack(expand=True, fill=tk.BOTH)
# self.glframe.focus_displayof()
# self.glframe.animate = True
App().mainloop()
I do this by calling this statement twice
self.animate, self.action = True, rot_slice_map['K_1']
I expect it to be executed step by step, but it only executes the last statement. There is very little information about pyopengltk on the internet, and I am still a newbie, so I would like some help.
A:
You must implement the keyboard events similarly to the Pygame implementation described in the answer to How to rotate slices of a Rubik's Cube in python PyOpenGL?.
Remove:
self.animate, self.action = True, rot_slice_map['K_1']
Change the key mapping to a mapping to be used with tkinter
rot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}
rot_slice_map = {
'1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1),
'6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1),
'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1),
'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1),
}
Create the 27 cubes in GLFrame.initgl and set self.animate = True. self.animate is the flag that controls the animation loop of the OpenGLFrame. The animation of the Rubik's Cube is controlled with animate1Cube:
class GLFrame(OpenGLFrame):
def initgl(self):
self.animate = True
# [...]
self.N = 3
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]
Add the callback methods for the keyboard event:
class GLFrame(OpenGLFrame):
# [...]
def keydown(self, event):
if event.keysym in rot_slice_map:
self.animate1Cube, self.action = True, rot_slice_map[event.keysym]
if event.keysym in rot_cube_map:
self.rot_cube = rot_cube_map[event.keysym]
def keyup(self, event):
if event.keysym in rot_cube_map:
self.rot_cube = (0, 0)
Set the keyboard callbacks:
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title('rubiks cube')
self.glframe = GLFrame(self, width=800, height=600)
self.bind("<KeyPress>", self.glframe.keydown)
self.bind("<KeyRelease>", self.glframe.keyup)
self.glframe.pack(expand=True, fill=tk.BOTH)
The animation of the Rubik's Cube depends on animate1Cube, but not on animate:
class GLFrame(OpenGLFrame):
# [...]
def redraw(self):
# [...]
if self.animate1Cube:
if self.animate_ang >= 90:
for cube in self.cubes:
cube.update(*self.action)
self.animate1Cube, self.animate_ang = False, 0
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)
if self.animate1Cube:
self.animate_ang += self.animate_speed
Complete and working example:
import tkinter as tk
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from pyopengltk import OpenGLFrame
vertices = (
(1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1),
(1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1)
)
edges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7))
surfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6))
colors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1))
rot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}
rot_slice_map = {
'1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1),
'6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1),
'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1),
'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1),
}
class Cube():
def __init__(self, id, N, scale):
self.N = 3
self.scale = scale
self.init_i = [*id]
self.current_i = [*id]  # current position; one variable stands in for several
self.rot = [[1 if i == j else 0 for i in range(3)] for j in range(3)]
def isAffected(self, axis, slice, dir):
return self.current_i[axis] == slice
def update(self, axis, slice, dir):
if not self.isAffected(axis, slice, dir):
return
i, j = (axis + 1) % 3, (axis + 2) % 3
for k in range(3):
self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir
self.current_i[i], self.current_i[j] = (
self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j],
self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i])
def transformMat(self):
scaleA = [[s * self.scale for s in a] for a in self.rot]
scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i]
return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1]
def draw(self, col, surf, vert, animate, angle, axis, slice, dir):
glPushMatrix()
if animate and self.isAffected(axis, slice, dir):
glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)])  # rotate around this axis
glMultMatrixf(self.transformMat())
glBegin(GL_QUADS)
for i in range(len(surf)):
glColor3fv(colors[i])
for j in surf[i]:
glVertex3fv(vertices[j])
glEnd()
glPopMatrix()
class mycube():
def __init__(self, N, scale):
self.N = N
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr]  # create the 27 cubes
def maindd(self):
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0)
class GLFrame(OpenGLFrame):
def initgl(self):
self.animate = True
self.rota = 0
self.count = 0
self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0)
self.animate1Cube, self.animate_ang, self.animate_speed = False, 0, 2
self.action = (0, 0, 0)
glClearColor(0.0, 0.0, 0.0, 0.0)  # black background
# glViewport(400, 400, 200, 200)  # sets the lower-left corner of the viewport
glEnable(GL_DEPTH_TEST)  # enable depth testing for correct occlusion
glDepthFunc(GL_LEQUAL)  # set the depth test function (GL_LEQUAL is just one option)
glMatrixMode(GL_PROJECTION)
glLoadIdentity()  # reset to the identity matrix
gluPerspective(30, self.width / self.height, 0.1, 50.0)
self.N = 3
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]
def keydown(self, event):
if event.keysym in rot_slice_map:
self.animate1Cube, self.action = True, rot_slice_map[event.keysym]
if event.keysym in rot_cube_map:
self.rot_cube = rot_cube_map[event.keysym]
def keyup(self, event):
if event.keysym in rot_cube_map:
self.rot_cube = (0, 0)
def redraw(self):
self.ang_x += self.rot_cube[0] * 2
self.ang_y += self.rot_cube[1] * 2
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
glTranslatef(0, 0, -40)
glRotatef(self.ang_y, 0, 1, 0)
glRotatef(self.ang_x, 1, 0, 0)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
if self.animate1Cube:
if self.animate_ang >= 90:
for cube in self.cubes:
cube.update(*self.action)
self.animate1Cube, self.animate_ang = False, 0
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)
if self.animate1Cube:
self.animate_ang += self.animate_speed
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title('rubiks cube')
self.glframe = GLFrame(self, width=800, height=600)
#self.bind("<Key>", self.glframe.key)
self.bind("<KeyPress>", self.glframe.keydown)
self.bind("<KeyRelease>", self.glframe.keyup)
self.glframe.pack(expand=True, fill=tk.BOTH)
# self.glframe.focus_displayof()
# self.animate = True
App().mainloop()
If you want to animate the cube automatically, you need to drive the animation from a predefined sequence instead of keyboard events. Define a list of moves, e.g.:
animation_list = ['1', '3', '5', 'F2']
Set self.action from the list. e.g.:
class GLFrame(OpenGLFrame):
# [...]
def redraw(self):
if not self.animate1Cube and animation_list:
self.animate1Cube, self.action = True, rot_slice_map[animation_list[0]]
del animation_list[0]
# [...]
| How to draw animation in pyopengltk framework | I am using pyopengl, tkinter, pyopengltk to draw a Rubik's cube and am going to implement a Rubik's cube recovery animation, now I have implemented to display a Rubik's cube in tkinter with this quiz. How to rotate slices of a Rubik's Cube in python PyOpenGL? But I can't implement the tesseract animation step by step now, how can I do it please? Now it can only keep repeating the same action
import tkinter as tk
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from pyopengltk import OpenGLFrame
vertices = (
(1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1),
(1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1)
)
edges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7))
surfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6))
colors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1))
rot_cube_map = {'K_UP': (-1, 0), 'K_DOWN': (1, 0), 'K_LEFT': (0, -1), 'K_RIGHT': (0, 1)}
rot_slice_map = {
'K_1': (0, 0, 1), 'K_2': (0, 1, 1), 'K_3': (0, 2, 1), 'K_4': (1, 0, 1), 'K_5': (1, 1, 1),
'K_6': (1, 2, 1), 'K_7': (2, 0, 1), 'K_8': (2, 1, 1), 'K_9': (2, 2, 1),
'K_F1': (0, 0, -1), 'K_F2': (0, 1, -1), 'K_F3': (0, 2, -1), 'K_F4': (1, 0, -1), 'K_F5': (1, 1, -1),
'K_F6': (1, 2, -1), 'K_F7': (2, 0, -1), 'K_F8': (2, 1, -1), 'K_F9': (2, 2, -1),
}
class Cube():
def __init__(self, id, N, scale):
self.N = 3
self.scale = scale
self.init_i = [*id]
self.current_i = [*id] # 表示填充,一个变量值代替多个
self.rot = [[1 if i == j else 0 for i in range(3)] for j in range(3)]
def isAffected(self, axis, slice, dir):
return self.current_i[axis] == slice
def update(self, axis, slice, dir):
if not self.isAffected(axis, slice, dir):
return
i, j = (axis + 1) % 3, (axis + 2) % 3
for k in range(3):
self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir
self.current_i[i], self.current_i[j] = (
self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j],
self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i])
def transformMat(self):
scaleA = [[s * self.scale for s in a] for a in self.rot]
scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i]
return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1]
def draw(self, col, surf, vert, animate, angle, axis, slice, dir):
glPushMatrix()
if animate and self.isAffected(axis, slice, dir):
glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)]) # 围着这个坐标点旋转
glMultMatrixf(self.transformMat())
glBegin(GL_QUADS)
for i in range(len(surf)):
glColor3fv(colors[i])
for j in surf[i]:
glVertex3fv(vertices[j])
glEnd()
glPopMatrix()
class mycube():
def __init__(self, N, scale):
self.N = N
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr] # 创建27
def maindd(self):
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0)
class GLFrame(OpenGLFrame):
def initgl(self):
self.rota = 0
self.count = 0
self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0)
self.animate1, self.animate_ang, self.animate_speed = False, 0, 0.5
self.action = (0, 0, 0)
glClearColor(0.0, 0.0, 0.0, 0.0) # 背景黑色
# glViewport(400, 400, 200, 200) # 指定了视口的左下角位置
glEnable(GL_DEPTH_TEST) # 开启深度测试,实现遮挡关系
glDepthFunc(GL_LEQUAL) # 设置深度测试函数(GL_LEQUAL只是选项之一)
glMatrixMode(GL_PROJECTION)
glLoadIdentity() # 恢复原始坐标
gluPerspective(30, self.width / self.height, 0.1, 50.0)
def redraw(self):
self.N = 3
cr = range(self.N)
self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]
self.animate, self.action = True, rot_slice_map['K_1']
self.ang_x += self.rot_cube[0] * 2
self.ang_y += self.rot_cube[1] * 2
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
glTranslatef(0, 0, -40)
glRotatef(self.ang_y, 0, 1, 0)
glRotatef(self.ang_x, 1, 0, 0)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
if self.animate1:
if self.animate_ang >= 90:
for cube in self.cubes:
cube.update(*self.action)
self.animate1, self.animate_ang = False, 0
for cube in self.cubes:
cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)
if self.animate:
self.animate_ang += self.animate_speed
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title('Pineapple')
self.glframe = GLFrame(self, width=800, height=600)
self.glframe.pack(expand=True, fill=tk.BOTH)
# self.glframe.focus_displayof()
# self.glframe.animate = True
App().mainloop()
I do this by calling this statement twice
self.animate, self.action = True, rot_slice_map['K_1']
Expect it to be executed step by step, but it only executes the last sentence,There is very little information about pyopengltk on the internet, and I am still a newbie, so I would like to get help
| [
"You must implement the keyebord events similar as in the Pygame implementation decribed in the answer to How to rotate slices of a Rubik's Cube in python PyOpenGL?.\nRemove:\nself.animate, self.action = True, rot_slice_map['K_1']\nChange the key mapping to a mapping to be used with tkinter\nrot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}\nrot_slice_map = {\n '1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1),\n '6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1),\n 'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1),\n 'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1),\n}\n\nCreate the 27 cubes in GLFrame.initgl and set self.animate = True. self.animate is the fagae that controls the animation loop of the the OpenGLFrame. Tha naimation of the Rubik's Cube is controled with animate1Cube:\nclass GLFrame(OpenGLFrame):\n def initgl(self):\n self.animate = True\n \n # [...]\n\n self.N = 3\n cr = range(self.N)\n self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]\n\nAdd the callback methods for the keyboard event:\nclass GLFrame(OpenGLFrame):\n # [...]\n\n def keydown(self, event):\n if event.keysym in rot_slice_map:\n self.animate1Cube, self.action = True, rot_slice_map[event.keysym]\n if event.keysym in rot_cube_map:\n self.rot_cube = rot_cube_map[event.keysym]\n\n def keyup(self, event):\n if event.keysym in rot_cube_map:\n self.rot_cube = (0, 0)\n\nSet the keyboard callbacks:\nclass App(tk.Tk):\n def __init__(self):\n super().__init__()\n self.title('rubiks cube')\n self.glframe = GLFrame(self, width=800, height=600)\n self.bind(\"<KeyPress>\", self.glframe.keydown)\n self.bind(\"<KeyRelease>\", self.glframe.keyup)\n self.glframe.pack(expand=True, fill=tk.BOTH)\n\nThe animation of the Rubikc's Cube depends on animateCube, but not on animate:\nclass GLFrame(OpenGLFrame):\n # [...]\n\n def redraw(self):\n # [...]\n\n if self.animate1Cube:\n if self.animate_ang >= 90:\n for cube in self.cubes:\n cube.update(*self.action)\n self.animate1Cube, self.animate_ang = False, 0\n\n for cube in self.cubes:\n cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)\n if self.animate1Cube:\n self.animate_ang += self.animate_speed\n\n\nComplete and working example:\n\nimport tkinter as tk\nfrom OpenGL.GL import *\nfrom OpenGL.GLU import *\nfrom OpenGL.GLUT import *\nfrom pyopengltk import OpenGLFrame\n\nvertices = (\n (1, -1, -1), (1, 1, -1), (-1, 1, -1), (-1, -1, -1),\n (1, -1, 1), (1, 1, 1), (-1, -1, 1), (-1, 1, 1)\n)\nedges = ((0, 1), (0, 3), (0, 4), (2, 1), (2, 3), (2, 7), (6, 3), (6, 4), (6, 7), (5, 1), (5, 4), (5, 7))\nsurfaces = ((0, 1, 2, 3), (3, 2, 7, 6), (6, 7, 5, 4), (4, 5, 1, 0), (1, 5, 7, 2), (4, 0, 3, 6))\ncolors = ((1, 0, 0), (0, 1, 0), (1, 0.5, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1))\n\nrot_cube_map = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}\nrot_slice_map = {\n '1': (0, 0, 1), '2': (0, 1, 1), '3': (0, 2, 1), '4': (1, 0, 1), '5': (1, 1, 1),\n '6': (1, 2, 1), '7': (2, 0, 1), '8': (2, 1, 1), '9': (2, 2, 1),\n 'F1': (0, 0, -1), 'F2': (0, 1, -1), 'F3': (0, 2, -1), 'F4': (1, 0, -1), 'F5': (1, 1, -1),\n 'F6': (1, 2, -1), 'F7': (2, 0, -1), 'F8': (2, 1, -1), 'F9': (2, 2, -1),\n}\n\nclass Cube():\n def __init__(self, id, N, scale):\n self.N = 3\n self.scale = scale\n self.init_i = [*id]\n self.current_i = [*id] # 表示填充,一个变量值代替多个\n self.rot = [[1 if i == j else 0 for i in range(3)] for j in 
range(3)]\n\n def isAffected(self, axis, slice, dir):\n return self.current_i[axis] == slice\n\n def update(self, axis, slice, dir):\n\n if not self.isAffected(axis, slice, dir):\n return\n\n i, j = (axis + 1) % 3, (axis + 2) % 3\n for k in range(3):\n self.rot[k][i], self.rot[k][j] = -self.rot[k][j] * dir, self.rot[k][i] * dir\n\n self.current_i[i], self.current_i[j] = (\n self.current_i[j] if dir < 0 else self.N - 1 - self.current_i[j],\n self.current_i[i] if dir > 0 else self.N - 1 - self.current_i[i])\n\n def transformMat(self):\n scaleA = [[s * self.scale for s in a] for a in self.rot]\n scaleT = [(p - (self.N - 1) / 2) * 2.1 * self.scale for p in self.current_i]\n return [*scaleA[0], 0, *scaleA[1], 0, *scaleA[2], 0, *scaleT, 1]\n\n def draw(self, col, surf, vert, animate, angle, axis, slice, dir):\n\n glPushMatrix()\n if animate and self.isAffected(axis, slice, dir):\n glRotatef(angle * dir, *[1 if i == axis else 0 for i in range(3)]) # 围着这个坐标点旋转\n glMultMatrixf(self.transformMat())\n\n glBegin(GL_QUADS)\n for i in range(len(surf)):\n glColor3fv(colors[i])\n for j in surf[i]:\n glVertex3fv(vertices[j])\n glEnd()\n\n glPopMatrix()\n\n\nclass mycube():\n def __init__(self, N, scale):\n self.N = N\n cr = range(self.N)\n self.cubes = [Cube((x, y, z), self.N, scale) for x in cr for y in cr for z in cr] # 创建27\n\n def maindd(self):\n for cube in self.cubes:\n cube.draw(colors, surfaces, vertices, False, 0, 0, 0, 0)\n\nclass GLFrame(OpenGLFrame):\n def initgl(self):\n self.animate = True\n self.rota = 0\n self.count = 0\n\n self.ang_x, self.ang_y, self.rot_cube = 0, 0, (0, 0)\n self.animate1Cube, self.animate_ang, self.animate_speed = False, 0, 2\n self.action = (0, 0, 0)\n glClearColor(0.0, 0.0, 0.0, 0.0) # 背景黑色\n # glViewport(400, 400, 200, 200) # 指定了视口的左下角位置\n\n glEnable(GL_DEPTH_TEST) # 开启深度测试,实现遮挡关系\n glDepthFunc(GL_LEQUAL) # 设置深度测试函数(GL_LEQUAL只是选项之一)\n\n glMatrixMode(GL_PROJECTION) \n glLoadIdentity() # 恢复原始坐标\n gluPerspective(30, self.width / self.height, 0.1, 50.0)\n\n self.N = 3\n cr = range(self.N)\n self.cubes = [Cube((x, y, z), self.N, 1.5) for x in cr for y in cr for z in cr]\n\n def keydown(self, event):\n if event.keysym in rot_slice_map:\n self.animate1Cube, self.action = True, rot_slice_map[event.keysym]\n if event.keysym in rot_cube_map:\n self.rot_cube = rot_cube_map[event.keysym]\n\n def keyup(self, event):\n if event.keysym in rot_cube_map:\n self.rot_cube = (0, 0)\n\n def redraw(self):\n self.ang_x += self.rot_cube[0] * 2\n self.ang_y += self.rot_cube[1] * 2\n\n glMatrixMode(GL_MODELVIEW)\n glLoadIdentity()\n glTranslatef(0, 0, -40)\n glRotatef(self.ang_y, 0, 1, 0)\n glRotatef(self.ang_x, 1, 0, 0)\n\n glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)\n\n if self.animate1Cube:\n if self.animate_ang >= 90:\n for cube in self.cubes:\n cube.update(*self.action)\n self.animate1Cube, self.animate_ang = False, 0\n\n for cube in self.cubes:\n cube.draw(colors, surfaces, vertices, self.animate, self.animate_ang, *self.action)\n if self.animate1Cube:\n self.animate_ang += self.animate_speed\n\n\nclass App(tk.Tk):\n def __init__(self):\n super().__init__()\n self.title('rubiks cube')\n self.glframe = GLFrame(self, width=800, height=600)\n #self.bind(\"<Key>\", self.glframe.key)\n self.bind(\"<KeyPress>\", self.glframe.keydown)\n self.bind(\"<KeyRelease>\", self.glframe.keyup)\n self.glframe.pack(expand=True, fill=tk.BOTH)\n # self.glframe.focus_displayof()\n # self.animate = True\n\nApp().mainloop()\n\n\nIf you want to animate the cube automatically, you need to animate instead 
of keyboard events. Define a list of animationg. e.g.:\nanimation_list = ['1', '3', '5', 'F2']\n\nSet self.action from the list. e.g.:\nclass GLFrame(OpenGLFrame):\n # [...]\n\n def redraw(self):\n if not self.animate1Cube and animation_list:\n self.animate1Cube, self.action = True, rot_slice_map[animation_list[0]]\n del animation_list[0]\n\n # [...]\n\n"
] | [
0
] | [] | [] | [
"opengl",
"pyopengl",
"python",
"tkinter"
] | stackoverflow_0074664263_opengl_pyopengl_python_tkinter.txt |
Q:
3D Delaunay triangulation: bad output (extra simplices appearing)
I am using python3.11 to create the Delaunay triangulation of a point cloud with scipy.spatial.Delaunay and it is misbehaving by creating some extra faces. In the image below you can see a 3D scatter plot of the points.
The image was created using the next very few lines of code:
import plotly.graph_objects as go
fig = go.Figure()
fig.add_scatter3d(x = puntos[:,0], y = puntos[:,1], z = puntos[:,2],mode='markers', marker=dict(
size=1,
color='rgb(0,0,0)',
opacity=0.8
))
fig.update_layout(scene = dict(aspectmode = 'data'))
fig.show()
The data puntos can be downloaded as a csv file in this link. Now, as I said, I am interested in obtaining the Delaunay triangulation of that point cloud, for which the following piece of code is used.
import numpy as np
import pandas as pd
from scipy.spatial import Delaunay
import plotly.figure_factory as ff
puntos = pd.read_csv('puntos.csv')
puntos = puntos[['0', '1', '2']].to_numpy()  # convert to a NumPy array so positional indexing works
tri = Delaunay(np.array([puntos[:, 0], puntos[:, 1]]).T)  # triangulate using only x and y
simplices = tri.simplices
fig = ff.create_trisurf(x=puntos[:,0], y=puntos[:,1], z=puntos[:,2],
simplices=simplices, aspectratio=dict(x=1, y=1, z=0.3))
fig.show()
This produces the following image (the point cloud image and the triangulation image do not have exactly the same aspect ratio, but I find it sufficient this way):
As you might see, the triangulation is creating some extra faces at the boundary of the surface, and that is repeated along the four sides of the boundary. Does anyone know why this happens and how I can solve it?
Thank you in advance!
A:
Firstly, I congratulate you on such a well-prepared example. I recommend you explore the examples for the Delaunay function; the documentation describes several attributes of the result whose output may interest you.
A:
Referring to my comments for your original question... The extra simplices occur because some of your vertices are positioned inside, but near, the convex hull of the Delaunay triangulation. The code below adjusts the position of the problematic vertices by finding the nearest point on the convex hull. For this example, I use the Tinfour Software Library which is written in Java. But you should be able to adapt the ideas to Python if you wish to do so.
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.tinfour.common.IIncrementalTin;
import org.tinfour.common.IQuadEdge;
import org.tinfour.common.Vertex;
import org.tinfour.semivirtual.SemiVirtualIncrementalTin;
import org.tinfour.utils.loaders.VertexReaderText;
public class AdjustEdgePoints {
public static void main(String[] args) throws IOException {
File input = new File("puntos.csv");
List<Vertex> vertices = null;
try ( VertexReaderText vrt = new VertexReaderText(input)) {
vertices = vrt.read(null);
}
double pointSpacing = 0.2;
double edgeTooLong = 0.4; // based on point spacing
double pointTooClose = 0.021;
// vertices are numbered 0 to n-1
boolean[] doNotTest = new boolean[vertices.size()];
boolean[] modified = new boolean[vertices.size()];
List<Vertex> replacements = new ArrayList<>();
IIncrementalTin tin = new SemiVirtualIncrementalTin(pointSpacing);
tin.add(vertices, null);
List<IQuadEdge> perimeter = tin.getPerimeter(); // the convex hull
// mark all vertices on the perimeter as do-not-test
for (IQuadEdge edge : perimeter) {
Vertex A = edge.getA(); // the edge's vertices are A and B
doNotTest[A.getIndex()] = true;
}
// For all excessively long edges, find the vertices that are too close
// and move them to the edge.
for (IQuadEdge edge : perimeter) {
double eLength = edge.getLength();
if (eLength < edgeTooLong) {
continue; // no processing required
}
Vertex A = edge.getA();
Vertex B = edge.getB();
double eX = B.getX() - A.getX(); // vector in direction of edge
double eY = B.getY() - A.getY();
double pX = -eY / eLength; // unit vector perpendicular to edge
double pY = eX / eLength;
for (Vertex v : vertices) {
if (doNotTest[v.getIndex()]) {
continue;
}
double vX = v.getX() - A.getX();
double vY = v.getY() - A.getY();
// compute t, the parameter for a point on the line of the edge
// closest to the vertex. We are only interested in this point
// if it falls between the two endpoints of the edge.
// in that case, t will be in the range 0 < t < 1
double t = (vX * eX + vY * eY) / (eLength * eLength);
if (0 < t && t < 1) {
double s = pX * vX + pY * vY; // distance of V from edge
if (s < pointTooClose) {
double x = A.getX() + t * eX; // point on edge
double y = A.getY() + t * eY;
Vertex X = new Vertex(x, y, v.getZ(), v.getIndex());
modified[v.getIndex()] = true;
doNotTest[v.getIndex()] = true;
replacements.add(X);
}
}
} // end of vertices loop
} // end of perimeter loop
System.out.println("i,x,y,z");
for (Vertex v : vertices) {
if (!modified[v.getIndex()]) {
System.out.format("%d,%19.16f,%19.16f,%19.16f%n",
v.getIndex(), v.getX(), v.getY(), v.getZ());
}
}
replacements.sort(new Comparator<Vertex>() {
@Override
public int compare(Vertex arg0, Vertex arg1) {
return Integer.compare(arg0.getIndex(), arg1.getIndex());
}
});
System.out.println("");
for (Vertex v : replacements) {
System.out.format("%d,%19.16f,%19.16f,%19.16f%n",
v.getIndex(), v.getX(), v.getY(), v.getZ());
}
}
}
And here are the modified vertices. Some of the z values may be a little different because Tinfour only supports single-precision floating point values for its z elements.
1,-1.7362462292391640,-1.9574243190958676, 0.2769193053245544
2,-1.5328386393032090,-1.9691151455342581, 0.2039903998374939
3,-1.3356454135288550,-1.9804488021499242, 0.1507489830255508
7,-0.5545998965948742,-2.0099766379255533, 0.1641141176223755
8,-0.3452278908343442,-2.0128547506964680, 0.2061954736709595
9,-0.1280981557739663,-2.0158395043704496, 0.2454121708869934
10, 0.0960477228578226,-2.0189207048083720, 0.2699134647846222
11, 0.3249315805387950,-2.0220670354338774, 0.2698880732059479
12, 0.5546177976887393,-2.0252243956190710, 0.2401449084281921
13, 0.7809245195097682,-2.0283352998865780, 0.1818846762180328
14, 1.0012918968621154,-2.0313645595086890, 0.1022855937480927
15, 1.2151144438651920,-2.0343038512301570, 0.0123253259807825
20,-1.9574243190958676,-1.7362462292391640, 0.2769193053245544
40,-1.9691151455342581,-1.5328386393032090, 0.2039903998374939
60,-1.9804488021499242,-1.3356454135288550, 0.1507489830255508
99, 2.0343038512301570,-1.2151144438651922, 0.0123253259807825
119, 2.0313645595086890,-1.0012918968621158, 0.1022855937480927
139, 2.0283352998865780,-0.7809245195097688, 0.1818846762180328
140,-2.0099766379255533,-0.5545998965948735, 0.1641141176223755
159, 2.0252243956190710,-0.5546177976887400, 0.2401449084281921
160,-2.0128547506964680,-0.3452278908343438, 0.2061954736709595
179, 2.0220670354338774,-0.3249315805387953, 0.2698880732059479
180,-2.0158395043704496,-0.1280981557739660, 0.2454121708869934
199, 2.0189207048083720,-0.0960477228578232, 0.2699134647846222
200,-2.0189207048083720, 0.0960477228578229, 0.2699134647846222
219, 2.0158395043704496, 0.1280981557739658, 0.2454121708869934
220,-2.0220670354338774, 0.3249315805387953, 0.2698880732059479
239, 2.0128547506964680, 0.3452278908343438, 0.2061954736709595
240,-2.0252243956190710, 0.5546177976887398, 0.2401449084281921
259, 2.0099766379255533, 0.5545998965948733, 0.1641141176223755
260,-2.0283352998865780, 0.7809245195097683, 0.1818846762180328
280,-2.0313645595086890, 1.0012918968621158, 0.1022855937480927
300,-2.0343038512301570, 1.2151144438651922, 0.0123253259807825
339, 1.9804488021499242, 1.3356454135288547, 0.1507489830255508
359, 1.9691151455342581, 1.5328386393032085, 0.2039903998374939
379, 1.9574243190958676, 1.7362462292391640, 0.2769193053245544
384,-1.2151144438651920, 2.0343038512301570, 0.0123253259807825
385,-1.0012918968621156, 2.0313645595086890, 0.1022855937480927
386,-0.7809245195097685, 2.0283352998865780, 0.1818846762180328
387,-0.5546177976887396, 2.0252243956190710, 0.2401449084281921
388,-0.3249315805387951, 2.0220670354338774, 0.2698880732059479
389,-0.0960477228578228, 2.0189207048083720, 0.2699134647846222
390, 0.1280981557739660, 2.0158395043704496, 0.2454121708869934
391, 0.3452278908343442, 2.0128547506964680, 0.2061954736709595
392, 0.5545998965948740, 2.0099766379255533, 0.1641141176223755
396, 1.3356454135288547, 1.9804488021499242, 0.1507489830255508
397, 1.5328386393032085, 1.9691151455342581, 0.2039903998374939
398, 1.7362462292391640, 1.9574243190958676, 0.2769193053245544
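If you prefer to stay in Python rather than adjusting vertex positions, a rough alternative is to drop the sliver triangles after triangulating. The sketch below filters out simplices whose longest edge exceeds a threshold; the 0.4 value mirrors the edgeTooLong constant above and is an assumption based on the roughly 0.2 point spacing:
import numpy as np
from scipy.spatial import Delaunay

def triangulate_without_slivers(points, max_edge=0.4):
    # Triangulate in the x-y plane and keep only well-shaped triangles
    tri = Delaunay(points[:, :2])
    keep = []
    for simplex in tri.simplices:
        p = points[simplex, :2]
        edges = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
        if max(edges) <= max_edge:  # drop slivers with an overly long edge
            keep.append(simplex)
    return np.array(keep)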
| 3D Delaunay triangulation: bad output (extra simplices appearing) | I am using python3.11 to create the Delaunay triangulation of a point cloud with script.Delaunay and it is misbehaving by creating some extra faces. In the image below you can see a 3D scatter plot of the points.
The image was created using the next very few lines of code:
import plotly.graph_objects as go
fig = go.Figure()
fig.add_scatter3d(x = puntos[:,0], y = puntos[:,1], z = puntos[:,2],mode='markers', marker=dict(
size=1,
color='rgb(0,0,0)',
opacity=0.8
))
fig.update_layout(scene = dict(aspectmode = 'data'))
fig.show()
The data puntos can be downloaded as a csv file in this link. Now, as I said, I am interested in obtaining the Delaunay triangulation of that point could, for which the following piece of code is used.
import numpy as np
import pandas as pd
from scipy.spatial import Delaunay
import plotly.figure_factory as ff
puntos = pd.read_csv('puntos.csv')
puntos = puntos[['0', '1', '2']]
tri = Delaunay(np.array([puntos[:,0], puntos[:,1]]).T)
simplices = tri.simplices
fig = ff.create_trisurf(x=puntos[:,0], y=puntos[:,1], z=puntos[:,2],
simplices=simplices, aspectratio=dict(x=1, y=1, z=0.3))
fig.show()
This produces the following image (point cloud images and triangulation image do not have exactly same aspect ratio, but I find it sufficient this way):
As you might see, the triangulation is creating some extra faces in the boundary of the surface, and that is repeated along the four sides of the boundary. Anyone knows why this happens and how can I solve it?
Thank you in advance!
| [
"Firstly, I congratulate you on such an exquisite example. I recommend you explore examples of the function Delaunay. The documentation exhibits several properties whose output may interest you.\n",
"Referring to my comments for your original question... The extra simplices occur because some of your vertices are positioned inside, but near, the convex hull of the Delaunay triangulation. The code below adjusts the position of the problematic vertices by finding the nearest point on the convex hull. For this example, I use the Tinfour Software Library which is written in Java. But you should be able to adapt the ideas to Python if you wish to do so.\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.Comparator;\nimport java.util.List;\nimport org.tinfour.common.IIncrementalTin;\nimport org.tinfour.common.IQuadEdge;\nimport org.tinfour.common.Vertex;\nimport org.tinfour.semivirtual.SemiVirtualIncrementalTin;\nimport org.tinfour.utils.loaders.VertexReaderText;\n\npublic class AdjustEdgePoints {\n public static void main(String[] args) throws IOException {\n File input = new File(\"puntos.csv\");\n List<Vertex> vertices = null;\n try ( VertexReaderText vrt = new VertexReaderText(input)) {\n vertices = vrt.read(null);\n }\n\n double pointSpacing = 0.2;\n double edgeTooLong = 0.4; // based on point spacng\n double pointTooClose = 0.021;\n\n // vertices are numbered 0 to n-1\n boolean[] doNotTest = new boolean[vertices.size()];\n boolean[] modified = new boolean[vertices.size()];\n List<Vertex> replacements = new ArrayList<>();\n\n IIncrementalTin tin = new SemiVirtualIncrementalTin(pointSpacing);\n tin.add(vertices, null);\n List<IQuadEdge> perimeter = tin.getPerimeter(); // the convex hull\n // mark all vertices on the perimeter as do-not-test\n for (IQuadEdge edge : perimeter) {\n Vertex A = edge.getA(); // vertices are for edge are A and B\n doNotTest[A.getIndex()] = true;\n }\n\n // For all excessively long edges, find the vertices that are too close\n // and move them to the edge.\n for (IQuadEdge edge : perimeter) {\n double eLength = edge.getLength();\n if (eLength < edgeTooLong) {\n continue; // no processing required\n }\n Vertex A = edge.getA();\n Vertex B = edge.getB();\n double eX = B.getX() - A.getX(); // vector in direction of edge\n double eY = B.getY() - A.getY();\n double pX = -eY / eLength; // unit vector perpendicular to edge\n double pY = eX / eLength;\n\n for (Vertex v : vertices) {\n if (doNotTest[v.getIndex()]) {\n continue;\n }\n double vX = v.getX() - A.getX();\n double vY = v.getY() - A.getY();\n // compute t, the parameter for a point on the line of the edge\n // closest to the vertex. 
We are only interested in this point\n // if it falls between the two endpoints of the edge.\n // in that case, t will be in the range 0 < t < 1\n double t = (vX * eX + vY * eY) / (eLength * eLength);\n if (0 < t && t < 1) {\n double s = pX * vX + pY * vY; // distance of V from edge\n if (s < pointTooClose) {\n double x = A.getX() + t * eX; // point on edge\n double y = A.getY() + t * eY;\n Vertex X = new Vertex(x, y, v.getZ(), v.getIndex());\n modified[v.getIndex()] = true;\n doNotTest[v.getIndex()] = true;\n replacements.add(X);\n }\n }\n } // end of vertices loop\n } // end of perimeter loop\n\n System.out.println(\"i,x,y,z\");\n for (Vertex v : vertices) {\n if (!modified[v.getIndex()]) {\n System.out.format(\"%d,%19.16f,%19.16f,%19.16f%n\",\n v.getIndex(), v.getX(), v.getY(), v.getZ());\n }\n }\n\n replacements.sort(new Comparator<Vertex>() {\n @Override\n public int compare(Vertex arg0, Vertex arg1) {\n return Integer.compare(arg0.getIndex(), arg1.getIndex());\n }\n });\n System.out.println(\"\");\n for (Vertex v : replacements) {\n System.out.format(\"%d,%19.16f,%19.16f,%19.16f%n\",\n v.getIndex(), v.getX(), v.getY(), v.getZ());\n }\n }\n\n}\n\nAnd here are the modified vertices. Some of the z values may be a little different because Tinfour only supports single-precision floating point values for its z elements.\n1,-1.7362462292391640,-1.9574243190958676, 0.2769193053245544\n2,-1.5328386393032090,-1.9691151455342581, 0.2039903998374939\n3,-1.3356454135288550,-1.9804488021499242, 0.1507489830255508\n7,-0.5545998965948742,-2.0099766379255533, 0.1641141176223755\n8,-0.3452278908343442,-2.0128547506964680, 0.2061954736709595\n9,-0.1280981557739663,-2.0158395043704496, 0.2454121708869934\n10, 0.0960477228578226,-2.0189207048083720, 0.2699134647846222\n11, 0.3249315805387950,-2.0220670354338774, 0.2698880732059479\n12, 0.5546177976887393,-2.0252243956190710, 0.2401449084281921\n13, 0.7809245195097682,-2.0283352998865780, 0.1818846762180328\n14, 1.0012918968621154,-2.0313645595086890, 0.1022855937480927\n15, 1.2151144438651920,-2.0343038512301570, 0.0123253259807825\n20,-1.9574243190958676,-1.7362462292391640, 0.2769193053245544\n40,-1.9691151455342581,-1.5328386393032090, 0.2039903998374939\n60,-1.9804488021499242,-1.3356454135288550, 0.1507489830255508\n99, 2.0343038512301570,-1.2151144438651922, 0.0123253259807825\n119, 2.0313645595086890,-1.0012918968621158, 0.1022855937480927\n139, 2.0283352998865780,-0.7809245195097688, 0.1818846762180328\n140,-2.0099766379255533,-0.5545998965948735, 0.1641141176223755\n159, 2.0252243956190710,-0.5546177976887400, 0.2401449084281921\n160,-2.0128547506964680,-0.3452278908343438, 0.2061954736709595\n179, 2.0220670354338774,-0.3249315805387953, 0.2698880732059479\n180,-2.0158395043704496,-0.1280981557739660, 0.2454121708869934\n199, 2.0189207048083720,-0.0960477228578232, 0.2699134647846222\n200,-2.0189207048083720, 0.0960477228578229, 0.2699134647846222\n219, 2.0158395043704496, 0.1280981557739658, 0.2454121708869934\n220,-2.0220670354338774, 0.3249315805387953, 0.2698880732059479\n239, 2.0128547506964680, 0.3452278908343438, 0.2061954736709595\n240,-2.0252243956190710, 0.5546177976887398, 0.2401449084281921\n259, 2.0099766379255533, 0.5545998965948733, 0.1641141176223755\n260,-2.0283352998865780, 0.7809245195097683, 0.1818846762180328\n280,-2.0313645595086890, 1.0012918968621158, 0.1022855937480927\n300,-2.0343038512301570, 1.2151144438651922, 0.0123253259807825\n339, 1.9804488021499242, 1.3356454135288547, 0.1507489830255508\n359, 
1.9691151455342581, 1.5328386393032085, 0.2039903998374939\n379, 1.9574243190958676, 1.7362462292391640, 0.2769193053245544\n384,-1.2151144438651920, 2.0343038512301570, 0.0123253259807825\n385,-1.0012918968621156, 2.0313645595086890, 0.1022855937480927\n386,-0.7809245195097685, 2.0283352998865780, 0.1818846762180328\n387,-0.5546177976887396, 2.0252243956190710, 0.2401449084281921\n388,-0.3249315805387951, 2.0220670354338774, 0.2698880732059479\n389,-0.0960477228578228, 2.0189207048083720, 0.2699134647846222\n390, 0.1280981557739660, 2.0158395043704496, 0.2454121708869934\n391, 0.3452278908343442, 2.0128547506964680, 0.2061954736709595\n392, 0.5545998965948740, 2.0099766379255533, 0.1641141176223755\n396, 1.3356454135288547, 1.9804488021499242, 0.1507489830255508\n397, 1.5328386393032085, 1.9691151455342581, 0.2039903998374939\n398, 1.7362462292391640, 1.9574243190958676, 0.2769193053245544\n\n"
] | [
0,
0
] | [] | [] | [
"3d",
"plotly_python",
"python",
"scipy",
"triangulation"
] | stackoverflow_0074642689_3d_plotly_python_python_scipy_triangulation.txt |
Q:
PowerShell's prompt changes to just "PS" when I run "conda activate xx" in it. What happened?
When I activate my conda environment in PowerShell, the prompt changes to just "PS".
Normally, the prompt is "(base) PS C:\Users\xxx", but it's just "PS" now. What happened? I want to get it back.
My conda version is 22.11.0.
I want it to be "(xx) PS C:\Users\xxx", not just "PS".
A:
I found a solution.
Updating PowerShell to version 7 solved the problem. But that is strange; why does the PowerShell version matter?
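Before (or instead of) upgrading, it may also be worth re-running conda's PowerShell initialization so the prompt hook in your profile matches the installed conda version (I have not confirmed this fixes this particular case):
conda init powershell
Then restart the shell.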
| Powershell's Prompt change to just "PS" when I run "conda activate xx" in, What happend? | When I activate my conda environment in powershell, The Prompt change to just "PS".
In normal, the Prompt is "(base) PS C:\Users\xxx", but It's just "PS" now. What happend? I want to get it back.
My conda's version is "conda 22.11.0".
I want it to be "(xx) PS C:\Users\xxx", not just "PS".
| [
"I found a solution.\nI can update powershell to 7 to solve the problem. But that's so weird. Why?\n"
] | [
0
] | [] | [] | [
"conda",
"powershell",
"python"
] | stackoverflow_0074665678_conda_powershell_python.txt |
Q:
AttributeError: module 'ipyparallel' has no attribute 'Cluster'
I am going through the tutorial to learn ipyparallel and, while doing so, I got the error: AttributeError: module 'ipyparallel' has no attribute 'Cluster'
I uninstalled and reinstalled the package but the error persisted. Does anyone have any tips for solving this issue?
My Code/ Issue:
Thanks
A:
Make sure your ipyparallel version is greater than or equal to 7.0, since the Cluster API was added in version 7.
In [1]: import ipyparallel as ipp
In [2]: ipp.__version__
Out[2]: '6.3.0'
In [3]: hasattr(ipp, "Cluster")
Out[3]: False
Sometimes conda install ipyparallel may not install the newest version. Try using pip install --upgrade ipyparallel instead. With version 7.0 or later:
In [1]: import ipyparallel as ipp
In [2]: ipp.__version__
Out[2]: '8.4.1'
In [3]: hasattr(ipp, "Cluster")
Out[3]: True
In [4]: cluster = ipp.Cluster(n=4)
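As a quick sanity check once you are on 7.0 or later, the Cluster object can be used as a context manager that starts the engines and connects a client (a minimal sketch; the lambda is just a placeholder task):
import ipyparallel as ipp

cluster = ipp.Cluster(n=4)
with cluster as rc:
    # rc is a Client connected to the 4 engines
    print(rc[:].apply_sync(lambda: "hello"))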
| AttributeError: module 'ipyparallel' has no attribute 'Cluster' | I am going through the tutorial to learn ipyparallel and while doing so, I got the error: AttributeError: module 'ipyparallel' has no attribute 'Cluster'
I uninstalled and reinstalled the package but the error persisted, does anyone have any tips for solving this issue?
My Code/ Issue:
Thanks
| [
"Make sure your ipyparallel version is greater or equal to 7.0.\nIn [1]: import ipyparallel as ipp\n\nIn [2]: ipp.__version__\nOut[2]: '6.3.0'\n\nIn [3]: hasattr(ipp, \"Cluster\")\nOut[3]: False\n\nSometimes conda install ipyparallel may not install the newest version. Try using pip install ipyparallel. After version 7.0:\nIn [1]: import ipyparallel as ipp\n\nIn [2]: ipp.__version__\nOut[2]: '8.4.1'\n\nIn [3]: hasattr(ipp, \"Cluster\")\nOut[3]: True\n\nIn [4]: cluster = ipp.Cluster(n=4)\n\n"
] | [
0
] | [] | [] | [
"ipython",
"ipython_parallel",
"python"
] | stackoverflow_0072331252_ipython_ipython_parallel_python.txt |
Q:
Cython --embed flag in setup.py
I am starting to compile my Python 3 project with Cython, and I would like to know if it's possible to reduce my current compile time workflow to a single instruction.
This is my setup.py as of now:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
extensions = [
Extension("v", ["version.py"]),
Extension("*", ["lib/*.py"])
]
setup(
name = "MyFirst App",
ext_modules = cythonize(extensions),
)
And this is what I run from shell to obtain my executables:
python3 setup.py build_ext --inplace
cython3 --embed -o main.c main.py
gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl
This whole process works just fine; I'd like to know if there is a way to also embed the last two instructions in the setup.py script.
Thank you
A:
Start off with checking out the docs for the utility you're using. If there are complicated arguments, there is probably a config file.
This should tidy up your first command:
# setup.cfg
[build_ext]
inplace=1
I don't see anything in the docs about a post-build step, and I wouldn't really expect this process to execute shell commands afterwards. build_ext is for building Python extensions. make is widely available and is the usual tool for building C binaries.
Add a Makefile to your project. If you have gcc installed, you likely have make already:
# Makefile (lines need to start with tab)
compile:
python3 setup.py build_ext --inplace
cython3 --embed -o main.c main.py
gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl
Now you can just type make or make compile to get the desired effect.
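If you would rather keep everything in Python instead of a Makefile, the same three commands can be wrapped in a small driver script (a sketch that simply reuses the exact commands from the question):
# build.py
import subprocess

subprocess.run(["python3", "setup.py", "build_ext", "--inplace"], check=True)
subprocess.run(["cython3", "--embed", "-o", "main.c", "main.py"], check=True)
subprocess.run(["gcc", "-Os", "-I", "/usr/include/python3.5m",
                "-o", "main", "main.c",
                "-lpython3.5m", "-lpthread", "-lm", "-lutil", "-ldl"], check=True)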
A:
Yes, it is possible to reduce your compile time workflow to a single instruction. The setup function in the distutils module provides a script_args argument that allows you to specify arguments to be passed to the build script.
You can use this argument to specify the --inplace and --embed flags for Cython, and the -o option for gcc, like this:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
extensions = [
    Extension("v", ["version.py"]),
    Extension("*", ["lib/*.py"])
]

setup(
    name = "MyFirst App",
    ext_modules = cythonize(extensions),
    script_args = ["build_ext", "--inplace", "--embed", "-o", "main.c"]
)
You can then compile your project by running python3 setup.py from the shell. This will run the build script with the specified arguments, and you will get your executables.
Note that you will still need to run gcc separately to compile the C code generated by Cython into an executable. You can do this by running gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl from the shell after running python3 setup.py.
| Cython --embed flag in setup.py | I am starting to compile my Python 3 project with Cython, and I would like to know if it's possible to reduce my current compile time workflow to a single instruction.
This is my setup.py as of now:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
extensions = [
Extension("v", ["version.py"]),
Extension("*", ["lib/*.py"])
]
setup(
name = "MyFirst App",
ext_modules = cythonize(extensions),
)
And this is what I run from shell to obtain my executables:
python3 setup.py build_ext --inplace
cython3 --embed -o main.c main.py
gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl
This whole process works just fine, I'd like to know if there is a way to also embed the last two instruction in the setup.py script.
Thank you
| [
"Start off with checking out the docs for the utility you're using. If there are complicated arguments, there is probably a config file.\nThis should tidy up your first command:\n# setup.cfg\n[build_ext]\ninplace=1\n\nI don't see anything in the docs about a post-build step, and I wouldn't really expect this process to execute shell commands afterwards. build_ext is for building python. make is very available and usual for building C binaries.\nAdd a Makefile to your project. If you have gccinstalled, you likely have make already:\n# Makefile (lines need to start with tab)\n\ncompile:\n python3 setup.py build_ext --inplace\n cython3 --embed -o main.c main.py\n gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl\n\n\nNow you can just type make or make compile to get the desired affect.\n",
"Yes, it is possible to reduce your compile time workflow to a single instruction. The setup function in the distutils module provides a script_args argument that allows you to specify arguments to be passed to the build script.\nYou can use this argument to specify the --inplace and --embed flags for Cython, and the -o option for gcc, like this:\nfrom distutils.core import setup\nfrom distutils.extension import Extension\nfrom Cython.Build import cythonize\n\nextensions = [\nExtension(\"v\", [\"version.py\"]),\nExtension(\"\", [\"lib/.py\"])\n]\n\nsetup(\nname = \"MyFirst App\",\next_modules = cythonize(extensions),\nscript_args = [\"build_ext\", \"--inplace\", \"--embed\", \"-o\", \"main.c\"]\n)\n\nYou can then compile your project by running python3 setup.py from the shell. This will run the build script with the specified arguments, and you will get your executables.\nNote that you will still need to run gcc separately to compile the C code generated by Cython into an executable. You can do this by running gcc -Os -I /usr/include/python3.5m -o main main.c -lpython3.5m -lpthread -lm -lutil -ldl from the shell after running python3 setup.py\n"
] | [
0,
0
] | [] | [] | [
"cython",
"python",
"python_3.5"
] | stackoverflow_0046824143_cython_python_python_3.5.txt |
Q:
when installing pyaudio, pip cannot find portaudio.h in /usr/local/include
I'm using mac osx 10.10
As the PyAudio Homepage said, I install the PyAudio using
brew install portaudio
pip install pyaudio
the installation of portaudio seems successful, I can find headers and libs in /usr/local/include and /usr/local/lib
but when I try to install pyaudio, it gives me an error that
src/_portaudiomodule.c:29:10: fatal error: 'portaudio.h' file not found
#include "portaudio.h"
^
1 error generated.
error: command 'cc' failed with exit status 1
actually it is in /usr/local/include
why can't it find the file?
some answers to similar questions are not working for me (like using virtualenv, or compiling it manually), and I want to find a simple way to solve this.
A:
Since pyAudio has portAudio as a dependency, you first have to install portaudio.
brew install portaudio
Then try: pip install pyAudio. If the problem persists after installing portAudio, you can specify the directory paths where the compiler will be able to find the headers and libraries (e.g. portaudio.h). Since the headers should be in the /usr/local/include directory:
pip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio
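On newer pip versions --global-option is deprecated, so as an alternative (assuming the same /usr/local prefix used above) you can pass the include and library paths through environment variables instead:
CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" pip install pyaudio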
A:
On Ubuntu builds:
sudo apt-get install python-pyaudio
For Python3:
sudo apt-get install python3-pyaudio
A:
You have to install portaudio first and then link it; only then can the compiler find that header file (i.e. portaudio.h). To install portaudio on a Mac using Homebrew, use the following commands.
brew install portaudio
brew link portaudio
pip install pyaudio
sudo is not needed if you're an admin. We should refrain from using sudo as it messes up lots of permissions.
A:
First, you can use Homebrew to install portaudio.
brew install portaudio
Then try to find the portaudio path:
sudo find / -name "portaudio.h"
In my case it is at /usr/local/Cellar/portaudio/19.6.0/include.
Run the command below to install pyaudio:
pip install --global-option='build_ext' --global-option='-I/usr/local/Cellar/portaudio/19.6.0/include' --global-option='-L/usr/local/Cellar/portaudio/19.6.0/lib' pyaudio
A:
On Raspbian:
sudo apt-get install python-pyaudio
A:
on Centos:
yum install -y portaudio portaudio-devel && pip install pyaudio
A:
Just for the record for folks using MacPorts and not Homebrew:
$ [sudo] port install portaudio
$ pip install pyaudio --global-option="build_ext" --global-option="-I/opt/local/include" --global-option="-L/opt/local/lib"
A:
I needed to do the following to install PortAudio on Debian
sudo apt install portaudio19-dev
I also apt install'd python3-portaudio before that, although it didn't work. I'm not sure if that contributed as well.
A:
Adding a bit of robustness (in case of a non-default homebrew dir) to the snippet from @fukudama,
brew install portaudio
pip install --global-option='build_ext' --global-option="-I$(brew --prefix)/include" --global-option="-L$(brew --prefix)/lib" pyaudio
A:
For me on 10.10.5 the paths were under /opt/local. I had to add /opt/local/bin to my /etc/paths file. And the command line that worked was
sudo pip install --global-option='build_ext' --global-option='-I/opt/local/include' --global-option='-L/opt/local/lib' pyaudio
A:
If you are using anaconda/miniconda to manage your python environments then
conda install pyaudio
installs portaudio at the same time as pyaudio
The following NEW packages will be INSTALLED:
portaudio pkgs/main/osx-64::portaudio-19.6.0-h647c56a_4
pyaudio pkgs/main/osx-64::pyaudio-0.2.11-py37h1de35cc_2
A:
On Termux (this is what worked for me):
pkg install python
bash -c "$(curl -fsSL https://its-pointless.github.io/setup-pointless-repo.sh)"
pkg install portaudio
pip install pyaudio
Source: pyaudio installing #6235
A:
This is the tested answer for a MacBook Pro with the M2 chip:
First, find the location of the portaudio.h file with
sudo find / -name "portaudio.h"
Then, once you find the location, copy it and use it in this command:
LDFLAGS="-L/{opt/homebrew/Cellar/portaudio/19.7.0/}lib" CFLAGS="-I/{opt/homebrew/Cellar/portaudio/19.7.0}/include" pip3 install pyaudio
Replace the path inside the braces { } with your own file location; hopefully this works. I tried the solutions above and this one worked for me.
A:
For an M1 Mac, this worked for me:
LDFLAGS="-L/opt/homebrew/Cellar/portaudio/19.7.0/lib" CFLAGS="-I/opt/homebrew/Cellar/portaudio/19.7.0/include" pip3 install pyaudio
Res:
Created wheel for pyaudio: filename=PyAudio-0.2.12-cp310-cp310-macosx_11_0_arm64.whl size=24170 sha256=c74eb581e6bca2400f681f68d33654002722969f1a455ffce87e4e5da05471d8
Stored in directory: /private/var/folders/m_/kzyr4q_11cl35ngrj77k28f00000gn/T/pip-ephem-wheel-cache-ql1x8ums/wheels/93/08/0b/b915ab1895927641737175e5bc7b6111e8ed0c26daabeecba0
Successfully built pyaudio
Installing collected packages: pyaudio
Successfully installed pyaudio-0.2.12
Note: do not use find /, as it is very slow; use brew info portaudio to find the install path instead.
| when installing pyaudio, pip cannot find portaudio.h in /usr/local/include | I'm using mac osx 10.10
As the PyAudio Homepage said, I install the PyAudio using
brew install portaudio
pip install pyaudio
the installation of portaudio seems successful, I can find headers and libs in /usr/local/include and /usr/local/lib
but when I try to install pyaudio, it gives me an error that
src/_portaudiomodule.c:29:10: fatal error: 'portaudio.h' file not found
#include "portaudio.h"
^
1 error generated.
error: command 'cc' failed with exit status 1
actually it is in /usr/local/include
why can't it find the file?
some answers to similar questions are not working for me(like using virtualenv, or compile it manually), and I want to find a simple way to solve this.
| [
"Since pyAudio has portAudio as a dependency, you first have to install portaudio.\nbrew install portaudio\n\nThen try: pip install pyAudio. If the problem persists after installing portAudio, you can specify the directory path where the compiler will be able to find the source programs (e.g: portaudio.h). Since the headers should be in the /usr/local/include directory:\npip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio\n\n",
"On Ubuntu builds:\nsudo apt-get install python-pyaudio\n\nFor Python3:\nsudo apt-get install python3-pyaudio\n\n",
"You have to install portaudio first then link that file. Only then you can find that header file (i.e, portaudio.h). To install portaudio in mac by using HomeBrew program use following commands.\nbrew install portaudio\nbrew link portaudio\npip install pyaudio\n\nsudo is not needed if you're admin. We should refrain using sudo as it messes up lots of permissions.\n",
"First, you can use Homebrew to install portaudio.\nbrew install portaudio\n\nThen try to find the portaudio path:\nsudo find / -name \"portaudio.h\"\n\nIn my case it is at /usr/local/Cellar/portaudio/19.6.0/include .\nRun the command below to install pyaudio\npip install --global-option='build_ext' --global-option='-I/usr/local/Cellar/portaudio/19.6.0/include' --global-option='-L/usr/local/Cellar/portaudio/19.6.0/lib' pyaudio\n\n",
"On Raspbian:\nsudo apt-get install python-pyaudio\n\n",
"on Centos:\nyum install -y portaudio portaudio-devel && pip install pyaudio\n\n",
"Just for the record for folks using MacPorts and not Homebrew:\n$ [sudo] port install portaudio\n$ pip install pyaudio --global-option=\"build_ext\" --global-option=\"-I/opt/local/include\" --global-option=\"-L/opt/local/lib\"\n\n",
"I needed to do the following to install PortAudio on Debian\nsudo apt install portaudio19-dev\n\nI also apt install'd python3-portaudio before that, although it didn't work. I'm not sure if that contributed as well.\n",
"Adding a bit of robustness (in case of a non-default homebrew dir) to the snippet from @fukudama,\nbrew install portaudio\npip install --global-option='build_ext' --global-option=\"-I$(brew --prefix)/include\" --global-option=\"-L$(brew --prefix)/lib\" pyaudio\n\n",
"For me on 10.10.5 the paths were under /opt/local. I had to add /opt/local/bin to my /etc/paths file. And the command line that worked was\nsudo pip install --global-option='build_ext' --global-option='-I/opt/local/include' --global-option='-L/opt/local/lib' pyaudio\n\n",
"If you are using anaconda/miniconda to manage your python environments then\nconda install pyaudio\ninstalls portaudio at the same time as pyaudio\nThe following NEW packages will be INSTALLED:\n\n portaudio pkgs/main/osx-64::portaudio-19.6.0-h647c56a_4\n pyaudio pkgs/main/osx-64::pyaudio-0.2.11-py37h1de35cc_2\n\n",
"On Termux (this is what worked for me):\n\npkg install python\nbash -c \"$(curl -fsSL https://its-pointless.github.io/setup-pointless-repo.sh)\"\npkg install portaudio\npip install pyaudio\n\nSource: pyaudio installing #6235\n",
"this is the tested answer for MacBook Pro m2 chip:\nfirst find the location of the portaudio.h file by\nsudo find / -name \"portaudio.h\"\n\nthen, once you find the location, copy it and use it in this command.\nLDFLAGS=\"-L/{opt/homebrew/Cellar/portaudio/19.7.0/}lib\" CFLAGS=\"-I/{opt/homebrew/Cellar/portaudio/19.7.0}/include\" pip3 install pyaudio\n\nHere replace the location from { } into you file location hopefully this works. I have tried above solutions and this one worked for me.\n",
"For M1 mac, this is worked for me:\nLDFLAGS=\"-L/opt/homebrew/Cellar/portaudio/19.7.0/lib\" CFLAGS=\"-I/opt/homebrew/Cellar/portaudio/19.7.0/include\" pip3 install pyaudio\n\nRes:\n Created wheel for pyaudio: filename=PyAudio-0.2.12-cp310-cp310-macosx_11_0_arm64.whl size=24170 sha256=c74eb581e6bca2400f681f68d33654002722969f1a455ffce87e4e5da05471d8\n Stored in directory: /private/var/folders/m_/kzyr4q_11cl35ngrj77k28f00000gn/T/pip-ephem-wheel-cache-ql1x8ums/wheels/93/08/0b/b915ab1895927641737175e5bc7b6111e8ed0c26daabeecba0\nSuccessfully built pyaudio\nInstalling collected packages: pyaudio\nSuccessfully installed pyaudio-0.2.12\n\nBe noted, do not using find / its very slow and stupid, using brew info portaudio\n"
] | [
182,
27,
16,
13,
9,
8,
8,
6,
5,
4,
1,
1,
1,
0
] | [] | [] | [
"macos",
"pyaudio",
"python"
] | stackoverflow_0033513522_macos_pyaudio_python.txt |
Q:
replace multiple words from a string at the same time
I have this dict in python.
reflections = {
    'I am': 'you are',
    'I was': 'you were',
    'I': 'you',
    "I'm": 'you are',
    "I'd": 'you would',
    "I've": 'you have',
    "I'll": 'you will',
    'my': 'your',
    'you are': 'I am',
    'you were': 'I was',
    "you've": 'I have',
    "you'll": 'I will',
    'your': 'my',
    'yours': 'mine',
    'you': 'me',
    'me': 'you'
}
I have written this piece of code to replace the words.
see = "I am going to kill you"
for i in reflections:
    if i in see:
        print(f'matched key {i}')
        see = see.replace(i, reflections[i])
        print(see)
This is the output of the above code.
matched key I am
you are going to kill you
matched key you are
I am going to kill you
matched key you
I am going to kill me
matched key me
I am going to kill you
Now I want to replace all occurrences of words from the reflections dict. As you can see in the code output, "I am" is replaced with "you are" and in the next iteration, "you are" is replaced back with "I am", which shouldn't happen. It should not replace the replacement. So the output should be:
You are going to kill me
A:
Solution 1 - str.index
You can do it as follows:
create a new string variable new_see, which is initially empty, but will ultimately contain the result of the replacements
make each iteration only process the part of the input string up until the point where a matching key is encountered, and append the iteration's replacement result to the result string
after each iteration, truncate the input string from its start to the index after the key encountered in the current iteration, so that the next iteration will only work with the yet-unprocessed part
see = "I am going to kill you!"
new_see = ""
print(see)
for key, reflection in reflections.items():
if key in see:
idx = see.index(key)
print(f"matched key [{key}] @ index {idx}, reflection=[{reflection}]")
# take the part of `see` up until the index where the `key` ends,
# replace the `key` with `replacement` and append the result to
# the new string
new_see += see[:idx+len(key)].replace(key, reflection)
# truncate the original string from start up until the index where
# `key` was encountered, so the next iteration will only work on
# the part of it that hasn't been processed yet
see = see[idx+len(key):]
# take a look at the intermediate results
print(f"see=[{see}], new_see=[{new_see}]")
# append any leftover part that wasn't in the dict (in this case, "!")
if see:
new_see += see
new_see = new_see.capitalize()
print(new_see)
Output:
I am going to kill you!
matched key [I am] @ index 0, reflection=[you are]
see=[ going to kill you!], new_see=[you are]
matched key [you] @ index 15, reflection=[me]
see=[!], new_see=[you are going to kill me]
You are going to kill me!
Solution 2 - str.split
Slightly more pythonic solution using str.split instead of operating on indices:
see = "I am going to kill you!"
new_see = ""
print(see)
for key, reflection in reflections.items():
if key in see:
print(f"matched key [{key}], reflection=[{reflection}]")
left, right = see.split(key)
new_see += left + reflection
see = right
print(f"see=[{see}], new_see=[{new_see}]")
if see:
new_see += see
new_see = new_see.capitalize()
print(new_see)
Output:
I am going to kill you!
matched key [I am], reflection=[you are]
see=[ going to kill you!], new_see=[you are]
matched key [you], reflection=[me]
see=[!], new_see=[you are going to kill me]
You are going to kill me!
As pointed out in a comment, this code is going to replace "I'm" with "You'm". In order to fix this, you should reorder the entries in your dict such that "I'm", "I'd", etc., are processed before "I". Even then though, it will still not work properly in some cases, e.g. for words in all caps - "DICT" is going to be replaced with "DyouCT". In order to deal with this, you'd need to take a look at regular expressions and use re.sub instead of str.replace - that will allow you e.g. to only replace a key if it's a standalone word (i.e. surrounded by non-letters).
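For reference, a minimal sketch of that regex-based approach (assuming the reflections dict from the question); because re.sub makes a single pass over the string, a replacement can never be replaced again:
import re

# Longest keys first so "I am" wins over "I"; \b word boundaries keep
# words like "DICT" intact.
pattern = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in sorted(reflections, key=len, reverse=True)) + r")\b")

def reflect(text):
    return pattern.sub(lambda m: reflections[m.group(1)], text)

print(reflect("I am going to kill you"))  # you are going to kill me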
A:
print(see1) has the wrong level of indentation.
Try with this:
see = "you are going to kill me"
for i in reflections:
if i in see:
see = see.replace(i, reflections[i])
see1 = see.replace(i, reflections[i])
print(see1)
That way you print the word once you have broken out of the for-loop.
| replace multiple words from a string at the same time | I have this dict in python.
reflections = {
'I am': 'you are',
'I was': 'you were',
'I': 'you',
"I'm": 'you are',
"I'd": 'you would',
"I've": 'you have',
"I'll": 'you will',
'my': 'your',
'you are': 'I am',
'you were': 'I was',
"you've": 'I have',
"you'll": 'I will',
'your': 'my',
'yours': 'mine',
'you': 'me',
'me': 'you'
}
I have written this piece of code to replace the words.
see = "I am going to kill you"
for i in reflections:
if i in see:
print(f'matched key {i}')
see = see.replace(i, reflections[i])
print(see)
This is the output of the above code.
matched key I am
you are going to kill you
matched key you are
I am going to kill you
matched key you
I am going to kill me
matched key me
I am going to kill you
Now I want to replace all occurrences of words from reflections dict and replace them. As you can see in code output, "I am" is replaced with "you are" and in the next iteration, "you are" is again replaced with "I am", which shouldn't happen. It should not replace the replacement. So the output should be:
You are going to kill me
| [
"Solution 1 - str.index\nYou can do it as follows:\n\ncreate a new string variable new_see, which is initially empty, but will ultimately contain the result of the replacements\nmake each iteration only process the part of the input string up until the point where a matching key is encountered, and append the iteration's replacement result to the result string\nafter each iteration, truncate the input string from its start to the index after the key encountered in current iteration, so that the next iteration will only work with the yet unprocessed part\n\nsee = \"I am going to kill you!\"\nnew_see = \"\"\nprint(see)\n\nfor key, reflection in reflections.items():\n if key in see:\n idx = see.index(key)\n print(f\"matched key [{key}] @ index {idx}, reflection=[{reflection}]\")\n\n # take the part of `see` up until the index where the `key` ends,\n # replace the `key` with `replacement` and append the result to\n # the new string\n new_see += see[:idx+len(key)].replace(key, reflection)\n\n # truncate the original string from start up until the index where\n # `key` was encountered, so the next iteration will only work on\n # the part of it that hasn't been processed yet\n see = see[idx+len(key):]\n\n # take a look at the intermediate results\n print(f\"see=[{see}], new_see=[{new_see}]\")\n\n# append any leftover part that wasn't in the dict (in this case, \"!\")\nif see:\n new_see += see\n\nnew_see = new_see.capitalize()\nprint(new_see)\n\nOutput:\nI am going to kill you!\nmatched key [I am] @ index 0, reflection=[you are]\nsee=[ going to kill you!], new_see=[you are]\nmatched key [you] @ index 15, reflection=[me]\nsee=[!], new_see=[you are going to kill me]\nYou are going to kill me!\n\nSolution 2 - str.split\nSlightly more pythonic solution using str.split instead of operating on indices:\nsee = \"I am going to kill you!\"\nnew_see = \"\"\nprint(see)\n\nfor key, reflection in reflections.items():\n if key in see:\n print(f\"matched key [{key}], reflection=[{reflection}]\")\n left, right = see.split(key)\n new_see += left + reflection\n see = right\n print(f\"see=[{see}], new_see=[{new_see}]\")\nif see:\n new_see += see\n\nnew_see = new_see.capitalize()\nprint(new_see)\n\nOutput:\nI am going to kill you!\nmatched key [I am], reflection=[you are]\nsee=[ going to kill you!], new_see=[you are]\nmatched key [you], reflection=[me]\nsee=[!], new_see=[you are going to kill me]\nYou are going to kill me!\n\n\nAs pointed out in a comment, this code is going to replace \"I'm\" with \"You'm\". In order to fix this, you should reorder the entries in your dict such that \"I'm\", \"I'd\", etc., are processed before \"I\". Even then though, it will still not work properly in some cases, e.g. for words in all caps - \"DICT\" is going to be replaced with \"DyouCT\". In order to deal with this, you'd need to take a look at regular expressions and use re.sub instead of str.replace - that will allow you e.g. to only replace a key if it's a standalone word (i.e. surrounded by non-letters).\n",
"print(see1) has the wrong level of indentation.\nTry with this:\nsee = \"you are going to kill me\"\nfor i in reflections:\n if i in see:\n see = see.replace(i, reflections[i])\n see1 = see.replace(i, reflections[i])\nprint(see1)\n\nThat way you print the word once you have broken out of the for-loop.\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0058393229_python.txt |
Q:
Google SheetsAPI: ValueError: Client secrets must be for a web or installed app
Very similar to this question: ValueError: Client secrets must be for a web or installed app but with a twist: I'm trying to do this through a Google Cloud Virtual Machine.
Recently, the Out-Of-Band (OOB) flow stopped working for me (it seems the reason may lie here: oob-migration). Until then, I was able to easily run the Google Sheets API on the Virtual Machine to both read and write Google Sheet files.
Now, I'm trying to follow this Python quickstart for google sheets which is almost identical to the code I already had, under the "Configure the sample" section.
My code on Python right now is:
scopes = ['https://www.googleapis.com/auth/spreadsheets']
creds = None
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.json'):
    creds = Credentials.from_authorized_user_file('token.json', scopes)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            credentials_path, scopes)
        creds = flow.run_local_server(port=0)
    # Save the credentials for the next run
    with open('token.json', 'w') as token:
        token.write(creds.to_json())

# Store creds in object
my_creds = creds

# Create service
build('sheets', 'v4', credentials=my_creds)
But every time I get this error:
ValueError: Client secrets must be for a web or installed app.
For the record, I did create the credentials under "OAuth 2.0 Client IDs" on Google Cloud, and the application type is "Web application". If that's not the type, I don't know which one it should be.
Thank you so much for your help, really appreciated.
A:
The code you are using was designed for an installed app, which is exactly what your error message is saying. The quickstart clearly states: Click Application type > Desktop app.
While I agree the error message says installed or web, I am not sure that code can be used for a web application.
Client secrets must be for a web or installed app.
Open the file denoted by credentials_path; the file should have the following format.
credentials.json
{
  "installed": {
    "client_id": "[redacted]",
    "project_id": "daimto-tutorials-101",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_secret": "[redacted]",
    "redirect_uris": [
      "http://localhost"
    ]
  }
}
Points to check:
it must say "installed"
redirect_uris must not include anything like urn:ietf:wg:oauth:2.0:oob
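A quick way to sanity-check this programmatically (a sketch; credentials_path is the same variable used in the question):
import json

with open(credentials_path) as f:
    secrets = json.load(f)

# The loader raises "Client secrets must be for a web or installed app."
# when neither of these top-level keys is present.
print(list(secrets.keys()))  # expect ['installed'] or ['web']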
Here is a video which will show you how to create the proper credentials file for use with that code. This should be done through the Google Cloud console:
How to create Google Oauth2 installed application credentials.json
Note: I highly doubt that this issue is due to OOB; you would have a different error message if it were.
Update: web app credentials work.
I was able to test this using web app credentials. The only change I had to make was to specify the port I wanted the code to run on; to get a static port, I needed to add a redirect URI to the developer console project.
I made no other changes to the standard quickstart.
flow = InstalledAppFlow.from_client_secrets_file(
    CREDENTIALS_FILE_PATH, SCOPES)
creds = flow.run_local_server(port=53911)
| Google SheetsAPI: ValueError: Client secrets must be for a web or installed app | Very similar to this question: ValueError: Client secrets must be for a web or installed app but with a twist: I'm trying to do this through a Google Cloud Virtual Machine.
Recently, the Out-Of-Band (OOB) flow stopped working for me (it seems the reason may lie here: oob-migration. Until then, I was able to easily run the Google Sheets API on the Virtual Machine to both read/write on Google Sheet files
Now, I'm trying to follow this Python quickstart for google sheets which is almost identical to the code I already had, under the "Configure the sample" section.
My code on Python right now is:
scopes = ['https://www.googleapis.com/auth/spreadsheets']
creds = None
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.json'):
creds = Credentials.from_authorized_user_file('token.json', scopes)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
credentials_path, scopes)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open('token.json', 'w') as token:
token.write(creds.to_json())
#Store creds in object
my_creds = creds
#Create service
build('sheets', 'v4', credentials=my_creds)
But every time Ì get this error:
ValueError: Client secrets must be for a web or installed app.
For the record, I did create the credentials under "OAuth 2.0 Client IDs" on Google Cloud, and the application type is "Web application". If that's not the type, I don't know which one it should be.
Thank you so much for your help, really appreciated.
| [
"The code you are are using was designed for an installed. Which is exactly what your error message is saying. The QuickStart clearly states Click Application type > Desktop app.\nWhile i agree the error message states installed or web, i am not sure that code can be used for a web application.\n\nClient secrets must be for a web or installed app.\n\nOpen the file denoted by credentials_path the file should have the following format.\ncredentials.json\n{\n \"installed\": {\n \"client_id\": \"[redacted]\",\n \"project_id\": \"daimto-tutorials-101\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://oauth2.googleapis.com/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_secret\": \"[redacted]\",\n \"redirect_uris\": [\n \"http://localhost\"\n ]\n }\n}\n\nPoints to check.\n\nit must say \"installed\"\nredirect_uris must not include anything like urn:ietf:wg:oauth:2.0:oob\n\nHere is a of video which will show you how to create the proper credentials file for use with that code. This should be done though Google cloud console\n\nHow to create Google Oauth2 installed application credentials.json\n\nNote: I highly doubt that this issue is due to oob, you would have a different error message if it was.\nupdate Web app works.\nI was able to test this with using a web app credentials. the only change i had to make was to denote the port i wanted the code to run on in order to get a static port I needed to add a redirect uri to the developer console project.\nI made no other changes to the standard quickstart.\nflow = InstalledAppFlow.from_client_secrets_file(\n CREDENTIALS_FILE_PATH, SCOPES)\n creds = flow.run_local_server(port=53911)\n\n"
] | [
1
] | [] | [] | [
"google_api",
"google_api_python_client",
"google_oauth",
"google_sheets_api",
"python"
] | stackoverflow_0074663524_google_api_google_api_python_client_google_oauth_google_sheets_api_python.txt |
Q:
How to lowercase selected item in a list
I have
x = ['AA', 'BB', 'CC']
and I want to lower case only 'BB'.
A:
To transform a string to lowercase, you use
string.lower()
To answer your question, use
x[1] = x[1].lower()
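If you'd rather match by value than by index, here is a small sketch with a list comprehension (an illustration, not part of the question):
x = ['AA', 'BB', 'CC']
x = [item.lower() if item == 'BB' else item for item in x]
print(x)  # ['AA', 'bb', 'CC']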
A:
Perhaps try:
x = ['AA', 'BB', 'CC']
Str = x[1]
print(Str.lower())

# 0 = 'AA', 1 = 'BB', 2 = 'CC'
Use 0, 1, or 2 with x[NUMBER].
I hope this works for you!!
EDIT: or you can use x[num] directly instead of putting it in a variable.
| How to lowercase selected item in a list | I have
x= ['AA', 'BB', 'CC']
and I want to lower case only 'BB'.
| [
"To transform a string to lowercase, you use\nstring.lower()\n\nTo answer your question, use\nx[1] = x[1].lower()\n\n",
"perhaps try\n`\nx = ['AA', 'BB', 'CC']\nStr = x[1]\nprint(Str.lower())\n`\n#0 = AA, 1 = BB, 2 = CC\nuse 0 1 or 2 with x[NUMBER]\nI hope this works for you!! \nEDIT: or you can use x[num] directly instead of putting it in a variable\n"
] | [
0,
0
] | [] | [] | [
"lowercase",
"python",
"string"
] | stackoverflow_0074665441_lowercase_python_string.txt |
Q:
RuntimeWarning: coroutine 'setup' was never awaited setup(self)
I am trying to create a discord bot, but I am caught in an unending loop of problems. In every video I've watched, it is recommended that you write the cog loading function like this:
async def load_auto():
    for filename in os.listdir('./cogs'):
        if filename.endswith('.py'):
            await bot.load_extension(f'cogs.{filename[:-3]}')
but every time I use this form of cog loading it gives me this error:
C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited
  setup(self)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
  File "c:/Users/galan/Desktop/new sambot/main.py", line 118, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 616, in run_until_complete
    return future.result()
  File "c:/Users/galan/Desktop/new sambot/main.py", line 115, in main
    await load_auto()
  File "c:/Users/galan/Desktop/new sambot/main.py", line 16, in load_auto
    await bot.load_extension(f'cogs.{filename[:-3]}')
TypeError: object NoneType can't be used in 'await' expression
I've tried not awaiting bot.load_extension, which resulted in it giving:
C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited
setup(self)
While this may look better, it still does not load the cogs, and it doesn't match others' code, where it seemed to be working.
Here is a part of my main.py file:
from discord.ext import commands
import discord
import os
import asyncio
intents = discord.Intents.all()
intents.members= True
sambot_var = ('sambot', 'sambot!', 'sambot?')
bot = commands.Bot(command_prefix='$', intents=intents)
async def load_auto():
    for filename in os.listdir('./cogs'):
        if filename.endswith('.py'):
            bot.load_extension(f'cogs.{filename[:-3]}')

async def main():
    await load_auto()
    await bot.start('token')

asyncio.get_event_loop().run_until_complete(main())
asyncio.run(main())
and one of my cogs:
from discord.ext import commands
import sys
import discord
import random
sys.path.append("..")
import datetime
import pytz
import re
import asyncio

class Personality(commands.Cog):
    def __init__(self, client):
        self.client = client

    ...

async def setup(bot):
    await bot.add_cog(Personality(bot))
My questions are:
Does await bot.load_extension(cogs) actually not need to be awaited?
Where did I go wrong?
What is the solution?
EDIT: The problem was that I had the old discord package ffs. My code worked fine, it just didn't work fine on my device. The problem of await bot.load_extension(cog) was caused by my outdated package.
It's always the simplest answer. Either way, thank you for answering my questions.
A:
In discord.py 1.x, add_cog is not an async function or coroutine; it's a normal function. This is easily fixable by removing the await statement.
Before
await bot.add_cog(Personality(bot))
After
bot.add_cog(Personality(bot))
Edit.
Sorry, I forgot to answer the question: does await bot.load_extension(cogs) actually not need to be awaited?
The answer to that question is that, as of discord.py 2, load_extension was changed to an async function because they might need it to be one in the future. So, for now, you have to await it.
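If you're unsure which behaviour applies, a quick sketch to check the installed version:
import discord

# 1.x: load_extension/add_cog are plain functions; 2.x: they are coroutines
# and must be awaited.
print(discord.__version__)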
A:
Why set intents.members = True separately when you already enable all intents with intents = discord.Intents.all()? That line of code can be written as follows:
#main.py
...
bot = commands.Bot(command_prefix='$', intents=discord.Intents.all())
...
#main.py
async def load_cogs():
    for filename in os.listdir('./cogs'):
        if filename.endswith('.py'):
            await bot.load_extension(f'cogs.{filename[:-3]}')

async def main():
    await load_cogs()
    await bot.start('token')

if __name__ == '__main__':
    asyncio.run(main())
# in some cog
class MyCog(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

# setup must live at module level, outside the class, for load_extension to find it
async def setup(bot):
    await bot.add_cog(MyCog(bot))
Important: before pasting someone else's code, check it for indentation and spacing so that you don't end up back on the forum with avoidable questions.
| RuntimeWarning: coroutine 'setup' was never awaited setup(self) | I am trying to create a discord bot, but I am caught in an unending loop of problems. In every video I've watched, it is recommended that you write the cog loading function as thus:
async def load_auto():
for filename in os.listdir('./cogs'):
if filename.endswith('.py'):
await bot.load_extension(f'cogs.{filename[:-3]}')
but every time I use this form of cog loading it gives me this error:
C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited setup(self)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last): File "c:/Users/galan/Desktop/new sambot/main.py", line 118, in <module> asyncio.get_event_loop().run_until_complete(main())
File "C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 616, in run_until_completereturn future.result()
File "c:/Users/galan/Desktop/new sambot/main.py", line 115, in main await load_auto() File "c:/Users/galan/Desktop/new sambot/main.py", line 16, in load_auto await bot.load_extension(f'cogs.{filename[:-3]}')
TypeError: object NoneType can't be used in 'await' expression
I've tried not awaiting the bot.load_extension which resulted in it giving a
C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited
setup(self)
while this may look better, it still does not load the cogs. and it doesn't follow others' code where it seemed like it was working.
Here is a part of my main.py file:
from discord.ext import commands
import discord
import os
import asyncio
intents = discord.Intents.all()
intents.members= True
sambot_var = ('sambot', 'sambot!', 'sambot?')
bot = commands.Bot(command_prefix='$', intents=intents)
async def load_auto():
for filename in os.listdir('./cogs'):
if filename.endswith('.py'):
bot.load_extension(f'cogs.{filename[:-3]}')
async def main():
await load_auto()
await bot.start('token')
asyncio.get_event_loop().run_until_complete(main())
asyncio.run(main())
and one of my cogs:
from discord.ext
import commands
import sys
import discord
import random
sys.path.append("..")
import datetime
import pytz
import re
import asyncio
class Personality(commands.Cog):
def init(self, client): self.client = client
...
async def setup(bot): await bot.add_cog(Personality(bot))
My questions are:
Does await bot.load_extension(cogs) actually not need to be awaited?
Where did I go wrong?
What is the solution?
EDIT: The problem was that I had the old discord package ffs. My code worked fine, it just didn't work fine on my device. The problem of await bot.load_extension(cog) was caused by my outdated package.
It's always the most simplest answer. Either way, thank you for answering my questions.
| [
"The add_cog is not an async function or coroutine. It's a normal function. This is easily fixable by removing the await statement.\nBefore\nawait bot.add_cog(Personality(bot))\n\nAfter\nbot.add_cog(Personality(bot))\n\nEdit.\nSorry, I forgot to answer the question, Does await bot.load_extension(cogs) actually not need to be awaited?\nThe answer to that question is that, as of discord.py 2, the load_extension was changed to an async function because they might need it in the future. So, for now, you have to await it.\n",
"What are your intention again intents.members= True, although you already use all the intentions that are intents = discord.Intents.all(). Well, this line of code can be entered as follows:\n#main.py\n...\nbot = commands.Bot(command_prefix='$', intents=discord.Intents.all())\n...\n\n#main.py\nasync def load_cogs():\n for filename in os.listdir('./cogs'):\n if filename.endswith('.py'):\n await bot.load_extension(f'cogs.{filename[:-3]}')\n\n\nasync def main():\n await load_cogs()\n await bot.start('token')\n\nif __name__ == '__main__':\n asyncio.run(main())\n\n# in some cog\nclass MyCog(commands.Cog):\n def __init__(self, bot):\n self.bot = bot\n\n async def setup(bot):\n await bot.add_cog(MyCog(bot))\n\nIMPORTANTLY! Before inserting someone else's code, check it for indentations and spaces so that you later not go to the forum with stupid questions.\n"
] | [
0,
0
] | [] | [] | [
"bots",
"discord",
"discord.py",
"python"
] | stackoverflow_0074664982_bots_discord_discord.py_python.txt |
Q:
Pretty JSON Formatting in IPython Notebook
Is there an existing way to get json.dumps() output to appear as "pretty" formatted JSON inside ipython notebook?
A:
json.dumps has an indent argument; printing the result should be enough:
print(json.dumps(obj, indent=2))
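For example (sort_keys is optional, but it makes the output stable):
import json

obj = {"b": 1, "a": [2, 3]}
print(json.dumps(obj, indent=2, sort_keys=True))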
A:
This might be slightly different than what OP was asking for, but you can use IPython.display.JSON to interactively view a JSON/dict object.
from IPython.display import JSON
JSON({'a': [1, 2, 3, 4,], 'b': {'inner1': 'helloworld', 'inner2': 'foobar'}})
Edit: This works in Hydrogen and JupyterLab, but not in Jupyter Notebook or in IPython terminal.
Inside Hydrogen:
A:
import uuid
from IPython.display import display_javascript, display_html, display
import json
class RenderJSON(object):
    def __init__(self, json_data):
        if isinstance(json_data, dict):
            self.json_str = json.dumps(json_data)
        else:
            self.json_str = json_data
        self.uuid = str(uuid.uuid4())

    def _ipython_display_(self):
        display_html('<div id="{}" style="height: 600px; width:100%;"></div>'.format(self.uuid), raw=True)
        display_javascript("""
        require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() {
          document.getElementById('%s').appendChild(renderjson(%s))
        });
        """ % (self.uuid, self.json_str), raw=True)
To output your data in collapsible format:
RenderJSON(your_json)
Copy pasted from here: https://www.reddit.com/r/IPython/comments/34t4m7/lpt_print_json_in_collapsible_format_in_ipython/
Github: https://github.com/caldwell/renderjson
A:
I am just adding the expanded argument to @Kyle Barron's answer:
from IPython.display import JSON
JSON(json_object, expanded=True)
A:
I found this page looking for a way to eliminate the literal \ns in the output. We're doing a coding interview using Jupyter and I wanted a way to display the result of a function real perty like. My version of Jupyter (4.1.0) doesn't render them as actual line breaks. The solution I produced is (I sort of hope this is not the best way to do it but...)
import json

output = json.dumps(obj, indent=2)

line_list = output.split("\n")  # split the string into individual lines

# Now that our output is a list of strings, leverage print's automatic newline
for line in line_list:
    print(line)
I hope this helps someone!
A:
For a Jupyter notebook, maybe it is enough to generate a link to open in a new tab (with the JSON viewer of Firefox):
import json
from IPython.display import Markdown

def jsonviewer(d):
    f = open('file.json', 'w')
    json.dump(d, f)
    f.close()
    print('open in firefox new tab:')
    return Markdown('[file.json](./file.json)')

jsonviewer('[{"A":1}]')

open in firefox new tab:
file.json
A:
Just an extension to @filmor's answer (https://stackoverflow.com/a/18873131/7018342).
This encodes elements that might not be compatible with json.dumps and also gives a handy function that can be used just like you would use print.
import json
import numpy as np

class NpEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        if isinstance(obj, np.bool_):
            return bool(obj)
        return super(NpEncoder, self).default(obj)

def print_json(json_dict):
    print(json.dumps(json_dict, indent=2, cls=NpEncoder))
Usage:
json_dict = {"Name":{"First Name": "Lorem", "Last Name": "Ipsum"}, "Age":26}
print_json(json_dict)
>>>
{
"Name": {
"First Name": "Lorem",
"Last Name": "Ipsum"
},
"Age": 26
}
A:
For some uses, indent should do it:
print(json.dumps(parsed, indent=2))
A JSON structure is basically a tree structure.
While trying to find something fancier, I came across this nice article depicting other forms of trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/.
It has some interactive trees and even comes with some code, including a link to this question and the collapsing tree from Shankar ARUL.
Other samples include using plotly. Here is the code example from plotly:
import plotly.express as px

fig = px.treemap(
    names = ["Eve", "Cain", "Seth", "Enos", "Noam", "Abel", "Awan", "Enoch", "Azura"],
    parents = ["", "Eve", "Eve", "Seth", "Seth", "Eve", "Eve", "Awan", "Eve"]
)
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25))
fig.show()
And using treelib; on that note, this GitHub repo also provides nice visualizations. Here is one example using treelib:
from treelib import Tree
country_tree = Tree()
# Create a root node
country_tree.create_node("Country", "countries")
# Group by country
for country, regions in wards_df.head(5).groupby(["CTRY17NM", "CTRY17CD"]):
# Generate a node for each country
country_tree.create_node(country[0], country[1], parent="countries")
# Group by region
for region, las in regions.groupby(["GOR10NM", "GOR10CD"]):
# Generate a node for each region
country_tree.create_node(region[0], region[1], parent=country[1])
# Group by local authority
for la, wards in las.groupby(['LAD17NM', 'LAD17CD']):
# Create a node for each local authority
country_tree.create_node(la[0], la[1], parent=region[1])
for ward, _ in wards.groupby(['WD17NM', 'WD17CD']):
# Create a leaf node for each ward
country_tree.create_node(ward[0], ward[1], parent=la[1])
# Output the hierarchical data
country_tree.show()
I have, based on this, created a function to convert json to a tree:
from treelib import Node, Tree, node

def json_2_tree(o, parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'):
    if tree is None:
        tree = Tree()
        root_id = counter_byref[0]
        if verbose:
            print(f"tree.create_node({'+'}, {root_id})")
        tree.create_node('+', root_id)
        counter_byref[0] += 1
        parent_id = root_id
    if type(o) == dict:
        for k, v in o.items():
            this_id = counter_byref[0]
            if verbose:
                print(f"tree.create_node({str(k)}, {this_id}, parent={parent_id})")
            tree.create_node(str(k), this_id, parent=parent_id)
            counter_byref[0] += 1
            json_2_tree(v, parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)
    elif type(o) == list:
        if listsNodeSymbol is not None:
            if verbose:
                print(f"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})")
            tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id)
            parent_id = counter_byref[0]
            counter_byref[0] += 1
        for i in o:
            json_2_tree(i, parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)
    else:  # node
        if verbose:
            print(f"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})")
        tree.create_node(str(o), counter_byref[0], parent=parent_id)
        counter_byref[0] += 1
    return tree
Then for example:
import json
json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),verbose=False,listsNodeSymbol='+').show()
gives:
+
├── 2
│   └── 3
└── 4
    └── +
        ├── 5
        └── 6
While
json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),listsNodeSymbol=None).show()
Gives
+
├── 2
│   └── 3
└── 4
    ├── 5
    └── 6
| Pretty JSON Formatting in IPython Notebook | Is there an existing way to get json.dumps() output to appear as "pretty" formatted JSON inside ipython notebook?
| [
"json.dumps has an indent argument, printing the result should be enough:\nprint(json.dumps(obj, indent=2))\n\n",
"This might be slightly different than what OP was asking for, but you can do use IPython.display.JSON to interactively view a JSON/dict object.\nfrom IPython.display import JSON\nJSON({'a': [1, 2, 3, 4,], 'b': {'inner1': 'helloworld', 'inner2': 'foobar'}})\n\nEdit: This works in Hydrogen and JupyterLab, but not in Jupyter Notebook or in IPython terminal.\nInside Hydrogen:\n\n\n",
"import uuid\nfrom IPython.display import display_javascript, display_html, display\nimport json\n\nclass RenderJSON(object):\n def __init__(self, json_data):\n if isinstance(json_data, dict):\n self.json_str = json.dumps(json_data)\n else:\n self.json_str = json_data\n self.uuid = str(uuid.uuid4())\n\n def _ipython_display_(self):\n display_html('<div id=\"{}\" style=\"height: 600px; width:100%;\"></div>'.format(self.uuid), raw=True)\n display_javascript(\"\"\"\n require([\"https://rawgit.com/caldwell/renderjson/master/renderjson.js\"], function() {\n document.getElementById('%s').appendChild(renderjson(%s))\n });\n \"\"\" % (self.uuid, self.json_str), raw=True)\n\nTo ouput your data in collapsible format:\nRenderJSON(your_json)\n\n\nCopy pasted from here: https://www.reddit.com/r/IPython/comments/34t4m7/lpt_print_json_in_collapsible_format_in_ipython/\nGithub: https://github.com/caldwell/renderjson\n",
"I am just adding the expanded variable to @Kyle Barron answer:\nfrom IPython.display import JSON\nJSON(json_object, expanded=True)\n\n",
"I found this page looking for a way to eliminate the literal \\ns in the output. We're doing a coding interview using Jupyter and I wanted a way to display the result of a function real perty like. My version of Jupyter (4.1.0) doesn't render them as actual line breaks. The solution I produced is (I sort of hope this is not the best way to do it but...)\nimport json\n\noutput = json.dumps(obj, indent=2)\n\nline_list = output.split(\"\\n\") # Sort of line replacing \"\\n\" with a new line\n\n# Now that our obj is a list of strings leverage print's automatic newline\nfor line in line_list:\n print line\n\nI hope this helps someone!\n",
"For Jupyter notebook, may be is enough to generate the link to open in a new tab (with the JSON viewer of firefox):\nfrom IPython.display import Markdown\ndef jsonviewer(d):\n f=open('file.json','w')\n json.dump(d,f)\n f.close()\n print('open in firefox new tab:')\n return Markdown('[file.json](./file.json)')\n\njsonviewer('[{\"A\":1}]')\n'open in firefox new tab:\n\nfile.json\n",
"Just an extension to @filmor answer(https://stackoverflow.com/a/18873131/7018342).\nThis encodes elements that might not compatible with json.dumps and also gives a handy function that can be used just like you would use print.\nimport json\nclass NpEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, np.integer):\n return int(obj)\n if isinstance(obj, np.floating):\n return float(obj)\n if isinstance(obj, np.ndarray):\n return obj.tolist()\n if isinstance(obj, np.bool_):\n return bool(obj)\n return super(NpEncoder, self).default(obj)\n\ndef print_json(json_dict):\n print(json.dumps(json_dict, indent=2, cls=NpEncoder))\n\nUsage:\njson_dict = {\"Name\":{\"First Name\": \"Lorem\", \"Last Name\": \"Ipsum\"}, \"Age\":26}\nprint_json(json_dict)\n>>>\n{\n \"Name\": {\n \"First Name\": \"Lorem\",\n \"Last Name\": \"Ipsum\"\n },\n \"Age\": 26\n}\n\n",
"For some uses, indent should make it:\nprint(json.dumps(parsed, indent=2))\n\nA Json structure is basically tree structure.\nWhile trying to find something fancier, I came across this nice paper depicting other forms of nice trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/.\nIt has some interactive trees and even comes with some code including linking to this question and the collapsing tree from Shankar ARUL.\nOther samples include using plotly Here is the code example from plotly:\nimport plotly.express as px\nfig = px.treemap(\n names = [\"Eve\",\"Cain\", \"Seth\", \"Enos\", \"Noam\", \"Abel\", \"Awan\", \"Enoch\", \"Azura\"],\n parents = [\"\", \"Eve\", \"Eve\", \"Seth\", \"Seth\", \"Eve\", \"Eve\", \"Awan\", \"Eve\"]\n)\nfig.update_traces(root_color=\"lightgrey\")\nfig.update_layout(margin = dict(t=50, l=25, r=25, b=25))\nfig.show()\n\n\n\nAnd using treelib. On that note, This github also provides nice visualizations. Here is one example using treelib:\n#%pip install treelib\nfrom treelib import Tree\n\ncountry_tree = Tree()\n# Create a root node\ncountry_tree.create_node(\"Country\", \"countries\")\n\n# Group by country\nfor country, regions in wards_df.head(5).groupby([\"CTRY17NM\", \"CTRY17CD\"]):\n # Generate a node for each country\n country_tree.create_node(country[0], country[1], parent=\"countries\")\n # Group by region\n for region, las in regions.groupby([\"GOR10NM\", \"GOR10CD\"]):\n # Generate a node for each region\n country_tree.create_node(region[0], region[1], parent=country[1])\n # Group by local authority\n for la, wards in las.groupby(['LAD17NM', 'LAD17CD']):\n # Create a node for each local authority\n country_tree.create_node(la[0], la[1], parent=region[1])\n for ward, _ in wards.groupby(['WD17NM', 'WD17CD']):\n # Create a leaf node for each ward\n country_tree.create_node(ward[0], ward[1], parent=la[1])\n\n# Output the hierarchical data\ncountry_tree.show()\n\n\nI have, based on this, created a function to convert json to a tree:\nfrom treelib import Node, Tree, node\ndef json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'):\n if tree is None:\n tree = Tree()\n root_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({'+'}, {root_id})\")\n tree.create_node('+', root_id)\n counter_byref[0] += 1\n parent_id = root_id\n if type(o) == dict:\n for k,v in o.items():\n this_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({str(k)}, {this_id}, parent={parent_id})\")\n tree.create_node(str(k), this_id, parent=parent_id)\n counter_byref[0] += 1\n json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)\n elif type(o) == list:\n if listsNodeSymbol is not None:\n if verbose:\n print(f\"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id)\n parent_id=counter_byref[0]\n counter_byref[0] += 1 \n for i in o:\n json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol)\n else: #node\n if verbose:\n print(f\"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(str(o), counter_byref[0], parent=parent_id)\n counter_byref[0] += 1\n return tree\n\nThen for example:\nimport json\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 
6]}'),verbose=False,listsNodeSymbol='+').show() \n\ngives:\n+\n├── 2\n│ └── 3\n└── 4\n └── +\n ├── 5\n └── 6\n\nWhile\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),listsNodeSymbol=None).show() \n\nGives\n+\n├── 2\n│ └── 3\n└── 4\n ├── 5\n └── 6\n\n"
] | [
101,
74,
39,
7,
3,
0,
0,
0
] | [] | [] | [
"ipython_notebook",
"json",
"python"
] | stackoverflow_0018873066_ipython_notebook_json_python.txt |
Q:
'NoneType' object is not callable when trying to do a histogram on a dataframe
rfm = df3.groupby('CustomerID').agg({
    'InvoiceNo': lambda num: len(num),
    'TotalSum': lambda price: price.sum(),
    'InvoiceDay': lambda x: ref_date - x.max()})

rfm.rename(columns={
    'InvoiceNo': 'Frequency',
    'TotalSum': 'Monetary',
    'InvoiceDay': 'Recency'
}, inplace=True)

rfm['Recency'] = rfm['Recency'].dt.days

rfm.hist()
plt.show()
It keeps showing this error, and I don't know what I'm doing wrong here:
TypeError: 'NoneType' object is not callable
I was expecting a histogram plot of the 3 different variables. If I don't have rfm.hist(column='Recency'), it still shows the same error. What is the issue here?
These are the dtypes:
Frequency int64
Monetary float64
Recency int64
Output exceeds the size limit. Open the full output data in a text editor
TypeError Traceback (most recent call last)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/getlimits.py:459, in finfo.__new__(cls, dtype)
458 try:
--> 459 dtype = numeric.dtype(dtype)
460 except TypeError:
461 # In case a float instance was given
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/Users/Downloads/Unclassified Learning/Unclassified Learning.ipynb Cell 25 in <cell line: 2>()
1 rfm.hist()
----> 2 plt.show()
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/matplotlib/pyplot.py:389, in show(*args, **kwargs)
345 """
346 Display all open figures.
347
(...)
386 explicitly there.
387 """
388 _warn_if_gui_out_of_main_thread()
...
--> 462 dtype = numeric.dtype(type(dtype))
464 obj = cls._finfo_cache.get(dtype, None)
465 if obj is not None:
TypeError: 'NoneType' object is not callable
A:
It's hard to tell exactly where this happens without the whole error log, but that error is telling you that you are invoking a method on a None value, meaning that something returns None and you are still trying to use it.
To debug, I recommend checking the pandas DataFrame first, printing out rfm.head().
Secondly, if this is happening while calling hist(), some of the underlying data is probably None/NaN, and it might be worth investing some time into cleaning up those rows or filling them with something.
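A sketch of that check, reusing the rfm frame from the question:
import matplotlib.pyplot as plt

print(rfm.head())
print(rfm.isna().sum())   # how many missing values per column?

rfm.dropna().hist()       # or rfm.fillna(0).hist()
plt.show()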
| 'NoneType' object is not callable when tryna do a histogram on datafram | rfm = df3.groupby('CustomerID').agg({
'InvoiceNo' : lambda num: len(num),
'TotalSum' : lambda price: price.sum(),
'InvoiceDay': lambda x: ref_date- x.max()})
rfm.rename(columns={
'InvoiceNo' : 'Frequency',
'TotalSum' : 'Monetary',
'InvoiceDay': 'Recency'
}, inplace=True)
rfm['Recency'] = rfm['Recency'].dt.days
rfm.hist()
plt.show()
It keeps showing this error, I don't know what I'm doing wrong here:
TypeError: 'NoneType' object is not callable
I was expecting a histogram plot of the 3 different variables. If I don't have rfm.hist(column= 'Recency'), it still shows the same error. What is the issue here?
These are the dtypes:
Frequency int64
Monetary float64
Recency int64
Output exceeds the size limit. Open the full output data in a text editor
TypeError Traceback (most recent call last)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/getlimits.py:459, in finfo.new(cls, dtype)
458 try:
--> 459 dtype = numeric.dtype(dtype)
460 except TypeError:
461 # In case a float instance was given
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/Users/Downloads/Unclassified Learning/Unclassified Learning.ipynb Cell 25 in <cell line: 2>()
1 rfm.hist()
----> 2 plt.show()
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/matplotlib/pyplot.py:389, in show(*args, **kwargs)
345 """
346 Display all open figures.
347
(...)
386 explicitly there.
387 """
388 _warn_if_gui_out_of_main_thread()
...
--> 462 dtype = numeric.dtype(type(dtype))
464 obj = cls._finfo_cache.get(dtype, None)
465 if obj is not None:
TypeError: 'NoneType' object is not callable
| [
"Still trying to figure out what moment it happens as we need the whole error log. But that error is trying to tell you that you are invoking a method on a None type. Meaning that some of the attributes return None, and you are still trying to access them.\nTo debug, recommend checking the pandas DataFrame first, printing out rfm.head().\nSecondly, if this is happening while calling hist(), probably some of the underlying data is None, and might be worth investing some time into cleaning up these None rows, or filling them up with something.\n"
] | [
0
] | [] | [] | [
"dataframe",
"python"
] | stackoverflow_0074665665_dataframe_python.txt |
Q:
Change Django Default Language
I've been developing a web application in English, and now I want to change the default language to German.
I tried changing the language code and adding the locale directory with all the translations, but Django still shows everything in English. I also want all my table names to be in German along with the content in the templates.
I also tried Locale Middleware and also this repo for a custom middleware but it still doesn't work.
Not to mention, Django changes the default language of the admin panel, but my field and table names remain English.
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'language.DefaultLanguageMiddleware',
    # 'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
LANGUAGE_CODE = 'de'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
LOCALE_PATH = (
    os.path.join(BASE_DIR, 'locale')
)
Here is my locale directory:
This is how I use translation in my templates:
{% load i18n static %}
{% translate "Single User" %}
This is how I have defined my models:
from django.utils.translation import gettext_lazy as _

class Facility(models.Model):
    name = models.CharField(_('Name'), max_length=100, null=True, blank=True)

    class Meta:
        verbose_name_plural = _('Facilities')
A:
Turns out everything is just right, and the only thing that messed things up was a typo in LOCALE_PATHS.
settings.py:
LOCALE_PATHS = (  # notice the S which was forgotten
    os.path.join(BASE_DIR, 'locale'),  # trailing comma keeps this a one-element tuple
)
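One more thing worth checking (an assumption on my part, not something from the original answer): the .po files under locale/ also have to be compiled before Django can serve the German strings, usually via django-admin compilemessages, or equivalently from Python:
import django
from django.core.management import call_command

django.setup()  # assumes DJANGO_SETTINGS_MODULE is set and GNU gettext is installed
call_command('compilemessages')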
| Change Django Default Language | [
] | [
0
] | [] | [] | [
"django",
"django_i18n",
"python",
"translation"
] | stackoverflow_0074590212_django_django_i18n_python_translation.txt |
Q:
TypeError: unsupported operand type(s) for -=: 'str' and 'float'
I've tried to write a program which converts decimal to binary and vice versa but when I try 23, it flags line 17 (answer2 -= x) as a type error.
import math
x = 4096
y = ""
z = 10
q = 1
final_answer = 0
answer1 = str(input("Do you want to convert decimal into binary (1) or binary into decimal (2)?"))
if answer1 == "1":
answer2 = input("What number do you want to convert to binary? It can't be larger than 4096")
p = answer2.isdigit()
if p:
for i in range(13):
if int(answer2) >= x:
y = y + "1"
answer2 -= x
else:
y = y + "0"
x /= 2
print(y)
elif not p:
print("That's not a number")
I tried converting answer2 and x to float and int, but the same problem still comes up.
A:
Your variable is still a string when you apply an arithmetic operation to it. You need to convert it to a number first, for example:
answer2 = float(answer2)
Note that isdigit() returns False for strings containing a decimal point, so it will reject float input. This post might help if you get stuck there: Using isdigit for floats?
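If float input should be accepted as well, a common pattern (a sketch, not part of the original answer) is to attempt the conversion and catch the failure instead of pre-checking with isdigit():
raw = input("What number do you want to convert to binary? It can't be larger than 4096")
try:
    answer2 = float(raw)  # accepts '23' as well as '23.0'
except ValueError:
    print("That's not a number")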
A:
The error happens because you are trying to subtract a numeric value from a string: answer2 is still a string when answer2 -= x runs. You need to convert answer2 to an integer before you can subtract x from it.
Here is one way you can fix this error:
import math
x = 4096
y = ""
z = 10
q = 1
final_answer = 0
answer1 = str(input("Do you want to convert decimal into binary (1) or binary into decimal (2)?"))
if answer1 == "1":
answer2 = input("What number do you want to convert to binary? It can't be larger than 4096")
p = answer2.isdigit()
if p:
# Convert the string value of answer2 to an integer
answer2 = int(answer2)
for i in range(13):
if answer2 >= x:
y = y + "1"
answer2 -= x
else:
y = y + "0"
x /= 2
print(y)
elif not p:
print("That's not a number")
| TypeError: unsupported operand type(s) for -=: 'str' and 'float' | [
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074665643_python.txt |
Q:
Python - Can I sort a dictionary by one of the values that is in a list?
How can I have the following dictionary sorted based on a value that is in a list?
Data = {1:["name",2010],2:["name",2005],3:["name",2000]}
sortedDataByYear = {3:["name",2000],2:["name",2005],1:["name",2010]}
I have tried sorted() with a lambda, but something is wrong.
A:
Dictionaries have no sort method, but you can sort their items and work with the result.
For example:
Data = {1: ["name", 2010], 2: ["name", 2005], 3: ["name", 2000]}
sorted(Data.items(), key=lambda x: x[1][1])
This returns a list of (key, value) pairs instead, sorted by the year (index 1 of each value list) in ascending order.
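If you need the result back as a dictionary (as in the expected sortedDataByYear output), one way — assuming Python 3.7+, where dicts preserve insertion order — is:
Data = {1: ["name", 2010], 2: ["name", 2005], 3: ["name", 2000]}
sortedDataByYear = dict(sorted(Data.items(), key=lambda item: item[1][1]))
print(sortedDataByYear)  # {3: ['name', 2000], 2: ['name', 2005], 1: ['name', 2010]}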
| Python - Can I sort a dictionary by one of the values that is in a list? | [
] | [
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0074665953_dictionary_python.txt |
Q:
How to create classes with information from a JSON file
My goal is to send multiple emails with information coming from a JSON file.
What is the best way to loop through the file and create a class instance for each page?
Thanks in advance
This is the JSON data:
{
"object": "list",
"results": [
{
"object": "page",
"id": "2",
"created_time": "2022-12-03T09:15:00.000Z",
"last_edited_time": "2022-12-03T09:53:00.000Z",
"created_by": {
"object": "user",
"id": "2"
},
"last_edited_by": {
"object": "user",
"id": "2"
},
"cover": null,
"icon": null,
"parent": {
"type": "database_id",
"database_id": "2"
},
"archived": false,
"properties": {
"email_sender": {
"id": "CdJY",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "nospam1@gmail.com",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "nospam1@gmail.com",
"href": null
}
]
},
"client": {
"id": "JyHA",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "client2",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "client2",
"href": null
}
]
},
"send_time": {
"id": "PMEC",
"type": "date",
"date": {
"start": "2022-12-09",
"end": null,
"time_zone": null
}
},
"email_receiver": {
"id": "ewjg",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "nospam2@gmail.com",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "nospam2@gmail.com",
"href": null
}
]
},
"text": {
"id": "rGFS",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "A",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "A",
"href": null
}
]
},
"subject": {
"id": "title",
"type": "title",
"title": [
{
"type": "text",
"text": {
"content": "test",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "test",
"href": null
}
]
}
},
"url":
},
{
"object": "page",
"id": "1",
"created_time": "2022-11-13T20:41:00.000Z",
"last_edited_time": "2022-12-03T09:53:00.000Z",
"created_by": {
"object": "user",
"id": "1"
},
"last_edited_by": {
"object": "user",
"id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e"
},
"cover": null,
"icon": null,
"parent": {
"type": "database_id",
"database_id": "1"
},
"archived": false,
"properties": {
"email_sender": {
"id": "CdJY",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "nospam1@gmail.com",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "nospam1@gmail.com",
"href": null
}
]
},
"client": {
"id": "JyHA",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "client1",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "client1",
"href": null
}
]
},
"send_time": {
"id": "PMEC",
"type": "date",
"date": {
"start": "2022-11-14T18:00:00.000+01:00",
"end": null,
"time_zone": null
}
},
"email_receiver": {
"id": "ewjg",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "nospam3@gmail.com",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "nospam3@gmail.com",
"href": null
}
]
},
"text": {
"id": "rGFS",
"type": "rich_text",
"rich_text": [
{
"type": "text",
"text": {
"content": "Lorem ipsum dolor sit amet, ",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "Lorem ipsum dolor sit amet, ",
"href": null
}
]
},
"subject": {
"id": "title",
"type": "title",
"title": [
{
"type": "text",
"text": {
"content": "Automatic email",
"link": null
},
"annotations": {
"bold": false,
"italic": false,
"strikethrough": false,
"underline": false,
"code": false,
"color": "default"
},
"plain_text": "Automatic email",
"href": null
}
]
}
},
"url":
}
],
"next_cursor": null,
"has_more": false,
"type": "page",
"page": {}
}
Thanks to @Tim Roberts, I already have a way to filter the data:
for result in data['results']:
    texttype = result['properties']['email_sender']['type']
    email_sender = result['properties']['email_sender'][texttype][0]['text']['content']
Now I need to find a way to put this information into classes:
Client, Email_Sender, Email_Receiver, Subject, Text
I haven't found a way to do this yet. Thanks in advance!
A:
You can load the JSON with the standard json module, then create a class instance for each page and assign the relevant data to its fields. For example, you could define a class like this:
class Email:
def __init__(self, email_sender, email_receiver, subject, text):
self.email_sender = email_sender
self.email_receiver = email_receiver
self.subject = subject
self.text = text
Then, you can iterate through the JSON data and create an instance of the Email class for each page, assigning the relevant data to the fields.
For example:
for result in data['results']:
    props = result['properties']
    # Each property names the key holding its payload in its own 'type' field
    email_sender = props['email_sender'][props['email_sender']['type']][0]['text']['content']
    email_receiver = props['email_receiver'][props['email_receiver']['type']][0]['text']['content']
    subject = props['subject'][props['subject']['type']][0]['text']['content']
    text = props['text'][props['text']['type']][0]['text']['content']
    email = Email(email_sender, email_receiver, subject, text)
This way, you create one Email instance per page with the relevant data attached.
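For completeness, here is a minimal end-to-end sketch that also captures the client field and collects one instance per page in a list. The helper name rich_text_content and the file name emails.json are illustrative assumptions (and it presumes the JSON on disk is valid — note the url values in the sample above were redacted):
import json

class Email:
    def __init__(self, client, email_sender, email_receiver, subject, text):
        self.client = client
        self.email_sender = email_sender
        self.email_receiver = email_receiver
        self.subject = subject
        self.text = text

def rich_text_content(prop):
    # Each property stores its payload under the key named by its own "type"
    # field ("rich_text" or "title"); the first fragment holds the content.
    return prop[prop['type']][0]['text']['content']

with open('emails.json', encoding='utf-8') as f:  # illustrative file name
    data = json.load(f)

emails = []
for result in data['results']:
    props = result['properties']
    emails.append(Email(
        client=rich_text_content(props['client']),
        email_sender=rich_text_content(props['email_sender']),
        email_receiver=rich_text_content(props['email_receiver']),
        subject=rich_text_content(props['subject']),
        text=rich_text_content(props['text']),
    ))

for email in emails:
    print(email.client, '->', email.email_receiver, ':', email.subject)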
| How to create classes with information from a JSON file | [
] | [
0
] | [] | [] | [
"json",
"python",
"python_3.x"
] | stackoverflow_0074665661_json_python_python_3.x.txt |