Dataset columns:
question_id: int64, 59.5M to 79.6M
creation_date: date (string), 2020-01-01 00:00:00 to 2025-05-14 00:00:00
link: string, length 60 to 163
question: string, length 53 to 28.9k
accepted_answer: string, length 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
79,594,143
2025-4-26
https://stackoverflow.com/questions/79594143/how-do-i-add-a-type-hint-for-a-writeablebuffer-parameter
I'm trying to add a parameter type to the readinto() method declared in a custom class that derives from RawIOBase, like this: from io import RawIOBase class Reader(RawIOBase): def readinto(self, buf: bytearray) -> int: pass # actual implementation omitted But pyright complains: io.py:6:9 - error: Method "readinto" overrides class "_RawIOBase" in an incompatible manner Parameter 2 type mismatch: base parameter is type "WriteableBuffer", override parameter is type "bytearray" "Buffer" is not assignable to "bytearray" (reportIncompatibleMethodOverride) 1 error, 0 warnings, 0 informations How do I fix this? Note: I know I can remove the type hint entirely. I want to assign it the correct type instead. I'm using Python 3.13.3 and pyright 1.1.400.
You need to use the same type definition as your base. You can use the same type alias here by importing it from _typeshed package, provided you put it under a TYPE_CHECKING guard: from io import RawIOBase from typing import TYPE_CHECKING if TYPE_CHECKING: from _typeshed import MaybeNone, WriteableBuffer class Reader(RawIOBase): def readinto(self, buf: WriteableBuffer) -> int | MaybeNone: ... You can count on the _typeshed package existing when running under a type checker, because the type checker brings along that package. This is how pyright knows what the signature for RawIOBase.readinto() is, for example. I added MaybeNone to the above for the same reason, because that's also part of the documented signature. Alternatively, because WriteableBuffer is just a type alias for collections.abc.Buffer, the following also works (replacing MaybeNone with a redefinition): from collections.abc import Buffer from io import RawIOBase from typing import Any, TypeAlias type MaybeNone = Any class Reader(RawIOBase): def readinto(self, buf: Buffer) -> int | MaybeNone: ... See The Any Trick for an explanation as to why MaybeNone is an alias for Any.
2
2
79,594,086
2025-4-26
https://stackoverflow.com/questions/79594086/using-classmethods-class-objects-and-generics-together-with-mypy
I use the following code to create subclasses of class G and pass them the type of model they should produce. from typing import TypeVar, Type class A: @classmethod def model_validate(cls): print('Done') T = TypeVar('T', bound=A) class G[T]: def __init__(self, model: Type[T]): self.model = model def func(self) -> None: print(self.model.model_validate()) G[A](A).func() This works fine, but mypy gives this error: error: "type[T]" has no attribute "model_validate" [attr-defined] What am i doing wrong?
You are mixing up two type variables. T = TypeVar... defines a typevar with bound, then G[T] completely disregards that, creates a new typevar (with no bound) and uses it. Either use old-style generics (playground): from typing import Generic, TypeVar class A: @classmethod def model_validate(cls) -> None: print('Done') T = TypeVar('T', bound=A) class G(Generic[T]): def __init__(self, model: type[T]) -> None: self.model = model def func(self) -> None: print(self.model.model_validate()) G[A](A).func() or new-style PEP695 generics (playground): class A: @classmethod def model_validate(cls) -> None: print('Done') class G[T: A]: def __init__(self, model: type[T]) -> None: self.model = model def func(self) -> None: print(self.model.model_validate()) G[A](A).func() Both are accepted by mypy. Also note that you don't need typing.Type if you're using a non-EOL python version (built-in type should be used instead since 3.9 as shown above).
1
1
79,593,724
2025-4-26
https://stackoverflow.com/questions/79593724/cant-tell-the-difference-between-two-python-n-queens-solutions
Reading up on backtracking led me to a page on geeksforgeeks.org about solutions to the n-queens problem. The first solution is introduced as the "naive approach" that generates all possible permutations and is the least efficient at O(n! * n). The second solution is "Instead of generating all possible permutations, we build the solution incrementally, while doing this we can make sure at each step that the partial solution remains valid. If a conflict occur then we’ll backtrack immediately, this helps in avoiding unnecessary computations." and allegedly is O(n!). Analyzing the code, I simply cannot see the difference between the two (in terms of recursion/backtracking/pruning) apart from differences in comments, some variable names, the way diagonal conflicts are calculated/recorded and some other trivial things that make no difference like using [:] instead of .copy(). First solution (least efficient): #Python program to find all solution of N queen problem #using recursion # Function to check if placement is safe def isSafe(board, currRow, currCol): for i in range(len(board)): placedRow = board[i] placedCol = i + 1 # Check diagonals if abs(placedRow - currRow) == \ abs(placedCol - currCol): return False # Not safe return True # Safe to place # Recursive utility to solve N-Queens def nQueenUtil(col, n, board, res, visited): # If all queens placed, add to res if col > n: res.append(board.copy()) return # Try each row in column for row in range(1, n+1): # If row not used if not visited[row]: # Check safety if isSafe(board, row, col): # Mark row visited[row] = True # Place queen board.append(row) # Recur for next column nQueenUtil(col+1, n, board, res, visited) # Backtrack board.pop() visited[row] = False # Main N-Queen solver def nQueen(n): res = [] board = [] visited = [False] * (n + 1) nQueenUtil(1, n, board, res, visited) return res if __name__ == "__main__": n = 4 res = nQueen(n) for row in res: print(row) Second solution (backtracking with pruning, allegedly more efficient): # Python program to find all solutions of the N-Queens problem # using backtracking and pruning def nQueenUtil(j, n, board, rows, diag1, diag2, res): if j > n: # A solution is found res.append(board[:]) return for i in range(1, n + 1): if not rows[i] and not diag1[i + j] and not diag2[i - j + n]: # Place queen rows[i] = diag1[i + j] = diag2[i - j + n] = True board.append(i) # Recurse to the next column nQueenUtil(j + 1, n, board, rows, diag1, diag2, res) # Remove queen (backtrack) board.pop() rows[i] = diag1[i + j] = diag2[i - j + n] = False def nQueen(n): res = [] board = [] # Rows occupied rows = [False] * (n + 1) # Major diagonals (row + j) and Minor diagonals (row - col + n) diag1 = [False] * (2 * n + 1) diag2 = [False] * (2 * n + 1) # Start solving from the first column nQueenUtil(1, n, board, rows, diag1, diag2, res) return res if __name__ == "__main__": n = 4 res = nQueen(n) for temp in res: print(temp)
The difference is in the diagonal checks. The second, efficient version knows in O(1) time whether a given square sits on any occupied diagonal: it uses the diag1 and diag2 lists, which hold a flag for every diagonal indicating whether it is occupied. The less efficient version has no such data structure; for a given square it iterates over the already placed queens to see whether any of them occupies the same diagonal. This happens in the isSafe function and costs O(n) per checked square. That is what accounts for the difference in the overall time complexities of the two algorithms.
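To make the contrast concrete, here is a minimal sketch of the two safety checks side by side, using the same 1-indexed row/column convention as the code in the question; the function names are illustrative and not part of the original article.

```python
# O(n) check (first version): scan every queen placed so far
def is_safe_scan(board, curr_row, curr_col):
    for i, placed_row in enumerate(board):
        placed_col = i + 1
        if abs(placed_row - curr_row) == abs(placed_col - curr_col):
            return False  # shares a diagonal with an earlier queen
    return True

# O(1) check (second version): look up two precomputed diagonal flags;
# diag1 indexes the (row + col) diagonals, diag2 the (row - col + n) ones.
# The row check works the same way in both versions via a boolean list.
def is_safe_flags(row, col, n, diag1, diag2):
    return not diag1[row + col] and not diag2[row - col + n]
```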
1
1
79,593,938
2025-4-26
https://stackoverflow.com/questions/79593938/while-testing-airflow-task-with-pytest-i-got-an-error
While testing airflow with pytest, i got an Error. # tests/conftest.py import datetime import pytest from airflow.models import DAG @pytest.fixture def test_dag(): return DAG( "test_dag", default_args={ "owner": "airflow", "start_date": datetime.datetime(2025, 4, 5), "end_date": datetime.datetime(2025, 4, 6) }, schedule=datetime.timedelta(days=1) ) # tests/test_instance_context.py import datetime from airflow.models import BaseOperator from airflow.models.dag import DAG from airflow.utils import timezone class SampleDAG(BaseOperator): template_fields = ("_start_date", "_end_date") def __init__(self, start_date, end_date, **kwargs): super().__init__(**kwargs) self._start_date = start_date self._end_date = end_date def execute(self, context): context["ti"].xcom_push(key="start_date", value=self.start_date) context["ti"].xcom_push(key="end_date", value=self.end_date) return context def test_execute(test_dag: DAG): task = SampleDAG( task_id="test", start_date="{{ prev_ds }}", end_date="{{ ds }}", dag=test_dag ) task.run( start_date=test_dag.default_args["start_date"], end_date=test_dag.default_args["end_date"] ) expected_start_date = datetime.datetime(2025, 4, 5, tzinfo=timezone.utc) expected_end_date = datetime.datetime(2025, 4, 6, tzinfo=timezone.utc) assert task.start_date == expected_start_date assert task.end_date == expected_end_date Test code is passed, but I got an issue here. tests/test_instance_context.py [2025-04-26T12:51:18.289+0000] {taskinstance.py:2604} INFO - Dependencies not met for <TaskInstance: test_dag.test manual__2025-04-05T00:00:00+00:00 [failed]>, dependency 'Task Instance State' FAILED: Task is in the 'failed' state. [2025-04-26T12:51:18.303+0000] {taskinstance.py:2604} INFO - Dependencies not met for <TaskInstance: test_dag.test manual__2025-04-06T00:00:00+00:00 [failed]>, dependency 'Task Instance State' FAILED: Task is in the 'failed' state. . I want to test task.run to see difference between task.run and task.execute. when I passed jinja variables, then airflow automatically rendering the variables by run method. So, I want to see prev_ds, ds, start_date, end_date is successfully rendered. But I got an error above..
The error occurs because your SampleDAG operator is failing during execution, which causes subsequent runs to fail due to the task's "failed" state. Let's fix this step by step: Key Issues in Your Code: Attribute Mismatch: You're using self.start_date and self.end_date in execute() but these attributes don't exist (you defined self._start_date and self._end_date). Template Rendering: You're passing Jinja templates ({{ prev_ds }}, {{ ds }}) but not properly rendering them before execution. Task Execution Context: The run() method needs proper context for template rendering. class SampleDAG(BaseOperator): template_fields = ("_start_date", "_end_date") def __init__(self, start_date, end_date, **kwargs): super().__init__(**kwargs) self._start_date = start_date self._end_date = end_date def execute(self, context): # Use the correct attribute names that match __init__ context["ti"].xcom_push(key="start_date", value=self._start_date) context["ti"].xcom_push(key="end_date", value=self._end_date) return context def test_execute(test_dag: DAG): task = SampleDAG( task_id="test", start_date="{{ prev_ds }}", end_date="{{ ds }}", dag=test_dag ) # Create proper execution context execution_date = test_dag.default_args["start_date"] end_date = test_dag.default_args["end_date"] # Run with proper context task.run( start_date=execution_date, end_date=end_date, execution_date=execution_date, # Needed for template rendering run_id=f"test_run_{execution_date.isoformat()}", ignore_first_depends_on_past=True )
1
2
79,593,773
2025-4-26
https://stackoverflow.com/questions/79593773/httpsconnectionpool-error-selenium-while-paste-3000-ids-from-column-in-a-csv-fi
I am using selenium to automate a downloading of a report. for that i need to paste around 3000 ids in a loop for ids around 300 000 into a input field of a webpage and click download button and wait around 40 secs to report to download. And after that click clear button to clear the input field and paste another 3000 values (or ids) into the input field and click download again. Repeat the step till end of all the ids extracted from a column of a dataframe. The ids are separated by comma. To do this manually it takes around 2 seconds to paste the values (exactly 3000) and download and then wait for 40 secs and clear and paste another 3000 ids. Repeat the process again. But while using selenium script after logging into url, closing all the popups and selecting exact option and then while entering (or using input_filed.send_keys(ids) and input_field.send_keys(Keys.ENTER)) there is a timeout error on 120 secs. i dont have 120 secs to wait only for the error to be thrown when it takes only 2 secs manually. the script works fine for ids around 200 but not 3000. i want 3000 values to be pasted in order the process to be fast. Kindly provide a solution. Below is the code which i tried. Please look at the end where is the main problem. from selenium import webdriver #you have to create instance wait from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Chrome() driver.get("MY URL") wait = WebDriverWait(driver, 10) try: close_button = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button.toast-close-button"))) close_button.click() print("Popup Closed") except Exception as e: print("No Popup appeared. or popup closed") print("Error: ", e) try: username_input = wait.until(EC.presence_of_element_located((By.ID, "username"))) username_input.send_keys("[email protected]") print("UserName Entered") except Exception as e: print("Failed to enter Username") print("Error: ", e) try: password_input = wait.until(EC.presence_of_element_located((By.ID, "password"))) password_input.send_keys("some_password") print("Password Entered") except Exception as e: print("Failed to enter Password") print("Error: ",e) try: submit_button = wait.until(EC.element_to_be_clickable((By.ID, "loginSubmit"))) submit_button.click() print("Submit Button Clicked") except Exception as e: print("Failed to click submit button") print("Error: ",e) ids_option= WebDriverWait(driver, 10).until( EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div/div[2]/div[2]/div[2]/div/div/div[1]/div/div/div/div/div[1]/div')) ) ids_option.click() option = WebDriverWait(driver, 10).until( EC.element_to_be_clickable((By.XPATH, '//div[text()="Option for Id selected"]')) ) option.click() input_click_target = WebDriverWait(driver, 10).until( EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div/div[2]/div[2]/div[2]/div/div/div[2]/div/div[1]/div/div/div[1]/div[1]')) ) input_click_target.click() focused_input = WebDriverWait(driver, 10).until( lambda d: d.execute_script("return document.activeElement") ) from selenium.webdriver.common.keys import Keys focused_input.send_keys(id_string) #comma seperated ids focused_input.send_keys(Keys.ENTER) Here it throws the error after some 120 secs. If i use only 15 or 200 ids and enter them it works but not for 3000. 
please provide a faster solution I tried the code mentioned above and expecting downloading of individual csv files for 3000 ids each till the end of the ids.
you're trying to paste 3000 IDs quickly into an input box using Selenium, but it hangs or throws HttpsConnectionPool errors (timeouts after 120 sec) only when IDs are many (like 3000), but works fine for small numbers (like 200). This is very common because: send_keys is very slow for huge text.The browser lags when pasting huge text via simulated keypresses.The field might revalidate after each keystroke (which is slow for thousands of characters). Network-related operations (XHR/fetch triggered on every change) timeout because the browser is busy or frozen. Solution: Use JavaScript injection (bypass slow send_keys) Instead of sending thousands of keystrokes, directly set the value via JavaScript. Here’s the improved solution: # Instead of send_keys driver.execute_script("arguments[0].value = arguments[1];", focused_input, id_string) # If needed, trigger an input event manually driver.execute_script(""" var input = arguments[0]; var lastValue = input.value; input.value = arguments[1]; var event = new Event('input', { bubbles: true }); event.simulated = true; input._valueTracker && input._valueTracker.setValue(lastValue); input.dispatchEvent(event); """, focused_input, id_string) Explanation: arguments[0] is the input element you have. arguments[1] is your comma-separated string of 3000 IDs. Then it dispatches an input event, pretending a user typed it, some apps depend on that for further validation. Modify this part: # Instead of: # focused_input.send_keys(id_string) # focused_input.send_keys(Keys.ENTER) # Use: driver.execute_script("arguments[0].value = arguments[1];", focused_input, id_string) # Optionally trigger input event if needed (for apps that listen to 'input' event) driver.execute_script(""" var input = arguments[0]; var lastValue = input.value; input.value = arguments[1]; var event = new Event('input', { bubbles: true }); event.simulated = true; input._valueTracker && input._valueTracker.setValue(lastValue); input.dispatchEvent(event); """, focused_input, id_string) # Then manually click the download button or submit
1
1
79,593,216
2025-4-25
https://stackoverflow.com/questions/79593216/using-tkinter-colorchooser-erases-button-image-and-disables-button
I am making a simple Python GUI to allow the user to choose a color using Tkinter colorchooser. The user will click a button to open the colorchooser, and I'd like that button to have an image instead of text. However, after you use the colorchooser, the image is deleted from the button and the button is disabled. Any ideas what is causing this failure? If I use text for the button instead of an image, there is no problem using colorchooser. import tkinter as tk from tkinter import colorchooser class MainGUI(tk.Tk): def __init__(self): super().__init__() addImage = r'C:\Documents\Scripting\VisualStudio\ClispiPy\ClispiPy\Images\Add.png' addPhoto = tk.PhotoImage(file = addImage) tk.Button(self, image=addPhoto, command=self.open_sub_gui).pack(pady=20) def open_sub_gui(self): col = colorchooser.askcolor(title='Select Color') return if __name__ == "__main__": main_gui = MainGUI() main_gui.mainloop() Before using colorchooser: After using colorchooser:
The key fix is simply adding self. to store the PhotoImage as an instance variable instead of a local variable. The full code is provided below: import tkinter as tk from tkinter import colorchooser class MainGUI(tk.Tk): def __init__(self): super().__init__() addImage = r'add.png' self.addPhoto = tk.PhotoImage(file=addImage) # Store as instance variable tk.Button(self, image=self.addPhoto, command=self.open_sub_gui).pack(pady=20) def open_sub_gui(self): col = colorchooser.askcolor(title='Select Color') return if __name__ == "__main__": main_gui = MainGUI() main_gui.mainloop() Output: I used my own plus sign image.
1
3
79,592,885
2025-4-25
https://stackoverflow.com/questions/79592885/fully-hide-margins-on-maps-added-to-streamlit
I want my leafmap map to occupy ALL the backgroud or main, without any margin or something to move it up, down, left or right. However, I still have margins at the top and bottom. Also, if I increase the height this creates a Y offset and I don't want that either but a static window fitted to the main or background space. I attach a screenshot of what I mean. On the other hand, I need to add layers from a netcdf, so it's better folium or leafmap? Please help me. Thanks in advance. import streamlit as st import leafmap.foliumap as leafmap st.set_page_config( page_title=None, page_icon=None, layout="wide", initial_sidebar_state="auto", ) st.markdown( """ <style> header {visibility: hidden;} footer {visibility: hidden;} .block-container { padding: 0 !important; margin: 0 !important; max-width: 100% !important; width: 100% !important; } .main { padding: 0 !important; margin: 0 !important; } .css-1aumxhk { margin-top: 0 !important; } </style> """, unsafe_allow_html=True, ) m = leafmap.Map(center=[-0.1807, -78.4678], zoom=7) m.to_streamlit(height=830) Margins
Top margin is created by st.markdown() :) If I use st.html() (without unsafe_allow_html=True,) instead of st.markdown() then top margin disappears. (It seems st.html() was added in version 1.33.0 in Apr 2024 - so it may not be used in older tutorials) As for bottom margin - it needs height: 100vh (as suggested @CamillaGiuliani) but inside <style> and for iframe. It also needs to change iframe into block-object because there is still small gap in iframe like for inline-text. <style> iframe { height: 100vh; display: block; } </style> Full working code: Streamlit version: 1.44.1 import streamlit as st import leafmap.foliumap as leafmap print('Streamlit version:', st.__version__) st.set_page_config( page_title=None, page_icon=None, layout="wide", initial_sidebar_state="auto", ) st.html( """ <style> header {visibility: hidden;} footer {visibility: hidden;} .block-container { padding: 0 !important; margin: 0 !important; max-width: 100% !important; width: 100% !important; } .main { padding: 0 !important; margin: 0 !important; } .css-1aumxhk { margin-top: 0 !important; } iframe { height: 100vh; display: block; } </style> """ ) m = leafmap.Map(center=[-0.1807, -78.4678], zoom=7) m.to_streamlit() # height=830)
1
1
79,592,530
2025-4-25
https://stackoverflow.com/questions/79592530/tkinter-listbox-has-a-shadow-selection-beside-the-proper-selection-how-to-syn
I built and populated a tkinter.Listbox. Now I have events that will select the item at index index. Like so: listbox.selection_clear(0, tk.END) listbox.select_set(index) And it works in that the entry with index index is in fact selected. However, when using 'tab' keys to move to other widgets that also have the power to select items in that listbox, and then returning to the listbox, there is a shadow selection, that appears not to be the anchor (at least, listbox.selection_anchor(index) did not solve this issue for me) on the selection that was active, when I last left focus on the listbox. Using 'up' and 'down' keys will take control of the active selection. However, they will start not at the proper selection (010_adf in below example), but on that shadow (007_adf) that I can only specify closer by providing this screenshot: Fig: The "shadow" in question is around entry 007_adf. The proper selection is 010.adf. How to sync the shadow to the proper selection?
That "shadow" designates the active item in the listbox. Think of it like a cursor in a text widget. You can set it with the activate method. If you want to hide it altogether you can set activestyle='none'. Or, you can set it when you set the selection: ... lb = tk.Listbox(...) ... lb.selection_set(4,6) lb.activate(4) ...
1
3
79,591,814
2025-4-25
https://stackoverflow.com/questions/79591814/why-does-python-disallow-chaining-descriptors-classmethod-and-property-sin
(I know that there are similar questions already answered, but my question focuses more on the reason behind the solution instead of the solution itself). I have been in need of something like a "class property" in Python, and I have searched through existing questions. Some answers provide a workaround, but I cannot understand why python disabled chaining @classmethod and @property. Is there any explanation for this? Also, I've found that all the currently available solutions have limitations, which are listed below. The posts I have read include: An answer which points out that chaining @classmethod and @property has been disabled since Python 3.13 Another solution which defines a customized classproperty descriptor. But this workaround fails to prevent modification. For example, the following code derived from the original answer will not raise an exception when modification over x is attempted. class classproperty(property): def __get__(self, owner_self, owner_cls): return self.fget(owner_cls) def __set__(self, instance, value): raise AttributeError("can't set attribute") class C(object): @classproperty def x(cls): return 1 print(C.x) C.x = 2 print(C.x) # Output: 2 # no exception raised # cannot prevent modification A solution by writing the class property into metaclass. This method successfully prevents attempted modifications, but with this method, access to class variables will only be possible via class, not via instance. class CMeta(type): @property def x(cls): return 1 class C(object, metaclass=CMeta): ... print(C.x) # C.x = 2 # AttributeError: property 'x' of 'CMeta' object has no setter # print(C().x) # AttributeError: 'C' object has no attribute 'x' So is there an ultimate way to resolve all the above mentioned problems and allow for a class property implementation satisfying the following two conditions? Can prevent attempted modifications Can be accessed from both class and instance
Class property was deprecated in Python 3.11 (with link to the original issues) because it was found to be impossible for a chain of @classmethod and @property-decorated attribute to be seen by inspection code as an instance of property. If you need a class property to work on an instance then a custom classproperty is the way to go, but to prevent modifications you can override the __setattr__ method of the metaclass so that it raises an exception if the named attribute is found to be an instance of classproperty: class classproperty(property): def __get__(self, owner_self, owner_cls): return self.fget(owner_cls) class CMeta(type): def __setattr__(cls, name, value): if isinstance(vars(cls).get(name), classproperty): raise AttributeError("can't set attribute") super().__setattr__(name, value) class C(metaclass=CMeta): @classproperty def x(cls): return 1 print(C.x) # 1 print(C().x) # 1 C.x = 2 # AttributeError: can't set attribute Demo: https://ideone.com/4Ys77N
3
4
79,592,059
2025-4-25
https://stackoverflow.com/questions/79592059/why-use-super-to-call-functions
I am currently down the rabbit hole trying to understand metaclasses and as such went back to refresh my understanding of super() and type. While refreshing I came across a geeksforgeeks super() article and it had a rather weird example. here is the example class Animals: # Initializing constructor def __init__(self): self.legs = 4 self.domestic = True self.tail = True self.mammals = True def isMammal(self): if self.mammals: print("It is a mammal.") def isDomestic(self): if self.domestic: print("It is a domestic animal.") class Dogs(Animals): def __init__(self): super().__init__() def isMammal(self): super().isMammal() <- Why do this? I understand that when inheriting from a parent class, the functions like __init__ must be called to be initialized, hence the super().__init__() but why do this also in the function? I thought that functions were inherited. Is this so that we can run the parent function along with modifications? Why not call a parent function, within a new function? def mammal_can_dance(self): self.isMammal() print("and can dance")
Yes. In this case, since neither the overriding __init__ nor isMammal does anything (besides calling its parent implementation), they could just be omitted for the same result: class Dogs(Animals): pass Here the parent methods would simply be inherited and do exactly the same thing. But if you want to use the parent's implementation and add something to it, you'd use super and do something in addition: class Foo(Bar): def baz(self): some_list = super().baz() return some_list + ['quux']
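Applied to the classes from the question, a short sketch of the only situation where the override is worth writing, i.e. when it extends the parent's behaviour (the extra print is purely illustrative):

```python
class Dogs(Animals):
    def isMammal(self):
        super().isMammal()        # reuse the parent's check/print...
        print("...and it barks")  # ...then add Dog-specific behaviour
```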
1
4
79,590,117
2025-4-24
https://stackoverflow.com/questions/79590117/dtypewarning-columns-have-mixed-types-error-in-pandas-when-loading-csv
When loading a csv file in pandas I've encountered the bellow error message: DtypeWarning: Columns have mixed types. Specify dtype option on import or set low_memory=False Reading online I found few solutions. One, to set low_memory=False, but I understand that this is not a good practice and it doesn't really resolve the problem. Second solution is to set a data type for each column (or each column with mixed data types): pd.read_csv(csv_path_name, dtype={'first_column': 'str', 'second_column': 'str'}) Again, from what I read, not the ideal solution if we have a big dataset. Third solution - create a converter function. To my understanding this might be the most appropriate solution. I found code which works for me, but I am trying to better understand what is this function exactly doing: def convert_dtype(x): if not x: return '' try: return str(x) except: return '' df = pd.read_csv(csv_path_name, converters={'first_col':convert_dtype, 'second_col':convert_dtype, etc.... } ) Can someone please explain the function code to me? Thanks
if not x checks whether x is falsy; for a read_csv converter that effectively means an empty cell. If it is empty, the function returns '', an empty string with no content. def convert_dtype(x): if not x: return '' try: return str(x) tries to convert x to a string and return it. try: return str(x) If converting x to a string fails, the except branch returns '' instead. except: return '' Basically, if the content of the column is empty from the start or can't be converted to a string, it's discarded and replaced with an empty string. I can't judge whether this is a good approach; it depends on what you are trying to accomplish with your application. Either way, the column will contain only strings afterwards.
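A quick demonstration of the converter's behaviour on a few representative values (the inputs below are illustrative, not taken from the question's CSV):

```python
def convert_dtype(x):
    if not x:
        return ''
    try:
        return str(x)
    except Exception:
        return ''

print(repr(convert_dtype('')))    # ''     -> empty cell stays an empty string
print(repr(convert_dtype('42')))  # '42'   -> already a string, returned as-is
print(repr(convert_dtype(3.14)))  # '3.14' -> non-string values are stringified
```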
1
3
79,591,853
2025-4-25
https://stackoverflow.com/questions/79591853/z-score-on-scipy
I need to find out the Zscore pertaining to 1 specific point, that is, for 1 value of X using Scipy. Below is the manual code: data = [25, 37, 15, 36, 92, 28, 33, 40] mean = sum(data)/len(data) summation = 0 for i in range(0, len(data)): summation += (data[i]-mean)**2 std = ((1/len(data))*summation)**(1/2) Z = (x-mean)/std Next, when I try to do the same with Scipy: Z = stats.zscore(data) I get the output: [-0.61219538 -0.05775428 -1.07422964 -0.10395771 2.48343411 -0.47358511 -0.24256798 0.08085599] Maybe because I am passing only one parameter and that is the data itself. How to get the Z-Score for only one value of X?
This solution might be suitable for you: from scipy import stats import numpy as np data = [25, 37, 15, 36, 92, 28, 33, 40] x = 40 # Please assign a specific value mean = np.mean(data) std = np.std(data, ddof=0) Z = (x - mean) / std print(Z) You might also like this method (scipy.stats.zscore): from scipy import stats data = [25, 37, 15, 36, 92, 28, 33, 40] x = 40 all_zscores = stats.zscore(data, ddof=0) x_index = data.index(x) Z = all_zscores[x_index] print(Z) Output: 0.08085599417810461
2
2
79,591,383
2025-4-24
https://stackoverflow.com/questions/79591383/pandas-fill-in-missing-values-with-an-empty-numpy-array
I have a Pandas Dataframe that I derive from a process like this: df1 = pd.DataFrame({'c1':['A','B','C','D','E'],'c2':[1,2,3,4,5]}) df2 = pd.DataFrame({'c1':['A','B','C'],'c2':[1,2,3],'c3': [np.array((1,2,3,4,5,6)),np.array((6,7,8,9,10,11)),np.full((6,),np.nan)]}) df3 = df1.merge(df2,how='left',on=['c1','c2']) This looks like this: c1 c2 c3 A 1 [1,2,3,4,5,6] B 2 [6,7,8,9,10,11] C 3 [nan,nan,nan,nan,nan,nan] D 4 NaN E 5 NaN In order to run the next step of my code, I need all of the arrays in c3 to have a consistent length. For the inputs coming in that were present in the join (i.e. row 1 through 3) this was already taken care of. However, for the rows that were missing from df2 where I now have only a single NaN value (rows 4 and 5) I need to replace those NaN's with an array of NaN values like in row 3. The problem is that I can't figure out how to do that. I've tried a number of things, starting with the obvious: df3.loc[pd.isnull(df3.c3),'c3'] = np.full((6,),np.nan) Which gave me a ValueError: Must have equal len keys and value when setting with an iterable Fair enough; I understand this error and why python is confused about what I'm trying to do. How about this? for i in df3.index: df3.at[i,'c3'] = np.full((6,),np.nan) if all(pd.isnull(df3.c3)) else df3.c3 That code runs without error but then when I go to print out df3 (or use it) I get this error: RecursionError: maximum recursion depth exceeded That one I don't understand, but moving on, what if I preassign a column full of my NaN arrays and then I can do some logic after the join: for i in df1.index: df1.at[i,'c4'] = np.full((6,),np.nan) This gives me the understandable error: ValueError: setting an array element with a sequence How about another variation of the same idea: df1['c4'] = np.full((6,),np.nan) This one gives a different, also understandable error: ValueError: Length of values (6) does not match length of index (5) Hence, the question: How do I replace values in my dataframe (in this case null values) with an empty numpy array of a given length? For clarity, the desired final result is this: c1 c2 c3 A 1 [1,2,3,4,5,6] B 2 [6,7,8,9,10,11] C 3 [nan,nan,nan,nan,nan,nan] D 4 [nan,nan,nan,nan,nan,nan] E 5 [nan,nan,nan,nan,nan,nan]
A possible solution: # the array with the 6 nan values arr_nan = np.full( df3['c3'].map( lambda x: np.size(x) if isinstance(x, np.ndarray) else 0).max(), np.nan) df3.assign(c3 = df3['c3'].map( lambda y: arr_nan if not isinstance(y, np.ndarray) else y)) This solution first determines the length of the arrays in c3, and then replaces all non-array entries in c3 by the array of 6 np.nan. Output: c1 c2 c3 0 A 1 [1, 2, 3, 4, 5, 6] 1 B 2 [6, 7, 8, 9, 10, 11] 2 C 3 [nan, nan, nan, nan, nan, nan] 3 D 4 [nan, nan, nan, nan, nan, nan] 4 E 5 [nan, nan, nan, nan, nan, nan]
2
1
79,590,866
2025-4-24
https://stackoverflow.com/questions/79590866/how-to-make-a-reactive-event-silent-for-a-specific-function
I have the app at the bottom. Now, I have this preset field where I can select from 3 options (+ the option changed). What I want to be able to set the input_option with the preset field. But I also want to be able to change it manually. If I change the input_option manually the preset field should switch to changed. The problem is, if I set the option with the preset field, this automatically triggers the second function and sets input_preset back to changed. But this should only happen, if I manually change it, not if it is changed by the first reactive function. is that somehow possible? I tried a little with reactive.isolate(), but this does not seem to have any effect. from shiny import App, ui, reactive app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar( ui.input_select("input_preset", "input_preset", choices=["A", "B", "C", "changed"]), ui.input_text("input_option", "input_option", value=''), ) ) ) def server(input, output, session): @reactive.effect @reactive.event(input.input_preset) def _(): if input.input_preset() != 'changed': # with reactive.isolate(): ui.update_text("input_option", value=str(input.input_preset())) @reactive.effect @reactive.event(input.input_option) def _(): ui.update_select("input_preset", selected='changed') app = App(app_ui, server)
Require (req()) input.input_option() != input.input_preset() for doing the update on the preset input: from shiny import App, ui, reactive, req app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar( ui.input_select("input_preset", "input_preset", choices=["A", "B", "C", "changed"]), ui.input_text("input_option", "input_option", value=''), ) ) ) def server(input, output, session): @reactive.effect @reactive.event(input.input_preset) def _(): if input.input_preset() != 'changed': ui.update_text("input_option", value=str(input.input_preset())) @reactive.effect @reactive.event(input.input_option) def _(): req(input.input_option() != input.input_preset()) ui.update_select("input_preset", selected='changed') app = App(app_ui, server)
2
1
79,591,058
2025-4-24
https://stackoverflow.com/questions/79591058/does-this-leapfrog-method-work-for-the-3-body-problem
I have been trying to make a leapfrog integration to document the variation of the hamiltonian over time for the 3BP, but I never really grasped how to implement it using the normal half-step method so I tried using a variation but I'm not sure if it's correct. This is the functions I'm using where the variables p, v, m p = [array([x,y]), array([x,y]), array([x,y])] v = [array([x,y]), array([x,y]), array([x,y])] m = [1, 1, 1] are numpy arrays: from numpy import sum from numpy.linalg import norm from copy import deepcopy def H(p, v, m): #hamiltonian function #sum of kinetic energy for all bodies T = sum([m[i]*norm(v[i])**2/2 for i in range(3)]) #sum of potential energy between all bodies V = -sum([m[0-i]*m[1-i]/norm(p[0-i]-p[1-i]) for i in range(3)]) return T + V def a(p, n, m): #sum of the acceleration arrays for body n and the two other bodies return m[n-1]*(p[n-1]-p[n])/(norm(p[n-1]-p[n])**3) + m[n-2]*(p[n-2]-p[n])/(norm(p[n-2]-p[n])**3) def collision(p): #checks for collisions for i in range(3): if norm(p[0-i] - p[1-i]) < 0.1: return True #leapfrog def Leapfrog(P, V, dt, steps, m): p,v=deepcopy(P),deepcopy(V) H_L = [H(p, v, m)] for t in range(steps): atemp = [a(p, i, m) for i in range(3)] #acceleration at time step i #calc new values for i in range(3): p[i] = p[i] + v[i]*dt + 0.5*atemp[i]*dt**2 for i in range(3): v[i] = v[i] + 0.5*(atemp[i] + a(p, i, m))*dt #acceleration at timestep i+1 if collision(p): return H_L H_L.append(H(p, v, m)) return H_L
Corrected implementation import numpy as np from numpy.linalg import norm from copy import deepcopy def H(p, v, m): # hamiltonian function # sum of kinetic energy for all bodies T = sum([m[i] * norm(v[i])**2 / 2 for i in range(3)]) # sum of potential energy between all unique pairs V = -sum([m[i] * m[j] / norm(p[i] - p[j]) for i in range(3) for j in range(i+1, 3)]) return T + V def acceleration(p, m): """Compute acceleration for all bodies""" a = [np.zeros(2) for _ in range(3)] for i in range(3): for j in range(3): if i != j: r = p[i] - p[j] a[i] -= m[j] * r / (norm(r)**3) return a def collision(p): for i in range(3): for j in range(i+1, 3): if norm(p[i] - p[j]) < 0.1: return True return False def Leapfrog(P, V, dt, steps, m): p, v = deepcopy(P), deepcopy(V) H_L = [H(p, v, m)] # Initial acceleration a_prev = acceleration(p, m) for _ in range(steps): # Half-step velocity update v_half = [v[i] + 0.5 * a_prev[i] * dt for i in range(3)] # Full-step position update p = [p[i] + v_half[i] * dt for i in range(3)] # Compute new acceleration a_new = acceleration(p, m) # Second half-step velocity update v = [v_half[i] + 0.5 * a_new[i] * dt for i in range(3)] if collision(p): return H_L H_L.append(H(p, v, m)) a_prev = a_new return H_L
1
1
79,590,908
2025-4-24
https://stackoverflow.com/questions/79590908/alternative-to-looping-over-one-numpy-axis
I have two numpy arrays a and b such that a.shape[:-1] and b.shape are broadcastable. With this constraint only, I want to calculate an array c according to the following: c = numpy.empty(numpy.broadcast_shapes(a.shape[:-1],b.shape),a.dtype) for i in range(a.shape[-1]): c[...,i] = a[...,i] * b The above code certainly works, but I would like to know if there is a more elegant (and idiomatic) way of doing it.
Use np.newaxis together with ... to add a new axis after b's last axis: c = a * b[..., np.newaxis] which is equivalent to c = a * np.expand_dims(b, -1). You don't need to allocate space for c in advance, by the way.
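A quick shape check of what the indexing does (the shapes below are arbitrary examples, chosen so that a.shape[:-1] and b.shape are broadcastable as in the question):

```python
import numpy as np

a = np.random.rand(4, 5, 3)
b = np.random.rand(5)            # broadcasts against a.shape[:-1] == (4, 5)

c = a * b[..., np.newaxis]       # b gains a trailing length-1 axis
print(b[..., np.newaxis].shape)  # (5, 1)
print(c.shape)                   # (4, 5, 3)
```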
2
3
79,589,564
2025-4-23
https://stackoverflow.com/questions/79589564/is-it-possible-to-limit-attributes-in-a-python-sub-class-using-slots
One use of __slots__ in Python is to disallow new attributes: class Thing: __slots__ = 'a', 'b' thing = Thing() thing.c = 'hello' # error However, this doesn’t work if a class inherits from another slotless class: class Whatever: pass class Thing(Whatever): __slots__ = 'a', 'b' thing = Thing() thing.c = 'hello' # ok That’s because it also inherits the __dict__ from its parent which allows additional attributes. Is there any way of blocking the __dict__ from being inherited? It seems to me that this would allow a sub class to be less generic that its parent, so it’s surprising that it doesn’t work this way naturally. Comment OK, the question arises as whether this would violate the https://en.wikipedia.org/wiki/Liskov_substitution_principle . This, in turn buys into a bigger discussion on inheritance. Most books would, for example, suggest that a circle is an ellipse so a Circle class should inherit from an Ellipse class. However, since a circle is more restrictive, this would violate the Liskov Substitution Principle in that a sub class should not do less than the parent class. In this case, I’m not sure about whether it applies here. Python has no access modifiers, so object data is already over-exposed. Further, without __slots__ Python objects are pretty promiscuous about adding additional attributes, and I’m not sure that’s really part of the intended discussion.
If you are willing to use a metaclass, you can prevent this. Simply insert an empty sequence for '__slots__' in the namespace returned by __prepare__ this is a hook that prepares the namespace that will be used for the class, it defaults to a normal dict(), and we can just force the subclass to have an empty (not unspecified) __slots__ class EmptySlotsMeta(type): @classmethod def __prepare__(metacls, name, bases): return {"__slots__":()} class Foo(metaclass=EmptySlotsMeta): __slots__ = 'x', 'y' def __init__(self, x=0, y=0): self.x = x self.y = y class Bar(Foo): pass class Baz(Foo): __slots__ = ('z',) def __init__(self, x=0, y=0, z=0): super().__init__(x, y) self.z = z Now, in a REPL: >>> foo = Foo() >>> bar = Bar() >>> baz = Baz() >>> foo.z = 99 Traceback (most recent call last): File "<python-input-25>", line 1, in <module> foo.z = 99 ^^^^^ AttributeError: 'Foo' object has no attribute 'z' and no __dict__ for setting new attributes >>> bar.z = 99 Traceback (most recent call last): File "<python-input-26>", line 1, in <module> bar.z = 99 ^^^^^ AttributeError: 'Bar' object has no attribute 'z' and no __dict__ for setting new attributes >>> baz.z = 99 Note, a class with this metaclass can still define their own __slots__ (as in Foo or its second subclass Baz above). That's probably desirable. As with most things in Python, you can put some guardrails around it but it isn't worth trying to make it bulletproof. Note, although it is commonly used this way, __slots__ were not added for this use-case, that is, to restrict attributes. It exists mainly as a memory optimization, since a standard instance carries around a whole dict object. Edit: I undeleted this on the OP's request, but I'm still not sure it answers the exact question, note: class Whatever: pass class Foo(Whatever, metaclass=EmptySlotsMeta): __slots__ = ('x','y') Foo().z = 1 # totally works But it has to work this way, because the superclass will almost certainly create attributes, and it needs a __dict__ to do that unless it had defined slots.
2
1
79,590,476
2025-4-24
https://stackoverflow.com/questions/79590476/darts-and-lightgbm-original-column-names-cannot-be-retrieved-for-feature-import
Problem I am running a LightGBMModel via Darts with some (future) covariates. I want to understand the relevance of the different (lagged) features. In particular, I would like to retrieve the feature importance for the lagged target variable as well as for the covariates using the original column names from the Darts TimeSeries object. In the LightGBM model object after fitting I can only see generic column names ("column_0", "column_1"). How can I connect this to meaningful names (e.g., target_lag_1, target_lag_2, name_of_covariate_lag_1, ...). I want to include several future covariates (e.g., several datetime attributes like day of week with different encodings). It does not matter where the datetime attributes are created (e.g., using pandas, using Darts itself). Minimal reproducable example I adopted the example from the documentation This is the code from the documentation, just setting up the data and fitting the model: from darts.datasets import WeatherDataset from darts.models import LightGBMModel series = WeatherDataset().load() # predicting atmospheric pressure target = series['p (mbar)'][:100] # optionally, use past observed rainfall (pretending to be unknown beyond index 100) past_cov = series['rain (mm)'][:100] # optionally, use future temperatures (pretending this component is a forecast) future_cov = series['T (degC)'][:106] # predict 6 pressure values using the 12 past values of pressure and rainfall, as well as the 6 temperature # values corresponding to the forecasted period model = LightGBMModel( lags=12, lags_past_covariates=12, lags_future_covariates=[0,1,2,3,4,5], output_chunk_length=6, verbose=-1 ) model.fit(target, past_covariates=past_cov, future_covariates=future_cov) Having fitted the model, I now want to analyze the importance of the features. for i, estimator in enumerate(model.model.estimators_): print(f"Target {i} Importance (Gain):") # Access LightGBM booster booster = estimator.booster_ # Get feature names feature_names = booster.feature_name() # Get gain-based importance importance = booster.feature_importance(importance_type='gain') # Create mapping named_importance = dict(zip(feature_names, importance)) print(named_importance) This returns the feature importance for several columns in each estimator. But the feature names are generic names generated by LightGBM ('Column_1', 'Column_2', ...). I do not know how to link this back to the original column names in the TimeSeries object from Darts (e.g., 'rain (mm)', ''T (degC)') with the additional information which lag a feature importance is referring to.
The features that go into the models are available in model.lagged_feature_names. One of the authors addressed feature importances in Issue#1826, doing mostly what you've done, but they also referenced that along with a note about the feature names in Issue#2125.
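Building on the snippet in the question, a hedged sketch of how the generic "Column_i" names can be mapped back to meaningful names, assuming (as the linked issues suggest) that the booster's columns are in the same order as model.lagged_feature_names:

```python
# Assumes model.lagged_feature_names lists the features in the same order
# as the columns LightGBM sees (i.e. Column_0 -> lagged_feature_names[0]).
feature_names = model.lagged_feature_names

for i, estimator in enumerate(model.model.estimators_):
    booster = estimator.booster_
    importance = booster.feature_importance(importance_type="gain")
    named_importance = dict(zip(feature_names, importance))
    print(f"Output step {i} importance (gain):")
    for name, gain in sorted(named_importance.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {gain:.1f}")
```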
2
0
79,590,120
2025-4-24
https://stackoverflow.com/questions/79590120/mypy-complains-about-missing-return-when-the-function-implicitly-returns-none
So, my question is regarding a code that looks like this: def f(condition: bool) -> int | None: if condition: return 1 def g(condition: bool) -> int | None: if condition: return return 1 This is clearly valid python code, the idea is that the function will try to do something, if it succeeds it will return the result, but if it fails will return None My problem is that mypy complains with the following error: 1: error: Missing return statement [return] 7: error: Return value expected [return-value] Now, python will always implicitly return None when a function is missing a return statement, which is what I am using in my function. My question is, is there any option in mypy to (with minimal or no changes to the code) tell it to stop complaining about this valid code? To be clear, I would still want it to complain about this code: def f(condition: bool) -> int: if condition: return 1 def g(condition: bool) -> int: if condition: return return 1
Provide an explicit return statement when the if statement does not match in the first function and explicitly return None in the second function: def f(condition: bool) -> int | None: if condition: return 1 return None def g(condition: bool) -> int | None: if condition: return None return 1 fiddle If you want to silence the first error without changing the code then you can use the --no-warn-no-return option. If you want to locally silence the return error on line 1 and the return-value error on line 7 then you can use: def f(condition: bool) -> int | None: # type: ignore[return] if condition: return 1 def g(condition: bool) -> int | None: if condition: return # type: ignore[return-value] return 1 (But it is less to type to provide explicit None return values.)
2
4
79,590,095
2025-4-24
https://stackoverflow.com/questions/79590095/find-points-in-curve
Can you share some ideas of how to find curve points (orange color marked places) like shown in picture: I've tried this code: result = [] for i in range(len(df)): if i == 0 or df['y'].iloc[i] != df['y'].iloc[i - 1]: result.append(df.iloc[i]) continue if i < len(df) - 1 and df['y'].iloc[i] != df['y'].iloc[i + 1]: result.append(df.iloc[i]) but the problem is, my 'y' isn't always equal between breakpoints and returns too many values. Could you give some advice, how to achieve my goal? Used data for picture: time = np.arange(0, 2200, 100) values = np.array([-0.1, 0, 0.13, 0.27, 0.27, 0.4, 0.27, 0.27, 0.13, 0.13, 0.01, 0.01, -0.13, -0.13, -0.27, -0.4, -0.4, -0.27, -0.13, -0.13, 0, 0]) full_time = np.arange(0, 2200, 1) full_values = np.interp(full_time, time, values)
Here is the full code: import numpy as np import matplotlib.pyplot as plt from scipy.signal import argrelextrema time = np.arange(0, 2200, 100) values = np.array([ -0.1, 0, 0.13, 0.27, 0.27, 0.4, 0.27, 0.27, 0.13, 0.13, 0.01, 0.01, -0.13, -0.13, -0.27, -0.4, -0.4, -0.27, -0.13, -0.13, 0, 0 ]) x = np.arange(0, 2200, 1) y = np.interp(x, time, values) dy = np.gradient(y) d2y = np.gradient(dy) curvature = np.abs(d2y) / (1 + dy**2)**1.5 threshold = 0.0005 corner_indices = np.where(curvature > threshold)[0] filtered_corners = [corner_indices[0]] for idx in corner_indices[1:]: if idx - filtered_corners[-1] > 10: filtered_corners.append(idx) filtered_corners = [0] + filtered_corners + [len(x) - 1] filtered_corners = sorted(set(filtered_corners)) plt.figure(figsize=(12, 6)) plt.plot(x, y, label='Interpolated Curve') plt.plot(x[filtered_corners], y[filtered_corners], 'o', color='orange', label='Corners') plt.xlabel('x') plt.ylabel('y') plt.grid(True) plt.legend() plt.tight_layout() plt.show() Output:
2
2
79,588,998
2025-4-23
https://stackoverflow.com/questions/79588998/how-to-prevent-error-on-shutdown-with-logging-handler-qobject
In order to show logging messages in a PyQt GUI, I'm using a custom logging handler that sends the logRecord as a pyqtSignal. This handler inherits from both QObject and logging.Handler. This works as it should but on shutdown there's this error: File "C:\Program Files\Python313\Lib\logging\__init__.py", line 2242, in shutdown if getattr(h, 'flushOnClose', True): RuntimeError: wrapped C/C++ object of type Log2Qt has been deleted My interpretation is that logging tries to close the handler but because the handler is also a QObject, Qt has already deleted it. But when you connect the aboutToQuit signal to a function that removes the handler from the logger, the error still occurs. Here's a MRE: import logging from PyQt6.QtWidgets import QApplication, QWidget from PyQt6.QtCore import QObject, pyqtSignal class Log2Qt(QObject, logging.Handler): log_forward = pyqtSignal(logging.LogRecord) def emit(self, record): self.log_forward.emit(record) logger = logging.getLogger(__name__) handler = Log2Qt() logger.addHandler(handler) def closing(): # handler.close() logger.removeHandler(handler) print(logger.handlers) app = QApplication([]) app.aboutToQuit.connect(closing) win = QWidget() win.show() app.exec() The print from closing() shows that logger has no more handlers, so why does logging still try to close the handler when it's already removed? And how could you prevent the error from occuring?
This may be caused by the multiple inheritance. I coincidentally just learned that PyQt6 works differently than PyQt5, affecting the way attributes that exist on the Qt side may be accessed. Since by default the flushOnClose attribute does not exist (it's only created for some subclasses, such as MemoryHandler), getattr() causes the attempt to query the inherited attributes, but that is a problem for a destroyed QObject. This does not happen with PyQt5, probably due to the different approach written above. One possibility could be to delete the handler, which will properly destroy the PyQt reference at the correct time, preventing accessing done later in the code: def closing(): global handler logger.removeHandler(handler) del handler This is not very elegant, though, and you also need to be completely sure that no other reference exists anywhere. Besides, it doesn't really address the issue, it only works around it. A more appropriate solution, considering the aspect of the attribute, is to explicitly create it: class Log2Qt(QObject, logging.Handler): log_forward = pyqtSignal(logging.LogRecord) flushOnClose = True def emit(self, record): self.log_forward.emit(record) The value is set as True, which is assumed as default, as shown from the snippet in the traceback. Another solution is to completely avoid the inheritance by creating a simple QObject subclass with the signal, and create an instance of it in the __init__ of the handler, used later in the emit() call: class LogSignaler(QObject): log_forward = pyqtSignal(logging.LogRecord) class Log2Qt(logging.Handler): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.signaler = LogSignaler() def emit(self, record): self.signaler.log_forward.emit(record) The benefit of this approach is that it also works fine with PySide, which, as opposed to PyQt, exposes the emit() function of QObject causing conflicts; see this related post). The two solutions above make it unnecessary to connect to the aboutToQuit signal.
2
0
79,587,363
2025-4-22
https://stackoverflow.com/questions/79587363/format-np-float64-without-leading-digits
I need to format np.float64 floating values without leading digits before the dot, for example -2.40366982307 as -.240366982307E+01, in python. This is to allow me to write in RINEX 3.03 the values with 4X, 4D19.12 formats. I have tried f"{x:.12E}" but it always has a leading 1 for numbers greater than 1. I have also tried np.format_float_positional and np.format_float_scientific but those don't have restrictions on leading digits.
I would go for something custom designed from scratch; note that this does not try to handle every marginal case that can occur. import numpy as np # Format a floating-point number to RINEX format def format_rinex(value): if not np.isfinite(value): return f"{value}" sign = '-' if value < 0 else ' ' abs_value = np.abs(value) exponent = 1 + np.int32(np.floor(np.log10(abs_value))).item() if value != 0 else 0 mantissa = abs_value / (10 ** exponent) mantissa_str = f'{mantissa:.12f}'.replace("0.", ".") return f'{sign}{mantissa_str}E{exponent:+03d}' This formats your example number as -.240366982307E+01.
1
2
79,589,020
2025-4-23
https://stackoverflow.com/questions/79589020/switching-to-iframe-with-rotating-id-selenium
I am trying to access the login iframe from https://www.steelmarketupdate.com/. Previously I was able to access this via XPATH client.switch_to.frame(client.find_element(By.XPATH, "/html/body/div[6]/div/iframe")) However this is no longer working. I found the length of all iframe elements to be 6, and I am unable to access any of these even via the index location. How can I switch to this frame?
Try this: # This relative XPath expression locates the <iframe> element which contains value piano in the ID attribute By.XPATH, "//iframe[contains(@id,'piano')]" or this: # This is an XPath expression which locates the 3rd <iframe> element from top of the DOM By.XPATH, "(//iframe)[3]" Full line of code: client.switch_to.frame(client.find_element(By.XPATH, "//iframe[contains(@id,'piano')]")) UPDATE: Full code with selenium's waits to effectively locate elements. Code explaination in comments. import time from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By driver = webdriver.Chrome() driver.get("https://www.steelmarketupdate.com/") driver.maximize_window() wait = WebDriverWait(driver, 10) # click on Log In button wait.until(EC.element_to_be_clickable((By.XPATH, "(//li[@id='pp-subs-login'])[1]"))).click() # enter inside IFRAME wait.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe[contains(@id,'piano')]"))) # send keys to email and password fields wait.until(EC.element_to_be_clickable((By.NAME, "email"))).send_keys("testEmail") wait.until(EC.element_to_be_clickable((By.XPATH, "//input[@aria-label='password']"))).send_keys("testPassword") # switch to default content driver.switch_to.default_content() # wait for a while to observe the result time.sleep(10) Result:
2
1
79,588,678
2025-4-23
https://stackoverflow.com/questions/79588678/optimum-selection-mechanism-when-choosing-relevant-rows-from-a-dataframe
I have a large Excel spreadsheet. I'm only interested in certain columns. Furthermore, I'm only interested in rows where specific columns meet certain criteria. The following works: import pandas as pd import warnings # this suppresses the openpyxl warning that we're seeing warnings.filterwarnings("ignore", category=UserWarning, module="openpyxl") # These are the columns we're interested in COLUMNS = [ "A", "B", "C" ] # the source file XL = "source.xlsx" # sheet name in the source file SHEET = "Sheet1" # the output file OUTPUT = "target.xlsx" # the sheet name to be used in the output file OUTSHEET = "Sheet1" # This loads the entire spreadsheet into a pandas dataframe df = pd.read_excel(XL, sheet_name=SHEET, usecols=COLUMNS).dropna() # this replaces the original dataframe with rows where A contains "FOO" df = df[df["A"].str.contains(r"\bFOO\b", regex=True)] # now isolate those rows where the B contains "BAR" df = df[df["B"].str.contains(r"\bBAR\b", regex=True)] # output to the new spreadsheet df.to_excel(OUTPUT, sheet_name=OUTSHEET, index=False) This works. However, I can't help thinking that there might be a better way to manage the selection criteria especially if / when they get more complex. Or is it a case of "step-by-step" is good?
You can certainly chain all your commands to avoid using intermediate variables, and combine all filters into a single expression (for example defining the condition in a col:regex dictionary and using loc with numpy.logical_and.reduce): import numpy as np conditions = {'A': r'\bFOO\b', 'B': r'\bBAR\b'} (pd.read_excel(XL, sheet_name=SHEET, usecols=COLUMNS).dropna() .loc[lambda x: np.logical_and.reduce([x[col].str.contains(cond, regex=True) for col, cond in conditions.items()])] .to_excel(OUTPUT, sheet_name=OUTSHEET, index=False) ) Alternative with a custom filtering function: def cust_filter(df): m1 = df['A'].str.contains(r'\bFOO\b', regex=True) m2 = df['B'].str.contains(r'\bBAR\b', regex=True) return df[m1 & m2] (pd.read_excel(XL, sheet_name=SHEET, usecols=COLUMNS).dropna() .pipe(cust_filter) .to_excel(OUTPUT, sheet_name=OUTSHEET, index=False) ) Example input: A B C 0 ABC GHI other 1 FOO BAR other 2 FOO JKL other 3 DEF BAR other Example output: A B C 1 FOO BAR other
1
2
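As a follow-up to the answer above, here is a self-contained sketch of the same mask-combining idea on an in-memory DataFrame, so it can be run without the Excel files (the sample data below is made up):

    import pandas as pd

    df = pd.DataFrame({
        "A": ["ABC", "FOO", "FOO", "DEF"],
        "B": ["GHI", "BAR", "JKL", "BAR"],
        "C": ["other"] * 4,
    })

    # One regex condition per column; rows must satisfy all of them.
    conditions = {"A": r"\bFOO\b", "B": r"\bBAR\b"}
    mask = pd.Series(True, index=df.index)
    for col, pattern in conditions.items():
        mask &= df[col].str.contains(pattern, regex=True)

    print(df[mask])   # only the row with A == "FOO" and B == "BAR"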
79,588,208
2025-4-23
https://stackoverflow.com/questions/79588208/why-does-strftimey-not-yield-a-4-digit-year-for-dates-1000-ad-in-python
I am puzzled by an inconsistency when calling .strftime() for dates which are pre-1000 AD, using Python's datetime module. Take the following example: import datetime old_date = datetime.date(year=33, month=3, day=28) # 28th March 33AD old_date.isoformat() >>> "0033-03-28" # Fine! old_date.strftime("%Y-%m-%d") >>> "33-03-28" # Woah - where did my leading zeros go? # And even worse datetime.datetime.strptime(old_date.strftime("%Y-%m-%d"), "%Y-%m-%d") >>> ... File "<input>", line 1, in <module> File "/usr/lib/python3.12/_strptime.py", line 554, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/_strptime.py", line 333, in _strptime raise ValueError("time data %r does not match format %r" % ValueError: time data '33-03-28' does not match format '%Y-%m-%d' The documentation shows examples of %Y yielding zero-padded years. Even using %G, which is documented to be an ISO-8601 4-digit year, is showing only two digits. This caused a problem in an application where a user can enter a date, and if they type in an old date the exception above would arise when trying to convert a date-string back into a date. Presumably there is something in my local configuration which is causing this, as this seems too obvious to be a bug in Python. I'm using Python 3.12 on Ubuntu 24.04.
This is caused by the implementation of .strftime() in the C library in Linux omitting any leading zeros from %Y and %G. The related issue in CPython's issue tracker is here. Thanks to jonrsharpe's comment for the answer, and highlighting this section of the documentation: "The full set of format codes supported varies across platforms, because Python calls the platform C library’s strftime() function, and platform variations are common."
7
1
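As a sketch of a portable workaround (not part of the accepted answer): avoid the platform strftime for the year and build the string from the date's fields, which round-trips through strptime even for years before 1000 AD:

    import datetime

    old_date = datetime.date(year=33, month=3, day=28)

    s = old_date.isoformat()                 # '0033-03-28', always zero-padded
    s2 = f"{old_date.year:04d}-{old_date.month:02d}-{old_date.day:02d}"

    # strptime's %Y accepts the zero-padded form, so the round trip works
    assert datetime.datetime.strptime(s, "%Y-%m-%d").date() == old_date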
79,587,390
2025-4-22
https://stackoverflow.com/questions/79587390/plt-contour-plots-series-of-lines-instead-of-a-contour-line
I aim to plot a contour plot of flux, but instead of closed contour curves, plt.contour() returns a series of lines with the same height. Psi is defined as a np.array and has a 320 by 200 shape. fig, ax = plt.subplots() r_end = grid_start[0] + grid_step[0] * grid_size[0] z_end = grid_start[1] + grid_step[1] * grid_size[1] X = np.arange(grid_start[0], r_end, grid_step[0]) # shape (200,) Y = np.arange(grid_start[1], z_end, grid_step[1]) # shape (320,) x, y = np.meshgrid(X, Y) CS = ax.contour(x, y, Psi) ax.clabel(CS, fontsize=10) The result looks very weird (see the "all contours" image). I also created contour plots with some levelling for enhanced visibility (see the "contour plot, level 0" and "contour plot, level 25" images). The desired result is something like this (see the "desired Psi Contours" image). If I reshape the given array to (3200,20), the periodicity is lost, but it is still weird (see the "reshaped result" image). How can I fix this issue? Thanks for your help.
I think that your original data is a different size: maybe 3200 by 20, not 320 by 200. (You should check). The datafile does have 320 rows of 200 columns, but I suspect that is an artefact: maybe they were simply limited by line length. If I reshaped to 3200 by 20 then this is what I get: import numpy as np import matplotlib.pyplot as plt NX, NY = 320, 200 psi = np.zeros( ( NX, NY ) ) with open( 'flux_test.txt' , 'r' ) as input: for row, line in enumerate( input ): for col, str in enumerate( line.split() ): psi[row,col] = float( str ) psi = psi.reshape( 3200, 20 ) fig, ax = plt.subplots(figsize=[10, 6]) CS = ax.contour( psi ) plt.show() Interestingly, if I reshape to 200 by 320 then I get the following which, I suppose, might also be right. I don't know what you are trying to get as an end result. Note that both Fortran and Matlab use column-major array storage order - the opposite of Python's numpy - so I still think you need to go back to how the data file was created. You aren't giving us that information.
1
1
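For completeness, a small sketch of the reshapes discussed in the answer; which one is right depends on how flux_test.txt was written (remember that Fortran and Matlab store arrays column-major), so treat these as things to try rather than a definitive fix:

    import numpy as np

    psi = np.loadtxt("flux_test.txt")                  # 320 rows x 200 columns as stored
    psi_a = psi.reshape(3200, 20)                      # row-major reinterpretation
    psi_b = psi.reshape(200, 320)                      # the other shape tried in the answer
    psi_c = psi.ravel().reshape(200, 320, order="F")   # column-major reinterpretation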
79,586,803
2025-4-22
https://stackoverflow.com/questions/79586803/how-can-i-view-the-xpath-of-a-selected-element
I'm checking for a user on a web interface and clicking an edit button in the corresponding table, but the button and table itself are identical and therefore not uniquely identifiable. I can find the text in the table, so my approach was to grab the xpath where that's found, and derive the button's xpath from that. The xpath itself will be variable based on how/what users are on the page so I can't use anything absolute. I can find the text element using this, but I don't know of a way to use that to derive the corresponding button. Feel free to point me towards a different solution for the problem. userLookupElement = driver.find_element(By.XPATH,"//*[contains(text(), 'SeleniumTest')]") e.g. xpath holding text (found using browser ext) /html/body/main/div[2]/div/div/div/div[2]/div/div[2]/div/div/form/div[5]/table/tbody/tr[18]/td[1] corresponding button /html/body/main/div[2]/div/div/div/div[2]/div/div[2]/div/div/form/div[5]/table/tbody/tr[18]/td[3]/label Edit: Included HTML. Attempting to select based on username "SeleniumTest" and find the corresponding "Edit User" button element. <tr ng-repeat="user in userData track by $index" ng-class-odd="'o-table-bkodd'" ng-class-even="'o-table-bkeven'" class="ng-scope o-table-bkeven"> <td colspan="2" class="ng-binding"> SeleniumTest &nbsp; <!-- ngIf: user.IsLockedOut --> </td> <td class="ng-binding"> </td> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Operations') != -1" ng-hide="user.AuthorizedFor.indexOf('Operations') == -1"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Overrides') != -1" ng-hide="user.AuthorizedFor.indexOf('Overrides') == -1"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Reports') != -1" ng-hide="user.AuthorizedFor.indexOf('Reports') == -1"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Setup') != -1" ng-hide="user.AuthorizedFor.indexOf('Setup') == -1" class="ng-hide"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Users') != -1" ng-hide="user.AuthorizedFor.indexOf('Users') == -1" class="ng-hide"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.RemoteAccessEnabled" ng-hide="!user.RemoteAccessEnabled"><i class="fa fa-check"></i></span> </th> <td> <label class="btn btn-sm btn-default texture-blue" data-ng-click="editUser(user)">Edit User</label> </td> </tr>
You can nest element with text in [] to search its parent tr and later you can search label in this parent. '//tr[td[contains(text(), "SeleniumTest")]]//label' You may also use following-sibling::td to search next td '//td[contains(text(), "SeleniumTest")]/following-sibling::td/label' Full working code with example HTML directly in code. html = '''<table> <tr ng-repeat="user in userData track by $index" ng-class-odd="'o-table-bkodd'" ng-class-even="'o-table-bkeven'" class="ng-scope o-table-bkeven"> <td colspan="2" class="ng-binding"> SeleniumTest &nbsp; <!-- ngIf: user.IsLockedOut --> </td> <td class="ng-binding"> </td> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Operations') != -1" ng-hide="user.AuthorizedFor.indexOf('Operations') == -1"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Overrides') != -1" ng-hide="user.AuthorizedFor.indexOf('Overrides') == -1"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Reports') != -1" ng-hide="user.AuthorizedFor.indexOf('Reports') == -1"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Setup') != -1" ng-hide="user.AuthorizedFor.indexOf('Setup') == -1" class="ng-hide"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.AuthorizedFor.indexOf('Users') != -1" ng-hide="user.AuthorizedFor.indexOf('Users') == -1" class="ng-hide"><i class="fa fa-check"></i></span> </th> <th style="width:58px;"> <span ng-show="user.RemoteAccessEnabled" ng-hide="!user.RemoteAccessEnabled"><i class="fa fa-check"></i></span> </th> <td> <label class="btn btn-sm btn-default texture-blue" data-ng-click="editUser(user)">Edit User</label> </td> </tr> <tr> <td>Other Text</td> <td><label>Other Label</label></td> </tr> </table>''' from selenium import webdriver from selenium.webdriver.common.by import By # --- import selenium print('Selenium:', selenium.__version__) # --- #driver = webdriver.Chrome() driver = webdriver.Firefox() driver.get("data:text/html;charset=utf-8," + html) #driver.implicitly_wait(3) #item = driver.find_element(By.XPATH, '//*[contains(text(), "SeleniumTest")]') print('-----') print("Checking: SeleniumTest") item = driver.find_element(By.XPATH, '//tr[td[contains(text(), "SeleniumTest")]]//label') print('Found:', item.text) print('-----') print("Checking: Other Text") item = driver.find_element(By.XPATH, '//tr[td[contains(text(), "Other Text")]]//label') print('Found:', item.text) # --- for text in ('SeleniumTest', 'Other Text'): print('-----') print(f"Checking: {text}") xpath = f'//tr[td[contains(text(), "{text}")]]//label' print(f"XPath: {xpath}") item = driver.find_element(By.XPATH, xpath) print('Found:', item.text) for text in ('SeleniumTest', 'Other Text'): print('-----') print(f"Checking: {text}") xpath = f'//td[contains(text(), "{text}")]/following-sibling::td/label' print(f"XPath: {xpath}") item = driver.find_element(By.XPATH, xpath) print('Found:', item.text) driver.close() Result: Selenium: 4.31.0 ----- Checking: SeleniumTest Found: Edit User ----- Checking: Other Text Found: Other Label ----- Checking: SeleniumTest XPath: //tr[td[contains(text(), "SeleniumTest")]]//label Found: Edit User ----- Checking: Other Text XPath: //tr[td[contains(text(), "Other Text")]]//label Found: Other Label ----- Checking: SeleniumTest XPath: //td[contains(text(), "SeleniumTest")]/following-sibling::td/label Found: Edit User ----- Checking: 
Other Text XPath: //td[contains(text(), "Other Text")]/following-sibling::td/label Found: Other Label
2
2
79,587,773
2025-4-23
https://stackoverflow.com/questions/79587773/python-file-behaviour-different-when-run-from-different-ides
A colleague and I were reviewing some student submissions. He likes using IDLE, while I use PyCharm. The student developed their code in PyCharm. A simplified example of the student's work is: file = open('test_file.txt','w') file.write('This is a test file.') print('Completed') exit() The student has made an error in their file handling and created a situation where the file is left open after the code is completed. When run from within PyCharm, the file.write is completed and the file is updated as expected. When run from within IDLE, the file.write appears not to be completed and the file is empty. When run from the command line (btw, we all use MacBooks) the file.write is completed and the file has the line of text. One more clue is that when run within IDLE, after 'Completed' is output, there is a system dialog that states, "Your program is still running! Do you want to kill it?". This warning does not appear when run from the command line or within PyCharm. We are trying to understand the difference between these behaviours given that we believe they are all doing the same process of invoking the same interpreter.
IDLE leaves the interpreter running after executing the code. You can go back to the Shell and inspect variables, for example. If you exit idle or restart the shell (Ctrl-F6) the interpreter exits (or restarts) and the file will be flushed and closed. Without restarting the shell, the file will still be open and cached writes may not have been written to disk yet. The other IDEs appear to exit the interpreter when the script completes. It's a design decision by the developers. You would get a similar result from the command line by running python -i script.py, which leaves the interpreter running after executing script.py. The file won't be flushed until exiting. FYI: The exit() is causing the IDLE system message. It kills the interpreter and closes the Shell, but since it is a kill and not a clean exit the file isn't flushed.
1
2
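The underlying fix for the student's snippet (a sketch; the accepted answer explains the behaviour rather than prescribing this) is to close the file deterministically, for example with a context manager, so the buffered write is flushed no matter how or when the interpreter exits:

    # The context manager closes (and therefore flushes) the file at the end of the block.
    with open('test_file.txt', 'w') as file:
        file.write('This is a test file.')
    print('Completed')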
79,587,407
2025-4-22
https://stackoverflow.com/questions/79587407/read-data-from-sheet1-and-output-filtered-data-on-sheet2
Is this possible? Or is each sheet a separate environment? CONTEXT: A clean way to read 200 rows of data (and 30+ columns) is using something like df=xl("A:BS", headers=True) A user then wants a filtered view of my data on sheet2, e.g., df[df['project'] =='bench'] (one could use an Excel filter instead, but the user prefers not to; e.g., the goal is to rewrite, in Python in Excel, R Markdown-style logic that parses input from Excel into an HTML page with 30+ analyses, a TOC, graphs and headings).
You can do something like import pandas as pd df = Excel("Data!A:AD", headers=True) filtered_df = df[df['project'] == 'bench'] filtered_df
1
2
79,586,324
2025-4-22
https://stackoverflow.com/questions/79586324/star-center-of-a-star-convex-shape
I'm working with 2D shapes represented by their boundary contours (as ordered x,y coordinates), and I want to check if a shape is star-convex. If it is, I'd like to find a star center, i.e., a point from which the entire shape is visible (meaning: every line segment to every boundary point lies entirely within the shape). Is there an elaborate non-heuristic way of doing so? If there is already Python code for this, then it would be even better :) This is what I have tried so far: I implemented a heuristic approach based on a midpoint visibility test. The idea is: a point is a valid star center if, for every boundary point, the midpoint of the line segment between them lies inside the polygon. I sample candidate points starting from the centroid, perturb them randomly within the bounding box, and check if one satisfies the condition. But this doesn't really guarantee finding the star center or checking whether the shape is star-convex. There's also no guarantee that random sampling will find the right region (kernel) in the first place. import numpy as np from shapely.geometry import Polygon, Point def is_star_convex_and_get_center(x, y, n_samples=500): coords = np.column_stack((x, y)) poly = Polygon(coords) if not poly.is_valid or poly.area == 0: print("Invalid shape.") return None # Try centroid first candidates = [np.mean(coords, axis=0)] # Add some random points around the centroid bbox = poly.bounds scale = max(bbox[2] - bbox[0], bbox[3] - bbox[1]) for _ in range(n_samples - 1): offset = (np.random.rand(2) - 0.5) * scale candidates.append(candidates[0] + offset) # Check visibility condition for each candidate for pt in candidates: visible = True for px, py in coords: mid = 0.5 * (pt + np.array([px, py])) if not poly.contains(Point(mid)): visible = False break if visible: return pt # pt is a valid star center # No valid star center found return None
I think you can do this by linear-programming. Every CONVEX corner of your figure will give you a triangular wedge in which any solution must lie. In this wedge the X,Y point which is hopefully the centre must differ from the corner node by a combination of positive multiples of the two side vectors. i.e. or You then have to solve a matrix equation for X, Y and all the ak and bk, subject to all a's and b's being positive. This is exactly what scipy.optimize.linprog does: see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html In the following code IT IS ASSUMED THAT THE FIGURE IS TRACED ANTICLOCKWISE!!! import numpy as np from scipy.optimize import linprog import matplotlib.pyplot as plt def starConvex( x, y ): if x[0] != x[-1] or y[0] != y[-1]: # close the loop if necessary x.append( x[0] ) y.append( y[0] ) N = len( x ) - 1 # Count convex corners Nconvex = 0 for i in range( N ): i0, im, ip = i, i - 1, i + 1 if im < 0: im = N - 1 dxlow , dylow = x[im] - x[i0], y[im] - y[i0] dxhigh, dyhigh = x[ip] - x[i0], y[ip] - y[i0] if dxhigh * dylow - dxlow * dyhigh >= 0: Nconvex += 1 # Set up matrices for linear programming A = np.zeros( ( 2 * Nconvex, 2 * Nconvex + 2 ) ) b = np.zeros( 2 * Nconvex ) c = np.ones( 2 * Nconvex + 2 ); c[0] = c[1] = 0 k = 0 for i in range( N ): i0, im, ip = i, i - 1, i + 1 if im < 0: im = N - 1 dxlow , dylow = x[im] - x[i0], y[im] - y[i0] dxhigh, dyhigh = x[ip] - x[i0], y[ip] - y[i0] if dxhigh * dylow - dxlow * dyhigh >= 0: A[2*k ,0 ] = 1 A[2*k+1,1 ] = 1 A[2*k ,2*k+2] = -dxlow A[2*k ,2*k+3] = -dxhigh A[2*k+1,2*k+2] = -dylow A[2*k+1,2*k+3] = -dyhigh b[2*k ] = x[i0] b[2*k+1 ] = y[i0] k += 1 bounds=[(None,None),(None,None)] for _ in range( 2*Nconvex ): bounds.append( (0,None) ) res = linprog( c, A_eq=A, b_eq=b, bounds=bounds ) values = res.x if res.success: return True, values[0], values[1] else: return False, None, None xpts = [ 0.0, 1.0, 3.0, 1.0, 0.0, -1.0, -3.0, -1.0 ] ypts = [ -3.0, -1.0, 0.0, 1.0, 3.0, 1.0, 0.0, -1.0 ] # This succeeds #ypts = [ -3.0, -1.0, 0.0, 1.0, -1.0, 1.0, 0.0, -1.0 ] # This succeeds #ypts = [ -3.0, -1.0, 0.0, 1.0, -2.0, 1.0, 0.0, -1.0 ] # This fails test, x0, y0 = starConvex( xpts, ypts ) if test: print( "SUCCESS! x0, y0 = ", x0, y0 ) plt.plot( [x0], [y0], 'ro' ) else: print( "FAILURE" ) plt.plot( xpts, ypts, 'b' ) plt.show() Here is the first case, which works OK (but you can't guarantee which feasible point will be chose as "star centre"; one possibility is shown in red.) Below is the last case, where there is no point from which you can see the whole of the star.
1
2
79,587,273
2025-4-22
https://stackoverflow.com/questions/79587273/tkinter-cant-delete-something-from-a-canvas-on-ubuntu
I created a program in Python to read serial data from an Arduino and show it in a tkinter window. I made a thread to read from the Arduino alongside the tkinter program. My program runs perfectly on Windows, but the problem is that I want to use it on my Ubuntu laptop, and there it doesn't work properly. It mostly works, but there are some problems. I have some sensor values that update when new information comes from the Arduino, but the values are not shown. Also, and most importantly, no buttons work and I can't close the window. My program has this structure: |--Main.py (for starting the program) |--App.py (for the application) | |--arduino | | |--serial_reader.py (the thread to read from my Arduino) | | |--Input_manager.py and save_Input.py (for saving the data in variables) | |--managers | | |--button_manager.py | | |--image_manager.py | | |--text_manager.py (to manage the widgets of the UI) | |--ui | | |--custom_button.py | | |--custom_image.py | | |--custom_text.py (for creating/deleting/updating the widgets) I tried narrowing down the problem, and the program worked perfectly fine without this function in the custom_text class. def delete_Text(self): if self.anzahl == 1: self.canvas.delete(self.tag) else: for i in range(0, self.anzahl): self.canvas.delete(self.tag + str(i)) I don't see why this crashes the entire window. In the app I have a loop with the after function in tkinter: def loop(self): try: while not data_queue.empty(): data = data_queue.get_nowait() self.serial_Reader.read_Serial_Values(data) self.text_manager.update() except queue.Empty: print("leer") pass self.window.after(1, self.loop) It checks whether the thread got some information; if so, it saves this information in specific variables and then the text should update. In the text_manager, the update function just calls the update function in the custom_text file, which looks like this: def update_Text(self): self.delete_Text() self.create_Text() And that calls the delete_Text function, which crashes the program. I don't understand what is wrong with this function. It used to work and it doesn't show any error, but it makes sense that the text is not updating, because I update the text by deleting it and creating new text. Hopefully somebody can help me.
The problem probably isn't with your delete_Text() function itself, but more with how you're using threads with tkinter. In tkinter, you can't call methods like canvas.delete, canvas.create_text, label.config, and so on, from a thread other than the main thread. I had a similar issue myself: everything worked fine on Windows, but on Linux, tkinter behaved differently, like it was stricter or something. And yeah, the app would freeze or the window just wouldn't respond, exactly like what you're seeing. So, most likely, the issue is thread-related, not with the logic of your function.
1
2
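To illustrate the answer's point, here is a minimal, self-contained sketch of the usual thread-safe pattern (the names below are hypothetical, not the OP's project): the worker thread only puts data on a queue, and every tkinter call stays in the main thread, driven by after():

    import queue
    import threading
    import time
    import tkinter as tk

    data_queue = queue.Queue()

    def worker():
        # Simulates the serial-reader thread: it never touches tkinter objects.
        for i in range(100):
            data_queue.put(f"reading {i}")
            time.sleep(0.5)

    root = tk.Tk()
    label = tk.Label(root, text="waiting...")
    label.pack()

    def poll_queue():
        # Runs in the main thread, so updating widgets here is safe.
        try:
            while True:
                label.config(text=data_queue.get_nowait())
        except queue.Empty:
            pass
        root.after(100, poll_queue)

    threading.Thread(target=worker, daemon=True).start()
    poll_queue()
    root.mainloop()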
79,586,415
2025-4-22
https://stackoverflow.com/questions/79586415/udf-returning-ljava-lang-object
I have a PySpark UDF which, when I apply it to each row of one of the df columns to get a new column, gives me a [Ljava.lang.Object;@7e44638d (a different value after the @ for each row). Please see the udf below: def getLocCoordinates(property_address): url = "https://maps.googleapis.com/maps/api/geocode/json" querystring = {"address": property_address, "key": "THE_API_KEY"} response = requests.get(url, params=querystring) response_json = json.loads(response.text) for adr in response_json['results']: geometry = adr['geometry'] coor = geometry['location'] lat = coor['lat'] lng = coor['lng'] coors = lat, lng return coors getCoorsUDF = udf(lambda x:getLocCoordinates(x)) df = df.withColumn("AddressCoordinates", getCoorsUDF(col("FullAddress") ) ) I tried: getCoorsUDF = udf(getLocCoordinates, FloatType()) --> returns NULL for each row of the newly created "AddressCoordinates" column. getCoorsUDF = udf(getLocCoordinates, StringType()) --> returns [Ljava.lang.Object;@ getCoorsUDF = udf(getLocCoordinates) --> returns [Ljava.lang.Object;@ The result looks like so: Ref Num FullAddress AddressCoordinates 1234 Some Address [Ljava.lang.Object;@ This gets returned for each row in the dataframe. Initially I was using the function in a Python notebook and it was working fine; lat and lng were returned for each address. However, I had to move this to PySpark and I am hitting a brick wall here.
I think that you're seeing the [Ljava.lang.Object;@... output because your UDF is returning a Python tuple ((lat, lng)), and PySpark doesn't know how to serialize that into a DataFrame column unless you explicitly define a return schema that Spark understands. You should return a StructType with fields for lat and lng. For example you can do something like this: from pyspark.sql.functions import udf, col from pyspark.sql.types import StructType, StructField, DoubleType import requests import json # defining return type for the UDF location_schema = StructType([ StructField("lat", DoubleType(), True), StructField("lng", DoubleType(), True) ]) def getLocCoordinates(property_address): url = "https://maps.googleapis.com/maps/api/geocode/json" params = { "address": property_address, "key": "YOUR_API_KEY" } try: response = requests.get(url, params=params) data = response.json() if data['results']: location = data['results'][0]['geometry']['location'] return {"lat": location['lat'], "lng": location['lng']} except Exception as e: print(f"Error: {e}") return None # registering the UDF with schema getCoorsUDF = udf(getLocCoordinates, location_schema) # now you apply the UDF df = df.withColumn("AddressCoordinates", getCoorsUDF(col("FullAddress"))) # an option would be to extract lat and lng as separate columns df = df.withColumn("Latitude", col("AddressCoordinates.lat")) \ .withColumn("Longitude", col("AddressCoordinates.lng"))
1
2
79,585,895
2025-4-22
https://stackoverflow.com/questions/79585895/can-i-get-pycharm-to-accept-a-python-interpreter-not-named-python
My project has an executable called powerscript.exe, which is a Python interpreter that does and knows some extra things. This is out of my control, I can not change this. From the command line I can use this as a drop-in replacement for the Python interpreter. In PyCharm I cannot. Adding this thing as a Python interpreter yields the error message: Select Python Interpreter An invalid Python interpreter name 'powerscript.exe'! Which I guess is just a sanity-check on the filename that I want to deactivate or work around. I have read How to resolve "Invalid Python interpreter name 'python.exe'!" error in PyCharm, they have the same error message but in their case it is a false positive, in my case I really do have an invalid name.
Since it's apparent that PyCharm validates only the name of the executable, you can make a copy of the wrapper executable powerscript.exe and rename the copy to python.exe as an interpreter for PyCharm.
1
2
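A tiny sketch of the copy step described in the answer, done from Python rather than by hand (the paths are hypothetical):

    import shutil

    # Make a copy named python.exe next to the wrapper so PyCharm's name check passes.
    shutil.copy2(r"C:\tools\powerscript.exe", r"C:\tools\python.exe")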
79,605,626
2025-5-4
https://stackoverflow.com/questions/79605626/flask-cant-see-html-file
Based on the data given here : https://www.kaggle.com/code/bhavikjikadara/loan-status-prediction-decisiontreeclassifier/input i want to make flask based ML model prediction for loan status, here is my html code and screenshot <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Prediction of Loan Status</title> </head> <body> <form action="/prediction" method="GET"> <label for="Gender">Enter Gender:</label><br> <input type="text" id="Gender" name="Gender"><br> <label for="Married">Enter Married:</label><br> <input type="text" id="Married" name="Married"><br> <label for="Dependents">Enter Dependents:</label><br> <input type="text" id="Dependents" name="Dependents"><br> <label for="Education">Enter Education:</label><br> <input type="text" id="Education" name="Education"><br> <label for="Self_Employed">enter Self_Employed:</label><br> <input type="text" id="Self_Employed" name="Self_Employed"><br> <label for="ApplicantIncome">Enter ApplicantIncome:</label><br> <input type="text" id="ApplicantIncome" name="ApplicantIncome"><br> <label for="CoapplicantIncome">Enter CoapplicantIncome:</label><br> <input type="text" id="CoapplicantIncome" name="CoapplicantIncome"><br> <label for="LoanAmount">Enter LoanAmountn:</label><br> <input type="text" id="LoanAmount" name="LoanAmount"><br> <label for="Loan_Amount_Term">EnterLoan_Amount_Term:</label><br> <input type="text" id="Loan_Amount_Term" name="Loan_Amount_Term"><br> <label for="Credit_History">Enter Credit_History:</label><br> <input type="text" id="Credit_History" name="Credit_History"><br> <label for="Property_Area">Enter Property_Area:</label><br> <input type="text" id="Property_Area" name="Property_Area"><br> <input type ='submit' value="Make Prediction"> </form> <p>score of Model is {{ score }}</p> </body> </html> and corresponding code is this : import pandas as pd from flask import Flask,request,render_template from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split app =Flask(__name__) def convert_gender(gender): if gender=='Male': return 1 else: return 0 def convert_Martial(status): if status =="Yes": return 1 else: return 0 def convert_dependencies(numbr_dependency): if numbr_dependency=='3+': return 3 else: return float(numbr_dependency) def convert_education(education): if education =='Graduate': return 1 else: return 0 def self_employeed_status(status): if status =='Yes': return 1 else: return 0 def get_Property_Area(area): if area=='Rural': return 0 elif area=='Urban': return 1 else: return 2 @app.route('/Prediction',methods=['POST',"GET"]) def welcome_Prediction(): return render_template("Loan_Status.html") @app.route('/prediction',methods=['POST',"GET"]) def prediction(): data =pd.read_csv("loan_data.csv") data.drop('Loan_ID',axis=1,inplace=True) data.dropna(axis=0,inplace=True) data['Loan_Status'] =data['Loan_Status'].map({'N':0,'Y':1}) # print(data.head()) # print(data['Gender'].unique()) print(data.columns) categorical_columns =data.select_dtypes(include='object').columns for column in categorical_columns: if column =='Gender': data[column] =data[column].map(convert_gender) elif column =='Married': data[column] = data[column].map(convert_Martial) elif column =='Dependents': data[column] = data[column].map(convert_dependencies) elif column =='Education': data[column] = data[column].map(convert_education) elif column =='Self_Employed': data[column] = data[column].map(self_employeed_status) elif column =='Property_Area': data[column] = data[column].map(get_Property_Area) y 
=data['Loan_Status'] X =data.drop('Loan_Status',axis=1) X_train,X_test,y_train,y_test =train_test_split(X,y,test_size=0.2,random_state=1) model =DecisionTreeClassifier() model.fit(X_train,y_train) myscore =model.score(X_test,y_test) print(myscore) Gender = float(convert_gender(request.form['Gender'])) Married = float(convert_Martial(request.form['Married'])) Dependents =float(convert_dependencies(request.form['Dependents'])) Education = float(convert_dependencies(request.form['Education'])) Self_Employed = float(self_employeed_status(request.form['Self_Employed'])) ApplicantIncome = float((request.form['ApplicantIncome'])) CoapplicantIncome = float((request.form['CoapplicantIncome'])) LoanAmount = float((request.form['LoanAmount'])) Loan_Amount_Term = float((request.form['Loan_Amount_Term'])) Credit_History = float((request.form['Credit_History'])) Property_Area = float(get_Property_Area(request.form['Property_Area'])) prediction =model.predict([[Gender,Married,Dependents, Education,Self_Employed,ApplicantIncome,CoapplicantIncome, LoanAmount,Loan_Amount_Term,Credit_History,Credit_History,Property_Area]]) return render_template('Loan_Status.html',score =prediction) if __name__=='__main__': app.run(debug=True) when i run code, i got this error : i know that i am missing a little part, but can't guess exactly what it is, please give me hint
The reason why you get a 404 Not Found error on the / page is simply that you didn't register any handler for this route. If you try to access /prediction you should find your page. Another point is that your HTML template (by default) must be in the templates directory. So to fix your application you should have something like . | - app.py | - templates | - Loan_Status.html with an app.py import pandas as pd from flask import Flask,request,render_template from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split app =Flask(__name__) def convert_gender(gender): if gender=='Male': return 1 else: return 0 def convert_Martial(status): if status =="Yes": return 1 else: return 0 def convert_dependencies(numbr_dependency): if numbr_dependency=='3+': return 3 else: return float(numbr_dependency) def convert_education(education): if education =='Graduate': return 1 else: return 0 def self_employeed_status(status): if status =='Yes': return 1 else: return 0 def get_Property_Area(area): if area=='Rural': return 0 elif area=='Urban': return 1 else: return 2 @app.route('/Prediction',methods=['POST',"GET"]) def welcome_Prediction(): return render_template("Loan_Status.html") @app.route('/prediction',methods=['POST',"GET"]) def prediction(): data =pd.read_csv("loan_data.csv") data.drop('Loan_ID',axis=1,inplace=True) data.dropna(axis=0,inplace=True) data['Loan_Status'] =data['Loan_Status'].map({'N':0,'Y':1}) # print(data.head()) # print(data['Gender'].unique()) print(data.columns) categorical_columns =data.select_dtypes(include='object').columns for column in categorical_columns: if column =='Gender': data[column] =data[column].map(convert_gender) elif column =='Married': data[column] = data[column].map(convert_Martial) elif column =='Dependents': data[column] = data[column].map(convert_dependencies) elif column =='Education': data[column] = data[column].map(convert_education) elif column =='Self_Employed': data[column] = data[column].map(self_employeed_status) elif column =='Property_Area': data[column] = data[column].map(get_Property_Area) y =data['Loan_Status'] X =data.drop('Loan_Status',axis=1) X_train,X_test,y_train,y_test =train_test_split(X,y,test_size=0.2,random_state=1) model =DecisionTreeClassifier() model.fit(X_train,y_train) myscore =model.score(X_test,y_test) print(myscore) Gender = float(convert_gender(request.form['Gender'])) Married = float(convert_Martial(request.form['Married'])) Dependents =float(convert_dependencies(request.form['Dependents'])) Education = float(convert_dependencies(request.form['Education'])) Self_Employed = float(self_employeed_status(request.form['Self_Employed'])) ApplicantIncome = float((request.form['ApplicantIncome'])) CoapplicantIncome = float((request.form['CoapplicantIncome'])) LoanAmount = float((request.form['LoanAmount'])) Loan_Amount_Term = float((request.form['Loan_Amount_Term'])) Credit_History = float((request.form['Credit_History'])) Property_Area = float(get_Property_Area(request.form['Property_Area'])) prediction =model.predict([[Gender,Married,Dependents, Education,Self_Employed,ApplicantIncome,CoapplicantIncome, LoanAmount,Loan_Amount_Term,Credit_History,Credit_History,Property_Area]]) return render_template('Loan_Status.html',score =prediction) # Here we add the `/` handler @app.route("/") def home(): # Maybe you could add a link to predictions return "Hello world" if __name__=='__main__': app.run(debug=True)
1
2
79,607,676
2025-5-5
https://stackoverflow.com/questions/79607676/making-solveset-solutions-rational-in-sympy
I'm trying to solve a cubic with a parameter r, for x. The cubic is the awful expression poly = b*(-(10 + x)*b - 10*b) - ((8/3) + x)*((10+x)*(1+x) - 10) where b = sp.sqrt((8/3)*(1 - a)) which factors out to x^3 + (41/3)x^2 + (8/3)(a + 10)x + (160/3)(a - 1). SymPy's solveset() gives a solution, but it is really long mostly due to the large mass of repeated fractions in the answer. I'd like to do something like Rational(solution) to make the solution nicer, but it doesn't work. Here's my code and the error: import sympy as sp from sympy import Eq, solveset, Rational from sympy.abc import x, a, b poly = b*(-(10 + x)*b - 10*b) - ((8/3) + x)*((10+x)*(1+x) - 10) polysub = poly.subs(b,sp.sqrt((8/3)*(1 - a))) polyeq = Eq(polysub,0) solutionset = solveset(polyeq,x) Rational(solutionset) TypeError: invalid input: {ridiculously long solution}
If you import S and use S(8)/3 instead of 8/3 you will get fractions instead of floats...but it will still be complicated. You can also use cse to give a symbolically simpler substituted expression: ...your code >>> from sympy import cse >>> cse(solve(polyeq,x)) [(x0, 8*a + 817/9), (x1, (-556*a + sqrt(-4*x0**3 + (70450/27 - 1112*a)**2)/2 + 35225/27)**(1/3)), (x2, x1/3), (x3, x0/(3*x1)), (x4, sqrt(3)*I/2), (x5, -x4 - 1/2), (x6, x4 - 1/2)], [-x2 - x3 - 41/9, -x2*x5 - x3/x5 - 41/9, -x2*x6 - x3/x6 - 41/9]) >>> _[-1] [-x2 - x3 - 41/9, -x2*x5 - x3/x5 - 41/9, -x2*x6 - x3/x6 - 41/9]
1
1
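A short sketch of the answer's first suggestion applied to the question's setup, building the cubic with exact rationals (S(8)/3) so no floats ever enter the expression; nsimplify is shown as an alternative if floats are already present:

    import sympy as sp
    from sympy import S, Eq, solveset, nsimplify
    from sympy.abc import x, a, b

    poly = b*(-(10 + x)*b - 10*b) - (S(8)/3 + x)*((10 + x)*(1 + x) - 10)
    polysub = poly.subs(b, sp.sqrt(S(8)/3*(1 - a)))

    # Equivalent cleanup if an expression already contains 2.66666... style floats:
    polysub = nsimplify(polysub, rational=True)

    solutionset = solveset(Eq(polysub, 0), x)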
79,607,320
2025-5-5
https://stackoverflow.com/questions/79607320/i-cant-see-the-custom-button-i-created-in-the-header-of-the-list-in-my-module-i
I've created a module in odoo18 and in that I created a xml called product_views. It has a list in it. I added a button in the header of the list but I can't see it. Here is my xml file: <?xml version="1.0" encoding="utf-8"?> <odoo> <record id="view_product_list" model="ir.ui.view"> <field name='name'>"electronic.product.list"</field> <field name='model'>product.product</field> <field name="priority" eval="99"/> <field name="arch" type="xml"> <list> <header> <button name="import_action" string="import" type='object'/> </header> <field name="name" string="product name" /> <field name="volume" string="quantity" /> <field name="x_link" string="quantity" /> </list> </field> </record> <record id="action_electronic_products" model="ir.actions.act_window"> <field name="name">Electronic Products</field> <field name="res_model">product.product</field> <field name="view_mode">list</field> <field name="view_id" ref="view_product_list"/> </record> <menuitem id="menu_electronic_product_root" name="Electronic Products" action="action_electronic_products" sequence="10" groups="base.group_user"/> This is my model: from odoo import models, fields, api class ProductProduct(models.Model): _inherit = 'product.product' # Correct inheritance x_link = fields.Char( string='Product Link', help='URL of the product', default='', tracking=True # Shows changes in chatter ) def import_action(self): return { 'name': 'Import Products', 'type': 'ir.actions.act_window', 'res_model': 'product.import.wizard', 'view_mode': 'form', 'target': 'new', 'context': {'default_product_id': self.id}, } Can you help me please?
Actually, your code works; the button does appear. I assume you didn't know that in Odoo, in general, buttons placed in the header of a tree/list view don't always show: they only appear when at least one record is selected. So, if I don't select any data, as you can see in the picture below, your "import" button disappears. Therefore, if you want the button to always be shown, you need to make further customizations by utilizing the Owl JS component (but it needs more advanced knowledge). You can see a similar topic discussed in the Odoo forum here: How to add a button next to Create in List View in Odoo 16
1
1
79,606,481
2025-5-5
https://stackoverflow.com/questions/79606481/how-to-add-editable-text-layer-to-a-photoshop-psd-file-using-python
I have a very simple use case where I need to pack a PNG file and a text layer into a PSD file and save it. That's it. I have tried psd-tools for it and so far it works in principle. The problem is that the text layer that it creates is not editable. The text layer itself is added as an image (transparent background) so I can't edit the text by opening it in Krita/Gimp. Here's my code: from PIL import Image, ImageDraw, ImageFont from psd_tools.api.psd_image import PSDImage from psd_tools.api.layers import PixelLayer from psd_tools.constants import Compression # Load PNG image png_image = Image.open('fire.png').convert('RGBA') # Create an empty PSD psd = PSDImage.new(mode='RGBA', size=png_image.size) # Create image layer from PNG image_layer = PixelLayer.frompil( pil_im=png_image, psd_file=psd, layer_name='Image Layer', compression=Compression.RLE ) psd.append(image_layer) # Create a text image with Pillow text = "Random Text Example" font_size = 40 try: font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", font_size) except IOError: font = ImageFont.load_default() # Create a blank transparent image for the text text_image = Image.new("RGBA", png_image.size, (0, 0, 0, 0)) draw = ImageDraw.Draw(text_image) draw.text((50, 50), text, font=font, fill=(255, 0, 0, 255)) # Red text # Create text layer from image text_layer = PixelLayer.frompil( pil_im=text_image, psd_file=psd, layer_name='Text Layer', compression=Compression.RLE ) psd.append(text_layer) # Save PSD psd.save('output_with_text.psd') I have looked other libs and found aspose-psd can do editable text layer, but it's a paid feature in the python library and I need a license for it. So, does anybody know a way to add an editable text layer to a PSD file programmatically.? Thanks.
I think the perfect answer to this Question is the description from OP mentioned alternative editors. My boldening for emphasis https://docs.krita.org/en/general_concepts/file_formats/file_psd.html .psd, unlike actual interchange formats like *.pdf, *.tiff, *.exr, *.ora and *.svg doesn’t have an official spec online. Which means that it needs to be reverse engineered. Furthermore, as an internal file format, it doesn’t have much of a philosophy to its structure, as it’s only purpose is to save what Photoshop is busy with, or rather, what all the past versions of Photoshop have been busy with. This means that the inside of a PSD looks somewhat like Photoshop’s virtual brains, and PSD is in general a very disliked file-format. Due to .psd being used as an interchange format, this leads to confusion amongst people using these programs, as to why not all programs support opening these. Sometimes, you might even see users saying that a certain program is terrible because it doesn’t support opening PSDs properly. But as PSD is an internal file-format without online specs, it is impossible to have any program outside it support it 100%. So most non Adobe Creative Cloud apps could NOT reverse Adobe 1980s "Non-sensible Format" The explanation by a modern advanced Graphics Editor (and others) related to PSD is: ".... It will likely never support vector and text layers, as these are just too difficult to program properly." So although I have a piece of PhotoShop text over a small PNG. The latest Gimp cannot (on file open) do anything other than convert the layers to common images. Apart from use online Photopea which perhaps has an API for Scripting to add a vector text to an image as PSD. Then scripting a proprietary image / design app via command lines is borderline programming.
1
1
79,607,412
2025-5-5
https://stackoverflow.com/questions/79607412/how-can-i-provide-type-hints-while-destructuring-my-class
I would like to create a class that looks something like ConfigAndPath: import pathlib from typing import TypeVar, Generic from dataclasses import dataclass, astuple class ConfigBase: pass T = TypeVar("T", bound=ConfigBase) @dataclass class ConfigAndPath(Generic[T]): path: pathlib.Path config: T I often have a list of these ConfigAndPath, and so would want to destructure it in list comprehensions like this: l: list[ConfigAndPath[MyConfig]] = ... filenames = [_path.name for _path, _my_config in l] So I added an __iter__ method to my class: # In ConfigAndPath def __iter__(self): return iter(astuple(self)) However, I'm not sure how to make it so that my typechecker (Pyright) realizes that _path is a pathlib.Path and _my_config is a MyConfig. This works with NamedTuple, however I can't appear to use generics with NamedTuple due to it not allowing multiple inheritance. Is this possible to specify what I want? I tried writing an as_tuple method: # In ConfigAndPath def as_tuple(self) -> Tuple[pathlib.Path, T]: return self.path, self.config Which then allows me to write filenames2 = [_path.name for _path, _my_config in [i.as_tuple() for i in l]] Which gives me a type hint but is quite verbose and walks the list twice.
You can destructure a dataclass using a match block (requires Python 3.10+). It is more verbose, but type checkers understand it. For example, using mypy: @dataclass class Foo(Generic[T]): x: int y: T foo = Foo[float](1, 2.0) match foo: # NB. this case matches any object that has attributes x and y, # which will be stored in the current scope as x and y respectively. case object(x=x, y=y): reveal_type(x) # note: Revealed type is "builtins.int" reveal_type(y) # note: Revealed type is "builtins.float" assert x + y == 3 case _: raise RuntimeError('unreachable') However, a match block is a statement and not an expression. As such it cannot be used within a list comprehension.
1
0
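Not from the accepted answer, but worth noting as a sketch: since Python 3.11 a NamedTuple class may be generic, which gives exactly the per-position types on unpacking that the question asks for, and Pyright understands it:

    import pathlib
    from typing import Generic, NamedTuple, TypeVar

    class ConfigBase: ...
    class MyConfig(ConfigBase): ...

    T = TypeVar("T", bound=ConfigBase)

    class ConfigAndPath(NamedTuple, Generic[T]):   # allowed from Python 3.11 onward
        path: pathlib.Path
        config: T

    items: list[ConfigAndPath[MyConfig]] = [
        ConfigAndPath(pathlib.Path("cfg.toml"), MyConfig()),
    ]
    filenames = [p.name for p, _cfg in items]      # p is inferred as pathlib.Path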
79,607,498
2025-5-5
https://stackoverflow.com/questions/79607498/why-does-my-python-function-not-properly-cast-the-dates-despite-recognizing-the
I am attempting to dynamically cast various date formats that come across as a string column but are actually dates. I've gotten pretty far, and this code can correctly identify the dates from the string, but the fields 'Date2' and 'Date3' always return as null values. I can't understand why that is, or how to correct it so that it returns the converted date values. from pyspark.sql import SparkSession from pyspark.sql.functions import col, min, max from pyspark.sql.types import IntegerType, FloatType, TimestampType, StringType, DateType from datetime import datetime, date # Define the function to convert values def convert_value(value): try: return int(value) except ValueError: pass try: return float(value) except ValueError: pass datetime_formats = [ '%m/%d/%Y %H:%M:%S', '%Y-%m-%d %H:%M:%S', '%Y-%m-%dT%H:%M:%S', '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%dT%H:%M:%S.%f' ] for fmt in datetime_formats: try: return datetime.strptime(value, fmt) except ValueError: pass date_formats = [ '%Y-%m-%d', '%d-%m-%Y', '%m/%d/%Y', '%d/%m/%Y', '%Y/%m/%d', '%b %d, %Y', '%d %b %Y' ] for fmt in date_formats: try: return datetime.strptime(value, fmt).date() except ValueError: pass return value # Function to infer data type for each column def infer_column_type(df, column): min_value = df.select(min(col(column))).collect()[0][0] max_value = df.select(max(col(column))).collect()[0][0] for value in [min_value, max_value]: if value is not None: converted_value = convert_value(value) print(f"Column: {column}, Value: {value}, Converted: {converted_value}") # Debug print if isinstance(converted_value, int): return IntegerType() elif isinstance(converted_value, float): return FloatType() elif isinstance(converted_value, datetime): return TimestampType() elif isinstance(converted_value, date): return DateType() return StringType() # Example data with different date formats in separate columns data = [ ('1', '2021-01-01', '01-02-2021', '1/2/2021', '2021-01-01T12:34:56', '1.1', 1), ('2', '2021-02-01', '02-03-2021', '2/3/2021', '2021-02-01T13:45:56', '2.2', 2), ('3', '2021-03-01', '03-04-2021', '3/4/2021', '2021-03-01T14:56:56', '3.3', 3) ] # Create DataFrame spark = SparkSession.builder.appName("example").getOrCreate() columns = ['A', 'Date1', 'Date2', 'Date3', 'Date4', 'C', 'D'] df = spark.createDataFrame(data, columns) # Apply inferred data types to columns for column in df.columns: inferred_type = infer_column_type(df, column) df = df.withColumn(column, df[column].cast(inferred_type)) # Show the result df.show() df.dtypes
The issue is that while your convert_value function correctly identifies the date format using Python’s datetime.strptime...PySpark’s cast(DateType()) doesn't support this format unless it matches Spark's expected patterns (usually 'yyyy-MM-dd'). As a result, Date2 and Date3 return null because their formats (e.g. '01-02-2021', '1/2/2021') aren’t parsed by Spark during casting. To fix this, don’t use .cast(DateType()). Instead, use to_date() with the specific format for each column. So: from pyspark.sql.functions import to_date df = df.withColumn("Date1", to_date("Date1", "yyyy-MM-dd")) df = df.withColumn("Date2", to_date("Date2", "dd-MM-yyyy")) df = df.withColumn("Date3", to_date("Date3", "M/d/yyyy"))
3
5
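If a single column can contain several formats, one common extension of the answer's approach (a sketch, assuming the default lenient parsing where a failed parse yields null rather than raising under ANSI mode) is to coalesce several to_date attempts:

    from pyspark.sql.functions import coalesce, to_date

    df = df.withColumn(
        "Date2_parsed",
        coalesce(
            to_date("Date2", "dd-MM-yyyy"),
            to_date("Date2", "M/d/yyyy"),
            to_date("Date2", "yyyy-MM-dd"),
        ),
    )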
79,600,488
2025-4-30
https://stackoverflow.com/questions/79600488/square-api-for-invoice-attachments-received-multiple-request-parts-please-only
The new square api versions 42+ have breaking changes. Im trying to upgrade to ver v42, and I am testing in a local dev environment. I keep getting the following error: *** square.core.api_error.ApiError: status_code: 400, body: {'errors': [{'category': 'INVALID_REQUEST_ERROR', 'code': 'INVALID_CONTENT_TYPE', 'detail': 'Received multiple request parts. Please only supply zero or one `parts` of type application/json.'}]} when I try to upload an approx ~800 byte jpeg [very grainy] in the development sandbox for Square Invoice API using the following code: pdf_filepath = 'local/path/to/file.jpg' idem_key = 'some-unique_key_like_square_invoice_id' f_stream = open(pdf_filepath, "rb") try: # I have tried using a stream as well, still the same error invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=square_original_invoice.id, # this also does not work # image_file=f_stream, image_file=pdf_filepath, request={ "description": f"Invoice-{pdf_filepath}", "idempotency_key": idem_key, }, ) except ApiError as e: print(f"ERROR _attach_pdf_to_vendor_payment with errors {e}") In the online sandbox API, I get the 400 Response error: // cache-control: no-cache // content-type: application/json // date: Wed, 30 Apr 2025 13:35:06 GMT // square-version: 2025-04-16 { "errors": [ { "code": "BAD_REQUEST", "detail": "Total size of all attachments exceeds Sandbox limit: 1000 bytes", "category": "INVALID_REQUEST_ERROR" } ] } Once I got a 900 byte jpg to upload in the API explorer (988 byte did not pass), but the SDK still errors using the same file. Here is a successful API request upload VIA the API Explorer: content-length: 1267 content-type: multipart/form-data; boundary=----WebKitFormBoundaryUUID38 square-version: 2025-04-16 user-agent: SquareExplorerGateway/1.0 SquareProperty ApiExplorer ------WebKitFormBoundaryUUID38 Content-Disposition: form-data;name="request" { "idempotency_key": "UUID-123-456-7869" } ------WebKitFormBoundaryUUID38 Content-Disposition: form-data;name="file";filename="900b.jpeg" Content-Type: image/jpeg οΏ½οΏ½οΏ½οΏ½ Here is the unsuccessful API request via my django server, using the same file: content-length: 568 content-type: multipart/form-data; boundary=djangoUUID square-version: 2025-04-16 accept-encoding: gzip accept: */* user-agent: squareup/42.0.0.20250416 --djangoUUID Content-Disposition: form-data;name="request" Content-Type: application/json;charset=utf-8 { "description": "Invoice-path/to/file/900b.jpeg", "idempotency_key": "path/to/original/file/normal-invoice.pdf" } --djangoUUID Content-Disposition: form-data;name="image_file" Content-Type: image/jpeg /path/to/file/900b.jpeg --djangoUUID-- Note the successful request : Content-Disposition: form-data;name="file";filename="900b.jpeg" Content-Type: image/jpeg and the unsuccessful request: Content-Disposition: form-data;name="request" Content-Type: application/json;charset=utf-8 Content-Disposition: form-data;name="image_file" Content-Type: image/jpeg` specifically: name="image_file" vs name="file" and application/json;charset=utf-8 The headers for the unsuccessful API call are: { "date": "Sat, 03 May 2025 22:47:34 GMT", "content-type": "application/json", "transfer-encoding": "chunked", "connection": "keep-alive", "cf-ray": "ab-…-WER", "cf-cache-status": "DYNAMIC", "cache-control": "no-cache", "strict-transport-security": "max-age=631152000; includeSubDomains; preload", "x-envoy-decorator-operation": "/v2/invoices/**", "x-request-id": "58-…-80", "x-sq-dc": "aws", "x-sq-istio-migration-ingress-proxy": 
"sq-envoy", "x-sq-istio-migration-ingress-region": "us-west-2", "x-sq-region": "us-west-2", "vary": "Accept-Encoding", "server": "cloudflare" } The headers from the browser network inspector: :authority explorer-gateway.squareup.com :method POST :path /v2/invoices/inv:0-Ch…wI/attachments :scheme https accept application/json accept-encoding. gzip, deflate, br, zstd accept-language. en-US,en;q=0.9 authorization Bearer AE…ju cache-control no-cache content-length 1250 content-type. multipart/form-data; boundary=----WebKitFormBoundaryCP3GAXwMvwBUTlwU origin https://developer.squareup.com pragma no-cache priority u=1, i referer https://developer.squareup.com/ sandbox-mode true sec-ch-ua "Chromium";v="136", "Google Chrome";v="136", "Not.A/Brand";v="99" sec-ch-ua-mobile ?0 sec-ch-ua-platform "macOS" sec-fetch-dest empty sec-fetch-mode cors sec-fetch-site same-site square-version 2025-04-16 user-agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36 x-square-property. ApiExplorer Notes: Here is the test image To try and send zero parts, if I do an API call without a request parameter: invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=self.square_original_invoice.id, image_file=pdf_filepath, ) I get a response of INVALID_REQUEST_ERROR BAD_REQUEST Bad request. If I do an API call with an empty request parameter: invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=self.square_original_invoice.id, image_file=pdf_filepath, request={}, ) I get the same error INVALID_REQUEST_ERROR INVALID_CONTENT_TYPE Received multiple request parts. Please only supply zero or one parts of type application/json. I do remember the parameter name (file vs image_file) image being an issue with outdated docs, but the current docs show the newer parameter of image_file being correct Related Docs https://developer.squareup.com/docs/invoices-api/attachments This worked fine in the old pre 42 API, with minor syntax change, and I know the limit is now imposed to have a 1000 byte limit for the attachment, but why can't I upload attachments in the sandbox now? Square's internal developer forum link : https://developer.squareup.com/forums/t/new-api-for-invoice-attachments-error-received-multiple-request-parts-please-only-supply-zero-or-one-parts-of-type-application-json/22126
Docs should read something like this: Uploads a file and attaches it to an invoice. This endpoint accepts HTTP multipart/form-data file uploads with a JSON request part and a image_file part. The image_file part must INCLUDE a readable stream in the form of a file (or bytes) [supported formats: GIF, JPEG, PNG, TIFF, BMP, or PDF.], and optionally filename, content_type, headers. See core.File for additional details related to the image_file Final working code: f_stream = open(attachment_filepath, "rb") mime_type, encoding = mimetypes.guess_type(attachment_filepath) invoice_pdf = SQUARE_CLIENT.invoices.create_invoice_attachment( invoice_id=self.square_original_invoice.id, image_file=( attachment_filepath, attachment_filepath, #can also be f_stream mime_type, ), request={ "idempotency_key": idem_key, "description": f"Invoice-{attachment_filepath}", }, )
2
0
79,606,838
2025-5-5
https://stackoverflow.com/questions/79606838/qlistwidget-drag-and-drop-configuration-when-to-use-the-mode-instead-of-the-dra
I'm learning how to setup drag and drop in the view-model framework description at Qt site. When applied to convenience views (QListWidget, QTableWidget, QTreeWidget), the documentation uses either (in original C++ version): listWidget->setDragEnabled(true); listWidget->viewport()->setAcceptDrops(true); or: listWidget->setDragDropMode(QAbstractItemView::InternalMove); I'm unable to figure out whether setDragDropMode is a shortcut to set individual properties of the widget and which ones in addition of the two above, or it actually does more. I see using QAbstractItemView::InternalMove and QAbstractItemView::DragDrop both set DragEnabled and setAcceptDrops to true, but lead to a different behavior for the item (move vs. copy), so I know there is more behind the mode method. I would like to clarify this point in order to know when to use the mode method. My questions: Are the two approaches equivalent? Which properties/attributes are actually set by the mode method? What are the use cases for each one? I use Qt for Python if that matters. As an example, if I create a list widget with each approach, it seems the result is perfectly equivalent (in this specific case): from qtpy.QtWidgets import (QApplication, QWidget, QListWidget, QHBoxLayout) class Window(QWidget): texts = ['Sycamore', 'Chestnut', 'Walnut', 'Mahogany'] def __init__(self): super().__init__() lw_1 = QListWidget() lw_1.addItems(self.texts) self.print_props('When created', lw_1) lw_2 = QListWidget() lw_2.addItems(self.texts) # Comparing lw_1.setDragEnabled(True) lw_1.viewport().setAcceptDrops(True) lw_1.setDropIndicatorShown(True) self.print_props('Using properties', lw_1) # With mode = lw_2.DragDrop lw_2.setDragDropMode(mode) self.print_props(f'Using mode ({mode})', lw_2) layout = QHBoxLayout(self) layout.addWidget(lw_1) layout.addWidget(lw_2) def print_props(self, text, widget): print() print(text) print('drag:', widget.dragEnabled()) print('drop:', widget.viewport().acceptDrops()) print('mode:', widget.dragDropMode()) def main(): app = QApplication([]) window = Window() window.show() app.exec() main()
We can look at the source of the getter and setter of dragDropMode property: void QAbstractItemView::setDragDropMode(DragDropMode behavior) { Q_D(QAbstractItemView); d->dragDropMode = behavior; setDragEnabled(behavior == DragOnly || behavior == DragDrop || behavior == InternalMove); setAcceptDrops(behavior == DropOnly || behavior == DragDrop || behavior == InternalMove); } QAbstractItemView::DragDropMode QAbstractItemView::dragDropMode() const { Q_D(const QAbstractItemView); DragDropMode setBehavior = d->dragDropMode; if (!dragEnabled() && !acceptDrops()) return NoDragDrop; if (dragEnabled() && !acceptDrops()) return DragOnly; if (!dragEnabled() && acceptDrops()) return DropOnly; if (dragEnabled() && acceptDrops()) { if (setBehavior == InternalMove) return setBehavior; else return DragDrop; } return NoDragDrop; } As you can see, the setter indeed itself uses setDragEnabled and setAcceptDrops to set corresponding properties, which corresponds to behaviour you noticed. The only additional thing it does is setting underlying d->dragDropMode (d appears via Q_D macro and is used to implement private data, see this question), but the getter doesn't even consider its value except when dragEnabled() && acceptDrops(), for which case, as you observed, the saved member differentiates between move and copy behaviour. By default it is set to QAbstractItemView::NoDragDrop (see here), so the getter returns DragDrop (whenever it is called by you or Qt itself) if you didn't set the property yourself and used only setDragEnabled and setAcceptDrops (setting them to true). Also note that the getter and setter aren't virtual (see here). So, if you don't need InternalMove behaviour, you can ignore dragDropMode propery and just use setDragEnabled and setAcceptDrops, behaviour will be identical.
2
5
79,606,651
2025-5-5
https://stackoverflow.com/questions/79606651/how-can-i-use-a-wx-filedialog-to-select-a-file-which-is-locked-by-another-proces
I'm trying to use the wx.FileDialog class to select the name of a file. I don't want to open it. This is a minimal example of what I'm trying to do: import wx if __name__ == '__main__': app = wx.App(redirect=False) frame = wx.Frame(None) frame.Show() dlg = wx.FileDialog(parent=frame, style=wx.FD_OPEN|wx.FD_FILE_MUST_EXIST, wildcard="Project (*.ap*)|*.ap*|All files (*.*)|*.*") dlg.ShowModal() app.MainLoop() The error dialog text is in Swedish and says "File in use. Select a different name or close the file in use in another program". I'm trying to select the filename of a TIA Portal project file (Siemens PLC programming tool). Unfortunately it seems that TIA Portal locks the file. NOTE: I don't want to open the file. I just need the name.
Unfortunately it looks like this is currently impossible because wxWidgets doesn't set the FOS_SHAREAWARE flag and so doesn't customize the default handling of locked files, which is to do what you see. It should be relatively straightforward to implement support for this in wxWidgets itself and, as it's an open source library which is open to contributions, anybody could do it. But somebody would have to do it first, before this functionality becomes accessible in wxPython.
1
2
79,606,665
2025-5-5
https://stackoverflow.com/questions/79606665/extract-header-from-the-first-commented-line-in-numpy-via-numpy-genfromtxt
My environment: OS: Windows 11 Python version: 3.13.2 NumPy version: 2.1.3 According to the NumPy Fundamentals guide describing how to use the numpy.genfromtxt function: The optional argument comments is used to define a character string that marks the beginning of a comment. By default, genfromtxt assumes comments='#'. The comment marker may occur anywhere on the line. Any character present after the comment marker(s) is simply ignored. Note: There is one notable exception to this behavior: if the optional argument names=True, the first commented line will be examined for names. To test the above-mentioned note (indicated in bold), I created the following data file and put the header line as a commented line: C:\tmp\data.txt #firstName|LastName Anthony|Quinn Harry|POTTER George|WASHINGTON And the following program to read and print the content of the file: with open("C:/tmp/data.txt", "r", encoding="UTF-8") as fd: result = np.genfromtxt(fd, comments="#", delimiter="|", dtype=str, names=True, skip_header=0) print(f"result = {result}") But the result is not what I expected: result = [('', '') ('', '') ('', '')] I cannot figure out where the error is in my code and I don't understand why the content of my data file, and in particular, its header line after the comment indicator #, is not interpreted correctly. I'd appreciate it if you could kindly make some clarification.
Thanks to what @mehdi-sahraei suggested, I changed the dtype to None and this permitted to parse other rows (any row after the header line) correctly. Finally, it seems that there is no bug about how the header line is treated but rather a lack of clarity in the documentation. As indicated in my original post, the documentation says: ... if the optional argument names=True, the first commented line will be examined for names ... But what the documentation doesn't tell you, is that in that case, the detected header is stored in dtype.names and not beside other rows that come after the header in the file. So the header line is actually there but it is not directly accessible like other rows in the file. Here is a working test case for those who might be interested to check how this works in preactice: C:\tmp\data.txt #firstName|LastName Anthony|Quinn Harry|POTTER George|WASHINGTON And the program: with open("C:/tmp/data.txt", "r", encoding="UTF-8") as fd: result = np.genfromtxt( fd, delimiter="|", comments="#", dtype=None, names=True, skip_header=0, autostrip=True, ) print(f"result = {result}\n\n") print("".join([ "After parsing the file entirely, the detected ", "header line is: ", f"{result.dtype.names}" ])) Which gives the expected result: result = [('Anthony', 'Quinn') ('Harry', 'POTTER') ('George', 'WASHINGTON')] After parsing the file entirely, the detected header line is: ('firstName', 'LastName') Thanks everyone for your time and your help and I hope this might clarify the issue for those who have encountered the same problem.
4
0
79,606,785
2025-5-5
https://stackoverflow.com/questions/79606785/select-a-range-of-data-based-on-a-selected-value-using-pandas
I have a dataframe, I need to select a range of data based on a month value, but the result expected is always showing six rows where the month selected appears in the filtered data , here's the code : import pandas as pd data = { "function": ["test1","test2","test3","test4","test5","test6","test7","test8","test9","test10","test11","test12"], "service": ["A", "B", "AO", "M" ,"A", "PO", "MP", "YU", "Z", "R", "E", "YU"], "month": ["January","February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] } df = pd.DataFrame(data) selected_month = "January" selected_month_idx = df[df["month"] == selected_month].index[0] six_months_indices = [i % len(df) for i in range(selected_month_idx - 2, selected_month_idx + 4)] six_months_df = df.loc[six_months_indices] # add .reset_index(drop=True) if needed print(six_months_df) output : function service month 10 test11 E November 11 test12 YU December 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April the issue with this code is when I select January for example, the order of the months is not good, what I expet is somthing like that : function service month 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April 3 test5 A May 3 test6 PO June when I select December for example the output should be : function service month 6 test7 MP July 7 test8 YU August 8 test9 Z September 9 test10 R October 10 test11 E November 11 test12 YU December when I select Octobre the output should be for example : function service month 6 test7 MP July 7 test8 YU August 8 test9 Z September 9 test10 R October 10 test11 E November 11 test12 YU December the order of the month displated matters, always min(month) => max(month) output like this is not expected : output : function service month 10 test11 E November 11 test12 YU December 0 test1 A January 1 test2 B February 2 test3 AO March 3 test4 M April anyone please could to adjust the code above, thanks
If there is a default index, it is possible to select by DataFrame.loc (using import numpy as np for np.clip): selected_month_idx = df[df["month"] == selected_month].index[0] start = np.clip(selected_month_idx, 0, len(df) - 6) six_month_window = df.loc[start : start + 5] print(six_month_window) If the condition always matches a value, get the position of the first True with np.argmax, clip the window start with np.clip and select by position with DataFrame.iloc: selected_month_idx = np.argmax(df["month"] == selected_month) start = np.clip(selected_month_idx, 0, len(df) - 6) six_month_window = df.iloc[start : start + 6]
3
1
79,606,560
2025-5-5
https://stackoverflow.com/questions/79606560/which-class-accurately-represents-a-websocket-connection
I have come across multiple ways to describe a websocket connection object while using the websockets library in python but can't seem to understand which way to go. In the documentation, the code to start a server is very simple. import asyncio from websockets.asyncio.server import serve async def hello(websocket): name = await websocket.recv() print(f"<<< {name}") greeting = f"Hello {name}!" await websocket.send(greeting) print(f">>> {greeting}") async def main(): async with serve(hello, "localhost", 8765) as server: await server.serve_forever() if __name__ == "__main__": asyncio.run(main()) But when I simply copy and paste the code, the IDE cannot recognize the .recv() and the .send() methods. So I started to look around what is the type of the websocket object to get some idea about it and what other methods are available for that object. So I found that all these can be used: - websockets.ServerConnection - websockets.server.ServerConnection - websockets.asyncio.server.ServerConnection I think the methods they provide are the same. If these are significantly different, then an explanation? Can I use any of the three and it won't make much difference? In the documentation, the asyncio implementation of the server creates a websockets.asyncio.server.ServerConnection instance upon client connection. So then what are the other classes? I'm sorry if this is too basic.
No need to apologize - your question is valid, and it's clear you're trying to understand how the websockets library works. That's awesome! So, here's what's going on: When you define async def hello(websocket), the websocket parameter is the connection object to the client. Specifically, if you're using websockets.asyncio.server.serve, that websocket is an instance of websockets.asyncio.server.ServerConnection. The .recv() and .send() methods you're using are part of that class - they're the standard methods to receive and send messages. If your IDE doesn't recognize them, it's probably just not sure what type websocket is, or it's missing some type hints. Now about those different class names: websockets.ServerConnection - this is basically a shortcut to one of the actual implementations. websockets.server.ServerConnection - this might be from an older version or an intermediate module path. websockets.asyncio.server.ServerConnection - this is the one you want to use when working with asyncio. It's what the serve() function actually creates behind the scenes.
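A minimal sketch of the type annotation this points to, assuming websockets 13+ where the asyncio implementation is available; with the hint in place, the IDE can resolve .recv() and .send():

from websockets.asyncio.server import ServerConnection

async def hello(websocket: ServerConnection) -> None:
    name = await websocket.recv()
    await websocket.send(f"Hello {name}!")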
1
1
79,606,646
2025-5-5
https://stackoverflow.com/questions/79606646/using-logical-operators-in-pytest-expected-results
I'm trying to develop pytest for a project, and while I'm not the most familiar with pytest I feel like I have a fairly basic understanding. In this particular case I am testing some code that does route optimization and I wish to implement a bunch of different tests to ensure that the code performs as it should. To help with this I have defined a dataclass which I want to use with pytest to basically tell what to expect for a given scenario. @dataclass(slots=True) class VRPResults: """ A dataclass for storing the results of a VRP problem. """ solver_time: Optional[float] = None total_travel_time: Optional[int] = None route_lengths: Optional[list[int]] = None route_indices: Optional[list[list[int]]] = None def __post_init__(self): if self.route_lengths is not None and self.total_travel_time is None: self.total_travel_time = sum(self.route_lengths) def __eq__(self, other): if not isinstance(other, VRPResults): raise TypeError(f"Cannot compare {type(self)} with {type(other)}") is_equal = True for field in fields(VRPResults): val = getattr(self, field.name) val_other = getattr(other, field.name) if val is None or val_other is None: continue if val_other != val: is_equal = False break return is_equal The idea with this dataclass is that I can specify that for scenario 1, I know that the total travel time should be 10 minutes, while for scenario 2 I know that the route_indices should be the following [0,1,2] and the route_lengths should be [5,2,4]. So basically it is an easy framework for me to use to compare different scenarios expected values with what my model produces. So my code for using the above dataclass would look something like this: vrp_data, vrp_results_expected = create_data_model() vrp_results_predicted = solve_vrp(vrp_data) assert vrp_results_expected == vrp_results_predicted, "Solution does not match expected results" where vrp_results_predicted and vrp_results_expected both are instances of the above dataclass. The main problem is that this code only checks whether these parameters are equal or not. And instead I would like some way to specify how exactly it should evaluate a parameter. For instance in scenario 3 I do not know what the actual best travel time is, but I would be happy with anything below 20 minutes. In order to accommodate this I'm thinking of adding additional parameters which specifies the logical operator that should be used to evaluate the expressions, but I am not sure exactly how to add these logical operators in python, and I'm wondering whether there is a better way of doing something like this? maybe pytest have some clever tools available for this?
I suggest going with this approach: instead of overriding the __eq__ method in your dataclass, it's better to create a separate comparison function where you can pass in custom check rules for each field. This is especially helpful when different test scenarios require different validation logic - like in one case, you might want to compare route lists exactly, but in another, you just want to make sure total travel time is under a certain threshold. Here's how you could set it up: from dataclasses import dataclass, fields from typing import Optional, Callable, Any @dataclass(slots=True) class VRPResults: solver_time: Optional[float] = None total_travel_time: Optional[int] = None route_lengths: Optional[list[int]] = None route_indices: Optional[list[list[int]]] = None def __post_init__(self): if self.route_lengths is not None and self.total_travel_time is None: self.total_travel_time = sum(self.route_lengths) Then, here's the custom comparison function: def compare_vrp_results(predicted: VRPResults, expected: VRPResults, custom_checks: Optional[dict[str, Callable[[Any], bool]]] = None): custom_checks = custom_checks or {} for field in fields(VRPResults): value_predicted = getattr(predicted, field.name) value_expected = getattr(expected, field.name) if field.name in custom_checks: if not custom_checks[field.name](value_predicted): return False, f"{field.name} failed check: Got {value_predicted}" elif value_expected is not None and value_predicted != value_expected: return False, f"{field.name} does not match: Expected {value_expected}, got {value_predicted}" return True, "" And a test example could look like this: def test_scenario_3(): vrp_data, vrp_results_expected = create_data_model() vrp_results_predicted = solve_vrp(vrp_data) custom_checks = { "total_travel_time": lambda x: x < 20 # Something like anything under 20 is fine. You can have any checks. } result, message = compare_vrp_results(vrp_results_predicted, vrp_results_expected, custom_checks) assert result, message This approach should give you a clean and flexible way to define how results should be validated per scenario, without bloating your dataclass logic.
1
1
79,604,901
2025-5-3
https://stackoverflow.com/questions/79604901/surrounding-whitespace-separated-urls-with-quotes-using-sed
Problem I was trying to get sed command to do the same thing I could do with Python regex flavour, but I encountered some problems Python regex example: (tested it on regex101 and it was working fine) find: (https.*?) replace: "\1" Unsuccessful code: sed 's/\(https.*?\)[:space:]/\"\1\"/g' .\elenco.txt elenco.txt file: https://www.youtube.com/watch?app=desktop&v=Ot34P0yyQqI&t=984s https://www.youtube.com/watch?v=vviniZjvDQs https://www.youtube.com/watch?v=Ih7qgkyo_oo https://www.youtube.com/watch?v=X6UEDpwI3HI https://www.youtube.com/watch?v=nShgaRMNlLw https://www.youtube.com/watch?v=nd_jN-C_Juw https://www.youtube.com/watch?v=aOtqox2uB3Y Expected output: "https://www.youtube.com/watch?app=desktop&v=Ot34P0yyQqI&t=984s" "https://www.youtube.com/watch?v=vviniZjvDQs" "https://www.youtube.com/watch?v=Ih7qgkyo_oo" "https://www.youtube.com/watch?v=X6UEDpwI3HI" "https://www.youtube.com/watch?v=nShgaRMNlLw" "https://www.youtube.com/watch?v=nd_jN-C_Juw" "https://www.youtube.com/watch?v=aOtqox2uB3Y" Actual output: "https://www.youtube.com/watch?"pp=desktop&v=Ot34P0yyQqI&t=984s https://www.youtube.com/watch?v=vviniZjvDQs https://www.youtube.com/watch?v=Ih7qgkyo_oo https://www.youtube.com/watch?v=X6UEDpwI3HI https://www.youtube.com/watch?v=nShgaRMNlLw https://www.youtube.com/watch?v=nd_jN-C_Juw https://www.youtube.com/watch?v=aOtqox2uB3Y Info OS Name: Microsoft Windows 11 Home Version: 10.0.26100 N/D build 26100 installed sed through winget install bmatzelle.Gow I've always avoided using POSIX regex etc, as I found it unnecessarily complicated / limited compared to using perl/python etc. and the regex flavour available there. Any other options than to install Perl/Python? 200MB for StrawberryPerl (Perl on Windows) seems to be quite overkill and useless bloat just to have access to perl flavour regex, and sed unlike perl doen't support 'easy' regex... https://askubuntu.com/questions/1050693/sed-with-pcre-like-grep-p
Ahoy! Its pretty trivial to do something like this in Perl. I donno 200mb these days seems pretty small. You can even do this with Windows Subsystems for Linux or WSL. Install WSL, run bash from a command prompt, then sudo apt install perl. I use WSL from the command line in Windows all the time. Its very small and incredibly useful. PCRE regular expressions are really useful because they are portable, and you dont have to rewrite your regular expressions for every minor wrinkle in every language. Basically look for anything not a space, until you find a space or end of line. Backreference all that and put quotes around it in a global match. Here is the code Golfed at 21 characters... $ perl -pe 's/(\S+)( |$)+/"\1" /g' elenco.txt "https://www.youtube.com/watch?app=desktop&v=Ot34P0yyQqI&t=984s" "https://www.youtube.com/watch?v=vviniZjvDQs" "https://www.youtube.com/watch?v=Ih7qgkyo_oo" "https://www.youtube.com/watch?v=X6UEDpwI3HI" "https://www.youtube.com/watch?v=nShgaRMNlLw" "https://www.youtube.com/watch?v=nd_jN-C_Juw" "https://www.youtube.com/watch?v=aOtqox2uB3Y" The output matches your expected output. If you can install Perl it is probably worth your time to do so. It gets really difficult to manage regular expressions across different languages unless they are all PCRE. IMO Perl is better in every way than both Sed and Awk. To modify the original input file with the quoted URLs you could run something like this... $ perl -i -pe 's/(\S+)( |$)+/"\1" /g' elenco.txt However this is dangerous during testing. The original file will be lost unless you have backups. It is probably safer to run something like this... $ perl -pe 's/(\S+)( |$)+/"\1" /g' elenco.txt > updated_elenco.txt Good Luck!
2
2
79,606,201
2025-5-5
https://stackoverflow.com/questions/79606201/how-to-update-a-leaf-variable-in-pytorch
I am trying to implement simple gradient descent to find the root of a quadratic equation using PyTorch. I'm doing this to get a better sense of how the autograd function works but it's not going very well. Let's say that I want to find the roots of y = 3x^2 + 4x + 9 as a random example. Below was my first attempt to run one step of gradient descent and re-calculate the gradient: import torch # Step size alpha = 0.1 # Random starting point x = torch.tensor([42.0], requires_grad=True) # Function y = 3 * x^2 + 4 * x + 9 # Find the minimum of this with gradient descent y = 3 * x ** 2 + 4 * x + 9 y.backward() print(x.grad) with torch.no_grad(): x -= alpha * x.grad y.backward() print(x.grad) This didn't like calling .backward() multiple times, so I updated it to this: import torch # Step size alpha = 0.1 # Random starting point x = torch.tensor([42.0], requires_grad=True) # Function y = 3 * x^2 + 4 * x + 9 # Find the minimum of this with gradient descent y = 3 * x ** 2 + 4 * x + 9 y.backward(retain_graph=True) # <--- Change here print(x.grad) with torch.no_grad(): x -= alpha * x.grad y.backward(retain_graph=True) # <--- And here print(x.grad) and I get the error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). I get the sense that I am fundamentally misunderstanding something about PyTorch here. How would I do this correctly?
In-place operations like x -= ... can break the computation graph if the tensor is a leaf tensor that requires gradients, and the operation is not inside a torch.no_grad() context. This causes a version mismatch error during .backward() if you try to reuse the computation graph or modify variables that are part of it. Gradients in PyTorch accumulate by default for efficiency (useful during mini-batch training). So you must call x.grad.zero_() (or reinitialize x with .detach()) before the next .backward() pass if you're doing manual gradient descent. Updating model parameters (or any tensor that requires gradients) must happen inside torch.no_grad() to prevent PyTorch from tracking the update operation itself. If you don't, the update becomes part of the graph and causes unwanted memory usage and errors. retain_graph=True is only needed if you plan to reuse the same computation graph across multiple backward passes (e.g., higher-order derivatives). In simple gradient descent (with one .backward() per step), there's no need for retain_graph=True. Here is the code with correction: import torch alpha = 0.1 x = torch.tensor([42.0], requires_grad=True) for i in range(10): y = 3 * x ** 2 + 4 * x + 9 y.backward() print(f"Step {i+1}: x = {x.item():.4f}, y = {y.item():.4f}, grad = {x.grad.item():.4f}") with torch.no_grad(): x -= alpha * x.grad x.grad.zero_() # you may replace x.grad.zero_() with x = x.detach().clone().requires_grad_(True) for advanced control.
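For comparison, a minimal sketch of the same minimization using a built-in optimizer instead of the manual update; torch.optim.SGD performs the no_grad update and, via zero_grad(), the gradient reset described above:

import torch

x = torch.tensor([42.0], requires_grad=True)
opt = torch.optim.SGD([x], lr=0.1)

for i in range(10):
    opt.zero_grad()               # clear accumulated gradients
    y = 3 * x ** 2 + 4 * x + 9
    y.backward()                  # compute dy/dx
    opt.step()                    # update x in place, outside the autograd graph
    print(f"Step {i+1}: x = {x.item():.4f}, y = {y.item():.4f}")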
1
3
79,604,883
2025-5-3
https://stackoverflow.com/questions/79604883/is-there-anything-i-need-to-do-to-make-session-data-persistent-across-routes-in
I am working with a Flask project with React as front-end. I just completed the authentication work. When I tried to access the user_id which I stored in session as 'user_data' from another route via an 'axios' request, I wasn't able to access it as it said 'No data found in session'. I have encountered issues like these before which would resolve if I added 'withCredentials: true' to my 'axios' request. I tried it too but nothing worked. So I even made a custom Flask route that just gets the session and if there is no data it prints something like 'No_data'. As I expected it printed 'No_data'. Now I tried to print the session data in the login route itself right after creating it and it works there, but it won't in other routes. So the session is being created and stored but not shared across routes... I will also share my required code. from flask import Flask, request, jsonify, session from datetime import datetime, timedelta from flask_cors import CORS import threading import uuid import firebase_admin from firebase_admin import credentials, firestore app = Flask(__name__) app.secret_key = "HelloWorld" app.config["SESSION_PERMANENT"] = True app.config["PERMANENT_SESSION_LIFETIMT"] = timedelta(days = 1) CORS(app,supports_credentials=True, resources={r"/*": {"origins":"http://localhost:5173"}}) @app.after_request def apply_cors_headers(response): response.headers["Access-Control-Allow-Origin"] = "http://localhost:5173" # Allow your React app's origin response.headers["Access-Control-Allow-Credentials"] = "true" # Allow cookies (sessions) response.headers["Access-Control-Allow-Headers"] = "Content-Type" # Allow content-type headers response.headers["Access-Control-Allow-Methods"] = "POST, GET, OPTIONS" # Allow specific methods return response #Route for axios @app.route('/login', methods=['POST']) def login(): data = request.get_json() usermail = data.get("usermail") password = data.get("password") email_query = user_collection.where("usermail", "==", usermail).stream() user_doc = next(email_query, None) if user_doc is None: return jsonify({"message": "User not registered"}), 404 user_data = user_doc.to_dict() if user_data.get("password") != password: return jsonify({"message": "Invalid credentials"}), 401 telegram_id = user_data.get("telegramID", "") first_login = telegram_id.strip() == "" # Store user data in session session.permanent = True session["user_data"] = user_data session.modified = True print(session["user_data"]) #This prints the session as it is return jsonify({ "message": "Login successful", "first_login": first_login }), 200 @app.route('/addTelegramID', methods=['POST']) def add_telegram_id(): if 'user_data' not in session: #This gets executed, clearly showing the session is not accessible print("Error") data = request.get_json() user_data = session.get('user_data') print(user_data) if user_data: user_id = user_data["user_id"] if not user_data: print("NO data") return jsonify({"message": "Unauthorized"}), 401 telegram_id = data.get("telegramID") if not telegram_id: return jsonify({"message": "Telegram ID is required"}), 400 user_query = user_collection.where("user_id", "==", user_id).stream() user_doc = next(user_query, None) if user_doc is None: return jsonify({"message": "User not found"}), 404 user_collection.document(user_doc.id).update({"telegramID": telegram_id}) return jsonify({"message": "Telegram ID added successfully"}), 200 #Testing route @app.route('/test') def test(): user_data = session.get('user_data') if user_data: return user_data else: return jsonify({"message":"No data"}) #Again this gets executed showing there is no data accessible if __name__ == '__main__': from scheduler import start_scheduler threading.Thread(target=start_scheduler, args=(reminders,), daemon=True).start() app.run(debug=True) Is there anything I am missing?
You should familiarize yourself with the Same Site Policy. This is likely responsible for rejecting session data in the backend. You can either use a proxy in the background or use third-party cookies. You define a proxy in the package.json file when using "Create React App". This forwards requests to the frontend server to the backend server. Both CORS and cookie issues are thus obsolete, as they are the same site. "proxy": "http://localhost:5000" If you use Vite, you can configure the proxy in the vite.config.js file. A description can be found here. However, if you later, during deployment, do not want to run your frontend and backend behind the same proxy server, you can use third-party cookies. The following configuration is required for this. app.config["SESSION_COOKIE_SAMESITE"]="None" app.config["SESSION_COOKIE_SECURE"]=True
2
0
79,605,214
2025-5-4
https://stackoverflow.com/questions/79605214/frida-how-to-send-byte-array-from-javascript-to-python
I have a Frida JS script inside a Python session, and I'm trying to pass an array of bytes (from a Bitmap image) from the JavaScript environment back to the Python environment. Here is my attempt: import frida import sys import os JS_SCRIPT = ''' setTimeout(function () {{ Java.perform(function () {{ // declare dependencies on necessary Java classes const File = Java.use("java.io.File"); const Bitmap = Java.use("android.graphics.Bitmap"); const BitmapCompressFormat = Java.use("android.graphics.Bitmap$CompressFormat"); const BitmapConfig = Java.use("android.graphics.Bitmap$Config"); const ByteArrayOutputStream = Java.use("java.io.ByteArrayOutputStream"); // instantiate a new Bitmap object const bitmap = Bitmap.createBitmap(100, 100, BitmapConfig.ARGB_8888.value); // output bitmap to a byte stream in PNG format const stream = ByteArrayOutputStream.$new(); const saved = bitmap.compress(BitmapCompressFormat.PNG.value, 100, stream); console.log("[*] Compressed as PNG:", saved); // get byte array from byte stream const byteArray = stream.toByteArray(); console.log("[*] Byte array length:", byteArray.length); // send the byte stream to the Python layer send({{ type: "bitmap", page: pageNum }}, byteArray); stream.close(); }}); }}, 1000); ''' def on_message(message, data): if message["type"] == "send" and message["payload"].get("type") == "bitmap": page = message["payload"].get("page") with open(OUTPUT_FILENAME, "wb") as f: f.write(data) print(f"[+] Saved page {page} as {OUTPUT_FILENAME}") else: print(f"[?] Unknown message: {message}") def main(): device = frida.get_usb_device(timeout=5) session = device.attach(pid) script = session.create_script(JS_SCRIPT) script.on("message", on_message) script.load() device.resume(pid) if __name__ == "__main__": main() The problem happens on the call to send() because the second argument byteArray is not a pointer: Error: expected a pointer It's unclear to me how to get byteArray into a format that can be sent using the send() function, and I'm having trouble finding the solution in the Frida API docs.
Frida provides out of the box only methods for sending native byte arrays, i.e. raw data stored in an ArrayBuffer or data at a certain NativePointer. Sending Java byte arrays in an efficient way requires a bit more work as you first have to convert the byte[] into a form that can be serialized by send(). The simplest approach is to convert the byte[] to a String and send it in the first argument of send(). Luckily the hooked process seems to be an Android app, thus we can make use of the Android API to do the conversion: const base64 = Java.use('android.util.Base64'); const base64Data = base64.encodeToString(byteArray, 2); // 2 = Base64.NO_WRAP flag send("BITMAP#" + pageNum + "#" + base64Data); On the Python side you can then split the received string on the # characters and convert the third part from base64 to a byte string. This solution is simple but has a drawback: as the byte array is converted to base64 it exists at least twice in the memory of the Android app. For large byte arrays this can cause problems if the Android app is running out of RAM, thus in such cases you may need to use the second variant of encodeToString (byte[] input, int offset, int len, int flags) that allows you to convert the byte array not all at once but in multiple blocks by specifying offset and length.
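A minimal sketch of the Python side described above, splitting on '#' and base64-decoding the payload; the "BITMAP" tag, the page field and the output filename are illustrative assumptions taken from the JavaScript snippet:

import base64

def on_message(message, data):
    if message["type"] != "send":
        return
    payload = message["payload"]
    if isinstance(payload, str) and payload.startswith("BITMAP#"):
        _, page, b64 = payload.split("#", 2)   # tag, page number, base64 data
        png_bytes = base64.b64decode(b64)
        with open(f"page_{page}.png", "wb") as f:
            f.write(png_bytes)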
1
1
79,605,465
2025-5-4
https://stackoverflow.com/questions/79605465/no-module-named-pip-in-venv-but-pip-installed
I work in WSL Ubuntu. After installing python3.13, dependencies from my previous projects stopped working. The venv with python 3.12 stopped activating in the vscode interface. ErrorMessage: An Invalid Python interpreter is selected, please try changing it to enable features such as IntelliSense, linting, and debugging. See output for more details regarding why the interpreter is invalid. The command "source venv/bin/activate" works but all libraries "could not be resolved Pylance". When I try to reinstall I see the error: ModuleNotFoundError: No module named 'pip' But pip is installed in this venv. I can see its folders, init and main files etc. I have pip for python3.12 and I have -m venv for python 3.12. I can recreate this venv, but why can't I turn it on properly? I tried reinstalling venv and pip for python3.12. python3 -m ensurepip is ok: Requirement already satisfied: pip in /usr/lib/python3/dist-packages (24.0) But python3.12 -m ensurepip gives: ensurepip is disabled in Debian/Ubuntu for the system python. python3.12 -m pip --version is working, as are python3.12 -m pip install and python3.12 -m pip --upgrade pip
I ran into the exact same problem after installing Python 3.13 on WSL. Suddenly, all my existing virtual environments (created with Python 3.12) broke in VSCode. I was getting the "Invalid Python interpreter" error, Pylance couldn't resolve any imports, and pip appeared to be missing, even though I could see it in the venv/bin folder. Here's what fixed it for me: First, check what your system python3 now points to: python3 --version which python3 In my case, it was now Python 3.13, which explains why stuff started breaking. Your virtual environment still points to the Python 3.12 binary internally, but VSCode (and maybe even pip) is trying to use 3.13 instead. You can confirm that by looking at the pyvenv.cfg file inside your venv: cat venv/pyvenv.cfg You should see something like: home = /usr/bin/python3.12 If that's the case, then you just need to tell VSCode to use that exact interpreter. Open the command palette (Ctrl+Shift+P) in VSCode, choose "Python: Select Interpreter", and manually select the path to your virtualenv's Python binary: /path/to/your/venv/bin/python Also, double-check the shebang in your pip script: head -n 1 venv/bin/pip If it says #!/usr/bin/python3, that might now point to Python 3.13, which breaks the venv. You can fix this by rebuilding the venv with the correct Python version: python3.12 -m venv --upgrade-deps venv Or, if that doesn't work cleanly: rm -rf venv python3.12 -m venv venv source venv/bin/activate pip install -r requirements.txt And yeah, ensurepip being disabled for system Python is normal on Ubuntu. Just make sure you have the necessary packages installed: sudo apt install python3.12-venv python3.12-distutils Once I manually selected the right interpreter in VSCode and fixed the pip shebang, everything worked again: IntelliSense, linting, imports, etc. Hope that helps.
4
3
79,604,226
2025-5-2
https://stackoverflow.com/questions/79604226/performance-of-list-extend-slice-vs-islice
It seems that even when islice would theoretically be better, in practice, it is slower than just using slice. So I am a bit puzzled by the difference in performance between the usage of slice and islice here: from time import perf_counter from itertools import islice from random import choices from string import ascii_letters import sys test_str = "".join(choices(ascii_letters, k=10 ** 7)) arr1 = [] t0 = perf_counter() for _ in range(10): arr1.extend(test_str[slice(1, sys.maxsize)]) t1 = perf_counter() print('%5.1f ms ' % ((t1 - t0) * 1e3)) arr2 = [] t0 = perf_counter() for _ in range(10): arr2.extend(islice(test_str, 1, sys.maxsize)) t1 = perf_counter() print('%5.1f ms ' % ((t1 - t0) * 1e3)) 552.6 ms 786.0 ms To justify why I believe the difference to be unexpected, here is my mental model for how the execution goes for both cases: Strict Slicing: Allocate tmp, where tmp is the immutable string from 1th index to the end of the original string. Acquire an iterator over tmp. Create string objects for each individual character. Pass those string objects to be appended to the list. Garbage collect tmp. Lazy Slicing: Acquire an iterator over test_str, discarding the first value. Create string objects for each individual character. Pass those string objects to be appended to the list. I don't see how allocating the whole string object at once just to be garbage collected later leads to still better performance, especially considering the fact that both slicing methods are implemented in C.
The additional iteration layer of islice is far more costly than the string slices. Allocation optimization by length hint is insignificant, at least on the three systems where I tried this. With the string slices, the extend method iterates directly over the string (slice). With islice, it instead iterates over the islice iterator, which in turn iterates over the string. Every single character gets requested and passed through this additional iterator. Benchmark results: System 1 System 2 System 3 Py 3.11.10 Py 3.13.3 Py 3.13.0 extend_slice 552 ± 2 ms 413 ± 2 ms 823 ± 6 ms extend_islice 849 ± 5 ms 648 ± 1 ms 1225 ± 28 ms just_slice 4 ± 0 ms 7 ± 0 ms 20 ± 1 ms iterate_slice 179 ± 1 ms 231 ± 1 ms 304 ± 8 ms iterate_islice 523 ± 1 ms 366 ± 2 ms 680 ± 3 ms extend_slice_len 553 ± 3 ms 414 ± 1 ms 840 ± 12 ms extend_slice_nolen 558 ± 2 ms 430 ± 1 ms 842 ± 11 ms extend_islice_nolen 852 ± 1 ms 649 ± 1 ms 1227 ± 22 ms extend_islice_len 848 ± 2 ms 636 ± 1 ms 1195 ± 39 ms extend_slice and extend_islice are your tests; we see time differences similar to what you saw. just_slice just creates the string slices but does nothing with them. We see they're very cheap in comparison. iterate_slice and iterate_islice don't extend a list, instead they just iterate the slices/islices (in a fast way). We see that the slices get iterated much faster than the islices. The last four rows show times for your tests again, but with the slices/islices wrapped in little objects that provide or don't provide length hints. We only see little/unclear effects of having / not having length hints. Benchmark code (system 3 has a low time limit, so instead of best 5 of 100 rounds I used best 3 of 7 there): from time import perf_counter from itertools import islice from random import choices from string import ascii_letters from collections import deque from statistics import mean, stdev import random import sys def extend_slice(): arr = [] for _ in range(10): arr.extend(test_str[slice(1, sys.maxsize)]) def extend_islice(): arr = [] for _ in range(10): arr.extend(islice(test_str, 1, sys.maxsize)) def just_slice(): for _ in range(10): test_str[slice(1, sys.maxsize)] iterate = deque(maxlen=0).extend def iterate_slice(): for _ in range(10): iterate(test_str[slice(1, sys.maxsize)]) def iterate_islice(): for _ in range(10): iterate(islice(test_str, 1, sys.maxsize)) class WithLen: def __init__(self, iterable): self.iterable = iterable def __iter__(self): return iter(self.iterable) def __len__(self): return len(test_str) - 1 class NoLen: def __init__(self, iterable): self.iterable = iterable def __iter__(self): return iter(self.iterable) def extend_slice_len(): arr = [] for _ in range(10): arr.extend(WithLen(test_str[slice(1, sys.maxsize)])) def extend_slice_nolen(): arr = [] for _ in range(10): arr.extend(NoLen(test_str[slice(1, sys.maxsize)])) def extend_islice_len(): arr = [] for _ in range(10): arr.extend(WithLen(islice(test_str, 1, sys.maxsize))) def extend_islice_nolen(): arr = [] for _ in range(10): arr.extend(NoLen(islice(test_str, 1, sys.maxsize))) funcs = [ extend_slice, extend_islice, just_slice, iterate_slice, iterate_islice, extend_slice_len, extend_slice_nolen, extend_islice_nolen, extend_islice_len, ] test_str = "".join(choices(ascii_letters, k=10 ** 7)) def print(*a, p=print, f=open('out.txt', 'w')): p(*a) p(*a, file=f, flush=True) times = {f: [] for f in funcs} def stats(f): ts = [t * 1e3 for t in sorted(times[f])[:5]] return f'{mean(ts):4.0f} ± {stdev(ts):2.0f} ms ' return f'{mean(ts):5.1f} ± {stdev(ts):3.1f} ms ' for _ in range(100): print(_) for f in random.sample(funcs, len(funcs)): t0 = perf_counter() f() t1 = perf_counter() times[f].append(t1 - t0) for f in funcs: print(stats(f), f.__name__) print('\nPython:', sys.version)
5
1
79,604,815
2025-5-3
https://stackoverflow.com/questions/79604815/creating-a-list-of-integer-lists-that-have-a-fixed-length-and-contain-integers-t
I am trying to write some code that will generate all lists of a fixed length that have the property that the next integer in each list will either be same or an increment of the previous integer. All lists should start with 0. I can write code that does this for size 4 BUT it uses 3 nested loops. If I want to continue down this path I will have to have n - 1 nested loops. Just to be clear I will give some examples for length 5: [0, 0, 0, 0, 0] [0, 0, 1, 1, 1] [0, 1, 2, 2, 2] [0, 1, 2, 3 ,3] The code will return all such lists of length 5. Instead of down voting, could you help me improve my question? Here is my (really bad but working) code: import math def index_list(x): IL = [] for i in [0,1]: for j in [0,1,2]: for k in [0,1,2,3]: if i <= j and j <= k and abs(i-j) < 2 and abs(j-k) < 2: IL.append([x, x + i, x + j, x + k]) return IL print(index_list(0)) this produces: [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 1, 2], [0, 1, 1, 1], [0, 1, 1, 2], [0, 1, 2, 2], [0, 1, 2, 3]] I have included a argument x as i wish to have the code to sometimes do the same thing but with a different starting index.
I am providing several solutions; I think you may like one of them. First: from itertools import product def generate_sequences(n, x=0): if n == 0: return [] deltas = product([0, 1], repeat=n-1) sequences = [] for delta_seq in deltas: sequence = [x] current = x for delta in delta_seq: current += delta sequence.append(current) sequences.append(sequence) return sequences def print_sequences(sequences, max_display=10): print(f"Total sequences: {len(sequences)}") print("Sample sequences:") for i, seq in enumerate(sequences[:max_display]): print(seq) if len(sequences) > max_display: print(f"... (showing first {max_display} of {len(sequences)})") if __name__ == "__main__": print("\nSequences starting with 0, length 4:") print_sequences(generate_sequences(4, 0)) Second: def generate_sequences(n, x=0): if n <= 0: return [] sequences = [] stack = [(x, [x], n - 1)] while stack: last_val, current_seq, remaining = stack.pop() if remaining == 0: sequences.append(current_seq) continue stack.append((last_val, current_seq + [last_val], remaining - 1)) stack.append((last_val + 1, current_seq + [last_val + 1], remaining - 1)) return sequences def print_sequences(sequences, max_display=10): print(f"Total sequences: {len(sequences)}") print("Sample sequences:") for i, seq in enumerate(sequences[:max_display]): print(seq) if len(sequences) > max_display: print(f"... (showing first {max_display} of {len(sequences)})") if __name__ == "__main__": for length in [4, 5]: print(f"\nSequences of length {length}:") sequences = generate_sequences(length) print_sequences(sequences) print("\nSequences starting with 5, length 3:") print_sequences(generate_sequences(3, 5)) Third: def generate_sequences(n, x=0): def helper(current, remaining): if remaining == 0: return [current] last = current[-1] next_values = [last, last + 1] sequences = [] for val in next_values: sequences.extend(helper(current + [val], remaining - 1)) return sequences if n == 0: return [] return helper([x], n - 1) print(generate_sequences(5))
2
2
79,603,414
2025-5-2
https://stackoverflow.com/questions/79603414/unexpected-keyword-in-createsuperuser-django
I am working with AbstractBaseUser and AbstractUser, and I have a problem with a required field. models.py from django.db import models from django.conf import settings from django.contrib.auth.models import User, AbstractBaseUser, BaseUserManager from django.utils.timezone import timedelta, now from django.core.exceptions import ValidationError # File validation function def validate_file_type(value): allowed_types = ["application/pdf", "application/msword", "application/vnd.openxmlformats-officedocument.wordprocessingml.document"] if value.content_type not in allowed_types: raise ValidationError("Only PDF and Word documents are allowed.") class CustomUserManager(BaseUserManager): """Manager for CustomUser""" def create_user(self, email, password=None, role="customer"): if not email: raise ValueError("Users must have an email address") user = self.model(email=self.normalize_email(email), role=role) user.set_password(password) user.save(using=self._db) return user def create_superuser(self, email, password=None): user = self.create_user(email, password, role="admin") user.is_staff = True user.is_superuser = True user.save(using=self._db) return user class CustomUser(AbstractBaseUser): """Custom user model using email authentication""" ROLE_CHOICES = [ ("vendor", "Vendor"), ("customer", "Customer"), ("admin", "Admin"), ] email = models.EmailField(unique=True) role = models.CharField(max_length=10, choices=ROLE_CHOICES, default="customer") is_active = models.BooleanField(default=True) is_staff = models.BooleanField(default=False) # Required for Django admin is_superuser = models.BooleanField(default=False) # Required for superuser checks objects = CustomUserManager() # Use the custom manager USERNAME_FIELD = "email" # Set email as the primary login field REQUIRED_FIELDS = ["role"] def __str__(self): return self.email admin.py from django.contrib import admin from django.contrib.auth.admin import UserAdmin from django.contrib.auth import get_user_model CustomUser = get_user_model() class CustomUserAdmin(UserAdmin): """Admin panel customization for CustomUser""" model = CustomUser list_display = ("email", "role", "is_staff", "is_active") ordering = ("email",) search_fields = ("email",) fieldsets = ( (None, {"fields": ("email", "password")}), ("Roles & Permissions", {"fields": ("role", "is_staff", "is_superuser", "is_active")}), ) add_fieldsets = ( (None, { "classes": ("wide",), "fields": ("email", "role", "password1", "password2", "is_staff", "is_superuser", "is_active") }), ) # Remove `groups` since it's not part of CustomUser filter_horizontal = [] # Instead of filter_horizontal = ["groups"] list_filter = ("role", "is_staff", "is_active") # No 'groups' admin.site.register(CustomUser, CustomUserAdmin) When I ran python manage.py createsuperuser, it asked for my email, password, and role, which is what I expected it to do. However, when I hit return, it said that the role is an unexpected keyword. How do I fix this? The Error: Traceback (most recent call last): File "/PycharmProjects/RFPoject2/manage.py", line 22, in <module> main() File "/PycharmProjects/RFPoject2/manage.py", line 18, in main execute_from_command_line(sys.argv) File "/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line utility.execute() File "/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/base.py", line 416, in run_from_argv self.execute(*args, **cmd_options) File "/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 90, in execute return super().execute(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/core/management/base.py", line 460, in execute output = self.handle(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/PycharmProjects/RFPoject2/.venv/lib/python3.11/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 239, in handle self.UserModel._default_manager.db_manager(database).create_superuser( TypeError: CustomUserManager.create_superuser() got an unexpected keyword argument 'role' Does createsuperuser require something special to make this work? Thanks!
Add a role='admin' parameter to the create_superuser(..) method, we can add an extra check that it is indeed admin: class CustomUserManager(BaseUserManager): def create_user(self, email, password=None, role="customer", **kwargs): if not email: raise ValueError("Users must have an email address") user = self.model(email=self.normalize_email(email), role=role, **kwargs) user.set_password(password) user.save(using=self._db) return user def create_superuser(self, email, password=None, role='admin', **kwargs): assert role == 'admin' user = self.create_user(email, password, role="admin", **kwargs) user.is_staff = True user.is_superuser = True user.save(using=self._db) return user
1
2
79,604,283
2025-5-3
https://stackoverflow.com/questions/79604283/palindromes-and-string-slicing-performance
There are a lot of ways to check if string is a palindrome. Plenty of them listed here This question is not about "how" but rather about performance. I was assuing that is_palindrome should be twice faster than is_palindrome0 because it does len/2 iterations in the worst case. However, in reality, is_palindrome takes more than 30 seconds to check all the strings while is_palindrome0 gets the job done in less than half of a second! Test-case def is_palindrome(s): for i in range(len(s) // 2): if s[i] != s[len(s)-i-1]: return 0 return 1 def is_palindrome0(s): if s == s[::-1]: return 1 else: return 0 N = 500 L = 99999 sss = '101' * L import time start = time.time() print(sum([1 for i in range(N) if is_palindrome0(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') start = time.time() print(sum([1 for i in range(N) if is_palindrome(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') Output 168 0.40 168 34.20 Any ideas why string slicing is so crazy fast? How to debug further? Apologies if I missed answered question with in-depth performance comparison. UPDATE. Taking into account Frank's comment. def is_palindrome(s): l = len(s) for i in range(l // 2): if s[i] != s[l-i-1]: return 0 return 1 def is_palindrome0(s): if s == s[::-1]: return 1 else: return 0 N = 500 L = 99999 sss = '101' * L import time start = time.time() print(sum([1 for i in range(N) if is_palindrome0(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') start = time.time() print(sum([1 for i in range(N) if is_palindrome(sss+sss[i:])])) end = time.time() print(f'{(end - start):.2f}') A bit faster but still around 50x times slower. 168 0.41 168 25.11
is_palindrome0 (the fast one): s[::-1] uses C-level optimizations under the hood (it's implemented in highly efficient C code in CPython). String comparisons (==) are also optimized for short-circuiting; they stop early if a mismatch is found. The entire operation is happening in compiled code with no explicit Python loops. is_palindrome (the slow one): This is pure Python, interpreted line by line. Each index access like s[i] and s[len(s) - i - 1] is an individual bytecode operation. Python for loops and indexing are orders of magnitude slower than equivalent operations in C. The main performance killer here is actually in the line is_palindrome(sss+sss[i:]). For each iteration, in your code: You create a new string via concatenation (sss+sss[i:]) This new string is ~600,000 characters long Then your function iterates through half of it
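A minimal timing sketch, not the original benchmark, that isolates the per-character loop cost from the concatenation by building one palindromic string up front and timing only the two checks on it:

import timeit

s = '101' * 99999
s = s + s[::-1]            # a guaranteed palindrome of ~600,000 characters, built once

def loop_check(s):
    l = len(s)
    for i in range(l // 2):
        if s[i] != s[l - i - 1]:
            return 0
    return 1

def slice_check(s):
    return 1 if s == s[::-1] else 0

# Both checks see the same input, so any remaining gap is the Python-level
# loop and indexing overhead versus the C-level reverse-and-compare.
print(timeit.timeit(lambda: loop_check(s), number=10))
print(timeit.timeit(lambda: slice_check(s), number=10))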
3
8
79,604,132
2025-5-2
https://stackoverflow.com/questions/79604132/pil-image-by-writing-a-matplotlib-figure-to-bytesio-buffer-not-working
A function I am working on takes a dictionary with some data, plots based on the data to axes of a noninteractive matplotlib figure (so without showing it) and renders that figure to an PIL image that is saved to the dictionary. The updated dictionary is returned. That returned dictionary is converted to a pandas.DataFrame where I want to be able to view the image very quickly, which should work because it has been rendered already. Writing to a BytesIO() buffer has shown to be good in terms of performance, but it behaves irrationally, because the saved image is not displayable. If I open the buffer using a "with" statement - which I thought was the standard for reading/writing files & buffers in python - then the rendered image cannot be displayed. e.g. IPython.display.display(image) only returns the handle; image.show() gives an I/O error. I have reproduced this behaviour in the to_PIL_direct() function. See the error/unwanted behaviour at very bottom of the code. It is possible to circumvent this by: converting the buffer to bytes and reading the PIL image from bytes as done in to_PIL_bytes() or not using a "with" statement for the buffer at all as done in to_PIL_noclose() I would like to use a function that gives the desired output as seen for images from to_PIL_bytes() and to_PIL_noclose(), that are fast and also close & delete the buffer. The latter is not happening with to_PIL_noclose(), which seems odd to me as well as some others regarding this topic: See answer on How to convert Matplotlib figure to PIL Image object (without saving image) kotchwane and the comment by Anton Troitsky. Converting to bytes to then read with a different PIL function as in to_PIL_bytes() feels like an unnecessary detour and also slows down the code (see the gist below) The original spark for wanting to implement this came from dizcza's answer on Save plot to numpy array, specifically the plot2() function they provide: import io import matplotlib matplotlib.use('agg') # turn off interactive backend import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() ax.plot(range(10)) def plot1(): ... def plot2(): with io.BytesIO() as buff: fig.savefig(buff, format='png') buff.seek(0) im = plt.imread(buff) def plot3(): ... My question is how can I make the to_PIL_direct() function work without resorting to a different idea or large workaround f.e. making a canvas and drawing it, saving to a file etc .. MWE is in this link to the gist with the notebook. Happy about any help!
It gives the error ValueError: I/O operation on closed file. because PIL.Image.open() is "lazy" - it doesn't load the image data at once, only when you try to use the image. Here you try to display it after leaving with io.BytesIO() as buffer:, so the buffer is already closed and PIL can't read from it. You may use .load() to force it to load the image: img = PIL.Image.open(buffer) img.load() # <-- force PIL to load image def to_PIL_direct(figure_dict_passed): figure_dict = copy.deepcopy(figure_dict_passed) figure_dict["info"] = "info" mpl.use('agg') fig, ax = plt.subplots(1,1) ax.scatter(np.arange(100_000),np.random.randn(100_000)) with io.BytesIO() as buffer: fig.savefig(buffer, bbox_inches='tight') buffer.seek(0) img = PIL.Image.open(buffer) img.load() # <-- force PIL to load image figure_dict["figure_img_direct"] = img plt.close() return figure_dict For other people: a useful page in the Pillow docs (found by @vboettcher): File Handling in Pillow
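A small usage sketch, assuming the imports from the question are in scope; because .load() pulled the pixel data into memory, the image stays usable after the buffer is gone (the empty dict and the filename are placeholders):

result = to_PIL_direct({})
img = result["figure_img_direct"]
print(img.size)          # fully loaded, so no I/O error here
img.save("check.png")    # can still be saved or displayed later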
1
1
79,604,183
2025-5-2
https://stackoverflow.com/questions/79604183/pandas-time-series-dataframe-take-random-samples-by-group-date-ignore-missing
I have a time series dataframe, I would like to take x random samples from column "temperature" from each day. I am able to do this with: daily_groups = df.groupby([pd.Grouper(key='time', freq='D')])['temperature'].apply(lambda x: x.sample(10)) This works if there are at least x samples for each day. If there are not I get the error "a must be greater than 0 unless no samples are taken". What I would like is, lets say I want 10 samples, get 10 if available, if not get as many as possible and if there are none skip this day. I don't want to up sample data. Also I don't know if its possible to return the original dataframe with the values filtered for items mention above, instead of returning the groupby series object. Thanks for any help.
If you want to randomly sample values from the "temperature" column for each day, up to 10 values per day, but also want to handle days with fewer than 10 entries, here's my suggestion on how to do it. This code checks how many rows there are per day β€” if there are fewer than 10, it just takes as many as possible. If there are zero, it skips that day entirely. The best part is that it gives you back the original DataFrame rows, not the Series from Groupby. import pandas as pd def sample_temperature(group, n=10): if len(group) == 0: return pd.DataFrame() # return empty if there is nothing in the group return group.sample(min(len(group), n)) # Make sure the 'time' column is in datetime format df['time'] = pd.to_datetime(df['time']) # Group by day and sample sampled_df = (df.groupby(pd.Grouper(key='time', freq='D')).apply(lambda g: sample_temperature(g, n=10)).reset_index(drop=True))
2
1
79,604,129
2025-5-2
https://stackoverflow.com/questions/79604129/how-can-i-annotate-a-function-that-takes-a-union-and-returns-one-of-the-types-i
Suppose I want to annotate this function: def add_one(value): match value: case int(): return value + 1 case str(): return value + " and one more" case _: raise TypeError() I want to tell the type checker "This function can be called with an int (or subclass) or a str (or subclass). In the former case it returns an int, and in the latter a str." How can this be accomplished? Here are some of my failed attempts: Attempt 1 def add_one(value: int | str) -> int | str: ... This is too loose. The type checker no longer knows that the returned type is similar to the argument. Passing an int might return a str. Attempt 2 def add_one[T: int | str](value: T) -> T: ... This is incorrect. It doesn't return literally the same type as the argument. If passed an IntEnum it returns int, and for a StrEnum it returns str. Attempt 3 def add_one[T: (int, str)](value: T) -> T: ... This is better, but now I can't call add_one with Union[int, str]. These ques-tions talk about the differences between bounds and constraints, but I lack the brainpower to use them to solve my problem. Attempt 4 @overload def add_one(value: int) -> int: ... @overload def add_one(value: str) -> str: ... def add_one(value: int | str) -> int | str: # Put real implementation here ... This is the best I can do. It does the right thing, but requires me to type out the function signature for every possible type it handles. Doesn't seem like much for two types, but my real code already has 7 or 8, and I intend to add more. It also requires me to manually expand any Union. Is there some better way to tell the type checker "Here's a Union and a T. Make the T be whatever union arg matched."?
One solution I can think of is to mix attempts 2, 3 and 4: (playgrounds: Mypy, Pyright) @overload def add_one[T: (str, int, bytes)](value: T) -> T: ... @overload def add_one[T: str | int | bytes](value: T) -> T: ... def add_one(value: str | int | bytes) -> str | int | bytes: ... class S(StrEnum): A = '' class I(IntEnum): B = 0 class B(bytes, Enum): C = b'' def f(si: str | int, sb: str | bytes, ib: int | bytes, sib: str | int | bytes) -> None: reveal_type(add_one('')) # str reveal_type(add_one(0)) # int reveal_type(add_one(b'')) # bytes reveal_type(add_one(S.A)) # str reveal_type(add_one(I.B)) # int reveal_type(add_one(B.C)) # bytes reveal_type(add_one(si)) # str | int reveal_type(add_one(sb)) # str | bytes reveal_type(add_one(ib)) # int | bytes reveal_type(add_one(sib)) # str | int | bytes This has one minor problem in that the second overload would be marked as overlapping with the first. It should be fine to # type: ignore it. Additionally, unions of subtypes like S | I won't be handled correctly; an upcast (v: str | int = S.A if bool() else I.B) would be necessary to make it work.
2
2
79,604,001
2025-5-2
https://stackoverflow.com/questions/79604001/pandas-memory-issue-when-apply-list-to-groupby
I am doing the below but getting memory issues. make frame data = {'link': [1,2,3,4,5,6,7], 'code': ['xx', 'xx', 'xy', '', 'aa', 'ab', 'aa'], 'Name': ['Tom', 'Tom', 'Tom', 'Tom', 'nick', 'nick', 'nick'], 'Age': [20,20,20,20, 21, 21, 21]} # Create DataFrame df = pd.DataFrame(data) print(df) output link code Name Age 0 1 xx Tom 20 1 2 xx Tom 20 2 3 xy Tom 20 3 4 Tom 20 4 5 aa nick 21 5 6 ab nick 21 6 7 aa nick 21 minimal code example that works on subset of data but not on full dataset. temp = df.groupby(['Name', 'Age'])['code'].apply(list).reset_index() pd.merge(df, temp, on=['Name', 'Age']).explode('code_y').replace(r'^\s*$', np.nan, regex=True).dropna(subset='code_y').drop_duplicates() output error when used on full dataset. ### MemoryError: Unable to allocate 5.34 TiB for an array with shape (733324768776,) and data type object The apply list makes a big long list with duplicates. Is there a way to drop dups from the lists or is there maybe a better way to do this? update Going with this code as it seem to work best. # Select relevant columns and drop duplicates directly d = df[['code', 'Name', 'Age']].replace('', pd.NA).drop_duplicates() # Perform the merge and drop rows with missing 'code_y' in one step df.merge(d, how='outer', on=['Name', 'Age']).dropna(subset=['code_y'])
Your method is inefficient as it explodes then drops the duplicates. Ensure to drop the duplicates first then merge: d = df.mask(df.eq(''))[['code', 'Name', 'Age']].drop_duplicates() df.merge(d, how = 'outer', on = ['Name', 'Age']).dropna(subset='code_y') link code_x Name Age code_y 0 1 xx Tom 20 xx 1 1 xx Tom 20 xy 3 2 xx Tom 20 xx 4 2 xx Tom 20 xy 6 3 xy Tom 20 xx 7 3 xy Tom 20 xy 9 4 Tom 20 xx 10 4 Tom 20 xy 12 5 aa nick 21 aa 13 5 aa nick 21 ab 14 6 ab nick 21 aa 15 6 ab nick 21 ab 16 7 aa nick 21 aa 17 7 aa nick 21 ab
1
1
79,603,555
2025-5-2
https://stackoverflow.com/questions/79603555/how-to-set-a-fixed-random-state-in-randomizedsearchcv
I'm using RandomizedSearchCV with RandomForestClassifier in scikit-learn. I want to make sure my results are reproducible across runs. Where should I set the random_state - in the classifier, in RandomizedSearchCV, or both? Example code: from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import RandomizedSearchCV clf = RandomForestClassifier() search = RandomizedSearchCV(clf, param_distributions=params, n_iter=10) What's the best practice to ensure consistent results?
You can perform a simple test using the starter code given in the RandomizedSearchCV examples. In that code, a random_state is set both in the classifier and in RandomizedSearchCV. Running a loop of, let's say, 50 iterations and printing the outcomes (that is, .best_params_) will show the following: setting random_state both in RandomizedSearchCV and in the classifier/regressor will always give the same outcome; setting random_state in just one of those will give different outcomes across iterations. So the conclusion is that if you need reproducibility, you need to set this parameter in both places, because separate random generators are used in each. It is also worth checking some more information on the numbers used, from this post, as well as the official glossary entry concerning random_state. The code: from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression from sklearn.model_selection import RandomizedSearchCV from scipy.stats import uniform iris = load_iris() for i in range(50): logistic = LogisticRegression(solver='saga', tol=1e-2, max_iter=200, random_state=0) distributions = dict(C=uniform(loc=0, scale=4), penalty=['l2', 'l1']) clf = RandomizedSearchCV(logistic, distributions, random_state=0) search = clf.fit(iris.data, iris.target) print(search.best_params_)
2
2
79,603,499
2025-5-2
https://stackoverflow.com/questions/79603499/training-a-custom-tokenizer-with-huggingface-gives-weird-token-splits-at-inferen
So I trained a tokenizer from scratch using Huggingface's tokenizers library (not AutoTokenizer.from_pretrained, but actually trained a new one). Seemed to go fine, no errors. But when I try to use it during inference, it splits words in weird places. Even pretty common ones like "awesome" or "terrible" end up getting split into multiple subwords like aw, ##es, ##ome, etc. I expected a fresh tokenizer to do better with those kinds of words since I saw them in the training data. Here's how I trained the tokenizer (simplified version): from tokenizers import BertWordPieceTokenizer files = ["data.txt"] # contains one text per line tokenizer = BertWordPieceTokenizer(lowercase=True) tokenizer.train(files=files, vocab_size=3000, min_frequency=2, special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]) tokenizer.save_model("my_tokenizer") And this is how I use it later: from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("my_tokenizer") text = "this movie was awesome and I loved the acting" tokens = tokenizer.tokenize(text) print(tokens) Which gives me: ['this', 'movie', 'was', 'aw', '##es', '##ome', 'and', 'i', 'loved', 'the', 'acting'] So like... why is "awesome" getting split into 3 tokens? That word appears in the training file multiple times, definitely more than the min_frequency of 2. I even checked the vocab file, and I don't see "awesome" as a full token in there. I tried: increasing vocab_size to 10k (same issue) lowering min_frequency to 1 turning off lowercase checking the vocab.txt, still doesn't have full words I expect Maybe I'm misunderstanding how the tokenizer learns or builds its vocab? Or is there something I'm doing wrong during training? If needed I can share a dummy version of the data.txt file I used. It's just a list of simple sentences like: this movie was awesome terrible film acting was good i loved it Would appreciate any ideas, not sure if this is expected behavior or if I messed something up in how I'm training it.
Yeah, this actually comes up a lot when training a tokeniser from scratch. Just because a word shows up in your training data doesn't mean it will end up in the vocab. It depends on how the tokeniser is building things. Even if "awesome" appears a bunch of times, it might not make it into the vocab as a full word. WordPiece tokenisers don't just add whole words automatically. They try to balance coverage and compression, so sometimes they keep subword pieces instead. If you want common words like that to stay intact, here are a few things you can try: Increase vocab_size to something like 8000 or 10000. With 3000, you are going to see a lot of splits. Lowering min_frequency might help, but only if the word is just barely making the cut. Check the text file you're using to train. If "awesome" shows up with different casing or punctuation, like "Awesome" or "awesome,", it might be treated as separate entries. Also make sure it's not just appearing two or three times in a sea of other data. That might not be enough for it to get included. Another thing to be aware of is that when you load the tokeniser using BertTokenizer.from_pretrained(), it expects more than just a vocab file. It usually looks for tokenizer_config.json, special_tokens_map.json, and maybe a few others. If those aren't there, sometimes things load strangely. You could try using PreTrainedTokenizerFast instead, especially if you trained the tokeniser with the tokenizers library directly. You can also just check vocab.txt and search for "awesome". If it's not in there as a full token, that would explain the split you are seeing. Nothing looks broken in your code. This is just standard behaviour for how WordPiece handles vocab limits and slightly uncommon words. I've usually had better results with vocab sizes in the 8 to 16k range when I want to avoid unnecessary token splits.
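As a quick way to verify the vocab point above, here is a minimal sketch using the same tokenizers library as in the question; the my_tokenizer/vocab.txt path is the one produced by save_model in the training snippet, and everything else is just an illustration:

from tokenizers import BertWordPieceTokenizer

# Reload the trained WordPiece vocab and check whether "awesome" survived as a full token
tok = BertWordPieceTokenizer("my_tokenizer/vocab.txt", lowercase=True)
vocab = tok.get_vocab()
print("awesome" in vocab)  # False would explain the aw / ##es / ##ome split
print(tok.encode("this movie was awesome").tokens)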
1
1
79,602,017
2025-5-1
https://stackoverflow.com/questions/79602017/correct-type-annotations-for-generator-function-that-yields-slices-of-the-given
I'm using Python 3.13 and have this function: def chunk(data, chunk_size: int): yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) I want to give it type annotations to indicate that it can work with bytes, bytearray, or a general collections.abc.Sequence of any kind, and have the return type be a Generator of the exact input type. I do not want the return type to be a union type of all possible inputs (e.g. bytes | bytearray | Sequence[T]) because that's overly-wide; I want the precise type that I happen to put in to come back out the other end. Calling chunk on a bytes should return Generator[bytes], etc. Since bytes and bytearray both conform to Sequence[T], my first attempt was this: def chunk[T](data: Sequence[T], chunk_size: int) -> Generator[Sequence[T]]: yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) But this has a covariance issue- the return type is Sequence[T], not bytes, and pyright complains when I pass the return into a function that takes a bytes parameter (def print_bytes(b: bytes) -> None: ...): error: Argument of type "Sequence[int]" cannot be assigned to parameter "b" of type "bytes" in function "print_bytes" "Sequence[int]" is not assignable to "bytes" (reportArgumentType) So then I tried using a type constraint: "chunk can take any Sequence and returns a Generator of that type." def chunk[T: Sequence](data: T, chunk_size: int) -> Generator[T]: yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) This time, pyright complains about the function itself: error: Return type of generator function must be compatible with "Generator[Sequence[Unknown], Any, Any]" "Generator[Sequence[Unknown], None, Unknown]" is not assignable to "Generator[T@chunk, None, None]" Type parameter "_YieldT_co@Generator" is covariant, but "Sequence[Unknown]" is not a subtype of "T@chunk" Type "Sequence[Unknown]" is not assignable to type "T@chunk" (reportReturnType) I'll admit to not fully understanding the complaint here- We've established via the type constraint that T is a Sequence, but pyright doesn't like it and I'm assuming my code is at fault. Using typing.overload works: @typing.overload def chunk[T: bytes | bytearray](data: T, chunk_size: int) -> Generator[T]: ... @typing.overload def chunk[T](data: Sequence[T], chunk_size: int) -> Generator[Sequence[T]]: ... def chunk(data, chunk_size: int): yield from (data[i : i + chunk_size] for i in range(0, len(data), chunk_size)) In this case, pyright is able to pick the correct overload for all of my uses, but this feels a little silly- there's 2x as much typing code as actual implementation code! What are the correct type annotations for my chunk function that returns a Generator of the specific type I passed in?
You can define a Protocol that defines the behaviour when the object is sliced and then use that as the bound for your generic argument: from collections.abc import Generator, Sized from typing import Protocol, Self class Sliceable(Sized, Protocol): def __getitem__(self: Self, key: slice, /) -> Self: ... def chunk[T: Sliceable](data: T, chunk_size: int) -> Generator[T]: yield from ( data[i : i + chunk_size] for i in range(0, len(data), chunk_size) ) Which can be tested using: byte_value = b"0123456789" def print_bytes(b: bytes) -> None: ... for byte_ch in chunk(byte_value, 10): print_bytes(byte_ch) str_value = "abcdefghijklmnopq" def print_string(b: str) -> None: ... for str_ch in chunk(str_value, 10): print_string(str_ch) list_value = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] def print_list(b: list) -> None: ... for list_ch in chunk(list_value, 10): print_list(list_ch) pyright fiddle mypy fiddle
3
5
79,601,482
2025-5-1
https://stackoverflow.com/questions/79601482/2-laser-beams-number-of-intersections-in-a-mirror-problem
I've got an algorithmic problem that I've not been able to solve. Would appreciate any help with this The main problem: Two laser beams, blue and red color, are shot into a mirror and bounces around. Find the number of intersections between them. Example: red = [2, 7, 8, 15, 20] blue = [3, 4, 5, 7, 10, 16, 21] Should give me 7 red = [10, 20, 30] blue = [1, 10, 20] Should give me 3 My attempt and issue I've been able to solve the second example by setting up pairs of start and end, and checking for each iteration. But this won't work for the first example, as there are 2 intersections between red = [2] and blue = [3]. I circled this part in the picture. Any suggestion on what I should do would be appreciated. Code def count_intersections(red, blue): intersections = 0 # 0 as the starting point red_points = [0] + red blue_points = [0] + blue # Create intervals for red beam red_intervals = [] for i in range(len(red_points) - 1): if i % 2 == 0: red_intervals.append((red_points[i], red_points[i + 1], "Top")) else: red_intervals.append((red_points[i], red_points[i + 1], "Bottom")) # Create intervals for blue beam blue_intervals = [] for i in range(len(blue_points) - 1): if i % 2 == 0: blue_intervals.append((blue_points[i], blue_points[i + 1], "Top")) else: blue_intervals.append((blue_points[i], blue_points[i + 1], "Bottom")) # Check all red intervals against all blue intervals for r_start, r_end, r_start_pos in red_intervals: for b_start, b_end, b_start_pos in blue_intervals: # Check if the intervals overlap and the paths cross if (r_start <= b_start and r_end >= b_end) or (r_start >= b_start and r_end <= b_end): print("check intersects") print(r_start, r_end) print(b_start, b_end) print("next") intersections += 1 return intersections red = [2, 7, 8, 15, 20] blue = [3, 4, 5, 7, 10, 16, 21] # red = [10, 20, 30] # blue = [1, 10, 20] # [(0, 10), (10, 20), (20, 30)] # [(0, 1), (1, 10), (10, 20)] print(count_intersections(red, blue)) Thanks.
The applicable technique is called a sweep line algorithm. Imagine a vertical line moving from left to right, and consider every "point of interest" at which you get new relevant information. You only need to process the situations at the points of interest, and there are relatively few of those. For your problem, the points of interest are just the bounce points, because everything that happens between them is predictable. To solve it, just check which laser is higher at every bounce point. Let cmp = 1 if red is higher than blue, cmp=-1 if blue is higher than red, or cmp=0 if they're the same. Then you just add 1 intersection to the count every time cmp transitions 1 -> 0, 1 -> -1, -1 -> 0 or -1 -> 1. The code is super easy: def count_intersections(red, blue): redh = 1 # red starts at the top blueh = 1 # blue starts at the top cmp = 0 # They are equal height intersections = 1 # That counts as an intersection redpos = 0 bluepos = 0 while redpos < len(red) and bluepos < len(blue): oldcmp = cmp if red[redpos] == blue[bluepos]: # simultaneous bounce redpos += 1 bluepos += 1 redh = -redh blueh = -blueh cmp = (redh - blueh) // 2 elif red[redpos] < blue[bluepos]: # red bounce first redpos += 1 redh = -redh cmp = redh else: # blue bounce first bluepos += 1 blueh = -blueh cmp = -blueh if cmp != oldcmp and oldcmp != 0: intersections += 1 return intersections red = [2, 7, 8, 15, 20] blue = [3, 4, 5, 7, 10, 16, 21] print(count_intersections(red,blue)) # 7 red = [10, 20, 30] blue = [1, 10, 20] print(count_intersections(red,blue)) # 3
3
4
79,603,404
2025-5-2
https://stackoverflow.com/questions/79603404/why-does-randomizedsearchcv-sometimes-return-worse-results-than-manual-tuning-in
I'm working on a classification problem using scikit-learn's RandomForestClassifier. I tried using RandomizedSearchCV for hyperparameter tuning, but the results were worse than when I manually set the parameters based on intuition and trial/error. Here's a simplified version of my code: from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import RandomizedSearchCV param_dist = { "n_estimators": [100, 200, 300], "max_depth": [None, 10, 20, 30], "min_samples_split": [2, 5, 10], "min_samples_leaf": [1, 2, 4], } clf = RandomForestClassifier(random_state=42) random_search = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=10, cv=5, scoring='accuracy') random_search.fit(X_train, y_train) In multiple runs, this approach yields models with lower accuracy on my test set than my manually-tuned model. What are common pitfalls when using RandomizedSearchCV? How can I ensure reproducibility and robustness of the tuning process?
RandomizedSearchCV can give worse results than manual tuning due to a few common reasons: Too few iterations - n_iter=10 may not explore enough parameter combinations. Poor parameter grid - Your grid might miss optimal values or be too coarse. Inconsistent random seeds - Different runs can yield different results if random_state isn't set. Improper CV splits - Use StratifiedKFold for balanced class sampling. Wrong scoring metric - Make sure scoring aligns with your real objective (e.g., accuracy, f1).
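A rough sketch putting those points together; the parameter grid, n_iter and scoring values below are illustrative assumptions, not recommendations:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

param_dist = {
    "n_estimators": [100, 200, 300, 500],
    "max_depth": [None, 10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # balanced, reproducible folds
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),  # seed the model itself
    param_distributions=param_dist,
    n_iter=50,            # explore more combinations than the question's 10
    cv=cv,
    scoring="f1_macro",   # align with the metric you actually care about
    random_state=42,      # seed the sampling of parameter candidates
    n_jobs=-1,
)
# search.fit(X_train, y_train)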
2
2
79,603,298
2025-5-2
https://stackoverflow.com/questions/79603298/what-is-the-oldest-leap-year-in-pandas
I'm working with a day of year column that ranges from 1 to 366 (to account for leap years). I need to convert this column into a date for a specific task and I would like to set it to a year that is very unlikely to appear in my time series. Is there a way to set it to the oldest leap year of pandas? import pandas as pd # here is an example where the data have already been converted to datetime object # I just missed the year to set dates = pd.Series(pd.to_datetime(['2023-05-01', '2021-12-15', '2019-07-20'])) first_leap_year = 2000 # this is where I don't know what to set new_dates = dates.apply(lambda d: d.replace(year=first_leap_year))
The documentation for the pandas.Timestamp type says: Timestamp is the pandas equivalent of python's Datetime and is interchangeable with it in most cases. So we can look up the Python documentation for datetime objects, where we find: Like a date object, datetime assumes the current Gregorian calendar extended in both directions; like a time object, datetime assumes there are exactly 3600*24 seconds in every day. In other words, it assumes that the current rules for calculating leap years apply at any point in history, even though they were actually introduced in 1582, and adopted by different countries over the next few centuries. (The technical term for this is a "proleptic Gregorian calendar".) Standard Python has a datetime.MINYEAR constant: The smallest year number allowed in a date or datetime object. MINYEAR is 1. So the lowest year divisible by 4 (and not by 100, so meeting the Gregorian definition of leap year as well as the Julian one) would be 4. However, Pandas also has pandas.Timestamp.min: Timestamp.min = Timestamp('1677-09-21 00:12:43.145224193') (In case you're wondering, that's 2^63 nanoseconds before January 1, 1970, i.e. the limit of a 64-bit signed integer with nanosecond resolution.) So you probably want a year after 1677, meaning the earliest available year would be 1680.
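A small sketch of what that looks like for a day-of-year column, using 1680 as the base year; the day_of_year values below are made up for illustration:

import pandas as pd

day_of_year = pd.Series([1, 60, 366])  # hypothetical day-of-year values
dates = pd.to_datetime(day_of_year - 1, unit="D", origin=pd.Timestamp("1680-01-01"))
print(dates)  # 1680 is a leap year, so day 366 maps to 1680-12-31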
2
8
79,602,340
2025-5-1
https://stackoverflow.com/questions/79602340/why-is-my-python-parser-method-returning-empty-strings
I am trying to write a simple parser method in Python. It takes in a filename pointing to a file of a certain format. An example of this format is below: File: .type = INFILE .fmt = CFP_INPUTFILE_FMT_2 Data: .cases .given = True .numCases = 2 .case .numlines = 2 .line .value = 3 .value = 3 .line .value = 3 .value = 3 .case .line .value = 3 .value = 3 .line .value = 3 .value = 3 The parser reads this file and outputs the raw data to a terminal via subprocess.run() and prints the output. The data printed for the file should look like this: 2 33 33 33 33 The method itself looks something like this: @classmethod def input_file_fmt_1_to_input(cls, inputfile: str): with open('temp.txt', 'w') as temp: try: with open(inputfile, 'r') as file: case = 0 line = 0 case1line1 = '' case1line2 = '' case2line1 = '' case2line2 = '' for lne in file.readlines(): cleanline = lne.lstrip().rstrip() if cleanline.startswith('.numCases'): split = cleanline.split(' ') value = str(split[2]) + '\n' temp.write(value) elif cleanline.startswith('.case'): case += 1 elif cleanline.startswith('.line'): line += 1 elif cleanline.startswith('.value') and case == 1 and line == 1: case1line1 = case1line1 + str(cleanline.split(' ')[2]) + ' ' elif cleanline.startswith('.value') and case == 1 and line == 2: case1line2 = case1line2 + str(cleanline.split(' ')[2]) + ' ' elif cleanline.startswith('.value') and case == 2 and line == 1: case2line1 = case2line1 + str(cleanline.split(' ')[2]) + ' ' elif cleanline.startswith('.value') and case == 2 and line == 2: case2line2 = case2line2 + str(cleanline.split(' ')[2]) + ' ' else: pass if case1line1 != '': temp.write(case1line1) temp.write('\n') elif case1line2 != '': temp.write(case1line2) temp.write('\n') elif case1line3 != '': temp.write(case2line1) temp.write('\n') elif case1line4 != '': temp.write(case2line2) temp.write('\n') result = run('cat temp.txt', shell=True, capture_output=True) print(result.stdout) except FileNotFoundError as e: raise CfpRuntimeError from e I've written a pytest test for this method, which looks like this: def test_input_file_fmt_1_to_input_test(capsys): cfp_testcontext.InputParser.input_file_fmt_1_to_input('../../TEST/modules_test/testfile.cfpin') captured = capsys.readouterr() assert captured.out == '2\n3 3 \n3 3 \n3 3 \n3 3 ' When I run the test, I get an assertion error: assert "b''\nb''\nb''\nb''\n" == "2\n3 3 \n3 3 \n3 3 \n3 3 " Where are the strings are being transformed into byte strings? Why does my result contain only empty strings? I have tried writing a a fake method and fake test using capsys to see if the output was empty, and if it was a byte string. The output was a byte string, so I believe that the conversion is happening from within pytest. However, the function, which wrote 'hello' to the console, worked perfectly, so I am pretty sure there is something wrong with my logic.
The issue is that: result = subprocess.run(['cat', 'temp.txt'], capture_output=True) returns a CompletedProcess[bytes], so result.stdout is a byte string, not a regular string. You can fix this by passing text=True: result = subprocess.run(['cat', 'temp.txt'], capture_output=True, text=True) I'd recommend avoiding subprocess altogether here. Running an external process just to read a file is unnecessary - I would say a bit cringe. A better approach is to separate concerns: Refactored design: _parse_lines(inputfile: str) -> list[str] - parses the file and returns the output as a list of strings. input_file_fmt_1_to_input(...) - handles writing to temp.txt and printing, without using cat. Notes: This avoids subprocess overhead entirely. If you are using Python 3.10+, you should probably use Structural Pattern Matching, as it fits well here and improves readability; it was introduced in the language for exactly this kind of purpose. The test now passes. Output lines no longer have trailing spaces - unlike the original version. class CfpRuntimeError(Exception): pass class InputParser: @staticmethod def _parse_lines(inputfile: str) -> list[str]: try: with open(inputfile, 'r') as file: output_lines = [] current_case = [] num_cases = 0 inside_line = False for line in file: match line.strip().split('='): case '.numCases ', value: num_cases = int(value.strip()) output_lines.append(str(num_cases)) case '.line', *_: if current_case: output_lines.append(' '.join(current_case)) current_case = [] inside_line = True case '.value ', val: if inside_line: current_case.append(val.strip()) case '.case', *_: if current_case: output_lines.append(' '.join(current_case)) current_case = [] inside_line = False case _: continue if current_case: output_lines.append(' '.join(current_case)) return output_lines except FileNotFoundError as e: raise CfpRuntimeError from e @classmethod def input_file_fmt_1_to_input(cls, inputfile: str): output_lines = cls._parse_lines(inputfile) with open('temp.txt', 'w') as temp: for line in output_lines: temp.write(line + '\n') print('\n'.join(output_lines)) InputParser.input_file_fmt_1_to_input('example.txt') import pytest def test_input_file_fmt_1_to_input_test(capsys: pytest.CaptureFixture): InputParser.input_file_fmt_1_to_input('example.txt') captured = capsys.readouterr() assert captured.out == '2\n3 3\n3 3\n3 3\n3 3\n'
3
4
79,603,209
2025-5-2
https://stackoverflow.com/questions/79603209/privategpt-listing-ingested-document-filenames
I'm new to LLMs and need to extract the file names of files that have been already ingested into PrivateGPT that the system uses to answer questions. I can list the doc_ids using : from pgpt_python.client import PrivateGPTApi client = PrivateGPTApi(base_url="http://localhost:8001") # Health print(client.health.health()) # List ingested docs for doc in client.ingestion.list_ingested().data: print(doc.doc_id) which gives an output of : e019a7be-3b0d-45b6-a1f6-195735a20725 8d59d127-3432-47e4-8beb-5465fdb1e72d 8df9068b-fa3c-42b5-b987-0bee211bce0a b56fb71a-fd3e-4cd9-9728-3b70ff045162 dbebc4a6-29af-4ac9-b311-0a53b74cbb4f e9f28c23-5d08-4660-aa4c-8f2389901583 68da2416-57ff-45c7-9af6-dc70175d1a15 f6880d64-434c-4527-8705-df67f19a6dfc 84e66ed4-aa0f-4058-a1b9-19d7a89d8d95 etc However haven't been able to find the mechanism to list the current 5 files that the system has ingested. Any help appreciated.
According to PrivateGPT API reference here (https://docs.privategpt.dev/api-reference/api-reference/ingestion/ingest-file), in addition to the doc_id property, which seems to be what you're getting, you should also be able to get the file's metadata in doc_metadata property, where the file name should be. Try running this instead: # List ingested docs for doc in client.ingestion.list_ingested().data: print(doc.doc_metadata) Have a look if the file names are in the metadata property, then you can parse the property to output only the file name.
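If the names do turn out to be in the metadata, you could then collect the distinct file names along these lines, reusing the client from the snippet above; note that the exact metadata key ('file_name' below) is an assumption and may differ depending on how the documents were ingested:

file_names = set()
for doc in client.ingestion.list_ingested().data:
    meta = doc.doc_metadata or {}
    name = meta.get("file_name")  # hypothetical key - check the printed metadata first
    if name:
        file_names.add(name)
print(file_names)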
1
2
79,602,749
2025-5-2
https://stackoverflow.com/questions/79602749/changing-the-page-numbering-index-in-reportlab
UPDATE: See second code block for the solution The question may seem simple but I haven't been able to find anything related to it: How to start the page numbering index at a given page? For example, let's consider a document composed of a front page, a table of content and then the document's content itself. How can I start the page numbering at the first section of this document instead of starting it at the first page? Note that I am not trying not to display the page number (which is trivial) but change which page is considered as the first page. Below is an example: import os from reportlab.lib.pagesizes import A4 from reportlab.platypus import BaseDocTemplate, PageTemplate, Frame, Paragraph, PageBreak, NextPageTemplate from reportlab.platypus.tableofcontents import TableOfContents from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle from reportlab.lib.units import inch # Importing inch unit class Example(BaseDocTemplate): def __init__(self, fname, **kwargs): super().__init__(fname, **kwargs) frame_example = Frame(self.leftMargin - 0.5*inch, self.bottomMargin + 0.25*inch, self.width + 1*inch, self.height - 0.5*inch, id='example') # Define some page templates to display the page number self.addPageTemplates([PageTemplate(id='front_page', frames=frame_example, onPage=self.doFootingsFrontPage), PageTemplate(id='other_pages', frames=frame_example, onPage=self.doFootings)]) self.elements = [] # Defining some basic paragrah styles for the example ps1 = ParagraphStyle(fontName='Helvetica', fontSize=14, name='frontPage_PS', leftIndent=20, firstLineIndent=-20, spaceBefore=5, leading=16) ps2 = ParagraphStyle(fontName='Helvetica', fontSize=12, name='header_PS', leftIndent=20, firstLineIndent=-20, spaceBefore=5, leading=16) ps3 = ParagraphStyle(fontName='Helvetica', fontSize=10, name='text_PS', leftIndent=20, firstLineIndent=-20, spaceBefore=5, leading=16) # Storing the styles into a class attribute so that we can call them later on self.styleSheet = getSampleStyleSheet() for style in [ps1, ps2, ps3]: self.styleSheet.add(style) # Generate the front page self.doFrontPage() # Initialize the TOC toc = TableOfContents(dotsMinLevel=0) toc.levelStyles = [self.styleSheet['header_PS']] # Add the TOC self.elements.append(toc) self.elements.append(PageBreak()) for n in range(2): self.doOtherPage(n) # Build the document self.multiBuild(self.elements) def afterFlowable(self, flowable): "Registers TOC entries." if flowable.__class__.__name__ == 'Paragraph': text = flowable.getPlainText() style = flowable.style.name if style == 'header_PS': self.notify('TOCEntry', (0, text, self.page)) def doFootings(self, canvas, doc): # Create the footer x = A4[0]-128 y = 40 canvas.saveState() txtFooting = "Page {}".format(int(canvas._pageNumber)) canvas.drawString(x, y, txtFooting) canvas.restoreState() def doFootingsFrontPage(self, canvas, doc): # Create the footer x = 50 y = 40 canvas.saveState() txtFooting = "I am the front page - I dont want to have a page number" canvas.drawString(x, y, txtFooting) canvas.restoreState() def doFrontPage(self): txt = 'This is the front page' self.elements.append(Paragraph(txt, self.styleSheet['frontPage_PS'])) self.elements.append(NextPageTemplate("other_pages")) self.elements.append(PageBreak()) def doOtherPage(self, n): txt = 'Header {:.0f}'.format(n+1) self.elements.append(Paragraph(txt, self.styleSheet['header_PS'])) txt ='Who stole my page number? 
I should be Page {:.0f}'.format(n+1) for ii in range(10): self.elements.append(Paragraph(txt, self.styleSheet['text_PS'])) self.elements.append(PageBreak()) if __name__ == '__main__': fname = os.path.abspath(os.path.join('.', 'example_reportlab.pdf')) Report = Example(fname) In this example a PDF with 4 pages is generated, the front page, the table of content and two filling sections for the example. What I would like to obtain is to have the page numbering starting at the first section of the document (which is currently the page 3) and have it reflected into the Table of Content (i.e., Header 1 would be at page 1, Header 2 page 2 ...). Or even better, be able to index the first pages i, ii, iii ... then switch to a new numbering scheme 1, 2, 3 ... when a certain part of the document is reached. To my understanding it should be possible to do it; I found this example which states: In real world documents there is another complication. You might have a fancy cover or front matter, and the logical page number 1 used in printing might not actually be page 1. Likewise, you might be doing a batch of customer docs in one RML job. So, in this case we have a more involved expression, and use the evalString tag to work out the number we want. In this example we did this by creating a name for the first page after the cover, <namedString id="page1"> <evalString default="XXX"> <pageNumber/>-+1 </evalString> </namedString>... This says 'work out the page number of the cover, add 1 and store that in the variable "page1" for future use' But I haven't been able to make anything out of it. I also had a look at the tests implemented by the library (test_platypus_toc.py for an example) but haven't found anything relevant to my question. I am using Python 3.12.9, and reportlab 3.6.13. Before it is asked, no this answer is not what I am looking for (unless I tried it wrongly). 
Below is the solution I came up with following Salt answer: import os from reportlab.lib import colors from reportlab.lib.pagesizes import A4 from reportlab.platypus import BaseDocTemplate, PageTemplate, Frame, Paragraph, PageBreak, NextPageTemplate from reportlab.platypus.tables import TableStyle from reportlab.platypus.tableofcontents import TableOfContents from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle from reportlab.lib.units import inch # Importing inch unit class Example(BaseDocTemplate): def __init__(self, fname, **kwargs): super().__init__(fname, **kwargs) self.start_page_number = None frame_example = Frame(self.leftMargin - 0.5*inch, self.bottomMargin + 0.25*inch, self.width + 1*inch, self.height - 0.5*inch, id='example') self.addPageTemplates([PageTemplate(id='front_page', frames=frame_example, onPage=self.doFootingsFrontPage), PageTemplate(id='other_pages', frames=frame_example, onPage=self.doFootings)]) self.elements = [] # Defining some basic paragrah style for the example ps1 = ParagraphStyle(fontName='Helvetica', fontSize=14, name='frontPage_PS', leftIndent=20, firstLineIndent=-20, spaceBefore=5, leading=16) ps2 = ParagraphStyle(fontName='Helvetica', fontSize=12, name='header_PS', leftIndent=20, firstLineIndent=-20, spaceBefore=5, leading=16) ps3 = ParagraphStyle(fontName='Helvetica', fontSize=10, name='text_PS', leftIndent=20, firstLineIndent=-20, spaceBefore=5, leading=16) # Storing the styles into a class attribute so that we can call them later on self.styleSheet = getSampleStyleSheet() for style in [ps1, ps2, ps3]: self.styleSheet.add(style) defaultTableStyle = TableStyle([ ('VALIGN', (0,0), (-1,-1), 'TOP'), ('RIGHTPADDING', (0,0), (-1,-1), 0), ('LEFTPADDING', (0,0), (-1,-1), 10), ('LINEBEFORE', (0,0), (0, -1), 2.5, colors.blue) ]) # Generate each page one by one self.doFrontPage() # Initialize the TOC toc = TableOfContents(dotsMinLevel=0) toc.levelStyles = [self.styleSheet['header_PS']] toc.tableStyle = defaultTableStyle # Add the TOC self.elements.append(toc) self.elements.append(PageBreak()) for n in range(2): self.doOtherPage(n) # Build the document self.multiBuild(self.elements) def afterFlowable(self, flowable): "Registers TOC entries." if flowable.__class__.__name__ == 'Paragraph': text = flowable.getPlainText() style = flowable.style.name if style == 'header_PS': if self.start_page_number is None: self.start_page_number = self.page page_number = self.page - self.start_page_number + 1 self.notify('TOCEntry', (0, text, page_number)) def doFootings(self, canvas, doc): # Create the footer x = A4[0]-128 y = 40 if self.start_page_number is not None: page_number = self.page - self.start_page_number + 1 if page_number < 1: return canvas.saveState() txtFooting = "Page {}".format(int(page_number)) canvas.drawString(x, y, txtFooting) canvas.restoreState() def doFootingsFrontPage(self, canvas, doc): # Create the footer x = 50 y = 40 canvas.saveState() txtFooting = "I am the front page - I dont want to have a page number" canvas.drawString(x, y, txtFooting) canvas.restoreState() def doFrontPage(self): txt = 'This is the front page' self.elements.append(Paragraph(txt, self.styleSheet['frontPage_PS'])) self.elements.append(NextPageTemplate("other_pages")) self.elements.append(PageBreak()) def doOtherPage(self, n): txt = 'Header {:.0f}'.format(n+1) self.elements.append(Paragraph(txt, self.styleSheet['header_PS'])) txt ='Who stole my page number? 
I should be Page {:.0f}'.format(n+1) for ii in range(10): self.elements.append(Paragraph(txt, self.styleSheet['text_PS'])) self.elements.append(PageBreak()) if __name__ == '__main__': fname = os.path.abspath(os.path.join('.', 'example_reportlab.pdf')) Report = Example(fname) I followed Salt's steps with the main difference being that I estimate the displayed page number instead of incrementing it. The displayed page number is then considered as nothing more than Reportlab's logical_page_number shifted by a fixed offset (self.start_page_number in my updated code). The +1 in page_number = self.page - self.start_page_number + 1 is only here to avoid indexing starting from 0.
ReportLab always uses physical page numbers, so to start numbering from a specific page (e.g., after the TOC), you need to manage it manually. Track your own logical_page_number, set a flag when the real content starts, and from that point on: Increment your counter on each page. Use it in the footer instead of canvas.getPageNumber(). Use it in TOC entries via notify(). This way, the visible numbering and TOC both start from 1 at the point you choose.
2
1
79,601,938
2025-5-1
https://stackoverflow.com/questions/79601938/conda-installed-scipy-v1-15-2-does-not-contain-gaussian-function
Running Spyder 6.0.5 with Python version 3.11.11 and IPython 8.34.0. From within the Console window in Spyder I attempted to install Scipy with the following command: conda install -c conda-forge scipy=1.15.2 All seems to be ok, no error messages and a request to restart the kernel, which I do. However when I try to run a program that uses the "Gaussian" function from scipy.signal I get the following error message: from scipy.signal import gaussian ImportError: cannot import name 'gaussian' from 'scipy.signal' (C:\Users\USERNAME\AppData\Local\spyder-6\envs\spyder-runtime\Lib\site-packages\scipy\signal\__init__.py) I've tried to instead install Scipy from Windows command prompt with pip but it's made no impact. Could anyone please advise as to what is occurring here. From what I've seen there had been issues previously that newer versions of Scipy had addressed, but the advice in those guides has not helped me. Thanks
Try from scipy.signal.windows import gaussian
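For example, a minimal usage sketch (the window length and standard deviation here are arbitrary):

from scipy.signal.windows import gaussian

window = gaussian(51, std=7)  # replaces the removed scipy.signal.gaussian
print(window.shape)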
1
1
79,601,812
2025-5-1
https://stackoverflow.com/questions/79601812/python-columns-must-be-same-length-as-key-when-splitting-a-column
I have two address columns and I want to extract the last word from the first column and the first word from the second column. In the provided example there aren't two words in column 'Address2', but I want to build the code in such a way that it will work regardless of how the dataset looks. Sometimes the address2 can be one word, sometimes it will have 2, etc. data = { 'Address1': ['3 Steel Street', '1 Arnprior Crescent', '40 Bargeddie Street Blackhill'], 'Address2': ['Saltmarket', 'Castlemilk', 'Blackhill'] } df = pd.DataFrame(data) I have no problem with column 'Address1': df[['StringStart', 'LastWord']] = df['Address1'].str.rsplit(' ', n=1, expand=True) The problem comes with column 'Address2' where if I apply the above code I get an error: Columns must be same length as key I understand where the problem is coming from - I am trying to split one column which has one element into two columns. I am sure there is a way in which this can be handled to allow the split anyway and return Null if there isn't a word and a value if there is.
Using str.extract() might be better for several reasons: it handles all cases, offers precision with regular expressions, and eliminates the risk of value errors. import pandas as pd data = { 'Address1': ['3 Steel Street', '1 Arnprior Crescent', '40 Bargeddie Street Blackhill'], 'Address2': ['Saltmarket', 'Castlemilk East', 'Blackhill'] } df = pd.DataFrame(data) df[['StringStart', 'LastWord']] = df['Address1'].str.rsplit(' ', n=1, expand=True) df[['FirstWord_Address2', 'Remaining_Address2']] = ( df['Address2'].str.extract(r'^(\S+)\s*(.*)$') ) print(df) Or: df[['Address1_Prefix', 'Address1_LastWord']] = df['Address1'].str.extract(r'^(.*\b)\s+(\S+)$') df[['Address2_FirstWord', 'Address2_Remaining']] = df['Address2'].str.extract(r'^(\S+)\s*(.*)$') Output: Address1 Address2 StringStart LastWord FirstWord_Address2 Remaining_Address2 0 3 Steel Street Saltmarket 3 Steel Street Saltmarket 1 1 Arnprior Crescent Castlemilk East 1 Arnprior Crescent Castlemilk East 2 40 Bargeddie Street Blackhill Blackhill 40 Bargeddie Street Blackhill Blackhill
4
3
79,601,344
2025-5-1
https://stackoverflow.com/questions/79601344/how-to-get-the-key-and-values-from-a-dictionary-to-display-them-on-a-django-page
I want to build a page that has each author with their quote. I have tried, but everything I tried has failed. The function below is what causes me the issues. quotes = { "Arthur Ashe": "Start where you are, Use what you have, Do what you can.", "Steve Jobs": "Don't watch the clock; do what it does. Keep going.", "Sam Levenson": "Don't watch the clock; do what it does. Keep going.", " Robert Collier": "Success is the sum of small efforts, repeated day in and day out.", "Nelson Mandela": "It always seems impossible until it's done.", "Mahatma Gandhi": "The future depends on what you do today.", "Zig Ziglar": "You don't have to be great to start, but you have to start to be great.", "Dave": "Discipline is doing what needs to be done, even if you don't want to do it.", "Suzy Kassem": "Doubt kills more dreams than failure ever will.", "Pablo Picasso": "Action is the foundational key to all success." } def mypage(request): messages = [quotes[item] for item in quotes] authors = [item for item in quotes] return render(request, "quotes/mypage.html", {"authors": authors, "quotes":messages})
You can pass the entire dictionary: def mypage(request): messages = [quotes[item] for item in quotes] authors = [item for item in quotes] return render(request, 'quotes/mypage.html', {'quotes': quotes}) and then in the template, enumerate over the .items() of the quotes, so: {% for author, quote in quotes.items %} {{author}} said: "{{ quote }}" {% endfor %}
1
3
79,601,719
2025-5-1
https://stackoverflow.com/questions/79601719/attributeerror-messagebox-cant-take-multiple-functions
I tried to add multiple functions to a messagebox in my program: from Tkinter import * from Tkinter import messagebox as msgbox *some code...* input1 = msgbox.askyesno.showwarning('title', 'blahblahblah') if input1 == 1: input2 = msgbox.askyesno.showwarning('title', 'blahblahblah') if input2 == 1: function() But it shows me this error: input1 = msgbox.askyesno.showwarning('title', 'blahblahblah') ^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'function' object has no attribute 'showwarning' I tried changing the order of the functions but it didn't help. Can anyone help??? EDIT: I got a comment saying it can't have multiple functions, so I guess i'll use VBS instead. Thanks for the help though! ANOTHER EDIT: I got it! I can just use this: title = "title" description = "blahblahblah" if MessageBox(description, title, MB_ICONWARNING | MB_YESNO) == IDNO: sys.exit(0)
I think you're getting the error because askyesno is already a function inside tkinter.messagebox, not a module or object. So when you write msgbox.askyesno.showwarning(...), you're trying to access .showwarning on a function, which doesn't make sense - hence the AttributeError. You need to call either askyesno or showwarning, depending on what you want the messagebox to do.
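For instance, if the intent is a yes/no prompt followed by a warning, the two calls can be made separately; this is a rough sketch with placeholder titles and messages:

import tkinter as tk
from tkinter import messagebox as msgbox

root = tk.Tk()
root.withdraw()  # hide the empty root window

if msgbox.askyesno('title', 'blahblahblah'):   # returns True or False
    msgbox.showwarning('title', 'blahblahblah')  # just displays a warning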
1
1
79,601,644
2025-5-1
https://stackoverflow.com/questions/79601644/how-to-await-messages-and-not-affect-the-main-event-loop
I am writing a simple websocket script which registers & unregisters clients and then broadcasts random messages to them in a interval of 5s. This is the code: import asyncio, websockets, random, string import websockets.asyncio.server class web_socket(websockets.asyncio.server.ServerConnection): pass connections: set[web_socket] = set() connections_lock = asyncio.Lock() async def register(websocket: web_socket): async with connections_lock: connections.add(websocket) async def unregister(websocket: web_socket): async with connections_lock: connections.discard(websocket) async def cleanup_connections(): while True: async with connections_lock: closed_clients = [client for client in connections if client.close_code == 1000] for client in closed_clients: connections.discard(client) await asyncio.sleep(1) async def handler(websocket:web_socket): await register(websocket) try: await websocket.wait_closed() except Exception as e: print(f"Error: {e}") async def random_messages(): while True: message = '-'.join(random.choices(string.ascii_letters + string.digits, k=10)) async with connections_lock: send_tasks = [client.send(message) for client in connections if client.close_code != 1000] if send_tasks: await asyncio.gather(*send_tasks) print(f"Broadcasted message: {message}") await asyncio.sleep(5) async def respond_to_messages(): while True: async with connections_lock: if connections: for client in connections: message = await client.recv() if message: print(message) else: await asyncio.sleep(1) async def main(): async with websockets.serve(handler, "0.0.0.0", 8765): asyncio.create_task(random_messages()) asyncio.create_task(cleanup_connections()) asyncio.create_task(respond_to_messages()) await asyncio.Future() asyncio.run(main()) As soon as someone connects, the broadcasting stops and the loop gets stuck at the message await. How do I make sure that listening to messages sent by clients do not affect the main loop? I tried this: message = await asyncio.wait(client.recv()) Now the client can connect without stopping the main loop but it throws an error: Task exception was never retrieved future: <Task finished name='Task-5' coro=<respond_to_messages() done, defined at /home/j/socket-test/server.py:44> exception=TypeError('expect a list of futures, not coroutine')> Traceback (most recent call last): File "/home/j/socket-test/server.py", line 49, in respond_to_messages message = await asyncio.wait(client.recv()) File "/usr/lib/python3.10/asyncio/tasks.py", line 366, in wait raise TypeError(f"expect a list of futures, not {type(fs).__name__}") TypeError: expect a list of futures, not coroutine /usr/lib/python3.10/asyncio/base_events.py:1910: RuntimeWarning: coroutine 'Connection.recv' was never awaited handle = None # Needed to break cycles when an exception occurs. RuntimeWarning: Enable tracemalloc to get the object allocation traceback It throws this error and keeps going. The client gets all the broadcast messages but whatever the client sends maybe isn't received but definitely doesn't get printed. I know that asycnio.wait() expects a list of futures, so I tried explicitly passing it like this: asyncio.wait([client.recv()]). Running this gave a deprecation warning: DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11. message = await asyncio.wait([client.recv()]) and the client must send a message every time to receive a single broadcast message. 
If I do this: asyncio.wait([await client.recv()]), then I get raise TypeError('An asyncio.Future, a coroutine or an awaitable ' TypeError: An asyncio.Future, a coroutine or an awaitable is required error and the client must send a message to start receiving the broadcast and then it keeps coming. I tried GPT but it just keeps telling me to use message = await client.recv() which was the original problem. I also tried to run the original code with coroutine threadsafe: async def main(): async with websockets.serve(handler, "0.0.0.0", 8765): asyncio.create_task(random_messages()) asyncio.create_task(cleanup_connections()) loop = asyncio.get_running_loop() asyncio.run_coroutine_threadsafe(respond_to_messages(), loop=loop) await asyncio.Future() While this did not produce any errors, the client still needs to send the first message to start receiving broadcast. If this is something that is the way to go, that would be good to know. Another thing to note: Only the first client has to send a message, other ones do not. They start receiving broadcast upon connecting without issues. Any help is appreciated. If you can point me to any article or something, that's fine too. I am a beginner so please keep that in mind.
In respond_to_messages, you're basically holding the lock forever. Inside the top-level loop you grab the lock and then start iterating through the connections; on each connection you await client.recv(), which blocks (holding the lock) until a new message arrives. You need to release the lock before you call .recv(). This raises the question of what exactly the lock is protecting. Let's say it's the connection list itself - you want to avoid overlapping connections.add() calls - but each individual websocket object can probably be used concurrently. You usually want to spend the absolute minimum amount of time holding the lock. In this case that means you'll want to make a copy of the connection list inside the lock, and then iterate through it, acknowledging that you might not have the absolute-most-current values. async def respond_to_messages(): while True: async with connections_lock: my_connections = list(connections) for client in my_connections: message = await client.recv() if message: print(message) The next problem you'll run into is conceptually similar: because you wait for messages serially, and client.recv() is blocking, you'll wait for the first socket to send a message back before doing anything on the next one. You'd really like all of these receives to happen together. You can write a function that handles all of the traffic on a single websocket: async def respond_to_connection(client: ServerConnection) -> None: """Handle all of the messages on a client and return when the websocket closes.""" async for message in client: print(message) Now you need to create a new concurrent copy of that function every time you accept a new websocket. You're already using asyncio.create_task() and that will work here; an asyncio.TaskGroup is potentially a better tool. Note that you need to save a copy of the returned task, and I might keep that together with the websocket. from asyncio import Lock, Task, create_task from dataclasses import dataclass @dataclass class ActiveConnection: """A websocket and the task handling it.""" client: web_socket task: Task connections: dict[web_socket, ActiveConnection] = {} connections_lock = Lock() async def register(websocket: web_socket): task = create_task(respond_to_connection(websocket)) active_connection = ActiveConnection(websocket, task) async with connections_lock: connections[websocket] = active_connection async def unregister(websocket: web_socket): async with connections_lock: active_connection = connections.pop(websocket, None) if active_connection: active_connection.task.cancel() So every time we get a new connection, we start a new task and save the task and connection together; and every time we shut down a connection, we cancel the task (assuming it hasn't completed on its own). Again notice that we're doing as much work as possible outside the async with connections_lock critical block, and limiting that block to only inserting and deleting from the dictionary.
1
1
79,600,873
2025-4-30
https://stackoverflow.com/questions/79600873/cant-align-rsa-encryption-in-python-and-kotlin
I would like to add RSA encryption in my server (Python FastAPI) and my Android app. But the encryption didn't work as the way I expected. I already have AES-GCM encryption/decryption working between my Python and Kotlin code. However, my RSA attempts in Python and Kotlin won't interoperate with each other. The Python (server) RSA code can decrypt what the Python RSA code encrypts, and the Kotlin (app) RSA code can decrypt what the Kotlin RSA code encrypts. I used the cryptography module in Python and the native cryptography in Kotlin. Here is my Python file. import os, base64, re from cryptography.hazmat.primitives.asymmetric import rsa, padding from cryptography.hazmat.primitives import serialization, hashes from cryptography.hazmat.backends import default_backend if os.path.exists("private.key") and os.path.exists("public.key"): print("Loading existing keys") with open("private.key", "rb") as pkf, open("public.key", "rb") as kf: pk = serialization.load_der_private_key(pkf.read(), None, default_backend()) k = serialization.load_der_public_key(kf.read(), default_backend()) else: print("Generating new keys") pk = rsa.generate_private_key(65537, 4096, default_backend()) pkb = pk.private_bytes( serialization.Encoding.DER, serialization.PrivateFormat.PKCS8, serialization.NoEncryption() ) k = pk.public_key() kb = k.public_bytes( serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo ) with open("private.key", "wb") as pkf, open("public.key", "wb") as f: pkf.write(pkb) f.write(kb) def enc(text): return base64.b64encode(k.encrypt( text.encode(), padding.OAEP( mgf=padding.MGF1(hashes.SHA512()), algorithm=hashes.SHA512(), label=None ) )).decode() def dec(ciphertext): return pk.decrypt( base64.b64decode(ciphertext), padding.OAEP( mgf=padding.MGF1(hashes.SHA512()), algorithm=hashes.SHA512(), label=None ) ).decode() while (inp:= input("RSA: ")) != "": if re.match(f"^enc", inp): print("Encrypt", enc(inp[3:].strip())) if re.match(f"dec", inp): print("Decrypt", dec(inp[3:].strip())) And here is my Kotlin file. 
import java.security.KeyPair import java.security.KeyPairGenerator import java.security.SecureRandom import java.security.KeyFactory import java.security.PublicKey import java.security.spec.X509EncodedKeySpec import java.security.spec.PKCS8EncodedKeySpec import javax.crypto.Cipher import javax.crypto.SecretKey import javax.crypto.KeyGenerator import javax.crypto.spec.SecretKeySpec import javax.crypto.spec.GCMParameterSpec import java.util.Base64 import java.io.File import java.nio.file.Files fun main(){ print("Text: ") val encrypted = rsaEncrypt(File("public.key"), readLine().toString()) println(encrypted) print("Ciphertext: ") println("Decrypted: ${rsaDecrypt(File("private.key"), readLine().toString())}") } fun bEncode(data: ByteArray) = Base64.getEncoder().encodeToString(data) fun bDecode(string: String) = Base64.getDecoder().decode(string) fun rsaEncrypt(keyFile: File, text: String): String { val k = KeyFactory.getInstance("RSA").generatePublic(X509EncodedKeySpec(keyFile.readBytes())) val c = Cipher.getInstance("RSA/ECB/OAEPwithSHA-512andMGF1Padding") c.init(Cipher.ENCRYPT_MODE, k) return bEncode(c.doFinal(text.toByteArray())) } fun rsaDecrypt(keyFile: File, ciphertext: String): String { val k = KeyFactory.getInstance("RSA").generatePrivate(PKCS8EncodedKeySpec(keyFile.readBytes())) val c = Cipher.getInstance("RSA/ECB/OAEPwithSHA-512andMGF1Padding") c.init(Cipher.DECRYPT_MODE, k) return c.doFinal(bDecode(ciphertext)).decodeToString() } According my file, first I run my Python file then my Kotlin file. Here is the error in Kotlin while trying to decrypt Python encrypted data. Exception in thread "main" javax.crypto.BadPaddingException: Padding error in decryption at java.base/com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:389) at java.base/com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:425) at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2205) at TestKt.rsaDecrypt(test.kt:58) at TestKt.main(test.kt:22) at TestKt.main(test.kt) Here is the error in Python while trying to decrypt Kotlin encrypted data. Traceback (most recent call last): File "C:\Users\MYUSERNAME\Projects\MyServer\test.py", line 53, in <module> print("Decrypt", dec(inp[3:].strip())) ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\MYUSERNAME\Projects\MyServer\test.py", line 40, in dec return pk.decrypt( ^^^^^^^^^^^ ValueError: Decryption failed What wrong with my code? Here is the execution of the codes. (I added print/println of type and size of encrypted data of both Python and Kotlin. It look normal.) 
S C:\Users\USERNAME\Projects\MyServer> python test.py && kotlinc test.kt -include-runtime -d app.jar && kotlin app.jar Loading existing keys RSA: enc Hello <class 'bytes'> 512 Encrypt sMeiZvcV5aHIFsW7+pOhcAwi/Jk1Fc0JevLM9U5LK+005vFgxUgVehLgwgMG/9TJ7wPOVYI51Cnrhc4aLBfG6gawWoc+NGO8WfGA7HF0xFgdDreAIzRMGOsR7AkoVBsBz7HbjnQ+3Nz1AOUHAUbrVLLizAvCjQAB0xbvv/YtQWRXrZOCEbF3+RFF9GOWQO8Lk8Wd/KvnD6pyASw8QvGqEJXL8aWZH0P8FDYUEcwjrz4qOgZDBce8qdiVf8h2FItm/AzeWvBuMai0FSh4SYjYMW2Ld8BGACsBIhYtpXG62WVULvXHK5oGQ5mpJz62WKCauDeqGdugeP9LVDA0k3Nd645hU6WCCA5o4VrKhz4WR4uN6ND42qNCIIE9LLMyHBGcZ+bOf0pGraLqbOHOYkD4bLD3j32ILT88WpC31KhFjvdCxBv81IXdl0/6f9U+q4d6ZZ9OiWjDdINemFVxgis9d1GzuTYzQ2i3MGudHfnFAMKjZWW8IJIfaIfAiMob/WjO2VTGvn0eFaUviszFaAfzTcoe0AnhyUpEEgu1KUcJk/CUoSnKYRX62T2wgmHKgfavdeKYmLPKugORRcCj9hlkke2rlTva0EP9970UycOwJGb92CwxPn68C8wA/jmKxoj7U2CDUArTwqV/dzvQ/gosfSSF/bWB9ZSoAi2//JVlk+g= RSA: dec sMeiZvcV5aHIFsW7+pOhcAwi/Jk1Fc0JevLM9U5LK+005vFgxUgVehLgwgMG/9TJ7wPOVYI51Cnrhc4aLBfG6gawWoc+NGO8WfGA7HF0xFgdDreAIzRMGOsR7AkoVBsBz7HbjnQ+3Nz1AOUHAUbrVLLizAvCjQAB0xbvv/YtQWRXrZOCEbF3+RFF9GOWQO8Lk8Wd/KvnD6pyASw8QvGqEJXL8aWZH0P8FDYUEcwjrz4qOgZDBce8qdiVf8h2FItm/AzeWvBuMai0FSh4SYjYMW2Ld8BGACsBIhYtpXG62WVULvXHK5oGQ5mpJz62WKCauDeqGdugeP9LVDA0k3Nd645hU6WCCA5o4VrKhz4WR4uN6ND42qNCIIE9LLMyHBGcZ+bOf0pGraLqbOHOYkD4bLD3j32ILT88WpC31KhFjvdCxBv81IXdl0/6f9U+q4d6ZZ9OiWjDdINemFVxgis9d1GzuTYzQ2i3MGudHfnFAMKjZWW8IJIfaIfAiMob/WjO2VTGvn0eFaUviszFaAfzTcoe0AnhyUpEEgu1KUcJk/CUoSnKYRX62T2wgmHKgfavdeKYmLPKugORRcCj9hlkke2rlTva0EP9970UycOwJGb92CwxPn68C8wA/jmKxoj7U2CDUArTwqV/dzvQ/gosfSSF/bWB9ZSoAi2//JVlk+g= Decrypt Hello RSA: Text: Hello ByteArray 512 XgLJElQ46cqiVmwqs/j2y8NptEU9ciAnQFtuuh2U+4m1PbqatnzVrB//G6NXrH2hYbclj9GyHMAzRh3f9LlGozQl7FgvmGp6F38DD2j2ktVChAMLCWtw7GMkJlyJiINFdiYL1IV1EI/+DpQYzeE94tyQDaev053GAW+aFxymWTVSZ1uw97XJGCd1V2RRyzzPKirG/BTxKS+j9iqsGUCd+7SVghzhWlzZYgaPj+7t0by45SURvrtTdaD6Ni1FIwROPOzTEE6ryFbK0tPWECN1jnQI/1qylldE/N/Awqc0ORsj9wCITh80D2ibfZyr/AN5I+QOU3y4pfX76C6xAvsFt1Avsk15sktsNfrBC3+OyNB1PWiKmeXcxly2rx4PswpL56WEOkcrhF6GcyctfU5gJNDQ/CsND+u3/JNtPHtPdmAhwoY0UWeUeZWIp0yR42fbrLBgFzPodahHNXopMhBlGO84yzO6WNGbm/0lBnVSm6PAw6ti5N0gSC6eNq/odSQQYThVOvxJB9ETPbzX8CaV8GmQUbiGVzR8P5HOvYK4ctyIJha/X1rP4CpbKXsKVLYWuvGKJ1T7fG6/69/pKLJC3MkurpM2OVlb5UgV5W1A4fCQROQ9yTt4jRUllgmORvaUrOZO62rHKnFmA6cxRXou36t61HKYAmQ3ahq/ErBQyuI= Ciphertext: XgLJElQ46cqiVmwqs/j2y8NptEU9ciAnQFtuuh2U+4m1PbqatnzVrB//G6NXrH2hYbclj9GyHMAzRh3f9LlGozQl7FgvmGp6F38DD2j2ktVChAMLCWtw7GMkJlyJiINFdiYL1IV1EI/+DpQYzeE94tyQDaev053GAW+aFxymWTVSZ1uw97XJGCd1V2RRyzzPKirG/BTxKS+j9iqsGUCd+7SVghzhWlzZYgaPj+7t0by45SURvrtTdaD6Ni1FIwROPOzTEE6ryFbK0tPWECN1jnQI/1qylldE/N/Awqc0ORsj9wCITh80D2ibfZyr/AN5I+QOU3y4pfX76C6xAvsFt1Avsk15sktsNfrBC3+OyNB1PWiKmeXcxly2rx4PswpL56WEOkcrhF6GcyctfU5gJNDQ/CsND+u3/JNtPHtPdmAhwoY0UWeUeZWIp0yR42fbrLBgFzPodahHNXopMhBlGO84yzO6WNGbm/0lBnVSm6PAw6ti5N0gSC6eNq/odSQQYThVOvxJB9ETPbzX8CaV8GmQUbiGVzR8P5HOvYK4ctyIJha/X1rP4CpbKXsKVLYWuvGKJ1T7fG6/69/pKLJC3MkurpM2OVlb5UgV5W1A4fCQROQ9yTt4jRUllgmORvaUrOZO62rHKnFmA6cxRXou36t61HKYAmQ3ahq/ErBQyuI= Decrypted: Hello PS C:\Users\USERNAME\Projects\MyServer> kotlin app.jar Text: Hi ByteArray 512 
hS4WeVI6Vuh/bdyiDMvBtyYRBfRWHkSJsrmSe4ivq2Xov+3jRXMqxajPiJ6gH9qIB4U4pFa2sy0gdjM+fQtzqv86JPAvbGjO+fFYQhunvVwJwIlR/3fwUJRSkzaYNTR+SsZ4mTJKB45bwiurZuOQFfQ8e72cD+UBkAPJmFx5FftazKIis0ehwdJ+G3ZAnTwzPM82dqIQCMfb6JeeT2F7BLhdc06pKQXiWPDRytkDSHfOXf0SWJJ5Kwbgy6G10X68VRND/DsS3CXrxr2us8/9IrZZw2sCkE72MB38/sUSB5lwYeia5Rf3OxKzq1VQrHcI8obdIErVg7NNsKfSMdxA2pIHzZfFnu9by7gOw+hy95ntrdjE9RE9IcEt5ylZ0ZFmvCjnAJ1/TbqhZBN70blxMJDrqiYoM0x/M3EjdTuodlJ+deNZI84mfAmCmzDTj2umFT6X0HKFJPQVaZYDIyDq+EJueYOdS3xKHfE2ycWkQSPh9lmebA1GNuv6EgGMMiEkw86mFvAHe9jgSkYkLI7+qMjARuJLTYXKINihVRDyyzcMAq7eDuFDoVWavNu/cw5AXJO8TZzSGTFtoLgfXpck1BKfnrb3IEe01pqADJArX3hgCB22xLEZ9zktj71L1T/gHrEfMM4bnJkpyvVEDm9kajMIwFZa3hq8FztoDkNzbFI= Ciphertext: sMeiZvcV5aHIFsW7+pOhcAwi/Jk1Fc0JevLM9U5LK+005vFgxUgVehLgwgMG/9TJ7wPOVYI51Cnrhc4aLBfG6gawWoc+NGO8WfGA7HF0xFgdDreAIzRMGOsR7AkoVBsBz7HbjnQ+3Nz1AOUHAUbrVLLizAvCjQAB0xbvv/YtQWRXrZOCEbF3+RFF9GOWQO8Lk8Wd/KvnD6pyASw8QvGqEJXL8aWZH0P8FDYUEcwjrz4qOgZDBce8qdiVf8h2FItm/AzeWvBuMai0FSh4SYjYMW2Ld8BGACsBIhYtpXG62WVULvXHK5oGQ5mpJz62WKCauDeqGdugeP9LVDA0k3Nd645hU6WCCA5o4VrKhz4WR4uN6ND42qNCIIE9LLMyHBGcZ+bOf0pGraLqbOHOYkD4bLD3j32ILT88WpC31KhFjvdCxBv81IXdl0/6f9U+q4d6ZZ9OiWjDdINemFVxgis9d1GzuTYzQ2i3MGudHfnFAMKjZWW8IJIfaIfAiMob/WjO2VTGvn0eFaUviszFaAfzTcoe0AnhyUpEEgu1KUcJk/CUoSnKYRX62T2wgmHKgfavdeKYmLPKugORRcCj9hlkke2rlTva0EP9970UycOwJGb92CwxPn68C8wA/jmKxoj7U2CDUArTwqV/dzvQ/gosfSSF/bWB9ZSoAi2//JVlk+g= Exception in thread "main" javax.crypto.BadPaddingException: Padding error in decryption at java.base/com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:389) at java.base/com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:425) at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2205) at TestKt.rsaDecrypt(test.kt:59) at TestKt.main(test.kt:22) at TestKt.main(test.kt) PS C:\Users\USERNAME\Projects\MyServer> python test.py Loading existing keys RSA: dec XgLJElQ46cqiVmwqs/j2y8NptEU9ciAnQFtuuh2U+4m1PbqatnzVrB//G6NXrH2hYbclj9GyHMAzRh3f9LlGozQl7FgvmGp6F38DD2j2ktVChAMLCWtw7GMkJlyJiINFdiYL1IV1EI/+DpQYzeE94tyQDaev053GAW+aFxymWTVSZ1uw97XJGCd1V2RRyzzPKirG/BTxKS+j9iqsGUCd+7SVghzhWlzZYgaPj+7t0by45SURvrtTdaD6Ni1FIwROPOzTEE6ryFbK0tPWECN1jnQI/1qylldE/N/Awqc0ORsj9wCITh80D2ibfZyr/AN5I+QOU3y4pfX76C6xAvsFt1Avsk15sktsNfrBC3+OyNB1PWiKmeXcxly2rx4PswpL56WEOkcrhF6GcyctfU5gJNDQ/CsND+u3/JNtPHtPdmAhwoY0UWeUeZWIp0yR42fbrLBgFzPodahHNXopMhBlGO84yzO6WNGbm/0lBnVSm6PAw6ti5N0gSC6eNq/odSQQYThVOvxJB9ETPbzX8CaV8GmQUbiGVzR8P5HOvYK4ctyIJha/X1rP4CpbKXsKVLYWuvGKJ1T7fG6/69/pKLJC3MkurpM2OVlb5UgV5W1A4fCQROQ9yTt4jRUllgmORvaUrOZO62rHKnFmA6cxRXou36t61HKYAmQ3ahq/ErBQyuI= Traceback (most recent call last): File "C:\Users\USERNAME\Projects\MyServer\test.py", line 56, in <module> print("Decrypt", dec(inp[3:].strip())) ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USERNAME\Projects\MyServer\test.py", line 43, in dec return pk.decrypt( ^^^^^^^^^^^ ValueError: Decryption failed
I've found that it's best to avoid defaults when writing cryptography code. The problem is that it's hard to know when you're getting defaults because there are no warnings. In this case, the OAEP scheme has a few parameters that you should always specify. These can be set on the Java/Kotlin side with the OAEPParameterSpec. So, you're Kotlin code should look something like fun rsaEncrypt(keyFile: File, text: String): String { val k = KeyFactory.getInstance("RSA").generatePublic(X509EncodedKeySpec(keyFile.readBytes())) val c = Cipher.getInstance("RSA/ECB/OAEPwithSHA-512andMGF1Padding") val oaepSpec = OAEPParameterSpec("SHA-512", "MGF1", MGF1ParameterSpec("SHA-512"), PSource.PSpecified.DEFAULT) c.init(Cipher.ENCRYPT_MODE, k, oaepSpec) return bEncode(c.doFinal(text.toByteArray())) } fun rsaDecrypt(keyFile: File, ciphertext: String): String { val k = KeyFactory.getInstance("RSA").generatePrivate(PKCS8EncodedKeySpec(keyFile.readBytes())) val c = Cipher.getInstance("RSA/ECB/OAEPwithSHA-512andMGF1Padding") val oaepSpec = OAEPParameterSpec("SHA-512", "MGF1", MGF1ParameterSpec("SHA-512"), PSource.PSpecified.DEFAULT) c.init(Cipher.DECRYPT_MODE, k, oaepSpec) return c.doFinal(bDecode(ciphertext)).decodeToString() } If I remember correctly, the default on the Java side for the MGF1 hash is SHA-1, so that's why it didn't work.
1
1
79,600,626
2025-4-30
https://stackoverflow.com/questions/79600626/python-script-with-input-and-print-do-not-print-when-run-from-powershell-cla
This script in question runs as expected from powershell: # scripts.py x = input("type your input: ") print(f"your input is: {x}") But once you wrap it into a module: class CSV{ [string] $pythonScript CSV([string] $pythonPath){ $this.pythonScript = $pythonPath } [void] Run(){ python $this.pythonScript } } The interactivity is not printed. To run this class, create a new ps1 file as follows and run it # run.ps1 using module .\module.psm1 $newCSV = [CSV]::new(".\script.py") $newCSV.Run() In the terminal: PS C:\SE_temp_dir> .\run.ps1 test // typed from user input PS C:\SE_temp_dir> Notice that no "type your input" or "your input is test" is printed. I have tried many other alternatives: ### module.psm1: python $this.pythonScript | Write-Output python $this.pythonScript | Write-Host python -v $this.pythonScript | Write-Host ### script.py: print(f"your input is: {x}", flush=True) import sys; sys.stdout.write(f"your input is: {x}") But none of them worked. Why does calling a Python script with input()/print() work interactively in plain PowerShell, but not when run inside a method of a PowerShell class? Is this a PowerShell scoping or host behavior issue?
Not sure if there will be an elegant way to provide input to your Python script in the same line from Python, however, here are some workarounds to your current issue. In both workarounds as you may note, the Run method output type has been changed from void to string[]. The first approach is what I'd personally use, instead of requesting for user input in Python, do it in PowerShell and provide that input as argument to your Python script: class CSV { [string] $pythonScript CSV([string] $pythonPath) { $this.pythonScript = $pythonPath } [string[]] Run() { $in = Read-Host 'type your input' return python $this.pythonScript $in } } Then in Python you can take the second argv: import sys print(f"your input is: {sys.argv[1]}") The second approach is more complicated, instead of using input to prompt, use print: print("type your input:") x = input() print(f"your input is: {x}") Then in PowerShell, use a loop with logic to write directly to console the first line of your Python script's output: class CSV { [string] $pythonScript CSV([string] $pythonPath) { $this.pythonScript = $pythonPath } [string[]] Run() { $firstLine = $true $result = python $this.pythonScript | ForEach-Object { if ($firstLine) { # send this line directly to host Write-Host $_ # set flag to false $firstLine = $false # go to next line return } # else, capture in $result $_ } return $result } } Based on comments, your Python script takes lots of user input and parsing it with sys.argv would be a nightmare, in which case you could use argparse.ArgumentParser() to feed multiple input parameters to your script: import argparse parser = argparse.ArgumentParser() parser.add_argument("-paramfoo", type = str) parser.add_argument("-parambar", type = str) for k, v in vars(parser.parse_args()).items(): print(f"input for parameter {k}: {v}") Then in the PowerShell script you would be supplying these arguments as: class CSV { [string] $pythonScript CSV([string] $pythonPath) { $this.pythonScript = $pythonPath } [string[]] Run() { $foo = Read-Host 'type your input for -paramfoo' $bar = Read-Host 'type your input for -parambar' return python $this.pythonScript -paramfoo $foo -parambar $bar } }
3
0
79,600,689
2025-4-30
https://stackoverflow.com/questions/79600689/numpy-concatenate-replacing-previous-data-in-array
I am trying to write code that produces a deck of cards. The deck is a 2D array that contains each card as an array. Each card array contains its card value as well as its suit, represented by the values 0 to 3. However, the code outputs this:
[[ 1. 1.]
 ...
 [13. 1.]
 [ 1. 1.]
 ...
 [13. 1.]
 [ 1. 2.]
 ...
 [13. 2.]
 [ 1. 3.]
 ...
 [13. 3.]]
The first 13 indices are my issue here, as the code I have written should, I believe, output [1. 0.] up until [13. 0.]. My code is designed to have 13 of each suit, increasing from 1 to 13 (inclusive). Index 0 of each card represents the card value, Ace to King. Index 1 represents its suit (0=S,1=H,2=C,3=D).
suits = 4
suitsize = np.empty(shape=(13,2))
suitsize[:,0] = np.arange(1,suitsize.shape[0]+1)
a = suitsize
print(suitsize)
for i in range(1,suits):
    a[:,1] = i
    suitsize = np.concatenate([suitsize,a])
print(suitsize)
Whether I use np.empty or np.zeros, the first 13 indices still have their index 1 value (the suit) replaced with 1. This means that I end up producing a deck of cards with 0 spades, 26 hearts, 13 clubs, and 13 diamonds. If anyone could explain to me what is happening here or a fix, please let me know. Thank you!
You have a classic NumPy issue: mutable arrays. The problem is that when you write a = suitsize, you're not creating a new copy of the array. You're just making a new reference to the same array in the computer's memory. So when you do a[:,1] = i, you're also modifying suitsize at the same time. You need to create an actual copy of the array, not just a reference. Just change this line a = suitsize to a = suitsize.copy().
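To see the difference the answer describes, independent of the card-deck code, NumPy can report whether two names refer to the same underlying buffer. This is just an illustrative sketch, not code from the question:
import numpy as np

suitsize = np.zeros((13, 2))
a = suitsize                           # same buffer, just a second name
print(np.shares_memory(a, suitsize))   # True
a[:, 1] = 1
print(suitsize[0, 1])                  # 1.0 -- writing through `a` changed `suitsize` too

b = suitsize.copy()                    # independent buffer
b[:, 1] = 2
print(suitsize[0, 1])                  # still 1.0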
1
2
79,600,512
2025-4-30
https://stackoverflow.com/questions/79600512/how-to-pickle-enum-with-values-of-type-functools-partial
Problem Suppose we have a python Enum where values are of type functools.partial. How to pickle and unpickle a member of that enum ? import pickle from enum import Enum from functools import partial def function_a(): pass class EnumOfPartials(Enum): FUNCTION_A = partial(function_a) if __name__ == "__main__": with open("test.pkl", "wb") as f: pickle.dump(EnumOfPartials.FUNCTION_A, f) with open("test.pkl", "rb") as f: pickle.load(f) The code above tries to pickle and unpickle such an object. The pickle.load operations results in error: ValueError: functools.partial(<function function_a at 0x7f5973e804a0>) is not a valid EnumOfPartials Motivation The object in itself is useful for configurations purposes: using hydra, I can have a parameter in a YAML that corresponds to the choice of a function in the enum. The reason for using partial is so that FUNCTION_A does not get interpreted as a method (see this question). Being able to pickle a member of this enum is desirable to be able to send it to another process. Given my use case, an obvious workaround would be to have a dictionary of functions indexed by an enum, but I would prefer directly having the relevant value (the function) in the enum. Note I am using python 3.11.11.
The first answer is to pickle by name: from enum import Enum, pickle_by_enum_name class EnumDefs(Enum): __reduce_ex__ = pickle_by_enum_name The second answer is to use the new member class/decorator to avoid using partial, and to add __call__ so you can actually invoke the members: import pickle from enum import Enum, member, pickle_by_enum_name def function_a(): print('function a()!') class EnumOfFunctions(Enum): # __reduce_ex__ = pickle_by_enum_name # def __call__(self, *args, **kwds): return self._value_(*args, **kwds) # FUNCTION_A = member(function_a) if __name__ == "__main__": with open("test.pkl", "wb") as f: pickle.dump(EnumOfFunctions.FUNCTION_A, f) with open("test.pkl", "rb") as f: func = pickle.load(f) print(func) func() Note that you could also just have the functions themselves be in the enum: class EnumOfFunctions(Enum): . . . # @member def function_a(): print('function a()!') 1 Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
3
1
79,599,356
2025-4-30
https://stackoverflow.com/questions/79599356/how-can-i-stop-my-tkinter-hangman-game-accepting-a-correct-letter-more-than-once
I am making a python hangman game using tkinter. However, after guessing a correct letter, it continues to accept that letter but marking it as wrong. This is my guess function: def guess_letter(self, event=None): guess = self.entry.get().lower() self.entry.delete(0, tk.END) if not guess or len(guess) != 1 or not guess.isalpha() return self.guessed_letters.add(guess) self.guessed_label.config(text="Guessed Letters: " + ", ".join(sorted(self.guessed_letters))) if guess in self.word: for i, letter in enumerate(self.word): if letter == guess: self.display_word[i] = guess self.word_label.config(text=" ".join(self.display_word)) else: self.tries_left -= 1 self.draw_hangman() if "_" not in self.display_word: messagebox.showinfo("Hangman", "You won!") self.master.quit() elif self.tries_left == 0: messagebox.showinfo("Hangman", f"You lost! The word was: {''.join(self.word)}") self.master.quit() I want it only to register guesses as wrong if they have not been guessed before. What is a method I could use to make it not accept an already guessed letter without any repercussions?
I have a fix. I have replaced:
if not guess or len(guess) != 1 or not guess.isalpha():
    return
With:
if not guess or len(guess) != 1 or not guess.isalpha() or guess in self.guessed_letters:
    return
This checks if a letter has already been guessed, and doesn't take away lives if it has.
1
1
79,599,933
2025-4-30
https://stackoverflow.com/questions/79599933/change-color-of-tqdm-for-each-iteration
I'm using tqdm to track the progress of a task. For the fun of it, I want to change the color of the progress bar dynamically during each iteration. I know you can update the description of the bar using set_description(), but I haven’t found anything similar for changing the color of the progress bar. Is there a way to do that with tqdm? Something along the lines of this (pseudo-code): from tqdm import tqdm import time colors = ["red", "yellow", "green", "cyan", "blue"] pbar = tqdm(range(5)) for i in pbar: pbar.set_color(colors[i]) time.sleep(0.5)
You can manually set the tqdm object's colour attribute:
from tqdm import tqdm
import time

colors = iter(["red", "yellow", "green", "cyan", "blue"])

pbar = tqdm(range(5))
for i in pbar:
    pbar.colour = next(colors)
    time.sleep(0.5)
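A small side note, based on tqdm's documented colour parameter rather than on the answer itself: the attribute also accepts hex colour strings, so the palette is not limited to named colours.
pbar.colour = "#00ff00"   # any hex colour string works as well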
3
4
79,598,340
2025-4-29
https://stackoverflow.com/questions/79598340/efficiently-calculate-time-to-first-purchase-event-per-user-in-pandas-datafram
How can I compute time to first target event per user using Pandas efficiently (with edge cases)? I'm analyzing user behavior using a Pandas DataFrame that logs events on an app. Each row includes a user_id, event_type, and timestamp. I want to calculate the time (in seconds) from each user's first recorded event to their first occurrence of a target event (e.g., "purchase"). However, there are a few requirements that complicate things: Some users never trigger the target event, so I want to exclude or mark them as NaN. The timestamp column is a datetime. I’d like this to be vectorized and efficient (not using for loops). I want to return a DataFrame with user_id and seconds_to_first_purchase. import pandas as pd data = [ {'user_id': 'u1', 'event_type': 'login', 'timestamp': '2023-01-01 10:00:00'}, {'user_id': 'u1', 'event_type': 'purchase', 'timestamp': '2023-01-01 10:05:00'}, {'user_id': 'u2', 'event_type': 'login', 'timestamp': '2023-01-01 09:00:00'}, {'user_id': 'u2', 'event_type': 'scroll', 'timestamp': '2023-01-01 09:03:00'}, {'user_id': 'u3', 'event_type': 'login', 'timestamp': '2023-01-01 11:00:00'}, {'user_id': 'u3', 'event_type': 'purchase', 'timestamp': '2023-01-01 11:20:00'}, ] df = pd.DataFrame(data) df['timestamp'] = pd.to_datetime(df['timestamp']) What’s the cleanest and most efficient way to compute the time to first "purchase" event per user? What I tried: I grouped the DataFrame by user_id and tried to extract the first timestamp for each user using groupby().first(), and then did the same for the first "purchase" event using a filtered DataFrame. Then I tried merging both results to calculate the time difference like this: first_event = df.groupby('user_id')['timestamp'].min() first_purchase = df[df['event_type'] == 'purchase'].groupby('user_id')['timestamp'].min() result = (first_purchase - first_event).dt.total_seconds() What I expected: I expected this to give me a clean Series or DataFrame with user_id and the number of seconds between the user's first event and their first "purchase". What went wrong: It mostly works, but: Users who never purchased are missing from the result and I want to keep them (with NaN). I'm not sure this is the most efficient or cleanest approach. I’m also wondering if there's a better way to avoid intermediate merges or repetitive groupby operations.
I grouped by user_id to get the first event timestamp, then did the same for 'purchase' events. Instead of subtracting the Series directly, I used pd.concat() to combine both into one DataFrame. Then I used .assign() with .dt.total_seconds() to calculate the difference. This gave me a clean DataFrame where I could see the first event, the first purchase (if it happened), and the time difference in seconds. It also kept users with no purchase in the output, which you needed. Made things much easier to debug and extend. The entire code should be import pandas as pd # Sample data data = [ {'user_id': 'u1', 'event_type': 'login', 'timestamp': '2023-01-01 10:00:00'}, {'user_id': 'u1', 'event_type': 'purchase', 'timestamp': '2023-01-01 10:05:00'}, {'user_id': 'u2', 'event_type': 'login', 'timestamp': '2023-01-01 09:00:00'}, {'user_id': 'u2', 'event_type': 'scroll', 'timestamp': '2023-01-01 09:03:00'}, {'user_id': 'u3', 'event_type': 'login', 'timestamp': '2023-01-01 11:00:00'}, {'user_id': 'u3', 'event_type': 'purchase', 'timestamp': '2023-01-01 11:20:00'}, ] df = pd.DataFrame(data) df['timestamp'] = pd.to_datetime(df['timestamp']) # Step 1: First overall event per user first_event = df.groupby('user_id')['timestamp'].min().rename('first_event_time') # Step 2: First 'purchase' event per user first_purchase = ( df[df['event_type'] == 'purchase'] .groupby('user_id')['timestamp'] .min() .rename('first_purchase_time') ) # Step 3: Combine and calculate time delta in seconds result = ( pd.concat([first_event, first_purchase], axis=1) .assign(seconds_to_first_purchase=lambda x: ( (x['first_purchase_time'] - x['first_event_time']).dt.total_seconds() )) .reset_index() ) print(result)
1
1
79,599,624
2025-4-30
https://stackoverflow.com/questions/79599624/fastapi-application-with-nginx-staticfiles-not-working
I've a simple FastAPI project. It is running correctly in pycharm and in the docker container. When running via nginx, the StaticFiles are not delivered. Structure is like this: β”œβ”€β”€ app β”‚ β”œβ”€β”€ main.py β”‚ β”œβ”€β”€ static_stuff β”‚ β”‚ └── styles.css β”‚ └── templates β”‚ └── item.html β”œβ”€β”€ Dockerfile β”œβ”€β”€ requirements.txt main.py from fastapi import Request, FastAPI from fastapi.responses import HTMLResponse from fastapi.staticfiles import StaticFiles from fastapi.templating import Jinja2Templates import os.path as path ROOT_PATH = path.abspath(path.join(__file__ ,"../")) app = FastAPI(title="my_app", root_path='/my_app') app.mount("/static_stuff", StaticFiles(directory=f"/{ROOT_PATH}/static_stuff"), name="static") templates = Jinja2Templates(directory=f"/{ROOT_PATH}/templates") @app.get("/items/{id}", response_class=HTMLResponse, include_in_schema=False) async def read_item(request: Request, id: str): return templates.TemplateResponse( request=request, name="item.html", context={"id": id} ) The application is running in a docker container: Dockerfile: FROM python:3.13-slim WORKDIR /my_app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY app ./app CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "app.main:app", "--bind", "0.0.0.0:6543"] EXPOSE 6543 The nginx configuration looks like this: location /my_app { proxy_pass http://my_host:6543; include proxy_params; } When calling the nginx -> http://my_host/my_app/items/5 Everything works except the staticfiles. The styles.css is not found. What am I doing wrong? I tried something like this, but I had no success location ~ /static_stuff/(.+) { proxy_pass http://my_host:6543; include proxy_params; }
It seems like there is a bug in FastAPI with the root_path parameter for mounted paths. If you specify root_path as a parameter of __init__, it expects an additional my_app/ in the path. So your files are available at /my_app/my_app/static_stuff/ instead of /my_app/static_stuff/. Try specifying root_path as an argument of the server (e.g. uvicorn main:app --root-path my_app for uvicorn) instead of passing it as a parameter of __init__.
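In code, that change looks roughly like the sketch below. It is only an illustration of the answer's suggestion, adapted to the question's layout; gunicorn is swapped for a direct uvicorn invocation here because uvicorn's CLI exposes --root-path.
# app/main.py -- root_path removed from the application object
app = FastAPI(title="my_app")
app.mount("/static_stuff", StaticFiles(directory=f"/{ROOT_PATH}/static_stuff"), name="static")

# started instead with, for example:
#   uvicorn app.main:app --host 0.0.0.0 --port 6543 --root-path /my_app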
1
1
79,598,326
2025-4-29
https://stackoverflow.com/questions/79598326/im-trying-to-model-a-bolt-in-cadquery-python
I'm trying to build a CAD model of a bolt, but I can't figure out how to cut off the tops of the corners at the top of the head at a 45-degree angle. I want to get this result What did I do for this? At first I tried this: import cadquery as cq from math import sqrt, tan, radians head_diameter = 10.0 head_height = 5.0 shaft_diameter = 5.0 shaft_length = 20.0 R = head_diameter / 2 r = R * sqrt(3)/2 chamfer_size = (R - r) / tan(45) bolt_head = ( cq.Workplane("XY") .polygon(6, 2*R) .extrude(head_height) .translate((0, 0, -1 * (head_height/2))) ) bolt_head = bolt_head.edges("Z").chamfer(1) bolt_shaft = ( cq.Workplane("XY") .circle(shaft_diameter/2) .extrude(-shaft_length) ) bolt = bolt_head.union(bolt_shaft) This code generates the following CAD: Then so: import cadquery as cq from math import sqrt, tan, radians HEAD_DIAMETER = 20.0 HEAD_HEIGHT = 10.0 CUT_ANGLE = 45.0 SHAFT_DIAMETER = 8.0 SHAFT_LENGTH = 30.0 R = HEAD_DIAMETER / 2 r = R * sqrt(3)/2 cut_depth = (R - r) / tan(radians(CUT_ANGLE)) hexagon = ( cq.Workplane("XY") .polygon(6, HEAD_DIAMETER) .extrude(HEAD_HEIGHT + cut_depth) ) cutter = ( cq.Workplane("XY") .workplane(offset=HEAD_HEIGHT) .circle(r) .workplane(offset=cut_depth) .circle(R * 1.1) .loft(combine=True) ) result = hexagon.cut(cutter) result = ( result.faces(">Z") .workplane(centerOption="CenterOfMass") .circle(r) .cutBlind(-cut_depth) ) show_object(result) But it outputs an error: ValueError: If multiple objects selected, they all must be planar faces. This code generates the following CAD: Does anyone know how to cut off the upper parts of the corners of the bolt head at a 45 degree angle?
You can cut the upper edge by using a solid of revolution: create a triangle with the required angle and revolve it around the bolt axis to create the cutter. cutter = ( cq.Workplane("XZ") .workplane(offset=HEAD_HEIGHT) .move(r, 0) .lineTo(R*1.1, 0).lineTo(R*1.1, -cut_depth).lineTo(r, 0) .wire() .revolve() )
2
1
79,598,979
2025-4-29
https://stackoverflow.com/questions/79598979/how-could-i-self-eject-my-usb-drive-using-the-python-module-sub-process
I have a script for my USB drive that I need to use on multiple devices, and I want it to auto-eject. Using this method, as long as it has elevated privileges it runs with no issues and no errors, but when I check the file explorer the USB is still in there:
p = Popen(["diskpart"], stdin=PIPE)
p.stdin.write(b"select disk " + drive_letter.encode() + b"\n")
p.stdin.write(b"remove all dismount\n")
p.stdin.write(b"exit\n")
print(f"Successfully ejected drive {successful_drive_path}.")
This is all using the sub-process module for Python. I know I don't need to do this, but I still want to. I suspect it's due to the code being run on the USB itself, so maybe I could, with the bat file I use to run and handle everything, run the ejection code using a self-deleting script on the main disk. Idk, just ideas.
I believe the issue is that you are calling it as a disk, not a volume. This code will fix your issue, but only works with elevated privileges:
import subprocess

def eject_drive(drive_letter=input()):
    try:
        p = subprocess.Popen(["diskpart"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
        # Prepare the diskpart script
        commands = f"""
        select volume {drive_letter}
        remove all dismount
        exit
        """
        # Send the commands
        stdout, stderr = p.communicate(commands)
        if p.returncode == 0:
            print(f"Successfully ejected drive {drive_letter}.")
        else:
            print(f"Failed to eject drive {drive_letter}. Error: {stderr}")
    except Exception as e:
        print(f"Exception occurred: {e}")
2
2
79,599,115
2025-4-29
https://stackoverflow.com/questions/79599115/how-to-filter-all-columns-in-a-polars-dataframe-by-expression
I have this example Polars DataFrame: import polars as pl df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "variable1": [15, None, 5, 10, 20], "variable2": [40, 30, 50, 10, None], }) I'm trying to filter all columns of my dataframe using the method pl.all(), and I also tried using pl.any_horizontal() == Condition. However I'm getting the following error: ComputeError: The predicate passed to 'LazyFrame.filter' expanded to multiple expressions: col("id").is_not_null(), col("variable1").is_not_null(), col("variable2").is_not_null(), This is ambiguous. Try to combine the predicates with the 'all' or `any' expression. Here are my attemps to try to face this. # Attempt 1: ( df .filter( pl.all().is_not_null() ) ) # Attempt 2: ( df .filter( pl.any_horizontal().is_not_null() ) ) Desired output, but it's not scalable for bigger DataFrames: ( df .filter( pl.col("variable1").is_not_null(), pl.col("variable2").is_not_null() ) ) How can I filter all columns in a scalable way without specifying each column individually?
You need to collapse the multiple generated expressions (imagine three boolean columns come out of that first pl.all(), one for each column) into a single column. You can do that with pl.all_horizontal(your, columns, here):
>>> df.filter(pl.all_horizontal(pl.col('*').is_not_null()))
shape: (3, 3)
┌─────┬───────────┬───────────┐
│ id  ┆ variable1 ┆ variable2 │
│ --- ┆ ---       ┆ ---       │
│ i64 ┆ i64       ┆ i64       │
╞═════╪═══════════╪═══════════╡
│ 1   ┆ 15        ┆ 40        │
│ 3   ┆ 5         ┆ 50        │
│ 4   ┆ 10        ┆ 10        │
└─────┴───────────┴───────────┘
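As a follow-up note (not part of the accepted answer): for this particular predicate, keeping only the rows that have no nulls in any column, Polars also provides a dedicated shorthand, so the filter can be skipped entirely:
df.drop_nulls()
pl.all_horizontal(...) stays the general tool when the per-column condition is anything other than is_not_null().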
2
1
79,598,793
2025-4-29
https://stackoverflow.com/questions/79598793/how-i-can-realtime-update-the-ui-when-i-receive-a-request-upon-fastapi
I have this simple script: import os import gradio as gr from fastapi import FastAPI, Request import uvicorn import threading from typing import List from datetime import datetime api = FastAPI() # Shared logs class Log(): def __init__(self): self._logs: List[str] = [] self.logstr="" def log_message(self,msg: str): timestamp = datetime.now().strftime("%H:%M:%S") self._logs.append(f"[{timestamp}] {msg}") self.logstr="\n".join(self._logs) log = Log() log_state=gr.State(log) # === FastAPI Setup === @api.post("/log") async def receive_log(request: Request): data = await request.body() msg = f"API received: {data}" log.log_message(msg) gr.update(value=log.logstr) return {"status": "logged", "message": msg} def run_api(): api_port = int(os.environ.get("API_PORT", 8000)) uvicorn.run(api, host="0.0.0.0", port=api_port) # === Gradio UI === with gr.Blocks() as ui: gr.Markdown("## πŸ“ Incoming HTTP Requests") log_box = gr.Textbox(label="Logs", inputs=log_state, lines=20) # Trigger the refresh when the log state is updated def run_gradio(): gradio_port = int(os.environ.get("GRADIO_PORT", 7860)) ui.launch(server_port=gradio_port) # === Start Both === if __name__ == "__main__": threading.Thread(target=run_api, daemon=True).start() run_gradio() What I try to achieve is to have FastAPI listening to one port and a admin panel into another one, that displays in realtime the incomming requests: POST /log -> FastAPI ->common_log -> Gradio But I am unable to change the contents of Textbox when I receive Incomming requests in FastAPI. How I can do this?
Only method which works for me is def function(): return log.logstr Textbox(value=function, ..., every=1) It runs function every 1 second, and this function returns current content in log. Doc: Gradio Textbox Full code: import os import gradio as gr from fastapi import FastAPI, Request import uvicorn import threading from typing import List from datetime import datetime api = FastAPI() # Shared logs class Log(): def __init__(self): self._logs: List[str] = [] self.logstr="" def log_message(self,msg: str): timestamp = datetime.now().strftime("%H:%M:%S") self._logs.append(f"[{timestamp}] {msg}") self.logstr="\n".join(self._logs) log = Log() log_state = gr.State(log) # === FastAPI Setup === @api.post("/log") async def receive_log(request: Request): data = await request.body() msg = f"API received: {data}" log.log_message(msg) gr.update(value=log.logstr) #print('data:', data) return {"status": "logged", "message": msg} def run_api(): #print('run FastAPI') api_port = int(os.environ.get("API_PORT", 8000)) uvicorn.run(api, host="0.0.0.0", port=api_port) # === Gradio UI === def get_logs(): #print('run: get_logs') return log.logstr with gr.Blocks() as ui: gr.Markdown("## πŸ“ Incoming HTTP Requests") log_box = gr.Textbox(label="Logs", value=get_logs, lines=20, every=1) # Trigger the refresh when the log state is updated def run_gradio(): gradio_port = int(os.environ.get("GRADIO_PORT", 7860)) ui.launch(server_port=gradio_port) # === Start Both === if __name__ == "__main__": threading.Thread(target=run_api, daemon=True).start() run_gradio()
1
1
79,598,423
2025-4-29
https://stackoverflow.com/questions/79598423/how-to-select-certain-rows-by-code-in-a-datagrid-table-in-python-shiny
I created a table ("DataGrid") using:
ui.output_data_frame("grid")
which I filled using
@render.data_frame
def grid():
    df = ...
    return render.DataGrid(df, selection_mode="row")
Now, I want to change the selection using code. Is this possible in Python's version of Shiny?
You can use update_cell_selection(): from shiny import ui, render, App, reactive import pandas as pd df = pd.DataFrame({"Row": ["Row Number 0", "Row Number 1", "Row Number 2"]}) app_ui = ui.page_fluid( ui.input_select( "rowSelection", "Which row shall be selected?", choices=["Choose", 0, 1, 2] ), ui.output_data_frame("my_df") ) def server(input, output, session): @render.data_frame def my_df(): return render.DataGrid(df, selection_mode="row") @reactive.Effect @reactive.event(input.rowSelection) async def setSelection(): if (input.rowSelection() == "Choose"): return x={'type': 'row', 'rows': int(input.rowSelection())} await my_df.update_cell_selection(x) app = App(app_ui, server)
1
1
79,598,174
2025-4-29
https://stackoverflow.com/questions/79598174/how-do-conditionally-apply-field-constraints-based-on-value-type-in-pydantic-v2
I have this pydantic model with a field, and this field could be either an int or a non numeric value like a str or list. from pydantic import BaseModel, Field class Foo(BaseModel): bar: str | list | int = Field('some string', ge=2) I want it to be that the constraint ge=2 is applied only if the value given to bar happens to be an int. foo_instance = Foo(bar='asdf') # constraint should not be applied foo_instance = Foo(bar=4) # constraint should be applied Trying the above throws an error: TypeError: Unable to apply constraint 'ge' to supplied value asdf How do I ensure pydantic only tries to apply this constraint if the value is numerical (or if the operation can be performed on this value)?
You can apply the validator to just the int value like so: from pydantic import BaseModel, Field from typing import Annotated class Foo(BaseModel): bar: str | list | Annotated[int, Field(ge=2)] foo_instance = Foo(bar="asdf") foo_instance = Foo(bar=4)
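A quick usage check, added for illustration and not taken from the original answer: with this annotation the ge=2 constraint only participates in the int branch of the union, so strings and lists pass untouched while a too-small integer should be rejected (the exact error text depends on the installed pydantic version).
from pydantic import ValidationError

Foo(bar="asdf")    # ok, validated as str
Foo(bar=[1, 2])    # ok, validated as list
Foo(bar=4)         # ok, int and >= 2
try:
    Foo(bar=1)     # expected to fail: the int branch requires ge=2 and 1 is not coerced to str
except ValidationError as e:
    print(e)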
2
1
79,598,629
2025-4-29
https://stackoverflow.com/questions/79598629/double-bar-stacked-bar-plot-in-plotly-dash
I'm trying to create a double bar stacked bar chart using plotly. I found this code: from plotly import graph_objects as go data = { "original":[15, 23, 32, 10, 23], "model_1": [4, 8, 18, 6, 0], "model_2": [11, 18, 18, 0, 20], "labels": [ "feature", "question", "bug", "documentation", "maintenance" ] } fig = go.Figure( data=[ go.Bar( name="Original", x=data["labels"], y=data["original"], offsetgroup=0, ), go.Bar( name="Model 1", x=data["labels"], y=data["model_1"], offsetgroup=1, ), go.Bar( name="Model 2", x=data["labels"], y=data["model_2"], offsetgroup=1, base=data["model_1"], ) ], layout=go.Layout( title="Issue Types - Original and Models", yaxis_title="Number of Issues" ) ) Source: https://dev.to/fronkan/stacked-and-grouped-bar-charts-using-plotly-python-a4p This works, but I need to take it one step further if possible. I need the various models to able to appear on either side of each set of bars as I'm adding another parameter (yes on the left and no on the right). In the example, blue is always on the left bar and orange and/or green are always on the right. I'm looking for a way so that any of those colors could appear on both the left and right of a set of bars, but with different values. i.e. in the example feature has a blue bar count of 15, but maybe on the right side of feature blue has a value of 5.
you can use dynamic stacking with calculated base values and consistent color coding, maintaining exactly 10 bars -5 categories Γ— 2 sides- while supporting variable component combinations per bar. also chekc this out stacked + grouped bar chart from plotly import graph_objects as go data = { "left": [ {"original": 15, "model_1": 4, "model_2": 11}, {"original": 23, "model_1": 8, "model_2": 18}, {"original": 32, "model_1": 18, "model_2": 18}, {"original": 10, "model_1": 6, "model_2": 0}, {"original": 23, "model_1": 0, "model_2": 20} ], "right": [ {"original": 7, "model_1": 3, "model_2": 5}, {"original": 12, "model_1": 6, "model_2": 8}, {"original": 15, "model_1": 8, "model_2": 10}, {"original": 5, "model_1": 4, "model_2": 0}, {"original": 10, "model_1": 2, "model_2": 12} ], "labels": [ "feature", "question", "bug", "documentation", "maintenance" ] } fig = go.Figure() colors = { "original": "#636EFA", "model_1": "#EF553B", "model_2": "#00CC96" } def add_stacked_bar(side, offset_group): side_data = data[side] for i, category in enumerate(data["labels"]): current_base = 0 for model_type in ["original", "model_1", "model_2"]: value = side_data[i].get(model_type, 0) if value > 0: fig.add_trace(go.Bar( name=f"{model_type.replace('_', ' ').title()} ({side.title()})", x=[category], y=[value], offsetgroup=offset_group, base=current_base, marker_color=colors[model_type], legendgroup=model_type, showlegend=(i == 0) )) current_base += value add_stacked_bar("left", 0) add_stacked_bar("right", 1) fig.update_layout( title="Issue Types - Original and Models", yaxis_title="Number of Issues", barmode='group', legend_title="Model Types", legend=dict( orientation="h", yanchor="bottom", y=1.02, xanchor="right", x=1 ) ) fig.show()
3
2
79,598,228
2025-4-29
https://stackoverflow.com/questions/79598228/how-could-i-zoom-in-on-a-generated-mandelbrot-set-without-consuming-too-many-res
I am trying to make a Mandelbrot set display, with the following code: import numpy as np import matplotlib.pyplot as plt plt.rcParams['toolbar'] = 'None' def mandelbrot(c, max_iter): z = 0 for n in range(max_iter): if abs(z) > 2: return n z = z*z + c return max_iter def mandelbrot_set(xmin, xmax, ymin, ymax, width, height, max_iter): r1 = np.linspace(xmin, xmax, width) r2 = np.linspace(ymin, ymax, height) n3 = np.empty((width, height)) for i in range(width): for j in range(height): n3[i, j] = mandelbrot(r1[i] + 1j*r2[j], max_iter) return n3.T # Settings xmin, xmax, ymin, ymax = -2.0, 1.0, -1.5, 1.5 width, height = 800, 800 max_iter = 256 # Generate Mandelbrot set mandelbrot_image = mandelbrot_set(xmin, xmax, ymin, ymax, width, height, max_iter) # Window fig = plt.figure(figsize=(5, 5)) fig.canvas.manager.set_window_title('Mandelbrot Set') ax = fig.add_axes([0, 0, 1, 1]) # Fill the whole window ax.set_axis_off() # Show fractal ax.imshow(mandelbrot_image, extent=(xmin, xmax, ymin, ymax), cmap='hot') plt.show() How could I zoom in on the fractal continuously, without taking up too many resources? I am running on a mid-range laptop, and it currently takes a long time to generate the fractal. Is there a faster way to do this when implementing a zoom feature?
You're using Python code to handle single NumPy numbers. That's the worst way. Would already be about twice as fast if you used Python numbers instead, using .tolist(): r1 = np.linspace(xmin, xmax, width).tolist() r2 = np.linspace(ymin, ymax, height).tolist() But it's better to properly use NumPy, e.g., work on all pixels in parallel, keeping track of the values (and their indices) that still have abs ≀ 2: def mandelbrot_set(xmin, xmax, ymin, ymax, width, height, max_iter): r1 = np.linspace(xmin, xmax, width) r2 = np.linspace(ymin, ymax, height) n3 = np.empty(width * height) z = np.zeros(width * height) c = np.add.outer(r1, 1j*r2).flatten() i = np.arange(width * height) for n in range(max_iter): outside = np.abs(z) > 2 n3[i[outside]] = n inside = ~outside z = z[inside] c = c[inside] i = i[inside] z = z*z + c n3[i] = max_iter return n3.reshape((width, height)).T This now takes me about 0.17 seconds instead of about 6.7 seconds with your original.
2
2
79,598,073
2025-4-29
https://stackoverflow.com/questions/79598073/tk-canvas-telepromter-text-transparency-problems
I have a fullscreen window with a fullscreen canvas. First I place a fullscreen background image in this canvas. canvas = tk.Canvas(window, bg="white", bd=0) canvas.pack(fill=tk.BOTH, expand=True) canvas.update() image = Image.open('bild.jpg') newimage = image.resize((canvas.winfo_width(),canvas.winfo_height()),Image.LANCZOS) photo = ImageTk.PhotoImage(newimage, master=canvas) canvas.create_image(0, 0, anchor="nw", image=photo) canvas.update() Then I place a headline text on the top right and below this a longer text. The text is much longer than the screen height. rectangle_width = int(canvas.winfo_width() * 0.45) rectangle_x = canvas.winfo_width() - rectangle_width headblock = canvas.create_text( rectangle_x+10, 10, anchor='nw', text=headline, font=('Helvetica', 18, 'bold'), fill='black', width=rectangle_width - 20 ) x1, y1, x2, y2 = canvas.bbox(headblock) textblock = canvas.create_text( rectangle_x+10, y2+10, anchor='nw', text=fulltext, font=('Helvetica', 14, 'normal'), fill='black', width=rectangle_width - 20 ) canvas.update() Now I wan't to animate the long text, like a telepromter, moving slowly upwards, so that viewers can read the full text. This works fine with canvas.move() def animate(): canvas.move(textblock, 0, -1) x1, y1, x2, y2 = canvas.bbox(textblock) if y2 > canvas.winfo_height(): window.after(60, animate) But while the textblock is moving up it is shown behind the headline. The problem is the background image. That should always be visible. If I place headline and text in different objects, labels, canvas, whatever, they are not transparent anymore. Has anyone a good idea, how move the only the long text, while the image is always visible and the headline stays static?
You can use another Canvas widget as the telepromter with a cropped image from the background image that makes it looks like transparent, then scroll the text inside it: import tkinter as tk from PIL import Image, ImageTk, ImageGrab headline = 'Headline' with open(__file__) as f: fulltext = f.read() window = tk.Tk() window.geometry('800x600') canvas = tk.Canvas(window, bg="white", bd=0) canvas.pack(fill=tk.BOTH, expand=True) canvas.update() image = Image.open('lena.jpg') newimage = image.resize((canvas.winfo_width(),canvas.winfo_height()),Image.LANCZOS) photo = ImageTk.PhotoImage(newimage, master=canvas) canvas.create_image(0, 0, anchor="nw", image=photo) #canvas.update() rectangle_width = int(canvas.winfo_width() * 0.45) rectangle_x = canvas.winfo_width() - rectangle_width headblock = canvas.create_text( rectangle_x+10, 10, anchor='nw', text=headline, font=('Helvetica', 18, 'bold'), fill='black', width=rectangle_width-20 ) x1, y1, x2, y2 = canvas.bbox(headblock) rectangle_height = canvas.winfo_height() - y2 - 10 telepromter = tk.Canvas(canvas, width=rectangle_width-20, height=rectangle_height-10, highlightthickness=0) canvas.create_window(rectangle_x+10, y2+10, window=telepromter, anchor='nw') # get the image at the required region img = newimage.crop((rectangle_x+10, y2+10, newimage.width, newimage.height)) bg = ImageTk.PhotoImage(img) telepromter.create_image(0, 0, image=bg, anchor='nw') textblock = telepromter.create_text( 0, 0, anchor='nw', text=fulltext, font=('Helvetica', 14, 'normal'), fill='black', width=rectangle_width ) def animate(): telepromter.move(textblock, 0, -1) x1, y1, x2, y2 = telepromter.bbox(textblock) if y2 > telepromter.winfo_height(): window.after(50, animate) animate() window.mainloop() Result:
1
1
79,598,239
2025-4-29
https://stackoverflow.com/questions/79598239/how-is-an-instance-attribute-of-the-same-type-as-the-instance-type-hinted
I am trying to assign a variable to an instance of a class such that the variable is of the same type. I want to use the instance itself in the construction of the variable. In order to be compatible with inheritance, I want to type hint it as Self rather than the class. The following works: class Foo: var: "Foo" def bar(self) -> None: self.var = self However, var is marked as being of type Foo rather than Self. What I want is: from typing import Self class Foo: var: Self def bar(self) -> None: self.var = self In this case, mypy gives me the following message: error: Incompatible types in assignment (expression has type "Foo", variable has type "Self") [assignment] Clearly, self is not regarded as being of type Self but only of type Foo.
PEP 673 (ref https://peps.python.org/pep-0673/) introduced typing.Self in Python 3.11 to let you write:
from typing import Self

class Foo:
    def clone(self) -> Self:
        ...
so that in subclasses clone() is recognized as returning the subclass type. But at the moment mypy only special-cases Self in method signatures, not in attribute annotations at the class level. So when you write:
class Foo:
    var: Self

    def bar(self) -> None:
        self.var = self
mypy still treats Self in var: Self as "the current class" (i.e. Foo) and complains when you assign self (which it types as plain Foo) to something it thinks is Self (the unknown late-bound type). It's essentially a bug/limitation in mypy's handling of late-bound attribute annotations.
The PEP has examples using TypeVar. I suggest combining that with Generic to create a workaround. E.g.,
from typing import Generic, TypeVar

T = TypeVar("T", bound="Foo")

class Foo(Generic[T]):
    var: T  # var is *exactly* whatever subclass type T is

    def bar(self: T) -> None:
        # now mypy knows `self` is of type T, and var is typed T,
        # so this assignment is safe
        self.var = self
Alternatively, if you want a "quick fix", just annotate var with the class name as you already did in your OP.
3
2
79,597,696
2025-4-29
https://stackoverflow.com/questions/79597696/how-to-decrypt-a-value-in-python-that-was-encrypted-using-php-openssl
I have a value that was encrypted using PHP openssl using cipher AES-CBC-256 but the passphrase for the final value was also encrypted using the same method. openssl_encrypt($key, $cipher, $passphrase, 0, $iv) I need to be able to unencrypt this data using Python but I'm running into block-size issues. Here's some of the code I have so far. I have tested decrypting this in PHP and it works properly. My final value in this example should be 'Jimmy'. import base64 from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad localKey = base64.b64decode('Po0KPxyF') localIv = base64.b64decode('s8W+/a4jkp9mhO3NkCL7Yg==') encrypted_value = base64.b64decode('hl5n6Nq5QYtgKIyLEVCupA==') encrypted_key = base64.b64decode('MGRHRFlaMzhCR0lxb2VHS1JHQXcrWkV2bkJpNWFZb3cybW9iQW5KYTlOU0xKK1FHc2pPUW1MUE9JRU5zTXN1Rg==') encrypted_iv = base64.b64decode('J31SrExr7KKIOertYIPhpQ==') # First need to encrypted key that uses the local key as the passphrase cipher_key = AES.new(pad(localKey,16), AES.MODE_CBC, localIv) decrypted_key = cipher_key.decrypt(encrypted_key) # Then decrypted the final value using the newly decrypted key cipher_key = AES.new(unpad(decrypted_key,16), AES.MODE_CBC, encrypted_iv) decrypted_value = cipher_key.decrypt(encrypted_value)
I managed to figure out the PHP code that does the decryption: $localKey = base64_decode('Po0KPxyF'); $localIv = base64_decode('s8W+/a4jkp9mhO3NkCL7Yg=='); $encrypted_value = base64_decode('hl5n6Nq5QYtgKIyLEVCupA=='); $encrypted_key = base64_decode('MGRHRFlaMzhCR0lxb2VHS1JHQXcrWkV2bkJpNWFZb3cybW9iQW5KYTlOU0xKK1FHc2pPUW1MUE9JRU5zTXN1Rg=='); $encrypted_iv = base64_decode('J31SrExr7KKIOertYIPhpQ=='); $decrypted_key = openssl_decrypt(base64_decode($encrypted_key), 'aes-256-cbc', $localKey, OPENSSL_RAW_DATA, $localIv); $decrypted_value = openssl_decrypt($encrypted_value, 'aes-256-cbc', $decrypted_key, OPENSSL_RAW_DATA, $encrypted_iv); echo $decrypted_value; Output: s:5:"Jimmy"; (Demo) Remarks: $localKey is shorter than 256 bits, so openssl_decrypt() pads it with null bytes $encrypted_key was base64-encoded twice, so you need to decode it twice The value contained in $decrypted_value was serialized with serialize() Here is a Python implementation (based on PyCryptodome): import base64 from Crypto.Cipher import AES from Crypto.Util.Padding import unpad localKey = base64.b64decode('Po0KPxyF') localIv = base64.b64decode('s8W+/a4jkp9mhO3NkCL7Yg==') encrypted_value = base64.b64decode('hl5n6Nq5QYtgKIyLEVCupA==') encrypted_key = base64.b64decode('MGRHRFlaMzhCR0lxb2VHS1JHQXcrWkV2bkJpNWFZb3cybW9iQW5KYTlOU0xKK1FHc2pPUW1MUE9JRU5zTXN1Rg==') encrypted_iv = base64.b64decode('J31SrExr7KKIOertYIPhpQ==') # First need to encrypted key that uses the local key as the passphrase cipher_key = AES.new(localKey.ljust(32, b'\0'), AES.MODE_CBC, localIv) decrypted_key = cipher_key.decrypt(base64.b64decode(encrypted_key)) # Then decrypted the final value using the newly decrypted key cipher_key = AES.new(unpad(decrypted_key,16), AES.MODE_CBC, encrypted_iv) decrypted_value = cipher_key.decrypt(encrypted_value) value = unpad(decrypted_value,16).decode() print(value) Output: s:5:"Jimmy"; (Demo)
4
7
79,616,049
2025-5-11
https://stackoverflow.com/questions/79616049/streamlit-aggrid-multiselect-preview-values-were-undefinedundefined
I'm using streamlit-aggrid to display table. In the C column i use multiselect feature, select items and results are ok, but during selecting ,the preview values were undefined(undefined). I prefer to display the preview value like the result ex:Pink;Purple. I use followin python code in streamlit framwork. Select Items are dictionary value. When select from dropdownbox ,i want to display name&code. But when selected,cell values are names with ";" concated. import streamlit as st import pandas as pd from st_aggrid import AgGrid, GridOptionsBuilder, JsCode df = pd.DataFrame( "", index=range(10), columns=list("c"), ) gb = GridOptionsBuilder.from_dataframe(df) gb.configure_default_column(editable=True) dic= [ { "name": 'Pink', "code": '#FFC0CB' }, { "name": 'Purple', "code": '#A020F0' }, { "name": 'Blue', "code": '#0000FF' }, { "name": 'Green', "code": '#008000' }, ] st.text(dic) gb.configure_column( "c", cellEditor="agRichSelectCellEditor", valueFormatter=JsCode( """function(params) { if (Array.isArray(params.value)) { let stringjoin=""; params.value.forEach((el) => {if (el!=null && el !== "") { stringjoin +=";"+ el.name}}); return stringjoin.slice(1) } return params.value.name; }""" ), valueParser=JsCode("""function(params) { console.log(params); const { newValue } = params.newValue; if (newValue == null || newValue === "") { return null; } if (Array.isArray(newValue)) { let stringjoin=""; newValue.forEach((el) => {stringjoin +=";"+ el.name}); return stringjoin.slice(1) } return newValue; }"""), cellEditorParams={ "values": dic, "multiSelect": "true", "suppressMultiSelectPillRenderer": "true", "formatValue": JsCode("""function(v) {if (v !== null || v !== "") { return `${v.name} (${v.code})` } } """), "parseValue": JsCode("""function(v) { let stringjoin=""; console.log(v); forEach((el) => {stringjoin +=";"+ el.name}); return stringjoin.slice(1) }"""), "allowTyping": "true", "filterList": "true", "valueListMaxHeight": 220, "searchType": "matchAny", }, ) gb.configure_grid_options(enableRangeSelection=True) go = gb.build() response = AgGrid( df, gridOptions=go, enable_enterprise_modules=True, key="grid1", allow_unsafe_jscode=True, )
import streamlit as st import pandas as pd from st_aggrid import AgGrid, GridOptionsBuilder, JsCode df = pd.DataFrame( "", index=range(5), columns=list("c"), ) df["c"]=[["Pink (#FFC0CB)","Purple (#A020F0)"], ["Purple (#A020F0)"], ["Blue (#0000FF)"], ["Green (#008000)"], ["Pink (#FFC0CB)"]] df["c"] = df["c"].astype("object") gb = GridOptionsBuilder.from_dataframe(df) gb.configure_default_column(editable=True) dic= [ { "name": 'Pink', "code": '#FFC0CB' }, { "name": 'Purple', "code": '#A020F0' }, { "name": 'Blue', "code": '#0000FF' }, { "name": 'Green', "code": '#008000' }, ] st.text(dic) gb.configure_column( "c", cellEditor="agRichSelectCellEditor", # singleClickEdit= "true", valueFormatter=JsCode( """function(params) { const { value } = params; if (Array.isArray(value)) { return value.map(item => item.toString().split("(")[0]).join(";"); } return value; }""" ), valueParser=JsCode("""function(params) { console.log(params); const { newValue } = params; if (newValue == null || newValue === "") { return null; } if (Array.isArray(newValue)) { return newValue; } return params.newValue.split(";"); }"""), cellEditorParams={ "values": list(map(lambda item: item["name"]+" ("+item["code"]+")", dic)), "multiSelect": "true", "allowTyping": "true", "filterList": "true", "valueListMaxHeight": 220, "searchType": "matchAny", }, ) gb.configure_grid_options(enableRangeSelection=True) go = gb.build() response = AgGrid( df, gridOptions=go, enable_enterprise_modules=True, key="grid1", allow_unsafe_jscode=True, ) initial when edited: end edit
1
0
79,617,897
2025-5-12
https://stackoverflow.com/questions/79617897/how-can-i-efficiently-find-integer-solutions-x-%e2%89%a0-y-to-a-diophantine-equation-u
I'm trying to write a Python script to search for integer solutions (x, y) with x β‰  y to the following Diophantine equation: (y + n)^4 - y^4 = (x + k)^4 - x^4 Here: n and k are fixed small positive integers (like n = 1, k = 2), x and y range from 1 to a large number (e.g., 1 to 1,000,000), I only want solutions where x β‰  y. Aside from n = 74 and k = 24, for which we get positive integer solutions as desired, how can this code for instance be better written to generate any positive integer solutions to n,k,x,y? Or if possible what other integer solutions exist other than n = 74 and k = 24 ? I wrote a naive brute-force solution that checks every pair (x, y) in the range, but it's way too slow: n = 1 k = 2 limit = 10**6 for x in range(1, limit): rhs = (x + k)**4 - x**4 for y in range(1, limit): if x == y: continue lhs = (y + n)**4 - y**4 if lhs == rhs: print(f"Match found: x = {x}, y = {y}")
Efficient equivalent rhs and lhs are increasing with increasing x and y, so you can go through both in parallel, always advancing the smaller one: n = 1 k = 2 limit = 10**6 x = y = 1 while x < limit and y < limit: rhs = (x + k)**4 - x**4 lhs = (y + n)**4 - y**4 if lhs == rhs: if x != y: print(f"Match found: x = {x}, y = {y}") x += 1 y += 1 elif lhs < rhs: y += 1 else: x += 1 Multiple n and k And to cover more values of n and k, you could merge the streams for all of them: from heapq import merge limit = 10**4 def values(k): for x in range(1, limit): yield (x + k)**4 - x**4, x, k lhs = None for rhs, x, k in merge(*map(values, range(1, 101))): if lhs == rhs: print(f'{n=}, {k=}, {x=}, {y=}') lhs, y, n = rhs, x, k That takes about 1.5 seconds and barely any memory to find the same solutions as Kelly's search in the same ranges: n=74, k=24, x=134, y=59 n=75, k=25, x=133, y=59 n=27, k=5, x=497, y=271 n=63, k=35, x=257, y=193 n=64, k=36, x=256, y=193 n=54, k=10, x=994, y=542 n=81, k=15, x=1491, y=813 Unlimited You could also get rid of the limits (the one for x and y as well as the one for k and n) and leave it running as long as you have patience. Unlimited x is as easy as using for x in itertools.count(1): in my values function. Unlimited k: Instead of the merge function, use your own heap to merge the streams for the different k-values yourself. Start with just one, and whenever the stream for the current largest k goes from its x=1 to its x=2, add the stream for k+1. from heapq import heappush, heappop from time import time end_time = time() + 3 heap = [] def push(k, x): heappush(heap, ((x + k)**4 - x**4, k, x)) push(1, 1) lhs = None while time() < end_time: rhs, k, x = heappop(heap) if lhs == rhs: print(f'{n=}, {k=}, {x=}, {y=}') push(k, x + 1) if x == 1: push(k + 1, 1) lhs, y, n = rhs, x, k I used a three seconds time limit there and it found 84 solutions: n=24, k=74, x=59, y=134 n=25, k=75, x=59, y=133 n=12, k=150, x=7, y=227 n=5, k=27, x=271, y=497 n=82, k=220, x=7, y=157 n=35, k=63, x=193, y=257 n=36, k=64, x=193, y=256 n=48, k=148, x=118, y=268 n=50, k=150, x=118, y=266 n=24, k=300, x=14, y=454 n=28, k=256, x=103, y=514 n=72, k=222, x=177, y=402 n=75, k=225, x=177, y=399 n=10, k=54, x=542, y=994 n=164, k=440, x=14, y=314 n=70, k=126, x=386, y=514 n=72, k=128, x=386, y=512 n=36, k=450, x=21, y=681 n=204, k=226, x=271, y=298 n=73, k=281, x=222, y=558 n=183, k=411, x=103, y=359 n=96, k=296, x=236, y=536 n=100, k=300, x=236, y=532 n=128, k=336, x=222, y=503 n=48, k=600, x=28, y=908 n=27, k=577, x=76, y=1176 n=120, k=370, x=295, y=670 n=125, k=375, x=295, y=665 n=15, k=81, x=813, y=1491 n=246, k=660, x=21, y=471 n=105, k=189, x=579, y=771 n=108, k=192, x=579, y=768 n=56, k=512, x=206, y=1028 n=60, k=750, x=35, y=1135 n=144, k=444, x=354, y=804 n=39, k=119, x=878, y=1342 n=150, k=450, x=354, y=798 n=20, k=108, x=1084, y=1988 n=328, k=880, x=28, y=628 n=168, k=518, x=413, y=938 n=140, k=252, x=772, y=1028 n=175, k=525, x=413, y=931 n=144, k=256, x=772, y=1024 n=72, k=900, x=42, y=1362 n=408, k=452, x=542, y=596 n=146, k=562, x=444, y=1116 n=366, k=822, x=206, y=718 n=192, k=592, x=472, y=1072 n=200, k=600, x=472, y=1064 n=84, k=768, x=309, y=1542 n=84, k=1050, x=49, y=1589 n=256, k=672, x=444, y=1006 n=25, k=135, x=1355, y=2485 n=410, k=1100, x=35, y=785 n=175, k=315, x=965, y=1285 n=180, k=320, x=965, y=1280 n=550, k=1100, x=76, y=653 n=216, k=666, x=531, y=1206 n=225, k=675, x=531, y=1197 n=96, k=1200, x=56, y=1816 n=5, k=679, x=604, y=5048 n=384, k=464, x=878, y=997 n=54, k=1154, x=152, 
y=2352 n=240, k=740, x=590, y=1340 n=250, k=750, x=590, y=1330 n=30, k=162, x=1626, y=2982 n=492, k=1320, x=42, y=942 n=210, k=378, x=1158, y=1542 n=216, k=384, x=1158, y=1536 n=108, k=1350, x=63, y=2043 n=112, k=1024, x=412, y=2056 n=264, k=814, x=649, y=1474 n=612, k=678, x=813, y=894 n=275, k=825, x=649, y=1463 n=219, k=843, x=666, y=1674 n=549, k=1233, x=309, y=1077 n=35, k=189, x=1897, y=3479 n=120, k=1500, x=70, y=2270 n=288, k=888, x=708, y=1608 n=78, k=238, x=1756, y=2684 n=574, k=1540, x=49, y=1099 n=300, k=900, x=708, y=1596 n=245, k=441, x=1351, y=1799 n=192, k=460, x=1324, y=1997 Note My second and third solutions compare the current value only with the one previous value. If a value appears even three or more times, then I'm only printing the matches of neighbors. For example if (x1,k1), (x2,k2) and (x3,k3) all match, I'm not showing the match of (x1,k1) with (x3,k3).
2
3
79,611,948
2025-5-8
https://stackoverflow.com/questions/79611948/imputing-and-adding-rows-to-dataframe-using-polars-expressions
I have a dataframe with incomplete values as below - in particular ages with corresponding years, and I would like to make it square (i.e., all three cust_id to have correctly imputed values for age in all three years, i.e. I want to turn this: df = pl.DataFrame({ "cust_id": [1, 2 ,2, 2, 3, 3], "year": [2000,1999,2000,2001,1999,2001], "cust_age": [21,31,32,33,44,46] }) into this: df = pl.DataFrame({ "cust_id": [1, 1, 1, 2 ,2, 2, 3, 3, 3 ], "year": [1999,2000,2001,1999,2000,2001,1999,2000,2001], "cust_age": [20,21,22,31,32,33,44,45,46] }) I know I can iteratively create new dataframes and then join or shift values, and potentially use apply and/or eval_map but I was hoping to find something quick and idiomatic to polars. Thanks in advance for your help!
Here is a possible approach # create all (year, cust_id) combinations index = df.select("year").unique().join(df.select("cust_id").unique(), how="cross") # compute the birth year of each customer as an expression birth_year = (pl.col("year") - pl.col("cust_age")).drop_nulls().first().over("cust_id") # use it to fill the missing customer ages res = ( index.join(df, on=["year", "cust_id"], how="left") .select( "cust_id", "year", (pl.col("year") - birth_year).alias("cust_age") ) .sort("cust_id", "year") # optional ) print(res) Output: >>> res shape: (9, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ cust_id ┆ year ┆ cust_age β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ══════════║ β”‚ 1 ┆ 1999 ┆ 20 β”‚ β”‚ 1 ┆ 2000 ┆ 21 β”‚ β”‚ 1 ┆ 2001 ┆ 22 β”‚ β”‚ 2 ┆ 1999 ┆ 31 β”‚ β”‚ 2 ┆ 2000 ┆ 32 β”‚ β”‚ 2 ┆ 2001 ┆ 33 β”‚ β”‚ 3 ┆ 1999 ┆ 44 β”‚ β”‚ 3 ┆ 2000 ┆ 45 β”‚ β”‚ 3 ┆ 2001 ┆ 46 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
3
79,617,786
2025-5-12
https://stackoverflow.com/questions/79617786/line-separator-python-r-n
We have an English-Latin dictionary in our hands, that is, a list of words in English and their translations into Latin (there may be several translations) in the form of a file with the following contents: apple - malum, pomum, popula fruit - baca, bacca, popum punishment - malum, multa It is necessary to write a script that reads a dictionary from the files whose paths are passed to it and creates a Latin-English dictionary from it. The result should be displayed on the screen. So for the lines described above, the screen should display: baca - fruit bacca - fruit malum - apple, punishment multa - punishment pomum - apple popula - apple popum - fruit If different files contain different translations for the same words, multiple translations must be combined, leaving only unique words. And there are autotests: There is a line there: student_output_lines = result.stdout.decode().strip() I wrote the solution a long time ago, but for two days now I can't get rid of \r. When comparing the two outputs (mine == expected), I get an error: E AssertionError: assert 'baca - fruit\r\nbacca - fruit\r\nmalum - apple, punishment\r\nmulta - punishment\r\npomum - apple\r\npopula - apple\r\npopum - fruit' == 'baca - fruit\nbacca - fruit\nmalum - apple, punishment\nmulta - punishment\npomum - apple\npopula - apple\npopum - fruit' My code: import sys for filename in sys.argv[1:]: with open(filename, 'r', encoding='utf-8') as f: res_dict = {} for s in f.readlines(): cur_word = s.split()[0] translations = s.strip().replace(',', '').split()[2:] for i in translations: if i in res_dict: res_dict[i].append(cur_word) else: res_dict.setdefault(i, [cur_word]) res = [] for k, v in sorted(res_dict.items()): res.append(k + ' - ' + ', '.join(v)) print('\n'.join(res).replace('\r\n', '')) The input files are plain text (.txt): apple - malum, pomum, popula fruit - baca, bacca, popum punishment - malum, multa That is, I always end up with this disgusting \r, no matter what I do or replace. In the PyCharm settings, I also changed the line separator to \n, both in the project (bottom right) and in Settings -> Code Style -> Line separator. The problem always arises as soon as I add \n anywhere in the code. Help please! Code from autotest: def test_from_file(test_input_file, expected_output_file): """ The test verifies the correctness of the script output. Files from the 'test/resources/task3' folder are submitted for input: - test_input_1.txt - test_input_2.txt """ result = subprocess.run( ["python", os.path.join(SOLUTION_FOLDER_PATH, "task3.py"), test_input_file], stdout=subprocess.PIPE, ) student_output = result.stdout.decode().strip() with open(expected_output_file, "r") as expected_output_file: expected_output_content = expected_output_file.read().strip() assert student_output == expected_output_content Minimal reproducible example: import sys for filename in sys.argv[1:]: words = ['Hello, ', 'World!'] print('\n'.join(words)) 3 examples: '' in join: AssertionError: assert 'Hello, World!' '\r' in join: AssertionError: assert 'Hello, \rWorld!' '\n' in join: AssertionError: assert 'Hello, \r\nWorld!' I add \n, and \r comes out again.
The test itself is not OS-portable. It should pass text=True to subprocess.run, so that result.stdout is already decoded text with universal newline handling (any \r\n becomes \n). Then .decode() is not required on the student output either: result = subprocess.run( ["python", os.path.join(SOLUTION_FOLDER_PATH, "task3.py"), test_input_file], stdout=subprocess.PIPE, text=True # add text=True ) student_output = result.stdout.strip() # remove .decode() If you can't change the test (or convince the test writer to fix it), you may (depending on your environment) change the default TextIOWrapper for sys.stdout to not translate newlines with this line added to your script: sys.stdout = io.TextIOWrapper(sys.stdout.buffer, newline=''). Some IDEs redirect sys.stdout and don't implement sys.stdout.buffer, or wrap sys.stdout in a different manner than io.TextIOWrapper. You can test for this from a REPL: Command line Python example (redirect line above works): Python 3.13.3 (tags/v3.13.3:6280bb5, Apr 8 2025, 14:47:33) [MSC v.1943 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.>>> import sys >>> sys.stdout <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> >>> sys.stdout.buffer <_io.BufferedWriter name='<stdout>'> PythonWin (IDE from pywin32 module, redirect line doesn't work): PythonWin 3.13.3 (tags/v3.13.3:6280bb5, Apr 8 2025, 14:47:33) [MSC v.1943 64 bit (AMD64)] on win32. Portions Copyright 1994-2018 Mark Hammond - see 'Help/About PythonWin' for further copyright information. >>> import sys >>> sys.stdout <pywin.framework.interact.DockedInteractiveView object at 0x0000017658511550> >>> sys.stdout.buffer Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:\dev\Python313\Lib\site-packages\pythonwin\pywin\mfc\object.py", line 24, in __getattr__ return getattr(o, attr) AttributeError: 'PyCCtrlView' object has no attribute 'buffer'
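If the script always runs under a standard CPython console interpreter (so sys.stdout is a regular io.TextIOWrapper), a slightly shorter variant of the wrapper trick above is to reconfigure the existing stream. This is only a sketch; the hasattr guards are there because, as noted above, some IDEs replace sys.stdout with a different object:

import io
import sys

# Stop "\n" from being translated to "\r\n" on Windows before anything is printed.
if hasattr(sys.stdout, "reconfigure"):      # io.TextIOWrapper, Python 3.7+
    sys.stdout.reconfigure(newline="\n")
elif hasattr(sys.stdout, "buffer"):         # fall back to wrapping the raw buffer
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, newline="\n")

print("baca - fruit\nbacca - fruit")        # now written with bare "\n" line endings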
1
1
79,618,567
2025-5-12
https://stackoverflow.com/questions/79618567/how-to-cache-elements-to-increase-the-runtime-performance-with-lxml-pythin-libra
On the lxml.de website https://lxml.de/performance.html I see the following statement: A way to improve the normal attribute access time is static instantiation of the Python objects, thus trading memory for speed. Just create a cache dictionary and run: cache[root] = list(root.iter()) after parsing and: del cache[root] Can anyone provide a suitable Python code example of how the above mechanism can be used in a Python function?
Setting a variable like cache[root] = list(root.iter()) will effectively cache objects in memory, as demonstrated by a simple test. The cache mechanism is very simple: the whole document tree is loaded in memory, and elements can be obtained in different ways but point to the same memory address. Given an XML document, get the id of an object before and after setting the cache; after setting the cache, the id is the same as the cached element's: from lxml import etree, objectify otree = objectify.parse('tmp2.xml') root = otree.getroot() print(id(root.Form_1.Country), root.Form_1.Country) cache = {} cache[root] = list(otree.iter()) print(id(cache[root][3]), cache[root][3]) print(id(root.Form_1.Country), root.Form_1.Country) # both point to the same object in memory print(root.Form_1.Country is cache[root][3]) # the object can be obtained in different ways but point to the same object in the cache ele1 = root.xpath('(//Form_1/Country)[1]')[0] print(ele1 is cache[root][3]) Result 140257476833728 AFG 140257476833280 AFG 140257476833280 AFG True True As explained in the link posted by the OP, it's trading memory for speed: A way to improve the normal attribute access time is static instantiation of the Python objects, thus trading memory for speed Test XML: <Forms> <greeting>Hello, world!</greeting> <Form_1> <Country>AFG</Country> <Country>AFG</Country> <Country>IND</Country> </Form_1> <Form_1> <Country>IND</Country> <Country>USA</Country> </Form_1> </Forms>
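To address the "in a Python function" part of the question directly, here is a minimal sketch, not taken from the answer above, that wraps the parse-and-cache step and the cache release in two helpers. The file name tmp2.xml and the Country tag are reused from the example above; the function and variable names are made up:

from lxml import etree

_cache = {}

def parse_with_cache(path):
    # Parse once, then materialize every element proxy so that repeated
    # attribute/child access reuses the same Python objects.
    tree = etree.parse(path)
    root = tree.getroot()
    _cache[root] = list(root.iter())
    return root

def release(root):
    # Drop the cached proxies when the document is no longer needed.
    del _cache[root]

root = parse_with_cache("tmp2.xml")
for country in root.iter("Country"):
    print(country.text)   # fast repeated access while the proxies are cached
release(root)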
2
1
79,608,752
2025-5-6
https://stackoverflow.com/questions/79608752/how-to-add-space-between-bubbles-and-increase-thier-size
I have a bubble chart developed using the plotly library, and here's the data: import plotly.express as px import pandas as pd data = { "lib_acte":["test 98lop1", "test9665 opp1", "test QSDFR1", "test ABBE1", "testtest21","test23"], "x":[12.6, 10.8, -1, -15.2, -10.4, 1.6], "y":[15, 5, 44, -11, -35, -19], "circle_size":[375, 112.5, 60,210, 202.5, 195], "color":["green", "green", "green", "red", "red", "red"] } #load data into a DataFrame object: df = pd.DataFrame(data) fig = px.scatter( df, x="x", y="y", color="color", size='circle_size', text="lib_acte", hover_name="lib_acte", color_discrete_map={"red": "red", "green": "green"}, title="chart" ) fig.update_traces(textposition='middle right', textfont_size=14, textfont_color='black', textfont_family="Inter", hoverinfo="skip") newnames = {'red':'red title', 'green': 'green title'} fig.update_layout( { 'yaxis': { "range": [-200, 200], 'zerolinewidth': 2, "zerolinecolor": "red", "tick0": -200, "dtick":45, }, 'xaxis': { "range": [-200, 200], 'zerolinewidth': 2, "zerolinecolor": "gray", "tick0": -200, "dtick": 45, # "scaleanchor": 'y' }, "height": 800, } ) fig.add_scatter( x=[0, 0, -200, -200], y=[0, 200, 200, 0], fill="toself", fillcolor="gray", zorder=-1, mode="markers", marker_color="rgba(0,0,0,0)", showlegend=False, hoverinfo="skip" ) fig.add_scatter( x=[0, 0, 200, 200], y=[0, -200, -200, 0], fill="toself", fillcolor="yellow", zorder=-1, mode="markers", marker_color="rgba(0,0,0,0)", showlegend=False, hoverinfo="skip" ) fig.update_layout( paper_bgcolor="#F1F2F6", ) fig.show() output: Now what I'm looking for is a way to add space between bubbles when they are tight together, like (test 981op1 and test9665 opp1), and also a way to increase each bubble's size by 4% of its size, for example. Thanks for your help.
To change the size of your bubbles just do something like this: multiplier = 1.04 # made bigger by 4% df["circle_size"] = df["circle_size"]*multiplier But you still need to change the maximum size of the bubbles: Here I changed it to the biggest bubble size: biggest_bubble_size = max(df["circle_size"]) fig = px.scatter( df, x="x", y="y", color="color", size='circle_size', size_max= biggest_bubble_size, #change the maximum size of bubbles text="lib_acte", hover_name="lib_acte", color_discrete_map={"red": "red", "green": "green"}, title="chart" ) Now the bubbles seem too large, so I recommend setting the multiplier to something like this: multiplier = 0.1 And for the points being too close together, just change the graph scale according to the furthest point: maxrange = max(df["x"].max(), df["y"].max()) maxrange = maxrange*1.2 #to give some space to the furthest point number_of_lines = 10 #changes the number of lines in the graph fig.update_layout( { 'yaxis': { "range": [-maxrange, maxrange], 'zerolinewidth': 2, "zerolinecolor": "red", "tick0": -maxrange, "dtick":maxrange/number_of_lines, }, 'xaxis': { "range": [-maxrange, maxrange], 'zerolinewidth': 2, "zerolinecolor": "gray", "tick0": -maxrange, "dtick": maxrange/number_of_lines, # "scaleanchor": 'y' }, "height": 800, } )
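A possible complement to the answer above, not something the answer itself proposes: plotly also lets you control the on-screen bubble size through the marker's sizeref/sizemode settings instead of size_max, which leaves the data values untouched. This is only a sketch reusing the fig and df from the answer; desired_max_px is a made-up name for the target pixel size:

# Map the largest circle_size value to a chosen on-screen size.
desired_max_px = 40
sizeref = 2.0 * df["circle_size"].max() / (desired_max_px ** 2)
fig.update_traces(marker=dict(sizemode="area", sizeref=sizeref, sizemin=4))
fig.show()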
3
0
79,617,933
2025-5-12
https://stackoverflow.com/questions/79617933/multidimensional-coordinate-transform-with-xarray
How can I convert multidimensional coordinates to standard one-dimensional coordinates in order to unify data when using xarray for nc data: import xarray as xr da = xr.DataArray( [[0, 1], [2, 3]], coords={ "lon": (["ny", "nx"], [[30, 40], [40, 50]]), "lat": (["ny", "nx"], [[10, 10], [20, 20]]), }, dims=["ny", "nx"], ) Expected conversion result: xr.DataArray( [[0, 1, np.nan], [np.nan, 2, 3]], coords={ "lat": [10, 20], "lon": [30, 40, 50], })
You can flatten the data into a list of points using xarray.DataArray.stack, extract unique coordinates and reassign values onto a regular grid using the unique coordinate values. import xarray as xr import numpy as np da = xr.DataArray( [[0, 1], [2, 3]], coords={ "lon": (["ny", "nx"], [[30, 40], [40, 50]]), "lat": (["ny", "nx"], [[10, 10], [20, 20]]), }, dims=["ny", "nx"], ) # Flatten flat = da.stack(z=("ny", "nx")) # Extract unique lat_vals = np.unique(da.lat.values) lon_vals = np.unique(da.lon.values) new_da = xr.DataArray( np.full((len(lat_vals), len(lon_vals)), np.nan), coords={"lat": lat_vals, "lon": lon_vals}, dims=["lat", "lon"] ) # Reassign values onto regular grid for i in range(flat.size): lat_i = float(flat.lat.values[i]) lon_i = float(flat.lon.values[i]) val = flat.values[i] new_da.loc[dict(lat=lat_i, lon=lon_i)] = val print(new_da) Output: <xarray.DataArray (lat: 2, lon: 3)> Size: 48B array([[ 0., 1., nan], [nan, 2., 3.]]) Coordinates: * lat (lat) int64 16B 10 20 * lon (lon) int64 24B 30 40 50
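A loop-free variant of the same idea, not part of the answer above: round-trip through pandas and let a (lat, lon) MultiIndex rebuild the regular grid, with NaN filled in automatically. This sketch assumes each (lat, lon) pair occurs at most once, and the name "value" is arbitrary:

# Flatten to a tidy table, index it by the physical coordinates,
# and convert back; missing grid cells become NaN.
flat_df = da.to_dataframe(name="value").reset_index()
regular = flat_df.set_index(["lat", "lon"])["value"].to_xarray()
print(regular)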
2
1
79,618,357
2025-5-12
https://stackoverflow.com/questions/79618357/sqlalchemy-and-psycopg2-pandas-read-sql-query-dict-is-not-a-sequence-error-wi
Package Versions: SQLAlchemy 2.0.40 pandas 2.2.3 psycopg2-binary 2.9.10 I am trying to run a query using pandas' native param substitution, but I can't seem to get it to run without erroring. I tried simplifying the query to: select * FROM public.bq_results br WHERE cast("eventDate" as date) between TO_DATE('%test_start_date', 'YYYYMMDD') AND TO_DATE('%test_end_date', 'YYYYMMDD') limit 10000 but I get error: TypeError: dict is not a sequence when running: df = pd.read_sql_query(query, self.__engine, params={"test_start_date": "20250101", "test_end_date": "20250131"}) where self.__engine = create_engine(f'postgresql://{self.user}:{self.password}@{self.host}:{self.port}/{self.database}')
The documentation of read_sql_query says the following: params : list, tuple or mapping, optional, default: None List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. Eg. for psycopg2, uses %(name)s so use params={'name' : 'value'}. Since you use the psycopg2 driver, the parameters should be written in the %(name)s style, as @JonSG has mentioned. The query should be: select * FROM public.bq_results br WHERE cast("eventDate" as date) between TO_DATE(%(test_start_date)s, 'YYYYMMDD') AND TO_DATE(%(test_end_date)s, 'YYYYMMDD') limit 10000 Hope this works.
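A hedged end-to-end sketch of how the corrected query would be used from pandas; the connection string is a placeholder, and the table and column names are the ones from the question:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host:5432/dbname")  # placeholder DSN

query = """
    SELECT *
    FROM public.bq_results br
    WHERE CAST("eventDate" AS date)
          BETWEEN TO_DATE(%(test_start_date)s, 'YYYYMMDD')
              AND TO_DATE(%(test_end_date)s, 'YYYYMMDD')
    LIMIT 10000
"""

# the dict keys match the %(name)s placeholders in the query
df = pd.read_sql_query(
    query,
    engine,
    params={"test_start_date": "20250101", "test_end_date": "20250131"},
)
print(df.head())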
2
2