question_id
int64
59.5M
79.6M
creation_date
stringdate
2020-01-01 00:00:00
2025-05-14 00:00:00
link
stringlengths
60
163
question
stringlengths
53
28.9k
accepted_answer
stringlengths
26
29.3k
question_vote
int64
1
410
answer_vote
int64
-9
482
79,444,501
2025-2-17
https://stackoverflow.com/questions/79444501/fpdf-header-and-background
I need to create a pdf with header, footer and background color. The following code is generating all 3, but it seems the footer is getting behind the pdf rect: from fpdf import FPDF class PDF(FPDF): def header(self): self.set_font(family='Helvetica', size=8) self.cell(0, 10, 'test_header', align='L') def footer(self): self.set_y(-15) self.set_font(family='Helvetica', size=8) self.cell(0, 10, 'test_footer', align='L') pdf = PDF() pdf.add_page() pdf.set_font("Times", size=12) # BG pdf.set_fill_color(r=249, g=247, b=242) pdf.rect(h=pdf.h, w=pdf.w, x=0, y=0, style="F") With the above, only the footer is visible, but without it, both are visible. How can I achieve the desired outcome?
It looks like the issue is that you're setting the background colour after you draw the page. The way you're doing it paints the background colour over everything, like what would happen if you painted a room without taking the posters off the wall. From a quick google search, FPDF doesn't have a method for modifying the background colour, so it might be best to squeeze the method into a component that will be going on all pages (in this case, i'll do it with the header). from fpdf import FPDF class PDF(FPDF): def header(self): # drawing the background self.set_fill_color(249, 247, 242) self.rect(0, 0, self.w, self.h, style="F") # then drawing the header self.set_font("Helvetica", size=8) self.cell(0, 10, "test_header", align="L") def footer(self): self.set_y(-15) self.set_font("Helvetica", size=8) self.cell(0, 10, "test_footer", align="L") pdf = PDF() pdf.add_page() pdf.set_font("Times", size=12) I know it's a rudimentary fix, but if you're using python to create a PDF then I'm assuming this isn't meant for production-level code, and this would be a strong enough bandaid fix.
3
2
79,443,999
2025-2-16
https://stackoverflow.com/questions/79443999/how-to-open-an-image-parse-input-from-user-and-close-the-image-afterwards-in-p
This answer did not work for me, nor for some Mac users, and I did not find a working solution among the following similar questions: How to open an image in Python and close afterwards? How to Close an Image? How can I close an image shown to the user with the Python Imaging Library? How do I close figure in matplotlib? How can I close an image shown to the user with the Python Imaging Library? How to use user input to display image? How can one open an image in Python, then let the user answer questions about it in the CLI, and close the image afterwards? Constraints: Don't use sub-process. Don't start killing processes that happen to match a substring. In essence, use a Python-only method that opens an image and then closes that, and only that image, with control, whilst allowing other code (that allows user interaction) to be executed in between.
The answer below opens an image from a file path in a separate window, then asks the questions in the CLI. After the questions are answered by the user, the image is closed. Requirements pip install tensorflow pip install matplotlib Solution def make_receipt_label(img_filepath): """ Opens an image, asks the user questions about it, and returns the answers. Args: img_filepath: The path to the image file. Returns: A dictionary containing the user's answers to the questions. Returns None if there is an issue opening the image. """ from tensorflow import io from tensorflow import image as img from matplotlib import pyplot as plt import tkinter as tk # needed for the tk.Tk() call below tensor_img = io.read_file(img_filepath) tensor_img = img.decode_png(tensor_img, channels=3) plt.ion() plt.imshow(tensor_img) plt.show(block=False) root=tk.Tk() root.withdraw() answers = {} answers["image_colour"] = input("0. What is the image colour? ") answers["total"] = input("1. Another question? ") plt.close() plt.ioff() root.destroy() return answers Satisfied constraints It does not use subprocess, it does not try to kill processes with a pid that matches some string, and it uses pure Python.
2
1
79,435,884
2025-2-13
https://stackoverflow.com/questions/79435884/fastapi-middleware-for-postgres-multi-tenant-schema-switching-causes-race-condit
I'm building a multi-tenant FastAPI application that uses PostgreSQL schemas to separate tenant data. I have a middleware that extracts an X-Tenant-ID header, looks up the tenant's schema, and then switches the current schema for the database session accordingly. For a single request (via Postman) the middleware works fine; however, when sending multiple requests concurrently, I sometimes get errors such as: Undefined Table Table relationship not found It appears that the DB connection is closing prematurely or reverting to the public schema too soon, so tenant-specific tables are not found. Below are the relevant code snippets: Middleware (SchemaSwitchMiddleware) from typing import Optional, Callable from fastapi import Request, Response from fastapi.responses import JSONResponse from starlette.middleware.base import BaseHTTPMiddleware from app.db.session import SessionLocal, switch_schema from app.repositories.tenant_repository import TenantRepository from app.core.logger import logger from contextvars import ContextVar current_schema: ContextVar[str] = ContextVar("current_schema", default="public") class SchemaSwitchMiddleware(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next: Callable) -> Response: """ Middleware to dynamically switch the schema based on the `X-Tenant-ID` header. If no header is present, defaults to `public` schema. """ db = SessionLocal() # Create a session here try: tenant_id: Optional[str] = request.headers.get("X-Tenant-ID") if tenant_id: try: tenant_repo = TenantRepository(db) tenant = tenant_repo.get_tenant_by_id(tenant_id) if tenant: schema_name = tenant.schema_name else: logger.warning("Invalid Tenant ID received in request headers") return JSONResponse( {"detail": "Invalid access"}, status_code=400 ) except Exception as e: logger.error(f"Error fetching tenant: {e}. Defaulting to public schema.") db.rollback() schema_name = "public" else: schema_name = "public" current_schema.set(schema_name) switch_schema(db, schema_name) request.state.db = db # Store the session in request state response = await call_next(request) return response except Exception as e: logger.error(f"SchemaSwitchMiddleware error: {str(e)}") db.rollback() return JSONResponse({"detail": "Internal Server Error"}, status_code=500) finally: switch_schema(db, "public") # Always revert to public db.close() Database Session (app/db/session.py) from sqlalchemy import create_engine, text from sqlalchemy.orm import sessionmaker, declarative_base, Session from app.core.logger import logger from app.core.config import settings # Base for models Base = declarative_base() DATABASE_URL = settings.DATABASE_URL # SQLAlchemy engine engine = create_engine( DATABASE_URL, pool_pre_ping=True, pool_size=20, max_overflow=30, ) # Session factory SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) def switch_schema(db: Session, schema_name: str): """Helper function to switch the search_path to the desired schema.""" db.execute(text(f"SET search_path TO {schema_name}")) db.commit() # logger.debug(f"Switched schema to: {schema_name}") Example tables Public Schema: Contains tables like users, roles, tenants, and user_lookup. Tenant Schema: Contains tables like users, roles, buildings, and floors. When I test with a single request, everything works fine. However, with concurrent requests, the switching sometimes reverts to the public schema too early, resulting in errors because tenant-specific tables are missing. 
Question What could be causing the race condition where the connection’s schema gets switched back to public during concurrent requests? How can I ensure that each request correctly maintains its tenant schema throughout the request lifecycle without interference from concurrent requests? Is there a better approach (such as using middleware or context variables) to avoid this issue? Any help on this is much appreciated. Thank you.
I also implemented a multi-tenant FastAPI application using PostgreSQL via schemas. In my case, I avoided using middleware because the database session (obtained from SessionLocal) and its state need to be isolated per request. When using middleware, the connection (and its state) from the connection pool may be reused across requests. Even though ContextVar is designed to be isolated in asynchronous contexts, the actual database connection can still be shared, leading to race conditions. For example, if one request changes the schema and the connection is then reused for another request, that new request might unexpectedly start with the wrong schema (like reverting to "public"). Instead, I handle the tenant schema switching in a dependency (using Depends). This way, each request gets its own session, and we can safely set the schema for that specific request without affecting others. Below is an example implementation using a synchronous SQLAlchemy Session from sqlalchemy import create_engine, text from sqlalchemy.orm import sessionmaker, declarative_base, Session from app.core.logger import logger from app.core.config import settings from typing import Annotated, Generator from fastapi import Header # Base for models Base = declarative_base() DATABASE_URL = settings.DATABASE_URL # SQLAlchemy engine engine = create_engine( DATABASE_URL, pool_pre_ping=True, pool_size=20, max_overflow=30, ) # Session factory SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) # TODO: use this get_db_session function in path operation. def get_db_session(tenant_id: Annotated[str, Header(alias="X-Tenant-ID")]) -> Generator[Session, None, None]: session = SessionLocal() try: # TODO: Implement tenant_id to tenant_schema here session.execute(text(f"SET search_path TO {tenant_id};")) session.commit() # Ensure the schema change is applied immediately yield session finally: session.close()
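For completeness, a minimal sketch of how that dependency might be wired into a path operation with Depends; the /buildings route, the column names and the query are assumptions for illustration, not part of the original answer:

from fastapi import FastAPI, Depends
from sqlalchemy import text
from sqlalchemy.orm import Session

app = FastAPI()

@app.get("/buildings")
def list_buildings(db: Session = Depends(get_db_session)):
    # get_db_session (the dependency defined above) has already set search_path
    # to the tenant schema, so unqualified table names resolve to tenant tables.
    rows = db.execute(text("SELECT id, name FROM buildings")).fetchall()
    return [{"id": row.id, "name": row.name} for row in rows]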
1
1
79,438,335
2025-2-14
https://stackoverflow.com/questions/79438335/how-to-make-pydantics-non-strict-coercive-mode-apply-to-integer-literals
I'm validating inputs to a function using Pydantic's @validate_call as follows: from typing import Literal from pydantic import validate_call @validate_call def foo(a: Literal[0, 90, 180, 270]) -> None: print(a, type(a)) I want Pydantic to perform its default type coercion like it does with the int type: foo(90) # Works as expected foo('90') # Doesn't work, but I want it to If I use the annotation a: int, it will coerce strings like '180', but then I have to manually validate which integers are given. How do I make Pydantic perform type coercion on Literals? Note: I'll accept a solution that requires a to be a string type instead of an integer, as long as it still allows both integer and string input. Bad Solutions I don't want to add every literal case. Literal[0, 90, 180, 270, '0', '90', '180', '270'] is bad because it doesn't allow the strings '-0' or '180.0'. I could do Annotated[int, Field(ge=0, le=0)] | Annotated[int, Field(ge=90, le=90)] | ..., but that's stupidly verbose. I don't want to define some separate function or model. At that point, it's easier to just accept a: int and validate the particular value inside the method.
You can combine the BeforeValidator and the Literal like this: from typing import Annotated, Literal from pydantic import validate_call, BeforeValidator, ValidationError # First try has the following validator: # BeforeValidator(int) @validate_call def foo(a: Annotated[Literal[0, 90, 180, 270], BeforeValidator(float)]) -> None: print(a, type(a)) if __name__ == "__main__": foo("90") foo("180.0") foo("180.0") try: foo(0.1) except ValidationError as err: print(err) try: foo("70") except ValidationError as err: print(err) try: foo("can't convert to int") except ValueError as err: print(err) The BeforeValidator function will be called before checks and thus the literal validation will be done against an integer. Edit: better manage string with decimal number.
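If you want the validated value to come back as an int rather than a float, a small variant of the same idea; the _to_int helper is a hypothetical function written for this sketch, not part of Pydantic:

from typing import Annotated, Literal
from pydantic import validate_call, BeforeValidator

def _to_int(value):
    # Coerce "90", "180.0", 180.0 etc. to int; leave everything else unchanged
    if isinstance(value, str):
        value = float(value)
    if isinstance(value, float) and value.is_integer():
        return int(value)
    return value

@validate_call
def foo(a: Annotated[Literal[0, 90, 180, 270], BeforeValidator(_to_int)]) -> None:
    print(a, type(a))

foo("90")     # 90 <class 'int'>
foo("180.0")  # 180 <class 'int'>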
2
2
79,440,649
2025-2-14
https://stackoverflow.com/questions/79440649/iconipy-and-pyinstaller-issue
I would like to ask you for help with creation of .exe file from python script where I use customtkinter, iconipy libraries. pyinstaller: https://pyinstaller.org/en/stable iconimy: https://iconipy.digidigital.de/iconipy.html After creation of .exe file I finished with this error: My python code: from PIL import Image from customtkinter import CTk, CTkFrame, CTkImage, CTkButton from iconipy import IconFactory def Create_Icon(Icon_Set: str, Icon_Name: str, Icon_Size: str, Theme_index: int) -> Image: Icon_Fact = IconFactory( icon_set = Icon_Set, icon_size = 40, font_size = 30, font_color = "#FFFFFF", outline_width = 0, outline_color = None, background_color = None, background_radius = 0) Icon_PIL = Icon_Fact.asPil(Icon_Name) return Icon_PIL def Get_CTk_Icon(Icon_Set: str, Icon_Name: str, Icon_Size: str) -> CTkImage: Picture = CTkImage( light_image = Create_Icon(Icon_Set=Icon_Set, Icon_Name=Icon_Name, Icon_Size=Icon_Size, Theme_index=0), dark_image =Create_Icon(Icon_Set=Icon_Set, Icon_Name=Icon_Name, Icon_Size=Icon_Size, Theme_index=1), size = (40, 40)) return Picture def Get_Button_Icon(Frame: CTk|CTkFrame, Icon_Set: str, Icon_Name: str, Icon_Size: str, Button_Size: str) -> CTkFrame: Frame_Button = CTkButton( master = Frame, width = 40, height = 40, corner_radius = 0, border_width = 0, bg_color = "transparent", fg_color = "transparent", hover = False, anchor = "center", text = "") CTK_Image = Get_CTk_Icon(Icon_Set=Icon_Set, Icon_Name=Icon_Name, Icon_Size=Icon_Size) Frame_Button.configure(image=CTK_Image, text="") return Frame_Button window = CTk() Icon_Theme = Get_Button_Icon(Frame=window, Icon_Set="lucide", Icon_Name="sun-moon", Icon_Size="Header", Button_Size="Picture_Theme") Icon_Theme.configure(text="") Icon_Theme.pack(side="top", fill="none", expand=False, padx=5, pady=5) # run window.mainloop() I use anaconda for virtual environment where I have only: # environment.yml name: ENV-Iconipy_Pyinstaller channels: - defaults - https://repo.anaconda.com/pkgs/main - https://repo.anaconda.com/pkgs/msys2 - https://repo.anaconda.com/pkgs/r dependencies: - bzip2=1.0.8=h2bbff1b_6 - ca-certificates=2024.12.31=haa95532_0 - expat=2.6.4=h8ddb27b_0 - libffi=3.4.4=hd77b12b_1 - libmpdec=4.0.0=h827c3e9_0 - openssl=3.0.15=h827c3e9_0 - pip=24.2=py313haa95532_0 - python=3.13.2=hadb2040_100_cp313 - python_abi=3.13=0_cp313 - setuptools=75.8.0=py313haa95532_0 - sqlite=3.45.3=h2bbff1b_0 - tk=8.6.14=h0416ee5_0 - tzdata=2025a=h04d1e81_0 - vc=14.42=haa95532_4 - vs2015_runtime=14.42.34433=he0abc0d_4 - wheel=0.44.0=py313haa95532_0 - xz=5.6.4=h4754444_1 - zlib=1.2.13=h8cc25b3_1 - pip: - altgraph==0.17.4 - customtkinter==5.2.2 - darkdetect==0.8.0 - iconipy==0.3.2 - packaging==24.2 - pefile==2023.2.7 - pillow==11.1.0 - pyinstaller==6.12.0 - pyinstaller-hooks-contrib==2025.1 - pywin32-ctypes==0.2.3 prefix: C:\Users\CZ011845\AppData\Local\miniconda3\envs\ENV-Iconipy_Pyinstaller I installed only python by "conda install python" and then these 3 by pip: pip install customtkinter pip install iconipy pip install pyinstaller and I convers the main.py to .exe (in Anaconda): run: conda activate ENV-Iconipy_Pyinstaller cd to correct project path run: pyinstaller --name Test --onedir --windowed main.py Add path and files into folder "_internal" (because without that there are issues) Path: "_internal/iconipy/assets/lucide/" Empty file1: info.json Empty file2: version.txt If I run code in terminal it is correct and I receive this window:
Solved by adding the iconipy "assets" folder into "_internal" as a whole folder via the pyinstaller .spec file, plus adding "iconipy" to hiddenimports:
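For reference, a rough sketch of what those two edits could look like inside the generated Test.spec; the way the assets path is located here is an assumption, and the rest of the generated spec should be kept as-is:

# In Test.spec -- sketch of the two relevant changes only
import os
import iconipy

# bundle the installed iconipy/assets directory as a whole folder
iconipy_assets = os.path.join(os.path.dirname(iconipy.__file__), 'assets')

a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    datas=[(iconipy_assets, os.path.join('iconipy', 'assets'))],
    hiddenimports=['iconipy'],
    # ... keep the remaining arguments that PyInstaller generated ...
)

Then rebuild with: pyinstaller Test.spec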
1
1
79,442,012
2025-2-15
https://stackoverflow.com/questions/79442012/how-to-scrape-website-which-has-hidden-data-inside-table
I am trying to scrape the Screener.in website to extract some information related to stocks. However, while trying to extract the Quarterly Results section, there are some fields which are hidden; when clicking on the + button it shows additional information related to the parent header. I need to have this information. I am using the below Python code, which gives me a dataframe but without the additional information: url = f'https://www.screener.in/company/TATAPOWER/consolidated/' print(url) req = Request(url, headers={'User-Agent': 'Mozilla/5.0'}) page = urlopen(req).read() soup = BeautifulSoup(page, 'html.parser') table = soup.find_all("table", {"class": "data-table responsive-text-nowrap"})[0] df = pd.read_html(StringIO(str(table)))[0] df The above code is working fine, however I am not able to pull the additional information. Can somebody help me with this?
As already commented, the content is reloaded on demand, but it is precisely these requests that can be replicated in order to obtain the content as well. To do this, you have to iterate over the rows of the table and make the request if necessary. import requests import pandas as pd from bs4 import BeautifulSoup url = f'https://www.screener.in/company/TATAPOWER/consolidated/' soup = BeautifulSoup(requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text) keys = ['Item'] + list(soup.select_one('#quarters thead tr').stripped_strings) data = [] for row in soup.select('#quarters tbody tr')[:-1]: if row.td.button: data.append(dict(zip(keys,[c.text for c in row.select('td')]))) d = requests.get(f'https://www.screener.in/api/company/3371/schedules/?parent={row.td.button.text.strip(" +")}&section=quarters&consolidated=', headers={'User-Agent': 'Mozilla/5.0'}).json() first_key = next(iter(d)) data.append({"Item": first_key, **d[first_key]}) else: data.append(dict(zip(keys,row.stripped_strings))) pd.DataFrame(data) Result: Item Dec 2021 Mar 2022 Jun 2022 Sep 2022 Dec 2022 Mar 2023 Jun 2023 Sep 2023 Dec 2023 Mar 2024 Jun 2024 Sep 2024 Dec 2024 Sales + 10,913 11,960 14,495 14,031 14,129 12,454 15,213 15,738 14,651 15,847 17,294 15,698 15,391 YOY Sales Growth % 43.63% 15.41% 43.06% 43.02% 29.47% 4.13% 4.95% 12.17% 3.69% 27.24% 13.67% -0.26% 5.05% Expenses + 9,279 10,091 12,812 12,270 11,810 10,526 12,500 12,967 12,234 13,540 14,232 12,427 12,312 Material Cost % 8.67% 13.38% 6.74% 4.04% 6.55% 12.13% 6.00% 6.09% 9.29% 13.86% 5.50% 3.59% 6.75% Operating Profit 1,634 1,869 1,683 1,760 2,319 1,928 2,713 2,771 2,417 2,307 3,062 3,271 3,079 OPM % 15% 16% 12% 13% 16% 15% 18% 18% 16% 15% 18% 21% 20% Other Income + 865 62 1,227 1,502 1,497 1,352 877 567 1,092 1,407 578 632 589 Exceptional items 0 -618 0 0 0 0 235 0 0 39 0 -140 0 Interest 953 1,015 1,026 1,052 1,098 1,196 1,221 1,182 1,094 1,136 1,176 1,143 1,170 Depreciation 758 846 822 838 853 926 893 926 926 1,041 973 987 1,041 Profit before tax 788 71 1,062 1,373 1,864 1,158 1,476 1,231 1,489 1,537 1,490 1,773 1,457 Tax % 30% -794% 17% 32% 44% 19% 23% 17% 28% 32% 20% 38% 18% Net Profit + 552 632 884 935 1,052 939 1,141 1,017 1,076 1,046 1,189 1,093 1,188 Profit after tax 552 632 884 935 1,052 939 1,141 1,017 1,076 1,046 1,189 1,093 1,188 EPS in Rs 1.33 1.57 2.49 2.56 2.96 2.43 3.04 2.74 2.98 2.80 3.04 2.90 3.23
1
3
79,442,094
2025-2-15
https://stackoverflow.com/questions/79442094/does-python-reads-all-lines-of-a-file-when-numpy-genfromtxt-is-executed
I have a really large ASCII file (63 million lines or more) that I would like to read using numpy.genfromtxt(). But it is taking up so much memory. I want to know what Python actually does when numpy.genfromtxt() is executed. Does it read all the lines at once? Look at the below code, for example. import numpy as np data = np.genfromtxt("large.file.txt") When I execute the code above, would Python read all the contents of large.file.txt and load them into memory? If yes, is there another way of reading a large file line-by-line so that Python would not use so much memory?
It reads all the lines. It has to. That data array has to hold all of the file's data, and NumPy can't build an array with all of the file's data without reading all of the file. That said, the implementation uses a lot more memory than the output needs. The implementation parses the requested columns of the file's data into a list of tuples before applying further processing, and a list of tuples takes a lot more memory than a NumPy array. If you want to use less intermediate memory, I think numpy.loadtxt is more efficient on that front - digging down into the implementation eventually hits a function that stores parsed data into an array directly, instead of using a list of tuples. numpy.loadtxt isn't as flexible as numpy.genfromtxt, but you don't seem to need the extra flexibility. This won't make data itself take any less memory, though. Also, numpy.loadtxt does still need extra intermediate memory. It should just be less intermediate memory than numpy.genfromtxt.
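If the file is just whitespace-separated numbers with no header, a minimal line-by-line sketch (two passes over the file, assuming a constant column count) that fills a preallocated array and avoids the large intermediate list entirely:

import numpy as np

def read_large_txt(path, dtype=np.float64):
    # Pass 1: count rows and columns without keeping any lines in memory.
    with open(path) as f:
        n_cols = len(f.readline().split())
        n_rows = 1 + sum(1 for _ in f)

    # Pass 2: parse straight into a preallocated array, one row at a time.
    data = np.empty((n_rows, n_cols), dtype=dtype)
    with open(path) as f:
        for i, line in enumerate(f):
            data[i] = [float(x) for x in line.split()]
    return data

data = read_large_txt("large.file.txt")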
1
1
79,440,163
2025-2-14
https://stackoverflow.com/questions/79440163/tqdm-multiprocessing-and-how-to-print-a-line-under-the-progress-bar
I am using multiprocessing and tqdm to show the progress of the workers. I want to add a line under the progress bar to show which tasks are currently being processed. Unfortunately, whatever I do seems to end up with this being printed on top of the progress bar making a mess. Here is a MWE that shows the problem: from multiprocessing import Pool, Manager, Value import time import os import tqdm import sys class ParallelProcessor: def __init__(self, shared_data): self.shared_data = shared_data def process_task(self, args): """Worker function: Simulates task processing and updates progress""" lock, progress, active_tasks, index, integer_arg = args pid = os.getpid() core_id = index % len(os.sched_getaffinity(0)) os.sched_setaffinity(pid, {core_id}) with lock: active_tasks.append(f"Task {index+1}") time.sleep(2) # Simulate processing time with lock: active_tasks.remove(f"Task {index+1}") progress.value += 1 return self.shared_data def progress_updater(self, total_tasks, progress, active_tasks): """Update tqdm progress bar and active task list on separate lines""" sys.stdout.write("\n") # Move to the next line for active task display sys.stdout.flush() with tqdm.tqdm(total=total_tasks, desc="Processing Tasks", position=0, leave=True) as pbar: while pbar.n < total_tasks: time.sleep(0.1) # Update interval pbar.n = progress.value pbar.refresh() # Move cursor down to the next line and overwrite active task display sys.stdout.write("\033[s") # Save cursor position sys.stdout.write(f"\033[2K\rActive: {', '.join(active_tasks[:5])}") # Clear line and print active tasks sys.stdout.write("\033[u") # Restore cursor position sys.stdout.flush() def run_parallel(self, tasks, num_cores=None): """Runs tasks in parallel with a progress bar""" num_cores = num_cores or len(os.sched_getaffinity(0)) manager = Manager() lock = manager.Lock() progress = manager.Value("i", 0) # Shared integer for progress tracking active_tasks = manager.list() # Shared list for active tasks # Start progress updater in the main process from threading import Thread progress_thread = Thread(target=self.progress_updater, args=(len(tasks), progress, active_tasks)) progress_thread.start() # Prepare task arguments task_args = [(lock, progress, active_tasks, idx, val) for idx, val in enumerate(tasks)] # Run parallel tasks with Pool(num_cores) as pool: results = pool.map(self.process_task, task_args) # Ensure progress bar finishes progress_thread.join() print("\n") # Move to the next line after processing return results if __name__ == "__main__": processor = ParallelProcessor(shared_data=10) processor.run_parallel(tasks=range(40), num_cores=4)
You can add a separate bar at the bottom that displays only tasks. def progress_updater(self, total_tasks, progress, active_tasks): """Update tqdm progress bar and active task list on separate lines""" sys.stdout.write("\n") # Move to the next line for active task display sys.stdout.flush() with ( tqdm.tqdm(total=total_tasks, desc="Processing Tasks", position=0, leave=True) as pbar, tqdm.tqdm(bar_format="{desc}", position=1, leave=False) as task_bar, ): while pbar.n < total_tasks: time.sleep(0.1) # Update interval pbar.n = progress.value pbar.refresh() # # Move cursor down to the next line and overwrite active task display # sys.stdout.write("\033[s") # Save cursor position # sys.stdout.write(f"\033[2K\rActive: {', '.join(active_tasks[:5])}") # Clear line and print active tasks # sys.stdout.write("\033[u") # Restore cursor position # sys.stdout.flush() task_bar.set_description_str(f"Active: {', '.join(active_tasks[:5])}") Processing Tasks: 20%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 8/40 [00:05<00:21, 1.47it/s] Active: Task 3, Task 6, Task 9, Task 12
2
4
79,441,934
2025-2-15
https://stackoverflow.com/questions/79441934/python-venv-install-skips-component-file-pointer-png
This is a strange issue. I maintain the pi3d python module and it contains this file github.com/tipam/pi3d/blob/master/src/pi3d/util/icons/pointer.png When I clone the repo locally it has the .png file but when the package is installed using pip it seems to be missing. This didn't used to be a problem. Is it something to do with the fact that pip insists on installing to a venv now, i.e. if I made pip install with --no-warn-script-location would it include the missing file?
It's because it's not present in tool.setuptools.package-data in the pyproject.toml file. [tool.setuptools.package-data] "*" = ["*.fs", "*.vs", "*.inc", "*.gif"] With the previous configuration, only these extensions are added to your package, as you can see in the next screenshot (content of the package uploaded on PyPI). So adding the png extension should work: [tool.setuptools.package-data] "*" = ["*.fs", "*.vs", "*.inc", "*.gif", "*.png"]
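As a quick sanity check before uploading, a small sketch that inspects the freshly built wheel (e.g. after python -m build) and lists any .png files inside it using only the standard library; the dist/ location is the usual default and an assumption here:

import glob
import zipfile

# A wheel is a zip archive, so its contents can be inspected directly.
wheel_path = sorted(glob.glob("dist/*.whl"))[-1]
with zipfile.ZipFile(wheel_path) as whl:
    pngs = [name for name in whl.namelist() if name.endswith(".png")]

print(wheel_path)
print(pngs)  # should now include pi3d/util/icons/pointer.png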
1
1
79,430,185
2025-2-11
https://stackoverflow.com/questions/79430185/generate-all-paths-that-consists-of-specified-number-of-visits-of-nodes-edges
In a graph/chain there are 3 different states: ST, GRC_i and GRC_j. The following edges between the states exists: EDGES = [ # source, target, name ('ST', 'GRC_i', 'TDL_i'), ('ST', 'GRC_j', 'TDL_j'), ('GRC_i', 'GRC_j', 'RVL_j'), ('GRC_j', 'GRC_i', 'RVL_i'), ('GRC_j', 'ST', 'SUL_i'), ('GRC_i', 'ST', 'SUL_j'), ] The values for TDL_i, TDL_i, RVL_i and RVL_j are known. The chain always starts in ST and the final state is always known. I want to infer SUL_i and SUL_j based on possible paths that satisfy the known information. For example if we have the following information: RVL_i = 2 RVL_j = 1 TDL_i = 0 TDL_j = 2 and the final position is GRC_i there are two paths that satisfy this criteria: ST -> TDL_j -> GRC_j -> RVL_i -> GRC_i -> RVL_j -> GRC_j -> SUL_i -> ST -> TDL_j -> GRC_j -> RVL_i -> GRC_i ST -> TDL_j -> GRC_j -> SUL_i -> ST -> TDL_j -> GRC_j -> RVL_i -> GRC_i -> RVL_j -> GRC_j -> RVL_i -> GRC_i Because both paths imply that SUL_i = 1 and SUL_j = 0 we conclude that this is the case. The following relationships are evident: The number of visits to ST is equal to SUL_i + SUL_j + 1 The number of visits to GRC_i is equal to TDL_i + RVL_i The number of visits to GRC_j is equal to TDL_j + RVL_j The upper-bound of SUL_i is the number of visits to GRC_j The upper-bound of SUL_j is the number of visits to GRC_i The maximum total number of steps is 2 * (TDL_i + TDL_j + RVL_i + RVL_i) I was thinking to solve this as a mixed-integer program. import networkx as nx import gurobipy as grb from gurobipy import GRB from typing import Literal def get_SUL(TDL_i: int, TDL_j: int, RVL_i: int, RVL_j: int, final_state: Literal['ST', 'GRC_i', 'GRC_j']): G = nx.DiGraph() G.add_edges_from([ ('ST', 'GRC_i'), ('ST', 'GRC_j'), ('GRC_i', 'GRC_j'), ('GRC_j', 'GRC_i'), ('GRC_j', 'ST'), ('GRC_i', 'ST') ]) n_actions = len(list(G.edges())) n_states = len(list(G.nodes())) min_N = TDL_i + TDL_j + RVL_i + RVL_i max_N = 2 * (TDL_i + TDL_j + RVL_i + RVL_i) for N in range(min_N, max_N + 1): m = grb.Model() SUL_i = m.addVar(lb=0, ub=TDL_j + RVL_j) SUL_j = m.addVar(lb=0, ub=TDL_i + RVL_i) # actions actions = m.addMVar((n_actions, N), vtype=GRB.BINARY) m.addConstr(actions[0,:].sum() == TDL_i) m.addConstr(actions[1,:].sum() == TDL_j) m.addConstr(actions[2,:].sum() == RVL_i) m.addConstr(actions[3,:].sum() == RVL_j) m.addConstr(actions[4,:].sum() == SUL_i) m.addConstr(actions[5,:].sum() == SUL_j) m.addConstrs(actions[:,n].sum() == 1 for n in range(N)) # states states = m.addMVar((n_states, N), vtype=GRB.BINARY) m.addConstr(states[0,:].sum() == SUL_i + SUL_j + 1) m.addConstr(states[0,:].sum() == TDL_i + RVL_i) m.addConstr(states[0,:].sum() == TDL_j + RVL_j) m.addConstr(states[0,0] == 1) if final_state == 'ST': m.addConstr(states[0,-1] == 1) m.addConstr(states[1,-1] == 0) m.addConstr(states[2,-1] == 0) elif final_state == 'GRC_i': m.addConstr(states[0,-1] == 0) m.addConstr(states[1,-1] == 1) m.addConstr(states[2,-1] == 0) else: m.addConstr(states[0,-1] == 0) m.addConstr(states[1,-1] == 0) m.addConstr(states[2,-1] == 1) m.addConstrs(actions[:,n].sum() == 1 for n in range(N)) # additional constraints How do I impose that the action- and states variables are in agreement with each other? For example, the first action can only TDL_i or TDL_j because we start in ST. I can obtain the adjacency matrix using nx.to_numpy_array(G) but how should I incorporate this into the model?
To make things more readable, I will use the following notations: S, I and J are the nodes XY is the number of traversals of edge X -> Y The unknowns of the problem are IS and JS. They must be non-negative. Case 1: final state is S During a path, every node is entered and exited the same number of times. For node I, it means: IS + IJ = SI + JI For node J: JS + JI = SJ + IJ The equations immediately lead to the solution: IS = SI + JI - IJ JS = SJ + IJ - JI There is a special case to consider: if SI and SJ are 0, we can't leave node S, so only the empty path is possible. If IJ or JI is greater than 0, then no solution is possible. Case 2: final state is I For node I, the number of entries is one greater than the number of exits: IS + IJ + 1 = SI + JI For node J, they are the same: JS + JI = SJ + IJ Which gives: IS = SI + JI - IJ - 1 JS = SJ + IJ - JI Case 3: final state is J Node I is entered and exited the same number of times: IS + IJ = SI + JI For node J, the number of entries is one greater than the number of exits: JS + JI + 1 = SJ + IJ Which gives: IS = SI + JI - IJ JS = SJ + IJ - JI - 1 Code def solve(SI, SJ, IJ, JI, final_state): match final_state: case "S": if SI == 0 and SJ == 0: if IJ == 0 and JI == 0: return (0, 0) else: return None IS = SI + JI - IJ JS = SJ + IJ - JI case "I": IS = SI + JI - IJ - 1 JS = SJ + IJ - JI case "J": IS = SI + JI - IJ JS = SJ + IJ - JI - 1 if IS >= 0 and JS >= 0: return (IS, JS) else: return None def test(SI, SJ, IJ, JI, final_state): res = solve(SI, SJ, IJ, JI, final_state) inputs = f"SI={SI}, SJ={SJ}, IJ={IJ}, JI={JI}, final_state={final_state}" if res is None: print(f"{inputs} => no solution") else: print(f"{inputs} => IS={res[0]}, JS={res[1]}") test(SI=0, SJ=2, IJ=1, JI=2, final_state="I") test(SI=1, SJ=0, IJ=9, JI=9, final_state="I") test(SI=0, SJ=2, IJ=1, JI=1, final_state="I") test(SI=0, SJ=2, IJ=1, JI=2, final_state="J") test(SI=0, SJ=2, IJ=1, JI=1, final_state="J") test(SI=1, SJ=1, IJ=0, JI=2, final_state="J") test(SI=2, SJ=0, IJ=2, JI=0, final_state="S") test(SI=2, SJ=0, IJ=2, JI=1, final_state="S") test(SI=0, SJ=0, IJ=1, JI=1, final_state="S") Results The first case is the one given in the question: SI=0, SJ=2, IJ=1, JI=2, final_state=I => IS=0, JS=1 SI=1, SJ=0, IJ=9, JI=9, final_state=I => IS=0, JS=0 SI=0, SJ=2, IJ=1, JI=1, final_state=I => no solution SI=0, SJ=2, IJ=1, JI=2, final_state=J => IS=1, JS=0 SI=0, SJ=2, IJ=1, JI=1, final_state=J => IS=0, JS=1 SI=1, SJ=1, IJ=0, JI=2, final_state=J => no solution SI=2, SJ=0, IJ=2, JI=0, final_state=S => IS=0, JS=2 SI=2, SJ=0, IJ=2, JI=1, final_state=S => IS=1, JS=1 SI=0, SJ=0, IJ=1, JI=1, final_state=S => no solution
5
1
79,440,210
2025-2-14
https://stackoverflow.com/questions/79440210/python-shutting-down-child-thread-when-parent-dies
I have a parent Python task that starts a child task to listen for a USB/BLE response. The problem is that if the parent task dies, the child listener task keeps running and the process has to be killed. Parent Process: self.listenerTask = threading.Thread(target=self.listener, name="Listener", args=[interface]) Listener thread: def listener(self, interface): logger.info(f"Listener started, threadid={threading.get_ident()}") self.event = threading.Event() while not self.stopListener: responseMsg = asyncio.run((interface.readMsg(timeout=None))) ... Any way to catch the parent in its death and have it set self.stopListener? Any better way?
you can mark your listener thread as a daemon so that if the main (parent) process exits, the listener will automatically be killed. For example: # In your parent process where you create the thread self.listenerTask = threading.Thread( target=self.listener, name="Listener", args=[interface], daemon=True # Ensure the thread is a daemon thread ) self.listenerTask.start() Daemon threads are abruptly stopped when the main thread exits, so you won’t have a stranded listener. However, note that daemon threads don’t execute cleanup code upon termination. If you need a graceful shutdown, you might need to signal the thread from your parent’s signal handler or exception handling logic, ensuring you set self.stopListener = True when appropriate.
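If a graceful shutdown is needed (for example to close the USB/BLE interface cleanly), a minimal sketch of the signalling approach mentioned above, combining a daemon thread with a threading.Event and atexit; the stop_event name and the 1-second read timeout are assumptions for illustration:

import asyncio
import atexit
import threading

class Listener:
    def __init__(self, interface):
        self.interface = interface
        self.stop_event = threading.Event()
        self.listenerTask = threading.Thread(
            target=self.listener, name="Listener", daemon=True)
        self.listenerTask.start()
        # Runs on normal interpreter exit (not on a hard kill -9).
        atexit.register(self.shutdown)

    def shutdown(self):
        self.stop_event.set()
        self.listenerTask.join(timeout=5)

    def listener(self):
        while not self.stop_event.is_set():
            # Use a finite timeout so the loop can re-check the event regularly.
            responseMsg = asyncio.run(self.interface.readMsg(timeout=1.0))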
1
3
79,439,852
2025-2-14
https://stackoverflow.com/questions/79439852/what-is-this-missing-class
I am coding a small project that goes through every class and prints them in the fashion of the Exception hierarchy at the very bottom of https://docs.python.org/3/library/exceptions.html. I got it to a readable point(not finished), and saw something interesting. It was a class called MISSING. I looked it up, nothing. Asked AI, and it denied it. Is it possible that my code just messed up? It didn't produce an error like TypeError, NameError, or AttributeError, like other classes did(when they didn't have subclasses). It isn't the NoneType class(that AI said it was), as that is also present as shown in this screenshot. When I try to access it by typing MISSING, it doesn't show as green, rather white as an undefined variable would. This isn't important, per say, but I'm just curious if anyone else has seen this before. As I said earlier, I looked it up and found nothing about it. Here is the code that reproduces this: class subclasses: def __init__(self): self.sub = self.get_all_subclasses(object) def get_all_subclasses(self, cls): try: all_subclasses = {cls.__name__: []} for subclass in cls.__subclasses__(): if subclass.__name__ == 'subclasses': continue if subclass.__name__ == 'MISSING': print('MISSING IIITTTTTTT', subclass) all_subclasses[cls.__name__].append(self.get_all_subclasses(subclass)) return all_subclasses except (TypeError, NameError, AttributeError): return {cls.__name__: []} subs = subclasses()
Ok, first, actually retreive the class: In [1]: def search_for_missing(cls): ...: if isinstance(cls, type): ...: subclasses = type.__subclasses__(cls) ...: else: ...: subclasses = cls.__subclasses__() ...: for sub in subclasses: ...: if sub.__name__ == "MISSING": ...: return sub ...: subsub = search_for_missing(sub) ...: if subsub is not None: ...: return sub ...: return None ...: In [2]: m = search_for_missing(object) In [3]: m Out[3]: Token.MISSING Now, try to find the module, In [4]: m.__module__ Out[4]: 'Token' In [5]: import sys In [6]: sys.modules[m.__module__] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[21], line 1 ----> 1 sys.modules[m.__module__] KeyError: 'Token' Ok, this means it is almost certainly a built-in class used by the interpreter runtime. Next step, let's check the CPython source code, I did a query for the string "MISSING" and found this in the source code: PyTypeObject _PyContextTokenMissing_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "Token.MISSING", sizeof(_PyContextTokenMissing), .tp_dealloc = context_token_missing_tp_dealloc, .tp_getattro = PyObject_GenericGetAttr, .tp_flags = Py_TPFLAGS_DEFAULT, .tp_repr = context_token_missing_tp_repr, }; This is in cpython/Python/context.c which I'm pretty sure is for the contextvars module. If we look in there, we see that there is a contextvars.Token object documented, which has a .MISSING attribute. And low and behold: In [19]: import contextvars In [20]: type(contextvars.Token.MISSING) is m Out[20]: True I just did this as an exercise to show you how you might go and find such a class, but these are internal implementation details.
2
3
79,439,828
2025-2-14
https://stackoverflow.com/questions/79439828/code-work-in-vscode-but-get-error-in-leetcode
""" 14. Longest Common Prefix Write a function to find the longest common prefix string amongst an array of strings. If there is no common prefix, return an empty string "". """ class Solution: def longestCommonPrefix(self,strs: list[str]) -> str: list_length = len(strs) shortest_length = len(strs[0]) for i in range(list_length): length = len(strs[i]) if shortest_length > length: shortest_length = length shortest_char = strs[i] char = [0]*len(shortest_char) for i in range(shortest_length): for str in strs: if str[i] == shortest_char[i]: char[i] +=1 for i in range(shortest_length): if char[i] < list_length: char_list = list(shortest_char) return "".join(char_list[:i]) print(Solution.longestCommonPrefix(None,["flower","flow","flight"])) result : enter image description here but in vscode ,i get an error : UnboundLocalError: cannot access local variable 'shortest_char' where it is not associated with a value ^^^^^^^^^^^^^ char = [0]*len(shortest_char) Line 11 in longestCommonPrefix (Solution.py) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ret = Solution().longestCommonPrefix(param_1) Line 46 in _driver (Solution.py) _driver() Line 61 in <module> (Solution.py) How can i solve this problem? In vscode, it goes through 3 for loop without any issue. I think in leetcode, it cant get variable between for loop, but why?
If shortest_length <= length for every string (i.e. the first string is already the shortest), then your shortest_char variable is never assigned before it is used on the line char = [0]*len(shortest_char). It is bad practice to define a variable only inside a conditional like that.
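A minimal sketch of one way to fix it: bind the shortest string up front with min(strs, key=len) so it always exists, and return the whole shortest string when no mismatch is found (the original code also falls off the end and returns None in that case):

class Solution:
    def longestCommonPrefix(self, strs: list[str]) -> str:
        shortest = min(strs, key=len)  # always assigned, even if strs[0] is the shortest
        for i, ch in enumerate(shortest):
            for s in strs:
                if s[i] != ch:
                    return shortest[:i]
        return shortest

print(Solution().longestCommonPrefix(["flower", "flow", "flight"]))  # fl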
1
4
79,439,896
2025-2-14
https://stackoverflow.com/questions/79439896/cant-find-correct-select-html-tag-value-and-trying-to-wait-for-a-select-opti
I have an issue where I use a url that ends such as T-shirts page I am trying to scrape the product links off the pages. I have been trying for some time now, nothing is working yet. This is my current attempt after some Googling and reading the Playwright docs: Website html: <select id="prodPerPageSelTop"> <option value="24"> <option value="48"> <option value="72"> <option value="96"> <option value="All"> <select> def playwright_get_soup(url, wait_after_page_load=None): with sync_playwright() as this_playwright: browser = this_playwright.chromium.launch() page = browser.new_page() start = time.perf_counter() page.goto(url) try: page.wait_for_load_state("load") if wait_after_page_load: time.sleep(wait_after_page_load products_on_page = page.querySelector('#prodPerPageSelTop').innerText() page.waitForFunction("document.querySelector('#prodPerPageSelTop').innerText() !== '" + products_on_page + "'") # attempt 1 page.click('#prodperpageselect').select_option('96') # attempt 2 # products_on_page = page.querySelector('#prodperpageselect ').innerText() # page.waitForFunction("document.querySelector('#prodPerPageSelTop').innerText() !== '" + products_on_page + "'") # attempt 3 # new_selector = 'id=prodPerPageSelTop' # page.waitForSelector(new_selector) # handle = page.querySelector(new_selector) # handle.selectOption({"value": "96"}) # attempt 4 # page.select_option('select#prodperpageselect', value='96') time.sleep(15) # try to wait page.wait_for_selector('select#prodperpageselect option[value="96"]') except: pass soup = BeautifulSoup(page.content(), "html.parser") browser.close() return soup soup = playwright_get_soup("https://www.alphabroder.com/category/t-shirts") def get_links(page_soup): these_links = [] all_product_thumbnails = page_soup.find_all("div", class_="thumbnail") for thumbnail in all_product_thumbnails: a_tag = thumbnail.find("a") link = a_tag["href"] these_links.append(link) return these_links page_links = get_links(soup) assert(len(page_links) == 96 As the page loads, it starts on 24 items, continues loading for 4-5 seconds, then flickers and the select option then changes from say 24 items to 96 items. I was expecting wait_for_selector to work. I also wait 15 seconds after the page loads, yet returns 24 items, not 96. So far, I've also tried clicking the select option tag 4 different ways myself, and nothing has worked yet. I did review similar questions that use Playwright. I'm trying to be more respectful on this site than I was when I was younger. Any help appreciated, thank you
Even if your focus is to get the information with playwright - Therefore, I would just like point out additionally that scraping the information can also be implemented quite simply using requests and the endpoint via which the information is loaded: import requests page_num = 1 data = [] while True: json_data = requests.get(f'https://www.alphabroder.com/cgi-bin/livewamus/wam_tmpl/catalog_browse.p?action=getProduct&content=json&page=catalog_browse&startpath=1017&getNumProd=true&sort=pl&sortdir=asc&pageNum={page_num}&prodPerPage=96&site=ABLive&layout=Responsive&nocache=62059').json() data.extend(json_data.get('browseProd')) if page_num < json_data.get('paging')[0].get('pgTotal'): page_num = page_num+1 else: break data [{'productID': 'G500', 'colorCode': '93', 'description': 'Gildan Adult Heavy Cotton\x99 T-Shirt', 'division': 'AB', 'prodCat': '130', 'mill': '07', 'prodImg': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_93_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_93_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'prodURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html', 'regPriceDisp': '$0.00', 'onSale': True, 'salePrice': 2.32512, 'salePriceDisp': '$2.33', 'colorCount': 75, 'colorCountLabel': 'Colors', 'sizeCount': 8, 'primePlus': 'primeplus', 'primePlusLogo': True, 'sizeLabel': ' S - 5XL', 'sizeLabelDesc': 'Sizes:', 'msrpPriceDesp': 'Starting At: Pricing upon request', 'salesRank': 1, 'primePlusHTML': "<img src='/img/primeplus_logo.png' alt='Prime Plus Logo' title='Prime Plus' border='0' height='24' class='primelogo'>", 'showPriceHTML': "<span class='browseSalePrice'> $2.33</span>", 'gaMktgMill': 'Gildan', 'gaMktgCategory': 'T-Shirts', 'gaCurrency': 'USD', 'gaList': 'Results from Search List', 'sustainLogo': True, 'sustainLogoHTML': "<img src='/img/leaf_logo.png' alt='Sustain Logo' title='Sustain' border='0' height='20' class='sustainlogo'>", 'colorURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=93', 'colorSwatch': [{'productID': 'G500', 'colorCode': '00', 'colorXref': 'White ', 'description': 'WHITE', 'hexColor': 'FFFFFF', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_00_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_00_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_00_g.jpg', 'sortOrder': 1, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=00', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '01', 'colorXref': 'Pink', 'description': 'AZALEA', 'hexColor': 'FF76A0', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_01_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_01_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_01_g.jpg', 'sortOrder': 2, 'productURL': 
'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=01', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '05', 'colorXref': 'Yellow', 'description': 'YELLOW HAZE', 'hexColor': 'EEE8A0', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_05_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_05_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_05_g.jpg', 'sortOrder': 3, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=05', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '08', 'colorXref': 'Light Blue', 'description': 'INDIGO BLUE', 'hexColor': '34657f', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_08_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_08_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_08_g.jpg', 'sortOrder': 4, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=08', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '11', 'colorXref': 'Pink', 'description': 'LIGHT PINK', 'hexColor': 'FFE4E4', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_11_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_11_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_11_g.jpg', 'sortOrder': 5, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=11', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '12', 'colorXref': 'Orange', 'description': 'TANGERINE', 'hexColor': 'FF8A3D', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_12_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_12_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_12_g.jpg', 'sortOrder': 6, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=12', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '18', 'colorXref': 'Tan', 'description': 'SAND', 'hexColor': 'c5b9ac', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_18_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload 
data-src=\'https://www.alphabroder.com//prodimg/small/g500_18_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_18_g.jpg', 'sortOrder': 7, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=18', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '20', 'colorXref': 'Tan', 'description': 'NATURAL', 'hexColor': 'F3E4C4', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_20_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_20_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_20_g.jpg', 'sortOrder': 8, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=20', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '21', 'colorXref': 'Yellow', 'description': 'DAISY', 'hexColor': 'F9F46F', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_21_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_21_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_21_g.jpg', 'sortOrder': 9, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=21', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '25', 'colorXref': 'Orange', 'description': 'TEXAS ORANGE', 'hexColor': 'af5c37', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_25_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_25_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_25_g.jpg', 'sortOrder': 10, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=25', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '27', 'colorXref': 'Red', 'description': 'GARNET', 'hexColor': '8B0000', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_27_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_27_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_27_g.jpg', 'sortOrder': 11, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=27', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '29', 'colorXref': 'Yellow', 'description': 'OLD 
GOLD', 'hexColor': 'e0b06e', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_29_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_29_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_29_g.jpg', 'sortOrder': 12, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=29', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '30', 'colorXref': 'Pink', 'description': 'HELICONIA', 'hexColor': 'FF00FF', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_30_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_30_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_30_g.jpg', 'sortOrder': 13, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=30', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}, {'productID': 'G500', 'colorCode': '31', 'colorXref': 'Orange', 'description': 'TENNESSEE ORANGE', 'hexColor': 'EB9501', 'image': '<noscript><img src=\'https://www.alphabroder.com//prodimg/small/g500_31_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\'></noscript><img src=\'/img//lazy.png\' data-lazyload data-src=\'https://www.alphabroder.com//prodimg/small/g500_31_g.jpg\' alt=\'Gildan Adult Heavy Cotton\x99 T-Shirt\' onerror=\'$.wam.imgError(this,"small")\'>', 'imageURL': 'https://www.alphabroder.com//prodimg/small/g500_31_g.jpg', 'showMoreColors': True, 'sortOrder': 14, 'productURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html?color=31', 'mainProdURL': 'https://www.alphabroder.com/product/g500/gildan-adult-heavy-cotton-t-shirt.html'}]},...]
2
3
79,436,180
2025-2-13
https://stackoverflow.com/questions/79436180/how-can-i-get-the-date-from-weeknr-and-year-using-strptime
I'm trying to get the date of monday given some weeknr and year. But I feel like strptime is just returning the wrong date. This is what I try: from datetime import date, datetime today = date.today() today_year = today.isocalendar()[0] today_weeknr = today.isocalendar()[1] print(today) print(today_year, today_weeknr) d = "{}-W{}-1".format(today_year, today_weeknr) monday_date = datetime.strptime(d, "%Y-W%W-%w").date() print(monday_date) print(monday_date.isocalendar()[1]) Result: $ python test.py 2025-02-13 2025 7 2025-02-17 8 So how the hell am I in the next week now?
There was an answer here before, but it got removed. I don't know why. The issue is that I was taking the weeknr from the isocalendar and later was parsing the isocalendar week/year into a date with non-ISO directives. "%Y-W%W-%w" takes: %Y Year with century as a decimal number. %W Week number of the year (Monday as the first day of the week) as a zero-padded decimal number. All days in a new year preceding the first Monday are considered to be in week 0. %w Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. The solution was to just use the ISO directives: d = "{}-{}-1".format(year, weeknr) monday_date = datetime.strptime(d, "%G-%V-%u").date() %G ISO 8601 year with century representing the year that contains the greater part of the ISO week (%V). %V ISO 8601 week as a decimal number with Monday as the first day of the week. Week 01 is the week containing Jan 4. %u ISO 8601 weekday as a decimal number where 1 is Monday. Clearly, if you use isocalendar() to get the week and year, you also need ISO directives to parse it back to a date.
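On Python 3.8+ there is also date.fromisocalendar, which avoids strptime and its directives altogether; a small sketch:

from datetime import date

today = date.today()
iso_year, iso_week, _ = today.isocalendar()

# Monday is ISO weekday 1
monday_date = date.fromisocalendar(iso_year, iso_week, 1)
print(monday_date, monday_date.isocalendar()[1])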
1
1
79,437,187
2025-2-13
https://stackoverflow.com/questions/79437187/backward-lookup-is-not-working-in-django-5-x
We are migrating our django app from django==3.2.25 to django==5.1.6. OneToOneField, ManyToManyField are giving errors on revers lookup. Create fresh setup. python -m venv app_corp_1.0.X ./app_corp_1.0.X/bin/pip install django mkdir djangotutorial ./app_corp_1.0.X/bin/django-admin startproject mysite djangotutorial ./app_corp_1.0.X/bin/python djangotutorial/manage.py shell I have models as below. from django.db import models class Switch(models.Model): fqdn = models.CharField(max_length=45, unique=True) class Meta: managed = False db_table = 'Switch' app_label = 'myapp_models' class ConfigState(models.Model): switch = models.OneToOneField(Switch, models.CASCADE, db_column='switch', primary_key=True, related_name='config_state') class Meta: managed = False db_table = 'ConfigState' app_label = 'myapp_models' class EdgeSwitch(models.Model): switch = models.OneToOneField(Switch, models.CASCADE, db_column='switch', primary_key=True, related_name='edge_switch') class Meta: managed = False db_table = 'EdgeSwitch' app_label = 'myapp_models' When I try to get backward lookup query in DJango==3.X it works. >>> print(EdgeSwitch.objects.filter(switch__config_state=1).query) SELECT `EdgeSwitch`.`switch`, `EdgeSwitch`.`cluster`, `EdgeSwitch`.`sequence`, `EdgeSwitch`.`position`, `EdgeSwitch`.`role`, `EdgeSwitch`.`span`, `EdgeSwitch`.`loopback_v4`, `EdgeSwitch`.`loopback_v6` FROM `EdgeSwitch` INNER JOIN `Switch` ON (`EdgeSwitch`.`switch` = `Switch`.`id`) INNER JOIN `ConfigState` ON (`Switch`.`id` = `ConfigState`.`switch`) WHERE `ConfigState`.`switch` = 1 Same code gives error in DJango==5.X >>> print(EdgeSwitch.objects.filter(switch__config_state=1).query) Traceback (most recent call last): File "<console>", line 1, in <module> File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/manager.py", line 87, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/query.py", line 1476, in filter return self._filter_or_exclude(False, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/query.py", line 1494, in _filter_or_exclude clone._filter_or_exclude_inplace(negate, args, kwargs) File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/query.py", line 1501, in _filter_or_exclude_inplace self._query.add_q(Q(*args, **kwargs)) File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1609, in add_q clause, _ = self._add_q(q_object, self.used_aliases) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1641, in _add_q child_clause, needed_inner = self.build_filter( ^^^^^^^^^^^^^^^^^^ File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1555, in build_filter condition = self.build_lookup(lookups, col, value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1379, in build_lookup lhs = self.try_transform(lhs, lookup_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1423, in try_transform 
raise FieldError( django.core.exceptions.FieldError: Unsupported lookup 'config_state' for OneToOneField or join on the field not permitted. How can I make it work the way it did before?
The app_label is probably the culprit: you registered the models with an app_label that is not in INSTALLED_APPS, and as a result they are not registered. If I change the app_label to one that is in INSTALLED_APPS, it works. But you don't need to specify app_label at all: if you leave it out, the model is automatically registered under the app that its models.py belongs to.
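As an illustrative addition (not part of the original answer): a minimal sketch of what the accepted fix looks like, assuming the models live in an app that is listed in INSTALLED_APPS.

from django.db import models

class Switch(models.Model):
    fqdn = models.CharField(max_length=45, unique=True)

    class Meta:
        managed = False
        db_table = 'Switch'
        # No app_label here: Django registers the model under the app
        # that this models.py belongs to, which must be in INSTALLED_APPS.

class ConfigState(models.Model):
    switch = models.OneToOneField(
        Switch, models.CASCADE, db_column='switch',
        primary_key=True, related_name='config_state',
    )

    class Meta:
        managed = False
        db_table = 'ConfigState'

With the models registered this way, the reverse lookup EdgeSwitch.objects.filter(switch__config_state=1) from the question resolves again.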
3
1
79,438,489
2025-2-14
https://stackoverflow.com/questions/79438489/what-is-the-correct-way-to-please-the-typechecker-for-a-bytes-str-str-f
I have the following code: def from_utf8(string: bytes | str) -> str: if isinstance(string, bytes): return string.decode("utf-8") else: return string # <- type warning on this line pylance gives me a type warning on the return string line: Type "bytearray | memoryview[_I@memoryview] | str" is not assignable to return type "str" Type "bytearray | memoryview[_I@memoryview] | str" is not assignable to type "str" "bytearray" is not assignable to "str" My understanding is: the type annotation x: bytes is actually an alias for "runtime types" x: bytes | bytearray | memoryview[_I@memoryview], but isinstance(x, bytes) only checks for bytes, not the two others. I tried checking for types the other way around: def from_utf8(string: bytes | str) -> str: if isinstance(string, str): return string else: return string.decode("utf-8") # <- no attribute 'decode' for 'memoryview' The error now becomes: Cannot access attribute "decode" for class "memoryview[_I@memoryview]" Attribute "decode" is unknown For context: my project uses python 3.11 I see these warnings in vscode, using pylance version 2025.2.1 and python (ms-python.python) extension version 2025.0.0 Do I have a convenient way to write a version of from_utf8(string) that passes the type checker ? also: is my assumption correct, and is it documented somewhere ?
Before Python 3.12, bytes was specified to behave as an alias of builtins.bytes | builtins.bytearray | builtins.memoryview. From the Python 3.10 docs (emphasis mine): class typing.ByteString(Sequence[int]) A generic version of collections.abc.ByteString. This type represents the types bytes, bytearray, and memoryview of byte sequences. As a shorthand for this type, bytes can be used to annotate arguments of any of the types mentioned above. The static typing error you're seeing is a consequence of this behaviour. This behaviour is now removed in Python 3.12 with the introduction of PEP 688. pyright 1.1.329 (released over a year ago) has since disabled this static typing behaviour by default under strict mode. If you don't want to use strict mode but still want to disable this behaviour, set disableBytesTypePromotions to true. As pointed out in the comments, a typed third party library may have been developed under this behaviour, in which case you should watch out when referring to variables or return values of functions from this library. As an example, without the --strict-bytes option, mypy will pass the following in type-checking (see mypy Playground): def f() -> bytes: return memoryview(b"")
2
3
79,433,451
2025-2-12
https://stackoverflow.com/questions/79433451/change-text-direction-in-python-pptx
I'm using the python‑pptx library to generate PowerPoint presentations on a Linux environment (Python 3.10). I need to add text to the slides, but it must display from right to left (RTL). I have tried the following approaches: Setting RTL on font runs: I attempted to set the RTL property with: run.font.rtl = True However, this does not change the text direction as expected. Prepending a Unicode Right-to-Left mark: I added the Unicode control character \u200F to the beginning of the text, e.g. text_frame.text = "\u200F" + "lalala" Unfortunately, the text still appears in LTR order. Adjusting paragraph alignment: I set the paragraph alignment to right using: p.alignment = 2 Yet, this only changes the alignment, not the underlying RTL behavior. I have also experimented with directly modifying the underlying XML within the PPTX file, but I haven’t been able to achieve consistent results. Has anyone encountered this issue with RTL text in python‑pptx? What are the recommended workarounds (including any XML editing techniques) to force a presentation generated with python‑pptx to display text in proper RTL format? Thank you in advance for your help!
Microsoft Office provides bidirectional writing only when a language which needs this is listed under Office authoring languages and proofing. See Change the language Office uses in its menus and proofing tools. The property rtl for right-to-left writing is a paragraph-property, not a character-property. Not clear why Python pptx programmers thought it is the latter. So, for me, having hebrew listed under Office authoring languages and proofing, the following works and produces the result shown. from pptx import Presentation from pptx.util import Inches, Pt prs = Presentation() blank_slide_layout = prs.slide_layouts[6] slide = prs.slides.add_slide(blank_slide_layout) left = top = Inches(1) width = Inches(7) height = Inches(3) txBox = slide.shapes.add_textbox(left, top, width, height) tf = txBox.text_frame p = tf.paragraphs[0] p.text = "Hello, world" p.font.size = Pt(40) p = tf.add_paragraph() p.text = "Χ©ΧœΧ•Χ, Χ’Χ•ΧœΧ" p.font.size = Pt(40) p._pPr.set('algn', 'r') p._pPr.set('rtl', '1') p = tf.add_paragraph() p.text = "Hello, world again" p.font.size = Pt(40) p = tf.add_paragraph() p.text = "Lorem ipsum semit dolor..." p.font.size = Pt(40) p._pPr.set('algn', 'r') p._pPr.set('rtl', '1') prs.save('test.pptx') Note: It is bidirectional writing, thus property rtl changes writing direction. Do not expect it turns the glyphs of the letters. Thus Lorem ipsum semit dolor... will not appear like so:
1
2
79,438,104
2025-2-14
https://stackoverflow.com/questions/79438104/nest-dictionaries-within-a-list-into-respective-dictionaries
I have two lists, animal_list and outer_list. animal_list contains dictionaries within the list. outer_list is just a simple list with the exact same elements. animal_list = [{'animal': 'dog', 'color': 'black'}, {'animal': 'cat', 'color': 'brown'}] outer_list = ['pet', 'pet'] How can I combine the two lists to make a nested dictionary within a single list without overwriting each record, since the outer key (outer_list) is the exact same? My desired state is below: [ {'pet':{'animal': 'dog', 'color': 'black'}}, {'pet':{'animal': 'cat', 'color': 'brown'}} ] I've tried the following but it just writes the last value since the outer key 'pet' is the same: attempt_list = [] attempt_list.append(dict(zip(outer_list,animal_list))) Failed output below: [{'pet': {'animal': 'cat', 'color': 'brown'}}] I imagine a loop is needed but I can't for the life of me figure it out.
You can use a list comprehension that outputs a new dict for each key-value pair: [{key: value} for key, value in zip(outer_list, animal_list)] Demo: https://ideone.com/IDUJL8 Also, if it is truly guaranteed that outer_list always contains the same value throughout, you can simply extract the first item as a fixed key instead: key = outer_list[0] [{key: animal} for animal in animal_list]
2
4
79,437,667
2025-2-13
https://stackoverflow.com/questions/79437667/how-to-count-unique-state-combinations-per-id-in-a-polars-dataframe
I have a Polars DataFrame where each id can appear multiple times with different state values (either 1 or 2). I want to count how many unique ids have only state 1, only state 2, or both states 1 and 2. import polars as pl df = pl.DataFrame({ "id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 9, 9, 10, 10, 10, 11, 11, 12, 12, 13, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 20, 20, 20], "state": [1, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 2] }) I want to count how many unique ids fall into each category: β€’ Only state 1 (e.g., IDs that only have 1) β€’ Only state 2 (e.g., IDs that only have 2) β€’ Both states 1 and 2 (e.g., IDs that have both 1 and 2) Expected Result (Example): State combination [1] -> 20 IDs State combination [2] -> 15 IDs State combination [1, 2] -> 30 IDs
You could group by the id and use .all() and .any() to check the states. (df.group_by("id") .agg( one = (pl.col.state == 1).all(), two = (pl.col.state == 2).all(), both = (pl.col.state == 1).any() & (pl.col.state == 2).any() # both = pl.lit(1).is_in("state") & pl.lit(2).is_in("state") ) # .select(pl.exclude("id").sum()) ) shape: (20, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ one ┆ two ┆ both β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ bool ┆ bool ┆ bool β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════║ β”‚ 6 ┆ false ┆ true ┆ false β”‚ β”‚ 3 ┆ false ┆ true ┆ false β”‚ β”‚ 2 ┆ true ┆ false ┆ false β”‚ β”‚ 12 ┆ true ┆ false ┆ false β”‚ β”‚ 16 ┆ false ┆ false ┆ true β”‚ β”‚ … ┆ … ┆ … ┆ … β”‚ β”‚ 9 ┆ false ┆ false ┆ true β”‚ β”‚ 13 ┆ false ┆ true ┆ false β”‚ β”‚ 8 ┆ false ┆ true ┆ false β”‚ β”‚ 15 ┆ false ┆ false ┆ true β”‚ β”‚ 10 ┆ false ┆ false ┆ true β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ The .sum() of the bool columns are the counts. shape: (1, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ one ┆ two ┆ both β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ u32 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ══════║ β”‚ 6 ┆ 7 ┆ 7 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
3
79,436,912
2025-2-13
https://stackoverflow.com/questions/79436912/python-convert-mm-dd-yyyy-to-yyyymmdd-using-date-format
I have a csv file with a partial format of something like: field1,bmm/bdd/byyyy,emm/edd/eyyyy,field4.... I am successfully creating a json file like this: { "field1": [ { "begDate": byyyybmmbdd, "endDate": eyyyyemmedd, "score": field4, ..... The python script that I was using works fine but it gives me deprecation warning: from datetime import datetime dateparse = lambda x: datetime.strptime(x, '%m/%d/%Y') df = pd.read_csv("input.csv", parse_dates=['Start', 'End'], date_parser=dateparse) df['Start'] = df['Start'].astype(str) df['End'] = df['End'].astype(str) df['score'] = df['score'].round(decimals=3) res = {} for a1, df_gp in df.groupby('field1'): res[a1] = df_gp.drop(columns='field1').to_dict(orient='records') print(json.dumps(res, indent=4).lower()) FutureWarning: The argument 'date_parser' is deprecated and will be removed in a future version. Please use 'date_format' instead, or read your data in as 'object' dtype and then call 'to_datetime'. Id like to be able to run the script w/o the warning so I modified the script accordingly: dateparse = lambda x: datetime.strptime(x, '%m/%d/%Y') df = pd.read_csv("input.csv", parse_dates=['Start', 'End'], date_format=dateparse) I also tried this: dateparse = lambda x: datetime.strptime(x, '%m/%d/%Y').strftime("%Y%m%d") df = pd.read_csv("input.csv", parse_dates=['Start', 'End'], date_format=dateparse) but the json output gives me the wrong date format: { "field1": [ { "begDate": bmm/bdd/byyyy, "endDate": emm/edd/eyyyy, "score": 0.0, .... Are there any suggestions on how to get around this Warning message while receiving the desired output?
You can avoid the deprecation warning by not trying to replace the deprecated date_parser with a callable in date_format (which expects a string, not a function). Instead, load the dates as objects and then convert them with pd.to_datetime and dt.strftime to get the format you want. For example: import pandas as pd from datetime import datetime import json # Read the CSV without a custom parser (dates will be parsed based on 'parse_dates') df = pd.read_csv("input.csv", parse_dates=['Start', 'End']) # Now convert the date columns to the desired format (e.g. "YYYYMMDD") df['Start'] = pd.to_datetime(df['Start'], format='%m/%d/%Y').dt.strftime('%Y%m%d') df['End'] = pd.to_datetime(df['End'], format='%m/%d/%Y').dt.strftime('%Y%m%d') df['score'] = df['score'].round(3) # Group and convert to the desired JSON structure res = {} for a1, df_gp in df.groupby('field1'): res[a1] = df_gp.drop(columns='field1').to_dict(orient='records') print(json.dumps(res, indent=4).lower()) This way, you get rid of the warning and still achieve your desired JSON output.
1
3
79,436,352
2025-2-13
https://stackoverflow.com/questions/79436352/how-to-insert-a-column-at-a-specific-index-with-values-for-some-rows-in-a-single
I want to insert a column at a specific index in a Pandas DataFrame, but only assign values to certain rows. Currently, I am doing it in two steps: df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [10, 20, 30, 40, 50] }) df.insert(1, 'NewCol', None) df.loc[[1, 3], 'NewCol'] = ['X', 'Y'] Is there a more concise way to achieve this in a single operation?
Provide a Series with the correct indices to insert: df.insert(1, 'NewCol', pd.Series(['X', 'Y'], index=[1, 3])) Output: A NewCol B 0 1 NaN 10 1 2 X 20 2 3 NaN 30 3 4 Y 40 4 5 NaN 50
1
2
79,435,770
2025-2-13
https://stackoverflow.com/questions/79435770/create-json-from-csv-and-add-some-header-lines-with-pandas
I found this post which initially seemed to be exactly what I was looking for but it didn't help me: Adding Header and Footer to JSON output from Python I have a csv file which I read in as Pandas dataframe: import os import csv import json import pandas as pd csvFilePath = "Mypath" track = pd.read_csv(csvFilePath, header = 0, skiprows = 0, delimiter = ";") The example csv looks like this: Param1;Param2;name;lat;lon;altitude;vert_rate;track;speed;category;Param3;Param4;Param5 999999;9999;rocket;57.878696;11.160667;1089;0;137;2;99;999;16;0 999999;9999;rocket;57.878796;11.160668;2543.963336;152638.0483;137;2;99;999;15;0 999999;9999;rocket;57.878896;11.160670;4226.050004;126781.7063;137;2;99;999;14;0 999999;9999;rocket;57.878796;11.160669;6091.207544;121824.349;137;2;99;999;13;0 999999;9999;rocket;57.878696;11.160667;8098.097372;121471.6581;137;2;99;999;12;0 Now I would like to safe this dataframe with an additional header as a JSON file: The additional header looks like this dictionary: headlines={ "now": 1636008051.9, "messages": 6236, } The aim JSON should contain the information given by "headlines" (but without its name) and the content of the dataframe: { "now": 1636008051.9, "messages": 6236, "track": [ { "Param1": 999999, "Param2": "9999", "name": "rocket", "lat": 57.878696, "lon": 11.160667, "altitude": 1089, "vert_rate": 0, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 16, "Param5": 0 } { "Param1": 999999, "Param2": "9999", "name": "rocket", "lat": 57.878796, "lon": 11.160668, "altitude": 2543.963336, "vert_rate": 152638.0483, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 15, "Param5": 0 } { "Param1": 999999, "Param2": "9999", "name": "rocket", "lat": 57.878896, "lon": 11.160670, "altitude": 4226.050004, "vert_rate": 126781.7063, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 14, "Param5": 0 } {...and so on...} ] } The dataframe itself I can simply turn to JSON like that: json = track.to_json(path_out + "result.json", orient='records') but here I don't know how to add the preceding lines from the "header" dict How can I join the dictionary and the csv to output the JSON? Or is there a simpler way? Or any hint to a post which I didn't find? I need to do it in pandas as the csv-dataframe will be further needed.
Create a dictionary, assign it as a new dictionary key and export with json.dump: import json headlines['track'] = df.to_dict(orient='records') with open(path_out + 'result.json', 'w') as f: json.dump(headlines, f) Or as a string: import json headlines['track'] = df.to_dict(orient='records') print(json.dumps(headlines, indent=2)) Output: { "now": 1636008051.9, "messages": 6236, "track": [ { "Param1": 999999, "Param2": 9999, "name": "rocket", "lat": 57.878696, "lon": 11.160667, "altitude": 1089.0, "vert_rate": 0.0, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 16, "Param5": 0 }, { "Param1": 999999, "Param2": 9999, "name": "rocket", "lat": 57.878796, "lon": 11.160668, "altitude": 2543.963336, "vert_rate": 152638.0483, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 15, "Param5": 0 }, { "Param1": 999999, "Param2": 9999, "name": "rocket", "lat": 57.878896, "lon": 11.16067, "altitude": 4226.050004, "vert_rate": 126781.7063, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 14, "Param5": 0 }, { "Param1": 999999, "Param2": 9999, "name": "rocket", "lat": 57.878796, "lon": 11.160669, "altitude": 6091.207544, "vert_rate": 121824.349, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 13, "Param5": 0 }, { "Param1": 999999, "Param2": 9999, "name": "rocket", "lat": 57.878696, "lon": 11.160667, "altitude": 8098.097372, "vert_rate": 121471.6581, "track": 137, "speed": 2, "category": 99, "Param3": 999, "Param4": 12, "Param5": 0 } ] }
1
2
79,435,315
2025-2-13
https://stackoverflow.com/questions/79435315/numpy-random-size-and-shape-confusion
I was looking through some code and saw this line: numpy.random.normal(size=x.shape), where x = numpy.linspace(1, 2, 100). I don't understand what this does. I've only come across np.random.normal(size=1) before. Can someone please explain the difference between the two cases and their use?
From numpy.random.normal size: int or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn. shape return a tuple. If you send it to the size parameter the size will be multiplication of the values (the result array will have the same shape) arr = numpy.array([1, 2, 3]) print(arr.shape) random_arr = numpy.random.normal(size=arr.shape) print(random_arr) # Output # (3,) # [ 0.02756549 -0.52115646 -2.32361849] arr = numpy.array([[1, 2, 3], [4, 5, 6]]) print(arr.shape) random_arr = numpy.random.normal(size=arr.shape) print(random_arr) # Output # (2, 3) # [[ 1.10564417 0.32478606 -1.71487667] # [ 0.5461406 0.51505975 0.2158163 ]] arr = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) print(arr.shape) random_arr = numpy.random.normal(size=arr.shape) print(random_arr) # Output # (3, 3) # [[ 0.75194289 -1.1696558 1.05238044] # [-0.68043824 0.98258701 3.48030454] # [-0.84775259 -0.49676403 1.33367409]]
1
2
79,434,541
2025-2-12
https://stackoverflow.com/questions/79434541/create-a-new-column-of-dictionaries-where-keys-are-in-another-column-of-lists-a
I'm trying to create the "Related Quantities" column of a dataframe given the existing "Item", "Quantity", and "Related Items" columns. Item Quantity Related Items Related Quantities 0 Flowers 1 ['Bushes'] {'Bushes': 2} 1 Bushes 2 ['Flowers'] {'Flowers': 1} 2 Cars 3 ['Trucks', 'Motorcycles'] {'Trucks': 4, 'Motorcycles': 5} 3 Trucks 4 ['Cars', 'Motorcycles'] {'Cars': 3, 'Motorcycles': 5} 4 Motorcycles 5 ['Cars', 'Trucks'] {'Cars': 3, 'Trucks': 4} The values in the dictionaries will be used as the input of a function to generate another column later I believe I have a line that will make the dictionary for a single row, which I could use to fill out the column using iterrows() and a for loop: dictionary = {item : df.loc[df['Item'] == item,['Quantity']][idx] for item in related_items_list} (where idx is something to grab the corresponding index of the one row left after filtering by 'Item', and related_items_list is the value grabbed from the 'Related Items' column of the current row in the loop) But I'm trying to make something with df.apply() instead, in hopes that it will be more performant. Is there some way to allow the function called in apply() to access the whole dataframe instead of just the row passed to it? I think I may be way overcomplicating this. Is trying to use apply() for performance a waste of time? Should I just be using a loop to feed the 'Quantity' data into the function directly instead of making this column at all? I worry that will also hurt performance EDIT Thank you. It looks like to_dict() is somewhat faster than making the lookup dict using a for loop. number of rows in df: 704400 Testing with 100 iterations: time of 'for item,qty in zip of two columns' method: 19.455784299999998 time of 'df.set_index(col1)[col2].to_dict()' method: 11.409341199999997
Using df.apply with a lookup dictionary. import pandas as pd data = {'Item': ['Flowers', 'Bushes', 'Cars', 'Trucks', 'Motorcycles'], 'Quantity': [1, 2, 3, 4, 5], 'Related Items': [['Bushes'], ['Flowers'], ['Trucks', 'Motorcycles'], ['Cars', 'Motorcycles'], ['Cars', 'Trucks']]} df = pd.DataFrame(data) # Creates a dictionary for fast quantity lookups item_quantities = df.set_index('Item')['Quantity'].to_dict() def create_related_quantities(row): related_quantities = {} for item in row['Related Items']: quantity = item_quantities.get(item) # Get quantity from lookup dictionary or None if not found if quantity is not None: # Only add to dict if quantity exists related_quantities[item] = quantity return related_quantities # Applies the create_related_quantities function to each row (axis=1) of the DataFrame df['Related Quantities'] = df.apply(create_related_quantities, axis=1) print(df) The new column created Item ... Related Quantities 0 Flowers ... {'Bushes': 2} 1 Bushes ... {'Flowers': 1} 2 Cars ... {'Trucks': 4, 'Motorcycles': 5} 3 Trucks ... {'Cars': 3, 'Motorcycles': 5} 4 Motorcycles ... {'Cars': 3, 'Trucks': 4}
2
1
79,433,458
2025-2-12
https://stackoverflow.com/questions/79433458/lightgbm-force-variables-to-be-in-splits
I'm trying to find a way to train a LightGBM model forcing some features to be used in the splits, i.e. "to appear in the feature importance", so that the predictions are affected by these variables. Here is an example of the modeling code with a useless variable (it is constant), but the idea is that there could be a variable that is important from a business perspective yet does not show up in the feature importance. from lightgbm import LGBMRegressor import pandas as pd import numpy as np from sklearn.datasets import make_regression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error # Generate a random regression dataset X, y = make_regression(n_samples=1000, n_features=10, noise=0.9, random_state=42) feature_names = [f"feature_{i}" for i in range(X.shape[1])] # Convert to DataFrame for readability X = pd.DataFrame(X, columns=feature_names) # Add useless features X["useless_feature_1"] = 1 # Split the data into training and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Define the LGBMRegressor model model = LGBMRegressor( objective="regression", metric="rmse", random_state=1, n_estimators=100 ) # Train the model model.fit(X_train, y_train, eval_set=[(X_test, y_test)]) # Predictions and evaluation y_pred = model.predict(X_test) rmse = np.sqrt(mean_squared_error(y_test, y_pred)) print(f"Test RMSE: {rmse:.4f}") # Feature importance importance = pd.DataFrame({ "feature": X.columns, "importance": model.feature_importances_ }).sort_values(by="importance", ascending=False) print("\nFeature Importance:") print(importance) Expected solution: There should be some workaround, but the most interesting one would be one that uses some parameter of the fit or the regressor method.
As of this writing, LightGBM does not have functionality like "force at least 1 split on a given feature, but let LightGBM choose the threshold". However, it is possible to force LightGBM to split on specific features with specific thresholds. Here's an example (I tested it with lightgbm 4.5.0): import json import lightgbm as lgb import numpy as np from sklearn.datasets import make_regression X, y = make_regression( n_samples=10_000, n_features=5, n_informative=5, random_state=42 ) # add a noise feature noise_feature = np.random.random(size=(X.shape[0], 1)) X = np.concatenate((X, noise_feature), axis=1) # train a small model model1 = lgb.LGBMRegressor( random_state=708, n_estimators=10, ) model1.fit(X, y) # notice: that noise feature (the 6th one) was never chosen for a split model1.feature_importances_ # array([ 0, 97, 110, 0, 93, 0], dtype=int32) # force the use of that noise feature in every tree forced_split = { "feature": 5, "threshold": np.mean(noise_feature), } with open("forced_splits.json", "w") as f: f.write(json.dumps(forced_split)) # train another model, forcing it to use those splits model2 = lgb.LGBMRegressor( random_state=708, n_estimators=10, forcedsplits_filename="forced_splits.json", ) model2.fit(X, y) # noise feature was used once in every tree model2.feature_importances_ # array([ 0, 104, 131, 0, 55, 10], dtype=int32) That JSON file defining the splits can be extended with arbitrarily deep nesting. (LightGBM docs) For example, here's how to force it to use the 6th, 1st, and 4th features (in that order), split on their means, all down the left side of each tree. forced_split = { "feature": 5, "threshold": np.mean(noise_feature), "left": { "feature": 0, "threshold": np.mean(X[:,0]), "left": { "feature": 3, "threshold": np.mean(X[:,2]), } } } with open("forced_splits.json", "w") as f: f.write(json.dumps(forced_split)) model3 = lgb.LGBMRegressor( random_state=708, n_estimators=10, forcedsplits_filename="forced_splits.json", ).fit(X,y) model3.feature_importances_ # array([ 10, 114, 133, 10, 23, 10], dtype=int32) If you don't want the same structure for every tree, you could look into using "training continuation", changing this parameter for each batch of training rounds. See LightGBM: train() vs update() vs refit().
3
1
79,434,556
2025-2-12
https://stackoverflow.com/questions/79434556/best-place-to-initialize-a-variable-from-a-postgres-database-table-after-django
I have a Django project with some database tables. One of the tables is designed to store messages and their titles, which lets me create/alter these messages from the Django admin. Now I want to initialize a variable (as a dictionary) from this table as follows: MY_MSGS = {record.name : {'title':record.title, 'message':record.message} for record in MyTable.objects.all()} For now this must happen at server startup, and MY_MSGS must be accessible to the different view files. Later I would want to periodically update MY_MSGS by reading MyTable again. So I want MY_MSGS to behave as a global available to all my view files, initialized after startup is complete. FYI, I have multiple view files that are all imported from views.py. Also, this is a very small table with at most about 15 messages, so I do not mind holding this data in memory.
I think the main concern is that you should not run the query immediately, but after Django has initialized the models, etc. We can do that by postponing the load procedure, and do it when we really need a message, with: def get_message(name): cache = get_message.cache if cache is None: cache = get_message.cache = { record.name: {'title': record.title, 'message': record.message} for record in MyTable.objects.all() } return cache.get(name) get_message.cache = None and thus use this as a function, like: get_message(my_record_name) then it is of course still important to make sure you never call the function during initialization, so not pass this as a default=… [Django-doc], etc. of a method field for example. An extra advantage is that as long as you don't need any message, you don't fetch these. If we do, we will not do it a second time. But usually that might be a problem: typically you don't restart the Django server very often in production, so the messages can remain the old ones for months.
1
2
79,434,429
2025-2-12
https://stackoverflow.com/questions/79434429/explode-dataframe-and-add-new-columns-with-specific-values-based-on-a-condition
I have a dataframe with 6 columns: 'Name', 'A', 'B', 'C', 'Val', 'Category' It looks like this: Name A B C Val Category x 1.1 0 0.2 NA NA y 0 0.1 0 NA NA z 0.5 0.1 0.3 NA NA I want to expand the dataframe such that for each value that is not 0 in columns 'A', 'B', 'C' you get an extra row. The column 'Val' is assigned the non-zero value that led to the expansion and the 'Category' is arbitrarily based on where the value came from. The result should look like this: Name A B C Val Category x 1.1 0 0.2 1.1 first x 1.1 0 0.2 0.2 third y 0 0.1 0 0.1 second z 0.5 0.1 0.3 0.5 fisrt z 0.5 0.1 0.3 0.1 second z 0.5 0.1 0.3 0.3 third This is probably the wrong approach, but I thought since I only have three columns I should be repeating all the rows 3 times by using the repeat function on the index and then looping through the rows based on a for loop with a skip to apply 3 functions to assign the target and AICN all rows and then dropping rows where the target is 0. def targeta(row): target = row val = 'first' return target, val def targetb(row): target = row val = 'second' return target, val def targetc(row): target = row val = 'third' return target, val df_repeat = df.loc[df.index.repeat(3)] for i in range(1,len(df_repeat)-3,3): df_repeat.iloc[i][['Target','Category']]=targeta(df_repeat.iloc[i]['A']) df_repeat.iloc[i+1][['Target','Category']]=targetb(df_repeat.iloc[i+1]['B']) df_repeat.iloc[i+2][['Target','Category']]=targetc(df_repeat.iloc[i+2]['C']) I only got to this point and realized I am getting an empty dataframe. Any suggestions on what to do?
You could replace the 0s with NaNs, rename the columns to your categories, reshape to long with stack, and join back to the original to duplicate the rows: out = (df .drop(columns=['Val', 'Category']) .join(df[['A', 'B', 'C']] .set_axis(['first', 'second', 'third'], axis=1) .rename_axis(columns='Category') .replace(0, pd.NA) .stack() .rename('Val') .reset_index(-1) ) ) Output: Name A B C Category Val 0 x 1.1 0.0 0.2 first 1.1 0 x 1.1 0.0 0.2 third 0.2 1 y 0.0 0.1 0.0 second 0.1 2 z 0.5 0.1 0.3 first 0.5 2 z 0.5 0.1 0.3 second 0.1 2 z 0.5 0.1 0.3 third 0.3
1
1
79,432,856
2025-2-12
https://stackoverflow.com/questions/79432856/when-i-create-an-array-of-numpy-floats-i-get-an-array-of-python-floats
The code: import sys import numpy as np print(f"We are using Python {sys.version}", file=sys.stderr) print(f"We are using numpy version {np.__version__}", file=sys.stderr) # 2.2.1 def find_non_numpy_floats(x: any) -> bool: if not (isinstance(x, np.float64)): print(f"Found non-numpy.float64: {x} of type {type(x)}", file=sys.stderr) return False else: return True w: np.ndarray = np.zeros((2, 2), dtype=np.float64) np.vectorize(lambda x: find_non_numpy_floats(x))(w) assert (np.all(np.vectorize(lambda x: isinstance(x, np.float64))(w))), "try to keep using the numpy floats" I'm expecting Numpy.zeros to generate an array of Numpy float64, which are not the same as Python float if I understand correctly (IEEE 64-bit floats vs something Python specific?) However the above results in: We are using Python 3.13.1 (main, Dec 9 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)] We are using numpy version 2.2.1 Found non-numpy.float64: 0.0 of type <class 'float'> Found non-numpy.float64: 0.0 of type <class 'float'> Found non-numpy.float64: 0.0 of type <class 'float'> Found non-numpy.float64: 0.0 of type <class 'float'> and an assertion error. Why is that and how can I fix this (and should I want to?)
numpy.vectorize converts the array to an array of object dtype first: # Convert args to object arrays first inputs = [asanyarray(a, dtype=object) for a in args] I don't know why. I can think of a few plausible reasons, but nothing stands out as a clear motivator. In any case, converting to object dtype builds an array of ordinary Python scalar objects, rather than an array of NumPy scalars. It is not possible for an array of float64 dtype to contain ordinary Python floats. An array of float64 dtype has a buffer of raw 8-byte floating-point values, not Python objects. They're not even instances of numpy.float64 - NumPy has to construct numpy.float64 wrapper objects on access if you try to access an individual element.
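As an illustrative addition (not part of the original answer): a short sketch of how the check from the question can be written without numpy.vectorize, relying on the array's dtype and on the fact that element access constructs numpy.float64 scalars.

import numpy as np

w = np.zeros((2, 2), dtype=np.float64)

# The dtype of the array itself is the authoritative check: a float64 array
# stores raw 8-byte floats, not Python objects.
assert w.dtype == np.float64

# Accessing individual elements makes NumPy build numpy.float64 wrapper objects,
# so an element-wise isinstance check passes when done this way.
assert all(isinstance(x, np.float64) for x in w.flat)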
1
4
79,431,483
2025-2-11
https://stackoverflow.com/questions/79431483/polars-selectors-for-columns-that-are-nested-types
Some Polars operations, such as .sort() fail when passed a column with a Nested Type. This (sensible) choice about sort means I cannot use my usual sorting pattern of df.sort(pl.all()). import polars as pl NESTED_TYPES = [ pl.List, pl.Array, pl.Object, pl.Struct ] pl.exclude(NESTED_TYPES) Result: *.exclude([Dtype(List(Null)), Dtype(Array(Null, 0)), Dtype(Object("object", None)), Dtype(Struct([]))]) Is there a way to select (or exclude) only nested types? The Selectors Documentation has many ideas but nothing seems right for this.
It is not very well supported yet, see https://github.com/pola-rs/polars/issues/9971 As a workaround, you can specify "all types except non-nested types" using the code shown in that Issue, or just loop over all dtypes you care about and specify pl.List(dtype) for each of them, but there aren't any good ways to say "all list dtypes" yet.
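As an illustrative addition (not from the linked issue): one possible workaround is to filter on the dtype instances in the schema yourself. A sketch, assuming the nested dtypes of interest are List, Array, Struct and Object:

import polars as pl

df = pl.DataFrame({
    "a": [2, 1],
    "b": [[1, 2], [3]],            # nested: List
    "c": [{"x": 1}, {"x": 2}],     # nested: Struct
})

NESTED = (pl.List, pl.Array, pl.Struct, pl.Object)

# Keep only the columns whose dtype is not one of the nested types,
# then sort on those instead of pl.all().
flat_cols = [name for name, dtype in df.schema.items() if not isinstance(dtype, NESTED)]
print(df.sort(flat_cols))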
3
2
79,433,014
2025-2-12
https://stackoverflow.com/questions/79433014/what-is-the-difference-between-configdict-and-dict
What is the difference between ConfigDict and dict? ConfigDict: TypeAlias = dict[str, Union[str, list[str]]] What are the advantages of using ConfigDict? https://github.com/pytest-dev/pytest/pull/13193/files#diff-f1d27932fbd9530086080aa8df367309881fe90f204cdd69102ba59758644761
ConfigDict in the PR you linked to is a type alias for a dict whose keys are strings and whose values are either a string or a list of strings. At runtime there isn't any functional difference. It's mainly useful at development time, to save typing the long dict definition each time and to avoid mistakenly using a different definition (e.g., writing dict[str, str] by mistake).
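As an illustrative addition (not part of the original answer): a tiny sketch of how such an alias is typically used; the function names here are made up, not taken from pytest.

from typing import TypeAlias, Union  # TypeAlias needs Python 3.10+

ConfigDict: TypeAlias = dict[str, Union[str, list[str]]]

# Both annotations mean exactly the same thing to a type checker;
# the alias is just shorter and harder to get subtly wrong.
def apply_config(config: ConfigDict) -> None:
    for key, value in config.items():
        print(key, value)

def apply_config_verbose(config: dict[str, Union[str, list[str]]]) -> None:
    for key, value in config.items():
        print(key, value)

apply_config({"minversion": "7.0", "markers": ["slow", "integration"]})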
1
5
79,432,139
2025-2-12
https://stackoverflow.com/questions/79432139/target-sum-algorithm-using-numpy
I have a numpy array of floats and a target sum. I am trying to find all possible combinations of elements that would add up to the target sum. I am struggling to come up with anything computationally effective that wouldn't require days to run. Constraints: the same value might appear in the array multiple times. 0 < array.size < 1000000 0 <= element <= 999,999,999,999.99 Here is the solution I was able to come up with so far. But trying to process even 50 elements takes forever: import numpy as np from typing import List, Tuple, Set from collections import defaultdict def generate_random_array(size=1000, min_val=0, max_val=100, seed=42): np.random.seed(seed) arr = np.random.uniform(min_val, max_val, size) return np.round(arr, 2) def find_target_sum_combinations(arr: np.ndarray, target: float, min_elements: int = 2, epsilon: float = 1e-10) -> List[Tuple[float, ...]]: arr = np.asarray(arr, dtype=np.float64) n = len(arr) # HM to store combinations Key: Sum, Value: indices dp = defaultdict(set) # Convert sum to int def get_sum_key(value: float) -> int: return int(round(value / epsilon)) # Add all individual elements for i, num in enumerate(arr): dp[get_sum_key(num)].add((i,)) result = [] # Combining each new number with all existing combinations for i in range(n): curr_num = arr[i] # Make a copy of current sums to avoid modifying while iterating current_sums = list(dp.items()) for sum_key, combinations in current_sums: new_sum = sum_key * epsilon + curr_num new_sum_key = get_sum_key(new_sum) # Add new combination for comb in combinations: # Check to ensure no duplicate indices if i > max(comb): new_comb = comb + (i,) dp[new_sum_key].add(new_comb) # Check for target sum if (abs(new_sum - target) < epsilon and len(new_comb) >= min_elements): result.append(tuple(float(arr[idx]) for idx in new_comb)) return sorted(result, key=len) arr=generate_random_array(size=20, min_val=1000, max_val=100000, seed=42) target_sum=arr[1]+arr[2]+arr[4]+arr[5] combinations = find_target_sum_combinations(arr, target_sum) print(f"\nCombinations that sum to {target_sum}:") for comb in combinations: print(f"{comb} = {sum(comb)}")
As there are no negative values in the array, there are two types of early stopping that you can introduce to the code. When the current sum is larger than the target sum, you do not need to continue adding values. You can sort the array and try adding values in order from smallest to largest; if adding a value pushes the sum past the target, you do not need to test the larger values either. Adjusted code including the proposed changes: import numpy as np from itertools import combinations from typing import List, Tuple def find_target_sum_combinations(arr: np.ndarray, target: float, min_elements: int = 2, epsilon: float = 1e-10) -> List[Tuple[float, ...]]: arr.sort() # Sort array to help with early stopping result = [] def find_combinations(start, path, current_sum): if len(path) >= min_elements and abs(current_sum - target) < epsilon: result.append(tuple(path)) # Continue searching for other combinations for i in range(start, len(arr)): if current_sum + arr[i] > target + epsilon: break # Early stopping because the array is sorted find_combinations(i + 1, path + [arr[i]], current_sum + arr[i]) find_combinations(0, [], 0) print(result) return sorted(result, key=len) Note that this still has exponential worst-case running time, so it will still not be able to handle very large arrays. I tested the efficiency compared to the old code with timeit and it gave an improvement of a bit more than 150x on my system with arrays of length 20. Old code: 3.5-4.0 s Code with changes: 20-25 ms
2
4
79,429,046
2025-2-11
https://stackoverflow.com/questions/79429046/tricky-reverse-regex-python-3-11
Can any one please help me with the reverse part of the regex? I got it almost right but the reverse is tricky because if I have an input as: Input = dogs and cats or (white or black) or (cat and (red or blue)) Current Regex Output = dogs.{0,10}cats|(white|black)|(cat.{0,10}(red|blue)) "OK regex" Current Regex Reverse Output = ))blue|red(.{0,10}cat(|)black|white(|cats.{0,10}dogs "It's totally wrong" It should be: (blue|red).{0,10}cat|(black|white)|cats.{0,10}dogs For some reason the parenthesis is messing up the whole reverse function. Thank you in advance. import re import os def normalize_special_terms(text): text = re.sub(r'\bli[\s-]?6\b', r'\\bli[-\\s]?6\\b', text, flags=re.IGNORECASE) return text def reverse_regex_order(regex): # Reverse functionality for the regex output def reverse_inside_parentheses(s): # Reverse the order of terms inside parentheses not working properly parts = re.split(r'(\.\{0,10\}|\||\(|\))', s) stack = [] buffer = [] for part in parts: if part == ')': if buffer: stack.append(''.join(buffer[::-1])) buffer = [] stack.append(part) elif part == '(': stack.append(part) if buffer: stack.append(''.join(buffer[::-1])) buffer = [] else: buffer.append(part) if buffer: stack.append(''.join(buffer[::-1])) return ''.join(stack) terms = re.split(r'(\.\{0,100\}|\||\(|\))', regex) reversed_terms = [reverse_inside_parentheses(term) if '(' in term or ')' in term else term for term in terms] reversed_terms.reverse() return ''.join(reversed_terms) def text_to_regex(input_file, max_gap=100): # Convert text from input file to regex if not os.path.exists(input_file): raise FileNotFoundError(f"Input '{input_file}' does not exist, check location.") output_file = os.path.join(os.path.dirname(input_file), 'regex.txt') output_reverse_file = os.path.join(os.path.dirname(input_file), 'regex_reverse.txt') with open(input_file, 'r') as f: lines = f.readlines() regex_parts = [] for line in lines: line = line.strip().lower() line = normalize_special_terms(line) terms = re.split(r'\s+(?:and|or)\s+', line) operators = re.findall(r'\s+(and|or)\s+', line) line_regex_parts = [terms[0]] for i in range(1, len(terms)): gap = f'.{{0,{max_gap}}}' if operators[i - 1] == 'and' else '|' line_regex_parts.append(gap + terms[i]) regex_parts.append(''.join(line_regex_parts)) # Generate reversed regex reversed_regex_parts = [reverse_regex_order(regex) for regex in regex_parts] # Write regex file with open(output_file, 'w') as f: for regex in regex_parts: f.write(regex + '\n') # Write reversed regex file with open(output_reverse_file, 'w') as f: for regex in reversed_regex_parts: f.write(regex + '\n') return regex_parts, reversed_regex_parts if __name__ == "__main__": input_file = '/input.txt' try: original_regex, reversed_regex = text_to_regex(input_file) print("Regex Output:") print("\n".join(original_regex)) print("\nReversed Regex Output:") print("\n".join(reversed_regex)) except Exception as e: print(f"Error: {e}") ####
This code can just change the pattern to ((blue|red).{0,10}cat)|(black|white)|cats.{0,10}dogs. import re pattern = r'dogs.{0,10}cats|(white|black)|(cat.{0,10}(red|blue))' reg = re.compile(r'\w+|(\.\{\d+,\d+\})|[\(\)\|]') symbols = { ')': '(', '(': ')' } results = [] match = reg.search(pattern) end = 0 while match: _m = match[0] if match[0] not in symbols else symbols[match[0]] results.append(_m) end = len(_m) pattern = pattern[end:] match = reg.search(pattern) print(''.join(results[::-1])) # ((blue|red).{0,10}cat)|(black|white)|cats.{0,10}dogs
1
2
79,430,935
2025-2-11
https://stackoverflow.com/questions/79430935/why-does-my-a-algorithm-expand-nodes-differently-when-using-heapq-vs-a-set-for
I'm implementing an A* search algorithm for a maze solver in Python. Initially, I maintained the open set as a plain set and selected the node with the lowest f-score using: current = min(open_set, key=lambda cell: f_score[cell]) This version of the algorithm tended to explore in a directed fashion toward the goal (almost like greedy best-first search), expanding relatively few nodes. However, when I switched to using a heapq (i.e., a priority queue) for the open set, the behavior of the algorithm changed noticeably. Instead of quickly homing in on the goal, it now expands nodes in a broad, cone-like pattern that resembles breadth-first search, and it ends up exploring many more nodes. My questions are: What are the common pitfalls or differences between using a set with min() and a heap-based priority queue for the A* open set? How might issues like duplicate/outdated entries and tie-breaking in the heapq affect the search order? Is there a recommended strategy to preserve the directed search behavior of A* when switching to a heapq in Python? Any insights into why the switch to a heapq might cause these drastic behavioral changes and how to mitigate them would be greatly appreciated. I updated the code to push tuples like (f_score[node], count, node) into the heap and used a counter as a tie-breaker. I also maintain a closed set to filter out outdated entries when popping from the heap. Despite that, the exploration order seems significantly different.
Your heapq-based implementation uses an ascending counter for tiebreaking. This means that when there's a tie, the entry added to the heap earliest wins. When there are a lot of equally-promising candidates, this tends to explore all of them "together". Your set-based implementation's tiebreaking strategy is kind of just "pray". Whatever tied entry it sees first is the one that gets picked, but the order in which it sees entries is just whatever order they happened to land in the hash table. This produces a bias that depends on the trailing bits of hashes, how big the hash table is, and what order the entries get added to the hash table. But the important thing is that if an entry lands in the back of the hash table, it'll probably stay in the back of the hash table until it gets removed. All tied entries that land further toward the front will get picked first. Hash table rebuilds can change this, if a collision or a resize causes the entry to end up in a new bucket post-rebuild, but entries will usually stay in place. Effectively, this produces something kind of similar to prioritizing recent entries in case of a tie. If a tied entry has been in the hash table for a while without getting picked, it's probably toward the back of the table, so new entries that tie with it will probably land somewhere earlier in the table. It's not entirely the same as prioritizing recent entries, but there's a bias in that direction. It sounds like in the test you ran, the algorithm happened to start off in a good direction by blind luck, and the recency bias effect kept it mostly exploring in that direction. But you got lucky. There's no guarantee the bias will push the search toward a good tied option. When you tried an actual "most recent first" tiebreaking strategy, a search starting in the top left of the map ran straight toward the right wall. Apparently you didn't like that. But if there was a clear path straight to the right wall, and the goal was in the bottom right, then running straight to the right wall is entirely consistent with trying to beeline straight for the goal. It just didn't get lucky.
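As an illustrative addition (not part of the original answer): a small sketch of how the counter entry in the heap tuple decides which of several equal-f candidates pops first; the helper below is made up for demonstration.

import heapq
import itertools

counter = itertools.count()
heap = []

def push(node, f, lifo=False):
    # Ascending counter = FIFO tie-breaking: the oldest tied entry wins, so all
    # equally promising candidates get expanded together (the BFS-like cone).
    # Negated counter = LIFO tie-breaking: the newest tied entry wins, which keeps
    # the search committed to the direction it most recently pushed.
    tick = next(counter)
    heapq.heappush(heap, (f, -tick if lifo else tick, node))

for node in ["a", "b", "c"]:
    push(node, f=1.0, lifo=True)

print(heapq.heappop(heap)[2])  # prints "c": with LIFO tie-breaking the newest push wins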
3
3
79,430,379
2025-2-11
https://stackoverflow.com/questions/79430379/how-can-i-get-uv-to-git-pull-dependencies-on-uv-sync
In uv, I want to add a Git repository as a dependency which is easy in itself, but if we commit in the Git repository of the dependency, I want uv sync to do a Git pull for this Git repository. Basically, do a Git pull on the dependency Git repository and then apply the changed code for the dependency to my current virtual env. Here is what I tried: uv add git+https://github.com/PowerInsight/quantstats.git Then I make a commit on https://github.com/PowerInsight/quantstats.git in a Python file. Then I run this in the repository that references https://github.com/PowerInsight/quantstats.git uv sync The file I changed in the Git repository never gets updated in my .venv folder for the referenced dependency: .venv\Lib\site-packages\quantstats Then tried the same thing with a specific branch: uv add git+https://github.com/PowerInsight/quantstats.git --branch main It is the same problem; it does not get updated on commit. Then I tried adding this to both the pyproject.toml of my main project and the dependency Git repository: [tool.uv] cache-keys = [{ git = { commit = true } }] I also tried setting this package to always get reinstalled: [tool.uv] reinstall-package = ["quantstats"] What do I need to do for uv to pull from the Git repository dependency on any commit?
TL;DR: You can use uv sync --upgrade. Explanation: Per the documentation of uv sync: Syncing ensures that all project dependencies are installed and up-to-date with the lockfile. The lockfile doesn't change when the remote repository is updated, but it can be upgraded using uv lock --upgrade or uv sync --upgrade, allowing for package upgrades.
4
2
79,429,531
2025-2-11
https://stackoverflow.com/questions/79429531/select-the-first-and-last-row-per-group-in-polars-dataframe
I'm trying to use polars dataframe where I would like to select the first and last row per group. Here is a simple example selecting the first row per group: import polars as pl df = pl.DataFrame( { "a": [1, 2, 2, 3, 4, 5], "b": [0.5, 0.5, 4, 10, 14, 13], "c": [True, True, True, False, False, True], "d": ["Apple", "Apple", "Apple", "Banana", "Banana", "Banana"], } ) result = df.group_by("d", maintain_order=True).first() print(result) Output: shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ d ┆ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ f64 ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════β•ͺ══════β•ͺ═══════║ β”‚ Apple ┆ 1 ┆ 0.5 ┆ true β”‚ β”‚ Banana ┆ 3 ┆ 10.0 ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ This works good and we can use .last to do it for the last row. But how can we combine these in one group_by?
As columns You could use agg, you will have to add a suffix (or prefix) to differentiate the columns names: result = (df.group_by('d', maintain_order=True) .agg(pl.all().first().name.suffix('_first'), pl.all().last().name.suffix('_last')) ) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ d ┆ a_first ┆ b_first ┆ c_first ┆ a_last ┆ b_last ┆ c_last β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ f64 ┆ bool ┆ i64 ┆ f64 ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ════════β•ͺ════════β•ͺ════════║ β”‚ Apple ┆ 1 ┆ 0.5 ┆ true ┆ 2 ┆ 4.0 ┆ true β”‚ β”‚ Banana ┆ 3 ┆ 10.0 ┆ false ┆ 5 ┆ 13.0 ┆ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ As rows If you want multiple rows, then you would need to concat: g = df.group_by('d', maintain_order=True) result = pl.concat([g.first(), g.last()]).sort(by='d', maintain_order=True) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ d ┆ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ f64 ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════β•ͺ══════β•ͺ═══════║ β”‚ Apple ┆ 1 ┆ 0.5 ┆ true β”‚ β”‚ Apple ┆ 2 ┆ 4.0 ┆ true β”‚ β”‚ Banana ┆ 3 ┆ 10.0 ┆ false β”‚ β”‚ Banana ┆ 5 ┆ 13.0 ┆ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Or using filter with int_range+over: result = df.filter((pl.int_range(pl.len()).over('d') == 0) |(pl.int_range(pl.len(), 0, -1).over('d') == 1) ) Output: β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ f64 ┆ bool ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═══════β•ͺ════════║ β”‚ 1 ┆ 0.5 ┆ true ┆ Apple β”‚ β”‚ 2 ┆ 4.0 ┆ true ┆ Apple β”‚ β”‚ 3 ┆ 10.0 ┆ false ┆ Banana β”‚ β”‚ 5 ┆ 13.0 ┆ true ┆ Banana β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
5
3
79,428,677
2025-2-11
https://stackoverflow.com/questions/79428677/batch-make-smoothing-spline-in-scipy
In scipy, the function scipy.interpolate.make_interp_spline() can be batched since its x argument must be one-dimensional with shape (m,) and its y argument can have shape (m, ...). However, the function scipy.interpolate.make_smoothing_spline() only accepts a y argument of shape (m,). Is there a simple way to batch the behavior of make_smoothing_spline() so it has the same behavior as make_interp_spline()? I was thinking of using numpy.vectorize(), but here I'm not batching operations on an array, I need a single function as output. I guess I could just implement a loop and make a nested list of splines, but I was wondering if there would be a neater way. Probably some combination of decorators but I'm twisting my brain in knots... EDIT: Developers seem to be aware of this issue here.
The PR that added batch support to make_smoothing_spline happened to be merged a few hours before this post. https://github.com/scipy/scipy/pull/22484 The feature will be available in SciPy 1.16, or you can get it early in the next nightly wheels. https://anaconda.org/scientific-python-nightly-wheels/scipy See also the BatchSpline class used in the tests of that PR. class BatchSpline: # BSpline-like class with reference batch behavior def __init__(self, x, y, axis, *, spline, **kwargs): y = np.moveaxis(y, axis, -1) self._batch_shape = y.shape[:-1] self._splines = [spline(x, yi, **kwargs) for yi in y.reshape(-1, y.shape[-1])] self._axis = axis def __call__(self, x): y = [spline(x) for spline in self._splines] y = np.reshape(y, self._batch_shape + x.shape) return np.moveaxis(y, -1, self._axis) if x.shape else y def integrate(self, a, b, extrapolate=None): y = [spline.integrate(a, b, extrapolate) for spline in self._splines] return np.reshape(y, self._batch_shape) def derivative(self, nu): res = copy.deepcopy(self) res._splines = [spline.derivative(nu) for spline in res._splines] return res def antiderivative(self, nu): res = copy.deepcopy(self) res._splines = [spline.antiderivative(nu) for spline in res._splines] return res
3
2
79,428,650
2025-2-11
https://stackoverflow.com/questions/79428650/map-causing-infinite-loop-in-python-3
I have the following code: def my_zip(*iterables): iterators = tuple(map(iter, iterables)) while True: yield tuple(map(next, iterators)) When my_zip is called, it just creates an infinite loop and never terminates. If I insert a print statement, it is revealed that my_zip is infinitely yielding empty tuples! My expectation was that something inside my_zip would eventually raise StopIteration. However, the (supposedly behaviorally) equivalent code with a generator expression instead works fine: def my_genexp_zip(*iterables): iterators = tuple(iter(it) for it in iterables) while True: try: yield tuple(next(it) for it in iterators) except: print("exception caught!") return Why is the function with map not behaving as expected? (Or, if it is expected behavior, how could I modify its behavior to match that of the function using the generator expression?) I am testing with the following code: print(list(my_genexp_zip(range(5), range(0, 10, 2)))) print(list(my_zip(range(5), range(0, 10, 2))))
The two pieces of code you provided are not actually "equivalent", with the function using generator expressions notably having a catch-all exception handler around the generator expression producing items for tuple output. And if you actually make the two functions "equivalent" by removing the exception handler: def my_listcomp_zip(*iterables): iterators = tuple(iter(it) for it in iterables) while True: yield tuple(next(it) for it in iterators) print(list(my_listcomp_zip(range(5), range(0, 10, 2)))) you'll get a traceback of: Traceback (most recent call last): File "test.py", line 4, in <genexpr> yield tuple(next(it) for it in iterators) ~~~~^^^^ StopIteration The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 6, in <module> print(list(my_listcomp_zip(range(5), range(0, 10, 2)))) ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "test.py", line 4, in my_listcomp_zip yield tuple(next(it) for it in iterators) ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: generator raised StopIteration So it is clear by now that the reason why your infinite loop with while True: can end at all with your generator expression version of the function is because a RuntimeError is caught by your catch-all exception handler, which returns from the function. And this is because since Python 3.7, with the implementation of PEP-479, StopIteration raised inside a generator gets automatically turned into a RuntimeError in order not to be confused with the StopIteration raised by an exhausted generator itself. If you try your code in an earlier Python version (such as 2.7), you'll find the generator expression version of the function gets stuck in the infinite loop just as well, where the StopIteration exception raised by next bubbles out from the generator and gets handled by the tuple constructor to produce an empty tuple, just like the map version of your function. And addressing this exception masking effect is exactly why PEP-479 was proposed and implemented.
5
4
79,448,057
2025-2-18
https://stackoverflow.com/questions/79448057/how-does-maybenone-also-known-as-the-any-trick-work-in-python-type-hints
In typestubs for the Python standard library I noticed a peculiar type called MaybeNone pop up, usually in the form of NormalType | MaybeNone. For example, in the sqlite3-Cursor class I find this: class Cursor: # May be None, but using `| MaybeNone` (`| Any`) instead to avoid slightly annoying false positives. @property def description(self) -> tuple[tuple[str, None, None, None, None, None, None], ...] | MaybeNone: ... The definition of this MaybeNone is given as: # Marker for return types that include None, but where forcing the user to # check for None can be detrimental. Sometimes called "the Any trick". See # CONTRIBUTING.md for more information. MaybeNone: TypeAlias = Any # stable (I could not find additional information in the CONTRIBUTING.md, which I assume to be this one.) I understand the intention of marking a return type in such a way that a user is not forced to null check in cases where the null is more of a theoretical problem for most users. But how does this achieve the goal? SomeType | Any seems to imply that the return type could be anything, when what I want to say is that it can be SomeType or in weird cases None, so this doesn't seem to express the intent. MyPy already allows superfluous null-checks on variables that can be proven not to be None even with --strict (at least with my configuration?) so what does the special typing even accomplish as compared to simply doing nothing?
A nice summary can be found in this comment explaining the "Any Trick" of typeshed. We tend to use it whenever something can be None, but requiring users to check for None would be more painful than helpful. As background, the discussion is about xml.etree.ElementTree.getroot, which in some cases returns None (this happens when the tree is initialized without a root). To reflect this, getroot was updated to def getroot(self) -> Element | Any: ... with the expected return type (Element) plus the additional | Any. The different possibilities and their effects are summarized as:
-> Any means "please do not complain" to type checkers. If root has type Any, you will get no error for this.
-> Element means "will always be an Element", which is wrong, and would cause type checkers to emit errors for code like if root is None.
-> Element | None means "you must check for None", which is correct but can get annoying. [..., it could possibly be used] to do things like ET.parse("file.xml").getroot().iter("whatever").
-> Element | Any means "you must be prepared to handle an Element". You will get an error for root.tagg, because that attribute is not valid when root is an Element. But type checkers are happy with if root is None checks, because we're saying it can also be something other than an Element.
I did slightly modify the quotes by adding italics and the -> type markers
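To make that concrete, here is a minimal, hypothetical sketch (the Element class below is a stand-in defined just for the example, not the real xml.etree one) that you can run through a type checker such as mypy to see how the X | Any annotation behaves:

from typing import Any

class Element:
    tag: str = "root"

def getroot() -> "Element | Any":
    # pretend this can also return None, like ElementTree.getroot()
    return Element()

root = getroot()
print(root.tag)      # checked: attribute access is validated against Element
# print(root.tagg)   # a type checker flags this: "Element" has no attribute "tagg"
if root is None:     # also accepted: the "| Any" part permits the None check
    print("tree has no root")

Depending on strictness flags, a plain -> Element annotation may cause a checker to treat the None branch as pointless or unreachable; the | Any avoids that while still checking Element attribute access.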
5
5
79,451,761
2025-2-19
https://stackoverflow.com/questions/79451761/using-pytest-twisted-functions-with-pytest-asyncio-fixtures
I have code that uses Twisted so I've written a test function for it and decorated it with @pytest_twisted.ensureDeferred. The function awaits on some Deferreds. Then, I need to run some aiohttp website in it so I've written a fixture that uses the pytest_aiohttp.plugin.aiohttp_client fixture, decorated it with @pytest_asyncio.fixture and used it in my test function. The result doesn't work (probably because I need to make Twisted and aiohttp use the same event loop or something like that?). Specifically, it prints "twisted/internet/asyncioreactor.py:50: DeprecationWarning: There is no current event loop" and then crashes with the following exception: .env/lib/python3.13/site-packages/pytest_twisted/__init__.py:343: in _run_inline_callbacks _instances.reactor.callLater(0.0, in_reactor, d, f, *args) .env/lib/python3.13/site-packages/twisted/internet/asyncioreactor.py:289: in callLater self._reschedule() .env/lib/python3.13/site-packages/twisted/internet/asyncioreactor.py:279: in _reschedule self._timerHandle = self._asyncioEventloop.call_at(abs_time, self._onTimer) /usr/lib/python3.13/asyncio/base_events.py:812: in call_at self._check_closed() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_UnixSelectorEventLoop running=False closed=True debug=False> def _check_closed(self): if self._closed: > raise RuntimeError('Event loop is closed') E RuntimeError: Event loop is closed /usr/lib/python3.13/asyncio/base_events.py:556: RuntimeError As the actual test function is not even executed its content doesn't seem to matter, and custom fixtures also aren't needed for this to fail so here is a minimal one that shows the problem: @ensureDeferred async def test_minimal(aiohttp_client): app = web.Application() await aiohttp_client(app) My settings: [tool.pytest.ini_options] addopts = [ "--reactor=asyncio", ] asyncio_mode = "strict" asyncio_default_fixture_loop_scope = "function" (not 100% sure about these, but I think moving to asyncio_mode = "auto" and replacing @pytest_asyncio.fixture with @pytest.fixture can only make it worse, and changing the fixture loop scope to "module" makes the runner hang before doing anything). What is the correct way to write such test functions, assuming it exists? Edit: it seems to me now (but I may be very wrong) that the "correct" loop scope for everything is in fact "session", because the asyncio reactor runs once and uses the same loop for the entire run, but the main problem is making all the pieces use the same loop, and I don't know if that's possible without changing either of the plugins.
According to the discussion in https://github.com/pytest-dev/pytest-twisted/issues/188 it doesn't seem possible without changing at least one of the plugins. However, if you control the async fixtures you need to use, you can use pytest-twisted for everything, decorating the fixtures with @pytest_twisted.async_yield_fixture, and don't use pytest-asyncio.
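A minimal sketch of that setup (the fixture body here just awaits a trivial Deferred; in real code it would build whatever resource your test needs):

import pytest_twisted
from twisted.internet import reactor
from twisted.internet.task import deferLater

@pytest_twisted.async_yield_fixture
async def my_resource():
    # async set-up: any awaitable Deferred works here
    await deferLater(reactor, 0.0, lambda: None)
    resource = {"ready": True}
    yield resource
    # async tear-down runs after the test, still on the Twisted reactor
    await deferLater(reactor, 0.0, lambda: None)

@pytest_twisted.ensureDeferred
async def test_uses_resource(my_resource):
    await deferLater(reactor, 0.0, lambda: None)
    assert my_resource["ready"]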
4
1
79,455,504
2025-2-20
https://stackoverflow.com/questions/79455504/load-phi-3-model-extract-attention-layer-and-visualize-it
I would like to visualize the attention layer of a Phi-3-medium-4k-instruct (or mini) model downloaded from hugging-face. In particular, I am using the following model, tokenizer: import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import pdb tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-4k-instruct") model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-meduium-4k-instruct", device_map = "auto", torch_dtype = "auto", trust_remote_code = True ) # Create a pipeline generator = pipeline( "text-generation", model = model, tokenizer = tokenizer, return_full_text= False, max_new_tokens = 50, do_sample = False ) prompt = "..." input_ids = tokenizer(prompt, return_tensors = "pt").input_ids # tokenize the input prompt input_ids = input_ids.to("cuda:0") # get the output of the model model_output = model.model(input_ids) # extract the attention layer attention = model_output[-1] Firstly, I am wondering if that is the correct way to extract attention from my model. What should expect from this model and how can I visualize it properly? Isn't that I should expect a matrix n_tokens x n_tokens? The attention variable I have extracted has a size of 1x40x40x15x15 (or 1x12x12x15x15 in the case of mini model), where the first dimension corresponds to different layers the second for the different heads, and the final two for the attention matrix. That is actually my assumption and I am not sure whether it is correct. When I am visualizing the attention I am getting some very weird matrices like: What we see in this Figure, I assume is all the heads for one layer. However, most of the heads distribute the attention equally to all the tokens. Does that make sense? Edit: For the visualization I am doing sth like: # Save attention visualization code def save_attention_image(attention, tokens, filename='attention.png'): """ Save the attention weights for a specific layer and head as an image. :param attention: The attention weights from the model. :param tokens: The tokens corresponding to the input. :param layer_num: The layer number to visualize. :param head_num: The head number to visualize. :param filename: The filename to save the image. """ attn = attention[0].detach().cpu().float().numpy() num_heads = attn.shape[0] fig, axes = plt.subplots(3, 4, figsize=(20, 15)) # Adjust the grid size as needed for i, ax in enumerate(axes.flat): if i < num_heads: cax = ax.matshow(attn[i], cmap='viridis') ax.set_title(f'Head {i + 1}') ax.set_xticks(range(len(tokens))) ax.set_yticks(range(len(tokens))) ax.set_xticklabels(tokens, rotation=90) ax.set_yticklabels(tokens) else: ax.axis('off') fig.colorbar(cax, ax=axes.ravel().tolist()) plt.suptitle(f'Layer {1}') plt.savefig(filename) plt.close()
Here is what you need to know (running Colab code: https://colab.research.google.com/drive/13gP71u_u_Ewx8u7aTwgzSlH0N_k9XBXx?usp=sharing).

You want to see the attention weights of your Phi-3 model. First, you must tell the model to output attentions: call outputs = model(input_ids, output_attentions=True). Then outputs.attentions is a tuple with one element per layer, and each element is a tensor of shape (batch, num_heads, seq_len, seq_len) - which is exactly the n_tokens x n_tokens matrix per head that you expect.

What you did, model_output = model.model(input_ids) followed by attention = model_output[-1], may or may not be correct - it depends on how that forward method is coded. It is better to use the output_attentions flag so you get the proper attention weights.

About the shape you see, e.g. 1x40x40x15x15 (or 1x12x12x15x15): this likely means 1 is the batch size, the next dimension is the number of layers (40 for medium, 12 for mini), the next is the number of heads per layer, and the last two are the attention matrices (each head gets a 15x15 attention matrix if you have 15 tokens).

If many heads show nearly uniform attention, that can be normal - some heads simply do not focus on any particular token.

For visualization, select one layer and one head, e.g. attn = outputs.attentions[layer][0, head] (shape (seq_len, seq_len)), and then use your plotting code (imshow or matshow).

In summary: use model(..., output_attentions=True) to get the correct attention; each attention tensor will then be (batch, heads, seq_len, seq_len), which is the matrix you expect. If you see extra dimensions, check whether you are calling the right forward method. And yes, many heads may show a uniform distribution - that can be normal in transformer models. You can put the code below into your Colab as is.

When using Hugging Face Transformers, the recommended approach is to run:

outputs = model(
    input_ids=inputs,
    output_attentions=True,
    # possibly also output_hidden_states=True if you want hidden states
)

Then outputs.attentions will be a tuple with one entry per layer, each entry shaped (batch_size, num_heads, seq_len, seq_len). If you call model.model(input_ids) directly (as in your code snippet), you might be accessing a lower-level forward function that returns a different structure. Instead, call the top-level model with output_attentions=True; that yields attention shapes in line with standard Hugging Face conventions. A safer version of your script is:

# !pip install transformers torch
import torch
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load tokenizer and model (make sure you have a valid license for the model)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-4k-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-4k-instruct",  # note: check the spelling if you get an error
    device_map="auto",
    torch_dtype=torch.float16,  # or torch.float32 if preferred
    trust_remote_code=True
)

# Prepare a prompt
prompt = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to("cuda:0")  # send inputs to cuda

# Run the model with attention outputs enabled
# Make sure to pass output_attentions=True
outputs = model(input_ids=inputs.input_ids, output_attentions=True)

# outputs.attentions is a tuple with one element per layer
# Each element is a tensor of shape (batch_size, num_heads, seq_len, seq_len)
attentions = outputs.attentions

# For example, choose layer 0 and head 0 to visualize
layer = 0
head = 0
attn = attentions[layer][0, head].detach().cpu().numpy()  # shape (seq_len, seq_len)

# Get tokens for labeling the axes
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])

# Visualize the attention matrix using matplotlib
plt.figure(figsize=(8, 8))
plt.imshow(attn, cmap="viridis")
plt.colorbar()
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.title(f"Attention Matrix (Layer {layer}, Head {head})")
plt.show()

Now you get a proper n_tokens x n_tokens matrix. If the model has 12 layers, you see 12 entries in outputs.attentions; if "medium" has 40 layers, you see 40. Each head's matrix is 15x15 if your input is 15 tokens. Some heads spread attention uniformly, and that is normal.

NOTE - when you do something like model_output = model.model(input_ids) and attention = model_output[-1], you are relying on how the internal forward method organizes its return value. Some models return (hidden_states, present, attentions, ...), but some do not. It is safer to rely on the official Hugging Face usage: outputs = model(..., output_attentions=True) and attention = outputs.attentions. That is guaranteed to have the standard shape.
3
2
79,461,837
2025-2-23
https://stackoverflow.com/questions/79461837/opencv-python-ffmpeg-tag-is-not-supported-with-codec-id-12-and-format-mp4-m
I would like to use OpenCV in Python to compile a video from a number of images. I get the error OpenCV: FFMPEG: tag is not supported with codec id 12 and format 'mp4 / MP4 I searched here for an answer how to fix it. I got an answer in this post, but it does not generate a video, the file is just 5.7kB, no matter which number of images I add. Here is an example code: import os import numpy as np from glob import glob import cv2 path = "where to find some images" fname_video="where the video should go" os.chdir(path) size=(1024,768) fps=20 files = [y for x in os.walk(path) for y in glob(os.path.join(x[0], '*.JPG'))][0:10] # just 10 images image_array = [] for file in files: frame = cv2.imread(os.path.join(path,file),0) frame = cv2.resize(frame,size) image_array.append(frame) fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter(fname_video, fourcc, fps, size) for i in range(len(image_array)): out.write(image_array[i]) out.release() Maybe somebody can give me a hint how to get a valid video file?
File path handling: the files list is built with os.walk and glob, so the os.path.join(path, file) in the loop is redundant because files already contains the full paths.

Grayscale vs. color: you're reading the images in grayscale (cv2.IMREAD_GRAYSCALE), but the VideoWriter expects color images (3 channels). If you want a grayscale video, you need to convert the grayscale frames back to 3-channel images.

Filename: my files end with *.jpg. Both versions below worked after changing the pattern from *.JPG to *.jpg.

New method: instead of 'mp4v', try a more widely supported codec like 'XVID' or 'MJPG':

import os
import numpy as np
from glob import glob
import cv2

path = os.path.join(os.getcwd(), 'media', 'images')
fname_video = os.path.join(os.getcwd(), 'media', 'video', 'go.avi')  # Save as AVI
os.chdir(path)
size = (1024, 768)
fps = 20

# Get the list of image files (only the first 10 images)
files = [y for x in os.walk(path) for y in glob(os.path.join(x[0], '*.jpg'))][0:10]

image_array = []
for file in files:
    frame = cv2.imread(file, cv2.IMREAD_GRAYSCALE)  # Read image in grayscale
    frame = cv2.resize(frame, size)
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)  # Convert grayscale to 3-channel image
    image_array.append(frame)

# Create a video writer object
fourcc = cv2.VideoWriter_fourcc(*'XVID')  # Use XVID codec
out = cv2.VideoWriter(fname_video, fourcc, fps, size, isColor=True)

# Write frames to the video
for i in range(len(image_array)):
    out.write(image_array[i])

# Release the video writer so the file is finalized
out.release()

Edited code:

import os
import numpy as np
from glob import glob
import cv2

path = os.path.join(os.getcwd(), 'media', 'images')
fname_video = os.path.join(os.getcwd(), 'media', 'video', 'go.mp4')  # Ensure the file has a .mp4 extension
os.chdir(path)
size = (1024, 768)
fps = 20

# Get the list of image files (only the first 10 images)
files = [y for x in os.walk(path) for y in glob(os.path.join(x[0], '*.jpg'))][0:10]

image_array = []
for file in files:
    frame = cv2.imread(file, cv2.IMREAD_GRAYSCALE)  # Read image in grayscale
    frame = cv2.resize(frame, size)
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)  # Convert grayscale to 3-channel image
    image_array.append(frame)

# Create a video writer object
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter(fname_video, fourcc, fps, size, isColor=True)

# Write frames to the video
for i in range(len(image_array)):
    out.write(image_array[i])

# Release the video writer
out.release()

print(f"Video saved to {fname_video}")
2
-1
79,464,391
2025-2-24
https://stackoverflow.com/questions/79464391/django-celery-sqlite-database-locked-on-concurrent-access
I have a local Django 5.1/Celery 5.4 project that is using SQLite. I am the only user. Certain model saves trigger a Celery task that queries (SELECT) for the updated record (using the Django ORM), then runs an API call to update a remote record based on the local data, and then runs another UPDATE locally. The task wraps all this inside of with transaction.atomic():. (The Celery worker is configured to run tasks in serial.) While this task is running, any attempts to write to the database result in a "database is locked" OperationalError. I have configured Django/SQLite with the latest "production-ready" settings: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': DB_DIR / 'db.sqlite3', 'OPTIONS': { 'init_command': """ PRAGMA foreign_keys=ON; PRAGMA journal_mode=WAL; PRAGMA synchronous=NORMAL; PRAGMA busy_timeout = 5000; PRAGMA temp_store = MEMORY; PRAGMA mmap_size=134217728; PRAGMA journal_size_limit=67108864; PRAGMA cache_size=2000; """, 'transaction_mode': 'IMMEDIATE', 'timeout': 20, }, }, } I was under the impression that with these settings, concurrent access was possible. "SQLite in Production" is the latest hotness, and these settings, especially the new-to-Django 5.1 'transaction_mode': 'IMMEDIATE' in OPTIONS, would allow writes to queue. What am I missing?
The solution in this particular case was to shorten my transaction times, i.e. don't hold on to a transaction while making an external API call. This means I have to be more careful about not letting the views and tasks step on each other's toes. I'm still flummoxed that so-called "production ready" settings don't allow concurrent access, with not so much as a queue + timeout!
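As an illustration of that restructuring (a sketch only - the model, field and API helper names are placeholders, not from the original project), the point is that the external API call sits outside any transaction.atomic() block and each database transaction stays short:

from django.db import transaction

def sync_record(record_id):
    # 1) short read, no transaction held open
    record = MyModel.objects.get(pk=record_id)        # MyModel is a placeholder model

    # 2) slow external work happens outside any transaction
    remote_status = call_remote_api(record)           # placeholder for the API call

    # 3) short write, the only part wrapped in a transaction
    with transaction.atomic():
        MyModel.objects.filter(pk=record_id).update(remote_status=remote_status)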
1
1
79,464,425
2025-2-24
https://stackoverflow.com/questions/79464425/how-to-outer-join-merge-two-frames-with-polars-while-updating-left-with-right-va
So I got two csv which I load as polars frames: left: track_name,type,yield,group 8CEB45v1,corn,0.146957,A A188v2,corn,0.86308,A B73v6,corn,0.326076,A CI6621v1,sweetcorn,0.0357792,A CML103v1,sweetcorn,0.510464,A right: track_name,type,yield,group 8CEB45v1,corn,0.999,A B1234,pepper,1,B B1235,pepper,2,B my code so far: import polars as pl left = pl.read_csv("left.csv") right = pl.read_csv("right.csv") matching_columns = list(set(left.columns) & set(right.columns)) # I do this since I want to join sometimes frame which does not have a 100 % column match. In that case I want to simply add the new columns to the outer frame. outer = left.join( right, on=matching_columns, how="outer", coalesce=True, maintain_order="left", ) outer my result: shape: (8, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ track_name ┆ type ┆ yield ┆ group β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════β•ͺ═══════║ β”‚ 8CEB45v1 ┆ corn ┆ 0.146957 ┆ A β”‚ β”‚ A188v2 ┆ corn ┆ 0.86308 ┆ A β”‚ β”‚ B73v6 ┆ corn ┆ 0.326076 ┆ A β”‚ β”‚ CI6621v1 ┆ sweetcorn ┆ 0.0357792 ┆ A β”‚ β”‚ CML103v1 ┆ sweetcorn ┆ 0.510464 ┆ A β”‚ β”‚ B1234 ┆ pepper ┆ 1.0 ┆ B β”‚ β”‚ B1235 ┆ pepper ┆ 2.0 ┆ B β”‚ β”‚ 8CEB45v1 ┆ corn ┆ 0.999 ┆ A β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ my desired output: (yield of 8CEB45v1 from right updates value of left) shape: (7, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ track_name ┆ type ┆ yield ┆ group β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════β•ͺ═══════║ β”‚ 8CEB45v1 ┆ corn ┆ 0.999 ┆ A β”‚ β”‚ A188v2 ┆ corn ┆ 0.86308 ┆ A β”‚ β”‚ B73v6 ┆ corn ┆ 0.326076 ┆ A β”‚ β”‚ CI6621v1 ┆ sweetcorn ┆ 0.0357792 ┆ A β”‚ β”‚ CML103v1 ┆ sweetcorn ┆ 0.510464 ┆ A β”‚ β”‚ B1234 ┆ pepper ┆ 1.0 ┆ B β”‚ β”‚ B1235 ┆ pepper ┆ 2.0 ┆ B β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
The thing you're doing wrong is including yield in matching_columns. You don't want to match by it, you want it as a value. One idea to reconcile that would be matching_columns = list(set(left.select(pl.col(pl.String)).columns) & set(right.select(pl.col(pl.String)).columns)) Alternatively, you could start with your way but then remove f64 columns. It really just depends on your data. matching_columns = set(left.columns) & set(right.columns) matching_columns -= ( set(left.select(matching_columns).select(pl.col(pl.Float64)).columns) ) Once you have your matching_columns established, you can use the built in update: left.update(right, on=matching_columns, how="full") shape: (7, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ track_name ┆ type ┆ yield ┆ group β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════β•ͺ═══════║ β”‚ 8CEB45v1 ┆ corn ┆ 0.999 ┆ A β”‚ β”‚ A188v2 ┆ corn ┆ 0.86308 ┆ A β”‚ β”‚ B73v6 ┆ corn ┆ 0.326076 ┆ A β”‚ β”‚ CI6621v1 ┆ sweetcorn ┆ 0.0357792 ┆ A β”‚ β”‚ CML103v1 ┆ sweetcorn ┆ 0.510464 ┆ A β”‚ β”‚ B1235 ┆ pepper ┆ 2.0 ┆ B β”‚ β”‚ B1234 ┆ pepper ┆ 1.0 ┆ B β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Note: you don't have to convert the python set into a list, polars is happy to accept a set as input. Polars will accept a set at runtime but the type checker will complain about because polars has Sequence as the annotation. If you're using join with on then this won't really matter. If you're using left_on and right_on then it might get you in trouble because sets don't maintain order and the order of each of those inputs is how they're used. Response to comment. If you have extra columns in right that you want the output to keep then you're better off doing the update yourself with a join and a coalesce. I say that because the update function is doing that under the hood but is dropping those extra columns. It's probably worth a feature request that update provide an option to keep extra columns in right. 
Anyways, here's the code New setup left = pl.DataFrame( [ pl.Series('track_name',['8CEB45v1','A188v2','B73v6','CI6621v1','CML103v1']), pl.Series('type',['corn','corn','corn','sweetcorn', 'sweetcorn']), pl.Series('yield', [0.146957,0.86308,0.326076,0.0357792,0.510464]), pl.Series('group',['A','A','A','A','A']), ] ) right = pl.DataFrame( [ pl.Series('track_name',['8CEB45v1','B1234','B1235']), pl.Series('type',['corn','pepper','pepper']), pl.Series('yield',[0.999,1.0,2.0]), pl.Series('group',['A','B','B']), pl.Series("fruit",["apple","banana","carrot"]) ] ) the work common_columns = set(left.columns) & set(right.columns) join_columns = common_columns - set(left.select(pl.col(pl.Float64)).columns) update_columns = common_columns - join_columns extra_right_columns = set(right.columns) - common_columns ( left .join(right, on=list(join_columns), how="full", coalesce=True) .select( *join_columns, *[pl.coalesce(f"{name}_right", name).alias(name) for name in update_columns], *extra_right_columns ) ) shape: (7, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ type ┆ group ┆ track_name ┆ yield ┆ fruit β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ════════════β•ͺ═══════════β•ͺ════════║ β”‚ corn ┆ A ┆ 8CEB45v1 ┆ 0.999 ┆ apple β”‚ β”‚ corn ┆ A ┆ A188v2 ┆ 0.86308 ┆ null β”‚ β”‚ corn ┆ A ┆ B73v6 ┆ 0.326076 ┆ null β”‚ β”‚ sweetcorn ┆ A ┆ CI6621v1 ┆ 0.0357792 ┆ null β”‚ β”‚ sweetcorn ┆ A ┆ CML103v1 ┆ 0.510464 ┆ null β”‚ β”‚ pepper ┆ B ┆ B1235 ┆ 2.0 ┆ carrot β”‚ β”‚ pepper ┆ B ┆ B1234 ┆ 1.0 ┆ banana β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
1
79,461,375
2025-2-23
https://stackoverflow.com/questions/79461375/stitching-images-with-two-different-camera-positions
It may not be possible to do what I want, but I thought I would throw the question out there just in case. I am attempting to stitch the following two images together that are taken from two different camera position. They are taken at the exact same time, and as you can see, they have different perspectives. I recognize that I could accomplish this by taking two photos at different angles from the same camera position, but in this instance I am specifically looking for a solution using different camera positions. When I stitch them with this code: images = [] image1 = cv2.imread('./VideosToStitch/video2.png') images.append(image1) image2 = cv2.imread('./VideosToStitch/video1.png') images.append(image2) stitcher = cv2.Stitcher.create() status, stitched = stitcher.stitch(images,cv2.Stitcher_PANORAMA) cv2.imwrite('./VideosToStitch/stitched.png', stitched) I get this: The two blue lines look good, but as you can see the red line is broken. I thought perhaps I could apply a perspective transformation on the images first and then stitch them, but when I apply the warp perspective, I get this: image1 = cv2.imread('./VideosToStitch/video1.png') tl = (907,0) bl = (0,1280) tr = (1047,0) br = (1920,1280) pts1 = np.float32([tl,bl,tr,br]) pts2 = np.float32([[0,0],[0,1280],[1920,0],[1920,1280]]) matrix = cv2.getPerspectiveTransform(pts1,pts2) transformedFrame = cv2.warpPerspective(image1, matrix,(1920,1280), flags=3) Here you can see the red and blue lines are now straight, but I have lost all the detail on the ice surface. This could work if it is possible to apply the transformation in the horizontal axis only, and not the vertical axis. Or, alternatively, I could stitch them so the resulting image is curved, and then straighten it. Thoughts? To add illustration to Christoph Rackwitz comments below, here is the result if I adjust the perspective so the ice surface lines align both horizontally and vertically. As you see, the players which are perpendicular to the surface will never align.
You have two different optical origins. You are not gonna get a panorama of the spherical or any other "pretty" kind. Those you could get with arbitrary views/perspectives, as long as the cameras have practically the same optical origin. Best you can hope for is to get a stitched view of any plane (e.g. the field), from any perspective (top-down, audience view, ...). Anything not in that plane, but sticking out of the plane, such as players, will get warped. This is an unavoidable fact of math (geometry), not due to implementation. You have seen this in your attempt to create a top-down view of the field. Players, audience, and especially the stands, are not in-plane, so they get distorted. The only way around this is to have 3D information. That's feasible for anything fixed (the stands), but anything moving (living creatures) is a lot harder to get 3D data for. The picture below is one possible "stitching" where I manually stretched the images in a photo editor so the center line (red) aligns. You could achieve that with OpenCV too, by manually specifying how the corners should be mapped, and then manually blending the images. That'd take a bit of code but nothing unreasonable. You could achieve this by taking a diagram of an ice hockey field, pinning each picture to the diagram to get a homography for each, pick some view onto the field to get another homography (or none/identity for top-down view), and then calculate the combined homography for each source picture. Since these pictures are fairly featureless, OpenCV's stitching module won't have much of anything (feature points) to work with for any automatic alignment. Parts of it might be usable to construct a composite with nice blending. I'm not too familiar with the module.
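As a rough sketch of the "pin each picture to a field diagram" idea (all point coordinates below are made-up assumptions you would replace with manually picked correspondences), one homography per camera maps each photo onto the diagram, after which the two warps can be blended:

import cv2
import numpy as np

# Four or more landmarks clicked in each photo, plus where those same
# landmarks sit on a rink diagram of size 800x400 (all values are assumptions).
pts_img1 = np.float32([[100, 200], [900, 210], [880, 700], [120, 690]])
pts_img2 = np.float32([[150, 180], [950, 200], [930, 710], [160, 700]])
pts_diagram = np.float32([[0, 0], [800, 0], [800, 400], [0, 400]])

img1 = cv2.imread("./VideosToStitch/video1.png")
img2 = cv2.imread("./VideosToStitch/video2.png")

# One homography per camera, mapping each photo into the diagram's plane
H1, _ = cv2.findHomography(pts_img1, pts_diagram)
H2, _ = cv2.findHomography(pts_img2, pts_diagram)

size = (800, 400)  # output canvas = diagram size
warp1 = cv2.warpPerspective(img1, H1, size)
warp2 = cv2.warpPerspective(img2, H2, size)

# Naive 50/50 blend; anything not in the ice plane (players, stands) will still ghost
composite = cv2.addWeighted(warp1, 0.5, warp2, 0.5, 0)
cv2.imwrite("./VideosToStitch/composite.png", composite)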
5
4
79,464,907
2025-2-24
https://stackoverflow.com/questions/79464907/memory-keeps-increasing-in-pytorch-training-loop-even-with-empty-cache
I have a pytorch training script, and I'm getting an out-of-memory error after a few epochs even tho I'm calling torch.cuda.empty_cache(). The GPU memory just keeps going up and I can't figure out why. Here's basically what I'm doing: import torch from torch.utils.data import Dataset, DataLoader import numpy as np class CustomDataset(Dataset): def __init__(self, data_paths): self.data_paths = data_paths def __len__(self): return len(self.data_paths) def __getitem__(self, idx): image = np.load(self.data_paths[idx]['image']).astype(np.float32) label = np.load(self.data_paths[idx]['label']).astype(np.int64) image = torch.tensor(image).cuda() label = torch.tensor(label).cuda() return image, label data_paths = [{'image': f'img_{i}.npy', 'label': f'label_{i}.npy'} for i in range(10000)] dataset = CustomDataset(data_paths) dataloader = DataLoader(dataset, batch_size=32, num_workers=4, pin_memory=True) for epoch in range(10): for batch in dataloader: images, labels = batch output = images.mean() loss = output.sum() loss.backward() del images, labels, loss, output torch.cuda.empty_cache() Even after deleting everything and calling empty_cache(), the VRAM just keeps going up and I don't understand why. This doesn't happen on CPU. If I run nvidia-smi, the memory usage increases after every batch until it crashes. I tried: Calling del on everything after every batch Setting num_workers=0 (didn't help) Using .detach() before moving tensors to GPU Checked if the issue is in my model, but even without the model, just loading the Data already makes the memory increase Anyone seen this before? Is there something about DataLoader and cuda() that could be causing this? Would appreciate any ideas. I'm out of things to try
Yeah, the issue is that you're moving tensors to CUDA inside __getitem__(), which isn't a good idea when using multiple workers in DataLoader. When num_workers > 0, PyTorch spawns separate processes for loading data, but CUDA operations should only happen in the main process. This can lead to memory not being freed properly, which is why your usage keeps increasing. A better approach is to keep everything on the CPU inside __getitem__() and only move tensors to the GPU inside the training loop. Change this: def __getitem__(self, idx): image = np.load(self.data_paths[idx]['image']).astype(np.float32) label = np.load(self.data_paths[idx]['label']).astype(np.int64) return torch.from_numpy(image), torch.from_numpy(label) And move them to CUDA in the training loop: for batch in dataloader: images, labels = batch images = images.cuda(non_blocking=True) labels = labels.cuda(non_blocking=True) This should already solve most of the issue. If the memory still increases, try setting persistent_workers=True in DataLoader, since it helps with memory handling when using multiple workers: dataloader = DataLoader(dataset, batch_size=32, num_workers=4, pin_memory=True, persistent_workers=True) If that doesn't work, test with num_workers=0. If the leak stops, then it's definitely related to the worker processes holding onto tensors. As a last resort, manually force garbage collection after each batch: import gc gc.collect() torch.cuda.empty_cache() But in general, the main problem here is that CUDA tensors shouldn’t be created inside __getitem__(), especially with multiprocessing. Move them in the main loop, and it should fix the issue
2
1
79,463,058
2025-2-24
https://stackoverflow.com/questions/79463058/how-to-pass-multiple-inputs-to-a-python-script-in-macos-shortcuts
I've been using a Python script to perform multiple find/replace actions at once. It has 3 inputs right inside the code, in a form like this:

def main():
    text_passage = """
    (Here be huge blocks of text)
    """
    to_replace_input = """
    (Here be large strings to look for, each on a new line)
    """
    replacements_input = """
    (Here be the corresponding replacements, each on a new line)
    """

I've just been copy/pasting the 3 of these things right into the code of the script, saving, and running the script. It works fine, but I would like to make the experience feel a little bit cleaner/faster with some simple UI elements. And since I already use macOS Shortcuts for many quick tools, I was hoping I could make a Shortcut for this. But I can't figure out how to pass three different Shortcut inputs to the script. I'm using 3 of the ASK FOR INPUT actions and one RUN SHELL SCRIPT action. I tried using ChatGPT to modify the Python script for me to use in this way, but I am getting constant errors relating to input not being passed to the script. Is there some special way I need to get all 3 inputs communicating to the RUN SHELL SCRIPT action? I should note that I am not a programmer or anything so it's also possible I've just screwed up the Python script and don't know what I'm doing. The original method I mentioned still works so this isn't life or death but it would be nice to understand better and get this working as a Shortcut. Thanks!
I found How to pass variables into shell script : r/shortcuts and there is screenshot which suggests that you can put variables in script and it will copy/paste values automatically: Image from answer on Reddit (author: Infamous_Pea6200):
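For reference, a sketch of what the Run Shell Script body could then look like (assuming the default zsh shell; the <...> placeholders mark where you would insert the three Shortcuts variables, and the splitting/replacing logic is just an assumed stand-in for the real script):

/usr/bin/python3 <<'EOF'
text_passage = """<Text Passage variable>"""
finds = """<Strings To Find variable>""".splitlines()
replacements = """<Replacements variable>""".splitlines()

for old, new in zip(finds, replacements):
    text_passage = text_passage.replace(old, new)

print(text_passage)
EOF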
2
1
79,464,878
2025-2-24
https://stackoverflow.com/questions/79464878/accessing-sql-server-configuration-manager-with-python
I'm using Python with the 'wmi' and 'pyodbc' libraries in order to automate a server inventory of a list of VMs. Per VM, I'm trying to get a list of all MS SQL Server products, the SQL Server database instance name, number of databases, databases on that engine, engine version, etc. and will eventually put it on a excel spreadsheet. For now, I'm simply line printing on the console with information from just one machine. I'm able to get most of my information by running a T-SQL query using metadata function SERVERPROPERTY, and then running a cursor and printing the information by line. For example: SERVERPROPERTY('MachineName') SERVERPROPERTY('ProductVersion') SERVERPROPERTY('Edition') The only hiccup that I have is gathering all of the installed SQL Server Services that are displayed in SQL Server Configuration Manager, specifically the SQL Server Services tab as shown below: SQL Server Configuration Manager Is there another library, or pyodbc, that is able to access SQL Server Configuration Manager in order for me to get this list of services? As a side note, I used wmi's win32_Product class to try to retrieve anything related to SQL Server, but it returns too much much unwanted returns, like language packs, drivers, etc. server_connection = wmi.WMI(server, user=username, password=password) task = server_connection.Win32_Product() `for service in task:` `if 'SQL Server' in service.Caption:` `print(f"Service Name: {service.Caption}")`
If you're trying to retrieve a list of SQL Server services (as seen in SQL Server Configuration Manager) using Python, the best approach is to use WMI (Windows Management Instrumentation). The win32_Product class retrieves too many irrelevant results (e.g., drivers, language packs), so instead, you should use win32_service. import wmi server = "your_server_name" # Change this to your target server username = "your_username" # If needed password = "your_password" # If needed # Connect to the machine server_connection = wmi.WMI(server, user=username, password=password) # Get SQL Server-related services for service in server_connection.Win32_Service(): if "SQL" in service.Name or "SQL" in service.DisplayName: print(f"Service Name: {service.Name}") print(f"Display Name: {service.DisplayName}") print(f"State: {service.State}") print(f"Start Mode: {service.StartMode}") print("-" * 50) SQL Server services typically includes MSSQLSERVER (Default instance), MSSQL$InstanceName (Named instances), SQLSERVERAGENT, SQLBrowser, SQLWriter and so on. If you need all installed SQL Server instances (even if they are not currently running), query the Windows Registry: import winreg def get_sql_instances(): sql_instances = [] reg_path = r"SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL" try: reg_key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, reg_path) i = 0 while True: try: instance_name, service_name, _ = winreg.EnumValue(reg_key, i) sql_instances.append((instance_name, service_name)) i += 1 except OSError: break except FileNotFoundError: print("SQL Server registry path not found.") return sql_instances # Example usage for instance, service in get_sql_instances(): print(f"Instance Name: {instance} - Service Name: {service}")
3
1
79,464,957
2025-2-24
https://stackoverflow.com/questions/79464957/you-cannot-access-body-after-reading-from-requests-data-stream-while-reading-js
I am currently working on a project where users will be able to authenticate themselves thanks to a form that I protected with a CSRF token, but for now I only take care of the server side party, here is the code: @api_view(['POST']) @csrf_protect @permission_classes([AllowAny]) def login(request): if request.method != "POST": return JsonResponse({"error": "Seules les requΓͺtes POST sont autorisΓ©es."}, status=status.HTTP_405_METHOD_NOT_ALLOWED) try: # Lecture sΓ©curisΓ©e des donnΓ©es data = json.loads(request.body) username = data.get("username") password = data.get("password") if not username or not password: return JsonResponse({"error": "Nom d'utilisateur et mot de passe sont requis."}, status=status.HTTP_400_BAD_REQUEST) # Recherche de l'utilisateur dans les deux tables user = None role = None try: user = Administrateur.objects.get(username=username) role = "admin" except Administrateur.DoesNotExist: pass if not user: try: user = Employes.objects.get(username=username) role = "employe" except Employes.DoesNotExist: pass if not user or not check_password(password, user.password): return JsonResponse({"error": "Identifiants incorrects."}, status=status.HTTP_401_UNAUTHORIZED) # GΓ©nΓ©ration des tokens refresh = RefreshToken.for_user(user) # RΓ©ponse sΓ©curisΓ©e response = JsonResponse({"username": user.username}) # Stocker le JWT dans un cookie HttpOnly sΓ©curisΓ© response.set_cookie( key='access_token', value=str(refresh.access_token), httponly=True, secure=True, samesite='Strict', max_age=3600 ) # Stocker le rΓ΄le pour le frontend response.set_cookie( key='user_role', value=role, httponly=False, secure=True, samesite='Strict', max_age=3600 ) return response except Exception as e: return JsonResponse({"error": f"Erreur inattendue : {str(e)}"}, status=status.HTTP_500_INTERNAL_SERVER_ERROR) The problem I encounter is when I send data with Postman to the server, it sends me this error: { "error": "Erreur inattendue : You cannot access body after reading from request's data stream" } What I did to try to solve this error is to disable the CSRF protection, and the code works correctly, but as soon as I reactivated it this error comes back. I looked in the Django documentation, but I did not find anything about this error message.
Your issue is this line: data = json.loads(request.body) From the documentation: Accessing request.POST inside middleware before the view runs or in process_view() will prevent any view running after the middleware from being able to modify the upload handlers for the request, and should normally be avoided. The CsrfViewMiddleware class can be considered an exception, as it provides the csrf_exempt() and csrf_protect() decorators which allow views to explicitly control at what point the CSRF validation should occur. That is, in general, you can not use request.body and CsrfViewMiddleware at the same time. You could simply use request.data since you seem to be using Django REST Framework: https://www.django-rest-framework.org/tutorial/2-requests-and-responses/#request-objects The core functionality of the Request object is the request.data attribute, which is similar to request.POST, but more useful for working with Web APIs.
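A trimmed sketch of the same view with only that change (everything else from your view stays as it was):

from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import AllowAny
from django.views.decorators.csrf import csrf_protect
from django.http import JsonResponse

@api_view(['POST'])
@csrf_protect
@permission_classes([AllowAny])
def login(request):
    # request.data replaces json.loads(request.body); DRF has already parsed the
    # payload, so there is no second read of the request stream to conflict with.
    data = request.data
    username = data.get("username")
    password = data.get("password")
    # ... rest of the authentication logic unchanged ...
    return JsonResponse({"username": username})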
2
1
79,464,314
2025-2-24
https://stackoverflow.com/questions/79464314/pandas-astype-becomes-in-place-operation-for-data-loaded-from-pickle-files
Pandas astype() appears to unexpectedly switch to performing in-place operations after loading data from a pickle file. Concretly, for astype(str), the data type of the input dataframe values is modified. What is causing this behavior? Pandas version: 2.0.3 Minimal example: import pandas as pd import numpy as np # create a test dataframe df = pd.DataFrame({'col1': ['hi']*10 + [False]*20 + [np.nan]*30}) # print the data types of the cells, before and after casting to string print(pd.unique([type(elem) for elem in df['col1'].values])) _ = df.astype(str) print(pd.unique([type(elem) for elem in df['col1'].values])) # store the dataframe as pkl and directly load it again outpath = 'C:/Dokumente/my_test_df.pkl' df.to_pickle(outpath) df2 = pd.read_pickle(outpath) # print the data types of the cells, before and after casting to string print(pd.unique([type(elem) for elem in df2['col1'].values])) _ = df2.astype(str) print(pd.unique([type(elem) for elem in df2['col1'].values])) Output:
This is a bug that has been fixed in pandas 2.2.0: Bug in DataFrame.astype() when called with str on unpickled array - the array might change in-place (GH 54654) As noted by Itayazolay in the PR, regarding the pickle MRE used there: The problem is not exactly with pickle, it's just a quick way to reproduce the problem. The problem is that the code here attempts to check if two arrays have the same memory (or share memory) and it does so incorrectly - result is arr See numpy/numpy#24478 for more technical details. If you're using a version < 2.2 and cannot upgrade, you could try manually applying the fix mentioned in the PR and recompiling ".../pandas/_libs/lib.pyx". At #L759: if copy and result is arr: result = result.copy() Required change: if copy and (result is arr or np.may_share_memory(arr, result)): result = result.copy() There are now some extra comments in ".../pandas/_libs/lib.pyx", version 2.3.x, together with adjusted logic. See #L777-L785: if result is arr or np.may_share_memory(arr, result): # if np.asarray(..) did not make a copy of the input arr, we still need # to do that to avoid mutating the input array # GH#54654: share_memory check is needed for rare cases where np.asarray # returns a new object without making a copy of the actual data if copy: result = result.copy() else: already_copied = False
1
2
79,464,593
2025-2-24
https://stackoverflow.com/questions/79464593/python-requests-disable-zstd-encoding
My Synology DS418play recently updated to the latest version of DSM7 that is available. I noticed that a lot of the python scripts that I had have started returning weirdly encoded data. Here is an example of the code: requests.get("https://www.23andmedatasettlement.com/").content returns b'(\xb5/\xfd\x04X\x1c&\x00\xb6\xbd\xba50\x8b&\x0f\xc0\xc0@\xc7\xb000P\x15\x7fkd\xe1\x8eRJ\x1d\xa4MC\x8bw;\xacv/Ln\x804E\xe7i\xf2\xff\x00U\x11Y\xd9n\x98C\xbe\xcc\xa0\x8ce\x15\xb1\x00\xab\x00\xa5\x00\xd5\xbf\xda\xd8Kl\xa7\x8ds(\x8aK\xb06|\x97\x9a{Tk\x154T\xa7d+\xed?\x15<\xa7?\xdfy\x12z\xe4\x9c\xb5\x1e\xae\xbb\xfb\xad\xf5p\x0f\x82\x05\xc6#\x12\x99\x98\xe8~kA\xd8\x98\xb2\xfa\x83\x87\xeb\xa7\xa8\xf4\x91\xa6E"\x11\x08%WiZI\xf8T\x94\x9c!\x8dM\xa5\x8f\xdc\x83 \xd1\x16\x18\xbd1\x1f\xac\xf5p\xceS\xf2%\xf3l-m\x10T\xfa\xa8%\xb84\x08[\xf60\xb1i\x9aZ\x93\xdc\xffH\xb5:`\xd1\x1a\x85\xd5\xce\x9f\xb9B|i\xc8\xc3 ......' and it looks like this is because the request headers is 'Accept-Encoding': 'gzip, deflate, br, **zstd**'. Running requests.get("https://www.23andmedatasettlement.com/", headers={'Accept-Encoding': 'deflate'}).content returns the proper data. I am trying to avoid changing each of my python requests to explicitly set Accept-Encoding. Is there a way to prevent requests from using zstd compression? The output breaks a lot of my scripts.
I was able to find a commit that basically says if zstandard module is installed, add zstd to the Accept-Encoding. Running pip uninstall zstandard fixed my issue.
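If uninstalling zstandard is not an option (for example because another dependency needs it), an alternative sketch is to override the header once on a Session rather than per request; which encodings to keep is your choice:

import requests

session = requests.Session()
# Replaces the default Accept-Encoding, so zstd is no longer advertised for this session
session.headers["Accept-Encoding"] = "gzip, deflate"

resp = session.get("https://www.23andmedatasettlement.com/")
print(resp.headers.get("Content-Encoding"))
print(resp.content[:100])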
2
2
79,464,463
2025-2-24
https://stackoverflow.com/questions/79464463/python-pandas-identify-pairs-in-a-dataframe-based-on-both-a-string-similarity
I hope I am explaining this correctly. I have a dataframe in which i need to identify pairs of rows based on the string value of two columns. Each row in the pair must have a different string value in another column. I then need to add a new value to a new column based on TRUE or FALSE condition of that third column AND the condition of the pairing. For example. A simplified version of the df would be: The end result would look like this: Any help would be greatly appreciated.
Hoping this helps, using data similar to the examples you shared. data = { 'name': ['John', 'John', 'Jane', 'Jane', 'Doe', 'Doe'], 'city': ['LA', 'LA', 'SF', 'SF', 'SD', 'SD'], 'item': ['Peanut Butter', 'Jelly', 'Peanut Butter', 'Peanut Butter', 'Jelly', 'Jelly'] } df = pd.DataFrame(data) We can then create a dictionary of the string pairs from name and city with the values being a list of the items associated with the pairs. unique_dict = df.groupby(['name', 'city'])['item'].apply(list).to_dict() Once we have this we define a function to handle the logic of what each pair needs and can apply it to the dataframe. def determine_needs(items): if 'Peanut Butter' in items and 'Jelly' in items: return None elif 'Peanut Butter' in items: return 'Jelly' elif 'Jelly' in items: return 'Peanut Butter' else: return None df['needs'] = df.apply(lambda row: determine_needs(unique_dict[(row['name'], row['city'])]), axis=1)
2
1
79,464,298
2025-2-24
https://stackoverflow.com/questions/79464298/python-polars-get-column-type-using-an-expression
In Python-Polars, I am trying to get the shrinked data type of a column using an expression, to be able to run validations against it. For example, I would like to build an expression that allows me to do the following: df = pl.DataFrame({"list_column": [[1, 2], [3, 4], [5, 6]]}) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ list_column β”‚ β”‚ --- β”‚ β”‚ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ [1, 2] β”‚ β”‚ [3, 4] β”‚ β”‚ [5, 6] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ df.select(type_check = pl.lit((pl.col("list_column").shrink_dtype() == pl.List))) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ list_column ┆ type_check β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════║ β”‚ [1, 2] ┆ true β”‚ β”‚ [3, 4] ┆ true β”‚ β”‚ [5, 6] ┆ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Is this something feasible?
No. In the first place, the data type of list_column in your example is pl.List(pl.Int64()), so it would not be equal to pl.List - Polars has a strong distinction between different nested types, and shrink_dtype does not currently work for that case at all. Secondly, the data type is always the same for all rows within a given column, so it does not make much sense to repeat the same check for every single row. You can use df.collect_schema() to get a Schema object instead, which contains the data type for each column. Alternatively, you might want to consider using dtype selectors if you want to perform different operations for each type.
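For example, a small sketch building on the DataFrame from the question (the selectors import is only needed for the second variant):

import polars as pl
import polars.selectors as cs

df = pl.DataFrame({"list_column": [[1, 2], [3, 4], [5, 6]]})

# Validate against the schema instead of checking row by row
schema = df.collect_schema()
assert schema["list_column"] == pl.List(pl.Int64)

# Or pick columns by dtype with selectors
list_cols = df.select(cs.by_dtype(pl.List(pl.Int64))).columns
print(list_cols)  # ['list_column']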
2
5
79,457,847
2025-2-21
https://stackoverflow.com/questions/79457847/understanding-an-instance-of-pythons-struct-unpack
I found sample code for interrogating NTP servers on https://www.mattcrampton.com/blog/query_an_ntp_server_from_python/. The code is brief and well-written, but I don't understand the use of struct.unpack. This is the core code: client = socket.socket(AF_INET,SOCK_DGRAM) client.sendto(msg.encode('utf-8'),address) msg,address = client.recvfrom(buf) t = struct.unpack("!12I",msg)[10] It returns an integer value (seconds from 1900-01-01) but I have two questions: How does the unpack work in this instance? How did he arrive at "!12I" to do the decoding? Is it possible to get a floating point value from the remote server?
Following the documentation of struct, you are unpacking the first twelve (12) big-endian (!) unsigned 4-byte integers (I) of the NTP header into a tuple, of which you then extract the value at index 10 ([10]), i.e. the penultimate integer value. Following the definition of the NTP header format, this value is the first part of the Transmit Timestamp (Offset Octet 40 in the linked Wikipedia article). The first part of each timestamp, that is, in your case, the part that you extracted, corresponds to the seconds passed since 1900-01-01 (midnight). Because UNIX timestamps are represented in seconds passed since 1970-01-01 (midnight) instead, the code in the blog post then continues to correct this difference by subtracting the number of seconds passed between 1900-01-01 and 1970-01-01: TIME1970 = 2208988800 ... t = struct.unpack("!12I", msg)[10] t -= TIME1970 As to the second part of your question (floating point value): The last integer of the message that you unpacked ([11]) contains the fractional part of a second for your timestamp, namely a numerator to be divided by the denominator 2Β³Β². So, to also get the floating point value that you are after, you could extend your code as follows: client = socket.socket(AF_INET, SOCK_DGRAM) client.sendto(msg.encode('utf-8'), address) msg, address = client.recvfrom(buf) t, f = struct.unpack("!12I", msg)[10:12] t += f / (2 ** 32) All in all, reproducing the code from the quoted blog post, but with fractional seconds, you could do: from datetime import datetime from socket import AF_INET, SOCK_DGRAM import socket, struct def getNTPTime(host = "pool.ntp.org"): port = 123 buf = 1024 address = (host, port) msg = '\x1b' + 47 * '\0' TIME1970 = 2208988800 client = socket.socket(AF_INET, SOCK_DGRAM) client.sendto(msg.encode('utf-8'), address) msg, address = client.recvfrom(buf) t, f = struct.unpack("!12I", msg)[10:12] t -= TIME1970 t += f / (2 ** 32) return datetime.fromtimestamp(t).strftime("%a %b %d %H:%M:%S.%f %Y") if __name__ == "__main__": print(getNTPTime()) Update: Not sure if this makes a huge difference in practice, but a bit more robust approach would probably be to add the fractional seconds only after creation of the datetime object that I used for producing the format string, thus: from datetime import datetime, timedelta ... t, f = struct.unpack("!12I", msg)[10:12] t -= TIME1970 t = datetime.fromtimestamp(t) + timedelta(seconds=f / (2 ** 32))
2
4
79,463,169
2025-2-24
https://stackoverflow.com/questions/79463169/create-nested-lists-based-on-split-of-characters
I have a list made by strings, correctly cleaned (split(',') can be safely used), and correctly sorted depending on numbers. As a small example: l = ['C1', 'C1,C2', 'C2,C3', 'C3,C4', 'C4', 'C5', 'C5,C6', 'C6,C7', 'C7,C8', 'C8', 'C10', 'C10,C11', 'C11,C12', 'C12,C13', 'C13'] What I'm trying to achieve is to create as many sublists that start and end with single strings, that is: [ ['C1', 'C1,C2', 'C2,C3', 'C3,C4', 'C4'], ['C5', 'C5,C6', 'C6,C7', 'C7,C8', 'C8'], ['C10', 'C10,C11', 'C11,C12', 'C12,C13', 'C13'] ] I thought to add some logic like the following code, but I'm not sure if I'm on the correct way: tl = [] for i in l: # just get the variable val = i tl.append(val) # split by , val_split = len(i.split(',')) # check if the value is the first element of the list (C1) if val == l[0]: print(1, val) # check if the split of the character is longer than 2 (C1,C2) elif val_split > 1: print(2, val) # check is the split of the character siis equalt to 1 (C4) elif val_split == 1: # here the code should compare if the character is equal to the last value of the nested list. If yes go with teh next value (C5) if val != tl[-1]: print(3, val) else: print(4, val)
If the input list is guaranteed to start and end with a single string and if there will always be at least one adjacent pair of single strings then: lst = ['C1', 'C1,C2', 'C2,C3', 'C3,C4', 'C4', 'C5', 'C5,C6', 'C6,C7', 'C7,C8', 'C8', 'C10', 'C10,C11', 'C11,C12', 'C12,C13', 'C13'] result = [[]] for e in lst: result[-1].append(e) if not "," in e: if len(result[-1]) > 1: result.append([]) result.pop() print(result) Output: [['C1', 'C1,C2', 'C2,C3', 'C3,C4', 'C4'], ['C5', 'C5,C6', 'C6,C7', 'C7,C8', 'C8'], ['C10', 'C10,C11', 'C11,C12', 'C12,C13', 'C13']]
2
1
79,461,242
2025-2-23
https://stackoverflow.com/questions/79461242/how-to-convert-float-columns-without-decimal-to-int-in-polars
The following pandas code removes all the .0 decimal precision if I have a float column with 1.0, 2.0, 3.0 values: import pandas as pd df = pd.DataFrame({ "date": ["2025-01-01", "2025-01-02"], "a": [1.0, 2.0], "c": [1.0, 2.1], }) print(df) columns = df.columns.difference(["date"]) df[columns] = df[columns].map(lambda x: int(x) if x.is_integer() else x) print(df) The output of the above code: date a c 0 2025-01-01 1.0 1.0 1 2025-01-02 2.0 2.1 date a c 0 2025-01-01 1 1.0 1 2025-01-02 2 2.1 How can I do it using Polars?
Something like this does the trick. Note that it is not typically advised to have the schema depend on the data itself. We can, however, avoid any for-by-row iteration and used a vectorised UDF with map_batches def maybe_cast_int(s: pl.Series) -> pl.Series: """Cast the Series to an Int64 type if all values are whole numbers.""" s2 = s.cast(pl.Int64) return s2 if (s2 == s).all() else s df = pl.DataFrame({ "date": ["2025-01-01", "2025-01-02"], "a": [1.0, 2.0], "c": [1.0, 2.1], }) df.with_columns(pl.col("a", "c").map_batches(maybe_cast_int)) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ date ┆ a ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 2025-01-01 ┆ 1 ┆ 1.0 β”‚ β”‚ 2025-01-02 ┆ 2 ┆ 2.1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ This example shows it a bit better by not overwriting original columns df.select( "a", pl.col("a").map_batches(maybe_cast_int).alias("b"), "c", pl.col("c").map_batches(maybe_cast_int).alias("d"), ) shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ i64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1.0 ┆ 1 ┆ 1.0 ┆ 1.0 β”‚ β”‚ 2.0 ┆ 2 ┆ 2.1 ┆ 2.1 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
1
3
79,461,665
2025-2-23
https://stackoverflow.com/questions/79461665/django-admin-not-displaying-creation-date-field-despite-being-in-list-display
I have a Django project where I'm working on an e-commerce application. I'm using SQLite as my database, and I'm trying to add a creation_date field to my VendorProduct model so that it records when a product was created. What I Did I added the creation_date field to my VendorProduct model like this: models.py (VendorProduct Model) from django.db import models from django.utils.timezone import now from userauths.models import CustomUser class VendorProduct(models.Model): user = models.ForeignKey(CustomUser, on_delete=models.SET_NULL, null=True, blank=True) creation_date = models.DateTimeField(auto_now_add=True, blank=True, null=True) def __str__(self): return self.title Migrations I ran the following commands to apply the migration: python manage.py makemigrations vendorpannel python manage.py migrate vendorpannel Then, I verified that the creation_date column exists in my database using: from django.db import connection with connection.cursor() as cursor: cursor.execute("PRAGMA table_info(vendorpannel_vendorproduct);") columns = cursor.fetchall() for column in columns: print(column) The output confirms that creation_date exists in my database: (7, 'creation_date', 'datetime', 0, None, 0) Admin Panel Configuration I updated my admin panel to display creation_date: admin.py from django.contrib import admin from .models import VendorProduct class VendorProductAdmin(admin.ModelAdmin): list_display = ('creation_date') readonly_fields = ('creation_date',) admin.site.register(VendorProduct, VendorProductAdmin) Problem: creation_date Not Showing in Django Admin Even after making these changes and applying migrations, the creation_date field is not appearing in the Django admin panel under list_display. What I Tried Checked if the field exists in the database using SQL queries (confirmed βœ…). Cleared Django cache and restarted the server: python manage.py collectstatic python manage.py runserver Checked if the field name is correct in list_display (confirmed βœ…). Verified that list_display works for other fields (other fields display correctly, just creation_date is missing ❌). Expected Outcome I expect to see creation_date displayed in the Django admin panel under VendorProduct. The field should be read-only and should not be manually editable. Question Why is creation_date not appearing in the Django admin panel, even though: It exists in the database. It's correctly added in list_display. Migrations were applied successfully. What am I missing here? Thank you in advance
The list_display option only affects the List view, i.e. the admin page that lists existing VendorProduct instances. Your screenshot shows the Add view, which renders only editable fields, and because your creation_date field uses auto_now_add, Django treats it as non-editable, so it is hidden from that form. If you want to provide creation_date manually but still keep the default "now" behaviour, use the default option instead of auto_now_add. creation_date = models.DateTimeField(default=datetime.now, blank=True) Then this field will show up in the add form; you may also need to remove it from readonly_fields if that is not enough. In all other cases this field should show up for existing instances.
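For illustration, a minimal sketch of the corrected configuration, assuming the model and field names from the question; list_display must be a list or tuple (note the trailing comma), and django.utils.timezone.now is used here as the timezone-aware counterpart of the datetime.now mentioned above:

# admin.py (sketch)
from django.contrib import admin
from .models import VendorProduct

class VendorProductAdmin(admin.ModelAdmin):
    list_display = ('creation_date',)      # a tuple, not a bare string
    readonly_fields = ('creation_date',)   # as in the question; drop this if you want it editable

admin.site.register(VendorProduct, VendorProductAdmin)

# models.py (sketch) - default instead of auto_now_add keeps the field editable
from django.db import models
from django.utils import timezone

class VendorProduct(models.Model):
    creation_date = models.DateTimeField(default=timezone.now, blank=True)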
2
5
79,451,592
2025-2-19
https://stackoverflow.com/questions/79451592/airflow-dag-gets-stuck-when-filtering-a-polars-dataframe
I am dynamically generating Airflow DAGs based on data from a Polars DataFrame. The DAG definition includes filtering this DataFrame at DAG creation time and again inside a task when the DAG runs. However, when I run the dag and I attempt to filter the polars dataframe inside the dynamically generated DAG, the task gets stuck indefinitely after printing before filter, without raising an error. Just gets stuck and runs forever until an airflow exception is thrown on memory usage. I am with airflow 2.7.3 version and polars 0.20.31 for what it is worth mentioning it. from airflow import DAG from airflow.operators.python import PythonOperator from datetime import datetime import polars as pl def dag_constructor(name): default_args = { 'owner': 'airflow', 'start_date': datetime(2023, 1, 1), 'retries': 1, } # Define the DAG dag = DAG( dag_id=f'{name}', default_args=default_args, description='A simple DAG to print Hello World', schedule_interval='@daily', catchup=False, ) def print_hello(): print("starting") df = pl.DataFrame({ "key": ["A", "B", "A"], "branch": ["br1", "ooo", "br2"], "chain": ["ch1", "Y", "ch2"] }) print(df) print("before filter") chains = df.filter(pl.col("key") == "A").select("chain").to_series().to_list() print("after filter") print(chains) print("finish dag") hello_task = PythonOperator( task_id='print_hello', python_callable=print_hello, dag=dag, ) hello_task return dag df = pl.DataFrame({ "key": ["A", "B", "A"], "branch": ["br1", "ooo", "br2"], "chain": ["ch1", "Y", "ch2"] }) chains = df.filter(pl.col("key") == "A").select("chain").to_series().to_list() ## chains = ["ch1", "ch2"] THIS WOULD WORK, AND WONT GET STUCK, if uncommenting and commenting previous line for ch in chains: dag_my_id = f"aa__{str(ch)}" globals()[dag_my_id] = dag_constructor("aa__"+ch)
after even just importing polars in the main process, it doesn't work with how Airflow forks the child process. Even if you tell Polars to be single-threaded. What works in this case is to make the child task run in a separate process. Here's a code that worked for me: import sys import subprocess from airflow import DAG from airflow.operators.python import PythonOperator from datetime import datetime import polars as pl # If the script is run with the "child" argument, execute the task logic in a fresh process. if len(sys.argv) > 1 and sys.argv[1] == "child": print("starting") df = pl.DataFrame({ "key": ["A", "B", "A"], "branch": ["br1", "ooo", "br2"], "chain": ["ch1", "Y", "ch2"] }) print(df) print("before filter") chains = df.filter(pl.col("key") == "A").select("chain").to_series().to_list() print("after filter") print(chains) print("finish dag") sys.exit(0) # Exit after finishing child task logic # Regular DAG-generation code (global scope) def dag_constructor(name): default_args = { 'owner': 'airflow', 'start_date': datetime(2023, 1, 1), 'retries': 1, } dag = DAG( dag_id=name, default_args=default_args, description='A simple DAG to print Hello World', schedule_interval='@daily', catchup=False, ) def print_hello(): # Instead of running Polars code directly here, spawn a fresh process subprocess.check_call([sys.executable, __file__, "child"]) print("Child process executed.") hello_task = PythonOperator( task_id='print_hello', python_callable=print_hello, dag=dag, ) return dag # Global data manipulation using Polars. # Note: This import/configuration uses the default global settings. df = pl.DataFrame({ "key": ["A", "B", "A"], "branch": ["br1", "ooo", "br2"], "chain": ["ch1", "Y", "ch2"] }) chains = df.filter(pl.col("key") == "A").select("chain").to_series().to_list() for ch in chains: dag_my_id = f"aa__{ch}" globals()[dag_my_id] = dag_constructor("aa__" + ch)
2
3
79,459,880
2025-2-22
https://stackoverflow.com/questions/79459880/how-can-i-iterate-over-all-columns-using-pl-all-in-polars
I've written a custom function in Polars to generate a horizontal forward/backward fill list of expressions. The function accepts an iterable of expressions (or column names) to determine the order of filling. I want to to use all columns via pl.all() as default. The problem is that pl.all() returns a single expression rather than an iterable, so trying to reverse or iterate over it leads to a TypeError. Is there a way to convert between single expressions and iterables of expressions? Any suggestions or workarounds are greatly appreciated! Here is the function: from typing import Iterable from polars._typing import IntoExpr import polars as pl def fill_horizontal(exprs: Iterable[IntoExpr], forward: bool = True) -> list[pl.Expr]: """Generate a horizontal forward/backward fill list of expressions.""" # exprs = exprs or pl.all() # use all columns as default cols = [col for col in reversed(exprs)] if forward else exprs return [pl.coalesce(cols[i:]) for i in range(0, len(cols) - 1)] Here is an example: df = pl.DataFrame({ "col1": [1, None, 2], "col2": [1, 2, None], "col3": [None, None, 3]}) print(df) # shape: (3, 3) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” # β”‚ col1 ┆ col2 ┆ col3 β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ i64 ┆ i64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ # β”‚ 1 ┆ 1 ┆ null β”‚ # β”‚ null ┆ 2 ┆ null β”‚ # β”‚ 2 ┆ null ┆ 3 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ print('forward_fill') print(df.with_columns(fill_horizontal(df.columns, forward=True))) # shape: (3, 3) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” # β”‚ col1 ┆ col2 ┆ col3 β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ i64 ┆ i64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ # β”‚ 1 ┆ 1 ┆ 1 β”‚ # β”‚ null ┆ 2 ┆ 2 β”‚ # β”‚ 2 ┆ 2 ┆ 3 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ print('backward_fill') print(df.with_columns(fill_horizontal(df.columns, forward=False))) # shape: (3, 3) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” # β”‚ col1 ┆ col2 ┆ col3 β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ i64 ┆ i64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ # β”‚ 1 ┆ 1 ┆ null β”‚ # β”‚ 2 ┆ 2 ┆ null β”‚ # β”‚ 2 ┆ 3 ┆ 3 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Edit: Merging @Henry Harbeck's answer and @jqurious's comment seems to be not perfect but a sufficient solution as of now. def fill_horizontal( exprs: Iterable[IntoExpr] | None = None, *, forward: bool = True, ncols: int = 1000) -> pl.Expr: """Generate a horizontal forward/backward fill expression.""" if exprs is None: # if forward is false, ncols has to be defined with the present number of cols or more cols = pl.all() if forward else pl.nth(range(ncols, -1, -1)) else: cols = exprs if forward else reversed(exprs) return pl.cum_reduce(lambda s1, s2: pl.coalesce(s2, s1), cols).struct.unnest()
Check out cum_reduce, which does a cumulative horizontal reduction. This is pretty much what you are after and saves you having to do any Python looping. Unfortunately, it reduces from left to right only. I've made this feature request to ask for right to left reductions, which should fully enable your use-case. Here's a tweaked version of your function that works in a cases except pl.all() and forward=False def fill_horizontal( exprs: Iterable[IntoExpr] | None = None, *, forward: bool = True ) -> pl.Expr: """Generate a horizontal forward/backward fill list of expressions.""" exprs = exprs or [pl.all()] # use all columns as default # Doesn't do anything for pl.all() - columns still remain in their original order cols = exprs if forward else reversed(exprs) return pl.cum_reduce(lambda s1, s2: pl.coalesce(s2, s1), cols).struct.unnest() df.with_columns(fill_horizontal()) # shape: (3, 3) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” # β”‚ col1 ┆ col2 ┆ col3 β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ i64 ┆ i64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ # β”‚ 1 ┆ 1 ┆ 1 β”‚ # β”‚ null ┆ 2 ┆ 2 β”‚ # β”‚ 2 ┆ 2 ┆ 3 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ # Doesn't work :( df.with_columns(fill_horizontal(forward=False)) # Works as a backward fill df.with_columns(fill_horizontal(df.columns, forward=False)) Other options I can think of are: make this a DataFrame / LazyFrame level function. You can pipe the frame in and access the schema directly. You can then reversthe columns without needing to expose this to the caller. This may block some optimisations / lazy evaluation make a feature request to reverse the column order of multi-column expressions such as pl.all() and pl.col("a", "b") upvote the feature request I've linked, and force the caller of your function to use df.columns until it hopefully gets implemented
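Building on the first of those other options, here is a minimal sketch (names are illustrative) of a frame-level helper that uses the same cum_reduce/coalesce idea but reads the column order from the frame's own schema, so no multi-column expression ever needs reversing:

import polars as pl

def fill_frame_horizontal(df: pl.DataFrame, forward: bool = True) -> pl.DataFrame:
    """Row-wise forward/backward fill across columns, in schema order."""
    cols = df.columns if forward else list(reversed(df.columns))
    filled = pl.cum_reduce(lambda s1, s2: pl.coalesce(s2, s1), [pl.col(c) for c in cols])
    return df.with_columns(filled.struct.unnest())

# usage, e.g. as part of a pipeline:
# df.pipe(fill_frame_horizontal, forward=False)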
4
3
79,459,799
2025-2-22
https://stackoverflow.com/questions/79459799/how-to-update-folium-map-without-re-rendering-entire-map
I have a Folium map placed in PySide6 QWebEngineView. Map coordinates are updated each second and the map is recentered to the new position. However, this is re-rendering entire map with each update and it causes "flashing", which is not user friendly. I need to make the map to reposition by smooth dragging/sliding, or even jump will be fine if flashing does not happen. I could not find any real solution for it anywhere. This is the code I have in map.py: import io import sys import folium from PySide6.QtCore import QTimer from PySide6.QtWebEngineWidgets import QWebEngineView from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget class MapWebView(QWebEngineView): def __init__(self, initial_coordinates: tuple[float, float]): super().__init__() self.folium_map = folium.Map( location=initial_coordinates, zoom_start=13, zoom_control=False, attribution_control=False ) self.data = io.BytesIO() self.folium_map.save(self.data, close_file=False) self.setHtml(self.data.getvalue().decode()) def update_map(self, new_coords: tuple[float, float]): self.folium_map = folium.Map( location=new_coords, zoom_start=13, zoom_control=False, attribution_control=False ) self.data = io.BytesIO() self.folium_map.save(self.data, close_file=False) self.setHtml(self.data.getvalue().decode()) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.resize(600, 600) self.setCentralWidget(QWidget()) self.centralWidget().setLayout(QVBoxLayout()) self.map_webview = MapWebView((47.030780, 8.656176)) self.centralWidget().layout().addWidget(self.map_webview) timer = QTimer(self) timer.timeout.connect(self.update_map) timer.start(1000) def update_map(self): current_coords = self.map_webview.folium_map.location new_coords = (current_coords[0] + 0.002, current_coords[1] + 0.002) self.map_webview.update_map(new_coords) if __name__ == '__main__': app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec())
I discovered you can run JavaScript code to update the map in the HTML without rendering it again. You have to get the map object's name with self.folium_map.get_name(), build JavaScript code that calls layer_name.setView(coords), and run it with self.page().runJavaScript(js_code). You also need to assign the new coords to self.folium_map.location in Python, because the JavaScript doesn't update this value and you need it in the next calculations. def update_map(self, new_coords: tuple[float, float]): self.folium_map.location = new_coords # keep it for next calculation because JavaScript doesn't update it map_name = self.folium_map.get_name() js_code = f'{map_name}.setView({list(new_coords)})' # I use `list()` because JavaScript needs `[ ]` instead of `( )` #print(js_code) self.page().runJavaScript(js_code) Other examples: Set lat, long and zoom js_code = f'{map_name}.setView({list(new_coords)}, 13)' # lat, lng and zoom js_code = f'{map_name}.setView({{lat: {new_coords[0]}, lng: {new_coords[1]} }}, 13)' # lat, lng and zoom Set zoom js_code = f'{map_name}.setZoom(8)' Display popup window with current values. js_code = f'alert("Center: " + {map_name}.getCenter() + " Zoom: " + {map_name}.getZoom())' Some places where I found some information: python - QWebEngineView - Javascript Callback - Stack Overflow javascript - Retaining zoom and map center in Shiny for Python with Folium when changing map layer - Stack Overflow Updating folium coordinates on callback in dash app - Dash Python - Plotly Community Forum
2
1
79,460,272
2025-2-22
https://stackoverflow.com/questions/79460272/regex-b-devanagiri
I have a regex in Python which uses \b to split words. When I use it on Devanagari text, I notice that not all characters in the Unicode block are defined as word characters. Certain punctuation marks appear to be defined as non-word characters. This is fundamentally wrong as words in this script can end with these characters. Is it possible to tell regex to treat the entire block from 0x900 to 0x97f as word characters? See for example the following regex. '(?<!\.)(a(?:bc|de)|zip|ΰ€šΰ€Ύΰ€―|ΰ€ͺΰ€Ύΰ€¨ΰ₯€)\b' Here, the first four words abc, ade, zip and ΰ€šΰ€Ύΰ€― are detected at proper word boundaries. The word ΰ€ͺΰ€Ύΰ€¨ΰ₯€, however, ends with a vowel ी and regex does not treat it as a valid word boundary when ideally it should be. >>> import re >>> re.findall(r"(?<!\.)(a(?:bc|de)|zip|ΰ€šΰ€Ύΰ€―|ΰ€ͺΰ€Ύΰ€¨ΰ₯€)\b", 'This is abc, ade, ΰ€šΰ€Ύΰ€―, ΰ€ͺΰ€Ύΰ€¨ΰ₯€ and abca') ['abc', 'ade', 'ΰ€šΰ€Ύΰ€―'] Can I change this regex behavior and if yes, how?
The problem with the pattern is that \b treats U+093E (DEVANAGARI VOWEL SIGN AA) and U+0940 (DEVANAGARI VOWEL SIGN II) as non-word characters, so the boundaries in the word ΰ€ͺΰ€Ύΰ€¨ΰ₯€ occur after each consonant and before the dependent vowels. It is critical to understand, when working with Python regular expressions on text in Devanagari script, that the re module's definitions of \w and \b are fundamentally different from Unicode's definitions. The easiest fix is to use the regex module instead; that engine has proper Unicode support, unlike the re module. import regex as re re.findall(r"(?<!\.)(a(?:bc|de)|zip|ΰ€šΰ€Ύΰ€―|ΰ€ͺΰ€Ύΰ€¨ΰ₯€)\b", 'This is abc, ade, ΰ€šΰ€Ύΰ€―, ΰ€ͺΰ€Ύΰ€¨ΰ₯€ and abca') # ['abc', 'ade', 'ΰ€šΰ€Ύΰ€―', 'ΰ€ͺΰ€Ύΰ€¨ΰ₯€']
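If the third-party regex module cannot be used, an alternative workaround (not part of the accepted answer, just a sketch) is to stay with the stdlib re module and replace \b with an explicit lookahead that also counts the Devanagari block U+0900-U+097F as word characters:

import re

# custom right-hand "word boundary": the next character must not be a regular
# word character and must not belong to the Devanagari block U+0900-U+097F
DEV_BOUNDARY = r'(?![\w\u0900-\u097F])'

pattern = r"(?<!\.)(a(?:bc|de)|zip|ΰ€šΰ€Ύΰ€―|ΰ€ͺΰ€Ύΰ€¨ΰ₯€)" + DEV_BOUNDARY
print(re.findall(pattern, 'This is abc, ade, ΰ€šΰ€Ύΰ€―, ΰ€ͺΰ€Ύΰ€¨ΰ₯€ and abca'))
# expected: ['abc', 'ade', 'ΰ€šΰ€Ύΰ€―', 'ΰ€ͺΰ€Ύΰ€¨ΰ₯€']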
1
3
79,458,994
2025-2-22
https://stackoverflow.com/questions/79458994/how-to-count-basic-math-operations-performed-in-a-python-recursive-function
I need to write a python script that counts the number of times these operations: +, -, *, //, %, >, <, ==, <=, >= are performed in a piece of python code relative to the input of N. Python Code: def bar(k): score = 0 for i in range(2, k + 1): j = i - 1 while j > 0: if i % j == 0: score = score // 2 else: score = score + 10 j = j - 1 return score mod = 10**9 + 7 def foo(n): m = n % 3 if n == 0: return 1 elif m == 0: v = foo(n // 3) t = 0 for i in range(1, n+1): t = t + bar(4 * i) return v + t elif m == 1: v = foo(n - 1) return v + bar(n * n * n) else: v = foo(n - 2) r = 1 for a in range(2, n): r = r * a % mod r = r + bar(n) return v + r My Script: def count_operations(n): if n == 0: return 1 # Base case: 1 operation for returning 1 elif n % 3 == 0: return count_operations(n // 3) + 4 elif n % 3 == 1: return 6 + count_operations(n - 1) + 4 else: return 9 + 2 * count_operations(n - 2) I wrote the script with the understanding of when N=1,calculating m=1 % 3 takes 1 operation, and the value of m is 1. Then the code executes 3 tests of the == operation before finding one to be true, when it encounters elif m == 1:. Up to that point, 4 operations have been performed. Then it performs a recursive call with foo(n-1) which means that 5 operations have now been performed before the recursive call (the last for the -). The recursive call now has n=0, so 2 more operations occur (n%3 and n==0), before it returns with its result of 1. That makes a total of 7 basic operations that have now been performed to the point of returning from the recursive call to foo. The code next encounters return v + bar(nnn), which means that 2 more operations (both *) occur before the call to bar followed by 1 more (+) afterwards, to bring the total 10, not counting the number done in bar. Simulating the call to bar(1), 1 more operation (k+1) is performed when the top end of the range of the for loop is calculated, bringing the total to 11. The for loop terminates immediately since the range of its iteration is empty, and the score of 0 is returned to complete the (already counted) + operation and return statement in foo. So, that means a total of 11 basic operations were performed, and we return 11 as the answer for when N=1. So, my script only works correctly when n=1 anyone can help me fixed my script to get an understanding of how to do the proper counting when N>1? My inefficient code what produces the correct answer: class Integer(int): n_ops = 0 def new_patch(name): def patch(self, *args): Integer.n_ops += 1 value = getattr(int, name)(self, *args) if isinstance(value, int) and not (value is True or value is False): value = Integer(value) return value patch.__name__ = name return patch methods = { '__le__': '\u2264', '__lt__': '<', '__ge__': '\u2265', '__gt__': '>', '__eq__': '==', '__add__': '+', '__sub__': '-', '__mul__': '*', '__floordiv__': '//', '__mod__': '%', } for name in methods: setattr(Integer, name, new_patch(name)) def bar(k): score = 0 for i in range(2, k + 1): j = i - 1 while j > 0: if i % j == 0: score = score // 2 else: score = score + 10 j = j - 1 return score mod = 10**9 + 7 Integer.n_ops+=1 def foo(n): m = n % 3 if n == 0: return 1 elif m == 0: v = foo(n // 3) t = 0 for i in range(1, n+1): t = t + bar(4 * i) return v + t elif m == 1: v = foo(n - 1) return v + bar(n * n * n) else: v = foo(n - 2) r = 1 for a in range(2, n): r = r * a % mod r = r + bar(n) return v + r def countOps(n): print(f'Number of operations when n={n}: {Integer.n_ops}')
First of all, as you want to count both % and == as operations, the first base case should be return 2 instead of return 1. Also the constant terms you add in the other return statements seem all wrong. For instance, the second return statement should count 6 instead of 4 as its constant term: there are n % 3, n == 0, m == 0, n // 3, n+1, v+t. It seems you have not counted the comparisons. Here is an analysis to get the correct counts: Analysis First look at bar. The number of operations in the body of the while loop is 4: a %, ==, either // or +, and -. The number of executions of the while loop body is 𝑗, and the operation in the while condition is executed one more time, so that gives 5𝑗+1 operations for the while, not considering the outer loop yet. The body of the for loop has one more operation (a -) so that body executes 5𝑗+2 operations. The for loop makes 𝑖 vary from 2 to π‘˜ (inclusive), and thus 𝑗 varies from 1 to π‘˜βˆ’1 (inclusive), so we have this count of operations: βˆ‘π‘—=1..π‘˜βˆ’1(5𝑗+2) = 5βˆ‘π‘—=1..kβˆ’1(𝑗) + 2(π‘˜βˆ’1) The sum term is a triangular number. We can substitute it: 5(π‘˜Β²βˆ’π‘˜)/2 + 2(π‘˜βˆ’1) = (5π‘˜Β² βˆ’ π‘˜ βˆ’ 4) / 2 Add to that the operation in the range argument, and we get as final number of operations for bar(k): (5π‘˜Β² βˆ’ π‘˜ βˆ’ 2) / 2 Now look at foo. For the case where π‘š is 0: One iteration of the for loop body executes 2 operations and those from the bar call, i.e. 2 + (5(4𝑖)Β² βˆ’ 4𝑖 βˆ’ 2) / 2 = 40𝑖² βˆ’ 2𝑖 + 1. As 𝑖 iterates from 1 to 𝑛 (inclusive), we have this sum: βˆ‘π‘–=1..𝑛(40𝑖² βˆ’ 2𝑖 + 1) = 40βˆ‘π‘–=1..𝑛(𝑖²) βˆ’ 2βˆ‘π‘–=1..𝑛(𝑖) + 𝑛 The second sum is again a triangular number, while the first is a square pyramidal number, for which there also is a closed formula, and so we can write the above as: 40(2𝑛³+3𝑛²+𝑛)/6 βˆ’ 2(𝑛²+𝑛)/2 + 𝑛 = 20(2𝑛³+3𝑛²+𝑛)/3 βˆ’ 𝑛² βˆ’ 𝑛 + 𝑛 = (40𝑛³ + 57𝑛² + 20𝑛)/3 There are 6 more operations to add to this (n % 3, n == 0, m == 0, n // 3, n+1, v+t), excluding the recursive call (which apparently you want to maintain), so we arrive at this count for when π‘š is 0: (40𝑛³ + 57𝑛² + 20𝑛 + 18) / 3 For the case where π‘š is 1: We have one call of bar with argument 𝑛³, so that represents (5(𝑛³)Β² βˆ’ 𝑛³ βˆ’ 2) / 2 operations. There are 8 more operations to add to this (n % 3, n == 0, m == 0, m == 1, n - 1, n * n * n (2), v + bar()), so we arrive at (5𝑛6 βˆ’ 𝑛³ + 14) / 2 For the case where π‘š is 2: One iteration of the for loop body executes 3 operations and those from the bar call, i.e. 3 + (5𝑛² βˆ’ 𝑛 βˆ’ 2) / 2 = (5𝑛² βˆ’ 𝑛 + 4) / 2. As π‘Ž iterates from 2 to π‘›βˆ’1 (inclusive), we have π‘›βˆ’2 of those iterations, where the number of operations don't depend on π‘Ž, and so we get: (𝑛 βˆ’ 2)(5𝑛² βˆ’ 𝑛 + 4) / 2 = (5𝑛³ βˆ’ 11𝑛² + 6𝑛 βˆ’ 8) / 2 There are 6 more operations to add to this (n % 3, n == 0, m == 0, m == 1, n - 2, v + r), so we arrive at (5𝑛³ βˆ’ 11𝑛² + 6𝑛 + 4) / 2 Code The above analysis leads to the following code for count_operations: def count_operations(n): if n == 0: return 2 elif n % 3 == 0: return count_operations(n // 3) + (40*n**3 + 57*n*n + 20*n + 18) // 3 elif n % 3 == 1: return count_operations(n - 1) + (5*n**6 - n**3 + 14) // 2 else: return count_operations(n - 2) + (5*n**3 - 11*n*n + 6*n + 4) // 2
3
3
79,458,938
2025-2-22
https://stackoverflow.com/questions/79458938/why-does-an-integer-inside-a-generator-function-swallow-the-object-of-send
I am not trying to achieve anything -- apart from learning how generator functions and coroutines work on a brick level, which I am not really getting yet, despite lots of reading.... $cat test.py #No integer def eee(): num = yield print(f"First num: {num}") num = yield print(f"Second num: {num}") num = yield print(f"Third num: {num}") #integer def ddd(): yield 100 num = yield print(f"First num: {num}") num = yield print(f"Second num: {num}") num = yield print(f"Third num: {num}") e=eee() e.send(None) e.send(1) e.send(2) try: e.send(3) except StopIteration as e: print(f'Done with e: {e}\n') d=ddd() print(d.send(None)) d.send(1) d.send(2) d.send(3) $python3 test.py First num: 1 Second num: 2 Third num: 3 Done with e: 100 First num: 2 Second num: 3 why does d swallow the num 1?
d "swallows" the 1 you sent it because of the line yield 100 that you added to ddd and that wasn't in eee. When you call d.send(1), the generator is paused at that yield 100, so the 1 becomes the result of that yield expression and is discarded (it is not assigned to anything); execution then advances to the first num = yield and pauses again. The 2 therefore goes to that next yield and prints as "First num: 2".
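As a minimal sketch (not from the answer above), folding the initial value into the receiving yield avoids the swallow, because the very first yield both produces 100 and later receives the first sent value:

def ddd_fixed():
    num = yield 100          # the priming send(None) yields 100 and pauses here
    print(f"First num: {num}")
    num = yield
    print(f"Second num: {num}")
    yield                    # park here so the last send() does not raise StopIteration

d = ddd_fixed()
print(d.send(None))          # 100
d.send(1)                    # First num: 1
d.send(2)                    # Second num: 2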
2
3
79,457,881
2025-2-21
https://stackoverflow.com/questions/79457881/create-column-from-other-columns-created-within-same-with-columns-context
Here, column "AB" is just being created and at the same time is being used as input to create column "ABC". This fails. df = df.with_columns( (pl.col("A")+pl.col("B")).alias("AB"), (pl.col("AB")+pl.col("C")).alias("ABC") ) The only way to achieve the desired result is a second call to with_columns. df1 = df.with_columns( (pl.col("A")+pl.col("B")).alias("AB") ) df2 = df1.with_columns( (pl.col("AB")+pl.col("C")).alias("ABC") )
Underlying Problem In general, all expressions within a (with_columns, select, filter, group_by) context are evaluated in parallel. Especially, there are no columns previously created within the same context. Solution Still, you can avoid writing large expressions multiple times, by saving the expression to a variable. import polars as pl df = pl.DataFrame({ "a": [1], "b": [2], "c": [3], }) ab_expr = pl.col("a") + pl.col("b") df.with_columns( ab_expr.alias("ab"), (ab_expr + pl.col("c")).alias("abc"), ) shape: (1, 5) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ ab ┆ abc β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 2 ┆ 3 ┆ 3 ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Note that polar's query plan optimization accounts for the joint sub-plan and the computation doesn't necessarily happen twice. This can be checked as follows. ab_expr = pl.col("a") + pl.col("b") ( df .lazy() .with_columns( ab_expr.alias("ab"), (ab_expr + pl.col("c")).alias("abc"), ) .explain() ) simple Ο€ 5/6 ["a", "b", "c", "ab", "abc"] WITH_COLUMNS: [col("__POLARS_CSER_0xd4acad4332698399").alias("ab"), [(col("__POLARS_CSER_0xd4acad4332698399")) + (col("c"))].alias("abc")] WITH_COLUMNS: [[(col("a")) + (col("b"))].alias("__POLARS_CSER_0xd4acad4332698399")] DF ["a", "b", "c"]; PROJECT */3 COLUMNS Especially, polars is aware of the sub-plan __POLARS_CSER_0xd4acad4332698399 shared between expressions. Syntacic Sugar (?) Moreover, the walrus operation might be used to do the variable assignment within the context. df.with_columns( (ab_expr := pl.col("a") + pl.col("b")).alias("ab"), (ab_expr + pl.col("c")).alias("abc"), )
2
2
79,457,848
2025-2-21
https://stackoverflow.com/questions/79457848/how-to-perform-row-aggregation-across-the-largest-x-columns-in-a-polars-data-fra
I have a data frame with 6 value columns and I want to sum the largest 3 of them. I also want to create an ID matrix to identify which columns were included in the sum. So the initial data frame may be something like this: df = pl.DataFrame({ 'id_col': [0,1,2,3,4], 'val1': [10,0,0,20,5], 'val2': [5,1,2,3,10], 'val3': [8,2,2,2,5], 'val4': [1,7,7,4,1], 'val5': [3,0,0,6,0], 'val6': [2,7,5,5,4] }) and then the output would look like this: df = pl.DataFrame({ 'id_col': [0,1,2,3,4], 'val1': [1,0,0,1,1], 'val2': [1,0,1,0,1], 'val3': [1,1,0,0,1], 'val4': [0,1,1,0,0], 'val5': [0,0,0,1,0], 'val6': [0,1,1,1,0], 'agg_col': [23,16,14,31,20] }) Note that there was a tie for third place in the third row and it can just be arbitrarily decided which val column gets credit for the submission to the sum. I have tried concatenating the columns into a list and sorting them but I'm having trouble manipulating the list. I thought maybe I could take the top three from the list and sum them and then perform a row-wise check to see if the original columns were in the list. df.with_columns(pl.concat_list(pl.col(val_cols)).list.sort().alias('val_list') I have tried making use of top_k_by, cut, and slice but can't quite get it.
Here are the steps: unpivot the val columns for each id_col group, sum the largest 3 columns using pl.col("value").top_k(3).sum() get a list of the names of those columns using pl.col("variable").top_k_by("value", k=3) Construct the flag columns (row-wise check if each column is in list of the top 3) using [pl.lit(col).is_in("variable").cast(pl.Int8).alias(col) for col in val_cols] Solution: import polars as pl import polars.selectors as cs df = pl.DataFrame( { "id_col": [0, 1, 2, 3, 4], "val1": [10, 0, 0, 20, 5], "val2": [5, 1, 2, 3, 10], "val3": [8, 2, 2, 2, 5], "val4": [1, 7, 7, 4, 1], "val5": [3, 0, 0, 6, 0], "val6": [2, 7, 5, 5, 4], } ) val_cols = cs.expand_selector(df, cs.starts_with("val")) res = ( df.unpivot( cs.starts_with("val"), index="id_col", ) .group_by("id_col") .agg( pl.col("value").top_k(3).sum().alias("agg_col"), pl.col("variable").top_k_by("value", k=3), ) .select( "id_col", *[pl.lit(col).is_in("variable").cast(pl.Int8).alias(col) for col in val_cols], "agg_col", ) .sort("id_col") ) Output: >>> res shape: (5, 8) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id_col ┆ val1 ┆ val2 ┆ val3 ┆ val4 ┆ val5 ┆ val6 ┆ agg_col β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i8 ┆ i8 ┆ i8 ┆ i8 ┆ i8 ┆ i8 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ═════════║ β”‚ 0 ┆ 1 ┆ 1 ┆ 1 ┆ 0 ┆ 0 ┆ 0 ┆ 23 β”‚ β”‚ 1 ┆ 0 ┆ 0 ┆ 1 ┆ 1 ┆ 0 ┆ 1 ┆ 16 β”‚ β”‚ 2 ┆ 0 ┆ 1 ┆ 0 ┆ 1 ┆ 0 ┆ 1 ┆ 14 β”‚ β”‚ 3 ┆ 1 ┆ 0 ┆ 0 ┆ 0 ┆ 1 ┆ 1 ┆ 31 β”‚ β”‚ 4 ┆ 1 ┆ 1 ┆ 1 ┆ 0 ┆ 0 ┆ 0 ┆ 20 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ # Output of the first step >>> df.unpivot(cs.starts_with("val"), index="id_col") shape: (30, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ id_col ┆ variable ┆ value β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ═══════║ β”‚ 0 ┆ val1 ┆ 10 β”‚ β”‚ 1 ┆ val1 ┆ 0 β”‚ β”‚ 2 ┆ val1 ┆ 0 β”‚ β”‚ 3 ┆ val1 ┆ 20 β”‚ β”‚ 4 ┆ val1 ┆ 5 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 0 ┆ val6 ┆ 2 β”‚ β”‚ 1 ┆ val6 ┆ 7 β”‚ β”‚ 2 ┆ val6 ┆ 5 β”‚ β”‚ 3 ┆ val6 ┆ 5 β”‚ β”‚ 4 ┆ val6 ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
2
2
79,457,702
2025-2-21
https://stackoverflow.com/questions/79457702/django-formset-nested-structure-not-posting-correctly-for-dynamic-fields
I’m working on a Django nested formset where users can: Add multiple colors to a product. For each color, add multiple sizes dynamically using JavaScript. Each size should have its own size_name, stock, and price_increment field. Issue When submitting the form, Django is incorrectly grouping multiple size field values into lists instead of treating them as separate entries. Expected Django POST Data (Correct Structure) sizes-0-0-size_name = "Small" sizes-0-0-stock = "100" sizes-0-0-price_increment = "50" sizes-0-1-size_name = "Medium" sizes-0-1-stock = "150" sizes-0-1-price_increment = "75" Actual Django POST Data (Incorrect Structure) sizes-0-0-size_name = ["Small", "Medium"] sizes-0-0-stock = ["100", "150"] sizes-0-0-price_increment = ["50", "75"] Instead of separate fields for each size, Django is grouping values into a single list. The sizes-0-TOTAL_FORMS field is appearing twice in the POST request, which might indicate a JavaScript duplication issue. Debugging the Request Data (request.POST) <QueryDict: { 'colors-TOTAL_FORMS': ['1'], 'sizes-0-TOTAL_FORMS': ['1', '1'], # This should be a single value, not duplicated 'sizes-0-0-size_name': ['Small', 'Medium'], 'sizes-0-0-stock': ['100', '150'], 'sizes-0-0-price_increment': ['50', '75'] }> Potential Causes: JavaScript Issue: Dynamic form addition might be incorrectly naming inputs, causing Django to interpret multiple values as a list. TOTAL_FORMS for sizes might not be updated properly, leading to duplicate values. Django Formset Issue: Django might not be detecting individual size inputs properly due to incorrect prefix handling. Code Implementation Forms (forms.py) class ProductForm(forms.ModelForm): class Meta: model = VendorProduct fields = ['title', 'cagtegory', 'base_price'] class ProductColorForm(forms.ModelForm): class Meta: model = ProductColor fields = ['color_name', 'color_code'] class ProductSizeForm(forms.ModelForm): class Meta: model = ProductSize fields = ['size_name', 'stock', 'price_increment'] ProductColorFormSet = inlineformset_factory( VendorProduct, ProductColor, form=ProductColorForm, extra=1, can_delete=True ) ProductSizeFormSet = inlineformset_factory( ProductColor, ProductSize, form=ProductSizeForm, extra=1, can_delete=True ) View (views.py) @login_required def add_product(request): if request.method == 'POST': product_form = ProductForm(request.POST) color_formset = ProductColorFormSet(request.POST, prefix='colors') if product_form.is_valid() and color_formset.is_valid(): product = product_form.save() for color_index, color_form in enumerate(color_formset): if color_form.cleaned_data.get('color_name'): color = color_form.save(commit=False) color.product = product color.save() # **Check if sizes are structured properly** size_formset = ProductSizeFormSet( request.POST, instance=color, prefix=f'sizes-{color_index}' ) print(f"Processing sizes for color index {color_index}:") print(request.POST) if size_formset.is_valid(): size_formset.save() return redirect('vendorpannel:vendor_shop') else: product_form = ProductForm() color_formset = ProductColorFormSet(prefix='colors') color_size_formsets = [ ProductSizeFormSet(instance=color_form.instance, prefix=f'sizes-{index}') for index, color_form in enumerate(color_formset.forms) ] return render(request, 'vendorpannel/add-product.html', { 'product_form': product_form, 'color_formset': color_formset, 'color_size_formsets': color_size_formsets, }) JavaScript for Dynamic Form Handling (add_product.html) document.addEventListener("DOMContentLoaded", function () { let 
colorIndex = document.querySelectorAll(".color-item").length; function addColor() { let totalForms = document.querySelector('[name="colors-TOTAL_FORMS"]'); let newColor = document.querySelector(".color-item").cloneNode(true); newColor.querySelectorAll("input").forEach(input => { input.name = input.name.replace(/colors-\d+/g, `colors-${colorIndex}`); input.value = ""; }); let sizeContainer = newColor.querySelector(".sizeContainer"); sizeContainer.innerHTML = ""; let sizeTotalForms = document.createElement("input"); sizeTotalForms.type = "hidden"; sizeTotalForms.name = `sizes-${colorIndex}-TOTAL_FORMS`; sizeTotalForms.value = "0"; sizeContainer.appendChild(sizeTotalForms); document.getElementById("colorContainer").appendChild(newColor); totalForms.value = colorIndex + 1; colorIndex++; } document.getElementById("addColorButton")?.addEventListener("click", addColor); }); What I’ve Tried: βœ… Ensured sizes-{colorIndex}-TOTAL_FORMS exists before adding sizes dynamically. βœ… Used name.replace() correctly to update input names. βœ… Verified prefix usage in Django forms and formsets. Question: How can I ensure that each size input field gets a unique name instead of Django grouping multiple values into lists? Full template which is rendering the formsets {% extends 'vendorpannel/base.html' %} {% load static %} {% block title %}Dashboard{% endblock %} {% load humanize %} {% block content %} <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Product Upload Form</title> </head> <body> <div class="container1"> <h1>Product Upload</h1> <form method="post" enctype="multipart/form-data"> {% csrf_token %} <label for="id_title">Product Title</label> {{ product_form.title }} <label for="id_category">Category</label> {{ product_form.cagtegory }} <label for="id_base_price">Base Price</label> {{ product_form.base_price }} <!-- Color Section --> <div id="colorContainer"> <h3>Colors</h3> {{ color_formset.management_form }} {% for color_form, size_formset in color_size_formsets %} <div class="dynamic-item color-item"> {{ color_form.color_name.label_tag }} {{ color_form.color_name }} {{ color_form.color_code.label_tag }} {{ color_form.color_code }} <!-- Size Section --> <div class="sizeContainer"> <h4>Sizes</h4> {{ size_formset.management_form }} {% for size_form in size_formset %} <div class="dynamic-item size-item"> {{ size_form.size_name.label_tag }} {{ size_form.size_name }} {{ size_form.stock.label_tag }} {{ size_form.stock }} {{ size_form.price_increment.label_tag }} {{ size_form.price_increment }} </div> {% endfor %} </div> <button type="button" class="add-size-btn add-btn">Add Size</button> <button type="button" class="remove-btn" onclick="removeColor(this)">Remove Color</button> </div> {% endfor %} </div> <button type="button" id="addColorButton" class="add-btn">Add Another Color</button> <!-- Additional Product Images --> <div id="imageContainer"> <h3>Additional Product Images</h3> {{ image_formset.management_form }} {% for image_form in image_formset %} <div class="dynamic-item image-item"> {{ image_form.image }} </div> {% endfor %} </div> <button type="button" id="addImageButton" class="add-btn">Add Another Image</button> <!-- Submit --> <button type="submit">Submit Product</button> </form> </div> <script> document.addEventListener("DOMContentLoaded", function () { let colorIndex = document.querySelectorAll(".color-item").length; function addColor() { let colorContainer = document.getElementById("colorContainer"); 
let totalForms = document.querySelector('[name="colors-TOTAL_FORMS"]'); let newColor = document.querySelector(".color-item").cloneNode(true); newColor.querySelectorAll("input, select").forEach(input => { input.name = input.name.replace(/colors-\d+/g, `colors-${colorIndex}`); input.removeAttribute("id"); input.value = ""; }); let sizeContainer = newColor.querySelector(".sizeContainer"); sizeContainer.innerHTML = ""; // πŸ”Ή Ensure TOTAL_FORMS for sizes exists let sizeTotalForms = document.createElement("input"); sizeTotalForms.setAttribute("type", "hidden"); sizeTotalForms.setAttribute("name", `sizes-${colorIndex}-TOTAL_FORMS`); sizeTotalForms.setAttribute("value", "0"); sizeTotalForms.classList.add("size-total-forms"); sizeContainer.appendChild(sizeTotalForms); newColor.querySelector(".add-size-btn").addEventListener("click", function () { addSize(this, colorIndex); }); colorContainer.appendChild(newColor); totalForms.value = colorIndex + 1; colorIndex++; } function addSize(button, colorIdx) { let colorItem = button.closest(".color-item"); let sizeContainer = colorItem.querySelector(".sizeContainer"); // πŸ”Ή Ensure TOTAL_FORMS for this color exists let totalForms = sizeContainer.querySelector('.size-total-forms'); if (!totalForms) { console.warn(`Creating missing totalForms field for sizes-${colorIdx}-TOTAL_FORMS`); totalForms = document.createElement("input"); totalForms.setAttribute("type", "hidden"); totalForms.setAttribute("name", `sizes-${colorIdx}-TOTAL_FORMS`); totalForms.setAttribute("value", "0"); totalForms.classList.add("size-total-forms"); sizeContainer.appendChild(totalForms); } let sizeIndex = parseInt(totalForms.value); let newSize = document.querySelector(".size-item").cloneNode(true); newSize.querySelectorAll("input, select").forEach(input => { let fieldType = input.getAttribute("name").split("-").pop(); input.name = `sizes-${colorIdx}-${sizeIndex}-${fieldType}`; input.removeAttribute("id"); input.value = ""; }); sizeContainer.appendChild(newSize); totalForms.value = sizeIndex + 1; } document.getElementById("addColorButton")?.addEventListener("click", addColor); document.querySelectorAll(".add-size-btn").forEach(button => { button.addEventListener("click", function () { let colorIdx = button.closest(".color-item").querySelector('[name^="colors-"]').name.match(/colors-(\d+)/)[1]; addSize(this, colorIdx); }); }); }); </script> </body> </html> {% endblock %}
You need to group the color_size_formsets per color_formset, so: @login_required def add_product(request): if request.method == 'POST': # ... else: product_form = ProductForm() color_formset = ProductColorFormSet(prefix='colors') for index, color_form in enumerate(color_formset.forms): color_form.size_formset = ProductSizeFormSet(instance=color_form.instance, prefix=f'sizes-{index}') return render(request, 'vendorpannel/add-product.html', { 'product_form': product_form, 'color_formset': color_formset, }) this is important because otherwise you each time render all ColorSizeFormSets in the list per color_form. The template then has: {% for color_form in color_formset %} <div class="dynamic-item color-item"> {{ color_form.color_name.label_tag }} {{ color_form.color_name }} {{ color_form.color_code.label_tag }} {{ color_form.color_code }} <!-- Size Section --> <div class="sizeContainer"> <h4>Sizes</h4> {{ color_form.size_formset.management_form }} {% for size_form in color_form.size_formset %} <div class="dynamic-item size-item"> {{ size_form.size_name.label_tag }} {{ size_form.size_name }} {{ size_form.stock.label_tag }} {{ size_form.stock }} {{ size_form.price_increment.label_tag }} {{ size_form.price_increment }} </div> {% endfor %} </div> <button type="button" class="add-size-btn add-btn">Add Size</button> <button type="button" class="remove-btn" onclick="removeColor(this)">Remove Color</button> </div> {% endfor %}
2
1
79,456,037
2025-2-20
https://stackoverflow.com/questions/79456037/in-a-matplotlib-plot-is-there-a-way-to-automatically-set-the-xlim-after-it-has-b
I am working on a GUI where a user can specify the both the min and max x limit. When the value is left blank I would like it to be automatically calculated. One limit can be set while the other is automatically calculated by setting it to None. But after setting one limit and then setting it to None does not automatically update the limit. Both values can be automatically calculated by using set_xlim(auto=True) but this forces both values to be automatically calculated and does not allow only one value to be automatically calculated. Is there a way of having just one limit automatically re-calculated? For example in MATLAB xlim[-Inf, 4] would automatically calculate the first limit. Below is an example without a GUI import matplotlib.pyplot as plt x_data = [1, 2, 3, 4, 5] y_data = [1, 2, 3, 4, 5] fig, ax = plt.subplots(1) ax.plot(x_data, y_data, '.') # Initially limits are automatically calculated ax.set_xlim(auto=True) print(ax.get_xlim()) # prints: (np.float64(0.8), np.float64(5.2)) # User sets value with GUI ax.set_xlim(2, 4) print(ax.get_xlim()) # prints: (np.float64(2.0), np.float64(4.0)) # Later user changes their mind and leaves the first limit blank ax.set_xlim(None, 4) # print(ax.get_xlim()) # prints: (np.float64(2.0), np.float64(4.0)) plt.show()
As noted in the comment, we can "reset" to automatic axis limits using ax.set_xlim(auto=True) again followed by ax.autoscale_view(). # Later user changes their mind and leaves the first limit blank ax.set_xlim(auto=True) ax.autoscale_view() ax.set_xlim(None, 4) print(ax.get_xlim()) # prints: (0.8, 4.0)
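Since the original use case is a GUI where either limit may be left blank, here is a small sketch (the helper name is made up) that wraps the recipe above so that None always means "recompute this limit automatically":

def apply_user_xlim(ax, xmin=None, xmax=None):
    """Apply user-supplied x limits; None means auto-compute that side."""
    ax.set_xlim(auto=True)    # re-enable autoscaling on x
    ax.autoscale_view()       # recompute both limits from the plotted data
    ax.set_xlim(xmin, xmax)   # then pin only the limits the user actually gave

# e.g. the user typed only an upper limit in the GUI:
apply_user_xlim(ax, None, 4)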
2
2
79,457,363
2025-2-21
https://stackoverflow.com/questions/79457363/python-matplotlib-tight-layout-spacing-for-subplots
I'm using matplotlib to plot my data in a 4x2 grid of subplots. The matplotlib.pyplot.tight_layout automatically fits the subplots, legend, and text labels into a figure that I can save as png. However, when the legend is extremely long, tight_layout seems to add extra horizontal space to some subplots. What is the most efficient way to avoid this extra space? The subplots_adjust function looks promising, but there's a lot of trial-and-error to adjust everything and I'm hoping to find a quicker automated solution using tight_layout. My MWE is: import numpy as np import matplotlib.pyplot as plt t = np.linspace(-5,5,100) x1 = np.sin(t) x2 = np.cos(t) x3 = np.sin(2*t) x4 = np.cos(2*t) x5 = 2*np.sin(t) x6 = 2*np.cos(t) x7 = np.sin(0.5*t) x8 = np.cos(0.5*t) fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(10, 7)) for r in range(4): for c in range(2): ax[r,c].plot(t,x1,label='preliminary 1') ax[r,c].plot(t,x2,label='preliminary 2') ax[r,c].plot(t,x3,label='trial 1, result 1') ax[r,c].plot(t,x4,label='trial 1, result 2') ax[r,c].plot(t,x5,label='trial 1, result 6') ax[r,c].plot(t,x6,label='trial 4, result 1') ax[r,c].plot(t,x7,label='trial 12, result 2') ax[r,c].plot(t,x8,label='trial 15, result 2') ax[0,1].legend(loc='best', bbox_to_anchor = (0.3, -1.1, 1.2, 2)) plt.tight_layout() plt.savefig('myfig.png') plt.show()
If you use the newer Constrained Layout rather than tight_layout then you can easily add a figure legend at the top right. The changes are: (1) create the figure with constrained layout: fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(10, 7), layout='constrained'); (2) get hold of the legend handles and labels from a single axes: handles, labels = ax[0, 1].get_legend_handles_labels(); (3) use these handles and labels in a figure legend: fig.legend(handles, labels, loc='outside right upper'); (4) remove the calls to ax.legend and tight_layout.
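For completeness, a sketch of those steps applied to the question's MWE (data and labels copied from the question; 'outside right upper' is the constrained-layout location syntax used above):

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-5, 5, 100)
curves = [np.sin(t), np.cos(t), np.sin(2*t), np.cos(2*t),
          2*np.sin(t), 2*np.cos(t), np.sin(0.5*t), np.cos(0.5*t)]
names = ['preliminary 1', 'preliminary 2', 'trial 1, result 1', 'trial 1, result 2',
         'trial 1, result 6', 'trial 4, result 1', 'trial 12, result 2', 'trial 15, result 2']

# constrained layout instead of tight_layout
fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(10, 7), layout='constrained')
for r in range(4):
    for c in range(2):
        for y, lab in zip(curves, names):
            ax[r, c].plot(t, y, label=lab)

# one figure-level legend placed outside the grid of axes
handles, labels = ax[0, 1].get_legend_handles_labels()
fig.legend(handles, labels, loc='outside right upper')
plt.savefig('myfig.png')
plt.show()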
1
2
79,457,237
2025-2-21
https://stackoverflow.com/questions/79457237/fit-function-stops-after-epoch-1
I have implemented this function to fit the model def fit_model(model, X_train_sequence_tensor,Y_train_sequence_tensor, epochs, val_set, time_windows, scaler): X_column_list = [item for item in val_set.columns.to_list() if item not in ['date', 'user', 'rank','rank_group', 'counts', 'target']] X_val_set = val_set[X_column_list].round(2) X_val_set[X_val_set.columns] = scaler.transform(X_val_set[X_val_set.columns] ) X_val_sequence = get_feature_array(X_val_set , X_column_list, time_windows) X_val_sequence_tensor = tf.convert_to_tensor(X_val_sequence, dtype=tf.float32) Y_column_list = ['target'] Y_val_set = val_set[Y_column_list].round(2) Y_val_sequence = get_feature_array(Y_val_set , Y_column_list, time_windows) Y_val_sequence_tensor = tf.convert_to_tensor(Y_val_sequence, dtype=tf.float32) history = model.fit(X_train_sequence_tensor,Y_train_sequence_tensor, epochs, validation_data=(X_val_sequence_tensor, Y_val_sequence_tensor)) return model, history but when I call it as fitted_model, history = fit_model(model, X_train_sequence_tensor,Y_train_sequence_tensor, epochs=100, val_set=val_set, time_windows=90, scaler=scaler) it stops after the first epoch. It does not run for all the 100 as required. I tried to call it outside of the function call and it worked. `# Step 3.2 : Fit the model + We pass some validation for # monitoring validation loss and metrics # at the end of each epoch X_val_set = val_set[X_column_list].round(2) #X_val_set.values = scaler.transform(X_val_set.values) X_val_set[X_val_set.columns] = scaler.transform(X_val_set[X_val_set.columns] ) X_val_sequence = get_feature_array(X_val_set , X_column_list, 90) X_val_sequence_tensor = tf.convert_to_tensor(X_val_sequence, dtype=tf.float32) Y_val_set = val_set[Y_column_list].round(2) Y_val_sequence = get_feature_array(Y_val_set , Y_column_list, 90) Y_val_sequence_tensor = tf.convert_to_tensor(Y_val_sequence, dtype=tf.float32) training_history = cnn1d_bilstm_model.fit(X_train_sequence_tensor,Y_train_sequence_tensor, epochs=200, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch validation_data=(X_val_sequence_tensor, Y_val_sequence_tensor)) What am I doing wrong?
The problem is in how epochs is forwarded to Keras: inside the function, model.fit(X_train_sequence_tensor, Y_train_sequence_tensor, epochs, validation_data=...) passes epochs positionally, and the third positional parameter of Model.fit is batch_size, not epochs. So the call trains with batch_size=100 and the default epochs=1, which is why it stops after the first epoch, while your standalone version worked because it used the keyword argument epochs=200. Explicitly passing epochs=epochs ensures the value is used as intended. Here is updated code: def fit_model(model, X_train_sequence_tensor,Y_train_sequence_tensor, epochs, val_set, time_windows, scaler): X_column_list = [item for item in val_set.columns.to_list() if item not in ['date', 'user', 'rank','rank_group', 'counts', 'target']] X_val_set = val_set[X_column_list].round(2) X_val_set[X_val_set.columns] = scaler.transform(X_val_set[X_val_set.columns] ) X_val_sequence = get_feature_array(X_val_set , X_column_list, time_windows) X_val_sequence_tensor = tf.convert_to_tensor(X_val_sequence, dtype=tf.float32) Y_column_list = ['target'] Y_val_set = val_set[Y_column_list].round(2) Y_val_sequence = get_feature_array(Y_val_set , Y_column_list, time_windows) Y_val_sequence_tensor = tf.convert_to_tensor(Y_val_sequence, dtype=tf.float32) history = None # so the except branch can still return try: history = model.fit(X_train_sequence_tensor, Y_train_sequence_tensor, epochs=epochs, validation_data=(X_val_sequence_tensor, Y_val_sequence_tensor)) except Exception as e: print(f"Training stopped due to an error: {e}") return model, history fitted_model, history = fit_model(model, X_train_sequence_tensor,Y_train_sequence_tensor, epochs=100, val_set=val_set, time_windows=90, scaler=scaler) # Print Training History print("Training Completed Successfully!")
1
1
79,456,138
2025-2-21
https://stackoverflow.com/questions/79456138/is-it-correct-to-modify-django-db-connections-databases-dynamically-to-multipl
This is my first time developing a multi-tenant SaaS application in Django. In this SaaS each company has its own PostgreSQL database, and these databases are created dynamically when a company registers. I cannot predefine all databases in settings.DATABASES, as companies can register at any time without requiring a server restart. My current solution uses a Middleware to detect the company from the subdomain or a JWT token and then modify connections.databases at runtime to configure the connection to the company's database: import redis from django.db import connections from django.core.exceptions import ImproperlyConfigured from django.utils.connection import ConnectionDoesNotExist from rest_framework_simplejwt.authentication import JWTAuthentication from myapp.models import Company # Company model stored in the global database class CompanyDBMiddleware: def __init__(self, get_response): self.get_response = get_response self.jwt_authenticator = JWTAuthentication() self.cache = redis.Redis(host='localhost', port=6379, db=0) def __call__(self, request): company_db = self.get_database_for_company(request) if not company_db: raise ImproperlyConfigured("Could not determine the company's database.") # Register connection only if it does not exist in `connections.databases` if company_db not in connections.databases: connections.databases[company_db] = { 'ENGINE': 'django.db.backends.postgresql', 'NAME': company_db, 'USER': 'postgres', 'PASSWORD': 'your_password', 'HOST': 'localhost', 'PORT': '5432', 'CONN_MAX_AGE': 60, # To avoid opening and closing connections on each request } request.company_db = company_db response = self.get_response(request) # Close connection after the response try: connections[company_db].close() except ConnectionDoesNotExist: pass return response def get_database_for_company(self, request): subdomain = request.get_host().split('.')[0] company_db = None cache_key = f"company_db_{subdomain}" company_db = self.cache.get(cache_key) if company_db: return company_db.decode("utf-8") try: company = Company.objects.using('default').get(subdomain=subdomain, active=True) company_db = company.db_name self.cache.setex(cache_key, 300, company_db) # Cache the database name for 5 minutes return company_db except Company.DoesNotExist: return None My questions are: Is it correct to modify connections.databases dynamically on each request to handle multiple databases? Is there a better way to do this in Django without restarting the application when registering new databases? How does this practice affect performance in environments with load balancing and multiple Django instances? Would it be better to deploy a separate API per client in its own Django container? I am considering giving each client their own portal on a separate domain and only deploying their frontend in a container while keeping a centralized API. Is this approach more efficient? I appreciate any recommendations on best practices or potential issues with this approach.
You can use the django-tenants library. It's designed for this purpose. https://django-tenants.readthedocs.io/en/latest/
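To make the suggestion more concrete, here is a rough sketch of the usual django-tenants setup (note that django-tenants isolates tenants with PostgreSQL schemas rather than one database per company, and the app names "companies"/"shop" are invented; check the exact setting names against the linked docs):

# settings.py (sketch)
DATABASES = {
    "default": {
        "ENGINE": "django_tenants.postgresql_backend",
        "NAME": "saas_db",
        # ... credentials as usual
    }
}
DATABASE_ROUTERS = ("django_tenants.routers.TenantSyncRouter",)
MIDDLEWARE = [
    "django_tenants.middleware.main.TenantMainMiddleware",  # resolves the tenant from the hostname
    # ... the rest of your middleware
]
SHARED_APPS = ["django_tenants", "companies"]   # apps living in the public schema
TENANT_APPS = ["shop"]                          # apps duplicated per tenant schema
INSTALLED_APPS = SHARED_APPS + [a for a in TENANT_APPS if a not in SHARED_APPS]
TENANT_MODEL = "companies.Company"
TENANT_DOMAIN_MODEL = "companies.Domain"

# companies/models.py (sketch)
from django.db import models
from django_tenants.models import TenantMixin, DomainMixin

class Company(TenantMixin):
    name = models.CharField(max_length=100)
    auto_create_schema = True   # schema is created when the tenant is saved

class Domain(DomainMixin):
    pass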
1
1
79,456,808
2025-2-21
https://stackoverflow.com/questions/79456808/data-apparently-plotted-wrong-way-on-matplotlib
I am plotting a graph with date on the x axis and data on the y axis. However, the graph is completely wrong and I don't understand why... df['Date_TrailClean'] = pd.to_datetime(df['Date_TrailClean']) # x axis values x = df['Date_TrailClean'] # corresponding y axis values y = df['AdjTotItems'] fig, ax = plt.subplots() # plotting the points ax.plot(x, y) ax.xaxis.set_major_locator(mdates.MonthLocator(bymonth=(1, 7))) ax.xaxis.set_minor_locator(mdates.MonthLocator()) ax.set_ylabel(r'Adjusted Total Items') # function to show the plot plt.show() Which produces a graph like this, as if the data is plotted on the wrong axes? Data can be accessed here: https://docs.google.com/spreadsheets/d/10bf0dEUz5lvBjh4kWH8XXfX0yOzdU8NYjSzQBwNV4bk/edit?usp=sharing
import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates df = pd.read_excel('sample_data.xlsx') df['Date_TrailClean'] = pd.to_datetime(df['Date_TrailClean']) # Sort x-axis df = df.sort_values('Date_TrailClean') x = df['Date_TrailClean'] y = df['AdjTotItems'] fig, ax = plt.subplots(figsize=(10, 6)) ax.plot(x, y, color='blue', marker='o', label='Adjusted Total Items') ax.xaxis.set_major_locator(mdates.MonthLocator(bymonth=(1, 7))) ax.xaxis.set_minor_locator(mdates.MonthLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y')) # e.g., Jan 2025, Jul 2025 plt.xticks(rotation=45) ax.set_xlabel('Date') ax.set_ylabel(r'Adjusted Total Items') ax.set_title('Adjusted Total Items Over Time') ax.grid(True) plt.tight_layout() plt.show() As others pointed out, sorting the x-axis data gives you above plot.
2
2
79,455,366
2025-2-20
https://stackoverflow.com/questions/79455366/allow-enum-names-as-valid-inputs-with-pydantics-validate-call
This question asks about using the name of an enum when serializing a model. I want something like that except with the @validate_call decorator. Take this function foo(): from enum import Enum from pydantic import validate_call class Direction(Enum): NORTH = 0 EAST = 1 SOUTH = 2 WEST = 3 @validate_call def foo(d: Direction): print(d) I want all of these inputs to work: # These work >>> foo(0) Direction.NORTH >>> foo(Direction.EAST) Direction.EAST # These don't, but I want them to >>> foo('WEST') Direction.WEST >>> foo(' sOUtH ') # This would be great though not essential Direction.SOUTH What's the simplest way to do this? If it requires creating a function used as a BeforeValidator, I'd prefer that that function be generic. I have many foos and many enums, and I don't want a separate validator for handling each one. (At that point, it's easier to validate the type within the function itself instead of using Pydantic).
I can think of 2 options. The simplest solution would be to make Enum more flexible. from enum import Enum from pydantic import validate_call def normalize_enum_name(name: str) -> str: """A user-defined normalization function for enum names.""" # The results are only used as dictionary keys, so you can do whatever you like. return name.strip().upper() class FlexibleEnumNameMixin: @classmethod def _missing_(cls: type[Enum], value: object) -> Enum | None: enum_name_map = {normalize_enum_name(member.name): member for member in cls} if isinstance(value, str) and (normalized_name := normalize_enum_name(value)) in enum_name_map: return enum_name_map[normalized_name] return None class Direction(FlexibleEnumNameMixin, Enum): # Insert the mixin here. NORTH = 0 EAST = 1 SOUTH = 2 WEST = 3 @validate_call def foo(d: Direction): print(repr(d)) foo(0) # <Direction.NORTH: 0> foo(Direction.EAST) # <Direction.EAST: 1> foo("WEST") # <Direction.WEST: 3> foo(" sOUtH ") # <Direction.SOUTH: 2> foo("unknown") # ValidationError The good (or bad) thing about this approach is that your Enum class will also be able to accept such badly-formed input. print(repr(Direction(" sOUtH "))) # <Direction.SOUTH: 2> This might be a useful feature, but its scope is very wide and could cause unexpected bugs. A probably safer but slightly more complex solution would be to implement a custom validator. Below is a working example of a custom field before validator. from enum import Enum from typing import Annotated from pydantic import validate_call from pydantic_core import core_schema class FlexibleEnumNameValidator: @classmethod def __get_pydantic_core_schema__(cls, source_type: type[Enum], _): enum_name_map = {normalize_enum_name(member.name): member for member in source_type} def parse_as_enum(value: object): if isinstance(value, str) and (normalized_name := normalize_enum_name(value)) in enum_name_map: return enum_name_map[normalized_name] return value return core_schema.no_info_before_validator_function( parse_as_enum, schema=core_schema.enum_schema(source_type, list(source_type)), ) class Direction(Enum): # No need to modify your enum class! NORTH = 0 EAST = 1 SOUTH = 2 WEST = 3 @validate_call def foo(d: Annotated[Direction, FlexibleEnumNameValidator]): # Append the validator here. print(repr(d)) foo(0) # <Direction.NORTH: 0> foo(Direction.EAST) # <Direction.EAST: 1> foo("WEST") # <Direction.WEST: 3> foo(" sOUtH ") # <Direction.SOUTH: 2> foo("unknown") # ValidationError With this approach, you can explicitly specify which arguments you want to be flexible. Note that this validator can be used as a mixin for Enum as well. Unlike the mixin that overrides _missing_, this is only used by pydantic. class Direction(FlexibleEnumNameValidator, Enum): # Insert the validator here. NORTH = 0 EAST = 1 SOUTH = 2 WEST = 3 @validate_call def foo(d: Direction): print(repr(d)) foo(" sOUtH ") # <Direction.SOUTH: 2> print(repr(Direction(" sOUtH "))) # ValueError
2
1
79,456,337
2025-2-21
https://stackoverflow.com/questions/79456337/how-to-subtract-data-between-columns-that-have-same-subfix
I have a sample dataframe as below whose columns share the same suffixes 001, 002, 003. import pandas as pd import numpy as np branch_names = [f"Branch_{i}" for i in range(1, 11)] date_1 = '20241231' date_2 = '20250214' date_3 = '20250220' data = { 'Branch': branch_names, date_1 + '_001': np.random.randint(60, 90, 10), date_1 + '_002': np.random.randint(60, 90, 10), date_1 + '_003': np.random.randint(60, 90, 10), date_2 + '_001': np.random.randint(60, 90, 10), date_2 + '_002': np.random.randint(60, 90, 10), date_2 + '_003': np.random.randint(60, 90, 10), date_3 + '_001': np.random.randint(60, 90, 10), date_3 + '_002': np.random.randint(60, 90, 10), date_3 + '_003': np.random.randint(60, 90, 10) } # Convert to DataFrame df = pd.DataFrame(data) Now I want to subtract data between columns that have the same suffix, following the principle below: df['diff_1_001'] = df[date_3 + '_001'] - df[date_2 + '_001'] df['diff_2_001'] = df[date_3 + '_001'] - df[date_1 + '_001'] df['diff_1_002'] = df[date_3 + '_002'] - df[date_2 + '_002'] df['diff_2_002'] = df[date_3 + '_002'] - df[date_1 + '_002'] df['diff_1_003'] = df[date_3 + '_003'] - df[date_2 + '_003'] df['diff_2_003'] = df[date_3 + '_003'] - df[date_1 + '_003'] df As you can see, the columns share the same suffixes 001, 002, 003 but have different prefixes. I don't want to hard-code the 001, 002, 003 suffixes; the subtraction should happen automatically as described above.
You would typically use a MultiIndex here, which makes operations much easier than relying on substrings: # set "Branch" as index, convert columns to MultiIndex df2 = df.set_index('Branch') df2.columns = df2.columns.str.split('_', expand=True).rename(['date', 'id']) # perform the operation and join out = df2.join(df2[date_3].sub(df2.drop(columns=[date_3])) .rename(lambda x: f'diff_{x}', level=0, axis=1) ) Output: date 20241231 20250214 20250220 diff_20241231 diff_20250214 id 001 002 003 001 002 003 001 002 003 001 002 003 001 002 003 Branch Branch_1 82 72 68 62 89 86 89 64 77 7 -8 9 27 -25 -9 Branch_2 72 66 80 87 63 78 81 60 76 9 -6 -4 -6 -3 -2 Branch_3 84 63 70 79 63 72 61 71 63 -23 8 -7 -18 8 -9 Branch_4 89 82 82 85 67 63 72 62 84 -17 -20 2 -13 -5 21 Branch_5 84 89 71 83 69 69 62 65 87 -22 -24 16 -21 -4 18 Branch_6 63 65 65 81 69 70 62 81 68 -1 16 3 -19 12 -2 Branch_7 78 83 89 79 69 87 84 76 80 6 -7 -9 5 7 -7 Branch_8 75 71 88 74 83 73 61 68 64 -14 -3 -24 -13 -15 -9 Branch_9 63 60 75 80 63 67 65 89 76 2 29 1 -15 26 9 Branch_10 70 71 68 81 74 67 68 61 85 -2 -10 17 -13 -13 18 If needed you can always come back to a flat index later: out.columns = out.columns.map('_'.join) out.reset_index(inplace=True) Output: Branch 20241231_001 20241231_002 20241231_003 20250214_001 20250214_002 20250214_003 20250220_001 20250220_002 20250220_003 diff_20241231_001 diff_20241231_002 diff_20241231_003 diff_20250214_001 diff_20250214_002 diff_20250214_003 0 Branch_1 63 89 62 67 69 86 68 67 88 5 -22 26 1 -2 2 1 Branch_2 67 80 75 78 85 60 84 83 64 17 3 -11 6 -2 4 2 Branch_3 88 89 87 88 78 82 87 73 85 -1 -16 -2 -1 -5 3 3 Branch_4 63 62 81 71 60 76 89 86 60 26 24 -21 18 26 -16 4 Branch_5 78 65 67 79 87 70 87 77 70 9 12 3 8 -10 0 5 Branch_6 89 65 67 77 69 64 74 84 74 -15 19 7 -3 15 10 6 Branch_7 77 72 71 69 88 84 83 80 82 6 8 11 14 -8 -2 7 Branch_8 61 72 82 89 71 80 60 83 88 -1 11 6 -29 12 8 8 Branch_9 78 81 77 74 77 63 79 60 80 1 -21 3 5 -17 17 9 Branch_10 77 89 66 81 69 79 68 71 78 -9 -18 12 -13 2 -1
3
2
79,453,988
2025-2-20
https://stackoverflow.com/questions/79453988/applying-numpy-partition-to-a-multi-dimensional-array
I need to find the k smallest element within a np.array. In a simple case you would probably use np.partition. import numpy as np a = np.array([7, 4, 1, 0]) kth = 1 p = np.partition(a, kth) print(f"Partitioned array: {p}") print(f"kth's smallest element: {p[kth]}") Partitioned array: [0 1 4 7] kth's smallest element: 1 In my real use case, I need to apply the same technique to a multi-dimensional np.array. Let's take a 4-dim array as an example. The difficulty I am facing is that I need to apply different kths to each row of that array. (Hint: array-4d and kths are coming from earlier operations.) Here's the setup: array_4d = np.array( [ [ [ [4, 1, np.nan, 20, 11, 12], ], [ [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], ], [ [33, 4, 55, 26, 17, 18], ], ], [ [ [7, 8, 9, np.nan, 11, 12], ], [ [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], ], [ [13, 14, 15, 16, 17, 18], ], ], ] ) kths = np.array( [ [ [[1]], [[2]], [[0]], ], [ [[0]], [[2]], [[1]], ], ] ) print("4D array:") print(array_4d) print(f"Shape: {array_4d.shape}") print("kths array:") print(kths) print(f"Shape: {kths.shape}") 4D array: [[[[ 4. 1. nan 20. 11. 12.]] [[nan nan nan nan nan nan]] [[33. 4. 55. 26. 17. 18.]]] [[[ 7. 8. 9. nan 11. 12.]] [[nan nan nan nan nan nan]] [[13. 14. 15. 16. 17. 18.]]]] Shape: (2, 3, 1, 6) kths array: [[[[1]] [[2]] [[0]]] [[[0]] [[2]] [[1]]]] Shape: (2, 3, 1, 1) I need to apply the different kths (1, 2, 0, 0, 2, 1) to the respective row in the 4D array and find the respective smallest element at kth position. The expected result should probably look like this: array([[[[ 4.]], [[nan]], [[ 4.]]], [[[ 7.]], [[nan]], [[14.]]]]) EDIT: I am looking for a generalized solution. The input array could have any shape, with the exception that the second-to-last dimension (axis=-2) is always 1. For the kth array, the two last dimensions are always 1.
I would do the following: For simplified indexing, flatten everything except the axis of interest (if I understand you correctly, this is always the last axis), which produces an N×n-shaped array with N rows resulting from flattening and n columns representing the values along the axis of interest. Sort the values along the axis of interest. For the i-th of the N rows, pick the resulting element at index kths.ravel()[i], where kths.ravel() should be the N-element 1-d view of the given kths array. Reshape the result, minus the shape of the axis of interest. This could look as follows: import numpy as np # Given example nan = np.nan a = np.array([[[[ 4., 1., nan, 20., 11., 12.]], [[nan, nan, nan, nan, nan, nan]], [[33., 4., 55., 26., 17., 18.]]], [[[ 7., 8., 9., nan, 11., 12.]], [[nan, nan, nan, nan, nan, nan]], [[13., 14., 15., 16., 17., 18.]]]]) kths = np.asarray([1, 2, 0, 0, 2, 1]).reshape(a.shape[:-1]) # Proposed approach sorted_a = np.sort(a.reshape(-1, a.shape[-1]), axis=-1) # Step 1+2 result = sorted_a[np.arange(len(sorted_a)), kths.ravel()] # Step 3 result = result.reshape(kths.shape) # Step 4 print(result) Which produces: [[[ 4.] [nan] [ 4.]] [[ 7.] [nan] [14.]]] If you prefer, for your given example, a 2×3×1×1 (i.e. 4-d) result rather than the current 2×3×1 (i.e. 3-d) result, you can replace the last reshaping operation by result = result.reshape(*kths.shape, 1) Abandoned alternative As an alternative, I also tried producing sorted_a via sorted_a = np.partition(a.reshape(-1, a.shape[-1]), kth=range(np.max(kths) + 1), axis=-1) This was based on the idea that, at maximum, you need the entries up to the largest value in kths sorted. This, however, did not produce a speedup for me and, in fact, even slowed down the calculation significantly.
1
2
79,454,786
2025-2-20
https://stackoverflow.com/questions/79454786/detect-highlighted-text-in-docx
I'm trying to detect text that has a coloured background in an MS Word docx, to separate it from the "normal" text. from docx import Document ... # Load the document doc = Document(docx_path) highlighted_text = [] normal_text = [] # Iterate through all paragraphs for para in doc.paragraphs: # Iterate through all runs in the paragraph for run in para.runs: print(run.text + " - " + str(run.font.highlight_color)) # Check if the run has a highlight color set if run.font.highlight_color is not None: highlighted_text.append(run.text) print(f"Found highlighted text: '{run.text}' with highlight color: {run.font.highlight_color}") return highlighted_text However, in my test document it has only found grey highlights: These are the results from the print statement: Text (normal) - None Text in grey - GRAY_25 (16) Found highlighted text: 'Text in grey ' with highlight color: GRAY_25 (16) Text in yellow - None Text in green - None So I am not sure where I'm going wrong. I don't think the text has been shaded, as that is applied across a whole line. Addendum: It only works for grey for me - which I have highlighted in MS Office - however the other highlights, which are getting missed, have been done by someone else. This might have been done with an old copy of Office, or docx-compatible software, or some other method of highlighting the text that isn't "highlighting". Any ideas?
This script performs well for me: from docx import Document def extract_highlighted_text(docx_path): doc = Document(docx_path) highlighted_texts = [] for para in doc.paragraphs: for run in para.runs: if run.font.highlight_color is not None: highlighted_texts.append(run.text) return highlighted_texts docx_file = "text.docx" highlighted_texts = extract_highlighted_text(docx_file) print("Highlighted Texts:") for text in highlighted_texts: print(text) Highlighted text in Docx: Result:
2
2
79,454,687
2025-2-20
https://stackoverflow.com/questions/79454687/how-to-program-angled-movement-for-a-sonic-game
I've been trying to get a Sonic The Hedgehog game working in Pygame and it works for the most part. Sonic can run fast, jump across platforms like a normal platformer, gets affected by gravity, collects rings, etc. (check attached video) https://imgur.com/a/q1YrAXO However, I cannot for the life of me get Sonic to run at different angles and go up slopes and loops like in the real games. I am aware such a project doesn't exist in Pygame (for the public to access anyway) and making the precise, mathematically accurate scripts can be hard to do. Problematic code: (from levels.py) def update(self): self.sonic.update() self.camera.update(self.sonic) # Follow Sonic # Keep Sonic grounded self.sonic.grounded = False for tile in self.tile_group: if self.sonic.mask.overlap(tile.mask, (tile.rect.x - self.sonic.hitbox.x, tile.rect.y - self.sonic.hitbox.y)): self.sonic.Yvel = 0 self.sonic.grounded = True self.sonic.jumped = False self.sonic.angle = tile.angle break (from characters.py) if not self.grounded: self.Yvel += self.gravityforce # Apply gravity self.Yvel = min(self.Yvel, 10) # Cap max fall speed self.Xvel = self.groundSpeed else: # Adjusting speed with some trigonometry self.Yvel = self.groundSpeed * math.sin(self.angle * -1) self.Xvel = self.groundSpeed * math.cos(self.angle) self.x += self.Xvel self.rect.x = self.x # Update self.rect with the new x coordinate self.y += self.Yvel self.rect.y = self.y self.hitbox.x = self.x self.hitbox.y = self.y I am using Tiled for mapmaking by the way, I'm not sure if that helps in my situation. I've manually added an angle attribute to the tiles currently in use and given them unique angles such as 20, 45, 90, etc. (which is why Sonic suddenly changes angle and jumps/phases through floor at times). With all that being said, can anyone help or offer some advice? Literally anything will be appreciated.
You can use vector math so that the character follows the surface angle. Create a horizontal movement vector and then rotate it by the negative of the surface angle. That way when the character position is updated the movement should be aligned along the slope. e.g. if not self.grounded: # Apply gravity normally when in the air. self.Yvel += self.gravityforce self.Yvel = min(self.Yvel, 10) else: # On the ground, create a horizontal movement vector… speed = self.groundSpeed * move_direction # move_direction = -1, 0, or 1 speed_vector = pygame.math.Vector2(speed, 0) # ...then rotate it to follow the surface. # (Note: the minus sign adjusts for your coordinate system.) speed_vector = speed_vector.rotate(-self.angle) self.Xvel = speed_vector.x self.Yvel = speed_vector.y Example output of pygame script I wrote to show this working.
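As a small hedged check of the rotation step in the snippet above (not taken from the answer itself): pygame's Vector2.rotate takes an angle in degrees and applies the standard rotation matrix, so the sign convention can be verified in isolation before wiring it into the movement code.
import pygame  # only pygame.math is used; no display or init needed

speed_vector = pygame.math.Vector2(10, 0).rotate(-45)
print(round(speed_vector.x, 3), round(speed_vector.y, 3))  # 7.071 -7.071
# With screen coordinates (y grows downward), a negative y component moves the
# character up the slope, which is why the answer negates the surface angle.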
2
1
79,452,824
2025-2-19
https://stackoverflow.com/questions/79452824/python-polars-encoding-continous-variables-from-breakpoints-in-another-dataframe
The breakpoints data is the following: breakpoints = pl.DataFrame( { "features": ["feature_0", "feature_0", "feature_1"], "breakpoints": [0.1, 0.5, 1], "n_possible_bins": [3, 3, 2], } ) print(breakpoints) out: shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ features ┆ breakpoints ┆ n_possible_bins β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ═════════════════║ β”‚ feature_0 ┆ 0.1 ┆ 3 β”‚ β”‚ feature_0 ┆ 0.5 ┆ 3 β”‚ β”‚ feature_1 ┆ 1.0 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The df has two continous variables that we wish to encode according to the breakpoints DataFrame: df = pl.DataFrame( {"feature_0": [0.05, 0.2, 0.6, 0.8], "feature_1": [0.5, 1.5, 1.0, 1.1]} ) print(df) out: shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ feature_0 ┆ feature_1 β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ 0.05 ┆ 0.5 β”‚ β”‚ 0.2 ┆ 1.5 β”‚ β”‚ 0.6 ┆ 1.0 β”‚ β”‚ 0.8 ┆ 1.1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ After the encoding we should have the resulting DataFrame encoded_df: encoded_df = pl.DataFrame({"feature_0": [0, 1, 2, 2], "feature_1": [0, 1, 0, 1]}) print(encoded_df) out: shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ feature_0 ┆ feature_1 β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ 0 ┆ 0 β”‚ β”‚ 1 ┆ 1 β”‚ β”‚ 2 ┆ 0 β”‚ β”‚ 2 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ We can assume that the unique list of features in encoded_df are also available in breakpoints Labels should be an array: np.array([str(i) for i in range(n_possible_bins)]), assuming n_possible_bins is a positive integer. n_possible_bins may be different across features. All the encoding follows left_closed=False where the bins are defined as (breakpoint, next breakpoint] I know that Polars.Expr.cut() takes in breaks parameter as Sequence[float], but how do I pass in these breakpoints and labels from the breakpoints DataFrame effectively?
Given that breakpoints will most likely be a very small DataFrame, I think the simplest and most efficient solution is something like: import polars as pl breakpoints = pl.DataFrame( { "features": ["feature_0", "feature_0", "feature_1"], "breakpoints": [0.1, 0.5, 1], "n_possible_feature_brakes": [3, 3, 2], } ) df = pl.DataFrame( {"feature_0": [0.05, 0.2, 0.6, 0.8], "feature_1": [0.5, 1.5, 1.0, 1.1]} ) # Aggregate the breakpoints by feature feature_breaks = breakpoints.group_by("features").agg( pl.col("breakpoints").sort().alias("breaks") ) # For each feature, call `pl.cut` with the respective `breaks` result = df.select( pl.col(feat).cut(breaks, labels=[str(x) for x in range(len(breaks) + 1)]) for feat, breaks in feature_breaks.iter_rows() ) Output: >>> feature_breaks shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ features ┆ breaks β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════║ β”‚ feature_0 ┆ [0.1, 0.5] β”‚ β”‚ feature_1 ┆ [1.0] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ >>> result shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ feature_0 ┆ feature_1 β”‚ β”‚ --- ┆ --- β”‚ β”‚ cat ┆ cat β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ 0 ┆ 0 β”‚ β”‚ 1 ┆ 1 β”‚ β”‚ 2 ┆ 0 β”‚ β”‚ 2 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
2
79,453,722
2025-2-20
https://stackoverflow.com/questions/79453722/how-can-i-avoid-getting-the-wrong-answer-when-calculating-with-njit-in-python
In order to improve the speed of my code in Python I use a njit library from numba. For the number 94906267, which I use in my calculations, I get a wrong answer. At the same time, if I do not use njit, my code gives a correct answer. Here are two examples. from numba import njit @njit def main_with_njit(): a = 94906267 print(f'main_with_njit: a**2 = {a ** 2}') # -> a**2 = 9007199515875288 # the type is int64 main_with_njit() def main_without_njit(): a = 94906267 print(f'main_without_njit: a**2 = {a ** 2}') # -> a**2 = 9007199515875289 # the type is <class 'int'> main_without_njit() How can I use njit without getting a wrong answer? I use Python 3.11.3 with PyCharm 2023.1.2 (Community Edition) on Windows, 64 bit, 0.61.0 version of numba. I can reproduce the problem originally described by @Roshan N using the math library. import math print(math.pow(94906267, 2)) # -> 9007199515875288.0
First things first. This error is only possible by first converting the operands to 64-bit floating point number. The reason for this is that 64-bit floating point numbers do not have the precision to represent 9007199515875289, and the result is instead rounded to 9007199515875288. You have not done anything to suggest that you want the operands to be floating point numbers. This suggests that the behaviour is a bug somewhere down the numba / llvm chain. Indeed, if you look at the LLVM generated code you can see that the code transforms a ** 2 in to a * a. However, for some reason it converts the result to a double and then back again before returning it. As pointed out by @ken, this behaviour is not seen on all versions of python. Both 3.10 and 3.12 work for me. It is only 3.11 that produced the erroneous result. You can view the LLVM generated code by doing: import numba @numba.njit def f(a: int) -> int: return a ** 2 f(94906267) # NB. code is not generated unless you call the function args = (numba.int64,) code = f.inspect_llvm()[args] Python 3.11 LLVM Code Snippet %.185.i = mul nsw i64 %arg.a, %arg.a %.224.i = sitofp i64 %.185.i to double %.229.i = fptosi double %.224.i to i64 store i64 %.229.i, i64* %retptr, align 8 You can see that it first does the multiplication, but then for some reason converts the result to a 64-bit floating point number (sitofp i64 %.185.i to double, ie. sitofp = signed integer to floating point) and then back again (fptosi double %.224.i to i64), before returning it. Python 3.12 LLVM Code Snippet %.191.i = mul nsw i64 %arg.a, %arg.a store i64 %.191.i, i64* %retptr, align 8
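As a hedged supplement (plain Python, no numba involved): the rounding described above can be reproduced directly, since the exact square 9007199515875289 lies above 2**53 and is therefore not representable as a float64.
exact = 94906267 ** 2       # Python int arithmetic: 9007199515875289
print(exact)                # 9007199515875289
print(exact > 2 ** 53)      # True, so float64 can no longer represent every integer here
print(float(exact))         # 9007199515875288.0 -- rounded to the nearest representable value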
1
5
79,454,104
2025-2-20
https://stackoverflow.com/questions/79454104/using-match-statement-with-a-class-in-python-3
Can somebody explain why in the following code Two matches? >>> class One: ... pass ... >>> class Two: ... pass ... >>> a = One() >>> >>> match a.__class__: ... case Two: ... print("is two") ... is two >>>
It's important to remember that match-case is not designed to be used as switch-case. It's created for Structural Pattern Matching, and the syntax is suited for that purpose. Here you can check various possible patterns that may be given for case statements. One that matches what you used is Capture Pattern, which always succeeds. In order to match members of a class, you should use Class Patterns: class One: def __init__(self, arga): self.arga = arga class Two: def __init__(self, argb): self.argb = argb a = One(arga = 1) b = Two(argb = 2) match a: case One(): print("is one") case Two(): print("is two") match b: case One(): print("is one") case Two(): print("is two") Note: One() in case statement is not resolved as simply creating instance of One and matching against it. If that were the case, we would get a TypeError as One.__init__ requires an argument.
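To make the capture-pattern point above concrete, here is a small hedged sketch (not part of the original answer) showing that the bare name in case Two: is bound to the subject rather than compared against the class:
class One:
    pass

class Two:
    pass

a = One()

match a.__class__:
    case Two:  # capture pattern: matches anything and rebinds the name Two
        print("is two")

print(Two is One)  # True -- the name Two now refers to the class One, the matched subject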
3
4
79,453,531
2025-2-20
https://stackoverflow.com/questions/79453531/using-a-python-library-installed-in-virtual-environment-from-script-running-in-s
There is a Python library that only wants to be installed in a virtual environment, but how do I import the library into scripts running in my standalone application, which does not run in a virtual environment? I'm writing a Delphi/Lazarus application using the Python4Delphi and Python4Lazarus components to run Python scripts.
There are two ways you can try to fix this: In your Delphi/Lazarus application, you could configure Python4Delphi to use the Python interpreter from your virtual environment instead of the system Python. This way, it will automatically have access to all packages installed in that virtual environment. PythonEngine1.DllPath := 'PATH'; You can add the venv's package directory into your standalone application like: import sys import os venv_path = r"C:\path\to\your\venv\Lib\site-packages" if venv_path not in sys.path: sys.path.append(venv_path) # Now you can import your library import your_library
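A hedged variation on the second option: site.addsitedir() adds the directory and also processes any .pth files inside it, which a plain sys.path.append does not do. The path and library name below are placeholders carried over from the answer, not verified values.
import site

venv_site_packages = r"C:\path\to\your\venv\Lib\site-packages"  # adjust to your venv layout
site.addsitedir(venv_site_packages)

import your_library  # hypothetical library name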
1
2
79,448,827
2025-2-18
https://stackoverflow.com/questions/79448827/python-cv2-imshow-memory-leak-on-macos
I believe I am witnessing a memory leak in the following Python code that calls OpenCV functions. Can you reproduce this? Why does this happen? How can I work around it or fix it? My environment: macOS (10.12 and 10.13) Python 3.8.10 OpenCV 3.4.18 NumPy 1.24.4 I've now also tested it on another machine in Python 3.9.1 with OpenCV 4.10.0 and NumPy 2.02 and experience the same behavior. I believe it's linked to cv2.imshow. With each loop iteration (i.e. every time the image updates) the memory usage of Python increases by an amount relative to the size of the image. Adding small = cv2.resize(img,(640,360)) and changing the imshow line to cv2.imshow('Image',small) makes the RAM increase by a much smaller amount. By inserting a continue statement above the imshow call, and thereby skipping imshow altogether, makes the RAM stay more or less unchanged throughout the loop. So, I guess this isolates the problem to cv2.imshow. import cv2, numpy as np flag = False while True: img = np.zeros((2160,3840,3),np.uint8) if flag: img = cv2.circle(img, (1920,1080),128,(255,255,255),-1) flag = not flag cv2.imshow('Image',img) k = cv2.waitKey() cv2.destroyAllWindows() if k == 27: break
Thanks to user Christoph Rackwitz, the cause of the problem was identified as cv2.destroyAllWindows(). It seems to be present only on Mac systems and a bug report was filed, but apparently, there aren't currently any plans on fixing it.
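Since no fix is planned, one possible mitigation (an assumption based on the diagnosis above, not a confirmed workaround from the bug report) is to keep a single window alive for the whole loop and call cv2.destroyAllWindows() only once at the end:
import cv2
import numpy as np

cv2.namedWindow('Image')  # create the window once, outside the loop
flag = False
while True:
    img = np.zeros((2160, 3840, 3), np.uint8)
    if flag:
        img = cv2.circle(img, (1920, 1080), 128, (255, 255, 255), -1)
    flag = not flag
    cv2.imshow('Image', img)
    k = cv2.waitKey()
    if k == 27:
        break
cv2.destroyAllWindows()  # destroy only once, after the loop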
2
1
79,453,687
2025-2-20
https://stackoverflow.com/questions/79453687/pandas-merge-single-column-with-double-column-dataframe-without-commas-for-singl
I have 2 DataFrames with different columns and want to merge them into a CSV without a comma for the one having a single column. How can we remove the comma for the one having a single column? import pandas as pd # 1st DataFrame with single column pd_title = pd.DataFrame(['Category: A', '']) # 2nd DataFrame with double columns data = [ ["Date", "Value"], ['2025-01-01', 50], ['2025-01-02', 40], ['2025-01-03', 45] ] result = pd_title._append(data).reset_index(drop=True) result.to_csv('/content/test.csv', index=False, header=False) The result from the code: The result I want:
There is no direct way to do this in pandas since you're using rows of data as header. You could however convert to CSV string and post-process it: import re with open('/content/test.csv', 'w') as f: f.write(re.sub(',*\n,+\n', '\n\n', result.to_csv(index=False, header=False))) A better option would be to first create the output file with the header, then export a dataframe with normal data/header and append it to the file with the mode='a' of to_csv: filename = '/content/test.csv' with open(filename, 'w') as f: f.write('Category: A\n\n') df = pd.DataFrame(data[1:], columns=data[0]) # Date Value # 0 2025-01-01 50 # 1 2025-01-02 40 # 2 2025-01-03 45 df.to_csv(filename, index=False, mode='a') Output: Category: A Date,Value 2025-01-01,50 2025-01-02,40 2025-01-03,45
1
2
79,450,492
2025-2-19
https://stackoverflow.com/questions/79450492/how-does-the-epsilon-parameter-behave-in-scipy-interpolate-rbfinterpolator
I've been trying to port some code from using scipy.interpolate.Rbf to scipy.interpolate.RBFInterpolator. However I have the impression that the epsilon parameter has a different behavior in the latter -- in fact in my tests it seems like at least with the multiquadric kernel I can vary this parameter by multiple orders of magnitude with no appreciable change in output -- and from the existing scipy documentation it is not clear to me how this should be working (it describes the smoothing for RBFInterpolator very nicely, but only the Rbf documentation seems to explicitly show how epsilon enters the kernel functions as a scale parameter). To demonstrate this phenomenon, I have the following test code -- admittedly not a MWE since I made use of ROOT for visualizing the output. import sys import numpy as np import scipy import ROOT as rt def true_func(x,y,sigma,zscale): # first Gaussian, at center g1 = zscale * np.exp(-0.5 * (np.square(x) + np.square(y)) / np.square(sigma)) # second Gaussian, offset height_scale = 0.5 xp = x - 3. * sigma g2 = height_scale * zscale * np.exp(-0.5 * (np.square(xp) + np.square(y)) / np.square(sigma)) # add a couple sharper peaks positions = [(0,-2 * sigma), (0, -1 * sigma), (0, 2 * sigma)] spikes = 0 pow = 1.1 sig = sigma / 10 height_scale = 2. for pos in positions: xp = x - pos[0] yp = y - pos[1] spikes += height_scale * zscale * np.exp(-0.5 * (np.power(np.abs(xp),pow) + np.power(np.abs(yp),pow)) / np.power(sig,pow)) return g1 + g2 + spikes def test(new=False): N = 15 xscale = 100 xlin = np.linspace(-xscale*N,xscale*N,2 * N) ylin = np.linspace(-xscale*N,xscale*N,2 * N) x,y = np.meshgrid(xlin,ylin) # generate our z values rng = np.random.default_rng() zscale = 10 sigma = xscale * N / 4 z = true_func(x,y,sigma,zscale) z += 0.1 * zscale * rng.uniform(size=z.shape) xf = x.flatten() yf = y.flatten() zf = z.flatten() # Create two interpolators with different values of epsilon, # keep everything else the same between them. 
basis = 'multiquadric' rbf_dict = {} epsilon_vals = [0.1, 1000] if(new): for epsilon in epsilon_vals: rbf_dict[epsilon] = scipy.interpolate.RBFInterpolator( np.vstack((xf,yf)).T, zf, kernel=basis, epsilon=epsilon ) else: for epsilon in epsilon_vals: rbf_dict[epsilon] = scipy.interpolate.Rbf( xf,yf, zf, kernel=basis, epsilon=epsilon ) # now evaluate the two interpolators on the grid points = np.stack((x.ravel(), y.ravel()), axis=-1) if(new): evals = {key:val(points) for key,val in rbf_dict.items()} else: evals = {key:val(x,y) for key,val in rbf_dict.items()} diffs = {} for i,(key,val) in enumerate(evals.items()): if(i == 0): continue diffs[key] = (val - evals[epsilon_vals[0]]) print(np.max(diffs[key])) # now plot things dims = (1600,1200) c = rt.TCanvas('c1','c1',*dims) c.Divide(2,2) c.cd(1) true_graph = rt.TGraph2D(len(zf),xf,yf,zf) true_graph.SetName('true_graph') true_graph.Draw('SURF2Z') if(new): true_graph.SetTitle("scipy.interpolate.RBFInterpolator Test") else: true_graph.SetTitle("scipy.interpolate.Rbf Test") true_graph.GetXaxis().SetTitle("x") true_graph.GetYaxis().SetTitle("y") true_graph.GetZaxis().SetTitle("z") true_graph.SetNpx(80) true_graph.SetNpy(80) # now draw the two interpolations with the largest difference in epsilon interp1 = rt.TGraph2D(len(zf),xf,yf,evals[epsilon_vals[0]]) interp1.SetName('interp1') interp2 = rt.TGraph2D(len(zf),xf,yf,evals[epsilon_vals[-1]]) interp2.SetName('interp2') interp1.SetLineColor(rt.kRed) interp2.SetLineColor(rt.kGreen) # interp2.SetLineWidth(2) interp2.SetLineStyle(rt.kDotted) for g in (interp1, interp2): g.SetNpx(80) g.SetNpy(80) interp1.Draw('SAME SURF1') interp2.Draw('SAME SURF1') c.cd(2) diff_graph = rt.TGraph2D(len(zf),xf,yf,diffs[epsilon_vals[-1]]) diff_graph.SetName('diff_graph') diff_graph.SetTitle('Difference between interpolations, epsilon #in [{}, {}]'.format(epsilon_vals[0],epsilon_vals[-1])) diff_graph.Draw('SURF1') rt.gPad.SetLogz() c.cd(3) interp1.Draw('CONTZ') interp1.SetTitle('Interpolation with epsilon = {}'.format(epsilon_vals[0])) c.cd(4) interp2.Draw('CONTZ') interp2.SetTitle('Interpolation with epsilon = {}'.format(epsilon_vals[-1])) c.Draw() if(new): c.SaveAs('c_new.pdf') else: c.SaveAs('c_old.pdf') return def main(args): test(new=False) test(new=True) if(__name__=='__main__'): main(sys.argv) This produces the following output plots: Output when using scipy.interpolate.Rbf Output when using scipy.interpolate.RBFInterpolator Maybe I am running into the consequences of some other changes to the RBF method, but it seems odd to me that the results are so different -- and that with using the scipy.interpolate.RBFInterpolator, I am seeing basically no difference between two interpolations using very different values of epsilon. I've taken a glance at what I think is the relevant scipy source code but so far have not figured out what is going on. Any help or advice would be much-appreciated. Thank you!
However I have the impression that the epsilon parameter has a different behavior in the latter It is different. Specifically, Rbf's epsilon divides distance, and RBFInterpolator multiplies it. Here is a source explaining why RBFInterpolator changes this from Rbf: epsilon scales the RBF input as r*epsilon rather than r/epsilon, which is consistent with RBF literature (for example: https://www.sciencedirect.com/science/article/pii/S0898122107002210) (Source.) If I change your RBFInterpolator call like this: rbf_dict[epsilon] = scipy.interpolate.RBFInterpolator( np.vstack((xf,yf)).T, zf, kernel=basis, epsilon=1/epsilon ) I now get much more similar results between the two methods. Here is a comparison between large/small epsilon for old and new methods. and that with using the scipy.interpolate.RBFInterpolator, I am seeing basically no difference between two interpolations using very different values of epsilon. The issue here is connected with the first issue. You're not trying an epsilon value which is small enough. If the epsilon value is too large, then varying it will have no effect: the multiquadratic kernel is not scale-free, but it is approximately scale-free for large values of r. (Here is a plot showing that multiquadratic is approximately -r for large r. The function -r is one of the scale-free functions.) Here's the full test code. I changed epsilon, and also replaced the plotting code that used ROOT with pyplot. import sys import numpy as np import scipy from mpl_toolkits.mplot3d import Axes3D # Axes3D import has side effects, it enables using projection='3d' in add_subplot import matplotlib.pyplot as plt import random import numpy as np # import ROOT as rt def true_func(x,y,sigma,zscale): # first Gaussian, at center g1 = zscale * np.exp(-0.5 * (np.square(x) + np.square(y)) / np.square(sigma)) # second Gaussian, offset height_scale = 0.5 xp = x - 3. * sigma g2 = height_scale * zscale * np.exp(-0.5 * (np.square(xp) + np.square(y)) / np.square(sigma)) # add a couple sharper peaks positions = [(0,-2 * sigma), (0, -1 * sigma), (0, 2 * sigma)] spikes = 0 pow = 1.1 sig = sigma / 10 height_scale = 2. for pos in positions: xp = x - pos[0] yp = y - pos[1] spikes += height_scale * zscale * np.exp(-0.5 * (np.power(np.abs(xp),pow) + np.power(np.abs(yp),pow)) / np.power(sig,pow)) return g1 + g2 + spikes def test(new=False): N = 15 xscale = 100 xlin = np.linspace(-xscale*N,xscale*N,2 * N) ylin = np.linspace(-xscale*N,xscale*N,2 * N) x,y = np.meshgrid(xlin,ylin) # generate our z values rng = np.random.default_rng() zscale = 30 sigma = xscale * N / 4 z = true_func(x,y,sigma,zscale) z += 0.1 * zscale * rng.uniform(size=z.shape) xf = x.flatten() yf = y.flatten() zf = z.flatten() # Create two interpolators with different values of epsilon, # keep everything else the same between them. 
basis = 'multiquadric' rbf_dict = {} epsilon_vals = [0.1, 1000] if(new): for epsilon in epsilon_vals: rbf_dict[epsilon] = scipy.interpolate.RBFInterpolator( np.vstack((xf,yf)).T, zf, kernel=basis, epsilon=1/epsilon ) else: for epsilon in epsilon_vals: rbf_dict[epsilon] = scipy.interpolate.Rbf( xf,yf, zf, kernel=basis, epsilon=epsilon ) # now evaluate the two interpolators on the grid points = np.stack((x.ravel(), y.ravel()), axis=-1) if(new): evals = {key:val(points) for key,val in rbf_dict.items()} else: evals = {key:val(x,y) for key,val in rbf_dict.items()} diffs = {} for i,(key,val) in enumerate(evals.items()): if(i == 0): continue diffs[key] = (val - evals[epsilon_vals[0]]) print(np.max(diffs[key])) for epsilon in epsilon_vals: fig = plt.figure() plt.title(f"new {new}, epsilon {epsilon}") ax = fig.add_subplot(111, projection='3d') interp = rbf_dict[epsilon] if new: z = interp(points).reshape(x.shape) else: z = interp(x, y) ax.plot_surface(x, y, z) plt.show() def main(): test(new=False) test(new=True) if(__name__=='__main__'): main()
3
3
79,452,813
2025-2-19
https://stackoverflow.com/questions/79452813/python-polars-how-to-apply-function-across-multiple-cols
How to extend this df = df.select( pl.col("x1").map_batches(custom_function).alias("new_x1") ) to something like df = df.select( pl.col("x1","x2").map_batches(custom_function).alias("new_x1", "new_x2") ) Or the way to go is doing it one by one df = df.select( pl.col("x1").map_batches(custom_function).alias("new_x1") pl.col("x2").map_batches(custom_function).alias("new_x2") )
The syntax df.select( pl.col("x1", "x2").some_method_chain() ) is equivalent to df.select( pl.col("x1").some_method_chain(), pl.col("x2").some_method_chain(), ) Especially, your example is almost correct, but fails on the last call to pl.Expr.alias in the method chain [...].alias("new_x1", "new_x2"). You basically try to set the name of each expression to "new_x1", "new_x2". This issue can be fixed using pl.Expr.name.prefix. df.select( pl.col("x1", "x2").map_batches(custom_function).name.prefix("new_") )
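A minimal runnable sketch of that equivalence; custom_function here is a stand-in for the one in the question, which is not shown:
import polars as pl

def custom_function(s: pl.Series) -> pl.Series:
    return s * 2  # hypothetical batch-wise transformation

df = pl.DataFrame({"x1": [1, 2, 3], "x2": [10, 20, 30]})
out = df.select(
    pl.col("x1", "x2").map_batches(custom_function).name.prefix("new_")
)
print(out.columns)  # ['new_x1', 'new_x2']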
2
2
79,452,715
2025-2-19
https://stackoverflow.com/questions/79452715/how-can-i-make-cuts-into-a-numerical-column-based-on-a-categorical-column
I have a very large dataset (about 10^7 rows and 1000 columns) and need to make cuts into one of the columns, for training/validation separation, with the bins changing based on another column. I am pretty new to Python and am using this function: SEGMENT is either A, B or C, and DATE is what I am cutting (yes, it is a numerical column, I know it looks terrible but it was not my choice), with different bins for different values of SEGMENT. cuts = { "A": {"cut":[0,20240101,20240801,20241201,20250000], "class":["out", "training", "validation", "out"]}, "B": {"cut":[0,20230701,20240701,20241201,20250000], "class":["out", "training", "validation", "out"]}, "C": {"cut":[0,20230701,20240701,20250101,20250201], "class":["out", "training", "validation", "out"]} } def divisions(row): rules = cuts[row["SEGMENT"]] return pd.cut([row["DATE"]], bins=rules["cut"], labels=rules["class"], right=False, ordered=False)[0] df["CLASS"] = df.apply(divisions, axis=1) This seems to work but has been insanely slow even on samples of less than 0.1% of the actual dataset. How can I improve this? All I need is this CLASS column, to check if the training and validation datasets show similar behaviors. I am not yet doing the actual modeling.
You could use a groupby.apply to handle all rows of a SEGMENT simultaneously: df['CLASS'] = (df.groupby('SEGMENT', group_keys=False)['DATE'] .apply(lambda x: pd.cut(x, bins=cuts[x.name]['cut'], labels=cuts[x.name]['class'], ordered=False)) ) Example output: SEGMENT DATE CLASS 0 A 20240102 training 1 A 20241215 out 2 A 20231201 out 3 B 20240102 training 4 B 20241215 out 5 C 20231201 training 6 C 20240102 training 7 C 20241215 validation Testing on 10K rows, this is ~1000x faster that the original approach: # groupby 3.68 ms Β± 76.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) # original 3.34 s Β± 10.2 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) Testing on 1M rows, this takes about half a second: 529 ms Β± 277 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
2
2
79,452,360
2025-2-19
https://stackoverflow.com/questions/79452360/pandas-list-dates-to-datetime
I am looking to convert a column with dates in a list [D, M, Y] to a datetime column. The below works but there must be a better way? new_df = pd.DataFrame({'date_parts': [[29, 'August', 2024], [28, 'August', 2024], [27, 'August', 2024]]}) display(new_df) ## Make new columns with dates new_df = pd.concat([new_df, new_df['date_parts'].apply(pd.Series)], axis=1).rename(columns={0:'D', 1:'M', 2:'Y'}) month_map = { 'January':1, 'February':2, 'March':3, 'April':4, 'May':5, 'June':6, 'July':7, 'August':8, 'September':9, 'October':10, 'November':11, 'December':12 } ## make datetime column new_df['release_date'] = pd.to_datetime(dict(year=new_df.Y, month=new_df.M.apply(lambda x: month_map[x]), day=new_df.D), format='%d-%B-%Y') new_df.drop(columns=['D', 'M', 'Y']) ## Input date_parts 0 [29, August, 2024] 1 [28, August, 2024] 2 [27, August, 2024] ## Output date_parts release_date 0 [29, August, 2024] 2024-08-29 1 [28, August, 2024] 2024-08-28 2 [27, August, 2024] 2024-08-27
Just combine the parts into a single string, and pass to to_datetime: new_df['release_date'] = pd.to_datetime(new_df['date_parts'] .apply(lambda x: '-'.join(map(str, x))), format='%d-%B-%Y') Output: date_parts release_date 0 [29, August, 2024] 2024-08-29 1 [28, August, 2024] 2024-08-28 2 [27, August, 2024] 2024-08-27 You could also convert the list of list to DataFrame with day/month/year columns: new_df['release_date'] = pd.to_datetime( pd.DataFrame( new_df['date_parts'].to_list(), index=new_df.index, columns=['day', 'month', 'year'], ).replace({'month': month_map}) )
3
3
79,452,237
2025-2-19
https://stackoverflow.com/questions/79452237/how-does-pd-where-work-with-callables-as-parameters
The basics of using Pandas where with callables seems simple. np.random.seed(0) df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D']) df["test"] = range(1,9) def MyBool(x): print(1) return ( x > 0 ) def MyFunc(x1): print(1) return x1['A'] df.where( cond = lambda x: MyBool(x), other = lambda x: MyFunc(x) , ) In the code above, I am replacing the values of all columns with the value of column A whenever the value of the col is less than 0. Note, I know I don't need to use callables for this simple example. Based on my analysis, this is what is happening under the hood. First, MyFunc is evaluated where the argument is the df itself. This returns a 8x1 df (=A) Second, the MyBool is evaluated which returns a 8x5 boolean df. Third, (not sure about this last step) for all entries (i,j) where MyBool returned False, the value of the i'th row of the output of MyFunc is used to replace the current value of the df. This leads me on to my question: how does this extend to the cases when MyFunc returns a dataframe with several columns and rows? How does the function determine which entries need to be replaced and with which values? For illustrative purposes, suppose now that we want to divide B and C by 2 when test is equal to 5. The code I have provided below works but I don't quite understand how it determines which entries are to be replaced and with which values. MyBool still returns a one dimensional vector but MyFunc returns a dataframe. If the previous logic I explained was correct, then shouldn't it replace each False entry with the dataframe? Indeed, if this were the case, the resulting dataframe should be bigger than the input df. I've been reading the documentation and playing with different examples but can't figure this one out. def MyBool(x): output = x.test != 5 return output def MyFunc(x1): x1.loc[ x1.test == 5, ["B", "C"] ] /= 2 return x1 df.where( cond = lambda x: MyBool(x), other = lambda x: MyFunc(x.copy()), axis = 0 )
The logic is quite simple, for each False in the output of the cond callable, the matching value in the result of other will be used as replacement. If other is a scalar, this value is used. The matching value is identified by position if the callable returns an array, and by alignment for a DataFrame: df = pd.DataFrame({'A': [1, 0], 'B': [0, 1]}) df.where(cond=lambda x: x==0, other=lambda x: pd.DataFrame({'A': [10, 20]}, index=[1, 0])) # A B # 0 20 0.0 # 1 0 NaN df.where(cond=lambda x: x==0, other=lambda x: [[7,8],[9,10]]) # A B # 0 7 0 # 1 0 10 Therefore, your MyFunc functions should return a scalar, a DataFrame (that will be aligned), or an array of the same shape as the input. You can modify it to broadcast the values to all columns: def MyFunc(x1): print(1) return np.broadcast_to(x1['A'].to_frame().values, df.shape) Example: df = pd.DataFrame([[1, -1, 0, 0], [2, 0, -1, 0], [3, 0, 0, -1]], columns=['A', 'B', 'C', 'D']) def MyBool(x): return x >= 0 def MyFunc(x): return np.broadcast_to(x['A'].to_frame().values, df.shape) out = df.where(cond=MyBool, other=MyFunc) # A B C D # 0 1 1 0 0 # 1 2 0 2 0 # 2 3 0 0 3 Note that the callables should NOT modify the DataFrame in place. This should be avoided: def MyFunc(x1): x1.loc[ x1.test == 5, ["B", "C"] ] /= 2 return x1 and could be replaced by a simple (without using where): df.loc[df['test'] == 5, ['B', 'C']] /= 2
2
2
79,448,337
2025-2-18
https://stackoverflow.com/questions/79448337/using-re-sub-and-replace-with-overall-match
I was just writing a program where I wanted to insert a newline after a specific pattern. The idea was to match the pattern and replace with the overall match (i.e. capture group \0) and \n. s = "abc" insert_newline_pattern = re.compile(r"b") re.sub(insert_newline_pattern, r"\0\n", s) However the output is a\x00\nc, reading \0 as a null character. I know that I can "simply" rewrite this as: s = "abc" insert_newline_pattern = re.compile(r"(b)") re.sub(insert_newline_pattern, r"\1\n", s) which outputs the desired ab\nc with the idea of wrapping the overall match into group \1 and substituting this. See also a Python regex101 demo. Is there a way to access the overall match in any way, similar to this PCRE regex101 demo in Python?
You can use the form \g<0> in Python for the zeroeth group (or overall match from the pattern) which would be the same as $0 in PCRE (alternatively, in PCRE, you can use $& or \0 in replacement strings). s="abc" insert_newline_pattern=re.compile(r"b") re.sub(insert_newline_pattern,r"\g<0>\n",s) Result: 'ab\nc' This form is to avoid the potential ambiguity of \10 used in PCRE. Is that the tenth backreference or the first followed by a literal '0'? It is documented under the docs for re.sub. Note: If you are referring to a match group, such as in a lambda in the replacement or as the result of re.search, you can also use .group(0) for the same function: s="abc123efg456hij" re.sub(r"[a-z](?!$)",lambda m: rf"{m.group(0)}\t",s) # Python 3.9+ you can use m[0] instead of m.group(0) Result: a\tb\tc\t123e\tf\tg\t456h\ti\tj Here is an example of using re.Match Object from re.search (or other re method that produces a match object): >>> s='abc123' >>> m=re.search(r'\d', s) >>> m[0] # what matched? $0 in PCRE '1' >>> m.span() # Where? (3, 4) >>> m.re # With what regex? re.compile('\\d') If you want to see what re.sub would use as a string result, you can use match.expand: >>> m.expand(r"\g<0>\n") '1\n'
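A tiny hedged illustration of the disambiguation point above: to append a literal "0" after the first group you need \g<1>0, because \10 would be parsed as a reference to group 10.
import re

print(re.sub(r"(b)", r"\g<1>0", "abc"))  # 'ab0c' -- group 1 followed by a literal 0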
5
8
79,447,890
2025-2-18
https://stackoverflow.com/questions/79447890/how-can-i-seperate-the-fringes-that-have-been-calculated-with-findpeaks
I would like to separate the fringes (the red curved lines) that I have calculated with scipy findpeaks. How can I achieve it? I would like to separate them and store them in a text file. import numpy as np from scipy.signal import find_peaks import matplotlib.pyplot as plt X = np.load('X.npy') Y = np.load('Y.npy') P_new = np.load('P_new.npy') # Example data: Replace with your actual data T = np.real(P_new) # Simulating a 2D matrix # Plot the original image plt.figure() plt.imshow(T, cmap='jet', aspect='auto') plt.colorbar() plt.title('Original Image') plt.show() # Peak detection parameters min_peak_dist = 3 # Minimum distance between peaks min_peak_h = 3e-5 # Minimum peak height x_coords = [] y_coords = [] # Process all rows from top to bottom for k in range(T.shape[0]): tex = T[k, :] peaks, _ = find_peaks(tex, distance=min_peak_dist, height=min_peak_h) if peaks.size > 0: x_coords.extend(X[k, peaks]) y_coords.extend(Y[k, peaks]) # Plot detected peaks plt.figure() plt.scatter(x_coords, y_coords, color='r', s=2) # 's' controls marker size plt.xlabel('X Coordinate') plt.ylabel('Y Coordinate') plt.title('Detected Fringes in Real-World Coordinates') plt.colorbar() plt.show() data for plotting is here What I want to see is just separate fringes like here: previously I could do it with the cv2 method contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) but this is based on finding the edges, which is not as rigorous for my case as finding peaks of the actual data. Can someone help with this?
I had reasonable success using HDBSCAN. I first ran find_peaks to find peaks along y (rather than along x) - these are the black lines. I then clipped the image to within the blue square, and clustered the points using HDBSCAN. The final clusterings are coloured. To plot a particular cluster, you could use: view_cluster = 20 plt.figure(figsize=(3, 5)) plt.scatter(*clustered[view_cluster].T, marker='.', s=1) plt.xlabel('x') plt.ylabel('y') plt.title(f'Fringe {view_cluster}') plt.grid(linestyle=':') Solution Load data and preprocess: import numpy as np from scipy.signal import find_peaks import matplotlib.pyplot as plt from matplotlib.patheffects import withTickedStroke # # Example data: Replace with your actual data # X_orig = np.load('X.npy') Y_orig = np.load('Y.npy') P_new = np.load('P_new.npy') n_rows, n_cols = P_new.shape T = np.real(P_new) # Simulating a 2D matrix #Resample X and Y to match resolution of P_new # NB. I use the min/max values for simplicity x_axis = np.linspace(X_orig.min(), X_orig.max(), num=n_cols) y_axis = np.linspace(Y_orig.min(), Y_orig.max(), num=n_rows) X_grid, Y_grid = np.meshgrid(x_axis, y_axis) # Peak detection parameters min_peak_dist = 3 # Minimum distance between peaks min_peak_h = 3e-5 # Minimum peak height coords = [] # Process all cols (x axis) for col_ix in range(n_cols): peaks, _ = find_peaks(T[:, col_ix], distance=min_peak_dist, height=min_peak_h) if not len(peaks): continue x_coord = x_axis[col_ix] peaks_y = y_axis[peaks] coords.extend([(x_coord, peak_y) for peak_y in peaks_y]) coords = np.array(coords) x_coords, y_coords = coords.T # #Clip unwanted regions # ignore_left_of = 5 ignore_below = 25 # ignore_shorter_than = 10 coords_clipped = coords[(x_coords > ignore_left_of) & (y_coords > ignore_below)] Cluster and visualise: # # HDBSCAN # from sklearn.cluster import HDBSCAN coord_labels = HDBSCAN().fit_predict(coords_clipped) clustered = [ coords_clipped[coord_labels==cluster_id] for cluster_id in np.unique(coord_labels) ] # # Visualise # plt.figure() #Original image plt.matshow( T, extent=[x_axis.min(), x_axis.max(), y_axis.min(), y_axis.max()], cmap='Greys_r', origin='lower', vmax=5e-4, aspect='auto' ) plt.gcf().set_size_inches(12, 6) plt.colorbar(extend='both') #Detected peaks plt.scatter(x_coords, y_coords, color='black', marker='.', s=0.2, alpha=0.4) # #Clip regions plt.axvline(ignore_left_of, path_effects=[withTickedStroke(angle=135)]) plt.axhline(ignore_below, path_effects=[withTickedStroke(angle=-135)]) #Clusters clustered = sorted(clustered, key=lambda members: members[:, 1].min()) for cluster_members in clustered: plt.plot(*cluster_members[::30].T, alpha=0.2, zorder=0, lw=8) #Formatting plt.xlabel('x') plt.ylabel('y') plt.show()
2
2
79,450,950
2025-2-19
https://stackoverflow.com/questions/79450950/pandas-indexing
Can someone explain what is meant by Both loc and iloc [in Pandas] are row-first, column-second. This is the opposite of what we do in native Python, which is column-first, row-second. Because I thought when accessing arrays or lists of lists, the first index always represents the row: matrix = [ [1,2,3], # row 1, index 0 [4,5,6], # row 2, index 1 [7,8,9] # row 3, index 2 ] print(matrix[1][2]) # Output = 6
I would say that statement is incorrect or, at least, very misleading and likely to cause confusion. Both iloc and loc are row-first & column-second, but this is exactly the same as how indexing works in native Python and your example. First index refers to the row, and the second index refers to the column. Your example in pandas using iloc/loc also outputs 6: import pandas as pd data = [ [1, 2, 3], # row 0 [4, 5, 6], # row 1 [7, 8, 9] # row 2 ] df = pd.DataFrame(data) print(df.iloc[1, 2]) # Output: 6 There has already been some discussion about this exact statement in this Kaggle discussion, but to me is still not clear to what the author was referring to. As per Siraz Naorem understanding, the statement might be referring to the creation of DataFrames from column-oriented data, e.g. dictionaries, where each list or array represents a column, not a row. If we replicate again your example but create the DataFrame from a dictionary like this: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}) print(df) # Output: # A B C # 0 1 4 7 # 1 2 5 8 # 2 3 6 9 Now, when we access index [1,2], we do not get 6: print(df.iloc[1, 2]) # Output: 8 print(df.iloc[2, 1]) # Output: 6 In this case, the row and column indices might seem reversed and may lead to the mistaken idea that indexing is different: iloc[1,2] give us now 8, and we have to use iloc[2,1] to get the value 6. However, iloc/loc indexing itself has not changed, is still row-first & column-second, and what is different is the structure of the DataFrame, since pandas internally has treated each list in the dictionary as a column.
1
2
79,450,810
2025-2-19
https://stackoverflow.com/questions/79450810/pandas-groupby-multiple-columns-aggregate-some-columns-add-a-count-column-of-e
The data I am working with: data (140631115432592), ndim: 2, size: 3947910, shape: (232230, 17) VIN (1-10) object County object City object State object Postal Code float64 Model Year int64 Make object Model object Electric Vehicle Type object Clean Alternative Fuel Vehicle (CAFV) Eligibility object Electric Range float64 Base MSRP float64 Legislative District float64 DOL Vehicle ID int64 Vehicle Location object Electric Utility object 2020 Census Tract float64 dtype: object VIN (1-10) County City State Postal Code ... Legislative District DOL Vehicle ID Vehicle Location Electric Utility 2020 Census Tract 0 2T3YL4DV0E King Bellevue WA 98005.0 ... 41.0 186450183 POINT (-122.1621 47.64441) PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA) 5.303302e+10 1 5YJ3E1EB6K King Bothell WA 98011.0 ... 1.0 478093654 POINT (-122.20563 47.76144) PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA) 5.303302e+10 2 5UX43EU02S Thurston Olympia WA 98502.0 ... 35.0 274800718 POINT (-122.92333 47.03779) PUGET SOUND ENERGY INC 5.306701e+10 3 JTMAB3FV5R Thurston Olympia WA 98513.0 ... 2.0 260758165 POINT (-122.81754 46.98876) PUGET SOUND ENERGY INC 5.306701e+10 4 5YJYGDEE8M Yakima Selah WA 98942.0 ... 15.0 236581355 POINT (-120.53145 46.65405) PACIFICORP 5.307700e+10 Data in csv format: VIN (1-10),County,City,State,Postal Code,Model Year,Make,Model,Electric Vehicle Type,Clean Alternative Fuel Vehicle (CAFV) Eligibility,Electric Range,Base MSRP,Legislative District,DOL Vehicle ID,Vehicle Location,Electric Utility,2020 Census Tract 2T3YL4DV0E,King,Bellevue,WA,98005,2014,TOYOTA,RAV4,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,103,0,41,186450183,POINT (-122.1621 47.64441),PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA),53033023604 5YJ3E1EB6K,King,Bothell,WA,98011,2019,TESLA,MODEL 3,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,220,0,1,478093654,POINT (-122.20563 47.76144),PUGET SOUND ENERGY INC||CITY OF TACOMA - (WA),53033022102 5UX43EU02S,Thurston,Olympia,WA,98502,2025,BMW,X5,Plug-in Hybrid Electric Vehicle (PHEV),Clean Alternative Fuel Vehicle Eligible,40,0,35,274800718,POINT (-122.92333 47.03779),PUGET SOUND ENERGY INC,53067011902 JTMAB3FV5R,Thurston,Olympia,WA,98513,2024,TOYOTA,RAV4 PRIME,Plug-in Hybrid Electric Vehicle (PHEV),Clean Alternative Fuel Vehicle Eligible,42,0,2,260758165,POINT (-122.81754 46.98876),PUGET SOUND ENERGY INC,53067012332 5YJYGDEE8M,Yakima,Selah,WA,98942,2021,TESLA,MODEL Y,Battery Electric Vehicle (BEV),Eligibility unknown as battery range has not been researched,0,0,15,236581355,POINT (-120.53145 46.65405),PACIFICORP,53077003200 3C3CFFGE1G,Thurston,Olympia,WA,98501,2016,FIAT,500,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,84,0,22,294762219,POINT (-122.89166 47.03956),PUGET SOUND ENERGY INC,53067010802 5YJ3E1EA4J,Snohomish,Marysville,WA,98271,2018,TESLA,MODEL 3,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,215,0,39,270125096,POINT (-122.1677 48.11026),PUGET SOUND ENERGY INC,53061052808 5YJ3E1EA3K,King,Seattle,WA,98102,2019,TESLA,MODEL 3,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,220,0,43,238776492,POINT (-122.32427 47.63433),CITY OF SEATTLE - (WA)|CITY OF TACOMA - (WA),53033006600 1N4AZ0CP5E,Thurston,Yelm,WA,98597,2014,NISSAN,LEAF,Battery Electric Vehicle (BEV),Clean Alternative Fuel Vehicle Eligible,84,0,2,257246118,POINT (-122.60735 46.94239),PUGET SOUND ENERGY INC,53067012421 Filtering and grouping: filt = (data["Model Year"] >= 2018) & (data["Electric Vehicle Type"] == 
"Battery Electric Vehicle (BEV)") data = data[filt].groupby(["State", "Make"], sort=False, observed=True, as_index=False).agg( avg_electric_range=pd.NamedAgg(column="Electric Range", aggfunc="mean"), oldest_model_year=pd.NamedAgg(column="Model Year", aggfunc="min")) Currently it yields the following table: State Make avg_electric_range oldest_model_year 0 WA TESLA 52.143448 2018 1 WA NISSAN 60.051874 2018 <snip> How do I add a Count column which shows the count of each group which is used for further filtering? Note: rule out apply as everything should stay in Pandas'land.
Your question would benefit from a minimal reproducible example. That said, the count doesn't really depend on a particular column, as long as you don't have missing values, thus pick any one that matches this criterion and add another aggregation (you can use one of the grouping columns or Model Year since you know it must be a valid number): out = (data[filt].groupby(["State", "Make"], sort=False, observed=True, as_index=False) .agg(avg_electric_range=pd.NamedAgg(column="Electric Range", aggfunc="mean"), oldest_model_year=pd.NamedAgg(column="Model Year", aggfunc="min"), count=pd.NamedAgg(column="Model Year", aggfunc="count"), ) ) Example output: State Make avg_electric_range oldest_model_year count 0 WA X 0.5 2018 2 1 WA Y 3.0 2018 3
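As a hedged addendum: if every column might contain missing values, aggfunc="size" counts all rows in each group regardless of NaN, so no particular column has to be trusted. A small self-contained illustration:
import numpy as np
import pandas as pd

df = pd.DataFrame({"State": ["WA", "WA", "WA"],
                   "Make": ["X", "X", "Y"],
                   "Electric Range": [10.0, np.nan, 30.0]})
out = df.groupby(["State", "Make"], as_index=False).agg(
    count_non_null=pd.NamedAgg(column="Electric Range", aggfunc="count"),
    n_rows=pd.NamedAgg(column="Electric Range", aggfunc="size"),
)
print(out)
# count_non_null is 1 for both groups, while n_rows is 2 for (WA, X) and 1 for (WA, Y)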
2
1
79,450,409
2025-2-19
https://stackoverflow.com/questions/79450409/how-to-parse-xls-data-including-merged-cells-using-python-pandas
How can I parse xls data into this structure? Both rows and columns have merged cells, and simply using df.index.to_series().ffill() cannot handle that. { "time": "time", "category": "A", "variety": "A1", "specification": "S1", "unit": "U1", "average": 1.25, "region": "RegionA", "market": "MarketA", "price": 1.1, }
I figured out this solution: def test_xls_parse(): file_path = 'test.xls' df = pd.read_excel(file_path, engine='xlrd') time_label = df.iloc[0, 0] categories = df.iloc[1, 2:] varieties = df.iloc[2, 2:] specifications = df.iloc[3, 2:] units = df.iloc[4, 2:] averages = df.iloc[5, 2:] regions = df.iloc[6:, 0].ffill() markets = df.iloc[6:, 1] prices = df.iloc[6:, 2:] result = [] for i in range(len(categories)): for j in range(len(regions)): obj = { "date": time_label, "category": categories.iloc[i], "variety": varieties.iloc[i], "specification": specifications.iloc[i], "unit": units.iloc[i], "average": None if averages.iloc[i] == '-' else float(averages.iloc[i]), "region": regions.iloc[j], "market": markets.iloc[j], "price": None if prices.iloc[j, i] == '-' else float(prices.iloc[j, i]) } result.append(obj) return pd.DataFrame(result)
2
1
79,450,672
2025-2-19
https://stackoverflow.com/questions/79450672/python-pandas-multi-column-sorting-problem
I want to sort the first column according to the internal algorithm, and then sort the second column according to the custom sorting method The test data is as follows: A B Ankang Shaanxi Ankang Southeast Baoding Anguo Baoding Anguo Northeast Baoding Anguo Baoding Anguo Southeast Changsha Hunan Changsha Hunan Bright Ankang Shaanxi Ankang Northeast Baoding Anguo Baoding Anguo Southwest Baoding Anguo Baoding Anguo Upper Ankang Shaanxi Ankang Southwest Luoyang Henan Luoyang Henan Upper Baoding Anguo Baoding Anguo Northwest Changsha Hunan Changsha Hunan Lower Ankang Shaanxi Ankang Southwest Upper Ankang Shaanxi Ankang Northwest I hope to be able to arrange it as shown below The first column is sorted together using pandas' built-in string sorting algorithm, and then the second column is sorted using the custom order algorithm of northeast, southeast, northwest, southwest,upper. I used pandas' sort_values() method to sort. I had no problem sorting a single column, but it always failed when I tried to sort two columns together. import pandas as pd data={'A':['Ankang Shaanxi','Baoding Anguo','Baoding Anguo','Changsha Hunan','Ankang Shaanxi', 'Baoding Anguo','Baoding Anguo','Ankang Shaanxi','Luoyang Henan','Baoding Anguo', 'Changsha Hunan','Ankang Shaanxi','Ankang Shaanxi'], 'B':['Ankang Southeast','Baoding Anguo Northeast','Baoding Anguo Southeast','Changsha Hunan Bright','Ankang Northeast','Baoding Anguo Southwest','Baoding Anguo Upper','Ankang Southwest','Luoyang Henan Upper','Baoding Anguo Northwest','Changsha Hunan Lower','Ankang Southwest Upper','Ankang Northwest']} df=pd.DataFrame(data) def sort_fun(x): return x.split()[-1] df['sort_value']=df['B'].apply(sort_fun) sort_dicts={'Northeast':0,'Southeast':1,'Northwest':2,'Southwest':3,'Upper':4} df.sort_values(by=['A','sort_value'],key=lambda x :x.map(sort_dicts)) I referred to it Pandas: How to custom-sort on multiple columns? A B Ankang Shaanxi Ankang Northeast Ankang Shaanxi Ankang Southeast Ankang Shaanxi Ankang Northwest Ankang Shaanxi Ankang Southwest Ankang Shaanxi Ankang Southwest Upper Baoding Anguo Baoding Anguo Northeast Baoding Anguo Baoding Anguo Southeast Baoding Anguo Baoding Anguo Northwest Baoding Anguo Baoding Anguo Southwest Baoding Anguo Baoding Anguo Upper Changsha Hunan Changsha Hunan Bright Changsha Hunan Changsha Hunan Lower Luoyang Henan Luoyang Henan Upper
The basic logic you can use for column 'B': Series.str.split + access str[-1] + Series.map

df['B'].str.split().str[-1].map(sort_dicts)

0     1.0
1     0.0
2     1.0
3     NaN
4     0.0
5     3.0
6     4.0
7     3.0
8     4.0
9     2.0
10    NaN
11    4.0
12    2.0
Name: B, dtype: float64

Couple of ways to sort using this logic:

Option 1

Chain calls to df.sort_values:

# note 'B' first
def sort_fun(s):
    return s.str.split().str[-1].map(sort_dicts)

out = (df.sort_values('B', key=sort_fun)
         .sort_values('A', ignore_index=True)
       )

Option 2

Adjust sort_fun to only affect col 'B':

def sort_fun2(s, name):
    if s.name == name:  # for 'B'
        return s.str.split().str[-1].map(sort_dicts)
    return s

out2 = df.sort_values(['A', 'B'], key=lambda x: sort_fun2(x, 'B'), ignore_index=True)

Indeed, your original approach also applied the function passed to key to df['A'] (i.e., df['A'].map(sort_dicts)), leading to a series with NaN values to "sort".

Option 3

Use np.lexsort as suggested by @mozway in the linked post:

# again: note 'B' goes first
import numpy as np

sort = np.lexsort((df['B'].str.split().str[-1].map(sort_dicts), df['A']))
out3 = df.iloc[sort].reset_index(drop=True)

Output

out
                 A                        B
0   Ankang Shaanxi         Ankang Northeast
1   Ankang Shaanxi         Ankang Southeast
2   Ankang Shaanxi         Ankang Northwest
3   Ankang Shaanxi         Ankang Southwest
4   Ankang Shaanxi   Ankang Southwest Upper
5    Baoding Anguo  Baoding Anguo Northeast
6    Baoding Anguo  Baoding Anguo Southeast
7    Baoding Anguo  Baoding Anguo Northwest
8    Baoding Anguo  Baoding Anguo Southwest
9    Baoding Anguo      Baoding Anguo Upper
10  Changsha Hunan    Changsha Hunan Bright
11  Changsha Hunan     Changsha Hunan Lower
12   Luoyang Henan      Luoyang Henan Upper

Equality check with desired output:

data2 = {'A': ['Ankang Shaanxi', 'Ankang Shaanxi', 'Ankang Shaanxi', 'Ankang Shaanxi', 'Ankang Shaanxi',
               'Baoding Anguo', 'Baoding Anguo', 'Baoding Anguo', 'Baoding Anguo', 'Baoding Anguo',
               'Changsha Hunan', 'Changsha Hunan', 'Luoyang Henan'],
         'B': ['Ankang Northeast', 'Ankang Southeast', 'Ankang Northwest', 'Ankang Southwest',
               'Ankang Southwest Upper', 'Baoding Anguo Northeast', 'Baoding Anguo Southeast',
               'Baoding Anguo Northwest', 'Baoding Anguo Southwest', 'Baoding Anguo Upper',
               'Changsha Hunan Bright', 'Changsha Hunan Lower', 'Luoyang Henan Upper']}
desired = pd.DataFrame(data2)

all(df.equals(desired) for df in [out, out2, out3])
# True
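One more variant, sketched here as my own assumption rather than part of the answer above: an ordered categorical can carry the custom order, which avoids writing a key function at all. The helper column name _key is made up for the example.

order = pd.CategoricalDtype(['Northeast', 'Southeast', 'Northwest', 'Southwest', 'Upper'], ordered=True)
out4 = (df.assign(_key=df['B'].str.split().str[-1].astype(order))  # unknown suffixes become NaN, sorted last
          .sort_values(['A', '_key'], ignore_index=True)
          .drop(columns='_key'))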
3
1
79,449,532
2025-2-18
https://stackoverflow.com/questions/79449532/python-polars-creating-new-columns-based-on-the-key-value-pair-of-a-dict-match
Sorry if the title is confusing. I'm pretty familiar with Pandas and think I have a solid idea of how I would do this there. Pretty much just brute-force iteration and index-based assignment for the new columns. I recently learned about Polars, though, and want to try it for the parallelization/speed and to stay fresh and up to date on my data skills. This is my first foray, and it's not been going great.

I have a dataframe, and one column of this frame is basically a tag list. Each cell in that column is a list of relevant tags. What I want to do is scan through those lists, row by row, and add a column by the name of a more general tag if the existing tag is in the cell.

For example, say I have a dataframe that looks like this:

Index  Person  Food Provided
1      Billy   Apple, Hot dog
2      Suzy    Celery, brownies

Then I also have a dictionary that looks like this:

foodTypes_dict = {'Apple':'Fruit', 'Hot dog':'Meat', 'Celery':'Vegetable', 'brownies':'Dessert'}

I would like to create a new column based on the food type that has a simple X or True or something if the "Food Provided" list contains the dict key. Something like:

Index  Person  Food Provided     Fruit  Vegetable  Meat  Dessert
1      Billy   Apple, Hot dog    X      None       X     None
2      Suzy    Celery, brownies  None   X          None  X

I've tried:

for key in foodTypes_dict.keys():
    my_df.with_columns((pl.col("Food Provided").str.contains(key)).alias(foodTypes_dict[key]))

This has finally gotten me away from syntax errors, which I was encountering with everything else I tried. It doesn't, however, seem to actually be working at all. Essentially, it doesn't seem to create any new columns whatsoever. I tried adding a my_df.glimpse() call during each iteration of the for loop, but the dataframe dimensions don't change. I do not get any syntax errors or otherwise. I am using Jupyter Notebook which can suppress some of them, but the cell runs and finishes nearly instantly, just not close to the expected output.

Any help would be appreciated. Thanks!
First of all, the large majority of Polars' DataFrame operations are not in place, so you must re-assign to the variable if updating in a loop.

Next, for the "Food Provided" column, you should use Polars' list data type. This works natively with Polars' other operations and prevents substring-like issues (e.g., pineapple vs apple, etc) arising from string containment checks. The list data type also makes it super easy to check if a particular value is in there.

Here's a solution that produces your expected output:

import polars as pl

my_df = pl.DataFrame({
    "index": [1, 2],
    "person": ["Billy", "Sally"],
    "Food Provided": [["Apple", "Hot dog"], ["Celery", "brownies"]]
})

food_types = {
    "Apple": "Fruit",
    "Celery": "Vegetable",
    "Hot dog": "Meat",
    "brownies": "Dessert"
}

my_df.with_columns(
    # When food is contained in the list of food provided
    pl.when(pl.col("Food Provided").list.contains(food))
    # Then a literal "X"
    .then(pl.lit("X"))
    # Implicit "None" by leaving out the "otherwise" block
    # Set the column name as the food type
    .alias(food_type)
    for food, food_type in food_types.items()
)

shape: (2, 7)
┌───────┬────────┬────────────────────────┬───────┬───────────┬──────┬─────────┐
│ index ┆ person ┆ Food Provided          ┆ Fruit ┆ Vegetable ┆ Meat ┆ Dessert │
│ ---   ┆ ---    ┆ ---                    ┆ ---   ┆ ---       ┆ ---  ┆ ---     │
│ i64   ┆ str    ┆ list[str]              ┆ str   ┆ str       ┆ str  ┆ str     │
╞═══════╪════════╪════════════════════════╪═══════╪═══════════╪══════╪═════════╡
│ 1     ┆ Billy  ┆ ["Apple", "Hot dog"]   ┆ X     ┆ null      ┆ X    ┆ null    │
│ 2     ┆ Sally  ┆ ["Celery", "brownies"] ┆ null  ┆ X         ┆ null ┆ X       │
└───────┴────────┴────────────────────────┴───────┴───────────┴──────┴─────────┘

Be sure to re-assign the result to a variable once you are done with the transformations.

Do note that this code will break once there is more than one food of a given type in the food_types dict. This is because Polars does not allow duplicate column names, which the code would create. At this point, consider switching the food type to be the key of the dict and have a list of foods as the values.

EDIT: Here is a solution with the food_types dict having the food types as keys, and a list of values. any_horizontal returns true when any condition in the for food in foods loop is true.

my_df = pl.DataFrame({
    "index": [1, 2, 3],
    "person": ["Billy", "Sally", "Bob"],
    "Food Provided": [["Apple", "Hot dog"], ["Celery", "brownies"], ["Spinach"]],
})

food_types = {
    "Fruit": ["Apple"],
    "Vegetable": ["Celery", "Spinach"],
    "Meat": ["Hot dog", "Chicken"],
    "Dessert": ["brownies", "Cake"],
}

my_df.with_columns(
    # When any food in the foods list is contained in the "Food Provided" column
    pl.when(pl.any_horizontal(
        pl.col("Food Provided").list.contains(food) for food in foods
    ))
    .then(pl.lit("X"))
    .alias(food_type)
    for food_type, foods in food_types.items()
)

That is a fair bit of Python looping, so here's another option. It uses replace to do a join-like operation and then pivots the food type.
If you know all the possible food types ahead of time, you can avoid pivot (available in the eager API only) and do a "lazy pivot" as described in the last example of the DataFrame.pivot docs.

my_df = pl.DataFrame({
    "index": [1, 2, 3],
    "person": ["Billy", "Sally", "Bob"],
    "Food Provided": [["Apple", "Hot dog"], ["Celery", "brownies"], ["Spinach"]],
})

food_types = {
    "Apple": "Fruit",
    "Celery": "Vegetable",
    "Hot dog": "Meat",
    "brownies": "Dessert",
    "Cake": "Dessert",
    "Chicken": "Meat",
    "Spinach": "Vegetable",
}

(
    my_df
    .with_columns(
        food_types=pl.col("Food Provided").list.eval(pl.element().replace(food_types)),
        # The value to use when the food type is pivoted
        value=pl.lit("X"),
    )
    .explode("food_types")
    .pivot("food_types", index=["index", "person", "Food Provided"])
)

All return the expected output.
3
4
79,447,988
2025-2-18
https://stackoverflow.com/questions/79447988/0-dimensional-array-problems-with-numpy-vectorize
numpy.vectorize conveniently converts a scalar function to a vectorized function that can be applied directly to arrays. However, when inputting a single value into the vectorized function, the output is a 0-dimensional array instead of the corresponding value type, which can cause errors when using the result elsewhere due to typing issues.

My question is: is there a mechanism in numpy that can resolve this problem by automatically converting the 0-dimensional array return value to the corresponding data type?

For explanation I'd give an example:

@np.vectorize ( excluded = ( 1, 2 ) )
def rescale (
        value: float,
        srcRange: tuple [ float, float ],
        dstRange: tuple [ float, float ] = ( 0, 1 ),
) -> float:
    srcMin, srcMax = srcRange
    dstMin, dstMax = dstRange
    t = ( value - srcMin ) / ( srcMax - srcMin )
    return dstMin + t * ( dstMax - dstMin )

When calling the function above with rescale ( 5, ( 0, 10 ) ), the return value is numpy.array(0.5) instead of just the value 0.5.

Currently I resolve this problem with a self-defined decorator:

def vectorize0dFix ( func ):
    def _func ( *args, **kwargs ):
        result = func ( *args, **kwargs )
        if isinstance ( result, np.ndarray ) and result.shape == ( ):
            return result.item ( )
        else:
            return result
    return _func

But if this problem does cause trouble, there should be a mechanism in numpy which properly deals with it. I wonder whether there is one, or why there isn't.
Short answer: You can unwrap 0-d results into scalars while keeping n-d results (n>0) by indexing with an empty tuple (). Better yet, I would try to avoid using @np.vectorize altogether – in general, but in particular with your given example, where vectorization is not necessary.

Long answer: Following these answers to related questions, by indexing with an empty tuple (), you can systematically unwrap 0-d arrays into scalars while keeping other arrays. So, using the @np.vectorized function rescale() from your question, you can post-process your results accordingly, for example:

with_scalar_input = rescale(5, (0, 10))[()]
with_vector_input = rescale([5], (0, 10))[()]
print(type(with_scalar_input))  # <class 'numpy.float64'>
print(type(with_vector_input))  # <class 'numpy.ndarray'>

I am not aware of any built-in NumPy mechanism that solves this edge case of @np.vectorize for you, so providing your own decorator is probably a viable way to go.

Custom scalar-unwrapping @vectorize decorator

Writing your own custom decorator that (a) accepts all arguments of and behaves exactly like @np.vectorize, but (b) appends the scalar unwrapping step, could look as follows:

from functools import wraps
import numpy as np

def vectorize(*wa, **wkw):
    def decorator(f):
        @wraps(f)
        def wrap(*fa, **fkw):
            return np.vectorize(f, *wa, **wkw)(*fa, **fkw)[()]
        return wrap
    return decorator

@vectorize(excluded=(1, 2))
def rescale(value, srcRange, dstRange=(0, 1)):
    srcMin, srcMax = srcRange
    dstMin, dstMax = dstRange
    t = (value - srcMin) / (srcMax - srcMin)
    return dstMin + t * (dstMax - dstMin)

with_scalar_input = rescale(5, (0, 10))
with_vector_input = rescale([5], (0, 10))
print(type(with_scalar_input))  # <class 'numpy.float64'>
print(type(with_vector_input))  # <class 'numpy.ndarray'>

If you don't care about docstring propagation (of which @functools.wraps takes care), the @vectorize decorator can be shortened to:

import numpy as np

vectorize = lambda *wa, **wkw: lambda f: lambda *fa, **fkw: \
    np.vectorize(f, *wa, **wkw)(*fa, **fkw)[()]

@vectorize(excluded=(1, 2))
def rescale(value, srcRange, dstRange=(0, 1)):
    srcMin, srcMax = srcRange
    dstMin, dstMax = dstRange
    t = (value - srcMin) / (srcMax - srcMin)
    return dstMin + t * (dstMax - dstMin)

with_scalar_input = rescale(5, (0, 10))
with_vector_input = rescale([5], (0, 10))
print(type(with_scalar_input))  # <class 'numpy.float64'>
print(type(with_vector_input))  # <class 'numpy.ndarray'>

Caution: All approaches using (), as proposed above, produce a new edge case: if the input is provided as a 0-d NumPy array, such as np.array(5), the result will also be unwrapped into a scalar. Likewise, you might have noticed that the scalar results are NumPy scalars, <class 'numpy.float64'>, rather than native Python scalars, <class 'float'>. If either of this is not acceptable for you, then more elaborate type checking or post-processing will be necessary.

Try to avoid @np.vectorize altogether

As a final note: Maybe try to avoid using @np.vectorize altogether in the first place, and try to write your code such that it works both with NumPy arrays and scalars. As to avoiding @np.vectorize: Its documentation states:

    The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.

As to adjusting your code accordingly: Your given function rescale() is a good example for writing code that works both with NumPy arrays and scalars correctly; in fact, it does so already, without any adjustments!
You just have to ensure that vector-valued input is given as a NumPy array (rather than, say, a plain Python list or tuple):

import numpy as np

def rescale(value, srcRange, dstRange=(0, 1)):
    srcMin, srcMax = srcRange
    dstMin, dstMax = dstRange
    t = (value - srcMin) / (srcMax - srcMin)
    return dstMin + t * (dstMax - dstMin)

with_scalar_input = rescale(5, (0, 10))
with_vector_input = rescale(np.asarray([5]), (0, 10))
print(type(with_scalar_input))  # <class 'float'>
print(type(with_vector_input))  # <class 'numpy.ndarray'>

Moreover, while producing exactly the same output for vector-type input¹, the @np.vectorized version is orders of magnitude slower:

import numpy as np
from timeit import Timer

def rescale(value, srcRange, dstRange=(0, 1)):
    srcMin, srcMax = srcRange
    dstMin, dstMax = dstRange
    t = (value - srcMin) / (srcMax - srcMin)
    return dstMin + t * (dstMax - dstMin)

vectorized = np.vectorize(rescale, excluded=(1, 2))

a = np.random.normal(size=10000)
assert (rescale(a, (0, 10)) == vectorized(a, (0, 10))).all()  # Same result?
print("Unvectorized:", Timer(lambda: rescale(a, (0, 10))).timeit(100))
print("Vectorized:", Timer(lambda: vectorized(a, (0, 10))).timeit(100))

On my machine, this produces about 0.003 seconds for the unvectorized version and about 0.8 seconds for the vectorized version. In other words: we have more than a 250× speedup with the given, unvectorized function for a given 10,000-element array, while (if used carefully, i.e. by providing NumPy arrays rather than plain Python sequences for vector-type inputs) the function already produces scalar outputs for scalar inputs and vector outputs for vector inputs!

I guess the code above might not be the code that you are actually trying to vectorize; but anyway: in a lot of cases, a similar approach is possible.

¹) Again, the case of a 0-d vector input is special here, but you might want to check that for yourself.
2
4
79,449,057
2025-2-18
https://stackoverflow.com/questions/79449057/confused-by-silent-truncation-in-polars-type-casting
I encountered some confusing behavior with polars type-casting (silently truncating floats to ints without raising an error, even when explicitly specifying strict=True), so I headed over to the documentation page on casting, and now I'm even more confused.

The text at the top of the page says:

    The function cast includes a parameter strict that determines how Polars behaves when it encounters a value that cannot be converted from the source data type to the target data type. The default behaviour is strict=True, which means that Polars will throw an error to notify the user of the failed conversion while also providing details on the values that couldn't be cast.

However, the code example immediately below (section title "Basic example") shows a df with a floats column taking values including 5.8 being truncated to int 5 during casting with the code pl.col("floats").cast(pl.Int32).alias("floats_as_integers"), i.e. without strict=False.

What am I misunderstanding here? The text seems to indicate that this truncation, with strict=True as default, should "throw an error," but the code example in the documentation (and my own polars code) throws no error and silently truncates values.
It is accepted in Python (and more generally) that casting a float to an int will truncate the float and not raise an exception. E.g. in Python:

>>> int(5.8)
5

Similarly, in Polars a float value can be converted to an int (by truncation), so the cast is not a failed conversion and strict=True has nothing to raise about.

For anyone else looking, this answer provides further detail / examples.
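A minimal sketch to see both behaviours side by side (my own example, not from the answer above; the overflow case reflects what strict=True is meant to catch, i.e. values that cannot be represented in the target type at all):

import polars as pl

s = pl.Series("floats", [1.0, 5.8])
print(s.cast(pl.Int32).to_list())   # [1, 5] -- truncation succeeds, no error
# pl.Series([300]).cast(pl.Int8)    # raises with the default strict=True: 300 does not fit in Int8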
2
4
79,448,603
2025-2-18
https://stackoverflow.com/questions/79448603/how-to-convert-a-pandas-dataframe-to-numeric-future-proof
Until now I used to convert all values in a pandas dataframe with combined numerical and string entries to numeric values if possible in one easy step, using .map and .to_numeric with "errors = 'ignore'". It worked perfectly, but after updating to the latest version of Pandas (2.2.3) I get a FutureWarning.

import pandas as pd

A = pd.DataFrame({
    'x' : ['1','2','3'],
    'y' : ['not_a_number','5',9999],
})  # example data

B = A.map(pd.to_numeric, errors = 'ignore')
# FutureWarning: errors='ignore' is deprecated and will raise in a future version.
# Use to_numeric without passing errors and catch exceptions explicitly instead
#   B = A.map(pd.to_numeric, errors = 'ignore')

How could I code this future proof in an elegant, vectorised way? I could not think of any solution that is not very cumbersome (looping over each individual entry of the dataframe).
When you use errors='ignore', to_numeric returns the original Series. As mentioned in the documentation:

    errors {'ignore', 'raise', 'coerce'}, default 'raise'
        If 'raise', then invalid parsing will raise an exception.
        If 'coerce', then invalid parsing will be set as NaN.
        If 'ignore', then invalid parsing will return the input.
        Changed in version 2.2. "ignore" is deprecated. Catch exceptions explicitly instead.

Catch the error explicitly if you want to keep the previous behavior:

def to_numeric(s):
    try:
        return pd.to_numeric(s, errors='raise')
    except ValueError:
        return s

A.apply(to_numeric)

NB. use apply rather than map for a vectorial operation.

Relevant issues: #54467, #59221.
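A quick sanity check of the wrapper on the example frame from the question (assuming the A defined there):

B = A.apply(to_numeric)
print(B.dtypes)
# x     int64   <- parsed successfully
# y    object   <- 'not_a_number' raises ValueError, so the original column is returned unchanged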
2
3
79,446,920
2025-2-18
https://stackoverflow.com/questions/79446920/why-gcd-is-needed-in-this-algorithm-finding-all-groups-of-three-points-are-colli
I was trying to solve this coding challenge: "Given an array of pairs, each pair (x, y), both integers, denoting the coordinates of a point in the Cartesian plane, find how many groups of three points are collinear."

It turns out this below is the correct algorithm:

def gcd(x, y):
    if y == 0:
        return x
    else:
        return gcd(y, x % y)

def how_many_3point_in_line( points ):
    ans = 0
    n = len( points )
    for i in range( n - 2 ):
        slopes = defaultdict( int )
        for j in range( i + 1, n ):
            dy, dx = points[i][1] - points[j][1], points[i][0] - points[j][0]
            gcd_num = gcd( dx, dy )
            slopes[( dy // gcd_num, dx // gcd_num )] += 1
        for _, occ in slopes.items():
            ans += math.comb( occ, 2 )
    return ans

Apparently the gcd is used to represent the slope; what issue does it address?
A clarification: Many thanks for fellow no comment's teaching; indeed the GCD in OP's posted solution is needed, and my original answer is wrong.

Taking points = [(0, 0), (999999997, 999999998), (999999998, 999999999)] as an example, the difference between 999999997/999999998 and 999999998/999999999 appears only after so many decimal digits that a floating point number cannot represent it, leading to 999999997/999999998 == 999999998/999999999 being evaluated as True. I also found this answer that nicely discusses the related topic.

This answer has been marked as accepted so I cannot delete it; I'll keep it here as a counterexample.

Original answer (wrong):

I guess the author of your code is trying to address the inaccuracy of floating point representation. But that worry is actually unnecessary given that both x and y are integers, e.g., 3.0/9.0 == 9.0/27.0 will give you True; I recently studied a related topic and can hence confirm this. The only case where you (may or may not) get trouble is when either the denominator or numerator is a floating point number, which does not apply in your question.

So, you can modify the solution to simply use the numeric slope, but you need to specially handle the case where dx is 0, otherwise you get a "divided by zero" exception.

def how_many_3point_in_line( points ):
    ans = 0
    n = len( points )
    for i in range( n - 2 ):
        slopes = defaultdict( int )
        numVertical = 0
        for j in range( i + 1, n ):
            dy, dx = points[i][1] - points[j][1], points[i][0] - points[j][0]
            if dx != 0:
                slopes[dy / dx] += 1
            else:
                numVertical += 1
        for _, occ in slopes.items():
            ans += math.comb( occ, 2 )
        ans += math.comb( numVertical, 2 )
    return ans
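To make the precision argument concrete, here is a small self-contained check of the failure mode described in the clarification above; the numbers are the ones quoted there, and the reduced() helper is mine, not part of either solution:

from math import gcd

a = 999999997 / 999999998
b = 999999998 / 999999999
print(a == b)                      # True: two distinct slopes collide as 64-bit floats

def reduced(dy, dx):
    g = gcd(dy, dx)
    return (dy // g, dx // g)

# The gcd-reduced integer pairs stay distinct, so the dictionary keys do not collide:
print(reduced(999999997, 999999998) == reduced(999999998, 999999999))   # False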
3
5
79,448,739
2025-2-18
https://stackoverflow.com/questions/79448739/flatten-a-multi-dimensional-list
Say we have a multi-dimensional list, but with random dimensions, like:

[
    [
        [1, 2, [3, 4]],
        [[5, 6], 7]
    ],
    [8, 9, [10]]
]

Is there any short way to flatten everything and just get the list [1, 2, ..., 10]?

I know there are solutions for lists of lists, such as loops or list comprehensions, but here we assume that we don't know the dimensions of the list, and that there can be different levels of nesting.
A recursive function can make quick work of this:

test = [
    [
        [1, 2, [3, 4]],
        [[5, 6], 7]
    ],
    [8, 9, [10]]
]

def flatten(l):
    for item in l:
        if isinstance(item, list):
            for i in flatten(item):
                yield i
        else:
            yield item

print(list(flatten(test)))

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

This function iterates through each item in the first layer of lists. If it finds something that isn't a list, it returns it with yield. If it finds another list, the function calls itself to iterate upon that list as well. What you end up with is each value inside every list returned back to the caller.
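A small stylistic variant, offered as a sketch rather than a correction: since Python 3.3 the inner loop can delegate with yield from, which behaves the same as the explicit for i in flatten(item): yield i above.

def flatten(l):
    for item in l:
        if isinstance(item, list):
            yield from flatten(item)   # delegate to the recursive generator
        else:
            yield item

print(list(flatten(test)))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]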
1
4
79,447,920
2025-2-18
https://stackoverflow.com/questions/79447920/subtracting-1-from-a-numpy-array-gives-two-different-answers
Why are the outputs in the two cases different? I am new to this library.

Case 1

import numpy as np
np.random.seed(2)
array = np.random.random((3,1))
print('Printing array : \n', array)
print('printing array - 1 : \n', array-1)

Output:

Printing array :
 [[0.4359949 ]
 [0.02592623]
 [0.54966248]]
printing array - 1 :
 [[-0.5640051 ]
 [-0.97407377]
 [-0.45033752]]

This is OK, as 1 is subtracted from each element.

Case 2

print('Printing array : \n', np.random.random((3,1))-1)

Output:

Printing array :
 [[-0.56467761]
 [-0.5796322 ]
 [-0.66966518]]

Why are the two outputs different? np.random.random((3,1)) should be the same in both cases (same seed), so subtracting 1 should produce the same output. What am I messing up? I ran the code and was expecting the same output in both cases.
The reason why you got different arrays has been explained elaborately by @Jon Skeet. One workaround is to define a function that packs the random seed together with the random-number generation, e.g.,

def runif(shape, seed=2):
    np.random.seed(seed)
    return np.random.random(shape)

for iter in range(2):
    print(f'Print array{iter}: \n {runif((3,1))-iter} \n')

and you will see

Print array0:
 [[0.4359949 ]
 [0.02592623]
 [0.54966248]]

Print array1:
 [[-0.5640051 ]
 [-0.97407377]
 [-0.45033752]]
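For completeness, a sketch of the same idea with NumPy's newer Generator API (my own addition; note that the numbers it draws differ from the legacy np.random.seed stream, but the reproducibility logic is identical):

import numpy as np

rng = np.random.default_rng(2)      # the seed lives inside this generator object
first = rng.random((3, 1))

rng = np.random.default_rng(2)      # re-create it with the same seed ...
second = rng.random((3, 1)) - 1     # ... so the same values are drawn before subtracting 1

print(first)
print(second)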
1
0