question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
79,491,828 | 2025-3-7 | https://stackoverflow.com/questions/79491828/run-a-simple-flask-web-socket-on-render | I'm new to Render , and I make a simple flask web socket I use these modules: ,flask_socketio ,flask ,gunicorn (to run my script on Render host) Here is my code for server side: from flask import Flask from flask_socketio import SocketIO , emit app = Flask("application") socket = SocketIO(app) @app.route("/") def home(): return "This is the server side !!" @socket.on("connect") def cl_con(): print("A new client connected !!") @socket.on("disconnect") def cl_dis(): print("A client disconnected !!") @socket.on("message") def message(data): emit("message",data) if __name__ == "__main__": socket.run(app) and my client side : import socketio as io import threading as th name = "" while (name == ""): name = input("Your name :") def message(): while True: msg = input() if (msg == "exit"): client.disconnect() print("Disconnected !!") exit(0) else: client.emit("message",f"{name}:{msg}") client = io.Client() client.connect("https://my_domain_name.onrender.com") client.emit("message",f"{name} join this chat !!") t = th.Thread(target=message) t.daemon = True t.start() @client.on("message") def msg(data): print(data) client.wait() I'm trying to connect to web socket by my client code which run on my computer , it works , but the problem is this: 1- after 10 second server side give me this error , and it continue showing this error every 5 second : **[2025-03-07 09:05:06 +0000] [74] [CRITICAL] WORKER TIMEOUT (pid:87) [2025-03-07 09:05:06 +0000] [87] [ERROR] Error handling request /socket.io/?transport=websocket&EIO=4&sid=E6-m_FkHPBxWcm5MAAAA&t=1741338267.3122072 Traceback (most recent call last): File "/opt/render/project/src/.venv/lib/python3.11/site-packages/gunicorn/workers/sync.py", line 134, in handle self.handle_request(listener, req, client, addr) File "/opt/render/project/src/.venv/lib/python3.11/site-packages/gunicorn/workers/sync.py", line 177, in handle_request respiter = self.wsgi(environ, resp.start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 1536, in __call__ return self.wsgi_app(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask_socketio/__init__.py", line 42, in __call__ return super().__call__(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/middleware.py", line 63, in __call__ return self.engineio_app.handle_request(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/socketio/server.py", line 434, in handle_request return self.eio.handle_request(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/server.py", line 286, in handle_request packets = socket.handle_get_request( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/socket.py", line 92, in handle_get_request return getattr(self, '_upgrade_' + transport)(environ, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/socket.py", line 151, in _upgrade_websocket return ws(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/async_drivers/_websocket_wsgi.py", line 15, in __call__ ret = self.app(self) ^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/socket.py", line 225, in _websocket_handler p = websocket_wait() ^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/socket.py", line 156, in websocket_wait data = ws.wait() ^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/engineio/async_drivers/_websocket_wsgi.py", line 32, in wait return self.ws.receive() ^^^^^^^^^^^^^^^^^ File "/opt/render/project/src/.venv/lib/python3.11/site-packages/simple_websocket/ws.py", line 96, in receive if not self.event.wait(timeout=timeout): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/threading.py", line 629, in wait signaled = self._cond.wait(timeout) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/threading.py", line 327, in wait waiter.acquire() File "/opt/render/project/src/.venv/lib/python3.11/site-packages/gunicorn/workers/base.py", line 204, in handle_abort sys.exit(1) SystemExit: 1** 2-when I try to connect to server by two clients , the second client get timeout error !! any help will appreciated | The timeout with the second client is likely due to the fact that the default synchronous worker can only handle one request at a time. Switching to an asynchronous worker will allow multiple concurrent connections without timing out. Install eventlet: pip install eventlet Modify your Gunicorn command gunicorn -k eventlet -w 1 your_project:app Specify async mode in the script: socket = SocketIO(app, async_mode='eventlet') | 1 | 2 |
79,485,612 | 2025-3-5 | https://stackoverflow.com/questions/79485612/adding-hours-to-a-polars-time-column-in-python | I have a table representing a schedule, i.e. it contains day (monday-sunday), start_time and end_time fields df = pl.DataFrame({ "day": ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"], "enabled": [True, True, True, True, True, False, False], "start_time": ["09:00", "09:00", "09:00", "09:00", "09:00", "00:00", "00:00"], "end_time": ["18:00", "18:00", "18:00", "18:00", "18:00", "00:00", "00:00"], }) df = df.with_columns(start_time = pl.col("start_time").str.to_time("%H:%M")) df = df.with_columns(end_time = pl.col("end_time").str.to_time("%H:%M")) print(df) shape: (7, 4) ┌───────────┬─────────┬────────────┬──────────┐ │ day ┆ enabled ┆ start_time ┆ end_time │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ bool ┆ time ┆ time │ ╞═══════════╪═════════╪════════════╪══════════╡ │ monday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ tuesday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ wednesday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ thursday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ friday ┆ true ┆ 09:00:00 ┆ 18:00:00 │ │ saturday ┆ false ┆ 00:00:00 ┆ 00:00:00 │ │ sunday ┆ false ┆ 00:00:00 ┆ 00:00:00 │ └───────────┴─────────┴────────────┴──────────┘ I need to subtract n hours from the start_time and add n hours to the end_time. I cannot find a polars operation to add/subtract hours from a pl.time - I've tried adding a pl.duration but that only appears to work for date and datetime. One work-around I've assumed is to turn start_time / end_time into a pl.datetime (i.e. use some constant date), do the operation and then decompose the result back to a time. This has one option of being easier to ensure I don't over/underflow (i.e. subtract 2 hours from 01:00 and end up with 23:00) but I'm wondering it's possible to add/subtracts hours/minutes to a time in polars? | You are right, arithmetic operations between time and duration are not implemented. So we have to do some workaround. The most straightforward method is indeed to combine time with an arbitrary date to form a datetime object, do the math and then keep only the time part. We can avoid introducing a date by casting hours and duration to their underlying representation, but it would be much uglier. 
import polars as pl # create dataframe df = pl.DataFrame({ "start_time": ["09:00", "09:00", "09:00", "09:00", "09:00", "00:00", "00:00"], "end_time": ["18:00", "18:00", "18:00", "18:00", "18:00", "00:00", "00:00"], }).with_columns( start_time=pl.col("start_time").str.to_time("%H:%M"), end_time=pl.col("end_time").str.to_time("%H:%M"), duration=pl.duration(hours=1), ) print(df) ┌────────────┬──────────┬──────────────┐ │ start_time ┆ end_time ┆ duration │ │ --- ┆ --- ┆ --- │ │ time ┆ time ┆ duration[μs] │ ╞════════════╪══════════╪══════════════╡ │ 09:00:00 ┆ 18:00:00 ┆ 1h │ │ 09:00:00 ┆ 18:00:00 ┆ 1h │ │ 09:00:00 ┆ 18:00:00 ┆ 1h │ │ 09:00:00 ┆ 18:00:00 ┆ 1h │ │ 09:00:00 ┆ 18:00:00 ┆ 1h │ │ 00:00:00 ┆ 00:00:00 ┆ 1h │ │ 00:00:00 ┆ 00:00:00 ┆ 1h │ └────────────┴──────────┴──────────────┘ def add_duration_to_time(time: pl.Expr, duration: pl.Expr) -> pl.Expr: """ 05:00 + 1h = 06:00 23:00 + 2h = 01:00 01:00 - 2h = 23:00 """ arbitrary_naive_date = pl.date(2025, 1, 1) time_increased = ( arbitrary_naive_date.dt.combine(time) + duration ).dt.time() return time_increased result = ( df.with_columns( start_time_decreased=add_duration_to_time( time=pl.col("start_date"), duration=pl.col("duration").neg() ), end_time_increased=add_duration_to_time( time=pl.col("start_date"), duration=pl.col("duration") ), ) ) print(result) ┌────────────┬──────────┬──────────────┬──────────────────────┬────────────────────┐ │ start_time ┆ end_time ┆ duration ┆ start_time_decreased ┆ end_time_increased │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ time ┆ time ┆ duration[μs] ┆ time ┆ time │ ╞════════════╪══════════╪══════════════╪══════════════════════╪════════════════════╡ │ 09:00:00 ┆ 18:00:00 ┆ 1h ┆ 08:00:00 ┆ 10:00:00 │ │ 09:00:00 ┆ 18:00:00 ┆ 1h ┆ 08:00:00 ┆ 10:00:00 │ │ 09:00:00 ┆ 18:00:00 ┆ 1h ┆ 08:00:00 ┆ 10:00:00 │ │ 09:00:00 ┆ 18:00:00 ┆ 1h ┆ 08:00:00 ┆ 10:00:00 │ │ 09:00:00 ┆ 18:00:00 ┆ 1h ┆ 08:00:00 ┆ 10:00:00 │ │ 00:00:00 ┆ 00:00:00 ┆ 1h ┆ 23:00:00 ┆ 01:00:00 │ │ 00:00:00 ┆ 00:00:00 ┆ 1h ┆ 23:00:00 ┆ 01:00:00 │ └────────────┴──────────┴──────────────┴──────────────────────┴────────────────────┘ | 5 | 3 |
79,490,573 | 2025-3-6 | https://stackoverflow.com/questions/79490573/field-validator-not-called-on-sqlmodel | I have a FastAPI setup of the form: class Foo(sqlmodel.SQLModel, table=True): id: typing.Optional[int] = sqlmodel.Field(primary_key=True) data: str @pydantic.field_validator("data", mode="before") def serialize_dict(cls, value): if isinstance(value, dict): return json.dumps(value) return value @app.post("/foos") def create_foo(foo: Foo, session: sqlmodel.Session = fastapi.Depends(get_session)): session.add(foo) session.commit() return fastapi.Response() I then POST { "data": { "bar": 5 } } to /foos. However, this is throwing a SQL exception because the data value couldn't be bound. After putting in some logging statements, I discovered that foo.data is a dict and not a str. In addition, I confirmed that my validator is never called. Since SQLModel inherits from pydantic.BaseModel, I would have thought I could use such a validator. What am I missing? This is sqlmodel 0.0.23 with pydantic 2.10.6. | This is one of the oldest issues with SQLModel (see e.g. this github issue from 2022). Validators are not called on models with table=True. | 1 | 2 |
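One commonly suggested workaround, sketched below rather than taken from the accepted answer: accept a plain (non-table) model in the endpoint, where validators do run, and build the table model from it. The `FooCreate` name is made up; `get_session` is assumed to be the dependency from the question's setup.

```python
# Sketch: validators run on non-table SQLModel classes, so validate there
# and convert to the table model before persisting.
import json
import typing

import fastapi
import pydantic
import sqlmodel

app = fastapi.FastAPI()

class Foo(sqlmodel.SQLModel, table=True):
    id: typing.Optional[int] = sqlmodel.Field(default=None, primary_key=True)
    data: str

class FooCreate(sqlmodel.SQLModel):  # no table=True, so validation applies
    data: str

    @pydantic.field_validator("data", mode="before")
    def serialize_dict(cls, value):
        if isinstance(value, dict):
            return json.dumps(value)
        return value

@app.post("/foos")
def create_foo(payload: FooCreate, session: sqlmodel.Session = fastapi.Depends(get_session)):
    foo = Foo(data=payload.data)  # payload.data is already a JSON string here
    session.add(foo)
    session.commit()
    return fastapi.Response()
```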
79,489,653 | 2025-3-6 | https://stackoverflow.com/questions/79489653/two-threads-using-same-socket-object-problem | I found this example of a simple Python chat implementation, in order to understand how sockets work: #server import socket, threading #Libraries import host = '127.0.0.1' #LocalHost port = 7976 #Choosing unreserved port server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #socket initialization server.bind((host, port)) #binding host and port to socket server.listen() clients = [] nicknames = [] def broadcast(message): #broadcast function declaration for client in clients: client.send(message) def handle(client): while True: try: #recieving valid messages from client message = client.recv(1024) broadcast(message) except: #removing clients index = clients.index(client) clients.remove(client) client.close() nickname = nicknames[index] broadcast('{} left!'.format(nickname).encode('ascii')) nicknames.remove(nickname) break def receive(): #accepting multiple clients while True: client, address = server.accept() print("Connected with {}".format(str(address))) client.send('NICKNAME'.encode('ascii')) nickname = client.recv(1024).decode('ascii') nicknames.append(nickname) clients.append(client) print("Nickname is {}".format(nickname)) broadcast("{} joined!".format(nickname).encode('ascii')) client.send('Connected to server!'.encode('ascii')) thread = threading.Thread(target=handle, args=(client,)) thread.start() receive() #client import socket, threading nickname = input("Choose your nickname: ") client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #socket initialization client.connect(('127.0.0.1', 7976)) #connecting client to server def receive(): while True: #making valid connection try: message = client.recv(1024).decode('ascii') if message == 'NICKNAME': client.send(nickname.encode('ascii')) else: print(message) except: #case on wrong ip/port details print("An error occured!") client.close() break def write(): while True: #message layout message = '{}: {}'.format(nickname, input('')) client.send(message.encode('ascii')) receive_thread = threading.Thread(target=receive) #receiving multiple messages receive_thread.start() write_thread = threading.Thread(target=write) #sending messages write_thread.start() The client script uses two threads for writing and receiving data. Both threads use the same client socket. Is that a problem? My understanding by now is that client.recv() will block the socket (and its own thread) and make the writing thread fail - or vice versa. Is that right? I saw this: can two threads use the same socket at the same time, and are there possible problems with this? It just states that it is possible to use two threads for receiving and writing as long as you coordinate well. My question is: If client.recv() is using the socket to listen all the time, how can the other thread send data? | A socket is fully bidirectional. It has separate buffers for sending and receiving. One thread can be reading from a socket while another thread is writing to the same socket, that is perfectly OK and does not require any coordination between the threads. Reading does not block writing, and vice versa. What is not OK, though, is having 2+ threads reading from the same socket at the same time, or 2+ threads writing to the same socket at the same time, without coordination. And in fact, your client is doing exactly that. 
Both of its threads are send()'ing to the client socket without coordinating the sends, so the messages will end up overlapping each other on the wire and corrupt your communication. | 1 | 3 |
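One way to add the missing coordination on the client side, sketched here as an assumption rather than something the answer prescribes, is to guard every send with a single `threading.Lock`:

```python
# Sketch: serialize all writes to the shared client socket with one lock,
# so two threads can never interleave their bytes on the wire.
import threading

send_lock = threading.Lock()

def safe_send(sock, payload: bytes):
    with send_lock:            # only one thread may write at a time
        sock.sendall(payload)  # sendall() also finishes partial sends

# In the client from the question:
#   receive(): safe_send(client, nickname.encode('ascii'))
#   write():   safe_send(client, message.encode('ascii'))
```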
79,490,365 | 2025-3-6 | https://stackoverflow.com/questions/79490365/why-does-a-numpy-transpose-break-mediapipes-image-command | I would like to rotate an image in opencv before putting it into mediapipe. I took a transpose using numpy but, although the type is still numpy.uint8, mediapipe complains, import cv2 import numpy as np import mediapipe as mp image_file_name = "IMG_0237.JPG" cvImage = cv2.imread(image_file_name) cvImage = np.transpose(cvImage, axes=[1,0,2]) #without this line code runs fine print(cvImage.dtype) mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=cvImage) returns, uint8 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[4], line 9 7 cvImage = np.transpose(cvImage, axes=[1,0,2]) #without this line code runs fine 8 print(cvImage.dtype) ----> 9 mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=cvImage) TypeError: __init__(): incompatible constructor arguments. The following argument types are supported: 1. mediapipe.python._framework_bindings.image.Image(image_format: mediapipe::ImageFormat_Format, data: numpy.ndarray[numpy.uint8]) 2. mediapipe.python._framework_bindings.image.Image(image_format: mediapipe::ImageFormat_Format, data: numpy.ndarray[numpy.uint16]) 3. mediapipe.python._framework_bindings.image.Image(image_format: mediapipe::ImageFormat_Format, data: numpy.ndarray[numpy.float32]) Invoked with: kwargs: image_format=<ImageFormat.SRGB: 1>, data=array([[[176, 223, 255], [176, 223, 255], I tried commenting out the transpose line and the code runs fine. I checked my types and they seem to be good too. As a bonus question, if anyone knows of an AI model that gives a face and hair (i.e. a selfie) mask without mediapipe that would be most welcome. | The issue here is that ndarray.transpose does not actually move anything in memory. It simply resets the strides so that accesses APPEAR to be using new locations. mediapipe, on the other hand, requires that the input array be physically contiguous in memory. Your array isn't. You can use np.ascontiguousarray to force it to rearrange the contents. Is numpy.transpose reordering data in memory? | 2 | 5 |
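A sketch of the suggested fix applied to the question's snippet:

```python
import cv2
import numpy as np
import mediapipe as mp

cvImage = cv2.imread("IMG_0237.JPG")
cvImage = np.transpose(cvImage, axes=[1, 0, 2])
# transpose() only swaps strides; make a physically contiguous copy for mediapipe
cvImage = np.ascontiguousarray(cvImage)
mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=cvImage)
```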
79,490,056 | 2025-3-6 | https://stackoverflow.com/questions/79490056/compute-named-quantiles-in-pandas-using-groupby-aggregate | Among other descriptive statistics, I want to get some quantiles out of my pandas DataFrame. I can get the quantiles I want a couple of different ways, but I can't find the right way to do it with aggregate. I'd like to use aggregate because it'd be tidy and maybe computationally efficient to get all my stats in one go. rng = np.random.default_rng(seed=18860504) df = pd.DataFrame({ "dummy": 1, "bell": rng.normal(loc=0, scale=1, size=100), "fish": rng.poisson(lam=10, size=100), "cabin": rng.lognormal(mean=0, sigma=1.0, size=100), }) quants = [x/5 for x in range(6)] quantiles = pd.DataFrame({ "quantile" : [f"q{100*q:02n}" for q in quants], "bell" : df.groupby("dummy")["bell"].quantile(quants), "fish" : df.groupby("dummy")["fish"].quantile(quants), }) print(quantiles) Output: quantile bell fish dummy 1 0.0 q000 -2.313461 4.0 0.2 q020 -0.933831 7.0 0.4 q040 -0.246860 9.0 0.6 q060 0.211076 10.0 0.8 q080 0.685958 13.0 1.0 q100 3.017258 20.0 I'd like to get these quantiles using groupby().agg(), ideally with programmatically named columns like "bell_q90". Here's an example of the aggregate syntax that feels natural to me: df.groupby("dummy").agg( bell_med=("bell", "median"), bell_mean=("bell", "mean"), fish_med=("fish", "median"), fish_mean=("fish", "mean"), # fish_q10=("fish", "quantile(0.1)"), # nothing like it # fish_q10=("fish", "quantile", 0.1), # nothing like it # fish_q10=("fish", "quantile", kwargs({"q":0.1}), # nothing like it ) I can imagine generating the columns by iterating over quants and a list of named columns, using Series.agg and than stitching them together, but this seems like a hack. (For example, it would require me to do my "normal" aggregation first and then add quantiles on afterwards.) my_aggs = dict() for q in quants: for col in ["bell", "fish"]: my_aggs[f"{col}_q{100*q:03n}"] = df.groupby("dummy")[col].quantile(q) print(pd.DataFrame(my_aggs)) # numbers equivalent to those above Is there a better way? | You could use a function factory to simplify the syntax: def quantile(q=0.5, **kwargs): def f(series): return series.quantile(q, **kwargs) return f df.groupby('dummy').agg( bell_med=('bell', 'median'), bell_mean=('bell', 'mean'), fish_med=('fish', 'median'), fish_mean=('fish', 'mean'), bell_q10=('bell', quantile(0.1)), fish_q10=('fish', quantile(0.1)), ) If you have many combinations, you could also combine this with a dictionary comprehension and parameter expansion: df.groupby('dummy').agg(**{'bell_med': ('bell', 'median'), 'bell_mean': ('bell', 'mean'), 'fish_med': ('fish', 'median'), 'fish_mean': ('fish', 'mean'), }, **{f'{c}_q{100*q:02n}': (c, quantile(q)) for q in [0.1] # add more if needed for c in ['bell', 'fish'] } ) Output: bell_med bell_mean fish_med fish_mean bell_q10 fish_q10 dummy 1 -0.063454 -0.058557 10.0 9.92 -1.553682 6.0 | 3 | 3 |
79,484,655 | 2025-3-4 | https://stackoverflow.com/questions/79484655/qiskit-attributeerror-parameterexpression-object-has-no-attribute-name-whe | I am trying to optimize a quantum circuit using scipy.optimize.minimize and Qiskit Runtime's Estimator, running on an IBM Quantum real device. However, I am encountering the following error: AttributeError: 'ParameterExpression' object has no attribute 'name' The error occurs inside the minimize function call, specifically when `estimator.run() is executed in cost_func_estimator. Here is the relevant code: from scipy.optimize import minimize from qiskit_ibm_runtime import Estimator, Session objective_func_vals = [] with Session(backend=backend) as session: estimator = Estimator(mode=session) estimator.options.default_shots = 1000 estimator.options.dynamical_decoupling.enable = True estimator.options.dynamical_decoupling.sequence_type = "XY4" estimator.options.twirling.enable_gates = True estimator.options.twirling.num_randomizations = "auto" result = minimize( cost_func_estimator, init_params, args=(candidate_circuit, qubitOp, estimator), method="COBYLA", tol=1e-2, callback=callback, ) save_progress(result.x, objective_func_vals, name_saved_file) print(result) The cost_func_estimator function is called within minimize and runs a job in the estimator: def cost_func_estimator(params, ansatz, hamiltonian, estimator): isa_hamiltonian = hamiltonian.apply_layout(ansatz.layout) pub = (ansatz, isa_hamiltonian, params) job = estimator.run([pub]) # The error occurs here results = job.result()[0] cost = results.data.evs return cost I expected scipy.optimize.minimize to optimize the parameters of my quantum circuit using Qiskit Runtime’s Estimator on an IBM Quantum real device. I tried: - Ensuring `init_params` contains only numerical values, not `ParameterExpression`. - Using `bind_parameters()` to assign values before optimization. - Updating `qiskit` and `qiskit-ibm-runtime` to the latest version. Despite these attempts, the error persists. I expected estimator.run() to execute without issues, but it fails with AttributeError: 'ParameterExpression' object has no attribute 'name'. How can I resolve this issue? | It was a bug in the QPY serialization. qiskit 1.4.1 and qiskit 2.0.0 have fixes for it. | 2 | 3 |
79,489,723 | 2025-3-6 | https://stackoverflow.com/questions/79489723/why-is-this-python-code-not-running-faster-with-parallelization | This is a MWE of some code I'm writing to do some monte carlo exercises. I need to estimate models across draws and I'm parallelizing across models. In the MWE a "model" is just parametrized by a number of draws and a seed. I define the functions below. import time import pandas as pd import numpy as np import multiprocessing as mp def linreg(df): y = df[['y']].values x = np.hstack([np.ones((df.shape[0], 1)), df[['treat']].values]) xx_inv = np.linalg.inv(x.T @ x) beta_hat = xx_inv @ (x.T @ y) return pd.Series(beta_hat.flat, index=['intercept', 'coef']) def shuffle_treat(df): df['treat'] = df['treat'].sample(frac=1, replace=False).values return df def run_analysis(draws, seed, sleep=0): N = 5000 df = pd.DataFrame({'treat':np.random.choice([0,1], size=N, replace=True)}) df['u'] = np.random.normal(size=N) df['y'] = df.eval('10 + 5*treat + u') np.random.seed(seed) time.sleep(sleep) est = [linreg(shuffle_treat(df)) for k in range(draws)] est = pd.concat(est, axis=0, sort=False, keys=range(draws), names=['k', 'param']) return est I then test them and show that running in serial takes a similar amount of time as running in parallel. I can confirm they are running in parallel because if I force some sleep time there is a clear gain from parallelization. I know the problem is coming from this list comprehension: [linreg(shuffle_treat(df)) for k in range(draws)], but I don't understand why I don't achieve gains from parallelization across models. I've tried to parallelize across draws instead, but the results were even worse. param_list = [dict(draws=500, seed=1029), dict(draws=500, seed=1029)] param_list_sleep = [dict(draws=500, seed=1029, sleep=5), dict(draws=500, seed=1029, sleep=5)] def run_analysis_wrapper(params): run_analysis(**params) start = time.time() for params in param_list: run_analysis_wrapper(params) end = time.time() print(f'double run 1 process: {(end - start):.2f} sec') start = time.time() with mp.Pool(processes=2) as pool: pool.map(run_analysis_wrapper, param_list) end = time.time() print(f'double run 2 processes: {(end - start):.2f} sec') start = time.time() for params in param_list_sleep: run_analysis_wrapper(params) end = time.time() print(f'double run 1 process w/ sleep: {(end - start):.2f} sec') start = time.time() with mp.Pool(processes=2) as pool: pool.map(run_analysis_wrapper, param_list_sleep) end = time.time() print(f'double run 2 processes w/ sleep: {(end - start):.2f} sec') Output: double run 1 process: 2.52 sec double run 2 processes: 2.94 sec double run 1 process w/ sleep: 12.30 sec double run 2 processes w/ sleep: 7.71 sec For reference machine is Linux-based EC2 instance with nproc --a showing 48 CPUs. I'm running within a conda environment with Python 3.9.16. | Based on the comment by Nils Werner, I tried disabling multithreading in Numpy. And now I get the gains from parallelization. Interestingly, the serial version is also about twice as fast. 
import time import os os.environ['OMP_NUM_THREADS'] = '1' os.environ['MKL_NUM_THREADS'] = '1' os.environ['OPENBLAS_NUM_THREADS'] = '1' os.environ['NUMEXPR_MAX_THREADS'] = '1' import pandas as pd import numpy as np import multiprocessing as mp def linreg(df): y = df[['y']].values x = np.hstack([np.ones((df.shape[0], 1)), df[['treat']].values]) xx_inv = np.linalg.inv(x.T @ x) beta_hat = xx_inv @ (x.T @ y) return pd.Series(beta_hat.flat, index=['intercept', 'coef']) def shuffle_treat(df): df['treat'] = df['treat'].sample(frac=1, replace=False).values return df def run_analysis(draws, seed): N = 5000 df = pd.DataFrame({'treat':np.random.choice([0,1], size=N, replace=True)}) df['u'] = np.random.normal(size=N) df['y'] = df.eval('10 + 5*treat + u') np.random.seed(seed) est = [linreg(shuffle_treat(df)) for k in range(draws)] est = pd.concat(est, axis=0, sort=False, keys=range(draws), names=['k', 'param']) return est draws = 500 param_list = [dict(draws=draws, seed=1029), dict(draws=draws, seed=1029)] param_list_sleep = [dict(draws=draws, seed=1029, sleep=5), dict(draws=draws, seed=1029, sleep=5)] def run_analysis_wrapper(params): run_analysis(**params) start = time.time() for params in param_list: run_analysis_wrapper(params) end = time.time() print(f'double run 1 process: {(end - start):.2f} sec') start = time.time() with mp.Pool(processes=2) as pool: pool.map(run_analysis_wrapper, param_list) end = time.time() print(f'double run 2 processes: {(end - start):.2f} sec') Output: double run 1 process: 1.34 sec double run 2 processes: 0.67 sec | 3 | 2 |
79,486,984 | 2025-3-5 | https://stackoverflow.com/questions/79486984/get-an-array-containing-differences-between-two-numpy-arrays-with-some-members-t | I have two very large NumPy arrays containing coordinates [[x1,y1,z1],[x2,y2,z2]...] (~10^9 elements) The arrays are of different sizes, and there will be overlap between the coordinates. So [x1,y1,z1] may be in array1, but not in array2. I would like to quickly get all of the coordinates that are in one, and only one array. For example, import numpy as np array1 = np.array([[1,2,3],[2,3,4],[3,4,5],[4,5,6]]) array2 = np.array([[2,3,4],[3,4,5],[4,5,6],[5,6,7]]) array_diff = some_function_to_get_diff(array1,array2) would get: array_diff: np.array([[1,2,3],[5,6,7]]) I can use setdiff1d for one dimension, but it isn't clear to me how to do this for a 2d array. The files are quite large – 10GB or so, so being able to do this quickly/chunked in parallel would be a big plus. Update def some_faster_function_to_get_diff(array1, array2): unique1 = np.unique(array1, axis=0) # Get the unique coordinates in array1 unique2 = np.unique(array2, axis=0) # Get the unique coordinates in array2 set1 = set(map(tuple, unique1)) set2 = set(map(tuple, unique2)) array1_only = np.array(list(set1 - set2)) array2_only = np.array(list(set2 - set1)) if array1_only.size == 0: return array2_only if array2_only.size == 0: return array1_only return np.vstack((array1_only, array2_only)) def symmetric_difference(arr1, arr2): dtype = [('f{}'.format(i), arr1.dtype) for i in range(arr1.shape[1])] struct_arr1 = np.ascontiguousarray(arr1).view(dtype) struct_arr2 = np.ascontiguousarray(arr2).view(dtype) unique_to_arr1 = np.setdiff1d(struct_arr1, struct_arr2, assume_unique=True) unique_to_arr2 = np.setdiff1d(struct_arr2, struct_arr1,assume_unique=True) result = np.concatenate([unique_to_arr1, unique_to_arr2]).view(arr1.dtype).reshape(-1, arr1.shape[1]) return result from sys import getsizeof for i in range(2,9): no_ele=3*10**i arr1 = np.random.rand(no_ele, 3) * 100 arr2 = np.random.rand(no_ele, 3) * 100 print(f"i={i}: memory {getsizeof(arr2)/1024**3:2f}GB") arr2[50:60] = arr1[50:60] t0=time.time() diff1=symmetric_difference(arr1,arr2) t1=time.time() print(f"{no_ele}: contiguous {t1-t0}") t0=time.time() diff2=some_faster_function_to_get_diff(arr1,arr2) t1=time.time() print(f"{no_ele}: sets {t1-t0}") Gives: 2 memory 0.000007GB 300: contiguous 0.0004420280456542969 300: sets 0.0007300376892089844 3 memory 0.000067GB 3000: contiguous 0.0038480758666992188 3000: sets 0.0068819522857666016 4 memory 0.000671GB 30000: contiguous 0.052056074142456055 30000: sets 0.07555103302001953 5 memory 0.006706GB 300000: contiguous 0.6442217826843262 300000: sets 0.9859399795532227 6 memory 0.067055GB 3000000: contiguous 9.833515882492065 3000000: sets 12.196370840072632 7 memory 0.670552GB 30000000: contiguous 142.16755604743958 30000000: sets 138.87937593460083 @Andrés code works, and is much quicker than using lists, converting to contiguous array also produces a similar run time. The problem now is that the arrays are too big to fit into memory - helpfully the package (laspy) allows chunking the data like: with laspy.open(las_file1) as f1, laspy.open(las_file2) as f2: for points1 in f1.chunk_iterator(chunk_size): | For simple arrays there is setdiff1d() in numpy, however, this does not work for your arrays. 
If you would write your function from scratch, you could follow one of the following paths: Using list comprehension: def some_function_to_get_diff(array1, array2): unique1 = np.unique(array1, axis=0) # Get the unique coordinates in array1 unique2 = np.unique(array2, axis=0) # Get the unique coordinates in array2 # Get the coordinates that are in only one array array1_only = np.array([x for x in unique1.tolist() if x not in unique2.tolist()]) array2_only = np.array([x for x in unique2.tolist() if x not in unique1.tolist()]) # Special cases with one empty array if array1_only.size == 0: return array2_only if array2_only.size == 0: return array1_only return np.vstack((array1_only, array2_only)) Needs ~53s on two arrays with 105 elements each on my computer. Using set operations: def some_faster_function_to_get_diff(array1, array2): unique1 = np.unique(array1, axis=0) # Get the unique coordinates in array1 unique2 = np.unique(array2, axis=0) # Get the unique coordinates in array2 set1 = set(map(tuple, unique1)) set2 = set(map(tuple, unique2)) array1_only = np.array(list(set1 - set2)) array2_only = np.array(list(set2 - set1)) if array1_only.size == 0: return array2_only if array2_only.size == 0: return array1_only return np.vstack((array1_only, array2_only)) Needs ~0.2s on two arrays with 105 elements each on my computer. | 2 | 0 |
79,487,404 | 2025-3-5 | https://stackoverflow.com/questions/79487404/how-to-parse-a-nested-structure-presented-as-a-flat-list | To easily understand my problem, below is a simplified version of some example input. ['foo', 1, 'a', 'foo', 2, 'foo', 1, 'b', 'foo', -1, 'foo', -1, "bar", 1, "c", "bar", 2, 'baz', 1, 'd', 'baz', -1, "bar", 3, "e", "bar", 4, 'qux', 1, 'stu', 1, 'f', 'stu', -1, 'qux', -1, 'bar', -1] (I used "stu" because I ran out of placeholder names.) The strings are function names (sort of, specifics later). The numbers after the function names specify the position of the argument in the function that follows. A position of -1 closes the function. For example, ['foo',1,'a','foo',2,'b','foo',-1] should be equivalent to foo('a', 'b'). This should also work when nested: ['foo', 1, 'a', 'foo', 2, 'foo', 1, 'b', 'foo', -1, 'foo', -1] should be equivalent to foo('a', foo('b')) and ['bar', 1, 'c', 'bar', 2, 'baz', 1, 'd', 'baz', -1, 'bar', 3, 'e', 'bar', 4, 'qux', 1, 'stu', 1, 'f', 'stu',-1, 'qux', -1, 'bar', -1] should be equivalent to bar('c', baz('d'), e, qux(stu('f'))). My desired function should return a list. For example, ['foo', 1, 'a', 'foo', 2, 'foo', 1, 'b', 'foo', -1, 'foo', -1, 'bar', 1, 'c', 'bar', -1] should result in [['foo', 'a', ['foo', 'b']], ['bar', 'c']] Now that the problem is clearer, my actual problem is slightly different. All elements of the list are integers. The function names are not strings, they're sequences of three integers. Thus, ['foo',1,'a','foo',2,'b','foo',-1] is actually [1, 1, 1, 1, 104, 1, 1, 1, 2, 105, 1, 1, 1, -1]. The function names ([1, 1, 1] in the above example) act as dictionary keys. The dictionary (called constructs) looks something like this: constructs = { 1: { 1: { 1: lambda *chars : print(''.join(chr(char) for char in chars)) } } } So finally, the example should result in something like [[lambda *chars : print(''.join(chr(char) for char in chars)), 104, 105]] All the specifications about nesting and such should all still apply. I have no clue how to reliably and elegantly implement this, please help! Thanks in advance. Edit: I forgot to say that 0 is always ignored and skipped over, and a 0 following a function call will escape the function call and cause it to be treated as an argument. All of this functionality is implemented to some degree in my attempt so far, but it doesn't work when the same function is nested within itself. It's also inefficient and inelegant, with lots of potential problems, so I took to Stack Overflow for help writing a better one. Feel free to use it as a starting point! 
Edit 2: here is the code for my attempt so far: constructs = { 1: { 1: { 1: print, } } } def parse(code: list) -> list: if len(code) <= 1: return code result = [] in_function = 0 for i, token in enumerate(code): if in_function > 0: in_function -= 1 continue if token == 0: continue if result and result[-1][0][3] != -1: if token in constructs and code[i + 1] in constructs[token] and code[i + 2] in constructs[token][code[i + 1]]: if i < len(code) - 4 and code[i + 4] == 0: result[-1][-1].append(token) else: if code[i + 3] == result[-1][0][3] + 1: result[-1].append([]) result[-1][0] = code[i:i + 4] in_function = 3 else: result[-1][-1].append(token) else: if token in constructs and code[i + 1] in constructs[token] and code[i + 2] in constructs[token][code[i + 1]]: if code[i + 3] == 1: result.append([code[i:i + 4], []]) in_function = 3 else: raise SyntaxError(f'function {code[i:i + 3]} has no previous separator {code[i + 3] - 1}') else: raise SyntaxError(f'function {code[i:i + 3]} was not recognized') for i, function in enumerate(result): result[i][0] = constructs[result[i][0][0]][result[i][0][1]][result[i][0][2]] for j, argument in enumerate(result[i][1:]): result[i][j + 1] = parse(argument) return result It works for parse([1, 1, 1, 1, 'Hello world', 1, 1, 1, 2, 'etc', 1, 1, 1, -1]) but not for parse([1, 1, 1, 1, 1, 1, 1, 1, 'Hello world', 1, 1, 1, -1, 1, 1, 1, 2, 'etc', 1, 1, 1, -1]). | A valid list following this grammar rule can be parsed recursively with a function which, given a valid index, tries to parse the tokens starting from the index as keys to a function followed by the argument position (with -1 ending the function call), or otherwise defaults to a scalar value. In addition to the parsed object, the function should also return the index to the next token in order to advance the index according to the number of tokens consumed at the current level. Since there can be one or more of such expressions in the input list, the function should be called iteratively with returning values appended to an output list until the index reaches the end of the input: def parse(code): def parse_expr(index): while index < size and code[index] == 0: index += 1 # skip 0s try: call = [] while True: key1, key2, key3, pos = code[index: (next_index := index + 4)] if next_index < size and code[next_index] == 0: raise ValueError # escape the token if not call: call.append(constructs[key1][key2][key3]) if pos == -1: return call, next_index obj, index = parse_expr(next_index) call.append(obj) except (ValueError, KeyError): return code[index], index + 1 size = len(code) index = 0 result = [] while index < size: obj, index = parse_expr(index) result.append(obj) return result so that: print(parse([1, 1, 1, 1, 'Hello world', 1, 1, 1, 2, 'etc', 1, 1, 1, -1])) print(parse([1, 1, 1, 1, 1, 1, 1, 1, 'Hello world', 1, 1, 1, -1, 1, 1, 1, 2, 'etc', 1, 1, 1, -1])) print(parse([1, 1, 1, 1, 'a', 1, 1, 1, 2, 1, 1, 1, 1, 'b', 1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1, 1, 'c', 1, 1, 1, -1])) outputs: [[<built-in function print>, 'Hello world', 'etc']] [[<built-in function print>, [<built-in function print>, 'Hello world'], 'etc']] [[<built-in function print>, 'a', [<built-in function print>, 'b']], [<built-in function print>, 'c']] Demo: https://ideone.com/4ct2KA The code above assumes the input to be valid and ignores the keys to a function after the first argument since they are redundant. Argument positions are ignored too since they are always incremented from 1, although they can be used for input validation as necessary. 
| 5 | 1 |
79,487,786 | 2025-3-5 | https://stackoverflow.com/questions/79487786/vs-code-comment-toggle-behaviour-unexpected-adds-and-removes-comments-does-not | I have tried looking this up but I only find answers related to getting the toggle command to work at all, not about the specific issue I have. I was previously using Notepad++. For Python, when I would toggle the comment of two lines simultaneously, I would get the following behaviour: # Line 1 Line 2 After toggle: Line 1 # Line 2 Future toggles alternate between the two states above. I have not found a way to do this in VS Code (Ctrl + / ). The behaviour I see is # Line 1 Line 2 After toggle: # # Line 1 # Line 2 Toggle again: # Line 1 Line 2 The behaviour of Notepad++ is quite useful, but I can't find any discussion of it or how to get VSCode to behave similarly. The VSCode way is not useless, but the Notepad++ version is much more useful as I can always just use a comment command if I want to simply add a comment and not swap which lines are commented. Does anyone know how I can replicate the Notepad++ behaviour in VSCode? | Shift+ Alt+ I then Ctrl + /. You can bind them using extension multi-command | 1 | 2 |
79,485,958 | 2025-3-5 | https://stackoverflow.com/questions/79485958/is-there-any-way-to-make-gekko-create-a-csv-file-thats-more-readable | The results.csv file is hard to read since the variables, intermediates etc. are shown in a format like "i1000" which probably means "intermediate number 1000". Is there any way to make the results show the proper symbols/characters that I used in the variable declaration? I've used the m.Array function to declare variables, I'm not sure if I can assign names to them with indexes such as Temperature[1,2,3] | Use a list comprehension to define custom names for an array of variables. Here is an example: from gekko import GEKKO m = GEKKO(remote=False) # Create an array of variables #t = m.Array(m.Var,3,lb=-1000,ub=1000) t = [m.Var(lb=-1000,ub=1000,name=f'T{i+1}') for i in range(3)] # Create intermediate variables i1 = m.Intermediate(t[0] + 10, name='T4') i2 = m.Intermediate(t[2] / 100, name='T5') # Example equations m.Equation(t[0]+t[1]==10) m.Equation(t[1]+2*t[2]==50) m.Equation(t[0]-6*t[2]==10) m.solve() m.open_folder() The results.csv file is now more readable with: t4, -5.5000000000E+01 t5, -1.2500000000E-01 t1, -6.5000000000E+01 t2, 7.5000000000E+01 t3, -1.2500000000E+01 APMonitor solution engine converts all variable names to lowercase. One other suggestion is to avoid starting names with reserved function names such as abs, sin, cos, and tan. | 3 | 2 |
79,486,991 | 2025-3-5 | https://stackoverflow.com/questions/79486991/how-to-add-a-new-level-to-json-output-using-polars-in-python | I'm using Polars to process a DataFrame so I can save it as JSON. I know I can use the method .write_json(), however, I would like to add a new level to the JSON. My current approach: import polars as pl df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "variable1": [15, 25, 5, 10, 20], "variable2": [40, 30, 50, 10, 20], }) ( df.write_json() ) Current output: '[{"id":1,"variable1":15,"variable2":40},{"id":2,"variable1":25,"variable2":30},{"id":3,"variable1":5,"variable2":50},{"id":4,"variable1":10,"variable2":10},{"id":5,"variable1":20,"variable2":20}]' But I would like to save it in this way, with the "Befs" key, so each "Befs" contains every record of the DataFrame. Desired output: { "Befs": [ { "ID ": 1, "variable1": 15, "variable2": 40 }, { "ID ": 2, "variable1": 25, "variable2": 30 } ] } I have tried using .pl.struct() , but my attemps make no sense: ( df .select( pl.struct( pl.lit("Bef").alias("Bef"), pl.col("id"), pl.col("variable1"), pl.col("variable2") ) ) .write_json() ) | The write_json() function always returns the data in a row-oriented format, in which the root element is a list, and each row contains a mapping of column_name -> row_value As a hacky workaround you could use write_ndjson() instead, given that its root element is a dictionary (for each line), but for that to match your desired output you'll have to implode everything into a single row and wrap it around a struct. df.select(Bef=pl.struct(pl.all()).implode()).write_ndjson()) | 1 | 3 |
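A sketch of that route next to a plain-json alternative, using the question's DataFrame (the `json`-module wrapper is an addition, not part of the accepted answer):

```python
import json
import polars as pl

df = pl.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "variable1": [15, 25, 5, 10, 20],
    "variable2": [40, 30, 50, 10, 20],
})

# write_ndjson with all rows imploded into one struct column named "Befs"
print(df.select(Befs=pl.struct(pl.all()).implode()).write_ndjson())

# Alternative (assumption): wrap the row-oriented write_json() output manually
print(json.dumps({"Befs": json.loads(df.write_json())}))
```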
79,486,908 | 2025-3-5 | https://stackoverflow.com/questions/79486908/how-can-i-use-a-pyspark-udf-in-a-for-loop | I need a PySpark UDF with a for loop to create new columns but with conditions based on the iterator value. def test_map(col): if x == 1: if col < 0.55: return 1.2 else: return 0.99 elif x == 2: if col < 0.87: return 1.5 else: return 2.4 etc. test_map_udf = F.udf(test_map, IntegerType()) And then iterate: for x in range(1, 10): df = df.withColumn(f"new_value_{x}", test_map_udf(F.col(f"old_value_{x}"))) But it errors out because test_map doesn't know what x is when it runs, and you can't pass x to test_map_udf. Should I create a regular Python function that takes x, and that function calls the UDF? | You can pass x as an argument to the UDF. def test_map(x, col): ... test_map_udf = F.udf(test_map, IntegerType()) for x in range(1, 10): df = df.withColumn(f"new_value_{x}", test_map_udf(F.lit(x), F.col(f"old_value_{x}"))) | 1 | 3 |
79,482,885 | 2025-3-4 | https://stackoverflow.com/questions/79482885/separable-convolutions-in-pytorch-i-e-2-1d-vector-tensor-traditional-convolu | I'm trying to implement an image filter in PyTorch that takes in two filters of shapes (1,3), (3,1) that build up a filter of (3,3). An example application of this is the Sobel filter or Gaussian blurring I have a NumPy implementation ready, but PyTorch has a different way of working with convolutions that makes it hard to wrap my head around for more traditional applications such as this. How should I proceed? def decomposed_conv2d(arr,x_kernel,y_kernel): """ Apply two 1D kernels as a part of a 2D convolution. The kernels must be the decomposed from a 2D kernel that originally is intended to be convolved with the array. Inputs: - x_kernel: Column vector kernel, to be applied along the x axis (axis 0) - y_kernel: Row vector kernel, to be applied along the y axis (axis 1) """ arr = np.apply_along_axis(lambda x: np.convolve(x, x_kernel, mode='same'), 0, arr) arr = np.apply_along_axis(lambda x: np.convolve(x, y_kernel, mode='same'), 1, arr) return arr Gaussian blurring example: ax = np.array([-1.,0.,1.]) stdev = 0.5 kernel = np.exp(-0.5 * np.square(ax) / np.square(stdev)) / (stdev * np.sqrt(2*np.pi)) decomposed_conv2d(np.arange(9).reshape((3,3)),kernel,kernel) >>>array([[0.39126886, 1.24684326, 1.83682264], [2.86471127, 4.11155453, 4.48257929], [4.7279302 , 6.1004473 , 6.17348398]]) (Note: The total "energy" of this array may not be preserved, especially in small arrays like this because the convolution is discrete. It isn't that critical to this particular problem). Attempting to do the same in PyTorch following this discussion yields an error: ... # define ax,stdev,kernel, etc. arr_in = torch.arange(9).reshape(3,3) # for example arr = arr_in.double().unsqueeze(0) # tried both axes and not unsqueezing as well x_kernel = torch.from_numpy(kernel) y_kernel = torch.from_numpy(kernel) x_kernel = x_kernel.view(1,1,-1) y_kernel = y_kernel.view(1,1,-1) arr = F.conv1d(arr,x_kernel,padding=x_kernel.shape[2]//2).squeeze(0) arr = F.conv1d(arr.transpose(0,1),y_kernel, padding=y_kernel.shape[2] // 2).squeeze(0).transpose(2,1).squeeze(1) >>> RuntimeError: Given groups=1, weight of size [1, 1, 3], expected input[1, 3, 3] to have 1 channels, but got 3 channels instead I've juggled with squeezes and unsqueezes so that the dimensions match but I still can't get it to do what I want. I just can't even get the first convolution done this way. | Solution with conv2d You can make your life a lot easier by using conv2d rather than conv1d. Although we use conv2d below, this is still a 1-d convolution (or rather, two 1-d convolutions) effectively, since we apply a 1×n kernel. Thus, we still have all benefits of a separable convolution (in particular, 2·n rather than n² multiplications per pixel for a kernel of length n). 
import numpy as np import torch from torch.nn.functional import conv2d np.set_printoptions(precision=3) # For better legibility: show fewer float digits def decomposed_conv2d_np(arr, x_kernel, y_kernel): # From the question arr = np.apply_along_axis(lambda x: np.convolve(x, x_kernel, mode='same'), 0, arr) arr = np.apply_along_axis(lambda x: np.convolve(x, y_kernel, mode='same'), 1, arr) return arr def decomposed_conv2d_torch(arr, x_kernel, y_kernel): # Proposed arr = arr.unsqueeze(0).unsqueeze_(0) # Make copy, make 4D for ``conv2d()`` arr = conv2d(arr, weight=x_kernel.view(1, 1, -1, 1), padding='same') arr = conv2d(arr, weight=y_kernel.view(1, 1, 1, -1), padding='same') return arr.squeeze_(0).squeeze_(0) # Make 2D again ax = np.array([-1.,0.,1.]) stdev = 0.5 kernel = np.exp(-0.5 * np.square(ax) / np.square(stdev)) / (stdev * np.sqrt(2 * np.pi)) array = np.arange(9).reshape((3,3)) print(result_np := decomposed_conv2d_np(array, kernel, kernel)) # [[0.391 1.247 1.837] # [2.865 4.112 4.483] # [4.728 6.1 6.173]] array, kernel = torch.from_numpy(array).to(torch.float64), torch.from_numpy(kernel) print(result_torch := decomposed_conv2d_torch(array, kernel, kernel).numpy()) # [[0.391 1.247 1.837] # [2.865 4.112 4.483] # [4.728 6.1 6.173]] assert np.allclose(result_np, result_torch) This solution is based on my answer to a related, earlier question that asked for an implementation of a Gaussian kernel in PyTorch. Solution with conv1d Here is the corresponding solution using conv1d instead: from torch.nn.functional import conv1d ... def decomposed_conv2d_with_conv1d(a, x_kernel, y_kernel): a = a.unsqueeze(1) # Unsqueeze channels dimension for ``conv1d()`` a = conv1d(a, weight=y_kernel.view(1, 1, -1), padding='same') # Use y kernel a = a.transpose(0, -1) # Swap image dims for using x kernel along last dim a = conv1d(a, weight=x_kernel.view(1, 1, -1), padding='same') # Use x kernel return a.squeeze_(1).T # Make 2D again, reestablish original order of dims The key ideas here are: We always need to convolve along the last dimension, so before convolving with the appropriate kernel, we need to move the corresponding image dimension there. For the remaining image dimension, we can "misuse" what conv1d assumes as the batch dimension (dimension 0) to hold its values. What does not work here is using the channels dimension (dimension 1), since we would need to adjust the kernel by repeating it to match the number of channels. We simply keep the channels dimension at 1 here (meaning we have one image channel), but we could use it for the actual image channels if we had a multichannel image (say, RGB). To me, it appears less straightforward than the conv2d solution, since it also involves the reordering of image dimensions. As to performance, I don't know which version is faster and I did not time them. This should be pretty easy to find out; however, what I assume is that performance differences should be negligible. | 1 | 3 |
79,484,953 | 2025-3-4 | https://stackoverflow.com/questions/79484953/identifying-and-removing-duplicate-columns-rows-in-sparse-binary-matrix-in-pytor | Let's suppose we have a binary matrix A with shape n x m, I want to identify rows that have duplicates in the matrix, i.e. there is another index on the same dimension with the same elements in the same positions. It's very important not to convert this matrix into a dense representation, since the real matrices I'm using are quite large and difficult to handle in terms of memory. Using PyTorch for the implementation: # This is just a toy sparse binary matrix with n = 10 and m = 100 A = torch.randint(0, 2, (10, 100), dtype=torch.float32).to_sparse() Intuitively, we can perform the dot product of this matrix producing a new m x m matrix which contains in terms i, j, the number of 1s that the index i has in the same position of the index j at dimension 0. B = A.T @ A # In PyTorch, this operation will also produce a sparse representation At this point, I've tried to combine these values, comparing them with A.sum(0), num_elements = A.sum(0) duplicate_rows = torch.logical_and([ num_elements[B.indices()[0]] == num_elements[B.indices()[1]], num_elements[B.indices()[0]] == B.values() ]) But this did not work! I think that the solution can be written only by using operations on PyTorch Sparse tensors (without using Python loops and so on), and this could also be a benefit in terms of performance. | Here is an implementation where the duplicate rows in a binary sparse matrix are identified. It returns a mask of the rows to keep from the sparse matrix, but can easily be adjusted to give e.g. indices of duplicate rows. It also handles cases where 3 or more rows are duplicates of each other and only keeps 1 row per group (the lowest index row is always kept for simplicity). def get_unique_row_mask_sparse(A): # Number of matching 1s between each pair of rows B = A @ A.T # Number of 1s in each row row_sums = torch.sparse.sum(A, dim=1).to_dense() indices = B.indices() i, j = indices[0], indices[1] # Two rows i and j are duplicates if: # 1) B[i,j] == row_sums[i] == row_sums[j] # 2) i != j (exclude diagonal) # Moreover, we only keep the upper diagonal of the matrix to avoid duplicates same_row_sums = row_sums[i] == row_sums[j] matches_equal_sums = B.values() == row_sums[i] not_diagonal = i != j upper_triangular = i < j is_duplicate_pair = same_row_sums & matches_equal_sums & not_diagonal & upper_triangular duplicate_pairs = indices[:, is_duplicate_pair] # For each duplicate pair (i,j), we keep row i keep_mask = torch.ones(A.size(0), dtype=torch.bool) for pair_idx in range(duplicate_pairs.size(1)): row_i, row_j = duplicate_pairs[:, pair_idx] keep_mask[row_j] = False return keep_mask Testing code: torch.manual_seed(42) A = torch.randint(0, 2, (10, 100), dtype=torch.float32).to_sparse() # Force some duplicate rows for testing A_dense = A.to_dense() A_dense[3] = A_dense[1] A_dense[6] = A_dense[1] A_dense[9] = A_dense[2] A = A_dense.to_sparse() keep_mask = get_unique_row_mask_sparse(A) print(keep_mask) Gives the result: tensor([ True, True, True, False, True, True, False, True, True, False]) You can run the following to create a new sparse tensor from this. A_indices = A.indices() rows_mask = keep_mask[A_indices[0]] A_unique = torch.sparse_coo_tensor( A_indices[:, rows_mask], A.values()[rows_mask], (keep_mask.sum().item(), A.size(1)) ).coalesce() | 4 | 1 |
79,484,649 | 2025-3-4 | https://stackoverflow.com/questions/79484649/python-regex-substitution-for-comma-match | For the input string as shown below, I am trying to substitute UK1/ after Street and every , and skip hyphen to create expected output shown below. Input = Street1-2,4,6,8-10 Expected output = StreetUK/1-2,UK/4,UK/6,UK/8-10 With the regex pattern I have trouble substitute for each captured group. How can I capture all the required groups for every , and sub required string. replacements = [] pattern = r"(Street)?(?:\d+)(((,)?(?:\d+))*[-]?(?:\d+))*" def replacement(x): replacements.append(f"{x.group(1)}{'UK'}/") input = 'Street1-2,4,6,8-10' m = re.sub(pattern, replacement, input) print(m, [''.join(x) for x in replacements] ) The above code just prints ['StreetUK/'] but not as expected. | You can search using this regex: (Street|,)(?=\d) and replace with: \1UK/ RegEx Demo Code: import re input = 'Street1-2,4,6,8-10' print(re.sub(r'(Street|,)(?=\d)', r'\1UK/', input)) Output: StreetUK/1-2,UK/4,UK/6,UK/8-10 If you want to do these replacements only for input starting with Street then use: print(re.sub(r'(Street|,)(?=\d)', r'\1UK/', input) if input.startswith("Street") else input) RegEx Explanation: (Street|,): Match Street or comma in capture group #1 (?=\d): Make sure there is a digit after the current position \1UK/: is replacement to put back-reference of group #1 back followed by UK/ in the substituted string | 2 | 6 |
79,484,274 | 2025-3-4 | https://stackoverflow.com/questions/79484274/how-to-prevent-the-formatting-of-any-linebreaks | Let's say I have a file like this: import pandas as pd df = pd.DataFrame( { "A": [" a: yu", "b: stuff ", "c: more_stuff"], "B": [4, 5, 6], "C": [7, 8, 9], } ) df["A"] = ( df["A"] .str.strip() .str.replace(":", "") .str[0] ) new_df = pd.melt( df, id_vars=["A"] ) print(new_df) If I then run ruff format --diff play_line_breaks.py I get -df["A"] = ( - df["A"] - .str.strip() - .str.replace(":", "") - .str[0] -) +df["A"] = df["A"].str.strip().str.replace(":", "").str[0] -new_df = pd.melt( - df, - id_vars=["A"] -) +new_df = pd.melt(df, id_vars=["A"]) print(new_df) So, the ruff formatter would convert my multiline statements into a single line. I find the multiline version far more readable and would like to keep it. Is there any setting in ruff that would allow me to say "don't touch any line breaks"? The best I could find are skip-magic-trailing-comma = false (in my pyproject.toml) which does not impact the output from above though or wrapping the two statements like this # fmt: off < all code after df assignment > # fmt: on That works, but I find it rather cumbersome to do this for all the statements I have in my code base. Are there any smarter ways of doing this? | It seems there is no option available at the moment in ruff. See: ruff formatter: one call per line for chained method calls. Workaround: Seems like putting # fmt: skip next to the closing parentheses works as a quick hack. import pandas as pd df = pd.DataFrame( { "A": [" a: yu", "b: stuff ", "c: more_stuff"], "B": [4, 5, 6], "C": [7, 8, 9], } ) df["A"] = ( df["A"] .str.strip() .str.replace(":", "") .str[0] ) # fmt: skip new_df = pd.melt( df, id_vars=["A"] ) # fmt: skip print(new_df) | 1 | 4 |
79,483,300 | 2025-3-4 | https://stackoverflow.com/questions/79483300/unable-to-create-a-toggle-for-gridlines-on-matplotlib-python | I've been trying to make a toggle button to turn on and off the gridlines on my graph every time it's clicked. I understand how the toggle works as I was able to do it for some lines being animated onto my graph. I'm unsure if it's a problem with how MatplotLib sets axis and zorder etc. Or if it's to do with the fact i'm animating the graph and it updates the whole figure every frame. I didn't think that would be a problem though considering that everything is redrawn and not just the changing plots. I've tried many different ways of doing this widget and none of it seemed to work. I've changed the visibility of the lines by using alpha, the colour since my background is black, switching ax.grid() to ax.grid(False). I'm not sure anymore, this is what I've got currently: ax_grid = fig.add_axes([0.85, 0.2, 0.1, 0.04]) gbutton = Button(ax_grid, 'Grid', color = '0.3', hovercolor='0.7') # Define the grid visibility state grid_visible = False # Initialize the grid visibility state def grid_lines(event): global grid_visible grid_visible = not grid_visible # Toggle the state plt.sca(ax) # Ensure we are modifying the main plot's axes ax.set_axisbelow(False) # trying to set it above fig.canvas.draw() if grid_visible: ax.grid(color='white') # Show grid with properties else: ax.grid(color='black') # Simply turn off the grid without extra arguments fig.canvas.draw_idle() # Redraw the canvas Another version: ax_grid = fig.add_axes([0.85, 0.2, 0.1, 0.04]) gbutton = Button(ax_grid, 'Grid', color = '0.3', hovercolor='0.7') # Define the grid visibility state grid_visible = False # Initialize the grid visibility state def grid_lines(event): global grid_visible grid_visible = not grid_visible # Toggle state # Access grid lines directly and toggle their visibility for line in ax.get_xgridlines() + ax.get_ygridlines(): line.set_visible(grid_visible) fig.canvas.draw_idle() # Redraw the canvas My whole code incase it's needed: import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from matplotlib.widgets import Slider, Button import scipy from math import sqrt plt.rcParams["figure.autolayout"] = True print("Default text color is: ", plt.rcParams['text.color']) plt.rcParams.update({'text.color': "white"}) # changing default text colour to white dt = 0.1 numsteps = 10000 pi = scipy.constants.pi G = 4.30091e-3 # AU^3 * M_sun^-1 * yr^-2 wA = 3.0 wB = 3.0 thetaA = 0 thetaB = thetaA + pi # to put the other store on the opposite end of starA # initialise variables r = 5 mA = 10 # mass in solar mass mB = 10 M = mA + mB x_valA, y_valA = [], [] x_valB, y_valB = [], [] # Create the animation, fig represents the object/canvas and ax means it is the area being plotted on fig, ax = plt.subplots(figsize=(8, 8), dpi=100) ax.set_xlim(-10, 10) ax.set_ylim(-10, 10) ax.set_aspect('equal') ax.set_facecolor("black") fig.patch.set_facecolor("k") # Create the stars and COM plot starA, = ax.plot([], [], 'o', color='blue', markersize=10, label='Star A',zorder=9.5) starB, = ax.plot([], [], 'o', color='red', markersize=10, label='Star B',zorder=9.5) COM = ax.plot([0], [0], '+', color='white', markersize=5, label='COM',zorder=10) orbitA, = ax.plot([], [], '-', color='cyan', alpha=0.5, label='Orbit A', zorder=5) orbitB, = ax.plot([], [], '-', color='pink', alpha=0.5, label='Orbit B', zorder=5) ax.grid(color='white', linestyle='--', linewidth=0.5, zorder=1) leg = 
ax.legend(facecolor='k', labelcolor='w', fancybox=True, framealpha=0.6, loc='upper right', bbox_to_anchor=(1,1)) leg.set_zorder(20) def orbit(r, mA, mB, M): global x_valA, y_valA, x_valB, y_valB, thetaA, thetaB, G, dt # Reset variables x_valA, y_valA = [], [] x_valB, y_valB = [], [] M = mA + mB rA = r * (mB / M) rB = r * (mA / M) # initial positions positionA = np.array([rA * np.cos(thetaA), rA * np.sin(thetaA)]) # Star A initial position positionB = np.array([rB * np.cos(thetaB), rB * np.sin(thetaB)]) # Star B initial position # SIMULATION LOOP for _ in range(numsteps): # Store positions for both stars x_valA.append(positionA[0]) y_valA.append(positionA[1]) x_valB.append(positionB[0]) y_valB.append(positionB[1]) # Update speed and angles for next positions wA = sqrt(G * M / rA * rA) wB = sqrt(G * M / rB * rB) thetaA += wA * dt # update angle for starA thetaB += wB * dt # update angle for starB # Calculate new positions based on updated angles positionA = np.array([rA * np.cos(thetaA), rA * np.sin(thetaA)]) positionB = np.array([rB * np.cos(thetaB), rB * np.sin(thetaB)]) # After simulation loop, update the orbit lines with the recorded paths orbitA.set_data(x_valA, y_valA) orbitB.set_data(x_valB, y_valB) # initialising the data def init(): starA.set_data([], []) starB.set_data([], []) orbitA.set_data([], []) # Clear the initial orbit paths orbitB.set_data([], []) return starA, starB, orbitA, orbitB def update(frame): starA.set_data([x_valA[frame]], [y_valA[frame]]) # Pass as lists starB.set_data([x_valB[frame]], [y_valB[frame]]) if orbit_lines_visible: orbitA.set_data(x_valA[:frame+1], y_valA[:frame+1]) orbitB.set_data(x_valB[:frame+1], y_valB[:frame+1]) return starA, starB, orbitA, orbitB ani = FuncAnimation(fig, update, frames=numsteps, init_func=init, blit=False, interval=50) plt.title("Binary Star System", fontsize=20, fontweight='bold') def up(val): global r, mA, mB, M r = seperation_slider.val mA = mA_slider.val mB = mB_slider.val M = mA + mB orbit(r, mA, mB, M) # updated values into function starA.set_markersize(mA_slider.val) starB.set_markersize(mB_slider.val) ani.event_source.stop() # Stop the current animation ani.event_source.start() # Restart the animation with updated orbit fig.canvas.draw_idle() seperation_slider = Slider(ax=plt.axes([0.125, 0.02, 0.10, 0.04]), label='Seperation', valmin=1, valmax=15, valinit=r, valstep=1.11, facecolor='w') mA_slider = Slider(ax=plt.axes([0.45, 0.02, 0.15, 0.04]), label="Mass A", valmin=0.1, valmax=100, valinit=mA, valstep=1.11, facecolor='b') mB_slider = Slider(ax=plt.axes([0.80, 0.02, 0.15, 0.04]), label="Mass B", valmin=0.1, valmax=100, valinit=mB, valstep=1.11, facecolor='r') seperation_slider.label.set_size(12) mA_slider.label.set_size(12) mB_slider.label.set_size(12) mA_slider.vline.set_color('cyan') mB_slider.vline.set_color('violet') seperation_slider.vline.set_color('black') seperation_slider.on_changed(up) mA_slider.on_changed(up) mB_slider.on_changed(up) orbit(r, mA, mB, M) ax_reset = fig.add_axes([0.85, 0.08, 0.1, 0.04]) rbutton = Button(ax_reset, 'Reset', color='0.3', hovercolor='0.7') def reset(event): seperation_slider.reset() mA_slider.reset() mB_slider.reset() rbutton.on_clicked(reset) # Toggle for orbit lines visibility orbit_lines_visible = False def lines(event): global orbit_lines_visible if orbit_lines_visible: orbitA.set_alpha(0) # Hide orbit A orbitB.set_alpha(0) # Hide orbit B else: orbitA.set_alpha(0.5) # Show orbit A orbitB.set_alpha(0.5) # Show orbit B orbit_lines_visible = not orbit_lines_visible 
fig.canvas.draw_idle() ax_lines = fig.add_axes([0.85, 0.14, 0.1, 0.04]) button = Button(ax_lines, 'Orbit Lines', color='0.3', hovercolor='0.7') button.on_clicked(lines) ax_grid = fig.add_axes([0.85, 0.2, 0.1, 0.04]) gbutton = Button(ax_grid, 'Grid', color = '0.3', hovercolor='0.7') # Define the grid visibility state grid_visible = False # Initialize the grid visibility state def grid_lines(event): global grid_visible grid_visible = not grid_visible if grid_visible: ax.grid(True, color='white') else: ax.grid(False) # Force immediate redraw fig.canvas.draw() plt.show() | The issue is that instead of turning the grid off, you're only changing its color to black. Even though your background is black, the grid is still active, it’s just invisible. To disable the grid, call: ax.grid(False) and to enable, explicitly call: ax.grid(True, color='white', linestyle='--', linewidth=0.5) Example working script: import matplotlib.pyplot as plt from matplotlib.widgets import Button fig, ax = plt.subplots() fig.set_facecolor('black') ax.set_facecolor('black') for spine in ax.spines.values(): spine.set_color('white') ax.tick_params(colors='white') ax.grid(False) grid_state = [False] def toggle_grid(event): grid_state[0] = not grid_state[0] if grid_state[0]: ax.grid(True, color='white', alpha=0.5, linestyle='-', linewidth=0.5) else: ax.grid(False) fig.canvas.draw_idle() button_ax = plt.axes([0.8, 0.05, 0.15, 0.075]) button = Button(button_ax, 'Grid', color='0.3', hovercolor='0.7') button.on_clicked(toggle_grid) plt.show() | 2 | 1 |
79,509,732 | 2025-3-14 | https://stackoverflow.com/questions/79509732/how-to-add-a-case-sensitive-python-package-as-depencency | I need to add the package "modAL" (uppercase AL) as a dependency to a package. Since package names are (seemingly) case insensitive in pyproject.toml, it falls back to "modal", which is another package. How can I add modAL and not modal as a dependency? I can install modAL separately (it works), but I want to add it as a regular dependency and I have no clue how to do it. | The comment by juanpa.arrivillega solves my problem: [...] python package names are case insensitive, and on pypi, the "modAL" package seems to be distributed as "modAL-python" – Commented 2025-03-14 at 16:56:12Z I just replaced "modAL" by "modAL-python" and it works as expected. | 2 | 3
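The distinction the comment points at is between the distribution name (what goes in pyproject.toml or on the pip command line) and the import name. A small sketch, assuming modAL keeps its usual import path:

```python
import importlib.metadata

import modAL  # the import name stays "modAL"

# The installable distribution has a different name:
print(importlib.metadata.version("modAL-python"))
```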
79,515,343 | 2025-3-17 | https://stackoverflow.com/questions/79515343/how-to-compare-objects-based-on-their-superclass | How do I check for equality using super-class-level comparison? from dataclasses import dataclass @dataclass(order=True) class Base: foo: int @dataclass(order=True) class X(Base): x: str @dataclass(order=True) class Y(Base): y: float x = X(0, "x") y = Y(1, 1.0) How do I compare x and y on their Base attributes only? Base.__eq__(x,y) seems to do the job, is this the "pythonic" way? | Unlike C++, Java and certain OOP frameworks for C, in which classes and inheritances are implemented essentially as data structures that grow larger as classes are inherited, and bound methods, Python's objects can't be temporarily "cast" as one instance of their superclasses, where a superclass method can be called and just "see" the attributes it is interested in. Instead, the super() operator resolves to a proxy to the next class in the inheritance chain, but any methods called in this proxy (even __eq__) will still "see" the "self" argument as an instance of the subclass. The differences are subtle, actually - in general, the super way will behave the same as casting - but a key difference is that whatever method is called using super() will get the actual subclass instance as self, and all class-attributes and class bound identifiers relate to the subclass: the information about the instance type is on the instance itself, not in the variable slot reserved to receive the function as a parameter! While in static languages, the type information is in the variable declaration (the same if it is declared as a parameter) so, if a different object happens to be passed to a method, (through the use of cast in the caller code), it is treated as if it is the declared parameter, nonetheless. (It is a bit more subtle when inheritance gets into play, like in this case, but that is the idea) If dataclasses generated __eq__ method would be "dumber", they could just perform an "isinstance" check, and compare the hardcoded attributes where the __eq__ is defined. But upon being called with Base.__eq__(x, y), instead, it goes through the operand delegating heuristics suggested when overriding operators: so it declines comparing "itself" with an instance of another (sibling) class. Note that when making this call: Base.__eq__(x, y) it will behave like x is from the Base class, respecting the "L" in OOP's "SOLID" - What it doesn't assume is that it makes sense to use Base's attributes in the comparison to an instance of another subclass. However, in Python, for pure-Python defined classes, it is possible, certain rules respected, to assign the __class__ attribute itself to an instance. This has the same effect that the cast operator in C++ and Java: it will preserve the instance attributes as they are, while the instance itself will, for all effects, be an instance of the class assigned to the __class__ slot. The big difference to the cast operator in those languages is that this change is "permanent": the instance is not just "seem as a member of that class for the duration of this expression", the instance is rather converted into an instance of that class. So, we can build some code that will take both instances to be compared, find out a common ancestor, convert the operators to that ancestor class, and then just use the == operator normally. If this code is built into a __eq__ method of a class designed for this, the final expressions can be more convenient to use. 
And, of course, we can use the copy operator to avoid modifying the original instances: from copy import copy class SuperComp: def __init__(self, operand): self.operand = operand def __eq__(self, other): mr1 = type(self.operand).__mro__ mr2 = type(other).__mro__ if not (common:=(set(mr1) & set(mr2))): # no common ancestors: return False # take the "most advanced" common class in the first operand: for cls in mr1: if cls in common: break # make a copy of both operands and force the class to the common one: op1 = copy(self.operand) op2 = copy(other) op1.__class__ = cls op2.__class__ = cls return op1 == op2 Using this, with your dataclasses defined above, it is possible to do: In [81]: x = X(23, "x") In [82]: y = Y(23, 1.0) In [83]: SuperComp(x) == y Out[83]: True (As a side note: in this text I've been calling "cast", "copy" and "super" as "operators" - instead of methods, functions or keywords, because semantically that is what they are - but you are little likely finding docs referring to them using this term) | 1 | 2 |
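If reassigning `__class__` (even on copies) feels too heavy, a lighter alternative that stays within the question's setup is to compare only the fields declared on `Base` via `dataclasses.fields`. This is my own sketch, not the answer's approach, and it does not reuse the generated `__eq__`, but it avoids copies entirely:

```python
from dataclasses import dataclass, fields


@dataclass(order=True)
class Base:
    foo: int


@dataclass(order=True)
class X(Base):
    x: str


@dataclass(order=True)
class Y(Base):
    y: float


def base_eq(a: Base, b: Base) -> bool:
    # Compare only the attributes that Base itself declares (here just `foo`).
    return all(getattr(a, f.name) == getattr(b, f.name) for f in fields(Base))


print(base_eq(X(0, "x"), Y(0, 1.0)))  # True
print(base_eq(X(0, "x"), Y(1, 1.0)))  # False
```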
79,501,764 | 2025-3-11 | https://stackoverflow.com/questions/79501764/why-is-poetry-complaining-that-name-isnt-set-in-pyproject-toml | I set up a new Python Poetry project with poetry init. I'm not creating a package, so I added this to my pyproject.toml: [tool.poetry] package-mode = false The Operating Modes section of the Poetry documentation says that when operating in non-package mode, the name and version fields in pyproject.toml are optional, so I removed them. Now when I run a Poetry command, such as poetry env list, I get this error: The Poetry configuration is invalid: - project must contain ['name'] properties Why is this happening? | This is a known issue: Poetry Issue 10032. More context is available in Issues 10031 and 10033. It does not matter if poetry.tools.package-mode = false, the project.name field is still required, because per the official standardized specification of the [project] table: A [project] table must include a name key. This behavior was initially unclear in Poetry's documentation, but updates clarify it, as discussed in Poetry Issue 10033. The solution is to use the [tool.poetry] table instead of the [project] table. The [project] table is meant to be used only for projects intended to be packaged, built, and distributed as sdist and/or wheel, for example on a Python package index such as PyPI. So using the [project] table for a "non-package" project is nonsensical and you should use the [tool.poetry] table instead. | 1 | 4 |
79,515,104 | 2025-3-17 | https://stackoverflow.com/questions/79515104/sliding-window-singular-value-decomposition | Throughout the question, I will use Python notation. Suppose I have a matrix A of shape (p, nb) and I create a sliding window, taking the submatrix of p rows and n columns Am = A[:, m : m + n]. Now I want to compute it singular value decomposition (SVD): U_m, S_m, Vh_m = svd(Am) = svd(A[:, m:m+n] and then go to the next window m+1 and compute its SVD: U_m1, S_m1, Vh_m1 = svd(Am1) = svd(A[:, m+1:m+1+n] Computing full SVD from scratch has the complexity of o(min(m*n**2, n*m**2)). I want to compute the SVD of the m+1 window using the SVD of the m window without computing the full SVD from scratch, so it will be more efficient (with less complexity) than doing a full SVD from scratch. Also I prefer it won't resort to low-rank approximation but will assume that rank(Am) = min(p, n). U_m1, S_m1, Vh_m1 = sliding_svd(U_m, S_m, Vh_m, A[:,m+1:m+1+n], A[:,m],A[:,m+n]) A similar problem is called incremental SVD. To my problem, I call sliding SVD or moving SVD or sliding window SVD. I asked a similar question in Math Stack Exchange: https://math.stackexchange.com/questions/5046319/sliding-singular-value-decomposition-sliding-svd-moving-svd I am looking for code or a paper that can be implemented in Python using NumPy and SciPy to solve this problem, or at least some guiding (e.g. related papers that deal with a similar problem). | I'm going to assume square matrices, I leave the extension to non-square matrices to you (it may require further experimentation followed by a proof, but the general approach remains the same.) The comments by a user in the post you made in Mathematics stack exchange pretty much point you at the right answer. Lets call A0 = A[:,m:m+n] and A1 = A[:, m+1:m+1+n] following your nomenclature. Furthermore, to simplify things, let L(A) be the leftmost column of matrix A and R(A) be the rightmost column of matrix A. Then A1 = A0 P + uv^T for some suitable permutation matrix P and vectors u and v. In fact, P should be obvious (rotates all columns to the left by 1, and R(AP) = L(A), i.e. a cyclic rotation of columns.) The values of u and v are easy to work out, they are u = R(A1) - L(A) and v=[0,...,0,1]^T, that is v is a vector of all zeros save for 1 in the last entry. Now, as pointed out in the comments in Math stack exchange, if A0=USV^T then A0P=US(V^TP) where V^TP remains orthogonal, therefore is the SVD of A0P. 
Putting this altogether, as a coherent python example: import numpy as np def svd_rank1_update(U, S, Vt, u, v): Su = U.T @ u Sv = Vt @ v.T pu = u - U @ Su pv = v - Vt.T @ Sv norm_pu = np.linalg.norm(pu) norm_pv = np.linalg.norm(pv) if norm_pu > 1e-10: u_hat = pu / norm_pu U_aug = np.column_stack([U, u_hat]) else: U_aug = U if norm_pv > 1e-10: v_hat = pv / norm_pv V_aug = np.column_stack([Vt.T, v_hat]) else: V_aug = Vt.T k = len(S) S_mat = np.diag(S) top_left = S_mat + np.outer(Su, Sv) if norm_pu > 1e-10: top = np.column_stack([top_left, norm_pv * Su]) else: top = top_left if norm_pv > 1e-10: bottom = np.append(norm_pu * Sv, norm_pu * norm_pv) small_matrix = np.vstack([top, bottom]) else: small_matrix = top U_small, S_new, Vt_small = np.linalg.svd(small_matrix, full_matrices=False) U_new = U_aug @ U_small V_new = V_aug @ Vt_small.T Vt_new = V_new.T return U_new, S_new, Vt_new # Step 1: Create A and extract A0, A1 n = 5 m = 3 A = np.random.randn(n, m + n + 1) # Extra +1 for safe slicing A0 = A[:, m:m+n] # A0 shape (n x n) A1 = A[:, m+1:m+1+n] # A1 is next "sliding window" of size (n x n) # Step 2: Create cyclic left permutation matrix P P = np.roll(np.eye(n), -1, axis=1) # left shift columns # Step 3: Compute A0 P A0P = A0 @ P # Step 4: Compute u and v v = np.zeros(n) v[-1] = 1 # [0, 0, ..., 0, 1]^T u = A1[:, -1] - A0[:, 0] # u = R(A1) - L(A0) # Step 5: Compute SVD of A0 and use it for A0 P U, S, Vt = np.linalg.svd(A0, full_matrices=False) Vt = Vt @ P # Apply the column permutation to get V^T P (still orthogonal) # Step 6: Rank-1 update to get A1 SVD U1, S1, Vt1 = svd_rank1_update(U, S, Vt, u, v) # Step 7: Compare to actual SVD of A1 U_ref, S_ref, Vt_ref = np.linalg.svd(A1, full_matrices=False) def align_signs(U1, U2): signs = np.sign(np.sum(U1 * U2, axis=0)) return U1 * signs U1_aligned = align_signs(U1, U_ref) Vt1_aligned = align_signs(Vt1.T, Vt_ref.T).T print("||U1_aligned - U_ref|| =", np.linalg.norm(U1_aligned - U_ref)) print("||Vt1_aligned - Vt_ref|| =", np.linalg.norm(Vt1_aligned - Vt_ref)) print("||S1 - S_ref|| =", np.linalg.norm(S1 - S_ref)) A1_rebuilt = U1 @ np.diag(S1) @ Vt1 err = np.linalg.norm(A1 - A1_rebuilt) print("||A1 - A1_rebuilt|| =", err) Which outputs something like: ||U1_aligned - U_ref|| = 1.4339826964875264e-15 ||Vt1_aligned - Vt_ref|| = 1.2477860054076786e-15 ||S1 - S_ref|| = 4.447825579847493e-15 ||A1 - A1_rebuilt|| = 4.406110208978076e-15 SVD computation is typically O(n^3) for square matrices of size n whereas this uses the Brand method described in [1] which is O(n^2). [1] Matthew Brand, Fast low-rank modifications of the thin singular value decomposition, Linear Algebra and its Applications, Volume 415, Issue 1, 1 June 2006, Pages 20–30. https://doi.org/10.1016/j.laa.2005.07.021 | 4 | 2 |
79,506,903 | 2025-3-13 | https://stackoverflow.com/questions/79506903/cannot-submit-login-form-with-selenium-python-using-headless-remote-driver | I am trying to login to a website using Selenium. The relevant form HTML is as follows: <form class="ActionEmittingForm form-wrapper decorate-required" method="post" novalidate="" data-rf-form-name="LoginPageForm_SignInForm" data-rf-test-name="ActionEmittingForm"> <span data-rf-test-name="Text" class="field text Text required email-field emailInput"> <label class="label" data-rf-test-name="label" aria-label="Email Address. Required field.">Email Address</label> <span class="input"> <div role="presentation" class="value text"> <input type="text" name="emailInput" value="" placeholder=" " inputmode="" class="text" data-rf-input-event-type="onInput" data-rf-test-name="input" aria-label="Email Address" aria-required="true" tabindex="0"> </div> </span> </span> <span data-rf-test-name="Text" class="field text Text required password-field passwordInput"> <label class="label" data-rf-test-name="label" aria-label="Password. Required field.">Password</label> <span class="input"> <div role="presentation" class="value text"> <input type="password" name="passwordInput" value="" placeholder=" " inputmode="" class="password" data-rf-input-event-type="onInput" data-rf-test-name="input" aria-label="Password" aria-required="true" tabindex="0"> </div> </span> </span> <button type="submit" class="button Button primary submitButton" tabindex="0" role="button" data-rf-test-name="submitButton"> <span> <span>Sign In</span> </span> </button> </form> I am using the Selenium Python package. The host running Selenium/python is a Debian 12 server (without a GUI) in my network. So the driver must be run in headless more. I am using a remote driver, running in Docker on the same host with a standalone Firefox browser. I am using Docker because I don't want to manage browser versions using packages on my host. Deploying a pinned standalone browser is very easy with Docker. I've chosen to try Firefox, because I prefer that over Chrome. But I could be convinced to switch if that is part of the problem. Selenium version is 4.29.0. Firefox version is 135.0.1. Geckodriver is 0.36.0. Python version 3.11. 
Relevant code is as follows: import time import docker from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys def setupDockerFirefoxSeleniumContainer(): client = docker.from_env() client.images.pull("selenium/standalone-firefox:135.0.1-geckodriver-0.36.0-20250303") firefox = client.containers.run("selenium/standalone-firefox:135.0.1-geckodriver-0.36.0-20250303", detach = True, name = "firefox", ports = {4444: 4444, 7900: 7900}, shm_size = "2G", environment = ["SE_START_XVFB=false", "SE_SCREEN_WIDTH=1200", "SE_SCREEN_HEIGHT=900"]) return firefox def setupSeleniumFirefoxDriver(): try: options=webdriver.FirefoxOptions() options.add_argument("--headless") options.add_argument("--disable-gpu") options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:36.0) Gecko/20100101 Firefox/135.0") driver = webdriver.Remote( command_executor="http://127.0.0.1:4444/wd/hub", options=options ) driver.get("https://httpbin.org/ip") print("Successfully started Firefox driver.") return driver except Exception as e: print("Caught an exception: ", repr(e)) def cleanupDriver(d): print("Cleaning up driver") d.quit() def cleanupContainer(c): print("Cleaning up container") c.stop() c.remove() def siteLogin(driver, username, password): driver.get("https://www.MYWEBSITE.com/login") driver.implicitly_wait(5) driver.get_screenshot_as_file("/home/fresh_login_screen.png") username_box = driver.find_element(By.CSS_SELECTOR, "input[name='emailInput']") password_box = driver.find_element(By.CSS_SELECTOR, "input[name='passwordInput']") submit_button = driver.find_element(By.CSS_SELECTOR, "button.submitButton") time.sleep(2) username_box.send_keys(username) time.sleep(2) password_box.send_keys(password) time.sleep(2) driver.get_screenshot_as_file("/home/login_screen_keys_sent.png") time.sleep(2) #submit_button.click() # doesn't work #submit_button.submit() # doesn't work #password_box.submit() # doesn't work #password_box.send_keys(Keys.ENTER) # doesn't work submit_button.send_keys(Keys.ENTER) # doesn't work print("Waiting for login to process.") time.sleep(5) driver.get_screenshot_as_file("/home/submitted_login.png") time.sleep(10) driver.get_screenshot_as_file("/home/another_screenshot.png") print(driver.page_source) def main(): firefoxContainer = setupDockerFirefoxSeleniumContainer() print("Waiting 5 seconds for Selenium server to start") time.sleep(5) firefoxDriver = setupSeleniumFirefoxDriver() try: siteLogin(firefoxDriver, "[email protected]", "crazyHardPassword") except Exception as e: print("Caught an exception when trying siteLogin: ", repr(e)) finally: cleanupDriver(firefoxDriver) cleanupContainer(firefoxContainer) if __name__=="__main__": main() The login submission is not working. The page just seems to hang. In my screenshots, I can see my username and password have been sent to the correct form boxes. However, I've tried several ways to submit the form (send_keys(Keys.ENTER), click(), and submit()), but all seem to result in nothing happening. The screenshot looks the same before and after (with my username and password sitting in the form, but it looks like no form submission was attempted). Any tips for how to submit this form? | Try the below code. It is working in my machine with headless mode. I have used many chrome_options, to be honest I am not sure which one did the trick. I think it's adding user-agent. 
from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC chrome_options = Options() chrome_options.add_argument("--headless=new") chrome_options.add_argument("--disable-gpu") chrome_options.add_argument("--window-size=1920x1080") chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("--disable-dev-shm-usage") chrome_options.add_argument("--disable-blink-features=AutomationControlled") chrome_options.add_argument("start-maximized") chrome_options.add_argument("disable-infobars") chrome_options.add_argument("--remote-debugging-port=9222") # Use a user-agent to avoid detection chrome_options.add_argument( "user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36" ) driver = webdriver.Chrome(options=chrome_options) url = "https://www.redfin.com/login" driver.get(url) wait = WebDriverWait(driver, 10) username = "your_username" password = "your_password" wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='emailInput']"))).send_keys(username) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='passwordInput']"))).send_keys(password) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button.submitButton"))).click() # Just to ensure, print page title to confirm successful login print(driver.title) driver.quit() Console result: Login | Redfin Process finished with exit code 0 | 1 | 1 |
79,515,806 | 2025-3-17 | https://stackoverflow.com/questions/79515806/using-c-iris-or-iris-variants-to-decompose-c-free-while-ignoring-mimic-joints | I previously asked about modeling a 2-finger gripper as a 1-DOF system in Drake: How to make a gripper with two fingers (2DOF) have 1 DOF making one finger symmetric to the other. My gripper had two fingers (each with a prismatic joint) and I wanted to represent it as a 1-DOF system in Drake. The answer provided indicated to use the <mimic> tag in an SDF file to achieve this. However, this approach does not reduce the num_positions() in the plant, meaning both finger joints are still considered in the configuration space. Now, I am working with C-IRIS to generate convex polytopes and need to ensure that the gripper fingers are treated as having only 1 DOF in configuration space while still considering collisions correctly. The right finger is not an independent variable and should move symmetrically to the left finger. C-IRIS should account for collisions involving the right finger, even though it is not explicitly controlled. How can I make C-IRIS ignore the redundant DOF when computing convex polytopes? At the same time: How can I ensure that collisions involving the right finger are still considered, even though it follows the left finger's motion? Any guidance on handling this in Drake would be greatly appreciated! I am using the python bindings with Pydrake version 1.35.0 | It is doable inside C-IRIS (well, not out-of-the-box). What C-IRIS does is that it certifies the following problem ∀s ∈ {s | C*s <= d}, ∃ a(s), b(s), such that the separating plane a(s) * x + b(s) = 0 separates the geometries A and B in the Cartesian space. Normally we search for the parameter C and d in the polytope {s | C*s <= d}. In your case, you want to say that your s variable also live in a subspace (specifically, to enforce symmetry of the joints, you probably want s_left + s_right = 0, which are two additional rows in the constraint C * s <= d). And then you can ask whether this polytope {s | C*s <= d} (which includes the constraint s_left + s_right = 0 ) is collision free (for example, using FindSeparationCertificateGivenPolytope ). So given a polytope C * s <= d (which lives in the subspace s_left+s_right=0 ), you can query the no-collision certificate. Searching for C and d is harder, as now you need to impose the additional constraint on C and d, such that it includes s_left + s_right <= 0 and -s_left - s_right <= 0. You could call InitializePolytopeSearchProgram and then impose additional constraints on C and d (for example, some entries in C needs to be 1 or -1, some corresponding entries in d need to be 0). And then solve the returned optimization problem with the right objective function. You can refer to FindPolytopeGivenLagrangian function in cspace_free_polytope.cc for an implementation. | 1 | 1 |
79,507,557 | 2025-3-13 | https://stackoverflow.com/questions/79507557/how-to-send-slack-api-request-to-websocket-without-bolt-or-other-sdk | I have the code below. It connects correctly to Slack, and authenticate correctly. I want to sent a conversations.list request to list the channels. How can I do that without using Bolt or some other SDK? The system where this runs is locked down, so I'm not getting to install anything further than websockets. I think the only missing part is figuring what to send on the websocket to request a channel list. Currently it outputs: Opened connection {"type":"hello","num_connections":1,"debug_info":{"host":"applink-11","build_number":105,"approximate_connection_time":18060},"connection_info":{"app_id":"Redacted"}} The API I'm after is https://api.slack.com/methods/conversations.list Code is #!/usr/bin/env python2.6 import httplib import requests import websocket import argparse def on_message(ws, message): print(message) ws.send("list") def on_error(ws, error): print(error) def on_close(ws, close_status_code, close_msg): print("### closed ###") def on_open(ws): print("Opened connection") def run_with(url): # websocket.enableTrace(True) ws = websocket.WebSocketApp(url, on_open=on_open, on_message=on_message, on_error=on_error, on_close=on_close) ws.run_forever() def main(): parser = argparse.ArgumentParser() parser.add_argument("token") args = parser.parse_args() url = 'https://slack.com/api/apps.connections.open' headers = { 'content-type': 'application/x-www-form-urlencoded', 'Authorization': 'Bearer ' + args.token} r = requests.post(url, headers=headers) if r.json()['ok']: url = r.json()['url'] run_with(url) else: print(r.content) if __name__ == '__main__': main() | You can’t get the channel list through the same WebSocket. Slack needs a normal HTTP request for conversations.list. Usually, you'd use WebSocket for real-time events, but you can open the socket and still use normal HTTP calls to list the channels. import requests import websocket import argparse def on_message(ws, message): print("Got message:", message) def on_error(ws, error): print("Error:", error) def on_close(ws, close_status_code, close_msg): print("Socket closed") def on_open(ws): print("Socket opened") def run_socket(url): ws = websocket.WebSocketApp( url, on_open=on_open, on_message=on_message, on_error=on_error, on_close=on_close ) ws.run_forever() def list_channels(token): url = "https://slack.com/api/conversations.list" headers = { "Authorization": "Bearer " + token, "Content-Type": "application/x-www-form-urlencoded" } data = { "limit": "100" # You can change this if you want more channels } response = requests.post(url, headers=headers, data=data) return response.json() def main(): parser = argparse.ArgumentParser() parser.add_argument("token") args = parser.parse_args() token = args.token # 1. Open the Slack socket URL open_url = "https://slack.com/api/apps.connections.open" headers = { "Authorization": "Bearer " + token, "Content-Type": "application/x-www-form-urlencoded" } r = requests.post(open_url, headers=headers) data = r.json() # 2. If the socket opened okay, list channels using HTTP if data.get("ok"): channels = list_channels(token) print("Channels:", channels) # 3. Then connect to the socket socket_url = data["url"] run_socket(socket_url) else: print("Could not open socket:", r.text) if __name__ == "__main__": main() | 2 | 2 |
79,514,743 | 2025-3-17 | https://stackoverflow.com/questions/79514743/cant-install-matplotlib-in-python-3-6 | I've created a Python 3.6 environment in my WSL Ubuntu 22.04 operating system using virtualenv -p python3.6 <my_env>. I need to download some packages, particularly matplotlib, but I'm having some problems. When I run pip install matplotlib from terminal, it returns me ERROR: Command errored out with exit status 1, as it fails to build wheel for matplotlib, kiwisolver and pillow. I can't upgrade the Python version. I don't know what to do, I've tried everything I found online, but it's not working. Output Building wheel for matplotlib (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/andrea/Andrea/env/pyaneti_env/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3byxbnl0/matplotlib_5a7e3488d8ed4fa586ccee5566cd5543/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3byxbnl0/matplotlib_5a7e3488d8ed4fa586ccee5566cd5543/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-e23pjohb cwd: /tmp/pip-install-3byxbnl0/matplotlib_5a7e3488d8ed4fa586ccee5566cd5543/ Complete output (1714 lines): [...] AttributeError: install_layout ERROR: Failed building wheel for matplotlib Analogue results for kiwisolver and pillow | Here I am answering my own question. After trying everything you and the internet suggested, I gave up on using virtualenv. As Matt suggested in the comments, I downloaded miniconda and created an environment there. I was able to download all the packages and the code is running nicely. Thanks for your help. | 2 | 0 |
79,509,565 | 2025-3-14 | https://stackoverflow.com/questions/79509565/creating-a-dummy-pydantic-model-generator | I want to create a method on a base Pydantic model to instantiate child models with dummy data. from __future__ import annotations from pydantic import BaseModel class BaseModelWrapper(BaseModel): @classmethod def make_dummy(cls) -> BaseModelWrapper: for name, field in cls.model_fields.items(): if not field.is_required(): continue # How can I create values based on the field type? print(field.annotation) return cls() class XXX(BaseModelWrapper): a: int | None b: str c: int d: int | None = None e: list[str] # These should be equivalent XXX.make_dummy() XXX(a=None, b="", c=0, e=[]) The part I'm struggling with is how to programmatically map type annotations to values. Let's say field.annotation is int | None. I could just create a dictionary to map that to None, but there are tons of possible combinations of types, so this doesn't scale. There must be a cleaner way to create a value for each field. | I ultimately used @kamilcuk's answer to create the following method from __future__ import annotations from pydantic import BaseModel from types import UnionType from typing import Any, Union, get_args, get_origin def is_union_type(t: type | UnionType) -> bool: return get_origin(t) in [Union, UnionType] class BaseModelWrapper(BaseModel): @classmethod def make_dummy(cls) -> Self: kwargs = {} for name, field in cls.model_fields.items(): if not field.is_required(): continue if field.annotation is None: kwargs[name] = None continue t = field.annotation if is_union_type(t): types: tuple[type[Any]] = get_args(t) if type(None) in types: kwargs[name] = None continue t = types[0] try: # Will error if given `list[str]` / `dict[str]` if issubclass(t, BaseValidator): kwargs[name] = t.make_dummy() else: kwargs[name] = t() except TypeError: kwargs[name] = t() return cls(**kwargs) | 2 | 0 |
79,515,678 | 2025-3-17 | https://stackoverflow.com/questions/79515678/inspect-signature-on-class-methods | Can someone help me understand why the Signature.bind(...).arguments are different when I call directly for a class method vs. when used in a decorator? I have the decorator below. def decorator(func): @functools.wraps(func) def inner(*args, **kwargs): print(inspect.signature(func).bind(*args, **kwargs).arguments) return func(*args, **kwargs) return inner When this decorator runs on a class method, I get the cls parameter in the signature like below. class MyClass: @classmethod @decorator def mymethod(cls, a, b): return a + b MyClass.mymethod(1, 2) # returns: # {'cls': <class '__main__.MyClass'>, 'a': 1, 'b': 2} # 3 If I call inspect.signature on the class method directly, I get a different result or an error. This first block runs successfully but does not return the class parameter. print(inspect.signature(MyClass.mymethod).bind(1, 2).arguments) # returns {'a': 1, 'b': 2} This errors out. print(inspect.signature(MyClass.mymethod).bind(MyClass, 1, 2).arguments) # TypeError: too many positional arguments | The classmethod decorator changes the signature of a function. When classmethod is applied to a function that is stored on a class, it automatically fills the first parameter of the function as the class. This is much the same as how self parameter is filled as the object a method is called on. If you print out the two signatures you can see this ie. >>> inspect.signature(MyClass.mymethod) # bound method, cls need not be supplied manually <Signature (a, b)> >>> inspect.signature(MyClass.mymethod.__func__) # gets unbound, underlying function <Signature (cls, a, b)> That is, your decorator is working with a pure function. This is because your decorator is applied before the classmethod decorator. Decorators are applied bottom to top (in the order they appear closest to the function def). ie. @applied_third @applied_second @applied_first def foo(): pass Your decorator must be applied first. This is due to how classmethods work, and something called non-data descriptors. As such, you MUST supply the cls argument as it was supplied to it. When you do MyClass.mymethod, you get a bound method, whose first parameter has been bound to MyClass. As such you MUST NOT supply the cls parameter manually (as it is has already been done automatically for you). Try the following instead: >>> inspect.signature(MyClass.mymethod).bind(1, 2) {'a': 1, 'b': 2} | 1 | 1 |
79,515,692 | 2025-3-17 | https://stackoverflow.com/questions/79515692/how-to-reference-class-static-data-in-decorator-method | Is it possible to make the global variable handlers a static class variable? from typing import Callable, Self # dispatch table with name to method mapping handlers:dict[str, Callable[..., None]] = {} class Foo: # Mark method as a handler for a provided request name @staticmethod def handler(name: str) -> Callable[[Callable], Callable]: def add_handler(func: Callable[..., None]) -> Callable: handlers[name] = func # This line is the problem return func return add_handler # This method will handle request "a" @handler("a") def handle_a(self) -> None: pass # Handle one request with provided name, using dispatch table to determine method to call def handle(self, name: str) -> None: handlers[name](self) The goal is to have the decorator add the decorated method into a dict that is part of the class. The dict can then be used to dispatch methods via the name used in request messages it will receive. The problem seems to be how to refer to class data inside a decorator. Of course using self.handlers won't work, as self isn't defined. The decorator is called when the class is defined and there aren't any instances of the class created yet for self to reference. Using Foo.handlers doesn't work either, as the class name isn't defined until after the class definition is finished. If this wasn't a decorator, then handler() could be defined as a @classmethod, and then the class would be the first argument to the method. But it doesn't appear possible to make a decorator a class method. An example of how this might be used, would be as a handler for a server requests, e.g. a websocket server: from websockets.sync.server import serve def wshandler(websocket): f = Foo() # Create object to handle all requests for msg in websocket: decoded = json.loads(msg) # Decoded message will have a field named 'type', which is a string indicating the request type. Foo.handler(decoded['type']) with serve(wshandler, "localhost", 8888) as server: server.serve_forever() | When handler is called, the name handlers will exist in the class statement's namespace, but not yet be bound as a class attribute (as you've noticed). It also is not part of any scope that handler will have access to, since the class statement does not define any scope. You can, however, inject a reference to the dict into the scope of handler by using handlers as the default value for a parameter you will otherwise never provide an argument for. Inside handle, you can access handlers as an ordinary class attribute. (Note, too, that handler does not need to be a static method, because you only use the regular function that will eventually be bound to a class attribute. You could even add del handler to the end of the class statement to prevent the attribute from being defined, because by that point you are done calling handler.) from typing import Callable, Self # dispatch table with name to method mapping class Foo: handlers: dict[str, Callable[..., None]] = {} def handler(name: str, _h=handlers) -> Callable[[Callable], Callable]: def add_handler(func: Callable[..., None]) -> Callable: _h[name] = func return func return add_handler @handler("a") def handle_a(self) -> None: pass def handle(self, name: str) -> None: self.handlers[name](self) | 1 | 3 |
79,513,685 | 2025-3-17 | https://stackoverflow.com/questions/79513685/is-it-reasonable-to-simplify-product-variant-design-using-notes-instead-of-compl | I'm building an application where product variants are intended to be handled as physical products already prepared and listed manually. Instead of using a conventional approach with complex relations between Product, Option, OptionValue, and SKUValue tables, I'm trying to simplify the design. 💡 ERD Design: +--------------+ +-----------------+ | Product | | ProductVariant | +--------------+ +-----------------+ | id (PK) |<------>| id (PK) | | name | | product_id (FK) | | owner_id (FK)| | note | | created_at | | stock | | updated_at | | price | +--------------+ +-----------------+ In the ProductVariant table, the note field is a simple text field where users can manually enter descriptions like "Size: XL, Color: Red". 🔍 Django Models Based on This Design: from django.db import models from django.contrib.auth import get_user_model User = get_user_model() class Product(models.Model): name = models.CharField(max_length=255) owner = models.ForeignKey(User, on_delete=models.CASCADE, related_name='products') created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) def __str__(self): return self.name class ProductVariant(models.Model): product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='variants') note = models.TextField() # Example: "Size: XL, Color: Red" stock = models.PositiveIntegerField() price = models.DecimalField(max_digits=10, decimal_places=2) def __str__(self): return f"{self.product.name} - {self.note}" 🎯 Why I'm Doing This: The application is designed to handle product variants that are often predefined and don't change dynamically. Users will manually input variant descriptions based on the actual physical products they have. The goal is to avoid overengineering by eliminating unnecessary tables and relationships. 🤔 What I'm Concerned About: I know that most applications use well-structured relational models for managing product variants. However, implementing such complex structures seems overkill for my use case, and I'm worried it might create unnecessary complications. I'm concerned this simplified approach could be considered "bad design" even if it suits my use case better. ❓ Question: Is this simplified design using manual notes for product variants acceptable in scenarios where variants are predefined and manually recorded? What are the potential pitfalls I should be aware of when using this design compared to a more "standard" relational model? | If the design fits your use-case, it is by definition not bad. Don't worry about best-practices if your current practice in actuality fulfills all your needs, doesn't expose you to any downsides, and is sufficiently performant. Current design For the note field: consider using JSONField (though note the database compatibility point) instead of TextField. Storing the notes as structured data will make frontend parsing easier and backend searching/filtering/etc considerably easier (in particular if you're using Postgres, see the PG notes in the linked documentation). It also enables you to store more than one value-point, as well as metadata and parsing instructions (such as font size, color, or whatever you like) about the attribute. 
Example: [ { "1": { "title": "Size", "us_value": "XL", "eu_value": 50, "title_color": "red", "value_color": "blue" } } ] Potential pitfalls: This approach will fall apart quickly if you want to let the user add complex data (i.e. any data that "exceeds" what easily fits in the key-value paradigm). Data aggregation over the user-defined attributes will be cumbersome to implement, even if you're using JSONField. If you're using TextField it will be very painful for practical reasons. If not using structured data like the JSON approach above, searching and filtering will be borderline impossible unless performance is absolutely no issue. Bulk updates, or any situation where you need to update more than one instance at a time, will be painful both in logic and performance. Alternate design The "overengineered" approach would probably be something like a set of EAV-models. I know you didn't ask specifically for this, but it's always wise to know the neighboring concepts. django-eav2 provides a good overview of EAV and its use-cases, and you could either use a library in that vein if it feels right or roll your own using generic relations. | 1 | 1 |
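A sketch of the JSONField variant suggested above, assuming Django 3.1+ (where models.JSONField works across backends) and using a flat key/value dict rather than the richer per-attribute metadata shown in the JSON example; the commented filter uses a key lookup, which is most useful on PostgreSQL:

```python
from django.db import models


class ProductVariant(models.Model):
    product = models.ForeignKey(
        "Product", on_delete=models.CASCADE, related_name="variants"
    )
    # Structured notes instead of free text, e.g. {"Size": "XL", "Color": "Red"}
    note = models.JSONField(default=dict)
    stock = models.PositiveIntegerField()
    price = models.DecimalField(max_digits=10, decimal_places=2)


# Server-side filtering then becomes possible, e.g.:
#   ProductVariant.objects.filter(note__Size="XL")
```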
79,514,922 | 2025-3-17 | https://stackoverflow.com/questions/79514922/most-performant-approach-to-find-closest-match-from-unordered-collection | I'm wondering what the best approach for finding the closest match to a given value from a collection of items is. The most important part is the lookup time relative to the input size, the data can be shuffled and moved around as much as needed, as long as the lookup is therefore faster. Here the initial script: MAX = 1_000_000 MIN = 0 target: float = 333_333.0 collection: set[float] = {random.uniform(MIN, MAX) for _ in range(100_000)} # --- Code here to find closest as fast as possible, now and for future lookups --- assert closest in collection and all(abs(target - v) >= delta for v in collection) Approach 1 Iterate through all values and update closest accordingly. Very simple but very slow for big collections. closest: float = next(iter(collection)) # get any element delta: float = abs(closest - target) for v in collection: tmp_delta = abs(v - target) if tmp_delta < delta: closest = v delta = tmp_delta Approach 2 Sort data and then find closest via binary search. Sort time is O(n log n), but future lookups will only take O(log n) time to find! sorted_collection: list[float] = sorted(collection) idx = bisect.bisect_right(sorted_collection, target) if idx == 0: return sorted_collection[0] if idx == len(sorted_collection): return sorted_collection[-1] before, after = sorted_collection[idx - 1], sorted_collection[idx] return before if target - before <= after - target else after Approach 3 Using dict with some custom hashing? I thought about implemenenting a __hash__ method for a custom class that could then be used to find values with (armored) O(1) lookup time, but couldn't get anything quite working without making some previous assumptions about the data involved and wonder if it is even possible. And that is where I got to in a nutshell. I am wondering if there is a faster way than the simple sorting + binary search approach and if my idea with dict + custom hash function is somehow possible. | With floats, the binary search scales like the best known. However it requires looking at locations in memory at some distance from each other. A B-tree also scales like O(log(n)), but by locating sets of decisions close to each other, it gets an order of magnitude constant improvement. The hash idea doesn't work because a good hash function will scatter nearby values in a pseudorandom way. And therefore you have no better way to find those nearby values than to search the whole hash. | 5 | 1 |
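Following the answer's point about memory locality, a contiguous sorted NumPy array plus np.searchsorted is a cache-friendly version of approach 2, and searchsorted also accepts whole arrays of targets if lookups come in batches. A sketch using the question's setup:

```python
import random

import numpy as np

MIN, MAX = 0, 1_000_000
collection = {random.uniform(MIN, MAX) for _ in range(100_000)}
target = 333_333.0

arr = np.sort(np.fromiter(collection, dtype=float, count=len(collection)))


def closest(t: float) -> float:
    # Clamp the insertion point so idx-1 and idx are always valid neighbours.
    idx = int(np.clip(np.searchsorted(arr, t), 1, len(arr) - 1))
    left, right = arr[idx - 1], arr[idx]
    return float(left if t - left <= right - t else right)


print(closest(target))
```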
79,510,719 | 2025-3-15 | https://stackoverflow.com/questions/79510719/can-i-customize-compile-options-in-nuitka | Can I customize the compile options for GCC in Nuitka (e.g. -O2, -static)? Update: I read the user manual but found nothing. I also checked the command line options of nuitka and found --generate-c-only and --show-scons. Maybe these can help me build the executables, but I don't know how to complete the rest of the process. | From the user manual: Providing extra Options to Nuitka C compilation Nuitka will apply values from the environment variables CCFLAGS, LDFLAGS during the compilation on top of what it determines to be necessary. Beware, of course, that this is only useful if you know what you are doing, so should this pose issues, raise them only with perfect information. Maybe this could be helpful! | 2 | 1 |
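As a concrete, if slightly contrived, way to apply what the quoted manual says from Python itself; the flags shown are simply the ones from the question, and whether they make sense depends on your toolchain. This is equivalent to prefixing the shell command with the environment variables.

```python
import os
import subprocess
import sys

env = dict(os.environ, CCFLAGS="-O2", LDFLAGS="-static")
subprocess.run([sys.executable, "-m", "nuitka", "your_script.py"], env=env, check=True)
```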
79,513,766 | 2025-3-17 | https://stackoverflow.com/questions/79513766/how-to-list-a-2d-array-in-a-tabular-form-along-with-two-1d-arrays-from-which-it | I'm trying to calculate a 2d variable z = x + y where x and y are 1d arrays of unequal dimensions (say, x- and y-coordinate points on a spatial grid). I'd like to display the result row-by-row in which the values of x and y are in the first two columns and the corresponding value of z calculated from these x and y values are in the third, something like the following for x = [1, 2] and y = [3, 4, 5]: x y z 1 3 4 1 4 5 1 5 6 2 3 5 2 4 6 2 5 7 The code below works (using lists here, but will probably need numpy arrays later): import pandas as pd x = [1, 2] y = [3, 4, 5] col1 = [] col2 = [] z = [] for i in range(len(x)): for j in range(len(y)): col1.append(x[i]) col2.append(y[j]) z.append(x[i]+y[j]) df = pd.DataFrame(zip(col1, col2, z), columns=["x", "y", "z"]) print(df) Just wondering, is there a better way of doing this without using the loop by some combination of meshgrid, indices, flatten, v/hstack, and reshape? The size of x and y will typically be around 100. | Here is one way: import numpy as np import pandas as pd x = np.asarray([1, 2])[:, np.newaxis] y = np.asarray([3, 4, 5]) x, y = np.broadcast_arrays(x, y) z = x + y df = pd.DataFrame(zip(x.ravel(), y.ravel(), z.ravel()), columns=["x", "y", "z"]) print(df) # x y z # 0 1 3 4 # 1 1 4 5 # 2 1 5 6 # 3 2 3 5 # 4 2 4 6 # 5 2 5 7 But yes, you can also use meshgrid instead of orthogonal arrays + explicit broadcasting. You can also use NumPy instead of Pandas. x = np.asarray([1, 2]) y = np.asarray([3, 4, 5]) x, y = np.meshgrid(x, y, indexing='ij') z = x + y print(np.stack((x.ravel(), y.ravel(), z.ravel())).T) # array([[1, 3, 4], # [1, 4, 5], # [1, 5, 6], # [2, 3, 5], # [2, 4, 6], # [2, 5, 7]]) | 2 | 5 |
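Since the question starts from plain lists, itertools.product gives the same row ordering without meshgrid at all; a small alternative sketch of my own:

```python
import itertools

import pandas as pd

x = [1, 2]
y = [3, 4, 5]

df = pd.DataFrame(list(itertools.product(x, y)), columns=["x", "y"])
df["z"] = df["x"] + df["y"]
print(df)
```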
79,513,414 | 2025-3-17 | https://stackoverflow.com/questions/79513414/how-to-specify-package-as-an-extra-in-poetry-pyproject-toml | I have a package with a pyproject.toml that has a core package & an optional GUI package with its own extra dependencies. This is the project structure: . └── my_project/ ├── src/ │ └── my_package/ │ ├── core/ │ │ └── __init__.py │ ├── gui/ │ │ └── __init__.py │ └── __init__.py ├── poetry.lock └── pyproject.toml PEP 771 specifies how extras can be installed using square brackets. I have found plenty of resources on how to install extras, but frustratingly none on how to define them in your own project. How do I set up my project so that a user installing with poetry add my_package installs just the core package, but poetry add my_package[gui] installs both core and gui? | Extras don't work like that; they only apply to your package's dependencies. If you put the code in the same package, it will get bundled in when the package is built. In your case, regardless of whether the user specifies [gui] or not, they will get the gui code bundled into the package. What you can do is use extras to prevent all the gui dependencies from being installed when the user isn't using the gui. To make the error pretty clear for the user, I've put something like this in the optional module's __init__.py: try: import some_gui_dependency ... except ModuleNotFoundError: raise Exception("Gui dependencies not installed, use `poetry add <package> -e gui` to use Gui") Generally python code is tiny, so it doesn't really matter that you're shipping code that won't work; it's usually the dependencies, with lots of binaries etc., that really make things take a while to install. If you're trying to hide the gui code from the user (maybe for commercial reasons?) you will need to make them separate packages. | 1 | 1
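The part the answer leaves implicit is what the extra actually looks like in pyproject.toml: the GUI-only dependencies are declared optional and grouped under [tool.poetry.extras]. Below is a sketch of such a fragment, parsed with tomllib only to keep the example runnable as Python; pyside6 stands in for whatever the real GUI dependency is, and the exact keys should be checked against the Poetry docs for your Poetry version.

```python
import tomllib  # Python 3.11+

pyproject = tomllib.loads(
    """
[tool.poetry.dependencies]
python = "^3.11"
pyside6 = { version = "^6.6", optional = true }

[tool.poetry.extras]
gui = ["pyside6"]
"""
)
print(pyproject["tool"]["poetry"]["extras"])  # {'gui': ['pyside6']}
```

With something like that in place, installing the bare package skips the optional dependency, while installing it with the gui extra pulls it in.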
79,512,868 | 2025-3-16 | https://stackoverflow.com/questions/79512868/how-to-read-excel-merged-cell-properties-value-using-python | I need to read data from an Excel file. The first cell contains the property name, and the second cell contains the property value. However, some of the property names in the first column are merged across two or more columns, and the corresponding values are in the next cell. For example, the property name "Ref" is in columns A and B, and its value is in column C. I want to retrieve the value of the "Ref" property from column C in my Excel file. Here is my excel image: I am using python. Here is the output: Approval Memo of : SHILPI AKTER Name of the Applicant : SHILPI AKTER Name of Territory : Comilla Total Family Expenses : 30000 Ref : N/A Amount : N/A Total Amount : 3000 Ref and Amount Properties value not found. Here is my code: import os import openpyxl from openpyxl.utils import column_index_from_string file_path = r"D:\file\input\example.xlsx" if os.path.exists(file_path): print("File exists!") else: print("File not found! Check the path.") exit() target_sheet = "Output Approval Templete" # Define the properties to extract properties = [ "Approval Memo of", "Name of the Applicant", "Name of Territory", "Total Family Expenses", "Ref", "Amount", "Total Amount" ] # Function to get the actual value from a merged cell def get_merged_cell_value(sheet, row, col): for merged_range in sheet.merged_cells.ranges: min_row, min_col, max_row, max_col = merged_range.bounds # Extract merged cell bounds if min_row <= row <= max_row and min_col <= col <= max_col: return sheet.cell(min_row, min_col).value # Return the first cell's value of the merged range return sheet.cell(row, col).value # Function to format numeric values properly def format_value(value): if isinstance(value, float) and value > 1e10: # Large numbers like NID return str(int(value)) # Convert to integer and string to avoid scientific notation elif isinstance(value, (int, float)): # General number formatting return str(value) elif value is None: return "N/A" # Handle missing values return str(value).strip() try: # Load the workbook wb = openpyxl.load_workbook(file_path, data_only=True) if target_sheet not in wb.sheetnames: print(f"Sheet '{target_sheet}' not found in the file.") else: ws = wb[target_sheet] extracted_data = {} # Iterate over rows to extract data for row in ws.iter_rows(): for cell in row: # Check if the cell value is a property we are looking for if cell.value and isinstance(cell.value, str) and cell.value.strip() in properties: prop_name = cell.value.strip() col_idx = cell.column # Get column index (1-based) next_col_idx = col_idx + 1 # Next column index # Ensure next column exists within sheet bounds if next_col_idx <= ws.max_column: # Check if the cell is merged, and get its value next_value = get_merged_cell_value(ws, cell.row, next_col_idx) # Store the formatted value for the property extracted_data[prop_name] = format_value(next_value) # Store extracted value # Print extracted values for key, value in extracted_data.items(): print(f"{key} : {value}") except Exception as e: print(f"Error loading workbook: {e}") Please help me to find out merge cell properties value. | Just get the last cell in the range column number and add 1 as you have with the other fields. This code assumes the merge cells are row only. 
Also assumes the key name is cells is exactly the same as the name in the properties List import os import openpyxl def get_next_col(lc): # lc = Left cell in the merge range for merge in ws.merged_cells: if lc in merge.coord: print(f"Merge Range: {merge.coord}") return merge.top[-1][1]+1 # Return 2nd value of last tuple incremented by 1 def format_value(value): if isinstance(value, float) and value > 1e10: # Large numbers like NID return str(int(value)) # Convert to integer and string to avoid scientific notation elif isinstance(value, (int, float)): # General number formatting return str(value) elif value is None: return "N/A" # Handle missing values return str(value).strip() # Define the properties to extract properties = [ "Approval Memo of", "Name of the Applicant", "Name of Territory", "Total Family Expenses", "Ref", "Amount", "Total Amount" ] # Init Dictionary extracted_data = {} # Set working sheet name target_sheet = "Output Approval Templete" # Load the workbook file_path = r"D:\file\input\example.xlsx" if os.path.exists(file_path): print("File exists!\n") else: print("File not found! Check the path.") exit() wb = openpyxl.load_workbook(file_path, data_only=True) ws = wb.active # Check working sheet exists if target_sheet not in wb.sheetnames: print(f"Sheet '{target_sheet}' not found in the file.") else: ws = wb[target_sheet] # Process rows for row in ws.iter_rows(): for cell in row: cv = cell.value if isinstance(cv, str): # Strip if the cell value is a string cv = cv.strip() if cv in properties: # Process only cells with value in the 'properties' List co = cell.coordinate print(f"Processing '{cv}' in 'Properties' List at cell {co}") if co in ws.merged_cells: # Check if the current cell is in a merge print('This is also a merged cell:') col = get_next_col(co) # If merged get the next col number after the merge range else: col = cell.col_idx + 1 # If not merged get the next col number after the cell next_value = ws.cell(cell.row, col).value # Get next cell value as determined by value of 'col' print(f"Inserting Key: '{cv}' with Value: {next_value}") extracted_data[cv] = format_value(next_value) # Add key and value to the dictionary print("-----------\n") for key, val in extracted_data.items(): print(f"{key} : {val}") Output Extracted data from example Sheet. Approval Memo of : SHILPI AKTER Name of the Applicant : SHILPI AKTER Name of Territory : Comilla Total Family Expenses : 30000 Ref : 22000 Amount : 5000 Total Amount : 3000 | 1 | 1 |
79,513,050 | 2025-3-16 | https://stackoverflow.com/questions/79513050/expand-and-then-sort-dataframe-based-on-the-value-order-in-the-first-row | Suppose I have a DataFrame with the following format of strings separated by commas: Index ColumnName 0 apple,peach,orange,pear, 1 orange, pear,apple 2 pear 3 peach,apple 4 orange The actual number of rows will be greater than 10,000. I want to expand the DataFrame and sort the DataFrame by row 0. My expected output is below, where None is of type NoneType: Index 0 1 2 3 0 apple peach orange pear 1 apple None orange pear 2 None None None pear 3 apple peach None None 4 None None orange None I have expanded the data using the following code: df = df['ColumnName'].str.split(',', expand=True) # Expand initial DataFrame However, I am unable to sort or reorder the data as desired despite trying various combinations of df.sort_values(). | Here is another way: s = df['columnName'].str.strip(',').str.split(', ?').explode() s.set_axis(pd.MultiIndex.from_frame(s.groupby(s).ngroup().reset_index())).unstack() Output: 0 0 1 2 3 index 0 apple orange peach pear 1 apple orange NaN pear 2 NaN NaN NaN pear 3 apple NaN peach NaN 4 NaN orange NaN NaN | 2 | 1 |
79,512,639 | 2025-3-16 | https://stackoverflow.com/questions/79512639/how-to-get-specified-number-of-decimal-places-of-any-fraction | So I can generate many tuples like this: (601550405185810455248373798733610900689885946410558295383908863020551447581889414152035914344864580636662293641050147614154610394724089543305418716041082523503171641011728703744273399267895810412812627682686305964507416778143771218949050158028407021152173879713433156038667304976240165476457605035649956901133077856035193743615197184, 191479441008508487760634222418439911957601682008868450843945373670464368694409556412664937174591858285324642229867265839916055393493144203677491629737464170928066273172154431360491037381070068776978301192672069310596051608957593418323738306558817176090035871566224143565145070495468977426925354101694409791889484040439128875732421875) They are all tuples of two ints, they represent fractions with (nearly) infinite precision (bounded only by computer memory), the first number is the numerator, the second denominator. If we divide them, we get the first 101 decimal places of π: '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798' I did everything using integer math and without using any libraries. I didn't use any floating point because I know they are all of finite precision, in contrast to int's infinite precision in Python. Python's float uses this, and it only has log10(2) * 52 = 15.653559774527022 decimal places, far less than what I wanted. I wrote two correct functions to get n specified decimal places of any fraction: from typing import Dict, List, Tuple def decimalize(dividend: int, divisor: int, places: int) -> str: div, mod = divmod(dividend, divisor) result = [f"{div}."] while mod and places: div, mod = divmod(mod * 10, divisor) result.append(str(div)) places -= 1 integral, first, *others = result return integral + first + "".join(others).rstrip("0") def pad_cycles(mod: int, places: int, pairs: Dict[int, str], result: List[str]) -> None: if mod and places: i = list(pairs).index(mod) cycle = "".join(list(pairs.values())[i:]) div, mod = divmod(places, len(cycle)) result.append(cycle * div + cycle[:mod]) def decimalize1(dividend: int, divisor: int, places: int) -> str: div, mod = divmod(dividend, divisor) result = [f"{div}."] pairs = {} while mod and places and mod not in pairs: div, mod1 = divmod(mod * 10, divisor) pairs[mod] = (div := str(div)) result.append(div) mod = mod1 places -= 1 pad_cycles(mod, places, pairs, result) integral, first, *others = result return integral + first + "".join(others).rstrip("0") They work but both are inefficient, as they are literally doing long division. They both adhere to the following rules: They should return the fraction expanded to the desired width, unless the fraction terminates before reaching the width (we had an exact finite decimal representation before reaching the width); If the last few digits of the decimal expansion cutoff are 0, in which case the result should not contain any trailing zeros, unless the very first decimal digit is 0. For example, it should be '0.0' instead of '0.' Additionally, the second function breaks out of the loop once it has all digits of one cycle, and constructs the remaining digits by repeating the cycle, though in each iteration more work is done, as a result it is faster if the cycle is short, and slower if the cycle is longer. 
Worst case is if the cycle is longer than length, so it does all the extra work without terminating early: In [364]: %timeit decimalize(1, 3, 100) 25.8 μs ± 371 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [365]: %timeit decimalize1(1, 3, 100) 2.07 μs ± 20.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [366]: %timeit decimalize(1, 137, 100) 26.8 μs ± 209 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [367]: %timeit decimalize1(1, 137, 100) 4.94 μs ± 38.1 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [368]: %timeit decimalize1(1, 1337, 100) 43.4 μs ± 280 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [369]: %timeit decimalize(1, 1337, 100) 28.6 μs ± 389 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [370]: %timeit decimalize1(1, 123456789, 100) 42.4 μs ± 309 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [371]: %timeit decimalize(1, 123456789, 100) 29.7 μs ± 494 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [372]: a = 60155040518581045524837379873361090068988594641055829538390886302055144758188941415203591434486458063666229364105014761415461039472408954330541871604108252350317164101172870374427339926789581041 ...: 2812627682686305964507416778143771218949050158028407021152173879713433156038667304976240165476457605035649956901133077856035193743615197184 In [373]: b = 19147944100850848776063422241843991195760168200886845084394537367046436869440955641266493717459185828532464222986726583991605539349314420367749162973746417092806627317215443136049103738107006877 ...: 6978301192672069310596051608957593418323738306558817176090035871566224143565145070495468977426925354101694409791889484040439128875732421875 In [374]: decimalize(a, b, 101) Out[374]: '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798' In [375]: decimalize1(a, b, 101) == decimalize(a, b, 101) Out[375]: True In [376]: %timeit decimalize1(a, b, 101) 96.3 μs ± 295 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [377]: %timeit decimalize(a, b, 101) 64.4 μs ± 402 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) How can we do better than long division, so that the execution time is reduced drastically, while achieving the same result (using integer math to get infinite precision representation of fractions)? Preferably this should be done without using any libraries as I would like to know the algorithm. Just to prove a point: In [398]: decimalize1(1, 5, 99999999999999999999999999999999999999999) Out[398]: '0.2' In [399]: decimalize1(1, 5, 10**1000-1) Out[399]: '0.2' In my solution, no matter how many digits are asked, if there is an exact finite decimal representation shorter than asked, it will always be returned, the number of digits asked doesn't affect it. I chose long division because it is the only thing I know of that works, I want infinite precision, and that method spits out the first N correct decimal places for any N. I had read this Wikipedia article, but as you can see none of them can be used for my purposes. I want to know what algorithm can be used for infinite precision decimal. 
To address some comment, code taken directly from decimalize1: In [409]: places = 4096 ...: div, mod = divmod(1, 1337) ...: result = [f"{div}."] ...: pairs = {} ...: while mod and places and mod not in pairs: ...: div, mod1 = divmod(mod * 10, 1337) ...: pairs[mod] = (div := str(div)) ...: result.append(div) ...: mod = mod1 ...: places -= 1 In [410]: len(result) Out[410]: 571 In [411]: len(pairs) Out[411]: 570 | Python has a built-in decimal.Decimal type and you can set the total number of places of precision for operations. I don't know if you would call this "using a library" because the Decimal type is just as fundamental to Python as the int type except implementing its math on CPUs that do not have decimal math capabilities means that a computational algorithm has to be involved. But this is true of the int type also because Python supports essentially infinite precision and that requires multiple-precision algorithms to perform its computations. If you are interested in how multiple-precision arithmetic is done, there are probably many sources of information available to you. As an example I refer to you to Donald Knuth's classic volume on the subject, The Art of Computer Programming: Seminumerical Algorithms, Volume 2. from decimal import Decimal, getcontext # Total number of places (allow 1 place for the integer part): getcontext().prec = 102 n = Decimal('601550405185810455248373798733610900689885946410558295383908863020551447581889414152035914344864580636662293641050147614154610394724089543305418716041082523503171641011728703744273399267895810412812627682686305964507416778143771218949050158028407021152173879713433156038667304976240165476457605035649956901133077856035193743615197184') d = Decimal('191479441008508487760634222418439911957601682008868450843945373670464368694409556412664937174591858285324642229867265839916055393493144203677491629737464170928066273172154431360491037381070068776978301192672069310596051608957593418323738306558817176090035871566224143565145070495468977426925354101694409791889484040439128875732421875') assert str(n / d) == '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798' print(n / d) Prints: 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798 Since getcontext().prec specifies the precision for all digits, then if you are looking for a specific number of digits for the fractional part, you have to first determine how many digits of precision the integer portion of the quotient requires: from decimal import Decimal, getcontext n = Decimal('601550405185810455248373798733610900689885946410558295383908863020551447581889414152035914344864580636662293641050147614154610394724089543305418716041082523503171641011728703744273399267895810412812627682686305964507416778143771218949050158028407021152173879713433156038667304976240165476457605035649956901133077856035193743615197184') d = Decimal('191479441008508487760634222418439911957601682008868450843945373670464368694409556412664937174591858285324642229867265839916055393493144203677491629737464170928066273172154431360491037381070068776978301192672069310596051608957593418323738306558817176090035871566224143565145070495468977426925354101694409791889484040439128875732421875') integer_places = len(str(int(n / d))) # Total number of places (allow 101 places for the fractional part): getcontext().prec = integer_places + 101 assert str(n / d) == 
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798' print(n / d) | 3 | 6 |
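Staying with pure integer math, the digit-by-digit long division can also be replaced by a single big-integer division after scaling the remainder, which leans on CPython's fast multi-precision arithmetic. A minimal sketch (not the accepted answer's Decimal approach; assumes non-negative inputs, truncates rather than rounds, and keeps the same trailing-zero rule as the question):

```python
# Sketch: one big divmod instead of long division.
def decimalize_fast(dividend: int, divisor: int, places: int) -> str:
    whole, rem = divmod(dividend, divisor)
    digits = str(rem * 10**places // divisor).rjust(places, "0").rstrip("0") or "0"
    return f"{whole}.{digits}"

print(decimalize_fast(1, 3, 10))    # 0.3333333333
print(decimalize_fast(1, 5, 100))   # 0.2
print(decimalize_fast(10, 2, 5))    # 5.0
```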
79,511,189 | 2025-3-15 | https://stackoverflow.com/questions/79511189/matplotlib-waterfall-plot-with-surfaces-shows-black-artifacts-on-border-of-plot | I have a script to write a heatmap (or contour map) inside an arbitrary closed shape. A bounding box of grid points is created and a mask is used to clip any points outside of the shape. However, if I want to create a stacked plot of these maps (to show changes along a fourth dimension), the grid points are painted black. Here is the script for a single plot: import numpy as np import matplotlib.pyplot as plt from matplotlib.path import Path from scipy.interpolate import griddata arbitrary_shape_points = [ (0.0, 0.5), (0.1, 0.6), (0.3, 0.55), (0.5, 0.4), (0.6, 0.2), # Top curve (0.65, 0.0), (0.6, -0.2), (0.5, -0.4), (0.3, -0.55), (0.1, -0.6), # Bottom curve right (0.0, -0.5), (-0.1, -0.6), (-0.3, -0.55), (-0.5, -0.4), (-0.6, -0.2), # Bottom curve left (-0.65, 0.0), (-0.6, 0.2), (-0.5, 0.4), (-0.3, 0.55), (-0.1, 0.6), # Top curve left (0.0, 0.5) # Closing point ] shape_path = Path(arbitrary_shape_points) np.random.seed(42) num_points = 100 x_data = np.random.uniform(-0.7, 0.7, num_points) y_data = np.random.uniform(-0.7, 0.7, num_points) z_data = np.pi*np.sin(np.pi * x_data) + np.exp(-np.pi*y_data) # Bounding box shape_xmin = min([p[0] for p in arbitrary_shape_points[:-1]]) shape_ymin = min([p[1] for p in arbitrary_shape_points[:-1]]) shape_xmax = max([p[0] for p in arbitrary_shape_points[:-1]]) shape_ymax = max([p[1] for p in arbitrary_shape_points[:-1]]) # Grid grid_resolution = 500 x_grid = np.linspace(shape_xmin, shape_xmax, grid_resolution) y_grid = np.linspace(shape_ymin, shape_ymax, grid_resolution) xx, yy = np.meshgrid(x_grid, y_grid) # Interpolate data interpolation_method = 'cubic' zz_interpolated = griddata((x_data, y_data), z_data, (xx, yy), method=interpolation_method) # Mask the grid outside the shape grid_points = np.column_stack((xx.flatten(), yy.flatten())) mask = shape_path.contains_points(grid_points).reshape(xx.shape) zz_masked = np.where(mask, zz_interpolated, np.nan) # Plot Heatmap using pcolormesh plt.figure(figsize=(8, 6)) heatmap = plt.pcolormesh(xx, yy, zz_masked, cmap='viridis', shading='auto') plt.colorbar(heatmap, label='Z Value') plt.legend() plt.title('Heatmap within Arbitrary Shape') plt.xlabel('X') plt.ylabel('Y') x_shape_original, y_shape_original = zip(*arbitrary_shape_points) plt.plot(x_shape_original, y_shape_original, 'k-', linewidth=1) # create whitespace around the bounding box whitespace_factor = 1.2 plt.xlim(shape_xmin * whitespace_factor, shape_xmax * whitespace_factor) plt.ylim(shape_ymin * whitespace_factor, shape_ymax * whitespace_factor) plt.gca().set_aspect('equal', adjustable='box') plt.show() The result is this: This is the stacked plot script: import numpy as np import matplotlib.pyplot as plt from matplotlib.path import Path from scipy.interpolate import griddata arbitrary_shape_points = [ (0.0, 0.5), (0.1, 0.6), (0.3, 0.55), (0.5, 0.4), (0.6, 0.2), # Top curve (0.65, 0.0), (0.6, -0.2), (0.5, -0.4), (0.3, -0.55), (0.1, -0.6), # Bottom curve right (0.0, -0.5), (-0.1, -0.6), (-0.3, -0.55), (-0.5, -0.4), (-0.6, -0.2), # Bottom curve left (-0.65, 0.0), (-0.6, 0.2), (-0.5, 0.4), (-0.3, 0.55), (-0.1, 0.6), # Top curve left (0.0, 0.5) # Closing point ] shape_path = Path(arbitrary_shape_points) np.random.seed(42) num_points = 100 x_data = np.random.uniform(-0.7, 0.7, num_points) y_data = np.random.uniform(-0.7, 0.7, num_points) fourth_dimension_values = np.linspace(0, 1, 5) shape_xmin = min([p[0] 
for p in arbitrary_shape_points[:-1]]) shape_ymin = min([p[1] for p in arbitrary_shape_points[:-1]]) shape_xmax = max([p[0] for p in arbitrary_shape_points[:-1]]) shape_ymax = max([p[1] for p in arbitrary_shape_points[:-1]]) grid_resolution = 100 x_grid = np.linspace(shape_xmin, shape_xmax, grid_resolution) y_grid = np.linspace(shape_ymin, shape_ymax, grid_resolution) xx, yy = np.meshgrid(x_grid, y_grid) grid_points = np.column_stack((xx.flatten(), yy.flatten())) mask = shape_path.contains_points(grid_points).reshape(xx.shape) interpolation_method = 'cubic' fig = plt.figure(figsize=(10, 8), facecolor=(0, 0, 0, 0)) ax = fig.add_subplot(111, projection='3d', facecolor=(0, 0, 0, 0)) z_offset = 0 z_step = 1 for i, fd_value in enumerate(fourth_dimension_values): z_data = np.pi*np.sin(np.pi * x_data) + np.exp(-np.pi*y_data) + fd_value zz_interpolated = griddata((x_data, y_data), z_data, (xx, yy), method=interpolation_method) # Mask the grid outside the shape zz_masked = np.where(mask, zz_interpolated, np.nan) # Prepare Z values for 3D plot - constant Z for each slice, offset along z-axis z_surface = np.full_like(xx, z_offset + i * z_step) z_min_slice = np.nanmin(zz_masked) z_max_slice = np.nanmax(zz_masked) if z_max_slice > z_min_slice: zz_normalized = (zz_masked - z_min_slice) / (z_max_slice - z_min_slice) else: zz_normalized = np.zeros_like(zz_masked) # Create facecolors from normalized data facecolors_heatmap = plt.cm.viridis(zz_normalized) # Make masked areas fully transparent facecolors_heatmap[np.isnan(zz_masked)] = [0, 0, 0, 0] # Plot each heatmap slice as a surface surf = ax.plot_surface(xx, yy, z_surface, facecolors=facecolors_heatmap, linewidth=0, antialiased=False, shade=False, alpha=0.8, rstride=1, cstride=1) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Fourth Dimension Index') ax.set_title('Stacked Heatmaps along Fourth Dimension') ax.view_init(elev=30, azim=-45) ax.set_box_aspect([np.diff(ax.get_xlim())[0], np.diff(ax.get_ylim())[0], np.diff(ax.get_zlim())[0]*0.5]) plt.show() The result is this: I am not sure how to stop the black border from appearing. I tried setting those points to be transparent but that does not do anything. | try channging: # Plot each heatmap slice as a surface surf = ax.plot_surface(xx, yy, z_surface, facecolors=facecolors_heatmap, linewidth=0, antialiased=False, shade=False, alpha=0.8, rstride=1, cstride=1) to: # Plot each heatmap slice as a surface surf = ax.plot_surface(np.where(mask, xx, np.nan), yy, z_surface, facecolors=facecolors_heatmap, linewidth=0, antialiased=False, shade=False, alpha=0.8, rstride=1, cstride=1) I get this pic , from Online Matplotlib Compiler that uses Matplotlib: 3.8.4 : | 3 | 2 |
79,511,457 | 2025-3-15 | https://stackoverflow.com/questions/79511457/generate-a-random-matrix-in-python-which-satisfies-a-condition | I manually define the following 16 matrices in Python : matrices = { "Simulation 1": [ [1, 1, 1, 1, 1, 2], [1, 1, 1, 1, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 2, 3, 2], [1, 1, 1, 1, 3, 3], [1, 1, 1, 1, 3, 3] ], "Simulation 2": [ [1, 1, 2, 2, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 2, 2, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 2, 2, 3], [1, 1, 1, 3, 3, 3] ], "Simulation 3": [ [1, 1, 2, 2, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 2, 3, 3], [1, 1, 2, 2, 3, 3], [1, 1, 1, 3, 3, 3], [1, 1, 3, 3, 3, 3] ], "Simulation 4": [ [1, 1, 1, 1, 2, 2], [1, 1, 1, 2, 2, 2], [3, 1, 3, 3, 3, 2], [3, 3, 3, 3, 3, 2], [3, 3, 3, 3, 3, 2], [3, 3, 3, 3, 3, 3] ], "Simulation 5": [ [1, 1, 1, 2, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 3, 2], [1, 3, 3, 3, 3, 3], [3, 3, 3, 3, 3, 3], [3, 3, 3, 3, 3, 3] ], "Simulation 6": [ [1, 1, 1, 1, 1, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 2, 2], [1, 1, 1, 1, 2, 3], [1, 3, 3, 3, 3, 3], [1, 3, 3, 3, 3, 3] ], "Simulation 7": [ [1, 1, 1, 1, 2, 2], [1, 1, 1, 1, 1, 2], [1, 1, 1, 2, 2, 2], [1, 1, 3, 2, 2, 3], [1, 1, 3, 3, 3, 3], [3, 3, 3, 3, 3, 3] ], "Simulation 8": [ [1, 1, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 3, 3], [2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3] ], "Simulation 9": [ [1, 1, 2, 2, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 2, 2], [1, 1, 1, 1, 3, 2], [1, 1, 1, 1, 3, 3], [1, 1, 3, 3, 3, 3] ], "Simulation 10": [ [1, 1, 1, 2, 2, 2], [1, 1, 2, 2, 2, 2], [1, 1, 2, 2, 2, 2], [1, 1, 2, 2, 2, 3], [1, 1, 1, 1, 3, 3], [1, 1, 1, 3, 3, 3] ], "Simulation 11": [ [1, 1, 1, 2, 2, 2], [1, 1, 2, 2, 2, 2], [1, 1, 2, 2, 2, 3], [1, 1, 1, 2, 3, 3], [1, 1, 1, 3, 3, 3], [1, 1, 1, 3, 3, 3] ], "Simulation 12": [ [1, 1, 1, 1, 2, 2], [1, 1, 1, 1, 2, 2], [1, 1, 1, 1, 2, 2], [3, 1, 1, 3, 3, 3], [3, 3, 3, 3, 3, 3], [3, 3, 3, 3, 3, 3] ], "Simulation 13": [ [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 3, 3, 3], [1, 3, 3, 3, 3, 3], [3, 3, 3, 3, 3, 3] ], "Simulation 14": [ [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 1, 2], [1, 1, 1, 1, 1, 2], [1, 1, 1, 3, 3, 2], [1, 3, 3, 3, 3, 3], [1, 3, 3, 3, 3, 3] ], "Simulation 15": [ [1, 1, 1, 2, 2, 2], [1, 2, 2, 2, 2, 2], [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 3, 3], [1, 1, 1, 3, 3, 3], [1, 1, 1, 3, 3, 3] ], "Simulation 16": [ [1, 1, 3, 2, 2, 2], [1, 1, 3, 2, 3, 3], [1, 1, 3, 3, 3, 3], [1, 1, 3, 3, 3, 3], [1, 1, 3, 3, 3, 3], [1, 1, 3, 3, 3, 3] ] } When visualized, these look like this: Positions in the matrix are understood like this: positions = [ [1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24], [25, 26, 27, 28, 29, 30], [31, 32, 33, 34, 35, 36] ] These matrices have the following properties: 1 = Red, 2 = Blue, 3 = Green Position 1 is always Red, Position 6 is always Blue and Position 36 is always green All circles of the same color can reach all other circles of the same color without touching any other color Here is an example of an invalid matrix (i.e. node 1 (red) can not reach the other red nodes without stepping on blue): I have the following question: Is there some algorithm which can be used to rapidly generate (random) valid matrices for this problem? Can something like a tree/graph be used which can efficiently generate 100 hundred such solutions? 
Updated version of trincot's answer to include visualization: import random import matplotlib.pyplot as plt import numpy as np def make_matrix(n): mat = [[0] * n for _ in range(n)] frontier = set() def place(row, col, color): mat[row][col] = color frontier.discard((row, col, 1)) frontier.discard((row, col, 2)) frontier.discard((row, col, 3)) for next_row, next_col in (row-1, col), (row+1, col), (row, col-1), (row, col+1): if 0 <= next_row < n and 0 <= next_col < n and mat[next_row][next_col] == 0: frontier.add((next_row, next_col, color)) place(0, 0, 1) place(0, n-1, 2) place(n-1, n-1, 3) while frontier: place(*random.choice(list(frontier))) return mat def visualize_matrix(mat): n = len(mat) colors = np.zeros((n, n, 3)) for i in range(n): for j in range(n): if mat[i][j] == 1: colors[i, j] = [1, 0, 0] # Red for 1 elif mat[i][j] == 2: colors[i, j] = [0, 0, 1] # Blue for 2 elif mat[i][j] == 3: colors[i, j] = [0, 1, 0] # Green for 3 plt.figure(figsize=(5, 5)) plt.imshow(colors) plt.grid(True, color='black', linewidth=0.5) plt.xticks(np.arange(-0.5, n, 1), []) plt.yticks(np.arange(-0.5, n, 1), []) plt.tick_params(length=0) for i in range(n): for j in range(n): plt.text(j, i, str(mat[i][j]), ha="center", va="center", color="white") plt.tight_layout() plt.show() plt.figure(figsize=(15, 10)) for i in range(4): plt.subplot(2, 2, i+1) mat = make_matrix(6) n = len(mat) colors = np.zeros((n, n, 3)) for r in range(n): for c in range(n): if mat[r][c] == 1: colors[r, c] = [1, 0, 0] # Red for 1 elif mat[r][c] == 2: colors[r, c] = [0, 0, 1] # Blue for 2 elif mat[r][c] == 3: colors[r, c] = [0, 1, 0] # Green for 3 plt.imshow(colors) plt.grid(True, color='black', linewidth=0.5) plt.title(f"Matrix #{i+1}") plt.xticks([]) plt.yticks([]) for r in range(n): for c in range(n): plt.text(c, r, str(mat[r][c]), ha="center", va="center", color="white") plt.tight_layout() plt.show() for i in range(5): print(f"Matrix #{i+1}:") mat = make_matrix(6) for row in mat: print(row) print() visualize_matrix(mat) | One way is to perform a random flood-fill out of the three initial coloured corners. Here is a possible implementation: import random def make_matrix(n): mat = [[0] * n for _ in range(n)] frontier = set() def place(row, col, color): mat[row][col] = color frontier.discard((row, col, 1)) frontier.discard((row, col, 2)) frontier.discard((row, col, 3)) for next_row, next_col in (row-1, col), (row+1, col), (row, col-1), (row, col+1): if 0 <= next_row < n and 0 <= next_col < n and mat[next_row][next_col] == 0: frontier.add((next_row, next_col, color)) place(0, 0, 1) place(0, n-1, 2) place(n-1, n-1, 3) while frontier: place(*random.choice(list(frontier))) return mat Here is an example run: mat = make_matrix(6) for row in mat: print(row) The principle variable is frontier which has a set of possible next actions. An action consists of coloring a cell, so it defines the coordinates of the cell and the color code (1, 2 or 3). Each iteration one action is randomly chosen from that set, and the action is applied. This means also that the set of possible actions is adapted: some are no longer possible, while others become possible. And so this repeats until there is no more cell that can be coloured. | 1 | 3 |
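To sanity-check generated boards, a small BFS validator can confirm that each colour forms a single 4-connected region anchored at its corner (a sketch; it assumes the same 1/2/3 corner convention as `make_matrix`):

```python
# Sketch: verify that every colour is 4-connected and attached to its corner.
from collections import deque

def is_valid(mat):
    n = len(mat)
    for color, start in ((1, (0, 0)), (2, (0, n - 1)), (3, (n - 1, n - 1))):
        cells = {(r, c) for r in range(n) for c in range(n) if mat[r][c] == color}
        if start not in cells:
            return False
        seen, queue = {start}, deque([start])
        while queue:
            r, c = queue.popleft()
            for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nxt in cells and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if seen != cells:
            return False
    return True

assert all(is_valid(make_matrix(6)) for _ in range(100))
```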
79,510,817 | 2025-3-15 | https://stackoverflow.com/questions/79510817/algorithm-to-select-multiple-non-overlapping-subsequences-of-given-sequence | You have been given the input data (a sequence of items), for example: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] and your task is to randomly select M non-overlapping samples (subsequences) of the same size N. For example, if the task was to select 3 samples of size 3, one of the solutions would be: [3, 4, 5] [8, 9, 10] [11, 12, 13] The samples are unordered so [8, 9, 10], [3, 4, 5], [11, 12, 13] is the same solution. All solutions are expected to have an equal probability of being selected. My algorithm: Select randomly first sample: [11, 12, 13] Remaining input data are [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and [14, 15, 16]. Select randomly second sample from the remaining input data: [3, 4, 5]. Remaining input data (big enough) are [6, 7, 8, 9, 10] and [14, 15, 16]. Select randomly third sample from the remaining input data: [8, 9, 10]. Sadly, this algorithm does not work when the samples are too big. If the task was to select 3 samples of size 5, there exists a solution, but if you use my algorithm and select randomly the first sample as [3, 4, 5, 6, 7], the algorithm will fail. Of course there is also an obvious brute-force algorithm: find all possible solutions and select randomly one of them. But I was hoping for something more "clever" (time and space efficient). | import random seq = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] M = 3 N = 3 dividers = sorted(random.sample(range(len(seq) - M*N + M), M)) starts = [d + i*(N-1) for i, d in enumerate(dividers)] samples = [seq[start : start+N] for start in starts] print(samples) Example output (Attempt This Online!): [[3, 4, 5], [10, 11, 12], [13, 14, 15]] Idea is that if you want 3 subsequences of size 3 from overall 16 items, then 7 elements are not in the subsequences. They're instead in the 4 gaps before/after the subsequences. Each gap can be empty. So find 4 nonnegative integers with sum 7 and use them as gap sizes. How to do that was inspired by this answer. Alternative explanation: If we had N=1, this would simply be sampling M single items. But in general, each of them is accompanied by the next N-1 items to make up a subsequence. So we basically do sample(range(len(seq)), M) for the N=1 case, but for the general case shrink the range by M*(N-1) to allow room for the M times N-1 accompaniers, which we then add. I tried choosing samples 120000 times and counting how often each starts tuple occurred. Each of the 120 possibilities occurred about 1000 times as expected: 120 ((0, 3, 6), 1030) ((0, 3, 7), 1007) ((0, 3, 8), 956) ... ((6, 9, 13), 1010) ((6, 10, 13), 1006) ((7, 10, 13), 986) Code: import random from collections import Counter seq = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] M = 3 N = 3 ctr = Counter() for _ in range(120_000): dividers = sorted(random.sample(range(len(seq) - M*N + M), M)) starts = [d + i*(N-1) for i, d in enumerate(dividers)] ctr[*starts] += 1 print(len(ctr)) for x in sorted(ctr.items()): print(x) Attempt This Online! | 4 | 5 |
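The number of distinct unordered solutions follows from the same stars-and-bars argument, which gives a quick cross-check of the 120 possibilities counted empirically in the answer (a sketch):

```python
# Sketch: count of possible (start1, start2, start3) tuples via stars and bars.
from math import comb

length, M, N = 16, 3, 3
print(comb(length - M * N + M, M))  # 120, matching the empirical tally above
```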
79,511,188 | 2025-3-15 | https://stackoverflow.com/questions/79511188/gauss-fitting-data-with-scipy-and-getting-strange-answers-on-fit-quality | I have Gamma-Spectra and I am doing Gauss fit to the peaks using Python with scipy. Works well, but trying to get a number on fit quality (for the intent on some automation) returns very odd numbers. The scipy command is: response = scipy.optimize.curve_fit(gauss, e, c, param0, full_output=True, absolute_sigma=True, method=FitMethod, bounds=bounds) I get the fit quality from: fitQuality = np.linalg.cond(response[1]) # PCOV is response[1] FitQuality is a value from zero (excellent fit) up to infinity (beyond lousy). As an example, here is a fit (shaded green) to the K-40 Gamma line in a CsI detector. Given the jittery nature of the data, I am pleased with the result. And so is scipy, giving it a FitQuality rating of 18. Next picture shows a fit (shaded red) to a Na-22 line in a High-Resolution Ge detector. Again, I am very pleased with the result. But scipy is giving it a FitQuality rating of 56,982,136; this means very, very poor. This does not make sense. The fit is nearly perfect! Is my FitQuality formula inappropriate? Does it need additions? Am I completely off the mark? Please, help. | The formula you have is useful, but it is not a direct measure of the quality of the fit. It is related to "uncertainty" of the fit, and the condition number specifically is useful for diagnosing problems that may have occurred during fit. For instance, I've seen in other SO posts that `curve_fit` returns wild -looking parameter values. We find that the condition number is astronomical, so the covariance matrix is essentially singular. We find that this is because the callable is over-parameterized - that is, the parameters are not independent - and some can be il either removed or combined. Even in these cases, the fit may look perfect. See, for example, good r2 score but huge parameter uncertainty. The fit looks great, but they would find that the condition number of pcov is huge. This is described in the curve_fit documentation. Something that correlates better with visual "quality" is the sum of squared errors - that is, the objective function of least squares curve fitting. The covariance, on the other hand, is related to the Hessian - sort of like the curvature - of the objective function with respect to fitting parameters. So it's just not a measure of the "quality" of the fit, but how the quality changes with respect to the parameters. | 2 | 5 |
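As a concrete sketch of the suggested goodness-of-fit measure: compute the residual sum of squares (and an R² from it) alongside the covariance condition number. The names `gauss`, `e`, `c` and `param0` are taken from the question, so treat this as illustrative rather than a drop-in replacement.

```python
# Sketch: residual-based quality vs. the covariance condition number.
import numpy as np
from scipy.optimize import curve_fit

e_arr, c_arr = np.asarray(e, dtype=float), np.asarray(c, dtype=float)
popt, pcov = curve_fit(gauss, e_arr, c_arr, p0=param0)

residuals = c_arr - gauss(e_arr, *popt)
sse = np.sum(residuals**2)                                  # objective actually minimised
r_squared = 1.0 - sse / np.sum((c_arr - c_arr.mean())**2)   # close to 1 for a visually good fit
cond = np.linalg.cond(pcov)                                 # large => ill-conditioned parameters, not a bad-looking fit
print(sse, r_squared, cond)
```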
79,508,238 | 2025-3-14 | https://stackoverflow.com/questions/79508238/how-to-apply-a-global-rate-limit-for-all-routes-using-slowapi-and-fastapi | I want to configure rate limiting with SlowAPI (in-memory, without Redis Cache etc.), but I don't want to add the @limiter.limit() decorator seperately for every endpoint. So, something I don't want to do manually would be: @limiter.limit("5/minute") async def myendpoint(request: Request) pass So, essentially, I want to include it in a middleware: from slowapi import Limiter limiter = Limiter(key_func=get_remote_address) @app.middleware("http") async def check_request(request: Request, call_next): client_ip = request.client.host prefix = "request_rate_limiter." + client_ip #... (logic from slowapi to get allowed flag) if allowed: response = await call_next(request) return response But I didn't find a solution on how to receive a boolean from the limiter. How would I proceed from this and would this work? It would be great if I could configure it for different routes and also for example depending on a user subscription (e.g. free/premium). Thanks! | To apply a global (default) limit to all routes, you could use the SlowAPIMiddleware, as shown in the example below. The relevant documentation could be found here. Related answers could also be found here and here. Example from fastapi import FastAPI from slowapi import Limiter, _rate_limit_exceeded_handler from slowapi.util import get_remote_address from slowapi.middleware import SlowAPIMiddleware from slowapi.errors import RateLimitExceeded limiter = Limiter(key_func=get_remote_address, default_limits=["1/minute"]) app = FastAPI() app.state.limiter = limiter app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler) app.add_middleware(SlowAPIMiddleware) @app.get("/") async def main(): return "Only once per minute" To exempt a route from the global limit, you could use the @limiter.exempt decorator on a given route, as shown in the following example. However, it seems that, currently, the decorator would only work for endpoints defined with normal def instead of async def (it could be a bug in the relevant implementation of SlowAPI—UPDATE: The issue has been fixed, so please make sure to upgrade to the latest version of SlowAPI). @app.get("/someroute") @limiter.exempt async def someroute(): return "I'm unlimited" | 1 | 3 |
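For the per-route / per-tier part of the question, individual routes can still carry their own limit with the usual decorator on top of the global default (a sketch; note that SlowAPI requires the `request` parameter in the endpoint signature when the decorator is used):

```python
# Sketch: a route-specific limit for a hypothetical "premium" tier route.
from fastapi import Request

@app.get("/premium")
@limiter.limit("100/minute")   # per-route limit alongside the 1/minute global default
async def premium_route(request: Request):
    return "Higher limit for this route"
```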
79,510,643 | 2025-3-15 | https://stackoverflow.com/questions/79510643/how-to-detect-thick-lines-as-single-lines-using-hough-transform-in-opencv | I'm using OpenCV's HoughLinesP function to detect straight lines in an image. When the image contains thin lines, the detection works perfectly. However, when the image contains thick lines, the algorithm detects them as two parallel lines instead of a single line. Here's my current code: import cv2 import numpy as np image_path = "thickLines.png" image = cv2.imread(image_path) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Thresholding to create a binary image _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV) # Edge Detection edges = cv2.Canny(binary, 50, 150, apertureSize=3) # Hough Line Transform to Detect Walls lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=50, maxLineGap=5) # Draw Detected Walls if lines is not None: for line in lines: x1, y1, x2, y2 = line[0] cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2) # Draw thick lines in green # Show Final Processed Image cv2.imshow("Detected Image", image) cv2.waitKey(0) cv2.destroyAllWindows() I have tried adjusting the Canny edge thresholds and modifying minLineLength and maxLineGap, but the issue persists. My goal is to detect thick lines as a single line instead of two parallel lines. Questions: How can I modify my approach to merge or simplify detected thick lines into a single line? Are there any parameters in HoughLinesP that can be tuned to achieve this? Screenshots Here are screenshots of the issue: The green lines represents the detected lines from the image Here are sample images of thick & thin lines | You're looking to skeletonize the image - reduce the thickness of all features to 1 pixel. scikit-image.morphology has a neat method, skeletonize, just for this. import cv2 import numpy as np from skimage.morphology import skeletonize image_path = "thickLines.png" image = cv2.imread(image_path) # binarize image gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV) # convert to skeleton, then to mat skeleton = skeletonize(binary) skeletonImg = (skeleton * 255).astype(np.uint8) # Hough Line Transform to Detect Walls lines = cv2.HoughLinesP(skeletonImg, 1, np.pi / 180, 80, minLineLength=50, maxLineGap=5) # Draw Detected Walls if lines is not None: for line in lines: x1, y1, x2, y2 = line[0] cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2) # Draw thick lines in green # Show Final Processed Image cv2.imshow("Detected Image", image) cv2.waitKey(0) cv2.destroyAllWindows() Result: thin thick | 1 | 3 |
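If adding scikit-image is undesirable, OpenCV's contrib module offers thinning as an alternative skeletonizer (a sketch; requires the opencv-contrib-python package, and reuses `binary` from the thresholding step above):

```python
# Sketch: Zhang-Suen thinning from opencv-contrib instead of skimage.skeletonize.
import cv2
import numpy as np

thinned = cv2.ximgproc.thinning(binary)   # 8-bit single-channel binary input
lines = cv2.HoughLinesP(thinned, 1, np.pi / 180, 80, minLineLength=50, maxLineGap=5)
```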
79,509,466 | 2025-3-14 | https://stackoverflow.com/questions/79509466/how-to-retrieve-singleton-model-instance-with-drf-without-having-to-provide-id | I have a Django app, and I use django-solo for SingletonModel. I do have a singleton settings model: class GeneralSettings(SingletonModel): allow_signup = models.BooleanField(default=True) I want to create an API endpoint to be able to retrieve and update the settings. I currently use DRF. Using RetrieveModelMixin and UpdateModelMixin I can easily do it, but then my route has to be .../api/settings/1 (I need to add the id). How can I retrieve/update my settings without having to use the id (since it doesn't make sense for a SingletonModel)? DRF view: class GeneralSettingsViewSet( RetrieveModelMixin, UpdateModelMixin, GenericViewSet, ): queryset = GeneralSettings.objects.all() serializer_class = GeneralSettingsSerializer http_method_names = ["get", "put"] def get_object(self) -> GeneralSettings: return GeneralSettings.get_solo() Router: router.register(r"settings", GeneralSettingsViewSet, "api-settings") | You can use a custom router to solve your problem. Below is an example of what this might look like: from rest_framework.routers import Route, SimpleRouter class SingletonRouter(SimpleRouter): routes = [ Route( url=r'^{prefix}{trailing_slash}$', mapping={ 'get': 'retrieve', 'put': 'update', }, name='{basename}-detail', detail=True, initkwargs={'suffix': 'Instance'} ), ] router = SingletonRouter() router.register(r'settings', GeneralSettingsViewSet, 'api-settings') It should work now: GET .../settings/ PUT .../settings/ | 1 | 2 |
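Hooking the custom router into the URL conf is the standard DRF include (a small sketch of the urls.py side; the "api/" prefix is an assumption):

```python
# Sketch: wiring the SingletonRouter into urls.py.
from django.urls import include, path

urlpatterns = [
    path("api/", include(router.urls)),
]
```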
79,510,249 | 2025-3-14 | https://stackoverflow.com/questions/79510249/custom-wraps-on-discord-command | I created a custom decorator with functools.wraps, like this def addLogger(fn): from functools import wraps @wraps(fn) async def add_logger(*args, **kwargs): print(f"addLogger: About to run {fn.__name__}") log = logger startTime = time.time() log.info('About to run %s' % fn.__name__) result = await fn(*args, **kwargs) log.info('Done running %s Execution time: %s' % (fn.__name__, time.time() - startTime)) print(f"addLogger: Done running {fn.__name__}") return result return add_logger and it's working great on this function @bot.event @addLogger async def on_ready(): logger.info(f'{bot.user.name} has connected to Discord!') try : synced = await bot.tree.sync() print(f"Synced {len(synced)} command") except Exception as e: print(e) but not on list_cogs; it just does nothing @addLogger @bot.tree.command(name="list_cogs", description="list all cogs") async def list_cogs(interaction): await interaction.response.send_message(f"{cogList}") I already tried to return commands.check(add_logger) instead of add_logger, but it ends up not working at all | You need to have @addLogger after the @bot.tree.command() line. So the correct code would be @bot.tree.command(name="list_cogs", description="list all cogs") @addLogger async def list_cogs(interaction): await interaction.response.send_message(f"{cogList}") Think of it like @addLogger changing the code of list_cogs() so the new list_cogs() function will do what it did before, but also time itself. Then, once you have that new function, that's what you add to the bot tree as a command. | 1 | 3 |
79,509,624 | 2025-3-14 | https://stackoverflow.com/questions/79509624/python-overload-doesnt-match-any-case | I've written this code: from typing import overload, TYPE_CHECKING, Protocol, Any import pyarrow as pa # type: ignore[import-not-found] class PyArrowArray(Protocol): @property def buffers(self) -> Any: ... @overload def func(a: PyArrowArray) -> int: ... @overload def func(a: str) -> str: ... @overload def func(a: Any) -> str | int: ... def func(a) -> str | int: if isinstance(a, pa.Array): return 0 return '0' reveal_type(func(pa.array([1,2,3]))) PyArrow is a Python library which does not have type hints. However, there is a package pyarrow-stubs which provides types for it. I have a function can accept either a pyarrow.Array or a str: if it receives a pyarrow.Array, it returns an int if it receives a str, it returns a str I would like to annotate it such that: if a user has pyarrow-stubs installed, then func(pa.array([1,2,3])) is revealed to be int if a user doesn't have pyarrow-stubs installed, then func(pa.array([1,2,3])) should be revealed to be int | str, because pa is not known statically I was hoping that the code above would accomplish that, but it doesn't. If pyarrow-stubs is not installed, I get Revealed type is "Any" I was expecting that the @overload def func(a: Any) -> str | int: ... overload would be matched and that I'd get Revealed type is int | str | I am confident to say that this is very likely not possible. The Any Type per PEP 484 matches everything, hence an Any input will match all overloads. What happens in such cases is defined here in the Mypy docs: [...] if multiple variants match due to an argument being of type Any, mypy will make the inferred type also be Any: pyright docs also describes it similarly, interestingly with pyright you can choose one, but not both overloads and avoid Unknown, this is explained here. # NOTE: pyright only @overload def func(a: PyArrowArray) -> int: ... @overload def func(a: Any) -> str | int: ... def func(a): if isinstance(a, pa.Array): return 0 return "0" bla = cast(PyArrowArray, ...) reveal_type(func(bla)) # int reveal_type(func("fooo")) # str | int :( reveal_type(func(pa.array([1, 2, 3]))) # str | int # Not Unknown The only solution I somewhat see is that in turn pa.array is not allowed to be Any. Currently I see no good way to satisfy that without destroying compatibility when stubs are present. Best make it a requirement. Or somehow make everything a no-op class that is secondary to the stubs. I assume you are looking for a mypy solution, with pyright you can solve it like this: from typing import overload, TYPE_CHECKING, Callable, Protocol, Any, reveal_type, T import pyarrow as pa if TYPE_CHECKING: if pa.array is None: # path only taken when stubs not present class unknown: def __call__(self, *args, **kwargs) -> unknown: ... pa.array = unknown() # ... your original code # type of pa.array without stubs will be: array: unknown | Unknown reveal_type(func(bla)) # int reveal_type(func("fooo")) # str reveal_type(func(pa.array([1, 2, 3]))) # int | str | 2 | 3 |
79,509,728 | 2025-3-14 | https://stackoverflow.com/questions/79509728/polars-group-by-describe-return-all-columns-as-single-dataframe | I'm slowly migrating to polars from pandas and I have found that in some cases the polars syntax is tricky. I'm seeking help to do a group_by followed by a describe using less (or more readable) code. See this example: from io import BytesIO import pandas as pd import polars as pl S = b'''group,value\n3,245\n3,28\n3,48\n1,113\n1,288\n1,165\n2,90\n2,21\n2,109''' pl_df = pl.read_csv(BytesIO(S)) pd_df = pd.read_csv(BytesIO(S)) # Polars' way pl_df.group_by('group').map_groups( lambda df: ( df['value'] .describe() .with_columns( group=pl.lit(df['group'][0]) ) ) ).pivot(index='group', on='statistic') Something similar in pandas would be: # pandas' pd_df.groupby('group').value.describe() | You can write a quick function that returns a mapping of expressions that you can unpack right into your DataFrame.group_by(...).agg. This avoids any slow-ness of using map_groups and enables Polars to easily scan the query for any optimizations (provided you are working with a LazyFrame). from io import BytesIO import pandas as pd import polars as pl def describe(column, percentiles=[.25, .5, .75]): return { 'count': column.count(), 'null_count': column.null_count(), 'mean': column.mean(), 'std': column.std(), 'min': column.min(), **{ f'{pct*100:g}%': column.quantile(pct) for pct in percentiles }, 'max': column.max(), } S = b'''group,value\n3,245\n3,28\n3,48\n1,113\n1,288\n1,165\n2,90\n2,21\n2,109''' pl_df = pl.read_csv(BytesIO(S)) print( pl_df.group_by('group').agg(**describe(pl.col('value'))) ) Produces shape: (3, 10) ┌───────┬───────┬────────────┬────────────┬───┬───────┬───────┬───────┬─────┐ │ group ┆ count ┆ null_count ┆ mean ┆ … ┆ 25% ┆ 50% ┆ 75% ┆ max │ │ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ u32 ┆ u32 ┆ f64 ┆ ┆ f64 ┆ f64 ┆ f64 ┆ i64 │ ╞═══════╪═══════╪════════════╪════════════╪═══╪═══════╪═══════╪═══════╪═════╡ │ 1 ┆ 3 ┆ 0 ┆ 188.666667 ┆ … ┆ 165.0 ┆ 165.0 ┆ 288.0 ┆ 288 │ │ 3 ┆ 3 ┆ 0 ┆ 107.0 ┆ … ┆ 48.0 ┆ 48.0 ┆ 245.0 ┆ 245 │ │ 2 ┆ 3 ┆ 0 ┆ 73.333333 ┆ … ┆ 90.0 ┆ 90.0 ┆ 109.0 ┆ 109 │ └───────┴───────┴────────────┴────────────┴───┴───────┴───────┴───────┴─────┘ | 2 | 3 |
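The same mapping unpacks unchanged into a lazy query, which is where avoiding `map_groups` pays off most (a small usage sketch reusing `describe` and `pl_df` from the answer):

```python
# Sketch: identical aggregation on a LazyFrame, so Polars can optimise the whole plan.
lazy_result = (
    pl_df.lazy()
    .group_by('group')
    .agg(**describe(pl.col('value'), percentiles=[.1, .5, .9]))
    .collect()
)
print(lazy_result)
```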
79,507,218 | 2025-3-13 | https://stackoverflow.com/questions/79507218/why-text-alignment-is-not-performed-via-styles | I'm trying to organize text alignment in ttk.Label using style in order to reduce the amount of code: import tkinter as tk from tkinter import ttk from tkinter import font app = tk.Tk() width = 605 height = 200 x = int((app.winfo_screenwidth() / 2) - (width / 2)) y = int((app.winfo_screenheight() / 2) - (height / 2)) app.geometry(f'{width}x{height}+{x}+{y}') app.resizable(width=False, height=False) ttk.Style().configure('question.TLabel', justify=tk.CENTER, background='#ffffff', border=0) question = font.Font(family='Tahoma', size=14, weight='bold') ttk.Label(app, style='question.TLabel', text='When the view in the widget’s window change,\n' 'the widget will generate a Tcl command' '\nbased on the scrollcommand.', font=question).place(x=90, y=35) app.mainloop() RESULT screenshot But it doesn't work that way. How to fix it? The ttk.Label documentation says: If the text you provide contains newline ('\n') characters, this option specifies how each line will be positioned horizontally: tk.LEFT to left-justify; tk.CENTER to center; or tk.RIGHT to right-justify each line. You may also specify this option using a style. The rest of the parameters specified in the style work. justify=tk.CENTER starts working if moved directly to ttk.Label: RESULT screenshot | According to the official documentation, justify is not supported in the style. I think the documentation you're using is incorrect. ---- TLabel styling options configurable with ttk::style are: -background color -compound compound -foreground color -font font | 1 | 3 |
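Since justify has to stay a per-widget option, one way to keep the call sites short is a small factory that bundles the non-style options (a workaround sketch reusing the names from the question):

```python
# Workaround sketch: the style carries what ttk supports; the factory adds the rest.
def question_label(parent, text):
    return ttk.Label(parent, style='question.TLabel', text=text,
                     justify=tk.CENTER, font=question)

question_label(app, 'When the view in the widget’s window change,\n'
                    'the widget will generate a Tcl command'
                    '\nbased on the scrollcommand.').place(x=90, y=35)
```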
79,508,420 | 2025-3-14 | https://stackoverflow.com/questions/79508420/expanding-polars-dataframe-with-cartesian-product-of-two-columns | The code below shows a solution I have found in order to expand a dataframe to include the cartesian product of columns A and B, filling in the other columns with null values. I'm wondering if there is a better and more efficient way of solving this? >>> df = pl.DataFrame({'A': [0, 1, 1], ... 'B': [1, 1, 2], ... 'C': [6, 7, 8]}) >>> df shape: (3, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 0 ┆ 1 ┆ 6 │ │ 1 ┆ 1 ┆ 7 │ │ 1 ┆ 2 ┆ 8 │ └─────┴─────┴─────┘ >>> df.join(df.select('A').unique().join(df.select('B').unique(), how='cross'), on=['A','B'], how='right') shape: (4, 3) ┌──────┬─────┬─────┐ │ C ┆ A ┆ B │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞══════╪═════╪═════╡ │ 6 ┆ 0 ┆ 1 │ │ null ┆ 0 ┆ 2 │ │ 7 ┆ 1 ┆ 1 │ │ 8 ┆ 1 ┆ 2 │ └──────┴─────┴─────┘ | This is a requested feature (this is available in R or pandas's janitor as complete). An alternative approach mentioned in the feature request would be: (df.select(pl.col(['A', 'B']).unique().sort().implode()) .explode('A') .explode('B') .join(df, how='left', on=['A', 'B']) ) Which makes it easy to generalize to a greater number of columns. Output: ┌─────┬─────┬──────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪══════╡ │ 0 ┆ 1 ┆ 6 │ │ 0 ┆ 2 ┆ null │ │ 1 ┆ 1 ┆ 7 │ │ 1 ┆ 2 ┆ 8 │ └─────┴─────┴──────┘ | 5 | 4 |
79,508,139 | 2025-3-14 | https://stackoverflow.com/questions/79508139/how-to-transfer-this-part-of-code-from-c-to-python-using-dataclass | I have this structure in C: MAX_NUMBER_OF_ITEMS = 9 typedef struct { uint64_t status; uint8_t serial_number_of_items [MAX_NUMBER_OF_ITEMS]; uint8_t version; } Items_Info_t; How can I convert it to Python, specifically the part: uint8_t number_of_items [MAX_NUMBER_OF_ITEMS] I try to do this way: class SensorInfo: status: int serial_number_of_items: List[int] version: int Also, how can I perform serialization using struct.pack after that? | Create a class to encapsulate Items_Info_t and manage its serialization/deserialization. The __init__ method initializes the object's attributes, after doing data validation to ensure that the input values are within the expected ranges. Added an endian parameter to both the serialize and deserialize methods. The default value is '<' (little-endian). Serialization with struct.pack: '<': Representes the endianness ('<' for little-endian, '>' for big-endian, '!' for network byte order). 'Q': Represents an unsigned 64-bit integer. 'B' * MAX_NUMBER_OF_ITEMS: Represents MAX_NUMBER_OF_ITEMS unsigned 8-bit integers. 'B': Represents a single unsigned 8-bit integer. import struct from typing import List MAX_NUMBER_OF_ITEMS = 9 class ItemsInfo: def __init__(self, status: int, serial_number_of_items: List[int], version: int): if len(serial_number_of_items) != MAX_NUMBER_OF_ITEMS: raise ValueError(f"serial_number_of_items must have {MAX_NUMBER_OF_ITEMS} elements") if not all(0 <= item <= 255 for item in serial_number_of_items): raise ValueError("All serial numbers must be within the range 0-255 (uint8).") if not (0 <= version <= 255): raise ValueError("version must be within the range 0-255 (uint8).") if status < 0: raise ValueError("Status must be a non-negative integer (uint64).") self.status = status self.serial_number_of_items = serial_number_of_items self.version = version def serialize(self, endian: str = '<') -> bytes: if endian not in ('<', '>', '!'): raise ValueError("Endian must be '<' (little-endian), '>' (big-endian), or '!' (network byte order).") format_string = f"{endian}Q{'B' * MAX_NUMBER_OF_ITEMS}B" return struct.pack(format_string, self.status, *self.serial_number_of_items, self.version) @staticmethod def deserialize(data: bytes, endian: str = '<') -> 'ItemsInfo': if endian not in ('<', '>', '!'): raise ValueError("Endian must be '<' (little-endian), '>' (big-endian), or '!' (network byte order).") format_string = f"{endian}Q{'B' * MAX_NUMBER_OF_ITEMS}B" expected_size = struct.calcsize(format_string) if len(data) != expected_size: raise ValueError(f"Data length is incorrect. 
Expected {expected_size} bytes, got {len(data)}.") unpacked_data = struct.unpack(format_string, data) status = unpacked_data[0] serial_numbers = list(unpacked_data[1:1 + MAX_NUMBER_OF_ITEMS]) version = unpacked_data[-1] return ItemsInfo(status, serial_numbers, version) Testing serial_numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9] items_info = ItemsInfo(status=1055, serial_number_of_items=serial_numbers, version=23) serialized_data_network = items_info.serialize(endian='!') print("Serialized Data (network byte order):", serialized_data_network) print("\nDeserialized Data (network byte order):") deserialized_info_network = ItemsInfo.deserialize(serialized_data_network, endian='!') print("Status:", deserialized_info_network.status) print("Serial Numbers:", deserialized_info_network.serial_number_of_items) print("Version:", deserialized_info_network.version) Output Serialized Data (network byte order): b'\x00\x00\x00\x00\x00\x00\x04\x1f\x01\x02\x03\x04\x05\x06\x07\x08\t\x17' Deserialized Data (network byte order): Status: 1055 Serial Numbers: [1, 2, 3, 4, 5, 6, 7, 8, 9] Version: 23 | 2 | 2 |
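One layout detail worth noting when mirroring a C struct: the standard-size format characters never insert padding, whereas a C compiler usually pads Items_Info_t up to a multiple of the uint64_t alignment (a sketch, assuming a typical LP64 platform):

```python
# Sketch: wire size of the format vs. the padded C struct size.
import struct

print(struct.calcsize('<Q9BB'))    # 18 bytes: 8 + 9 + 1, no padding with '<', '>' or '!'
# sizeof(Items_Info_t) in C is typically 24 because of trailing alignment padding;
# add explicit pad bytes if the byte stream must match that in-memory layout:
print(struct.calcsize('<Q9BB6x'))  # 24
```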
79,507,207 | 2025-3-13 | https://stackoverflow.com/questions/79507207/create-an-n-length-list-by-uniformly-in-frequency-selecting-items-from-a-separ | SETUP I have a list days and a value N days = ['Monday','Tuesday','Wednesday','Thursday','Friday'] N = 52 WHAT I AM TRYING TO DO I am trying to create a list selections with length N where I uniformly in frequency sample values from days (remainders are fine). I would like the order of this list to then be shuffled. EXAMPLE OUTPUT NOTE HOW THE ORDER IS SHUFFLED, BUT THE DISTRIBUTION OF VALUES IS UNIFORM selections ['Wednesday','Friday','Monday',...'Tuesday','Thursday','Monday'] import collections counter = collections.Counter(selections) counter Counter({'Monday': 11, 'Tuesday': 10, 'Wednesday': 11, 'Thursday': 10, 'Friday': 10}) WHAT I HAVE TRIED I have code to randomly select N values from days from random import choice, seed seed(1) days = ['Monday','Tuesday','Wednesday','Thursday','Friday'] N = 52 selections = [choice(days) for x in range(N)] But they aren't selected uniformly import collections counter = collections.Counter(selections) counter Counter({'Tuesday': 9, 'Friday': 8, 'Monday': 14, 'Wednesday': 7, 'Thursday': 14}) How can I adjust this code or what different method will create a list of length N with a uniform distribution of values from days in a random order? EDIT: I obviously seemed to have phrased this question poorly. I am looking for list with length N with a uniform distribution of values from days but in a shuffled order (what I meant by random.) So I suppose what I am looking for is how to uniformly sample values from days N times, then just shuffle that list. Again, I want an equal amount of each value from days making up a list with length N. I need a uniform distribution for a list of exactly length 52, just as the example output shows. | The code you have is correct. You are seeing expected noise around the mean. Note that for higher N, the relative noise decreases, as expected. For example, this is what you get for N = 10000000: Counter({'Tuesday': 2000695, 'Thursday': 2000615, 'Wednesday': 2000096, 'Monday': 1999526, 'Friday': 1999068}) If you need equal or approximately equal (deterministic, rather than random) numbers of each element in random order, try a combination of itertools.cycle, itertools.islice and random.shuffle like so: import random import collections import itertools random.seed(1) days = ['Monday','Tuesday','Wednesday','Thursday','Friday'] N = 52 # If `N` is not divisible by `len(days)`, this line ensures that the last # `N % len(days)` elements of `selections` also stay random: random.shuffle(days) selections = list(itertools.islice(itertools.cycle(days), N)) random.shuffle(selections) print(selections) counter = collections.Counter(selections) print(counter) Output: ['Friday', 'Friday', 'Wednesday', ..., 'Thursday'] Counter({'Tuesday': 11, 'Monday': 11, 'Friday': 10, 'Wednesday': 10, 'Thursday': 10}) | 2 | 5 |
79,505,837 | 2025-3-13 | https://stackoverflow.com/questions/79505837/how-to-make-an-easily-instantiable-derivative-attribute-only-protocol-class | I have a Protocol subclass that defines objects with attributes from an external library: class P(Protocol): val: int For testing purposes, I want to turn this protocol class into something I can instantiate easily. However, when I try to turn it into a dataclass, an error pops up: import dataclasses from typing import Protocol class P(Protocol): val: int PInst = dataclasses.dataclass(P) PInst(val=4) # TypeError: Protocols cannot be instantiated Is there an easy solution to use P to create a class that satifies its protocol and is instantiable, without redeclaring its attributes? | You are asking for a non-protocol class derived from the attributes in your protocol class. They are stored in the annotations, which are accessible as typing.get_type_hints(P). In my first try, I dynamically created that class with builtin type, passing it the protocol's type hints to define an equivalent non-protocol class, and pass that to dataclass. But it needed me to manually set dunder annotations before it could correctly create the init method. But messing with internal attributes is a red flag telling me there should be an easier way. So I looked at the dataclasses public interface and found make_dataclass. You can just pass the annotations directly to its fields parameter. from dataclasses import make_dataclass from typing import Protocol, get_type_hints class P(Protocol): val: int DP = make_dataclass('DP', get_type_hints(P)) instance = DP(val=4) print(instance) output: DP(val=4) | 2 | 2 |
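If the protocol is made runtime_checkable, the generated dataclass can also be checked structurally in tests (a sketch; for attribute-only protocols, isinstance only verifies attribute presence, not type):

```python
# Sketch: runtime structural check of the generated class against the protocol.
from typing import Protocol, runtime_checkable

@runtime_checkable
class PRuntime(Protocol):
    val: int

assert isinstance(DP(val=4), PRuntime)  # passes: the instance exposes `val`
```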
79,506,954 | 2025-3-13 | https://stackoverflow.com/questions/79506954/passing-integers-to-fastapi-post-endpoint | I have two endpoints below, one that accepts two integers and another that accepts tow lists of integers. I also have two post requests that I am making using Python requests. The request for the list of ints endpoint works fine, but the one that just accepts the two ints does not. I know that if I pass the two ints in the URL (i.e, the query string), the endpoint will work. However, could you tell me why it doesn't work when passed as JSON? endpoints.py from fastapi import FastAPI app = FastAPI() @app.post('/ints') def post_int(x: int, y: int): return x, y @app.post('/lists') def post_list(x: list[int], y: list[int]): return x, y requests.py import requests r = requests.post('http://127.0.0.1:8000/ints', json={'x': 1, 'y': 2}) print(r.json()) # returns an error saying that x and y are missing r = requests.post('http://127.0.0.1:8000/ints?x=1&y=2') print(r.json()) # returns the two ints r = requests.post('http://127.0.0.1:8000/lists', json={'x': [1, 2, 5], 'y': [1, 2, 3]}) print(r.json()) # returns the two lists To start server, run "fastapi dev /path/to/endpoints.py" in terminal. Run requests.py in another terminal for output. | could you tell me why it doesn't work when passed as JSON? When you register a route with @app.post('/ints') def post_int(x: int, y: int): return x, y in FastApi, the route is analyzed and added in decorator (routing.py) add_api_route (routing.py) __init__ (routing.py) get_dependant (dependencies/utils.py) analyze_param (dependencies/utils.py) Near the end of analyze_param, you can find elif field_info is None and depends is None: default_value = value if value is not inspect.Signature.empty else RequiredParam if is_path_param: # We might check here that `default_value is RequiredParam`, but the fact is that the same # parameter might sometimes be a path parameter and sometimes not. See # `tests/test_infer_param_optionality.py` for an example. field_info = params.Path(annotation=use_annotation) elif is_uploadfile_or_nonable_uploadfile_annotation( type_annotation ) or is_uploadfile_sequence_annotation(type_annotation): field_info = params.File(annotation=use_annotation, default=default_value) elif not field_annotation_is_scalar(annotation=type_annotation): field_info = params.Body(annotation=use_annotation, default=default_value) else: field_info = params.Query(annotation=use_annotation, default=default_value) https://github.com/fastapi/fastapi/blob/master/fastapi/dependencies/utils.py#L460 There you can see that scalar parameters have field info type query. Lists aren't scalar values and have field info type body. | 2 | 1 |
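The usual way to make the first JSON request work is to declare the two ints through a Pydantic model (or `Body(...)` parameters), so they stop being treated as query parameters. A sketch with a hypothetical /ints-json route:

```python
# Sketch: accept {'x': 1, 'y': 2} as a JSON body by declaring a model instead of two scalars.
from pydantic import BaseModel

class IntPair(BaseModel):
    x: int
    y: int

@app.post('/ints-json')
def post_int_json(pair: IntPair):
    return pair.x, pair.y

# requests.post('http://127.0.0.1:8000/ints-json', json={'x': 1, 'y': 2})  ->  [1, 2]
```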
79,506,519 | 2025-3-13 | https://stackoverflow.com/questions/79506519/how-do-i-prevent-shelve-module-from-appending-db-to-filename | Python: 3.12.2, OS: MacOS 13.6.6 When I specify a filename to shelve.open, it appends .db to the filename: % ls % python Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import shelve >>> with shelve.open("myfile"): ... pass ... >>> quit() % ls myfile.db Furthermore, if I attempt to open an existing file as "myfile.db" (with the extension), I get the following error: % python Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import shelve >>> with shelve.open("myfile.db"): ... pass ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/homebrew/Cellar/[email protected]/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/shelve.py", line 243, in open return DbfilenameShelf(filename, flag, protocol, writeback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/[email protected]/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/shelve.py", line 227, in __init__ Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/[email protected]/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/dbm/__init__.py", line 89, in open raise error[0]("db type could not be determined") dbm.error: db type could not be determined Opening the existing file as simply "myfile" with no extension works fine, however. How do I prevent shelve.open from appending ".db" to the filename? Why can't I open existing databases if I specify their ".db" extension? Neither of these issues happen on Python 3.10.12 on Ubuntu 22, so I'm not sure if it's a Python version thing, or a platform thing. | Why can't I open existing databases if I specify their ".db" extension? After scrying shelve source code it could be unveiled that shelve.open("myfile") does result in calling dbm.open(filename, 'c') After scrying dbm source code it could be unveiled that this does depend on dbm.whichdb, where following line could be found f = io.open(filename + b".db", "rb") therefore if you attempt to do shelve.open("myfile.db") whichdb would be looking for myfile.db.db. How do I prevent shelve module from appending ".db" to filename? If you would do that this would most likely cause dbm.whichdb to malfunction for ndbm and dumbdm as it rely on extensions to detect nature of system used, as unveiled in comments # Check for ndbm first -- this has a .pag and a .dir file # Check for dumbdbm next -- this has a .dir and a .dat file whilst for sqlite3 and gnu it should would work independently from extensions as detection rely on headers (see source code linked above for details). | 1 | 2 |
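A small sketch (not from the answer) showing how to see which dbm backend is behind the suffix behaviour — always pass the name you gave to shelve.open() and let whichdb do the probing:

import dbm
import shelve

with shelve.open("myfile") as shelf:   # may create myfile.db on disk
    shelf["key"] = "value"

# whichdb itself tries the suffixed file names, so use the original name.
print(dbm.whichdb("myfile"))  # e.g. 'dbm.ndbm' on macOS, 'dbm.gnu' on many Linux builds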
79,507,131 | 2025-3-13 | https://stackoverflow.com/questions/79507131/dataframe-fill-for-null-values-pandas | I encounter problems trying to fill all null values on a specific column of a data frame. Here is an example of dataframe and my expected outcome. Example data frame: Column_1 Column_2 F A None B None None G C None None None D H D I want to get the first value from the column 1 to all null value from column 2 Expected Outcome: Column_1 Column_2 F A None B None G #First value from the left column G C None H #First value from the left column None D H D I'm getting error when I try this code. df['Colunmn_2'].ffill(df.loc[df['Column_1'].first_valid_index(), 'Column_1'],inplace=True) Thanks in advance! | You can combine fillna on Column_2 and bfill on Column_1: df['Column_2'] = df['Column_2'].fillna(df['Column_1'].bfill()) Output: Column_1 Column_2 0 F A 1 NaN B 2 NaN G 3 G C 4 NaN H 5 NaN D 6 H D Intermediates: Column_1 Column_2 col1_bfill col2_fillna 0 F A F A 1 NaN B G B 2 NaN NaN G ------> G 3 G C G C 4 NaN NaN H ------> H 5 NaN D H D 6 H D H D | 2 | 4 |
79,501,131 | 2025-3-11 | https://stackoverflow.com/questions/79501131/does-nuitka-onefile-mode-displays-my-python-code-in-tracebacks | Here is an example code: while True: # here is a comment pass Here is my Python and Nuitka version: D:\test>py -m nuitka --version 2.6.8 Commercial: None Python: 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 22:45:29) [MSC v.1916 32 bit (Intel)] Flavor: CPython Official Executable: ~\AppData\Local\Programs\Python\Python38-32\python.exe OS: Windows Arch: x86 WindowsRelease: 10 Version C compiler: ~\AppData\Local\Nuitka\Nuitka\Cache\DOWNLO~1\gcc\x86\14.2.0posix-19.1.1-12.0.0-msvcrt-r2\mingw32\bin\gcc.exe (gcc 14.2.0). I ran the command py -m nuitka --standalone --onefile test.py and finally got test.exe. Then I ran test.exe and pressed Ctrl+C in the windows terminal. I got a traceback like this: Traceback (most recent call last): File "C:\Users\User\AppData\Local\Temp\ONEFIL~3\test.py", line 1, in <module> while True: # here is a comment KeyboardInterrupt ^C I found the source code even with the comment. But most people say that programs compiled by Nuitka is irreversible. So what's wrong? | In the Nuitka page you can read that Nuitka Standard The standard edition bundles your code, dependencies and data into a single executable if you want. It also does acceleration, just running faster in the same environment, and can produce extension modules as well. It is freely distributed under the Apache license. Nuitka Commercial The commercial edition additionally protects your code, data and outputs, so that users of the executable cannot access these. This a private repository of plugins that you pay to get access to. Additionally, you can purchase priority support. So to encrypt all traceback outputs you have to buy the Commercial version. In this Nuitka Commercial you can see the features only Nuitka Commercial offers. Onefile: Creates a single executable file. When this executable runs, it extracts its contents to a temporary directory. If the source code is within the folder where the onefile is placed, then Nuitka can find it and display the source code in tracebacks. Standalone: Creates a directory containing the executable and all its dependencies. By default this would create a dist folder, but the source code is not included in the folder. Because of that Nuitka can't display the source code in tracebacks. For Nuitka to display source code lines within tracebacks, the original Python source files must be present in the same location when the compiled program is executed. | 1 | 2 |
79,506,581 | 2025-3-13 | https://stackoverflow.com/questions/79506581/how-can-i-use-ruff-rules-to-enforce-a-particular-import-style | I changed my code base to always import the python datetime module like this: import datetime as dt instead of using import datetime or from datetime import datetime And we had both those in the codebase! It's confusing because you can't know at this point what datetime can do, if it is the module or the class. See also this blog post by Adam Johnson: https://adamj.eu/tech/2019/09/12/how-i-import-pythons-datetime-module/ What I try to do is create a rule for the ruff linter that enforces this import style. There is tidy-imports but I can't get it to work. | You should be able to lint code using from datetime import datetime syntax to suggest replacing it with import datetime as dt with the TOML configuration: [tool.ruff.lint] # Add the ICN rules to any others you have selected. select = ["E4", "E7", "E9", "F", "ICN"] [tool.ruff.lint.flake8-import-conventions] banned-from = ["datetime"] [tool.ruff.lint.flake8-import-conventions.extend-aliases] "datetime" = "dt" Which, for the code: import datetime from datetime import datetime as dto import datetime as dt print(datetime, dto, dt) Outputs the errors: `datetime` should be imported as `dt` (ICN001) [Ln 1, Col 8] Members of `datetime` should not be imported explicitly (ICN003) [Ln2, Col 1] fiddle See unconventional-import-alias (ICN001) and banned-import-from (ICN003) | 1 | 2 |
79,506,301 | 2025-3-13 | https://stackoverflow.com/questions/79506301/how-to-do-fitting-on-experimental-graph-and-theoretical-graph | I am currently fitting the experimental results and will later take the MSE to increase the accuracy, the accuracy between the data and theory. How do you make sure the fittings fit perfectly to each other? Can that be done? And how do you calculate the MSE? so this is my code import numpy as np import matplotlib.pyplot as plt import pandas as pd from scipy.signal import find_peaks from scipy.optimize import curve_fit A = 0.12 b = 0.005 m = 1.0 k = 7.0 gamma = b / (2 * m) time = np.array([ 0.000, 0.034, 0.068, 0.101, 0.135, 0.169, 0.203, 0.236, 0.270, 0.304, 0.338, 0.372, 0.405, 0.439, 0.473, 0.507, 0.541, 0.574, 0.608, 0.642, 0.676, 0.709, 0.743, 0.777, 0.811, 0.845, 0.878, 0.912, 0.946, 0.980, 1.013, 1.047, 1.081, 1.115, 1.149, 1.182, 1.216, 1.250, 1.284, 1.317, 1.351, 1.385, 1.419, 1.453, 1.486, 1.520, 1.554, 1.588, 1.622, 1.655, 1.689, 1.723, 1.757, 1.790, 1.824, 1.858, 1.892, 1.926, 1.959, 1.993, 2.027, 2.061, 2.094, 2.128, 2.162, 2.196, 2.230, 2.263, 2.297, 2.331, 2.365, 2.398, 2.432, 2.466, 2.500, 2.534, 2.567, 2.601, 2.635, 2.669, 2.703, 2.736, 2.770, 2.804, 2.838, 2.871, 2.905, 2.939, 2.973, 3.007, 3.040, 3.074, 3.108, 3.142, 3.175, 3.209, 3.243 ]) y = np.array([ 0.309, 0.320, 0.326, 0.325, 0.321, 0.312, 0.299, 0.282, 0.266, 0.249, 0.230, 0.209, 0.187, 0.169, 0.154, 0.138, 0.126, 0.115, 0.107, 0.103, 0.102, 0.107, 0.111, 0.116, 0.128, 0.140, 0.153, 0.165, 0.177, 0.192, 0.203, 0.212, 0.221, 0.228, 0.235, 0.239, 0.239, 0.240, 0.236, 0.231, 0.224, 0.217, 0.210, 0.200, 0.189, 0.181, 0.174, 0.166, 0.159, 0.152, 0.149, 0.146, 0.145, 0.145, 0.147, 0.153, 0.157, 0.162, 0.167, 0.173, 0.180, 0.186, 0.191, 0.196, 0.199, 0.202, 0.205, 0.205, 0.204, 0.203, 0.201, 0.197, 0.193, 0.189, 0.186, 0.182, 0.178, 0.174, 0.171, 0.171, 0.168, 0.166, 0.165, 0.166, 0.166, 0.169, 0.172, 0.175, 0.177, 0.179, 0.182, 0.186, 0.186, 0.188, 0.190, 0.192, 0.192 ]) y -= 0.2 # Koreksi offset peaks, _ = find_peaks(y) periode = np.diff(time[peaks]) print("Indeks puncak:", peaks) print("Waktu puncak:", time[peaks]) print("Periode antara puncak:", periode) print("Periode rata-rata:", np.mean(periode)) # Ambil puncak pertama dan terakhir y0 = y[peaks[0]] yT = y[peaks[-1]] T = time[peaks[-1]] - time[peaks[0]] # Selisih waktu antara puncak pertama dan terakhir # Mencari puncak peaks, _ = find_peaks(y) periode = np.diff(time[peaks]) # Hitung omega_d berdasarkan data omega_d = 2 * np.pi / np.mean(periode) # Fungsi fitting dengan gamma sebagai parameter def damped_cosine(t, A_fit, gamma_fit, phi_fit): return A_fit * np.exp(-gamma_fit * t) * np.cos(omega_d * t + phi_fit) # Fitting ke data eksperimen popt, _ = curve_fit(damped_cosine, time, y, p0=[0.12, gamma, 0]) # Ambil hasil fitting A_fit, gamma_fit, phi_fit = popt # Hitung ulang data teori dengan hasil fitting y_t_fit = damped_cosine(time, A_fit, gamma_fit, phi_fit) # Plot hasil plt.plot(time, y, label="Experiment") plt.plot(time[peaks], y[peaks], "ro", label="Max") plt.plot(time, y_t_fit, label="Fitting", linestyle="dashed") plt.xlabel("time (s)") plt.ylabel(" y(t)") plt.title("Grafik Getaran Teredam dengan Fitting") plt.legend() plt.grid() plt.show() # Cetak hasil fitting print(f"Amplitudo fit: {A_fit}") print(f"Gamma fit: {gamma_fit}") print(f"Fase fit: {phi_fit}") # Hitung error rata-rata kuadratik N = len(y) error = (np.sum(y**2) - 2 * np.sum(y * y_t_fit) + np.sum(y_t_fit**2)) / N # Cetak error print(f"Error rata-rata kuadratik: 
{error}") | Your experimental data has a negative offset in it (not centred on y=0). You should allow for that in your fitting function. Although the peaks are a good first indicator of period, the damped frequency will differ from the undamped one, so I would allow that to vary also. Your error estimation looks OK (if a little long). Changed lines (with full code below): def damped_cosine(t, z_fit, A_fit, omega_fit, gamma_fit, phi_fit): return z_fit + A_fit * np.exp(-gamma_fit * t) * np.cos(omega_fit * t + phi_fit) popt, _ = curve_fit(damped_cosine, time, y, p0=[0.0, 0.12, omega_d, gamma, 0]) z_fit, A_fit, omega_fit, gamma_fit, phi_fit = popt y_t_fit = damped_cosine(time, z_fit, A_fit, omega_fit, gamma_fit, phi_fit) Full code: import numpy as np import matplotlib.pyplot as plt import pandas as pd from scipy.signal import find_peaks from scipy.optimize import curve_fit A = 0.12 b = 0.005 m = 1.0 k = 7.0 gamma = b / (2 * m) time = np.array([ 0.000, 0.034, 0.068, 0.101, 0.135, 0.169, 0.203, 0.236, 0.270, 0.304, 0.338, 0.372, 0.405, 0.439, 0.473, 0.507, 0.541, 0.574, 0.608, 0.642, 0.676, 0.709, 0.743, 0.777, 0.811, 0.845, 0.878, 0.912, 0.946, 0.980, 1.013, 1.047, 1.081, 1.115, 1.149, 1.182, 1.216, 1.250, 1.284, 1.317, 1.351, 1.385, 1.419, 1.453, 1.486, 1.520, 1.554, 1.588, 1.622, 1.655, 1.689, 1.723, 1.757, 1.790, 1.824, 1.858, 1.892, 1.926, 1.959, 1.993, 2.027, 2.061, 2.094, 2.128, 2.162, 2.196, 2.230, 2.263, 2.297, 2.331, 2.365, 2.398, 2.432, 2.466, 2.500, 2.534, 2.567, 2.601, 2.635, 2.669, 2.703, 2.736, 2.770, 2.804, 2.838, 2.871, 2.905, 2.939, 2.973, 3.007, 3.040, 3.074, 3.108, 3.142, 3.175, 3.209, 3.243 ]) y = np.array([ 0.309, 0.320, 0.326, 0.325, 0.321, 0.312, 0.299, 0.282, 0.266, 0.249, 0.230, 0.209, 0.187, 0.169, 0.154, 0.138, 0.126, 0.115, 0.107, 0.103, 0.102, 0.107, 0.111, 0.116, 0.128, 0.140, 0.153, 0.165, 0.177, 0.192, 0.203, 0.212, 0.221, 0.228, 0.235, 0.239, 0.239, 0.240, 0.236, 0.231, 0.224, 0.217, 0.210, 0.200, 0.189, 0.181, 0.174, 0.166, 0.159, 0.152, 0.149, 0.146, 0.145, 0.145, 0.147, 0.153, 0.157, 0.162, 0.167, 0.173, 0.180, 0.186, 0.191, 0.196, 0.199, 0.202, 0.205, 0.205, 0.204, 0.203, 0.201, 0.197, 0.193, 0.189, 0.186, 0.182, 0.178, 0.174, 0.171, 0.171, 0.168, 0.166, 0.165, 0.166, 0.166, 0.169, 0.172, 0.175, 0.177, 0.179, 0.182, 0.186, 0.186, 0.188, 0.190, 0.192, 0.192 ]) y -= 0.2 # Koreksi offset peaks, _ = find_peaks(y) periode = np.diff(time[peaks]) print("Indeks puncak:", peaks) print("Waktu puncak:", time[peaks]) print("Periode antara puncak:", periode) print("Periode rata-rata:", np.mean(periode)) # Ambil puncak pertama dan terakhir y0 = y[peaks[0]] yT = y[peaks[-1]] T = time[peaks[-1]] - time[peaks[0]] # Selisih waktu antara puncak pertama dan terakhir # Mencari puncak peaks, _ = find_peaks(y) periode = np.diff(time[peaks]) # Hitung omega_d berdasarkan data omega_d = 2 * np.pi / np.mean(periode) # Fungsi fitting dengan gamma sebagai parameter def damped_cosine(t, z_fit, A_fit, omega_fit, gamma_fit, phi_fit): return z_fit + A_fit * np.exp(-gamma_fit * t) * np.cos(omega_fit * t + phi_fit) # Fitting ke data eksperimen popt, _ = curve_fit(damped_cosine, time, y, p0=[0.0, 0.12, omega_d, gamma, 0]) # Ambil hasil fitting z_fit, A_fit, omega_fit, gamma_fit, phi_fit = popt # Hitung ulang data teori dengan hasil fitting y_t_fit = damped_cosine(time, z_fit, A_fit, omega_fit, gamma_fit, phi_fit) # Plot hasil plt.plot(time, y, label="Experiment") plt.plot(time[peaks], y[peaks], "ro", label="Max") plt.plot(time, y_t_fit, label="Fitting", linestyle="dashed") plt.xlabel("time 
(s)") plt.ylabel(" y(t)") plt.title("Grafik Getaran Teredam dengan Fitting") plt.legend() plt.grid() plt.show() # Cetak hasil fitting print(f"Amplitudo fit: {A_fit}") print(f"Gamma fit: {gamma_fit}") print(f"Fase fit: {phi_fit}") # Hitung error rata-rata kuadratik N = len(y) error = (np.sum(y**2) - 2 * np.sum(y * y_t_fit) + np.sum(y_t_fit**2)) / N # Cetak error print(f"Error rata-rata kuadratik: {error}") | 1 | 2 |
79,506,238 | 2025-3-13 | https://stackoverflow.com/questions/79506238/polars-upsampling-with-grouping-does-not-behave-as-expected | Here is the data import polars as pl from datetime import datetime df = pl.DataFrame( { "time": [ datetime(2021, 2, 1), datetime(2021, 4, 2), datetime(2021, 5, 4), datetime(2021, 6, 6), datetime(2021, 6, 8), datetime(2021, 7, 10), datetime(2021, 8, 18), datetime(2021, 9, 20), ], "groups": ["A", "B", "A", "B","A","B","A","B"], "values": [0, 1, 2, 3,4,5,6,7], } ) The upsampling and the testing: ( df .upsample( time_column="time", every="1d", group_by="groups", maintain_order=True ) .group_by('groups') .agg(pl.col('time').diff().max()) ) shape: (3, 2) ┌────────┬──────────────┐ │ groups ┆ time │ │ --- ┆ --- │ │ str ┆ duration[μs] │ ╞════════╪══════════════╡ │ A ┆ 92d │ │ null ┆ 2d │ │ B ┆ 72d │ └────────┴──────────────┘ The diff is not 1 day as I would expect. Is this a bug, or am I doing something wrong? | It is due to the group columns resulting in null - which is a bug. https://github.com/pola-rs/polars/issues/15530 upsample itself is implemented as a datetime_range and join https://github.com/pola-rs/polars/blob/a4fbc9453cacb7e7e5cc476b30a98845aaa5f506/crates/polars-time/src/upsample.rs#L203 Which you could do manually as a workaround. (df.group_by("groups") .agg(pl.datetime_range(pl.col("time").first(), pl.col("time").last())) .explode("time") .join(df, on=["groups", "time"], how="left") .group_by("groups") .agg(pl.col("time").diff().max()) ) shape: (2, 2) ┌────────┬──────────────┐ │ groups ┆ time │ │ --- ┆ --- │ │ str ┆ duration[μs] │ ╞════════╪══════════════╡ │ A ┆ 1d │ │ B ┆ 1d │ └────────┴──────────────┘ | 1 | 3 |
79,505,931 | 2025-3-13 | https://stackoverflow.com/questions/79505931/why-can-i-read-a-file-with-pandas-but-not-with-polars | I have a CSV (or rather TSV) I got from stripping the header off a gVCF with bcftools view foo.g.vcf -H > foo.g.vcf.csv A head gives me this, so everything looks like expected so far chr1H 1 . T <*> 0 . END=1000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 1001 . T <*> 0 . END=1707 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 1708 . C <*> 0 . END=1763 GT:GQ:MIN_DP:PL 0/0:6:2:0,6,59 chr1H 1764 . T <*> 0 . END=2000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 2001 . A <*> 0 . END=3000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 3001 . G <*> 0 . END=4000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 4001 . T <*> 0 . END=5000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 5001 . T <*> 0 . END=6000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 6001 . A <*> 0 . END=7000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 chr1H 7001 . G <*> 0 . END=8000 GT:GQ:MIN_DP:PL 0/0:1:0:0,0,0 When I know try to read the file as a dataframe in a Jupyter Notebook like this import polars as pl df = pl.read_csv("foo.g.vcf.csv", has_header=False, new_columns=["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO", "FORMAT", "SAMPLE"], separator="\t") I get a compute error "Original error: remaining bytes non-empty". However, when I do import pandas as pd import polars as pl df = pd.read_csv("foo.g.vcf.csv", header=None, sep="\t", names=["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO", "FORMAT", "SAMPLE"]) df = pl.DataFrame(df) every works as intended. Why can I read with pandas without problems and convert to polars, but not read with polars directly? The other VCF I want to compare with, which I stripped the same way, works with polars. | Looks like you might have empty trailing spaces. Hence the error: Original error: remaining bytes non-empty Polars is stricter than pandas on file formatting. Pandas will infer formatting but Polars will not. You can use this command to remove empty lines and white spaces: sed -i '/^\s*$/d' foo.g.vcf.csv But I recommend you tell Polars to infer the schema from the whole file instead with: infer_schema_length=None Or you can tell Polars to ignore parsing errors (I do not recommend this but it is an option) with: ignore_errors=True | 1 | 2 |
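Putting the answer's suggestions together (file name and column names taken from the question), a sketch of the explicit, stricter read:

import polars as pl

df = pl.read_csv(
    "foo.g.vcf.csv",
    separator="\t",
    has_header=False,
    new_columns=["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO", "FORMAT", "SAMPLE"],
    infer_schema_length=None,   # scan the whole file before picking dtypes
    # ignore_errors=True,       # last resort: unparsable fields become nulls
)
print(df.head())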
79,505,258 | 2025-3-13 | https://stackoverflow.com/questions/79505258/pymongo-group-by-year-based-on-subdocument-date | I have a MongoDB document like: [ { _id: ObjectId('67cfd69ba3e561d35ee57f51'), created_at: ISODate('2025-03-11T06:22:19.044Z'), conversation: [ { id: '67cfd6c1a3e561d35ee57f53', feedback: { liked: false, disliked: true, copied: true, created_at: ISODate('2025-03-11T06:27:48.634Z') } }, { id: '67cfd77fa3e561d35ee57f54', feedback: { liked: true, disliked: false, copied: false, created_at: ISODate('2025-03-11T06:28:25.099Z') } }, { id: '67d009f1a3e561d35ee57f5a', feedback: null }, { id: '67d009f8a3e561d35ee57f5b', feedback: null } ] }, { _id: ObjectId('67d00aeaa3e561d35ee57f5d'), created_at: ISODate('2025-03-11T10:05:30.848Z'), conversation: [ { id: '67d00af7a3e561d35ee57f5f', feedback: null }, { id: '67d00afaa3e561d35ee57f60', feedback: null } ] } ] Where the main document has a conversation subdocument, I want to know how many likes, dislikes and copied data in each year. I tried to get year from the conversation.feedback.created_at using $dateToString operator. pipeline = [ { '$match': { 'conversation.feedback.copied': True } }, { '$group': { '_id': { '$dateToString': { 'format': '%Y', 'date': '$conversation.feedback.created_at' } }, 'total_copied': { '$sum': 1 } } } ] But it gives an error: OperationFailure: PlanExecutor error during aggregation :: caused by :: can't convert from BSON type array to Date, full error: {'ok': 0.0, 'errmsg': "PlanExecutor error during aggregation :: caused by :: can't convert from BSON type array to Date", 'code': 16006, 'codeName': 'Location16006'} What I am expecting out as: { "2025": { "total_liked": 1, "total_disliked": 1, "total_copied": 1 } } How to convert the DateTime object to year and combine the total counts for 3 parameters? | You need the $unwind stage to deconstruct the conversation array before grouping by conversation.feedback.created_at. Note that, in your sample data, there is possibly the conversation.feedback is null. Hence you should remove those unwinded document with conversation.feedback is null. For calculating the sum based on the boolean value, you can work with $cond to add 1 when the value is true. If you are looking for the generated output with key-value pair, you may look for $replaceRoot and $arrayToObject to convert list of objects to key-value pair. db.collection.aggregate([ { "$match": { "conversation.feedback.copied": true } }, { "$unwind": "$conversation" }, { "$match": { "conversation.feedback": { "$ne": null } } }, { "$group": { "_id": { "$dateToString": { "format": "%Y", "date": "$conversation.feedback.created_at" } }, "total_copied": { "$sum": { $cond: [ { $eq: [ "$conversation.feedback.copied", true ] }, 1, 0 ] } }, "total_liked": { "$sum": { "$cond": [ { "$eq": [ "$conversation.feedback.liked", true ] }, 1, 0 ] } }, "total_disliked": { "$sum": { "$cond": [ { "$eq": [ "$conversation.feedback.disliked", true ] }, 1, 0 ] } } } }, { "$replaceRoot": { "newRoot": { "$arrayToObject": [ [ { "k": "$_id", "v": { "total_copied": "$total_copied", "total_liked": "$total_liked", "total_disliked": "$total_disliked" } } ] ] } } } ]) Demo @ Mongo Playground | 1 | 2 |
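Since the question uses PyMongo, the same aggregation can be sent from Python as plain dicts; the connection URI and database/collection names below are placeholders:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
coll = client["mydb"]["conversations"]              # placeholder names

def count_flag(flag):
    # add 1 whenever conversation.feedback.<flag> is True
    return {"$sum": {"$cond": [{"$eq": [f"$conversation.feedback.{flag}", True]}, 1, 0]}}

pipeline = [
    {"$match": {"conversation.feedback.copied": True}},
    {"$unwind": "$conversation"},
    {"$match": {"conversation.feedback": {"$ne": None}}},
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y", "date": "$conversation.feedback.created_at"}},
        "total_liked": count_flag("liked"),
        "total_disliked": count_flag("disliked"),
        "total_copied": count_flag("copied"),
    }},
]

result = {doc["_id"]: {k: v for k, v in doc.items() if k != "_id"} for doc in coll.aggregate(pipeline)}
print(result)   # e.g. {'2025': {'total_liked': 1, 'total_disliked': 1, 'total_copied': 1}}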
79,504,309 | 2025-3-12 | https://stackoverflow.com/questions/79504309/create-a-uniform-dataset-in-polars-with-cross-joins | I am working with Polars and need to ensure that my dataset contains all possible combinations of unique values in certain index columns. If a combination is missing in the original data, it should be filled with null. Currently, I use the following approach with sequential cross joins: def ensure_uniform(df: pl.DataFrame, index_cols: Sequence[str]) -> pl.DataFrame: # Quick exit if len(index_cols) == 1: return df # Get unique values of the first index column uniform_df = df.select(index_cols[0]).unique(maintain_order=True) # Cross join with other unique index columns for i in range(1, len(index_cols)): unique_index_values = df.select(index_cols[i]).unique(maintain_order=True) uniform_df = uniform_df.join(unique_index_values, how="cross") # Left join with the original DataFrame to preserve existing values return uniform_df.join(df, on=index_cols, how="left") Here is an example: df = pl.from_repr(''' ┌─────┬─────┬─────┬───────┐ │ g1 ┆ g2 ┆ g3 ┆ value │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪═══════╡ │ A ┆ 1 ┆ 1 ┆ 10 │ │ A ┆ 1 ┆ 2 ┆ 20 │ │ B ┆ 2 ┆ 1 ┆ 30 │ │ B ┆ 2 ┆ 2 ┆ 40 │ └─────┴─────┴─────┴───────┘ ''') uniform_df = ensure_uniform(df, index_cols=["g1", "g2", "g3"]) print(uniform_df) # ┌─────┬─────┬─────┬───────┐ # │ g1 ┆ g2 ┆ g3 ┆ value │ # │ --- ┆ --- ┆ --- ┆ --- │ # │ str ┆ i64 ┆ i64 ┆ i64 │ # ╞═════╪═════╪═════╪═══════╡ # │ A ┆ 1 ┆ 1 ┆ 10 │ # │ A ┆ 1 ┆ 2 ┆ 20 │ # │ A ┆ 2 ┆ 1 ┆ null │ # │ A ┆ 2 ┆ 2 ┆ null │ # │ B ┆ 1 ┆ 1 ┆ null │ # │ B ┆ 1 ┆ 2 ┆ null │ # │ B ┆ 2 ┆ 1 ┆ 30 │ # │ B ┆ 2 ┆ 2 ┆ 40 │ # └─────┴─────┴─────┴───────┘ Any suggestions for making this more graceful and efficient? Edit: @Dean MacGregor & @orlp Thank you for your answers. All approaches show comparable performance (+/-10%), with @Dean MacGregor's proposal consistently being slightly faster than the others. After testing on multiple setups, I found that the main bottleneck seems to be the final join process, combining the uniform multiindex with the original dataset, rather than building the Cartesian product beforehand. This suggests that speed and peak memory consumption remain similar regardless of how the Cartesian product is computed, especially as dataset sizes grow. Both proposals enable lazy operations, which could be useful depending on the use case. Since there is no dedicated method for computing the Cartesian product, all options seem to be valid. | I think your approach is sensible, it can however be done lazily: from functools import reduce def ensure_uniform(df: pl.DataFrame, index_cols: Sequence[str]) -> pl.DataFrame: if len(index_cols) == 1: return df lf = df.lazy() uniques = [lf.select(col).unique(maintain_order=True) for col in index_cols] product = reduce(lambda a, b: a.join(b, how="cross"), uniques) out = product.join(lf, on=index_cols, how="left", maintain_order="left") return out.collect() | 3 | 3 |
79,502,464 | 2025-3-12 | https://stackoverflow.com/questions/79502464/fastest-way-to-find-indices-of-highest-value-in-a-matrix-iteratively-and-exclusi | I'm attempting to find the "best hits" in a similarity matrix (i.e. an mxn matrix where index along each axis corresponds to the ith position in vector m and the jth position in vector n). The simplest way to explain this is finding the indices of the highest values in a matrix iteratively, excluding previously selected rows and columns. This results in min(m,n) indices chosen. Here's my minimum reproducible example of my current implementation, using pandas: import numpy as np import pandas as pd def pairwise_best_hit(sim): xdim,ydim = np.meshgrid(np.arange(sim.shape[1]),np.arange(sim.shape[0])) table = np.vstack((sim.ravel(),xdim.ravel(),ydim.ravel())).T df = pd.DataFrame(table).rename(columns={0:'sim',1:'index2',2:'index1'}).sort_values('sim',ascending=False) seq1_hits = [] seq2_hits = [] while len(df): index1 = df.iloc[0]['index1'] index2 = df.iloc[0]['index2'] seq1_hits.append(index1) seq2_hits.append(index2) df = df[(df['index1']!=index1)&(df['index2']!=index2)] return [seq1_hits,seq2_hits] and for a matrix sim = np.array([[1,5,6,2],[7,10,3,4],[1,5,3,7]]) pairwise_best_hit(sim) returns [[1, 2, 0], [1, 3, 2]] Figured an edit would be the best way to respond to all comments simultaneously. Re: typical data size – m and n are anywhere from 250 to 1000 and the values in the matrix are floats. Now, for the results on a matrix of my actual data, which is about 300x350. Slightly tweaking the answers from LastDuckStanding, Julien, and Axel Donath, we have: def original(sim): xdim,ydim = np.meshgrid(np.arange(sim.shape[1]),np.arange(sim.shape[0])) table = np.vstack((sim.ravel(),xdim.ravel(),ydim.ravel())).T df = pd.DataFrame(table).rename(columns={0:'sim',1:'index2',2:'index1'}).sort_values('sim',ascending=False) output_list = [] while len(df): index1 = df.iloc[0]['index1'] index2 = df.iloc[0]['index2'] score = df.iloc[0]['sim'] output_list.append((int(index1),int(index2),score)) df = df[(df['index1']!=index1)&(df['index2']!=index2)] return output_list def lastduckstanding(input_matrix): mat = input_matrix.copy() idxs = np.argsort(mat, None) output_list = [] hit_is_set = set() hit_js_set = set() num_entries = min(mat.shape[0], mat.shape[1]) for idx in reversed(idxs): i, j = divmod(idx, mat.shape[1]) if i in hit_is_set or j in hit_js_set: continue hit_is_set.add(i) hit_js_set.add(j) output_list.append((i,j,mat[i,j])) if len(output_list) == num_entries: break return output_list def julien(matrix: np.ndarray): out = [] copy = matrix.copy() for _ in range(min(copy.shape)): ij = np.unravel_index(copy.argmax(), copy.shape) indeces_plus_score = (ij[0],ij[1],copy[ij[0],ij[1]]) out.append(indeces_plus_score) copy[ij[0], :] = -np.inf copy[:, ij[1]] = -np.inf return out def axeldonath(arr, indices): """Find the maximum value in a 2D array recursively.""" if not np.any(np.isfinite(arr)): return indices idx_max = np.argmax(arr) idxs = np.unravel_index(idx_max, arr.shape) indeces_plus_score = (idxs[0],idxs[1],arr[idxs[0],idxs[1]]) arr[idxs[0], :] = -np.inf arr[:, idxs[1]] = -np.inf indices.append(indeces_plus_score) return axeldonath(arr, indices) def axeldonath_wrapper(similarity): testsim = similarity.copy() return axeldonath(testsim,[]) def timing_test(S,functionlist): for function in functionlist: testsim = S['matrix'] print(function) %timeit function(testsim) With the following timing results: <function original at 0x7f405006d1c0> 287 ms ± 1.85 ms per 
loop (mean ± std. dev. of 7 runs, 1 loop each) <function lastduckstanding at 0x7f405006d120> 30 ms ± 2.24 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) <function julien at 0x7f405006d260> 7.9 ms ± 30.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) <function axeldonath_wrapper at 0x7f405006d3a0> 16.9 ms ± 42.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) | Here's a candidate using numpy only, about 100x faster on your small example: import numpy as np import pandas as pd import timeit sim = np.array([[1,5,6,2],[7,10,3,4],[1,5,3,7]]) def mine(sim): out = [] copy = sim.copy() MIN = np.iinfo(copy.dtype).min # or -np.inf if using floats... for _ in range(min(copy.shape)): ij = np.unravel_index(copy.argmax(), copy.shape) out.append(ij) copy[ij[0]] = MIN copy[:,ij[1]] = MIN return np.transpose(out) def yours(sim): xdim,ydim = np.meshgrid(np.arange(sim.shape[1]),np.arange(sim.shape[0])) table = np.vstack((sim.ravel(),xdim.ravel(),ydim.ravel())).T df = pd.DataFrame(table).rename(columns={0:'sim',1:'index2',2:'index1'}).sort_values('sim',ascending=False) seq1_hits = [] seq2_hits = [] while len(df): index1 = df.iloc[0]['index1'] index2 = df.iloc[0]['index2'] seq1_hits.append(index1) seq2_hits.append(index2) df = df[(df['index1']!=index1)&(df['index2']!=index2)] return [seq1_hits,seq2_hits] assert np.all(mine(sim) == yours(sim)) %timeit yours(sim) # 1.05 ms ± 6.78 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) %timeit mine(sim) # 8.18 µs ± 19.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) Comparison slowly degrades for larger arrays, but stays ahead (still 10x faster on 1000x1000 arrays): sim = np.arange(10000) np.random.shuffle(sim) sim.shape = 100,100 %timeit yours(sim) # 26.2 ms ± 64.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit mine(sim) # 397 µs ± 534 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each) sim = np.arange(1000000) np.random.shuffle(sim) sim.shape = 1000,1000 %timeit yours(sim) # 2.45 s ± 18.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit mine(sim) #203 ms ± 2.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | 3 | 3 |
79,503,619 | 2025-3-12 | https://stackoverflow.com/questions/79503619/pandas-dataframe-add-columns-based-on-list-of-samples-and-column-headers | I want to add columns in my df with values based on the sample list in one column and the next column headers as sample numbers. In detail: based on the 11 column, I want to add 3 columns designed as 11_1, 11_2 and 11_3 with values according to the sample list in the 11 and then the same for 00. My tiny part of input data: df_matrix_data = {'11': [['P4-1', 'P4-2', 'P4-3'], ['P4-1', 'P4-3', 'P4-4']], '00': [['P4-4', 'P4-6', 'P4-7',], ['P4-2', 'P4-5', 'P4-7']], 'P4-1': [1, 2], 'P4-2': [6, 8], 'P4-3': [5, 2], 'P4-4': [2, 3], 'P4-5': [np.nan, 2], 'P4-6': [6, np.nan], 'P4-7': [3, 2]} df_matrix = pd.DataFrame.from_dict(df_matrix_data) will look like this: 11 00 P4-1 P4-2 P4-3 P4-4 P4-5 P4-6 P4-7 0 [P4-1, P4-2, P4-3] [P4-4, P4-6, P4-7] 1 6 5 2 NaN 6.0 3 1 [P4-1, P4-3, P4-4] [P4-2, P4-5, P4-7] 2 8 2 3 2.0 NaN 2 and desired output should look like this: 11 00 P4-1 P4-2 P4-3 P4-4 P4-5 P4-6 P4-7 11_1 11_2 11_3 00_1 00_2 00_3 0 [P4-1, P4-2, P4-3] [P4-4, P4-6, P4-7] 1 6 5 2 NaN 6.0 3 1 6 5 2 6 3 1 [P4-1, P4-3, P4-4] [P4-2, P4-5, P4-7] 2 8 2 3 2.0 NaN 2 2 2 3 8 2 2 Any ideas on how to perform it? | Another possible solution: df_matrix.assign( **{f"{k}_{i+1}": df_matrix.apply( lambda row: row[row[k][i]], axis=1) for k in ['11', '00'] for i in range(3)}) It uses a dictionary comprehension within assign, iterating over each key (e.g., '11') and list index (0-2), then generates columns like 11_1 by mapping the list's element (e.g., row['11'][0]) to its corresponding value in the row via lambda. To avoid the inefficient apply: df_matrix.assign( **{f"{k}_{i+1}": df_matrix.values[ np.arange(len(df_matrix)), df_matrix.columns.get_indexer(df_matrix[k].str[i])] for k in ['11', '00'] for i in range(3)}) It uses index.get_indexer to convert column names to numeric indices. Output: 11 00 P4-1 P4-2 P4-3 P4-4 P4-5 P4-6 \ 0 [P4-1, P4-2, P4-3] [P4-4, P4-6, P4-7] 1 6 5 2 NaN 6.0 1 [P4-1, P4-3, P4-4] [P4-2, P4-5, P4-7] 2 8 2 3 2.0 NaN P4-7 11_1 11_2 11_3 00_1 00_2 00_3 0 3 1 6 5 2 6.0 3 1 2 2 2 3 8 2.0 2 | 1 | 1 |
79,504,418 | 2025-3-12 | https://stackoverflow.com/questions/79504418/how-to-check-if-pyright-ignore-comments-are-still-required | I have recently switched from using mypy to pyright for static type checking. mypy has a useful feature whereby it can be configured to warn you if it detects comments instructing it to ignore certain errors which would not actually be raised (described here: https://stackoverflow.com/a/65581907/7256443). Does anyone know if pyright has a similar feature? I can't see any reference to it in their documentation. Ideally, it would apply to the pyright specific ignore comments (# pyright: ignore [errorCode]) as well as the generic # type: ignore. | Pyright has reportUnnecessaryTypeIgnoreComment, which flags unnecessary # type: ignore comments as errors. To enable this check, add the following to your pyrightconfig.json: { "reportUnnecessaryTypeIgnoreComment": "error" } Source: Pyright Documentation - Type Check Diagnostics Settings reportUnnecessaryTypeIgnoreComment [boolean or string, optional]: Generate or suppress diagnostics for a # type: ignore or # pyright: ignore comment that would have no effect if removed. The default value for this setting is "none". | 1 | 3 |
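A tiny illustration (hypothetical file) of what the setting reports once it is enabled:

# example.py -- checked with "reportUnnecessaryTypeIgnoreComment": "error"

# This ignore suppresses nothing, so pyright reports the comment itself:
x: int = 1  # type: ignore

# This one hides a real assignment error, so it is left alone:
y: int = "oops"  # type: ignore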
79,501,731 | 2025-3-11 | https://stackoverflow.com/questions/79501731/transforming-polars-dataframe-to-nested-json-format | I have a dataframe that contains a product name, question, and answers. I would like to process the dataframe and transform it into a JSON format. Each product should have nested sections for questions and answers. My dataframe: import polars as pl df = pl.DataFrame({ "Product": ["X", "X", "Y", "Y"], "Question": ["Q1", "Q2", "Q3", "Q4"], "Anwers": ["A1", "A2", "A3", "A4"], }) Desired Output: { "faqByCommunity": { "id": 5, "communityName": "name", "faqList": [ { "id": 1, "product": "X", "faqs": [ { "id": 1, "question": "Q1", "answer": "A1" }, { "id": 2, "question": "Q2", "answer": "A2" } ] }, { "id": 2, "product": "Y", "faqs": [ { "id": 1, "question": "Q3", "answer": "A3" }, { "id": 2, "question": "Q4", "answer": "A4" } ] } ] } } Since the first part it's static , i think i could append it to the file before and after polars writes to it (Like my other question ). However, im not sure how can i work with the nested part | You could do some of the reshaping in Polars first. faq_list = ( df.group_by("product", maintain_order=True) .agg(faqs=pl.struct(pl.int_range(pl.len()).alias("id") + 1, pl.exclude("product"))) .with_row_index("id", offset=1) #.to_struct() #.to_list() ) shape: (2, 3) ┌─────┬─────────┬────────────────────────────────┐ │ id ┆ product ┆ faqs │ │ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ list[struct[3]] │ ╞═════╪═════════╪════════════════════════════════╡ │ 1 ┆ X ┆ [{1,"Q1","A1"}, {2,"Q2","A2"}] │ │ 2 ┆ Y ┆ [{1,"Q3","A3"}, {2,"Q4","A4"}] │ └─────┴─────────┴────────────────────────────────┘ With the to_struct/list uncommented: [{'id': 1, 'product': 'X', 'faqs': [{'id': 1, 'question': 'Q1', 'answer': 'A1'}, {'id': 2, 'question': 'Q2', 'answer': 'A2'}]}, {'id': 2, 'product': 'Y', 'faqs': [{'id': 1, 'question': 'Q3', 'answer': 'A3'}, {'id': 2, 'question': 'Q4', 'answer': 'A4'}]}] You could then add the static parts and pretty-print it with json.dumps print( json.dumps({ "faqByCommunity": { "id": 5, "communityName": "name", "faqList": faq_list } }, indent=4) ) You could also add the static parts with Polars if you really wanted to. print( json.dumps( (df.group_by("product", maintain_order=True) .agg( faqs = pl.struct( pl.int_range(pl.len()).alias("id") + 1, pl.exclude("product") ) ) .with_row_index("id", offset=1) .select( pl.struct( faqByCommunity = pl.struct( id = 5, communityName = pl.lit("name"), faqList = pl.struct(pl.all()).implode() ) ) ) .item() ), indent = 4 ) ) | 1 | 2 |
79,504,009 | 2025-3-12 | https://stackoverflow.com/questions/79504009/detect-coordinate-precision-in-polars-floats | I have some coordinate data; some of it high precision, some of it low precision thanks to multiple data sources and other operational realities. I want to have a column that indicates the relative precision of the coordinates. So far, what I want is to essentially count digits after the decimal; in my case more digits indicates higher precision data. In my case I usually get data like the data in the example; its either coming with five to six digits precision or just one digit. Both are useful; but we can do more analysis on higher precision data as you may imagine. This code does what I want, but it seems .... wordy, inelegant; as if I'm being paid by the line of code. Is there a simpler way to do this? import polars as pl df = pl.DataFrame( { "lat": [ 43.6425047, 43.6, 40.688966, 40.6], "lng": [-79.3861057, -79.3, -74.044438, -74.0], } ) df = (df.with_columns( pl.col("lat").cast(pl.String) .str.split_exact(".", 1) .struct.rename_fields(["lat_major", "lat_minor"]) .alias("lat_temp")) .unnest("lat_temp") .with_columns( pl.col("lat_minor") .str.len_chars() .alias("lat_precision")) .drop("lat_major", "lat_minor") .with_columns( pl.col("lng").cast(pl.String) .str.split_exact(".", 1) .struct.rename_fields(["lng_major", "lng_minor"]) .alias("lng_temp")) .unnest("lng_temp") .with_columns( pl.col("lng_minor") .str.len_chars() .alias("lng_precision")) .drop("lng_major", "lng_minor") .with_columns( pl.col("lat_precision") .add(pl.col("lng_precision")) .alias("precision")) .drop("lat_precision", "lng_precision") ) df.head() results in shape: (4, 3) ┌───────────┬────────────┬───────────┐ │ lat ┆ lng ┆ precision │ │ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ u32 │ ╞═══════════╪════════════╪═══════════╡ │ 43.642505 ┆ -79.386106 ┆ 14 │ │ 43.6 ┆ -79.3 ┆ 2 │ │ 40.688966 ┆ -74.044438 ┆ 12 │ │ 40.6 ┆ -74.0 ┆ 2 │ └───────────┴────────────┴───────────┘ later I might pull out records with precision over 5, for instance, as my source data tends to be either one decimal point precision or four+ decimal points precision per coordinate. | You can extract the minor fields directly without the need for temp columns and unnesting. df.with_columns( pl.col("lat", "lng").cast(pl.String) .str.split_exact(".", 1) .struct.field("field_1") .str.len_chars() .name.suffix("_minor") ) shape: (4, 4) ┌───────────┬────────────┬───────────┬───────────┐ │ lat ┆ lng ┆ lat_minor ┆ lng_minor │ │ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ u32 ┆ u32 │ ╞═══════════╪════════════╪═══════════╪═══════════╡ │ 43.642505 ┆ -79.386106 ┆ 7 ┆ 7 │ │ 43.6 ┆ -79.3 ┆ 1 ┆ 1 │ │ 40.688966 ┆ -74.044438 ┆ 6 ┆ 6 │ │ 40.6 ┆ -74.0 ┆ 1 ┆ 1 │ └───────────┴────────────┴───────────┴───────────┘ We're using a single pl.col("lat", "lng") call here which will go through an "expansion" step, i.e. pl.col("lat", "lng").foo().bar() is expanded into individual expressions. pl.col("lat").foo().bar(), pl.col("lng").foo().bar() pl.sum_horizontal() can be used if you just want the totals. df.with_columns( pl.sum_horizontal( pl.col("lat", "lng").cast(pl.String) .str.split_exact(".", 1) .struct.field("field_1") .str.len_chars() ) .alias("precision") ) shape: (4, 3) ┌───────────┬────────────┬───────────┐ │ lat ┆ lng ┆ precision │ │ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ u32 │ ╞═══════════╪════════════╪═══════════╡ │ 43.642505 ┆ -79.386106 ┆ 14 │ │ 43.6 ┆ -79.3 ┆ 2 │ │ 40.688966 ┆ -74.044438 ┆ 12 │ │ 40.6 ┆ -74.0 ┆ 2 │ └───────────┴────────────┴───────────┘ | 2 | 1 |
79,503,865 | 2025-3-12 | https://stackoverflow.com/questions/79503865/numpy-timedelta-highest-unit-without-loss-of-information | I have several numpy timedelta values. I want to convert them to the a format that is better to read for humans without losing information. Let's say I have td = np.timedelta64(10800000000001, 'ns'). Then I can only have it in ns because if I convert it to msor higher it will lose information. If I have td = np.timedelta64(1080000000000, 'ns') I can convert it to 3 hours without losing information. What is a good way to do this automatically? I tried it by taking the number of trailing zeros into account: import numpy as np if __name__ == "__main__": td = np.timedelta64(10800000000001, 'ns') number_of_zeros = len(str(td.item())) - len(str(td.item()).rstrip('0')) if number_of_zeros==0: print("[ns]") elif number_of_zeros<7: print("ms") elif number_of_zeros<10: print("s") elif number_of_zeros<12: print("min") else: print("h") This is probably not a very elegant way to do it (not to mention that it will be wrong when we get to minutes and higher). Any recommendations? | If I understand the problem statement, you only want to express the time in terms of one unit: the largest that retains precision using only whole numbers. import numpy as np td = np.timedelta64(10800000000000, 'ns') units = ['h', 'm', 's', 'ms', 'ns'] # Convert to each of the desired units deltas = [td.astype(f'timedelta64[{unit}]') for unit in units] # Retain only the ones that are equivalent to the original representation delta = [delta for delta in deltas if delta == td] delta[0] # extract the zeroth # np.timedelta64(3,'h') It it a little faster to divide the integer representation of the timedelta by the the number of nanoseconds in each unit and check whether the result is a whole number. # Check whether the number of nanoseconds in each unit is a whole number divisible = np.divmod(td.astype(int), ns_per_unit)[1] == 0 # Use the first unit for which this is true unit = units[np.where(divisible)[0][0]] td.astype(f'timedelta64[{unit}]') # perform the conversion But the first bit of code is probably a little easier to interpret. Depends on what your goals are. | 1 | 2 |
79,500,324 | 2025-3-11 | https://stackoverflow.com/questions/79500324/how-can-i-pass-environment-variables-to-a-custom-training-script-in-amazon-sagem | I'm training a custom model using a script in Amazon SageMaker and launching the job with the Python SDK. I want to pass some environment variables (like API keys or config flags) to the training job so they’re accessible inside the script via os.environ. Here’s a simplified version of my code: from sagemaker.estimator import Estimator estimator = Estimator( image_uri='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:latest', role=role, instance_count=1, instance_type='ml.g5.xlarge', entry_point='train.py', source_dir='src', environment={ 'MY_API_KEY': 'abcdef123456', 'DEBUG_MODE': 'true' } ) In my training script, I try to read the variable: import os api_key = os.environ.get('MY_API_KEY') print("API Key:", api_key) Is this the correct way to pass environment variables to a SageMaker training job using the Python SDK? Are there any limitations or best practices I should be aware of, especially for sensitive information like API keys? | Yes, your approach is correct. Using the environment parameter in the Estimator and accessing variables with os.environ.get() in your script is the standard way to pass environment variables in SageMaker. As @furas pointed out in their comment, os.environ.get() is the common approach in Python. That said, for handling secrets like API keys, it's better to avoid hardcoding them in your code or environment. A more secure approach is to store them in AWS Secrets Manager and fetch them inside your training script at runtime. You can pass the secret's name as an environment variable and retrieve the value securely using boto3: import boto3 import os secret_name = os.environ.get('API_KEY_SECRET_NAME') region = os.environ.get('AWS_REGION', 'us-west-2') client = boto3.client('secretsmanager', region_name=region) secret_value = client.get_secret_value(SecretId=secret_name) api_key = secret_value['SecretString'] print("API Key:", api_key) This keeps the actual secret out of your environment config and allows for better access control via IAM. | 2 | 2 |
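To connect the two halves of the answer, the estimator side then only carries the secret's name, never its value; the role ARN and secret name below are placeholders:

from sagemaker.estimator import Estimator

role = "arn:aws:iam::123456789012:role/SageMakerRole"   # placeholder

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:latest",
    role=role,
    instance_count=1,
    instance_type="ml.g5.xlarge",
    entry_point="train.py",
    source_dir="src",
    environment={
        "API_KEY_SECRET_NAME": "my-app/api-key",   # placeholder secret name
        "AWS_REGION": "us-west-2",
    },
)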
79,503,488 | 2025-3-12 | https://stackoverflow.com/questions/79503488/sympy-i-cant-solve-an-equation-symbolically-it-keeps-trying-to-give-real-solu | import sympy as sym # Python 3.13 # Used variables in question x = sym.Symbol("x") I1 = sym.Symbol("I₁") L = sym.Symbol("L") pi = sym.Symbol("Π") v2 = sym.Symbol("v2") P = sym.Symbol("P") E = sym.Symbol("E") I = I1 * (1 - x/L) # For the part a of the question one in Assignment 3. question_part = "a" if question_part == "a": func_v = (1 - sym.cos((pi*x)/(2*L))) * v2 moment = P * L second_derivative_v = sym.diff(func_v, x, x) internal_strain_e_func = 0.5 * moment *second_derivative_v / (E * I) internal_strain_e = sym.integrate(internal_strain_e_func, (x, 0, L)) external_e = P * v2 tip_deflection_eq = sym.Eq(internal_strain_e, external_e) tip_deflection = sym.solve(tip_deflection_eq, v2, exclude=[I1, L, pi, P, E], implicit=True) sym.pprint(tip_deflection) Code above is for virtual displacement method for a beam deflection question. I can technically do it by hand however, I get a lot of questions that requires for me to solve integrals and I do make mistakes when I do it by hand. So in this one I thought I would do it by sympy and its okey up until the solve() method. For some reason I can't get a symbolically written solution for eq. The eq. I am trying to solve is: Edit: I am sorry everyone v2 is not supposed be a part in function v. The question gave it to me and I took it without thinking about it although I did learn stuff that I didn't know and would probably ask later so not much of a waste of your time. Thanks! | Don't declare a symbol called pi. You should use SymPy's sympy.pi if you mean the mathematical number pi. This is your equation as stated: In [42]: x, L, v2, E, I1, P, v2 = symbols('x, L, v2, E, I1, P, v2') In [43]: eq = Eq(Integral((P*(L - x)*(pi/(2*L))**2*cos((pi*x)/(2*L)) * v2)/(E*I1*(1 - x/L)), (x, 0, L ...: )), P*v2) In [44]: eq Out[44]: L ⌠ ⎮ 2 ⎛π⋅x⎞ ⎮ π ⋅P⋅v₂⋅(L - x)⋅cos⎜───⎟ ⎮ ⎝2⋅L⎠ ⎮ ──────────────────────── dx = P⋅v₂ ⎮ 2 ⎛ x⎞ ⎮ 4⋅E⋅I₁⋅L ⋅⎜1 - ─⎟ ⎮ ⎝ L⎠ ⌡ 0 We can use doit to evaluate the integral and then solve for v2: In [45]: eq.doit() Out[45]: π⋅P⋅v₂ ────── = P⋅v₂ 2⋅E⋅I₁ In [46]: solve(eq.doit(), v2) Out[46]: [0] Apparently this is not the answer you were looking for but in general only v2 = 0 can solve the equation for arbitrary values of the parameters. Maybe you wanted to solve for something else rather than v2? This is the equation rearranged: In [47]: eq2 = eq.doit() In [48]: eq3 = Eq(factor(eq2.lhs - eq2.rhs), 0) In [49]: eq3 Out[49]: -P⋅v₂⋅(2⋅E⋅I₁ - π) ─────────────────── = 0 2⋅E⋅I₁ At least one of the factors in the numerator on the lhs must be zero to satisfy the equation. If you want a value of v2 that can satisfy this then in general it has to be v2 = 0. Maybe the equation has not been put together correctly? | 3 | 3 |
79,503,212 | 2025-3-12 | https://stackoverflow.com/questions/79503212/fast-way-to-expand-split-list-into-index-list | Given an index split list T of length M + 1, where the first element is 0 and the last element is N, generate an array D of length N such that D[T[i]:T[i+1]] = i. For example, given T = [0, 2, 5, 7], then return D = [0, 0, 1, 1, 1, 2, 2]. I'm trying to avoid a for loop, but the best I can do is: def expand_split_list(split_list): return np.concatenate( [ np.full(split_list[i + 1] - split_list[i], i) for i in range(len(split_list) - 1) ] ) Is there a built-in function for that purpose? | You could combine diff, arange, and repeat: n = np.diff(T) out = np.repeat(np.arange(len(n)), n) As a one-liner (python ≥3.8): out = np.repeat(np.arange(len(n:=np.diff(T))), n) Another option with assigning ones to an array of zeros, then cumsum: out = np.zeros(T[-1], dtype=int) out[T[1:-1]] = 1 out = np.cumsum(out) Output: array([0, 0, 1, 1, 1, 2, 2]) | 6 | 7 |
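A quick self-check tying the answer's two options back to the example from the question:

import numpy as np

T = np.array([0, 2, 5, 7])

# option 1: repeat each segment index by its length
out_repeat = np.repeat(np.arange(len(T) - 1), np.diff(T))

# option 2: mark segment starts with ones, then cumulative-sum
out_cumsum = np.zeros(T[-1], dtype=int)
out_cumsum[T[1:-1]] = 1
out_cumsum = np.cumsum(out_cumsum)

assert np.array_equal(out_repeat, [0, 0, 1, 1, 1, 2, 2])
assert np.array_equal(out_repeat, out_cumsum)
print(out_repeat)   # [0 0 1 1 1 2 2]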
79,503,035 | 2025-3-12 | https://stackoverflow.com/questions/79503035/set-priority-on-queryset-in-django | this is my product and category model: class Category(models.Model): name = models.CharField(max_length=100) class Product(models.Model): ... category = models.ForeignKey(Category, related_name="products", on_delete=models.CASCADE) I want a list of all products with priority order. e.g. categories_ids = [3,5,1,4,2] now I want data to order like this [product_with_category_3, product_with_category_3, product_with_category_3, product_with_category_5, product_with_category_1, product_with_category_1, ...] | We can determine the priority based on the category, and a Case-When expression: from django.db.models import Case, Value, When category_ids = [3, 5, 1, 4, 2] Product.objects.filter( category_id__in=category_ids ).alias( priority=Case(*[When(category_id=k, then=Value(i)) for i, k in enumerate(category_ids)]) ).order_by('priority') This will however result in linear search so if the number of categories is large, it is not a good idea. | 1 | 4 |
79,500,030 | 2025-3-11 | https://stackoverflow.com/questions/79500030/scipys-multivariate-earth-mover-distance-not-working-as-intended | I am using scipy's multivariate earth mover distance function wasserstein_distance_nd. I did a quick sanity check that confused me: Given that I draw two Gaussian multivariate samples from the same distribution, I should get an earth mover distance that is close to 0. However, I am getting something large (e.g., 12). Why is this happening? I have tested this with the one dimensional case and I also got something similar (here, the distance produced is always positive). Code that I used is given as follows: '''Multi-dimensional''' import numpy as np from scipy.stats import wasserstein_distance_nd mean = np.zeros(100) cov = np.eye(100) size = 100 sample1 = np.random.multivariate_normal(mean, cov, size) sample2 = np.random.multivariate_normal(mean, cov, size) print("EMD", wasserstein_distance_nd(sample1, sample2)) # output: EMD 12.293968193381374 '''single dimension''' import numpy as np from scipy.stats import wasserstein_distance mean = 0 var = 1 size = 100 sample1 = np.random.normal(mean, np.sqrt(var), size) sample2 = np.random.normal(mean, np.sqrt(var), size) dist = wasserstein_distance(sample1, sample2) print("wasserstein_distance", dist) | Your multivariate_normal example is in a 100-dimensional space. You can think of the reason you are getting large distances from an intuitive perspective: there is too much variety possible in a 100-dimensional space for the two samples to be very similar. Some more motivation: In high dimensional spaces, randomly drawn vectors tend to be nearly perpendicular (see e.g. https://math.stackexchange.com/questions/995623/why-are-randomly-drawn-vectors-nearly-perpendicular-in-high-dimensions). Compare this to a one-dimensional space, in which all random vectors lie on the same line. Think also about fixing the error in each coordinate to a given level and increasing the number of dimensions: the distance increases according to the square root of the number of dimensions. For instance, in one dimension, if $p_1 = 0$ and $p_2 = 1$, the distance between them is only 1. But in 100 dimensions, $p_1 = (0, ..., 0)$ and $p_2 = (1, ..., 1)$ have a distance of 10. For a rigorous explanation, seek help on Math Stack Exchange. My point, though, is just that intuition about magnitudes of distances from a 1 dimensional case just won't work well in 100 dimensions. If it helps, you can confirm that wasserstein_distance_nd agrees with wasserstein_distance in 1D. import numpy as np from scipy.stats import wasserstein_distance mean = 0 var = 1 size = 100 sample1 = np.random.normal(mean, np.sqrt(var), size) sample2 = np.random.normal(mean, np.sqrt(var), size) ref = wasserstein_distance(sample1, sample2) res = wasserstein_distance_nd(sample1[:, np.newaxis], sample2[:, np.newaxis]) print("wasserstein_distance", res, ref) # wasserstein_distance 0.2109383092226257 0.21093830922262574 | 1 | 2 |
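A quick numerical illustration of the dimension effect described in the answer (seeded RNG; exact values will vary):

import numpy as np
from scipy.stats import wasserstein_distance_nd

rng = np.random.default_rng(0)
size = 100

# Two independent samples of the SAME standard normal: the sample-based
# distance grows with the dimension instead of shrinking toward 0.
for d in (1, 10, 100):
    s1 = rng.standard_normal((size, d))
    s2 = rng.standard_normal((size, d))
    print(d, round(wasserstein_distance_nd(s1, s2), 3))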
79,501,591 | 2025-3-11 | https://stackoverflow.com/questions/79501591/plotly-displaying-numerical-values-as-counts-instead-of-its-actual-values | I am trying to plot my pandas df data onto a plotly bar chart, but for some reason, instead of putting the numerical values on the chart, its just putting the order onto it. For example, I am trying to also follow along with the docs, but it also fails on the docs' example. The code: import plotly.express as px fig = px.bar(x=["a", "b", "c"], y=[1, 3, 2]) fig.show() shows a chart of this: the bar values are the indexes of the array, rather than their actual values. I presume it might be with an issue with my install, but I am not entirely sure. Any help would be appreciated, as I haven't been able to find other solutions elsewhere. | I tested this out, and this appears to be a bug that only appears when using plotly v6.0.0 in a jupyter notebook. For now, the issue can be worked around by downgrading the version inside the notebook: %pip install plotly==5.23.0 You'll notice that if you try to plot fig = px.bar(x=["a", "b", "c"], y=["1", "3", "2"]) you get the same, undesired behavior so it's possible that when plotly is run from a jupyter notebook and px.bar is called using lists passed to x and y, the y values are being converted to strings under the hood. | 2 | 1 |
79,500,909 | 2025-3-11 | https://stackoverflow.com/questions/79500909/what-is-the-fastest-way-to-generate-all-n-bit-gray-codes-using-numpy | My goal is to create images using gray codes, an example would be this: It is all modulo 64 groups in gray codes in polar form. Now of course I know of the simple mapping n ^ (n >> 1) from binary, but I had found more efficient ways to generate gray codes directly than using said mapping. But since binary codes are related I will post code that generates binary codes as well. I want a function that generates all n-bit gray codes in the form np.zeros((1 << n, n), dtype=bool). I want it as efficient as possible, and it has to be implemented in numpy and only implemented in numpy, no other libraries are allowed. Why do I disallow other libraries? Because I have installed scipy, PIL, cv2, matplotlib, numba... all of them require different versions of numpy and updating one breaks the dependency of another, and all of them provide a number of methods, it is a huge learning curve to know how to use them well. I am currently trying to familiarize myself with numpy, so I invented this challenge to make myself learn. I have implemented a bunch of different methods, they all work correctly, I have rigorously tested them, but none of them strikes me as efficient. So far, I have found that np.unpackbits is the most efficient method to get binary bits of a number, but it only works with np.uint8, that is easy to solve, just using .view(np.uint8), but the output is in mixed endianness and that is somewhat trickier to solve. But even if I use np.unpackbits, converting it from binary to gray code is less efficient than generating gray codes directly. And according to my tests, np.concatenate(arrs) is more efficient than np.vstack(arrs), np.concatenate(arrs, axis=-1) beats np.hstack(arrs), and np.concatenate(arrs).reshape((w, h)).T beats np.dstack(arrs). And somehow initializing an array and then broadcasting to individual columns using a loop can be more efficient than using np.concatenate. And using numpy broadcasting to get a & b column-wise in which a is a 1d array and b is a 1d array to get binary decomposition of a can be much less efficient than just looping through the columns and apply & column by column. In particular, (a & b[:, None]).T.astype(bool) is much more efficient than (a[:, None] & b).astype(bool). 
Code import numpy as np lo = 1 hi = 8 UINT_BITS = {} for dtype in (np.uint8, np.uint16, np.uint32, np.uint64): for i in range(lo, hi + 1): UINT_BITS[i] = dtype lo = hi + 1 hi <<= 1 def get_dtype(n: int) -> np.uint8 | np.uint16 | np.uint32 | np.uint64: if dtype := UINT_BITS.get(n): return dtype raise ValueError(f"Argument {n} is not a valid bit width") def validate(n: int) -> None: if not (n and isinstance(n, int)): raise ValueError(f"Argument {n} is not a valid bit width") def binary_codes_0(n: int) -> np.ndarray: validate(n) count = 1 << n rect = np.zeros((count, n), dtype=bool) r = 1 for i in range(n - 1, -1, -1): count >>= 1 rect[:, i] = np.tile( np.concatenate([np.zeros(r, dtype=bool), np.ones(r, dtype=bool)]), count ) r <<= 1 return rect def binary_codes_1(n: int) -> np.ndarray: validate(n) r = total = 1 << n return ( np.concatenate( [ np.tile( np.concatenate( [np.zeros((r := r >> 1), dtype=bool), np.ones(r, dtype=bool)] ), 1 << i, ) for i in range(n) ] ) .reshape((n, total)) .T ) def binary_codes_2(n: int) -> np.ndarray: validate(n) chunks = np.array([(0,), (1,)], dtype=bool) l = 2 for _ in range(n - 1): chunks = np.concatenate( [ np.concatenate([np.zeros((l, 1), dtype=bool), chunks], axis=-1), np.concatenate([np.ones((l, 1), dtype=bool), chunks], axis=-1), ] ) l <<= 1 return chunks def binary_codes_3(n: int) -> np.ndarray: validate(n) rect = np.zeros([2] * n + [n], dtype=bool) for i, a in enumerate(np.ix_(*[(0, 1)] * n)): rect[..., i] = a return rect.reshape(-1, n) def binary_codes_4(n: int) -> np.ndarray: numbers = np.arange(1 << n, dtype=get_dtype(n)) return ( np.concatenate([(numbers & 1 << i).astype(bool) for i in range(n - 1, -1, -1)]) .reshape(n, 1 << n) .T ) def binary_codes_5(n: int) -> np.ndarray: numbers = np.arange((count := 1 << n), dtype=get_dtype(n)) result = np.zeros((count, n), dtype=bool) mask = count for i in range(n): result[:, i] = numbers & (mask := mask >> 1) return result def binary_codes_6(n: int) -> np.ndarray: return np.unpackbits( np.arange(1 << n, dtype=get_dtype(n))[:, None].view(np.uint8), axis=1, bitorder="little", count=n, )[:, ::-1] def binary_codes_7(n: int) -> np.ndarray: validate(n) return np.array(np.meshgrid(*[(0, 1)] * n, indexing="ij")).reshape((n, 1 << n)).T def gray_codes_0(n: int) -> np.ndarray: numbers = np.arange((count := 1 << n), dtype=get_dtype(n)) gray = numbers ^ (numbers >> 1) return ( np.concatenate([(gray & 1 << i).astype(bool) for i in range(n - 1, -1, -1)]) .reshape((n, count)) .T ) def gray_codes_1(n: int) -> np.ndarray: numbers = np.arange((count := 1 << n), dtype=get_dtype(n)) gray = numbers ^ (numbers >> 1) result = np.zeros((count, n), dtype=bool) for i in range(n): result[:, i] = gray & (count := count >> 1) return result def gray_codes_2(n: int) -> np.ndarray: validate(n) binary = binary_codes_6(n) shifted = np.roll(binary, 1, axis=-1) shifted[:, 0] = 0 return binary ^ shifted def gray_codes_3(n: int) -> np.ndarray: validate(n) gray = np.array([(0,), (1,)], dtype=bool) l = 2 for _ in range(n - 1): gray = np.concatenate( [ np.concatenate([np.zeros((l, 1), dtype=bool), gray], axis=-1), np.concatenate([np.ones((l, 1), dtype=bool), gray[::-1]], axis=-1), ] ) l <<= 1 return gray Testing import numpy as np zeros = np.zeros(524288, dtype=bool) ones = np.ones(524288, dtype=bool) zeros1 = np.zeros((524288, 32), dtype=bool) ones1 = np.ones((524288, 32), dtype=bool) million = [list(range(i*4096, i*4096+4096)) for i in range(256)] numbers = np.arange(1 << 16, dtype=np.uint64) mask = np.array([1 << i for i in range(15, -1, -1)], 
dtype=np.uint64) In [3]: %timeit (numbers & mask[:, None]).T.astype(bool) 4.1 ms ± 97.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [4]: %timeit (numbers[:, None] & mask).astype(bool) 6.1 ms ± 423 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [5]: %timeit binary_codes_5(16) 2.02 ms ± 19.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [6]: %timeit binary_codes_4(16) 2.32 ms ± 27.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [7]: %timeit np.hstack([zeros, ones]) 312 μs ± 12.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [8]: %timeit np.concatenate([zeros, ones]) 307 μs ± 9.97 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [9]: %timeit np.vstack([zeros, ones]) 315 μs ± 11.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [10]: %timeit np.hstack([zeros1, ones1]) 19.8 ms ± 800 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [11]: %timeit np.concatenate([zeros1, ones1], axis=-1) 18.1 ms ± 265 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [12]: %timeit np.concatenate([zeros1, ones1]) 9.73 ms ± 413 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [13]: %timeit np.vstack([zeros1, ones1]) 10.3 ms ± 229 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [14]: %timeit np.dstack(million)[0] 78.7 ms ± 973 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [15]: %timeit np.concatenate(million).reshape((256, 4096)).T 69.9 ms ± 251 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [16]: %timeit binary_codes_0(16) 2.32 ms ± 18 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [17]: %timeit binary_codes_1(16) 6.37 ms ± 182 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [18]: %timeit binary_codes_2(16) 1.46 ms ± 28 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [19]: %timeit binary_codes_3(16) 1.64 ms ± 29.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [20]: %timeit binary_codes_6(16) 1.12 ms ± 9.71 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [21]: %timeit gray_codes_0(16) 2.12 ms ± 25.1 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [22]: %timeit gray_codes_1(16) 2.17 ms ± 29 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [23]: %timeit gray_codes_2(16) 4.51 ms ± 151 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [24]: %timeit gray_codes_3(16) 1.46 ms ± 19.7 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Is there a more efficient way to generate all n-bit gray codes? I have figured out how to use np.meshgrid to do Cartesian product, and it is much slower than expected. I have edited the code above to include it. In [82]: %timeit binary_codes_7(16) 6.96 ms ± 249 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [83]: %timeit binary_codes_5(16) 1.74 ms ± 36.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [84]: %timeit binary_codes_3(16) 1.65 ms ± 15.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [85]: %timeit np.meshgrid(*[(0, 1)] * 16, indexing="ij") 4.33 ms ± 49.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [86]: np.all(np.array(np.meshgrid(*[(0, 1)] * 5, indexing="ij")).reshape((5, 32)).T == binary_codes_3(5)) Out[86]: np.True_ Now I have implemented everything I can think of. At this point I have realized this problem is extremely simple, we don't even need any bit operations at all. 
In both binary and gray codes, each cell can only have two values, zero and one. It can only be one or zero. Now if we have np.zeros((1 << n, n), dtype=bool) our job is half way done. Exactly half of the cells have the correct value: zero. We just have to flip the ones. If we look at the sequences row-wise, there isn't much we can do; but if we look at the columns, it just repeats. There are groups of ones with equal length separated by groups of zeros with the same length. We can just create a 1d array as a binary mask to flip everything on for each column except those gaps. Job done. The question is, how? The rightmost column in binary is straightforward, just do arr[:, -1][1::2] = 1. But what about the second last column? It needs to be (0, 0, 1, 1) repeat, in other words every other pair of cells are ones, I know the indices of the start and end points, it needs to be on in [range(2, 4), range(6, 8), range(10, 12)...] but what is the simplest way to tell the computer to flip those cells? And the third last column, the bands of ones are [range(4, 8), range(12, 16), range(20, 24)...], how do I flip those cells? Surprisingly I haven't found a good answer, or perhaps unsurprising, since Google is useless, but I did find this: Indexing in NumPy: Access every other group of values. And no, this is not a duplicate, because doing reshape then ravel for each column would be terribly inefficient, and that doesn't create a boolean mask for indexing the array, it creates a smaller array... Currently I can do this: arr = np.zeros((16, 4), dtype=bool) l = 1 for i in (3, 2, 1, 0): l2 = l * 2 for a, b in zip(range(l, 16, l2), range(l2,17,l2)): arr[:, i][a:b] = 1 l = l2 But this supposedly is slow (I haven't benchmarked this), however if this is implemented in numpy then I think this would be the most efficient algorithm for this type of sequences. The question is, how to implement this? | This seems 3-4 times faster than your fastest gray_codes. Fills the array by reverse-copying blocks and setting streaks of 1s. def g(n): a = np.zeros((n, 2**n), dtype=bool) m, M = 1, 2 while n: a[n:, m:M] = a[n:, m-1::-1] n -= 1 a[n, m:M] = 1 m, M = M, 2*M return a.T Attempt This Online! | 5 | 1 |
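Editorial note: a quick way to sanity-check the accepted g(n) against the classic k ^ (k >> 1) mapping mentioned in the question is sketched below; the reference implementation is my own addition, not part of the original post.
import numpy as np

def g(n):
    # copy of the answer's generator, repeated so this check is self-contained
    a = np.zeros((n, 2**n), dtype=bool)
    m, M = 1, 2
    while n:
        a[n:, m:M] = a[n:, m-1::-1]
        n -= 1
        a[n, m:M] = 1
        m, M = M, 2*M
    return a.T

def gray_reference(n):
    # reference via the standard mapping k ^ (k >> 1), unpacked MSB-first
    numbers = np.arange(1 << n)
    gray = numbers ^ (numbers >> 1)
    return ((gray[:, None] >> np.arange(n - 1, -1, -1)) & 1).astype(bool)

for n in range(1, 12):
    assert np.array_equal(g(n), gray_reference(n))
print("g(n) matches k ^ (k >> 1) for n = 1..11")
The check only covers small n; for the speed claim, timing g(19) against the question's gray_codes_3(19) is the relevant comparison.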
79,500,718 | 2025-3-11 | https://stackoverflow.com/questions/79500718/how-to-include-first-matching-pattern-as-a-column | I have a dataframe df. >>> import polars as pl >>> >>> >>> df = pl.DataFrame({"col": ["row1", "row2", "row3"]}) >>> df shape: (3, 1) ┌──────┐ │ col │ │ --- │ │ str │ ╞══════╡ │ row1 │ │ row2 │ │ row3 │ └──────┘ Now I want to create a new column new. It should be the first matched pattern in the col. For example, For the pattern 1|2 it should produce the following output. ┌──────┬───────┐ │ col ┆ new │ │ --- ┆ --- │ │ str ┆ str │ ╞══════╪═══════╡ │ row1 ┆ 1 │ │ row2 ┆ 2 │ │ row3 ┆ null │ └──────┴───────┘ I tried using with the expression API, but it's returning boolean values. >>> df.with_columns(new=pl.col('col').str.contains("1|2")) shape: (3, 2) ┌──────┬───────┐ │ col ┆ new │ │ --- ┆ --- │ │ str ┆ bool │ ╞══════╪═══════╡ │ row1 ┆ true │ │ row2 ┆ true │ │ row3 ┆ false │ └──────┴───────┘ | You should modify your pattern to have a capturing group (...) and use Expr.str.extract: df.with_columns(new=pl.col('col').str.extract("(1|2)")) Output: ┌──────┬──────┐ │ col ┆ new │ │ --- ┆ --- │ │ str ┆ str │ ╞══════╪══════╡ │ row1 ┆ 1 │ │ row2 ┆ 2 │ │ row3 ┆ null │ └──────┴──────┘ | 3 | 1 |
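Editorial note: if the extracted digit should be numeric rather than a string, a cast can be chained on. This is a small extension of the accepted answer, not something the original post shows.
import polars as pl

df = pl.DataFrame({"col": ["row1", "row2", "row3"]})
# rows without a match keep a null in "new"
out = df.with_columns(new=pl.col("col").str.extract("(1|2)").cast(pl.Int64))
print(out)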
79,499,646 | 2025-3-11 | https://stackoverflow.com/questions/79499646/how-can-i-replace-multiple-items-in-a-string-using-a-dictionary-when-the-matched | Given this dictionary and input strings: d = { 'one': '1', 'two': '2', 'three': '3' } s1 = 'String containing <#one|> or <#two|> numbers. <#one|>, <#two|>, <#three|>' s2 = 'Only replace items which are anchored. <#one|> is replaced, but not this one.' How can I replace each occurrence of an anchored item <# |> using the dictionary d? The above strings should produce the output: String containing 1 or 2 numbers. 1, 2, 3 Only replace items which are anchored. 1 is replaced, but not this one. Using single pass multi replacement described here comes close to solving this, but doesn't handle the anchors. | The regular expression pattern should be <#(\w+)\|?>. The \|? part makes the trailing | optional because the quantifier ? (Zero or One) matches zero or one occurrence of the preceding element. I used the re.sub(pattern, repl, string) function for substituting substrings that match the pattern with a replacement function. For each match found, the replacement function will be called, and its return value will be used as the replacement. import re d = { 'one': '1', 'two': '2', 'three': '3' } def replace_items(text): pattern = r"<#(\w+)\|?>" def replacement(match): # This function will be called for each match key = match.group(1) return d.get(key, match.group(0)) # Return value if key exists, otherwise return original match return re.sub(pattern, replacement, text) s1 = 'String containing <#one|> or <#two|> numbers. <#one|>, <#two|>, <#three>' s2 = 'Only replace items which are anchored. <#one|> is replaced, but not this one.' s3 = 'Number <#four> is not in the dictionary' print(replace_items(s1)) print(replace_items(s2)) print(replace_items(s3)) Output String containing 1 or 2 numbers. 1, 2, 3 Only replace items which are anchored. 1 is replaced, but not this one. Number <#four> is not in the dictionary | 2 | 4 |
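Editorial note: one possible refinement is to build the alternation from the dictionary keys so the pattern and the mapping cannot drift apart. This is my own sketch, not part of the accepted answer.
import re

d = {'one': '1', 'two': '2', 'three': '3'}

# re.escape guards against keys that contain regex metacharacters
pattern = re.compile(r"<#(" + "|".join(map(re.escape, d)) + r")\|?>")

def replace_items(text):
    # every captured key is guaranteed to be in d, so no fallback is needed
    return pattern.sub(lambda m: d[m.group(1)], text)

print(replace_items('String containing <#one|> or <#two|> numbers. <#three|>'))
# String containing 1 or 2 numbers. 3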
79,523,584 | 2025-3-20 | https://stackoverflow.com/questions/79523584/use-a-single-pyproject-toml-for-poetry-uv-dev-dependencies | I'm trying to let people on my project (including myself) migrate to uv while maintaining compatibility with people who still want to use poetry (including some of our builds). Now that poetry 2.0 has been released, and the more standard [project] fields are supported, this should be possible. It's working well in almost all ways, but the main glitch I'm hitting is the set of dev dependencies. uv and the standard seem to want them as [dependency-groups] dev = [ "bandit ~= 1.8.3", ... ] while poetry seems to want them as [tool.poetry.group.dev.dependencies] bandit = "^1.8.3" ... . Without including that section, the dev dependencies get left out of the lockfile when doing poetry lock. Anyone have any techniques for getting the two tools to play nicely in this situation? | Until poetry adds support for PEP-735, you can (ab)use the project.optional-dependencies to achieve something similar: [project.optional-dependencies] dev = ["bandit ~= 1.8.3"] Since this is a standard table, it can be used by both poetry and uv: $ poetry sync --extras dev $ poetry sync --all-extras $ uv sync --extra dev $ uv sync --all--extras The caveat here is that, as the name implies, these optional dependencies are still part of the project metadata and as such they'll be part of the published metadata on PyPI and users will be able to install them with $tool install foobar[dev]. | 2 | 4 |
79,532,232 | 2025-3-24 | https://stackoverflow.com/questions/79532232/how-can-i-download-multiple-files-using-urllib2-in-a-single-http-request-instead | I am using tornado to host a server that serves as the backend for a client that needs to be running Jython 2.5.2 (thus urllib2). I have a function that downloads files, which up until now has only downloaded text files. I need to add non-text files and download them quickly. To download text files in a timely fashion, I concatenate them into a single string and send them as plain text to the client. class DownloadHandler(web.RequestHandler): def get(self): files_as_text = "" for file in os.listdir("files"): files_as_text += file+"---title_split---"+open("files/"+file).read()+"---file_split---" self.write(files_as_text) Then, on the other side, I can build them back into files. for uptodate_file in uptodate_files.split("---file_split---")[:-1]: uptodate_filename, uptodate_file_contents = uptodate_file.split("---title_split---") new_file = open(dir+uptodate_filename, 'wb') new_file.write(uptodate_file_contents) new_file.close() This worked great for text files, but when I add any other file into the mix, it no longer works. So, I served each file separately and made individual requests using urllib2.urlopen(url+uptodate_filename).read() This works, but it is really slow. Is there some way to combine the two and send concatenated files that are not text? | Turns out you can just stick a b in front of each string to convert it to bytes and everything works great! class DownloadHandler(web.RequestHandler): def get(self): files_as_text = b"" for file in os.listdir("files"): files_as_text += file.encode("utf-8")+b"---title_split---"+open("files/"+file, "rb").read()+b"---file_split---" self.write(files_as_text) | 2 | 0
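Editorial note: the client side also has to stay byte-oriented once the server concatenates binary data. The sketch below is an assumption of how the receiving loop could look on the Jython 2.5 client (where str is already a byte string); url and dir are the names used in the question and are not defined here.
import urllib2

payload = urllib2.urlopen(url).read()               # raw bytes of the whole bundle
for entry in payload.split("---file_split---")[:-1]:
    name, contents = entry.split("---title_split---", 1)
    out = open(dir + name, "wb")                    # binary mode, no decoding
    out.write(contents)
    out.close()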
79,525,802 | 2025-3-21 | https://stackoverflow.com/questions/79525802/plotly-bar-conditional-dual-axis-python | import plotly.graph_objects as go fig=go.Figure() cols=['NO2 mug/m^3','NO mug/m^3'] for idx,col in enumerate(cols): for df in [ag,snd]: fig.add_trace(go.Bar(x=df['Date'],y=df[col], name=f'{str(df.Municipality.unique()[0])} {col}')) #fig.add_trace(go.Bar(x=ag['Date'],y=ag['NO mug/m^3'])) fig.update_layout(barmode='group',xaxis=dict(title='Date')) # print(df) fig In this snippet i take two columns (which are common in both dataframes) and use a bar chart which give me the following result in this photo we get 4 distinct bars that do not overlay. what i want is to keep this format but to also add a second y axis. But because of the dynamic nature that this format will be used i want it to be versatile. When lets say we have one column ( NO mug/m^3) to not have the second axis. i tried something like this fig = go.Figure() cols = ['NO2 mug/m^3', 'NO mug/m^3'] # Selected gases dfs = [ag, snd] # List of dataframes for df_idx, df in enumerate(dfs): # Iterate over DataFrames (locations) for col_idx, col in enumerate(cols): # Iterate over gases fig.add_trace(go.Bar( x=df['Date'], y=df[col], name=f'{df.Municipality.unique()[0]} - {col}', offsetgroup=str(df_idx), # Group bars per location marker=dict(opacity=0.8), yaxis="y" if col_idx == 0 else "y2" # Assign second gas to secondary y-axis )) # Layout adjustments layout_args = { "barmode": "group", # Ensures bars are placed side-by-side "xaxis": {"title": "Date"}, "legend_title": "Location - Gas" } if len(cols) == 1: layout_args["yaxis"] = {"title": cols[0]} # Single Y-axis case else: layout_args["yaxis"] = {"title": cols[0]} layout_args["yaxis2"] = { "title": cols[1], "overlaying": "y", # Overlay on primary y-axis "side": "right", "showgrid": False } fig.update_layout(**layout_args) fig.show() From what i tried i got something like the following which is not desireable. is there any way to keep the format of my first image(4 distinct non overlaying bars per x point) while using some condition in order to achive my second axis? thank for your patience | This is because you are using df_idx to offset the bars. df_idx will only be 0 or 1. When multiple data sets (traces) share a common x-axis (for vertical bars) or y-axis (for horizontal bars), the offsetgroup property allows you to align bars at the same position. Assigning the different offsetgroup value to these traces instructs Plotly to treat them as a different groups for offsetting, ensuring bars with matching coordinates don't overlay. You need to create another variable for offsetting the positions. 
Example: fig = go.Figure() cols = ['NO2 mug/m^3', 'NO mug/m^3'] # Selected gases dfs = [ag, snd] # List of dataframes offsetgroup = 0 # new variable for offsetting position for col_idx, col in enumerate(cols): # Iterate over gases for df_idx, df in enumerate(dfs): # Iterate over DataFrames (locations) fig.add_trace(go.Bar( x=df['Date'], y=df[col], name=f'{df.Municipality.unique()[0]} - {col}', offsetgroup=str(offsetgroup), # Group bars per location marker=dict(opacity=0.8), yaxis="y" if col_idx == 0 else "y2" # Assign second gas to secondary y-axis )) offsetgroup += 1 # Layout adjustments layout_args = { "barmode": "group", # Ensures bars are placed side-by-side "xaxis": {"title": "Date"}, "legend_title": "Location - Gas" } if len(cols) == 1: layout_args["yaxis"] = {"title": cols[0]} # Single Y-axis case else: layout_args["yaxis"] = {"title": cols[0]} layout_args["yaxis2"] = { "title": cols[1], "overlaying": "y", # Overlay on primary y-axis "side": "right", "showgrid": False } fig.update_layout(**layout_args) fig.show() | 3 | 3 |
79,531,378 | 2025-3-24 | https://stackoverflow.com/questions/79531378/dynamic-libraries-not-found-when-launching-a-python-script-using-slurm | I have difficulties setting my environment variables when launching a python script using Slurm. The following Python script imports petsc4py without error when launched from a shell session: from petsc4py import PETSc print("petsc4py imported successfully") However, when I try to run it using srun with the following sbatch script (first setting the same environment by sourcing the same python env and loading the same modules): #!/bin/bash #SBATCH --job-name=petsc4py_import_test #SBATCH --ntasks=2 #SBATCH --cpus-per-task=8 #SBATCH --time=00:30:00 #SBATCH --output=output.log #SBATCH --error=error.log #SBATCH --nodes=1 source $HOME/.bash_profile module purge module load python/3.10.0 openmpi gcc openblas/latest slurm source $HOME/my_env/bin/activate srun --export=ALL --mpi=pmix_v3 python3 petsc4py_test.py It fails with: ImportError: liblapack.so.3: cannot open shared object file: No such file or directory srun: error: node001: tasks 0-1: Exited with exit code 1 | This could be because liblapack is available as an operating system package on the node where the shell session is run and not available on the node on which the job runs. You might have to load the lapack module if one exists: module load python/3.10.0 openmpi gcc openblas/latest lapack/latest slurm | 2 | 1
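Editorial note: before changing the module list, it can help to confirm from inside the allocation which libraries actually resolve on the compute node. The snippet below is a hypothetical diagnostic (the library sonames are assumptions) that could be run via srun in place of the real script.
import ctypes
import ctypes.util

for name in ("liblapack.so.3", "libopenblas.so.0"):
    try:
        ctypes.CDLL(name)
        print(name, "loads fine")
    except OSError as err:
        print(name, "NOT found:", err)

print("find_library('lapack') ->", ctypes.util.find_library("lapack"))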
79,528,159 | 2025-3-22 | https://stackoverflow.com/questions/79528159/mandelbrot-set-coloring-error-around-period-2-bulb-not-colormap-related | I wrote some code to render the Mandelbrot set with continuous coloring, but the bulbs on the period-2 blub are not colored correctly. regions that should not escape are colored as though the escape after 100 iterations, which happens to be my iteration limit. What I tried 1. Increasing resolution and max iteration count 2. Increasing precision (np.float32 -> np.float64). Unfortunately, I could not finish execution because of the time complexity. import numpy as np import matplotlib.pyplot as plt def mandelbrot(cmax, width, height, maxiter): real = np.linspace(-2, 0.5, width, dtype=np.float32) imag = np.linspace(0, cmax.imag, height // 2 + 1, dtype=np.float32) c_real, c_imag = np.meshgrid(real, imag) output = np.zeros((height // 2 + 1, width), dtype=np.uint16) z_real = np.zeros_like(c_real) z_imag = np.zeros_like(c_imag) mask = np.ones_like(c_real, dtype=bool) p = np.sqrt((c_real - 0.25)**2 + c_imag**2) cardioid = c_real < p - 2 * p**2 + 0.25 period2_bulb = (c_real + 1)**2 + c_imag**2 < 0.0625 mask[cardioid | period2_bulb] = False output[~mask] = maxiter - 1 epsilon = 1e-10 for i in range(maxiter): zr2 = z_real[mask] ** 2 zi2 = z_imag[mask] ** 2 z_imag_new = 2 * z_real[mask] * z_imag[mask] + c_imag[mask] z_real[mask] = zr2 - zi2 + c_real[mask] z_imag[mask] = z_imag_new diverged = zr2 + zi2 >= 4.0 abs_z = zr2 + zi2 + epsilon output[mask] = i + 1 - np.log(np.log(abs_z)) / np.log(2) mask[mask] = ~diverged output[output == maxiter - 1] = 0 full_output = np.vstack([np.flipud(output), np.flipud(output[-2::-1, :])]) return full_output width, height, maxiter = 12000, 9000, 100 cmax = complex(0, 1) mandelbrot_set = mandelbrot(cmax, width, height, maxiter) plt.figure(figsize=(16, 12)) plt.imshow(mandelbrot_set, extent=[-2, 0.5, -1, 1], cmap="inferno", interpolation="bilinear") plt.colorbar(label="Iterations to Divergence") plt.title("Mandelbrot Set") plt.xlabel("Re(c)") plt.ylabel("Im(c)") plt.show() | The updating of the output and the mask inside the loop (iterations) is incorrect. We need to focus only on the points inside mask and escaped in current iteration. 
def mandelbrot(cmax, width, height, maxiter): real = np.linspace(-2, 0.5, width, dtype=np.float32) imag = np.linspace(0, cmax.imag, height // 2 + 1, dtype=np.float32) c_real, c_imag = np.meshgrid(real, imag) output = np.zeros((height // 2 + 1, width), dtype=np.uint16) z_real = np.zeros_like(c_real) z_imag = np.zeros_like(c_imag) mask = np.ones_like(c_real, dtype=bool) #p = np.sqrt((c_real - 0.25)**2 + c_imag**2) #cardioid = c_real < p - 2 * p**2 + 0.25 q = (c_real - 0.25)**2 + c_imag**2 cardioid = q*(q + c_real - 1/4) <= 1/4*c_imag**2 period2_bulb = (c_real + 1)**2 + c_imag**2 < 1/16 #0.0625 mask[cardioid | period2_bulb] = False # these points never escape epsilon = 1e-10 for i in range(maxiter): z_imag_new = 2 * z_real[mask] * z_imag[mask] + c_imag[mask] z_real[mask] = z_real[mask]**2 - z_imag[mask]**2 + c_real[mask] z_imag[mask] = z_imag_new diverged = z_real**2 + z_imag**2 >= 4.0 # escaped in this iteration abs_z = z_real[mask & diverged]**2 + z_imag[mask & diverged]**2 + epsilon output[mask & diverged] = i + 1 - np.log(np.log(abs_z)) / np.log(2) # only the points inside mask that escaped in the current iteration are need to be updated in the output mask[mask & diverged] = False # only the escaped points inside the mask are no longer needed to be considered #output[output == maxiter - 1] = 0 full_output = np.vstack([np.flipud(output), np.flipud(output[-2::-1, :])]) return full_output Although this may not be a very efficient implementation and for more efficient implementations one could use numba jit , as suggested here. | 2 | 3 |
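Editorial note: as a rough illustration of the numba route mentioned at the end of the answer, a per-pixel escape-time kernel could look like the sketch below. It reuses the smoothed-iteration formula from the question but is otherwise my own simplification; the result can be shown with plt.imshow as in the question.
import numpy as np
import numba as nb

@nb.njit(parallel=True)
def mandelbrot_numba(re_vals, im_vals, maxiter):
    out = np.zeros((im_vals.size, re_vals.size))
    for i in nb.prange(im_vals.size):
        for j in range(re_vals.size):
            c = complex(re_vals[j], im_vals[i])
            z = 0j
            for k in range(maxiter):
                z = z * z + c
                mag2 = z.real * z.real + z.imag * z.imag
                if mag2 >= 4.0:
                    # same smoothing as the question; interior points stay 0
                    out[i, j] = k + 1 - np.log(np.log(mag2)) / np.log(2.0)
                    break
    return out

re_vals = np.linspace(-2, 0.5, 1200)
im_vals = np.linspace(-1, 1, 900)
img = mandelbrot_numba(re_vals, im_vals, 100)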
79,530,792 | 2025-3-24 | https://stackoverflow.com/questions/79530792/typeerror-in-fastapi-when-using-apiroute-with-a-router | I have a FastAPI backend and i'm trying to implement an APIRoute class to handle logic before and after the request (logging). # app.py from fastapi import FastAPI, APIRouter from backend.app.routers import questions API_VERSION = "v1" app = FastAPI(lifespan=lifespan) api_v1_router = APIRouter(prefix=f"/{API_VERSION}") api_v1_router.include_router(questions.router) app.include_router(api_v1_router) if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=80) # questions.py from typing import Callable from fastapi import APIRouter, Request, Response class LoggingAPIRoute(APIRoute): @staticmethod async def access_log_req(request: Request) -> None: print("request...") @staticmethod async def access_log_res(request: Request, response: Response) -> None: print("response...") async def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: await self.access_log_req(request=request) try: response = await original_route_handler(request) await self.access_log_res(request=request, response=response) return response except Exception as e: return "oh snap!" return custom_route_handler router = APIRouter(route_class=LoggingAPIRoute) @router.post("/retrieve", response_class=JSONResponse) async def retrieve(request: Request, auth_result: str = Security(auth.verify)): pass Calling the v1/retrieve with curl causes an exception to be thrown: TypeError: the first argument must be callable. When removing LoggingAPIRoute usage from the router, the problem is solved. Any ideas? curl localhost:80/v1/retrieve -X POST INFO: 127.0.0.1:52860 - "POST /v1/retrieve HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File "/Users/boss/foo/.venv/lib/python3.13/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ self.scope, self.receive, self.send ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/boss/foo/.venv/lib/python3.13/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/boss/foo/.venv/lib/python3.13/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/middleware/errors.py", line 187, in __call__ raise exc File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/middleware/errors.py", line 165, in __call__ await self.app(scope, receive, _send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/middleware/cors.py", line 85, in __call__ await self.app(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/middleware/exceptions.py", line 62, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app raise exc File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/_exception_handler.py", line 42, in 
wrapped_app await app(scope, receive, sender) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/routing.py", line 714, in __call__ await self.middleware_stack(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/routing.py", line 734, in app await route.handle(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/routing.py", line 288, in handle await self.app(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/routing.py", line 76, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app raise exc File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app await app(scope, receive, sender) File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/routing.py", line 73, in app response = await f(request) ^^^^^^^^^^^^^^^^ File "/Users/boss/foo/.venv/lib/python3.13/site-packages/starlette/concurrency.py", line 36, in run_in_threadpool func = functools.partial(func, *args, **kwargs) TypeError: the first argument must be callable | The error was caused by defining the get_route_handler method of LoggingAPIRoute as a coroutine function (async). Removing async before the function definition solved it. class LoggingAPIRoute(APIRoute): @staticmethod async def access_log_req(request: Request) -> None: print("request...") @staticmethod async def access_log_res(request: Request, response: Response) -> None: print("response...") def get_route_handler(self) -> Callable: # <------------ This is now not async original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: await self.access_log_req(request=request) try: response = await original_route_handler(request) await self.access_log_res(request=request, response=response) return response except Exception as e: return "oh snap!" return custom_route_handler | 2 | 0 |
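Editorial note: with get_route_handler defined as a plain (non-async) method, the same hook can carry other per-request logic. A small sketch, assuming a timing log is wanted; the class name here is made up.
import time
from typing import Callable
from fastapi import Request, Response
from fastapi.routing import APIRoute

class TimingAPIRoute(APIRoute):
    def get_route_handler(self) -> Callable:      # plain method, not async
        original_route_handler = super().get_route_handler()

        async def custom_route_handler(request: Request) -> Response:
            start = time.perf_counter()
            response = await original_route_handler(request)
            elapsed = time.perf_counter() - start
            print(f"{request.method} {request.url.path} took {elapsed:.3f}s")
            return response

        return custom_route_handler

# router = APIRouter(route_class=TimingAPIRoute)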
79,531,664 | 2025-3-24 | https://stackoverflow.com/questions/79531664/what-is-the-fastest-way-to-generate-alternating-boolean-sequences-in-numpy | I want to create a 1D array of length n, every element in the array can be either 0 or 1. Now I want the array to contain alternating runs of 0s and 1s, every full run has the same length as every other run. Every run of 1s is followed by a run of 0s of the same length and vice versa, the gaps are periodic. For example, if we start with five 0s, the next run is guaranteed to be five 1s, and then five 0s, and so on. To better illustrate what I mean: In [65]: (np.arange(100) // 5) & 1 Out[65]: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) The runs can also be shifted, meaning we don't have a full run at the start: In [66]: ((np.arange(100) - 3) // 7) & 1 Out[66]: array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]) Now as you can see I have found a way to do it, in fact I have found three ways, but all are flawed. The above works with shifted runs but is slow, one other is faster but it doesn't allow shifts. In [82]: %timeit (np.arange(524288) // 16) & 1 6.45 ms ± 2.67 ms per loop (mean ± std. dev. of 7 runs, 100 loops each) In [83]: range1 = np.arange(524288) In [84]: %timeit (range1 // 16) & 1 3.14 ms ± 201 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [85]: %timeit np.tile(np.concatenate([np.zeros(16, dtype=np.uint8), np.ones(16, dtype=np.uint8)]), 16384) 81.6 μs ± 843 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [86]: %timeit np.repeat([0, 1], 262144).reshape(32, 16384).T.flatten() 5.42 ms ± 74.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [87]: np.array_equal((range1 // 16) & 1, np.tile(np.concatenate([np.zeros(16, dtype=np.uint8), np.ones(16, dtype=np.uint8)]), 16384)) Out[87]: True In [88]: np.array_equal(np.repeat([0, 1], 262144).reshape(32, 16384).T.flatten(), np.tile(np.concatenate([np.zeros(16, dtype=np.uint8), np.ones(16, dtype=np.uint8)]), 16384)) Out[88]: True Is there a way faster than np.tile based solution that also allows shifts? I have made the code blocks output the same result for fair comparison, and for completeness I have added another inefficient method. Another method: In [89]: arr = np.zeros(524288, dtype=np.uint8) In [90]: %timeit arr = np.zeros(524288, dtype=np.uint8) 19.9 μs ± 156 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [91]: arr[262144:] = 1 In [92]: %timeit arr[262144:] = 1 9.91 μs ± 52 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [93]: %timeit arr.reshape(32, 16384).T.flatten() 932 μs ± 11.7 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [94]: %timeit arr.reshape(32, 16384).T 406 ns ± 1.81 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [95]: %timeit list(arr.reshape(32, 16384).T.flat) 24.7 ms ± 242 μs per loop (mean ± std. dev. 
of 7 runs, 10 loops each) As you can see, np.repeat is extremely slow, creating the array and assigning 1 to half the values is very fast, and arr.reshape.T is extremely fast, but arr.flatten() is very slow. | Since you are certainly on Windows (which is not mentioned in this question and critical for performance) and you likely use PIP packages, your Numpy implementation has been compiled with MSVC which does not use SIMD units of CPU for many functions, including np.tile. This means your Numpy package it pretty inefficient. One simple solution is not to use such a package, but a build of Numpy using Clang (or not to use Windows) as previously mentioned in a past answer. Most functions should then be automagically faster with no effort. Another solution is to use a trick (SLP-vectorization) to bypass this fundamental limitation. The idea is to do reduce the number of instruction done by operating on wider items. Here is the code: arr = np.zeros(524288, dtype=np.uint8) tmp = arr.view(np.uint64) tmp[2::4] = 0x01010101_01010101 tmp[3::4] = 0x01010101_01010101 It takes 47 μs on my machine (with a i5-9600KF CPU, Numpy 2.1.3 on Windows). This is a bit faster than the 57 μs of the np.tile solution (which is the standard way of doing that in Numpy). All other proposed solutions are slower. Note that two stores operations are sub-optimal, especially for larger arrays. That being said, on Windows a vectorised single store is slower (due to Numpy internal and more specifically generators). I advise you to choose the first solution instead of using Numpy tricks for sake of performance. If you cannot, another solution is simply to use Cython or Numba for that: import numba as nb @nb.njit('(uint32,uint32)') def gen(size, repeat): res = np.ones((repeat, size), dtype=np.uint8) for i in range(repeat): for j in range(size // 2): res[i, j] = 0 return res.reshape(-1) The Numba code only takes 35 µs on my machine. It is cleaner and should be faster on large arrays. It is also easy to debug/maintain. Note that the array allocation takes 40% of the time (14 µs)... | 2 | 2 |
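Editorial note: neither snippet in the answer covers the shifted case the question asks about. A hedged sketch that does, built on the same np.tile idea: generate one full period, tile enough copies, then slice at the offset.
import numpy as np

def alternating(n, run, shift=0, start_with=0):
    period = np.zeros(2 * run, dtype=np.uint8)
    if start_with:
        period[:run] = 1
    else:
        period[run:] = 1
    reps = -(-(n + shift) // (2 * run))      # ceil((n + shift) / (2 * run))
    return np.tile(period, reps)[shift:shift + n]

# reproduces the shifted example from the question
check = ((np.arange(100) - 3) // 7) & 1
assert np.array_equal(alternating(100, 7, shift=4, start_with=1), check)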
79,531,851 | 2025-3-24 | https://stackoverflow.com/questions/79531851/polars-fails-to-create-a-new-dataframe-using-with-columns-when-creating-new-co | I'm new to polars and encountering a confusing error. I'm trying to take several array columns and zip them into struct columns. When I try to do this with with_columns I encounter the error: ValueError: can only call `.item()` if the dataframe is of shape (1, 1), or if explicit row/col values are provided; frame has shape (4, 2) Here is code to reproduce this problem: df = pl.DataFrame( { "a": [[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4]], "b": [[1, 2, 3, 5],[1, 2, 3, 5],[1, 2, 3, 5],[1, 2, 3, 5]], "c": [[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4],[1, 2, 3, 4]], "d": ['a', 'b', 'c', 'd'] } ) df.with_columns([ (df.explode('a', 'b') .select( "a", "b", "d", pl.struct('a', 'b').alias("test_1")) .group_by("d") .agg("test_1")), (df.explode('b', 'c') .select( "c", "b", "d", pl.struct('b', 'c').alias("test_2")) .group_by("d") .agg("test_2")), ] ) With a single struct column (and no list in the method call) this works just as expected and yields the output: a b c d test_1 list[i64] list[i64] list[i64] str list[struct[2]] [1, 2, … 4] [1, 2, … 5] [1, 2, … 4] "d" [{1,1}, {2,2}, … {4,5}] [1, 2, … 4] [1, 2, … 5] [1, 2, … 4] "b" [{1,1}, {2,2}, … {4,5}] [1, 2, … 4] [1, 2, … 5] [1, 2, … 4] "c" [{1,1}, {2,2}, … {4,5}] [1, 2, … 4] [1, 2, … 5] [1, 2, … 4] "a" [{1,1}, {2,2}, … {4,5}] However, even putting this single operation into a list in the method call creates this error: df.with_columns([ (df.explode('a', 'b') .select( "a", "b", "d", pl.struct('a', 'b').alias("test_1")) .group_by("d") .agg("test_1")),] ) I'm sure this is some sort of simple error, but I cant' find any information on the cause and solution to this. | Compute test_1 and test_2 as separate DataFrames. Use join to combine test_1 and test_2 with the original DataFrame. Avoid passing complete DataFrames to with_columns() import polars as pl df = pl.DataFrame( { "a": [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]], "b": [[1, 2, 3, 5], [1, 2, 3, 5], [1, 2, 3, 5], [1, 2, 3, 5]], "c": [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]], "d": ['a', 'b', 'c', 'd'] } ) test_1 = ( df.explode("a", "b") .select( "a", "b", "d", pl.struct("a", "b").alias("test_1") ) .group_by("d") .agg(pl.col("test_1")) ) test_2 = ( df.explode("b", "c") .select( "c", "b", "d", pl.struct("b", "c").alias("test_2") ) .group_by("d") .agg(pl.col("test_2")) ) result = df.join(test_1, on="d").join(test_2, on="d") print(result) Result is in graph. | 2 | 1 |
79,518,064 | 2025-3-18 | https://stackoverflow.com/questions/79518064/how-to-create-google-doc-tabs-with-the-python-sdk | I’m trying to create a Tab in a google doc via the python SDK (as in the new(ish) side navigation/organizational tabs, not a tab character). I see how to read and update existing tabs, but don't see any mention of how to create a tab (or any callouts on that being a pending feature). Is it supported and just not well documented yet, or are Tabs new enough that this feature doesn't exist yet? Searched documentation and couldn't find anything related to creating tabs: https://developers.google.com/docs/api/how-tos/tabs https://developers.google.com/docs/api/reference/rest/v1/documents/request | It is not currently implemented. See this feature request, https://issuetracker.google.com/375867285. | 1 | 2 |
79,530,667 | 2025-3-24 | https://stackoverflow.com/questions/79530667/how-to-use-multiple-database-connections-in-django-testcase | I'm writing a testcase to reproduce a deadlock in my service. After I use multiprocessing to create two processes, I found they use the same database connection. So I can't reproduce the deadlock scenario in the testcase. How do I resolve this issue? My code is as follows: @transaction.atomic() def process_1(): change = Change.objects.select_for_update().get(id="1") time.sleep(5) change = Change.objects.select_for_update().get(id="2") @transaction.atomic() def process_2(): change = Change.objects.select_for_update().get(id="2") time.sleep(5) change = Change.objects.select_for_update().get(id="1") p1 = Process(target=process_1) p2 = Process(target=process_2) p1.start() p2.start() p1.join() p2.join() self.assertEqual(p1.exitcode, 0) self.assertEqual(p2.exitcode, 0) | This is happening because Django’s database connections are not automatically shared across processes when using multiprocessing (each process should create its own database connection). Just close the database connection before forking the new processes: from django.db import connection # Here your functions ... connection.close() # Ensure each process gets a fresh DB connection p1 = Process(target=process_1) p2 = Process(target=process_2) p1.start() p2.start() p1.join() p2.join() | 2 | 1
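Editorial note: if the test settings define more than one database alias, closing every configured connection before forking is slightly safer. A minimal variant of the accepted fix, with process_1 and process_2 as defined in the question.
from django.db import connections
from multiprocessing import Process

connections.close_all()      # children then reconnect lazily with fresh connections

p1 = Process(target=process_1)
p2 = Process(target=process_2)
p1.start()
p2.start()
p1.join()
p2.join()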
79,528,929 | 2025-3-23 | https://stackoverflow.com/questions/79528929/why-does-sequentialfeatureselector-return-at-most-n-features-in-1-predictor | I have a training dataset with six features and I am using SequentialFeatureSelector to find an "optimal" subset of the features for a linear regression model. The following code returns three features, which I will call X1, X2, X3. sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select='auto', tol=0.05, direction='forward', scoring='neg_root_mean_squared_error', cv=8) sfs.fit_transform(X_train, y_train) To check the results, I decided to run the same code using the subset of features X1, X2, X3 instead of X_train. I was expecting to see the features X1, X2, X3 returned again, but instead it was only the features X1, X2. Similarly, using these two features again in the same code returned only X1. It seems that the behavior of sfs is always to return a proper subset of the input features with at most n_features_in_ - 1 columns, but I cannot seem to find this information in the scikit-learn docs. Is this correct, and if so, what is the reasoning for not allowing sfs to return the full set of features? I also checked to see if using backward selection would return a full feature set. sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select='auto', tol=1000, direction='backward', scoring='neg_root_mean_squared_error', cv=8) sfs.fit_transform(X_train, y_train) I set the threshold tol to be a large value in the hope that there would be no satisfactory improvement from the full set of features of X_train. But, instead of returning the six original features, it only returned five. The docs simply state If the score is not incremented by at least tol between two consecutive feature additions or removals, stop adding or removing. So it seems that the full feature set is not being considered during cross-validation, and the behavior of sfs is different at the very end of a forward selection or at the very beginning of a backwards selection. If the full set of features outperforms any proper subset of the features, then don't we want sfs to return that possibility? Is there a standard method to compare a selected proper subset of the features and the full set of features using cross-validation? | Check the source code, lines 240-46 inside the method fit(): if self.n_features_to_select == "auto": if self.tol is not None: # With auto feature selection, `n_features_to_select_` will be updated # to `support_.sum()` after features are selected. self.n_features_to_select_ = n_features - 1 else: self.n_features_to_select_ = n_features // 2 As can be seen, even with auto selection mode and a given tol, maximum numbers of features that can be added is bounded by n_features - 1 for some reason (may be we can report this issue in github). 
We can override the implementation in the following way, by defining a function get_best_new_feature_score() (similar to the method _get_best_new_feature_score() from the source code), as shown below: from sklearn.feature_selection import SequentialFeatureSelector from sklearn.model_selection import cross_val_score def get_best_new_feature_score(estimator, X, y, cv, current_mask, direction, scoring): candidate_feature_indices = np.flatnonzero(~current_mask) scores = {} for feature_idx in candidate_feature_indices: candidate_mask = current_mask.copy() candidate_mask[feature_idx] = True if direction == "backward": candidate_mask = ~candidate_mask X_new = X[:, candidate_mask] scores[feature_idx] = cross_val_score( estimator, X_new, y, cv=cv, scoring=scoring ).mean() new_feature_idx = max(scores, key=lambda feature_idx: scores[feature_idx]) return new_feature_idx, scores[new_feature_idx] Now, let's implement the auto (forward) selection, using a regression dataset with 5 features, let' add all the features one-by-one, reporting the improvement in score and stopping by comparing with provided tol: from sklearn.datasets import make_regression from sklearn.linear_model import LinearRegression X, y = make_regression(n_features=5) # data to be used X.shape # (100, 5) lm = LinearRegression() # model to be used # now implement 'auto' feature selection (forward selection) cur_mask = np.zeros(X.shape[1]).astype(bool) # no feature selected initially cv, direction, scoring = 8, 'forward', 'neg_root_mean_squared_error' tol = 1 # if score improvement > tol, feature will be added in forward selection old_score = -np.inf ids, scores = [], [] for i in range(X.shape[1]): idx, new_score = get_best_new_feature_score(lm, X, y, current_mask=cur_mask, cv=cv, direction=direction, scoring=scoring) print(new_score - old_score, tol, score - old_score > tol) if (new_score - old_score) > tol: cur_mask[idx] = True ids.append(idx) scores.append(new_score) old_score = new_score print(f'feature {idx} added, CV score {score}, mask {cur_mask}') # feature 3 added, CV score -90.66899644023539, mask [False False False True False] # feature 1 added, CV score -59.21188041830155, mask [False True False True False] # feature 2 added, CV score -16.709218665372905, mask [False True True True False] # feature 4 added, CV score -3.1862116620446166, mask [False True True True True] # feature 0 added, CV score -1.4011801838814216e-13, mask [ True True True True True] If tol=10, set to 10 instead, then only 4 features will be added in forward-selection. Similarly, if tol=20, then only 3 features will be added in forward-selection, as expected. | 2 | 3 |
79,531,065 | 2025-3-24 | https://stackoverflow.com/questions/79531065/python-vectorized-mask-generation-numpy | I have an arbitrary Matrix M which is (N x A). I have a column vector V (N x 1) which has on each row the amount of entries I would like to keep from the original M <= A (starting from the leftmost) As an example, say I have the following V (for an arbitrary 5xA Matrix): [[1] [0] [4] [2] [3]] i.e. I want to keep the 1st element of the first row, no elements in row 2, 4 from row 3, 2 from row 4, etc. I want this to generate the following mask: [[1 0 0 0 0] [0 0 0 0 0] [1 1 1 1 0] [1 1 0 0 0] [1 1 1 0 0]] I then apply this mask to my matrix to get the result that I want. What is the fastest way to generate this mask? Naive pythonic approach: A = 20 n = 5 V = np.floor(np.random.rand(n) * (A+1)) x = [np.concatenate(np.repeat(1, x), np.repeat(0, 20 - x), axis=1) for x in V] x = np.array(x) This code is my current working solution but it is way too slow for large n, so I need a vectorized solution. Using numpy.fromfunction: n = 5 A = 20 V = np.floor(np.random.rand(n, 1) * (A + 1)) mask = np.fromfunction(lambda i,j: V > j, (n,20), dtype=int) This solution is considerably faster for large n, but I am not sure if I can do better than this. Overall: Any insights on this problem? Not too familiar with the ins and outs of numpy and python so I thought I'd post this here before I tried purusuing any individual solution further. I am also willing to compile to Cython if that would help this at all, though I know absolutely nothing about that language at the moment but I am willing to look into it. Open to pretty much any and all solutions. | Just use broadcasting and numpy.arange: mask = V > np.arange(M.shape[1]) Or from A: A = 5 mask = V > np.arange(A) Output: array([[ True, False, False, False, False], [False, False, False, False, False], [ True, True, True, True, False], [ True, True, False, False, False], [ True, True, True, False, False]]) And if you need integers: mask.astype(int) array([[1, 0, 0, 0, 0], [0, 0, 0, 0, 0], [1, 1, 1, 1, 0], [1, 1, 0, 0, 0], [1, 1, 1, 0, 0]]) | 2 | 1 |
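Editorial note: applying the mask back to the matrix, as the question describes, reduces to one more vectorised step; a small self-contained example.
import numpy as np

N, A = 5, 5
M = np.arange(N * A).reshape(N, A)          # stand-in for the real (N x A) matrix
V = np.array([[1], [0], [4], [2], [3]])
mask = V > np.arange(A)
kept = np.where(mask, M, 0)                 # or simply M * mask
print(kept)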
79,530,635 | 2025-3-24 | https://stackoverflow.com/questions/79530635/opencv-cv2-waitkey-cannot-detect-the-delete-key-what-should-i-do | I am using cv2.waitKey(1) & 0xFF to capture key events, but I found that it cannot correctly detect the Delete key. My current code is as follows: import cv2 while True: key = cv2.waitKey(1) if key != -1: # -1 : timeout/no key event print(f"Key pressed: {key}") if key == 127: print("Delete key detected!") break cv2.destroyAllWindows() However, when I run this code and press the Delete key, it is not detected. Environment: Windows 10, Python 3.9, OpenCV 4.5.5 Issue: key == 127 does not work Expected Behavior: When I press Delete, it should print Delete key detected. Questions: Can cv2.waitKey(1) correctly detect the Delete key? Is there a more accurate way to detect the Delete key? | Use waitKeyEx(). It'll give you every bit of information the operating system has to give. waitKey does not, it only gives you the lowest eight bits. On Windows, Del will cause waitKey to return 0, but waitKeyEx() will return 0x2e0000. waitKeyEx() will return other values on other systems. Never ever use that & 0xFF pattern. Never ever. Not with waitKeyEx, not with waitKey. That operation is utterly redundant. waitKey does that internally already, and with waitKeyEx the point is to get those bits, not discard them. This operation also mangles the return value indicating "no key event", which is properly -1, and makes it 255. | 3 | 2 |
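Editorial note: a minimal loop along those lines. Note that highgui only delivers key events to a window that exists and has focus (the question's loop creates none), and that 0x2e0000 is the Windows code reported in the answer; other platforms return different values.
import cv2

DELETE_ON_WINDOWS = 0x2e0000

cv2.namedWindow("keys")
while True:
    key = cv2.waitKeyEx(1)
    if key == -1:                     # no key event
        continue
    print("key code:", hex(key))
    if key == DELETE_ON_WINDOWS:
        print("Delete key detected!")
        break
cv2.destroyAllWindows()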
79,523,536 | 2025-3-20 | https://stackoverflow.com/questions/79523536/can-you-raise-an-airflowexception-without-dumping-the-entire-traceback-into-the | In Airflow, you're suppose to raise an AirflowException if you want a task to be marked as a failure. But the raised error doesn't seem to be caught in the top-level Airflow module, and so it results in the entire stacktrace being dumped into the logs. If you do your error handling properly, it should be possible to print a helpful error message and then fail the task, without gumming up the logs. How can this be done? | Yes, you can do it by using Python’s exception chaining syntax. Instead of writing: raise AirflowException("Your error message") you write: raise AirflowException("Your error message") from None This “from None” tells Python not to include the exception’s context (i.e. the chained traceback) in the logs. So, when your task fails, you’ll just see your helpful error message instead of a long traceback, making the logs much cleaner. For example: from airflow.exceptions import AirflowException def some_task(): try: # some code that might raise an error pass except Exception as e: # Log a neat error message error_msg = "Something went wrong, please check your configuration." # Now raise the AirflowException without the full traceback raise AirflowException(error_msg) from None | 1 | 1 |
79,528,380 | 2025-3-23 | https://stackoverflow.com/questions/79528380/is-it-possible-to-turn-off-printing-the-id-hex-address-globally-for-python-obj | When you don't provide a __repr__ or __str__ method on a custom class, you just get a classname and the Python address of the object (or, to be more specific, what id(self) would return. This is fine most of the time. And it is very helpful when you are debugging some code and you want to see if instances are/are not the same, visually. But to be honest I almost never care about that id value. However it also means that running a program with debugging print functions never looks the same. Ditto if you are comparing log files. Unless you write a lot of __repr__ only to avoid this issue. Or if you pre-format the log files to zero out the hex values on the default object prints. A sample program to illustrate what I would like to do: not have that id printed. class ILookDifferentEveryRun: "baseline behavior" def __init__(self,a): "I don't actually care about `a`, that's why I don't need a `repr`" self.a = a class ILookTheSameEveryRun(ILookDifferentEveryRun): """this is my workaround, a cut and paste of a default __repr__""" def __repr__(self) : return type(self).__name__ class ILookAlmostLikeBuiltinRepr(ILookDifferentEveryRun): "can I do this with a global switch?" def __repr__(self) : """this is more or less what I want""" res = f"<{type(self).__module__}.{type(self).__name__} object> at <dontcare>" return res inst1 = ILookDifferentEveryRun(a=1) inst2 = ILookTheSameEveryRun(a=1) inst3 = ILookAlmostLikeBuiltinRepr(a=1) print(inst1) print(inst2) print(inst3) run twice: <__main__.ILookDifferentEveryRun object at 0x100573260> ILookTheSameEveryRun <__main__.ILookAlmostLikeBuiltinRepr object> at <dontcare> <__main__.ILookDifferentEveryRun object at 0x104ca7320> ILookTheSameEveryRun <__main__.ILookAlmostLikeBuiltinRepr object> at <dontcare> I took a look at the startup flags for the python interpreter, but nothing seems to allow for this. Any workarounds? I know I could also put the repr on a Mixin and reuse that everywhere, but that's ugly too. If I can't, that's fine and that's what I am expecting to hear. Just wondering if someone else had the same problem and found a way. p.s. this is less about dedicated printing of instances and more about things like print(mylist) where mylist=[item1,item2,item3], generally any complex data structures with nested items in them. | As a workaround you can add your desired __repr__ to every user class that does not have a custom __repr__ defined anywhere in the MRO. This can be done automaticallly by replacing builtins.__build_class__, which implements the class statement, with a wrapper function that applies the patch to the class that the original __build_class__ returns: import builtins def __repr__(self): cls = type(self) return f'<{cls.__module__}.{cls.__qualname__} object>' def build_class(*args, orig_build_class=__build_class__, **kwargs): cls = orig_build_class(*args, **kwargs) if cls.__repr__ is object.__repr__: cls.__repr__ = __repr__ return cls builtins.__build_class__ = build_class so that: class A: pass print(A()) outputs: <__main__.A object> Demo: https://ideone.com/L5mFVU | 1 | 2 |
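Editorial note: one caveat with patching builtins.__build_class__ is that it also rewrites the repr of every third-party class defined after the patch. A possible refinement restricts it to your own modules; the "myproject" prefix below is a placeholder, not something from the original post.
import builtins

def _short_repr(self):
    cls = type(self)
    return f'<{cls.__module__}.{cls.__qualname__} object>'

def build_class(*args, orig_build_class=__build_class__, **kwargs):
    cls = orig_build_class(*args, **kwargs)
    own_code = cls.__module__.startswith(("__main__", "myproject"))
    if own_code and cls.__repr__ is object.__repr__:
        cls.__repr__ = _short_repr
    return cls

builtins.__build_class__ = build_class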
79,530,149 | 2025-3-24 | https://stackoverflow.com/questions/79530149/problem-if-two-inherited-init-subclass-s-have-the-same-argument-name | I'm trying to use two classes, A and B, as mixing classes of AB. They both have __init_subclass__ methods. The problem is that both __init__sublass__ methods have the same argument msg. Therefore I've used an adaptor class B_ to rename B's argument msg_b. But I'm having trouble! The nearest I have got is: class A: def __init_subclass__(cls, msg, **kwargs): print(f'{cls=} A: {msg}') super().__init_subclass__(**kwargs) class B: def __init_subclass__(cls, msg, **kwargs): print(f'{cls=} B: {msg}') super().__init_subclass__(**kwargs) # Adaptor class `B_` needed because both `A` and `B` have an argument `msg`. class B_: # Rename `msg` to `msg_b`. def __init_subclass__(cls, msg_b, **kwargs): # `B.__init_subclass__(msg_b, **kwargs)` sets the subclass as `B` not `cls`, but otherwise works. B.__init_subclass__.__func__(cls, msg=msg_b, **kwargs) # Still need a `B`. def __init__(self, *args, **kwargs): self.b = B(*args, **kwargs) # Forward all the attributes to `self.b`. def __getattr__(self, item): return getattr(self.b, item) class AB(A, B_, msg='Hello.', msg_b='Also, hello.' ): ... print(f'{AB()=}, {isinstance(AB(), A)=}, {isinstance(AB(), B_)=}, {isinstance(AB(), B)=}') Which does call both __init_sublass__s with the correct class argument, it prints: cls=<class '__main__.AB'> A: Hello. cls=<class '__main__.AB'> B: Also, hello. But then you get the error: Traceback (most recent call last): File <definition of AB>, in <module> class AB(A, B_, msg='Hello.', msg_b='Also, hello.' ): File <A's>, in __init_subclass__ super().__init_subclass__(**kwargs) File <B_'s>, in __init_subclass__ B.__init_subclass__.__func__(cls, msg=msg_b, **kwargs) File <B's>, in __init_subclass__ super().__init_subclass__(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: super(type, obj): obj must be an instance or subtype of type Presumably because B is not a super type of AB (B_ is). I’m not clear why super wants subtypes matching though! Any ideas on how to fix this? | Presumably you don't have the authorization to modify A or B, or you would've renamed the msg parameter in one of their __init_subclass__s already. One workaround then is to use an intermediary base class between A and B with an __init_subclass__ method that renames the argument msg_b to msg before passing the baton to B.__init_subclass__: class ABGlue: def __init_subclass__(cls, msg_b, **kwargs): super().__init_subclass__(msg=msg_b, **kwargs) class AB(A, ABGlue, B, msg='Hello.', msg_b='Also, hello.' ): ... print(f'{AB()=}') print(f'{isinstance(AB(), A)=}') print(f'{isinstance(AB(), B)=}') This outputs something like: cls=<class '__main__.AB'> A: Hello. cls=<class '__main__.AB'> B: Also, hello. AB()=<__main__.AB object at 0x14a498119460> isinstance(AB(), A)=True isinstance(AB(), B)=True Demo: https://ideone.com/soXBWW | 1 | 1 |
79,529,322 | 2025-3-23 | https://stackoverflow.com/questions/79529322/do-subset-sum-instances-inherently-require-large-integers-to-force-exponential-d | I'm developing custom subset-sum algorithms and have encountered a puzzling issue: it seems difficult to generate truly "hard" subset-sum instances (i.e., forcing exponential computational effort) without using very large integers (e.g., greater than about 2^22). I'd specifically like to know: Are there known constructions or instance generators for subset-sum that reliably force exponential complexity—particularly against common subset-sum algorithms or custom heuristics—using only moderately sized integers (≤2^22)? Is the hardness of subset-sum instances inherently tied to the size of integers involved, or is it possible to create computationally difficult instances purely through numerical structure and relationships, even with smaller numbers? For context, here are some attempts I've made at generating potentially hard instances (feedback or improvements welcome): import random def generate_exponential_instance(n): max_element = 2 ** 22 A = [random.randint(1, max_element) for _ in range(n)] while True: mask = [random.choice([0, 1]) for _ in range(n)] if sum(mask) != 0: break target = sum(A[i] * mask[i] for i in range(n)) return A, target def generate_dense_high_values_instance(n): base = 2 ** 22 - random.randint(0, 100) A = [base + random.randint(0, 20) for _ in range(n)] target = sum(random.sample(A, k=n // 2)) return A, target def generate_merkle_hellman_instance(n, max_step=20): total = 0 private_key = [] for _ in range(n): next_val = total + random.randint(1, max_step) private_key.append(next_val) total += next_val q = random.randint(total + 1, 2 * total) r = random.randint(2, q - 1) public_key = [(r * w) % q for w in private_key] message = [random.randint(0, 1) for _ in range(n)] ciphertext = sum(b * k for b, k in zip(message, public_key)) return public_key, ciphertext | We know subset-sum to be solvable in pseudopolynomial time. "Pseudopolynomial time" means the worst-case running time on large inputs is bounded by a polynomial in the input length and the largest numeric value in the input. Because a string of L bits can encode numbers of size O(2^L), pseudopolynomial time algorithms really take exponential time (that is, exponential in the input length), but psuedopolynomial time algorithms are still considered to be better than "usual" exponential time algorithms since you can avoid the exponential behavior if you just use small numbers. For example, even this simple algorithm for subset-sum: def exists_subset_sum(num_set, target): reachable = {0} for num in num_set: reachable |= {prev + num for prev in reachable} return target in reachable has pseudopolynomial running time. The key observation is that the reachable set contains only values in the interval [-i*M, i*M] at the ith iteration, where M is max(abs(n) for n in num_set)). Then, the size of the set is always O(L*M) (where L is the size of the input in bits) and operations on it take time polynomial in L and M, so the whole algorithm takes time polynomial in L and M. Technically, as M is exponential in L, the time complexity of exists_subset_sum in terms of only L is exponential. But, you can only get the exponential behavior if you let M grow exponentially. If you just increase L without increasing M you will get at worst polynomial growth. You'll notice that many algorithms for solving subset-sum are also pseudopolynomial. | 4 | 8 |
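A small, hedged illustration of the bound described in the answer above, reusing the reachable-set idea from exists_subset_sum: with n = 40 values capped at M = 100, the set can never exceed n*M + 1 entries even though the number of subsets is 2**40.
import random

def reachable_sums(num_set):
    reachable = {0}
    for num in num_set:
        reachable |= {prev + num for prev in reachable}
    return reachable

random.seed(0)
n, M = 40, 100
nums = [random.randint(1, M) for _ in range(n)]   # small values -> small M

sums = reachable_sums(nums)
print(len(sums), "reachable sums, bounded by n*M + 1 =", n * M + 1)
print("number of subsets would be 2**n =", 2 ** n)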
79,526,922 | 2025-3-22 | https://stackoverflow.com/questions/79526922/how-to-replicate-the-following-density-plot-in-python | Given the following setup, N_r = 21; N_theta = 18; N_phi= 36; r_index = N_r-1; [phi,theta,r_sphere] = np.meshgrid(np.linspace(0,2*np.pi,N_phi),np.linspace(0,np.pi,N_theta),np.linspace(a,b,N_r)); X = r_sphere[:,:,r_index] * np.sin(theta[:,:,r_index]) * np.cos(phi[:,:,r_index]); Y = r_sphere[:,:,r_index] * np.sin(theta[:,:,r_index]) * np.sin(phi[:,:,r_index]); Z = r_sphere[:,:,r_index] * np.cos(theta[:,:,r_index]); rho = 1/r_sphere**2*np.sin(theta)*np.cos(theta)*np.sin(phi) I have set up my 2D X, Y, and Z coordinates converted from spherical coordinates, and a density variable in spherical coordinates that I want to plot a spherical shell (at r_index) of. In Matlab, with the same variables and setup, I was able to use the surf() function surf(X,Y,Z,rho(:,:,r_index),"EdgeAlpha",0.2); (plus a couple other things like axis labels and colorbar), I was able to create the following 3D (or I guess 4D?) plot at r=r_index: Trying to use Matplotlib's plot_surface() either doesn't work, or I'm not quite getting my inputs correct: fig1 = plt.figure(figsize=(16,9),dpi=80) ax = fig1.add_subplot(projection = "3d") surf = ax.plot_surface(X,Y,Z,rho[:,:,r_index]) Can I get this to work using plot_surface(), or is there another plotting function tailor made for what it is that I'm trying to do? EDIT: scatter() appears to give me something close fig1 = plt.figure(figsize=(16,9),dpi=80) ax = fig1.add_subplot(projection = "3d") surf = ax.scatter(X,Y,Z,c=rho[:,:,r_index],cmap = mpl.colormaps['bwr']) plt.colorbar(surf,ax = ax,shrink = 0.5,aspect = 5) plt.show() It cleverly sets up the surface shape by taking in X, Y, and Z, and then I can set c to color each dot in accordance to the range of values passed in. If I could just do something similar to that for a function like plot_surface(). It can probably be done, I just don't know what the exact input arguments would be. | Ah, so it's the same as with using scatter where we create a colormap out of the data and applying it. The only issue now is that the colorbar's min and max values aren't reflective of the of the data, as it's locked to 0 to 1. I'm guessing that's what the Normalize function does. Tried fiddling around with the inputs adding vmin and vmax and applying clim, but it looks like it's stuck. Is there a way around this? 
Not sure I am getting the comment to the accepted answer right , but copying from: color coding using scalar mappable in matplotlib I tried: import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl N_r = 21 N_theta = 18 N_phi= 36 r_index = N_r-1 a, b = 0, 100 # use desired values [phi,theta,r_sphere] = np.meshgrid(np.linspace(0,2*np.pi,N_phi),np.linspace(0,np.pi,N_theta),np.linspace(a,b,N_r)) X = r_sphere[:,:,r_index] * np.sin(theta[:,:,r_index]) * np.cos(phi[:,:,r_index]) Y = r_sphere[:,:,r_index] * np.sin(theta[:,:,r_index]) * np.sin(phi[:,:,r_index]) Z = r_sphere[:,:,r_index] * np.cos(theta[:,:,r_index]) rho = 1/r_sphere**2*np.sin(theta)*np.cos(theta)*np.sin(phi) #print('\nrho :\n', rho) #a, b = 0, 100 # use desired values rho[np.isnan(rho)] = 0 # impute NaN values density = rho[...,r_index] # set appropriate density # normalize to have values in [0,1] #density = (density - density.min()) / (density.max() - density.min()) #m = plt.cm.ScalarMappable( \ # norm=mpl.colors.Normalize(density.min(), density.max()), \ # cmap='jet') cmap= plt.cm.jet # create normalization instance norm = mpl.colors.Normalize(vmin=density.min(), vmax=density.max()) # create a scalarmappable from the colormap m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm) fig1 = plt.figure(figsize=(16,9),dpi=80) ax = fig1.add_subplot(projection = "3d") surf = ax.plot_surface(X,Y,Z, facecolors=m.to_rgba(density)) fig1.colorbar(m, ax=ax) # colorbar plt.show() output: given that: print('\ndensity.max() : ', density.max()) print('\ndensity.min() : ', density.min()) results in: density.max() : 4.973657691184833e-05 density.min() : -4.973657691184834e-05 I am using Online Matplotlib Compiler that is Matplotlib Version : 3.8.4 | 4 | 1 |
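A condensed, hedged restatement of the pattern used in the answer above, on a unit sphere with a made-up scalar field (so the numbers are illustrative): build one Normalize/ScalarMappable, feed plot_surface through facecolors, and reuse the same mappable for the colorbar so its range matches the data instead of 0..1.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl

theta, phi = np.meshgrid(np.linspace(0, np.pi, 60), np.linspace(0, 2 * np.pi, 120), indexing="ij")
X = np.sin(theta) * np.cos(phi)
Y = np.sin(theta) * np.sin(phi)
Z = np.cos(theta)
density = np.sin(theta) * np.cos(theta) * np.sin(phi)   # any scalar field on the shell

norm = mpl.colors.Normalize(vmin=density.min(), vmax=density.max())
mappable = mpl.cm.ScalarMappable(norm=norm, cmap="bwr")

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, facecolors=mappable.to_rgba(density), rstride=1, cstride=1)
fig.colorbar(mappable, ax=ax, shrink=0.5)   # colorbar spans density.min()..density.max()
plt.show()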
79,527,532 | 2025-3-22 | https://stackoverflow.com/questions/79527532/why-is-the-continued-fraction-expansion-of-arctangent-combined-with-half-angle-f | Sorry for the long title. I don't know if this is more of a math problem or programming problem, but I think my math is extremely rusty and I am better at programming. So I have this continued fraction expansion of arctangent: I got it from Wikipedia I tried to find a simple algorithm to calculate it: And I did it, I have written an infinite precision implementation of the continued fraction expansion without using any libraries, using only basic integer arithmetic: import json import math import random from decimal import Decimal, getcontext from typing import Callable, List, Tuple Fraction = Tuple[int, int] def arctan_cf(y: int, x: int, lim: int) -> Fraction: y_sq = y**2 a1, a2 = y, 3 * x * y b1, b2 = x, 3 * x**2 + y_sq odd = 5 for i in range(2, 2 + lim): t1, t2 = odd * x, i**2 * y_sq a1, a2 = a2, t1 * a2 + t2 * a1 b1, b2 = b2, t1 * b2 + t2 * b1 odd += 2 return a2, b2 And it converges faster than Newton's arctangent series which I previously used. Now I think if I combine it with the half-angle formula of arctangent it should converge faster. def half_arctan_cf(y: int, x: int, lim: int) -> Fraction: c = (x**2 + y**2) ** 0.5 a, b = c.as_integer_ratio() a, b = arctan_cf(a - b * x, b * y, lim) return 2 * a, b And indeed, it does converge even faster: def test_accuracy(lim: int) -> dict: result = {} for _ in range(lim): x, y = random.sample(range(1024), 2) while not x or not y: x, y = random.sample(range(1024), 2) atan2 = math.atan2(y, x) entry = {"atan": atan2} for fname, func in zip( ("arctan_cf", "half_arctan_cf"), (arctan_cf, half_arctan_cf) ): i = 1 while True: a, b = func(y, x, i) if math.isclose(deci := a / b, atan2): break i += 1 entry[fname] = (i, deci) result[f"{y} / {x}"] = entry return result print(json.dumps(test_accuracy(8), indent=4)) for v in test_accuracy(128).values(): assert v["half_arctan_cf"][0] <= v["arctan_cf"][0] { "206 / 136": { "atan": 0.9872880750087898, "arctan_cf": [ 16, 0.9872880746658675 ], "half_arctan_cf": [ 6, 0.9872880746018052 ] }, "537 / 308": { "atan": 1.0500473287277563, "arctan_cf": [ 18, 1.0500473281360896 ], "half_arctan_cf": [ 7, 1.0500473288158192 ] }, "331 / 356": { "atan": 0.7490241118247137, "arctan_cf": [ 10, 0.7490241115996227 ], "half_arctan_cf": [ 5, 0.749024111913438 ] }, "744 / 613": { "atan": 0.8816364228048325, "arctan_cf": [ 13, 0.8816364230439662 ], "half_arctan_cf": [ 6, 0.8816364227495634 ] }, "960 / 419": { "atan": 1.1592605364805093, "arctan_cf": [ 24, 1.1592605359263286 ], "half_arctan_cf": [ 7, 1.1592605371181872 ] }, "597 / 884": { "atan": 0.5939827714677137, "arctan_cf": [ 7, 0.5939827719895824 ], "half_arctan_cf": [ 4, 0.59398277135389 ] }, "212 / 498": { "atan": 0.40246578425167584, "arctan_cf": [ 5, 0.4024657843859885 ], "half_arctan_cf": [ 3, 0.40246578431841773 ] }, "837 / 212": { "atan": 1.322727785860997, "arctan_cf": [ 41, 1.322727786922624 ], "half_arctan_cf": [ 8, 1.3227277847674388 ] } } That assert block runs quite a bit long for large number of samples, but it never raises exceptions. So I think I can use the continued fraction expansion of arctangent with Machin-like series to calculate π. 
(I used the last series in the linked section because it converges the fastest) def sum_fractions(fractions: List[Fraction]) -> Fraction: while (length := len(fractions)) > 1: stack = [] for i in range(0, length - (odd := length & 1), 2): num1, den1 = fractions[i] num2, den2 = fractions[i + 1] stack.append((num1 * den2 + num2 * den1, den1 * den2)) if odd: stack.append(fractions[-1]) fractions = stack return fractions[0] MACHIN_SERIES = ((44, 57), (7, 239), (-12, 682), (24, 12943)) def approximate_loop(lim: int, func: Callable) -> List[Fraction]: fractions = [] for coef, denom in MACHIN_SERIES: dividend, divisor = func(1, denom, lim) fractions.append((coef * dividend, divisor)) return fractions def approximate_1(lim: int) -> List[Fraction]: return approximate_loop(lim, arctan_cf) def approximate_2(lim: int) -> List[Fraction]: return approximate_loop(lim, half_arctan_cf) approx_funcs = (approximate_1, approximate_2) def calculate_pi(lim: int, approx: bool = 0) -> Fraction: dividend, divisor = sum_fractions(approx_funcs[approx](lim)) dividend *= 4 return dividend // (common := math.gcd(dividend, divisor)), divisor // common getcontext().rounding = 'ROUND_DOWN' def to_decimal(dividend: int, divisor: int, places: int) -> str: getcontext().prec = places + len(str(dividend // divisor)) return str(Decimal(dividend) / Decimal(divisor)) def get_accuracy(lim: int, approx: bool = 0) -> Tuple[int, str]: length = 12 fraction = calculate_pi(lim, approx) while True: decimal = to_decimal(*fraction, length) for i, e in enumerate(decimal): if Pillion[i] != e: return (max(0, i - 2), decimal[:i]) length += 10 with open("D:/Pillion.txt", "r") as f: Pillion = f.read() Pillion.txt contains the first 1000001 digits of π, Pi + Million = Pillion. And it works, but only partially. The basic continued fraction expansion works very well with Machin-like formula, but combined with half-angle formula, I can only get 9 correct decimal places no matter what, and in fact, I get 9 correct digits on the very first iteration, and then this whole thing doesn't improve ever: In [2]: get_accuracy(16) Out[2]: (73, '3.1415926535897932384626433832795028841971693993751058209749445923078164062') In [3]: get_accuracy(32) Out[3]: (138, '3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231') In [4]: get_accuracy(16, 1) Out[4]: (9, '3.141592653') In [5]: get_accuracy(32, 1) Out[5]: (9, '3.141592653') In [6]: get_accuracy(1, 1) Out[6]: (9, '3.141592653') But the digits do in fact change: In [7]: to_decimal(*calculate_pi(1, 1), 32) Out[7]: '3.14159265360948500093515231500093' In [8]: to_decimal(*calculate_pi(2, 1), 32) Out[8]: '3.14159265360945286794831052938917' In [9]: to_decimal(*calculate_pi(3, 1), 32) Out[9]: '3.14159265360945286857612896472974' In [10]: to_decimal(*calculate_pi(4, 1), 32) Out[10]: '3.14159265360945286857611676794770' In [11]: to_decimal(*calculate_pi(5, 1), 32) Out[11]: '3.14159265360945286857611676818392' Why is the continued fraction with half-angle formula not working with Machin-like formula? And is it possible to make it work, and if it can work, then how? I want either a proof that it is impossible, or a working example that proves it is possible. 
Just a sanity check, using π/4 = arctan(1) I was able to make half_arctan_cf spit out digits of π but it converges much slower: def approximate_3(lim: int) -> List[Fraction]: return [half_arctan_cf(1, 1, lim)] approx_funcs = (approximate_1, approximate_2, approximate_3) In [28]: get_accuracy(16, 2) Out[28]: (15, '3.141592653589793') In [29]: get_accuracy(16, 0) Out[29]: (73, '3.1415926535897932384626433832795028841971693993751058209749445923078164062') And the same problem recurs, it reaches maximum precision of 15 digits at the 10th iteration: In [37]: get_accuracy(9, 2) Out[37]: (14, '3.14159265358979') In [38]: get_accuracy(10, 2) Out[38]: (15, '3.141592653589793') In [39]: get_accuracy(11, 2) Out[39]: (15, '3.141592653589793') In [40]: get_accuracy(32, 2) Out[40]: (15, '3.141592653589793') I just rewrote my arctangent continued fraction implementation and made it avoid doing redundant computations. In my code in each iteration t1 increases by 2 * y_sq, so there is no need to repeatedly multiply y_sq by the odd number, instead just use a cumulative variable and a step of 2 * y_sq. And the difference between each pair of consecutive square numbers is just the odd numbers, so I can use a cumulative variable of a cumulative variable. def arctan_cf_0(y: int, x: int, lim: int) -> Fraction: y_sq = y**2 a1, a2 = y, 3 * x * y b1, b2 = x, 3 * x**2 + y_sq odd = 5 for i in range(2, 2 + lim): t1, t2 = odd * x, i**2 * y_sq a1, a2 = a2, t1 * a2 + t2 * a1 b1, b2 = b2, t1 * b2 + t2 * b1 odd += 2 return a2, b2 def arctan_cf(y: int, x: int, lim: int) -> Fraction: y_sq = y**2 a1, a2 = y, 3 * x * y b1, b2 = x, 3 * x**2 + y_sq t1_step, t3_step = 2 * x, 2 * y_sq t1, t2 = 5 * x, 4 * y_sq t3 = t2 + y_sq for _ in range(lim): a1, a2 = a2, t1 * a2 + t2 * a1 b1, b2 = b2, t1 * b2 + t2 * b1 t1 += t1_step t2 += t3 t3 += t3_step return a2, b2 In [301]: arctan_cf_0(4, 3, 100) == arctan_cf(4, 3, 100) Out[301]: True In [302]: %timeit arctan_cf_0(4, 3, 100) 58.6 μs ± 503 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [303]: %timeit arctan_cf(4, 3, 100) 54.3 μs ± 816 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) While this doesn't improve the speed by much, this is definitively an improvement. | The loss of precision is here: c = (x**2 + y**2) ** 0.5 a, b = c.as_integer_ratio() This code works with float type, which is the result of power calculation ** 0.5. As soon as you use float, you lose arbitrary precision. You might want to use the Decimal.sqrt method instead. | 7 | 7 |
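One hedged way to apply the answer's suggestion while staying in pure integer arithmetic (my sketch; the helper names are made up and arctan_cf is the routine from the question): replace the float square root with math.isqrt at a chosen decimal precision before feeding the half-angle identity. The attainable precision is then capped by `digits` rather than by float's ~16 digits, so raise it alongside `lim`.
from math import isqrt

def rational_sqrt(n: int, digits: int = 60):
    """Return (p, q) with p/q ~= sqrt(n), good to roughly `digits` decimal places."""
    scale = 10 ** digits
    return isqrt(n * scale * scale), scale

def half_arctan_cf_exact(y: int, x: int, lim: int, digits: int = 60):
    # arctan(y/x) = 2 * arctan((sqrt(x**2 + y**2) - x) / y), the half-angle identity
    p, q = rational_sqrt(x * x + y * y, digits)      # p/q ~= sqrt(x^2 + y^2), no float involved
    a, b = arctan_cf(p - q * x, q * y, lim)          # arctan_cf as defined in the question
    return 2 * a, b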
79,528,465 | 2025-3-23 | https://stackoverflow.com/questions/79528465/find-element-by-key-in-json | I want to fetch an element under the key ID in my JSON list, which looks like this:
[{"ID": 0, "login": "admin", "password": "123", "email": "[email protected]"}, {"ID": 1, "login": "admin2", "password": "1234", "email": "[email protected]"}]
The list is in the data.json file and it's not assigned to any variable. I have a 'check' variable that takes the value of a number. I want to call the check function from ANOTHER Python file. Here's roughly what it should do: if "ID" == check, return True. | First, create and save a file "fetch_data.py" with the following code:
import json

def check_id(check):
    with open("data.json", "r") as file:
        data = json.load(file)  # Read JSON list
        for item in data:
            if item.get("ID") == check:
                return True
    return False
Then create and save the main.py file:
from fetch_data import check_id

check = 1  # Checking if ID 1 exists
if check_id(check):
    print("ID found!")
else:
    print("ID not found!")
Please ensure that the data.json file contains the following data:
[
    { "ID": 0, "login": "admin", "password": "123", "email": "[email protected]" },
    { "ID": 1, "login": "admin2", "password": "1234", "email": "[email protected]" }
] | 2 | 1
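A hedged variation on the accepted answer above (the name find_by_id is mine): return the matching record itself rather than a bare True/False, which keeps the one-file-read structure but lets the caller use the other fields too.
import json

def find_by_id(check, path="data.json"):
    with open(path, "r") as file:
        records = json.load(file)
    for item in records:
        if item.get("ID") == check:
            return item          # full record, e.g. {"ID": 1, "login": "admin2", ...}
    return None

record = find_by_id(1)
print(record is not None)                        # behaves like the original check_id
print(record["login"] if record else "no match")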
79,528,010 | 2025-3-22 | https://stackoverflow.com/questions/79528010/unexpected-behaviour-in-z3-when-working-with-exponentials | I have the following code in which I am trying to return the value of e^x using z3. However, this code is returning y = 0:
import gmpy2
import z3

x, y = z3.Real('x'), z3.Real('y')
e = z3.RealVal(str(gmpy2.exp(gmpy2.mpfr(1))))  # e^1, i.e. Euler's number e
print(e)
s = z3.Solver()
s.add(x == 1134585759063987950064875850350910837993/1361129467683753853853498429727072845824)
s.add(y == e ** x)
s.check()
model = s.model()
print(model)
The output of this code is:
3397852285573806544200359339189/1250000000000000000000000000000
[x = 260488115293581/312500000000000, y = 0]
How can I fix this code? | If you add:
print(s.check())
you'll see that z3 prints:
unknown
This means that the solver wasn't able to come up with a model that is guaranteed to satisfy the constraints. (Exponentials are hard for SMT solvers: there are very good reasons for this, and you can search Stack Overflow for them.) So, the model you print is irrelevant. Or, more precisely, it is not guaranteed to satisfy the constraints since the solver is in the unknown state. As far as "fixing" this is concerned: you can't really. But if x is a constant, and e is a constant (already), then so is e^x. Just calculate its value outside of z3 and plug it in. | 1 | 1
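A hedged sketch of the answer's suggestion, reusing gmpy2 from the question (the precision value 200 is an arbitrary choice): compute e**x numerically outside z3, take the exact rational of that approximation, and assert it as a plain constant so check() can return sat.
import gmpy2
import z3

gmpy2.get_context().precision = 200   # working precision in bits for the mpfr computation

x_q = gmpy2.mpq(1134585759063987950064875850350910837993,
                1361129467683753853853498429727072845824)
y_q = gmpy2.mpq(gmpy2.exp(gmpy2.mpfr(x_q)))   # e**x, then the exact rational of that mpfr

x, y = z3.Real('x'), z3.Real('y')
s = z3.Solver()
s.add(x == z3.RealVal(str(x_q)))
s.add(y == z3.RealVal(str(y_q)))              # nothing transcendental is left for the solver
print(s.check())                              # sat
print(s.model())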
79,526,468 | 2025-3-21 | https://stackoverflow.com/questions/79526468/why-we-say-dequeue-in-python-is-threadsafe-when-gil-already-restricts-running-on | From what I have been reading Python has GIL that ensures only one thread can run the python code. So I am slightly confused when we say collections.dequeue is thread-safe. If only one thread is run at a time wouldn't objects be in a consistent state already? It would be great if someone can give an example of how a list in python is not thread safe and using a dequeue possible counters that? | The GIL doesn't prevent race conditions, it only protects python internals from corruption, you cannot have a dangling pointer or a use-after-free inside the python interpreter. the only way they were able to make a GIL-free interpreter in python3.13 is to put a lock (or more than one lock) in every object in python, including things you don't think about like stack variables, and the stack frame. You can have race conditions with the GIL for the same reason you needed locks on a CPU with 1 core, the thread can be suspended at any moment in time, python threads automatically drop the GIL and suspend themselves every few milliseconds to allow other threads to run, if you have 2 atomic operations, each one of them is atomic, the two operations together are not atomic. CPython deque is similar to C++ deque, it is a linked list of small arrays (lists), currently 64 items per block. A single append operation involves 2 operations put item at the head of the array increment the head counter. if the current head block is full it has to allocate a new empty block, and put it in the end of the linked list, to illustrate those 2 operations in python code block = [None for i in range(64)] # one block in the linked list head = 5 def append(obj): block[head] = obj # thread could get suspended here head += 1 There is also a tail counter (called left and right in source code), so it can grow in 2 directions, the tail logic decrements instead of incrementing. If you implemented that in python it won't be thread-safe without a lock, one thread will put its object at the current head, then get suspended before it increments the head counter, and another thread will end up overwriting this location before the first thread could increment the head count, then the first thread will increment it again, essentially the data of the first thread is lost, and there is now an empty slot in the middle of the deque. In CPython without the GIL the entire deque is locked during this append operation, so the scenario above is not possible, and when the GIL is used, the GIL is not dropped until the append operation is done, as the deque is written in C, not in python. you can control the GIL in C extensions, but you cannot control it in python. CPython list has atomic operations too, its append and pop, are both thread-safe. however implementing a deque using a simple list will be very slow, popleft done with pop(0) will be an O(n) operation, whereas python deque makes it O(1) Every other operation in the list like index or remove or indexing it are not thread-safe, and can remove or access the wrong element, demo: concurrent append + remove removing the wrong element, you'll need an external lock on all operations if you use those operations concurrently on a list or deque. | 2 | 4 |
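A hedged, deliberately exaggerated demo of the lost-update race sketched in the answer above (the time.sleep(0) is artificial and only there to make a thread switch between the two steps easy to hit; results vary by run and interpreter):
import threading
import time

block = [None] * 64
head = 0
lock = threading.Lock()

def append_racy(obj):
    global head
    block[head] = obj        # step 1: write the slot
    time.sleep(0)            # encourage a switch between the two steps
    head += 1                # step 2: bump the head counter

def append_locked(obj):
    global head
    with lock:               # both steps now happen as one unit
        block[head] = obj
        head += 1

def worker(append, tag):
    for i in range(8):
        append(f"{tag}{i}")

for append in (append_racy, append_locked):
    block[:] = [None] * 64
    head = 0
    threads = [threading.Thread(target=worker, args=(append, tag)) for tag in "AB"]
    for t in threads: t.start()
    for t in threads: t.join()
    stored = sum(v is not None for v in block[:16])
    print(append.__name__, "kept", stored, "of 16 items, head ended at", head)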