Dataset columns (type, observed range):
question_id: int64 (59.5M to 79.6M)
creation_date: string date (2020-01-01 00:00:00 to 2025-05-14 00:00:00)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
79,564,155
2025-4-9
https://stackoverflow.com/questions/79564155/how-to-decorate-the-enter-and-exit-instance-methods
I would like to use the decorator deco to reshape the __enter__ and __exit__ instance methods. The code runs but the wrapper is not executed in the with section. Find the actual output below, followed by expected output and finally the code. Current output: In __init__ ------------------- In __enter__ ------------------- In wrapper Blah, blah, blah ------------------- In __exit__ Expected output: In __init__ ------------------- In wrapper In __enter__ ------------------- In wrapper Blah, blah, blah ------------------- In wrapper In __exit__ class contMgr(): def __init__(self): print("In __init__") pass def __enter__(self): print("In __enter__") return self def __exit__(self, exc_type, exc_val, exc_tb): print("In __exit__") pass def __call__(self, *args, **kwargs): print("In __call_") def brol(): print("brol") def deco(func): def wrapper(*args,**kwargs): print("In wrapper") result = func(*args, **kwargs) return result return wrapper @deco def test(): print("Blah, blah, blah") mgr = contMgr() mgr.__enter__ = deco(mgr.__enter__) mgr.__exit__ = deco(mgr.__exit__) print("------------------------") with mgr: print("------------------------") test() print("------------------------")
The short answer As others have pointed out, the problem is that it's the __enter__ and __exit__ methods of the class that will get called, not those of the instance. But you said this is for testing purposes, and you don't want the decorators to be there permanently, so the solution is to alter the class methods temporarily. You can do it manually, of course: change the function on the class, run the test, and change it back, but unittest.mock.patch can automate that for you. Using patch Here's how you can use patch as a context manager which will restore the original function once you leave the context manager's scope: with (patch("__main__.contMgr.__enter__", deco(contMgr.__enter__)), patch("__main__.contMgr.__exit__", deco(contMgr.__exit__))): with mgr: test() will call the decorated enter and exit only inside this with statement. Here's a longer example with output: print("Using patch") with (patch("__main__.contMgr.__enter__", deco(contMgr.__enter__)), patch("__main__.contMgr.__exit__", deco(contMgr.__exit__))): with mgr: test() print("\nNot using patch") with mgr: test() Output: Using patch In wrapper In __enter__ In wrapper Blah, blah, blah In wrapper In __exit__ Not using patch In __enter__ In wrapper Blah, blah, blah In __exit__ Documentation https://docs.python.org/3/library/unittest.mock.html#patch
2
0
79,565,465
2025-4-9
https://stackoverflow.com/questions/79565465/python-subprocess-run-error-filenotfounderror
I am currently creating an application in python which uses the Youtube Data API and includes the following code block: import os from subprocess import run os.chdir(os.path.dirname(__file__)) command = 'python3.10 uploadYoutube.py --file="blank.mp4" --title="Blank" --description="THIS IS YOUR BLANK VIDEO" --keywords="blank" --category="20" --privacyStatus="private"' terminal_output = run(command, capture_output=True).stdout print(terminal_output) This produces the error: terminal_output = run(command, capture_output=True).stdout File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 503, in run with Popen(*popenargs, **kwargs) as process: File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1456, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified Does anybody know how I could fix this error?
As Adon Bilivit stated, this is due to the first argument you pass to subprocess.run. As stated in the subprocess documentation, when you provide a string there is a platform-dependent interpretation: on POSIX, a string is interpreted as the name or path of the program to execute. Solution The first solution is to pass a sequence as the first subprocess.run argument: subprocess.run(command.split(), capture_output=True) You can also, as the documentation suggests, set shell to True. If shell is True, it is recommended to pass args as a string rather than as a sequence: subprocess.run(command, capture_output=True, shell=True)
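One caveat about command.split(): the question's command line contains quoted arguments with spaces (e.g. --description="THIS IS YOUR BLANK VIDEO"), and a plain str.split() will break those into separate pieces. Below is a minimal sketch using shlex.split, which understands shell-style quoting; whether python3.10 is actually resolvable on the Windows PATH is a separate assumption.

import shlex
import subprocess

command = ('python3.10 uploadYoutube.py --file="blank.mp4" --title="Blank" '
           '--description="THIS IS YOUR BLANK VIDEO" --privacyStatus="private"')

# shlex.split keeps quoted arguments together, unlike str.split()
args = shlex.split(command)
print(args[:4])  # ['python3.10', 'uploadYoutube.py', '--file=blank.mp4', '--title=Blank']

# Pass the sequence to subprocess.run; text=True decodes stdout to str
result = subprocess.run(args, capture_output=True, text=True)
print(result.stdout)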
1
3
79,565,937
2025-4-10
https://stackoverflow.com/questions/79565937/click-help-text-shows-dynamic-when-an-option-has-the-default-set-to-a-lambda
I have this code in a CLI: @click.option( "--username", default=lambda: os.environ.get("USER", None), show_default=True, help="User name for SSH configuration.", ) When I invoke the CLI with --help option, I get this: --username TEXT User name for SSH configuration. [default: (dynamic)] Is there a way to make click invoke the lambda function and show the actual username instead of (dynamic)? I know I can call that function before invoking the click decorator and pass that retrieved value as the default instead of lambda. I am trying to do better than that.
The Option.get_default method has a call option to call the default value when it's a callable. The option is True by default but is passed False only when generating help, so you can make the help generator call the callable simply by overriding Option.get_default with a wrapper that forces the call option to be True: import click class DefaultCallingOption(click.Option): def get_default(self, ctx, call=True): return super().get_default(ctx, True) so that: @click.command() @click.option( "--username", default=lambda: 'foo', show_default=True, help="User name to say hello to.", cls=DefaultCallingOption # use the custom Option class ) def hello(username): print(f'hello {username}') hello() produces the following help when given the --help option in the command line: Usage: test.py [OPTIONS] Options: --username TEXT User name to say hello to. [default: foo] --help Show this message and exit.
2
2
79,565,568
2025-4-10
https://stackoverflow.com/questions/79565568/interoperability-between-two-crates-with-pyo3-bindings
I have two Rust crates lib1 and lib2, for which I use pyo3 to generate bindings. I import the crate lib1 in lib2. In a separate crape lib1-py, I create a python class MyClass, and in lib2, I try to use MyClass as a parameter for a Python function, using : fn foo(param: PyRef<MyClass>) I separately use maturin to build both lib1-py and lib2 in the same environment. When I try to run the python code: import lib1 import lib2 param = lib1.MyClass() lib2.foo(param) I obtain the error TypeError: argument 'param': 'MyClass' object cannot be converted to ‘MyClass’. I suppose that Rust's crate system tries to be safe by treating types from different crates as separate entities, even if they are identical. How could I solve such an issue?
In general, this is impossible. Rust does not even guarantee MyClass will have the same memory layout in both builds. It is possible to design lib1-py to allow this, though. For example, if the entirety of MyClass is #[repr(C)] (that is, it and all of its fields are, recursively), you can take a Bound<PyAny> and convert it to Bound<MyClass> with into_ptr() and from_owned_ptr(). That is a dangerous method, though; if something someday stops being #[repr(C)], your code will become unsound. A probably better method (and what e.g. Polars does) is to have a method to serialize MyClass to some format, accessible from Python, and a corresponding method to deserialize (which doesn't need to be accessible from Python). Then you take Bound<PyAny> and call this method on it, and deserialize the result.
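To make the serialization route more concrete, here is roughly what the Python side of that pattern could look like. This is only a sketch with hypothetical names: serialize() and foo_from_bytes() are methods/functions that lib1-py and lib2 would have to expose themselves (for example via serde under the hood); they are not existing PyO3 APIs.

import lib1
import lib2

# Build the object with the crate that owns the type
param = lib1.MyClass()

# Hypothetical method: lib1-py serializes MyClass to bytes (e.g. bincode/JSON)
payload = param.serialize()

# Hypothetical function: lib2 deserializes the bytes into its own MyClass
# definition internally, then runs foo's logic on it
result = lib2.foo_from_bytes(payload)
print(result)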
2
1
79,565,260
2025-4-9
https://stackoverflow.com/questions/79565260/dynamically-script-conditional-statements-in-python
I have a fixed table with (name,value) pairs that are known. I.e.: Bob.age: 47 Bill.age: 44 Jane.age: 36 Steve.age: 22 I'd like a user to be able to write in a json file, a statement to dynamically generate a conditional statement to evaluate to true/false for a report: (Bob.age == Jane.age) || (Bob.age < Bill.age) I can do this manually via regex but I was wondering if there was already a tried and tested library for this kind of thing.
There are many template languages implemented in Python. There are whole lists of "tried and tested libraries": https://www.fullstackpython.com/template-engines.html , https://wiki.python.org/moin/Templating . Additionally, you can execute arbitrary strings as Python with eval (only do this with trusted input) – the sky is the limit. An example solution would be to use the jinja2 template language. Your JSON could look like the following: { "key": {% if (Bob.age == Jane.age) or (Bob.age < Bill.age) %}true{% else %}false{% endif %} }
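For completeness, a minimal eval-based sketch of the second suggestion, assuming the user's expression only uses the known names and that the input is trusted (eval runs arbitrary code, so don't feed it untrusted strings):

values = {"Bob.age": 47, "Bill.age": 44, "Jane.age": 36, "Steve.age": 22}
expr = "(Bob.age == Jane.age) || (Bob.age < Bill.age)"

# Normalize the JS-style boolean operators to Python syntax
expr = expr.replace("||", " or ").replace("&&", " and ")

# "Bob.age" is not a valid Python identifier, so substitute the literal values
for name, value in values.items():
    expr = expr.replace(name, str(value))

print(expr)        # (47 == 36)  or  (47 < 44)
print(eval(expr))  # False for this table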
1
2
79,562,908
2025-4-8
https://stackoverflow.com/questions/79562908/permutations-problem-on-python-i-cant-understand-where-i-am-wrong
Hi I have made this code. Τhe code generates all binary permutations of a string with a given number of zeros and ones, checks for adjacency based on a specific homogeneity condition (differing by exactly two consecutive bits), builds a graph of these permutations, and prints the graph with nodes sorted by their decimal values. def homogeneity_check(n1, n2) -> bool: positions = [i for i in range(len(n1)) if n1[i] != n2[i]] if len(positions) == 2 and positions[1] == positions[0] + 1: return True return False def find_permutations(s: str): def generate_permutations(s, current_index): if current_index == len(s) - 1: return [s] # Αν φτάσουμε στο τέλος της αλυσίδας, επιστρέφουμε την τρέχουσα παραλλαγή perms = [] for i in range(current_index, len(s)): # Ανταλλάζουμε τα στοιχεία στις θέσεις current_index και i s_list = list(s) s_list[current_index], s_list[i] = s_list[i], s_list[current_index] new_str = ''.join(s_list) # Βάζουμε την νέα παραλλαγή και συνεχίζουμε την αναδρομή perms.extend(generate_permutations(new_str, current_index + 1)) return perms perms = set(generate_permutations(s, 0)) # Χρησιμοποιούμε set για να αποφύγουμε τα διπλότυπα print(sorted(perms)) return sorted(perms) def generate_graph(s,t): num = ("0" * s + "1" * t) perms = find_permutations(num) graph = {n: [] for n in perms} # Κενός γραφός, με λιστές που αντιπροσωπεύουν τους γείτονες for i in range(len(perms)): for j in range(i + 1, len(perms)): # if homogeneity_check(perms[i], perms[j]): graph[perms[i]].append(perms[j]) graph[perms[j]].append(perms[i]) return graph def print_graph(graph): for node in sorted(graph, key=lambda x: int(x, 2)): node_decimal = int(node, 2) neighbors_decimal = sorted(int(neigh, 2) for neigh in graph[node]) print(f"{node_decimal} -> {neighbors_decimal}") I have this output with this input print_graph(generate_graph(2,4)): 15 -> [23] 23 -> [15, 27, 39] 27 -> [23, 29, 43] 29 -> [27, 30, 45] 30 -> [29, 46] 39 -> [23, 43] 43 -> [27, 39, 45, 51] 45 -> [29, 43, 46, 53] 46 -> [30, 45, 54] 51 -> [43, 53] 53 -> [45, 51, 54, 57] 54 -> [46, 53, 58] 57 -> [53, 58] 58 -> [54, 57, 60] 60 -> [58] but I want this 60 -> [58, 57, 54, 53, 46, 45, 30, 29] 58 -> [60, 57, 51, 54, 43, 46, 27, 30] 57 -> [60, 58, 53, 51, 45, 43, 29, 27] 54 -> [60, 58, 51, 53, 39, 46, 23, 30] 53 -> [60, 57, 54, 51, 45, 39, 29, 23] 46 -> [60, 58, 54, 43, 39, 45, 15, 30] 45 -> [60, 57, 53, 46, 43, 39, 29, 15] 30 -> [60, 58, 54, 46, 27, 23, 15, 29] 29 -> [60, 57, 53, 45, 30, 27, 23, 15] 51 -> [58, 57, 54, 53, 43, 39, 27, 23] 43 -> [58, 57, 46, 45, 51, 39, 27, 15] 27 -> [58, 57, 30, 29, 51, 43, 23, 15] 39 -> [54, 53, 46, 45, 51, 43, 23, 15] 23 -> [54, 53, 30, 29, 51, 27, 39, 15] 15 -> [46, 45, 30, 29, 43, 27, 39, 23] can someone explain me what I am doing wrong. I can't understand how to find the other neighbours for each node.
You claim to expect 46 and 15 to be adjacent in the graph. But 46 represents 101110 and 15 represents 001111. Those are not adjacent by the adjacency rule that you've described (and coded). Therefore either your expectation is wrong, or your adjacency rule is wrong. (Spot checking, 23 represents 010111, which indeed should be the only thing adjacent to 001111 by your stated rule...) You can get the output that you say you expect with the following changes. First, change your homogeneity check to only require 2 differing positions (not necessarily adjacent ones). def homogeneity_check(n1, n2) -> bool: positions = [i for i in range(len(n1)) if n1[i] != n2[i]] if len(positions) == 2: # and positions[1] == positions[0] + 1: return True return False Second, get rid of your debugging print in find_permutations: def find_permutations(s: str): def generate_permutations(s, current_index): if current_index == len(s) - 1: return [s] # If we reach the end of the string, return the current variation perms = [] for i in range(current_index, len(s)): # Swap the elements at positions current_index and i s_list = list(s) s_list[current_index], s_list[i] = s_list[i], s_list[current_index] new_str = ''.join(s_list) # Add the new variation and continue the recursion perms.extend(generate_permutations(new_str, current_index + 1)) return perms perms = set(generate_permutations(s, 0)) # We use a set to avoid duplicates #print(sorted(perms)) return sorted(perms) Re-enable the homogeneity check in generate_graph: def generate_graph(s,t): num = ("0" * s + "1" * t) perms = find_permutations(num) graph = {n: [] for n in perms} # Empty graph, with lists representing the neighbors for i in range(len(perms)): for j in range(i + 1, len(perms)): if homogeneity_check(perms[i], perms[j]): graph[perms[i]].append(perms[j]) graph[perms[j]].append(perms[i]) return graph Add reverse=True to the sorts in print_graph: def print_graph(graph): for node in sorted(graph, key=lambda x: int(x, 2), reverse=True): node_decimal = int(node, 2) neighbors_decimal = sorted((int(neigh, 2) for neigh in graph[node]), reverse=True) print(f"{node_decimal} -> {neighbors_decimal}")
1
3
79,562,856
2025-4-8
https://stackoverflow.com/questions/79562856/python-flask-server-405-error-for-a-specific-path-when-hosted-on-azure-works-fi
I'm building a flask app to process images uploaded from a mobile device, before sending results back to the mobile app. I've successfully deployed the flask app on Azure, and can confirm it works with certain paths, /test for one. The path my mobile app uses to upload an image, is /upload-data. When I run my flask app locally on my own PC and connect to it, this path works fine, accepting the image uploaded and returning a response. However, when using the flask app deployed on Azure, the response I get for /upload-data is just 405 (Method not allowed). This is the top part of my code in my python flask backend: @app.route("/upload-data", methods=['POST']) def store_data(): #Image processing code etc... And this is how I make the post to the above path in my java android app with OkHTTP: private void uploadData(String imagePath, String pointCloudPath) { OkHttpClient client = new OkHttpClient.Builder() .connectTimeout(10, TimeUnit.SECONDS) .writeTimeout(10, TimeUnit.SECONDS) .readTimeout(responseTimeOut, TimeUnit.SECONDS) .build(); File imageFile = new File(imagePath); //Create request body RequestBody requestBody = new MultipartBody.Builder() .setType(MultipartBody.FORM) .addFormDataPart("image", imageFile.getName(), RequestBody.create(imageFile, MediaType.parse("image/jpeg"))) .build(); //Create request itself String url = "http://" + backendUrl; Request request = new Request.Builder() .url(url+"/upload-data") .post(requestBody) .build(); //Execute request in background Log.d("OkHTTP Image Upload", "Creating image upload request to: " + request.url()); client.newCall(request).enqueue(new Callback() { @Override public void onFailure(@NonNull Call call, @NonNull IOException e) { e.printStackTrace(); Log.e("OkHTTP Image Upload", "Failed: " + e.getMessage()); runOnUiThread(() -> textStatus.setText("Request Failed: " + e.getMessage())); } @Override public void onResponse(@NonNull Call call, @NonNull Response response) { if (response.isSuccessful()) { Log.d("OkHTTP Upload", "Success"); InputStream responseZipStream = response.body().byteStream(); runOnUiThread(() -> processAnalysisResponse(responseZipStream)); } else { Log.e("OkHTTP Image Upload", "Error: "+response.code()); runOnUiThread(() -> textStatus.setText("Request Error: "+response.code())); } } }); }
I notice you're making your requests over HTTP, not HTTPS. I imagine Azure is stricter about POST requests than GET requests because of the request body that would need encrypting. Make sure you have "HTTPS Only" turned off in the Azure Portal.
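One way to check this theory without the Android app in the loop is to replay a similar multipart POST from Python with requests and look at what the http:// endpoint actually returns (a redirect to https would explain a POST turning into a GET and hence the 405). The hostname and file below are placeholders, not values from the question:

import requests

base = "yourapp.azurewebsites.net"  # placeholder: your Azure app's hostname

with open("test.jpg", "rb") as f:
    files = {"image": ("test.jpg", f, "image/jpeg")}
    # allow_redirects=False exposes any HTTP -> HTTPS redirect as a 301/307
    resp = requests.post(f"http://{base}/upload-data", files=files,
                         allow_redirects=False)

print(resp.status_code, resp.headers.get("Location"))
# Then repeat with f"https://{base}/upload-data" and compare the results.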
1
3
79,561,979
2025-4-8
https://stackoverflow.com/questions/79561979/regex-replace-numbers-between-to-characters
I have a string 'manual__2025-04-08T11:37:13.757109+00:00' and I want 'manual__2025-04-08T11_37_13_00_00' I know how to substitute the : and + using 'manual__2025-04-08T11:37:13.757109+00:00'.replace(':','_').replace('+','_') but I also want to get rid of the numbers between the two characters '.' and '+'. I'm using python.
You could do a regex substitution using re.sub: >>> import re >>> s = 'manual__2025-04-08T11:37:13.757109+00:00' >>> new_s = re.sub(r'\.\d+(?=\+)', '', s).replace(':', '_').replace('+', '_') >>> new_s 'manual__2025-04-08T11_37_13_00_00'
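An alternative that avoids hand-writing the pattern is to parse the timestamp with datetime.fromisoformat and re-format it. A sketch, assuming the string always has a prefix separated by "__" and a ±HH:MM offset:

from datetime import datetime

s = 'manual__2025-04-08T11:37:13.757109+00:00'

prefix, ts = s.split('__', 1)
dt = datetime.fromisoformat(ts)   # parses the .757109+00:00 part too

off = dt.strftime('%z')           # '+0000'
new_s = f"{prefix}__{dt:%Y-%m-%dT%H_%M_%S}_{off[1:3]}_{off[3:]}"
print(new_s)                      # manual__2025-04-08T11_37_13_00_00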
1
1
79,561,367
2025-4-8
https://stackoverflow.com/questions/79561367/how-to-disable-robot-framework-automatically-logging-kubelibrary-response-to-deb
I am utilizing robot framework's KubeLibrary to interact with my k8s cluster. By default, any function and response is automatically logged to DEBUG in robot framework, meaning that it's response will be visible in the log.html that robot framework generates. I want to disable this logging for some parts of my code or remove it afterwards somehow, e.g. I don't want the secrets to be part of the log.html regardless of log level. Current implementation which includes response in log.html when viewing DEBUG. from KubeLibrary import KubeLibrary ... def some_keyword(): instance.kube = KubeLibrary(kube_config=instance.kubeconfig) secrets_matching_secret_name = instance.kube.get_secrets_in_namespace( name_pattern=f"^{secret_name}$", namespace=namespace ) Wishful skeleton code for how it could be done instance.kube = KubeLibrary(kube_config=instance.kubeconfig) instance.kube.disable_debug_log() secrets_matching_secret_name = instance.kube.get_secrets_in_namespace( name_pattern=f"^{secret_name}$", namespace=namespace ) instance.kube.enable_debug_log() Is there any way I can disable it? Or somehow filter it out from log.html using rebot or similar?
I've been dealing with this exact issue in Robot Framework + KubeLibrary. After trying several approaches, here's what actually worked for me: import logging from contextlib import contextmanager @contextmanager def suppress_debug_logging(): """Temporarily suppress DEBUG level logging to keep secrets out of Robot logs.""" # Store the original log levels here original_levels = {} for name, logger in logging.root.manager.loggerDict.items(): if isinstance(logger, logging.Logger): original_levels[name] = logger.level logger.setLevel(logging.INFO) # Handle root logger too root_level = logging.root.level logging.root.setLevel(logging.INFO) try: yield # Run the code that shouldn't be debug-logged here finally: for name, level in original_levels.items(): logging.getLogger(name).setLevel(level) logging.root.setLevel(root_level) # Usage example: def some_keyword(): instance.kube = KubeLibrary(kube_config=instance.kubeconfig) with suppress_debug_logging(): secrets_matching_secret_name = instance.kube.get_secrets_in_namespace( name_pattern=f"^{secret_name}$", namespace=namespace ) This approach works because KubeLibrary uses Python's standard logging module before Robot Framework gets a chance to capture the logs. By temporarily bumping the log level from DEBUG to INFO for all loggers, you prevent the sensitive data from being logged. I tried Robot's Set Log Level keyword first, but that didn't work since it only affects Robot's own logging, not the underlying Python logging. The rebot approach is too blunt: you lose the entire keyword entry rather than just hiding the sensitive parts. One thing to watch out for: if your KubeLibrary is using a logger with a custom name, you might need to adjust the logging level directly for that logger. If you're running into issues with this approach, let me know your specific version of KubeLibrary and I can suggest more options.
1
3
79,561,773
2025-4-8
https://stackoverflow.com/questions/79561773/how-to-export-data-frame-with-color-to-csv
I want to add color to cells of Status Pass and Fail. I tried following code it will print a colored data frame but exported csv was not colored. And when I applied two rules (green and red), only green shows. df = pd.DataFrame([{'Status':'Pass', 'Value': '0'}, {'Status':'Pass', 'Value': '1'}, {'Status':'FAIL', 'Value': '2'}]) # add background color green to all lines with status 'PASS' df.to_csv('test.csv') df.style.map(lambda x: 'background-color: green' if x == 'Pass' else '', subset=['Status'])
You can't export colors to a text/CSV file. Furthermore, you ran your styling command after to_csv, so it wouldn't have affected the file even if this were supported. You can export as xlsx by chaining to_excel onto style.map: ( df.style.map( lambda x: 'background-color: green' if x == 'Pass' else '', subset=['Status'], ).to_excel('color_test.xlsx') ) Output:
1
2
79,561,147
2025-4-8
https://stackoverflow.com/questions/79561147/change-column-type-in-polars-dataframe
I have a Polars DataFrame below. import polars as pl df = pl.DataFrame({"a":["1.2", "2.3", "5.4"], "b":["0.4", "0.03", "0.12"], "c":["AA", "BB", "CC"]}) >>> df a b c str str str ------------------------- "1.2" "0.04" "AA" "2.3" "0.3" "BB" "3.5" "0.12" "CC" How can I convert the columns to specific types? In this case, I want to convert columns a and b into floats. I expect below. >>> df a b c f64 f64 str ------------------------ 1.2 0.04 "AA" 2.3 0.3 "BB" 3.5 0.12 "CC"
There are multiple approaches in Polars to convert a column's type. Using the cast method with with_columns is one of the most efficient approaches: df = df.with_columns([ pl.col("a").cast(pl.Float64), pl.col("b").cast(pl.Float64) ]) Another approach: df = df.select([ pl.col("a").cast(pl.Float64), pl.col("b").cast(pl.Float64), pl.col("c") ]) And: df = df.cast({"a": pl.Float64, "b": pl.Float64}) For large datasets, you may try the lazy API: df = df.lazy().with_columns([ pl.col("a").cast(pl.Float64), pl.col("b").cast(pl.Float64) ]) df_result = df.collect() For more information: https://docs.pola.rs/user-guide/expressions/casting/ Output: ┌─────┬──────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ str │ ╞═════╪══════╪═════╡ │ 1.2 ┆ 0.4 ┆ AA │ │ 2.3 ┆ 0.03 ┆ BB │ │ 5.4 ┆ 0.12 ┆ CC │ └─────┴──────┴─────┘
2
3
79,561,013
2025-4-8
https://stackoverflow.com/questions/79561013/is-it-ok-to-not-use-the-value-of-the-index-i-inside-a-for-loop
Would it be frowned upon if the index variable i were not used inside a for loop? I have never come across a code that didn't use the value of the index while it iterates through the loop. def questionable(): for i in range(3): print('Is this OK?') # (or do something more complicated) # as opposed to: def proper(): for i in range(3): print(i) # (or do something that the value of 'i' is necessary) What's a more Pythonic way to rewrite the function questionable, that is, to repeatedly do something without using the iteration variable?
When you want to perform an action a certain number of times without caring about the loop variable, the convention is to use an underscore (_) instead of a named variable like i. This signals to readers of your code: "I'm not going to use this variable." So your questionable function can be rewritten in a more Pythonic way like this: def more_pythonic(): for _ in range(3): print('Is this OK?')
4
10
79,582,846
2025-4-19
https://stackoverflow.com/questions/79582846/the-python-mcp-server-with-stdio-transport-throws-an-error-sse-connection-not
I've been trying to run an example from the official repo of Model Context Protocol for Python (https://github.com/modelcontextprotocol/python-sdk). But it keeps giving me Error in /message route: Error: SSE connection not established", when I click the "Connect" button in the webpage of MCP Inspector. The problem is that I'm not even trying to use the SSE connection, so I'm really confused about the error and the logs. Here's the code (located in src/ folder): # server.py from mcp.server.fastmcp import FastMCP # Create an MCP server mcp = FastMCP("Demo") # Add an addition tool @mcp.tool() def add(a: int, b: int) -> int: """Add two numbers""" return a + b # Add a dynamic greeting resource @mcp.resource("greeting://{name}") def get_greeting(name: str) -> str: """Get a personalized greeting""" return f"Hello, {name}!" That's how I prepared the project: I used uv to initialize the virtual environment in the root folder. Installed project dependencies with uv add "mcp[cli]", as said in the repo's guide. Info on program's versions: Node.js v22.14.0 [email protected] C:\Users...\AppData\Roaming\npm ├── @modelcontextprotocol/[email protected] ├── @modelcontextprotocol/[email protected] ├── @modelcontextprotocol/[email protected] └── @modelcontextprotocol/[email protected] My actions to run the server are: mcp dev src/server.py in the Powershell while being in the project's root directory. Then, I see ⚙️ Proxy server listening on port 6277 🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀 In the console, and go to the page http://127.0.0.1:6274. 3. On the page the STDIO trasport method is already set. There's also a command uv with arguments run --with mcp mcp run src/server.py(please see the screenshot attached), so I click on "Connect" button. Nothing happens on the UI, but in the logs in the console I see New SSE connection Query parameters: [Object: null prototype] { transportType: 'stdio', command: 'uv', args: 'run --with mcp mcp run src/server.py', env: '{ ... # all my env variables, PATH and etc.}' } Stdio transport: command=C:\Users\...\.local\bin\uv.exe, args=run,--with,mcp,mcp,run,src/server.py Spawned stdio transport Connected MCP client to backing server transport Created web app transport Created web app transport Set up MCP proxy I click on "Connect" button again (please see the screenshot attached) and see the "Connection Error, is your MCP server running?" 
on UI, and the following in logs: New SSE connection Query parameters: [Object: null prototype] { transportType: 'stdio', command: 'uv', args: 'run --with mcp mcp run src/server.py', env: '{...}' } Stdio transport: command=C:\Users\...\.local\bin\uv.exe, args=run,--with,mcp,mcp,run,src/server.py Spawned stdio transport Connected MCP client to backing server transport Created web app transport Created web app transport Set up MCP proxy Received message for sessionId 5ed68d2c-6c0f-47e7-9972-3fe99c43a630 Error in /message route: Error: SSE connection not established at SSEServerTransport.handlePostMessage (file:///C:/Users/.../AppData/Roaming/npm/node_modules/@modelcontextprotocol/inspector/node_modules/@modelcontextprotocol/sdk/dist/esm/server/sse.js:61:19) at file:///C:/Users/.../AppData/Roaming/npm/node_modules/@modelcontextprotocol/inspector/server/build/index.js:130:25 at Layer.handleRequest (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\lib\layer.js:152:17) at next (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\lib\route.js:157:13) at Route.dispatch (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\lib\route.js:117:3) at handle (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\index.js:435:11) at Layer.handleRequest (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\lib\layer.js:152:17) at C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\index.js:295:15 at processParams (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\index.js:582:12) at next (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\router\index.js:291:5) Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client at ServerResponse.setHeader (node:_http_outgoing:699:11) at ServerResponse.header (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\express\lib\response.js:684:10) at ServerResponse.json (C:\Users\...\AppData\Roaming\npm\node_modules\@modelcontextprotocol\inspector\node_modules\express\lib\response.js:247:10) at file:///C:/Users/.../AppData/Roaming/npm/node_modules/@modelcontextprotocol/inspector/server/build/index.js:134:25 at process.processTicksAndRejections (node:internal/process/task_queues:105:5) P.S. I have tried adding if __name__ == "__main__": mcp.run(transport='stdio') At the end of the server.py, as some examples propose, but it didn't affect anything. Any help appreciated! Thanks! MCP Inspector UI page Screenshot
Here's what worked for me: I was only able to run the Inspector correctly in VS Code's Simple Browser (Ctrl + Shift + P → Simple Browser: Show → Paste your Inspector link—in my case, it was running on http://127.0.0.1:6274). I still have no idea what the problem is. I couldn't make the Inspector work in Edge or Chrome. It seems I have tried EVERYTHING: replacing STDIO with SSE, running the MCP server with SSE transport on Ubuntu server and connecting to it from the Inspector running locally, cleaning cache, opening the Inspector in Incognito mode, stopping other processes even though I couldn't find anything messing up with the ports needed, etc., etc. Nothing helped but simply running the Inspector in VS Code Browser did. It works !! I am really confused how there aren't any discussions on this. Hope there's something that might be fixed on the SDK/Inspector's side
2
1
79,583,320
2025-4-20
https://stackoverflow.com/questions/79583320/same-size-of-text-labels-and-buttons
I am not really a programmer. I asked an AI to generate a matrix of buttons, with rows and columns having text-labels on top and left side of the window. I wanted the matrix to be scrollable but the labels on top/left should always remain visible. The code the AI gave me is actually doing what I described (sort of), but the scrolling somehow displaces the positions of labels and buttons, so that they are not really on the right spot if you scroll too much. It seems to me that the sizes of the label-texts and buttons are not exactly the same size? Does any one have an idea what is this not working well in the code of AI (oh, and also when I run the script, first nothing is visible on the main window!): import tkinter as tk class ScrollableMatrix: def __init__(self, root, rows=20, cols=30): self.root = root self.rows = rows self.cols = cols self.font = ('TkDefaultFont', 10) # Common font for all elements # Create main container self.container = tk.Frame(root) self.container.pack(fill="both", expand=True) # Create dummy widget to calculate sizes self._init_size_constants() # Create canvas widgets self.top_header_canvas = tk.Canvas(self.container, height=self.row_height) self.left_header_canvas = tk.Canvas(self.container, width=self.col_width) self.main_canvas = tk.Canvas(self.container) # Create scrollbars self.h_scroll = tk.Scrollbar(self.container, orient="horizontal") self.v_scroll = tk.Scrollbar(self.container, orient="vertical") # Grid layout configuration self.top_header_canvas.grid(row=0, column=1, sticky="ew") self.left_header_canvas.grid(row=1, column=0, sticky="ns") self.main_canvas.grid(row=1, column=1, sticky="nsew") self.v_scroll.grid(row=1, column=2, sticky="ns") self.h_scroll.grid(row=2, column=1, sticky="ew") # Configure grid weights self.container.grid_rowconfigure(1, weight=1) self.container.grid_columnconfigure(1, weight=1) # Create internal frames self.top_header_frame = tk.Frame(self.top_header_canvas) self.left_header_frame = tk.Frame(self.left_header_canvas) self.main_frame = tk.Frame(self.main_canvas) # Add frames to canvases self.top_header_canvas.create_window((0, 0), window=self.top_header_frame, anchor="nw") self.left_header_canvas.create_window((0, 0), window=self.left_header_frame, anchor="nw") self.main_canvas.create_window((0, 0), window=self.main_frame, anchor="nw") # Configure scroll commands self.main_canvas.configure( xscrollcommand=self._sync_main_x, yscrollcommand=self._sync_main_y ) self.h_scroll.configure(command=self._sync_h_scroll) self.v_scroll.configure(command=self._sync_v_scroll) # Bind configuration events self.main_frame.bind("<Configure>", self._on_main_configure) self.top_header_frame.bind("<Configure>", self._on_top_header_configure) self.left_header_frame.bind("<Configure>", self._on_left_header_configure) # Create matrix elements self._create_labels_and_buttons() # Configure uniform column/row sizes self._configure_uniform_sizing() def _init_size_constants(self): """Calculate size constants using dummy widgets""" dummy_frame = tk.Frame(self.root) # Create dummy button dummy_btn = tk.Button(dummy_frame, text="X", font=self.font, width=10, height=2) dummy_btn.grid(row=0, column=0) # Create dummy label dummy_lbl = tk.Label(dummy_frame, text="X", font=self.font, width=10, height=2) dummy_lbl.grid(row=1, column=0) # Force geometry calculation dummy_frame.update_idletasks() # Get dimensions self.col_width = dummy_btn.winfo_width() self.row_height = dummy_btn.winfo_height() # Verify label matches button size lbl_width = dummy_lbl.winfo_width() 
lbl_height = dummy_lbl.winfo_height() if lbl_width != self.col_width or lbl_height != self.row_height: self.col_width = max(self.col_width, lbl_width) self.row_height = max(self.row_height, lbl_height) dummy_frame.destroy() def _create_labels_and_buttons(self): # Create column headers for j in range(self.cols): lbl = tk.Label(self.top_header_frame, text=f"Col {j+1}", font=self.font, width=10, height=2, relief="ridge", borderwidth=2, bg="red") lbl.grid(row=0, column=j, sticky="nsew") # Create row headers for i in range(self.rows): lbl = tk.Label(self.left_header_frame, text=f"Row {i+1}", font=self.font, width=10, height=2, relief="ridge", borderwidth=2, bg="red") lbl.grid(row=i, column=0, sticky="nsew") # Create buttons in main grid for i in range(self.rows): for j in range(self.cols): btn = tk.Button(self.main_frame, text=f"({i+1},{j+1})", font=self.font, width=10, height=2, relief="groove", borderwidth=2) btn.grid(row=i, column=j, sticky="nsew") def _configure_uniform_sizing(self): """Set uniform column widths and row heights""" for j in range(self.cols): self.main_frame.columnconfigure(j, minsize=self.col_width, weight=1) self.top_header_frame.columnconfigure(j, minsize=self.col_width, weight=1) for i in range(self.rows): self.main_frame.rowconfigure(i, minsize=self.row_height, weight=1) self.left_header_frame.rowconfigure(i, minsize=self.row_height, weight=1) def _sync_h_scroll(self, *args): self.main_canvas.xview(*args) self.top_header_canvas.xview(*args) def _sync_v_scroll(self, *args): self.main_canvas.yview(*args) self.left_header_canvas.yview(*args) def _sync_main_x(self, first, last): self.h_scroll.set(first, last) self.top_header_canvas.xview_moveto(first) def _sync_main_y(self, first, last): self.v_scroll.set(first, last) self.left_header_canvas.yview_moveto(first) def _on_main_configure(self, event): self.main_canvas.configure(scrollregion=self.main_canvas.bbox("all")) def _on_top_header_configure(self, event): self.top_header_canvas.configure(scrollregion=self.top_header_canvas.bbox("all")) def _on_left_header_configure(self, event): self.left_header_canvas.configure(scrollregion=self.left_header_canvas.bbox("all")) if __name__ == "__main__": root = tk.Tk() root.title("Uniform Size Matrix") root.geometry("800x600") matrix = ScrollableMatrix(root, rows=50, cols=50) root.mainloop()
I improved your code and solved the scrolling issue. I also improved performance. I made a numerous changes and added new features. If I go to write them all, it will occupy a lot of space. However, if you require, I could provide the list of changes. Here is the full code: import tkinter as tk class OptimizedScrollableMatrix: def __init__(self, root, rows=20, cols=30): self.root = root self.rows = rows self.cols = cols self.font = ('TkDefaultFont', 10) # Pre-calculate cell dimensions self._calculate_cell_dimensions() # Create main container self.container = tk.Frame(root) self.container.pack(fill="both", expand=True) # Create canvas widgets self.top_header_canvas = tk.Canvas(self.container, height=self.row_height, highlightthickness=0, bg='#f0f0f0') self.left_header_canvas = tk.Canvas(self.container, width=self.col_width, highlightthickness=0, bg='#f0f0f0') self.main_canvas = tk.Canvas(self.container, highlightthickness=0, bg='white') # Create scrollbars self.h_scroll = tk.Scrollbar(self.container, orient="horizontal") self.v_scroll = tk.Scrollbar(self.container, orient="vertical") # Grid layout self.top_header_canvas.grid(row=0, column=1, sticky="ew") self.left_header_canvas.grid(row=1, column=0, sticky="ns") self.main_canvas.grid(row=1, column=1, sticky="nsew") self.v_scroll.grid(row=1, column=2, sticky="ns") self.h_scroll.grid(row=2, column=1, sticky="ew") self.container.grid_rowconfigure(1, weight=1) self.container.grid_columnconfigure(1, weight=1) # Setup scrolling self._setup_scrolling() # Create grid structure self._create_virtual_grid() # Draw initial content self._draw_visible_area() # Setup event bindings self._bind_events() def _calculate_cell_dimensions(self): temp = tk.Frame(self.root) temp.grid_propagate(False) test_label = tk.Label(temp, text="Sample", font=self.font, width=10, height=2, relief="ridge") test_label.grid(row=0, column=0) temp.update_idletasks() self.col_width = test_label.winfo_width() self.row_height = test_label.winfo_height() temp.destroy() def _setup_scrolling(self): self.main_canvas.configure(xscrollcommand=self._sync_main_x, yscrollcommand=self._sync_main_y) self.h_scroll.configure(command=self._sync_h_scroll) self.v_scroll.configure(command=self._sync_v_scroll) self.top_header_canvas.configure(xscrollcommand=self._sync_header_x) self.left_header_canvas.configure(yscrollcommand=self._sync_header_y) def _sync_h_scroll(self, *args): self.main_canvas.xview(*args) self.top_header_canvas.xview(*args) self._draw_visible_area() def _sync_v_scroll(self, *args): self.main_canvas.yview(*args) self.left_header_canvas.yview(*args) self._draw_visible_area() def _sync_main_x(self, first, last): self.h_scroll.set(first, last) self.top_header_canvas.xview_moveto(first) def _sync_main_y(self, first, last): self.v_scroll.set(first, last) self.left_header_canvas.yview_moveto(first) def _sync_header_x(self, first, last): self.main_canvas.xview_moveto(first) def _sync_header_y(self, first, last): self.main_canvas.yview_moveto(first) def _create_virtual_grid(self): self.total_width = self.cols * self.col_width self.total_height = self.rows * self.row_height self.main_canvas.configure(scrollregion=(0, 0, self.total_width, self.total_height)) self.top_header_canvas.configure(scrollregion=(0, 0, self.total_width, self.row_height)) self.left_header_canvas.configure(scrollregion=(0, 0, self.col_width, self.total_height)) self._draw_headers() def _draw_headers(self): for col in range(self.cols): x = col * self.col_width + self.col_width // 2 
self.top_header_canvas.create_rectangle( col * self.col_width, 0, (col + 1) * self.col_width, self.row_height, outline="black", fill="red" ) self.top_header_canvas.create_text(x, self.row_height // 2, text=f"Col {col+1}", font=self.font) for row in range(self.rows): y = row * self.row_height + self.row_height // 2 self.left_header_canvas.create_rectangle( 0, row * self.row_height, self.col_width, (row + 1) * self.row_height, outline="black", fill="red" ) self.left_header_canvas.create_text(self.col_width // 2, y, text=f"Row {row+1}", font=self.font) def _draw_visible_area(self): self.main_canvas.delete("cell") x_start = int(self.main_canvas.canvasx(0)) y_start = int(self.main_canvas.canvasy(0)) visible_width = self.main_canvas.winfo_width() visible_height = self.main_canvas.winfo_height() x_end = x_start + visible_width + self.col_width y_end = y_start + visible_height + self.row_height first_col = max(0, x_start // self.col_width) last_col = min(self.cols - 1, x_end // self.col_width) first_row = max(0, y_start // self.row_height) last_row = min(self.rows - 1, y_end // self.row_height) for row in range(first_row, last_row + 1): for col in range(first_col, last_col + 1): x1 = col * self.col_width y1 = row * self.row_height x2 = x1 + self.col_width y2 = y1 + self.row_height tag = f"cell_{row}_{col}" rect_id = self.main_canvas.create_rectangle( x1, y1, x2, y2, outline="gray", fill="white", tags=("cell", tag) ) self.main_canvas.create_text( x1 + self.col_width // 2, y1 + self.row_height // 2, text=f"({row+1},{col+1})", font=self.font, tags=("cell", tag) ) # Hover bindings on the group tag (rectangle + text) self.main_canvas.tag_bind(tag, "<Enter>", lambda e, r=tag: self._on_hover_enter(r)) self.main_canvas.tag_bind(tag, "<Leave>", lambda e, r=tag: self._on_hover_leave(r)) def _on_hover_enter(self, tag): items = self.main_canvas.find_withtag(tag) for item in items: if self.main_canvas.type(item) == "rectangle": self.main_canvas.itemconfig(item, fill="red") def _on_hover_leave(self, tag): items = self.main_canvas.find_withtag(tag) for item in items: if self.main_canvas.type(item) == "rectangle": self.main_canvas.itemconfig(item, fill="white") def _bind_events(self): self.main_canvas.bind("<Configure>", self._on_canvas_configure) self.main_canvas.bind_all("<MouseWheel>", self._on_mouse_wheel) self.main_canvas.bind_all("<Shift-MouseWheel>", self._on_shift_mouse_wheel) def _on_canvas_configure(self, event): self._draw_visible_area() def _on_mouse_wheel(self, event): self.main_canvas.yview_scroll(-1 * (event.delta // 120), "units") self._draw_visible_area() def _on_shift_mouse_wheel(self, event): self.main_canvas.xview_scroll(-1 * (event.delta // 120), "units") self._draw_visible_area() if __name__ == "__main__": root = tk.Tk() root.title("Optimized Scrollable Matrix") root.geometry("800x600") matrix = OptimizedScrollableMatrix(root, rows=100, cols=100) root.mainloop() Output:
1
2
79,583,654
2025-4-20
https://stackoverflow.com/questions/79583654/logits-dont-change-in-a-custom-reimplementation-of-a-clip-model-pytorch
The problem The similarity scores are almost the same for texts that describe both a photo of a cat and a dog (the photo is of a cat). Cat similarity: tensor([[-3.5724]], grad_fn=<MulBackward0>) Dog similarity: tensor([[-3.4155]], grad_fn=<MulBackward0>) The code for CLIP model The code is based on the checkpoint of openai/clip-vit-base-patch32. The encode_text function takes a raw input and turns it into embeddings later fed into the forward method. I'm certain that the layers' names and sizes are correct, as the checkpoint fits the model without errors due to missing or unexpected layers. class CLIP(nn.Module): def __init__(self, project_dim: int = 768, embed_dim: int = 512): super(CLIP, self).__init__() self.vision_model = ImageEncoder(project_dim = project_dim) self.text_model = TextEncoder(embed_dim = embed_dim) self.tokenizer = TorchTokenizer() self.logit_scale = nn.Parameter(torch.ones([]) * 0.7) self.visual_projection = nn.Linear(project_dim, embed_dim, bias = False) self.text_projection = nn.Linear(embed_dim, embed_dim, bias = False) self.vision_model.eval() self.text_model.eval() def forward(self, image: torch.Tensor, text_embed: torch.Tensor) -> torch.Tensor: " Compute the relationship between image and text " # get fixed size to comply with the checkpoint position_embeddings nn.Embedding(50, embed_dim) image = Resize(size=(224, 224))(image) image_features = self.vision_model(image) # projections text_features = self.text_projection(text_embed) image_features = self.visual_projection(image_features) # normalization text_features = F.normalize(text_features, dim = -1) image_features = F.normalize(image_features, dim = -1) logits = self.logit_scale.exp() * (image_features @ text_features.t()) return logits def encode_text(self, input_ids, attention_mask = None): """ Tokenize (if needed) and encode texts, returning embeddings and mask. Function for ConditionalPromptNorm """ # tokenize strings if raw text passed if attention_mask is None: input_ids, attention_mask = self.tokenizer.tokenize(input_ids) # ensure batch dim if input_ids.dim() == 1: input_ids = input_ids.unsqueeze(0) with torch.no_grad(): text_emb = self.text_model(input_ids.long(), attention_mask) return text_emb The code for the text encoder I have checked that getting the EOS token does work correctly. Also, the types of layers, like nn.Embedding and nn.Parameter are correct for each layer as it would conflict with the checkpoint if it weren't the same type. 
class TextEncoder(nn.Module): def __init__(self, embed_dim: int = 512): super(TextEncoder, self).__init__() vocab_size = 49408 self.embeddings = nn.Module() self.embeddings.token_embedding = nn.Embedding(vocab_size, embed_dim) # tokenizer's context_length must be set to 77 tokens self.embeddings.position_embedding = nn.Embedding(77, embed_dim) # 77 = context length self.encoder = Encoder(embed_size = embed_dim) self.final_layer_norm = nn.LayerNorm(embed_dim) def forward(self, text: torch.Tensor, attention_mask: torch.Tensor): x = self.embeddings.token_embedding(text.long()) # seq_length positions = torch.arange(x.size(1)) pos_embed = self.embeddings.position_embedding(positions) x += pos_embed.to(x.dtype).to(x.device) # obtain text embeddings x = x.permute(1, 0, 2) x = self.encoder(x, attention_mask) x = x.permute(1, 0, 2) # ensure batch dim if x.dim() == 2: x = x.unsqueeze(0) if attention_mask.dim() == 1: attention_mask = attention_mask.unsqueeze(0) # for each batch, get the last token (eos) x = x[torch.arange(x.size(0)), text.argmax(dim = -1)] return self.final_layer_norm(x) The attention class is from https://github.com/openai/CLIP/blob/main/clip/model.py#L58 with a slight modification to allow self and pooled attention (x and x[:1]). The Encoder I have checked that the tokenizer code works correctly. The MLP is the same as in the CLIP original code. Two linear layers with a ratio of 4 and a GELU in the middle. class EncoderLayer(nn.Module): def __init__(self, embed_size: int = 768, ratio: int = 4, num_heads: int = 8): super().__init__() self.layer_norm1 = nn.LayerNorm(embed_size) self.layer_norm2 = nn.LayerNorm(embed_size) self.mlp = MLP(embed_size = embed_size, ratio = ratio) self.self_attn = AttentionPool2d(num_heads = num_heads, embed_dim = embed_size) def forward(self, x: torch.Tensor, src_pad_key = None): x = self.layer_norm1(x) if src_pad_key is not None: attn_out = self.self_attn(x, src_pad_key = src_pad_key, use_self_attention = True) else: attn_out = self.self_attn(x) # normalize and apply residual connections x += attn_out x = self.layer_norm2(x) x += self.mlp(x) return x class Encoder(nn.Module): def __init__(self, embed_size = 768): super().__init__() self.layers = nn.ModuleList([EncoderLayer(embed_size = embed_size) for _ in range(12)]) def forward(self, x: torch.Tensor, attention_mask = None): if attention_mask is not None: src_key_mask = attention_mask == 0 if src_key_mask.dim() == 1: src_key_mask = src_key_mask.unsqueeze(0) for layer in self.layers: x = layer(x, src_key_mask) else: for layer in self.layers: x = layer(x) return x
The issue was in the EncoderLayer where the residual calculations were done wrong. The correct way of calculating: def forward(self, x: torch.Tensor, src_pad_key = None): residual = x x = self.layer_norm1(x) if src_pad_key is not None: x = self.self_attn(x, src_pad_key = src_pad_key, use_self_attention = True) else: x = self.self_attn(x) # normalize and apply residual connections x += residual residual = x x = self.layer_norm2(x) x = self.mlp(x) x += residual return x Another change was that we must always use self attention (instead of pooled attention) as otherwise the calculations won't work with the image encoder. [query = x] The results look like this: Cat similarity: tensor([[25.4132]], grad_fn=<MulBackward0>) Dog similarity: tensor([[21.8544]], grad_fn=<MulBackward0>) cosine cat/dog: 0.8438754677772522
2
1
79,585,301
2025-4-21
https://stackoverflow.com/questions/79585301/matplotlib-vertical-grid-lines-not-match-points
Could you please explain why vertical grid lines not match points? Here is my data for plot: {datetime.datetime(2025, 4, 15, 19, 23, 50, 658000, tzinfo=datetime.timezone.utc): 68.0, datetime.datetime(2025, 4, 16, 19, 31, 1, 367000, tzinfo=datetime.timezone.utc): 72.0, datetime.datetime(2025, 4, 17, 19, 34, 21, 507000, tzinfo=datetime.timezone.utc): 75.0, datetime.datetime(2025, 4, 18, 19, 50, 28, 446000, tzinfo=datetime.timezone.utc): 80.0, datetime.datetime(2025, 4, 19, 19, 57, 15, 393000, tzinfo=datetime.timezone.utc): 78.0, datetime.datetime(2025, 4, 20, 19, 57, 49, 60000, tzinfo=datetime.timezone.utc): 77.0, datetime.datetime(2025, 4, 21, 20, 28, 51, 127710, tzinfo=datetime.timezone.utc): 73.0} And here is my code: fig, ax = plt.subplots(figsize=(12, 6)) ax.plot(df['Дата'], df['Вес'], marker='o', linestyle='-', color='royalblue', label='Вес') ax.scatter(df['Дата'], df['Вес'], color='red', zorder=5) ax.set_title('График изменения веса по дням', fontsize=16) ax.set_xlabel('Дата', fontsize=12) ax.set_ylabel('Вес (кг)', fontsize=12) ax.xaxis.set_major_locator(mdates.AutoDateLocator()) # Раз в день ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%Y')) plt.setp(ax.xaxis.get_majorticklabels(), rotation=45, ha="right") ax.grid(True, linestyle='--', alpha=0.6) plt.tight_layout()
There are multiple solutions for it. I think using ax.set_xticks(df['Дата']) "to force the ticks to match your actual datetimes" might be better for you when the time of day matters, not just the date. The full code: import matplotlib.pyplot as plt import matplotlib.dates as mdates import pandas as pd from datetime import datetime, timezone data = { datetime(2025, 4, 15, 19, 23, 50, 658000, tzinfo=timezone.utc): 68.0, datetime(2025, 4, 16, 19, 31, 1, 367000, tzinfo=timezone.utc): 72.0, datetime(2025, 4, 17, 19, 34, 21, 507000, tzinfo=timezone.utc): 75.0, datetime(2025, 4, 18, 19, 50, 28, 446000, tzinfo=timezone.utc): 80.0, datetime(2025, 4, 19, 19, 57, 15, 393000, tzinfo=timezone.utc): 78.0, datetime(2025, 4, 20, 19, 57, 49, 60000, tzinfo=timezone.utc): 77.0, datetime(2025, 4, 21, 20, 28, 51, 127710, tzinfo=timezone.utc): 73.0 } df = pd.DataFrame(list(data.items()), columns=['Дата', 'Вес']) fig, ax = plt.subplots(figsize=(12, 6), layout='constrained') ax.plot(df['Дата'], df['Вес'], marker='o', markersize=8, markerfacecolor='red', markeredgecolor='white', markeredgewidth=1.5, linestyle='-', linewidth=2, color='royalblue', label='Вес') ax.set_xticks(df['Дата']) date_fmt = mdates.DateFormatter('%d.%m.%Y\n%H:%M') ax.xaxis.set_major_formatter(date_fmt) ax.grid(True, which='major', linestyle='--', linewidth=0.7, alpha=0.7) ax.set_title('График изменения веса по дням', fontsize=16, pad=20) ax.set_xlabel('Дата и время измерения', fontsize=12, labelpad=10) ax.set_ylabel('Вес (кг)', fontsize=12, labelpad=10) ax.tick_params(axis='x', which='major', rotation=45, labelsize=10) fig.autofmt_xdate(ha='center', bottom=0.2) plt.show() Another solution is "normalizing datetime to midnight" using df['Дата'] = df['Дата'].dt.normalize(). It might be better when time is not required. You’re plotting daily trends. The full code: import matplotlib.pyplot as plt import matplotlib.dates as mdates import pandas as pd import datetime data = { datetime.datetime(2025, 4, 15, 19, 23, 50, 658000, tzinfo=datetime.timezone.utc): 68.0, datetime.datetime(2025, 4, 16, 19, 31, 1, 367000, tzinfo=datetime.timezone.utc): 72.0, datetime.datetime(2025, 4, 17, 19, 34, 21, 507000, tzinfo=datetime.timezone.utc): 75.0, datetime.datetime(2025, 4, 18, 19, 50, 28, 446000, tzinfo=datetime.timezone.utc): 80.0, datetime.datetime(2025, 4, 19, 19, 57, 15, 393000, tzinfo=datetime.timezone.utc): 78.0, datetime.datetime(2025, 4, 20, 19, 57, 49, 60000, tzinfo=datetime.timezone.utc): 77.0, datetime.datetime(2025, 4, 21, 20, 28, 51, 127710, tzinfo=datetime.timezone.utc): 73.0, } df = pd.DataFrame(list(data.items()), columns=["Дата", "Вес"]) df['Дата'] = df['Дата'].dt.tz_convert(None).dt.normalize() fig, ax = plt.subplots(figsize=(12, 6)) ax.plot(df['Дата'], df['Вес'], marker='o', linestyle='-', color='royalblue', label='Вес') ax.scatter(df['Дата'], df['Вес'], color='red', zorder=5) ax.set_title('График изменения веса по дням', fontsize=16) ax.set_xlabel('Дата', fontsize=12) ax.set_ylabel('Вес (кг)', fontsize=12) ax.xaxis.set_major_locator(mdates.DayLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%Y')) plt.setp(ax.xaxis.get_majorticklabels(), rotation=45, ha="right") ax.grid(True, linestyle='--', alpha=0.6) plt.tight_layout() plt.show() If none of the solutions is not functional for you, please inform me and I will attempt to provide other solution. Output:
1
3
79,584,062
2025-4-21
https://stackoverflow.com/questions/79584062/userwarning-figurecanvasagg-is-non-interactive-and-thus-cannot-be-shown
I am trying to show a matplotlib.pyplot figure on Python 3.10 but can't. I am aware of this question and tried their answers but is still unsuccessful. The default OS distribution is Ubuntu 24.04 using Python 3.12 as a default. Here is how I setup the Python 3.10 project venv and installed numpy and matplotlib: $ uv init test_py310 --python 3.10 Initialized project `test-py310` at `/home/user/test_py310` $ cd test_py310/ $ uv add numpy matplotlib Using CPython 3.10.16 Creating virtual environment at: .venv Resolved 12 packages in 136ms Prepared 1 package in 1.96s Installed 11 packages in 43ms + contourpy==1.3.2 + cycler==0.12.1 + fonttools==4.57.0 + kiwisolver==1.4.8 + matplotlib==3.10.1 + numpy==2.2.5 + packaging==25.0 + pillow==11.2.1 + pyparsing==3.2.3 + python-dateutil==2.9.0.post0 + six==1.17.0 test_matplotlib.py: import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 10, 100) y = np.sin(x) plt.plot(x, y, label='sin(x)', color='blue', linestyle='--') plt.show() Error: /home/user/Coding/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py /home/user/test_py310/test_matplotlib,py:7: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show() Next, I tried installing PyQt5 as shared by this answer but still encountered error. $ uv add pyqt5 Resolved 15 packages in 89ms Installed 3 packages in 45ms + pyqt5==5.15.11 + pyqt5-qt5==5.15.16 + pyqt5-sip==12.17.0 Running the same python script $ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb. Aborted (core dumped) Changing import matplotlib.pyplot as plt to: import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt Gave this error: $ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py AttributeError: module '_tkinter' has no attribute '__file__'. Did you mean: '__name__'? 
The above exception was the direct cause of the following exception: ImportError: failed to load tkinter functions The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/test_py310/test_matplotlib,py", line 9, in <module> plt.plot(x, y, label='sin(x)', color='blue', linestyle='--') File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 3827, in plot return gca().plot( File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 2774, in gca return gcf().gca() File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 1108, in gcf return figure() File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 1042, in figure manager = new_figure_manager( File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 551, in new_figure_manager _warn_if_gui_out_of_main_thread() File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 528, in _warn_if_gui_out_of_main_thread canvas_class = cast(type[FigureCanvasBase], _get_backend_mod().FigureCanvas) File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 369, in _get_backend_mod switch_backend(rcParams._get("backend")) File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/pyplot.py", line 425, in switch_backend module = backend_registry.load_backend_module(newbackend) File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/backends/registry.py", line 317, in load_backend_module return importlib.import_module(module_name) File "/home/user/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_tkagg.py", line 1, in <module> from . import _backend_tk File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/backends/_backend_tk.py", line 25, in <module> from . import _tkagg ImportError: initialization failed Using import matplotlib matplotlib.use('Qt5Agg') import matplotlib.pyplot as plt gave $ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb. 
Aborted (core dumped) I have also removed pyqt5 and added pyqt6, and used matplotlib.use('Qt6Agg') but got this error: $ /home/user/test_py310/.venv/bin/python /home/user/test_py310/test_matplotlib,py Traceback (most recent call last): File "/home/user/test_py310/test_matplotlib,py", line 4, in <module> matplotlib.use('Qt6Agg') File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/__init__.py", line 1265, in use name = rcsetup.validate_backend(backend) File "/home/user/test_py310/.venv/lib/python3.10/site-packages/matplotlib/rcsetup.py", line 278, in validate_backend raise ValueError(msg) ValueError: 'Qt6Agg' is not a valid value for backend; supported values are ['gtk3agg', 'gtk3cairo', 'gtk4agg', 'gtk4cairo', 'macosx', 'nbagg', 'notebook', 'qtagg', 'qtcairo', 'qt5agg', 'qt5cairo', 'tkagg', 'tkcairo', 'webagg', 'wx', 'wxagg', 'wxcairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template'] What must I do to be able to plot a matplotlib.pyplot figure in a virtual environment that is installed with Python 3.10? Just to add, I am able show a matplotlib.pyplot figure in a separate virtual environment using Python 3.12.
I use Linux Mint 22, which is based on Ubuntu 24.04. The code works for me with uv and Python 3.10 when I install PyQt6, but not with PyQt5. It doesn't need the line matplotlib.use('Qt5Agg') nor matplotlib.use('QtAgg') (but it also works with those lines). As for tkinter: normally tkinter is installed directly with Python, but on Ubuntu it is a separate package which is installed with apt install python3.10-tk, and I think this can be a problem for a Python version installed with uv. BTW: Ubuntu Server (which doesn't have X11 installed and is usually used on computers without a monitor) may ship Python without tkinter, because tkinter is useless without X11 and a monitor. There is no problem when you install full Python 3.10 using apt install python3.10-full or apt install python3.10 python3.10-tk and later create a virtual environment with python3.10 -m venv ... Full code used for tests: $ uv init test_py310 --python 3.10 $ cd test_py310/ $ uv add numpy matplotlib pyqt6 $ uv run main.py main.py It can be run with a parameter like uv run main.py tkagg import sys print('>>> Executable:', sys.executable) import numpy as np import matplotlib.pyplot as plt import matplotlib # matplotlib.use('QtAgg') # works for me when installed `PyQt6` but not `PyQt5` # matplotlib.use('Qt5Agg') # works for me when installed `PyQt6` but not `PyQt5` # matplotlib.use('Qt6Agg') # doesn't exist `Qt6Agg` # matplotlib.use('TkAgg') # `tkinter` is not installed with `uv` if len(sys.argv) > 1: print('>>> Using:', sys.argv[1]) matplotlib.use(sys.argv[1]) x = np.linspace(0, 10, 100) y = np.sin(x) plt.plot(x, y, label='sin(x)', color='blue', linestyle='--') # plt.ion() # ion = Interactive ON plt.show() These setup commands worked for Ubuntu 24.04 (tested by @SunBear): $ uv init test_py310 --python 3.10 $ cd test_py310/ $ uv add numpy matplotlib pyqt6 $ sudo apt install python3.10-tk libxcb-cursor0 $ uv run main.py
1
2
79,584,014
2025-4-21
https://stackoverflow.com/questions/79584014/how-to-render-power-bi-slicer-items-with-python-selenium
I'd like to scrape the second page of this Power BI dashboard. In order to get the data from a specific month, I must set the date in a slicer: However, when I expand a year element to display the month elements, some are out of sight and thus unrendered. Therefore, I must scroll down the slicer. However, I have yet to find a method that works. Before I give further details, this is how I get to the desired page and expand the slicer: # Selenium resources from selenium import webdriver from selenium.webdriver.edge.options import Options from selenium.webdriver.common.by import By # Driver service driver_file = r"d:\dev\selenium\msedgedriver.exe" # or whatever driver is available service = Service(driver_file) # Browser options options = Options() options.add_experimental_option("detach", True) options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option("useAutomationExtension", False) options.add_argument("--disable-blink-features=AutomationControlled") options.add_argument("--inprivate") # Open browser driver = webdriver.Edge(service = service, options = options) # Get URL url = "https://app.powerbi.com/view?r=eyJrIjoiZWIzNDg3YzUtMGFlMC00MzdmLTgzOWQtZThkOWExNTU2NjBlIiwidCI6IjQ0OTlmNGZmLTI0YTYtNGI0Mi1iN2VmLTEyNGFmY2FkYzkxMyJ9" driver.get(url) # Proceed to next page driver.find_element(By.XPATH, '//button[@aria-label="Próxima Página"]/i').click() # Open date slicer driver.find_element(By.XPATH, '//div[@class="slicer-dropdown-menu"]/i') # Expand month options for a year, e.g. 2024 (driver \ .find_element(By.XPATH, '//div[@class="slicerItemContainer" and @title="2024"]/div[@class="expandButton"]') \ .click()) And so: But as it stands, I cannot select any month beyond March. The slicer doesn't budge when I execute a JavaScript snippet: slicer_container = driver.find_element(By.XPATH, '//div[@class="slicerContainer"]') driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", slicer_container) Trying to scroll down with keys throws an ElementNotInteractableException: from selenium.webdriver.common.keys import Keys scroll_container = driver.find_element(By.CLASS_NAME, "scroll-bar") scroll_container.send_keys(Keys.DOWN) ElementNotInteractableException: Message: element not interactable (Session info: MicrosoftEdge=135.0.3179.85) ActionChains yield the same error: from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.keys import Keys actions = ActionChains(driver) actions.move_to_element(scroll_bar).click().send_keys(Keys.PAGE_DOWN).perform() ElementNotInteractableException: Message: element not interactable (Session info: MicrosoftEdge=135.0.3179.85) And finally, I've tried to use a style transform – but it only manages to scroll the element visually and doesn't trigger the rendering of the next slicer items. visible_group = driver.find_element(By.CLASS_NAME, 'visibleGroup') driver.execute_script(f'arguments[0].style.transform = "translateY(-60px)";', visible_group) All told, I have no idea what to do. Any ideas? Please feel free to ask for further details.
ActionChains with scrolling via move_to_element, click_and_hold, and move_by_offset, It is key when interacting with complex UIs like Power BI embedded dashboards or custom dropdowns. move_to_element(): Ensures you're hovering over the correct scrollable area. click_and_hold(): Simulates a mouse drag. Without this, the scroll may not happen. move_by_offset(0, 100): Moves the view vertically (down). Can be adjusted for a larger scroll. release(): Finishes the simulated mouse action. Here's a working code that I've tested: # ===== IMPORTS ===== from time import sleep from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver import ActionChains # ===== SETUP OPTIONS ===== options = Options() options.add_argument("--start-maximized") options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_argument("force-device-scale-factor=0.95") driver = webdriver.Chrome(options=options) wait = WebDriverWait(driver, 10) def report_analyser(year: str, month: int) -> None: url = "https://app.powerbi.com/view?r=eyJrIjoiZWIzNDg3YzUtMGFlMC00MzdmLTgzOWQtZThkOWExNTU2NjBlIiwidCI6IjQ0OTlmNGZmLTI0YTYtNGI0Mi1iN2VmLTEyNGFmY2FkYzkxMyJ9" driver.get(url) # Wait for the page navigation element wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div[aria-label="Mercado Page navigation . Mercado"]'))) # Click to go to second page wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '#embedWrapperID>div.logoBarWrapper>logo-bar>div>div>div>logo-bar-navigation>span>button:nth-child(3)'))).click() # Click to open the date drop-down wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#pvExplorationHost > div > div > exploration > div > explore-canvas > div > div.canvasFlexBox > div > div.displayArea.disableAnimations.fitToPage > div.visualContainerHost.visualContainerOutOfFocus > visual-container-repeat > visual-container:nth-child(6) > transform > div > div.visualContent > div > div > visual-modern > div > div > div.slicer-content-wrapper > div>i'))).click() # Expand month options for 2023 wait.until(EC.presence_of_element_located((By.XPATH, f'//div[@class="slicerItemContainer" and @title="{year}"]/div[@class="expandButton"]'))).click() sleep(3) # Let dropdown finish animating sc = driver.find_element(By.CSS_SELECTOR, 'div[id^="slicer-dropdown-popup-"]>div>div>div:nth-child(2)>div>div:nth-child(3)') # Scroll the slicer container action_chains = ActionChains(driver) action_chains.move_to_element(sc).click_and_hold().move_by_offset(0, 100).release().perform() sleep(2) driver.find_element(By.XPATH, f'//div[@class="slicerItemContainer" and @aria-posinset="{month}"]').click() sleep(2) report_analyser('2022', 1) Visual output: https://github.com/Help-the-community/Web_Scraping_with_Selenium/blob/main/app_powerbi_com_anp.gif you can check here for a cleaner version of the code: https://github.com/Help-the-community/Web_Scraping_with_Selenium/blob/main/app_powerbi_com_anp.py
1
4
79,583,755
2025-4-20
https://stackoverflow.com/questions/79583755/how-to-build-a-nested-adjacency-list-from-an-adjacency-list-and-a-hierarchy
I have a simple adjacency list representation of a graph like this { 1: [2, 3, 4], 2: [5], 3: [6, 9], 4: [3], 5: [3], 6: [7, 8], 7: [], 8: [], 9: [] } which looks like this But each node also has a "type ancestry" which tells us what "type" of node it is, and this is completely different from the hierarchy present in the adjacency list above. For example, it would tell us that "node 6 is of type 'BA', and all type 'BA' nodes are type 'B'". For example, consider this ancestry: { 1: [], # no ancestry 2: ['AA', 'A'], # Read this as "2 is a type AA node, and all type AA nodes are type A nodes" 3: ['B'], # 3 is directly under type B 4: [], 5: ['AA', 'A'], 6: ['BA', 'B'], 7: ['BA', 'B'], 8: ['BA', 'B'], 9: ['BB', 'B'] } which when visualized would look like this However, instead of connecting nodes directly as per the adjacency list, we must use their "representative types" when available, where the representative types of the nodes would be the nodes just below the lowest common ancestor of their type ancestries. When visualized with this adjustment, it would look like this So what I want to produce programmatically is a "hierarchical/nested" adjacency list for such a visualization, which would look like below. The main idea is to introduce a subtree for each key in the adjacency list (along with the edges field), which would in turn contain its own adjacency list and so forth (recursively). {1: {'edges': ['A', 'B', 4], 'subgraphs': {}}, 4: {'edges': ['B'], 'subgraphs': {}}, 'A': {'edges': ['B'], 'subgraphs': {'AA': {'edges': [], 'subgraphs': {2: {'edges': [5], 'subgraphs': {}}, 5: {'edges': [], 'subgraphs': {}}}}}}, 'B': {'edges': [], 'subgraphs': {3: {'edges': ['BA', 'BB'], 'subgraphs': {}}, 'BA': {'edges': [], 'subgraphs': {6: {'edges': [7, 8], 'subgraphs': {}}, 7: {'edges': [], 'subgraphs': {}}, 8: {'edges': [], 'subgraphs': {}}}}, 'BB': {'edges': [], 'subgraphs': {9: {'edges': [], 'subgraphs': {}}}}}}} What is an elegant way of transforming the original adjacency list + the separate "ancestry" map to produce such a data structure?
Basing this off of @trincot's answer (who wanted me to write my own answer instead of editing theirs as the change is non-trivial) with a critical change on how edges are made. Create a node (a dict) for each distinct node that can be found in ancestry list, so including the ancestry groups. A node looks like this: { 'edges': [], 'subgraphs': {} } These objects are collected in a dict, keyed by their name (e.g. key could be 1 or 'AA', ...) Populate the 'edges' attribute based on the adjacency input list. Make sure to use the ancestry reference instead when the edge transitions to another ancestry group. Here is where the LCA approach comes in. Populate the 'subgraphs' attribute based on the ancestry input list. While doing that, keep track of the nodes that are not a child of another node: these should be retained for the root node in the data structure. def transform(adjacency, ancestry): # utility function to get the element in the list before the target, if one is present def get_element_before(lst, target): try: index = lst.index(target) return lst[index - 1] if index - 1 >= 0 else None except ValueError: return None # Finds the lowest common ancestor between 2 paths, assuming that the paths # are ordered from bottom to top def find_lca(path1, path2): lca = None for a, b in zip(path1[::-1], path2[::-1]): if a == b: lca = a else: break return lca # Given two nodes, it adds a link between the correct pair of "representative" nodes # in the newly constructed nested graph def add_link(node1, node2, nodes): ancestry1, ancestry2 = ancestry[node1], ancestry[node2] # Special cases when the LCA cannot be found if not ancestry1 and not ancestry2: nodes[node1]['edges'].append(node2) elif not ancestry1: nodes[node1]['edges'].append(ancestry2[-1]) elif not ancestry2: nodes[ancestry1[-1]]['edges'].append(node2) else: # When LCA is likely to be present lca = find_lca(ancestry1, ancestry2) if not lca: # This can happen if the 2 nodes have completely disjoint hierarchy paths nodes[ancestry1[-1]]['edges'].append(ancestry2[-1]) return # The node just below the LCA in each node serves as the "representative" node of that node in the newly built graph representative_node1 = get_element_before(ancestry1, lca) representative_node2 = get_element_before(ancestry2, lca) # If the two nodes are in the same subtree at the same level, they # will act as their own representative nodes representative_node1 = node1 if representative_node1 is None else representative_node1 representative_node2 = node2 if representative_node2 is None else representative_node2 nodes[representative_node1]['edges'].append(representative_node2) # Create the basic object (dict) for each node: nodes = { subgraph: { 'edges': [], 'subgraphs': {} } for node, subgraphs in ancestry.items() for subgraph in (node, *subgraphs) } # populate the "edges" attributes between basic nodes (or their "representative" nodes) for node, children in adjacency.items(): for child in children: add_link(node, child, nodes) # keep track of the nodes that are to stay at the root level root = dict(nodes) # populate the "subgraphs" attributes for node, ancestors in ancestry.items(): for child, parent in zip((node, *ancestors), ancestors): nodes[parent]['subgraphs'][child] = nodes[child] root.pop(child, None) return root Testing it adj_list = { 1: [2, 3, 4], 2: [5], 3: [6, 9], 4: [3], 5: [3], 6: [7, 8], 7: [], 8: [], 9: [] } ancestry = { 1: [], 2: ['AA', 'A'], 3: ['B'], 4: [], 5: ['AA', 'A'], 6: ['BA', 'B'], 7: ['BA', 'B'], 8: ['BA', 'B'], 9: ['BB', 'B'] } 
import pprint pprint.pprint(transform(adj_list, ancestry)) produces {1: {'edges': ['A', 'B', 4], 'subgraphs': {}}, 4: {'edges': ['B'], 'subgraphs': {}}, 'A': {'edges': ['B'], 'subgraphs': {'AA': {'edges': [], 'subgraphs': {2: {'edges': [5], 'subgraphs': {}}, 5: {'edges': [], 'subgraphs': {}}}}}}, 'B': {'edges': [], 'subgraphs': {3: {'edges': ['BA', 'BB'], 'subgraphs': {}}, 'BA': {'edges': [], 'subgraphs': {6: {'edges': [7, 8], 'subgraphs': {}}, 7: {'edges': [], 'subgraphs': {}}, 8: {'edges': [], 'subgraphs': {}}}}, 'BB': {'edges': [], 'subgraphs': {9: {'edges': [], 'subgraphs': {}}}}}}}
1
1
79,584,797
2025-4-21
https://stackoverflow.com/questions/79584797/how-to-cast-polars-decimal-to-int-or-float-depending-on-scale-parameter
executing a polars.read_database() resulted in columns with the Decimal data type, which I'd like to cast to either Int or Float, depending on the value of the scale parameter in Decimal. Alternatively, I'd be happy if there is a way to instruct polars to not use the Decimal data type as an option and during schema inference to let it assign the appropriate Float or Int. Is there a way to use polars.selectors to conditionally target Decimal based on whether scale is zero or not? Or to instruct polars.read_database to not use Decimal? Ideally, I'd like to be able to do something like: df.with_columns( pl.selectors.decimal(scale="1+").cast(pl.Float64()), pl.selectors.decimal(scale="0").cast(pl.Int64()) ) Of course, pl.selectors.decimal() doesn't have any arguments that it can take. An alternative would be some sort of pl.when ... but I would need to extract the value for scale first, and not sure how to do that. Or attack this at the read_database level. Any ideas?
A fairly explicit solution that works is: int_dec_cols = [c for c, dt in df.schema.items() if isinstance(dt, pl.Decimal) and dt.scale == 0] flt_dec_cols = [c for c, dt in df.schema.items() if isinstance(dt, pl.Decimal) and dt.scale > 0] df = df.with_columns( pl.col(int_dec_cols).cast(pl.Int64), pl.col(flt_dec_cols).cast(pl.Float64), )
4
2
79,584,546
2025-4-21
https://stackoverflow.com/questions/79584546/how-to-correct-a-parsererror-when-respecting-the-csv-delimiter-and-a-second-pars
I'm new here so I hope that I will put all needed information As the CSV is a huge one (10Go), the URL link is in the code below if needed URL link to the data description (column type...) Delimiter is \t but they call it CSV (describe in the "data description file"). After replacing wrong delimiter (replace '\n\t' by '\t' when necessary) in the csv file and define data type for each column, I'm trying to read it using the \t delimiter but encounter 2 errors. 1) parse error on line 1715281 : expected 209 fields, saw 239 --> For that, I try to check issue by using getline and then split the line with delimiter='\t'. I found 'quantity of columns' = len(split(getline)) = 209 error : Unable to parse string "URL link". No issue with all the previous lines before this one My questions are: Why do I get this parser error on line 1715281? Assuming that I correct wrong delimiter at the beginning of my code Is it a good approach to use 'getline' and then compare number of columns in the CSV to the len of the split line? How to manage the "unable to parse string: URL" when all the previous lines don't generate an issue? Please find hereunder my codes, comments and full error messages import os.path import pandas as pd import numpy as np import linecache # data file available under: https://static.openfoodfacts.org/data/en.openfoodfacts.org.products.csv.gz # it's .csv but delimiter is TAB # generate the path to the file data_local_path = os.getcwd() + '\\' csv_filename = 'en.openfoodfacts.org.products.csv' csv_local_path = data_local_path + csv_filename # generate the path to create the file corrected clean_filename = 'en.off.corrected.csv' clean_local_path = data_local_path + clean_filename # check if the file is already existing, if not then proceed to wrong delimiter replacement if not os.path.isfile(clean_local_path): with open(csv_local_path, 'r',encoding='utf-8') as csv_file, open(clean_local_path, 'a', encoding='utf-8') as clean_file: for row in csv_file: clean_file.write(row.replace('\n\t', '\t')) # columns type are defined under : https://static.openfoodfacts.org/data/data-fields.txt column_names = pd.read_csv(clean_local_path, sep='\t', encoding = 'utf-8', nrows=0).columns.values column_types = {col: 'Int64' for (col) in column_names if col.endswith (('_t', '_n'))} column_types |= {col: float for (col) in column_names if col.endswith (('_100g', '_serving'))} column_types |= {col: str for (col) in column_names if not col.endswith (('_t', '_n', '_100g', '_serving', '_tags'))} print ("number of columns detected: ",len(column_names)) # output is "number of columns detected: 209" print (column_names) # Load the data data = pd.read_csv(clean_local_path, sep='\t', encoding='utf_8', dtype=column_types, parse_dates=[col for (col) in column_names if col.endswith('_datetime')], on_bad_lines='warn' ) # display info data.info() Message error at the line " data = pd.read_csv..." 
is: ...\AppData\Local\Temp\ipykernel_2824\611804071.py:2: ParserWarning: Skipping line 1715281: expected 209 fields, saw 239 data = pd.read_csv(clean_local_path, sep='\t', encoding='utf_8', --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File lib.pyx:2391, in pandas._libs.lib.maybe_convert_numeric() ValueError: Unable to parse string "https://images.openfoodfacts.org/images/products/356/007/117/1049/front_fr.3.200.jpg" During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) Cell In[6], line 2 1 # Load the data ----> 2 data = pd.read_csv(clean_local_path, sep='\t', encoding='utf_8', 3 dtype=column_types, parse_dates=[col for (col) in column_names if col.endswith('_datetime')], 4 on_bad_lines='warn' 5 ) 6 # display info 7 data.info() File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\io\parsers\readers.py:1026, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, date_format, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options, dtype_backend) 1013 kwds_defaults = _refine_defaults_read( 1014 dialect, 1015 delimiter, (...) 1022 dtype_backend=dtype_backend, 1023 ) 1024 kwds.update(kwds_defaults) -> 1026 return _read(filepath_or_buffer, kwds) File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\io\parsers\readers.py:626, in _read(filepath_or_buffer, kwds) 623 return parser 625 with parser: --> 626 return parser.read(nrows) File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\io\parsers\readers.py:1923, in TextFileReader.read(self, nrows) 1916 nrows = validate_integer("nrows", nrows) 1917 try: 1918 # error: "ParserBase" has no attribute "read" 1919 ( 1920 index, 1921 columns, 1922 col_dict, -> 1923 ) = self._engine.read( # type: ignore[attr-defined] 1924 nrows 1925 ) 1926 except Exception: 1927 self.close() File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\io\parsers\c_parser_wrapper.py:234, in CParserWrapper.read(self, nrows) 232 try: 233 if self.low_memory: --> 234 chunks = self._reader.read_low_memory(nrows) 235 # destructive to chunks 236 data = _concatenate_chunks(chunks) File parsers.pyx:838, in pandas._libs.parsers.TextReader.read_low_memory() File parsers.pyx:921, in pandas._libs.parsers.TextReader._read_rows() File parsers.pyx:1066, in pandas._libs.parsers.TextReader._convert_column_data() File parsers.pyx:1105, in pandas._libs.parsers.TextReader._convert_tokens() File parsers.pyx:1211, in pandas._libs.parsers.TextReader._convert_with_dtype() File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\core\arrays\numeric.py:283, in NumericArray._from_sequence_of_strings(cls, strings, dtype, copy) 277 @classmethod 278 def _from_sequence_of_strings( 279 cls, strings, *, dtype: Dtype | None = None, copy: bool = False 280 ) -> Self: 281 from pandas.core.tools.numeric import to_numeric --> 283 scalars = to_numeric(strings, errors="raise", dtype_backend="numpy_nullable") 284 return 
cls._from_sequence(scalars, dtype=dtype, copy=copy) File ~\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\core\tools\numeric.py:232, in to_numeric(arg, errors, downcast, dtype_backend) 230 coerce_numeric = errors not in ("ignore", "raise") 231 try: --> 232 values, new_mask = lib.maybe_convert_numeric( # type: ignore[call-overload] 233 values, 234 set(), 235 coerce_numeric=coerce_numeric, 236 convert_to_masked_nullable=dtype_backend is not lib.no_default 237 or isinstance(values_dtype, StringDtype) 238 and not values_dtype.storage == "pyarrow_numpy", 239 ) 240 except (ValueError, TypeError): 241 if errors == "raise": File lib.pyx:2433, in pandas._libs.lib.maybe_convert_numeric() ValueError: Unable to parse string "https://images.openfoodfacts.org/images/products/356/007/117/1049/front_fr.3.200.jpg" at position 1963 'Getline' and 'split' used to check ParserWarning concerning line 1715281 #get the line where the first warning had occured line = linecache.getline(csv_local_path,1715281) print (line) # Split the string using tab delimiter split_list = line.split('\t') # Output the result print("concerning Parser Warning: Skipping line 1715281: expected 209 fields, saw 239") print("number of data detected in the raw 1715281: ",len(split_list)) print ("number of columns detected in CSV: ",len(column_names)) # Output is: # concerning Parser Warning: Skipping line 1715281: expected 209 fields, saw 239 # number of data detected in the raw 1715281: 209 # number of columns detected in CSV: 209 I try "on_bad_lines='skip'" but without success
The problem appears to be that the file uses the " character not to quote items but to indicate a measurement in inches. For example, I found this in one cell of the file: fluted shell round sweet 2.5" The fix is straightforward: add quoting=csv.QUOTE_NONE to your call to pd.read_csv. (You'll need to add import csv as well.) With this fix in place, I would expect pandas to be able to read in the CSV file without any of the 'corrections' you have applied. (My machine doesn't have enough RAM to read the whole file in, but if I split the file into chunks of 100,000 rows, it reads each chunk in fine, provided I tell pandas to ignore quotes.)
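For example, the read_csv call from the question would only need the extra argument; this is just a sketch reusing the variables already defined in your script (clean_local_path, column_types, column_names):

import csv
import pandas as pd

data = pd.read_csv(
    clean_local_path, sep='\t', encoding='utf_8',
    dtype=column_types,
    parse_dates=[col for col in column_names if col.endswith('_datetime')],
    quoting=csv.QUOTE_NONE,   # treat " as an ordinary character (e.g. the 2.5" shells)
    on_bad_lines='warn',
)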
2
2
79,583,650
2025-4-20
https://stackoverflow.com/questions/79583650/mount-type-cache-doesnt-speed-up-pip-install-during-docker-build
I'm using --mount=type=cache in my Dockerfile to cache pip packages between builds, but it doesn't seem to have any effect on build speed. I expected pip to reuse cached packages and avoid re-downloading them, but every build takes the same amount of time. Here is the relevant part of my Dockerfile: FROM python:3.13-slim AS compile-image RUN python -m venv /opt/venv ENV PATH="/opt/venv/bin:$PATH" RUN pip install --no-cache-dir --upgrade pip # Final image FROM python:3.13-slim ENV PATH="/opt/venv/bin:$PATH" COPY --from=compile-image /opt/venv /opt/venv WORKDIR /app ENV HOME=/app COPY . . RUN --mount=type=cache,mode=0755,target=/root/.cache/pip \ pip install -e . Shouldn't this cache downloaded packages and reuse them in subsequent builds? What could be the reason it's not working as expected?
The pip cache is relative to the user's home directory. Just above the RUN pip install line you set ENV HOME=/app, and so the pip cache is in /app/.cache/pip (you also see that directory in the error message). Use that directory as the cache location RUN --mount=type=cache,target=/app/.cache/pip pip ... # ^^^^^ the directory you set as $HOME For most purposes in a container, though, "home directory" isn't an especially meaningful concept, and there's no reason to change it. If you don't set $HOME and you are running as root then the default home directory will be /root and the invocation you had before should work. # ENV HOME=/root # default value, don't need to set this explicitly RUN --mount=type=cache,target=/root/.cache/pip pip ... # ^^^^^^ default value of $HOME for root You can also use this caching in combination with Docker's normal layer caching. You probably want to run the pip install command in the first stage if you do have a multi-stage build (the setup you show gains little from it), and if you can do it only copying in the pyproject.toml file first, then you'll avoid re-running the installation sequence if source files but not application dependencies change.
1
2
79,584,468
2025-4-21
https://stackoverflow.com/questions/79584468/how-to-represent-ranges-of-time-in-a-pandas-index
I have a collection of user data as follows: user start end John Doe 2025-03-21 11:30:35 2025-03-21 13:05:26 ... ... ... Jane Doe 2023-12-31 01:02:03 2024-01-02 03:04:05 Each user has a start and end datetime of some activity. I would like to place this temporal range in the index so I can quickly query the dataframe to see which users were active during a certain date/time range like so: df['2024-01-01:2024-01-31'] Pandas has Period objects, but these seem to only support a specific year, day, or minute, not an arbitrary start and end datetime. Pandas also has MultiIndex indices, but these seem to be designed for hierarchical categorical labels, not for time ranges. Any other ideas for how to represent this time range in an index?
Here is your solution: import pandas as pd data = { 'user': ['John Doe', 'Jane Doe'], 'start': [pd.Timestamp('2025-03-21 11:30:35'), pd.Timestamp('2023-12-31 01:02:03')], 'end': [pd.Timestamp('2025-03-21 13:05:26'), pd.Timestamp('2024-01-02 03:04:05')], } df = pd.DataFrame(data) interval_index = pd.IntervalIndex.from_arrays(df['start'], df['end'], closed='both') df.set_index(interval_index, inplace=True) df.drop(columns=['start', 'end'], inplace=True) # check user query_time = pd.Timestamp("2024-01-01 12:00:00") active_users = df[df.index.contains(query_time)] print(active_users) Output: D:\python>python test.py user [2023-12-31 01:02:03, 2024-01-02 03:04:05] Jane Doe
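Since the question also asks about matching a whole date range rather than a single instant, the same index supports that via IntervalIndex.overlaps; a small sketch reusing the df built above (the range endpoints are just an example):

import pandas as pd

january = pd.Interval(pd.Timestamp("2024-01-01"),
                      pd.Timestamp("2024-01-31 23:59:59"),
                      closed="both")
active_in_january = df[df.index.overlaps(january)]
print(active_in_january)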
2
1
79,583,908
2025-4-21
https://stackoverflow.com/questions/79583908/how-can-i-automatically-run-a-command-immediately-after-git-add
I'm working on a side project to make my git commit messages better and make the process faster, but I can't seem to figure out how to trigger the scripts after git add without manually doing it. This is the flow I want: I run git add. My script kicks in immediately. It fetches the staged diff. AI generates a commit message. It asks for confirmation. If confirmed, it commits. I've tried looking into hooks like pre-commit, but those seem to run before git commit, not git add. Ideally, I'd love to run the message generator right after I run git add, possibly via a Git alias or a custom CLI.
If you don't want to use a prepare-commit-msg hook, you can use a CLI script like this: Add specified files git add "$@" Get diff diff=$(git diff --cached) Generate commit message commit_msg=$(echo "$diff" | your_ai_commit_generator) Ask for confirmation echo "Suggested commit message:" echo "$commit_msg" read -p "Use this commit message? (y/n): " confirm if [[ "$confirm" == "y" ]]; then git commit -m "$commit_msg" else echo "Commit canceled." fi Save it as gitadd in your $PATH Give it executable permissions chmod +x gitadd Run gitadd myfile.js Full script: #!/bin/bash git add "$@" diff=$(git diff --cached) commit_msg=$(echo "$diff" | your_ai_commit_generator) # <- you plug in your AI here echo "Suggested commit message:" echo "$commit_msg" read -p "Use this commit message? (y/n): " confirm if [[ "$confirm" == "y" ]]; then git commit -m "$commit_msg" else echo "Commit canceled." fi
1
2
79,583,707
2025-4-20
https://stackoverflow.com/questions/79583707/adding-text-file-attachments-to-keepass-with-pykeepass-in-python
I am trying to create and save KeePass entries using pykeepass and saving a .txt file as an attachment. However I get a type-error: Traceback (most recent call last): File "c:\Users\Simplicissimus\Documents\coding\directory\attachment.py", line 15, in <module> entry.add_attachment('attachment.txt', f.read()) ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Simplicissimus\AppData\Local\Programs\Python\Python313\Lib\site-packages\pykeepass\entry.py", line 163, in add_attachment E.Key(filename), ~~~~~^^^^^^^^^^ File "src\\lxml\\builder.py", line 219, in lxml.builder.ElementMaker.__call__ TypeError: bad argument type: bytes(b'Lorem Ipsum bla bla bla\r\n') (base) PS C:\Users\Simplicissimus\Documents\coding\directory> My minimal reproducible example is: from pykeepass import PyKeePass kp = PyKeePass('testDatabase.kdbx', password='passwort') generalGroup = kp.find_groups(name='General',first=True) entry = kp.add_entry( generalGroup, title='title', username='username', password= 'password', ) with open('loremipsum.txt', 'rb') as f: entry.add_attachment('attachment.txt', f.read()) kp.save() (All files are in the same directory, the file loremipsum.txt contains the line: "Lorem Impsum bla bla bla") How do I have to convert the file content of a .txt-file to Bytes? I am using the pykeepass Version: 4.1.0.post1, Keepass Version 2.58 and Python 3.13.
I can't test it, but the examples on the pykeepass page show # add attachment data to the db >>> binary_id = kp.add_binary(b'Hello world') >>> kp.binaries [b'Hello world'] # add attachment reference to entry >>> a = e.add_attachment(binary_id, 'hello.txt') So maybe you first need to add the data with add_binary() and later attach it to the entry: with open('loremipsum.txt', 'rb') as f: binary_id = kp.add_binary(f.read()) a = entry.add_attachment(binary_id, 'attachment.txt')
3
1
79,583,700
2025-4-20
https://stackoverflow.com/questions/79583700/pandass-gt-and-lt-not-working-when-chained-together
I'm playing around with the pipe | and ampersand & operators, as well as the .gt() and .lt() built-in functions to see how they work together. I'm looking at a column in a DataFrame with values from 0.00 to 1.00. I can use the >, <, and & operators together and find no problem, same with using .gt(), .lt(), and &. However, if I try to chain .gt().lt() it gives another result. In my example I'm using .gt(0.7).lt(0.9), but this yields values <=0.7. If I change the order to .lt(0.9).gt(0.7), I get values <=0.9. I can always just write it like this df['column'].gt(0.7)&df['column'].lt(0.9), just wondering if there's a way of chaining .gt().lt()
The misunderstanding is that in Python True == 1 and False == 0 (see bool). Suppose we have: import pandas as pd data = {'col': [0.5, 0.8, 1]} df = pd.DataFrame(data) df['col'].gt(0.7) When we chain .lt(0.9), this check takes place on the result of .gt(0.7): 0 False # 0 < 0.9 (True) 1 True # 1 < 0.9 (False) 2 True # 1 < 0.9 (False) Name: col, dtype: bool Use Series.between instead, with inclusive to control the comparison operators: df['col'].between(0.7, 0.9, inclusive='neither') 0 False # 0.5 1 True # 0.8 2 False # 1 Name: col, dtype: bool
3
4
79,583,142
2025-4-20
https://stackoverflow.com/questions/79583142/broadcasting-a-b-1-tensor-to-apply-a-shift-to-a-specific-channel-in-pytorch
I have a tensor p of shape (B, 3, N) in PyTorch: # 2 batches, 3 channels (x, y, z), 5 points p = torch.rand(2, 3, 5, requires_grad=True) """ p: tensor([[[0.8365, 0.0505, 0.4208, 0.7465, 0.6843], [0.9922, 0.2684, 0.6898, 0.3983, 0.4227], [0.3188, 0.2471, 0.9552, 0.5181, 0.6877]], [[0.1079, 0.7694, 0.2194, 0.7801, 0.8043], [0.8554, 0.3505, 0.4622, 0.0339, 0.7909], [0.5806, 0.7593, 0.0193, 0.5191, 0.1589]]], requires_grad=True) """ And then another z_shift of shape [B, 1]: z_shift = torch.tensor([[1.0], [10.0]], requires_grad=True) """ z_shift: tensor([[1.], [10.]], requires_grad=True) """ I want to apply the appropriate z-shift of all points in each batch, leaving x and y unchanged: """ p: tensor([[[0.8365, 0.0505, 0.4208, 0.7465, 0.6843], [0.9922, 0.2684, 0.6898, 0.3983, 0.4227], [1.3188, 1.2471, 1.9552, 1.5181, 1.6877]], [[0.1079, 0.7694, 0.2194, 0.7801, 0.8043], [0.8554, 0.3505, 0.4622, 0.0339, 0.7909], [10.5806, 10.7593, 10.0193, 10.5191, 10.1589]]]) """ I managed to do it like: p[:, 2, :] += z_shift for the case where requires_grad=False, but this fails inside the forward of my nn.Module (which I assume is equivalent to requires_grad=True) with: RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
In PyTorch, tensors directly created by users are termed leaf tensors, and their views share the same underlying storage. Performing in-place assignments on a view can modify the storage of the original tensor in the middle of the computational graph, leading to undefined behavior. Thus, directly assigning values to views should be avoided. To achieve this safely, replace in-place operations with an out-of-place approach. For example: p_shifted = torch.stack([ p[:, 0, :], p[:, 1, :], p[:, 2, :] + z_shift, ], dim=1) This constructs a new tensor via torch.stack instead of modifying the original storage in-place, ensuring computational graph integrity while fulfilling the intended functionality.
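Another out-of-place option, if you prefer a single broadcasted expression, is a channel mask; this is only a sketch using the example shapes from the question:

import torch

p = torch.rand(2, 3, 5, requires_grad=True)
z_shift = torch.tensor([[1.0], [10.0]], requires_grad=True)

# (1, 3, 1) mask that is 1 only on the z channel; the mask itself does not require grad
mask = torch.zeros(1, 3, 1)
mask[:, 2, :] = 1.0

# (1, 3, 1) * (B, 1, 1) -> (B, 3, 1), which then broadcasts over the N points
p_shifted = p + mask * z_shift.unsqueeze(-1)

p_shifted.sum().backward()  # gradients reach both p and z_shift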
3
2
79,582,880
2025-4-19
https://stackoverflow.com/questions/79582880/why-isnt-packages-installed-in-my-multi-stage-docker-build-using-pip
I'm working on a Python project with a multi-stage Docker build and running into an issue where pydantic (just example) isn't installed, even though pip is present and working in the final image. Here's my project structure: project-root/ ├── docker-compose.yml ├── vector_db_service/ │ ├── app/ │ │ └── __init__.py │ ├── Dockerfile │ ├── pyproject.toml │ ├── .env docker-compose.yml: services: vector_db_service: container_name: vector_db_service build: context: ./vector_db_service dockerfile: Dockerfile command: tail -f /dev/null env_file: - ./vector_db_service/.env Dockerfile: # Build stage FROM python:3.13-slim AS compile-image RUN python -m venv /opt/venv ENV PATH="/opt/venv/bin:$PATH" RUN pip install --no-cache-dir --upgrade pip # Final image FROM python:3.13-slim COPY --from=compile-image /opt/venv /opt/venv WORKDIR /app ENV HOME=/app ENV PATH="/opt/venv/bin:$PATH" RUN addgroup --system app && adduser --system --group app COPY . . RUN chown -R app:app $HOME RUN chown -R app:app "/opt/venv/" USER app RUN pip install -e pydantic The last line, RUN pip install -e pydantic, doesn't install anything. The build finishes successfully, but the package isn’t installed. I confirmed that pip is installed in the final image. I’ve tried other variations like RUN pip install pydantic or RUN pip install -e ., but they didn't change the outcome. My pyproject.toml does list pydantic as a dependency. Do I need to install from the project root, or am I missing something in the build process? Any help would be greatly appreciated. Thank you in advance!
Try to copy project files which you need to install FROM python:3.13-slim AS compile-image RUN python -m venv /opt/venv ENV PATH="/opt/venv/bin:$PATH" RUN pip install --no-cache-dir --upgrade pip setuptools wheel WORKDIR /app COPY pyproject.toml ./ COPY vector_db_service ./vector_db_service RUN pip install --no-cache-dir . FROM python:3.13-slim COPY --from=compile-image /opt/venv /opt/venv WORKDIR /app ENV HOME=/app ENV PATH="/opt/venv/bin:$PATH" RUN addgroup --system app && adduser --system --group app COPY . . RUN chown -R app:app $HOME RUN chown -R app:app "/opt/venv/" USER app
3
0
79,581,767
2025-4-18
https://stackoverflow.com/questions/79581767/problems-creating-a-generator-factory-in-python
I'd like to create a generator factory, i.e. a generator that yields generators, in python using a "generator expression" (generator equivalent of list comprehension). Here's an example: import itertools as it gen_factory=((pow(b,a) for a in it.count(1)) for b in it.count(10,10)) In my mind this should give the following output: ((10,100,1000,...), (20,400,8000,...), (30,900,27000,...), ...) However, the following shows that the internal generators are getting reset: g0 = next(gen_factory) next(g0) # 10 next(g0) # 100 g1 = next(gen_factory) next(g1) # 20 next(g0) # 8000 So the result of the last statement is equal to pow(20,3) whereas I expected it to be pow(10,3). It seems that calling next(gen_factory) alters the b value in g0 (but not the internal state a). Ideally, previous generators shouldn't change as we split off new generators from the generator factory. Interestingly, I can get correct behavior by converting these to lists, here's a finite example: finite_gen_factory = ((pow(b,a) for a in (1,2,3)) for b in (10,20,30)) [list(x) for x in finite_gen_factory] which gives [[10, 100, 1000], [20, 400, 8000], [30, 900, 27000]], but trying to maintain separate generators fails as before: finite_gen_factory = ((pow(b,a) for a in (1,2,3)) for b in (10,20,30)) g0 = next(finite_gen_factory) g1 = next(finite_gen_factory) next(g0) # 20, should be 10. The closest explanation, I think, is in this answer, but I'm not sure what the correct way of resolving my problem is. I thought of copying (cloning) the internal generators, but I'm not sure this is possible. Also it.tee probably doesn't work here. A workaround might be defining the inner generator as a class, but I really wanted a compact generator expression for this. Also, some stackoverflow answers recommended using functools.partial for this kind of thing but I can't see how I could use that here.
You can prevent the closure by capturing b with for b in [b] (Attempt This Online!): gen_factory=((pow(b,a) for b in [b] for a in it.count(1)) for b in it.count(10,10)) As documented: the iterable expression in the leftmost for clause is immediately evaluated, so that an error produced by it will be emitted at the point where the generator expression is defined, rather than at the point where the first value is retrieved. So when you create one of the (inner) generators, the list [b] is created and given to the generator, and then the for b in puts the value in the generator's local variable b, which it then keeps using instead of the outer generator's b. Btw I'd put such nested generators on multiple lines for readability: gen_factory = ( (pow(b,a) for b in [b] for a in it.count(1)) for b in it.count(10,10) ) You could also use a different name if you worry that using the same name is confusing: gen_factory = ( (pow(my_b,a) for my_b in [b] for a in it.count(1)) for b in it.count(10,10) ) Btw in this case you could also use map / other tools instead of a generator, which don't have the issue in the first place. Some possibilities: gen_factory = ( map(pow, it.repeat(b), it.count(1)) for b in it.count(10,10) ) gen_factory = ( map(b.__pow__, it.count(1)) for b in it.count(10,10) ) import functools as ft gen_factory = ( map(ft.partial(pow, b), it.count(1)) for b in it.count(10,10) ) import operator as op gen_factory = ( it.accumulate(it.repeat(b), op.mul) for b in it.count(10,10) )
2
2
79,580,890
2025-4-18
https://stackoverflow.com/questions/79580890/how-to-select-element-from-two-complex-number-lists-to-get-minimum-magnitude-for
I have two Python lists (list1 and list2), each containing 51 complex numbers. At each index i, I can choose either list1[i] or list2[i]. I want to select one element per index (51 elements in total) such that the magnitude of the sum of the selected complex numbers is minimized. Sample code I have tried is below: import random import itertools import math list1 = [complex(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(51)] list2 = [complex(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(51)] min_magnitude = math.inf for choice in itertools.product([0, 1], repeat=51): current = [list1[i] if bit == 0 else list2[i] for i, bit in enumerate(choice)] total = sum(current) mag = abs(total) if mag < min_magnitude: min_magnitude = mag best_combination = current print(min_magnitude) print(best_combination) It takes a long time to run. Is there any alternative or technique to get the exact answer?
Let's slightly reinterpret your problem. Instead of strictly choosing one element from either of the two lists, start with an initial state where you've selected all elements from list1. The sum of this selection is your offset value, which we'll call C. Next, create a new list called list3, defined as list3 = list2 - list1. With this formulation, your problem becomes choosing any number of elements from list3 whose sum closely approximates -C. This is precisely the Subset Sum Problem. Although this is computationally intensive, there are several significantly more efficient methods than brute-force. A well-known approach mentioned in the above article (also suggested by @Robert) is the "meet-in-the-middle" technique. This technique splits the original set into two subsets and calculates all possible combinations within each subset. Although this part is brute-force, it becomes practical because each subset is half the original size. Once all combinations have been calculated, the next step is to select one item from each subset that most cancel each other out. Several methods exist for this, but I prefer using scipy's KDTree for simplicity. Here's a naive implementation to help understand the logic. (Note: this won't work for n=51) def meet_in_the_middle_naive(list1, list2): # Split the lists into two halves. n = len(list1) half_n = n // 2 list1_head, list1_tail = list1[:half_n], list1[half_n:] list2_head, list2_tail = list2[:half_n], list2[half_n:] # Calculate all combinations and their sums of each half. # This is technically a brute-force approach, but the length of the lists are small enough to be feasible. head_combinations = [(sum(c), c) for c in itertools.product(*zip(list1_head, list2_head))] tail_combinations = [(sum(c), c) for c in itertools.product(*zip(list1_tail, list2_tail))] # Build a KDTree and search for the nearest points between the two halves. def as_real_xy(c): # Convert a complex number to a real value pair (x, y). # This is required for the KDTree to work correctly. return c.real, c.imag head_points = np.array([as_real_xy(p) for p, _ in head_combinations]) tail_points = np.array([as_real_xy(p) for p, _ in tail_combinations]) head_tree = KDTree(head_points) distances, head_indexes = head_tree.query(-tail_points) # ^ Searching for the zero-sum combinations. # The above searches for a list of nearest neighbors for each point in head. # So, we need to find the nearest one in that list. tail_index = distances.argmin() head_index = head_indexes[tail_index] best_combination = head_combinations[head_index][1] + tail_combinations[tail_index][1] return abs(sum(best_combination)), list(best_combination) Although computationally improved, this naive implementation is still inefficient and memory-intensive. I'll skip the details since it's a programming detail, but I've further optimized this using more_itertools and numba. 
Here's the optimized version along with test and benchmark: import itertools import math import random import time import more_itertools import numpy as np from numba import jit from scipy.spatial import KDTree def generate_list(n): return [complex(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(n)] def baseline(list1, list2): assert len(list1) == len(list2) n = len(list1) min_magnitude = math.inf best_combination = None for choice in itertools.product([0, 1], repeat=n): current = [list1[i] if bit == 0 else list2[i] for i, bit in enumerate(choice)] total = sum(current) mag = abs(total) if mag < min_magnitude: min_magnitude = mag best_combination = current return min_magnitude, best_combination def meet_in_the_middle_naive(list1, list2): # Split the lists into two halves. n = len(list1) half_n = n // 2 list1_head, list1_tail = list1[:half_n], list1[half_n:] list2_head, list2_tail = list2[:half_n], list2[half_n:] # Calculate all combinations and their sums of each half. # This is technically a brute-force approach, but the length of the lists are small enough to be feasible. head_combinations = [(sum(c), c) for c in itertools.product(*zip(list1_head, list2_head))] tail_combinations = [(sum(c), c) for c in itertools.product(*zip(list1_tail, list2_tail))] # Build a KDTree and search for the nearest points between the two halves. def as_real_xy(c): # Convert a complex number to a real value pair (x, y). # This is required for the KDTree to work correctly. return c.real, c.imag head_points = np.array([as_real_xy(p) for p, _ in head_combinations]) tail_points = np.array([as_real_xy(p) for p, _ in tail_combinations]) head_tree = KDTree(head_points) distances, head_indexes = head_tree.query(-tail_points) # ^ Searching for the zero-sum combinations. # The above searches for a list of nearest neighbors for each point in head. # So, we need to find the nearest one in that list. tail_index = distances.argmin() head_index = head_indexes[tail_index] best_combination = head_combinations[head_index][1] + tail_combinations[tail_index][1] return abs(sum(best_combination)), list(best_combination) @jit(cache=True) def product_sum(arr_x, arr_y): """Calculate the sum of all combinations of two lists. Equivalent to: `[sum(current) for current in itertools.product(*zip(arr_x, arr_y))]` """ n = len(arr_x) total_combinations = 1 << n # 2^n combinations out = np.empty(total_combinations, dtype=arr_x.dtype) for i in range(total_combinations): current_sum = 0.0 for j in range(n): if (i >> (n - j - 1)) & 1: current_sum += arr_y[j] else: current_sum += arr_x[j] out[i] = current_sum return out def meet_in_the_middle_optimized(list1, list2): n = len(list1) half_n = n // 2 list1_head, list1_tail = list1[:half_n], list1[half_n:] list2_head, list2_tail = list2[:half_n], list2[half_n:] # Keeping the combinations themselves consumes a lot of memory, so we only keep the sums. head_points = product_sum(np.array(list1_head), np.array(list2_head)) tail_points = product_sum(np.array(list1_tail), np.array(list2_tail)) def as_real_xy(arr): # Equivalent to `np.array([(p.real, p.imag) for p in arr])` but much faster. return arr.view(f"f{arr.real.itemsize}").reshape(-1, 2) head_tree = KDTree(as_real_xy(head_points), balanced_tree=False) # Set False if the inputs are mostly random. distances, head_indexes = head_tree.query(-as_real_xy(tail_points), workers=-1) # -1 to use all CPUs. tail_index = distances.argmin() head_index = head_indexes[tail_index] # nth_product is equivalent to `itertools.product(...)[index]`. 
# With this, we can directly obtain the combination without iterating through all combinations. head_combination = more_itertools.nth_product(head_index, *zip(list1_head, list2_head)) tail_combination = more_itertools.nth_product(tail_index, *zip(list1_tail, list2_tail)) best_combination = head_combination + tail_combination return abs(sum(best_combination)), list(best_combination) def test(): for n in range(2, 15): for seed in range(10): random.seed(seed) list1 = generate_list(n) list2 = generate_list(n) expected = baseline(list1, list2) actual = meet_in_the_middle_naive(list1, list2) assert expected == actual, f"Naive results do not match! {n=}, {seed=}" actual = meet_in_the_middle_optimized(list1, list2) assert expected == actual, f"Optimized results do not match! {n=}, {seed=}" print("All tests passed!") def benchmark(): n = 51 random.seed(0) list1 = generate_list(n) list2 = generate_list(n) started = time.perf_counter() _ = meet_in_the_middle_optimized(list1, list2) elapsed = time.perf_counter() - started print(f"n={n}, elapsed={elapsed:.0f} sec") if __name__ == "__main__": test() benchmark() This solution solved n=51 in less than 30 seconds on my PC.
5
6
79,581,422
2025-4-18
https://stackoverflow.com/questions/79581422/how-to-plot-4sin2x-cos2x3-using-sympy
I am trying to plot the expression 4*sin(2*x)/cos(2*x)**3 in Sympy: from sympy import * from sympy.plotting import plot x = symbols('x') expr = 4*sin(2*x)/cos(2*x)**3 plot(expr) But all I get is: when it should look a bit like a horizontally-squished tangent graph:
SymPy's plotting module doesn't handle functions with poles very well. But SymPy Plotting Backends (which is a more advanced plotting module) is up for the task: from spb import plot plot(expr, (x, -2*pi, 2*pi), detect_poles=True, ylim=(-3, 3)) In particular: detect_poles=True: run the algorithm to detect poles (singularities). ylim=(-3, 3): limit the visualization on the y-axis in the specified range. Failing to set this keyword argument will get you a plot similar to what you showed. If you are interested in using that module, here is the installation page. The documentation is also full of examples. EDIT to reply to comment: Yes, it is necessary to manually activate the algorithm, otherwise false positives might be introduced, depending on the function. Since I'm here, I'm going to better explain these algorithms: detect_poles=False: used by default, no detection will be used. You will end up with something like this: Here, the vertical lines should not be present. detect_poles=True (the result of this algorithm is shown above): run a gradient-based algorithm on the numerical data. In short, if the gradient of the function computed between two consecutive points along the x-axis is greater than some threshold, then a singularity is found and the algorithm inserts a NaN point between them, which forces the vertical line to disappear. However, this strategy can be fooled by some functions that exhibit very high gradients: in these cases, the algorithm will insert NaN values even though there might not be any singularity. There are some parameters that can be used to fine tune the results of the visualization. The easiest one is to increase the number of discretization points (n=1000 by default). detect_poles="symbolic": it runs both a gradient-based algorithm on the numerical data, as well as a symbolic algorithm on the expression to be plotted, which involves the use of SymPy's solve. Depending on the symbolic expression, solve might not be able to solve for all singularities. Or worse, it could take forever to complete the task. Here is the output: Here, the vertical dotted lines indicate where the singularities are found by the symbolic algorithm. Given the limitations of both the gradient-based and symbolic algorithms, I (developer of that module) decided that it is best to leave detect_poles=False by default, and let the user enable the algorithm that better suits their needs. :) More examples of plotting discontinuities can be found on this tutorial page.
1
4
79,579,619
2025-4-17
https://stackoverflow.com/questions/79579619/why-does-32-bit-bitmasking-work-in-python-for-leetcode-single-number-ii-when-p
I'm trying to understand why the following solution for LeetCode's Single Number II works in Python: class Solution: def singleNumber(self, nums: List[int]) -> int: number = 0 for i in range(32): count = 0 for num in nums: if num & (1 << i): count += 1 if count % 3: number |= (1 << i) if number & (1 << 31): number -= (1 << 32) return number But I'm confused about a few things: In Python, integers are arbitrary precision, so they're not stored in 32-bit like in C or C++. sys.getsizeof() even shows 28 bytes for small integers. So how can we assume that the number fits in 32 bits and that bit 1 << 31 is actually the sign bit? Why do we loop only from i in range(32)? Why not less If I input small integers (like 2, 3, etc.), they don't "fill" 32 bits — so how can checking the 31st bit be reliable? Basically, since Python ints grow as needed and aren’t stored in 32-bit containers, how does this approach still work correctly when simulating 32-bit signed behavior? I tried understanding similar questions and answers (like on CodeReview.SE), and I get the general idea — Python ints are arbitrary precision and we simulate 32-bit behavior using bitmasking and shifting. But I'm still confused why this actually works reliably in Python. My Questions: Why can we safely assume a 32-bit simulation works in Python? Why is checking the 31st bit (1 << 31) meaningful in Python? Why doesn’t the arbitrary-size nature of Python integers break this logic?
Think of positive ints having infinitely many zero-bits in front and of negative ints having infinitely many one-bits in front: 3 = ...00000011 2 = ...00000010 1 = ...00000001 0 = ...00000000 -1 = ...11111111 -2 = ...11111110 -3 = ...11111101 -4 = ...11111100 That's how Python treats them, which is why for example (-3) & (1<<1000) is 21000. LeetCode's problem has numbers in the range -2^31 to 2^31-1. Let's say the 31 were 2 instead. Then the above -4 to 3 would be all possible values. You can see that for each number, all but the last two bits are always the same. Either they're all 0, or they're all 1. So it suffices to just find out the answer number's last three bits and duplicate the third-to-last bit to the infinite bits before. Let's say the correct answer for some input were -3. Then we'd first find that the last three bits are 101. Which represents the number 5. Then we check whether the third-to-last is 1. Since it is, we subtract 23=8 and end up with 5-8 = -3. With LeetCode's limit using 31, it likewise suffices to find out the answer number's last 32 bits (that's what the for i in range(32): block does) and duplicate the 32nd-to-last bit to the infinite bits before (that's what the if number & (1 << 31): number -= (1 << 32) does).
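A small sketch of that sign-extension step (the helper name is made up, just to illustrate the trick the solution's last lines use):

def to_signed_32(x: int) -> int:
    x &= 0xFFFFFFFF               # keep only the last 32 bits
    if x & (1 << 31):             # bit 31 set -> the 32-bit value is negative
        x -= 1 << 32              # duplicate that bit into all higher positions
    return x

print(to_signed_32(3))                   # 3
print(to_signed_32((-3) & 0xFFFFFFFF))   # -3
print((-3) & (1 << 100) != 0)            # True: negative ints have 1-bits arbitrarily high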
1
2
79,581,533
2025-4-18
https://stackoverflow.com/questions/79581533/find-pairs-of-keys-for-rows-that-have-at-least-one-property-in-common
I'm using polars with a data frame whose schema looks like this: Schema({'CustomerID': String, 'StockCode': String, 'Total': Int64}) interpreted as "Customer CustomerID bought Total of product StockCode." I'm looking for an efficient way to generate all unique pairs of customer IDs such that the two customers purchased at least one of the same product. So, given: import polars as pl df = pl.from_repr(""" ┌─────────────┬────────────┬───────┐ │ CustomerID ┆ StockCode ┆ Total │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞═════════════╪════════════╪═══════╡ │ A ┆ 123 ┆ 45.78 │ │ A ┆ 140 ┆ 10.26 │ │ B ┆ 125 ┆ 99.62 │ │ B ┆ 128 ┆ 23.65 │ │ B ┆ 140 ┆ 92.95 │ │ C ┆ 123 ┆ 45.78 │ │ D ┆ 145 ┆ 7.58 │ └─────────────┴────────────┴───────┘ """) the algorithm should produce: [ ( 'A', 'B' ), # Because of product 140 ( 'A', 'C' ) # Because of product 123 ] I'll then use this to compute a pairwise similarity measure (e.g. dot product or cosine) and then, as efficiently as possible, produce a data frame that looks like: Schema({'ID1': String, 'ID2': String, 'Similarity': Float64}) likely with a threshold for minimum similarity. Doing something like: from itertools import combinations row_keys = pl.Series( df.select(category_col) .unique( ).drop_nulls( ).collect() ).to_list() pairs = combinations(row_keys, 2) generates all possible pairs, but is hugely inefficient.
Use a self-join: (df.lazy() .join_where(df.lazy(), ((pl.col.StockCode == pl.col.StockCode_right) & (pl.col.CustomerID < pl.col.CustomerID_right))) .select("^CustomerID.*$") .unique() .collect(engine="streaming")) ┌────────────┬──────────────────┐ │ CustomerID ┆ CustomerID_right │ │ --- ┆ --- │ │ str ┆ str │ ╞════════════╪══════════════════╡ │ A ┆ C │ │ A ┆ B │ └────────────┴──────────────────┘
2
1
79,580,670
2025-4-18
https://stackoverflow.com/questions/79580670/fast-calculation-of-nth-generalized-fibonacci-number-of-order-k
How can I calculate Nth term in Fibonacci sequence of order K efficiently? For example, Tribonacci is Fibonacci order 3, Tetranacci is Fibonacci order 4, Pentanacci is Fibonacci order 5, and Hexanacci is Fibonacci order 6 et cetera. I define these series as follows, for order K, A0 = 1, Ai = 2i-1 (i ∈ [1, k]), Ak+1 = 2k - 1, Ai+1 = 2Ai - Ai-k-1 (i >= k + 1). For example, the sequences Fibonacci to Nonanacci are: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040] [1, 1, 2, 4, 7, 13, 24, 44, 81, 149, 274, 504, 927, 1705, 3136, 5768, 10609, 19513, 35890, 66012, 121415, 223317, 410744, 755476, 1389537, 2555757, 4700770, 8646064, 15902591, 29249425] [1, 1, 2, 4, 8, 15, 29, 56, 108, 208, 401, 773, 1490, 2872, 5536, 10671, 20569, 39648, 76424, 147312, 283953, 547337, 1055026, 2033628, 3919944, 7555935, 14564533, 28074040, 54114452, 104308960] [1, 1, 2, 4, 8, 16, 31, 61, 120, 236, 464, 912, 1793, 3525, 6930, 13624, 26784, 52656, 103519, 203513, 400096, 786568, 1546352, 3040048, 5976577, 11749641, 23099186, 45411804, 89277256, 175514464] [1, 1, 2, 4, 8, 16, 32, 63, 125, 248, 492, 976, 1936, 3840, 7617, 15109, 29970, 59448, 117920, 233904, 463968, 920319, 1825529, 3621088, 7182728, 14247536, 28261168, 56058368, 111196417, 220567305] [1, 1, 2, 4, 8, 16, 32, 64, 127, 253, 504, 1004, 2000, 3984, 7936, 15808, 31489, 62725, 124946, 248888, 495776, 987568, 1967200, 3918592, 7805695, 15548665, 30972384, 61695880, 122895984, 244804400] [1, 1, 2, 4, 8, 16, 32, 64, 128, 255, 509, 1016, 2028, 4048, 8080, 16128, 32192, 64256, 128257, 256005, 510994, 1019960, 2035872, 4063664, 8111200, 16190208, 32316160, 64504063, 128752121, 256993248] [1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 511, 1021, 2040, 4076, 8144, 16272, 32512, 64960, 129792, 259328, 518145, 1035269, 2068498, 4132920, 8257696, 16499120, 32965728, 65866496, 131603200, 262947072] Now, I am well aware of fast algorithms to calculate Nth Fibonacci number of order 2: def fibonacci_fast(n: int) -> int: a, b = 0, 1 bit = 1 << (n.bit_length() - 1) if n else 0 while bit: a2 = a * a a, b = 2 * a * b + a2, b * b + a2 if n & bit: a, b = a + b, a bit >>= 1 return a def matrix_mult_quad( a: int, b: int, c: int, d: int, e: int, f: int, g: int, h: int ) -> tuple[int, int, int, int]: return ( a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h, ) def fibonacci_binet(n: int) -> int: a, b = 1, 1 bit = 1 << (n.bit_length() - 2) if n else 0 while bit: a, b = (a * a + 5 * b * b) >> 1, a * b if n & bit: a, b = (a + 5 * b) >> 1, (a + b) >> 1 bit >>= 1 return b def fibonacci_matrix(n: int) -> int: if not n: return 0 a, b, c, d = 1, 0, 0, 1 e, f, g, h = 1, 1, 1, 0 n -= 1 while n: if n & 1: a, b, c, d = matrix_mult_quad(a, b, c, d, e, f, g, h) e, f, g, h = matrix_mult_quad(e, f, g, h, e, f, g, h) n >>= 1 return a In [591]: %timeit fibonacci_matrix(16384) 751 μs ± 4.74 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [592]: %timeit fibonacci_binet(16384) 132 μs ± 305 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [593]: %timeit fibonacci_fast(16384) 114 μs ± 966 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) But these of course only deal with Fibonacci-2 sequence, they can't be used to calculate Nth term in higher order Fibonacci sequences. 
In particular, only the cubic equation for Tribonacci: x3 - x2 - x - 1 = 0 and the quartic equation for Tetranacci: x4 - x3 - x2 - x - 1 = 0 have solutions that can be found algebraically, the quintic equation x5 - x4 - x3 - x2 - x - 1 = 0 has solutions that can't be found with sympy, so the fast doubling method only works with Fibonacci, Tribonacci and Tetranacci. But I know of two ways to compute higher orders of Fibonacci sequences, the first can be used to efficiently generate all first N Fibonacci numbers of order K and has time complexity of O(n), the second is matrix exponentiation by squaring and has time complexity of O(log2n) * O(k3). First, we only need two numbers to get the next Fibonacci number of order K, the equation is given above, we only need one left shift and one subtraction for each term. Matrix = list[int] def onacci_fast(n: int, order: int) -> Matrix: if n <= order + 1: return [1] + [1 << i for i in range(n - 1)] last = n - 1 result = [1] + [1 << i for i in range(order + 1)] + [0] * (last - order - 1) result[start := order + 1] -= 1 for a, b in zip(range(start, last), range(1, last - order)): result[a + 1] = (result[a] << 1) - result[b] return result ONACCI_MATRICES = {} IDENTITIES = {} def onacci_matrix(n: int) -> Matrix: if matrix := ONACCI_MATRICES.get(n): return matrix mat = [1] * n + [0] * (n * (n - 1)) for i in range(1, n): mat[i * n + i - 1] = 1 ONACCI_MATRICES[n] = mat return mat def onacci_pow(n: int, k: int) -> np.ndarray: base = np.zeros((k, k), dtype=np.uint64) base[0] = 1 for i in range(1, k): base[i, i - 1] = 1 prod = np.zeros((k, k), dtype=np.uint64) for i in range(k): prod[i, i] = 1 return [(prod := prod @ base) for _ in range(n)] def identity_matrix(n: int) -> Matrix: if matrix := IDENTITIES.get(n): return matrix result = [0] * n**2 for i in range(n): result[i * n + i] = 1 IDENTITIES[n] = result return result def mat_mult(mat_1: Matrix, mat_2: Matrix, side: int) -> Matrix: # sourcery skip: use-itertools-product result = [0] * (square := side**2) for y in range(0, square, side): for x in range(side): e = mat_1[y + x] for z in range(side): result[y + z] += mat_2[x * side + z] * e return result def mat_pow(matrix: Matrix, power: int, n: int) -> Matrix: result = identity_matrix(n) while power: if power & 1: result = mat_mult(result, matrix, n) matrix = mat_mult(matrix, matrix, n) power >>= 1 return result def onacci_nth(n: int, k: int) -> int: return mat_pow(onacci_matrix(k), n, k)[0] In [621]: %timeit onacci_nth(16384, 2) 822 μs ± 5.88 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [622]: %timeit onacci_fast(16384, 2) 13.8 ms ± 92.7 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [623]: %timeit onacci_fast(16384, 3) 16 ms ± 63.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [624]: %timeit onacci_fast(16384, 4) 17 ms ± 66.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [625]: %timeit onacci_nth(16384, 3) 4.02 ms ± 32 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [626]: %timeit onacci_nth(16384, 4) 10.9 ms ± 71 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [627]: %timeit onacci_nth(16384, 5) 22.5 ms ± 632 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [628]: %timeit onacci_nth(16384, 6) 39.4 ms ± 314 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [629]: %timeit onacci_fast(16384, 6) 17.5 ms ± 27.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [630]: %timeit onacci_fast(16384, 7) 17.6 ms ± 115 μs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) In [631]: %timeit onacci_nth(16384, 7) 62.7 ms ± 347 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [632]: %timeit onacci_nth(32768, 16) 2.29 s ± 5.78 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [633]: %timeit onacci_fast(32768, 16) 56.2 ms ± 271 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In the easiest case, onacci_nth(16384, 2) is significantly slower than fibonacci_matrix despite using exactly the same method, because of added overhead of using lists. And although the while loop has log2n iterations, the matrices are of size k2 and for each cell k multiplications and k - 1 additions have to be performed, for a total of k3 multiplications and k3 - k2 additions each iteration, this cost grows very quickly, so the matrix exponentiation method is outperformed by the linear iterative method very quickly, because although the iterative method has more iterations, each iteration is far cheaper. The matrix exponentiation uses all of the k*k matrix but we need fewer numbers. The following are the state transitions for Hexanacci: [array([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0]], dtype=uint64), array([[2, 2, 2, 2, 2, 1], [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0]], dtype=uint64), array([[4, 4, 4, 4, 3, 2], [2, 2, 2, 2, 2, 1], [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]], dtype=uint64), array([[8, 8, 8, 7, 6, 4], [4, 4, 4, 4, 3, 2], [2, 2, 2, 2, 2, 1], [1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]], dtype=uint64), array([[16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2], [ 2, 2, 2, 2, 2, 1], [ 1, 1, 1, 1, 1, 1], [ 1, 0, 0, 0, 0, 0]], dtype=uint64), array([[32, 31, 30, 28, 24, 16], [16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2], [ 2, 2, 2, 2, 2, 1], [ 1, 1, 1, 1, 1, 1]], dtype=uint64), array([[63, 62, 60, 56, 48, 32], [32, 31, 30, 28, 24, 16], [16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2], [ 2, 2, 2, 2, 2, 1]], dtype=uint64), array([[125, 123, 119, 111, 95, 63], [ 63, 62, 60, 56, 48, 32], [ 32, 31, 30, 28, 24, 16], [ 16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4], [ 4, 4, 4, 4, 3, 2]], dtype=uint64), array([[248, 244, 236, 220, 188, 125], [125, 123, 119, 111, 95, 63], [ 63, 62, 60, 56, 48, 32], [ 32, 31, 30, 28, 24, 16], [ 16, 16, 15, 14, 12, 8], [ 8, 8, 8, 7, 6, 4]], dtype=uint64), array([[492, 484, 468, 436, 373, 248], [248, 244, 236, 220, 188, 125], [125, 123, 119, 111, 95, 63], [ 63, 62, 60, 56, 48, 32], [ 32, 31, 30, 28, 24, 16], [ 16, 16, 15, 14, 12, 8]], dtype=uint64)] Now, for each state, we can shift the rows down by 1 to get the lower k - 1 rows of the next state, and the top row can be obtained by adding the first number to the second number and put in first position, adding the first number to the third number and put to second position, adding first to fourth put to third... put the first number to last position. Or in Python: def next_onacci(arr: np.ndarray) -> np.ndarray: a = arr[0, 0] return np.concatenate([[[a + b for b in arr[0, 1:]] + [a]], arr[:-1]]) So we only need k numbers to get the next state. But we can do better, the Nth term is found by raising the matrix to the Nth and accessing mat[0, 0]. We can use mat[0, 0] + mat[0, 1] to get the next mat[0, 0], or we can use 2 * mat[0, 0] - mat[-1, -1]. 
But I have only found ways to calculate the numbers using these relationships linearly, I can't use them to do exponentiation by squaring. Is there a faster way to compute Nth term of higher order Fibonacci sequence? Of course onacci_pow overflows extremely quickly, so it is absolutely useless to calculate the terms for large N. And so I don't use it to calculate terms for large N. I have implemented my own matrix multiplication and exponentiation to calculate with infinite precision. onacci_pow is used to verify the correctness of my matrix multiplication for small N. onacci_pow is correct as long as it doesn't overflow, and I know exactly on which power it overflows. And for the values of N and K, assume K is between 25 and 100, and N is between 16384 and 1048576.
Yes, there is a faster way to compute this. Matrix Diagonalisation The matrix exponentiation can be optimised if the matrix is diagonalisable. It turns out it is diagonalisable. Indeed, we know that the n-order Fibonacci matrix M is a square matrix and all its rows and columns are linearly independent (by design). This means the rank of the matrix is n (i.e. full-rank matrix). Here is a (more complex) alternative explanation: the Spectral theorem states that a normal matrix can be diagonalised by an orthonormal basis of eigenvectors. Since we can prove that M @ M.T is equal to M.T @ M so M is a normal matrix and so it can be diagonalised. This means M = Q @ L @ inv(Q) where Q contains the eigenvectors (one per column) and L is a diagonal matrix containing the eigenvalues (one on each item of the diagonal) (see singular value decomposition, a.k.a. SVD, and eigen-decomposition of a matrix for more information). We know that all eigenvalues are non-zeros ones since the matrix is full-rank. We can compute that in Numpy with L, Q = np.linalg.eig(M). Note that the eigen values and eigenvector can contain complex numbers. We can check that everything so far works as expected with: D = np.diag(L) P = np.linalg.inv(Q) T = Q @ D @ P np.allclose(M, T) We can now compute Mⁿ more efficiently than with square exponentiation. Indeed: Mⁿ = (Q D P)ⁿ = (Q D P) (Q D P) (Q D P) ... (Q D P) = Q D (P Q) D (P Q) D (P ... Q) D P = Q D I D I D (I ... I) D P since P @ Q = I = Q D D D ... D P = Q Dⁿ P Q @ Dⁿ @ P can be computed in O(k log n + k³). Indeed, Dⁿ is actually equal to D**n (item-wise exponentiation) in Numpy because D is a diagonal matrix. In fact, we can compute Mⁿ = Q @ Dⁿ @ P with: Dn = np.diag(L**n) P = np.linalg.inv(Q) Mn = Q @ Dn @ P This method is asymptotically about O(k²) times faster than the O(k³ log n). Actually, for fixed-size floating-point (FP) numbers, it is done in O(k³) time! Unfortunately, fixed-sized FP numbers are certainly not so accurate for big numbers especially for high-order Fibonacci computation (i.e. performing a SVD computation of large matrices). This method can be seen as a very good approximation for the parameter you use for k < 100. That being said, the computation of L**n fails for values n > 16384 (actually even for much smaller values of n). This is especially true when L contains complex numbers. You can write your own eigen-decomposition of a matrix with arbitrary-large FP numbers (e.g. using the decimal package) though this is tedious to write and the code performance will certainly be far from being optimal (despite the algorithm is efficient). Sympy and mpmath might help for that. Note on arbitrary-large numbers Considering arbitrary-large FP/integer numbers, I expect the computation of the n-order Fibonacci term to be more expensive than expected first. Indeed, AFAIK F(n) is a number having O(n) bits! This means the above method is actually O(n) more expensive with such large numbers assuming basic operations are done in O(n) time. This is the case for the addition but not the multiplication. The native CPython multiplication of large numbers uses the Karatsuba algorithm running in about O(n**1.6) time, while decimal.Decimal uses the Number Theoretic Transform algorithm apparently running in O(n log n) time (thanks to @KellyBundy for pointing that out). The optimal complexity for a multiplication of n-bit numbers is O(n log n). 
As of this sentence, I will assume a O(n log n) algorithm is used for multiplying arbitrary-large numbers (so a code using decimal.Decimal for example). In fact, this also applies to the other algorithm provided in the question: you missed the fact that numbers can be huge! Considering that, the above algorithm should run in O(k n log² n + n k³) with arbitrary-large FP numbers and considering a O(n log n) multiplication time (for the n-bit numbers). Improving the diagonalisation-based algorithm Fortunately, we do not need to compute the whole matrix Mn = Q @ Dn @ P but just a single item. We can replace the last matrix multiplication with a simple dot product. Thus, we only need a column of P and a single line of Q @ Dn. The later can be computed more efficiently since Dn is a diagonal matrix: Mn[i, j] = np.dot(Q[i] * L**n, P[:,j]) = np.sum(Q[i] * L**n * P[:,j]) = np.sum(Q[i] * P[:,j] * L**n) While we can compute Q[i] * P[:,j] efficiently independently of n, we certainly need a very-good precision for this computation since L**n is huge for relatively-big value of n. This version should still run in O(k log n + k³) for fixed-size numbers (L**n is the expensive part) and it should run in O(k n log² n) for arbitrary-large FP numbers. I think the complexity of the best algorithm (using arbitrary-large integers) is Ω(k n) since the k-order Fibonacci needs has k terms with a precision of n-bits. This assume the terms are not trivial though. Assuming that, this means this algorithm is actually pretty good already. Possible further optimisations One way to improve the diagonalisation-based algorithm further may be to algebraically compute the k-order Fibonacci matrix terms (typically using sympy assuming it can be powerful enough). You might see some terms cancelling each other or some pattern enabling further optimisations. You can find an example here for the classical Fibonacci (with 2x2 matrices). I am not very optimistic since the 2-order algebraic solution found contains 2 irreducible terms, so I expect at least k terms with a k-order Fibonacci. Besides, I think the algorithmic solution found is less numerically stable than the above approach due to a catastrophic cancellation (i.e. ϕⁿ−ψⁿ).
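A minimal end-to-end sketch of the diagonalisation idea in plain float64 NumPy (my own code; as discussed above it is only a good approximation and loses accuracy for large n because of the fixed-precision limits):
import numpy as np

def onacci_nth_eig(n: int, k: int) -> float:
    # companion matrix of the k-order Fibonacci recurrence
    M = np.zeros((k, k))
    M[0, :] = 1
    for i in range(1, k):
        M[i, i - 1] = 1
    L, Q = np.linalg.eig(M)          # M = Q @ diag(L) @ inv(Q)
    P = np.linalg.inv(Q)
    # single entry of M**n: M**n[0, 0] = sum_j Q[0, j] * L[j]**n * P[j, 0]
    return float(np.real(np.sum(Q[0] * L**n * P[:, 0])))

print(round(onacci_nth_eig(10, 2)))  # 89, matching the question's onacci_nth(10, 2)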
8
5
79,581,784
2025-4-18
https://stackoverflow.com/questions/79581784/python-editing-file-in-loop-how-to-keep-changes-after-each-xml
I need to clean a few XML files of unneeded elements. I came up with this wildcard replacement (substituting matches with nothing) to get rid of the unneeded <EmbeddedImage> elements, but I'm missing how to keep each change across the loop iterations; it looks like the result gets overwritten each time. I thought Python would keep it, but that's not the case. Do I need some additional logic to end up with only the single Brand_1 in my final tree? I'd appreciate it if you could point out how I can fix it. import re from bs4 import BeautifulSoup as bs filepath = "C://ddd/test.xml" xml = ''' <Images> <EmbeddedImage Name="Brand_1"> <ImageData>/9j/4AAQSkZJB//2Q==</ImageData> </EmbeddedImage> <EmbeddedImage Name="Brand__2XX"> <ImageData>/9j/4AAQSkZJB//2Q==JB//2</ImageData> </EmbeddedImage> <EmbeddedImage Name="Brand___3XX"> <ImageData>/9j/4AAQSkZAAQSkkB//2Q=AAQSk=</ImageData> </EmbeddedImage> <Images>''' Bs_data = bs(xml,'xml') #contant = xml ?? content = xml for item in Bs_data.select ('EmbeddedImage'): #print('Name= ',item.get('Name')) if item.get('Name') != 'Brand_1': wild = '<EmbeddedImage Name="'+item.get('Name')+'">.*</EmbeddedImage>' ##<@><< first </ How??? content = re.sub(wild, '', content , flags=re.DOTALL) print ('------------',wild,'::::',content) print(content) # only Brand_1
Try to operate directly with Bs.data from bs4 import BeautifulSoup as bs xml = ''' <Images> <EmbeddedImage Name="Brand_1"> <ImageData>/9j/4AAQSkZJB//2Q==</ImageData> </EmbeddedImage> <EmbeddedImage Name="Brand__2XX"> <ImageData>/9j/4AAQSkZJB//2Q==JB//2</ImageData> </EmbeddedImage> <EmbeddedImage Name="Brand___3XX"> <ImageData>/9j/4AAQSkZAAQSkkB//2Q=AAQSk=</ImageData> </EmbeddedImage> </Images> ''' soup = bs(xml, 'xml') for item in soup.find_all('EmbeddedImage'): if item.get('Name') != 'Brand_1': item.decompose() print(soup.prettify())
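If the cleaned tree should also go back to disk (a guess at the end goal, since the question reads the XML from filepath), a hypothetical follow-up could be:
# hypothetical write-back; filepath is the variable from the question
with open(filepath, "w", encoding="utf-8") as f:
    f.write(str(soup))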
3
4
79,581,369
2025-4-18
https://stackoverflow.com/questions/79581369/why-does-python-allow-for-multiple-methods-with-the-same-name
Python does not allow overloading. I therefore expect it to only allow a single class method with the same name. This is not the case. This code runs without issue. class DoubleTest: def __init__(self): self._generate_thing() def _generate_thing(self, entry): print(entry) def _generate_thing(self): print("do the thing") This code produces a TypeError about the wrong number of supplied parameters. class DoubleTest: def __init__(self): self._generate_thing("test") def _generate_thing(self, entry): print(entry) def _generate_thing(self): print("do the thing") Calling dir(test) shows me that there is only 1 _generate_thing method. But why am I allowed to define it twice and how did it choose which one to use?
Python doesn't allow multiple methods with the same name, but it does allow you to redefine methods and functions. As a result, only the last definition of the duplicated name takes effect; the earlier one is silently discarded, because the name has already been rebound to the later function by the time the class definition completes.
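A quick sketch (mine, not from the question) that makes the rebinding visible:
class DoubleTest:
    def _generate_thing(self, entry):
        print(entry)

    def _generate_thing(self):          # rebinds the same name, replacing the definition above
        print("do the thing")

print([name for name in vars(DoubleTest) if name == '_generate_thing'])  # ['_generate_thing'] - a single entry
DoubleTest()._generate_thing()          # prints "do the thing": only the last definition survives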
1
5
79,576,800
2025-4-16
https://stackoverflow.com/questions/79576800/numpy-sort-n-dimensional-array-similar-to-pythons-list-sort
I am trying to sort a list of 3D coordinates. The target is to get the same order that the basic Python list sort produces, which is the same as the meshgrid + column_stack order. Python sorts by the first value of each element first; in case of ties, the second element is checked, and so on. I would like to do the same in numpy, especially with the argsort function, as it gives an index list for the sorted order. I need this list to sort other numpy arrays with the same first dimension. I think this post is related, but does not use numpy to do the sorting. Following a small test bench. import numpy as np num = 10 x = np.linspace(1, 3, num) y = np.linspace(4, 6, num) z = np.linspace(7, 9, num) xg, yg , zg = np.meshgrid(x, y, z, indexing='ij', sparse=False) test = np.column_stack((xg.ravel(), yg.ravel(), zg.ravel())) print('Target ordering', test) np.random.shuffle(test) print('\nShuffeld', test) python_sorted = sorted(test.tolist()) python_sorted = np.array(python_sorted) print('\nPython list sorted', python_sorted) numpy_sort_idx = test.argsort(axis=0) numpy_sorted = test[numpy_sort_idx] print('\nNumpy arg sorted', numpy_sorted)
You could use lexsort on the reversed columns: import numpy as np num = 10 x = np.linspace(1, 3, num) y = np.linspace(4, 6, num) z = np.linspace(7, 9, num) xg, yg , zg = np.meshgrid(x, y, z, indexing='ij', sparse=False) test = np.column_stack((xg.ravel(), yg.ravel(), zg.ravel())) print('Target ordering\n', test) np.random.shuffle(test) print('\nShuffled\n', test) python_sorted = sorted(test.tolist()) python_sorted = np.array(python_sorted) print('\nPython list sorted\n', python_sorted) idx = np.lexsort(test[:, ::-1].T) numpy_sorted = test[idx] print('\nLexsorted using reversed columns\n', numpy_sorted) The order of the columns is reversed due to the precedence of the keys passed to lexsort. As the first column should be the primary sort order, it must come last. From the documentation: The last key in the sequence is used for the primary sort order, ties are broken by the second-to-last key, and so on. Output form the test bench Target ordering [[1. 4. 7. ] [1. 4. 7.22222222] [1. 4. 7.44444444] ... [3. 6. 8.55555556] [3. 6. 8.77777778] [3. 6. 9. ]] Shuffled [[3. 5.11111111 7.22222222] [2.77777778 5.11111111 8.11111111] [1.66666667 5.77777778 7.22222222] ... [2.55555556 4.88888889 7.22222222] [2.55555556 4.88888889 7.66666667] [2.55555556 4.66666667 7.66666667]] Python list sorted [[1. 4. 7. ] [1. 4. 7.22222222] [1. 4. 7.44444444] ... [3. 6. 8.55555556] [3. 6. 8.77777778] [3. 6. 9. ]] Lexsorted using reversed columns [[1. 4. 7. ] [1. 4. 7.22222222] [1. 4. 7.44444444] ... [3. 6. 8.55555556] [3. 6. 8.77777778] [3. 6. 9. ]]
1
2
79,574,572
2025-4-15
https://stackoverflow.com/questions/79574572/powershell-subprocess-launched-via-debugpy-doesn-t-inherit-environment-variables
Problem I'm encountering an issue where launching PowerShell via Python’s subprocess.Popen() works as expected during normal execution, but in debug mode (using debugpy/cursor) key environment variables (e.g. PROGRAMFILES and LOCALAPPDATA) are empty. In contrast, when I run a CMD command (e.g. echo %PROGRAMFILES%), the environment variables are correctly inherited. Environment Windows 22H2 build 19045.5487 Python 3.10.16 What I've Tested (V1) I created a small test program with three functions: one for PowerShell with the -NoProfile option (print_psh), one for PowerShell without that option (print_psh_with_profile), and one for CMD (print_cmd). I also made a variant (print_psh_with_profile_inject_env) where I pass env=os.environ.copy() explicitly. This is the version before @mklement0 mentioned about shell option. The Write-Host problem isn't affected, so I'm keeping it in V1. import subprocess import os def print_psh(cmd): with subprocess.Popen( "powershell -NoProfile " + '"' + f"$ErrorActionPreference='silentlycontinue'; $tmp = ({cmd}); if ($tmp){{echo $tmp; Exit;}}" + '"', stdout=subprocess.PIPE, stdin=subprocess.DEVNULL, stderr=subprocess.DEVNULL, shell=True, # Is the "powershell" expression recognized as a CMD.exe command? This returns nothing, but it works without error.(see the description about *V2* code) ) as stream: cmm = stream.communicate() stdout = cmm[0].decode() print(f"NoProfile: {cmd} = {stdout}") def print_psh_with_profile(cmd): with subprocess.Popen( "powershell " + '"' + f"$ErrorActionPreference='silentlycontinue'; $tmp = ({cmd}); if ($tmp){{echo $tmp; Exit;}}" + '"', stdout=subprocess.PIPE, stdin=subprocess.DEVNULL, stderr=subprocess.DEVNULL, shell=True, # Is the "powershell" expression recognized as a CMD.exe command? This returns nothing, but it works without error.(see the description about *V2* code) ) as stream: cmm = stream.communicate() stdout = cmm[0].decode() print(f"WithProfile: {cmd} = {stdout}") def print_psh_with_profile_inject_env(cmd): with subprocess.Popen( "powershell " + '"' + f"$ErrorActionPreference='silentlycontinue'; $tmp = ({cmd}); if ($tmp){{echo $tmp; Exit;}}" + '"', stdout=subprocess.PIPE, stdin=subprocess.DEVNULL, stderr=subprocess.DEVNULL, shell=True, # Is the "powershell" expression recognized as a CMD.exe command? This returns nothing, but it works without error.(see the description about *V2* code) env=os.environ.copy(), ) as stream: cmm = stream.communicate() stdout = cmm[0].decode() print(f"WithProfile(inject env): {cmd} = {stdout}") def print_cmd(cmd): with subprocess.Popen( cmd, stdout=subprocess.PIPE, stdin=subprocess.DEVNULL, stderr=subprocess.DEVNULL, shell=True, # use commandline commands. ) as stream: cmm = stream.communicate() stdout = cmm[0].decode() print(f"CMD.EXE: {cmd} = {stdout}") print_psh("$env:PROGRAMFILES") print_psh("$env:LOCALAPPDATA") print_cmd("echo %PROGRAMFILES%") print_cmd("echo %LOCALAPPDATA%") print_psh_with_profile("$env:PROGRAMFILES") print_psh_with_profile("$env:LOCALAPPDATA") print_psh_with_profile_inject_env("$env:PROGRAMFILES") print_psh_with_profile_inject_env("$env:LOCALAPPDATA") Output (V1) Normal Execution (non-debug mode): NoProfile: $env:PROGRAMFILES = C:\Program Files NoProfile: $env:LOCALAPPDATA = C:\Users\(USERNAME)\AppData\Local CMD.EXE: echo %PROGRAMFILES% = C:\Program Files CMD.EXE: echo %LOCALAPPDATA% = C:\Users\(USERNAME)\AppData\Local WithProfile: $env:PROGRAMFILES = [profile script output] ... C:\Program Files WithProfile: $env:LOCALAPPDATA = [profile script output] ... 
C:\Users\(USERNAME)\AppData\Local WithProfile(inject env): $env:PROGRAMFILES = C:\Program Files WithProfile(inject env): $env:LOCALAPPDATA = C:\Users\(USERNAME)\AppData\Local Debug Mode (using debugpy): NoProfile: $env:PROGRAMFILES = NoProfile: $env:LOCALAPPDATA = CMD.EXE: echo %PROGRAMFILES% = C:\Program Files CMD.EXE: echo %LOCALAPPDATA% = C:\Users\(USERNAME)\AppData\Local WithProfile: $env:PROGRAMFILES = WithProfile: $env:LOCALAPPDATA = WithProfile(inject env): $env:PROGRAMFILES = WithProfile(inject env): $env:LOCALAPPDATA = What I've Tested (V2) I changed the code more precise, then I obtained a new error and clue. This is the version before @mklement0 mentioned about shell option. Change shell option to False in all print_psh family because it might be secure that 'powershell' expression is not treat as CMD.EXE command. ... def print_psh...(cmd): with subprocess.Popen( "powershell -NoProfile " + '"' + ... stderr=subprocess.DEVNULL, - shell=True, # Is the "powershell" expression recognized as a CMD.exe command? This returns nothing, but it works without error.(see the description about *V2* code) + shell=False, # PowerShell commands are indirectlly called from a new process. ) as stream: ... Add this code on the beginning of the code V1. +def print_psh_test(): + cmd = "powershell" + with subprocess.Popen( + cmd, + stdout=subprocess.PIPE, + stdin=subprocess.DEVNULL, + stderr=subprocess.DEVNULL, + shell=False, # PowerShell commands are indirectlly called from +a new process. + ) as stream: + cmm = stream.communicate() + print("test to run powershell.") ... +print_psh_test() print_psh("$env:PROGRAMFILES") print_psh("$env:LOCALAPPDATA") ... Output (V2) Normal Execution (non-debug mode): (Same as V1) Debug Mode (using debugpy): <Error occurs: below stacktrace> Exception has occurred: FileNotFoundError [WinError 2] The system cannot find the file specified. File "C:\...\{source_file}.py", line 6, in print_psh_test with subprocess.Popen( File "C:\...\{source_file}.py", line 71, in <module> print_psh_test() FileNotFoundError: [WinError 2] The system cannot find the file specified. Test in the Miniconda Powershell Prompt. Activate the virtual environment. Run the command in the Prompt. Wait to attach manually. python -m debugpy --listen 5678 --wait-for-client ./{source_file}.py.` Create the launch task (.vscode/launch.json) ... { "name": "Python: Attach", "type": "python", "request": "attach", "connect": { "host": "localhost", "port": 5678 }, "pathMappings": [ { "localRoot": "${workspaceFolder}", "remoteRoot": "." } ], "justMyCode": true } ... Attach the pydebug from VSCode. Debug Mode (using debugpy manually): (Same as Normal Execution) What I've Tried (V1) Removing -NoProfile did not change the result in debug mode. (V1) Passing env=os.environ.copy() explicitly also had no effect. (V1) When using CMD (via echo %PROGRAMFILES%), the environment variables are correctly inherited. New! (V2) Popen powershell with shell=False option will cause the FileNotFound error, although not with shell=Ture. New! (V2) This could be due to any effect on VSCode's Integrated Terminal and pydebug combination, and it might be specific to my environment. (@Grismar tested it in the same enviornment, but did not reproduce the problem) My Question It appears that when running under debugpy (or the associated debug environment), the PowerShell subprocess is launched with an environment that lacks the expected variables (they are empty), while CMD subprocesses inherit the environment normally. 
Has anyone encountered this behavior with debugpy or similar debuggers? Is debugpy known to override or clear the environment when launching PowerShell subprocesses? Are there any workarounds or debug configuration settings (e.g., in launch.json) to ensure that the full environment is passed to PowerShell even in debug mode? Any insights or suggestions would be greatly appreciated. Background and How I Got Here This issue was discovered while trying to use webdriver_manager to automatically download and launch a compatible version of ChromeDriver in a script launched under debugpy. I noticed that webdriver_manager was failing to detect the installed Chrome version on my system. Upon closer inspection, I found that it internally attempts to retrieve the Chrome version using PowerShell commands (e.g., accessing registry keys or file paths that rely on environment variables like PROGRAMFILES). To understand why this was failing only in debug mode, I traced the call stack and found that it eventually reaches a subprocess.Popen() call that runs PowerShell. This led me to test minimal reproducible examples using Popen directly, where I discovered that under debugpy, environment variables expected by PowerShell are inexplicably missing—while the same code behaves correctly outside of debug mode or when invoking CMD instead. Hence, it seems that the root cause of webdriver_manager failing in debug mode stems from PowerShell being launched in an environment that lacks essential variables like PROGRAMFILES or LOCALAPPDATA. Enviornment (Editor) VSCode 1.96.2 Extensions (latest at 2025/4/16) Python @2025.4.0 Python Debugger @2025.4.1 Pylance @2025.4.1
Self‑Answer: Root Cause and Fix Apologies for the confusion and for taking up your time with this—it was my own mistake. I really appreciate your help. After all the investigation above, I finally discovered the true culprit: a project‑root .env file that I had added for other tooling (e.g. to set JAVA_HOME for a Processing‑like Python library py5). That file contained something like: # .env at the workspace root JAVA_HOME=C:\Some\Java\Path PATH=$JAVA_HOME$;$PATH Why this caused the issue Normal execution in VSCode’s Integrated Terminal sources .env and appends it to the existing environment, so your real PATH (and PROGRAMFILES, LOCALAPPDATA, etc.) remain intact—PowerShell and CMD subprocesses work fine. Debug mode with debugpy, however, appears to load only the variables from .env and override the entire process environment. As a result: PATH becomes exactly C:\Some\Java\Path\bin;$PATH (no Windows system paths) powershell.exe can’t be found (or finds no inherited variables like PROGRAMFILES) Any $env:… lookups return empty strings The fix Remove the PATH variable in the .env file. # .env — extend the existing PATH instead of overwriting it JAVA_HOME=C:\Some\Java\Path PATH=%JAVA_HOME%;%PATH% # Wrong. VARIABLES are not allowed in the .env files, only LITERALS. It is difficult to do anything valid with the PATH variable from a .env file when Python is launched in debug mode from the VS Code integrated terminal using PowerShell. Follow-up Question While the original question has been resolved, a deeper and more specific issue remains regarding how VS Code's debugger handles the PATH environment variable in certain cases. I have posted a follow-up question here for further discussion and clarification: 🔗 Why does VS Code debugger reset PATH from .env, breaking subprocess behavior?
3
0
79,580,208
2025-4-17
https://stackoverflow.com/questions/79580208/how-to-add-jitter-to-plotly-go-scatter-in-python-when-mode-lines
I have a dataframe as such: Starting point Walking Driving Lunch 0 8 4 Shopping 0 7 3 Coffee 0 5 2 Where I want to draw, for each index, a green line from "Starting point" -> "Walking", and a red line from "Starting point" -> "Coffee". To do this, I loop through both the columns and the index, as such: for column in df7.columns: for idx in df7.index: fig9.add_trace( go.Scatter( # chart # chart # chart ) ) Which gives me the following chart of two lines, colored by cycle, overlapping. The question is: how can I modify the code in plotly python to create jitter in the y axis for the lines of different cycles? In other words, the shorter line will be above the longer line. Full MRE: df7 = pd.DataFrame({ 'Starting point': [0, 0, 0], 'Walking': [8, 7, 5], 'Biking': [4, 3, 2] }, index=['Lunch', 'Shopping', 'Coffee']) fig9 = go.Figure() color_cyc = cycle(["#888888", "#E2062B"]) symbol_cyc = cycle(["diamond", "cross"]) for column in df7.columns: color=next(color_cyc) for idx in df7.index: fig9.add_trace( go.Scatter( y=[idx] * len(df7.loc[idx, ["Starting point", column]]), x=df7.loc[idx, ["Starting point", column]], showlegend=False, mode="lines+markers", marker={ "color": color, "symbol": "diamond", # "jitter": 0.4, }, ), ) fig9 Thanks very much.
Try to use offset and then enumerate import pandas as pd import plotly.graph_objects as go from itertools import cycle df7 = pd.DataFrame({ 'Starting point': [0, 0, 0], 'Walking': [8, 7, 5], 'Biking': [4, 3, 2] }, index=['Lunch', 'Shopping', 'Coffee']) fig9 = go.Figure() color_cyc = cycle(["#888888", "#E2062B"]) symbol_cyc = cycle(["diamond", "cross"]) col_offsets = {'Walking': 0.15, 'Biking': -0.15} for column in ['Walking', 'Biking']: color = next(color_cyc) offset = col_offsets[column] for i, idx in enumerate(df7.index): y_val = i + offset fig9.add_trace( go.Scatter( y=[y_val, y_val], x=[df7.loc[idx, "Starting point"], df7.loc[idx, column]], showlegend=False, mode="lines+markers", marker=dict( color=color, symbol="diamond" ), line=dict(color=color) ) ) fig9.update_yaxes( tickvals=list(range(len(df7.index))), ticktext=list(df7.index) ) fig9.show() Output:
2
2
79,579,904
2025-4-17
https://stackoverflow.com/questions/79579904/why-is-there-a-complex-argument-in-the-array-here-trying-to-do-dtfs
I have a window signal for which I calculate the Fourier coefficients, but in the output I get a small complex value [3 orders of magnitude smaller at the origin and of the same order of magnitude at the edges of my sampling range (from -1000 to 1000)] where the output should be purely real. If it were just a floating-point approximation error it wouldn't have the sin-like pattern I found, so what could cause this? It's all done in Python (through Google Colab) with the libraries cmath, numpy, and matplotlib alone. import numpy as np import cmath import matplotlib.pyplot as plt D=1000 a=np.zeros(2*D+1) for i in range(-100,100): a[i+D] = 1 This code creates the following window signal: and then to get the Fourier coefficients: j=complex(0,1) pi=np.pi N=2001 ak = np.zeros(N, dtype=complex) for k in range(-D, D + 1): for n in range(-D, D + 1): ak[k + D] += (1/N) * a[n + D] * cmath.exp(-j * 2 * pi * k * n / N) which gives me the following image: when I do the math on paper I get the coefficients to be: where omega0 is 2*pi/N and a[n]=u(n+100)-u(n-100) (where u(n) is the Heaviside function). But it's weird: when I plot the imaginary part of ak I get that it is (to a very good approximation) equal to 1/N * sin(201*pi/N), where N=2001 is my entire sampling range (from -1000 to 1000 as I mentioned); if it were just a floating-point rounding error it wouldn't have this form. Here is the plot of the imaginary part of ak, not freq of ak or bk.
the imaginary part you see is correct, your textbook definition speaks of a signal symmetric around 0, your signal is only from -100 to 99 as python range is end exclusive, it is not symmetric around 0, hence there's a phase error, which translates to an imaginary component in the signal Fourier transform. correcting the code: for i in range(-100,101): # from -100 to 100 a[i+D] = 1 gives for the imaginary component numpy double precision has around 16 digits of precision, your real part is in order of 1e-2 the imaginary part being in order of 1e-18 is close to machine precision limits.
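A quick vectorised re-check of that claim (my own sketch, not the question's double loop): with the window made symmetric, the imaginary part collapses to rounding noise.
import numpy as np
D, N = 1000, 2001
a = np.zeros(2 * D + 1)
a[D - 100:D + 101] = 1                          # symmetric window, n = -100 .. 100
n = np.arange(-D, D + 1)
k = np.arange(-D, D + 1)
ak = (a * np.exp(-2j * np.pi * np.outer(k, n) / N)).sum(axis=1) / N
print(np.abs(ak.imag).max())                    # tiny (roughly 1e-16 or below): pure rounding noise, no sin-like pattern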
2
5
79,580,115
2025-4-17
https://stackoverflow.com/questions/79580115/proper-python-type-hinting-for-this-function-in-2025
I'm using VS Code with Pylance, and having problems correctly writing type hints for the following function. To me, the semantics seem clear, but Pylance disagrees. How can I fix this without resorting to cast() or # type ignore? Pylance warnings are given in the comments. I am fully committed to typing in Python, but it is often a struggle. T = TypeVar("T") def listify(item: T | list[T] | tuple[T, ...] | None) -> list[T]: if item is None: return [] elif isinstance(item, list): return item # Return type, "List[Unknown]* | List[T@listify]", # is partially unknown elif isinstance(item, tuple): return list(item) # Return type, "List[Unknown]* | List[T@listify]", # is partially unknown else: return [item]
This is just fundamentally unsafe, due to ambiguity. Suppose you have a generic function that uses listify: def caller[T](x: T): y = listify(x) What's the proper annotation for y? The obvious answer is list[T]. But there's no guarantee listify will return a list[T]. If T is list[Something], or tuple[Something, ...], or None, listify can return something completely different. There is no way to make a fully generic listify safe. Pylance does a stricter job than mypy of reporting this kind of ambiguity, but the fundamental ambiguity is there no matter what type checker you use. And even if you completely strip out the implementation, and simplify it down to def f[T](x: T | list[T]) -> list[T]: raise NotImplementedError reveal_type(f([3])) mypy will report the following: main.py:4: note: Revealed type is "builtins.list[Never]" main.py:4: error: Argument 1 to "f" has incompatible type "list[int]"; expected "list[Never]" [arg-type] Found 1 error in 1 file (checked 1 source file) because the ambiguity is causing it to deduce conflicting type requirements.
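If you still want the convenience function, one pragmatic sketch (my suggestion, not a sound fix) is to spell the intended cases out with typing.overload; note that a strict checker may still flag the overloads as overlapping, which is precisely the ambiguity described above.
from typing import overload

@overload
def listify(item: None) -> list: ...
@overload
def listify[T](item: list[T]) -> list[T]: ...
@overload
def listify[T](item: tuple[T, ...]) -> list[T]: ...
@overload
def listify[T](item: T) -> list[T]: ...

def listify(item):
    if item is None:
        return []
    if isinstance(item, (list, tuple)):
        return list(item)
    return [item]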
2
2
79,579,955
2025-4-17
https://stackoverflow.com/questions/79579955/check-corresponding-columns-for-null-data
Imagine I have a dataframe like so: ID, place, stock 1, 1, 4 2, NaN, 2 3, NaN, NaN 4, 1, 1 I wish to find all the rows for which place is null and check the corresponding stock column to see, for instance, if place is Null but stock is >0. I've implemented this by iterating over the dataframe like so: for idx, x in enumerate(df[place].isnull().tolist()): ... But I am sure there is a simpler way to do this. For example, for the above data for each row for which place is null, I wish to check if the corresponding stock column is >0 and count how many occurrences are like so. In actual fact, I wish to check multiple columns (>10) to see if they meet certain criteria, and so am more interested in a generic solution to this.
You can use something vectorized like this: count = df[df['place'].isnull() & (df['stock'] > 0)].shape[0]
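For the generic, many-column case mentioned in the question, one possible extension (a sketch of mine; the criteria dict is made up, extend it to your own >10 columns):
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3, 4],
    "place": [1, None, None, 1],
    "stock": [4, 2, None, 1],
})

# one condition per column to check on the rows where place is null
criteria = {"stock": lambda s: s > 0}

null_place = df["place"].isnull()
counts = {col: int((null_place & rule(df[col])).sum()) for col, rule in criteria.items()}
print(counts)  # {'stock': 1} -> only the row with ID 2 has place null and stock > 0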
2
1
79,578,736
2025-4-17
https://stackoverflow.com/questions/79578736/python-for-xml-parsing-how-to-track-correct-tree
I'm trying to parse an XML file to get NeedThisValue!!! from one of the elements tagged <Value>. But there are several <Value> tags in the file. How can I get the right one, under the <Image> branch? This is an example of my XML: <Report xmlns=http://schemas.microsoft.com> <AutoRefresh>0</AutoRefresh> <DataSources> <DataSource Name="DataSource2"> <Value>SourceAlpha</Value> <rd:SecurityType>None</rd:SecurityType> </DataSource> </DataSources> <Image Name="Image36"> <Source>Embedded</Source> <Value>NeedThisValue!!!</Value> <Sizing>FitProportional</Sizing> </Image> </Report> And I'm using this code: from bs4 import BeautifulSoup with open(filepath, 'r') as f: data = f.read() Bs_data = BeautifulSoup(data, "xml") b_unique = Bs_data.find_all('Value') print(b_unique) The result is below; I only need the second one. [<Value>SourceAlpha</Value>, <Value>NeedThisValue!!!</Value>]
As an alternative to the accepted solution from @Igel, you can reach it also with lxml and xpath(): from lxml import html broken_xml = """<Report xmlns=http://schemas.microsoft.com> <AutoRefresh>0</AutoRefresh> <DataSources> <DataSource Name="DataSource2"> <Value>SourceAlpha</Value> <rd:SecurityType>None</rd:SecurityType> </DataSource> </DataSources> <Image Name="Image36"> <Source>Embedded</Source> <Value>NeedThisValue!!!</Value> <Sizing>FitProportional</Sizing> </Image> </Report> """ tree = html.fromstring(broken_xml) print(html.tostring(tree, pretty_print=True).decode()) value_elem = tree.xpath('//image[@name="Image36"]/value')[0] print(value_elem.text) Output: <report xmlns="http://schemas.microsoft.com"> <autorefresh>0</autorefresh> <datasources> <datasource name="DataSource2"> <value>SourceAlpha</value> <securitytype>None</securitytype> </datasource> </datasources> <image name="Image36"> <source>Embedded</source> <value>NeedThisValue!!!</value> <sizing>FitProportional</sizing> </image> </report> NeedThisValue!!!
1
1
79,579,461
2025-4-17
https://stackoverflow.com/questions/79579461/why-no-floating-point-error-occurs-in-print0-1-100000-vs-decimal0-1100000
I am studying numerical analysis and I have come across this dilemma. Running the following script, from decimal import Decimal a = 0.1 ; N = 100000 ; # product calculation P = N*a # Print product result with no apparent error print(' %.22f ' % P) # Print product result with full Decimal approximation of 0.1 print(Decimal(0.1) * 100000) I realize that despite 0.1 not having an accurate floating-point representation, when I multiply it by 100000 (which has an exact floating-point representation), and increase the precision of how I print the result, I do not notice any error. print(' %.22f ' % P) # Result: 10000.0000000000000000000000 This is in contrast to the case where I use the Decimal method, where I can see the error behind the product. print(Decimal(0.1) * 100000) Also, how come I can print up to 55th digits of precision of a number if the IEEE754 standard only allows 53? I reproduced this case with the following instruction: print("%.55f" % 0.1) #0.1000000000000000055511151231257827021181583404541015625 Can anyone explain why this happens?
a = 0.1 ; Assuming your Python implementation uses IEEE-754 binary641, this converts 0.1 to 0.1000000000000000055511151231257827021181583404541015625, because that is the representable value that is nearest to 0.1. P = N*a The real-number arithmetic product of 100,000 and 0.1000000000000000055511151231257827021181583404541015625 is 10,000.00000000000055511151231257827021181583404541015625. This number is not representable in binary64. The two nearest representable values are 10,000 and 10000.000000000001818989403545856475830078125. The floating-point multiplication produces the representable value that is closer, so N*a produces 10,000. print(' %.22f ' % P) This prints the value stored in P, formatted with 22 digits after the decimal point, yielding “10000.0000000000000000000000”. print(Decimal(0.1) * 100000) In this, first 0.1 is converted to binary floating-point, yielding 0.1000000000000000055511151231257827021181583404541015625. Then Decimal(0.1) converts that number to Decimal, which produces the same value. Then the multiplication by 100,000 is performed. By default, Python uses only 28 digits for Decimal arithmetic, so the result of this multiplication is rounded to 10,000.00000000000055511151231. Footnote 1 This is common, but Python does not have a formal specification, and what documentation there is for it is weak about floating-point behavior.
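A small check of that 28-digit default (a sketch of my own; the digit strings are the ones quoted above):
from decimal import Decimal, getcontext

print(getcontext().prec)          # 28 by default
print(Decimal(0.1) * 100000)      # 10000.00000000000055511151231

getcontext().prec = 60            # enough for all 55 significant digits of the exact product
print(Decimal(0.1) * 100000)      # 10000.00000000000055511151231257827021181583404541015625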
4
8
79,579,642
2025-4-17
https://stackoverflow.com/questions/79579642/arranging-three-numbers-into-a-specified-order-in-python-3
Problem: You will be given three integers A, B and C. The numbers will not be given in that exact order, but we do know that A < B < C. In order to make for a more pleasant viewing, we want to rearrange them in the given order. The first line of the input contains three positive integers A, B and C, not necessarily in that order. All three numbers will be less than or equal to 100. The second line contains three uppercase letters 'A', 'B' and 'C' (with no spaces between them) representing the desired order. Output the A, B and C in the desired order on a single line, separated by single spaces. Sample input 1: 1 5 3 ABC Sample output 1: 1 3 5 Sample input 2: 6 4 2 CAB Sample output 2: 6 2 4 This problem comes from an online judge system. There are 10 checkpoints. The following code passes only 2 of them, although the code works fine when I run it outside. l = list(map(int,input().split())) order = input() l.sort() if order == "ABC": print(l[0],l[1],l[2]) elif order == "ACB": print(l[0],l[2],l[1]) elif order == "BAC": print(l[1],l[0],l[2]) elif order == "BCA": print(l[1],l[2],l[0]) elif order == "CAB": print(l[2],l[0],l[1]) else: print(l[2],l[1],l[0]) And if I change the else: in the second last line to elif order == "CBA":, very surprisingly it fails all the checkpoints. The feedback/reason for the failure is Wrong Answer (as opposed to Runtime Error). Finally, my friend sent me this code which passes all the checkpoints: l = list(map(int,input().split())) order = input() l.sort() result = [] for i in order: if i == "A": result.append(l[0]) elif i == "B": result.append(l[1]) elif i == "C": result.append(l[2]) print(" ".join(map(str, result))) I can see that my code is less neat, but I want to know why it fails so many checkpoints. Thank you in advance.
The order input apparently has extra whitespace around the letters, so it doesn't exactly equal any of the strings you're comparing with. Your friend's solution ignores any characters that aren't in ABC, so it ignores the extra spaces. You can remove this extraneous whitespace with order = input().strip() I'd say this is a bug in the online judging system. Nothing in the problem specification mentions this whitespace, so there's no reason why the program should have to accommodate it. However, when you become a professional programmer it's often wise to anticipate inputs that don't match the specifications precisely; if you can handle them easily, it's worth doing. This is especially true if the input comes from interactive users, as they can't generally be trusted unless the UI performs validation.
1
5
79,577,208
2025-4-16
https://stackoverflow.com/questions/79577208/subclassing-a-generic-class-with-constrained-type-variable-in-python-3-12-witho
I would like to subclass a generic class that is using a constrained type variable. This is fairly easy with pre-3.12 syntax as shown in this minimal example: from typing import Generic, TypeVar T = TypeVar("T", int, float) class Base(Generic[T]): pass class Sub(Base[T]): pass But I am struggling with writing this in the Python 3.12 syntax. I cannot find a way to not have to specify T: (int, float) in the Sub class as well. Below is another minimal example which passes mypy check (version 1.15.0). class Base[T: (int, float)]: pass class Sub[T: (int, float)](Base[T]): pass But when I change the subclass definition to class Sub[T](Base[T]):, mypy returns error: Type variable "T" not valid as type argument value for "Base" [type-var]. I would expect such definition to be meaningful and I find it unfortunate that I have to define the constraints again, especially when the classes can be in different files and subclassed multiple times. Am I missing something or is this not fully doable in pure Python 3.12+ syntax?
Unfortunately, you cannot reuse type variables defined with PEP 695 syntax. The type variable must have the correct bound to parametrize the superclass (in other words, bounds and constraints can't be inferred or magically "pushed down" from the superclass, that has soundness problems if you consider multiple inheritance). So, you need some way to spell "reuse type variable with these {constraints,bound,default}"... And as you noted in you question, typing.TypeVar is doing exactly that. Please note that typing.TypeVar is not deprecated (only typing.TypeAlias is) - new generics syntax is just another way, not a replacement. It will likely stay alive for a very long time if not indefinitely: Eric Traut's comment on that It’s also important to note that this PEP doesn’t deprecate the current mechanisms for defining TypeVars or generic classes, functions, and type aliases. Those mechanisms can live side by side with the new syntax. If you find that those mechanisms are more flexible for exploring extensions to the type system, PEP 695 does not prevent you from leveraging these existing mechanisms. There are other problems with PEP 695 typevars like inability to specify variance explicitly, so some corner cases would become impossible to express if typing.TypeVar was ever deprecated. I'm not aware of any recent proposals or discussions that would enable reusing PEP 695 type variables, and for the reasons mentioned above I doubt that will ever be possible.
4
1
79,579,330
2025-4-17
https://stackoverflow.com/questions/79579330/python-multiprocessing-queue-strange-behaviour
Hi I'm observing a strange behaviour with python multiprocessing Queue object. My environment: OS: Windows 10 python: 3.13.1 but I observed the same with: OS: Windows 10 python: 3.12.7 and: OS: Windows 10 python: 3.10.14 while I could not reproduce it in Linux Redhat. I have this short script: from multiprocessing import Queue a = list(range(716)) queue: Queue = Queue() for item in a: queue.put(item) raise ValueError(f"my len: {len(a)}") If I run it, everything is ok, it raises the error and exits: Traceback (most recent call last): File "C:\Users\uXXXXXX\AppData\Roaming\JetBrains\PyCharmCE2024.3\scratches\scratch_1.py", line 7, in <module> raise ValueError(f"my len: {len(a)}") ValueError: my len: 716 Process finished with exit code 1 but if i change the number from 716 to 717 or any other number above it, it raises the error but doesn't exit, the script hangs there. and when I forcefully stop the script it exits with code -1 Traceback (most recent call last): File "C:\Users\uXXXXXX\AppData\Roaming\JetBrains\PyCharmCE2024.3\scratches\scratch_1.py", line 7, in <module> raise ValueError(f"my len: {len(a)}") ValueError: my len: 717 Process finished with exit code -1 Can you please help me solve this strange behaviour? i would like it to always automatically exit with code 1
When you put an item in the queue, you are only putting it in an internal list; a worker thread incrementally feeds those items into the IPC pipe, and the process won't exit until that worker has emptied the list into the pipe. The pipe has a small internal buffer, which is why small sizes work fine. This design keeps queue.put non-blocking and also makes the queue faster. You need to make sure all items in the queue are consumed for the processes to exit: consume all the remaining items from the main process (you can do that in a worker thread while joining the workers). Note that Empty comes from the queue module (from queue import Empty): try: while True: item = queue.get_nowait() # process item except Empty: pass If nothing is ever going to read the queue and you just want to throw away the data in it, then use queue.cancel_join_thread, which allows the process to exit without the list being drained, potentially leaving the queue in a broken state. Only the main process should ever call it. from multiprocessing import Queue a = list(range(2000)) queue: Queue = Queue() for item in a: queue.put(item) queue.cancel_join_thread() raise ValueError(f"my len: {len(a)}") ValueError: my len: 2000 Process finished with exit code 1
2
4
79,578,785
2025-4-17
https://stackoverflow.com/questions/79578785/why-is-my-asyncio-task-not-executing-immediately-after-asyncio-create-task
import asyncio async def worker(): print("worker start") await asyncio.sleep(1) print("worker done") async def main(): # Schedule the coroutine asyncio.create_task(worker()) # Give control back to the event loop for a moment await asyncio.sleep(0.1) print("main exiting") asyncio.run(main()) Expected behavior I thought calling asyncio.create_task(worker()) would start worker() right away, so I expected to see: worker start main exiting worker done Actual behavior Instead, nothing from worker() is printed until after main() finishes: main exiting worker start worker done What I’ve tried Increasing the await asyncio.sleep(0.1) delay Replacing asyncio.create_task with await worker() Running on Python 3.11 and 3.12 (same outcome) Questions Why doesn’t the task begin execution immediately after create_task()? Is there a recommended pattern to ensure the task gets a chance to run before main() exits (without awaiting it directly)? Environment Python 3.12.2 macOS 14.4 (Sonoma) Any insights appreciated!
asyncio.create_task(coro) does not result in the immediate execution of coroutine coro. According to the docs: asyncio.create_task(coro, *, name=None, context=None) Wrap the coro coroutine into a Task and schedule its execution. Return the Task object. asyncio.create_task(coro) only schedules the task for execution. This is not the same thing as starts the task for execution. The new task will only start execution after a task switch occurs due to an await statement or a return statement being executed. Since you are not saving a reference to the newly created task and awaiting it, that task is not guaranteed to complete. When I run the code, I see: worker start main exiting It is only when main executes await asyncio.sleep(0.1) that the new task starts. But when that worker task executes await asyncio.sleep(1), the asyncio.sleep(0.1) in main completes and returns back to the asyncio.run(main()) statement, which completes. Thus, the program terminates before worker wakes up from its sleep. You should change your program to: import asyncio async def worker(): print("worker start") await asyncio.sleep(1) print("worker done") async def main(): # Schedule the coroutine task = asyncio.create_task(worker()) # save reference to the task # Give control back to the event loop for a moment await asyncio.sleep(0.1) await task # wait for the task to complete print("main exiting") asyncio.run(main()) Prints: worker start worker done main exiting Or if you have main sleep long enough, worker will wake up and finish: import asyncio async def worker(): print("worker start") await asyncio.sleep(1) print("worker done") async def main(): # Schedule the coroutine asyncio.create_task(worker()) # Give control back to the event loop for a longer period await asyncio.sleep(2) print("main exiting") asyncio.run(main())
2
3
79,577,035
2025-4-16
https://stackoverflow.com/questions/79577035/how-can-i-get-the-name-of-svg-element-when-i-click-on-it-in-nicegui
I have a NiceGUI page with an SVG image consisting of two elements (a circle and a square). I need to check which element of this image was clicked when the SVG image is clicked, so that I can handle it. I want the event "circle clicked" to be triggered when the circle is clicked, and the event "square clicked" to be triggered when the square is clicked. I am trying to implement this through a handler, but I don't know what to write in the handler function. I am attaching the code. My method 1: from nicegui import ui def my_handler(event_args): print("Button clicked!") with ui.row(): svg_element = ui.html(''' <svg width="200" height="200" style="border: 1px solid black;"> <circle cx="50" cy="50" r="40" fill="red" class="clickable-circle"/> <rect x="100" y="10" width="80" height="80" fill="blue" class="clickable-square"/> </svg> ''') svg_element.on('click', js_handler='(click) => { console.log(event.target.clickable-circle); }') ui.run(port = 8082) My method 2: from nicegui import ui def my_handler(event_args): element = event_args.sender # ??????? with ui.row(): svg_element = ui.html(''' <svg width="200" height="200" style="border: 1px solid black;"> <circle cx="50" cy="50" r="40" fill="red" class="clickable-circle"/> <rect x="100" y="10" width="80" height="80" fill="blue" class="clickable-square"/> </svg> ''') svg_element.on('click', handler=my_handler) ui.run(port=8082) Thank you all for your help.
If you want to use JavaScript then you have to use the same name event in (event) => { console.log(event.target) } I think you could use it to emit custom event with information what object was clicked emitEvent(`clicked_${event.tagerg.tagName}`) and it should send event clicked_circle or clicked_rect (and maybe even clicked_svg) to Python and it can use it to execute function with ui.on() instead of element_svg.on() ui.on('clicked_circle', handler=my_handler_for_circle) ui.on('clicked_rect', handler=my_handler_for_rect) You may also use it with parameters ui.on('clicked_circle', handler=lambda event:my_handler(event, "circle")) ui.on('clicked_rect', handler=lambda event:my_handler(event, "rect")) Maybe you could even use emitEvent to send some extra information but this would need to check it on internet because I never used it before. UPDATE: emitEvent may sends also args (for example name of clicked tag) and it allows to create only one event which will send different value(s) in args emitEvent("clicked_svg", {target: event.target.tagName}); and it will need only one ui.on() ui.on('clicked_svg', handler=on_click_svg) and it will need only one function def on_click_svg(event): print("--- clicked svg ---") print('event:', event) print('event.args["target"]:', event.args['target']) if event.args['target'] == 'circle': print('>>> clicked circle <<<') if event.args['target'] == 'rect': print('>>> clicked rect <<<') Doc: Custom Event - NiceGUI Full working code used for tests. from nicegui import ui def my_handler(event, target): print('--- my_handler ---') print('event:', event) print('event.args["target"]:', event.args['target']) print('target:', target) def on_click_svg(event): print("--- clicked svg ---") print('event:', event) print('event.args["target"]:', event.args['target']) if event.args['target'] == 'circle': print('>>> clicked circle <<<') if event.args['target'] == 'rect': print('>>> clicked rect <<<') with ui.row(): svg_element = ui.html(''' <svg width="200" height="200" style="border: 1px solid black;"> <circle cx="50" cy="50" r="40" fill="red" class="clickable-circle"/> <rect x="100" y="10" width="80" height="80" fill="blue" class="clickable-square"/> </svg> ''') ui.on('clicked_circle', handler=lambda event:my_handler(event, 'circle')) ui.on('clicked_rect', handler=lambda event:my_handler(event, 'rect')) ui.on('clicked_svg', handler=on_click_svg) svg_element.on('click', js_handler='''(event) => { console.log(event.target.tagName); emitEvent(`clicked_${event.target.tagName}`, {target: event.target.tagName}); emitEvent("clicked_svg", {target: event.target.tagName}); }''') ui.run(port = 8082)
2
1
79,578,293
2025-4-17
https://stackoverflow.com/questions/79578293/python-how-to-check-for-missing-values-not-represented-by-nan
I am looking for guidance on how to check for missing values in a DataFrame that are not the typical "NaN" or "np.nan" in Python. I have a dataset/DataFrame that has a string literal "?" representing missing data. How can I identify this string as a missing value? When I run usual commands using Pandas like: missing_values = df.isnull().sum() print(missing_values[missing_values > 0]) Python doesn't pick up on these cells as missing and returns 0s for the sum of null values. It also doesn't return anything for printing missing values > 0.
You can use df.replace("?", pd.NA) to properly encode "?" as a missing value. This will ensure that those cells are properly handled in all operations.

import pandas as pd

data = {"x": [1, 2, "?"], "y": [3, "?", 5]}
df = pd.DataFrame(data)

print(df.isnull().sum())
# x    0
# y    0

df = df.replace("?", pd.NA)

print(df.isnull().sum())
# x    1
# y    1
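If the data comes from a file, an alternative is to mark "?" as missing already at load time with na_values (a short sketch; the file name here is just a placeholder):

import pandas as pd

# read_csv maps "?" to NaN while parsing
df = pd.read_csv("data.csv", na_values="?")
print(df.isnull().sum())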
1
5
79,577,512
2025-4-16
https://stackoverflow.com/questions/79577512/pytest-caplog-not-capturing-logs
See my minimal example below for reproducing the issue: import logging import logging.config def test_logging(caplog): LOGGING_CONFIG = { "version": 1, "disable_existing_loggers": False, "formatters": { "default": { "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s", }, }, "handlers": { "console": { "class": "logging.StreamHandler", "formatter": "default", "level": "DEBUG", }, }, "loggers": { "": { # root logger "handlers": ["console"], "level": "DEBUG", "propagate": True, }, }, } logging.config.dictConfig(LOGGING_CONFIG) logger = logging.getLogger("root_module.sub1.sub2") logger.setLevel(logging.DEBUG) assert logger.propagate is True assert logger.getEffectiveLevel() == logging.DEBUG with caplog.at_level(logging.DEBUG): logger.debug("🔥 DEBUG msg") logger.info("📘 INFO msg") logger.warning("⚠️ WARNING msg") logger.error("❌ ERROR msg") logger.critical("💀 CRITICAL msg") print("🔥 caplog.messages:", caplog.messages) # Final assertion assert any("CRITICAL" in r or "💀" in r for r in caplog.messages) Running pytest -s outputs: tests/test_logs.py 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - DEBUG - 🔥 DEBUG msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - INFO - 📘 INFO msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - WARNING - ⚠️ WARNING msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - ERROR - ❌ ERROR msg 2025-04-16 17:06:03,983 - root_module.sub1.sub2 - CRITICAL - 💀 CRITICAL msg 🔥 caplog.messages: [] Pytest version is 8.3.5. I don't think it matters, but my setup.cfg: [tool:pytest] testpaths = tests I am expecting caplog.records to contain all 5 logs, but it is empty list.
You're not supposed to configure logging during a test. Further, if any real code under test tries to configure loggers, you should probably patch that out. The way caplog fixture works is by adding a capturing handler, so if you've already configured the logging framework, or attempt to reconfigure it during test, then you're interfering with pytest's functionality. This is even explicitly mentioned in the caplog documentation: ⚠️ Warning The caplog fixture adds a handler to the root logger to capture logs. If the root logger is modified during a test, for example with logging.config.dictConfig, this handler may be removed and cause no logs to be captured. To avoid this, ensure that any root logger configuration only adds to the existing handlers. Remove the runtime logging configuration makes the test pass: import logging def test_logging(caplog): logger = logging.getLogger("root_module.sub1.sub2") with caplog.at_level(logging.DEBUG): logger.debug("🔥 DEBUG msg") logger.info("📘 INFO msg") logger.warning("⚠️ WARNING msg") logger.error("❌ ERROR msg") logger.critical("💀 CRITICAL msg") print("🔥 caplog.messages:", caplog.messages) # Final assertion assert any("CRITICAL" in r or "💀" in r for r in caplog.messages)
3
3
79,577,827
2025-4-16
https://stackoverflow.com/questions/79577827/polars-pandas-like-groupby-save-to-files-by-each-value
Boiling down a bigger problem to its essentials, I would like to do this: import numpy as np import pandas as pd df = pd.DataFrame({'a': np.random.randint(0, 5, 1000), 'b': np.random.random(1000)}) for aval, subdf in df.groupby('a'): subdf.to_parquet(f'/tmp/{aval}.parquet') in Polars using LazyFrame: import numpy as np import pandas as pd import polars as pl df = pd.DataFrame({'a': np.random.randint(0, 5, 1000), 'b': np.random.random(1000)}) lf = pl.LazyFrame(df) # ??? I would like to be able to control the name of the output files in a similar way. Thanks!
You could use a partitioning scheme, e.g. PartitionByKey():

lf.sink_parquet(
    pl.PartitionByKey("/tmp/output", by="a"),
    mkdir=True
)

For your example this creates:

/tmp/output
/tmp/output/a=0
/tmp/output/a=0/0.parquet
/tmp/output/a=1
/tmp/output/a=1/0.parquet
/tmp/output/a=2
/tmp/output/a=2/0.parquet
/tmp/output/a=3
/tmp/output/a=3/0.parquet
/tmp/output/a=4
/tmp/output/a=4/0.parquet

The docs show an example of file_path= being used with a callback to customize the filename further if required.
2
2
79,577,779
2025-4-16
https://stackoverflow.com/questions/79577779/how-do-we-use-numpy-to-create-a-matrix-of-all-permutations-of-three-separate-val
I want to make a pandas dataframe with three columns, such that the rows contain all permutations of three columns, each with its own range of values are included. In addition, I want to sort them asc by c1, c2, c3. For example, a = [0,1,2,3,4,5,6,7], b = [0,1,2], and c= [0,1]. The result I want looks like this: c1 c2 c3 0 0 0 0 0 1 0 1 0 0 1 1 0 2 0 0 2 1 1 0 0 1 0 1 1 1 0 ... 7 2 0 7 2 1 I keep trying to fill columns using numpy.arange, i.e., numpy.arange(0,7,1) for c1. But that doesn't easily create all the possible rows. In my example, I should end up with 8 * 3 * 2 = 48 unique rows of three values each. I need this to act as a mask of all possible value combinations from which I can merge a sparse matrix of experimental data. Does anyone know how to do this? Recursion?
You can use pandas MultiIndex (the doc) import pandas as pd a = range(8) b = range(3) c = range(2) multi_index = pd.MultiIndex.from_product([a, b, c], names=['c1', 'c2', 'c3']) df = multi_index.to_frame(index=False) print(df) Output: c1 c2 c3 0 0 0 0 1 0 0 1 2 0 1 0 3 0 1 1 4 0 2 0 5 0 2 1 6 1 0 0 7 1 0 1 8 1 1 0 9 1 1 1 10 1 2 0 11 1 2 1 12 2 0 0 13 2 0 1 14 2 1 0 15 2 1 1 16 2 2 0 17 2 2 1 18 3 0 0 19 3 0 1 20 3 1 0 21 3 1 1 22 3 2 0 23 3 2 1 24 4 0 0 25 4 0 1 26 4 1 0 27 4 1 1 28 4 2 0 29 4 2 1 30 5 0 0 31 5 0 1 32 5 1 0 33 5 1 1 34 5 2 0 35 5 2 1 36 6 0 0 37 6 0 1 38 6 1 0 39 6 1 1 40 6 2 0 41 6 2 1 42 7 0 0 43 7 0 1 44 7 1 0 45 7 1 1 46 7 2 0 47 7 2 1
5
4
79,575,456
2025-4-15
https://stackoverflow.com/questions/79575456/is-a-lock-needed-when-multiple-tasks-push-into-the-same-asyncio-queue
Consider this example where I have 3 worker tasks that push results in a queue and a tasks that deals with the pushed data. async def worker1(queue: asyncio.Queue): while True: res = await do_some_work(param=1) await queue.put(res) async def worker2(queue: asyncio.Queue): while True: res = await do_some_work(param=2) await queue.put(res) async def worker3(queue: asyncio.Queue): while True: res = await do_some_work(param=3) await queue.put(res) async def handle_results(queue: asyncio.Queue): while True: res = await queue.get() await handle_result(res) queue.task_done() async def main(): queue = asyncio.Queue() t1 = asyncio.create_task(worker1(queue)) t2 = asyncio.create_task(worker2(queue)) t3 = asyncio.create_task(worker3(queue)) handler = asyncio.create_task(handle_result(queue)) while True: # do some other stuff .... asyncio.run(main()) The documentation says that asyncio.Queue is not thread-safe, but this should not apply here because all tasks are running in the same thread. But do I need an asyncio.Lock to protect the queue when I have 3 tasks that push into the same queue? Looking at the implementation in Python 3.12 (which creates a putter future and awaits on it before pushing into the queue) I would say no, but I'm not sure and the documentation does not mention what would happen in this case. So, is the asyncio.Lock in this case necessary?
No - there is no need for locks to put or read items from asyncio Queues.

Keep in mind that Python multithreaded code already requires far fewer locks than most code in other languages, as the data structures themselves are thread-safe - so, even with a free-threading build (without the GIL), if you have several threads appending values to a list, for example, the list will always be in a consistent state. Of course, code which would modify or create new keys in a shared dictionary will need proper locks, even though the dictionary itself won't ever "break".

When we step down to async programming, other concurrent tasks will only ever run when our code hits an await expression (or an async for or async with statement) - so the need for locks is reduced even further. In other words, if there is no code running in other threads, even with a lot of concurrent tasks, things like this:

value = global_list[0]
new_value = (complicated expression using value)
global_list[0] = new_value

are concurrency-safe in async code.

And on top of that, asyncio queues are built for consistency in async contexts. They'd never break and get into an inconsistent state if two concurrent tasks try to put, get or use the _nowait variants of those in code running in the same thread. (Although if you need to put data from another thread to be consumed in an asynchronous task, that is another thing and will require a carefully developed pattern to work.)

Just go for it - and keep in mind that unless you want to yield to the async loop at the point you are doing a put, or you are really concerned about constraining the queue size, you can simply call put_nowait (without the await) - that will even prevent other tasks from running "near" your put.
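A minimal runnable illustration of the point above - several producer tasks sharing one asyncio.Queue in the same event loop, with no lock anywhere (names and counts are arbitrary):

import asyncio

async def producer(name: str, queue: asyncio.Queue) -> None:
    for i in range(3):
        await queue.put((name, i))  # no lock needed: same thread, same loop

async def consumer(queue: asyncio.Queue, total: int) -> None:
    for _ in range(total):
        item = await queue.get()
        print("got", item)
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    consumer_task = asyncio.create_task(consumer(queue, 9))
    await asyncio.gather(*(producer(f"p{i}", queue) for i in range(3)))
    await queue.join()
    await consumer_task

asyncio.run(main())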
2
1
79,577,490
2025-4-16
https://stackoverflow.com/questions/79577490/how-to-fit-scaler-for-different-subsets-of-rows-depending-on-group-variable-and
I have a data set like the following and want to scale the data using any of the scalers in sklearn.preprocessing. Is there an easy way to fit this scaler not over the whole data set, but per group? My current solution can't be included in a Pipeline: import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler df = pd.DataFrame({'group': [1, 1, 1, 2, 2, 2], 'x': [1,2,3,10,20,30]}) def scale(x): # see https://stackoverflow.com/a/72408669/3104974 scaler = StandardScaler() return scaler.fit_transform(x.values[:,np.newaxis]).ravel() df['x_scaled'] = df.groupby('group').transform(scale) group x x_scaled 0 1 1 -1.224745 1 1 2 0.000000 2 1 3 1.224745 3 2 10 -1.224745 4 2 20 0.000000 5 2 30 1.224745
You can create custom transformer, using BaseEstimator and TransformerMixin for example: import pandas as pd import numpy as np from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import StandardScaler class GroupScaler(BaseEstimator, TransformerMixin): def __init__(self, group_column, scaler=None): self.group_column = group_column self.scaler = scaler or StandardScaler() self.scalers_ = {} def fit(self, X, y=None): self.scalers_ = {} for group, group_data in X.groupby(self.group_column): scaler = clone(self.scaler) scaler.fit(group_data.drop(columns=[self.group_column])) self.scalers_[group] = scaler return self def transform(self, X): X_scaled = [] for group, group_data in X.groupby(self.group_column): scaler = self.scalers_[group] scaled = scaler.transform(group_data.drop(columns=[self.group_column])) group_df = pd.DataFrame(scaled, index=group_data.index, columns=group_data.columns.drop(self.group_column)) group_df[self.group_column] = group X_scaled.append(group_df) return pd.concat(X_scaled).sort_index() from sklearn.base import clone df = pd.DataFrame({'group': [1, 1, 1, 2, 2, 2], 'x': [1, 2, 3, 10, 20, 30]}) scaler = GroupScaler(group_column='group') scaled_df = scaler.fit_transform(df) print(scaled_df) Output: x group 0 -1.224745 1 1 0.000000 1 2 1.224745 1 3 -1.224745 2 4 0.000000 2 5 1.224745 2
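Since the transformer follows the scikit-learn estimator API, it should also drop into a Pipeline - a quick sketch of that usage, continuing the GroupScaler and df defined above:

from sklearn.pipeline import Pipeline

pipe = Pipeline([("group_scale", GroupScaler(group_column="group"))])
print(pipe.fit_transform(df))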
2
1
79,577,365
2025-4-16
https://stackoverflow.com/questions/79577365/making-file-hidden-in-windows-using-python-blocks-file-writing
I am trying to make a hidden file in Windows using Python, but after I make it hidden it becomes impossible to write to it. However, reading this file is possible.

import ctypes

with open("test.txt", "r") as f: print('test')
ctypes.windll.kernel32.SetFileAttributesW("test.txt", 0x02)  # adding the h (hidden) attribute to the file
with open("test.txt", "r") as f: print('test')  # works
with open("test.txt", "w") as f: print('test')  # error

Error:

Traceback (most recent call last):
  File "C:\Users\user\Documents\client\test.py", line 8, in <module>
    with open("test.txt", "w") as f: print('test')  # error
PermissionError: [Errno 13] Permission denied: 'test.txt'

Using 0x22 (which is the a attribute plus the h attribute) instead of 0x02 doesn't help; the same error occurs. After making the file visible again via attrib -H test.txt it becomes possible to open it in write mode. However, as far as I know, hidden files should be writable in Windows.

OS: Windows 11
Python version: Python 3.10.2
I haven't seen specific Python documentation on this behavior, but believe it is because at the implementation level, 'w' eventually is calling The Windows API CreateFileW which has this note: If CREATE_ALWAYS and FILE_ATTRIBUTE_NORMAL are specified, CreateFile fails and sets the last error to ERROR_ACCESS_DENIED if the file exists and has the FILE_ATTRIBUTE_HIDDEN or FILE_ATTRIBUTE_SYSTEM attribute. To avoid the error, specify the same attributes as the existing file. Opening for writing mode='w' would eventually use CREATE_ALWAYS with FILE_ATTRIBUTE_NORMAL and would return access denied; whereas opening for reading and updating mode='r+' or appending mode='a' would work because OPEN_EXISTING would be used. Demonstrated below with Python: import os import ctypes FILE_ATTRIBUTE_HIDDEN = 0x2 FILE_ATTRIBUTE_NORMAL = 0x80 # Ensure file is deleted first for script repeatability. try: ctypes.windll.kernel32.SetFileAttributesW("test.txt", FILE_ATTRIBUTE_NORMAL) os.remove('text.txt') except Exception as e: pass # Create the file and hide it with open('test.txt', 'w') as f: print('test', file=f) ctypes.windll.kernel32.SetFileAttributesW("test.txt", FILE_ATTRIBUTE_HIDDEN) with open('test.txt', 'r') as f: # read it print('read:', f.read()) with open('test.txt', 'r+') as f: # update it f.truncate() print('written', file=f) with open('test.txt', 'r') as f: # read it back print('write:', f.read()) Output: read: test write: written The command prompt works the same way: C:>attrib -h test.txt C:>del test.txt C:>echo line1 >test.txt C:>attrib +h test.txt C:>type test.txt line1 C:>echo line2 >test.txt # can't be directly written Access is denied. C:>echo line2 >>test.txt # CAN be appended (updated). C:>type test.txt line1 line2
1
1
79,577,437
2025-4-16
https://stackoverflow.com/questions/79577437/why-is-my-python-code-displaying-csv-file-with-right-sided-alignment
I have written this code just to read a csv file: import pandas as pd df = pd.read_table('C:\\XXXXX\\Python_Learn\\pandas.csv') print(df.to_string()) and output to this is coming as below: Date|Invoice ID|Customer Name|Product|Category|Quantity|Unit Price|Total Amount|Region|Salesperson 0 2025-04-01|INV1001|John Smith|Apple iPhone 14|Electronics|1|999.00|999.00|North|Alice Johnson 1 2025-04-01|INV1002|Jane Doe|Samsung TV 55|Electronics|2|600.00|1200.00|NULL|Bob Williams 2 2025-04-02|INV1003|Michael Lee|Nike Sneakers|Apparel|3|120.00|360.00|West|Carol Chen 3 2025-04-02|INV1004|Emma Brown|Office Chair|Furniture|1|150.00|150.00|South|David Patel 4 2025-04-03|INV1005|Olivia Green|HP Laptop|Electronics|1|850.00|850.00|North|Alice Johnson 5 2025-04-03|INV1006|Noah White|Dining Table|Furniture|1|450.00|450.00|NULL|Bob Williams 6 2025-04-04|INV1007|Ava Scott|Levis Jeans|Apparel|2|80.00|160.00|West|Carol Chen 7 2025-04-04|INV1008|Liam Davis|AirPods Pro|Electronics|2|250.00|500.00|NULL|David Patel Why is my output getting aligned to the right? I added .strip() but it still gives same output. Update: I updated my code with .read_csv() instead of .read_table() & it improved the output in terms of alignment, but now it doesn't show characters on the right side of the file. import pandas as pd df = pd.read_csv('C:\\XXXXX\\Python_Learn\\pandas.csv') print(df) Output as below, Date|Invoice ID|Customer Name|Product|Category|Quantity|Unit Price|Total Amount|Region|Salesperson 0 2025-04-01|INV1001|John Smith|Apple iPhone 14|... 1 2025-04-01|INV1002|Jane Doe|Samsung TV 55|Elec... 2 2025-04-02|INV1003|Michael Lee|Nike Sneakers|A... 3 2025-04-02|INV1004|Emma Brown|Office Chair|Fur... 4 2025-04-03|INV1005|Olivia Green|HP Laptop|Elec... 5 2025-04-03|INV1006|Noah White|Dining Table|Fur... 6 2025-04-04|INV1007|Ava Scott|Levis Jeans|Appar... 7 2025-04-04|INV1008|Liam Davis|AirPods Pro|Elec... Why does it show ... instead of reading content on right side of the file?
You need to use pd.read_csv() with the correct delimiter. You can read more about it here.

import pandas as pd

df = pd.read_csv('C:\\XXXXX\\Python_Learn\\pandas.csv', delimiter='|')
print(df.to_string())
1
2
79,577,224
2025-4-16
https://stackoverflow.com/questions/79577224/redis-py-not-returning-consistent-results-for-match-all-query
I'm attempting to add 100 documents to a Redis index, and then retrieving them with a match all query: import uuid import redis from redis.commands.json.path import Path import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.search.field import TextField, NumericField, TagField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import NumericFilter, Query r = redis.Redis(host="localhost", port=6379) user4 = { "user": { "name": "Sarah Zamir", "email": "[email protected]", "age": 30, "city": "Paris", } } with r.pipeline(transaction=True) as pipe: for i, u in enumerate([user4] * 100): u["user"]["text"] = str(uuid.uuid4()) * 50 r.json().set(f"user:{i}", Path.root_path(), u) pipe.execute() schema = ( TextField("$.user.name", as_name="name"), TagField("$.user.city", as_name="city"), NumericField("$.user.age", as_name="age"), ) r.ft().create_index( schema, definition=IndexDefinition(prefix=["user:"], index_type=IndexType.JSON) ) result = r.ft().search(Query("*").paging(0, 100)) print(result.total) keys = r.keys("*") print(len(keys)) r.flushall() r.close() I expect result.total to return 100, but it always returns inconsistent results less than 100. Am I doing something wrong? I checked the number of keys in redis and I get the correct count there.
Try to use pipe and add some time.sleep() import uuid import time import redis from redis.commands.json.path import Path from redis.commands.search.field import TextField, NumericField, TagField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import NumericFilter, Query r = redis.Redis(host="localhost", port=6379) user4 = { "user": { "name": "Sarah Zamir", "email": "[email protected]", "age": 30, "city": "Paris", } } with r.pipeline(transaction=True) as pipe: for i, u in enumerate([user4] * 100): u["user"]["text"] = str(uuid.uuid4()) * 50 pipe.json().set(f"user:{i}", Path.root_path(), u) pipe.execute() schema = ( TextField("$.user.name", as_name="name"), TagField("$.user.city", as_name="city"), NumericField("$.user.age", as_name="age"), ) r.ft().create_index( schema, definition=IndexDefinition(prefix=["user:"], index_type=IndexType.JSON) ) while True: result = r.ft().search(Query("*")) if result.total == 100: break time.sleep(0.1) print(result.total) r.flushall() r.close()
1
1
79,577,143
2025-4-16
https://stackoverflow.com/questions/79577143/pyserial-doesnt-read-full-line
I want to read data from my arduino and show it on a tkinter window, it works nicely but when I sent the Data on the arduino faster then every 10ms it resevieses many lines that are not full like this: 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] ⚠️ Invalid line, skipped: ['0', '0', '0', '0', '0', '0', '0', '0', '0', 'Malik'] The weird thing is when i sent it at 10ms speed it works, just sometimes not when i move the window. Hope somebody can help me here is my code: def read_Serial_Values(self): read_Line = ReadLine(ser) self.raw_Input = read_Line.readline().decode('utf-8').strip() # if self.is_Data_Here(): # print("!Overload!") # self.trash = ser.read_all() self.raw_Listet_Input = [x for x in self.raw_Input.split(",") if x] if len(self.raw_Listet_Input) < 21: print("⚠️ Invalid line, skipped:", self.raw_Listet_Input) return self.Inp_manager.save_Repeating_Input(self.raw_Listet_Input) def is_Data_Here(self): if ser.in_waiting >0: return True return False class ReadLine: def __init__(self, s): self.buf = bytearray() self.s = s def readline(self): i = self.buf.find(b"\n") if i >= 0: r = self.buf[:i+1] self.buf = self.buf[i+1:] return r while True: i = max(1, min(2048, self.s.in_waiting)) data = self.s.read(i) i = data.find(b"\n") if i >= 0: r = self.buf + data[:i+1] self.buf[0:] = data[i+1:] return r else: self.buf.extend(data) I have tried changing the timeout but that didnt changed anything and I also changed the way the programm recevies the lines.
You're reading data too fast, and the serial input is not guaranteed to end cleanly with \n before your code tries to process it, and that’s why you are getting incomplete or "corrupted" lines. 1. Use a dedicated thread for serial reading Tkinter is single-threaded. If you read from a serial in the same thread as the GUI, the GUI slows down (and vice versa), especially when you move the window. Run the serial reading in a separate thread, and put the valid data into a queue.Queue() which the GUI can safely read from. import threading import queue data_queue = queue.Queue() def serial_read_thread(): read_Line = ReadLine(ser) while True: try: line = read_Line.readline().decode('utf-8').strip() parts = [x for x in line.split(",") if x] if len(parts) >= 21: data_queue.put(parts) else: print("⚠️ Invalid line, skipped:", parts) except Exception as e: print(f"Read error: {e}") Start this thread once at the beginning: t = threading.Thread(target=serial_read_thread, daemon=True) t.start() 2. In your Tkinter loop, check the queue Use after() in Tkinter to periodically fetch from the queue and update the UI: def update_gui_from_serial(): try: while not data_queue.empty(): data = data_queue.get_nowait() print(data) except queue.Empty: pass root.after(50, update_gui_from_serial) Please let me know if this works and if you need any further help! :)
2
3
79,575,363
2025-4-15
https://stackoverflow.com/questions/79575363/how-to-force-numba-to-return-a-numpy-type
I find this behavior quite counter-intuitive although I suppose there is a reason for it - numba automatically converts my numpy integer types directly into a python int: import numba as nb import numpy as np print(f"Numba version: {nb.__version__}") # 0.59.0 print(f"NumPy version: {np.__version__}") # 1.23.5 # Explicitly define the signature sig = nb.uint32(nb.uint32, nb.uint32) @nb.njit(sig, cache=False) def test_fn(a, b): return a * b res = test_fn(2, 10) print(f"Result value: {res}") # returns 20 print(f"Result type: {type(res)}") # returns <class 'int'> This is an issue as I'm using the return as an input into another njit function so I get a casting warning (and I also do unnecessary casts in-between the njit functions) Is there any way to force numba to give me np.uint32 as a result instead? --- EDIT --- This is the best I've managed to do myself, however I refuse to believe this is the best implementation out there: # we manually define a return record and pass it as a parameter res_type = np.dtype([('res', np.uint32)]) sig = nb.void(nb.uint32, nb.uint32, nb.from_dtype(res_type)) @nb.njit(sig, cache=False) def test_fn(a:np.uint32, b:np.uint32, res: res_type): res['res'] = a * b # Call with Python ints (Numba should coerce based on signature) res = np.recarray(1, dtype=res_type)[0] res_py_in = test_fn(2, 10, res) print(f"\nCalled with Python ints:") print(f"Result value: {res['res']}") # 20 print(f"Result type: {type(res['res'])}") # <class 'numpy.uint32'> --- EDIT 2 --- as @Nin17 correctly pointed out actually returning an int object is still about 3 times quicker when called from python context, so its better to just return a simple int and cast as needed.
Why don't you just return np.uint32(a*b): @nb.njit(nb.uint32(nb.uint32, nb.uint32)) def func(a, b): return np.uint32(a * b) It is faster and more readable than the other solutions: import numba as nb import numpy as np @nb.njit(nb.types.Array(nb.uint32, 0, "C")(nb.uint32, nb.uint32)) def test_fn(a, b): res = np.empty((), dtype=np.uint32) res[...] = a * b return res res_type = np.dtype([('res', np.uint32)]) sig = nb.void(nb.uint32, nb.uint32, nb.from_dtype(res_type)) @nb.njit(sig) def test_fn2(a, b, out): out['res'] = a * b res = np.recarray(1, dtype=res_type)[0] test_fn2(np.uint32(2), np.uint32(10), res) a = np.uint32(2) b = np.uint32(10) %timeit test_fn(a, b) %timeit test_fn2(a, b, res) %timeit func(a, b) Output: 339 ns ± 4.67 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) 426 ns ± 1.01 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) 126 ns ± 0.111 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) N = int(1e7) @nb.njit def _test_fn(a, b): out = np.empty((N,), dtype=np.uint32) for i in range(N): out[i] = test_fn(a, b).item() return out @nb.njit def _test_fn2(a, b, res): out = np.empty((N,), dtype=np.uint32) for i in range(N): test_fn2(a, b, res) out[i] = res['res'] return out @nb.njit def _func(a, b): out = np.empty((N,), dtype=np.uint32) for i in range(N): out[i] = func(a, b) return out _test_fn(a, b) _test_fn2(a, b, res) _func(a, b) %timeit _test_fn(a, b) %timeit _test_fn2(a, b, res) %timeit _func(a, b) Output: 254 ms ± 508 μs per loop (mean ± std. dev. of 7 runs, 1 loop each) 3.44 ms ± 40.4 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) 3.37 ms ± 19.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3
2
79,576,300
2025-4-16
https://stackoverflow.com/questions/79576300/how-to-display-metadata-in-scatter-plot-in-matplotlib
I have a scatter plot. I want to add some metadata to each point in my scatter plot. looking at documentation , i found annotate function( matplotlib.pyplot.annotate) , has anyone use this or some other function , such that we can add metadata to each point or are there are similar libraries, like matplotlib, which can display metadata , on clicking/hovering individual points in the scatterplot? import matplotlib.pyplot as plt import numpy as np # Generate random data for x and y axes x = np.random.rand(20) y = np.random.rand(20) colors = np.random.rand(20) area = (30 * np.random.rand(20))**2 # point radii # Create the scatter plot plt.scatter(x, y, s=area, c=colors, alpha=0.5) # Add labels and title plt.xlabel('X-axis') plt.ylabel('Y-axis') plt.title('Scatter Plot Example') # Display the plot plt.show()
Matplotlib does not offer a built-in function in its core library to enable hover effects. For this functionality, you may consider using the mplcursor library. Kindly try running the code below after installing mplcursors. import matplotlib.pyplot as plt import numpy as np import mplcursors x = np.random.rand(20) y = np.random.rand(20) colors = np.random.rand(20) area = (30 * np.random.rand(20))**2 metadata = [f"Point {i}, Value: ({x[i]:.2f}, {y[i]:.2f})" for i in range(len(x))] fig, ax = plt.subplots() scatter = ax.scatter(x, y, s=area, c=colors, alpha=0.5) plt.xlabel('X-axis') plt.ylabel('Y-axis') plt.title('Interactive Scatter Plot') cursor = mplcursors.cursor(scatter, hover=True) @cursor.connect("add") def on_add(sel): sel.annotation.set_text(metadata[sel.index]) plt.show() Output:
1
2
79,576,828
2025-4-16
https://stackoverflow.com/questions/79576828/get-a-grouped-sum-in-polars-but-keep-all-individual-rows
I am breaking my head over this probably pretty simply question and I just can't find the answer anywhere. I want to create a new column with a grouped sum of another column, but I want to keep all individual rows. So, this is what the docs say: import polars as pl df = pl.DataFrame( { "a": ["a", "b", "a", "b", "c"], "b": [1, 2, 1, 3, 3], } ) df.group_by("a").agg(pl.col("b").sum()) The output of this would be: shape: (3, 2) ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════╪═════╡ │ a ┆ 2 │ │ c ┆ 3 │ │ b ┆ 5 │ └─────┴─────┘ However, what I need would be this: shape: (5, 3) ┌─────┬─────┬────────┐ │ a ┆ b ┆ sum(b) │ │ --- ┆ --- ┆ ------ │ │ str ┆ i64 ┆ i64 │ ╞═════╪═════╪════════╡ │ a ┆ 1 ┆ 2 │ │ b ┆ 2 ┆ 5 │ │ a ┆ 1 ┆ 2 │ │ b ┆ 3 ┆ 5 │ │ c ┆ 3 ┆ 3 │ └─────┴─────┴────────┘ I could create the sum in a separate df and then join it with the original one, but I am pretty sure, there is an easier solution.
All you need is a window function:

df.with_columns(
    b_sum=pl.col("b").sum().over(pl.col("a"))
)

shape: (5, 3)
┌─────┬─────┬───────┐
│ a   ┆ b   ┆ b_sum │
│ --- ┆ --- ┆ ---   │
│ str ┆ i64 ┆ i64   │
╞═════╪═════╪═══════╡
│ a   ┆ 1   ┆ 2     │
│ b   ┆ 2   ┆ 5     │
│ a   ┆ 1   ┆ 2     │
│ b   ┆ 3   ┆ 5     │
│ c   ┆ 3   ┆ 3     │
└─────┴─────┴───────┘
2
3
79,576,316
2025-4-16
https://stackoverflow.com/questions/79576316/exceptions-being-hidden-by-asyncio-queue-join
I am using an API client supplied by a vendor (Okta) that has very poor/old examples of running with async - for example (the Python documentation says not to use get_event_loop()): from okta.client import Client as OktaClient import asyncio async def main(): client = OktaClient() users, resp, err = await client.list_users() while True: for user in users: print(user.profile.login) # Add more properties here. if resp.has_next(): users, err = await resp.next() else: break loop = asyncio.get_event_loop() loop.run_until_complete(main()) This works, but I need to go through the returned results and follow various links to get additional information. I created a queue using asyncio and I have the worker loop until the queue is empty. This also works. I start running into issues when I try to have more than one worker - if the code throws an exception, the workers never return. async def handle_queue(name, queue: asyncio.Queue, okta_client: OktaClient): """Handle queued API requests""" while True: log.info("Queue size: %d", queue.qsize()) api_req = await queue.get() log.info('Worker %s is handling %s', name, api_req) api_func = getattr(okta_client, f"list_{api_req['endpoint']}") api_procs = getattr(sys.modules[__name__], api_req['processor']) log.info('Worker %s is handling %s with api_func %s, api_proc %s', name, api_req, api_func, api_proc) resp_data, resp, err = await api_func(**api_req['params']) log.debug(resp_data) while True: for i in resp_data: await api_proc(i, queue) if resp.has_next(): resp_data, err = await resp.next() else: break queue.task_done() async def create_workers(queue: asyncio.Queue): """Reusable worker creation process""" log.info('Creating workers') workers = [] async with OktaClient() as okta_client: for i in range(NUM_WORKERS): log.info('Creating worker-%d', i) worker = asyncio.create_task(handle_queue(f'worker-{i}', queue, okta_client)) workers.append(worker) await queue.join() for worker in workers: worker.cancel() await asyncio.gather(*workers, return_exceptions=True) async def main(): """Load Access Policies and their mappings and rules""" queue = asyncio.Queue() queue.put_nowait({'endpoint': 'policies', 'params': {'query_params': {'type': 'ACCESS_POLICY'}}, 'processor': 'process_policy'}) await create_workers(queue) metadata['policy_count'] = len(data) print(yaml.dump({'_metadata': metadata, 'data': data})) if __name__ == '__main__': try: asyncio.run(main()) except KeyboardInterrupt: # Hide the exception for a Ctrl-C log.info('Keyboard Interrupt') If an exception is thrown in handle_queue (or any of the functions it calls), the program hangs. When I hit Ctrl-C, I get the exception along with a message asyncio task exception was never retrieved. I understand this is because queue.join() is waiting for queue.task_done() to be called as many times as queue.put() was called, but I don't understand why the exception isn't caught. I tried wrapping the work in handle_queue in a try: async def handle_queue(name, queue: asyncio.Queue, okta_client: OktaClient): """Handle queued API requests""" while True: try: # REST OF THE FUNCTION except Exception as e: queue.task_done() raise e queue.task_done() This way, the program execution does finish, but the exception still disappears. How can I capture the exception and still allow the program to finish?
For printing the error, Python's traceback module is especially helpful. Add import traceback to your imports and then use it like so:

async def handle_queue(name, queue: asyncio.Queue, okta_client: OktaClient):
    """Handle queued API requests"""
    while True:
        try:
            # REST OF THE FUNCTION
        except Exception as e:
            print(repr(e))
            print(traceback.format_exc())
            queue.task_done()

This will go to stdout instead of stderr, but it should show up like a standard Python stack trace. If you really want it to go to stderr (e.g. for logging purposes), you can replace that print with print(traceback.format_exc(), file=sys.stderr), as well as importing sys.

It is worthwhile to note that running asyncio.gather with return_exceptions=True will actually not raise the exception (this may be your intended behavior). If you wish to keep running the program after an exception occurs outside of the try block, then it should likely stay that way, though also note that the exception will be returned as a result and should be handled if that happens. In general, async handles errors differently, and exceptions may go unnoticed if you do not explicitly handle them.
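To illustrate that last point, a small self-contained sketch of how exceptions come back as results when return_exceptions=True is used (the coroutine names are made up):

import asyncio

async def might_fail(i: int) -> int:
    if i == 1:
        raise ValueError("boom")
    return i

async def main() -> None:
    results = await asyncio.gather(
        *(might_fail(i) for i in range(3)), return_exceptions=True
    )
    for r in results:
        if isinstance(r, BaseException):
            print("task failed:", repr(r))  # the exception is returned, not raised
        else:
            print("task result:", r)

asyncio.run(main())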
2
1
79,576,352
2025-4-16
https://stackoverflow.com/questions/79576352/is-it-possible-to-use-the-closed-form-of-fibonacci-series-to-generate-the-nth-fi
The closed-form of the Fibonacci series is the following: As you can see the expression contains square roots so we cannot use it directly to generate Nth Fibonacci number exactly, as sqrt(5) is irrational, we can only approximate it even in pure mathematics, and floating point inaccuracies introduces a whole other lot of complications that ensure if we implement the formula naively we are bound to lose accuracy. I want to get the Nth Fibonacci number exactly using this formula, assume N is very large (in the thousands range), and since Python's integer is unbounded and Fibonacci sequence grows extremely quickly out of the range of uint64 (at the 94th term to be exact, we start at 0th), I cannot use Numba or NumPy to vectorize this. Now I have implemented a function that generates Nth Fibonacci number exactly using the closed form, but it is very inefficient. First I will just quickly describe what I have done, we start with (1 + a)n, using the binomial theorem we get 1 + na + nC2a2 + nC3a3 + nC4a4 ... + an. Because the choose function is symmetric we only need to compute half of the terms. Now we can compute nCi from the previous term nCi-1: nCi = nCi-1 * (n + 1 - i) / i, this is much faster than using factorials. Then we expand (1 - a)n: 1 - na + nC2a2 - nC3a3 + nC4a4 - nC5a5 + nC6a6... The signs alternate, now if we evaluate (1 + a)n - (1 - a)n, we find the even powers cancel out, leaving only odd powers, and since we divide by sqrt(5) we are left with only powers of 5. Code: def fibonacci(lim: int) -> list[int]: a, b = 1, 1 result = [0] * lim for i in range(1, lim): result[i] = a a, b = b, a + b return result def Fibonacci_phi_even(n: int) -> int: fib = 0 power = 1 a, b, c = n - 1, 2, 2 * n length = int(n / 4 + 0.5) coeffs = [1] * length for i in range(length): coeffs[i] = c fib += c * power power *= 5 c = c * (a - 1) * a // ((b + 1) * b) a -= 2 b += 2 for coeff in coeffs[length - 1 - (n // 2 & 1) :: -1]: fib += coeff * power power *= 5 return fib >> n def Fibonacci_phi_odd(n): length = n // 2 + 1 powers = [1] * length power = 1 for i in range(length): powers[i] = power power *= 5 fib = 0 a, b, c, d = n, 1, 2, -1 i = di = length - 1 for _ in range(length): fib += c * powers[i] c = c * a // b a -= 1 b += 1 i += d * di d *= -1 di -= 1 return fib >> n def Fibonacci_phi(n: int) -> int: if n <= 2: return int(n > 0) return Fibonacci_phi_odd(n) if n & 1 else Fibonacci_phi_even(n) It works: In [377]: print(fibonacci(25)) [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368] In [378]: print([Fibonacci_phi(i) for i in range(25)]) [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368] But it is slow: In [379]: %timeit fibonacci(1025) 87.9 μs ± 633 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [380]: %timeit Fibonacci_phi(1024) 699 μs ± 8.42 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) How can we make it faster without losing accuracy?
I have done it, I have made the closed form spit out Nth Fibonacci number exactly efficiently. Now instead of doing all that inefficient binomial expansion, we can reduce the problem to the most basic part. Since we are doing exponentiation, we are really just doing multiplications repeatedly, we first need to find a way to multiply numbers like (a + b√5) exactly. Now using simple algebra, we have the following relationship: (a + b√5) * (c + d√5) = ac + bc√5 + ad√5 + 5bd Now what do we do? We group the rational terms and irrational terms, and then we get rid of the radical, we simplify the above to this: (a, b) * (c, d) = (ac + 5bd, ad + bc) The above expression doesn't use floating points anywhere and is guaranteed to be exact. Now what do we do next? We just multiply the numbers by itself N times to do exponentiation. But of course that is inefficient, since we have defined multiplication, we can use exponentiation by squaring. Now we are computing (1, 1)n - (1, -1)n, the rational part will get cancelled out, the irrational part will have opposite signs and so we only need to compute (1, 1)n and double the irrational part, and we then right shift by n to get nth Fibonacci number. def poly_mult(t1: tuple[int, int], t2: tuple[int, int]) -> tuple[int, int]: (a, b), (c, d) = t1, t2 return (a * c + 5 * b * d, a * d + b * c) def exp_by_sqr(base: tuple[int, int], exp: int) -> tuple[int, int]: prod = (1, 0) if not exp: return prod while exp > 1: if exp & 1: prod = poly_mult(base, prod) exp -= 1 base = poly_mult(base, base) exp >>= 1 return poly_mult(base, prod) def Fibonacci_phi_poly(n: int) -> int: return (2 * exp_by_sqr((1, 1), n)[1]) >> n For comparison, the fast doubling approach: def fibonacci_fast_doubling(n: int) -> tuple[int, int]: if not n: return (0, 1) a, b = fibonacci_fast_doubling(n >> 1) c = (a2 := a * a) + (b2 := b * b) d = 2 * a * b return (c, b2 + d) if n & 1 else (d - a2, c) In [10]: print([Fibonacci_phi_poly(i) for i in range(26)]) [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025] In [11]: %timeit fibonacci(1024) 90 μs ± 664 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [12]: %timeit Fibonacci_phi(1023) 862 μs ± 5.74 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [13]: %timeit fibonacci_fast_doubling(1023) 5 μs ± 38.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [14]: %timeit Fibonacci_phi_poly(1023) 19 μs ± 84.3 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) It is only slightly less efficient than fast doubling and much more efficient than simple iteration, and tremendously more efficient than the binomial expansion approach.
3
6
79,575,935
2025-4-15
https://stackoverflow.com/questions/79575935/trying-to-exeute-the-dbms-stats-function-in-a-python-script
I inherited a 1000+ line query the works in oracle's sql developer. When I try running the query in a python script, I get an error (An error occurred: ORA-00900: invalid SQL statement) at the line: EXEC DBMS_STATS.gather_table_stats('SSS', 'YYY'). I have very little experience with databases and a Google search of the error has not yielded a possible solution. Python script: with open('myquery.sql', 'r') as sql_file: query = str(sql_file.read().replace("\n\n","")) sql_commands = query.split(';') for command in sql_commands: try: if command.strip() != '': print(command) cur.execute(command) connection.commit() except Exception as e: print(f"An error occurred: {e}") connection.commit() The last few lines of the query(myquery.sql): DELETE FROM CD_SS.MM_DSD_MIG_MASTER; commit; INSERT INTO CD_SS.MM_DSD_MIG_MASTER SELECT DISTINCT PATTERN, b_ITEM_NUM, 'N', NULL, NULL, NULL FROM CD_SS.MM_DSD_MIG_PATTERNS; commit; update MM_DSD_MIG_MASTER set MIGRATION_STATUS_FLAG = 'Z' where MIGRATION_STATUS_FLAG = 'N'; commit; grant all privileges on MM_DSD_MIG_MASTER to public; grant all privileges on MM_DSD_MIG_PATTERNS to public; DBMS_STATS.gather_table_stats('CD_SS', 'MM_DSD_MIG_PATTERNS'); DBMS_STATS.gather_table_stats('CD_SS', 'MM_DSD_MIG_MASTER'); Everything works if the DMS_STATS lines are omited but was told it has to stay.
The script can be converted into a single PL/SQL anonymous block. The advantage of this conversion is that it allows the exact same block to run on all environments. (Although the slash at the end should probably be excluded when running from Python.) The disadvantage of this conversion is that all the DDL statements need to be converted into EXECUTE IMMEDIATE statements. BEGIN DELETE FROM CD_SS.MM_DSD_MIG_MASTER; commit; INSERT INTO CD_SS.MM_DSD_MIG_MASTER SELECT DISTINCT PATTERN, b_ITEM_NUM, 'N', NULL, NULL, NULL FROM CD_SS.MM_DSD_MIG_PATTERNS; commit; update MM_DSD_MIG_MASTER set MIGRATION_STATUS_FLAG = 'Z' where MIGRATION_STATUS_FLAG = 'N'; commit; EXECUTE IMMEDIATE 'grant all privileges on MM_DSD_MIG_MASTER to public'; EXECUTE IMMEDIATE 'grant all privileges on MM_DSD_MIG_PATTERNS to public'; DBMS_STATS.gather_table_stats('CD_SS', 'MM_DSD_MIG_PATTERNS'); DBMS_STATS.gather_table_stats('CD_SS', 'MM_DSD_MIG_MASTER'); END; /
1
0
79,576,073
2025-4-15
https://stackoverflow.com/questions/79576073/can-you-create-multiple-columns-based-on-the-same-set-of-conditions-in-polars
Is it possible to do something like this in Polars? Like do you need a separate when.then.otherwise for each of the 4 new varialbles, or can you use struct to create multiple new variables from one when.then.otherwise? Regular Python example: if x=1 and y=3 and w=300*z and z<100: tot = 300 work = 400 sie = 500 walk = 'into' else: tot = 350 work = 400*tot sie = tot/1000 walk = 'outof' I tried to do a similar thing in Polars with struct (to create new variables a and b based on Movie variable: import polars as pl ratings = pl.DataFrame( { "Movie": ["Cars", "IT", "ET", "Cars", "Up", "IT", "Cars", "ET", "Up", "Cars"], "Theatre": ["NE", "ME", "IL", "ND", "NE", "SD", "NE", "IL", "IL", "NE"], "Avg_Rating": [4.5, 4.4, 4.6, 4.3, 4.8, 4.7, 4.5, 4.9, 4.7, 4.6], "Count": [30, 27, 26, 29, 31, 28, 28, 26, 33, 28], } ) x = ratings.with_columns( pl.when(pl.col('Movie')=='Up').then(pl.struct(pl.lit(0),pl.lit(2))).otherwise(pl.struct(pl.lit(1),pl.lit(3))).struct.field(['a','b']) ) print(x) Thanks!
If you remove the .struct.field() call you will see the issue. # DuplicateError: multiple fields with name 'literal' found You need to give names to the struct fields you are creating. df.with_columns( pl.when(pl.col('Movie') == 'Up') .then(pl.struct(pl.lit(0).alias('a'), pl.lit(2).alias('b'))) .otherwise(pl.struct(pl.lit(1).alias('a'), pl.lit(3).alias('b'))) .struct.field('a', 'b') ) There are also some ways to neaten it up if you prefer. pl.when() uses kwargs as shorthand for equality conditions. pl.struct() uses kwargs as shorthand for .alias() to name fields. pl.lit() is not required for integers .struct.unnest() will unnest all fields. df.with_columns( pl.when(Movie='Up') .then(pl.struct(a=0, b=2)) .otherwise(pl.struct(a=1, b=3)) .struct.unnest() ) shape: (10, 6) ┌───────┬─────────┬────────────┬───────┬─────┬─────┐ │ Movie ┆ Theatre ┆ Avg_Rating ┆ Count ┆ a ┆ b │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 ┆ i64 ┆ i32 ┆ i32 │ ╞═══════╪═════════╪════════════╪═══════╪═════╪═════╡ │ Cars ┆ NE ┆ 4.5 ┆ 30 ┆ 1 ┆ 3 │ │ IT ┆ ME ┆ 4.4 ┆ 27 ┆ 1 ┆ 3 │ │ ET ┆ IL ┆ 4.6 ┆ 26 ┆ 1 ┆ 3 │ │ Cars ┆ ND ┆ 4.3 ┆ 29 ┆ 1 ┆ 3 │ │ Up ┆ NE ┆ 4.8 ┆ 31 ┆ 0 ┆ 2 │ │ IT ┆ SD ┆ 4.7 ┆ 28 ┆ 1 ┆ 3 │ │ Cars ┆ NE ┆ 4.5 ┆ 28 ┆ 1 ┆ 3 │ │ ET ┆ IL ┆ 4.9 ┆ 26 ┆ 1 ┆ 3 │ │ Up ┆ IL ┆ 4.7 ┆ 33 ┆ 0 ┆ 2 │ │ Cars ┆ NE ┆ 4.6 ┆ 28 ┆ 1 ┆ 3 │ └───────┴─────────┴────────────┴───────┴─────┴─────┘
1
2
79,576,094
2025-4-15
https://stackoverflow.com/questions/79576094/whats-the-best-way-to-group-by-more-than-one-column
Essentially, I want to group all records that match either key Input: |id|key1|key2| |-|-|-| |1|a|x| |2|a|y| |3|b|y| |4|c|z| Desired output: (The first 3 rows are grouped together) |key1 (or any identifier, I only care about the final counts)|len| |-|-| |a|3| |c|1| My attempted (incomplete) solution: import polars as pl df = pl.DataFrame({ 'id': [1, 2, 3, 4], 'key1': ['a', 'a', 'b', 'c'], 'key2': ['x', 'y', 'y', 'z'], }) df = ( df.group_by( 'key1', 'key2', ) .len() .group_by('key1') .agg( pl.col('key2'), pl.col('len').sum() ) ) print(df) Which gives: |key1|key2|len| |-|-|-| |a|["y", "x"]|2| |b|["y"]|1| |c|["z"]|1| However, I'm not sure how I would further group this by key2 (merging the a and b rows, since they have a common value of y) while preserving the sum of len
If I understand you correctly, you want to create equivalence relationships between key1 and key2, and keep merging things into groups if there exists a relationship chain that connects the two? So in this case since there is a row a, y any rows which have a or y in them should be merged? In this case it's hard to express this directly in Polars, but you can do a hybrid with scipy's DisjointSet. First, gather all the unique keys and relationships: keys = df.select(pl.col.key1.append(pl.col.key2).unique()) edges = df.select(["key1", "key2"]).unique() Then, insert them into a DisjointSet and merge into groups: from scipy.cluster.hierarchy import DisjointSet ds = DisjointSet(keys.to_series().to_list()) for a, b in edges.iter_rows(): ds.merge(a, b) Then convert the DisjointSet into a DataFrame mapping key to group index: group_idx = ( pl.DataFrame({"key": [list(s) for s in ds.subsets()]}) .lazy() .with_row_index("group_idx") .explode("key") ) Finally, we join the group_idx to our original dataframe and group by the group index: (df.lazy() .join(group_idx, left_on="key1", right_on="key") .group_by("group_idx") .len() .collect()) Thus giving our output: ┌───────────┬─────┐ │ group_idx ┆ len │ │ --- ┆ --- │ │ u32 ┆ u32 │ ╞═══════════╪═════╡ │ 0 ┆ 1 │ │ 1 ┆ 3 │ └───────────┴─────┘
2
2
79,575,941
2025-4-15
https://stackoverflow.com/questions/79575941/why-does-randomforestclassifier-in-scikit-learn-predict-even-on-all-nan-input
I am training a random forest classifier in python sklearn, see code below- from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier(random_state=42) rf.fit(X = df.drop("AP", axis =1), y = df["AP"].astype(int)) When I predict the values using this classifier on another dataset that has NaN values, the model provides some output. Not even that, I tried predicting output on a row with all variables as NaNs, it predicted the outputs. #making a row with all NaN values row = pd.DataFrame([np.nan] * len(rf.feature_names_in_), index=rf_corn.feature_names_in_).T rf.predict(row) It predicts- array([1]) I know that RandomForestClassifier in scikit-learn does not natively support missing values. So I expected a ValueError, not a prediction. I can ignore the NaN rows and only predict on non-nan rows but I am concerned if there is something wrong with this classifier. Any insight will be appreciated.
In the most recent version of scikit-learn (v1.4) they added support for missing values to RandomForestClassifier when the criterion is gini (default). Source: https://scikit-learn.org/dev/whats_new/v1.4.html#id7
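A quick sketch demonstrating this behavior (assumes scikit-learn 1.4 or newer; the toy data is made up):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([[0.0, 1.0], [1.0, np.nan], [np.nan, 0.0], [1.0, 1.0]])
y = np.array([0, 1, 0, 1])

clf = RandomForestClassifier(random_state=42)  # criterion="gini" by default
clf.fit(X, y)                                  # accepts NaN on scikit-learn >= 1.4
print(clf.predict([[np.nan, np.nan]]))         # an all-NaN row still gets a class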
1
2
79,575,632
2025-4-15
https://stackoverflow.com/questions/79575632/why-do-i-get-get-http-1-1-404-not-found-with-fastapi-server
I am trying to create a token with FastAPI:

import json
import os
import aiohttp
import asyncio
from fastapi import FastAPI
from fastapi import APIRouter, Request
from fastapi.responses import JSONResponse

token = APIRouter(prefix="/management/api", tags=["API Apl token"])

app = FastAPI()
app.include_router(token)

and many methods after...

I got this:

uvicorn peg:token --reload
INFO: Will watch for changes in these directories: ['C:\\Users\\Ejbc25\\fastapi']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [2232] using StatReload
INFO: Started server process [20632]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:51183 - "GET / HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:51183 - "GET / HTTP/1.1" 404 Not Found

How can I fix this problem? I expected that http://127.0.0.1:8000/token would return a token, but it shows Not Found.
The reason for your issue is that you're trying to run a FastAPI APIRouter instance (token) directly with uvicorn, instead of the actual FastAPI application (app). Your uvicorn command:

uvicorn peg:token --reload

is trying to start the app using the token router as the main app, which won’t work since token is not a FastAPI application (it’s just a router meant to be included in an app).

You should point uvicorn to the actual FastAPI instance, which in your code is named app. Change your command to:

uvicorn peg:app --reload

Obviously I'm assuming that your file is named peg.py; if it has a different name, update the command accordingly (e.g., uvicorn main:app --reload for main.py).
3
4
79,591,487
2025-4-24
https://stackoverflow.com/questions/79591487/roll-back-a-commit
Note: This question focuses on web apps utilising MySQL's transactions - commit and rollback. Even though the code samples below use Python, the problem itself is not limited to the choice of programming language building the web app. Imagine I have two files: main.py and my_dao.py. main.py acts as the starting point of my application and it simply triggers the methods in class MyDAO: main.py import my_dao MyDAO.run_a() ... MyDAO.run_b() ... my_dao.py defines a DAO-like class with two methods, each does something to the database: import mysql.connector class MyDAO: conn = mysql.connector.connect(...) @classmethod def run_a(cls): try: do_something_1() cursor = cls.cursor() cursor.execute('Query A 1') cursor.execute('Query A 2') do_something_2() cursor.close() conn.commit() except Error as e: conn.rollback() log(...) @classmethod def run_b(cls): try: do_something_1() cursor = cls.cursor() cursor.execute('Query B 1') # calling cls.run_a() here cls.run_a() cursor.execute('Query B 2') do_something_2() cursor.close() conn.commit() except Error as e: conn.rollback() log(...) As you can see, both methods have their own commits and rollbacks. run_a() basically runs a bunch of queries and then commit. run_b() is similar except that it calls run_a() in between its queries. Problem If everything works, this seems fine. However, if run_b() fails after successfully running run_a() inside, this would cause a problem because run_a() has already committed and no matter how run_b() rollbacks, it will not rollback to the point before run_b() was called. I understand that MySQL doesn't support nested transactions. How can I redesign the above so that run_b() can rollback successfully including the commit used by run_a() within it? My thoughts: Not sure if the above is a bad design, but I have wrapped each method with try...except and commit/rollback where needed so that each method can be called independently outside the class. I am aware of savepoint but I think rewriting the above using savepoint would be quite messy, and run_a() would also lose its 'independentness' as it doesn't know whether it should commit within the method itself. Alternatively, I have also thought of extracting the inner part of run_a() into a common function, but it looks quite clumsy to me: import mysql.connector class MyDAO: conn = mysql.connector.connect(...) @classmethod def _real_run_a(cls, cursor): cursor.execute('Query A 1') cursor.execute('Query A 2') @classmethod def run_a(cls): try: do_something_1() cursor = cls.cursor() cls._real_run_a(cursor) do_something_2() cursor.close() conn.commit() except Error as e: conn.rollback() log(...) @classmethod def run_b(cls): try: do_something_1() cursor = cls.cursor() cursor.execute('Query B 1') cls._real_run_a(cursor) cursor.execute('Query B 2') do_something_2() cursor.close() conn.commit() except Error as e: conn.rollback() log(...)
I don't do transaction commit or rollback in DAO class methods, because of the nesting problem you identified. Some people "fake" transactions using a counter. If a nested DAO called by another DAO starts a transaction, it doesn't really start a transaction, it just increments the nesting counter. And it doesn't really commit or roll back if the nesting counter is greater than 0. It just decrements the counter. So only the outermost DAO whose counter is 0 gets to issue a real commit or rollback. The problem with that supposed solution is that DAOs think they did a commit or rollback, even though they didn't. And the outer code can do the reverse operation without the knowledge of the inner DAO. For example, an inner DAO encounters an error, and tries to roll back to undo whatever change it did. The rollback is fake and doesn't happen. Then the outer DAO, unaware that an error has occurred, commits the transaction, which is a real commit. Thus the database changes have been committed, when they should have been discarded. I'm sure you can imagine the reverse situation can happen too. The inner DAO makes a proper change, but the outer DAO rolls it back. Chaos ensues! The better solution is to refrain from doing start transaction or commit/rollback in any DAO. Instead, do both the start and the resolution of the transaction in the code that calls the DAOs. In an MVC web application, for example, this could be at the Controller level. Whereas all Models should assume a transaction has been started already, and that the transaction will be resolved appropriately by the caller. If a Model or DAO wants to signal the caller that an error has occurred, then raise an error, which bubbles up to the caller and the caller should handle it by doing a rollback at that level. If no error occurs after the call to the Model or DAO finishes, then commit at that level.
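Here is a minimal sketch of that shape, keeping the mysql.connector setup from the question; the query strings and connection details are placeholders:

import mysql.connector

class MyDAO:
    """DAO methods never commit or roll back; the caller owns the transaction."""

    def __init__(self, conn):
        self.conn = conn

    def run_a(self):
        cursor = self.conn.cursor()
        cursor.execute('Query A 1')
        cursor.execute('Query A 2')
        cursor.close()

    def run_b(self):
        cursor = self.conn.cursor()
        cursor.execute('Query B 1')
        self.run_a()                 # safe to nest: still one open transaction
        cursor.execute('Query B 2')
        cursor.close()

# Caller (e.g. a controller) owns the transaction boundary.
conn = mysql.connector.connect(host="localhost", user="user", password="pw", database="db")
dao = MyDAO(conn)
try:
    dao.run_b()
    conn.commit()
except Exception:
    conn.rollback()
    raise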
2
4
79,591,152
2025-4-24
https://stackoverflow.com/questions/79591152/root-mean-square-linearisation-for-linear-programming
I am trying to linearise the function root mean square to use it in a linear optimisation or Mixed integer linear optimisation. Any idea how I could do this? For instance with the example below, if I wanted to maximize P*100, the model would give P=10, Q = 0 and S=10. Many thanks import numpy as np import pulp S = np.sqrt(P**2 + Q**2) model = pulp.LpProblem("Linearise RMS", pulp.LpMaximize) P = pulp.LpVariable("P", lowBound=-10, upBound=10 ,cat="Continuous") Q = pulp.LpVariable("Q", lowBound=-10, upBound=10 ,cat="Continuous") S = pulp.LpVariable("S", lowBound= 0, upBound=10 ,cat="Continuous") objective_function = P*100 model.setObjective(objective_function) cbc_solver = PULP_CBC_CMD(options=['ratioGap=0.02']) result = model.solve(solver=cbc_solver)
It cannot be linearised with an exact problem. It can be linearised with an approximate problem, and depending on a few things, to a decent degree of accuracy. It cannot be linearised with a continuous solver. It must be linearised with a mixed integer solver because the upper parabolic constraint is non-convex; only the lower constraint is convex. The upper constraint segments require binary selectors. Your toy example is poorly-chosen; the optimal solution is trivial and most of the parabolic constraints don't matter. Depending on what you're actually doing, often the lower or upper half of the parabolic constraint can and should be dropped entirely, but you've left that entirely ambiguous so I show both, despite the fact that it isn't necessary. However, I demonstrate with different bounds than your example. import numpy as np import pulp from matplotlib import pyplot as plt def segmented_parabola( v: pulp.LpVariable, model: pulp.LpProblem, n_segments: int = 5, plot: bool = False, ) -> tuple[ pulp.LpVariable, # v**2 list[pulp.LpVariable], # segment selects plt.Axes | None, ]: v2 = pulp.LpVariable(name=v.name + '2', cat=v.cat) x = np.linspace(v.lowBound, v.upBound, n_segments) y = x**2 dydx = 2*x if plot: fig, ax = plt.subplots() xhi = np.linspace(v.lowBound, v.upBound, 101) ax.plot(xhi, xhi**2) ax.set_xlabel(v.name) ax.set_ylabel(v2.name) else: ax = None # For the lower constraints, simply add one constraint row per line segment for i, (xi, yi, dydxi) in enumerate(zip(x, y, dydx)): model.addConstraint( name=f'lower_{v.name}{i}', constraint=v2 >= v * dydxi - yi, ) if plot: ax.plot(xhi, dydxi * xhi - yi) # The upper constraints are non-convex, so require binary selectors selects = pulp.LpVariable.matrix( name=f'select_{v.name}', cat=pulp.LpBinary, indices=range(n_segments - 1), ) model.addConstraint( name=f'exclusive_{v.name}', constraint=1 == pulp.lpSum(selects), ) dydx = (y[1:] - y[:-1])/(x[1:] - x[:-1]) offset = y[:-1] - dydx*x[:-1] # v2 <= dydxi*v + offseti + M*(1 - select) # M > (v2 - offseti - dydxi*v)/(1 - select) M = 2*( max(abs(v.lowBound), abs(v.upBound))**2 - offset.min() - min(v.lowBound*dydx.max(), v.upBound*dydx.min()) ) for i, (x0, x1, dydxi, offseti, select) in enumerate(zip( x[:-1], x[1:], dydx, offset, selects, )): x01 = np.array([x0, x1]) if plot: ax.plot(x01, dydxi*x01 + offseti) if x0 > v.lowBound: # select=1 iff v >= segment left bound model.addConstraint( name=f'selectleft_{v.name}{i}', constraint=select <= (v - v.lowBound)/(x0 - v.lowBound), ) if x1 < v.upBound: # select=1 iff v <= segment right bound model.addConstraint( name=f'selectright_{v.name}{i}', constraint=select <= (v.upBound - v)/(v.upBound - x1), ) # if select=1, v2 <= dydxi*v + offseti model.addConstraint( name=f'upper_{v.name}{i}', constraint=v2 <= dydxi*v + offseti + M*(1 - select), ) return v2, selects, ax def display( x: pulp.LpVariable, x2: pulp.LpVariable, select: list[pulp.LpVariable], ax: plt.Axes, ) -> None: print(f'{x.name} = {x.value()} ~ ±{np.sqrt(x2.value())}') print(f'{x2.name} = {x2.value()} ~ {x.value()**2}') print('selected segment', next(i for i, v in enumerate(select) if v.value())) print() ax.scatter(x.value(), x2.value()) def main() -> None: p = pulp.LpVariable(name='p', lowBound=-10, upBound=2.5, cat=pulp.LpContinuous) # or -10, 10 q = pulp.LpVariable(name='q', lowBound=-3, upBound=0.5, cat=pulp.LpContinuous) # or -10, 10 s = pulp.LpVariable(name='s', lowBound=0, upBound=5, cat=pulp.LpContinuous) # or 0, 10 model = pulp.LpProblem(name='linearise_rms', sense=pulp.LpMaximize) 
model.setObjective(p) p2, pa, axp = segmented_parabola(p, model, plot=True) q2, qa, axq = segmented_parabola(q, model, plot=True) s2, sa, axs = segmented_parabola(s, model, plot=True) model.addConstraint(name='norm', constraint=s2 == p2 + q2) print(model) model.solve() if model.status != pulp.LpStatusOptimal: raise ValueError(model.status) display(p, p2, pa, axp) display(q, q2, qa, axq) display(s, s2, sa, axs) plt.show() if __name__ == '__main__': main() linearise_rms: MAXIMIZE 1*p + 0.0 SUBJECT TO lower_p0: 20 p + p2 >= -100 lower_p1: 13.75 p + p2 >= -47.265625 lower_p2: 7.5 p + p2 >= -14.0625 lower_p3: 1.25 p + p2 >= -0.390625 lower_p4: - 5 p + p2 >= -6.25 exclusive_p: select_p_0 + select_p_1 + select_p_2 + select_p_3 = 1 selectright_p0: 0.106666666667 p + select_p_0 <= 0.266666666667 upper_p0: 16.875 p + p2 + 421.875 select_p_0 <= 353.125 selectleft_p1: - 0.32 p + select_p_1 <= 3.2 selectright_p1: 0.16 p + select_p_1 <= 0.4 upper_p1: 10.625 p + p2 + 421.875 select_p_1 <= 396.09375 selectleft_p2: - 0.16 p + select_p_2 <= 1.6 selectright_p2: 0.32 p + select_p_2 <= 0.8 upper_p2: 4.375 p + p2 + 421.875 select_p_2 <= 419.53125 selectleft_p3: - 0.106666666667 p + select_p_3 <= 1.06666666667 upper_p3: - 1.875 p + p2 + 421.875 select_p_3 <= 423.4375 lower_q0: 6 q + q2 >= -9 lower_q1: 4.25 q + q2 >= -4.515625 lower_q2: 2.5 q + q2 >= -1.5625 lower_q3: 0.75 q + q2 >= -0.140625 lower_q4: - q + q2 >= -0.25 exclusive_q: select_q_0 + select_q_1 + select_q_2 + select_q_3 = 1 selectright_q0: 0.380952380952 q + select_q_0 <= 0.190476190476 upper_q0: 5.125 q + q2 + 35.875 select_q_0 <= 29.5 selectleft_q1: - 1.14285714286 q + select_q_1 <= 3.42857142857 selectright_q1: 0.571428571429 q + select_q_1 <= 0.285714285714 upper_q1: 3.375 q + q2 + 35.875 select_q_1 <= 33.21875 selectleft_q2: - 0.571428571429 q + select_q_2 <= 1.71428571429 selectright_q2: 1.14285714286 q + select_q_2 <= 0.571428571429 upper_q2: 1.625 q + q2 + 35.875 select_q_2 <= 35.40625 selectleft_q3: - 0.380952380952 q + select_q_3 <= 1.14285714286 upper_q3: - 0.125 q + q2 + 35.875 select_q_3 <= 36.0625 lower_s0: s2 >= 0 lower_s1: - 2.5 s + s2 >= -1.5625 lower_s2: - 5 s + s2 >= -6.25 lower_s3: - 7.5 s + s2 >= -14.0625 lower_s4: - 10 s + s2 >= -25 exclusive_s: select_s_0 + select_s_1 + select_s_2 + select_s_3 = 1 selectright_s0: 0.266666666667 s + select_s_0 <= 1.33333333333 upper_s0: - 1.25 s + s2 + 87.5 select_s_0 <= 87.5 selectleft_s1: - 0.8 s + select_s_1 <= 0 selectright_s1: 0.4 s + select_s_1 <= 2 upper_s1: - 3.75 s + s2 + 87.5 select_s_1 <= 84.375 selectleft_s2: - 0.4 s + select_s_2 <= 0 selectright_s2: 0.8 s + select_s_2 <= 4 upper_s2: - 6.25 s + s2 + 87.5 select_s_2 <= 78.125 selectleft_s3: - 0.266666666667 s + select_s_3 <= 0 upper_s3: - 8.75 s + s2 + 87.5 select_s_3 <= 68.75 norm: - p2 - q2 + s2 = 0 VARIABLES -10 <= p <= 2.5 Continuous p2 free Continuous -3 <= q <= 0.5 Continuous q2 free Continuous s <= 5 Continuous s2 free Continuous 0 <= select_p_0 <= 1 Integer 0 <= select_p_1 <= 1 Integer 0 <= select_p_2 <= 1 Integer 0 <= select_p_3 <= 1 Integer 0 <= select_q_0 <= 1 Integer 0 <= select_q_1 <= 1 Integer 0 <= select_q_2 <= 1 Integer 0 <= select_q_3 <= 1 Integer 0 <= select_s_0 <= 1 Integer 0 <= select_s_1 <= 1 Integer 0 <= select_s_2 <= 1 Integer 0 <= select_s_3 <= 1 Integer ... 
Result - Optimal solution found Objective value: 2.50000000 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 0.01 Time (Wallclock seconds): 0.01 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.01 (Wallclock seconds): 0.01 p = 2.5 ~ ±2.5 p2 = 6.25 ~ 6.25 selected segment 3 q = -0.375 ~ ±0.375 q2 = 0.140625 ~ 0.140625 selected segment 2 s = 2.528125 ~ ±2.5279685520195856 s2 = 6.390625 ~ 6.391416015625001 selected segment 2 Depending on the size of the problem, you can "easily" (heavy quotes) scale up the resolution; this completes in less than a second with n=50:
1
1
79,595,678
2025-4-28
https://stackoverflow.com/questions/79595678/how-can-i-store-ids-in-python-without-paying-the-28-byte-per-int-price
My Python code stores millions of ids in various data structures, in order to implement a classic algorithm. The run time is good, but the memory usage is awful. These ids are ints. I assume that since Python ints start at 28 bytes and grow, there's a huge price there. Since they're just opaque ids, not actual mathematical objects, I could get by with just 4 bytes for them. Is there a way to store ids in Python that won't use the full 28 bytes? E.g., do I need to put them as both keys and values of dicts? Note: The common solution of using something like NumPy won't work here, because it's not a contiguous array. It's keys and values in a dict, and dicts of dicts, etc. I'm also amenable to other Python interpreters that are less memory-hungry for ints.
Your use case is for IDs to be stored as keys and values of a dict. But since keys and values of a dict have to be Python objects, they must each be allocated an object header as well as a pointer from the dict. To be able to actually store keys and values at 4 bytes each you would have to implement a custom hash table that allocates an array.array of 32-bit integers for both keys and values. Since IDs are typically never going to be 0 or 2**32-1, you can use them as sentinels for an empty slot and a deleted slot, respectively. Below is a sample implementation with linear probing: from array import array class HashTable: EMPTY = 0 DELETED = (1 << 32) - 1 def __init__(self, source=None, size=8, load_factor_threshold=0.75): self._size = size self._load_factor_threshold = load_factor_threshold self._count = 0 self._keys = array('L', [self.EMPTY]) * size self._values = array('L', [self.EMPTY]) * size if source is not None: self.update(source) def _probe(self, key): index = hash(key) % self._size for _ in range(self._size): yield index, self._keys[index], self._values[index] index = (index + 1) % self._size def __setitem__(self, key, value): while self._count >= self._load_factor_threshold * self._size: new = HashTable(self, self._size * 2, self._load_factor_threshold) self._size = new._size self._keys = new._keys self._values = new._values for index, probed_key, probed_value in self._probe(key): if probed_value == self.DELETED: continue if probed_value == self.EMPTY: self._keys[index] = key self._values[index] = value self._count += 1 return elif probed_key == key: self._values[index] = value return def __getitem__(self, key): for _, probed_key, value in self._probe(key): if value == self.EMPTY: break if value == self.DELETED: continue if probed_key == key: return value raise KeyError(key) def __delitem__(self, key): for index, probed_key, value in self._probe(key): if value == self.EMPTY: raise KeyError(key) if value == self.DELETED: continue if probed_key == key: self._values[index] = self.DELETED self._count -= 1 return def items(self): for key, value in zip(self._keys, self._values): if value not in (self.EMPTY, self.DELETED): yield key, value def keys(self): for key, _ in self.items(): yield key def values(self): for _, value in self.items(): yield value def __iter__(self): yield from self.keys() def __len__(self): return self._count def __eq__(self, other): return set(self.items()) == set(other.items()) def __contains__(self, key): try: self[key] except KeyError: return False return True def get(self, key, default=None): try: return self[key] except KeyError: return default def __repr__(self): return repr(dict(self.items())) def __str__(self): return repr(self) def copy(self): return HashTable(self, self._size, self._load_factor_threshold) def update(self, other): for key, value in other.items(): self[key] = value so that with pympler.asizeof, which recursively measures the memory footprint of an object, you can see the memory saving to be as much as 90%: from pympler.asizeof import asizeof d = dict(zip(range(1500000), range(1500000))) h = HashTable(d) print(asizeof(d)) # 179877936 print(asizeof(h)) # 16777920 Note that on some platforms the type code 'L' for array.array results in an item size of 8 bytes instead of 4 bytes, in which case you should use the type code 'I' instead.
1
3
79,594,689
2025-4-27
https://stackoverflow.com/questions/79594689/python-asyncio-lock-release-object-nonetype-cant-be-used-in-await-expressi
Code & Context I'm building a multithreaded program to automate API calls, to retrieve all the information I need based off of a list of unique IDs. To do this, I ended up creating my own Lock class to allow threads either concurrent read access to specific variables, or one single thread to read and write. # Read-Write asynchronous & multithreaded locking. class AsyncReadWriteLock: def __init__(self): self._readers = 0 # Number of active readers self._writer_lock = asyncio.Lock() # Lock for writers self._readers_lock = asyncio.Lock() # Lock to protect the readers counter self._readers_wait = asyncio.Condition(self._readers_lock) # Condition for readers async def acquire_read(self): """Acquire the lock for reading.""" # First wait for any active writer to complete async with self._writer_lock: # Wait for other readers' actions async with self._readers_lock: self._readers += 1 # Ensures readers count is accurate async def release_read(self): """Release the lock for reading.""" # Wait for other readers' actions async with self._readers_lock: self._readers -= 1 # No readers are left if self._readers == 0: # Notify writers waiting for all readers to finish self._readers_wait.notify_all() async def acquire_write(self): """Acquire the lock for writing.""" # First acquire the writer lock to block other writers await self._writer_lock.acquire() # Now wait for all readers to finish async with self._readers_lock: while self._readers > 0: await self._readers_wait.wait() async def release_write(self): """Release the lock for writing.""" # Release the writer lock, only if locked if self._writer_lock.locked(): await self._writer_lock.release() To provide an example, lets say we have the following variables: CHECKPOINT_FILE = "./myCheckpoint.txt" latest_checkpoint = 0 # Default value checkpoint_lock = AsyncReadWriteLock() # Lock for checkpoint This is the function where checkpoint_lock is first interacted with. It's also worth noting that it is the first lock variable in my program to be interacted with. async def load_checkpoint() -> int: """ Load the last completed batch index from checkpoint file Returns the index of the last completed batch, or -1 if no checkpoint exists """ global latest_checkpoint await checkpoint_lock.acquire_write() try: with open(CHECKPOINT_FILE, 'r') as f: line = f.readline().strip() if line and line.isdigit(): latest_checkpoint = int(line) logging.info(f"Loaded checkpoint: last completed index = {latest_checkpoint}") return max_completion else: logging.info("Checkpoint file exists but contains no valid index") return -1 except FileNotFoundError: logging.info("No checkpoint file found, starting from the beginning") return -1 except Exception as e: logging.error(f"Error loading checkpoint: {str(e)}") return -1 finally: await checkpoint_lock.release_write() The Idea I should be able to have multiple concurrent threads calling checkpoint_lock.acquire_read(), such that they all access the values without error. Because none of the "reader" threads modify the values, it causes no problem. If a thread wants to write, checkpoint_lock.acquire_write() is called, and it waits on all readers to finish, then acquires the write lock. The Problem What I'm currently experiencing is that, when the write section finishes, checkpoint_lock.release_write() is called. For some reason, within this function, I get the error: TypeError: object NoneType can't be used in 'await' expression For the line await self._writer_lock.release() in release_write() This is what I really can't understand. 
To get to this point, self._writer_lock cannot be None, because self._writer_lock.locked() is checked first. And if it actually were None, it should have thrown an error earlier (unless locked is a default method of Python objects?). Printing the variable proves its existence as an object: <asyncio.locks.Lock object at 0x70204edacbf0 [locked]> If anyone can help figure out what is going on here, to explain and fix this error, I'd appreciate the help.
In the function release_write, you've mentioned asyncio.Lock.release(). asyncio.Lock.release() is a synchronous method, but you're trying to await it. When you await it, Python tries to treat the return value (which is None) as an awaitable, causing TypeError. So remove await from release_write() At present, you're allowing readers to block writers indefinitely(the line async with self._writer_lock: in function acquire_read). When a reader acquires _writer_lock, it blocks all writers till reader releases it. So, the purpose of a read-write lock is not fulfilled (writers waiting to acquire the lock should be prioritized over readers, or they might starve) class AsyncReadWriteLock: def __init__(self): self._readers = 0 self._writer_lock = asyncio.Lock() self._readers_lock = asyncio.Lock() self._no_readers = asyncio.Event() self._no_readers.set() # Initially, no readers async def acquire_read(self): async with self._readers_lock: self._readers += 1 if self._readers == 1: # First reader blocks writers await self._writer_lock.acquire() self._no_readers.clear() async def release_read(self): async with self._readers_lock: self._readers -= 1 if self._readers == 0: # Last reader allows writers self._writer_lock.release() self._no_readers.set() async def acquire_write(self): await self._writer_lock.acquire() # Wait for existing readers to finish await self._no_readers.wait() async def release_write(self): self._writer_lock.release() New Usage of load_checkpoint() you've given in example: async def load_checkpoint() -> int: global latest_checkpoint await checkpoint_lock.acquire_write() try: # ... (file operations) finally: await checkpoint_lock.release_write() # Now works without error Edit: To clarify the person who commented about my statement 'readers should not block writers': Let me explain in detail to explain what I meant by 'readers should not block writers' correct behavior of read-write behaviour: Readers: Allow concurrent access i.e multiple readers can access the resource concurrently and no two readers should be blocked simultaneously(i.e they don’t exclude each other). Writers: Require exclusive access i.e block all readers/writers during write New readers arriving while a writer is waiting should be blocked until the writer completes. By having readers acquire _writer_lock, you force all readers to serialize (defeating concurrency) and block writers entirely until all readers release the lock. This creates: No true read concurrency: Only one reader at a time holds _writer_lock. Writer starvation: If readers keep acquiring _writer_lock, writers never get a chance.
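A small usage sketch of the corrected lock (purely illustrative): several readers run concurrently, and a writer only proceeds once they have drained:

import asyncio

async def reader(lock, shared, i):
    await lock.acquire_read()
    try:
        print(f"reader {i} sees", shared["value"])
        await asyncio.sleep(0.1)
    finally:
        await lock.release_read()

async def writer(lock, shared):
    await lock.acquire_write()
    try:
        shared["value"] += 1
    finally:
        await lock.release_write()

async def main():
    lock = AsyncReadWriteLock()  # the corrected class above
    shared = {"value": 0}
    await asyncio.gather(*(reader(lock, shared, i) for i in range(3)),
                         writer(lock, shared))

asyncio.run(main())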
2
3
79,595,929
2025-4-28
https://stackoverflow.com/questions/79595929/warm-up-huggingface-transformers-models-efficiently-to-reduce-first-token-latenc
In production deployment of Hugging Face LLMs, the first inference call often has very high latency ("cold start"), even on a machine where the model is already loaded into memory. Subsequent calls are much faster. I want to implement a model warm-up strategy that: Primes the model and GPU memory before real user requests arrive Reduces first-token generation time for users Works for both pipeline()-based and model.generate()-based inference from transformers import pipeline generator = pipeline('text-generation', model="tiiuae/falcon-7b-instruct", device=0) def generate_text(prompt): return generator(prompt, max_new_tokens=50)[0]['generated_text'] My Question: What is the best way to warm up a HuggingFace Transformers model after loading, to minimize first-token latency in production inference?
You could use a dummy inference immediately after loading the model. For pipeline: # Warmup _ = generator("Warm up prompt", max_new_tokens=1) For raw model.generate(): from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", device_map="auto", torch_dtype="auto") # Warmup inputs = tokenizer("Warm up prompt", return_tensors="pt").to(model.device) _ = model.generate(**inputs, max_new_tokens=1)
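If the model sits behind a long-running service, one possible pattern (a sketch, not the only way) is to run the dummy call once at process startup and synchronize the GPU, so kernel compilation and memory-pool growth happen before the first real request arrives:

import torch
from transformers import pipeline

generator = pipeline("text-generation", model="tiiuae/falcon-7b-instruct", device=0)

def warm_up(gen, n_calls: int = 2) -> None:
    # a couple of throwaway generations prime CUDA kernels, memory pools and the tokenizer
    for _ in range(n_calls):
        gen("Warm up prompt", max_new_tokens=1)
    if torch.cuda.is_available():
        torch.cuda.synchronize()

warm_up(generator)  # call once at startup, before serving traffic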
2
0
79,594,983
2025-4-27
https://stackoverflow.com/questions/79594983/why-does-np-fromfile-fail-when-reading-from-a-pipe
In a Python script, I've written: # etc. etc. input_file = args.input_file_path or sys.stdin arr = numpy.fromfile(input_file, dtype=numpy.dtype('f32')) when I run the script, I get: $ cat nums.fp32.bin | ./myscript File "./myscript", line 123, in main arr = numpy.fromfile(input_file, dtype=numpy.dtype('f32')) OSError: obtaining file position failed why does NumPy need the file position? And - can I circumvent this somehow?
This error happens because np.fromfile() is implemented in a fairly counterintuitive way. You might assume that this is implemented by repeatedly calling e.g. file.read(4096), then copying the resulting buffer to the appropriate place in the array. It does not work like this. Instead, it follows roughly this process:

1. Find the file descriptor number of the file object.
2. Copy that file descriptor using os.dup() in Python.
3. Find the read position within the original file by calling f.tell() in Python.
4. Set the copied file descriptor to the same read position using fseek() in C.

At the end of this process, NumPy has a C-level file that it owns, and can copy data from without the overhead of calling a Python method. It then reads the file and copies it into the array. (You may be asking why steps 3 and 4 are necessary. Doesn't copying a file descriptor copy its read position? This is true, but it won't work if the Python file is buffered, as the C-level read position and Python-level read position may not match.) To clean up this file descriptor, NumPy does the following:

1. Find the seek position of the C-level file.
2. Copy the seek position to the Python-level file.
3. Close the C-level file.

In order for np.fromfile() to work, your file-like object must support all of the following:

- It must have a file descriptor. This rules out, for example, BytesIO, because that is not backed by an OS-level file.
- It must support seek(). This rules out the use of pipes.
- It must support tell().
- It must support flush().

In practice, this rules out most file-like objects that are not really files. To learn more about this, I recommend reading the source code. And - can I circumvent this somehow? No. All four of those things are mandatory. You can work around it, however. Assuming the file f is open in binary mode, you could do f.read() and obtain a bytes object. You can then pass this object to np.frombuffer() to obtain an array.
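A sketch of that workaround for the original script, assuming the data is raw float32 (the args names follow the question's snippet):

import sys
import numpy as np

if args.input_file_path:
    # A real file: np.fromfile() can dup/seek/tell as described above.
    arr = np.fromfile(args.input_file_path, dtype=np.float32)
else:
    # A pipe: read everything sequentially, then build the array from the bytes.
    data = sys.stdin.buffer.read()
    arr = np.frombuffer(data, dtype=np.float32)  # read-only view; .copy() if you need to write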
1
1
79,596,631
2025-4-28
https://stackoverflow.com/questions/79596631/convert-month-abbreviation-to-full-name
I have this function which converts an English month abbreviation to the French month name:

def changeMonth(month):
    global CurrentMonth
    match month:
        case "Jan": return "Janvier"
        case "Feb": return "Février"
        case "Mar": return "Mars"
        case "Apr": return "Avril"
        case "May": return "Mai"
        case "Jun": return "Juin"
        case "Jul": return "Juillet"
        case "Aug": return "Août"
        case "Sep": return "Septembre"
        case "Oct": return "Octobre"
        case "Nov": return "Novembre"
        case "Dec": return "Décembre"
        # If an exact match is not confirmed, this last case will be used if provided
        case _: return ""

and I have a pandas column df["month"] = df['ic_graph']['month'].tolist(). What I'm looking for is to pass the df["month"] column through the changeMonth function so that df["month"] is displayed with French months. By the way, I do not want to use:

>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'fr_FR')
The question seems a bit unclear to me. Do you want to replace the month column in the dataframe itself? If yes, something like this can work:

def changeMonth(month):
    if month == "Jan": return "Janvier"
    elif month == "Feb": return "Février"
    elif month == "Mar": return "Mars"
    elif month == "Apr": return "Avril"
    elif month == "May": return "Mai"
    elif month == "Jun": return "Juin"
    elif month == "Jul": return "Juillet"
    elif month == "Aug": return "Août"
    elif month == "Sep": return "Septembre"
    elif month == "Oct": return "Octobre"
    elif month == "Nov": return "Novembre"
    elif month == "Dec": return "Décembre"
    # If an exact match is not confirmed, this last branch is used
    else: return ""

df['month'] = df['month'].apply(changeMonth)

But if you want to change the months while just printing the list out, you can use a list comprehension:

french_month_list = [changeMonth(i) for i in df['ic_graph']['month'].tolist()]
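As a side note (not part of the answer's approach above), the same lookup could be kept as a plain dict and applied with pandas' own map, which avoids the long branch chain; unknown abbreviations fall back to an empty string as in the original function:

FR_MONTHS = {
    "Jan": "Janvier", "Feb": "Février", "Mar": "Mars", "Apr": "Avril",
    "May": "Mai", "Jun": "Juin", "Jul": "Juillet", "Aug": "Août",
    "Sep": "Septembre", "Oct": "Octobre", "Nov": "Novembre", "Dec": "Décembre",
}

df["month"] = df["month"].map(FR_MONTHS).fillna("")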
3
0
79,596,399
2025-4-28
https://stackoverflow.com/questions/79596399/how-to-log-display-app-name-host-and-port-upon-startup
In my simple Flask app, I've created the following .flaskenv file: FLASK_APP=simpleflask.app FLASK_RUN_HOST=127.0.0.1 FLASK_RUN_PORT=60210 Now, I would like my app to log the following message upon startup: Running my.flask.app on http://127.0.0.1:60210 ... The message should be logged just once, not per each request. Preferably, I would like to avoid parsing .flaskenv on my own, but rather use some internal Flask objects and obtain this information dynamically. Any idea how to achieve this?
import os

from flask import Flask

app = Flask(__name__)

# ...

with app.app_context():
    app.logger.info(f"Running {os.environ.get('FLASK_APP')} on http://{os.environ.get('FLASK_RUN_HOST')}:{os.environ.get('FLASK_RUN_PORT')} ...")
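For context, a fuller minimal sketch of where this could live; the logging.basicConfig call is an assumption, added because the default logger level may otherwise swallow INFO messages. Since the log statement runs at import time, it is emitted once, not per request:

import logging
import os

from flask import Flask

logging.basicConfig(level=logging.INFO)

app = Flask(__name__)

app.logger.info(
    "Running %s on http://%s:%s ...",
    os.environ.get("FLASK_APP"),
    os.environ.get("FLASK_RUN_HOST"),
    os.environ.get("FLASK_RUN_PORT"),
)

@app.route("/")
def index():
    return "Hello"

This relies on flask run having loaded .flaskenv into the environment, which it does when python-dotenv is installed.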
1
0
79,596,838
2025-4-28
https://stackoverflow.com/questions/79596838/scipy-sparse-one-subarray-at-many-locations-in-a-larger-array
Say I have a sparse subarray whose contents and shape are known: import scipy.sparse as sp sub = sp.coo_array([[a, b], [c, d]]) I'd like to place this subarray at many locations, according to some known pattern, in a larger sparse array of arbitrary size. The code can compute the size of the large array (say NxN) but the numerical value of N isn't known to me. Using our contrived NxN and an arbitrary pattern, if N=4, the end result might be: [[a, b, a, b] [c, d, c, d] [0, 0, a, b] [0, 0, c, d]] and for N=8: [[a, b, a, b, 0, 0, 0, 0] [c, d, c, d, 0, 0, 0, 0] [0, 0, a, b, a, b, 0, 0] [0, 0, c, d, c, d, 0, 0] [0, 0, 0, 0, a, b, a, b] [0, 0, 0, 0, c, d, c, d] [0, 0, 0, 0, 0, 0, a, b] [0, 0, 0, 0, 0, 0, c, d]] N can be huge (tens of thousands), therefore a dense array is not computationally manageable, and it's not practical to manually specify each of the locations. sp.block_array appears to require specifying placement manually. sp.block_diag gets close to the contrived example but doesn't handle an arbitrary pattern. This question is similar but a) uses numpy, not scipy.sparse, and b) doesn't address placing the same subarray many times. This answer is related but only practical for arrays where N is small: in my case, alist would be huge and mostly full of None, even if I use a loop as suggested, which defeats the purpose of sparse arrays. An ideal solution might have a list of array indices at which the top left corner of the submatrix is placed. It should work with any pattern, even a bunch of random locations within NxN. I'm not very experienced with scipy.sparse but it seems like there should be a "good" way to do this sort of operation.
Thanks to Reinderien's comment, I was able to figure this out - I had no idea what a Kronecker product was until now. sp.kron does exactly what I want, with the added benefit of being able to multiply each block by a coefficient. For the contrived example, the code to specify the pattern would be: import scipy.sparse as sp import numpy as np # Setup subarray and big array parameters a, b, c, d = 1, 2, 3, 4 sub = sp.coo_array([[a, b], [c, d]]) N = 8 # Setup block locations for our arbitrary pattern row_idx = np.hstack((np.arange(N/sub.shape[0], dtype=int), np.arange(N/sub.shape[0]-1, dtype=int))) col_idx = np.hstack((np.arange(N/sub.shape[1], dtype=int), np.arange(N/sub.shape[0]-1, dtype=int)+1)) coeff = np.ones_like(row_idx) # Multiply blocks by coefficients here locs = sp.csc_array((coeff, (row_idx, col_idx))) # Array of coefficients at specified locations # Not necessary, but shows what's going on. print(f'Placing block top left corners at rows{row_idx*sub.shape[0]}, cols {col_idx*sub.shape[1]}') Actually creating the sparse array is a one-liner once the locations and subarray are specified: arr = sp.kron(locs, sub) print(arr.toarray()) yields: [[1 2 1 2 0 0 0 0] [3 4 3 4 0 0 0 0] [0 0 1 2 1 2 0 0] [0 0 3 4 3 4 0 0] [0 0 0 0 1 2 1 2] [0 0 0 0 3 4 3 4] [0 0 0 0 0 0 1 2] [0 0 0 0 0 0 3 4]] This implementation... Is extensible to any pattern, even a random one. Accepts a pattern that is extensible to a very large N, locations are not manually set. Doesn't require creation of a dense array/list mostly filled with None Is easier than computing the indices of each element of the subarray.
1
3
79,596,227
2025-4-28
https://stackoverflow.com/questions/79596227/finding-numerical-relationships-between-columns
I have selected a subset of numerical columns from a database and I want to iterate through the columns selecting a target_column and comparing it with the result of a numerical operation between two other columns in the dataframe. However, I am unsure as to how to compare the result (e.g. col1 * col2 = target_column). # For all possible combinations of numeric columns for col1, col2 in combinations(numeric_cols, 2): # For a target column in numeric_columns for target_column in numeric_cols: # Skip if the target column is one of the relationship columns if target_column in (col1, col2): continue Edit: I have worked something out, but I'm still unsure if this is the most efficient way to do it def analyse_relationships(df): numeric_cols = df.select_dtypes(include=[np.number]) threshold = 0.001 relationships = [] # For all possible combinations of numeric columns for col1, col2 in combinations(numeric_cols, 2): # For a target column in numeric_columns for target_column in numeric_cols: # Skip if the target column is one of the relationship columns if target_column in (col1, col2): continue # Calculate different operations product = numeric_cols[col1] * numeric_cols[col2] sum_cols = numeric_cols[col1] + numeric_cols[col2] diff = numeric_cols[col1] - numeric_cols[col2] if np.allclose(product, numeric_cols[target_column], rtol=threshold): relationships.append(f"{col1} * {col2} = {target_column}") elif np.allclose(sum_cols, numeric_cols[target_column], rtol=threshold): relationships.append(f"{col1} + {col2} = {target_column}") elif np.allclose(diff, numeric_cols[target_column], rtol=threshold): relationships.append(f"{col1} - {col2} = {target_column}")
To solve your problem I strongly suggest you to vectorize your data and use as few Pandas operations as possible, since a lot of operations are required and, consequently, the more we can rely solely on NumPy the faster the code will run (NumPy's core is written in C). Since there are multiple operations to take in account (+, -, *, /) we need to calculate each result and compare it with target_column, but we can be more space efficient by using a boolean mask (i.e. a NumPy array), that is a column that represents, as a boolean value, the expression target_column == col1 operation col2. Note that, having possible float in the df, it's actually better to use numpy.isclose() to which we can give a treshold for the float operations. Your code could be something along these lines (I included a little example at the end): import numpy as np import pandas as pd from itertools import combinations def detect_relations(df, numeric_cols, float_tolerance=1e-8): results = [] ops = { "+": [(lambda x, y: x + y, "a + b")], # No need to have `b + a` since sum is commutative "-": [ (lambda x, y: x - y, "a - b"), (lambda x, y: y - x, "b - a") ], "*": [(lambda x, y: x * y, "a * b")], # Same as sum, `a * b = b * a` "/": [ # We need to ensure that we don't divide by 0 (lambda x, y: np.divide(x, y, out=np.full_like(x, np.nan, dtype=float), where=(y != 0)), "a / b"), (lambda x, y: np.divide(y, x, out=np.full_like(x, np.nan, dtype=float), where=(x != 0)), "b / a") ], } # Iterating trough every combination for col1, col2 in combinations(numeric_cols, 2): a = df[col1].values b = df[col2].values # Iterating trough each possible operation for _, functions in ops.items(): for func, op_name in functions: # Calculating the result of the operation `func` between col1 and col2 val = func(a, b) # Confronting the result of `col1 operation col2` to the other numeric columns (avoiding col1 and col2) for target in numeric_cols: if target in (col1, col2): continue c = df[target].values # We get a boolean mask with the comparison between `col1 operation col2` and `target_column` mask = np.isclose(val, c, atol=float_tolerance, equal_nan=False) # Counting how many relations we found matches = int(mask.sum()) total = len(df) results.append({ "col1": col1, "col2": col2, "operation": op_name, "target": target, "matches": matches, "total": total, "pct_match": matches / total }) return pd.DataFrame(results) # --- Example--- df = pd.DataFrame({ "a": [1, 2, 3, 4], "b": [2, 2, 2, 2], "c": [2, 4, 6, 8], "d": [2, 0, 1, 8], }) numeric_cols = ["a", "b", "c", "d"] res = detect_relations(df, numeric_cols) # Avoid to print combinations with no relation print(res[res["matches"] > 0].sort_values("pct_match", ascending=False)) Output: col1 col2 operation target matches total pct_match 6 a b a * b c 4 4 1.00 22 a c b / a b 4 4 1.00 46 b c b / a a 4 4 1.00 7 a b a * b d 2 4 0.50 48 b d a + b a 2 4 0.50 26 a d a - b b 2 4 0.50 58 b d b / a a 2 4 0.50 34 a d b / a b 2 4 0.50 3 a b a - b d 2 4 0.50 5 a b b - a d 1 4 0.25 0 a b a + b c 1 4 0.25 19 a c a * b d 1 4 0.25 18 a c a * b b 1 4 0.25 16 a c b - a b 1 4 0.25 11 a b b / a d 1 4 0.25 10 a b b / a c 1 4 0.25 30 a d a * b b 1 4 0.25 24 a d a + b b 1 4 0.25 23 a c b / a d 1 4 0.25 40 b c b - a a 1 4 0.25 35 a d b / a c 1 4 0.25 31 a d a * b c 1 4 0.25 44 b c a / b a 1 4 0.25 50 b d a - b a 1 4 0.25 56 b d a / b a 1 4 0.25 68 c d a / b a 1 4 0.25 70 c d b / a a 1 4 0.25 Complexity analysis I'd like to point out that the code above has a certain time complexity and could be slow when the number of 
elements is large. The code above has to go through a number of nested loops: through every combination of numeric_cols, that is (m choose 2) = m(m-1)/2 ~ O(m^2); through every possible target, i.e. O(m - 2) = O(m); and the comparison made by numpy.isclose() cycles through every element n, so it has complexity of O(n). Considering those loops, it's easy to see that the code has time complexity of O(m^2 * m * n) = O(n * m^3), so the time required will increase cubically w.r.t. the number of numerical columns and linearly w.r.t. the number of elements in each column.
1
1
79,596,493
2025-4-28
https://stackoverflow.com/questions/79596493/generate-rtdb-key-without-creating-node
I'm trying to create a Python function that generates a valid key in my Firebase Realtime Database (using the same logic as other keys, ensuring no collision and chronological order). Something like this: def get_new_key() -> str: return db.reference(app=firebase_app).push().key print(get_new_key()) # output: '-OOvws9uQDq9Ozacldjr' However, doing this actually creates the node with an empty string value, as specified in the doc. Is there any way to grab the key without adding anything to the database? My only thought right now would be to reimplement myself their key generation algorithm, but this looks overkill and not future-proof if this algorithm changes.
You're right, each time you call push(value='') a child node in the Realtime Database will be created. If you are calling the function without passing a value, the default string that will be used for writing the data will be an empty string. As far as I know, the Firebase SDK for Python does not publicly expose a push() function that only generates the IDs. So the only option that you have would be to use the algorithm that is present below: https://gist.github.com/mikelehen/3596a30bd69384624c11 It is written in JavaScript, but I think that you can adapt it very easily to Python. This algorithm is very stable, and I don't think that it will change because many clients (Android, iOS, Web) depend on it. Here's a Python implementation of Firebase's generatePushID() (not tested):

import time
import random

PUSH_CHARS = '-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz'

def generate_push_id():
    now = int(time.time() * 1000)
    time_stamp_chars = []
    for _ in range(8):
        time_stamp_chars.append(PUSH_CHARS[now % 64])
        now //= 64
    time_stamp_chars.reverse()
    push_id = ''.join(time_stamp_chars)
    for _ in range(12):
        push_id += random.choice(PUSH_CHARS)
    return push_id
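Once you have such an ID you could use it with the Admin SDK along these lines (a sketch; the 'items' path is made up, and firebase_app is the app object from your question):

from firebase_admin import db

new_key = generate_push_id()                 # generated locally, nothing written yet
ref = db.reference("items", app=firebase_app)
ref.child(new_key).set({"name": "example"})  # the node is only created here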
2
2
79,596,418
2025-4-28
https://stackoverflow.com/questions/79596418/django-the-page-refreshed-when-i-click-on-import-file-and-no-message-appears
i am working on a django powered web app, and i want to customize admin view of one of my models. i have made a custom template for the add page and overrides save function in admin class to process input file before saving. here i have the admin class of RMABGD: @admin.register(RMABGD) class RMABGDAdmin(BaseModelAdmin): list_display = ('name', 'code_RMA', 'type_BGD', 'Partenaire', 'date_creation', 'RMA_BGD_state') list_filter = ('type_BGD', 'RMA_BGD_state', 'city') search_fields = ('name', 'code_RMA', 'Partenaire') add_form_template = "admin/spatial_data/RMABGD/change_form.html" change_form_template = "admin/spatial_data/RMABGD/change_form.html" def process_excel_import(self, request): excel_file = request.FILES.get('excel_file') if not excel_file: messages.error(request, "No file was selected. Please choose an Excel file.") return False try: df = pd.read_excel(excel_file) required_headers = ["code RMA", "code ACAPS", "Dénomination RMA", "Ville", "Adresse", "Longitude", "Latitude", "Type BGD", "Partenaire", "Date création", "Etat BGD RMA"] missing_headers = [header for header in required_headers if header not in df.columns] if missing_headers: messages.error(request, f"Missing required fields: {', '.join(missing_headers)}") return False else: # If all headers are correct, process data rows_imported = 0 errors = 0 for index, row in df.iterrows(): try: # Process row data obj = RMABGD( code_ACAPS=row["code ACAPS"], code_RMA=row["code RMA"], name=row["Dénomination RMA"], address=row["Adresse"], city=row["Ville"], location=f'POINT({row["Longitude"]} {row["Latitude"]})', type_BGD=row["Type BGD"], Partenaire=row["Partenaire"], date_creation=row["Date création"], RMA_BGD_state=row["Etat BGD RMA"] ) obj.save() rows_imported += 1 except Exception as e: messages.error(request, f"Error in row {index + 1}: {str(e)}") errors += 1 if rows_imported > 0: messages.success(request, f"Successfully imported {rows_imported} rows") return True if errors > 0: messages.warning(request, f"Failed to import {errors} rows. See details above.") if rows_imported == 0: messages.error(request, "No rows were imported. Please check your file and try again.") return rows_imported > 0 except Exception as e: messages.error(request, f"Error processing file: {str(e)}") return False def save_model(self, request, obj, form, change): self.process_excel_import(request) super().save_model(request, obj, form, change) and this is the corresponding template for add: {% extends "admin/base_site.html" %} {% load i18n admin_urls static %} {% block content %} <div id="content-main"> {% if messages %} <ul class="messagelist"> {% for message in messages %} <li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li> {% endfor %} </ul> {% endif %} <form action="." 
method="post" enctype="multipart/form-data"> {% csrf_token %} <div> <fieldset class="module aligned"> <div class="form-row"> <div class="fieldBox"> <label for="id_excel_file" class="required">Excel File:</label> <input type="file" name="excel_file" id="id_excel_file" accept=".xlsx,.xls" required> <div class="help">Upload an Excel file with the required columns</div> </div> </div> </fieldset> <div class="help-text"> <p><strong>{% trans 'Required fields in the imported file:' %}</strong></p> <ul> <li>code RMA</li> <li>code ACAPS</li> <li>Dénomination RMA</li> <li>Ville</li> <li>Adresse</li> <li>Longitude</li> <li>Latitude</li> <li>Type BGD</li> <li>Partenaire</li> <li>Date création</li> <li>Etat BGD RMA</li> </ul> </div> <div class="submit-row"> <input type="submit" value="{% trans 'Import Excel' %}" class="default" name="_import_file"> </div> </div> </form> </div> {% endblock %} when I click import file, the page refreshes and no message is displayed as if the process file function isn't executed
Problem: Your custom <form> never hits Django admin’s add_view/changeform_view, so save_model/process_excel_import isn’t called and no messages ever display. Option 1: Override the Admin View Intercept your “Import Excel” POST, run process_excel_import, then redirect so messages render: from django.contrib import admin, messages from django.http import HttpResponseRedirect import pandas as pd from .models import RMABGD @admin.register(RMABGD) class RMABGDAdmin(admin.ModelAdmin): add_form_template = 'admin/spatial_data/RMABGD/change_form.html' change_form_template = add_form_template # list_display, list_filter, search_fields … def process_excel_import(self, request): f = request.FILES.get('excel_file') if not f: messages.error(request, "Please choose an Excel file.") return False try: df = pd.read_excel(f) required = ["code RMA","code ACAPS","Dénomination RMA","Ville", "Adresse","Longitude","Latitude","Type BGD", "Partenaire","Date création","Etat BGD RMA"] missing = [h for h in required if h not in df.columns] if missing: messages.error(request, f"Missing columns: {', '.join(missing)}") return False imported = 0 for i, row in df.iterrows(): try: RMABGD.objects.create( code_ACAPS=row["code ACAPS"], code_RMA=row["code RMA"], name=row["Dénomination RMA"], address=row["Adresse"], city=row["Ville"], location=f'POINT({row["Longitude"]} {row["Latitude"]})', type_BGD=row["Type BGD"], Partenaire=row["Partenaire"], date_creation=row["Date création"], RMA_BGD_state=row["Etat BGD RMA"] ) imported += 1 except Exception as e: messages.error(request, f"Row {i+1}: {e}") if imported: messages.success(request, f"Imported {imported} rows") else: messages.warning(request, "No rows were imported") return imported > 0 except Exception as e: messages.error(request, f"Error processing file: {e}") return False def changeform_view(self, request, object_id=None, form_url='', extra_context=None): if request.method == 'POST' and '_import_file' in request.POST: self.process_excel_import(request) return HttpResponseRedirect(request.path) return super().changeform_view(request, object_id, form_url, extra_context) If you only need it on the Add page, override add_view instead of changeform_view. Option 2: Use django-import-export A battle-tested library that adds Import/Export buttons with preview, validation and messages: Install pip install django-import-export Enable in settings.py INSTALLED_APPS += ['import_export'] Define Resource & Admin from import_export import resources from import_export.admin import ImportExportModelAdmin from .models import RMABGD class RMABGDResource(resources.ModelResource): class Meta: model = RMABGD fields = ( 'code_ACAPS','code_RMA','name','address','city', 'location','type_BGD','Partenaire','date_creation', 'RMA_BGD_state', ) @admin.register(RMABGD) class RMABGDAdmin(ImportExportModelAdmin): resource_class = RMABGDResource # list_display, list_filter, search_fields … Use Visit your model in the admin: you’ll now see Import/Export buttons, handle .xlsx/.xls, preview rows and errors, and get automatic Django-style messages. Recommendation Quick fix: go with Option 1. Long-term/scale: prefer Option 2 (django-import-export) for best UX and maintainability.
1
2
79,595,840
2025-4-28
https://stackoverflow.com/questions/79595840/why-doesnt-multiprocessing-process-start-in-python-guarantee-that-the-process
Here is a code to demo my question: from multiprocessing import Process def worker(): print("Worker running") if __name__ == "__main__": p = Process(target=worker) p.start() input("1...") input("2...") p.join() Note, ran on Python 3.13, Windows x64. And the output I got is (after inputting Enter twice): 1... 2... Worker running Process finished with exit code 0 From the output, we can see the process actually initialized and started to run after the 2nd input. While I thought start() should block and guarantee the child process is fully initialized. Is this a normal behavior of Python multiprocessing? Because if Threading is used here instead, this issue seldom occur. I always get the thread run before the line input("1..."). May I ask, if Process.start() doesn't guarantee the process is fully-started, how should we code to ensure the child process is actually running before proceeding in the parent?
This is normal behaviour, and it's usually exactly what you want when you choose multiprocessing over, say, threading, i.e., the processes continue in parallel and do not block each other. As mentioned in the comments, here's an example how you can make sure the worker is running before proceeding: import time from multiprocessing import Process, Event def worker(start_event): print("Worker started") start_event.set() print("Worker is doing some work") time.sleep(2) if __name__ == "__main__": start_event = Event() p = Process(target=worker, args=(start_event,)) p.start() start_event.wait() print("Worker has started. Continuing main process.") print("Waiting for worker to finish") p.join() A common pattern, however, is to communicate with the worker via a work queue and a stop event (or some other means) to tell it to shut down.
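A sketch of that queue-plus-stop-event pattern (names and timings here are illustrative): the worker drains the queue and only exits once it has been told to stop and there is nothing left to process:

import queue
from multiprocessing import Event, Process, Queue

def worker(work_queue, stop_event):
    while True:
        try:
            item = work_queue.get(timeout=0.5)  # wake up periodically to check the event
        except queue.Empty:
            if stop_event.is_set():             # asked to stop and nothing left to do
                break
            continue
        print("processing", item)

if __name__ == "__main__":
    work_queue = Queue()
    stop_event = Event()
    p = Process(target=worker, args=(work_queue, stop_event))
    p.start()

    for i in range(3):
        work_queue.put(i)

    stop_event.set()  # ask the worker to finish up
    p.join()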
1
3
79,595,836
2025-4-28
https://stackoverflow.com/questions/79595836/generating-key-value-map-from-aggregates
I have raw data that appears like this: ┌─────────┬────────┬─────────────────────┐ │ price │ size │ timestamp │ │ float │ uint16 │ timestamp │ ├─────────┼────────┼─────────────────────┤ │ 1697.0 │ 11 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 5 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 5 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 5 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 5 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 4 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 1 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 1 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 1 │ 2009-09-27 18:00:00 │ │ 1697.5 │ 3 │ 2009-09-27 18:00:00 │ │ 1697.5 │ 2 │ 2009-09-27 18:00:00 │ │ 1697.0 │ 1 │ 2009-09-27 18:00:00 │ │ 1698.0 │ 1 │ 2009-09-27 18:00:01 │ │ 1698.25 │ 1 │ 2009-09-27 18:00:01 │ │ 1698.25 │ 10 │ 2009-09-27 18:00:02 │ │ 1698.25 │ 4 │ 2009-09-27 18:00:02 │ │ 1697.25 │ 6 │ 2009-09-27 18:00:02 │ │ 1697.25 │ 2 │ 2009-09-27 18:00:02 │ │ 1697.0 │ 28 │ 2009-09-27 18:00:02 │ │ 1697.25 │ 6 │ 2009-09-27 18:00:03 │ ├─────────┴────────┴─────────────────────┤ │ 20 rows 3 columns │ Using DuckDB, I wanted to create histograms for each timestamp, both the price and size. My attempt: vp = conn.query(f""" SET enable_progress_bar = true; SELECT timestamp, histogram(price) FROM 'data/tickdata.parquet' GROUP BY timestamp ORDER BY timestamp """) This produces the following: ┌─────────────────────┬─────────────────────────────────────────────────────────────────┐ │ timestamp │ histogram(price) │ │ timestamp │ map(float, ubigint) │ ├─────────────────────┼─────────────────────────────────────────────────────────────────┤ │ 2009-09-27 18:00:00 │ {1697.0=10, 1697.5=2} │ │ 2009-09-27 18:00:01 │ {1698.0=1, 1698.25=1} │ │ 2009-09-27 18:00:02 │ {1697.0=1, 1697.25=2, 1698.25=2} │ │ 2009-09-27 18:00:03 │ {1696.0=2, 1696.5=2, 1697.0=2, 1697.25=1} │ │ 2009-09-27 18:00:04 │ {1696.0=2, 1696.25=2, 1696.75=1, 1697.0=1, 1697.25=3, 1697.5=1} At first glance, it "appears correct", however, the "values" associated with each key are not the SUM of the size but the COUNTs of the size. What I would expect to see: ┌─────────────────────┬─────────────────────────────────────────────────────────────────┐ │ timestamp │ histogram(price) │ │ timestamp │ map(float, ubigint) │ ├─────────────────────┼─────────────────────────────────────────────────────────────────┤ │ 2009-09-27 18:00:00 │ {1697.0=39, 1697.5=5} │ │ 2009-09-27 18:00:01 │ {1698.0=1, 1698.25=1} │ │ 2009-09-27 18:00:02 │ {1697.0=28, 1697.25=8, 1698.25=14} Alternatively: I am able to generate the following table, but unsure if there is a way I can map it into the above example? ┌─────────────────────┬─────────┬───────────┐ │ timestamp │ price │ sum(size) │ │ timestamp │ float │ int128 │ ├─────────────────────┼─────────┼───────────┤ │ 2009-09-27 18:00:00 │ 1697.0 │ 39 │ │ 2009-09-27 18:00:00 │ 1697.5 │ 5 │ │ 2009-09-27 18:00:01 │ 1698.0 │ 1 │ │ 2009-09-27 18:00:01 │ 1698.25 │ 1 │ │ 2009-09-27 18:00:02 │ 1698.25 │ 14 │ │ 2009-09-27 18:00:02 │ 1697.25 │ 8 │ │ 2009-09-27 18:00:02 │ 1697.0 │ 28 │
Use this query to calculate the total size (sum_size) for each price at every timestamp in your dataset. WITH aggregated_data AS ( SELECT timestamp, price, SUM(size) AS sum_size FROM tickdata GROUP BY timestamp, price ) SELECT timestamp, MAP(ARRAY_AGG(price), ARRAY_AGG(sum_size)) AS histogram FROM aggregated_data GROUP BY timestamp ORDER BY timestamp;
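Plugged back into the Python snippet from the question, this might look like the following (a sketch; the parquet path follows the question):

vp = conn.query("""
    WITH aggregated_data AS (
        SELECT timestamp, price, SUM(size) AS sum_size
        FROM 'data/tickdata.parquet'
        GROUP BY timestamp, price
    )
    SELECT timestamp,
           MAP(ARRAY_AGG(price), ARRAY_AGG(sum_size)) AS histogram
    FROM aggregated_data
    GROUP BY timestamp
    ORDER BY timestamp
""")
print(vp)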
1
1
79,593,802
2025-4-26
https://stackoverflow.com/questions/79593802/notimplementederror-gets-triggered-unexpectedly-when-tring-to-define-an-abstr
This question is related to a question about the implementation of class property in Python which I have asked yesterday. I received a working solution and wrote my classProp and ClassPropMeta as the solution suggests. However, when I want to use class property together with @abstractmethod, problem occurs. Below is a minimum example of the problem I have encountered. I defined my ClassPropMeta to extend ABCMeta because I want to avoid metaclass conflict and use both the features of abstract method decorator and class property to define an abstract class property, which means that all concrete subclasses of the class with abstract class property should have the property defined. In the example, A, B are abstract classes that should have a class property called X, while C is the concrete class where the value of X is defined. When running this code, I found that the NotImplementedError in the abstract method gets triggered unexpectedly. Further tracing the source by printing the cls value before the error is raised, I found that the value of cls points to class B. Then I set a breakpoint in the method and executed the command w in Pdb. The call stack output is attached after the code. From my analysis, it seems like that B.X has been called somewhere in the program. So can it be explained where does the B.X access take place? From further investigation the stack trace points to line 107 of the abc module, which executed the _abc_init() function from the C implementation of _abc module. However till this point, due to my limited knowledge of CPython's internals, I cannot locate where the access takes place. from abc import ABCMeta, abstractmethod from typing import Any class classProp[T, P](property): def __get__(self, instance: T, owner: type[T] = None) -> P: return self.fget(owner) class ClassPropMeta(ABCMeta): def __setattr__(cls, name: str, value: Any): if isinstance(desc := vars(cls).get(name), classProp) and not callable( desc.fset ): raise AttributeError("can't set attribute") return super().__setattr__(name, value) class A(metaclass=ClassPropMeta): __slots__ = () @classProp @abstractmethod def X(cls): # breakpoint() raise NotImplementedError class B(A): ... # @classProp # def X(cls): # return 1 class C(B): @classProp def X(cls): return 1 print(C.X) # result: NotImplementedError gets triggered > d:\path\to\my\example.py(31)X() -> breakpoint() (Pdb) w d:\path\to\my\example.py(35)<module>() -> class B(A): <frozen abc>(107)__new__() d:\path\to\my\example.py(13)__get__() -> return self.fget(owner) > d:\path\to\my\example.py(31)X() -> breakpoint()
This is slightly tricky at first glance. The problem occurs during class creation, when Python runs through the initialization of your B class. At this point, the __new__() method of the ABCMeta class needs to figure out which methods are abstract, and it does so by actually accessing the X descriptor on B. That in turn triggers classProp.__get__(), which in turn tries to call the implementation. But since there is no implementation there, the NotImplementedError is raised. To work around this you would need to make classProp smarter and aware of abstract methods, so that in your __get__() implementation you check whether the class you're accessing has implemented the abstract method, and if not, return the abstract method instead of calling it:

class classProp[T, P](property):
    def __get__(self, instance: T, owner: type[T] = None) -> P:
        if getattr(self.fget, "__isabstractmethod__", False) and owner is not None:
            if not hasattr(owner, "__abstractmethods__"):
                return self.fget
            if self.fget.__name__ in getattr(owner, "__abstractmethods__", set()):
                return self.fget
        return self.fget(owner)
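With that change in place, a quick check against the hierarchy from the question should behave as expected (illustrative, based on the classes A, B and C defined there):

print(C.X)                    # 1 -- the concrete override is called with the owner class
print(A.X)                    # <function A.X at 0x...> -- abstract, returned uncalled instead of raising
print(B.__abstractmethods__)  # frozenset({'X'}) -- B still cannot be instantiated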
1
1
79,594,903
2025-4-27
https://stackoverflow.com/questions/79594903/gathering-and-change-multiple-excel-files-columns-based-on-another-excel-file-wi
I'm working on a work project that requires the data of multiple excel files, which needed to be upload to the database. There is another file name "table_name" that contains the current columns data name, with the file name as the sheet name. I'm trying to code in python in order to change the columns name based on "table_name" Here is the table_name file that i want to change the name: Sheet 1 name (chung): Old New chung1 new_chung1 chung2 new_chung2 chung3 new_chung3 Sheet 2 (nganh): Old New nganh1 new_nganh1 nganh2 new_nganh2 nganh3 new_nganh3 The files i want to change: File 1 (phu_luc_chung): chung1 chung2 chung3 1 2 3 4 5 6 7 8 9 File 2 (phu_luc_nganh): nganh1 nganh2 nganh3 1 2 3 4 5 6 7 8 9 Here is the code chunk that i used to carry out the task: new_name = "table_name.xlsx" path = "E:\folder" # This code I'm trying to open every sheets that correspond to the data file name_n = pd.ExcelFile(new_name, engine='openpyxl') list = glob.glob(os.path.join(path, '*.xlsx')) # The code below is meant to get the "table_name" sheets as list sheets = name_n.sheet_names for file in list: # getting through each file for sh in sheets: # getting through each sheets df = pd.read_excel(file, engine='openpyxl') # the columns name has some uppercase so for easier replacement # I lowercase those names: df.columns = [col.lower() for col in df.columns] # In this part I'm trying to get the dictionary in "table_name" sheet # that correspond to the file in list that was called above: the_name = pd.read_excel(new_name, sheet_name = sh) renamee = dict(zip(the_name["Old"], the_name["New"])) new_df = df.rename(columns = renamee) #change the name according to the dict print(new_df.head(10)) Currently, the code could go through every sheets in the "table_name" and made it a dictionary with each sheet correspond to each file. But the code is not finding the right sheets meant for the specific files. How can I mark the sheets name and link it with the files name, and also some files have are in same categories like "phu_luc_chung1", "phu_luc_chung2", "phu_luc_chungdraft" (they are datasets of chung) due to terrible data management.
Still not 100% on what your doing but perhaps this will help; From your updated details seems the XLSX file "table_name.xlsx" has two sheets with the name changes; 'chung1' & 'nganh' and you want to apply the changes to all XLSX files in the List 'list' So perhaps what you want to do is create one dictionary 'renamee_dict' with all the name changes read from all the sheets in "table_name.xlsx". The changed code creates the 'renamee_dict' by reading all the Sheets in "table_name.xlsx" so it looks like; rename_dict = {'chung1': 'new_chung1', 'chung2': 'new_chung2', 'chung3': 'new_chung3', 'nganh1': 'new_nganh1', 'nganh2': 'new_nganh2', 'nganh3': 'new_nganh3'} Then apply this dictionary to all the XLSX files in the List so whatever the Sheet Header name is the new name will be applied from the one dictionary if it applies. import glob import os import pandas as pd new_name = "table_name.xlsx" path = "E:\folder" # Read all the name changes from all the Sheets in new_name renamee_dict = {} sheets_dict = pd.read_excel(new_name, engine="openpyxl", sheet_name=None) for sheet_name, name_n in sheets_dict.items(): renamee_dict.update(dict(zip(name_n["Old"], name_n["New"]))) list = glob.glob(os.path.join(path, '*.xlsx')) for file in list: # getting through each file df = pd.read_excel(file, engine='openpyxl') # the columns name has some uppercase so for easier replacement # I lowercase those names: df.columns = [col.lower() for col in df.columns] new_df = df.rename(columns=renamee_dict) #change the name according to the dict print(new_df.head(10))
2
1
79,595,753
2025-4-28
https://stackoverflow.com/questions/79595753/sql-alchemy-generates-integer-instead-of-int-for-sql-lite
Python SQLAlchemy generates a table with VARCHAR for SQLite instead of INTEGER, so a SELECT against SQLite ordered by id sorts alphabetically rather than numerically. Given a City table in SQLite generated from SQLAlchemy: class City(Base): __tablename__ = "city" id: Mapped[int] = mapped_column(Integer, primary_key=True, index=True) city_name: Mapped[str] = mapped_column(String, index=True) It generates: Data: 15470 Paris 100567 Paris Query: select(City).where(City.city_name == city_name.upper()).order_by(City.id).limit(1) SELECT city.id, city.city_name FROM city WHERE city.city_name = :city_name_1 ORDER BY city.id LIMIT :param_1 returns city id 100567 instead of 15470. Do I need to provide an example with the SQL insert as well?
In SQLite, if the id column is stored as TEXT instead of INTEGER, the ORDER BY will sort alphabetically, not numerically. That's why '100567' can come before '15470'. To fix it, you should cast id to an integer when ordering: SELECT id, city_name FROM city WHERE city_name = 'Paris' ORDER BY CAST(id AS INTEGER) LIMIT 1;
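If you want to express that cast on the SQLAlchemy side rather than in raw SQL, a minimal sketch reusing the question's City model and city_name variable (an illustration, not a tested fix) could look like this:

from sqlalchemy import Integer, cast, select

stmt = (
    select(City)
    .where(City.city_name == city_name.upper())
    .order_by(cast(City.id, Integer))  # force numeric ordering despite TEXT storage
    .limit(1)
)

The cleaner long-term fix is to recreate the table so that id really is stored as INTEGER, at which point the plain order_by(City.id) from the question works as expected.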
1
1
79,595,744
2025-4-28
https://stackoverflow.com/questions/79595744/why-does-init-requires-an-explicit-self-as-an-argument-when-calling-it-as-ba
I know of two ways to call a superclass's constructor from a derived class: super().__init__() and base.__init__(self). Why does the second require me to explicitly supply self as an argument? Here's a snippet that demonstrates the difference: class base: def __init__(self,/): print("initializing") class der(base): def __init__(self): super().__init__() #no error base.__init__(self) #no error base.__init__() #error, self required obj = der() The third constructor call throws this error: File "demo.py", line 11, in <module> obj = der() ^^^^^ File "demo.py", line 9, in __init__ base.__init__() #error, should be: base.__init__(self) ^^^^^^^^^^^^^^^ TypeError: base.__init__() missing 1 required positional argument: 'self' I expect base.__init__() to work since super().__init__() doesn't require an explicit self.
TL;DR: super() is an instance. der is a class. To understand this properly, you have to understand how instance methods in Python work. When you write an instance method, you have to include self as the first parameter like this: class Foo: def bar(self): ... This is because when you call an instance method, Python automatically supplies the instance itself as the method's first argument under the hood. These two lines will do the same thing: x.bar() Foo.bar(x) The only difference is that the first line calls bar from an instance and the second calls bar from a class. In your example, base is a class (the equivalent of Foo) instead of an instance (the equivalent of x). Thus, if you want to call the base constructor, you need to manually supply self - Python won't do it for you. The super version doesn't need that because super isn't the class base - it's a proxy for the instance self that pretends it's of type der instead of type base. That means super() is still an instance, not a class, so super().__init__() will automatically supply self as a parameter to base.__init__ and you don't need to do it manually.
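A minimal sketch tying this back to the question's classes (nothing new, just the two working call forms annotated):

class base:
    def __init__(self):
        print("initializing")

class der(base):
    def __init__(self):
        super().__init__()    # super() is a proxy *instance*, so self is supplied implicitly
        base.__init__(self)   # base is a *class*, so self must be passed explicitly

der()  # prints "initializing" twice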
1
2
79,594,922
2025-4-27
https://stackoverflow.com/questions/79594922/how-to-add-function-parameter-kwargs-during-runtime
I have hundreds of functions test1, test2, ... Function testX may or may not have kwargs: def test1(x, z=1, **kwargs): pass def test2(x, y=1): pass def test3(x, y=1, z=1): pass ... and I have a function call_center: tests = [test1, test2, test3] def call_center(**kwargs): for test in tests: test(**kwargs) call_center will be called with various input parameters: call_center(x=1,y=1) call_center(x=1,z=1) I don't want to filter kwargs by inspect.signature of the called function, because these function will be called millions of times, filtering at runtime costs a lot. How can I define a decorator extend_kwargs that adds kwargs to functions that do not have kwargs? I can use signature to find if kwargs exists. def extender(func): sig = signature(func) params = list(sig.parameters.values()) if any( param.name for param in sig.parameters.values() if param.kind == param.VAR_KEYWORD ): return func # add kwargs to func here ... return func and i tried add Parameter to signature like this kwargs = Parameter("__auto_kwargs", Parameter.VAR_KEYWORD) params.append(kwargs) new_sig = sig.replace(parameters=params) func.__signature__ = new_sig It seems signature is just signature, does not affect the execution. and i tried to modify __code__ of func. old_code = func.__code__ code = old_code.replace( co_varnames=old_code.co_varnames + ("__auto_kwargs",), co_nlocals=old_code.co_nlocals + 1, ) func.__code__ = code Still not work Refer to the accepted answer, the following code works. The judgment is simplified and add the modification of co_flags. def extender(func): if not func.__code__.co_flags & inspect.CO_VARKEYWORDS: code = func.__code__ func.__code__ = code.replace( co_flags=code.co_flags | inspect.CO_VARKEYWORDS, co_varnames=code.co_varnames + ("kwargs",), co_nlocals=code.co_nlocals + 1, ) return func If you don't want to modify the original function directly, see the accepted answer
To add variable keyword parameters (kwargs) to a function that does not have it you can re-create the function with types.FunctionType but with the code object replaced with one that has the inspect.CO_VARKEYWORDS flag enabled in the co_flags attribute, the kwargs name added to the co_varnames attribute, and the number of local variables incremented by 1 in the co_nlocals attribute. So a decorator that adds kwargs to functions that do not have it can look like: import inspect from types import FunctionType def ensure_kwargs(func): if func.__code__.co_flags & inspect.CO_VARKEYWORDS: return func # already supports kwargs return FunctionType( code=func.__code__.replace( co_flags=func.__code__.co_flags | inspect.CO_VARKEYWORDS, co_varnames=func.__code__.co_varnames + ('kwargs',), co_nlocals=func.__code__.co_nlocals + 1 ), globals=func.__globals__, name=func.__name__, argdefs=func.__defaults__, closure=func.__closure__ ) so that: def test1(x, z=1, **kwargs): print(f'{x=}, {z=}, {kwargs=}') def test2(x, y=1): print(f'{x=}, {y=}') def test3(x, y=1, z=1): print(f'{x=}, {y=}, {z=}') def call_center(**kwargs): for test in tests: test(**kwargs) tests = list(map(ensure_kwargs, [test1, test2, test3])) call_center(x=1, y=2) call_center(x=3, z=4) outputs: x=1, z=1, kwargs={'y': 2} x=1, y=2 x=1, y=2, z=1 x=3, z=4, kwargs={} x=3, y=1 x=3, y=1, z=4 Demo: https://ideone.com/0aeT2m
2
3
79,595,603
2025-4-28
https://stackoverflow.com/questions/79595603/sketch-partial-regression-chart-using-dash
i have dataframe, in which we are given x and y columns and between them i want to sketch regression model, main idea is that i should use dash framework, as according chow test , there could be difference between two regression model at different instance value,based on following link : dash models i wrote following code : import pandas as pd from dash import Dash,html,dcc,callback,Output,Input from sklearn.linear_model import LinearRegression import plotly.express as px data =pd.read_csv("regression.csv") model =LinearRegression() print(data) app = Dash() # Requires Dash 2.17.0 or later app.layout = [ html.H1(children='Our regression Model', style={'textAlign':'center'}), dcc.Dropdown(data.Year.unique(), '2004', id='dropdown-selection'), dcc.Graph(id='graph-content') ] @callback( Output('graph-content', 'figure'), Input('dropdown-selection', 'value') ) def scatter_graph(value): selected =data[data.Year==value] return px.scatter(selected,x='x',y='y') @callback( Output('graph-content', 'figure'), Input('dropdown-selection', 'value') ) def Regression_graph(value): selected =data[data.Year==value] X =selected['x'].values X =X.reshape(-1,1) y =selected['y'].values model.fit(X,y) y_predicted =model.predict(X) return px.line(selected,x='x',y=y_predicted) if __name__ =='__main__': app.run(debug=True) this part works fine : @callback( Output('graph-content', 'figure'), Input('dropdown-selection', 'value') ) def scatter_graph(value): selected =data[data.Year==value] return px.scatter(selected,x='x',y='y') but second decorator for regression plot does not work, here is example : please help me how to fix it?
It may need to add all code in one function and return both figures fig = px.scatter(selected, x=x, y=y) fig.add_traces(px.line(selected, x=x, y=y_predicted).data) return fig Full working code with random data. import pandas as pd from dash import Dash,html,dcc,callback,Output,Input from sklearn.linear_model import LinearRegression import plotly.express as px from dash.exceptions import PreventUpdate import random random.seed(0) # to get always the same random values data = pd.DataFrame({ 'x': random.choices(range(0, 100), k=100), 'y': random.choices(range(0, 100), k=100), 'Year': ['2025']*50 + ['2024']*50, }) #data = pd.read_csv("regression.csv") model = LinearRegression() print(data) app = Dash() # Requires Dash 2.17.0 or later app.layout = [ html.H1(children='Our regression Model', style={'textAlign':'center'}), dcc.Dropdown(data.Year.unique(), '2004', id='dropdown-selection'), dcc.Graph(id='graph-content') ] @callback( Output('graph-content', 'figure'), Input('dropdown-selection', 'value') ) def update_graph(value): if value is None: raise PreventUpdate print(f'{value = }') selected = data[data.Year==value] x = selected['x'].values print(f'{x = }') X = x.reshape(-1,1) print(f'{X = }') y = selected['y'].values print(f'{y = }') model.fit(X, y) y_predicted = model.predict(X) # fig = px.line(selected, x=x, y=y_predicted) # fig.add_scatter(x=X, y=y) # doesn't show it fig = px.scatter(selected, x=x, y=y) fig.add_traces(px.line(selected, x=x, y=y_predicted).data) return fig if __name__ =='__main__': app.run(debug=True)
1
3
79,593,505
2025-4-26
https://stackoverflow.com/questions/79593505/cant-close-cookie-pop-up-on-website-with-selenium-webdriver
I am trying to use selenium to click the Accept all or Reject all button on a cookie pop up for the the website autotrader.co.uk, but I cannot get it to make the pop up disappear for some reason. This is the pop up: and here is the html: <button title="Reject All" aria-label="Reject All" class="message-component message-button no-children focusable sp_choice_type_13" style="opacity: 1; padding: 10px 5px; margin: 10px 5px; border-width: 2px; border-color: rgb(5, 52, 255); border-radius: 5px; border-style: solid; font-size: 14px; font-weight: 400; color: rgb(255, 255, 255); font-family: arial, helvetica, sans-serif; width: calc(35% - 20px); background: rgb(5, 52, 255);">Reject All</button> The code I have tried is the following: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time path_to_driver = r"C:\path_to_project\chromedriver.exe" service = Service(executable_path=path_to_driver) driver = webdriver.Chrome(service=service) driver.get("https://www.autotrader.co.uk") time.sleep(5) WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.CLASS_NAME, 'message-component message-button no-children focusable sp_choice_type_13'))).click() time.sleep(10) driver.quit() Can anyone help here?
As you can see, the pop-up window is embedded within an <iframe>Selenium must first switch the driver's context to that iframe before attempting to locate or interact with any elements contained within it. wait for the desired iframe element to be available to switch to it: iframe = wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, 'iframe[id^="sp_message_iframe_"]'))) Note: Since the id attribute of the iframe appears to be dynamically generated, it is recommended to locate the iframe using a partial match strategy, such as CSS selector or XPath with the contains() function or partial match strategy ( https://stackoverflow.com/a/56844649/11179336) This is how you can do: import time from selenium.webdriver import Chrome, ChromeOptions from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC chrome_options = ChromeOptions() chrome_options.add_experimental_option("excludeSwitches", ['enable-automation']) driver = Chrome(options=chrome_options) driver.get("https://www.autotrader.co.uk") wait = WebDriverWait(driver, 10) # wait for the target iframe to get loaded in order to switch to it wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, 'iframe[id^="sp_message_iframe_"]'))) # click to 'Reject All' wait.until(EC.element_to_be_clickable((By.XPATH, '//button[@title="Reject All"]'))).click() # Switch back to the main page content driver.switch_to.default_content() # Now you can continue interacting with the main page here time.sleep(5)
1
2
79,595,515
2025-4-27
https://stackoverflow.com/questions/79595515/urlparse-urlsplit-and-urlunparse-whats-the-pythonic-way-to-do-this
The background (but is not a Django-only question) is that the Django test server does not return a scheme or netloc in its response and request urls. I get /foo/bar for example, and I want to end up with http://localhost:8000/foo/bar. urllib.parse.urlparse (but not so much urllib.parse.urlsplit) makes gathering the relevant bits of information, from the test url and my known server address, easy. What seems more complicated than necessary is recomposing a new url with the scheme and netloc added via urllib.parse.urlcompose which wants positional arguments, but does not document what they are, nor support named arguments. Meanwhile, the parsing functions return immutable tuples... def urlunparse(components): """Put a parsed URL back together again. This may result in a ...""" I did get it working, see code below, but it looks really kludgy, around the part where I need to first transform the parse tuples into lists and then modify the list at the needed index position. Is there a more Pythonic way? sample code: from urllib.parse import urlsplit, parse_qs, urlunparse, urlparse, urlencode, ParseResult, SplitResult server_at_ = "http://localhost:8000" url_in = "/foo/bar" # this comes from Django test framework I want to change this to "http://localhost:8000/foo/bar" from_server = urlparse(server_at_) print(" scheme and netloc from server:",from_server) print(f"{url_in=}") from_urlparse = urlparse(url_in) print(" missing scheme and netloc:",from_urlparse) #this works print("I can rebuild it unchanged :",urlunparse(from_urlparse)) #however, using the modern urlsplit doesnt work (I didn't know about urlunsplit when asking) try: print("using urlsplit", urlunparse(urlsplit(url_in))) #pragma: no cover pylint: disable=unused-variable except (Exception,) as e: print("no luck with urlsplit though:", e) #let's modify the urlparse results to add the scheme and netloc try: from_urlparse.scheme = from_server.scheme from_urlparse.netloc = from_server.netloc new_url = urlunparse(from_urlparse) except (Exception,) as e: print("can't modify tuples:", e) # UGGGH, this works, but is there a better way? parts = [v for v in from_urlparse] parts[0] = from_server.scheme parts[1] = from_server.netloc print("finally:",urlunparse(parts)) sample output: scheme and netloc from server: ParseResult(scheme='http', netloc='localhost:8000', path='', params='', query='', fragment='') url_in='/foo/bar' missing scheme and netloc: ParseResult(scheme='', netloc='', path='/foo/bar', params='', query='', fragment='') I can rebuild it unchanged : /foo/bar no luck with urlsplit though: not enough values to unpack (expected 7, got 6) can't modify tuples: can't set attribute finally: http://localhost:8000/foo/bar
If you need it in Django then I found request.build_absolute_uri() in the question How can I get the full/absolute URL (with domain) in Django? - Stack Overflow. I didn't test it but maybe it resolves this problem in Django. Other modules/frameworks may also have their own functions for this. As I remember, the scraping module scrapy has its own function response.urljoin() to convert a relative url into an absolute url. As for functions in module urllib: You would have to use urlsplit with urlunsplit (which use fewer values) or urlparse with urlunparse (which use more values). There is a "hidden" method _replace() which creates a new ParseResult with replaced values. new_urlparse = from_urlparse._replace(scheme=from_server.scheme, netloc=from_server.netloc) Usually I need only urljoin(): from urllib.parse import urljoin server_at_ = "http://localhost:8000" # base url_in = "/foo/bar" # relative url absolute_url = urljoin(server_at_, url_in)
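For completeness, a short sketch of the _replace() route with the values from the question (variable names are just illustrative):

from urllib.parse import urlparse, urlunparse

server_at_ = "http://localhost:8000"
url_in = "/foo/bar"

from_server = urlparse(server_at_)
new_parts = urlparse(url_in)._replace(scheme=from_server.scheme,
                                      netloc=from_server.netloc)
print(urlunparse(new_parts))  # http://localhost:8000/foo/bar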
1
2
79,595,283
2025-4-27
https://stackoverflow.com/questions/79595283/how-to-properly-extract-all-duplicated-rows-with-a-condition-in-a-polars-datafra
Given a polars dataframe, I want to extract all duplicated rows while also applying an additional filter condition, for example: import polars as pl df = pl.DataFrame({ "name": ["Alice", "Bob", "Alice", "David", "Eve", "Bob", "Frank"], "city": ["NY", "LA", "NY", "SF", "LA", "LA", "NY"], "age": [25, 30, 25, 35, 28, 30, 40] }) # Trying this: df.filter((df.is_duplicated()) & (pl.col("city") == "NY")) # error However, this results in an error: SchemaError: cannot unpack series of type object into bool Which alludes that df.is_duplicated() returns a series of type object, but in reality, it's a Boolean Series. Surprisingly, reordering the predicates by placing the expression first makes it work (but why?): df.filter((pl.col("city") == "NY") & (df.is_duplicated())) # works! correctly outputs: shape: (2, 3) ┌───────┬──────┬─────┐ │ name ┆ city ┆ age │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞═══════╪══════╪═════╡ │ Alice ┆ NY ┆ 25 │ │ Alice ┆ NY ┆ 25 │ └───────┴──────┴─────┘ I understand that the optimal approach when filtering for duplicates based on a subset of columns is to use pl.struct, like: df.filter((pl.struct(df.columns).is_duplicated()) & (pl.col("city") == "NY")) # worksWhich works fine with the additional filter condition. However, I'm intentionally not using pl.struct because my real dataframe has 40 columns, and I want to check for duplicated rows based on all the columns except three, so I did the following: df.filter(df.drop("col1", "col2", "col3").is_duplicated()) Which works fine and is much more convenient than writing all 37 columns in a pl.struct. However, this breaks when adding an additional filter condition to the right, but not to the left: df.filter( (df.drop("col1", "col2", "col3").is_duplicated()) & (pl.col("col5") == "something") ) # breaks! df.filter( (pl.col("col5") == "something") & (df.drop("col1", "col2", "col3").is_duplicated()) ) # works! Why does the ordering of predicates (Series & Expression vs Expression & Series) matter inside .filter() in this case? Is this intended behavior in Polars, or a bug?
The error is not .filter() specific. and I don't think it's a bug. Expressions allow you to use Series on the RHS, and it will return an expression. pl.lit(True) & pl.Series([1, 2]) # <Expr ['[(true) & (Series)]'] at 0x134D05F90> But the other way round doesn't make sense, and errors. pl.Series([1, 2]) & pl.lit(True) # ComputeError: cannot cast 'Object' type As for using a struct, you can wrap .exclude() in a struct. pl.struct(pl.exclude("col1", "col2", "col3")).is_duplicated()
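A quick sketch applying that to the question's dataframe (excluding only age here as a stand-in for the real three columns; untested against your 40-column data):

df.filter(
    pl.struct(pl.exclude("age")).is_duplicated()
    & (pl.col("city") == "NY")
)

Because the whole predicate is now built from expressions, the order of the & operands no longer matters.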
3
3
79,595,292
2025-4-27
https://stackoverflow.com/questions/79595292/how-can-i-delete-a-row-from-a-csv-file
enter image description here enter image description here When I try to delete a row from my csv file, it deletes everything else and then turns that row sideways and separates the characters into their own columns. I have no idea what is happening. Above are pictures that show the before and after of my csv file when I try to delete. I have no idea what is happening and it is super strange. Here is my code and the issue is on line 95: import csv import sys FILENAME = "guests.csv" def exit_program(): print("Terminating program.") sys.exit() def read_guests(): try: guests = [] with open(FILENAME, newline="") as file: reader = csv.reader(file) for row in reader: guests.append(row) return guests except FileNotFoundError as e: ## print(f"Could not find {FILENAME} file.") ## exit_program() return guests except Exception as e: print(type(e), e) exit_program() def write_guests(guests): try: with open(FILENAME, "w", newline="") as file: ## raise BlockingIOError("Error raised for testing.") writer = csv.writer(file) writer.writerows(guests) except OSError as e: print(type(e), e) exit_program() except Exception as e: print(type(e), e) exit_program() def list_guests(guests): number_of_guests = 0 number_of_members = 0 total_fee = 0 for i, guests in enumerate(guests, start=1): print(f"{i}. Name: {guests[0]} {guests[1]}\n Meal: {guests[2]} \n Guest Type: {guests[3]} \n Amount due: ${guests[4]}") if guests[3] == "guest": number_of_guests +=1 if guests[3] == "member": number_of_members +=1 total_fee += 22 print("Number of members: " +str(number_of_members)) print("Number of guests: " +str(number_of_guests)) print("Total fee paid by all attendees: " +str(total_fee)) print() def add_guests(guests): fname = input("First name: ") lname = input("Last name: ") while True: try: meal = str(input("Meal(chicken, vegetarian, or beef): ")) except ValueError: print("Please enter a meal. 
Please try again.") continue if meal == "beef" : break if meal == "chicken" : break if meal == "vegetarian" : break else: print("Please enter a meal:(chicken, vegetarian, or beef)") while True: attendee_type = input("Are you a 'member' or 'guest'?") if attendee_type == "member" : break if attendee_type == "guest" : break else: print("Please enter either 'member' or 'guest': ") fee = 22 guest = [fname, lname, meal, attendee_type, fee] guests.append(guest) write_guests(guests) print(f"{fname} was added.\n") def delete_guest(guests): name = input("Enter the guest's first name: ") for i, guests in enumerate(guests, start=1): if name == guests[0]: del guests[i] write_guests(guests) print(f"{name} removed from catalog.") print("") break print(f"{name} doesn't exist in the list.") def menu_report(guests): number_of_beef = 0 number_of_chicken = 0 number_of_vegetarian = 0 for i, guests in enumerate(guests, start=1): if guests[2] == "beef": number_of_beef +=1 if guests[2] == "chicken": number_of_chicken +=1 if guests[2] == "vegetarian": number_of_vegetarian +=1 print("Number of Chicken entrees: " +str(number_of_chicken)) print("Number of Beef entrees: " +str(number_of_beef)) print("Number of vegetarian Meals: " +str(number_of_vegetarian)) print() def display_menu(): print("COMMAND MENU") print("list - List all guests") print("add - Add a guest") print("del - Delete a guest") print("menu - Report menu items") print("exit - Exit program") print() def main(): print("The Guests List program") print("") guests = read_guests() while True: display_menu() command = input("Command: ") if command.lower() == "list": list_guests(guests) elif command.lower() == "add": add_guests(guests) elif command.lower() == "del": delete_guest(guests) elif command.lower() == "menu": menu_report(guests) elif command.lower() == "exit": break else: print("Not a valid command. Please try again.\n") print("Bye!") quit() if __name__ == "__main__": main() This is for an assignment due tomorrow! arg! I thought it would delete the row but instead it saved only that row and made each entry in a column of that row into it's on row. each character in the original row was now separated into it's own column.
avoid shadowing Use singular and plural identifiers where appropriate. The usual idiom is for x in xs: You wrote: for i, guests in enumerate(guests, start=1): if name == guests[0]: del guests[i] What you wanted to write was for i, guest in ... When testing name equality, you intended to refer to guest[0], the first column of a row. Regrettably the new guests local variable is shadowing the original guests parameter. As a result, that parameter is no longer accessible within the loop body. mutate outside the loop I recommend conditionally assigning match_index = i inside the loop, and then issuing del guests[match_index] after the for loop has finished. Also, it's too bad the program doesn't verify that each guest name is unique. If you wish to filter on a non-unique attribute, such as removing guests having "beef", then consider building a new filtered list, and returning that. This avoids skipping rows that might match. normalize first command = input("Command: ") Better to downcase that immediately. command = input("Command: ").lower() Then you can save a bunch of distracting .lower() calls in the subsequent dispatch tests.
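Putting those points together, a corrected delete_guest could look roughly like this (a sketch that keeps the question's write_guests helper):

def delete_guest(guests):
    name = input("Enter the guest's first name: ")
    match_index = None
    for i, guest in enumerate(guests):   # singular loop variable, 0-based index
        if name == guest[0]:
            match_index = i
            break
    if match_index is None:
        print(f"{name} doesn't exist in the list.")
        return
    del guests[match_index]              # mutate after the loop has finished
    write_guests(guests)
    print(f"{name} removed from catalog.\n")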
2
2
79,595,244
2025-4-27
https://stackoverflow.com/questions/79595244/does-py-cord-support-discords-new-user-install-commands-feature
I recently learned that Discord has updated with a feature called "User Install" that allows users to install bots to their personal accounts, not just servers. As I understand it, this enables users to use certain bot commands in servers where the bot isn't officially present. I'm developing a Discord bot using py-cord and want to implement this new feature, but I can't find any information about it in the documentation. My questions are: Does py-cord already support Discord's User Install Commands feature? If supported, how can I define and implement a command in py-cord that can be installed by users and used in any server? Are there any relevant decorators or special configurations to mark these types of commands? My current code structure: import discord from discord.ext import commands bot = commands.Bot() # Regular slash command @bot.slash_command(name="hello", description="Say hello") async def hello(ctx): await ctx.respond(f"Hello, {ctx.author.name}!") # I want to know how to modify this to be a user-installable command # @bot.???_command(name="usercommand") # async def user_command(ctx): # await ctx.respond("This is a user-installed command") bot.run("TOKEN") Thanks for any guidance or suggestions!
As of Pycord v2.6, this is supported. if you want your command to be available in both guilds and to users who have installed your bot, you can specify both guild_install and user_install in an integration types parameter inside a @bot.slash_command() decorator: @bot.slash_command( name="hello", description="say hello", integration_types={ discord.IntegrationType.guild_install, discord.IntegrationType.user_install, }, ) async def hello(ctx: discord.ApplicationContext): await ctx.respond("hello!")
1
1
79,589,289
2025-4-23
https://stackoverflow.com/questions/79589289/is-it-good-practice-to-override-an-abstract-method-with-more-specialized-signatu
Background information below the question. In Python 3, I can define a class with an abstract method and implement it in a derived class using a more specialized signature. I know this works, but like many things work in many programming languages, it may not be good practice. So is it? from abc import ABC, abstractmethod class Base(ABC): @abstractmethod def foo(self, *args, **kwargs): raise NotImplementedError() class Derived(Base): def foo(self, a, b, *args, **kwargs): print(f"Derived.foo(a={a}, b={b}, args={args}, kwargs={kwargs})") d = Derived() d.foo(1, 2, 3, "bar", baz="baz") # output: # Derived.foo(a=1, b=2, args=(3, 'bar'), kwargs={'baz': 'baz'}) Is this good or bad practice? More information as promised. I have an interface that defines an abstract method. It returns some sort of handle. Specialized implementations must always be able to return a sort of default handle if the method is called without any extra arguments. However, they may define certain flags to tweak the handle to the use case of the caller. In this case, the caller is also the one that instantiated the specialized implementation and knows about these flags. Generic code operating only on the interface or the handles does not know about these flags but does not need to. from abc import ABC, abstractmethod class Manager(ABC): @abstractmethod def connect(self, *args, **kwargs): raise NotImplementedError() class DefaultManager(Manager): def connect(self, *, thread_safe: bool = False): if thread_safe: return ThreadSafeHandle() else: return DefaultHandle() It is specific to my use case that a Manager implementation may want to issue different implementations of handles specific to the use case of the caller. Managers are defined in one place in my code and callers may or may not have specialized needs, such as thread safety in the example, for the managers they use.
Your first example would be essentially fine, as everything the abstract parent accepts is also accepted by the derived class. Strictly speaking, though, the first snippet isn't quite type-correct: only an override with def foo(self, a:Any=default, b:Any=default, *args, **kwargs): would be, because to be type-correct the set of accepted arguments has to get broader, not narrower. So I would take issue with the concrete example, since using a DefaultManager as a Manager would imply one could pass any number of positional arguments to it, and any value into thread_safe. More concretely, IMO, the best practice here is this: from abc import ABC, abstractmethod class Manager(ABC): @abstractmethod def connect(self): raise NotImplementedError() class DefaultManager(Manager): def connect(self, *, thread_safe: bool = False): if thread_safe: return ThreadSafeHandle() else: return DefaultHandle() because everything Manager.connect accepts is also accepted by DefaultManager.connect
1
1
79,595,094
2025-4-27
https://stackoverflow.com/questions/79595094/why-is-flask-sqlalchemy-giving-me-an-error-no-such-table-users
import os from flask import Flask, render_template, redirect, url_for, request from flask_login import LoginManager, UserMixin, login_user, current_user, logout_user from flask_sqlalchemy import SQLAlchemy app = Flask(__name__) @app.route('/') def index(): return render_template('index.html') app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///db.sqlite" db = SQLAlchemy() login_manager = LoginManager(app) class Users(UserMixin, db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(250), unique=True, nullable=False) password = db.Column(db.String(250), nullable=False) location = db.Column(db.String(250), nullable=False) db.init_app(app) app.app_context().push() with app.app_context(): db.create_all() @login_manager.user_loader def loader_user(user_id): return Users.query.get(user_id) @app.route('/register', methods=["GET", "POST"]) def register(): if request.method == "POST": if not db.session.query(Users).filter_by(username=request.form.get("uname")).count() < 1: return render_template("sign_up.html", value = "USER ALREADY EXISTS") if request.form.get("uname") == "": return render_template("sign_up.html", value = "USERNAME IS BLANK") if request.form.get("psw") == "": return render_template("sign_up.html", value = "PASSWORD IS BLANK") if request.form.get("loc") == "": return render_template("sign_up.html", value = "LOCATION IS BLANK") user = Users(username=request.form.get("uname"), password=request.form.get("psw"), location=request.form.get("loc")) db.session.add(user) db.session.commit() return redirect(url_for("login")) return render_template("sign_up.html", value ="") @app.route("/login", methods=["GET", "POST"]) def login(): if current_user.is_authenticated: return redirect(url_for("index", logged_in = True, username = current_user.username)) if request.method == "POST": user = Users.query.filter_by( username=request.form.get("uname")).first() if not user: return render_template("login.html", value = request.form.get("uname")) if user.password == request.form.get("psw"): login_user(user) return redirect(url_for("index", logged_in = True, username = user.username)) return render_template("login.html") if __name__ == "__main__": app.secret_key = 'kevin2000' app.config['SESSION_TYPE'] = 'filesystem' app.run(host ="0.0.0.0", port = 10000, debug=False) This is my code and here i am getting this error Traceback (most recent call last): File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 1964, in _exec_single_context self.dialect.do_execute( File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\default.py", line 945, in do_execute cursor.execute(statement, parameters) sqlite3.OperationalError: no such table: users The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\flask\app.py", line 1511, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\flask\app.py", line 919, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\flask\app.py", line 917, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\flask\app.py", line 902, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Users\theco\OneDrive\Documents\veil\app.py", line 41, in register if not db.session.query(Users).filter_by(username=request.form.get("uname")).count() < 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\query.py", line 3147, in count self._legacy_from_self(col).enable_eagerloads(False).scalar() File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\query.py", line 2836, in scalar ret = self.one() ^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\query.py", line 2809, in one return self._iter().one() # type: ignore ^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\query.py", line 2858, in _iter result: Union[ScalarResult[_T], Result[_T]] = self.session.execute( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\session.py", line 2365, in execute return self._execute_internal( ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\session.py", line 2251, in _execute_internal result: Result[Any] = compile_state_cls.orm_execute_statement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\orm\context.py", line 306, in orm_execute_statement result = conn.execute( ^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 1416, in execute return meth( ^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\sql\elements.py", line 523, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 1638, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 1843, in _execute_context return self._exec_single_context( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 1983, in _exec_single_context self._handle_dbapi_exception( File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 2352, in _handle_dbapi_exception raise sqlalchemy_exception.with_traceback(exc_info[2]) from e File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\base.py", line 1964, in _exec_single_context self.dialect.do_execute( File "C:\Users\theco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\sqlalchemy\engine\default.py", line 945, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: users [SQL: SELECT count(*) AS count_1 FROM (SELECT users.id AS users_id, users.username AS users_username, users.password AS users_password, users.location AS users_location FROM users WHERE users.username = ?) AS anon_1] [parameters: ('xdgsdg',)] (Background on this error at: https://sqlalche.me/e/20/e3q8) I've searched in other places but it's just said about the db.create_all() not being there im pretty new to flask as well so someone help please i also tried db.drop_all() then db.create_all() that also didn't seem to work if it helps i can also provide the sign_up.html from which the error is occuring when i submit the form
The error about no such column: location mentioned in your question is different from the actual traceback, which is showing a no such table: users error. However I see the issue. You're initializing Flask-SQLAlchemy after defining routes, but before initializing the app context. You need to move your configuration settings to the top, right after creating the Flask app, then initialize db with app immediately. Your code will then look like this - app = Flask(__name__) app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///db.sqlite" app.secret_key = 'kevin2000' app.config['SESSION_TYPE'] = 'filesystem' db = SQLAlchemy(app) class Users(UserMixin, db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(250), unique=True, nullable=False) password = db.Column(db.String(250), nullable=False) location = db.Column(db.String(250), nullable=False) login_manager = LoginManager(app) with app.app_context(): db.create_all() @app.route('/') def index(): return render_template('index.html') # add the remaining routes here
2
5
79,594,776
2025-4-27
https://stackoverflow.com/questions/79594776/python-abstract-methods-for-children-but-such-that-dont-prohibit-instances-of
I'm currently using ABC's to ensure that my child classes implement a specific method (or property, in this particular case). I want to make it impossible to create children of Entity without implementing similar_entities: class Entity(ABC): @property @abstractmethod def similar_entities(self) -> list[str]: return [] class SubEntity1(Entity): @property def similar_entities(self): return ["a", "b", "c"] SubEntity1() # Can be instantiated class SubEntity2(Entity): pass SubEntity2() # TypeError However, I don't actually want to prohibit users from creating instances of the base class Entity itself; creating a "generic" entity is meaningful in this case, and I would like to use the implementation of similar_entities defined on the Entity class itself. I'm only using @abstractmethod to "idiot-check" myself so that I don't accidentally forget to implement this property on my subclasses. I know full well that the "correct" way to do this would be to create a stub implementation of Entity and use that for generic instances instead: class GenericEntity(Entity): @property def similar_entities(self) -> list[str]: return super().similar_entities GenericEntity() # ... But to me this essentially violates DRY, and since I'm working with such a simple and limited context I'm wondering if there's a way to have my cake and eat it too. Is there any way to massage @abstractmethod to only complain on child-instances? If not, is there some better, more flexible way to establish this contract between my classes? I'm also not opposed to "rolling-my-own" decorator if this is more appropriate in this case.
Here is my code that fits: Implementing similar_entities on subclasses is allowed. Entity can be instantiated. Subclasses defined without similar_entities property raise TypeError. class MyMeta(type): def __init__(self, classname, superclasses, attributedict): super().__init__(classname, superclasses, attributedict) if 'similar_entities' not in self.__dict__: raise TypeError class Entity(metaclass=MyMeta): @property def similar_entities(self): return [] Example 1: class SubEntity1(Entity): @property def similar_entities(self): return ["a", "b", "c"] print(SubEntity1().similar_entities) # ['a', 'b', 'c'] Example 2: class SubEntity2(Entity): pass print(SubEntity2().similar_entities) # TypeError Example 3: print(Entity().similar_entities) # []
2
3
79,594,789
2025-4-27
https://stackoverflow.com/questions/79594789/issue-with-reading-a-csv-file-with-all-columns-as-string-using-polars
I have below Python code using polars, and I do not want Python to auto parse values as dates or integers unless explicitly stated. schema_overrides doesn't prevent auto conversion either. import polars as pl # Read the CSV file with all columns as strings using schema_overrides file_path = "./xyz.csv" df = pl.read_csv(file_path, schema_overrides={'*': pl.Utf8}) # Display the DataFrame print(df) I get below error: polars.exceptions.ComputeError: could not parse p35038 as dtype i64 at column 'Employee ID' (column number 3)
This is what infer_schema=False is for. When False, the schema is not inferred and will be pl.String if not specified in schema or schema_overrides. pl.read_csv(b"""a,b,c 1,2,3""") # shape: (1, 3) # ┌─────┬─────┬─────┐ # │ a ┆ b ┆ c │ # │ --- ┆ --- ┆ --- │ # │ i64 ┆ i64 ┆ i64 │ # ╞═════╪═════╪═════╡ # │ 1 ┆ 2 ┆ 3 │ # └─────┴─────┴─────┘ pl.read_csv(b"""a,b,c 1,2,3""", infer_schema=False) # shape: (1, 3) # ┌─────┬─────┬─────┐ # │ a ┆ b ┆ c │ # │ --- ┆ --- ┆ --- │ # │ str ┆ str ┆ str │ # ╞═════╪═════╪═════╡ # │ 1 ┆ 2 ┆ 3 │ # └─────┴─────┴─────┘ "*" in your example is taken literally, it is not treated as a "Wildcard".
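So for the file in the question, a minimal sketch would be:

df = pl.read_csv("./xyz.csv", infer_schema=False)
print(df.schema)  # every column, including "Employee ID", comes back as pl.String

If a handful of columns should still be typed, you can combine this with schema_overrides for just those columns, since only unspecified columns fall back to pl.String.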
4
4
79,594,764
2025-4-27
https://stackoverflow.com/questions/79594764/increasing-the-size-of-the-model
Recently I trained two MLP model and saved weights for future work. I develop one module for loading model and use these model in another module. Load model module contain this code to load models: def creat_model_extractor(model_path, feature_count): """ This function create model and set weights :param model_path: address of weights files :param feature_count: Number of nodes in input layer """ try: tf.keras.backend.clear_session() node_list = [1024, 512, 256, 128, 64, 32] model = Sequential() model.add(Input(shape=(feature_count,))) for node in node_list: model.add(Dense(node, activation='relu')) model.add(Dropout(0.2)) model.add(LayerNormalization()) model.add(Dense(16, activation='relu')) model.add(LayerNormalization()) model.add(Dense(1, activation='sigmoid')) @tf.function def inference_step(inputs): return tf.stop_gradient(model(inputs, training=False)) model.inference_step = inference_step model.load_weights(model_path) model.trainable = False for layer in model.layers: layer.trainable = False except Exception as error: logger.warning(error, exc_info=True) return None return model And this is predict function SMALL_MODEL = creat_model_extractor(MODEL_PATH_SMALL, small_blocks_count) (SMALL_MODEL.inference_step(small_blocks_normal) > 0.5).numpy().astype(int) Problem: After predict label 'SMALL_MODEL' size change. It become bigger and And after a while, the RAM fills up. What should I do to prevent RAM from filling up? This problem happen even in one module
Every time you call model.inference_step(), TensorFlow creates a new computation graph, because your @tf.function is dynamically bound inside your model object. TensorFlow is trying to trace and cache the @tf.function, but it can't re-use the existing trace properly, because model.inference_step is reattached dynamically and behaves non-standardly. You should not dynamically attach inference_step inside the function. Instead, move the @tf.function outside the model and use the model directly for prediction. @tf.function def inference(model, inputs): return tf.stop_gradient(model(inputs, training=False)) and call: SMALL_MODEL = creat_model_extractor(MODEL_PATH_SMALL, small_blocks_count) predictions = inference(SMALL_MODEL, small_blocks_normal) labels = (predictions > 0.5).numpy().astype(int)
1
1
79,593,358
2025-4-25
https://stackoverflow.com/questions/79593358/how-do-you-efficiently-find-gaps-in-one-list-of-datetimes-relative-to-another
I have datasets from multiple instruments with differing, but hypothetically concurrent datetime stamps. If date from instrument A does not correspond to any data from instrument B within some interval, I want to flag those data for removal later. The current way I handle this is to loop over A and test against B with a list comprehension. However, my datasets can get very large -- O(400k) -- which makes this looping on loops extremely slow and inefficient. How can I reformulate the following "findGaps" method to be more efficient? from datetime import datetime, timedelta import time start = datetime(2025,4,25,0,0,0) stop = datetime(2025,4,25,2,0,0) delta = timedelta(seconds=1) # Main dataset: dateTimeMain = [] while start <= stop: start += delta dateTimeMain.append(start) # Test dataset with gaps: dateTimeTest = dateTimeMain.copy() del dateTimeTest[30:300] del dateTimeTest[3000:3300] def findGaps(dTM,dTT): bTs = [] start = -1 i, index, stop = 0,0,0 tThreshold = timedelta(seconds=30) for index, esTimeI in enumerate(dTM): tDiff = [abs(x - esTimeI) for x in dTT] if min(tDiff) > tThreshold: i += 1 if start == -1: start = index stop = index else: if start != -1: startstop = [dTM[start],dTM[stop]] msg = f' Flag data from {startstop[0]} to {startstop[1]}' print(msg) bTs.append(startstop) start = -1 if start != -1 and stop == index: # Records from a mid-point to the end are bad startstop = [dTM[start],dTM[stop]] bTs.append(startstop) return bTs tic = time.process_time() badTimes = findGaps(dateTimeMain,dateTimeTest) print(f'Loops: {len(dateTimeMain)} x {len(dateTimeTest)} = {len(dateTimeMain)*len(dateTimeTest)}') print(f'Uncertainty Update Elapsed Time: {time.process_time() - tic:.3f} s') Returns: Flag data from 2025-04-25 00:01:01 to 2025-04-25 00:04:30 Flag data from 2025-04-25 00:55:01 to 2025-04-25 00:59:00 Loops: 7201 x 6631 = 47749831 Uncertainty Update Elapsed Time: 5.167 s
Faster code with Numpy A simple way to make this faster is to vectorise the code with Numpy: def findGaps_faster(dTM,dTT): bTs = [] start = -1 i, index, stop = 0,0,0 tThreshold = timedelta(seconds=30) dTT = np.array(dTT, dtype=np.datetime64) for index, esTimeI in enumerate(dTM): esTimeI = np.datetime64(esTimeI) tDiff = np.abs(dTT - esTimeI) if np.min(tDiff) > tThreshold: i += 1 if start == -1: start = index stop = index else: if start != -1: startstop = [dTM[start],dTM[stop]] msg = f' Flag data from {startstop[0]} to {startstop[1]}' print(msg) bTs.append(startstop) start = -1 if start != -1 and stop == index: # Records from a mid-point to the end are bad startstop = [dTM[start],dTM[stop]] bTs.append(startstop) return bTs This is 20 times faster and provide the same result. More efficient Numpy algorithm The current algorithm is inefficient. Indeed, the O(n) computation time for each item of dTM is expensive and results in a O(n m) time for the whole code. There is a way to make the algorithm significantly more efficient by sorting data. Indeed, you can find the closest value (i.e. np.min(np.abs(dTT - esTimeI)) with a binary search running in O(log n) time, so the overall execution time goes down to O(m log n). This operation can also still be vectorised with Numpy thanks to np.searchsorted: def findGaps_fastest(dTM,dTT): bTs = [] start = -1 i, index, stop = 0,0,0 tThreshold = timedelta(seconds=30) # See below for faster conversions np_dTT = np.array(dTT, dtype=np.datetime64) np_dTT.sort() np_dTM = np.array(dTM, dtype=np.datetime64) pos = np.searchsorted(np_dTT, np_dTM, side='right') # Consider the 3 items close the the position found. # We can probably only consider 2 of them but this is simpler and less bug-prone. pos1 = np.maximum(pos-1, 0) pos2 = np.minimum(pos, np_dTT.size-1) pos3 = np.minimum(pos+1, np_dTT.size-1) tDiff1 = np.abs(np_dTT[pos1] - np_dTM) tDiff2 = np.abs(np_dTT[pos2] - np_dTM) tDiff3 = np.abs(np_dTT[pos3] - np_dTM) tMin = np.minimum(tDiff1, tDiff2, tDiff3) for index in range(len(np_dTM)): if tMin[index] > tThreshold: i += 1 if start == -1: start = index stop = index else: if start != -1: startstop = [dTM[start],dTM[stop]] msg = f' Flag data from {startstop[0]} to {startstop[1]}' print(msg) bTs.append(startstop) start = -1 if start != -1 and stop == index: # Records from a mid-point to the end are bad startstop = [dTM[start],dTM[stop]] bTs.append(startstop) return bTs This is about 90 times faster! Analysis and further optimisations Note that most of the time is actually spent in the conversions from lists to Numpy arrays. Without that (i.e. assuming it can be pre-computed), it would be even faster (>200 times)! The rest of the time is mainly spent in the Python loop. If this is not enough, then you can do the conversion in parallel with multiple threads. You can certainly do that in Cython. Cython can also make the Python loop faster. In the end, most of the time should be spent in reading pure-Python object from the pure-Python list. A convoluted way to make the conversion faster is to first extract each timestamp as a float and then create the Numpy array quickly based on them. This is about 6 times faster on my machine and 90% of the time is spent in pure-Python datetime method calls. The precision of the date should be in the order of microseconds (not more). Here is an example: timestamps = [e.timestamp() for e in dTT] # Expensive part np_dTT = np.array(np.array(timestamps, np.float64), dtype='datetime64[s]')
1
2