Dataset columns (ranges observed in the data):
question_id: int64, 59.5M to 79.7M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-07-01 00:00:00
link: string, 60 to 163 characters
question: string, 53 to 28.9k characters
accepted_answer: string, 26 to 29.3k characters
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
79,615,990
2025-5-10
https://stackoverflow.com/questions/79615990/how-to-concatenate-n-rows-of-content-to-current-row-in-a-rolling-window-in-pa
I'm looking to transform a dataframe containing [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] into [[1, 2, 3, []], [4, 5, 6, [1, 2, 3, 4, 5, 6]], [7, 8, 9, [4, 5, 6, 7, 8, 9]], [10, 11, 12, [7, 8, 9, 10, 11, 12]]] So far the only working solution I've come up with is: import pandas as pd import numpy as np # Create the DataFrame df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])) # Initialize an empty list to store the result result = [] # Iterate over the rows in the DataFrame for i in range(len(df)): # If it's the first row, append the row with an empty list if i == 0: result.append(list(df.iloc[i]) + [[]]) # If it's not the first row, concatenate the current and previous row else: current_row = list(df.iloc[i]) previous_row = list(df.iloc[i-1]) concatenated_row = current_row + [previous_row + current_row] result.append(concatenated_row) # Print the result print(result) Is there no build in Pandas function that can roll a window, and add the results to current row, like the above can?
This doesn't need windowing, IIUC, you can use df.shift: x = df.apply(lambda x: x.tolist(), axis=1) df[3] = (x.shift() + x) Output: 0 1 2 3 0 1 2 3 NaN 1 4 5 6 [1, 2, 3, 4, 5, 6] 2 7 8 9 [4, 5, 6, 7, 8, 9] 3 10 11 12 [7, 8, 9, 10, 11, 12] Adding window sizing: import pandas as pd import numpy as np from functools import reduce df = pd.DataFrame(np.arange(99).reshape(-1,3)) x = df.apply(lambda x: x.tolist(), axis=1) #change window size here window_size = 3 df[3] = reduce(lambda x, y: x+y, [x.shift(i) for i in range(window_size,-1,-1)]) df Output: 0 1 2 3 0 0 1 2 NaN 1 3 4 5 NaN 2 6 7 8 NaN 3 9 10 11 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] 4 12 13 14 [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] 5 15 16 17 [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] 6 18 19 20 [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] 7 21 22 23 [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] 8 24 25 26 [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] 9 27 28 29 [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] 10 30 31 32 [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] 11 33 34 35 [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35] 12 36 37 38 [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38] 13 39 40 41 [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] 14 42 43 44 [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] 15 45 46 47 [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] 16 48 49 50 [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50] 17 51 52 53 [42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53] 18 54 55 56 [45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56] 19 57 58 59 [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 20 60 61 62 [51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62] 21 63 64 65 [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65] 22 66 67 68 [57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68] 23 69 70 71 [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71] 24 72 73 74 [63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74] 25 75 76 77 [66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77] 26 78 79 80 [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80] 27 81 82 83 [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83] 28 84 85 86 [75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86] 29 87 88 89 [78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89] 30 90 91 92 [81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92] 31 93 94 95 [84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95] 32 96 97 98 [87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98]
3
1
79,617,835
2025-5-12
https://stackoverflow.com/questions/79617835/python-3-13-threading-lock-acquire-vs-lock-acquire-lock
In Python 3.13 (haven't checked lower versions) there seem to be two locking mechanisms for the threading.Lock class. I've looked online but found no mentions of acquire_lock or release_lock and wanted to ask if anyone knows what the difference is between them and the standard acquire and release methods. Here's the threading.Lock class for reference. The methods are commented as undocumented: class Lock: def __enter__(self) -> bool: ... def __exit__( self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None ) -> None: ... def acquire(self, blocking: bool = ..., timeout: float = ...) -> bool: ... def release(self) -> None: ... def locked(self) -> bool: ... def acquire_lock(self, blocking: bool = ..., timeout: float = ...) -> bool: ... # undocumented def release_lock(self) -> None: ... # undocumented def locked_lock(self) -> bool: ... # undocumented Just curious whether there's a difference between the call lock.acquire_lock and lock.acquire or whether this is merely a name-change that will take effect in the future.
Currently they are just aliases, and according to the GitHub history they have been that way for the past 15 years. You shouldn't be using undocumented functions; they can be removed at any time. {"acquire_lock", _PyCFunction_CAST(lock_PyThread_acquire_lock), ... {"acquire", _PyCFunction_CAST(lock_PyThread_acquire_lock), ... {"release_lock", lock_PyThread_release_lock, ... {"release", lock_PyThread_release_lock, The best thing to do is not to use any of the four functions directly anyway; instead use a with block, which releases the lock properly even when an exception is thrown. some_lock = Lock() with some_lock: # code protected by lock here # lock released even when an exception is thrown
2
8
79,615,872
2025-5-10
https://stackoverflow.com/questions/79615872/why-is-array-manipulation-in-jax-much-slower
I'm working on converting a transformation-heavy numerical pipeline from NumPy to JAX to take advantage of JIT acceleration. However, I’ve found that some basic operations like broadcast_to and moveaxis are significantly slower in JAX—even without JIT—compared to NumPy, and even for large batch sizes like 3,000,000 where I would expect JAX to be much quicker. ### Benchmark: moveaxis + broadcast_to ### NumPy: moveaxis + broadcast_to → 0.000116 s JAX: moveaxis + broadcast_to → 0.204249 s JAX JIT: moveaxis + broadcast_to → 0.054713 s ### Benchmark: broadcast_to only ### NumPy: broadcast_to → 0.000059 s JAX: broadcast_to → 0.062167 s JAX JIT: broadcast_to → 0.057625 s Am I doing something wrong? Are there better ways of performing these kind of manipulations? Here's a minimal benchmark ChatGPT generated, comparing broadcast_to and moveaxis in NumPy, JAX, and JAX with JIT: import timeit import jax import jax.numpy as jnp import numpy as np from jax import jit # Base transformation matrix M_np = np.array([[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]) M_jax = jnp.array(M_np) # Batch size n = 1_000_000 print("### Benchmark: moveaxis + broadcast_to ###") # NumPy t_numpy = timeit.timeit( lambda: np.moveaxis(np.broadcast_to(M_np[:, :, None], (4, 4, n)), 2, 0), number=10 ) print(f"NumPy: moveaxis + broadcast_to → {t_numpy:.6f} s") # JAX t_jax = timeit.timeit( lambda: jnp.moveaxis(jnp.broadcast_to(M_jax[:, :, None], (4, 4, n)), 2, 0).block_until_ready(), number=10 ) print(f"JAX: moveaxis + broadcast_to → {t_jax:.6f} s") # JAX JIT @jit def broadcast_and_move_jax(M): return jnp.moveaxis(jnp.broadcast_to(M[:, :, None], (4, 4, n)), 2, 0) # Warm-up broadcast_and_move_jax(M_jax).block_until_ready() t_jit = timeit.timeit( lambda: broadcast_and_move_jax(M_jax).block_until_ready(), number=10 ) print(f"JAX JIT: moveaxis + broadcast_to → {t_jit:.6f} s") print("\n### Benchmark: broadcast_to only ###") # NumPy t_numpy_b = timeit.timeit( lambda: np.broadcast_to(M_np[:, :, None], (4, 4, n)), number=10 ) print(f"NumPy: broadcast_to → {t_numpy_b:.6f} s") # JAX t_jax_b = timeit.timeit( lambda: jnp.broadcast_to(M_jax[:, :, None], (4, 4, n)).block_until_ready(), number=10 ) print(f"JAX: broadcast_to → {t_jax_b:.6f} s") # JAX JIT @jit def broadcast_only_jax(M): return jnp.broadcast_to(M[:, :, None], (4, 4, n)) broadcast_only_jax(M_jax).block_until_ready() t_jit_b = timeit.timeit( lambda: broadcast_only_jax(M_jax).block_until_ready(), number=10 ) print(f"JAX JIT: broadcast_to → {t_jit_b:.6f} s")
There are a couple things happening here that come from the different execution models of NumPy and JAX. First, NumPy operations like broadcasting, transposing, reshaping, slicing, etc. typically return views of the original buffer. In JAX, it is not possible for two array objects to share memory, and so the equivalent operations return copies. I suspect this is the largest contribution to the timing difference here. Second, NumPy tends to have very fast dispatch time for individual operations. JAX has much slower dispatch time for individual operations, and this can become important when the operation itself is very cheap (like "return a view of the array with different strides/shape") You might wonder given these points how JAX could ever be faster than NumPy. The key is JIT compilation of sequences of operations: within JIT-compiled code, sequences of operations are fused so that the output of each individual operation need not be allocated (or indeed, need not even exist at all as a buffer of intermediate values). Additionally, for JIT compiled sequences of operations the dispatch overhead is paid only once for the whole program. Compare this to NumPy where there's no way to fuse operations or to avoid paying the dispatch cost of each and every operation. So in microbenchmarks like this, you can expect JAX to be slower than NumPy. But for real-world sequences of operations wrapped in JIT, you should often find that JAX is faster, even when executing on CPU. This type of question comes up enough that there's a section devoted to it in JAX's FAQ: FAQ: is JAX faster than NumPy? Answering the followup question: Is the statement "In JAX, it is not possible for two array objects to share memory, and so the equivalent operations return copies", within a jitted environment? This question is not really well-formulated, because in a jitted environment, array objects do not necessarily correspond to buffers of values. Let's make this more concrete with a simple example: import jax @jax.jit def f(x): y = x[::2] return y.sum() You might ask: in this program, is y a copy or a view of x? The answer is neither, because y is never explicitly created. Instead, JIT fuses the slice and the sum into a single operation: the array x is the input, and the array y.sum() is the output, and the intermediate array y is never actually created. 
You can see this by printing the compiled HLO for this function: x = jax.numpy.arange(10) print(f.lower(x).compile().as_text()) HloModule jit_f, is_scheduled=true, entry_computation_layout={(s32[10]{0})->s32[]}, allow_spmd_sharding_propagation_to_parameters={true}, allow_spmd_sharding_propagation_to_output={true} %region_0.9 (Arg_0.10: s32[], Arg_1.11: s32[]) -> s32[] { %Arg_0.10 = s32[] parameter(0), metadata={op_name="jit(f)/jit(main)/reduce_sum"} %Arg_1.11 = s32[] parameter(1), metadata={op_name="jit(f)/jit(main)/reduce_sum"} ROOT %add.12 = s32[] add(s32[] %Arg_0.10, s32[] %Arg_1.11), metadata={op_name="jit(f)/jit(main)/reduce_sum" source_file="<ipython-input-1-9ea6c70efef5>" source_line=5} } %fused_computation (param_0.2: s32[10]) -> s32[] { %param_0.2 = s32[10]{0} parameter(0) %iota.0 = s32[5]{0} iota(), iota_dimension=0, metadata={op_name="jit(f)/jit(main)/iota" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %constant.1 = s32[] constant(2) %broadcast.0 = s32[5]{0} broadcast(s32[] %constant.1), dimensions={} %multiply.0 = s32[5]{0} multiply(s32[5]{0} %iota.0, s32[5]{0} %broadcast.0), metadata={op_name="jit(f)/jit(main)/mul" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %bitcast.1 = s32[5,1]{1,0} bitcast(s32[5]{0} %multiply.0), metadata={op_name="jit(f)/jit(main)/mul" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %gather.0 = s32[5]{0} gather(s32[10]{0} %param_0.2, s32[5,1]{1,0} %bitcast.1), offset_dims={}, collapsed_slice_dims={0}, start_index_map={0}, index_vector_dim=1, slice_sizes={1}, indices_are_sorted=true, metadata={op_name="jit(f)/jit(main)/gather" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %constant.0 = s32[] constant(0) ROOT %reduce.0 = s32[] reduce(s32[5]{0} %gather.0, s32[] %constant.0), dimensions={0}, to_apply=%region_0.9, metadata={op_name="jit(f)/jit(main)/reduce_sum" source_file="<ipython-input-1-9ea6c70efef5>" source_line=5} } ENTRY %main.14 (Arg_0.1: s32[10]) -> s32[] { %Arg_0.1 = s32[10]{0} parameter(0), metadata={op_name="x"} ROOT %gather_reduce_fusion = s32[] fusion(s32[10]{0} %Arg_0.1), kind=kLoop, calls=%fused_computation, metadata={op_name="jit(f)/jit(main)/reduce_sum" source_file="<ipython-input-1-9ea6c70efef5>" source_line=5} } The output is complicated, but the main thing to look at here is the ENTRY %main section, which is the "main" program generated by compilation. It consists of two steps: %Arg0.1 identifies the input argument, and ROOT %gather_reduce_fusion is essentially a single compiled kernel that sums every second element of the input. No intermediate arrays are generated. The blocks above this (e.g. the %fused_computation (param_0.2: s32[10]) -> s32[] definition) give you information about what operations are done within this kernel, but represent a single fused operation. Notice that the sliced array represented by y in the Python code never actually appears in the main function block, so questions about its memory layout cannot be answered except by saying "y doesn't exist in the compiled program".
3
4
79,618,258
2025-5-12
https://stackoverflow.com/questions/79618258/sns-histplot-does-not-fully-show-the-legend-when-setting-the-legend-outside-the
I tried to create a histogram with a legend outside the axes. Here is my code: import pandas as pd import seaborn as sns df_long = pd.DataFrame({ "Category": ["A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D"], "Round": ["Round1", "Round1", "Round1", "Round1", "Round2", "Round2", "Round2", "Round2", "Round3", "Round3", "Round3", "Round3", "Round4", "Round4", "Round4", "Round4"], "Value": [10, 20, 10, 30, 20, 25, 15, 25, 12, 15, 19, 6, 10, 29, 13, 19] }) ax = sns.histplot(df_long, x="Category", hue="Round", weights="Value", multiple="stack", shrink=.8, ) ax.set_ylabel('Weight') legend = ax.get_legend() legend.set_bbox_to_anchor((1, 1)) It works fine in jupyter notebook: But, if I try to create a png or pdf using matplotlib, the legend is not displayed completely. import matplotlib.pyplot as plt plt.savefig("histogram.png") plt.savefig("histogram.pdf") I've already tried to adjust the size of the graph by using plt.figure(figsize=(4, 4)) and the problem still exist.
The solution is to use bbox_inches = 'tight' in the plt.savefig() function: import matplotlib.pyplot as plt plt.savefig("histogram.png",bbox_inches='tight') plt.savefig("histogram.pdf", bbox_inches='tight')
2
3
79,618,176
2025-5-12
https://stackoverflow.com/questions/79618176/matplotlib-plot-continuous-time-series-of-data
I'm trying to continuously plot data received via network using matplotlib. On the y-axis, I want to plot a particular entity, while the x-axis is the current time. The x-axis should cover a fixed period of time, ending with the current time. Here's my current test code, which simulates the data received via network with random numbers. import threading import random import time import signal import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as md class NPData(): def __init__(self, size): self.data = np.zeros((size,2)) # size rows, 2 cols self.size = size self.head = 0 def __len__(self): return self.data.__len__() def __str__(self): return str(self.data) def append(self, data): self.data[self.head] = data self.head = (self.head + 1) % self.size def get_x_range(self): return (self.data.min(axis=0)[0], self.data.max(axis=0)[0]) class Producer(threading.Thread): def __init__(self): super().__init__() random.seed() self.running = True self.data = NPData(100) def get_data(self): return self.data.data def stop(self): self.running = False def run(self): while self.running: now_ms = md.date2num(int(time.time() * 1000)) # ms sample = np.array([now_ms, np.random.randint(0,999)]) self.data.append(sample) time.sleep(0.1) prog_do_run = True def signal_handler(sig, frame): global prog_do_run prog_do_run = False def main(): signal.signal(signal.SIGINT, signal_handler) p = Producer() p.start() fig, ax = plt.subplots() xfmt = md.DateFormatter('%H:%M:%S.%f') ax.xaxis.set_major_formatter(xfmt) #ax.plot(p.get_data()) #ax.set_ylim(0,999) plt.show(block=False) while prog_do_run: x_range = p.data.get_x_range() ax.set_xlim(x_range) #ax.set_ylim(0,999) print(p.get_data()) #ax.plot(p.get_data()) plt.draw() plt.pause(0.05) p.stop() Notes: The Producer class is supposed to emulate data received via network. I've encountered two main issues: I'm struggling to find out what actually needs to be called inside an endless loop in order for matplotlib to continuously update a plot (efficiently). Is it draw(), plot(), pause() or a combination of those? I've been generating milliseconds timestamps and matplotlib seems to not like them at all. The official docs say to use date2num(), which does not work. If I just use int(time.time() * 1000) or round(time.time() * 1000), I get OverflowError: int too big to convert from the formatter.
Basically there are a few small errors. For example, don't call ax.plot() in the loop: it adds a new line each time, which is inefficient and causes multiple lines to be drawn. I would suggest using a single Line2D object, creating it once and then updating its data with set_data() inside your loop. Additionally, use fig.canvas.draw_idle() or plt.pause() to refresh; plt.pause() is the simplest for interactive updates. Another error is the date handling: matplotlib expects dates in "days since 0001-01-01 UTC" as floats, so use datetime.datetime.now() and md.date2num() to convert to the correct format. Take into consideration that milliseconds are not directly supported in the tick labels, but you can format them. import threading import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as md import datetime import time import signal class NPData(): def __init__(self, size): self.data = np.zeros((size,2)) self.size = size self.head = 0 self.full = False def append(self, data): self.data[self.head] = data self.head = (self.head + 1) % self.size if self.head == 0: self.full = True def get_data(self): if self.full: return np.vstack((self.data[self.head:], self.data[:self.head])) else: return self.data[:self.head] def get_x_range(self, window_seconds=10): data = self.get_data() if len(data) == 0: now = md.date2num(datetime.datetime.now()) return (now - window_seconds/86400, now) latest = data[-1,0] return (latest - window_seconds/86400, latest) class Producer(threading.Thread): def __init__(self): super().__init__() self.running = True self.data = NPData(1000) def stop(self): self.running = False def run(self): while self.running: now = datetime.datetime.now() now_num = md.date2num(now) sample = np.array([now_num, np.random.randint(0,999)]) self.data.append(sample) time.sleep(0.1) prog_do_run = True def signal_handler(sig, frame): global prog_do_run prog_do_run = False def main(): signal.signal(signal.SIGINT, signal_handler) p = Producer() p.start() fig, ax = plt.subplots() xfmt = md.DateFormatter('%H:%M:%S.%f') ax.xaxis.set_major_formatter(xfmt) line, = ax.plot([], [], 'b-') ax.set_ylim(0, 999) plt.show(block=False) while prog_do_run: data = p.data.get_data() if len(data) > 0: line.set_data(data[:,0], data[:,1]) x_range = p.data.get_x_range(window_seconds=10) ax.set_xlim(x_range) #Keep window fixed to recent data ax.figure.canvas.draw_idle() plt.pause(0.05) p.stop()
2
3
79,615,662
2025-5-10
https://stackoverflow.com/questions/79615662/how-to-replace-all-occurrences-of-a-string-in-python-and-why-str-replace-mi
I want to replace all patterns 0 in a string by 00 in Python. For example, turning: '28 5A 31 34 0 0 0 F0' into '28 5A 31 34 00 00 00 F0'. I tried with str.replace(), but for some reason it misses some "overlapping" patterns: i.e.: $ python3 Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> '28 5A 31 34 0 0 0 F0'.replace(" 0 ", " 00 ") '28 5A 31 34 00 0 00 F0' >>> '28 5A 31 34 0 0 0 F0'.replace(" 0 ", " 00 ").replace(" 0 ", " 00 ") '28 5A 31 34 00 00 00 F0' notice the "middle" 0 pattern that is not replaced by 00. Any idea how I could replace all patterns at once? Of course I can do '28 5A 31 34 0 0 0 F0'.replace(" 0 ", " 00 ").replace(" 0 ", " 00 "), but this is a bit heavy... I actually did not expect this behavior (found it through a bug in my code). In particular, I did not expect this behavior from the documentation at https://docs.python.org/3/library/stdtypes.html#str.replace . Any explanation to why this happens / anything that should have tipped me that this is the expected behavior? It looks like replace does not work with consecutive overlapping repetitions of the pattern, but this was not obvious to me from the documentation? Edit 1: Thanks for the answer(s). The regexp works nicely. Still, I am confused. The official doc linked above says: "Return a copy of the string with all occurrences of substring old replaced by new. If count is given, only the first count occurrences are replaced. If count is not specified or -1, then all occurrences are replaced.". "Clearly" this is not the case? (or am I missing something?).
A better tactic would be to not look for spaces around the individual zeros, but to use regex substitution and look for word boundaries (\b): >>> import re >>> re.sub(r'\b0\b', '00', '28 5A 31 34 0 0 0 F0') '28 5A 31 34 00 00 00 F0' This has the added benefit that a 0 at the start or end of the string would get replaced into 00 as well. If you want the exact same semantics, you could use positive lookbehind and lookahead to not "consume" the space characters: >>> re.sub(r'(?<= )0(?= )', '00', '28 5A 31 34 0 0 0 F0') '28 5A 31 34 00 00 00 F0' The reason why your original attempt does not work is that when str.replace (or re.sub) finds a pattern to be replaced, it moves forward to the next character following the whole match. So: '28 5A 31 34 0 0 0 F0'.replace(' 0 ', ' 00 ') # ^-^ #1 match, ' 0 ' → ' 00 ' # ^ start looking for second match from here # ^-^ #2 match, ' 0 ' → ' 00 ' '28 5A 31 34 00 0 00 F0' # ^--^ ^--^ # #1 #2 The CPython (3.13.3) str.replace implementation can be seen from here: https://github.com/python/cpython/blob/6280bb547840b609feedb78887c6491af75548e8/Objects/unicodeobject.c#L10333, but it's a bit complex with all the Unicode handling. If it would work as you'd "wish", you still wouldn't get the output that you desire, as you'd get extra spaces (each overlapping 0 in the original string would cause its own 00 to appear into the output string): # Hypothetical: '28 5A 31 34 0 0 0 F0'.replace(' 0 ', ' 00 ') # ^-^ #1 match, ' 0 ' → ' 00 ' # ^-^ #2 match, ' 0 ' → ' 00 ' # ^-^ #3 match, ' 0 ' → ' 00 ' '28 5A 31 34 00 00 00 F0' # ^--^^--^^--^ # #1 #2 #3 If it still seems unintuitive why you'd get those extra spaces, consider ABA to be 0 and X__X to be 00 , and look at this: # Analogous to: ' 0 0 0 '.replace(' 0 ', ' 00 ') 'ABABABA'.replace('ABA', 'X__X') 'X__XBX__X' # What you get in reality now. 'X__XX__XX__X' # What you would get with the above logic (=extra consecutive X characters, i.e. spaces). And finally, if it would work like calling replace as many times as there's something to replace does, a trivial 'A'.replace('A', 'AA') would just loop infinitely ('A'→'AA'→'AAAA'→…). So, it just "has" to work this way. This is exactly why regex allows using lookahead and lookbehind to control which matched parts actually consume characters from the original string and which don't.
2
7
79,617,903
2025-5-12
https://stackoverflow.com/questions/79617903/renaming-automatic-aggregation-name-for-density-heatmaps-2d-histograms
When creating density heatmaps / 2d histograms, there is an automatic aggregation that can take place, which also sets the name as it appears on the legend. I'm trying to change how that aggregation is displayed on the legend. Consider the following example, taken directly from the plotly docs: import plotly.express as px df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.show() How can I pass a string that will alter the "count" as it appears on the legend? I've tried with .update_layout(legend_title_text = "Test string") but did not manage to get anywhere.
Try setting the title.text property of coloraxis_colorbar inside layout. df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.update_layout(coloraxis_colorbar=dict( title=dict( text="Number of Bills per Cell") ) ) fig.show() You can also define this in a single line using coloraxis_colorbar_title_text. import plotly.express as px df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.update_layout(coloraxis_colorbar_title_text="Number of Bills per Cell") fig.show() Colorscales - Plotly
2
1
79,616,857
2025-5-11
https://stackoverflow.com/questions/79616857/desired-frequency-in-discrete-fourier-transform-gets-shifted-by-the-factor-of-in
I have written a python script to compute DFT of a simple sin wave having frequency 3. I have taken the following consideration for taking sample of the sin wave sin function for test = sin( 2 * pi * 3 * t ) sample_rate = 15 time interval = 1/sample_rate = 1/15 = ~ 0.07 second sample_duration = 1 second (for test1) and 2 seconds (for test 2) sample_size = sample_rate * sample_duration = 15*2 = 30 samples I run the same code for sample_duration both 1 and 2 seconds. When sample duration is 1 second, the graph produce shows the presence of frequency=3 present in the sin wave,which is correct. But if I change the sample duration to 2 second, the graph peaks at frequency= 6, which does not present in the sin wave.But it is a factor of 2 increase of the original frequency (3*2) = 6. And if 3 second is taken as sample duration, graph peaks at 9 second. I was thinking that taking more sample for longer duration will produce finer result, but that is clearly not the case here. code : from sage.all import * import matplotlib.pyplot as plt import numpy as np t = var('t') sample_rate = 15 # will take 100 sample each second interval = 1 / sample_rate # time interval between each reading sample_duration = 1 # take sample over a duration of 1 second sample_size_N = sample_rate*sample_duration #count number of touples in r array, len(r) will give sample size/ total number of sample taken over a specific duration func = sin(3*2*pi*t) time_segment_arr = [] signal_sample_arr= [] # take reading each time interval over sample_duration period for time_segment in np.arange(0,sample_duration,interval): # give discrete value of the signal over specific time interval discrete_signal_value = func(t = time_segment) # push time value into array time_segment_arr.append(time_segment) # push signal amplitude into array signal_sample_arr.append(N(discrete_signal_value)) def construct_discrete_transform_func(): s = '' k = var('k') for n in range(0,sample_size_N,1): s = s+ '+'+str((signal_sample_arr[n])* e^(-(i*2*pi*k*n)/sample_size_N)) return s[1:] #omit the forward + sign dft_func = construct_discrete_transform_func() def calculate_frequency_value(dft_func,freq_val): k = var('k') # SR converts string to sage_symbolic_ring expression & fast_callable() allows to pass variable value to that expression ff = fast_callable(SR(dft_func), vars=[k]) return ff(freq_val) freq_arr = [] amplitude_arr = [] #compute frequency strength per per frequency for l in np.arange(0,sample_size_N,1): freq_value = calculate_frequency_value(dft_func,l) freq_arr.append(l) amplitude_arr.append(N(abs(freq_value)))
Your frequency axis is wrong. The lowest nonzero frequency on the DFT axis is 1/N in normalized units, which translates to 1/T in the time domain; that is, when the total time is 2 seconds, the first point after zero is at 0.5 Hz, not 1 Hz. The longest sine wave a DFT can represent (the lowest frequency) is one that completes a single cycle over the entire duration (k = 1). You can derive the 1/T by substituting t = n * Ts (where Ts is the sample interval) and Ts = T/N (where T is the total time), which gives f = k / T, so the lowest frequency k = 1 translates to f = 1 / T. This is just a plotting error in the not-shown plotting code.
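To see the scaling concretely, here is a minimal NumPy sketch (np.fft.fft stands in for the hand-built Sage DFT from the question, so treat it as an illustration rather than the original code): converting bin index k to Hertz with f = k / T puts the peak back at 3 Hz for any sample duration.

import numpy as np

sample_rate = 15
sample_duration = 2                          # seconds; try 1, 2 or 3
N = sample_rate * sample_duration

t = np.arange(N) / sample_rate
signal = np.sin(2 * np.pi * 3 * t)           # 3 Hz test signal

spectrum = np.abs(np.fft.fft(signal))
freqs_hz = np.arange(N) / sample_duration    # f = k / T, not f = k

peak_bin = int(np.argmax(spectrum[:N // 2]))
print(peak_bin, freqs_hz[peak_bin])          # bin 6 -> 3.0 Hz when T = 2 s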
2
2
79,616,449
2025-5-11
https://stackoverflow.com/questions/79616449/how-do-i-do-a-specific-aggregation-on-a-table-based-on-row-column-values-on-anot
I have loaded two fact tables CDI and Population and a couple dimension tables in DuckDB. I did joins on the CDI fact table and its respective dimension tables which yields a snippet of the table below And below is the Population fact table merged with its other dimension tables yielding this snippet below Now what I want to basically do is filter out the Population table based only on the values of this particular row of the CDI table. In this case the current row outlined in green will somehow do this query SELECT Year, SUM(Population) AS TotalPopulation FROM Population WHERE (Year BETWEEN 2018 AND 2018) AND (Age BETWEEN 18 AND 85) AND State = 'Pennsylvania' AND Sex IN ('Male', 'Female') AND Ethnicity IN ('Multiracial') AND Origin IN ('Not Hispanic') GROUP BY Year ORDER BY Year ASC This query aggregates the Population column values based on the row values of the CDI table. What I'm just at a loss in trying to implement is doing this aggregation operation for all row values in the CDI table. Here is a full visualization of what I'm trying to do. How would I implement this type of varying filtering aggregation based on each row column values of the CDI table? I'm using DuckDB as the OLAP DB here so ANSI SQL is what I'm trying to use to implement this task. Could it be possible only using this kind of SQL?
I agree with Chris Maurer's comment; here is a SQL query to achieve what you are looking for: SELECT CDI.YearStart, CDI.YearEnd, CDI.LocationDesc, CDI.AgeStart, CDI.AgeEnd, CDI.Sex, CDI.Ethnicity, CDI.Origin, SUM(pop.Population) AS TotalPopulation FROM CDI LEFT JOIN Population AS pop ON (pop.Year BETWEEN CDI.YearStart AND CDI.YearEnd) AND (CDI.Sex=pop.Sex OR CDI.Sex='both') AND (pop.Age BETWEEN CDI.AgeStart AND (CASE WHEN CDI.AgeEnd='infinity' THEN 1000 ELSE CDI.AgeEnd END)) AND (CDI.LocationDesc = pop.State) AND (CDI.Ethnicity=pop.Ethnicity OR CDI.Ethnicity='All') AND (CDI.Origin=pop.Origin OR CDI.Origin='Both') GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 9 DESC Hope this helps.
2
1
79,616,550
2025-5-11
https://stackoverflow.com/questions/79616550/selenium-4-25-opens-chrome-136-with-existing-profile-to-new-tab-instead-of-nav
I'm using Python with Selenium 4.25.0 to automate Google Chrome 136. My goal is to have Selenium use my existing, logged-in "Default" Chrome profile to navigate to a specific URL (https://aistudio.google.com/prompts/new_chat) and perform actions. The Problem: When I execute my script: Chrome launches successfully. It clearly loads my "Default" profile, as I see my personalized "New Tab" page (title: "新分頁", URL: chrome://newtab/) with my usual shortcuts, bookmarks bar, and theme. This confirms the --user-data-dir and --profile-directory arguments are pointing to the correct profile. However, the subsequent driver.get("https://aistudio.google.com/prompts/new_chat") command does not navigate the browser. The browser remains on the chrome://newtab/ page. I am very diligent about ensuring all chrome.exe processes are terminated (checked via Task Manager) before running the script. Environment: OS: Windows 11 Pro, Version 23H2 Python Version: 3.12.x Selenium Version: 4.25.0 Chrome Browser Version: 136.0.7103.93 (Official Build) (64-bit) ChromeDriver Version: 136.0.7103.xx (downloaded from official "Chrome for Testing" site for win64, matching Chrome's major.minor.build) Simplified Code Snippet: import time import os # For path checks from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.common.exceptions import WebDriverException # --- Configuration --- # User needs to replace CHROME_DRIVER_PATH with the full path to their chromedriver.exe CHROME_DRIVER_PATH = r'C:\path\to\your\chromedriver-win64\chromedriver.exe' # User needs to replace YourUserName with their actual Windows username CHROME_PROFILE_USER_DATA_DIR = r'C:\Users\YourUserName\AppData\Local\Google\Chrome\User Data' CHROME_PROFILE_DIRECTORY_NAME = "Default" # Using the standard default profile TARGET_URL = "https://aistudio.google.com/prompts/new_chat" # Example def setup_driver(): print(f"Driver Path: {CHROME_DRIVER_PATH}") print(f"User Data Dir: {CHROME_PROFILE_USER_DATA_DIR}") print(f"Profile Dir Name: {CHROME_PROFILE_DIRECTORY_NAME}") if not os.path.exists(CHROME_DRIVER_PATH): print(f"FATAL: ChromeDriver not found at '{CHROME_DRIVER_PATH}'.") return None if not os.path.isdir(CHROME_PROFILE_USER_DATA_DIR): print(f"FATAL: Chrome User Data dir not found at '{CHROME_PROFILE_USER_DATA_DIR}'.") return None chrome_options = Options() chrome_options.add_argument(f"--user-data-dir={CHROME_PROFILE_USER_DATA_DIR}") chrome_options.add_argument(f"--profile-directory={CHROME_PROFILE_DIRECTORY_NAME}") chrome_options.add_experimental_option("excludeSwitches", ["enable-automation", "load-extension"]) chrome_options.add_experimental_option('useAutomationExtension', False) # chrome_options.add_argument("--disable-blink-features=AutomationControlled") # Tried with and without chrome_options.add_argument("--start-maximized") try: service = Service(executable_path=CHROME_DRIVER_PATH) driver = webdriver.Chrome(service=service, options=chrome_options) print("WebDriver initialized.") return driver except WebDriverException as e: print(f"FATAL WebDriverException during setup: {e}") if "user data directory is already in use" in str(e).lower(): print(">>> Ensure ALL Chrome instances are closed via Task Manager.") return None except Exception as e_setup: print(f"Unexpected FATAL error during setup: {e_setup}") return None def main(): print("IMPORTANT: Ensure ALL Google Chrome instances are FULLY CLOSED before running this script.") input("Press Enter to confirm 
and continue...") driver = setup_driver() if not driver: print("Driver setup failed. Exiting.") return try: print(f"Browser launched. Waiting a few seconds for it to settle...") print(f"Initial URL: '{driver.current_url}', Initial Title: '{driver.title}'") time.sleep(4) # Increased wait after launch for profile to fully 'settle' print(f"Attempting to navigate to: {TARGET_URL}") driver.get(TARGET_URL) print(f"Called driver.get(). Waiting for navigation...") time.sleep(7) # Increased wait after .get() for navigation attempt current_url_after_get = driver.current_url current_title_after_get = driver.title print(f"After 7s wait - Current URL: '{current_url_after_get}', Title: '{current_title_after_get}'") if TARGET_URL not in current_url_after_get: print(f"NAVIGATION FAILED: Browser did not navigate to '{TARGET_URL}'. It's still on '{current_url_after_get}'.") # Could also try JavaScript navigation here for more info # print("Attempting JavaScript navigation as a fallback test...") # driver.execute_script(f"window.location.href='{TARGET_URL}';") # time.sleep(7) # print(f"After JS nav attempt - URL: '{driver.current_url}', Title: '{driver.title}'") else: print(f"NAVIGATION SUCCESSFUL to: {current_url_after_get}") except Exception as e: print(f"An error occurred during main execution: {e}") finally: print("Script execution finished or errored.") input("Browser will remain open for inspection. Press Enter to close...") if driver: driver.quit() if __name__ == "__main__": # Remind user to update paths if placeholders are detected if r"C:\path\to\your\chromedriver-win64\chromedriver.exe" == CHROME_DRIVER_PATH or \ r"C:\Users\YourUserName\AppData\Local\Google\Chrome\User Data" == CHROME_PROFILE_USER_DATA_DIR: print("ERROR: Default placeholder paths are still in the script.") print("Please update CHROME_DRIVER_PATH and CHROME_PROFILE_USER_DATA_DIR with your actual system paths.") else: main() Console Output (when it gets stuck on New Tab): Setting up Chrome driver from: C:\Users\stat\Downloads\chromedriver-win64\chromedriver-win64\chromedriver.exe Attempting to use Chrome User Data directory: C:\Users\stat\AppData\Local\Google\Chrome\User Data Attempting to use Chrome Profile directory name: Default WebDriver initialized successfully. Browser launched. Waiting a few seconds for it to settle... Initial URL: 'chrome://newtab/', Initial Title: '新分頁' DevTools remote debugging requires a non-default data directory. Specify this using --user-data-dir. [... some GCM / fm_registration_token_uploader errors may appear here ...] Attempting to navigate to: https://aistudio.google.com/prompts/new_chat Called driver.get(). Waiting for navigation... After 7s wait - Current URL: 'chrome://newtab/', Title: '新分頁' NAVIGATION FAILED: Browser did not navigate to 'https://aistudio.google.com/prompts/new_chat'. It's still on 'chrome://newtab/'. What I've Already Verified/Tried: ChromeDriver version precisely matches the Chrome browser's major.minor.build version (136.0.7103). All chrome.exe processes are terminated via Task Manager before script execution. The paths for CHROME_DRIVER_PATH, CHROME_PROFILE_USER_DATA_DIR, and CHROME_PROFILE_DIRECTORY_NAME are correct for my system. The browser visibly loads my "Default" profile (shows my theme, new tab page shortcuts). Tried various time.sleep() delays. The "DevTools remote debugging requires a non-default data directory" warning appears, as do some GCM errors, but the browser itself opens with the profile. 
My Question: Given that Selenium successfully launches Chrome using my specified "Default" profile (as evidenced by my personalized New Tab page loading), why would driver.get() fail to navigate away from chrome://newtab/? Are there specific Chrome options for Selenium 4.25+ or known issues with Chrome 136 that could cause this behavior when using an existing, rich user profile, even when Chrome is fully closed beforehand? How can I reliably make driver.get() take precedence over the default New Tab page loading in this scenario?
The root cause could be that the ChromeDriver (≥ v113 with “Chrome for Testing”) intentionally limits automation on “default” or regular profiles for security and stability. This is reflected in the warning: "DevTools remote debugging requires a non-default data directory" This means: ChromeDriver can't fully control Chrome if you use --user-data-dir pointing to Chrome's real profile directory. Navigation via driver.get() fails silently or gets overridden by the "New Tab" logic in Chrome itself. Even though Chrome opens with the correct theme/profile, the DevTools protocol is not fully attached, so driver.get() doesn't execute properly. To fix this issue, you can use a dedicated test profile instead of Default Create a copy of your "Default" profile into a separate folder (e.g., ChromeProfileForSelenium) and point --user-data-dir there without specifying the --profile-directory. create a copy mkdir "C:\SeleniumChromeProfile" xcopy /E /I "%LOCALAPPDATA%\Google\Chrome\User Data\Default" "C:\SeleniumChromeProfile\Default" and then update your script: chrome_options.add_argument("--user-data-dir=C:\\SeleniumChromeProfile") # Omit this line: chrome_options.add_argument("--profile-directory=Default") This avoids Chrome's protections around Default, and lets driver.get() work as expected.
2
1
79,616,310
2025-5-11
https://stackoverflow.com/questions/79616310/firebase-admin-taking-an-infinite-time-to-work
I recently started using firebase admin in python. I created this example script: import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate("./services.json") options = { "databaseURL": 'https://not_revealing_my_url.com' } app = firebase_admin.initialize_app(cred, options) client = firestore.client(app) print(client.document("/").get()) I already activated firestore and I placed services.json (which I genrated from "Service Accounts" on my firebase project) in the same directory as my main.py file. From all sources I could find, this should've allowed me to use firestore, but for some reason the app takes an infinite long time to respond. I tried looking through the stack after Interrupting the script, and the only major thing I could find was: grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B::1%5D:8081: tcp handshaker shutdown" debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B::1%5D:8081: tcp handshaker shutdown", grpc_status:14, created_time:"2025-05-11T08:47:32.8676384+00:00"}" I am assuming this is a common issue, but I failed to find any solution online, can someone help me out? EDIT: I had firebase emulator working from a previous job, It seems firebase_admin tried using firebase emulator which was inactive. I just had to remove it from my PATH
Yes, you're on the right track with setting up Firebase Admin in Python. The error you're seeing: grpc._channel._MultiThreadedRendezvous: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:[::1]:8081: tcp handshaker shutdown" strongly suggests that the client is trying to connect to Firestore Emulator, not the actual Firestore production database. Root Cause This specific port 8081 and the loopback address ([::1]) are the default for the Firestore Emulator, not the production Firestore service. So your environment is likely set to use the emulator without the emulator actually running. Fix You likely have one of the following environment variables set (either globally or in your shell/session): FIRESTORE_EMULATOR_HOST FIREBASE_FIRESTORE_EMULATOR_ADDRESS If either of these is set, the SDK will try to connect to a local emulator instead of the actual Firestore instance. Solution Steps Check and Unset the Environment Variable(s): In your terminal (Linux/macOS): unset FIRESTORE_EMULATOR_HOST unset FIREBASE_FIRESTORE_EMULATOR_ADDRESS In PowerShell (Windows): Remove-Item Env:FIRESTORE_EMULATOR_HOST Remove-Item Env:FIREBASE_FIRESTORE_EMULATOR_ADDRESS Or, if these are set in your IDE or .env file, remove them from there. Restart Your Application after unsetting the variables. Verify you're not connecting to the emulator: In your Python script, you should not manually configure emulator settings unless you're developing against the emulator. Your firebase_admin.initialize_app() call is correct for connecting to the live Firestore. Also: Your Script Has a Small Issue This line: print(client.document("/").get()) Is not valid Firestore usage. client.document("/") is not a valid document path. Firestore document paths must include both collection and document ID. E.g.: doc_ref = client.document("test_collection/test_document") print(doc_ref.get().to_dict())
1
2
79,610,568
2025-5-7
https://stackoverflow.com/questions/79610568/store-numpy-array-in-pandas-dataframe
I want to store a numpy array in pandas cell. This does not work: import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = ["val", "unit"]) df.loc["bnd"] = [bnd1, "N/A"] df.loc["bnd"] = [bnd2, "N/A"] But this does: import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = ["val"]) df.loc["bnd"] = [bnd1] df.loc["bnd"] = [bnd2] Can someone explain why, and what's the solution? Edit: The first returns: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. The complete traceback is below: > --------------------------------------------------------------------------- AttributeError Traceback (most recent call > last) File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3185, > in ndim(a) 3184 try: > -> 3185 return a.ndim 3186 except AttributeError: > > AttributeError: 'list' object has no attribute 'ndim' > > During handling of the above exception, another exception occurred: > > ValueError Traceback (most recent call > last) Cell In[10], line 8 > 6 df = pd.DataFrame(columns = ["val", "unit"]) > 7 df.loc["bnd"] = [bnd1, "N/A"] > ----> 8 df.loc["bnd"] = [bnd2, "N/A"] > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:849, > in _LocationIndexer.__setitem__(self, key, value) > 846 self._has_valid_setitem_indexer(key) > 848 iloc = self if self.name == "iloc" else self.obj.iloc > --> 849 iloc._setitem_with_indexer(indexer, value, self.name) > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:1835, > in _iLocIndexer._setitem_with_indexer(self, indexer, value, name) > 1832 # align and set the values 1833 if take_split_path: 1834 > # We have to operate column-wise > -> 1835 self._setitem_with_indexer_split_path(indexer, value, name) 1836 else: 1837 self._setitem_single_block(indexer, > value, name) > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:1872, > in _iLocIndexer._setitem_with_indexer_split_path(self, indexer, value, > name) 1869 if isinstance(value, ABCDataFrame): 1870 > self._setitem_with_indexer_frame_value(indexer, value, name) > -> 1872 elif np.ndim(value) == 2: 1873 # TODO: avoid np.ndim call in case it isn't an ndarray, since 1874 # that will > construct an ndarray, which will be wasteful 1875 > self._setitem_with_indexer_2d_value(indexer, value) 1877 elif > len(ilocs) == 1 and lplane_indexer == len(value) and not > is_scalar(pi): 1878 # We are setting multiple rows in a single > column. > > File <__array_function__ internals>:200, in ndim(*args, **kwargs) > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3187, > in ndim(a) 3185 return a.ndim 3186 except AttributeError: > -> 3187 return asarray(a).ndim > > ValueError: setting an array element with a sequence. The requested > array has an inhomogeneous shape after 1 dimensions. The detected > shape was (2,) + inhomogeneous part. I'm using pandas 2.0.3 and numpy 1.24.4
The issue is not the numpy array itself but how the row assignment is processed. With two columns, assigning to the existing row label takes pandas' column-wise "split path", which calls np.ndim() on the plain list [bnd2, "N/A"]; NumPy then tries to build a single array out of a (74, 8) array plus a string and raises the inhomogeneous-shape ValueError (note in your traceback that the first assignment, which creates the row, succeeds, and the error comes from the second one). With a single column, the one-element list is treated as one value per column and the array is stored as a single object, which is why that version works. To fix this, pass something pandas can align column by column, either a pd.Series or a dictionary: First way, using pd.Series: df.loc["bnd"] = pd.Series([bnd2, "N/A"], index=["val", "unit"]) Or second way, using a dictionary: df.loc["bnd"] = {"val": bnd2, "unit": "N/A"} good luck mate
2
1
79,616,218
2025-5-11
https://stackoverflow.com/questions/79616218/typeerror-sequence-item-0-expected-str-instance-int-found-what-should-i-do-t
matrix1=[[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]] m2="\n".join(["\t".join([ritem for ritem in item]) for item in matrix1]) print(m2) Where am I wrong that I receive this error?
The values you try to join with str.join must be strings themselves. You're trying to join ints and this is causing the error you're seeing. You want: m2 = "\n".join(["\t".join([str(ritem) for ritem in item]) for item in matrix1]) Note that you can pass any iterable, and not just a list, so you can remove some extraneous [ and ] to create generator expressions rather than list comprehensions. "\n".join("\t".join(str(ritem) for ritem in item) for item in matrix1) Or even just: "\n".join("\t".join(map(str, item)) for item in matrix1)
1
2
79,615,098
2025-5-10
https://stackoverflow.com/questions/79615098/is-there-simpler-way-to-get-all-nested-text-inside-of-elementtree
I am currently using the xml.etree Python library to parse HTML. After finding a target DOM element, I am attempting to extract its text. Unfortunately, it seems that the .text attribute is severely limited in its functionality and will only return the immediate inner text of an element (and not anything nested). Do I really have to loop through all the children of the ElementTree? Or is there a more elegant solution?
You can use itertext(), too. If you don't want the extra whitespace, indentation and line breaks, you can normalize the text with split() and join(). import xml.etree.ElementTree as ET html = """<html> <head> <title>Example page</title> </head> <body> <p>Moved to <a href="http://example.org/">example.org</a> or <a href="http://example.com/">example.com</a>.</p> </body> </html>""" root = ET.fromstring(html) target_element = root.find(".//body") # get all text all_text = ''.join(target_element.itertext()) # get all text and remove line breaks, indentation etc. all_text_clear = ' '.join(all_text.split()) print(all_text) print(all_text_clear) Output: Moved to example.org or example.com. Moved to example.org or example.com.
1
1
79,615,560
2025-5-10
https://stackoverflow.com/questions/79615560/how-to-select-save-rows-with-multiple-same-value-in-pandas
I have financial data where I need to save / find rows that have multiple same value and a condition where the same value happened more than / = 2 and not (value)equal to 0 or < 1. Say I have this: A B C D E F G H I 5/7/2025 21:00 0 0 0 0 0 0 0 0 5/7/2025 21:15 0 0 19598.8 0 19598.8 0 0 0 5/7/2025 21:30 0 0 0 0 0 0 0 0 5/7/2025 21:45 0 0 0 19823.35 0 0 0 0 5/7/2025 22:00 0 0 0 0 0 0 0 0 5/7/2025 22:15 0 0 0 0 0 0 0 0 5/7/2025 22:30 0 0 0 19975.95 0 19975.95 0 19975.95 5/7/2025 23:45 0 0 0 0 0 0 0 0 5/8/2025 1:00 0 0 19830.2 0 0 0 0 0 5/8/2025 1:15 0 0 0 0 0 0 0 0 5/8/2025 1:30 0 0 0 0 0 0 0 0 5/8/2025 1:45 0 0 0 0 0 0 0 0 I want this along with other datas in those rows: A B C D E F G H I 5/7/2025 21:15 0 0 19598.8 0 19598.8 0 0 0 5/7/2025 22:30 0 0 0 19975.95 0 19975.95 0 19975.95
A simple approach could be to select the columns of interest, then identify if any value is duplicated within a row. Then select the matching rows with boolean indexing: mask = df.loc[:, 'B':].T out = df[mask.apply(lambda x: x.duplicated(keep=False)).where(mask >= 1).any()] A potentially more efficient approach could be to use numpy. Select the values, mask the values below 1, sort them and identify if any 2 are identical in a row with diff + isclose: mask = df.loc[:, 'B':].where(lambda x: x>=1).values mask.sort() out = df[np.isclose(np.diff(mask), 0).any(axis=1)] Output: A B C D E F G H I 1 5/7/2025 21:15 0 0 19598.8 0.00 19598.8 0.00 0 0.00 6 5/7/2025 22:30 0 0 0.0 19975.95 0.0 19975.95 0 19975.95
2
0
79,615,397
2025-5-10
https://stackoverflow.com/questions/79615397/how-to-locate-elements-simultaneously
By nature, Playwright locator is blocking, so whenever it's trying to locate for an element X, it stops and waits until that element is located or it times out. However, I want to see if it is possible to make it so that it locates two elements at once, and, if either one is found, proceed forward, based on whichever was found. Is something like that possible in Python Playwright? Thanks
or_​ Added in: v1.33 Creates a locator matching all elements that match one or both of the two locators. Note that when both locators match something, the resulting locator will have multiple matches, potentially causing a locator strictness violation. Usage Consider a scenario where you'd like to click a "New email" button, but sometimes a security settings dialog appears instead. In this case, you can wait for either a "New email" button or a dialog and act accordingly. note If both "New email" button and security dialog appear on screen, the "or" locator will match both of them, possibly throwing the "strict mode violation" error. In this case, you can use locator.first to only match one of them. new_email = page.get_by_role("button", name="New") dialog = page.get_by_text("Confirm security settings") expect(new_email.or_(dialog).first).to_be_visible() if (dialog.is_visible()): page.get_by_role("button", name="Dismiss").click() new_email.click()
3
4
79,615,284
2025-5-10
https://stackoverflow.com/questions/79615284/how-to-remove-duplicates-from-this-nested-dataframe
I have a dataframe as below and I want remove the duplicates and want the output as mentioned below. Tried few things but not working as expected. New to pandas. import pandas as pd # Sample DataFrame data = { "some_id": "xxx", "some_email": "[email protected]", "This is Sample": [ { "a": "22", "b": "Y", "c": "33", "d": "x" }, { "a": "44", "b": "N", "c": "55", "d": "Y" }, { "a": "22", "b": "Y", "c": "33", "d": "x" }, { "a": "44", "b": "N", "c": "55", "d": "Y" }, { "a": "22", "b": "Y", "c": "33", "d": "x" }, { "a": "44", "b": "N", "c": "55", "d": "Y" } ] } df = pd.DataFrame(data) print(df) The output is some_id some_email This is Sample 0 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 1 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} 2 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 3 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} 4 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 5 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} I want to remove duplicates and the output should look like some_id some_email This is Sample 0 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 1 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} How can this be achieved? I tried multiple ways some times it fails with unhashable dict. I have pretty big nested data frame like this. I am using pandas dataframe and python. New to this technology
The issue you're encountering (e.g., unhashable type: 'dict') happens because dictionaries are mutable and unhashable, so drop_duplicates() doesn't work directly on them. To deduplicate rows where one of the columns contains dictionaries, you can: Convert dictionaries to strings, use drop_duplicates(), then Convert the strings back to dictionaries (if needed). Here’s a clean and simple way to achieve your desired output: https://code.livegap.com/?st=a50pbcrjkjk
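For reference, here is a minimal sketch of the string round-trip described above (the toy frame below only mirrors the structure of the question's data; json.dumps with sort_keys gives each dict a canonical, comparable key to deduplicate on):

import json
import pandas as pd

# Toy frame mirroring the question's structure: dicts stored in one column
df = pd.DataFrame({
    "some_id": ["xxx"] * 4,
    "This is Sample": [
        {"a": "22", "b": "Y", "c": "33", "d": "x"},
        {"a": "44", "b": "N", "c": "55", "d": "Y"},
        {"a": "22", "b": "Y", "c": "33", "d": "x"},
        {"a": "44", "b": "N", "c": "55", "d": "Y"},
    ],
})

# Serialize each dict to a canonical string so it becomes comparable,
# then keep only the first occurrence of each serialized value.
key = df["This is Sample"].apply(lambda d: json.dumps(d, sort_keys=True))
deduped = df.loc[~key.duplicated()].reset_index(drop=True)
print(deduped)   # two rows remain; the original dicts are left untouched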
2
1
79,614,976
2025-5-9
https://stackoverflow.com/questions/79614976/does-file-obj-close-nicely-close-file-objects-in-other-modules-that-have-been
I have a file main_file.py that creates a global variable file_obj by opening a text file and imports a module imported_module.py which has functions that write to this file and therefore also has a global variable file_obj which I set equal to file_obj in main_file.py: main_file.py import imported_module as im file_obj = open('text_file.txt', mode='w') im.file_obj = file_obj def main(): a = 5 b = 7 im.add_func(a, b) im.multiply_func(a, b) return def add_func(x, y): z = x + y file_obj.write(str(z) + '\n') return main() file_obj.close() imported_module.py file_obj = None def multiply_func(x, y): z = x * y file_obj.write(str(z) + '\n') return If I close file_obj in main_file.py as above, does this also nicely close file_obj in imported_module.py? (In the MRE above, I could add im.file_obj.close() to main_file.py just to be sure. However, a generalization of this explicit approach does not appear possible if imported_module.py imports a second module imported_module0.py which also has a global variable file_obj and sets this variable to its own copy of file_obj with a command like im0.file_obj = file_obj.)
Yes. The two variables refer to the same file object. Closing either closes the object itself, it doesn't matter which variable you use to refer to it. This is no different from having two variable referring to the same list, a modification of one is visible through the other: a = [1, 2, 3] b = a a.append(4) print(b) will print [1, 2, 3, 4]
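A small self-contained check of the same point, using types.ModuleType as a stand-in for the real imported_module (an assumption made purely for illustration):

import types

im = types.ModuleType("imported_module")   # stand-in for imported_module
im.file_obj = None

file_obj = open("text_file.txt", mode="w")
im.file_obj = file_obj                     # two names, one file object

file_obj.close()
print(im.file_obj is file_obj)             # True
print(im.file_obj.closed)                  # True: closing via either name closed it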
1
1
79,613,844
2025-5-9
https://stackoverflow.com/questions/79613844/tkinter-widget-not-appearing-on-form
I’m having trouble working out why a widget doesn’t appear on my tkinter form. Here is what I’m doing: Create a form Create a widget (a label) with the form as the master. Create a Notebook and Frame and add them to the form. Create additional widgets with the form as the master. Add the widgets to the form using grid, and specifying the in_ parameter. Any widgets I create before the notebook and frame don’t appear, even though I don’t add them till after they’ve been created. Here is some sample code: import tkinter import tkinter.ttk form = tkinter.Tk() label1 = tkinter.ttk.Label(form, text='Test Label 1') # This one doesn’t appear notebook = tkinter.ttk.Notebook(form) notebook.pack(expand=True) mainframe = tkinter.ttk.Frame(notebook, padding='13 3 12 12') notebook.add(mainframe, text='Test Page') label2 = tkinter.ttk.Label(form, text='Test Label 2') # This one works entry = tkinter.ttk.Entry(form) label1.grid(in_=mainframe, row=1, column=1) label2.grid(in_=mainframe, row=2, column=1) entry.grid(in_=mainframe, row=3, column=1) form.mainloop() Note that label1 doesn’t appear even though there is a space for it. If I print(id(form)) before and after creating the notebook and frame, they are the same, so it’s not as if the form itself has changed. Where has that first widget gone to and how can I get it to appear?
The behavior has to do with stacking order. Widgets created before the notebook are lower in the stacking order. In effect it is behind the notebook. As you correctly observed, a row has been allocated for the widget, but since it's behind the notebook it isn't visible. You can make it appear by calling lift on the widget to raise it in the stacking order: label1.lift()
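In the sample code from the question, that means for example:
label1.grid(in_=mainframe, row=1, column=1)
label1.lift()  # raise label1 above the notebook in the stacking order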
1
3
79,614,850
2025-5-9
https://stackoverflow.com/questions/79614850/how-to-replace-string-values-in-a-strict-way-in-polars
I'm working with a Polars DataFrame that contains a column with string values. I aim to replace specific values in this column using the str.replace_many() method. My dataframe: import polars as pl df = (pl.DataFrame({"Products": ["cell_foo","cell_fooFlex","cell_fooPro"]})) Current approach: mapping= { "cell_foo" : "cell", "cell_fooFlex" : "cell", "cell_fooPro": "cell" } (df.with_columns(pl.col("Products").str.replace_many(mapping ).alias("Replaced"))) Output: shape: (3, 2) ┌──────────────┬──────────┐ │ Products ┆ Replaced │ │ --- ┆ --- │ │ str ┆ str │ ╞══════════════╪══════════╡ │ cell_foo ┆ cell │ │ cell_fooFlex ┆ cellFlex │ │ cell_fooPro ┆ cellPro │ └──────────────┴──────────┘ Desired Output: shape: (3, 2) ┌──────────────┬──────────┐ │ Products ┆ Replaced │ │ --- ┆ --- │ │ str ┆ str │ ╞══════════════╪══════════╡ │ cell_foo ┆ cell │ │ cell_fooFlex ┆ cell │ │ cell_fooPro ┆ cell │ └──────────────┴──────────┘ How can I modify my approach to ensure that replacements occur only when the entire string matches a key in the mapping?
The top-level Expr.replace() and .replace_strict() are for replacing entire "values". df.with_columns(pl.col("Products").replace(mapping).alias("Replaced")) shape: (3, 2) ┌──────────────┬──────────┐ │ Products ┆ Replaced │ │ --- ┆ --- │ │ str ┆ str │ ╞══════════════╪══════════╡ │ cell_foo ┆ cell │ │ cell_fooFlex ┆ cell │ │ cell_fooPro ┆ cell │ └──────────────┴──────────┘
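If you want unmapped values to be an error instead of being kept unchanged, .replace_strict() raises when it meets a value that has no key in the mapping (same df and mapping as above):
df.with_columns(
    pl.col("Products").replace_strict(mapping).alias("Replaced")
)
# raises if any product is missing from `mapping`; pass `default=...` to supply a fallback value instead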
3
1
79,614,700
2025-5-9
https://stackoverflow.com/questions/79614700/how-to-display-years-on-the-the-y-axis-of-horizontal-bar-chart-subplot-when-th
I'm plotting date vs frequency horizontal bar charts that compares the monthly distribution pattern over time for a selection of crimes as subplots. The problem is the tick labels of the y-axis, which represents the date, display all the months over period of 2006-2023. I want to instead display the year whilst preserving the monthly count of the plot. Basically change the scale from month to year without changing the data being plotted. Here's a sample of my code below: Dataset: https://drive.google.com/file/d/11MM-Vao6_tHGTRMsLthoMGgtziok67qc/view?usp=sharing import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates df = pd.read_csv('NYPD_Arrests_Data__Historic__20250113_111.csv') df['ARREST_DATE'] = pd.to_datetime(df['ARREST_DATE'], format = '%m/%d/%Y') df['ARREST_MONTH'] = df['ARREST_DATE'].dt.to_period('M').dt.to_timestamp() # crimes, attributes and renames crimes = ['DANGEROUS DRUGS', 'DANGEROUS WEAPONS', 'ASSAULT 3 & RELATED OFFENSES', 'FELONY ASSAULT'] attributes = ['PERP_RACE'] titles = ['Race'] # loops plot creation over each attribute for attr, title in zip(attributes, titles): fig, axes = plt.subplots(1, len(crimes), figsize = (4 * len(crimes), 6), sharey = 'row') for i, crime in enumerate(crimes): ax = axes[i] crime_df = df[df['OFNS_DESC'] == crime] pivot = pd.crosstab(crime_df['ARREST_MONTH'], crime_df[attr]) # plots stacked horizontal bars pivot.plot(kind = 'barh', stacked = True, ax = ax, width = 0.9, legend = False) ax.set_title(crime) ax.set_xlabel('Frequency') ax.set_ylabel('Month' if i == 0 else '') # shows the y-axis only on first plot ax.xaxis.set_tick_params(labelsize = 8) ax.set_yticks(ax.get_yticks()) # adds one common legend accoss plots handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, title = title, loc = 'upper center', ncol = len(df[attr].unique()), bbox_to_anchor = (0.5, 0.94)) fig.suptitle(f'Crime Frequency Distribution by Year and {title}', fontsize = 20) plt.tight_layout(rect = [0, 0, 1, 0.90]) plt.show() Here's an image of what I currently see.
pandas makes the assumption that the major axis of a bar-chart is always categorical, and therefore converts your values to strings prior to plotting. This means that it forces matplotlib to render a label for every bar you have. The way to do this with minimal changes to your code would be to manually override the yticklabels with your own custom ones. You can create a Series that contains the year (as a string) whenever the year in the current row is different than that of the next row. Then fill in empty strings for the other case when the year of the current row is the same as the next row. import pandas as pd s = pd.Series([2000, 2001, 2002, 2003]).repeat(3) print( pd.DataFrame({ 'orig': s, 'filtered': s.pipe(lambda s: s.astype('string').where(s != s.shift(), '')) }) ) # orig filtered # 0 2000 2000 # 0 2000 # 0 2000 # 1 2001 2001 # 1 2001 # 1 2001 # 2 2002 2002 # 2 2002 # 2 2002 # 3 2003 2003 # 3 2003 # 3 2003 Putting this into action in your code would look like: import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates df = pd.read_csv('NYPD_Arrests_Data__Historic__20250113_111.csv') df['ARREST_DATE'] = pd.to_datetime(df['ARREST_DATE'], format = '%m/%d/%Y') df['ARREST_MONTH'] = df['ARREST_DATE'].dt.to_period('M').dt.to_timestamp() # crimes, attributes and renames crimes = ['DANGEROUS DRUGS', 'DANGEROUS WEAPONS', 'ASSAULT 3 & RELATED OFFENSES', 'FELONY ASSAULT'] attributes = ['PERP_RACE'] titles = ['Race'] # loops plot creation over each attribute for attr, title in zip(attributes, titles): fig, axes = plt.subplots(1, len(crimes), figsize = (4 * len(crimes), 6), sharey = 'row') for i, crime in enumerate(crimes): ax = axes[i] crime_df = df[df['OFNS_DESC'] == crime] pivot = pd.crosstab(crime_df['ARREST_MONTH'], crime_df[attr]) # plots stacked horizontal bars pivot.plot(kind = 'barh', stacked = True, ax = ax, width = 0.9, legend = False) ax.set_title(crime) ax.set_xlabel('Frequency') ax.set_ylabel('Month' if i == 0 else '') # shows the y-axis only on first plot ax.xaxis.set_tick_params(labelsize = 8) ax.yaxis.set_tick_params(size=0) yticklabels = ( pivot.index.year.to_series() .pipe( lambda s: s.astype('string').where(s != s.shift(), '') ) ) ax.set_yticklabels(yticklabels) axes.flat[0].invert_yaxis() handles, labels = axes.flat[0].get_legend_handles_labels() fig.legend(handles, labels, title = title, loc = 'upper center', ncol = len(df[attr].unique()), bbox_to_anchor = (0.5, 0.94)) fig.suptitle(f'Crime Frequency Distribution by Year and {title}', fontsize = 20) plt.tight_layout(rect = [0, 0, 1, 0.90]) plt.show() Note that I also inverted the y-axis to make the dates increase as the viewer moves their eyes down the chart. This is done with the axes.flat[0].invert_yaxis() line (it inverts tha axis on all charts since they share the y-axis)
1
0
79,614,770
2025-5-9
https://stackoverflow.com/questions/79614770/how-can-i-get-all-thing-names-from-a-thing-group-in-aws-iot-core-using-a-lambda
I'm trying to get all the thing names that are part of a specific thing group in AWS IoT Core using a Python Lambda function. I checked the Boto3 documentation looking for a function that retrieves the names of things inside a specific thing group, but I couldn't find anything that does exactly that. Is there a way to fetch all the thing names from a thing group at once and store them in a list?
You can use the BOTO3 client to retrieve IoT things in a thing group. Here is the Python code. You need to use this Python code in an AWS Lambda function to address your use case. For additional AWS code examples, refer to the AWS Code Library -- where you will find thousands of examples in various SDKs, CLI, etc. import boto3 def list_things_in_group(group_name, region='us-east-1'): client = boto3.client('iot', region_name=region) try: response = client.list_things_in_thing_group( thingGroupName=group_name, recursive=False # Set to True if you want to include child groups ) things = response.get('things', []) if not things: print(f"No things found in group: {group_name}") else: print(f"Things in group '{group_name}':") for thing_name in things: print(f"- {thing_name}") describe_thing(client, thing_name) except client.exceptions.ResourceNotFoundException: print(f"Thing group '{group_name}' not found.") except Exception as e: print(f"Error: {e}") def describe_thing(client, thing_name): response = client.describe_thing(thingName=thing_name) print(f" Thing Name: {response.get('thingName')}") print(f" Thing ARN: {response.get('thingArn')}") print() # Example usage: if __name__ == "__main__": list_things_in_group("YourThingGroupName")
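list_things_in_thing_group returns results in pages, so if the group can contain more things than fit in one response you can follow nextToken and collect every thing name into a single list. A sketch along those lines (the function name is just illustrative):
import boto3

def get_all_thing_names(group_name, region='us-east-1'):
    client = boto3.client('iot', region_name=region)
    thing_names = []
    kwargs = {'thingGroupName': group_name, 'recursive': False}
    while True:
        response = client.list_things_in_thing_group(**kwargs)
        thing_names.extend(response.get('things', []))
        next_token = response.get('nextToken')
        if not next_token:
            break
        kwargs['nextToken'] = next_token
    return thing_names

# Example usage:
# names = get_all_thing_names("YourThingGroupName")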
1
0
79,609,220
2025-5-6
https://stackoverflow.com/questions/79609220/documenting-a-script-step-by-step-with-sphinx
I am documenting a python library with Sphinx. I have a couple of example scripts which I'd like to document in a narrative way, something like this: #: Import necessary package and define :meth:`make_grid` import numpy as np def make_grid(a,b): """ Make a grid for constant by piece functions """ x = np.linspace(0,np.pi) xmid = (x[:-1]+x[1:])/2 h = x[1:]-x[:-1] return xmid,h #: Interpolate a function xmid,h = make_grid(0,np.pi) y = np.sin(xmid) #: Calculate its integral I = np.sum(y*h) print ("Result %g" % I ) Those scripts should remain present as executable scripts in the repository, and I want to avoid duplicating their code into comments. I would like to generate the corresponding documentation, something like : Is there any automated way to do so? This would allow me not to duplicate the example script in the documentation. It seems to me this was the object of this old question but in my hands viewcode extension doesn't interpret comments, it just produces an html page with quoted code, comments remain comments.
Take a look at the sphinx-gallery extension, which seems to do what you require. With this extension, if you have a Python script, you must start it with a header docstring, and then you can add comments that will be formatted as text rather than code using the # %% syntax, e.g., """ My example script. """ import numpy as np # %% # This will be a text block x = np.linspace(0, 10, 100) y = np.sin(2 * np.pi * x) # %% # Another block of text More details of the syntax is described here, and various examples are, e.g., here. Alternative option If the sphinx-gallery option is not appropriate (i.e., you don't really want a thumbnail-style gallery page linking to the examples), you could instead make use of the nbsphinx extension and the jupytext package. You can write your example Python scripts in jupytext's percent format, and then generate the pages via an intermediate conversion to a Jupyter notebook. For example (after installing both nbsphinx and jupytext), if you had a package structure like: . ├── docs │ ├── Makefile │ ├── conf.py │ ├── examples -> ../src/examples/ │ ├── index.rst │ └── make.bat └── src └── examples └── narrative.py where in this case I've symbolic linked the src/examples directory into the docs directory, you could edit your Sphinx conf.py file to contain: # add nbsphinx to extensions extensions = [ ... "nbsphinx", ] # this converts .py files with the percent format to notebooks nbsphinx_custom_formats = { '.py': ['jupytext.reads', {'fmt': 'py:percent'}], } nbsphinx_output_prompt = "" nbsphinx_execute = "auto" templates_path = ['_templates'] # add conf.py to exclude_patterns exclude_patterns = [..., 'conf.py'] and have narrative.py looking like: # %% [markdown] # # A title # %% [raw] raw_mimetype="text/restructuredtext" # Import necessary package and define :meth:`make_grid` # %% import numpy as np def make_grid(a,b): """ Make a grid for constant by piece functions """ x = np.linspace(0,np.pi) xmid = (x[:-1]+x[1:])/2 h = x[1:]-x[:-1] return xmid,h # %% [markdown] # Interpolate a function # %% xmid,h = make_grid(0,np.pi) y = np.sin(xmid) # %% [markdown] # Calculate its integral # %% I = np.sum(y*h) print ("Result %g" % I ) then running make html should produce a narrative.html file like: which you can link to from index.rst etc. Some things to note about the narrative.py file: the start of the .py file has to contain a "Title" cell, which in this case, as I've set it as a Markdown cell, contains (after the initial comment string #) # A Title using the Markdown header syntax of #. If you don't have a title you won't be able to link to the output from other documents, e.g., index.rst; for most of the text cells, I have marked them as [markdown] format, i.e., they will be interpreted as containing Markdown syntax; for the cell containing restructured text, I have marked it as a [raw] cell with the meta data raw_mimetype="text/restructuredtext"; the input code cells will display with an input prompt by default [1]: etc. Turning off the input prompts requires using Custom CSS.
7
3
79,614,033
2025-5-9
https://stackoverflow.com/questions/79614033/what-explains-pattern-matching-in-python-not-matching-for-0-0-but-matching-for
I would like to understand how pattern matching works in Python. I know that I can match a value like so: >>> t = 12.0 >>> match t: ... case 13.0: ... print("13") ... case 12.0: ... print("12") ... 12 But I notice that when I use matching with a type like float(), it matches 12.0: >>> t = 12.0 >>> match t: ... case float(): ... print("13") ... case 12.0: ... print("12") ... 13 This seems strange, because float() evaluates to 0.0, but the results are different if that is substituted in: >>> t = 12.0 >>> match t: ... case 0.0: ... print("13") ... case 12.0: ... print("12") ... 12 I would expect that if 12.0 matches float(), it would also match 0.0. There are cases where I would like to match against types, so this result seems useful. But why does it happen? How does it work?
The thing that follows the case keyword is not an expression, but special syntax called a pattern. 0.0 is a literal pattern. It checks equality with 0.0. float() is a class pattern. It checks that the type is float. Since it is not an expression, it isn't evaluated and therefore is different from 0.0.
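A small illustration of the difference, combining both kinds of pattern:
t = 12.0
match t:
    case 0.0:            # literal pattern: compares with == 0.0
        print("equal to zero")
    case float() as x:   # class pattern: matches any float and binds it to x
        print(f"a float: {x}")  # prints "a float: 12.0"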
14
20
79,610,653
2025-5-7
https://stackoverflow.com/questions/79610653/python-pynput-the-time-module-do-not-seem-to-work-together-in-a-loop
So I have written this Python script to vote repeatedly (It's allowed) for a friend on a show at a local TV station. import os import time from pynput.keyboard import Key, Controller os.system("open -a Messages") time.sleep(3) keyboard = Controller() for i in range(50): keyboard.type("Example Message") print("Message typed") time.sleep(5) keyboard.press(Key.enter) print(f"======= {i+1} Message(s) Sent =======") time.sleep(40) print("Texting Complete") During the first loop, everything works like it's supposed to, the program takes 5 seconds between typing and pressing Enter. However, in the loops thereafter, the pynput code seems to ignore time.sleep between keyboard.type & keyboard.press, running them immediately in succession, while the print statements still respect time.sleep in the terminal output. This isn't that big of an issue since it stills functions as intended most of the time, but about every 4th or 5th message gets sent before the program has finished typing, causing that vote not to get counted. I'm running the script in Visual Studio Code Version: 1.99.3 on a 2021 Macbook Pro with an M1 chip, if that matters. I have tried running the script unbuffered using the terminal, but it has made no difference. Any help would be appreciated.
Solved by user @furas in the comments: keyboard.press() keeps the key pressed down, so the code needs a matching keyboard.release(). Without it, the initial keyboard.press(Key.enter) stays held down for the rest of the loop iterations.
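Inside the loop, the fix looks like this (same Controller object as in the question):
keyboard.press(Key.enter)
keyboard.release(Key.enter)  # release Enter so the key is not held down for the following iterations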
2
3
79,613,425
2025-5-9
https://stackoverflow.com/questions/79613425/get-media-created-timestamp-with-python-for-mp4-and-m4a-video-audio-files
Trying to get "Media created" timestamp and insert as the "Last modified date" with python for .mp4 and .m4a video, audio files (no EXIF). The "Media created" timestamp shows up and correctly in Windows with right click file inspection, but I can not get it with python. What am I doing wrong? (This is also a working fix for cloud storage changing the last modified date of files.) enter from mutagen.mp4 import MP4, M4A from datetime import datetime import os def get_mp4_media_created_date(filepath): """ Extracts the "Media Created" date from an MP4 or M4A file. Args: filepath (str): The path to the MP4 or M4A file. Returns: datetime or None: The creation date as a datetime object, or None if not found. """ file_lower = filepath.lower() try: if file_lower.endswith(".mp4"): media = MP4(filepath) elif file_lower.endswith(".m4a"): media = M4A(filepath) else: return None # Not an MP4 or M4A file found_date = None date_tags_to_check = ['creation_time', 'com.apple.quicktime.creationdate'] for tag in date_tags_to_check: if tag in media: values = media[tag] if not isinstance(values, list): values = [values] for value in values: if isinstance(value, datetime): found_date = value break elif isinstance(value, str): try: found_date = datetime.fromisoformat(value.replace('Z', '+00:00')) break except ValueError: pass if found_date: break return found_date except Exception as e: print(f"Error processing {filepath}: {e}") return None if __name__ == "__main__": filepath = input("Enter the path to the MP4/M4A file: ") if os.path.exists(filepath): creation_date = get_mp4_media_created_date(filepath) if creation_date: print(f"Media Created Date: {creation_date}") else: print("Could not find Media Created Date.") else: print("File not found.") here
As described here, the "Media created" value is not filesystem metadata. It's accessible in the API as a Windows Property. You can use os.utime to set "Media created" timestamp as the "Last modified date". Like import pytz import datetime import os from win32com.propsys import propsys, pscon file = 'path/to/your/file' properties = propsys.SHGetPropertyStoreFromParsingName(file) dt = properties.GetValue(pscon.PKEY_Media_DateEncoded).GetValue() if not isinstance(dt, datetime.datetime): # In Python 2, PyWin32 returns a custom time type instead of # using a datetime subclass. It has a Format method for strftime # style formatting, but let's just convert it to datetime: dt = datetime.datetime.fromtimestamp(int(dt)) dt = dt.replace(tzinfo=pytz.timezone('UTC')) print('Media created at', dt, dt.timestamp()) os.utime(file, (dt.timestamp(),dt.timestamp()))
1
2
79,613,107
2025-5-8
https://stackoverflow.com/questions/79613107/pyspark-udf-mapping-is-returning-empty-columns
Given a dataframe, I want to apply a mapping with UDF but getting empty columns. data = [(1, 3), (2, 3), (3, 5), (4, 10), (5, 20)] df = spark.createDataFrame(data, ["int_1", "int_2"]) df.show() +-----+-----+ |int_1|int_2| +-----+-----+ | 1| 3| | 2| 3| | 3| 5| | 4| 10| | 5| 20| +-----+-----+ I have a mapping: def test_map(col): if col < 5: score = 'low' else: score = 'high' return score mapp = {} test_udf = F.udf(test_map, IntegerType()) I iterate here to populate mapp... for x in (1, 2): print(f'Now working {x}') mapp[f'limit_{x}'] = test_udf(F.col(f'int_{x}')) print(mapp) {'limit_1': Column<'test_map(int_1)'>, 'limit_2': Column<'test_map(int_2)'>} df.withColumns(mapp).show() +-----+-----+-------+-------+ |int_1|int_2|limit_1|limit_2| +-----+-----+-------+-------+ | 1| 3| NULL| NULL| | 2| 3| NULL| NULL| | 3| 5| NULL| NULL| | 4| 10| NULL| NULL| | 5| 20| NULL| NULL| +-----+-----+-------+-------+ The problem is I get null columns. What I'm expecting is: +-----+-----+-------+-------+ |int_1|int_2|limit_1|limit_2| +-----+-----+-------+-------+ | 1| 3| low | low | | 2| 3| low | low | | 3| 5| low | low | | 4| 10| low | high| | 5| 20| low | high| +-----+-----+-------+-------+ The reason I'm doing it is because I have to do for 100 columns. I heard that "withColumns" with a mapping is much faster than iterating over "withColumn" many times.
Your problem is that your UDF is registered to return an integer (defined to return an IntegerType()) while your Python function intends to return a string ("low" or "high"), so what you need to do is to set StringType() in your UDF return type: test_udf = F.udf(test_map, StringType()) Let me know if you want more explanation about UDFs!
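Make sure StringType is imported; with that one change the rest of the code from the question stays the same:
from pyspark.sql.types import StringType

test_udf = F.udf(test_map, StringType())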
1
2
79,613,039
2025-5-8
https://stackoverflow.com/questions/79613039/assign-a-number-for-every-matching-value-in-list
I have a long list of items that I want to assign a number to that increases by one every time the value in the list changes. Basically I want to categorize the values in the list. It can be assumed that the values in the list are always lumped together, but I don't know the number of instances it's repeating. The list is stored in a dataframe as of now, but the output needs to be a dataframe. Example: my_list = ['Apple', 'Apple', 'Orange', 'Orange','Orange','Banana'] grouping = pd.DataFrame(my_list, columns=['List']) Expected output: List Value 0 Apple 1 1 Apple 1 2 Orange 2 3 Orange 2 4 Orange 2 5 Banana 3 I have tried with a for loop, where it checks if the previous value is the same as the current value, but I imagine that there should be a nicer way of doing this.
Use pandas.factorize, and add 1 if you need the category numbers to start with 1 instead of 0: import pandas as pd my_list = ['Apple', 'Apple', 'Orange', 'Orange','Orange','Banana'] grouping = pd.DataFrame(my_list, columns=['List']) grouping['code'] = pd.factorize(grouping['List'])[0] + 1 print(grouping) Output: List code 0 Apple 1 1 Apple 1 2 Orange 2 3 Orange 2 4 Orange 2 5 Banana 3
4
9
79,612,757
2025-5-8
https://stackoverflow.com/questions/79612757/scipys-wrappedcauchy-function-wrong
I'd like someone to check my understanding on the wrapped cauchy function in Scipy... From Wikipedia "a wrapped Cauchy distribution is a wrapped probability distribution that results from the "wrapping" of the Cauchy distribution around the unit circle." It's similar to the Von Mises distribution in that way. I use the following bits of code to calculate a couple thousand random variates, get a histogram and plot it. from scipy.stats import wrapcauchy, vonmises import plotly.graph_objects as go import numpy as np def plot_cauchy(c, loc = 0, scale = 1, size = 100000): ''' rvs(c, loc=0, scale=1, size=1, random_state=None) ''' rvses = vonmises.rvs(c, loc = loc, scale = scale, size = size) # rvses = wrapcauchy.rvs(c, # loc = loc, # scale = scale, # size = size) y,x = np.histogram(rvses, bins = 200, range = [-np.pi,np.pi], density = True) return x,y fig = go.Figure() loc = -3 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 1.5 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 0 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name=f'Centered on {loc}')) fig.show() When plotting this using the Von Mises distribution I get a couple of distributions that are wrapped from -pi to pi and centered on "loc": When I replace the vonmises distribution with the wrapcauchy distribution I get a "non-wrapped" result, that to my eye just looks wrong. To plot this completely I have to adjust the ranges for the histogram This is with Scipy version '1.15.2'. Is there a way to correctly "wrap" the outputs of a the Scipy call, or another library that correctly wraps the output from -pi to pi?
Is there a way to correctly "wrap" the outputs of a the Scipy call You can use the modulo operator. The operation number % x wraps all output to the range [0, x). If you want the range to begin at a value other than 0, you can add and subtract a constant before and after the modulo operation to center it somewhere else. If you want the range to begin at -pi, you can do (array + pi) % (2 * pi) - pi. For example, this is how SciPy internally wraps the vonmises result. return np.mod(rvs + np.pi, 2*np.pi) - np.pi Source. You could do something similar with the result of scipy.stats.wrapcauchy(). Here is how you could modify your code to do this: from scipy.stats import wrapcauchy, vonmises import plotly.graph_objects as go import numpy as np def plot_cauchy_or_vm(c, loc = 0, scale = 1, kind="vonmises", size = 100000): ''' rvs(c, loc=0, scale=1, size=1, random_state=None) ''' if kind == "vonmises": rvses = vonmises.rvs(c, loc = loc, scale = scale, size = size) elif kind == "cauchy": rvses = wrapcauchy.rvs(c, loc = loc, scale = scale, size = size) rvses = ((rvses + np.pi) % (2 * np.pi)) - np.pi else: raise Exception("Unknown kind") y,x = np.histogram(rvses, bins = 200, range = [-np.pi,np.pi], density = True) return x,y for kind in ["vonmises", "cauchy"]: fig = go.Figure() loc = -3 x,y = plot_cauchy_or_vm(0.5, kind=kind, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 1.5 x,y = plot_cauchy_or_vm(0.5, kind=kind, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 0 x,y = plot_cauchy_or_vm(0.5, kind=kind, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name=f'Centered on {loc}')) fig.show() Output: Cauchy Plot
3
3
79,612,625
2025-5-8
https://stackoverflow.com/questions/79612625/underlining-fails-in-matplotlib
My matplotlib.__version__ is 3.10.1. I'm trying to underline some text and can not get it to work. As far as I can tell, Latex is installed and accessible in my system: import subprocess result = subprocess.run( ["pdflatex", "--version"], check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) print(result.stdout) results in: b'pdfTeX 3.141592653-2.6-1.40.25 (TeX Live 2023/Debian)\nkpathsea version 6.3.5\nCopyright 2023 Han The Thanh (pdfTeX) et al.\nThere is NO warranty. Redistribution of this software is\ncovered by the terms of both the pdfTeX copyright and\nthe Lesser GNU General Public License.\nFor more information about these matters, see the file\nnamed COPYING and the pdfTeX source.\nPrimary author of pdfTeX: Han The Thanh (pdfTeX) et al.\nCompiled with libpng 1.6.43; using libpng 1.6.43\nCompiled with zlib 1.3; using zlib 1.3\nCompiled with xpdf version 4.04\n' Also the simple code: import matplotlib.pyplot as plt plt.text(0.5, 0.5, r'$\frac{a}{b}$') plt.show() works as expected. Similar questions from 2012 (Underlining Text in Python/Matplotlib) and 2017 (matplotlib text underline) have accepted answers that fail with RuntimeError: Failed to process string with tex because dvipng could not be found A similar question from 2019 (Underlining not working in matplotlib graphs for the following code using tex) has no answer and it is my exact same issue, i.e.: import matplotlib.pyplot as plt plt.text(.5, .5, r'Some $\underline{underlined}$ text') plt.show() fails with: ValueError: \underline{underlined} text ^ ParseFatalException: Unknown symbol: \underline, found '\' (at char 0), (line:1, col:1) The 2017 question has a deleted answer that points to a closed PR in matplotlib's Github repo which points to another PR called Support \underline in Mathtext which is marked as a draft. Does my matplotlib version not support the underline Latex command?
As you have correctly found, \underline, is not a currently supported MathText command. But, matplotlib's MathText is not the same a LaTeX. To instead use LaTeX, you can do, e.g., import matplotlib.pyplot as plt # turn on use of LaTeX rather than MathText plt.rcParams["text.usetex"] = True plt.text(.5, .5, r'Some $\underline{underlined}$ text') plt.show() You may have issues if your tex distribution does not ship with the type1cm package, in which can you may want to look at, e.g., https://stackoverflow.com/a/37218925/1862861.
1
4
79,612,007
2025-5-8
https://stackoverflow.com/questions/79612007/undefined-reference-to-py-initialize-when-build-a-simple-demo-c-on-a-linux-con
I am testing of running a Python thread in a c program with a simple example like the below # demo.py import time for i in range(1, 101): print(i) time.sleep(0.1) // demo.c #include <Python.h> #include <pthread.h> #include <stdio.h> void *run_python_script(void *arg) { Py_Initialize(); if (!Py_IsInitialized()) { fprintf(stderr, "Python initialization failed\n"); return NULL; } FILE *fp = fopen("demo.py", "r"); if (fp == NULL) { fprintf(stderr, "Failed to open demo.py\n"); Py_Finalize(); return NULL; } PyRun_SimpleFile(fp, "demo.py"); fclose(fp); Py_Finalize(); return NULL; } int main() { pthread_t python_thread; if (pthread_create(&python_thread, NULL, run_python_script, NULL) != 0) { fprintf(stderr, "Failed to create thread\n"); return 1; } pthread_join(python_thread, NULL); printf("Python thread has finished. Exiting program.\n"); return 0; } Then I build the above code with the following command gcc demo.c -o demo -lpthread -I$(python3-config --includes) $(python3-config --ldflags) $(python3-config --cflags) Then I get the following error: /usr/bin/ld: /tmp/ccsHQpZ3.o: in function `run_python_script': demo.c:(.text.run_python_script+0x7): undefined reference to `Py_Initialize' /usr/bin/ld: demo.c:(.text.run_python_script+0xd): undefined reference to `Py_IsInitialized' /usr/bin/ld: demo.c:(.text.run_python_script+0x41): undefined reference to `PyRun_SimpleFileExFlags' /usr/bin/ld: demo.c:(.text.run_python_script+0x50): undefined reference to `Py_Finalize' /usr/bin/ld: demo.c:(.text.run_python_script+0xab): undefined reference to `Py_Finalize' collect2: error: ld returned 1 exit status The python library do exists, python3-config --ldflags -L/home/henry/anaconda3/lib/python3.9/config-3.9-x86_64-linux-gnu -L/home/henry/anaconda3/lib -lcrypt -lpthread -ldl -lutil -lm -lm ls -1 ~/anaconda3/lib | grep python libpython3.9.so libpython3.9.so.1.0 libpython3.so python3.9 I have no idea about is link error.
You need to pass --embed to python3-config because you are embedding a Python interpreter in your program. Observe the difference: $ python3-config --ldflags -L/usr/lib/python3.10/config-3.10-x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lcrypt -ldl -lm -lm $ python3-config --embed --ldflags -L/usr/lib/python3.10/config-3.10-x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lpython3.10 -lcrypt -ldl -lm -lm
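With that flag, the build command from the question becomes, for example:
gcc demo.c -o demo -lpthread -I$(python3-config --includes) $(python3-config --embed --ldflags) $(python3-config --cflags)
Note that python3-config only supports --embed from Python 3.8 onwards.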
1
3
79,611,667
2025-5-8
https://stackoverflow.com/questions/79611667/how-do-i-handle-sigterm-inside-python-async-methods
Based on this code, I'm trying to catch SIGINT and SIGTERM. It works perfectly for SIGINT: I see it enter the signal handler, then my tasks do their cleanup before the whole program exits. On SIGTERM, though, the program simply exits immediately. My code is a bit of a hybrid of the two examples from the link above, as much of the original doesn't work under python 3.12: import asyncio import functools import signal async def signal_handler(sig, loop): """ Exit cleanly on SIGTERM ("docker stop"), SIGINT (^C when interactive) """ print('caught {0}'.format(sig.name)) tasks = [task for task in asyncio.all_tasks() if task is not asyncio.current_task()] list(map(lambda task: task.cancel(), tasks)) results = await asyncio.gather(*tasks, return_exceptions=True) print('finished awaiting cancelled tasks, results: {0}'.format(results)) loop.stop() if __name__ == "__main__": loop = asyncio.new_event_loop() asyncio.ensure_future(task1(), loop=loop) asyncio.ensure_future(task2(), loop=loop) loop.add_signal_handler(signal.SIGTERM, functools.partial(asyncio.ensure_future, signal_handler(signal.SIGTERM, loop))) loop.add_signal_handler(signal.SIGINT, functools.partial(asyncio.ensure_future, signal_handler(signal.SIGINT, loop))) try: loop.run_forever() finally: loop.close() task1 can terminate immediately, but task2 has cleanup code that is clearly being executed after SIGINT, but not after SIGTERM
That gist is very old, and asyncio/python has evolved since. Your code sort of works, but the way it's designed, the signal handling will create two coroutines, one of which will not be awaited when the other signal is received. This is because the couroutines are eagerly created, but they're only launched (ensure_future) when the corresponding signal is received. Thus, SIGTERM will be properly handled, but python will complain with RuntimeWarning: corouting 'signal_handler' was never awaited. A more modern take on your version might look something like: import asyncio import signal async def task1(): try: while True: print("Task 1 running...") await asyncio.sleep(1) except asyncio.CancelledError: print("Task 1 cancelled") # Task naturally stops once it raises CancelledError. async def task2(): try: while True: print("Task 2 running...") await asyncio.sleep(1) except asyncio.CancelledError: print("Task 2 cancelled") # Need to pass the list of tasks to cancel so that we don't kill the main task. # Alternatively, one could pass in the main task to explicitly exclude it. async def shutdown(sig, tasks): print(f"Caught signal: {sig.name}") for task in tasks: task.cancel() await asyncio.gather(*tasks, return_exceptions=True) print("Shutdown complete.") async def main(): tasks = [asyncio.create_task(task1()), asyncio.create_task(task2())] loop = asyncio.get_running_loop() for s in (signal.SIGINT, signal.SIGTERM): loop.add_signal_handler(s, lambda s=s: asyncio.create_task(shutdown(s, tasks))) await asyncio.gather(*tasks) if __name__ == "__main__": asyncio.run(main()) $ timeout 2s python3 sigterm.py Task 1 running... Task 2 running... Task 1 running... Task 2 running... Caught signal: SIGTERM Task 1 cancelled Task 2 cancelled Shutdown complete. In this particular case, though, I'd probably use a stop event or similar to signal the tasks to exit: import signal import asyncio stop_event = asyncio.Event() def signal_handler(): print("SIGTERM received! Exiting...") stop_event.set() async def looping_task(task_num): while not stop_event.is_set(): print(f"Task {task_num} is running...") await asyncio.sleep((task_num + 1) / 3) async def main(): loop = asyncio.get_event_loop() loop.add_signal_handler(signal.SIGTERM, signal_handler) await asyncio.gather(*(looping_task(i) for i in range(5))) if __name__ == "__main__": asyncio.run(main())
1
1
79,611,884
2025-5-8
https://stackoverflow.com/questions/79611884/how-to-pass-a-dynamic-list-of-csv-files-from-snakemake-input-to-a-pandas-datafra
I'm working on a Snakemake workflow where I need to combine multiple CSV files into a single Pandas DataFrame. The list of CSV files is dynamic—it depends on upstream rules and wildcard patterns. Here's a simplified version of what I have in my Snakefile: rule combine_tables: input: expand("results/{sample}/data.csv", sample=SAMPLES) output: "results/combined/all_data.csv" run: import pandas as pd dfs = [pd.read_csv(f) for f in input] combined = pd.concat(dfs) combined.to_csv(output[0], index=False) This works when the files exist, but I’d like to know: What's the best practice for handling missing or corrupt files in this context? Is there a more "Snakemake-idiomatic" way to dynamically list and read input files for Pandas operations? How do I ensure proper file ordering or handle metadata like sample names if not all CSVs are structured identically?
Missing and corrupt files can be handled directly inside the rule's run block, and adding a sample column keeps track of which file each row came from:
rule combine_tables:
    input:
        # Static sample list (use checkpoints if dynamically generated)
        expand("results/{sample}/data.csv", sample=SAMPLES)
    output:
        "results/combined/all_data.csv"
    run:
        import os
        import pandas as pd

        dfs = []
        missing_files = []
        corrupt_files = []

        # Process files in consistent order
        for file_path in sorted(input, key=lambda x: x.split("/")[1]):  # Sort by sample
            # Handle missing files (shouldn't occur if Snakemake workflow is correct)
            if not os.path.exists(file_path):
                missing_files.append(file_path)
                continue

            # Handle corrupt/unreadable files
            try:
                df = pd.read_csv(file_path)
                # Add sample metadata column
                sample_id = file_path.split("/")[1]
                df.insert(0, "sample", sample_id)  # Add sample column at start
                dfs.append(df)
            except Exception as e:
                corrupt_files.append((file_path, str(e)))

        # Validation reporting
        if missing_files:
            raise FileNotFoundError(f"Missing {len(missing_files)} files: {missing_files}")
        if corrupt_files:
            raise ValueError("Corrupt files detected:\n" + "\n".join(
                [f"{f[0]}: {f[1]}" for f in corrupt_files]))
        if not dfs:
            raise ValueError("No valid dataframes to concatenate")

        # Concatenate and save
        combined = pd.concat(dfs, ignore_index=True)
        combined.to_csv(output[0], index=False)
2
1
79,611,544
2025-5-8
https://stackoverflow.com/questions/79611544/multiprocessing-with-scipy-optimize
Question: Does scipy.optimize have minimizing functions that can divide their workload among multiple processes to save time? If so, where can I find the documentation? I've looked a fair amount online, including here, for answers: Scipy's optimization incompatible with Multiprocessing? Parallel optimizations in SciPy Multiprocessing Scipy optimization in Python I could be misunderstanding, but I don't see a clear indication in any of the above posts that the scipy library is informed of the fact that there are multiple processes that it can utilize simultaneously while also providing the minimization functions with all of the arguments needed to determine the minimum. I also don't see multiprocessing discussed in detail in the scipy docs that I read and I haven't had any luck finding real world examples of optimization gains to justify optimizing versus a parallel brute force effort. Here's a fictional example of what I'd like the scipy.optimize library to do (I know that the differential_evolution function doesn't have a multiprocessing argument): import multiprocessing as mp from scipy.optimize import differential_evolution def objective_function(x): return x[0] * 2 pool = mp.Pool(processes=16) # Perform differential evolution optimization result = differential_evolution(objective_function, multiprocessing = pool)
With respect to scipy.optimize.differential_evolution, it does seem to offer multiprocessing through multiprocessing.Pool via the optional "workers" call parameter, according to the official documentation at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution This may also be offered for other optimization methods but the API documents would need to be examined. The docs also say that the objective function must be pickleable. The official docs also have some general remarks on parallel execution with SciPy at https://docs.scipy.org/doc/scipy/tutorial/parallel_execution.html The call would look like this for differential_evolution: import multiprocessing as mp from scipy.optimize import differential_evolution def objective_function(x): return x[0] * 2 my_workers = 16 # Perform differential evolution optimization result = differential_evolution(objective_function, workers = my_workers)
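Note that differential_evolution also requires a bounds argument, which the fictional example leaves out; a runnable sketch could look like this (the bounds values here are arbitrary, chosen only for illustration):
from scipy.optimize import differential_evolution

def objective_function(x):
    return x[0] * 2

bounds = [(-10, 10)]  # one (min, max) pair per parameter

# workers=-1 uses all available CPU cores; an integer picks that many processes.
# updating='deferred' is required for parallel evaluation (it is forced anyway when workers != 1).
result = differential_evolution(objective_function, bounds, workers=-1, updating='deferred')
print(result.x, result.fun)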
2
2
79,610,188
2025-5-7
https://stackoverflow.com/questions/79610188/how-should-i-take-a-matrix-from-input
As we know, input() turns the input into a string. How can I take a matrix like [[1,2],[3,4]] from input() entered by the user and have it as a normal 2D list so I can do something with it? It should be like this:
data = input([[1,2],[3,4]])
print(data)
output : [[1,2],[3,4]]
I tried
data = list(input())
but it was completely wrong.
Using AST You can use ast Literals to parse your input string into a list. import ast raw_input = input("Enter the matrix (e.g., [[1,2],[3,4]]): ") # Parse the input string as a list matrix = ast.literal_eval(raw_input) Using numpy In order to use numpy you would have to enter the matrix in a slightly different format: import numpy as np raw_input = input("Enter the matrix (e.g., 1,2;3,4): ") matrix = np.matrix(raw_input, dtype=int)
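For the AST approach, literal_eval accepts any Python literal, so a quick sanity check that the result really is a 2D list can help (reusing raw_input from above):
matrix = ast.literal_eval(raw_input)
if not (isinstance(matrix, list) and all(isinstance(row, list) for row in matrix)):
    raise ValueError("Expected a 2D list like [[1,2],[3,4]]")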
2
1
79,609,709
2025-5-7
https://stackoverflow.com/questions/79609709/how-to-adjust-size-of-violin-plot-based-on-number-of-hues-available-for-each-cat
I need to create a violin plot based on two categories. But, some of the combination of categories are not available in the data. So it creates a white space, when i try to make the plot. I remember long ago i was able to adjust the size of the violins when the categories were not available in r using geom_violin(position= position_dodge(0.9)) refer to the attached image. Now i need to create a similar figure with python but when i try to make violin plot using seaborn i get whitespace when certain combinations of variables arent available (see image). Following is the code I am using in python. I would appreciate any help with this. Reproducible example import numpy as np # Define categories for Depth and Hydraulic Conductivity depth_categories = ["<0.64", "0.64-0.82", "0.82-0.90", ">0.9"] hydraulic_conductivity_categories = ["<0.2", "0.2-2.2", "2.2-15.5", ">15.5"] # Generate random HSI values np.random.seed(42) # For reproducibility hsi_values = np.random.uniform(low=0, high=35, size=30) # Generate random categories for Depth and Hydraulic Conductivity depth_values = np.random.choice(depth_categories, size=30) hydraulic_conductivity_values = np.random.choice(hydraulic_conductivity_categories, size=30) # Ensure not all combinations are available by removing some combinations for i in range(5): depth_values[i] = depth_categories[i % len(depth_categories)] hydraulic_conductivity_values[i] = hydraulic_conductivity_categories[(i + 1) % len(hydraulic_conductivity_categories)] # Create the DataFrame dummy_data = pd.DataFrame({ 'HSI': hsi_values, 'Depth': depth_values, 'Hydraulic_Conductivity': hydraulic_conductivity_values }) # Violin plot for Soil Depth and Hydraulic Conductivity plt.figure(figsize=(12, 6)) sns.violinplot(x='Depth', y='HSI', hue='Hydraulic_Conductivity', data=dummy_data, palette=color_palette, density_norm="count", cut = 0, gap = 0.1, linewidth=0.5, common_norm=False, dodge=True) plt.xlabel("DDDD") plt.ylabel("XXX") plt.title("Violin plot of XXX by YYYY and DDDD") plt.ylim(-5, 35) plt.legend(title='DDDD', loc='upper right') # sns.despine()# Remove the horizontal lines plt.show()
I'm not aware of a way to do this automatically, but you can easily overlay several violinplots, manually synchronizing the hue colors. An efficient way would be to use groupby to split the groups per number of "hues" per X-axis category, and loop over the categories. Then manually create a legend: # for reproducibility color_palette = sns.color_palette('Set1') # define the columns to use hue_col = 'Hydraulic_Conductivity' X_col = 'Depth' Y_col = 'HSI' # custom order for the hues hue_order = sorted(dummy_data[hue_col].unique(), key=lambda x: (not x.startswith('<'), float(x.strip('<>').partition('-')[0])) ) # ['<0.2', '0.2-2.2', '2.2-15.5', '>15.5'] colors = dict(zip(hue_order, color_palette)) # custom X-order # could use the same logic as above X_order = ['<0.64', '0.64-0.82', '0.82-0.90', '>0.9'] # create groups with number of hues per X-axis group group = dummy_data.groupby(X_col)[hue_col].transform('nunique') f, ax = plt.subplots(figsize=(12, 6)) for _, g in dummy_data.groupby(group): # get unique hues for this group to ensure consistent order hues = set(g[hue_col]) hues = [h for h in hue_order if h in hues] sns.violinplot( x=X_col, y=Y_col, hue=hue_col, data=g, order=X_order, hue_order=hues, # ensure consistent order across groups palette=colors, density_norm='count', cut = 0, gap = 0.1, linewidth=0.5, common_norm=False, dodge=True, ax=ax, # reuse the same axes legend=False, # do not plot the legend ) # create a custom legend manually from the colors dictionary import matplotlib.patches as mpatches plt.legend(handles=[mpatches.Patch(color=c, label=l) for l, c in colors.items()], title='DDDD', loc='upper right') plt.xlabel('DDDD') plt.ylabel('XXX') plt.title('Violin plot of XXX by YYYY and DDDD') plt.ylim(-5, 35) Output: NB. your example have a few categories with a single datapoint, therefore the single lines in the output below. This makes the categories ambiguous since the color is not visible, but this shouldn't be an issue if you have enough data.
2
1
79,608,184
2025-5-6
https://stackoverflow.com/questions/79608184/wrong-column-assignment-with-np-genfromtxt-if-passed-column-order-is-not-the-sam
This problem appeared in some larger code but I will give simple example: from io import StringIO import numpy as np example_data = "A B\na b\na b" data1 = np.genfromtxt(StringIO(example_data), usecols=["A", "B"], names=True, dtype=None) print(data1["A"], data1["B"]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.genfromtxt(StringIO(example_data), usecols=["B", "A"], names=True, dtype=None) print(data2["A"], data2["B"]) # ['b' 'b'] ['a' 'a'] which is not correct As you can see, if I change passed column order in regard of column order in file, I get wrong results. What's interesting is that dtypes are same: print(data1.dtype) # [('A', '<U1'), ('B', '<U1')] print(data2.dtype) # [('A', '<U1'), ('B', '<U1')] In this example it's not hard to sort column names before passing them, but in my case column names are gotten from some other part of system and it's not guaranteed that they will be in same order as those in file. I can probably circumvent that but I'm wondering if there is something wrong with my logic in this example or is there some kind of bug here. Any help is appreciated. Update: What I just realized playing around a bit is following, if I add one or more columns into example data (not important where) and pass subset of columns to np.genfromtxt in whichever order I want, it gives correct result. Example: example_data = "A B C\na b c\na b c" data1 = np.genfromtxt(StringIO(example_data), usecols=["A", "B"], names=True, dtype=None) print(data1["A"], data1["B"]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.genfromtxt(StringIO(example_data), usecols=["B", "A"], names=True, dtype=None) print(data2["A"], data2["B"]) # ['a' 'a'] ['b' 'b'] which is correct
[62]: text = "A B\na b\na b".splitlines() In [63]: np.genfromtxt(text,dtype=None, usecols=[1,0],names=True) Out[63]: array([('b', 'a'), ('b', 'a')], dtype=[('A', '<U1'), ('B', '<U1')]) In [64]: np.genfromtxt(text3,dtype=None, usecols=[1,0]) Out[64]: array([['B', 'A'], ['b', 'a'], ['b', 'a']], dtype='<U1') So it uses the columns in the order you specify in usecols, but takes the structured array dtype from the names In [65]: text3="A B C\na b c\na b c".splitlines() In [66]: np.genfromtxt(text3,dtype=None, usecols=[1,0]) Out[66]: array([['B', 'A'], ['b', 'a'], ['b', 'a']], dtype='<U1') In [67]: np.genfromtxt(text3,dtype=None, usecols=[1,0],names=True) Out[67]: array([('b', 'a'), ('b', 'a')], dtype=[('B', '<U1'), ('Af', '<U1')]) In the subset case it pays attention to the usecols when constructing the dtype. From the genfromtxt code (read from [source] or ipython ?? firstvalues is the names derived from the first line, and nbcol is their count. After making sure usecols is a list, and converting to numbers if needed, it: nbcols = len(usecols or first_values) ... if usecols: for (i, current) in enumerate(usecols): # if usecols is a list of names, convert to a list of indices if _is_string_like(current): usecols[i] = names.index(current) elif current < 0: usecols[i] = current + len(first_values) # If the dtype is not None, make sure we update it if (dtype is not None) and (len(dtype) > nbcols): descr = dtype.descr dtype = np.dtype([descr[_] for _ in usecols]) names = list(dtype.names) # If `names` is not None, update the names elif (names is not None) and (len(names) > nbcols): names = [names[_] for _ in usecols] So with usecols, nbcols is the number of columns it's to use. In the subset case it selects from the names, but if it isn't a subset, then the names isn't modified, in number or order. For a structured array you really don't need to specify the order In [79]: data=np.genfromtxt(text,dtype=None, names=True); data Out[79]: array([('a', 'b'), ('a', 'b')], dtype=[('A', '<U1'), ('B', '<U1')]) In [80]: data['B'], data['A'] Out[80]: (array(['b', 'b'], dtype='<U1'), array(['a', 'a'], dtype='<U1')) Columns can be reordered after loading with indexing: In [87]: data[['A','B']] Out[87]: array([('a', 'b'), ('a', 'b')], dtype=[('A', '<U1'), ('B', '<U1')]) In [88]: data[['B','A']] Out[88]: array([('b', 'a'), ('b', 'a')], dtype={'names': ['B', 'A'], 'formats': ['<U1', '<U1'], 'offsets': [4, 0], 'itemsize': 8}) I suppose this could be raised as an issue. The logic in applying usecols, names, etc, is complicated as it is :) edit With explicit dtype In [96]: dt=[('B','U1'),('A','U1')] In [97]: data=np.genfromtxt(text,dtype=dt, usecols=[1,0], skip_header=1); data Out[97]: array([('b', 'a'), ('b', 'a')], dtype=[('B', '<U1'), ('A', '<U1')])
1
1
79,609,245
2025-5-6
https://stackoverflow.com/questions/79609245/polars-unusual-query-plan-for-lazyframe-custom-function-apply-takes-extremely-l
I have a spacy nlp function nlp(<string>).vector that I need to apply to a string column in a dataframe. This function takes on average 13 milliseconds to return. The function returns a ndarray that contains 300 Float64s. I need to expand these Floats to their own columns. This is the sketchy way I've done this: import spacy import polars as pl nlp = spacy.load('en_core_web_lg') full = pl.LazyFrame([["apple", "banana", "orange"]], schema=['keyword']) VECTOR_FIELD_NAMES = ['dim_' + str(x) for x in range(300)] full = full.with_columns( pl.col('keyword').map_elements( lambda x: tuple(nlp(x).vector), return_dtype=pl.List(pl.Float64) ).list.to_struct(fields=VECTOR_FIELD_NAMES).struct.unnest() ) full.collect() This takes 11.5s to complete, which is >100 times slower than doing the computation outside of Polars. Looking at the query plan, it reveals this: naive plan: (run LazyFrame.explain(optimized=True) to see the optimized plan) WITH_COLUMNS: [col("keyword").map_list().list.to_struct().struct.field_by_name(dim_0)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_1)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_2)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_3)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_4)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_5)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_6)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_7)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_8)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_9)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_10)(), ... It carries on like this for all 300 dims. I believe it might be computing nlp(<keyword>) for every cell of the output. Why might this be? How do I restructure my statements to avoid this?
It's due to how expression expansion works. The expression level unnest expands into multiple expressions (one for each field) pl.col("x").struct.unnest() Would turn into pl.col("x").struct.field("a") pl.col("x").struct.field("b") pl.col("x").struct.field("c") Normally you don't notice as Polars caches expressions (CSE), but UDFs are not eligible for caching. https://github.com/pola-rs/polars/issues/20260 def udf(x): print("Hello") return x df = pl.DataFrame({"x": [[1, 2, 3], [4, 5, 6]]}) df.with_columns( pl.col.x.map_elements(udf, return_dtype=pl.List(pl.Int64)) .list.to_struct(fields=['a', 'b', 'c']) .struct.unnest() ) It calls the UDF for each element. Hello Hello Hello Hello Hello Hello You can use the unnest frame method instead. df.with_columns( pl.col.x.map_elements(udf, return_dtype=pl.List(pl.Int64)) .list.to_struct(fields=['a', 'b', 'c']) .alias('y') ).unnest('y') Hello Hello
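Applied to the pipeline from the question, that frame-level unnest would look something like this (same nlp, VECTOR_FIELD_NAMES and full as defined there; the intermediate column name 'vec' is just a placeholder):
full = full.with_columns(
    pl.col('keyword').map_elements(
        lambda x: tuple(nlp(x).vector),
        return_dtype=pl.List(pl.Float64)
    ).list.to_struct(fields=VECTOR_FIELD_NAMES).alias('vec')
).unnest('vec')

full.collect()
This way the UDF appears in a single expression, so it runs once per keyword instead of once per output column.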
2
1
79,608,280
2025-5-6
https://stackoverflow.com/questions/79608280/cannot-read-files-list-from-a-specific-channel-from-slack-using-python
I used to have a working python function to fetch files from a specific Slack channel, but that stopped working a few months ago. I tested the same request to the slack API (files.list) using Postman which does give me an array with a number of files. The following code used to work but no longer does: import requests import json apiBase = "https://slack.com/api/" accesToken = "Bearer xoxb-<secret>" requestData = { "channel": "<obscured>" } r = requests.post(apiBase + "files.list", headers={'Authorization': accesToken, 'Content-Type': 'application/json; charset=utf-8'}, data=requestData) try: response = json.loads(r.text) except: print("Read error") isError = True if(not 'files' in response): if('error' in response): print(response['error']) if('warning' in response): print(response['warning']) isError = True files = response['files'] files.sort(key=lambda x:x['timestamp']) count = len(files) print(str(r)) print(str(r.request.body)) print(str(r.request.headers['Content-Type'])) print(str(r.text)) The result is: <Response [200]> channel=<secret> application/json; charset=utf-8 {"ok":true,"files":[],"paging":{"count":100,"total":0,"page":1,"pages":0}} Process finished with exit code 0 Postman also returns a 200 OK, but the array contains 3 files for this channel. So why is Python not getting the 3 files...? I know that an app needs to be given access to the channel in Slack which is the case here. (The channel and credentials are identical in both scenario's (Python and Postman). Please advise me ...
I think it has something to do with the content type you send with requests.post. Have you tried using json=requestData instead of data=requestData? Even though the content type is correctly set in your headers, requests.post might still send "data" as a form-encoded body, which may be why the Slack API is ignoring it. Update: The solution was to pass requestData as URL query parameters, which can be done elegantly with the "params" argument of requests.post(), like so:
requestData = {"channel": channel_id}
requests.post(params=requestData)
This is the way the Slack API expects the data.
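With the variable names from the question, the full call would then be something like (the Content-Type header is no longer needed since nothing is sent in the body):
requestData = {"channel": "<obscured>"}
r = requests.post(
    apiBase + "files.list",
    headers={"Authorization": accesToken},
    params=requestData,
)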
1
1
79,608,369
2025-5-6
https://stackoverflow.com/questions/79608369/bars-not-fitting-to-x-axis-ticks-in-a-seaborn-distplot
I do generate that figure with seaborn.distplot(). My problem is that the ticks on the X-axis do not fit to the bars, in all cases. I would expect a relationship between bars and ticks like you can see at 11 and 15. This is the MWE import numpy as np import pandas as pd import seaborn as sns # Data np.random.seed(42) n = 5000 df = pd.DataFrame({ 'PERSON': np.random.randint(100000, 999999, n), 'Fruit': np.random.choice(['Banana', 'Strawberry'], n), 'Age': np.random.randint(9, 18, n) }) fig = sns.displot( data=df, x='Age', hue='Fruit', multiple='dodge').figure fig.show()
You need discrete=True to tell seaborn that the x values are discrete. Adding shrink=0.8 will leave some space between the bars. import numpy as np import pandas as pd import seaborn as sns from matplotlib import pyplot as plt # Data np.random.seed(42) n = 5000 df = pd.DataFrame({ 'PERSON': np.random.randint(100000, 999999, n), 'Fruit': np.random.choice(['Banana', 'Strawberry'], n), 'Age': np.random.randint(9, 18, n) }) sns.displot( data=df, x='Age', hue='Fruit', multiple='dodge', discrete=True, shrink=0.8) plt.show() . Note that sns.displot() is a figure-level function that creates a grid of one or more subplots, with a common legend outside. sns.countplot() is an axes-level function, that creates a single subplot with a legend inside. An alternative is creating a countplot: sns.countplot( data=df, x='Age', hue='Fruit', dodge=True )
3
4
79,619,027
2025-5-13
https://stackoverflow.com/questions/79619027/why-do-results-from-adjustable-quadratic-volterra-filter-mapping-not-enhance-dar
Based on this paper Adjustable quadratic filters for image enhancement, Reinhard Bernstein, Michael Moore and Sanjit Mitra, 1997, I am trying to reproduce the image enhancement results. I followed the described steps, including implementing the nonlinear mapping functions (e.g., f_map_2 = x^2) and applying the 2D Teager-like quadratic Volterra filter as outlined. More specifically, the formula for the filter used here is formula (53) in the paper "A General Framework for Quadratic Volterra Filters for Edge Enhancement". Formula (53) and the formulas of the two mapping functions are used as shown in the image below. My pipeline is: normalize the input gray image to the range [0, 1], then map it using predefined functions (specifically the definition of f_map_2 and f_map_5 please see in the image), then pass it through the Teager filter (which is the formula (53)), multiply it by an alpha coefficient and combine the original image for sharpening (unsharp masking), finally denormalize back to the range [0, 255]. import cv2 import numpy as np from numpy import sqrt import matplotlib.pyplot as plt def normalize(img): return img.astype(np.float32)/255.0 def denormalize(img): """Convert image to [0, 255]""" return (img * 255).clip(0, 255).astype(np.uint8) def input_mapping(x, map_type='none', m=2): """Apply input mapping function according to the paper""" if map_type == 'none': return x # none (4b) elif map_type == 'map2': return x**2 # f_map2: x^2 (4c) elif map_type == 'map5': # piece-wise function f_map5 (4d) mapped = np.zeros_like(x) mask = x > 0.5 mapped[mask] = 1 - 2*(1 - x[mask])**2 mapped[~mask] = 2 * x[~mask]**2 return mapped else: raise ValueError("Invalid mapping type") def teager_filter(img): padded = np.pad(img, 1, mode='reflect') out = np.zeros_like(img) for i in range(1, padded.shape[0]-1): for j in range(1, padded.shape[1]-1): x = padded[i,j] t1 = 3*(x**2) t2 = -0.5*padded[i+1,j+1]*padded[i-1,j-1] t3 = -0.5*padded[i+1,j-1]*padded[i-1,j+1] t4 = -1.0*padded[i+1,j]*padded[i-1,j] t5 = -1.0*padded[i,j+1]*padded[i,j-1] out[i-1,j-1] = t1 + t2 + t3 + t4 + t5 return out def enhance_image(image_path, alpha, map_type='none'): """Enhance images with optional input mapping""" # Image reading and normalization img = cv2.imread(image_path, 0) if img is None: raise FileNotFoundError("No image found!") img_norm = normalize(img) # Input mapping mapped_img = input_mapping(img_norm, map_type) # Teager filter teager_output = teager_filter(mapped_img) enhanced = np.clip(img_norm + alpha * teager_output, 0, 1) return denormalize(enhanced) input_path = r"C:\Users\tt\OneDrive\Desktop\original_image.jpg" original_image = cv2.imread(input_path, 0) alpha = 0.1 enhanced_b = enhance_image(input_path, alpha, map_type='none') enhanced_c = enhance_image(input_path, alpha, map_type='map2') enhanced_d = enhance_image(input_path, alpha, map_type='map5') plt.figure(figsize=(15, 5)) plt.subplot(1, 4, 1) plt.imshow(original_image, cmap='gray') plt.title('Original') plt.axis('off') plt.subplot(1, 4, 2) plt.imshow(enhanced_b, cmap='gray') plt.title('No Mapping (b)') plt.axis('off') plt.subplot(1, 4, 3) plt.imshow(enhanced_c, cmap='gray') plt.title('Map2 (c)') plt.axis('off') plt.subplot(1, 4, 4) plt.imshow(enhanced_d, cmap='gray') plt.title('Map5 (d)') plt.axis('off') plt.tight_layout() plt.show() However, my output images from using mappings like f_map_2 and f_map_5 do not resemble the ones shown in the paper (specifically, images (c) and (d) below). 
Instead of strong enhancement in bright and dark regions, the results mostly show slightly darkened edges with almost no contrast boost in the target areas. These are my results: And these are the paper's results: In case it is helpful, I'll also post a picture of the raw output of the above Teager filter, without multiplying by alpha and adding to the original image, as below. I tried changing the alpha but it didn't help. I also tried adding a denoising step in the normalization function; that still didn't help, and the image still looks almost identical to the original. I also tested the filter on other grayscale images with various content, but the outcome remains similar — mainly edge thickening without visible intensity-based enhancement. Has anyone successfully reproduced the enhancement effects described in the paper? Could there be implementation details or parameters (e.g., normalization, unsharp masking, or mapping scale) that are critical but not clearly stated? I will provide the original image below, in case anyone wants to reproduce the process. Input image Any insights, references, or example code would be appreciated.
I think I found your error. In enhance_image(), where you compose the final image, i.e. enhanced = np.clip(img_norm + alpha * teager_output, 0, 1), you accidentally use your normalized image img_norm instead of the mapped image mapped_img. Replacing this line with enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1) produces something useful: Note that the Teager filter only enhances high-frequency components of your image, so it makes little difference in teager_output whether you pass mapped_img or img_norm to it. Thus, when composing the low-pass and high-pass parts, you have to use mapped_img in order to keep the applied mapping. I would also suggest keeping file I/O outside your image processing functions; this makes it easier to inject other data for debugging purposes. def enhance_image(img, alpha, map_type='none'): """Enhance images with optional input mapping""" img_norm = normalize(img) # Image normalization mapped_img = input_mapping(img_norm, map_type) # Input mapping teager_output = teager_filter(mapped_img) # Teager filter # Compose enhanced image, enh = map(x) + alpha * teager enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1) return denormalize(enhanced) # Map back to original range
1
1
79,620,550
2025-5-13
https://stackoverflow.com/questions/79620550/python-global-variable-changes-depending-on-how-script-is-run
I have a short example Python script that I'm calling glbltest.py: a = [] def fun(): global a a = [20,30,40] print("before ",a) fun() print("after ",a) If I run it from the command line, I get what I expect: $ python glbltest.py before [] after [20, 30, 40] I open a Python shell and run it by importing, and I get basically the same thing: >>> from glbltest import * before [] after [20, 30, 40] So far so good. Now I comment out those last three lines and do everything "by hand": >>> from glbltest import * >>> a [] >>> fun() # I run fun() myself >>> a # I look at a again. Surely I will get the same result as before! [] # No! I don't! What is the difference between fun() being run "automatically" by the importing of the script, and me running fun() "by hand"?
global a refers to the name a in the glbltest module's namespace. When you set a by hand, it refers to the name a in the __main__ module's namespace. When you use from glbltest import * the names in the module are imported into the __main__ module's namespace. Those are different names but refer to the same objects. When you use global a and a = [20,30,40] in the glbltest module, assignment makes a new object that a in glbltest module's namespace now refers to. The name a in the __main__ module still refers to the original object (the empty list). As a simple example, print the id() of a in the fun() function, and print(id(a)) "by hand" after you set it: a = [] def fun(): global a print(a, id(a)) a = [20,30,40] print(a, id(a)) # To view the global a object id again def show(): print(a, id(a)) "by hand", with comments: >>> from glbltest import * >>> a # the imported name [] >>> id(a) # its object ID 2056911113280 >>> fun() [] 2056911113280 # start of fun() the object is the same ID [20, 30, 40] 2056902829312 # but assignment changes to new object (different ID) >>> a [] # main a still refers to original object >>> id(a) 2056911113280 >>> show() # glbltest module still sees *its* global a [20, 30, 40] 2056902829312 Note that if you use mutation vs. assignment to change the existing object. You'll see the change: a = [] def fun(): global a print(a, id(a)) a.extend([20,30,40]) # modify existing object, not assigning a new object. print(a, id(a)) # To view the global a object id again def show(): print(a, id(a)) Now the object IDs remain the same. >>> from glbltest import * >>> a, id(a) # import object ([], 1408887112064) >>> fun() [] 1408887112064 # before change still the same object [20, 30, 40] 1408887112064 # mutated the *existing* list >>> a, id(a) ([20, 30, 40], 1408887112064) # main's 'a' refers to the same object, same ID >>> show() [20, 30, 40] 1408887112064 # glbltest refers to the same object, same ID It's a bit more obvious that the names are different if you just import the module and the module's a can be referred to directly as glbltest.a. a = [] def fun(): global a a = [20,30,40] >>> import glbltest >>> glbltest.a [] >>> a = 5 # main's a >>> a 5 >>> glbltest.a # module's a [] >>> glbltest.fun() >>> a # main's a doesn't change 5 >>> glbltest.a # module's a does. [20, 30, 40]
1
3
79,620,294
2025-5-13
https://stackoverflow.com/questions/79620294/how-can-i-share-one-requests-session-across-all-flask-routes-and-close-it-cleanl
I’m building a small Flask 3.0 / Python 3.12 micro-service that calls an external REST API on almost every request Right now each route makes a new requests.Session which is slow and leaks sockets under load from flask import Flask, jsonify import requests app = Flask(__name__) @app.get("/info") def info(): with requests.Session() as s: r = s.get("https://api.example.com/info") return jsonify(r.json()) What I tried Global Variable session = requests.Session() I get a resource warning through the above. How can I re-use one requests.Session for all incoming requests and close it exactly once when the application exits?
Use serving-lifecycle hooks @app.before_serving – runs once per worker, right before the first request is accepted. @app.after_serving – runs once on a clean shutdown Create the requests.Session in the first hook, stash it on the application object and close it in the second.
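A minimal sketch of that create-once, close-once idea, with one hedge: decorators named before_serving / after_serving come from Quart (Flask's async sibling) rather than stock Flask 3.0, so if they are not exposed in your stack, the same lifecycle can be approximated with a module-level session that is closed exactly once via atexit. Only the /info endpoint and URL are taken from the question; the rest is an assumed setup, not the answer's exact code:

import atexit

import requests
from flask import Flask, jsonify

app = Flask(__name__)

# One shared Session per worker process: pooled connections, no per-request socket churn
http = requests.Session()

# Close the pool exactly once, when the worker process shuts down
atexit.register(http.close)

@app.get("/info")
def info():
    # Reuses an already-open connection to api.example.com when possible
    r = http.get("https://api.example.com/info", timeout=5)
    return jsonify(r.json())

If your server runs threaded workers, note that requests does not formally guarantee Session thread safety, although sharing one session for plain GETs like this is a common pattern.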
1
1
79,620,333
2025-5-13
https://stackoverflow.com/questions/79620333/insert-new-column-of-blanks-into-an-existing-dataframe
I have an existing dataframe: data = [[5011025, 234], [5012025, 937], [5013025, 625]] df = pd.DataFrame(data) output: 0 1 0 5011025 234 1 5012025 937 2 5013025 625 What I need to do is insert a new column at 0 (the same # of rows) that contains 3 spaces. Recreating the dataframe, from scratch, it would be something like this: data = [[' ',5011025, 234], [' ',5012025, 937], [' ',5013025, 625]] df = pd.DataFrame(data) desired output: 0 1 2 0 5011025 234 1 5012025 937 2 5013025 625 What is the best way to insert() this new column into an existing dataframe, that may be hundreds of rows? Ultimately, i'm trying to figure out how to write a function that will shift all columns of a dataframe x number of spaces to the right.
Based on your comment, you could shift all cols up one and add a col 0 like this: import pandas as pd data = [[5011025, 234], [5012025, 937], [5013025, 625]] df = pd.DataFrame(data) df.columns = df.columns + 1 df[0] = ' ' df = df.sort_index(axis=1)
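For the broader goal stated at the end of the question (shifting all columns x positions to the right), a small sketch that generalizes the same relabel-and-sort idea; the helper name shift_right and the space fill value are my own choices, and it assumes the default integer column labels used in the question:

import pandas as pd

def shift_right(df: pd.DataFrame, x: int, fill=' ') -> pd.DataFrame:
    # Move every existing integer label up by x, then add x fill columns at 0..x-1
    out = df.copy()
    out.columns = out.columns + x
    for i in range(x):
        out[i] = fill
    return out.sort_index(axis=1)

data = [[5011025, 234], [5012025, 937], [5013025, 625]]
df = pd.DataFrame(data)
shifted = shift_right(df, 2)  # columns 0 and 1 now hold the blank fill value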
2
2
79,620,088
2025-5-13
https://stackoverflow.com/questions/79620088/how-can-i-make-a-simple-idempotent-post-endpoint-in-a-flask-micro-service
I’m building a small internal micro-service in Python 3.12 / Flask 3.0. The service accepts POST /upload requests that insert a record into PostgreSQL. Problem Mobile clients sometimes retry the request when the network is flaky, so I end up with duplicate rows: @app.post("/upload") def upload(): payload = request.get_json() db.execute( "INSERT INTO photos (user_id, filename, uploaded_at) VALUES (%s, %s, NOW())", (payload["user_id"], payload["filename"]), ) return jsonify({"status": "ok"}), 201 What I tried Added a UNIQUE (user_id, filename) constraint – works, but clients now get a raw SQL error on duplicate inserts. Wrapped the insert in ON CONFLICT DO NOTHING – avoids the error but I can’t tell whether the row was really inserted. Googled for “Flask idempotent POST” and found libraries like Flask-Idem, but they feel heavyweight for a single route. Question: What’s the simplest, idiomatic way in Flask to make this endpoint idempotent so that: POSTing the same photo twice is harmless; the client still gets a clear 201 Created the first time and 200 OK for retries; and I don’t have to introduce extra infrastructure (Kafka, Redis, etc.)?
Give the table a uniqueness guarantee so duplicates physically can’t happen. Use an UPSERT (INSERT … ON CONFLICT) with RETURNING so you know whether the row was really inserted. Map that to HTTP status codes.
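A minimal sketch of that pattern, assuming a psycopg2-style connection (the db object, the photos table, and an id primary-key column come from the question or are assumed). ON CONFLICT (user_id, filename) DO NOTHING RETURNING id returns a row only when an insert actually happened, which maps directly onto 201 vs 200:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/upload")
def upload():
    payload = request.get_json()
    cur = db.cursor()  # `db` is the existing connection used in the question
    cur.execute(
        """
        INSERT INTO photos (user_id, filename, uploaded_at)
        VALUES (%s, %s, NOW())
        ON CONFLICT (user_id, filename) DO NOTHING
        RETURNING id
        """,
        (payload["user_id"], payload["filename"]),
    )
    inserted = cur.fetchone() is not None  # a row comes back only on a real insert
    db.commit()
    return jsonify({"status": "ok"}), (201 if inserted else 200)

The ON CONFLICT target must match the UNIQUE (user_id, filename) constraint that was already added, so retries hit the conflict path instead of raising an error.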
1
1
79,619,950
2025-5-13
https://stackoverflow.com/questions/79619950/is-there-a-way-to-filter-columns-of-a-pandas-dataframe-which-include-elements-of
In the below dataframe I would like to filter the columns based on a list called 'animals' to select all the columns that include the list elements. animal_data = { "date": ["2023-01-22","2023-11-16","2024-06-30","2024-08-16","2025-01-22"], "cats_fostered": [1,2,3,4,5], "cats_adopted":[1,2,3,4,5], "dogs_fostered":[1,2,3,4,5], "dogs_adopted":[1,2,3,4,5], "rabbits_fostered":[1,2,3,4,5], "rabbits_adopted":[1,2,3,4,5] } animals = ["date","cat","rabbit"] animal_data = { "date": ["2023-01-22","2023-11-16","2024-06-30","2024-08-16","2025-01-22"], "cats_fostered": [1,2,3,4,5], "cats_adopted":[1,2,3,4,5], "rabbits_fostered":[1,2,3,4,5], "rabbits_adopted":[1,2,3,4,5] } I have tried some approaches below but they either don't work with lists or return no columns as it is looking for an exact match with 'cats' or 'rabbits' and not just columns that contain the strings animal_data[animal_data.columns.intersection(animals)] # returns an empty df animal_data.filter(regex=animals) # returns an error: not able to use regex with a list
The issue with both attempts is that you are looking for a substring of the column names. Except for the date column, there is no exact match between the strings in the animals list and the actual column names. One possibility is to build the regex for .filter with "|".join(...), or to use a "manual" list comprehension with string operations (for example in or .startswith). You can also "hardcode" "date" so that the animals list only contains animals. >>> animals = ["cat"] >>> df.filter(regex="date|" + "|".join(animals)) date cats_fostered cats_adopted 0 2023-01-22 1 1 1 2023-11-16 2 2 2 2024-06-30 3 3 3 2024-08-16 4 4 4 2025-01-22 5 5 >>> animals = ["cat", "rabbit"] >>> df.filter(regex="date|" + "|".join(animals)) date cats_fostered cats_adopted rabbits_fostered rabbits_adopted 0 2023-01-22 1 1 1 1 1 2023-11-16 2 2 2 2 2 2024-06-30 3 3 3 3 3 2024-08-16 4 4 4 4 4 2025-01-22 5 5 5 5
1
0
79,619,717
2025-5-13
https://stackoverflow.com/questions/79619717/how-to-count-consecutive-increases-in-a-1d-array
I have a 1d numpy array It's mostly decreasing, but it increases in a few places I'm interested in the places where it increases in several consecutive elements, and how many consecutive elements it increases for in each case In other words, I'm interested in the lengths of increasing contiguous sub-arrays I'd like to compute and store this information in an array with the same shape as the input (EG that I could use for plotting) This could be achieved using cumsum on a binary mask, except I want to reset the accumulation every time the array starts decreasing again See example input and expected output below How do I do that? import numpy as np def count_consecutive_increases(y: np.ndarray) -> np.ndarray: ... y = np.array([9, 8, 7, 9, 6, 5, 6, 7, 8, 4, 3, 1, 2, 3, 0]) c = count_consecutive_increases(y) print(y) print(c) # >>> [9 8 7 9 6 5 6 7 8 4 3 1 2 3 0] # >>> [0 0 0 1 0 0 1 2 3 0 0 0 1 2 0]
Here is another solution: import numpy as np def count_consecutive_increases(y: np.ndarray) -> np.ndarray: increases = np.diff(y, prepend=y[0]) > 0 all_summed = np.cumsum(increases) return all_summed - np.maximum.accumulate(all_summed * ~increases) y = np.array([9, 8, 7, 9, 6, 5, 6, 7, 8, 4, 3, 1, 2, 3, 0]) c = count_consecutive_increases(y) print(y) # >>> [9 8 7 9 6 5 6 7 8 4 3 1 2 3 0] print(c) # >>> [0 0 0 1 0 0 1 2 3 0 0 0 1 2 0] The idea is the same as with the solution proposed by OP, albeit a bit shorter: Naively count (by summing over) all indices that have been marked as increasing, then subtract, for each consecutive increasing segment, the count right before its start (the value of which we get by a cumulative maximum over the naive counts at the positions marked as not increasing).
3
3
79,619,760
2025-5-13
https://stackoverflow.com/questions/79619760/polars-list-eval-difference-between-pl-element-and-pl-all
the Polars user guide on Lists and Arrays explains how to manipulate Lists with common expression syntax using .list.eval(), i.e. how to operate on list elements. More specifically, the user guide states: The function eval gives us access to the list elements and pl.element refers to each individual element, but we can also use pl.all() to refer to all of the elements of the list. I do not understand the difference between using pl.element() vs pl.all(), i.e. when this distinction between individual and all elements mentioned in the quote becomes important. In the example below, both yield exactly the same result for various expressions. What am I missing? Thank you so much for your help! import polars as pl df = pl.DataFrame( { "a": [[1], [3,2], [6,4,5]] } ) print(df) shape: (3, 1) ┌───────────┐ │ a │ │ --- │ │ list[i64] │ ╞═══════════╡ │ [1] │ │ [3, 2] │ │ [6, 4, 5] │ └───────────┘ ## using pl.element() result_element = df.with_columns( pl.col("a").list.eval(pl.element()**2).alias("square"), pl.col("a").list.eval(pl.element().rank()).alias("rank"), pl.col("a").list.eval(pl.element().count()).alias("count") ) ## using pl.all() result_all = df.with_columns( pl.col("a").list.eval(pl.all()**2).alias("square"), pl.col("a").list.eval(pl.all().rank()).alias("rank"), pl.col("a").list.eval(pl.all().count()).alias("count") ) print(result_element.equals(result_all)) True
The method pl.all(), called without arguments, refers to all columns available in the context. It does not have a special meaning within list.eval(), but since the only column available inside of it is the one holding the list elements, it works the same as pl.element(). You could also get the same behavior using either pl.col(''), a column whose name is an empty string (that is the name of the column (list) inside of .list.eval(...), so it is equivalent to pl.element()), or pl.col('*'), a special name that selects all columns, which is equivalent to pl.all(). import polars as pl from polars.testing import assert_frame_equal df = pl.DataFrame({'x': [[1,2,3]]}) def test(expression): return pl.col('x').list.eval(expression.mul(2).add(1)) reference = df.select(test(pl.element())) for expression in [pl.all(), pl.col(''), pl.col('*')]: assert_frame_equal( reference, df.select(test(expression)) )
2
1
79,619,061
2025-5-13
https://stackoverflow.com/questions/79619061/replacing-values-in-columns-with-values-from-another-columns-according-to-mappin
I have this kind of dataframe: df = pd.DataFrame({ "A1": [1, 11, 111], "A2": [2, 22, 222], "A3": [3, 33, 333], "A4": [4, 44, 444], "A5": [5, 55, 555] }) A1 A2 A3 A4 A5 0 1 2 3 4 5 1 11 22 33 44 55 2 111 222 333 444 555 and this kind of mapping: mapping = { "A1": ["A2", "A3"], "A4": ["A5"] } which means that I want all columns in list to have values from key column so: A2 and A3 should be populated with values from A1, and A5 should be populated with values from A4. Resulting dataframe should look like this: A1 A2 A3 A4 A5 0 1 1 1 4 4 1 11 11 11 44 44 2 111 111 111 444 444 I managed to do it pretty simply like this: for k, v in mapping.items(): for col in v: df[col] = df[k] but I was wondering if there is vectorized way of doing it (more pandactic way)?
You could rework the dictionary and use assign: out = df.assign(**{col: df.get(k) for k, v in mapping.items() for col in v}) NB. assign is not in place, either use this in chained commands, or reassign to df. Or you could reindex and rename/set_axis: dic = {v: k for k, l in mapping.items() for v in l} out = (df.reindex(columns=df.rename(columns=dic).columns) .set_axis(df.columns, axis=1) ) Output: A1 A2 A3 A4 A5 0 1 1 1 4 4 1 11 11 11 44 44 2 111 111 111 444 444
5
4
79,618,775
2025-5-13
https://stackoverflow.com/questions/79618775/how-to-add-new-feature-to-torch-geometric-data-object
I am using the Zinc graph dataset via torch geometric which I access as zinc_dataset = ZINC(root='my_path', split='train') Each data element is a graph zinc_dataset[0] looks like Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1]) I have computed a tensor valued feature for each graph in the dataset. I have stored these tensors in a list with the ith element of the list being the feature for the ith graph in zinc_dataset. I would like to add these new features to the data object. So ideally I want the result to be Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1], new_feature=[33,12]) I have looked at the solution provided by How to add a new attribute to a torch_geometric.data Data object element? but that hasn't worked for me. Could someone please help me take my list of new features and include them in the data object? Thanks
To add your list of new features (e.g. List[Tensor], with each tensor corresponding to a graph in the dataset) to each torch_geometric.data.Data object in a Dataset like ZINC, you can simply assign your new tensor as an attribute of each Data object. Here's how you can do it step-by-step: import torch from torch_geometric.datasets import ZINC from torch_geometric.data import InMemoryDataset # 1. Load the ZINC training dataset zinc_dataset = ZINC(root='my_path', split='train') # 2. Create a list of new features for each graph # Replace this with your actual feature list (must match number of nodes per graph) new_features = [] for data in zinc_dataset: num_nodes = data.x.size(0) # data.x is [num_nodes, feature_dim] new_feat = torch.randn(num_nodes, 12) # Example: [num_nodes, 12] new_features.append(new_feat) # 3. Define a custom dataset that injects new_feature into each graph's Data object class ModifiedZINC(InMemoryDataset): def __init__(self, original_dataset, new_features_list): self.data_list = [] for i in range(len(original_dataset)): data = original_dataset[i] data.new_feature = new_features_list[i] self.data_list.append(data) super().__init__('.', transform=None, pre_transform=None) self.data, self.slices = self.collate(self.data_list) def __len__(self): return len(self.data_list) def get(self, idx): return self.data_list[idx] # 4. Create the modified dataset with new features modified_dataset = ModifiedZINC(zinc_dataset, new_features) # 5. Check the result sample = modified_dataset[0] print(sample) print("Shape of new feature:", sample.new_feature.shape) output: Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1], new_feature=[33, 12]) Shape of new feature: torch.Size([33, 12])
2
2
79,621,854
2025-5-14
https://stackoverflow.com/questions/79621854/compute-cumulative-mean-std-on-polars-dataframe-using-over
I want to compute the cumulative mean & std on a polars dataframe column. For the mean I tried this: import polars as pl df = pl.DataFrame({ 'value': [4, 6, 8, 11, 5, 6, 8, 15], 'class': ['A', 'A', 'B', 'A', 'B', 'A', 'B', 'B'] }) df.with_columns(cum_mean=pl.col('value').cum_sum().over('class') / pl.int_range(pl.len()).add(1).over('class')) which correctly gives shape: (8, 3) ┌───────┬───────┬──────────┐ │ value ┆ class ┆ cum_mean │ │ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ f64 │ ╞═══════╪═══════╪══════════╡ │ 4 ┆ A ┆ 4.0 │ │ 6 ┆ A ┆ 5.0 │ │ 8 ┆ B ┆ 8.0 │ │ 11 ┆ A ┆ 7.0 │ │ 5 ┆ B ┆ 6.5 │ │ 6 ┆ A ┆ 6.75 │ │ 8 ┆ B ┆ 7.0 │ │ 15 ┆ B ┆ 9.0 │ └───────┴───────┴──────────┘ However, this seems very clunky, and becomes a little more complicated (and possibly error-prone) for std. Is there a nicer (possibly built-in) version for computing the cum mean & cum std?
I might have a cleaner solution. You can get to it using rolling functions like rolling_mean or rolling_std. Here is my proposal: df.with_columns( cum_mean=pl.col('value').cum_sum().over('class')/pl.col('value').cum_count().over('class'), cum_mean_by_rolling=pl.col('value').rolling_mean(window_size=df.shape[0], min_samples=1).over('class'), cum_std_by_rolling=pl.col('value').rolling_std(window_size=df.shape[0], min_samples=1).over('class') ) If you define the window size as the number of rows in the data frame (df.shape[0]) and the minimum number of samples as 1, then you get the desired result. I also changed your implementation for the cum_mean a bit so that it is a bit shorter. If I run the code, I get this result: shape: (8, 5) ┌───────┬───────┬──────────┬─────────────────────┬────────────────────┐ │ value ┆ class ┆ cum_mean ┆ cum_mean_by_rolling ┆ cum_std_by_rolling │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ f64 ┆ f64 ┆ f64 │ ╞═══════╪═══════╪══════════╪═════════════════════╪════════════════════╡ │ 4 ┆ A ┆ 4.0 ┆ 4.0 ┆ null │ │ 6 ┆ A ┆ 5.0 ┆ 5.0 ┆ 1.414214 │ │ 8 ┆ B ┆ 8.0 ┆ 8.0 ┆ null │ │ 11 ┆ A ┆ 7.0 ┆ 7.0 ┆ 3.605551 │ │ 5 ┆ B ┆ 6.5 ┆ 6.5 ┆ 2.12132 │ │ 6 ┆ A ┆ 6.75 ┆ 6.75 ┆ 2.986079 │ │ 8 ┆ B ┆ 7.0 ┆ 7.0 ┆ 1.732051 │ │ 15 ┆ B ┆ 9.0 ┆ 9.0 ┆ 4.242641 │ └───────┴───────┴──────────┴─────────────────────┴────────────────────┘ I did not find a more suitable built-in function. Hope this helps.
2
2
79,620,883
2025-5-14
https://stackoverflow.com/questions/79620883/how-do-i-repeat-one-dataframe-to-match-the-length-of-another-dataframe
I want to combine two DataFrames of unequal length to a new DataFrame with the size of the larger one. Now, specifically, I want to pad the values of the shorter array by repeating it until it is large enough. I know this is possible for lists using itertools.cycle as follows: from itertools import cycle x = range(7) y = range(43) combined = zip(cycle(x), y) Now I want to do the same for DataFrames: import pandas as pd df1 = pd.DataFrame(...) # length 7 df2 = pd.DataFrame(...) # length 43 df_comb = pd.concat([cycle(df1),df2], axis=1) Of course this doesn't work, but I don't know if there is an option to do this or to just manually repeat the array.
If you want to combine the two DataFrames to obtain an output DataFrame of the length of the longest input with repetitions of the smallest input that restart like itertools.cycle, you could compute a common key (with numpy.arange and the modulo (%) operator) to perform a merge: out = (df1.merge(df2, left_on=np.arange(len(df1))%len(df2), right_on=np.arange(len(df2))%len(df1)) .drop(columns=['key_0']) ) Output: col1 col2 col3 col4 0 A X a Y 1 B X b Y 2 C X c Y 3 D X a Y 4 E X b Y 5 F X c Y 6 G X a Y Intermediate without dropping the merging key: key_0 col1 col2 col3 col4 0 0 A X a Y 1 1 B X b Y 2 2 C X c Y 3 0 D X a Y 4 1 E X b Y 5 2 F X c Y 6 0 G X a Y Used inputs: # df1 col1 col2 0 A X 1 B X 2 C X 3 D X 4 E X 5 F X 6 G X # df2 col3 col4 0 a Y 1 b Y 2 c Y
1
1
79,620,845
2025-5-14
https://stackoverflow.com/questions/79620845/how-is-np-repeat-so-fast
I am implementing the Poisson bootstrap in Rust and wanted to benchmark my repeat function against numpy's. Briefly, repeat takes in two arguments, data and weight, and repeats each element of data by the weight, e.g. [1, 2, 3], [1, 2, 0] -> [1, 2, 2]. My naive version was around 4.5x slower than np.repeat. pub fn repeat_by(arr: &[f64], repeats: &[u64]) -> Vec<f64> { // Use flat_map to create a single iterator of all repeated elements let result: Vec<f64> = arr .iter() .zip(repeats.iter()) .flat_map(|(&value, &count)| std::iter::repeat_n(value, count as usize)) .collect(); result } I also tried a couple of more versions, e.g. one where I pre-allocated a vector with the necessary capacity, but all performed similarly. While doing more investigating though, I found that np.repeat is actually way faster than other numpy functions that I expected to perform similarly. For example, we can build a list of indices and use numpy slicing / take to perform the same operation as np.repeat. However, doing this (and even removing the list construction from the timings), np.repeat is around 3x faster than numpy slicing / take. import timeit import numpy as np N_ROWS = 100_000 x = np.random.rand(N_ROWS) weight = np.random.poisson(1, len(data)) # pre-compute the indices so slow python looping doesn't affect the timing indices = [] for w in weight: for i in range(w): indices.append(i) print(timeit.timeit(lambda: np.repeat(x, weight), number=1_000)) # 0.8337333500003297 print(timeit.timeit(lambda: np.take(x, indices), number=1_000)) # 3.1320624930012855 My C is not so good, but it seems like the relevant implementation is here: https://github.com/numpy/numpy/blob/main/numpy/_core/src/multiarray/item_selection.c#L785. It would be amazing if someone could help me understand at a high level what this code is doing--on the surface, it doesn't look like anything particularly special (SIMD, etc.), and looks pretty similar to my naive Rust version (memcpy vs repeat_n). In addition, I am struggling to understand why it performs so much better than even numpy slicing.
TL;DR: the gap is certainly due to the use of wider loads/stores in Numpy than your Rust code, and you should avoid indexing if you can for sake of performance. Performance of the Numpy code VS your Rust code First of all, we can analyse the assembly code generated from your Rust code (I am not very familiar with Rust but I am with assembly). The generated code is quite big, but here is the main part (see it on Godbolt): example::repeat_by::hf03ad1ea376407dc: push rbp push r15 push r14 push r13 push r12 push rbx sub rsp, 72 mov r12, rdx cmp r8, rdx cmovb r12, r8 test r12, r12 je .LBB2_4 mov r14, rcx mov r15, r12 neg r15 mov ebx, 1 .LBB2_2: mov r13, qword ptr [r14 + 8*rbx - 8] test r13, r13 jne .LBB2_5 lea rax, [r15 + rbx] inc rax inc rbx cmp rax, 1 jne .LBB2_2 .LBB2_4: mov qword ptr [rdi], 0 mov qword ptr [rdi + 8], 8 mov qword ptr [rdi + 16], 0 jmp .LBB2_17 .LBB2_5: mov qword ptr [rsp + 48], rsi mov qword ptr [rsp + 56], rdi cmp r13, 5 mov ebp, 4 cmovae rbp, r13 lea rcx, [8*rbp] mov rax, r13 shr rax, 61 jne .LBB2_6 mov qword ptr [rsp + 8], 0 movabs rax, 9223372036854775800 cmp rcx, rax ja .LBB2_7 mov rax, qword ptr [rsp + 48] mov rax, qword ptr [rax + 8*rbx - 8] mov qword ptr [rsp + 16], rax mov rax, qword ptr [rip + __rust_no_alloc_shim_is_unstable@GOTPCREL] movzx eax, byte ptr [rax] mov eax, 8 mov qword ptr [rsp + 8], rax mov esi, 8 mov rdi, rcx mov qword ptr [rsp + 64], rcx call qword ptr [rip + __rust_alloc@GOTPCREL] mov rcx, qword ptr [rsp + 64] test rax, rax je .LBB2_7 mov rcx, qword ptr [rsp + 16] mov qword ptr [rax], rcx mov qword ptr [rsp + 24], rbp mov qword ptr [rsp + 32], rax mov qword ptr [rsp + 40], 1 mov ebp, 1 jmp .LBB2_11 .LBB2_22: mov rcx, qword ptr [rsp + 16] mov qword ptr [rax + 8*rbp], rcx inc rbp mov qword ptr [rsp + 40], rbp .LBB2_11: dec r13 je .LBB2_12 cmp rbp, qword ptr [rsp + 24] jne .LBB2_22 .LBB2_20: lea rdi, [rsp + 24] mov rsi, rbp mov rdx, r13 call alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle::hd90f8297b476acb7 mov rax, qword ptr [rsp + 32] jmp .LBB2_22 .LBB2_12: cmp rbx, r12 jae .LBB2_16 inc rbx .LBB2_14: mov r13, qword ptr [r14 + 8*rbx - 8] test r13, r13 jne .LBB2_18 lea rcx, [r15 + rbx] inc rcx inc rbx cmp rcx, 1 jne .LBB2_14 jmp .LBB2_16 .LBB2_18: mov rcx, qword ptr [rsp + 48] mov rcx, qword ptr [rcx + 8*rbx - 8] mov qword ptr [rsp + 16], rcx cmp rbp, qword ptr [rsp + 24] jne .LBB2_22 jmp .LBB2_20 .LBB2_16: mov rax, qword ptr [rsp + 40] mov rdi, qword ptr [rsp + 56] mov qword ptr [rdi + 16], rax movups xmm0, xmmword ptr [rsp + 24] movups xmmword ptr [rdi], xmm0 .LBB2_17: mov rax, rdi add rsp, 72 pop rbx pop r12 pop r13 pop r14 pop r15 pop rbp ret .LBB2_6: mov qword ptr [rsp + 8], 0 .LBB2_7: lea rdx, [rip + .L__unnamed_2] mov rdi, qword ptr [rsp + 8] mov rsi, rcx call qword ptr [rip + alloc::raw_vec::handle_error::h5290ea7eaad4c986@GOTPCREL] mov rbx, rax mov rsi, qword ptr [rsp + 24] test rsi, rsi je .LBB2_25 mov rdi, qword ptr [rsp + 32] shl rsi, 3 mov edx, 8 call qword ptr [rip + __rust_dealloc@GOTPCREL] .LBB2_25: mov rdi, rbx call _Unwind_Resume@PLT We can see there there is only a single use of SIMD (xmm, ymm or zmm) registers and it is not in a loop. There is also no call to memcpy. This means the Rust computation is certainly not vectorised using SIMD instructions. The loops seems to move at best 64-bit items. The SSE (SIMD) instruction set can move 128-bit vectors and the AVX (SIMD) one can move 256-bit one (512-bit for AVX-512 supported only on few recent PC CPUs and most recent server ones). 
As a result, the Rust code is certainly sub-optimal because it performs scalar moves. On the other hand, Numpy basically calls memcpy in nested loops (in the linked code) as long as needs_custom_copy is false, which I think is the case for all basic contiguous native arrays like the one computed in your code (i.e. no pure-Python objects in the array). memcpy is generally aggressively optimized, so it benefits from SIMD instructions on platforms where that is worthwhile. For very small copies, it can be slower than scalar moves though (due to the call and sometimes some checks). I expect the Rust code to be about 4 times slower than Numpy on a CPU supporting AVX-2 (assuming the target CPU actually supports a 256-bit-wide data path, which is AFAIK the case on relatively recent mainstream CPUs) as long as the size of the copied slices is rather big (e.g. at least a few dozen double-precision items). Put shortly, the gap is certainly due to the (indirect) use of wide SIMD loads/stores in Numpy as opposed to the Rust code using less-efficient scalar loads/stores. Performance of np.repeat VS np.take I found that np.repeat is actually way faster than other numpy functions that I expected to perform similarly. [...] np.repeat is around 3x faster than numpy slicing / take. Regarding np.take, it is more expensive because it cannot really benefit from SIMD instructions and Numpy also needs to read the indices from memory. To be more precise, on x86-64 CPUs, AVX-2 and AVX-512 provide gather instructions to do that, but they are not so fast compared to scalar loads (possibly even slower, depending on the actual target micro-architecture of the CPU). For example, on AMD Zen+/Zen2/Zen3/Zen4 CPUs, gather instructions are not worth it (not faster), mainly because the underlying hardware implementation is not efficient yet (micro-coded). On relatively recent Intel CPUs supporting AVX-2, gather instructions are a bit faster, especially for 32-bit items and 32-bit addresses -- they do not really pay off for 64-bit ones (which is your use-case). On Intel CPUs supporting AVX-512 (mainly IceLake CPUs and server-side CPUs), they are worth it for both 32-bit and 64-bit items. x86-64 CPUs not supporting AVX-2 (i.e. old ones) do not support gather instructions at all. Even the best (x86-64) gather implementation cannot compete with the (256-bit or 512-bit) packed loads/stores typically done by memcpy in np.repeat on wide slices, simply because all mainstream CPUs execute a gather internally as scalar loads (i.e. <=64-bit), saturating the load ports. Some memcpy implementations use rep movsb, which is very well optimised on quite recent x86-64 CPUs (it adapts the granularity of the loads/stores to the use-case and can even use streaming stores if needed on wide arrays). Even on GPUs (which have an efficient gather implementation), gather instructions are still generally more expensive than packed loads. They are at best equally fast, but one also needs to consider the overhead of reading the indices from memory, so it can never be faster. Put shortly, you should avoid indexing if you can since it is not very SIMD-friendly.
4
6
79,618,810
2025-5-13
https://stackoverflow.com/questions/79618810/fielderror-at-chat-search-unsupported-lookup-groupchat-name-for-charfield-or
I'm trying to be able to search chat groups by looking up the chatroom name. I'm using Django Q query... models.py class ChatGroup(models.Model): group_name = models.CharField(max_length=128, unique=True, default=shortuuid.uuid) groupchat_name = models.CharField(max_length=128, null=True, blank=True) picture = models.ImageField(upload_to='uploads/profile_pictures', default='uploads/profile_pictures/default.png', blank=True) about = models.TextField(max_length=500, blank=True, null=True) admin = models.ForeignKey(User, related_name='groupchats', blank=True, null=True, on_delete=models.SET_NULL) users_online = models.ManyToManyField(User, related_name='online_in_groups', blank=True) members = models.ManyToManyField(User, related_name='chat_groups', blank=True) is_private = models.BooleanField(default=False) def __str__(self): return self.group_name views.py from django.db.models import Q class ChatSearch(View): def get(self, request, *args, **kwargs): query = self.request.GET.get('chat-query') chatroom_list = ChatGroup.objects.filter( Q(group_name__groupchat_name__icontains=query) ) context = { 'chatroom_list': chatroom_list } return render(request, 'chat/search.html', context) I tried to add the Traceback but it was too much code for this post. Any help you can provide it would be greatly appreciate it!
According to the OP in a comment: "Using a class-based view was triggering a query when I opened the page. I had to create a new page with just the input query, then use the query results on a separate page."
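For completeness, the FieldError named in the title is raised because the filter chains a lookup through a CharField (group_name__groupchat_name). If the intent was to match either name field, a hedged sketch of the query (not part of the OP's accepted workaround) would be:

from django.db.models import Q

chatroom_list = ChatGroup.objects.filter(
    Q(group_name__icontains=query) | Q(groupchat_name__icontains=query)
)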
1
0
79,622,589
2025-5-15
https://stackoverflow.com/questions/79622589/ndb-python-error-returning-object-has-no-attribute-connection-from-host
I have the code below which is built on top of ndb. When running I receive the two errors below. Can I ask for some guidance, specifically what is the connection_from_host referring to? import flask import config import util app = flask.Flask(__name__) from google.appengine.api import app_identity from google.appengine.api import taskqueue, search, memcache from apiclient.discovery import build, HttpError from google.cloud import ndb #from oauth2client.client import GoogleCredentials from apiclient.http import MediaIoBaseUpload from datetime import datetime, timedelta from functools import partial from io import BytesIO import os from os.path import splitext, basename from model import Config from model import VideosToCollections from pytz import timezone import datetime import httplib2 import iso8601 import time import requests import requests_toolbelt.adapters.appengine requests_toolbelt.adapters.appengine.monkeypatch() from operator import attrgetter import model from model import CallBack import re import config import google.appengine.api client = ndb.Client() def ndb_wsgi_middleware(wsgi_app): def middleware(environ, start_response): with client.context(): return wsgi_app(environ, start_response) return middleware app.wsgi_app = ndb_wsgi_middleware(google.appengine.api.wrap_wsgi_app(app.wsgi_app)) @app.route('/collectionsync/', methods=['GET']) #@ndb.transactional def collectionsync(): collection_dbs, collection_cursor = model.Collection.get_dbs( order='name' ) This returns: /layers/google.python.pip/pip/lib/python3.12/site-packages/urllib3/contrib/appengine.py:111: AppEnginePlatformWarning: urllib3 is using URLFetch on Google App Engine sandbox instead of sockets. To use sockets directly instead of URLFetch see https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html. google.api_core.exceptions.RetryError: Maximum number of 3 retries exceeded while calling <function make_call..rpc_call at 0x3ee3d42d6840>, last exception: 503 Getting metadata from plugin failed with error: '_AppEnginePoolManager' object has no attribute 'connection_from_host'
I think the presence of requests_toolbelt dependency in your project caused the issue. It may have forced the requests library to use Google App Engine’s URLFetch service (urllib3), which requires URLFetch to be present. I think that was often necessary in the Python 2 runtime environment on GAE, but not in Python 3. You may try removing requests_toolbelt from your requirements.txt file (see this post). There’s also a possibility that the URLFetch warning is somewhat related to the RetryError connection_from_host. You should try migrating URLFetch to standard Python libraries compatible with the Python 3 GAE environment. Refer to this documentation for steps on replacing the URL Fetch API with a Python library, and you may also consider bypassing URLFetch. I hope this helps.
1
1
79,624,185
2025-5-15
https://stackoverflow.com/questions/79624185/how-to-substitute-variable-value-within-another-string-variable-in-python
I have HTML template in database column that is shared with another platform. The HTML template has placeholder variables. This value is pulled from DB in my Python script but for some reason, it is not replacing the placeholder variables with it's value. Here is what the HTML string that is in DB. <html> Dear {FullName}, <p>We are excited to notify you that your account has been activated. Please login to your account at <a href="https://portal.example.com">My Account Portal</a>. </p> </html> I get the name from DB table in the variable FullName. When the email is sent out, it doesn't replaces the name in the html template. If I create a local python variable with the same html template and not pull from database, it works just fine. So what I would like to know is how can I pull the html template from DB and make it work in my Python script? The DB template can't be updated to use %s in html template as it is a vendor provided system and changing that will break that application. Below is the Python script that I am using. cur.execute("select FirstName,LastName, Email, AlternateEmail,CustCode from Customer Where CustCode = ?", f"{CustCode}") for data in cur.fetchall(): FullName += data.FirstName+" "+data.LastName CustEmail += data.Email CustPIN += str(data.CustCode) ToEmail += data.AlternateEmail cur.execute("select Value from [dbo].[Configs] where [Name] = ?",'Email - Subject') for data in cur.fetchall(): mail_subject += data.Value cur.execute("select Value from [dbo].[Configs] where [Name] = ?",'Email - Message Body') for data in cur.fetchall(): mail_body = f''' {data.Value} ''' send_mail(from_mail='[email protected]', to_mail=f'{cust_email}',subject=f'{mail_subject}',mailbody=mail_body,mime_type='html') if I change the script to hardcode the html template within the script, it works and I want to avoid that so I wouldn't have to change the script every time when the template changes. What are my option in this situation?
I think you just need to call the .format() string method on mail_body, and pass in the value of FullName. You can either do it at the end when you call send_mail(): mailbody=mail_body.format(FullName=FullName) Or you can do it when you first read mail_body from the database: mail_body = f''' {data.Value} '''.format(FullName=FullName)
1
1
79,624,117
2025-5-15
https://stackoverflow.com/questions/79624117/wrap-class-method-with-arguments-only-once
There are two classes and I want to wrap item.foo method only once, to prevent cache = {param1: 'param_value'} being reinited class Foo: _count = 0 def foo(self, param2): self._count += param2 class Bar: _collection = [Foo(), Foo(), Foo()] def bar(self, param1, param2): for item in self._collection: wrapped_function = wrapper(item.foo, param1) wrapped_function(param2) def wrapper(func, param1): # some database call or whatever cache = {param1: 'param_value'} def _wrapper(*args, **kwargs): # access value to read print(cache[param1]) return func(*args, **kwargs) return _wrapper bar = Bar() bar.bar(1, 2) It can be achieved if wrapper had no params in it and was used as a simple decorator, but I have to pass a param1 in it, though it's always the same. I also can save cache = {param1: 'param_value'} before the cycle in def bar and pass it as the parameter, but I was wondering if there any other way to accomplish it. Kinda can't wrap my head around it(pun intended)
It sounds like you want to wrap Foo.foo, not item.foo (which is a different bound method for each item in the collection). Something like class Bar: _collection = [Foo(), Foo(), Foo()] def bar(self, param1, param2): wrapped_function = wrapper(Foo.foo, param1) for item in self._collection: wrapped_function(item, param2) It's more complicated if you only want to apply the wrapper once per unique type of object in _collection, as you need to build a cache of wrapped functions. Something like class Bar: _collection = [Foo(), Foo(), Foo()] wrap_cache = {} def bar(self, param1, param2): for item in self._collection: itype = type(item) if itype not in self.wrap_cache: self.wrap_cache[itype] = wrapper(itype.foo, param1) wrapped_function = self.wrap_cache[itype] wrapped_function(item, param2)
2
2
79,623,642
2025-5-15
https://stackoverflow.com/questions/79623642/python-threading-tkinter-event-set-doesnt-terminate-thread-if-bound-to-tk
I'm writing an app that generates a live histogram to be displayed in a Tkinter window. This is more or less how the app works: A Histogram class is responsible for generating the embedded histogram inside the Tk window, collecting data and update the histogram accordingly. There is a 'Start' button that creates a thread which is responsible for collecting data points and putting them in a queue, calling an update_histogram function which pulls the new data from the queue and redraws the histogram. Since the function in the thread runs a loop indefinitely, there's also a 'Stop' button which stops the loop by setting an Event(). The stop function called by the button is also called when trying to close the window while the thread is running. The issue Even if the same stop function is called by clicking the button or upon closing the window, if I try to close the window during a run the app freezes (is_alive() returns True), but not if I first click on the 'Stop' button and then close the window (is_alive() returns False). What am I doing wrong? MWE import tkinter as tk from tkinter import ttk from threading import Thread, Event from queue import Queue import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import numpy as np class Histogram(): def __init__(self, root: tk.Tk): self.root = root self.buffer: list[float] = [] self.event = Event() self.queue = Queue() self.stopped = False self.fig, self.ax = plt.subplots(figsize=(4, 3), dpi=64, layout='tight') self.ax.set_xlim(0, 80) self.ax.set_ylim(0, 30) self.ax.set_xlabel('Time (ns)') self.ax.set_ylabel('Counts') self.canvas = FigureCanvasTkAgg(self.fig, master=self.root) self.canvas.draw() self.canvas.get_tk_widget().grid( column=0, columnspan=2, row=1, padx=6, pady=6, sticky='nesw' ) def start(self) -> None: self.cleanup() # Scrape canvas & buffer if restarting self.thread = Thread(target=self.follow) self.thread.start() self.stopped = True self.root.protocol('WM_DELETE_WINDOW', self.kill) def follow(self) -> None: count = 1 while not self.event.is_set(): data = np.random.normal(loc=40.0, scale=10.0) self.queue.put(data) self.update_histogram(n=count) count += 1 self.event.clear() self.stopped = True def update_histogram(self, n: int) -> None: data = self.queue.get() self.buffer.append(data) if n % 5 == 0: # Update every 5 new data points if self.ax.patches: _ = [b.remove() for b in self.ax.patches] counts, bins = np.histogram(self.buffer, bins=80, range=(0, 80)) self.ax.stairs(counts, bins, color='blueviolet', fill=True) # Add 10 to y upper limit if highest bar exceeds 95% of it y_upper_lim = self.ax.get_ylim()[1] if np.max(counts) > y_upper_lim * 0.95: self.ax.set_ylim(0, y_upper_lim + 10) self.canvas.draw() self.queue.task_done() def cleanup(self) -> None: if self.ax.patches: _ = [b.remove() for b in self.ax.patches] self.buffer = [] def stop(self) -> None: self.event.set() def kill(self) -> None: self.stop() all_clear = self.stopped while not all_clear: all_clear = self.stopped print(f'{self.thread.is_alive()=}') self.root.quit() self.root.destroy() def main(): padding = dict(padx=12, pady=12, ipadx=6, ipady=6) root = tk.Tk() root.title('Live Histogram') hist = Histogram(root=root) start_button = ttk.Button(root, text='START', command=hist.start) start_button.grid(column=0, row=0, **padding, sticky='new') stop_button = ttk.Button(root, text='STOP', command=hist.stop) stop_button.grid(column=1, row=0, **padding, sticky='new') root.mainloop() if __name__ == '__main__': main() Note 1: The reason 
why I went for this fairly complicated setup is that I've learned that any other loop run in the main thread will cause the Tkinter mainloop to freeze, so that you can't interact with any widget while the loop is running. Note 2: I'm pretty sure I'm doing exactly what the accepted answer says in this post but here it doesn't work. This has been driving me crazy for days! Thank you in advance :)
The issue stems from a combination of thread synchronization problems and blocking behavior in the main (GUI) thread during window closure. The primary flaw is that your kill() method uses a non-thread-safe busy-wait loop (while not self.stopped) to monitor the background thread's state. This introduces three critical problems: GUI Freeze: The busy-wait loop blocks the main thread, preventing Tkinter from processing its event queue, including window closure events and user interactions. Thread Starvation Risk: Since the main thread repeatedly acquires the GIL without yielding, the background thread may be deprived of CPU time, delaying or preventing it from setting self.stopped = True. Improper Synchronization: Using a simple boolean flag like self.stopped for thread communication is not thread-safe. While CPython’s GIL mitigates some risks, there’s no guarantee that the main thread will see the updated value in a timely or consistent manner, particularly in other Python implementations or complex scenarios. Key fixes: 1. removed self.stopped = False 2. Use thread.join() with a timeout to ensure clean shutdown. def stop(self) -> None: self.event.set() if self.thread is not None: self.thread.join(timeout=0.1) self.thread = None 3. Initialize as None and track lifecycle. def __init__(self, ...): self.thread = None 4. Just call stop() and destroy window. def kill(self) -> None: self.stop() self.root.quit() self.root.destroy() 5. Clear event in start(). def start(self) -> None: self.event.clear() self.cleanup() self.thread = Thread(target=self.follow) self.thread.start() Complete code after correction: import tkinter as tk from tkinter import ttk from threading import Thread, Event from queue import Queue import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import numpy as np class Histogram(): def __init__(self, root: tk.Tk): self.root = root self.buffer: list[float] = [] self.event = Event() self.queue = Queue() self.thread = None # Initialize thread as None self.fig, self.ax = plt.subplots(figsize=(4, 3), dpi=64, layout='tight') self.ax.set_xlim(0, 80) self.ax.set_ylim(0, 30) self.ax.set_xlabel('Time (ns)') self.ax.set_ylabel('Counts') self.canvas = FigureCanvasTkAgg(self.fig, master=self.root) self.canvas.draw() self.canvas.get_tk_widget().grid( column=0, columnspan=2, row=1, padx=6, pady=6, sticky='nesw' ) def start(self) -> None: self.cleanup() # Scrape canvas & buffer if restarting self.event.clear() # Clear the event before starting self.thread = Thread(target=self.follow) self.thread.start() self.root.protocol('WM_DELETE_WINDOW', self.kill) def follow(self) -> None: count = 1 while not self.event.is_set(): data = np.random.normal(loc=40.0, scale=10.0) self.queue.put(data) self.update_histogram(n=count) count += 1 def update_histogram(self, n: int) -> None: data = self.queue.get() self.buffer.append(data) if n % 5 == 0: # Update every 5 new data points if self.ax.patches: _ = [b.remove() for b in self.ax.patches] counts, bins = np.histogram(self.buffer, bins=80, range=(0, 80)) self.ax.stairs(counts, bins, color='blueviolet', fill=True) # Add 10 to y upper limit if highest bar exceeds 95% of it y_upper_lim = self.ax.get_ylim()[1] if np.max(counts) > y_upper_lim * 0.95: self.ax.set_ylim(0, y_upper_lim + 10) self.canvas.draw() self.queue.task_done() def cleanup(self) -> None: if self.ax.patches: _ = [b.remove() for b in self.ax.patches] self.buffer = [] def stop(self) -> None: self.event.set() if self.thread is not None: 
self.thread.join(timeout=0.1) # Wait a short time for thread to finish self.thread = None def kill(self) -> None: self.stop() self.root.quit() self.root.destroy() def main(): padding = dict(padx=12, pady=12, ipadx=6, ipady=6) root = tk.Tk() root.title('Live Histogram') hist = Histogram(root=root) start_button = ttk.Button(root, text='START', command=hist.start) start_button.grid(column=0, row=0, **padding, sticky='new') stop_button = ttk.Button(root, text='STOP', command=hist.stop) stop_button.grid(column=1, row=0, **padding, sticky='new') root.mainloop() if __name__ == '__main__': main() Output:
2
1
79,623,174
2025-5-15
https://stackoverflow.com/questions/79623174/calculating-a-pct-change-between-3-values-in-a-pandas-series-where-one-of-more
Scenario: I have a pandas series that contains 3 values. These values can vary between nan, 0 and any value above zero. I am trying to get the pct_change among the series whenever possible. Examples: [0,nan,50] [0,0,0] [0,0,50] [nan,nan,50] [nan,nan,0] [0,0,nan] [0,nan,0] What I tried: from other SO questions I was able to come up with methods either trying to ignore the nan or shifting, but these can potentially yield a result with empty values. Ideally, if a result cannot be calculated, I would like to output a 0. Code tried: series_test = pd.Series([0,None,50]) series_test.pct_change().where(series_test.notna()) # tested but gives only NaN or inf series_test.pct_change(fill_method=None)[series_test.shift(2).notnull()].dropna() # tested but gives empty result Question: What would be the correct way to approach this? Expected outputs: [0,nan,50] - 0 (undefined case) [0,0,0] - 0 (undefined case) [0,0,50] - 0 (undefined case) [nan,nan,50] - 0 (undefined case) [nan,nan,0] - 0 (undefined case) [0,0,nan] - 0 (undefined case) [0,nan,0] - 0 (undefined case) [1,nan,5] - 400% [0,1,5] - 400% [1,2,nan] - 100% [1,1.3,1.8] - 80%
I think you could dropna, then compute the pct_change and only keep the max finite value: series_test.dropna().pct_change().loc[np.isfinite].max() Or maybe: s.pct_change().where(np.isfinite, 0).max() Example output for the second approach: [0, nan, 50] - 0.0 [0, 0, 0] - 0.0 [0, 0, 50] - 0.0 [nan, nan, 50] - 0.0 [nan, nan, 0] - 0.0 [0, 0, nan] - 0.0 [0, nan, 0] - 0.0 [1, nan, 5] - 4.0 [0, 1, 5] - 4.0 [0, 1, nan] - 0.0 Edit: given your comment, it looks like you want to use the first and last non-zero values to compute the percentage change. In this case, I'd use a custom function: def pct_chg(s): tmp = s[s>0] if len(tmp)>1: return (tmp.iloc[-1]-tmp.iloc[0])/tmp.iloc[0] return 0 Which should be equivalent to the more verbose: (series_test .where(s>0).bfill().ffill() .iloc[[0, -1]].pct_change().fillna(0).iloc[-1] ) Example: [0, nan, 50] - 0 [0, 0, 0] - 0 [0, 0, 50] - 0 [nan, nan, 50] - 0 [nan, nan, 0] - 0 [0, 0, nan] - 0 [0, nan, 0] - 0 [1, nan, 5] - 4.0 [0, 1, 5] - 4.0 [0, 1, nan] - 0 [1, 1.5, 1.6] - 0.6000000000000001
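For reference, a quick usage sketch of the custom function on a few of the example series (assuming pandas and numpy are imported as pd and np):
import numpy as np
import pandas as pd

def pct_chg(s):
    # keep only strictly positive values, then compare the first and last of them
    tmp = s[s > 0]
    if len(tmp) > 1:
        return (tmp.iloc[-1] - tmp.iloc[0]) / tmp.iloc[0]
    return 0

print(pct_chg(pd.Series([0, np.nan, 50])))  # 0   (only one usable value)
print(pct_chg(pd.Series([1, np.nan, 5])))   # 4.0 (400%)
print(pct_chg(pd.Series([1, 1.3, 1.8])))    # 0.8 (80%)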
1
2
79,622,579
2025-5-15
https://stackoverflow.com/questions/79622579/type-annotate-decorator-that-changes-decorated-function-arguments
I want to design a decorator that will allow the wrapped method to take a float or a numpy array of floats. If the passed argument was a float then a float should be returned and if it was a numpy array then a numpy array should be returned. Below is my MWE and latest attempt. I am using VSCode with pylance version v2024.3.2 and Python version 3.12.3. I have a large number of functions I'd like to apply this decorator to. If I was only dealing with one or two functions I could do away with the decorator approach entirely and use @overload. The type error I get is the following: Argument of type "float" cannot be assigned to parameter "a" of type "NDArray[float64]" in function "func" "float" is incompatible with "NDArray[float64]" import numpy as np from numpy import float64 from numpy.typing import NDArray from collections.abc import Callable from typing import TypeAlias, TypeVar T = TypeVar('T', float, NDArray[float64]) PreWrapFunc: TypeAlias = Callable[[NDArray[float64]], NDArray[float64]] PostWrapFunc: TypeAlias = Callable[[T], T] def my_decorator(method: PreWrapFunc) -> PostWrapFunc: def wrapper(arg: T) -> T: if isinstance(arg, float): result = method(np.array([arg,])) return result[0] else: return method(arg) return wrapper @my_decorator def func(a: NDArray[float64]) -> NDArray[float64]: return a * 2 func(1.0) # 2.0, type error is happening here! func(np.array([1.0,])) # array([2.])
Define a protocol with overloaded signatures for your wrapper and then use that as the return type of your decorator: import numpy as np from numpy import float64 from numpy.typing import NDArray from typing import Callable, Protocol, overload class AsFloatOrArray(Protocol): @overload def __call__(self, arg: float) -> float: ... @overload def __call__(self, arg: NDArray[float64]) -> NDArray[float64]: ... def __call__(self, arg): ... def my_decorator( method: Callable[[NDArray[float64]], NDArray[float64]], ) -> AsFloatOrArray: @overload def wrapper(arg: float) -> float: ... @overload def wrapper(arg: NDArray[float64]) -> NDArray[float64]: ... def wrapper(arg): if isinstance(arg, float): return method(np.array([arg]))[0] else: return method(arg) return wrapper @my_decorator def func(a: NDArray[float64]) -> NDArray[float64]: return a * 2 print(func(1.0)) print(func(np.array([1.0]))) Or, more concisely, write the decorator as a class and provide overloaded signatures for the __call__ method: import numpy as np from numpy import float64 from numpy.typing import NDArray from typing import Callable, overload class MyDecorator: def __init__( self, method: Callable[[NDArray[float64]], NDArray[float64]], ) -> None: self.method = method @overload def __call__(self, arg: float) -> float: ... @overload def __call__(self, arg: NDArray[float64]) -> NDArray[float64]: ... def __call__(self, arg): if isinstance(arg, float): return self.method(np.array([arg]))[0] else: return self.method(arg) @MyDecorator def func(a: NDArray[float64]) -> NDArray[float64]: return a * 2 print(func(1.0)) print(func(np.array([1.0])))
1
0
79,622,744
2025-5-15
https://stackoverflow.com/questions/79622744/can-not-find-shadow-root-using-selenium
i try to find a shadow root on a website and clicking a button using the following code: import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By print(f"Checking Browser driver...") options = Options() options.add_argument("start-maximized") options.add_argument('--log-level=3') options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1}) options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service() driver = webdriver.Chrome (service=srv, options=options) link = "https://www.arbeitsagentur.de/jobsuche/suche?wo=Berlin&angebotsart=1&was=Gastronomie%20-%20Minijob&umkreis=50" driver.get (link) time.sleep(5) shadowHost = driver.find_element(By.XPATH,'//bahf-cd-modal[@class="modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated"]') shadowRoot = shadowHost.shadow_root shadowRoot.find_element(By.CSS_SELECTOR, "button[data-testid='bahf-cookie-disclaimer-btn-alle']").click() input("Press!") But i allways get this error: (selenium) C:\DEVNEU\Fiverr2025\TRY\hedifeki>python test.py Checking Browser driver... Press Traceback (most recent call last): File "C:\DEVNEU\Fiverr2025\TRY\hedifeki\test.py", line 26, in <module> shadowHost = driver.find_element(By.XPATH,'//bahf-cd-modal[@class="modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated"]') File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 770, in find_element return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"] ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 384, in execute self.error_handler.check_response(response) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^ File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 232, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//bahf-cd-modal[@class="modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated"]"} (Session info: chrome=136.0.7103.94); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception Stacktrace: GetHandleVerifier [0x00007FF732A4CF65+75717] GetHandleVerifier [0x00007FF732A4CFC0+75808] How can i click the button in this shadow root?
'//bahf-cd-modal[@class="modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated"]' points to an element that is inside the shadow root; you need to locate the element that hosts the shadow root instead: shadow_root = driver.find_element(By.TAG_NAME, 'bahf-cookie-disclaimer-dpl3').shadow_root shadow_root.find_element(By.CSS_SELECTOR, "button[data-testid='bahf-cookie-disclaimer-btn-alle']").click()
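If the cookie banner only appears after a delay, an explicit wait on the shadow host is usually more robust than a fixed sleep; a rough sketch (the tag name is the one from above, the timeout is just a guess):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until the element hosting the shadow root is present, then dive into it
host = WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.TAG_NAME, 'bahf-cookie-disclaimer-dpl3'))
)
host.shadow_root.find_element(
    By.CSS_SELECTOR, "button[data-testid='bahf-cookie-disclaimer-btn-alle']"
).click()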
1
2
79,622,540
2025-5-15
https://stackoverflow.com/questions/79622540/how-to-send-receive-binary-data-to-an-web-application-in-python
I am learning web development, I have some experience in python scripting. This time, I wanted to create an api in python, so review fast api docs. I have the following set up (example contrived for the purpose of this post). Based on the examples in fast api site, I created following python code, which has two methods. Also, I have defined classes that represent the data I want to read in these methods. My understanding is that pydantic library validates data. So, for testing, I used curl command as such, which works. Pydantic maps the data {'name': 'foo'} I sent in curl command, to the class I have defined Item. Is there any documentation on how it maps data? For my next method, I want to post contents of a file, like video/audio data, which I understand I can send as raw binary data. I have defined FileContent class with a field that can store bytes. but when I try to post file contents, curl -X POST -F "[email protected]" http://127.0.0.1:5000/upload_file_content I get json serialization error. how can I map the binary content of the file, I send via curl command to the FileContent class? curl -X "POST" \ "http://127.0.0.1:5000/items" \ -H "accept: application/json" \ -H "Content-Type: application/json" \ -d "{\"name\": \"foo\"}" from fastapi import FastAPI from pydantic import BaseModel, bytes app = FastAPI() class FileContent(BaseModel): data: bytes class Item(BaseModel): name: str @app.post("/items/") async def create_item(item: Item): return item @app.post("/upload_file_content/") async def upload_file_content(file_content: FileContent): # do something with file content here
You're on the right track with FastAPI and Pydantic, but binary file uploads via curl -F (i.e., multipart/form-data) don't get automatically mapped to a Pydantic model like FileContent. When dealing with file uploads, FastAPI provides a special way to handle binary data using UploadFile, not bytes directly in a Pydantic model. Here’s how to rewrite your /upload_file_content/ endpoint: from fastapi import FastAPI, File, UploadFile from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str @app.post("/items/") async def create_item(item: Item): return item @app.post("/upload_file_content/") async def upload_file_content(file: UploadFile = File(...)): contents = await file.read() # Get file contents as bytes return {"filename": file.filename, "size": len(contents)}
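If you prefer testing from Python instead of curl, a small client sketch using the requests library (file name and port taken from the question) might look like this:
import requests

with open("some_file.mp4", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:5000/upload_file_content/",
        # the form field name must match the parameter name ("file")
        files={"file": ("some_file.mp4", f, "video/mp4")},
    )
print(resp.json())  # e.g. {"filename": "some_file.mp4", "size": ...}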
1
1
79,624,459
2025-5-16
https://stackoverflow.com/questions/79624459/merge-dataframes-with-repeated-ids
I have 2 dataframes, dfA & dfB. dfA contains purchases of certain products & dfB contains info on said products. For instance, dfA: purchaseID productID quantity 1 432 1 2 432 4 3 567 7 and dfB: productID name 432 'mower' 567 'cat' I wish to merge the two datasets on productID to produce something like: purchaseID productID quantity name 1 432 1 'mower' 2 432 4 'mower' 3 567 7 'cat' In actual fact, dfA & dfB are much larger. How can I do this? I understand how to do normal one-one merges, but struggling to see how to do one-many.
You can use pd.merge along with the on and how parameters; a one-to-many merge works the same way as a one-to-one merge, the matching row of dfB is simply repeated for every purchase of that product: pd.merge(dfA, dfB, on='productID', how='inner')
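A quick sketch reproducing the example data from the question:
import pandas as pd

dfA = pd.DataFrame({'purchaseID': [1, 2, 3],
                    'productID': [432, 432, 567],
                    'quantity': [1, 4, 7]})
dfB = pd.DataFrame({'productID': [432, 567],
                    'name': ['mower', 'cat']})

merged = pd.merge(dfA, dfB, on='productID', how='inner')
print(merged)
#    purchaseID  productID  quantity   name
# 0           1        432         1  mower
# 1           2        432         4  mower
# 2           3        567         7    cat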
1
1
79,626,356
2025-5-17
https://stackoverflow.com/questions/79626356/split-a-column-of-string-into-list-of-list
How could I split a column of string into list of list? Minimum example: df = pl.DataFrame({'test': "A,B,C,1\nD,E,F,2\nG,H,I,3\nJ,K,L,4"}) I try the following, somehow I stop after the first split df = df.with_columns(pl.col('test').str.split('\n')) My desired result would be it return a list of list inside the dataframe, so that the list of list is readily to be read by other columns result = pl.DataFrame({'test': [[["A","B","C",1], ["D","E","F",2], ["G","H","I",3], ["J","K","L",4]]]}, strict=False) result = result.with_columns( get_data = pl.col('test').list[2].list[3].cast(pl.Int64) # Answer = 3 ) result.glimpse() Rows: 1 Columns: 2 $ test <list[list[str]]> [['A', 'B', 'C', '1'], ['D', 'E', 'F', '2'], ['G', 'H', 'I', '3'], ['J', 'K', 'L', '4']] $ get_data <i64> 3
df.with_columns( pl.col('test') .str.split('\n') .list.eval( pl.element() .str.split(",") ) ) In your example you have a list of mixed strings and numbers which polars doesn't support so your output has to have the numbers as strings. You say you want to use these lists from other columns readily so you might want to convert to a struct column and unnest it so that you have new flat columns.
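A rough sketch of that last suggestion (the field names c1–c4 are made up, and I'm assuming a reasonably recent polars with list.to_struct/unnest):
import polars as pl

df = pl.DataFrame({'test': "A,B,C,1\nD,E,F,2\nG,H,I,3\nJ,K,L,4"})

flat = (
    df.with_columns(
        pl.col('test').str.split('\n').list.eval(pl.element().str.split(','))
    )
    .explode('test')                                      # one row per inner list
    .with_columns(pl.col('test').list.to_struct(fields=['c1', 'c2', 'c3', 'c4']))
    .unnest('test')                                       # c1..c4 become plain columns
    .with_columns(pl.col('c4').cast(pl.Int64))            # numbers back to integers
)
print(flat)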
3
2
79,626,384
2025-5-17
https://stackoverflow.com/questions/79626384/scipy-bpoly-from-derivatives-compared-with-numpy
I implemented this comparison between numpy and scipy for doing the same function interpolation. The results show how numpy crushes scipy. Python version: 3.11.7 NumPy version: 2.1.3 SciPy version: 1.15.2 Custom NumPy interpolation matches SciPy BPoly for 10 000 points. SciPy coeff time: 0.218046 s SciPy eval time : 0.000725 s Custom coeff time: 0.061066 s Custom eval time : 0.000550 s edit: likely, I was too pessimistic below when giving the 4x to 10x, after latest streamlining of code, seems very significant on average. Varying on system, I get somewhere between 4x to 10x speedup with numpy. This is not the first time I encounter this. So in general, I wonder: why this huge difference in perf? should we do well in viewing scipy as just a reference implementation, and go to other pathways for peformance (numpy, numba)? # BPoly.from_derivatives example with 10 000 sinusoidal points and timing comparison """ Creates a sinusoidal dataset of 10 000 points over [0, 2π]. Interpolates using SciPy's BPoly.from_derivatives and a custom pure NumPy quintic Hermite. Verifies exact match, compares timing for coefficient computation and evaluation, and visualizes both. """ import sys import numpy as np import scipy import time from scipy.interpolate import BPoly # Environment versions print(f"Python version: {sys.version.split()[0]}") print(f"NumPy version: {np.__version__}") print(f"SciPy version: {scipy.__version__}") # Generate 10 000 sample points over one period n = 10_000 x = np.linspace(0.0, 2*np.pi, n) # Analytical sinusoidal values and derivatives y = np.sin(x) # y(x) v = np.cos(x) # y'(x) a = -np.sin(x) # y''(x) # === SciPy implementation with timing === y_and_derivatives = np.column_stack((y, v, a)) t0 = time.perf_counter() bp = BPoly.from_derivatives(x, y_and_derivatives) t1 = time.perf_counter() scipy_coeff_time = t1 - t0 # Evaluation timing t0 = time.perf_counter() y_scipy = bp(x) t1 = time.perf_counter() scipy_eval_time = t1 - t0 # === Pure NumPy implementation === def compute_quintic_coeffs(x, y, v, a): """ Compute quintic Hermite coefficients on each interval [x[i], x[i+1]]. Returns coeffs of shape (6, n-1). """ m = len(x) - 1 coeffs = np.zeros((6, m)) for i in range(m): h = x[i+1] - x[i] A = np.array([ [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 2, 0, 0, 0], [1, h, h**2, h**3, h**4, h**5], [0, 1, 2*h, 3*h**2, 4*h**3, 5*h**4], [0, 0, 2, 6*h, 12*h**2, 20*h**3], ]) b = np.array([y[i], v[i], a[i], y[i+1], v[i+1], a[i+1]]) coeffs[:, i] = np.linalg.solve(A, b) return coeffs def interp_quintic(x, coeffs, xx): """ Evaluate quintic Hermite using precomputed coeffs. x: breakpoints (n,), coeffs: (6, n-1), xx: query points. 
""" idx = np.searchsorted(x, xx) - 1 idx = np.clip(idx, 0, len(x) - 2) dx = xx - x[idx] yy = np.zeros_like(xx) for j in range(6): yy += coeffs[j, idx] * dx**j return yy # Coefficient estimation timing for custom t0 = time.perf_counter() custom_coeffs = compute_quintic_coeffs(x, y, v, a) t1 = time.perf_counter() cust_coeff_time = t1 - t0 # Evaluation timing for custom t0 = time.perf_counter() y_custom = interp_quintic(x, custom_coeffs, x) t1 = time.perf_counter() cust_eval_time = t1 - t0 # Verify exact match assert np.allclose(y_scipy, y_custom, atol=1e-12), "Custom interp deviates" print("Custom NumPy interpolation matches SciPy BPoly for 10 000 points.") # Print timing results print(f"SciPy coeff time: {scipy_coeff_time:.6f} s") print(f"SciPy eval time : {scipy_eval_time:.6f} s") print(f"Custom coeff time: {cust_coeff_time:.6f} s") print(f"Custom eval time : {cust_eval_time:.6f} s") # Visualization import matplotlib.pyplot as plt plt.plot(x, y_scipy, label='SciPy BPoly') plt.plot(x, y_custom, '--', label='Custom NumPy') plt.legend() plt.title('Sinusoidal interpolation: SciPy vs NumPy Quintic') plt.xlabel('x') plt.ylabel('y(x) = sin(x)') plt.show()
Analysis of the coefficient computations Regarding BPoly.from_derivatives, Scipy uses a generic code with some (slow) pure-Python one inside and several calls to Numpy each ones having a small overhead. The Numpy functions used are also generic so the code is sub-optimal. For example, a low-level profiler on my Linux machine show that the slowest function is ufunc_generic_fastcall (10-12% of the time). The name is quite explicit: a generic ufunc, so clearly not the most efficient solution. >20% of the time is spent in pure-Python code (inefficient). I think the internal implementation is bound by Scipy overheads. Meanwhile compute_quintic_coeffs is bound by Numpy overheads. Indeed, m is roughly 10000 in this case so ~10_000 iterations, there is at least a dozen of Numpy calls in each iteration to perform and the code takes about 110 ms (at least on my machine with i5-9600KF CPU). This means, a bit more than 100_000 Numpy calls in 110 ms so about 1 µs/call. This is generally the overhead of a Numpy function call on my machine. A low-level profiling shows that most of the code seems to be overhead confirming the code is inefficient. Using Numba for that would help a lot to make the code faster. Thus, put it shortly, both implementation are very inefficient. Faster coefficient computation You can significantly improve the performance with Numba: @nb.njit('(float64[::1], float64[::1], float64[::1], float64[::1])') def compute_quintic_coeffs_nb(x, y, v, a): m = len(x) - 1 coeffs = np.zeros((6, m)) # Create `A` once for better performance A = np.zeros((6, 6)) A[0, 0] = 1 A[1, 1] = 1 A[2, 2] = 2 A[3, 0] = 1 A[4, 1] = 1 A[5, 2] = 2 for i in range(m): h = x[i+1] - x[i] # Very fast setup of the `A` matrix h2 = h * h h3 = h2 * h h4 = h2 * h2 h5 = h3 * h2 A[3, 1] = h A[3, 2] = h2 A[3, 3] = h3 A[3, 4] = h4 A[3, 5] = h5 A[4, 2] = 2 * h A[4, 3] = 3 * h2 A[4, 4] = 4 * h3 A[4, 5] = 5 * h4 A[5, 3] = 6 * h A[5, 4] = 12 * h2 A[5, 5] = 20 * h3 b = np.array([y[i], v[i], a[i], y[i+1], v[i+1], a[i+1]]) coeffs[:, i] = np.linalg.solve(A, b) return coeffs This code takes only 7 ms so it is about 16 times faster! Most of the time is spent in np.linalg.solve and more specifically native optimized BLAS functions, which is very good. You can certainly write a specialized implementation for np.linalg.solve since most of the matrix values are either zeros or constants, not to mention the matrix is small so overheads in BLAS functions might be significant. Each calls to np.linalg.solve now takes about 700 ns which is very small. The Numba code is fast because it is specialized for one specific case. There is no generic code except the one of np.linalg.solve which actually call aggressively-optimized BLAS functions (focusing only on performance). If this is not enough you can even use multiple threads to do the computation in parallel. This is not trivial here since each threads need to operate on its own A (threads must not write in the same A matrix). Still, I expect this to be about 5~6 times faster than the sequential Numba implementation on my 6-core CPU. The resulting code would be more than 80 times faster than the initial one (and take only few milliseconds)! Analysis of the evaluation function Regarding the Scipy code most of the time is spent in the computation of pow (60%), (more specifically __ieee754_pow_fma and pow@GLIBC_2.2.5). The rest is the actual interpolation. Low-level profiling tends to indicate the implementation is rather good. It takes 0.96 ms/call on my machine. 
Regarding the Numpy code interp_quintic, most of the time is spent in the computation of pow (~45%) which is sad because this is not actually needed here. The binsearch takes about 15~20% of the time and the rest is due to the other computations of the code as well as Numpy internal overheads. Besides the unneeded pow, the implementation is also rather good. It takes 0.70 ms/call on my machine. So far, my guess is that the Scipy code is more expensive mainly because it computes items using pow(item, 0) and pow(item, 1) without any optimization for the exponents 0 and 1. In this case, this means the exponentiation code should be 33% slower (and more expensive). This tends to match with low-level profiling information. If Scipy optimized this case, it would only take 80% of the current timings overall (based on profiling results). Consequently, the Scipy implementation would only be about 10% slower than Numpy, which is not so significant. If you really care about such a small performance improvement, then you should write your own native specialized implementation (possibly with Cython or Numba). That being said, it is certainly not easy to defeat Numpy in this specific case. The operation can be optimized further. Faster evaluation function implementation First of all, dx**0 is computed which is sad because it is just an array of 1. Moreover, the exponentiation can be replaced by an iterative multiplication here. Here is the resulting code: def interp_quintic_opt(x, coeffs, xx): idx = np.searchsorted(x, xx) - 1 idx = np.clip(idx, 0, len(x) - 2) dx = xx - x[idx] dxj = dx.copy() yy = coeffs[0, idx] for j in range(1, 6): yy += coeffs[j, idx] * dxj dxj *= dx return yy This code takes 0.30 ms/call instead of 0.70 ms/call so it is 2.3 times faster on my machine. Now, binsearch and mapiter_get are the bottleneck of the function (the latter is AFAIK the indirect indexing). Discussion about the performance of Numpy vs Scipy This section is more general and based on my experience so people might disagree (comments are welcome). Generally, generic code tends to be far less efficient than specialized code. The latter is significantly easier to optimize. For example, if you need to support many different data-types, providing an efficient specialized code for each is time consuming. Scipy developers tend to use Cython to speed up some parts of the code, but not everything is cythonized yet and when this is done, it is often not optimal. The thing is that Scipy often provides many more features than Numpy, and the higher the number of features, the harder the code optimization (simply because of the increasing number of concerns in the same code). Meanwhile Numpy developers focus primarily on performance, even if it means having missing features. On top of that, function call overheads tend to increase with the number of supported features. Numpy functions are far from being cheap for computing basic things on small arrays. This is because of a complex internal dispatch of a generic iterator which needs to support features like wrap-around, broadcasting, bound checking and generic data-type support. Numpy developers specialize the code for many cases in order to make the operation fast for large arrays at the expense of more expensive function calls on small arrays. Scipy AFAIK often does not specialize the code because such a thing makes the code more complex, and so harder to maintain. If you care about performance, you should specialize your code to your specific use-case.
You should vectorise the code so as to really avoid Scipy/Numpy function call overheads. One is certainly slower than the other in this case, but both will be pretty inefficient anyway, so you should not care much about which one is slower. Using Numba or Cython might give you a speed up, but this is not guaranteed because Numpy code is often so optimized that a naive native specialized implementation can be slower than a slightly more generic one that is aggressively optimized. This clearly depends on the use-case though, since many parameters need to be considered. For example: Can the compiler optimize the computation for the target use-case (e.g. constant propagation, inlining, common sub-expression optimization, allocation optimizations)? Is the Numpy/Scipy operation vectorised using SIMD instructions, and more generally is the computation SIMD-friendly? Are they using parallel code (only BLAS operations can run in parallel in Numpy so far)?
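For what it's worth, here is a possible multi-threaded variant of the Numba coefficient computation sketched above (each iteration builds its own A so threads never share it; the actual speed-up obviously depends on the machine):
import numba as nb
import numpy as np

@nb.njit('(float64[::1], float64[::1], float64[::1], float64[::1])', parallel=True)
def compute_quintic_coeffs_par(x, y, v, a):
    m = len(x) - 1
    coeffs = np.zeros((6, m))
    for i in nb.prange(m):
        h = x[i+1] - x[i]
        h2 = h * h
        h3 = h2 * h
        h4 = h2 * h2
        h5 = h3 * h2
        # Per-iteration matrix: each thread works on its own A
        A = np.zeros((6, 6))
        A[0, 0] = 1.0
        A[1, 1] = 1.0
        A[2, 2] = 2.0
        A[3, 0] = 1.0
        A[3, 1] = h
        A[3, 2] = h2
        A[3, 3] = h3
        A[3, 4] = h4
        A[3, 5] = h5
        A[4, 1] = 1.0
        A[4, 2] = 2 * h
        A[4, 3] = 3 * h2
        A[4, 4] = 4 * h3
        A[4, 5] = 5 * h4
        A[5, 2] = 2.0
        A[5, 3] = 6 * h
        A[5, 4] = 12 * h2
        A[5, 5] = 20 * h3
        b = np.array([y[i], v[i], a[i], y[i+1], v[i+1], a[i+1]])
        coeffs[:, i] = np.linalg.solve(A, b)
    return coeffs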
3
2
79,626,166
2025-5-17
https://stackoverflow.com/questions/79626166/why-cv-convexitydefects-fails-in-this-example-is-it-a-bug
This script: import numpy as np, cv2 as cv contour = np.array([[0, 0], [1, 0], [1, 1], [0.5, 0.2], [0, 0]], dtype='f4') hull = cv.convexHull(contour, returnPoints=False) defects = cv.convexityDefects(contour, hull) fails and produces this error message: File "/home/paul/upwork/pickleball/code/so-65-ocv-convexity-defects.py", line 5, in <module> defects = cv.convexityDefects(contour, hull) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cv2.error: OpenCV(4.11.0) /io/opencv/modules/imgproc/src/convhull.cpp:319: error: (-215:Assertion failed) npoints >= 0 in function 'convexityDefects' What is the reason? Here is a plot of contour: And hull is: [[0] [1] [2]]
I couldn't find confirmation for that in the documentation, but convexityDefects expects the point coordinates in contour to be int32 rather than floating point. The following code works as expected: import cv2 as cv import numpy as np contour = np.array([[0, 0], [10, 0], [10, 10], [5, 2], [0, 0]], dtype=np.int32) hull = cv.convexHull(contour, returnPoints=False) defects = cv.convexityDefects(contour, hull) print(defects) Output: [[[ 2 0 3 543]]] Note that each result is a 4-element vector: (start_index, end_index, farthest_pt_index, fixpt_depth), and the last element in the result vector (which seems relatively large) is fixpt_depth: a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.
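If the original contour genuinely has sub-pixel float coordinates, one possible workaround (a sketch, not something stated in the docs) is to scale the points up before rounding to int32, then scale the reported depth back down:
import cv2 as cv
import numpy as np

scale = 100.0  # arbitrary precision factor (assumption)
contour_f = np.array([[0, 0], [1, 0], [1, 1], [0.5, 0.2], [0, 0]], dtype='f4')
contour_i = np.round(contour_f * scale).astype(np.int32)

hull = cv.convexHull(contour_i, returnPoints=False)
defects = cv.convexityDefects(contour_i, hull)
if defects is not None:
    for start, end, farthest, fixpt_depth in defects.reshape(-1, 4):
        depth = fixpt_depth / 256.0 / scale  # back to the original float units
        print(start, end, farthest, depth)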
1
1
79,627,506
2025-5-18
https://stackoverflow.com/questions/79627506/turtle-window-automatically-closes-after-opening-the-python-script
When i double click on this python script, it automatically closes turtle window after receiving the input. import math import turtle def arc(t, radius, angle): arc_length = 2 * math.pi * radius * angle / 360 n = int(arc_length / 3) + 1 step_length = arc_length / n step_angle = float(angle) / n polyline(t, n, step_length, step_angle) def polyline(t, n, length, angle): for i in range(n): t.fd(length) t.lt(angle) def circle(t, r): arc(t, r, 360) turtle.mainloop() radius=input("Enter the radius of the circle to be printed: ") bob=turtle.Turtle() circle(bob,radius) I have tried running the script from Python IDLE. It's working fine from IDLE. But it closes the turtle window when I double click on the python script file.
radius is set to a string (input always returns a string), which produces an error in arc when computing arc_length, so the execution stops and the window is closed. For instance replace radius=input("Enter the radius of the circle to be printed: ") by radius=float(input("Enter the radius of the circle to be printed: ")) Launching the script by hand in a shell allows you to see the error message.
1
2
79,628,870
2025-5-19
https://stackoverflow.com/questions/79628870/why-does-allisinstancex-str-for-x-in-value-not-help-pyright-infer-iterable
I'm working with Pyright in strict mode and want to check if a function parameter value of type object is an Iterable[str]. I tried using: if isinstance(value, Iterable) and all(isinstance(v, str) for v in value): # Pyright complains: 'Type of "v" is unknown' However, looking at the elements, Pyright complains that the type of v is still Unknown, even after the isinstance check on each element. Why doesn't all(isinstance(...)) refine the type of value to Iterable[str]? For context: I'm implementing __contains__ in a class that inherits from collections.abc.MutableSequence. Therefore, the method signature must remain def __contains__(self, value: object) -> bool, I can't change the type annotation of value to Iterable[str]. Here's my current implementation: def __contains__(self, value: object) -> bool: if isinstance(value, Iterable) and all(isinstance(v, str) for v in value): value = "".join(value).upper() return value in "".join(self.sequence) # self.sequence is an Iterable[str] return False Is there a way to get Pyright to properly infer the type of value here without using an explicit cast() or defining a separate TypeGuard function? It seems to me that a separate TypeGuard should make no difference. Alternatively: am I looking at this the wrong way? Should I try to avoid implementing __contains__ myself, because of the object type?
No, you cannot do it with pyright without additional structures. Plain all(isinstance(...)) is not a type guard that is supported by pyright. Unlike filter, an all type guard cannot be written in the stubs and needs special handling; see some discussion here or here. The type checker would not only need to hard code knowledge of all but also make assumptions about the semantics of the iterable expression it is acting upon. Support would need to be added specifically for all(isinstance(a, b) for a in [x, y]). This specific expression form is rarely used, so it wouldn't make sense to add the custom logic to support it. The official recommendation is to use manual type guards. The following is a nicely reusable function that lets you also specify the type. Alternatively use a cast after the if. from typing import Iterable, TypeIs from typing_extensions import TypeForm def is_iterable[T](obj, typ: TypeForm[T]=object) -> TypeIs[Iterable[T]]: return isinstance(obj, Iterable) and all(isinstance(v, typ) for v in obj) def __contains__(self, value: object) -> bool: if is_iterable(value, str): reveal_type(value) # Iterable[str] value = "".join(value).upper() return value in "".join(self.sequence) # self.sequence is an Iterable[str] reveal_type(value) # object return False For the strict mode you can add a # pyright: ignore[reportUnknownArgumentType] to not complain about v in the iterator or use these alternative guards: def is_iterable_of_type[T](obj: object, typ: type[T]=object) -> TypeIs[Iterable[T]]: return is_iterable(obj) and all(isinstance(v, typ) for v in obj) def is_iterable(obj: object) -> TypeIs[Iterable[Any]]: return isinstance(obj, Iterable)
2
5
79,628,442
2025-5-19
https://stackoverflow.com/questions/79628442/is-using-mutex-in-my-class-redundant-because-of-gil
I have a class with two threads MainThread - solving tasks one by one(getting self.cur_task_id, and changing self.cur_task_status) Thread self.report_status_thread - read self.cur_task_id, self.cur_task_status and send values via http I am using mutex in my class to synchronize these threads. Is that redundant because of GIL (Global Interpreter Lock)? class DeepAllocationService(): def __init__(self,): self.mutex = threading.Lock() # ensure cur_task_id and cur_task_status changed in the same thread self.cur_task_id = None self.cur_task_status = None self.report_status_thread = threading.Thread(target=self.periodic_send_status) def periodic_solve_tasks(self,): self.report_status_thread.start() while True: try: new_task, task_file = self.get_new_task() if new_task: self.update_task_info(new_task["id"], new_task["status"]) self.solve_task(new_task, task_file) except Exception as e: self.cur_task_status = TaskStatus.ERROR self.logger.error(f"Exception in periodic_solve_tasks: {e}") finally: time.sleep(1) def update_task_info(self, task_id=None, task_status=None): self.mutex.acquire() self.cur_task_id = task_id self.cur_task_status = task_status self.mutex.release() def periodic_send_status(self,): while True: # read-only function - read and send self.cur_task_id and self.cur_task_status requests.post() # sends elf.cur_task_id and self.cur_task_status time.sleep(2)
If you had a single field, which you either read or wrote, relying on the GIL would be sufficient (it does not allow torn reads / torn writes). However, here that is not the case: you have two fields, and the GIL can be released at any bytecode operation, so a separate thread could observe inconsistent values for cur_task_id and cur_task_status. Incidentally: you should use the lock as a context manager, as that avoids deadlocking on error; and rather than using attributes and a lock, you could use a queue.Queue to communicate between the solver and the reporter. I would strongly recommend the latter, as currently the reporter is not guaranteed to see every task or status, and has to sleep in order to avoid busy-looping. With a queue, the reporter could simply block on the queue and be woken up whenever a new item arrives. Your solver task is also inconsistent: while True: try: new_task, task_file = self.get_new_task() if new_task: self.update_task_info(new_task["id"], new_task["status"]) self.solve_task(new_task, task_file) except Exception as e: self.cur_task_status = TaskStatus.ERROR self.logger.error(f"Exception in periodic_solve_tasks: {e}") finally: time.sleep(1) If get_new_task raises, an error status will be assigned to the previous task.
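A minimal sketch of the queue-based hand-off suggested above (names are made up; the reporter just prints instead of doing the HTTP POST):
import queue
import threading

status_queue = queue.Queue()

def solver(tasks):
    for task_id, status in tasks:            # stand-in for get_new_task/solve_task
        status_queue.put((task_id, status))  # id and status travel together, so no torn state
    status_queue.put(None)                   # sentinel telling the reporter to stop

def reporter():
    while True:
        item = status_queue.get()            # blocks until something is available
        if item is None:
            break
        task_id, status = item
        print(f"would POST id={task_id} status={status}")

t = threading.Thread(target=reporter)
t.start()
solver([(1, "RUNNING"), (1, "DONE"), (2, "RUNNING")])
t.join()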
1
3
79,628,910
2025-5-19
https://stackoverflow.com/questions/79628910/improve-code-that-finds-nan-values-with-a-condition-and-removes-them
I have a dataframe where each column starts and finished with certain number of nan values. Somewhere in the middle of a column there is a continuous list of values. It can happen that a nan value "interrupts" the data. I want to iterate over each column, find such values and then remove the whole row. For example, I want to find the np.nan between 9 and 13 and remove it: [np.nan, np.nan, np.nan, 1, 4, 6, 6, 9, np.nan, 13, np.nan, np.nan] Conditions for removal: if value has at least one data point before if value has at least one data point after if value is nan I wrote code that does this already, but it's slow and kind of wordy. import pandas as pd import numpy as np data = {'A': [np.nan, np.nan, np.nan, 1, 4, 6, 6, 9, np.nan, 13, np.nan, np.nan], 'B': [np.nan, np.nan, np.nan, 11, 3, 16, 13, np.nan, np.nan, 12, np.nan, np.nan]} df = pd.DataFrame(data) def get_nans(column): output = [] for index_to_check, value in column.items(): has_value_before = not column[:index_to_check].isnull().all() has_value_after = not column[index_to_check + 1:].isnull().all() is_nan = np.isnan(value) output.append(not( has_value_before and has_value_after and is_nan)) return output for column in df.columns: df = df[get_nans(df[column])] print(df) How can I improve my code, vectorize it etc?
You could use a vectorial approach with isna and cummin to perform boolean indexing. First let's use one column as example: # identify NaNs m1 = df['A'].isna() # Identify external NaNs m2 = (m1.cummin()|m1[::-1].cummin()) out= df.loc[m2 | ~m1, 'A'] Output: 0 NaN 1 NaN 2 NaN 3 1.0 4 4.0 5 6.0 6 6.0 7 9.0 9 13.0 10 NaN 11 NaN Name: A, dtype: float64 Then you can vectorize to the whole DataFrame and aggregate with all: m1 = df.isna() m2 = (m1.cummin()|m1[::-1].cummin()) out= df.loc[(m2 | ~m1).all(axis=1)] Output: A B 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 1.0 11.0 4 4.0 3.0 5 6.0 16.0 6 6.0 13.0 9 13.0 12.0 10 NaN NaN 11 NaN NaN Another option would be to leverage interpolate with limit_area='inside': # is the cell not NaN? m1 = df.notna() # is the cell an external NaN? m2 = df.interpolate(limit_area='inside').isna() out = df[(m1|m2).all(axis=1)]
1
2
79,628,093
2025-5-19
https://stackoverflow.com/questions/79628093/can-this-similarity-measure-between-different-size-numpy-arrays-be-expressed-ent
This script: import numpy as np from numpy.linalg import norm a = np.array([(1, 2, 3), (1, 4, 9), (2, 4, 4)]) b = np.array([(1, 3, 3), (1, 5, 9)]) r = sum([min(norm(a-e, ord=1, axis=1)) for e in b]) computes a similarity measure r between different size NumPy arrays a and b. Is there a way to express it entirely with NumPy API for greater efficiency?
You can do this: r = np.linalg.norm(a[:,None,:]-b[None], ord=1, axis=2).min(0).sum() Here, a[:,None,:] - b[None] will add extra dimensions to a and b for the proper subtraction broadcasting. The norm is unchanged, the .min(0) takes the row-wise minimum, and then .sum() gets the final result.
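A small check that the broadcasted version matches the original loop, using the arrays from the question:
import numpy as np
from numpy.linalg import norm

a = np.array([(1, 2, 3), (1, 4, 9), (2, 4, 4)])
b = np.array([(1, 3, 3), (1, 5, 9)])

r_loop = sum(min(norm(a - e, ord=1, axis=1)) for e in b)
r_vec = norm(a[:, None, :] - b[None], ord=1, axis=2).min(0).sum()
assert np.isclose(r_loop, r_vec)
print(r_vec)  # 2.0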
1
2
79,630,089
2025-5-20
https://stackoverflow.com/questions/79630089/how-to-display-a-legend-when-plotting-a-geodataframe
I have a GeoDataFrame I want to plot. This works fine, however somehow I cannot easily plot its legend. I have tried a number of alternatives and checked solutions from googling and LLM, but I do not understand why this does not work. Code: import geopandas as gpd from shapely.geometry import box, Polygon, LineString import matplotlib.pyplot as plt fig, ax = plt.subplots() bounding_box = [9.454, 80.4, 12, 80.88] polygon = box(*bounding_box) gdf = gpd.GeoDataFrame(geometry=[polygon]) plot_obj = gdf.plot(ax=ax, edgecolor='red', facecolor='none', linewidth=2, label="user bbox query") # plt.legend() # does not work # ax.legend(handles=[plot_obj], labels=["test"]) # does not work ax.legend(handles=[plot_obj]) # does not work plt.xlabel('Longitude') plt.ylabel('Latitude') plt.show() Result: I get a warning: <python-input-0>:16: UserWarning: Legend does not support handles for Axes instances. A proxy artist may be used instead. See: https://matplotlib.org/stable/users/explain/axes/legend_guide.html#controlling-the-legend-entries ax.legend(handles=[plot_obj]) # does not wor But somehow I am not able to take advantage of it to make things work (I tried several ways to plot the legend from "handles", see the different attempts, but none work). I am certainly missing something - any pointer to how this can be done simply? :)
The issue is that plot_obj = gdf.plot(...) returns an Axes object, not a plot "handle" that can be passed to legend(). To display a legend, you need to create a proxy artist (e.g., a matplotlib.patches.Patch) that mimics the appearance of your GeoDataFrame's geometry (in your case, a red-bordered polygon with no fill), and then use that in ax.legend(). Here’s how to fix your code to show the legend correctly: import geopandas as gpd from shapely.geometry import box import matplotlib.pyplot as plt from matplotlib.patches import Patch # For the legend proxy fig, ax = plt.subplots() bounding_box = [9.454, 80.4, 12, 80.88] polygon = box(*bounding_box) gdf = gpd.GeoDataFrame(geometry=[polygon]) # Plot the GeoDataFrame gdf.plot(ax=ax, edgecolor='red', facecolor='none', linewidth=2) # Create a legend proxy legend_patch = Patch(facecolor='none', edgecolor='red', linewidth=2, label='user bbox query') ax.legend(handles=[legend_patch]) # Use the proxy artist for legend plt.xlabel('Longitude') plt.ylabel('Latitude') plt.show() Result:
1
1
79,632,156
2025-5-21
https://stackoverflow.com/questions/79632156/how-to-mark-a-class-as-abstract-in-python-no-abstract-methods-and-in-a-mypy-com
I'm trying to make it impossible to instantiate a class directly, without it having any unimplemented abstract methods. Based on other solutions online, a class should have something along the lines of: class Example: def __new__(cls,*args,**kwargs): if cls is Example: raise TypeError("...") return super().__new__(cls,*args,**kwargs) I'm trying to move this snippet to a separate place such that each such class does not have to repeat this code. C = TypeVar("C") def abstract(cls:Type[C])->Type[C]: class Abstract(cls): def __new__(cls, *args:Any, **kwargs:Any)->"Abstract": if cls is Abstract: msg = "Abstract class {} cannot be instantiated".format(cls.__name__) raise TypeError(msg) return cast( Abstract, super().__new__(*args,**kwargs) ) return Abstract This is my attempt and might be incorrect But mypy complains: error: Invalid base class "cls" How can I have some reusable way (such as a decorator) to achieve what I want whilst passing mypy --strict? Context: This is in the context of a pyside6 application where I'm subclassing QEvent, to have some additional extra properties. The base class defining these properties (getters) has a default implementation, yet I would like to prevent it from being initialized directly as it is not (and should not) be registered to the Qt event system. (I have a couple more such classes with different default values for convenience)
May I suggest a mixin instead of a decorator? You basically want to check if cls.mro()[1] is the abstract class (i.e., in the current class is a direct subclass of Abstract) from typing import Self, final class Abstract: @final def __new__(cls, *args, **kwargs) -> Self: if cls.mro()[1] is Abstract: raise TypeError("...") return super().__new__(cls,*args,**kwargs) class AbstractFoo(Abstract): def frobnicate(self, x: int) -> int: return x // 42 class ConcreteFoo(AbstractFoo): def __init__(self, value: int) -> None: self.value = value class AbstractBaz(Abstract): def __new__(cls, *args, **kwargs) -> Self: return object.__new__(cls) foo1 = ConcreteFoo(1) foo2 = AbstractFoo() # TypeError: ... Note, due to the @final decorator on Abstract.__new__, mypy complains about trying to override __new__ in AbstractBaz: main.py:20: error: Cannot override final attribute "__new__" (previously declared in base class "Abstract") [misc] You may or may not want to go with this, depending on how much you want to control, but keep in mind, if someone really wants to instantiate your class, they can and you cannot really stop them because any user can simply use object.__new__ directly. Note, static analysis tools will not catch: foo2 = AbstractFoo() I don't think there is any way to express abstractness in the type system itself, abc is special cased.
2
2
79,633,258
2025-5-22
https://stackoverflow.com/questions/79633258/how-to-make-plotly-text-bold-using-scatter
I'm trying to make a graph using plotly library and I want to make some texts in bold here's the code used : import plotly.express as px import pandas as pd data = { "lib_acte":["test 98lop1", "test9665 opp1", "test QSDFR1", "test ABBE1", "testtest21","test23"], "x":[12.6, 10.8, -1, -15.2, -10.4, 1.6], "y":[15, 5, 44, -11, -35, -19], "circle_size":[375, 112.5, 60,210, 202.5, 195], "color":["green", "green", "green", "red", "red", "red"], "textfont":["normal", "normal", "normal", "bold", "bold", "bold"], } #load data into a DataFrame object: df = pd.DataFrame(data) fig = px.scatter( df, x="x", y="y", color="color", size='circle_size', text="lib_acte", hover_name="lib_acte", color_discrete_map={"red": "red", "green": "green"}, title="chart" ) fig.update_traces(textposition='middle right', textfont_size=14, textfont_color='black', textfont_family="Inter", hoverinfo="skip") newnames = {'red':'red title', 'green': 'green title'} fig.update_layout( { 'yaxis': { "range": [-200, 200], 'zerolinewidth': 2, "zerolinecolor": "red", "tick0": -200, "dtick":45, }, 'xaxis': { "range": [-200, 200], 'zerolinewidth': 2, "zerolinecolor": "gray", "tick0": -200, "dtick": 45, # "scaleanchor": 'y' }, "height": 800, } ) fig.add_scatter( x=[0, 0, -200, -200], y=[0, 200, 200, 0], fill="toself", fillcolor="gray", zorder=-1, mode="markers", marker_color="rgba(0,0,0,0)", showlegend=False, hoverinfo="skip" ) fig.add_scatter( x=[0, 0, 200, 200], y=[0, -200, -200, 0], fill="toself", fillcolor="yellow", zorder=-1, mode="markers", marker_color="rgba(0,0,0,0)", showlegend=False, hoverinfo="skip" ) fig.update_layout( paper_bgcolor="#F1F2F6", ) fig.show() and here's the output of above code: What I'm trying to do is to make "test ABBE1", "testtest21","test23" in bold on the graph, could anyone please help how to do that ?
Not sure if there is a better solution, but, as mentioned in furas's comment you can use the HTML tag <b>…</b> for the elements that should be bold-faced. You can achieve this, for example, by adding the following line: # Create your data dict (as before) data = { ... } # Add HTML tag for boldface data["lib_acte"] = [(f"<b>{el}</b>" if ft == "bold" else el) for el, ft in zip(data["lib_acte"], data["textfont"])] # Create dataframe (as before) df = pd.DataFrame(data) The documentation says: Chart Studio uses a subset of HTML tags to do things like newline (<br>), bold (<b></b>), italics (<i></i>), and hyperlinks (<a href=’…’></a>). Tags <em>, <sup>, and <sub> are also supported. Here, Chart Studio is the online service built on top of and provided by the makers of plotly. While not explicitly stated, I am quite sure that the same subset of HTML tags also applies to plotly (stand-alone) – I just tried successfully with <b>, <sup>, and <a href=…>.
1
2
79,634,830
2025-5-23
https://stackoverflow.com/questions/79634830/seeking-for-help-illustrate-this-y-combinator-python-implementation
I once read this Python implementation of Y-combinator in a legacy code deposit: def Y_combinator(f): return (lambda x: f(lambda *args: x(x)(*args)))( lambda x: f(lambda *args: x(x)(*args)) ) And there exists an example usage: factorial = Y_combinator( lambda f: lambda n: 1 if n == 0 else n * f(n - 1) ) Could anyone be so kind to teach how should I read the code to interpret it? I am totally lost in trying to connect those lambdas altogether...
With some hesitation I will try to answer your question. I have been programming in Python for more than twenty years and it took me more than an hour to unravel this. I'll start with a simple observation about lambda - one that we all know. Lambda expressions create anonymous functions, and therefore they can always be replaced with an explicitly named def function. def square(x): return x * x sq = lambda x: x * x print(square) print(square(2)) print(sq) print(sq(2)) sq = square print(sq) print(sq(2)) In this trivial example, sq and square are, for all intents and purposes, the same thing. Both are function objects. This code prints: <function square at 0x7f25b2ebc040> 4 <function <lambda> at 0x7f25b2ebc2c0> 4 <function square at 0x7f25b2ebc040> 4 Notice that once we have defined square, we can literally cut and paste "square" in place of "lambda x: x * x". Now look at this code from your example. factorial = Y_combinator( lambda f: lambda n: 1 if n == 0 else n * f(n - 1) ) Start replacing lambda expressions with def functions. Begin with the innermost one, lambda n:. def fn(n): return 1 if n == 0 else n * f(n-1) There's a problem here because "f" is not defined. That's because this function must be nested inside another one, the lambda f: part. def ff(f): def fn(n): return 1 if n == 0 else n * f(n-1) return fn Everything is defined now, and this function ff is equivalent to the original line of code containing the two nested lambda expressions. We can check that - I'm borrowing the exact definition of Y_combinator and factorial from your code. factorial2 = Y_combinator(ff) print(factorial) print(factorial(3)) print(factorial2) print(factorial2(3)) The result: <function <lambda>.<locals>.<lambda> at 0x7f25b2ebc220> 6 <function ff.<locals>.fn at 0x7f25b2ebc540> 6 Note that Y_combinator takes one argument, a function, and returns another function. In your code, the argument to Y_combinator is also a function that takes one argument, a function, and returns another function. lambda f: returns another lambda. The new function ff is also a function that takes one argument, a function, and returns another function. Are you confused yet? Now I'm going to apply the same trick to Y_combinator. Start with the innermost lambda and work outward. Eventually you get this, which I named combinator1. It's the equivalent of Y_combinator but it's be "de-lambda-ized" (delambdinated?): def combinator1(f): def fx(x): def fargs(*args): return x(x)(*args) return f(fargs) return fx(fx) factorial3 = combinator1(ff) print(factorial3) print(factorial3(3)) Output: <function ff.<locals>.fn at 0x7f25b2ebc720> 6 It works. Yes, it's a function inside of a function inside of another function. That's what all those nested lambdas give you. Notice how the recursive call is implemented as fx(fx). It's a function that can call itself. It's a rather neat trick in an insane sort of way. There is only one line in this entire plate of spaghetti that actually does anything (the if else statement). The rest of the code is messing around with function objects. I can see no reason ever to do this. With Python's ability to do recursion and its ability to nest one function inside another, I just don't see any use case. But I haven't thought about it very hard. If I'm wrong I bet someone here will enlighten me. 
For the record, here is a complete implementation of factorial in python: def nice_factorial(n): return 1 if n == 0 else n * nice_factorial(n-1) print("Nice solution:") print(nice_factorial) print(nice_factorial(3)) Output: Nice solution: <function nice_factorial at 0x7f25b2ebc860> 6 I'll leave it up to others to decide which approach you like better.
2
3
79,636,956
2025-5-24
https://stackoverflow.com/questions/79636956/setup-dj-rest-auth-and-all-allauth-not-working
Hello i'm trying to setup dj_rest_auth and allauth with custom user model for login for my nextjs app but it seems not working the backend part besides it not working i get this warning /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:228: UserWarning: app_settings.USERNAME_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['username']['required'] required=allauth_account_settings.USERNAME_REQUIRED, /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:230: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required'] email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED) /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:288: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required'] email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED) No changes detected # python manage.py migrate /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:228: UserWarning: app_settings.USERNAME_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['username']['required'] required=allauth_account_settings.USERNAME_REQUIRED, /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:230: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required'] email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED) /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:288: UserWarning: app_settings.EMAIL_REQUIRED is deprecated, use: app_settings.SIGNUP_FIELDS['email']['required'] email = serializers.EmailField(required=allauth_account_settings.EMAIL_REQUIRED) versions i use Django==5.2.1 django-cors-headers==4.3.1 djangorestframework==3.16.0 dj-rest-auth==7.0.1 django-allauth==65.8.1 djangorestframework_simplejwt==5.5.0 psycopg2-binary==2.9.10 python-dotenv==1.0.1 Pillow==11.2.1 gunicorn==23.0.0 whitenoise==6.9.0 redis==5.2.1 requests==2.32.3 models.py import uuid from django.db import models from django.utils import timezone from django.contrib.auth.models import AbstractUser, PermissionsMixin, UserManager # Create your models here. 
class MyUserManager(UserManager): def _create_user(self, name, email, password=None , **extra_fields): if not email: raise ValueError('Users must have an email address') email = self.normalize_email(email=email) user = self.model(email=email , name=name, **extra_fields) user.set_password(password) user.save(using=self.db) return user def create_user(self, name=None, email=None, password=None, **extra_fields): extra_fields.setdefault('is_staff',False) extra_fields.setdefault('is_superuser',False) return self._create_user(name=name,email=email,password=password,**extra_fields) def create_superuser(self, name=None, email=None, password=None, **extra_fields): extra_fields.setdefault('is_staff',True) extra_fields.setdefault('is_superuser',True) return self._create_user(name=name,email=email,password=password,**extra_fields) class Users(AbstractUser, PermissionsMixin): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) first_name = None last_name = None username = None name = models.CharField(max_length=255) email = models.EmailField(unique=True) is_active = models.BooleanField(default=True) is_superuser = models.BooleanField(default=False) is_staff = models.BooleanField(default=False) avatar = models.ImageField(upload_to='avatars/', null=True, blank=True) date_joined = models.DateTimeField(default=timezone.now) last_login = models.DateTimeField(blank=True, null=True) USERNAME_FIELD = 'email' EMAIL_FIELD = 'email' REQUIRED_FIELDS = [] objects = MyUserManager() def __str__(self): return self.email settings.py import os from pathlib import Path from datetime import timedelta from dotenv import load_dotenv load_dotenv() SITE_ID = 1 BASE_DIR = Path(__file__).resolve().parent.parent SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY') DEBUG = os.environ.get('DEBUG','False') == 'True' ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",") CSRF_TRUSTED_ORIGINS = os.environ.get("DJANGO_CSRF_TRUSTED_ORIGINS","*").split(",") CORS_ALLOW_CREDENTIALS = True if DEBUG : CORS_ALLOW_ALL_ORIGINS = True else : CORS_ALLOWED_ORIGINS = os.environ.get("DJANGO_CORS_ALLOWED_ORIGINS","*").split(",") AUTH_USER_MODEL = "Users_app.Users" WEBSITE_URL = os.environ.get("WEBSITE_URL","http://localhost:8000") ACCOUNT_USER_MODEL_USERNAME_FIELD = None ACCOUNT_LOGIN_METHODS = {'email'} ACCOUNT_SIGNUP_FIELDS = ['email*','name*', 'password1*', 'password2*'] ACCOUNT_EMAIL_VERIFICATION = "none" SIMPLE_JWT = { "ACCESS_TOKEN_LIFETIME": timedelta(minutes=60), "REFRESH_TOKEN_LIFETIME": timedelta(days=7), "ROTATE_REFRESH_TOKENS": False, "BLACKLIST_AFTER_ROTATION": False, "UPDATE_LAST_LOGIN": True, "SIGNING_KEY": SECRET_KEY, "ALGORITHM": "HS512" } REST_AUTH = { 'USE_JWT': True, 'JWT_AUTH_COOKIE': 'access_token', 'JWT_AUTH_REFRESH_COOKIE': 'refresh_token', } INSTALLED_APPS = [ 'unfold', 'unfold.contrib.filters', 'unfold.contrib.forms', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'django_filters', 'rest_framework.authtoken', 'allauth', 'allauth.account', 'allauth.socialaccount', 'dj_rest_auth', 'dj_rest_auth.registration', 'corsheaders', 'tasks', 'Users_app', ] MIDDLEWARE = [ 'corsheaders.middleware.CorsMiddleware', "allauth.account.middleware.AccountMiddleware", 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 
'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] AUTHENTICATION_BACKENDS = [ 'allauth.account.auth_backends.AuthenticationBackend', 'django.contrib.auth.backends.ModelBackend', ] ROOT_URLCONF = 'backend.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'backend.wsgi.application' DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'HOST': os.environ.get('DATABASE_HOST'), 'NAME': os.environ.get('DATABASE_NAME'), 'USER': os.environ.get('DATABASE_USERNAME'), 'PORT': os.environ.get('DATABASE_PORT'), 'PASSWORD':os.environ.get('DATABASE_PASSWORD'), } } AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True STATIC_URL = 'static/' STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage' # Media files MEDIA_URL = 'media/' MEDIA_ROOT = os.path.join(BASE_DIR, 'media') DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': [ 'rest_framework.permissions.IsAuthenticated', ], 'DEFAULT_FILTER_BACKENDS': [ 'django_filters.rest_framework.DjangoFilterBackend', ], 'DEFAULT_AUTHENTICATION_CLASSES': [ 'dj_rest_auth.jwt_auth.JWTCookieAuthentication', 'rest_framework.authentication.SessionAuthentication', ], 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination', 'PAGE_SIZE': 50, 'DATETIME_FORMAT': "%Y-%m-%d %H:%M:%S", } please what is the issue ? what are the stable versions to use if this one are buggy
The warnings are an issue with recent releases of django-allauth, and you can resolve it by downgrading django-allauth to version 65.2.0.
The docs said this in version 65.6.0:

A check is in place to verify that ACCOUNT_LOGIN_METHODS is aligned with ACCOUNT_SIGNUP_FIELDS. The severity level of that check has now been lowered from “critical” to “warning”, as there may be valid use cases for configuring a login method that you are not able to sign up with. This check (account.W001) can be silenced using Django’s SILENCED_SYSTEM_CHECKS.

However, these warnings persisted with recent releases. Downgrading django-allauth to an older version solved it.
Remember to set these after downgrading to version 65.2.0:
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_USERNAME_REQUIRED = False
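If you would rather stay on the newer django-allauth, note that the messages in the question are ordinary Python UserWarnings raised by dj_rest_auth's serializers, so they can also be filtered. This is only a sketch of that alternative (it assumes the warning text matches your log exactly and that you are comfortable hiding the deprecation notice rather than downgrading):
import warnings

# Hide the dj-rest-auth deprecation warnings shown in the question's output
warnings.filterwarnings(
    "ignore",
    message=r"app_settings\.USERNAME_REQUIRED is deprecated",
    category=UserWarning,
)
warnings.filterwarnings(
    "ignore",
    message=r"app_settings\.EMAIL_REQUIRED is deprecated",
    category=UserWarning,
)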
1
1
79,636,381
2025-5-24
https://stackoverflow.com/questions/79636381/chromium-web-extension-with-nodriver
I downloaded chromium version 136 and im using it on macOS 64 ARM. I opened it manually and added a chrome web extension and it works. When I close chrome, quit, and open it the extension is always active. But when I run my code with NoDriver from a python script, the chromium tab thats opened does not have the extension and its inside an incognito mode. How can I enable the extension for this way? Chromium setup code async def worker(emailStr): browser = None PassProvided = None password = None if ".com:" in emailStr: emailStr, password = emailStr.split(":", 1) PassProvided = password try: # Select a random proxy if enabled proxy = None if USE_PROXIES and proxies: proxy = random.choice(proxies) else: print(f"{red_text}Not using proxy {reset}") # Randomize window size and position for unique fingerprint window_width = random.randint(1000, 1500) window_height = random.randint(750, 950) x_position = random.randint(0, 500) y_position = random.randint(0, 500) # Build Chrome arguments args = [ f"--window-size={window_width},{window_height}", f"--window-position={x_position},{y_position}", "--disable-sync", "--no-first-run", "--no-default-browser-check", "--disable-backgrounding-occluded-windows", "--disable-renderer-backgrounding", "--disable-background-timer-throttling", "--disable-breakpad", "--disable-extensions", "--incognito", "--disable-dev-shm-usage", ] # Inject proxy if used if proxy: host, port, username, proxyPass = parse_proxy(proxy) proxy_creds= [username,proxyPass] proxy_url = f"http://{host}:{port}" args.append(f"--proxy-server={proxy_url}") # Start nodriver browser browser = await nd.start( browser_executable_path=CHROMIUM_PATH, headless=HEADLESS_MODE, stealth=True, browser_args=args ) # Set up proxy authentication main_tab = await browser.get("draft:,") await setup_proxy(proxy_creds[0], proxy_creds[1], main_tab) # Navigate to Target homepage tab = await browser.get("https://www.target.com/") # Clear browser storage for clean session await tab.evaluate(""" () => { localStorage.clear(); sessionStorage.clear(); } """)
Try:
args = [
    f"--window-size={window_width},{window_height}",
    f"--window-position={x_position},{y_position}",
    "--disable-sync",
    "--no-first-run",
    "--no-default-browser-check",
    "--disable-backgrounding-occluded-windows",
    "--disable-renderer-backgrounding",
    "--disable-background-timer-throttling",
    "--disable-breakpad",
    # "--disable-extensions",  # Remove this line!
    # "--incognito",  # Remove this if you want to reuse a profile with extensions
    "--disable-dev-shm-usage",
    "--load-extension=/Users/your_user/extension_folder",
    # "--user-data-dir=/Users/your_user/Library/Application Support/Chromium",  # Optional
]
Incognito mode is where extensions are usually disabled by default. Headless or automated Chrome sessions typically do not load extensions in incognito sessions unless you explicitly allow it. The argument --disable-extensions disables all extensions.
2
2
79,637,271
2025-5-25
https://stackoverflow.com/questions/79637271/python-text-tokenize-code-to-output-results-from-horizontal-to-vertical-with-gra
Below code tokenises the text and identifies the grammar of each tokenised word.
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import wordnet as wn
#nltk.download()

text = "Natural language processing is fascinating"

# tokenise the sentence
words = word_tokenize(text)
print(words)

# identify noun, verb, etc grammatically in the sentence
for w in words:
    tmp = wn.synsets(w)[0].pos()
    print(w, ":", tmp)
The output is:
['Natural', 'language', 'processing', 'is', 'fascinating']
Natural : n
language : n
processing : n
is : v
fascinating : v
Where n is noun and v is verb.
Can some Python code expert please advise me how to format the output so it will look like below?
nouns = ["natural", "language", "processing"]
verbs = ["is", "fascinating"]
I need assistance to change the result output format. I think it needs some relevant Python code to perform this requirement.
You can achieve it this way:
# Lists to store parts of speech
nouns = []
verbs = []

for w in words:
    synsets = wn.synsets(w)
    if synsets:
        pos = synsets[0].pos()
        if pos == 'n':
            nouns.append(w.lower())
        elif pos == 'v':
            verbs.append(w.lower())
full solution:
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import wordnet as wn

# Make sure the necessary NLTK data is available
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('punkt_tab')

text = "Natural language processing is fascinating"

# Tokenize the text
words = word_tokenize(text)

# Lists to store parts of speech
nouns = []
verbs = []

for w in words:
    synsets = wn.synsets(w)
    if synsets:
        pos = synsets[0].pos()
        if pos == 'n':
            nouns.append(w.lower())
        elif pos == 'v':
            verbs.append(w.lower())

print(f"nouns = {nouns}")
print(f"verbs = {verbs}")
output:
nouns = ['natural', 'language', 'processing']
verbs = ['is', 'fascinating']
1
0
79,639,284
2025-5-26
https://stackoverflow.com/questions/79639284/output-is-not-what-it-is-supposed-to-be
i am making todo app or program on python in which output is not working according to the code
while True:
    user_action = input("type add, show, edit, remove or exit ")
    user_action = user_action.strip()

    if 'add' in user_action:
        todo = user_action[4:]

        with open('todos.txt', 'r') as file:
            todos = file.readlines()

        todos.append(todo)

        with open('todos.txt', 'w') as file:
            file.writelines(todos)

    elif 'show' in user_action:
        with open('todos.txt', 'r') as file:
            todos = file.readlines()

        #new_todos = [item.strip("\n") for item in todos]

        for index, items in enumerate(todos):
            items = items.strip("\n")
            index = index + 1
            row = f"{index}-{items}"
            print(row)

    elif 'edit' in user_action:
        number = int(user_action[5:])
        print(number)
        number = number - 1

        with open('todos.txt', 'r') as file:
            todos = file.readlines()

        new_todo = input("Type the new todo:")
        todos[number] = new_todo + "\n"

        with open('todos.txt', 'w') as file:
            file.writelines(todos)

    elif 'remove' in user_action:
        number = int(user_action[6:])

        with open('todos.txt', 'r') as file:
            todos = file.readlines()

        index = number - 1
        todo_to_remove = todos[index].strip("\n")
        todos.pop(index)

        with open('todos.txt', 'w') as file:
            file.writelines(todos)

        message = f"todo {todo_to_remove} was removed from the list"
        print(message)

    elif 'exit' in user_action:
        break

    else:
        print("Command is not valid ")

print("bye!")
hey this is the code for my program but the problem is when i am adding input "add anything" and after that i am asking the program to show my todos then it is not showing me the correct output
type add, show, edit, remove or exit add bro
type add, show, edit, remove or exit show
1-ab
2-cd
3-hihellohikrrish
4-hi
5-
6-hiibro
type add, show, edit, remove or exit add broski
type add, show, edit, remove or exit show
1-ab
2-cd
3-hihellohikrrish
4-hi
5-
6-hiibrobroski
this is the output i received, the output of show should be
1-ab
2-cd
3-hihellohikrrish
4-hi
5-
6-hii
7-bro
8-broski
As OldBoy said in the comment, you are currently removing the newline characters with the strip() Python method. The strip method removes leading and trailing characters. This means you append all the inputs from the add functionality onto a single line.
I think I fixed it by adding '\n', which is the newline character, so that the todo items print on separate lines with show:
if 'add' in user_action:
    todo = user_action[4:]

    with open('todos.txt', 'r') as file:
        todos = file.readlines()

    todos.append(todo + '\n')
1
1
79,638,451
2025-5-26
https://stackoverflow.com/questions/79638451/error-while-running-constraints-optimization-using-cvxpy
I faced some issues when doing constrainted optmiziation. Use CVXPY Variables in Optimization: I'll set the unknown values (NaNs) to be part of the CVXPY optimization variable. import cvxpy as cp import numpy as np def optimize_x_simple(A, x_values): # Convert the list to a numpy array x_values = np.array(x_values, dtype=float) # Identify which entries are NaN (unknown) known_mask = np.isnan(x_values) # Set up CVXPY variables for the unknowns x_unknown = cp.Variable(np.sum(known_mask), nonneg=True) # only unknown values are optimized # Replace known values (non-NaN) in x with their values x_full = np.copy(x_values) # The indices of the unknown values unknown_indices = np.where(known_mask)[0] # Construct the full vector with CVXPY variables for unknowns for idx, unknown_idx in enumerate(unknown_indices): x_full[unknown_idx] = x_unknown[idx] # Compute b = A @ x, where x is the full vector b = A @ x_full # Define the constraints: 1.0 <= b_i <= 2.0 for each entry in b constraints = [b >= 1.0, b <= 2.0] # Define the objective function: minimize the sum of unknown values (just an example) objective = cp.Minimize(cp.sum(x_unknown)) # Create the optimization problem problem = cp.Problem(objective, constraints) # Solve the optimization problem problem.solve() if problem.status != cp.OPTIMAL: raise ValueError(f"Optimization failed: {problem.status}") # Replace optimized values back into the result x_optimized = np.copy(x_full) for idx, unknown_idx in enumerate(unknown_indices): x_optimized[unknown_idx] = x_unknown.value[idx] return x_optimized, b.value # Example usage: A = np.random.randn(5895, 393) # Replace with your actual A matrix x_example = [0.1, 0.2, np.nan, 0.1, 0.0] # Example input with NaN for unknowns # Call the optimization function x_optimized, b_vector = optimize_x_simple(A, x_example) # Print the optimized x and resulting b print("Optimized x:", x_optimized) print("Result b:", b_vector) # Construct the full vector with CVXPY variables for unknowns for idx, unknown_idx in enumerate(unknown_indices): x_full[unknown_idx] = x_unknown[idx] ValueError: setting an array element with a sequence. ERROR LOG TypeError Traceback (most recent call last) TypeError: float() argument must be a string or a real number, not 'index' The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) Cell In[1], line 55 52 x_example = [0.1, 0.2, np.nan, 0.1, 0.0] # Example input with NaN for unknowns 54 # Call the optimization function ---> 55 x_optimized, b_vector = optimize_x_simple(A, x_example) 57 # Print the optimized x and resulting b 58 print("Optimized x:", x_optimized) Cell In[1], line 22, in optimize_x_simple(A, x_values) 20 # Construct the full vector with CVXPY variables for unknowns 21 for idx, unknown_idx in enumerate(unknown_indices): ---> 22 x_full[unknown_idx] = x_unknown[idx] 24 # Compute b = A @ x, where x is the full vector 25 b = A @ x_full ValueError: setting an array element with a sequence. How do I solve the Above error? I am seeing the TypeERROR AS SHOWN BELOW. First section will be the coding that I m usin. Second section is the error that I am facing. What are the fixes?
You can't set the element of a numpy array with a cvxpy variable. A workaround is to use a dummy matrix multiplication to inflate your x_unknown to the size of x_full and then add the inflated array to the known values. Additionally, you need to replace the NaNs in x_full with zeros.
IDX_unknown2full = np.zeros((len(x_full), x_unknown.shape[0]))
for idx, unknown_idx in enumerate(unknown_indices):
    IDX_unknown2full[unknown_idx, idx] = 1

x_full[np.isnan(x_full)] = 0
b = A @ (x_full + IDX_unknown2full @ x_unknown)
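To make that concrete, here is a minimal sketch of how the workaround could slot into the question's optimize_x_simple. It is only a sketch: the variable names follow the question, and the objective and bounds are kept exactly as in the original code.
import cvxpy as cp
import numpy as np

def optimize_x_simple(A, x_values):
    x_full = np.array(x_values, dtype=float)
    unknown_indices = np.where(np.isnan(x_full))[0]
    x_unknown = cp.Variable(len(unknown_indices), nonneg=True)

    # Selection matrix that places each unknown into its slot of the full vector
    IDX_unknown2full = np.zeros((len(x_full), len(unknown_indices)))
    for idx, unknown_idx in enumerate(unknown_indices):
        IDX_unknown2full[unknown_idx, idx] = 1

    x_full[np.isnan(x_full)] = 0  # keep only the known part; unknowns come via the selection matrix
    b = A @ (x_full + IDX_unknown2full @ x_unknown)

    problem = cp.Problem(cp.Minimize(cp.sum(x_unknown)), [b >= 1.0, b <= 2.0])
    problem.solve()
    if problem.status != cp.OPTIMAL:
        raise ValueError(f"Optimization failed: {problem.status}")

    x_opt = x_full.copy()
    x_opt[unknown_indices] = x_unknown.value
    return x_opt, A @ x_opt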
1
3
79,642,264
2025-5-28
https://stackoverflow.com/questions/79642264/numpy-testing-assert-array-equal-fails-when-comparing-structured-numpy-arrays
I was comparing some data using numpy.testing.assert_array_equal. The data was read from a MAT-file using scipy.io.loadmat. The MAT-file was generated as follows: a = [1, 2; 3, 4]; b = struct('MyField', 10); c = struct('MyField', [1, 2; 3, 4]); save('example.mat', 'a', 'b', 'c'); For testing, I manually generated the expected NumPy array to match how scipy.io.loadmat outputs them: import numpy as np from numpy.testing import assert_array_equal from scipy.io import loadmat a = np.array([[1., 2.], [3., 4.]]) b = np.array([[(np.array(10.0),)]], dtype=[("MyField", "O")]) c = np.array( [[ (np.array([[1., 2.], [3., 4.]]),) ]], dtype=[("MyField", "O")]) matdict = loadmat("example.mat", mat_dtype=True) assert_array_equal(matdict["a"], a) # Passes assert_array_equal(matdict["b"], b) # Passes assert_array_equal(matdict["c"], c) # Fails This comparison fails only for variable c, throwing the following error: Traceback (most recent call last): File ".../python3.13/site-packages/numpy/testing/_private/utils.py", line 851, in assert_array_compare val = comparison(x, y) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() During handling of the above exception, another exception occurred: Traceback (most recent call last): ... File ".../python3.13/site-packages/numpy/testing/_private/utils.py", line 1057, in assert_array_equal assert_array_compare(operator.__eq__, actual, desired, err_msg=err_msg, ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ verbose=verbose, header='Arrays are not equal', ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ strict=strict) ^^^^^^^^^^^^^^ File ".../python3.13/site-packages/numpy/testing/_private/utils.py", line 929, in assert_array_compare raise ValueError(msg) ValueError: error during assertion: Traceback (most recent call last): File ".../python3.13/site-packages/numpy/testing/_private/utils.py", line 851, in assert_array_compare val = comparison(x, y) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Arrays are not equal ACTUAL: array([[(array([[1., 2.], [3., 4.]]),)]], dtype=[('MyField', 'O')]) DESIRED: array([[(array([[1., 2.], [3., 4.]]),)]], dtype=[('MyField', 'O')]) I initially suspected that the issue may be related to the usage of structured numpy array or maybe the object dtype. However I'm not too sure about this since the test passed for variable b. I don't know why it fails for this particular case only, since the stdout looks visually identical. I would really appreciate some help here on understanding the underlying issue here, and also explaining the right way to handle such comparisons. Thanks!
This happens because the field in c contains a NumPy array, and assert_array_equal tries to compare structured arrays using ==, which fails when it encounters arrays inside object fields. In b, the field is a scalar (10.0), so it works fine, but in c, MyField holds an array, and comparing two arrays with == returns an array of booleans, which causes the truth value error.
To fix it, compare the inner arrays manually using assert_array_equal:
from numpy.testing import assert_array_equal

# Extract and compare the nested arrays directly
actual = matdict["c"]
expected = np.array([[(np.array([[1., 2.], [3., 4.]]),)]], dtype=[("MyField", "O")])

assert actual.dtype == expected.dtype
assert actual.shape == expected.shape

for a, e in zip(actual.flat, expected.flat):
    assert_array_equal(a[0], e[0])  # Compare inner arrays
If you have to do this often, I suggest you write a helper :)
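Building on that suggestion, a minimal sketch of such a helper (an illustration of how you might factor it, not part of the original answer) walks every field of the structured array and compares object fields element-wise:
import numpy as np
from numpy.testing import assert_array_equal

def assert_structured_equal(actual, expected):
    """Compare structured arrays field by field, unwrapping object fields."""
    assert actual.dtype == expected.dtype
    assert actual.shape == expected.shape
    for name in actual.dtype.names:
        a_field, e_field = actual[name], expected[name]
        if a_field.dtype == object:
            # Object fields may hold nested arrays: compare each element individually
            for a, e in zip(a_field.flat, e_field.flat):
                assert_array_equal(a, e)
        else:
            assert_array_equal(a_field, e_field)
With the question's data, assert_structured_equal(matdict["c"], c) would then pass.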
3
1
79,642,049
2025-5-28
https://stackoverflow.com/questions/79642049/how-to-use-ak-array-with-index-arrays-to-create-a-custom-masked-output
I have these arrays in Awkward Array:
my_indices = ak.Array([[0,1],[0],[1]])
my_dummy_arr = ak.Array([[1,1],[1,1],[1,1]])
I want to generate this result:
[[1,1],[1,0],[0,1]]
Basically, for each sub-array, I want to keep 1 at the positions in my_indices and set 0 elsewhere, matching the structure of my_dummy_arr.
How can I achieve this with Awkward Array? (Note that this is a dummy example and the actual data is much larger, so I do not want to use slow operations like Python loops or list comprehensions.)
You can achieve that via a local index for the dummy array:
local_index = ak.local_index(my_dummy_arr)
mask = ak.any(local_index[:, :, None] == my_indices[:, None, :], axis=-1)
result = ak.where(mask, 1, 0)
1
0
79,643,186
2025-5-29
https://stackoverflow.com/questions/79643186/can-i-index-class-types-in-a-python-list
The intended functionality for the code I'm working on is to be able to load in molecules to a two-player game at runtime, and I've largely implemented a good chunk of the physics. I'm wondering if the functionality to look up a specific class type can be accessed in Python for the part of the project I'm currently working on.
I'm getting integer values from a class function in an imported library (chempy), and the function returns a dictionary of integers to represent which elements are present in a chemical formula, and in what quantity. I've got my own classes for element objects, and I'm trying to build a function which will take this dictionary of integers and construct the appropriate objects. For example, I might call it with the argument = {1:2,8:1} to signify water, because the chempy Substance.composition() method returns this value.
My initial thought was that I should just have a list of the elements in the range that I'm intending to implement models of, such that the dictionary keys can be used to index the array. However, I'm not sure how to implement a list of class types, as opposed to pre-loaded objects. I'm imagining something like this:
elements = [Hydrogen,Helium,Lithium,Beryllium,Boron,Carbon,Nitrogen,Oxygen]
I'm left wondering how to call a new object of instance elements[i]() (??) where i is a dictionary key from the chempy function's return value. Is any of this valid code? I can't find related documentation online, which makes me wonder if this functionality is accessed through different means, or even really feasible in this language?
Yes, this is absolutely valid and feasible in Python! You're on the right track. In Python, classes are first-class objects, meaning you can store them in lists, dictionaries, and variables, then call them to create instances. In your case, I recommend the Factory Pattern because it's Extensible, Error-safe, Game-friendly, and Clean. See in action Note: Check the comments, where and how it creates the class instance based on the list, dictionary, or factory lookup. from abc import ABC, abstractmethod from typing import Dict, List, Type class Element(ABC): """Base class for all chemical elements""" def __init__(self, quantity: int = 1): self.quantity = quantity self.atomic_number = self.get_atomic_number() self.symbol = self.get_symbol() self.name = self.get_name() @abstractmethod def get_atomic_number(self) -> int: pass @abstractmethod def get_symbol(self) -> str: pass @abstractmethod def get_name(self) -> str: pass def __repr__(self): return f"{self.__class__.__name__}(quantity={self.quantity})" class Hydrogen(Element): def get_atomic_number(self) -> int: return 1 def get_symbol(self) -> str: return "H" def get_name(self) -> str: return "Hydrogen" class Helium(Element): def get_atomic_number(self) -> int: return 2 def get_symbol(self) -> str: return "He" def get_name(self) -> str: return "Helium" class Lithium(Element): def get_atomic_number(self) -> int: return 3 def get_symbol(self) -> str: return "Li" def get_name(self) -> str: return "Lithium" # SOLUTION 1: List-based approach (your original idea) def create_molecule_list_approach(composition: Dict[int, int]) -> List[Element]: """list-based element lookup""" # List of element classes indexed by atomic number (0 is unused) elements = [ None, # 0 - placeholder Hydrogen, # 1 Helium, # 2 Lithium, # 3 ] molecule = [] for atomic_number, quantity in composition.items(): if atomic_number < len(elements) and elements[atomic_number] is not None: # elements[atomic_number] is a CLASS, calling it creates an INSTANCE element_instance = elements[atomic_number](quantity) molecule.append(element_instance) else: print(f"Warning: Element {atomic_number} not implemented") return molecule # SOLUTION 2: Dictionary-based approach (more flexible) def create_molecule_dict_approach(composition: Dict[int, int]) -> List[Element]: """dictionary-based element lookup""" element_classes = { 1: Hydrogen, 2: Helium, 3: Lithium, } molecule = [] for atomic_number, quantity in composition.items(): if atomic_number in element_classes: ElementClass = element_classes[atomic_number] # elements[atomic_number] is a CLASS a class element_instance = ElementClass(quantity) # This creates an instance molecule.append(element_instance) else: print(f"Warning: Element {atomic_number} not implemented") return molecule # SOLUTION 3: Factory Pattern (recommended for games) class ElementFactory: """Factory class for creating element instances""" def __init__(self): self._elements = { 1: Hydrogen, 2: Helium, 3: Lithium, } def create_molecule(self, composition: Dict[int, int]) -> List[Element]: """Create a molecule from composition dictionary""" molecule = [] for atomic_number, quantity in composition.items(): if atomic_number in self._elements: ElementClass = self._elements[atomic_number] element = ElementClass(quantity) molecule.append(element) else: print(f"Warning: Element {atomic_number} not supported") return molecule def main(): # Example compositions from chempy hydrogen_gas = {1: 2} # H2 lithium_hydride = {3: 1, 1: 1} # LiH print("=== List Approach ===") h2_list = 
create_molecule_list_approach(hydrogen_gas) lih_list = create_molecule_list_approach(lithium_hydride) print(f"H2: {h2_list}") print(f"LiH: {lih_list}") print("\n=== Dictionary Approach ===") h2_dict = create_molecule_dict_approach(hydrogen_gas) lih_dict = create_molecule_dict_approach(lithium_hydride) print(f"H2: {h2_dict}") print(f"LiH: {lih_dict}") print("\n=== Factory Approach (Recommended) ===") factory = ElementFactory() h2_factory = factory.create_molecule(hydrogen_gas) lih_factory = factory.create_molecule(lithium_hydride) print(f"H2: {h2_factory}") print(f"LiH: {lih_factory}") if __name__ == "__main__": main() # Your Integration with chempy def integrate_with_chempy(): """How to use this with chempy in your game""" # Simulated chempy output (replace with actual chempy code) # from chempy import Substance # composition = Substance.from_formula('H2').composition() composition = {1: 2} # What chempy returns for H2 factory = ElementFactory() molecule = factory.create_molecule(composition) print(f"\nChempy integration example:") print(f"Composition: {composition}") print(f"Molecule: {molecule}") # Use in your game physics for element in molecule: print(f"- {element.name}: {element.quantity} atoms") integrate_with_chempy()
2
2
79,645,288
2025-5-30
https://stackoverflow.com/questions/79645288/how-to-efficiently-retrieve-xy-coordinates-from-image
I have an image img with 1000 rows and columns each. Now I would like to consider each pixel as x- and y-coordinates and extract the respective value. An illustrated example of what I want to achieve:
In theory, the code snippet below should work. But it is super slow (I stopped the execution after some time).
img = np.random.rand(1000,1000)

xy = np.array([(x, y) for x in range(img.shape[1]) for y in range(img.shape[0])])
xy = np.c_[xy, np.zeros(xy.shape[0])]

for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        xy[np.logical_and(xy[:,1] == i, xy[:,0] == j), 2] = img[i,j]
Is there a faster way (e.g. some numpy magic) to go from one table to the other? Thanks in advance!
A possible solution:
pd.DataFrame(img).stack().reset_index().to_numpy()
Method pd.DataFrame creates a dataframe where rows represent y-coordinates and columns represent x-coordinates. stack then compresses the dataframe's columns into a single column, turning the x-values into part of a hierarchical index alongside the y-values. reset_index flattens this multi-level index into columns, resulting in a dataframe with columns [y, x, val]. Lastly, to_numpy converts the dataframe into a numpy array.
Alternatively, we can use only numpy, through np.meshgrid and np.hstack (to horizontally concatenate the vertical vectors):
h, w = img.shape
y, x = np.meshgrid(np.arange(w), np.arange(h))

np.hstack([
    x.reshape(-1, 1),
    y.reshape(-1, 1),
    img.reshape(-1, 1)
])
1
4
79,644,894
2025-5-30
https://stackoverflow.com/questions/79644894/python-doctests-for-a-colored-text-output
Is it possible to write a docstring test for a function that prints out colored text into the command line? I want to test only the content, ignoring the color, or to somehow add the information on color into the docstring. In the example below the test has failed, but it should not.
Example
from colorama import Fore


def print_result():
    """Prints out the result.

    >>> print_result()
    Hello world
    """
    print('Hello world')


def print_result_in_color():
    """Prints out the result.

    >>> print_result_in_color()
    Hello world
    """
    print(Fore.GREEN + 'Hello world' + Fore.RESET)


if __name__ == '__main__':
    import doctest
    doctest.testmod()
Output
Failed example:
    print_result_in_color()
Expected:
    Hello world
Got:
    Hello world
Yes, it is possible to encode the color information into the docstring using a wrapper that alters the docstrings. Below is example code that passes the test:
from colorama import Fore


def wrapper(func):
    # wrapper func that alters the docstring
    func.__doc__ = func.__doc__.format(**{"GREEN": Fore.GREEN, "RESET": Fore.RESET})
    return func


def print_result():
    """Prints out the result.

    >>> print_result()
    Hello world
    """
    print("Hello world")


@wrapper
def print_result_in_color():
    """Prints out the result.

    >>> print_result_in_color()
    {GREEN}Hello world{RESET}
    """
    print(Fore.GREEN + "Hello world" + Fore.RESET)


if __name__ == "__main__":
    import doctest
    doctest.testmod()
Note: You can also look into these answers for a more in-depth view on this.
1
1
79,648,605
2025-6-2
https://stackoverflow.com/questions/79648605/how-to-define-nullable-fields-for-sqltransform
I'm using Beam SqlTransform in Python, trying to define/pass nullable fields.
This code works just fine:
with beam.Pipeline(options=options) as p:
    # ...
    # Use beam.Row to create a schema-aware PCollection
    | "Create beam Row" >> beam.Map(lambda x: beam.Row(
        user_id=int(x['user_id']),
        user_name=str(x['user_name'])
    ))
    | 'SQL' >> SqlTransform("SELECT user_id, COUNT(*) AS msg_count FROM PCOLLECTION GROUP BY user_id")
However, I am not able to create nullable fields with this approach. Without the direct cast, I'm getting a decoding Field error.
user_id = json.get('user_id')
throws:
Failed to decode Schema due to an error decoding Field proto:
name: "user_id"
type {
  nullable: true
  logical_type {
    urn: "beam:logical:pythonsdk_any:v1"
  }
}
Without using beam.Row, any other object throws a missing schema error:
Cannot call getSchema when there is no schema
What is the proper way to define nullable fields?
When working with Apache Beam's SqlTransform in Python, you need to properly define nullable fields in your schema. Here are the correct approaches:
Option 1: Using beam.Row with Optional Types
The most straightforward way is to use Python's Optional type hint with beam.Row:
from typing import Optional

with beam.Pipeline(options=options) as p:
    (p
     | "Create beam Row" >> beam.Map(lambda x: beam.Row(
         user_id=Optional[int](x.get('user_id')),  # This makes the field nullable
         user_name=Optional[str](x.get('user_name'))
     ))
     | 'SQL' >> SqlTransform("SELECT user_id, COUNT(*) AS msg_count FROM PCOLLECTION GROUP BY user_id")
    )
Option 2: Using None for Null Values
Alternatively, you can explicitly pass None for null values:
with beam.Pipeline(options=options) as p:
    (p
     | "Create beam Row" >> beam.Map(lambda x: beam.Row(
         user_id=int(x['user_id']) if x['user_id'] is not None else None,
         user_name=str(x['user_name']) if x['user_name'] is not None else None
     ))
     | 'SQL' >> SqlTransform("SELECT user_id, COUNT(*) AS msg_count FROM PCOLLECTION GROUP BY user_id")
    )
Option 3: Using Schema Definition
For more complex schemas, you can define the schema explicitly:
from apache_beam.typehints import RowTypeConstraint
from apache_beam.typehints.schemas import Field, FieldType
from apache_beam.typehints.schemas import LogicalType

schema = {
    'user_id': FieldType(LogicalType('beam:logical:pythonsdk_any:v1'), nullable=True),
    'user_name': FieldType(LogicalType('beam:logical:pythonsdk_any:v1'), nullable=True)
}

with beam.Pipeline(options=options) as p:
    (p
     | "Create beam Row" >> beam.Map(lambda x: beam.Row(
         user_id=x.get('user_id'),
         user_name=x.get('user_name')
     )).with_output_types(RowTypeConstraint.from_fields(schema))
     | 'SQL' >> SqlTransform("SELECT user_id, COUNT(*) AS msg_count FROM PCOLLECTION GROUP BY user_id")
    )
1
1
79,648,218
2025-6-2
https://stackoverflow.com/questions/79648218/how-can-i-efficiently-parallelize-and-optimize-a-large-scale-graph-traversal-alg
I'm working on a Python project that involves processing a very large graph - it has millions of nodes and edges. The goal is to perform a breadth-first search (BFS) or depth-first search (DFS) from a given start node to compute shortest paths or reachability.
Here's the challenge:
The graph is too large to fit comfortably into memory if stored natively.
The traversal needs to be fast, ideally leveraging multiple CPU cores.
I want to avoid race conditions and ensure thread-safe updates to shared data structures.
The graph data is stored as adjacency lists in files, and I want to process it efficiently without loading the entire graph at once.
Currently, I have a basic BFS implementation using a Python dictionary for adjacency, but it runs slowly and hits memory limits:
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            queue.extend(graph.get(node, []))
    return visited
Question
How can I parallelize BFS/DFS in Python to speed up traversal? Should I use multiprocessing, concurrent.futures, or another approach? I'm open to suggestions involving alternative algorithms (like bidirectional BFS), external databases, or memory-mapped files.
I’ve tried the following:
A basic BFS using Python’s built-in set, deque, and dict, as shown in the code.
Storing the entire adjacency list in memory using a dictionary of lists, but this doesn’t scale well for millions of nodes.
I attempted to use the multiprocessing module to run multiple BFS instances in parallel from different starting points, but coordinating shared state and combining results without race conditions became complex.
Looked into NetworkX, but found it quite slow and memory-heavy for very large graphs.
I also tried reading adjacency data from a file line-by-line and processing it lazily, but traversal logic became messy and error-prone.
I expected to be able to:
Traverse the graph quickly (within a few seconds) even with millions of nodes.
Use multicore processing to speed up traversal.
Efficiently manage memory without crashing due to RAM limitations.
Ideally, stream data or keep only parts of the graph in memory at any given time.
I know Python isn’t the fastest language for this, but I’d like to push it as far as reasonably possible before considering rewriting in C++ or Rust.
You can use a combination of "pyspark + graphframes" to achieve this.
Sample Code
from pyspark.sql import SparkSession
from graphframes import GraphFrame

# Create Spark session
spark = SparkSession.builder \
    .appName("BFS") \
    .config("spark.jars.packages", "graphframes:graphframes:0.8.2-spark3.0-s_2.12") \
    .getOrCreate()

# Define vertices and edges as DataFrames
vertices = spark.createDataFrame([
    ("a", "Alice"),
    ("b", "Bob"),
    ("c", "Charlie"),
    ("d", "David"),
    ("e", "Esther")
], ["id", "name"])

edges = spark.createDataFrame([
    ("a", "b"),
    ("b", "c"),
    ("c", "d"),
    ("d", "e")
], ["src", "dst"])

# Create GraphFrame
g = GraphFrame(vertices, edges)

# Run BFS from "a" to "e"
results = g.bfs(fromExpr="id = 'a'", toExpr="id = 'e'")
results.show(truncate=False)
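Since the question also asks about shortest paths, it may be worth noting that GraphFrames ships a shortestPaths algorithm that computes distances from every vertex to a set of landmark vertices. A small sketch, reusing the GraphFrame g built above (the landmark choice is just an example):
# Shortest-path distances from every vertex to the landmark "a"
sp = g.shortestPaths(landmarks=["a"])
sp.select("id", "distances").show(truncate=False)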
2
1
79,648,050
2025-6-2
https://stackoverflow.com/questions/79648050/updating-current-value-in-flet
Good evening, Based on tutorial here : Flet tutorial i am following the codes , everything and i would like to ask one question, here is my code : import flet as ft from flet import * def main(page:ft.Page): greeting =ft.Ref[ft.Column]() def hello_here(e): greeting.current.controls.append(ft.Text(f'hello {first_name.current.value} {last_name.current.value}')) first_name.current.value ="" last_name.current.value ="" first_name.current.focus() page.update() page.title="My greatings" first_name =ft.Ref[ft.TextField]() last_name =ft.Ref[ft.TextField]() page.add( ft.TextField(ref=first_name,label="Enter First Name",autofocus=True), ft.TextField(ref=last_name,label="Enter Last Name"), ft.Column(ref=greeting), ft.ElevatedButton(text='Say Hello', on_click=hello_here) ) ft.app(target=main) if i run it :flet flet_ref_example.py i got this result : as you see, every time, we are adding elements, it is printed on page one following to another, is it possible to make a simple correction and old text should be replaced by new one? i think this line should be corrected greeting.current.controls.append(ft.Text(f'hello {first_name.current.value} {last_name.current.value}')) but how?
Simply assign a new list instead of appending to the old one:
def hello_here(e):
    greeting.current.controls = [ft.Text(f'hello {first_name.current.value} {last_name.current.value}')]
    # ... rest ...
EDIT:
If flet would make some problem with a memory leak when you assign a new list, then you can also use .clear() to remove the old elements without creating a new list in memory.
def hello_here(e):
    greeting.current.controls.clear()
    greeting.current.controls.append(ft.Text(f'hello {first_name.current.value} {last_name.current.value}'))
    # ... rest ...
4
1
79,651,164
2025-6-3
https://stackoverflow.com/questions/79651164/why-did-the-mocking-api-failed
My project is huge, I tried to write unitest for one API session import unittest from unittest.mock import patch, MagicMock from alm_commons_utils.mylau.lau_client import lauApiSession class TestlauApiSession(unittest.TestCase): @patch("alm_commons_utils.mylau.lau_client.lauApiSession.get_component") def test_mock_get_component(self, mock_get_component): # Mock the return value of get_component mock_get_component.return_value = {"component": "mock_component"} # Initialize the lauApiSession session = lauApiSession( ldap_user="mock_user", ldap_password="mock_password", lau_api_url="https://mock-lau-api-url", lau_login_url="https://mock-lau-login-url" ) # Call the mocked method result = session.get_component("mock_repo") # Assert the mocked method was called with the correct arguments mock_get_component.assert_called_once_with("mock_repo") # Assert the return value is as expected self.assertEqual(result, {"component": "mock_component"}) if __name__ == "__main__": unittest.main() I am running it as a job on Github Action,got error Run python -m unittest "alm_commons_utils.lautest.mock_lau_client" 2025-06-03 10:58:33,011 - asyncio - DEBUG - Using selector: EpollSelector 2025-06-03 10:58:33,012 [INFO] Retrieving token from lau API... (alm_commons_utils.mylau.lau_client) 2025-06-03 10:58:33,012 - alm_commons_utils.mylau.lau_client - INFO - Retrieving token from lau API... Error: -03 10:58:33,038 [ERROR] Error retrieving token: Cannot connect to host mock-lau-login-url:443 ssl:False [Name or service not known] (alm_commons_utils.mylau.lau_client) 2025-06-03 10:58:33,012 - root - INFO - Entry login_corporativo() 2025-06-03 10:58:33,012 - root - INFO - Entry on endpoint component_yaml_login_corporativo 2025-06-03 10:58:33,012 - root - DEBUG - Yaml alias is: mock_user 2025-06-03 10:58:33,012 - root - INFO - Exit on endpoint component_yaml_login_corporativo 2025-06-03 10:58:33,019 - root - ERROR - Timeout trying to make login 2025-06-03 10:58:33,022 - root - ERROR - Timeout trying to make login 2025-06-03 10:58:33,038 - alm_commons_utils.mylau.lau_client - ERROR - Error retrieving token: Cannot connect to host mock-lau-login-url:443 ssl:False [Name or service not known] I want to tell him not to connect, just to mock everything as a real life scenario. What is wrong with my code?
Your output in line #4 suggests that you are not retrieving the token. So basically your test is trying to retrieve a token from the URL https://mock-lau-login-url, your lau_login_url.
You'll need to also mock the token when you want to run the lauApiSession, since the constructor of lauApiSession is still making real network calls.
Option #1: Patch the method that retrieves the token
@patch("alm_commons_utils.mylau.lau_client.lauApiSession.get_component")
@patch("alm_commons_utils.mylau.lau_client.lauApiSession._get_token")  # or whatever the method is
def test_mock_get_component(self, mock_get_token, mock_get_component):
    mock_get_token.return_value = "mock_token"
    mock_get_component.return_value = {"component": "mock_component"}

    session = lauApiSession(
        ldap_user="mock_user",
        ldap_password="mock_password",
        lau_api_url="https://mock-lau-api-url",
        lau_login_url="https://mock-lau-login-url"
    )

    result = session.get_component("mock_repo")

    mock_get_component.assert_called_once_with("mock_repo")
    self.assertEqual(result, {"component": "mock_component"})
Option #2: Patch the entire constructor (not ideal since you are not testing the constructor)
@patch("alm_commons_utils.mylau.lau_client.lauApiSession.__init__", return_value=None)
@patch("alm_commons_utils.mylau.lau_client.lauApiSession.get_component")
def test_mock_get_component(self, mock_get_component, mock_init):
    mock_get_component.return_value = {"component": "mock_component"}

    session = lauApiSession()
    session.get_component = mock_get_component  # manually assign if needed

    result = session.get_component("mock_repo")

    mock_get_component.assert_called_once_with("mock_repo")
    self.assertEqual(result, {"component": "mock_component"})
2
1
79,651,120
2025-6-3
https://stackoverflow.com/questions/79651120/formatting-integers-in-pandas-dataframe
I've read the documentation and simply cannot understand why I can't seem to achieve my objective. All I want to do is output integers with a thousands separator where appropriate.
I'm loading a spreadsheet from my local machine that is in the public domain here
Here's my MRE:
import pandas as pd

WORKBOOK = "/Volumes/Spare/Downloads/prize-june-2025.xlsx"

def my_formatter(v):
    return f"{v:,d}" if isinstance(v, int) else v

df = pd.read_excel(WORKBOOK, header=2, usecols="B,C,E:H")
print(df.dtypes)
df.style.format(my_formatter)
print(df.head())
Output:
Prize Value                    int64
Winning Bond NO.              object
Total V of Holding             int64
Area                          object
Val of Bond                    int64
Dt of Pur             datetime64[ns]
dtype: object
   Prize Value Winning Bond NO.  Total V of Holding                Area  Val of Bond  Dt of Pur
0      1000000      103FE583469               50000           Stockport         5000 2005-11-29
1      1000000      352AC359547               50000  Edinburgh, City Of         5000 2019-02-11
2       100000      581WF624503               50000          Birmingham        20000 2024-06-03
3       100000      265SM364866               50000       Hertfordshire        32500 2016-01-31
4       100000      570HE759643               11000       Hertfordshire        11000 2024-02-22
I have determined that my_formatter() is never called and I have no idea why.
Your approach works fine; however, style does not modify the DataFrame in place. Instead it returns a special object that can be displayed (for instance in a notebook) or exported to a file.
You could see the HTML version in jupyter with:
df.style.format(my_formatter)
(this should be the last statement of the current cell!)
Or a text version with:
print(df.style.format(my_formatter).to_string())
Note that your approach is however quite slow. If you have homogeneous dtypes, you could take advantage of the builtin thousands parameter:
df.style.format(thousands=',')
Or if you want to use a custom format per column, pass a dictionary:
df.style.format({c: '{:,d}' for c in df.select_dtypes('number')})
And, finally, if you want to change the data to strings and return a DataFrame, you would need to use map:
out = df.map(my_formatter)
2
4
79,652,536
2025-6-4
https://stackoverflow.com/questions/79652536/matplotlib-hover-coordinates-with-labelled-xticks
I've got a matplotlib graph with labelled X-ticks:
The labels repeat (in case that's relevant). In the real graph, there is a multi-level X-axis with more clarification in the lower layers.
That works fine, but I want to be able to hover the mouse and see the X-coordinate in the top-right of the graph. Whenever I set xticks to labels, I just get a blank X-coordinate:
If I use ax.xaxis.set_major_formatter('{x:g}'), it gets rid of my labels but the cursor coordinate starts working:
Is there any way to make the cursor location still show the X coordinate even when I have a labelled X axis?
This also affects mplcursors: it shows the X value as empty if I click on a line between points, or with the label if I click exactly on a point (whereas I'd like to see the underlying numerical X-coordinate, as "A" is a bit meaningless without the context from the secondary axis):
Source code:
import matplotlib.pyplot as plt
import numpy as np
import mplcursors

x = np.arange(0, 9)
y = np.random.rand(*x.shape)
labels = ['A', 'B', 'C']*3

fig, ax = plt.subplots()
ax.plot(x, y, 'bx-', label='random')
ax.set_xticks(x, labels=labels)

# This makes the coordinate display work, but gets rid of the labels:
# ax.xaxis.set_major_formatter('{x:g}')

mplcursors.cursor(multiple=True)

plt.show()
From [ Matplotlib mouse cursor coordinates not shown for empty tick labels ], adding
ax.format_coord = lambda x, y: 'x={:g}, y={:g}'.format(x, y)
anywhere before plt.show() makes it show the right coordinates.
If you want to display A, B or C as x coordinates, you can make a custom coordinates format string function in which you round the current x position to an integer to get an index and get the corresponding label from labels:
def custom_format_coord(x_val, y_val):
    ix = int(round(x_val))
    if 0 <= ix < len(labels):
        x_label = labels[ix]
    else:
        x_label = f"{x_val}"
    return f"x={x_label}, y={y_val:g}"

ax.format_coord = custom_format_coord
1
2
79,653,909
2025-6-5
https://stackoverflow.com/questions/79653909/how-i-can-color-a-st-data-editor-cell-based-on-a-condition
I'm using streamlit's st.data_editor, and I have this DataFrame:
import streamlit as st
import pandas as pd

df = pd.DataFrame(
    [
        {"command": "test 1", "rating": 4, "is_widget": True},
        {"command": "test 2", "rating": 5, "is_widget": False},
        {"command": "test 3", "rating": 3, "is_widget": True},
    ]
)
edited_df = st.data_editor(df)
Is there a way to color a specific cell based on a condition? I want to colorize in yellow the cells (in the rating column) where the value is less than 2.
According to the docs, you can't color a cell if it's not in a disabled column.

Styles from pandas.Styler will only be applied to non-editable columns.

If disabling the rating column is acceptable to you, you can use pandas.Styler like this:
import streamlit as st
import pandas as pd

df = pd.DataFrame(
    [
        {"command": "test 1", "rating": 4, "is_widget": True},
        {"command": "test 2", "rating": 5, "is_widget": False},
        {"command": "test 3", "rating": 1, "is_widget": True},
    ]
)

# you can use .applymap(), but it's deprecated
df = df.style.map(
    lambda val: 'background-color: yellow' if val < 2 else '',
    subset=['rating']
)

edited_df = st.data_editor(df, disabled=["rating"])
2
0
79,657,102
2025-6-7
https://stackoverflow.com/questions/79657102/how-to-pass-several-variables-in-for-a-pandas-groupby
This code works:
cohort = r'priority'
result2025 = df.groupby([cohort],as_index=False).agg({'resolvetime': ['count','mean']})
and this code works
cohort = r'impactedservice'
result2025 = df.groupby([cohort],as_index=False).agg({'resolvetime': ['count','mean']})
and this code works
result2025 = df.groupby(['impactedservice','priority'],as_index=False).agg({'resolvetime': ['count','mean']})
but what is not working for me is defining the cohort variable to be
cohort = r'impactedservice,priority'  # a double-cohort
result2025 = df.groupby([cohort],as_index=False).agg({'resolvetime': ['count','mean']})
That gives error:
KeyError: 'impactedservice,priority'
How to properly define the cohort variable in this case?
The issue is that when you do:
cohort = r'impactedservice,priority'
you're creating a single string, not a list of column names. Pandas treats that as a single column name (which doesn’t exist), hence the KeyError.
Correct way: define cohort as a list of column names:
cohort = ['impactedservice', 'priority']
result2025 = df.groupby(cohort, as_index=False).agg({'resolvetime': ['count', 'mean']})
Now groupby knows you're grouping by multiple columns.
You can build cohort dynamically as a list too if needed:
cohort = ['impactedservice']
if use_priority:
    cohort.append('priority')
1
2
79,657,990
2025-6-8
https://stackoverflow.com/questions/79657990/why-is-my-shiny-express-app-having-trouble-controlling-the-barchart-plot
I'm having trouble setting the y axis to start at zero for the following Shiny Express Python script. Instead it starts at 4.1. The set_ylim is having no effect.
from shiny.express import input, render, ui
import matplotlib.pyplot as plt
import numpy as np

data = {
    "Maturity": ['1Y', '2Y', '3Y', '4Y', '5Y', '6Y', '7Y', '8Y'],
    "Yield": [4.1, 4.3, 4.5, 4.7, 4.8, 4.9, 5.0, 5.1]
}

data = np.array([data["Maturity"], data["Yield"]])
#df = pd.DataFrame(data)
print(data[1])

def create_line_plot():
    x = data[0]
    y = data[1]
    fig, ax = plt.subplots()
    #fig.title("Yield Curve")
    ax.bar(x, y)
    ax.set_xlabel("Maturity")
    ax.yaxis.set_label_text("Yield (%)")
    ax.set_ylim(bottom=0)  # Ensures the y-axis starts at 0
    return fig

@render.plot
def my_plot():
    return create_line_plot()
This is an issue which results from how you process your data. Avoid the array conversion:
from shiny.express import render
import matplotlib.pyplot as plt

data = {
    '1Y': 4.1, '2Y': 4.3, '3Y': 4.5, '4Y': 4.7,
    '5Y': 4.8, '6Y': 4.9, '7Y': 5.0, '8Y': 5.1
}

@render.plot
def create_line_plot():
    x = list(data.keys())
    y = list(data.values())
    fig, ax = plt.subplots()
    ax.bar(x, y)
    ax.set_xlabel("Maturity")
    ax.yaxis.set_label_text("Yield (%)")
    ax.set_ylim(bottom=0)
    return fig
1
0
79,657,426
2025-6-8
https://stackoverflow.com/questions/79657426/estimation-internet-speed-test-app-in-flet
i have completly followed to the following link :youtube tutorial for speed test and also given link : github page and there is my code : import flet as ft from flet import * from time import sleep import speedtest def main(page:ft.Page): page.title ="Internet Speed Test" page.theme_mode ="dark" page.vertical_alignment ="center" page.horizontal_alignment ="center" page.window.bgcolor ="blue" page.padding =30 page.bgcolor ="black" page.auto_scroll =True page.fonts={ "RoosterPersonalUse":"fonts/RoosterPersonalUse-3z8d8.ttf", "SourceCodePro-BlackItalic":"fonts/SourceCodePro-BlackItalic.ttf", "SourceCodePro-Bold" :"fonts/SourceCodePro-Bold.ttf" } st =speedtest.Speedtest(secure=True) appTitle =ft.Row( controls=[ ft.Text(value="Internet",font_family="RoosterPersonalUse", style=ft.TextThemeStyle(value="displayLarge"),color="#ff3300"), ft.Text(value ="Speed",font_family="SourceCodePro-BlackItalic", style=ft.TextThemeStyle(value="displayLarge"),color="#ffff00") ],alignment=ft.MainAxisAlignment(value="center") ) line_01 = ft.Text(value="> press start...", font_family="SourceCodePro-BlackItalic", color="#ffffff") line_02 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_03 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_04 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#ffff00") line_05 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_06 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_07 = ft.Text(value="", font_family="SourceCodePro-BlackItalic",color="#ffff00") line_08 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#ffffff") progress_bar_01 =ft.ProgressBar(width=400,color="#0080ff", bgcolor="#eeeeee", opacity=0) progress_text_01 = ft.Text(" ", font_family="SourceCodePro-BlackItalic", color="#1aff1a", ) progress_row_01 = ft.Row([progress_text_01, progress_bar_01]) progress_bar_02 = ft.ProgressBar(width=400, color="#0080ff", bgcolor="#eeeeee", opacity=0) progress_text_02 = ft.Text(" ", font_family="SourceCodePro-BlackItalic", color="#1aff1a", ) progress_row_02 = ft.Row([progress_text_02, progress_bar_02]) terminalText = ft.Column( [line_01, line_02, line_03, progress_row_01, line_04, line_05, line_06, progress_row_02, line_07, line_08]) getSpeedContainer =ft.Container( content=terminalText, width=200, height=100, bgcolor="#4d4d4d", border_radius=30, padding=20, animate=ft.Animation(duration=1000,curve=ft.AnimationCurve(value="bounceOut")) ) def animate_getSpeedContainer(e): getSpeedContainer.update() progress_row_01.opacity = 0 progress_bar_01.opacity = 0 progress_bar_01.value = None progress_row_02.opacity = 0 progress_bar_02.opacity = 0 progress_bar_02.value = None line_01.text_to_print = "" line_01.update() line_02.text_to_print = "" line_02.update() line_03.text_to_print = "" line_03.update() line_04.text_to_print = "" line_04.update() line_05.text_to_print = "" line_05.update() line_06.text_to_print = "" line_06.update() line_07.text_to_print = "" line_07.update() line_08.text_to_print = "" line_08.update() getSpeedContainer.width =700 getSpeedContainer.height =400 getSpeedContainer.update() getSpeedContainer.update() getSpeedContainer.width = 700 getSpeedContainer.height = 400 line_01.text_to_print = "> calculating download speed, please wait..." 
getSpeedContainer.update() sleep(1) line_01.update() ideal_server = st.get_best_server() # this will find out and connect to the best possible server city = ideal_server["name"] # for getting the city name country = ideal_server["country"] # for getting the country name cc = ideal_server["cc"] # for getting the country code line_02.text_to_print = f"> finding the best possible servers in {city}, {country} ({cc})" line_02.update() getSpeedContainer.update() sleep(1) line_03.text_to_print = "> connection established, status OK, fetching download speed" line_03.update() progress_row_01.opacity = 1 progress_bar_01.opacity = 1 getSpeedContainer.update() download_speed = st.download() / 1024 / 1024 # bytes/sec to Mbps progress_bar_01.value = 1 line_04.text_to_print = f"> the download speed is {str(round(download_speed, 2))} Mbps" line_04.update() getSpeedContainer.update() line_05.text_to_print = "> calculating upload speed, please wait..." line_05.update() getSpeedContainer.update() sleep(1) line_06.text_to_print = "> executing upload script, hold on" line_06.update() progress_row_02.opacity = 1 progress_bar_02.opacity = 1 getSpeedContainer.update() upload_speed = st.upload() / 1024 / 1024 # bytes/sec to Mbps progress_bar_02.value = 1 line_07.text_to_print = f"> the upload speed is {str(round(upload_speed, 2))} Mbps" line_07.update() getSpeedContainer.update() sleep(1) line_08.text_to_print = f"> task completed successfully\n\n>> app developer: kumar anurag (instagram: kmranrg)" line_08.update() getSpeedContainer.update() page.add( appTitle, getSpeedContainer, ft.IconButton(icon=ft.Icons.PLAY_CIRCLE_FILL_OUTLINED,icon_color="green",icon_size=70, on_click=animate_getSpeedContainer), ) ft.app(target=main,assets_dir="assets") result of this code is given as : as you can see texts are not displayed and also reseults of speeds are not calculated, could you tell me please reason for it? thanks in advance
There were multiple issues with your code. Fixes: .text_to_print used incorrectly. Replaced with .value speedtest might silently fail. Added try/except for debug Font might not load. Used system font or ensure asset path is correct. sleep() might block UI. Used asyncio.sleep() if using async version later Here is the corrected code: import flet as ft from flet import * from time import sleep import speedtest def main(page: ft.Page): page.title = "Internet Speed Test" page.theme_mode = "dark" page.vertical_alignment = "center" page.horizontal_alignment = "center" page.window.bgcolor = "blue" page.padding = 30 page.bgcolor = "black" page.auto_scroll = True page.fonts = { "RoosterPersonalUse": "fonts/RoosterPersonalUse-3z8d8.ttf", "SourceCodePro-BlackItalic": "fonts/SourceCodePro-BlackItalic.ttf", "SourceCodePro-Bold": "fonts/SourceCodePro-Bold.ttf" } st = speedtest.Speedtest(secure=True) appTitle = ft.Row( controls=[ ft.Text(value="Internet", font_family="RoosterPersonalUse", style=ft.TextThemeStyle.DISPLAY_LARGE, color="#ff3300"), ft.Text(value="Speed", font_family="SourceCodePro-BlackItalic", style=ft.TextThemeStyle.DISPLAY_LARGE, color="#ffff00") ], alignment=ft.MainAxisAlignment.CENTER ) line_01 = ft.Text(value="> press start...", font_family="SourceCodePro-BlackItalic", color="#ffffff") line_02 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_03 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_04 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#ffff00") line_05 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_06 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#1aff1a") line_07 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#ffff00") line_08 = ft.Text(value="", font_family="SourceCodePro-BlackItalic", color="#ffffff") progress_bar_01 = ft.ProgressBar(width=400, color="#0080ff", bgcolor="#eeeeee", opacity=0) progress_text_01 = ft.Text(" ", font_family="SourceCodePro-BlackItalic", color="#1aff1a") progress_row_01 = ft.Row([progress_text_01, progress_bar_01]) progress_bar_02 = ft.ProgressBar(width=400, color="#0080ff", bgcolor="#eeeeee", opacity=0) progress_text_02 = ft.Text(" ", font_family="SourceCodePro-BlackItalic", color="#1aff1a") progress_row_02 = ft.Row([progress_text_02, progress_bar_02]) terminalText = ft.Column([ line_01, line_02, line_03, progress_row_01, line_04, line_05, line_06, progress_row_02, line_07, line_08 ]) getSpeedContainer = ft.Container( content=terminalText, width=200, height=100, bgcolor="#4d4d4d", border_radius=30, padding=20, animate=ft.Animation(duration=1000, curve=ft.AnimationCurve.BOUNCE_OUT) ) def animate_getSpeedContainer(e): # Reset progress_row_01.opacity = 0 progress_bar_01.opacity = 0 progress_bar_01.value = None progress_row_02.opacity = 0 progress_bar_02.opacity = 0 progress_bar_02.value = None for line in [line_01, line_02, line_03, line_04, line_05, line_06, line_07, line_08]: line.value = "" line.update() getSpeedContainer.width = 700 getSpeedContainer.height = 400 getSpeedContainer.update() line_01.value = "> calculating download speed, please wait..." 
        line_01.update()
        sleep(1)

        ideal_server = st.get_best_server()
        city = ideal_server["name"]
        country = ideal_server["country"]
        cc = ideal_server["cc"]
        line_02.value = f"> finding the best possible servers in {city}, {country} ({cc})"
        line_02.update()
        sleep(1)

        line_03.value = "> connection established, status OK, fetching download speed"
        line_03.update()
        progress_row_01.opacity = 1
        progress_bar_01.opacity = 1
        getSpeedContainer.update()

        download_speed = st.download() / 1024 / 1024
        progress_bar_01.value = 1
        progress_bar_01.update()
        line_04.value = f"> the download speed is {round(download_speed, 2)} Mbps"
        line_04.update()

        line_05.value = "> calculating upload speed, please wait..."
        line_05.update()
        sleep(1)

        line_06.value = "> executing upload script, hold on"
        line_06.update()
        progress_row_02.opacity = 1
        progress_bar_02.opacity = 1
        getSpeedContainer.update()

        upload_speed = st.upload() / 1024 / 1024
        progress_bar_02.value = 1
        progress_bar_02.update()
        line_07.value = f"> the upload speed is {round(upload_speed, 2)} Mbps"
        line_07.update()
        sleep(1)

        line_08.value = f"> task completed successfully\n\n>> app developer: kumar anurag (instagram: kmranrg)"
        line_08.update()
        getSpeedContainer.update()

    page.add(
        appTitle,
        getSpeedContainer,
        ft.IconButton(
            icon=ft.Icons.PLAY_CIRCLE_FILL_OUTLINED,
            icon_color="green",
            icon_size=70,
            on_click=animate_getSpeedContainer
        )
    )


ft.app(target=main, assets_dir="assets")

Output:
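Following up on the sleep() point in the fix list above: recent Flet releases also accept async event handlers, so asyncio.sleep can be awaited and the UI keeps updating while waiting. This is a minimal sketch with made-up control names (not the app above), assuming a Flet version that supports async handlers:

import asyncio
import flet as ft


def main(page: ft.Page):
    status = ft.Text(value="> press start...")

    async def on_start(e):
        # Awaiting asyncio.sleep yields control back to Flet's event loop,
        # unlike time.sleep, which freezes UI updates until it returns.
        status.value = "> working..."
        status.update()
        await asyncio.sleep(1)
        status.value = "> done"
        status.update()

    page.add(status, ft.IconButton(icon=ft.Icons.PLAY_CIRCLE_FILL_OUTLINED, on_click=on_start))


ft.app(target=main)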
1
1
79,658,364
2025-6-9
https://stackoverflow.com/questions/79658364/asyncio-run-coroutine-from-a-synchronous-function
How can I call task2 from func without declaring func async and awaiting it? My first thought was to create a thread and use run_coroutine_threadsafe, but it deadlocks; the same happens without a thread. Do I have to start a new loop?

import asyncio
from threading import Thread


async def task2():
    print("starting task2...")
    await asyncio.sleep(1)
    print("finished task2.")
    return "done"


def func(loop=None):
    print("running func...")
    if not loop:
        loop = asyncio.get_running_loop()
    assert loop
    future = asyncio.run_coroutine_threadsafe(task2(), loop)
    result = future.result()
    print(f"{result=}")
    print("done func...")


async def task1():
    print("starting task1...")
    await asyncio.sleep(1)
    # func()
    loop = asyncio.get_running_loop()
    t = Thread(target=func, args=(loop,))
    t.start()
    t.join()
    print("finished task1.")


if __name__ == '__main__':
    asyncio.run(task1())
Python threading synchronization primitives such as Thread.join don't work well with asyncio, because they suspend the calling thread and therefore block the running event loop - so the coroutine submitted via run_coroutine_threadsafe can never be serviced by the blocked loop. Instead, use loop.run_in_executor to run the blocking function on a worker thread without blocking the event loop.

import asyncio


async def task2():
    print("starting task2...")
    await asyncio.sleep(1)
    print("finished task2.")
    return "done"


def func(loop: asyncio.AbstractEventLoop):
    print("running func...")
    future = asyncio.run_coroutine_threadsafe(task2(), loop)
    result = future.result()
    print(f"{result=}")
    print("done func...")


async def task1():
    print("starting task1...")
    await asyncio.sleep(1)
    loop = asyncio.get_running_loop()
    task = loop.run_in_executor(None, func, loop)
    await task  # doesn't block the event loop
    print("finished task1.")


if __name__ == '__main__':
    asyncio.run(task1())

starting task1...
running func...
starting task2...
finished task2.
result='done'
done func...
finished task1.

Each event loop has a default ThreadPoolExecutor that it uses when you pass None as the executor. It has a limited number of workers, so many long-blocking jobs can exhaust the pool; it may be beneficial to install a larger one with loop.set_default_executor, or to give different parts of your codebase their own ThreadPoolExecutor so that one part cannot hog all the workers with blocking tasks (threads are created lazily in any case).

Another solution is asyncio.run, which creates a new event loop on the current thread to run the coroutine - just make sure you only call it on a thread that doesn't already have an event loop running, or it will raise an exception. Sending the coroutine back to the original loop, as you are doing, is a good solution and is actually the most performant; otherwise, you can create a shared daemon thread that runs an event loop just for the sake of sending coroutines to it.
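To illustrate that last suggestion, a shared daemon thread running its own event loop could look roughly like this (a sketch, not part of the answer's code; the _loop and _run_loop names are made up for illustration):

import asyncio
import threading

# One event loop for the whole program, driven by a daemon thread.
_loop = asyncio.new_event_loop()


def _run_loop():
    asyncio.set_event_loop(_loop)
    _loop.run_forever()


threading.Thread(target=_run_loop, daemon=True, name="background-loop").start()


async def task2():
    print("starting task2...")
    await asyncio.sleep(1)
    print("finished task2.")
    return "done"


def func():
    # Safe to call from plain synchronous code: the coroutine runs on the
    # daemon thread's loop, and future.result() blocks only this caller.
    future = asyncio.run_coroutine_threadsafe(task2(), _loop)
    return future.result()


if __name__ == "__main__":
    print(func())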
1
3
79,661,702
2025-6-11
https://stackoverflow.com/questions/79661702/how-to-specify-relevant-columns-with-read-excel
As far as I can tell, the following MRE conforms to the relevant documentation:

import polars

df = polars.read_excel(
    "/Volumes/Spare/foo.xlsx",
    engine="calamine",
    sheet_name="natsav",
    read_options={"header_row": 2},
    columns=(1, 2, 4, 5, 6, 7),  # columns 0 and 3 are not needed
)
print(df.head())

The issue here is that the documentation states, for the columns parameter: "Columns to read from the sheet; if not specified, all columns are read. Can be given as a sequence of column names or indices." Clearly, a tuple is a sequence. However, running this code results in an exception as follows:

_fastexcel.InvalidParametersError: invalid parameters: `use_columns` callable could not be called (TypeError: 'tuple' object is not callable)

Further research reveals that the required callable should return bool. So:

def colspec(c):
    print(type(c))
    return True

I then change the read_excel call to include columns=colspec. The program now runs without exception and reveals that the parameter passed is of type builtins.ColumnInfoNoDtype. Unfortunately, I can't find any documentation for that type. Is the documentation wrong? How is one supposed to use polars.read_excel to load only certain specific columns?
When you use Calamine, which is the default engine and which you've specified explicitly, the docs say (emphasis mine): this engine can be used for reading all major types of Excel Workbook (.xlsx, .xlsb, .xls) and is dramatically faster than the other options, using the fastexcel module to bind the Rust-based Calamine parser.

This corresponds with what the error message tells you:

_fastexcel.InvalidParametersError: invalid parameters: `use_columns` callable could not be called (TypeError: 'tuple' object is not callable)
^^^^^^^^^^                                              ^^^^^^^^^^^

This isn't an error from read_excel itself; the columns argument is being passed through to fastexcel. Looking at fastexcel's docs, we can see the use_columns parameter defined as:

use_columns: Union[list[str], list[int], str, Callable[[ColumnInfoNoDtype], bool], NoneType] = None,

(This is also where the ColumnInfoNoDtype type is defined; it is again not part of polars itself.) It is described as:

Specifies the columns to use. Can either be:
- None to select all columns
- A list of strings and ints, the column names and/or indices (starting at 0)
- A string, a comma separated list of Excel column letters and column ranges (e.g. "A:E" or "A,C,E:F", which would result in A,B,C,D,E and A,C,E,F)
- A callable, a function that takes a column and returns a boolean

Here we can see that, alongside the Callable[[ColumnInfoNoDtype], bool] form your input is apparently being interpreted as, there are list[int] and list[str] options - a list specifically, not any Sequence (which would include tuple). Presumably the polars docs describe the input loosely because how the value is interpreted depends on which engine it is passed to; in this case the engine expects a list (or one of the other forms above) rather than an arbitrary sequence.
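In practice, either of the following forms should work with the calamine engine (a sketch reusing the asker's file path and sheet name, not re-run here; the callable form assumes ColumnInfoNoDtype exposes an index attribute as described in fastexcel's docs):

import polars

# List form: a plain list of column indices (not a tuple); columns 0 and 3 are skipped.
df = polars.read_excel(
    "/Volumes/Spare/foo.xlsx",
    engine="calamine",
    sheet_name="natsav",
    read_options={"header_row": 2},
    columns=[1, 2, 4, 5, 6, 7],
)

# Callable form: fastexcel passes each column's ColumnInfoNoDtype and keeps
# the columns for which the function returns True.
df2 = polars.read_excel(
    "/Volumes/Spare/foo.xlsx",
    engine="calamine",
    sheet_name="natsav",
    read_options={"header_row": 2},
    columns=lambda col: col.index not in (0, 3),
)

print(df.head())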
1
1