Dataset columns (type and observed range/length):
question_id: int64, 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-14 00:00:00
link: string, length 60 to 163
question: string, length 53 to 28.9k
accepted_answer: string, length 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
79,615,990
2025-5-10
https://stackoverflow.com/questions/79615990/how-to-concatenate-n-rows-of-content-to-current-row-in-a-rolling-window-in-pa
I'm looking to transform a dataframe containing [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] into [[1, 2, 3, []], [4, 5, 6, [1, 2, 3, 4, 5, 6]], [7, 8, 9, [4, 5, 6, 7, 8, 9]], [10, 11, 12, [7, 8, 9, 10, 11, 12]]] So far the only working solution I've come up with is: import pandas as pd import numpy as np # Create the DataFrame df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])) # Initialize an empty list to store the result result = [] # Iterate over the rows in the DataFrame for i in range(len(df)): # If it's the first row, append the row with an empty list if i == 0: result.append(list(df.iloc[i]) + [[]]) # If it's not the first row, concatenate the current and previous row else: current_row = list(df.iloc[i]) previous_row = list(df.iloc[i-1]) concatenated_row = current_row + [previous_row + current_row] result.append(concatenated_row) # Print the result print(result) Is there no built-in Pandas function that can roll a window, and add the results to the current row, like the above can?
This doesn't need windowing, IIUC, you can use df.shift: x = df.apply(lambda x: x.tolist(), axis=1) df[3] = (x.shift() + x) Output: 0 1 2 3 0 1 2 3 NaN 1 4 5 6 [1, 2, 3, 4, 5, 6] 2 7 8 9 [4, 5, 6, 7, 8, 9] 3 10 11 12 [7, 8, 9, 10, 11, 12] Adding window sizing: import pandas as pd import numpy as np from functools import reduce df = pd.DataFrame(np.arange(99).reshape(-1,3)) x = df.apply(lambda x: x.tolist(), axis=1) #change window size here window_size = 3 df[3] = reduce(lambda x, y: x+y, [x.shift(i) for i in range(window_size,-1,-1)]) df Output: 0 1 2 3 0 0 1 2 NaN 1 3 4 5 NaN 2 6 7 8 NaN 3 9 10 11 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] 4 12 13 14 [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] 5 15 16 17 [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] 6 18 19 20 [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] 7 21 22 23 [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] 8 24 25 26 [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] 9 27 28 29 [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] 10 30 31 32 [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] 11 33 34 35 [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35] 12 36 37 38 [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38] 13 39 40 41 [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] 14 42 43 44 [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] 15 45 46 47 [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] 16 48 49 50 [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50] 17 51 52 53 [42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53] 18 54 55 56 [45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56] 19 57 58 59 [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] 20 60 61 62 [51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62] 21 63 64 65 [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65] 22 66 67 68 [57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68] 23 69 70 71 [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71] 24 72 73 74 [63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74] 25 75 76 77 [66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77] 26 78 79 80 [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80] 27 81 82 83 [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83] 28 84 85 86 [75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86] 29 87 88 89 [78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89] 30 90 91 92 [81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92] 31 93 94 95 [84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95] 32 96 97 98 [87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98]
3
1
79,617,835
2025-5-12
https://stackoverflow.com/questions/79617835/python-3-13-threading-lock-acquire-vs-lock-acquire-lock
In Python 3.13 (haven't checked lower versions) there seem to be two locking mechanisms for the threading.Lock class. I've looked online but found no mentions of acquire_lock or release_lock and wanted to ask if anyone knows what the difference is between them and the standard acquire and release methods. Here's the threading.Lock class for reference. The methods are commented as undocumented: class Lock: def __enter__(self) -> bool: ... def __exit__( self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None ) -> None: ... def acquire(self, blocking: bool = ..., timeout: float = ...) -> bool: ... def release(self) -> None: ... def locked(self) -> bool: ... def acquire_lock(self, blocking: bool = ..., timeout: float = ...) -> bool: ... # undocumented def release_lock(self) -> None: ... # undocumented def locked_lock(self) -> bool: ... # undocumented Just curious whether there's a difference between the call lock.acquire_lock and lock.acquire or whether this is merely a name-change that will take effect in the future.
Currently, they are just aliases, and according to the GitHub history they have been like that for the past 15 years. You shouldn't be using undocumented functions; they can be removed at any time. {"acquire_lock", _PyCFunction_CAST(lock_PyThread_acquire_lock), ... {"acquire", _PyCFunction_CAST(lock_PyThread_acquire_lock), ... {"release_lock", lock_PyThread_release_lock, ... {"release", lock_PyThread_release_lock, The best thing to do is not to use any of the 4 functions directly anyway; instead use a with block, which acquires the lock and releases it properly even when an exception is thrown. some_lock = Lock() with some_lock: # code protected by lock here # lock released even when an exception is thrown
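To make the recommendation above concrete, here is a small runnable sketch (not part of the original answer) showing that the undocumented alias operates on the same lock state as the documented method, followed by the preferred with-block form; the thread/counter harness is purely illustrative:

import threading

lock = threading.Lock()

# The undocumented alias and the documented method act on the same lock state.
lock.acquire_lock()
print(lock.locked())   # True
lock.release()         # releases the lock taken via acquire_lock()
print(lock.locked())   # False

# Preferred form: the context manager releases the lock even if an exception is raised.
counter = 0

def increment():
    global counter
    with lock:
        counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)         # 4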
2
8
79,615,872
2025-5-10
https://stackoverflow.com/questions/79615872/why-is-array-manipulation-in-jax-much-slower
I'm working on converting a transformation-heavy numerical pipeline from NumPy to JAX to take advantage of JIT acceleration. However, I’ve found that some basic operations like broadcast_to and moveaxis are significantly slower in JAX, even without JIT, compared to NumPy, and even for large batch sizes like 3,000,000 where I would expect JAX to be much quicker. ### Benchmark: moveaxis + broadcast_to ### NumPy: moveaxis + broadcast_to → 0.000116 s JAX: moveaxis + broadcast_to → 0.204249 s JAX JIT: moveaxis + broadcast_to → 0.054713 s ### Benchmark: broadcast_to only ### NumPy: broadcast_to → 0.000059 s JAX: broadcast_to → 0.062167 s JAX JIT: broadcast_to → 0.057625 s Am I doing something wrong? Are there better ways of performing these kinds of manipulations? Here's a minimal benchmark ChatGPT generated, comparing broadcast_to and moveaxis in NumPy, JAX, and JAX with JIT: import timeit import jax import jax.numpy as jnp import numpy as np from jax import jit # Base transformation matrix M_np = np.array([[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]) M_jax = jnp.array(M_np) # Batch size n = 1_000_000 print("### Benchmark: moveaxis + broadcast_to ###") # NumPy t_numpy = timeit.timeit( lambda: np.moveaxis(np.broadcast_to(M_np[:, :, None], (4, 4, n)), 2, 0), number=10 ) print(f"NumPy: moveaxis + broadcast_to → {t_numpy:.6f} s") # JAX t_jax = timeit.timeit( lambda: jnp.moveaxis(jnp.broadcast_to(M_jax[:, :, None], (4, 4, n)), 2, 0).block_until_ready(), number=10 ) print(f"JAX: moveaxis + broadcast_to → {t_jax:.6f} s") # JAX JIT @jit def broadcast_and_move_jax(M): return jnp.moveaxis(jnp.broadcast_to(M[:, :, None], (4, 4, n)), 2, 0) # Warm-up broadcast_and_move_jax(M_jax).block_until_ready() t_jit = timeit.timeit( lambda: broadcast_and_move_jax(M_jax).block_until_ready(), number=10 ) print(f"JAX JIT: moveaxis + broadcast_to → {t_jit:.6f} s") print("\n### Benchmark: broadcast_to only ###") # NumPy t_numpy_b = timeit.timeit( lambda: np.broadcast_to(M_np[:, :, None], (4, 4, n)), number=10 ) print(f"NumPy: broadcast_to → {t_numpy_b:.6f} s") # JAX t_jax_b = timeit.timeit( lambda: jnp.broadcast_to(M_jax[:, :, None], (4, 4, n)).block_until_ready(), number=10 ) print(f"JAX: broadcast_to → {t_jax_b:.6f} s") # JAX JIT @jit def broadcast_only_jax(M): return jnp.broadcast_to(M[:, :, None], (4, 4, n)) broadcast_only_jax(M_jax).block_until_ready() t_jit_b = timeit.timeit( lambda: broadcast_only_jax(M_jax).block_until_ready(), number=10 ) print(f"JAX JIT: broadcast_to → {t_jit_b:.6f} s")
There are a couple things happening here that come from the different execution models of NumPy and JAX. First, NumPy operations like broadcasting, transposing, reshaping, slicing, etc. typically return views of the original buffer. In JAX, it is not possible for two array objects to share memory, and so the equivalent operations return copies. I suspect this is the largest contribution to the timing difference here. Second, NumPy tends to have very fast dispatch time for individual operations. JAX has much slower dispatch time for individual operations, and this can become important when the operation itself is very cheap (like "return a view of the array with different strides/shape") You might wonder given these points how JAX could ever be faster than NumPy. The key is JIT compilation of sequences of operations: within JIT-compiled code, sequences of operations are fused so that the output of each individual operation need not be allocated (or indeed, need not even exist at all as a buffer of intermediate values). Additionally, for JIT compiled sequences of operations the dispatch overhead is paid only once for the whole program. Compare this to NumPy where there's no way to fuse operations or to avoid paying the dispatch cost of each and every operation. So in microbenchmarks like this, you can expect JAX to be slower than NumPy. But for real-world sequences of operations wrapped in JIT, you should often find that JAX is faster, even when executing on CPU. This type of question comes up enough that there's a section devoted to it in JAX's FAQ: FAQ: is JAX faster than NumPy? Answering the followup question: Is the statement "In JAX, it is not possible for two array objects to share memory, and so the equivalent operations return copies", within a jitted environment? This question is not really well-formulated, because in a jitted environment, array objects do not necessarily correspond to buffers of values. Let's make this more concrete with a simple example: import jax @jax.jit def f(x): y = x[::2] return y.sum() You might ask: in this program, is y a copy or a view of x? The answer is neither, because y is never explicitly created. Instead, JIT fuses the slice and the sum into a single operation: the array x is the input, and the array y.sum() is the output, and the intermediate array y is never actually created. 
You can see this by printing the compiled HLO for this function: x = jax.numpy.arange(10) print(f.lower(x).compile().as_text()) HloModule jit_f, is_scheduled=true, entry_computation_layout={(s32[10]{0})->s32[]}, allow_spmd_sharding_propagation_to_parameters={true}, allow_spmd_sharding_propagation_to_output={true} %region_0.9 (Arg_0.10: s32[], Arg_1.11: s32[]) -> s32[] { %Arg_0.10 = s32[] parameter(0), metadata={op_name="jit(f)/jit(main)/reduce_sum"} %Arg_1.11 = s32[] parameter(1), metadata={op_name="jit(f)/jit(main)/reduce_sum"} ROOT %add.12 = s32[] add(s32[] %Arg_0.10, s32[] %Arg_1.11), metadata={op_name="jit(f)/jit(main)/reduce_sum" source_file="<ipython-input-1-9ea6c70efef5>" source_line=5} } %fused_computation (param_0.2: s32[10]) -> s32[] { %param_0.2 = s32[10]{0} parameter(0) %iota.0 = s32[5]{0} iota(), iota_dimension=0, metadata={op_name="jit(f)/jit(main)/iota" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %constant.1 = s32[] constant(2) %broadcast.0 = s32[5]{0} broadcast(s32[] %constant.1), dimensions={} %multiply.0 = s32[5]{0} multiply(s32[5]{0} %iota.0, s32[5]{0} %broadcast.0), metadata={op_name="jit(f)/jit(main)/mul" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %bitcast.1 = s32[5,1]{1,0} bitcast(s32[5]{0} %multiply.0), metadata={op_name="jit(f)/jit(main)/mul" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %gather.0 = s32[5]{0} gather(s32[10]{0} %param_0.2, s32[5,1]{1,0} %bitcast.1), offset_dims={}, collapsed_slice_dims={0}, start_index_map={0}, index_vector_dim=1, slice_sizes={1}, indices_are_sorted=true, metadata={op_name="jit(f)/jit(main)/gather" source_file="<ipython-input-1-9ea6c70efef5>" source_line=4} %constant.0 = s32[] constant(0) ROOT %reduce.0 = s32[] reduce(s32[5]{0} %gather.0, s32[] %constant.0), dimensions={0}, to_apply=%region_0.9, metadata={op_name="jit(f)/jit(main)/reduce_sum" source_file="<ipython-input-1-9ea6c70efef5>" source_line=5} } ENTRY %main.14 (Arg_0.1: s32[10]) -> s32[] { %Arg_0.1 = s32[10]{0} parameter(0), metadata={op_name="x"} ROOT %gather_reduce_fusion = s32[] fusion(s32[10]{0} %Arg_0.1), kind=kLoop, calls=%fused_computation, metadata={op_name="jit(f)/jit(main)/reduce_sum" source_file="<ipython-input-1-9ea6c70efef5>" source_line=5} } The output is complicated, but the main thing to look at here is the ENTRY %main section, which is the "main" program generated by compilation. It consists of two steps: %Arg0.1 identifies the input argument, and ROOT %gather_reduce_fusion is essentially a single compiled kernel that sums every second element of the input. No intermediate arrays are generated. The blocks above this (e.g. the %fused_computation (param_0.2: s32[10]) -> s32[] definition) give you information about what operations are done within this kernel, but represent a single fused operation. Notice that the sliced array represented by y in the Python code never actually appears in the main function block, so questions about its memory layout cannot be answered except by saying "y doesn't exist in the compiled program".
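As a concrete illustration of the point about fusion (a sketch that is not part of the original answer; the einsum consumer is an arbitrary stand-in for real downstream work), embedding the broadcast/moveaxis in a larger jitted computation lets XLA avoid ever materializing the (n, 4, 4) intermediate:

import jax
import jax.numpy as jnp
import numpy as np

M = jnp.asarray(np.eye(4))
v = jnp.ones((1_000_000, 4))

@jax.jit
def pipeline(M, v):
    # The broadcast + moveaxis never need to exist as a real (n, 4, 4) buffer:
    # XLA fuses them into the einsum that consumes them.
    batch = jnp.moveaxis(jnp.broadcast_to(M[:, :, None], (4, 4, v.shape[0])), 2, 0)
    return jnp.einsum("nij,nj->ni", batch, v).sum()

print(pipeline(M, v))  # compiled on the first call; later calls pay dispatch only once per call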
3
4
79,618,258
2025-5-12
https://stackoverflow.com/questions/79618258/sns-histplot-does-not-fully-show-the-legend-when-setting-the-legend-outside-the
I tried to create a histogram with a legend outside the axes. Here is my code: import pandas as pd import seaborn as sns df_long = pd.DataFrame({ "Category": ["A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D"], "Round": ["Round1", "Round1", "Round1", "Round1", "Round2", "Round2", "Round2", "Round2", "Round3", "Round3", "Round3", "Round3", "Round4", "Round4", "Round4", "Round4"], "Value": [10, 20, 10, 30, 20, 25, 15, 25, 12, 15, 19, 6, 10, 29, 13, 19] }) ax = sns.histplot(df_long, x="Category", hue="Round", weights="Value", multiple="stack", shrink=.8, ) ax.set_ylabel('Weight') legend = ax.get_legend() legend.set_bbox_to_anchor((1, 1)) It works fine in jupyter notebook: But, if I try to create a png or pdf using matplotlib, the legend is not displayed completely. import matplotlib.pyplot as plt plt.savefig("histogram.png") plt.savefig("histogram.pdf") I've already tried to adjust the size of the graph by using plt.figure(figsize=(4, 4)) and the problem still exist.
The solution is to use bbox_inches = 'tight' in the plt.savefig() function: import matplotlib.pyplot as plt plt.savefig("histogram.png",bbox_inches='tight') plt.savefig("histogram.pdf", bbox_inches='tight')
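A self-contained sketch of the fix with a smaller toy frame (the data here is made up; saving through the figure object is equivalent to calling plt.savefig):

import pandas as pd
import seaborn as sns

df_long = pd.DataFrame({
    "Category": ["A", "B", "A", "B"],
    "Round": ["Round1", "Round1", "Round2", "Round2"],
    "Value": [10, 20, 15, 25],
})
ax = sns.histplot(df_long, x="Category", hue="Round", weights="Value",
                  multiple="stack", shrink=.8)
ax.get_legend().set_bbox_to_anchor((1, 1))
# bbox_inches="tight" expands the saved canvas so the off-axes legend is included
ax.get_figure().savefig("histogram.png", bbox_inches="tight")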
2
3
79,618,176
2025-5-12
https://stackoverflow.com/questions/79618176/matplotlib-plot-continuous-time-series-of-data
I'm trying to continuously plot data received via network using matplotlib. On the y-axis, I want to plot a particular entity, while the x-axis is the current time. The x-axis should cover a fixed period of time, ending with the current time. Here's my current test code, which simulates the data received via network with random numbers. import threading import random import time import signal import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as md class NPData(): def __init__(self, size): self.data = np.zeros((size,2)) # size rows, 2 cols self.size = size self.head = 0 def __len__(self): return self.data.__len__() def __str__(self): return str(self.data) def append(self, data): self.data[self.head] = data self.head = (self.head + 1) % self.size def get_x_range(self): return (self.data.min(axis=0)[0], self.data.max(axis=0)[0]) class Producer(threading.Thread): def __init__(self): super().__init__() random.seed() self.running = True self.data = NPData(100) def get_data(self): return self.data.data def stop(self): self.running = False def run(self): while self.running: now_ms = md.date2num(int(time.time() * 1000)) # ms sample = np.array([now_ms, np.random.randint(0,999)]) self.data.append(sample) time.sleep(0.1) prog_do_run = True def signal_handler(sig, frame): global prog_do_run prog_do_run = False def main(): signal.signal(signal.SIGINT, signal_handler) p = Producer() p.start() fig, ax = plt.subplots() xfmt = md.DateFormatter('%H:%M:%S.%f') ax.xaxis.set_major_formatter(xfmt) #ax.plot(p.get_data()) #ax.set_ylim(0,999) plt.show(block=False) while prog_do_run: x_range = p.data.get_x_range() ax.set_xlim(x_range) #ax.set_ylim(0,999) print(p.get_data()) #ax.plot(p.get_data()) plt.draw() plt.pause(0.05) p.stop() Notes: The Producer class is supposed to emulate data received via network. I've encountered two main issues: I'm struggling to find out what actually needs to be called inside an endless loop in order for matplotlib to continuously update a plot (efficiently). Is it draw(), plot(), pause() or a combination of those? I've been generating milliseconds timestamps and matplotlib seems to not like them at all. The official docs say to use date2num(), which does not work. If I just use int(time.time() * 1000) or round(time.time() * 1000), I get OverflowError: int too big to convert from the formatter.
Basically there are a few small errors. For example, don't call ax.plot() in the loop, because it adds a new line each time, which is inefficient and causes multiple lines to be drawn. I would suggest using a single Line2D object: create it once and then update its data with set_data() inside your loop. Additionally, use fig.canvas.draw_idle() or plt.pause() to refresh; plt.pause() is the simplest for interactive updates. Another error is the date handling: matplotlib expects dates as floats in "days since 0001-01-01 UTC". I would suggest using datetime.datetime.now() and md.date2num() to convert to the correct format. Take into consideration that milliseconds are not directly supported in the tick labels, but you can format them. import threading import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as md import datetime import time import signal class NPData(): def __init__(self, size): self.data = np.zeros((size,2)) self.size = size self.head = 0 self.full = False def append(self, data): self.data[self.head] = data self.head = (self.head + 1) % self.size if self.head == 0: self.full = True def get_data(self): if self.full: return np.vstack((self.data[self.head:], self.data[:self.head])) else: return self.data[:self.head] def get_x_range(self, window_seconds=10): data = self.get_data() if len(data) == 0: now = md.date2num(datetime.datetime.now()) return (now - window_seconds/86400, now) latest = data[-1,0] return (latest - window_seconds/86400, latest) class Producer(threading.Thread): def __init__(self): super().__init__() self.running = True self.data = NPData(1000) def stop(self): self.running = False def run(self): while self.running: now = datetime.datetime.now() now_num = md.date2num(now) sample = np.array([now_num, np.random.randint(0,999)]) self.data.append(sample) time.sleep(0.1) prog_do_run = True def signal_handler(sig, frame): global prog_do_run prog_do_run = False def main(): signal.signal(signal.SIGINT, signal_handler) p = Producer() p.start() fig, ax = plt.subplots() xfmt = md.DateFormatter('%H:%M:%S.%f') ax.xaxis.set_major_formatter(xfmt) line, = ax.plot([], [], 'b-') ax.set_ylim(0, 999) plt.show(block=False) while prog_do_run: data = p.data.get_data() if len(data) > 0: line.set_data(data[:,0], data[:,1]) x_range = p.data.get_x_range(window_seconds=10) ax.set_xlim(x_range) #Keep window fixed to recent data ax.figure.canvas.draw_idle() plt.pause(0.05) p.stop()
2
3
79,615,662
2025-5-10
https://stackoverflow.com/questions/79615662/how-to-replace-all-occurrences-of-a-string-in-python-and-why-str-replace-mi
I want to replace all patterns 0 in a string by 00 in Python. For example, turning: '28 5A 31 34 0 0 0 F0' into '28 5A 31 34 00 00 00 F0'. I tried with str.replace(), but for some reason it misses some "overlapping" patterns: i.e.: $ python3 Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> '28 5A 31 34 0 0 0 F0'.replace(" 0 ", " 00 ") '28 5A 31 34 00 0 00 F0' >>> '28 5A 31 34 0 0 0 F0'.replace(" 0 ", " 00 ").replace(" 0 ", " 00 ") '28 5A 31 34 00 00 00 F0' notice the "middle" 0 pattern that is not replaced by 00. Any idea how I could replace all patterns at once? Of course I can do '28 5A 31 34 0 0 0 F0'.replace(" 0 ", " 00 ").replace(" 0 ", " 00 "), but this is a bit heavy... I actually did not expect this behavior (found it through a bug in my code). In particular, I did not expect this behavior from the documentation at https://docs.python.org/3/library/stdtypes.html#str.replace . Any explanation to why this happens / anything that should have tipped me that this is the expected behavior? It looks like replace does not work with consecutive overlapping repetitions of the pattern, but this was not obvious to me from the documentation? Edit 1: Thanks for the answer(s). The regexp works nicely. Still, I am confused. The official doc linked above says: "Return a copy of the string with all occurrences of substring old replaced by new. If count is given, only the first count occurrences are replaced. If count is not specified or -1, then all occurrences are replaced.". "Clearly" this is not the case? (or am I missing something?).
A better tactic would be to not look for spaces around the individual zeros, but to use regex substitution and look for word boundaries (\b): >>> import re >>> re.sub(r'\b0\b', '00', '28 5A 31 34 0 0 0 F0') '28 5A 31 34 00 00 00 F0' This has the added benefit that a 0 at the start or end of the string would get replaced into 00 as well. If you want the exact same semantics, you could use positive lookbehind and lookahead to not "consume" the space characters: >>> re.sub(r'(?<= )0(?= )', '00', '28 5A 31 34 0 0 0 F0') '28 5A 31 34 00 00 00 F0' The reason why your original attempt does not work is that when str.replace (or re.sub) finds a pattern to be replaced, it moves forward to the next character following the whole match. So: '28 5A 31 34 0 0 0 F0'.replace(' 0 ', ' 00 ') # ^-^ #1 match, ' 0 ' → ' 00 ' # ^ start looking for second match from here # ^-^ #2 match, ' 0 ' → ' 00 ' '28 5A 31 34 00 0 00 F0' # ^--^ ^--^ # #1 #2 The CPython (3.13.3) str.replace implementation can be seen from here: https://github.com/python/cpython/blob/6280bb547840b609feedb78887c6491af75548e8/Objects/unicodeobject.c#L10333, but it's a bit complex with all the Unicode handling. If it would work as you'd "wish", you still wouldn't get the output that you desire, as you'd get extra spaces (each overlapping 0 in the original string would cause its own 00 to appear into the output string): # Hypothetical: '28 5A 31 34 0 0 0 F0'.replace(' 0 ', ' 00 ') # ^-^ #1 match, ' 0 ' → ' 00 ' # ^-^ #2 match, ' 0 ' → ' 00 ' # ^-^ #3 match, ' 0 ' → ' 00 ' '28 5A 31 34 00 00 00 F0' # ^--^^--^^--^ # #1 #2 #3 If it still seems unintuitive why you'd get those extra spaces, consider ABA to be 0 and X__X to be 00 , and look at this: # Analogous to: ' 0 0 0 '.replace(' 0 ', ' 00 ') 'ABABABA'.replace('ABA', 'X__X') 'X__XBX__X' # What you get in reality now. 'X__XX__XX__X' # What you would get with the above logic (=extra consecutive X characters, i.e. spaces). And finally, if it would work like calling replace as many times as there's something to replace does, a trivial 'A'.replace('A', 'AA') would just loop infinitely ('A'→'AA'→'AAAA'→…). So, it just "has" to work this way. This is exactly why regex allows using lookahead and lookbehind to control which matched parts actually consume characters from the original string and which don't.
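If you would rather avoid regex entirely, a token-based rewrite sidesteps the overlapping-separator problem altogether (a sketch, not from the original answer; it assumes the fields are separated by single spaces):

s = '28 5A 31 34 0 0 0 F0'
fixed = ' '.join('00' if token == '0' else token for token in s.split(' '))
print(fixed)  # 28 5A 31 34 00 00 00 F0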
2
7
79,617,903
2025-5-12
https://stackoverflow.com/questions/79617903/renaming-automatic-aggregation-name-for-density-heatmaps-2d-histograms
When creating density heatmaps / 2d histograms, there is an automatic aggregation that can take place, which also sets the name as it appears on the legend. I'm trying to change how that aggregation is displayed on the legend. Consider the following example, taken directly from the plotly docs: import plotly.express as px df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.show() How can I pass a string that will alter the "count" as it appears on the legend? I've tried with .update_layout(legend_title_text = "Test string") but did not manage to get anywhere.
Try setting the title.text property of coloraxis_colorbar inside layout. df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.update_layout(coloraxis_colorbar=dict( title=dict( text="Number of Bills per Cell") ) ) fig.show() You can also define this in a single line using coloraxis_colorbar_title_text. import plotly.express as px df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.update_layout(coloraxis_colorbar_title_text="Number of Bills per Cell") fig.show() Colorscales - Plotly
2
1
79,616,857
2025-5-11
https://stackoverflow.com/questions/79616857/desired-frequency-in-discrete-fourier-transform-gets-shifted-by-the-factor-of-in
I have written a python script to compute DFT of a simple sin wave having frequency 3. I have taken the following consideration for taking sample of the sin wave sin function for test = sin( 2 * pi * 3 * t ) sample_rate = 15 time interval = 1/sample_rate = 1/15 = ~ 0.07 second sample_duration = 1 second (for test1) and 2 seconds (for test 2) sample_size = sample_rate * sample_duration = 15*2 = 30 samples I run the same code for sample_duration both 1 and 2 seconds. When sample duration is 1 second, the graph produce shows the presence of frequency=3 present in the sin wave,which is correct. But if I change the sample duration to 2 second, the graph peaks at frequency= 6, which does not present in the sin wave.But it is a factor of 2 increase of the original frequency (3*2) = 6. And if 3 second is taken as sample duration, graph peaks at 9 second. I was thinking that taking more sample for longer duration will produce finer result, but that is clearly not the case here. code : from sage.all import * import matplotlib.pyplot as plt import numpy as np t = var('t') sample_rate = 15 # will take 100 sample each second interval = 1 / sample_rate # time interval between each reading sample_duration = 1 # take sample over a duration of 1 second sample_size_N = sample_rate*sample_duration #count number of touples in r array, len(r) will give sample size/ total number of sample taken over a specific duration func = sin(3*2*pi*t) time_segment_arr = [] signal_sample_arr= [] # take reading each time interval over sample_duration period for time_segment in np.arange(0,sample_duration,interval): # give discrete value of the signal over specific time interval discrete_signal_value = func(t = time_segment) # push time value into array time_segment_arr.append(time_segment) # push signal amplitude into array signal_sample_arr.append(N(discrete_signal_value)) def construct_discrete_transform_func(): s = '' k = var('k') for n in range(0,sample_size_N,1): s = s+ '+'+str((signal_sample_arr[n])* e^(-(i*2*pi*k*n)/sample_size_N)) return s[1:] #omit the forward + sign dft_func = construct_discrete_transform_func() def calculate_frequency_value(dft_func,freq_val): k = var('k') # SR converts string to sage_symbolic_ring expression & fast_callable() allows to pass variable value to that expression ff = fast_callable(SR(dft_func), vars=[k]) return ff(freq_val) freq_arr = [] amplitude_arr = [] #compute frequency strength per per frequency for l in np.arange(0,sample_size_N,1): freq_value = calculate_frequency_value(dft_func,l) freq_arr.append(l) amplitude_arr.append(N(abs(freq_value)))
Your frequency axis is wrong: the lowest frequency bin on the DFT axis is 1/N in normalized units, which translates to 1/T in the time domain. That is, when the total time is 2 seconds, the first point after zero should be at 0.5 Hz, not 1 Hz. The longest sine wave a DFT can represent (the lowest frequency) is a sine wave that does 1 cycle over the entire duration (k = 1). You can get 1/T by substituting t = n * Ts (where Ts is the sample interval) and Ts = T/N (where T is the total time), which gives f = k / T, so the lowest frequency k = 1 translates to f = 1 / T. In short, this is just a plotting error in the not-shown plotting code: divide the bin index k by the total duration T (equivalently, multiply by sample_rate/N) to label the axis in Hz.
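To illustrate the axis correction (a sketch using NumPy's FFT instead of the symbolic DFT in the question), bin k corresponds to k / sample_duration in Hz, which np.fft.rfftfreq computes directly:

import numpy as np

sample_rate = 15
sample_duration = 2                              # seconds
N = sample_rate * sample_duration                # 30 samples
t = np.arange(N) / sample_rate
signal = np.sin(2 * np.pi * 3 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N, d=1 / sample_rate)    # bin k -> k / sample_duration Hz

print(freqs[np.argmax(spectrum)])                # 3.0, regardless of sample_duration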
2
2
79,616,449
2025-5-11
https://stackoverflow.com/questions/79616449/how-do-i-do-a-specific-aggregation-on-a-table-based-on-row-column-values-on-anot
I have loaded two fact tables CDI and Population and a couple dimension tables in DuckDB. I did joins on the CDI fact table and its respective dimension tables which yields a snippet of the table below And below is the Population fact table merged with its other dimension tables yielding this snippet below Now what I want to basically do is filter out the Population table based only on the values of this particular row of the CDI table. In this case the current row outlined in green will somehow do this query SELECT Year, SUM(Population) AS TotalPopulation FROM Population WHERE (Year BETWEEN 2018 AND 2018) AND (Age BETWEEN 18 AND 85) AND State = 'Pennsylvania' AND Sex IN ('Male', 'Female') AND Ethnicity IN ('Multiracial') AND Origin IN ('Not Hispanic') GROUP BY Year ORDER BY Year ASC This query aggregates the Population column values based on the row values of the CDI table. What I'm just at a loss in trying to implement is doing this aggregation operation for all row values in the CDI table. Here is a full visualization of what I'm trying to do. How would I implement this type of varying filtering aggregation based on each row column values of the CDI table? I'm using DuckDB as the OLAP DB here so ANSI SQL is what I'm trying to use to implement this task. Could it be possible only using this kind of SQL?
I agree with Chris Maurer's comment; here is a SQL query to achieve what you are looking for: SELECT YearStart, YearEnd, LocationDesc, AgeStart, AgeEnd, Sex, Ethnicity, Origin, SUM(Population) AS TotalPopulation FROM CDI LEFT JOIN Population AS pop ON (pop.Year BETWEEN CDI.YearStart AND CDI.YearEnd) AND (CDI.Sex=pop.Sex OR CDI.Sex='both') AND (pop.Age BETWEEN CDI.AgeStart AND (CASE WHEN CDI.AgeEnd='infinity' THEN 1000 ELSE CDI.AgeEnd END)) AND (CDI.LocationDesc = pop.State) AND (CDI.Ethnicity=pop.Ethnicity OR CDI.Ethnicity='All') AND (CDI.Origin=pop.Origin OR CDI.Origin='Both') GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 9 DESC Hope this helps.
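If the tables live in DuckDB and you are driving it from Python, a minimal sketch of running such a query with the duckdb package (the database file name is hypothetical and the join is trimmed to a few of the conditions above):

import duckdb

con = duckdb.connect("warehouse.duckdb")   # hypothetical database file
query = """
SELECT CDI.YearStart, CDI.YearEnd, CDI.LocationDesc,
       SUM(pop.Population) AS TotalPopulation
FROM CDI
LEFT JOIN Population AS pop
  ON pop.Year BETWEEN CDI.YearStart AND CDI.YearEnd
 AND CDI.LocationDesc = pop.State
GROUP BY ALL
ORDER BY TotalPopulation DESC
"""
result_df = con.sql(query).df()            # materialize the result as a pandas DataFrame
print(result_df.head())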
2
1
79,616,550
2025-5-11
https://stackoverflow.com/questions/79616550/selenium-4-25-opens-chrome-136-with-existing-profile-to-new-tab-instead-of-nav
I'm using Python with Selenium 4.25.0 to automate Google Chrome 136. My goal is to have Selenium use my existing, logged-in "Default" Chrome profile to navigate to a specific URL (https://aistudio.google.com/prompts/new_chat) and perform actions. The Problem: When I execute my script: Chrome launches successfully. It clearly loads my "Default" profile, as I see my personalized "New Tab" page (title: "ζ–°εˆ†ι ", URL: chrome://newtab/) with my usual shortcuts, bookmarks bar, and theme. This confirms the --user-data-dir and --profile-directory arguments are pointing to the correct profile. However, the subsequent driver.get("https://aistudio.google.com/prompts/new_chat") command does not navigate the browser. The browser remains on the chrome://newtab/ page. I am very diligent about ensuring all chrome.exe processes are terminated (checked via Task Manager) before running the script. Environment: OS: Windows 11 Pro, Version 23H2 Python Version: 3.12.x Selenium Version: 4.25.0 Chrome Browser Version: 136.0.7103.93 (Official Build) (64-bit) ChromeDriver Version: 136.0.7103.xx (downloaded from official "Chrome for Testing" site for win64, matching Chrome's major.minor.build) Simplified Code Snippet: import time import os # For path checks from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.common.exceptions import WebDriverException # --- Configuration --- # User needs to replace CHROME_DRIVER_PATH with the full path to their chromedriver.exe CHROME_DRIVER_PATH = r'C:\path\to\your\chromedriver-win64\chromedriver.exe' # User needs to replace YourUserName with their actual Windows username CHROME_PROFILE_USER_DATA_DIR = r'C:\Users\YourUserName\AppData\Local\Google\Chrome\User Data' CHROME_PROFILE_DIRECTORY_NAME = "Default" # Using the standard default profile TARGET_URL = "https://aistudio.google.com/prompts/new_chat" # Example def setup_driver(): print(f"Driver Path: {CHROME_DRIVER_PATH}") print(f"User Data Dir: {CHROME_PROFILE_USER_DATA_DIR}") print(f"Profile Dir Name: {CHROME_PROFILE_DIRECTORY_NAME}") if not os.path.exists(CHROME_DRIVER_PATH): print(f"FATAL: ChromeDriver not found at '{CHROME_DRIVER_PATH}'.") return None if not os.path.isdir(CHROME_PROFILE_USER_DATA_DIR): print(f"FATAL: Chrome User Data dir not found at '{CHROME_PROFILE_USER_DATA_DIR}'.") return None chrome_options = Options() chrome_options.add_argument(f"--user-data-dir={CHROME_PROFILE_USER_DATA_DIR}") chrome_options.add_argument(f"--profile-directory={CHROME_PROFILE_DIRECTORY_NAME}") chrome_options.add_experimental_option("excludeSwitches", ["enable-automation", "load-extension"]) chrome_options.add_experimental_option('useAutomationExtension', False) # chrome_options.add_argument("--disable-blink-features=AutomationControlled") # Tried with and without chrome_options.add_argument("--start-maximized") try: service = Service(executable_path=CHROME_DRIVER_PATH) driver = webdriver.Chrome(service=service, options=chrome_options) print("WebDriver initialized.") return driver except WebDriverException as e: print(f"FATAL WebDriverException during setup: {e}") if "user data directory is already in use" in str(e).lower(): print(">>> Ensure ALL Chrome instances are closed via Task Manager.") return None except Exception as e_setup: print(f"Unexpected FATAL error during setup: {e_setup}") return None def main(): print("IMPORTANT: Ensure ALL Google Chrome instances are FULLY CLOSED before running this script.") input("Press Enter to 
confirm and continue...") driver = setup_driver() if not driver: print("Driver setup failed. Exiting.") return try: print(f"Browser launched. Waiting a few seconds for it to settle...") print(f"Initial URL: '{driver.current_url}', Initial Title: '{driver.title}'") time.sleep(4) # Increased wait after launch for profile to fully 'settle' print(f"Attempting to navigate to: {TARGET_URL}") driver.get(TARGET_URL) print(f"Called driver.get(). Waiting for navigation...") time.sleep(7) # Increased wait after .get() for navigation attempt current_url_after_get = driver.current_url current_title_after_get = driver.title print(f"After 7s wait - Current URL: '{current_url_after_get}', Title: '{current_title_after_get}'") if TARGET_URL not in current_url_after_get: print(f"NAVIGATION FAILED: Browser did not navigate to '{TARGET_URL}'. It's still on '{current_url_after_get}'.") # Could also try JavaScript navigation here for more info # print("Attempting JavaScript navigation as a fallback test...") # driver.execute_script(f"window.location.href='{TARGET_URL}';") # time.sleep(7) # print(f"After JS nav attempt - URL: '{driver.current_url}', Title: '{driver.title}'") else: print(f"NAVIGATION SUCCESSFUL to: {current_url_after_get}") except Exception as e: print(f"An error occurred during main execution: {e}") finally: print("Script execution finished or errored.") input("Browser will remain open for inspection. Press Enter to close...") if driver: driver.quit() if __name__ == "__main__": # Remind user to update paths if placeholders are detected if r"C:\path\to\your\chromedriver-win64\chromedriver.exe" == CHROME_DRIVER_PATH or \ r"C:\Users\YourUserName\AppData\Local\Google\Chrome\User Data" == CHROME_PROFILE_USER_DATA_DIR: print("ERROR: Default placeholder paths are still in the script.") print("Please update CHROME_DRIVER_PATH and CHROME_PROFILE_USER_DATA_DIR with your actual system paths.") else: main() Console Output (when it gets stuck on New Tab): Setting up Chrome driver from: C:\Users\stat\Downloads\chromedriver-win64\chromedriver-win64\chromedriver.exe Attempting to use Chrome User Data directory: C:\Users\stat\AppData\Local\Google\Chrome\User Data Attempting to use Chrome Profile directory name: Default WebDriver initialized successfully. Browser launched. Waiting a few seconds for it to settle... Initial URL: 'chrome://newtab/', Initial Title: 'ζ–°εˆ†ι ' DevTools remote debugging requires a non-default data directory. Specify this using --user-data-dir. [... some GCM / fm_registration_token_uploader errors may appear here ...] Attempting to navigate to: https://aistudio.google.com/prompts/new_chat Called driver.get(). Waiting for navigation... After 7s wait - Current URL: 'chrome://newtab/', Title: 'ζ–°εˆ†ι ' NAVIGATION FAILED: Browser did not navigate to 'https://aistudio.google.com/prompts/new_chat'. It's still on 'chrome://newtab/'. What I've Already Verified/Tried: ChromeDriver version precisely matches the Chrome browser's major.minor.build version (136.0.7103). All chrome.exe processes are terminated via Task Manager before script execution. The paths for CHROME_DRIVER_PATH, CHROME_PROFILE_USER_DATA_DIR, and CHROME_PROFILE_DIRECTORY_NAME are correct for my system. The browser visibly loads my "Default" profile (shows my theme, new tab page shortcuts). Tried various time.sleep() delays. The "DevTools remote debugging requires a non-default data directory" warning appears, as do some GCM errors, but the browser itself opens with the profile. 
My Question: Given that Selenium successfully launches Chrome using my specified "Default" profile (as evidenced by my personalized New Tab page loading), why would driver.get() fail to navigate away from chrome://newtab/? Are there specific Chrome options for Selenium 4.25+ or known issues with Chrome 136 that could cause this behavior when using an existing, rich user profile, even when Chrome is fully closed beforehand? How can I reliably make driver.get() take precedence over the default New Tab page loading in this scenario?
The root cause could be that ChromeDriver (≥ v113, with "Chrome for Testing") intentionally limits automation on "default" or regular profiles for security and stability. This is reflected in the warning: "DevTools remote debugging requires a non-default data directory" This means: ChromeDriver can't fully control Chrome if you use --user-data-dir pointing to Chrome's real profile directory. Navigation via driver.get() fails silently or gets overridden by the "New Tab" logic in Chrome itself. Even though Chrome opens with the correct theme/profile, the DevTools protocol is not fully attached, so driver.get() doesn't execute properly. To fix this issue, you can use a dedicated test profile instead of Default. Create a copy of your "Default" profile into a separate folder (e.g., ChromeProfileForSelenium) and point --user-data-dir there without specifying the --profile-directory. Create the copy: mkdir "C:\SeleniumChromeProfile" xcopy /E /I "%LOCALAPPDATA%\Google\Chrome\User Data\Default" "C:\SeleniumChromeProfile\Default" and then update your script: chrome_options.add_argument("--user-data-dir=C:\\SeleniumChromeProfile") # Omit this line: chrome_options.add_argument("--profile-directory=Default") This avoids Chrome's protections around Default, and lets driver.get() work as expected.
2
1
79,616,310
2025-5-11
https://stackoverflow.com/questions/79616310/firebase-admin-taking-an-infinite-time-to-work
I recently started using firebase admin in python. I created this example script: import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate("./services.json") options = { "databaseURL": 'https://not_revealing_my_url.com' } app = firebase_admin.initialize_app(cred, options) client = firestore.client(app) print(client.document("/").get()) I already activated firestore and I placed services.json (which I genrated from "Service Accounts" on my firebase project) in the same directory as my main.py file. From all sources I could find, this should've allowed me to use firestore, but for some reason the app takes an infinite long time to respond. I tried looking through the stack after Interrupting the script, and the only major thing I could find was: grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B::1%5D:8081: tcp handshaker shutdown" debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B::1%5D:8081: tcp handshaker shutdown", grpc_status:14, created_time:"2025-05-11T08:47:32.8676384+00:00"}" I am assuming this is a common issue, but I failed to find any solution online, can someone help me out? EDIT: I had firebase emulator working from a previous job, It seems firebase_admin tried using firebase emulator which was inactive. I just had to remove it from my PATH
Yes, you're on the right track with setting up Firebase Admin in Python. The error you're seeing: grpc._channel._MultiThreadedRendezvous: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:[::1]:8081: tcp handshaker shutdown" strongly suggests that the client is trying to connect to Firestore Emulator, not the actual Firestore production database. Root Cause This specific port 8081 and the loopback address ([::1]) are the default for the Firestore Emulator, not the production Firestore service. So your environment is likely set to use the emulator without the emulator actually running. Fix You likely have one of the following environment variables set (either globally or in your shell/session): FIRESTORE_EMULATOR_HOST FIREBASE_FIRESTORE_EMULATOR_ADDRESS If either of these is set, the SDK will try to connect to a local emulator instead of the actual Firestore instance. Solution Steps Check and Unset the Environment Variable(s): In your terminal (Linux/macOS): unset FIRESTORE_EMULATOR_HOST unset FIREBASE_FIRESTORE_EMULATOR_ADDRESS In PowerShell (Windows): Remove-Item Env:FIRESTORE_EMULATOR_HOST Remove-Item Env:FIREBASE_FIRESTORE_EMULATOR_ADDRESS Or, if these are set in your IDE or .env file, remove them from there. Restart Your Application after unsetting the variables. Verify you're not connecting to the emulator: In your Python script, you should not manually configure emulator settings unless you're developing against the emulator. Your firebase_admin.initialize_app() call is correct for connecting to the live Firestore. Also: Your Script Has a Small Issue This line: print(client.document("/").get()) Is not valid Firestore usage. client.document("/") is not a valid document path. Firestore document paths must include both collection and document ID. E.g.: doc_ref = client.document("test_collection/test_document") print(doc_ref.get().to_dict())
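If you want to guard against this inside the script itself rather than in the shell, a small sketch (it clears the emulator variables named above for the current process only; the document path is the placeholder used earlier in this answer):

import os

# Make sure the Admin SDK does not try to reach a local emulator that is not running.
for var in ("FIRESTORE_EMULATOR_HOST", "FIREBASE_FIRESTORE_EMULATOR_ADDRESS"):
    removed = os.environ.pop(var, None)
    if removed:
        print(f"Cleared {var} (was {removed}) for this process")

import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("./services.json")
app = firebase_admin.initialize_app(cred)
client = firestore.client(app)
print(client.document("test_collection/test_document").get().to_dict())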
1
2
79,610,568
2025-5-7
https://stackoverflow.com/questions/79610568/store-numpy-array-in-pandas-dataframe
I want to store a numpy array in pandas cell. This does not work: import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = ["val", "unit"]) df.loc["bnd"] = [bnd1, "N/A"] df.loc["bnd"] = [bnd2, "N/A"] But this does: import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = ["val"]) df.loc["bnd"] = [bnd1] df.loc["bnd"] = [bnd2] Can someone explain why, and what's the solution? Edit: The first returns: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. The complete traceback is below: > --------------------------------------------------------------------------- AttributeError Traceback (most recent call > last) File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3185, > in ndim(a) 3184 try: > -> 3185 return a.ndim 3186 except AttributeError: > > AttributeError: 'list' object has no attribute 'ndim' > > During handling of the above exception, another exception occurred: > > ValueError Traceback (most recent call > last) Cell In[10], line 8 > 6 df = pd.DataFrame(columns = ["val", "unit"]) > 7 df.loc["bnd"] = [bnd1, "N/A"] > ----> 8 df.loc["bnd"] = [bnd2, "N/A"] > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:849, > in _LocationIndexer.__setitem__(self, key, value) > 846 self._has_valid_setitem_indexer(key) > 848 iloc = self if self.name == "iloc" else self.obj.iloc > --> 849 iloc._setitem_with_indexer(indexer, value, self.name) > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:1835, > in _iLocIndexer._setitem_with_indexer(self, indexer, value, name) > 1832 # align and set the values 1833 if take_split_path: 1834 > # We have to operate column-wise > -> 1835 self._setitem_with_indexer_split_path(indexer, value, name) 1836 else: 1837 self._setitem_single_block(indexer, > value, name) > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/pandas/core/indexing.py:1872, > in _iLocIndexer._setitem_with_indexer_split_path(self, indexer, value, > name) 1869 if isinstance(value, ABCDataFrame): 1870 > self._setitem_with_indexer_frame_value(indexer, value, name) > -> 1872 elif np.ndim(value) == 2: 1873 # TODO: avoid np.ndim call in case it isn't an ndarray, since 1874 # that will > construct an ndarray, which will be wasteful 1875 > self._setitem_with_indexer_2d_value(indexer, value) 1877 elif > len(ilocs) == 1 and lplane_indexer == len(value) and not > is_scalar(pi): 1878 # We are setting multiple rows in a single > column. > > File <__array_function__ internals>:200, in ndim(*args, **kwargs) > > File > ~/anaconda3/envs/py38mats/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3187, > in ndim(a) 3185 return a.ndim 3186 except AttributeError: > -> 3187 return asarray(a).ndim > > ValueError: setting an array element with a sequence. The requested > array has an inhomogeneous shape after 1 dimensions. The detected > shape was (2,) + inhomogeneous part. I'm using pandas 2.0.3 and numpy 1.24.4
The issue is that when you assign a plain list like [bnd2, "N/A"] to a row of a two-column DataFrame, pandas tries to coerce the whole list into a single NumPy array in order to split it across the columns, and mixing a (74, 8) array with a string is what produces the "inhomogeneous shape" error. To fix this, pass something that is already labelled per column, such as a pd.Series or a dictionary: first way: Using pd.Series: df.loc["bnd"] = pd.Series([bnd2, "N/A"], index=["val", "unit"]) OR second way: Using dictionary: df.loc["bnd"] = {"val": bnd2, "unit": "N/A"} good luck mate
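For completeness, a runnable sketch of the fix described above, reusing the names from the question (the dictionary variant is wrapped in a pd.Series here so the column alignment is explicit):

import numpy as np
import pandas as pd

bnd1 = np.random.rand(74, 8)
bnd2 = np.random.rand(74, 8)

df = pd.DataFrame(columns=["val", "unit"])
# Labelling the values per column stops pandas from trying to coerce the pair
# (array, string) into a single homogeneous NumPy array.
df.loc["bnd"] = pd.Series([bnd1, "N/A"], index=["val", "unit"])
df.loc["bnd"] = pd.Series({"val": bnd2, "unit": "N/A"})   # dict-style, same effect

print(df.loc["bnd", "val"].shape)  # (74, 8)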
2
1
79,616,218
2025-5-11
https://stackoverflow.com/questions/79616218/typeerror-sequence-item-0-expected-str-instance-int-found-what-should-i-do-t
matrix1=[[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]] m2="\n".join(["\t".join([ritem for ritem in item]) for item in matrix1]) print(m2) Where am I wrong that I receive this error?
The values you try to join with str.join must be strings themselves. You're trying to join ints and this is causing the error you're seeing. You want: m2 = "\n".join(["\t".join([str(ritem) for ritem in item]) for item in matrix1]) Note that you can pass any iterable, and not just a list, so you can remove some extraneous [ and ] to create generator expressions rather than list comprehensions. "\n".join("\t".join(str(ritem) for ritem in item) for item in matrix1) Or even just: "\n".join("\t".join(map(str, item)) for item in matrix1)
1
2
79,615,098
2025-5-10
https://stackoverflow.com/questions/79615098/is-there-simpler-way-to-get-all-nested-text-inside-of-elementtree
I am currently using the xml.etree Python library to parse HTML. After finding a target DOM element, I am attempting to extract its text. Unfortunately, it seems that the .text attribute is severely limited in its functionality and will only return the immediate inner text of an element (and not anything nested). Do I really have to loop through all the children of the ElementTree? Or is there a more elegant solution?
You can use itertext(), too. If you don’t like the extra whitespace, indentation and line breaks, you can clean them up with split() and join(), as in the example below. import xml.etree.ElementTree as ET html = """<html> <head> <title>Example page</title> </head> <body> <p>Moved to <a href="http://example.org/">example.org</a> or <a href="http://example.com/">example.com</a>.</p> </body> </html>""" root = ET.fromstring(html) target_element = root.find(".//body") # get all text all_text = ''.join(target_element.itertext()) # get all text and remove line breaks etc. all_text_clear = ' '.join(all_text.split()) print(all_text) print(all_text_clear) Output: Moved to example.org or example.com. Moved to example.org or example.com.
1
1
79,615,560
2025-5-10
https://stackoverflow.com/questions/79615560/how-to-select-save-rows-with-multiple-same-value-in-pandas
I have financial data where I need to save / find rows that have multiple same value and a condition where the same value happened more than / = 2 and not (value)equal to 0 or < 1. Say I have this: A B C D E F G H I 5/7/2025 21:00 0 0 0 0 0 0 0 0 5/7/2025 21:15 0 0 19598.8 0 19598.8 0 0 0 5/7/2025 21:30 0 0 0 0 0 0 0 0 5/7/2025 21:45 0 0 0 19823.35 0 0 0 0 5/7/2025 22:00 0 0 0 0 0 0 0 0 5/7/2025 22:15 0 0 0 0 0 0 0 0 5/7/2025 22:30 0 0 0 19975.95 0 19975.95 0 19975.95 5/7/2025 23:45 0 0 0 0 0 0 0 0 5/8/2025 1:00 0 0 19830.2 0 0 0 0 0 5/8/2025 1:15 0 0 0 0 0 0 0 0 5/8/2025 1:30 0 0 0 0 0 0 0 0 5/8/2025 1:45 0 0 0 0 0 0 0 0 I want this along with other datas in those rows: A B C D E F G H I 5/7/2025 21:15 0 0 19598.8 0 19598.8 0 0 0 5/7/2025 22:30 0 0 0 19975.95 0 19975.95 0 19975.95
A simple approach could be to select the columns of interest, then identify if any value is duplicated within a row. Then select the matching rows with boolean indexing: mask = df.loc[:, 'B':].T out = df[mask.apply(lambda x: x.duplicated(keep=False)).where(mask >= 1).any()] A potentially more efficient approach could be to use numpy. Select the values, mask the values below 1, sort them and identify if any 2 are identical in a row with diff + isclose: mask = df.loc[:, 'B':].where(lambda x: x>=1).values mask.sort() out = df[np.isclose(np.diff(mask), 0).any(axis=1)] Output: A B C D E F G H I 1 5/7/2025 21:15 0 0 19598.8 0.00 19598.8 0.00 0 0.00 6 5/7/2025 22:30 0 0 0.0 19975.95 0.0 19975.95 0 19975.95
2
0
79,615,397
2025-5-10
https://stackoverflow.com/questions/79615397/how-to-locate-elements-simultaneously
By nature, Playwright locator is blocking, so whenever it's trying to locate for an element X, it stops and waits until that element is located or it times out. However, I want to see if it is possible to make it so that it locates two elements at once, and, if either one is found, proceed forward, based on whichever was found. Is something like that possible in Python Playwright? Thanks
or_​ Added in: v1.33 Creates a locator matching all elements that match one or both of the two locators. Note that when both locators match something, the resulting locator will have multiple matches, potentially causing a locator strictness violation. Usage Consider a scenario where you'd like to click a "New email" button, but sometimes a security settings dialog appears instead. In this case, you can wait for either a "New email" button or a dialog and act accordingly. note If both "New email" button and security dialog appear on screen, the "or" locator will match both of them, possibly throwing the "strict mode violation" error. In this case, you can use locator.first to only match one of them. new_email = page.get_by_role("button", name="New") dialog = page.get_by_text("Confirm security settings") expect(new_email.or_(dialog).first).to_be_visible() if (dialog.is_visible()): page.get_by_role("button", name="Dismiss").click() new_email.click()
3
4
79,615,284
2025-5-10
https://stackoverflow.com/questions/79615284/how-to-remove-duplicates-from-this-nested-dataframe
I have a dataframe as below and I want remove the duplicates and want the output as mentioned below. Tried few things but not working as expected. New to pandas. import pandas as pd # Sample DataFrame data = { "some_id": "xxx", "some_email": "[email protected]", "This is Sample": [ { "a": "22", "b": "Y", "c": "33", "d": "x" }, { "a": "44", "b": "N", "c": "55", "d": "Y" }, { "a": "22", "b": "Y", "c": "33", "d": "x" }, { "a": "44", "b": "N", "c": "55", "d": "Y" }, { "a": "22", "b": "Y", "c": "33", "d": "x" }, { "a": "44", "b": "N", "c": "55", "d": "Y" } ] } df = pd.DataFrame(data) print(df) The output is some_id some_email This is Sample 0 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 1 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} 2 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 3 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} 4 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 5 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} I want to remove duplicates and the output should look like some_id some_email This is Sample 0 xxx [email protected] {'a': '22', 'b': 'Y', 'c': '33', 'd': 'x'} 1 xxx [email protected] {'a': '44', 'b': 'N', 'c': '55', 'd': 'Y'} How can this be achieved? I tried multiple ways some times it fails with unhashable dict. I have pretty big nested data frame like this. I am using pandas dataframe and python. New to this technology
The issue you're encountering (e.g., unhashable type: 'dict') happens because dictionaries are mutable and unhashable, so drop_duplicates() doesn't work directly on them. To deduplicate rows where one of the columns contains dictionaries, you can: Convert dictionaries to strings, use drop_duplicates(), then Convert the strings back to dictionaries (if needed). Here’s a clean and simple way to achieve your desired output: https://code.livegap.com/?st=a50pbcrjkjk
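A short sketch of the string-round-trip approach described above, reusing the df built in the question (json.dumps with sort_keys=True is one way to turn each dict into a hashable, order-stable key):

import json

# 'df' is the DataFrame from the question; its "This is Sample" column holds dicts.
key = df["This is Sample"].map(lambda d: json.dumps(d, sort_keys=True))
deduped = df.loc[~key.duplicated()].reset_index(drop=True)
print(deduped)   # keeps the first occurrence of each distinct dict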
2
1
79,614,976
2025-5-9
https://stackoverflow.com/questions/79614976/does-file-obj-close-nicely-close-file-objects-in-other-modules-that-have-been
I have a file main_file.py that creates a global variable file_obj by opening a text file and imports a module imported_module.py which has functions that write to this file and therefore also has a global variable file_obj which I set equal to file_obj in main_file.py: main_file.py import imported_module as im file_obj = open('text_file.txt', mode='w') im.file_obj = file_obj def main(): a = 5 b = 7 im.add_func(a, b) im.multiply_func(a, b) return def add_func(x, y): z = x + y file_obj.write(str(z) + '\n') return main() file_obj.close() imported_module.py file_obj = None def multiply_func(x, y): z = x * y file_obj.write(str(z) + '\n') return If I close file_obj in main_file.py as above, does this also nicely close file_obj in imported_module.py? (In the MRE above, I could add im.file_obj.close() to main_file.py just to be sure. However, a generalization of this explicit approach does not appear possible if imported_module.py imports a second module imported_module0.py which also has a global variable file_obj and sets this variable to its own copy of file_obj with a command like im0.file_obj = file_obj.)
Yes. The two variables refer to the same file object. Closing either closes the object itself, it doesn't matter which variable you use to refer to it. This is no different from having two variable referring to the same list, a modification of one is visible through the other: a = [1, 2, 3] b = a a.append(4) print(b) will print [1, 2, 3, 4]
1
1
79,613,844
2025-5-9
https://stackoverflow.com/questions/79613844/tkinter-widget-not-appearing-on-form
I’m having trouble working out why a widget doesn’t appear on my tkinter form. Here is what I’m doing: Create a form Create a widget (a label) with the form as the master. Create a Notebook and Frame and add them to the form. Create additional widgets with the form as the master. Add the widgets to the form using grid, specifying the in_ parameter. Any widgets I create before the notebook and frame don’t appear, even though I don’t add them till after they’ve been created. Here is some sample code: form = tkinter.Tk() label1 = tkinter.ttk.Label(form, text='Test Label 1') # This one doesn’t appear notebook = tkinter.ttk.Notebook(form) notebook.pack(expand=True) mainframe = tkinter.ttk.Frame(notebook, padding='13 3 12 12') notebook.add(mainframe, text='Test Page') label2 = tkinter.ttk.Label(form, text='Test Label 2') # This one works entry = tkinter.ttk.Entry(form) label1.grid(in_=mainframe, row=1, column=1) label2.grid(in_=mainframe, row=2, column=1) entry.grid(in_=mainframe, row=3, column=1) form.mainloop() Note that label1 doesn’t appear even though there is a space for it. If I print(id(form)) before and after creating the notebook and frame, they are the same, so it’s not as if the form itself has changed. Where has that first widget gone to and how can I get it to appear?
The behavior has to do with stacking order. Widgets created before the notebook are lower in the stacking order. In effect it is behind the notebook. You'll you correctly observed that a row has been allocated for the widget, but since it's behind the notebook it isn't visible. You can make it appear by calling lift on the widget to raise the stacking order: label1.lift()
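For example, in the question's code it could go right after the grid calls (sketch; any point after the widgets exist works):
label1.grid(in_=mainframe, row=1, column=1)
label1.lift()   # raise label1 above its sibling, the notebook, so it becomes visible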
1
3
79,614,850
2025-5-9
https://stackoverflow.com/questions/79614850/how-to-replace-string-values-in-a-strict-way-in-polars
I'm working with a Polars DataFrame that contains a column with string values. I aim to replace specific values in this column using the str.replace_many() method. My dataframe: import polars as pl df = (pl.DataFrame({"Products": ["cell_foo","cell_fooFlex","cell_fooPro"]})) Current approach: mapping= { "cell_foo" : "cell", "cell_fooFlex" : "cell", "cell_fooPro": "cell" } (df.with_columns(pl.col("Products").str.replace_many(mapping ).alias("Replaced"))) Output: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Products ┆ Replaced β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ cell_foo ┆ cell β”‚ β”‚ cell_fooFlex ┆ cellFlex β”‚ β”‚ cell_fooPro ┆ cellPro β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Desired Output: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Products ┆ Replaced β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ cell_foo ┆ cell β”‚ β”‚ cell_fooFlex ┆ cell β”‚ β”‚ cell_fooPro ┆ cell β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ How can I modify my approach to ensure that replacements occur only when the entire string matches a key in the mapping?
The top-level Expr.replace() and .replace_strict() are for replacing entire "values". df.with_columns(pl.col("Products").replace(mapping).alias("Replaced")) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Products ┆ Replaced β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ cell_foo ┆ cell β”‚ β”‚ cell_fooFlex ┆ cell β”‚ β”‚ cell_fooPro ┆ cell β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
1
79,614,700
2025-5-9
https://stackoverflow.com/questions/79614700/how-to-display-years-on-the-the-y-axis-of-horizontal-bar-chart-subplot-when-th
I'm plotting date vs frequency horizontal bar charts that compares the monthly distribution pattern over time for a selection of crimes as subplots. The problem is the tick labels of the y-axis, which represents the date, display all the months over period of 2006-2023. I want to instead display the year whilst preserving the monthly count of the plot. Basically change the scale from month to year without changing the data being plotted. Here's a sample of my code below: Dataset: https://drive.google.com/file/d/11MM-Vao6_tHGTRMsLthoMGgtziok67qc/view?usp=sharing import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates df = pd.read_csv('NYPD_Arrests_Data__Historic__20250113_111.csv') df['ARREST_DATE'] = pd.to_datetime(df['ARREST_DATE'], format = '%m/%d/%Y') df['ARREST_MONTH'] = df['ARREST_DATE'].dt.to_period('M').dt.to_timestamp() # crimes, attributes and renames crimes = ['DANGEROUS DRUGS', 'DANGEROUS WEAPONS', 'ASSAULT 3 & RELATED OFFENSES', 'FELONY ASSAULT'] attributes = ['PERP_RACE'] titles = ['Race'] # loops plot creation over each attribute for attr, title in zip(attributes, titles): fig, axes = plt.subplots(1, len(crimes), figsize = (4 * len(crimes), 6), sharey = 'row') for i, crime in enumerate(crimes): ax = axes[i] crime_df = df[df['OFNS_DESC'] == crime] pivot = pd.crosstab(crime_df['ARREST_MONTH'], crime_df[attr]) # plots stacked horizontal bars pivot.plot(kind = 'barh', stacked = True, ax = ax, width = 0.9, legend = False) ax.set_title(crime) ax.set_xlabel('Frequency') ax.set_ylabel('Month' if i == 0 else '') # shows the y-axis only on first plot ax.xaxis.set_tick_params(labelsize = 8) ax.set_yticks(ax.get_yticks()) # adds one common legend accoss plots handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, title = title, loc = 'upper center', ncol = len(df[attr].unique()), bbox_to_anchor = (0.5, 0.94)) fig.suptitle(f'Crime Frequency Distribution by Year and {title}', fontsize = 20) plt.tight_layout(rect = [0, 0, 1, 0.90]) plt.show() Here's an image of what I currently see.
pandas makes the assumption that the major axis of a bar-chart is always categorical, and therefore converts your values to strings prior to plotting. This means that it forces matplotlib to render a label for every bar you have. The way to do this with minimal changes to your code would be to manually override the yticklabels with your own custom ones. You can create a Series that contains the year (as a string) whenever the year in the current row is different than that of the next row. Then fill in empty strings for the other case when the year of the current row is the same as the next row. import pandas as pd s = pd.Series([2000, 2001, 2002, 2003]).repeat(3) print( pd.DataFrame({ 'orig': s, 'filtered': s.pipe(lambda s: s.astype('string').where(s != s.shift(), '')) }) ) # orig filtered # 0 2000 2000 # 0 2000 # 0 2000 # 1 2001 2001 # 1 2001 # 1 2001 # 2 2002 2002 # 2 2002 # 2 2002 # 3 2003 2003 # 3 2003 # 3 2003 Putting this into action in your code would look like: import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates df = pd.read_csv('NYPD_Arrests_Data__Historic__20250113_111.csv') df['ARREST_DATE'] = pd.to_datetime(df['ARREST_DATE'], format = '%m/%d/%Y') df['ARREST_MONTH'] = df['ARREST_DATE'].dt.to_period('M').dt.to_timestamp() # crimes, attributes and renames crimes = ['DANGEROUS DRUGS', 'DANGEROUS WEAPONS', 'ASSAULT 3 & RELATED OFFENSES', 'FELONY ASSAULT'] attributes = ['PERP_RACE'] titles = ['Race'] # loops plot creation over each attribute for attr, title in zip(attributes, titles): fig, axes = plt.subplots(1, len(crimes), figsize = (4 * len(crimes), 6), sharey = 'row') for i, crime in enumerate(crimes): ax = axes[i] crime_df = df[df['OFNS_DESC'] == crime] pivot = pd.crosstab(crime_df['ARREST_MONTH'], crime_df[attr]) # plots stacked horizontal bars pivot.plot(kind = 'barh', stacked = True, ax = ax, width = 0.9, legend = False) ax.set_title(crime) ax.set_xlabel('Frequency') ax.set_ylabel('Month' if i == 0 else '') # shows the y-axis only on first plot ax.xaxis.set_tick_params(labelsize = 8) ax.yaxis.set_tick_params(size=0) yticklabels = ( pivot.index.year.to_series() .pipe( lambda s: s.astype('string').where(s != s.shift(), '') ) ) ax.set_yticklabels(yticklabels) axes.flat[0].invert_yaxis() handles, labels = axes.flat[0].get_legend_handles_labels() fig.legend(handles, labels, title = title, loc = 'upper center', ncol = len(df[attr].unique()), bbox_to_anchor = (0.5, 0.94)) fig.suptitle(f'Crime Frequency Distribution by Year and {title}', fontsize = 20) plt.tight_layout(rect = [0, 0, 1, 0.90]) plt.show() Note that I also inverted the y-axis to make the dates increase as the viewer moves their eyes down the chart. This is done with the axes.flat[0].invert_yaxis() line (it inverts tha axis on all charts since they share the y-axis)
1
0
79,614,770
2025-5-9
https://stackoverflow.com/questions/79614770/how-can-i-get-all-thing-names-from-a-thing-group-in-aws-iot-core-using-a-lambda
I'm trying to get all the thing names that are part of a specific thing group in AWS IoT Core using a Python Lambda function. I checked the Boto3 documentation looking for a function that retrieves the names of things inside a specific thing group, but I couldn't find anything that does exactly that. Is there a way to fetch all the thing names from a thing group at once and store them in a list?
You can use the BOTO3 client to retrieve IoT things in a thing group. Here is the Python code. You need to use this Python code in an AWS Lambda function to address your use case. For additional AWS code examples, refer to the AWS Code Library -- where you will find thousands of examples in various SDKs, CLI, etc. import boto3 def list_things_in_group(group_name, region='us-east-1'): client = boto3.client('iot', region_name=region) try: response = client.list_things_in_thing_group( thingGroupName=group_name, recursive=False # Set to True if you want to include child groups ) things = response.get('things', []) if not things: print(f"No things found in group: {group_name}") else: print(f"Things in group '{group_name}':") for thing_name in things: print(f"- {thing_name}") describe_thing(client, thing_name) except client.exceptions.ResourceNotFoundException: print(f"Thing group '{group_name}' not found.") except Exception as e: print(f"Error: {e}") def describe_thing(client, thing_name): response = client.describe_thing(thingName=thing_name) print(f" Thing Name: {response.get('thingName')}") print(f" Thing ARN: {response.get('thingArn')}") print() # Example usage: if __name__ == "__main__": list_things_in_group("YourThingGroupName")
1
0
79,609,220
2025-5-6
https://stackoverflow.com/questions/79609220/documenting-a-script-step-by-step-with-sphinx
I am documenting a python library with Sphinx. I have a couple of example scripts which I'd like to document in a narrative way, something like this: #: Import necessary package and define :meth:`make_grid` import numpy as np def make_grid(a,b): """ Make a grid for constant by piece functions """ x = np.linspace(0,np.pi) xmid = (x[:-1]+x[1:])/2 h = x[1:]-x[:-1] return xmid,h #: Interpolate a function xmid,h = make_grid(0,np.pi) y = np.sin(xmid) #: Calculate its integral I = np.sum(y*h) print ("Result %g" % I ) Those scripts should remain present as executable scripts in the repository, and I want to avoid duplicating their code into comments. I would like to generate the corresponding documentation, something like : Is there any automated way to do so? This would allow me not to duplicate the example script in the documentation. It seems to me this was the object of this old question but in my hands viewcode extension doesn't interpret comments, it just produces an html page with quoted code, comments remain comments.
Take a look at the sphinx-gallery extension, which seems to do what you require. With this extension, if you have a Python script, you must start it with a header docstring, and then you can add comments that will be formatted as text rather than code using the # %% syntax, e.g., """ My example script. """ import numpy as np # %% # This will be a text block x = np.linspace(0, 10, 100) y = np.sin(2 * np.pi * x) # %% # Another block of text More details of the syntax is described here, and various examples are, e.g., here. Alternative option If the sphinx-gallery option is not appropriate (i.e., you don't really want a thumbnail-style gallery page linking to the examples), you could instead make use of the nbsphinx extension and the jupytext package. You can write your example Python scripts in jupytext's percent format, and then generate the pages via an intermediate conversion to a Jupyter notebook. For example (after installing both nbsphinx and jupytext), if you had a package structure like: . β”œβ”€β”€ docs β”‚ β”œβ”€β”€ Makefile β”‚ β”œβ”€β”€ conf.py β”‚ β”œβ”€β”€ examples -> ../src/examples/ β”‚ β”œβ”€β”€ index.rst β”‚ └── make.bat └── src └── examples └── narrative.py where in this case I've symbolic linked the src/examples directory into the docs directory, you could edit your Sphinx conf.py file to contain: # add nbsphinx to extensions extensions = [ ... "nbsphinx", ] # this converts .py files with the percent format to notebooks nbsphinx_custom_formats = { '.py': ['jupytext.reads', {'fmt': 'py:percent'}], } nbsphinx_output_prompt = "" nbsphinx_execute = "auto" templates_path = ['_templates'] # add conf.py to exclude_patterns exclude_patterns = [..., 'conf.py'] and have narrative.py looking like: # %% [markdown] # # A title # %% [raw] raw_mimetype="text/restructuredtext" # Import necessary package and define :meth:`make_grid` # %% import numpy as np def make_grid(a,b): """ Make a grid for constant by piece functions """ x = np.linspace(0,np.pi) xmid = (x[:-1]+x[1:])/2 h = x[1:]-x[:-1] return xmid,h # %% [markdown] # Interpolate a function # %% xmid,h = make_grid(0,np.pi) y = np.sin(xmid) # %% [markdown] # Calculate its integral # %% I = np.sum(y*h) print ("Result %g" % I ) then running make html should produce a narrative.html file like: which you can link to from index.rst etc. Some things to note about the narrative.py file: the start of the .py file has to contain a "Title" cell, which in this case, as I've set it as a Markdown cell, contains (after the initial comment string #) # A Title using the Markdown header syntax of #. If you don't have a title you won't be able to link to the output from other documents, e.g., index.rst; for most of the text cells, I have marked them as [markdown] format, i.e., they will be interpreted as containing Markdown syntax; for the cell containing restructured text, I have marked it as a [raw] cell with the meta data raw_mimetype="text/restructuredtext"; the input code cells will display with an input prompt by default [1]: etc. Turning off the input prompts requires using Custom CSS.
7
3
79,614,033
2025-5-9
https://stackoverflow.com/questions/79614033/what-explains-pattern-matching-in-python-not-matching-for-0-0-but-matching-for
I would like to understand how pattern matching works in Python. I know that I can match a value like so: >>> t = 12.0 >>> match t: ... case 13.0: ... print("13") ... case 12.0: ... print("12") ... 12 But I notice that when I use matching with a type like float(), it matches 12.0: >>> t = 12.0 >>> match t: ... case float(): ... print("13") ... case 12.0: ... print("12") ... 13 This seems strange, because float() evaluates to 0.0, but the results are different if that is substituted in: >>> t = 12.0 >>> match t: ... case 0.0: ... print("13") ... case 12.0: ... print("12") ... 12 I would expect that if 12.0 matches float(), it would also match 0.0. There are cases where I would like to match against types, so this result seems useful. But why does it happen? How does it work?
The thing that follows the case keyword is not an expression, but special syntax called a pattern. 0.0 is a literal pattern. It checks equality with 0.0. float() is a class pattern. It checks that the type is float. Since it is not an expression, it isn't evaluated and therefore is different from 0.0.
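A small sketch combining both kinds of patterns (the class pattern can also capture the matched value with as):
t = 12.0
match t:
    case 0.0:             # literal pattern: compares t == 0.0
        print("zero")
    case float() as f:    # class pattern: checks isinstance(t, float), binds it to f
        print(f"some other float: {f}")
This prints "some other float: 12.0"; only a value equal to 0.0 would take the first branch.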
14
20
79,610,653
2025-5-7
https://stackoverflow.com/questions/79610653/python-pynput-the-time-module-do-not-seem-to-work-together-in-a-loop
So I have written this Python script to vote repeatedly (It's allowed) for a friend on a show at a local TV station. import os import time from pynput.keyboard import Key, Controller os.system("open -a Messages") time.sleep(3) keyboard = Controller() for i in range(50): keyboard.type("Example Message") print("Message typed") time.sleep(5) keyboard.press(Key.enter) print(f"======= {i+1} Message(s) Sent =======") time.sleep(40) print("Texting Complete") During the first loop, everything works like it's supposed to, the program takes 5 seconds between typing and pressing Enter. However, in the loops thereafter, the pynput code seems to ignore time.sleep between keyboard.type & keyboard.press, running them immediately in succession, while the print statements still respect time.sleep in the terminal output. This isn't that big of an issue since it stills functions as intended most of the time, but about every 4th or 5th message gets sent before the program has finished typing, causing that vote not to get counted. I'm running the script in Visual Studio Code Version: 1.99.3 on a 2021 Macbook Pro with an M1 chip, if that matters. I have tried running the script unbuffered using the terminal, but it has made no difference. Any help would be appreciated.
Solved by user @furas in the comments: keyboard.press() only pushes the key down and keeps it held, so the code also needs a matching keyboard.release() to avoid the initial keyboard.press(Key.enter) being held down for the rest of the loops.
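In other words, the Enter keystroke inside the loop should look something like this (sketch):
keyboard.press(Key.enter)
keyboard.release(Key.enter)   # release it, otherwise Enter stays held down
pynput's Controller also has a tap() method that does the press-and-release in one call.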
2
3
79,613,425
2025-5-9
https://stackoverflow.com/questions/79613425/get-media-created-timestamp-with-python-for-mp4-and-m4a-video-audio-files
Trying to get "Media created" timestamp and insert as the "Last modified date" with python for .mp4 and .m4a video, audio files (no EXIF). The "Media created" timestamp shows up and correctly in Windows with right click file inspection, but I can not get it with python. What am I doing wrong? (This is also a working fix for cloud storage changing the last modified date of files.) enter from mutagen.mp4 import MP4, M4A from datetime import datetime import os def get_mp4_media_created_date(filepath): """ Extracts the "Media Created" date from an MP4 or M4A file. Args: filepath (str): The path to the MP4 or M4A file. Returns: datetime or None: The creation date as a datetime object, or None if not found. """ file_lower = filepath.lower() try: if file_lower.endswith(".mp4"): media = MP4(filepath) elif file_lower.endswith(".m4a"): media = M4A(filepath) else: return None # Not an MP4 or M4A file found_date = None date_tags_to_check = ['creation_time', 'com.apple.quicktime.creationdate'] for tag in date_tags_to_check: if tag in media: values = media[tag] if not isinstance(values, list): values = [values] for value in values: if isinstance(value, datetime): found_date = value break elif isinstance(value, str): try: found_date = datetime.fromisoformat(value.replace('Z', '+00:00')) break except ValueError: pass if found_date: break return found_date except Exception as e: print(f"Error processing {filepath}: {e}") return None if __name__ == "__main__": filepath = input("Enter the path to the MP4/M4A file: ") if os.path.exists(filepath): creation_date = get_mp4_media_created_date(filepath) if creation_date: print(f"Media Created Date: {creation_date}") else: print("Could not find Media Created Date.") else: print("File not found.") here
As described here, the "Media created" value is not filesystem metadata. It's accessible in the API as a Windows Property. You can use os.utime to set "Media created" timestamp as the "Last modified date". Like import pytz import datetime import os from win32com.propsys import propsys, pscon file = 'path/to/your/file' properties = propsys.SHGetPropertyStoreFromParsingName(file) dt = properties.GetValue(pscon.PKEY_Media_DateEncoded).GetValue() if not isinstance(dt, datetime.datetime): # In Python 2, PyWin32 returns a custom time type instead of # using a datetime subclass. It has a Format method for strftime # style formatting, but let's just convert it to datetime: dt = datetime.datetime.fromtimestamp(int(dt)) dt = dt.replace(tzinfo=pytz.timezone('UTC')) print('Media created at', dt, dt.timestamp()) os.utime(file, (dt.timestamp(),dt.timestamp()))
1
2
79,613,107
2025-5-8
https://stackoverflow.com/questions/79613107/pyspark-udf-mapping-is-returning-empty-columns
Given a dataframe, I want to apply a mapping with UDF but getting empty columns. data = [(1, 3), (2, 3), (3, 5), (4, 10), (5, 20)] df = spark.createDataFrame(data, ["int_1", "int_2"]) df.show() +-----+-----+ |int_1|int_2| +-----+-----+ | 1| 3| | 2| 3| | 3| 5| | 4| 10| | 5| 20| +-----+-----+ I have a mapping: def test_map(col): if col < 5: score = 'low' else: score = 'high' return score mapp = {} test_udf = F.udf(test_map, IntegerType()) I iterate here to populate mapp... for x in (1, 2): print(f'Now working {x}') mapp[f'limit_{x}'] = test_udf(F.col(f'int_{x}')) print(mapp) {'limit_1': Column<'test_map(int_1)'>, 'limit_2': Column<'test_map(int_2)'>} df.withColumns(mapp).show() +-----+-----+-------+-------+ |int_1|int_2|limit_1|limit_2| +-----+-----+-------+-------+ | 1| 3| NULL| NULL| | 2| 3| NULL| NULL| | 3| 5| NULL| NULL| | 4| 10| NULL| NULL| | 5| 20| NULL| NULL| +-----+-----+-------+-------+ The problem is I get null columns. What I'm expecting is: +-----+-----+-------+-------+ |int_1|int_2|limit_1|limit_2| +-----+-----+-------+-------+ | 1| 3| low | low | | 2| 3| low | low | | 3| 5| low | low | | 4| 10| low | high| | 5| 20| low | high| +-----+-----+-------+-------+ The reason I'm doing it is because I have to do for 100 columns. I heard that "withColumns" with a mapping is much faster than iterating over "withColumn" many times.
Your problem is that your UDF is registered to return an integer (its return type is declared as IntegerType()) while your Python function actually returns a string ("low" or "high"). Spark can't convert those strings to integers, so the column comes back full of nulls. The fix is to declare StringType() as the UDF return type (imported from pyspark.sql.types, just like IntegerType):
test_udf = F.udf(test_map, StringType())
Let me know if you want more explanation about UDFs!
1
2
79,613,039
2025-5-8
https://stackoverflow.com/questions/79613039/assign-a-number-for-every-matching-value-in-list
I have a long list of items that I want to assign a number to that increases by one every time the value in the list changes. Basically I want to categorize the values in the list. It can be assumed that the values in the list are always lumped together, but I don't know the number of instances it's repeating. The list is stored in a dataframe as of now, but the output needs to be a dataframe. Example: my_list = ['Apple', 'Apple', 'Orange', 'Orange','Orange','Banana'] grouping = pd.DataFrame(my_list, columns=['List']) Expected output: List Value 0 Apple 1 1 Apple 1 2 Orange 2 3 Orange 2 4 Orange 2 5 Banana 3 I have tried with a for loop, where it checks if the previous value is the same as the current value, but I imagine that there should be a nicer way of doing this.
Use pandas.factorize, and add 1 if you need the category numbers to start with 1 instead of 0: import pandas as pd my_list = ['Apple', 'Apple', 'Orange', 'Orange','Orange','Banana'] grouping = pd.DataFrame(my_list, columns=['List']) grouping['code'] = pd.factorize(grouping['List'])[0] + 1 print(grouping) Output: List code 0 Apple 1 1 Apple 1 2 Orange 2 3 Orange 2 4 Orange 2 5 Banana 3
4
9
79,612,757
2025-5-8
https://stackoverflow.com/questions/79612757/scipys-wrappedcauchy-function-wrong
I'd like someone to check my understanding on the wrapped cauchy function in Scipy... From Wikipedia "a wrapped Cauchy distribution is a wrapped probability distribution that results from the "wrapping" of the Cauchy distribution around the unit circle." It's similar to the Von Mises distribution in that way. I use the following bits of code to calculate a couple thousand random variates, get a histogram and plot it. from scipy.stats import wrapcauchy, vonmises import plotly.graph_objects as go import numpy as np def plot_cauchy(c, loc = 0, scale = 1, size = 100000): ''' rvs(c, loc=0, scale=1, size=1, random_state=None) ''' rvses = vonmises.rvs(c, loc = loc, scale = scale, size = size) # rvses = wrapcauchy.rvs(c, # loc = loc, # scale = scale, # size = size) y,x = np.histogram(rvses, bins = 200, range = [-np.pi,np.pi], density = True) return x,y fig = go.Figure() loc = -3 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 1.5 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 0 x,y = plot_cauchy(0.5, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name=f'Centered on {loc}')) fig.show() When plotting this using the Von Mises distribution I get a couple of distributions that are wrapped from -pi to pi and centered on "loc": When I replace the vonmises distribution with the wrapcauchy distribution I get a "non-wrapped" result, that to my eye just looks wrong. To plot this completely I have to adjust the ranges for the histogram This is with Scipy version '1.15.2'. Is there a way to correctly "wrap" the outputs of a the Scipy call, or another library that correctly wraps the output from -pi to pi?
Is there a way to correctly "wrap" the outputs of a the Scipy call You can use the modulo operator. The operation number % x wraps all output to the range [0, x). If you want the range to begin at a value other than 0, you can add and subtract a constant before and after the modulo operation to center it somewhere else. If you want the range to begin at -pi, you can do (array + pi) % (2 * pi) - pi. For example, this is how SciPy internally wraps the vonmises result. return np.mod(rvs + np.pi, 2*np.pi) - np.pi Source. You could do something similar with the result of scipy.stats.wrapcauchy(). Here is how you could modify your code to do this: from scipy.stats import wrapcauchy, vonmises import plotly.graph_objects as go import numpy as np def plot_cauchy_or_vm(c, loc = 0, scale = 1, kind="vonmises", size = 100000): ''' rvs(c, loc=0, scale=1, size=1, random_state=None) ''' if kind == "vonmises": rvses = vonmises.rvs(c, loc = loc, scale = scale, size = size) elif kind == "cauchy": rvses = wrapcauchy.rvs(c, loc = loc, scale = scale, size = size) rvses = ((rvses + np.pi) % (2 * np.pi)) - np.pi else: raise Exception("Unknown kind") y,x = np.histogram(rvses, bins = 200, range = [-np.pi,np.pi], density = True) return x,y for kind in ["vonmises", "cauchy"]: fig = go.Figure() loc = -3 x,y = plot_cauchy_or_vm(0.5, kind=kind, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 1.5 x,y = plot_cauchy_or_vm(0.5, kind=kind, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name= f'Centered on {loc}')) loc = 0 x,y = plot_cauchy_or_vm(0.5, kind=kind, loc = loc) fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name=f'Centered on {loc}')) fig.show() Output: Cauchy Plot
3
3
79,612,625
2025-5-8
https://stackoverflow.com/questions/79612625/underlining-fails-in-matplotlib
My matplotlib.__version__ is 3.10.1. I'm trying to underline some text and can not get it to work. As far as I can tell, Latex is installed and accessible in my system: import subprocess result = subprocess.run( ["pdflatex", "--version"], check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) print(result.stdout) results in: b'pdfTeX 3.141592653-2.6-1.40.25 (TeX Live 2023/Debian)\nkpathsea version 6.3.5\nCopyright 2023 Han The Thanh (pdfTeX) et al.\nThere is NO warranty. Redistribution of this software is\ncovered by the terms of both the pdfTeX copyright and\nthe Lesser GNU General Public License.\nFor more information about these matters, see the file\nnamed COPYING and the pdfTeX source.\nPrimary author of pdfTeX: Han The Thanh (pdfTeX) et al.\nCompiled with libpng 1.6.43; using libpng 1.6.43\nCompiled with zlib 1.3; using zlib 1.3\nCompiled with xpdf version 4.04\n' Also the simple code: import matplotlib.pyplot as plt plt.text(0.5, 0.5, r'$\frac{a}{b}$') plt.show() works as expected. Similar questions from 2012 (Underlining Text in Python/Matplotlib) and 2017 (matplotlib text underline) have accepted answers that fail with RuntimeError: Failed to process string with tex because dvipng could not be found A similar question from 2019 (Underlining not working in matplotlib graphs for the following code using tex) has no answer and it is my exact same issue, i.e.: import matplotlib.pyplot as plt plt.text(.5, .5, r'Some $\underline{underlined}$ text') plt.show() fails with: ValueError: \underline{underlined} text ^ ParseFatalException: Unknown symbol: \underline, found '\' (at char 0), (line:1, col:1) The 2017 question has a deleted answer that points to a closed PR in matplotlib's Github repo which points to another PR called Support \underline in Mathtext which is marked as a draft. Does my matplotlib version not support the underline Latex command?
As you have correctly found, \underline, is not a currently supported MathText command. But, matplotlib's MathText is not the same a LaTeX. To instead use LaTeX, you can do, e.g., import matplotlib.pyplot as plt # turn on use of LaTeX rather than MathText plt.rcParams["text.usetex"] = True plt.text(.5, .5, r'Some $\underline{underlined}$ text') plt.show() You may have issues if your tex distribution does not ship with the type1cm package, in which can you may want to look at, e.g., https://stackoverflow.com/a/37218925/1862861.
1
4
79,612,007
2025-5-8
https://stackoverflow.com/questions/79612007/undefined-reference-to-py-initialize-when-build-a-simple-demo-c-on-a-linux-con
I am testing of running a Python thread in a c program with a simple example like the below # demo.py import time for i in range(1, 101): print(i) time.sleep(0.1) // demo.c #include <Python.h> #include <pthread.h> #include <stdio.h> void *run_python_script(void *arg) { Py_Initialize(); if (!Py_IsInitialized()) { fprintf(stderr, "Python initialization failed\n"); return NULL; } FILE *fp = fopen("demo.py", "r"); if (fp == NULL) { fprintf(stderr, "Failed to open demo.py\n"); Py_Finalize(); return NULL; } PyRun_SimpleFile(fp, "demo.py"); fclose(fp); Py_Finalize(); return NULL; } int main() { pthread_t python_thread; if (pthread_create(&python_thread, NULL, run_python_script, NULL) != 0) { fprintf(stderr, "Failed to create thread\n"); return 1; } pthread_join(python_thread, NULL); printf("Python thread has finished. Exiting program.\n"); return 0; } Then I build the above code with the following command gcc demo.c -o demo -lpthread -I$(python3-config --includes) $(python3-config --ldflags) $(python3-config --cflags) Then I get the following error: /usr/bin/ld: /tmp/ccsHQpZ3.o: in function `run_python_script': demo.c:(.text.run_python_script+0x7): undefined reference to `Py_Initialize' /usr/bin/ld: demo.c:(.text.run_python_script+0xd): undefined reference to `Py_IsInitialized' /usr/bin/ld: demo.c:(.text.run_python_script+0x41): undefined reference to `PyRun_SimpleFileExFlags' /usr/bin/ld: demo.c:(.text.run_python_script+0x50): undefined reference to `Py_Finalize' /usr/bin/ld: demo.c:(.text.run_python_script+0xab): undefined reference to `Py_Finalize' collect2: error: ld returned 1 exit status The python library do exists, python3-config --ldflags -L/home/henry/anaconda3/lib/python3.9/config-3.9-x86_64-linux-gnu -L/home/henry/anaconda3/lib -lcrypt -lpthread -ldl -lutil -lm -lm ls -1 ~/anaconda3/lib | grep python libpython3.9.so libpython3.9.so.1.0 libpython3.so python3.9 I have no idea about is link error.
You need to pass --embed to python3-config because you are embedding a Python interpreter in your program. Observe the difference: $ python3-config --ldflags -L/usr/lib/python3.10/config-3.10-x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lcrypt -ldl -lm -lm $ python3-config --embed --ldflags -L/usr/lib/python3.10/config-3.10-x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lpython3.10 -lcrypt -ldl -lm -lm
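So the build line from the question becomes something like this (sketch; the key point is taking the link flags from the --embed variant so that -lpython3.x is included):
gcc demo.c -o demo -lpthread $(python3-config --includes) $(python3-config --cflags) $(python3-config --embed --ldflags)
The same --embed flag works with --libs as well.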
1
3
79,611,667
2025-5-8
https://stackoverflow.com/questions/79611667/how-do-i-handle-sigterm-inside-python-async-methods
Based on this code, I'm trying to catch SIGINT and SIGTERM. It works perfectly for SIGINT: I see it enter the signal handler, then my tasks do their cleanup before the whole program exits. On SIGTERM, though, the program simply exits immediately. My code is a bit of a hybrid of the two examples from the link above, as much of the original doesn't work under python 3.12: import asyncio import functools import signal async def signal_handler(sig, loop): """ Exit cleanly on SIGTERM ("docker stop"), SIGINT (^C when interactive) """ print('caught {0}'.format(sig.name)) tasks = [task for task in asyncio.all_tasks() if task is not asyncio.current_task()] list(map(lambda task: task.cancel(), tasks)) results = await asyncio.gather(*tasks, return_exceptions=True) print('finished awaiting cancelled tasks, results: {0}'.format(results)) loop.stop() if __name__ == "__main__": loop = asyncio.new_event_loop() asyncio.ensure_future(task1(), loop=loop) asyncio.ensure_future(task2(), loop=loop) loop.add_signal_handler(signal.SIGTERM, functools.partial(asyncio.ensure_future, signal_handler(signal.SIGTERM, loop))) loop.add_signal_handler(signal.SIGINT, functools.partial(asyncio.ensure_future, signal_handler(signal.SIGINT, loop))) try: loop.run_forever() finally: loop.close() task1 can terminate immediately, but task2 has cleanup code that is clearly being executed after SIGINT, but not after SIGTERM
That gist is very old, and asyncio/python has evolved since. Your code sort of works, but the way it's designed, the signal handling will create two coroutines, one of which will not be awaited when the other signal is received. This is because the couroutines are eagerly created, but they're only launched (ensure_future) when the corresponding signal is received. Thus, SIGTERM will be properly handled, but python will complain with RuntimeWarning: corouting 'signal_handler' was never awaited. A more modern take on your version might look something like: import asyncio import signal async def task1(): try: while True: print("Task 1 running...") await asyncio.sleep(1) except asyncio.CancelledError: print("Task 1 cancelled") # Task naturally stops once it raises CancelledError. async def task2(): try: while True: print("Task 2 running...") await asyncio.sleep(1) except asyncio.CancelledError: print("Task 2 cancelled") # Need to pass the list of tasks to cancel so that we don't kill the main task. # Alternatively, one could pass in the main task to explicitly exclude it. async def shutdown(sig, tasks): print(f"Caught signal: {sig.name}") for task in tasks: task.cancel() await asyncio.gather(*tasks, return_exceptions=True) print("Shutdown complete.") async def main(): tasks = [asyncio.create_task(task1()), asyncio.create_task(task2())] loop = asyncio.get_running_loop() for s in (signal.SIGINT, signal.SIGTERM): loop.add_signal_handler(s, lambda s=s: asyncio.create_task(shutdown(s, tasks))) await asyncio.gather(*tasks) if __name__ == "__main__": asyncio.run(main()) $ timeout 2s python3 sigterm.py Task 1 running... Task 2 running... Task 1 running... Task 2 running... Caught signal: SIGTERM Task 1 cancelled Task 2 cancelled Shutdown complete. In this particular case, though, I'd probably use a stop event or similar to signal the tasks to exit: import signal import asyncio stop_event = asyncio.Event() def signal_handler(): print("SIGTERM received! Exiting...") stop_event.set() async def looping_task(task_num): while not stop_event.is_set(): print(f"Task {task_num} is running...") await asyncio.sleep((task_num + 1) / 3) async def main(): loop = asyncio.get_event_loop() loop.add_signal_handler(signal.SIGTERM, signal_handler) await asyncio.gather(*(looping_task(i) for i in range(5))) if __name__ == "__main__": asyncio.run(main())
1
1
79,611,884
2025-5-8
https://stackoverflow.com/questions/79611884/how-to-pass-a-dynamic-list-of-csv-files-from-snakemake-input-to-a-pandas-datafra
I'm working on a Snakemake workflow where I need to combine multiple CSV files into a single Pandas DataFrame. The list of CSV files is dynamicβ€”it depends on upstream rules and wildcard patterns. Here's a simplified version of what I have in my Snakefile: rule combine_tables: input: expand("results/{sample}/data.csv", sample=SAMPLES) output: "results/combined/all_data.csv" run: import pandas as pd dfs = [pd.read_csv(f) for f in input] combined = pd.concat(dfs) combined.to_csv(output[0], index=False) This works when the files exist, but I’d like to know: What's the best practice for handling missing or corrupt files in this context? Is there a more "Snakemake-idiomatic" way to dynamically list and read input files for Pandas operations? How do I ensure proper file ordering or handle metadata like sample names if not all CSVs are structured identically?
rule combine_tables: input: # Static sample list (use checkpoints if dynamically generated) expand("results/{sample}/data.csv", sample=SAMPLES) output: "results/combined/all_data.csv" run: import pandas as pd dfs = [] missing_files = [] corrupt_files = [] # Process files in consistent order for file_path in sorted(input, key=lambda x: x.split("/")[1]): # Sort by sample # Handle missing files (shouldn't occur if Snakemake workflow is correct) if not os.path.exists(file_path): missing_files.append(file_path) continue # Handle corrupt/unreadable files try: df = pd.read_csv(file_path) # Add sample metadata column sample_id = file_path.split("/")[1] df.insert(0, "sample", sample_id) # Add sample column at start dfs.append(df) except Exception as e: corrupt_files.append((file_path, str(e))) # Validation reporting if missing_files: raise FileNotFoundError(f"Missing {len(missing_files)} files: {missing_files}") if corrupt_files: raise ValueError(f"Corrupt files detected:\n" + "\n".join( [f"{f[0]}: {f[1]}" for f in corrupt_files])) if not dfs: raise ValueError("No valid dataframes to concatenate") # Concatenate and save combined = pd.concat(dfs, ignore_index=True) combined.to_csv(output[0], index=False)
2
1
79,611,544
2025-5-8
https://stackoverflow.com/questions/79611544/multiprocessing-with-scipy-optimize
Question: Does scipy.optimize have minimizing functions that can divide their workload among multiple processes to save time? If so, where can I find the documentation? I've looked a fair amount online, including here, for answers: Scipy's optimization incompatible with Multiprocessing? Parallel optimizations in SciPy Multiprocessing Scipy optimization in Python I could be misunderstanding, but I don't see a clear indication in any of the above posts that the scipy library is informed of the fact that there are multiple processes that it can utilize simultaneously while also providing the minimization functions with all of the arguments needed to determine the minimum. I also don't see multiprocessing discussed in detail in the scipy docs that I read and I haven't had any luck finding real world examples of optimization gains to justify optimizing versus a parallel brute force effort. Here's a fictional example of what I'd like the scipy.optimize library to do (I know that the differential_evolution function doesn't have a multiprocessing argument): import multiprocessing as mp from scipy.optimize import differential_evolution def objective_function(x): return x[0] * 2 pool = mp.Pool(processes=16) # Perform differential evolution optimization result = differential_evolution(objective_function, multiprocessing = pool)
With respect to scipy.optimize.differential_evolution, it does seem to offer multiprocessing through multiprocessing.Pool via the optional "workers" call parameter, according to the official documentation at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution This may also be offered for other optimization methods but the API documents would need to be examined. The docs also say that the objective function must be pickleable. The official docs also have some general remarks on parallel execution with SciPy at https://docs.scipy.org/doc/scipy/tutorial/parallel_execution.html The call would look like this for differential_evolution: import multiprocessing as mp from scipy.optimize import differential_evolution def objective_function(x): return x[0] * 2 my_workers = 16 # Perform differential evolution optimization result = differential_evolution(objective_function, workers = my_workers)
2
2
79,610,188
2025-5-7
https://stackoverflow.com/questions/79610188/how-should-i-take-a-matrix-from-input
As we know, input() returns a string. How can I take a matrix like [[1,2],[3,4]] from input() and get it as a normal 2D list so I can do something with it?
It should work like this:
data = input()  # user enters [[1,2],[3,4]]
print(data)
output : [[1,2],[3,4]]
I tried this
data = list(input())
but it was completely wrong.
Using AST You can use ast Literals to parse your input string into a list. import ast raw_input = input("Enter the matrix (e.g., [[1,2],[3,4]]): ") # Parse the input string as a list matrix = ast.literal_eval(raw_input) Using numpy In order to use numpy you would have to enter the matrix in a slightly different format: import numpy as np raw_input = input("Enter the matrix (e.g., 1,2;3,4): ") matrix = np.matrix(raw_input, dtype=int)
2
1
79,609,709
2025-5-7
https://stackoverflow.com/questions/79609709/how-to-adjust-size-of-violin-plot-based-on-number-of-hues-available-for-each-cat
I need to create a violin plot based on two categories. But, some of the combination of categories are not available in the data. So it creates a white space, when i try to make the plot. I remember long ago i was able to adjust the size of the violins when the categories were not available in r using geom_violin(position= position_dodge(0.9)) refer to the attached image. Now i need to create a similar figure with python but when i try to make violin plot using seaborn i get whitespace when certain combinations of variables arent available (see image). Following is the code I am using in python. I would appreciate any help with this. Reproducible example import numpy as np # Define categories for Depth and Hydraulic Conductivity depth_categories = ["<0.64", "0.64-0.82", "0.82-0.90", ">0.9"] hydraulic_conductivity_categories = ["<0.2", "0.2-2.2", "2.2-15.5", ">15.5"] # Generate random HSI values np.random.seed(42) # For reproducibility hsi_values = np.random.uniform(low=0, high=35, size=30) # Generate random categories for Depth and Hydraulic Conductivity depth_values = np.random.choice(depth_categories, size=30) hydraulic_conductivity_values = np.random.choice(hydraulic_conductivity_categories, size=30) # Ensure not all combinations are available by removing some combinations for i in range(5): depth_values[i] = depth_categories[i % len(depth_categories)] hydraulic_conductivity_values[i] = hydraulic_conductivity_categories[(i + 1) % len(hydraulic_conductivity_categories)] # Create the DataFrame dummy_data = pd.DataFrame({ 'HSI': hsi_values, 'Depth': depth_values, 'Hydraulic_Conductivity': hydraulic_conductivity_values }) # Violin plot for Soil Depth and Hydraulic Conductivity plt.figure(figsize=(12, 6)) sns.violinplot(x='Depth', y='HSI', hue='Hydraulic_Conductivity', data=dummy_data, palette=color_palette, density_norm="count", cut = 0, gap = 0.1, linewidth=0.5, common_norm=False, dodge=True) plt.xlabel("DDDD") plt.ylabel("XXX") plt.title("Violin plot of XXX by YYYY and DDDD") plt.ylim(-5, 35) plt.legend(title='DDDD', loc='upper right') # sns.despine()# Remove the horizontal lines plt.show()
I'm not aware of a way to do this automatically, but you can easily overlay several violinplots, manually synchronizing the hue colors. An efficient way would be to use groupby to split the groups per number of "hues" per X-axis category, and loop over the categories. Then manually create a legend: # for reproducibility color_palette = sns.color_palette('Set1') # define the columns to use hue_col = 'Hydraulic_Conductivity' X_col = 'Depth' Y_col = 'HSI' # custom order for the hues hue_order = sorted(dummy_data[hue_col].unique(), key=lambda x: (not x.startswith('<'), float(x.strip('<>').partition('-')[0])) ) # ['<0.2', '0.2-2.2', '2.2-15.5', '>15.5'] colors = dict(zip(hue_order, color_palette)) # custom X-order # could use the same logic as above X_order = ['<0.64', '0.64-0.82', '0.82-0.90', '>0.9'] # create groups with number of hues per X-axis group group = dummy_data.groupby(X_col)[hue_col].transform('nunique') f, ax = plt.subplots(figsize=(12, 6)) for _, g in dummy_data.groupby(group): # get unique hues for this group to ensure consistent order hues = set(g[hue_col]) hues = [h for h in hue_order if h in hues] sns.violinplot( x=X_col, y=Y_col, hue=hue_col, data=g, order=X_order, hue_order=hues, # ensure consistent order across groups palette=colors, density_norm='count', cut = 0, gap = 0.1, linewidth=0.5, common_norm=False, dodge=True, ax=ax, # reuse the same axes legend=False, # do not plot the legend ) # create a custom legend manually from the colors dictionary import matplotlib.patches as mpatches plt.legend(handles=[mpatches.Patch(color=c, label=l) for l, c in colors.items()], title='DDDD', loc='upper right') plt.xlabel('DDDD') plt.ylabel('XXX') plt.title('Violin plot of XXX by YYYY and DDDD') plt.ylim(-5, 35) Output: NB. your example have a few categories with a single datapoint, therefore the single lines in the output below. This makes the categories ambiguous since the color is not visible, but this shouldn't be an issue if you have enough data.
2
1
79,608,184
2025-5-6
https://stackoverflow.com/questions/79608184/wrong-column-assignment-with-np-genfromtxt-if-passed-column-order-is-not-the-sam
This problem appeared in some larger code but I will give simple example: from io import StringIO import numpy as np example_data = "A B\na b\na b" data1 = np.genfromtxt(StringIO(example_data), usecols=["A", "B"], names=True, dtype=None) print(data1["A"], data1["B"]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.genfromtxt(StringIO(example_data), usecols=["B", "A"], names=True, dtype=None) print(data2["A"], data2["B"]) # ['b' 'b'] ['a' 'a'] which is not correct As you can see, if I change passed column order in regard of column order in file, I get wrong results. What's interesting is that dtypes are same: print(data1.dtype) # [('A', '<U1'), ('B', '<U1')] print(data2.dtype) # [('A', '<U1'), ('B', '<U1')] In this example it's not hard to sort column names before passing them, but in my case column names are gotten from some other part of system and it's not guaranteed that they will be in same order as those in file. I can probably circumvent that but I'm wondering if there is something wrong with my logic in this example or is there some kind of bug here. Any help is appreciated. Update: What I just realized playing around a bit is following, if I add one or more columns into example data (not important where) and pass subset of columns to np.genfromtxt in whichever order I want, it gives correct result. Example: example_data = "A B C\na b c\na b c" data1 = np.genfromtxt(StringIO(example_data), usecols=["A", "B"], names=True, dtype=None) print(data1["A"], data1["B"]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.genfromtxt(StringIO(example_data), usecols=["B", "A"], names=True, dtype=None) print(data2["A"], data2["B"]) # ['a' 'a'] ['b' 'b'] which is correct
[62]: text = "A B\na b\na b".splitlines() In [63]: np.genfromtxt(text,dtype=None, usecols=[1,0],names=True) Out[63]: array([('b', 'a'), ('b', 'a')], dtype=[('A', '<U1'), ('B', '<U1')]) In [64]: np.genfromtxt(text3,dtype=None, usecols=[1,0]) Out[64]: array([['B', 'A'], ['b', 'a'], ['b', 'a']], dtype='<U1') So it uses the columns in the order you specify in usecols, but takes the structured array dtype from the names In [65]: text3="A B C\na b c\na b c".splitlines() In [66]: np.genfromtxt(text3,dtype=None, usecols=[1,0]) Out[66]: array([['B', 'A'], ['b', 'a'], ['b', 'a']], dtype='<U1') In [67]: np.genfromtxt(text3,dtype=None, usecols=[1,0],names=True) Out[67]: array([('b', 'a'), ('b', 'a')], dtype=[('B', '<U1'), ('Af', '<U1')]) In the subset case it pays attention to the usecols when constructing the dtype. From the genfromtxt code (read from [source] or ipython ?? firstvalues is the names derived from the first line, and nbcol is their count. After making sure usecols is a list, and converting to numbers if needed, it: nbcols = len(usecols or first_values) ... if usecols: for (i, current) in enumerate(usecols): # if usecols is a list of names, convert to a list of indices if _is_string_like(current): usecols[i] = names.index(current) elif current < 0: usecols[i] = current + len(first_values) # If the dtype is not None, make sure we update it if (dtype is not None) and (len(dtype) > nbcols): descr = dtype.descr dtype = np.dtype([descr[_] for _ in usecols]) names = list(dtype.names) # If `names` is not None, update the names elif (names is not None) and (len(names) > nbcols): names = [names[_] for _ in usecols] So with usecols, nbcols is the number of columns it's to use. In the subset case it selects from the names, but if it isn't a subset, then the names isn't modified, in number or order. For a structured array you really don't need to specify the order In [79]: data=np.genfromtxt(text,dtype=None, names=True); data Out[79]: array([('a', 'b'), ('a', 'b')], dtype=[('A', '<U1'), ('B', '<U1')]) In [80]: data['B'], data['A'] Out[80]: (array(['b', 'b'], dtype='<U1'), array(['a', 'a'], dtype='<U1')) Columns can be reordered after loading with indexing: In [87]: data[['A','B']] Out[87]: array([('a', 'b'), ('a', 'b')], dtype=[('A', '<U1'), ('B', '<U1')]) In [88]: data[['B','A']] Out[88]: array([('b', 'a'), ('b', 'a')], dtype={'names': ['B', 'A'], 'formats': ['<U1', '<U1'], 'offsets': [4, 0], 'itemsize': 8}) I suppose this could be raised as an issue. The logic in applying usecols, names, etc, is complicated as it is :) edit With explicit dtype In [96]: dt=[('B','U1'),('A','U1')] In [97]: data=np.genfromtxt(text,dtype=dt, usecols=[1,0], skip_header=1); data Out[97]: array([('b', 'a'), ('b', 'a')], dtype=[('B', '<U1'), ('A', '<U1')])
1
1
79,609,245
2025-5-6
https://stackoverflow.com/questions/79609245/polars-unusual-query-plan-for-lazyframe-custom-function-apply-takes-extremely-l
I have a spacy nlp function nlp(<string>).vector that I need to apply to a string column in a dataframe. This function takes on average 13 milliseconds to return. The function returns a ndarray that contains 300 Float64s. I need to expand these Floats to their own columns. This is the sketchy way I've done this: import spacy import polars as pl nlp = spacy.load('en_core_web_lg') full = pl.LazyFrame([["apple", "banana", "orange"]], schema=['keyword']) VECTOR_FIELD_NAMES = ['dim_' + str(x) for x in range(300)] full = full.with_columns( pl.col('keyword').map_elements( lambda x: tuple(nlp(x).vector), return_dtype=pl.List(pl.Float64) ).list.to_struct(fields=VECTOR_FIELD_NAMES).struct.unnest() ) full.collect() This takes 11.5s to complete, which is >100 times slower than doing the computation outside of Polars. Looking at the query plan, it reveals this: naive plan: (run LazyFrame.explain(optimized=True) to see the optimized plan) WITH_COLUMNS: [col("keyword").map_list().list.to_struct().struct.field_by_name(dim_0)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_1)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_2)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_3)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_4)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_5)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_6)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_7)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_8)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_9)(), col("keyword").map_list().list.to_struct().struct.field_by_name(dim_10)(), ... It carries on like this for all 300 dims. I believe it might be computing nlp(<keyword>) for every cell of the output. Why might this be? How do I restructure my statements to avoid this?
It's due to how expression expansion works. The expression level unnest expands into multiple expressions (one for each field) pl.col("x").struct.unnest() Would turn into pl.col("x").struct.field("a") pl.col("x").struct.field("b") pl.col("x").struct.field("c") Normally you don't notice as Polars caches expressions (CSE), but UDFs are not eligible for caching. https://github.com/pola-rs/polars/issues/20260 def udf(x): print("Hello") return x df = pl.DataFrame({"x": [[1, 2, 3], [4, 5, 6]]}) df.with_columns( pl.col.x.map_elements(udf, return_dtype=pl.List(pl.Int64)) .list.to_struct(fields=['a', 'b', 'c']) .struct.unnest() ) It calls the UDF for each element. Hello Hello Hello Hello Hello Hello You can use the unnest frame method instead. df.with_columns( pl.col.x.map_elements(udf, return_dtype=pl.List(pl.Int64)) .list.to_struct(fields=['a', 'b', 'c']) .alias('y') ).unnest('y') Hello Hello
2
1
79,608,280
2025-5-6
https://stackoverflow.com/questions/79608280/cannot-read-files-list-from-a-specific-channel-from-slack-using-python
I used to have a working python function to fetch files from a specific Slack channel, but that stopped working a few months ago. I tested the same request to the slack API (files.list) using Postman which does give me an array with a number of files. The following code used to work but no longer does: import requests import json apiBase = "https://slack.com/api/" accesToken = "Bearer xoxb-<secret>" requestData = { "channel": "<obscured>" } r = requests.post(apiBase + "files.list", headers={'Authorization': accesToken, 'Content-Type': 'application/json; charset=utf-8'}, data=requestData) try: response = json.loads(r.text) except: print("Read error") isError = True if(not 'files' in response): if('error' in response): print(response['error']) if('warning' in response): print(response['warning']) isError = True files = response['files'] files.sort(key=lambda x:x['timestamp']) count = len(files) print(str(r)) print(str(r.request.body)) print(str(r.request.headers['Content-Type'])) print(str(r.text)) The result is: <Response [200]> channel=<secret> application/json; charset=utf-8 {"ok":true,"files":[],"paging":{"count":100,"total":0,"page":1,"pages":0}} Process finished with exit code 0 Postman also returns a 200 OK, but the array contains 3 files for this channel. So why is Python not getting the 3 files...? I know that an app needs to be given access to the channel in Slack which is the case here. (The channel and credentials are identical in both scenario's (Python and Postman). Please advise me ...
I think it has something to do with the content type you send with requests.post. Have you tried using json=requestData instead of data=requestData? Even though the content type is correctly set in your headers, passing a dict as data= makes requests send it form-encoded, which may be why the Slack API is ignoring your data.
Update: The solution was to send requestData as URL query parameters, which can be done with the "params" argument of requests.post(), like so (reusing apiBase and accesToken from the question):
requestData = {
    "channel": channel_id
}
r = requests.post(apiBase + "files.list",
                  headers={'Authorization': accesToken},
                  params=requestData)
This is the way the Slack API expects the data for files.list.
1
1
79,608,369
2025-5-6
https://stackoverflow.com/questions/79608369/bars-not-fitting-to-x-axis-ticks-in-a-seaborn-distplot
I do generate that figure with seaborn.distplot(). My problem is that the ticks on the X-axis do not fit to the bars, in all cases. I would expect a relationship between bars and ticks like you can see at 11 and 15. This is the MWE import numpy as np import pandas as pd import seaborn as sns # Data np.random.seed(42) n = 5000 df = pd.DataFrame({ 'PERSON': np.random.randint(100000, 999999, n), 'Fruit': np.random.choice(['Banana', 'Strawberry'], n), 'Age': np.random.randint(9, 18, n) }) fig = sns.displot( data=df, x='Age', hue='Fruit', multiple='dodge').figure fig.show()
You need discrete=True to tell seaborn that the x values are discrete. Adding shrink=0.8 will leave some space between the bars. import numpy as np import pandas as pd import seaborn as sns from matplotlib import pyplot as plt # Data np.random.seed(42) n = 5000 df = pd.DataFrame({ 'PERSON': np.random.randint(100000, 999999, n), 'Fruit': np.random.choice(['Banana', 'Strawberry'], n), 'Age': np.random.randint(9, 18, n) }) sns.displot( data=df, x='Age', hue='Fruit', multiple='dodge', discrete=True, shrink=0.8) plt.show() . Note that sns.displot() is a figure-level function that creates a grid of one or more subplots, with a common legend outside. sns.countplot() is an axes-level function, that creates a single subplot with a legend inside. An alternative is creating a countplot: sns.countplot( data=df, x='Age', hue='Fruit', dodge=True )
3
4
79,619,027
2025-5-13
https://stackoverflow.com/questions/79619027/why-do-results-from-adjustable-quadratic-volterra-filter-mapping-not-enhance-dar
Based on this paper Adjustable quadratic filters for image enhancement, Reinhard Bernstein, Michael Moore and Sanjit Mitra, 1997, I am trying to reproduce the image enhancement results. I followed the described steps, including implementing the nonlinear mapping functions (e.g., f_map_2 = x^2) and applying the 2D Teager-like quadratic Volterra filter as outlined. More specifically, the formula for the filter used here is formula (53) in the paper "A General Framework for Quadratic Volterra Filters for Edge Enhancement". Formula (53) and the formulas of the two mapping functions are used as shown in the image below. My pipeline is: normalize the input gray image to the range [0, 1], then map it using predefined functions (specifically the definition of f_map_2 and f_map_5 please see in the image), then pass it through the Teager filter (which is the formula (53)), multiply it by an alpha coefficient and combine the original image for sharpening (unsharp masking), finally denormalize back to the range [0, 255]. import cv2 import numpy as np from numpy import sqrt import matplotlib.pyplot as plt def normalize(img): return img.astype(np.float32)/255.0 def denormalize(img): """Convert image to [0, 255]""" return (img * 255).clip(0, 255).astype(np.uint8) def input_mapping(x, map_type='none', m=2): """Apply input mapping function according to the paper""" if map_type == 'none': return x # none (4b) elif map_type == 'map2': return x**2 # f_map2: x^2 (4c) elif map_type == 'map5': # piece-wise function f_map5 (4d) mapped = np.zeros_like(x) mask = x > 0.5 mapped[mask] = 1 - 2*(1 - x[mask])**2 mapped[~mask] = 2 * x[~mask]**2 return mapped else: raise ValueError("Invalid mapping type") def teager_filter(img): padded = np.pad(img, 1, mode='reflect') out = np.zeros_like(img) for i in range(1, padded.shape[0]-1): for j in range(1, padded.shape[1]-1): x = padded[i,j] t1 = 3*(x**2) t2 = -0.5*padded[i+1,j+1]*padded[i-1,j-1] t3 = -0.5*padded[i+1,j-1]*padded[i-1,j+1] t4 = -1.0*padded[i+1,j]*padded[i-1,j] t5 = -1.0*padded[i,j+1]*padded[i,j-1] out[i-1,j-1] = t1 + t2 + t3 + t4 + t5 return out def enhance_image(image_path, alpha, map_type='none'): """Enhance images with optional input mapping""" # Image reading and normalization img = cv2.imread(image_path, 0) if img is None: raise FileNotFoundError("No image found!") img_norm = normalize(img) # Input mapping mapped_img = input_mapping(img_norm, map_type) # Teager filter teager_output = teager_filter(mapped_img) enhanced = np.clip(img_norm + alpha * teager_output, 0, 1) return denormalize(enhanced) input_path = r"C:\Users\tt\OneDrive\Desktop\original_image.jpg" original_image = cv2.imread(input_path, 0) alpha = 0.1 enhanced_b = enhance_image(input_path, alpha, map_type='none') enhanced_c = enhance_image(input_path, alpha, map_type='map2') enhanced_d = enhance_image(input_path, alpha, map_type='map5') plt.figure(figsize=(15, 5)) plt.subplot(1, 4, 1) plt.imshow(original_image, cmap='gray') plt.title('Original') plt.axis('off') plt.subplot(1, 4, 2) plt.imshow(enhanced_b, cmap='gray') plt.title('No Mapping (b)') plt.axis('off') plt.subplot(1, 4, 3) plt.imshow(enhanced_c, cmap='gray') plt.title('Map2 (c)') plt.axis('off') plt.subplot(1, 4, 4) plt.imshow(enhanced_d, cmap='gray') plt.title('Map5 (d)') plt.axis('off') plt.tight_layout() plt.show() However, my output images from using mappings like f_map_2 and f_map_5 do not resemble the ones shown in the paper (specifically, images (c) and (d) below). 
Instead of strong enhancement in bright and dark regions, the results mostly show slightly darkened edges with almost no contrast boost in the target areas. These are my results: And these are the paper's results: Maybe this is helpful, so I'll also post a picture of the raw output of the above Teager filter, without multiplying by alpha and adding to the original image, as below. I tried changing the alpha but it didn't help; I also tried adding a denoising step in the normalization function, which still didn't help: the image still looks almost identical to the original. I also tested the filter on other grayscale images with various content, but the outcome remains similar: mainly edge thickening without visible intensity-based enhancement. Has anyone successfully reproduced the enhancement effects described in the paper? Could there be implementation details or parameters (e.g., normalization, unsharp masking, or mapping scale) that are critical but not clearly stated? I will provide the original image below, in case anyone wants to reproduce the process I did. Input image Any insights, references, or example code would be appreciated.
I think I found your error. In enhance_image(), where you compose the final image, i.e. enhanced = np.clip(img_norm + alpha * teager_output, 0, 1) you accidentally use your normalized image img_norm instead of the mapped image mapped_img. Replacing this line with enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1) produces something useful: Note that the Teager filter only enhances high-frequency components of your image. It would make no strong difference to teager_output whether you pass mapped_img or img_norm to it. Thus, when composing the low-pass and high-pass parts, you have to use mapped_img in order to keep the applied mapping. I would also suggest keeping file I/O outside your image processing functions; this makes it easier to inject other data for debugging purposes. def enhance_image(img, alpha, map_type='none'): """Enhance images with optional input mapping""" img_norm = normalize(img) # Image normalization mapped_img = input_mapping(img_norm, map_type) # Input mapping teager_output = teager_filter(mapped_img) # Teager filter # Compose enhanced image, enh = map(x) + alpha * teager enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1) return denormalize(enhanced) # Map back to original range
1
1
79,620,550
2025-5-13
https://stackoverflow.com/questions/79620550/python-global-variable-changes-depending-on-how-script-is-run
I have a short example Python script that I'm calling glbltest.py: a = [] def fun(): global a a = [20,30,40] print("before ",a) fun() print("after ",a) If I run it from the command line, I get what I expect: $ python glbltest.py before [] after [20, 30, 40] I open a Python shell and run it by importing, and I get basically the same thing: >>> from glbltest import * before [] after [20, 30, 40] So far so good. Now I comment out those last three lines and do everything "by hand": >>> from glbltest import * >>> a [] >>> fun() # I run fun() myself >>> a # I look at a again. Surely I will get the same result as before! [] # No! I don't! What is the difference between fun() being run "automatically" by the importing of the script, and me running fun() "by hand"?
global a refers to the name a in the glbltest module's namespace. When you set a by hand, it refers to the name a in the __main__ module's namespace. When you use from glbltest import * the names in the module are imported into the __main__ module's namespace. Those are different names but refer to the same objects. When you use global a and a = [20,30,40] in the glbltest module, assignment makes a new object that a in glbltest module's namespace now refers to. The name a in the __main__ module still refers to the original object (the empty list). As a simple example, print the id() of a in the fun() function, and print(id(a)) "by hand" after you set it: a = [] def fun(): global a print(a, id(a)) a = [20,30,40] print(a, id(a)) # To view the global a object id again def show(): print(a, id(a)) "by hand", with comments: >>> from glbltest import * >>> a # the imported name [] >>> id(a) # its object ID 2056911113280 >>> fun() [] 2056911113280 # start of fun() the object is the same ID [20, 30, 40] 2056902829312 # but assignment changes to new object (different ID) >>> a [] # main a still refers to original object >>> id(a) 2056911113280 >>> show() # glbltest module still sees *its* global a [20, 30, 40] 2056902829312 Note that if you use mutation vs. assignment to change the existing object. You'll see the change: a = [] def fun(): global a print(a, id(a)) a.extend([20,30,40]) # modify existing object, not assigning a new object. print(a, id(a)) # To view the global a object id again def show(): print(a, id(a)) Now the object IDs remain the same. >>> from glbltest import * >>> a, id(a) # import object ([], 1408887112064) >>> fun() [] 1408887112064 # before change still the same object [20, 30, 40] 1408887112064 # mutated the *existing* list >>> a, id(a) ([20, 30, 40], 1408887112064) # main's 'a' refers to the same object, same ID >>> show() [20, 30, 40] 1408887112064 # glbltest refers to the same object, same ID It's a bit more obvious that the names are different if you just import the module and the module's a can be referred to directly as glbltest.a. a = [] def fun(): global a a = [20,30,40] >>> import glbltest >>> glbltest.a [] >>> a = 5 # main's a >>> a 5 >>> glbltest.a # module's a [] >>> glbltest.fun() >>> a # main's a doesn't change 5 >>> glbltest.a # module's a does. [20, 30, 40]
1
3
79,620,294
2025-5-13
https://stackoverflow.com/questions/79620294/how-can-i-share-one-requests-session-across-all-flask-routes-and-close-it-cleanl
I’m building a small Flask 3.0 / Python 3.12 micro-service that calls an external REST API on almost every request. Right now each route makes a new requests.Session, which is slow and leaks sockets under load: from flask import Flask, jsonify import requests app = Flask(__name__) @app.get("/info") def info(): with requests.Session() as s: r = s.get("https://api.example.com/info") return jsonify(r.json()) What I tried: a global variable session = requests.Session(), but I get a resource warning through the above. How can I re-use one requests.Session for all incoming requests and close it exactly once when the application exits?
Use serving-lifecycle hooks: @app.before_serving – runs once per worker, right before the first request is accepted. @app.after_serving – runs once on a clean shutdown. Create the requests.Session in the first hook, stash it on the application object, and close it in the second.
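For reference, here is a minimal sketch of the one-session-per-worker idea. Note that, as far as I know, the before_serving/after_serving decorators come from Quart's API rather than stock Flask 3.0, so treat their availability as an assumption to verify; the fallback below creates the session once at app-creation time and closes it with atexit instead:
import atexit
import requests
from flask import Flask, jsonify

def create_app():
    app = Flask(__name__)
    # One shared Session per worker process, reused by every request.
    app.http = requests.Session()
    # Close it exactly once, when the worker process exits.
    atexit.register(app.http.close)

    @app.get("/info")
    def info():
        # Endpoint URL taken from the question.
        r = app.http.get("https://api.example.com/info")
        return jsonify(r.json())

    return app

app = create_app()
This keeps connection pooling (and its speed-up) across requests while still guaranteeing a single close.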
1
1
79,620,333
2025-5-13
https://stackoverflow.com/questions/79620333/insert-new-column-of-blanks-into-an-existing-dataframe
I have an existing dataframe: data = [[5011025, 234], [5012025, 937], [5013025, 625]] df = pd.DataFrame(data) output: 0 1 0 5011025 234 1 5012025 937 2 5013025 625 What I need to do is insert a new column at 0 (the same # of rows) that contains 3 spaces. Recreating the dataframe, from scratch, it would be something like this: data = [[' ',5011025, 234], [' ',5012025, 937], [' ',5013025, 625]] df = pd.DataFrame(data) desired output: 0 1 2 0 5011025 234 1 5012025 937 2 5013025 625 What is the best way to insert() this new column into an existing dataframe, that may be hundreds of rows? Ultimately, i'm trying to figure out how to write a function that will shift all columns of a dataframe x number of spaces to the right.
Based on your comment, you could shift all cols up one and add a col 0 like this: import pandas as pd data = [[5011025, 234], [5012025, 937], [5013025, 625]] df = pd.DataFrame(data) df.columns = df.columns + 1 df[0] = ' ' df = df.sort_index(axis=1)
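Since the question ultimately asks for a function that shifts all columns x positions to the right, here is a sketch generalizing the same relabel-and-sort trick; it assumes the default integer (RangeIndex-style) column labels from the example, and the three-space fill string is just the question's padding:
import pandas as pd

def shift_columns_right(df, x, fill=' ' * 3):
    out = df.copy()
    out.columns = out.columns + x      # relabel existing columns x positions up
    for i in range(x):                 # add blank columns 0 .. x-1
        out[i] = fill
    return out.sort_index(axis=1)

data = [[5011025, 234], [5012025, 937], [5013025, 625]]
print(shift_columns_right(pd.DataFrame(data), 2))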
2
2
79,620,088
2025-5-13
https://stackoverflow.com/questions/79620088/how-can-i-make-a-simple-idempotent-post-endpoint-in-a-flask-micro-service
I’m building a small internal micro-service in Python 3.12 / Flask 3.0. The service accepts POST /upload requests that insert a record into PostgreSQL. Problem: Mobile clients sometimes retry the request when the network is flaky, so I end up with duplicate rows: @app.post("/upload") def upload(): payload = request.get_json() db.execute( "INSERT INTO photos (user_id, filename, uploaded_at) VALUES (%s, %s, NOW())", (payload["user_id"], payload["filename"]), ) return jsonify({"status": "ok"}), 201 What I tried: Added a UNIQUE (user_id, filename) constraint – works, but clients now get a raw SQL error on duplicate inserts. Wrapped the insert in ON CONFLICT DO NOTHING – avoids the error but I can’t tell whether the row was really inserted. Googled for "Flask idempotent POST" and found libraries like Flask-Idem, but they feel heavyweight for a single route. Question: What’s the simplest, idiomatic way in Flask to make this endpoint idempotent so that: POSTing the same photo twice is harmless; the client still gets a clear 201 Created the first time and 200 OK for retries; and I don’t have to introduce extra infrastructure (Kafka, Redis, etc.)?
Give the table a uniqueness guarantee so duplicates physically can’t happen. Use an UPSERT (INSERT … ON CONFLICT) with RETURNING so you know whether the row was really inserted. Map that to HTTP status codes.
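A sketch of how that can look with the question's snippet; the ON CONFLICT target, the id column in RETURNING, and the cursor-style db.execute(...).fetchone() call are assumptions about your schema and DB helper rather than something stated in the question:
from flask import request, jsonify

@app.post("/upload")
def upload():
    payload = request.get_json()
    row = db.execute(
        "INSERT INTO photos (user_id, filename, uploaded_at) VALUES (%s, %s, NOW()) "
        "ON CONFLICT (user_id, filename) DO NOTHING RETURNING id",
        (payload["user_id"], payload["filename"]),
    ).fetchone()
    if row is not None:
        # RETURNING only yields a row when this call actually inserted it.
        return jsonify({"status": "created"}), 201
    # Duplicate from a retry: harmless, report success without re-inserting.
    return jsonify({"status": "already uploaded"}), 200
Here app and db are the same objects as in the question.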
1
1
79,619,950
2025-5-13
https://stackoverflow.com/questions/79619950/is-there-a-way-to-filter-columns-of-a-pandas-dataframe-which-include-elements-of
In the dataframe below I would like to filter the columns based on a list called 'animals' to select all the columns that include the list elements. animal_data = { "date": ["2023-01-22","2023-11-16","2024-06-30","2024-08-16","2025-01-22"], "cats_fostered": [1,2,3,4,5], "cats_adopted":[1,2,3,4,5], "dogs_fostered":[1,2,3,4,5], "dogs_adopted":[1,2,3,4,5], "rabbits_fostered":[1,2,3,4,5], "rabbits_adopted":[1,2,3,4,5] } animals = ["date","cat","rabbit"] The desired output would be: animal_data = { "date": ["2023-01-22","2023-11-16","2024-06-30","2024-08-16","2025-01-22"], "cats_fostered": [1,2,3,4,5], "cats_adopted":[1,2,3,4,5], "rabbits_fostered":[1,2,3,4,5], "rabbits_adopted":[1,2,3,4,5] } I have tried some approaches below, but they either don't work with lists or return no columns, as they look for an exact match with 'cats' or 'rabbits' rather than columns that merely contain those strings. animal_data[animal_data.columns.intersection(animals)] # returns an empty df animal_data.filter(regex=animals) # returns an error: not able to use regex with a list
The issue with both attempts is that you are looking for a substring of the column names. Except for the date column, there is not a full match between the strings in the animals list and the actual column names. One possibility is to construct the regex with "|".join(...) and use .filter, or to use a "manual" list comprehension with string operations (for example in or .startswith). You can also "hardcode" "date" so the animals list only contains animals. >>> animals = ["cat"] >>> df.filter(regex="date|" + "|".join(animals)) date cats_fostered cats_adopted 0 2023-01-22 1 1 1 2023-11-16 2 2 2 2024-06-30 3 3 3 2024-08-16 4 4 4 2025-01-22 5 5 >>> animals = ["cat", "rabbit"] >>> df.filter(regex="date|" + "|".join(animals)) date cats_fostered cats_adopted rabbits_fostered rabbits_adopted 0 2023-01-22 1 1 1 1 1 2023-11-16 2 2 2 2 2 2024-06-30 3 3 3 3 3 2024-08-16 4 4 4 4 4 2025-01-22 5 5 5 5
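For completeness, a sketch of the "manual" list-comprehension variant mentioned above, using the substring test with in (it assumes df = pd.DataFrame(animal_data) built from the question's dict):
import pandas as pd

df = pd.DataFrame(animal_data)   # the dict from the question
animals = ["cat", "rabbit"]      # "date" is kept explicitly, as suggested above

cols = [c for c in df.columns if c == "date" or any(a in c for a in animals)]
print(df[cols].columns.tolist())
# ['date', 'cats_fostered', 'cats_adopted', 'rabbits_fostered', 'rabbits_adopted']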
1
0
79,619,717
2025-5-13
https://stackoverflow.com/questions/79619717/how-to-count-consecutive-increases-in-a-1d-array
I have a 1d numpy array. It's mostly decreasing, but it increases in a few places. I'm interested in the places where it increases in several consecutive elements, and how many consecutive elements it increases for in each case. In other words, I'm interested in the lengths of increasing contiguous sub-arrays. I'd like to compute and store this information in an array with the same shape as the input (e.g. that I could use for plotting). This could be achieved using cumsum on a binary mask, except I want to reset the accumulation every time the array starts decreasing again. See example input and expected output below. How do I do that? import numpy as np def count_consecutive_increases(y: np.ndarray) -> np.ndarray: ... y = np.array([9, 8, 7, 9, 6, 5, 6, 7, 8, 4, 3, 1, 2, 3, 0]) c = count_consecutive_increases(y) print(y) print(c) # >>> [9 8 7 9 6 5 6 7 8 4 3 1 2 3 0] # >>> [0 0 0 1 0 0 1 2 3 0 0 0 1 2 0]
Here is another solution: import numpy as np def count_consecutive_increases(y: np.ndarray) -> np.ndarray: increases = np.diff(y, prepend=y[0]) > 0 all_summed = np.cumsum(increases) return all_summed - np.maximum.accumulate(all_summed * ~increases) y = np.array([9, 8, 7, 9, 6, 5, 6, 7, 8, 4, 3, 1, 2, 3, 0]) c = count_consecutive_increases(y) print(y) # >>> [9 8 7 9 6 5 6 7 8 4 3 1 2 3 0] print(c) # >>> [0 0 0 1 0 0 1 2 3 0 0 0 1 2 0] The idea is the same as with the solution proposed by OP, albeit a bit shorter: Naively count (by summing over) all indices that have been marked as increasing, then subtract, for each consecutive increasing segment, the count right before its start (the value of which we get by a cumulative maximum over the naive counts at the positions marked as not increasing).
3
3
79,619,760
2025-5-13
https://stackoverflow.com/questions/79619760/polars-list-eval-difference-between-pl-element-and-pl-all
the Polars user guide on Lists and Arrays explains how to manipulate Lists with common expression syntax using .list.eval(), i.e. how to operate on list elements. More specifically, the user guide states: The function eval gives us access to the list elements and pl.element refers to each individual element, but we can also use pl.all() to refer to all of the elements of the list. I do not understand the difference between using pl.element() vs pl.all(), i.e. when this distinction between individual and all elements mentioned in the quote becomes important. In the example below, both yield exactly the same result for various expressions. What am I missing? Thank you so much for your help! import polars as pl df = pl.DataFrame( { "a": [[1], [3,2], [6,4,5]] } ) print(df) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a β”‚ β”‚ --- β”‚ β”‚ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ [1] β”‚ β”‚ [3, 2] β”‚ β”‚ [6, 4, 5] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ## using pl.element() result_element = df.with_columns( pl.col("a").list.eval(pl.element()**2).alias("square"), pl.col("a").list.eval(pl.element().rank()).alias("rank"), pl.col("a").list.eval(pl.element().count()).alias("count") ) ## using pl.all() result_all = df.with_columns( pl.col("a").list.eval(pl.all()**2).alias("square"), pl.col("a").list.eval(pl.all().rank()).alias("rank"), pl.col("a").list.eval(pl.all().count()).alias("count") ) print(result_element.equals(result_all)) True
The method pl.all() called without arguments refers to all columns available in the context. It does not have a special meaning within list.eval(), but since the only column available inside of it is the one holding the list elements, it works the same as pl.element(). You could also get the same behavior using either pl.col(''), a column whose name is an empty string (that is the name of the column of list elements inside of .list.eval(...)), which is equivalent to pl.element(), or pl.col('*'), a special name that selects all columns, which is equivalent to pl.all(): import polars as pl from polars.testing import assert_frame_equal df = pl.DataFrame({'x': [[1,2,3]]}) def test(expression): return pl.col('x').list.eval(expression.mul(2).add(1)) reference = df.select(test(pl.element())) for expression in [pl.all(), pl.col(''), pl.col('*')]: assert_frame_equal( reference, df.select(test(expression)) )
2
1
79,619,061
2025-5-13
https://stackoverflow.com/questions/79619061/replacing-values-in-columns-with-values-from-another-columns-according-to-mappin
I have this kind of dataframe: df = pd.DataFrame({ "A1": [1, 11, 111], "A2": [2, 22, 222], "A3": [3, 33, 333], "A4": [4, 44, 444], "A5": [5, 55, 555] }) A1 A2 A3 A4 A5 0 1 2 3 4 5 1 11 22 33 44 55 2 111 222 333 444 555 and this kind of mapping: mapping = { "A1": ["A2", "A3"], "A4": ["A5"] } which means that I want all columns in list to have values from key column so: A2 and A3 should be populated with values from A1, and A5 should be populated with values from A4. Resulting dataframe should look like this: A1 A2 A3 A4 A5 0 1 1 1 4 4 1 11 11 11 44 44 2 111 111 111 444 444 I managed to do it pretty simply like this: for k, v in mapping.items(): for col in v: df[col] = df[k] but I was wondering if there is vectorized way of doing it (more pandactic way)?
You could rework the dictionary and use assign: out = df.assign(**{col: df.get(k) for k, v in mapping.items() for col in v}) NB. assign is not in place, either use this in chained commands, or reassign to df. Or you could reindex and rename/set_axis: dic = {v: k for k, l in mapping.items() for v in l} out = (df.reindex(columns=df.rename(columns=dic).columns) .set_axis(df.columns, axis=1) ) Output: A1 A2 A3 A4 A5 0 1 1 1 4 4 1 11 11 11 44 44 2 111 111 111 444 444
5
4
79,618,775
2025-5-13
https://stackoverflow.com/questions/79618775/how-to-add-new-feature-to-torch-geometric-data-object
I am using the Zinc graph dataset via torch geometric which I access as zinc_dataset = ZINC(root='my_path', split='train') Each data element is a graph zinc_dataset[0] looks like Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1]) I have computed a tensor valued feature for each graph in the dataset. I have stored these tensors in a list with the ith element of the list being the feature for the ith graph in zinc_dataset. I would like to add these new features to the data object. So ideally I want the result to be Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1], new_feature=[33,12]) I have looked at the solution provided by How to add a new attribute to a torch_geometric.data Data object element? but that hasn't worked for me. Could someone please help me take my list of new features and include them in the data object? Thanks
To add your list of new features (e.g. List[Tensor], with each tensor corresponding to a graph in the dataset) to each torch_geometric.data.Data object in a Dataset like ZINC, you can simply assign each new tensor as an attribute of the corresponding Data object. Here’s how you can do it step-by-step: import torch from torch_geometric.datasets import ZINC from torch_geometric.data import InMemoryDataset # 1. Load the ZINC training dataset zinc_dataset = ZINC(root='my_path', split='train') # 2. Create a list of new features for each graph # Replace this with your actual feature list (must match number of nodes per graph) new_features = [] for data in zinc_dataset: num_nodes = data.x.size(0) # data.x is [num_nodes, feature_dim] new_feat = torch.randn(num_nodes, 12) # Example: [num_nodes, 12] new_features.append(new_feat) # 3. Define a custom dataset that injects new_feature into each graph's Data object class ModifiedZINC(InMemoryDataset): def __init__(self, original_dataset, new_features_list): self.data_list = [] for i in range(len(original_dataset)): data = original_dataset[i] data.new_feature = new_features_list[i] self.data_list.append(data) super().__init__('.', transform=None, pre_transform=None) self.data, self.slices = self.collate(self.data_list) def __len__(self): return len(self.data_list) def get(self, idx): return self.data_list[idx] # 4. Create the modified dataset with new features modified_dataset = ModifiedZINC(zinc_dataset, new_features) # 5. Check the result sample = modified_dataset[0] print(sample) print("Shape of new feature:", sample.new_feature.shape) Output: Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1], new_feature=[33, 12]) Shape of new feature: torch.Size([33, 12])
2
2
79,621,854
2025-5-14
https://stackoverflow.com/questions/79621854/compute-cumulative-mean-std-on-polars-dataframe-using-over
I want to compute the cumulative mean & std on a polars dataframe column. For the mean I tried this: import polars as pl df = pl.DataFrame({ 'value': [4, 6, 8, 11, 5, 6, 8, 15], 'class': ['A', 'A', 'B', 'A', 'B', 'A', 'B', 'B'] }) df.with_columns(cum_mean=pl.col('value').cum_sum().over('class') / pl.int_range(pl.len()).add(1).over('class')) which correctly gives shape: (8, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ value ┆ class ┆ cum_mean β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ══════════║ β”‚ 4 ┆ A ┆ 4.0 β”‚ β”‚ 6 ┆ A ┆ 5.0 β”‚ β”‚ 8 ┆ B ┆ 8.0 β”‚ β”‚ 11 ┆ A ┆ 7.0 β”‚ β”‚ 5 ┆ B ┆ 6.5 β”‚ β”‚ 6 ┆ A ┆ 6.75 β”‚ β”‚ 8 ┆ B ┆ 7.0 β”‚ β”‚ 15 ┆ B ┆ 9.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ However, this seems very clunky, and becomes a little more complicated (and possibly error-prone) for std. Is there a nicer (possibly built-in) version for computing the cum mean & cum std?
I might have a solution which is more clean. You can get to it using rolling-functions like rolling_mean or rolling_std. Here is my proposal: df.with_columns( cum_mean=pl.col('value').cum_sum().over('class')/pl.col('value').cum_count().over('class'), cum_mean_by_rolling=pl.col('value').rolling_mean(window_size=df.shape[0], min_samples=1).over('class'), cum_std_by_rolling=pl.col('value').rolling_std(window_size=df.shape[0], min_samples=1).over('class') ) If you define the window size as the number of rows in the data frame (df.shape[0]) and the minimum number of samples as 1, then you can get the wanted result. I also changed your implementation for the cum_mean a bit so that it is a bit shorter. If I run the code I get this result. shape: (8, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ value ┆ class ┆ cum_mean ┆ cum_mean_by_rolling ┆ cum_std_by_rolling β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ══════════β•ͺ═════════════════════β•ͺ════════════════════║ β”‚ 4 ┆ A ┆ 4.0 ┆ 4.0 ┆ null β”‚ β”‚ 6 ┆ A ┆ 5.0 ┆ 5.0 ┆ 1.414214 β”‚ β”‚ 8 ┆ B ┆ 8.0 ┆ 8.0 ┆ null β”‚ β”‚ 11 ┆ A ┆ 7.0 ┆ 7.0 ┆ 3.605551 β”‚ β”‚ 5 ┆ B ┆ 6.5 ┆ 6.5 ┆ 2.12132 β”‚ β”‚ 6 ┆ A ┆ 6.75 ┆ 6.75 ┆ 2.986079 β”‚ β”‚ 8 ┆ B ┆ 7.0 ┆ 7.0 ┆ 1.732051 β”‚ β”‚ 15 ┆ B ┆ 9.0 ┆ 9.0 ┆ 4.242641 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I did not find a more suitable build in function. Hope this helps.
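If you would rather avoid the full-length rolling window, the cumulative std can also be derived from cumulative sums via the usual sum-of-squares identity (sample variance, ddof=1, matching rolling_std). This is only a sketch: it can lose precision through cancellation on large values, and the first row of each group comes out NaN rather than null:
import polars as pl

df = pl.DataFrame({
    'value': [4, 6, 8, 11, 5, 6, 8, 15],
    'class': ['A', 'A', 'B', 'A', 'B', 'A', 'B', 'B']
})

n = pl.col('value').cum_count().over('class')
s = pl.col('value').cum_sum().over('class')
s2 = (pl.col('value') ** 2).cum_sum().over('class')

out = df.with_columns(
    cum_mean=s / n,
    # var = (sum(x^2) - sum(x)^2 / n) / (n - 1)
    cum_std=((s2 - s ** 2 / n) / (n - 1)).sqrt(),
)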
2
2
79,620,883
2025-5-14
https://stackoverflow.com/questions/79620883/how-do-i-repeat-one-dataframe-to-match-the-length-of-another-dataframe
I want to combine two DataFrames of unequal length to a new DataFrame with the size of the larger one. Now, specifically, I want to pad the values of the shorter array by repeating it until it is large enough. I know this is possible for lists using itertools.cycle as follows: from itertools import cycle x = range(7) y = range(43) combined = zip(cycle(x), y) Now I want to do the same for DataFrames: import pandas as pd df1 = pd.DataFrame(...) # length 7 df2 = pd.DataFrame(...) # length 43 df_comb = pd.concat([cycle(df1),df2], axis=1) Of course this doesn't work, but I don't know if there is an option to do this or to just manually repeat the array.
If you want to combine the two DataFrames to obtain an output DataFrame of the length of the longest input with repetitions of the smallest input that restart like itertools.cycle, you could compute a common key (with numpy.arange and the modulo (%) operator) to perform a merge: out = (df1.merge(df2, left_on=np.arange(len(df1))%len(df2), right_on=np.arange(len(df2))%len(df1)) .drop(columns=['key_0']) ) Output: col1 col2 col3 col4 0 A X a Y 1 B X b Y 2 C X c Y 3 D X a Y 4 E X b Y 5 F X c Y 6 G X a Y Intermediate without dropping the merging key: key_0 col1 col2 col3 col4 0 0 A X a Y 1 1 B X b Y 2 2 C X c Y 3 0 D X a Y 4 1 E X b Y 5 2 F X c Y 6 0 G X a Y Used inputs: # df1 col1 col2 0 A X 1 B X 2 C X 3 D X 4 E X 5 F X 6 G X # df2 col3 col4 0 a Y 1 b Y 2 c Y
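An alternative sketch of the same cycling idea without a merge, using positional indexing; df1 and df2 refer to the "Used inputs" frames above, with df1 the longer one:
import numpy as np
import pandas as pd

cycled = df2.iloc[np.arange(len(df1)) % len(df2)].reset_index(drop=True)
out = pd.concat([df1.reset_index(drop=True), cycled], axis=1)
This produces the same 7-row result as the merge-based version.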
1
1
79,620,845
2025-5-14
https://stackoverflow.com/questions/79620845/how-is-np-repeat-so-fast
I am implementing the Poisson bootstrap in Rust and wanted to benchmark my repeat function against numpy's. Briefly, repeat takes in two arguments, data and weight, and repeats each element of data by the weight, e.g. [1, 2, 3], [1, 2, 0] -> [1, 2, 2]. My naive version was around 4.5x slower than np.repeat. pub fn repeat_by(arr: &[f64], repeats: &[u64]) -> Vec<f64> { // Use flat_map to create a single iterator of all repeated elements let result: Vec<f64> = arr .iter() .zip(repeats.iter()) .flat_map(|(&value, &count)| std::iter::repeat_n(value, count as usize)) .collect(); result } I also tried a couple more versions, e.g. one where I pre-allocated a vector with the necessary capacity, but all performed similarly. While investigating further, though, I found that np.repeat is actually way faster than other numpy functions that I expected to perform similarly. For example, we can build a list of indices and use numpy slicing / take to perform the same operation as np.repeat. However, doing this (and even removing the list construction from the timings), np.repeat is around 3x faster than numpy slicing / take. import timeit import numpy as np N_ROWS = 100_000 x = np.random.rand(N_ROWS) weight = np.random.poisson(1, len(x)) # pre-compute the indices so slow python looping doesn't affect the timing indices = [] for idx, w in enumerate(weight): for _ in range(w): indices.append(idx) print(timeit.timeit(lambda: np.repeat(x, weight), number=1_000)) # 0.8337333500003297 print(timeit.timeit(lambda: np.take(x, indices), number=1_000)) # 3.1320624930012855 My C is not so good, but it seems like the relevant implementation is here: https://github.com/numpy/numpy/blob/main/numpy/_core/src/multiarray/item_selection.c#L785. It would be amazing if someone could help me understand at a high level what this code is doing: on the surface, it doesn't look like anything particularly special (SIMD, etc.), and looks pretty similar to my naive Rust version (memcpy vs repeat_n). In addition, I am struggling to understand why it performs so much better than even numpy slicing.
TL;DR: the gap is certainly due to the use of wider loads/stores in Numpy than your Rust code, and you should avoid indexing if you can for sake of performance. Performance of the Numpy code VS your Rust code First of all, we can analyse the assembly code generated from your Rust code (I am not very familiar with Rust but I am with assembly). The generated code is quite big, but here is the main part (see it on Godbolt): example::repeat_by::hf03ad1ea376407dc: push rbp push r15 push r14 push r13 push r12 push rbx sub rsp, 72 mov r12, rdx cmp r8, rdx cmovb r12, r8 test r12, r12 je .LBB2_4 mov r14, rcx mov r15, r12 neg r15 mov ebx, 1 .LBB2_2: mov r13, qword ptr [r14 + 8*rbx - 8] test r13, r13 jne .LBB2_5 lea rax, [r15 + rbx] inc rax inc rbx cmp rax, 1 jne .LBB2_2 .LBB2_4: mov qword ptr [rdi], 0 mov qword ptr [rdi + 8], 8 mov qword ptr [rdi + 16], 0 jmp .LBB2_17 .LBB2_5: mov qword ptr [rsp + 48], rsi mov qword ptr [rsp + 56], rdi cmp r13, 5 mov ebp, 4 cmovae rbp, r13 lea rcx, [8*rbp] mov rax, r13 shr rax, 61 jne .LBB2_6 mov qword ptr [rsp + 8], 0 movabs rax, 9223372036854775800 cmp rcx, rax ja .LBB2_7 mov rax, qword ptr [rsp + 48] mov rax, qword ptr [rax + 8*rbx - 8] mov qword ptr [rsp + 16], rax mov rax, qword ptr [rip + __rust_no_alloc_shim_is_unstable@GOTPCREL] movzx eax, byte ptr [rax] mov eax, 8 mov qword ptr [rsp + 8], rax mov esi, 8 mov rdi, rcx mov qword ptr [rsp + 64], rcx call qword ptr [rip + __rust_alloc@GOTPCREL] mov rcx, qword ptr [rsp + 64] test rax, rax je .LBB2_7 mov rcx, qword ptr [rsp + 16] mov qword ptr [rax], rcx mov qword ptr [rsp + 24], rbp mov qword ptr [rsp + 32], rax mov qword ptr [rsp + 40], 1 mov ebp, 1 jmp .LBB2_11 .LBB2_22: mov rcx, qword ptr [rsp + 16] mov qword ptr [rax + 8*rbp], rcx inc rbp mov qword ptr [rsp + 40], rbp .LBB2_11: dec r13 je .LBB2_12 cmp rbp, qword ptr [rsp + 24] jne .LBB2_22 .LBB2_20: lea rdi, [rsp + 24] mov rsi, rbp mov rdx, r13 call alloc::raw_vec::RawVecInner<A>::reserve::do_reserve_and_handle::hd90f8297b476acb7 mov rax, qword ptr [rsp + 32] jmp .LBB2_22 .LBB2_12: cmp rbx, r12 jae .LBB2_16 inc rbx .LBB2_14: mov r13, qword ptr [r14 + 8*rbx - 8] test r13, r13 jne .LBB2_18 lea rcx, [r15 + rbx] inc rcx inc rbx cmp rcx, 1 jne .LBB2_14 jmp .LBB2_16 .LBB2_18: mov rcx, qword ptr [rsp + 48] mov rcx, qword ptr [rcx + 8*rbx - 8] mov qword ptr [rsp + 16], rcx cmp rbp, qword ptr [rsp + 24] jne .LBB2_22 jmp .LBB2_20 .LBB2_16: mov rax, qword ptr [rsp + 40] mov rdi, qword ptr [rsp + 56] mov qword ptr [rdi + 16], rax movups xmm0, xmmword ptr [rsp + 24] movups xmmword ptr [rdi], xmm0 .LBB2_17: mov rax, rdi add rsp, 72 pop rbx pop r12 pop r13 pop r14 pop r15 pop rbp ret .LBB2_6: mov qword ptr [rsp + 8], 0 .LBB2_7: lea rdx, [rip + .L__unnamed_2] mov rdi, qword ptr [rsp + 8] mov rsi, rcx call qword ptr [rip + alloc::raw_vec::handle_error::h5290ea7eaad4c986@GOTPCREL] mov rbx, rax mov rsi, qword ptr [rsp + 24] test rsi, rsi je .LBB2_25 mov rdi, qword ptr [rsp + 32] shl rsi, 3 mov edx, 8 call qword ptr [rip + __rust_dealloc@GOTPCREL] .LBB2_25: mov rdi, rbx call _Unwind_Resume@PLT We can see there there is only a single use of SIMD (xmm, ymm or zmm) registers and it is not in a loop. There is also no call to memcpy. This means the Rust computation is certainly not vectorised using SIMD instructions. The loops seems to move at best 64-bit items. The SSE (SIMD) instruction set can move 128-bit vectors and the AVX (SIMD) one can move 256-bit one (512-bit for AVX-512 supported only on few recent PC CPUs and most recent server ones). 
As a result, the Rust code is certainly sub-optimal because it performs scalar moves. On the other hand, Numpy basically calls memcpy in nested loops (in the linked code) as long as needs_custom_copy is false, which is I think the case for all basic contiguous native arrays like the one computed in your code (i.e. no pure-Python objects in the array). memcpy is generally aggressively optimized so it benefits from SIMD instructions on platforms where that is worthwhile. For very small copies, it can be slower than scalar moves though (due to the call and sometimes some checks). I expect the Rust code to be about 4 times slower than Numpy on a CPU supporting AVX-2 (assuming the target CPU actually supports a 256-bit-wide data-path, which is AFAIK the case on relatively recent mainstream CPUs) as long as the size of the copied slices is rather big (e.g. at least a few dozen double-precision items). Put shortly, the gap is certainly due to the (indirect) use of wide SIMD loads/stores in Numpy as opposed to the Rust code using less-efficient scalar loads/stores. Performance of np.repeat VS np.take I found that np.repeat is actually way faster than other numpy functions that I expected to perform similarly. [...] np.repeat is around 3x faster than numpy slicing / take. Regarding np.take, it is more expensive because it cannot really benefit from SIMD instructions and Numpy also needs to read the indices from memory. To be more precise, on x86-64 CPUs, AVX-2 and AVX-512 support gather instructions to do that, but they are not so fast compared to scalar loads (possibly even slower depending on the actual target micro-architecture of the CPU). For example, on AMD Zen+/Zen2/Zen3/Zen4 CPUs, gather instructions are not worth it (not faster), mainly because the underlying hardware implementation is not efficient yet (micro-coded). On relatively recent Intel CPUs supporting AVX-2, gather instructions are a bit faster, especially for 32-bit items and 32-bit addresses -- they do not really pay off for 64-bit ones (which is your use-case). On Intel CPUs supporting AVX-512 (mainly IceLake CPUs and server-side CPUs), they are worth it for both 32-bit and 64-bit items. x86-64 CPUs not supporting AVX-2 (i.e. old ones) do not support gather instructions at all. Even the best (x86-64) gather instruction implementation cannot compete with the (256-bit or 512-bit) packed loads/stores typically done by memcpy in np.repeat on wide slices, simply because all mainstream CPUs perform gathers as scalar loads (i.e. <=64-bit) internally, saturating the load ports. Some memcpy implementations use rep movsb, which is very well optimised on quite recent x86-64 CPUs (so as to adapt the load/store granularity to the use-case and even use streaming stores if needed on wide arrays). Even on GPUs (which have an efficient gather implementation), gather instructions are still generally more expensive than packed loads. They are at best equally fast, but one needs to consider the overhead of also reading the indices from memory, so it can never be faster. Put shortly, you should avoid indexing if you can since it is not very SIMD-friendly.
4
6