Dataset schema: question_id (int64, 59.5M to 79.6M), creation_date (date, 2020-01-01 to 2025-05-14), link (string, 60 to 163 chars), question (string, 53 to 28.9k chars), accepted_answer (string, 26 to 29.3k chars), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482).
79,526,465
2025-3-21
https://stackoverflow.com/questions/79526465/is-there-a-way-to-remove-stack-traces-in-python-log-handlers-using-filters-or-so
I know the subject sounds counter-intuitive but bear with me. My goal is to have two rotating file log handlers, mostly for logger.exception() calls, one that includes stack traces and one that doesn't. I find that error conditions can sometimes cause the logs to fill with stack traces, making them difficult to read/parse without creating a separate parsing script, which I'd prefer not to do. I would like to use the built-in RotatingFileHandler class if possible. Here's a simplified code snippet of what I'm trying to accomplish: # getLogger just returns an instance of a logger derived from the main Python logger # where I can override the logger.exception() method from myutils import getLogger logger = getLogger(__name__) try: x = 1 / 0 except ZeroDivisionError: # This one call should log to one file with the trace, one without the trace logger.exception("Oops") I have the infrastructure in place to have this write to separate log files already using separate handlers, but they both include the stack traces. Is there a mechanism (logging filter or otherwise) where a log handler can strip the stack trace from logger.exception() calls? I am assuming (or hoping) that logging filters attached to a handler can accomplish this, but I'm not sure how it can be done. And just as an FYI, here is the source code of the Python logger for Logging.exception() and Logging.error() calls: def error(self, msg, *args, **kwargs): """ Delegate an error call to the underlying logger. """ self.log(ERROR, msg, *args, **kwargs) def exception(self, msg, *args, exc_info=True, **kwargs): """ Delegate an exception call to the underlying logger. """ self.log(ERROR, msg, *args, exc_info=exc_info, **kwargs)
You can use a custom Formatter that removes the exception info needed to log the stack trace. See the source code of the Formatter's format method: once record.exc_info, record.exc_text and record.stack_info are set to None, no stack trace can be shown. import logging logger = logging.getLogger("foo") class NoStackTraceFormatter(logging.Formatter): def format(self, record): record.exc_info = None record.exc_text = None record.stack_info = None return logging.Formatter.format(self, record) handler_stacktrace = logging.FileHandler("stack.log") handler_no_stacktrace = logging.FileHandler("nostack.log") handler_no_stacktrace.setFormatter(NoStackTraceFormatter()) logger.addHandler(handler_stacktrace) logger.addHandler(handler_no_stacktrace) try: x = 1 / 0 except ZeroDivisionError: logger.exception("Oops") Then check the two files: # nostack.log Oops # stack.log Oops Traceback (most recent call last): File "main.py", line 23, in <module> x = 1 / 0 ZeroDivisionError: division by zero
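To answer the question as literally asked (a per-handler logging filter rather than a formatter), here is a minimal sketch. It assumes the handler that keeps the trace is added to the logger before the filtered one: the filter mutates the shared LogRecord, so ordering matters, and exc_text must be cleared too because the first handler's formatter caches the rendered traceback there.

```python
import logging

class StripStackTraceFilter(logging.Filter):
    def filter(self, record):
        # Clearing these prevents this handler's formatter from rendering a traceback.
        record.exc_info = None
        record.exc_text = None
        record.stack_info = None
        return True  # keep the record, just without the exception info

logger = logging.getLogger("foo")
handler_stacktrace = logging.FileHandler("stack.log")       # added first: still sees the trace
handler_no_stacktrace = logging.FileHandler("nostack.log")  # added second: filter strips it
handler_no_stacktrace.addFilter(StripStackTraceFilter())
logger.addHandler(handler_stacktrace)
logger.addHandler(handler_no_stacktrace)

try:
    x = 1 / 0
except ZeroDivisionError:
    logger.exception("Oops")
```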
1
1
79,526,873
2025-3-22
https://stackoverflow.com/questions/79526873/conda-environment-does-not-isolate-each-environment
I am trying to install some packages upon newly created conda environment. But weirdly it is already full of packages, which have been used in other previous conda environments. For example, I noticed var conda environment already share the packages with other environments although I never asked to - I used conda create --name var -y python=3.10 command: (base) XXX@XXX-XXX:/home/nas/XXX/B/C$ conda activate var (var) XXX@XXX-XXX:/home/nas/XXX/B/C$ pip freeze absl-py==2.1.0 av==14.2.0 certifi==2025.1.31 charset-normalizer==3.4.1 chumpy==0.70 cmake==3.31.6 contourpy==1.2.1 cycler==0.12.1 dearpygui==1.7.1 diff_gaussian_rasterization @ file:///home/nas/XXX/B/D/submodules/diff-gaussian-rasterization ... , which means conda environments are not isolated and mixed to each other, therefore jointly affected whenever I update the package version in each environment. One suspicious part is SafetyError: The package for libstdcxx-ng located at /home/nas/XXX/anaconda3/pkgs/libstdcxx-ng-11.2.0-h1234567_1 appears to be corrupted. The path 'lib/libstdc++.so.6.0.29' has an incorrect size.which is shown below of the following messages: (base) XXX@XXX-XXX:/home/nas/XXX/B/C$ conda create --name var -y python=3.10 Collecting package metadata (current_repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 22.9.0 latest version: 25.3.0 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: /home/nas4/XXX/anaconda3/envs/var added / updated specs: - python=3.10 The following NEW packages will be INSTALLED: _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main None _openmp_mutex pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu None bzip2 pkgs/main/linux-64::bzip2-1.0.8-h5eee18b_6 None ca-certificates pkgs/main/linux-64::ca-certificates-2025.2.25-h06a4308_0 None ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.40-h12ee557_0 None libffi pkgs/main/linux-64::libffi-3.4.4-h6a678d5_1 None libgcc-ng pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1 None libgomp pkgs/main/linux-64::libgomp-11.2.0-h1234567_1 None libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1 None libuuid pkgs/main/linux-64::libuuid-1.41.5-h5eee18b_0 None ncurses pkgs/main/linux-64::ncurses-6.4-h6a678d5_0 None openssl pkgs/main/linux-64::openssl-3.0.16-h5eee18b_0 None pip pkgs/main/linux-64::pip-25.0-py310h06a4308_0 None python pkgs/main/linux-64::python-3.10.16-he870216_1 None readline pkgs/main/linux-64::readline-8.2-h5eee18b_0 None setuptools pkgs/main/linux-64::setuptools-75.8.0-py310h06a4308_0 None sqlite pkgs/main/linux-64::sqlite-3.45.3-h5eee18b_0 None tk pkgs/main/linux-64::tk-8.6.14-h39e8969_0 None tzdata pkgs/main/noarch::tzdata-2025a-h04d1e81_0 None wheel pkgs/main/linux-64::wheel-0.45.1-py310h06a4308_0 None xz pkgs/main/linux-64::xz-5.6.4-h5eee18b_1 None zlib pkgs/main/linux-64::zlib-1.2.13-h5eee18b_1 None Preparing transaction: done Verifying transaction: | SafetyError: The package for libstdcxx-ng located at /home/nas/XXX/anaconda3/pkgs/libstdcxx-ng-11.2.0-h1234567_1 appears to be corrupted. The path 'lib/libstdc++.so.6.0.29' has an incorrect size. 
reported size: 17981480 bytes actual size: 1594864 bytes done Executing transaction: done # # To activate this environment, use # # $ conda activate var # # To deactivate an active environment, use # # $ conda deactivate I somewhat have a hint about the message since that file is affected by conda install -c "nvidia/label/cuda-11.7.1" cuda-toolkit ninja (which I used) as far as I know. Still I have no idea about whether it is related to my entangled environments problem and why it occurs.
How about first making visible, in your Python session, all environment variables relevant to it: import os for name, value in os.environ.items(): print("{0}: {1}".format(name, value)) When I was working with HPC (high-performance clusters) and Python behaved strangely - e.g. loading packages I didn't install myself - I looked at the PYTHONPATH environment variable. If it lists a lot of paths, changing their order or removing some of them made Python behave as I wanted. In a similar way, listing all system and conda environment variables could give you a hint: # list conda environment variables conda info # list also system environment variables conda info --system E.g. look at the PATH variable: is your /home/nas/XXX/anaconda3/envs/var/bin listed first? If something else comes first, e.g. base/bin or /usr/bin/python, that would be the cause of the leaking. ==> Put your conda env path first: execute export PATH=/home/nas/XXX/anaconda3/envs/var/bin:$PATH in your current shell and check whether the problem still persists. You could also copy the entire output of echo $PATH, remove suspicious paths, run export PATH=the-modified-paths in your current bash session, and then try again to see whether the problem is still there. PYTHONPATH should be empty unless you set it yourself => otherwise this could be the reason for Python packages already being available. ==> Unset it with: unset PYTHONPATH PYTHONUSERBASE should be unset or user-specific like ~/.local => packages from there would pollute a pip freeze. ==> Add export PYTHONNOUSERSITE=1 to your ~/.bashrc and source ~/.bashrc => this eliminates ~/.local's influence. CONDA_PREFIX should be your environment's base path /home/nas/XXX/anaconda3/envs/var. VIRTUAL_ENV (used by venv) should be empty - otherwise you are mixing venv and conda => bad! LD_LIBRARY_PATH should be empty or only point to a lib inside var => otherwise your libstdc++ or CUDA is not resolved only from var. Obviously, you are using some HPC cluster with conda, so it is very likely an environment-variable problem. Also check for dot files' existence and their contents: look around your system for anything suspicious set in ~/.bashrc, and for a .condarc file where such variables are manipulated (probably in your home directory: ~/.condarc).
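As a compact version of the checks above, a small sketch you could run inside the activated var environment (the variable names are the ones discussed; nothing here is specific to conda itself):

```python
import os
import sys

# Print the variables that most commonly cause environments to "leak" into each other.
for var in ("PATH", "PYTHONPATH", "PYTHONUSERBASE", "CONDA_PREFIX",
            "VIRTUAL_ENV", "LD_LIBRARY_PATH"):
    print(f"{var} = {os.environ.get(var, '<unset>')}")

# These should both point inside .../anaconda3/envs/var if the env is truly isolated.
print("sys.executable =", sys.executable)
print("sys.prefix     =", sys.prefix)
```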
1
1
79,526,834
2025-3-22
https://stackoverflow.com/questions/79526834/for-a-custom-mapping-class-that-returns-self-as-iterator-list-returns-empty
The following is a simplified version of what I am trying to do (the actual implementation has a number of nuances): from __future__ import annotations from collections.abc import MutableMapping class SideDict(MutableMapping, dict): """ The purpose of this special dict is to side-attach another dict. A key and its value from main dict are preferred over same key in the side-dict. If only a key is not present in main dict, then it is used from the side-dict. """ # The starting SideDict instance will have side_dict=None, a subsequent # SideDict instance can use the first instance as its side_dict. def __init__(self, data, side_dict: SideDict | None): self._store = dict(data) self._side_dict = side_dict self._iter_keys_seen = [] self._iter_in_side_dict = False self._iter = None # Also other stuff # Also implements __bool__, __contains__, __delitem__, __eq__, __getitem__, # __missing__, __or__, __setitem__ and others. def __iter__(self): self._iter_keys_seen = [] self._iter_in_side_dict = False self._iter = None return self def __next__(self): while True: # Start with an iterator that is on self._store if self._iter is None: self._iter = self._store.__iter__() try: next_ = self._iter.__next__() if next_ in self._iter_keys_seen: continue # Some other stuff I do with next_ self._iter_keys_seen.append(next_) return next_ except StopIteration as e: if self._side_dict is None or self._iter_in_side_dict: raise e else: # Switching to side-dict iterator self._iter_in_side_dict = True self._iter = self._side_dict.__iter__() def __len__(self): return len([k for k in self]) # Its not the most efficient, but # I don't know any other way. sd_0 = SideDict(data={"a": "A"}, side_dict=None) sd_1 = SideDict(data={"b": "B"}, side_dict=sd_0) sd_2 = SideDict(data={"c": "C"}, side_dict=sd_1) print(len(sd_0), len(sd_1), len(sd_2)) # all work fine print(list(sd_0)) # ! Here is the problem, shows empty list `[]` ! On putting some print()s, here is what I observed being called: list() triggers obj.__iter__() first. Followed by obj.__len__(). I vaguely understand that this is done so as to allocate optimal length of list. Because obj.__len__() has list-comprehension ([k for k in self]), it again triggers obj.__iter__(). Followed by obj.__next__() multiple times as it iterates through obj._store and obj._side_dict. When obj.__next__() hits the final un-silenced StopIteration, list-comprehension in obj.__len__() ends. Here the problem starts. list() seems to be calling obj.__next__() again immediately after ending obj.__len__(), and it hits StopIteration again. There is no obj.__iter__(). And so the final result is an empty list! What I think might be happening is that list() starts an iterator on its argument, but before doing anything else, it wants to find out the length. My __len__() uses an iterator itself, so it seems the both are using the same iterator. And then this iterator is consumed in obj.__len__(), and nothing left for outer list() to consume. Please correct me if I am wrong. So how can I change my obj.__len__() to use a non-clashing iterator?
The problem is that your object is its own iterator. Most objects should not be their own iterator - it only makes sense to do that if the object's only job is to be an iterator, or if there's some other inherent reason you shouldn't be able to perform two independent loops over the same object. Most iterable objects should return a new iterator object from __iter__, and not implement __next__. The simplest way to do this is usually by either writing __iter__ as a generator function, or returning an iterator over some other object that happens to have the right elements. For example, using the set-like union functionality of dict key views: def __iter__(self): return iter(self._store.keys() | self._side_dict.keys()) Or using a generator: def __iter__(self): yield from self._store for key in self._side_dict: if key not in self._store: yield key In this case, the generator has the advantage of not building the self._store.keys() | self._side_dict.keys() set. Also, unless you're writing this thing as a learning exercise, you should probably just use collections.ChainMap. It handles all of this already.
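For reference, a minimal sketch of the collections.ChainMap alternative mentioned above, mirroring the SideDict layering from the question: keys in earlier maps shadow the same keys in later maps, just like the main dict being preferred over the side dict.

```python
from collections import ChainMap

sd_0 = ChainMap({"a": "A"})
sd_1 = ChainMap({"b": "B"}, sd_0)   # sd_1 "side-attaches" sd_0
sd_2 = ChainMap({"c": "C"}, sd_1)   # sd_2 "side-attaches" sd_1

print(len(sd_0), len(sd_1), len(sd_2))  # 1 2 3
print(list(sd_2))                       # every key exactly once, e.g. ['a', 'b', 'c'] on recent CPython
print(sd_2["a"], sd_2["b"], sd_2["c"])  # A B C
```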
1
4
79,526,348
2025-3-21
https://stackoverflow.com/questions/79526348/matplotlib-unable-to-update-plot-with-button-widget
The code below draws a button and an axes object in which it's meant to print out the number of times the button has been pressed. However, it never updates the axes with the number of presses. from matplotlib import pyplot as plt from matplotlib.widgets import Button fig, ax = plt.subplots(figsize=(7,7)) ax.set_visible(False) class MyClass: def __init__(self, fig): self.N_presses = 0 def button_fx(self, event): ax_str.clear() self.N_presses += 1 text_out = "Pushed {} times".format(self.N_presses) ax_str.text(0.5, 0.5, text_out) print(text_out) MC = MyClass(fig) ax_str = fig.add_axes((0.25, 0.5, 0.5, 0.1)) ax_button = fig.add_axes((0.25, 0.3, 0.5, 0.1)) my_button = Button(ax_button, "Push this button", color="0.75", hovercolor="0.875") my_button.on_clicked(func=MC.button_fx) plt.show() As a check, I also have it print out the number of presses to the console, which happens as it should. It's only the axes that seem to be out of reach. Why can't I update the axes ax_str with new text using button_fx? I can't even plot to it. Is there a workaround? (Note: although it doesn't add anything in this MWE, the class is essential for my actual use case, so I need to know how to solve this problem while keeping the class structure.)
change : def button_fx(self, event): ax_str.clear() self.N_presses += 1 text_out = "Pushed {} times".format(self.N_presses) ax_str.text(0.5, 0.5, text_out) print(text_out) to def button_fx(self, event): ax_str.clear() self.N_presses += 1 text_out = "Pushed {} times".format(self.N_presses) ax_str.text(0.5, 0.5, text_out) plt.draw() ######## I've added this line print(text_out) and see if you get what you wanted: this code works too: from matplotlib import pyplot as plt from matplotlib.widgets import Button fig, ax = plt.subplots(figsize=(7,7)) ax.set_visible(False) class MyClass: def __init__(self, fig): self.N_presses = 0 self.ax_str = fig.add_axes((0.25, 0.5, 0.5, 0.1)) self.ax_button = fig.add_axes((0.25, 0.3, 0.5, 0.1)) self.my_button = Button(self.ax_button, "Push this button", color="0.75", hovercolor="0.875") self.my_button.on_clicked(func=self.button_fx) def button_fx(self, event): self.ax_str.clear() self.N_presses += 1 self.text_out = "Pushed {} times".format(self.N_presses) self.ax_str.text(0.5, 0.5, self.text_out) plt.draw() print(self.text_out) MC = MyClass(fig) plt.show()
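A variant of the same fix that avoids the pyplot state machine (useful if the figure is embedded in a GUI): the click event carries a reference to the canvas, and draw_idle() schedules a repaint. This is a sketch of the callback only, assuming the same class as above.

```python
def button_fx(self, event):
    self.ax_str.clear()
    self.N_presses += 1
    text_out = "Pushed {} times".format(self.N_presses)
    self.ax_str.text(0.5, 0.5, text_out)
    event.canvas.draw_idle()  # request a redraw of the figure the button lives in
    print(text_out)
```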
2
1
79,526,398
2025-3-21
https://stackoverflow.com/questions/79526398/pandas-group-by-without-performing-aggregation
I have a pandas dataframe as follows: Athlete ID City No. of Sport Fields 1231 LA 81 4231 NYC 80 2234 NJ 64 1223 SF 75 4531 LA 81 2345 NYC. 80 ... I want to print the City and No. of Sport Fields columns and group by City and sort by No. of Sport Fields. groupby() won't work here because I am not calculating anything.
In your example, it seems that No. of Sport Fields remains the same for a given City. You can therefore use first() after grouping by City, before sorting by No. of Sport Fields: df.groupby('City').first().sort_values(by='No. of Sport Fields') This returns: Athlete ID No. of Sport Fields City NJ 2234 64 SF 1223 75 NYC 4231 80 NYC. 2345 80 LA 1231 81
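If you would rather avoid groupby entirely (since nothing is being aggregated), an equivalent sketch using drop_duplicates on the two columns of interest:

```python
out = (df[['City', 'No. of Sport Fields']]
       .drop_duplicates()              # one row per (City, field count) pair
       .sort_values('No. of Sport Fields'))
print(out)
```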
2
1
79,526,103
2025-3-21
https://stackoverflow.com/questions/79526103/converting-rsa-generated-modulus-n-to-base64-string
I have a "modulus" as a long string of numbers that is obtained from doing the following: private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048) modulus = private_key.private_numbers().public_numbers.n which gives me this (modulus = ) 26430269838726291280672963883929276522234428127706081469034773908296247736139996682259102127358592459713530791841365862493123186868249887704862202193368911366855128282431762151411775448913702006864890463842779084995140786092249248736282702798861993161873918065709700856741944572285079076367907667914080902844624750622976126824522682693806275617591268441477045328753440100516039389493242021813789624216965389245973390154276959750292100226026141811533048330927545995241735560114821851311606450209870516259015344299837790769762906871134121821490748608899823911354842159754168574881499683924223044838326144226160998129721 I want to get this into the usual base64 interpretation of: 0V45nHfQFYZwdC7aES-0zkkhct3PM-fpxp9Lo6QZWmeaXSwS8gQVfJeJhmLp1097qlO3d-n0kblVouvH42LdlWgkzYq-lqP2Ny2M4z3a0VXCdIk1TAxM0Qse-QP6otsIoLKcT2p0JdIEOVeCC9BOLIEcGnWenqHsrm29i-21-zngbREUEQwM7UT55_vgywmJn9fB_NJFz-g7lLyhxwP8gMKSWwhMnQ4oAsRAfefEDr2a_0IPRqQE0r4L2WzgknW6aHex-KZ7LWCgLdLzH5iFUEdfvqJ6MhzlcJtpZQkFhwBVnfKVelCZcDex3TI374dbcvvO-3tVH7Ik4WEukrSgOQ From a routine I found on the internet, this can be done executing openssl to read a private key PEM encoded file, extract the modulus in hex, and then convert it. proc = subprocess.Popen(["openssl", "rsa", "-in", "./account.key", "-noout", "-text"], stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = proc.communicate(None) pub_pattern = r"modulus:[\s]+?00:([a-f0-9\:\s]+?)\npublicExponent: ([0-9]+)" pub_hex, _ = re.search(pub_pattern, out.decode('utf8'), re.MULTILINE|re.DOTALL).groups() modulus = base64.urlsafe_b64encode(binascii.unhexlify(re.sub(r"(\s|:)", "", pub_hex).encode("utf-8"))).decode('utf8').replace("=", "") print(pub_hex) print(modulus) which gives you: d1:5e:39:9c:77:d0:15:86:70:74:2e:da:11:2f: b4:ce:49:21:72:dd:cf:33:e7:e9:c6:9f:4b:a3:a4: 19:5a:67:9a:5d:2c:12:f2:04:15:7c:97:89:86:62: e9:d7:4f:7b:aa:53:b7:77:e9:f4:91:b9:55:a2:eb: c7:e3:62:dd:95:68:24:cd:8a:be:96:a3:f6:37:2d: 8c:e3:3d:da:d1:55:c2:74:89:35:4c:0c:4c:d1:0b: 1e:f9:03:fa:a2:db:08:a0:b2:9c:4f:6a:74:25:d2: 04:39:57:82:0b:d0:4e:2c:81:1c:1a:75:9e:9e:a1: ec:ae:6d:bd:8b:ed:b5:fb:39:e0:6d:11:14:11:0c: 0c:ed:44:f9:e7:fb:e0:cb:09:89:9f:d7:c1:fc:d2: 45:cf:e8:3b:94:bc:a1:c7:03:fc:80:c2:92:5b:08: 4c:9d:0e:28:02:c4:40:7d:e7:c4:0e:bd:9a:ff:42: 0f:46:a4:04:d2:be:0b:d9:6c:e0:92:75:ba:68:77: b1:f8:a6:7b:2d:60:a0:2d:d2:f3:1f:98:85:50:47: 5f:be:a2:7a:32:1c:e5:70:9b:69:65:09:05:87:00: 55:9d:f2:95:7a:50:99:70:37:b1:dd:32:37:ef:87: 5b:72:fb:ce:fb:7b:55:1f:b2:24:e1:61:2e:92:b4: a0:39 0V45nHfQFYZwdC7aES-0zkkhct3PM-fpxp9Lo6QZWmeaXSwS8gQVfJeJhmLp1097qlO3d-n0kblVouvH42LdlWgkzYq-lqP2Ny2M4z3a0VXCdIk1TAxM0Qse-QP6otsIoLKcT2p0JdIEOVeCC9BOLIEcGnWenqHsrm29i-21-zngbREUEQwM7UT55_vgywmJn9fB_NJFz-g7lLyhxwP8gMKSWwhMnQ4oAsRAfefEDr2a_0IPRqQE0r4L2WzgknW6aHex-KZ7LWCgLdLzH5iFUEdfvqJ6MhzlcJtpZQkFhwBVnfKVelCZcDex3TI374dbcvvO-3tVH7Ik4WEukrSgOQ I would like to do this using all python modules, not relying on executing openssl.
Since you already have the modulus as a decimal number, you only have to convert it to a hexadecimal number (with int.to_bytes()) and then Base64url encode it (with base64.urlsafe_b64encode()). The Base64 padding can be removed with rstrip(), e.g.: import base64 n = 26430269838726291280672963883929276522234428127706081469034773908296247736139996682259102127358592459713530791841365862493123186868249887704862202193368911366855128282431762151411775448913702006864890463842779084995140786092249248736282702798861993161873918065709700856741944572285079076367907667914080902844624750622976126824522682693806275617591268441477045328753440100516039389493242021813789624216965389245973390154276959750292100226026141811533048330927545995241735560114821851311606450209870516259015344299837790769762906871134121821490748608899823911354842159754168574881499683924223044838326144226160998129721 n_bytes = n.to_bytes(2048//8, 'big') n_b64 = base64.urlsafe_b64encode(n_bytes) print(n_b64.rstrip(b"=").decode()) # 0V45nHfQFYZwdC7aES-0zkkhct3PM-fpxp9Lo6QZWmeaXSwS8gQVfJeJhmLp1097qlO3d-n0kblVouvH42LdlWgkzYq-lqP2Ny2M4z3a0VXCdIk1TAxM0Qse-QP6otsIoLKcT2p0JdIEOVeCC9BOLIEcGnWenqHsrm29i-21-zngbREUEQwM7UT55_vgywmJn9fB_NJFz-g7lLyhxwP8gMKSWwhMnQ4oAsRAfefEDr2a_0IPRqQE0r4L2WzgknW6aHex-KZ7LWCgLdLzH5iFUEdfvqJ6MhzlcJtpZQkFhwBVnfKVelCZcDex3TI374dbcvvO-3tVH7Ik4WEukrSgOQ
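A small generalisation of the same idea: deriving the byte length from the integer itself, instead of hard-coding 2048 bits, keeps the helper valid for any key size. The function name int_to_base64url is just illustrative.

```python
import base64

def int_to_base64url(n: int) -> str:
    # Round the bit length up to whole bytes, then Base64url-encode without padding.
    n_bytes = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    return base64.urlsafe_b64encode(n_bytes).rstrip(b'=').decode()
```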
1
3
79,525,857
2025-3-21
https://stackoverflow.com/questions/79525857/offset-in-using-trapezoid-method-for-numerical-integration
I am using the trapezoidal method to integrate the squared modulus of this Gaussian function: In fact, the squared modulus of this function can be integrated analytically, resulting in: When I try to solve the integral both analytically and numerically, I get the following result (I simplify by setting ( t_0 = 0 ) and ( \sigma_t = 1 )): Essentially, there is an offset on the y-axis. This concerns me because I then have to subtract the obtained values from another function, and this offset propagates throughout the code. I can safely use the analytical solution for one case, but in the other, since the function comes from a frequency-domain reflection relationship of this Gaussian pulse, I am afraid that even in that case, when I integrate the squared modulus, there might be an offset as well. Do you know what could be causing this and could you possibly suggest what to do? This is my first time dealing with numerical methods. from scipy.integrate import trapezoid from scipy.interpolate import CubicSpline import numpy as np from scipy.special import erf import matplotlib.pyplot as plt def u_time(t_0): prefactor = 1 / (np.pi**0.25) def func(time): return prefactor * np.exp(-(time - t_0) ** 2 / 2) return func def u_integral(t_0, times): u_values = u_time(t_0)(times) u_abs_2 = np.abs(u_values) ** 2 u_abs_2_int = np.zeros_like(times, dtype=np.complex128) for k in range(1, len(times)): u_abs_2_int[k] = trapezoid(u_abs_2[:k], times[:k]) return CubicSpline(times, u_abs_2_int) def gaussian_squared_integral(t_0): return lambda time: 0.5 * (erf(time - t_0) + erf(t_0)) t_0 = 0 time_arr = np.linspace(-20, 100, 1000) analytic = u_integral(t_0, time_arr)(time_arr) numeric = gaussian_squared_integral(t_0)(time_arr) plt.plot(time_arr, analytic, label="Analytic") plt.plot(time_arr, numeric, label="Numeric") plt.legend() plt.show()
In fact, the squared modulus of this function can be integrated analytically, resulting in: It's that plus C. When you take the indefinite integral of this, you get that value, plus some arbitrary constant. If you want to use this indefinite integral / antiderivative to find the definite integral, you need to evaluate it twice: once at the start of your bounds and once at the ends. More information. One more note is that I believe you swapped the variables analytic and numeric, as the analytic solution is evaluating the function at many points and summing the area, and the numeric solution is based on the integral. Generally people would call the first thing numeric integration and the second thing analytic integration. I've fixed that below. from scipy.integrate import trapezoid from scipy.interpolate import CubicSpline import numpy as np from scipy.special import erf import matplotlib.pyplot as plt def u_time(t_0): prefactor = 1 / (np.pi**0.25) def func(time): return prefactor * np.exp(-(time - t_0) ** 2 / 2) return func def u_integral(t_0, times): u_values = u_time(t_0)(times) u_abs_2 = np.abs(u_values) ** 2 u_abs_2_int = np.zeros_like(times, dtype=np.complex128) for k in range(1, len(times)): u_abs_2_int[k] = trapezoid(u_abs_2[:k], times[:k]) return CubicSpline(times, u_abs_2_int) def gaussian_squared_integral(t_0): return lambda time: 0.5 * (erf(time - t_0) + erf(t_0)) t_0 = 0 a = -20 b = 100 time_arr = np.linspace(a, b, 1000) numeric = u_integral(t_0, time_arr)(time_arr) F_of_x_at_start_point = gaussian_squared_integral(t_0)(a) analytic = gaussian_squared_integral(t_0)(time_arr) - F_of_x_at_start_point plt.plot(time_arr, analytic, label="Analytic") plt.plot(time_arr, numeric, label="Numeric") plt.legend() plt.show() Lastly, I will note that in this loop: for k in range(1, len(times)): u_abs_2_int[k] = trapezoid(u_abs_2[:k], times[:k]) If you need to know the value of the integral at each point along the trapezoidal integration, SciPy has a faster alternative here called cumulative_trapezoid, which would let you avoid this loop.
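A short sketch of the cumulative_trapezoid alternative mentioned at the end, reproducing the running integral of the squared Gaussian without the Python-level loop (initial=0 makes the output the same length as the time grid):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t_0 = 0
times = np.linspace(-20, 100, 1000)
u_abs_2 = np.abs(np.exp(-(times - t_0) ** 2 / 2) / np.pi ** 0.25) ** 2

# Running integral from times[0] up to each sample point, starting at 0.
running_integral = cumulative_trapezoid(u_abs_2, times, initial=0)
```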
2
3
79,519,529
2025-3-19
https://stackoverflow.com/questions/79519529/how-to-run-julia-in-a-docker-container-in-linux-or-macos-e-g-for-running-wflow
Docker Container for Flask App with Julia-based Wflow: Package Not Found Error I want to make a docker container of my Flask App. I am running an open-source hydrological model called 'Wflow' (see github) inside my flask app. The Wflow runs with Julia command. Running Wflow inside Flask: julia_process = subprocess.Popen("julia", stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) # Julia commands to be executed julia_commands = f""" using Wflow Wflow.run("{toml_path}") """ # Send the commands to Julia through the process's standard input output, errors = julia_process.communicate(julia_commands) The Issue: The App with Julia command for wflow runs perfectly on local. But when I make its docker container, I somehow get the error of Julia wflow package not found despite installing it on my container. I'm unable to resolve it despite running different commands. My Dockerfile: # syntax=docker/dockerfile:1 # Use Mambaforge as the base image for Conda support FROM condaforge/mambaforge:latest # Set maintainer info LABEL maintainer="Maarten Pronk <[email protected]>" # Copy .cdsapirc to root directory for configuration COPY .cdsapirc /root/.cdsapirc # Create application directory RUN mkdir -p /app WORKDIR /app # Set the default shell for proper Conda usage SHELL ["/bin/bash", "-c"] # Install system dependencies RUN apt-get update && apt-get install -y \ g++ git gcc gunicorn curl libcurl4-openssl-dev \ && rm -rf /var/lib/apt/lists/* # Create HydroMT-WFlow environment with Julia support RUN mamba create -n hydromt-wflow -c conda-forge \ hydromt_wflow python=3.11 julia gunicorn -y # Set Conda's default environment for all future commands ENV CONDA_DEFAULT_ENV=hydromt-wflow ENV PATH=/opt/conda/envs/hydromt-wflow/bin:$PATH ENV JULIA_DEPOT_PATH="/opt/julia" # Ensure pip is installed inside the Conda environment RUN conda run -n hydromt-wflow python -m ensurepip && \ conda run -n hydromt-wflow python -m pip install --upgrade pip # Copy requirements.txt and install dependencies inside Conda environment COPY requirements.txt /app/requirements.txt RUN test -f /app/requirements.txt && conda run -n hydromt-wflow pip install --no-cache-dir -r requirements.txt || echo "No requirements.txt found" RUN julia -e 'println("Starting Wflow installation..."); \ using Pkg; \ Pkg.update(); \ println("Registry updated"); \ Pkg.add(name="Wflow"); \ println("Wflow installed successfully"); \ using Wflow; \ println("Wflow loaded and verified")' # Copy source code into container COPY . . # Expose the application port EXPOSE 5000 ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "hydromt-wflow"] CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "5", "--timeout", "5000", "--log-level", "debug", "--access-logfile", "-", "--error-logfile", "-", "app:app"] I also tried this inside docker to run Julia: RUN julia -e 'using Pkg; Pkg.develop(PackageSpec(url="https://github.com/Deltares/Wflow.jl.git"))' RUN julia -e 'using Pkg; \ Pkg.develop(PackageSpec(url="https://github.com/Deltares/Wflow.jl.git")); \ Pkg.instantiate(); \ using Wflow; \ println("Wflow successfully loaded")' Any help would be greatly appreciated, been stuck on this for quite a while.
Create and save the content of the Dockerfile: mkdir -p julia-wflow cd julia-wflow nano Dockerfile Then enter and save this content in the Dockerfile: # Force Ubuntu x86-64 (amd64) even on Apple Silicon # The docker build and run commands contain enforcing information # of amd64 architecture # in ubuntu - you don't need to do anything specially # pull ubuntu image FROM ubuntu:latest # Set environment variables to avoid interactive prompts ENV DEBIAN_FRONTEND=noninteractive # Install dependencies RUN apt-get update && apt-get install -y \ curl \ ca-certificates \ tar \ unzip \ && rm -rf /var/lib/apt/lists/* # Install Julia for x86-64 (amd64) RUN curl -fsSL https://julialang-s3.julialang.org/bin/linux/x64/1.10/julia-1.10.1-linux-x86_64.tar.gz | tar -xz -C /opt/ # Set environment path for Julia ENV PATH="/opt/julia-1.10.1/bin:$PATH" # Verify Julia installation RUN julia --version # Copy the Julia script COPY install_wflow.jl /install_wflow.jl # Run the script RUN julia /install_wflow.jl # Default to Julia interactive shell CMD ["julia"] If you are on macOS, first ensure the emulator is available: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes On macOS you have to manually stop and restart Docker Desktop first. I did this on macOS, which has an arm64 architecture - which complicated everything. Check on macOS whether QEMU is set up properly: docker run --rm --platform linux/amd64 ubuntu uname -m If it prints x86_64, the setup worked; if it prints aarch64, Docker is still running in arm64 mode. We build on macOS by enforcing the amd64 architecture; I expect this works on Linux too. docker buildx build --platform linux/amd64 --progress=plain -t julia-wflow . After a successful build, check: docker inspect julia-wflow | grep Architecture ## should give: "Architecture": "amd64" Finally, run the container: docker run --rm --platform linux/amd64 -it julia-wflow which drops you into a running Julia session! I once wrote articles about this topic - how to run an amd64 Docker container or a virtual machine inside arm64 macOS (M1, M2, M3) - if you want to read about it in more detail: https://blog.devgenius.io/how-to-run-ubuntu-linux-amd64-in-macbook-pro-arm64-with-multipass-12453fe97a17?sk=ee10193cea437759911897133e24d073 https://blog.devgenius.io/how-to-run-ubuntu-amd64-in-macbook-pro-arm64-with-docker-97f0c1e32e25?sk=38397ed7ed89b3a4118d821ecf61703d Updated answer If you want to run Python in addition, let's create a new environment for this: mkdir -p julia-wflow-app cd julia-wflow-app Create the following files: .
├── Dockerfile ├── app.py ├── julia_bridge.jl └── requirements.txt I will give the files with their inputs: nano Dockerfile # Use Mambaforge base image (with conda and Python) FROM condaforge/mambaforge:latest # Metadata LABEL maintainer="Your Name <[email protected]>" # Set working directory WORKDIR /app # Set environment to avoid interactive prompts ENV DEBIAN_FRONTEND=noninteractive # Install system dependencies RUN apt-get update && apt-get install -y \ g++ git gcc curl tar unzip libcurl4-openssl-dev \ && rm -rf /var/lib/apt/lists/* # Manually install Julia (x86-64) RUN curl -fsSL https://julialang-s3.julialang.org/bin/linux/x64/1.10/julia-1.10.1-linux-x86_64.tar.gz | tar -xz -C /opt/ ENV PATH="/opt/julia-1.10.1/bin:$PATH" ENV JULIA_DEPOT_PATH="/opt/julia" # Install Julia package Wflow RUN julia -e 'println("Starting Wflow installation..."); \ using Pkg; Pkg.update(); println("Registry updated"); \ Pkg.add(name="Wflow"); println("Wflow installed successfully"); \ using Wflow; println("Wflow loaded and verified")' # Create and activate a new conda environment RUN mamba create -n hydromt-wflow -c conda-forge \ python=3.11 gunicorn flask -y # Activate the environment by updating PATH ENV CONDA_DEFAULT_ENV=hydromt-wflow ENV PATH=/opt/conda/envs/hydromt-wflow/bin:$PATH # Install Python dependencies with pip (optional) COPY requirements.txt . RUN test -f requirements.txt && \ conda run -n hydromt-wflow pip install --no-cache-dir -r requirements.txt || \ echo "No requirements.txt found" # Copy app and Julia bridge script COPY app.py julia_bridge.jl ./ # Expose port for web app EXPOSE 5010 # Entrypoint: run Flask app via gunicorn using Conda environment ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "hydromt-wflow"] CMD ["gunicorn", "--bind", "0.0.0.0:5010", "--workers", "4", "app:app"] nano app.py from flask import Flask, jsonify import subprocess app = Flask(__name__) @app.route("/") def home(): return "Hello from Python + Julia!" @app.route("/run-julia") def run_julia(): try: output = subprocess.check_output(["julia", "julia_bridge.jl"], universal_newlines=True) return jsonify({"output": output}) except subprocess.CalledProcessError as e: return jsonify({"error": e.output}), 500 if __name__ == "__main__": app.run(debug=True, host="0.0.0.0", port=5010) nano julia_bridge.jl using Wflow println("Wflow is available: ", isdefined(Main, :Wflow)) println("Random example number from Julia: ", rand(1:100)) nano requirements.txt flask requests julia After creating and saving all files, build and run using these commands: docker buildx build --platform linux/amd64 -t julia-wflow-app . docker run --platform linux/amd64 --rm -it -p 5010:5010 julia-wflow-app Output of the latter will be sth like: [2025-03-21 12:33:40 +0000] [15] [INFO] Starting gunicorn 23.0.0 [2025-03-21 12:33:40 +0000] [15] [INFO] Listening at: http://0.0.0.0:5010 (15) [2025-03-21 12:33:40 +0000] [15] [INFO] Using worker: sync [2025-03-21 12:33:40 +0000] [16] [INFO] Booting worker with pid: 16 [2025-03-21 12:33:40 +0000] [17] [INFO] Booting worker with pid: 17 [2025-03-21 12:33:40 +0000] [18] [INFO] Booting worker with pid: 18 [2025-03-21 12:33:40 +0000] [19] [INFO] Booting worker with pid: 19 and it will wait there. Go to your OS's favorite browser and enter as URL: http://localhost:5010 It should show: Hello from Python + Julia! Now enter: http://localhost:5010/run-julia And you will see: {"output":"Wflow is available: true\nRandom example number from Julia: 22\n"} Voila! You have both running!
2
3
79,524,581
2025-3-21
https://stackoverflow.com/questions/79524581/what-is-the-good-practice-to-add-many-similar-methods-to-a-class-in-python
Let's say I have a class like this. from scipy import interpolate import numpy as np class Bar: def __init__(self,data): self.data = data def mean(self): return self.data.mean(axis=0) def sd(self): return self.data.std(axis=0) bar = Bar(np.random.rand(10,5)) print(bar.mean()) print(bar.sd()) The class Bar may have many such methods such as mean(), sd() etc. I want to add sampled versions of those methods, so that I can simply get results equivalent to this: new_ids = np.linspace(bar.ids[0],bar.ids[-1],100) sampled_mean = interpolate.interp1d(bar.ids,bar.mean(),axis=0)(new_ids) Current workaround: manually add new methods with help of decorators. from scipy import interpolate import numpy as np def sample(func): def wrapper(self,n_sample=100,*args,**kwargs): new_ids = np.linspace(self.ids[0],self.ids[-1],n_sample) vals = func(self,*args,**kwargs) return interpolate.interp1d(self.ids,vals,axis=0)(new_ids) return wrapper class Bar: def __init__(self,ids,data): self.ids = ids self.data = data def mean(self): return self.data.mean(axis=0) def sd(self): return self.data.std(axis=0) @sample def spl_mean(self): return self.mean() @sample def spl_sd(self): return self.sd() bar = Bar(ids=np.arange(5),data=np.random.rand(10,5)) new_ids = np.linspace(bar.ids[0],bar.ids[-1],100) sampled_mean = interpolate.interp1d(bar.ids,bar.mean(),axis=0)(new_ids) assert np.all(sampled_mean == bar.spl_mean()) However, I have many such methods. Of course I can write them with the help of LLM but what I want to know is: whether this is already a good practice to achieve this, and whether there's more elegant approache.
One DRYer approach that avoids repeating method names for the sampled versions is to make the decorator a custom descriptor whose __get__ method returns an object that calls the decorated function when called, but also provides a sample method that calls the decorated function with the sampling logics around it: class Samplable: def __init__(self, method): self.method = method def __call__(self, *args, **kwargs): return self.method(*args, **kwargs) def sample(self, *args, **kwargs): return f'sampled {self(*args, **kwargs)}' class sample: def __init__(self, func): self.func = func def __get__(self, obj, objtype=None): return Samplable(self.func.__get__(obj)) class Bar: def __init__(self, data): self.data = data @sample def mean(self): return f'mean {self.data}' @sample def sd(self): return f'sd {self.data}' so that: bar = Bar('data') print(bar.mean()) print(bar.mean.sample()) print(bar.sd()) print(bar.sd.sample()) outputs: mean data sampled mean data sd data sampled sd data Demo: https://ideone.com/RBa3UJ
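If you want the sample() stub above to actually perform the interpolation from the question, one possible adaptation (a sketch, assuming the owning object exposes ids like the original Bar class) is to hand the instance to Samplable from __get__ so that sample() can reach obj.ids:

```python
import numpy as np
from scipy import interpolate

class Samplable:
    def __init__(self, method, owner):
        self.method = method   # the bound method, e.g. bar.mean
        self.owner = owner     # the Bar-like instance; assumed to expose .ids

    def __call__(self, *args, **kwargs):
        return self.method(*args, **kwargs)

    def sample(self, n_sample=100, *args, **kwargs):
        ids = self.owner.ids
        new_ids = np.linspace(ids[0], ids[-1], n_sample)
        return interpolate.interp1d(ids, self(*args, **kwargs), axis=0)(new_ids)

class sample:
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # Pass the instance along so sample() can reach obj.ids.
        return Samplable(self.func.__get__(obj), obj)
```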
2
1
79,524,534
2025-3-21
https://stackoverflow.com/questions/79524534/typeerror-cannot-determine-truth-value-of-relational-when-entering-symbolic-val
I heard the issue comes when trying to compare symbolic values, for example if i enter e^(8*x) as the initial function in the code, so I've tried evalf() with the values to turn them into numeric, but they continue to be symbolic. What else could i do? import math import sympy as sp print("Fixed Point Method") funini = input("Initial function\nx= ") # initial function by user valini = float(input("Initial value of x\n")) # initial value of x valini = float(valini) # convert the initial value into a float errabs = input("Absolute error to obtain\n") # requested absolute error errabs = float(errabs) # convert the absolute error into a float i = 0 errorfin = 100 while True: i += 1 x = sp.symbols('x') # x as a symbolic variable until given a value x.evalf() print(x) funcion = sp.sympify(funini) # convert the function into a symbolic expression resultado = funcion.subs(x, valini).evalf() # substitute x as a symbolic variable with the initial value resultado.evalf() print(resultado) print(f"For iteration {i}, x= {resultado}") errorfin = abs((resultado - valini) / resultado).evalf() # ensure errorfin is numeric print(f"The error is {errorfin}") valini = resultado # the result becomes the new initial value valini.evalf() print(valini) if i == 10: # max 10 iterations print("The values converge, enter a new initial x") break elif errorfin <= errabs: print(f"The error is less than the absolute error, the root is {resultado}") break
Sympy thinks that e is another variable. Try passing it exp(8*x) instead. Also, you have a couple of lines where you do something.evalf() but don't save it to a variable. Those lines aren't doing anything.
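A short illustration of the difference: with sympify, a lowercase e is parsed as an ordinary symbol, while exp(...) (or the constant E) gives something evalf() can actually turn into a number.

```python
import sympy as sp

x = sp.symbols('x')
f1 = sp.sympify("e**(8*x)")    # 'e' is just a Symbol here, not Euler's number
f2 = sp.sympify("exp(8*x)")    # SymPy's exponential function
f3 = sp.sympify("E**(8*x)")    # capital E is SymPy's constant e

print(f1.subs(x, 0.1))          # stays symbolic: e**0.8
print(f2.subs(x, 0.1).evalf())  # about 2.2255
print(f3.subs(x, 0.1).evalf())  # about 2.2255
```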
1
1
79,523,479
2025-3-20
https://stackoverflow.com/questions/79523479/python-class-that-does-integer-operations-mod-n
I am trying to create a Python class that does operations mod n. For example using mod 100: 11*11==21 66+39==5 1+2+3-30==73 2**10==24 This is how I am doing it: class ModInteger: def __init__(self, value, mod): self.mod = mod self.value = value % mod def __add__(self, other): return ModInteger((self.value + other.value) % self.mod, self.mod) def __sub__(self, other): return ModInteger((self.value - other.value) % self.mod, self.mod) def __mul__(self, other): return ModInteger((self.value * other.value) % self.mod, self.mod) def __pow__(self, other): return ModInteger((self.value ** other.value) % self.mod, self.mod) #... It works, but is there a less tedious and repetitive way of doing it?
A more generic approach would be to build ModInteger with a metaclass that creates wrapper methods around relevant int methods. To allow compatibility with both int and ModInteger objects the wrapper methods should convert arguments of ModInteger objects into int objects before passing them to int methods, and convert returning values back to ModInteger objects if they are int. The demo below shows how wrapper methods work for both unary and binary operations, as well as methods such as __repr__, which doesn't take an argument and doesn't return an int object: class ModIntegerMeta(type): def __new__(metacls, name, bases, members): cls = super().__new__(metacls, name, bases, members) for name in '__add__', '__sub__', '__mul__', '__neg__', '__repr__': setattr(cls, name, cls.create_method(name)) return cls def create_method(cls, name): def method(self, *args): value = getattr(self.value, name)(*( arg.value if isinstance(arg, cls) else arg for arg in args )) if type(value) is int: return cls(value, self.mod) return value return method class ModInteger(metaclass=ModIntegerMeta): def __init__(self, value, mod): self.mod = mod self.value = value % mod print((-ModInteger(66, 100) + ModInteger(39, 100) * 5)) This outputs 29 because (-66 + (39 * 5) % 100) % 100 == 29 Demo: https://ideone.com/m5upVv
2
1
79,524,057
2025-3-20
https://stackoverflow.com/questions/79524057/replace-values-in-series-with-first-regex-group-if-found-if-not-leave-nan
Say I have a dataframe such as df = pd.DataFrame(data = {'col1': [1,np.nan, '>3', 'NA'], 'col2':["3.","<5.0",np.nan, 'NA']}) out: col1 col2 0 1 3. 1 NaN <5.0 2 >3 NaN 3 NA NA What I would like is to strip stuff like "<" or ">", and get to floats only out: col1 col2 0 1 3. 1 NaN 5.0 2 3 NaN 3 NaN NaN I thought about something like df['col2'].replace({'.*' : r"[-+]?(?:\d*\.*\d+)"}, regex=True, inplace=True) the idea being, replace anything with the regex for float, (I think), but this fails error: bad escape \d at position 8 I tried along the lines of df['col2'].replace({r"[-+]?(?:\d*\.*\d+)": r"\1"}, regex=True, inplace=True) assuming ">"or "<" come before the float number, but then this fails (does not catch anything) if the field is a string or np.nan. Any suggestions please?
This should work. Logic: first match, but do not include in the capture group \1, anything we do not want to keep at the beginning of the number, using the negated character class [^...]. Python (re module flavor regex): pattern = "^[^\w+-\.]*([+-]?\d*\.?\d*)" replacement = \1 Regex Demo: https://regex101.com/r/Halz4X/2 NOTES: ^ Matches the beginning of the string. [^\w+-\.]* Negated class [^...] that matches anything that is not an alphanumeric or underscore character (\w), or a literal +, - or ., 0 or more times (*). ( Begins the first (and only) capture group, referred to in the replacement string with \1. [+-]?\d*\.?\d* [+-]? Matches 0 or 1 (?) of + or -. \d* Matches 0 or more (*) digits (\d). \. Matches a literal dot. \d* Matches 0 or more (*) digits (\d). ) Ends the capture (group 1, \1). Replacement string: \1 replaces the matched string with only the characters in capture group 1.
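To apply this to the DataFrame from the question, one possible sketch using str.extract plus to_numeric; the pattern below is the one above with the hyphen escaped inside the class, which is equivalent but avoids the range interpretation.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(data={'col1': [1, np.nan, '>3', 'NA'],
                        'col2': ["3.", "<5.0", np.nan, 'NA']})

pattern = r"^[^\w+\-.]*([+-]?\d*\.?\d*)"

for col in ['col1', 'col2']:
    # astype(str) lets the regex see numeric cells too; to_numeric turns empty
    # matches (e.g. from 'NA' or NaN) back into NaN.
    df[col] = pd.to_numeric(df[col].astype(str).str.extract(pattern)[0],
                            errors='coerce')
print(df)
```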
6
3
79,523,736
2025-3-20
https://stackoverflow.com/questions/79523736/return-value-of-np-polynomial-polynomial-fit-when-full-true
In the NumPy module, when you call np.polynomial.Polynomial.fit with full=True and you look at the residuals value that's returned you get an object of type array. If this value is always a single number, why is it returned as an array?
Because what Polynomial.fit basically does is call lstsq. See an artificial example import numpy as np x=np.linspace(-1,1,11,1) # I choose this range because I am lazy: then I know that there is no conversion needed to get the returned coefficient y=12+x+3*x**2+0.5*x**3 np.polynomial.polynomial.Polynomial.fit(x,y,3,full=True) # (Polynomial([12. , 1. , 3. , 0.5], domain=[-1., 1.], window=[-1., 1.], symbol='x'), [array([7.69471606e-30]), 4, array([1.38623557, 1.32269864, 0.50046809, 0.27991237]), 2.4424906541753444e-15]) And now, trying to redo it myself with lstsq (find coefficients such that the linear combination of the columns of M with those coefficients results in y) # Columns of M are 1, x, x², x³. M=np.array([np.ones_like(x), x, x**2, x**3]).T np.linalg.lstsq(M, y) # So the returned coefficients are such that y ≈ cf[0]+cf[1]*x+cf[2]*x²+cf[3]*x³ # So, the coefficients of the deg 3 polynomial # (array([12. , 1. , 3. , 0.5]), array([1.7292925e-30]), 4, array([3.60116183, 2.60171483, 1.07908918, 0.50695163])) So, same coefficients, of course. I didn't add any noise, so the fit is perfect. And the other returned values are the ones returned by Polynomial.fit: sum of squared residuals, rank of the matrix, singular values (focus on the form of the answer rather than on the values: it is not exactly that least square that Polynomial.fit does). So, Polynomial.fit returns a singleton array for the sum of squared residuals because it returns what lstsq returns, and that is what lstsq returns. Now, of course, your next question has to be "but then, why does lstsq do that?" Because, in contrast to Polynomial.fit, lstsq can be called with an array of y vectors (so a 2D array). For example: y = np.array([12+x+3*x**2+0.5*x**3, 1+x]).T np.linalg.lstsq(M, y) #(array([[1.20000000e+01, 1.00000000e+00], # [1.00000000e+00, 1.00000000e+00], # [3.00000000e+00, 7.77156117e-16], # [5.00000000e-01, 0.00000000e+00]]), # array([1.72929250e-30, 4.38386909e-31]), 4, array([3.60116183, 2.60171483, 1.07908918, 0.50695163])) As you can see, there are 2 questions (2 polynomials to guess), so 2 answers: 2 sets of coefficients and 2 sums of squared residuals (but only one rank and one set of singular values, since those are specific to M, and there is only one M). So, that is why lstsq returns an array: it holds the sum of squared residuals for every y we fit. If we fit only one y, then there is only one sum in that array. Since Polynomial.fit cannot be called with a 2D array as y, it is always in the case where there is only one sum in that array.
2
2
79,523,249
2025-3-20
https://stackoverflow.com/questions/79523249/multi-processing-copy-on-write-4-workers-but-only-double-the-size
I'm running an experiment on copy-on-write mechanism on Python Multiprocessing. I created a large file of 10GB and load the file into large_object in main. file_path = 'dummy_large_file.bin' try: large_object = load_large_object(file_path) print("File loaded successfully") except ValueError as e: print(e) except Exception as e: print(f"An error occurred: {e}") # Create a dummy file of 10GB for testing dummy_file_path = 'dummy_large_file.bin' with open(dummy_file_path, 'wb') as dummy_file: dummy_file.seek(10 * 1024 * 1024 * 1024 - 1) dummy_file.write(b"A") print(f"Dummy file created at {dummy_file_path}") I intentionally pass the large_object to the multiprocessing workers to see the workers copy the large object. And to make sure multiprocessing copy, I modify the content of the object. # Multi-processing num_workers = 4 chunk_size = 10 processes = [] for i in range(num_workers): start = i * chunk_size p = multiprocessing.Process(target=process_chunk, args=(large_object, dummy_file_path, start, chunk_size)) processes.append(p) p.start() for p in processes: p.join() # Multi-processing def process_chunk(object, file_path, start, size): random_chunk = bytes([random.randint(0, 255) for _ in range(size)]) object[start:start + size] = random_chunk # Modify the chunk to a random binary number print(f"large object [{start}:{start + size}]: {object[start:start + size]}") # Process the chunk The expectation here is the program should hold large_object of 10GB, and it copies the large_object to 4 other workers - which sums to 40GB. So the total memory should be 50GB. However, I'm only seeing the total memory of 20GB. Why there's only 20GB memory consumption? Does python implement some other lazy loading or granular copy for large objects?
Linux knows nothing about Python or how big its objects are. The computer divides memory into pages; a memory page is a contiguous block of memory, and Linux uses 4 KB pages on most systems. That is the "unit" of memory that gets duplicated by copy-on-write when you write to it: if you modify a single byte, the entire 4 KB page gets duplicated. The address space is virtualized at the page level; just because you mapped 10 GB worth of pages doesn't mean the computer allocates 10 GB of contiguous physical memory. You only get 10 GB of contiguous virtual address space, and those addresses may not even be mapped to physical memory at all until you write to them. Each of your 4 workers only writes to 2.5 GB, therefore only 2.5 GB worth of pages is duplicated into each worker, which accounts for the extra 10 GB you see. I think the only optimization Python does is that it doesn't attempt to serialize/deserialize the objects when launched with multiprocessing.Process, since they will be COWed by Linux anyway. That is not true if you use other multiprocessing methods like Pool, which needs to serialize/deserialize its arguments; then you would have 50 GB used.
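If you want to verify the page-size granularity mentioned above on your own machine, a tiny sketch:

```python
import mmap
import os

# Both report the kernel's page size (typically 4096 bytes on Linux),
# which is the unit at which copy-on-write duplication happens.
print(mmap.PAGESIZE)
print(os.sysconf("SC_PAGE_SIZE"))
```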
1
2
79,522,011
2025-3-20
https://stackoverflow.com/questions/79522011/python-pil-image-text-overlay-not-displayed-with-expected-color-on-white-backgro
Python PIL Image not working properly with overlay text image. I am trying to use FPDF image to convert an overlayed text png image to pdf file. However, the overlay text is not in expected colour (looks transparent) on a white background image. However, the same logic works in a zebra pattern background image. Source background image: Text overlayed image: Code used for overlayed text: Python Image AttributeError: 'ImageDraw' object has no attribute '__array_interface__' code: fontuse = "D:/Ubuntusharefolder/CustomFonts/EnglishFonts/NotoSans-Medium.ttf" font_color_bgra = (0,0,0,1)#black #(0,255,0,0)#green font = ImageFont.truetype(fontuse, 32 * int(2), layout_engine=ImageFont.LAYOUT_RAQM) src_img_use = np.array(Image.open(os.path.join(input_subfolder_path, filename) ) ) #(511, 898, 4) print('src_img_use gen size: ',os.path.join(input_subfolder_path, filename), src_img_use.shape) src_img_pil = Image.fromarray(src_img_use) print('src_img_pil gen size: ', src_img_use.size) img_pil_4dtxt = ImageDraw.Draw(src_img_pil) img_pil_4dtxt.text(textpos, subjectuse, font = font, fill = font_color_bgra) #fill = "black" ) src_img_pil.save(output_folder_final_path + '/updtxtimg/' + str(imgidx) + '_' + filename) print('label_img_withtxt drawtext saved subjectuse idx', imgidx, subjectuse) Note sure whether there is a problem with input image or overlay code logic. The same image has to be used in FPDF.image but same behavior noticed. Updated Console logs: //zebra pattern image > & C:/ProgramData/Anaconda3/python.exe ./code/txtoverimgv1.py src_base imguse gen size: (1920, 1080, 3) input img file path: D:/testinput/order_image_1.png src_imguse gen size: (540, 360) saved img in path: D:/testinput/updtxtimg/upd_order_image_1.png <PIL.Image.Image image mode=RGB size=540x360 at 0x1293F0228E0> saved_img1 done //white background > & C:/ProgramData/Anaconda3/python.exe ./code/txtoverimgv1.py src_base imguse gen size: (1920, 1080, 3) input img file path: D:/testinput/order_image_1.png src_imguse gen size: (1920, 1080) saved img in path: D:/testinput/updtxtimg/upd_order_image_1.png <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x2636D84B280> saved_img1 done mode= RGB and RGBA differs for each image but both are png images Code update: Converting the image to RGB while opening solves the problem, but would like to know best way to work on RGBA images too. src_img_use = np.array(Image.open( input_subfolder_path + '/' + filename ).convert("RGB") )
The reason you are not seeing the text correctly on the RGBA image is that you did not define the color correctly. The text() method's fill parameter expects a tuple of integers in the 0-255 range, so (0,0,0,1) results in a nearly transparent color (0 = fully transparent, 255 = fully opaque). Try again with font_color_bgra = (0,0,0,255) # black This should resolve your issue. On the pure RGB image the alpha component is not considered, which is why you do not see the same problem on the zebra-background image.
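To keep working with RGBA inputs (as asked in the question's update) without converting to RGB, one possible sketch is to pick the fill tuple based on the image mode; the file and font paths here are illustrative placeholders.

```python
from PIL import Image, ImageDraw, ImageFont

img = Image.open("input.png")                          # may be RGB or RGBA (hypothetical path)
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("NotoSans-Medium.ttf", 64)   # hypothetical font path

# Fully opaque black in either mode: add the alpha channel only when needed.
fill = (0, 0, 0, 255) if img.mode == "RGBA" else (0, 0, 0)
draw.text((50, 50), "overlay text", font=font, fill=fill)
img.save("output.png")
```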
1
2
79,520,985
2025-3-19
https://stackoverflow.com/questions/79520985/how-to-simplify-the-generation-process-of-these-boolean-images
I have written code that generates Thue-Morse sequence, its output is a NumPy 2D array containing only 0s and 1s, the height of the array is 2n and the width is n. More specifically, each intermediate result is kept as a column in the final output, with the elements repeated so that every column is of the same length. For example, if the input is 3, we start with 01, we repeat the elements 4 times so that the first column is 00001111, in the next iteration we invert the bits and add to the previous iteration to get 0110, we repeat the elements twice so the second column is 00111100, and finally the last column is 01101001, so the output is: array([[0, 0, 0], [0, 0, 1], [0, 1, 1], [0, 1, 0], [1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 0, 1]], dtype=uint8) Now the first kind of image I want to generate is simple, we repeat each column 2n - 1 - i times, so that each run of 1s becomes a rectangle, whose height is the length of the run, and in each subsequent column the rectangles are halved in width, so that sum of the width of the rectangles is height - 1. This is the 7-bit ABBABAAB example of such image: And the second kind of image I want to generate is to take the fractal squares and convert them to polar: 7-bit ABBABAAB fractal polar: I wrote code to generate these images. But it is inefficient. It is easy to write working code, but it is hard to make it run fast. Code import cv2 import numpy as np from enum import Enum from PIL import Image from typing import Generator, List, Literal def validate(n: int) -> None: if not (n and isinstance(n, int)): raise ValueError(f"Argument {n} is not a valid bit width") def abba_codes(n: int) -> np.ndarray: validate(n) power = 1 << (n - 1) abba = np.array([0, 1], dtype=np.uint8) powers = 1 << np.arange(n - 2, -1, -1) return ( np.concatenate( [ np.zeros(power, dtype=np.uint8), np.ones(power, dtype=np.uint8), *( np.tile( (abba := np.concatenate([abba, abba ^ 1])), (power, 1) ).T.flatten() for power in powers ), ] ) .reshape((n, 1 << n)) .T ) def abba_squares(n: int) -> np.ndarray: arr = abba_codes(n).T[::-1] powers = 1 << np.arange(n + 1) result = np.zeros((powers[-1],) * 2, dtype=np.uint8) for i, (a, b) in enumerate(zip(powers, powers[1:])): result[a:b] = arr[i] return (result.T[:, ::-1] ^ 1) * 255 def abba_square_img(n: int, length: int = 1024) -> np.ndarray: return Image.fromarray(abba_squares(n)).resize((length, length), resample=0) def abba_polar(n: int, length: int = 1024) -> np.ndarray: square = np.array(abba_square_img(n, length)) return cv2.warpPolar(square, (length, length), [half := length >> 1] * 2, half, 16) Performance In [2]: %timeit abba_square_img(10, 1024) 10.3 ms ± 715 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [3]: %timeit abba_polar(10, 1024) 27.2 ms ± 5.41 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) You see, I had to mix PIL and cv2, because only cv2 offers polar coordinate transformation, and only PIL lets me resize without any interpolation at all. The following don't give intended result: In [30]: cv2.imwrite('D:/garbage.png', cv2.resize(abba_squares(4), (1024, 1024), cv2.INTER_NEAREST)) Out[30]: True In [31]: cv2.imwrite('D:/garbage1.png', cv2.resize(abba_squares(4), (1024, 1024), cv2.INTER_NEAREST_EXACT)) Out[31]: True In [32]: cv2.imwrite('D:/garbage2.png', cv2.resize(abba_squares(4), (1024, 1024), cv2.INTER_AREA)) Out[32]: True No matter what interpolation mode I try, it always gives a blurry image. I want the edges to be sharp and everything staying rectangles. So I had to use PIL to resize the images. 
Of course I can use a two level nested for loop to broadcast the pixels with result[i*n:i*n+n, j*n:j*n+n] = img[i, j] but that would be slow. The edges of the polar images are jagged, not smooth, I want the edges to be smooth, and the corners are black, I want the corners white. And if I pass n larger than 14 to bool_squares it just hangs. What are better ways to do these? I have improved the code a little bit, but I still haven't figured out an efficient way to generate polar images directly. This question still deals with two kinds of images, that is because two things, 1, I still don't know how to efficiently fill rectangles in an array using a completely vectorized way, and 2, I still need to generate the fractal squares first in order to generate the polar image. But I have made big progress on the generation of the fractal squares, so I found the consecutive runs of 1s and created many rectangles and used those rectangles to fill the array: def find_transitions(line: np.ndarray) -> np.ndarray: if not line.size: return np.array([]) return np.concatenate( [ np.array([0] if line[0] else [], dtype=np.uint64), ((line[1:] != line[:-1]).nonzero()[0] + 1).astype(np.uint64), np.array([line.size] if line[-1] else [], dtype=np.uint64), ] ) def fractal_squares_helper(arr: np.ndarray, n: int, scale: int) -> List[np.ndarray]: x_starts = [] x_ends = [] y_starts = [] y_ends = [] widths = np.concatenate([[0], ((1 << np.arange(n - 1, -1, -1)) * scale).cumsum()]) for i, (start, end) in enumerate(zip(widths, widths[1:])): line = find_transitions(arr[:, i]) * scale half = line.size >> 1 y_starts.append(line[::2]) y_ends.append(line[1::2]) x_starts.append(np.tile([start], half)) x_ends.append(np.tile([end], half)) return [np.concatenate(i) for i in (x_starts, x_ends, y_starts, y_ends)] def fill_rectangles( length: int, x_starts: np.ndarray, x_ends: np.ndarray, y_starts: np.ndarray, y_ends: np.ndarray, ) -> np.ndarray: img = np.full((length, length), 255, dtype=np.uint8) x = np.arange(length) y = x[:, None] mask = ( (y >= y_starts[:, None, None]) & (y < y_ends[:, None, None]) & (x >= x_starts[:, None, None]) & (x < x_ends[:, None, None]) ) img[mask.any(axis=0)] = 0 return img def fill_rectangles1( length: int, x_starts: np.ndarray, x_ends: np.ndarray, y_starts: np.ndarray, y_ends: np.ndarray, ) -> np.ndarray: img = np.full((length, length), 255, dtype=np.uint8) for x0, x1, y0, y1 in zip(x_starts, x_ends, y_starts, y_ends): img[y0:y1, x0:x1] = 0 return img def fractal_squares(n: int, length: int, func: bool) -> np.ndarray: arr = abba_codes(n) scale, mod = divmod(length, total := 1 << n) if mod: raise ValueError( f"argument length: {length} is not a positive multiple of {total} n-bit codes" ) return (fill_rectangles, fill_rectangles1)[func]( length, *fractal_squares_helper(arr, n, scale) ) In [4]: %timeit fractal_squares(10, 1024, 0) 590 ms ± 19.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [5]: %timeit fractal_squares(10, 1024, 1) 1.65 ms ± 56.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) The first method I used to fill the rectangles is completely vectorized, but it is very slow and memory consuming, it is the only way I could make it work. The for loop based method is much faster, but it isn't completely vectorized, I want to vectorize it completely, to do away with the loop. 
Now the polar images can be generated similarly, instead of filling Cartesian rectangles, we fill "polar rectangles", I have calculated the coordinates, but I cannot fill the rectangles: def rectangularize(y: np.ndarray, x: np.ndarray) -> np.ndarray: l = y.shape[0] h = l // 2 return np.stack([np.tile(y, (2, 1)).T.flatten(), np.tile(x, l)]).T.reshape( (h, 4, 2) ) def radial_rectangles(n: int, length: int) -> np.ndarray: arr = abba_codes(n) radii = np.concatenate([[0], (length >> np.arange(1, n + 1)).cumsum()]) rectangles = [] total = 1 << n tau = 2 * np.pi for i, (start, end) in enumerate(zip(radii, radii[1:])): line = find_transitions(arr[:, i]) / total * tau rectangles.append(rectangularize(line, [start, end])) return np.concatenate(rectangles) In [6]: radial_rectangles(4, 1024) Out[6]: array([[[3.14159265e+00, 0.00000000e+00], [3.14159265e+00, 5.12000000e+02], [6.28318531e+00, 0.00000000e+00], [6.28318531e+00, 5.12000000e+02]], [[1.57079633e+00, 5.12000000e+02], [1.57079633e+00, 7.68000000e+02], [4.71238898e+00, 5.12000000e+02], [4.71238898e+00, 7.68000000e+02]], [[7.85398163e-01, 7.68000000e+02], [7.85398163e-01, 8.96000000e+02], [2.35619449e+00, 7.68000000e+02], [2.35619449e+00, 8.96000000e+02]], [[3.14159265e+00, 7.68000000e+02], [3.14159265e+00, 8.96000000e+02], [3.92699082e+00, 7.68000000e+02], [3.92699082e+00, 8.96000000e+02]], [[5.49778714e+00, 7.68000000e+02], [5.49778714e+00, 8.96000000e+02], [6.28318531e+00, 7.68000000e+02], [6.28318531e+00, 8.96000000e+02]], [[3.92699082e-01, 8.96000000e+02], [3.92699082e-01, 9.60000000e+02], [1.17809725e+00, 8.96000000e+02], [1.17809725e+00, 9.60000000e+02]], [[1.57079633e+00, 8.96000000e+02], [1.57079633e+00, 9.60000000e+02], [1.96349541e+00, 8.96000000e+02], [1.96349541e+00, 9.60000000e+02]], [[2.74889357e+00, 8.96000000e+02], [2.74889357e+00, 9.60000000e+02], [3.53429174e+00, 8.96000000e+02], [3.53429174e+00, 9.60000000e+02]], [[4.31968990e+00, 8.96000000e+02], [4.31968990e+00, 9.60000000e+02], [4.71238898e+00, 8.96000000e+02], [4.71238898e+00, 9.60000000e+02]], [[5.10508806e+00, 8.96000000e+02], [5.10508806e+00, 9.60000000e+02], [5.89048623e+00, 8.96000000e+02], [5.89048623e+00, 9.60000000e+02]]]) The output is of the shape (n, 4, 2), each (4, 2) shape is a "radial rectangle", the first element of the innermost pairs is the angle from x-axis measured in radians, the second is the radius. The "radial rectangles" are in the format [(a0, r0), (a0, r1), (a1, r0), (a1, r1)] What is a more efficient way to fill rectangles and how can I fill "radial rectangles"?
Think in shaders. First, have a function that decides the bit in the sequence. @nb.njit def thue_morse(level: int, alpha: float) -> bool: assert level >= 0 assert 0 <= alpha < 1 value = False while level > 0: level -= 1 if alpha < 0.5: alpha *= 2 else: alpha = alpha * 2 - 1 value = not value return value You can test that: for level in range(0, 4): print(f"level {level}") nsteps = 2 ** (level+0) # +1 for a level of oversampling alphas = (np.arange(nsteps) + 0.5) / nsteps values = np.array([ thue_morse(level, alpha) for alpha in alphas ]) print(np.vstack([alphas, values])) print() Output looks right: level 0 [[0.5] [0. ]] level 1 [[0.25 0.75] [0. 1. ]] level 2 [[0.125 0.375 0.625 0.875] [0. 1. 1. 0. ]] level 3 [[0.0625 0.1875 0.3125 0.4375 0.5625 0.6875 0.8125 0.9375] [0. 1. 1. 0. 1. 0. 0. 1. ]] Now have a function that decides the color for any position u,v on the texture. Apply it to all the pixels. @nb.njit(parallel=True, cache=True) def shade_rectangular(width: int, height: int): canvas = np.ones((height, width), dtype=np.bool) for y in nb.prange(height): for x in range(width): alpha = np.float32(y) / height # log2 distribution: level 0 takes half the width, 1 a quarter, etc # and start at level 1, so the left half isn't totally trivial level = int(-np.log2((width-x) / width)) + 1 canvas[y,x] = not thue_morse(level, alpha) return canvas canvas = shade_rectangular(2048, 2048) For the polar thing, convert coordinates to polar, then sample: @nb.njit(parallel=True, cache=True) def shade_polar(radius: int): width = height = 2*radius + 1 canvas = np.ones((height, width), dtype=np.bool) for y in nb.prange(height): yy = y - radius for x in range(width): xx = x - radius r = np.hypot(xx, yy) theta = np.arctan2(yy, xx) if r < radius: level = int(-np.log2((radius-r) / radius)) + 1 alpha = (theta / (2 * np.pi)) % 1 canvas[y,x] = not thue_morse(level, alpha) return canvas canvas = shade_polar(1024) If you want supersampling, you can have supersampling. canvas = shade_polar(4096) * np.uint8(255) canvas = cv.pyrDown(cv.pyrDown(canvas)) I didn't time any of this. Except for the huge images, it's negligible. I didn't bother making sure all the floats are fp32. Some of them might be fp64, which costs performance. The trig functions for the polar plot cost a bunch in particular. You could calculate these values once, store them, then reuse.
2
4
79,520,670
2025-3-19
https://stackoverflow.com/questions/79520670/multithreading-python-jobs-to-shared-state
I think I'm lacking the understanding on multithreading in Python and the answers online are hurting my feeble brain. I want to use multithreading to write json data to a list as it works. I have the below, the some_function function is a function that takes a dictionary runs a different (basic) web scraper, that returns a dictionary or nothing (it errors). from multiprocessing import Pool def some_function(job_dict: dict) -> dict: # some function class RunJobs: def __init__(self, jobs: list[dict], threads=1): self.jobs, self.threads, self.state = jobs, threads, [] def _run_sequentially(self): for job in self.jobs: self.state.append(some_function(job)) def _run_multithreaded(self): pool = Pool(self.threads) job_queue = [] for job in self.jobs: job_queue.append(pool.apply_async(some_function, args=(job))) pool.close() pool.join() for job in job_queue: self.state.append(job.get()) def run(self): try: if self.threads == 1: self._run_sequentially() else: self._run_multithreaded() except KeyInterrupt: print(self.state) How do I make it so this state is updated on multiple threads as they run, or is that not possible?
As been pointed out by Arthur Belanger your code is using multiprocessing and not multithreading. If you want to be using multithreading, then you can do so with minimal changes. Instead of: from multiprocessing import Pool Do either: from multiprocessing.dummy import Pool # multithreading or: from multiprocessing.pool import ThreadPool as Pool # multithreading But you have other issues with your code. In your _run_multithreaded method you have: job_queue.append(pool.apply_async(self._run_single, args=(job))) But your class does not have a _run_single method. Looking at your _run_sequentially method it would appear that you mean some_function instead of self._run_single. Also, the args argument to apply_async should be an iterable such as a tuple or list instance and each element will be an positional argument to your worker function, run_job. But enclosing job in parentheses does not make it a tuple; it is just a parenthesized expression which in this case is no different than had you just specified args = job (that is, (job) == job). What you need is: job_queue.append(pool.apply_async(some_function, args=(job,))) Note that (job,) is a tuple of one element while (job) is not. You also have quite a few syntax errors. For example: class RunJobs: self __init__(self, jobs: list[dict], threads=1): That is not how you define a method. This should be: class RunJobs: def __init__(self, jobs: list[dict], threads=1): Putting it all together, you code should be: from multiprocessing.dummy import Pool def some_function(job_dict: dict) -> dict: # For demo purposes just return the input argument: return job_dict class RunJobs: def __init__(self, jobs: list[dict], threads=1): self.jobs, self.threads, self.state = jobs, threads, [] def _run_sequentially(self): for job in self.jobs: self.state.append(some_function(job)) def _run_multithreaded(self): pool = Pool(self.threads) job_queue = [] for job in self.jobs: job_queue.append(pool.apply_async(some_function, args=(job,))) pool.close() pool.join() for job in job_queue: self.state.append(job.get()) def run(self): try: if self.threads == 1: self._run_sequentially() else: self._run_multithreaded() except KeyInterrupt: print(self.state) if __name__ == '__main__': jobs = [ {'x': 1}, {'y': 2}, {'z': 3} ] run_jobs = RunJobs(jobs, threads=3) run_jobs.run() print(run_jobs.state) Prints: [{'x': 1}, {'y': 2}, {'z': 3}] This code should run correctly whether you use your original import statement (multiprocessing) or the above modified one (multithreading). Of course, depending on what some_function is actually doing, one will be more appropriate than the other.
1
1
79,521,498
2025-3-19
https://stackoverflow.com/questions/79521498/how-does-a-palette-work-in-python-imaging-library
Suppose I have an RGB888 image (= 8 bit per color) in which the color of each pixel evenly encodes a (raw) data byte. Scheme: Channel red encodes Bit 7:6 (R: 0-63 = 0b00, 64-127 = 0b01, 128-191 = 0b10, 192-255 = 0b11) Channel green encodes Bit 5:3 (G: 0-31 = 0b000, 32-63 = 0b001, 64-95 = 0b010, 96-127 = 0b011, ...) Channel blue encodes Bit 2:0 (B: 0-31 = 0b000, 32-63 = 0b001, 64-95 = 0b010, 96-127 = 0b011, ...) Example: RGB = [150, 40, 94] represents (raw) data byte 0b10001010. Question: I would like to use the Python Imaging Library (PIL) to decode my image. If I got it right, then I would need to define a palette and apply it to my image using the quantize() method (as described here). However, I don't understand how the mapping of the colors does work. How does the palette translate the original 24 Bits of color information into 8 Bits? Finally: How would I need to set up my palette so that the decoding works according to my scheme? Edit: My image does not contain a subject. It contains data. Below is an example ("Data" is what I like to decode using PIL).
I think your question maybe means something different altogether. I think you want to convert ranges of numbers into small integers, i.e. 0-31 becomes 0, 32-63 becomes 1. So you just need to shift your values to the right and you will lose the less significant bits. If your image is this: import numpy as np array([102, 220, 225, 95, 179, 61, 234, 203, 92, 3], dtype=uint8) Then, because 2^6 is 64, you can get integers in range 0..3, with: partA = image >> 6 which will give you: array([1, 3, 3, 1, 2, 0, 3, 3, 1, 0], dtype=uint8) Or, if image is three channels and you want to extract from the Red channel, use: partA = image[...,0] >> 6 Similarly, if image is three channels and you want to extract from the Blue channel, use: partA = image[...,2] >> 6 And, because 2^5 is 32, you can get integers in range 0..7, with: partB = image >> 5 which will give you: array([3, 6, 7, 2, 5, 1, 7, 6, 2, 0], dtype=uint8) Now you can reconstruct your (hidden?) byte by shifting and ORing together the parts: reconstructed = (partA << 6) | (partB << 3) | partC Once you have your data in a Numpy array called reconstructed you can make it into a PIL Image with: from PIL import Image pImage = Image.fromarray(reconstructed)
1
1
79,522,783
2025-3-20
https://stackoverflow.com/questions/79522783/nonexistenttime-error-caused-by-pandas-timestamp-floor-with-localised-timestamp
I need to calculate the floor of a localized timestamp with daily resolution, but I get an exception when the daylight saving time starts. >>> pd.Timestamp('2024-09-08 12:00:00-0300', tz='America/Santiago').floor("D") NonExistentTimeError: 2024-09-08 00:00:00 I understand that midnight does not exist on that day, clocks are moved to 1am after 11:59pm. Still I would have expected floor to return pd.Timestamp('2024-09-08 01:00:00-0300', tz='America/Santiago'). The same happens with ceil applied to the previous day: >>> pd.Timestamp('2024-09-07 12:00:00-0300', tz='America/Santiago').ceil("D") NonExistentTimeError: 2024-09-08 00:00:00 I made two attempts at solving this but I either get the same exception or the wrong answer: >>> pd.Timestamp('2024-09-08 12:00:00').floor("D").tz_localize('America/Santiago') NonExistentTimeError: 2024-09-08 00:00:00 >>> pd.Timestamp('2024-09-08 12:00:00-0300').floor("D").tz_convert('America/Santiago') Timestamp('2024-09-07 23:00:00-0400', tz='America/Santiago') # Wrong answer
By default a non-existent time will raise an error. There is an nonexistent option in Timestamp.floor to shift the time forward/backward: (pd.Timestamp('2024-09-08 12:00:00-0300', tz='America/Santiago') .floor('D', nonexistent='shift_backward') ) # Timestamp('2024-09-07 23:59:59-0400', tz='America/Santiago') (pd.Timestamp('2024-09-08 12:00:00-0300', tz='America/Santiago') .floor('D', nonexistent='shift_forward') ) # Timestamp('2024-09-08 01:00:00-0300', tz='America/Santiago') Or, to get 23:00 passing pd.Timedelta('-1h') (not sure of the relevance to do this): (pd.Timestamp('2024-09-08 12:00:00-0300', tz='America/Santiago') .floor('D', nonexistent=pd.Timedelta('-1h')) ) # Timestamp('2024-09-07 23:00:00-0400', tz='America/Santiago')
2
2
79,521,805
2025-3-20
https://stackoverflow.com/questions/79521805/shiftenter-inserts-extra-indents
I have a Python source file with some dummy code: a = 3 if a == 1: print("a = 1") elif a == 2: print("a = 2") else: print("Other") When I submit the code to terminal with shift+enter, I get the following error. It looks like VS Code changed the indentation of my code. The same code ran just fine with shift+enter on my other computer. The error message is as below: PS C:\Users\win32> & C:/Users/win32/AppData/Local/Programs/Python/Python313/python.exe Python 3.13.0 (tags/v3.13.0:60403a5, Oct 7 2024, 09:38:07) [MSC v.1941 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> a = 3 >>> if a == 1: ... print("a = 1") ... elif a == 2: ... print("a = 2") ... else: ... print("Other") ... File "<python-input-1>", line 3 elif a == 2: ^^^^ SyntaxError: invalid syntax Any insights?
This is a known bug with Python 3.13 ("All functions fail to send: IndentationError: unexpected indent"). Use Python 3.12 instead until it is fixed.
3
1
79,516,939
2025-3-18
https://stackoverflow.com/questions/79516939/triton-strange-error-with-matrix-multiplication
I have 2 matrices P and V and when I take their dot product with triton I get results that are inconsistent with pytorch. The P and V matrices are as follows. P is basically the softmax which is why it is mostly 0s except the final column, the result of the dot product should be the final row of V. P = torch.zeros((32,32), device = 'cuda', dtype = torch.float32) P[:,-1] = 1 V = torch.arange(32*64, 64 * 64, device = 'cuda', dtype = torch.float32). reshape(32, 64) On calling tl.dot(P, V) both are loading correctly (or so they appear to me), but the output is [4032., 4032., 4034., 4034., 4036., 4036., 4038., 4038., 4040., 4040., 4042., 4042., 4044., 4044., 4046., 4046., 4048., 4048., 4050., 4050., 4052., 4052., 4054., 4054., 4056., 4056., 4058., 4058., 4060., 4060., 4062., 4062., 4064., 4064., 4066., 4066., 4068., 4068., 4070., 4070., 4072., 4072., 4074., 4074., 4076., 4076., 4078., 4078., 4080., 4080., 4082., 4082., 4084., 4084., 4086., 4086., 4088., 4088., 4090., 4090., 4092., 4092., 4094., 4094.] instead of what I get from torch.matmul which is [4032., 4033., 4034., 4035., 4036., 4037., 4038., 4039., 4040., 4041., 4042., 4043., 4044., 4045., 4046., 4047., 4048., 4049., 4050., 4051., 4052., 4053., 4054., 4055., 4056., 4057., 4058., 4059., 4060., 4061., 4062., 4063., 4064., 4065., 4066., 4067., 4068., 4069., 4070., 4071., 4072., 4073., 4074., 4075., 4076., 4077., 4078., 4079., 4080., 4081., 4082., 4083., 4084., 4085., 4086., 4087., 4088., 4089., 4090., 4091., 4092., 4093., 4094., 4095.] The following is the code I'm testing this out in import triton import triton.language as tl import torch torch.cuda.is_available() torch.set_printoptions(profile="full") @triton.jit def test_kernel(x_ptr,y_ptr,output_ptr, M, K, N, stride_xm, stride_xk, stride_yk, stride_yn, stride_om, stride_on, BLOCK_SIZE_M: tl.constexpr, BLOCK_SIZE_K: tl.constexpr, BLOCK_SIZE_N: tl.constexpr): pid_m = tl.program_id(axis = 0) * BLOCK_SIZE_M pid_n = tl.program_id(axis = 1) * BLOCK_SIZE_N x_ptr += (pid_m + tl.arange(0, BLOCK_SIZE_M))[:,None] * stride_xm + (pid_n + tl.arange(0, BLOCK_SIZE_K))[None,:]*stride_xk y_ptr += (pid_m + tl.arange(0, BLOCK_SIZE_K))[:,None] * stride_yk + (pid_n + tl.arange(0, BLOCK_SIZE_N))[None,:]*stride_yn x = tl.load(x_ptr) y = tl.load(y_ptr) output_offset = (pid_m + tl.arange(0, BLOCK_SIZE_M))[:,None] *stride_om + (pid_n + tl.arange(0, BLOCK_SIZE_N))[None, :] *stride_on tl.store(output_ptr + output_offset, tl.dot(x,y)) def helper(x: torch.Tensor, y: torch.Tensor): M , K = x.shape K1, N = y.shape assert K == K1 output = torch.empty((M, N), device = 'cuda', dtype = torch.float32) assert x.is_cuda and y.is_cuda and output.is_cuda grid = lambda meta: (triton.cdiv(M, meta['BLOCK_SIZE_M']), triton.cdiv(N, meta['BLOCK_SIZE_N']),) test_kernel[grid](x, y, output, M, K, N, x.stride(0), x.stride(1), y.stride(0), y.stride(1), output.stride(0), output.stride(1), BLOCK_SIZE_N = 64, BLOCK_SIZE_K = 32, BLOCK_SIZE_M = 32, ) return output The strangest thing is when I define V = torch.arange(0, 32*64, device = 'cuda', dtype = torch.float32). reshape(32, 64) it works as expected. Is there something with pointer operations that I'm missing here?
The problem has been solved (thanks to dai on discord). The issue is that input_precision is tf32 by default for the dot product, which has a 10-bit mantissa, leading to trailing digit loss. The problem was very pronounced with V = torch.arange(4096, 4096 + 2048, device = 'cuda', dtype = torch.float32), where the output was [6080., 6080., 6080., 6080., 6084., 6084., 6084., 6084., 6088.,...]. Switching to "ieee" input precision with tl.dot(x, y, input_precision = "ieee") solved the issue.
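For anyone wondering what a 10-bit mantissa does to these values, here is a rough illustration in plain NumPy; it is not what the hardware literally does (it truncates instead of rounding), but for these numbers the effect is the same: values around 4033 collapse onto their even neighbours, which is exactly the duplicated-columns pattern in the question.

import numpy as np

def to_tf32(x):
    # emulate tf32 by keeping only the 10 most significant mantissa bits of a float32
    bits = np.float32(x).view(np.uint32)
    return float((bits & np.uint32(0xFFFFE000)).view(np.float32))

print(to_tf32(4032.0), to_tf32(4033.0))  # both print 4032.0, so neighbouring V entries become indistinguishable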
1
1
79,521,249
2025-3-19
https://stackoverflow.com/questions/79521249/advice-on-using-wagtail-e-g-richtextfield-with-pylance-type-checking
Nearly all my Wagtail models files are full of errors according to Pylance and I'm not sure how to silence them without either adding # type: ignore to hundreds of lines or turning off Pylance rules that help catch genuine bugs. The errors often come from RichTextField properties on my models. Here is a simple example: from wagtail.models import Page from wagtail.fields import RichTextField class SpecialPage(Page): introduction = RichTextField( blank=True, features=settings.RICHTEXT_ADVANCED, help_text="Text that appears at the top of the page.", ) where RICHTEXT_ADVANCED is a list[str] in my settings file. This code works fine. The arguments passed to RichTextField all exist in its __init__ method or the __init__ method of one a parent class. Yet Pylance in VS Code underlines all three lines of introduction in red and says: No overloads for "__new__" match the provided arguments Argument types: (Literal[True], Any, Literal['Text that appears at the top of the page.']) Pylance(reportCallIssue) Is this a bug in Pylance? Is there a way around it other than the two (undesirable) approaches I mentioned above? Or could Wagtail provide more explicit type hints or @overload indicators to prevent the errors? The class inheritance goes RegisterLookupMixin > Field (has blank and help_text) > TextField (has features) > RichTextField. None of these classes have a __new__ method. The values I'm providing all match the types defined in the parent classes. I'm on a 5.x Wagtail, has this perhaps been fixed in more recent releases?
You're hitting a deficiency in pylance: the heuristic doesn't apply in your case and causes troubles. I don't have anything powered by pylance available to investigate this, but you may try with a smaller snippet to check if pylance thinks that def __init__(*args, **kwargs) on a subclass means "use same arguments as parent". This is often true, but also often wrong. class A: def __init__(self, foo): self.foo = foo class B(A): def __init__(self, *args, **kwargs): self.bar = kwargs.pop("bar") super().__init__(*args, **kwargs) B(foo=0, bar=1) Pyright accepts this code, so the problem is most likely in its wrapper - pylance. Neither Wagtail nor Django are type hinted, so this example is representative of what you observe. RichTextField defines a catch-all args, kwargs constructor, so pyright looks further up the inheritance chain. All the way to django.db.models.TextField with def __init__(self, *args, db_collation=None, **kwargs) and finally up to plain Field that defines all arguments explicitly here. Now, this should be possible to circumvent somehow, right? Right?.. Try setting useLibraryCodeForTypes to false in your pyright configuration - that may work. It won't help on my example, though, as all code there is inline. This will make your type checking less reliable, but also hepl you avoid spurious errors from unexpected inference of overly strict types. You can copy all arguments from django.db.models.Field, add arguments supported by RichTextField to that list and write your wrapper class (and use it everywhere instead of RichTextField). To avoid harming your runtime code, here's what it may look like: from typing import TYPE_CHECKING from django.db.models.fields import NOT_PROVIDED from wagtail.fields import RichTextField class MyRichTextField(RichTextField): if TYPE_CHECKING: # False at runtime def __init__( self, verbose_name=None, name=None, primary_key=False, max_length=None, unique=False, blank=False, null=False, db_index=False, rel=None, default=NOT_PROVIDED, editable=True, serialize=True, unique_for_date=None, unique_for_month=None, unique_for_year=None, choices=None, help_text="", db_column=None, db_tablespace=None, auto_created=False, validators=(), error_messages=None, db_comment=None, db_default=NOT_PROVIDED, *, # from TextField db_collation=None, # from RichTextField editor="default", features=None, ): ... You may add annotations to the fields you're going to use, if you'd like to. If you want, django-stubs already define type hints for these arguments, right here - you can use that for reference.
1
1
79,519,890
2025-3-19
https://stackoverflow.com/questions/79519890/how-can-i-place-the-icon-after-the-text-of-a-qpushbutton-in-pyqt5
How can I place the icon after the text of a QPushButton in PyQt5? The icon should not come before the text - it should come after or below the text. Currently it's looking like this: but it should be like this: Track ⨁ from PyQt5.QtWidgets import QApplication, QPushButton, QWidget, QVBoxLayout from PyQt5.QtGui import QIcon from PyQt5.QtCore import Qt import sys app = QApplication(sys.argv) window = QWidget() layout = QVBoxLayout(window) btn_add_track = QPushButton("Track") btn_add_track.setIcon(QIcon("Icons/plus.png")) layout.addWidget(btn_add_track) window.show() sys.exit(app.exec_())
The simplest option is to change the direction of the button layout: btn_add_track.setLayoutDirection(Qt.LayoutDirection.RightToLeft) Unlike the other solution, this will maintain the appearance of the original button, such as the centering of the icon and text relative to the button (this can also be done with the other solution but requires much more code).
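Applied to the snippet from the question, the change is just one extra line; a sketch assuming the same imports, layout and icon path as in the question:

btn_add_track = QPushButton("Track")
btn_add_track.setIcon(QIcon("Icons/plus.png"))
# flip the button's internal layout so the icon is drawn after the text
btn_add_track.setLayoutDirection(Qt.LayoutDirection.RightToLeft)
layout.addWidget(btn_add_track)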
1
1
79,518,603
2025-3-18
https://stackoverflow.com/questions/79518603/how-to-populate-a-2-d-numpy-array-with-values-from-a-third-dimension
New Post: Processing satellite conjunctions with numpy efficiently Original Post: I have a numpy array of shape n x m x r, where the n axis represents an object, the m axis represents a timestep and the r axis represents a position vector in 3-d space. I have an array containing three (x, y and z position values) of m objects at n points in time. This is the format my data is delivered in (from the python module sgp4, specifically a Satrec_Array if anyone's interested) so I can't move this further up the data processing chain. I want to be able to represent this as an m x n array of position vectors, so essentially "collapsing" the position axis into a single array, so an array containing m x n elements, each of which is a position vector relating to an object at a time. I'm struggling to find a way to do this efficiently - I can do it with bog standard python loops, but as I scale up the number of objects and timesteps this becomes massively inefficient. I'm not very well versed in numpy, but none of the solutions I've searched or attempts at using methods such as [h/v/d]stack etc. have given me the right final shape. I also looked into vectorization but as far as I can see that just implements a loop under the hood? Example with random numbers and an input array of shape (3,2,3) In[1]: m = 3 n = 2 r = 3 In[2]: a = np.random.random((m,n,r)) In[3]: a Out[3]: array([[[0.8416, 0.3694, 0.5708], [0.3779, 0.579 , 0.207 ]], [[0.7871, 0.6547, 0.0047], [0.1115, 0.1445, 0.6147]], [[0.8538, 0.2821, 0.8094], [0.6214, 0.0147, 0.5852]]]) In[4]: a.shape Out[4]: (3, 2, 3) In[4]: new_a = np.empty(shape=(m,n), dtype=object) for i in range(m): for j in range(n): new_a[i,j] = a[i,j,:] In[5]: new_a Out[5]: array([[array([0.8416, 0.3694, 0.5708]), array([0.3779, 0.579 , 0.207 ])], [array([0.7871, 0.6547, 0.0047]), array([0.1115, 0.1445, 0.6147])], [array([0.8538, 0.2821, 0.8094]), array([0.6214, 0.0147, 0.5852])]], dtype=object) In[6]: new_a.shape Out[6]: (3, 2)
"I'm struggling to find a way to do this efficiently - I can do it with bog standard python loops, but as I scale up the number of objects and timesteps this becomes massively inefficient." What you want to do makes no sense in numpy. Unless using an object dtype, there is no way to have anything else than numeric data as items. You should keep your nd-array. In fact, what you have ((m, n, r) shape) is already a (m, n) array of vectors of dimension r. Think of it this way if you like rather than (m, n, r). Numpy operation can very efficiently operate on a subset of the dimensions. For example: # add 10/100/1000 to the first dimension a + np.array([10, 100, 1000])[:, None, None] array([[[ 10.8416, 10.3694, 10.5708], [ 10.3779, 10.579 , 10.207 ]], [[ 100.7871, 100.6547, 100.0047], [ 100.1115, 100.1445, 100.6147]], [[1000.8538, 1000.2821, 1000.8094], [1000.6214, 1000.0147, 1000.5852]]]) a + np.array([10, 100])[:, None] # add 10/100 to the second dimension array([[[ 10.8416, 10.3694, 10.5708], [100.3779, 100.579 , 100.207 ]], [[ 10.7871, 10.6547, 10.0047], [100.1115, 100.1445, 100.6147]], [[ 10.8538, 10.2821, 10.8094], [100.6214, 100.0147, 100.5852]]]) Your issue is most likely a XY problem. You should keep the (m, n, r) shape and try to solve your ultimate goal with a nd-array.
1
1
79,519,830
2025-3-19
https://stackoverflow.com/questions/79519830/what-is-the-best-way-to-get-the-last-non-zero-value-in-a-window-of-n-rows
This is my dataframe: df = pd.DataFrame({ 'a': [0, 0, 1, -1, -1, 0, 0, 0, 0, 0, -1, 0, 0, 1, 0] }) Expected output is creating column b: a b 0 0 0 1 0 0 2 1 0 3 -1 1 4 -1 -1 5 0 -1 6 0 -1 7 0 -1 8 0 0 9 0 0 10 -1 0 11 0 -1 12 0 -1 13 1 -1 14 0 1 Logic: I explain the logic by some examples: I want to create column b to df I want to have a window of three rows for example for row number 3 I want to look at three previous rows and capture the last non 0 value. if all of the values are 0 then 'b' is 0. in this case the last non zero value is 1. so column b is 1 for example for row number 4 . The last non zero value is -1 so column b is -1 I want to do the same for all rows. This is what I have tried so far. I think there must be a better way. import pandas as pd df = pd.DataFrame({ 'a': [0, 0, 1, -1, -1, 0, 0, 0, 0, 0, -1, 0, 0, 1, 0] }) def last_nonzero(x): # x is a pandas Series representing a window nonzero = x[x != 0] if not nonzero.empty: # Return the last non-zero value in the window (i.e. the one closest to the current row) return nonzero.iloc[-1] return 0 # Shift by 1 so that the rolling window looks only at previous rows. # Use a window size of 3 and min_periods=1 to allow early rows. df['b'] = df['a'].shift(1).rolling(window=3, min_periods=1).apply(last_nonzero, raw=False).astype(int)
I don't think there is a much more straightforward approach. There is currently no rolling.last method. You could however simplify a bit your code: def last_nonzero(s): return 0 if (x:=s[s != 0]).empty else x.iloc[-1] df['b'] = (df['a'].shift(1, fill_value=0) .rolling(window=3, min_periods=1).apply(last_nonzero) .convert_dtypes() ) With a lambda: df['b'] = (df['a'].shift(1, fill_value=0) .rolling(window=3, min_periods=1) .apply(lambda s: 0 if (x:=s[s != 0]).empty else x.iloc[-1]) .convert_dtypes() ) Actually, if you have a range index, you could also use a merge_asof on the indices: window = 3 out = pd.merge_asof( df, df['a'].shift(1, fill_value=0).loc[lambda x: x != 0].rename('b'), left_index=True, right_index=True, tolerance=window-1, direction='backward', ).fillna({'b': 0}) Output: a b 0 0 0 1 0 0 2 1 0 3 -1 1 4 -1 -1 5 0 -1 6 0 -1 7 0 -1 8 0 0 9 0 0 10 -1 0 11 0 -1 12 0 -1 13 1 -1 14 0 1
3
4
79,519,395
2025-3-19
https://stackoverflow.com/questions/79519395/how-to-skip-if-starts-with-but-match-other-strings
I want to match and substitute for strings as shown in the example below, but not for some strings which start with test or !!. I have used negative lookahead to skip matching unwanted strings but (Street|,)(?=\d) matching for Street & comma replacing group 1 with UK/ is not working as expected. import re input = [ 'Street1-2,4,6,8-10', '!! Street4/31/2', 'test Street4' ] pattern = r'(^(?!test\s|!!\s).*(Street|,)(?=\d))' output = [re.sub(pattern, r'\g<1>UK/', line) for line in input ] Actual output: ['Street1-2,4,6,UK/8-10', '!! Street4/31/2', 'test Street4'] Expected output: ['StreetUK/1-2,UK/4,UK/6,UK/8-10', '!! Street4/31/2', 'test Street4']
You could change the pattern to use 2 capture groups, and then use a callback with re.sub. The callback checks if there is a group 1 value. If there is, use it in the replacement, else use group 2 followed by UK/ ^((?:!!|test)\s.*)|(Street|,)(?=\d) The regex matches ^((?:!!|test)\s.*) Capture either !! or test at the start of the string followed by a whitespace char and then the rest of the line in group 1 | Or (Street|,)(?=\d) Capture either Street or , in group 2 while asserting a digit to the right See a regex101 demo import re lst = ['Street1-2,4,6,8-10', '!! Street4/31/2', 'test Street4'] pattern = r'^((?:!!|test)\s.*)|(Street|,)(?=\d)' output = [re.sub(pattern, lambda m: m.group(1) or m.group(2) + 'UK/', line) for line in lst] print(output) Output ['StreetUK/1-2,UK/4,UK/6,UK/8-10', '!! Street4/31/2', 'test Street4']
7
5
79,518,999
2025-3-19
https://stackoverflow.com/questions/79518999/why-stdskipna-false-and-stdskipna-true-yield-different-results-even-when-the
I have a pandas Series s, and when I call s.std(skipna=True) and s.std(skipna=False) I get different results even when there are no NaN/null values in s, why? Did I misunderstand the skipna parameter? I'm using pandas 1.3.4 import pandas as pd s = pd.Series([10.0]*4800000, index=range(4800000), dtype="float32") # No NaN/null in the Series print(s.isnull().any()) # False print(s.isna().any()) # False # Why the code below prints different results? print(s.std(skipna=False)) # 0.0 print(s.std(skipna=True)) # 0.61053276
This is an issue with the Bottleneck optional dependency, used to accelerate some NaN-related routines. I think the wrong result happens due to loss of precision while calculating the mean, since Bottleneck uses naive summation, while NumPy uses more accurate pairwise summation. You can disable Bottleneck with pd.set_option('compute.use_bottleneck', False) to fall back to the NumPy handling.
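As a rough illustration of the summation difference described above (not Bottleneck's actual code path), sequential float32 accumulation drifts once the running total is so large that adding 10.0 can no longer be represented exactly, while NumPy's pairwise np.sum stays exact for this data:

import numpy as np

x = np.full(4_800_000, 10.0, dtype=np.float32)
print(np.sum(x))         # 48000000.0, pairwise summation
print(np.cumsum(x)[-1])  # noticeably smaller: a naive float32 running sum rounds away part of each addition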
3
3
79,516,763
2025-3-18
https://stackoverflow.com/questions/79516763/method-decorators-which-tag-method-prevent-overwriting-by-other-decorators
I'm investigating the pattern whereby you have a method decorator which annotates the method in some way, and then once the class is defined it looks through its methods, finds the annotated methods, and registers or processes them in some way. e.g. def class_decorator(cls): for name, method in cls.__dict__.iteritems(): if hasattr(method, "use_class"): # do something with the method and class print name, cls return cls def method_decorator(view): # mark the method as something that requires view's class view.use_class = True return view @class_decorator class ModelA(object): @method_decorator def a_method(self): # do some stuff pass (From here: https://stackoverflow.com/a/2367605/1256529) I wondered if anyone has a solution to the problem of multiple decorators. For example, this example works fine in isolation, but if I do the following, it will break because the use_class annotation will be hidden. def hiding_decorator(fn): def wrapper(*args, **kwargs): print("do some logging etc.") return fn(*args, **kwargs) return wrapper @class_decorator class ModelA(object): @hiding_decorator @method_decorator def a_method(self): # do some stuff pass Does anyone have a reliable and user-friendly way around this problem? Non-optimal solutions I can think of: Dictate that your decorator should be outermost (but what if other decorators also have this requirement?) Dictate that it only works with decorators that are transparent proxies for attributes of their inner functions/methods. Require other decorators to be aware of the requirements of the tagging decorator (means generic utility decorators become difficult to use). None of these are particularly attractive.
Since a "tag" is always applied on a function object itself, it is always going to be susceptible to obstruction by an unaware wrapper function. To achieve a similar usage you can implement the behavior with a registry pattern instead by adding method names to a list with a method decorator so that the class decorator can obtain the final decorated function object with getattr: class Registry: def __init__(self): self.names = [] def register(self, func): self.names.append(func.__name__) return func def apply(self, cls): for name in self.names: print(cls, name) print(getattr(cls, name)) registry = Registry() @registry.apply class ModelA(object): @hiding_decorator @registry.register def a_method(self): pass This outputs something like: <class '__main__.ModelA'> a_method <function hiding_decorator.<locals>.wrapper at 0x1551795d67a0> Demo: https://ideone.com/R43Izy
3
3
79,518,393
2025-3-18
https://stackoverflow.com/questions/79518393/can-we-get-x21-instead-of-1-x2-with-sympy-latex-x21
I need -x^{2}+1 rather than 1-x^{2} with sympy.latex(-x**2+1). from sympy import symbols, latex x = symbols('x') print(-x**2+1) print(latex(-x**2+1)) Output: 1 - x**2 1 - x^{2} Is it possible to change the default format?
As suggested in comments, you can use the order argument to change the result ordering! https://docs.sympy.org/latest/modules/printing.html#sympy.printing.latex.latex order: string, optional Any of the supported monomial orderings (currently 'lex', 'grlex', or 'grevlex'), 'old', and 'none'. This parameter does nothing for .Mul objects. Setting order to 'old' uses the compatibility ordering for ~.Add defined in Printer. For very large expressions, set the order keyword to 'none' if speed is a concern. >>> print(latex(-x**2+1)) 1 - x^{2} >>> print(latex(-x**2+1, order="lex")) - x^{2} + 1 Incidentally, it could be worth a PR to make this the default ordering when the expression is literally quite short (by count of atoms?), but I'm not sufficiently familiar with real-world cases of it.
4
3
79,518,311
2025-3-18
https://stackoverflow.com/questions/79518311/arrange-consecutive-zeros-in-panda-by-specific-rule
I have panda series as the following : 1 1 2 2 3 3 4 4 5 0 6 0 7 1 8 2 9 3 10 0 11 0 12 0 13 0 14 1 15 2 I have to arrange this in following format : 1 1 2 2 3 3 4 4 5 0 6 0 7 3 ---> 4-2+1 (previous non zero value - amount of previous zeroes + current value) 8 4 ---> 4-2+2 (previous non zero value - amount of previous zeroes + current value) 9 5 ---> 4-2+3 (previous non zero value - amount of previous zeroes + current value) 10 0 11 0 12 0 13 0 14 2 ---> 5-4+1 (previous non zero value - amount of previous zeroes + current value) 15 3 ---> 5-4+2 (previous non zero value - amount of previous zeroes + current value) I am stuck at this. Till now I am able to produce a data frame with consecutive zeroes. zero = ser.eq(0).groupby(ser.ne(0).cumsum()).cumsum() which gave me: 1 0 2 0 3 0 4 0 5 1 6 2 7 0 8 0 9 0 10 1 11 2 12 3 13 4 14 0 15 0 if someone willing to assist on this. i am dropping cookie cutter for this problem which will create the above series. d = {'1': 1, '2': 2, '3': 3, '4':4, '5':0, '6':0, '7':1, '8':2, '9':3, '10':0, '11':0, '12':0, '13':0, '14':1, '15':2} ser = pd.Series(data=d)
Although this can only be done with Pandas in a rather convoluted way IMHO, here is a straightforward implementation using Numba (which should also be faster than all Pandas solutions): import numba as nb import numpy as np @nb.njit(['(int32[:],)', '(int64[:],)']) def compute(arr): res = np.empty(arr.size, dtype=arr.dtype) z_count = 0 last_nnz_val = 0 nnz_count = 0 for i in range(arr.size): if arr[i] == 0: if i > 0 and arr[i-1] != 0: # If there is a switch from nnz to zero last_nnz_val += nnz_count - z_count # Save the last nnz result z_count = 0 z_count += 1 res[i] = 0 else: if i > 0 and arr[i-1] == 0: # If there is a switch from zero to nnz nnz_count = 0 nnz_count += 1 res[i] = last_nnz_val - z_count + nnz_count return res # [...] compute(ser.to_numpy()) Note the result is a basic Numpy array, but you can easily create a dataframe from it. Benchmark Here are performance results on my machine (i5-9600KF CPU) on the tiny example dataset: MichaelCao's answer: 886 µs This answer: 2 µs <----- On a 1000x larger dataset (repeated), I get: MichaelCao's answer: 1240 µs This answer: 20 µs <----- It is much faster than the other answer. I also get different output results so one of the answer implementation is certainly wrong.
4
3
79,517,885
2025-3-18
https://stackoverflow.com/questions/79517885/how-to-scroll-and-click-load-more-results-using-selenium-in-python-booking-com
I’m new to web scraping with Selenium, and I’m trying to scrape property listings from Booking.com. My code (included below) successfully scrapes 25 results, but I suspect the issue is that more results are available if I scroll and click the "Load more results" button. I've tried using execute_script to scroll and find_element to locate the button, but I’m not sure how to implement a loop that continues loading results until the button disappears (or no more results are available). Here's my code so far: # Relevant imports from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.common.exceptions import TimeoutException, NoSuchElementException # WebDriver setup driver = webdriver.Chrome(service=Service()) driver.get("https://www.booking.com/searchresults.en-gb.html?ss=cornwall...") def handle_no_such_element_exception(data_extraction_task): try: return data_extraction_task() except NoSuchElementException: return None items = [] # Load more results logic (This part is where I’m struggling) while True: try: load_more_button = WebDriverWait(driver, 5).until( EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid='load-more-button']")) ) load_more_button.click() print("Clicked load more button...") except (TimeoutException, NoSuchElementException): print("No more results to load.") break # Scraping logic (This part works fine) property_items = driver.find_elements(By.CSS_SELECTOR, "[data-testid=\"property-card\"]") for property_item in property_items: title = handle_no_such_element_exception(lambda: property_item.find_element(By.CSS_SELECTOR, "[data-testid=\"title\"]").text) address = handle_no_such_element_exception(lambda: property_item.find_element(By.CSS_SELECTOR, "[data-testid=\"address\"]").text) review_score = handle_no_such_element_exception(lambda: property_item.find_element(By.CSS_SELECTOR, "[data-testid=\"review-score\"]").text) link = handle_no_such_element_exception(lambda: property_item.find_element(By.CSS_SELECTOR, "[data-testid=\"title-link\"]").get_attribute("href")) item = { "title": title, "address": address, "review_score": review_score, "link": link } items.append(item) print(items) driver.quit() What I’m asking: How can I properly scroll to load more results? How do I make sure that the "Load more results" button is clicked until no more results are available? Any guidance would be much appreciated!
I did a few changes to the code to make it work for your case: the first results are loaded automatically when the user scrolls so first we need to scroll to the bottom of the page a few times only then the "load more button" appears and we need to properly located it and click it I also closed the cookie banner as it was interfering with clicking the button Here is the relevant part: # get rid of the cookie banner coookie_button = WebDriverWait(driver, 5).until( EC.element_to_be_clickable((By.ID, "onetrust-accept-btn-handler")) ) coookie_button.click() # Scroll to load more results using JavaScript on the client prev_height = -1 max_scrolls = 100 scroll_count = 0 while scroll_count < max_scrolls: driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") time.sleep(1.5) # give some time for new results to load new_height = driver.execute_script("return document.body.scrollHeight") if new_height == prev_height: # no more elements were loaded break prev_height = new_height scroll_count += 1 # Now click the load more button while there are more results while True: try: # choosing a good selector here is a bit tricky as there's # nothing reliable but this works at the moment load_more_button = WebDriverWait(driver, 5).until( EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-results-container=\"1\"] button.af7297d90d.c0e0affd09")) ) load_more_button.click() driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") print("Clicked load more button...") except (TimeoutException, NoSuchElementException): print("No more results to load.") break Using the code above I was able to extract 981 items for your search query. The code can be improved, but it works and shows the idea and I think you can improve it further as needed. Hope this helps!
1
2
79,516,990
2025-3-18
https://stackoverflow.com/questions/79516990/general-way-to-define-jax-functions-with-non-differentiable-arguments
For a particular JAX function func, one can define non-differentiable arguments by using the decorator @partial(jax.custom_jvp, nondiff_argnums=...). However, in order to make it work, one must also explicitly define the differentiation rules in a custom jvp function by using the decorator @func.defjvp. I'm wondering if there is a generic way to define non-differentiable arguments for any given func, without defining a custom jvp (or vjp) function? This will be useful when the differentiation rules are too complicated to write out.
In JAX's design, non-differentiated arguments are a property of the gradient transformation being used, not a property of the function being differentiated. custom_jvp is fundamentally about customizing the gradient behavior, and using it to mark non-differentiable arguments without actually customizing the gradient is not an intended use. The way to ensure that arguments do not participate in an autodiff transformation is to specify the arguments you want to differentiate against when you call the jax.grad, jax.jacobian, or other autodiff transformation; e.g. jax.grad(func, argnums=(0,)) # differentiate with respect to argument 0. Regardless of what func is, this will attempt to differentiate with respect to the 0th argument, and if that argument is either explicitly or implicitly not differentiable due to how func is defined, an error will be raised.
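A minimal sketch of that pattern (the function and argument names here are made up for illustration): differentiate only with respect to the array argument and simply treat the other argument as a constant.

import jax
import jax.numpy as jnp

def loss(params, scale):                  # `scale` is the argument we never differentiate
    return jnp.sum((params * scale) ** 2)

grad_fn = jax.grad(loss, argnums=0)       # gradients w.r.t. `params` only
print(grad_fn(jnp.array([1.0, 2.0]), 3.0))  # `scale` stays an ordinary Python float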
1
1
79,516,194
2025-3-18
https://stackoverflow.com/questions/79516194/typing-a-generic-iterable
I'm creating a function that yields chunks of an iterable. How can I properly type this function so that the return value bar is of type list[int]. from typing import Any, Generator, Sequence def chunk[T: Any, S: Sequence[T]](sequence: S, size: int) -> Generator[S, None, None]: for i in range(0, len(sequence), size): yield sequence[i : i + size] foo: list[int] = [1,2,3] bar = chunk(foo, 1) Sequence[T] is invalid because TypeVar constraint type cannot be generic PylancereportGeneralTypeIssues
The Type Parameter Scopes section of PEP 695 – Type Parameter Syntax specifically points out that Python's typing system currently does not allow a non-concrete upper bound type, but that it may be allowed in a future extension: # The following generates no compiler error, but a type checker # should generate an error because an upper bound type must be concrete, # and ``Sequence[S]`` is generic. Future extensions to the type system may # eliminate this limitation. class ClassA[S, T: Sequence[S]]: ... So until that happens you simply have to spell out the generic type in the type annotations instead: def chunk[T](sequence: Sequence[T], size: int) -> Generator[Sequence[T], None, None]: for i in range(0, len(sequence), size): yield sequence[i : i + size]
2
1
79,533,867
2025-3-25
https://stackoverflow.com/questions/79533867/how-to-display-both-application-json-and-application-octet-stream-content-types
I have an endpoint that can return JSON or a file (xlsx): class ReportItemSerializer(BaseModel): id: int werks: Annotated[WerkSerializerWithNetwork, Field(validation_alias="werk")] plu: Annotated[str, Field(description="PLU name")] start_date: Annotated[date, None, Field(description="Start date of the period in the format")] end_date: Annotated[date | None, Field(description="End date of the period")] anfmenge: Annotated[int | None, Field(description="Stock at the start of the period")] endmenge: Annotated[int | None, Field(description="Remaining PLU quantity per supplier at the end of the period")] soll: Annotated[int, None, Field(description="Quantity received during the period")] haben: Annotated[ int | None, Field(description="PLU quantity returned to the supplier during the given period")] model_config = ConfigDict(from_attributes=True) class PaginatedReportItemsSerializer(BaseModel): count: int results: list[ReportItemSerializer] @router.get( "/orders/report", responses={ 200: { "description": "Return report", "content": { "application/json": { "schema": PaginatedReportItemsSerializer.schema(ref_template="#/components/schemas/{model}") }, "application/octet-stream": {} }, }, } ) async def get_report(): pass With this configuration I have a problem in Swagger: the nested ReportItemSerializer is not being displayed (a string is displayed instead). How can I fix it?
Here is a working example, heavily based on this answer; hence, please have a look at that answer for more details. The example below will correctly display the example value/schema in Swagger UI autodocs, below the 200 response's description. Using the drop down menu, one could switch between the various media types—in this case, that is application/json and application/octet-stream. Working Example from fastapi import FastAPI from fastapi.openapi.constants import REF_PREFIX from pydantic import BaseModel from datetime import datetime class SubMessage(BaseModel): msg: str dt: datetime = None class Message(BaseModel): msg: str sub: SubMessage def get_200_schema(): return { 'model': Message, 'content': { 'application/json': { 'schema': {'$ref': REF_PREFIX + Message.__name__} }, 'application/octet-stream': { 'schema': {} # whatever } }, } app = FastAPI() @app.get('/', responses={200: get_200_schema()}) async def get_msg(): return Message(msg='main msg', sub=SubMessage(msg='sub msg', dt=datetime.now()))
2
1
79,541,683
2025-3-28
https://stackoverflow.com/questions/79541683/getting-infeasibility-while-solving-constraint-programming-for-shelf-space-alloc
I'm trying to allocate shelf space to items on planogram using constraint programming. It is a big problem and I'm trying to implement piece by piece. At first we're just trying to place items on shelf. The strategy is dividing whole planogram into multiple sections. For example if my planogram width is 10 cm with 3 shelfs and chosen granalarity is 1 then the total avialable space is 30 grids. Now, let's say I have an item of 6 cm then accordingly, it'll take 6 grid on planogram. there can be multiple such products also, so I have defined 4 constraints to arrange items properly: All items should take exactly the number of grids equal to their length. All grids must be assigned to one item only. All the partition of item must be on one shelf. All the partition of item must be together. This approach is working perfectly for smaller configuration like the one below: here, tpnb is item number and linear is space required. Now, sample planogram configuration: granularity = 1, shelf_count = 3, section_width = 10 The image attached above is the solution that I got after running the code. but however this is not scaling well for larger planograms, like: granularity = 1 shelf_count = 7 section_width = 133 total item count: 57 What I'm getting is : Status: ExitStatus.UNSATISFIABLE (59.21191700000001 seconds) No solution found. I tried so many times with modifying the constraints n all but I'm unable to figure out what is causing this issue why it is failing for large config or what is the root cause of infeasibility here. Sharing my code below: pog_df['linear'] = pog_df.linear.apply(np.ceil) pog_df['gridlinear'] = pog_df.linear//granularity G = nx.grid_2d_graph(int(section_width * bay_count/granularity),int(shelf_count)) # Define the positions of the nodes for a planar layout pos = {(x, y): (x, y) for x, y in G.nodes()} # # Draw the graph plt.figure(figsize=(8, 4)) nx.draw(G, pos, node_size=0.07)#, with_labels=True, node_size=7, node_color="lightblue", font_weight="bold") products = pog_df[['tpnb', 'gridlinear']].astype(str) locations = pd.Series([str(s) for s in G.nodes()], name='location') locations = pd.concat([locations,locations.to_frame().location.str.strip("() ").str.split(',', expand=True).rename(columns={0: 'x', 1: 'y'})], axis=1) l_p_idx = pd.merge(products.reset_index(), locations, how='cross')[['tpnb', 'gridlinear', 'location', 'x', 'y']] n_location = len(locations) n_products = pog_df.shape[0] # create decision variables l_p_idx['Var'] = l_p_idx.apply(lambda x: cp.boolvar(name=x['location']+'-'+x['tpnb']), axis=1) m = cp.Model() l_p_idx.groupby('tpnb', as_index=False).agg({'Var':cp.sum, 'gridlinear': 'unique'}).apply(lambda x: m.constraints.append(x['Var']==int(float(x["gridlinear"]))), axis=1) l_p_idx.groupby('location', as_index=False).agg({'Var':cp.sum}).apply(lambda x: m.constraints.append(x['Var']<=1), axis=1) l_p_idx["y"] = l_p_idx["y"].astype("int32") shelf_var = {tpnb: cp.intvar(0, max(l_p_idx["y"])) for tpnb in l_p_idx["tpnb"].unique()} l_p_idx.apply(lambda row: m.constraints.append( (row['Var'] == 1).implies(row['y'] == shelf_var[row['tpnb']]) ), axis=1) def process_group(level, data): return level, {eval(row['location']): row['Var'] for _, row in data.iterrows()} def parallel_creator(key, idx_df): node_dict = {} with ProcessPoolExecutor() as executor: futures = {executor.submit(process_group, level, data): level for level, data in idx_df.groupby(key)} for future in as_completed(futures): level, var_dict = future.result() node_dict[level] = var_dict return node_dict 
node_p_var_dict = parallel_creator( 'tpnb', l_p_idx) for p in products.tpnb.values: for shelf in range(shelf_count): m.constraints.append( cp.sum([(node_p_var_dict[str(p)][(level, shelf)] != node_p_var_dict[str(p)][(level+1, shelf)]) for level in range(section_width - 1)]) <= 1 ) hassol = m.solve() print("Status:", m.status())
Actually, due to the formulation of the problem, the solver was always placing items on the edges (split between the first and last grid column of a shelf); adding one more constraint helped to solve the issue: m.constraints.append(node_p_var_dict[p][(0, shelf)] + node_p_var_dict[p][((section_width*bay_count)-1, shelf)] <= 1)
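Presumably this constraint is added inside the same product/shelf loop as the contiguity constraint, so that no item can occupy both the first and the last grid column of a shelf at the same time; a sketch reusing the names from the question:

for p in products.tpnb.values:
    for shelf in range(shelf_count):
        # forbid wrap-around placements that touch both ends of the shelf
        m.constraints.append(
            node_p_var_dict[str(p)][(0, shelf)]
            + node_p_var_dict[str(p)][((section_width * bay_count) - 1, shelf)]
            <= 1
        )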
2
1
79,547,143
2025-3-31
https://stackoverflow.com/questions/79547143/how-is-this-python-script-retrieving-youtube-urls-from-discogs
The purpose of the following code from here is to extract Youtube URLs from the Discogs API. It provides a JSON version of a list of releases on Discogs according to a particular search query, and an HTML example of this page would be: https://www.discogs.com/search/?sort=title%2Casc&format_exact=Vinyl&decade=1990&style_exact=Dub+Techno&type=release It then looks at each release and writes the Youtube URL to a text file. How does the following code extract Youtube URLs, given that data does not the contain the key videos? # Copy in your search URL, replace the parameters after /search? SEARCH_URL = 'https://api.discogs.com/database/search?sort=title&sortorder=desc&style="Dub+Techno"&format=Vinyl&type=release&year=1999&per_page=50' # Generate an access token at https://www.discogs.com/settings/developers ACCESS_TOKEN = #insert your own access token here # Some queries have a lot of results, limit the number by setting this PAGE_LIMIT = 25 import requests import time import sys from urllib.parse import parse_qs, urlparse from queue import Queue INITIAL_URL = SEARCH_URL + '&token=' + ACCESS_TOKEN q = Queue() q.put(INITIAL_URL) videos = [] file = open('video_urls.txt', 'w') def discogs_request(req_url): response = requests.get( req_url, headers={'User-agent': 'SearchToVideoList/1.0'}) if response.status_code == 429: time.sleep(60) return discogs_request(req_url) elif response.status_code == 200: return response.json() while not q.empty(): url = q.get() parsed_url = parse_qs(urlparse(url).query, keep_blank_values=True) data = discogs_request(url) if data is None: continue if 'page' in parsed_url and int(parsed_url['page'][0]) > PAGE_LIMIT: print('Page limit reached, exiting') continue if url is INITIAL_URL: print('Crawling %s releases in %s pages' % ( data['pagination']['items'], data['pagination']['pages'])) if 'results' in data: for release in data['results']: q.put(release['resource_url'] + '?token=' + ACCESS_TOKEN) if 'videos' in data: print(data['videos']) for video in data['videos']: file.write("%s\n" % video['uri']) print("Writing the following video to text: {}".format(video['uri'])) file.flush() if 'pagination' in data: print('Current page: %s' % data['pagination']['page']) if 'next' in data['pagination']['urls']: q.put(data['pagination']['urls']['next']) q.task_done() As I have only recently been acquainted with the mentioned Python modules, I tried to understand the code by abstracting away the queue in the code, so I began by running instead, # Copy in your search URL, replace the parameters after /search? SEARCH_URL = 'https://api.discogs.com/database/search?sort=title&sortorder=desc&style="Dub+Techno"&format=Vinyl&type=release&year=1999&per_page=50' # Generate an access token at https://www.discogs.com/settings/developers ACCESS_TOKEN = #insert your own access token here. # Some queries have a lot of results, limit the number by setting this PAGE_LIMIT = 25 import requests import time import sys from urllib.parse import parse_qs, urlparse from queue import Queue INITIAL_URL = SEARCH_URL + '&token=' + ACCESS_TOKEN # This function is a wrapper for the requests module, which is accessing the HTTP gateway. # requests.get() returns a response object, indicating whether the HTTP resource is available. 
def discogs_request(req_url): response = requests.get( req_url, headers={'User-agent': 'SearchToVideoList/1.0'}) if response.status_code == 429: time.sleep(60) return discogs_request(req_url) elif response.status_code == 200: return response.json() # The first URL in the queue will be the initial Discogs search for releases that satisfy our criteria/query. url = INITIAL_URL print("The URL we are working on is: {}".format(url)) # This line uses urllib.parse to split the given URL into 6 components. # The general structure of a URL is `scheme://netloc/path;parameters?query#fragment` print("The parsed URL is: {}".format(urlparse(url))) # We now only want the 'query' component of the URL, so we use a method to call the values of 'query'. print("The query components of the URL are: {}".format(urlparse(url).query)) # We now convert the value of `query` to a dictionary, parsed_url = parse_qs(urlparse(url).query, keep_blank_values=True) print("The query components of the URL as a dictionary are: {}".format(parsed_url)) # Now store the retrieved data JSON. data = discogs_request(url) However, the following is returned False after I run, 'videos' in data Here is a sample of data from my script: {'pagination': {'page': 1, 'pages': 3, 'per_page': 50, 'items': 113, 'urls': {'last': 'https://api.discogs.com/database/search?sort=title&sortorder=desc&style=%22Dub+Techno%22&format=Vinyl&type=release&year=1999&per_page=50&token=###&page=3', 'next': 'https://api.discogs.com/database/search?sort=title&sortorder=desc&style=%22Dub+Techno%22&format=Vinyl&type=release&year=1999&per_page=50&token=###&page=2'}}, 'results': [{'country': 'Germany', 'year': '1999', 'format': ['Vinyl', '12"', 'White Label'], 'label': ['Force Inc. Music Works', 'SST Brüggemann GmbH', 'MPO'], 'type': 'release', 'genre': ['Electronic'], 'style': ['Dub Techno', 'Minimal'], 'id': 1136089, 'barcode': [], 'user_data': {'in_wantlist': False, 'in_collection': False}, 'master_id': 83020, 'master_url': 'https://api.discogs.com/masters/83020', 'uri': '/Exos-Yellow-Yard/release/1136089', 'catno': 'FIM 174', 'title': 'Exos - Yellow Yard', 'thumb': 'https://i.discogs.com/-Jz0cMoRi-g0vgwmUZOs3P6i_HyFva_fobHjY303-II/rs:fit/g:sm/q:40/h:150/w:150/czM6Ly9kaXNjb2dz/LWRhdGFiYXNlLWlt/YWdlcy9SLTExMzYw/ODktMTI0NDY3MjIz/My5qcGVn.jpeg', 'cover_image': 'https://i.discogs.com/s5-uqKQaR-CXrpj8dSwWsl3Z0K9ZJs1G9w_ZwgUrsVQ/rs:fit/g:sm/q:90/h:600/w:596/czM6Ly9kaXNjb2dz/LWRhdGFiYXNlLWlt/YWdlcy9SLTExMzYw/ODktMTI0NDY3MjIz/My5qcGVn.jpeg', 'resource_url': 'https://api.discogs.com/releases/1136089', 'community': {'want': 270, 'have': 28}, 'format_quantity': 1, 'formats': [{'name': 'Vinyl', 'qty': '1', 'descriptions': ['12"', 'White Label']}]}, {'country': 'Germany', 'year': '1999', 'format': ['Vinyl', '12"', '33 ⅓ RPM', 'White Label'], 'label': ['Profan', 'SST Brüggemann GmbH', 'MPO'], 'type': 'release', 'genre': ['Electronic'], 'style': ['Dub Techno', 'Minimal Techno', 'Tech House'], 'id': 9133409, 'barcode': ['MPO PRO 028 A 33 RPM K SST', 'MPO PRO 028 B 33 RPM K SST'], 'user_data': {'in_wantlist': False, 'in_collection': False}, 'master_id': 327526, 'master_url': 'https://api.discogs.com/masters/327526', 'uri': '/Wassermann-W-I-R-Das-Original-Sven-V%C3%A4th-Mix-Thomas-Mayer-Mix/release/9133409', 'catno': 'PROFAN 028', 'title': 'Wassermann - W. I. R. 
(Das Original + Sven Väth Mix Thomas / Mayer Mix)', 'thumb': '', 'cover_image': 'https://st.discogs.com/1504bf7e69cad5ced79c9e7b6cf62bda18dce7eb/images/spacer.gif', 'resource_url': 'https://api.discogs.com/releases/9133409', 'community': {'want': 129, 'have': 23}, 'format_quantity': 1, 'formats': [{'name': 'Vinyl', 'qty': '1', 'descriptions': ['12"', '33 ⅓ RPM', 'White Label']}]}, But when I put print statements to see what is being written to the .txt file in the original 1st code block, it's clear that it's writing videos to the file. So what is going on?
It looks like the original script does a two-level crawl of the API data. When it reads the initial URL, it gets JSON data like you show, which contains results and pagination as keys, but not videos. What it does with that data is read each value from results (a list), and find the sub-key resource_url for each result, which it adds to the queue to fetch later. It also reads the pagination block for a next url to get the next page of results. Each time it parses one of the next urls, it works like the top level one. When it gets one of the resource URLs from the queue, it reads it and gets a JSON file that does not contain results, but instead may contain a list of videos. It reads the uri field of those video entries, and writes them to the file. You could change the logic to do both levels of processing in a set of nested loops if you wanted to, without using a queue. It would look something like this (without any error checking logic): file = open('video_urls.txt', 'w')
outer_data = discogs_request(INITIAL_URL)
while int(outer_data['pagination']['page']) <= PAGE_LIMIT:
    for result in outer_data['results']:
        inner_data = discogs_request(result['resource_url'])
        for video in inner_data['videos']:
            file.write(video['uri'] + '\n')
    if 'next' in outer_data['pagination']['urls']:
        outer_data = discogs_request(outer_data['pagination']['urls']['next'])
    else:
        break
2
1
79,539,046
2025-3-27
https://stackoverflow.com/questions/79539046/how-to-use-jupyter-notebooks-or-ipython-for-development-without-polluting-the-ve
Best practice for Python is to use venv to isolate the imports you really need. I use python -m venv. For development, it's very convenient to use IPython and notebooks for exploring code interactively. However, these need to be installed into the venv to be used. That defeats the purpose of the venv. I can make two venvs, one for usage and one venv-ipy for interactive exploration, but that's hard to manage, and they need to be synchronized. Is there a solution or established practice?
I work on multiple projects, most of which run in production, each with its own dependencies. The practice that works for me is to create separate requirements files: one for the production environment, and one containing the additions used only for development, plus a small script that creates/updates 2 venvs in the same run: <ProjectName>_prod and <ProjectName>_dev. That way requirements_dev.txt can contain only extras like jupyter, matplotlib and other packages that are used only in development, and there is no need to sync the files until reaching the production stage. To install the dependencies in the dev environment you mention both files (see the sketch below): pip install -r requirements.txt -r requirements_dev.txt I also use conda to create/manage environments as I find it more convenient than venv.
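In case it helps to see the idea, here is a minimal sketch of such a create/update script; the project name and the requirements file names are placeholders for illustration, not something taken from my actual setup:
import subprocess
import sys
import venv
from pathlib import Path

PROJECT = "MyProject"  # hypothetical project name
ENVS = {
    f"{PROJECT}_prod": ["requirements.txt"],
    f"{PROJECT}_dev": ["requirements.txt", "requirements_dev.txt"],
}

for env_name, req_files in ENVS.items():
    env_dir = Path(env_name)
    # Create (or reuse) the virtual environment, making sure pip is available
    venv.EnvBuilder(with_pip=True).create(env_dir)
    pip = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "pip"
    # Install the dependencies listed in the relevant requirements files
    cmd = [str(pip), "install"]
    for req in req_files:
        cmd += ["-r", req]
    subprocess.run(cmd, check=True)
Re-running the script simply reinstalls from the requirements files, so both environments stay in line with what is declared there.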
4
1
79,547,170
2025-3-31
https://stackoverflow.com/questions/79547170/lxml-target-interface-splits-data-on-non-ascii-characters-how-can-i-get-the-w
Here's a file test.xml: <?xml version="1.0" encoding="UTF-8"?> <list> <entry>data</entry> <entry>Łódź</entry> <entry>data Łódź</entry> </list> and here's a simple python script to parse it into a list with lxml: from lxml import etree class ParseTarget: def __init__(self): self.entries = [] def start(self, tag, attrib): pass def end(self, tag): pass def data(self, data): str = data.strip() if str != '': self.entries.append(data) def close(self): # Reset parser entries = self.entries self.entries = [] # And return results return entries target = etree.XMLParser(target=ParseTarget(), # Including/removing this makes no difference encoding='UTF-8') tree = etree.parse("./test.xml", target) # Expected value of tree: # ['data', 'Łódź', 'data Łódź'] # Actual value of tree # ['data', 'Łódź', 'data ', 'Łódź'] # What gives!!!? As the comment says, I would expect to end up with a list of three elements, but I get four. This is a minimal demonstration of a general problem: including strings with non-ascii characters (but at least one ascii char at the beginning) results in not a single string, but a list of two strings, split on where the non-ascii chars start. I don't want this to happen (i.e. I want to just get a list of three strings). What should I do? I'm using Python 3.11.2
You have to accumulate the text in data() and use the end handler to store and reset it. Explanation of the steps: with event-based parsing, the parser may split the third <entry> (<entry>data Łódź</entry>) into multiple data() calls: first "data " (with a space at the end), then "Łódź". This is why the text has to be accumulated so that it comes out as "data Łódź". from lxml import etree

class ParseTarget:
    def __init__(self):
        self.entries = []
        self.current_text = []

    def start(self, tag, attrib):
        self.current_text = []

    def end(self, tag):
        if self.current_text:
            self.entries.append("".join(self.current_text))
        self.current_text = []  # Reset for the next element

    def data(self, data):
        if data.strip():  # Ignore completely empty segments but keep spaces
            self.current_text.append(data)  # Append raw data, preserving spaces

    def close(self):
        entries = self.entries
        self.entries = []
        return entries

target = etree.XMLParser(target=ParseTarget(), encoding='UTF-8')
tree = etree.parse("./test.xml", target)

print(tree)
# ['data', 'Łódź', 'data Łódź']
2
4
79,536,730
2025-3-26
https://stackoverflow.com/questions/79536730/mypy-complains-for-static-variable
mypy (v.1.15.0) complains with the following message, "Access to generic instance variables via class is ambiguous", for the following code: from typing import Self


class A:
    B: Self


A.B = A() If I remove B: Self, then it says "type[A]" has no attribute "B". How do I make mypy happy? You can play with this here: mypy playground
In addition to InSync's solution, you can also do this: class A: B: 'A' A.B = A() Or better yet: from typing import ClassVar class A: B: ClassVar['A'] A.B = A() This is an example of a forward reference, and mypy handles those by putting the type name in quotes. Using ClassVar just tells the type checker to disallow setting B through an instance, since it's supposed to be static. I prefer this over Self because the concept of self in programming usually refers to a specific instance, which doesn't make sense in the context of a static variable.
1
3
79,544,423
2025-3-30
https://stackoverflow.com/questions/79544423/fastest-way-to-search-5k-rows-inside-of-100m-row-pair-wise-dataframe
I am not sure title is well describing the problem but I will explain it step by step. I have a correlation matrix of genes (10k x 10k) I convert this correlation matrix to pairwise dataframe (upper triangle) (around 100m row) gene1 gene2 score Gene3450 Gene9123 0.999706 Gene5219 Gene9161 0.999691 Gene27 Gene6467 0.999646 Gene3255 Gene4865 0.999636 Gene2512 Gene5730 0.999605 ... ... ... Then I have gold-standard TERMS table around 5k rows and columns are ID and used_genes id name used_genes 1 Complex 1 [Gene3629, Gene8048, Gene9660, Gene4180, Gene1...] 2 Complex 2 [Gene3944, Gene931, Gene3769, Gene7523, Gene61...] 3 Complex 3 [Gene8236, Gene934, Gene5902, Gene165, Gene664...] 4 Complex 4 [Gene2399, Gene2236, Gene8932, Gene6670, Gene2...] 5 Complex 5 [Gene3860, Gene5792, Gene9214, Gene7174, Gene3...] What I do: I iterate of each Gold-standard complex row. Convert used_gene list to pairwise, like geneA-geneB, geneA-geneC etc. Check those complex-row gene pairs in the stacked correlation pairs. If they are exist I put column TP=1, if not TP=0. Based on the TP counts I calculate precision, recall, and area under the curve score. name used_genes auc_score Multisubunit ACTR coactivator complex [CREBBP, KAT2B, NCOA3, EP300] 0.001695 Condensin I complex [SMC4, NCAPH, SMC2, NCAPG, NCAPD2] 0.009233 BLOC-2 (biogenesis of lysosome-related organel...) [HPS3, HPS5, HPS6] 0.000529 NCOR complex [TBL1XR1, NCOR1, TBL1X, GPS2, HDAC3, CORO2A] 0.000839 BLOC-1 (biogenesis of lysosome-related organel...) [DTNBP1, SNAPIN, BLOC1S6, BLOC1S1, BLOC1S5, BL...] 0.002227 So, in the end, for each of gold-standard row, I have PR-AUC score. I will share my function below, with 100m stacked df, and 5k terms It takes around 25 minutes, and I am trying to find a way to reduce the time. PS: for the calculation of PR-AUC part, I have compiled C++ code, I just give the ordered TP numbers as a input to C++ function and return the score, still runtime is the same. I guess the problem is iteration part. from sklearn import metrics def compute_per_complex_pr(corr_df, terms_df): pairwise_df = binary(corr_df) pairwise_df = quick_sort(pairwise_df).reset_index(drop=True) # Precompute a mapping from each gene to the row indices in the pairwise DataFrame where it appears. gene_to_pair_indices = {} for i, (gene_a, gene_b) in enumerate(zip(pairwise_df["gene1"], pairwise_df["gene2"])): gene_to_pair_indices.setdefault(gene_a, []).append(i) gene_to_pair_indices.setdefault(gene_b, []).append(i) # Initialize AUC scores (one for each complex) with NaNs. auc_scores = np.full(len(terms_df), np.nan) # Loop over each gene complex for idx, row in terms_df.iterrows(): gene_set = set(row.used_genes) # Collect all row indices in the pairwise data where either gene belongs to the complex. candidate_indices = set() for gene in gene_set: candidate_indices.update(gene_to_pair_indices.get(gene, [])) candidate_indices = sorted(candidate_indices) if not candidate_indices: continue # Select only the relevant pairwise comparisons. sub_df = pairwise_df.loc[candidate_indices] # A prediction is 1 if both genes in the pair are in the complex; otherwise 0. predictions = (sub_df["gene1"].isin(gene_set) & sub_df["gene2"].isin(gene_set)).astype(int) if predictions.sum() == 0: continue # Compute cumulative true positives and derive precision and recall. 
true_positive_cumsum = predictions.cumsum() precision = true_positive_cumsum / (np.arange(len(predictions)) + 1) recall = true_positive_cumsum / true_positive_cumsum.iloc[-1] if len(recall) < 2 or recall.iloc[-1] == 0: continue auc_scores[idx] = metrics.auc(recall, precision) # Add the computed AUC scores to the terms DataFrame. terms_df["auc_score"] = auc_scores return terms_df def binary(corr): stack = corr.stack().rename_axis(index=['gene1', 'gene2']).reset_index(name='score') stack = drop_mirror_pairs(stack) return stack def quick_sort(df, ascending=False): order = 1 if ascending else -1 sorted_df = df.iloc[np.argsort(order * df["score"].values)].reset_index(drop=True) return sorted_df def drop_mirror_pairs(df): gene_pairs = np.sort(df[["gene1", "gene2"]].to_numpy(), axis=1) df.loc[:, ["gene1", "gene2"]] = gene_pairs df = df.loc[~df.duplicated(subset=["gene1", "gene2"], keep="first")] return df for dummy data (corr matrix, terms_df) import numpy as np import pandas as pd # Set a random seed for reproducibility np.random.seed(0) # ------------------------------- # Create the 10,000 x 10,000 correlation matrix # ------------------------------- num_genes = 10000 genes = [f"Gene{i}" for i in range(num_genes)] rand_matrix = np.random.uniform(-1, 1, (num_genes, num_genes)) corr_matrix = (rand_matrix + rand_matrix.T) / 2 np.fill_diagonal(corr_matrix, 1.0) corr_df = pd.DataFrame(corr_matrix, index=genes, columns=genes) num_terms = 5000 terms_list = [] for i in range(1, num_terms + 1): # Randomly choose a number of genes between 10 and 40 for this term n_genes = np.random.randint(10, 41) used_genes = np.random.choice(genes, size=n_genes, replace=False).tolist() term = { "id": i, "name": f"Complex {i}", "used_genes": used_genes } terms_list.append(term) terms_df = pd.DataFrame(terms_list) # Display sample outputs (for verification, you might want to show the first few rows) print("Correlation Matrix Sample:") print(corr_df.iloc[:5, :5]) # print a 5x5 sample print("\nTerms DataFrame Sample:") print(terms_df.head()) to run the function compute_per_complex_pr(corr_df, terms_df)
Several optimisations can be applied to compute_per_complex_pr. First of all, pairwise_df.loc[candidate_indices] can be optimised by converting candidate_indices to a Numpy array and using iloc instead. Actually, sorted(candidate_indices) is also bit slow because it is a set of integer Python object and code operating on pure-Python data structures are generally painfully slow (in CPython which is the standard Python interpreter). Thus, we can convert the set before and then sort it with Numpy so the sort can be significantly faster. The thing is converting a set to Numpy is a bit slow because it even iterating to this data-structure is already quite slow... On simple way to fix this issue it to use a native language (like C++ or Rust) so to never pay the cost of inherently slow pure-Python objets. An alternative solution is to use a fast bit-set package like bitarray. The bad news is that bitarray cannot directly operate on Numpy arrays so it does not bring optimal performance (still because of the manipulation of slow pure-Python data structure), but it is at least significantly better than a naive set. We can Create a bit-set with the right size with candidate_indices = bitarray(len(pairwise_df)) and then fill it rather quickly with candidate_indices[gene_to_pair_indices[gene]] = True. The good news is that we do not need to sort anything now because we can directly index the dataframe with a Numpy boolean array created from the bitset. We can do that with unpackbits. However, the result is an array of uint8 items. We can use view(bool) so to reinterpret the 0-1 values to booleans very cheaply. One side effect of this approach is that the array can be a bit bigger than expected because bitarray pack bits in bytes so the output is a multiple of 8 (because bytes are octets on all mainstream modern machines). We can slice the array so to get the relevant part. Last but not least, it is better to operate on categorical data than strings in this case. Indeed, strings are inherently slow because they are stored in pure-Python objects (yes, still them again). They should be converted in categorical data they are repeated many times or can be considered as being part of a limited set of a known set of strings. Categorical data are actually integers with a table associating the integer to string object. Besides speed, categorical data also take significantly less memory here. Here is the final code: from bitarray import bitarray def fast_compute_per_complex_pr(corr_df, terms_df): pairwise_df = binary(corr_df) pairwise_df = quick_sort(pairwise_df).reset_index(drop=True) pairwise_df['gene1'] = pairwise_df['gene1'].astype("category") pairwise_df['gene2'] = pairwise_df['gene2'].astype("category") # Precompute a mapping from each gene to the row indices in the pairwise DataFrame where it appears. gene_to_pair_indices = {} for i, (gene_a, gene_b) in enumerate(zip(pairwise_df["gene1"], pairwise_df["gene2"])): gene_to_pair_indices.setdefault(gene_a, []).append(i) gene_to_pair_indices.setdefault(gene_b, []).append(i) # Initialize AUC scores (one for each complex) with NaNs. auc_scores = np.full(len(terms_df), np.nan) # Loop over each gene complex for idx, row in terms_df.iterrows(): gene_set = set(row.used_genes) # Collect all row indices in the pairwise data where either gene belongs to the complex. 
candidate_indices = bitarray(len(pairwise_df)) for gene in gene_set: if gene in gene_to_pair_indices: candidate_indices[gene_to_pair_indices[gene]] = True if not candidate_indices.any(): continue # Select only the relevant pairwise comparisons. selected_rows = np.unpackbits(candidate_indices).view(bool)[:len(pairwise_df)] sub_df = pairwise_df.iloc[selected_rows] # A prediction is 1 if both genes in the pair are in the complex; otherwise 0. predictions = (sub_df["gene1"].isin(gene_set) & sub_df["gene2"].isin(gene_set)).astype(int) if predictions.sum() == 0: continue # Compute cumulative true positives and derive precision and recall. true_positive_cumsum = predictions.cumsum() precision = true_positive_cumsum / (np.arange(len(predictions)) + 1) recall = true_positive_cumsum / true_positive_cumsum.iloc[-1] if len(recall) < 2 or recall.iloc[-1] == 0: continue auc_scores[idx] = metrics.auc(recall, precision) # Add the computed AUC scores to the terms DataFrame. terms_df["auc_score"] = auc_scores return terms_df This code seems about 2.5 times faster on my machine (with a i5-9600KF CPU on Windows). Note that the code before the loop take 20% of the time now since the loop is about 3 times faster. To make this much faster, one way would be to parallelise the computation. That being said, I do not expect multithreading to be significantly faster because of the CPython GIL and I do not expect multiprocessing to be a silver-bullet because of data copying/pickling (which should result in a much higher memory consumption). Thus, I think it would be better to convert this part of the code in a native language like C++ first, and then parallelise the loop with tools like OpenMP. The result will be a faster execution not only due to parallelism but also faster operations thanks to a native execution. One benefit with native language is that you can directly operate on pairwise_df.iloc[selected_rows] without creating a new data structure. A skilled developper can even do the operation chunk by chunk in a SIMD-friendly way so predictions can be computed much more efficiently. Similarly, np.unpackbits is not needed since you can directly iterate on the bit-array. This makes the operation more cache friendly (so it can then scale better once running in parallel). Last but not least, note that most of the values in the bit-array are set to False with the current dataset (>99%). This means this data structure is rather sparse. One can replace the bit-array data structure with something more memory efficient at the expense of a more complex code and slower sequential execution (but likely faster in parallel).
3
4
79,547,082
2025-3-31
https://stackoverflow.com/questions/79547082/extracting-images-from-a-pdf-using-pymupdf-gives-broken-output-images
The code I am using to extract the images is from PIL import Image def extract_images_from_pdfs(pdf_list): import fitz # PyMuPDF output_dir = "C:/path_to_image" os.makedirs(output_dir, exist_ok=True) for pdf_path in pdf_list: pdf_name = os.path.splitext(os.path.basename(pdf_path))[0] # Open the PDF pdf_document = fitz.open(pdf_path) # Track the count of images extracted per page image_count = 0 for page_num, page in enumerate(pdf_document): # Get the images on this page image_list = page.get_images(full=True) if not image_list: print(f"No images found on page {page_num+1} of {pdf_name}") continue # Process each image for img_index, img in enumerate(image_list): xref = img[0] base_image = pdf_document.extract_image(xref) if base_image: image_bytes = base_image["image"] image_ext = base_image["ext"] # Convert bytes to image image = Image.open(io.BytesIO(image_bytes)) # Save the image image_name = f"{pdf_name}_image_{image_count}.{image_ext}" image_path = os.path.join(output_dir, image_name) image.save(image_path) image_count += 1 pdf_document.close() print(f"Extracted {image_count} images from {pdf_name}") The input, pdf_list, is just a list containing all the names of my pdf's. Extracted image 1 Extracted image 2 Expected image: Could it be that the images on the PDF are encrypted / accessible and is there a work around for this. Any help is greatly appreciated. testingpdfexampaper.tiiny.site This is the URL for the PDF
The PDF has 78 very small pieces of imagery of which the "largest" is masking for O on the first page: 1 60 image 81 62 index 1 8 image no 271 0 151 151 1996B 40% And many are simply one single pixel. They can be in any order and the early ones of the 78 are generally parts of R: pdfimages -list chem.pdf page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio -------------------------------------------------------------------------------------------- 1 0 image 4 26 cmyk 4 8 image no 214 0 163 153 77B 19% 1 1 image 2 2 cmyk 4 8 image no 215 0 204 245 21B 131% 1 2 image 7 59 index 1 8 image no 226 0 306 303 53B 13% 1 3 image 60 39 index 1 8 image no 237 0 150 153 819B 35% 1 4 image 1 1 cmyk 4 8 image no 248 0 204 204 14B 350% 1 5 image 9 4 cmyk 4 8 image no 259 0 162 153 74B 51% 1 6 image 58 31 index 1 8 image no 270 0 150 154 526B 29% 1 7 image 4 3 cmyk 4 8 image no 281 0 153 153 38B 79% 1 8 image 2 2 cmyk 4 8 image no 290 0 153 175 24B 150% NOTE there is common with many PDF constructions no "one to one" relationship. One text line can be many places and one visible line can be multiple paths too. Thus image extraction is of no real value as any whole page could be exported as single images, then trimmed to desired areas, at any density/quality you wish. Python has PyMuPDF which can "gather" "paths" and combine into single graphical units. So if you select an area of inclusions (Region of Interest) they can possibly be reused as vectors elsewhere? This is similar in effect to the way the MuPDF command line can with a few well chosen commands export SVG areas for reuse.
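If the goal is just to reuse a picture of a region, a short PyMuPDF sketch of the "render the area you want" idea could look like this; the page index and the clip rectangle are made-up placeholders you would adjust to your own region of interest:
import fitz  # PyMuPDF

doc = fitz.open("chem.pdf")
page = doc[0]                        # first page, as an example
clip = fitz.Rect(50, 100, 300, 250)  # hypothetical region of interest, in page points
# Render only that rectangle at the density you want,
# instead of chasing the 78 tiny embedded image objects
pix = page.get_pixmap(clip=clip, dpi=300)
pix.save("region.png")
For vector reuse rather than raster output, page.get_drawings() returns the drawing paths, so you can keep only those whose rectangles fall inside your region of interest.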
1
1
79,545,895
2025-3-31
https://stackoverflow.com/questions/79545895/image-upload-corruption-with-seaweedfs-s3-api
Problem Description I'm experiencing an issue where images uploaded through Django (using boto3) to SeaweedFS's S3 API are corrupted, while uploads through S3 Browser desktop app work correctly. The uploaded files are 55 bytes larger than the original and contain a Content-Encoding: aws-chunked header, making the images unopenable. Environment Setup Storage: SeaweedFS with S3 API Proxy: nginx (handling SSL) Framework: Django Storage Client: boto3 Upload Method: Using Django's storage backend with PrivateStorage Issue Details When uploading through S3 Browser desktop app: File size matches original Image opens correctly No corruption issues When uploading through Django/boto3: File size increases by 55 bytes Response includes Content-Encoding: aws-chunked Image becomes corrupted and unopenable First bytes contain unexpected data (100000) Last bytes end with .x-amz-checksum- Example of Corrupted File Original file size: 12345 bytes Uploaded file size: 12400 bytes (+55 bytes) First bytes: 100000... Last bytes: ...x-amz-checksum-crc32:SJJ2UA== Attempted Solutions Tried different upload methods: # Method 1: Using ContentFile storage.save(path, ContentFile(file_content)) # Method 2: Using Django File object storage.save(path, File(file)) # Method 3: Direct boto3 upload client.upload_fileobj(f, bucket_name, path) Questions Is this a known issue with SeaweedFS's S3 API implementation? Is there a way to disable the aws-chunked encoding in boto3? Are there specific headers or configurations needed in the nginx proxy to handle binary uploads correctly? Additional Information Django version: 5.1.7 boto3 version: 1.37.23 Any help or insights would be greatly appreciated 🙏! Sample code I tried: import boto3 AWS_ACCESS_KEY_ID='' AWS_SECRET_ACCESS_KEY='' API_URL='' bucket_name = 'sample-bucket' s3 = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY, endpoint_url=API_URL) testfile = r"image.png" s3.upload_file(testfile, bucket_name, 'sample.png', ExtraArgs={'ContentType': 'image/png'})
Following the answer in https://github.com/boto/boto3/issues/4435#issuecomment-2648819900, I added these lines at the top of the settings.py file and the problem was solved. import os
os.environ["AWS_REQUEST_CHECKSUM_CALCULATION"] = "when_required"
os.environ["AWS_RESPONSE_CHECKSUM_VALIDATION"] = "when_required"
1
1
79,545,985
2025-3-31
https://stackoverflow.com/questions/79545985/find-correct-root-of-parametrized-function-given-solution-for-one-set-of-paramet
Let's say I have a function foo(x, a, b) and I want to find a specific one of its (potentially multiple) roots x0, i.e. a value x0 such that foo(x0, a, b) == 0. I know that for (a, b) == (0, 0) the root I want is x0 == 0 and that the function changes continuously with a and b, so I can "follow" the root from (0, 0) to the desired (a, b). Here's an example function. def foo(x, a, b): return (1 + a) * np.sin(a + b - x) - x For (a, b) == (0, 0) I want to the root at 0, for (2, 0) I want the one near 1.5 and for (2, 1) I want the one near 2.2. Now, this problem seems like one that may be common enough to have a prepared, fast solver in scipy or another standard package (or tools to easily and efficiently construct one). However, I don't know what terms to search for to find it (or verify that it doesn't exist). Is there a ready-made tool for this? What is this kind of problem called? Clarifications: Depending on (a, b), the "correct" root may disappear (e.g. for (1, 3) in the example). When this happens, returning nan is the preferred behavior, though this is not super important. By "fast" I mostly mean that many parameter sets (a, b) can be quickly solved, not just a single one. I will go on calculating the root for a lot of different parameters, e.g. for plotting and integrating over them. Here's a quickly put together reference implementation that does pretty much what I want for the example above (and creates the plot). It's not very fast, of course, which somewhat limits me in my actual application. import functools import numpy as np from scipy.optimize import root_scalar from matplotlib import pyplot as plt def foo(x, a, b): return (1 + a) * np.sin(a + b - x) - x fig, ax = plt.subplots() ax.grid() ax.set_xlabel("x") ax.set_ylabel("foo") x = np.linspace(-np.pi, np.pi, 201) ax.plot(x, foo(x, 0, 0), label="(a, b) = (0, 0)") ax.plot(x, foo(x, 2, 0), label="(a, b) = (2, 0)") ax.plot(x, foo(x, 2, 1), label="(a, b) = (2, 1)") ax.legend() plt.show() # Semi-bodged solver for reference: def _dfoo(x, a, b): return -(1 + a) * np.cos(a + b - x) - 1 def _solve_fooroot(guess, a, b): if np.isnan(guess): return np.nan # Determine limits for finding the root. # Allow for slightly larger limits to account for numerical imprecision. maxlim = 1.1 * (1 + a) y0 = foo(guess, a, b) if y0 == 0: return guess dy0 = _dfoo(guess, a, b) estimate = -y0 / dy0 # Too small estimates are no good. if np.abs(estimate) < 1e-2 * maxlim: estimate = np.sign(estimate) * 1e-2 * maxlim for lim in np.arange(guess, guess + 10 * estimate, 1e-1 * estimate): if np.sign(foo(lim, a, b)) != np.sign(y0): bracket = sorted([guess, lim]) break else: return np.nan sol = root_scalar(foo, (a, b), bracket=bracket) return sol.root @functools.cache def _fooroot(an, astep, bn, bstep): if an == 0: if bn == 0: return 0 guessan, guessbn = an, bn - 1 else: guessan, guessbn = an - 1, bn # Avoid reaching maximum recursion depth. 
for thisbn in range(0, guessbn, 100): _fooroot(0, astep, thisbn, bstep) for thisan in range(0, guessan, 100): _fooroot(thisan, astep, guessbn, bstep) guess = _fooroot(guessan, astep, guessbn, bstep) return _solve_fooroot(guess, an * astep, bn * bstep) @np.vectorize def fooroot(a, b): astep = (-1 if a < 0 else 1) * 1e-2 bstep = (-1 if b < 0 else 1) * 1e-2 guess = _fooroot(int(a / astep), astep, int(b / bstep), bstep) return _solve_fooroot(guess, a, b) print(fooroot(0, 0)) print(fooroot(2, 0)) print(fooroot(2, 1)) fig, ax = plt.subplots() ax.grid() ax.set_xlabel("b") ax.set_ylabel("fooroot(a, b)") b = np.linspace(-3, 3, 201) for a in [0, 0.2, 0.5]: ax.plot(b, fooroot(a, b), label=f"a = {a}") ax.legend() plt.show() fig, ax = plt.subplots() ax.grid() ax.set_xlabel("a") ax.set_ylabel("b") a = np.linspace(-1, 1, 201) b = np.linspace(-3.5, 3.5, 201) aa, bb = np.meshgrid(a, b) pcm = ax.pcolormesh(aa, bb, fooroot(aa, bb)) fig.colorbar(pcm, label="fooroot(a, b)") plt.show()
Get rid of your @cache and @vectorize; neither is likely to help you for the following and they're just noise. (If they're needed for outer code, you haven't shown that outer code, so the point stands.) Do keep using Scipy's root-finding iteratively, but beyond that your procedure should look pretty different. I propose: Get your initial estimate x0, a0, b0. Then in a loop: Infer by analytic integration the paraboloid intersecting the current point whose first and second derivatives with respect to a and b match those of f. Increment a and b at their fixed step size. If the function is smooth and parabolic step estimation works well then this step may be somewhat large; but if you're writing this for a generic routine that can take any function then it must be parametric. Call root_scalar with your new estimate, new a and b, passing analytic fprime and fprime2, probably using Halley's Method, and having relaxed tolerances that are parametric and appropriate to the function. Of course that's the ideal case, but in practice Scipy limitations mean that the vectorised methods cannot easily use second-order gradients. That in turn means that Halley is unavailable, but even linear Jacobian steps work well. The following demonstration shows a fully-vectorised path traversal, with two independent start points and stop points in a, b space. This can be arbitrarily extended to as many consecutive paths as you want. import logging import typing import numpy as np from numpy._typing import ArrayLike from scipy.optimize import root, check_grad class Trivariate(typing.Protocol): def __call__( self, x: np.ndarray, a: np.ndarray, b: np.ndarray, ) -> np.ndarray: ... def foo(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray: return (1 + a)*np.sin(a + b - x) - x def dfoo_dx(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray: return (-1 - a)*np.cos(a + b - x) - 1 def dfoo_da(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray: return (1 + a)*np.cos(a + b - x) + np.sin(a + b - x) def dfoo_db(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray: return (1 + a)*np.cos(a + b - x) def follow_root( fun: Trivariate, dfdx: Trivariate, dfda: Trivariate, dfdb: Trivariate, a0: ArrayLike, a1: ArrayLike, b0: ArrayLike, b1: ArrayLike, x0: ArrayLike, steps: int = 10, method: str = 'hybr', follow_tol: float = 1e-2, follow_reltol: float = 1e-2, polish_tol: float = 1e-12, polish_reltol: float = 1e-12, ) -> np.ndarray: def dfdx_sparse(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray: return np.diag(dfdx(x, a, b)) x_est = np.asarray(x0) ab0 = np.array((a0, b0)) ab1 = np.array((a1, b1)) # (number of steps, ab, dimensions of a0...) = (n-1, 2, ...) ab = np.linspace(start=ab0, stop=ab1, num=steps) da = ab[1, 0] - ab[0, 0] db = ab[1, 1] - ab[0, 1] for i, ((ai, bi), (ai1, bi1)) in enumerate(zip(ab[:-1], ab[1:])): dfdxi = dfdx(x_est, ai, bi) dxda = dfda(x_est, ai, bi)/dfdxi dxdb = dfdb(x_est, ai, bi)/dfdxi # If a and b are perturbed, where will x go? This is linear, but it can be extended to second-order. 
step = -dxda*da - dxdb*db last = i == len(ab) - 2 result = root( fun=fun, args=(ai1, bi1), jac=dfdx_sparse, x0=x_est + step, method=method, tol=polish_tol if last else follow_tol, options={ 'xtol': polish_reltol if last else follow_reltol, 'col_deriv': True, }, ) if not result.success: raise ValueError('Root finding failed: ' + result.message) logging.debug('#%d x%d %s +%s ~ %s = %s', i, result.nfev, x_est, step, x_est + step, result.x) x_est = result.x return x_est def main() -> None: # Don't do this in production! logging.getLogger().setLevel(logging.DEBUG) err = check_grad( lambda x: foo(x, np.array((0, 2)), np.array((0, 0))), lambda x: np.diag(dfoo_dx(x, np.array((0, 2)), np.array((0, 0)))), (0.1, 1.5), # x0 ) assert err < 1e-7 err = check_grad( lambda a: foo(np.array((0.1, 1.5)), a, np.array((0, 0))), lambda a: np.diag(dfoo_da(np.array((0.1, 1.5)), a, np.array((0, 0)))), (0, 2), # a0 ) assert err < 1e-7 err = check_grad( lambda b: foo(np.array((0.1, 1.5)), np.array((0, 2)), b), lambda b: np.diag(dfoo_db(np.array((0.1, 1.5)), np.array((0, 2)), b)), (0, 0), # b0 ) assert err < 1e-7 follow_root( fun=foo, dfdx=dfoo_dx, dfda=dfoo_da, dfdb=dfoo_db, a0=(0, 2), a1=(1.7, 2), b0=(0, 0), b1=(0 , 1), x0=(0, 1.5), ) if __name__ == '__main__': main() DEBUG:root:#0 x5 [0. 1.5] +[0.09444444 0.08052514] ~ [0.09444444 1.58052514] = [0.1025362 1.56306428] DEBUG:root:#1 x4 [0.1025362 1.56306428] +[0.10987707 0.07990566] ~ [0.21241326 1.64296994] = [0.21851137 1.64274921] DEBUG:root:#2 x4 [0.21851137 1.64274921] +[0.12155443 0.07945781] ~ [0.3400658 1.72220702] = [0.34478035 1.72196514] DEBUG:root:#3 x4 [0.34478035 1.72196514] +[0.13061949 0.0789664 ] ~ [0.47539984 1.80093153] = [0.47912896 1.80066588] DEBUG:root:#4 x4 [0.47912896 1.80066588] +[0.13781338 0.07842657] ~ [0.61694234 1.87909245] = [0.61994978 1.87880026] DEBUG:root:#5 x4 [0.61994978 1.87880026] +[0.14363144 0.07783262] ~ [0.76358122 1.95663288] = [0.76604746 1.95631084] DEBUG:root:#6 x4 [0.76604746 1.95631084] +[0.14841417 0.07717773] ~ [0.91446163 2.03348857] = [0.9165135 2.03313271] DEBUG:root:#7 x4 [0.9165135 2.03313271] +[0.15240176 0.07645371] ~ [1.06891526 2.10958643] = [1.07064402 2.10919196] DEBUG:root:#8 x7 [1.07064402 2.10919196] +[0.15576766 0.07565065] ~ [1.22641168 2.1848426 ] = [1.22788398 2.18440359] This works fine with a reduced number of steps; with only four steps: DEBUG:root:#0 x5 [0. 1.5] +[0.28333333 0.24157542] ~ [0.28333333 1.74157542] = [0.34477692 1.72196632] DEBUG:root:#1 x5 [0.34477692 1.72196632] +[0.39185914 0.23689925] ~ [0.73663606 1.95886557] = [0.76604622 1.95631082] DEBUG:root:#2 x8 [0.76604622 1.95631082] +[0.44524268 0.23153319] ~ [1.2112889 2.18784401] = [1.22788398 2.18440359] Grid following Keep the main idea (and its gradients); build a row-wise output: import typing from functools import partial import matplotlib.pyplot as plt import numpy as np from numpy._typing import ArrayLike from scipy.optimize import root, check_grad class Trivariate(typing.Protocol): def __call__( self, x: np.ndarray, ab: np.ndarray, ) -> np.ndarray: ... 
def foo(x: np.ndarray, ab: np.ndarray) -> np.ndarray: a, b = ab return (1 + a)*np.sin(a + b - x) - x def dfoo_dx(x: np.ndarray, ab: np.ndarray) -> np.ndarray: a, b = ab return (-1 - a)*np.cos(a + b - x) - 1 def dfoo_dab(x: np.ndarray, ab: np.ndarray) -> np.ndarray: a, b = ab abx = a + b - x a1cos = (1 + a)*np.cos(abx) return np.stack((a1cos + np.sin(abx), a1cos)) def next_roots( baked_root, dfdx: Trivariate, dfdab: Trivariate, dab: np.ndarray, ab0: ArrayLike, ab1: ArrayLike, x0: ArrayLike, ) -> np.ndarray: dfdxi = dfdx(x0, ab0) dxdab = dfdab(x0, ab0)/dfdxi # If a and b are perturbed, where will x go? This is linear, but it can be extended to second-order. step = (-dab) @ dxdab result = baked_root(args=ab1, x0=x0 + step) if not result.success: raise ValueError('Root finding failed: ' + result.message) return result.x def roots_2d( fun: Trivariate, dfdx: Trivariate, dfdab: Trivariate, a0: float, a1: float, b0: float, b1: float, centre_estimate: float, resolution: int = 201, method: str = 'hybr', tol: float = 1e-2, reltol: float = 1e-2, dtype: np.dtype = np.float32, ) -> tuple[np.ndarray, np.ndarray, np.ndarray]: def dfdx_sparse(x: np.ndarray, ab: np.ndarray) -> np.ndarray: return np.diag(dfdx(x, ab)) baked_root = partial( root, fun=fun, jac=dfdx_sparse, method=method, tol=tol, options={'xtol': reltol, 'col_deriv': True}, ) baked_next = partial( next_roots, baked_root=baked_root, dfdx=dfdx, dfdab=dfdab, ) aser = np.linspace(start=a0, stop=a1, num=resolution, dtype=dtype) bser = np.linspace(start=b0, stop=b1, num=resolution, dtype=dtype) da = aser[1] - aser[0] db = bser[1] - bser[0] aa, bb = np.meshgrid(aser, bser) aabb = np.stack((aa, bb)) # (2, 201, 201): (ab, b index, a index) out = np.empty_like(aa) # Centre point, the only one for which we rely on an estimate from the caller imid = resolution//2 out[imid, imid] = baked_root(args=aabb[:, imid, imid], x0=centre_estimate).x.squeeze() # Centre to right, scalars dar = np.array((da, 0), dtype=da.dtype) for j in range(imid + 1, resolution): out[imid, j] = baked_next( dab=dar, ab0=aabb[:, imid, j-1], ab1=aabb[:, imid, j], x0=out[imid, j-1], ).squeeze() # Centre to left, scalars dal = -dar for j in range(imid - 1, -1, -1): out[imid, j] = baked_next( dab=dal, ab0=aabb[:, imid, j+1], ab1=aabb[:, imid, j], x0=out[imid, j+1], ).squeeze() # Down rows dbd = np.array((0, db), dtype=db.dtype) for i in range(imid + 1, resolution): out[i] = baked_next( dab=dbd, ab0=aabb[:, i-1], ab1=aabb[:, i], x0=out[i-1], ) # Up rows dbu = -dbd for i in range(imid - 1, -1, -1): out[i] = baked_next( dab=dbu, ab0=aabb[:, i+1], ab1=aabb[:, i], x0=out[i+1], ) return aa, bb, out def plot(aa: np.ndarray, bb: np.ndarray, x: np.ndarray) -> plt.Figure: fig, ax = plt.subplots() ax.grid() ax.set_xlabel('a') ax.set_ylabel('b') mesh = ax.pcolormesh(aa, bb, x, vmin=-3, vmax=3) fig.colorbar(mesh, label='root') return fig def main() -> None: x0 = np.array((0.1, 1.5)) ab0 = np.array([(0.3, 2), (0.1, 0.2)]) err = check_grad( partial(foo, ab=ab0), lambda x: np.diag(dfoo_dx(x, ab0)), x0, ) assert err < 1e-7 # err = check_grad( # partial(foo, x0), # lambda ab: dfoo_dab(x0, ab), # ab0, # ) # assert err < 1e-7 aa, bb, x = roots_2d( fun=foo, dfdx=dfoo_dx, dfdab=dfoo_dab, a0=-1, a1=1, b0=-3, b1=3, centre_estimate=0, ) plot(aa, bb, x) plt.show() if __name__ == '__main__': main() Executes in a second or two: To take care of disappearing roots, there are no perfect solutions. 
Either you need to write a flood fill algorithm, which is complicated; or you can just do a simple heuristic like def next_roots( baked_root, dfdx: Trivariate, dfdab: Trivariate, dab: np.ndarray, ab0: ArrayLike, ab1: ArrayLike, x0: ArrayLike, error_bound: float = 1e-2, ) -> np.ndarray: input_mask = np.isfinite(x0) x0_masked = x0[input_mask] dfdxi = dfdx(x0_masked, ab0[:, input_mask]) dxdab = dfdab(x0_masked, ab0[:, input_mask])/dfdxi # If a and b are perturbed, where will x go? This is linear, but it can be extended to second-order. step = (-dab) @ dxdab xest = x0_masked + step result = baked_root(args=ab1[:, input_mask], x0=xest) if not result.success: raise ValueError('Root finding failed: ' + result.message) xsol = result.x est_error = np.abs(xsol - xest) xsol[est_error > error_bound] = np.nan xnew = np.full_like(x0, fill_value=np.nan) xnew[input_mask] = xsol return xnew
2
0
79,546,711
2025-3-31
https://stackoverflow.com/questions/79546711/python-asyncio-how-do-awaitables-interact-with-system-level-i-o-events-e-g
I’m learning asynchronous programming in Python using asyncio and want to understand how low-level awaitables work, particularly for system events like serial port communication. For example, libraries like pyserial-asyncio allow non-blocking reads from a serial port, but I’m unclear on the underlying mechanics. My Understanding So Far: To read data asynchronously from a serial port, we need a method that yields control to the event loop until data arrives. Two approaches come to mind: 1. Polling with Sleep: Use a loop to check for data availability, and if none exists, call await asyncio.sleep(x) to yield control. This feels inefficient. How do we choose the sleep duration? 2. System Event Notification: I guess that the OS can notify us when data arrives in the serial buffer. But how do we integrate this with asyncio? How does an async method "await" such an OS-level event? What I’m Trying to Figure Out: Beyond asyncio.sleep(), what low-level awaitable primitives exist in asyncio to wait for system events (e.g., I/O readiness on serial ports, HTTP requests, file operations)? Are these primitives provided natively by asyncio, or do libraries like pyserial-asyncio implement custom logic using OS APIs? I’ve tried to look at pyserial-asyncio’s source code but struggled to see how it interfaces with asyncio’s internals. If someone could point me toward resources or tutorials that explain these mechanisms in depth, it would be great.
In brief, asyncio rests on three pillars: coroutines, callback schedulers, and selectors. When you call socket.read() in synchronous code, you are telling the operating system to "wake up this thread when the operation completes (successfully or unsuccessfully)". Exactly one wait, interrupted only by signal handlers. But in fact, you can also wait for multiple operations at a time using the select module or the selectors module. Just create a selector sel = selectors.DefaultSelector(), register a file object for selection via sel.register(fileobj, events), and call sel.select(). sel.select() is still a blocking system call, but now the thread will wake up when any operation you have registered is ready. Okay, we can wait for multiple operations at a time, but using selectors directly is inconvenient because we have to manually manage the flow of execution for the entire thread. Especially to avoid being distracted by this feature, we can add an abstraction in the form of coroutines. Coroutines are slightly modified generators, and await is yield from with additional checks (see PEP 0492). With them, we can focus in an asynchronous function (actually a coroutine factory) on solving one specific task, and if we need to wait for something, implicitly register an event for selection and suspend the coroutine (task) via yield. This begs the question of how to schedule execution. It is actually quite simple. We can create a priority queue (for example, via heapq or sched) to store information about callbacks: when to execute, what function to execute, and with what arguments. And in order to "wake up coroutine" we will just add a callback for coro.send() or coro.throw(). So we get a loop that executes callbacks when the queue is not empty and waits for sel.select() when it is empty). The step of each coroutine (usually between await statements) is simply the execution of the scheduled callback. This is how cooperative multitasking is implemented. You might ask about how to handle threads, since we cannot interrupt a sel.select() wait. The trick is simple: we can create a non-blocking socket pair and register the first socket for reading. And then any write to the second socket from another thread will wake up the event loop. This is how loop.call_soon_threadsafe() works. In summary, we have a simple (or complex) system in which the coroutines (tasks) ask the event loop to register a file object (or file descriptor) for selection, then switch to the event loop and thus fall asleep until the event loop is notified that the operation is complete and switches execution back to the coroutine in the order of the callback queue. The loop.call_at(), loop.call_later(), loop.call_soon() (and its closest relative loop.call_soon_threadsafe()) methods are responsible for scheduling callbacks. The loop.add_reader(), loop.add_writer(), loop.remove_reader(), loop.remove_writer() methods are responsible for monitoring. These two groups of methods are the basis for most of the rest of the event loop methods. At a higher level, you usually work with futures (explicitly or implicitly). This is another abstraction with which, in particular, you can wait for tasks (coroutines themselves do not provide a wait method). The point of them boils down to a very simple thing: to schedule a switch to the coroutine when someone sets a result for the future. This is also a callback, and is added via future.add_done_callback(). 
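To make the selector/callback/future interplay concrete, here is a minimal self-contained sketch (not pyserial-asyncio's actual code); it uses a socket pair to stand in for any file descriptor the OS can report as readable, and a serial port descriptor would be handled the same way on POSIX:
import asyncio
import socket

async def read_when_ready(sock: socket.socket) -> bytes:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()

    def on_readable() -> None:
        # Called by the event loop once the selector reports the fd as readable
        loop.remove_reader(sock.fileno())
        fut.set_result(sock.recv(1024))

    loop.add_reader(sock.fileno(), on_readable)
    return await fut  # suspends this coroutine until the callback sets the result

async def main() -> None:
    reader, writer = socket.socketpair()
    reader.setblocking(False)
    loop = asyncio.get_running_loop()
    # Simulate "data arrives later" without blocking the loop
    loop.call_later(0.5, writer.send, b"hello")
    print(await read_when_ready(reader))

asyncio.run(main())
Note that add_reader() is not available on Windows' default proactor event loop, which is one reason libraries fall back to other strategies there.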
Answering the question directly, pyserial-asyncio uses two approaches depending on the operating system used: polling for Windows, monitoring for all others. Both approaches are based on the principles I described above.
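For completeness, the polling flavour (essentially the asker's approach 1) can be sketched like this; the port name and the poll interval are arbitrary illustrative choices, not values taken from pyserial-asyncio:
import asyncio
import serial  # pyserial

async def read_by_polling(ser: serial.Serial, poll_interval: float = 0.01) -> bytes:
    # Check the driver buffer; if nothing has arrived yet, yield back to the event loop
    while ser.in_waiting == 0:
        await asyncio.sleep(poll_interval)
    return ser.read(ser.in_waiting)

# Usage (hypothetical port name):
# ser = serial.Serial("COM3", 115200, timeout=0)
# data = await read_by_polling(ser)
The interval is a latency/CPU trade-off, which is exactly why the notification-based approach above is preferable when the platform supports it.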
1
4
79,546,910
2025-3-31
https://stackoverflow.com/questions/79546910/typeerror-in-sfttrainer-initialization-unexpected-keyword-argument-tokenizer
Question: I am trying to fine-tune the Mistral-7B-Instruct-v0.1-GPTQ model using SFTTrainer from trl. However, when running my script in Google Colab, I encounter the following error: TypeError: SFTTrainer.__init__() got an unexpected keyword argument 'tokenizer' Here is the relevant portion of my code: import torch from datasets import load_dataset, Dataset from peft import LoraConfig, AutoPeftModelForCausalLM, prepare_model_for_kbit_training, get_peft_model from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig, TrainingArguments from trl import SFTTrainer, SFTConfig import os # Load and preprocess dataset data = load_dataset("tatsu-lab/alpaca", split="train") data_df = data.to_pandas() data_df = data_df[:5000] data_df["text"] = data_df[["input", "instruction", "output"]].apply( lambda x: "###Human: " + x["instruction"] + " " + x["input"] + " ###Assistant: " + x["output"], axis=1 ) data = Dataset.from_pandas(data_df) # Load tokenizer tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ") tokenizer.pad_token = tokenizer.eos_token # Load and prepare model quantization_config_loading = GPTQConfig(bits=4, disable_exllama=True, tokenizer=tokenizer) model = AutoModelForCausalLM.from_pretrained( "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", device_map="auto" ) model.config.use_cache = False model.config.pretraining_tp = 1 model.gradient_checkpointing_enable() model = prepare_model_for_kbit_training(model) # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"] ) model = get_peft_model(model, peft_config) # Training configuration training_arguments = SFTConfig( output_dir="mistral-finetuned-alpaca", per_device_train_batch_size=8, gradient_accumulation_steps=1, optim="paged_adamw_32bit", learning_rate=2e-4, lr_scheduler_type="cosine", save_strategy="epoch", logging_steps=100, num_train_epochs=1, max_steps=250, fp16=True, packing=False, max_seq_length=512, dataset_text_field="text", push_to_hub=True ) # Initialize trainer trainer = SFTTrainer( model=model, train_dataset=data, peft_config=peft_config, args=training_arguments, tokenizer=tokenizer, # This causes the error ) trainer.train() What I Have Tried: I checked the TRL documentation but couldn't find a tokenizer parameter in the SFTTrainer class. Removing tokenizer=tokenizer avoids the error, but then I get a different error about tokenization during training. I tried passing tokenizer inside training_arguments, but that didn't work either. Question: Is SFTTrainer expecting the tokenizer to be handled differently in the latest versions of trl? How should I correctly pass the tokenizer to ensure training works? I would appreciate any insights!
In the 0.12.0 release it is explained that the tokenizer argument is now called the processing_class parameter. You should be able to run your code as before by replacing tokenizer with processing_class: trainer = SFTTrainer(
    model=model,
    train_dataset=data,
    peft_config=peft_config,
    args=training_arguments,
    processing_class=tokenizer,  # replaces the removed `tokenizer` keyword
)
3
4
79,540,700
2025-3-28
https://stackoverflow.com/questions/79540700/is-it-possible-to-use-python-social-auths-emailauth-with-drf-social-oauth2-for
I have Facebook authentication in my project, and I've set up some pipelines. So it would be nice for non-social email registration to also utilize these pipelines. I tried adding EmailAuth to the authentication backends list, but I don't know what view to use now for registration. So, is it possible (or reasonable) to use EmailAuth with drf-social-oauth2 for non-social registration, and if so, how do I do it?
You can integrate EmailAuth with drf-social-oauth2 by adding it to AUTHENTICATION_BACKENDS and using the same authentication pipelines. Since drf-social-oauth2 lacks a native email registration view, create a custom API endpoint that registers users manually and authenticates them via the email backend. Then, expose this view in your urls settings.py AUTHENTICATION_BACKENDS = ( 'social_core.backends.email.EmailAuth', # The email-based autentication 'social_core.backends.facebook.FacebookOAuth2', # Facebook login 'django.contrib.auth.backends.ModelBackend', # Default auth backend ) custom registration view # Other imports .. from rest_framework.status import HTTP_400_BAD_REQUEST, HTTP_201_CREATED from rest_framework.permissions import AllowAny from social_django.utils import load_backend, load_strategy from rest_framework.serializers import CharField, EmailField, Serializer class EmailRegistrationSerializer(Serializer): email = EmailField() password = CharField(max_length=100) member_type = CharField(max_length=20) class EmailRegisterView(APIView): permission_classes = [AllowAny] def post(self, request): serializer = EmailRegistrationSerializer(data=request.data) serializer.is_valid(raise_exception=True) strategy = load_strategy(request) backend = load_backend(strategy, "email", redirect_uri=None) user = backend.authenticate( response=serializer.data, backend=backend, strategy=strategy, request=request, ) return Response({"detail": "User registered successfully"}, HTTP_201_CREATED) urls.py urlpatterns = [ # ... path("auth/register/email/", EmailRegisterView.as_view(), name="email_register"), ]
1
1
79,545,669
2025-3-31
https://stackoverflow.com/questions/79545669/how-to-achieve-similar-desmos-visuals-in-manim
I am trying to graph this function in manim x^(2/3) + 0.9sqrt(3.5-x^2) * sin(πx*a) I wrote some simple code to graph it in an animation like style: from manim import * import numpy as np class HeartAnimation(Scene): def construct(self): axes = Axes( x_range=[-10, 10, 1], y_range=[-7, 7, 1], axis_config={"color": BLUE}, ) def heart_curve(x): value = 3.5-x**2 return np.cbrt(x**2) + 0.9*value*np.sin(np.pi * x * 15) graph = axes.plot( heart_curve, color=RED, x_range=[-np.sqrt(3.5), np.sqrt(3.5)], use_smoothing=False ) self.play(Create(axes)) self.play(Create(graph)) self.wait(8) the result from this code is this: and what I was looking for is something like this in desmos is there someway I can mess with the scaling or zooming to achieve something similar to what is produced in desmos? It doesn't have to be a one to one replicate, but it would be good if more of the heart show and the y axis to be scaled properly as it seems to be stretched in the y axis in manim when compared to desmos.
For the sampling rate, you have to add a third element to x_range (which will be the step of your sampling). However, it seems like you can't make it too low, so to work around this problem you can just scale the whole thing to be bigger (on both x and y so that the shape doesn't change; in the formulas: x -> x/scale and y -> y*scale). For that I added a stretch_factor to the code. As the sampling rate gets higher, the density of the curve increases, so don't forget to lower the stroke_width in the plot method: stroke_width=1 For the shape, I have reworked your formula by using a heart-shaped double parabola instead of the cube-root function. Here is the code: from manim import *
import numpy as np

class HeartAnimation(Scene):
    def construct(self):
        axes = Axes(
            x_range=[-100, 100, 10],
            y_range=[-70, 70, 10],
            axis_config={"color": BLUE},
        )

        def heart_curve(x):
            stretch_factor = 2
            sinus_amp = 10
            x = x/stretch_factor
            heart_shaped_parabola = max(-((x-8)/2)**2+25,-((x+8)/2)**2+25)
            sinus = sinus_amp*(3.5-(abs(x)/10)**3)*np.sin(np.pi * (x/10) * 15)-sinus_amp*(3.5-(x/10)**2)
            y = heart_shaped_parabola+sinus
            return y

        graph = axes.plot(
            heart_curve,
            color=RED,
            x_range=[-np.sqrt(3.5)*16.6, np.sqrt(3.5)*16.6,0.1],
            use_smoothing=False,
            stroke_width=1
        )

        self.play(Create(axes))
        self.play(Create(graph))
        self.wait(8)
Hope this helps you.
1
1
79,542,444
2025-3-28
https://stackoverflow.com/questions/79542444/faster-moving-median-in-numpy
I am trying to calculate a moving median for around 10.000 signals that each are a list of length around 750. An example dataframe looks like this: num_indices = 2000 # Set number of indices # Generate lists of values (each a list of numbers from 0 to 1) column_data = [np.random.random(750).tolist() for _ in range(num_indices)] # Create DataFrame df = pd.DataFrame({'values': column_data}, index=range(num_indices)) I have found this implementation that uses np.lib.stride_tricks, but it is a bit slow for my purpose. Does anyone have an idea for a faster method? def moving_median(signal,n=150): # Compute rolling median for valid windows swindow = np.lib.stride_tricks.sliding_window_view(signal, (n,)) b = np.nanmedian(swindow, axis=1) b_full = np.concatenate([[np.nanmedian(signal)]*(n-1), b]) # Prepend first `n-1` values unchanged return signal - b_full And finally: df.iloc[:,0].apply(lambda x: moving_median(x))
You don't mention having NaNs in your data, so I will assume you do not. In that case, I think this is the best SciPy has to offer: import numpy as np from scipy.ndimage import median_filter rng = np.random.default_rng(49825498549428354) data = rng.random(size=(2000, 750)) # 2000 signals, each of length 750 res = median_filter(data, size=(150,), axes=(-1,)) # moving window of size 150 I believe there was some recent work done to make this faster in the development version of SciPy (nightly wheels here) than before. I'm guessing that rather than re-sorting each window from scratch, it updates a sorted or partitioned data structure based on the incoming and outgoing values, but I haven't really looked into it. Note the various mode options in the documentation that control what happens at the boundary. If you are happy to get back an array that is smaller than the original rather the default "reflect" boundary condition, you may just want to use the default mode and trim the edges afterwards. If you do have NaNs, SciPy has a new vectorized_filter that will work with np.nanmedian, but it just uses stride_tricks.sliding_window_view under the hood, so unlikely to be faster than what you have. If CuPy is an option, let me know, and I might be able to suggest something much faster. SciPy's median_filter still isn't as fast as it should be.
2
2
79,544,212
2025-3-30
https://stackoverflow.com/questions/79544212/conditional-running-total-based-on-date-field-in-pandas
I have a dataframe with below data. DateTime Tag Qty 2025-01-01 13:00 1 270 2025-01-03 13:22 1 32 2025-01-10 12:33 2 44 2025-01-22 10:04 2 120 2025-01-29 09:30 3 182 2025-02-02 15:05 1 216 To be achieved: 2 new columns, first with cumulative sum of Qty until the DateTime on each row when Tag is not equal to 2, second with cumulative sum of Qty until the DateTime on each row when Tag is equal to 2. Below is the result I am looking for. DateTime Tag Qty RBQ RSQ 2025-01-01 13:00 1 270 270 0 2025-01-03 13:22 1 32 302 0 2025-01-10 12:33 2 44 302 44 2025-01-22 10:04 2 120 302 164 2025-01-29 09:30 3 182 484 164 2025-02-02 15:05 1 216 600 164 I've been searching for a method, but doesn't seem to be getting it right. May I please get help on getting it right? Thanks,
You can use a condition and the methods mask and where to create both columns cond = df['Tag'].eq(2) df['RBQ'] = df['Qty'].mask(cond, 0).cumsum() df['RSQ'] = df['Qty'].where(cond, 0).cumsum() Another solution is to use the same condition and pivot the dataframe based on that. df2 = (df.join(df.assign(cols=df['Tag'].eq(2).map({True: 'RSQ', False: 'RBQ'})) .pivot(columns='cols', values='Qty') .fillna(0).cumsum()) ) End result: DateTime Tag Qty RBQ RSQ 2025-01-01 13:00 1 270 270.0 0.0 2025-01-03 13:22 1 32 302.0 0.0 2025-01-10 12:33 2 44 302.0 44.0 2025-01-22 10:04 2 120 302.0 164.0 2025-01-29 09:30 3 182 484.0 164.0 2025-02-02 15:05 1 216 700.0 164.0 Edit: Here is a slightly modified version that takes in consideration the DateTime column. I modified the first value in the datetime column as an example. cond = df['Tag'].eq(2) tmp = df.sort_values(by='DateTime') df['RBQ'] = tmp['Qty'].mask(cond, 0).cumsum() df['RSQ'] = tmp['Qty'].where(cond, 0).cumsum() For the second solution, you will have to use merge instead of join. df2 = pd.merge(df, (df.assign(cols=df['Tag'].eq(2).map({True: 'RSQ', False: 'RBQ'})) .pivot(index='DateTime', columns='cols', values='Qty') .fillna(0).cumsum()), on='DateTime') End result: DateTime Tag Qty RBQ RSQ 2025-02-04 13:00:00 1 270 700.0 164.0 2025-01-03 13:22:00 1 32 32.0 0.0 2025-01-10 12:33:00 2 44 32.0 44.0 2025-01-22 10:04:00 2 120 32.0 164.0 2025-01-29 09:30:00 3 182 214.0 164.0 2025-02-02 15:05:00 1 216 430.0 164.0
2
2
79,544,697
2025-3-30
https://stackoverflow.com/questions/79544697/how-to-extract-nested-json-using-json-normalize
I have a nested JSON, but I can't understand how to work with it. {
  "return": {
    "status_processing": "3",
    "status": "OK",
    "order": {
      "id": "872102042",
      "number": "123831",
      "date_order": "dd/mm/yyyy",
      "items": [
        {
          "item": {
            "id_product": "684451795",
            "code": "VPOR",
            "description": "Product 1",
            "unit": "Un",
            "quantity": "1.00",
            "value": "31.76"
          }
        },
        {
          "item": {
            "id_product": "684451091",
            "code": "VSAP",
            "description": "Product 2",
            "unit": "Un",
            "quantity": "1.00",
            "value": "31.76"
          }
        }
      ]
    }
  }
} I searched Stack Overflow questions and tried some solutions that people suggested, but they don't work for me. Here is a sample that I used to access the data from the JSON: df = pd.json_normalize(
    order_list,
    record_path=["return", "order", "itens"],
    meta=[
        ["return", "order", "id"],
        ["return", "order", "date_order"],
        ["return", "order", "number"],
    ],
) But it doesn't work; it duplicates the data when I send it to the dataframe. Can anyone help me? EDIT Here is an example that I used: Convert nested JSON to pandas DataFrame And what I expected:
I don't know what exactly you expect in output but if you want every item in new row then you could use normal code with for-loop for this. order_list = { "return": { "status_processing": "3", "status": "OK", "order": { "id": "872102042", "number": "123831", "date_order": "dd/mm/yyyy", "itens": [ { "item": { "id_product": "684451795", "code": "VPOR", "description": "Product 1", "unit": "Un", "quantity": "1.00", "value": "31.76" } }, { "item": { "id_product": "684451091", "code": "VSAP", "description": "Product 2", "unit": "Un", "quantity": "1.00", "value": "31.76" } } ] } } } import pandas as pd data = [] order = order_list['return']['order'] for iten in order['itens']: for key, val in iten.items(): row = { #'key': key, 'id': order['id'], 'date_order': order['date_order'], 'number': order['number'], 'id_product': val['id_product'], #'code': val['code'], #'description': val['description'], #'quantity': val['quantity'], #'value': val['value'], } data.append(row) df = pd.DataFrame(data) print(df) Result: id date_order number id_product 0 872102042 dd/mm/yyyy 123831 684451795 1 872102042 dd/mm/yyyy 123831 684451091 If you need other information in rows then you should show it in question.
1
0
79,542,528
2025-3-28
https://stackoverflow.com/questions/79542528/how-to-strip-quotes-from-csv-table
I'm using the pandas library to convert CSVs to other data types. My CSV has the fields quoted, like this: "Version", "Date", "Code", "Description", "Tracking Number", "Author" "0.1", "22AUG2022", , "Initial Draft", "NR", "Sarah Marshall" "0.2", "23SEP2022", "D", "Update 1", "N-1234", "Bill Walter" "0.3", "09MAY2023", "A\, C", "New update.", "N-1235", "George Orwell" The problem is that when I read the CSV with pandas.read_csv('myfile.csv'), the quotes are included in the values: Version "Date" "Code" "Description" "Tracking Number" "Author" 0 0.1 "22AUG2022" "Initial Draft" "NR" "Sarah Marshall" 1 0.2 "23SEP2022" "D" "Update 1" "N-1234" "Bill Walter" 2 0.3 "09MAY2023" "A, C" "New update." "N-1235" "George Orwell" So these quotes are included when converting to HTML: <table> <thead> <tr style="text-align: right"> <th></th> <th>Version</th> <th>"Date"</th> <th>"Code"</th> <th>"Description"</th> <th>"Tracking Number"</th> <th>"Author"</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0.1</td> <td>"22AUG2022"</td> <td></td> <td>"Initial Draft"</td> <td>"NR"</td> <td>"Sarah Marshall"</td> </tr> ... I've tried quoting=csv.QUOTE_NONE, but it didn't fix it--in fact, it actually added quotes to the Version column. I found this question--the answers essentially says to strip out any quotes in post processing. I can of course loop through the CSV rows and strip out the leading/trailing quotes for each of the values, but since quoted values are common with CSV I'd expect that there would be a parameter or easy way to enable/disable the quotes in the rendered output but I can't find something like that. Is there a way to accomplish this?
The leading spaces in your input seem to be throwing Pandas off (and some other CSV processors I can think of). Try the skipinitialspace=True option to make Pandas ignore every space between a comma and a quote char: import pandas as pd df = pd.read_csv("input.csv", skipinitialspace=True) print(df) Version Date ... Tracking Number Author 0 0.1 22AUG2022 ... NR Sarah Marshall 1 0.2 23SEP2022 ... N-1234 Bill Walter 2 0.3 09MAY2023 ... N-1235 George Orwell [3 rows x 6 columns] Expounding a bit, you can see the difference, where quotes are present, between the default of not skipping the initial space and skipping: from io import StringIO data = """ Col1 "foo" "foo" " foo" """.strip() print("no skip") df = pd.read_csv(StringIO(data)) for _, x in df.iterrows(): print(repr(x["Col1"])) print() print("skip initial") df = pd.read_csv(StringIO(data), skipinitialspace=True) for _, x in df.iterrows(): print(repr(x["Col1"])) no skip 'foo' ' "foo"' ' foo' skip initial 'foo' 'foo' ' foo' "foo" doesn't matter as it doesn't have any space "foo" is up for interpretation as the space precedes the (default) quote char " foo" also doesn't matter as the space is encoded inside the (default) quote char
1
5
79,544,791
2025-3-30
https://stackoverflow.com/questions/79544791/how-to-fix-signupview-is-missing-a-queryset
In my django web app i am trying to build a Newspaper app, in my homepage when i click on SIGN UP Button i get an error "ImproperlyConfigured at /accounts/signup/", I didnt figure out where is the problem. forms.py: from django.contrib.auth.forms import UserCreationForm, UserChangeForm from .models import CustomUser class CustomUserCreationForm(UserCreationForm): #Creation of CustomUser class Meta: model = CustomUser fields = UserCreationForm.Meta.fields + ("age",) class CustomUserChangeForm(UserChangeForm): #Modifying existing users class Meta: model = CustomUser fields = UserChangeForm.Meta.fields and models.py: from django.contrib.auth.models import AbstractUser #username, pw1, pw2 from django.db import models class CustomUser(AbstractUser): age = models.PositiveIntegerField(null=True, blank=True) urls.py: from django.urls import path from .views import SignUpView #Accounts/urls.py : Handles only registrations URLs!! urlpatterns = [ path('signup/', SignUpView.as_view(), name='signup'), ] and views.py: from django.urls import reverse_lazy from django.views.generic import CreateView from .forms import CustomUserCreationForm class SignUpView(CreateView): from_class = CustomUserCreationForm success_url = reverse_lazy('login') template_name = 'registration/signup.html' signup.html: {% extends 'base.html' %} {% block title %}Sign Up{% endblock title %} {% block content %} <h2>Sign Up</h2> <form method="post"> {% csrf_token %} {{ form.as_p }} <button type="submit">Sign Up</button> </form> {% endblock content %} I tried to change fields in forms.py with: fields = ( "username", "email", "age",) but nothing to be mention
The culprit is a typo: from_class to form_class: from django.urls import reverse_lazy from django.views.generic import CreateView from .forms import CustomUserCreationForm class SignUpView(CreateView): form_class = CustomUserCreationForm success_url = reverse_lazy('login') template_name = 'registration/signup.html' But I think it is interesting to know why this then raises this error: if you don't provide a .form_class [Django-doc] to a CreateView, a CreateView will try to make its own form, for that it can work with the modelform_factory(…) [Django-doc], but for that it needs to know the model. It tries to determine the model in three possible ways: first by looking if the view has a .model attribute, if not if it has an .object attribute (for an UpdateView the form can be implied by the object to update), and finally it runs the .get_queryset() method [Django-doc] to obtain the model of the QuerySet, and since none of these work with this SignUpView, we thus get that error.
2
3
79,541,208
2025-3-28
https://stackoverflow.com/questions/79541208/printing-numpy-matrices-horizontally-on-the-console
Numpy has many nice features for formatted output, but something I miss is the ability to print more than one array/matrix on the same line. What I mean is easiest to explain with an example. Given the following code: A = np.random.randint(10, size = (4, 4)) B = np.random.randint(10, size = (4, 4)) niceprint(A, "*", B, "=", A @ B) How can you implement niceprint so that it prints the following on the console? [[7 1 0 4] [[8 6 2 6] [[ 80 45 54 83] [8 1 5 8] * [0 3 8 5] = [147 91 123 140] [3 7 2 4] [7 8 7 3] [ 62 55 108 95] [8 8 2 8]] [6 0 8 9]] [126 88 158 166]]
Don't write it yourself; use something common and off-the-shelf like Sympy. import numpy as np import sympy A = sympy.Matrix(np.random.randint(10, size=(4, 4))) B = sympy.Matrix(np.random.randint(10, size=(4, 4))) sympy.pretty_print( sympy.Eq( sympy.MatMul(A, B), A @ B, ) ) ⎡4 2 1 5⎤ ⎡3 0 7 7⎤ ⎡77 54 42 58 ⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢1 6 5 1⎥ ⎢9 5 0 1⎥ ⎢76 82 29 57 ⎥ ⎢ ⎥⋅⎢ ⎥ = ⎢ ⎥ ⎢3 3 7 0⎥ ⎢2 9 4 8⎥ ⎢50 78 49 80 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎣7 6 2 8⎦ ⎣9 7 2 4⎦ ⎣151 104 73 103⎦
6
0
79,537,058
2025-3-26
https://stackoverflow.com/questions/79537058/getting-authentication-failed-error-while-connecting-with-ms-fabric-data-warehou
First I ahve used Node.js and tedious but it didn't work becase tedious library can't connect to fabric dwh because fabric has a bit different security layers and protocols, that tedious so far do not have. Now I have used ODBC library but got Authentication Error Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Could not login because the authentication failed. I am using this code in node js require('dotenv').config(); const express = require('express'); const odbc = require('odbc'); const app = express(); const port = process.env.PORT || 3000; // Connection pool configuration const poolConfig = { connectionString: ` Driver={ODBC Driver 18 for SQL Server}; Server=${process.env.DB_SERVER}; Database=${process.env.DB_NAME}; UID=${process.env.AZURE_APP_CLIENT_ID}; PWD=${process.env.AZURE_APP_SECRET}; Authentication=ActiveDirectoryServicePrincipal; Encrypt=yes; TrustServerCertificate=no; Connection Timeout=30; `.replace(/\n\s+/g, ' '), // Clean up whitespace initialSize: 5, maxSize: 20 }; // Create connection pool let pool; (async () => { try { pool = await odbc.pool(poolConfig); console.log('Connection pool created successfully'); } catch (error) { console.error('Pool creation failed:', error); process.exit(1); } })(); // Basic health check app.get('/', (req, res) => { res.send('Fabric DWH API is running'); }); // Query endpoint app.get('/api/query', async (req, res) => { try { const query = req.query.sql || 'SELECT TOP 10 * FROM INFORMATION_SCHEMA.TABLES'; const connection = await pool.connect(); const result = await connection.query(query); await connection.close(); res.json(result); } catch (error) { console.error('Query error:', error); res.status(500).json({ error: error.message }); } }); app.listen(port, () => { console.log(`Server running on http://localhost:${port}`); }); using python import pyodbc import struct from itertools import chain, repeat from azure.identity import ClientSecretCredential def connect_to_fabric(): tenant_id = "your-tenant-id" client_id = "your-client-id" client_secret = "your-client-secret" database_server = "your-server.datawarehouse.fabric.microsoft.com,1433" database_name = "your-database-name" credential = ClientSecretCredential( tenant_id=tenant_id, client_id=client_id, client_secret=client_secret ) token = credential.get_token("https://database.windows.net/.default").token token_bytes = token.encode("UTF-16-LE") encoded = bytes(chain.from_iterable(zip(token_bytes, repeat(0)))) token_struct = struct.pack(f'<I{len(encoded)}s', len(encoded), encoded) SQL_COPT_SS_ACCESS_TOKEN = 1256 connection_string = f""" Driver={{ODBC Driver 18 for SQL Server}}; Server={database_server}; Database={database_name}; Encrypt=yes; TrustServerCertificate=no; Connection Timeout=45; """ try: print("🔹 Connecting to Fabric SQL Warehouse...") conn = pyodbc.connect(connection_string, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct}) cursor = conn.cursor() cursor.execute("SELECT 1 AS test") row = cursor.fetchone() print(f"✅ Connected! Query Result: {row[0]}") conn.close() print("🔹 Connection Closed.") except Exception as e: print(f"❌ Connection Failed: {e}") if __name__ == "__main__": connect_to_fabric() In python I also got the same error Additional resources: Issue On Microsoft fabric community
Fixed by granting the Azure SQL storage permission in Azure and giving the app ID read permission to the Fabric workspace.
2
0
79,544,253
2025-3-30
https://stackoverflow.com/questions/79544253/image-size-inconsistency-between-github-and-pypi-in-readme-md
I created some simple console games in Python(Oyna Project) and took screenshots of each game to showcase them in the README.md file. I wanted to display these images in a table format both on GitHub and on PyPI. On GitHub, everything looks fine — the images are well-aligned, and their sizes are consistent. But when I push this library(Oyna) to PyPI, the image sizes look uneven and unbalanced. The problem is that the images themselves are not the same size, and on PyPI, some images appear much smaller or larger than others, making the table look messy. I've tried adjusting the image sizes using HTML tags and Markdown syntax, but nothing seems to work correctly on PyPI. How can I make the images show up consistently and evenly on PyPI just like they do on GitHub, even if their original sizes are different? My code to display the table: <table> <tr> <td><a href="https://github.com/kamyarmg/oyna/tree/main/src/oyna/sudoku/"> Sudoku </a> </br><img src="https://raw.githubusercontent.com/kamyarmg/oyna/refs/heads/main/docs/images/sudoku.png" alt="Sudoku" style="width:250px;"/> </td> <td><a href="https://github.com/kamyarmg/oyna/tree/main/src/oyna/twenty_forty_eight_2048/">2048</a> </br><img src="https://raw.githubusercontent.com/kamyarmg/oyna/refs/heads/main/docs/images/2048.png" alt="2048" style="width:250px;"/> </td> <td><a href="https://github.com/kamyarmg/oyna/tree/main/src/oyna/matching/">Matching</a> </br><img src="https://raw.githubusercontent.com/kamyarmg/oyna/refs/heads/main/docs/images/matching.png" alt="Matching" style="width:250px;"/> </td> </tr> </table> Github README.md Image Table: Pypi Image Table:
You can solve this by resizing the images with an HTML table, since PyPI supports HTML in its README files. Tweak the width and/or the height, but watch the aspect ratio of the image: it is usually best to set only the width, since most browsers calculate the other dimension for you, which avoids distorting the image. For example, here is one way to do it in your README file: <table> <tr> <td> <img src="my imageA.png" width="180"> </td> </tr> </table>
1
2
79,536,716
2025-3-26
https://stackoverflow.com/questions/79536716/expand-columns-of-structs-to-rows-in-polars
Say we have this dataframe: import polars as pl df = pl.DataFrame({'EU': {'size': 10, 'GDP': 80}, 'US': {'size': 100, 'GDP': 800}, 'AS': {'size': 80, 'GDP': 500}}) shape: (1, 3) ┌───────────┬───────────┬───────────┐ │ EU ┆ US ┆ AS │ │ --- ┆ --- ┆ --- │ │ struct[2] ┆ struct[2] ┆ struct[2] │ ╞═══════════╪═══════════╪═══════════╡ │ {10,80} ┆ {100,800} ┆ {80,500} │ └───────────┴───────────┴───────────┘ I am looking for a function like df.expand_structs(column_name='metric') that gives shape: (2, 4) ┌────────┬─────┬─────┬─────┐ │ metric ┆ EU ┆ US ┆ AS │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞════════╪═════╪═════╪═════╡ │ size ┆ 10 ┆ 100 ┆ 80 │ │ GBP ┆ 80 ┆ 800 ┆ 500 │ └────────┴─────┴─────┴─────┘ I've tried other functions like unnest, explode but no luck. Any help appreciated!
TL;DR Performance comparison at the end. Both @etrotta's method and @DeanMacGregor's adjustment perform well on a pl.lazyframe with small Structs (e.g., struct[2]) and columns N <= 15 (not collected). Other methods fail lazily. With bigger Structs and/or columns N > 15, both unpivot options below start to outperform. Other suggested methods thus far slower in general. Option 1 out = (df.unpivot() .unnest('value') .select(pl.exclude('variable')) .transpose(include_header=True) .pipe( lambda x: x.rename( dict(zip(x.columns, ['metric'] + df.columns)) ) ) ) Output: shape: (2, 4) ┌────────┬─────┬─────┬─────┐ │ metric ┆ EU ┆ US ┆ AS │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞════════╪═════╪═════╪═════╡ │ size ┆ 10 ┆ 100 ┆ 80 │ │ GDP ┆ 80 ┆ 800 ┆ 500 │ └────────┴─────┴─────┴─────┘ Explanation / Intermediates Use df.unpivot: shape: (3, 2) ┌──────────┬───────────┐ │ variable ┆ value │ │ --- ┆ --- │ │ str ┆ struct[2] │ ╞══════════╪═══════════╡ │ EU ┆ {10,80} │ │ US ┆ {100,800} │ │ AS ┆ {80,500} │ └──────────┴───────────┘ So that we can apply df.unnest on new 'value' column: shape: (3, 3) ┌──────────┬──────┬─────┐ │ variable ┆ size ┆ GDP │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞══════════╪══════╪═════╡ │ EU ┆ 10 ┆ 80 │ │ US ┆ 100 ┆ 800 │ │ AS ┆ 80 ┆ 500 │ └──────────┴──────┴─────┘ Use df.select to exclude 'variable' column (pl.exclude) and df.transpose with include_header=True: shape: (2, 4) ┌────────┬──────────┬──────────┬──────────┐ │ column ┆ column_0 ┆ column_1 ┆ column_2 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞════════╪══════════╪══════════╪══════════╡ │ size ┆ 10 ┆ 100 ┆ 80 │ │ GDP ┆ 80 ┆ 800 ┆ 500 │ └────────┴──────────┴──────────┴──────────┘ Now, we just need to rename the columns. Here done via df.pipe + df.rename. Without the chained operation, that can also be: out.columns = ['metric'] + df.columns Option 2 out2 = (df.unpivot() .unnest('value') .unpivot(index='variable', variable_name='metric') .pivot(on='variable', index='metric') ) Equality check: out.equals(out2) # True Explanation / Intermediates Same start as option 1, but followed by a second df.unpivot to get: shape: (6, 3) ┌────────┬────────┬───────┐ │ column ┆ metric ┆ value │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════╪═══════╡ │ EU ┆ size ┆ 10 │ │ US ┆ size ┆ 100 │ │ AS ┆ size ┆ 80 │ │ EU ┆ GDP ┆ 80 │ │ US ┆ GDP ┆ 800 │ │ AS ┆ GDP ┆ 500 │ └────────┴────────┴───────┘ Followed by df.pivot on 'column' with 'metric' as the index to get desired shape. Performance comparison (gist) Columns: n_range=[2**k for k in range(12)] Struct: 2, 20, 100 Methods compared: unpivot_unnest_t (option 1), #@ouroboros1 unpivot_unnest_t2 (option 1, adj) unpivot_pivot (option 2) concat_list_expl, #@etrotta concat_list_expl_lazy, #lazy concat_list_expl2, #@etrotta, #@DeanMacGregor concat_list_expl2_lazy, #lazy map_batches, #@DeanMacGregor loop, #@sammywemmy Results:
5
5
79,542,832
2025-3-29
https://stackoverflow.com/questions/79542832/using-winrt-interface-in-python
ref: ISystemMediaTransportControlsInterop I compiled a dll about ISystemMediaTransportControlsInterop::GetForWindow. I use IDA to decompile it. Then I wrote the C-like code as Python. I believe that I was wrote them in a right way. But sadly, I met a pointer error. I know it's because SMTC_Interop_GetForWindow function address is incorrect, but I don't know why. Python log: RoInit: 0 String Create: 0, SMTCInterop: c_void_p(None) RoGetActivationFactory: 0, SMTCInterop: c_void_p(7948368) Traceback (most recent call last): File "D:\儿子文件\编程\python\PyCharm项目\WinSMTC\test.py", line 56, in <module> TestFunc() File "D:\儿子文件\编程\python\PyCharm项目\WinSMTC\test.py", line 45, in TestFunc GetForWindow(Create_SMTC_Window(), smtc_obj) File "D:\儿子文件\编程\python\PyCharm项目\WinSMTC\test.py", line 26, in GetForWindow result = SMTC_Interop_GetForWindow(smtc_interop.value, hwnd, ctypes.byref(REF_IID), ctypes.byref(smtc_obj)) OSError: exception: access violation writing 0x00007FFBED118788 Python Code(key func): def GetForWindow(hwnd: int, smtc_obj: ctypes.c_void_p): result = RoInitialize(RO_INIT_MULTITHREADED) print("RoInit:", result) smtc_interop = VoidPtr() h_string = HSTRING() result = WindowsCreateString(String("Windows.Media.SystemMediaTransportControls"), ctypes.c_uint32(42), ctypes.byref(h_string)) print(f"String Create: {result}, SMTCInterop: {smtc_interop}") result = RoGetActivationFactory(h_string, ctypes.byref(IID_SystemMediaTransportControls), ctypes.byref(smtc_interop)) print(f"RoGetActivationFactory: {result}, SMTCInterop: {smtc_interop}") SMTC_Interop_GetForWindow = ctypes.CFUNCTYPE(ctypes.c_int64, ctypes.c_int64, ctypes.c_int32, ctypes.POINTER(ctypes.c_int64), ctypes.POINTER(ctypes.c_void_p))(ctypes.cast(smtc_interop.value, ctypes.POINTER(ctypes.c_int64)).contents.value + 48) result = SMTC_Interop_GetForWindow(smtc_interop.value, hwnd, ctypes.byref(REF_IID), ctypes.byref(smtc_obj)) IDA Func Code(to the key func): You can download Dll Source Code & Dll File & IDA Debug File & Test Python Code in here
The error suggests you're trying to write to memory that's not accessible. The main problem is in this line: SMTC_Interop_GetForWindow = ctypes.CFUNCTYPE(...)(ctypes.cast(smtc_interop.value, ctypes.POINTER(ctypes.c_int64)).contents.value + 48) Replace it with: vtable_ptr = ctypes.cast(smtc_interop.value, ctypes.POINTER(ctypes.c_void_p)).contents.value method_ptr = ctypes.cast(vtable_ptr + 3 * 8, ctypes.POINTER(ctypes.c_void_p)).contents.value SMTC_Interop_GetForWindow = ctypes.CFUNCTYPE(ctypes.c_int64, ctypes.c_void_p, ctypes.c_int32, ctypes.POINTER(ctypes.c_int64), ctypes.POINTER(ctypes.c_void_p))(method_ptr) This approach should resolve the error, as it reads the function pointer through the vtable structure rather than using a direct offset from the interface pointer.
1
2
79,542,469
2025-3-28
https://stackoverflow.com/questions/79542469/python-valueerror-is-a-bad-directive-in-format-y-m-dxz
I'm attempting to parse HTML time strings in Python: from datetime import datetime input = "2025-03-24T07:01:53+00:00" output = datetime.strptime(input, "%Y-%m-%d%X%:z") print(output) Running this with Python 3.13 returns the following error: Traceback (most recent call last): File "/Users/dread_pirate_roberts/html_time_converter/main.py", line 69, in <module> output = datetime.strptime(input, "%Y-%m-%d%X%:z") File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_strptime.py", line 674, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/_strptime.py", line 445, in _strptime raise ValueError("'%s' is a bad directive in format '%s'" % (bad_directive, format)) from None ValueError: ':' is a bad directive in format '%Y-%m-%d%X%:z' This doesn't make sense to me because %:z was added in Python 3.12. Edit: I discovered that my code actually has 2 bugs. For one, the format string should include a T. Also, as InSync's answer explains, strptime does not accept %:z, so the code should be: from datetime import datetime input = "2025-03-24T07:01:53+00:00" output = datetime.strptime(input, "%Y-%m-%dT%X%z") print(output)
This is a known CPython bug. For now (3.13), %:z only works in .strftime(): >>> import datetime >>> d = datetime.datetime.now(tz = datetime.UTC) >>> d datetime.datetime(2025, 1, 1, 1, 1, 1, 111111, tzinfo=datetime.timezone.utc) >>> d.strftime('%z') '+0000' >>> d.strftime('%:z') '+00:00' It should also be noted that %z already parses +00:00 and similar inputs: >>> datetime.datetime.strptime('+00:30', '%z') datetime.datetime(1900, 1, 1, 0, 0, tzinfo=datetime.timezone(datetime.timedelta(seconds=1800)))
3
5
79,542,324
2025-3-28
https://stackoverflow.com/questions/79542324/constrain-llama3-2-vision-output-to-a-list-of-options
I have several images of animals in the same directory as the script. How can I modify the following script to process an image but force the output to only be a single selection from a list: from pathlib import Path import base64 import requests def encode_image_to_base64(image_path): """Convert an image file to base64 string.""" return base64.b64encode(image_path.read_bytes()).decode('utf-8') def extract_text_from_image(image_path): """Send image to local Llama API and get text description.""" base64_image = encode_image_to_base64(image_path) payload = { "model": "llama3.2-vision", "stream": False, "messages": [ { "role": "user", "content": ( "With just one word, classify this image into one of these exact categories:\n" "- dog\n" "- cat\n" "- butterfly\n" ), "images": [base64_image] } ] } response = requests.post( "http://localhost:11434/api/chat", json=payload, headers={"Content-Type": "application/json"} ) return response.json().get('message', {}).get('content', 'No text extracted') def process_directory(): """Process all images in current directory and create text files.""" for image_path in Path('.').glob('*'): if image_path.suffix.lower() in {'.png', '.jpg', '.jpeg', '.gif', '.bmp', '.webp'}: print(f"\nProcessing {image_path}...") text = extract_text_from_image(image_path) image_path.with_suffix('.txt').write_text(text, encoding='utf-8') print(f"Created {image_path.with_suffix('.txt')}") process_directory() However, despite different prompt engineering, I get some answers that will do more than just select from a list. For example, it may occassionally output "From the image, there is a winged insect, therefore my guess is "butterfly." ANSWER: Butterfly." If I define the list as allowed_options = ['dog', 'cat', 'butterfly'] I only want it to output a single string from that list and nothing else.
Llama3.2-vision supports structured outputs, so you can specify the schema of the response you want, and the model should follow it. Your request should look like this: payload = { "model": "llama3.2-vision", "stream": False, "messages": [ { "role": "user", "content": ( "Classify this image into one of these exact categories:\n" "- dog\n" "- cat\n" "- butterfly\n" ), "images": [base64_image] } ], "format": { "type": "object", "properties": { "animal": { "enum": [ "dog", "cat", "butterfly" ], "type": "string" } }, "required": [ "animal" ] } } And now the output should be a JSON with an attribute animal. For example: { "animal": "dog" }
1
2
79,541,633
2025-3-28
https://stackoverflow.com/questions/79541633/how-to-add-aliases-to-consecutive-occurrences-in-column
I want to add aliases to consecutive occurrences of the same gene name in column gene_id. If the gene_id value is unique, it should be unchanged. Here is my example input: df_genes_data = {'gene_id': ['g0', 'g1', 'g1', 'g2', 'g3', 'g4', 'g4', 'g4']} df_genes = pd.DataFrame.from_dict(df_genes_data) print(df_genes.to_string()) gene_id 0 g0 1 g1 2 g1 3 g2 4 g3 5 g4 6 g4 7 g4 and there is the desired output: gene_id 0 g0 1 g1_TE1 2 g1_TE2 3 g2 4 g3 5 g4_TE1 6 g4_TE2 7 g4_TE3 Any ideas on how to perform it? I've been looking for solutions but found only ways to count consecutive occurrences, not to label them with aliases. EDIT: I've tried to find gene_id values which occur more than once in my data: rep = [] gene_list = df_genes['gene_id'] for idx in range(0, len(gene_list) - 1): if gene_list[idx] == gene_list[idx + 1]: rep.append(gene_list[idx]) rep = list(set(rep)) print("Consecutive identical gene names are : " + str(rep)) but I have no idea how to add desired aliases to them.
Use shift+ne+cumsum to group the consecutive values, then groupby.transform('size') to identify the groups of more than 2 values, and groupby.cumcount to increment the name: # Series as name for shorter reference s = df_genes['gene_id'] # group consecutive occurrences group = s.ne(s.shift()).cumsum() # form group and save as "g" for efficiency g = s.groupby(group) # identify groups with more than 1 value m = g.transform('size').gt(1) # increment values df_genes.loc[m, 'gene_id'] += '_TE'+g.cumcount().add(1).astype(str) Output: gene_id 0 g0 1 g1_TE1 2 g1_TE2 3 g2 4 g3 5 g4_TE1 6 g4_TE2 7 g4_TE3 Intermediates: gene_id group m cumcount+1 suffix 0 g0 1 False 1 1 g1 2 True 1 _TE1 2 g1 2 True 2 _TE2 3 g2 3 False 1 4 g3 4 False 1 5 g4 5 True 1 _TE1 6 g4 5 True 2 _TE2 7 g4 5 True 3 _TE3
3
6
79,538,411
2025-3-27
https://stackoverflow.com/questions/79538411/why-is-using-a-dictionary-slower-than-sorting-a-list-to-generate-frequency-array
I was doing this question on Codeforces (2057B - Gorilla and the Exam) where I had to use the frequencies of different numbers in a list. First I tried calculating it with a dictionary but I got TLE verdict (Time limit exceeded). After reading the editorial I implemented it by sorting the input list and then using comparisons with the previous element to generate the frequency array. This was faster than the approach using the dictionary but sorting is an O(nlogn) operation which should take more time than creating a dictionary which only requires O(n) time. Dictionary approach (slower) t= int(input()) for _ in range(t): n, k = map(int, input().split()) arr = map(int, input().split()) frequence_table = {} for num in arr: try: frequence_table[num] += 1 except: frequence_table[num] = 1 freq = list(frequence_table.values()) freqs = sorted(freq, reverse=True) while k>0: if k>=freqs[-1]: k -= freqs.pop() else: break print(max(len(freqs), 1)) Sorting based approach (faster) t= int(input()) for _ in range(t): n, k = map(int, input().split()) arr = map(int, input().split()) arr = sorted(arr) freq = [1] for i in range(1, len(arr)): if arr[i] == arr[i-1]: freq[-1] += 1 else: freq.append(1) freqs = sorted(freq, reverse=True) while k>0: if k>=freqs[-1]: k -= freqs.pop() else: break print(max(len(freqs), 1))
Why is using a dictionary slower than sorting a list to generate frequency array? Apparently because one input is specially crafted to hack Python's dict implementation, so that building your dict doesn't take the expected linear time. It can take up to quadratic time, and likely they did that. See Python's TimeComplexity page (showing O(n) worst case time for each dict access) and Tutorial: How to hack hash table in Python 3 on CodeForces itself. One way to defeat it is to simply add 1 to all the numbers, i.e., change for num in arr: to this: for num in arr: num += 1 Then the solution doesn't get stopped and rejected at the 1 second time limit but gets accepted with around 0.18 seconds. Despite doing more work, and without really changing the data like randomizing or sorting would. The effect is simply that Python's collision resolution takes different paths then, which is enough to thwart the brittle attack. Another way, also accepted in around 0.18 seconds, is to simply not convert to ints, instead counting the strings. I.e., change arr = map(int, input().split()) to this: arr = input().split() That's also safer, as Python deliberately and unpredictably randomizes string hashes exactly to defeat such attacks: the __hash__() values of str and bytes objects are “salted” with an unpredictable random value. [...] This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst case performance of a dict insertion, O(n2) complexity.
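For illustration, here is a minimal sketch of the second workaround, counting the tokens as strings but folding the manual try/except dict into collections.Counter (the per-test-case input-reading loop and the k-spending loop from the question are assumed unchanged):

from collections import Counter

# Keep the tokens as strings: str hashes are salted per process,
# so a precomputed worst-case input cannot target the dict.
arr = input().split()
frequence_table = Counter(arr)          # same O(n) counting as the manual loop
freqs = sorted(frequence_table.values(), reverse=True)

The counting itself stays linear; the only change is which objects get hashed.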
2
2
79,537,769
2025-3-27
https://stackoverflow.com/questions/79537769/raising-a-float-near-1-to-an-infinite-power-in-python
1.000000000000001 ** float('inf') == float('inf') while 1.0000000000000001 ** float('inf') == 1.0 What determines the exact threshold here? It seems to be an issue of float precision, where numbers below a certain threshold are considered the same as 1.0. Is there a way to get better precision in Python?
As others have said, this is due to IEEE754 floats not being able to represent your constants with the precision you're expecting. Python provides some useful tools that let you see what's going on, specifically the math.nextafter method and the decimal module. The decimal module is useful to see the decimal expansion of a float (i.e. how we humans read/write numbers), e.g.: from decimal import Decimal as D a = 1.000000000000001 b = 1.0000000000000001 print(D(a), D(b)) which outputs: 1.0000000000000011102230246251565404236316680908203125 1 these are the actual values seen by your computer when you use those constants. As others have said, you can see that b is actually just 1 so it's "lost" the trailing digit you entered. You just have to know that it's going to do that as a programmer. To see nearby values the nextafter method is very useful, e.g.: import math c = math.nextafter(1.0, math.inf) print(c, D(c)) which outputs: 1.0000000000000002 1.0000000000000002220446049250313080847263336181640625 the first number is what Python prints by default and only includes enough precision to be able to unambiguously distinguish the value, the second uses the decimal module and is exact. Having a play with these is a great way to further understand what you're actually telling your computer to do. Chux also talked about hex representations of floats; in my experience these take a bit longer for people to understand but are another tool to know about when your code seems to be misbehaving. In Python you use float.hex() to see the actual bits that make up a float, using the format defined by C99, e.g.: one = 1.0 print(one.hex(), c.hex(), math.nextafter(c, math.inf).hex()) which will output: 0x1.0000000000000p+0 0x1.0000000000001p+0 0x1.0000000000002p+0 This output is closer to how your computer represents floats. The hexadecimal fractional digits can make these values somewhat awkward to interpret, but you can get there with some more magic numbers (i.e. these come from the spec). The second is saying something like 0x10000000000001 * 2**(0-52); the -52 is needed to "shift" all but the first digit right into the fraction.
2
2
79,540,471
2025-3-28
https://stackoverflow.com/questions/79540471/filter-sequences-of-same-values-in-a-particular-column-of-polars-df-and-leave-on
I have a very large Polars LazyFrame (if collected it would be tens of millions records). I have information recorded for a specific piece of equipment taken every second and some location flag that is either 1 or 0. When I have sequences where the location flag is equal to 1, I need to filter out and only leave the latest one but this must be done per equipment id. I cannot use UDFs since this is a performance-critical piece of code and should ideally stay withing Polars expression syntax. For a simple case where I have only a single equipment id, I can do it relatively easily by shifting the time data 1 row and filter out the records where there's a big gap: df_test = pl.DataFrame( { 'time': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 'equipment': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'loc': [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1] } ) df_test.filter(pl.col('loc') == 1).with_columns((pl.col('time') - pl.col('time').shift(1)).alias('time_diff')).filter(pl.col('time_diff') > 1) This gives me sort of a correct result, but the problem is that out of 3 sequences of 1s, I only keep 2, the first one gets lost. I can probably live with that, but ideally want to not lose any data. In a standard case, there will be multiple equipment types and once again, the same approach works but again, for both types, I only keep 2 out of 3 sequences. df_test = pl.DataFrame( { 'time': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,], 'equipment': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], 'loc': [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0] } ) Is there a better way to do this?
If I've interpreted correctly, for each equipment you want to keep only the first row of each continuous sequence of loc = 1. Fixing your solution In that case, the only changes you need to make to your solution are: Add the fill_value to pl.col(“time”).shift(1) to ensure that the first row with loc = 1 is always selected. The choice of fill_value must ensure that the first time_diff > 1 , e.g. fill_value = negative number. Note that without the fill_value, the first row of the shift is always null, resulting in a null time_diff, so it is not selected by the time_diff > 1 filter. Another option would be to change the filter to pl.col(“time_diff”) > 1 | pl.col(“time_diff”).is_null() Apply the logic to each equipment by making it a window expression with .over("equipment"). import polars as pl df_test = pl.DataFrame( { "time": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], "equipment": [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], "loc": [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1], } ) res = ( df_test.filter(pl.col("loc") == 1) #.sort("time") # uncomment if we can't assume that the df is sorted by time. .with_columns( (pl.col("time") - pl.col("time").shift(1, fill_value=-1)) .over("equipment") .alias("time_diff") ) .filter(pl.col("time_diff") > 1) ) Output: >>> res shape: (3, 4) ┌──────┬───────────┬─────┬───────────┐ │ time ┆ equipment ┆ loc ┆ time_diff │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞══════╪═══════════╪═════╪═══════════╡ │ 3 ┆ 1 ┆ 1 ┆ 4 │ │ 9 ┆ 1 ┆ 1 ┆ 4 │ │ 12 ┆ 1 ┆ 1 ┆ 2 │ └──────┴───────────┴─────┴───────────┘ Alternative solution That said, here is another similar solution which I think is clearer: res = ( df_test #.sort("time") # uncomment if we can't assume that the df is sorted by time. .filter( ((pl.col("loc") == 1) & (pl.col("loc").shift(fill_value=0) != 1)) .over("equipment") ) ) Note that in this case the fill_value has to be any value other than 1.
1
2
79,540,754
2025-3-28
https://stackoverflow.com/questions/79540754/coc-pyright-fail-to-report-reportattributeaccessissue-if-setattr-is-provided
from pydantic import BaseModel class User2(BaseModel): name:str age: int ''' def __setattr__(self, key, value): super().__setattr__(key, value) ''' user2 = User2(name="Alice", age=1) >>> user2.foo = 1 # Should report: Cannot assign to attribute "foo" for class "User2" \ Attribute "foo" is unknown if I uncomment the __setattr__ function, user2.foo = 1, coc-pyright doesn't report an error. I want to use a __setattr__ method but also have it report an error. How can I do this?
The presence of __setattr__ does not report reportAttributeAccess issue for pyright as well as mypy. The trick is to not reveal that such a method exists. This can be done by defining it in an if not TYPE_CHECKING: block. Linters and checkers might mark the code there as unreachable which can be undesired, therefore you can redirect the call to a normal private method that is executed instead: from pydantic import BaseModel from typing import TYPE_CHECKING class User2(BaseModel): name: str age: int if not TYPE_CHECKING: # hide from type-checker to report AttributeAccessIssue def __setattr__(self, key, value): self._setattr(key, value) # Use a different method to not create "unreachable code" def _setattr(self, key, value): super().__setattr__(key, value) user2 = User2(name="Alice", age=1) user2.foo = 1 # reportAttributeAccess
1
2
79,540,627
2025-3-28
https://stackoverflow.com/questions/79540627/why-does-pathlib-path-glob-function-in-python3-13-return-map-object-instead-of-a
I was playing around with Path objects and found an interesting behaviour. When testing with python3.11, Path.glob returns a generator object. >>> from pathlib import Path >>> >>> cwd = Path.cwd() >>> cwd.glob("*") <generator object Path.glob at 0x100b5e680> >>> import sys;sys.version '3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]' I tested the same code with Python3.13, but this time it returns a map object. >>> from pathlib import Path >>> >>> cwd = Path.cwd() >>> cwd.glob("*") <map object at 0x101867940> >>> import sys;sys.version '3.13.0 (v3.13.0:60403a5409f, Oct 7 2024, 00:37:40) [Clang 15.0.0 (clang-1500.3.9.4)]' I know the result can be obtained by wrapping it in a list, but I am curios to understand why this change was made. My initial thought was that it might be because of the performance reasons. But a quick test shows that python3.13 version slower than python3.11 on my system. Here is the benchmark result: $ python3.11 -m timeit -n 20000 'from pathlib import Path;cwd=Path.cwd();r=list(cwd.glob("*.py"))' 20000 loops, best of 5: 72.9 usec per loop $ python3.13 -m timeit -n 20000 'from pathlib import Path;cwd=Path.cwd();r=list(cwd.glob("*.py"))' 20000 loops, best of 5: 75.1 usec per loop
pathlib.Path.glob learned a bunch of new things in Python 3.13, and I suspect the change in return value type from one generator type to another generator was part of that: Changed in version 3.13: The recurse_symlinks parameter was added. Changed in version 3.13: The pattern parameter accepts a path-like object. Changed in version 3.13: Any OSError exceptions raised from scanning the filesystem are suppressed. In previous versions, such exceptions are suppressed in many cases, but not all. The particular change that made glob() return a map is in this PR (found via git log -S "map(self._from_parsed_string") – you can read the description and commit message there for all the details.
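For a quick local check, here is a small sketch; the reported type is the only thing that differs between versions, and consuming the result works the same either way:

from pathlib import Path

paths = Path.cwd().glob("*.py")
print(type(paths))                    # <class 'generator'> on 3.11, <class 'map'> on 3.13
print(sorted(p.name for p in paths))  # iterating or wrapping in list()/sorted() is unaffected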
1
2
79,540,404
2025-3-28
https://stackoverflow.com/questions/79540404/calculating-spo2-with-max30101-sensor-in-python-on-raspberry-pi-4
I have recently purchased a MAX30101 breakout board / sensor and have been using it with a Raspberry Pi 4 and Python. I have been using the sparkfun-qwiic-max3010x library provided by Sparkfun to communicate with the sensor and collect data. The GitHub page for the Python sparkfun-qwiic-max3010x library includes an example for calculating heart rate but not SpO2. The Arduino library includes an example for calculating heart rate and SpO2. I was wondering if there is an algorithm available or if it is possible to calculate SpO2 accurately in Python on the Raspberry Pi 4. I have seen an algorithm in the user guide of the MAX30101 sensor but have not been able to accurately get an SpO2 reading with it in Python in the following code. The SpO2 value sometimes goes in the negatives and will give extremely different consecutive values. import time import numpy as np import qwiic_max3010x # Initialize the MAX30101 sensor sensor = qwiic_max3010x.QwiicMax3010x() sensor.begin() sensor.setup() sensor.setPulseAmplitudeGreen(0) # Function to read raw values from the sensor def read_raw_values(num_samples=100): red_values = [] ir_values = [] for _ in range(num_samples): red = sensor.getRed() ir = sensor.getIR() red_values.append(red) ir_values.append(ir) time.sleep(0.01) # Adjust the delay as needed sensor.shutDown() return np.array(red_values), np.array(ir_values) # Function to calculate AC and DC components def calculate_ac_dc(raw_values): dc = np.mean(raw_values) # DC component ac = raw_values - dc # AC component return ac, dc def calculate_spo2(red_ac, red_dc, ir_ac, ir_dc): r = (np.mean(red_ac) / red_dc) / (np.mean(ir_ac) / ir_dc) spo2 = 104 - (17 * r) return spo2 # Main loop try: while True: # Read raw values red_raw, ir_raw = read_raw_values(num_samples=100) # Calculate AC and DC components red_ac, red_dc = calculate_ac_dc(red_raw) ir_ac, ir_dc = calculate_ac_dc(ir_raw) spo2 = calculate_spo2(red_ac, red_dc, ir_ac, ir_dc) print("Spo2: ", spo2) # Wait for a bit before the next reading time.sleep(1) # Adjust the sleep time as needed except KeyboardInterrupt: print("Program stopped.")
Calculating r the way you do is incorrect. By the very nature of red_ac its mean is almost zero (it is not exactly zero because of rounding errors). The same goes for ir_ac. Dividing two such values results in a huge meaningless error, precisely what you observe. Notice that in the Maxim paper you linked they do not divide averages. They divide instant readings, normalized by the averages. So the first thing you should try is to calculate instant values of r[i] = (red_ac[i]/red_dc) / (ir_ac[i]/ir_dc) and average them over the observation window. In the ideal case you'll get reasonable results. Then you'd have to deal with the noise. The signals you sample are far from ideal, and it is not coincidental that the Arduino code does not resemble the simple formula. Most of the SpO2 computation is fighting noise.
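To make the per-sample idea concrete, here is a rough sketch with hypothetical helper names; it assumes red_raw and ir_raw are the NumPy arrays returned by read_raw_values in the question, keeps the question's 104 - 17*r calibration only as a placeholder, and does none of the noise handling warned about above:

import numpy as np

def spo2_from_window(red_raw, ir_raw):
    # DC components: the slowly varying baselines of each channel
    red_dc = np.mean(red_raw)
    ir_dc = np.mean(ir_raw)
    # Normalised AC component of every sample
    red_norm = (red_raw - red_dc) / red_dc
    ir_norm = (ir_raw - ir_dc) / ir_dc
    # Instantaneous ratio r[i], skipping samples where the IR AC part is ~0
    valid = np.abs(ir_norm) > 1e-6
    r = np.mean(red_norm[valid] / ir_norm[valid])
    return 104 - 17 * r  # placeholder calibration from the question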
4
6
79,539,620
2025-3-27
https://stackoverflow.com/questions/79539620/the-problem-of-reachability-in-a-directed-graph-but-all-predecessors-must-be-re
The problem It's similar to the problem of finding the minimal set of vertices in a directed graph from which all vertices can be reached, except that a node must have all its predecessors reached in order to be reached itself. More formally, let S be a set of nodes belonging to a directed graph G. A node n of G is said to be reachable from S if and only if n belongs to S or if all predecessors of n are reachable from S. We'll say that S is a generator of G if all nodes of G are reachable from S. We can be sure that such a generator always exists for every graph (i.e. the set containing all nodes will always work). The question is to design an algorithm that returns a minimum-length generator of G, given a graph G. Example Let's take this graph. {1,3,6} is a generator because {1} unlocks 2, {2,3} unlocks 5 and {1,3,6} unlocks 4. Now, if you take S = {1,5}, nodes 3, 4 and 6 will remain unreached. S is therefore not a generator of this graph. In fact, we can see that, if S is a generator, every cycle of the graph must contain at least one node in S.
This problem is NP hard. That can be seen by reducing the minimum feedback vertex set problem to your reachability problem. Start with any directed graph G. Construct a new graph H by adding a single vertex v with an edge connecting it to every vertex in G. Let S be any solution to your reachability problem on H. It is obvious that v must be in S because it has no predecessor, and is a predecessor of everything else. It is also clear that if you take S away from G, it cannot contain any cycles. And therefore S must contain a vertex feedback set for G. Conversely, any subset S of H that contains both v and a vertex feedback set for G will solve your reachability problem. (You can prove this by induction on the size of G minus S. Just prove that there is always an element with no predecessor in G, remove that element, then apply the induction hypothesis.) Therefore a solver for your minimum reachability problem allows us to solve the minimum feedback vertex problem for arbitrary graphs. Since that problem is NP hard, so is your problem.
2
4
79,540,025
2025-3-27
https://stackoverflow.com/questions/79540025/how-to-materialize-polars-expression-into-series
Working with a single series instead of a dataframe, how can I materialize an expression on that series? For instance, when I run time = pl.Series([pl.datetime(2025, 3, 27)]) (time + pl.duration(minutes=5)).to_numpy() I get AttributeError Traceback (most recent call last) Cell In[46], line 2 1 time = pl.Series([pl.datetime(2025, 3, 27)]) ----> 2 (time + pl.duration(minutes=5)).to_numpy() AttributeError: 'Expr' object has no attribute 'to_numpy'
You need a "DataFrame" to run expressions. Your example is "wrong" because passing an Expr creates a Series with dtype object. pl.Series([pl.datetime(2025, 3, 27)]) # shape: (1,) # Series: '' [o][object] # [ # 2025-03-27 00:00:00.alias("datetime") # ] # pl.select() is shorthand for creating an empty frame. s = pl.select(pl.datetime(2025, 3, 27)).to_series() # shape: (1,) # Series: 'datetime' [datetime[μs]] # [ # 2025-03-27 00:00:00 # ] pl.select(s + pl.duration(minutes=5)) shape: (1, 1) ┌─────────────────────┐ │ datetime │ │ --- │ │ datetime[μs] │ ╞═════════════════════╡ │ 2025-03-27 00:05:00 │ └─────────────────────┘ You can add .to_series() if you want to go back to a Series. There is also a Series method to do this without expressions. .dt.offset_by() s.dt.offset_by("5m") # shape: (1,) # Series: 'datetime' [datetime[μs]] # [ # 2025-03-27 00:05:00 # ]
1
1
79,539,193
2025-3-27
https://stackoverflow.com/questions/79539193/parsing-dates-from-strings-that-contain-unrelated-text-while-avoiding-parsing-in
What I want to Achieve My goal is to extract a date that has at least a day and a month, but could also have a minute, hour, and year. I want to avoid the parser finding integers and thinking that implies a day of the current month. Furthermore, I also want the parser to find a date that is only a small part of a larger string. Think: 'Today is the most wonderful 27th of March 2025' = datetime.datetime(2025, 3, 27, 0, 0) '2gb of ram' != datetime.datetime(2025, 3, 2, 0, 0) (Assuming we are currently in March) What I tried so far Using the fuzzy=True argument in dateutil from dateutil import parser #OK: Correct Datetime is returned: datetime.datetime(2025, 3, 30, 0, 0) parser.parse('Today is the most wonderful 30th March 2025', fuzzy=True) #NOT OK: Integer is not ignored, Datetime is returned: datetime.datetime(2025, 3, 2, 0, 0) parser.parse('2 is my lucky number', fuzzy=True) Using the REQUIRED_PARTS setting in dateparser from dateparser import parse # OK: Returns correct datetime: datetime.datetime(2025, 3, 30, 0, 0) parse('30 March', settings={'REQUIRE_PARTS': ['month', 'day']}) #OK: Integer is Ignored, no datetime returned parse('30', settings={'REQUIRE_PARTS': ['month', 'day']}) #NOT OK: Datetime Should be Found parse('Today is the most wonderful 30th of March', settings={'REQUIRE_PARTS': ['month', 'day']}) It would be great if I could combine the fuzzy=True argument from the dateutil module with the settings argument from the dateparser module, but seeing as they are separate modules, that is not feasible. Is there another way to achieve the same functionality?
Use from dateparser.search import search_dates Here's a quick function: from dateparser import parse from dateparser.search import search_dates def extract_date(text:str, exclusions:list=['now', 'today', 'tomorrow', 'yesterday', 'hour', 'minute', 'seconds', 'month', 'months','year', 'years'], required:list=['month', 'day']): ''' Check if the text contains at least Day and Month to parse date off If yes, return datetime object. If not, return None. - Inputs: * text: string to extract date from * exclusions: list of words to exclude from parsing (e.g. ['today', 'tomorrow']) * required: list of required date components (e.g. ['day', 'month']) ''' # Parse the date # It will return only the first result, if found try: return search_dates(text.lower(), settings={'REQUIRE_PARTS': required, 'SKIP_TOKENS': exclusions})[0][1] # Error, return None except (IndexError, TypeError): return None Testing here, it worked ok. Text | Date Extracted ----------------------------- TEXT: Today is the most wonderful 30th March 2025 || ** DATE PARSED: 2025-03-30 TEXT: 2 is my lucky number || ** DATE PARSED: None TEXT: I was born on 1990-01-01 || ** DATE PARSED: 1990-01-01 00:00:00 TEXT: I will go to Paris on 2025-01-01 || ** DATE PARSED: 2025-01-01 00:00:00 TEXT: I will go to Paris on 2040-09 missing day || ** DATE PARSED: None TEXT: 25 thousand days || ** DATE PARSED: None TEXT: It costs 25 dollars || ** DATE PARSED: None TEXT: I will go to NYC in 25 days || ** DATE PARSED: 2025-04-21 16:47:31.137955 TEXT: I will go to Rome in 1 month || ** DATE PARSED: None
2
1
79,538,811
2025-3-27
https://stackoverflow.com/questions/79538811/how-do-i-send-email-using-client-credentials-flow-such-that-senders-mail-is-inc
I created below app to send mail using Graph API I used Client Credential flow to send mail, but I keep getting 401 error. Is there anything that needs to be changed in the API permission on Azure or in the code? Is there a way I can send mail using delegated access without an interactive session? import requests import msal client_id = 'bdxxx' tenant_id = '97xxx' client_secret = 'Ltxxxxxx' authority = f'https://login.microsoftonline.com/{tenant_id}' scope = ['https://graph.microsoft.com/.default'] graph_url = 'https://graph.microsoft.com/v1.0' sender_email = '[email protected]' def get_access_token(): try: client = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret) token_result = client.acquire_token_for_client(scopes=scope) if "access_token" in token_result: return f"Bearer {token_result['access_token']}" else: raise Exception(f"Failed to obtain access token: {token_result}") except Exception as e: print(f"Exception: {e}") return None if __name__ == '__main__': access_token = get_access_token() print(f'Access token: {access_token}') if not access_token: print("Access token retrieval failed. Exiting.") exit() headers = { 'Authorization': access_token, 'Content-Type': 'application/json' } email_message = { "message": { "subject": "Test Email", "body": { "contentType": "HTML", "content": "Email body" }, "toRecipients": [ { "emailAddress": { "address": "[email protected]" } } ], "from": { "emailAddress": { "address": '[email protected]' } } }, "saveToSentItems": "false" } response = requests.post( f"{graph_url}/users/{sender_email}/sendMail", json=email_message, headers=headers ) print(f'{response.status_code} {response.text}')
The error usually occurs if you using ConfidentialClientApplication to send mail and granted delegated type API permission to the Microsoft Entra ID application. Hence to resolve the error, you need to grant application type API permission to the Microsoft Entra ID application. Refer this MsDoc Make sure to grant admin consent: Now I am able to send mail successfully: import requests import msal client_id = 'ClientID' tenant_id = 'TenantID' client_secret = 'Secret' authority = f'https://login.microsoftonline.com/{tenant_id}' scope = ['https://graph.microsoft.com/.default'] graph_url = 'https://graph.microsoft.com/v1.0' sender_email = 'xxx' def get_access_token(): try: client = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret) token_result = client.acquire_token_for_client(scopes=scope) if "access_token" in token_result: return f"Bearer {token_result['access_token']}" else: raise Exception(f"Failed to obtain access token: {token_result}") except Exception as e: print(f"Exception: {e}") return None if __name__ == '__main__': access_token = get_access_token() print(f'Access token: {access_token}') if not access_token: print("Access token retrieval failed. Exiting.") exit() headers = { 'Authorization': access_token, 'Content-Type': 'application/json' } email_message = { "message": { "subject": "Test Email", "body": { "contentType": "HTML", "content": "Email body" }, "toRecipients": [ { "emailAddress": { "address": "xxx" } } ], "from": { "emailAddress": { "address": 'xxx' } } }, "saveToSentItems": "false" } response = requests.post( f"{graph_url}/users/{sender_email}/sendMail", json=email_message, headers=headers ) print(f'{response.status_code} {response.text}') Make sure to assign O365 license to the user. And if you are making use of personal Microsoft user account to send mail, then it required Mail.Send delegated API permission which doesn't work with ConfidentialClientApplication For personal Microsoft user account, you need to make use of /me that is https://graph.microsoft.com/v1.0/me/sendMail endpoint and make use of any user interactive flow.
1
1
79,538,656
2025-3-27
https://stackoverflow.com/questions/79538656/python-asyncio-condition-why-deadlock-appears
I am learning Python asyncio and now struggling with Conditions. I wrote some simple code: import asyncio async def monitor(condition: asyncio.Condition): current_task_name = asyncio.current_task().get_name() print(f'Current task {current_task_name} started') async with condition: await condition.wait() print(f'Condition lock released for task {current_task_name}, start doing smthg') await asyncio.sleep(2) print(f'Current task {current_task_name} finished') async def setter(condition: asyncio.Condition): print(f'Setter starts') async with condition: await asyncio.sleep(1) condition.notify_all() # Wake up all the tasks print('Setter finished') print('Setter unlocked') async def main(): cond = asyncio.Condition() async with asyncio.TaskGroup() as tg: tg.create_task(setter(cond)) [tg.create_task(monitor(cond)) for _ in range(3)] # tg.create_task(setter(cond)) asyncio.run(main()) But the code hits a deadlock after the Setter unlocked message, and I can't figure out why. If I swap the task creation order in main and create the monitor tasks before the setter task, everything works. I am very new to asyncio, so please explain in simple words...
In your original code, the setter task is started before the monitor tasks. When setter calls notify_all(), no monitor tasks are yet waiting so no tasks are notified. Later, the monitors reach condition.wait() but remain stuck because no further notifications are sent. When you reverse the order, starting monitor tasks first, they begin waiting on the condition before the setter notifies. As a result, they are properly awakened and the deadlock is avoided. To fix the issue, ensure that all tasks that should wait on the condition are created and running before calling notify_all().
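A minimal sketch of that fix, changing only the task creation order inside the question's main() (the monitor and setter coroutines are assumed unchanged):

async def main():
    cond = asyncio.Condition()
    async with asyncio.TaskGroup() as tg:
        # Start the waiters first so they are already inside
        # condition.wait() before the setter calls notify_all().
        for _ in range(3):
            tg.create_task(monitor(cond))
        tg.create_task(setter(cond))
        # A sturdier pattern is a shared flag checked with
        # await condition.wait_for(lambda: flag), which does not
        # depend on task start-up order at all.

asyncio.run(main())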
1
3
79,538,469
2025-3-27
https://stackoverflow.com/questions/79538469/is-with-concurrent-futures-executor-blocking
Why when using separate with, the 2nd executor is blocked until all tasks from 1st executor is done when using a single compound with, the 2nd executor can proceed while the 1st executor is still working This is confusing because i thought executor.submit returns a future and does not block. It seems like the context manager is blocking. Is this true, and are there official references mentioning this behaviour of individual vs compound context managers? Separate context managers import time from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor def task(name, delay): print(f"{time.time():.3f} {name} started") time.sleep(delay) print(f"{time.time():.3f} {name} finished") if __name__ == "__main__": print(f"{time.time():.3f} Main started") with ThreadPoolExecutor(max_workers=1) as thread_pool: thread_pool.submit(task, "Thread Task", 0.2) with ProcessPoolExecutor(max_workers=1) as process_pool: process_pool.submit(task, "Process Task", 0.1) print(f"{time.time():.3f} Main ended") Output: 1743068624.365 Main started 1743068624.365 Thread Task started 1743068624.566 Thread Task finished 1743068624.571 Process Task started 1743068624.671 Process Task finished 1743068624.673 Main ended Notice Thread Task must finish before Process Task can start. I have ran the code numerous times and still don't see below patterns. Single context manager import time from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor def task(name, delay): print(f"{time.time():.3f} {name} started") time.sleep(delay) print(f"{time.time():.3f} {name} finished") if __name__ == "__main__": print(f"{time.time():.3f} Main started") with ThreadPoolExecutor(max_workers=1) as thread_pool, ProcessPoolExecutor(max_workers=1) as process_pool: thread_pool.submit(task, "Thread Task", 0.2) process_pool.submit(task, "Process Task", 0.1) print(f"{time.time():.3f} Main ended") Output 1743068722.440 Main started 1743068722.441 Thread Task started 1743068722.443 Process Task started 1743068722.544 Process Task finished 1743068722.641 Thread Task finished 1743068722.641 Main ended With a compound context manager, Process Task can start before Thread Task finishes no matter their delay ratios. Delay of Process Task is designed to be shorter than delay of Thread Task to additionally show that Process Task can finish before Thread Task finishes with a compound context manager Please point out if this interpretation is erroneous.
When you use a concurrent.futures.Executor instance as a context manager (e.g. with ThreadPoolExecutor() as executor:), then when the block is exited there is an implicit call to executor.shutdown(wait=True). So when you have two with blocks that are not nested (i.e. they are coded one after the other), the second with block will not start executing until all submitted tasks (pending futures) to the first executor have completed. In your second case, i.e.: with ThreadPoolExecutor(max_workers=1) as thread_pool, ProcessPoolExecutor(max_workers=1) as process_pool: thread_pool.submit(task, "Thread Task", 0.2) process_pool.submit(task, "Process Task", 0.1) The above is more or less equivalent to: with ThreadPoolExecutor(max_workers=1) as thread_pool: with ProcessPoolExecutor(max_workers=1) as process_pool: thread_pool.submit(task, "Thread Task", 0.2) process_pool.submit(task, "Process Task", 0.1) You are submitting tasks to both pools before either with block exits and so there is no implicit call to thread_pool.shutdown(wait=True) occurring before the call to process_pool.submit is made.
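If the two pools really need to stay in separate objects without the implicit blocking, one option (a sketch, not the only way) is to manage shutdown explicitly instead of relying on the context managers; this behaves like the compound with version:
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, wait

def task(name, delay):
    print(f"{time.time():.3f} {name} started")
    time.sleep(delay)
    print(f"{time.time():.3f} {name} finished")

if __name__ == "__main__":
    thread_pool = ThreadPoolExecutor(max_workers=1)
    process_pool = ProcessPoolExecutor(max_workers=1)
    futures = [
        thread_pool.submit(task, "Thread Task", 0.2),    # both submitted before
        process_pool.submit(task, "Process Task", 0.1),  # either pool shuts down
    ]
    wait(futures)               # only now block until both tasks are done
    thread_pool.shutdown()
    process_pool.shutdown()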
1
2
79,537,428
2025-3-26
https://stackoverflow.com/questions/79537428/can-i-multiply-these-numpy-arrays-without-creating-an-intermediary-array
This script: import numpy as np a = np.linspace(-2.5, 2.5, 6, endpoint=True) b = np.vstack((a, a)).T c = np.array([2, 1]) print(b*c) produces: [[-5. -2.5] [-3. -1.5] [-1. -0.5] [ 1. 0.5] [ 3. 1.5] [ 5. 2.5]] which is my desired output. Can it be produced directly from a and c? Trying a*c and c*a fails with a ValueError: operands could not be broadcast together error. Trying different dimensions of c, e.g. [[2, 1]], fails too. I found that the simplest way to go is to define a differently, i.e. a = np.linspace([-2.5, -2.5], [2.5, 2.5], 6, endpoint=True). I can now write a*c, a*(2, 1), etc. Since this changes my initial post, all answers below remain valid.
With the original (6,) and (2,) arrays, einsum may make the outer product more explicit: In [331]: a=np.linspace(-2.5, 2.5, 6, endpoint=True); c=np.array([1,2]) In [332]: np.einsum('i,j->ij',a,c) Out[332]: array([[-2.5, -5. ], [-1.5, -3. ], [-0.5, -1. ], [ 0.5, 1. ], [ 1.5, 3. ], [ 2.5, 5. ]]) np.outer(a,c) and np.multiply.outer(a,c) also do this.
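For completeness, a plain broadcasting sketch (using the question's original c = [2, 1] so the output matches the desired array) just inserts a new axis into a instead of building b:
import numpy as np

a = np.linspace(-2.5, 2.5, 6, endpoint=True)   # shape (6,)
c = np.array([2, 1])                           # shape (2,)

# a[:, None] has shape (6, 1), which broadcasts against (2,) to give (6, 2).
print(a[:, None] * c)
# [[-5.  -2.5]
#  [-3.  -1.5]
#  [-1.  -0.5]
#  [ 1.   0.5]
#  [ 3.   1.5]
#  [ 5.   2.5]]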
1
2
79,537,461
2025-3-26
https://stackoverflow.com/questions/79537461/test-that-unittest-mock-was-called-with-some-specified-and-some-unspecified-argu
We can check if a unittest.mock.Mock has any call with some specified arguments. I now want to test that some of the arguments are correct, while I do not know about the other ones. Is there some intended way to do so? I could manually iterate over Mock.mock_calls, but is there a more compact way? import random from unittest import mock def mymethod(a, handler): handler(a, random.random()) def test_mymethod(): handler = mock.Mock() mymethod(5, handler) handler.assert_any_call(5, [anything]) # How do I achieve this?
You can assert the wildcard argument to be unittest.mock.ANY, which has its __eq__ method overridden to unconditionally return True so that the object is evaluated equal to anything: from unittest.mock import ANY def test_mymethod(): handler = mock.Mock() mymethod(5, handler) handler.assert_any_call(5, ANY) # passes Demo: https://ideone.com/8Kbywo
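A small illustration of why this works and how it extends to keyword arguments (the retries argument here is made up for the demo):
from unittest import mock

# ANY compares equal to any object, which is what makes the assertion pass.
assert mock.ANY == 12345
assert mock.ANY == object()

handler = mock.Mock()
handler(5, 0.42, retries=3)
# Positional and keyword arguments can both be wildcarded.
handler.assert_any_call(5, mock.ANY, retries=mock.ANY)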
2
1
79,537,356
2025-3-26
https://stackoverflow.com/questions/79537356/pandas-group-by-maximum-row-for-a-subset
I have a dataframe with weekly product sales product_id week_number sales A1 1 1000 A1 2 2000 A1 3 3000 A2 1 8000 A2 2 4000 A2 3 2000 I want to add a column that identifies rows where the total sales were the highest for the given product: product_id week_number sales product_max A1 1 1000 FALSE A1 2 2000 FALSE A1 3 3000 TRUE A2 1 8000 TRUE A2 2 4000 FALSE A2 3 2000 FALSE Since A1 had its highest sales on week 3, that row is tagged as True. But week 3 wasn't the highest for A2. (And so for A2, week 1 is tagged as True.) I know I could write a loop to do this and cycle through each of the product IDs one by one, but I am wondering if there is a way to do this with a different function - possibly Pandas Groupby? Thank you!
Answer df['product_max'] = ( df.groupby('product_id')['sales'].transform('max') .eq(df['sales']) ) df product_id week_number sales product_max 0 A1 1 1000 False 1 A1 2 2000 False 2 A1 3 3000 True 3 A2 1 8000 True 4 A2 2 4000 False 5 A2 3 2000 False Example Code import pandas as pd data = {'product_id': ['A1', 'A1', 'A1', 'A2', 'A2', 'A2'], 'week_number': [1, 2, 3, 1, 2, 3], 'sales': [1000, 2000, 3000, 8000, 4000, 2000]} df = pd.DataFrame(data)
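An alternative sketch uses groupby().idxmax(); note that it marks only one row per product when the maximum is tied, whereas the transform comparison above marks every tied row:
import pandas as pd

data = {'product_id': ['A1', 'A1', 'A1', 'A2', 'A2', 'A2'],
        'week_number': [1, 2, 3, 1, 2, 3],
        'sales': [1000, 2000, 3000, 8000, 4000, 2000]}
df = pd.DataFrame(data)

# idxmax gives the index label of the best-selling week for each product.
df['product_max'] = False
df.loc[df.groupby('product_id')['sales'].idxmax(), 'product_max'] = True
print(df)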
2
1
79,536,276
2025-3-26
https://stackoverflow.com/questions/79536276/numpy-element-by-element-subtract
I have two numpy 'arrays': 1st is a 2D array with shape (N, 2) 2nd is a 3D "matrix" with shape (N, M, 2) where M can be different from 0 to N-1 Code example import numpy as np a = np.array([[1, 2], [3, 4]]) # (N=2,2) shape b = np.array([np.array([[1, 2]]), np.array([[1, 2], [3, 4]])], dtype=object) # (N=2,(1,2), 2) shape diff = [_ - b[i] for i, _ in enumerate(a)] # <-- this is an expected behavior diff1 = a - b print(diff) # <-- [array([[0, 0]]), array([[2, 2], [0, 0]])] print(diff1) # <-- [[array([[ 0, -1]]) array([[ 1, 0], [-1, -2]])] # [array([[ 2, 1]]) array([[ 3, 2], [ 1, 0]])]] ??? Is it possible to perform element-by-element subtraction in numpy (without loop or list comprehension)? It looks like I need something like np.subtract(a, b, axis=0) but np.subtract doesn't understand axis argument. The answer of @Subir Chowdhury works fine. ~ real code with real M, N def std_dev_lc(flows: arrays): vectors: arrays = [flow.mean(axis=0) for flow in flows] std_dev = [np.linalg.norm(v - a, axis=1).mean() for v, a in zip(vectors, flows)] return np.array(vectors), np.array(std_dev) def std_dev_cs(flows: arrays): vectors: arrays = [flow.mean(axis=0) for flow in flows] flatten_flows = np.concatenate(flows, axis=0) vectors_rep = np.repeat(vectors, [len(flow) for flow in flows], axis=0) diff_flat = vectors_rep - flatten_flows norms_flat = np.linalg.norm(diff_flat, axis=1) norms = np.split(norms_flat, np.cumsum([len(flow) for flow in flows]))[:-1] return np.array(vectors), np.array([norm.mean() for norm in norms]) M, N = 30, 300 _flows: arrays = [randn(randint(1, M), 2).astype(np.float32) for _ in range(N)] print('1', timeit(lambda: std_dev_lc(_flows), number=500)) # <- 3.34 print('2', timeit(lambda: std_dev_cs(_flows), number=500)) # <- 2.54
Another solution is using np.concatenate and np.split: import numpy as np a = np.array([[1, 2], [3, 4]]) b_list = [np.array([[1, 2]]), np.array([[1, 2], [3, 4]])] # Flatten `b` into a single array b_concat = np.concatenate(b_list, axis=0) a_repeat = np.repeat(a, [len(b) for b in b_list], axis=0) diff_flat = a_repeat - b_concat # Split back into original structure split_indices = np.cumsum([len(b) for b in b_list])[:-1] # Restore original structure diff_final = np.split(diff_flat, split_indices) print(diff_final)
1
3
79,533,554
2025-3-25
https://stackoverflow.com/questions/79533554/how-to-programmatically-get-a-list-of-all-first-level-keys-stored-in-a-namespace
In the process of developing a custom Jinja2 extension (a tag, not a filter), I am finding myself in the situation where I need to extract a list of first-level keys that refer to the runtime values saved in a namespace. Individual first-level values with known keys are easy to extract from the namespace with a simple Getattr(Name('my_ns_name', 'load'), 'my_key', 'load') node. But how can I get the list of all first-level keys when not known in advance? I inspected Namespace objects as returned at runtime by a Name('my_ns_name', 'load') node and haven't found any relevant information there. I also feel that the NSRef node type could possibly be a key to the solution, but I still fail to understand its purpose, specially considering how little documented it is. I also thought of using a For node and iterate through some collection returned by one of the nodes.iter_* functions, but that doesn't seem to apply to namespace attributes. So I'm a bit stuck, now... Here's a condensed version of my attempts so far: import pprint from jinja2 import nodes from jinja2.ext import Extension class InspectExtension(Extension): tags = {"inspect"} def __init__(self, environment): super().__init__(environment) def parse(self, parser): lineno = parser.stream.expect("name:inspect").lineno expr = parser.parse_expression() context = nodes.DerivedContextReference() return nodes.Output([ nodes.Const(expr), # Inspect the AST nodes.TemplateData('\n'), # For a nicer output self.call_method("_render", [expr, context]) # Runtime inspection ]).set_lineno(lineno) def _render(self, value, context): result = { "class": value.__class__.__name__, "attrs": dir(value), "context": list(context.get_all().keys()) } return pprint.pformat(result, compact=False) inspect = InspectExtension With this template: <!DOCTYPE html> <html><body><pre> {% set ns = namespace() %} {% set ns.test = 'foobar' %} {{ ns.test }} {% inspect ns %} </pre></body></html> Outputs (rendered HTML): foobar Getattr(node=Name(name='ns', ctx='load'), attr='test', ctx='load') {'attrs': ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__'], 'class': 'Namespace', 'context': ['range', 'dict', 'lipsum', 'cycler', 'joiner', 'namespace', 'environ', 'get_context', 'request', 'result', 'ctx', 'ns'], 'value': } For completeness: The extension source code is located in ./j2_ext/inspect.py, along with the customary __init__.py which contains a single statement from .inspect import inspect. Jinja2 engine is called through jinja2-cli, passing the extension name to the --extension argument, as such: jinja2 --format=json --extension=.j2_ext.inspect /path/to/my_tpl.j2 < my_data.json Some relevant documentation: Official Jinja2 documentation: https://jinja.palletsprojects.com/en/stable/extensions/#extension-api Examples of Jinja2 extensions: https://github.com/topics/jinja2-extension?o=asc&s=stars
If you read the source code you'll find that the attributes of jinja2.utils.Namespace are stored in the __attrs attribute, a "private" variable prefixed with double underscores that is subject to name mangling, so you would have to access it with the _Namespace prefix instead. So to obtain a list of all first-level keys stored in the namespace value, evaluate: list(value._Namespace__attrs)
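As a rough sketch of how that fits into the inspect extension from the question (assuming value is the Namespace object passed to _render; the keys entry is an addition for illustration):
from jinja2.utils import Namespace

def _render(self, value, context):
    result = {"class": value.__class__.__name__}
    if isinstance(value, Namespace):
        # __attrs is name-mangled, so it is reached via the _Namespace prefix.
        result["keys"] = list(value._Namespace__attrs)
    return repr(result)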
4
1
79,534,957
2025-3-25
https://stackoverflow.com/questions/79534957/is-there-a-way-to-call-functions-in-functions-without-the-memory-inefficiencies
If I have a text-based game, and I transition through game states by calling a function corresponding to each game state like the following def go_to_lobby(gold_coins: int) -> None: """ The start of the adventure """ print_gold_amount(gold_coins) print("You are in the lobby of the dungeon. What do you do?") print("1. Examine the lobby.") print("2. Go to the throne hall.") print("3. Leave.") option = int(input()) if option==1: examine_lobby(gold_coins) elif option==2: go_to_throne_hall(gold_coins) else: leave(gold_coins) def examine_lobby(gold_coins: int) -> None: """ The user examines the lobby """ print_gold_amount(gold_coins) rob_amount = 10 print("A band of goblins rob " + str(rob_amount) + " gold from you.") gold_coins-=rob_amount go_to_lobby(gold_coins) def leave(gold_coins: int) -> None: """ The end of the adventure """ print_gold_amount(gold_coins) if gold_coins<0: go_to_kitchen(gold_coins) else: print("You leave the dungeon.") def go_to_throne_hall(gold_coins: int) -> None: """ The middle of the adventure """ print_gold_amount(gold_coins) print("You are in the throne hall. What do you do?") print("1. Examine the throne hall.") print("2. Go back to the lobby.") option = int(input()) if option==1: examine_throne_hall(gold_coins) else: go_to_lobby(gold_coins) def examine_throne_hall(gold_coins: int) -> None: """ The user examines the throne hall """ print_gold_amount(gold_coins) rob_amount = 40 print("You disturb the dungeon keeper who makes you pay " + str(rob_amount) + " gold.") gold_coins-=rob_amount go_to_throne_hall(gold_coins) def print_gold_amount(gold_coins: int) -> None: """ Prints to the user their current amount of gold """ print("You have " + str(gold_coins) + " gold.") def go_to_kitchen(gold_coins: int) -> None: print_gold_amount(gold_coins) print("You are in the kitchen. What do you do?") print("1. Wash the dishes to earn money.") print("2. Enter the dungeon and go back to the lobby.") option = int(input()) if option==1: print("You washed the dishes and earnt a coin!") go_to_kitchen(gold_coins) else: if gold_coins<0: go_to_lobby(gold_coins-10) else: go_to_lobby(gold_coins) go_to_lobby(50) It will be memory inefficient (I think) because each function has to remember where it was when I started the new one. However this memory will never be useful to me because I am never going to return. Is there a way to make this approach memory efficient in Python or is it inherently going to have the memory issue? (I'm aware of how I could fix this issue, but I am wondering if there's a solution that still allows me to operate by jumping between functions, Python has a good garbage collector system so I thought there might be some way to do it.)
Like @Joffan pointed out in the comment, you should convert your recursive program flow to an iterative one with an action loop where each action sets the next action to perform. Without rewriting your code, however, you can perform the conversion by applying to each action function a decorator that defers a call to the function to an action loop that will perform it in the next iteration. The loop can be implicitly kicked off by the first call to an action function, which can be identified by setting the initial next action as None. The loop ends when an action does not set the next action, when the next action after a call remains the same: class action: next = None def __init__(self, func): self.func = func def __call__(self, *args, **kwargs): loop = action.next is None action.next = lambda: self.func(*args, **kwargs) while loop: current_action = action.next current_action() if action.next is current_action: break Below is your exact same code with just the @action decorator applied (and with washing dishes in the kitchen earning you 40 coins instead so you can actually leave once in debt, for a bug pointed out by @trincot): @action def go_to_lobby(gold_coins: int) -> None: """ The start of the adventure """ print_gold_amount(gold_coins) print("You are in the lobby of the dungeon. What do you do?") print("1. Examine the lobby.") print("2. Go to the throne hall.") print("3. Leave.") option = int(input()) if option==1: examine_lobby(gold_coins) elif option==2: go_to_throne_hall(gold_coins) else: leave(gold_coins) @action def examine_lobby(gold_coins: int) -> None: """ The user examines the lobby """ print_gold_amount(gold_coins) rob_amount = 10 print("A band of goblins rob " + str(rob_amount) + " gold from you.") gold_coins-=rob_amount go_to_lobby(gold_coins) @action def leave(gold_coins: int) -> None: """ The end of the adventure """ print_gold_amount(gold_coins) if gold_coins<0: go_to_kitchen(gold_coins) else: print("You leave the dungeon.") @action def go_to_throne_hall(gold_coins: int) -> None: """ The middle of the adventure """ print_gold_amount(gold_coins) print("You are in the throne hall. What do you do?") print("1. Examine the throne hall.") print("2. Go back to the lobby.") option = int(input()) if option==1: examine_throne_hall(gold_coins) else: go_to_lobby(gold_coins) @action def examine_throne_hall(gold_coins: int) -> None: """ The user examines the throne hall """ print_gold_amount(gold_coins) rob_amount = 40 print("You disturb the dungeon keeper who makes you pay " + str(rob_amount) + " gold.") gold_coins-=rob_amount go_to_throne_hall(gold_coins) def print_gold_amount(gold_coins: int) -> None: """ Prints to the user their current amount of gold """ print("You have " + str(gold_coins) + " gold.") @action def go_to_kitchen(gold_coins: int) -> None: print_gold_amount(gold_coins) print("You are in the kitchen. What do you do?") print("1. Wash the dishes to earn money.") print("2. Enter the dungeon and go back to the lobby.") option = int(input()) if option==1: print("You washed the dishes and earnt a coin!") go_to_kitchen(gold_coins+40) else: if gold_coins<0: go_to_lobby(gold_coins-10) else: go_to_lobby(gold_coins) go_to_lobby(50) Demo: https://www.online-python.com/u2p5H8m9Ar
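The decorator above is essentially a trampoline; a stripped-down version of the same idea, independent of the game code, looks like this:
def trampoline(start, *args):
    """Run a chain of 'next action' thunks without growing the call stack."""
    next_action = lambda: start(*args)
    while next_action is not None:
        next_action = next_action()      # each action returns the next thunk, or None

def countdown(n):
    if n % 10000 == 0:
        print(n)
    if n == 0:
        return None                      # stop the loop
    return lambda: countdown(n - 1)      # defer the "recursive" call

trampoline(countdown, 100000)            # finishes without a RecursionError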
2
1
79,532,996
2025-3-25
https://stackoverflow.com/questions/79532996/scipy-minimize-throwing-bounds-error-when-constraint-is-added
I am trying to optimize a matrix given some bounds and constraints. When I run the minimize function with only the bounds everything works fine, but when adding the constraints I get the following error: Exception has occurred: IndexError SLSQP Error: the length of bounds is not compatible with that of x0. Below is a nonsense, but minimal, example of the code not working: from scipy.linalg import norm from scipy.optimize import minimize import numpy as np def objectiveFunction(guess, targetVector) -> float: guess.shape = (int(np.sqrt(guess.size)), int(np.sqrt(guess.size))) attempt = np.matmul(np.array(targetVector).transpose(), guess) assert(type(norm(attempt - targetVector)) == float) return norm(attempt - targetVector) def rowSumConstraint(guess) -> float: guess.shape = (int(np.sqrt(guess.size)), int(np.sqrt(guess.size))) guess = guess.sum(axis = 1) ones = np.ones(guess.size) assert(type(norm(guess - ones)) == float) return norm(guess - ones) size = 3 target = np.random.rand(size) initialGuess = np.random.rand(size, size).flatten() _bounds = tuple([(0,1) for _ in range(len(initialGuess))]) _constraints = [{'type': 'eq', 'fun': rowSumConstraint}] print(len(_bounds)) print(len(initialGuess)) res = minimize(fun = objectiveFunction, x0 = initialGuess, args = (target), bounds = _bounds, constraints = _constraints) The console output from the print statements is reasonably both 9. The code runs fine using only the bounds, but when the constraints are added, the bounds throw an error. I am at a loss, thank you in advance. Edit: This was solved by letting all inputs to minimize be lists, and then converting them to arrays as necessary in the objective and constraint functions. I have no idea why this works or why it would matter to minimize.
The minimize() function generally assumes that when it evaluates f(x), then x is not modified. If you modify it, you may get strange behavior. This is a problem, because this modifies the argument passed to it: guess.shape = (int(np.sqrt(guess.size)), int(np.sqrt(guess.size))) This is why removing the constraint helps - it evaluates the constraint while setting up the problem, which changes the value of x0. It's probably why converting the arrays to lists and back helps, because that introduces a copy. A simple way to fix this is to copy the array before modifying it. def objectiveFunction(guess, targetVector) -> float: guess = guess.copy() guess.shape = (int(np.sqrt(guess.size)), int(np.sqrt(guess.size))) attempt = np.matmul(np.array(targetVector).transpose(), guess) assert(type(norm(attempt - targetVector)) == float) return norm(attempt - targetVector) def rowSumConstraint(guess) -> float: guess = guess.copy() guess.shape = (int(np.sqrt(guess.size)), int(np.sqrt(guess.size))) guess = guess.sum(axis = 1) ones = np.ones(guess.size) assert(type(norm(guess - ones)) == float) return norm(guess - ones)
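An alternative sketch avoids the in-place shape assignment altogether by using reshape(), which returns a view and leaves the 1-D array that minimize() passes in untouched (the helper name as_square is made up here):
import numpy as np

def as_square(guess):
    n = int(np.sqrt(guess.size))
    return guess.reshape(n, n)           # view; guess.shape is not modified

def objectiveFunction(guess, targetVector) -> float:
    attempt = np.array(targetVector).T @ as_square(guess)
    return float(np.linalg.norm(attempt - targetVector))

def rowSumConstraint(guess) -> float:
    row_sums = as_square(guess).sum(axis=1)
    return float(np.linalg.norm(row_sums - np.ones(row_sums.size)))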
1
2
79,534,739
2025-3-25
https://stackoverflow.com/questions/79534739/is-there-a-way-to-individually-control-subplot-sizes-in-matplotlib
I need to combine imshow and regular line plots to create a figure. Unfortunately, the imshow plot area is smaller than the regular line plots, leading to something that looks like this: fig = plt.figure(figsize=(25, 9)) map_subplots = [ plt.subplot(2, 5, 1, projection=ccrs.PlateCarree()), plt.subplot(2, 5, 3, projection=ccrs.PlateCarree()), plt.subplot(2, 5, 6, projection=ccrs.PlateCarree()), plt.subplot(2, 5, 8, projection=ccrs.PlateCarree()) ] for i, ax in enumerate(map_subplots): ax.set_title(titles[i]) ax.coastlines(linewidth=0.5) ax.imshow(eofs[i], extent=[0, 360, -90, 90], transform=ccrs.PlateCarree(), cmap='RdBu_r', origin='lower', vmin=-0.01, vmax=0.01) ts_subplots = [ plt.subplot(2, 5, 2), plt.subplot(2, 5, 4), plt.subplot(2, 5, 7), plt.subplot(2, 5, 9) ] for i, ax in enumerate(ts_subplots): ax.plot(pcs.time, pcs.sel(pc=i)) ax.set_ylim(-100, 100) if True: # remove yticks ax.set_yticks([]) ax.set_xticks(pd.date_range("2013-01-01", "2016-01-01", freq='6MS')) ax.set_xticklabels(["2013", "Jul", "2014", "Jul", "2015", "Jul", "2016"]) I've got a hacky workaround with gridspec, that spreads the imshow plots over multiple subplots, so that it forces them to be bigger: # Create a figure fig = plt.figure(figsize=(15, 9)) # Define a gridspec with custom widths and heights for each subplot hr = 0.6 wr = 0.7 gs = gridspec.GridSpec(6, 5, width_ratios=[1, wr, 1, wr, 1], height_ratios=[hr, 1, hr, hr, 1, hr]) # Map subplots map_subplots = [ plt.subplot(gs[:3, 0], projection=ccrs.PlateCarree()), plt.subplot(gs[:3, 2], projection=ccrs.PlateCarree()), plt.subplot(gs[3:, 0], projection=ccrs.PlateCarree()), plt.subplot(gs[3:, 2], projection=ccrs.PlateCarree()) ] # Plot the map subplots for i, ax in enumerate(map_subplots): ax.set_title(titles[i]) ax.coastlines(linewidth=0.5) ax.imshow(eofs[i], extent=[0, 360, -90, 90], transform=ccrs.PlateCarree(), cmap='RdBu_r', origin='lower', vmin=-0.01, vmax=0.01) # Time series subplots ts_subplots = [ plt.subplot(gs[1, 1]), plt.subplot(gs[1, 3]), plt.subplot(gs[4, 1]), plt.subplot(gs[4, 3]) ] # Plot the time series subplots for i, ax in enumerate(ts_subplots): ax.plot(pcs.time, pcs.sel(pc=i)) ax.set_ylim(-100, 100) if True: # remove yticks ax.set_yticks([]) ax.set_xticks(pd.date_range("2013-01-01", "2016-01-01", freq='6MS')) ax.set_xticklabels(["2013", "Jul", "2014", "Jul", "2015", "Jul", "2016"]) # fig.text(0.04, 0.5, 'PC value (arb. units)', va='center', rotation='vertical', fontsize=12) plt.subplots_adjust(wspace=0.1, hspace=0) plt.tight_layout() plt.show() This is very close to working, but there's a huge gap between the two rows, which I can't get rid of, no matter how negative a subplots_adjust parameter I supply. Any ideas on how to fix this?
This is what compressed layout is meant to handle (https://matplotlib.org/stable/users/explain/axes/constrainedlayout_guide.html#grids-of-fixed-aspect-ratio-axes-compressed-layout) Note that there is still blank space, but it is above and below the plots instead of between them. One fix for that is to either adjust the size of the figure manually, or to use bbox_inches='tight' when you save the figure. import matplotlib.pyplot as plt import numpy as np fig, axs = plt.subplots(2, 2, layout='compressed') axs[0, 0].imshow(np.random.rand(100, 400)) axs[0, 1].plot(np.random.randn(500)) axs[1, 0].imshow(np.random.rand(100, 400)) axs[1, 1].plot(np.random.randn(500)) plt.show()
1
2
79,555,858
2025-4-4
https://stackoverflow.com/questions/79555858/how-to-programmatically-use-the-result-of-an-expression-as-argument-to-a-name-no
In the process of developing a custom Jinja2 extension that creates namespaces with dynamically evaluated names, I need to use the result of the evaluation of a template expression as argument to a Name node. The root of the issue is that the Name node's first argument must be of string type, while the result of parsing the template expression is an AST node. But the problem is, the AST results from static evaluation, while the expression requires runtime evaluation (because expressions can contain variables whose value is not known at parse time). Is there a way to bridge that gap? Follows a sample extension (not the actual, more complex extension that I'm working on) that captures the essence of what I'm trying to accomplish: {% set my_name = 'my_namespace' %} {% namespace my_name %} {# A namespace named 'my_namespace' should now exists #} Of course, I can easily parse the name expression and have the result of its evaluation displayed with an Output node. I can also create namespaces with predefined literal names: from jinja2 import nodes from jinja2.ext import Extension class NamespaceExtension(Extension): tags = {"namespace"} def __init__(self, environment): super().__init__(environment) def parse(self, parser): lineno = next(parser.stream).lineno eval_context = nodes.EvalContext(self.environment) name = parser.parse_expression() ## ## This obviously works as expected and outputs the result ## of evaluating `name` as an expression. ## return nodes.Output([name]).set_lineno(lineno) ## ## This below also works — as a proof of concept — but we ## need the namespace name to be evaluated dynamically. ## # return nodes.Assign( # nodes.Name('my_namespace', 'store'), # nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) # ).set_lineno(lineno) namespace = NamespaceExtension The following attempts, however, don't work (I actually only had hope in the first-to-last attempt, so not too surprised regarding most of the others, but here they are nonetheless, if only for the sake of completeness and demonstration). return nodes.Assign( name, nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # SyntaxError: cannot assign to function call Right, we need the expression's value. return nodes.Assign( name.as_const(), nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # RuntimeError: if no eval context is passed, the node must have an attached environment. Well, ok, easy enough. return nodes.Assign( name.as_const(eval_context), nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # jinja2.nodes.Impossible Might have better luck attaching the environment. name.set_environment(self.environment) return nodes.Assign( name.as_const(), nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # jinja2.nodes.Impossible Of course, expressions are parsed by default with a 'load' context and we need a Name node with a 'store' context. name.set_ctx('store') return nodes.Assign( name, nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # SyntaxError: cannot assign to function call How about wrapping the name expression in 'load' context inside of a Name node with a 'store' context? 
return nodes.Assign( nodes.Name(name, 'store'), nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # TypeError: '<' not supported between instances of 'Getattr' and 'str' The Name node's first argument must be a string, so then how about evaluating name as a constant? return nodes.Assign( nodes.Name(name.as_const(), 'store'), nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) # jinja2.nodes.Impossible Or perhaps this? Out of sheer despair... return nodes.Assign( nodes.Name(nodes.Const(name.as_const()), 'store'), nodes.Call(nodes.Name('namespace', 'load'), [], [], None, None) ).set_lineno(lineno) #TypeError: '<' not supported between instances of 'str' and 'Getattr'
You're trying to dynamically create a variable whose name is only known at runtime, but as you already pointed out, the name of an assignment target is determined during compilation, so simply tinkering with the AST isn't going to help. As a workaround you can make the symbol mapping table available at runtime by embedding it as a dict representation in the Python source code that Jinja2's compiler generates. With the mapping the runtime code can then translate the evaluated name to the corresponding compiled identifier to create the variable at runtime by adding the name to the current frame's f_locals attribute, which is now a write-through proxy dict since Python 3.13 with the implementation of PEP-667. But no existing node type is going to generate the code that does all that for you. You'll need to create a custom node type with its own visitor method for the compiler's code generator. Unfortunately, if you try creating a custom node type by subclassing an existing node type you'll get a TypeError: can't create custom node types exception because the constructor of the node type's metaclass, nodes.NodeType.__new__, is neutered with a dummy function after all built-in node types are created: # make sure nobody creates custom nodes def _failing_new(*args: t.Any, **kwargs: t.Any) -> "te.NoReturn": raise TypeError("can't create custom node types") NodeType.__new__ = staticmethod(_failing_new) # type: ignore del _failing_new So to work around it you can replace the dummy constructor with type.__new__, Python's default metaclass constructor, and make the new custom node type subclass an existing node type with an expression as a field, such as nodes.Assign, so the new node type can reuse the target field while having its own name and therefore its own vistor for the code generator. Unlike an assignment, your namespace keyword does not need a RHS node so you can simply pass None as the node field: from jinja2 import Environment, nodes, ext, compiler nodes.NodeType.__new__ = type.__new__ class NamespaceStmt(nodes.Assign): pass class NamespaceCodeGenerator(compiler.CodeGenerator): def visit_NamespaceStmt(self, node, frame): self.writeline('import sys') self.writeline(f'sys._getframe().f_locals[{frame.symbols.refs!r}[') self.visit(node.target, frame) self.write(']] = Namespace()') class NamespaceExtension(ext.Extension): tags = {"namespace"} def parse(self, parser): self.environment.code_generator_class = NamespaceCodeGenerator lineno = next(parser.stream).lineno return NamespaceStmt(parser.parse_expression(), None).set_lineno(lineno) so that: content = '''\ {% set my_name = 'my_namespace' -%} {% namespace my_name -%} {% set my_namespace.foo = 'bar' -%} {{ my_namespace.foo }} ''' print(Environment(extensions=[NamespaceExtension]).from_string(content).render()) outputs: bar EDIT: Note that only names that are explicitly assigned to would appear in the symbol mapping table, so in the example above if you remove the line {% set my_namespace.foo = 'bar' -%}, the compiler would not know ahead of time to add an identifier for the name my_namespace into the symbol table, causing the reference to my_namespace in {{ my_namespace.foo }} to produce an error. 
Since the identifier of a name is really implemented by adding a prefix to the given name where the prefix is fixed for the scope, you can "predict" the identifier of an undefined name by obtaining the prefix of a dummy variable, named _Namespace in the example below: from jinja2 import Environment, nodes, ext, compiler nodes.NodeType.__new__ = type.__new__ class NamespaceStmt(nodes.Assign): pass class NamespaceCodeGenerator(compiler.CodeGenerator): def visit_NamespaceStmt(self, node, frame): self.writeline('import sys') self.writeline(f'sys._getframe().f_locals[{frame.symbols.refs!r}') # get the identifier prefix by removing the trailing dummy name self.write('["_Namespace"][:-10] + ') self.visit(node.target, frame) self.write('] = Namespace()') class NamespaceExtension(ext.Extension): tags = {"namespace"} def parse(self, parser): self.environment.code_generator_class = NamespaceCodeGenerator lineno = next(parser.stream).lineno return [ nodes.Assign(nodes.Name('_Namespace', 'store'), nodes.Const(None)), NamespaceStmt(parser.parse_expression(), None).set_lineno(lineno) ] so that the following snippet will render nothing as expected while the original test case still works: content = '''\ {% set my_name = 'my_namespace' -%} {% namespace my_name -%} {{ my_namespace.foo }} ''' print(Environment(extensions=[NamespaceExtension]).from_string(content).render())
1
1
79,556,268
2025-4-4
https://stackoverflow.com/questions/79556268/mcp-python-sdk-how-to-authorise-a-client-with-bearer-header-with-sse
I am building an MCP server application to connect some services to an LLM. I use the MCP Python SDK https://github.com/modelcontextprotocol/python-sdk One of the things I want to implement is authorisation of a user with a token. I see it must be possible somehow. Most tutorials about MCP cover the STDIO kind of server; mine will be SSE. Here is my code: from mcp.server.fastmcp import FastMCP from fastapi import FastAPI, Request, Depends, HTTPException app = FastAPI() mcp = FastMCP("SMB Share Server") @mcp.tool() def create_folder(parent_path: str, name: str) -> str: """Create new subfolder in the specified path""" return f"Folder {name} created in {parent_path}" app.mount("/", mcp.sse_app()) How can I read the Authorization header when it is sent by the client? I tried to use FastAPI approaches - setting a dependency, adding request: Request to the arguments - but this doesn't work. Is there a way?
If someone is still looking for this. There is the solution. from mcp.server.fastmcp import FastMCP from fastapi import FastAPI, Request import subprocess import shlex # Global variable to keep a token a for a request auth_token = "" app = FastAPI() mcp = FastMCP("Server to manage a Linux instance") @app.middleware("http") async def auth_middleware(request: Request, call_next): auth_header = request.headers.get("Authorization") if auth_header: # extract token from the header and keep it in the global variable global auth_token auth_token = auth_header.split(" ")[1] response = await call_next(request) return response def require_auth(): """ Check access and raise an error if the token is not valid. """ if auth_token != "expected-token": raise ValueError("Invalid token") return None def run_cli(command: str, cwd: str = None) -> str: """ Execute a CLI command using subprocess.""" if cwd == "": cwd = None command_list = shlex.split(command) run_result = subprocess.run( command_list, cwd=cwd, capture_output=True, text=True, check=False, ) success = run_result.returncode == 0 return f"STDOUT: {run_result.stdout}\n\nSTDERR: {run_result.stderr}\nRETURNCODE: {run_result.returncode}\nSUCCESS: {success}" @mcp.tool() def cli_command(command: str, work_dir: str | None = "") -> str: """ Execute command line cli command on the Linux server. Arguments: command - command to execute. work_dir - workdir will be changed to this path before executing the command. """ require_auth() # we have to add this inside each tool method return run_cli(command, work_dir) app.mount("/", mcp.sse_app()) But as i understand there will be better ways to do this soon because developers of that python sdk have some ideas how to support this
2
2
79,559,057
2025-4-7
https://stackoverflow.com/questions/79559057/how-do-i-update-text-in-toga
I am trying to create a simple clock app, and for that I am using time. I want to update the time label. Imports: import toga from toga.style import Pack from toga.style.pack import COLUMN, ROW import time Code: self.time_label=toga.Label("00:00:00",style=Pack(padding=10,font_size=50)) I call the Update_time() function directly in the startup method: self.Update_time() self.time_label needs to change to the current time; the text is changed inside the Update_time() function: def Update_time(self): current_time= time.strftime('%I:%M:%S') self.time_label.text=current_time The problem is that it only displays the time once and does not keep updating to the current time. For example, if I run the program at 9:22:56 it displays 9:22:56, but when it becomes 9:22:57 or 9:23:00, it still shows 9:22:56.
If you only call the update method once, then of course it will only update once. To update it regularly, you'll need to run an async task. I think the startup method is called before the asyncio event loop is running, so instead you could override the on_running method, like this: async def on_running(self): while True: self.Update_time() await asyncio.sleep(1)
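A minimal sketch of the whole app following that suggestion (the exact on_running signature can differ between Toga versions, so treat this as an outline rather than the definitive API):
import asyncio
import time

import toga
from toga.style import Pack

class ClockApp(toga.App):
    def startup(self):
        self.time_label = toga.Label("00:00:00", style=Pack(padding=10, font_size=50))
        self.main_window = toga.MainWindow(title=self.formal_name)
        self.main_window.content = toga.Box(children=[self.time_label])
        self.main_window.show()

    def Update_time(self):
        self.time_label.text = time.strftime("%I:%M:%S")

    async def on_running(self):
        while True:                      # refresh once per second
            self.Update_time()
            await asyncio.sleep(1)

def main():
    return ClockApp("Clock", "org.example.clock")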
2
0
79,557,443
2025-4-5
https://stackoverflow.com/questions/79557443/is-it-possible-to-mix-ctypes-structure-with-regular-python-fields
Just going the straight way like this, doesn't seem to work in python 3. import ctypes class cA(ctypes.Structure): _fields_ = [("a", ctypes.c_ubyte)] def __init__(self): super().__init__() self.c = 3 z = cA.from_buffer_copy(b'0') print (z) print (z.c) print (dir(z)) Traceback (most recent call last): File "polymorph.py", line 12, in <module> print (z.c) AttributeError: 'cA' object has no attribute 'c' So ctypes.Structure factory classmethod from_buffer_copy() doesn't seem to use the default constructor __init__. Is this correct and intended by ctypes.Structure? Ok even if not trying to workaround this, by overwriting the class method from_buffer_copy raises another issue. Like adding to class cA: @classmethod def from_buffer_copy(cls,buffer, offset=0): #obj = super().from_buffer_copy(b'2',offset) obj = cls.from_buffer_copy(b'2',offset) obj.__init__() return obj Traceback (most recent call last): File "polymorph.py", line 17, in <module> z = cA.from_buffer_copy(b'0') File "polymorph.py", line 13, in from_buffer_copy obj = cls.from_buffer_copy(b'2',offset) File "polymorph.py", line 13, in from_buffer_copy obj = cls.from_buffer_copy(b'2',offset) File "polymorph.py", line 13, in from_buffer_copy obj = cls.from_buffer_copy(b'2',offset) [Previous line repeated 996 more times] RecursionError: maximum recursion depth exceeded This creates a RecursionError, because same class method is called again and again, but Structure.from_buffer_copy can't be called either, because it lacks _fields_. Using super() above doesn't help either. It raises an AttributeError: Traceback (most recent call last): File "polymorph.py", line 17, in <module> z = cA.from_buffer_copy(b'0') File "polymorph.py", line 12, in from_buffer_copy obj = super().from_buffer_copy(b'2',offset) AttributeError: 'super' object has no attribute 'from_buffer_copy' This seems to be similar to Using super with a class method Looks like a special Python 3 thing either. So ctypes.Structure seems to be a real special kind of class. As doing the same with regular Python class is no issue. Like: import ctypes class A: def __init__(self): self.a = "Tex" @classmethod def factory(cls): return cls() class B(A): def __init__(self): super().__init__() self.b = "bTex" z = B.factory() print (z) print (z.b) print (dir(z)) So ... Is there a way to mix ctypes.Structure with regular Python class fields or better just don't do it and better embed ctypes.Structure inside other regular classes for this just to work around this?
.from_buffer_copy() doesn't call __init__. But use a normal constructor and it works, although I don't recommend this since a ctypes.Structure is meant to exactly wrap the fields of a C structure and represent its memory layout. How would additional Python fields be represented? Also note that the default constructor of a ctypes.Structure takes arguments to initialize its fields, so they must be passed on to the superclass for this to work: import ctypes class cA(ctypes.Structure): _fields_ = [("a", ctypes.c_ubyte)] def __init__(self, *args): print(f'__init__({args=})') super().__init__(*args) self.c = 3 z = cA(123) print (z.a, z.c) Output: __init__(args=(123,)) 123 3 Here's a workaround if you want from_buffer_copy to work as well: import ctypes as ct import struct class Example(ct.Structure): _fields_ = (('a', ct.c_int), ('b', ct.c_int)) def __init__(self, *args): super().__init__(*args) self.c = 55 @classmethod def _from_buffer_copy(cls, buffer, offset=0): obj = Example._from_buffer_copy_original(buffer, offset) obj.__init__() return obj def __repr__(self): return f'Example(a={self.a}, b={self.b}, c={self.c})' # Patch from_buffer_copy to eliminate the recursion problem. Example._from_buffer_copy_original = Example.from_buffer_copy Example.from_buffer_copy = Example._from_buffer_copy ex1 = Example(11, 22) print(ex1) ex2 = Example.from_buffer_copy(struct.pack('ii', 22, 33)) print(ex2) Output: Example(a=11, b=22, c=55) Example(a=22, b=33, c=55) Suggested way to handle the problem instead: import ctypes as ct import struct # Strictly a C structure wrapper. class Example(ct.Structure): _fields_ = (('a', ct.c_int), ('b', ct.c_int)) def __repr__(self): return f'Example(a={self.a}, b={self.b})' # Class to contain structure and additional data. class Info: # Creates the structure from a buffer and initialize additional data def __init__(self, buffer, c, d): self.ex = Example.from_buffer_copy(buffer) self.c = c self.d = d # Debug representation def __repr__(self): return f'Info(buffer={bytes(self.ex)}, c={self.c}, d={self.d})' buffer = struct.pack('ii', 11, 22) info = Info(buffer, 33, 44) print(f'{info = }\n {info.ex = }') # An ideal debug representation can reproduce the original object when evaluated. info2 = eval(repr(info)) print(f'{info2 = }\n {info2.ex = }') Output: info = Info(buffer=b'\x0b\x00\x00\x00\x16\x00\x00\x00', c=33, d=44) info.ex = Example(a=11, b=22) info2 = Info(buffer=b'\x0b\x00\x00\x00\x16\x00\x00\x00', c=33, d=44) info2.ex = Example(a=11, b=22)
1
2
79,557,694
2025-4-6
https://stackoverflow.com/questions/79557694/saving-statsmodel-to-adls-blob-storage
I currently have a model fit using the statsmodels OLS formula API and I am trying to save this model to ADLS blob storage. '/mnt/outputs/' is a mount point I have created and I am able to read and write other files from this directory. import statsmodels.formula.api as smf fit = smf.ols(formula=f"Pressure ~ {cat_vars_int} + Speed + dose_time:Speed + Speed:log_curr_speed_time", data=df_train).fit() path = f'/mnt/outputs/Models/20240406_M2.pickle' fit.save(path) However I am getting this error when saving. I am trying to write a new file, not read an existing file, so I am not sure why I am getting this error. Any help would be great, thanks! FileNotFoundError: [Errno 2] No such file or directory: '/mnt/outputs/Models/20240406_M2.pickle'
By default the mount point lives under the DBFS context, so whenever you reference files without Spark you need to prefix the path with /dbfs. So, save the file with a path like below. path = f'/dbfs/mnt/outputs/Models/20240406_M2.pickle' fit.save(path) and whenever accessing it via the Spark context, use a path like below. spark.read.csv("dbfs:/path_to_file") Listing files with dbutils: display(dbutils.fs.ls(mount_point)) and with the Python os module: os.listdir("/dbfs/"+mount_point) Learn more about handling files in Databricks here.
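A self-contained sketch of the round trip, with toy data standing in for the question's df_train:
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data standing in for the real training frame.
df_train = pd.DataFrame({"Pressure": [1.0, 2.0, 3.0, 4.0],
                         "Speed": [10, 20, 30, 40]})
fit = smf.ols("Pressure ~ Speed", data=df_train).fit()

path = "/dbfs/mnt/outputs/Models/20240406_M2.pickle"   # note the /dbfs prefix
fit.save(path)

restored = sm.load(path)                               # reload to verify
print(restored.params)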
1
1
79,560,723
2025-4-7
https://stackoverflow.com/questions/79560723/update-formulas-in-excel-using-python
I am trying to update formulas in an existing excel which has data and tables but I am failing to update the formulas so that they would update the data for example: Okay since I get some answers but not exactly what I'm trying to achieve I will try to get more context here: I am trying to make a unique list from a table like so: =UNIQUE(Table1[Names]) And then I would use that data for a validation list. Meaning that I would create one DV for the names in unique formula. Let's say I place the formula in C1. A and B columns are Names and Age: dropdown_range = "'Data'!C1#" dv = DataValidation(type="list", formula1=dropdown_range, showDropDown=False) target_sheet.add_data_validation(dv) dv.add(target_sheet['D1']) Then I want to use the dropdown (DV) selection in another formula which would check the selection: =VSTACK("", UNIQUE(FILTER(Table1[Age], Table1[Names]=D1))) The logic would be this but the formulas are much more complex. The full code: from openpyxl import Workbook from openpyxl.worksheet.datavalidation import DataValidation from openpyxl.worksheet.table import Table, TableStyleInfo wb = Workbook() ws = wb.active data = [ ["Names", "Age"], ["Alice", 30], ["Bob", 25], ["Charlie", 35], ["Alice", 30], ["David", 40], ["Bob", 25], ] for row in data: ws.append(row) table_range = f"A1:B{len(data)}" table = Table(displayName="Table1", ref=table_range) style = TableStyleInfo(name="TableStyleMedium9", showFirstColumn=True, showRowStripes=True, showColumnStripes=True) table.tableStyleInfo = style ws.add_table(table) ws["C1"] = "=UNIQUE(Table1[Names])" dropdown_range = "'Sheet'!C1#" dv = DataValidation(type="list", formula1=dropdown_range, showDropDown=True) ws.add_data_validation(dv) dv.add(ws["D1"]) ws["E1"] = "=VSTACK(\"\", UNIQUE(FILTER(Table1[Age], Table1[Names]=D1)))" file_path = 'example_with_dropdown.xlsx' wb.save(file_path) Is there a way to get this done with some other module rather than openpyxl? After openpyxl does the majority of the work with the excel? OKAY.. Since this took too much time I just created a mapping of all possible dependent values for each selection.
To add the UNIQUE function you need add it as a Array Formula. You could enter the formula to a single cell but if the unique values are more than one cell it will only show the first unique value. An Array allows every value to be assigned a cell and the range should be no bigger than the 'Names' range of the Table of course. I have written the following code to create a workbook adding the UNIQUE formula as an example. The workbook adds its own Sheets, test data and is saved with the name 'Example.xlsx' The code will insert an Excel Table, 'Table1' using your Header name and then applies the UNIQUE formula as you have written it against the Table in Column E under the Header 'Unique'. I have also included the DV you are attempting to add. I have moved it to Column F, Cell F2 in this example since the table overwrites 'B3' If the formula/function is unknown as seems to be the case with 'UNIQUE', prefix with _xlfn, see formula variable in code. import openpyxl from openpyxl.styles import Alignment from openpyxl.worksheet.datavalidation import DataValidation from openpyxl.worksheet.formula import ArrayFormula from openpyxl.worksheet.table import Table # Workbook and Worksheets wb = openpyxl.Workbook() target_sheet = wb.active wb.create_sheet('Data') ws = wb.worksheets[1] # Add some data to the Sheets # 'Data' Sheet for a in range(1, 11): ws[f"A{a}"].value = a # Table data on target_sheet, ('Sheet') rows = [ ('Names', 'ColumnA', 'ColumnB'), ('Tom', 0.51, 'Value1'), ('Fred', 0.26, 'Value2'), ('Tom', 0.07, 'Value1'), ('Vicky', 0.07, 'Value3'), ] for r in rows: target_sheet.append(r) # Create Table tbl = Table(displayName="Table1", ref="A1:C5") target_sheet.add_table(tbl) # Add the Array Formula formula = "=_xlfn.UNIQUE(Table1[Names])" target_sheet["E2"].value = ArrayFormula('E2:E5', formula) # Data Validation dropdown_range = "'Data'!A1:A10" dv = DataValidation(type="list", formula1=dropdown_range, showDropDown=False) target_sheet.add_data_validation(dv) dv.add(target_sheet['F2']) # Add DV to cell 'F2' # Some cell formatting for x in ['A', 'B', 'C']: target_sheet.column_dimensions[x].width = 12 target_sheet['E1'].value = 'Unique' target_sheet['E1'].alignment = Alignment(horizontal='center') target_sheet['F1'].value = 'DV' target_sheet['F1'].alignment = Alignment(horizontal='center') # Save workbook wb.save('Example.xlsx') Example target_sheet Showing the formula, this is the same for each cell E2 - E5 Showing the DV dropdown. Data shown is from the 2nd Sheet 'Data' cells A1 - A10 EDIT The stated requirements in your updated Post is covered in the below examples. Example 2 Below I have updated the example; As noted in my comment on your Post, Openpyxl does not evaluate formulas so the UNIQUE formula will not be filled until the workbook is opened in Excel. The Array formula range is likely to be bigger than it needs to be and thus contains '#N/A' values for the unused cells, which then get inserted into the DV since it also has to cover the same range as the Array given you cannot know at the time what cells contain #N/A . 
Example code 2 import openpyxl from openpyxl.styles import Alignment, Font from openpyxl.worksheet.datavalidation import DataValidation from openpyxl.worksheet.formula import ArrayFormula from openpyxl.worksheet.table import Table, TableStyleInfo wb = openpyxl.Workbook() ws = wb.active data = [ ["Names", "Age"], ["Alice", 30], ["Bob", 25], ["Charlie", 35], ["Alice", 30], ["David", 40], ["Bob", 25], ] for row in data: ws.append(row) table_range = f"A1:B{len(data)}" table = Table(displayName="Table1", ref=table_range) style = TableStyleInfo(name="TableStyleMedium9", showFirstColumn=True, showRowStripes=True, showColumnStripes=True) table.tableStyleInfo = style ws.add_table(table) # Add the Array Formula formula = "=_xlfn.UNIQUE(Table1[Names])" ws["E2"].value = ArrayFormula('E2:E7', formula) dropdown_range = "'Sheet'!E2:E7" dv = DataValidation(type="list", formula1=dropdown_range, showDropDown=False) ws.add_data_validation(dv) dv.add(ws['F2']) # Some cell formatting for x in ['A', 'B']: ws.column_dimensions[x].width = 12 ws['E1'].value = 'Unique' ws['E1'].font = Font(bold=True) ws['E1'].alignment = Alignment(horizontal='center') ws['F1'].value = 'DV' ws['F1'].font = Font(bold=True) ws['F1'].alignment = Alignment(horizontal='center') file_path = 'example_with_dropdown.xlsx' wb.save(file_path) Updated example Sheet 2 Example 3 If you want to go the extra mile and use another module to clean up your DV a bit then here is the example using Xlwings to check the unique data range and setting the DV to the range of actual names only. Since I am using Xlwings to open the workbook in Excel (to update the Unique List I have used the same to insert the DV. However, if you still wanted to use Openpyxl to add the DV/other detail you would need to open the workbook in Excel and save it then re-open (load_workbook) the saved workbook using Openpyxl. It should not be necessary to re-open with data_only=True as only the first cell 'E2' should have the formula. The rest of the cells, in this case 'E3:E7' should have the name or '#N/A' as it's value. You can then get the max used row in the Unique list as done in this code and then use Openpyxl to update the DV entry. 
Example code 3 import openpyxl import xlwings as xw from openpyxl.styles import Alignment, Font from openpyxl.worksheet.formula import ArrayFormula from openpyxl.worksheet.table import Table, TableStyleInfo from xlwings import constants file_path = 'example_with_dropdown.xlsx' wb = openpyxl.Workbook() ws = wb.active data = [ ["Names", "Age"], ["Alice", 30], ["Bob", 25], ["Charlie", 35], ["Alice", 30], ["David", 40], ["Bob", 25], ] for row in data: ws.append(row) table_range = f"A1:B{len(data)}" table = Table(displayName="Table1", ref=table_range) style = TableStyleInfo(name="TableStyleMedium9", showFirstColumn=True, showRowStripes=True, showColumnStripes=True) table.tableStyleInfo = style ws.add_table(table) # Add the Array Formula formula = "=_xlfn.UNIQUE(Table1[Names])" formula_rng = 'E2:E7' ws["E2"].value = ArrayFormula(formula_rng, formula) # Some cell formatting for x in ['A', 'B']: ws.column_dimensions[x].width = 12 ws['E1'].value = 'Unique' ws['E1'].font = Font(bold=True) ws['E1'].alignment = Alignment(horizontal='center') ws['F1'].value = 'DV' ws['F1'].font = Font(bold=True) ws['F1'].alignment = Alignment(horizontal='center') wb.save(file_path) # Open workbook in Xlwings to Update UNIQUE formula cells to check which cells have actual names with xw.App(visible=False) as app: wb2 = xw.Book(file_path) ws2 = wb2.sheets['Sheet'] # Get the last row in the Unique list with a valid name. max_used_row = max([x.row for x in ws2.range(formula_rng) if x.value is not None]) # Set the range of the DV to cover start to last row determined above dv_range = f'=E2:E{max_used_row}' # Add the DV to the Sheet ws2.range('F2').api.Validation.Add( Type=constants.DVType.xlValidateList, AlertStyle=constants.DVAlertStyle.xlValidAlertStop, Operator=constants.FormatConditionOperator.xlBetween, # Formula1="=$E$2:$E$5" Formula1 = dv_range ) ws2.range('F2').IgnoreBlank = True ws2.range('F2').api.Validation.InCellDropdown = True ws2.range('F2').api.Validation.InputTitle = "" ws2.range('F2').api.Validation.ErrorTitle = "" ws2.range('F2').api.Validation.InputMessage = "" ws2.range('F2').api.Validation.ErrorMessage = "" ws2.range('F2').api.Validation.ShowInput = True ws2.range('F2').api.Validation.ShowError = True # Save workbook with updated DV wb2.save(file_path) Updated example Sheet 2
1
3
79,550,092
2025-4-2
https://stackoverflow.com/questions/79550092/time-limit-exceeded-on-leetcode-128-even-for-optimal-time-complexity
I was attempting LeetCode question 128. Longest Consecutive Sequence: Given an unsorted array of integers nums, return the length of the longest consecutive elements sequence. You must write an algorithm that runs in O(n) time. Example 1: Input: nums = [100,4,200,1,3,2] Output: 4 Explanation: The longest consecutive elements sequence is [1, 2, 3, 4]. Therefore its length is 4. Constraints: 0 <= nums.length <= 105 -109 <= nums[i] <= 109 My first attempt did sorting followed by counting. This has O(nlogn) time complexity, but it surprisingly gave me 93.93% percentile for time complexity (40ms). I then re-read the question and realised the answer must be in O(n) time complexity. So I wrote the following code: def longestConsecutive(self, nums: List[int]) -> int: s = set(nums) longest_streak = 0 for num in nums: if (num - 1) not in s: current_streak = 1 while (num + 1) in s: num += 1 current_streak += 1 longest_streak = max(longest_streak, current_streak) return longest_streak (I know, it's not a great practice to reuse num variable name in the nested loop, but that's beside the point of the question, I have tested using a separate variable as well with the same result as below) While this should theoretically be O(n) time complexity, faster than my first solution, this actually resulted in time limit exceeded for a couple of the cases and my code was rejected. I eventually submitted a passing solution after referring to the solution class Solution: def longestConsecutive(self, nums: List[int]) -> int: nums = set(nums) longest_streak = 0 for num in nums: if (num - 1) not in nums: next_num = num + 1 while next_num in nums: next_num += 1 longest_streak = max(longest_streak, next_num - num) return longest_streak where I identified 2 key differences: I reassigned nums to a set in-place instead of a new variable I used next_num instead of keeping a current_streak variable However, both of these changes does not seem like it should have significant impact on the runtime, enough to cross the line between time-limit exceeded into a passing solution. To puzzle me even more, this O(n) solution still performed worse than my sorting solution, ranking only at 75.73% percentile (46 ms). So my questions are: Why does a O(nlogn) algorithm perform faster than O(n) in practice? Why is my first O(n) algorithm so slow that it reached time-limit exceeded while my second algorithm with minimal changes could pass?
Why does a O(nlogn) algorithm perform faster than O(n) in practice? Time complexity does not say anything about actual running times for concrete input sizes. It only says something about how running times will evolve asymptotically as the input size grows large. In general (unrelated to the actual code you presented), we can imagine a O(𝑛) algorithm that needs 1000 + 10𝑛 milliseconds to complete, and a O(𝑛log𝑛) algorithm that needs 1 + 𝑛log10𝑛 milliseconds to complete. And now it becomes clear how the O(𝑛log𝑛) algorithm will beat the O(𝑛) one for lots of realistic input sizes. See how many milliseconds they would need for concrete values of 𝑛: 𝑛 O(𝑛) O(𝑛log𝑛) 1 1 010 1 10 1 100 11 100 2 000 201 1000 11 000 3 001 10000 101 000 40 001 100000 1 001 000 500 001 1000000 10 001 000 6 000 001 10000000 100 001 000 70 000 001 100000000 1 000 001 000 800 000 001 1000000000 10 000 001 000 9 000 000 001 10000000000 100 000 001 000 100 000 000 001 100000000000 1 000 000 001 000 1 100 000 000 001 By definition, there is always an 𝑛 above which the better time complexity will win out, but as you can see, it can be a very high threshold for 𝑛: in the table above only the last row shows a win for the theoretically more efficient algorithm. Why is my first O(n) algorithm so slow that it reached time-limit exceeded while my second algorithm with minimal changes could pass? It is because of the outer loop. In the slow version, it iterates over the input list, while in the faster version, it iterates over the set. This means that if the input is large and has lots of duplicate values, the slow version is repeating work that does not bring any benefit. Your initial version only needs to replace that in order to finish the job within the given time limit: Change: for num in nums: to: for num in s:
5
7
79,560,599
2025-4-7
https://stackoverflow.com/questions/79560599/algorithm-for-detecting-full-loop-when-iterating-over-a-list
Assignment: Write a function cycle_sublist(lst, start, step) where:

- lst is a list
- start is a number that satisfies: 0 <= start < len(lst)
- step is the amount we increase the index by each iteration

without using: slicing, importing, list comprehension, built-in functions like map and filter.

The function works in this way: we start to iterate over the list of items at start and stop when we get back to start or cross it again. So for example:

    cycle_sublist([1], 0, 2)                   -> [1]
    cycle_sublist([6, 5, 4, 3], 0, 2)          -> [6, 4]
    cycle_sublist([7, 6, 5, 4, 3], 3, 1)       -> [4, 3, 7, 6, 5]
    cycle_sublist([4, 3, 2, 5, 1, 6, 9], 2, 2) -> [2, 1, 9, 3]
    cycle_sublist([4, 3, 2, 5, 1, 6, 9], 5, 3) -> [6, 3, 1]

My problem is detecting when I have completed a cycle. I tried to:

- Check my previous step and current step against start. The problem is there are some cases where it fails.
- Count my steps and check if I had crossed the start.

Neither of those worked. Here is my code - with the missing logic for detecting the cycle:

    def cycle_sublist(lst, start, step):
        index = start
        length = len(lst)
        cycle_complete = False
        res = []
        while True:
            index = index % length if index >= length else index
            if ...:
                cycle_complete = True
            if cycle_complete and index >= start:
                break
            res.append(lst[index])
            index += step
        return res

If you can, I'd like to ask you to answer with the algorithm to detect the cycle only, so I can write the code myself.
This is my try based on your try. I got something like this:

    def cycle_sublist(lst, start, step):
        index = start
        length = len(lst)
        res = []
        while index < length + start:
            res.append(lst[index % length])
            index += step
        return res

    print(cycle_sublist([1], 0, 2))                    # [1]
    print(cycle_sublist([6, 5, 4, 3], 0, 2))           # [6, 4]
    print(cycle_sublist([7, 6, 5, 4, 3], 3, 1))        # [4, 3, 7, 6, 5]
    print(cycle_sublist([4, 3, 2, 5, 1, 6, 9], 2, 2))  # [2, 1, 9, 3]
    print(cycle_sublist([4, 3, 2, 5, 1, 6, 9], 5, 3))  # [6, 3, 1]

Basically, there is not much of a difference between your code and mine. You were trying to keep index in the valid range, which made it more difficult to detect when the cycle was over. The change I introduced is that I let index grow by step in each iteration without trying to keep it in the valid range, since I use the remainder when indexing the list. I guess you know, but the remainder goes up to length - 1, so it will always be in the valid range. Additionally, I removed some unused variables.

A little addition, not sure if it's helpful. If you want to cycle multiple times under conditions you set, you can change the while condition to while index < n * length + start:, where n is how many times you want to cycle through the list. Also, if you want to include the starting element when the final iteration lands on it, change < to <= in the while condition.
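For illustration, a sketch of the multi-cycle variant mentioned above; the name cycle_sublist_n and the extra n parameter are my own additions, not part of the original assignment:

    def cycle_sublist_n(lst, start, step, n):
        # Same idea as above, but allow the index to travel n full lengths
        # of the list before stopping.
        index = start
        length = len(lst)
        res = []
        while index < n * length + start:
            res.append(lst[index % length])
            index += step
        return res

    print(cycle_sublist_n([6, 5, 4, 3], 0, 2, 2))  # [6, 4, 6, 4]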
5
5
79,558,410
2025-4-6
https://stackoverflow.com/questions/79558410/how-to-prevent-ruff-from-formatting-arguments-of-a-function-into-separate-lines
I have a function like so:

    def get_foo(a: object, b: tuple, c: int,) -> dict:
        .....

When I do $ ruff format myfile.py, my function is changed to

    def get_foo(
        a: object,
        b: tuple,
        c: int,
    ) -> dict:
        ....

How do I stop this behaviour?

Update: @STerliakov I implemented your solution but received this warning.

    $ ruff format test.py
    warning: The isort option `isort.split-on-trailing-comma` is incompatible with the formatter `format.skip-magic-trailing-comma=true` option. To avoid unexpected behavior, we recommend either setting `isort.split-on-trailing-comma=false` or `format.skip-magic-trailing-comma=false`.
    1 file reformatted

I can't locate isort.split-on-trailing-comma to set it to false to avoid the conflict. How do I fix this issue?
That happens due to the trailing comma in your arguments list. This behavior is intentional. You can disable it globally using skip-magic-trailing-comma. E.g. in pyproject.toml that would be

    [tool.ruff.format]
    skip-magic-trailing-comma = true

Enabling this setting is incompatible with the import sorter's default configuration. To ignore trailing commas in your imports as well, set isort.split-on-trailing-comma to false:

    [tool.ruff.lint.isort]
    split-on-trailing-comma = false
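Putting the two settings together, the relevant part of pyproject.toml would look roughly like this (a sketch; merge it into whatever [tool.ruff] configuration you already have):

    [tool.ruff.format]
    skip-magic-trailing-comma = true

    [tool.ruff.lint.isort]
    split-on-trailing-comma = false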
1
3
79,555,255
2025-4-4
https://stackoverflow.com/questions/79555255/numpy-strange-behaviour-of-setitem-of-array
Say we have an array:

    a = np.array([
        [11, 12, 13],
        [21, 22, 23],
        [31, 32, 33],
        [41, 42, 43]
    ])

    a[[1, 3], [0, 2]] = 0

So we want to set the 0th and 2nd elements to zero in both the 1st and 3rd rows. But what we get is:

    [[11 12 13]
     [ 0 22 23]
     [31 32 33]
     [41 42  0]]

Why not:

    [[11 12 13]
     [ 0 22  0]
     [31 32 33]
     [ 0 42  0]]

?
In

    import numpy as np

    a = np.array([
        [11, 12, 13],
        [21, 22, 23],
        [31, 32, 33],
        [41, 42, 43]
    ])

    a[[1, 3], [0, 2]] = 0
    a

the last statement is equivalent to (edited)

    a[1, 0] = 0
    a[3, 2] = 0

It gives the following:

    a = np.array([
        [11, 12, 13],
        [21, 22, 23],
        [31, 32, 33],
        [41, 42, 43]
    ])

    a[1, 0] = 0
    a[3, 2] = 0
    a
    # array([[11, 12, 13],
    #        [ 0, 22, 23],
    #        [31, 32, 33],
    #        [41, 42,  0]])

In numpy's documentation, this is referenced as "advanced indexing":

    Advanced indexing is triggered when the selection object, obj, is a non-tuple sequence object, an ndarray (of data type integer or bool), or a tuple with at least one sequence object or ndarray (of data type integer or bool).

In your case, your selection object is a tuple with at least one sequence object.
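If the goal is the "cross product" result shown in the question (zeroing every combination of the listed rows and columns), one option I believe works is np.ix_, which turns the two index lists into open grids that broadcast against each other:

    import numpy as np

    a = np.array([
        [11, 12, 13],
        [21, 22, 23],
        [31, 32, 33],
        [41, 42, 43]
    ])

    # np.ix_ pairs every listed row with every listed column.
    a[np.ix_([1, 3], [0, 2])] = 0
    print(a)
    # [[11 12 13]
    #  [ 0 22  0]
    #  [31 32 33]
    #  [ 0 42  0]]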
2
0
79,560,710
2025-4-7
https://stackoverflow.com/questions/79560710/getting-an-error-attributeerror-module-tensorflow-python-distribute-input-lib
After running the code train(train_data, EPOCHS) I'm getting the error "AttributeError: module 'tensorflow.python.distribute.input_lib' has no attribute 'DistributedDatasetInterface'", and after tracking down the error, it points at line 15 --> of

    def train(data, EPOCHS):
        # Loop through epochs
        for epoch in range(1, EPOCHS+1):
            print('\n Epoch {}/{}'.format(epoch, EPOCHS))
            progbar = tf.keras.utils.Progbar(len(data))

            # Creating a metric object
            r = Recall()
            p = Precision()

            # Loop through each batch
            for idx, batch in enumerate(data):
                # Run train step here
                loss = train_step(batch)
                yhat = siamese_model.predict(batch[:2])  # <--- line 15
                r.update_state(batch[2], yhat)
                p.update_state(batch[2], yhat)
                progbar.update(idx+1)
            print(loss.numpy(), r.result().numpy(), p.result().numpy())

            # Save checkpoints
            if epoch % 10 == 0:
                checkpoint.save(file_prefix=checkpoint_prefix)
Your error happens because there's a mismatch between how you're trying to make the prediction and what your dataset type allows. TensorFlow has changed some of its APIs across versions; please refer to the guide Distributed training with TensorFlow, which will help you gain some knowledge about how distributed training is actually performed.

Try this, maybe it can solve your problem.

Try a direct model call: the simplest fix is to just call your model directly instead of using the predict method.

    # Change this line
    yhat = siamese_model.predict(batch[:2])

    # To this
    yhat = siamese_model(batch[:2], training=False)

This works because it bypasses the predict() machinery that's causing the error. Also refer to this GitHub issue as well: issue 61900.
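For context, this is roughly how the changed line sits inside the training loop from the question (only the prediction line differs from the original):

    # Loop through each batch
    for idx, batch in enumerate(data):
        loss = train_step(batch)
        # Calling the model directly instead of predict() bypasses the
        # predict machinery that raises the AttributeError.
        yhat = siamese_model(batch[:2], training=False)
        r.update_state(batch[2], yhat)
        p.update_state(batch[2], yhat)
        progbar.update(idx + 1)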
1
1
79,560,135
2025-4-7
https://stackoverflow.com/questions/79560135/creating-new-rows-in-a-dataframe-based-on-previous-values
I have a dataframe that looks like this:

    test = pd.DataFrame(
        {'onset': [1,3,18,33,35,50],
         'duration': [2,15,15,2,15,15],
         'type': ['Instr', 'Remember', 'SocTestString', 'Rating', 'SelfTestString', 'XXX']
        }
    )

I want to create a new dataframe such that when type contains "TestString", two new rows are created below that row, so that the row is now split into three rows with (for example) SocTestString_1, SocTestString_2, SocTestString_3 for those three rows,

- change the duration column to the value 5 for those three rows,
- also change the onset column such that it is the onset value of the previous row + 5

The final dataframe should look like this:

    test_final = pd.DataFrame(
        {'onset': [1,3,18,23,28,33,35,40,45,50],
         'duration': [2,15,5,5,5,2,5,5,5,15],
         'type': ['Instr', 'Remember', 'SocTestString_1', 'SocTestString_2', 'SocTestString_3', 'Rating',
                  'SelfTestString_1', 'SelfTestString_2', 'SelfTestString_3', 'XXX']
        })

How may I accomplish this?
You could use str.contains to identify the target rows, then Index.repeat to duplicate them, finally boolean indexing and groupby.cumcount to update the new rows:

    N = 3  # number of rows to create

    # identify target rows
    m = test['type'].str.contains('TestString')

    # repeat them
    out = test.loc[test.index.repeat(m.mul(N-1).add(1))]

    # divide duration
    out.loc[m, 'duration'] /= N

    # compute the cumcount
    cc = out.loc[m].groupby(level=0).cumcount()

    # increment the onset
    out.loc[m, 'onset'] += cc*5

    # add the suffix
    out.loc[m, 'type'] += '_'+cc.add(1).astype(str)

    # optionally, reset the index
    out.reset_index(drop=True, inplace=True)

NB. this assumes that the original index does not have duplicated indices.

Output:

       onset  duration              type
    0      1         2             Instr
    1      3        15          Remember
    2     18         5   SocTestString_1
    3     23         5   SocTestString_2
    4     28         5   SocTestString_3
    5     33         2            Rating
    6     35         5  SelfTestString_1
    7     40         5  SelfTestString_2
    8     45         5  SelfTestString_3
    9     50        15               XXX

Intermediates (without updating the original columns and resetting the index):

       onset  duration            type      m    cc  cc*5 _{cc+1}
    0      1         2           Instr  False  <NA>  <NA>     NaN
    1      3        15        Remember  False  <NA>  <NA>     NaN
    2     18        15   SocTestString   True     0     0      _1
    2     18        15   SocTestString   True     1     5      _2
    2     18        15   SocTestString   True     2    10      _3
    3     33         2          Rating  False  <NA>  <NA>     NaN
    4     35        15  SelfTestString   True     0     0      _1
    4     35        15  SelfTestString   True     1     5      _2
    4     35        15  SelfTestString   True     2    10      _3
    5     50        15             XXX  False  <NA>  <NA>     NaN
2
1
79,554,246
2025-4-4
https://stackoverflow.com/questions/79554246/takeprofit-indie-displaying-labels-in-front-of-candlesticks
I'm working with the Indie library to create a candlestick pattern indicator, and I'm facing an issue with how labels are displayed on the chart. As you can see in the attached screenshot, the candlesticks are currently rendered on top of my pattern labels. What I need help with: Label Order: Is there a way to make the labels appear in front of the candlesticks instead of behind them? Currently, the candlesticks are obscuring the labels, making some of them difficult to see. Label Transparency: Is it possible to make these labels semi-transparent so that even when they're in front of candlesticks, the underlying price action is still somewhat visible? Here's the relevant part of my code for the label markers: @plot.marker(color=color.GREEN, text='BE', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Bullish Engulfing @plot.marker(color=color.TEAL, text='MORN', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Morning Star I've searched through the TakeProfit/Indie documentation but couldn't find options for: Controlling the z-index/order of elements on the chart Setting transparency for markers/labels Any alternative ways to highlight patterns that would solve this visibility issue Full code: # indie:lang_version = 5 from indie import indicator, param, plot, color, MainContext from indie.algorithms import Ema, Atr import math # Define helper functions at global scope def is_downtrend(fast_ema: float, slow_ema: float) -> bool: return fast_ema < slow_ema def is_uptrend(fast_ema: float, slow_ema: float) -> bool: return fast_ema > slow_ema def is_significant_body(body: float, atr_value: float, minimum_body_atr: float) -> bool: return body > minimum_body_atr * atr_value def is_doji(ctx: MainContext, i: int, atr_value: float, doji_ratio: float) -> bool: high, low = ctx.high[i], ctx.low[i] open_, close = ctx.open[i], ctx.close[i] body = abs(close - open_) range_ = high - low # Check if range is significant compared to market volatility is_significant = range_ > 0.3 * atr_value # Traditional doji has very small body relative to range return is_significant and body <= doji_ratio * range_ def is_small_body(ctx: MainContext, i: int) -> bool: open_, close = ctx.open[i], ctx.close[i] high, low = ctx.high[i], ctx.low[i] body = abs(close - open_) range_ = high - low return body < 0.3 * range_ def is_bullish_engulfing(ctx: MainContext, atr_value: float, minimum_body_atr: float, volume_ratio: float) -> bool: body_prev = abs(ctx.close[1] - ctx.open[1]) body_curr = abs(ctx.close[0] - ctx.open[0]) # Check for significant body size is_significant = is_significant_body(body_curr, atr_value, minimum_body_atr) return ( ctx.close[1] < ctx.open[1] and # Previous candle is bearish ctx.close[0] > ctx.open[0] and # Current candle is bullish ctx.open[0] <= ctx.close[1] and ctx.close[0] >= ctx.open[1] and # Body engulfing is_significant and ctx.volume[0] > ctx.volume[1] * volume_ratio # Volume confirmation ) def is_bearish_engulfing(ctx: MainContext, atr_value: float, minimum_body_atr: float, volume_ratio: float) -> bool: body_prev = abs(ctx.close[1] - ctx.open[1]) body_curr = abs(ctx.close[0] - ctx.open[0]) # Check for significant body size is_significant = is_significant_body(body_curr, atr_value, minimum_body_atr) return ( ctx.close[1] > ctx.open[1] and # Previous candle is bullish ctx.close[0] < ctx.open[0] and # Current candle is bearish ctx.open[0] >= ctx.close[1] and ctx.close[0] <= ctx.open[1] and # Body engulfing is_significant and ctx.volume[0] > ctx.volume[1] * 
volume_ratio # Volume confirmation ) def is_morning_star(ctx: MainContext, atr_value: float, minimum_body_atr: float, gap_atr_ratio: float) -> bool: open1, close1 = ctx.open[2], ctx.close[2] open2, close2 = ctx.open[1], ctx.close[1] open3, close3 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body3 = abs(close3 - open3) # First candle should be bearish and significant is_bearish1 = close1 < open1 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) # Second candle should have a small body is_small_body2 = is_small_body(ctx, 1) # Third candle should be bullish and significant is_bullish3 = close3 > open3 is_significant3 = is_significant_body(body3, atr_value, minimum_body_atr) # Check for gaps or near-gaps using ATR gap_down = min(open1, close1) - max(open2, close2) > gap_atr_ratio * atr_value gap_up = min(open3, close3) - max(open2, close2) > gap_atr_ratio * atr_value # Check if third candle closes above midpoint of first closes_above_midpoint = close3 > (open1 + close1) / 2 return ( is_bearish1 and is_significant1 and is_small_body2 and is_bullish3 and is_significant3 and closes_above_midpoint and (gap_down or gap_up) # At least one gap should be present ) def is_evening_star(ctx: MainContext, atr_value: float, minimum_body_atr: float, gap_atr_ratio: float) -> bool: open1, close1 = ctx.open[2], ctx.close[2] open2, close2 = ctx.open[1], ctx.close[1] open3, close3 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body3 = abs(close3 - open3) # First candle should be bullish and significant is_bullish1 = close1 > open1 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) # Second candle should have a small body is_small_body2 = is_small_body(ctx, 1) # Third candle should be bearish and significant is_bearish3 = close3 < open3 is_significant3 = is_significant_body(body3, atr_value, minimum_body_atr) # Check for gaps or near-gaps using ATR gap_up = min(open2, close2) - max(open1, close1) > gap_atr_ratio * atr_value gap_down = min(open2, close2) - max(open3, close3) > gap_atr_ratio * atr_value # Check if third candle closes below midpoint of first closes_below_midpoint = close3 < (open1 + close1) / 2 return ( is_bullish1 and is_significant1 and is_small_body2 and is_bearish3 and is_significant3 and closes_below_midpoint and (gap_up or gap_down) # At least one gap should be present ) def is_hammer(ctx: MainContext, in_downtrend: bool, atr_value: float, shadow_body_ratio: float) -> bool: open_, close = ctx.open[0], ctx.close[0] high, low = ctx.high[0], ctx.low[0] body = abs(close - open_) range_ = high - low upper_shadow = high - max(open_, close) lower_shadow = min(open_, close) - low is_significant = range_ > 0.5 * atr_value return ( in_downtrend and is_significant and body > 0 and # Ensure there is a body lower_shadow > shadow_body_ratio * body and # Long lower shadow upper_shadow < 0.5 * body and # Small upper shadow close >= open_ # Preferably bullish (close >= open) ) def is_shooting_star(ctx: MainContext, in_uptrend: bool, atr_value: float, shadow_body_ratio: float) -> bool: open_, close = ctx.open[0], ctx.close[0] high, low = ctx.high[0], ctx.low[0] body = abs(close - open_) range_ = high - low upper_shadow = high - max(open_, close) lower_shadow = min(open_, close) - low is_significant = range_ > 0.5 * atr_value return ( in_uptrend and is_significant and body > 0 and # Ensure there is a body upper_shadow > shadow_body_ratio * body and # Long upper shadow lower_shadow < 0.5 * body and # Small lower shadow close <= open_ # Preferably 
bearish (close <= open) ) def is_dark_cloud_cover(ctx: MainContext, in_uptrend: bool, atr_value: float, minimum_body_atr: float) -> bool: open1, close1 = ctx.open[1], ctx.close[1] open2, close2 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body2 = abs(close2 - open2) body_mid = (open1 + close1) / 2 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) is_significant2 = is_significant_body(body2, atr_value, minimum_body_atr) return ( in_uptrend and close1 > open1 and # First candle is bullish close2 < open2 and # Second candle is bearish open2 > close1 and # Second candle opens above first candle's close close2 < body_mid and # Second candle closes below midpoint of first close2 > open1 and # Second candle doesn't close below first candle's open is_significant1 and is_significant2 ) def is_piercing(ctx: MainContext, in_downtrend: bool, atr_value: float, minimum_body_atr: float) -> bool: open1, close1 = ctx.open[1], ctx.close[1] open2, close2 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body2 = abs(close2 - open2) body_mid = (open1 + close1) / 2 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) is_significant2 = is_significant_body(body2, atr_value, minimum_body_atr) return ( in_downtrend and close1 < open1 and # First candle is bearish close2 > open2 and # Second candle is bullish open2 < close1 and # Second candle opens below first candle's close close2 > body_mid and # Second candle closes above midpoint of first close2 < open1 and # Second candle doesn't close above first candle's open is_significant1 and is_significant2 ) def is_three_black_crows(ctx: MainContext, in_uptrend: bool, atr_value: float, minimum_body_atr: float) -> bool: # Calculate bearish body sizes b1 = ctx.open[2] - ctx.close[2] b2 = ctx.open[1] - ctx.close[1] b3 = ctx.open[0] - ctx.close[0] # Calculate shadows upper_shadow1 = ctx.high[2] - ctx.open[2] upper_shadow2 = ctx.high[1] - ctx.open[1] upper_shadow3 = ctx.high[0] - ctx.open[0] lower_shadow1 = ctx.close[2] - ctx.low[2] lower_shadow2 = ctx.close[1] - ctx.low[1] lower_shadow3 = ctx.close[0] - ctx.low[0] # Significant bodies and small shadows relative to bodies are_significant = ( b1 > minimum_body_atr * atr_value and b2 > minimum_body_atr * atr_value and b3 > minimum_body_atr * atr_value ) small_shadows = ( upper_shadow1 < 0.3 * b1 and upper_shadow2 < 0.3 * b2 and upper_shadow3 < 0.3 * b3 and lower_shadow1 < 0.3 * b1 and lower_shadow2 < 0.3 * b2 and lower_shadow3 < 0.3 * b3 ) # Each opens within the previous candle's body (traditional definition) proper_opens = ( ctx.open[1] <= ctx.open[2] and ctx.open[1] >= ctx.close[2] and ctx.open[0] <= ctx.open[1] and ctx.open[0] >= ctx.close[1] ) return ( in_uptrend and # All three candles are bearish ctx.close[2] < ctx.open[2] and ctx.close[1] < ctx.open[1] and ctx.close[0] < ctx.open[0] and # Each closes lower than the previous ctx.close[1] < ctx.close[2] and ctx.close[0] < ctx.close[1] and are_significant and small_shadows and proper_opens ) def is_three_white_soldiers(ctx: MainContext, in_downtrend: bool, atr_value: float, minimum_body_atr: float) -> bool: # Calculate bullish body sizes b1 = ctx.close[2] - ctx.open[2] b2 = ctx.close[1] - ctx.open[1] b3 = ctx.close[0] - ctx.open[0] # Calculate shadows upper_shadow1 = ctx.high[2] - ctx.close[2] upper_shadow2 = ctx.high[1] - ctx.close[1] upper_shadow3 = ctx.high[0] - ctx.close[0] lower_shadow1 = ctx.open[2] - ctx.low[2] lower_shadow2 = ctx.open[1] - ctx.low[1] lower_shadow3 = ctx.open[0] - ctx.low[0] # 
Significant bodies and small shadows relative to bodies are_significant = ( b1 > minimum_body_atr * atr_value and b2 > minimum_body_atr * atr_value and b3 > minimum_body_atr * atr_value ) small_shadows = ( upper_shadow1 < 0.3 * b1 and upper_shadow2 < 0.3 * b2 and upper_shadow3 < 0.3 * b3 and lower_shadow1 < 0.3 * b1 and lower_shadow2 < 0.3 * b2 and lower_shadow3 < 0.3 * b3 ) # Each opens within the previous candle's body (traditional definition) proper_opens = ( ctx.open[1] >= ctx.open[2] and ctx.open[1] <= ctx.close[2] and ctx.open[0] >= ctx.open[1] and ctx.open[0] <= ctx.close[1] ) return ( in_downtrend and # All three candles are bullish ctx.close[2] > ctx.open[2] and ctx.close[1] > ctx.open[1] and ctx.close[0] > ctx.open[0] and # Each closes higher than the previous ctx.close[1] > ctx.close[2] and ctx.close[0] > ctx.close[1] and are_significant and small_shadows and proper_opens ) @indicator('Enhanced Candlestick Patterns v3', overlay_main_pane=True) @param.float('doji_ratio', default=0.05, min=0.01, max=0.2, title='Doji Body/Range Ratio') @param.float('shadow_body_ratio', default=2.0, min=1.0, max=5.0, title='Shadow/Body Ratio') @param.float('minimum_body_atr', default=0.3, min=0.1, max=1.0, title='Minimum Body/ATR Ratio') @param.float('gap_atr_ratio', default=0.1, min=0.0, max=0.5, title='Gap/ATR Ratio') @param.float('volume_ratio', default=1.2, min=1.0, max=3.0, title='Volume Increase Ratio') @param.int('fast_ema', default=20, min=5, max=50, title='Fast EMA Length') @param.int('slow_ema', default=50, min=20, max=200, title='Slow EMA Length') # Bullish Patterns (varying shades of green/teal) @plot.marker(color=color.GREEN, text='BE', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Bullish Engulfing @plot.marker(color=color.TEAL, text='MORN', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Morning Star @plot.marker(color=color.LIME, text='H', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Hammer @plot.marker(color=color.rgba(0, 180, 80, 1), text='P', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Piercing @plot.marker(color=color.rgba(0, 128, 64, 1), text='TWS', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Three White Soldiers # Bearish Patterns (varying shades of red/purple) @plot.marker(color=color.RED, text='SE', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Bearish Engulfing @plot.marker(color=color.PURPLE, text='EVE', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Evening Star @plot.marker(color=color.FUCHSIA, text='SS', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Shooting Star @plot.marker(color=color.rgba(180, 0, 80, 1), text='DCC', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Dark Cloud Cover @plot.marker(color=color.MAROON, text='TBC', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Three Black Crows def Main(self, doji_ratio, shadow_body_ratio, minimum_body_atr, gap_atr_ratio, volume_ratio, fast_ema, slow_ema): # Main indicator logic h = self.high[0] - self.low[0] # Compute indicators fast_ema_series = Ema.new(self.close, fast_ema) slow_ema_series = Ema.new(self.close, slow_ema) atr14 = Atr.new(14) in_downtrend = is_downtrend(fast_ema_series[0], slow_ema_series[0]) in_uptrend = is_uptrend(fast_ema_series[0], slow_ema_series[0]) return ( self.low[0] - h * 0.05 if is_bullish_engulfing(self, atr14[0], minimum_body_atr, volume_ratio) else math.nan, 
self.high[0] + h * 0.05 if is_bearish_engulfing(self, atr14[0], minimum_body_atr, volume_ratio) else math.nan, self.low[0] - h * 0.1 if is_morning_star(self, atr14[0], minimum_body_atr, gap_atr_ratio) else math.nan, self.high[0] + h * 0.1 if is_evening_star(self, atr14[0], minimum_body_atr, gap_atr_ratio) else math.nan, self.low[0] - h * 0.15 if is_hammer(self, in_downtrend, atr14[0], shadow_body_ratio) else math.nan, self.high[0] + h * 0.15 if is_shooting_star(self, in_uptrend, atr14[0], shadow_body_ratio) else math.nan, self.high[0] + h * 0.2 if is_dark_cloud_cover(self, in_uptrend, atr14[0], minimum_body_atr) else math.nan, self.low[0] - h * 0.2 if is_piercing(self, in_downtrend, atr14[0], minimum_body_atr) else math.nan, self.high[0] + h * 0.25 if is_three_black_crows(self, in_uptrend, atr14[0], minimum_body_atr) else math.nan, self.low[0] - h * 0.25 if is_three_white_soldiers(self, in_downtrend, atr14[0], minimum_body_atr) else math.nan )
Now you can not change the z-index from the indicator code, but you can do it through the UI: Yes, you can adjust the transparency of labels in the same way as any other elements: color=indie.color.TEAL(alpha), for example: @plot.marker(color=color.GREEN(0.5), text='BE', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Bullish Engulfing @plot.marker(color=color.TEAL(0.5), text='MORN', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Morning Star You can also adjust the value and position of your markers so that they don't cover anything. If the marker is tied to the upper point of the candle, then it is more convenient to use position=plot.marker_position.ABOVE. And vice versa. So your full code will be: # indie:lang_version = 5 from indie import indicator, param, plot, color, MainContext from indie.algorithms import Ema, Atr import math # Define helper functions at global scope def is_downtrend(fast_ema: float, slow_ema: float) -> bool: return fast_ema < slow_ema def is_uptrend(fast_ema: float, slow_ema: float) -> bool: return fast_ema > slow_ema def is_significant_body(body: float, atr_value: float, minimum_body_atr: float) -> bool: return body > minimum_body_atr * atr_value def is_doji(ctx: MainContext, i: int, atr_value: float, doji_ratio: float) -> bool: high, low = ctx.high[i], ctx.low[i] open_, close = ctx.open[i], ctx.close[i] body = abs(close - open_) range_ = high - low # Check if range is significant compared to market volatility is_significant = range_ > 0.3 * atr_value # Traditional doji has very small body relative to range return is_significant and body <= doji_ratio * range_ def is_small_body(ctx: MainContext, i: int) -> bool: open_, close = ctx.open[i], ctx.close[i] high, low = ctx.high[i], ctx.low[i] body = abs(close - open_) range_ = high - low return body < 0.3 * range_ def is_bullish_engulfing(ctx: MainContext, atr_value: float, minimum_body_atr: float, volume_ratio: float) -> bool: body_prev = abs(ctx.close[1] - ctx.open[1]) body_curr = abs(ctx.close[0] - ctx.open[0]) # Check for significant body size is_significant = is_significant_body(body_curr, atr_value, minimum_body_atr) return ( ctx.close[1] < ctx.open[1] and # Previous candle is bearish ctx.close[0] > ctx.open[0] and # Current candle is bullish ctx.open[0] <= ctx.close[1] and ctx.close[0] >= ctx.open[1] and # Body engulfing is_significant and ctx.volume[0] > ctx.volume[1] * volume_ratio # Volume confirmation ) def is_bearish_engulfing(ctx: MainContext, atr_value: float, minimum_body_atr: float, volume_ratio: float) -> bool: body_prev = abs(ctx.close[1] - ctx.open[1]) body_curr = abs(ctx.close[0] - ctx.open[0]) # Check for significant body size is_significant = is_significant_body(body_curr, atr_value, minimum_body_atr) return ( ctx.close[1] > ctx.open[1] and # Previous candle is bullish ctx.close[0] < ctx.open[0] and # Current candle is bearish ctx.open[0] >= ctx.close[1] and ctx.close[0] <= ctx.open[1] and # Body engulfing is_significant and ctx.volume[0] > ctx.volume[1] * volume_ratio # Volume confirmation ) def is_morning_star(ctx: MainContext, atr_value: float, minimum_body_atr: float, gap_atr_ratio: float) -> bool: open1, close1 = ctx.open[2], ctx.close[2] open2, close2 = ctx.open[1], ctx.close[1] open3, close3 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body3 = abs(close3 - open3) # First candle should be bearish and significant is_bearish1 = close1 < open1 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) # Second candle 
should have a small body is_small_body2 = is_small_body(ctx, 1) # Third candle should be bullish and significant is_bullish3 = close3 > open3 is_significant3 = is_significant_body(body3, atr_value, minimum_body_atr) # Check for gaps or near-gaps using ATR gap_down = min(open1, close1) - max(open2, close2) > gap_atr_ratio * atr_value gap_up = min(open3, close3) - max(open2, close2) > gap_atr_ratio * atr_value # Check if third candle closes above midpoint of first closes_above_midpoint = close3 > (open1 + close1) / 2 return ( is_bearish1 and is_significant1 and is_small_body2 and is_bullish3 and is_significant3 and closes_above_midpoint and (gap_down or gap_up) # At least one gap should be present ) def is_evening_star(ctx: MainContext, atr_value: float, minimum_body_atr: float, gap_atr_ratio: float) -> bool: open1, close1 = ctx.open[2], ctx.close[2] open2, close2 = ctx.open[1], ctx.close[1] open3, close3 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body3 = abs(close3 - open3) # First candle should be bullish and significant is_bullish1 = close1 > open1 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) # Second candle should have a small body is_small_body2 = is_small_body(ctx, 1) # Third candle should be bearish and significant is_bearish3 = close3 < open3 is_significant3 = is_significant_body(body3, atr_value, minimum_body_atr) # Check for gaps or near-gaps using ATR gap_up = min(open2, close2) - max(open1, close1) > gap_atr_ratio * atr_value gap_down = min(open2, close2) - max(open3, close3) > gap_atr_ratio * atr_value # Check if third candle closes below midpoint of first closes_below_midpoint = close3 < (open1 + close1) / 2 return ( is_bullish1 and is_significant1 and is_small_body2 and is_bearish3 and is_significant3 and closes_below_midpoint and (gap_up or gap_down) # At least one gap should be present ) def is_hammer(ctx: MainContext, in_downtrend: bool, atr_value: float, shadow_body_ratio: float) -> bool: open_, close = ctx.open[0], ctx.close[0] high, low = ctx.high[0], ctx.low[0] body = abs(close - open_) range_ = high - low upper_shadow = high - max(open_, close) lower_shadow = min(open_, close) - low is_significant = range_ > 0.5 * atr_value return ( in_downtrend and is_significant and body > 0 and # Ensure there is a body lower_shadow > shadow_body_ratio * body and # Long lower shadow upper_shadow < 0.5 * body and # Small upper shadow close >= open_ # Preferably bullish (close >= open) ) def is_shooting_star(ctx: MainContext, in_uptrend: bool, atr_value: float, shadow_body_ratio: float) -> bool: open_, close = ctx.open[0], ctx.close[0] high, low = ctx.high[0], ctx.low[0] body = abs(close - open_) range_ = high - low upper_shadow = high - max(open_, close) lower_shadow = min(open_, close) - low is_significant = range_ > 0.5 * atr_value return ( in_uptrend and is_significant and body > 0 and # Ensure there is a body upper_shadow > shadow_body_ratio * body and # Long upper shadow lower_shadow < 0.5 * body and # Small lower shadow close <= open_ # Preferably bearish (close <= open) ) def is_dark_cloud_cover(ctx: MainContext, in_uptrend: bool, atr_value: float, minimum_body_atr: float) -> bool: open1, close1 = ctx.open[1], ctx.close[1] open2, close2 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body2 = abs(close2 - open2) body_mid = (open1 + close1) / 2 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) is_significant2 = is_significant_body(body2, atr_value, minimum_body_atr) return ( in_uptrend and close1 > open1 
and # First candle is bullish close2 < open2 and # Second candle is bearish open2 > close1 and # Second candle opens above first candle's close close2 < body_mid and # Second candle closes below midpoint of first close2 > open1 and # Second candle doesn't close below first candle's open is_significant1 and is_significant2 ) def is_piercing(ctx: MainContext, in_downtrend: bool, atr_value: float, minimum_body_atr: float) -> bool: open1, close1 = ctx.open[1], ctx.close[1] open2, close2 = ctx.open[0], ctx.close[0] body1 = abs(close1 - open1) body2 = abs(close2 - open2) body_mid = (open1 + close1) / 2 is_significant1 = is_significant_body(body1, atr_value, minimum_body_atr) is_significant2 = is_significant_body(body2, atr_value, minimum_body_atr) return ( in_downtrend and close1 < open1 and # First candle is bearish close2 > open2 and # Second candle is bullish open2 < close1 and # Second candle opens below first candle's close close2 > body_mid and # Second candle closes above midpoint of first close2 < open1 and # Second candle doesn't close above first candle's open is_significant1 and is_significant2 ) def is_three_black_crows(ctx: MainContext, in_uptrend: bool, atr_value: float, minimum_body_atr: float) -> bool: # Calculate bearish body sizes b1 = ctx.open[2] - ctx.close[2] b2 = ctx.open[1] - ctx.close[1] b3 = ctx.open[0] - ctx.close[0] # Calculate shadows upper_shadow1 = ctx.high[2] - ctx.open[2] upper_shadow2 = ctx.high[1] - ctx.open[1] upper_shadow3 = ctx.high[0] - ctx.open[0] lower_shadow1 = ctx.close[2] - ctx.low[2] lower_shadow2 = ctx.close[1] - ctx.low[1] lower_shadow3 = ctx.close[0] - ctx.low[0] # Significant bodies and small shadows relative to bodies are_significant = ( b1 > minimum_body_atr * atr_value and b2 > minimum_body_atr * atr_value and b3 > minimum_body_atr * atr_value ) small_shadows = ( upper_shadow1 < 0.3 * b1 and upper_shadow2 < 0.3 * b2 and upper_shadow3 < 0.3 * b3 and lower_shadow1 < 0.3 * b1 and lower_shadow2 < 0.3 * b2 and lower_shadow3 < 0.3 * b3 ) # Each opens within the previous candle's body (traditional definition) proper_opens = ( ctx.open[1] <= ctx.open[2] and ctx.open[1] >= ctx.close[2] and ctx.open[0] <= ctx.open[1] and ctx.open[0] >= ctx.close[1] ) return ( in_uptrend and # All three candles are bearish ctx.close[2] < ctx.open[2] and ctx.close[1] < ctx.open[1] and ctx.close[0] < ctx.open[0] and # Each closes lower than the previous ctx.close[1] < ctx.close[2] and ctx.close[0] < ctx.close[1] and are_significant and small_shadows and proper_opens ) def is_three_white_soldiers(ctx: MainContext, in_downtrend: bool, atr_value: float, minimum_body_atr: float) -> bool: # Calculate bullish body sizes b1 = ctx.close[2] - ctx.open[2] b2 = ctx.close[1] - ctx.open[1] b3 = ctx.close[0] - ctx.open[0] # Calculate shadows upper_shadow1 = ctx.high[2] - ctx.close[2] upper_shadow2 = ctx.high[1] - ctx.close[1] upper_shadow3 = ctx.high[0] - ctx.close[0] lower_shadow1 = ctx.open[2] - ctx.low[2] lower_shadow2 = ctx.open[1] - ctx.low[1] lower_shadow3 = ctx.open[0] - ctx.low[0] # Significant bodies and small shadows relative to bodies are_significant = ( b1 > minimum_body_atr * atr_value and b2 > minimum_body_atr * atr_value and b3 > minimum_body_atr * atr_value ) small_shadows = ( upper_shadow1 < 0.3 * b1 and upper_shadow2 < 0.3 * b2 and upper_shadow3 < 0.3 * b3 and lower_shadow1 < 0.3 * b1 and lower_shadow2 < 0.3 * b2 and lower_shadow3 < 0.3 * b3 ) # Each opens within the previous candle's body (traditional definition) proper_opens = ( ctx.open[1] >= ctx.open[2] and 
ctx.open[1] <= ctx.close[2] and ctx.open[0] >= ctx.open[1] and ctx.open[0] <= ctx.close[1] ) return ( in_downtrend and # All three candles are bullish ctx.close[2] > ctx.open[2] and ctx.close[1] > ctx.open[1] and ctx.close[0] > ctx.open[0] and # Each closes higher than the previous ctx.close[1] > ctx.close[2] and ctx.close[0] > ctx.close[1] and are_significant and small_shadows and proper_opens ) @indicator('Enhanced Candlestick Patterns v3', overlay_main_pane=True) @param.float('doji_ratio', default=0.05, min=0.01, max=0.2, title='Doji Body/Range Ratio') @param.float('shadow_body_ratio', default=2.0, min=1.0, max=5.0, title='Shadow/Body Ratio') @param.float('minimum_body_atr', default=0.3, min=0.1, max=1.0, title='Minimum Body/ATR Ratio') @param.float('gap_atr_ratio', default=0.1, min=0.0, max=0.5, title='Gap/ATR Ratio') @param.float('volume_ratio', default=1.2, min=1.0, max=3.0, title='Volume Increase Ratio') @param.int('fast_ema', default=20, min=5, max=50, title='Fast EMA Length') @param.int('slow_ema', default=50, min=20, max=200, title='Slow EMA Length') # Bullish Patterns (varying shades of green/teal) @plot.marker(color=color.GREEN(0.5), text='BE', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Bullish Engulfing @plot.marker(color=color.TEAL(0.5), text='MORN', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Morning Star @plot.marker(color=color.LIME(0.5), text='H', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Hammer @plot.marker(color=color.rgba(0, 180, 80, 0.5), text='P', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Piercing @plot.marker(color=color.rgba(0, 128, 64, 0.5), text='TWS', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Three White Soldiers # Bearish Patterns (varying shades of red/purple) @plot.marker(color=color.RED(0.5), text='SE', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Bearish Engulfing @plot.marker(color=color.PURPLE(0.5), text='EVE', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Evening Star @plot.marker(color=color.FUCHSIA(0.5), text='SS', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Shooting Star @plot.marker(color=color.rgba(180, 0, 80, 0.5), text='DCC', style=plot.marker_style.LABEL, position=plot.marker_position.ABOVE) # Dark Cloud Cover @plot.marker(color=color.MAROON(0.5), text='TBC', style=plot.marker_style.LABEL, position=plot.marker_position.BELOW) # Three Black Crows def Main(self, doji_ratio, shadow_body_ratio, minimum_body_atr, gap_atr_ratio, volume_ratio, fast_ema, slow_ema): # Main indicator logic h = self.high[0] - self.low[0] # Compute indicators fast_ema_series = Ema.new(self.close, fast_ema) slow_ema_series = Ema.new(self.close, slow_ema) atr14 = Atr.new(14) in_downtrend = is_downtrend(fast_ema_series[0], slow_ema_series[0]) in_uptrend = is_uptrend(fast_ema_series[0], slow_ema_series[0]) return ( self.low[0] - h * 0.05 if is_bullish_engulfing(self, atr14[0], minimum_body_atr, volume_ratio) else math.nan, self.high[0] + h * 0.05 if is_bearish_engulfing(self, atr14[0], minimum_body_atr, volume_ratio) else math.nan, self.low[0] - h * 0.1 if is_morning_star(self, atr14[0], minimum_body_atr, gap_atr_ratio) else math.nan, self.high[0] + h * 0.1 if is_evening_star(self, atr14[0], minimum_body_atr, gap_atr_ratio) else math.nan, self.low[0] - h * 0.15 if is_hammer(self, in_downtrend, atr14[0], shadow_body_ratio) else math.nan, self.high[0] + h * 0.15 if 
is_shooting_star(self, in_uptrend, atr14[0], shadow_body_ratio) else math.nan, self.high[0] + h * 0.2 if is_dark_cloud_cover(self, in_uptrend, atr14[0], minimum_body_atr) else math.nan, self.low[0] - h * 0.2 if is_piercing(self, in_downtrend, atr14[0], minimum_body_atr) else math.nan, self.high[0] + h * 0.25 if is_three_black_crows(self, in_uptrend, atr14[0], minimum_body_atr) else math.nan, self.low[0] - h * 0.25 if is_three_white_soldiers(self, in_downtrend, atr14[0], minimum_body_atr) else math.nan )
4
2
79,559,256
2025-4-7
https://stackoverflow.com/questions/79559256/django-queryset-annotate-sum-of-related-objects-of-related-objects
I have

    class Book(models.Model):
        title = models.CharField(max_length=32)

    class Table(models.Model):
        book = models.ForeignKey(Book, related_name='tables')

    class TableEntry(models.Model):
        table = models.ForeignKey(Table, related_name='entries')
        value = models.FloatField()

and want a queryset of books that has annotated, for each book, the sum of all table entries of this book. I have tried

    Book.objects.all().annotate(sum=Sum('tables__entries'))

but this does not seem to work. [Edit: Seems to work, after all, as expected.]

Note that the idea works when the sum of all entries of each table should be annotated:

    Table.objects.all().annotate(sum=Sum('entries'))
You can annotate the prefetched Tables, like:

    from django.db.models import Prefetch, Sum

    Book.objects.prefetch_related(
        Prefetch('tables', Table.objects.annotate(sum=Sum('entries__value')))
    )

If you then access a Book instance (named book for example), the book.tables.all() is a QuerySet of Tables with each an extra sum attribute that is the sum of value of the entries of the Table.
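If what you actually want is a single total per Book rather than per-table sums, the annotation from the question should work once it points at the value field; a sketch, assuming the models shown above:

    from django.db.models import Sum

    books = Book.objects.annotate(total=Sum('tables__entries__value'))

    for book in books:
        print(book.title, book.total)  # total is None for books with no entries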
1
2
79,558,911
2025-4-7
https://stackoverflow.com/questions/79558911/renaming-columns-names-in-polars
I want to change the column labels of a Polars DataFrame from ['#a', '#b', '#c', '#d'] to ['a', 'b', 'c', 'd']
The following approach is most effective when renaming all columns that follow a consistent pattern, such as a common prefix or suffix.

    df.columns = [col.lstrip("#") for col in df.columns]

Alternatively, the approach outlined below is most suitable when renaming specific columns or when no consistent pattern is present:

    df = df.rename({
        "#a": "a",
        "#b": "b",
        "#c": "c",
        "#d": "d"
    })

Output:

    ┌─────┬─────┬─────┬─────┐
    │ a   ┆ b   ┆ c   ┆ d   │
    │ --- ┆ --- ┆ --- ┆ --- │
    │ i64 ┆ i64 ┆ i64 ┆ i64 │
    ╞═════╪═════╪═════╪═════╡
    │ 1   ┆ 3   ┆ 5   ┆ 7   │
    │ 2   ┆ 4   ┆ 6   ┆ 8   │
    └─────┴─────┴─────┴─────┘
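For completeness, a small end-to-end sketch combining both ideas (the sample data here is made up to match the output shown above):

    import polars as pl

    df = pl.DataFrame({"#a": [1, 2], "#b": [3, 4], "#c": [5, 6], "#d": [7, 8]})

    # Build the rename mapping from the existing column names.
    df = df.rename({col: col.lstrip("#") for col in df.columns})
    print(df.columns)  # ['a', 'b', 'c', 'd']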
1
4
79,558,939
2025-4-7
https://stackoverflow.com/questions/79558939/using-a-list-of-values-to-select-rows-from-polars-dataframe
I have a Polars DataFrame below:

    import polars as pl

    df = pl.DataFrame({"a": [1, 2, 3], "b": [4, 3, 2]})

    >>> df
    a    b
    i64  i64
    1    4
    2    3
    3    2

I can subset based on a specific value:

    x = df[df["a"] == 3]

    >>> x
    a    b
    i64  i64
    3    2

But how can I subset based on a list of values? - something like this:

    list_of_values = [1, 3]
    y = df[df["a"] in list_of_values]

To get:

    a    b
    i64  i64
    1    4
    3    2
    y = df.filter(pl.col("a").is_in(list_of_values))

Output:

    ┌─────┬─────┐
    │ a   ┆ b   │
    │ --- ┆ --- │
    │ i64 ┆ i64 │
    ╞═════╪═════╡
    │ 1   ┆ 4   │
    │ 3   ┆ 2   │
    └─────┴─────┘
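The same expression can be negated if you ever need the complementary rows, i.e. those whose a is not in the list; a small sketch:

    import polars as pl

    df = pl.DataFrame({"a": [1, 2, 3], "b": [4, 3, 2]})
    list_of_values = [1, 3]

    # ~ inverts the boolean mask produced by is_in
    y = df.filter(~pl.col("a").is_in(list_of_values))
    print(y)  # keeps only the row where a == 2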
1
1
79,558,420
2025-4-6
https://stackoverflow.com/questions/79558420/cookie-cannot-be-added-to-website-in-selenium
    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service
    from selenium.webdriver.chrome.options import Options
    import json
    import time

    service = Service(executable_path=f"chromedriver.exe", log_path=f"seleniumlog.txt")
    selenium_options = Options()
    selenium_options.add_argument("--disable-blink-features=AutomationControlled")
    selenium_options.add_argument("--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36")
    selenium_options.add_experimental_option("excludeSwitches", ["enable-automation"])
    selenium_options.add_experimental_option('useAutomationExtension', False)

    chrome_driver = webdriver.Chrome(service=service, options=selenium_options)
    chrome_driver.get("https://www.tiktok.com/login")
    time.sleep(5)

    cookie = {
        'domain': '.tiktok.com',
        'expiry': 1734578412,
        'httpOnly': True,
        'name': 'sessionid',
        'path': '/',
        'secure': True,
        'value': 'abc123'
    }

    # Step 2: Load cookies
    with open("flamencocookies.json", "r") as f:
        cookies = json.load(f)

    try:
        chrome_driver.add_cookie(cookie)
    except Exception as e:
        print("Could not add cookie")

    print(chrome_driver.get_cookies())

I'm just testing Selenium and chose TikTok.com to test it on. But in the above code, I'm unable to add the cookie to the browser session. The cookie format seems correct, and I'm not getting an exception, but it's also not added to the browser session. Does anyone know why? Could it be something to do with the switches I added to the browser? Read selenium docs, everything seems correct.
Your cookie won't be added because it is expired. Update the time and it adds in fine. I used this:

    cookie = {
        'domain': '.tiktok.com',
        'expiry': 1744835794,  # <-- updated here
        'httpOnly': True,
        'name': 'sessionid',
        'path': '/',
        'secure': True,
        'value': 'abc123'
    }

Cookies work on epoch time, which is the number of seconds since 1st Jan 1970. There are lots of handy converters available online. Your expiry time was set to: Thursday, 19 December 2024 03:20:12.
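If you would rather not hard-code an epoch timestamp at all, one option (my own sketch, not part of the answer above) is to compute a future expiry at runtime:

    import time

    cookie = {
        'domain': '.tiktok.com',
        'expiry': int(time.time()) + 30 * 24 * 3600,  # 30 days from now, in epoch seconds
        'httpOnly': True,
        'name': 'sessionid',
        'path': '/',
        'secure': True,
        'value': 'abc123'
    }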
2
3
79,558,657
2025-4-6
https://stackoverflow.com/questions/79558657/apply-1d-mask-on-numpy-3d-array
I have the following 3d-numpy.ndarray:

    import numpy as np

    X = np.array([
        [[0.0, 0.4, 0.6, 0.0, 0.0],
         [0.6, 0.0, 0.0, 0.0, 0.0],
         [0.4, 0.0, 0.0, 0.0, 0.0],
         [0.0, 0.6, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.4, 0.0, 1.0]],

        [[0.1, 0.5, 0.4, 0.0, 0.0],
         [0.6, 0.0, 0.0, 0.0, 0.0],
         [0.2, 0.0, 0.0, 0.0, 0.0],
         [0.1, 0.6, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.4, 0.0, 1.0]]
    ])

I want a new array where all the rows and columns are dropped where the diagonal is equal to 1.

    idx = np.diag(X[0]) == 1  # for my implementation it is sufficient to look at X[0]

Important to note is that X.shape[1] == X.shape[2], so I try to use the mask as follows:

    Y = X[:, ~idx, ~idx]

The above returns something different from my desired output, which is:

    [[0.0, 0.4, 0.6],
     [0.6, 0.0, 0.0],
     [0.4, 0.0, 0.0]],

    [[0.1, 0.5, 0.4],
     [0.6, 0.0, 0.0],
     [0.2, 0.0, 0.0]]

Please advise.
Probably you can try

    X[:, ~idx, :][:, :, ~idx]

or np.ix_

    X[:, *np.ix_(~idx, ~idx)]

which gives

    array([[[0. , 0.4, 0.6],
            [0.6, 0. , 0. ],
            [0.4, 0. , 0. ]],

           [[0.1, 0.5, 0.4],
            [0.6, 0. , 0. ],
            [0.2, 0. , 0. ]]])
2
2
79,558,025
2025-4-6
https://stackoverflow.com/questions/79558025/efficient-and-readable-way-to-get-n-dimensional-index-array-in-c-order-using-num
When I need to generate an N-dimensional index array in C-order, I’ve tried a few different NumPy approaches.

The fastest for larger arrays but less readable:

    np.stack(np.meshgrid(*[np.arange(i, dtype=dtype) for i in sizes], indexing="ij"), axis=-1).reshape(-1, len(sizes))

More readable with good performance:

    np.ascontiguousarray(np.indices(sizes, dtype=dtype).reshape(len(sizes), -1).T)

Here I'm not sure if the ascontiguousarray copy is actually necessary, or if there's a better way to make sure the result is in C-order without forcing a copy.

Most readable, but by far the slowest:

    np.vstack([*np.ndindex(sizes)], dtype=dtype)

The iterator conversion is quite slow for larger arrays.

Is there a built-in more straightforward and readable way to achieve this that matches the performance of np.meshgrid or np.indices using NumPy? If not, can either the meshgrid or indices approaches be optimized to avoid unnecessary memory copies (like ascontiguousarray) while still making sure the array is C-contiguous?

Example:

    sizes = (3, 1, 2)
    idx = np.ascontiguousarray(np.indices(sizes).reshape(len(sizes), -1).T)
    print(idx)
    print(f"C_CONTIGUOUS: {idx.flags['C_CONTIGUOUS']}")

    # [[0 0 0]
    #  [0 0 1]
    #  [1 0 0]
    #  [1 0 1]
    #  [2 0 0]
    #  [2 0 1]]
    # C_CONTIGUOUS: True
Here is a (quite-naive) solution in Numba using multiple threads:

    import numpy as np
    import numba as nb

    @nb.njit(
        [
            # Eagerly compiled for common types
            # Please add your type if it is missing
            '(int32[:,:], int32[:])',
            '(int64[:,:], int32[:])',
            '(float32[:,:], int32[:])',
            '(float64[:,:], int32[:])',
        ],
        parallel=True,
        cache=True
    )
    def nb_kernel(res, sizes):
        n = np.prod(sizes)
        m = sizes.size
        chunk_size = 1024

        assert n > 0 and m > 0
        for i in range(m):
            assert sizes[i] > 0

        # Compute blocks of chunk_size rows.
        # Multiple threads compute separate blocks.
        for block in nb.prange((n + chunk_size - 1) // chunk_size):
            start = block * chunk_size
            end = min(start + chunk_size, n)

            # Compute the first row of the block
            jump = 1
            for j in range(m-1, -1, -1):
                res[start, j] = (start // jump) % sizes[j]
                jump *= sizes[j]

            # The next rows of the block incrementally
            for i in range(start+1, end):
                inc = 1
                for j in range(m-1, -1, -1):
                    val = res[i-1, j] + inc
                    if val >= sizes[j]:
                        val = 0
                        inc = 1
                    else:
                        inc = 0
                    res[i, j] = val

    def nb_compute(sizes, dtype):
        res = np.empty((np.prod(sizes), len(sizes)), dtype=dtype)
        nb_kernel(res, np.array(sizes, dtype=np.int32))
        return res

Benchmark

On my machine (i5-9600KF CPU), on Windows, here are results with sizes=(101,53,71) and dtype=np.int32:

    np.vstack(...):              626.1 ms
    np.ascontiguousarray(...):     3.5 ms
    np.stack(...):                 2.6 ms
    nb_compute(...):               1.1 ms   <----
    nb_kernel(...):                0.5 ms   <----

    With a fixed length of `sizes` known at compile time:
    nb_compute(...):               0.8 ms   <----
    nb_kernel(...):                0.2 ms   <----

Analysis and optimisations

We can see that calling nb_kernel directly on a preallocated array is significantly faster. Indeed, when we fill an array for the first time, it causes a lot of memory page faults which are inherently slow. Doing that in parallel is better (but it does not scale on Windows). If you already do this in each thread of a parallel code, nb_kernel will not make this significantly faster. Indeed, most of the speed-up of Numba comes from the use of multiple threads. Consequently, in this case, we need to optimise the Numba kernel.

One major optimisation is to specialise the function for a specific length of sizes (then known at compile time). Indeed, the code is more than twice as fast if we replace m by 3 (so we only support len(sizes) being 3). I expect most of the cases to have a very small len(sizes), so you can specialise the function for the cases 2, 3, 4 and 5 and write a pure-Python function calling the right specialisation. This optimisation also makes the parallel code faster.

For better performance, you should avoid filling big arrays due to DRAM being slow. This is especially true for temporary arrays (arrays that are filled once and then never reused) due to page faults. I think the above code is optimal for output arrays not fitting in the last-level cache (LLC) of your CPU. For output arrays fitting in the LLC, there are faster implementations than the one above, for example using a native SIMD-friendly language (but it is pretty complex to implement).
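A quick way to sanity-check the Numba version against the NumPy baseline from the question (a sketch; it assumes nb_compute from above is already defined and compiled):

    import numpy as np

    sizes = (101, 53, 71)
    ref = np.ascontiguousarray(np.indices(sizes, dtype=np.int32).reshape(len(sizes), -1).T)
    out = nb_compute(sizes, np.int32)

    print(out.flags['C_CONTIGUOUS'])  # True
    print(np.array_equal(out, ref))   # True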
4
5
79,557,758
2025-4-6
https://stackoverflow.com/questions/79557758/how-can-i-implement-true-constants-in-python
I'm attempting to use real constants in Python: values which cannot be modified once defined. I realize Python has no native support for constants like certain other languages (e.g., final in Java or const in C++), but I want the most Pythonic or safest manner of achieving this behavior.

    class Constants:
        PI = 3.14159
        GRAVITY = 9.8

    Constants.PI = 3  # This still allows reassignment

I want something that raises an error or prevents the value from being changed. I’ve also heard of using custom classes or metaclasses, but I’m not sure what the best practice is.

What’s the best way to implement unchangeable constants in Python?
You're correct: Python does not natively support true constants the way C, C++, or Java do. There is no const keyword, and even the convention of writing variables in all caps (PI = 3.14) depends completely on developer discipline. Although workarounds such as metaclasses or special classes can be used to mimic immutability, none of them can prevent reassignment at the module level, or stop a determined user from simply overwriting your values. Therefore, I created something that does.

Introducing setconstant: Immutability in Python Constants

Python's values are not immutable by default, and that can lead to unintended value mutations, particularly in big codebases. So I made setconstant, a light Python package allowing you to define true constants that cannot be mutated after declaration.

How It Works

    import setconstant

    setconstant.const_i("PI", 3)
    setconstant.const_f("GRAVITY", 9.81)
    setconstant.const_s("APP_NAME", "CodeHaven")

    print(setconstant.get_constant("PI"))        # Output: 3
    print(setconstant.get_constant("GRAVITY"))   # Output: 9.81
    print(setconstant.get_constant("APP_NAME"))  # Output: CodeHaven

    # Try to change the value

Once declared, constants are locked. Any attempt to overwrite them throws a ValueError, protecting your code from bugs or unintended logic changes.

Why Not Just Use a Class or Metaclass?

    class Constants:
        PI = 3.14

        def __setattr__(self, name, value):
            raise AttributeError("Cannot reassign constants")

Or even using metaclasses like:

    class ConstantMeta(type):
        def __setattr__(cls, name, value):
            if name in cls.__dict__:
                raise AttributeError(f"Cannot modify '{name}'")
            super().__setattr__(name, value)

But these solutions have their limitations:

- They can be circumvented with sufficient effort.
- They are not intuitive to beginners.
- They need boilerplate or more advanced Python functionality such as metaclasses.

setconstant circumvents that. It's simple, strict, and does what constants are supposed to do.

Installation

    pip install setconstant

Feedback is always welcome!

Anuraj R, Creator of setconstant
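As a point of comparison (not part of the package above), the standard library's enum module also refuses to let members be reassigned, which is sometimes enough when the set of constants is small:

    from enum import Enum

    class Constants(Enum):
        PI = 3.14159
        GRAVITY = 9.8

    print(Constants.PI.value)  # 3.14159
    Constants.PI = 3           # raises AttributeError: cannot reassign member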
1
1