question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
79,663,680 |
2025-6-12
|
https://stackoverflow.com/questions/79663680/plotting-the-components-of-a-general-solution-returned-by-sympy
|
I have written the following code that returns a solution. import sympy as sym import numpy as np z = sym.Symbol('z') e = sym.Symbol('e') f = sym.Function('f') edo = sym.diff(f(z), z , 2) + e * f(z) soln = sym.dsolve(edo, f(z)) print(soln.rhs) the above code returns: C1*exp(-z*sqrt(-e)) + C2*exp(z*sqrt(-e)) I want to be able to access the elements of this 'soln.rhs' directly and plot them. I could copy and paste the results, but I want to do something like: plot(x, soln.rhs[0]) which would plot exp(-z*sqrt(-e)) The reason for this is that I am analysing many different types of ODE, some of which return solutions which are combinations of Airy functions like C1*airyai(-e + z) + C2*airybi(-e + z) Does anyone know how to access the elements of the general solution? I have looked through the documentation and nothing really pops out.
|
What you are looking for is the args attribute of a SymPy symbolic expression. For example: print(soln.rhs.args[0]) # C1*exp(-z*sqrt(-e)) print(soln.rhs.args[1]) # C2*exp(z*sqrt(-e)) You might also want to insert appropriate values for the integration constants by using the subs method: C1, C2 = symbols("C1, C2") soln.rhs.subs({C1: 2, C2: 3}) # random numeric values to show how to do it. Then, you can plot it: # plot the solution for z in [0, 10] plot(soln.rhs.subs({C1: 2, C2: 3}), (z, 0, 10))
| 1 | 2 |
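A minimal, self-contained sketch of the pattern the accepted answer above describes. The substituted values (C1 = C2 = 1, e = -1) are arbitrary illustrative choices, not part of the original question:

```python
import sympy as sym

z = sym.Symbol('z')
e = sym.Symbol('e')
f = sym.Function('f')

# Solve the ODE f''(z) + e*f(z) = 0
soln = sym.dsolve(sym.diff(f(z), z, 2) + e * f(z), f(z))

# Each additive term of the general solution is an element of .args
for term in soln.rhs.args:
    print(term)          # C1*exp(-z*sqrt(-e)) and C2*exp(z*sqrt(-e))

# Substitute the integration constants (and a concrete e) before plotting
C1, C2 = sym.symbols('C1 C2')
concrete = soln.rhs.subs({C1: 1, C2: 1, e: -1})   # arbitrary illustrative values
sym.plot(concrete, (z, 0, 5))
```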
79,663,153 |
2025-6-12
|
https://stackoverflow.com/questions/79663153/correct-way-to-embed-and-bundle-python-in-c-to-avoid-modulenotfounderror-enc
|
I am trying to embed Python inside my C++ DLL. The idea is that the DLL, once distributed, should be sufficient and not rely on other installations and downloads. Interestingly, the below "sort of" works, only in my solution directory, since that is where my vcpkg_installed is. How can I make my DLL not be required to be near vcpkg_installed? Code py_wrap.cpp (the DLL): void assertPyInit() { if (!Py_IsInitialized()) { PyConfig config; PyConfig_InitPythonConfig(&config); // Get the executable path instead of current working directory wchar_t exePath[MAX_PATH]; GetModuleFileNameW(NULL, exePath, MAX_PATH); // Remove the executable name to get the directory std::wstring exeDir = exePath; size_t lastSlash = exeDir.find_last_of(L"\\"); if (lastSlash != std::wstring::npos) { exeDir = exeDir.substr(0, lastSlash); } // Now build Python path relative to executable location std::wstring pythonHome = exeDir + L"\\..\\..\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3"; // Resolve the full path to eliminate .. references wchar_t resolvedPath[MAX_PATH]; GetFullPathNameW(pythonHome.c_str(), MAX_PATH, resolvedPath, NULL); pythonHome = resolvedPath; std::wstring pythonLib = pythonHome + L"\\Lib"; std::wstring pythonSitePackages = pythonLib + L"\\site-packages"; std::wstring pythonDLLs = pythonHome + L"\\DLLs"; // Set the Python home directory PyConfig_SetString(&config, &config.home, pythonHome.c_str()); // Set the module search paths std::wstring pythonPathEnv = pythonLib + L";" + pythonSitePackages + L";" + pythonDLLs; PyConfig_SetString(&config, &config.pythonpath_env, pythonPathEnv.c_str()); PyStatus status = Py_InitializeFromConfig(&config); PyConfig_Clear(&config); if (PyStatus_Exception(status)) { PyErr_Print(); return; } PyRun_SimpleString("import sys"); PyRun_SimpleString("sys.path.append(\".\")"); } } void MY_DLL pyPrint(const char* message) { assertPyInit(); PyObject* pyStr = PyUnicode_FromString(message); if (pyStr) { PyObject* builtins = PyEval_GetBuiltins(); PyObject* printFunc = PyDict_GetItemString(builtins, "print"); if (printFunc && PyCallable_Check(printFunc)) { PyObject* args = PyTuple_Pack(1, pyStr); PyObject_CallObject(printFunc, args); Py_DECREF(args); } Py_DECREF(pyStr); } } DLLTester.cpp (client app): #include <iostream> #include "py_wrap.h" int main() { std::cout << "Hello\n"; pyPrint("Hello from python:D !"); } File structure and IO PS D: \RedactedLabs\Dev > ls AsyncDLLMQL\x64\Release\ Directory: D: \RedactedLabs\Dev\AsyncDLLMQL\x64\Release Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 6/11/2025 10:48 PM 476672 AsyncDLLMQL.dll -a---- 6/11/2025 10:48 PM 4297 AsyncDLLMQL.exp -a---- 6/11/2025 10:26 PM 7752 AsyncDLLMQL.lib -a---- 6/11/2025 10:48 PM 7376896 AsyncDLLMQL.pdb -a---- 6/11/2025 10:48 PM 12288 DLLTester.exe -a---- 6/11/2025 10:48 PM 790528 DLLTester.pdb -a---- 6/6/2025 4: 39 PM 56320 python3.dll -a---- 6/6/2025 4: 39 PM 7273984 python312.dll PS D: \RedactedLabs\Dev > ls AsyncDLLMQL\x64\Release\ Directory: D: \RedactedLabs\Dev\AsyncDLLMQL\x64\Release Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 6/11/2025 10:48 PM 476672 AsyncDLLMQL.dll -a---- 6/11/2025 10:48 PM 4297 AsyncDLLMQL.exp -a---- 6/11/2025 10:26 PM 7752 AsyncDLLMQL.lib -a---- 6/11/2025 10:48 PM 7376896 AsyncDLLMQL.pdb -a---- 6/11/2025 10:48 PM 12288 DLLTester.exe -a---- 6/11/2025 10:48 PM 790528 DLLTester.pdb -a---- 6/6/2025 4: 39 PM 56320 python3.dll -a---- 6/6/2025 4: 39 PM 7273984 python312.dll PS D: \RedactedLabs\Dev > ls .\scraps\ 
Directory: D: \RedactedLabs\Dev\scraps Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 6/11/2025 10:48 PM 476672 AsyncDLLMQL.dll -a---- 6/11/2025 10:48 PM 4297 AsyncDLLMQL.exp -a---- 6/11/2025 10:26 PM 7752 AsyncDLLMQL.lib -a---- 6/11/2025 10:48 PM 7376896 AsyncDLLMQL.pdb -a---- 6/11/2025 10:48 PM 12288 DLLTester.exe -a---- 6/11/2025 10:48 PM 790528 DLLTester.pdb -a---- 6/6/2025 4: 39 PM 56320 python3.dll -a---- 6/6/2025 4: 39 PM 7273984 python312.dll PS D: \RedactedLabs\Dev > .\AsyncDLLMQL\x64\Release\DLLTester.exe Hello Hello from python: D ! PS D: \RedactedLabs\Dev > .\scraps\DLLTester.exe Hello Python path configuration: PYTHONHOME= 'D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3' PYTHONPATH = 'D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\Lib;D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\Lib\site-packages;D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\DLLs' program name = 'python' isolated = 0 environment = 1 user site = 1 safe_path = 0 import site = 1 is in build tree = 0 stdlib dir = 'D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\Lib' sys._base_executable = 'D:\\RedactedLabs\\Dev\\scraps\\DLLTester.exe' sys.base_prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3' sys.base_exec_prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3' sys.platlibdir = 'DLLs' sys.executable = 'D:\\RedactedLabs\\Dev\\scraps\\DLLTester.exe' sys.prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3' sys.exec_prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3' sys.path = [ 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\Lib', 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\Lib\\site-packages', 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\DLLs', 'D:\\RedactedLabs\\Dev\\scraps\\python312.zip', 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\DLLs', 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\Lib', 'D:\\RedactedLabs\\Dev\\scraps', ] ModuleNotFoundError: No module named 'encodings' Finally, my linker settings and other useful info: Platform: Windows IDE: Visual Studio Use Vcpkg Manifest: Yes Target triplet: "x64-windows-static-md" Additional include directories: $(SolutionDir)vcpkg_installed\x64-windows-static-md\x64-windows-static-md\include\python3.12 Additional Library Directories: $(VcpkgInstalledDir)\x64-windows-static-md\lib
|
CPython needs its standard library to exist at the PYTHONHOME directory (which you can override in the config). If you need a standalone Python interpreter just to parse Python syntax (for game scripting) then you can use other implementations of Python like RustPython, IronPython, Jython, or pocketpy, but you won't be able to use any C/C++ libraries like numpy. If security is not a concern then you can do as popular apps like Blender do and package the interpreter with your application, setting PYTHONHOME before initializing the interpreter with Py_InitializeFromConfig, as in the following structure. . └── app_root/ ├── app.exe ├── python311.dll └── python/ ├── DLLs/ │ ├── _ctypes.pyd │ └── ... ├── Lib/ │ ├── site-packages/ │ │ └── numpy, etc... │ ├── os.py │ └── ... └── bin (optional)/ ├── python311.dll (see note below) └── python.exe (see note below) If you place python311.dll in a subdirectory then you must take extra steps to set the PATH to its location, or use LoadLibrary instead of linking it directly. Packaging python.exe allows your user to install extra libraries using pip by doing python.exe -m pip install ... Keep in mind that users will be able to modify everything, so if security is a concern then my recommendation is not to use Python ... but you can create a custom importer and statically link everything and get it to work. Note that a game did this before, as well as creating custom byte-code for the interpreter, and was still completely reverse engineered, so this is not really fool-proof.
| 1 | 2 |
79,663,207 |
2025-6-12
|
https://stackoverflow.com/questions/79663207/matplotlib-slider-not-working-when-plot-is-updated-in-separate-function
|
I am using matplotlib's Slider to do a simple dynamic plot of a sine curve. I want to change the frequency using a slider. The code looks like this: import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider x = np.linspace(400, 800, 400) fig, ax = plt.subplots() fig.subplots_adjust(bottom=0.25) mod = 1. lambda_plot, = ax.plot(x, np.sin(mod*x*2*np.pi/500)) ax_mod_slider = fig.add_axes([0.3, 0.1, 0.5, 0.04]) mod_slider = Slider( ax = ax_mod_slider, label = "modulation", valmin = 0, valmax = 20, valinit = 1., orientation = "horizontal") def update_mod(val): mod = mod_slider.val redraw() fig.canvas.draw_idle() def redraw(): lambda_plot.set_ydata(np.sin(mod*x*2*np.pi/500)) mod_slider.on_changed(update_mod) plt.show() This does only work if I put the code in redraw() directly in update_mod(), like this: def update_mod(val): mod = mod_slider.val lambda_plot.set_ydata(np.sin(mod*x*2*np.pi/500)) fig.canvas.draw_idle() Why can I not call another function for changing the plot? In this example, I could put it all into update_mod(), but as I do more complicated calculations, I thought it might be good to be able to split update_mod() into separate functions.
|
In the redraw() function: def redraw(): lambda_plot.set_ydata(np.sin(mod * x * 2 * np.pi / 500)) You're using the variable mod, but this mod is not defined in the local scope of redraw(), nor is it properly declared as global or passed as an argument. So Python looks for it in the outer scope and finds the mod = 1. you defined at the beginning of the script which never changes. When update_mod() changes mod, it does so in its local scope, and that doesn't affect the mod that redraw() sees. Just pass the mod value and it should work properly: def redraw(mod): lambda_plot.set_ydata(np.sin(mod * x * 2 * np.pi / 500))
| 2 | 2 |
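For reference, a complete runnable version of the fix described in the answer above: the current slider value is passed into redraw() explicitly instead of being looked up as a variable that only exists in update_mod()'s local scope.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

x = np.linspace(400, 800, 400)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)
lambda_plot, = ax.plot(x, np.sin(1.0 * x * 2 * np.pi / 500))

ax_mod_slider = fig.add_axes([0.3, 0.1, 0.5, 0.04])
mod_slider = Slider(ax=ax_mod_slider, label="modulation",
                    valmin=0, valmax=20, valinit=1.0,
                    orientation="horizontal")

def redraw(mod):
    # mod is now an explicit argument, not a lookup of an outer variable
    lambda_plot.set_ydata(np.sin(mod * x * 2 * np.pi / 500))

def update_mod(val):
    redraw(mod_slider.val)   # pass the current slider value down
    fig.canvas.draw_idle()

mod_slider.on_changed(update_mod)
plt.show()
```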
79,664,893 |
2025-6-13
|
https://stackoverflow.com/questions/79664893/average-distance-of-any-point-of-a-disk-to-its-boundary
|
I was wondering what was the average distance "d" of any point in a ball to its boundary (not specifically the closest point). I made a simple monte carlo computation using Muller's method to get a uniform distribution - see linked answer : Python Uniform distribution of points on 4 dimensional sphere : def sample_within(radius, dim, nb_points=1, ): # Gaussian-distributed points x = np.random.normal(size=(nb_points, dim)) # Normalize to unit length (points on the surface of the n-sphere) x /= np.linalg.norm(x, axis=1)[:, np.newaxis] # Random radii with distribution ~ r^(1/n) => uniform volumic distribution in the ball r = np.random.rand(nb_points) ** (1/dim) # Scale by radius points = radius * x * r[:, np.newaxis] return points Note: I also used a "stupid" sampling in a box and discarding what ever points lies out of the sphere, everything I say in this post is true wiht both sampling method as they produce the same outcome. And using (the same method) : def sample_on_border(radius, dim:int, nb_points:int): # Gaussian-distributed points x = np.random.normal(size=(nb_points, dim)) # Normalize to unit length (points on the surface of the dim-sphere) x /= np.linalg.norm(x, axis=1)[:, np.newaxis] # Scale by radius points = radius * x return points The distances are calculated using (which I have checked manually): def calc_average_distance(radius: float, dim: int, nb_in_points: int, nb_border_points: int)->float: border_points = sample_on_border(radius, dim, nb_border_points) in_points = sample_within(radius, dim, nb_in_points) # in_points = border_points #specific case R=1 distances = np.sqrt(np.sum((border_points[:, np.newaxis, :] - in_points[np.newaxis, :, :])**2, axis=2)) average_distance_to_border = np.mean(distances) return average_distance_to_border From now on, I will speak in dimension 2 (disk) and with a radius of 1. Note: It is easy to demonstrate that the average distance between 2 points of the circle is 4/pi (1.27), and using only sampled points on the border in my Monte Carlo computation yield the same result. Also, the average distance between the center of the circle and the points of the circle is obviously 1 (since R=1). Therefore, I would expect the average distance d to be between 1 and 1.27 (it is just intuition though). The computed average distance d is : d = 1.13182 ± 0.00035 (with 5 sigma confidence) which is indeed between 1 and 1.27 (seems to even be the average). Now, I was also interest to find if there is an analytical solution to this problem, and I found the paper "The Average Directional Distance To The Boundary Of A Ball Or Disk" at https://www.emis.de/journals/AMEN/2024/AMEN-A230614.pdf This paper adress the problem in n-dimension, but its first section specifically deal with the case in dimension 2. Note: For the average distance between two points on the unit circle (i.e., when R=1), the paper gives the result 2/pi . However, unless I am mistaken, the correct value should be 4/π as previously stated and this can even be derived from the paper formula by taking r=R=1. However, I cannot find any (other) mistake in the paper and the analytical solution (which I have checked several times) yields 8/3pi so around 0.85 which is obvisouly different from the 1.13 I get when using my monte carlo code (and also not between 1 and 4/pi). Also, just to be sure, I tried: sampling non-uniformly (using uniform distribution to sample radius instead of using normal distribution), and I get another answer (1.088) but still not 0.85. 
or by sampling only along the segment [0, 1] (exploiting the problem symmetries), which gives 1.13 again or 1.088 again (depending on whether it is uniformly distributed or not). Using polar coordinates in 2D also gives 1.13. Can anyone help me identify a mistake in my reasoning/code? Edit: I failed to mention that this post https://math.stackexchange.com/questions/875011/average-distance-from-a-point-in-a-ball-to-a-point-on-its-boundary gives 6/5 for a 3d-ball, which is the result I get by Monte Carlo but differs from the paper's result (3/4). I am too bad at math to understand their results, but by pure analogy I figured that 1.13... is in fact 32/(9pi), which is the value of the double integral between 0 and 1 and 0 and 2pi of r*sqrt(r**2 - 2rcos(theta)+1). Thanks
|
The error you are doing is to compute the average distance between a point within the disk and a point within the border. This is not what the article describes. The article describe the average distance between a point within a disk and the intersection of the border for a direction This may seem to be the same. Since choosing Φ randomly results into choosing randomly a point in the border. But it is not. Because the border point is this way not uniformly chosen on the border (it is the direction Φ from the inner point that is uniformly chosen) Trying to maintain as much as I can your code, using the same sample_within and sample_on_border (to ease generalization to other dimensions), what would be the distance d(P,Φ), if not (as you did) just ||P-(cos Φ, sin Φ)| P being points returned by sample_within, and (cos Φ, sin Φ) the points returned by sample_on_border (that is a strange way to choose Φ, sure, rather than just a uniform choosing in [0, 2π], but that way I reuse sample_on_border, and that way, it may be usable for more than 2D). To make notation lighter, and in that spirit "I am not just choosing an angle, but a point on a unit sphere", let's call u=(cos Φ, sin Φ) One way to figure out that distance is to say that distance is α such as P+α.u is on the unit circle (or sphere for higher dimensions), with α≥0 (α<0 is also counted, but with direction -Φ) That is ||P+α.u|| = 1 That is <P+αu, P+αu> = 1 That is <P,P>+2α<P,u>+α²<u,u> Since u is on the unit sphere, <u,u>=1. <P,P>=ρ² if ρ is the distance to the center of P. That is just a 2nd degree equation to solve with unknown α α² + 2<P,U>α + ρ²-1 = 0 Δ = 4<P,U>² + 4 - 4ρ² So we have two solutions α = (-2<P,U> ± √Δ)/2 = ±√(<P,U>²+1-ρ²) - <P,U> Since 1-ρ² is positive, √(<P,U>²+1-ρ²) is slightly more that |<P,U>|, so is positive only for solution α = √(<P,U>²+1-ρ²) - <P,U> So, that is your distance. Let's revise now your last function with that distance in mind def calc_average_distance2(dim: int, nb_in_points: int, nb_border_points: int)->float: # Just rename `border_points` to `direction_point`, since it is a direction we are choosing here # That is what I called u direction_point = sample_on_border(1, dim, nb_border_points) # And this, what I called P in_points = sample_within(1, dim, nb_in_points) # I mention later why I tried with this line #in_points = sample_on_border(1, dim, nb_in_points) # <P,u> scal = (direction_point[:,None,:] * in_points[None,:,:]).sum(axis=2) ρ = np.linalg.norm(in_points, axis=1)[None,:] dist = np.sqrt(scal**2+1-ρ**2)-scal # And this comment of yours, I realize only now, while copying this to # stack overflow, has the exact same purpose, probably that my own commented line # in_points = border_points #specific case R=1 return dist.mean() Because I did my reasoning on a unit sphere, and was too lazy to redo it with a radius, I removed the radius parameter and use a fixed 1 The result on my machine, with 5000 points × 1000 directions is 0.8490785129084816 which is close enough to 8/(3π)≈0.8488263631567751 And if I uncomment the line chosing in_points on the border, to compute what is called P₁ in the article (I suspect that was also the role of your commented line), I get 0.6368585804371621 which is close enough to 2/π≈0.6366197723675814 So, I am not 100% positive that this is what you wanted to do, and if that is what you meant by "average distance of any points of a dist to its boundary". Reason why I used conditional about you wanting to use this code to generalize to higher dimensions. 
But I am 100% positive that this is what the article is doing. A simpler version Earlier, I said "α>0; the -α case will be counted when the direction is -Φ". But after all, what would happen if we used both α solutions for the average? That is just counting the average for both Φ and -Φ (or u and -u, if we think "direction" rather than "angle"). So, it would have worked without that α>0 restriction, counting both α solutions, negative and positive, in the average. But of course it is |α| that is the distance then. So what is the average of both |α| solutions? It is ½ ( |√(<P,U>²+1-ρ²) - <P,U>| + |-√(<P,U>²+1-ρ²) - <P,U>| ) And since the first is clearly positive (as we already said) and the second clearly negative, the absolute values can be resolved directly. It is ½ ( √(<P,U>²+1-ρ²) - <P,U> + √(<P,U>²+1-ρ²) + <P,U> ) = √(<P,U>²+1-ρ²) So, not a huge simplification, sure. That means we can drop the -<P,u> term: dist = np.sqrt(scal**2+1-ρ**2)-scal ⇒ dist = np.sqrt(scal**2+1-ρ**2) works as well
| 2 | 1 |
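A condensed sketch of the directional-distance computation described in the answer above, for readers who want to reproduce the 8/(3π) figure without the full sample_within / sample_on_border machinery; the sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Uniform points P in the unit disk (r ~ sqrt(U) gives uniform area density)
theta = rng.uniform(0, 2 * np.pi, n)
r = np.sqrt(rng.uniform(0, 1, n))
P = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Uniform directions u on the unit circle
phi = rng.uniform(0, 2 * np.pi, n)
u = np.column_stack([np.cos(phi), np.sin(phi)])

# Distance from P to the boundary along u: the positive root of ||P + a*u|| = 1
scal = np.einsum('ij,ij->i', P, u)     # <P, u>
rho2 = np.einsum('ij,ij->i', P, P)     # ||P||^2
dist = np.sqrt(scal**2 + 1 - rho2) - scal

print(dist.mean())        # ~ 0.8488
print(8 / (3 * np.pi))    # 0.84882...
```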
79,664,331 |
2025-6-13
|
https://stackoverflow.com/questions/79664331/why-does-numpy-assign-different-string-dtypes-when-mixing-types-in-np-array
|
I'm trying to understand how NumPy determines the dtype when creating an array with mixed types. I noticed that the inferred dtype for strings can vary significantly depending on the order and type of elements in the list. print(np.array([1.0, True, 'is'])) # Output: array(['1.0', 'True', 'is'], dtype='<U32') print(np.array(['1.0', True, 'is'])) # Output: array(['1.0', 'True', 'is'], dtype='<U5') print(np.array(['1.0', 'True', 'is'])) # Output: array(['1.0', 'True', 'is'], dtype='<U4') I understand that NumPy upcasts everything to a common type, usually the most general one, and that strings tend to dominate. But why does the resulting dtype (<U32, <U5, <U4) differ so much when the content looks almost the same? Specifically: Why does np.array([1.0, True, 'is']) result in <U32? What determines the exact length in the dtype (e.g., <U4 vs <U5)? Is there a consistent rule for how NumPy infers the dtype and string length in such cases?
|
Looking at your array contents and the NumPy type promotion rules, I guess the following applies: For some purposes NumPy will promote almost any other datatype to strings. This applies to array creation or concatenation. This leaves us with the question about the string lengths. For the complete array, we need to choose a string length so that all values can be represented without loss of information. In your examples, the contents have the following data types: Strings ('is', 'True', '1.0'): for these, NumPy just needs to reserve their actual length (thus, if there are multiple strings in the same array, the maximum length of all occurring strings). Booleans (True): for converting them to a string, NumPy reserves a string length of 5, since all possible converted values are 'True' (length 4) and 'False' (length 5). We can easily verify this: np.array(True).astype(str) # >>> array('True', dtype='<U5') Floats (1.0): for converting them to a string, NumPy reserves a string length of 32. I assume this is for round-trip safety (i.e. to get the exact same value when converting the string representation back to float). I would have expected that a shorter length (somewhere between 20 and 30) should be enough, but maybe 32, a power of 2, has been chosen for better memory alignment properties. In any case, again, we can verify this: np.array(1.).astype(str) # >>> array('1.0', dtype='<U32') Now to your examples: np.array([1.0, True, 'is']): we have a float 1.0 (→ string length 32), a boolean True (→ string length 5), and a string 'is' of length 2: The maximum length to represent all values is 32. np.array(['1.0', True, 'is']): we have a string '1.0' of length 3, a boolean True (→ string length 5), and a string 'is' of length 2: The maximum length to represent all values is 5. np.array(['1.0', 'True', 'is']): we have a string '1.0' of length 3, a string 'True' of length 4, and a string 'is' of length 2: The maximum length to represent all values is 4.
| 2 | 1 |
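A short snippet verifying the per-type string lengths the answer above describes; the reported dtypes are what current NumPy releases print and could differ slightly between versions.

```python
import numpy as np

# Per-element promotion lengths
print(np.array(True).astype(str).dtype)   # <U5  ('False' needs 5 characters)
print(np.array(1.0).astype(str).dtype)    # <U32 (floats reserve 32 characters)
print(np.array('is').dtype)               # <U2  (actual string length)

# The mixed arrays take the maximum of the individual requirements
print(np.array([1.0, True, 'is']).dtype)      # <U32
print(np.array(['1.0', True, 'is']).dtype)    # <U5
print(np.array(['1.0', 'True', 'is']).dtype)  # <U4
```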
79,664,217 |
2025-6-13
|
https://stackoverflow.com/questions/79664217/arbitrary-stencil-slicing-in-numpy
|
Is there a simple syntax for creating references to an arbitrary number of neighbouring array elements in numpy? The syntax is relatively straightforward when the number of neighbours is hard-coded. A stencil width of three, for example, is import numpy as np x = np.arange(8) # Hard-coded stencil width of 3 x_neighbours = ( x[ :-2], x[ 1:-1], x[ 2: ] ) However, my attempt at arbitrary width stencils is not particularly readable: nStencil = 3 x_neighbours = ( x[indexStart:indexStop] for indexStart, indexStop in zip( (None, *range(1,nStencil)), (*range(1-nStencil,0), None), ) ) Is there a better approach?
|
I'd recommend using sliding_window_view (from numpy.lib.stride_tricks). Change: nStencil = 3 x_neighbours = ( x[indexStart:indexStop] for indexStart, indexStop in zip( (None, *range(1,nStencil)), (*range(1-nStencil,0), None), ) ) To: from numpy.lib.stride_tricks import sliding_window_view nStencil = 3 sliding_view = sliding_window_view(x, nStencil) x_neighbours = tuple(sliding_view[:, i] for i in range(nStencil))
| 1 | 3 |
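A complete runnable version of the suggestion above (assuming NumPy ≥ 1.20, where sliding_window_view was added), checked against the hard-coded width-3 stencil from the question:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.arange(8)
nStencil = 3

# Column i of the sliding view is a read-only view equivalent to x[i : i + len(x) - nStencil + 1]
sliding_view = sliding_window_view(x, nStencil)
x_neighbours = tuple(sliding_view[:, i] for i in range(nStencil))

# Same result as the hard-coded stencil
assert all(np.array_equal(a, b)
           for a, b in zip(x_neighbours, (x[:-2], x[1:-1], x[2:])))
print(x_neighbours)
```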
79,667,364 |
2025-6-16
|
https://stackoverflow.com/questions/79667364/how-to-create-different-type-for-class-variable-and-instance-variable
|
I want to explain to Pyright that my variables in class and instance have different types. I managed to overload __get__ method to achieve this, but now Pyright complains about initialization of instances (see last line): Literal[1] is not assignable to Field[int] My code: import typing as ty from dataclasses import dataclass class Field[InstanceT]: def __init__(self, default: InstanceT): self.default = default self.first = True def __get__(self, obj, owner): if self.first: self.first = False return self.default return self if ty.TYPE_CHECKING: @ty.overload def __get__(self, obj: None, owner: type) -> ty.Self: ... @ty.overload def __get__(self, obj: object, owner: type) -> InstanceT: ... @dataclass class Model: field: Field[int] = Field(0) if __name__ == "__main__": # It`s fine class_field: Field = Model.field instance_field: int = Model().field assert isinstance(class_field, Field) assert isinstance(instance_field, int) # Literal[1] is not assignable to field[int] obj = Model(field=1) Asserts are true, but Pyright complains.
|
You want to have a data-descriptor, such it needs a __set__ method. You will get an error depending on the signature of __set__, you want it to accept your generic. A working example could look like this, the instance value will be stored on the objects _field attribute, see the __set_name__ magic, you could of course also store in the Field and not on the instance. I am not sure about your self.first logic - so you might want to change some parts. import typing as ty from dataclasses import dataclass class Field[InstanceT]: def __init__(self, default: InstanceT): self.default = default self.first = True @ty.overload def __get__(self, obj: None, owner: type) -> ty.Self: ... @ty.overload def __get__(self, obj: object, owner: type) -> InstanceT: ... def __get__(self, obj, owner): if self.first: self.first = False return self.default if obj is None: # <-- called on class return self return getattr(obj, self._name, self.default) # <-- called on instance def __set_name__(self, owner, name): self._name = "_" +name def __set__(self, obj, value: InstanceT): setattr(obj, self._name, value) @dataclass class Model: field: Field[int] = Field(0) if __name__ == "__main__": class_field = Model.field reveal_type(class_field) # Field[int] model = Model() instance_field: int = model.field reveal_type(instance_field) # int assert isinstance(class_field, Field) assert isinstance(instance_field, int) # Literal[1] is not assignable to field[int] obj = Model(field=1) # OK Model(field="1") # Error
| 1 | 0 |
79,667,071 |
2025-6-16
|
https://stackoverflow.com/questions/79667071/why-numpy-fabs-is-much-slower-than-abs
|
This Python 3.12.7 script with Numpy 2.2.4: import numpy as np, timeit as ti a = np.random.rand(1000).astype(np.float32) print(f'Minimum, median and maximum execution time in us:') for fun in ('np.fabs(a)', 'np.abs(a)'): t = 10**6 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=999)) print(f'{fun:20} {np.amin(t):8,.3f} {np.median(t):8,.3f} {np.amax(t):8,.3f}') produces these results on AMD Ryzen 7 3800X: Minimum, median and maximum execution time in us: np.fabs(a) 1.813 1.843 4.929 np.abs(a) 0.781 0.811 1.463 indicating that np.fabs() is more than 2x slower than np.abs(), despite the latter having more functionality. What is the reason?
|
fabs always calls the C math library function of the same name (or in this case, its fabsf variant). Therefore the operation cannot be inlined or vectorized. I have verified this by injecting a custom version using LD_PRELOAD. I've checked the source code of glibc (which just calls __builtin_fabsf(x)) and looked at the generated code with godbolt. I see no complexity (e.g. NaN handling or math exceptions) that differentiates fabs from the fastest, simple abs implementation. I assume numpy always calls the library for the f… functions on principle. Similar effects can be expected from fmin and fmax, for example (though here the implementation is actually more complex than plain min() and max()). From looking at old bug reports, it appears that the performance difference (and the signaling math behavior) are actually platform-dependent. On MIPS, abs() used to be (or still is?) slower as the compiler could not turn the generic C code into a simple bit mask due to the potential of floating point exceptions (which fabs should never raise). This also highlights that implementing fabs without compiler support is not as simple as writing x < 0 ? -x : x, because -0 has to be differentiated from +0, which that comparison does not do. np.abs seems to do the right thing on modern numpy, even though a simple C version would not, but I have not investigated where and how the behavior is implemented for floating point types.
| 2 | 4 |
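The timing difference discussed above is machine- and build-dependent. A minimal sketch to reproduce both the numerical equivalence (including signed zero and NaN handling) and the timing comparison is shown below; the exact ratio will vary with CPU, NumPy version and array size.

```python
import numpy as np
import timeit as ti

a = np.random.rand(1000).astype(np.float32)

# The two ufuncs agree numerically, including on signed zeros and NaN
special = np.array([-0.0, 0.0, -1.5, np.nan, -np.inf], dtype=np.float32)
print(np.fabs(special), np.abs(special))
print(np.signbit(np.abs(np.float32(-0.0))))   # False: -0.0 becomes +0.0

# Machine-dependent timing comparison
for fun in ('np.fabs(a)', 'np.abs(a)'):
    t = min(ti.repeat(stmt=fun, globals=globals(), number=10_000, repeat=5))
    print(f'{fun:12s} {t * 1e6 / 10_000:8.3f} us per call')
```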
79,666,955 |
2025-6-16
|
https://stackoverflow.com/questions/79666955/how-to-access-code-in-different-files-inside-the-main-app
|
I have a rather large app.py file, so I'd like to take a nav panel out and store it in a separate file. I'm not sure how to access code from a different file and include it in the main app. The app.py file: import page from shiny import reactive, render, req, ui from shiny.express import input, ui with ui.navset_hidden(id="selected_navset_tab"): # Homepage with ui.nav_panel("Welcome", value="page_home"): with ui.card(): # Variable calculated ui.input_selectize( "filler", "Filler", ["list", "of", "items", "here"], multiple=False ) The other file, page.py: from shiny import reactive, render, req, ui from shiny.express import input, ui with ui.nav_panel("Page I want to put in the main app", value="other_page"): @render.express def filler_text(): "Filler text" How can I import the other_page nav panel to show as part of the navset_tab in the main app.py file, without actually putting it in the code?
|
You can wrap the other_page nav panel into a function and use this function inside the main app. However, doing this results in an empty additional nav panel and you need to apply express.expressify. This is because otherwise only the return value of the nav function is shown (None), where we instead want to display the result of each line (see the Programming UI section within the docs for more details). page.py from shiny import render from shiny.express import ui, expressify @expressify def otherNav(): with ui.nav_panel("Page I want to put in the main app", value="other_page"): @render.express def filler_text(): "Filler text" app.py import page from shiny import ui from shiny.express import input, ui with ui.navset_tab(id="selected_navset_tab"): # Homepage with ui.nav_panel("Welcome", value="page_home"): with ui.card(): # Variable calculated ui.input_selectize( "filler", "Filler", ["list", "of", "items", "here"], multiple=False ) # other Nav page.otherNav()
| 1 | 1 |
79,669,389 |
2025-6-17
|
https://stackoverflow.com/questions/79669389/pandas-subtract-one-dataframe-from-another-if-match
|
I have a pandas dataframe that has information about total attendance for schools grouped by School, District, Program, Grade, and Month #. The data looks like the following (df): School District Program Grade Month Count 123 456 A 9-12 10 100 123 456 B 9-12 10 95 321 654 A 9-12 10 23 321 456 A 7-8 10 40 Some of the counts are inflated and need to be reduced based on the data from another dataframe (ToSubtract): School District Program Grade Month Count 123 456 A 9-12 10 10 321 654 A 9-12 10 8 Both dataframes are already grouped so there will be no duplicate grouping. Subtracting ToSubtract from df will result in: School District Program Grade Month Count X 123 456 A 9-12 10 90 * 123 456 B 9-12 10 95 321 654 A 9-12 10 15 * 321 456 A 7-8 10 40 (X column to be marked with * to indicate value was modified). df has a lot more entries for all of the other schools, districts, months, etc. I was looking into df.sub() but it looks like the elements have to be lined up. My other idea was to use df.iterrows() to go through each row of df and check if there is a corresponding row in ToSubtract but this seems very inefficient. What would be the best way to subtract one dataframe from another, matching several columns
|
A possible solution: cols = ['School', 'District', 'Program', 'Grade', 'Month'] df1.set_index(cols).sub(df2.set_index(cols), fill_value=0).reset_index() First, it defines the list cols with the columns used to match records (School, District, Program, Grade, and Month). Then, it sets these columns as the index for both dataframes using set_index, allowing pandas to align rows based on these keys. It uses sub to subtract df2 from df1, with fill_value=0 ensuring that missing rows in either dataframe are treated as zeros (i.e., subtracting from zero or not subtracting at all). Finally, it resets the index back to columns with reset_index. Output: School District Program Grade Month Count 0 123 456 A 9-12 10 90.0 1 123 456 B 9-12 10 95.0 2 321 456 A 7-8 10 40.0 3 321 654 A 9-12 10 15.0
| 1 | 3 |
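The accepted answer above covers the subtraction; the question also asks for an X column marking modified rows. One possible way to add it (an assumption for illustration, not part of the accepted answer) is to merge the keys of the subtraction dataframe back in with an indicator:

```python
import pandas as pd

df1 = pd.DataFrame({
    'School':   [123, 123, 321, 321],
    'District': [456, 456, 654, 456],
    'Program':  ['A', 'B', 'A', 'A'],
    'Grade':    ['9-12', '9-12', '9-12', '7-8'],
    'Month':    [10, 10, 10, 10],
    'Count':    [100, 95, 23, 40],
})
df2 = pd.DataFrame({
    'School':   [123, 321],
    'District': [456, 654],
    'Program':  ['A', 'A'],
    'Grade':    ['9-12', '9-12'],
    'Month':    [10, 10],
    'Count':    [10, 8],
})

cols = ['School', 'District', 'Program', 'Grade', 'Month']
out = df1.set_index(cols).sub(df2.set_index(cols), fill_value=0).reset_index()

# Mark rows whose key appears in df2 (i.e. whose Count was reduced)
flag = out.merge(df2[cols].drop_duplicates(), on=cols,
                 how='left', indicator=True)['_merge'].eq('both')
out['X'] = flag.map({True: '*', False: ''}).to_numpy()
print(out)
```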
79,669,116 |
2025-6-17
|
https://stackoverflow.com/questions/79669116/creating-a-character-grid-from-a-table-output-is-inverted
|
I was given a problem that takes in a table of information (grid coordinates and characters) and asked to place them into a table to make a message. I've spent time working on some Python to work it out but my answer is coming out upside down and I can't figure out why. import requests from bs4 import BeautifulSoup webpage_response = requests.get('https://docs.google.com/document/d/e/2PACX-1vRMx5YQlZNa3ra8dYYxmv-QIQ3YJe8tbI3kqcuC7lQiZm-CSEznKfN_HYNSpoXcZIV3Y_O3YoUB1ecq/pub') webpage = webpage_response.content #print(webpage) soup = BeautifulSoup(webpage, 'html.parser') table = soup.find('table') #print(doc_table) rows = table.find_all('tr') data = [] for row in rows[1:]: cells = row.find_all('td') character = cells[1].text.strip() x = cells[0].text.strip() y = cells[2].text.strip() data.append((character,x,y)) #print(data) max_x = max(x for _,x,_ in data) max_y = max(y for _, _,y in data) #print(max_x) #print(max_y) grid = [[' ' for _ in range(int(max_x) + 1)] for _ in range(int(max_y) + 1)] for character, x, y in data: grid[int(y)][int(x)] = character for row in grid: print(''.join(row)) This is my code. It should print out a grid of characters that look like a capital F but its upside down. I think the problem arises when I populate the grid but I'm not sure how to fix it.
|
The likely cause of the upside-down output is how you are indexing grid[int(y)]. In typical grid representations for display: grid[0] often represents the top row. grid[len(grid)-1] often represents the bottom row. However, if your data's y-coordinates are such that y=0 is the bottom of your intended image (like a standard Cartesian coordinate system where y increases upwards), then directly using grid[int(y)] will place y=0 at grid[0], which is the top. This effectively flips the image vertically. To fix this, you need to adjust the y-coordinate when placing the character into the grid. Explanation of the fix (adjusted_y = max_y - y): Let's assume your y-coordinates in the data range from 0 to max_y. When y = max_y (the "top" of your desired "F" in the original coordinate system), adjusted_y will be max_y - max_y = 0. This places the character in grid[0], which is the very first (top) row of your printed grid. When y = 0 (the "bottom" of your desired "F" in the original coordinate system), adjusted_y will be max_y - 0 = max_y. This places the character in grid[max_y], which is the very last (bottom) row of your printed grid. This transformation effectively reverses the order of the y-coordinates, causing the image to be displayed correctly (right-side up).
| 1 | 0 |
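A minimal sketch of the coordinate flip described in the answer above, using a small hard-coded set of (character, x, y) tuples instead of the scraped Google Doc so the fix can be seen in isolation:

```python
# A tiny "F" described with y increasing upwards (y=0 is the bottom row)
data = [
    ('#', 0, 0), ('#', 0, 1), ('#', 0, 2), ('#', 0, 3), ('#', 0, 4),
    ('#', 1, 4), ('#', 2, 4),   # top bar
    ('#', 1, 2),                # middle bar
]

max_x = max(x for _, x, _ in data)
max_y = max(y for _, _, y in data)

grid = [[' ' for _ in range(max_x + 1)] for _ in range(max_y + 1)]

for character, x, y in data:
    adjusted_y = max_y - y      # flip: y = max_y lands on the top printed row
    grid[adjusted_y][x] = character

for row in grid:
    print(''.join(row))
```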
79,668,894 |
2025-6-17
|
https://stackoverflow.com/questions/79668894/plotly-express-line-plots-number-of-datapoint-insted-of-actual-data
|
Plotly express line plots number of data points instead of actual data on the y axis. Now whats confusing me is that it only does this on one machine, its works perfectly fine on 3 others. I made sure python and the packages i use are of the same version. I tried changing the data type to int and float, which only does what i described above. However if i changed the data type to string it no longer showed the number of data point, but it scaled all the values that aren't 0 to 1(other devices did the same on datatype str). df['Machine_Status'] = df['Machine_Status'].astype(int) df = df.dropna(subset=['Machine_Status']) df =df.reset_index(drop=True) df[['Timestamp', 'Machine_Status']].to_csv('machine_status_timeframe_output.csv', index=False) fig = px.line( df, x='Timestamp', y='Machine_Status', title=f'{machine_id} Activity on {selected_date} (6 AM to 10 PM)', labels={'Timestamp': 'Time', 'Machine_Status': 'Activity'}, line_shape='hv', ) Timestamp Machine_Status 2025-06-16 06:00:04 0 2025-06-16 06:00:09 0 2025-06-16 06:00:14 3 2025-06-16 06:00:18 0 2025-06-16 06:00:23 0 2025-06-16 06:00:28 3 2025-06-16 06:00:33 0 2025-06-16 06:00:38 0 2025-06-16 06:00:43 3 2025-06-16 06:00:48 0 2025-06-16 06:00:53 0 2025-06-16 06:00:58 3 2025-06-16 06:01:03 0 2025-06-16 06:01:08 0 2025-06-16 06:01:13 0 2025-06-16 06:01:18 0 2025-06-16 06:01:23 0 2025-06-16 06:01:28 0 2025-06-16 06:01:33 0 2025-06-16 06:01:38 0 2025-06-16 06:01:43 0 2025-06-16 06:01:48 0 2025-06-16 06:01:53 0 Current behavior: Expected behavior:
|
The symptom you’re seeing is exactly the bug that crept into Plotly v6.0.0: in Jupyter the first release of 6.x sometimes mistakes a perfectly-normal numeric series for “categorical” and silently switches the underlying trace to the default aggregation “count”. Instead of plotting the values it therefore shows how many rows share each x-value – the bar/line height becomes “number of points”, not the data you passed in. Why you only see it on one PC The machine that mis-behaves almost certainly has plotly == 6.0.0 installed. The other three computers are still on the stable 5.x line (or have been upgraded to ≥ 6.0.1), so the bug never triggers. You can confirm this in one line: import plotly, sys print(plotly.__version__, sys.executable) How to fix it 1. Upgrade to a patched 6.x release (recommended) pip install -U plotly # installs the latest 6.1.2 at the time of writing # or pin to the first patched build: # pip install plotly==6.0.1 2. …or temporarily roll back to the last 5.x version pip install "plotly<6" # e.g. 5.24.1 or 5.23.0 Either option removes the aggregation glitch; your original code works unchanged.
| 1 | 1 |
79,670,825 |
2025-6-18
|
https://stackoverflow.com/questions/79670825/index-array-using-boolean-mask-with-a-broadcasted-dimension
|
I have two arrays a1 and a2 of shape M1xM2x...xMPx3 and N1xN2x...xNQx3, and a mask m (boolean array) of shape M1xM2x...xMPxN1xN2x...xNQ (you can assume that a1 and a2 are at least 2D). import numpy as np np.random.seed(0) a1 = np.random.rand(4, 5, 3) a2 = np.random.rand(6, 3) m = np.random.rand(4, 5, 6) >= 0.7 I would like to obtain two arrays b1 and b2 of shape Mx3 that contain the values of a1 and a2 where the mask m is true. My current way is to repeat a1 and a2 to obtain arrays of size M1xM2x...xMPxN1xN2x...xNQx3 and then index them using m: a1_shape, a2_shape = a1.shape[:-1], a2.shape[:-1] t_shape = a1_shape + a2_shape + (3,) a1 = a1[..., None, :].repeat(np.prod(a2_shape), axis=-2).reshape(t_shape) a2 = a2[None, ..., :].repeat(np.prod(a1_shape), axis=0).reshape(t_shape) b1, b2 = a1[m, :], a2[m, :] This requires creating two huge arrays of size M1xM2x...xMPxN1xN2x...xNQ, which can be problematic. Is there a way to obtain the same b1 and b2 without having to create such huge intermediate arrays?
|
A possible solution: idx = np.where(m) P = len(a1.shape) - 1 Q = len(a2.shape) - 1 a1_idx = idx[:P] a2_idx = idx[P:P+Q] b1 = a1[a1_idx] if P > 0 else np.tile(a1, (np.sum(m), 1)) b2 = a2[a2_idx] if Q > 0 else np.tile(a2, (np.sum(m), 1)) This solution begins by using np.where(m) to obtain a tuple of index arrays corresponding to all True locations in m. The number of spatial dimensions for a1 is determined using P = len(a1.shape) - 1, where .shape provides the dimensions and len counts them; Q is determined similarly for a2. These index arrays are then partitioned: the first P arrays (idx[:P]) are assigned to a1_idx and the subsequent Q arrays (idx[P:P+Q]) are assigned to a2_idx. For edge cases where an array might lack spatial dimensions (P=0 or Q=0), it handles this by using np.tile, which repeats the original vector np.sum(m) times (counting True values), ensuring the output shape matches the expected (K, 3) format.
| 1 | 1 |
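A quick check, using the random arrays from the question, that the index-splitting approach in the answer above reproduces the b1/b2 obtained from a broadcast-then-mask reference version:

```python
import numpy as np

np.random.seed(0)
a1 = np.random.rand(4, 5, 3)
a2 = np.random.rand(6, 3)
m = np.random.rand(4, 5, 6) >= 0.7

# Memory-light version: split the True-indices between a1's and a2's axes
idx = np.where(m)
P = a1.ndim - 1
b1 = a1[idx[:P]]
b2 = a2[idx[P:]]

# Reference version (broadcast_to creates read-only views, then mask)
big1 = np.broadcast_to(a1[:, :, None, :], m.shape + (3,))
big2 = np.broadcast_to(a2[None, None, :, :], m.shape + (3,))
assert np.array_equal(b1, big1[m])
assert np.array_equal(b2, big2[m])
print(b1.shape, b2.shape)   # both (m.sum(), 3)
```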
79,670,289 |
2025-6-18
|
https://stackoverflow.com/questions/79670289/type-hinting-a-dynamic-asymmetric-class-property
|
I'm currently working on removing all the type errors from my Python project in VS Code. Assume you have a Python class that has an asymmetric property. It takes any kind of iterable and converts it into a custom list subclass with additional methods. class ObservableList(list): """list with events for change, insert, remove, ...""" # ... class MyFrame: @property def my_list(self) -> ObservableList: return self._my_list @my_list.setter def my_list(self, val: typing.Iterable): self._my_list = ObservableList(val) # ... my_frame = MyFrame() VS Code (i.e. Pyright) will correctly deduce that: you can set my_frame.my_list using any iterable, and my_frame.my_list will always be an ObservableList. Now, let's assume that there is no actual @property code. Instead, the property is implemented dynamically using __setattr__ and __getattr__. (Context: We're talking about a GUI generator which provides automatic bindings.) I want to use a declaration on class level to tell the typechecker that this property exists, without actually spelling it out: class MyFrame(AutoFrame): my_list: ??? # ... (AutoFrame provides the __getattr__/ __setattr__ implementation.) What can I put in place of the ??? to make this work? When using ObservableList, Pyright complains when I assign a plain list to the property. When using Iterable or list, Pyright complains when I access ObservableList-specific methods. Same when using list | ObservableList: Pyright assumes that the property could return both, and list misses the additional methods. Re: close vote: the linked question's answer basically boils down to going back to square one (implementing the property explicitly). The point of using AutoFrame is specifically to get rid of that repetitive boilerplate code. Just imagine doing this for a GUI frame with a dozen bound controls. I can live with a single added declaration line but not much more.
|
You can use the Any-Trick, see my answer on how the four different cases work, but in short, if you type it as: from typing import Any, reveal_type class ObservableList(list): """list with events for change, insert, remove, ...""" def foo(self) -> str: ... class MyFrame: my_list: ObservableList | Any MyFrame().my_list = [2, 3] # OK value: str = MyFrame().my_list.foo() # OK, add :str to avoid value being str | Any Of course that will have the downside that any assignment, will be okay as well: MyFrame().my_list = "no error" so you need to be aware of that. Alternatively, you can implement the property behind a if TYPE_CHECKING block OR in a .pyi file: no-runtime influence, correct-typing, but boilerplate: class MyFrame: if TYPE_CHECKING: @property def my_list(self) -> ObservableList: ... @my_list.setter def my_list(self, val: typing.Iterable): ...
| 1 | 2 |
79,673,580 |
2025-6-20
|
https://stackoverflow.com/questions/79673580/why-do-i-get-a-division-by-zero-error-when-plotting-inverse-function-on-secondar
|
I get a division by zero error when I run the following python script. import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(10, 10)) freq = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.0] amp = [0.0, 0.0, 0.0, 0.0, 0.0, 0.012, 0.031, 0.074, 0.082, 0.084, 0.080, 0.078, 0.072, 0.059, 0.039, 0.019, 0.010] ax.semilogx(freq, amp, marker='s', color='purple') plt.xlim(0.1, 10) plt.xlabel('Frequency (Hz)') plt.ylabel('Crest Loss (ft)') ax.set(title='Fundamental Frequency') ax.grid() ax.grid(which="minor", color="0.9") def forward(x): return 1 / x def inverse(x): return 1 / x secax = ax.secondary_xaxis('top', functions=(forward, inverse)) secax.set_xlabel('Period (s)') plt.show() The plot seems to output correctly, but I don't know why I get a division by zero error when all the data plotted on the x axis is greater than zero.
|
I think it is just a "feature" of the secondary axis. In the third example in the docs here, they explicitly have a check to avoid divide by zero warnings: fig, ax = plt.subplots(layout='constrained') x = np.arange(0.02, 1, 0.02) np.random.seed(19680801) y = np.random.randn(len(x)) ** 2 ax.loglog(x, y) ax.set_xlabel('f [Hz]') ax.set_ylabel('PSD') ax.set_title('Random spectrum') def one_over(x): """Vectorized 1/x, treating x==0 manually""" x = np.array(x, float) near_zero = np.isclose(x, 0) x[near_zero] = np.inf x[~near_zero] = 1 / x[~near_zero] return x # the function "1/x" is its own inverse inverse = one_over secax = ax.secondary_xaxis('top', functions=(one_over, inverse)) secax.set_xlabel('period [s]') plt.show() Also, see the code example on the secondary_axis docs page here where the same thing is done.
| 1 | 1 |
79,673,208 |
2025-6-20
|
https://stackoverflow.com/questions/79673208/dynamically-added-class-attributes-with-getter-setter-methods-in-a-loop-in-pytho
|
I'm trying to dynamically assign class attributes to an existing custom class. These attributes have individual getter and setter methods defined in a loop across all attributes to be created and are assigned with setattr() within the loop. Please see code below. If I use setattr(), the newly created attributes point towards the last attribute that was created in the list. Whereas if I use setattr() within an eval() statement, it works correctly, i.e. each attribute pointing towards its own value. I think I kinda understand why - something about the scope of the functions and setattr() in the loop - but I can't really formulate it into words. Can someone smarter than me please give a formal explanation about what is going on and if there is a cleaner solution than using eval()? for attr in attr_list: def getter(self): return getattr(self, attr) def setter(self, value): setattr(self, attr, value) # DOES NOT WORK setattr(self.__class__, attr, property(getter, setter)) # WORKS eval(f'setattr(self.__class__, \'{attr}\', property(lambda self: getattr(self, \'{attr}\'), lambda self, value: setattr(self, {attr}, value)))')
|
You create different getters and setters at each iteration but they all use the attr variable from the outer scope; this means that when you access a getter/setter then it uses the current value of the attr variable, which will be the final value from the loop. You need to have attr in a different scope for each getter and setter so that each refers to the corresponding attr value. Note: you also want the getter/setter to refer to a different attribute or you will enter an infinite loop. I.e. _attr. def create_property(attr): private_attr = f"_{attr}" def getter(self): return getattr(self, private_attr) def setter(self, value): setattr(self, private_attr, value) return property(getter, setter) attr_list = ["a", "b", "c"] class Test: pass for attr in attr_list: setattr(Test, attr, create_property(attr)) test1 = Test() test2 = Test() test1.a = 42 test1.b = 17 test2.a = 96 test2.b = 123 print(test1.a, test1.b, test1._a, test1._b) print(test2.a, test2.b, test2._a, test2._b) Outputs: 42 17 42 17 96 123 96 123 fiddle
| 1 | 2 |
79,674,833 |
2025-6-22
|
https://stackoverflow.com/questions/79674833/scipy-optimizewarning-covariance-of-the-parameters-could-not-be-estimated-when
|
I'm trying to plot some data with a non-linear fit using the function: kB and Bv being a constant while J'' is the independent variable and T is a parameter. I tried to do this in Python: #Constants kb = 1.380649e-23 B = 0.3808 #Fit function def func(J, T): return (B*(2*J+1) / kb*T * np.exp(-B*J*(J+1)/kb*T)) popt, pcov = optimize.curve_fit(func, x, y) print(popt) plt.plot(x, y, "o") plt.plot(x, func(j, popt[0])) But this results in the warning "OptimizeWarning: Covariance of the parameters could not be estimated warnings.warn('Covariance of the parameters could not be estimated'" and the parameter turns out to be 1. This is the data: x = [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48] y = [0.185,0.375,0.548,0.695,0.849,0.931,0.996,0.992,1.0,0.977,0.927,0.834,0.691,0.68,0.575,0.479,0.421,0.351,0.259,0.208,0.162,0.135,0.093,0.066]
|
Why? You have three problems in your model: Priority of operations are not respected, as pointed by @jared, you must add parenthesis around the term kb * T; Target dataset is normalized to unity where it is its area which is unitary (it is a distribution). therefore such normalization prevents the model to converge. Adding the missing parameter N to the model allows convergence; Scale issue and/or units issue, your Boltzmann constant is expressed in J/K but we don't have any clue about the units of B. Recall the term inside the exponential must be dimensionless. Scaling constant to eV/K seems to provide credible temperature. MCVE Bellow a complete MCVE fitting your data: import numpy as np import matplotlib.pyplot as plt from scipy import optimize, integrate from sklearn import metrics #kb = 1.380649e-23 # J/K kb = 8.617333262e-5 # eV/K B = 0.3808 # ? def model(J, T, N): return N * (B * (2 * J + 1) / (kb * T) * np.exp(- B * J * (J + 1)/ (kb * T))) J = np.array([2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48]) N = np.array([0.185,0.375,0.548,0.695,0.849,0.931,0.996,0.992,1.0,0.977,0.927,0.834,0.691,0.68,0.575,0.479,0.421,0.351,0.259,0.208,0.162,0.135,0.093,0.066]) popt, pcov = optimize.curve_fit(model, J, N, p0=[1e5, 1]) #(array([2.51661517e+06, 2.75770698e+01]), # array([[1.17518427e+09, 3.25544771e+03], # [3.25544771e+03, 6.04463335e-02]])) np.sqrt(np.diag(pcov)) # array([3.42809607e+04, 2.45858361e-01]) Nhat = model(J, *popt) metrics.r2_score(N, Nhat) # 0.9940231921241076 Jlin = np.linspace(J.min(), J.max(), 200) Nlin = model(Jlin, *popt) fig, axe = plt.subplots() axe.scatter(J, N) axe.plot(Jlin, Nlin) axe.set_xlabel("$J''$") axe.set_ylabel("$N(J'')$") axe.grid() Regressed values are about T=2.51e6 ± 0.03e6 and N=2.76e1 ± 0.02e1, both have two significant figures. The fit is reasonable (R^2 = 0.994) and looks like: If your Boltzmann constant is expressed in J/K, temperature T is about 1.6e+25 which seems a bit too much for such a variable. Check We can check the area under the curve is indeed unitary: Jlin = np.linspace(0, 100, 2000) Nlin = model(Jlin, *popt) I = integrate.trapezoid(Nlin / popt[-1], Jlin) # 0.9999992484190978
| 5 | 4 |
79,675,821 |
2025-6-23
|
https://stackoverflow.com/questions/79675821/force-python-sqlite3-to-report-non-existing-columns-in-where-clause
|
This is my code: import sqlite3 conn = sqlite3.connect(':memory:') cursor = conn.cursor() cursor.execute(''' CREATE TABLE schools ( county TEXT, statustype TEXT ) ''') cursor.execute(''' INSERT INTO schools VALUES ('some_county', 'some_status') ''') cursor.execute(''' SELECT COUNT(*) FROM schools WHERE county = 'Alpine' AND statustype IN ('Active', 'Closed') AND "School Type" = 'District Community Day Schools' ''') result = cursor.fetchone()[0] print(f"Count result: {result}") conn.close() Note that there is intentionally no 'School Type' column in the database schema in this example. The result is 0. Is possible to change some settings of SQLite3 or of the database in order to get an error about non-existing column instead?
|
SQLite has a quirk when it comes to delimiting identifiers. In ANSI SQL, double quoting a column name does what you expect, i.e. delimits the column name. However, in SQLite you get the following behaviour (from the docs): ... in an effort to be compatible with MySQL 3.x (which was one of the most widely used RDBMSes when SQLite was first being designed) SQLite will also interpret a double-quotes string as string literal if it does not match any valid identifier. This misfeature means that a misspelled double-quoted identifier will be interpreted as a string literal, rather than generating an error. It also lures developers who are new to the SQL language into the bad habit of using double-quoted string literals when they really need to learn to use the correct single-quoted string literal form. Therefore, because the column doesn't exist, you end up comparing strings to strings, e.g. "School Type" = 'District Community Day Schools', which is always false. Therefore you need to delimit the column name using either square brackets ([]) or backticks (``), e.g. you should use [column name here] or `column name here`, after which you'll get an error as such: sqlite3.OperationalError: no such column: Of course, in general it's actually better to not use spaces in the first place (e.g. school_type). Note: The same documentation goes on to say: As of SQLite 3.29.0 (2019-07-10) the use of double-quoted string literals can be disabled at run-time... Which would seem to be a good practice in order to avoid this sort of confusion.
| 1 | 1 |
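Applied to the question's snippet, the fix looks like this (a sketch; only the quoting of the missing column changes, and with brackets SQLite now raises OperationalError instead of silently comparing two string literals):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE schools (county TEXT, statustype TEXT)')
cursor.execute("INSERT INTO schools VALUES ('some_county', 'some_status')")

try:
    cursor.execute('''
        SELECT COUNT(*) FROM schools
        WHERE county = 'Alpine'
          AND statustype IN ('Active', 'Closed')
          AND [School Type] = 'District Community Day Schools'
    ''')
except sqlite3.OperationalError as e:
    print(e)   # no such column: School Type

conn.close()
```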
79,675,526 |
2025-6-23
|
https://stackoverflow.com/questions/79675526/how-to-edit-an-element-inside-of-a-tkinter-grid
|
I'm working on a project to design a chessboard. I have successfully created the board. My goal is to edit the board. My original idea is to use buttons with the Tkinter grid to make the moves work. I decided to make the board using: window = tk.Tk() # Creating the GUI and Display window.geometry("800x500") window.minsize(800, 500) #Locking the size of the Display window.maxsize(800, 500) #Adding the board SeenBoard = tk.Frame(window, width=300, height=300,padx=5,pady=5) #Adds padding to the grid SeenBoard.pack(side=tk.LEFT) #the left isnt needed yet, but will be later for row in range(8): # 8 Rows along for column in range(8): #8 Columns down tk.Button(SeenBoard,text=f"{row},{column}",width=5,height=3).grid(row=row, column=column) #Creates each button window.mainloop() #Idk what this really does? it just kinda works. My problem is that with this way of making the board, I cannot edit the buttons, nor locate them. I wish to be able to change the color of each button, to alternate between green and white, for that grid pattern feel. In order to do this I only can think of two ways at least, and that would be to somehow give each of the buttons an identifier beforehand, such as labeling them all as Grid1_1.button(...) My problem with doing it manually is that I want to keep the code concise and not repetitive, so anyway in which I could add lots of them at once would solve my problem. Second solution to my knowledge: Somehow having the ability with Tkinter to select a specific grid slot, such as 2,2. and then trying to modify it with that, I heard about grid slaves, but I'm too small brained to understand how it works. My request: I would like to know the solution or even the concept behind trying to alternate the color of the board in a pattern. any method Any solution that would allow me to access a button, and change its text/color.
|
To change the color of the checkerboard squares, one can make use of modulo 2. Here's a simple example showcasing that: import tkinter as tk COLOR_1 = "white" COLOR_2 = "green" for row in range(8): # 8 Rows along for column in range(8): #8 Columns down if ((row % 2) + column) % 2: color = COLOR_1 else: color = COLOR_2 tk.Button(SeenBoard,text=f"{row},{column}", width=5, height=3, bg=color).grid(row=row, column=column) The reason row % 2 is added to column is to achieve an alternating pattern. Consider a simple example of a 2x2 chessboard. row col row % 2 ( (row % 2) + col ) % 2 0 0 0 0 0 1 0 1 1 0 1 1 1 1 1 0 Which gives a pattern: +---+---+ | 0 | 1 | +---+---+ | 1 | 0 | +---+---+ To access button objects in the future, simply store them in a nested list, using your row and column numbers as indexes: # create a list to hold the buttons array = list() for row in range(8): # Create an array to store a row of buttons array.append(list()) for column in range(8): # Append each column to the corresponding row array[row].append(tk.Button(SeenBoard,text=f"{row},{column}", width=5, height=3, )) # Add it to the grid array[row][column].grid(row=row, column=column) # Modify by using rows & columns as indexes array[3][2]["text"] = "It works!" This will also prevent your buttons from being garbage collected, since you're holding a reference to them. Output: Notice how the button at (3,2) is modified to say It works!. As for the last question, the mainloop runs an event loop, which does two things: It prevents the program from exiting after the last line of code. It checks for events (such as a button press). (In reality, things might work differently, but this is what it generally does)
| 2 | 2 |
79,677,119 |
2025-6-24
|
https://stackoverflow.com/questions/79677119/type-hinting-callable-with-positional-keyword-only-arguments-without-typing-prot
|
I'm wondering if it is possible to type-hint a Callable with positional- / keyword-only argument without using typing.Protocol. For example in this snippet - how would I type-hint x correctly so that add is assignable to it: from typing import Callable def add(arg: int, /, *, other: str) -> str: return f"{arg}{other}" x: Callable[[int, str], str] = add # Type Error: # `(arg: int, /, *, other: str) -> str` is not assignable to `(int, str) -> str`
|
In short, no. A combination is (currently) not possible, as keyword-only parameters are not possible with Callable, which describes positional-only parameters - you need a Protocol for more specific typing. To quote the specs: Parameters specified using Callable are assumed to be positional-only. The Callable form provides no way to specify keyword-only parameters, variadic parameters, or default argument values. For these use cases, see the section on Callback protocols. A bit more on assignability: a function with standard parameters (keyword or positional) can be assigned to a function with any parameter type. The reverse does not hold; if you have a function that is keyword-only / positional-only, it can only be assigned to a matching type, i.e. in your case you need a Protocol that reflects these parameter types exactly. from typing import Callable, Protocol class Standard(Protocol): def __call__(self, a: int) -> None: ... class KwOnly(Protocol): def __call__(self, *, a: int) -> None: ... class PosOnly(Protocol): def __call__(self, a: int, /) -> None: ... CallableType = Callable[[int], None] def func(standard: Standard, kw_only: KwOnly, callable: CallableType, pos_only: PosOnly): # Standard assignable to all f1a: KwOnly = standard # OK f1b: CallableType = standard # OK f1c: PosOnly = standard # OK # Keyword-Only assignable to keyword-only f2a: Standard = kw_only # error f2b: CallableType = kw_only # error f2c: PosOnly = kw_only # error # CallableType and PosOnly are equivalent; only assignable to position-only/Callable f3a: Standard = callable # error f3b: KwOnly = callable # error f3c: PosOnly = callable # OK - as equivalent f4a: Standard = pos_only # error f4b: KwOnly = pos_only # error f4c: CallableType = pos_only # OK - as equivalent I am not aware of any plans to change or extend Callable in its behavior, e.g. to accept keyword-only parameters via TypedDicts.
| 3 | 1 |
79,676,972 |
2025-6-24
|
https://stackoverflow.com/questions/79676972/python-while-loop-is-executed-but-then-it-fails-to-execute-an-if-statement-afte
|
I am working on a simple 2 player dice rolling game and have found a problem that i haven't been able to fix, after a While loop, it refuses to execute an if statement def Main(): global player1_score, player2_score rounds_input = int(input('How many rounds do you want?: ')) i = rounds_input n = 0 while n < i: gameFunc() n += 1 else: if player1_score > player1_score: print("Player 1 Wins Game", count) elif player1_score < player1_score: print("Player 2 Wins Game", count) The code is meant to take users input and execute gameFunc() that many times, once this has been completed, I want it to check who won the most and print the corresponding Player Wins Game x. Where x is how many times game is read in a text file called leaderboard.txt + 1 def gameFunc(): global player1_score, player2_score player1_roll = random.randint(0, 20) player2_roll = random.randint(0, 20) print("player 1 rolled", player1_roll) print("player 2 rolled", player2_roll) if player1_roll > player2_roll: print("Player 1 Wins!!" + "\n") player1_score += 1 elif player1_roll < player2_roll: print("Player 2 Wins!!" + "\n") player2_score += 1 else: print("It's a Tie!!" + "\n") The entire code is as follows: import random word = "game" count = 0 game = count + 1 player1_score = 0 player2_score = 0 with open("leaderboard.txt", 'r') as f: for line in f: words = line.split() for i in words: if(i==word): count=count+1 print("Occurrences of the word", word,"is", count) print("Game number ", game) def gameFunc(): global player1_score, player2_score player1_roll = random.randint(0, 20) player2_roll = random.randint(0, 20) print("player 1 rolled", player1_roll) print("player 2 rolled", player2_roll) if player1_roll > player2_roll: print("Player 1 Wins!!" + "\n") player1_score += 1 elif player1_roll < player2_roll: print("Player 2 Wins!!" + "\n") player2_score += 1 else: print("It's a Tie!!" + "\n") def Main(): global player1_score, player2_score rounds_input = int(input('How many rounds do you want?: ')) i = rounds_input n = 0 while n < i: gameFunc() n += 1 else: if player1_score > player1_score: print("Player 1 Wins Game", count) elif player1_score < player1_score: print("Player 2 Wins Game", count) def Start(): print("Do you want to begin?") beginPrompt = input('Type START to begin: ').lower() if beginPrompt == 'start': print("\n") Main() else: print("\n" + "Try Again" + "\n") Start() Start()
|
In your code, in the logic of the final if statement in the Main() function: if player1_score > player1_score: This line always evaluates to False because you're comparing player1_score to itself, not to player2_score. Just like this one: elif player1_score < player1_score: It should be something like this: if player1_score > player2_score: print("Player 1 Wins Game", count + 1) # You want something "count + 1" here elif player1_score < player2_score: print("Player 2 Wins Game", count + 1) else: print("It's a Draw!") Also, you're printing Game, but you're using the value count, which is how many times "game" appeared in the leaderboard file. You set: game = count + 1 But then you never use game, only count, which is the number of previous games. So you probably want to show the next game number as count + 1. Your def Main(): should look like this: def Main(): global player1_score, player2_score rounds_input = int(input('How many rounds do you want?: ')) i = rounds_input n = 0 while n < i: gameFunc() n += 1 else: if player1_score > player2_score: print("Player 1 Wins Game", count + 1) elif player1_score < player2_score: print("Player 2 Wins Game", count + 1) else: print("It's a Draw!")
| 1 | 2 |
79,678,764 |
2025-6-25
|
https://stackoverflow.com/questions/79678764/how-can-i-vectorize-a-function-that-returns-eigenvalues-and-eigenvectors-of-a-ma
|
I'm working with a function in Python that constructs a 4×4 matrix based on inputs (x1, y1, x2, y2), and computes its eigenvalues and eigenvectors using np.linalg.eigh. Here is a simplified version of my code: import numpy as np def f(kx, ky): return kx + 1j * ky def fs(kx, ky): return np.conj(f(kx, ky)) def eig(x1, y1, x2, y2): a = 10 x = x1 + x2 y = y1 + y2 H = np.array([ [a, f(x, y), f(x, y), fs(x, y)], [fs(x, y), a, 0, f(x, y)], [fs(x, y), 0, -a, f(x, y)], [f(x, y), fs(x, y), fs(x, y), -a] ]) Eval, Evec = np.linalg.eigh(H) sorted_indices = np.argsort(Eval) return Eval[sorted_indices], Evec[:, sorted_indices] Now, I have 1-d arrays of input values: x1_array, y1_array, x2_array, y2_array # all same shape I want to efficiently vectorize this function across those arrays — i.e., compute all eigenvalues/eigenvectors in a batched way without writing an explicit Python loop if possible.
|
It is a bit of stacking, but you can create a matrix that is your batch size times 4x4 and pass it to np.linalg.eigh. Also note the slight optimization avoiding multiple evaluations of f(x, y) and fs(x, y). def eig_vectorized(x1_array, y1_array, x2_array, y2_array): a = 10 x_array = x1_array + x2_array y_array = y1_array + y2_array f_xy = f(x_array, y_array) # Optimization, you don't want to recompute fs_xy = fs(x_array, y_array) # these two again and again # Create H as an array-sizex4x4 matrix H = np.stack([ np.stack([np.full_like(f_xy, a), f_xy, f_xy, fs_xy], axis=-1), np.stack([fs_xy, np.full_like(f_xy, a), np.zeros_like(f_xy), f_xy], axis=-1), np.stack([fs_xy, np.zeros_like(f_xy), -np.full_like(f_xy, a), f_xy], axis=-1), np.stack([f_xy, fs_xy, fs_xy, -np.full_like(f_xy, a)], axis=-1) ], axis=-2) Evals, Evecs = np.linalg.eigh(H) # Compute eigenvalues and -vectors # Sort eigenvalues and eigenvectors sorted_indices = np.argsort(Evals, axis=-1) Evals = np.take_along_axis(Evals, sorted_indices, axis=-1) Evecs = np.take_along_axis(Evecs, sorted_indices[..., None], axis=-1) return Evals, Evecs
| 1 | 1 |
79,680,007 |
2025-6-26
|
https://stackoverflow.com/questions/79680007/is-this-a-bug-in-numpy-histogram-function
|
This Python 3.12.7 script with numpy 2.2.4: import numpy as np a = np.random.randint(0, 256, (500, 500)).astype(np.uint8) counts, bins = np.histogram(a, range(0, 255, 25)) print(np.column_stack((counts, bins[:-1], bins[1:]))) counts, bins = np.histogram(a, range(0, 257, 16)) print(np.column_stack((counts, bins[:-1], bins[1:]))) produces this kind of output: [[24721 0 25] [24287 25 50] [24413 50 75] [24441 75 100] [24664 100 125] [24390 125 150] [24488 150 175] [24355 175 200] [24167 200 225] [25282 225 250]] [[15800 0 16] [15691 16 32] [15640 32 48] [15514 48 64] [15732 64 80] [15506 80 96] [15823 96 112] [15724 112 128] [15629 128 144] [15681 144 160] [15661 160 176] [15558 176 192] [15526 192 208] [15469 208 224] [15772 224 240] [15274 240 256]] where the first histogram always has the highest count in bin [225, 250). The second histogram indicates a uniform distribution, as expected. I tried a dozen of times and the anomaly was always there. Can someone explain this behavior?
|
I think the docs explain pretty well what's happening, but are spread out in two different places. First, the range range(0, 255, 25) is supplying the bins parameter, not the range parameter. Secondly, the Notes section states: All but the last (righthand-most) bin is half-open. In other words, if bins is: [1, 2, 3, 4] then the first bin is [1,2) (including 1, but excluding 2) and the second [2,3). The last bin, however, is [3,4], which includes 4. Pretty sure the extra counts in your case are the number of elements that equal 250. This makes sense, since the increase is about 1/25th of the bin size compared to the other bins, which all have a width of 25.
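A quick way to confirm this (a sketch that regenerates a random array like the one in the question, so exact numbers will differ):

import numpy as np

a = np.random.randint(0, 256, (500, 500)).astype(np.uint8)
counts, bins = np.histogram(a, range(0, 255, 25))
# bins run 0, 25, ..., 250; the last bin [225, 250] is closed on the right,
# so it also counts every element equal to 250
print(counts[-1] - np.count_nonzero(a == 250))  # now comparable to the other bins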
| 2 | 3 |
79,680,409 |
2025-6-26
|
https://stackoverflow.com/questions/79680409/cant-get-frequency-data-right-for-seaborn-histogram
|
I have a list containing thousands of survey answers ranging from 'A' to 'I', with the addition of None for blanks: Raw data list example: [None, 'A', 'G', None, None, 'I', 'B', ...<snip>... , 'C'] I would like to produce a Seaborn histogram showing the frequency of each response (in alphabetical order), something similar to this Excel chart: Desired outcome (Excel version) (Numbers are different here than in test data below) The closest I get is this, with all columns incorrectly equal to 1: I tried histplot, catplot and displot as per some random tutorials online. I am unsure of how to process the list of raw values into the right format for Seaborn. I have followed many different tutorials doing things like sns.histplot(data=df, x="Frequency") but can't get the dataframe right before this step. Main method I tried for frequency (from some tutorials): df = pd.DataFrame.from_dict(Counter(values), orient='index').reset_index() Any suggestions will be humbly received. Much appreciation in advance. Minimum reproducible example: import matplotlib.pyplot as plt from collections import Counter import seaborn as sns import pandas as pd # Test data - real data is much larger and extracted as a list from an SQLite3 DB. values = ['E', None, 'B', 'H', 'I', 'A', None, None, 'D', 'C', 'E', 'I', 'C', 'B', None, 'A', None, 'E', 'F', 'H', 'A', 'D', 'A', 'A', 'F', 'A', 'C', 'C', 'H', 'E', None, 'B', 'E', 'I', 'G', 'A', 'I', 'A', 'B', 'I'] sns.set_theme(style="darkgrid") df = pd.DataFrame.from_dict(Counter(values), orient='index').reset_index() df.columns = ['Choice', 'Frequency'] sns.histplot(data=df, x='Choice') plt.show()
|
Since you have pre-aggregated data, you should use a barplot, not a histplot that is designed to perform the aggregation. For the alphabetical order, use sort_index: df = (pd.DataFrame.from_dict(Counter(values), orient='index') .sort_index().reset_index() ) df.columns = ['Choice', 'Frequency'] sns.barplot(data=df, x='Choice', y='Frequency') Output: Note that you do not need to use a Counter/DataFrame, you could directly go with a countplot: sns.countplot(values) Output:
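If you go the countplot route and still want alphabetical order with the blanks dropped, the order parameter can handle it (a sketch reusing the values list from the question):

import seaborn as sns

answers = [v for v in values if v is not None]        # drop the blank responses
sns.countplot(x=answers, order=sorted(set(answers)))  # bars in alphabetical order A..I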
| 2 | 0 |
79,680,212 |
2025-6-26
|
https://stackoverflow.com/questions/79680212/using-python-pptx-how-can-i-display-the-in-my-chart-data
|
I would like to create a chart on a powerpoint slide with pourcent datas , python doesn't let my directly put the % symbol on my data so i had to only put the datas , but i want to show the % symbol on the chart and also on the axes So heres my code : if slide: for shape in slide.shapes: if shape.shape_type == MSO_SHAPE_TYPE.CHART: chart = shape.chart if result and moisDate and nameIntake : chart_data = CategoryChartData() chart_data.categories = nameIntake for moisSingleDate in moisDate: chart_data.add_series( str(moisSingleDate) , values= ( result[moisSingleDate]) ,number_format= '0%') chart.replace_data(chart_data) print(result) but unfortunatly my datas gets multiplied by 100... instead of displaying 24.93% i get 2493% does anyone knows how to do it please ?
|
You have to divide by 100 in-place. Percentage formatting expects decimal values normalized to 0.0..1.0 as 0..100%. for moisSingleDate in moisDate: # Divide values by 100 to correctly format as percentages percentage_values = [value / 100 for value in result[moisSingleDate]] chart_data.add_series(str(moisSingleDate), values=percentage_values, number_format='0%')
| 1 | 2 |
79,681,704 |
2025-6-27
|
https://stackoverflow.com/questions/79681704/i-have-an-np-array-of-a-number-single-entry-lists-and-i-want-to-add-1-to-each-si
|
I have created the following array, called X: array([[6.575], [6.421], [7.185], [6.998], [6.43 ], [6.012], [6.172], [5.631], [6.004], [6.377], [6.03 ]]) and I would like to create array([[6.575, 1], [6.421, 1], [7.185, 1], [6.998, 1], [6.43, 1], [6.012, 1], [6.172, 1], [5.631, 1], [6.004, 1], [6.377, 1], [6.03, 1]]) I have tried X = np.array( [ [value,1] for value in X ] ) but I get the error: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (506, 2) + inhomogeneous part. Aside from ignorance I am not sure where I am going wrong. Any ideas?
|
Do not use list comprehensions/loops to work with numpy arrays. You should expand the scalar with broadcast_to and combine it with X using hstack: out = np.hstack([X, np.broadcast_to(1, X.shape)]) Output: array([[6.575, 1. ], [6.421, 1. ], [7.185, 1. ], [6.998, 1. ], [6.43 , 1. ], [6.012, 1. ], [6.172, 1. ], [5.631, 1. ], [6.004, 1. ], [6.377, 1. ], [6.03 , 1. ]]) Now, just to answer the source of your error, this is because your intermediate list comprehension generates items of the form [array([6.575]), 1] instead of [6.575, 1]. The correct approach should have been: np.array([[value, 1] for value in X[:, 0]]) # or np.array([[*value, 1] for value in X]) But, again, do not do this in numpy.
| 4 | 4 |
79,682,876 |
2025-6-28
|
https://stackoverflow.com/questions/79682876/how-to-create-same-ticks-and-labels-on-both-axes
|
I have to create a plot on which ticks and labels are specifically defined; an example of reproducible plot is given below: import matplotlib.pyplot as plt import seaborn as sns plt.style.use('seaborn-v0_8') fig, ax = plt.subplots(figsize=(4, 4)) ticks = [0.00, 0.25, 0.50, 0.75, 1.00] ax.set_xticks(ticks) ax.set_xticklabels(ticks, weight='bold', size=8) ax.set_yticks(ticks) ax.set_yticklabels(ticks, weight='bold', size=8) plt.show() As ticks and labels are exactly the same of both axes, is there a way to set them as a single command ? Something mixing both xticks and yticks ?
|
You can use a function. import matplotlib.pyplot as plt plt.style.use('seaborn-v0_8') def set_ticks_and_labels(ax, t, **kwargs): ax.set_xticks(t) ax.set_xticklabels(t, **kwargs) ax.set_yticks(t) ax.set_yticklabels(t, **kwargs) fig, ax = plt.subplots(figsize=(4, 4)) ticks = [0.00, 0.25, 0.50, 0.75, 1.00] set_ticks_and_labels(ax, ticks, weight='bold', size=8) plt.show()
| 1 | 1 |
79,684,430 |
2025-6-30
|
https://stackoverflow.com/questions/79684430/why-does-isort-order-the-imports-differently-when-renaming-a-directory-from-te
|
In the example below, everything is already processed using isort . (version 6.0.1): tree . . ├── a │ └── test │ ├── __init__.py │ └── test_integration.py └── b └── tests ├── __init__.py └── test_integration.py cat ./a/test/test_integration.py from test.asdasd import hello import numpy as np if np.random.random() < 42: hello() cat ./b/tests/test_integration.py import numpy as np from tests.asdasd import hello if np.random.random() < 42: hello() Why does isort put from test.asdasd import hello (in a) above import numpy as np, while it puts from tests.asdasd import hello (in b) below import numpy as np?
|
The module name test clashes with the standard library module test. isort will, by default, order with the following priorities ('FUTURE', 'STDLIB', 'THIRDPARTY', 'FIRSTPARTY', 'LOCALFOLDER') As test is within isort's list of known standard library names, it gets prioritised in the ordering, whereas tests doesn't. You can force isort to see test as, e.g., a local folder, and sort things consistently by using a config file with the known_local_folder set appropriately. E.g., using a .isort.cfg configuration file with: [settings] known_local_folder=test or in a pyproject.toml file with: [tool.isort] known_local_folder = ["test"]
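For illustration, with that setting in place the file in a/ should come out sorted the same way as the one in b/ (a sketch of the expected result, not verified against every isort version):

# a/test/test_integration.py after running `isort .` with known_local_folder=test
import numpy as np

from test.asdasd import hello

if np.random.random() < 42:
    hello()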
| 1 | 2 |
79,684,326 |
2025-6-30
|
https://stackoverflow.com/questions/79684326/embedding-python-in-multi-threaded-c-environment
|
python 3.13.5 windows 10 msvc This is how initialize python in main. #include <iostream> #include <taskflow/taskflow.hpp> // Taskflow is header-only #define PY_SSIZE_T_CLEAN #include <Python.h> int main(int argc, char* argv[]) { wchar_t pythonHome[] = L".venv"; Py_SetPythonHome(pythonHome); Py_Initialize(); PyEval_InitThreads(); if (!Py_IsInitialized()) { std::cerr << "Python failed to initialize\n"; return 1; } // Set up Python paths PyRun_SimpleString( "import sys\n" "sys.path.insert(0, '.venv/Lib')\n" "sys.path.insert(0, '.venv/Lib/site-packages')\n" ); // Test script execution PyRun_SimpleString( "from time import time, ctime\n" "print('Today is', ctime(time()))\n" ); PyObject* main = PyImport_AddModule("__main__"); PyObject* globals = PyModule_GetDict(main); tf::Executor executor; tf::Taskflow taskflow; auto [A, B, C, D] = taskflow.emplace( // create four tasks [] () { std::cout << "TaskA\n"; PyGILState_STATE gstate = PyGILState_Ensure();PyGILState_Release(gstate);}, [] () { std::cout << "TaskB\n"; }, [] () { std::cout << "TaskC\n"; }, [] () { std::cout << "TaskD\n"; } ); A.precede(B, C); // A runs before B and C D.succeed(B, C); // D runs after B and C executor.run(taskflow).wait(); if (Py_FinalizeEx() < 0) { return 120; } return 0; } Python is getting properly initialized. It hangs at the execution of TaskA. You can't even close the app by pressing Crtl+C. This is pretty much explains my requirement to execute the python code inside tasks.
|
That's because each time you acquire the GIL you must release it. The main thread starts by acquiring the GIL once, so you must reset the initial GIL count in the main thread by calling PyEval_SaveThread there. Py_Initialize(); PyThreadState* tstate = PyEval_SaveThread(); // release the initial GIL // before running python code auto gil = PyGILState_Ensure(); // ... running Python code PyGILState_Release(gil); It should look as follows in your code: #include <iostream> #include <taskflow/taskflow.hpp> // Taskflow is header-only #define PY_SSIZE_T_CLEAN #include <Python.h> int main(int argc, char* argv[]) { wchar_t pythonHome[] = L".venv"; Py_SetPythonHome(pythonHome); Py_Initialize(); PyEval_InitThreads(); // empty body since python 3.9 if (!Py_IsInitialized()) { std::cerr << "Python failed to initialize\n"; return 1; } // Set up Python paths PyRun_SimpleString( "import sys\n" "sys.path.insert(0, '.venv/Lib')\n" "sys.path.insert(0, '.venv/Lib/site-packages')\n" ); // Test script execution PyRun_SimpleString( "from time import time, ctime\n" "print('Today is', ctime(time()))\n" ); PyObject* main = PyImport_AddModule("__main__"); PyObject* globals = PyModule_GetDict(main); PyThreadState* tstate = PyEval_SaveThread(); // release the initial GIL tf::Executor executor; tf::Taskflow taskflow; auto [A, B, C, D] = taskflow.emplace( // create four tasks [] () { std::cout << "TaskA\n"; PyGILState_STATE gstate = PyGILState_Ensure();PyGILState_Release(gstate);}, [] () { std::cout << "TaskB\n"; }, [] () { std::cout << "TaskC\n"; }, [] () { std::cout << "TaskD\n"; } ); A.precede(B, C); // A runs before B and C D.succeed(B, C); // D runs after B and C executor.run(taskflow).wait(); PyEval_RestoreThread(tstate); // restore the GIL before calling finalize if (Py_FinalizeEx() < 0) { return 120; } return 0; } There's also Py_BEGIN_ALLOW_THREADS, which does the same thing if you are sure that you will eventually re-acquire the GIL on the main thread (it just calls PyEval_SaveThread under the hood with a scope block)
| 2 | 3 |
79,685,618 |
2025-7-1
|
https://stackoverflow.com/questions/79685618/checking-for-missing-signatures-in-uploaded-pdf-using-python
|
I am new to Python and I want to check whether the uploaded PDF file contains all the required signatures. If any are missing, it should return "Signature missing." I’ve tried reading the file, but I’m not sure how to implement this check. try: images = convert_from_path(pdf_path, dpi=200) print(f"[DEBUG] PDF has {len(images)} pages") if len(images) >= 1: target_page_index = 0 # TEMP: Try first page (adjust as needed) page_image = images[target_page_index] page_image.save("debug_page0.png") # Debug test crop test_box = (100, 100, 300, 200) crop_test = page_image.crop(test_box) crop_test.save("debug_crop_test.png") # Real signature areas — update these after checking debug image inspector_sig_area = (510, 590, 650, 630) reviewer_sig_area = (510, 640, 650, 680) inspector_crop = page_image.crop(inspector_sig_area) reviewer_crop = page_image.crop(reviewer_sig_area) inspector_crop.save("debug_inspector_box.png") reviewer_crop.save("debug_reviewer_box.png") fields["Inspector Signed"] = is_signature_box_filled(inspector_crop) fields["Reviewer Signed"] = is_signature_box_filled(reviewer_crop) else: print("[!] No pages found in PDF!") except Exception as e: print(f"[!] Signature box check failed: {e}")
|
As you already have your signature boxes cropped from the image, it boils down to checking for any content. Here are two variants which can do the job. Both assume cropped_image to be a PIL.Image and numpy to be imported as np. The first method checks for whiteness of the box. You can adapt the white_threshold for pixels considered to be white. You can also look for a minimum_count of non-white pixels, as any signature should have more than one pixel set. import numpy as np def is_signature_box_filled(cropped_image, minimum_count=0, white_threshold=220): """ Check if the signature box is filled with a signature (not all white). """ pixel_data = np.array(cropped_image.convert("L")) # Count whether there are pixels that are non-white -> signature present return np.sum(pixel_data < white_threshold) > minimum_count The second method checks the uniformity of the box. Any signature with large enough contrast would lead to a strong non-uniformity. You can control the minimum desired contrast by setting the uniformity_threshold. def is_signature_box_filled(cropped_image, uniformity_threshold=30): """ Check if the signature box is filled with a signature (not all uniform). """ pixel_data = np.array(cropped_image.convert("L")) # Check if the pixel values are non-uniform -> signature present return (np.max(pixel_data) - np.min(pixel_data) > uniformity_threshold)
| 1 | 2 |
79,687,250 |
2025-7-2
|
https://stackoverflow.com/questions/79687250/import-module-error-in-pythons-google-generative-ai
|
from google.generativeai import error The above import from Google's generative AI Python module is not working as expected: the error module is not found. But the import below was working perfectly in a previous version of my code. I want to know: is it deprecated, or am I using the import module wrongly? "from google.genai import error"
|
google.generativeai = Old deprecated SDK google.genai = New unified SDK https://ai.google.dev/gemini-api/docs/migrate pip uninstall google-generativeai Then pip install google-genai You can check available modules like so. from google import genai print("Available in main genai module:") print([item for item in dir(genai) if not item.startswith('_')]) It will give something like: ['Client', 'batches', 'caches', 'chats', 'client', 'errors', 'files', 'live', 'live_music', 'models', 'operations', 'pagers', 'tokens', 'tunings', 'types', 'version'] So correct import should be from google.genai import errors
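A minimal usage sketch with the new SDK (assumes an API key in the GEMINI_API_KEY environment variable; the model name is only an example):

from google import genai
from google.genai import errors

client = genai.Client()  # picks up GEMINI_API_KEY from the environment
try:
    response = client.models.generate_content(
        model="gemini-2.0-flash", contents="Say hello"
    )
    print(response.text)
except errors.APIError as exc:
    print("Request failed:", exc)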
| 1 | 1 |
79,687,024 |
2025-7-2
|
https://stackoverflow.com/questions/79687024/neural-network-built-from-scratch-using-numpy-isnt-learning
|
I'm building a neural network from scratch using only Python and numpy, It's meant for classifying the MNIST data set, I got everything to work but the network isn't really learning, at epoch 0 it's accuracy is about 12% after 20 epochs, it increases to 14% but then gradually drops to back to around 12% after 40 epochs. So, it's clear that there's something wrong with my Backpropagation (And yes, I tried increasing epochs to 150 but I still get the same results). I actually followed this video, But I handled dimensions in a different way, which lead to the code being different, He made it so that the rows are the features while the columns are the samples, But I did the opposite, So while backpropagating I had to transpose some arrays to make his algorithm compatible (I think this might be the reason why it's not working). Loading the data: (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255, x_test / 255 x_train, x_test = x_train.reshape(len(x_train), 28 * 28), x_test.reshape(len(x_test), 28 * 28) print(x_train.shape) # (60000, 784) print(x_test.shape) # (10000, 784) Here's the meat of the model: W1 = np.random.randn(784, 10) b1 = np.random.randn(10) W2 = np.random.randn(10, 10) b2 = np.random.randn(10) def relu(x, dir=False): if dir: return x > 0 return np.maximum(x, 0) def softmax(x): e_x = np.exp(x - np.max(x)) return e_x / e_x.sum(axis=1, keepdims = True) def one_hot_encode(y): y_hot = np.zeros(shape=(len(y), 10)) for i in range(len(y)): y_hot[i][y[i]] = 1 return y_hot def loss_function(predictions, true): return predictions - true def predict(x): Z1 = x.dot(W1) + b1 A1 = relu(Z1) Z2 = A1.dot(W2) + b2 A2 = softmax(Z2) # The final prediction is A2 at index 3 or -1: return Z1, A1, Z2, A2 def get_accuracy(predictions, Y): guesses = predictions.argmax(axis=1) average = 0 i = 0 while i < len(guesses): if guesses[i] == Y[i]: average += 1 i += 1 percent = (average / len(guesses)) * 100 return percent def train(data, labels, epochs=40, learning_rate=0.1): for i in range(epochs): labels_one_hot = one_hot_encode(labels) # Forward: m = len(labels_one_hot) Z1, A1, Z2, A2 = predict(data) # I think the error is in this chunk: # backwards pass: dZ2 = A2 - labels_one_hot dW2 = 1 / m * dZ2.T.dot(A1) db2 = 1 / m * np.sum(dZ2, axis=1) dZ1 = W2.dot(dZ2.T).T * relu(Z1, dir=True) dW1 = 1 / m * dZ1.T.dot(data) db1 = 1 / m * np.sum(dZ1) # Update parameters: update(learning_rate, dW1, db1, dW2, db2) print("Iteration: ", i + 1) predictions = predict(data)[-1] # item at -1 is the final prediction. print(get_accuracy(predictions, labels)) def update(learning_rate, dW1, db1, dW2, db2): global W1, b1, W2, b2 W1 = W1 - learning_rate * dW1.T # I have to transpose it here. b1 = b1 - learning_rate * db1 W2 = W2 - learning_rate * dW2 b2 = b2 - learning_rate * db2 train(x_train, y_train) predictions = predict(x_test)[-1] print(get_accuracy(predictions, y_test)) # The result is about 11.5% accuracy.
|
dW*/db* just have the wrong axes. Because of that the two bias gradients end up with the wrong shape and your updates trash the weights every step, so the net hovers at chance (≈ 10 %). m = x.shape[0] # samples in a batch # ---------- forward ---------- Z1 = x @ W1 + b1 # (m,784)·(784,10) = (m,10) A1 = np.maximum(Z1, 0) Z2 = A1 @ W2 + b2 # (m,10) A2 = softmax(Z2) # (m,10) # ---------- backward ---------- dZ2 = A2 - y_hot # (m,10) dW2 = A1.T @ dZ2 / m # (10,10) db2 = dZ2.sum(0) / m # (10,) dZ1 = (dZ2 @ W2.T) * (Z1 > 0) # (m,10) dW1 = x.T @ dZ1 / m # (784,10) db1 = dZ1.sum(0) / m # (10,) # ---------- SGD step ---------- W2 -= lr * dW2; b2 -= lr * db2 W1 -= lr * dW1; b1 -= lr * db1 (Notice the .T is always on the left matrix in each product, so no extra transposes are needed in the update.) A numerically-safe soft-max helps too: def softmax(z): z = z - z.max(1, keepdims=True) e = np.exp(z) return e / e.sum(1, keepdims=True) With these fixes (plus e.g. He initialisation and a smaller learning rate like 0.01) the same two-layer net reaches ~93 % on MNIST in 15–20 epochs.
| 2 | 4 |
79,688,514 |
2025-7-3
|
https://stackoverflow.com/questions/79688514/why-are-type-checkers-fine-with-covariant-method-parameters-when-they-are-in-a-u
|
Method parameters should be contravariant, hence defining a covariant generic should raise a type error. However when using a covariant generic in a union pyright, mypy and pyre-check all do not report an error on the following code: from typing import TypeVar, Generic, Any T_co = TypeVar("T_co", covariant=True) class Foo(Generic[T_co]): def a(self, arg: T_co) -> Any: # Error, like expected as T_co is covariant. ... def b(self, arg: int | T_co) -> Any: # No error, expected one ... As all these type-checkers do not raise an error I wonder is this actually fine, shouldn't this also break type-safety? If it is fine can you explain to why, what differs from a pure covariant implementation that is definitely not safe, shouldn't I be able to break it as well? Or if its not safe, is there an explanation why all type-checkers have the same gap here?
|
It appears to be a missing feature. In Mypy's case, there's a comment about this: elif isinstance(arg_type, TypeVarType): # Refuse covariant parameter type variables # TODO: check recursively for inner type variables if ( arg_type.variance == COVARIANT and defn.name not in ("__init__", "__new__", "__post_init__") and not is_private(defn.name) # private methods are not inherited ): ... self.fail(message_registry.FUNCTION_PARAMETER_CANNOT_BE_COVARIANT, ctx) It was added in 2016 and has been there ever since. Pyright has a similar check, which originally prevented invalid nested types as well: if (this._containsCovariantTypeVar(paramType, node.id, diag)) { this._evaluator.addDiagnostic( this._fileInfo.diagnosticRuleSet.reportGeneralTypeIssues, // ... ); } This check was then changed a month later, to: if (isTypeVar(paramType) && paramType.details.isCovariant && !paramType.details.isSynthesized) { this._evaluator.addDiagnostic( this._fileInfo.diagnosticRuleSet.reportGeneralTypeIssues, // ... ); } The commit message specifically says that such unions would be allowed, but does not give a reason: Made the check less strict for the use of covariant type vars within a function input parameter annotation. In particular, unions that contain covariant type vars are now permitted.
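For completeness, the union case is just as unsound as the plain covariant parameter, which is why one would expect the same error to be reported; a small sketch (class names are illustrative) of how it can go wrong at runtime:

from typing import Any, Generic, TypeVar

T_co = TypeVar("T_co", covariant=True)

class Foo(Generic[T_co]):
    def b(self, arg: int | T_co) -> Any: ...

class IntFoo(Foo[int]):
    def b(self, arg: int) -> Any:
        return arg + 1          # fine for Foo[int], where int | T_co is just int

foo: Foo[object] = IntFoo()     # accepted: covariance makes Foo[int] assignable to Foo[object]
foo.b("boom")                   # statically fine (int | object), TypeError at runtime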
| 2 | 3 |
79,688,933 |
2025-7-3
|
https://stackoverflow.com/questions/79688933/how-can-i-create-a-column-that-is-the-result-of-replacing-values-in-two-or-more
|
Consider the dataframe df: import pandas as pd df = pd.DataFrame({'Event1': ['Music', 'Something else 1', 'Theatre', 'Comedy'], 'Event2': ['Something else 2', 'Ballet', 'Something else 3', 'Something else 4'], 'Cost': [10000, 5000, 15000, 2000]}) print(df) Event1 Event2 Cost 0 Music Something else 2 10000 1 Something else 1 Ballet 5000 2 Theatre Something else 3 15000 3 Comedy 4 2000 I would like to map the values of the Event1 and Event2 to the values in the respective dictionaries: # Mapping tables dict1 = {'Music': 'M', 'Cooking': 'C', 'Theatre': 'T', 'Comedy': 'C'} dict2 = {'Ballet': 'B', 'Swimming': 'S'} And store the mappings in a common column because I know that per row, only the value of one column will be mapped. The end result would be: # desired outcome result = pd.DataFrame({'Event1': ['Music', 'Something else 1', 'Theatre', 'Comedy'], 'Event2': ['Something else 2', 'Ballet', 'Something else 3', '4'], 'Event': ['M', 'B', 'T', 'C'], 'Cost': [10000, 5000, 15000, 2000]}) print(result) Event1 Event2 Event Cost 0 Music Something else 2 M 10000 1 Something else 1 Ballet B 5000 2 Theatre Something else 3 T 15000 3 Comedy 4 C 2000 I can only do this in a messy and lengthy way and was hoping there is clean maybe idiomatic way of doing this. How would you advise doing it?
|
You can combine map and fillna: df['Event'] = df['Event1'].map(dict1).fillna(df['Event2'].map(dict2)) Output: Event1 Event2 Cost Event 0 Music Something else 2 10000 M 1 Something else 1 Ballet 5000 B 2 Theatre Something else 3 15000 T 3 Comedy Something else 4 2000 C If you imagine a more complex input with an arbitrary number of columns, you could generalize with a dictionary to pair the columns/dictionaries, then: dict3 = {'Comedy': 'C'} dicts = {'Event1': dict1, 'Event2': dict2, 'Event3': dict3} df['Event'] = (df[list(dicts)].apply(lambda s: s.map(dicts[s.name])) .bfill(axis=1).iloc[:, 0] ) Output: Event1 Event2 Event3 Cost Event 0 Music Something else 2 Something else 10000 M 1 Something else 1 Ballet Something else 5000 B 2 Theatre Something else 3 Something else 15000 T 3 Something else Something else 4 Comedy 2000 C The above approaches suppose that the mappings are per column, if you want global mappings, you can merge the dictionaries and use: from collections import ChainMap df['Event'] = (df.filter(like='Event').stack() .map(dict(ChainMap(dict1, dict2, dict3))) .groupby(level=0).first() ) # or from collections import ChainMap df['Event'] = (df.filter(like='Event') .map(dict(ChainMap(dict1, dict2, dict3)).get) .bfill(axis=1).iloc[:, 0] )
| 1 | 2 |
79,690,330 |
2025-7-4
|
https://stackoverflow.com/questions/79690330/how-to-have-a-line-break-in-the-math-mode-text-in-plotly
|
Suppose you would like to have a line break in the math mode text in plotly. The following solutions have been tried, but none of them is working in different reasons: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Box(y=[10, 14])) fig.update_layout(xaxis_title="$\alpha \\ \beta$") # causes to_image error # fig.update_layout(xaxis_title=r"$\alpha \\ \beta$") # no breaks between math shape of alpha and beta # fig.update_layout(xaxis_title="$\\alpha$<br>$\\beta$") # only shows math shape of alpha and no beta at all! # fig.update_layout(xaxis_title="$\\alpha \\\\ \\beta$") # # no breaks between math shape of alpha and beta # fig.update_layout(xaxis_title="$$\\alpha \\\\ \\beta$$") # no breaks between math shape of alpha and beta # fig.update_layout(xaxis_title="$$\\alpha$$ <br> $$\\beta$$") # # only shows math shape of alpha and no beta at all! fig.show() fig.write_image("this_image.pdf") So, the question is how to set a line break in a math mode text?
|
You seem to be struggling, by trial and error, with 3 (at least) problems (and it is hard to debug when you have found the correct way for one of the problems, but won't even know it unless you solve all of them in the same trial) 1. Escaping Firstly, how to escape things. That is a pure python problem. You need to pass a string that contains things like \alpha and \\ to plotly. But in python '\alpha' is just '\a' (the control char that, once upon a time, was meant to make the printer ding) followed by 'lpha'. And "\\" is a string containing a single \ (which is displayed \\ when you ask its value, because python escapes it when printing values in interactive mode. But print("\\") prints a single \). So, the way to prevent escaping, as you have obviously tried already, is either to use "..." strings and escape anything that python would interpret, like doubling \\. So "\\alpha \\\\ \\beta" Or to use r"..." strings. So that explains why your first line couldn't work 2. Passing latex to plotly In plotly, only labels that start and end with $ are latex. And you can't have several $. So, that invalidates your 3rd and 6th attempts as well. 3. Math mode The other problem is that, then, you are in math mode in LaTeX. And you can't use \\ in math mode (well, you can. But with specific meaning in specific context. Not just to mean "insert line break here".) In math mode ($ or $$) you are supposed to describe single atomic equations ($ can, in a latex document, be wrapped, but by the latex compiler, not by you) There are some environments that allow inserting \\. Like align. But you can't start an align since you are supposed to start it in text mode, in place of $$, which, with plotly, can't work, because you start any latex already in a $ environment. And can't leave it before the very end The closest thing to your attempt could be r"$\alpha \text{\\} \beta$" since \text in latex is a math mode command that puts you back temporarily in text mode. But you can't use \\ in it :D Solution There are many. Just one among many others, you could use a matrix to stack your α and β fig.update_layout(xaxis_title=r"$\begin{matrix}\alpha\\ \beta\end{matrix}$") Works just fine here
| 2 | 1 |
79,689,943 |
2025-7-4
|
https://stackoverflow.com/questions/79689943/compute-group-wise-residual-for-polars-data-frame
|
I am in a situation where I have a data frame with X and X values as well as two groups GROUP1 and GROUP2. Looping over both of the groups, I want to fit a linear model against the X and Y data and the subtract the fit from the true data to get a residual. I'm currently implementing this in the following way: import polars as pl import numpy as np # --- Sample DataFrame for demonstration purposes df = pl.DataFrame( { "GROUP1": [1, 1, 1, 2, 2, 2], "GROUP2": ["A", "A", "A", "B", "B", "B"], "X": [0.0, 1.0, 2.0, 0.0, 1.0, 2.0], "Y": [5.0, 7.0, 9.0, 3.0, 4.0, 6.0], } ) # --- Function to subtract linear best fit per group def subtract_linear_best_fit(df: pl.DataFrame) -> pl.DataFrame: result = [] for _, subdf in df.group_by(["GROUP1", "GROUP2"]): x = subdf["X"].to_numpy() y = subdf["Y"].to_numpy() a, b = np.polyfit(x, y, 1) residuals = y - (a * x + b) result.append(subdf.with_columns(pl.Series("residual", residuals))) return pl.concat(result) # --- Apply function df_with_residuals = subtract_linear_best_fit(df) print(df_with_residuals) But this does not seem nice as it does not make use of .group_by(...).agg(...) or .with_columns((...).over(...)). I tried both these approaches but I either lost columns from the original data frame or just computed a summary. But I want to have a data frame of the same height, just with one more column. Is there any way to avoid concatenating data frames inside the loop? Ideally there would be something like .group_by().pipe() or .pipe().over().
|
You can use a Struct to send multiple columns to .map_batches() def subtract_linear_best_fit(s: pl.Series) -> pl.Series: x, y = s.struct.unnest() a, b = np.polyfit(x, y, 1) return y - (a * x + b) df.with_columns( pl.struct("X", "Y").map_batches(subtract_linear_best_fit) .over("GROUP1", "GROUP2") .alias("residual") ) shape: (6, 5) ┌────────┬────────┬─────┬─────┬─────────────┐ │ GROUP1 ┆ GROUP2 ┆ X ┆ Y ┆ residual │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ f64 ┆ f64 ┆ f64 │ ╞════════╪════════╪═════╪═════╪═════════════╡ │ 1 ┆ A ┆ 0.0 ┆ 5.0 ┆ -1.7764e-15 │ │ 1 ┆ A ┆ 1.0 ┆ 7.0 ┆ -1.7764e-15 │ │ 1 ┆ A ┆ 2.0 ┆ 9.0 ┆ 0.0 │ │ 2 ┆ B ┆ 0.0 ┆ 3.0 ┆ 0.166667 │ │ 2 ┆ B ┆ 1.0 ┆ 4.0 ┆ -0.333333 │ │ 2 ┆ B ┆ 2.0 ┆ 6.0 ┆ 0.166667 │ └────────┴────────┴─────┴─────┴─────────────┘ https://docs.pola.rs/user-guide/expressions/user-defined-python-functions/#combining-multiple-column-values
| 1 | 2 |
79,689,724 |
2025-7-4
|
https://stackoverflow.com/questions/79689724/how-can-i-change-the-background-color-of-an-action-button
|
I have a Shiny for Python app with some buttons. I want to change the color of a button. How can I do this? from shiny.express import ui with ui.layout_columns(): ui.input_action_button("red_button", "Make this button red!") ui.input_action_button("blue_button", "Make this button blue!")
|
You can pass a style attribute and set the background-color: from shiny.express import ui with ui.layout_columns(): ui.input_action_button( "red_button", "Make this button red!", style = "background-color: red;" ) ui.input_action_button( "blue_button", "Make this button blue!", style = "background-color: lightblue;" )
| 3 | 2 |
79,690,793 |
2025-7-5
|
https://stackoverflow.com/questions/79690793/pip-install-using-a-separate-build-machine
|
I have two same-architecture Linux boxes running identical versions of pypy, installed via pyenv. Consider one a "production" machine and one a "build" machine, such that the production machine cannot easily collect and configure all required build dependencies for modern source packages. I need to install a package on the production machine for which PyPI provides a source distribution but no pre-built wheel. python -m pip install example-package works fine on the build machine, but some of the build dependencies are impractical to get/configure/make/install on the production machine. Is there a convenient method to migrate the python packages pip built from PyPI onto another computer?
|
I would do pretty much what @phd suggested in the comments. Instead of trying to install directly on your production machine, you can download and build everything on your build machine, then move the ready files over. Here’s how: On your build machine, use pip to download the package and all its dependencies: python -m pip download example-package -d ./wheels That grabs wheels and source archives into ./wheels. If you see any .tar.gz files, build them into wheels so you have everything precompiled: python -m pip wheel --no-deps --wheel-dir=./wheels ./wheels/*.tar.gz Copy the entire wheels folder to your production machine. On your production machine, install from those local files without touching PyPI: python -m pip install --no-index --find-links=./wheels example-package This way, your production machine unpacks prebuilt wheels—no compilation needed.
| 1 | 3 |
79,691,774 |
2025-7-6
|
https://stackoverflow.com/questions/79691774/efficient-way-creating-a-dict-of-dict-from-a-pandas-dataframe
|
I have a pandas dataframe of the following structure: d = {'I': ['A', 'B', 'C', 'D'], 'X': [ 1, 0, 3, 1], 'Y': [0, 1, 2, 1], 'Z': [1, 0, 0, 0], 'W': [3, 2, 0, 0]} df = pd.DataFrame(data=d, columns=['I','X', 'Y', 'Z', 'W']) df.set_index('I', inplace=True, drop=True) I need to create a dict of dict to get data of all existing edges (indicated by nonzero values) between nodes: {'A': {'X': {1}, 'Z': {1}, 'W': {3}}, 'B': {'Y': {1}, 'W': {2}}, 'C': {'X': {3}, 'Y': {2}}, 'D': {'Y': {1}, 'X': {1}}} I need it to create a network graph using Networkx library and perform some calculations on it. Obviously it would be possible to loop over every cell in the data frame to do this but my data is quite large and it would be inefficient. I'm looking for some better way possibly using vectorization and/or list comprehension. I've tried list comprehension but I'm stuck and cannot make it work. Can anyone suggest a more efficient way to do this please?
|
You can do this by combining df.iterrows() with a dictionary comprehension. Although iterrows() is not truly vectorized, it's still reasonably efficient for this kind of task and cleaner than using manual nested loops. For example, you could write: edge_dictionary = { node: {attribute: {weight} for attribute, weight in attributes.items() if weight != 0} for node, attributes in df.iterrows() } If your DataFrame is very large and you're concerned about performance, another approach is to first convert it into a plain dictionary of dictionaries using df.to_dict(orient='index') and then filter out the zeros. That would look like this: data_dictionary = df.to_dict(orient='index') edge_dictionary = { node: {attribute: {weight} for attribute, weight in connections.items() if weight != 0} for node, connections in data_dictionary.items() }
| 3 | 2 |
79,692,462 |
2025-7-7
|
https://stackoverflow.com/questions/79692462/fastmcp-client-timing-out-while-initializing-the-session
|
I am running a simple FastMCP server locally via the following server side code: from mcp.server.fastmcp import FastMCP server = FastMCP( name="Example-FastMCP", streamable_http_path="/mcp", # ← default but shown for clarity # json_response=True, # stateless_http=True ) @server.tool() def add(x: int, y: int) -> int: return x + y if __name__ == "__main__": server.run(transport="streamable-http") I am trying to connect to this server using the below client code: import asyncio from datetime import timedelta from mcp.client.streamable_http import streamablehttp_client # <- transport from mcp.client.session import ClientSession # <- high-level API SERVER_URL = "http://127.0.0.1:8000/mcp/" async def main() -> None: async with streamablehttp_client( url=SERVER_URL, timeout=100, sse_read_timeout=300, ) as (read_stream, write_stream, get_session_id): session = ClientSession( read_stream=read_stream, write_stream=write_stream, read_timeout_seconds=timedelta(seconds=100), ) init_result = await session.initialize() print(" initialized – protocol version:", init_result.protocolVersion) print(" session-id received from server:", get_session_id()) tools_result = await session.list_tools() tool_names = [tool.name for tool in tools_result.tools] print(" tools available on server:", tool_names) call_result = await session.call_tool("add", {"x": 2, "y": 3}) print(" add(2, 3) =", call_result.output[0].text) if __name__ == "__main__": asyncio.run(main()) While running client code, it times out while initializing the session with server. client side logs: /Users/susai/repos/ds-mcp/.venv/bin/python /Users/susai/repos/ds-mcp/app/client3.py + Exception Group Traceback (most recent call last): | File "/Users/susai/repos/ds-mcp/app/client3.py", line 34, in <module> | asyncio.run(main()) | ~~~~~~~~~~~^^^^^^^^ | File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 195, in run | return runner.run(main) | ~~~~~~~~~~^^^^^^ | File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run | return self._loop.run_until_complete(task) | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^ | File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 725, in run_until_complete | return future.result() | ~~~~~~~~~~~~~^^ | File "/Users/susai/repos/ds-mcp/app/client3.py", line 11, in main | async with streamablehttp_client( | ~~~~~~~~~~~~~~~~~~~~~^ | url=SERVER_URL, | ^^^^^^^^^^^^^^^ | timeout=30, | ^^^^^^^^^^^ | sse_read_timeout=300, | ^^^^^^^^^^^^^^^^^^^^^ | ) as (read_stream, write_stream, get_session_id): | ^ | File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/contextlib.py", line 235, in __aexit__ | await self.gen.athrow(value) | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/mcp/client/streamable_http.py", line 437, in streamablehttp_client | async with anyio.create_task_group() as tg: | ~~~~~~~~~~~~~~~~~~~~~~~^^ | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 772, in __aexit__ | raise BaseExceptionGroup( | "unhandled errors in a TaskGroup", self._exceptions | ) from None | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception) +-+---------------- 1 ---------------- | Traceback (most recent call last): | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/streams/memory.py", line 111, in receive | return self.receive_nowait() | ~~~~~~~~~~~~~~~~~~~^^ | File 
"/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/streams/memory.py", line 106, in receive_nowait | raise WouldBlock | anyio.WouldBlock | | During handling of the above exception, another exception occurred: | | Traceback (most recent call last): | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/_core/_tasks.py", line 115, in fail_after | yield cancel_scope | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/mcp/shared/session.py", line 272, in send_request | response_or_error = await response_stream_reader.receive() | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/streams/memory.py", line 119, in receive | await receive_event.wait() | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 1774, in wait | await self._event.wait() | File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/locks.py", line 213, in wait | await fut | asyncio.exceptions.CancelledError: Cancelled by cancel scope 106faafd0 | | During handling of the above exception, another exception occurred: | | Traceback (most recent call last): | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/mcp/shared/session.py", line 271, in send_request | with anyio.fail_after(timeout): | ~~~~~~~~~~~~~~~~^^^^^^^^^ | File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/contextlib.py", line 162, in __exit__ | self.gen.throw(value) | ~~~~~~~~~~~~~~^^^^^^^ | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/anyio/_core/_tasks.py", line 118, in fail_after | raise TimeoutError | TimeoutError | | During handling of the above exception, another exception occurred: | | Traceback (most recent call last): | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/mcp/client/streamable_http.py", line 461, in streamablehttp_client | yield ( | ...<3 lines>... | ) | File "/Users/susai/repos/ds-mcp/app/client3.py", line 21, in main | init_result = await session.initialize() | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/mcp/client/session.py", line 123, in initialize | result = await self.send_request( | ^^^^^^^^^^^^^^^^^^^^^^^^ | ...<15 lines>... | ) | ^ | File "/Users/susai/repos/ds-mcp/.venv/lib/python3.13/site-packages/mcp/shared/session.py", line 274, in send_request | raise McpError( | ...<8 lines>... | ) | mcp.shared.exceptions.McpError: Timed out while waiting for response to ClientRequest. Waited 100.0 seconds. +------------------------------------ Process finished with exit code 1 server side logs: /Users/susai/repos/ds-mcp/.venv/bin/python /Users/susai/repos/ds-mcp/app/server3.py INFO: Started server process [16523] INFO: Waiting for application startup. StreamableHTTP session manager started INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) Created new transport with session ID: 278ddef2d9f044dd924b0b24da32c94d INFO: 127.0.0.1:55002 - "POST /mcp/ HTTP/1.1" 200 OK Terminating session: 278ddef2d9f044dd924b0b24da32c94d INFO: 127.0.0.1:55006 - "DELETE /mcp/ HTTP/1.1" 200 OK I have tried passing the json_response and stateless_http flags as True to the FastMCP instance, but this also results in the same behaviour. Surprisingly the sse transport works as expected, but is now depricated. Also our API Gateway doesn't support SSE, so that is out of option.
|
On my computer it hangs on session.initialize(). It works when I use async with to create the session async with ClientSession( read_stream=read_stream, write_stream=write_stream, read_timeout_seconds=timedelta(seconds=100), ) as session: init_result = await session.initialize() # ... rest ... But I don't know how to write it without async with because this doesn't work with await session = await ClientSession(...) # doesn't work I also need to use .content instead of .output to get the result print(" add(2, 3) =", call_result.content[0].text) or .structuredContent['result'] print(" add(2, 3) =", call_result.structuredContent['result']) You can see it if you use print(call_result)
| 2 | 0 |
79,692,191 |
2025-7-7
|
https://stackoverflow.com/questions/79692191/in-python-how-to-find-difference-between-a-specific-column-in-one-dataframe-and
|
I have two datasets/dataframes df1 and df2, I want to generate df3 by finding the difference between numeric columns of df2 and df1's column_X. #### copy and paste below to generate df1 and df2 import pandas as pd from random import uniform import numpy as np # generation of df1 data = np.random.uniform(15,40, size=(60, 2)) df1 = pd.DataFrame(data, columns=['column_A','column_B']) df1['column_X'] = df1.mean(axis=1) df1 # generation of df2 data = np.random.uniform(10.5,32.8, size=(60, 30)) df2 = pd.DataFrame(data, columns=['column_1','column_2','column_3','column_4','column_5', 'column_6','column_7','column_8','column_9','column_10', 'column_11','column_12','column_13','column_14','column_15', 'column_16','column_17','column_18','column_19','column_20', 'column_21','column_22','column_23','column_24','column_25', 'column_26','column_27','column_28','column_29','column_30',]) df2["Group"] = pd.DataFrame(np.repeat(['A','B','C'], 20, axis=0)) # make "Group" column the first column col = df2.pop('Group') df2.insert(0, 'Group', col) df2 I want to generate df3 by substracting df2's numeric columns (column_1 to column_30) from df1's Column_X while retaining the "Group" column # Step 1: create an empty df3 and then append df2['Group'] df3 = pd.DataFrame() # substract "column_X from each numeric column df3['col_1X_sub'] = df2['column_1'] - df1['column_X'] df3['col_2X_sub'] = df2['column_2'] - df1['column_X'] df3['col_3X_sub'] = df2['column_3'] - df1['column_X'] . . . df3['col_30X_sub'] = df2['column_30'] - df1['column_X'] final df3 should look something like this for all 30 columns
|
Copy and then broadcast-subtract: import pandas as pd import numpy as np np.random.seed(0) a, b = ab = np.random.uniform(low=15, high=40, size=(2, 60)) df1 = pd.DataFrame({ 'column_A': a, 'column_B': b, 'column_X': ab.mean(axis=0), }) df2 = pd.DataFrame( data=np.random.uniform(low=10.5, high=32.8, size=(60, 30)), columns=[f'column_{i}' for i in range(1, 31)], ) df2.insert( loc=0, column='Group', value=np.repeat(['A','B','C'], 20, axis=0), ) df3 = df2.copy() df3.iloc[:, 1:] -= df1['column_X'].values[:, np.newaxis]
| 1 | 2 |
79,695,414 |
2025-7-9
|
https://stackoverflow.com/questions/79695414/how-to-get-mutual-followers-mutual-friend-for-users-django
|
How do I get mutual followers (friends of friends) with request.user to be displayed for each users posts, I was able to display the count for mutual followers for each users posts on newsfeed. How can I get it done? What I tried: I was able to do this but I do not think it is the best way because when I use slice to show three mutual users images it does not work properly for all users. Example: When I include slice:"3" in template the logged in userA images display just 3 (which is correct,3 mutual followers), but when I log in userB and check their mutual follower it only shows 1 image instead of 3, and userB has 3 mutual followers. This is what I tried: class Profile(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL,on_delete=models.CASCADE,blank=True,null=True) profile_pic = models.ImageField(upload_to='UploadedProfilePicture/', default="ProfileAvatar/avatar.png", blank=True) following = models.ManyToManyField( 'Profile', # Refers to the User model itself symmetrical=False, # If A follows B, B doesn't automatically follow A related_name='followers', # Reverse relationship: get followers of a user blank=True, ) class Post(models.Model): poster_profile = models.ForeignKey(settings.AUTH_USER_MODEL,on_delete=models.CASCADE, blank=True,null=True) def following_view(request): posts = Post.objects.filter( Q(poster_profile__profile__followers__user=request.user)).order_by("?").distinct() mymutual_followers = request.user.profile.following.filter(id__in=request.user.profile.following.values_list('id', flat=True)) {% for post in posts %} {% for user_following in mymutual_followers %} {% if user_following in post.poster_profile.profile.following.all|slice:"3" %} <img src="{{ user_following.profile_pic.url }}" class="hover-followers-img"/> {% endif %} {% endfor %} {% endfor %} The code above is what I tried and it worked, but add slice:"3" to show just 3 images do not work properly for some users, so I think maybe it's not the best approach. The code below is how I got the count of mutual followers and it work perfectly but could not display the mutual followers images; how can I get it done? class Profile(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL,on_delete=models.CASCADE,blank=True,null=True) profile_pic = models.ImageField(upload_to='UploadedProfilePicture/', default="ProfileAvatar/avatar.png", blank=True) following = models.ManyToManyField( 'Profile', # Refers to the User model itself symmetrical=False, # If A follows B, B doesn't automatically follow A related_name='followers', # Reverse relationship: get followers of a user blank=True, ) class Post(models.Model): poster_profile = models.ForeignKey(settings.AUTH_USER_MODEL,on_delete=models.CASCADE, blank=True,null=True) def following_view(request): posts = ( Post.objects.filter(Q(poster_profile__profile__followers__user=request.user)).order_by("?") .annotate( mutual_count=Count( 'poster_profile', filter=Q(poster_profile__profile__following__followers__user=request.user) ) # Got the count for mutual followers ) .prefetch_related( # what am I doing wrong here.... Prefetch( 'poster_profile', Post.objects.annotate( is_mutual=Exists( Post.objects.filter( poster_profile=OuterRef('pk'), poster_profile__profile__followers__user=request.user, ) ) ).filter(is_mutual=True), to_attr='mutual_followers', ) ) ) # Template {% for post in posts %} {{ post.mutual_count }} # Got mutual count {% endfor %} # This is not displaying images. 
{% for post in posts %} {% for img in post.mutual_followers %} <img src="{{ img.profile_pic.url }}" class="hover-followers-img"/> {% endfor %} {% endfor %}
|
As I see it, you are already calculating the mutual follower count, but what you want is to display the actual mutual follower objects for each post. The reason your original approach with annotate and prefetch_related isn’t working is that while annotate is good for counts, it doesn’t fetch related objects. Meanwhile, prefetch_related with Prefetch is mainly built for ManyToMany or reverse ForeignKey relationships. It doesn’t do anything for a direct ForeignKey like poster_profile. Doing this directly in your template with something similar to: {% if user_following in post.poster_profile.profile.following.all|slice:"3" %}, likely ends up causing lots of extra database queries and is pretty messy in general. I think a good way to handle this is to build the list of mutual followers right in your view! Then slice it down to the top 3 (or however many you need), and pass everything to your template already prepared. This way the template just loops over a prepared list of mutual followers for each post and avoids those slow N+1 queries that happen when slicing or filtering something in the template. Here’s how you could structure your view: def following_view(request): user_profile = request.user.profile posts = Post.objects.filter( poster_profile__profile__followers__user=request.user ).select_related('poster_profile', 'poster_profile__profile').distinct() post_data = [] for post in posts: poster_profile = post.poster_profile.profile mutual_followers_qs = user_profile.following.filter( id__in=poster_profile.following.values_list('id', flat=True) ) post_data.append({ 'post': post, 'mutual_followers': mutual_followers_qs[:3], # You can select any number of followers here 'mutual_count': mutual_followers_qs.count() }) return render(request, "your_app_here/newsfeed.html", {"post_data": post_data}) And then update your template to something like this: {% for item in post_data %} <div> <p>Post by {{ item.post.poster_profile.username }}</p> <p>{{ item.mutual_count }} mutual followers</p> <div> {% for mutual_follower in item.mutual_followers %} <img src="{{ mutual_follower.profile_pic.url }}" alt="mutual follower" class="hover-followers-img"> {% empty %} <span>No mutual followers yet</span> {% endfor %} </div> </div> {% endfor %} All of the logic stays in your view, so you avoid slow template loops and unnecessary slicing, and you reliably get mutual followers for each post.
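If the feed grows and the per-post queries in the loop above become a concern, one possible (untested) sketch is to prefetch each poster's following list and compute the same "intersection of the two following sets" in Python — my_following_ids and post_data are just illustrative names:
my_following_ids = set(user_profile.following.values_list('id', flat=True))
posts = (Post.objects
         .filter(poster_profile__profile__followers__user=request.user)
         .select_related('poster_profile__profile')
         .prefetch_related('poster_profile__profile__following')  # one extra query total, not one per post
         .distinct())
post_data = []
for post in posts:
    poster_following = post.poster_profile.profile.following.all()  # served from the prefetch cache
    mutual = [p for p in poster_following if p.id in my_following_ids]
    post_data.append({'post': post, 'mutual_followers': mutual[:3], 'mutual_count': len(mutual)})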
| 1 | 1 |
79,695,252 |
2025-7-9
|
https://stackoverflow.com/questions/79695252/position-an-axis-at-a-point-location
|
I have a plot with a line graph. I need to place a new axis at a point/coordinate and plot a pie chart on the new axis so the pie is centered on the point. import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 3)) ax.set_xlim(0,16) ax.set_ylim(8,12) #Plot a sine wave x = np.arange(0, 5*np.pi, 0.1) y = np.sin(x)+10 ax.plot(x, y, color="blue") #Plot a red point at x=12, y=10 ax.plot(12,10,marker="o", color="red") #Add a new axes to the plot #Normalize the points coordinates to range between 0 and 1 x_norm = (12-0)/(16-0) #0.75 y_norm = (10-8)/(12-8) #0.5 #Add an ax at the normalized coordinates left=0.75 bottom=0.5 width=0.1 height=0.1 sub_ax = fig.add_axes(rect=(left, bottom, width, height)) sub_ax.pie((0.2,0.3,0.5)) The pie is centered on the new axis center. I can't figure out the logic to get it centered at the point.
|
You could use the inset_axes method, (based on the last option from this answer) e.g., import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 3)) ax.set_xlim(0,16) ax.set_ylim(8,12) #Plot a sine wave x = np.arange(0, 5*np.pi, 0.1) y = np.sin(x)+10 ax.plot(x, y, color="blue") #Plot a red point at x=12, y=10 point = (12, 10) ax.plot(*point, marker="o", color="red") # width and height in data coordinates width = 2 height = 0.5 # subtract half the width/height from the left/bottom positions position = [point[0] - width / 2, point[1] - height / 2, width, height] sub_ax = ax.inset_axes(position, transform=ax.transData) sub_ax.pie((0.2,0.3,0.5))
| 3 | 4 |
79,696,762 |
2025-7-10
|
https://stackoverflow.com/questions/79696762/seaborn-histogram-to-plot-survey-answer-frequency-by-gender
|
I have some survey question answers (letters A, B, C) I'd like to plot in a Seaborn histogram, with the frequency of each response letter grouped by gender. import pandas as pd answers = ['A', 'B', 'A', 'B', 'A', 'A', 'B', 'B', 'B', 'A', 'C', 'B', 'A', 'C'] genders = ['M', 'M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'F', 'M', 'M', 'F', 'M'] df = pd.DataFrame({'Answer': answers, 'Gender': genders}) I'd like the output to resemble either clustered columns: or stacked columns: Seaborn isn't mandatory - it's just what I've been trying for a while. I've been struggling for days since categorical data like mine isn't easy to find in online examples. Any help is tremendously appreciated.
|
You can use a simple countplot: import seaborn as sns sns.countplot(df, x='Answer', hue='Gender') Output: Another option, without seaborn, would be to crosstab, then plot.bar: pd.crosstab(df['Answer'], df['Gender']).plot.bar() Output: Which allows you to easily stack: pd.crosstab(df['Answer'], df['Gender']).plot.bar(stacked=True) Output:
| 1 | 1 |
79,699,679 |
2025-7-13
|
https://stackoverflow.com/questions/79699679/identifying-outliers-from-opencv-contours-based-on-curvature
|
I'm processing a photo of the Earth from the Moon, as seen here. What I want to do is get the contour points that correspond to the edge of the Earth that is not in shadow. (e.g., the areas that should form an ellipsoid). I want the physical points on the edge of globe itself, and I'm trying to avoid using Gaussian blurring to avoid data destruction. I have this code that produces contours. import numpy as np import cv2 as cv # Read in Image, convert to grayscale image_path = r"moon_check.png" im = cv.imread(image_path) imgray = cv.cvtColor(im, cv.COLOR_BGR2GRAY) # Threshold, then find contours ret, thresh = cv.threshold(imgray, 0, 1, cv.THRESH_BINARY + cv.THRESH_OTSU) contours, hierarchy = cv.findContours(thresh, cv.RETR_EXTERNAL , cv.CHAIN_APPROX_NONE) contours = sorted(contours, key=cv.contourArea, reverse=True) test = im cnt = contours[0] The resulting contour is pretty good: ... but I want to reduce the Contour elements to a non-contiguous one that excludes the shadows below the terminator, leaving me with only the curved elements near the top. I've thus far tried the following with poor results: Manual curvature sampling does work to a certain degree, but I've not found a good 2nd Order derivative check to exclude the regions at the bottom beyond some really manual tuning. OpenCV minEnclosingCircle doesn't fit very well, possibly due to the oblate spheroid nature of the Earth. Plus, going from the fit to keeping the points closest to that fit is fairly inelegant. I guess I could apply it iteratively and filter by min-distance to the enclosing circle, but that doesn't seem particularly efficient. I tried the OpenCV Hough's Circle, but it didn't work well using the grayscale image-- probably due to my avoidance of blurring the image etc. The shadow contours make a classic circle fit method like Kasa's or Coope's not work well. Convex matching should work for most of it, but many elements on the bottom are also convex. Any suggestions on methods to explore? Should I be changing how I approach the initial contour creation in the first place?
|
RANSAC would be overkill (but good to know about if you don't know) I would suggest RANSAC, as I almost always do when you have a regression whose problem is not accuracy of the data, but numerous outliers. Even more when it is a linear regression (theoretically RANSAC works with any regression. But in practice, there are zillions of libraries with RANSAC applied to linear regression, and very few with more general models). But in your case it is even simpler. You don't really need some random choosing of the points (well, that is almost what I am about to do. But not completely random), since you know that the correct points are contiguous. So, you just have to choose a subsequence of your sequence of points that yields the smallest error. It is a linear problem (Kåsa and Coope's method) First of all, finding a circle from points is a linear problem. You know that for each sample point (x,y), we should have (x-x₀)² + (y-y₀)² = R² That is x² + x₀² - 2x.x₀ + y² + y₀² - 2y.y₀ - R² = 0 that is 2x₀·x + 2y₀·y + (R²-x₀²-y₀²)·1 = x²+y² where x₀, y₀ and R are the unknowns I am looking for, that is the center and radius of the circle. So all you have to find are 2 coefficients (α=2x₀ and β=2y₀), and an intercept (γ=R²-x₀²-y₀²), such that αx+βy+γ = x²+y². Which is just a linear regression problem, easily solvable by a Moore-Penrose inverse. So, if all your points were OK, this code would find the circle X=np.array([cnt[:,0,0],cnt[:,0,1],np.ones((len(cnt),))]).T # 3 columns of x, y and 1 Y=cnt[:,0,0]**2+cnt[:,0,1]**2 # column of x²+y² # So we are looking for coefficients such that X@cf = Y. Which is done by Moore-Penrose cf = np.linalg.inv(X.T@X)@X.T@Y # Those are our α, β, γ # Now we just have to deduce x0, y0, R from that x0 = cf[0]/2 # since α=2x₀ y0 = cf[1]/2 # since β=2y₀ R = np.sqrt(cf[2]+x0**2+y0**2) # Since γ=R²-x₀²-y₀² But of course, it doesn't work, since most of the example points are not on the circle (from your image, one may believe that you have some 60% correct points, but in reality, the fractal nature of the shadow is such that it is more something like 15% correct points. 85% are the shadow border) And a linear regression finds a compromise between BS and correctness. Find a subset of points with a good fit The RANSAC algorithm tries to find, by trial and error, a subset of points such that the number of points close to the model is big enough. Then increase the subset with all points close enough. Redo the regression. What I do is almost that. Except that I don't need the increase part, nor really the random part. I just bet that there is a subset of W=500 consecutive points that fits an arc of circle well. W=500 # Number of consecutive points that may be on the circle results=[] for i in range(0, len(cnt)-W, 50): # Subset XX=X[i:i+W] YY=Y[i:i+W] # Same as before, on subset cf=np.linalg.inv(XX.T@XX)@XX.T@YY x0=cf[0]/2 y0=cf[1]/2 R=np.sqrt(cf[2]+x0**2+y0**2) # Error computation rmse = ((np.sqrt(((XX[:,:2]-[x0,y0])**2).sum(axis=1))-R)**2).mean() # RMSE results.append([x0,y0,R,rmse]) Select the best results=np.array(results) bestI = np.argmin(results[:,3]) x0,y0,R=results[bestI, :3] See how you can barely see the green points, except on the shadow border (because the circle is almost perfect — and my screenshot is lowres :D). We could improve that, of course, by doing what would be the next step of that "not-random ransac" (what we have done now is find a subset of points that matches an arc of circle well, and compute the circle parameters for that subset, that is, a model.
Now that we have done so, we could go back to the whole set of points, select all those that are close enough to our model circle, and redo a regression again, to fine tune accuracy). But I guess this post is long enough. You could do that without help if you want, now that I gave you the start. And your contour was anyway so perfectly circular that even when using a small arc of the circle, we have enough to find a really good fit. Back to the original question All that was to answer your different trials of HoughCircle, MinCircle, ... But your title question is about removing outliers. Well, that is almost what I did as an intermediate step. Removing outliers is basically what I set aside in the previous paragraph: the points that are too far from this circle are the outliers. Side note You said minEnclosingCircle doesn't fit very well, possibly due to the oblate spheroid nature of the Earth There is no oblate spheroid nature of the Earth. Well, there is, of course. That is physics. Rotating planets are not perfect spheres. But that is way less important than people tend to believe, because of exaggerated pictures (a little bit like the misconception about the impact of mountains and abysses. In both cases we are talking about something in the order of 20 km, compared to a diameter of 12700 km. That is barely 2 pixels on your image). For most coders, the Earth is a sphere. People who need to know that it is an oblate spheroid, not a sphere, are usually not coders :D (and if they are, they have way better images than you do :D) I tried the OpenCV Hough's Circle, but it didn't work well using the grayscale image-- probably due to my avoidance of blurring the image etc. No. It never does. (I am kidding of course. But every time I need to detect some circle in an image — and that has happened a few times, always with some specificity that makes it impossible to directly reuse the previous method — I try it. And every time, I need to find something else). I wouldn't blame blurring here. By its nature, Hough needs some binning.
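For completeness, the refinement step sketched in words above (keep only the points close to the fitted circle, then redo the regression on those inliers) could look roughly like this — an untested sketch reusing X, Y, x0, y0, R and cnt from the snippets above, where the 2-pixel tolerance is an arbitrary assumption:
# Distance of every contour point to the fitted circle
dist = np.abs(np.sqrt(((X[:, :2] - [x0, y0])**2).sum(axis=1)) - R)
inliers = dist < 2.0              # keep points within ~2 px of the model (arbitrary threshold)
Xi, Yi = X[inliers], Y[inliers]
# Same linear regression as before, on inliers only, for a fine-tuned circle
cf = np.linalg.inv(Xi.T @ Xi) @ Xi.T @ Yi
x0, y0 = cf[0] / 2, cf[1] / 2
R = np.sqrt(cf[2] + x0**2 + y0**2)
edge_points = cnt[inliers]        # the lit-edge points, with the shadow border removed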
| 2 | 3 |
79,700,418 |
2025-7-14
|
https://stackoverflow.com/questions/79700418/sympy-mod-on-formulas-with-custom-functions
|
I would like to define custom sympy functions which cannot be directly evaluated, but which I do want to be able to compute some things about. For example, modular remainders. Here's my code attempt: from sympy import Function class Foo(Function): @classmethod def eval(cls, n): pass def __mod__(self, modulus: int) -> int: n, = self.args return (n + 1) % modulus print(Foo(7) % 3) # 2 print((Foo(7) + 1) % 3) # Mod(Foo(7) + 1, 3) print((Foo(7) + 1) % 3 == 0) # False So Foo(7) % 3 works as expected, but (Foo(7) + 1) % 3 does not, I wanted sympy to notice that it can compute (a + b) % 3 by computing a%3 and b%3. Is there some way to implement this? Perhaps I need to be more clever and write a new function that walks the symbolic tree of the formula? Or maybe I'm going about this the wrong way? I'm also open to other python libraries (I just guessed that sympy might be a good one to use). In case it helps, my real use-case for this is representing integers that are too large to store in memory and so can only be represented by formulas. But where we can still calculate some properties like modular remainders. See for example https://www.sligocki.com/2022/06/21/bb-6-2-t15.html
|
SymPy's symbolic Mod expression type looks for a method called _eval_Mod. You don't need to define __mod__ because Expr.__mod__ already returns Mod(self, other) and Mod will look for the _eval_Mod method: In [2]: from sympy import Function, Expr, Integer ...: ...: class Foo(Function): ...: @classmethod ...: def eval(cls, n): ...: pass ...: ...: def _eval_Mod(self, modulus: Expr) -> Integer: ...: if not isinstance(modulus, Integer): ...: raise TypeError ...: n, = self.args ...: return (n + 1) % modulus ...: In [3]: Foo(7) % 3 Out[3]: 2 In [4]: (Foo(7) + 1) % 3 Out[4]: 0 That makes your function Foo work as part of an expression rather than just making the operator a % b work for an individual Foo instance. I don't know generally what operations you want besides _eval_Mod, but just in case you are not aware of this, various polynomial functions in SymPy (factor, cancel, etc.) can accept a modulus argument that may be of use to you: In [8]: factor(x**2 + 1) Out[8]: x**2 + 1 In [9]: factor(x**2 + 1, modulus=2) Out[9]: (x + 1)**2
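To connect this to the huge-integers use case in the question: a hypothetical BigPow(a, b) function could stand for a**b without ever materialising the value, delegating _eval_Mod to Python's three-argument pow. This is only an untested sketch; BigPow is an invented name:
from sympy import Function, Integer

class BigPow(Function):
    """Symbolic stand-in for a**b when the explicit integer is too large to store."""
    @classmethod
    def eval(cls, a, b):
        pass  # never collapse to an explicit Integer

    def _eval_Mod(self, modulus):
        a, b = self.args
        if all(isinstance(i, Integer) for i in (a, b, modulus)):
            return Integer(pow(int(a), int(b), int(modulus)))  # fast modular exponentiation

BigPow(3, 10**20) % 7        # evaluates without ever building 3**(10**20)
(BigPow(3, 10**20) + 1) % 7  # Mod is pushed onto each term of the sum, as shown above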
| 1 | 3 |
79,700,637 |
2025-7-14
|
https://stackoverflow.com/questions/79700637/python3-dictionary-being-modified-at-another-thread-does-not-show-changes-after
|
Python version: 3.9 (but the same result with Python 3.12). The goal was for another thread to modify a dictionary and for those modifications to be available in the original thread. import multiprocessing as mp import sys def my_func(result: dict): print(f'variable address at worker: {hex(id(result))}', file=sys.stderr) result["test"] = "test" print(f'{result}', file=sys.stderr) result = {} print(f'variable address at main thread: {hex(id(result))}', file=sys.stderr) my_worker = lambda : my_func(result) # execution at another Thread p = mp.Process(target=my_worker) p.start() p.join() print(f'result at main thread after execution: {result}', file=sys.stderr) # manual execution my_worker() print(f'result at main thread after manual execution: {result}', file=sys.stderr) print(sys.version) and the output is: variable address at main thread: 0x6ffffff39580 variable address at worker: 0x6ffffff39580 {'test': 'test'} result at main thread after execution: {} variable address at worker: 0x6ffffff39580 {'test': 'test'} result at main thread after manual execution: {'test': 'test'} 3.9.16 (main, Mar 8 2023, 22:47:22) [GCC 11.3.0] My expectation was that the result dictionary would show the changes made in the worker, but it does not. What am I doing wrong?
|
You are using multiprocessing.Process. That is not a thread, that is a new process. It will inherit the whole address space of the parent process, but then work on its own copy. You will have to use threading.Thread if you want a thread. from threading import Thread def modify_dict(): result['test'] = 'test' print('Result in worker:', result) result = {} t = Thread(target=modify_dict) t.start() t.join() print('Result in main thread:', result) Output: Result in worker: {'test': 'test'} Result in main thread: {'test': 'test'}
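If a separate process is genuinely what you need (for example to sidestep the GIL), one option is a proxy dict from multiprocessing.Manager, which forwards changes back to the parent process. A minimal sketch, assuming the worker is a module-level function so it can be pickled:
import multiprocessing as mp

def my_func(result):
    result['test'] = 'test'  # writes go through the manager proxy

if __name__ == '__main__':
    with mp.Manager() as manager:
        result = manager.dict()
        p = mp.Process(target=my_func, args=(result,))
        p.start()
        p.join()
        print('Result in main process:', dict(result))  # {'test': 'test'}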
| 2 | 6 |
79,700,454 |
2025-7-14
|
https://stackoverflow.com/questions/79700454/why-does-the-driver-open-two-tabs-by-one-click
|
I am trying to reach pages by hrefs nested to web elements. But the driver gives me two identical tabs when it click() on the href. It turns out that three tabs are open at the same time. Duplicate pages confuse further work with html. How can I get only one new tab by click? Python 3.11, PyCharm chrome_options = Options() chrome_options.add_argument("--headless") driver = webdriver.Chrome() driver.get("https://upn.ru/kupit/kvartiry") title = driver.title print(title) wait = WebDriverWait(driver, 5) original_window = driver.current_window_handle assert len(driver.window_handles) == 1 b_block = driver.find_element(By.CLASS_NAME, 'main-container-margins.width-100').click() wait.until(EC.number_of_windows_to_be(3)) for window_handle in driver.window_handles: if window_handle != original_window: driver.switch_to.window(window_handle) break title = driver.title print(title)
|
The link is opened twice: • the normal <a> fires and Chrome opens a new tab • a JS listener on the same element calls window.open() and opens it again. WebDriver only replicates what the page tells the browser to do, so you have to avoid the click or neutralise the handler. from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC opt = webdriver.ChromeOptions() opt.add_argument('--headless') driver = webdriver.Chrome(options=opt) driver.get('https://upn.ru/kupit/kvartiry') w = WebDriverWait(driver, 5) link = w.until(EC.element_to_be_clickable( (By.CSS_SELECTOR, 'a.card-title'))) # pick the selector you need # simplest: open the URL yourself – no extra windows driver.get(link.get_attribute('href')) # --- or, if you really want to click --- # driver.execute_script( # "arguments[0].removeAttribute('target'); arguments[0].click();", link) # wait until EC.number_of_windows_to_be(2) Only one new page is created, so further window-handle logic stays consistent.
| 3 | 5 |
79,702,280 |
2025-7-15
|
https://stackoverflow.com/questions/79702280/is-it-right-to-raise-an-error-in-except-block-in-python
|
I often see code like this: try: some_operation() except Exception: logger.error("An error occurred while running the operation") raise Exception("A custom message") Please ignore the use of the general Exception in this example; I know it's bad practice. You can think of any other, more specific subclasses of Exception. I don't feel comfortable with code like this. What is the purpose of catching an exception if it's raised again, maybe with a modified message? If a developer does it for logging or for changing the class of the exception, is that an OK and good approach? I understand try...except blocks (in any language) as being there to really handle exceptions. The example I showed doesn't seem good to me, but maybe it's common practice in the Python world? Is it? Is there a more proper way to log in such cases?
|
This general pattern is an accepted practice. It allows a more meaningful message to be written for the user of the code as well as logging a more meaningful message for the developer. From the docs: The most common pattern for handling Exception is to print or log the exception and then re-raise it (allowing a caller to handle the exception as well): import sys try: f = open('myfile.txt') s = f.readline() i = int(s.strip()) except OSError as err: print("OS error:", err) except ValueError: print("Could not convert data to an integer.") except Exception as err: print(f"Unexpected {err=}, {type(err)=}") raise
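When the point of the except block is to swap in a custom exception class or message, a common variant chains the original error with raise ... from so the initial traceback is not lost. A sketch, where OperationError is an invented name and some_operation/logger come from the question:
class OperationError(Exception):
    """Hypothetical domain-specific exception."""

try:
    some_operation()
except Exception as err:
    logger.error("An error occurred while running the operation")
    raise OperationError("A custom message") from err  # keeps the original cause in the traceback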
| 1 | 2 |
79,701,884 |
2025-7-15
|
https://stackoverflow.com/questions/79701884/indicating-which-column-wins-in-a-df-min-call
|
I want to find the minimum value per row and create a new column indicating which of those columns has the lowest number. Unfortunately, it seems like pandas isn't immediately able to help in this regard. My research has led to the min() function, which does find the lowest for each row (when axis=1), but there's no further information beyond the number itself. initialDict = {"A":[6.53,11.47,92.08],"B":[9.11,8.15,12.49]} initialDf = pd.DataFrame.from_dict(initialDict,orient="index",columns=["Value 1","Value 2","Value 3"]) >>> initialDf Value 1 Value 2 Value 3 A 6.53 11.47 92.08 B 9.11 8.15 12.49 >>> initialDf.min(axis=1,numeric_only=True) A 6.53 # Value 1 - just a number is useless to me. B 8.15 # Value 2 - how do i access which columns these are? My sample data is a lot larger than two rows, so ideally I'd want a vectorised solution. Can I somehow access which column has the lowest number per row and assign it to a new value?
|
A possible solution, which uses idxmin: initialDf.assign(min = initialDf.idxmin(axis=1)) Output: Value 1 Value 2 Value 3 min A 6.53 11.47 92.08 Value 1 B 9.11 8.15 12.49 Value 2
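If both the winning column and the minimum value itself are wanted, the same idea extends naturally (a sketch; the column names min_value and min_col are arbitrary):
out = initialDf.assign(
    min_value=initialDf.min(axis=1),
    min_col=initialDf.idxmin(axis=1),
)
Note that idxmin returns the first matching column in case of ties.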
| 2 | 3 |
79,701,508 |
2025-7-15
|
https://stackoverflow.com/questions/79701508/cache-updates-in-chatbot-when-adding-new-documents
|
I'm building a chatbot to answer legal-related questions. I'm facing an issue with caching questions and responses — the goal is to retrieve an answer when someone asks a similar question that's already been saved. However, when I add new documents to the chatbot, the previously cached questions don't include information from these new files, so the responses become outdated and don't get updated accordingly. I've thought of two solutions: When a cached question is asked, the system checks whether the number of information files has changed. If so, it fetches data from the newly added files. If there is relevant content, it generates a new answer that combines the new information with the previous response, then updates the cache. When new files are added, a separate process is triggered to update the cached responses. Both solutions raise concerns about the chatbot's performance. What approach would you recommend to keep the cache up-to-date without degrading performance?
|
Here is a very naive approach to such a scenario. Our assumptions are: Your vector store initially contains N documents and any query for which relevant context exists is cached. When the vector store is updated (i.e. the count of documents changes) you want the cached items to be updated. from langchain_core.caches import BaseCache, RETURN_VAL_TYPE from typing import Any, Dict, Optional class MetadataAwareCache(BaseCache): def __init__(self, doc_count: int): super().__init__() self._cache = {} self._doc_count = doc_count # Initially set a document count for reference def update_document_count(self, doc_count: int): if self._doc_count == doc_count: return self._doc_count = doc_count for args, _old_response in self._cache.items(): prompt, llm_string = args[0], args[1] response, metadata = self.regenerate_cache(prompt, llm_string) self.update(prompt, llm_string, response, metadata) def regenerate_cache(self, prompt: str, llm_input: str): # Regenerate response with new information response = "New LLM Response" metadata = {} return response, metadata # Cache lookup def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: cache_entry = self._cache.get((prompt, llm_string)) if cache_entry: return cache_entry["value"] return None # Update cache def update( self, prompt: str, llm_string: str, value: RETURN_VAL_TYPE, metadata: Dict[str, Any] = None, ) -> None: self._cache[(prompt, llm_string)] = { "value": value, "metadata": metadata or {}, } # Clear cache (also part of the BaseCache interface) def clear(self, **kwargs: Any) -> None: self._cache = {} To use the cache: from langchain.globals import set_llm_cache cache = MetadataAwareCache(doc_count=0) # pass the current document count of your vector store set_llm_cache(cache) Considerations: If you are worried about performance and the potential cost of re-generating the cache, consider the frequency at which new information is likely to be added. If updates are few and far between, it could be better to simply let cache entries expire rather than pay the cost of re-creating entries that might not be re-used. If the updates are more frequent, you could make it a scheduled process. Alternatively you can add more metadata for each document and re-create only those cache entries that fall under the affected category.
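How update_document_count gets triggered depends on your ingestion pipeline. A rough, untested sketch, where get_document_count is a placeholder for however your vector store exposes its size:
def add_documents(vector_store, cache: MetadataAwareCache, new_docs):
    vector_store.add_documents(new_docs)  # standard LangChain vector-store method
    cache.update_document_count(get_document_count(vector_store))  # placeholder helper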
| 2 | 1 |