Dataset columns (name: type, min to max):
question_id: int64, 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-14 00:00:00
link: string, length 60 to 163
question: string, length 53 to 28.9k
accepted_answer: string, length 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
79,447,569
2025-2-18
https://stackoverflow.com/questions/79447569/keyerror-version-issue-with-pip-installing-catboost-on-python-3-13-1
I am working on this ml project and I need to install catboost and xgboost using pip. the xgboost got installed successfully but catboost keeps giving the same error: (venv) D:\ML bootcamp\mlproject>pip install catboost Collecting catboost Using cached catboost-1.2.7.tar.gz (71.5 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [24 lines of output] Traceback (most recent call last): File "D:\ML bootcamp\mlproject\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module> main() ~~~~^^ File "D:\ML bootcamp\mlproject\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main json_out["return_val"] = hook(**hook_input["kwargs"]) ~~~~^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ML bootcamp\mlproject\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel return hook(config_settings) File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-xzdiplgy\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-xzdiplgy\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires self.run_setup() ~~~~~~~~~~~~~~^^ File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-xzdiplgy\overlay\Lib\site-packages\setuptools\build_meta.py", line 522, in run_setup super().run_setup(setup_script=setup_script) ~~~~^^^^^^^^^^^^^^^^ File "<string>", line 733, in <module> File "<string>", line 205, in get_catboost_version KeyError: 'VERSION' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I have tried what I could, but I don't know what else to do to fix this. Please let me know if someone knows the solution to this.
According to the CatBoost installation docs, the CatBoost Python package supports only the CPython implementation with versions < 3.13; support for 3.13.x is in progress. Source: https://catboost.ai/docs/en/concepts/python-installation

There is also an open issue for this on their GitHub repo: https://github.com/catboost/catboost/issues/2748

A possible solution would be to use a Python version that CatBoost currently supports, e.g. 3.12.x.
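Not part of the original answer, but a minimal sketch of a guard you could run before installing, to fail fast when the interpreter is too new for the published CatBoost wheels:

import sys

# CatBoost wheels cover CPython < 3.13 at the time of the answer,
# so bail out before `pip install catboost` runs on a newer interpreter.
if sys.version_info >= (3, 13):
    raise SystemExit("catboost needs Python 3.12.x or earlier; create the venv with a supported version")
print("Python", sys.version.split()[0], "is OK for catboost")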
1
4
79,482,145
2025-3-3
https://stackoverflow.com/questions/79482145/extreme-value-analysis-and-quantile-estimation-using-log-pearson-type-3-pearson
I am trying to estimate quantiles for some snow data using the log pearson type 3 distribution in Python and comparing with R. I do this by reading in the data, log transforming it, fitting Pearson type 3, estimating quantiles, then transforming back from log space. In python: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats import lmoments3 as lm import lmoments3.distr as ld data=np.array([[1079], [ 889], [ 996], [1476], [1567], [ 897], [ 991], [1222], [1372], [1450], [1077], [1354], [1029], [1699], [ 965], [1133], [1951], [1621], [1069], [ 930], [1039], [1839]]) return_periods = np.array([2,3,5,10,20,50,100,200,1000]) log_data = np.log(data) params = stats.pearson3.fit(log_data) #Max likelihood estimation method quantiles = np.exp(stats.pearson3.ppf(1 - 1 / return_periods, *params)) paramsmm=ld.pe3.lmom_fit(log_data) #lmoments estimation method paramsmm2=(paramsmm["skew"], paramsmm['loc'], paramsmm['scale'][0]) quantilesmm = np.exp(ld.pe3.ppf(1 - 1 / return_periods, *paramsmm2)) print(quantiles) print(quantilesmm) in R: library(lmom) library(lmomco) library(FAdist) swe_data <- c(1079, 889, 996, 1476, 1567, 897, 991, 1222, 1372, 1450, 1077, 1354, 1029, 1699, 965, 1133, 1951, 1621, 1069, 930, 1039, 1839) return_periods <- c(2, 3, 5, 10, 20, 50, 100, 200, 1000) exceedance_probabilities <- 1 / return_periods # P = 1/T nonexceedance_probabilities <- 1 - exceedance_probabilities # P_nonexceedance = 1 - P_exceedance log_swe <- log(swe_data) loglmoments <- lmom.ub(log_swe_data) fit_LP3 <- parpe3(loglmoments) #pars estimated using lmoments LP3_est=exp(quape3(nonexceedance_probabilities, fit_LP3)) print(LP3_est) The quantiles estimated are the following: MLE/scipy stats: params=(2.0246357656236125, 7.10812763271725, 0.32194785836668816) #skew, loc scale quantiles=[1105.86050592 1259.46110488 1484.67412496 1857.18767881 2324.18036925 3127.68767927 3916.2007443 4904.15011095 8271.24322709] Lmoments/python: params=(-2.2194418726874434, 7.1069179914286424, 0.07535915093401913) #skew, loc scale quantiles=[1251.30865382 1276.35189073 1291.29995882 1300.06624583 1303.59129662 1305.31725745 1305.78638777 1305.98555852 1306.11275037] Lmoments/R: params= (7.1069180 0.2566677 0.9365001) #mu, sigma, gamma quantiles=[1173.116 1313.849 1485.109 1721.131 1969.817 2326.812 2623.112 2945.728 3814.692] I would expect the latter two methods, both using lmoments, to produce the same result. Based on comparisons with other distributions, it seems like R is giving the most realistic result. Any explanation for the large differences? How might I get a similar result in Python?
To my understanding, the main difference between the Python and R results stems from the estimation methods used: Python's scipy.stats uses Maximum Likelihood Estimation (MLE) by default, while R's lmom package uses L-moments estimation. These methods can produce different parameter estimates, especially for small sample sizes or highly skewed distributions like the Log-Pearson Type III. To get a similar or closer result, use the lmoments3 library in Python, which implements the same L-moments method as R, and double-check how you're pulling the parameters out of the lmoments3 results, especially the 'scale' value. Here's how you can get the Python result much closer to the R results:

import numpy as np
import scipy.stats as stats
import lmoments3 as lm
import lmoments3.distr as ld

# Input data
data = np.array([1079, 889, 996, 1476, 1567, 897, 991, 1222, 1372, 1450,
                 1077, 1354, 1029, 1699, 965, 1133, 1951, 1621, 1069, 930, 1039, 1839])

# Return periods and probabilities
return_periods = np.array([2, 3, 5, 10, 20, 50, 100, 200, 1000])
non_exceedance_probabilities = 1 - 1 / return_periods

# Log-transform the data
log_data = np.log(data)

# L-moments method (using lmoments3)
paramsmm = ld.pe3.lmom_fit(log_data)
quantiles_lmom = np.exp(ld.pe3.ppf(non_exceedance_probabilities,
                                   paramsmm["skew"], paramsmm['loc'], paramsmm['scale']))
print("L-moments method results:")
print(quantiles_lmom)

# Maximum Likelihood Estimation method (using scipy.stats)
params_mle = stats.pearson3.fit(log_data)
quantiles_mle = np.exp(stats.pearson3.ppf(non_exceedance_probabilities, *params_mle))
print("\nMaximum Likelihood Estimation results:")
print(quantiles_mle)

# R results for comparison
r_quantiles = np.array([1173.116, 1313.849, 1485.109, 1721.131, 1969.817,
                        2326.812, 2623.112, 2945.728, 3814.692])
print("\nR results:")
print(r_quantiles)

# Compare results
print("\nDifference between Python L-moments and R results:")
print(quantiles_lmom - r_quantiles)
1
1
79,480,260
2025-3-3
https://stackoverflow.com/questions/79480260/ta-lib-is-not-properly-detecting-engulfing-candle
As you can see in the attached image, there was a Bearish Engulfing candle on December 18. But when I check the same data and the engulfing value, it shows something else, in fact 0. Below is the function that detects the engulfing candle pattern:

def detect_engulfing_pattern(tsla_df):
    df = tsla_df.rename(columns={"Open": "open", "High": "high", "Low": "low", "Close": "close"})
    df['engulfing'] = talib.CDLENGULFING(df['open'], df['high'], df['low'], df['close'])
    return df
TA-Lib does not report an engulfing pattern there because CDLENGULFING only compares each candle with the one immediately before it, not across multiple bars, and the December 18 body does not engulf the body of the single candle right before it. I highlighted the part that I think you missed:
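For illustration, a rough sketch (my own, not from the original answer) of a two-candle bearish-engulfing check; TA-Lib's exact body-comparison rules may differ slightly, but the point is that only the immediately preceding candle is considered:

import pandas as pd

def bearish_engulfing_two_bar(df: pd.DataFrame) -> pd.Series:
    # Previous candle bullish, current candle bearish, and the current real body
    # covers the previous real body (the same one-bar lookback TA-Lib uses).
    prev_bull = df['close'].shift(1) > df['open'].shift(1)
    curr_bear = df['close'] < df['open']
    engulfs = (df['open'] >= df['close'].shift(1)) & (df['close'] <= df['open'].shift(1))
    return prev_bull & curr_bear & engulfs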
1
1
79,482,283
2025-3-3
https://stackoverflow.com/questions/79482283/presidio-with-langchain-experimental-does-not-detect-polish-names
I am using presidio/langchain_experimental to anonymize text in Polish, but it does not detect names (e.g., "Jan Kowalski"). Here is my code: from presidio_anonymizer import PresidioAnonymizer from presidio_reversible_anonymizer import PresidioReversibleAnonymizer config = { "nlp_engine_name": "spacy", "models": [{"lang_code": "pl", "model_name": "pl_core_news_lg"}], } anonymizer = PresidioAnonymizer(analyzed_fields=["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS"], languages_config=config) anonymizer_tool = PresidioReversibleAnonymizer(analyzed_fields=["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS"], languages_config=config) text = "Jan Kowalski mieszka w Warszawie i ma e-mail [email protected]." anonymized_result = anonymizer_tool.anonymize(text) anon_result = anonymizer.anonymize(text) deanonymized_result = anonymizer_tool.deanonymize(anonymized_result) print("Anonymized text:", anonymized_result) print("Deanonymized text:", deanonymized_result) print("Map:", anonymizer_tool.deanonymizer_mapping) print("Anonymized text:", anon_result) Output: Anonymized text: Jan Kowalski mieszka w Warszawie i ma e-mail [email protected]. Deanonymized text: Jan Kowalski mieszka w Warszawie i ma e-mail [email protected]. Map: {} Anonymized text: Jan Kowalski mieszka w Warszawie i ma e-mail [email protected]. I expected the name "Jan Kowalski" and the email address to be anonymized, but the output remains unchanged. I have installed the pl_core_news_lg model using: python -m spacy download pl_core_news_lg Am I missing something in the configuration, or does Presidio not support Polish entity recognition properly? Any suggestions on how to make it detect names in Polish? The interesting thing is that when I use only anonymizer_tool = PresidioReversibleAnonymizer() Then the output look like this: Anonymized text: Elizabeth Tate mieszka w Warszawie i ma e-mail [email protected]. Deanonymized text: Jan Kowalski mieszka w Warszawie i ma e-mail [email protected]. Map: {'PERSON': {'Elizabeth Tate': 'Jan Kowalski'}, 'EMAIL_ADDRESS': {'[email protected]': '[email protected]'}} As mentioned below if I use only spaCy: nlp = spacy.load("pl_core_news_lg") doc = nlp(text) Then the output is correct so I guess that it's the problem with presidio itself. Output from spaCy: Jan Kowalski persName Warszawie placeName So I would not like to create custom analyzer for that but use spaCy in Presidio as it works as expected.
After some testing I was able to find the solution:

config = {
    "nlp_engine_name": "spacy",
    "models": [{"lang_code": 'pl', "model_name": "pl_core_news_lg"}],
}

spacy_recognizer = SpacyRecognizer(
    supported_language="pl",
    supported_entities=["persName"]
)
anonymizer.add_recognizer(spacy_recognizer)

anonymizer_tool = PresidioReversibleAnonymizer(
    analyzed_fields=["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS", "CREDIT_CARD"],
    languages_config=config
)

The output looks like this:

Anonymized text: <persName> mieszka w Warszawie i ma e-mail [email protected].
Deanonymized text: Jan Kowalski mieszka w Warszawie i ma e-mail [email protected].
Map: {'persName': {'<persName>': 'Jan Kowalski', '<persName_2>': 'Jana Kowalskiego'}, 'EMAIL_ADDRESS': {'[email protected]': '[email protected]'}}

You need to directly add a SpacyRecognizer with supported_entities formatted according to spaCy's requirements (the Polish model's persName label rather than Presidio's PERSON). I believe there's something missing or unclear in the documentation, which is causing the misunderstanding.
4
-2
79,469,513
2025-2-26
https://stackoverflow.com/questions/79469513/how-read-a-file-from-a-pod-in-azure-kubernetes-service-aks-in-a-pythonic-way
I have a requirement to read a file which is located inside a particular folder in a pod in AKS. My manual flow would be to: exec into the pod with kubectl. cd to the directory where the file is located. cat the file to see it's contents. I want to automate all this purely using python. I am able to do it with subprocess but that would work only on a machine which has azure and kubectl setup. Thus, I am looking for a purely pythonic way of doing this. I have looked into the Kubernetes client for Python but I am not able to find a way to do everything which I listed above.
To read a file located inside a particular folder in a pod in AKS via a Python script, follow the steps below.

Assuming you have a valid AKS cluster up and running, deploy a pod with your desired file. For example -

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello from AKS' > /data/file.txt && sleep 3600"]
    volumeMounts:
    - name: data-volume
      mountPath: "/data"
  volumes:
  - name: data-volume
    emptyDir: {}

kubectl apply -f pod.yaml
kubectl get pods

The pod writes 'Hello from AKS' into the file, and the same text should come back when you read the file from the pod using Python.

Install / update the necessary dependencies:

pip install kubernetes

Here's the script -

from kubernetes import client, config, stream

def read_file_from_pod(namespace: str, pod_name: str, container_name: str, file_path: str) -> str:
    try:
        config.load_incluster_config()
    except config.config_exception.ConfigException:
        config.load_kube_config()

    api_instance = client.CoreV1Api()
    command = ["cat", file_path]

    try:
        exec_response = stream.stream(
            api_instance.connect_get_namespaced_pod_exec,
            name=pod_name,
            namespace=namespace,
            command=command,
            container=container_name,
            stderr=True,
            stdin=False,
            stdout=True,
            tty=False,
        )
        return exec_response
    except Exception as e:
        return f"Error reading file from pod: {str(e)}"

if __name__ == "__main__":
    namespace = "default"
    pod_name = "my-pod"
    container_name = "my-container"
    file_path = "/data/file.txt"

    file_contents = read_file_from_pod(namespace, pod_name, container_name, file_path)
    print("File Contents:", file_contents)

Save and run the script. Now you can read a file from a pod in AKS in a Pythonic way.
1
2
79,475,225
2025-2-28
https://stackoverflow.com/questions/79475225/azure-documen-intelligence-python-sdk-doesnt-separate-pages
When trying to extract content from a MS Word .docx file using Azure Document Intelligence, I expected the returned response to contain a page element for each page in the document and for each of those page elements to contain multiple lines in line with the documentation. Instead, I always receive as a single page with no (None) lines and the entire document's contents as a list of words. Sample document: Minimal reproducible example: from azure.core.credentials import AzureKeyCredential from azure.ai.documentintelligence import DocumentIntelligenceClient from azure.ai.documentintelligence.models import DocumentAnalysisFeature, AnalyzeResult, AnalyzeDocumentRequest def main(): client = DocumentIntelligenceClient( 'MY ENDPOINT', AzureKeyCredential('MY KEY') ) document = 'small_test_document.docx' with open(document, "rb") as f: poller = client.begin_analyze_document( "prebuilt-layout", analyze_request=f, content_type="application/octet-stream" ) result = poller.result() print(f'Found {len(result.pages)} page(s)') for page in result.pages: print(f'Page #{page.page_number}') print(f' {page.lines=}') print(f' {len(page.words)=}') if __name__ == '__main__': main() Expected output: Found 2 page(s) Page #1 page.lines=6 len(page.words)=58 Page #2 page.lines=1 len(page.words)=8 Actual output: Found 1 page(s) Page #1 page.lines=None len(page.words)=66 My question is: Why, and what should I do differently to get the expected output?
As your actual output shows, all 66 words in your document are treated as a single page. This is the expected behavior. As mentioned in the docs on how page units are computed, 3,000 characters are considered one page unit in a Word document:

File format | Computed page unit | Total pages
Word (DOCX) | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each

So each 3,000 characters is counted as one page, and the page breaks in your document are not considered. Additionally, the following features are not supported for Microsoft Office (DOCX, XLSX, PPTX) and HTML files:

- There are no angle, width/height, and unit values on each page object.
- For each object detected, there is no bounding polygon or bounding region.
- Page range (pages) is not supported as a parameter.
- No lines object.

Reference. So Document Intelligence has limited support for DOCX files; your best option is to use PDF files, where the content is analyzed page by page. Not ideal, but if you do need to work with DOCX files, first convert them to PDF (using relevant APIs) and then process them with Document Intelligence.
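That last suggestion could look roughly like this. The docx2pdf package and the file names are my assumptions (any Word-to-PDF converter would do, and docx2pdf needs Microsoft Word installed); the Document Intelligence calls mirror the question's code:

from docx2pdf import convert  # assumed converter, not part of the original answer
from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient

# 1) Convert the Word file to PDF so real page boundaries and line objects exist.
convert("small_test_document.docx", "small_test_document.pdf")

# 2) Analyze the PDF exactly as in the question; pages and lines are now populated.
client = DocumentIntelligenceClient('MY ENDPOINT', AzureKeyCredential('MY KEY'))
with open("small_test_document.pdf", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        analyze_request=f,
        content_type="application/octet-stream",
    )
result = poller.result()
print(f"Found {len(result.pages)} page(s)")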
2
1
79,481,379
2025-3-3
https://stackoverflow.com/questions/79481379/hollowing-out-a-patch-anticlipping-a-patch-in-matplotlib-python
I want to draw a patch in Matplotlib constructed by hollowing it out with another patch, in a way such that the hollowed out part is completely transparent. For example, lets say I wanted to draw an ellipse hollowed out by another. I could do the following: import matplotlib.pyplot as plt from matplotlib.patches import Ellipse ellipse_1 = Ellipse((0,0), 4, 3, color='blue') ellipse_2 = Ellipse((0.5,0.25), 2, 1, angle=30, color='white') ax = plt.axes() ax.add_artist(ellipse_1) ax.add_artist(ellipse_2) plt.axis('equal') plt.axis((-3,3,-3,3)) plt.show() However, if now I wanted to draw something behind, the part behind the hollowed out part would not be visible, for example: import matplotlib.pyplot as plt from matplotlib.patches import Ellipse, Rectangle ellipse_1 = Ellipse((0,0), 4, 3, color='blue') ellipse_2 = Ellipse((0.5,0.25), 2, 1, angle=30, color='white') rectangle = Rectangle((-2.5,-2), 5, 2, color='red') ax = plt.axes() ax.add_artist(rectangle) ax.add_artist(ellipse_1) ax.add_artist(ellipse_2) plt.axis('equal') plt.axis((-3,3,-3,3)) plt.show() where the part of the red rectangle inside the blue shape cannot be seen. Is there an easy way to do this? Another way to solve this would be with a function to do the opposite of set_clip_path, lets say set_anticlip_path, where the line ellipse_1.set_anticlip_path(ellipse_2) would do the trick, but I have not been able to find anything like this.
Approach for ellipses The following is a simple approach that works for the ellipse example (and, generally, for symmetric objects): import matplotlib.pyplot as plt from matplotlib.patches import Ellipse, Rectangle, PathPatch from matplotlib.path import Path from matplotlib.transforms import Affine2D ellipse_1 = Ellipse((0, 0), 4, 3, color='blue') ellipse_2 = Ellipse((0.5, 0.25), 2, 1, angle=30, color='white') rectangle = Rectangle((-2.5, -2), 5, 2, color='red') # Provide a flipping transform to reverse one of the paths flip = Affine2D().scale(-1, 1).transform transform_1 = ellipse_1.get_patch_transform().transform transform_2 = ellipse_2.get_patch_transform().transform vertices_1 = ellipse_1.get_path().vertices.copy() vertices_2 = ellipse_2.get_path().vertices.copy() # Combine the paths, create a PathPatch from the combined path combined_path = Path( transform_1(vertices_1).tolist() + transform_2(flip(vertices_2)).tolist(), ellipse_1.get_path().codes.tolist() + ellipse_2.get_path().codes.tolist(), ) combined_ellipse = PathPatch(combined_path, color='blue') ax = plt.axes() ax.add_artist(rectangle) ax.add_artist(combined_ellipse) plt.axis('equal') plt.axis((-3, 3, -3, 3)) plt.show() Produces: Key ideas: We can combine their paths by getting the vertices and codes of the two ellipses and concatenating them within a new matplotlib.path.Path instance. We need to reverse one of the paths before combining, or otherwise the intersection will not be transparent. We do so with an appropriate matplotlib.transforms.Affine2D, which we use for flipping one of the given ellipses. We need to get the paths into the correct coordinate system before plotting. We do so by applying each ellipse's get_patch_transform() result before concatenating the vertices. Generalized approach Reversing the direction of one of the paths really is important here. It is not straightforward, however. Flipping is a bit of a "cheat" above, as it only works because an ellipse has a symmetric shape. For a generic path, we need to disassemble it according to its codes (which describe how the vertices are connected) into connected segments, reverse the resulting segments (where we need to take extra care that the codes need to be reversed a bit differently from the vertices), and then reassemble it: import matplotlib.pyplot as plt from matplotlib.patches import Ellipse, Rectangle, PathPatch, Annulus from matplotlib.path import Path ellipse = Ellipse((0, 0), 4, 3, color='blue') annulus = Annulus((0.5, 0.25), (2/2, 1/2), 0.9/2, angle=30, color='white') rectangle = Rectangle((-2.5, -2), 5, 2, color='red') def reverse(vertices, codes): # Codes (https://matplotlib.org/stable/api/path_api.html#matplotlib.path.Path): MOVETO = 1 # "Pick up the pen and move to the given vertex." CLOSE = 79 # "Draw a line segment to the start point of the current polyline." # LINETO = 2: "Draw a line from the current position to the given vertex." # CURVE3 = 3: "Draw a quadratic Bézier curve from the current position, # with the given control point, to the given end point." # CURVE4 = 4: "Draw a cubic Bézier curve from the current position, # with the given control points, to the given end point." assert len(vertices) == len(codes), f"Length mismatch: {len(vertices)=} vs. 
{len(codes)=}" vertices, codes = list(vertices), list(codes) assert codes[0] == MOVETO, "Path should start with MOVETO" if CLOSE in codes: # Check if the path is closed assert codes.count(CLOSE) == 1, "CLOSEPOLY should not occur more than once" assert codes[-1] == CLOSE, "CLOSEPOLY should only appear at the last index" vertices, codes = vertices[:-1], codes[:-1] # Ignore CLOSEPOLY for now is_closed = True else: is_closed = False # Split the path into segments, where segments start at MOVETO segmented_vertices, segmented_codes = [], [] for vertex, code in zip(vertices, codes): if code == MOVETO: # Start a new segment segmented_vertices.append([vertex]) segmented_codes.append([code]) else: # Append to current segment segmented_vertices[-1].append(vertex) segmented_codes[-1].append(code) # Reverse and concatenate rev_vertices = [val for seg in segmented_vertices for val in reversed(seg)] rev_codes = [val for seg in segmented_codes for val in [seg[0]] + seg[1:][::-1]] if is_closed: # Close again if necessary, by appending CLOSEPOLY rev_codes.append(CLOSE) rev_vertices.append([0., 0.]) return rev_vertices, rev_codes transform_1 = ellipse.get_patch_transform().transform transform_2 = annulus.get_patch_transform().transform vertices_1 = ellipse.get_path().vertices.copy() vertices_2 = annulus.get_path().vertices.copy() codes_1 = ellipse.get_path().codes.tolist() codes_2 = annulus.get_path().codes.tolist() vertices_2, codes_2 = reverse(vertices_2, codes_2) # Reverse one path # Combine the paths, create a PathPatch from the combined path combined_path = Path( transform_1(vertices_1).tolist() + transform_2(vertices_2).tolist(), codes_1 + codes_2, ) combined_ellipse = PathPatch(combined_path, color='blue') ax = plt.axes() ax.add_artist(rectangle) ax.add_artist(combined_ellipse) plt.axis('equal') plt.axis((-3, 3, -3, 3)) plt.show() Produces: Key ideas: The reverse() function splits the path at points where the drawing pen is moved, and only reverses the segments in-between. Consider the pseudo-code example of a path segment with three vertices V1, V2, V3, connected by a line between V1–V2 and a curve between V2–V3 with control point C: The (forward) path would be [move_to V1, line_to V2, curve_to C, curve_to V3]. The reversed path would be [move_to V3, curve_to C, curve_to V2, line_to V1]. We can see here, that we need to reverse the codes (move_to, curve_to, line_to) different than the points (vertices V1, V2, V3; control point C): The points just need to be reversed completely. The first code always needs to be move_to, then the remaining codes need to be reversed. For a closed path, the value 79 (CLOSEPOLY which I shortened to CLOSE above) always seems to be the last code. We take care of it by checking for it, removing it if present, and appending it again at the end, if necessary. The values of its associated vertex do not matter.
2
4
79,480,032
2025-3-3
https://stackoverflow.com/questions/79480032/in-numpy-find-a-percentile-in-2d-with-some-condition
I have this kind of array a = np.array([[-999, 9, 7, 3], [2, 1, -999, 1], [1, 5, 4, 6], [0, 6, -999, 9], [1, -999, -999, 6], [8, 4, 4, 8]]) I want to get 40% percentile of each row in that array where it is not equal -999 If I use np.percentile(a, 40, axis=1) I will get array([ 3.8, 1. , 4.2, 1.2, -799. , 4.8]) which is still include -999 the output I want will be like this [ 6.2, # 3 or 7 also ok 1, 4.2, # 4 or 5 also ok 4.8, # 0 or 6 also ok 1, 4 ] Thank you
You can replace the -999s with NaNs and use nanpercentile.

import numpy as np

a = np.array([[-999, 9, 7, 3],
              [2, 1, -999, 1],
              [1, 5, 4, 6],
              [0, 6, -999, 9],
              [1, -999, -999, 6],
              [8, 4, 4, 8]], dtype=np.float64)

a[a == -999] = np.nan
np.nanpercentile(a, 40, axis=-1, keepdims=True)
# array([[6.2],
#        [1. ],
#        [4.2],
#        [4.8],
#        [3. ],
#        [4.8]])
# Use the `method` argument if you want a different type of estimate
# `keepdims=True` keeps the result a column, which it looks like you want

You asked for a solution "in NumPy", and that's it. (Unless you want to re-implement percentile, which is not so hard. Or I suppose you could use apply_along_axis on a function that removes the -999s before taking the quantile, but that will just loop in Python over the slices, which can be slow.)

If you don't want to have to change the dtype and replace with NaNs to perform the operation, you can use NumPy masked arrays with scipy.stats.mquantiles.

import numpy as np
from scipy import stats

a = np.array([[-999, 9, 7, 3],
              [2, 1, -999, 1],
              [1, 5, 4, 6],
              [0, 6, -999, 9],
              [1, -999, -999, 6],
              [8, 4, 4, 8]])

mask = a == -999
b = np.ma.masked_array(a, mask=mask)
stats.mstats.mquantiles(b, 0.4, alphap=1, betap=1, axis=-1)
# alphap=1, betap=1 are the settings to reproduce the same values produced by NumPy's default `method`.

But beware that mquantiles is on its way out, superseded by new features in the next release.
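As an illustration of the "remove the -999s before taking the quantile" route mentioned above, here is a minimal per-row sketch (my own example; it loops in Python over the rows, so the slowness caveat applies):

import numpy as np

def row_percentile_ignoring(a, q, sentinel=-999):
    # Drop the sentinel from each row, then take that row's percentile.
    return np.array([np.percentile(row[row != sentinel], q) for row in a])

a = np.array([[-999, 9, 7, 3],
              [2, 1, -999, 1],
              [1, 5, 4, 6],
              [0, 6, -999, 9],
              [1, -999, -999, 6],
              [8, 4, 4, 8]])
print(row_percentile_ignoring(a, 40))  # [6.2 1.  4.2 4.8 3.  4.8]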
1
3
79,480,952
2025-3-3
https://stackoverflow.com/questions/79480952/drawing-line-between-hand-landmarks
here is my code that draws landmarks on hand using mediapipe import cv2 import time import mediapipe as mp mp_holistic = mp.solutions.holistic holistic_model = mp_holistic.Holistic( min_detection_confidence=0.5, min_tracking_confidence=0.5 ) # Initializing the drawing utils for drawing the facial landmarks on image mp_drawing = mp.solutions.drawing_utils # (0) in VideoCapture is used to connect to your computer's default camera capture = cv2.VideoCapture(0) # Initializing current time and precious time for calculating the FPS previousTime = 0 currentTime = 0 while capture.isOpened(): # capture frame by frame ret, frame = capture.read() # resizing the frame for better view frame = cv2.resize(frame, (800, 600)) # Converting the from BGR to RGB image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # Making predictions using holistic model # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False results = holistic_model.process(image) image.flags.writeable = True # Converting back the RGB image to BGR image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # Drawing the Facial Landmarks # mp_drawing.draw_landmarks( # image, # results.face_landmarks, # mp_holistic.FACEMESH_CONTOURS, # mp_drawing.DrawingSpec( # color=(255, 0, 255), # thickness=1, # circle_radius=1 # ), # mp_drawing.DrawingSpec( # color=(0, 255, 255), # thickness=1, # circle_radius=1 # ) # ) # Drawing Right hand Land Marks mp_drawing.draw_landmarks( image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS ) for landmark in mp_holistic.HandLandmark: print(landmark,landmark.value) # Drawing Left hand Land Marks mp_drawing.draw_landmarks( image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS ) # Calculating the FPS currentTime = time.time() fps = 1 / (currentTime - previousTime) previousTime = currentTime # Displaying FPS on the image cv2.putText(image, str(int(fps)) + " FPS", (10, 70), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2) # Display the resulting image cv2.imshow("Facial and Hand Landmarks", image) # Enter key 'q' to break the loop if cv2.waitKey(5) & 0xFF == ord('q'): break # When all the process is done # Release the capture and destroy all windows capture.release() cv2.destroyAllWindows() for both left and right hand, but my goal is : to draw straight line between hand, in order to understand my question, let us discuess following image : from the figure we can say for instance that line between hands should be perfectly horizontan (so angle should be zero, as if we consider following equation : it is easy to say that in order to be perfectly horizontal lines , y coordinates should be equal to each other, so please help me to detect how to determine coordinates and how to draw line between two point? drawing is easy as cv2.line exist, but what about coordinates?
You're using the built-in Mediapipe function draw_landmarks to handle all the drawings. This function takes an image, a normalized landmark list, and connections as inputs. However, the NormalizedLandmarkList type in Mediapipe doesn’t support merging multiple landmark lists, making it difficult to pass landmarks for both hands to the function and find the connections between them. An alternative approach is to extract the coordinates from the hand landmarks and draw a line using the cv2.line method, as you mentioned. Here’s the code to implement this approach (it also needs import numpy as np at the top of your script):

# extract hand landmarks from mediapipe result
right_hand_landmarks = results.right_hand_landmarks
left_hand_landmarks = results.left_hand_landmarks

# try to render a line if both hands are detected
if right_hand_landmarks and left_hand_landmarks:
    # find the position of the wrist landmark (it is normalized, so multiply
    # by the image width and height for x and y respectively)
    right_wrist = np.array([right_hand_landmarks.landmark[0].x * image.shape[1],
                            right_hand_landmarks.landmark[0].y * image.shape[0]]).astype("int")
    left_wrist = np.array([left_hand_landmarks.landmark[0].x * image.shape[1],
                           left_hand_landmarks.landmark[0].y * image.shape[0]]).astype("int")

    # draw a line between the two wrists
    cv2.line(image, right_wrist, left_wrist, color=(255,255,255), thickness=3)

If you want to select a landmark other than the wrist, check this link to see the landmark indices.
1
3
79,475,324
2025-2-28
https://stackoverflow.com/questions/79475324/fitting-a-function-to-exponentially-decreasing-numbers-ensuring-equal-weight-fo
So this question is based on a biochemical experiment. For those who know a bit about biochemistry it is an enzyme kinetics experiment. I have a dilution series of an activator (a or x) and am measuring the enzyme velocity (y). The fitting equation (mmat) is derived from a biological model. The challenge I'm facing is that in this model, the velocity doesn't reach zero or worse reach negative numbers. This crucial information is hidden in the data points at very low concentrations. When I fit my data using the equation, these low-concentration data points are often ignored due to the least-squares fitting function. However, I need to extract the information (alpha) hidden in these low-concentration points. I believe I should use a logarithmic scale for x to ensure that the least-squares function doesn't overlook those data points. However, I'm struggling to implement this and could use some guidance, especially since I'm not that well versed in math or Python. In my code example I implemented: Perfect test Data to show how it should look like. (orange in plot) Real Data, that can be fitted but is reaching negative velocity which is physically impossible . Sometimes it is also close to impossible to fit. (blue in plot) Also I made a fit using my obviously stupid idea of fitting the function with logarithmic scaling which should be possible I think but not how I tried it. This is basically what I would like to achieve... (green in plot) import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import curve_fit # My functions based on Michaelis-Menten implementing activators def mmat(a, v_max, K_D, alpha): S=2.5e6 K_S=1e6 return (v_max * (1 + (a/(alpha*K_D))) * S) / ((K_S/alpha)*(1 + (a/K_D))+S*(1 + (a/(alpha*K_D)))) def log_mmat(a, v_max, K_D, alpha): S=2.5e6 K_S=1e6 return (v_max * (1 + (np.log10(a)/(alpha*K_D))) * S) / ((K_S/alpha)*(1 + (np.log10(a)/K_D))+S*(1 + (np.log10(a)/(alpha*K_D)))) # Filling my dataframes with optimal data (test_data) and measured data (real_data) lp = np.logspace(-0.5, 4, num=1000) test_data_x = np.logspace(-0.5, 4, num=12) test_data_y = np.array([28,35,51,90,173,314,486,625,704,741,757,763]) test_data = pd.DataFrame({'x': test_data_x, 'y': test_data_y}) real_data_y = np.array([ 12.87397621, 12.64915001, 14.22025688, 22.62179769, 41.76414236, 62.49097641, 179.72713147, 309.08516559, 497.11213079, 449.61759694, 469.24974154, 360.40709778, 13.67425041, 10.42765308, 23.52110248, 33.85240147, 47.72738955, 132.63407297, 213.80971215, 290.36035529, 371.0033705, 414.93547975, 426.62376543, 432.21230229]) real_data_x = np.array([1.00242887e+00, 2.25546495e+00, 5.07479613e+00, 1.14182913e+01, 2.56911554e+01, 5.78050997e+01, 1.30061474e+02, 2.92638317e+02, 6.58436214e+02, 1.48148148e+03, 3.33333333e+03, 7.50000000e+03, 1.00242887e+00, 2.25546495e+00, 5.07479613e+00, 1.14182913e+01, 2.56911554e+01, 5.78050997e+01, 1.30061474e+02, 2.92638317e+02, 6.58436214e+02, 1.48148148e+03, 3.33333333e+03, 7.50000000e+03,]) real_data = pd.DataFrame({'x': real_data_x, 'y': real_data_y}) # Function fitting p0 = [max(test_data['y']), 200, 0.0001] popt_test, pcov_test = curve_fit(mmat, test_data['x'], test_data['y'], p0, maxfev=10000) p0 = [max(real_data['y']), 200, 0.0001] popt_real, pcov_real = curve_fit(mmat, real_data['x'], real_data['y'], p0, maxfev=10000) popt_log, pcov_log = curve_fit(log_mmat, real_data['x'], real_data['y'], p0, maxfev=100000) vm_test=popt_test[0] kd_test=popt_test[1] al_test=popt_test[2] vm_real=popt_real[0] kd_real=popt_real[1] 
al_real=popt_real[2] vm_log=popt_log[0] kd_log=popt_log[1] al_log=popt_log[2] # Plotting fig = plt.figure(figsize=(25, 10)) ax1 = plt.subplot2grid((2, 5), (0, 0)) ax2 = plt.subplot2grid((2, 5), (0, 1)) ax3 = plt.subplot2grid((2, 5), (1, 0)) ax4 = plt.subplot2grid((2, 5), (1, 1)) ax5 = plt.subplot2grid((2, 5), (1, 2)) ax1.set_title('perfect data - normal x-scale') ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.scatter(test_data['x'], test_data['y'], color='black') ax1.plot(lp, mmat(lp, vm_test, kd_test, al_test), color='orange') ax2.set_title('perfect data - log x-scale') ax2.set_xlabel('x') ax2.scatter(np.log10(test_data['x']), test_data['y'], color='black') ax2.plot(np.log10(lp), mmat(lp, vm_test, kd_test, al_test), color='orange') ax3.set_title('real data - normal x-scale') ax3.set_xlabel('x') ax3.set_ylabel('y') ax3.scatter(real_data['x'], real_data['y'], color='black') ax3.plot(lp, mmat(lp, vm_real, kd_real, al_real), color='blue') ax4.set_title('real data - log x-scale') ax4.set_xlabel('x') ax4.scatter(np.log10(real_data['x']), real_data['y'], color='black') ax4.plot(np.log10(lp), mmat(lp, vm_real, kd_real, al_real), color='blue') ax5.set_title('real data - log(x) for fitting') ax5.set_xlabel('x') ax5.scatter(np.log10(real_data['x']), real_data['y'], color='black') ax5.plot(np.log10(lp), log_mmat(lp, vm_log, kd_log, al_log), color='lightgreen') plt.tight_layout() plt.show() Could someone please help me modify this code to fit the equation using x on a logarithmic scale or at least tell me why this wouldn't work? Any tips or suggestions would be greatly appreciated! Thanks in advance!
Okay, so it seems like I solved my problem and if at some point someone is searching for something similar, here is the answer: Scipy's curve_fit function, which uses least-squares fitting, aims to minimize the sum of the squared differences between the calculated and observed y-data. The equation for this is: sum(((f(xdata,∗popt)−ydata) / σ)**2) Initially, I mistakenly believed that the x-values directly influence the fitting equation. While x-values do play a role, it's less direct. The standard deviation (σ) is significantly higher for higher concentrations. Using these values as-is can greatly affect the fit quality because they are weighted equally with other values, despite having a higher likelihood of being less accurate in absolute terms. This equal weighting happens because the least-squares function defaults σ to one if not specified. However, if your dataset includes replicates, you can calculate σ and use it to weight the data points according to it. This ensures that each data point is appropriately considered in the fitting process. To show how this looks like, here is the code: import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import curve_fit # My functions based on Michaelis-Menten implementing activators def mmat(a, v_max, K_D, alpha): S=2.5e6 K_S=1e6 return (v_max * (1 + (a/(alpha*K_D))) * S) / ((K_S/alpha)*(1 + (a/K_D))+S*(1 + (a/(alpha*K_D)))) # Filling my dataframes with measured data (real_data) lp = np.logspace(-0.5, 4, num=1000) df = pd.DataFrame({ 'x': [1.00242887e+00, 2.25546495e+00, 5.07479613e+00, 1.14182913e+01, 2.56911554e+01, 5.78050997e+01, 1.30061474e+02, 2.92638317e+02, 6.58436214e+02, 1.48148148e+03, 3.33333333e+03, 7.50000000e+03], 'y1': [ 12.87397621, 12.64915001, 14.22025688, 22.62179769, 41.76414236, 62.49097641, 179.72713147, 309.08516559, 497.11213079, 449.61759694, 469.24974154, 360.40709778], 'y2': [13.67425041, 10.42765308, 23.52110248, 33.85240147, 47.72738955, 132.63407297, 213.80971215, 290.36035529, 371.0033705, 414.93547975, 426.62376543, 432.21230229]}) # calculating standard deviation df['sigma'] = df.iloc[:,[1,2]].std(axis=1) dfm = df.melt(id_vars=['x','sigma'],value_vars=['y1','y2']) dfm.columns=['x', 'sigma', 'measurement', 'y'] # Function fitting p0 = [max(dfm['y']), 200, 0.0001] popt_real, pcov_real = curve_fit(mmat, dfm['x'], dfm['y'],p0, sigma=dfm['sigma'] ,maxfev=10000) perr_real = np.sqrt(np.diag(pcov_real)) vm_real=popt_real[0] kd_real=popt_real[1] al_real=popt_real[2] # Plotting fig = plt.figure(figsize=(20, 7.5)) ax1 = plt.subplot2grid((2, 5), (0, 0)) ax2 = plt.subplot2grid((2, 5), (0, 1)) ax1.set_title('real data - normal x-scale') ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.scatter(real_data['x'], real_data['y'], color='black') ax1.plot(lp, mmat(lp, vm_real, kd_real, al_real), color='blue') ax2.set_title('real data - log x-scale') ax2.set_xlabel('x') ax2.scatter(np.log10(real_data['x']), real_data['y'], color='black') ax2.plot(np.log10(lp), mmat(lp, vm_real, kd_real, al_real), color='blue') plt.tight_layout() plt.show() print('vmax: ' + str(popt_real[0]) + ' \u00B1 ' + perr_real[0].astype(str)) print('K_D: ' + str(popt_real[1]) + ' \u00B1 ' + perr_real[1].astype(str)) print('alpha: ' + str(popt_real[2]) + ' \u00B1 ' + perr_real[2].astype(str)) Maybe this helps some other lost soul...
2
2
79,481,870
2025-3-3
https://stackoverflow.com/questions/79481870/how-do-you-unwrap-a-python-property-to-get-attributes-from-the-getter
From "outside", how can I access attributes in a property's getter function whether by unwrapping it or some other way? In a python property, its __get__ function seems to be a wrapper of a wrapper of a wrapper ... Using inspect.unwrap on the __get__ function returns another wrapper, not the getter. In fact, what unwrap returns seems to be the exact same object and id as what was unwrapped. See the MRE and its output below. Yes, I know that in simple cases I can get the attribute through classinstance.__class__.propertygetterfunction._privateattribute In my real situation, not the MRE below, the getter had been dynamically defined and is referenced only in synthesizing the property. Once created, there are no other references to the getter besides what is buried in the property. What is the path from aclass.aproperty.__get__ to the getter and its attributes? In the following MRE example, the getter contains one attribute containing how may times the getter has been called, and another containing the property's internal value. ref: https://medium.com/@genexu/extracting-the-wrapped-function-from-a-decorator-on-python-call-stack-2ee2e48cdd8e from inspect import unwrap class propholder: def __init__(self): self.__class__.propgetter._usecount=0 def propgetter(self): self.__class__.propgetter._usecount=self.__class__.propgetter._usecount +1 print(f"getcount= {self.__class__.propgetter._usecount}") return self.__class__.propgetter._pvalue # set by the setter def propsetter(self, v): self.__class__.propgetter._pvalue=v # function to delete _age attribute def propdeleter(self): del self.__class__.propgetter del self.__class__.propsetter del self.__class__.propdeleter aprop = property(propgetter, propsetter, propdeleter) ap = propholder() for i in [10,11,12]: ap.aprop=i print(f" ap.aprop={ap.aprop} ") prop_get=ap.__class__.aprop.__get__ u=unwrap(prop_get) print( f"\n prop_get=ap.__class__.aprop.__get__ = {prop_get} of type {prop_get.__class__} at {id(prop_get)} ") print( f" u=unwrap(prop_get) = {u} of type {u} at {id(u)} ") un=u for i in range(2,6): u=unwrap(u) print( f" u=unwrap(u) = {u} of type {u} at {id(u)} ") Results: getcount= 1 ap.aprop=10 getcount= 2 ap.aprop=11 getcount= 3 ap.aprop=12 prop_get=ap.__class__.aprop.__get__ = <method-wrapper '__get__' of property object at 0x72b6af94d940> of type <class 'method-wrapper'> at 126128955449232 u=unwrap(prop_get) = <method-wrapper '__get__' of property object at 0x72b6af94d940> of type <method-wrapper '__get__' of property object at 0x72b6af94d940> at 126128955449232 u=unwrap(u) = <method-wrapper '__get__' of property object at 0x72b6af94d940> of type <method-wrapper '__get__' of property object at 0x72b6af94d940> at 126128955449232 u=unwrap(u) = <method-wrapper '__get__' of property object at 0x72b6af94d940> of type <method-wrapper '__get__' of property object at 0x72b6af94d940> at 126128955449232 u=unwrap(u) = <method-wrapper '__get__' of property object at 0x72b6af94d940> of type <method-wrapper '__get__' of property object at 0x72b6af94d940> at 126128955449232 u=unwrap(u) = <method-wrapper '__get__' of property object at 0x72b6af94d940> of type <method-wrapper '__get__' of property object at 0x72b6af94d940> at 126128955449232
ap.__class__.aprop.__get__ is a method of ap.__class__.aprop, so when you access the __get__ attribute of ap.__class__.aprop you invoke the MethodType descriptor, which stores the object that the method is bound to as the __self__ attribute. In this case, the object it's bound to is ap.__class__.aprop, a property descriptor, which stores the getter function in the fget attribute. So in your demo code, the getter function can be obtained from ap.__class__.aprop.__get__ with: ap.__class__.aprop.__get__.__self__.fget And its private attribute can be accessed with: ap.__class__.aprop.__get__.__self__.fget._usecount Demo: https://ideone.com/FWQdnH Note that inspect.unwrap does nothing more than following the chain of __wrapped__ attributes stored by the functools.update_wrapper function or any wrapper function following the same protocol. It does not magically unwrap functions stored in any other attributes.
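A tiny self-contained illustration of that chain (my own example, not the question's code):

class C:
    def getter(self):
        return 42
    getter._usecount = 0
    p = property(getter)

# C.p accessed on the class returns the property object itself,
# so C.p.__get__ is a method-wrapper bound to that property.
bound_get = C.p.__get__
prop = bound_get.__self__          # the property descriptor
print(prop.fget is C.p.fget)       # True: this is the original getter function
print(prop.fget._usecount)         # 0: its attributes are reachable again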
2
1
79,482,105
2025-3-3
https://stackoverflow.com/questions/79482105/pyserial-asyncio-client-server-in-python-3-8-not-communicating-immediately
I'm learning python and asyncio and after having success with asyncio for a TCP client/server I took my first stab at creating a serial client/server using pyserial-asyncio running in bash on a Raspberry Pi 5 using Python 3.8 (I cannot change version). Here is the server: import asyncio import serial_asyncio class UARTProtocol(asyncio.Protocol): def __init__(self): self.transport = None def connection_made(self, transport): self.transport = transport print('Port opened', transport) def data_received(self, data): print('Data received:', data.decode()) # Echo received data back (example) self.transport.write(data) # Close the connection if 'exit' is received if data == b"exit\r": self.transport.close() def connection_lost(self, exc): print('Port closed') self.transport = None def pause_writing(self): print('pause writing') print(self.transport.get_write_buffer_size()) def resume_writing(self): print(self.transport.get_write_buffer_size()) print('resume writing') async def run_uart_server(): loop = asyncio.get_running_loop() try: transport, protocol = await serial_asyncio.create_serial_connection(loop, UARTProtocol, '/dev/ttyAMA2', baudrate=9600) print("UART server started.") await asyncio.Future() # Run forever except serial.serialutil.SerialException as e: print(f"Error: Could not open serial port: {e}") finally: if transport: transport.close() if __name__ == "__main__": asyncio.run(run_uart_server()) and the client: import asyncio import serial_asyncio async def uart_client(port, baudrate): try: reader, writer = await serial_asyncio.open_serial_connection(url=port, baudrate=baudrate) print(f"Connected to {port} at {baudrate} bps") async def receive_data(): while True: try: data = await reader.readline() if data: print(f"Received: {data.decode().strip()}") except Exception as e: print(f"Error reading data: {e}") break async def send_data(): while True: message = input("Enter message to send (or 'exit' to quit): ") if message.lower() == 'exit': break writer.write((message + '\n').encode()) # writer.write_eof() await writer.drain() print(f"Sent: {message}") await asyncio.gather(receive_data(), send_data()) except serial.SerialException as e: print(f"Error opening serial port: {e}") finally: if 'writer' in locals(): writer.close() await writer.wait_closed() print("Connection closed.") if __name__ == "__main__": asyncio.run(uart_client('/dev/ttyAMA1', 9600)) I want the client to prompt me for some text which is immediately sent to the server and printed there. I can get the client to prompt me for text, but the server doesn't display any of it until after I type exit in the client to close the connection and then it prints all of the text I typed in the client loop. Among many other things, I've tried adding writer.write_eof() in the client (see commented out line in the client code below) and that succeeds in the server immediately displaying the preceding text from the client but then the client never prompts me for input again. If I run the server and just do echo foo > /dev/ttyAMA1 from bash the server prints foo immediately so I suspect the client is the problem. What am I doing wrong?
The problem here is that input is a blocking call. We all know that input doesn't return until the user types some text and hits the enter key. We also know that it's not an async function, therefore it doesn't use the event loop. So it can't run other Tasks while it's waiting for the user to type something. All the asyncio Tasks are frozen until the user does something.

All functions that aren't async have this same quality. Most functions do not wait for the user, and most of them return fairly quickly. It's not normally a big deal. But to use input in an asyncio program requires a little work.

The solution is to issue the call to input in another thread. That thread will be blocked, but the main thread will keep going. All the other async Tasks will keep running, except for the one that's waiting for the user. The convenient way to do this is with loop.run_in_executor(), which was available before Python 3.8. Here is a little script to demonstrate its use. I tested this with 3.13 but I avoided using any feature introduced after 3.8. Or at least I hope so.

import sys
import asyncio

async def ticks():
    while True:
        await asyncio.sleep(1.0)
        sys.stdout.write(".")
        sys.stdout.flush()

async def ask():
    def inp():
        return input("Command?")
    while True:
        loop = asyncio.get_event_loop()
        x = await loop.run_in_executor(None, inp)
        print(x)
        if x == 'stop':
            break

async def main():
    tix = asyncio.create_task(ticks())
    await ask()
    tix.cancel()

if __name__ == "__main__":
    asyncio.run(main())
1
1
79,482,376
2025-3-3
https://stackoverflow.com/questions/79482376/pandas-dropping-first-group-of-values
I want to drop the first group of rows based on a column's value. Here is an example of a table stage h1 h2 h3 0 4 55 55 0 5 66 44 0 4 66 33 1 3 33 55 0 5 44 33 Get the column stage, get all the first group of rows that start with 0, and drop the rows in the table. The table will look like this: stage h1 h2 h3 1 3 33 55 0 5 44 33 This is what I did: import pandas as pd data = {'stage': [0, 0, 0, 1, 0], 'h1': [4, 5, 4, 3, 5], 'h2': [55, 66, 66, 33, 44], 'h3': [55, 44, 33, 55, 33]} df = pd.DataFrame(data) # Find indices of the first group of rows with uiwp_washing_stage = 0 indices_to_drop = [] for i in range(len(df)): if df['stage'].iloc[i] == 0: indices_to_drop.append(i) else: break df = df.drop(indices_to_drop) df = df.reset_index(drop=True) print(df) The above seems to work, but if the file is too big it takes a while, is there a Pands way of doing this?
df.iloc[df['stage'].diff().idxmax():]

First, find the first transition from 0 to 1 by computing the difference between consecutive values in the stage column (using diff). Then use idxmax to locate the index where that first transition occurs.

NB: in case there are transitions that differ by more than 1 unit, use:

df.iloc[df['stage'].diff().gt(0).idxmax():]

Output:

   stage  h1  h2  h3
3      1   3  33  55
4      0   5  44  33
1
4
79,480,120
2025-3-3
https://stackoverflow.com/questions/79480120/why-result-of-scaling-each-column-always-equal-to-zero
I am using minmaxscaler trying to scaling each column. The scaled result for each column is always all zero. For example , below the values of df_test_1 after finishing scaling is all zero. But even with all values of zero, using inverse_transferm from this values of zero can still revert back to original values. But why the results of scaled are shown all zero? from sklearn.preprocessing import MinMaxScaler df_dict={'A':[-1,-0.5,0,1],'B':[2,6,10,18]} df_test=pd.DataFrame(df_dict) print('original scale data') print(df_test) scaler_model_list=[] df_test_1=df_test.copy() for col in df_test.columns: scaler = MinMaxScaler() scaler_model_list.append(scaler) # need to save scalerfor each column since there are different if we want to use inverse_transform() later df_test_1.loc[:,col]=scaler.fit_transform(df_test_1.loc[:,col].values.reshape(1,-1))[0] print('after finishing scaling') print(df_test_1) print('after inverse transformation') print(scaler_model_list[0].inverse_transform(df_test_1.iloc[:,0].values.reshape(1,-1))) print(scaler_model_list[1].inverse_transform(df_test_1.iloc[:,1].values.reshape(1,-1))) original scale data A B 0 -1.0 2 1 -0.5 6 2 0.0 10 3 1.0 18 after finishing scaling A B 0 0.0 0 1 0.0 0 2 0.0 0 3 0.0 0 after inverse transformation [[-1. -0.5 0. 1. ]] [[ 2. 6. 10. 18.]]
According to the MinMaxScaler doc:

X : array-like of shape (n_samples, n_features)
    The data used to compute the per-feature minimum and maximum used for later scaling along the features axis.

When you reshape your data here:

df_test_1.loc[:,df_test.columns[1]].values.reshape(1,-1)

you get 1 row of data with 4 columns (and only 1 value in each of them) instead of 1 column with 4 rows. Each single-value "feature" is then its own minimum and maximum, so everything scales to 0. You can fix your code in one of the following ways:

Fix the reshape dimensions:

df_test_1.loc[:,col]=scaler.fit_transform(df_test_1.loc[:,col].values.reshape(-1,1))

Pass a DataFrame instead of a reshaped Series by indexing with a list:

df_test_1.loc[:,col]=scaler.fit_transform(df_test_1.loc[:,[col]])

Pass all required columns at once:

df_test_2=df_test.copy()
scaler = MinMaxScaler()
cols = df_test.columns
df_test_2.loc[:,cols]=scaler.fit_transform(df_test_2.loc[:,cols])
1
1
79,481,158
2025-3-3
https://stackoverflow.com/questions/79481158/keep-rows-where-a-field-of-a-liststruct-column-contains-a-message
Say I have the following data: import duckdb rel = duckdb.sql(""" FROM VALUES ([{'a': 'foo', 'b': 'bta'}]), ([]), ([{'a': 'jun', 'b': 'jul'}, {'a':'nov', 'b': 'obt'}]) df(my_col) SELECT * """) which looks like this: ┌──────────────────────────────────────────────┐ │ my_col │ │ struct(a varchar, b varchar)[] │ ├──────────────────────────────────────────────┤ │ [{'a': foo, 'b': bta}] │ │ [] │ │ [{'a': jun, 'b': jul}, {'a': nov, 'b': obt}] │ └──────────────────────────────────────────────┘ I would like to keep all rows where for any of the items in one of the elements of 'my_col', field 'a' contains the substring 'bt' So, expected output: ┌──────────────────────────────────────────────┐ │ my_col │ │ struct(a varchar, b varchar)[] │ ├──────────────────────────────────────────────┤ │ [{'a': foo, 'b': bta}] │ │ [{'a': jun, 'b': jul}, {'a': nov, 'b': obt}] │ └──────────────────────────────────────────────┘ How can I write a SQL query to do that?
Maybe list_sum() the bools or list_bool_or()? https://duckdb.org/docs/stable/sql/functions/list.html#list_-rewrite-functions

duckdb.sql("""
    FROM VALUES
        ([{'a': 'foo', 'b': 'bta'}]),
        ([]),
        ([{'a': 'jun', 'b': 'jul'}, {'a':'nov', 'b': 'obt'}]) df(my_col)
    SELECT *
    WHERE list_bool_or(['bt' in s.b for s in my_col])
""")

┌──────────────────────────────────────────────┐
│                    my_col                    │
│        struct(a varchar, b varchar)[]        │
├──────────────────────────────────────────────┤
│ [{'a': foo, 'b': bta}]                       │
│ [{'a': jun, 'b': jul}, {'a': nov, 'b': obt}] │
└──────────────────────────────────────────────┘

The list comprehension is the same as list_apply(my_col, s -> 'bt' in s.b)
2
1
79,481,132
2025-3-3
https://stackoverflow.com/questions/79481132/aligning-grid-columns-between-parent-and-container-frames-using-tkinter
Using Python and tkinter I have created a dummy app with a scrollable frame. There are two column headings in a container frame. The container frame also contains a canvas. Inside the canvas is an inner frame with two columns of scrollable content. Problem: the column headings do not align with the columns, presumably because the columns in the container frame and the inner frame do not align. But, as far as I can see, I have configured the columns exactly the same way in both frames. Can anyone give me a clue as to what I am overlooking? The code is below. I cannot claim credit for all of the code, as I was following a tutorial from Tutorialspoint (https://www.tutorialspoint.com/implementing-a-scrollbar-using-grid-manager-on-a-tkinter-window), but the tutorial only worked with a single column of scrollable data where the heading scrolled with the data. I changed this so the heading stays put when you scroll and I needed 2 columns of data. N.B. If I uncomment the line starting innerframe.grid(..., the columns match up (almost), but the scrollbar disappears. import tkinter as tk from tkinter import ttk def _on_mousewheel(event): scrollcanvas.yview_scroll(int(-1 * (event.delta / 120)), "units") root=tk.Tk() root.title("Scrollable Grid Example") # Create outer Frame for Grid Layout outerframe = ttk.Frame(root) outerframe.grid(row=0, column=0, columnspan=2, sticky="nsew") # Create a Canvas and Scrollbar scrollcanvas = tk.Canvas(outerframe) scrollbar = ttk.Scrollbar(outerframe, orient="vertical",command=scrollcanvas.yview) scrollcanvas.configure(yscrollcommand=scrollbar.set) # Create label to outer frame/canvas so that it stays put when rows below scroll. label = tk.Label(outerframe, text="Scrollable Buttons", width=20) label.grid(row=0, column=0, pady=5, sticky="w") label1 = tk.Label(outerframe, text="Scrollable Text", width=20) label1.grid(row=0, column=1, pady=5, sticky="w") # Create inner Frame for Scrollable Content innerframe=ttk.Frame(scrollcanvas) # Set binding to adjusts the canvas scroll region when size of the inner frame changes innerframe.bind( "<Configure>", lambda e: scrollcanvas.configure( scrollregion=scrollcanvas.bbox("all") ) ) # Add labels and buttons to the Content Frame for i in range(0, 20): button=ttk.Button(innerframe, text=f"Button {i}", width=20) button.grid(row=i, column=0, pady=5, sticky="w" ) label2 = tk.Label(innerframe, text=f"Text Line {i}", width=20) label2.grid(row=i, column=1, pady=5, sticky="w") # Ensure the window and components expand proportionally when resized. root.rowconfigure(0, weight=1) outerframe.rowconfigure(0, weight=1) innerframe.rowconfigure(0, weight=1) root.columnconfigure(0, weight=1) root.columnconfigure(1, weight=1) outerframe.columnconfigure(0, weight=1) outerframe.columnconfigure(1, weight=1) innerframe.columnconfigure(0, weight=1) innerframe.columnconfigure(1, weight=1) # Place the canvas and scrollbar onto the window, with the scrollbar adjacent to the canvas scrollcanvas.create_window((0, 0), window=innerframe, anchor="nw") scrollcanvas.grid(row=1, column=0, columnspan=2, sticky="nsew") # innerframe.grid(row=1, column=0, columnspan=2, sticky="nw") scrollbar.grid(row=1, column=2, sticky="ns") # Bind the Canvas to Mousewheel Events scrollcanvas.bind_all("<MouseWheel>", _on_mousewheel) # Run the Tkinter Event Loop root.mainloop() def _on_mousewheel(event): scrollcanvas.yview_scroll(int(-1 * (event.delta / 120)), "units")
It is because the innerframe does not have the same width of the canvas (which is the sum of the widths of the two labels at the top). You need to: set highlightthickness=0 in tk.Canvas(...) set the width of innerframe to the same as scrollcanvas (in callback of <Configure> event on scrollcanvas) add uniform=... to outframe.columnconfigure(...) and innerframe.columnconfigure(...) to make sure the two columns inside both outerframe and innerframe have same width Updated code: import tkinter as tk from tkinter import ttk def _on_mousewheel(event): scrollcanvas.yview_scroll(int(-1 * (event.delta / 120)), "units") root=tk.Tk() root.title("Scrollable Grid Example") # Create outer Frame for Grid Layout outerframe = ttk.Frame(root) outerframe.grid(row=0, column=0, columnspan=2, sticky="nsew") # Create a Canvas and Scrollbar scrollcanvas = tk.Canvas(outerframe, highlightthickness=0) scrollbar = ttk.Scrollbar(outerframe, orient="vertical",command=scrollcanvas.yview) scrollcanvas.configure(yscrollcommand=scrollbar.set) # Create label to outer frame/canvas so that it stays put when rows below scroll. label = tk.Label(outerframe, text="Scrollable Buttons", width=20, bd=1, relief='raised') label.grid(row=0, column=0, pady=5, sticky="w") label1 = tk.Label(outerframe, text="Scrollable Text", width=20, bd=1, relief='raised') label1.grid(row=0, column=1, pady=5, sticky="w") # Create inner Frame for Scrollable Content innerframe=tk.Frame(scrollcanvas, bg='gray80') # Set binding to adjusts the canvas scroll region when size of the inner frame changes innerframe.bind( "<Configure>", lambda e: scrollcanvas.configure( scrollregion=scrollcanvas.bbox("all") ) ) # Add labels and buttons to the Content Frame for i in range(0, 20): button=ttk.Button(innerframe, text=f"Button {i}", width=20) button.grid(row=i, column=0, pady=5, sticky="w" ) label2 = tk.Label(innerframe, text=f"Text Line {i}", width=20, bd=1, relief='solid') label2.grid(row=i, column=1, pady=5, sticky="w") # Ensure the window and components expand proportionally when resized. root.rowconfigure(0, weight=1) outerframe.rowconfigure(0, weight=1) #innerframe.rowconfigure(0, weight=1) # this line is not necessary root.columnconfigure(0, weight=1) root.columnconfigure(1, weight=1) outerframe.columnconfigure(0, weight=1, uniform=1) # added setting uniform option outerframe.columnconfigure(1, weight=1, uniform=1) innerframe.columnconfigure(0, weight=1, uniform=1) innerframe.columnconfigure(1, weight=1, uniform=1) # Place the canvas and scrollbar onto the window, with the scrollbar adjacent to the canvas scrollcanvas.create_window((0, 0), window=innerframe, anchor="nw", tags=('inner')) # added tags scrollcanvas.grid(row=1, column=0, columnspan=2, sticky="nsew") scrollbar.grid(row=1, column=2, sticky="ns") # Bind the Canvas to Mousewheel Events scrollcanvas.bind_all("<MouseWheel>", _on_mousewheel) # set width of innerframe to same of scrollcanvas scrollcanvas.bind('<Configure>', lambda e: scrollcanvas.itemconfig('inner', width=e.width)) # Run the Tkinter Event Loop root.mainloop() Note that I have add borders to those labels in order to see the effect easily. Result:
2
2
79,480,437
2025-3-3
https://stackoverflow.com/questions/79480437/find-column-name-with-highest-value
On the Pandas dataframe below I use max to find the maximum value: df["Max"] = df[['Day 1','Day 2','Day 4']].max(axis=1) Car Day 1 Day 2 Day 4 Max Car1 4 7 3 7 car2 8 2 1 8 What do I do to find when the maximum value occurs instead of the value itself, as in the table example below? Car Day 1 Day 2 Day 4 When Car1 4 7 3 Day 2 car2 8 2 1 Day 1
Yes, you can achieve this in Pandas using .idxmax(axis=1), which returns the column name where the maximum value occurs. Documentation - pandas.DataFrame.idxmax import pandas as pd data = { 'Car': ['Car1', 'Car2'], 'Day 1': [4, 8], 'Day 2': [7, 2], 'Day 4': [3, 1] } df = pd.DataFrame(data) df.set_index('Car', inplace=True) # Find the column name where the maximum value occurs df['When'] = df.idxmax(axis=1) print(df) Output:- Day 1 Day 2 Day 4 When Car Car1 4 7 3 Day 2 Car2 8 2 1 Day 1
1
1
79,480,218
2025-3-3
https://stackoverflow.com/questions/79480218/python-package-installation-fails-getting-requirements-to-build-wheel-did-not
In my python projects I create a list of required packages by pip freeze > requirements.txt it automatically list the packages with version. But after a while when I reinstall the package or trying to run other people project I have to install the dependencis with pip install -r requirements.txt. In this process some of the packages(For example, Pillow, numpy, etc) can't install the specific version that listed. By removing the version number it can install successfully. In my current case, My project have requirement of pillow==10.3.0 but it shows the following error. Collecting pillow==10.3.0 (from -r requirements.txt (line 6)) Using cached pillow-10.3.0.tar.gz (46.6 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [21 lines of output] Traceback (most recent call last): File "D:\LMS\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module> main() ~~~~^^ File "D:\LMS\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main json_out["return_val"] = hook(**hook_input["kwargs"]) ~~~~^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\LMS\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel return hook(config_settings) File "C:\Users\wo\AppData\Local\Temp\pip-build-env-bxsls9dh\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\wo\AppData\Local\Temp\pip-build-env-bxsls9dh\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires self.run_setup() ~~~~~~~~~~~~~~^^ File "C:\Users\wo\AppData\Local\Temp\pip-build-env-bxsls9dh\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup exec(code, locals()) ~~~~^^^^^^^^^^^^^^^^ File "<string>", line 33, in <module> File "<string>", line 27, in get_version KeyError: '__version__' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Then I tried to install the package with the same version manually and still fails. So, I installed pillow without version specification and it installed successfully. (venv) D:\LMS>pip install pillow Collecting pillow Using cached pillow-11.1.0-cp313-cp313-win_amd64.whl.metadata (9.3 kB) Using cached pillow-11.1.0-cp313-cp313-win_amd64.whl (2.6 MB) Installing collected packages: pillow Successfully installed pillow-11.1.0 I face this situation often, so I want to know why this happens or what to do in this situations.
Your Python version is 3.13 and it is not compatible with pillow 10.3.0. Pillow 10.3.0 supports Python 3.8 - 3.12. There are no wheels for Python 3.13. So why did pip install pillow work? Installing a package without specifying a version usually attempts to install the latest release. So pip install pillow will give you the stable release of pillow which supports Python 3.13. Always look at the supported python versions of specific packages and verify that the package version you are trying to install is compatible.
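A quick way to narrow this down before editing requirements.txt is to confirm which interpreter the virtualenv uses and then pin a release known to support it. A small sketch (note that pip index versions is an experimental pip subcommand, and the >=11.1 pin simply reflects the version that installed successfully above):

python --version              # e.g. Python 3.13.1 inside the failing venv
pip index versions pillow     # experimental: lists the releases available on PyPI
pip install "pillow>=11.1"    # 11.1.0 ships wheels for Python 3.13

When regenerating requirements.txt with pip freeze, relaxing exact pins such as pillow==10.3.0 to a compatible range can make the file survive interpreter upgrades.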
1
2
79,479,347
2025-3-2
https://stackoverflow.com/questions/79479347/is-it-possible-to-increase-the-space-between-trace-lines-that-are-overlapping-on
I have been searching for this solution in the official site of Plotly, Plotly forum and this forum for 3 days, and did not find it. I tried these following quetsions: How to avoid overlapping text in a plotly scatter plot? Make X axis wider and Y axis narrower in plotly For example, here is the image: You can see that Angola’s, Cabo Verde’s, Mozambique’s, Portugal’s, and São Tomé’s trace lines are overlapping each other on the Y-axis. On the X-axis, these markers of the numbers of population of all these countries between the years 2022 and 2025 are overlapping each other. Here is the CSV data: "Country","2022","2024","2025","2030","2040","2050" "Angola","35.6M","37.9M","39M","45.2M","59M","74.3M" "Brasil","210M","212M","213M","216M","219M","217M" "Cabo Verde","520K","525K","527K","539K","557K","566K" "Moçambique","32.7M","34.6M","35.6M","40.8M","52.1M","63.5M" "Portugal","10.4M","10.4M","10.4M","10.3M","10.1M","9.8M" "São Tomé and Príncipe","226K","236K","240K","265K","316K","365K" And here is the simple and small code: import pandas as pd import plotly.express as px import plotly.graph_objects as go fig = go.Figure() df = pd.read_csv('assets/csv/luso_pop_proj_who_data.csv') for col in df.columns[1:]: df[col] = df[col].replace({"M": "e+06", "K": "e+03"}, regex = True).astype(float) df_long = df.melt(id_vars = "Country", var_name = "Year", value_name = "Population") df_long["Year"] = df_long["Year"].astype(int) fig = px.line( df_long, color = "Country", height = 600, labels = { "Country": "País", "Population": "Número de população", "Year": "Ano" }, template = "seaborn", text = "Population", title = "Projeção da população nos países lusófonos (2022-2050)", width = 1200, x = "Year", y = "Population", ) fig.update_traces( hovertemplate = None, mode = "lines+markers+text", showlegend = True, textfont = dict(size = 15), textposition = "top center", texttemplate = "%{text:.3s}", ) fig.update_layout( margin = dict(l = 0, r = 0, b = 10, pad = 0), hovermode = False ) fig.show( config = config ) I have been testing with these following codes: fig = px.line( facet_row_spacing = 1, ) fig.update_traces( line = dict( backoff = 10, ), ) fig.update_xaxes( automargin = True, autorange = False, fixedrange = True, range = ['2020', '2050'], ) fig.update_yaxes( automargin = True, fixedrange = True, ) fig.update_layout( margin = dict(l = 0, r = 0, b = 10, pad = 0), ) I expected that, with these configurations, the trace lines moved away with the increasing space, but it did not give any effect.
One way to deal with data adjacencies like this is to make the y-axis logarithmic, which sometimes solves the problem, but in your case the effect is limited. My suggestion is to create a subplot with three groups of numbers. Besides, minimise the gaps between the graphs to make them appear as one graph. The key to setting up the subplot is to share the x-axis and the graph ratio. Finally, limits are set for each y-axis to make the text annotations easier to read. from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots(rows=3, cols=1, shared_xaxes=True, vertical_spacing=0, row_heights=[0.3,0.4,0.3]) groups = {'Angola':2, 'Brasil':1, 'Cabo Verde':3, 'Moçambique':2, 'Portugal':2, 'São Tomé and Príncipe':3} for k,v in zip(groups.keys(), groups.values()): dff = df_long[df_long['Country'] == k] text_position = 'bottom center' if k == 'Moçambique' else 'top center' fig.add_trace(go.Scatter( x=dff['Year'], y=dff['Population'], name=k, text=dff['Population'], showlegend = True, hovertemplate = None, textfont = dict(size = 15), textposition = text_position, texttemplate = "%{text:.3s}", mode='lines+markers+text' ),row=v, col=1) fig.update_yaxes(range=[210_000_000,222_000_000],row=1, col=1) fig.update_yaxes(range=[10_000_000,80_000_000],row=2, col=1) fig.update_yaxes(range=[200_000,620_000],row=3, col=1) fig.update_layout(margin=dict(t=20, b=0, l=0,r=0)) fig.show()
2
1
79,479,725
2025-3-2
https://stackoverflow.com/questions/79479725/specify-model-related-fields-for-selection-with-only-function
Is it possible specify related model fields for selection with only() function in query? In this example I got KeyError: 'provider__description' from typing import Optional, List from tortoise import fields from models import BaseModel class Token(BaseModel): id = fields.CharField(primary_key=True, max_length=128) user = fields.ForeignKeyField( model_name="models.User", related_name="user_tokens", on_delete=fields.CASCADE, ) provider = fields.ForeignKeyField( model_name="models.Provider", related_name="provider_tokens", on_delete=fields.CASCADE, ) expires = fields.DatetimeField(default=0) class Provider(BaseModel): id = fields.IntField(pk=True) url = fields.CharField(max_length=255, unique=True, null=False) description = fields.CharField(max_length=255, null=False) inbound_id = fields.IntField(null=False) async def get_tokens(user_id: Optional[int] = None) -> list[Token]: query = Token.all().only("id", "provider__description", "expires").select_related("provider") if user_id: query = query.filter(user_id=user_id) return await query With prefetch_related() got same error. For solve problem I tried select_relation() and prefetch_related() before only(). And tried
I found the solution in the tortoise-orm docs on GitHub. To prefetch only certain fields, you need to use a tortoise.query_utils.Prefetch object: from tortoise.query_utils import Prefetch query = Token.all().prefetch_related( Prefetch("provider", queryset=Provider.all().only("id", "description")), )
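Applied to the get_tokens() helper from the question, an untested sketch (assuming the Token and Provider models shown above) could look like this:

from typing import Optional
from tortoise.query_utils import Prefetch

async def get_tokens(user_id: Optional[int] = None) -> list[Token]:
    query = Token.all().prefetch_related(
        # load only the columns we need from the related Provider rows
        Prefetch("provider", queryset=Provider.all().only("id", "description")),
    )
    if user_id:
        query = query.filter(user_id=user_id)
    return await query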
1
2
79,479,213
2025-3-2
https://stackoverflow.com/questions/79479213/how-to-efficiently-exclude-already-assigned-objects-in-a-django-queryset
I am working on a Django project where I need to filter objects based on their status while excluding those that are already assigned in another model. I have two models: CartObject – Stores all objects. OnGoingProcess – Tracks objects that are currently assigned. Each OnGoingProcess entry has a OneToOneField relationship with CartObject, meaning each object can only be assigned once. My goal is to fetch all objects with a specific status but exclude those that are already assigned in OnGoingProcess. Models: class CartObject(models.Model): object_id = models.CharField(max_length=100, unique=True) status = models.CharField(max_length=50, choices=[("pending", "Pending")]) # Other fields... class OnGoingProcess(models.Model): user = models.OneToOneField(DeliveryProfile, on_delete=models.CASCADE, related_name="ongoing_process") associated_object = models.OneToOneField(CartObject, on_delete=models.CASCADE, related_name="associated_process", blank=True, null=True) # Other fields... Current View Code: @user_passes_test(lambda user: user.is_staff) def process_manager_view(request): # Get objects that are already assigned in OnGoingProcess assigned_objects = OnGoingProcess.objects.values_list('associated_object', flat=True) # Exclude objects that are already assigned available_objects = CartObject.objects.filter(status="pending").exclude(id__in=assigned_objects).order_by("-id") context = { "available_objects": available_objects, } return render(request, "useradmin/available-objects.html", context) Issue: I am using values_list('associated_object', flat=True) to extract the assigned object IDs. Then, I am using exclude(id__in=assigned_objects) to filter out those objects. Is this the most efficient way? Or is there a better Django ORM method to achieve the same result? Should I use Subquery(), isnull=False, or any other approach for better performance? Alternative Solutions I Considered: Option 1: Using isnull=False available_objects = CartObject.objects.filter(status="pending").exclude(associated_process__isnull=False) Pros: Simple, avoids extra queries. Cons: Not sure if it's the best approach for performance. Option 2: Using Subquery from django.db.models import Subquery assigned_objects = OnGoingProcess.objects.values('associated_object') available_objects = CartObject.objects.filter(status="pending").exclude(id__in=Subquery(assigned_objects)) Pros: Optimized for large datasets. Cons: More complex. Option 3: Using Raw SQL (if necessary) from django.db import connection with connection.cursor() as cursor: cursor.execute(""" SELECT id FROM useradmin_cartobject WHERE status='pending' AND id NOT IN (SELECT associated_object FROM useradmin_ongoingprocess) ORDER BY id DESC """) result = cursor.fetchall() available_objects = CartObject.objects.filter(id__in=[row[0] for row in result]) Pros: Performance boost for huge data. Cons: Less readable, database-dependent. Question: What is the best and most efficient Django ORM approach to filter objects while excluding those that are already assigned? Would values_list(), isnull=False, or Subquery() be the recommended way? Are there any performance considerations when working with large datasets? Thank you in advance!
Your query: available_objects = ( CartObject.objects.filter(status='pending') .exclude(id__in=assigned_objects) .order_by('-id') ) will use a subquery, so run as: AND id NOT IN (SELECT associated_object FROM useradmin_ongoingprocess) You can inspect it with: print(available_objects.query) But on databases like MySQL, this is not the most efficient one no. For most databases: available_objects = CartObject.objects.filter(status='pending').filter( associated_process=None ) will yield results efficiently. We can rewrite this to: available_objects = CartObject.objects.filter( status='pending', associated_process=None ) this will generate the same query, but is a bit shorter in code. This works based on the fact that a LEFT OUTER JOIN includes a NULL row for items for which there is no corresponding item at the table for the OnGoingProcess model. Then we thus retrieve only the ones with NULL, so only retain the CartObject with no OnGoingProcess. As for raw queries, one usually uses this if there is no ORM-equivalent available, or when it is very cumbersome to construct it. Raw queries have additional disadvantages: a QuerySet can be filtered, paginated, etc. whereas a raw query is an "end product": you can not manipulate it further, so limiting what you can do with it with Django. It also means that if you change the corresponding models, like adding a column, you will probably have to rewrite the raw queries that work with these models as well.
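Putting the pieces together with the view from the question, a minimal sketch of the recommended version would be:

@user_passes_test(lambda user: user.is_staff)
def process_manager_view(request):
    # LEFT OUTER JOIN on OnGoingProcess; unassigned rows keep a NULL and are retained
    available_objects = CartObject.objects.filter(
        status="pending", associated_process=None
    ).order_by("-id")
    context = {"available_objects": available_objects}
    return render(request, "useradmin/available-objects.html", context)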
4
5
79,475,986
2025-2-28
https://stackoverflow.com/questions/79475986/pipeline-futurewarning-this-pipeline-instance-is-not-fitted-yet
I am working on a fairly simple machine learning problem in the form of a practicum. I am using the following code to preprocess the data: from preprocess.date_converter import DateConverter from sklearn.pipeline import Pipeline from preprocess.nan_fixer import CustomImputer import pandas as pd from preprocess.encoding import FrecuencyEncoding from sklearn.model_selection import train_test_split from sklearn.decomposition import PCA from preprocess.scaler import CustomScaler def basic_preprocess(df, target : str): important_features = ['amt', 'category', 'merchant', 'trans_date_trans_time', 'unix_time', 'dob', 'street', 'merch_lat', 'merch_long', 'city', 'merch_zipcode', 'city_pop', 'job', 'last', 'first', 'cc_num', 'long', 'zip', "is_fraud"] df = df.drop(["trans_num", "Unnamed: 0"], axis=1) df = df[important_features] df = df.copy() # df, df_ = train_test_split(df, test_size=0.5, shuffle=True, random_state=42, stratify=df[target]) df_train, unseen_df = train_test_split(df, test_size=0.2, shuffle=True, random_state=42, stratify=df[target]) df_val, df_test = train_test_split(unseen_df, test_size=0.5, shuffle=True, random_state=42, stratify=unseen_df[target]) pipeline = Pipeline([ ("date_converter", DateConverter("trans_date_trans_time")), ("imputer", CustomImputer(strategy="most_frequent")), ("encoding", FrecuencyEncoding()), ("scaler", CustomScaler(df.drop(target, axis=1).columns.tolist())), ]) pipeline.fit(df_train.drop(target, axis=1), df_train[target]) X_train = pd.DataFrame(pipeline.transform(df_train.drop(target, axis=1)), index=df_train.index) X_test = pd.DataFrame(pipeline.transform(df_test.drop(target, axis=1)), index=df_test.index) X_val = pd.DataFrame(pipeline.transform(df_val.drop(target, axis=1)), index=df_val.index) df_train = pd.concat([X_train, df_train[target]], axis=1) df_test = pd.concat([X_test, df_test[target]], axis=1) df_val = pd.concat([X_val, df_val[target]], axis=1) return [df_train, df_test, df_val] Note: preprocess is a personal library that I created to make some transformers of my own. Here is the code for each of the transformers: class CustomScaler(BaseEstimator, TransformerMixin): """ Receives the numeric columns from the dataframe. Scales their values using RobustScaler. """ def __init__(self, attributes): self.attributes = attributes self.scaler = RobustScaler() # Inicializa el escalador def fit(self, X, y=None): # Ajusta el escalador solo con los datos de entrenamiento scale_cols = X[self.attributes] self.scaler.fit(scale_cols) return self def transform(self, X, y=None): X_copy = X.copy() scale_attrs = X_copy[self.attributes] # Usa el escalador ya ajustado para transformar los datos X_scaled = self.scaler.transform(scale_attrs) X_scaled = pd.DataFrame(X_scaled, columns=self.attributes, index=X_copy.index) for attr in self.attributes: X_copy[attr] = X_scaled[attr] return X_copy class CustomImputer(BaseEstimator, TransformerMixin): """ It implements SimpleImputer but, unlike SimpleImputer, it returns dataframes and not numpy arrays. 
""" def __init__(self, strategy : str) -\> None: self._imputer = SimpleImputer(strategy=strategy) def fit(self, X, y=None): self._imputer.fit(X, y) return self def transform(self, X, y=None): transformed_data = self._imputer.transform(X) return pd.DataFrame(transformed_data, columns=X.columns) class FrecuencyEncoding(BaseEstimator, TransformerMixin): """ Searches for categorical columns and replaces them using Frequency Encoding """ def __init__(self): self._frequencies = {} def fit(self, X, y=None): for col in X.select_dtypes(['object']).columns: self._frequencies[col] = X[col].value_counts().to_dict() return self def transform(self, X, y=None): df_encoded = X.copy() for col in df_encoded.select_dtypes(['object']).columns: df_encoded[col] = df_encoded[col].map(self._frequencies[col]) df_encoded[col] = df_encoded[col].fillna(0) return df_encoded class DateConverter(BaseEstimator, TransformerMixin): """ Transformer created to convert all date columns to float. It receives the list of date columns in the constructor. Consider that in order to convert all elements (including nan) to float, it was necessary to convert nans to 0 and then convert 0 to nans again. """ def __init__(self, attributes): self.attributes = attributes def fit(self, X, y=None): return self def transform(self, X, y=None): X = X.copy() def converter(col): return pd.to_datetime(col, errors='coerce').apply(lambda x: x.timestamp() if pd.notna(x) else 0).astype(float).replace(0, np.nan) if col.name in self.attributes else col return X.apply(converter) The problem is that, running basic_preprocess I get the following error message: /home/santiago/.local/lib/python3.10/site-packages/sklearn/pipeline.py:62: FutureWarning: This Pipeline instance is not fitted yet. Call 'fit' with appropriate arguments before using other methods such as transform, predict, etc. This will raise an error in 1.8 instead of the current warning. warnings.warn( /home/santiago/.local/lib/python3.10/site-packages/sklearn/pipeline.py:62: FutureWarning: This Pipeline instance is not fitted yet. Call 'fit' with appropriate arguments before using other methods such as transform, predict, etc. This will raise an error in 1.8 instead of the current warning. warnings.warn( This is no longer the case if I use PCA at the end of the pipeline: pipeline = Pipeline([ ("date_converter", DateConverter("trans_date_trans_time")), ("imputer", CustomImputer(strategy="most_frequent")), ("encoding", FrecuencyEncoding()), ("scaler", CustomScaler(df.drop(target, axis=1).columns.tolist())), ("PCA", PCA(n_components=0.999)) ]) How to resolve it? I ran the transformers separately expecting to get some unexpected behavior, but they all worked perfectly. These are the results generated by the transformers: ~~~~~ Train dataframe before pipeline amt category merchant trans_date_trans_time unix_time ... first cc_num long zip is_fraud 509059 51.71 grocery_net fraud_Stokes, Christiansen and Sipes 2019-08-09 03:13:27 1344482007 ... Destiny 639023984367 -74.9732 13647 0 395295 13.78 entertainment fraud_Effertz LLC 2019-06-29 19:56:48 1340999808 ... Sarah 373905417449658 -97.6443 76665 0 536531 961.26 shopping_pos fraud_Kris-Padberg 2019-08-18 14:42:20 1345300940 ... Sharon 3553629419254918 -122.3456 98238 0 271001 43.68 health_fitness fraud_Ratke and Sons 2019-05-13 21:27:46 1336944466 ... Jeremy 371034293500716 -120.7986 96135 0 532788 33.08 entertainment fraud_Upton PLC 2019-08-17 14:15:41 1345212941 ... 
Amy 4335531783520911 -91.4867 65066 0 1275175 199.99 kids_pets fraud_Schaefer Ltd 2020-06-13 22:05:09 1371161109 ... Maureen 4306630852918 -90.4504 63131 0 1117784 5.41 misc_pos fraud_Williamson LLC 2020-04-10 12:42:46 1365597766 ... Adam 6011366578560244 -77.7186 17051 0 429225 58.08 kids_pets fraud_Bogisich-Weimann 2019-07-11 18:28:54 1342031334 ... Greg 30428204673351 -76.2963 17088 0 739916 23.70 personal_care fraud_Dickinson Ltd 2019-11-11 23:26:50 1352676410 ... Monica 213161869125933 -70.6993 4226 0 93872 104.69 grocery_pos fraud_Strosin-Cruickshank 2019-02-25 05:16:10 1330146970 ... John 30026790933302 -91.0286 39113 0 [10 rows x 19 columns] ~~~~~ Test dataframe before pipeline amt category merchant trans_date_trans_time unix_time ... first cc_num long zip is_fraud 734803 6.38 shopping_pos fraud_Quitzon, Green and Bashirian 2019-11-10 09:03:06 1352538186 ... Linda 4433091568498503 -77.1458 20882 0 875327 98.80 grocery_pos fraud_Heidenreich PLC 2019-12-21 13:12:50 1356095570 ... Gina 6538441737335434 -80.1752 16114 0 549897 1.29 food_dining fraud_Lesch, D'Amore and Brown 2019-08-23 17:10:46 1345741846 ... Martin 4990494243023 -78.8031 21524 0 770188 37.27 kids_pets fraud_Ullrich Ltd 2019-11-25 15:17:31 1353856651 ... Stephanie 4502539526809429801 -91.6421 72513 0 698390 8.76 travel fraud_Lynch-Mohr 2019-10-25 15:18:53 1351178333 ... Margaret 2254917871818484 -76.3477 20687 0 557456 14.26 health_fitness fraud_Rippin-VonRueden 2019-08-25 20:47:10 1345927630 ... Jamie 4066595222529 -82.7251 41254 0 1112225 26.69 home fraud_Witting, Beer and Ernser 2020-04-07 13:37:06 1365341826 ... Erika 180046617132290 -88.9655 62939 0 907535 40.11 personal_care fraud_Becker, Harris and Harvey 2019-12-28 16:48:07 1356713287 ... Zachary 374821819075109 -77.2218 14522 0 169363 54.45 food_dining fraud_O'Keefe-Wisoky 2019-03-30 17:59:38 1333130378 ... Brooke 4425161475596168 -100.3900 76905 0 102279 5.98 kids_pets fraud_Waelchi Inc 2019-02-28 22:56:08 1330556168 ... Christopher 4822367783500458 -81.5929 33844 0 [10 rows x 19 columns] ~~~~~ Validation dataframe before pipeline amt category merchant trans_date_trans_time unix_time ... first cc_num long zip is_fraud 286442 59.54 personal_care fraud_Crooks and Sons 2019-05-20 19:57:21 1337543841 ... Megan 348789608637806 -98.6538 68950 0 259525 154.19 misc_pos fraud_Turcotte-Halvorson 2019-05-09 12:22:48 1336566168 ... Marissa 4400011257587661852 -98.7858 68859 0 706250 2.36 shopping_net fraud_Kozey-Boehm 2019-10-28 09:53:45 1351418025 ... Bradley 3542162746848552 -93.4824 56029 0 557846 65.13 entertainment fraud_Brown-Greenholt 2019-08-25 22:40:54 1345934454 ... Philip 6592243974328236 -86.2715 36111 0 984124 123.34 misc_pos fraud_Hermann-Gaylord 2020-02-04 08:28:45 1359966525 ... Theresa 30199621383748 -96.2238 75452 0 379560 55.73 kids_pets fraud_Larkin Ltd 2019-06-23 19:47:39 1340480859 ... Stacy 4961003488432306 -76.1950 17929 0 645012 66.42 gas_transport fraud_Raynor, Feest and Miller 2019-10-01 08:19:12 1349079552 ... Brandy 676195318214 -96.5249 77412 0 631986 1.30 misc_net fraud_McGlynn-Heathcote 2019-09-26 01:07:43 1348621663 ... Shannon 2269768987945882 -77.8664 14510 0 454841 1.27 personal_care fraud_Zulauf LLC 2019-07-20 23:09:40 1342825780 ... Joshua 4266200684857219 -98.0684 68961 0 599151 6.48 shopping_pos fraud_Kris-Padberg 2019-09-11 13:10:03 1347369003 ... Frank 3501509250702469 -81.7361 34112 0 ~~~~~ Train dataframe after pipeline amt category merchant trans_date_trans_time unix_time dob street ... 
job last first cc_num long zip is_fraud 509059 0.068966 0.788219 0.654717 0.0 0.0 -0.421986 -0.295909 ... 0.674448 0.512351 -0.526083 -0.295909 -0.418632 -0.415006 0 395295 -0.068966 1.057319 1.098113 0.0 0.0 -0.932624 -0.671889 ... -0.874788 0.923014 -0.001710 -0.671889 -0.928066 -0.921454 0 536531 -0.068966 -0.364541 -0.430189 0.0 0.0 0.888889 0.669278 ... -0.219015 0.644710 0.107754 0.669278 0.889151 0.885111 0 271001 -0.123153 0.788219 0.886792 0.0 0.0 0.033097 0.039164 ... -0.007216 -0.268423 0.879276 0.039164 0.035377 0.036342 0 532788 -0.118227 0.788219 0.650943 0.0 0.0 -0.938534 -0.676240 ... -0.355263 -0.110128 0.104761 -0.676240 -0.933962 -0.927315 0 1275175 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 1117784 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 429225 -0.266010 0.788219 0.656604 0.0 0.0 -0.921986 -0.664056 ... -0.873514 -0.434747 1.168044 -0.664056 -0.917453 -0.910903 0 739916 1.004926 0.788219 0.758491 0.0 0.0 -0.427896 -0.300261 ... -0.522920 -0.348703 0.412628 -0.300261 -0.424528 -0.420868 0 93872 -0.004926 -0.106084 -0.230189 0.0 0.0 -1.419622 -1.030461 ... -0.176146 -0.521408 -0.524515 -1.030461 -1.413915 -1.404455 0 [10 rows x 19 columns] ~~~~~ Test dataframe after pipeline amt category merchant trans_date_trans_time unix_time dob street ... job last first cc_num long zip is_fraud 734803 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 875327 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 549897 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 770188 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 698390 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 557456 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 1112225 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 907535 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 169363 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0 102279 -0.330049 0.801726 0.777358 -1.0 -1.0 0.06974 0.066144 ... 0.887097 0.41231 1.181157 0.066144 0.071934 0.072685 0 [10 rows x 19 columns] ~~~~~ Validation dataframe after pipeline amt category merchant trans_date_trans_time unix_time dob street merch_lat ... city_pop job last first cc_num long zip is_fraud 286442 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 259525 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 706250 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 557846 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 984124 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 379560 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 645012 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 631986 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 454841 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0 599151 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 0
The two issues are separate. The warning that the pipeline is not fitted is because of how pipelines report themselves as fitted: they just check whether their last (non-passthrough) step is fitted (source code). So your custom scaler isn't reported as having been fit. check_is_fitted looks for attributes with trailing underscores, or you implement your own __sklearn_is_fitted__ method returning a boolean, see the developer guide. (It's also better to keep the __init__ method to just setting method parameters as attributes; you could instantiate self.scaler_ at fit time instead and deal with both problems.) The second issue is actually a problem, the new NaNs. I think we need additional information, but two thoughts come to my mind: you don't take care about the dtypes of the pandas frames you create (so the encoder may try to encode everything; but then I'd expect issues per column not per row as your example shows). there's a common gotcha when joining frames after transforming some columns: the indices don't match, and so the join creates new rows, with NaNs filling the gaps. You take care to set the index in your scaler, but not your imputer. However, since that operates on the entire frame, I'm not sure that's the problem. (But the NaNs occurring exactly in the highest-index rows of each set seem to support this as the issue.)
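For the CustomScaler from the question, a minimal sketch of the fitted-state fix (estimator created in fit() under a trailing-underscore name, plus an optional explicit __sklearn_is_fitted__) could look like this:

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import RobustScaler

class CustomScaler(BaseEstimator, TransformerMixin):
    def __init__(self, attributes):
        self.attributes = attributes          # __init__ only stores parameters

    def fit(self, X, y=None):
        self.scaler_ = RobustScaler()         # trailing underscore marks fitted state
        self.scaler_.fit(X[self.attributes])
        return self

    def __sklearn_is_fitted__(self):          # optional, makes the check explicit
        return hasattr(self, "scaler_")

    def transform(self, X, y=None):
        X_copy = X.copy()
        X_copy[self.attributes] = self.scaler_.transform(X_copy[self.attributes])
        return X_copy

For the NaN issue, the usual fix for index misalignment is to preserve the original index wherever a new frame is built, e.g. pd.DataFrame(transformed_data, columns=X.columns, index=X.index) inside CustomImputer.transform.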
1
3
79,475,881
2025-2-28
https://stackoverflow.com/questions/79475881/how-to-correctly-pair-elements-from-two-xcom-lists-in-airflow-triggerdagrunopera
I am using Apache Airflow and trying to trigger multiple DAGs from within another DAG using TriggerDagRunOperator.expand(). I have two lists being returned from an upstream task via XCom: confs → A list of dictionaries containing conf parameters. dags_to_trigger → A list of DAG IDs to be triggered. Each list contains 5 elements, and I want to pair them one-to-one (i.e., the first element of confs should be used with the first element of dags_to_trigger, the second with the second, and so on). Problem: When I use expand() like this: trigger_my_dags = TriggerDagRunOperator.partial( task_id="trigger_my_dags", wait_for_completion=False, ).expand(conf=confs, trigger_dag_id=dags_to_trigger) Airflow cross-pairs the elements, triggering 25 DAG runs instead of 5 (it takes each conf from confs and pairs it with every DAG ID from dags_to_trigger, rather than pairing them by index). Question: How can I ensure that Airflow correctly pairs the elements from both lists so that each DAG run gets the corresponding conf from confs? Would restructuring the data into a single list of dictionaries like this help? [{"conf": { ... }, "trigger_dag_id": "dag_1"}, {"conf": { ... }, "trigger_dag_id": "dag_2"}, ...] If so, how do I correctly use expand() with this structure? Thanks in advance!
Don't know your Airflow version but hope this helps. You can use execute() method instead of partial() + expand(). Here is an example: from datetime import datetime from typing import Any from airflow import DAG from airflow.models import BaseOperator from airflow.operators.empty import EmptyOperator from airflow.operators.trigger_dagrun import TriggerDagRunOperator from airflow.utils.context import Context class RunnerOperator(BaseOperator): def execute(self, context: Context) -> Any: # params for DAG run. from XCOM / API / doesn't matter params = { 'configs': [ {'conf': {'param_a': 1}, 'trigger_dag_id': 'dag_1'}, {'conf': {'param_b': 2}, 'trigger_dag_id': 'dag_2'}, ] } for dag_params in params['configs']: trigger_dag_id = dag_params['trigger_dag_id'] TriggerDagRunOperator( task_id=f'{trigger_dag_id}_task', trigger_dag_id=trigger_dag_id, conf=dag_params['conf'], wait_for_completion=False, ).execute(context) dag = DAG(dag_id='igor_atsberger', start_date=datetime(2025, 1, 1), schedule=None) RunnerOperator(dag=dag, task_id='runner') dag_1 = DAG(dag_id='dag_1', start_date=datetime(2025, 1, 1), schedule=None) dag_2 = DAG(dag_id='dag_2', start_date=datetime(2025, 1, 1), schedule=None) for _dag in [dag_1, dag_2]: EmptyOperator(dag=_dag, task_id=f'{_dag.dag_id}_task') Let's check dag_1 config: Let's check dag_2 config: Another easy way is just to use Airflow REST API
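As an alternative that keeps dynamic task mapping (and speaks to the restructuring idea from the question): on Airflow 2.4+ mapped operators also support expand_kwargs(), which pairs per-run keyword arguments one-to-one instead of taking the cross product. A sketch, untested, where build_configs is a hypothetical upstream @task returning the list of dicts:

from airflow.decorators import task
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

@task
def build_configs():
    # hypothetical upstream task: one dict per DAG run to trigger
    return [
        {"conf": {"param_a": 1}, "trigger_dag_id": "dag_1"},
        {"conf": {"param_b": 2}, "trigger_dag_id": "dag_2"},
    ]

trigger_my_dags = TriggerDagRunOperator.partial(
    task_id="trigger_my_dags",
    wait_for_completion=False,
).expand_kwargs(build_configs())  # 2 dicts -> exactly 2 mapped runs, not 4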
1
2
79,478,331
2025-3-1
https://stackoverflow.com/questions/79478331/what-does-matrixtrue-false-do-in-numpy
For example, I have matrix=np.array([[1,2],[3,4]]). When I use boolean filtering in NumPy like matrix[[True,False]], I understand how it works - I'll get the first row. But when I use something like matrix[True, False], I get an empty result. I guess that in this case the boolean values mean which dimension I want to use, because if I write matrix[True, True], then I'll get the whole matrix, but if I use matrix[True, True, False], then again I get an empty result, even though I have a 2D array. How does this actually work?
This is a very weird case. It is in fact tested, but I don't think the behavior is documented anywhere. If you index an array with any number of True scalars, the result is equivalent to arr[np.newaxis]: it contains all the original array's data, but with an extra length-1 axis at the start of its shape. If you index an array with any number of boolean scalars, at least one of which is False, you get an empty result. The result has the shape of the original array, except with an extra length-0 axis at the start of the shape. The normal case for a boolean index is for the index array and the original array to have the same shape. The desired behavior in the normal case is to produce a 1D array containing all elements of the original array corresponding to True cells in the index array. For >=1D boolean index arrays, this is equivalent to substituting index.nonzero() in place of the index array. But nonzero() is a little weird for 0D arrays. Instead, 0D boolean arraylikes (including ordinary boolean scalars) have special handling. A comment in the code describes the special handling as follows: /* * This can actually be well defined. A new axis is added, * but at the same time no axis is "used". So if we have True, * we add a new axis (a bit like with np.newaxis). If it is * False, we add a new axis, but this axis has 0 entries. */ And when you index an array with a single boolean scalar index and nothing else, the comment accurately describes the behavior. In particular, it does the right thing in the case that motivates the special handling: array(5.0)[True] produce array([5.0]), and array(5.0)[False] produce array([], dtype=float64). But for two scalar boolean indices, the behavior doesn't quite line up with the comment. Reading the comment, you'd expect each one to add a new axis to the output, so if array arr had shape (x, y), you'd expect arr[True, False] to result in an (empty) output of shape (1, 0, x, y). But that's not what happens. What happens is what I described in the first section of the answer. This is because the implementation doesn't have handling to add more than one axis for 0D boolean indices. It can do that for np.newaxis, but the handling isn't there for 0D boolean indices. Instead, it sets a variable saying to add at least one "fancy indexing" dimension: if (fancy_ndim < 1) { fancy_ndim = 1; } and then later, for each boolean scalar index, it multiplies the length of the last "fancy indexing" dimension by 1 for True, or 0 for False: else if (indices[i].type == HAS_0D_BOOL) { mit->fancy_strides[j] = 0; mit->fancy_dims[j] = 1; /* Does not exist */ mit->iteraxes[j++] = -1; if ((indices[i].value == 0) && (mit->dimensions[mit->nd_fancy - 1]) > 1) { goto broadcast_error; } mit->dimensions[mit->nd_fancy-1] *= indices[i].value; } This handling is motivated by an analogy with broadcasting - it's as if we're broadcasting the fancy indexing dimensions against arrays of shape (1,) or (0,). And in fact, the code actually creates those shape-(1,) or shape-(0,) arrays, even though I don't think it does anything with their contents - the values aren't used for indexing. These arrays are used for error messages, if a broadcasting failure happens, and I think they're used as dummies when creating an internal iterator later. This handling also means that if you combine boolean scalars with other forms of "fancy indexing", such as integer arrays, you can see other odd behavior. 
For example, if we have the following arrays: arr = np.array([10, 11, 12, 13]) index = np.array([[1, 2], [0, 3]]) then arr[index] would produce a 2-by-2 array whose elements are arr[1], arr[2], arr[0], and arr[3]. But if we do arr[index, True], or arr[index, False]: In [31]: arr[index, True].shape Out[31]: (2, 2) In [32]: arr[index, False].shape --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Input In [32], in <cell line: 1>() ----> 1 arr[index, False].shape IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (2,2) (0,) arr[index, True] has the same shape as arr[index], and arr[index, False] produces a broadcasting failure.
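A short demo of the shapes described above, using the 2x2 matrix from the question:

import numpy as np

matrix = np.array([[1, 2], [3, 4]])

print(matrix[True].shape)               # (1, 2, 2) -- same data, like matrix[np.newaxis]
print(matrix[True, True].shape)         # (1, 2, 2) -- still only one added length-1 axis
print(matrix[True, False].shape)        # (0, 2, 2) -- any False scalar empties that axis
print(matrix[True, True, False].shape)  # (0, 2, 2) -- three scalars on a 2D array are fine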
4
5
79,477,947
2025-3-1
https://stackoverflow.com/questions/79477947/removing-list-duplicates-given-indices-symmetry-in-python
In python, given a list mylist of lists el of integers, I would like to remove duplicates that are equivalent under specific permutations of the indices. The question is more general but I have in mind "decorated" McKay graphs where each node is given an integer defining el whose sum is equal to a certain number. Generating mylist is easy enough, but there is a lot of redundancy depending on the symmetry of the graph. For instance, for the "D4" diagram, el represents the following decorated graph: a | el = [a, b, c, d, e] <--> b-c―d | e The two lists [1,0,4,0,0] and [0,0,4,0,1] are duplicates because there are the same up to a reflection of the graph around the horizontal axis. More generally, the lists el duplicates if there are equivalent under any permutation of a,b,d,e. Is there a simple way of removing duplicates given an arbitrary symmetry? The way I did it for this particular graph was to sort the indices 0,1,3,4 and compare: from itertools import product def check_duplicate(f, li): for l in li: if l[2] == f[2] and sorted(l[:2] + l[3:]) == sorted(f[:2] + f[3:]): return True return False mylist = [list(i) for i in product(range(5), repeat=5) if sum(i) == 5] newlist = [] for f in my_list: if check_duplicate(f, newlist) == False: newlist.append(f) Here tmp is generated brute force, but the condition is more involved in my real case. This works well enough for this particular example, but is a bit clunky, and the implementation is harder to generalize to more involved cases. Is there a way to remove the duplicates in a more optimized way, in particular one that can easily implement removing duplicates given a particular symmetry of the indices of el?
It will probably be better to add "seen" cases to a set, and then continue to check if new elements have already been "seen", rather than comparing every element to every other element in the list iteratively (an operation with O(n^2) time complexity). Hopefully I understand what you're trying to accomplish, but here's how I'd do it: from itertools import product def to_key(f): # Creates a hashable key, keeping order-insensitive values sorted, and noting the middle value""" return (f[2], tuple(sorted(f[:2] + f[3:]))) mylist = [list(i) for i in product(range(5), repeat=5) if sum(i) == 5] # Init an empty set. We'll add to it if a key isn't there seen = set() # If we add to the seen set, we'll add the element to newList newlist = [] for f in mylist: key = to_key(f) # Create the hashable key if key not in seen: # Check if key in seen seen.add(key) newlist.append(f)
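For the broader question (an arbitrary symmetry rather than "sort these particular indices"), one sketch is to describe the symmetry as an explicit list of index permutations (the group elements) and canonicalize each element by taking its lexicographically smallest image under the group; the seen-set idea above stays exactly the same:

from itertools import permutations, product

# Symmetry of the decorated D4 graph: node 2 is fixed, nodes 0, 1, 3, 4 may be
# permuted arbitrarily.  Any other symmetry can be encoded the same way by
# listing its allowed index permutations.
outer = [0, 1, 3, 4]
group = []
for perm in permutations(outer):
    mapping = list(range(5))
    for src, dst in zip(outer, perm):
        mapping[src] = dst
    group.append(mapping)

def canonical(el, group):
    # smallest image of el under all permutations in the group
    return min(tuple(el[i] for i in mapping) for mapping in group)

mylist = [list(i) for i in product(range(5), repeat=5) if sum(i) == 5]
seen = set()
newlist = []
for el in mylist:
    key = canonical(el, group)
    if key not in seen:
        seen.add(key)
        newlist.append(el)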
1
1
79,477,752
2025-3-1
https://stackoverflow.com/questions/79477752/subclass-that-throws-custom-error-if-modified
What's the best way in python to create a subclass of an existing class, in such a way that a custom error is raised whenever you attempt to modify the object? The code below shows what I want. class ImmutableModifyError(Exception): pass class ImmutableList(list): def __init__(self, err = "", *argv): self.err = err super().__init__(*argv) def append(self, *argv): raise ImmutableModifyError(self.err) def extend(self, *argv): raise ImmutableModifyError(self.err) def clear(self, *argv): raise ImmutableModifyError(self.err) def insert(self, *argv): raise ImmutableModifyError(self.err) def pop(self, *argv): raise ImmutableModifyError(self.err) def remove(self, *argv): raise ImmutableModifyError(self.err) def sort(self, *argv): raise ImmutableModifyError(self.err) def reverse(self, *argv): raise ImmutableModifyError(self.err) If I use other immutable types, an AttributeError is thrown instead of the custom error whose message is created along the object. As you can see, this code is too repetitive and would be error-prone if the class changed. I have not taken into account hidden methods and operators. Is there a better way to achieve this?
A more scalable and less error-prone way to achieve this is to block all mutating methods dynamically. Use setattr to identify all mutating methods and override them [best method to implement - PyCon 2017]: class ImmutableModifyError(Exception): pass class ImmutableList(list): _mutating_methods = { "append", "extend", "clear", "insert", "pop", "remove", "sort", "reverse", "__setitem__", "__delitem__", "__iadd__", "__imul__" } def __init__(self, *args, err="Immutable list cannot be modified"): self.err = err super().__init__(*args) def _raise_error(self, *args, **kwargs): raise ImmutableModifyError(self.err) for method in ImmutableList._mutating_methods: setattr(ImmutableList, method, ImmutableList._raise_error) # run this loop after the class body; the single leading underscore on _raise_error avoids name mangling, and don't use map here. Use metaclasses: class ImmutableModifyError(Exception): pass class ImmutableMeta(type): def __new__(cls, name, bases, namespace): def raise_error(self, *args, **kwargs): raise ImmutableModifyError(f"Cannot modify {self.__class__.__name__} object") mutating_methods = { attr for base in bases for attr in dir(base) if callable(getattr(base, attr, None)) and attr in {"__setitem__", "__delitem__", "__iadd__", "__imul__", "append", "extend", "clear", "insert", "pop", "remove", "sort", "reverse"} } for method in mutating_methods: namespace[method] = raise_error return super().__new__(cls, name, bases, namespace) class ImmutableList(list, metaclass=ImmutableMeta): pass One benefit: if you want an immutable dict instead of a list, only one line of code is needed: class ImmutableDict(dict, metaclass=ImmutableMeta): pass Override __getitem__ [least popular, not safe]: This makes the list completely unusable, since even reading values raises an error. Any iteration (for x in lst) will also fail. This is extreme immutability, where not only modification but even access is prevented. class ImmutableModifyError(Exception): pass class ImmutableList(list): def __init__(self, *args, err="Immutable list cannot be modified"): self.err = err super().__init__(*args) def __getitem__(self, index): raise ImmutableModifyError(self.err) One example of why this is not a good approach: it breaks read-only operations. If you override __getitem__, you won't be able to retrieve values anymore: lst = ImmutableList([1, 2, 3]) print(lst[0]) # Should return 1, but would raise an error instead.
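A small usage check for the setattr variant above (assuming the class is laid out as sketched, with the loop run after the class definition):

lst = ImmutableList([1, 2, 3], err="this list is read-only")
print(lst[0])    # 1 -- reads and iteration still work
lst.append(4)    # raises ImmutableModifyError: this list is read-only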
1
2
79,475,564
2025-2-28
https://stackoverflow.com/questions/79475564/powershell-and-cmd-combining-command-line-filepath-arguments-to-python
I was making user-entered variable configurable via command line parameters & ran into this weird behaviour: PS D:> python -c "import sys; print(sys.argv)" -imgs ".\Test V4\Rilsa\" -nl 34 ['-c', '-imgs', '.\\Test V4\\Rilsa" -nl 34'] PS D:> python -c "import sys; print(sys.argv)" -imgs ".\TestV4\Rilsa\" -nl 34 ['-c', '-imgs', '.\\TestV4\\Rilsa\\', '-nl', '34'] If the name of my folder is Test V4 with a space character, then all following parameters end up in the same argument element '.\\Test V4\\Rilsa" -nl 34'. There is also a trailing " quote after the directory name. I tried this again in CMD, thinking it was a Powershell quirk & experienced the same behaviour. What's going on here? I'm assuming it has something to do with backslashes in Powershell -- though it's the default in Windows for directory paths -- but why do I get diverging behaviour depending on space characters & what's a good way to handle this assuming Windows paths are auto-completed into this form by the shell (i.e. trailing \)?
You're seeing a bug in Windows PowerShell (the legacy, ships-with-Windows, Windows-only edition of PowerShell whose latest and last version is 5.1), which has since been fixed in PowerShell (Core) 7, as detailed in this answer. In short, as you've since discovered yourself, the problem occurs when you pass arguments that contain space(s) and end in \ to external programs, because Windows PowerShell - when it constructs the true process command line behind the scenes - blindly encloses such arguments in "...", causing most target programs to interpret the closing \" sequence as an escaped " char. (Since arguments without spaces are not subject to this "..." enclosure, they are not affected.) Workarounds (required in Windows PowerShell only, but should also work in PowerShell 7): Manually add a trailing \ to your argument: # Note the '\\' python -c "import sys; print(sys.argv)" -imgs ".\Test V4\Rilsa\\" -nl 34 Alternatively, add a trailing space, relying on the fact that on Windows trailing spaces in paths are usually ignored: # Note the trailing space before the closing " python -c "import sys; print(sys.argv)" -imgs ".\Test V4\Rilsa " -nl 34 In either case, if the path must be passed via a variable rather than a literal and the variable value may or may not end in \, use "...", i.e. an expandable (interpolating) string, such as "$dirPath\" or "$dirPath "
1
2
79,474,319
2025-2-28
https://stackoverflow.com/questions/79474319/how-to-conditinonally-choose-which-column-to-backfill-over-in-polars
I need to backfill a column in a python polars dataframe over one of three possible columns, based on which one matches the non-null cell in the column to be backfilled. My dataframe looks something like this: ┌─────┬─────┬─────┬─────────┐ │ id1 ┆ id2 ┆ id3 ┆ call_id │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪═════════╡ │ 1 ┆ 4 ┆ 9 ┆ null │ │ 1 ┆ 5 ┆ 9 ┆ null │ │ 1 ┆ 5 ┆ 9 ┆ null │ │ 2 ┆ 5 ┆ 9 ┆ null │ │ 2 ┆ 6 ┆ 9 ┆ 2 │ │ 2 ┆ 7 ┆ 10 ┆ null │ │ 3 ┆ 7 ┆ 11 ┆ null │ │ 3 ┆ 7 ┆ 12 ┆ null │ │ 3 ┆ 7 ┆ 13 ┆ 7 │ │ 3 ┆ 8 ┆ 13 ┆ null │ └─────┴─────┴─────┴─────────┘ And I want it to look like this: ┌─────┬─────┬─────┬─────────┐ │ id1 ┆ id2 ┆ id3 ┆ call_id │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪═════════╡ │ 1 ┆ 4 ┆ 9 ┆ null │ │ 1 ┆ 5 ┆ 9 ┆ null │ │ 1 ┆ 5 ┆ 9 ┆ null │ │ 2 ┆ 5 ┆ 9 ┆ 2 │ │ 2 ┆ 6 ┆ 9 ┆ 2 │ │ 2 ┆ 7 ┆ 10 ┆ 7 │ │ 3 ┆ 7 ┆ 11 ┆ 7 │ │ 3 ┆ 7 ┆ 12 ┆ 7 │ │ 3 ┆ 7 ┆ 13 ┆ 7 │ │ 3 ┆ 8 ┆ 13 ┆ null │ └─────┴─────┴─────┴─────────┘ If I knew which column matched, I would have used something to the effect of .with_columns(pl.col('call_id').backfill().over('id1'), but for the life of me I can't figure out how to systematically choose which column to backfill over.
Assuming that your dataframe is in a variable called df and your backfill value is always the same (which is the case in your example) df.with_columns( call_id=pl.coalesce( pl.when( (pl.col("^id.*$") == pl.col("call_id")).backward_fill().over("^id.*$") ) .then("^id.*$") ) ) # shape: (10, 4) # ┌─────┬─────┬─────┬─────────┐ # │ id1 ┆ id2 ┆ id3 ┆ call_id │ # │ --- ┆ --- ┆ --- ┆ --- │ # │ i64 ┆ i64 ┆ i64 ┆ i64 │ # ╞═════╪═════╪═════╪═════════╡ # │ 1 ┆ 4 ┆ 9 ┆ null │ # │ 1 ┆ 5 ┆ 9 ┆ null │ # │ 1 ┆ 5 ┆ 9 ┆ null │ # │ 2 ┆ 5 ┆ 9 ┆ 2 │ # │ 2 ┆ 6 ┆ 9 ┆ 2 │ # │ 2 ┆ 7 ┆ 10 ┆ 7 │ # │ 3 ┆ 7 ┆ 11 ┆ 7 │ # │ 3 ┆ 7 ┆ 12 ┆ 7 │ # │ 3 ┆ 7 ┆ 13 ┆ 7 │ # │ 3 ┆ 8 ┆ 13 ┆ null │ # └─────┴─────┴─────┴─────────┘
1
2
79,477,010
2025-3-1
https://stackoverflow.com/questions/79477010/find-the-equation-of-a-line-with-sympy-y-mxb
So I'm brushing up on my algebra and trying to learn sympy at the same time. Following an algebra tutorial on youtube, I'm at this point: I can't quite figure out how to isolate y What I've tried: p1 = sym.Point(-1, 1) m = -2 eq = sym.Line(p1, slope=m).equation() eq returns: 2x + y + 1 eq1 = sym.Eq(eq, 0) eq1 Gives me: 2x + y + 1 = 0 and.. eq2 = sym.simplify(eq1) eq2 Returns: 2x + y = -1 ..but that's as close as I can get to isolating y (y = -2x - 1). I'm sure it's a simple answer but I've been searching on the intertubes for 2 days now without success. I've tried solve() and subs(), solve just returns an empty set and subs gives the original equation back. What am I doing wrong?
I believe that the code below gives what you want. I believe solve wasn't working for you because you needed to define x, y as symbols. Then you can use the solution to create your equation using Eq. Note that the solution is a list. Hope this helps. import sympy as sym from sympy.solvers import solve p1 = sym.Point(-1, 1) m = -2 eq = sym.Line(p1, slope=m).equation() print(eq) eq1 = sym.Eq(eq, 0) print(eq1) x, y = sym.symbols('x y', real = True) sol = solve(eq, y) print(sol) eq1 = sym.Eq(y, sol[0]) print(eq1) Output: 2*x + y + 1 Eq(2*x + y + 1, 0) [-2*x - 1] Eq(y, -2*x - 1)
1
3
79,476,892
2025-2-28
https://stackoverflow.com/questions/79476892/pandas-groupby-make-all-elements-0-if-first-element-is-1
I have the following df: | day | first mover | | -------- | -------------- | | 1 | 1 | | 2 | 1 | | 3 | 0 | | 4 | 0 | | 5 | 0 | | 6 | 1 | | 7 | 0 | | 8 | 1 | i want to group this Data frame in the order bottom to top with a frequency of 4 rows. Furthermore if first row of group is 1 make all other entries 0. Desired output: | day | first mover | | -------- | -------------- | | 1 | 1 | | 2 | 0 | | 3 | 0 | | 4 | 0 | | 5 | 0 | | 6 | 0 | | 7 | 0 | | 8 | 0 | The first half i have accomplished. I am confuse about how to make other entries 0 if first entry in each group is 1. N=4 (df.iloc[::-1].groupby(np.arange(len(df))//N
I would use a for-loop for this (for name, group in df.groupby(...):) because it lets you use if/else to run or skip some code per group. To get the first element in a group: first_value = group.iloc[0]['first mover'] (I don't know why, but .first() doesn't work as I expected - it asks for some offset). To get the indexes of the other rows (all except the first): group.index[1:], and use them to set 0 in the original df: df.loc[group.index[1:], 'first mover'] = 0 Minimal working code which I used for tests: import pandas as pd df = pd.DataFrame({ 'day': [1,2,3,4,5,6,7,8], 'first mover': [1,1,0,0,0,1,0,1] }) N = 4 for name, group in df.groupby(by=lambda index: index//N): #print(f'\n---- group {name} ---\n') #print(group) first_value = group.iloc[0]['first mover'] #print('first value:', first_value) if first_value == 1: #print('>>> change:', group.index[1:]) df.loc[group.index[1:], 'first mover'] = 0 print('\n--- df ---\n') print(df)
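If you prefer to avoid the Python-level loop, a vectorized sketch with the same top-to-bottom grouping (index // N) would be:

import numpy as np
import pandas as pd

df = pd.DataFrame({'day': [1, 2, 3, 4, 5, 6, 7, 8],
                   'first mover': [1, 1, 0, 0, 0, 1, 0, 1]})
N = 4

pos = np.arange(len(df))
first_vals = df.groupby(pos // N)['first mover'].transform('first')
# zero every non-first row of the groups whose first row is 1
df.loc[(first_vals == 1).to_numpy() & (pos % N != 0), 'first mover'] = 0
print(df)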
1
0
79,476,789
2025-2-28
https://stackoverflow.com/questions/79476789/how-to-get-the-dot-product-of-inner-dims-in-numpy-array
I would like to compute the dot product (matmul) of the inner dimension of two 3D arrays. In the following example, I have an array of 10 2x3 matrixes (X) and an array of 8 1x3 matrixes. The result Z should be a 10 element array of an 8 x 2 matrix (you might also think of this as an 10 x 8 array of 2-d vectors.) X = np.arange(0, 10 * 2 * 3).reshape(10, 2, 3) Y = np.arange(0, 8 * 1 * 3).reshape(8, 1, 3) # write to Z, which has the dot-product of internal dims of X and Y Z = np.empty((10, 8, 2)) for i, x_i in enumerate(X): for j, y_j in enumerate(Y): z = x_i @ y_j.T # subscriptz to flatten it. Z[i, j, :] = z[:, 0] is correct, but I would like a vectorized solution.
Einsum You could use einsum np.einsum('ijk,lmk->ilj', X, Y) It produces an array whose shape is 3 axis axis 0 (i) the size of first axis of X (here 10) axis 1 (l) size of first axis of Y (here 8) axis 2 (j) second axis of X (2) m is just ignored (size 1) and k (3rd axis of both), since it is repeated is used to sum product So, that is, for each i, l and j, result[i,l,j] is Σₖ X[i,j,k]*Y[l,m,k] (m being 0 anyway) Dot product You could also just use matmul as you intended. It can work with bigger than 2D arrays (and then operation are element-wise, like + or * for all axis, but the 2 last, which behave like the 2D dot product). But for that you need some adjustment to your shapes before. Since you want 10x8 array of result, those 10 and 8 must be on their own axis. So axis 0 for size 10, axis 1 for size 8. So we need to reshape X as a (10, 1, 2, 3) array. And Y as a (1, 8, 1, 3) array. So that the "element-wise" behavior of the 2 first axis result in broadcast (you can do element-wise operations as long as you have either matching size of each axis, or one of the axis is size 1, and is then broadcasted. So, here, the (10,1) & (1,8) sizes of the 2 first axis of X and Y would result to a (10,8) result.) And furthermore, to be able to have some dot product on the 2 last axis, they must be in the traditional (n,m) (m,k) pattern. Here (2,3) (3,1). So we need also to swap the 2 last axis of Y. All together (X[:,None,...] @ (Y[None,...].transpose(0,1,3,2)))[...,0] The [...,0] being the same as yours, to ignore the size 1 axis. Note that all those operations (but the matmul itself, of course) are almost for free. No data is moved, copied or modified by X[:,None,...] or by .transpose(0,1,3,2). Only meta data (strides and shape) are changed. That is O(1) cost. Only the 10x832 multiplications are done on the data. Which of course is needed. So performance-wise that is as vectorized as it can be. Performance-wise, on your example (and my computer), it is 21 μs vs 7 μs (so in favor of second solution). But that could be different for other shapes. For example, with sizes (30,50,60) and (20,1,60), it is even 3ms vs 3ms. So einsum could be faster in the long run. But well, both are completely "vectorized" (all iterations are done in internal C code)
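For completeness, a quick check with the arrays from the question that both vectorized forms agree with the original double loop:

import numpy as np

X = np.arange(0, 10 * 2 * 3).reshape(10, 2, 3)
Y = np.arange(0, 8 * 1 * 3).reshape(8, 1, 3)

Z_ein = np.einsum('ijk,lmk->ilj', X, Y)
Z_mat = (X[:, None, ...] @ Y[None, ...].transpose(0, 1, 3, 2))[..., 0]

Z_loop = np.empty((10, 8, 2))
for i, x_i in enumerate(X):
    for j, y_j in enumerate(Y):
        Z_loop[i, j, :] = (x_i @ y_j.T)[:, 0]

print(np.allclose(Z_ein, Z_loop), np.allclose(Z_mat, Z_loop))  # True True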
3
2
79,475,699
2025-2-28
https://stackoverflow.com/questions/79475699/django-pagination-for-inline-models
I realize this is probably a beginner level error, but I'm out of ideas. I need to add pagination to Inline model for admin page. I'm using Django 1.8.4 ( yup, I know it's really old ) and python 3.6.15. Inside admin.py: class ArticleInline(GrappelliSortableHiddenMixin, admin.TabularInline): model = ArticleSection.articles.through raw_id_fields = ("article",) related_lookup_fields = { 'fk':['article'], } extra = 1 class ArticleSectionsAdmin(reversion.VersionAdmin): list_display = ('name','slug','template_file_name') inlines = [ArticleInline] admin.site.register(ArticleSection,ArticleSectionsAdmin) Inside models.py: class ArticleSection(Section): articles = models.ManyToManyField(Article, verbose_name=_("article"), through="M2MSectionArticle", related_name="sections") limit = models.PositiveSmallIntegerField(default=10) class Meta: db_table = 'sections_article_section' verbose_name = _("article section") verbose_name_plural = _("articles sections") def content(self, request): query = Q(m2msectionarticle__visible_to__isnull=True) & Q(m2msectionarticle__visible_from__isnull=True) query.add(Q(m2msectionarticle__visible_to__gte=timezone.now(), m2msectionarticle__visible_from__lte=timezone.now()), Q.OR) limit = self.limit_override if hasattr(self, 'limit_override') and self.limit_override is not None else self.limit return self.articles.filter(query).published().prefetch_related('images').order_by('m2msectionarticle__position')[:limit] class M2MSectionArticle(models.Model): section = models.ForeignKey(ArticleSection, related_name='section_articles') article = models.ForeignKey(Article, verbose_name=_('article')) position = models.PositiveSmallIntegerField(_("position"), default=0) visible_from = models.DateTimeField("Widoczne od", null=True, blank=True) visible_to = models.DateTimeField("Widoczne do", null=True, blank=True) class Meta: db_table = 'sections_section_articles' ordering = ["position"] I found django-admin-inline-paginator and it seems to work for everyone else, but I get "Function has keyword-only parameters or annotations, use getfullargspec() API which can support them" when I use TabularInlinePaginated instead of admin.TabularInline. from django_admin_inline_paginator.admin import TabularInlinePaginated class ArticleInline(GrappelliSortableHiddenMixin, TabularInlinePaginated): model = ArticleSection.articles.through raw_id_fields = ("article",) related_lookup_fields = { 'fk':['article'], } extra = 1 per_page = 10 print(inspect.getfullargspec(TabularInlinePaginated)) returns : FullArgSpec(args=['self', 'parent_model', 'admin_site'], varargs=None, varkw=None, defaults=None, kwonlyargs=[], kwonlydefaults=None, annotations={}) but I still dont know what to do.
This is due to compatibility issues between older versions of Python and Django when handling function signatures. For example, related_lookup_fields is deprecated in the Django admin, so the part related_lookup_fields = { 'fk': ['article'], } should be replaced by autocomplete_fields = ["article"] (available in newer Django versions) or just removed entirely. In a nutshell, this fixes only THAT part; you'd have to go over each section of the code, i.e. just move over to a newer version.
1
1
79,474,870
2025-2-28
https://stackoverflow.com/questions/79474870/is-there-any-benefit-of-choosing-to-formulate-constraints-in-a-way-or-another-in
I have an MINLP problem and let's say the continuous variable Q can only be 0 when the binary variable z is 0. Two ways to formulate this would be: m.Equation(Q*(1-z) == 0) (1) or m.Equation(Q < z*10000) (2) whereby 10000 would be the upper bound to the continuous variable Q. Does (1) or (2) have any benefits over the other? I've used (1) in my model for heat exchanger network synthesis and I got a good solution pretty quickly. Using (2) takes around 100x longer and it gives worse solution than the one given using (1). From what I can see, the MINLP relaxation to (1) would still require z to be exactly 1 for Q to take on non-zero values. Does this have any effect on how APOPT solve the problem?
Q(1-z) = 0 is non-linear and non-convex while Q <= 10000z is linear (and convex). The last one is much better as long as the big-M constant is small. If you can't reduce the size of the big-M constant, consider using indicator constraints (using suitable modeling tools and solvers).
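For illustration only, a minimal GEKKO sketch of the linear big-M form (the bound of 100 and the toy objective are assumptions, not the questioner's heat-exchanger model); the tighter the bound you can justify for Q, the better the relaxation the solver works with:
from gekko import GEKKO
m = GEKKO(remote=False)
Q = m.Var(lb=0)                      # continuous variable
z = m.Var(lb=0, ub=1, integer=True)  # binary variable
M = 100                              # tightest valid upper bound on Q you can justify
m.Equation(Q <= M * z)               # linear, convex link: z = 0 forces Q = 0
m.Minimize(10 * z - Q)               # toy objective just to make the sketch solvable
m.options.SOLVER = 1                 # APOPT
m.solve(disp=False)
print(Q.value[0], z.value[0])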
1
2
79,475,812
2025-2-28
https://stackoverflow.com/questions/79475812/python-cyrillic-string-encoding
I try to convert cyrillic string to readable format. I have the similar code for php and it work fine. But python was harder. import re from sys import getdefaultencoding def decodeString(matches): return chr(int(matches.group(0).lstrip('\\'), 8)) my_string = r'\320\222\321\213\320\263\321\200\321\203\320\267\320\272\320\260 \320\267\320\260\321\217\320\262\320\276\320\272' decoded_string = re.sub(r'\\[0-7]{3}', decodeString, my_string) print(decoded_string) my_string2 = "\320\222\321\213\320\263\321\200\321\203\320\267\320\272\320\260 \320\267\320\260\321\217\320\262\320\276\320\272" print(my_string2) print(sys.stdin.encoding, sys.stdout.encoding) Both these way return the same result: ÐÑгÑÑзка заÑвок в 1C ÐÑгÑÑзка заÑвок в 1C utf-8 utf-8 Expected result - "Выгрузка заявок".
If you are able to get around using r'' then using b'' should work: def decode_utf8(byte_string: bytes) -> str: return byte_string.decode('utf-8') # Example usage byte_string = b'\320\222\321\203\320\263\321\200\321\203\320\267\320\272\320\260 \320\267\320\260\321\217\320\262\320\276\320\272' print(decode_utf8(byte_string)) # Output: "Выгрузка заявок" If r'' is necessary somehow then you can do this: Those \320\222 sequences are octal-escaped bytes of a UTF-8 string. To do this in Python, try: Convert each \NNN (octal) to its single byte (0–255). Combine those bytes into a bytes object. You should use latin-1 Decode that bytes object as UTF-8. Example: import re def decode_octal_utf8(octal_string): # 1) Replace each \NNN with its corresponding byte (0–255). def replace_octal(m): return chr(int(m.group(1), 8)) # e.g. "320" -> 208 # 2) Convert to a "byte-like" string (via latin-1 to keep bytes 0-255 unchanged). tmp = re.sub(r'\\([0-7]{3})', replace_octal, octal_string) raw_bytes = tmp.encode('latin-1') return raw_bytes.decode('utf-8') Output using that string: Выгрузка заявок
1
2
79,474,514
2025-2-28
https://stackoverflow.com/questions/79474514/how-to-remove-xarray-plot-bad-value-edge-colour
I know set_bad can colour the pixel into a specific colour but in my example I only want to have edge colour for blue and grey pixels with values and not the bad pixels (red) import matplotlib.pyplot as plt import xarray as xr import numpy as np from matplotlib import colors fig, ax = plt.subplots(1, 1, figsize=(12, 8)) # provide example data array with a mixture of floats and nans in them data = xr.DataArray([np.random.rand(10, 10)]) # Example data # set a few nans data[0, 1, 1] = np.nan data[0, 1, 2] = np.nan data[0, 1, 3] = np.nan data[0, 2, 1] = np.nan data[0, 2, 2] = np.nan data[0, 2, 3] = np.nan data[0, 3, 1] = np.nan data[0, 3, 2] = np.nan data[0, 3, 3] = np.nan cmap = colors.ListedColormap(['#2c7bb6', "#999999"]) # make nan trends invalid and set edgecolour to white cmap.set_bad(color = 'red') data.plot(edgecolor = "grey", cmap = cmap)
Update: I was premature in my original answer (see below), and you can actually pass a list of edge colours (see pcolormesh) that can be used for each "box" within the plot. So, you could use: import xarray as xr import numpy as np from matplotlib import pyplot as plt from matplotlib import colors fig, ax = plt.subplots(1, 1, figsize=(12, 8)) # provide example data array with a mixture of floats and nans in them data = xr.DataArray([np.random.rand(10, 10)]) # Example data # set a few nans data[0, 1, 1] = np.nan data[0, 1, 2] = np.nan data[0, 1, 3] = np.nan data[0, 2, 1] = np.nan data[0, 2, 2] = np.nan data[0, 2, 3] = np.nan data[0, 3, 1] = np.nan data[0, 3, 2] = np.nan data[0, 3, 3] = np.nan # get edgecolors - "grey" for good values red for "bad" edgecolors = [ "grey" if np.isfinite(c) else "red" # could also just have "none" for c in data.values.flatten() ] cmap = colors.ListedColormap(['#2c7bb6', "#999999"]) cmap.set_bad("red") data.plot(edgecolors=edgecolors, cmap = cmap) plt.show() Original answer I'm afraid you can't remove the edge color from only around the "bad" values, but what you can do is overplot the bad values without any edge color set. E.g., import xarray as xr import numpy as np from matplotlib import pyplot as plt from matplotlib import colors fig, ax = plt.subplots(1, 1, figsize=(12, 8)) # provide example data array with a mixture of floats and nans in them data = xr.DataArray([np.random.rand(10, 10)]) # Example data # set a few nans data[0, 1, 1] = np.nan data[0, 1, 2] = np.nan data[0, 1, 3] = np.nan data[0, 2, 1] = np.nan data[0, 2, 2] = np.nan data[0, 2, 3] = np.nan data[0, 3, 1] = np.nan data[0, 3, 2] = np.nan data[0, 3, 3] = np.nan cmap = colors.ListedColormap(['#2c7bb6', "#999999"]) data.plot(edgecolors="grey", cmap = cmap) # get new array of "bad" values with values of 1 and non-bad values with values of NaN data_bad = xr.where(data.isnull(), 1.0, np.nan) ax = plt.gca() # plot "bad" values as red without any edge color cmap_bad = colors.ListedColormap(["red"]) data_bad.plot(ax=ax, edgecolors="none", cmap=cmap_bad, add_colorbar=False) plt.show()
1
3
79,475,051
2025-2-28
https://stackoverflow.com/questions/79475051/whats-the-difference-between-uv-lock-upgrade-and-uv-sync
Being a total newbie in the python ecosystem, I'm discovering uv and was wondering if there was a difference between the following commands : uv lock --upgrade and uv sync If there's any, what are the exact usage for each of them ?
uv lock commands are all about managing (or creating) the uv.lock file. What they do NOT do is upgrade the actual package versions in your environment. The uv lock --upgrade command updates the lock file (uv.lock) by allowing package upgrades, even if they were previously pinned, but it is still only a lock-file management command and does NOT upgrade anything in the environment (package versions); it upgrades only the lock file. uv sync commands update the actual environment and the packages within it, ensuring the environment's package versions align with what is recorded in the uv.lock file. You can get detailed info about both commands by typing: uv help lock or: uv help sync
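For example, a typical upgrade workflow chains the two (both the flag and the commands are part of uv's documented CLI):
uv lock --upgrade   # refresh uv.lock, allowing packages to move to newer allowed versions
uv sync             # install/upgrade the environment so it matches the refreshed uv.lock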
1
2
79,474,435
2025-2-28
https://stackoverflow.com/questions/79474435/assertequal-tests-ok-when-numpy-ndarray-vs-str-is-that-expected-or-what-have-i
My unittest returns ok, but when running my code in production, I found that my value is 'wrapped' with square brackets. Further investigation shows that, it lies under the df.loc[].values . I am expecting a single str value. Using the sample by cs95 and doing some slight modification, I am able to reproduce it to illustrate the idea. This is my first time deploying python unittest and sorry for the lengthy post/info. # test class and test code import pandas as pd import numpy as np import unittest class myclass: def __init__(self): mux = pd.MultiIndex.from_arrays([ list('aaaabbbbbccddddd'), list('tuvwtuvwtuvwtuvw') ], names=['one','two']) temp_a = np.arange(len(mux)) str_a = [ str(a) for a in temp_a ] self.df = pd.DataFrame({'col': str_a}, mux) def get_aw(self): a = self.df.loc[('a','w')].values return a class TestAssert(unittest.TestCase): def test_assert(self): myclass_obj = myclass() result = myclass_obj.get_aw() expect = '3' print(type(result)) print(type(expect)) print(f'result:{result}') print(f'expect:{expect}') self.assertEqual(result,expect) if __name__ == '__main__': unittest.main(verbosity=2) Results: (I expected it to FAIL) test_assert (__main__.TestAssert) ... test_code.py:18: PerformanceWarning: indexing past lexsort depth may impact performance. a = self.df.loc[('a','w')].values <class 'numpy.ndarray'> <class 'str'> result:[['3']] <-- square brackets expect:3 ok ---------------------------------------------------------------------- Ran 1 test in 0.002s OK I changed the code with .values[0][0] so I get the str and I added assertTrue(isinstance(...) Changed code: import pandas as pd import numpy as np import unittest class myclass: def __init__(self): mux = pd.MultiIndex.from_arrays([ list('aaaabbbbbccddddd'), list('tuvwtuvwtuvwtuvw') ], names=['one','two']) #self.df = pd.DataFrame({'col': np.arange(len(mux))}, mux) temp_a = np.arange(len(mux)) str_a = [ str(a) for a in temp_a ] self.df = pd.DataFrame({'col': str_a}, mux) def get_aw(self): a = self.df.loc[('a','w')].values[0][0] # updated return a class TestAssert(unittest.TestCase): def test_assert(self): myclass_obj = myclass() result = myclass_obj.get_aw() expect = '3' print(type(result)) print(type(expect)) print(f'result:{result}') print(f'expect:{expect}') self.assertTrue(isinstance(result, str)) # added self.assertEqual(result,expect) if __name__ == '__main__': unittest.main(verbosity=2) Results: test_assert (__main__.TestAssert) ... test_code.py:18: PerformanceWarning: indexing past lexsort depth may impact performance. a = self.df.loc[('a','w')].values[0][0] <class 'str'> <class 'str'> result:3 expect:3 ok ---------------------------------------------------------------------- Ran 1 test in 0.002s OK
Yes, I believe that is expected behavior. From the documentation, unittest.TestCase.assertEqual(a, b) checks that a == b. If you run >>> if np.array([['3']]) == '3': ... print("Here") Here It indeed prints "Here". This is because of the way broadcasting works in numpy. When comparing an array against a string, like in your example, numpy will infer it as an element-by-element comparison. In particular, >>> np.array([['3']]) == '3' array([[ True]]) >>> np.array([['3', '2', '3']]) == '3' array([[ True, False, True]]) Notice in the second case however, >>> if np.array([['3', '2', '3']]) == '3': ... print("Here") Will raise an exception ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Because Python cannot cast a (1, 3) numpy array into a single boolean True or False.
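A minimal sketch of two safer test patterns (assuming you know whether the code should return a scalar or an array): assert the expected type explicitly, or extract the lone element with .item() so the comparison is str vs str rather than a broadcast array comparison:
import numpy as np
import unittest

class TestStrict(unittest.TestCase):
    def test_expects_plain_str(self):
        result = np.array([['3']])            # what .values returned in the question
        self.assertIsInstance(result, str)    # fails loudly: result is an ndarray, not a str

    def test_expects_single_element_array(self):
        result = np.array([['3']])
        # .item() pulls the single element out as a plain Python str
        self.assertEqual(result.item(), '3')  # passes, and compares str to str

if __name__ == '__main__':
    unittest.main(verbosity=2)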
3
2
79,473,651
2025-2-27
https://stackoverflow.com/questions/79473651/how-to-normalise-a-two-dimensional-array
I am given a two-dimensional array. Each element represents the x,y coordinates of a trapezium. I do not want negative values so I need to adjust the minimum x value to zero and I want to adjust the minimum y value to zero. I can do this (see below), but in a very long winded way. Is there a more elegant way to normalise the arrays? import numpy as np # This line is given pa = np.array([[ 213.00002 , 213.00002 ],[ -213.00002 , 213.00002 ],[ 213.00002 , -213.00002 ],[ -213.00002 , -213.00002 ]]) # This line is given #1 get values pa_x_values = np.array([(pa[0][0]),(pa[1][0]),(pa[2][0]),(pa[3][0])]) pa_y_values = np.array([(pa[0][1]),(pa[1][1]),(pa[2][1]),(pa[3][1])]) #2 get minimum value pa_min_x = min(pa_x_values) pa_min_y = min(pa_y_values) #3 calculate difference from zero p1_dx = 0 - pa_min_x p1_dy = 0 - pa_min_y #4 make new array p1 = np.array([[0,0],[0,0],[0,0],[0,0]],dtype=np.float32) #5 normalise for i in range(4): p1[i][0] = (pa[i][0]) + p1_dx p1[i][1] = (pa[i][1]) + p1_dy # RESULT # p1 # [[426.00003, 426.00003], [ 0, 426.00003], [426.00003, 0 ], [ 0, 0 ]]
You can use .min(0) to get the minimum for each of x and y individually and then subtract that from the entire array. pa -= pa.min(0)
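For example, with the array from the question (the values in the output comments are what I'd expect):
import numpy as np
pa = np.array([[ 213.00002,  213.00002],
               [-213.00002,  213.00002],
               [ 213.00002, -213.00002],
               [-213.00002, -213.00002]])
pa -= pa.min(0)   # pa.min(0) is the column-wise minimum, broadcast over every row
print(pa)
# [[426.00004 426.00004]
#  [  0.      426.00004]
#  [426.00004   0.     ]
#  [  0.        0.     ]]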
1
2
79,473,874
2025-2-27
https://stackoverflow.com/questions/79473874/how-to-group-by-on-multiple-columns-and-retain-the-original-index-in-a-pandas-da
I need to group by multiple columns on a dataframe and calculate the rolling mean in the group. But the original index needs to be preserved. Simple python code below : data = {'values': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15], 'type':['A','B','A','B','A','B','A','B','A','B','A','B','A','B','A'], 'type2':['C','D','C','D','C','D','C','D','C','D','C','D','C','D','C']} df = pd.DataFrame(data) window_size = 3 df['mean'] = (df.groupby(['type','type2'])['values'].rolling(window=3).mean().reset_index(drop=True)) print(df) Output : values type type2 mean 0 1 A C NaN 1 2 B D NaN 2 3 A C 3.0 3 4 B D 5.0 4 5 A C 7.0 5 6 B D 9.0 6 7 A C 11.0 7 8 B D 13.0 8 9 A C NaN 9 10 B D NaN 10 11 A C 4.0 11 12 B D 6.0 12 13 A C 8.0 13 14 B D 10.0 14 15 A C 12.0 What I need : values type type2 mean 0 1 A C 1 1 2 B D 2 2 3 A C 2 3 4 B D 3 4 5 A C 3 5 6 B D 4 6 7 A C 5 7 8 B D 6 8 9 A C 7 9 10 B D 8 10 11 A C 9 11 12 B D 10 12 13 A C 11 13 14 B D 12 14 15 A C 13 The requirement is very simple. The mean has to be calcluated in the groups. So last row is group (A,C) . So it is 15+ 13(previous) + 11 (previous to previous because window is 3) = 39 /3 =13 Same for other rows. But when I do this with level = 0 with below code I get df['mean'] = (df.groupby(['type','type2'])['values'].rolling(window=3).mean().reset_index(level=0,drop=True)) raised in MultiIndex.from_tuples, see test_insert_error_msmgs 12690 if not value.index.is_unique: 12691 # duplicate axis 12692 raise err 12693 > 12694 raise TypeError( 12695 "incompatible index of inserted column with frame index" 12696 ) from err 12697 return reindexed_value, None TypeError: incompatible index of inserted column with frame index How to go about this simple requirement ?
You should use droplevel: cols = ['type', 'type2'] df['mean'] = (df.groupby(cols)['values'] .rolling(window=3).mean() .droplevel(cols) ) Output: values type type2 mean 0 1 A C NaN 1 2 B D NaN 2 3 A C NaN 3 4 B D NaN 4 5 A C 3.0 5 6 B D 4.0 6 7 A C 5.0 7 8 B D 6.0 8 9 A C 7.0 9 10 B D 8.0 10 11 A C 9.0 11 12 B D 10.0 12 13 A C 11.0 13 14 B D 12.0 14 15 A C 13.0 And to avoid the NaNs, add min_periods=1: cols = ['type', 'type2'] df['mean'] = (df.groupby(cols)['values'] .rolling(window=3, min_periods=1).mean() .droplevel(cols) ) Output: values type type2 mean 0 1 A C 1.0 1 2 B D 2.0 2 3 A C 2.0 3 4 B D 3.0 4 5 A C 3.0 5 6 B D 4.0 6 7 A C 5.0 7 8 B D 6.0 8 9 A C 7.0 9 10 B D 8.0 10 11 A C 9.0 11 12 B D 10.0 12 13 A C 11.0 13 14 B D 12.0 14 15 A C 13.0
2
2
79,470,854
2025-2-26
https://stackoverflow.com/questions/79470854/supply-extra-parameter-as-function-argument-for-scipy-optimize-curve-fit
I am defining a piecewise function for some data, def fit_jt(x, e1, e2, n1, E1, E2, N1, N2): a = 1.3 return np.piecewise(x, [x <= a, x > a], [ lambda x: 1 / e1 + (1 - np.float128(np.exp(-e2 * x / n1))) / e2, lambda x: 1 / E1 + (1 - np.float128(np.exp(-E2 * x / N1))) / E2 + x / N2 ]) which is called in main as: popt_jt, pcov_jt = optimize.curve_fit(fit_jt, time.values, jt.values, method='trf') Now, the problem here is the a is hardcoded in the function fit_jt. Is it possible to supply the value of a from the main (without making a lot of changes)?
Use a factory function that returns a fit_jt function with the desired value of a "baked in" via a closure: def make_fit_jt(a): def fit_jt(x, e1, e2, n1, E1, E2, N1, N2): return np.piecewise(x, [x <= a, x > a], [ lambda x: 1 / e1 + (1 - np.float128(np.exp(-e2 * x / n1))) / e2, lambda x: 1 / E1 + (1 - np.float128(np.exp(-E2 * x / N1))) / E2 + x / N2 ]) return fit_jt To use it: popt_jt, pcov_jt = optimize.curve_fit(make_fit_jt(1.3), time.values, jt.values, method='trf')
1
2
79,473,568
2025-2-27
https://stackoverflow.com/questions/79473568/np-where-with-a-in-type-condition
Given a numpy array arr = np.array([1, 2, 3, 4, 5]) I need to construct a binary mask according to a (arbitrary, potentially long) list of values, i.e. given values = np.array([2, 4, 5]) mask should be mask = np.array([False, True, False, True, True]) So I want to avoid condition = (arr==2) or (arr==4) or (arr==5) mask = np.where(condition, arr) to get something like mask = np.in(values, arr) Or, if it is not possible, how to construct a condition from an arbitrary list of values to feed into np.where?
In [64]: arr = np.array([1, 2, 3, 4, 5]) ...: values = np.array([2, 4, 5]) While isin is easy to use, it isn't the only option: In [66]: np.isin(arr, values) Out[66]: array([False, True, False, True, True]) We could compare the whole arrays: In [67]: values[:,None]==arr Out[67]: array([[False, True, False, False, False], [False, False, False, True, False], [False, False, False, False, True]]) In [68]: (values[:,None]==arr).any(axis=0) Out[68]: array([False, True, False, True, True]) Your use of or does not work: In [69]: (arr==values[0]) or (arr==values[1]) or (arr==values[2]) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[69], line 1 ----> 1 (arr==values[0]) or (arr==values[1]) or (arr==values[2]) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() We have to use the array logical_or: In [70]: (arr==values[0]) | (arr==values[1]) | (arr==values[2]) Out[70]: array([False, True, False, True, True]) Depending on the relative size of the two array isin may actually do this kind of logical_or. Let's check the times (actual times will depend on the sizes): In [71]: timeit (arr==values[0]) | (arr==values[1]) | (arr==values[2]) 13.8 μs ± 75.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [72]: timeit np.isin(arr, values) 97 μs ± 242 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [73]: timeit (values[:,None]==arr).any(axis=0) 17 μs ± 101 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) isin is convenient, but slowest in this example.
3
3
79,473,015
2025-2-27
https://stackoverflow.com/questions/79473015/how-to-gracefully-stop-an-asyncio-server-in-python-3-8
As part of learning python and asyncio I have a simple TCP client/server architecture using asyncio (I have reasons why I need to use that) where I want the server to completely exit when it receives the string 'quit' from the client. The server, stored in asyncio_server.py, looks like this: import socket import asyncio class Server: def __init__(self, host, port): self.host = host self.port = port async def handle_client(self, reader, writer): # Callback from asyncio.start_server() when # a client tries to establish a connection addr = writer.get_extra_info('peername') print(f'Accepted connection from {addr!r}') request = None try: while request != 'quit': request = (await reader.read(255)).decode('utf8') print(f"Received {request!r} from {addr!r}") response = 'ok' writer.write(response.encode('utf8')) await writer.drain() print(f'Sent {response!r} to {addr!r}') print('----------') except Exception as e: print(f"Error handling client {addr!r}: {e}") finally: print(f"Connection closed by {addr!r}") writer.close() await writer.wait_closed() print(f'Closed the connection with {addr!r}') asyncio.get_event_loop().stop() # <<< WHAT SHOULD THIS BE? async def start(self): server = await asyncio.start_server(self.handle_client, self.host, self.port) async with server: print(f"Serving on {self.host}:{self.port}") await server.serve_forever() async def main(): server = Server(socket.gethostname(), 5000) await server.start() if __name__ == '__main__': asyncio.run(main()) and when the client sends quit the connection is closed and the server exits but always with the error message: Traceback (most recent call last): File "asyncio_server.py", line 54, in <module> asyncio.run(main()) File "C:\Python38-32\lib\asyncio\runners.py", line 44, in run return loop.run_until_complete(main) File "C:\Python38-32\lib\asyncio\base_events.py", line 614, in run_until_complete raise RuntimeError('Event loop stopped before Future completed.') RuntimeError: Event loop stopped before Future completed. What do I need to do instead of or in addition to calling asyncio.get_event_loop().stop() and/or server.serve_forever() to have the server exit gracefully with no error messages? I've tried every alternative I can find with google, including calling cancel() on the server, using separate loop construct in main(), trying alternatives to stop(), alternatives to run_forever(), etc., etc. and cannot figure out what I'm supposed to do to gracefully stop the server and exit the program without error messages when it receives a quit message from the client. I'm using python 3.8.10 and cannot upgrade to a newer version due to managed environment constraints. I'm also calling python from git bash in case that matters. Additional Information: The client code, stored in asyncio_client.py`, is below in case that's useful. 
import socket import asyncio class Client: def __init__(self, host, port): self.host = host self.port = port async def handle_server(self, reader, writer): addr = writer.get_extra_info('peername') print(f'Connected to from {addr!r}') request = input('Enter Request: ') while request.lower().strip() != 'quit': writer.write(request.encode()) await writer.drain() print(f'Sent {request!r} to {addr!r}') response = await reader.read(1024) print(f'Received {response.decode()!r} from {addr!r}') print('----------') request = input('Enter Request: ') writer.write(request.encode()) await writer.drain() print(f'Sent {request!r} to {addr!r}') writer.close() await writer.wait_closed() print(f'Closed the connection with {addr!r}') async def start(self): reader, writer = await asyncio.open_connection(host=self.host, port=self.port) await self.handle_server(reader, writer) async def main(): client = Client(socket.gethostname(), 5000) await client.start() if __name__ == '__main__': asyncio.run(main())
You can use an asyncio.Event to set when the QUIT message arrives. Then have asyncio wait for the either server_forever() or the Event to complete first. Once the Event is set, call the .close() method and stop the server. In the code below I assigned the server to an attribute. import socket import asyncio class Server: def __init__(self, host, port): self.host = host self.port = port self._server = None self._shutdown_event = asyncio.Event() async def handle_client(self, reader, writer): addr = writer.get_extra_info('peername') print(f'Accepted connection from {addr!r}') request = None try: while request != 'quit': request = (await reader.read(255)).decode('utf8') print(f"Received {request!r} from {addr!r}") writer.write(b'ok') await writer.drain() print(f'Sent "ok" to {addr!r}') print('----------') if request == 'quit': self._shutdown_event.set() # Signal shutdown await writer.drain() # Try to drain before closing break # exit the loop except Exception as e: print(f"Error handling client {addr!r}: {e}") finally: print(f"Connection closed by {addr!r}") writer.close() await writer.wait_closed() print(f'Closed the connection with {addr!r}') async def start(self): self._server = await asyncio.start_server(self.handle_client, self.host, self.port) async with self._server: print(f"Serving on {self.host}:{self.port}") await asyncio.wait( [ self._server.serve_forever(), self._shutdown_event.wait() ], return_when=asyncio.FIRST_COMPLETED ) await self.stop() async def stop(self): if self._server: self._server.close() await self._server.wait_closed() print("Server stopped.") async def main(): server = Server(socket.gethostname(), 5000) try: await server.start() except asyncio.CancelledError: print("Server task was cancelled.") except Exception as e: print(f"An error occurred: {e}") if __name__ == '__main__': asyncio.run(main()) Also, your client code is not reading the final message sent by the server after a quit is received. It is a small addition: import socket import asyncio class Client: def __init__(self, host, port): self.host = host self.port = port async def handle_server(self, reader, writer): addr = writer.get_extra_info('peername') print(f'Connected to from {addr!r}') request = input('Enter Request: ') while request.lower().strip() != 'quit': writer.write(request.encode()) await writer.drain() print(f'Sent {request!r} to {addr!r}') response = await reader.read(1024) print(f'Received {response.decode()!r} from {addr!r}') print('----------') request = input('Enter Request: ') writer.write(request.encode()) await writer.drain() print(f'Sent {request!r} to {addr!r}') response = await reader.read(1024) # read the final response print(f'Received {response.decode()!r} from {addr!r}') writer.close() await writer.wait_closed() print(f'Closed the connection with {addr!r}') async def start(self): reader, writer = await asyncio.open_connection(host=self.host, port=self.port) await self.handle_server(reader, writer) async def main(): client = Client(socket.gethostname(), 5000) await client.start() if __name__ == '__main__': asyncio.run(main())
1
2
79,473,192
2025-2-27
https://stackoverflow.com/questions/79473192/disagreement-between-scipy-quaternion-and-wolfram
I'm calculating rotation quaternions from Euler angles in Python using SciPy and trying to validate against an external source (Wolfram Alpha). This Scipy code gives me one answer: from scipy.spatial.transform import Rotation as R rot = R.from_euler('xyz', [30,45,60], degrees=1) quat = rot.as_quat() print(quat[3], quat[0], quat[1], quat[2]) # w, x, y, z (w,x,y,z) = 0.8223, 0.0222, 0.4396, 0.3604 while Wolfram Alpha gives a different answer (w,x,y,z) = 0.723, 0.392, 0.201, 0.532 Why the difference? Is it a difference in paradigm for how the object is rotated (ex. extrinsic rotations vs. intrinsic rotations)?
Indeed, when you switch from small "xyz", which denotes extrinsic rotations, to capital "XYZ", which denotes intrinsic rotations in from_euler(), the results will match those of WolframAlpha: from scipy.spatial.transform import Rotation import numpy as np np.set_printoptions(precision=3) rot = Rotation.from_euler("XYZ", [30, 45, 60], degrees=True) print(rot.as_matrix().T) # [[ 0.354 0.927 0.127] # [-0.612 0.127 0.78 ] # [ 0.707 -0.354 0.612]] print(np.roll(rot.as_quat(), shift=1)) # [0.723 0.392 0.201 0.532]
1
4
79,473,140
2025-2-27
https://stackoverflow.com/questions/79473140/add-edges-to-colorbar-in-seaborn-heatmap
I have the following heatmap: import seaborn as sns import matplotlib.pyplot as plt import numpy as np # Create a sample heatmap data = np.random.rand(10, 12) ax = sns.heatmap(data) plt.show() How can I add a black edge to the colormap? Is there any way to do this without needing to use subplots?
Looks like seaborn turns the edges off by default. Here's an approach to get a reference to the colorbar Axes, and then re-apply the edge: ax = sns.heatmap(data) cax = ax.figure.get_children()[-1] cax.spines['outline'].set_linewidth(0.5) # adjust as desired Output:
1
2
79,473,353
2025-2-27
https://stackoverflow.com/questions/79473353/valueerror-too-many-values-to-unpack-python-when-creating-dictionary-from-a-str
I am trying to create a dictionary from a string. In this case I have posted my sample code (sorry its not that clean, just hardcoded values), the first str1 works fine and is able generate a corresponding dictionary by splitting correctly ; and associating key value at = sign. However, the second string (str5) is not working. I am assuming its because there is an extra "=" at: 1 = '< 24 hours'; 2 = '> 24 hours, <**=** 30 days'; Can you please tell me how I can resolve this issue of ignoring the = sign and continue creating the dictionary. I get the following error: 0=Blank;1=<24hours;2=>24hours,<=30days;3=>30days(i.e.,permanent) Traceback (most recent call last): File xmlread.py:303 in <module> main() File xmlread.py:299 in main stringParseTest() File xmlread.py:283 in stringParseTest key, value = pair.split("=") ValueError: too many values to unpack (expected 2) I could try parsing it manually and not create a dictionary; however I would like to see if there is a condition I could add in there to recognize this issue. Don't know how to do that though. What if I add an if statement to check how many times a string has been splitted. def stringParseTest(): str1 = str("0 = Blank; 1 = Surface Device: Skin; 2 = Surface Device: Mucosal Membrane; 3 = Surface Device: Breached or Compromised Surfaces; 4 = External Communicating Device: Blood Path, Indirect; 5 = External Communicating Device: Tissue/Bone/Dentin; 6 = External Communicating Device: Circulating Blood; 7 = Implant Device: Tissue/Bone; 8 = Implant Device: Blood") str5 = str("0 = Blank; 1 = '< 24 hours'; 2 = '> 24 hours, <= 30 days'; 3 = '> 30 days (i.e., permanent)'") dictionary = {} #Working string split str1 = str1.replace(" ", "") str1 = str1.replace("'", "") print(str1) for pair in str1.split(";"): key, value = pair.split("=") dictionary[key] = value print(dictionary) #Not working split str5 = str5.replace(" ", "") str5 = str5.replace("'", "") print(str5) for pair in str5.split(";"): key, value = pair.split("=") dictionary[key] = value print(dictionary)
In the split function you can pass an additional parameter, maxsplit=1, so that it will split only on the first occurrence of the delimiter, e.g. pair.split("=", 1): for pair in str5.split(";"): key, value = pair.split("=", 1) dictionary[key] = value
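For example, on the entry that caused the error (after the spaces and quotes have been stripped, as in the question):
pair = "2=>24hours,<=30days"
print(pair.split("="))      # ['2', '>24hours,<', '30days']  -> 3 values, unpacking fails
print(pair.split("=", 1))   # ['2', '>24hours,<=30days']     -> exactly 2 values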
1
2
79,472,659
2025-2-27
https://stackoverflow.com/questions/79472659/boolean-indexing-in-numpy-arrays
I was learning boolean indexing in numpy and came across this. How is the indexing below not producing a Index Error as for axis 0 as there are only two blocks? x = np.arange(30).reshape(2, 3, 5) x array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) x[[[True, True, False], [False, True, True]]] array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]])
You are performing boolean array indexing, which is fine. You would have an indexing error with: # first dimension second dimension x[[True, True, False], [False, True, True]] # IndexError: boolean index did not match indexed array along dimension 0; # dimension is 2 but corresponding boolean dimension is 3 However, in your case, you have an extra set of brackets, which makes it index only the first dimension, using an array: # [ first dimension ] # [ second dimension], [ second dimension] x[[[True, True, False], [False, True, True]]] This means, that using a 2x3 array, you request in the second dimension, for the first "row" [True, True, False], and for the second "row" [False, True, True]. Since your array shape matches the first two dimensions, this is valid, and more or less equivalent to: np.concatenate([x[0][[True, True, False]], x[1][[False, True, True]]])
2
3
79,472,665
2025-2-27
https://stackoverflow.com/questions/79472665/how-to-access-a-dictionary-key-storing-a-list-in-a-list-of-lists-and-dictionarie
I have the following list: plates = [[], [], [{'plate ID': '193a', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [{'plate ID': '194b', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [], [], [], [], [], [], [], [], [], []] I'd need to loop plates, find the key 'sources' and store some data to another list. import pandas as pd matched_plates = [] matches_sources_ra = [] matches_sources_dec =[] plates = [[], [], [{'plate ID': '193a', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [{'plate ID': '194b', 'ra': 98.0, 'dec': 11.0, 'sources': [[3352102441297986560, 99.28418829069784, 11.821604434173034], [3352465726807951744, 100.86164898224092, 12.756149587760696]]}], [], [], [], [], [], [], [], [], [], []] plates_df = pd.DataFrame(plates) for idx, row in plates_df.iterrows(): if 'sources' in row.keys(): print(row["plate ID"]) matched_plates.append([row["plate ID"],len(row["sources"])]) matches_sources_ra.append(row["sources"][0][1]) matches_sources_dec.append(row["sources"][0][2]) This code never enters the if, what am I doing wrong? Thank you for your help
There is absolutely no need for pandas here. You're just using it as an expensive and slow container. Just loop over the lists and dictionaries, in pure python: matched_plates = [] matches_sources_ra = [] matches_sources_dec = [] for lst in plates: for dic in lst: if 'sources' in dic: print(dic['plate ID']) matched_plates.append([dic['plate ID'], len(dic['sources'])]) matches_sources_ra.append(dic['sources'][0][1]) matches_sources_dec.append(dic['sources'][0][2]) Output: 193a 194b Output lists: matched_plates # [['193a', 2], ['194b', 2]] matches_sources_ra # [99.28418829069784, 99.28418829069784] matches_sources_dec # [11.821604434173034, 11.821604434173034]
3
4
79,470,526
2025-2-26
https://stackoverflow.com/questions/79470526/grouped-rolling-mean-in-polars
Similar question is asked here However it didn't seem to work in my case. I have a dataframe with 3 columns, date, groups, prob. What I want is to create a 3 day rolling mean of the prob column values grouped by groups and date. However following the above linked answer I got all nulls returned. import polars as pl from datetime import date import numpy as np dates = pl.date_range(date(2024, 12, 1), date(2024, 12, 30), "1d", eager=True).alias( "date") len(dates) days = pl.concat([dates,dates]) groups = pl.concat([pl.select(pl.repeat("B", n = 30)).to_series(), pl.select(pl.repeat("A", n = 30)).to_series()]).alias('groups') data = pl.DataFrame([days, groups]) data2 = data.with_columns(pl.lit(np.random.rand(data.height)).alias("prob")) data2.with_columns( rolling_mean = pl.col('prob') .rolling_mean(window_size = 3) .over('date','groups') ) """ shape: (60, 4) ┌────────────┬────────┬──────────┬──────────────┐ │ date ┆ groups ┆ prob ┆ rolling_mean │ │ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ str ┆ f64 ┆ f64 │ ╞════════════╪════════╪══════════╪══════════════╡ │ 2024-12-01 ┆ B ┆ 0.938982 ┆ null │ │ 2024-12-02 ┆ B ┆ 0.103133 ┆ null │ │ 2024-12-03 ┆ B ┆ 0.724672 ┆ null │ │ 2024-12-04 ┆ B ┆ 0.495868 ┆ null │ │ 2024-12-05 ┆ B ┆ 0.621124 ┆ null │ │ … ┆ … ┆ … ┆ … │ │ 2024-12-26 ┆ A ┆ 0.762529 ┆ null │ │ 2024-12-27 ┆ A ┆ 0.766366 ┆ null │ │ 2024-12-28 ┆ A ┆ 0.272936 ┆ null │ │ 2024-12-29 ┆ A ┆ 0.28709 ┆ null │ │ 2024-12-30 ┆ A ┆ 0.403478 ┆ null │ └────────────┴────────┴──────────┴──────────────┘ """" In the documentation I found .rolling_mean_by and tried using it instead but instead of doing a rolling mean it seems to just return the prob value for each row. data2.with_columns( rolling_mean = pl.col('prob') .rolling_mean_by(window_size = '3d', by = 'date') .over('groups', 'date') ) """ shape: (60, 4) ┌────────────┬────────┬──────────┬──────────────┐ │ date ┆ groups ┆ prob ┆ rolling_mean │ │ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ str ┆ f64 ┆ f64 │ ╞════════════╪════════╪══════════╪══════════════╡ │ 2024-12-01 ┆ B ┆ 0.938982 ┆ 0.938982 │ │ 2024-12-02 ┆ B ┆ 0.103133 ┆ 0.103133 │ │ 2024-12-03 ┆ B ┆ 0.724672 ┆ 0.724672 │ │ 2024-12-04 ┆ B ┆ 0.495868 ┆ 0.495868 │ │ 2024-12-05 ┆ B ┆ 0.621124 ┆ 0.621124 │ │ … ┆ … ┆ … ┆ … │ │ 2024-12-26 ┆ A ┆ 0.762529 ┆ 0.762529 │ │ 2024-12-27 ┆ A ┆ 0.766366 ┆ 0.766366 │ │ 2024-12-28 ┆ A ┆ 0.272936 ┆ 0.272936 │ │ 2024-12-29 ┆ A ┆ 0.28709 ┆ 0.28709 │ │ 2024-12-30 ┆ A ┆ 0.403478 ┆ 0.403478 │ └────────────┴────────┴──────────┴──────────────┘ """"
Overall Problem. You group not only by group but also by date. This effectively performs the rolling operation separately for each group and date (i.e. separately for each row). Explanation of 1st attempt. As the groups are defined by the group and date columns, each group consists of a single row. This is lower than min_samples (equal to window_size by default), giving a None. Explanation of 2nd attempt. pl.Expr.rolling_mean_by does not have a min_samples argument. Therefore, the mean is computed, but only using the single element in the group, giving the perception that simply prob is returned. Solution. You can alleviate the issue by excluding date from the grouping defined in pl.Expr.over. This looks as follows. data2.with_columns( rolling_mean=pl.col('prob').rolling_mean_by(by="date", window_size="3d").over('groups') ) shape: (60, 4) ┌────────────┬────────┬──────────┬──────────────┐ │ date ┆ groups ┆ prob ┆ rolling_mean │ │ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ str ┆ f64 ┆ f64 │ ╞════════════╪════════╪══════════╪══════════════╡ │ 2024-12-01 ┆ B ┆ 0.484882 ┆ 0.484882 │ │ 2024-12-02 ┆ B ┆ 0.012538 ┆ 0.24871 │ │ 2024-12-03 ┆ B ┆ 0.510953 ┆ 0.336124 │ │ 2024-12-04 ┆ B ┆ 0.613973 ┆ 0.379155 │ │ 2024-12-05 ┆ B ┆ 0.69837 ┆ 0.607765 │ │ … ┆ … ┆ … ┆ … │ │ 2024-12-26 ┆ A ┆ 0.948971 ┆ 0.653762 │ │ 2024-12-27 ┆ A ┆ 0.905213 ┆ 0.622917 │ │ 2024-12-28 ┆ A ┆ 0.986094 ┆ 0.946759 │ │ 2024-12-29 ┆ A ┆ 0.286836 ┆ 0.726047 │ │ 2024-12-30 ┆ A ┆ 0.78191 ┆ 0.684947 │ └────────────┴────────┴──────────┴──────────────┘
4
5
79,471,079
2025-2-26
https://stackoverflow.com/questions/79471079/how-to-handle-malformed-api-request-in-flask
There is quite an old game (that no longer works) that has to make some API calls in order to be playable. I am creating a Flask mock server to handle those requests, however it turned out that the requests are not compliant with HTTP standard and are malformed. For example: Get /config.php http/1.1 to which flask reports code 400, message Bad request version ('http/1.1'). After searching for various solutions, here is what I tried (and none worked): before_request Python decorator wsgi_app middleware Here is my code: from flask import Flask, request from functools import wraps app = Flask(__name__) class RequestMiddleware: def __init__(self, app): self.app = app def __call__(self, environ, start_response): print('Middleware', environ['REQUEST_METHOD'], environ['SERVER_PROTOCOL']) return self.app(environ, start_response) def RequestDecorator(view): @wraps(view) def decorated(*args, **kwargs): print('Decorator', args, kwargs) return view(*args, **kwargs) return decorated @app.before_request def RequestHook(): print('Before request: url: %s, path: %s' % (request.url, request.path)) app.wsgi_app = RequestMiddleware(app.wsgi_app) @app.route("/config.php", methods=["GET"]) @RequestDecorator def get_config(): return ("{}", 200) Example output: Middleware GET HTTP/1.1 Before request: url: http://localhost/config.php, path: /config.php Decorator () {} "GET /config.php HTTP/1.1" 200 - code 400, message Bad request version ('http/1.1') "Get /config.php http/1.1" 400 - Malformed request is not getting output from any of the solutions. My goal was to intercept the request before it is rejected in order to string replace Get to GET and http/1.1 to HTTP/1.1. Is it even possible?
Flask uses Werkzeug which in turn uses Python's http.server to parse HTTP requests. So the error is actually thrown by Python's http.server.BaseHTTPRequestHandler @ Line 311. The only way to get around this error is by monkey patching certain methods. If you let me know the version number of Werkzeug that you're running, I'll update my answer with a patch.
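In the meantime, here is a rough sketch of the kind of workaround I have in mind (assuming a Werkzeug version whose WSGIRequestHandler still sets self.raw_requestline and delegates to http.server's parse_request; the handler name is illustrative): normalize the request line before the standard parser sees it, and hand the custom handler to the dev server.
from werkzeug.serving import WSGIRequestHandler

class LenientRequestHandler(WSGIRequestHandler):
    def parse_request(self):
        # e.g. b"Get /config.php http/1.1\r\n" -> b"GET /config.php HTTP/1.1\r\n"
        parts = self.raw_requestline.split(b' ')
        if len(parts) == 3:
            parts[0] = parts[0].upper()   # method
            parts[2] = parts[2].upper()   # protocol version
            self.raw_requestline = b' '.join(parts)
        return super().parse_request()

# 'app' is the Flask app defined earlier in your file; Flask forwards
# request_handler on to werkzeug.serving.run_simple()
app.run(port=80, request_handler=LenientRequestHandler)  # port: whatever the game expects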
1
2
79,470,828
2025-2-26
https://stackoverflow.com/questions/79470828/nodriver-cannot-start-headless-mode
I found Nodriver, which is the successor Undetected-Chromedriver. I am trying to run in headless mode but am having problems. import nodriver as uc async def main(): browser = await uc.start(headless=True) page = await browser.get('https://bot.sannysoft.com/') if __name__ == '__main__': uc.loop().run_until_complete(main()) However I get error Traceback (most recent call last): File "C:\no_drive_test.py", line 21, in <module> uc.loop().run_until_complete(main()) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^ File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\asyncio\base_events.py", line 721, in run_until_complete return future.result() ~~~~~~~~~~~~~^^ File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\no_drive_test.py", line 5, in main browser = await uc.start(headless=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\util.py", line 95, in start return await Browser.create(config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\browser.py", line 90, in create await instance.start() File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\browser.py", line 393, in start await self.connection.send(cdp.target.set_discover_targets(discover=True)) File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\connection.py", line 413, in send await self._prepare_headless() File "C:\Users\USERNAME\AppData\Local\Programs\Python\Python313\Lib\site-packages\nodriver\core\connection.py", line 492, in _prepare_headless response, error = await self._send_oneshot( ^^^^^^^^^^^^^^^ TypeError: cannot unpack non-iterable NoneType object I tried to create issue on Nodriver github page but it looks like it's only available to collaborators of the project
The issue was created on the repo: link. The workaround suggested there was to add the new headless flag as a browser argument: browser_args.append(f"--headless=new")
1
1
79,469,254
2025-2-26
https://stackoverflow.com/questions/79469254/calculating-the-curvature-of-a-discrete-function-in-3d-space
I have a set of points representing a curve in 3D space. The goal is to detect the point with the maximum curvature. When looking on the curvature page on Wikipedia, I find the curvature can be found as the magnitude of the acceleration of the parametric function. My idea of solving the solution is to interpolate a B-spline over the 3D points. Next, I discretize the function with equal spaced points. At last, I calculate the acceleration (second derivation) over the discrete points. The curvature should be found by the magnitude of the acceleration. The Radius curvature can be found by inverting the curvature. This is my understanding of the problem. If I'm incorrect, please correct me. I have written a python function for this problem. def curvature(points: np.ndarray) -> np.ndarray: tck, u = splprep(points.T) t = np.linspace(0, 1, len(points)) x, y, z = splev(t, tck) parametric_points = np.stack([x, y, z]).T tangent = np.diff(parametric_points, axis=0) acceleration = np.diff(tangent, axis=0) magnitude = np.array([np.linalg.norm(a) for a in acceleration]) radius_curvature = 1 / magnitude return radius_curvature To test my function, I generate a circle in 3D space with the code below and test to see the radius curvature for each point: def generate_circle_by_angles(t, C, r, theta, phi): # Source: https://meshlogic.github.io/posts/jupyter/curve-fitting/fitting-a-circle-to-cluster-of-3d-points/ # Orthonormal vectors n, u, <n,u>=0 n = np.array([np.cos(phi) * np.sin(theta), np.sin(phi) * np.sin(theta), np.cos(theta)]) u = np.array([-np.sin(phi), np.cos(phi), 0]) # P(t) = r*cos(t)*u + r*sin(t)*(n x u) + C p_circle = r * np.cos(t)[:, np.newaxis] * u + r * np.sin(t)[:, np.newaxis] * np.cross(n, u) + C return p_circle r = 2.5 # Radius c = np.array([3, 3, 4]) # Center theta = 0 / 180 * np.pi # Azimuth phi = 0 / 180 * np.pi # Zenith t = np.linspace(0, np.pi, 100) p = generate_circle_by_angles(t, c, r, theta, phi) Rs = curvature(p) However, the result is not correct. [252.36949094 256.16299957 ... 260.04828741 256.16299957 252.36949094] Has anyone a solution or remarks on my solution?
Fundamentally, each successive three points must lie on their own circle and the local radius of curvature (reciprocal of the “curvature”) is just the radius of that circle. Take points i-1, i, i+1 and form successive tangent vectors tA and tB. Three points must lie in a unique plane, unless they are collinear. Within that plane the three points must lie on a circle, whose centre is the intersection of the perpendicular bisectors; (again, unless they are collinear, in which case Rc is infinite.) Find this centre and its distance from any one point: this is the radius of curvature. I found the centre by noting the vector equations of the two bisectors: centre = mA + p nA = mB + q nB where mA is the midpoint of the line joining the first two points and nA is in the direction of the perpendicular bisector to those two points; similarly mB and nB for the latter pair of points. Isolate p (say) by dot-producting with tB, the tangent vector for the later segment. Then you get the centre of the circle. import numpy as np def radius_of_curvature( points ): # Points are [ [x0,y0,z0], [x1,y1,z1], ... ] N = len( points ) R = np.zeros( N ) for i in range( 1, N - 1 ): tA, tB = points[i] - points[i-1], points[i+1] - points[i] # tangent vectors z = np.cross( tA, tB ) # normal to plane if np.linalg.norm( z ) < 1e-30: # HELP! points are co-linear R[i] = 1e30 # my favourite approximation to infinity continue nA, nB = np.cross( z, tA ), np.cross( z, tB ) # normals to successive segments p = np.dot( 0.5 * ( tA + tB ), tB ) / np.dot( nA, tB ) # parameter along normal A C = 0.5 * ( points[i-1] + points[i] ) + p * nA # centre of circle R[i] = np.linalg.norm( C - points[i] ) # radius of circle R[0], R[-1] = R[1], R[-2] # arbitrary choice at end points return R def generate_circle_by_angles(t, C, r, theta, phi): n = np.array([np.cos(phi) * np.sin(theta), np.sin(phi) * np.sin(theta), np.cos(theta)]) u = np.array([-np.sin(phi), np.cos(phi), 0]) p_circle = r * np.cos(t)[:, np.newaxis] * u + r * np.sin(t)[:, np.newaxis] * np.cross(n, u) + C return p_circle r = 2.5 # radius c = np.array([3, 3, 4]) # centre theta = 0 / 180 * np.pi # azimuth phi = 0 / 180 * np.pi # zenith t = np.linspace(0, np.pi, 100) p = generate_circle_by_angles( t, c, r, theta, phi ) Rs = radius_of_curvature(p) print( Rs ) Output: [2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5 2.5]
4
3
79,469,894
2025-2-26
https://stackoverflow.com/questions/79469894/polars-cum-sum-to-create-a-set-and-not-actually-sum
I'd like to use a function like cumsum, but that would create a set of all values contained in the column up to the point, and not to sum them df = pl.DataFrame({"a": [1, 2, 3, 4]}) df["a"].cum_sum() shape: (4,) Series: 'a' [i64] [ 1 3 6 10 ] but I'd like to have something like df["a"].cum_sum() shape: (4,) Series: 'a' [i64] [ {1} {1, 2} {1, 2, 3} {1, 2, 3, 4} ] also note that I'm working on big (several Millions of rows) df, so I'd like to avoid indexing and map_elements (as I've read that it slows down a lot)
This can be achieved using pl.Expr.cumulative_eval together with pl.Expr.unique and pl.Expr.implode as follows. df.with_columns( res=pl.col("a").cumulative_eval(pl.element().unique().implode()) ) shape: (4, 2) ┌─────┬─────────────┐ │ a ┆ res │ │ --- ┆ --- │ │ i64 ┆ list[i64] │ ╞═════╪═════════════╡ │ 1 ┆ [1] │ │ 2 ┆ [1, 2] │ │ 3 ┆ [1, 2, 3] │ │ 4 ┆ [1, 2, … 4] │ └─────┴─────────────┘
2
3
79,469,073
2025-2-26
https://stackoverflow.com/questions/79469073/how-to-group-data-using-pandas-by-an-array-column
I have a data frame collected from a CSV in the following format: Book Name,Languages "Book 1","['Portuguese','English']" "Book 2","['English','Japanese']" "Book 3","[Spanish','Italian','English']" ... I was able to convert the string array representation on the column Languages to a python array using transform, but now i'm struggling to find a way to group Books by language. I would like to produce from this data set a dict like this: { 'Portuguese': 'Book 1' 'English': ['Book 1', 'Book 2', 'Book 3'], 'Spanish': 'Book 3', 'Italian': 'Book 3', 'Japanese': 'Book 2' } I tried to look into groupby on the array column but could not figure out how to make each entry on the array a key to be used as grouping. Any pointers would be really apreciated.
You can do this by iterating through the DataFrame and updating a dictionary dynamically. import pandas as pd import ast data = { "Book Name": ["Book 1", "Book 2", "Book 3"], "Languages": ["['Portuguese','English']", "['English','Japanese']", "['Spanish','Italian','English']"] } df = pd.DataFrame(data) df["Languages"] = df["Languages"].apply(ast.literal_eval) language_dict = {} for _, row in df.iterrows(): book_name = row["Book Name"] for lang in row["Languages"]: if lang in language_dict: if isinstance(language_dict[lang], list): language_dict[lang].append(book_name) else: language_dict[lang] = [language_dict[lang], book_name] else: language_dict[lang] = book_name print(language_dict) Output will be { 'Portuguese': 'Book 1', 'English': ['Book 1', 'Book 2', 'Book 3'], 'Japanese': 'Book 2', 'Spanish': 'Book 3', 'Italian': 'Book 3' }
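A variant worth considering (assuming it is acceptable for every value to be a list, even when a language maps to a single book, which is usually easier to consume downstream) uses collections.defaultdict with the same df as above:
from collections import defaultdict

language_dict = defaultdict(list)
for _, row in df.iterrows():
    for lang in row["Languages"]:
        language_dict[lang].append(row["Book Name"])

print(dict(language_dict))
# {'Portuguese': ['Book 1'], 'English': ['Book 1', 'Book 2', 'Book 3'], 'Japanese': ['Book 2'], 'Spanish': ['Book 3'], 'Italian': ['Book 3']}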
2
0
79,470,214
2025-2-26
https://stackoverflow.com/questions/79470214/add-space-between-xlabels-in-matplotlib
I have the following issue: I need to create a graph showing the results of an experiment I did in the lab recently. Unfortunately, two of these values are very close together due to an error (which must be considered). When I create the graph, these two values are so close to each other that they are basically unreadable. I am clueless about what I should do because the approaches I found online didn't seem to work for me, as I need to have the exact values on the x-axis. Every marker needs to be placed exactly above the corresponding value and be clearly visible. No matter how much I stretch the graph, I can't get the two values to be readable because they overlap so much. Is there any way to add some space between the two values or space all values evenly? Below is a snippet of my code and a picture of the graph: import matplotlib.pyplot as plt import numpy as np def _plot(x, y, titel): xaxis = np.array(x) yaxis = np.array(y) z = np.polyfit(xaxis,yaxis,1) p = np.poly1d(z) print("\n\nFormel: y=%.6fx+%.6f"%(z[0],z[1])) print("Steigung von Trendlinie: {:.3f}".format(z[0])) plt.scatter(xaxis, yaxis, marker='x', color='red', label='Werte') plt.plot(xaxis, p(xaxis), marker='x', color='black', mec='b', ls="--") plt.plot(xaxis, yaxis, color='#875252', ls=':') plt.tight_layout() plt.xticks(xaxis, rotation=45, fontsize=8) plt.xlabel('c(Pb²⁺) in mol/L') plt.ylabel(r"$\mathrm{pK}_{L}$") plt.legend(['gemessene Werte', 'Trendlinie / Linearer Fit']) plt.title(titel) plt.grid() plt.show() I use the _plot function later throughout the program to start the plotting process. The values for x and y are both lists of floats. In this specific graph it is: x = [0.4139, 0.2192, 0.2170, 0.1124, 0.0570, 0.0289, 0.0144] and y = [2.8538, 3.279, 3.2487, 3.6497, 4.0136, 4.3936, 4.6968]. They are passed into np.array. The titel parameter is just any string that's going to be used for plt.title(). The graph in question: Thank you in advance for your help! :)
First, I'd suggest using the Axes interface instead of the pyplot interface. You could manually move the 0.217 label to the left (inspired by this answer): import matplotlib.pyplot as plt from matplotlib.transforms import ScaledTranslation import numpy as np x = [0.4139, 0.2192, 0.2170, 0.1124, 0.0570, 0.0289, 0.0144] y = [2.8538, 3.279, 3.2487, 3.6497, 4.0136, 4.3936, 4.6968] titel = "Titel" def _plot(x, y, titel): xaxis = np.array(x) yaxis = np.array(y) z = np.polyfit(xaxis,yaxis,1) p = np.poly1d(z) print("\n\nFormel: y=%.6fx+%.6f"%(z[0],z[1])) print("Steigung von Trendlinie: {:.3f}".format(z[0])) fig, ax = plt.subplots() ax.scatter(xaxis, yaxis, marker='x', color='red', label='Werte') ax.plot(xaxis, p(xaxis), marker='x', color='black', mec='b', ls="--") ax.plot(xaxis, yaxis, color='#875252', ls=':') fig.tight_layout() ax.set_xticks(xaxis, xaxis, rotation=45, fontsize=8) ax.set_xlabel('c(Pb²⁺) in mol/L') ax.set_ylabel(r"$\mathrm{pK}_{L}$") ax.legend(['gemessene Werte', 'Trendlinie / Linearer Fit']) ax.set_title(titel) ax.grid() return ax ax = _plot(x, y, titel) # move the 0.217 label slightly to the left labels = ax.get_xticklabels() offset = ScaledTranslation(-10/72, 0/72, ax.figure.dpi_scale_trans) labels[2].set_transform(labels[2].get_transform() + offset) plt.show() Output:
2
1
79,469,400
2025-2-26
https://stackoverflow.com/questions/79469400/python-numpy-how-to-split-a-matrix-into-4-not-equal-matrixes
from sympy import * import numpy as np # Python 3.13.2 u1 = Symbol("u1") u2 = Symbol("u2") q3 = Symbol("q3") u4 = Symbol("u4") q5 = Symbol("q5") disp_vector = np.array([u1, u2, q3, u4, q5]) stiffness_matrix = np.array([[1, 0, 0, -1, 0], [0, 0.12, 0.6, 0, 0.6], [0, 0.6, 4, 0, 2], [-1, 0, 0, 1, 0], [0, 0.6, 2, 0, 4]]) force_vector = np.array([0, 40, -26.7, 0, 0]) I am trying to code static condensation. This allows me to reduce the size of the stiffness matrix above but for that I need to be able to divide the stiffness matrix into following matrices. krr = np.array([[1, 0, 0], [0, 0.12, 0.6], [0, 0.6, 4]]) krc = np.array([[-1, 0], [0, 0.6], [0, 2]]) kcr = np.array([[-1, 0, 0], [0, 0.6, 2]]) kcc = np.array([[1, 0], [0, 4]]) How would I do this ?
You need to divide your stiffness matrix into four submatrices based on the degrees of freedom you want to r and c. So to retain the first 3 rows/columns and condense the last 2 krr: The top left block krc: The top right block kcr: The bottom left block kcc: The bottom right block The code n_retain = 3 n_condense = 2 krr = stiffness_matrix[:n_retain, :n_retain] krc = stiffness_matrix[:n_retain, n_retain:] kcr = stiffness_matrix[n_retain:, :n_retain] kcc = stiffness_matrix[n_retain:, n_retain:] print("krr =\n", krr) print("\nkrc =\n", krc) print("\nkcr =\n", kcr) print("\nkcc =\n", kcc) submatrices krr = [[1. 0. 0. ] [0. 0.12 0.6 ] [0. 0.6 4. ]] krc = [[-1. 0. ] [ 0. 0.6] [ 0. 2. ]] kcr = [[-1. 0. 0. ] [ 0. 0.6 2. ]] kcc = [[1. 0.] [0. 4.]] Animations Credits - https://ocw.mit.edu/courses/1-571-structural-analysis-and-control-spring-2004/pages/readings/ <!DOCTYPE html> <html> <head> <style> body { font-family: Arial, sans-serif; margin: 20px; background-color: #f5f5f5; } .container { max-width: 800px; margin: 0 auto; background-color: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 10px rgba(0,0,0,0.1); } h1, h2 { color: #2c3e50; text-align: center; } .matrix-container { display: flex; justify-content: center; align-items: center; margin: 20px 0; flex-wrap: wrap; } .matrix { border: 2px solid #3498db; margin: 10px; padding: 5px; background-color: white; border-radius: 4px; transition: all 0.5s ease; } .matrix-row { display: flex; justify-content: center; } .matrix-cell { width: 40px; height: 40px; display: flex; align-items: center; justify-content: center; margin: 2px; font-weight: bold; transition: all 0.5s ease; } .retained { background-color: #2ecc71; color: white; } .condensed { background-color: #e74c3c; color: white; } .coupling { background-color: #f39c12; color: white; } .controls { display: flex; justify-content: center; margin: 20px 0; } button { background-color: #3498db; color: white; border: none; padding: 10px 20px; margin: 0 10px; border-radius: 4px; cursor: pointer; font-size: 16px; transition: background-color 0.3s; } button:hover { background-color: #2980b9; } button:disabled { background-color: #bdc3c7; cursor: not-allowed; } .explanation { margin: 20px 0; padding: 15px; background-color: #f8f9fa; border-left: 4px solid #3498db; border-radius: 4px; } .equation { font-family: 'Times New Roman', Times, serif; font-style: italic; text-align: center; margin: 15px 0; font-size: 18px; } .force-vector { display: flex; flex-direction: column; align-items: center; margin: 10px; } .force-cell { width: 40px; height: 30px; display: flex; align-items: center; justify-content: center; margin: 2px; font-weight: bold; border: 1px solid #3498db; background-color: #fff; transition: all 0.5s ease; } .progress { width: 100%; height: 8px; background-color: #ecf0f1; margin-bottom: 20px; border-radius: 4px; overflow: hidden; } .progress-bar { height: 100%; background-color: #3498db; width: 0%; transition: width 0.3s ease; } </style> </head> <body> <div class="container"> <h1>Static Condensation Animation</h1> <div class="progress"> <div class="progress-bar" id="progressBar"></div> </div> <div class="explanation" id="explanation"> <p>Static condensation is a technique used in structural analysis to reduce the size of the stiffness matrix by eliminating degrees of freedom (DOFs) that are not of primary interest.</p> <p>Click "Next" to walk through the process step by step.</p> </div> <div class="matrix-container" id="matrixDisplay"> <!-- Matrices will be displayed here --> </div> 
<div class="controls"> <button id="prevBtn" disabled>Previous</button> <button id="nextBtn">Next</button> <button id="resetBtn">Reset</button> </div> </div> <script> // Original stiffness matrix and force vector const stiffnessMatrix = [ [1, 0, 0, -1, 0], [0, 0.12, 0.6, 0, 0.6], [0, 0.6, 4, 0, 2], [-1, 0, 0, 1, 0], [0, 0.6, 2, 0, 4] ]; const forceVector = [0, 40, -26.7, 0, 0]; // Steps for the animation const steps = [ { title: "Original System", explanation: "We start with the full stiffness matrix K and force vector F. Our goal is to solve Ku = F for the displacement vector u.", showMatrices: ["K", "F"] }, { title: "Partition the System", explanation: "We partition the matrix into 4 submatrices: Krr (retained DOFs), Kcc (condensed DOFs), and Krc/Kcr (coupling terms). The force vector is similarly split into Fr and Fc.", showMatrices: ["K_partitioned", "F_partitioned"] }, { title: "Extract Submatrices", explanation: "We extract the four submatrices explicitly: Krr (3×3), Krc (3×2), Kcr (2×3), and Kcc (2×2).", showMatrices: ["Krr", "Krc", "Kcr", "Kcc", "Fr", "Fc"] }, { title: "Condensation Equations", explanation: "The static condensation process can be represented by these equations:\n\nCondensed stiffness matrix: K* = Krr - Krc × Kcc⁻¹ × Kcr\nCondensed force vector: F* = Fr - Krc × Kcc⁻¹ × Fc", showMatrices: ["equation"] }, { title: "Condensed System", explanation: "We now have a reduced system K* u_r = F* that only involves the retained DOFs. This smaller system is easier to solve.", showMatrices: ["K_condensed", "F_condensed"] }, { title: "Solve for Retained DOFs", explanation: "We solve the condensed system to find the values of the retained DOFs (u_r).", showMatrices: ["u_r"] }, { title: "Back-Calculate Condensed DOFs", explanation: "Once we have u_r, we can back-calculate the condensed DOFs (u_c) using: u_c = Kcc⁻¹ × (Fc - Kcr × u_r)", showMatrices: ["u_r", "u_c"] }, { title: "Complete Solution", explanation: "Finally, we have the complete solution vector u combining u_r and u_c. We've solved the full system by working with a reduced matrix!", showMatrices: ["u_full"] } ]; // Function to create matrix display function createMatrixDisplay(matrix, title, className = "") { const matrixDiv = document.createElement("div"); matrixDiv.className = `matrix ${className}`; const titleDiv = document.createElement("div"); titleDiv.textContent = title; titleDiv.style.textAlign = "center"; titleDiv.style.marginBottom = "5px"; matrixDiv.appendChild(titleDiv); for (let i = 0; i < matrix.length; i++) { const rowDiv = document.createElement("div"); rowDiv.className = "matrix-row"; if (Array.isArray(matrix[i])) { // It's a 2D matrix for (let j = 0; j < matrix[i].length; j++) { const cellDiv = document.createElement("div"); cellDiv.className = "matrix-cell"; cellDiv.textContent = typeof matrix[i][j] === 'number' ? matrix[i][j].toFixed(2).replace(/\.00$/, "") : matrix[i][j]; // Apply colors based on partitioning for the full matrix if (title === "K (partitioned)") { if (i < 3 && j < 3) cellDiv.classList.add("retained"); else if (i >= 3 && j >= 3) cellDiv.classList.add("condensed"); else cellDiv.classList.add("coupling"); } rowDiv.appendChild(cellDiv); } } else { // It's a vector const cellDiv = document.createElement("div"); cellDiv.className = "force-cell"; cellDiv.textContent = typeof matrix[i] === 'number' ? 
matrix[i].toFixed(2).replace(/\.00$/, "") : matrix[i]; // Apply colors for partitioned force vector if (title === "F (partitioned)") { if (i < 3) cellDiv.classList.add("retained"); else cellDiv.classList.add("condensed"); } rowDiv.appendChild(cellDiv); } matrixDiv.appendChild(rowDiv); } return matrixDiv; } // Function to create force vector display function createForceVector(vector, title, className = "") { const vectorDiv = document.createElement("div"); vectorDiv.className = `force-vector ${className}`; const titleDiv = document.createElement("div"); titleDiv.textContent = title; titleDiv.style.textAlign = "center"; titleDiv.style.marginBottom = "5px"; vectorDiv.appendChild(titleDiv); for (let i = 0; i < vector.length; i++) { const cellDiv = document.createElement("div"); cellDiv.className = "force-cell"; cellDiv.textContent = typeof vector[i] === 'number' ? vector[i].toFixed(2).replace(/\.00$/, "") : vector[i]; // Apply colors for partitioned force vector if (title === "F (partitioned)") { if (i < 3) cellDiv.classList.add("retained"); else cellDiv.classList.add("condensed"); } vectorDiv.appendChild(cellDiv); } return vectorDiv; } // Function to display an equation function createEquation(text) { const eqDiv = document.createElement("div"); eqDiv.className = "equation"; eqDiv.innerHTML = text; return eqDiv; } // Get DOM elements const matrixDisplay = document.getElementById("matrixDisplay"); const explanation = document.getElementById("explanation"); const prevBtn = document.getElementById("prevBtn"); const nextBtn = document.getElementById("nextBtn"); const resetBtn = document.getElementById("resetBtn"); const progressBar = document.getElementById("progressBar"); let currentStep = 0; // Submatrices const krr = [ [1, 0, 0], [0, 0.12, 0.6], [0, 0.6, 4] ]; const krc = [ [-1, 0], [0, 0.6], [0, 2] ]; const kcr = [ [-1, 0, 0], [0, 0.6, 2] ]; const kcc = [ [1, 0], [0, 4] ]; const fr = [0, 40, -26.7]; const fc = [0, 0]; // Solution (mockup values for demonstration) const ur = [0.78, 345.58, -9.50]; const uc = [-0.78, 1.49]; const uFull = [0.78, 345.58, -9.50, -0.78, 1.49]; // Condensed matrices (mockup values for demonstration) const kCondensed = [ [2, 0, 0], [0, 0.21, 0.6], [0, 0.6, 3.5] ]; const fCondensed = [0, 40, -26.7]; // Update display based on current step function updateDisplay() { // Update progress bar progressBar.style.width = `${(currentStep / (steps.length - 1)) * 100}%`; // Clear previous content matrixDisplay.innerHTML = ""; // Set explanation explanation.innerHTML = `<h2>${steps[currentStep].title}</h2><p>${steps[currentStep].explanation}</p>`; // Display relevant matrices steps[currentStep].showMatrices.forEach(matrixName => { switch (matrixName) { case "K": matrixDisplay.appendChild(createMatrixDisplay(stiffnessMatrix, "K (Stiffness Matrix)")); break; case "F": matrixDisplay.appendChild(createForceVector(forceVector, "F (Force Vector)")); break; case "K_partitioned": matrixDisplay.appendChild(createMatrixDisplay(stiffnessMatrix, "K (partitioned)")); break; case "F_partitioned": matrixDisplay.appendChild(createForceVector(forceVector, "F (partitioned)")); break; case "Krr": matrixDisplay.appendChild(createMatrixDisplay(krr, "Krr (retained)", "retained")); break; case "Krc": matrixDisplay.appendChild(createMatrixDisplay(krc, "Krc (coupling)", "coupling")); break; case "Kcr": matrixDisplay.appendChild(createMatrixDisplay(kcr, "Kcr (coupling)", "coupling")); break; case "Kcc": matrixDisplay.appendChild(createMatrixDisplay(kcc, "Kcc (condensed)", "condensed")); break; case "Fr": 
matrixDisplay.appendChild(createForceVector(fr, "Fr (retained)", "retained")); break; case "Fc": matrixDisplay.appendChild(createForceVector(fc, "Fc (condensed)", "condensed")); break; case "equation": const eqDiv = document.createElement("div"); eqDiv.style.width = "100%"; eqDiv.style.textAlign = "center"; const eq1 = document.createElement("div"); eq1.className = "equation"; eq1.innerHTML = "K* = K<sub>rr</sub> - K<sub>rc</sub> · K<sub>cc</sub><sup>-1</sup> · K<sub>cr</sub>"; const eq2 = document.createElement("div"); eq2.className = "equation"; eq2.innerHTML = "F* = F<sub>r</sub> - K<sub>rc</sub> · K<sub>cc</sub><sup>-1</sup> · F<sub>c</sub>"; eqDiv.appendChild(eq1); eqDiv.appendChild(eq2); matrixDisplay.appendChild(eqDiv); break; case "K_condensed": matrixDisplay.appendChild(createMatrixDisplay(kCondensed, "K* (Condensed Stiffness Matrix)")); break; case "F_condensed": matrixDisplay.appendChild(createForceVector(fCondensed, "F* (Condensed Force Vector)")); break; case "u_r": matrixDisplay.appendChild(createForceVector(ur, "ur (Retained DOFs)", "retained")); break; case "u_c": matrixDisplay.appendChild(createForceVector(uc, "uc (Condensed DOFs)", "condensed")); break; case "u_full": matrixDisplay.appendChild(createForceVector(uFull, "u (Complete Solution)")); break; } }); // Update button states prevBtn.disabled = currentStep === 0; nextBtn.disabled = currentStep === steps.length - 1; } // Event listeners prevBtn.addEventListener("click", () => { if (currentStep > 0) { currentStep--; updateDisplay(); } }); nextBtn.addEventListener("click", () => { if (currentStep < steps.length - 1) { currentStep++; updateDisplay(); } }); resetBtn.addEventListener("click", () => { currentStep = 0; updateDisplay(); }); // Initialize display updateDisplay(); </script> </body> </html>
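If the retained DOFs were ever non-contiguous, one hedged generalization of the slicing above (my own sketch, not part of the answer) is to select arbitrary index sets with np.ix_:
import numpy as np

stiffness_matrix = np.array([[1, 0, 0, -1, 0],
                             [0, 0.12, 0.6, 0, 0.6],
                             [0, 0.6, 4, 0, 2],
                             [-1, 0, 0, 1, 0],
                             [0, 0.6, 2, 0, 4]])

retain = [0, 1, 2]      # indices of u1, u2, q3
condense = [3, 4]       # indices of u4, q5

krr = stiffness_matrix[np.ix_(retain, retain)]
krc = stiffness_matrix[np.ix_(retain, condense)]
kcr = stiffness_matrix[np.ix_(condense, retain)]
kcc = stiffness_matrix[np.ix_(condense, condense)]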
2
2
79,469,031
2025-2-26
https://stackoverflow.com/questions/79469031/does-clickhouse-connect-get-client-return-a-new-client-instance-every-time
As the question mentions, does clickhouse_connect.get_client in the python client return a new client instance every time it is called? I can't seem to find if it is explicitly mentioned as such in the documentation, but it seems implied. I'm a little confused because of the name get_client (instead of say create_client).
Yes. You can find this in the package's __init__.py: from clickhouse_connect.driver import create_client, create_async_client driver_name = 'clickhousedb' get_client = create_client get_async_client = create_async_client Permalink to the line of importance.
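A quick way to confirm the behaviour yourself (a sketch; the host is a placeholder):
import clickhouse_connect

client_a = clickhouse_connect.get_client(host='localhost')
client_b = clickhouse_connect.get_client(host='localhost')

print(client_a is client_b)   # False -- each call constructs a fresh client

client_a.close()
client_b.close()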
2
2
79,465,328
2025-2-25
https://stackoverflow.com/questions/79465328/arrays-of-size-0-in-numpy
I need to work with arrays that can have zeros in their shapes. However, I am encountering an issue. Here's an example: import numpy as np arr = np.array([[]]) assert arr.shape == (1,0) arr.reshape((1,0)) # No problem (nothing changes) arr.reshape((-1,0)) # ValueError: cannot reshape array of size 0 into shape (0) I always thought that -1 for a reshape operation means the product of all the remaining dimensions, i.e., 1 in this case. Is this a bug, or am I not understanding how this should work?
If you read the documentation: One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions. As furas says, it can't automatically calculate the remaining dimension because of undefined division by 0. Any number times 0 is 0. arr.reshape((1,0)) # Works arr.reshape((3,0)) # Works too! arr.reshape((42,0)) # Works too! Arguably, the error message thrown by NumPy is not the clearest though, and it is factually wrong: arr.reshape(0) # Works as well So you actually can "reshape array of size 0 into shape (0)". And it should say something along the lines of "undefined reshape requested" instead... So it is not a "bug", but definitely a wart, IMO. It might be worth reporting if you feel like it :)
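The examples above, collected into one runnable snippet (nothing here goes beyond what the answer already states):
import numpy as np

arr = np.array([[]])                          # shape (1, 0), size 0

for shape in [(1, 0), (3, 0), (42, 0), (0,)]:
    print(shape, arr.reshape(shape).shape)    # all succeed: n * 0 == 0 for any n

try:
    arr.reshape((-1, 0))
except ValueError as e:
    print("(-1, 0) ->", e)                    # the inferred dimension would be 0 / 0, which is undefined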
10
11
79,467,944
2025-2-25
https://stackoverflow.com/questions/79467944/comparing-dataframes
The goal is to compare two pandas dataframes considering a margin of error. To reproduce the issue: Importing pandas import pandas as pd Case one - same data dataframes df1 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]}) df2 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]}) print(df1.compare(df2, result_names=('df1', 'df2'))) # The result is an empty dataframe Empty DataFrame Columns: [] Index: [] Case two - different data dataframes df1 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]}) df2 = pd.DataFrame({"A": [1,1,1], "B": [2,2.2,2], "C": [3,3,3]}) # Note that the second B value is 2.2 print(df1.compare(df2, result_names=('df1', 'df2'))) # The result is a dataframe showing differences B df1 df2 1 2.0 2.2 The issue is that I want it to only consider differences greater than 0.5. How I achieved it: threshold = 0.5 df3 = df1.melt().reset_index().merge(df2.melt().reset_index(), on="index") df3["diff"] = (df3["value_x"] - df3["value_y"]).abs() print(df3.loc[df3["diff"] > threshold]) # The result is an empty dataframe Empty DataFrame Columns: [index, variable_x, value_x, variable_y, value_y, diff] Index: [] Is there a better way to do this? It takes a lot of time for a huge DataFrame. Note: this is only a reproducible example. I am open to using other libraries such as NumPy.
Depending on your ultimate goal, assert_frame_equal with the atol parameter may work. from pandas.testing import assert_frame_equal # specify dtypes for the reproducible example # otherwise assert_frame_equal flags different dtypes (int vs. float) df1 = pd.DataFrame({"A": [1,1,1], "B": [2,2,2], "C": [3,3,3]}, dtype=float) df2 = pd.DataFrame({"A": [1,1,1], "B": [2,2.2,2], "C": [3,3,3]}, dtype=float) assert_frame_equal(df1, df2, atol=0.5)
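If the goal is to get the differing cells back rather than an assertion, a vectorized sketch with np.isclose is one option — my own addition, not part of the answer above; rtol is set to 0 so only the absolute threshold matters:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"A": [1, 1, 1], "B": [2, 2, 2], "C": [3, 3, 3]}, dtype=float)
df2 = pd.DataFrame({"A": [1, 1, 1], "B": [2, 2.2, 2], "C": [3, 3, 3]}, dtype=float)

threshold = 0.5
a, b = df1.to_numpy(), df2.to_numpy()
mask = ~np.isclose(a, b, atol=threshold, rtol=0)   # True where |df1 - df2| > threshold

rows, cols = np.nonzero(mask)
diff = pd.DataFrame({"row": df1.index[rows],
                     "column": df1.columns[cols],
                     "df1": a[rows, cols],
                     "df2": b[rows, cols]})
print(diff)   # empty here, because 0.2 <= 0.5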
2
2
79,467,071
2025-2-25
https://stackoverflow.com/questions/79467071/adjacency-matrix-not-square-error-from-square-dataframe-with-networkx
I have code that aims to generate a graph from an adjacency matrix from a table correlating workers with their manager. The source is a table with two columns (Worker, manager). It still works perfectly from a small mock data set, but fails unexpectedly with the real data: import pandas as pd import networkx as nx # Read input df = pd.read_csv("org.csv") # Create the input adjacency matrix am = pd.DataFrame(0, columns=df["Worker"], index=df["Worker"]) # This way, it is impossible that the dataframe is not square, # or that index and columns don't match # Fill the matrix for ix, row in df.iterrows(): am.at[row["manager"], row["Worker"]] = 1 # At this point, am.shape returns a square dataframe (2825,2825) # Generate the graph G = nx.from_pandas_adjacency(am, create_using=nx.DiGraph) This returns: NetworkXError: Adjacency matrix not square: nx,ny=(2825, 2829) And indeed, the dimensions reported in the error are not the same as in those of the input dataframe am. Does anyone have an idea of what happens in from_pandas_adjacency that could lead to this mismatch?
In: am = pd.DataFrame(0, columns=df["Worker"], index=df["Worker"]) # This way, it is impossible that the dataframe is not square, your DataFrame is indeed square, but when you later assign values in the loop, if you have a manager that is not in "Worker", this will create a new row: am.at[row["manager"], row["Worker"]] Better avoid the loop, use a crosstab, then reindex on the whole set of nodes: am = pd.crosstab(df['manager'], df['Worker']) nodes = am.index.union(am.columns) am = am.reindex(index=nodes, columns=nodes, fill_value=0) Even better, if you don't really need the adjacency matrix, directly create the graph with nx.from_pandas_edgelist: G = nx.from_pandas_edgelist(df, source='manager', target='Worker', create_using=nx.DiGraph) Example: # input df = pd.DataFrame({'manager': ['A', 'B', 'A'], 'Worker': ['D', 'E', 'F']}) # adjacency matrix A B D E F A 0 0 1 0 1 B 0 0 0 1 0 D 0 0 0 0 0 E 0 0 0 0 0 F 0 0 0 0 0 # adjacency matrix with your code Worker D E F Worker D 0.0 0.0 0.0 E 0.0 0.0 0.0 F 0.0 0.0 0.0 A 1.0 NaN 1.0 # those rows are created B NaN 1.0 NaN # after initializing am Graph:
2
1
79,465,562
2025-2-25
https://stackoverflow.com/questions/79465562/pybind11-multiple-definition-of-pyinit-module-name
Solved! - Please check the answer. I wrote a library where headers and python bindings are auto-generated. For example dummy_bind.cpp for dummy_message.h and each _bind.cpp file has PYBIND11_MODULE call in it for their specific class. There are dozens of other _bind.cpp files for other headers. What should be the module name for each file when calling the PYBIND11_MODULE like: PYBIND11_MODULE(protocol_name, m) { /// … } If I use protocol_name in each PYBIND11_MODULE(protocol_name, m) call, when compiling I get multiple definition error like: multiple definition of PyInit_protocol_name. If I generate special module name for each message like PYBIND11_MODULE(protocol_name_dummy, m) the extension is compiled but I think I need to import each module one by one which is not viable. Should I do all exports inside a single PYBIND11_MODULE call? Thanks in advance.
I've actually solved this by generating proxy functions in _bind.cpp files. For instance, in message1_bind.cpp I've defined a function void init_message1(pybind11::module& m), and then in main_bind.cpp I call them all inside PYBIND11_MODULE(protocol_name, m), so I have only one PYBIND11_MODULE() call. Here's a minimal example to describe the method better: message1_bind.cpp: #include <pybind11/pybind11.h> #include "messages/message_1.h" void init_message1(pybind11::module& m) { pybind11::class_<protocol_name::Message1>(m, "Message1").def(pybind11::init<>()); m.def("serialize_message1", &protocol_name::serialize_message_1); m.def("deserialize_message1", &protocol_name::deserialize_message_1); } And then inside main_bind.cpp #include <pybind11/pybind11.h> #include "message1_bind.cpp" /// ... all other includes PYBIND11_MODULE(protocol_name, m) { init_message1(m); /// init_message2(m) and so on.. }
2
3
79,489,702
2025-3-6
https://stackoverflow.com/questions/79489702/is-there-a-numpy-method-or-function-to-split-an-array-of-uint64-into-two-arrays
Say I have an array as follows: arr = np.asarray([1, 2, 3, 4294967296, 100], dtype=np.uint64) I now want two arrays, one array with the lower 32 bits of every element, and one with the upper 32 bits of every element, preferably by using views and minimizing copies, to get something like this: upper = np.array([0, 0, 0, 1, 0], dtype=np.uint32) lower = np.array([1, 2, 3, 0, 100], dtype=np.uint32) I tried the following: lower = arr.view() & 0xFFFFFFFF upper = np.bitwise_right_shift(arr.view(), 32) But this results in a copy for the upper bits due to the bitshift, and both arrays are still of type uint64. Are there further optimizations I can try or am I out of luck and need to eat up the extra copies?
You can use structured arrays to split uint64 into two uint32 views without copying: # Create a structured view of the array (assuming little-endian system) view = arr.view(dtype=np.dtype([('lower', np.uint32), ('upper', np.uint32)])) # Extract views lower = view['lower'] upper = view['upper'] This creates memory views not copies, and preserves the uint32 dtypes. Alternative using views: This alternative also creates views without copying data, but my benchmarks show it can be significantly faster than the structured dtype approach. # View the uint64 array as uint32 (each uint64 becomes two uint32) arr_u32 = arr.view(np.uint32) # Extract the lower and upper 32 bits # For little-endian systems, index 0 is lower bits and index 1 is upper bits lower, upper = arr_u32[..., 0], arr_u32[..., 1]
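A quick check of the structured-view approach against the sample array from the question (assuming a little-endian machine):
import numpy as np

arr = np.asarray([1, 2, 3, 4294967296, 100], dtype=np.uint64)
view = arr.view(np.dtype([('lower', np.uint32), ('upper', np.uint32)]))
lower, upper = view['lower'], view['upper']

print(lower)                          # [  1   2   3   0 100]
print(upper)                          # [0 0 0 1 0]
print(lower.dtype, upper.dtype)       # uint32 uint32
print(np.shares_memory(lower, arr))   # True -> no copy was made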
5
10
79,488,683
2025-3-6
https://stackoverflow.com/questions/79488683/how-to-avoid-object-has-no-attribute-isalive-error-while-debugging-in-intell
I am writing a simple project in Python. My version of Python is 3.13.1. I am using IntelliJ and the Python plugin with version 241.18034.62. I would like to debug my project, but when I try to debug I am getting many errors: AttributeError: '_MainThread' object has no attribute 'isAlive'. Did you mean: 'is_alive'? bigger part of the stacktrace: C:\projects\mat\venv\Scripts\python.exe -X pycache_prefix=C:\Users\mylogin\AppData\Local\JetBrains\IntelliJIdea2024.1\cpython-cache C:/Users/mylogin/AppData/Roaming/JetBrains/IntelliJIdea2024.1/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=pyqt5 --client 127.0.0.1 --port 4095 --file C:\projects\mat\org\aa\aa\mat\delivery_processor.py Connected to pydev debugger (build 241.18034.62) Traceback (most recent call last): File "C:\Users\mylogin\AppData\Roaming\JetBrains\IntelliJIdea2024.1\plugins\python\helpers\pydev\_pydevd_bundle\pydevd_pep_669_tracing.py", line 238, in py_start_callback if not is_thread_alive(thread): ~~~~~~~~~~~~~~~^^^^^^^^ File "C:\Users\mylogin\AppData\Roaming\JetBrains\IntelliJIdea2024.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_is_thread_alive.py", line 18, in is_thread_alive return t.isAlive() ^^^^^^^^^ AttributeError: 'WriterThread' object has no attribute 'isAlive'. Did you mean: 'is_alive'? From what I have understood there is some version mismatch. I have tried to change the debugger properties: But none of these helped. How should I set up my IntelliJ environment to be able to debug? I would like to avoid downgrading the Python version.
The path in the error message suggests that the mistake is in the plugin, not in your code. It looks like code written for Python 2, which had the isAlive() method. For the moment you can try to fix it yourself: open C:\Users\mylogin\AppData\Roaming\JetBrains\IntelliJIdea2024.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_is_thread_alive.py and change isAlive into is_alive, and it may help in this place. But if the plugin was written for Python 2, it may have more problems in other places. You could also report this problem to the author(s) of the plugin.
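For illustration, the edit amounts to something like this — only the return line is taken from the traceback in the question; the surrounding function body in the real helper file may differ:
# _pydev_bundle/pydev_is_thread_alive.py (inside the IntelliJ/PyCharm helpers)

def is_thread_alive(t):
    # was: return t.isAlive()    # Thread.isAlive() was removed in Python 3.9
    return t.is_alive()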
3
2
79,497,967
2025-3-10
https://stackoverflow.com/questions/79497967/is-there-a-callable-for-generating-ids-with-pytest-fixtureparam-the-same
I'm using parametrized fixtures but I don't find the way ids are generated practical. I'd like to fall back on the way it's generated when using pytest.mark.parametrize. I've seen that it's possible to provide a callable as the ids keyword argument in pytest.fixture (and it works), but I was wondering if there was already an implemented callable that could serve this specific purpose. Is there some internal I could replace get_id with? I include a MRE for illustrating my issue below. MRE: test_ids.py: import pytest def add3(a, b, c): return a + b + c @pytest.mark.parametrize("a,b,c", [ (1, 2, 3), (4, 5, 6), ]) def test_add_with_parametrize(a, b, c): assert a + b + c == add3(a, b, c) @pytest.fixture(params=[(1, 2, 3), (4, 5, 6)]) def parametrized_fixture(request): return request.param def test_add_with_parametrized_fixture(parametrized_fixture): a, b, c = parametrized_fixture assert a + b + c == add3(a, b, c) def get_id(val): return f"{val!r}" @pytest.fixture(params=[(1, 2, 3), (4, 5, 6)], ids=get_id) def parametrized_fixture_bis(request): return request.param def test_add_with_parametrized_fixture_bis(parametrized_fixture_bis): a, b, c = parametrized_fixture_bis assert a + b + c == add3(a, b, c) Output: pytest -v ============================= test session starts ============================= platform linux -- Python 3.11.11, pytest-8.3.5, pluggy-1.5.0 -- /home/vmonteco/.pyenv/versions/3.11.11/envs/3.11_pytest/bin/python cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRE_pytest_ids collected 6 items test_ids.py::test_add_with_parametrize[1-2-3] PASSED [ 16%] test_ids.py::test_add_with_parametrize[4-5-6] PASSED [ 33%] test_ids.py::test_add_with_parametrized_fixture[parametrized_fixture0] PASSED [ 50%] test_ids.py::test_add_with_parametrized_fixture[parametrized_fixture1] PASSED [ 66%] test_ids.py::test_add_with_parametrized_fixture_bis[(1, 2, 3)] PASSED [ 83%] test_ids.py::test_add_with_parametrized_fixture_bis[(4, 5, 6)] PASSED [100%] ============================== 6 passed in 0.01s ==============================
The change you observe in test ids generation isn't due to the use of parametrized fixtures, but to the way pytest generates these ids depending on the parameters types: Numbers, strings, booleans and None will have their usual string representation used in the test ID. For other objects, pytest will make a string based on the argument name: In your MRE, while you were passing parameters as integers in your first test, you're now passing them as a tuple in your second one, so the id is now based on the argument name as stated above. def test_add_with_parametrize(a: int, b: int, c: int): ... def test_add_with_parametrized_fixture(parametrized_fixture: tuple): ... For types that aren't numbers, strings, booleans or None, you can either provide a callable to generate these ids as you mentioned in your post or you can override the ids generation behavior with the pytest_make_parametrize_id hook in conftest.py def pytest_make_parametrize_id(config, val, argname): return f"{val!r}" test_ids.py: import pytest def add3(a, b, c): return a + b + c @pytest.mark.parametrize("a,b,c", [ (1, 2, 3), (4, 5, 6), ]) def test_add_with_parametrize(a, b, c): assert a + b + c == add3(a, b, c) @pytest.fixture(params=[(1, 2, 3), (4, 5, 6)]) def parametrized_fixture(request): return request.param def test_add_with_parametrized_fixture(parametrized_fixture): a, b, c = parametrized_fixture assert a + b + c == add3(a, b, c) @pytest.fixture(params=[(1, 2, 3), (4, 5, 6)]) def parametrized_fixture_bis(request): return request.param def test_add_with_parametrized_fixture_bis(parametrized_fixture_bis): a, b, c = parametrized_fixture_bis assert a + b + c == add3(a, b, c) Output test_ids.py::test_add_with_parametrize[1-2-3] PASSED [ 16%] test_ids.py::test_add_with_parametrize[4-5-6] PASSED [ 33%] test_ids.py::test_add_with_parametrized_fixture[(1, 2, 3)] PASSED [ 50%] test_ids.py::test_add_with_parametrized_fixture[(4, 5, 6)] PASSED [ 66%] test_ids.py::test_add_with_parametrized_fixture_bis[(1, 2, 3)] PASSED [ 83%] test_ids.py::test_add_with_parametrized_fixture_bis[(4, 5, 6)] PASSED [100%]
2
1
79,483,002
2025-3-4
https://stackoverflow.com/questions/79483002/numpy-ndarray-object-has-no-attribute-groupby
I am trying to apply target encoding to categorical features using the category_encoders.TargetEncoder in Python. However, I keep getting the following error: AttributeError: 'numpy.ndarray' object has no attribute 'groupby' from category_encoders import TargetEncoder from sklearn.model_selection import train_test_split # Features for target encoding encoding_cols = ['grade', 'sub_grade', 'home_ownership', 'verification_status', 'purpose', 'application_type', 'zipcode'] # Train-Test Split X_train_cv, X_test, y_train_cv, y_test = train_test_split(x, y, test_size=0.25, random_state=1) X_train, X_test_cv, y_train, y_test_cv = train_test_split(X_train_cv, y_train_cv, test_size=0.25, random_state=1) # Initialize the Target Encoder encoder = TargetEncoder() # Apply Target Encoding for i in encoding_cols: X_train[i] = encoder.fit_transform(X_train[i], y_train) # **Error occurs here** X_test_cv[i] = encoder.transform(X_test_cv[i]) X_test[i] = encoder.transform(X_test[i]) I want to successfully apply target encoding to the categorical columns without encountering the 'numpy.ndarray' object has no attribute 'groupby' error.
This is interesting. I can reproduce your error. It is related to the dtype. To solve the issue you need to force a conversion using its list values and set the name and index explicitly. y_train = pd.Series(y_train.tolist(), name='loan_status', index=y_train.index) This will convert your initial dtype of CategoricalDtype(categories=[1, 0], ordered=False, categories_dtype=int64) to dtype('int64') So your last cell in the Colab is now: # Initialize TargetEncoder encoder = ce.TargetEncoder(cols=encoding_cols) # Here is the list conversion and back to series y_train = pd.Series(y_train.tolist(), index=y_train.index) # Fit and transform the training data X_train = encoder.fit_transform(X_train, y_train) and this works fine.
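For illustration, the point is the categorical dtype of y_train; the astype variant below is my own assumption of an equivalent conversion, not something verified against the notebook in the question:
import pandas as pd

y_cat = pd.Series([1, 0, 1, 0], dtype="category", name="loan_status")
print(y_cat.dtype)     # category

y_int = pd.Series(y_cat.tolist(), name=y_cat.name, index=y_cat.index)
print(y_int.dtype)     # int64

y_int2 = y_cat.astype("int64")   # likely-equivalent one-liner (assumption)
print(y_int2.dtype)    # int64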
1
2
79,496,308
2025-3-9
https://stackoverflow.com/questions/79496308/how-can-i-handle-initial-settings-with-pydantic-settings
I have an app that is largely configured by environment variables. I use Pydantic Settings to define the settings available, and validate them. I have an initial set of settings, and the regular app settings. The initial settings are ones that should not fail validation, and contain essential settings for starting the app. For example, when my app starts up, if the regular Settings() can't be initialized because something in them failed validation, I still want to be able to send the error to Sentry. For that, I need SENTRY_DSN to configure Sentry. SENTRY_DSN can't be part of the regular settings, because if something unrelated in Settings fails validation, I won't have access to SENTRY_DNS either. Right now, my settings look like this: class InitialSettings(BaseSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="ignore", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) SENTRY_DSN: Annotated[ Optional[str], Field(None), ] class Settings(BaseSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="ignore", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) STORAGE: Annotated[ LocalStorageSettings | S3StorageSettings, Field(..., discriminator="STORAGE_TYPE"), ] DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)] ... This works. When my app starts up, I first initialize InitialSettings(), and then try to initialize Settings(). If Settings() fails, I can still use the SENTRY_DSN setting to send the error to Sentry. The issues comes when I try to have both settings use the same env file (settings.env), AND enable the extra="forbid" feature on Settings(). I like the idea of having extra="forbid" enabled, but that also means that if I enable it on Settings(), it will always fail, because the env file will contain an entry for SENTRY_DSN, which Settings doesn't know about. To fix, this I tried to add InitialSettings to Settings like this: class Settings(BaseSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="forbid", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) STORAGE: Annotated[ LocalStorageSettings | S3StorageSettings, Field(..., discriminator="STORAGE_TYPE"), ] DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)] INIT: Annotated[InitialSettings, Field(default_factory=InitialSettings)] ... Now Settings should know about all the settings defined in InitialSettings, and if there's any extra settings in the env file that aren't defined in either class, it should fail. This almost works. The problem is that when you call InitialSettings, the SENTRY_DSN in the env file is expected to just be called SENTRY_DSN. When you call InitialSettings, because InitialSettings is nested under INIT, it expects the sentry variable to be called INIT__SENTRY_DSN. How do I configure Pydantic Settings so that all settings under InitialSettings always look for SENTRY_DSN, no matter if they are initialized using InitialSettings(), or Settings()? Note: I still want the other nested settings classes under Settings, like STORAGE, to work the same - be prefixed with STORAGE__ in the env file.
The simplest solution is to use two different .env files: one for InitialSettings and one for Settings: # initial.env SENTRY_DSN=... # settings.env # your other `Settings` envs here without `SENTRY_DSN` class InitialSettings(BaseSettings): model_config = SettingsConfigDict( env_file="initial.env", env_file_encoding="utf-8", extra="ignore", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) SENTRY_DSN: Annotated[ Optional[str], Field(None), ] class Settings(CustomSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="forbid", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) STORAGE: Annotated[ LocalStorageSettings | S3StorageSettings, Field(..., discriminator="STORAGE_TYPE"), ] DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)] INIT: Annotated[InitialSettings, Field(default_factory=InitialSettings)] ... Using two separate files with different environment variables in your case seems logical. You have two independent classes of settings, with different extra settings, so different validation will be performed. In Settings extra environment variables will throw a validation error, while InitialSettings will not. But, most importantly, using a separate file with environment variables will allow InitialSettings to initialize correctly both inside the Settings class and separately. Further, I'm assuming that you can't use multiple .env files, for whatever reason, and that you plan to leave the setting - extra=“forbid” in Settings. Based on this, I can suggest two solutions. The first solution is to use the INIT prefix in your settings.env for SENTRY_DSN. This would look something like this: # setting.env INIT__SENTRY_DSN=... # other envs Then your settings classes will look like this: class InitialSettings(BaseSettings): model_config = SettingsConfigDict( env_prefix="INIT__", env_file="settings.env", env_file_encoding="utf-8", extra="ignore", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) SENTRY_DSN: Annotated[ Optional[str], Field(None), ] class Settings(BaseSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="forbid", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) STORAGE: Annotated[ LocalStorageSettings | S3StorageSettings, Field(..., discriminator="STORAGE_TYPE"), ] DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)] INIT: Annotated[InitialSettings, Field(default_factory=InitialSettings)] ... In this case, because we set SettingsConfigDict(env_prefix=“INIT__”, ...) in InitialSettings, both classes of settings: will look for one environment variable INIT__SENTRY_DSN in the settings.env file. Both classes will properly initialize SENTRY_DSN from INIT__SENTRY_DSN. Another possible solution could be to override settings_customize_sources by creating a subclass of BaseSettings and add its own DotEnvSettingsSource handler: # setting.env SENTRY_DSN=... 
# other envs from typing import Annotated, Optional from pydantic import Field from pydantic_settings import ( BaseSettings, DotEnvSettingsSource, PydanticBaseSettingsSource, SettingsConfigDict, ) class CustomDotEnvSettingsSource(DotEnvSettingsSource): def __call__(self): # This method may actually look simpler, since `InitialSettings` # can be initialized using an `settings.env` file # (but I like the option below better, it's more explicit): # def __call__(self): # result = super().__call__() # result.pop('SENTRY_DSN', None) # return result result = super().__call__() if 'SENTRY_DSN' in result: value = result.pop('SENTRY_DSN') result.setdefault('INIT', {})['SENTRY_DSN'] = value return result class CustomSettings(BaseSettings): @classmethod def settings_customise_sources( cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource, ) -> tuple[PydanticBaseSettingsSource, ...]: return super().settings_customise_sources( settings_cls=settings_cls, init_settings=init_settings, env_settings=env_settings, # register custom `DotEnvSettingsSource` handler dotenv_settings=CustomDotEnvSettingsSource(settings_cls), file_secret_settings=file_secret_settings, ) class InitialSettings(BaseSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="ignore", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) SENTRY_DSN: Annotated[ Optional[str], Field(None), ] class Settings(CustomSettings): model_config = SettingsConfigDict( env_file="settings.env", env_file_encoding="utf-8", extra="forbid", env_ignore_empty=True, env_nested_delimiter="__", case_sensitive=True, ) STORAGE: Annotated[ LocalStorageSettings | S3StorageSettings, Field(..., discriminator="STORAGE_TYPE"), ] DEBUG: Annotated[DebugSettings, Field(default_factory=DebugSettings)] INIT: Annotated[InitialSettings, Field(default_factory=InitialSettings)] ... All three methods described in this answer have been tested and should work for you.
1
1
79,495,237
2025-3-9
https://stackoverflow.com/questions/79495237/cumulative-elementwise-sum-by-python-polars
I have a weight vector: weight_vec = pl.Series("weights", [0.125, 0.0625, 0.03125]) And also a DataFrame containing up to m variables. For simplicity, we will only have two variables: df = pl.DataFrame( { "row_index": [0, 1, 2, 3, 4], "var1": [1, 2, 3, 4, 5], "var2": [6, 7, 8, 9, 10], } ) The size (number of observations) for these variables can be very large (tens of millions of rows). I would like to: For each variable, and each observation x_i, where i is the row index [0,...,4], transform the value of x_i into the sum-product of the n values starting at the current one, [x_i,...,x_{i+n-1}], and the weight vector. n is the length of the given weight vector, and n varies for different weight vector definitions. Numerically, the value of var1 at observation index 0 is the sum-product of the values [x_0, x_1, x_2] and all the values of the weight vector. When the row index approaches the end (i.e., max index - row index + 1 < n), all the values will be assigned None. We can assume that the height of the DataFrame is always larger than or equal to the length of the weight vector, so there is at least one valid result. The resulting DataFrame should look like this: shape: (5, 3) ┌───────────┬─────────┬─────────┐ │ row_index ┆ var1 ┆ var2 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ f64 ┆ f64 │ ╞═══════════╪═════════╪═════════╡ │ 0 ┆ 0.34375 ┆ 1.4375 │ │ 1 ┆ 0.5625 ┆ 1.65625 │ │ 2 ┆ 0.78125 ┆ 1.875 │ │ 3 ┆ null ┆ null │ │ 4 ┆ null ┆ null │ └───────────┴─────────┴─────────┘ Numeric calculations: x_0_var1: (0.125 * 1 + 0.0625 * 2 + 0.03125 * 3 = 0.34375) x_2_var2: (0.125 * 8 + 0.0625 * 9 + 0.03125 * 10 = 1.875) I am looking for a memory-efficient, vectorized Polars operation to achieve such results.
Here is a solution that uses rolling. import numpy as np weight_vec_len: int = weight_vec.len() period = f"{weight_vec_len}i" df.rolling("row_index", period=period, offset=f"-1i").agg( pl.col(r"^var\d$") .extend_constant(np.nan, weight_vec_len - pl.len()) .dot(weight_vec) .fill_nan(None) .name.keep() ) shape: (5, 3) ┌───────────┬─────────┬─────────┐ │ row_index ┆ var1 ┆ var2 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ f64 ┆ f64 │ ╞═══════════╪═════════╪═════════╡ │ 0 ┆ 0.34375 ┆ 1.4375 │ │ 1 ┆ 0.5625 ┆ 1.65625 │ │ 2 ┆ 0.78125 ┆ 1.875 │ │ 3 ┆ null ┆ null │ │ 4 ┆ null ┆ null │ └───────────┴─────────┴─────────┘
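An alternative sketch (my own, not part of the answer): Expr.rolling_sum accepts a weights argument, so the same result may be obtainable with a backward-looking weighted window shifted forward. The weight order within the window (oldest value first) is an assumption here, so the output is worth checking against the expected frame in the question.
import polars as pl

weight_vec = pl.Series("weights", [0.125, 0.0625, 0.03125])
df = pl.DataFrame({
    "row_index": [0, 1, 2, 3, 4],
    "var1": [1, 2, 3, 4, 5],
    "var2": [6, 7, 8, 9, 10],
})

n = weight_vec.len()
out = df.with_columns(
    pl.col(r"^var\d$")
      .cast(pl.Float64)
      .rolling_sum(window_size=n, weights=weight_vec.to_list())
      .shift(-(n - 1))    # re-align each trailing window onto its starting row
)
print(out)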
1
2
79,490,519
2025-3-6
https://stackoverflow.com/questions/79490519/raising-error-in-function-task-parallelized-with-ray
Starting to try to use Ray to parallelize a number of task-parallel jobs. I.e. each task takes in an object from a data frame, and then returns a list. Within the function, there is a check for a property of the object though, and if that property if fulfilled I want the task to be canceled gracefully. (I know one could hack around it by setting the retries per task to 0, and then set the number of permitted task failures to infinity) The structure of the function is sort of like this: import ray import numpy as np ray.init() @ray.remote def test_function(x): if np.random.rand() < 0.5: raise Exception("blub") return [x, x*x, "The day is blue"] futures = [test_function.remote(i) for i in range(10000)] print(ray.get(futures)) Is there a best-practice, or a graceful way to let the individual tasks fail?
reproducible One way to reproduce the effect with toy data would be to take sha1(i) as i ranges from zero to thirty-five thousand. Mask off some low order bits, and if result is "small" report error, else success. Using modulo on that can also be convenient. When I run it across all cores Ray reattempts failed tasks 3 times, and after 24 retries across the entire job, it cancels the entire job-queue. Out of 35k samples in the dataset, this affects ~1k samples. Ok, now your Question makes sense. You're explaining that, for the current Ray configuration, it is unacceptable for any of those one thousand samples to report an error which is seen by Ray. So wrap your function with a try / except, log what happened so you can chase it down later, and return successfully so Ray won't retry. You can do that by making the function a little longer, or by writing a @suppress_error decorator that accomplishes the same thing. def suppress_error(func): def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except Exception as e: diagnostic = f"Error in {func.__name__}: {e}" logger = logging.getLogger(__name__) logger.warning(diagnostic) print(diagnostic, file=sys.stderr) return [] # zero result rows return wrapper Presumably you adopted the "3 retries" setting in hopes of dealing with transient webserver TCP timeouts or perhaps the occasionally full filesystem. Consider defining your "expensive" function such that it always serializes its results to a unique filename in some folder, writing zero or more result rows. Upon failure, write zero rows. Then a 2nd-phase task can come along to aggregate those results. one could hack around it by setting the retries per task to 0, and then set the number of permitted task failures to infinity) It's unclear why you didn't adopt that "no code" solution, since changing the config sounds easier than changing the code. Maybe there is still transient webserver timeouts you're still worried about? More generally, there are assumptions in your code + config which impact whether an analysis job will ever complete, and their implications are not immediately apparent. Write down, in your source code repository, the assumptions, their implications, and observed results such as timings. This will help future maintenance engineers, such as yourself, to better reason about "what is good?" in the current setup, and what might be changed to improve the setup. I'm hesitant to adopt the "no code" solution as while this configuration might work on one cluster, it is highly uncertain if it still works on another cluster. Ray has well documented retry behavior, which is triggered by your application code raising an error which Ray sees. It is important for your application to behave correctly, raising visible errors only when appropriate. Interpose an error handling layer if needed, to impedance match between the app's observed behavior and Ray's documented response to its behavior. If you have app1 and app2 running on various clusters, you don't have to change a global config. The app1 behavior can be adjusted in this way: future1 = test_function.remote(i) # original error behavior future2 = test_function.options(num_retries=0).remote(i)
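A minimal end-to-end sketch combining such a guard with the toy task from the question (hypothetical wiring on my part; the point is that @ray.remote must be the outermost decorator so the exception is swallowed before Ray ever sees it):
import functools
import sys

import numpy as np
import ray

def suppress_error(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            print(f"Error in {func.__name__}: {e}", file=sys.stderr)
            return []                       # zero result rows signal "failed"
    return wrapper

ray.init()

@ray.remote                                 # outermost: Ray wraps the already-guarded function
@suppress_error
def test_function(x):
    if np.random.rand() < 0.5:
        raise Exception("blub")
    return [x, x * x, "The day is blue"]

futures = [test_function.remote(i) for i in range(100)]
results = [r for r in ray.get(futures) if r]    # drop the empty "failed" results
print(len(results), "tasks succeeded")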
2
0
79,497,191
2025-3-10
https://stackoverflow.com/questions/79497191/when-using-mysql-connector-aio-how-do-we-enable-connection-pooling-assuming-it
I am trying to port my old mysql-connector code to use the asyncio libraries provided by MySQL. When I tried to run it, it said it didn't recognize pool_name and pool_size. The documentation doesn't explicitly state that pooling is not supported. AIOMysql does support pooling. But I was also thinking: if I am running on a single thread, why would I need connection pooling? Maybe that's why it isn't explicitly supported by the MySQL AIO drivers? There's a forum question, but it doesn't really address whether connection pooling is needed or not. https://stackoverflow.com/a/66222924/242042 seems to indicate that connection pooling isn't worth it, but that could be specific to AIOMysql.
First, as regards to whether mysql.connector.aio supports connection pooling or not, the following is a portion of that package's connect function (Python 3.12.2): async def connect(*args: Any, **kwargs: Any) -> MySQLConnectionAbstract: """Creates or gets a MySQL connection object. In its simpliest form, `connect()` will open a connection to a MySQL server and return a `MySQLConnectionAbstract` subclass object such as `MySQLConnection` or `CMySQLConnection`. When any connection pooling arguments are given, for example `pool_name` or `pool_size`, a pool is created or a previously one is used to return a `PooledMySQLConnection`. Args: *args: N/A. **kwargs: For a complete list of possible arguments, see [1]. If no arguments are given, it uses the already configured or default values. Returns: A `MySQLConnectionAbstract` subclass instance (such as `MySQLConnection` or a `CMySQLConnection`) instance. Examples: A connection with the MySQL server can be established using either the `mysql.connector.connect()` method or a `MySQLConnectionAbstract` subclass: ``` >>> from mysql.connector.aio import MySQLConnection, HAVE_CEXT >>> >>> cnx1 = await mysql.connector.aio.connect(user='joe', database='test') >>> cnx2 = MySQLConnection(user='joe', database='test') >>> await cnx2.connect() >>> >>> cnx3 = None >>> if HAVE_CEXT: >>> from mysql.connector.aio import CMySQLConnection >>> cnx3 = CMySQLConnection(user='joe', database='test') ``` References: [1]: https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html """ The relevant comment is: When any connection pooling arguments are given, for example pool_name or pool_size, a pool is created or a previously one is used to return a PooledMySQLConnection. Note that pool_name is explicitly mentioned. Yet: >>> import asyncio >>> async def test(): ... from mysql.connector.aio import connect ... conn = await connect(pool_name = 'some_name') ... >>> asyncio.run(test()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python312\Lib\asyncio\runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\asyncio\runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\asyncio\base_events.py", line 685, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "<stdin>", line 3, in test File "C:\Python312\Lib\site-packages\mysql\connector\aio\__init__.py", line 162, in connect cnx = MySQLConnection(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: MySQLConnectionAbstract.__init__() got an unexpected keyword argument 'pool_name' >>> Notwithstanding the comment in the code, this is strong evidence that connection pooling cannot be used asynchronously with this package. As for the post at https://stackoverflow.com/a/66222924/242042, the OP states, "Turns out connection pooling are (sic) not worth it. Connection pooling can cause mysql8 alter table to lock forever." The OP provides no reference for the "alter table problem" and I could not find any, which does not mean that the problem does not exist. But as far as connection pooling being "worth it", opening and closing connections require network activity and you do save something by only opening a connection once and reusing it. But there is a second reason for using connection pooling. You asked, "... if I am running on a single thread, why would I need connection pooling?" 
For the same reason you might want to use connection pooling if you were running instead multiple threads needing to do database access. Pooling not only allows you to readily reuse connections, it is also a mechanism to limit the number of connections that can be created even if it means potentially forcing a task that wants a connection to wait for a free connection because all connections in the pool are currently being used. You could use aiomysql, but it is based on PyMySql and not mysql.connector. Update Here is the general idea for creating an asynchronous connection pool using the mysql.connector.aio package. Just remember not to call close on connections obtained using this pool. import asyncio from mysql.connector.aio import connect class AsyncMySQLConnectionPool: def __init__(self, pool_size, **db_config): self._pool_size = pool_size self._db_config = db_config self._pool = asyncio.Queue() self._connections = [] self._connection_count = 0 # Track live connections async def get_connection(self): """Get a connection from the pool.""" # See if there is an immediately available connection: try: conn = self._pool.get_nowait() except asyncio.QueueEmpty: pass else: return conn # Here if there are no immediately available pool connections if self._connection_count < self._pool_size: # We must increment first since we might have a task switch # trying to acquire a new connectionL self._connection_count += 1 conn = await connect(**self._db_config) self._connections.append(conn) return conn # pool size is at its maximum size, so we may have to # wait a while: return await self._pool.get() async def release_connection(self, conn): """Returns a connection to the pool.""" # But first do a rollback: await conn.rollback() await conn.cmd_reset_connection() await self._pool.put(conn) async def close(self): """Closes all connections in the pool.""" # Empty the pool of any connections that have been returned. # Ideally, this should be all the connections. while not self._pool.empty(): self._pool.get_nowait() # Now shutdown all of the connections while self._connections: conn = self._connections.pop() self._connection_count -= 1 # close() can and usually does result in stack traces being # printed on stderr even though no exception is raised. 
Better # to use shutdown, which does not try to send a QUIT command: await conn.shutdown() if __name__ == '__main__': async def main(): USER = 'xxxxxxxx' PASSWORD = 'xxxxxxxx' # Create a connection pool with a max size of 3 POOL_SIZE = 3 pool = AsyncMySQLConnectionPool( pool_size=POOL_SIZE, host="localhost", user=USER, password=PASSWORD, database="test" ) # Fill up the pool to max size: connections = [ await pool.get_connection() for _ in range(POOL_SIZE) ] # Release them all: for conn in connections: await pool.release_connection(conn) for _ in range(6): conn = await pool.get_connection() cursor = await conn.cursor() await cursor.execute("SELECT 1") print(id(conn), await cursor.fetchone()) await cursor.close() await pool.release_connection(conn) await pool.close() # Demonstrate that when a connection is returned to the pool # with an uncommitted transaction that it is rolled back # Create a pool with only one connection so that the same connection # is always used: pool = AsyncMySQLConnectionPool( pool_size=1, host="localhost", user=USER, password=PASSWORD, database="test" ) conn = await pool.get_connection() cursor = await conn.cursor(dictionary=True) await cursor.execute('select * from test where id = 1') print(f'\nBefore update:\n{await cursor.fetchone()}') await cursor.execute('update test set value = 20.0 where id = 1') await cursor.execute('select * from test where id = 1') print(f'\nAfter update:\n{await cursor.fetchone()}') await pool.release_connection(conn) conn = await pool.get_connection() cursor = await conn.cursor(dictionary=True) await cursor.execute('select * from test where id = 1') print(f'\nAfter connection is retured to the pool without a commit:\n{await cursor.fetchone()}') await pool.release_connection(conn) await pool.close() asyncio.run(main()) Prints: 2766995397840 (1,) 2766992650992 (1,) 2766997102256 (1,) 2766995397840 (1,) 2766992650992 (1,) 2766997102256 (1,) Before update: {'id': 1, 'the_date': datetime.date(2023, 3, 31), 'engine': 'engine_a', 'value': 30.0} After update: {'id': 1, 'the_date': datetime.date(2023, 3, 31), 'engine': 'engine_a', 'value': 20.0} After connection is retured to the pool without a commit: {'id': 1, 'the_date': datetime.date(2023, 3, 31), 'engine': 'engine_a', 'value': 30.0}
1
1
79,499,236
2025-3-10
https://stackoverflow.com/questions/79499236/deviation-in-solutions-differential-equations-using-odeint-vs-runge-kutta-4th
I am modelling a Coupled Spring-Mass-System: Two objects with masses m1 and m2, and are coupled through springs with spring constants k1 and k2, with damping d1 and d2. Method 1: taking a cookbook-script using ODEINT to solve the differential equations. # Use ODEINT to solve the differential equations defined by the vector field from scipy.integrate import odeint import matplotlib.pyplot as plt import pandas as pd import numpy as np def vectorfield(w, t, p): """ Defines the differential equations for the coupled spring-mass system. Arguments: w : vector of the state variables: w = [x1,y1,x2,y2] t : time p : vector of the parameters: p = [m1,m2,k1,k2,b1,b2] """ x1, y1, x2, y2 = w m1, m2, k1, k2, b1, b2 = p f = [y1, (-d1 * y1 - k1 * x1 + k2 * (x2 - x1 )+ d2 * (y2-y1)) / m1, y2, (-d2 * (y2-y1) - k2 * (x2 - x1 )) / m2] return f # Parameter values tau=1.02 f_1=0.16 f_2=tau*f_1 m1 = 2000000 # mass1 m2 = 20000 # mass2 # Spring constants k1 = m1*pow(2*np.pi*f_1,2) k2 = m2*pow((2*np.pi*f_2),2) # damping d1 = (0.04/2/np.pi)*2*pow(k1*m1,0.5) d_d1=6000 l_p=9.81/pow(2*np.pi*f_2,2) b=0.3 d2=d_d1*pow((l_p-b)/l_p,2) # Initial conditions # x1 and x2 are the initial displacements; y1 and y2 are the initial velocities x1 = 0.5 y1 = 0.0 x2 = 0.25 y2 = 0.0 # ODE solver parameters abserr = 1.0e-8 relerr = 1.0e-6 stoptime = 100.0 numpoints = 5001 # Create the time samples for the output of the ODE solver. # I use a large number of points, only because I want to make # a plot of the solution that looks nice. t = [stoptime * float(i) / (numpoints - 1) for i in range(numpoints)] # Pack up the parameters and initial conditions: p = [m1, m2, k1, k2, d1, d2] w0 = [x1, y1, x2, y2] # Call the ODE solver. wsol = odeint(vectorfield, w0, t, args=(p,), atol=abserr, rtol=relerr) # convert zip object to tuple object temp=tuple(zip(t, wsol[:,0],wsol[:,1],wsol[:,2],wsol[:,3])) # convert tulpe object to list then to dataframe df=pd.DataFrame(list(map(list,temp))) # Create plots with pre-defined labels. 
fig, ax = plt.subplots() ax.plot(df.loc[:,0], df.loc[:,1],label='displacement object1') legend = ax.legend(loc='upper right', shadow=None, fontsize='small') ax.set_xlabel('time [s]', fontdict=None, labelpad=None, loc='center') ax.set_ylabel('pos [m] or V [m/s]', fontdict=None, labelpad=None, loc='center') Method 2: Using Runge-Kutta-4th Order (programmed by myself) to solve the differential equations import numpy as np import matplotlib.pyplot as plt import pandas as pd import matplotlib.pyplot as plt # Parameter values tau=1.02 f_1=0.16 f_2=tau*f_1 m1 = 2000000 # mass1 m2 = 20000 # mass2 # Spring constants k1 = m1*pow(2*np.pi*f_1,2) k2 = m2*pow((2*np.pi*f_2),2) # damping d1 = (0.04/2/np.pi)*2*pow(k1*m1,0.5) d_d1=6000 l_p=9.81/pow(2*np.pi*f_2,2) b=0.3 d2=d_d1*pow((l_p-b)/l_p,2) def system1(x1,y1,x2,y2): return (-d1 * y1 - k1 * x1 + k2 * (x2 - x1 )+ d2 * (y2-y1)) / m1 def system2(x1, y1, x2, y2): return (-d2 * (y2-y1) - k2 * (x2 - x1)) / m2 def runge_kutta_4_sys1(f, x1, y1, x2, y2, h): k1 = f(x1,y1,x2,y2) k2 = f(x1,y1+k1*h/2,x2,y2) k3 = f(x1,y1+k2*h/2,x2,y2) k4 = f(x1,y1+k3*h,x2,y2) temp1= y1 temp2= y1+k1*h/2 temp3= y1+k2*h/2 temp4= y1+k3*h displace_solution = x1 + (temp1 + 2 * temp2 + 2 * temp3 + temp4)*h / 6 velocity_solution = y1 + (k1 + 2 * k2 + 2 * k3 + k4)*h / 6 acc_solution = f(x1,y1,x2,y2) return displace_solution, velocity_solution, acc_solution def runge_kutta_4_sys2(f, x1, y1, x2, y2, h): k1 = f(x1,y1,x2,y2) k2 = f(x1,y1,x2,y2+k1*h/2) k3 = f(x1,y1,x2,y2+k2*h/2) k4 = f(x1,y1,x2,y2+k3*h) temp1= y2 temp2= y2+k1*h/2 temp3= y2+k2*h/2 temp4= y2+k3*h displace_solution = x2 + (temp1 + 2 * temp2 + 2 * temp3 + temp4)*h / 6 velocity_solution = y2 + (k1 + 2 * k2 + 2 * k3 + k4)*h / 6 acc_solution = f(x1,y1,x2,y2) return displace_solution, velocity_solution, acc_solution x1 = 0.5 y1 = 0.0 x2 = 0.25 y2 = 0.0 h = 0.02 numpoints = 5000 time = 0 temp=0 df = pd.DataFrame(index=range(1+numpoints),columns=range(7)) df.iloc[0]=[time,x1,y1,temp,x2,y2,temp] for i in range(numpoints): x1, y1, acc1 = runge_kutta_4_sys1(system1, x1, y1, x2, y2, h) x2, y2, acc2 = runge_kutta_4_sys2(system2, x1, y1, x2, y2, h) time=time+h df.iloc[i+1]=[time,x1,y1,acc1,x2,y2,acc2] df.iloc[i,3]=acc1 df.iloc[i,6]=acc2 # Create plots with pre-defined labels. fig, ax = plt.subplots() ax.plot(df.loc[:,0], df.loc[:,1],label='displacement object1') legend = ax.legend(loc='upper right', shadow=None, fontsize='small') ax.set_xlabel('time [s]', fontdict=None, labelpad=None, loc='center') ax.set_ylabel('pos [m] or V [m/s]', fontdict=None, labelpad=None, loc='center') There is clearly deviation in the solutions between method 1 and method 2. I assume the deviation is due to inaccurate Runge-Kutta-integration. Is my assumption correct? How to set or improve the accuracy for the Runge-Kutta-Methode? I tried searching in internet to found the possible restricts and accuracy using runger-kutta method, without success.
As @lastchance first observed in the comments, your RK4 is not correct as it doesn't apply the method to all the four equations at the same time. I also don't understand the rationale behind the two velocity_solution equations. Here's RK4 method applied to the system of four equations. For pedagogical reasons I add it explicitly, in all its "glorious" complexity. For slightly more complex systems, one would have to add abstractions that would allow the groups of k factors to be computed in loops. def system1(x1, y1, x2, y2): return (-d1 * y1 - k1 * x1 + k2 * (x2 - x1) + d2 * (y2 - y1)) / m1 def system2(x1, y1, x2, y2): return (-d2 * (y2 - y1) - k2 * (x2 - x1)) / m2 def runge_kutta_4(f1, f2, x1, y1, x2, y2, h): k1x1 = y1 k1x2 = y2 k1y1 = f1(x1, y1, x2, y2) k1y2 = f2(x1, y1, x2, y2) k2x1 = y1 + h * k1y1 / 2 k2x2 = y2 + h * k1y2 / 2 k2y1 = f1(x1 + h * k1x1 / 2, y1 + h * k1y1 / 2, x2 + h * k1x2 / 2, y2 + h * k1y2 / 2) k2y2 = f2(x1 + h * k1x1 / 2, y1 + h * k1y1 / 2, x2 + h * k1x2 / 2, y2 + h * k1y2 / 2) k3x1 = y1 + h * k2y1 / 2 k3x2 = y2 + h * k2y2 / 2 k3y1 = f1(x1 + h * k2x1 / 2, y1 + h * k2y1 / 2, x2 + h * k2x2 / 2, y2 + h * k2y2 / 2) k3y2 = f2(x1 + h * k2x1 / 2, y1 + h * k2y1 / 2, x2 + h * k2x2 / 2, y2 + h * k2y2 / 2) k4x1 = y1 + h * k3y1 k4x2 = y2 + h * k3y2 k4y1 = f1(x1 + h * k3x1, y1 + h * k3y1, x2 + h * k3x2, y2 + h * k3y2) k4y2 = f2(x1 + h * k3x1, y1 + h * k3y1, x2 + h * k3x2, y2 + h * k3y2) x1_next = x1 + h * (k1x1 + 2 * k2x1 + 2 * k3x1 + k4x1) / 6 x2_next = x2 + h * (k1x2 + 2 * k2x2 + 2 * k3x2 + k4x2) / 6 y1_next = y1 + h * (k1y1 + 2 * k2y1 + 2 * k3y1 + k4y1) / 6 y2_next = y2 + h * (k1y2 + 2 * k2y2 + 2 * k3y2 + k4y2) / 6 acc1_next = f1(x1_next, y1_next, x2_next, y2_next) acc2_next = f2(x1_next, y1_next, x2_next, y2_next) return x1_next, x2_next, y1_next, y2_next, acc1_next, acc2_next x1 = 0.5 y1 = 0.0 x2 = 0.25 y2 = 0.0 h = 0.02 numpoints = 5000 time = 0 temp1 = system1(x1, y1, x2, y2) temp2 = system1(x1, y1, x2, y2) df = pd.DataFrame(index=range(1 + numpoints), columns=range(7)) df.iloc[0] = [time, x1, y1, temp1, x2, y2, temp2] for i in range(numpoints): x1, x2, y1, y2, acc1, acc2 = runge_kutta_4(system1, system2, x1, y1, x2, y2, h) time = time + h df.iloc[i + 1] = [time, x1, y1, acc1, x2, y2, acc2] df.iloc[i, 3] = acc1 df.iloc[i, 6] = acc2
1
1
79,498,911
2025-3-10
https://stackoverflow.com/questions/79498911/why-does-jaxs-grad-not-always-print-inside-the-cost-function
I am new to JAX and trying to use it with PennyLane and optax to optimize a simple quantum circuit. However, I noticed that my print statement inside the cost function does not execute in every iteration. Specifically, it prints only once at the beginning and then stops appearing. The quantum circuit itself does not make sense; I just wanted to simplify the example as much as possible. I believe the circuit is not actually relevant to the question, but it's included as an example. Here is my code: import pennylane as qml import jax import jax.numpy as jnp import optax jax.config.update("jax_enable_x64", True) device = qml.device("default.qubit", wires=1) @qml.qnode(device, interface='jax') def circuit(params): qml.RX(params, wires=0) return qml.expval(qml.PauliZ(0)) def cost(params): print('Evaluating') return circuit(params) # Define optimizer params = jnp.array(0.1) opt = optax.adam(learning_rate=0.1) opt_state = opt.init(params) # JIT the gradient function grad = jax.jit(jax.grad(cost)) for epoch in range(5): print(f'{epoch = }') grad_value = grad(params) updates, opt_state = opt.update(grad_value, opt_state) params = optax.apply_updates(params, updates) Expected output: epoch = 0 Evaluating epoch = 1 Evaluating epoch = 2 Evaluating epoch = 3 Evaluating epoch = 4 Evaluating Actual output: epoch = 0 Evaluating epoch = 1 Evaluating epoch = 2 epoch = 3 epoch = 4 Question: Why is the print statement inside cost not executed after the first iteration? Is JAX caching the function call or optimizing it in a way that skips execution? How can I ensure that cost is evaluated in every iteration?
When working with JAX it is important to understand the difference between "trace time" and "runtime". For JIT compilation JAX does an abstract evaluation of the function when it is called first. This is used to "trace" the computational graph of the function and then create a fully compiled replacement, which is cached and then invoked on the next calls ("runtime") of the function. Now, Python's print statements are only evaluated at trace time and not at runtime, because the code of the function has been effectively replaced by a compiled version. For the case of printing during runtime, JAX has a special jax.debug.print function, you can use: def cost(params): jax.debug.print('Evaluating') return circuit(params) More on the jax.debug utilities: https://docs.jax.dev/en/latest/debugging/index.html And JIT compilation: https://docs.jax.dev/en/latest/jit-compilation.html
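As a small follow-up usage sketch (assuming the same circuit and cost setup from the question and answer above), jax.debug.print accepts format-string placeholders, so traced values can be printed at runtime as well:

def cost(params):
    # printed on every call, even inside jit-compiled code
    jax.debug.print("Evaluating, params = {p}", p=params)
    return circuit(params)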
1
2
79,489,878
2025-3-6
https://stackoverflow.com/questions/79489878/my-modal-doesnt-appear-after-an-action-in-streamlit
I have this Streamlit app: import streamlit as st st.title("Simulator") tab_names = ["tab1", "tab2"] tab1, tab2= st.tabs(tab_names) @st.dialog("Edit your relationships") def edit_relationships(result): edit_options = tuple(result) selection = st.selectbox( "Select an entity relationship", edit_options ) st.write(f"This is a dialog {selection}") if st.button("Submit"): st.session_state.vote = 'pear' st.rerun() with tab1: st.write("This is the first tab") with tab2: query = st.text_input("Enter the entity", key='t2tinput') if st.button('send', key='t2button'): try: result = ['banana', 'apple', 'pear'] if st.button("Edit Relationships"): edit_relationships(result) except Exception as e: st.error(f"Error: {e}") And I want that after the result list is received (from an API), that a button 'Edit relationships" appear so I can click on it and a modal appears. I tried this code but after clicking on 'Edit relationships', the modal doesn't appear. Please, could you point out what I am doing wrong?
It seems you need st.fragment: When a user interacts with an input widget created inside a fragment, Streamlit only reruns the fragment instead of the full app. If run_every is set, Streamlit will also rerun the fragment at the specified interval while the session is active, even if the user is not interacting with your app. Working with fragments: Reruns are a central part of every Streamlit app.When users interact with widgets, your script reruns from top to bottom, and your app's frontend is updated. Streamlit provides several features to help you develop your app within this execution model. Streamlit version 1.37.0 introduced fragments to allow rerunning a portion of your code instead of your full script. As your app grows larger and more complex, these fragment reruns help your app be efficient and performant. Fragments give you finer, easy-to-understand control over your app's execution flow. Full code: import streamlit as st st.title("Simulator") tab_names = ["tab1", "tab2"] tab1, tab2= st.tabs(tab_names) @st.dialog("Edit your relationships") def edit_relationships(result): edit_options = tuple(result) selection = st.selectbox( "Select an entity relationship", edit_options ) st.write(f"This is a dialog {selection}") if st.button("Submit"): st.session_state.vote = 'pear' st.rerun() @st.fragment def edit_btn(): if st.button("Edit Relationships"): edit_relationships(result) with tab1: st.write("This is the first tab") with tab2: query = st.text_input("Enter the entity", key='t2tinput') if st.button('send', key='t2button'): try: result = ['banana', 'apple', 'pear'] edit_btn() except Exception as e: st.error(f"Error: {e}") Output:
1
3
79,499,242
2025-3-10
https://stackoverflow.com/questions/79499242/how-to-check-if-a-library-is-installed-at-runtime-in-python
I want to check whether a library is installed and can be imported dynamically at runtime within an if statement to handle it properly. I've tried the following code: try: import foo print("Foo installed") except ImportError: print("Foo not installed") It works as intended but does not seem like the most elegant method. I was thinking more of a method that returns a boolean indicating whether the library is installed, something like: Class.installed("foo") # returns a boolean
Use importlib.util for a clean check. import importlib.util if importlib.util.find_spec("library_name") is not None: print("Installed")
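If you want the boolean-helper call style asked for in the question, a thin wrapper around find_spec is enough; a minimal sketch (the helper name is just an illustration, not a standard-library function):

import importlib.util

def installed(name: str) -> bool:
    """Return True if the module can be imported."""
    return importlib.util.find_spec(name) is not None

if installed("foo"):
    print("Foo installed")
else:
    print("Foo not installed")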
1
1
79,499,303
2025-3-10
https://stackoverflow.com/questions/79499303/confusion-on-re-assigning-pandas-columns-after-modification-with-apply
Let us assume we have this dataframe: df = pd.DataFrame.from_dict({1:{"a": 10, "b":20, "c":30}, 2:{"a":100, "b":200, "c":300}}, orient="index") Further, let us assume I want to apply a function to each row that adds 1 to the values in columns a and b def add(x): return x["a"] +1, x["b"] +1 Now, if I use the apply function to mod and overwrite the columns twice, some values are flipped: >>> df.loc[:, ["a", "b"]] = df[["a", "b"]].apply(lambda x: add(x), axis=1) >>> df a b c 1 11 101 30 2 21 201 300 >>> >>> df.loc[:, ["a", "b"]] = df[["a", "b"]].apply(lambda x: add(x), axis=1) >>> df a b c 1 12 22 30 2 102 202 300 >>> Could somebody explain to me why b1 and a2 get flipped?
This is your original DataFrame: a b c 1 10 20 30 2 100 200 300 Now, look at the output of df[['a', 'b']].apply(add, axis=1): df[['a', 'b']].apply(add, axis=1) 1 (11, 101) 2 (21, 201) dtype: object This creates a Series of tuples, which means you have two items (11, 101) and (21, 201), and those are objects (tuples). The first item will be assigned to a, the second to b. Let see what happens if you were assigning two strings instead: df.loc[:, ['a', 'b']] = ['x', 'y'] a b c 1 x y 30 2 x y 300 The first item (x) gets assigned to a, the second (y) to b. Your unexpected behavior is due to a combination of two things: you are ignoring the index with .loc[:, ...] the right hand side is a Series (of objects) If you remove either condition, this wouldn't work: # let's assign on the columns directly df[['a', 'b']] = df[['a', 'b']].apply(add, axis=1) # KeyError: 0 # let's convert the output to list df[['a', 'b']] = df[['a', 'b']].apply(add, axis=1).tolist() # a b c # 1 11 21 30 # 2 101 201 300 In addition, your error only occurred because you had the same number of rows and columns in the selection. This would have raised an error with 3 columns: df.loc[:, ['a', 'b', 'c']] = df[['a', 'b', 'c']].apply(add, axis=1) # ValueError: Must have equal len keys and value when setting with an iterable Take home message If you need to use a function with apply and axis=1 and you want to output several "columns", either convert the output to lists if you have the same columns as output: df[['a', 'b']] = df[['a', 'b']].apply(add, axis=1).tolist() Or output a DataFrame by making the function return a Series: def add(x): return pd.Series({'a': x['a']+1, 'b': x['b']+1}) df[['a', 'b']] = df[['a', 'b']].apply(add, axis=1) In any case, never use df.loc[:, ...] unless you know why you're doing this (i.e. you're purposely breaking the index alignment). Vectorial operations Of course, the above assumes you have complex, non-vectorized functions to use. If your goal is to perform a simple addition: # adding 1 to both a and b df[['a', 'b']] += 1 # adding 1 to a and 2 to b df[['a', 'b']] += [1, 2] # adding 1 to a and 2 to b, using add df[['a', 'b']] = df[['a', 'b']].add([1, 2]) # adding 1 to a and 2 to b, using a dictionary df[['a', 'b']] = df[['a', 'b']].add({'b': 2, 'a': 1})
1
3
79,496,846
2025-3-10
https://stackoverflow.com/questions/79496846/how-to-use-pytest-mark-parametrize-and-include-an-item-for-the-default-mock-b
I am creating a parameterized Mock PyTest to test API behaviors. I am trying to simplify the test code by testing the instance modified behavior, e.g. throw and exception, and the default behavior, i.e. load JSON from file vs. calling REST API. I do not know how to add an array entry to represent the "default" mock behavior? @pytest.mark.parametrize( ("get_nearby_sensors_mock", "get_nearby_sensors_errors"), [ (AsyncMock(side_effect=Exception), {CONF_BASE: CONF_UNKNOWN}), (AsyncMock(side_effect=PurpleAirError), {CONF_BASE: CONF_UNKNOWN}), (AsyncMock(side_effect=InvalidApiKeyError), {CONF_BASE: CONF_INVALID_API_KEY}), (AsyncMock(return_value=[]), {CONF_BASE: CONF_NO_SENSORS_FOUND}), # What do I do here? # (AsyncMock(api.sensors, "async_get_nearby_sensors")) does not work as api is not in scope? # (AsyncMock(side_effect=None), {}) does not call the default fixture? (AsyncMock(), {}), ], ) async def test_validate_coordinates( hass: HomeAssistant, mock_aiopurpleair, api, get_nearby_sensors_mock, get_nearby_sensors_errors, ) -> None: """Test validate_coordinates errors.""" with ( patch.object(api, "async_check_api_key"), patch.object(api.sensors, "async_get_nearby_sensors", get_nearby_sensors_mock), ): result: ConfigValidation = await ConfigValidation.async_validate_coordinates( hass, TEST_API_KEY, TEST_LATITUDE, TEST_LONGITUDE, TEST_RADIUS ) assert result.errors == get_nearby_sensors_errors if result.errors == {}: assert result.data is not None else: assert result.data is None How do I add a parameter for the "default behavior" of patch.object(api.sensors, "async_get_nearby_sensors") that will use the fixture to load data from canned JSON file? Why mock; async_validate_coordinates() calls async_check_api_key() that needs to be mocked to pass, and async_get_nearby_sensors() that is mocked with a fixture to return data from a JSON file. For ref this is the conftest.py file.
As a workaround, you could deal with any api-related case inside the function, where the api is known. For this single-use, I would use None to trigger the default case @pytest.mark.parametrize( ("get_nearby_sensors_mock", "get_nearby_sensors_errors"), [ # [...] (None, {}), ], ) async def test_validate_coordinates( hass: HomeAssistant, mock_aiopurpleair, api, get_nearby_sensors_mock, get_nearby_sensors_errors, ) -> None: """Test validate_coordinates errors.""" if get_nearby_sensors_mock is None: get_nearby_sensors_mock = AsyncMock(api.sensors, "async_get_nearby_sensors") with (patch.object( # [...]
2
0
79,498,590
2025-3-10
https://stackoverflow.com/questions/79498590/is-there-a-way-to-vertically-merge-two-polars-lazyframes-in-python
I want to vertically merge two polars.LazyFrames in order to avoid collecting both LazyFrames beforehand, which is computationally expensive. I have tried extend(), concat(), and vstack() but none of them are implemented for LazyFrames. Maybe I am missing the point about LazyFrames by trying to perform this operation, but I am aware that join() works, which would also alter the dataframe's structure.
pl.concat can be used with LazyFrames: >>> lf = pl.LazyFrame({"x": [1, 2]}) >>> pl.concat([lf, lf]).collect() shape: (4, 1) ┌─────┐ │ x │ │ --- │ │ i64 │ ╞═════╡ │ 1 │ │ 2 │ │ 1 │ │ 2 │ └─────┘
2
8
79,497,914
2025-3-10
https://stackoverflow.com/questions/79497914/how-do-conditional-expressions-group-from-right-to-left
I checked the Python operator precedence table (this grammar is more detailed and closer to the actual Python implementation): Operators in the same box group left to right (except for exponentiation and conditional expressions, which group from right to left). ** Exponentiation [5] if – else Conditional expression I understand exponentiation: 2**3**2 is equal to 2**(3**2). But the conditional expression conditional_expression ::= or_test ["if" or_test "else" expression] is not a single binary operator, so I can't come up with a similar example as I can for **. Could you give an example of "group from right to left" for the conditional expression?
a if b else c if d else e means a if b else (c if d else e) and not (a if b else c) if d else e
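A quick way to see the right-to-left grouping is to pick truth values where the two groupings disagree; a minimal sketch you can paste into a REPL:

# right-to-left (Python's rule): 'a' if True else ('c' if False else 'e')
print('a' if True else 'c' if False else 'e')    # -> a
# forcing left-to-right grouping gives a different result
print(('a' if True else 'c') if False else 'e')  # -> e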
1
2
79,497,724
2025-3-10
https://stackoverflow.com/questions/79497724/index-pandas-with-multiple-boolean-arrays
Using numpy, one can subset an array with one boolean array per dimension like: In [10]: aa = np.array(range(9)).reshape(-1, 3) In [11]: aa Out[11]: array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) In [12]: conditions = (np.array([True, True, False]), np.array([True, False, True])) In [13]: aa[np.ix_(*conditions)] Out[13]: array([[0, 2], [3, 5]]) Is there a way to do this in Pandas? I've looked in their docs https://pandas.pydata.org/docs/user_guide/indexing.html#boolean-indexing but didn't find it. (I would have posted 4 relevant links, but then the automatic question checks think I've posted code that is not properly formatted.) This https://github.com/pandas-dev/pandas/issues/11290 github issue is close, but I want to pick entire rows and columns.
You should be able to directly use boolean indexing with iloc (or loc): df = pd.DataFrame(aa) out = df.iloc[conditions] Note that conditions should be a tuple of arrays/lists/iterables, if not you should convert it: out = df.iloc[tuple(conditions)] Output: 0 2 0 0 2 1 3 5
2
3
79,497,570
2025-3-10
https://stackoverflow.com/questions/79497570/mean-value-with-special-dependency
I have a DataFrame that looks something like this:

C1    C2
10    10
20    10
30    16
5     23
6     23
8     10
4     10
2     10

I would like to calculate the mean value in column C1 depending on the values in column C2. The mean value is to be calculated over all values in column C1 until the value in column C2 changes again. The result table should look like this:

C1     C2
15     10
30     16
5.5    23
4.67   10
Use GroupBy.mean by consecutive values created by compared Series.shifted values with Series.cumsum, last remove first level and get original order of columns by DataFrame.reindex: out =(df.groupby([df['C2'].ne(df['C2'].shift()).cumsum(),'C2'],sort=False)['C1'] .mean() .droplevel(0) .reset_index() .reindex(df.columns, axis=1)) print (out) C1 C2 0 15.000000 10 1 30.000000 16 2 5.500000 23 3 4.666667 10 How it working: print (df.assign(compared=df['C2'].ne(df['C2'].shift()), cumsum=df['C2'].ne(df['C2'].shift()).cumsum())) C1 C2 compared cumsum 0 10 10 True 1 1 20 10 False 1 2 30 16 True 2 3 5 23 True 3 4 6 23 False 3 5 8 10 True 4 6 4 10 False 4 7 2 10 False 4 Thank you @ouroboros1 for another easier solution with GroupBy.agg: out = (df.groupby(df['C2'].ne(df['C2'].shift()).cumsum(), as_index=False) .agg({'C1': 'mean', 'C2': 'first'})) print (out) C1 C2 0 15.000000 10 1 30.000000 16 2 5.500000 23 3 4.666667 10
1
2
79,496,711
2025-3-9
https://stackoverflow.com/questions/79496711/opencv-understanding-the-filterbyarea-parameter-used-in-simpleblobdetector
I am trying to detect a large stain using OpenCV's SimpleBlobDetector following this SO answer. Here is the input image: I first tried working with params.filterByArea = False, which allowed to detect the large black stain: However, smaller spots also ended up being detected. I therefore toggled params.filterByArea = True hoping to enforce a criterion on object area. However, when setting params.filterByArea = True with params.minArea = 10 the largest stain is no longer identified: I tried using other minArea parameters to no avail, even trying a minArea of 0, which should be equivalent to no filtering at all. What am I missing here?
opencv maxArea param has 5000 as default value. Increasing it could detect blobs bigger than that import cv2 import numpy as np def show_keypoints(im, keypoints): im_key = cv2.drawKeypoints(im, keypoints, np.array([]), (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) cv2.imshow("Keypoints", im_key) cv2.waitKey(0) im_path = "test.png" im_m = cv2.imread(im_path, cv2.IMREAD_GRAYSCALE) params = cv2.SimpleBlobDetector_Params() params.filterByArea = True params.minArea = 10000 params.maxArea = 50000 detector = cv2.SimpleBlobDetector_create(params) keypoints = detector.detect(im_m) show_keypoints(im_m, keypoints) To print all detector params with default values use # a map suitable to get configuration from a yaml file param_map = {"blobColor": "blob_color", "filterByArea": "filter_by_area", "filterByCircularity": "filter_by_circularity", "filterByColor": "filter_by_color", "filterByConvexity": "filter_by_convexity", "filterByInertia": "filter_by_inertia", "maxArea": "max_area", "maxCircularity": "max_circularity", "maxConvexity": "max_convexity", "maxInertiaRatio": "max_inertia_ratio", "maxThreshold": "max_threshold", "minArea": "min_area", "minCircularity": "min_circularity", "minConvexity": "min_convexity", "minDistBetweenBlobs": "min_dist_between_blobs", "minInertiaRatio": "min_inertia_ratio", "minRepeatability": "min_repeatability", "minThreshold": "min_threshold", "thresholdStep": "threshold_step"} for k in param_map: print(f"{k:<21}: {params.__getattribute__(k)}") result blobColor : 0 filterByArea : True filterByCircularity : False filterByColor : True filterByConvexity : False filterByInertia : False maxArea : 7000.0 maxCircularity : 3.4028234663852886e+38 maxConvexity : 3.4028234663852886e+38 maxInertiaRatio : 3.4028234663852886e+38 maxThreshold : 220.0 minArea : 500.0 minCircularity : 0.5 minConvexity : 0.20000000298023224 minDistBetweenBlobs : 10.0 minInertiaRatio : 0.10000000149011612 minRepeatability : 2 minThreshold : 50.0 thresholdStep : 10.0
2
4
79,496,136
2025-3-9
https://stackoverflow.com/questions/79496136/plotting-vertical-lines-on-pandas-line-plot-with-multiindex-x-axis
I have a dataframe whose index is a multiindex where axes[0] is the date, and axis[1] is the rank. Rank starts with 1 and ends at 100, but there can be a variable number of ranks in between as below. Here are the ranks dx = pd.DataFrame({ "date": [ pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-24'), pd.to_datetime('2025-02-25'), pd.to_datetime('2025-02-25'), pd.to_datetime('2025-02-25'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26'), pd.to_datetime('2025-02-26') ], "rank": [0.0,1.0,2.0,100.0,0.0,1.0,100.0,0.0,1.0,2.0,3.0,100.0], "value": [2.3, 2.5, 2.4, 2.36, 2.165, 2.54, 2.34, 2.12, 2.32, 2.43, 2.4, 2.3] }) dx.set_index(["date", "rank"], inplace=True) I want to plot this df, and df.plot() works fine creating a reasonable x-axis. However, I want to add a grid or vertical lines at all the rank=1, and all the rank=100(different color). I tried this : fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(30, 5)) dx.plot(ax=axes[0]) axes[0].tick_params('x', labelrotation=90) xs = [x for x in dx.index if x[1]==0] for xc in xs: axes[0].axvline(x=xc, color='blue', linestyle='-') but get this error: ConversionError: Failed to convert value(s) to axis units: (Timestamp('2025-02-24 00:00:00'), 0.0) I also want to only show x labels for rank=0, and not all of them. Currently, if i set label rotation to 90, it results in that but not sure if this is the best way to ensure that. axes[0].tick_params('x', labelrotation=90) So looking for 2 answers How to set vertical lines at specific points with this type of multiindex How to ensure only certain x labels show on the chart
With a categorical axis, plt will use an integer index "under the hood". Here, since you are using a lineplot, it tries to come up with a reasonable step: dx.plot(ax=axes[0]) axes[0].get_xticks() # array([-2., 0., 2., 4., 6., 8., 10., 12.]) With a barplot, you would get the more logical: dx.plot.bar(ax=axes[0]) axes[0].get_xticks() # array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) You can use Axes.set_xticks and Axes.set_xticklabels to fix this. E.g., ticks = range(len(dx)) # only label at rank 0 labels = [f"{x[0].strftime('%Y-%m-%d')}, {int(x[1])}" if x[1] == 0 else '' for x in dx.index] axes[0].set_xticks(ticks=ticks) axes[0].set_xticklabels(labels=labels, rotation=90) It's easier to see now that we need the appropriate index matches for Axes.axvline. We can apply np.nonzero to Index.get_level_values and then add the lines in a loop: def add_vlines(ax, rank, color): indices = np.nonzero(dx.index.get_level_values('rank') == rank)[0] for index in indices: ax.axvline(x=index, color=color, linestyle='dotted') add_vlines(axes[0], 1, 'blue') add_vlines(axes[0], 100, 'red') Output:
1
1
79,487,849
2025-3-5
https://stackoverflow.com/questions/79487849/expanding-numpy-based-code-that-detect-the-frequency-of-the-consecutive-number-t
This stackoverflow answer provides a simple way (below) to find the frequency and indices of consecutive repeated numbers. This solution is much faster than loop-based code (see the original post above). boundaries = np.where(np.diff(aa) != 0)[0] + 1 #group boundaries get_idx_freqs = lambda i, d: (np.concatenate(([0], i))[d >= 2], d[d >= 2]) idx, freqs = get_idx_freqs(boundaries, np.diff(np.r_[0, boundaries, len(aa)])) and the output # aa=np.array([1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]) (array([ 1, 3, 6, 10]), array([2, 3, 4, 5])) # aa=np.array([1,1,1,np.nan,np.nan,1,1,np.nan]) (array([0, 5]), array([3, 2])) Wondering if this solution could be expanded to work on multidimensional array instead of the slow traditional loop, as the following: #%% def get_frequency_of_events_fast(aa): boundaries = np.where(np.diff(aa) != 0)[0] + 1 #group boundaries get_idx_freqs = lambda i, d: (np.concatenate(([0], i))[d >= 2], d[d >= 2]) idx, freqs = get_idx_freqs(boundaries, np.diff(np.r_[0, boundaries, len(aa)])) return idx,freqs tmp2_file=np.load('tmp2.npz') tmp2 = tmp2_file['arr_0'] idx_all=[] frq_all=[] for i in np.arange(tmp2.shape[1]): for j in np.arange(tmp2.shape[2]): print("==>> i, j "+str(i)+' '+str(j)) idx,freq=get_frequency_of_events_fast(tmp2[:,i,j]) idx_all.append(idx) frq_all.append(freq) #if j == 69: # break print(idx) print(freq) #if i == 0: # break I appended the indices and frequencies to the one dimensional list and also I was wondering if there is a way to append to two dimensional array. The file could be downloaded from box.com. Here is a sample output ==>> i, j 0 61 [ 27 73 226 250 627 754 760 798 825 891 906] [ 12 8 5 17 109 5 12 26 30 12 3] ==>> i, j 0 62 [ 29 75 226 250 258 627 754 761 800 889] [ 11 7 5 6 6 114 5 14 57 21] ==>> i, j 0 63 [ 33 226 622 680 754 762 801 888] [ 9 5 56 63 5 21 58 26] ==>> i, j 0 64 [ 33 226 615 622 693 753 762 801 889 972 993] [12 5 4 68 54 6 21 60 26 3 2] ==>> i, j 0 65 [ 39 615 621 693 801 891 972 987 992] [ 7 3 70 90 61 24 3 2 7] ==>> i, j 0 66 [ 39 617 657 801 891 970 987] [ 7 34 132 63 30 5 13] ==>> i, j 0 67 [ 39 88 621 633 657 680 801 804 891 969 986] [ 11 4 6 2 6 110 2 63 30 6 14] ==>> i, j 0 68 [ 39 87 681 715 740 766 807 873 891 969 984] [12 6 33 3 22 24 60 3 31 6 16]
A possible solution (on my computer, it runs instantaneously): # data = np.load('tmp2.npz') # tmp2 = data['arr_0'] def get_freqs(aa): boundaries = np.where(np.diff(aa) != 0)[0] + 1 edges = np.r_[0, boundaries, len(aa)] group_lengths = np.diff(edges) valid = group_lengths >= 2 idx = np.concatenate(([0], boundaries))[valid] return idx, group_lengths[valid] out = { (i, j): get_freqs(tmp2[:, i, j]) for i, j in np.ndindex(tmp2.shape[1], tmp2.shape[2]) } The function computes the starting indices and lengths of consecutive groups in a one-dimensional array where the value remains the same, ignoring groups with fewer than two elements. It does this by first identifying change points using np.diff, then constructing group edges with np.r_ and calculating group lengths with np.diff, and finally filtering groups based on a minimum length criterion. The dictionary comprehension applies this function to every (i, j) slice (i.e., along the first dimension) of the 3-D array tmp2, storing the results in a dictionary keyed by the (i, j) indices. Since the OP has a large number of cores, numba + parallel processing can very much speed up the calculations: import numpy as np from numba import njit, prange from numba.typed import List @njit(parallel=True) def process_all(tmp2): T, M, N = tmp2.shape out = List() for _ in range(M * N): out.append((np.empty(0, np.int64), np.empty(0, np.int64))) for i in prange(M): for j in range(N): aa = tmp2[:, i, j] n = aa.shape[0] idx_arr = np.empty(n, np.int64) len_arr = np.empty(n, np.int64) count = 0 start = 0 for k in range(1, n): if aa[k] != aa[k - 1]: group_len = k - start if group_len >= 2: idx_arr[count] = start len_arr[count] = group_len count += 1 start = k group_len = n - start if group_len >= 2: idx_arr[count] = start len_arr[count] = group_len count += 1 out[i * N + j] = (idx_arr[:count].copy(), len_arr[:count].copy()) return out data = np.load('/content/tmp2.npz') tmp2 = data['arr_0'] flat_results = process_all(tmp2) M, N = tmp2.shape[1], tmp2.shape[2] results = {} idx = 0 for i in range(M): for j in range(N): results[(i, j)] = flat_results[idx] idx += 1
2
2
79,496,092
2025-3-9
https://stackoverflow.com/questions/79496092/pythons-predicate-composition
I would like to implement something similar to this OCaml in Python: let example = fun v opt_n -> let fltr = fun i -> i mod 2 = 0 in let fltr = match opt_n with | None -> fltr | Some n -> fun i -> (i mod n = 0 && fltr n) in fltr v This is easily composable/extendable, I can add as many predicates as I want at runtime. This is of course a simplified example, in real life I have many optional inclusion/exclusion sets, and predicate checks for membership. Doing this the naive way in Python fails: def example(v: int, opt_n=None): """ doesn't work! """ # doesn't need to be a lambda, an explicitely defined function fails too fltr = lambda i: i % 2 == 0 if opt_n is not None: # fails miserably -> maximum recursion depth exceeded fltr = lambda i: fltr(i) and i % opt_n == 0 return fltr(v) example(10, 5) This is annoying because it seems that since fltr can only appear once on the left side of the assignment, I have to inline the initial fltr in every case afterward: def example(v: int, opt_n=None, opt_m=None): """annoying but works""" fltr = None # some inital filters pred_0 = lambda _: True # do some real checks ... pred_1 = lambda _: True # do some real checks ... if opt_n is not None: # fltr is inlined, only appears on left side, now it works fltr = lambda i: pred_0(i) and pred_1(i) and opt_n % 2 == 0 if opt_m is not None: # much repetition fltr = lambda i: pred_0(i) and pred_1(i) and opt_n % 3 == 0 if fltr is None: # inlined again fltr = lambda i: pred_0(i) and pred_1(i) return fltr(v) Is there any way to fix my mess, maybe I am missing something, and/or what is the recommended way to compose predicates in Python?
When you write fltr = lambda i: fltr(i) and i % opt_n == 0 fltr remains a free variable inside the lambda expression, and will be looked up when the function is called; it's not bound to the old definition of fltr in place when you evaluate the lambda expression. You need some way to do early binding; one option is to bind the current value of fltr to a new variable that's local to the lambda expression, namely another parameter: fltr = lambda i, cf=fltr: cf(i) and i % opt_n == 0 Not the cleanest solution. If you don't mind an additional function call, you can define an explicit composition function: def pred_and(f, g): return lambda x: f(x) and g(x) then use that to compose the old fltr with another predicate to produce a new filter. def example(v: int, opt_n=None): fltr = lambda i: i % 2 == 0 if opt_n is not None: # fails miserably -> maximum recursion depth exceeded fltr = pred_and(fltr, lambda i: i % opt_n == 0) return fltr(v)) (I don't know OCaml very well, but pred_and is somewhat like the use of an applicative functor in Haskell, e.g. pred_and = liftA2 (&&).)
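If you end up with a whole list of optional predicates, the same pred_and helper composes with functools.reduce; a small usage sketch building on the answer's definition:

from functools import reduce

def compose_all(preds):
    # fold the predicates into a single one, short-circuiting on the first False
    return reduce(pred_and, preds, lambda _: True)

fltr = compose_all([lambda i: i % 2 == 0, lambda i: i % 3 == 0])
print(fltr(12), fltr(8))  # True False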
1
3
79,496,102
2025-3-9
https://stackoverflow.com/questions/79496102/sqlalchemy-use-in-to-select-pairwise-correspondence
Consider the following DB: from sqlalchemy import String, select, create_engine from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session class Base(DeclarativeBase): pass class User(Base): __tablename__ = "user_account" name: Mapped[str] = mapped_column(String(30)) surname: Mapped[str] = mapped_column(String(30)) def __repr__(self): return f"User(name={self.name!r}, surname={self.surname!r})" engine = create_engine("sqlite+pysqlite:///test3.sqlite3", echo=True) Base.metadata.create_all(engine) with Session(engine) as session: user1 = User( name="Mario", surname="Rossi" ) user2 = User( name="Mario", surname="Bianchi", ) user3 = User( name="Giovanni", surname="Bianchi", ) session.add_all([user1, user2, user3]) session.commit() Now suppose I have a list of users I want to find: users = [("Mario", "Rossi"), ("Giovanni", "Bianchi")] Then I would run: names = [name for name, _ in users] surnames = [surname for _, surname in users] with Session(engine) as session: stmt = select(User).where(User.name.in_(names)).where(User.surname.in_(surnames)) print(session.execute(stmt).scalars().all()) which returns: [User(name='Mario', surname='Rossi'), User(name='Mario', surname='Bianchi'), User(name='Giovanni', surname='Bianchi')] but "Mario Bianchi" was not in the list of input users I had. How can I concatenate IN statements in order to select only pairwise correspondence? That is, if I have varA IN (el1, el2) AND varB IN (el3, el4), I do not wat to select entries with varA==el1 AND varB==el4
You need to use a more explicit OR Boolean condition: sqlalchemy.or_(...) ensures that each (name, surname) pair is checked explicitly. from sqlalchemy import or_ with Session(engine) as session: stmt = select(User).where( or_(*[(User.name == name) & (User.surname == surname) for name, surname in users]) ) print(session.execute(stmt).scalars().all()) This works across all databases, including SQLite. PostgreSQL-Specific Optimization If you were using PostgreSQL, you could use tuple_ for a more elegant and optimized query: from sqlalchemy.sql.expression import tuple_ with Session(engine) as session: stmt = select(User).where(tuple_(User.name, User.surname).in_(users)) print(session.execute(stmt).scalars().all())
1
4
79,495,685
2025-3-9
https://stackoverflow.com/questions/79495685/gpg-import-keys-is-not-working-in-python-virtual-environment
I'm running this piece of code to encrypt a file using PGP public key. import gnupg def pgp_encrypt(pub_file, out_file): gpg = gnupg.GPG() with open(pub_file, 'rb') as pgp_pub_key: public_key_data = pgp_pub_key.read() # import_keys_file() is NOT used as the key # eventually will come from user-input import_result = gpg.import_keys(public_key_data) if import_result.count == 0: print("Error: No keys imported. Make sure the public key file is correct.") exit() pgp_key_id = import_result.results[0]['fingerprint'] plaintext_data = b'This is the TEST data to encrypt' encrypted_data = gpg.encrypt( plaintext_data, recipients=[pgp_key_id], always_trust=True ) if encrypted_data.ok: print("Data encrypted successfully.") print(encrypted_data.data) with open(out_file, 'wb') as encrypted_file: encrypted_file.write(encrypted_data.data) else: print("Encryption failed:") print(encrypted_data.status) ## Apply pgp_encrypt('pgp_pubkey.asc', 'pgp_encrypted_file') So, basically it's reading the public-key file and putting the data in public_key_data, then importing it using gpg.import_keys(public_key_data) before encrypting the file. print(encrypted_data.data) in my code is not printing the result properly (on the screen, which is another issue to fix) but it's working: santanu@mgtucrpi5:~/Scripts/gnupg $ python pgp_encrypt.py Data encrypted successfully. b'-----BEGIN PGP MESSAGE-----\n\nhQIMA4QM8WwBjfPfAQ/+Jel/JySvuydbuAHDuRT/KwOoFOStYUprQ3TQsj3S3ryJ\nC6bqYD77XviU3fjtcedKxCc0F9Gxw01fb838H0AeACI9Bi4GLuUgS/FJTvrEsX4K\nMniWu4HsConIX+63Ud+RHlVCRziGsa86Uub7GwsaOvYpYhovWzNxc/ObLmoMZaSP\nYmBUHkN+rGGOx4CGGiVS7480Mp2gmd3UyFFbQwV1xO+fz5I+gOcYJSXU0R6SzdXd\nS03sI+8AXLVLmgTARi5ed5V4gr4EIb/bhN18zyUo6gO8vo34GtllFQlRZWL04GRN\n/wg0uudJd26tRxJfCwdcYONKzbNFo8wtLv7dedY+cah+2bTHKFcTWYMGyrhCZZmG\nnZ/GWXnojAz9n9BUNLT/vwQvildfSsuG2qABmk5HUjv0bOH8Ducw6UrbO1pP6hzO\nQcMxGEg8/YQCfI7Zcz1RrIRHWBDlhmG2znDFin2ApyY0N1FmagOJYSZ/ijUkBnT3\nbtIRJ0ISGR7Hjee2G80vKvy0Ozkev2dAhl4Rm3BzoLQV340jEe6dmg8QUPbP0hGU\ni+mlGNMpg50TQVE90ILewhndaBGcBxltS2hVwe+AWj0vhYK3EUqE32Hj7mZxXAWc\nfLTAIXCbsSrZ0Mtc+m6V1IkkwotHaNOea6gqoLMixHbYiwq+F5beu2taYOsespHS\nUQE28ZFF/n6HQ0EUfDuKsd14xUE6UjZvWpfaOor1OedKCife/HkrOOR/VCua1p/T\npROcEBIU2jtazibCiYD1uIy+lwS4w0en8ysFPrLnJuWcFQ==\n=UR1e\n-----END PGP MESSAGE-----\n' but the moment I run it from virtual environment, I get the following error: santanu@mgtucrpi5:~/Scripts/gnupg $ source pgpenv/bin/activate (pgpenv) santanu@mgtucrpi5:~/Scripts/gnupg $ python pgp_encrypt.py Traceback (most recent call last): File "/home/santanu/Scripts/gnupg/pgp_encrypt.py", line 34, in <module> pgp_encrypt('pgp_pubkey.asc', 'pgp_encrypted_file') File "/home/santanu/Scripts/gnupg/pgp_encrypt.py", line 11, in pgp_encrypt if import_result.count == 0: ^^^^^^^^^^^^^^^^^^^ AttributeError: 'ImportResult' object has no attribute 'count'. Did you mean: 'counts'? if I change it to counts (just to try), I get diffrent error, which doesn't look right either: File "/home/santanu/Scripts/gnupg/pgpenv/lib/python3.11/site-packages/gnupg/gnupg.py", line 1064, in encrypt result = self._encrypt(stream, recipients, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: GPGBase._encrypt() got multiple values for argument 'recipients' How can I fix that? ref. 
https://gnupg.readthedocs.io/en/latest/#importing-and-receiving-keys ======================================= To answer @fqlenos's question: this is the way I did the virtual env: % cd ~/Scripts/gnupg % python3 -m venv pgpenv % source pgpenv/bin/activate % pip install gnupg I tried with pip install python-gnupg as well but got the same result. Is there anything I'm missing or doing incorrectly?
How did you create the virtual environment? Did you activate it once it is created? Sometimes, we can miss activating the environment and installing the necessary dependencies on it. I personally use uv for creating and managing virtual environments: $ uv venv $ source .venv/bin/activate $ (venv) pip install python-gnupg==0.5.4 If you don't use uv, you can always follow this tutorial venv — Creation of virtual environments, you'll get the same result. I've just run that snippet of code and it worked perfectly fine within my virtual environment. Make sure you activate and install the correct gnupg python dependency. Regarding to the formatter of the result. What you're printing is a byte-formatted string. In case you want a more human-readable output you can always do the following: if encrypted_data.ok: print("Data encrypted successfully.") encrypted_str: str = bytes(encrypted_data.data).decode(encoding="utf-8") with open(file=out_file, mode="w") as encrypted_file: encrypted_file.write(encrypted_str) print(encrypted_str) I've just casted the encrypted_data.data to bytes() and decoded it in UTF-8.
1
2
79,495,573
2025-3-9
https://stackoverflow.com/questions/79495573/how-to-change-the-bluetooth-name-of-the-raspberry-pico-2w
Currently, I want to change the Name of the Pico 2W device displayed on the nRF Connect app. I am using Micro Pico in VsCode. I tried many things but the Name is always "N/A". import bluetooth from time import sleep bluetooth.BLE().active(True) ble = bluetooth.BLE() device_name = 'PicoW_Device' advertising_data = b'\x02\x01\x06\x03\x03\x0F\x18\x09' + bytes(device_name, 'utf-8') ble.gap_advertise(100, adv_data=advertising_data) print(f"The Name of the device is currently: {device_name}") while True: sleep(1) I tried to change the name, but it always displayed "N/A"
The problem is how you constructed the advertising_data. Try this device_name = 'PicoW_Device' advertising_data = bytearray() advertising_data += b'\x02\x01\x06' # Flags advertising_data += bytearray([1 + len(device_name), 0x09]) # Name length and Complete Local Name AD type advertising_data += device_name.encode('utf-8') # Ensures the name is encoded into bytes ble.gap_advertise(100, adv_data=bytes(advertising_data)) # Converts the bytearray to bytes
1
1
79,495,506
2025-3-9
https://stackoverflow.com/questions/79495506/how-can-users-modify-cart-prices-using-burp-suite-and-why-is-this-a-security-ri
I recently discovered a serious security issue in Django e-commerce websites where users can modify product prices before adding items to the cart. Many developers allow users to send price data from the frontend, which can be easily tampered with using Burp Suite or browser developer tools. Example of the Issue: Consider a simple Django view that adds items to the cart: def add_item(request): product_id = request.GET.get('product_id') price = request.GET.get('price') # User-controlled value (security risk) qty = int(request.GET.get('qty', 1)) cart_item = { 'product_id': product_id, 'qty': qty, 'price': price, # This price comes from the user, not the database! } request.session['cart'] = request.session.get('cart', {}) request.session['cart'][product_id] = cart_item request.session.modified = True return JsonResponse({'message': 'Added to cart'}) How an Attacker Can Exploit This: A product costs $500 in the database. The user clicks "Add to Cart". Instead of sending the original price, the attacker intercepts the request using Burp Suite. The price field is changed to $1, and the request is forwarded. The cart now stores the manipulated price, and the user can proceed to checkout with the wrong amount. Why Is This a Security Risk? The backend trusts data from the frontend, which can be easily manipulated. The session stores the wrong price, leading to financial loss. Attackers can buy expensive products at extremely low prices by modifying request data. Discussion Points for the Community: What are the best practices to prevent this? Should e-commerce sites always fetch prices from the database instead of accepting them from the frontend? What other vulnerabilities should developers be aware of when handling cart data in Django? Would love to hear your thoughts on this!
What are the best practices to prevent this? You don't need the price, the view should add the product_id to the cart, and perhaps a quantity, but adding something to the cart has no price involved. It even makes it more complicated to later apply discounts, since the price is determined per product. Should e-commerce sites always fetch prices from the database instead of accepting them from the frontend? Not per se, there are some APIs that determine prices on-the-fly. Bots that thus drive up the price if there is more demand, or if a competitor drops their prices. What other vulnerabilities should developers be aware of when handling cart data in Django? This has nothing to do with a cart, you always ask the user the absolute minimum you need to know, and by design thus limit the parameters, since this limits what you can tamper with. I've seen some questions on StackOverflow with a similar approach. Sometimes they define a class Cart that then operates on the session data. But this was indeed bad design, and often not only in terms of this security vulnerability, but performance, referential integrity, etc.: you add an item to a cart with a GET request, which makes no sense; and you can even add a non-existing product as well. The GET request thus means that if a person making the request hits refresh, it is made a second time. But more importantly, GET requests should be cacheable, which clearly is not the case, and it also puts the product_id (and price) in the URL, which means it is at least visible in the path as well, which is not good practice either. I always got the impression some person gave a "Django eCommerce tutorial" on Youtube, and people copied some code. But apparently some indeed moved this to production.
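As a concrete illustration of only asking for the product id and quantity, here is a minimal, hedged sketch of such a view; the Product model, its fields, and the form parameters are assumptions for the example, not code from the question:

from django.http import JsonResponse
from django.shortcuts import get_object_or_404
from django.views.decorators.http import require_POST

from .models import Product  # assumed model with a price field


@require_POST
def add_item(request):
    product = get_object_or_404(Product, pk=request.POST.get('product_id'))
    qty = int(request.POST.get('qty', 1))
    cart = request.session.get('cart', {})
    # only the id and quantity are stored; the price is read from the
    # database at checkout, so the client cannot tamper with it
    cart[str(product.pk)] = {'qty': qty}
    request.session['cart'] = cart
    request.session.modified = True
    return JsonResponse({'message': 'Added to cart'})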
2
2
79,495,020
2025-3-8
https://stackoverflow.com/questions/79495020/how-do-i-get-past-this-error-in-installing-pyautogui-as-a-third-party-module-in
I am on the final chapter of Automate the Boring Stuff with Python- Chapter 20 begins by installing and importing pyautogui, and I have been unable to accomplish this. I HAVE been able to install the module on my Mac I have NOT been able to add this as a Third Party Module in Mu Editor. Here is the error I get when I try to add this as a module in the Mu Editor: Collecting pyautogui Using cached PyAutoGUI-0.9.54.tar.gz (61 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting pygetwindow>=0.0.5 Using cached PyGetWindow-0.0.9.tar.gz (9.7 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting mouseinfo Using cached MouseInfo-0.1.3.tar.gz (10 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting pymsgbox Using cached PyMsgBox-1.0.9.tar.gz (18 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting pyobjc-framework-quartz Using cached pyobjc_framework_Quartz-10.3.2-cp38-cp38-macosx_11_0_universal2.whl (211 kB) Collecting pyobjc-core Using cached pyobjc_core-11.0.tar.gz (994 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [2 lines of output] <string>:18: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html PyObjC: Need at least Python 3.9 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. FINISHED I have been able to successfully install pyautogui on my Mac using: pip3 install pyautogui I am using python3.9 I am on macOS Sequia 15.3.1 But when I attempt to add this as a Third Party Package to Mu Editor- it exits with the error above. I have scoured the internet in search for answers and have tried: restarting uninstalling and reinstalling both pyautogui and pyobjc installing pyautogui as admin Following the url to https://setuptools.pypa.io/en/latest/pkg_resources.htmlm(but I did not understand this page to be honest). I am expecting the install to complete without error so that I can import it in REPL and practice with pyautogui. I am particularly confused about why it says PyObjC: Need at least Python 3.9, even though I am running Python 3.9.6. Please help!
Reasons According to the docs Mu Editor supports only versions from 3.5 to 3.8. That issue has existed since 2021 and hasn't been resolved since. You might have a newer version installed, but the Mu Editor has its own environment with its own Python interpreter. I've just successfully installed pyautogui on Python 3.7, so the interpreter Mu is using is probably 3.6 at best. Solutions Try older versions of pyautogui Enter the name of the package like this: pyautogui==version of the package. List of versions: https://pypi.org/project/PyAutoGUI/#history Switch to a different editor I recommend PyCharm or Visual Studio Code.
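One quick way to confirm which interpreter Mu is actually running is a version check from its REPL; a tiny sketch:

import sys
print(sys.version)      # version of the interpreter bundled with Mu
print(sys.executable)   # path to that interpreter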
1
1
79,492,385
2025-3-7
https://stackoverflow.com/questions/79492385/asyncio-pass-context-or-contextvar-to-add-done-callback
I am learning asyncio callbacks. My task is- I have a message dict, message codes are keys, message texts are values. In coro main I have to create a number of asynchronous tasks (in my case 3 tasks), each task wraps a coro which prints one message. Also I have to add a callback to each task. Callback must print a code associated with a message printed by the coroutine wrapped by the task. The question is- how to pass code to callback? The staright solution is to add name to each task with the value of a code, but I dont want to go this way. I decided to use ContextVar for this purpose. So I create a global context variable, set() the value to the variable equal to code. Then I try to get() the context variable value from a callback but receive an Exception LookupError: <ContextVar name='msg_code' at 0x000001C596D94F40>. That's my code: import asyncio from contextvars import ContextVar msg_dict = { 'code1': 'msg1 by code1', 'code2': 'msg2 by code2', 'code3': 'msg3 by code3' } msg_code = ContextVar('msg_code') async def print_msg(code): await asyncio.sleep(0.5) msg_code.set(code) print(f'Message: {msg_dict[code]}') def callback_code(*args): code = msg_code.get() print(f'Code: {code}') async def main(): tasks = [asyncio.create_task(print_msg(code)) for code in msg_dict.keys()] [task.add_done_callback(callback_code) for task in tasks] await asyncio.gather(*tasks) asyncio.run(main()) I found that add_done_callback() also has keyword argument context= but I can't find any examples of how to pass task's context to a callback.
No tricks are needed, just specify the context. async def main(): tasks = [] for code in msg_dict: ctx = copy_context() # note: import copy_context from contextvars task = asyncio.create_task(print_msg(code), context=ctx) task.add_done_callback(callback_code, context=ctx) tasks.append(task) await asyncio.gather(*tasks)
1
0
79,494,971
2025-3-8
https://stackoverflow.com/questions/79494971/issue-with-transparency-mask-in-moviepy-v2-works-in-v1
I'm facing an issue while migrating from MoviePy v1 to MoviePy v2. In v1, I could apply a transparency mask to an ImageClip, making certain areas of the clip transparent. However, in MoviePy v2, the same approach doesn't seem to work. Expected Behavior The ImageClip should become transparent in areas defined by the mask. Observed Behavior In MoviePy v2, the mask is just overlayed on top of image. Working Code (MoviePy v1) This code works perfectly fine in MoviePy 1.0.3: from moviepy.editor import ImageClip, CompositeVideoClip image_clip = ImageClip("mask.png").set_duration(5) mask_clip = ImageClip("blueDark.png", ismask=True).set_duration(5) image_clip = image_clip.set_mask(mask_clip) video = CompositeVideoClip([image_clip], bg_color=(90, 90, 90)) video.preview() Non-Working Code (MoviePy v2) Here’s the MoviePy v2 equivalent, where I updated the function and parameter names according to the new version: import moviepy as mpy image_clip = mpy.ImageClip("mask.png").with_duration(5) mask_clip = mpy.ImageClip("blueDark.png", is_mask=True).with_duration(5) image_clip = image_clip.with_mask(mask_clip) video = mpy.CompositeVideoClip([image_clip], bg_color=(90, 90, 90)) video.preview() Additional Details Versions Installed: MoviePy v1: pip install moviepy==1.0.3 pygame MoviePy v2: pip install -U moviepy I have attached both mask.png and blueDark.png in case someone wants to reproduce the issue. blueDark.png mask.png
It turns out it works when I use an RGBA image instead of a greyscale image for the mask. Even though the MoviePy docs explicitly state that a mask should always be a greyscale image, it seems to work with RGBA images only.
2
2
79,492,814
2025-3-7
https://stackoverflow.com/questions/79492814/failed-to-parse-the-total-results-from-a-webpage-of-which-my-existing-script-ca
I've created a script that issues a POST HTTP request with the appropriate parameters to fetch the town, continent, country, and inner_link from this webpage. The script can parse 69 containers, but there are 162 items in total. How can I fetch the rest? import requests link = 'https://wenomad.so/elasticsearch/search' inner_link = 'https://wenomad.so/city/{}' payload = { "z":"nZ9g0AdFBj7cLRX5v2wSWjjGf2Q5KPpss9DS4wZGh9pvfC4xcJvnTebBg+npAqWaQvdVUFxVD1NZ88siTRUfPo8gB70CGoJG/2MPv9Gu9kC+48KwvV4COpsB3HmER0Mgx0bz2G9pSpw6veTnEUnNR78xonQmhuvL3eztB+ikZaI3OTeuVfRVNetmdX4iDgOkKrM6kLt/2SuRKKwT2aAZHJbdhlTV1I65zj1jD7VBwrm+lJDNh7pZug0/gKCWUDQz4CgmrAdQdnxyJDde2ewzudcsGDimhnWB56bcejoli4LLvevtMB4RUMhmM6FIYn0Tl4sclUD7YLQ8gZQOMmBndDkGctxeq74bpDAwBMOG74qu9gb4WLUFxgB/lWCQ9OnJsfkT0J/kUShhQPoRVr72qUx8f8ldkliIGINoBy9i+lm1RYM3L/NfOJ0kBZ+fbKndVJk2owAZ1kLMupja4iPmpxszQlFGTstpAlF5pTckhL+QYIc6vYbslWqXVs8XrzKs955DHPe1WpWmI714MsJfHhd3XHDsuMy9lfY6mE+cfc0434amFJC5gCgoEhGIQsFQD/kGRaWvqCcMfPYiW/o++nQ017bAKzlg7qb0EfPpy/EMG+u4i7QEU/vvC9mUnVCN0ZzFpxP8HWiTTCF0djuB+UnfUaHKtXciPwwZUTV4o8PtI6v6QdrC4PvtAKSJ9CpIccW+A3SSvOgCgEwOtniCdLxezWaP1Dq3fv9G56HCOvsOGRlQ0RgzNgq/+pCwkvyqFYcs/VtX9NPuaCAAXLi+SFM0xRuI4Sq6nHQr7qs6R2C4gAVHm9bZHfByKZ5x03KJp74IGlGSd1GL9/z9CySVZw==", "y":"oht3SrBVqLvR2lXJSwtwWw==", "x":"dmpOxF/FB13c+GGFmDW4Y4SPz6jEItrcjegm/WNbqFk=" } headers = { 'accept': 'application/json, text/javascript, */*; q=0.01', 'accept-language': 'en-US,en;q=0.9', 'origin': 'https://wenomad.so', 'referer': 'https://wenomad.so/', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36', 'x-requested-with': 'XMLHttpRequest' } res = requests.post(link,json=payload,headers=headers) print(res.status_code) for item in res.json()['hits']['hits']: print(( item['_source']['town_text'], item['_source']['continent__text__text'], item['_source']['country__text__text'], inner_link.format(item['_source']['Slug']) ))
You need to replicate the requests to the /elasticsearch/search endpoint which requires three params x, y and z. These params are generated through a cryptographic encryption in the encode3 function of run.js First install PyCryptodome: pip install pycryptodome Then you can use this script to get all (162) results: from Crypto.Cipher import AES from Crypto.Protocol.KDF import PBKDF2 from Crypto.Hash import MD5 from Crypto.Util.Padding import pad import base64 import json import random import time import requests def encode(key, iv, text, appname): derived_key = PBKDF2(key, appname.encode(), dkLen=32, count=7, hmac_hash_module=MD5) derived_iv = PBKDF2(iv, appname.encode(), dkLen=16, count=7, hmac_hash_module=MD5) cipher = AES.new(derived_key, AES.MODE_CBC, iv=derived_iv) text_bytes = pad(text.encode(), AES.block_size) encrypted_text = cipher.encrypt(text_bytes) encrypted_base64 = base64.b64encode(encrypted_text).decode() return encrypted_base64 def generate_payload(data): v = "1" appname = 'fie' cur_timestamp = str(int(time.time() * 1000)) timestamp_version = f'{cur_timestamp}_{v}' key = appname + cur_timestamp iv = str(random.random()) text = json.dumps(data, separators=(',', ':')) encoded = { 'z': encode(key, iv, text, appname), 'y': encode(appname, "po9", timestamp_version, appname), 'x': encode(appname, "fl1", iv, appname) } return encoded def fetch_all_search_results(data): headers = { 'x-requested-with': 'XMLHttpRequest', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36' } results = [] while True: payload = generate_payload(data) response = requests.post('https://wenomad.so/elasticsearch/search', headers=headers, json=payload) res_json = response.json() hits = res_json.get('hits', {}).get('hits', []) results.extend(hits) data['from'] += len(hits) if res_json.get('at_end'): break return results data = { "appname": "fie", "app_version": "live", "type": "custom.town", "constraints": [ { "key": "active_boolean", "value": True, "constraint_type": "equals" } ], "sorts_list": [ { "sort_field": "overall_rating_number", "descending": True }, { "sort_field": "overall_rating_number", "descending": True } ], "from": 0, "n": 9999, "search_path": "{\"constructor_name\":\"DataSource\",\"args\":[{\"type\":\"json\",\"value\":\"%p3.AAV.%el.cmQus.%el.cmSJO0.%p.%ds\"},{\"type\":\"node\",\"value\":{\"constructor_name\":\"Element\",\"args\":[{\"type\":\"json\",\"value\":\"%p3.AAV.%el.cmQus.%el.cmSJO0\"}]}},{\"type\":\"raw\",\"value\":\"Search\"}]}", "situation": "unknown" } results = fetch_all_search_results(data) print(f'{len(results) = }')
1
4
79,494,272
2025-3-8
https://stackoverflow.com/questions/79494272/pass-value-from-one-django-template-to-other
I want to build a Django template hierarchy like so: root.html |_ root-dashboard.html |_ root-regular.html root.html shall have an if statement: {% if style == "dashboard" %} {# render some elements in a certain way #} {% else %} {# render those elements in a different way #} {% endif %} And root-dashboard.html and root-regular.html should individually extend root.html by setting style: # root-dashboard.html {% extend 'root.html' with style='dashboard'%} # root-regular.html {% extend 'root.html' with style='regular'%} (with above is not an actual valid syntax, its just something similar I want) And a view can use either root-dashboard.html or root-regular.html to show the content in one style or the other. How do I achieve this without the view having to set the style context?
Define a {% block … %} template tag [Django-doc] instead. In root.html, you don't use an if, but:
{% block render_item %}
    {# render those elements in a different way #}
{% endblock %}

then in your root-dashboard.html, you use:
# root-dashboard.html
{% extends 'root.html' %}

{% block render_item %}
    {# render some elements in a certain way #}
{% endblock %}

The idea is similar to the dynamic binding concept [wiki] in object-oriented programming, and is usually better than using if conditions: the latter is not extensible, and thus limits later modifications.
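For completeness, a sketch of the other branch and a view using it — the template and view names mirror the question and are illustrative, not part of the accepted answer. root-regular.html can simply inherit the default block content from root.html:
# root-regular.html
{% extends 'root.html' %}

# views.py (hypothetical view names)
from django.shortcuts import render

def dashboard_view(request):
    return render(request, "root-dashboard.html")

def regular_view(request):
    return render(request, "root-regular.html")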
1
1
79,492,317
2025-3-7
https://stackoverflow.com/questions/79492317/fill-gaps-in-time-series-data-in-a-polars-lazy-dataframe
I am in a situation where I have some time series data, potentially looking like this:
{
    "t": [1, 2, 5, 6, 7],
    "y": [1, 1, 1, 1, 1],
}

As you can see, the time stamp jumps from 2 to 5. For my analysis, I would like to fill in zeros for the time stamps 3 and 4. In reality, I might have multiple gaps with varying lengths. I'd like to fill this gap for all other columns. I'd also really like to keep my data in a LazyFrame since this is only one step in my pipeline. I don't think that .interpolate is really addressing my issue, nor is fill_null helpful here.
I managed to achieve what I want, but it looks too complex:
# Dummy, lazy data.
lf = pl.LazyFrame(
    {
        "t": [1, 2, 5, 6, 7],
        "y": [1, 1, 1, 1, 1],
    }
)

lf_filled = lf.join(
    pl.Series(
        name="t",
        values=pl.int_range(
            start=lf.select("t").first().collect().item(0, 0),
            end=lf.select("t").last().collect().item(0, 0) + 1,
            eager=True,
        ),
    )
    .to_frame()
    .lazy(),
    on="t",
    how="right",
).fill_null(0)

The output is correct and I am never collecting any more data than the two values needed for start and end. This looks like there should be a better way to do this. Happy to hear other suggestions :)
I think your approach is sensible, there's just no need for an intermediate collect:
lf.join(
    lf.select(pl.int_range(pl.col.t.first(), pl.col.t.last() + 1)),
    on="t",
    how="right",
).fill_null(0)

An alternate approach that might be a bit more efficient is to use an asof-join with no tolerance:
(
    lf.select(pl.int_range(pl.col.t.first(), pl.col.t.last() + 1))
    .join_asof(lf, on="t", tolerance=0)
    .fill_null(0)
)
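For reference, a minimal end-to-end check of the first suggestion on the question's sample data; the .alias("t") is added defensively in case the generated column is not named "t" on a given Polars version, and is not part of the accepted answer:
import polars as pl

lf = pl.LazyFrame({"t": [1, 2, 5, 6, 7], "y": [1, 1, 1, 1, 1]})

filled = lf.join(
    lf.select(pl.int_range(pl.col.t.first(), pl.col.t.last() + 1).alias("t")),
    on="t",
    how="right",
).fill_null(0)

# Expect t = 1..7 with y = 0 at the previously missing rows t=3 and t=4
print(filled.collect())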
2
3
79,494,025
2025-3-8
https://stackoverflow.com/questions/79494025/after-scraping-data-from-website-and-converting-csv-excel-dont-show-rows-excep
url ="https://www.dsebd.org/top_20_share.php" r =requests.get(url) soup = BeautifulSoup(r.text,"lxml") table = soup.find("table",class_="table table-bordered background-white shares-table") top = table.find_all("th") header = [] for x in top: ele = x.text header.append(ele) df = pd.DataFrame(columns= header) print(df) row1 = table.find_all("tr") row2 =[] for r1 in row1[1:]: ftd= r1.find_all("td")[1].find("a", class_="ab1").text.strip() data=r1.find_all("td")[1:] r2 = [ele.text for ele in data] r2.insert(0,ftd) l= len(df) df.loc[l]= r2 print(df) df.to_csv("top_20_share_value_22.csv") How to see / make data visible after converting to csv and view via excel? I have gone through above mentioned code.
The script itself works and the CSV can easily be imported into Excel. Alternatively, export the data directly with .to_excel('your_excel_file.xlsx') and open it in Excel.
Since you are already operating with pandas, just use pandas.read_html, which uses a parser such as BeautifulSoup in the background to scrape the tables.
import pandas as pd

df_list = pd.read_html('https://www.dsebd.org/top_20_share.php', match='TRADING CODE')

# first table only
df_list[0].to_csv("first_top_20_share_value_22.csv")

# all three tables concatenated
pd.concat(df_list, ignore_index=True).to_csv("all_top_20_share_value_22.csv")
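A small sketch of the .to_excel route mentioned above, assuming an Excel engine such as openpyxl is installed; the file and sheet names are arbitrary:
import pandas as pd

df_list = pd.read_html('https://www.dsebd.org/top_20_share.php', match='TRADING CODE')

# write each scraped table to its own sheet of one workbook
with pd.ExcelWriter("top_20_share_value_22.xlsx") as writer:
    for i, df in enumerate(df_list):
        df.to_excel(writer, sheet_name=f"table_{i + 1}", index=False)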
1
2
79,491,259
2025-3-7
https://stackoverflow.com/questions/79491259/why-is-a-line-read-from-a-file-not-to-its-hardcoded-string-despite-being-prin
I'm reading lines from a file and trying to match them with regex, but it's failing despite the regex matcher looking right. When comparing the line to what it should be as a string declaration, Python is saying that they are not equal. It looks like this is being caused by a non-UTF-8 encoding on my file, but I'm not sure how to fix this as I'm not sure exactly which encoding is being used.
This is a simplified version of the code I'm using to debug:
fp = open('tree.txt', 'r')
lines = [line.strip() for line in fp.readlines()]
fp.close()

for line in lines:
    print(f'|{line}| vs |{line}|')
    print(line == "[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT")
    print(line.encode('utf-8'))
    print("[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT".encode('utf-8'))

My output once scanning the line in the file I'm expecting looks like this:
|[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT| vs |[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT|
False
b'\x00[\x00I\x00N\x00F\x00O\x00]\x00 \x00i\x00o\x00.\x00j\x00i\x00t\x00p\x00a\x00c\x00k\x00:\x00m\x00o\x00d\x00u\x00l\x00e\x002\x00:\x00j\x00a\x00r\x00:\x002\x00.\x000\x00-\x00S\x00N\x00A\x00P\x00S\x00H\x00O\x00T\x00'
b'[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT'

I'm generating tree.txt by doing mvn dependency:tree > tree.txt on Windows 11 from the VSCode terminal, if that's any clue to what kind of encoding is being used.
Is there a way to convert line into a string matching this UTF-8 encoding, b'[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT', agnostic of its current encoding? I did try opening the file with fp = open('tree.txt', 'r', encoding='utf-8') but that did not work.
The pattern of nulls in the output says that this file is encoded in big-endian UTF-16. Open it with encoding='utf-16be'. You might also want to figure out why Maven is producing output in UTF-16.
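A minimal sketch of the fix, assuming the file really is big-endian UTF-16 as diagnosed (if the file starts with a byte-order mark, plain encoding='utf-16' would also work):
# Decode the Maven output as big-endian UTF-16 instead of the default locale encoding
with open('tree.txt', 'r', encoding='utf-16be') as fp:
    lines = [line.strip() for line in fp]

# The comparison from the question should now succeed for the matching line
print(any(line == "[INFO] io.jitpack:module2:jar:2.0-SNAPSHOT" for line in lines))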
2
6
79,485,215
2025-3-5
https://stackoverflow.com/questions/79485215/sir-parameter-estimation-with-gradient-descent-and-autograd
I am trying to apply a very simple parameter estimation of a SIR model using a gradient descent algorithm. I am using the package autograd since the audience (this is for a sort of workshop for undergraduate students) only knows numpy and I don't want to jump to JAX or any other ML framework (yet).
import autograd
import autograd.numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp, odeint
from autograd.builtins import tuple
from autograd import grad, jacobian


def sir(y, t, beta, gamma):
    S, I, R = y
    dS_dt = - beta * S * I
    dI_dt = beta * S * I - gamma * I
    dR_dt = gamma * I
    return np.array([dS_dt, dI_dt, dR_dt])


def loss(params, Y0, t, y_obs):
    beta, gamma = params
    # Solve the ODE system using odeint
    sol = odeint(sir, y0=Y0, t=t, args=(beta, gamma))
    # Compute the L2 norm error between the observed and predicted values
    err = np.linalg.norm(y_obs - sol, 2)
    return err


# Generate data
np.random.seed(42)
Y0 = np.array([0.95, 0.05, 0.0])
t = np.linspace(0, 30, 101)
beta, gamma = 0.5, 1/14
sol = odeint(sir, y0=Y0, t=t, args=tuple([beta, gamma]))
y_obs = sol + np.random.normal(0, 0.05, size=sol.shape)
plt.plot(t, y_obs)

Then, what I would like to do is something like this:
# --- THIS DOES NOT WORK ---
params = np.array([beta_init, gamma_init])

# Get the gradient of the loss function with respect to the parameters (beta, gamma)
loss_grad = grad(loss, argnum=0)  # params is the first argument of loss

# Perform gradient descent
for i in range(n_iterations):
    grads = loss_grad(params, Y0, t, y_obs)  # Compute gradients
    params -= learning_rate * grads          # Update parameters

A minimal example would be:
loss_grad = grad(loss, argnum=0)
params = np.array([beta, gamma])
grads = loss_grad(params, Y0, t, y_obs)

However, I get the following error:
ValueError: setting an array element with a sequence.

Is there any way I can calculate the derivatives of the loss function with respect to my parameters (beta and gamma)? To be honest, I am still getting used to auto-differentiation.
This is a modified version of your code that seems to work:
import autograd
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd.scipy.integrate import odeint
from autograd.builtins import tuple
from autograd import grad, jacobian


def sir(y, t, beta, gamma):
    S, I, R = y
    dS_dt = - beta * S * I
    dI_dt = beta * S * I - gamma * I
    dR_dt = gamma * I
    return np.array([dS_dt, dI_dt, dR_dt])


def loss(params, Y0, t, y_obs):
    params_tuple = tuple(params)
    # Solve the ODE system using odeint
    sol = odeint(sir, Y0, t, params_tuple)
    # Compute the L2 norm error between the observed and predicted values
    err = np.linalg.norm(y_obs - sol)
    return err


# Generate data
np.random.seed(42)
Y0 = np.array([0.95, 0.05, 0.0])
t = np.linspace(0, 30, 101)
beta, gamma = 0.5, 1/14
sol = odeint(sir, y0=Y0, t=t, args=(beta, gamma))
y_obs = sol + np.random.normal(0, 0.05, size=sol.shape)
plt.plot(t, y_obs)

Then, when running
loss_grad = grad(loss, argnum=0)
params = np.array([beta, gamma])
grads = loss_grad(params, Y0, t, y_obs)
print(grads)

I get the output [-0.84506353 -7.09399783]. The important differences in this code are:
- Importing autograd's wrapped version of odeint, as
  from autograd.scipy.integrate import odeint
  The wrapping defines the gradient of odeint and declares to autograd that it should be treated as a differentiation primitive, rather than have its execution traced.
- Converting the parameter vector to a tuple with params_tuple = tuple(params), to be passed on to odeint.
- Calling odeint with all simple positional, instead of keyword, arguments, as
  sol = odeint(sir, Y0, t, params_tuple)
  This is done because autograd currently has an issue where it will incorrectly run differentiation primitives with keyword arguments using the wrapped type of object it uses for tracing execution, as reported here. This is a problem if the function you are using is incompatible with the wrapped object, as in this case, raising an error.

Hope this helps!
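To complete the asker's original goal, a minimal gradient-descent loop on top of this — the learning rate, iteration count, and initial guesses are arbitrary illustrative choices, not from the answer, and the step size may need tuning:
learning_rate = 1e-3
n_iterations = 500

params = np.array([0.3, 0.1])  # initial guesses for beta and gamma
for i in range(n_iterations):
    grads = loss_grad(params, Y0, t, y_obs)  # gradient of the loss at the current parameters
    params = params - learning_rate * grads  # plain gradient-descent update

# With a suitable step size this should move toward the true values (0.5, 1/14)
print(params)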
2
1
79,492,863
2025-3-7
https://stackoverflow.com/questions/79492863/barplot-coloring-using-seaborn-color-palette
I have the following code fragment:
import seaborn
import matplotlib.pyplot as plt

plt.bar(df['Col1'], df['Col2'], width=0.97,
        color=seaborn.color_palette("coolwarm", df['Col1'].shape[0], 0.999))

Now, my bars are colored in the blue-red spectrum (as given by the parameter coolwarm). How can I change the distribution between these two colors and their order, for example, to get 80% of all bars red and only the rest (i.e., 20%) in blue? (Now it is a 50-50% ratio.)
There's no built in way to do this using seaborn or the matplotlib colormaps, but here's a solution that seems to do the trick:
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np


## function that generates a color palette
def gen_palette(n_left, n_right, cmap_name="coolwarm", desat=0.999):
    """
    n_left: number of colors from "left half" of the colormap
    n_right: number of colors from the "right half"
    cmap_name: name of the color map to use (ideally one of the diverging palettes)
    return: palette, list of RGB-triples
    """
    palette_1 = sns.color_palette(palette=cmap_name, n_colors=2 * n_left, desat=desat)[:n_left]
    palette_2 = sns.color_palette(palette=cmap_name, n_colors=2 * n_right, desat=desat)[n_right:]
    return palette_1 + palette_2


## generate example data
N = 20
rng = np.random.default_rng(seed=42)
y_vals = 10 * rng.random(N)
df = pd.DataFrame(
    {"Col1": np.arange(N), "Col2": y_vals}
)

## build the color palette with the desired blue-red split
n_red = round(0.8 * N)
palette = gen_palette(N - n_red, n_red)

## plot
plt.bar(df['Col1'], df['Col2'], width=0.97, color=palette)
1
3
79,492,887
2025-3-7
https://stackoverflow.com/questions/79492887/request-method-post-equaling-true-in-next-function-causing-it-to-run-prematur
So I am running this code:
def login_or_join(request):
    if request.method == "POST":
        option = request.POST.get("option")
        print('post request recieved')
        if option == "1":
            return login_screen(request)
        if option == '2':
            return in_game(request)
    return render(request, "login_or_join.html")

and def login_screen() looks like this:
def login_screen(request):
    if request.method == "POST":
        username = request.POST.get("username")
        password = request.POST.get("password")
        print(username)
        print(password)
        user = authenticate(request, username=username, password=password)
        print(user)
        if user is not None:
            return redirect('join_lobby')
        else:
            return render(request, 'login_page.html')
    return render(request, 'login_page.html')

Whenever I click "option 1" it runs login_screen, but in a way I don't want it to. It seems to just follow that request.method == "POST" and prints username and password immediately, meaning it is setting username and password immediately, making any login attempt wrong. But I don't want it to set those (or print them) until I've pressed the button on the next page. Furthermore, when I hit "enter" or "log in" it doesn't reshow the page with the error message, it just goes back to login_or_join(). I feel like I am taking crazy pills as I've been working on this website for a while and this is the first time I'm having this kind of issue. I've tried messing with it but feel I've been staring at it too long. Any help would be appreciated!
I guess the problem is that you are calling login_screen with request as a parameter. But that variable is the parameter received by login_or_join, so it is the POST request with the option — neither a GET nor a POST with the username and password. I think in this case you can just name the URL and do:
if option == "1":
    return redirect(login_screen_name)
if option == "2":
    return redirect(in_game_screen_name)
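For completeness, a sketch of what "naming the URL" looks like; the URL names login_screen and in_game are illustrative, not from the original code:
# urls.py (hypothetical names)
from django.urls import path
from . import views

urlpatterns = [
    path("login-or-join/", views.login_or_join, name="login_or_join"),
    path("login/", views.login_screen, name="login_screen"),
    path("in-game/", views.in_game, name="in_game"),
]

# in login_or_join, redirect by URL name instead of calling the view directly,
# so login_screen receives its own fresh GET request
if option == "1":
    return redirect("login_screen")
if option == "2":
    return redirect("in_game")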
2
2
79,492,367
2025-3-7
https://stackoverflow.com/questions/79492367/can-airflow-task-dependencies-be-re-used
I have a series of airflow DAGs which re-use some of the task dependencies. For example:
DAG 1: T1 >> T2
DAG 2: T1 >> T2 >> T3
DAG 3: T1 >> T2 >> T3 >> [T4, T5, T6] >> T7

I would like to store the dependencies from DAG 1 (which in this model are being used by every other DAG) and re-use them when declaring the dependencies for the other DAGs, like so:
def dag_1_dependencies():
    T1 >> T2

DAG 2: dag_1_dependencies() >> T3
DAG 3: dag_1_dependencies() >> T3 >> [T4, T5, T6] >> T7

The problem is that dependencies themselves aren't a value, so I can't return them with a method. Calling dag_1_dependencies() does nothing. Is there a way to circumvent this?
If tasks t1 and t2 are always the same tasks, you can generate these tasks + dependencies outside the DAG. To add additional tasks/dependencies following t2, you need a reference to that object to configure the additional dependencies. For example:
def generate_t1_t2() -> BaseOperator:
    """Generates tasks + dependencies and returns the last task so that additional dependencies can be set."""
    t1 = EmptyOperator(task_id="t1")
    t2 = EmptyOperator(task_id="t2")
    t1 >> t2
    return t2


with DAG(dag_id="dag2", start_date=datetime(2025, 1, 1), schedule=None):
    last_task = generate_t1_t2()
    t3 = EmptyOperator(task_id="t3")
    last_task >> t3

The function generate_t1_t2 returns the last operator in the chain (t2), which allows configuring additional dependencies such as last_task >> t3. Your full question can therefore be written as:
from datetime import datetime

from airflow import DAG
from airflow.models import BaseOperator
from airflow.operators.empty import EmptyOperator


def generate_t1_t2() -> BaseOperator:
    """Generates tasks + dependencies and returns the last task so that additional dependencies can be set."""
    t1 = EmptyOperator(task_id="t1")
    t2 = EmptyOperator(task_id="t2")
    t1 >> t2
    return t2


with DAG(dag_id="dag1", start_date=datetime(2025, 1, 1), schedule=None):
    generate_t1_t2()

with DAG(dag_id="dag2", start_date=datetime(2025, 1, 1), schedule=None):
    last_task = generate_t1_t2()
    last_task >> EmptyOperator(task_id="t3")

with DAG(dag_id="dag3", start_date=datetime(2025, 1, 1), schedule=None):
    last_task = generate_t1_t2()
    (
        last_task
        >> EmptyOperator(task_id="t3")
        >> [EmptyOperator(task_id="t4"), EmptyOperator(task_id="t5"), EmptyOperator(task_id="t6")]
        >> EmptyOperator(task_id="t7")
    )
1
1
79,491,894
2025-3-7
https://stackoverflow.com/questions/79491894/vespa-indexing-anomaly-on-exact-indexed-field-with-diacritical-variants-and-no
I’m using the Vespa Python client (pyvespa 0.54.0) to query a Vespa index, and I’m running into an issue where Vespa doesn't find a document it has just returned in a previous query.
I have this field in my toponym schema, indexed with match { exact }:
field name_strict type string {
    indexing: attribute | summary
    match { exact }
}

It has to handle place names in a wide variety of languages and scripts, and so before feeding the values are sanitised like this:
# Normalise Unicode (NFC to prevent decomposition issues)
toponym = unicodedata.normalize("NFC", toponym)

# Remove problematic invisible Unicode characters
toponym = toponym.translate(dict.fromkeys([0x200B, 0x2060]))

# Ensure UTF-8 encoding (ignore errors)
toponym = toponym.encode("utf-8", "ignore").decode("utf-8")

A visit after feeding a large and varied dataset confirms that values such as "ශ්‍රී ලංකාව", "បង់ក្លាដែស្ស", and "İslandiya" have been properly indexed, and most documents can successfully be retrieved like this:
place_name = "ශ්‍රී ලංකාව"
name_strict = QueryField("name_strict")
q = (
    qb.select(["*"])
    .from_("toponym")
    .where(name_strict.contains(place_name))
)
response = vespa_app.query(yql=q)

However, the process fails repeatedly (over multiple re-indexing attempts) with certain place names, such as "İslandiya", as illustrated by the following code which confirms the document's existence both before and after the query:
>>> from vespa.application import Vespa
>>> import vespa.querybuilder as qb
>>> from vespa.querybuilder import QueryField
>>> vespa_app = Vespa(url="http://vespa-feed.vespa.svc.cluster.local:8080")
>>>
>>> response = vespa_app.query(body={"yql": "select * from toponym where is_staging = true limit 1"})
>>> place_name = response.json['root']['children'][0]['fields']['name_strict']
>>> print(place_name)
İslandiya
>>>
>>> name_strict = QueryField("name_strict")
>>> q = (
...     qb.select(["*"])
...     .from_("toponym")
...     .where(name_strict.contains(place_name))
... )
>>> response = vespa_app.query(yql=q)
>>> print(response.json)
{'root': {'id': 'toplevel', 'relevance': 1.0, 'fields': {'totalCount': 0}, 'coverage': {'coverage': 100, 'documents': 22084, 'full': True, 'nodes': 1, 'results': 1, 'resultsFull': 1}}}
>>>
>>> response = vespa_app.query(body={"yql": "select * from toponym where is_staging = true limit 1"})
>>> place_name = response.json['root']['children'][0]['fields']['name_strict']
>>> print(place_name)
İslandiya
>>>

From a sample of 20k place names, these are the ones that fail: İslandiya, İzlanda, İrlanda, İrlandiya, İtaliya, İtalya, ᎶᎹᏂᏯ, İspaniya, İspanya, İsveç, İsveç, ᏓᎶᏂᎨᏍᏛ, İndoneziya, İsrail, İzrail, ᏣᏩᏂᏏ, Ꙗпѡнїꙗ, İvori Sahili.
For good measure, I've tried sanitising them (as in the feed process) before using them in the query, but this makes no difference. Vespa's linguistics module is not supposed to be invoked because the field does not have the index indexing mode, but perhaps that is interfering in some way?
Can anyone please tell me what I might be doing incorrectly? Or is there a workaround?
You're in luck, this is a case-folding problem that's been fixed recently. I could reproduce your problem on vespa 8.485.42 but it works as expected in 8.492.15.
2
6
79,492,024
2025-3-7
https://stackoverflow.com/questions/79492024/the-server-responded-with-a-status-of-500-internal-server-error-and-valueerror
I am trying to make a dashboard in Flask by connecting it with SQL Server and am getting these errors. I confirm there are no null values, and I checked by removing the column from the query as well, but it is still not working. The code is:
import pandas as pd
import pyodbc
from flask import Flask, render_template, jsonify

app = Flask(__name__)

# SQL Server Connection Details
conn_str = (
    "DRIVER={SQL Server};"
    "SERVER=xyz;"
    "DATABASE=xyz;"
    "UID=xyz;"
    "PWD=xyz;"
)

# Fetch Data from SQL Server
def fetch_data():
    try:
        conn = pyodbc.connect(conn_str)
        query = """
            SELECT TicketDate, Technician, Open_Tickets, Closed_Tickets,
                   Created_Today, Closed_Today, Created_Hourly
            FROM Technician_Ticket_Stats
        """
        df = pd.read_sql(query, conn)
        conn.close()

        # Debugging logs
        print("Fetched data successfully:")
        print(df.head())

        df['TicketDate'] = df['TicketDate'].astype(str)  # Convert date for JSON
        return df.to_dict(orient="records")
    except Exception as e:
        print("Error fetching data:", e)
        return []

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/get_data")
def get_data():
    try:
        data = fetch_data()
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    app.run(host='127.0.0.1', port=8050, debug=True)
Add this line so that if there are NaT values in the 'TicketDate' column, they are converted to None rather than throwing an error:
df['TicketDate'] = df['TicketDate'].fillna(pd.NaT).apply(
    lambda x: x.strftime('%Y-%m-%d') if pd.notna(x) else None
)
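If other columns can also contain NaN/NaT, a broader sketch (an assumption beyond the accepted answer) is to make the whole frame JSON-safe before calling to_dict:
# Replace every missing value (NaN/NaT) with None so jsonify can serialize it as null
df = df.astype(object).where(pd.notna(df), None)
return df.to_dict(orient="records")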
1
1