question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
71,516,584 | 2022-3-17 | https://stackoverflow.com/questions/71516584/padding-scipy-affine-transform-output-to-show-non-overlapping-regions-of-transfo | I have source (src) image(s) I wish to align to a destination (dst) image using an Affine Transformation whilst retaining the full extent of both images during alignment (even the non-overlapping areas). I am already able to calculate the Affine Transformation rotation and offset matrix, which I feed to scipy.ndimage.interpolate.affine_transform to recover the dst-aligned src image. The problem is that, when the images are not fully overlapping, the resultant image is cropped to only the common footprint of the two images. What I need is the full extent of both images, placed on the same pixel coordinate system. This question is almost a duplicate of this one - and the excellent answer and repository there provide this functionality for OpenCV transformations. I unfortunately need this for scipy's implementation. Much too late, after repeatedly hitting a brick wall trying to translate the above question's answer to scipy, I came across this issue and subsequently followed it to this question. The latter question did give some insight into the wonderful world of scipy's affine transformation, but I have as yet been unable to crack my particular needs. The transformations from src to dst can have translations and rotation. I can get translations only working (an example is shown below) and I can get rotations only working (largely hacking around the below and taking inspiration from the use of the reshape argument in scipy.ndimage.interpolation.rotate). However, I am getting thoroughly lost combining the two. I have tried to calculate what should be the correct offset (see this question's answers again), but I can't get it working in all scenarios.
Translation-only working example of padded affine transformation, which follows largely this repo, explained in this answer: from scipy.ndimage import rotate, affine_transform import numpy as np import matplotlib.pyplot as plt nblob = 50 shape = (200, 100) buffered_shape = (300, 200) # buffer for rotation and translation def affine_test(angle=0, translate=(0, 0)): np.random.seed(42) # Maxiumum translation allowed is half difference between shape and buffered_shape # Generate a buffered_shape-sized base image with random blobs base = np.zeros(buffered_shape, dtype=np.float32) random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False) i = random_locs[:nblob] j = random_locs[nblob:] for k, (_i, _j) in enumerate(zip(i, j)): # Use different values, just to make it easier to distinguish blobs base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10 # Impose a rotation and translation on source src = rotate(base, angle, reshape=False, order=1, mode="constant") bsc = (np.array(buffered_shape) / 2).astype(int) sc = (np.array(shape) / 2).astype(int) src = src[ bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0], bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1], ] # Cut-out destination from the centre of the base image dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]] src_y, src_x = src.shape def get_matrix_offset(centre, angle, scale): """Follows OpenCV.getRotationMatrix2D""" angle = angle * np.pi / 180 alpha = scale * np.cos(angle) beta = scale * np.sin(angle) return ( np.array([[alpha, beta], [-beta, alpha]]), np.array( [ (1 - alpha) * centre[0] - beta * centre[1], beta * centre[0] + (1 - alpha) * centre[1], ] ), ) # Obtain the rotation matrix and offset that describes the transformation # between src and dst matrix, offset = get_matrix_offset(np.array([src_y / 2, src_x / 2]), angle, 1) offset = offset - translate # Determine the outer bounds of the new image lin_pts = np.array([[0, src_x, src_x, 0], [0, 0, src_y, src_y]]) transf_lin_pts = np.dot(matrix.T, lin_pts) - offset[::-1].reshape(2, 1) # Find min and max bounds of the transformed image min_x = np.floor(np.min(transf_lin_pts[0])).astype(int) min_y = np.floor(np.min(transf_lin_pts[1])).astype(int) max_x = np.ceil(np.max(transf_lin_pts[0])).astype(int) max_y = np.ceil(np.max(transf_lin_pts[1])).astype(int) # Add translation to the transformation matrix to shift to positive values anchor_x, anchor_y = 0, 0 if min_x < 0: anchor_x = -min_x if min_y < 0: anchor_y = -min_y shifted_offset = offset - np.dot(matrix, [anchor_y, anchor_x]) # Create padded destination image dst_h, dst_w = dst.shape[:2] pad_widths = [anchor_y, max(max_y, dst_h) - dst_h, anchor_x, max(max_x, dst_w) - dst_w] dst_padded = np.pad( dst, ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])), "constant", constant_values=-1, ) dst_pad_h, dst_pad_w = dst_padded.shape # Create the aligned and padded source image source_aligned = affine_transform( src, matrix.T, offset=shifted_offset, output_shape=(dst_pad_h, dst_pad_w), order=3, mode="constant", cval=-1, ) # Plot the images fig, axes = plt.subplots(1, 4, figsize=(10, 5), sharex=True, sharey=True) axes[0].imshow(src, cmap="viridis", vmin=-1, vmax=nblob) axes[0].set_title("Source") axes[1].imshow(dst, cmap="viridis", vmin=-1, vmax=nblob) axes[1].set_title("Dest") axes[2].imshow(source_aligned, cmap="viridis", vmin=-1, vmax=nblob) axes[2].set_title("Source aligned to Dest padded") axes[3].imshow(dst_padded, cmap="viridis", vmin=-1, vmax=nblob) 
axes[3].set_title("Dest padded") plt.show() e.g.: affine_test(0, (-20, 40)) gives: With a zoom in showing the aligned in the padded images: I require the full extent of the src and dst images aligned on the same pixel coordinates, with both rotations and translations. Any help is greatly appreciated! | Working code below in case anyone else has this need of scipy's affine transformations: def affine_test(angle=0, translate=(0, 0), shape=(200, 100), buffered_shape=(300, 200), nblob=50): # Maxiumum translation allowed is half difference between shape and buffered_shape np.random.seed(42) # Generate a buffered_shape-sized base image base = np.zeros(buffered_shape, dtype=np.float32) random_locs = np.random.choice(np.arange(2, buffered_shape[0] - 2), nblob * 2, replace=False) i = random_locs[:nblob] j = random_locs[nblob:] for k, (_i, _j) in enumerate(zip(i, j)): base[_i - 2 : _i + 2, _j - 2 : _j + 2] = k + 10 # Impose a rotation and translation on source src = rotate(base, angle, reshape=False, order=1, mode="constant") bsc = (np.array(buffered_shape) / 2).astype(int) sc = (np.array(shape) / 2).astype(int) src = src[ bsc[0] - sc[0] + translate[0] : bsc[0] + sc[0] + translate[0], bsc[1] - sc[1] + translate[1] : bsc[1] + sc[1] + translate[1], ] # Cut-out destination from the centre of the base image dst = base[bsc[0] - sc[0] : bsc[0] + sc[0], bsc[1] - sc[1] : bsc[1] + sc[1]] src_y, src_x = src.shape def get_matrix_offset(centre, angle, scale): """Follows OpenCV.getRotationMatrix2D""" angle_rad = angle * np.pi / 180 alpha = np.round(scale * np.cos(angle_rad), 8) beta = np.round(scale * np.sin(angle_rad), 8) return ( np.array([[alpha, beta], [-beta, alpha]]), np.array( [ (1 - alpha) * centre[0] - beta * centre[1], beta * centre[0] + (1 - alpha) * centre[1], ] ), ) matrix, offset = get_matrix_offset(np.array([((src_y - 1) / 2) - translate[0], ((src_x - 1) / 2) - translate[ 1]]), angle, 1) offset += np.array(translate) M = np.column_stack((matrix, offset)) M = np.vstack((M, [0, 0, 1])) iM = np.linalg.inv(M) imatrix = iM[:2, :2] ioffset = iM[:2, 2] # Determine the outer bounds of the new image lin_pts = np.array([[0, src_y-1, src_y-1, 0], [0, 0, src_x-1, src_x-1]]) transf_lin_pts = np.dot(matrix, lin_pts) + offset.reshape(2, 1) # - np.array(translate).reshape(2, 1) # both? # Find min and max bounds of the transformed image min_x = np.floor(np.min(transf_lin_pts[1])).astype(int) min_y = np.floor(np.min(transf_lin_pts[0])).astype(int) max_x = np.ceil(np.max(transf_lin_pts[1])).astype(int) max_y = np.ceil(np.max(transf_lin_pts[0])).astype(int) # Add translation to the transformation matrix to shift to positive values anchor_x, anchor_y = 0, 0 if min_x < 0: anchor_x = -min_x if min_y < 0: anchor_y = -min_y dot_anchor = np.dot(imatrix, [anchor_y, anchor_x]) shifted_offset = ioffset - dot_anchor # Create padded destination image dst_y, dst_x = dst.shape[:2] pad_widths = [anchor_y, max(max_y, dst_y) - dst_y, anchor_x, max(max_x, dst_x) - dst_x] dst_padded = np.pad( dst, ((pad_widths[0], pad_widths[1]), (pad_widths[2], pad_widths[3])), "constant", constant_values=-10, ) dst_pad_y, dst_pad_x = dst_padded.shape # Create the aligned and padded source image source_aligned = affine_transform( src, imatrix, offset=shifted_offset, output_shape=(dst_pad_y, dst_pad_x), order=3, mode="constant", cval=-10, ) E.g. running: affine_test(angle=-25, translate=(10, -40)) will show: and zoomed in: Apologies the code is not nicely written as is. 
Note that, running this in the wild, I notice it cannot handle any change in the scale of the images, but I am not certain this isn't something to do with how I calculate the transformation - so it's a caveat worth noting, and checking out, if you are aligning images with different scales. | 7 | 2 |
71,514,124 | 2022-3-17 | https://stackoverflow.com/questions/71514124/find-near-duplicate-and-faked-images | I am using Perceptual hashing technique to find near-duplicate and exact-duplicate images. The code is working perfectly for finding exact-duplicate images. However, finding near-duplicate and slightly modified images seems to be difficult. As the difference score between their hashing is generally similar to the hashing difference of completely different random images. To tackle this, I tried to reduce the pixelation of the near-duplicate images to 50x50 pixel and make them black/white, but I still don't have what I need (small difference score). This is a sample of a near duplicate image pair: Image 1 (a1.jpg): Image 2 (b1.jpg): The difference between the hashing score of these images is : 24 When pixeld (50x50 pixels), they look like this: rs_a1.jpg rs_b1.jpg The hashing difference score of the pixeled images is even bigger! : 26 Below two more examples of near duplicate image pairs as requested by @ann zen: Pair 1 Pair 2 The code I use to reduce the image size is this : from PIL import Image with Image.open(image_path) as image: reduced_image = image.resize((50, 50)).convert('RGB').convert("1") And the code for comparing two image hashing: from PIL import Image import imagehash with Image.open(image1_path) as img1: hashing1 = imagehash.phash(img1) with Image.open(image2_path) as img2: hashing2 = imagehash.phash(img2) print('difference : ', hashing1-hashing2) | Here's a quantitative method to determine duplicate and near-duplicate images using the sentence-transformers library which provides an easy way to compute dense vector representations for images. We can use the OpenAI Contrastive Language-Image Pre-Training (CLIP) Model which is a neural network already trained on a variety of (image, text) pairs. To find image duplicates and near-duplicates, we encode all images into vector space and then find high density regions which correspond to areas where the images are fairly similar. When two images are compared, they are given a score between 0 to 1.00. We can use a threshold parameter to identify two images as similar or different. By setting the threshold lower, you will get larger clusters which have fewer similar images in it. A duplicate image will have a score of 1.00 meaning the two images are exactly the same. To find near-duplicate images, we can set the threshold to any arbitrary value, say 0.9. For instance, if the determined score between two images are greater than 0.9 then we can conclude they are near-duplicate images. An example: This dataset has 5 images, notice how there are duplicates of cat #1 while the others are different. Finding duplicate images Score: 100.000% .\cat1 copy.jpg .\cat1.jpg Both cat1 and its copy are the same. Finding near-duplicate images Score: 91.116% .\cat1 copy.jpg .\cat2.jpg Score: 91.116% .\cat1.jpg .\cat2.jpg Score: 91.097% .\bear1.jpg .\bear2.jpg Score: 59.086% .\bear2.jpg .\cat2.jpg Score: 56.025% .\bear1.jpg .\cat2.jpg Score: 53.659% .\bear1.jpg .\cat1 copy.jpg Score: 53.659% .\bear1.jpg .\cat1.jpg Score: 53.225% .\bear2.jpg .\cat1.jpg We get more interesting score comparison results between different images. The higher the score, the more similar; the lower the score, the less similar. Using a threshold of 0.9 or 90%, we can filter out near-duplicate images. 
Comparison between just two images Score: 91.097% .\bear1.jpg .\bear2.jpg Score: 91.116% .\cat1.jpg .\cat2.jpg Score: 93.715% .\tower1.jpg .\tower2.jpg Code from sentence_transformers import SentenceTransformer, util from PIL import Image import glob import os # Load the OpenAI CLIP Model print('Loading CLIP Model...') model = SentenceTransformer('clip-ViT-B-32') # Next we compute the embeddings # To encode an image, you can use the following code: # from PIL import Image # encoded_image = model.encode(Image.open(filepath)) image_names = list(glob.glob('./*.jpg')) print("Images:", len(image_names)) encoded_image = model.encode([Image.open(filepath) for filepath in image_names], batch_size=128, convert_to_tensor=True, show_progress_bar=True) # Now we run the clustering algorithm. This function compares images aganist # all other images and returns a list with the pairs that have the highest # cosine similarity score processed_images = util.paraphrase_mining_embeddings(encoded_image) NUM_SIMILAR_IMAGES = 10 # ================= # DUPLICATES # ================= print('Finding duplicate images...') # Filter list for duplicates. Results are triplets (score, image_id1, image_id2) and is scorted in decreasing order # A duplicate image will have a score of 1.00 duplicates = [image for image in processed_images if image[0] >= 1] # Output the top X duplicate images for score, image_id1, image_id2 in duplicates[0:NUM_SIMILAR_IMAGES]: print("\nScore: {:.3f}%".format(score * 100)) print(image_names[image_id1]) print(image_names[image_id2]) # ================= # NEAR DUPLICATES # ================= print('Finding near duplicate images...') # Use a threshold parameter to identify two images as similar. By setting the threshold lower, # you will get larger clusters which have less similar images in it. Threshold 0 - 1.00 # A threshold of 1.00 means the two images are exactly the same. Since we are finding near # duplicate images, we can set it at 0.99 or any number 0 < X < 1.00. threshold = 0.99 near_duplicates = [image for image in processed_images if image[0] < threshold] for score, image_id1, image_id2 in near_duplicates[0:NUM_SIMILAR_IMAGES]: print("\nScore: {:.3f}%".format(score * 100)) print(image_names[image_id1]) print(image_names[image_id2]) | 23 | 41 |
71,544,953 | 2022-3-20 | https://stackoverflow.com/questions/71544953/unreadable-jupyter-lab-notebook-after-upgrading-pandas-capture-validation-error | I was recently using Jupyter lab and decided to update my pandas version from 1.2 to the latest (1.4). So I ran 'conda update pandas' which seemed to work fine. However when I then launched Jupyter lab in the usual way 'jupyter lab' and tried to open the workbook I had just been working on I got the below error: Unreadable Notebook: C:\Users...\script.ipynb TypeError("__init__() got an unexpected keyword argument 'capture_validation_error'") I am getting this same error when trying to open any of my .ipynb files that were previously working fine. I can also open them fine in jupyter notebook, but for some reason they don't work in Jupyter lab anymore. Any idea how I can fix this? Thanks | It turns out that a recent update to jupyter_server>=1.15.0 broke compatibility with nbformat<5.2.0, but did not update the conda recipe correctly per this Github pull request. It is possible that while updating pandas, you may have inadvertently also updated jupyterlab and/or jupyter_server. While we wait for the build with the merged PR to come downstream, we can fix this dependency issue by updating nbformat manually with conda install -c conda-forge nbformat to get the newest version of nbformat with version >=5.2. | 12 | 10 |
71,574,168 | 2022-3-22 | https://stackoverflow.com/questions/71574168/how-to-plot-confusion-matrix-without-color-coding | All of the answers I see on stackoverflow, such as 1, 2 and 3, are color-coded. In my case, I wouldn't like it to be colored, especially since my dataset is largely imbalanced and minority classes are always shown in a light color. I would instead prefer it to display the number of actual/predicted in each cell. Currently, I use: def plot_confusion_matrix(cm, classes, title, normalize=False, file='confusion_matrix', cmap=plt.cm.Blues): if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] cm_title = "Normalized confusion matrix" else: cm_title = title # print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(cm_title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.3f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True class') plt.xlabel('Predicted class') plt.tight_layout() plt.savefig(file + '.png') Output: So I want the number shown only. | Use seaborn.heatmap with a grayscale colormap and set vmin=0, vmax=0: import seaborn as sns sns.heatmap(cm, fmt='d', annot=True, square=True, cmap='gray_r', vmin=0, vmax=0, # set all to white linewidths=0.5, linecolor='k', # draw black grid lines cbar=False) # disable colorbar # re-enable outer spines sns.despine(left=False, right=False, top=False, bottom=False) Complete function: def plot_confusion_matrix(cm, classes, title, normalize=False, file='confusion_matrix', cmap='gray_r', linecolor='k'): if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] cm_title = 'Confusion matrix, with normalization' else: cm_title = title fmt = '.3f' if normalize else 'd' sns.heatmap(cm, fmt=fmt, annot=True, square=True, xticklabels=classes, yticklabels=classes, cmap=cmap, vmin=0, vmax=0, linewidths=0.5, linecolor=linecolor, cbar=False) sns.despine(left=False, right=False, top=False, bottom=False) plt.title(cm_title) plt.ylabel('True class') plt.xlabel('Predicted class') plt.tight_layout() plt.savefig(f'{file}.png') | 6 | 6 |
71,577,514 | 2022-3-22 | https://stackoverflow.com/questions/71577514/valueerror-per-column-arrays-must-each-be-1-dimensional-when-trying-to-create-a | I'm trying to create a very simple Pandas DataFrame from a dictionary. The dictionary has 3 items, and the DataFrame as well. They are: a list with the 'shape' (3,) a list/np.array (in different attempts) with the shape(3, 3) a constant of 100 (same value to the whole column) Here is the code that succeeds and displays the preferred df β # from a dicitionary >>>dict1 = {"x": [1, 2, 3], ... "y": list( ... [ ... [2, 4, 6], ... [3, 6, 9], ... [4, 8, 12] ... ] ... ), ... "z": 100} >>>df1 = pd.DataFrame(dict1) >>>df1 x y z 0 1 [2, 4, 6] 100 1 2 [3, 6, 9] 100 2 3 [4, 8, 12] 100 But then I assign a Numpy ndarray (shape 3, 3 )to the key y, and try to create a DataFrame from the dictionary. The line I try to create the DataFrame errors out. Below is the code I try to run, and the error I get (in separate code blocks for ease of reading.) code β >>>dict2 = {"x": [1, 2, 3], ... "y": np.array( ... [ ... [2, 4, 6], ... [3, 6, 9], ... [4, 8, 12] ... ] ... ), ... "z": 100} >>>df2 = pd.DataFrame(dict2) # see the below block for error error β --------------------------------------------------------------------------- ValueError Traceback (most recent call last) d:\studies\compsci\pyscripts\study\pandas-realpython\data-delightful\01.intro.ipynb Cell 10' in <module> 1 # from a dicitionary 2 dict1 = {"x": [1, 2, 3], 3 "y": np.array( 4 [ (...) 9 ), 10 "z": 100} ---> 12 df1 = pd.DataFrame(dict1) File ~\anaconda3\envs\dst\lib\site-packages\pandas\core\frame.py:636, in DataFrame.__init__(self, data, index, columns, dtype, copy) 630 mgr = self._init_mgr( 631 data, axes={"index": index, "columns": columns}, dtype=dtype, copy=copy 632 ) 634 elif isinstance(data, dict): 635 # GH#38939 de facto copy defaults to False only in non-dict cases --> 636 mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager) 637 elif isinstance(data, ma.MaskedArray): 638 import numpy.ma.mrecords as mrecords File ~\anaconda3\envs\dst\lib\site-packages\pandas\core\internals\construction.py:502, in dict_to_mgr(data, index, columns, dtype, typ, copy) 494 arrays = [ 495 x 496 if not hasattr(x, "dtype") or not isinstance(x.dtype, ExtensionDtype) 497 else x.copy() 498 for x in arrays 499 ] 500 # TODO: can we get rid of the dt64tz special case above? --> 502 return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy) File ~\anaconda3\envs\dst\lib\site-packages\pandas\core\internals\construction.py:120, in arrays_to_mgr(arrays, columns, index, dtype, verify_integrity, typ, consolidate) 117 if verify_integrity: 118 # figure out the index, if necessary 119 if index is None: --> 120 index = _extract_index(arrays) 121 else: 122 index = ensure_index(index) File ~\anaconda3\envs\dst\lib\site-packages\pandas\core\internals\construction.py:661, in _extract_index(data) 659 raw_lengths.append(len(val)) 660 elif isinstance(val, np.ndarray) and val.ndim > 1: --> 661 raise ValueError("Per-column arrays must each be 1-dimensional") 663 if not indexes and not raw_lengths: 664 raise ValueError("If using all scalar values, you must pass an index") ValueError: Per-column arrays must each be 1-dimensional Why is it ending in error like that in the second attempt, even though the dimensions of both arrays are the same? What is a workaround for this issue? 
| If you look closer at the error message and take a quick look at the source code here: elif isinstance(val, np.ndarray) and val.ndim > 1: raise ValueError("Per-column arrays must each be 1-dimensional") You will find that if the dictionary value is a numpy array and has more than one dimension, as in your example, it throws an error. Therefore, it works very well with a list because a list has no more than one dimension even if it is a list of lists. lst = [[1,2,3],[4,5,6],[7,8,9]] len(lst) # print 3 elements or (3,) not (3,3) like numpy array. You can try to use np.array([1,2,3]); it will work because the number of dimensions is 1: arr = np.array([1,2,3]) print(arr.ndim) # output is 1 If it is necessary to use a numpy array inside a dictionary, you can use .tolist() to convert the numpy array to a list. | 15 | 15 |
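A minimal runnable sketch of the .tolist() workaround described in the answer above; the dict2 name and values simply mirror the question's example:

```python
import numpy as np
import pandas as pd

arr = np.array([[2, 4, 6], [3, 6, 9], [4, 8, 12]])

dict2 = {
    "x": [1, 2, 3],
    "y": arr.tolist(),  # list of lists: each inner list becomes one cell
    "z": 100,
}

df2 = pd.DataFrame(dict2)
print(df2)
# Produces three rows, with column "y" holding [2, 4, 6], [3, 6, 9], [4, 8, 12]
# and column "z" broadcast to 100, just like the list-based dict1 in the question.
```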
71,574,873 | 2022-3-22 | https://stackoverflow.com/questions/71574873/assign-one-column-value-to-another-column-based-on-condition-in-pandas | I want to know how we can assign one column's value to another column if it has a null or 0 value. I have a dataframe like this: id column1 column2 5263 5400 5400 4354 6567 Null 5656 5456 5456 5565 6768 3489 4500 3490 Null The Expected Output is id column1 column2 5263 5400 5400 4354 6567 6567 5656 5456 5456 5565 6768 3489 4500 3490 3490 that is, if df['column2'] = Null/0 then it should take df['column1']'s value. Can someone explain how I can achieve my desired output? | Based on the answers to this similar question, you can do the following: Using np.where: df['column2'] = np.where((df['column2'] == 'Null') | (df['column2'] == 0), df['column1'], df['column2']) Instead, using only pandas and Python: df['column2'][(df['column2'] == 0) | (df['column2'] == 'Null')] = df['column1'] | 7 | 10 |
71,576,361 | 2022-3-22 | https://stackoverflow.com/questions/71576361/mypy-error-expected-type-in-class-pattern-found-any | Want to add MyPy checker to my html scraper. I manage to fix all errors except this one Expected type in class pattern. Source code: from bs4 import BeautifulSoup from bs4.element import Tag, NavigableString soup = BeautifulSoup(""" <!DOCTYPE html> <html> <body> EXTRA TEXT <p> first <b>paragraph</b> <br> <br> second paragraph </p> </body> </html> """, "lxml") tag = soup.select_one('body') for el in tag.children: match el: case NavigableString(): ... case Tag(name="p"): ... case Tag(): ... mypy example.py Errors: example.py:24: error: Expected type in class pattern; found "Any" example.py:26: error: Expected type in class pattern; found "Any" example.py:28: error: Expected type in class pattern; found "Any" Found 3 errors in 1 file (checked 1 source file) So, what does this error mean? And how can I fix it? | You can use TYPE_CHECKING to load classes that have the typing from typing import TYPE_CHECKING if TYPE_CHECKING: class NavigableString: ... class Tag: children: list[NavigableString | Tag] name: str class BeautifulSoup: def __init__(self, markup: str, features: str | None) -> None: ... def select_one(self, text: str) -> Tag: ... else: from bs4 import BeautifulSoup from bs4.element import Tag, NavigableString soup = BeautifulSoup( """ <!DOCTYPE html> <html> <body> EXTRA TEXT <p> first <b>paragraph</b> <br> <br> second paragraph </p> </body> </html> """, "lxml", ) tag = soup.select_one("body") for el in tag.children: match el: case NavigableString(): ... case Tag(name="p"): ... case Tag(): ... | 5 | 1 |
71,567,315 | 2022-3-22 | https://stackoverflow.com/questions/71567315/how-to-get-the-ssim-comparison-score-between-two-images | I am trying to calculate the SSIM between corresponding images. For example, an image called 106.tif in the ground truth directory corresponds to a 'fake' generated image 106.jpg in the fake directory. The ground truth directory absolute pathway is /home/pr/pm/zh_pix2pix/datasets/mousebrain/test/B The fake directory absolute pathway is /home/pr/pm/zh_pix2pix/output/fake_B The images inside correspond to each other, like this: see image There are thousands of these images I want to compare on a one-to-one basis. I do not want to compare SSIM of one image to many others. Both the corresponding ground truth and fake images have the same file name, but different extension (i.e. 106.tif and 106.jpg) and I only want to compare them to each other. I am struggling to edit available scripts for SSIM comparison in this way. I want to use this one: https://github.com/mostafaGwely/Structural-Similarity-Index-SSIM-/blob/master/ssim.py but other suggestions are welcome. The code is also shown below: # Usage: # # python3 script.py --input original.png --output modified.png # Based on: https://github.com/mostafaGwely/Structural-Similarity-Index-SSIM- # 1. Import the necessary packages #from skimage.measure import compare_ssim from skimage.metrics import structural_similarity as ssim import argparse import imutils import cv2 # 2. Construct the argument parse and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-f", "--first", required=True, help="Directory of the image that will be compared") ap.add_argument("-s", "--second", required=True, help="Directory of the image that will be used to compare") args = vars(ap.parse_args()) # 3. Load the two input images imageA = cv2.imread(args["first"]) imageB = cv2.imread(args["second"]) # 4. Convert the images to grayscale grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY) grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY) # 5. Compute the Structural Similarity Index (SSIM) between the two # images, ensuring that the difference image is returned #(score, diff) = compare_ssim(grayA, grayB, full=True) (score, diff) = ssim(grayA, grayB, full=True) diff = (diff * 255).astype("uint8") # 6. You can print only the score if you want print("SSIM: {}".format(score)) The use of argparse currently limits me to just one image at a time, but I would ideally like to compare them using a loop across the ground truth and fake directories. Any advice would be appreciated. | Here's a working example to compare one image to another. You can expand it to compare multiple at once. 
Two test input images with slight differences: Results Highlighted differences Similarity score Image similarity 0.9639027981846681 Difference masks Code from skimage.metrics import structural_similarity import cv2 import numpy as np before = cv2.imread('5.jpg') after = cv2.imread('6.jpg') # Convert images to grayscale before_gray = cv2.cvtColor(before, cv2.COLOR_BGR2GRAY) after_gray = cv2.cvtColor(after, cv2.COLOR_BGR2GRAY) # Compute SSIM between two images (score, diff) = structural_similarity(before_gray, after_gray, full=True) print("Image similarity", score) # The diff image contains the actual image differences between the two images # and is represented as a floating point data type in the range [0,1] # so we must convert the array to 8-bit unsigned integers in the range # [0,255] before we can use it with OpenCV diff = (diff * 255).astype("uint8") # Threshold the difference image, followed by finding contours to # obtain the regions of the two input images that differ thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] mask = np.zeros(before.shape, dtype='uint8') filled_after = after.copy() for c in contours: area = cv2.contourArea(c) if area > 40: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(before, (x, y), (x + w, y + h), (36,255,12), 2) cv2.rectangle(after, (x, y), (x + w, y + h), (36,255,12), 2) cv2.drawContours(mask, [c], 0, (0,255,0), -1) cv2.drawContours(filled_after, [c], 0, (0,255,0), -1) cv2.imshow('before', before) cv2.imshow('after', after) cv2.imshow('diff',diff) cv2.imshow('mask',mask) cv2.imshow('filled after',filled_after) cv2.waitKey(0) | 8 | 17 |
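The answer notes that it can be expanded to compare many images at once. Below is a hedged sketch of that expansion for the directory layout described in the question (the two paths come from the question; matching files by stem, e.g. 106.tif against 106.jpg, is an assumption about the naming scheme):

```python
from pathlib import Path

import cv2
from skimage.metrics import structural_similarity

gt_dir = Path("/home/pr/pm/zh_pix2pix/datasets/mousebrain/test/B")
fake_dir = Path("/home/pr/pm/zh_pix2pix/output/fake_B")

for gt_path in sorted(gt_dir.glob("*.tif")):
    fake_path = fake_dir / (gt_path.stem + ".jpg")  # 106.tif <-> 106.jpg
    if not fake_path.exists():
        continue  # no matching fake image

    gt = cv2.imread(str(gt_path), cv2.IMREAD_GRAYSCALE)
    fake = cv2.imread(str(fake_path), cv2.IMREAD_GRAYSCALE)

    # structural_similarity expects same-sized arrays; resize beforehand if needed
    score = structural_similarity(gt, fake)
    print(f"{gt_path.name} vs {fake_path.name}: SSIM = {score:.4f}")
```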
71,565,413 | 2022-3-21 | https://stackoverflow.com/questions/71565413/adding-a-dictionary-to-a-row-in-a-pandas-dataframe-using-concat-in-pandas-1-4 | After updating to pandas 1.4, I now receive the following warning when using frame.append to append a dictionary to a Pandas DataFrame. FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. Below is the code. This still works, though I would like to resolve the warning. report = report.append({ "period":period, "symbol":symbol, "start_date":start_date, "start_price":start_price, "start_market_cap":start_market_cap, "end_date":end_date, "end_price":end_price, "end_market_cap":end_market_cap, "return":return_ },ignore_index=True) I have updated the code to the below, which kicks a different warning: report = pd.concat([report,{ "period":period, "symbol":symbol, "start_date":start_date, "start_price":start_price, "start_market_cap":start_market_cap, "end_date":end_date, "end_price":end_price, "end_market_cap":end_market_cap, "return":return_ }],ignore_index=True) TypeError: cannot concatenate object of type '<class 'dict'>'; only Series and DataFrame objs are valid 2 questions: Is the first warning wrong? What is the pandas 1.4 way to achieve this? Thanks. | Use loc to assign a single row value: report.loc[len(report)] = {"period":period, "symbol":symbol, "start_date":start_date, "start_price":start_price, "start_market_cap":start_market_cap, "end_date":end_date, "end_price":end_price, "end_market_cap":end_market_cap, "return":return_ } | 9 | 3 |
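For completeness, since the question explicitly asks for a pandas.concat version: a sketch that wraps the dict in a one-row DataFrame before concatenating, which is what the second error message is about (concat only accepts Series/DataFrame objects). The report and insert_row values here are minimal stand-ins for the question's variables, not real data:

```python
import pandas as pd

# Minimal stand-ins for the question's report DataFrame and insert_row dict.
report = pd.DataFrame({"Date": [], "Index": [], "Change": []})
insert_row = {"Date": "2022-03-21", "Index": 1.0, "Change": -2.0}

# concat only accepts Series/DataFrame objects, so wrap the dict in a one-row frame.
report = pd.concat([report, pd.DataFrame([insert_row])], ignore_index=True)
print(report)
```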
71,564,200 | 2022-3-21 | https://stackoverflow.com/questions/71564200/python-how-to-revert-the-pattern-of-a-list-rearrangement | So I am rearranging a list based on an index pattern and would like to find a way to calculate the pattern I need to revert the list back to its original order. For my example I am using a list of 5 items, as I can work out the pattern needed to revert the list back to its original state. However, this isn't so easy when dealing with 100's of list items. def rearrange(pattern: list, L: list): new_list = [] for i in pattern: new_list.append(L[i-1]) return new_list print(rearrange([2,5,1,3,4], ['q','t','g','x','r'])) #['t', 'r', 'q', 'g', 'x'] and in order to set it back to the original pattern I would use print(rearrange([3,1,4,5,2],['t', 'r', 'q', 'g', 'x'])) #['q', 't', 'g', 'x', 'r'] What I am looking for is a way to calculate the pattern "[3,1,4,5,2]" from the above example whilst running the script, so that I can set the list back to its original order. Using a larger example: print(rearrange([18,20,10,11,13,1,9,12,16,6,15,5,3,7,17,2,19,8,14,4],['e','p','b','i','s','r','q','h','m','f','c','g','d','k','l','t','a','n','j','o'])) #['n', 'o', 'f', 'c', 'd', 'e', 'm', 'g', 't', 'r', 'l', 's', 'b', 'q', 'a', 'p', 'j', 'h', 'k', 'i'] but I need to know the pattern to use with this new list in order to return it to its original state. print(rearrange([???],['n', 'o', 'f', 'c', 'd', 'e', 'm', 'g', 't', 'r', 'l', 's', 'b', 'q', 'a', 'p', 'j', 'h', 'k', 'i'])) #['e','p','b','i','s','r','q','h','m','f','c','g','d','k','l','t','a','n','j','o'] | This is commonly called "argsort". But since you're using 1-based indexing, you're off-by-one. You can get it with numpy: >>> pattern [2, 5, 1, 3, 4] >>> import numpy as np >>> np.argsort(pattern) + 1 array([3, 1, 4, 5, 2]) Without numpy: >>> [1 + i for i in sorted(range(len(pattern)), key=pattern.__getitem__)] [3, 1, 4, 5, 2] | 6 | 2 |
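A quick round-trip check of the accepted answer, using the question's rearrange function and its 5-element example (the function is repeated here, as a comprehension with the same behaviour, so the snippet runs on its own):

```python
def rearrange(pattern: list, L: list):
    # Same behaviour as the question's function: pattern uses 1-based indices.
    return [L[i - 1] for i in pattern]

pattern = [2, 5, 1, 3, 4]
original = ['q', 't', 'g', 'x', 'r']

shuffled = rearrange(pattern, original)  # ['t', 'r', 'q', 'g', 'x']

# Inverse pattern via the answer's non-numpy argsort, shifted to 1-based indexing.
inverse = [1 + i for i in sorted(range(len(pattern)), key=pattern.__getitem__)]
print(inverse)                                   # [3, 1, 4, 5, 2]
print(rearrange(inverse, shuffled) == original)  # True
```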
71,546,900 | 2022-3-20 | https://stackoverflow.com/questions/71546900/weird-glibc-2-17-conflict-when-trying-to-conda-install-tensorflow-1-4-1 | I'm trying to create a new conda enviornment with tensorflow (GPU), version 1.4.1 with the following command conda create -n parsim_1.4.1 python=3 tensorflow-gpu=1.4.1. However, it prints a weird conflict: $ conda create -n parsim_1.4.1 python=3 tensorflow-gpu=1.4.1 Collecting package metadata (current_repodata.json): done Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: \ Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions Package python conflicts for: python=3 tensorflow-gpu=1.4.1 -> tensorflow-gpu-base==1.4.1 -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0']The following specifications were found to be incompatible with your system: - feature:/linux-64::__glibc==2.17=0 - python=3 -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17'] Your installed version is: 2.17 My OS is CentOS7, and $ uname -a Linux cpu-s-master 3.10.0-1160.42.2.el7.x86_64 #1 SMP Tue Sep 7 14:49:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux What's wrong here? How can I fix it? EDIT Thanks to @merv's comment, I've tried with Mamba, and indeed it gave better error message (and much much faster). If anyone's interested, that's the command that successfully installed my required versions: mamba create -n parsim python=3 "tensorflow-gpu=1.4" pillow opencv -c shuangnan -c anaconda | Conda's error reporting isn't always helpful. Mamba is sometimes better, and in this particular case it gives: Looking for: ['python=3', 'tensorflow-gpu=1.4.1'] conda-forge/linux-64 Using cache conda-forge/noarch Using cache pkgs/main/linux-64 No change pkgs/main/noarch No change pkgs/r/linux-64 No change pkgs/r/noarch No change Encountered problems while solving: - nothing provides cudatoolkit 8.0.* needed by tensorflow-gpu-base-1.4.1-py27h01caf0a_0 Even here, that py27 in the build string is weird, but it at least directs us to cudatoolkit 8.0, which is no longer hosted in the main channel. Instead, you need to include the free channel. 
The following works for me: $ CONDA_SUBDIR=linux-64 CONDA_CHANNEL_PRIORITY=flexible \ mamba create -n foo \ -c anaconda -c free \ python=3 tensorflow-gpu=1.4.1 __ __ __ __ / \ / \ / \ / \ / \/ \/ \/ \ βββββββββββββββ/ /ββ/ /ββ/ /ββ/ /ββββββββββββββββββββββββ / / \ / \ / \ / \ \____ / / \_/ \_/ \_/ \ o \__, / _/ \_____/ ` |/ ββββ ββββ ββββββ ββββ βββββββββββ ββββββ βββββ ββββββββββββββββββ βββββββββββββββββββββ ββββββββββββββββββββββββββββββββββββββββββββββ ββββββββββββββββββββββββββββββββββββββββββββββ βββ βββ ββββββ ββββββ βββ ββββββββββββββ βββ βββ ββββββ ββββββ ββββββββββ βββ βββ mamba (0.21.1) supported by @QuantStack GitHub: https://github.com/mamba-org/mamba Twitter: https://twitter.com/QuantStack βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ Looking for: ['python=3', 'tensorflow-gpu=1.4.1'] anaconda/linux-64 Using cache anaconda/noarch Using cache conda-forge/linux-64 Using cache conda-forge/noarch Using cache pkgs/main/noarch No change pkgs/main/linux-64 No change pkgs/r/linux-64 No change pkgs/r/noarch No change free/linux-64 No change free/noarch No change Transaction Prefix: /Users/mfansler/miniconda3/envs/foo Updating specs: - python=3 - tensorflow-gpu=1.4.1 Package Version Build Channel Size βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ Install: βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ + blas 1.0 openblas anaconda/linux-64 49kB + bleach 1.5.0 py36_0 free/linux-64 22kB + ca-certificates 2020.10.14 0 anaconda/linux-64 131kB + certifi 2020.6.20 py36_0 anaconda/linux-64 163kB + cudatoolkit 8.0 3 free/linux-64 338MB + cudnn 7.1.3 cuda8.0_0 anaconda/linux-64 241MB + html5lib 0.9999999 py36_0 free/linux-64 181kB + importlib-metadata 2.0.0 py_1 anaconda/noarch 36kB + ld_impl_linux-64 2.33.1 h53a641e_7 anaconda/linux-64 660kB + libedit 3.1.20191231 h14c3975_1 anaconda/linux-64 124kB + libffi 3.3 he6710b0_2 anaconda/linux-64 55kB + libgcc-ng 9.1.0 hdf63c60_0 anaconda/linux-64 8MB + libgfortran-ng 7.3.0 hdf63c60_0 anaconda/linux-64 1MB + libopenblas 0.3.10 h5a2b251_0 anaconda/linux-64 8MB + libprotobuf 3.13.0.1 hd408876_0 anaconda/linux-64 2MB + libstdcxx-ng 9.1.0 hdf63c60_0 anaconda/linux-64 4MB + markdown 3.3.2 py36_0 anaconda/linux-64 126kB + ncurses 6.2 he6710b0_1 anaconda/linux-64 1MB + numpy 1.19.1 py36h30dfecb_0 anaconda/linux-64 21kB + numpy-base 1.19.1 py36h75fe3a5_0 anaconda/linux-64 5MB + openssl 1.1.1h h7b6447c_0 anaconda/linux-64 4MB + pip 20.2.4 py36_0 anaconda/linux-64 2MB + protobuf 3.13.0.1 py36he6710b0_1 anaconda/linux-64 715kB + python 3.6.12 hcff3b4d_2 anaconda/linux-64 36MB + readline 8.0 h7b6447c_0 anaconda/linux-64 438kB + setuptools 50.3.0 py36hb0f4dca_1 anaconda/linux-64 913kB + six 1.15.0 py_0 anaconda/noarch 13kB + sqlite 3.33.0 h62c20be_0 anaconda/linux-64 2MB + tensorflow-gpu 1.4.1 0 anaconda/linux-64 3kB + tensorflow-gpu-base 1.4.1 py36h01caf0a_0 anaconda/linux-64 119MB + tensorflow-tensorboard 1.5.1 py36hf484d3e_1 anaconda/linux-64 3MB + tk 8.6.10 hbc83047_0 anaconda/linux-64 3MB + werkzeug 1.0.1 py_0 anaconda/noarch 249kB + wheel 0.35.1 py_0 anaconda/noarch 37kB + xz 5.2.5 h7b6447c_0 anaconda/linux-64 449kB + zipp 3.3.1 py_0 anaconda/noarch 12kB + zlib 1.2.11 h7b6447c_3 anaconda/linux-64 122kB Summary: Install: 37 packages Total download: 784MB βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 5 | 7 |
71,540,449 | 2022-3-19 | https://stackoverflow.com/questions/71540449/aws-lambda-to-rds-postgresql | Hello fellow AWS contributors, I'm currently working on a project to set up an example of connecting a Lambda function to our PostgreSQL database hosted on RDS. I tested my Python + SQL code locally (in VS Code and DBeaver) and it works perfectly fine using only basic credentials (host, dbname, username, password). However, when I paste the code into the Lambda function, it gave me all sorts of errors. I followed this template and modified my code to retrieve the credentials from Secrets Manager instead. I'm currently using boto3, psycopg2, and Secrets Manager to get credentials and connect to the database. List of errors I'm getting: server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request could not connect to server: Connection timed out. Is the server running on host "db endpoint" and accepting TCP/IP connections on port 5432? FATAL: no pg_hba.conf entry for host "ip:xxx", user "userXXX", database "dbXXX", SSL off Things I tried - RDS and Lambda are in the same VPC, same subnet, same security group. IP address is included in the inbound rule. Lambda function is set to run up to 15 min, and it always stops before it even hits 15 min. I tried both the database endpoint and the database proxy endpoint, and neither works. It doesn't really make sense to me that when I run the code locally, I only need to provide the host, dbname, username, and password, that's it, and I'm able to write all the queries and functions I want. But when I throw the code into the Lambda function, it's requiring all these Secrets Manager, VPC security group, SSL, proxy, TCP/IP rules etc. Can someone explain why there is a requirement difference between running it locally and on Lambda? Finally, does anyone know what could be wrong in my setup? I'm happy to provide any information related to this; any general direction to look into would be really helpful. Thanks! |
71,546,126 | 2022-3-20 | https://stackoverflow.com/questions/71546126/python-pydantic-error-typeerror-init-takes-exactly-1-positional-argument | i am currenty working on a python fastapi project for university. Every time i run my authorization dependencies i get the following error: ERROR: Exception in ASGI application Traceback (most recent call last): File "C:\Python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 366, in run_asgi result = await app(self.scope, self.receive, self.send) File "C:\Python39\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 75, in __call__ return await self.app(scope, receive, send) File "C:\Python39\lib\site-packages\fastapi\applications.py", line 208, in __call__ await super().__call__(scope, receive, send) File "C:\Python39\lib\site-packages\starlette\applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "C:\Python39\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__ raise exc File "C:\Python39\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "C:\Python39\lib\site-packages\starlette\exceptions.py", line 82, in __call__ raise exc File "C:\Python39\lib\site-packages\starlette\exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "C:\Python39\lib\site-packages\starlette\routing.py", line 656, in __call__ await route.handle(scope, receive, send) File "C:\Python39\lib\site-packages\starlette\routing.py", line 259, in handle await self.app(scope, receive, send) File "C:\Python39\lib\site-packages\starlette\routing.py", line 61, in app response = await func(request) File "C:\Python39\lib\site-packages\fastapi\routing.py", line 216, in app solved_result = await solve_dependencies( File "C:\Python39\lib\site-packages\fastapi\dependencies\utils.py", line 496, in solve_dependencies solved_result = await solve_dependencies( File "C:\Python39\lib\site-packages\fastapi\dependencies\utils.py", line 525, in solve_dependencies solved = await call(**sub_values) File "e:\Dev\Ottomize\Ottomize\backend\app\auth_handler.py", line 60, in get_current_user token_data = schemas.TokenData(username) File "pydantic\main.py", line 322, in pydantic.main.BaseModel.__init__ TypeError: __init__() takes exactly 1 positional argument (2 given) Here is my relevant code: FAST API Endpoint: @app.get("/user/current/info/", response_model=schemas.User) async def user_info(current_user: schemas.User = Depends(auth_handler.get_current_active_user)): return current_user Used functions in my auth_handler.py: from fastapi.security import OAuth2PasswordBearer from jose import JWTError, jwt from passlib.context import CryptContext from datetime import datetime, timedelta from typing import Optional from fastapi import Depends, HTTPException, status from . 
import crud, schemas, config, database oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token") pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") #gets user out of db def get_user(username: str): db = database.SessionLocal() return crud.get_user_by_username(db, username) #gets current user async def get_current_user(token: str = Depends(oauth2_scheme)): credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: payload = jwt.decode(token, config.SECRET_KEY, algorithms=[config.ALGORITHM]) username: str = payload.get("sub") if username is None: raise credentials_exception token_data = schemas.TokenData(username) except JWTError: raise credentials_exception user = get_user(token_data.username) if user is None: raise credentials_exception return user #gets current user if active async def get_current_active_user(current_user: schemas.User = Depends(get_current_user)): if current_user.disabled: raise HTTPException(status_code=400, detail="Inactive user") return current_user Used functions in my crud.py: def get_user_by_username(db: Session, username: str): return db.query(models.User).filter(models.User.username == username).first() Used sqlalchemy models: from sqlalchemy import Boolean, Column, ForeignKey, Integer, String from sqlalchemy.orm import relationship from .database import Base class User(Base): __tablename__ = "user" id = Column(Integer, primary_key=True, index=True) username = Column(String(100), unique=True, index=True) mail = Column(String(100), unique=True, index=True) hashed_password = Column(String(100)) is_active = Column(Boolean, default=True) permissions = relationship("Permission", back_populates="user") class Permission(Base): __tablename__ = "permission" id = Column(Integer, primary_key=True, index=True) name = Column(String(100)) user_id = Column(Integer, ForeignKey("user.id")) user = relationship("User", back_populates="permissions") Used pydantic models: from typing import List, Optional from pydantic import BaseModel #Define Datatype Token class Token(BaseModel): access_token: str token_type: str #Define Datatype TokenData class TokenData(BaseModel): username: str class Config: orm_mode = True #Define Datatype User class User(BaseModel): id: int username: str mail: str is_active: Optional[bool] permissions: List[Permission] = [] class Config: orm_mode = True I am really new to fastapi and python in general and would really appreciate help! | You have to give Pydantic which key you are providing a value for: token_data = schemas.TokenData(username=username) Otherwise Pydantic has no idea that the variable username from the parent scope should be assigned to the username property in the schema. | 11 | 20 |
71,545,135 | 2022-3-20 | https://stackoverflow.com/questions/71545135/how-to-append-rows-with-concat-to-a-pandas-dataframe | I have defined an empty data frame with: insert_row = { "Date": dtStr, "Index": IndexVal, "Change": IndexChnge, } data = { "Date": [], "Index": [], "Change": [], } df = pd.DataFrame(data) df = df.append(insert_row, ignore_index=True) df.to_csv(r"C:\Result.csv", index=False) driver.close() But I get the below deprecation warning not to use df.append every time I run the code Can anyone suggest how to get rid of this warning by using pandas.concat? | Create a dataframe then concat: insert_row = { "Date": '2022-03-20', "Index": 1, "Change": -2, } df = pd.concat([df, pd.DataFrame([insert_row])]) print(df) # Output Date Index Change 0 2022-03-20 1.0 -2.0 | 5 | 11 |
71,544,103 | 2022-3-20 | https://stackoverflow.com/questions/71544103/how-can-we-store-a-json-credential-to-env-variable-in-python | { "type": "service_account", "project_id": "project_id", "private_key_id": "private_key_id", "private_key": "-----BEGIN PRIVATE KEY-----\n", "client_email": "email", "client_id": "id", "auth_uri": "uri_auth", "token_uri": "token_urin", "auth_provider_x509_cert_url": "auth_provider_x509_cert_url", "client_x509_cert_url": "client_x509_cert_url" } I tried encoding and decoding the JSON but it didn't work I even tried using /// in place of " " So I am using sheets-api. What I want to achieve is loading the path-for-json-file from .env variable scope=['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/spreadsheets' ] credentials = ServiceAccountCredentials.from_json_keyfile_name(r"path-for-json-file", scope) client = gspread.authorize(credentials) | Assuming your JSON file is creds.json creds.json { "type": "service_account", "project_id": "project_id", "private_key_id": "private_key_id", "private_key": "-----BEGIN PRIVATE KEY-----\n", "client_email": "email", "client_id": "id", "auth_uri": "uri_auth", "token_uri": "token_urin", "auth_provider_x509_cert_url": "auth_provider_x509_cert_url", "client_x509_cert_url": "client_x509_cert_url" } main.py import json data = json.load(open('creds.json')) f = open(".env", "x") for key, value in data.items(): f.write(f"{key.upper()}={value}\n") creds.env will be generated TYPE=service_account PROJECT_ID=project_id PRIVATE_KEY_ID=private_key_id PRIVATE_KEY=-----BEGIN PRIVATE KEY----- CLIENT_EMAIL=email CLIENT_ID=id AUTH_URI=uri_auth TOKEN_URI=token_urin AUTH_PROVIDER_X509_CERT_URL=auth_provider_x509_cert_url CLIENT_X509_CERT_URL=client_x509_cert_url create_keyfile_dict() basically returns a dict called variable_keys from dotenv import load_dotenv load_dotenv() def create_keyfile_dict(): variables_keys = { "type": os.getenv("TYPE"), "project_id": os.getenv("PROJECT_ID"), "private_key_id": os.getenv("PRIVATE_KEY_ID"), "private_key": os.getenv("PRIVATE_KEY"), "client_email": os.getenv("CLIENT_EMAIL"), "client_id": os.getenv("CLIENT_ID"), "auth_uri": os.getenv("AUTH_URI"), "token_uri": os.getenv("TOKEN_URI"), "auth_provider_x509_cert_url": os.getenv("AUTH_PROVIDER_X509_CERT_URL"), "client_x509_cert_url": os.getenv("CLIENT_X509_CERT_URL") } return variables_keys scope=['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/spreadsheets' ] credentials = ServiceAccountCredentials.from_json_keyfile_name(create_keyfile_dict(), scope) client = gspread.authorize(credentials) | 8 | 6 |
71,510,217 | 2022-3-17 | https://stackoverflow.com/questions/71510217/how-to-invalidate-a-view-cache-using-django-cacheops | I have a view and I cached it in views.py using django-cacheops (https://github.com/Suor/django-cacheops): @cached_view(timeout=60*15) @csrf_exempt def order(request, usr): ... The regex for order view in urls.py: url(r'^order/(?P<usr>\D+)$', views.order, name='ord') # Example Url: http://127.0.0.1:8000/order/demo (demo is the user name) And I want to invalidate the cached view order inside the below view: @login_required def available(request, pk, avail): pk = int(pk) avail = strtobool(avail) if avail: Product.objects.filter(id = pk).update(available = True) else: Product.objects.filter(id = pk).update(available = False) return HttpResponseRedirect(reverse_lazy('yc')) According to the docs, we can achieve this by doing: @login_required def available(request, pk, avail): pk = int(pk) avail = strtobool(avail) if avail: Product.objects.filter(id = pk).update(available = True) order.invalidate("http://127.0.0.1:8000/order/demo", "demo") #it's a dummy url I've handled it dynamically in my code else: Product.objects.filter(id = pk).update(available = False) order.invalidate("http://127.0.0.1:8000/order/demo", "demo") #it's a dummy url I've handled it dynamically in my code return HttpResponseRedirect(reverse_lazy('yc')) But it's not working. Here are my logs using redis-cli monitor: 1647434341.849096 [1 [::1]:59650] "GET" "c:af687d461ec8bb3c48f6392010e54778" 1647434341.866966 [1 [::1]:59650] "SETEX" "c:af687d461ec8bb3c48f6392010e54778" "900" "\x80\x04\x95\xfa\b\x00\x00\x00\x00\x00\x00\x8c\x14django.http.response\x94\x8c\x0cHttpResponse\x94\x93\x94)\x81\x94}\x94(\x8c\b_headers\x94}\x94\x8c\x0ccontent-type\x94\x8c\x0cContent-Type\x94\x8c\x18text/html; charset=utf-8\x94\x86\x94s\x8c\x11_closable_objects\x94]\x94\x8c\x0e_handler_class\x94N\x8c\acookies\x94\x8c\x0chttp.cookies\x94\x8c\x0cSimpleCookie\x94\x93\x94)\x81\x94\x8c\x06closed\x94\x89\x8c\x0e_reason_phrase\x94N\x8c\b_charset\x94N\x8c\n_container\x94]\x94B\xed\a\x00\x00<!DOCTYPE html>\n\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Buy Products</title>\n <link href=\"https://fonts.googleapis.com/css?family=Peralta\" rel=\"stylesheet\">\n <link rel=\"stylesheet\" href=\"/static/css/bootstrap.min.css\">\n <link rel=\"stylesheet\" href=\"/static/css/app.css\">\n </head>\n <body>\n <div class=\"wrapper\">\n <div class=\"container\">\n <ol class=\"breadcrumb my-4\">\n <li class=\"breadcrumb-item active\" style=\"color: #000;\">Buy Products</li>\n </ol>\n <form method=\"post\">\n <!-- <input type=\"hidden\" name=\"csrfmiddlewaretoken\" value=\"SnsBnyPIwIDejqctR7TMNkITcSafgwiydwsyIiAKQkiSvr3nFA0cm1Tf3Mk6JTPj\"> -->\n <p><label for=\"id_name\">Name:</label> <select name=\"name\" id=\"id_name\">\n <option value=\"Redmi note 5\">Product Name: Redmi note 5 \n MRP: 100000 \n Discounted Price: 45678 \n Description: It's good phone too</option>\n\n <option value=\"xiomi 2\">Product Name: xiomi 2 \n MRP: 10000 \n Discounted Price: 200 \n Description: xyz</option>\n\n <option value=\"mouse\">Product Name: mouse \n MRP: 1400 \n Discounted Price: 200 \n Description: xyzat</option>\n\n</select></p>\n<p><label for=\"id_user_name\">User name:</label> <textarea name=\"user_name\" cols=\"40\" rows=\"1\" maxlength=\"30\" required id=\"id_user_name\">\n</textarea></p>\n<p><label for=\"id_adress\">Adress:</label> <textarea name=\"adress\" cols=\"40\" rows=\"2\" maxlength=\"4000\" required id=\"id_adress\">\n</textarea></p>\n<p><label 
for=\"id_mobile\">Mobile:</label> <textarea name=\"mobile\" cols=\"40\" rows=\"1\" maxlength=\"10\" required id=\"id_mobile\">\n</textarea></p>\n<p><label for=\"id_qty\">Qty:</label> <input type=\"number\" name=\"qty\" required id=\"id_qty\"></p>\n <button type=\"submit\" class=\"btn btn-success\">Buy</button>\n </form>\n </div>\n <div class=\"push\"></div>\n </div>\n <script src=\"/static/js/jquery-3.2.1.min.js\"></script>\n <script src=\"/static/js/popper.min.js\"></script>\n <script src=\"/static/js/bootstrap.min.js\"></script>\n </body>\n</html>\n\x94aub." 1647434354.133804 [1 [::1]:59650] "DEL" "c:94c7a9e7f6c7a45ee645caa02f53d000" It looks like it's deleting some other cache. I've also raised the issue in the repo of django-cache, you can check it for more information: https://github.com/Suor/django-cacheops/issues/425 | Since you used a named group usr in your regex, Django passes it as a keyword argument: url(r'^order/(?P<usr>\D+)$', views.order, name='ord') But you are trying to invalidate the cache with a positional argument: order.invalidate("http://127.0.0.1:8000/order/demo", "demo") Instead, invalidate it with the corresponding keyword argument: order.invalidate("http://127.0.0.1:8000/order/demo", usr="demo") | 6 | 4 |
71,529,767 | 2022-3-18 | https://stackoverflow.com/questions/71529767/django-core-exceptions-improperlyconfigured-cannot-import-apps-accounts-ch | This is how it is structured The code inside apps.py of accounts folder file is from django.apps import AppConfig class AccountsConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = "apps.accounts" The code inside Settings is INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'mysite.apps.accounts', ] I tried changing 'mysite.apps.accounts', to 'mysite.apps.AccountsConfig', and changing name = "apps.accounts" to name = "accounts" I am new to Django and was following How to make a website with Python and Django - MODELS AND MIGRATIONS (E04) tutorial. Around 16:17 is where my error comes up when I enter python manage.py makemigrate to the vscode terminal The error is ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: Cannot import 'apps.accounts'. Check that 'mysite.apps.accounts.apps.AccountsConfig.name' is correct. Someone please help me. | The solution was quite counterintuitive. You have to delete the class AccountsConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = "accounts" from apps.py\accounts\apps\mysite. Then run python manage.py makemigrations and 2 new models 'UserPersona' and 'UserProfile' are created. the output in the terminal: mysite\apps\accounts\migrations\0001_initial.py - Create model UserPersona - Create model UserProfile | 22 | 2 |
71,535,170 | 2022-3-19 | https://stackoverflow.com/questions/71535170/how-to-add-elements-of-a-list-to-elements-of-a-row-in-pandas-database | i have this database called db in pandas index win loss moneywin moneyloss player1 5 1 300 100 player2 10 5 650 150 player3 17 6 1100 1050 player11 1010 105 10650 10150 player23 1017 106 101100 101050 and i want to add the elements of list1 to the elements of db list1 = [[player1,105,101,10300,10100],[player3,17,6,1100,1050]] so the results would be db2 index win loss moneywin moneyloss player1 110 102 10600 10200 player2 10 5 650 150 player3 34 12 2200 2100 player11 1010 105 10650 10150 player23 1017 106 101100 101050 how can i go about it? | Solution 1: Create a dataframe from list1 then concat it with the given dataframe then group by index and aggregate the remaining columns using sum df1 = pd.DataFrame(list1, columns=df.columns) df_out = pd.concat([df, df1]).groupby('index', sort=False).sum() Solution 2: Create a dataframe from list1 then add it with the given dataframe using common index df1 = pd.DataFrame(list1, columns=df.columns) df_out = df.set_index('index').add(df1.set_index('index'), fill_value=0) Result: print(df_out) win loss moneywin moneyloss index player1 110 102 10600 10200 player2 10 5 650 150 player3 34 12 2200 2100 player11 1010 105 10650 10150 player23 1017 106 101100 101050 | 7 | 2 |
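A self-contained version of the first approach, with the player names written as strings (in the question, list1 shows bare names, which is assumed to be shorthand); this reproduces the expected output end to end.

```python
import pandas as pd

df = pd.DataFrame(
    [["player1", 5, 1, 300, 100],
     ["player2", 10, 5, 650, 150],
     ["player3", 17, 6, 1100, 1050],
     ["player11", 1010, 105, 10650, 10150],
     ["player23", 1017, 106, 101100, 101050]],
    columns=["index", "win", "loss", "moneywin", "moneyloss"],
)

list1 = [["player1", 105, 101, 10300, 10100], ["player3", 17, 6, 1100, 1050]]

# Build a frame from the list and add it row-wise via groupby-sum on the key column.
df1 = pd.DataFrame(list1, columns=df.columns)
df_out = pd.concat([df, df1]).groupby("index", sort=False).sum()
print(df_out)
```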
71,531,909 | 2022-3-18 | https://stackoverflow.com/questions/71531909/declare-variable-type-inside-function | I am defining a function that gets a PDF in bytes, so I wrote: def documents_extractos(pdf_bytes: bytes): pass When I call the function and unfortunately pass a wrong type, instead of bytes let's say an int, why don't I get an error? I have read the documentation regarding typing but I don't get it. What is the purpose of telling the function that the variable should be bytes when passing an int raises no error? This could be handled by an isinstance(var, <class_type>) check, right? | Type hints are ignored at runtime. At the top of the page, the documentation that you've linked contains a note that states (emphasis mine): The Python runtime does not enforce function and variable type annotations. They can be used by third party tools such as type checkers, IDEs, linters, etc. The purpose of type hints is for static typechecking tools (e.g. mypy), which use static analysis to verify that your code respects the written type hints. These tools must be run as a separate process. Their primary use is to ensure that new changes in large codebases do not introduce potential typing issues (which can eventually become latent bugs that are difficult to resolve). If you want explicit runtime type checks (e.g. to raise an Exception if a value of a wrong type is passed into a function), use isinstance(). | 5 | 3 |
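If an actual runtime guarantee is wanted on top of the hint, a small explicit check is the usual pattern; this is just a sketch, not something the typing module does for you.

```python
def documents_extractos(pdf_bytes: bytes) -> None:
    # The annotation is only a hint; enforce it manually if you need a hard guarantee.
    if not isinstance(pdf_bytes, bytes):
        raise TypeError(f"pdf_bytes must be bytes, got {type(pdf_bytes).__name__}")
    ...  # real extraction logic would go here


documents_extractos(b"%PDF-1.4 ...")   # OK
documents_extractos(42)                # raises TypeError at runtime
```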
71,515,439 | 2022-3-17 | https://stackoverflow.com/questions/71515439/equivalent-to-torch-rfft-in-newest-pytorch-version | I want to estimate the fourier transform for a given image of size BxCxWxH In previous torch version the following did the job: fft_im = torch.rfft(img, signal_ndim=2, onesided=False) and the output was of size: BxCxWxHx2 However, with the new version of rfft : fft_im = torch.fft.rfft2(img, dim=2, norm=None) I do not get the same results. Do I miss something? | A few issues The dim argument you provided is an invalid type, it should be a tuple of two numbers or should be omitted. Really PyTorch should raise an exception. I would argue that the fact this ran without exception is a bug in PyTorch (I opened a ticket stating as much). PyTorch now supports complex tensor types, so FFT functions return those instead of adding a new dimension for the real/imaginary parts. You can use torch.view_as_real to convert to the old representation. Also worth pointing out that view_as_real doesn't copy data since it returns a view so shouldn't slow things down in any noticeable way. PyTorch no longer gives the option of disabling one-sided calculation in RFFT. Probably because disabling one-sided makes the result identical to torch.fft.fft2, which is in conflict with the 13th aphorism of PEP 20. The whole point of providing a special real-valued version of the FFT is that you need only compute half the values for each dimension, since the rest can be inferred via the Hermition symmetric property. So from all that you should be able to use fft_im = torch.view_as_real(torch.fft.fft2(img)) Important If you're going to pass fft_im to other functions in torch.fft (like fft.ifft or fft.fftshift) then you'll need to convert back to the complex representation using torch.view_as_complex so those functions don't interpret the last dimension as a signal dimension. | 6 | 7 |
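A short sketch of the round trip mentioned in the answer: compute the complex FFT, view it as a real tensor with a trailing dimension of 2 (the old BxCxWxHx2 layout), and convert back with view_as_complex before calling other torch.fft functions. The tensor sizes are illustrative assumptions.

```python
import torch

img = torch.randn(2, 3, 8, 8)             # BxCxWxH, arbitrary example sizes

fft_c = torch.fft.fft2(img)               # complex tensor, shape BxCxWxH
fft_im = torch.view_as_real(fft_c)        # real tensor, shape BxCxWxHx2 (old layout)

# Going back: other torch.fft functions expect the complex representation.
restored = torch.fft.ifft2(torch.view_as_complex(fft_im)).real
print(torch.allclose(restored, img, atol=1e-5))  # True up to floating-point error
```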
71,511,514 | 2022-3-17 | https://stackoverflow.com/questions/71511514/textmate-latex-compilation-pb-with-python-version-after-macos-update-monterey | I use TextMate to make PDF files in LaTeX. After the update to macOS Monterey version 12.3, the minimal version of Python (/usr/bin/python) has disappeared: the compilation doesn't work now. I tried changing /usr/bin/python to /usr/bin/python3 in TextMate's files (I only have this Python folder) but that still doesn't work. The error tells me to change the compilation command, which is this: #!/usr/bin/env ruby18 # coding: utf-8 require ENV["TM_SUPPORT_PATH"] + "/lib/tm/process" require ENV["TM_SUPPORT_PATH"] + "/lib/tm/htmloutput" require ENV["TM_SUPPORT_PATH"] + "/lib/tm/save_current_document" # To enable the typesetting of unsaved documents, you must change the "Save" setting of # this command to "Current File" and add the variable TM_LATEX_AUTOSAVE to TextMate's # Shell Variables preferences. Be warned that your document must be encoded as UTF-8 if # you exercise this option, because TextMate.save_current_document cannot know the file # encoding you prefer. TextMate.save_current_document unless ENV["TM_LATEX_AUTOSAVE"].nil? texmate = ENV["TM_BUNDLE_SUPPORT"] + "/bin/texmate.py" engine_version = TextMate::Process.run(texmate, "version") TextMate::HTMLOutput.show(:title => "Typesetting "#{ENV["TM_DISPLAYNAME"] || File.basename(ENV["TM_FILEPATH"])}"…", :sub_title => engine_version) do |io| TextMate::Process.run(texmate, 'latex', :interactive_input => false) do |line| io << line end end ::Process.exit($?.exitstatus || 0) # exitstatus is nil if our process is prematurely terminated (SIGINT) Thank you very much for your help. PS: The compilation works with TeXShop, so I don't think it is a LaTeX problem | The LaTeX bundle of TextMate was not updated in time for the release of macOS 12.3. You can fix it as follows: Download and install Python 3 (https://www.python.org/downloads/) /usr/bin/python3 -m pip install pyobjc --user cd ~/Library/Application\ Support/TextMate/Managed/Bundles/LaTeX.tmbundle/Support/bin Change "python" to "python3" in the header of all .py files (configure.py, btexdoc.py, texmate.py, texparser.py) | 6 | 7 |
71,522,731 | 2022-3-18 | https://stackoverflow.com/questions/71522731/unable-to-activate-virtual-environment-in-python | I am on Windows 10, Python 3.10.2. Here are the commands that I ran to create the virtual environment: Here are my versions for packages: virtualenv==16.7.5 virtualenvwrapper-win==1.2.6 I installed the virtual environment. D:\voice-cloning\real-time-voice-cloning>python -m pip install virtualenv WARNING: Ignoring invalid distribution -ip (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution - (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution -ip (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution - (d:\python\lib\site-packages) Requirement already satisfied: virtualenv in d:\python\lib\site-packages (16.7.5) WARNING: Ignoring invalid distribution -ip (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution - (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution -ip (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution - (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution -ip (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution - (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution -ip (d:\python\lib\site-packages) WARNING: Ignoring invalid distribution - (d:\python\lib\site-packages) WARNING: You are using pip version 21.2.4; however, version 22.0.4 is available. You should consider upgrading via the 'D:\python\python.exe -m pip install --upgrade pip' command. Then I ran these commands: D:\voice-cloning\real-time-voice-cloning>python -m virtualenv venv310 D:\python\lib\site-packages\virtualenv.py:24: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives import distutils.spawn D:\python\lib\site-packages\virtualenv.py:25: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead import distutils.sysconfig Using base prefix 'D:\\python' New python executable in D:\voice-cloning\real-time-voice-cloning\venv310\Scripts\python.exe Traceback (most recent call last): File "D:\python\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "D:\python\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "D:\python\lib\site-packages\virtualenv.py", line 2632, in <module> main() File "D:\python\lib\site-packages\virtualenv.py", line 860, in main create_environment( File "D:\python\lib\site-packages\virtualenv.py", line 1156, in create_environment install_python(home_dir, lib_dir, inc_dir, bin_dir, site_packages=site_packages, clear=clear, symlink=symlink) File "D:\python\lib\site-packages\virtualenv.py", line 1719, in install_python fix_local_scheme(home_dir, symlink) File "D:\python\lib\site-packages\virtualenv.py", line 1805, in fix_local_scheme if sysconfig._get_default_scheme() == "posix_local": AttributeError: module 'sysconfig' has no attribute '_get_default_scheme'. Did you mean: 'get_default_scheme'? Here are the commands I ran for activating the virtual environment and none of them worked: D:\voice-cloning\real-time-voice-cloning>venv310/scripts/activate 'venv310' is not recognized as an internal or external command, operable program or batch file. 
D:\voice-cloning\real-time-voice-cloning>python venv310/scripts/activate python: can't open file 'D:\\voice-cloning\\real-time-voice-cloning\\venv310\\scripts\\activate': [Errno 2] No such file or directory D:\voice-cloning\real-time-voice-cloning>venv310/Scripts/activate 'venv310' is not recognized as an internal or external command, operable program or batch file. D:\voice-cloning\real-time-voice-cloning>activate 'activate' is not recognized as an internal or external command, operable program or batch file. D:\voice-cloning\real-time-voice-cloning>cd venv310 D:\voice-cloning\real-time-voice-cloning\venv310>.\Scripts\activate '.\Scripts\activate' is not recognized as an internal or external command, operable program or batch file. What is missing here? Thanks. | Using python 3.10.2 and virtualenv 16.7.5 gives me the same error. Looks like virtualenv 16.7.5 is too old for 3.10.2. Upgrade you package with this command and everything will work out. pip install --upgrade virtualenv | 5 | 11 |
71,523,205 | 2022-3-18 | https://stackoverflow.com/questions/71523205/how-to-install-multiple-versions-of-python-in-windows | Up until recently I have only worked with one version of Python and used virtual environments every now and then. Now, I am working with some libraries that require older version of Python. So, I am very confused. Could anyone please clear up some of my confusion? How do I install multiple Python versions? I initially had Python version 3.8.x but upgraded to 3.10.x last month. There is currently only that one version on my PC now. I wanted to install one of the Python 3.8.x version and went to https://www.python.org/downloads/. It lists a lot of versions and subversions like 3.6, 3.7, 3.8 etc. etc. with 3.8.1, 3.8.2 till 3.8.13. Which one should I pick? I actually went ahead with 3.8.12 and downloaded the Tarball on the page: https://www.python.org/downloads/release/python-3812/ I extracted the tarball (23.6MB) and it created a folder with a setup.py file. Is Python 3.8.12 now installed? Clicking on the setup.py file simply flashes the terminal for a second. I have a few more questions. Hopefully, they won't get me downvoted. I am just confused and couldn't find proper answers for them. Why does Python have such heavy dependency on the exact versions of libraries and packages etc? For example, this question How can I run Mozilla TTS/Coqui TTS training with CUDA on a Windows system?. This seems very beginner unfriendly. Slightly mismatched package version can prevent any program from running. Do virtual environments copy all the files from the main Python installation to create a virtual environment and then install specific packages inside it? Isn't that a lot of wasted resources in duplication because almost all projects require there own virtual environment. Although related to an other question, this question is unique as it asks how to install different versions compared to the other question that addresses primarily how to run different versions. | Your questions depend a bit on "all the other software". For example, as @leiyang indicated, the answer will be different if you use conda vs just pip on vanilla CPython (the standard Windows Python). I'm also going to assume you're actually on Windows, because on Linux I would recommend looking at pyenv. There is a pyenv-win, which may be worth looking into, but I don't use it myself because it doesn't play as nice if you also want (mini)conda environments. 1. (a) How do I install multiple Python versions? Simply download the various installers and install them in sensible locations. E.g. "C:\Program Files\Python39" for Python 3.9, or some other location where you're allowed to install software. Don't have Python add itself to the PATH though, since that'll only find the last version to do so and can really confuse things. Also, you probably want to use virtual environments consistently, as this ties a specific project very clearly to a specific Python version, avoiding future confusion or problems. 1. (b) "3.8.1, 3.8.2 till 3.8.13" which should I pick? Always pick the latest 3.x.y, so if there's a 3.8.13 for Windows, but no 3.8.14, pick that. Check if the version is actually available for your operating system, sometimes there are later versions for one OS, but not for another. The reason is that between a verion like 3.6 and 3.7, there may be major changes that change how Python works. Generally, there will be backwards compatibility, but some changes may break how some of your packages work. 
However, when going up a minor version, there won't be any such breaking changes, just fixes and additions that don't get in the way of what was already there. A change from 2.x to 3.x only happens if the language itself goes through a major change, and rarely happens (and perhaps never will again, depending on who you ask). An exception to the "no minor version change problems" is of course if you run some script that very specifically relies on something that was broken in 3.8.6, but no fixed in 3.8.7+ (as an example). However, that's very bad coding, to rely on what's broken and not fixing it later, so only go along with that if you have no other recourse. Otherwise, just the latest minor version of any version you're after. Also: make sure you pick the correct architecture. If there's no specific requirement, just pick 64-bit, but if your script needs to interact with other installed software at the binary level, it may require you to install 32-bit Python (and 32-bit packages as well). If you have no such requirement, 64-bit allows more memory access and has some other benefits on modern computers. 2. Why does Python have such heavy dependency on the exact versions of libraries and packages etc? It's not just Python, this is true for many languages. It's just more visible to the end user for Python, because you run it as an interpreted language. It's only compiled at the very last moment, on the computer it's running on. This has the advantage that the code can run on a variety of computers and operating systems, but the downside that you need the right environment where you're running it. For people who code in languages like C++, they have to deal with this problem when they're coding, but target a much smaller number of environments (although there's still runtimes to contend with, and DirectX versions, etc.). Other languages just roll everything up into the program that's being distributed, while a Python script by itself can be tiny. It's a design choice. There are a lot of tools to help you automate the process though and well-written packages will make the process quite painless. If you feel Python is very shakey when it comes to this, that's probable to blame on the packages or scripts you're using, not really the language. The only fault of the language is that it makes it very easy for developers to make such a mess for you and make your life hard with getting specific requirements. Look for alternatives, but if you can't avoid using a specific script or package, once you figure out how to install or use it, document it or better yet, automate it so you don't have to think about it again. 3. Do virtual environments copy all the files from the main Python installation to create a virtual environment and then install specific packages inside it? Isn't that a lot of wasted resources in duplication because almost all projects require there own virtual environment. Not all of them, but quite a few of them. However, you still need the original installation to be present on the system. Also, you can't pick up a virtual environment and put it somewhere else, not even on the same PC without some careful changes (often better to just recreate it). You're right that this is a bit wasteful - but this is a difficult choice. 
Either Python would be even more complicated, having to manage many different version of packages in a single environment (Java developers will be able to tell you war stories about this, with their dependency management - or wax lyrically about it, once they get it themselves). Or you get what we have: a bit wasteful, but in the end diskspace is a lot cheaper than your time. And unlike your time, diskspace is almost infinitely expandable. You can share virtual environments between very similar projects though, but especially if you get your code from someone else, it's best to not have to worry and just give up a few dozen MB for the project. On the upside: you can just delete a virtual environment directory and that pretty much gets rid of the whole things. Some applications like PyCharm may remember that it was once there, but other than that, that's the virtual environment gone. | 8 | 6 |
71,513,504 | 2022-3-17 | https://stackoverflow.com/questions/71513504/upgrade-python-in-a-virtual-environment-with-m-venv-upgrade | I have multiple python versions managed by pyenv. I want to upgrade one of my virtual environments from 3.7.13 to 3.10.3 with the ββupgradeβ option as: >deactivate >pyenv local 3.10.3 >python3 -m venv --upgrade .venv >. .venv/bin/activate > python -V Python 3.7.13 I expect the 'βupgrade' would change the python version to 3.10.3 but it did not it stayed with 3.7.13 I understand it may be easier just discard and recreate the virtual environment, but I really want to learn how 'βupgrade' should work | If you read the official documentation of the venv module, then the description of the --upgrade option is very specific: "... assuming Python has been upgraded in-place." I think this implies that it has to be the same Python installation that you originally created the virtual environment with, for the --upgrade flag to work. Each version of Python installed by pyenv is installed separately, so I wouldn't expect the --upgrade flag to work in this case. That being said, as far as I know, venv does little more than installing a couple of basic scripts and configuration files, and some bunch of symbolic links. The source code of the venv module seems fairly straightforward, and all that the --upgrade switch does is skip the setup scripts. I think you could manually "hack" your way through this by changing some symbolic links and changing some directory names here and there. However, it's not how venv should be used. So, yeah, save yourself the misery, and discard the old virtual environment and just build a new one. | 5 | 3 |
71,518,406 | 2022-3-17 | https://stackoverflow.com/questions/71518406/how-to-bypass-cloudflare-browser-checking-selenium-python | I am trying to access a site using selenium Python. But the site is checking and checking continuously by cloudflare. No other page is coming. Check the screenshot here. I have tried undetected chrome but it is not working at all. | By undetected chrome do you mean undetected chromedriver?: Anyways, undetected-chromedriver works for me: Undetected chromedriver Github: https://github.com/ultrafunkamsterdam/undetected-chromedriver pip install undetected-chromedriver Code that gets a cloudflare protected site: import undetected_chromedriver as uc driver = uc.Chrome(use_subprocess=True) driver.get('https://nowsecure.nl') My POV Quick setup code that logs into your google account: Github: https://github.com/xtekky/google-login-bypass import undetected_chromedriver as uc from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # ---------- EDIT ---------- email = 'email\n' # replace email password = 'password\n' # replace password # ---------- EDIT ---------- driver = uc.Chrome(use_subprocess=True) wait = WebDriverWait(driver, 20) url = 'https://accounts.google.com/ServiceLogin?service=accountsettings&continue=https://myaccount.google.com%3Futm_source%3Daccount-marketing-page%26utm_medium%3Dgo-to-account-button' driver.get(url) wait.until(EC.visibility_of_element_located((By.NAME, 'identifier'))).send_keys(email) wait.until(EC.visibility_of_element_located((By.NAME, 'password'))).send_keys(password) print("You're in!! enjoy") # [ ---------- paste your code here ---------- ] | 10 | 9 |
71,510,827 | 2022-3-17 | https://stackoverflow.com/questions/71510827/numba-when-to-use-nopython-true | I have the following setup: import numpy as np import matplotlib.pyplot as plt import timeit import numba @numba.jit(nopython=True, cache=True) def f(x): summ = 0 for i in x: summ += i return summ @numba.jit(nopython=True) def g21(N, locs): rvs = np.random.normal(loc=locs, scale=locs, size=N) res = f(rvs) return res @numba.jit(nopython=False) def g22(N, locs): rvs = np.random.normal(loc=locs, scale=locs, size=N) res = f(rvs) return res g22 and g21 are the exact same function, just that one of them has nopython=True and the other nopython=False. Now I give them an input. If locs is a scalar, then Numba should be able to compile everything since it supports numpy.random.normal() with this signature. However, if locs is an array, Numba does not support this signature and should go back to the Python interpreter. I run this first just to compile the functions N = 10_000 g22(N, 3) g22(N, np.linspace(0,1,N)) g21(N, 3) # g21(N, np.linspace(0,1,N)) # returns an error Now I run a speed comparison %timeit g21(N, 3) %timeit g22(N, 3) %timeit g22(N, np.linspace(0,1,N)) which returns 274 µs ± 3.43 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) 270 µs ± 5.38 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) 421 µs ± 54.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) It makes sense that g22(N, np.linspace(0,1,N)) is slowest since it goes back to the Python interpreter. However, what I don't understand is that g21(N, 3) is roughly the same speed as g22(N, 3), even though one has nopython=True and the other not. But g22(N,3) has the big advantage that it can take another argument, namely g22(N, np.linspace(0,1,N)), so it's more versatile; however, at the same time there is no speed penalty to having nopython=False. So my questions are: in this case, what is the use of using nopython=True, if a function with nopython=False achieves the same speed? in which specific case is nopython=True better than nopython=False? | in this case, what is the use of using nopython=True, if a function with nopython=False achieves the same speed? in which specific case is nopython=True better than nopython=False? The documentation states: Numba has two compilation modes: nopython mode and object mode. The former produces much faster code, but has limitations that can force Numba to fall back to the latter. To prevent Numba from falling back, and instead raise an error, pass nopython=True. Note that Numba will try to compile the code to a native binary in both modes. However, nopython produces an error when this is not possible, while the other produces a warning and causes fallback code to be used. For some applications, performance can be critical and so you really do not want the fallback code to be called. This is the case for high-performance applications, for example. Having an error in this case is better than having code which runs for days instead of a few minutes on an expensive machine (like a supercomputer or a computing server). Using a different version of Numba can silently cause a fallback on some machines due to a feature not being supported. I personally always use the nopython mode to prevent such cases (as the fallback code is generally too slow to be useful) and I consider the object mode a bit useless. Put shortly, nopython offers stronger guarantees about performance. | 5 | 7 |
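To make the failure mode concrete, a hedged sketch of the difference: with nopython=True the unsupported signature from the question fails loudly when the function is first compiled for those argument types, instead of silently falling back to slow object mode. Exception handling is kept generic on purpose, since the exact exception class depends on the Numba version.

```python
import numba
import numpy as np


@numba.jit(nopython=True)
def g_strict(N, locs):
    # Array-valued loc/scale is the signature the question says nopython mode rejects.
    return np.random.normal(loc=locs, scale=locs, size=N).sum()


N = 10_000
locs = np.linspace(0, 1, N)

try:
    g_strict(N, locs)
except Exception as e:  # typically a TypingError in recent Numba versions
    print("nopython=True failed fast:", type(e).__name__)
```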
71,517,750 | 2022-3-17 | https://stackoverflow.com/questions/71517750/how-to-replace-pandas-append-with-concat | Can you help me replace append with concat in this code? saida = pd.DataFrame() for x, y in lCodigos.items(): try: df = consulta_bc(x) logging.info(f'Indice {y} lido com sucesso.') except Exception as err: logging.error(err) logging.warning('Rotina Indice falhou!') exit() df['nome'] = y saida = saida.append(df) print(saida) | Just save the "dataframe parts" using a list and use pd.concat on that list of dataframes at the end: saida = list() # Now use a list for x, y in lCodigos.items(): # ... your original code saida.append(df) saida = pd.concat(saida) # You can now create the dataframe | 5 | 1 |
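A runnable miniature of the same pattern with dummy data, since the original snippet depends on consulta_bc and lCodigos which are not shown; the stand-in values are assumptions purely for illustration.

```python
import pandas as pd


def consulta_bc(codigo):                 # stand-in for the real API call
    return pd.DataFrame({"valor": [codigo, codigo * 2]})


lCodigos = {1: "indice_a", 2: "indice_b"}

saida = []                               # collect the parts in a plain list...
for x, y in lCodigos.items():
    df = consulta_bc(x)
    df["nome"] = y
    saida.append(df)

saida = pd.concat(saida, ignore_index=True)   # ...and concatenate once at the end
print(saida)
```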
71,516,511 | 2022-3-17 | https://stackoverflow.com/questions/71516511/python-api-request-to-gitlab-unexpectedly-returns-empty-result | import requests response = requests.get("https://gitlab.com/api/v4/users/ahmed_sh/projects") print(response.status_code) # 200 print(response.text) # [] print(response.json()) # [] I'm trying to get a list of my GitLab repo projects using python API, but the outputs are nothing! Although, when I use the browser, I got a non-empty response. How can I solve this problem? | This is because you don't have any public projects in your user namespace. If you want to see your private projects in your namespace, you'll need to authenticate with the API by passing a personal access token in the PRIVATE-TOKEN header. Note, this also won't show projects you work on in other namespaces. headers = {'PRIVATE-TOKEN': 'Your API key here!'} resp = requests.get('https://gitlab.com/api/v4/users/ahmed_sh/projects', headers=headers) print(resp.json()) | 5 | 5 |
71,504,533 | 2022-3-16 | https://stackoverflow.com/questions/71504533/pip-install-from-github-broken-after-github-keys-policy-update | I would normally install a Python repository from Github using (for example): pip install git+git://github.com/Artory/drf-hal-json@master And concordantly, my "requirements.txt" would have git+git://github.com/Artory/drf-hal-json@master in it somewhere. This failed today. The full traceback is below, but the relevant part is: The unauthenticated git protocol on port 9418 is no longer supported. Thanks Microsoft. The traceback points to this link about the update. Most of the page at the link talks about how the update is unlikely to affect many people (thanks again Microsoft), and the rest of it involves cryptography that I'm far too noob to understand. The section titled "git://" simply reads: On the Git protocol side, unencrypted git:// offers no integrity or authentication, making it subject to tampering. We expect very few people are still using this protocol, especially given that you canβt push (itβs read-only on GitHub). Weβll be disabling support for this protocol. This doesn't help me understand how to update my requirements.txt to make it work again. Can you tell me how to update my requirements.txt to make it work again? Full traceback below: (venv) neil~/Documents/Code/web_app$ pip install git+git://github.com/Artory/drf-hal-json@master Collecting git+git://github.com/Artory/drf-hal-json@master Cloning git://github.com/Artory/drf-hal-json (to revision master) to /tmp/pip-req-build-zowfe130 Running command git clone -q git://github.com/Artory/drf-hal-json /tmp/pip-req-build-zowfe130 fatal: remote error: The unauthenticated git protocol on port 9418 is no longer supported. Please see https://github.blog/2021-09-01-improving-git-protocol-security-github/ for more information. WARNING: Discarding git+git://github.com/Artory/drf-hal-json@master. Command errored out with exit status 128: git clone -q git://github.com/Artory/drf-hal-json /tmp/pip-req-build-zowfe130 Check the logs for full command output. ERROR: Command errored out with exit status 128: git clone -q git://github.com/Artory/drf-hal-json /tmp/pip-req-build-zowfe130 Check the logs for full command output. WARNING: You are using pip version 21.2.4; however, version 22.0.4 is available. You should consider upgrading via the '/home/neil/Documents/Code/web_app/venv/bin/python -m pip install --upgrade pip' command. | In the URL you give to pip, the git+git says to access a Git repository (the first git) over the unauthenticated git protocol (the second git). Assuming you want to continue to use anonymous access here, you can simply rewrite the command to use git+https instead, which access a Git repository over the secure HTTPS protocol. So your command would look like this: $ pip install git+https://github.com/Artory/drf-hal-json@master I just tested in a VM, and that appears to work. If you have other such URLs, changing the same way should be effective. | 5 | 7 |
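If there are many such pins, a small throwaway script can rewrite a requirements.txt in place. This is just a sketch of a simple string replacement, assuming the only git:// URLs in the file are GitHub ones you want switched to HTTPS.

```python
from pathlib import Path

req = Path("requirements.txt")
text = req.read_text()

# git+git://github.com/...  ->  git+https://github.com/...
fixed = text.replace("git+git://", "git+https://")

if fixed != text:
    req.write_text(fixed)
    print("Rewrote git:// pins to https://")
```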
71,501,140 | 2022-3-16 | https://stackoverflow.com/questions/71501140/type-hinting-for-scipy-sparse-matrices | How do you type hint scipy sparse matrices, such as CSR, CSC, LIL etc.? Below is what I have been doing, but it doesn't feel right: def foo(mat: scipy.sparse.csr.csr_matrix): # Do whatever What do we do if our function can accept multiple types of scipy sparse matrices (i.e any of them)? | All of csr, csc, lil are types of scipy.sparse.base.spmatrix: from scipy import sparse c1 = sparse.lil.lil_matrix c2 = sparse.csr.csr_matrix c3 = sparse.csc.csc_matrix print(c1.__bases__[0]) print(c2.__base__.__base__.__base__) print(c3.__base__.__base__.__base__) Output: <class 'scipy.sparse.base.spmatrix'> <class 'scipy.sparse.base.spmatrix'> <class 'scipy.sparse.base.spmatrix'> So you have an option to: def foo(mat: scipy.sparse.base.spmatrix): # Do whatever | 6 | 4 |
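Two hedged variants of the same idea: hint with the common base class so any sparse matrix is accepted, or spell out an explicit Union if only certain formats are supported. scipy.sparse.spmatrix is the publicly documented base class, so the deep scipy.sparse.base path from the answer is not strictly needed.

```python
from typing import Union

import scipy.sparse


# Accept any sparse matrix via the shared base class:
def foo(mat: scipy.sparse.spmatrix) -> None:
    print(type(mat).__name__, mat.shape)


# Or restrict to specific formats with a Union:
SparseMat = Union[scipy.sparse.csr_matrix, scipy.sparse.csc_matrix, scipy.sparse.lil_matrix]


def bar(mat: SparseMat) -> None:
    print(mat.nnz)


foo(scipy.sparse.csr_matrix((3, 4)))
bar(scipy.sparse.lil_matrix((2, 2)))
```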
71,500,106 | 2022-3-16 | https://stackoverflow.com/questions/71500106/how-to-implement-t-sne-in-tensorflow | I am trying to implement a t-SNE visualization in tensorflow for an image classification task. What I mainly found on the net have all been implemented in Pytorch. See here. Here is my general code for training purposes which works completely fine, just want to add t-SNE visualization to it: import pandas as pd import numpy as np import tensorflow as tf import cv2 from tensorflow import keras from tensorflow.keras import layers, Input from tensorflow.keras.layers import Dense, InputLayer, Flatten from tensorflow.keras.models import Sequential, Model from matplotlib import pyplot as plt import matplotlib.image as mpimg from PIL import Image from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img . . . base_model=tf.keras.applications.ResNet152( include_top=False, weights='imagenet', input_tensor=None, input_shape=None, pooling=None) . . . base_model.trainable = False # Create new model on top. inputs = tf.keras.Input(shape=(IMG_WIDTH, IMG_HEIGHT, 3)) x = base_model(inputs, training=False) x=keras.layers.Flatten()(x) x = keras.layers.Dense(64)(x) x=layers.Activation('relu')(x) x=keras.layers.Flatten()(x) x = keras.layers.Dense(32)(x) x=layers.Activation('relu')(x) x = keras.layers.Dense(2)(x) outputs=layers.Activation('softmax')(x) model=keras.Model(inputs, outputs) vaidation_datagen = ImageDataGenerator(rotation_range=90, zoom_range=0.2, horizontal_flip=True, vertical_flip=True) train_generator = train_datagen.flow_from_directory( train_path, # this is the target directory target_size=target_size, # all images will be resized to the target size color_mode='rgb', batch_size=batch_size, shuffle=True, class_mode='categorical', interpolation='nearest', seed=42) # since we use binary_crossentropy loss, we need binary labels validation_generator = vaidation_datagen.flow_from_directory( validation_path, # this is the target directory target_size=target_size, # all images will be resized to the target size color_mode='rgb', batch_size=batch_size, shuffle=True, class_mode='categorical', interpolation='nearest', seed=42) model.compile(optimizer, loss , metrics) model_checkpoint = tf.keras.callbacks.ModelCheckpoint((model_path+model_filename), monitor='val_loss',verbose=1, save_best_only=True) model.summary() history = model.fit( train_generator, steps_per_epoch = num_of_train_img_raw//batch_size, epochs = epochs, validation_data = validation_generator, # relates to the validation data. 
validation_steps = num_of_val_img_raw//batch_size, callbacks=[model_checkpoint], use_multiprocessing = False) Based on the reference link provided, it seems that I need to first save the features, and from there apply the t-SNE as follows (this part is copied and pasted from here): tsne = TSNE(n_components=2).fit_transform(features) # scale and move the coordinates so they fit [0; 1] range def scale_to_01_range(x): # compute the distribution range value_range = (np.max(x) - np.min(x)) # move the distribution so that it starts from zero # by extracting the minimal value from all its values starts_from_zero = x - np.min(x) # make the distribution fit [0; 1] by dividing by its range return starts_from_zero / value_range # extract x and y coordinates representing the positions of the images on T-SNE plot tx = tsne[:, 0] ty = tsne[:, 1] tx = scale_to_01_range(tx) ty = scale_to_01_range(ty) # initialize a matplotlib plot fig = plt.figure() ax = fig.add_subplot(111) # for every class, we'll add a scatter plot separately for label in colors_per_class: # find the samples of the current class in the data indices = [i for i, l in enumerate(labels) if l == label] # extract the coordinates of the points of this class only current_tx = np.take(tx, indices) current_ty = np.take(ty, indices) # convert the class color to matplotlib format color = np.array(colors_per_class[label], dtype=np.float) / 255 # add a scatter plot with the corresponding color and label ax.scatter(current_tx, current_ty, c=color, label=label) # build a legend using the labels we set previously ax.legend(loc='best') # finally, show the plot plt.show() I would be grateful of your help to connect these two piece. | You could try something like the following: Train your model import tensorflow as tf import pathlib dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True) data_dir = pathlib.Path(data_dir) batch_size = 32 train_ds = tf.keras.utils.image_dataset_from_directory( data_dir, seed=123, image_size=(180, 180), batch_size=batch_size) model = tf.keras.Sequential([ tf.keras.layers.Rescaling(1./255, input_shape=(180, 180, 3)), tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Dropout(0.2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(5) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) epochs=10 history = model.fit( train_ds, epochs=epochs ) Make predictions on last and second last layer of your model and visualize from sklearn.manifold import TSNE import numpy as np from matplotlib import pyplot as plt model2 = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output) test_ds = np.concatenate(list(train_ds.take(5).map(lambda x, y : x))) # get five batches of images and convert to numpy array features = model2(test_ds) labels = np.argmax(model(test_ds), axis=-1) tsne = TSNE(n_components=2).fit_transform(features) def scale_to_01_range(x): value_range = (np.max(x) - np.min(x)) starts_from_zero = x - np.min(x) return starts_from_zero / value_range tx = tsne[:, 0] ty = tsne[:, 1] tx = scale_to_01_range(tx) ty = 
scale_to_01_range(ty) colors = ['red', 'blue', 'green', 'brown', 'yellow'] classes = train_ds.class_names print(classes) fig = plt.figure() ax = fig.add_subplot(111) for idx, c in enumerate(colors): indices = [i for i, l in enumerate(labels) if idx == l] current_tx = np.take(tx, indices) current_ty = np.take(ty, indices) ax.scatter(current_tx, current_ty, c=c, label=classes[idx]) ax.legend(loc='best') plt.show() model2 outputs the features you want to visualize and model outputs the predicted classes with the help of np.argmax. Also, this example is using a dataset with 5 classes, that is why there are 5 different colors. In your case, you only have 2 classes and therefore 2 colors. | 5 | 7 |
71,500,756 | 2022-3-16 | https://stackoverflow.com/questions/71500756/what-is-pythons-namespace-object | I know what namespaces are. But when running import argparse parser = argparse.ArgumentParser() parser.add_argument('bar') parser.parse_args(['XXX']) # outputs: Namespace(bar='XXX') What kind of object is Namespace(bar='XXX')? I find this totally confusing. Reading the argparse docs, it says "Most ArgumentParser actions add some value as an attribute of the object returned by parse_args()". Shouldn't this object then appear when running globals()? Or how can I introspect it? | Samwise's answer is very good, but let me answer the other part of the question. Or how can I introspect it? Being able to introspect objects is a valuable skill in any language, so let's approach this as though Namespace is a completely unknown type. >>> obj = parser.parse_args(['XXX']) # outputs: Namespace(bar='XXX') Your first instinct is good. See if there's a Namespace in the global scope, which there isn't. >>> Namespace Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'Namespace' is not defined So let's see the actual type of the thing. The Namespace(bar='XXX') printer syntax is coming from a __str__ or __repr__ method somewhere, so let's see what the type actually is. >>> type(obj) <class 'argparse.Namespace'> and its module >>> type(obj).__module__ 'argparse' Now it's a pretty safe bet that we can do from argparse import Namespace and get the type. Beyond that, we can do >>> help(argparse.Namespace) in the interactive interpreter to get detailed documentation on the Namespace class, all with no Internet connection necessary. | 15 | 15 |
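A couple of extra introspection tricks that are handy here; vars() is particularly useful because a Namespace is essentially a plain attribute container, and you can also construct one directly for tests.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("bar")
ns = parser.parse_args(["XXX"])

print(type(ns))            # <class 'argparse.Namespace'>
print(vars(ns))            # {'bar': 'XXX'} -- the parsed attributes as a dict
print(ns.bar)              # 'XXX'

# A Namespace can also be built by hand, e.g. for unit tests:
manual = argparse.Namespace(bar="YYY", verbose=True)
print(manual.verbose)      # True
```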
71,499,349 | 2022-3-16 | https://stackoverflow.com/questions/71499349/type-hinting-for-array-like | What would be the correct type hint for a function that accepts an one dimensional array-like object? More specifically, my function uses np.percentile and I would like to 'match' np.percentile's flexibility in terms of the kind of array it accepts (List, pandas Series, numpy array, etc.). Below illustrates what I'm looking for: def foo(arr: array-like) -> float: p = np.percentile(arr, 50) return p | Use numpy.typing.ArrayLike: from numpy.typing import ArrayLike def foo(arr: ArrayLike) -> float: p = np.percentile(arr, 50) return p | 6 | 13 |
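A quick usage sketch showing that the same annotated function accepts the usual one-dimensional inputs; the pandas import is only there to illustrate that a Series should also satisfy ArrayLike, since it implements the array protocol.

```python
import numpy as np
import pandas as pd
from numpy.typing import ArrayLike


def foo(arr: ArrayLike) -> float:
    return float(np.percentile(arr, 50))


print(foo([1, 2, 3, 4]))                 # plain list
print(foo(np.array([1.0, 2.0, 3.0])))    # numpy array
print(foo(pd.Series([10, 20, 30])))      # pandas Series
```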
71,497,906 | 2022-3-16 | https://stackoverflow.com/questions/71497906/python-sum-values-in-a-list-if-they-share-the-first-word | I have a list as follows, flat_list = ['hello,5', 'mellow,4', 'mellow,2', 'yellow,2', 'yellow,7', 'hello,7', 'mellow,7', 'hello,7'] I would like to get the sum of the values if they share the same word, so the output should be, desired output: l = [('hello',19), ('yellow', 9), ('mellow',13)] so far, I have tried the following, new_list = [v.split(',') for v in flat_list] d = {} for key, value in new_list: if key not in d.keys(): d[key] = [key] d[key].append(value) # getting rid of the first key in value lists val = [val.pop(0) for k,val in d.items()] # summing up the values va = [sum([int(x) for x in va]) for ka,va in d.items()] however for some reason the last sum up does not work and i do not get my desired output | Here is a variant for accomplishing your goal using defaultdict: from collections import defaultdict t = ['hello,5', 'mellow,4', 'mellow,2', 'yellow,2', 'yellow,7', 'hello,7', 'mellow,7', 'hello,7'] count = defaultdict(int) for name_number in t: name, number = name_number.split(",") count[name] += int(number) You could also use Counter: from collections import Counter count = Counter() for name_number in t: name, number = name_number.split(",") count[name] += int(number) In both cases you can convert the output to a list of tuples using: list(count.items()) # -> [('hello', 19), ('mellow', 13), ('yellow', 9)] I ran your code and I do get the correct results (although not in your desired format). | 5 | 9 |
71,496,253 | 2022-3-16 | https://stackoverflow.com/questions/71496253/pandas-color-cell-based-on-value-of-other-column | I would like to color in red cells of a DataFrame on one column, based on the value of another column. Here is an example: df = pd.DataFrame([ { 'color_A_in_red': True , 'A': 1 }, { 'color_A_in_red': False , 'A': 2 }, { 'color_A_in_red': True , 'A': 2 }, ]) should give: I know how to color a cell of a df in red but only based on the value of this cell, not the value of another cell: df_style = df.style df_style.applymap(func=lambda x: 'background-color: red' if x == 2 else None, subset=['A']) df_style Is there a way to color cells of a DataFrame based on the value of another column ? | Use custom function for DataFrame of styles is most flexible solution here: def highlight(x): c = f"background-color:red" #condition m = x["color_A_in_red"] # DataFrame of styles df1 = pd.DataFrame('', index=x.index, columns=x.columns) # set columns by condition df1.loc[m, 'A'] = c return df1 df.style.apply(highlight, axis=None) | 5 | 5 |
71,491,107 | 2022-3-16 | https://stackoverflow.com/questions/71491107/formatting-guidelines-for-type-aliases | What would be the correct way to format the name of a type aliasβintended to be local to its moduleβaccording to the PEP8 style guide? # mymodule.py from typing import TypeAlias mytype: TypeAlias = int def f() -> mytype: return mytype() def g() -> mytype: return mytype() Should mytype be formatted in CapWords because it introduces a new type similar to creating new classes? Or, should mytype be formatted in all caps because it is treated similarly to a constant? Is there a way to differentiate between type aliases that will remain unchanged (constant) throughout the lifetime of the program and ones that can change (similar to the Final annotation for constants)? Also, should mytype be prefixed with an underscore (as in _mytype) to indicate that the type alias shouldn't be used outside this module? | The PEP Style Guide does not have any explicit guidance on how to format TypeAliases. The guide does contain some rules on type variables, but that's not quite what you're asking for. The next best resource I could find was Google's Python Style Guide, which does happen to contain some guidance on how to name TypeAliases: 3.19.6 Type Aliases You can declare aliases of complex types. The name of an alias should be CapWorded. If the alias is used only in this module, it should be _Private. For example, if the name of the module together with the name of the type is too long: _ShortName = module_with_long_name.TypeWithLongName ComplexMap = Mapping[str, List[Tuple[int, int]]] Other examples are complex nested types and multiple return variables from a function (as a tuple). Under this, the name of your type alias should be MyType if used across multiple modules, or _MyType if only used in the module that it is declared in. With all of this being said, remember that consistency with the existing codebase is what's most important. As the PEP style guide states: A style guide is about consistency. Consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is the most important. | 9 | 7 |
71,429,711 | 2022-3-10 | https://stackoverflow.com/questions/71429711/how-to-run-a-docker-container-with-specific-gpus-using-docker-sdk-for-python | In the command line I am used to run/create containers with specific GPUs using the --gpus argument: docker run -it --gpus '"device=0,2"' ubuntu nvidia-smi The Docker SDK for Python documentation was not very helpful and I could not find a good explanation on how to do the same with the python SDK. Is there a way to do it? | This is how you can run/create docker containers with specific GPUs using the Docker SDK for Python: client.containers.run('ubuntu', "nvidia-smi", device_requests=[ docker.types.DeviceRequest(device_ids=['0','2'], capabilities=[['gpu']])]) This way you can also use other GPU resource options specified here: https://docs.docker.com/config/containers/resource_constraints/ | 10 | 15 |
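For completeness, a hedged sketch of the "all GPUs" case, which mirrors docker run --gpus all; DeviceRequest(count=-1) is the documented way to request every available GPU, but treat the exact kwargs as something to verify against your Docker SDK version.

```python
import docker

client = docker.from_env()

# Roughly equivalent to: docker run --gpus all ubuntu nvidia-smi
output = client.containers.run(
    "ubuntu",
    "nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
print(output.decode())
```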
71,467,630 | 2022-3-14 | https://stackoverflow.com/questions/71467630/fastapi-issues-with-mongodb-typeerror-objectid-object-is-not-iterable | I am having some issues inserting into MongoDB via FastAPI. The below code works as expected. Notice how the response variable has not been used in response_to_mongo(). The model is an sklearn ElasticNet model. app = FastAPI() def response_to_mongo(r: dict): client = pymongo.MongoClient("mongodb://mongo:27017") db = client["models"] model_collection = db["example-model"] model_collection.insert_one(r) @app.post("/predict") async def predict_model(features: List[float]): prediction = model.predict( pd.DataFrame( [features], columns=model.feature_names_in_, ) ) response = {"predictions": prediction.tolist()} response_to_mongo( {"predictions": prediction.tolist()}, ) return response However when I write predict_model() like this and pass the response variable to response_to_mongo(): @app.post("/predict") async def predict_model(features: List[float]): prediction = model.predict( pd.DataFrame( [features], columns=model.feature_names_in_, ) ) response = {"predictions": prediction.tolist()} response_to_mongo( response, ) return response I get an error stating that: TypeError: 'ObjectId' object is not iterable From my reading, it seems that this is due to BSON/JSON issues between FastAPI and Mongo. However, why does it work in the first case when I do not use a variable? Is this due to the asynchronous nature of FastAPI? | As per the documentation: When a document is inserted a special key, "_id", is automatically added if the document doesn't already contain an "_id" key. The value of "_id" must be unique across the collection. insert_one() returns an instance of InsertOneResult. For more information on "_id", see the documentation on _id. Thus, in the second case of the example you provided, when you pass the dictionary to the insert_one() function, Pymongo will add to your dictionary the unique identifier (i.e., ObjectId) necessary to retrieve the data from the database; and hence, when returning the response from the endpoint, the ObjectId fails getting serialized, since, as described in this answer in detail, FastAPI, by default, will automatically convert that return value into JSON-compatible data using the jsonable_encoder (to ensure that objects that are not serializable are converted to a str), and then return a JSONResponse, which uses the standard json library to serialize the data. Solution 1 Use the approach demonstrated here, by having the ObjectId converted to str by default, and hence, you can return the response as usual inside your endpoint. # place these at the top of your .py file import pydantic from bson import ObjectId pydantic.json.ENCODERS_BY_TYPE[ObjectId]=str return response # as usual Solution 2 Dump the loaded BSON to valid JSON string and then reload it as dict, as described here and here. from bson import json_util import json response = json.loads(json_util.dumps(response)) return response Solution 3 Define a custom JSONEncoder, as described here, to convert the ObjectId into str: import json from bson import ObjectId class JSONEncoder(json.JSONEncoder): def default(self, o): if isinstance(o, ObjectId): return str(o) return json.JSONEncoder.default(self, o) response = JSONEncoder().encode(response) return response Solution 4 You can have a separate output model without the 'ObjectId' (_id) field, as described in the documentation.
You can declare the model used for the response with the parameter response_model in the decorator of your endpoint. Example: from pydantic import BaseModel class ResponseBody(BaseModel): name: str age: int @app.get('/', response_model=ResponseBody) def main(): # response sample response = {'_id': ObjectId('53ad61aa06998f07cee687c3'), 'name': 'John', 'age': '25'} return response Solution 5 Remove the "_id" entry from the response dictionary before returning it (see here on how to remove a key from a dict): response.pop('_id', None) return response | 7 | 15 |
71,416,383 | 2022-3-9 | https://stackoverflow.com/questions/71416383/python-asyncio-cancelling-a-to-thread-task-wont-stop-the-thread | With the following snippet, I can't figure why the infiniteTask is not cancelled (it keeps spamming "I'm still standing") In debug mode, I can see that the Task stored in unfinished is indeed marked as Cancelled but obiously the thread is not cancelled / killed. Why is the thread not killed when the wrapping task is cancelled ? What should I do to stop the thread ? import time import asyncio def quickTask(): time.sleep(1) def infiniteTask(): while True: time.sleep(1) print("I'm still standing") async def main(): finished, unfinished = await asyncio.wait({ asyncio.create_task(asyncio.to_thread(quickTask)), asyncio.create_task(asyncio.to_thread(infiniteTask)) }, return_when = "FIRST_COMPLETED" ) for task in unfinished: task.cancel() await asyncio.wait(unfinished) print(" finished : " + str(len(finished))) # print '1' print("unfinished : " + str(len(unfinished))) # print '1' asyncio.run(main()) | Cause If we check the definition of asyncio.to_thread(): # python310/Lib/asyncio/threads.py # ... async def to_thread(func, /, *args, **kwargs): """Asynchronously run function *func* in a separate thread. Any *args and **kwargs supplied for this function are directly passed to *func*. Also, the current :class:`contextvars.Context` is propagated, allowing context variables from the main thread to be accessed in the separate thread. Return a coroutine that can be awaited to get the eventual result of *func*. """ loop = events.get_running_loop() ctx = contextvars.copy_context() func_call = functools.partial(ctx.run, func, *args, **kwargs) return await loop.run_in_executor(None, func_call) It's actually a wrapper of loop.run_in_executor. If we then go into how asyncio's test handle run_in_executor: # python310/Lib/test/test_asyncio/threads.py # ... class EventLoopTestsMixin: # ... def test_run_in_executor_cancel(self): called = False def patched_call_soon(*args): nonlocal called called = True def run(): time.sleep(0.05) f2 = self.loop.run_in_executor(None, run) f2.cancel() self.loop.run_until_complete( self.loop.shutdown_default_executor()) self.loop.close() self.loop.call_soon = patched_call_soon self.loop.call_soon_threadsafe = patched_call_soon time.sleep(0.4) self.assertFalse(called) You can see it will wait for self.loop.shutdown_default_executor(). Now let's see how it looks like. # event.pyi # ... class BaseEventLoop(events.AbstractEventLoop): # ... async def shutdown_default_executor(self): """Schedule the shutdown of the default executor.""" self._executor_shutdown_called = True if self._default_executor is None: return future = self.create_future() thread = threading.Thread(target=self._do_shutdown, args=(future,)) thread.start() try: await future finally: thread.join() def _do_shutdown(self, future): try: self._default_executor.shutdown(wait=True) self.call_soon_threadsafe(future.set_result, None) except Exception as ex: self.call_soon_threadsafe(future.set_exception, ex) Here, we can see it creates another thread to wait for _do_shutdown, which then runs self._default_executor.shutdown with wait=True parameter. Then where the shutdown is implemented: # Python310/Lib/concurrent/futures/thread.py # ... class ThreadPoolExecutor(_base.Executor): # ... def shutdown(self, wait=True, *, cancel_futures=False): with self._shutdown_lock: self._shutdown = True if cancel_futures: # Drain all work items from the queue, and then cancel their # associated futures. 
while True: try: work_item = self._work_queue.get_nowait() except queue.Empty: break if work_item is not None: work_item.future.cancel() # Send a wake-up to prevent threads calling # _work_queue.get(block=True) from permanently blocking. self._work_queue.put(None) if wait: for t in self._threads: t.join() When wait=True, it waits for all threads to stop gracefully. From all these we can't see any effort to actually cancel a thread. To quote from the Trio documentation: Cancellation is a tricky issue here, because neither Python nor the operating systems it runs on provide any general mechanism for cancelling an arbitrary synchronous function running in a thread. This function will always check for cancellation on entry, before starting the thread. But once the thread is running, there are two ways it can handle being cancelled: If cancellable=False, the function ignores the cancellation and keeps going, just like if we had called sync_fn synchronously. This is the default behavior. If cancellable=True, then this function immediately raises Cancelled. In this case the thread keeps running in the background; we just abandon it to do whatever it's going to do, and silently discard any return value or errors that it raises. So, from all this we can learn that there's no way to terminate an infinite loop running in a thread. Workaround Now that we know we have to design what's going to run in a thread with a bit more care, we need a way to signal the thread that we want to stop. We can use threading.Event for such cases. (Originally I wrote this answer with asyncio.Event, but that is not thread-safe because we're moving function execution to another thread, so it's probably better not to use it.) import time import asyncio import threading def blocking_func(event: threading.Event): while not event.is_set(): time.sleep(1) print("I'm still standing") async def main(): event = threading.Event() asyncio.create_task(asyncio.to_thread(blocking_func, event)) await asyncio.sleep(5) # now let's stop event.set() asyncio.run(main()) By checking the event on every loop iteration we can see the program terminating gracefully. I'm still standing I'm still standing I'm still standing I'm still standing I'm still standing I'm still standing Process finished with exit code 0 | 5 | 10 |
71,443,345 | 2022-3-11 | https://stackoverflow.com/questions/71443345/gevent-cant-be-installed-on-m1-mac-using-poetry | I tried to install many dependencies for a virtual environment using poetry. When it gets to gevent (20.9.0) it fails with the following import error: ImportError: dlopen(/private/var/folders/21/wxg5bdsj1w3f3j_9sl_pktbw0000gn/T/pip-build-env-50mwte36/overlay/lib/python3.8/site-packages/_cffi_backend.cpython-38-darwin.so, 0x0002): tried: '/private/var/folders/21/wxg5bdsj1w3f3j_9sl_pktbw0000gn/T/pip-build-env-50mwte36/overlay/lib/python3.8/site-packages/_cffi_backend.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/_cffi_backend.cpython-38-darwin.so' (no such file), '/usr/lib/_cffi_backend.cpython-38-darwin.so' (no such file) I've tried to use pip3 instead, but still had the same problem. | I've had this problem with other libraries as well, and this solution worked sometimes: arch -arm64 <poetry or pip> install <lib to install> Using arch -arm64 allowed me to install the right wheel for the M1 processor | 5 | 7 |
71,486,019 | 2022-3-15 | https://stackoverflow.com/questions/71486019/how-to-drop-row-in-polars-python | How can I add a new feature, like the length of the data frame, and drop rows by index? I want to add a new column where I can count the number of rows available in a data frame, and drop row values using indexing. for i in range(len(df)): if (df['col1'][i] == df['col2'][i]) and (df['col4'][i] == df['col3'][i]): pass elif (df['col1'][i] == df['col3'][i]) and (df['col4'][i] == df['col2'][i]): df['col1'][i] = df['col2'][i] df['col4'][i] = df['col3'][i] else: df = df.drop(i) | Polars doesn't allow much mutation and favors pure data handling, meaning that you create a new DataFrame instead of modifying an existing one. So it helps to think of the data you want to keep instead of the row you want to remove. Below I have written an example that keeps all data except for the 2nd row. Note that the slice will be the fastest of the two and will have zero data copy. df = pl.DataFrame({ "a": [1, 2, 3], "b": [True, False, None] }).with_row_index() print(df) # filter on condition df_a = df.filter(pl.col("index") != 1) # stack two slices df_b = df[:1].vstack(df[2:]) # or via explicit slice syntax # df_b = df.slice(0, 1).vstack(df.slice(2, -1)) assert df_a.equals(df_b) print(df_a) Outputs: shape: (3, 3) ┌───────┬─────┬───────┐ │ index │ a │ b │ │ --- │ --- │ --- │ │ u32 │ i64 │ bool │ ╞═══════╪═════╪═══════╡ │ 0 │ 1 │ true │ │ 1 │ 2 │ false │ │ 2 │ 3 │ null │ └───────┴─────┴───────┘ shape: (2, 3) ┌───────┬─────┬──────┐ │ index │ a │ b │ │ --- │ --- │ --- │ │ u32 │ i64 │ bool │ ╞═══════╪═════╪══════╡ │ 0 │ 1 │ true │ │ 2 │ 3 │ null │ └───────┴─────┴──────┘ | 13 | 18 |
71,470,614 | 2022-3-14 | https://stackoverflow.com/questions/71470614/make-pathlib-glob-and-pathlib-rglob-case-insensitive-for-platform-agnostic-a | I am using pathlib.glob() and pathlib.rglob() to matching files from a directory and its subdirectories, respectively. Target files both are both lower case .txt and upper case .TXT files. According file paths were read from the filesystem as follows: import pathlib directory = pathlib.Path() files_to_create = ['a.txt', 'b.TXT'] suffixes_to_test = ['*.txt', '*.TXT'] for filename in files_to_create: filepath = directory / filename filepath.touch() for suffix in suffixes_to_test: files = [fp.relative_to(directory) for fp in directory.glob(suffix)] print(f'{suffix}: {files}') The majority of the code base was developed on a Windows 10 machine (running Python 3.7.4) and was now moved to macOS Monterey 12.0.1 (running Python 3.10.1). On Windows both files a.txt and b.TXT are matching the patterns: *.txt: [WindowsPath('a.txt'), WindowsPath('b.TXT')] *.TXT: [WindowsPath('a.txt'), WindowsPath('b.TXT')] In contrast, macOS only one file matches each pattern: *.txt: [PosixPath('a.txt')] *.TXT: [PosixPath('b.TXT')] Therefore, I assume that the macOS file system might be case-sensitive, whereas the Windows one is not. According to Apple's User Guide the macOS file system used should not be case-sensitive by default but can be configured as such. Something similar might apply for Linux or Unix file systems as discussed here and here. Despite the reason for this differing behavior, I need to find a platform-agnostic way to get both capital TXT and lower case txt files. A rather naive workaround could be something like this: results = set([fp.relative_to(directory) for suffix in suffixes_to_test for fp in directory.glob(suffix)]) Which gives the desired output on both macOS and Windows: {PosixPath('b.TXT'), PosixPath('a.txt')} However, is there a more elegant way? I could not find any option like ignore_case in pathlib's documentation. | What about something like: suffix = '*.[tT][xX][tT]' files = [fp.relative_to(directory) for fp in directory.glob(suffix)] It is not so generalizable for a "case-insensitive glob", but it works well for limited and specific use-case like your glob of a specific extension. | 8 | 6 |
71,475,054 | 2022-3-14 | https://stackoverflow.com/questions/71475054/structured-bindings-in-python | C++17 introduced the new structured bindings syntax: std::pair<int, int> p = {1, 2}; auto [a, b] = p; Is there something similar in python3? I was thinking of using the "splat" operator to bind class variables to a list, which can be unpacked and assigned to multiple variables like such: class pair: def __init__(self, first, second): self.first = first self.second = second ... p = pair(1, 2) a, b = *p Is this possible? And if so, how would I go by implementing this to work for my own classes? A tuple in Python works as a simple solution to this problem. However, built in types don't give much flexibility in implementing other class methods. | Yes, you can use __iter__ method since iterators can be unpacked too: class pair: def __init__(self, first, second): self.first = first self.second = second def __iter__(self): # Use tuple's iterator since it is the closest to our use case. return iter((self.first, self.second)) p = pair(1, 2) a, b = p print(a, b) # Prints 1 2 | 4 | 6 |
71,467,768 | 2022-3-14 | https://stackoverflow.com/questions/71467768/programmatically-schedule-an-aws-lambda-for-one-time-execution | I have two AWS Lambda functions and I want Lambda A to determine a certain point in time, like 4 May 2022 10:00. Then I want Lambda B to be scheduled to run at this specific point in time. I'm probably able to achieve this by programmatically creating an AWS EventBridge rule with Lambda A and using a cron pattern to match my point in time. Inside Lambda B I would then need to delete that rule, because it's for one-time use. Can one of you think of a more elegant way to achieve this? Thank you for your wisdom! Edit: My point in time is dynamic. I call a public API to find the point for Lambda B to run, so I can't use EventBridge directly. I plan on running Lambda A once a day to see if a new Lambda B run is necessary. | [Edit Nov 2022]: EventBridge Scheduler The new EventBridge Scheduler supports one-time schedules for events. The event will be invoked on the date and time you pass as the schedule expression: at(yyyy-mm-ddThh:mm:ss) in the boto3 EventBridge Scheduler client's create_schedule API. Here are a few more options to schedule a Lambda run at an arbitrary point in time: Step Function A three-state Step Function orchestrates the two Lambdas. Lambda A obtains and outputs a timestamp. A Wait State waits until the timestamp passes. Then Lambda B runs. This approach is precise. N.B. Standard Workflow executions have a duration of up to one year, so they can accommodate long waits. DynamoDB Streams + TTL Create a DynamoDB table with a TTL field and DynamoDB Streams enabled. Set Lambda B as the Streams processor. Lambda A writes a record to the table with the timestamp as TTL. Shortly after the TTL timestamp passes, DynamoDB will trigger Lambda B. This approach won't give you the precision of the first approach, but it will be cheaper if you have loads of events. | 5 | 10 |
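For the EventBridge Scheduler route, a minimal boto3 sketch of a one-time schedule could look like the following. This is only an illustration: the schedule name, the Lambda ARN, and the IAM role ARN (which must let scheduler.amazonaws.com invoke the function) are placeholders, and error handling is omitted.

import json
import boto3

scheduler = boto3.client("scheduler")

def schedule_lambda_b(run_at_iso: str, lambda_b_arn: str, scheduler_role_arn: str) -> None:
    # run_at_iso is the point in time Lambda A computed, e.g. "2022-05-04T10:00:00"
    scheduler.create_schedule(
        Name="run-lambda-b-once",                # placeholder schedule name
        ScheduleExpression=f"at({run_at_iso})",  # one-time schedule expression
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": lambda_b_arn,                 # placeholder ARN of Lambda B
            "RoleArn": scheduler_role_arn,       # role EventBridge Scheduler assumes
            "Input": json.dumps({"reason": "scheduled by Lambda A"}),
        },
    )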
71,435,874 | 2022-3-11 | https://stackoverflow.com/questions/71435874/pip-these-packages-do-not-match-the-hashes-from-the-requirements-file | When I try to install libraries using pip install, sometimes this error message comes up: ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them. This error comes up when I am trying to build multiple images using docker-compose V2. What I have done: pip install --no-cache-dir -r requirements.txt; upgrading pip; trying an old version of pip (20.0.2); changing the version of the affected package; changing the DNS. However, it still comes up randomly. The libraries referred to in the error message also keep changing. Does anyone know the reason for this issue? | So I had the same issue; I tried deleting the pip cache file and using the "--no-cache-dir" argument. None of those worked. I then came across a post that said they were experiencing this error because of a networking issue. So I switched off my VPN and everything worked perfectly! Not sure why this works, but it got the job done. | 12 | 9 |
71,432,620 | 2022-3-11 | https://stackoverflow.com/questions/71432620/os-path-getsize-slow-on-network-drive-python-windows | I have a program that iterates over several thousand PNG files on an SMB shared network drive (a 2TB Samsung 970 Evo+) and adds up their individual file sizes. Unfortunately, it is very slow. After profiling the code, it turns out 90% of the execution time is spent on one function: filesize += os.path.getsize(png) where each png variable is the filepath to a single PNG file (of the several thousands) in a for loop that iterates over each one obtained from glob.glob() (which, to compare, is responsible for 7.5% of the execution time). The code can be found here: https://pastebin.com/SsDCFHLX Clearly there is something about obtaining the filesize over the network that is extremely slow, but I'm not sure what. Is there any way I can improve the performance? It takes just as long using filesize += os.stat(png).st_size too. When the PNG files are stored on the computer locally, the speed is not an issue. It specifically becomes a problem when the files are stored on another machine that I access over the local network with a gigabit ethernet cable. Both are running Windows 10. [2022-08-21 Update] I tried it again with a 10 gigabit network connection this time and noticed something interesting. The very first time I run the code on the network share, the profiler looks like this: but if I run it again afterward, glob() takes up significantly less time while getsize() is about the same: if I instead run this code with the PNG files stored on a local NVMe drive (WD SN750) rather than a newtwork drive, here's what the profiler looks like: It seems like once it is run for a second time on the network share, something has gotten cached that allows glob() to run much faster on the network share, at around the same speed it would run at on the local NVMe drive. But getsize() remains extremely slow, about 1/10th of the speed as when local. Can somebody help me understand these two points: Why is getsize() so much slower on the network share? Is there something that can be done to speed it up? Why is glob() slow the first time on the network share but not when I run it again immediately afterward? | I don't know why getsize() is as slow as it is over the network, however to speed it up you could try calling it concurrently: import os from multiprocessing.pool import ThreadPool def get_total_filesize_concurrently(paths): total = 0 with ThreadPool(10) as pool: for size in pool.imap_unordered(lambda path: os.path.getsize(path), paths): total += size return total print(get_total_filesize_concurrently([ "E:\Path\To\File.txt", "E:\Path\To\File2.txt" "E:\Path\To\File3.txt" ... ])) You can also play around with the number of threads defined in ThreadPool(10) to potentially increase performance even further. | 6 | 3 |
71,470,236 | 2022-3-14 | https://stackoverflow.com/questions/71470236/post-request-response-422-error-detail-loc-body-msg-value-is-n | My POST request continues to fail with 422 response, even though valid JSON is being sent. I am trying to create a web app that receives an uploaded text file with various genetic markers and sends it to the tensorflow model to make a cancer survival prediction. The link to the github project can be found here. Here is the POST request: df_json = dataframe.to_json(orient='records') prediction = requests.post('http://backend:8080/prediction/', json=json.loads(df_json), headers={"Content-Type": "application/json"}) And here is the pydantic model along with the API endpoint: class Userdata(BaseModel): RPPA_HSPA1A : float RPPA_XIAP : float RPPA_CASP7 : float RPPA_ERBB3 :float RPPA_SMAD1 : float RPPA_SYK : float RPPA_STAT5A : float RPPA_CD20 : float RPPA_AKT1_Akt :float RPPA_BAD : float RPPA_PARP1 : float RPPA_MSH2 : float RPPA_MSH6 : float RPPA_ACACA : float RPPA_COL6A1 : float RPPA_PTCH1 : float RPPA_AKT1 : float RPPA_CDKN1B : float RPPA_GATA3 : float RPPA_MAPT : float RPPA_TGM2 : float RPPA_CCNE1 : float RPPA_INPP4B : float RPPA_ACACA_ACC1 : float RPPA_RPS6 : float RPPA_VASP : float RPPA_CDH1 : float RPPA_EIF4EBP1 : float RPPA_CTNNB1 : float RPPA_XBP1 : float RPPA_EIF4EBP1_4E : float RPPA_PCNA : float RPPA_SRC : float RPPA_TP53BP1 : float RPPA_MAP2K1 : float RPPA_RAF1 : float RPPA_MET : float RPPA_TP53 : float RPPA_YAP1 : float RPPA_MAPK8 : float RPPA_CDKN1B_p27 : float RPPA_FRAP1 : float RPPA_RAD50 : float RPPA_CCNE2 : float RPPA_SNAI2 : float RPPA_PRKCA_PKC : float RPPA_PGR : float RPPA_ASNS : float RPPA_BID : float RPPA_CHEK2 : float RPPA_BCL2L1 : float RPPA_RPS6 : float RPPA_EGFR : float RPPA_PIK3CA : float RPPA_BCL2L11 : float RPPA_GSK3A : float RPPA_DVL3 : float RPPA_CCND1 : float RPPA_RAB11A : float RPPA_SRC_Src_pY416 :float RPPA_BCL2L111 : float RPPA_ATM : float RPPA_NOTCH1 : float RPPA_C12ORF5 : float RPPA_MAPK9 : float RPPA_FN1 : float RPPA_GSK3A_GSK3B : float RPPA_CDKN1B_p27_pT198 : float RPPA_MAP2K1_MEK1 : float RPPA_CASP8 : float RPPA_PAI : float RPPA_CHEK1 : float RPPA_STK11 : float RPPA_AKT1S1 : float RPPA_WWTR1 : float RPPA_CDKN1A : float RPPA_KDR : float RPPA_CHEK2_2 : float RPPA_EGFR_pY1173 : float RPPA_EGFR_pY992 : float RPPA_IGF1R : float RPPA_YWHAE : float RPPA_RPS6KA1 : float RPPA_TSC2 : float RPPA_CDC2 : float RPPA_EEF2 : float RPPA_NCOA3 : float RPPA_FRAP1 : float RPPA_AR : float RPPA_GAB2 : float RPPA_YBX1 : float RPPA_ESR1 : float RPPA_RAD51 : float RPPA_SMAD4 : float RPPA_CDH3 : float RPPA_CDH2 : float RPPA_FOXO3 : float RPPA_ERBB2_HER : float RPPA_BECN1 : float RPPA_CASP9 : float RPPA_SETD2 : float RPPA_SRC_Src_mv : float RPPA_GSK3A_alpha : float RPPA_YAP1_pS127 : float RPPA_PRKCA_alpha : float RPPA_PRKAA1 : float RPPA_RAF1_pS338 : float RPPA_MYC : float RPPA_PRKAA1_AMPK : float RPPA_ERRFI1_MIG : float RPPA_EIF4EBP1_2 : float RPPA_STAT3 : float RPPA_AKT1_AKT2_AKT3 : float RPPA_NF2 : float RPPA_PECAM1 : float RPPA_BAK1 : float RPPA_IRS1 : float RPPA_PTK2 : float RPPA_ERBB3_2 : float RPPA_FOXO3_a : float RPPA_RB1_Rb : float RPPA_MAPK14_p38 : float RPPA_NFKB1 : float RPPA_CHEK1_Chk1 : float RPPA_LCK : float RPPA_XRCC5 : float RPPA_PARK7 : float RPPA_DIABLO : float RPPA_CTNNA1 : float RPPA_ESR1_ER : float RPPA_IGFBP2 : float RPPA_STMN1 : float RPPA_WWTR1_TAZ : float RPPA_CASP3 : float RPPA_JUN : float RPPA_CCNB1 : float RPPA_CLDN7 : float RPPA_PXN : float RPPA_RPS6KB1_p : float RPPA_KIT : float RPPA_CAV1 : float RPPA_PTEN : float 
RPPA_BAX : float RPPA_SMAD3 : float RPPA_ERBB2 : float RPPA_MET_c : float RPPA_ERCC1 : float RPPA_MAPK14 : float RPPA_BIRC2 : float RPPA_PIK3R1 : float RPPA_BCL2 : float RPPA_PEA : float RPPA_EEF2K : float RPPA_RPS6KB1_p70 : float RPPA_MRE11A : float RPPA_KRAS : float RPPA_ARID1A : float RPPA_YBX1_yb : float RPPA_NOTCH3 : float RPPA_EIF4EBP1_3 : float RPPA_XRCC1 : float RPPA_ANXA1 : float RPPA_CD49 : float RPPA_SHC1 : float RPPA_PDK1 : float RPPA_EIF4E : float RPPA_MAPK1_MAPK3 : float RPPA_PTGS2 : float RPPA_PRKCA : float RPPA_EGFR_egfr : float RPPA_RAB25 : float RPPA_RB1 : float RPPA_MAPK1 : float RPPA_TFF1 : float class config: orm_mode = True @app.post("/prediction/") async def create_item(userdata: Userdata): df = pd.DataFrame(userdata) y = model.predict(df) y = [0 if val < 0.5 else 1 for val in y] if y == 1: survival = 'You will survive.' if y == 0: survival = 'You will not survive.' return {'Prediction': survival} | In Python requests, when sending JSON data using the json parameter, you need to pass a dict object (e.g., json={"RPPA_HSPA1A":30,"RPPA_XIAP":-0.902044768}), which requests will automatically encode into JSON and set the Content-Type header to application/json. In your case, however, as you are using to_json() method, the object you get (i.e., df_json as you define it) is a JSON encoded string (you could verify that by printing out type(df_json)). Thus, you should rather use to_dict() method, which returns a dictionary instead. Since you are using orient='records', the returned object will be a list of dict, and thus, you need to get the first element from that list. Example below: data = dataframe.to_dict(orient='records') payload = data[0] prediction = requests.post('<URL_HERE>', json=payload) Otherwise, if you used to_json() method, you would need to use the data parameter when posting the request (see the documentation here), and as mentioned earlier, since you specify the orientation to records that returns a list, you would need to strip both the leading and trailing square brackets from that string. Also, using this method, you would need to manually set the Content-Type header to application/json. Example below: df_json = dataframe.to_json(orient='records') payload = df_json.strip("[]") prediction = requests.post('<URL_HERE>', data=payload, headers={"Content-Type": "application/json"}) | 10 | 10 |
71,460,894 | 2022-3-13 | https://stackoverflow.com/questions/71460894/bayesianoptimization-fails-due-to-float-error | I want to optimize my HPO of my lightgbm model. I used a Bayesian Optimization process to do so. Sadly my algorithm fails to converge. MRE import warnings import pandas as pd import time import numpy as np warnings.filterwarnings("ignore") import lightgbm as lgb from bayes_opt import BayesianOptimization import sklearn as sklearn import pyprojroot from sklearn.metrics import roc_auc_score, mean_squared_error from sklearn.model_selection import KFold, cross_val_score from sklearn.model_selection import train_test_split from sklearn.datasets import fetch_california_housing housing = fetch_california_housing() train = pd.DataFrame(housing['data'], columns=housing['feature_names']) train_y = train.pop('MedInc') params = { "objective" : "regression", "bagging_fraction" : 0.8, "bagging_freq": 1, "min_child_samples": 20, "reg_alpha": 1, "reg_lambda": 1,"boosting": "gbdt", "learning_rate" : 0.01, "subsample" : 0.8, "colsample_bytree" : 0.8, "verbosity": -1, "metric" : 'rmse' } train_data = lgb.Dataset(train, train_y,free_raw_data=False) def lgb_eval(num_leaves, feature_fraction, max_depth , min_gain_to_split, min_data_in_leaf): params = { "objective" : "regression", "bagging_fraction" : 0.8, "bagging_freq": 1, "min_child_samples": 20, "reg_alpha": 1, "reg_lambda": 1,"boosting": "gbdt", "learning_rate" : 0.01, "subsample" : 0.8, "colsample_bytree" : 0.8, "verbosity": -1, "metric" : 'rmse' } params['feature_fraction'] = max(min(feature_fraction, 1), 0) params['max_depth'] = int(round(max_depth)) params['num_leaves'] = int(round(num_leaves)) params['min_gain_to_split'] = float(min_gain_to_split) params['min_data_in_leaf'] = int(np.round(min_data_in_leaf)) cv_result = lgb.cv(params, train_data, nfold=5, seed=0, verbose_eval =200,stratified=False) return ( np.array(cv_result['rmse-mean'])).max() gbBO = BayesianOptimization(lgb_eval, {'feature_fraction': (0.1, 0.9), 'max_depth': (5, 9), 'num_leaves' : (1,300), 'min_gain_to_split': (0.001, 0.1), 'min_data_in_leaf': (5, 50)}, random_state=0) lgbBO.maximize(init_points=5, n_iter=5,acq='ei') def bayes_parameter_opt_lgb(train, train_y, init_round=15, opt_round=25, n_folds=5, random_seed=0, n_estimators=10000, learning_rate=0.05, output_process=False): # prepare data train_data = lgb.Dataset(train,train_y,free_raw_data=False) # parameters def lgb_eval(num_leaves, feature_fraction, max_depth , min_gain_to_split, min_data_in_leaf): params = { "objective" : "regression", "bagging_fraction" : 0.8, "bagging_freq": 1, "min_child_samples": 20, "reg_alpha": 1, "reg_lambda": 1,"boosting": "gbdt", "learning_rate" : 0.01, "subsample" : 0.8, "colsample_bytree" : 0.8, "verbosity": -1, "metric" : 'rmse' } params['feature_fraction'] = max(min(feature_fraction, 1), 0) params['max_depth'] = int(round(max_depth)) params['num_leaves'] = int(round(num_leaves)) params['min_gain_to_split'] = float(min_gain_to_split), params['min_data_in_leaf'] = int(np.round(min_data_in_leaf)) cv_result = lgb.cv(params, train_data, nfold=n_folds, seed=random_seed, verbose_eval =200,stratified=False) return ( np.array(cv_result['rmse-mean'])).max() # range lgbBO = BayesianOptimization(lgb_eval, {'feature_fraction': (0.1, 0.9), 'max_depth': (5, 9), 'num_leaves' : (200,300), 'min_gain_to_split': (0.001, 0.1), 'min_data_in_leaf': (5, 50)}, random_state=0) # optimize lgbBO.maximize(init_points=init_round, n_iter=opt_round,acq='ei') # output optimization process 
lgbBO.points_to_csv("bayes_opt_result.csv") # return best parameters return lgbBO.res['max']['max_params'] opt_params = bayes_parameter_opt_lgb(train, train_y, init_round=200, opt_round=20, n_folds=5, random_seed=0, n_estimators=1000, learning_rate=0.01) This leads to the following stacktrace : --------------------------------------------------------------------------- StopIteration Traceback (most recent call last) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\bayes_opt\bayesian_optimization.py:179, in BayesianOptimization.maximize(self, init_points, n_iter, acq, kappa, kappa_decay, kappa_decay_delay, xi, **gp_params) 178 try: --> 179 x_probe = next(self._queue) 180 except StopIteration: File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\bayes_opt\bayesian_optimization.py:25, in Queue.__next__(self) 24 if self.empty: ---> 25 raise StopIteration("Queue is empty, no more objects to retrieve.") 26 obj = self._queue[0] StopIteration: Queue is empty, no more objects to retrieve. During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) ..\GitHub\Meister2\src\lgb_new.ipynb Cell 13' in <cell line: 35>() 32 # return best parameters 33 return lgbBO.res['max']['max_params'] ---> 35 opt_params = bayes_parameter_opt_lgb(train, train_y, init_round=20, opt_round=20, n_folds=5, random_seed=0, n_estimators=1000, learning_rate=0.01) ..\GitHub\Meister2\src\lgb_new.ipynb Cell 13' in bayes_parameter_opt_lgb(train, train_y, init_round, opt_round, n_folds, random_seed, n_estimators, learning_rate, output_process) 21 lgbBO = BayesianOptimization(lgb_eval, {'feature_fraction': (0.1, 0.9), 22 'max_depth': (5, 9), 23 'num_leaves' : (200,300), 24 'min_gain_to_split': (0.001, 0.1), 25 'min_data_in_leaf': (5, 50)}, random_state=0) 26 # optimize ---> 27 lgbBO.maximize(init_points=init_round, n_iter=opt_round,acq='ei') 29 # output optimization process 30 lgbBO.points_to_csv("bayes_opt_result.csv") File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\bayes_opt\bayesian_optimization.py:182, in BayesianOptimization.maximize(self, init_points, n_iter, acq, kappa, kappa_decay, kappa_decay_delay, xi, **gp_params) 180 except StopIteration: 181 util.update_params() --> 182 x_probe = self.suggest(util) 183 iteration += 1 185 self.probe(x_probe, lazy=False) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\bayes_opt\bayesian_optimization.py:131, in BayesianOptimization.suggest(self, utility_function) 128 self._gp.fit(self._space.params, self._space.target) 130 # Finding argmax of the acquisition function. --> 131 suggestion = acq_max( 132 ac=utility_function.utility, 133 gp=self._gp, 134 y_max=self._space.target.max(), 135 bounds=self._space.bounds, 136 random_state=self._random_state 137 ) 139 return self._space.array_to_params(suggestion) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\bayes_opt\util.py:65, in acq_max(ac, gp, y_max, bounds, random_state, n_warmup, n_iter) 62 continue 64 # Store it if better than previous minimum(maximum). ---> 65 if max_acq is None or -res.fun[0] >= max_acq: 66 x_max = res.x 67 max_acq = -res.fun[0] TypeError: 'float' object is not subscriptable EDIT : The MRE above the stacktrace should lead to the followed programming error. As the stacktrace implies, it looks like that -res.fun[0] should be a list and therefore subscriptable (line 65, end of the stacktrace) but it is not and I can't understand why. 
This list is assigned to max_acq, which is part of the maximization function acq_max() (line 131 of the stacktrace) for the Gaussian process, which is itself part of the BayesianOptimization call (line 27 of the stacktrace). Why am I getting TypeError: 'float' object is not subscriptable, and how can this be fixed? | This is related to a change in scipy 1.8.0: one should use -np.squeeze(res.fun) instead of -res.fun[0] (https://github.com/fmfn/BayesianOptimization/issues/300). The comments in the bug report indicate that reverting to scipy 1.7.0 fixes this. UPDATED: It seems the fix has been merged in the BayesianOptimization package, but the new maintainer is unable to push a release to PyPI (https://github.com/fmfn/BayesianOptimization/issues/300#issuecomment-1146903850), so you could either: fall back to scipy 1.7.0; apply the patch in issue 303 manually on your system; or install directly from the master repo on GitHub: pip install git+https://github.com/fmfn/BayesianOptimization | 4 | 7 |
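For the manual-patch option, here is a sketch of what the changed comparison in bayes_opt/util.py (the lines shown at the bottom of the traceback above) looks like after swapping -res.fun[0] for -np.squeeze(res.fun); the actual patch in issue 303 may differ in its details:

# inside acq_max() in bayes_opt/util.py
# scipy >= 1.8 can return res.fun as a 0-d float instead of a 1-element array,
# so indexing it with [0] raises TypeError; np.squeeze handles both shapes.
if max_acq is None or -np.squeeze(res.fun) >= max_acq:
    x_max = res.x
    max_acq = -np.squeeze(res.fun)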
71,453,291 | 2022-3-12 | https://stackoverflow.com/questions/71453291/difference-between-argparse-namespace-and-types-simplenamespace | It seems they both behave exactly the same - both are like dicts but with the . (dot) syntax to access an item - yet neither is a subclass of the other: from argparse import Namespace from types import SimpleNamespace issubclass(Namespace, SimpleNamespace) # False issubclass(SimpleNamespace, Namespace) # False So, are there any differences between the two? Can argparse.Namespace be used in all cases? | There once was a proposal to have argparse.Namespace inherit from types.SimpleNamespace. That said, just use the types one if you are using a new enough Python; that's what it is there for. | 6 | 5 |
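To illustrate the practical equivalence, here is a quick sketch; the behavioral notes in the comments are my own observations rather than anything stated in the answer above:

from argparse import Namespace
from types import SimpleNamespace

a = Namespace(x=1, y="two")
b = SimpleNamespace(x=1, y="two")

print(a.x, b.x)            # 1 1 -- both give plain attribute access
print(vars(a) == vars(b))  # True -- both are thin wrappers around __dict__
print("x" in a)            # True -- argparse.Namespace also supports the `in` operator,
                           # which types.SimpleNamespace does not define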
71,412,499 | 2022-3-9 | https://stackoverflow.com/questions/71412499/how-to-prevent-keras-from-computing-metrics-during-training | I'm using Tensorflow/Keras 2.4.1 and I have a (unsupervised) custom metric that takes several of my model inputs as parameters such as: model = build_model() # returns a tf.keras.Model object my_metric = custom_metric(model.output, model.input[0], model.input[1]) model.add_metric(my_metric) [...] model.fit([...]) # training with fit However, it happens that custom_metric is very expensive so I would like it to be computed during validation only. I found this answer but I hardly understand how I can adapt the solution to my metric that uses several model inputs as parameter since the update_state method doesn't seem flexible. In my context, is there a way to avoid computing my metric during training, aside from writing my own training loop ? Also, I am very surprised we cannot natively specify to Tensorflow that some metrics should only be computed at validation time, is there a reason for that ? In addition, since the model is trained to optimize the loss, and that the training dataset should not be used to evaluate a model, I don't even understand why, by default, Tensorflow computes metrics during training. | I think that the simplest solution to compute a metric only on the validation is using a custom callback. here we define our dummy callback: class MyCustomMetricCallback(tf.keras.callbacks.Callback): def __init__(self, train=None, validation=None): super(MyCustomMetricCallback, self).__init__() self.train = train self.validation = validation def on_epoch_end(self, epoch, logs={}): mse = tf.keras.losses.mean_squared_error if self.train: logs['my_metric_train'] = float('inf') X_train, y_train = self.train[0], self.train[1] y_pred = self.model.predict(X_train) score = mse(y_train, y_pred) logs['my_metric_train'] = np.round(score, 5) if self.validation: logs['my_metric_val'] = float('inf') X_valid, y_valid = self.validation[0], self.validation[1] y_pred = self.model.predict(X_valid) val_score = mse(y_pred, y_valid) logs['my_metric_val'] = np.round(val_score, 5) Given this dummy model: def build_model(): inp1 = Input((5,)) inp2 = Input((5,)) out = Concatenate()([inp1, inp2]) out = Dense(1)(out) model = Model([inp1, inp2], out) model.compile(loss='mse', optimizer='adam') return model and this data: X_train1 = np.random.uniform(0,1, (100,5)) X_train2 = np.random.uniform(0,1, (100,5)) y_train = np.random.uniform(0,1, (100,1)) X_val1 = np.random.uniform(0,1, (100,5)) X_val2 = np.random.uniform(0,1, (100,5)) y_val = np.random.uniform(0,1, (100,1)) you can use the custom callback to compute the metric both on train and validation: model = build_model() model.fit([X_train1, X_train2], y_train, epochs=10, callbacks=[MyCustomMetricCallback(train=([X_train1, X_train2],y_train), validation=([X_val1, X_val2],y_val))]) only on validation: model = build_model() model.fit([X_train1, X_train2], y_train, epochs=10, callbacks=[MyCustomMetricCallback(validation=([X_val1, X_val2],y_val))]) only on train: model = build_model() model.fit([X_train1, X_train2], y_train, epochs=10, callbacks=[MyCustomMetricCallback(train=([X_train1, X_train2],y_train))]) remember only that the callback evaluates the metrics one-shot on the data, like any metric/loss computed by default by keras on the validation_data. here is the running code. | 8 | 3 |
71,423,641 | 2022-3-10 | https://stackoverflow.com/questions/71423641/create-complex-object-in-python-based-on-property-names-in-dot-notation | I am trying to create a complex object based on metadata I have. It is an array of attributes which I am iterating and trying to create a dict. For example below is the array: [ "itemUniqueId", "itemDescription", "manufacturerInfo[0].manufacturer.value", "manufacturerInfo[0].manufacturerPartNumber", "attributes.noun.value", "attributes.modifier.value", "attributes.entityAttributes[0].attributeName", "attributes.entityAttributes[0].attributeValue", "attributes.entityAttributes[0].attributeUOM", "attributes.entityAttributes[1].attributeName", "attributes.entityAttributes[1].attributeValue", "attributes.entityAttributes[1].attributeUOM", ] This array should give an output as below: { "itemUniqueId": "", "itemDescription": "", "manufacturerInfo": [ { "manufacturer": { "value": "" }, "manufacturerPartNumber": "" } ], "attributes": { "noun": { "value": "" }, "modifier": { "value": "" }, "entityAttributes": [ { "attributeName": "", "attributeValue": "", "attributeUOM": "" }, { "attributeName": "", "attributeValue": "", "attributeUOM": "" } ] } } I have written this logic but unable to get the desired output. It should work on both object and array given the metadata. source_json = [ "itemUniqueId", "itemDescription", "manufacturerInfo[0].manufacturer.value", "manufacturerInfo[0].manufacturerPartNumber", "attributes.noun.value", "attributes.modifier.value", "attributes.entityAttributes[0].attributeName", "attributes.entityAttributes[0].attributeValue", "attributes.entityAttributes[0].attributeUOM", "attributes.entityAttributes[1].attributeName", "attributes.entityAttributes[1].attributeValue", "attributes.entityAttributes[1].attributeUOM", ] for row in source_json: propertyNames = row.split('.') temp = '' parent = {} parentArr = [] parentObj = {} # if len(propertyNames) > 1: arrLength = len(propertyNames) for i, (current) in enumerate(zip(propertyNames)): if i == 0: if '[' in current: parent[current]=parentArr else: parent[current] = parentObj temp = current if i > 0 and i < arrLength - 1: if '[' in current: parent[current] = parentArr else: parent[current] = parentObj temp = current if i == arrLength - 1: if '[' in current: parent[current] = parentArr else: parent[current] = parentObj temp = current # temp[prev][current] = "" # finalMapping[target] = target print(parent) | There's a similar question at Convert Dot notation string into nested Python object with Dictionaries and arrays where the accepted answer works for this question, but has unused code paths (e.g. isInArray) and caters to unconventional conversions expected by that question: β "arrOne[0]": "1,2,3" β "arrOne": ["1", "2", "3"] instead of β
"arrOne[0]": "1,2,3" β "arrOne": ["1,2,3"] or β
"arrOne[0]": "1", "arrOne[1]": "2", "arrOne[2]": "3" β "arrOne": ["1", "2", "3"] Here's a refined implementation of the branch function: def branch(tree, path, value): key = path[0] array_index_match = re.search(r'\[([0-9]+)\]', key) if array_index_match: # Get the array index, and remove the match from the key array_index = int(array_index_match[0].replace('[', '').replace(']', '')) key = key.replace(array_index_match[0], '') # Prepare the array at the key if key not in tree: tree[key] = [] # Prepare the object at the array index if array_index == len(tree[key]): tree[key].append({}) # Replace the object at the array index tree[key][array_index] = value if len(path) == 1 else branch(tree[key][array_index], path[1:], value) else: # Prepare the object at the key if key not in tree: tree[key] = {} # Replace the object at the key tree[key] = value if len(path) == 1 else branch(tree[key], path[1:], value) return tree Usage: VALUE = '' def create_dict(attributes): d = {} for path_str in attributes: branch(d, path_str.split('.'), VALUE) return d source_json = [ "itemUniqueId", "itemDescription", "manufacturerInfo[0].manufacturer.value", "manufacturerInfo[0].manufacturerPartNumber", "attributes.noun.value", "attributes.modifier.value", "attributes.entityAttributes[0].attributeName", "attributes.entityAttributes[0].attributeValue", "attributes.entityAttributes[0].attributeUOM", "attributes.entityAttributes[1].attributeName", "attributes.entityAttributes[1].attributeValue", "attributes.entityAttributes[1].attributeUOM", ] assert create_dict(source_json) == { "itemUniqueId": "", "itemDescription": "", "manufacturerInfo": [ { "manufacturer": { "value": "" }, "manufacturerPartNumber": "" } ], "attributes": { "noun": { "value": "" }, "modifier": { "value": "" }, "entityAttributes": [ { "attributeName": "", "attributeValue": "", "attributeUOM": "" }, { "attributeName": "", "attributeValue": "", "attributeUOM": "" } ] } } | 4 | 2 |
71,446,065 | 2022-3-12 | https://stackoverflow.com/questions/71446065/how-to-output-shap-values-in-probability-and-make-force-plot-from-binary-classif | I need to plot how each feature impacts the predicted probability for each sample from my LightGBM binary classifier. So I need to output Shap values in probability, instead of normal Shap values. It does not appear to have any options to output in term of probability. The example code below is what I use to generate dataframe of Shap values and do a force_plot for the first data sample. Does anyone know how I should modify the code to change the output? I'm new to Shap value and the Shap package. Thanks a lot in advance. import pandas as pd import numpy as np import shap import lightgbm as lgbm from sklearn.model_selection import train_test_split from sklearn.datasets import load_breast_cancer data = load_breast_cancer() X = pd.DataFrame(data.data, columns=data.feature_names) y = data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) model = lgbm.LGBMClassifier() model.fit(X_train, y_train) explainer = shap.TreeExplainer(model) shap_values = explainer(X_train) # force plot of first row for class 1 class_idx = 1 row_idx = 0 expected_value = explainer.expected_value[class_idx] shap_value = shap_values[:,:,class_idx].values[row_idx] shap.force_plot (base_value = expected_value, shap_values = shap_value, features = X_train.iloc[row_idx, :], matplotlib=True) # dataframe of shap values for class 1 shap_df = pd.DataFrame(shap_values[:,:, 1 ].values, columns = shap_values.feature_names) | TL;DR: You can achieve plotting results in probability space with link="logit" in the force_plot method: import pandas as pd import numpy as np import shap import lightgbm as lgbm from sklearn.model_selection import train_test_split from sklearn.datasets import load_breast_cancer from scipy.special import expit shap.initjs() data = load_breast_cancer() X = pd.DataFrame(data.data, columns=data.feature_names) y = data.target X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42 ) model = lgbm.LGBMClassifier() model.fit(X_train, y_train) explainer_raw = shap.TreeExplainer(model) shap_values = explainer_raw(X_train) # force plot of first row for class 1 class_idx = 1 row_idx = 0 expected_value = explainer_raw.expected_value[class_idx] shap_value = shap_values[:, :, class_idx].values[row_idx] shap.force_plot( base_value=expected_value, shap_values=shap_value, features=X_train.iloc[row_idx, :], link="logit", ) Expected output: Alternatively, you may achieve the same with the following, explicitly specifying model_output="probability" you're interested in to explain: explainer = shap.TreeExplainer( model, data=X_train, feature_perturbation="interventional", model_output="probability", ) shap_values = explainer(X_train) # force plot of first row for class 1 class_idx = 1 row_idx = 0 shap_value = shap_values.values[row_idx] shap.force_plot( base_value=expected_value, shap_values=shap_value, features=X_train.iloc[row_idx, :] ) Expected output: However, it might be more interesting for understanding what's happening here to find out where these figures come from: Our target proba for the point of interest: model_proba= model.predict_proba(X_train.iloc[[row_idx]]) model_proba # array([[0.00275887, 0.99724113]]) Base case raw from model given X_train as background (note, LightGBM outputs raw for class 1): model.predict(X_train, raw_score=True).mean() # 2.4839751932445577 Base case raw from SHAP (note, they are 
symmetric): bv = explainer_raw(X_train).base_values[0] bv # array([-2.48397519, 2.48397519]) Raw SHAP values for the point of interest: sv_0 = explainer_raw(X_train).values[row_idx].sum(0) sv_0 # array([-3.40619584, 3.40619584]) Proba inferred from SHAP values (via sigmoid): shap_proba = expit(bv + sv_0) shap_proba # array([0.00275887, 0.99724113]) Check: assert np.allclose(model_proba, shap_proba) Please ask questions if something is not clear. Side notes Proba might be misleading if you're analyzing raw size effect of different features because sigmoid is non-linear and saturates after reaching certain threshold. Some people expect to see SHAP values in probability space as well, but this is not feasible because: SHAP values are additive by construction (to be precise SHapley Additive exPlanations are average marginal contributions over all possible feature coalitions) exp(a + b) != exp(a) + exp(b) You may find useful: Feature importance in a binary classification and extracting SHAP values for one of the classes only answer How to interpret base_value of GBT classifier when using SHAP? answer | 7 | 10 |
71,486,742 | 2022-3-15 | https://stackoverflow.com/questions/71486742/dask-dataframe-to-parquet-fails-on-read-repartition-write-operation | I have the following workflow. def read_file(path, indx): df = pd.read_parquet(path) df.index = [indx] * len(df) return df files_list = get_all_files() # list of 10k parquet files, each about 1MB df = dask.dataframe.from_delayed([dask.delayed(read_file)(x, indx) for (indx, x) in enumerate(files_list)]) df.divisions = list(range(10000)) + [9999] # each divisions include 1 file new_divisions = [0, 10, 23, 45, ...., 9999] # new_divisions that reduces number of partitions by putting a bunch of files into same partitions. df = df.repartition(divisions = new_divisions) df.to_parquet("fewer_files") # This causes dask to essentially freeze and no files get written The new divisions are chosen so that the total memory of the files in each partition doesn't exceed 1000 MB. However, the final to_parquet call hangs forever. On the dask dashboard, there is no activity. The memory consumed by all workers remains very small (55MB), at least in the dashboard; but I suspect it might just be not updating since everything becomes super slow. The python process running the code keeps increasing the memory consumption (the virtual memory in Mac keeps increasing; I let it go upto 30GB). If there are only about 200 files in the files_list, the code works just fine. Here is what the df.visualize() looks like when there are 236 files in files_list which gets repartitioned into 41 partitions: Any idea on what might be causing the df.to_parquet to freeze when there are 10k files? When I print df before computation it shows the following: npartitions=65, Dask Name: repartition-merge, 26417 tasks Also, I can get the df.get_partition(0).to_parquet or other partition to work fairly quickly. However, df.to_parquet on the whole dataset fails. Is the 26K tasks simply too much to handle for 4 workers in my laptop? | Use dask.dataframe.read_parquet or other dask I/O implementations, not dask.delayed wrapping pandas I/O operations, whenever possible. Giving dask direct access to the file object or filepath allows the scheduler to quickly assess the steps in the job and accurately estimate the job size & requirements without executing the full workflow. Explanation By using dask.delayed with the pandas read_parquet reader, you're essentially robbing dask of the ability to peek into the file structure in order to help schedule the job, and also to open and close the files multiple times when running the full job (a problem you haven't even gotten to yet). When everything fits neatly into memory, using dask.dataframe.read_parquet and the delayed method you use are very similar. The difference comes when the optimal strategy is not simply "read in all the data and then figure out what to do with it". Specifically, you are performing many reindexing and sorting operations, all of which require dask to know a lot about the contents of the files before the index-manipulation operations can even be scheduled. Essentially, wrapping something in dask.delayed tells dask "here's this unknown block of code. Run it as a pure-python black box lots of times. The dask.dataframe and dask.array interfaces have smaller APIs and less interoperability compared with their pandas and numpy counterparts, but what you get for this is dask actually knows what's going on under the hood and can optimize it for you. 
When you use dask.delayed, you're gaining flexibility at the expense of dask's ability to tune the operation for you. Example As an exmaple, I'll create a large number of tiny files: In [9]: tinydf = pd.DataFrame({"col1": [11, 21], "col2": [12, 22]}) ...: for i in range(1000): ...: tinydf.to_parquet(f"myfile_{i}.parquet") dask.dataframe.read_parquet Now, let's read this in with dask.dataframe.read_parquet: In [10]: df = dask.dataframe.read_parquet([f"myfile_{i}.parquet" for i in range(1000)]) Note that this is lightning fast. We can take a peek at the high-level task graph by inspecting the dask attribute: In [13]: df.dask Out[13]: HighLevelGraph with 1 layers. <dask.highlevelgraph.HighLevelGraph object at 0x15f79e2f0> 0. read-parquet-e38709bfe39c7f8dfb5c4abf2fd08b50 Note that dask.dataframe.read_parquet is a single concept to dask. It can tune and optimize however it needs within this task. That includes "peeking" at the files to understand their column structure, look at the metadata file/attributes, etc., without reading in all the data. In [30]: df.divisions = list(range(0, 2001, 2)) In [31]: df = df.repartition(divisions=list(range(0, 2001, 500))) In [33]: df.dask Out[33]: HighLevelGraph with 2 layers. <dask.highlevelgraph.HighLevelGraph object at 0x168b5fcd0> 0. read-parquet-e38709bfe39c7f8dfb5c4abf2fd08b50 1. repartition-merge-bc42fb2f09234f7656901995bf3b29fa The high level graph for the full workflow has two steps! Dask understands the operation in terms of file I/O and repartitions. It can decide how to split up these tasks in order to stay within memory limits and spread workload across workers, all without bogging down the scheduler. dask.delayed(pd.read_parquet) On the other hand, what happens if we do this with dask.delayed? In [14]: def read_file(path, indx): ...: df = pd.read_parquet(path) ...: df.index = [indx] * len(df) ...: return df ...: ...: ...: files_list = [f"myfile_{i}.parquet" for i in range(1000)] ...: df = dask.dataframe.from_delayed( ...: [dask.delayed(read_file)(x, indx) for (indx, x) in enumerate(files_list)] ...: ) The dataframe preview ends up looking similar, but if we peek under the hood at the high level task graph, we can see that dask needs to read in all of the data before it even knows what the index looks like! In [16]: df.dask Out[16]: HighLevelGraph with 1001 layers. <dask.highlevelgraph.HighLevelGraph object at 0x168bf6230> 0. read_file-b7aed020-1dc7-4872-a37d-d514b407a7d8 1. read_file-a0462606-999b-4af1-9977-acb562edab67 2. read_file-286df439-df34-4a5a-baf9-75dd0a5ae09b 3. read_file-4db8c178-a67e-4775-b117-228ac607f02f 4. read_file-a19d6144-5560-4da7-a1f5-8dc92b3ccf1c # yeah... really there are 1000 of these... 998. read_file-d0cbd4a4-c255-4a77-a905-199bc289a0b5 999. read_file-45a80080-426a-48fd-8dcb-9ba7565307f1 1000. from-delayed-833eff6e232da1e10ca7221b961c21c1 To make matters worse, each pd.read_parquet uses the default pandas read behavior, which is to assume the data can fit into memory and just read the whole file in at once. Pandas does NOT return a file object - it loads all the data and returns a DataFrame before dask even sees it. Because of this, dask is essentially prevented from getting to the scheduling bit until all of the read has already been done, and it has very little to work with in terms of workload balancing, memory management, etc. It can try to get a sneak-peek at the workload by executing the first task, but this is still a read of the whole first file. This only gets worse when we start trying to shuffle the index. 
I won't go into it here, but you get the idea... | 4 | 5 |
71,439,124 | 2022-3-11 | https://stackoverflow.com/questions/71439124/google-protobuf-message-decodeerror-error-parsing-message-with-type-tensorflow | I was training the model and saved it; now I am trying to load it but am unable to. I have looked at previous posts as well, but some reference links are not working, and the things I tried still did not solve the problem. Code snippet: #load model with tf.io.gfile.GFile(args.model, "rb") as f: graph_def = tf.compat.v1.GraphDef() graph_def.ParseFromString(f.read()) # with tf.Graph().as_default() as graph: generated_image_1, generated_image_2, generated_image_3, = tf.graph_util.import_graph_def( graph_def, input_map={'input_image' : input_tensor, 'short_edge_1' : short_edge_1, 'short_edge_2' : short_edge_2, 'short_edge_3' : short_edge_3}, return_elements=['style_subnet/conv-block/resize_conv_1/output:0', 'enhance_subnet/resize_conv_1/output:0', 'refine_subnet/resize_conv_1/output:0'], producer_op_list=None ) Error Traceback (most recent call last): File "stylize.py", line 97, in <module> main() File "stylize.py", line 57, in main graph_def.ParseFromString(f.read()) google.protobuf.message.DecodeError: Error parsing message with type 'tensorflow.GraphDef' Note: if you need more information about this, let me know and I will add it here. | Background: I was getting errors while testing the code. In my case, it was solved with the help of freeze.py and a few modifications to the training file. I also found some other useful links while searching: Link 1, Link 2 | 6 | 2 |
71,479,069 | 2022-3-15 | https://stackoverflow.com/questions/71479069/exec-python-executable-file-not-found-in-path | since the last update to Mac OS Monterey 12.3 I get the following error message when compiling my Arduino sketch: exec: "python": executable file not found in $PATH Unfortunately, I have not yet been able to find out how to solve this problem. I would be very grateful for ideas and suggestions. | Problem In MacOS 12.3 Apple removed python2.7 (python) from MacOS. Solution What I did to solve this is link python3 to python, I wouldn't recommend it because it's sus, I would recommend you wait until Arduino IDE fixes this issue in a later build. For the time being, you could try their Web IDE: Arduino Editor However, here are the instructions to link python3 to python: If you don't have python3 installed, install it here in the link below: Python Install Page Find your path for the current version of python3 you're using which python3 it'll show up with something like this: /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 Copy that and use it to run this command that links python 3 to python. Replace the first file path with where your python3 is. ln -s -f INSERT_PATH_OF_PYTHON3 /usr/local/bin/python for example: ln -s -f /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 /usr/local/bin/python | 6 | 16 |
71,483,248 | 2022-3-15 | https://stackoverflow.com/questions/71483248/comment-color-based-on-keyword-in-pycharm | PyCharm has a feature that colors your comments yellow if they include TODO or FIXME keywords. What is the way to add more keywords to the list and change the colors based on the keyword? Example: | In PyCharm: Press CTRL+ALT+S or navigate to Preferences/Settings Search TODO or go to Editor/TODO Add a pattern using the + button. For example, the pattern for ERROR keyword would be \berror\b.*. It can also be any other regex pattern. \b - word boundaries; .* - zero or more characters. Unselect Use color scheme TODO default colors and change the foreground color (ex. C00000 - red) Press OK and Apply Results: Settings: Example ERROR pattern (red): | 6 | 9 |
71,425,861 | 2022-3-10 | https://stackoverflow.com/questions/71425861/connecting-to-user-dbus-as-root | If we open a python interpreter normally and enter the following: import dbus bus = dbus.SessionBus() bus.list_names() We see all the services on the user's session dbus. Now suppose we wanted to do some root-only things in the same script to determine information to pass through dbus, so we run the interpreter with sudo python and run the same thing, we only see a short list of items on the root user's session dbus, and attempting to connect to anything that was on the user dbus with get_object produces a not found error accordingly. So far I've tried inserting import os os.seteuid(int(os.environ['SUDO_UID'])) But this only makes SessionBus() give a org.freedesktop.DBus.Error.NoReply so this is probably nonsense. Is there a way to connect to a user's dbus service as a super user, with the python dbus bindings? | I have little knowledge about DBus, but that question got me curious. TL;DR: Use dbus.bus.BusConnection with the socket address for the target user and seteuid for gaining access. First question: What socket does DBus connect to for the session bus? $ cat list_bus.py import dbus print(dbus.SessionBus().list_names()) $ strace -o list_bus.trace python3 list_bus.py $ grep ^connect list_bus.trace connect(3, {sa_family=AF_UNIX, sun_path="/run/user/1000/bus"}, 20) = 0 Maybe it relies on environment variables for this? Gotcha! $ env|grep /run/user/1000/bus DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus Stracing the behaviour from the root account it appears that it does not know the address to connect to. Googling for the variable name got me to the D-Bus Specification, section "Well-known Message Bus Instances". Second question: Can we connect directly to the socket without having the D-Bus library guess the right address? The dbus-python tutorial states: For special purposes, you might use a non-default Bus, or a connection which isnβt a Bus at all, using some new API added in dbus-python 0.81.0. Looking at the changelog, this appears to refer to these: Bus has a superclass dbus.bus.BusConnection (a connection to a bus daemon, but without the shared-connection semantics or any deprecated API) for the benefit of those wanting to subclass bus daemon connections Let's try this out: $ python3 Python 3.9.2 (default, Feb 28 2021, 17:03:44) >>> from dbus.bus import BusConnection >>> len(BusConnection("unix:path=/run/user/1000/bus").list_names()) 145 How about root access? # python3 >>> from dbus.bus import BusConnection >>> len(BusConnection("unix:path=/run/user/1000/bus").list_names()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3/dist-packages/dbus/bus.py", line 124, in __new__ bus = cls._new_for_bus(address_or_type, mainloop=mainloop) dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. >>> import os >>> os.seteuid(1000) >>> len(BusConnection("unix:path=/run/user/1000/bus").list_names()) 143 So this answers the question: Use BusConnection instead of SessionBus and specify the address explicitly, combined with seteuid to gain access. Bonus: Connect as root without seteuid Still I'd like to know if it is possible to access the bus directly as root user, without resorting to seteuid. 
After a few search queries, I found a systemd ticket with this remark: dbus-daemon is the component enforcing access ... (but you can drop an xml policy file in, to make it so). This led me to an askubuntu question discussing how to modify the site local session bus policy. Just to play with it, I ran this in one terminal: $ cp /usr/share/dbus-1/session.conf session.conf $ (edit session.conf to modify the include for local customization) $ diff /usr/share/dbus-1/session.conf session.conf 50c50 < <include ignore_missing="yes">/etc/dbus-1/session-local.conf</include> --- > <include ignore_missing="yes">session-local.conf</include> $ cat > session-local.conf <busconfig> <policy context="mandatory"> <allow user="root"/> </policy> </busconfig> $ dbus-daemon --config-file session.conf --print-address unix:abstract=/tmp/dbus-j0r67hLIuh,guid=d100052e45d06f248242109262325b98 $ dbus-daemon --config-file session.conf --print-address unix:abstract=/tmp/dbus-j0r67hLIuh,guid=d100052e45d06f248242109262325b98 In another terminal, I can not attach to this bus as a root user: # python3 Python 3.9.2 (default, Feb 28 2021, 17:03:44) >>> from dbus.bus import BusConnection >>> address = "unix:abstract=/tmp/dbus-j0r67hLIuh,guid=d100052e45d06f248242109262325b98" >>> BusConnection(address).list_names() dbus.Array([dbus.String('org.freedesktop.DBus'), dbus.String(':1.0')], signature=dbus.Signature('s')) This should also enable accessing all session busses on the system, when installing session-local.conf globally: # cp session-local.conf /etc/dbus-1/session-local.conf # kill -HUP 1865 # reload config of my users session dbus-daemon # python3 >>> from dbus.bus import BusConnection >>> len(BusConnection("unix:path=/run/user/1000/bus").list_names()) 143 And it works - now root can connect to any session bus without resorting to seteuid. Don't forget to # rm /etc/dbus-1/session-local.conf if your root user does not need this power. | 4 | 9 |
71,489,011 | 2022-3-15 | https://stackoverflow.com/questions/71489011/attributeerror-dataframe-object-has-no-attribute-to-sparse | sdf = df.to_sparse() has been deprecated. What's the updated way to convert to a sparse DataFrame? | These are the updated sparse conversions in pandas 1.0.0+. How to convert dense to sparse Use DataFrame.astype() with the appropriate SparseDtype() (e.g., int): >>> df = pd.DataFrame({'A': [1, 0, 0, 0, 1, 0]}) >>> df.dtypes # A int64 # dtype: object >>> sdf = df.astype(pd.SparseDtype(int, fill_value=0)) >>> sdf.dtypes # A Sparse[int64, 0] # dtype: object Or use the string alias for brevity: >>> sdf = df.astype('Sparse[int64, 0]') How to convert sparse to dense Use DataFrame.sparse.to_dense(): >>> from scipy import sparse >>> sdf = pd.DataFrame.sparse.from_spmatrix(sparse.eye(3), columns=list('ABC')) >>> sdf.dtypes # A Sparse[float64, 0] # B Sparse[float64, 0] # C Sparse[float64, 0] # dtype: object >>> df = sdf.sparse.to_dense() >>> df.dtypes # A float64 # B float64 # C float64 # dtype: object How to convert sparse to COO Use DataFrame.sparse.to_coo(): >>> from scipy import sparse >>> sdf = pd.DataFrame.sparse.from_spmatrix(sparse.eye(3), columns=list('ABC')) >>> sdf.dtypes # A Sparse[float64, 0] # B Sparse[float64, 0] # C Sparse[float64, 0] # dtype: object >>> df = sdf.sparse.to_coo() # <3x3 sparse matrix of type '<class 'numpy.float64'>' # with 3 stored elements in COOrdinate format> # (0, 0) 1.0 # (1, 1) 1.0 # (2, 2) 1.0 | 4 | 10 |
71,418,682 | 2022-3-10 | https://stackoverflow.com/questions/71418682/skip-first-line-in-import-statement-using-gc-open-by-url-from-gspread-i-e-add | What is the equivalent of header=0 in pandas, which recognises the first line as a heading in gspread? pandas import statement (correct) import pandas as pd # gcp / google sheets URL df_URL = "https://docs.google.com/spreadsheets/d/1wKtvNfWSjPNC1fNmTfUHm7sXiaPyOZMchjzQBt1y_f8/edit?usp=sharing" raw_dataset = pd.read_csv(df_URL, na_values='?',sep=';' , skipinitialspace=True, header=0, index_col=None) Using the gspread function, so far I import the data, change the first line to the heading then delete the first line after but this recognises everything in the DataFrame as a string. I would like to recognise the first line as a heading right away in the import statement. gspread import statement that needs header=True equivalent import pandas as pd from google.colab import auth auth.authenticate_user() import gspread from oauth2client.client import GoogleCredentials # gcp / google sheets url df_URL = "https://docs.google.com/spreadsheets/d/1wKtvNfWSjPNC1fNmTfUHm7sXiaPyOZMchjzQBt1y_f8/edit?usp=sharing" # importing the data from Google Drive setup gc = gspread.authorize(GoogleCredentials.get_application_default()) # read data and put it in dataframe g_sheets = gc.open_by_url(df_URL) df = pd.DataFrame(g_sheets.get_worksheet(0).get_all_values()) # change first row to header df = df.rename(columns=df.iloc[0]) # drop first row df.drop(index=df.index[0], axis=0, inplace=True) | Looking at the API documentation, you probably want to use: df = pd.DataFrame(g_sheets.get_worksheet(0).get_all_records(head=1)) The .get_all_records method returns a dictionary of with the column headers as the keys and a list of column values as the dictionary values. The argument head=<int> determines which row to use as keys; rows start from 1 and follow the numeration of the spreadsheet. Since the values returned by .get_all_records() are lists of strings, the data frame constructor, pd.DataFrame, will return a data frame that is all strings. To convert it to floats, we need to replace the empty strings, and the the dash-only strings ('-') with NA-type values, then convert to float. Luckily pandas DataFrame has a convenient method for replacing values .replace. We can pass it mapping from the string we want as NAs to None, which gets converted to NaN. import pandas as pd data = g_sheets.get_worksheet(0).get_all_records(head=1) na_strings_map= { '-': None, '': None } df = pd.DataFrame(data).replace(na_strings_map).astype(float) | 6 | 2 |
71,489,687 | 2022-3-15 | https://stackoverflow.com/questions/71489687/pypi-the-name-is-too-similar-to-an-existing-project | When uploading to PyPI there is an error: $ twine upload -r test dist/examplepkg-1.0.tar.gz Uploading distributions to https://test.pypi.org/legacy/ Uploading examplepkg-1.0.tar.gz Error during upload. Retry with the --verbose option for more details. HTTPError: 400 Bad Request from https://test.pypi.org/legacy/ The name 'examplepkg' is too similar to an existing project. See https://test.pypi.org/help/#project-name for more information. Which existing project? How do you find out what existing project is it talking about? | There is no direct way of knowing which exact package causes the name conflict, but here are some tips that may help you further in your search. First of all, you can find the source code of pypi (called warehouse) at https://github.com/pypa/warehouse/. Using the error message you gave, you can find that the failing check is caused by a database function called ultranormalize_name. Now, searching for that name in the codebase leads you to this migration script where the function seems to be created, which performs the following steps to check if the name is already reserved: Both cases of o (lower and upper case, o and O) gets replaced with 0 (irrelevant for your case, as there are no os in your package name) Both cases of L and I are replaced with 1 (e.g., example is same as examp1e and exampie) All ., _, and - characters are removed (e.g., e-x-a-m-p-l-e is same as example) The result is then lowercased and compared to the already existing names As I cannot see a direct match for your given package name, are you sure examplepkg is the name that is also in your pyproject.toml or setup.py file metadata? If yes, there is probably some variant of that name whose non-normalized form matches to yours after the transformations mentioned above. | 4 | 10 |
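If you want to test candidate names locally before uploading, here is a rough re-implementation of those normalization steps (my own sketch based on the description above, not the exact database function, so treat it as an approximation):
import re

def ultranormalize(name: str) -> str:
    name = re.sub(r"[oO]", "0", name)      # o/O -> 0
    name = re.sub(r"[lLiI]", "1", name)    # l/L/i/I -> 1
    name = re.sub(r"[._-]", "", name)      # drop separators
    return name.lower()

print(ultranormalize("examplepkg"))                                  # examp1epkg
print(ultranormalize("e-x-a-m-p-l-e") == ultranormalize("example"))  # True
Two names that normalize to the same string would trigger the "too similar" error.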
71,486,255 | 2022-3-15 | https://stackoverflow.com/questions/71486255/how-can-i-make-python-re-work-like-grep-for-repeating-groups | I have the following string: seq = 'MNRYLNRQRLYNMYRNKYRGVMEPMSRMTMDFQGRYMDSQGRMVDPRYYDHYGRMHDYDRYYGRSMFNQGHSMDSQRYGGWMDNPERYMDMSGYQMDMQGRWMDAQGRYNNPFSQMWHSRQGH' also saved in a file called seq.dat. If I use the following grep command grep '\([MF]D.\{4,6\}\)\{3,10\}' seq.dat I get the following matching string: MDNPERYMDMSGYQMDMQGRWMDAQGRYN which is what I want. In words, what I want to match is as many consecutive repeats as the string has of [MF]D.{4,6}. I don't want to match cases where it has less than 3 consecutive repeats, but I want it to be able to capture up to 6. Now, I'm trying to do this with Python. I have p = re.compile("(?:[MF]D.{4,6}){3,10}") Trying search() returns MDNPERYMDMSGYQMDMQGRWM It is close to the answer I seek, but is still missing the last MDAQGRYN. I'm guessing this is because .{4,6} matches the M, which in turn prevents {3,10} from capturing this 4th occurrence of ([MF]D.{4,6}), but since I asked for at least 3, it's happy and it stops. How do I make Python regex behave like grep does? | There is a fundamental difference between POSIX ("text-directed") and NFA ("regex-directed") engines. POSIX engines (grep here uses the POSIX BRE regex flavor, which is the flavor used by default) will parse the input text applying the regex to it and return the longest match possible. An NFA engine (Python's re engine is an NFA engine) does not re-consume (backtrack) here when the subsequent pattern parts match. See this reference on regex-directed and text-directed engines: A regex-directed engine walks through the regex, attempting to match the next token in the regex to the next character. If a match is found, the engine advances through the regex and the subject string. If a token fails to match, the engine backtracks to a previous position in the regex and the subject string where it can try a different path through the regex... Modern regex flavors using regex-directed engines have lots of features such as atomic grouping and possessive quantifiers that allow you to control this backtracking. A text-directed engine walks through the subject string, attempting all permutations of the regex before advancing to the next character in the string. A text-directed engine never backtracks. Thus, there isn't much to discuss about the matching process of a text-directed engine. In most cases, a text-directed engine finds the same matches as a regex-directed engine. The last sentence says "in most cases", but not all cases, and yours is a good illustration that discrepancies may occur. To avoid consuming an M or F that is immediately followed by D, I'd suggest using (?:[MF]D(?:(?![MF]D).){4,6}){3,10} See the regex demo. Details: (?: - start of an outer non-capturing container group: [MF]D - M or F and then D (?:(?![MF]D).){4,6} - any char (other than a line break) repeated four to six times, that does not start an MD or FD char sequence ){3,10} - end of the outer group, repeat 3 to 10 times. By the way, if you only want to match uppercase ASCII letters, replace the . with [A-Z]. | 6 | 5
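A quick check in Python with the sequence from the question (my own verification snippet) should now recover the same match that grep reports:
import re

seq = 'MNRYLNRQRLYNMYRNKYRGVMEPMSRMTMDFQGRYMDSQGRMVDPRYYDHYGRMHDYDRYYGRSMFNQGHSMDSQRYGGWMDNPERYMDMSGYQMDMQGRWMDAQGRYNNPFSQMWHSRQGH'
p = re.compile(r"(?:[MF]D(?:(?![MF]D).){4,6}){3,10}")
m = p.search(seq)
print(m.group(0))  # expected: MDNPERYMDMSGYQMDMQGRWMDAQGRYN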
71,454,563 | 2022-3-13 | https://stackoverflow.com/questions/71454563/visual-studio-code-intellisense-not-working-on-ssh-server-even-though-its-ins | So for some reason my IntelliSense is not working. I tried the solutions suggested here: Visual Studio Code: Intellisense not working. The solution that seems to help most people is adding "python.autoComplete.extraPaths": [ "${workspaceFolder}/customModule" ], but that didn't work. Also VS Code says it doesn't recognize python.pythonPath when I add it. Auto-complete is not working; the screen capture didn't capture my cursor, but it's right after argparse., which should give the option to auto-complete with a list that includes: ArgumentParser: Remote server installed extensions: Settings.json This is settings.json on the remote server { "remote.autoForwardPortsSource": "output", "python.languageServer": "None", "python.analysis.completeFunctionParens": true, "python.analysis.diagnosticMode": "workspace", } Setup: Running using Conda env Linux remote server Note: Something else that is off is that my "find declaration of function or class" is also not working. | The first solutions are kind of obvious, but I'll add them anyway: Remove and reinstall it, both locally and remotely Make sure VS Code is updated to its latest version In settings.json, set a language server in "python.languageServer". The language servers include Jedi (built into the Python extension), Microsoft, and Pylance. Since you have already installed Pylance, let's start with that one (if that doesn't work, try the others). Set your python.pythonPath to the path returned in your terminal by which python3 | 6 | 8
71,482,512 | 2022-3-15 | https://stackoverflow.com/questions/71482512/selenium-executable-path-has-been-deprecated | When running my code I get the below error string, <string>:36: DeprecationWarning: executable_path has been deprecated, please pass in a Service object What could possibly be the issue? Below is the Selenium setup, options = webdriver.ChromeOptions() prefs = {"download.default_directory" : wd} options.add_experimental_option("prefs", prefs) options.add_argument("--headless") path = (chrome) driver = webdriver.Chrome(executable_path=path, options = options) driver.get('https://www.1linelogin.williams.com/1Line/xhtml/login.jsf?BUID=80') | This error message DeprecationWarning: executable_path has been deprecated, please pass in a Service object means that the key executable_path will be deprecated in the upcoming releases. Once the key executable_path is deprecated you have to use an instance of the Service() class as follows: from selenium import webdriver from selenium.webdriver.chrome.service import Service path = (chrome) s = Service(path) driver = webdriver.Chrome(service=s) For more details see here | 4 | 4 |
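Applied to the setup from the question, a possible rewrite (the paths below are placeholders you would need to adjust) looks like this:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

wd = r"C:\path\to\downloads"                        # placeholder download directory
chromedriver_path = r"C:\path\to\chromedriver.exe"  # placeholder driver path

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {"download.default_directory": wd})
options.add_argument("--headless")

driver = webdriver.Chrome(service=Service(chromedriver_path), options=options)
driver.get('https://www.1linelogin.williams.com/1Line/xhtml/login.jsf?BUID=80')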
71,452,013 | 2022-3-12 | https://stackoverflow.com/questions/71452013/does-python-not-reuse-memory-here-what-does-tracemallocs-output-mean | I create a list of a million int objects, then replace each with its negated value. tracemalloc reports 28 MB extra memory (28 bytes per new int object). Why? Does Python not reuse the memory of the garbage-collected int objects for the new ones? Or am I misinterpreting the tracemalloc results? Why does it say those numbers, what do they really mean here? import tracemalloc xs = list(range(10**6)) tracemalloc.start() for i, x in enumerate(xs): xs[i] = -x print(tracemalloc.get_traced_memory()) Output (Try it online!): (27999860, 27999972) If I replace xs[i] = -x with x = -x (so the new object rather than the original object gets garbage-collected), the output is a mere (56, 196) (try it). How does it make any difference which of the two objects I keep/lose? And if I do the loop twice, it still only reports (27992860, 27999972) (try it). Why not 56 MB? How is the second run any different for this than the first? | Short Answer tracemalloc was started too late to track the inital block of memory, so it didn't realize it was a reuse. In the example you gave, you free 27999860 bytes and allocate 27999860 bytes, but tracemalloc can't 'see' the free. Consider the following, slightly modified example: import tracemalloc tracemalloc.start() xs = list(range(10**6)) print(tracemalloc.get_traced_memory()) for i, x in enumerate(xs): xs[i] = -x print(tracemalloc.get_traced_memory()) On my machine (python 3.10, but same allocator), this displays: (35993436, 35993436) (36000576, 36000716) After we allocate xs, the system has allocated 35993436 bytes, and after we run the loop we have a net total of 36000576. This shows that the memory usage isn't actually increasing by 28 Mb. Why does it behave this way? Tracemalloc works by overriding the standard internal methods for allocating with tracemalloc_alloc, and the similar free and realloc methods. Taking a peek at the source: static void* tracemalloc_alloc(int use_calloc, void *ctx, size_t nelem, size_t elsize) { PyMemAllocatorEx *alloc = (PyMemAllocatorEx *)ctx; void *ptr; assert(elsize == 0 || nelem <= SIZE_MAX / elsize); if (use_calloc) ptr = alloc->calloc(alloc->ctx, nelem, elsize); else ptr = alloc->malloc(alloc->ctx, nelem * elsize); if (ptr == NULL) return NULL; TABLES_LOCK(); if (ADD_TRACE(ptr, nelem * elsize) < 0) { /* Failed to allocate a trace for the new memory block */ TABLES_UNLOCK(); alloc->free(alloc->ctx, ptr); return NULL; } TABLES_UNLOCK(); return ptr; } We see that the new allocator does two things: 1.) Call out to the "old" allocator to get memory 2.) Add a trace to a special table, so we can track this memory If we look at the associated free functions, it's very similar: 1.) free the memory 2.) Remove the trace from the table In your example, you allocated xs before you called tracemalloc.start(), so the trace records for this allocation are never put in the memory tracking table. Therefore, when you call free on the initial array data, the traces aren't removed, and thus your weird allocation behavior. Why is the total memory usage 36000000 bytes and not 28000000 Lists in python are weird. They're actually a list of pointer to individually allocated objects. Internally, they look like this: typedef struct { PyObject_HEAD Py_ssize_t ob_size; /* Vector of pointers to list elements. list[0] is ob_item[0], etc. */ PyObject **ob_item; /* ob_item contains space for 'allocated' elements. 
The number * currently in use is ob_size. * Invariants: * 0 <= ob_size <= allocated * len(list) == ob_size * ob_item == NULL implies ob_size == allocated == 0 */ Py_ssize_t allocated; } PyListObject; PyObject_HEAD is a macro that expands to some header information all Python variables have. It is just 16 bytes, and contains pointers to type data. Importantly, a list of integers is actually a list of pointers to PyObjects that happen to be ints. On the line xs = list(range(10**6)), we expect to allocate: 1 PyListObject with internal size 1000000 -- true size: sizeof(PyObject_HEAD) + sizeof(PyObject *) * 1000000 + sizeof(Py_ssize_t) ( 16 bytes ) + ( 8 bytes ) * 1000000 + ( 8 bytes ) 8000024 bytes 1000000 PyObject ints (A PyLongObject in the underlying implementation) 1000000 * sizeof(PyLongObject) 1000000 * ( 28 bytes ) 28000000 bytes For a grand total of 36000024 bytes. That number looks pretty familiar! When you overwrite a value in the array, you're just freeing the old value, and updating the pointer in PyListObject->ob_item. This means the array structure is allocated once, takes up 8000024 bytes, and lives to the end of the program. Additionally, 1000000 integer objects are each allocated, and references are put in the array. They take up the 28000000 bytes. One by one, they are deallocated, and then the memory is used to reallocate a new object in the loop. This is why multiple loops don't increase the amount of memory. | 6 | 9
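You can sanity-check those per-object numbers yourself with sys.getsizeof (my own check; exact values vary slightly between CPython builds, so treat these as approximate):
import sys

xs = list(range(10**6))
print(sys.getsizeof(xs[-1]))  # ~28 bytes for a small-ish int object
print(sys.getsizeof(xs))      # ~8 MB for the list header plus the pointer array
print(sys.getsizeof(xs) + len(xs) * sys.getsizeof(xs[-1]))  # ~36 MB in total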
71,471,449 | 2022-3-14 | https://stackoverflow.com/questions/71471449/using-scikit-learn-preprocesser-to-select-subset-of-rows-in-pandas-dataframe | Is there a scikit-learn preprocesser I can use or implement to select a subset of rows from a pandas dataframe? I would prefer a preprocesser to do this since I want to build a pipeline with this as a step. | You can use a FunctionTransformer to do that. To a FunctionTransformer, you can pass any Callable that exposes the same interface as standard scikit-learn transform calls have. In code import pandas as pd from sklearn.preprocessing import FunctionTransformer class RowSelector: def __init__(self, rows:list[int]): self._rows = rows def __call__(self, X:pd.DataFrame, y=None) -> pd.DataFrame: return X.iloc[self._rows,:] selector = FunctionTransformer(RowSelector(rows=[1,3])) df = pd.DataFrame({'a':range(4), 'b':range(4), 'c':range(4)}) selector.fit_transform(df) #Returns a b c 1 1 1 1 3 3 3 3 Note that I have used a callable object to track some state, i.e. the rows to be selected. This is not necessary and could be solved differently. The cool thing is that it returns a data frame, so if you have it as the first step of your pipeline, you can also combine it with a subsequent column transformer (if needed, of course) | 4 | 6
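Since the question was specifically about using this as a pipeline step, a minimal sketch of plugging it into a Pipeline (the scaler is just an arbitrary follow-up step picked for illustration; RowSelector and df are taken from the code above) could be:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([
    ("select_rows", FunctionTransformer(RowSelector(rows=[1, 3]))),  # drops all but rows 1 and 3
    ("scale", StandardScaler()),
])
pipe.fit_transform(df)
Keep in mind that dropping rows inside a pipeline only really makes sense for unsupervised steps, since y is not filtered alongside X.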
71,425,968 | 2022-3-10 | https://stackoverflow.com/questions/71425968/remove-horizontal-lines-with-open-cv | I am trying to remove horizontal lines from my daughter's drawings, but can't get it quite right. The approach I am following is creating a mask with horizontal lines (https://stackoverflow.com/a/57410471/1873521) and then removing that mask from the original (https://docs.opencv.org/3.3.1/df/d3d/tutorial_py_inpainting.html). As you can see in the pics below, this only partially removes the horizontal lines, and also creates a few distortions, as some of the original drawing horizontal-ish lines also end up in the mask. Any help improving this approach would be greatly appreciated! Create mask with horizontal lines From https://stackoverflow.com/a/57410471/1873521 import cv2 import numpy as np img = cv2.imread("input.png", 0) if len(img.shape) != 2: gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) else: gray = img gray = cv2.bitwise_not(gray) bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, -2) horizontal = np.copy(bw) cols = horizontal.shape[1] horizontal_size = cols // 30 horizontalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (horizontal_size, 1)) horizontal = cv2.erode(horizontal, horizontalStructure) horizontal = cv2.dilate(horizontal, horizontalStructure) cv2.imwrite("horizontal_lines_extracted.png", horizontal) Remove horizontal lines using mask From https://docs.opencv.org/3.3.1/df/d3d/tutorial_py_inpainting.html import numpy as np import cv2 mask = cv2.imread('horizontal_lines_extracted.png',0) dst = cv2.inpaint(img,mask,3,cv2.INPAINT_TELEA) cv2.imwrite("original_unmasked.png", dst) Pics Original picture Mask Partially cleaned: | So, I saw that working on the drawing separated from the paper would lead to a better result. I used MORPH_CLOSE to work on the paper and MORPH_OPEN for the lines in the inner part. I hope your daughter likes it :) img = cv2.imread(r'E:\Downloads\i0RDA.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Remove horizontal lines thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY_INV,81,17) horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1)) # Using morph close to get lines outside the drawing remove_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, horizontal_kernel, iterations=3) cnts = cv2.findContours(remove_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] mask = np.zeros(gray.shape, np.uint8) for c in cnts: cv2.drawContours(mask, [c], -1, (255,255,255),2) # First inpaint img_dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA) gray_dst = cv2.cvtColor(img_dst, cv2.COLOR_BGR2GRAY) edges = cv2.Canny(gray_dst, 50, 150, apertureSize = 3) horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1)) # Using morph open to get lines inside the drawing opening = cv2.morphologyEx(edges, cv2.MORPH_OPEN, horizontal_kernel) cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] mask = np.uint8(img_dst) mask = np.zeros(gray_dst.shape, np.uint8) for c in cnts: cv2.drawContours(mask, [c], -1, (255,255,255),2) # Second inpaint img2_dst = cv2.inpaint(img_dst, mask, 3, cv2.INPAINT_TELEA) | 12 | 14 |
71,462,024 | 2022-3-14 | https://stackoverflow.com/questions/71462024/how-to-handle-chromium-microphone-permission-pop-ups-in-playwright | What I'm trying to do Test a website that requires microphone access with playwright The problem Pop-up in question comes up and seems to ignore supposedly granted permissions. Permission can be given manually, but this seems against the spirit of automation. What I tried with sync_playwright() as p: browser = p.chromium.launch(headless=False) context = browser.new_context(permissions=['microphone']) ... Granting permissions via context doesn't work for some reason. The permission pop-up still comes up. I also tried to record a walkthrough with playwrights record script, but it's not recording granting microphone permissions. | You're missing some command line flags that tell chrome to simulate having a microphone. Give this sample a shot. from playwright.sync_api import sync_playwright def run(playwright): chromium = playwright.chromium browser = chromium.launch(headless=False, args=['--use-fake-device-for-media-stream', '--use-fake-ui-for-media-stream']) context = browser.new_context() context.grant_permissions(permissions=['microphone']) page = context.new_page() page.goto("https://permission.site/") page.click('#microphone') page.pause() # other actions... browser.close() with sync_playwright() as playwright: run(playwright) | 6 | 6 |
71,423,949 | 2022-3-10 | https://stackoverflow.com/questions/71423949/azure-pipelines-proper-way-to-use-poetry | What would be a recommended way to install your Python package dependencies with Poetry for Azure Pipelines? I see people only downloading Poetry through pip, which is a big no-no. - script: | python -m pip install -U pip pip install poetry poetry install displayName: Install dependencies I can use curl to download Poetry. - script: | curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python - export PATH=$PATH:$HOME/.poetry/bin poetry install --no-root displayName: 'Install dependencies' But then in each subsequent step I have to add poetry to PATH again ... - script: | curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python - export PATH=$PATH:$HOME/.poetry/bin poetry install --no-root displayName: 'Install dependencies' - script: | # export PATH=$PATH:$HOME/.poetry/bin poetry run flake8 src displayName: 'Linter' - script: | # export PATH=$PATH:$HOME/.poetry/bin poetry add pytest-azurepipelines poetry run pytest src displayName: 'Tests' Is there any right way to use poetry in Azure Pipelines? | I consulted this issue with a colleague. He recommended doing a separate step to add Poetry to the PATH. - task: UsePythonVersion@0 inputs: versionSpec: '3.8' displayName: 'Use Python 3.8' - script: | curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python - export PATH=$PATH:$HOME/.poetry/bin poetry install --no-root displayName: 'Install dependencies' - script: echo "##vso[task.prependpath]$HOME/.poetry/bin" displayName: Add poetry to PATH - script: | poetry run flake8 src displayName: 'Linter' - script: | poetry add pytest-azurepipelines poetry run pytest src displayName: 'Tests' | 7 | 9
71,460,471 | 2022-3-13 | https://stackoverflow.com/questions/71460471/improve-quality-of-extracted-image-in-opencv | #Segmenting the red pointer img = cv2.imread('flatmap.jpg') hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) lower_red = np.array([140, 110, 0]) upper_red = np.array([255, 255 , 255]) # Threshold with inRange() get only specific colors mask_red = cv2.inRange(hsv, lower_red, upper_red) # Perform bitwise operation with the masks and original image red_pointer = cv2.bitwise_and(img,img, mask= mask_red) # Display results cv2.imshow('Red pointer', red_pointer) cv2.imwrite('redpointer.jpg', red_pointer) cv2.waitKey(0) cv2.destroyAllWindows() I have a map and need to extract the red arrow. The code works but the arrow has black patches in it. How would I go about altering the code to improve the output of the arrow so it's a solid shape? | I've looked at the channels in HSL/HSV space. The arrows are the only stuff in the picture that has any saturation. That would be one required (but insufficient) aspect to get a lock on the desired arrow. I've picked those pixels and they appear to have a bit more than 50% saturation, so I'll use a lower bound of 25% (64). That red arrow's hue dithers around 0 degrees (red)... that means some of its pixels are on the negative side of 0, i.e. something like 359 degrees. You need to use two inRange calls to collect all hues from 0 up, and all hues from 359 down. Since OpenCV encodes hues in 2-degree steps, that'll be a value of 180 and down. I'll select 0 +- 20 degrees (0 .. 10 and 170 .. 180). In summary: hsv_im = cv.cvtColor(im, cv.COLOR_BGR2HSV) mask1 = cv.inRange(hsv_im, np.array([0, 64, 0]), np.array([10, 255, 255])) mask2 = cv.inRange(hsv_im, np.array([170, 64, 0]), np.array([180, 255, 255])) mask = mask1 | mask2 cv.imshow("mask", mask) cv.waitKey() | 5 | 0 |
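To actually extract the arrow as a solid shape (the black patches from the question come from holes in the mask), you can close the mask before applying it; a rough continuation of the snippet above, assuming import cv2 as cv and the im and mask variables defined there:
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (7, 7))
mask_closed = cv.morphologyEx(mask, cv.MORPH_CLOSE, kernel)  # fill small holes in the mask
red_pointer = cv.bitwise_and(im, im, mask=mask_closed)
cv.imshow("red pointer", red_pointer)
cv.waitKey()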
71,464,757 | 2022-3-14 | https://stackoverflow.com/questions/71464757/what-does-sa-relationship-kwargs-lazy-selectin-means-on-sqlmodel-with-f | I'm trying to use SQLModel with Fastapi, and on the way I found this example for implementing entities relationships, and I would like to know what does sa_relationship_kwargs={"lazy": "selectin"} means and what does it do? class UserBase(SQLModel): first_name: str last_name: str email: EmailStr = Field(nullable=True, index=True, sa_column_kwargs={"unique": True}) is_active: bool = Field(default=True) is_superuser: bool = Field(default=False) birthdate: Optional[datetime] phone: Optional[str] state: Optional[str] country: Optional[str] address: Optional[str] created_at: Optional[datetime] updated_at: Optional[datetime] class User(UserBase, table=True): id: Optional[int] = Field(default=None, nullable=False, primary_key=True) hashed_password: str = Field( nullable=False, index=True ) role_id: Optional[int] = Field(default=None, foreign_key="role.id") role: Optional["Role"] = Relationship(back_populates="users", sa_relationship_kwargs={"lazy": "selectin"}) groups: List["Group"] = Relationship(back_populates="users", link_model=LinkGroupUser) | It chooses the relationship loading technique that SQLAlchemy should use. The loading of relationships falls into three categories; lazy loading, eager loading, and no loading. Lazy loading refers to objects are returned from a query without the related objects loaded at first. When the given collection or reference is first accessed on a particular object, an additional SELECT statement is emitted such that the requested collection is loaded. In particular in this case it uses the "select IN loading" technique, which means that a second query will be constructed that loads all child objects through a WHERE parent_id IN (...) construct. Details about the available options: The primary forms of relationship loading are: lazy loading - available via lazy='select' or the lazyload() option, this is the form of loading that emits a SELECT statement at attribute access time to lazily load a related reference on a single object at a time. Lazy loading is detailed at Lazy Loading. joined loading - available via lazy='joined' or the joinedload() option, this form of loading applies a JOIN to the given SELECT statement so that related rows are loaded in the same result set. Joined eager loading is detailed at Joined Eager Loading. subquery loading - available via lazy='subquery' or the subqueryload() option, this form of loading emits a second SELECT statement which re-states the original query embedded inside of a subquery, then JOINs that subquery to the related table to be loaded to load all members of related collections / scalar references at once. Subquery eager loading is detailed at Subquery Eager Loading. select IN loading - available via lazy='selectin' or the selectinload() option, this form of loading emits a second (or more) SELECT statement which assembles the primary key identifiers of the parent objects into an IN clause, so that all members of related collections / scalar references are loaded at once by primary key. Select IN loading is detailed at Select IN loading. raise loading - available via lazy='raise', lazy='raise_on_sql', or the raiseload() option, this form of loading is triggered at the same time a lazy load would normally occur, except it raises an ORM exception in order to guard against the application making unwanted lazy loads. 
An introduction to raise loading is at Preventing unwanted lazy loads using raiseload. no loading - available via lazy='noload', or the noload() option; this loading style turns the attribute into an empty attribute (None or []) that will never load or have any loading effect. This seldom-used strategy behaves somewhat like an eager loader when objects are loaded in that an empty attribute or collection is placed, but for expired objects relies upon the default value of the attribute being returned on access; the net effect is the same except for whether or not the attribute name appears in the InstanceState.unloaded collection. noload may be useful for implementing a "write-only" attribute but this usage is not currently tested or formally supported. | 5 | 11
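For comparison, the same loading strategy can also be requested per query instead of being baked into the relationship; a small sketch with SQLModel/SQLAlchemy (engine is assumed to be an already configured engine, and User is the model defined above):
from sqlmodel import Session, select
from sqlalchemy.orm import selectinload

with Session(engine) as session:
    statement = select(User).options(selectinload(User.role))
    users = session.exec(statement).all()
    for user in users:
        print(user.role)  # already loaded by the extra SELECT ... IN query, no per-user query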
71,460,911 | 2022-3-13 | https://stackoverflow.com/questions/71460911/moving-python-venv-to-another-machine-without-internet | I am trying to deploy a Python project to a machine with no internet. Because it has no internet, I cannot pip install any packages with a requirements.txt file. I am wondering if it is possible to move my existing environment with all installed packages into another machine with all packages pre-installed. I can also attempt to use Docker for this installation. Would I be able to pre-install all the packages within a Docker container and then copy all the files onto another VM? | On your local machine (adapt the instructions if you are on Windows) Create your requirements.txt file (venv) [...]$ mkdir pkgs (venv) [...]$ cd pkgs (venv) [...]$ pip freeze > requirements.txt (venv) [...]$ pip download -r requirements.txt Download the pip archive from here Copy the pkgs folder to the remote machine On the remote machine: Install pip from the archive (venv) [...]$ cd pkgs # --- unarchive pip.tar.gz --- (venv) [...]$ python setup.py install Install packages (venv) [...]$ pip install --no-index --find-links . -r requirements.txt | 4 | 12
71,456,305 | 2022-3-13 | https://stackoverflow.com/questions/71456305/is-there-a-way-to-replace-the-end-of-a-string-starting-at-a-given-substring | I have the string banana | 10 and want to replace everything from | to the end of the string with 9. The output I want would be banana | 9. How could I achieve this? I've looked into .replace(), .split() and converting the string into a list of characters, and looping over them until I find the bit that should be replaced, but just couldn't figure it out. | I suggest you use the re module (regex module): import re myString = "banana | 10" re.sub(r"\|.+", r"| 9", myString) Note that re.sub returns a new string rather than modifying myString in place, so assign the result if you want to keep it. Output banana | 9 | 4 | 5
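If you would rather avoid regular expressions, str.partition does the same job (a tiny alternative sketch):
x = "banana | 10"
prefix, sep, _ = x.partition("|")
print(f"{prefix}{sep} 9")  # banana | 9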
71,455,443 | 2022-3-13 | https://stackoverflow.com/questions/71455443/python-convert-json-to-table-structure | I have a JSON with the following structure below into a list in a python variable. I'd like to extract this JSON value as a table. My question is, how can I extract it from the list and how can I change it into a table? Once I have converted it, I will insert the output into a Postgres table. JSON structure [' { "_id": { "$Col1": "XXXXXXX2443" }, "col2": false, "col3": "359335050111111", "startedAt": { "$date": 1633309625000 }, "endedAt": { "$date": 1633310213000 }, "col4": "YYYYYYYYYYYYYYYYYY", "created_at": { "$date": 1633310846935 }, "updated_at": { "$date": 1633310846935 }, "__v": 0 } '] Desired output: | Use the code below. I have used PrettyTable module for printing in a table like structure. Use this - https://www.geeksforgeeks.org/how-to-make-a-table-in-python/ for table procedure. Also, all the headers and values will be stored in headers and values variable. import json from prettytable import PrettyTable value = [''' { "_id": { "$Col1": "XXXXXXX2443" }, "col2": false, "col3": "359335050111111", "startedAt": { "$date": 1633309625000 }, "endedAt": { "$date": 1633310213000 }, "col4": "YYYYYYYYYYYYYYYYYY", "created_at": { "$date": 1633310846935 }, "updated_at": { "$date": 1633310846935 }, "__v": 0 }'''] dictionary = json.loads(value[0]) headers = [] values = [] for key in dictionary: head = key value = "" if type(dictionary[key]) == type({}): for key2 in dictionary[key]: head += "/" + key2 value = dictionary[key][key2] headers.append(head) values.append(value) else: value = dictionary[key] headers.append(head) values.append(value) print(headers) print(values) myTable = PrettyTable(headers) myTable.add_row(values) print(myTable) Output ['_id/$Col1', 'col2', 'col3', 'startedAt/$date', 'endedAt/$date', 'col4', 'created_at/$date', 'updated_at/$date', '__v'] ['XXXXXXX2443', False, '359335050111111', 1633309625000, 1633310213000, 'YYYYYYYYYYYYYYYYYY', 1633310846935, 1633310846935, 0] +-------------+-------+-----------------+-----------------+---------------+--------------------+------------------+------------------+-----+ | _id/$Col1 | col2 | col3 | startedAt/$date | endedAt/$date | col4 | created_at/$date | updated_at/$date | __v | +-------------+-------+-----------------+-----------------+---------------+--------------------+------------------+------------------+-----+ | XXXXXXX2443 | False | 359335050111111 | 1633309625000 | 1633310213000 | YYYYYYYYYYYYYYYYYY | 1633310846935 | 1633310846935 | 0 | +-------------+-------+-----------------+-----------------+---------------+--------------------+------------------+------------------+-----+ | 4 | 6 |
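If the end goal is a Postgres table, a shorter route (my own suggestion, assuming pandas, SQLAlchemy and a Postgres driver are available; the connection string and table name are placeholders) is to flatten the JSON with pandas and push it straight to the database:
import json
import pandas as pd
from sqlalchemy import create_engine

df = pd.json_normalize(json.loads(value[0]), sep='/')  # value as defined above
print(df.columns.tolist())  # e.g. ['col2', 'col3', ..., '_id/$Col1', 'startedAt/$date', ...]

engine = create_engine("postgresql://user:password@localhost:5432/mydb")  # placeholder DSN
df.to_sql("my_table", engine, if_exists="append", index=False)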
71,453,766 | 2022-3-13 | https://stackoverflow.com/questions/71453766/how-to-take-a-screenshot-from-part-of-screen-with-mss-python | I have some simple code here: import mss with mss.mss() as sct: filename = sct.shot(output="result.png") result.png But I want to capture only part of the screen, like this: Thanks for the help! | As explained on https://python-mss.readthedocs.io/examples.html, something like this should work (both imports are needed, since mss.tools.to_png lives in a submodule): import mss import mss.tools with mss.mss() as sct: # The screen part to capture monitor = {"top": 160, "left": 160, "width": 160, "height": 135} output = "sct-{top}x{left}_{width}x{height}.png".format(**monitor) # Grab the data sct_img = sct.grab(monitor) # Save to the picture file mss.tools.to_png(sct_img.rgb, sct_img.size, output=output) print(output) This is just the example given on the website. You can adjust the part of the screen that you are taking a screenshot of by modifying the monitor dictionary. As an example, you could change it from {"top": 160, "left": 160, "width": 160, "height": 135} to {"top": 10, "left": 14, "width": 13, "height": 105}. You will have to modify it to capture the part of the screen that you want. | 7 | 6
71,444,328 | 2022-3-11 | https://stackoverflow.com/questions/71444328/can-i-initialize-a-pydantic-model-using-the-unaliased-attribute-name | I'm working with an API where the schema for creating a group is effectively: class Group(BaseModel): identifier: str I was hoping I could do this instead: class Group(BaseModel): groupname: str = Field(..., alias='identifier') But with that configuration it's not possible to set the attribute value using the name groupname. That is, running this fails with a field required error: >>> g = Group(groupname='foo') pydantic.error_wrappers.ValidationError: 1 validation error for Group identifier field required (type=value_error.missing) Is it is possible to use either the alias or the actual attribute name to set the attribute value? I was hoping that these two would be equivalent: >>> Group(identifier='foo') >>> Group(groupname='foo') | Maybe you are looking for the allow_population_by_field_name config option: whether an aliased field may be populated by its name as given by the model attribute, as well as the alias (default: False) from pydantic import BaseModel, Field class Group(BaseModel): groupname: str = Field(..., alias='identifier') class Config: allow_population_by_field_name = True print(repr(Group(identifier='foo'))) print(repr(Group(groupname='bar'))) Output: Group(groupname='foo') Group(groupname='bar') | 5 | 7 |
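Note that when serializing back out (still on pydantic v1, as above), you can choose which name to emit:
g = Group(groupname='foo')
print(g.dict())               # {'groupname': 'foo'}
print(g.dict(by_alias=True))  # {'identifier': 'foo'}
print(g.json(by_alias=True))  # {"identifier": "foo"}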
71,445,570 | 2022-3-11 | https://stackoverflow.com/questions/71445570/running-pre-commit-python-package-in-windows-gives-executablenotfounderror-exec | I am working on a project where pre-commit==2.15.0 was added to the python requirements file. I installed the requirements. Now when I try to do a git commit I get the following error: An unexpected error has occurred: ExecutableNotFoundError: Executable `/bin/sh` not found Check the log at C:\Users\username\.cache\pre-commit\pre-commit.log In my pre-commit log I have: pre-commit version: 2.15.0 sys.version: 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] sys.executable: c:\users\username\appdata\local\programs\python\python39\python.exe os.name: nt sys.platform: win32 Traceback (most recent call last): File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\error_handler.py", line 65, in error_handler yield File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\main.py", line 368, in main return hook_impl( File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\commands\hook_impl.py", line 231, in hook_impl retv, stdin = _run_legacy(hook_type, hook_dir, args) File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\commands\hook_impl.py", line 42, in _run_legacy cmd = normalize_cmd((legacy_hook, *args)) File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\parse_shebang.py", line 82, in normalize_cmd exe = normexe(cmd[0]) File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\parse_shebang.py", line 61, in normexe _error('not found') File "c:\users\username\appdata\local\programs\python\python39\lib\site-packages\pre_commit\parse_shebang.py", line 51, in _error raise ExecutableNotFoundError(f'Executable `{orig}` {msg}') pre_commit.parse_shebang.ExecutableNotFoundError: Executable `/bin/sh` not found I work on Windows, whereas my teammates work on Macs. It looks like pre-commit is trying to reference /bin/sh, which does not exist on Windows. How do I get pre-commit working? | Your previous git hook is using a non-portable shebang (#!/bin/sh) (in your case this file will be located at .git/hooks/pre-commit.legacy -- originally .git/hooks/pre-commit) If you adjust the tool to use #!/usr/bin/env sh, then pre-commit will be able to run it (even on Windows) Alternatively, if you don't want to use pre-commit in migration mode, run pre-commit install --force You're also using an outdated version of pre-commit, which may be contributing to your issues -- so I'd recommend upgrading that as well disclaimer: I created pre-commit | 5 | 10
71,441,761 | 2022-3-11 | https://stackoverflow.com/questions/71441761/how-to-use-match-case-with-a-class-type | I want to use match to determine an action to perform based on a class type. I cannot seem to figure out how to do it. I know their are other ways of achieving this, I would just like to know can it be done this way. I am not looking for workarounds of which there are many. class aaa(): pass class bbb(): pass def f1(typ): if typ is aaa: print("aaa") elif typ is bbb: print("bbb") else: print("???") def f2(typ): match typ: case aaa(): print("aaa") case bbb(): print("bbb") case _: print("???") f1(aaa) f1(bbb) f2(aaa) f2(bbb) The output is as follows: aaa bbb ??? ??? | Try using typ() instead of typ in the match line: class aaa(): pass class bbb(): pass def f1(typ): if typ is aaa: print("aaa") elif typ is bbb: print("bbb") else: print("???") def f2(typ): match typ(): case aaa(): print("aaa") case bbb(): print("bbb") case _: print("???") f1(aaa) f1(bbb) f2(aaa) f2(bbb) Output: aaa bbb aaa bbb UPDATE: Based on OP's comment asking for solution that works for classes more generally than the example classes in the question, here is an answer addressing this: class aaa(): pass class bbb(): pass def f1(typ): if typ is aaa: print("aaa") elif typ is bbb: print("bbb") else: print("???") def f2(typ): match typ.__qualname__: case aaa.__qualname__: print("aaa") case bbb.__qualname__: print("bbb") case _: print("???") f1(aaa) f1(bbb) f2(aaa) f2(bbb) Output: aaa bbb aaa bbb UPDATE #2: Based on this post and some perusal of PEP 364 here, I have created an example showing how a few data types (a Python builtin, a class from the collections module, and a user defined class) can be used by match to determine an action to perform based on a class type (or more generally, a data type): class bbb: pass class namespacing_class: class aaa: pass def f1(typ): if typ is aaa: print("aaa") elif typ is bbb: print("bbb") else: print("???") def f2(typ): match typ.__qualname__: case aaa.__qualname__: print("aaa") case bbb.__qualname__: print("bbb") case _: print("???") def f3(typ): import collections match typ: case namespacing_class.aaa: print("aaa") case __builtins__.str: print("str") case collections.Counter: print("Counter") case _: print("???") ''' f1(aaa) f1(bbb) f2(aaa) f2(bbb) ''' f3(namespacing_class.aaa) f3(str) import collections f3(collections.Counter) Outputs: aaa str Counter As stated in this answer in another post: A variable name in a case clause is treated as a name capture pattern. It always matches and tries to make an assignment to the variable name. ... We need to replace the name capture pattern with a non-capturing pattern such as a value pattern that uses the . operator for attribute lookup. The dot is the key to matching this a non-capturing pattern. In other words, if we try to say case aaa: for example, aaa will be interpreted as a name to which we assign the subject (typ in your code) and will always match and block any attempts to match subsequent case lines. To get around this, for class type names (or names generally) that can be specified using a dot (perhaps because they belong to a namespace or another class), we can use the dotted name as a pattern that will not be interpreted as a name capture. For built-in type str, we can use case __builtins__.str:. For the Counter class in Python's collections module, we can use case collections.Counter:. If we define class aaa within another class named namespacing_class, we can use case namespacing_class.aaa:. 
However, if we define class bbb at the top level within our Python code, it's not clear to me that there is any way to use a dotted name to refer to it and thereby avoid name capture. It's possible there's a way to specify a user defined class type in a case line and I simply haven't figured it out yet. Otherwise, it seems rather arbitrary (and unfortunate) to be able to do this for dottable types and not for non-dottable ones. | 18 | 7 |
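One workaround for top-level classes like bbb that the updates above do not cover (plain Python 3.10 syntax, using aaa and bbb as defined in the original question) is to fall back on guards, which sidesteps the capture-pattern problem entirely:
def f4(typ):
    match typ:
        case _ if typ is aaa:  # the guard does the identity check; the wildcard avoids name capture
            print("aaa")
        case _ if typ is bbb:
            print("bbb")
        case _:
            print("???")

f4(aaa)  # aaa
f4(bbb)  # bbb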
71,439,161 | 2022-3-11 | https://stackoverflow.com/questions/71439161/how-to-generate-twitter-api-access-token-and-secret-with-read-write-permission | I just created twitter api with elevated access but I can't generate access token with write permission. I need this permission to update my twitter name. | App settings -> User authentication Settings -> Edit. Enable OAuth 1.0A -> App permissions Read and Write Callback URL can be set to something like http://localhost if you are not going to implement sign-in for other users Website URL set to something valid like your blog ideally, or use http://example.com if you must -> Save App Keys & Tokens -> Authentication tokens Regenerate Access Token and Secret This will grant read and write permission to the app for the account that owns the app. It will not let any the app write to any other account. | 7 | 19 |
71,437,278 | 2022-3-11 | https://stackoverflow.com/questions/71437278/after-changing-python-version-3-6-to-3-10-i-got-cannot-import-name-callable-fr | File "C:\Users\Codertjay\PycharmProjects\Teems_App_Kid\teems_app_kid\__init__.py", line 5, in <module> from .celery import app as celery_app File "C:\Users\Codertjay\PycharmProjects\Teems_App_Kid\teems_app_kid\celery.py", line 3, in <module> from celery import Celery File "C:\Users\Codertjay\PycharmProjects\brownie\Teems_App_Kid\lib\site-packages\celery\five.py", line 306, in __getattr__ module = __import__(self._object_origins[name], None, None, [name]) File "C:\Users\Codertjay\PycharmProjects\brownie\Teems_App_Kid\lib\site-packages\celery\app\__init__.py", line 14, in <module> from celery import _state File "C:\Users\Codertjay\PycharmProjects\brownie\Teems_App_Kid\lib\site-packages\celery\_state.py", line 20, in <module> from celery.utils.threads import LocalStack File "C:\Users\Codertjay\PycharmProjects\brownie\Teems_App_Kid\lib\site-packages\celery\utils\__init__.py", line 20, in <module> from collections import Callable ImportError: cannot import name 'Callable' from 'collections' (C:\Users\Codertjay\AppData\Local\Programs\Python\Python310\lib\collections\__init__.py) | The offending line (from collections import Callable) was removed from Celery nearly six years ago; in Python 3.10 the ABCs such as Callable can only be imported from collections.abc, which is why old Celery versions break. You should update the celery package to a recent version. | 5 | 8
71,433,507 | 2022-3-11 | https://stackoverflow.com/questions/71433507/pytorch-python-distributed-multiprocessing-gather-concatenate-tensor-arrays-of | If you have tensor arrays of different lengths across several gpu ranks, the default all_gather method does not work as it requires the lengths to be same. For example, if you have: if gpu == 0: q = torch.tensor([1.5, 2.3], device=torch.device(gpu)) else: q = torch.tensor([5.3], device=torch.device(gpu)) If I need to gather these two tensor arrays as follows: all_q = [torch.tensor([1.5, 2.3], torch.tensor[5.3]) the default torch.all_gather does not work as the lengths, 2, 1 are different. | As it is not directly possible to gather using built in methods, we need to write custom function with the following steps: Use dist.all_gather to get sizes of all arrays. Find the max size. Pad local array to max size using zeros/constants. Use dist.all_gather to get all padded arrays. Unpad the added zeros/constants using sizes found in step 1. The below function does this: def all_gather(q, ws, device): """ Gathers tensor arrays of different lengths across multiple gpus Parameters ---------- q : tensor array ws : world size device : current gpu device Returns ------- all_q : list of gathered tensor arrays from all the gpus """ local_size = torch.tensor(q.size(), device=device) all_sizes = [torch.zeros_like(local_size) for _ in range(ws)] dist.all_gather(all_sizes, local_size) max_size = max(all_sizes) size_diff = max_size.item() - local_size.item() if size_diff: padding = torch.zeros(size_diff, device=device, dtype=q.dtype) q = torch.cat((q, padding)) all_qs_padded = [torch.zeros_like(q) for _ in range(ws)] dist.all_gather(all_qs_padded, q) all_qs = [] for q, size in zip(all_qs_padded, all_sizes): all_qs.append(q[:size]) return all_qs Once, we are able to do the above, we can then easily use torch.cat to further concatenate into a single array if needed: torch.cat(all_q) [torch.tensor([1.5, 2.3, 5.3]) Adapted from: github | 4 | 6 |
71,431,565 | 2022-3-10 | https://stackoverflow.com/questions/71431565/syntax-for-returning-the-entire-list-when-using-the-minus-sign | Python lists have nifty indexing/slicing capabilities. Here are several examples: x = "123456" x[:-3] '123' x[:-1] '12345' # -1 slices off last element x[:-0] # -0 slices off .. everything .. this is what i'd like to fix '' I would like to slice off a variable number of elements d: x[:-d] But if d were 0 we get a much different result than desired. A workaround is: d = 0 x[:-d if d else len(x)] '123456' That is possible - but is there any [shorter] alternative? | You could use None, which is the default for missing start/stop/step values: x[:-d or None] | 5 | 6 |
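A quick check that this behaves for every d, including zero:
x = "123456"
for d in (0, 1, 3):
    print(x[:-d or None])
# 123456
# 12345
# 123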
71,416,149 | 2022-3-9 | https://stackoverflow.com/questions/71416149/modifying-the-signature-and-defaults-of-a-python-function-the-hack-way | I'm trying to understand the python data model better and ran into something odd. def foo(a, b = 2): return a / b assert foo(20) == 10.0 # note: for sanity purposes, should also change signature, but not needed for effect foo.__defaults__ = (10,) assert foo(20) == 2.0 foo.__defaults__ = () foo.__kwdefaults__ = {'b': 10} foo(20) # raises TypeError: foo() missing 1 required positional argument: 'b' An error is expected: __kwdefaults__ is for keyword-only arguments, so let's make b a keyword-only argument to try to solve this problem: from inspect import signature foo.__signature__ = signature(lambda a, *, b=10: None) foo(20) # still raises TypeError: foo() missing 1 required positional argument: 'b' How does the error message relate to what's happening?. What I find strange is that neither the original function, nor my doctored one required b (it always had a default!). Also, b has never been a positional-only argument. What is happening here? How can one transform foo to make b be a keyword-only argument with default 10. If my original function had the signature I "injected" above, all goes well though: def foo(a, *, b=2): # same as previous `foo`, with signature we want return a / b foo.__kwdefaults__ = {'b': 10} # change kwdefault assert foo(20) == 2.0 # it works!! Preemptive note: I know of functools wraps and partial, which I could use -- though in my context, I'd rather change the function itself, not a wrapped version. My question is about the behavior I created in the code above: How did it come about? | Purpose of __signature__ Your issue is, that you think that you change a function's signature by setting foo.__signature__. However, this is not what's happening. It is equally useless to set it to foo.signature or foo.any_other_name. You just set a signature object to the respective property of the function, which changes nothing with regards to the function's behaviour. The only thing that __signature__ does is to change the behaviour of inspect.signature(), since it will return the signature of the function as stored in function.__signature__ iff it is set. I.e. the only thing, that __signature__ changes is the behaviour of inspect.signature(), but not the function itself. See ekhumoro's comment for the link to the appropriate PEP. TypeError As for the type error: In foo() b is not a kwarg-only argument: def foo(a, b = 2): return a / b It is a positional argument with a default value. Hence its default value is stored in foo.__defaults__. When you set foo.__defaults__ = () you erased those defaults. After that, b hence has no longer a default value and needs to be passed explicitly. Changing signatures How can one transform foo to make b be a keyword-only argument with default 10. You cannot change a function's signature during runtime. Period. Changing default values You can, however, change b's default value to 10 via >>> foo.__defaults__ = (10,) >>> foo(2) 0.2 Since positional arguments with default values cannot be followed by positional arguments without defaults, the tuple __defaults__ is applied to the positional arguments from right to left. So you can also give a a default value of e.g. 20 via >>> foo.__defaults__ = (20, 10) >>> foo() 2.0 | 7 | 6 |
71,424,233 | 2022-3-10 | https://stackoverflow.com/questions/71424233/how-do-i-list-my-scheduled-queries-via-the-python-google-client-api | I have set up my service account and I can run queries on BigQuery using client.query(). I could just write all my scheduled queries into this new client.query() format but I already have many scheduled queries so I was wondering if there is a way I can get/list the scheduled queries and then use that information to run those queries from a script. | Yes, you can use the APIs. When you don't know which one to use, I have a tip. Use the command proposed by @Yev bq ls --transfer_config --transfer_location=US --format=prettyjson But log the API calls. For that, use the --apilog <logfile name> parameter, like this: bq --apilog ./log ls --transfer_config --transfer_location=US --format=prettyjson And, magically, you can find the API called by the command: https://bigquerydatatransfer.googleapis.com/v1/projects/<PROJECT-ID>/locations/US/transferConfigs?alt=json Then, a simple Google search leads you to the correct documentation. In Python, add this dependency to your requirements.txt: google-cloud-bigquery-datatransfer and use this code from google.cloud import bigquery_datatransfer client = bigquery_datatransfer.DataTransferServiceClient() parent = client.common_project_path("<PROJECT-ID>") resp = client.list_transfer_configs(parent=parent) print(resp) | 6 | 6
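The response is iterable, so listing the scheduled queries becomes a simple loop (the field names below are how I understand the returned TransferConfig objects; double-check them against the client library docs):
for config in resp:
    print(config.name)             # full resource name of the transfer config
    print(config.display_name)     # the label of the scheduled query
    print(config.schedule)         # e.g. 'every 24 hours'
    print(config.params["query"])  # the SQL text, for scheduled-query configs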
71,424,546 | 2022-3-10 | https://stackoverflow.com/questions/71424546/combine-2-kde-functions-in-one-plot-in-seaborn | I have the following code for plotting the histogram and the kde-functions (Kernel density estimation) of a training and validation dataset: #Plot histograms import matplotlib.pyplot as plt import matplotlib import seaborn as sns displot_dataTrain=sns.displot(data_train, bins='auto', kde=True) displot_dataTrain._legend.remove() plt.ylabel('Count') plt.xlabel('Training Data') plt.title("Histogram Training Data") plt.show() displot_dataValid =sns.displot(data_valid, bins='auto', kde=True) displot_dataValid._legend.remove() plt.ylabel('Count') plt.xlabel('Validation Data') plt.title("Histogram Validation Data") plt.show() # Try to plot the kde-functions together --> yields an AttributeError X1 = np.linspace(data_train.min(), data_train.max(), 1000) X2 = np.linspace(data_valid.min(), data_valid.max(), 1000) fig, ax = plt.subplots(1,2, figsize=(12,6)) ax[0].plot(X1, displot_dataTest.kde.pdf(X1), label='train') ax[1].plot(X2, displot_dataValid.kde.pdf(X1), label='valid') The plotting of the histograms and kde-functions inside one plot works without problems. Now I would like to have the 2 kde-functions inside one plot but when using the posted code, I get the following error AttributeError: 'FacetGrid' object has no attribute 'kde' Do you have any idea, how I can combined the 2 kde-functions inside one plot (without the histogram)? | sns.displot() returns a FacetGrid. That doesn't work as input for ax.plot(). Also, displot_dataTest.kde.pdf is never valid. However, you can write sns.kdeplot(data=data_train, ax=ax[0]) to create a kdeplot inside the first subplot. See the docs; note the optional parameters cut= and clip= that can be used to adjust the limits. If you only want one subplot, you can use fig, ax = plt.subplots(1, 1, figsize=(12,6)) and use ax=ax instead of ax=ax[0] as in that case ax is just a single subplot, not an array of subplots. The following code has been tested using the latest seaborn version: import matplotlib.pyplot as plt import seaborn as sns import numpy as np fig, ax = plt.subplots(figsize=(12, 6)) sns.kdeplot(data=np.random.normal(0.1, 1, 100).cumsum(), color='crimson', label='train', fill=True, ax=ax) sns.kdeplot(data=np.random.normal(0.1, 1, 100).cumsum(), color='limegreen', label='valid', fill=True, ax=ax) ax.legend() plt.tight_layout() plt.show() | 4 | 7 |
71,416,368 | 2022-3-9 | https://stackoverflow.com/questions/71416368/display-an-image-with-transparency-and-no-background-or-window-in-python | I'm trying to display an image on the screen, without any window/application popping up/containing it. I'm pretty close with TKinter, but the method for removing the background color of the canvas is hacky and has some undesired effects. import tkinter as tk import ctypes user32 = ctypes.windll.user32 screen_size = user32.GetSystemMetrics(0), user32.GetSystemMetrics(1) root = tk.Tk() root.overrideredirect(True) root.config(bg="blue", bd=0, highlightthickness=0) root.attributes("-transparentcolor", "#FEFCFD") root.attributes("-topmost", True) tk_img = tk.PhotoImage(file="image.png") canvas = tk.Canvas(root, bg="#FEFCFD", bd=0, highlightthickness=0, width=screen_size[0], height=screen_size[1]) canvas.pack() img = canvas.create_image(0, 0, image=tk_img, anchor="nw") root.mainloop() The -transparentcolor flag mostly removes the background, but if an image has any partially transparent pixels it will tint them. Plus, if that color exists in the image, it will be removed; that choice of color was in hopes of minimizing exact matches in an image while also being mostly white, to hopefully have the least noticeable affect on the images. Here's an image of what it looks like currently; very close to what I want, but you can see some missing pixels in the white areas of the dice, and they all seem to have a white border around them due to their edges being partially transparent. This is what the image should look like. I've also tried to achieve this effect using wxPython, but I can't remove the background of the window, leading to transparent images always being backed by some color. I used this answer; I've modified it slightly but nothing I've done has improved it. So, is there a way to draw an image on the screen without any background at all with Python? | Thanks to the suggestion from Kartikeya, I was able to solve my own question. Using PyQt5, this code will display an image with transparency and no border or background at all import sys from PyQt5.QtCore import Qt from PyQt5.QtGui import QPixmap from PyQt5.QtWidgets import QMainWindow, QApplication, QLabel app = QApplication(sys.argv) window = QMainWindow() window.setAttribute(Qt.WA_TranslucentBackground, True) window.setAttribute(Qt.WA_NoSystemBackground, True) window.setWindowFlags(Qt.FramelessWindowHint) label = QLabel(window) pixmap = QPixmap('image.png') label.setPixmap(pixmap) label.setGeometry(0, 0, pixmap.width(), pixmap.height()) window.label = label window.resize(pixmap.width(),pixmap.height()) window.show() sys.exit(app.exec_()) Once I was looking for PyQt5, I found this question and only needed to modify the code slightly. Here is what it looks like now. | 6 | 6 |
71,413,808 | 2022-3-9 | https://stackoverflow.com/questions/71413808/understanding-xarray-apply-ufunc | I have an xarray with multiple time dimensions slow_time, fast_time a dimension representing different objects object and a dimension reflecting the position of each object at each point in time coords. The goal is now to apply a rotation using scipy.spatial.transform.Rotation to each position in this array, for every point in time. I'm struggling to figure out how to use xarray.apply_ufunc to do what I want, mainly because the concept of input_core_dimensions isn't really clear to me. The code below shows what I'm trying to do: import numpy as np import xarray as xr from scipy.spatial.transform import Rotation # dummy initial positions initial_position = xr.DataArray(np.arange(6).reshape((-1,3)), dims=["object", "coords"]) # dummy velocities velocity = xr.DataArray(np.array([[1, 0, 0], [0, 0.5, 0]]), dims=["object", "coords"]) slow_time = xr.DataArray(np.linspace(0, 1, 10, endpoint=False), dims=["slow_time"]) fast_time = xr.DataArray(np.linspace(0, 0.1, 100, endpoint=False), dims=["fast_time"]) # times where to evaluate my function times = slow_time + fast_time # this is the data I want to transform positions = times * velocity + initial_position # these are the rotation angles theta = np.pi/20 * times phi = np.pi/100 * times def apply_rotation(vectors, theta, phi): R = Rotation.from_euler("xz", (theta, phi)) result = R.apply(vectors) return result rotated_positions = xr.apply_ufunc(apply_rotation, positions, theta, phi, ...??) The behaviour I'm looking for is essentially like four nested for loops that apply the rotations to each point like this pseudocode for pos, t, p in zip(positions, theta, phi): R = Rotation.from_euler("xz", (t, p)) R.apply(pos) But I'm unsure how to proceed. Using this rotated_positions = xr.apply_ufunc(apply_rotation, positions, theta, phi, input_core_dims=[["object"],[],[]], output_core_dims=[["object"]]) I thought the function would be applied along subarrays of the object dimension, but now I get entire arrays passed into my function which doesn't work. The information on apply_ufunc in the xarray documentation doesn't really makes things very clear. Any help is appreciated! | Reference First off, a helpful reference would be this documentation page on unvectorized ufunc Solution As I understand your question you want to apply a rotation to the position vector of each object at every time. The way you have set up your data already puts the coordinates as the final dimension of the array. Translating your pseudocode to generate a reference dataset rotatedPositions yields: rotatedPositions = positions.copy() for slowTimeIdx in range( len(slow_time)): for fastTimeIdx in range( len(fast_time) ): for obj in range(2): pos = rotatedPositions.data[slowTimeIdx, fastTimeIdx, obj].copy() rotatedPositions.data[slowTimeIdx, fastTimeIdx, obj] = apply_rotation(pos, theta.data[slowTimeIdx, fastTimeIdx], phi.data[slowTimeIdx, fastTimeIdx]) where I hard-coded the object dimension size. In essence the apply_rotation function takes 1 3-vector (1D array of size 3) and 2 scalars and returns a 1D array of size 3 (3 vector). Following the documentation mentioned above I arrive at the following call to apply_ufunc: rotated = xr.apply_ufunc(apply_rotation, positions, theta, phi, input_core_dims=[['coords'], [], []], output_core_dims=[['coords']], vectorize=True ) Testing via np.allclose(rotatedPositions.data, rotated.data) indicates success. 
Explanation As I understand the documentation cited above, apply_ufunc takes the function to be applied as its first argument, then all positional arguments in order. Next, one has to provide the dimension labels of each dataset that correspond to the data that is core to apply_rotation's work. Here this is coords, as we manipulate coordinates. Since neither theta nor phi has this dimension, we do not specify anything for them. Next we have to specify the dimensions the output data will have; since we just transform the data, we keep output_core_dims=[['coords']]. Leaving this out would lead apply_ufunc to assume the output data is 0-dimensional (a scalar). Finally, vectorize=True ensures that the function is executed over all dimensions not specified in input_core_dims. | 6 | 5 |
71,420,828 | 2022-3-10 | https://stackoverflow.com/questions/71420828/custom-python-module-in-azure-databricks-with-spark-dbutils-dependencies | I recently switched on the preview feature "files in repos" on Azure Databricks, so that I could move a lot of my general functions from notebooks to modules and get rid of the overhead from running a lot of notebooks for a single job. However, several of my functions rely directly on dbutils or spark/pyspark functions (e.g. dbutils.secrets.get() and spark.conf.set()). Since these are imported in the background for the notebooks and are tied directly to the underlying session, I am at a complete loss as to how I can reference these modules in my custom modules. For my small sample module, I fixed it by making dbutils a parameter, like in the following example: class Connection: def __init__(self, dbutils): token = dbutils.secrets.get(scope="my-scope", key="mykey") ... However, doing this for all the existing functions would require a significant amount of rewriting both the functions and the lines that call them. How can I avoid this procedure and do it in a cleaner manner? | The documentation for Databricks Connect shows an example of how this could be achieved. That example has SparkSession as an explicit parameter, but it could be modified to avoid that completely, with something like this: def get_dbutils(): from pyspark.sql import SparkSession spark = SparkSession.getActiveSession() if spark.conf.get("spark.databricks.service.client.enabled") == "true": from pyspark.dbutils import DBUtils return DBUtils(spark) else: import IPython return IPython.get_ipython().user_ns["dbutils"] and then in your function you can do something like this: def get_my_secret(scope, key): return get_dbutils().secrets.get(scope, key) | 6 | 11 |
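A hypothetical usage sketch, not part of the accepted answer: assuming the get_dbutils()/get_my_secret() helpers above live in a shared module that the class can import from, the question's Connection class could drop its dbutils parameter entirely. The scope and key names are the question's own placeholders.

# Hypothetical rewrite of the question's Connection class reusing the answer's helpers.
class Connection:
    def __init__(self):
        # No dbutils argument is threaded through any more; the helper
        # resolves it from the active session or notebook context.
        self.token = get_my_secret("my-scope", "mykey")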
71,420,605 | 2022-3-10 | https://stackoverflow.com/questions/71420605/is-there-a-numpy-magic-avoiding-these-loops | I would like to avoid for loops in this code snippet: import numpy as np N = 4 a = np.random.randint(0, 256, size=(N, N, 3)) m = np.random.randint(0, 2, size=(N, N)) for i, d0 in enumerate(a): for j, d1 in enumerate(d0): if m[i, j]: d1[2] = 42 This is a simplified example where a is an N x N RGB image and m is a N x N mask, which sets masked elements of the 3rd channel: a[:, :, 2] only. | You can index the last axis and set the elements selected by a boolean mask import numpy as np N = 4 a = np.random.randint(0, 256, size=(N, N, 3)) m = np.random.randint(0, 2, size=(N, N)) a[...,2][m.astype('bool')] = 42 a Output (for a random example of a) array([[[ 9, 13, 4], [15, 0, 42], [11, 12, 9], [13, 0, 42]], [[ 1, 10, 42], [ 9, 0, 42], [ 8, 6, 4], [ 3, 0, 42]], [[15, 11, 6], [ 8, 11, 42], [14, 1, 42], [ 4, 14, 1]], [[ 3, 6, 42], [ 5, 13, 3], [ 9, 14, 13], [12, 6, 42]]]) | 4 | 4 |
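A supplementary sketch, not from the accepted answer: the same assignment can be written in one step by letting the boolean mask index the two leading axes while an integer selects the channel.

import numpy as np

N = 4
a = np.random.randint(0, 256, size=(N, N, 3))
m = np.random.randint(0, 2, size=(N, N))

# Boolean mask over the (row, column) axes, integer index on the channel axis:
# sets the third channel of every masked pixel.
a[m.astype(bool), 2] = 42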
71,420,362 | 2022-3-10 | https://stackoverflow.com/questions/71420362/django4-0-importerror-cannot-import-name-ugettext-lazy-from-django-utils-tra | I use translation in my Django apps. Since I installed Django version 4, when I try to import ugettexget_lazy as shown in the code below from django.utils.translation import ugettexget_lazy as _ I get the following error: ImportError: cannot import name 'ugettext_lazy' from 'django.utils.translation' | It was removed from Django 4, use this instead from django.utils.translation import gettext_lazy as _ | 8 | 17 |
71,416,759 | 2022-3-9 | https://stackoverflow.com/questions/71416759/i-want-to-run-a-python-script-on-a-server-24-7 | I am making a program that simulates a stock market for a virtual currency. I have not tried anything yet, but I want a Python script to run 24/7 online for a long period of time. The Python script should be something like this: import time import random while True: number = random.randint(0,5) print(number) time.sleep(2) Also, a separate local Python program should be able to retrieve the number variable constantly every 2 seconds. I do not need a recommendation for a product or service. I just need to know what code I need to run and if I need a physical or web server. If I use a web server, I will be paying monthly for a long time. This project needs to be running online for theoretically years (setting aside downtime and maintenance). I have barely any experience in servers and networking, and I couldn't find answers online. Again, I have not tried anything yet. | If you really want to just simulate it and there is only one user, you can just return a random number each time the user makes a query. It's irrelevant how often they query it. You can put this up locally or on the Heroku free plan. But the fact that the user is querying every 2 seconds means lots of requests, and so you may exceed the quota. import random from flask import Flask import time app = Flask(__name__) @app.route("/") def hello_world(): return str(random.randint(0,5)) Say you put it up locally on port 5000. Then, simply going to "localhost:5000" via Python or a browser will give you that random number. To serve it up to multiple people, you want to have a separate thread running to generate the number. Then, the view would return the number at the URL. For example, from flask import Flask import random import threading import time app = Flask(__name__) number = 0 @app.route("/") def hello_world(): """ URL returns random number """ return str(number) def gen_rand(): """ generating random numbers """ global number while True: number = random.randint(0, 5) time.sleep(2) if __name__ == '__main__': # starting thread to generate the number x = threading.Thread(target=gen_rand, daemon=True) x.start() # starting web server app.run() By default, the local webserver will start at localhost:5000; go to this URL in your browser and you will see the randomly generated number. You can open multiple browser tabs to see that they give the same random number if refreshed within 2 seconds. Note that using a global variable is not thread-safe or process-safe. You should use a database or Redis to update and load "number". See this question for further discussion: Are global variables thread-safe in Flask? How do I share data between requests? | 5 | 1 |
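For the second half of the question (a separate local program polling the value every 2 seconds), a minimal sketch assuming the Flask app above is running on localhost:5000 and that the third-party requests package is installed:

import time
import requests

while True:
    response = requests.get("http://localhost:5000/")  # endpoint served by the Flask app above
    number = int(response.text)                        # the view returns the number as a string
    print(number)
    time.sleep(2)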
71,410,741 | 2022-3-9 | https://stackoverflow.com/questions/71410741/pip-uninstall-gdal-gives-attributeerror-pathmetadata-object-has-no-attribute | I'm trying to pip install geopandas as a fresh installation, so I want to remove existing packages like GDAL and fiona. I've already managed to pip uninstall fiona, but when I try to uninstall or reinstall GDAL it gives the following error message: (base) C:\usr>pip install C:/usr/Anaconda3/Lib/site-packages/GDAL-3.4.1-cp38-cp38-win_amd64.whl Processing c:\usr\anaconda3\lib\site-packages\gdal-3.4.1-cp38-cp38-win_amd64.whl Installing collected packages: GDAL Attempting uninstall: GDAL Found existing installation: GDAL 3.0.2 ERROR: Exception: Traceback (most recent call last): File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\cli\base_command.py", line 167, in exc_logging_wrapper status = run_func(*args) File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\cli\req_command.py", line 205, in wrapper return func(self, options, args) File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\commands\install.py", line 405, in run installed = install_given_reqs( File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\req\__init__.py", line 68, in install_given_reqs uninstalled_pathset = requirement.uninstall(auto_confirm=True) File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\req\req_install.py", line 637, in uninstall uninstalled_pathset = UninstallPathSet.from_dist(dist) File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\req\req_uninstall.py", line 554, in from_dist for script in dist.iterdir("scripts"): File "C:\usr\Anaconda3\lib\site-packages\pip\_internal\metadata\pkg_resources.py", line 156, in iterdir if not self._dist.isdir(name): File "C:\usr\Anaconda3\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2816, in __getattr__ return getattr(self._provider, attr) AttributeError: 'PathMetadata' object has no attribute 'isdir' Does anyone know why GDAL cannot be uninstalled? | I just came across this question after getting the same error. Coincidentally I had just upgraded pip (I was getting tired of the yellow warnings). All I had was to down grade my pip pip install pip==21.3.1 --user | 8 | 8 |
71,413,435 | 2022-3-9 | https://stackoverflow.com/questions/71413435/why-does-floatinf-floatinf-return-true-but-floatinf-floatinf | Why does this happen in Python? float('inf') == float('inf') returns True, float('inf') + float('inf') == float('inf') returns True, float('inf') * float('inf') == float('inf') returns True, float('inf') - float('inf') == float('inf') returns False, float('inf') / float('inf') == float('inf') returns False. My best guess, if I think about limits, is related to the result of this operation: lim_{x→+∞}(f(x) • g(x)) where lim_{x→+∞} f(x) = +∞ and lim_{x→+∞} g(x) = +∞, which returns +∞ if • is + or *, but it is not defined (it could take any value) if • is - or /. I am very puzzled, though. | Before the comparison float('inf') - float('inf') == float('inf') can be made, the result of float('inf') - float('inf') will be calculated. That result is NaN. It is NaN because the amounts of infinity may differ. It's explained on the Stack Overflow sister site Math.SE, known as the Hilbert hotel paradox: From a layman's perspective, imagine that I have an infinite number of hotel rooms, each numbered 1, 2, 3, 4, ... Then I give you all of them. I would have none left, so ∞ - ∞ = 0. On the other hand, if I give you all of the odd-numbered ones, then I still have an infinite number left. So ∞ - ∞ = ∞. Now suppose that I give you all of them except for the first seven. Then ∞ - ∞ = 7. While this doesn't explain why this is indeterminate, hopefully you can agree that it is indeterminate! The best number to represent indeterminate is NaN. Comparing NaN to anything is always False, even comparing NaN against itself. Besides that quite "logical" explanation, we find that Python uses the IEEE 754 representation for floating-point calculation. You'd typically need to buy the IEEE 754 specification, but luckily a draft version can be found online. The relevant chapter IMHO is 7.2: For operations producing results in floating-point format, the default result of an operation that signals the invalid operation exception shall be a quiet NaN [...] [...] d) addition or subtraction or fusedMultiplyAdd: magnitude subtraction of infinities, such as: addition(+∞, -∞) | 4 | 8 |
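A short interpreter sketch confirming the chain of reasoning above: the subtraction and division produce NaN, and NaN compares unequal to everything, including itself.

import math

diff = float('inf') - float('inf')
print(diff)                  # nan
print(math.isnan(diff))      # True
print(diff == float('inf'))  # False: NaN is unequal to everything...
print(diff == diff)          # False: ...even to itself
print(math.isnan(float('inf') / float('inf')))  # True: division is indeterminate too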
71,409,353 | 2022-3-9 | https://stackoverflow.com/questions/71409353/finding-the-position-of-noun-and-verb-in-a-sentence-python | Is there a way to find the position of the words with pos-tag 'NN' and 'VB' in a sentence in Python? example of a sentences in a csv file: "Man walks into a bar." "Cop shoots his gun." "Kid drives into a ditch" | You can find positions for certein PoS tags on a text using some of the existing NLP frameworks such us Spacy or NLTK. Once you process the text you can iterate for each token and check if the pos tag is what you are looking for, then get the start/end position of that token in your text. Spacy Using spacy, the code to implement what you want would be something like this: import spacy nlp = spacy.load("en_core_web_lg") doc = nlp("Man walks into a bar.") # Your text here words = [] for token in doc: if token.pos_ == "NOUN" or token.pos_ == "VERB": start = token.idx # Start position of token end = token.idx + len(token) # End position = start + len(token) words.append((token.text, start, end, token.pos_)) print(words) In short, I build a new document from the string, iterate over all the tokens and keep only those whose post tag is VERB or NOUN. Finally I add the token info to a list for further processing. I strongly recommend that you read the following spacy tutorial for more information. NLTK Using NLTK I think is pretty straightforward too, using NLTK tokenizer and pos tagger. The rest is almost analogous to how we do it using spacy. What I'm not sure about is the most correct way to get the start-end positions of each token. Note that for this I am using a tokenization helper created by WhitespaceTokenizer().tokenize() method, which returns a list of tuples with the start and end positions of each token. Maybe there is a simpler and NLTK-like way of doing it. import nltk from nltk.tokenize import WhitespaceTokenizer text = "Man walks into a bar." # Your text here tokens_positions = list(WhitespaceTokenizer().span_tokenize(text)) # Tokenize to spans to get start/end positions: [(0, 3), (4, 9), ... ] tokens = WhitespaceTokenizer().tokenize(text) # Tokenize on a string lists: ["man", "walks", "into", ... ] tokens = nltk.pos_tag(tokens) # Run Part-of-Speech tager # Iterate on each token words = [] for i in range(len(tokens)): text, tag = tokens[i] # Get tag start, end = tokens_positions[i] # Get token start/end if tag == "NN" or tag == "VBZ": words.append((start, end, tag)) print(words) I hope this works for you! | 6 | 7 |
71,404,046 | 2022-3-9 | https://stackoverflow.com/questions/71404046/is-object-object-guaranteed-to-be-false | Suppose I create two instances of class object. Are these two instances guaranteed to be not equal to each other? In other words, is object() == object() guaranteed to be False, or is it implementation-dependent? I understand that object() is object() is guaranteed to be False, but here I am asking about object() == object(). | Yes it is guaranteed that object() == object() is False because it is documented that "by default, object implements __eq__() by using is". | 5 | 10 |
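A tiny sketch of the documented behaviour: distinct instances compare unequal because the default __eq__ falls back to identity, while the same instance always compares equal to itself.

a = object()
b = object()
print(a == b)  # False: different instances, identity-based equality
print(a is b)  # False
print(a == a)  # True: same instance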
71,328,869 | 2022-3-2 | https://stackoverflow.com/questions/71328869/what-is-gettext-lazy-on-django-for | I have read the Django documentation about gettext_lazy, but there is no clear definition for me of this exact function, and I want to remove it. I found this implementation: from django.utils.translation import gettext_lazy as _ and it is used on a Django model field in this way: email = models.EmailField(_('email address'), unique=True) What is it for? What happens if I remove it? | It's used for translation, i.e. for creating translation files like this: # app/locale/cs/LC_MESSAGES/django.po #: templates/app/index.html:3 msgid "email address" msgstr "emailová adresa" Then it can be rendered in a template as translated text. Nothing will happen if you remove it and don't want to use multilingualism. | 10 | 19 |
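A small sketch of what the "lazy" part buys you, assuming a configured Django project with USE_I18N enabled and a compiled Czech catalogue like the one above: the string is only resolved when it is rendered, so it picks up whichever language is active at that moment.

from django.utils import translation
from django.utils.translation import gettext_lazy as _

label = _("email address")     # a lazy proxy, not translated yet
with translation.override("cs"):
    print(str(label))          # resolved now, using the Czech .po entry if compiled
print(str(label))              # resolved again, in the default language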
71,380,919 | 2022-3-7 | https://stackoverflow.com/questions/71380919/polars-return-dataframe-with-all-unique-values-of-n-columns | I have a dataframe that has many rows per combination of the 'PROGRAM', 'VERSION', and 'RELEASE_DATE' columns. I want to get a dataframe with all of the combinations of just those three columns. Would this be a job for groupby or distinct? | Since you are not aggregating anything, use unique df.select('PROGRAM','VERSION','RELEASE_DATE').unique() | 6 | 5 |
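A minimal runnable sketch with made-up values, assuming a recent polars version; df.unique(subset=[...]) is a related option when the other columns should be retained alongside each combination.

import polars as pl

df = pl.DataFrame({
    "PROGRAM": ["app", "app", "app"],
    "VERSION": ["1.0", "1.0", "2.0"],
    "RELEASE_DATE": ["2022-01-01", "2022-01-01", "2022-03-01"],
    "MEASUREMENT": [1.2, 3.4, 5.6],  # per-row data that the select drops
})

# One row per distinct (PROGRAM, VERSION, RELEASE_DATE) combination.
combos = df.select("PROGRAM", "VERSION", "RELEASE_DATE").unique()
print(combos)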
71,325,155 | 2022-3-2 | https://stackoverflow.com/questions/71325155/convert-images-to-pdf-using-img2pdf-package-in-python | I am trying to convert images in folder named Final to one pdf This is my try to achieve the task import os import img2pdf with open("Stickers.pdf","wb") as f: #print(os.listdir("./Final")) f.write(img2pdf.convert([x for x in os.listdir("./Final") if x.endswith(".png")])) But I got an error message like that (any idea how to fix that) File "C:\Users\Future\Desktop\demo.py", line 5, in <module> f.write(img2pdf.convert([x for x in os.listdir("./Final") if x.endswith(".png")])) File "C:\Users\Future\AppData\Local\Programs\Python\Python39\lib\site-packages\img2pdf.py", line 2263, in convert ) in read_images( File "C:\Users\Future\AppData\Local\Programs\Python\Python39\lib\site-packages\img2pdf.py", line 1444, in read_images im = BytesIO(rawdata) TypeError: a bytes-like object is required, not 'str' | The error message is cryptic largely because img2pdf bends over backwards to be so versatile about what you can pass to the convert function: a file path (as a string or path object), an open file object, raw image data, multiples of any of those, or a list of any of those. If given a string, it first treats it as a file path and tries to open it; if that fails, it then assumes the input must be raw image data (as bytes) and attempts to process it that way, in which case you get the TypeError that you see. The reason it can't open the files correctly is that os.listdir returns only the file names themselves, not their full path, and those image files aren't in your current working directory. That is, you're trying to open example.png, not ./Final/example.png. You can fix this by manually adding the folder name to each file name: import os import img2pdf with open("Stickers.pdf", "wb") as f: f.write(img2pdf.convert([f"./Final/{x}" for x in os.listdir("./Final") if x.endswith(".png")])) You could also use pathlib. It often makes dealing with paths easier (and more easily platform-independent) than messing about with os. Path objects are smarter than plain strings, and automatically know about their enclosing folder in this case. All of Python's builtins (like open) will accept a Path object anywhere they already accept a string representing a path, and by this point practically all third-party libraries do as well. This includes img2pdf, as of 0.5.0. Here's what a very literal translation would look like, except using pathlib: from pathlib import Path import img2pdf with open("Stickers.pdf", "wb") as f: f.write(img2pdf.convert([path for path in Path("./Final").glob("*.png")])) But now that the glob method's doing the filtering, the list comprehension isn't actually doing anything. We can't actually pass it the result of glob, as that's a generator rather than a list, but we can unpack it to pass the individual results as multiple arguments. from pathlib import Path import img2pdf with open("Stickers.pdf", "wb") as f: f.write(img2pdf.convert(*Path("./Final").glob("*.png"))) | 4 | 5 |
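One extra wrinkle, offered as a hedged sketch rather than part of the accepted answer: neither os.listdir nor Path.glob guarantees any particular ordering, so sort the paths if the page order of the PDF should follow the file names (this assumes an img2pdf version new enough to accept Path objects, as discussed above).

from pathlib import Path
import img2pdf

paths = sorted(Path("./Final").glob("*.png"))  # lexicographic order of file names
with open("Stickers.pdf", "wb") as f:
    f.write(img2pdf.convert(*paths))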
71,400,057 | 2022-3-8 | https://stackoverflow.com/questions/71400057/parse-list-of-pydantic-classes-from-list-of-strings | I'd like to parse a Polygon with a list of Points out of the following data: {"points": ["0|0", "1|0", "1|1"]} I naively thought I could do something like this: from pydantic import BaseModel, validator class Point(BaseModel): x: int y: int @validator("x", "y", pre=True) def get_coords(cls, value, values): x, y = value.split("|") values["x"] = x values["y"] = y class Polygon(BaseModel): points: list[Point] But when I try and parse my "JSON" string I get an error complaining that value is not a valid dict: >>> Polygon.parse_obj({"points": ["0|0", "1|0", "1|1"]}) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pydantic/main.py", line 511, in pydantic.main.BaseModel.parse_obj File "pydantic/main.py", line 331, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 3 validation errors for Polygon points -> 0 value is not a valid dict (type=type_error.dict) points -> 1 value is not a valid dict (type=type_error.dict) points -> 2 value is not a valid dict (type=type_error.dict) How can I parse interesting objects out of this dull list of strings? | In Pydantic v2.0 it is possible to directly create custom conversions from arbitrary data to a BaseModel. This allows to define the conversion once for the specific BaseModel to automatically make containing classes support the conversion. The trick is to use a @model_validator(mode="before") to parse input before creating the model: from pydantic import BaseModel, model_validator class Point(BaseModel): x: int y: int @model_validator(mode="before") @classmethod def from_literal(cls, data: Any) -> Any: """Automatically parse 'x|y' literals""" if isinstance(data, str): # optionally validate literal assert data.count("|") == 1, "literal requires one | separator" x, y = data.split("|") # field type conversion can be handled by pydantic return dict(x=x, y=y) return data Notably, we only have to parse the input into fields that Pydantic can convert to the final field type. This also allows to combine multiple models that define their own literal syntax. In addition, we can optionally validate the literal using the usual Pydantic means of asserting assumptions. Also, we should pass through other data to still support instances and data dictionaries. This definition is sufficient to parse even nested literals automatically, while still accepting explicit instances and data: # these create the same objects print(Polygon.model_validate({"points": ["0|0", "1|0", "1|1"]})) print(Polygon.model_validate({"points": ["0|0", Point(x=1, y=0), {"x": 1, "y": 1}]})) # validation rejects the invalid literal print(Polygon.model_validate({"points": ["0|0", "1|0", "1|1|1"]})) model_validate replaces parse_obj in Pydantic v2.0. However, this still works with parse_obj. | 5 | 5 |
71,320,201 | 2022-3-2 | https://stackoverflow.com/questions/71320201/how-to-fix-random-seed-for-bertopic | I'd like to fix the random seed of the BERTopic library to get reproducible results. Looking at the code of BERTopic I see it uses numpy. Will using np.random.seed(123) be enough, or do I also need to seed other libraries such as random or pytorch, as in this question? | You can fix the random_state variable by supplying your own UMAP model, but you then also have to pass the other default parameters to the UMAP constructor or the model will break. What this looks like in practice is: umap = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric='cosine', low_memory=False, random_state=1337) model = BERTopic(language="multilingual", umap_model=umap) topics, probs = model.fit_transform(content) By default, umap_model is set to None in the BERTopic constructor. Internally, if that is not provided, it sets one up with default params here in the code. | 7 | 5 |
71,344,134 | 2022-3-3 | https://stackoverflow.com/questions/71344134/how-to-list-files-from-a-s3-bucket-folder-using-python | I tried to list all files in a bucket. Here is my code import boto3 s3 = boto3.resource('s3') my_bucket = s3.Bucket('my_project') for my_bucket_object in my_bucket.objects.all(): print(my_bucket_object.key) it works. I get all files' names. However, when I tried to do the same thing on a folder, the code raise an error import boto3 s3 = boto3.resource('s3') my_bucket = s3.Bucket('my_project/data/') # add the folder name for my_bucket_object in my_bucket.objects.all(): print(my_bucket_object.key) Here is the error: botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid bucket name "carlos-cryptocurrency-research-project/data/": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:(s3|s3-object-lambda):[a-z\-0-9]*:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-.]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$" I'm sure the folder name is correct and I tried replacing it with Amazon Resource Name (ARN) and S3 URI, but still get the error. | You can't indicate a prefix/folder in the Bucket constructor. Instead use the client-level API and call list_objects_v2 something like this: import boto3 client = boto3.client('s3') response = client.list_objects_v2( Bucket='my_bucket', Prefix='data/') for content in response.get('Contents', []): print(content['Key']) Note that this will yield at most 1000 S3 objects. You can use a paginator if needed, or consider using the higher-level Bucket resource and its objects collection which handles pagination for you, per another answer to this question. | 11 | 19 |
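For buckets with more than 1000 matching keys, a hedged sketch of the paginator route mentioned in the answer:

import boto3

client = boto3.client('s3')
paginator = client.get_paginator('list_objects_v2')

# The paginator follows continuation tokens for you, so every key under the
# prefix is yielded even past the 1000-object limit of a single call.
for page in paginator.paginate(Bucket='my_bucket', Prefix='data/'):
    for content in page.get('Contents', []):
        print(content['Key'])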