question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
69,132,981 | 2021-9-10 | https://stackoverflow.com/questions/69132981/how-to-make-jupyter-notebook-python-help-function-output-colorful | I am new to Jupyter notebook and trying to see some help about the functions. For example, when I print the help of statsmodels.OLS I got the following plain black and white help. Are there any Python modules that colorize/beautify the help outputs? For example: highlight parameter names, highlight code examples with Python syntax highlighting, and so on. If there are no such modules, what would be the starting point to colorize the parameters and the Python code? The example output of help is given below: | You can try to beautify the help using the rich library (in Jupyter, you can install it using the command !pip install rich). In particular, you could study the inspect method. For example, with the following code: from rich import inspect inspect(sm.OLS, help=True) I get this output: | 7 | 4 |
69,159,775 | 2021-9-13 | https://stackoverflow.com/questions/69159775/gtk-python-window-symbolic-icon-color-problem | I have a GTK3 GUI loaded from a simple Python 3 script. The icon is located in the /usr/share/icons/hicolor/scalable/actions/ directory. My current theme color is dark and icons look white. When I switch to the white system theme, GUI icons turn black. But in my code the icon appears black instead of white when the dark theme is activated. It works when I choose the icon name (icon-symbolic) from the Glade program and save the UI file. The icon file is a simple black square .svg file (drawn in Inkscape). What is the solution for that? OS: Debian-like Linux, Python 3, GTK 3.24 Simple Python code: import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, GdkPixbuf builder = Gtk.Builder() builder.add_from_file('test.ui') window1 = builder.get_object('window1') button1 = builder.get_object('button1') class Signals: def on_window1_destroy(self, widget): Gtk.main_quit() builder.connect_signals(Signals()) window1.set_icon_name("icon-symbolic") window1.show_all() Gtk.main() Simple UI file: <?xml version="1.0" encoding="UTF-8"?> <!-- Generated with glade 3.38.2 --> <interface> <requires lib="gtk+" version="3.20"/> <object class="GtkWindow" id="window1"> <property name="can-focus">False</property> <property name="default-width">300</property> <property name="default-height">300</property> <child> <!-- n-columns=1 n-rows=1 --> <object class="GtkGrid"> <property name="visible">True</property> <property name="can-focus">False</property> <child> <object class="GtkButton" id="button1"> <property name="label" translatable="yes">Button 1</property> <property name="visible">True</property> <property name="can-focus">True</property> <property name="receives-default">True</property> </object> <packing> <property name="left-attach">0</property> <property name="top-attach">0</property> </packing> </child> </object> </child> </object> </interface> | I have found a solution for automatically changing the icon color on the window title bar. I used Gtk.HeaderBar instead of the default window title bar and added a Gtk.Image (its name is image_headerbar) to the left of the headerbar. Finally I set the image icon using the following code and it worked: image_headerbar.set_from_icon_name("icon-symbolic", -1) The icon color changes to dark/white automatically when the system theme changes to white/dark. I have tried several methods for dynamically changing the icon color on the window title bar, but none of them worked if Gtk.HeaderBar is not used. Also, the window title bar is a bit taller than default window title bars when Gtk.HeaderBar is used (tested on the XFCE desktop environment). | 6 | 2 |
69,154,359 | 2021-9-12 | https://stackoverflow.com/questions/69154359/guvectorize-not-resolving-types-in-nopython-mode | I'm struggling with a numba error Untyped global name 'is_a_subset': Cannot determine Numba type of <class 'numba.np.ufunc.gufunc.GUFunc'> This usually means I have fumbled and used a method that isn't supported by numba. The following code fails. @guvectorize("(n),(n)->(n)",nopython=True) def is_a_subset(x,y,out): out[:]=np.array([item in x for item in y]) @njit() def test(x,y,z): is_a_subset(x,y,z) return z.mean() x=np.array([[1,2,3],[3,2,1]]) y=np.array([[3,6,1],[1,2,3]]) z = np.empty_like(x) test(x,y,z) However, removing njit on the test function makes everything work. def test(x,y,z): is_a_subset(x,y,z) return z.mean() Why is numba struggling to resolve types when in no-python mode? I also tried the following, with no different results: @guvectorize(["f8[:],f8[:],f8[:]"],"(n),(n)->(n)",nopython=True) def is_a_subset(x,y,out): out[:]=np.array([item in x for item in y]) | I am using Numba 0.53.1 and can replicate this error. This blog on the dynamic dispatch update to guvectorize in Numba 0.53 mentions this at the end (emphasis added): In the future we would like to bring the @guvectorize capabilities closer to the @vectorize ones. For instance, currently it is not possible to call a guvectorize function from a jitted (@jit) function. There is a similar open issue with vectorize, but it demonstrates that @vectorize functions can be called in @jit functions, just that it's restricted to the default target = "cpu". | 9 | 4 |
69,115,825 | 2021-9-9 | https://stackoverflow.com/questions/69115825/remove-white-borders-from-segmented-images | I am trying to segment lung CT images using Kmeans with the code below: def process_mask(mask): convex_mask = np.copy(mask) for i_layer in range(convex_mask.shape[0]): mask1 = np.ascontiguousarray(mask[i_layer]) if np.sum(mask1)>0: mask2 = convex_hull_image(mask1) if np.sum(mask2)>2*np.sum(mask1): mask2 = mask1 else: mask2 = mask1 convex_mask[i_layer] = mask2 struct = generate_binary_structure(3,1) dilatedMask = binary_dilation(convex_mask,structure=struct,iterations=10) return dilatedMask def lumTrans(img): lungwin = np.array([-1200.,600.]) newimg = (img-lungwin[0])/(lungwin[1]-lungwin[0]) newimg[newimg<0]=0 newimg[newimg>1]=1 newimg = (newimg*255).astype('uint8') return newimg def lungSeg(imgs_to_process,output,name): if os.path.exists(output+'/'+name+'_clean.npy') : return imgs_to_process = Image.open(imgs_to_process) img_to_save = imgs_to_process.copy() img_to_save = np.asarray(img_to_save).astype('uint8') imgs_to_process = lumTrans(imgs_to_process) imgs_to_process = np.expand_dims(imgs_to_process, axis=0) x,y,z = imgs_to_process.shape img_array = imgs_to_process.copy() A1 = int(y/(512./100)) A2 = int(y/(512./400)) A3 = int(y/(512./475)) A4 = int(y/(512./40)) A5 = int(y/(512./470)) for i in range(len(imgs_to_process)): img = imgs_to_process[i] print(img.shape) x,y = img.shape #Standardize the pixel values allmean = np.mean(img) allstd = np.std(img) img = img-allmean img = img/allstd # Find the average pixel value near the lungs # to renormalize washed out images middle = img[A1:A2,A1:A2] mean = np.mean(middle) max = np.max(img) min = np.min(img) kmeans = KMeans(n_clusters=2).fit(np.reshape(middle,[np.prod(middle.shape),1])) centers = sorted(kmeans.cluster_centers_.flatten()) threshold = np.mean(centers) thresh_img = np.where(img<threshold,1.0,0.0) # threshold the image eroded = morphology.erosion(thresh_img,np.ones([4,4])) dilation = morphology.dilation(eroded,np.ones([10,10])) labels = measure.label(dilation) label_vals = np.unique(labels) regions = measure.regionprops(labels) good_labels = [] for prop in regions: B = prop.bbox if B[2]-B[0]<A3 and B[3]-B[1]<A3 and B[0]>A4 and B[2]<A5: good_labels.append(prop.label) mask = np.ndarray([x,y],dtype=np.int8) mask[:] = 0 for N in good_labels: mask = mask + np.where(labels==N,1,0) mask = morphology.dilation(mask,np.ones([10,10])) # one last dilation imgs_to_process[i] = mask m1 = imgs_to_process convex_mask = m1 dm1 = process_mask(m1) dilatedMask = dm1 Mask = m1 extramask = dilatedMask ^ Mask bone_thresh = 180 pad_value = 0 img_array[np.isnan(img_array)]=-2000 sliceim = img_array sliceim = sliceim*dilatedMask+pad_value*(1-dilatedMask).astype('uint8') bones = sliceim*extramask>bone_thresh sliceim[bones] = pad_value x,y,z = sliceim.shape if not os.path.exists(output): os.makedirs(output) img_to_save[sliceim.squeeze()==0] = 0 im = Image.fromarray(img_to_save) im.save(output + name + '.png', 'PNG') The problem is the segmented lung still contains white borders like this: Segmented lung (output): Unsegmented lung (input): The full code can be found in this Google Colab Notebook: code. And a sample of the dataset is here. | For this problem, I don't recommend using Kmeans color quantization since this technique is usually reserved for a situation where there are various colors and you want to segment them into dominant color blocks. Take a look at this previous answer for a typical use case.
Since your CT scan images are grayscale, Kmeans would not perform very well. Here's a potential solution using simple image processing with OpenCV: Obtain binary image. Load input image, convert to grayscale, Otsu's threshold, and find contours. Create a blank mask to extract desired objects. We can use np.zeros() to create an empty mask with the same size as the input image. Filter contours using contour area and aspect ratio. We search for the lung objects by ensuring that contours are within a specified area threshold as well as aspect ratio. We use cv2.contourArea(), cv2.arcLength(), and cv2.approxPolyDP() for contour perimeter and contour shape approximation. If we have found our lung object, we utilize cv2.drawContours() to fill in our mask with white to represent the objects that we want to extract. Bitwise-and mask with original image. Finally we convert the mask to grayscale and bitwise-and with cv2.bitwise_and() to obtain our result. Here is our image processing pipeline visualized step-by-step: Grayscale -> Otsu's threshold Detected objects to extract highlighted in green -> Filled mask Bitwise-and to get our result -> Optional result with white background instead Code import cv2 import numpy as np image = cv2.imread('1.png') highlight = image.copy() original = image.copy() # Convert image to grayscale, Otsu's threshold, and find contours gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] # Create black mask to extract desired objects mask = np.zeros(image.shape, dtype=np.uint8) # Search for objects by filtering using contour area and aspect ratio for c in contours: # Contour area area = cv2.contourArea(c) # Contour perimeter peri = cv2.arcLength(c, True) # Contour approximation approx = cv2.approxPolyDP(c, 0.035 * peri, True) (x, y, w, h) = cv2.boundingRect(approx) aspect_ratio = w / float(h) # Draw filled contour onto mask if it passes the filter # These are arbitrary values, may need to change depending on input image if aspect_ratio <= 1.2 or area < 5000: cv2.drawContours(highlight, [c], 0, (0,255,0), -1) cv2.drawContours(mask, [c], 0, (255,255,255), -1) # Convert 3-channel mask to grayscale then bitwise-and with original image for result mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY) result = cv2.bitwise_and(original, original, mask=mask) # Uncomment if you want background to be white instead of black # result[mask==0] = (255,255,255) # Display cv2.imshow('gray', gray) cv2.imshow('thresh', thresh) cv2.imshow('highlight', highlight) cv2.imshow('mask', mask) cv2.imshow('result', result) # Save images # cv2.imwrite('gray.png', gray) # cv2.imwrite('thresh.png', thresh) # cv2.imwrite('highlight.png', highlight) # cv2.imwrite('mask.png', mask) # cv2.imwrite('result.png', result) cv2.waitKey(0) | 17 | 11 |
69,125,666 | 2021-9-9 | https://stackoverflow.com/questions/69125666/merge-two-pandas-dataframe-based-on-partial-match | Two DataFrames have city names that are not formatted the same way. I'd like to do a Left-outer join and pull the geo field for all partial string matches between the field City in both DataFrames. import pandas as pd df1 = pd.DataFrame({ 'City': ['San Francisco, CA','Oakland, CA'], 'Val': [1,2] }) df2 = pd.DataFrame({ 'City': ['San Francisco-Oakland, CA','Salinas, CA'], 'Geo': ['geo1','geo2'] }) Expected DataFrame upon join: City Val Geo San Francisco, CA 1 geo1 Oakland, CA 2 geo1 | Update: the fuzzywuzzy project has been renamed to thefuzz and moved here. You can use the thefuzz package and the function extractOne: # Python env: pip install thefuzz # Anaconda env: pip install thefuzz # -> thefuzz is not yet available on Anaconda (2021-09-18) # -> you can use the old package: conda install -c conda-forge fuzzywuzzy from thefuzz import process best_city = lambda x: process.extractOne(x, df2["City"])[2] # See note below df1['Geo'] = df2.loc[df1["City"].map(best_city).values, 'Geo'].values Output: >>> df1 City Val Geo 0 San Francisco, CA 1 geo1 1 Oakland, CA 2 geo1 Note: extractOne returns a tuple of 3 values from the best match: the City name from df2 [0], the accuracy score [1] and the index [2] (<- the one I use). | 16 | 17 |
69,181,078 | 2021-9-14 | https://stackoverflow.com/questions/69181078/spacy-how-do-you-add-custom-ner-labels-to-a-pre-trained-model | I am new to SpaCy and NLP. I am using SpaCy v 3.1 and Python 3.9.7 64-bit. My objective: to use a pre-trained SpaCy model (en_core_web_sm) and add a set of custom labels to the existing NER labels (GPE, PERSON, MONEY, etc.) so that the model can recognize both the default AND the custom entities. I've looked at the SpaCy documentation and what I need seems to be an EntityRecogniser, specifically a new pipe. However, it is not really clear to me at what point in my workflow I should add this new pipe, since in SpaCy 3 the training happens in CLI, and from the docs it's not even clear to me where the pre-trained model is called. Any tutorials or pointers you might have are highly appreciated. This is what I think should be done, but I am not sure how: import spacy from spacy import displacy from spacy_langdetect import LanguageDetector from spacy.language import Language from spacy.pipeline import EntityRecognizer # Load model nlp = spacy.load("en_core_web_sm") # Register custom component and turn a simple function into a pipeline component @Language.factory('new-ner') def create_bespoke_ner(nlp, name): # Train the new pipeline with custom labels here?? return LanguageDetector() # Add custom pipe custom = nlp.add_pipe("new-ner") This is what my config file looks like so far. I suspect my new pipe needs to go next to "tok2vec" and "ner". [paths] train = null dev = null vectors = null init_tok2vec = null [system] gpu_allocator = null seed = 0 [nlp] lang = "en" pipeline = ["tok2vec","ner"] batch_size = 1000 disabled = [] before_creation = null after_creation = null after_pipeline_creation = null tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"} [components] [components.ner] factory = "ner" incorrect_spans_key = null moves = null update_with_oracle_cut_size = 100 | For Spacy 3.2 I did it this way: import spacy import random from spacy import util from spacy.tokens import Doc from spacy.training import Example from spacy.language import Language def print_doc_entities(_doc: Doc): if _doc.ents: for _ent in _doc.ents: print(f" {_ent.text} {_ent.label_}") else: print(" NONE") def customizing_pipeline_component(nlp: Language): # NOTE: Starting from Spacy 3.0, training via Python API was changed. For information see - https://spacy.io/usage/v3#migrating-training-python train_data = [ ('We need to deliver it to Festy.', [(25, 30, 'DISTRICT')]), ('I like red oranges', []) ] # Result before training print(f"\nResult BEFORE training:") doc = nlp(u'I need a taxi to Festy.') print_doc_entities(doc) # Disable all pipe components except 'ner' disabled_pipes = [] for pipe_name in nlp.pipe_names: if pipe_name != 'ner': nlp.disable_pipes(pipe_name) disabled_pipes.append(pipe_name) print(" Training ...") optimizer = nlp.create_optimizer() for _ in range(25): random.shuffle(train_data) for raw_text, entity_offsets in train_data: doc = nlp.make_doc(raw_text) example = Example.from_dict(doc, {"entities": entity_offsets}) nlp.update([example], sgd=optimizer) # Enable all previously disabled pipe components for pipe_name in disabled_pipes: nlp.enable_pipe(pipe_name) # Result after training print(f"Result AFTER training:") doc = nlp(u'I need a taxi to Festy.') print_doc_entities(doc) def main(): nlp = spacy.load('en_core_web_sm') customizing_pipeline_component(nlp) if __name__ == '__main__': main() | 12 | 12 |
69,175,352 | 2021-9-14 | https://stackoverflow.com/questions/69175352/why-does-my-jupyterlab-cell-turn-orange-with-every-new-edit-or-when-i-type-in-it | I recently installed Cron via jupyterlab_scheduler in the anaconda extensions in a conda environment I usually work in. This was to schedule my jupyterlab notebooks. However, there was a problem with the application and so I deleted it, though it seems to have left some of its features, like turning the cell orange and leaving an asterisk to the left of the cell number. The picture below demonstrates this: I created a new environment, though it seems to still be affecting other environments. Is there any reason why this is still happening? It's a problem because previously, when I used to undo (ctrl + z), it used to undo everything in the cell and only the cell in question. But now it undoes everything across all cells. This is a problem for me as it changes the overall code I am working with. Any idea how to rectify this? | As explained in the JupyterLab 3.1 changelog, specifically the user-facing changes section, a new visual indicator was introduced to highlight cells in which the code changed in the editor since last execution: The indicator is currently implemented by changing the cell collapser and the cell execution counter color to orange, and adding a filled orange circle icon left of the execution counter. Hopefully, this will improve the situational awareness of the users and lead to more consistent state of the notebooks on save. If you come to like this solution you may be interested in using nbsafety, which takes it a step further by actually analysing the dependencies and preventing out-of-order execution. | 7 | 8 |
69,125,173 | 2021-9-9 | https://stackoverflow.com/questions/69125173/accuracy-in-calculating-fourth-derivative-using-finite-differences-in-tensorflow | I am writing a small code to calculate the fourth derivative using the method of finite differences in tensorflow. This is as follows: def action(y,x): #spacing between points. h = (x[-1] - x[0]) / (int(x.shape[0]) - 1) #fourth derivative dy4 = (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4])/(h*h*h*h) return dy4 x = tf.linspace(0.0, 30, 1000) y = tf.tanh(x) dy4 = action(y,x) sess = tf.compat.v1.Session() plt.plot(sess.run(dy4)) This results in the following graph: However if I use essentially the same code but just using numpy, the results are much cleaner: def fourth_deriv(y, x): h = (x[-1] - x[0]) / (int(x.shape[0]) - 1) dy = (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4])/(h*h*h*h) return dy x = np.linspace(0.0, 30, 1000) test = fourth_deriv(np.tanh(x), x) plt.plot(test) Which gives: What is the issue here? I was thinking at first that the separation between points could be too small to give an accurate computation, but clearly, that's not the case if numpy can handle it fine. | The issue is related to the choice of floating-point types. tf.linspace automatically selects tf.float32 as its type, while np.linspace creates a float64 array, which has much more precision. Making the following modification: start = tf.constant(0.0, dtype = tf.float64) end = tf.constant(30.0, dtype = tf.float64) x = tf.linspace(start, end, 1000) causes a smooth plot to appear: It's worth noting further that Tensorflow does include an automatic differentiation, which is crucial for machine learning training and is hence well-tested - you can use gradient tapes to access it and evaluate a fourth derivative without the imprecision of numeric differentiation using finite differences: with tf.compat.v1.Session() as sess2: x = tf.Variable(tf.linspace(0, 30, 1000)) sess2.run(tf.compat.v1.initialize_all_variables()) with tf.GradientTape() as t4: with tf.GradientTape() as t3: with tf.GradientTape() as t2: with tf.GradientTape() as t1: y = tf.tanh(x) der1 = t1.gradient(y, x) der2 = t2.gradient(der1, x) der3 = t3.gradient(der2, x) der4 = t4.gradient(der3, x) print(der4) plt.plot(sess2.run(der4)) The accuracy of this method is far better than can be achieved using finite difference methods. The following code compares the accuracy of auto diff with the accuracy of the finite difference method: x = np.linspace(0.0, 30, 1000) sech = 1/np.cosh(x) theoretical = 16*np.tanh(x) * np.power(sech, 4) - 8*np.power(np.tanh(x), 3)*np.power(sech,2) finite_diff_err = theoretical[2:-2] - from_finite_diff autodiff_err = theoretical[2:-2] - from_autodiff[2:-2] print('Max err with autodiff: %s' % np.max(np.abs(autodiff_err))) print('Max err with finite difference: %s' % np.max(np.abs(finite_diff_err))) line, = plt.plot(np.log10(np.abs(autodiff_err))) line.set_label('Autodiff log error') line2, = plt.plot(np.log10(np.abs(finite_diff_err))) line2.set_label('Finite difference log error') plt.legend() and yields the following output: Max err with autodiff: 3.1086244689504383e-15 Max err with a finite difference: 0.007830900165363808 and the following plot (the two lines overlap after around 600 on the X-axis): | 10 | 10 |
69,184,212 | 2021-9-14 | https://stackoverflow.com/questions/69184212/how-to-enumerate-combinations-filtering-repeats | I have a list of possible choices: [[1], [2, 4], [4], [5, 6, 2], [5, 3]]. I want to list all combinations, taking at most one element from each sublist, without repeating elements. So [1, 2, 4, 5, 3] is a valid option. But [1, 4, 4, 5, 3] is not. I allow not making a choice in any sublist, so [1, 4, None, 5, 3] is valid, as in [1, None, None, None, None], and [None, None, None, None, None]. I can't simply enumerate all combinations then filter out the ones I don't want, since for a large list of possible choices, it would quickly become computationally infeasible (I'm looking at 25^25 maximum combinations in my project). edit: I would also apply some additional criteria to the results, such as filtering to have no more than a threshold of None choices, or sorting the resultant list of combinations in order of combinations with fewest None choices. edit: with details of the real-life case: I'd like to apply it to a list of 25 sublists, each of which can have 1-25 elements. Realistically, each sublist will have max 15 elements, with 2-4 on average. So the easy solution of list(itertools.product(*choices)) then filtering is out. I may also wish to add other filter conditions to the list of combinations, so ideally I can filter these upfront. I've tried building a tree recursively, where e.g. the root node has the full list of choices, a child node makes the first choice [1], and has an updated list of choices where '1' is removed from all list[1:] choices. Struggling to implement the recursion though. Can you help me with any other approaches? | Another way to generate all valid outputs with minimal memory usage is to iterate over the elements rather than over the lists. Use a Depth-First search so that you only generate valid outputs from the start. This means that we need to track three things in each level of our DFS: the current element to maybe add, the lists that are already used, and the order that we used those previous lists in. To help with our search, we preprocess choices by mapping every element to a set of the possible lists it can be in, which creates a sort of dual version of the problem. This also generates the lists in roughly 'maximum non-empty choices first' order. Since you've specified that the number of unique elements, N, is equal to the number of lists, this approach runs in O(N * |output|) time, and we use generators everywhere to save memory.
import collections from typing import Dict, Generator, List, Optional, Set choices = [[1], [2, 4], [4], [5, 6, 2], [5, 3]] element_to_containing_lists: Dict[int, Set[int]] = collections.defaultdict(set) for i, choice_list in enumerate(choices): for x in choice_list: element_to_containing_lists[x].add(i) all_unique_elements = sorted(element_to_containing_lists) def dfs(used_list_indices: Set[int], next_element_index: int, used_list_ordered_indices: List[Optional[int]]) -> Generator[List[Optional[int]]]: if next_element_index == len(all_unique_elements): yield used_list_ordered_indices else: # If we use the element, find an unused list index for possible_list_to_use in element_to_containing_lists[ all_unique_elements[next_element_index]] - used_list_indices: yield from dfs(used_list_indices | {possible_list_to_use}, next_element_index + 1, used_list_ordered_indices + [possible_list_to_use]) # If we don't use the element: Add None as a sentinel value yield from dfs(used_list_indices, next_element_index + 1, used_list_ordered_indices + [None]) for element_to_used_list in dfs(set(), 0, []): list_to_chosen_element = ['N'] * len(choices) for x, y in zip(all_unique_elements, element_to_used_list): if y is not None: list_to_chosen_element[y] = x print(*list_to_chosen_element, sep=' ') First 10 lines of the output: 1 2 4 5 3 1 2 4 6 3 1 2 4 N 3 1 2 N 5 3 1 2 N 6 3 1 2 N N 3 1 2 4 5 N 1 2 4 6 5 1 2 4 N 5 This can possibly be optimized to run in O(|output|) time by using a bitmask for 'used lists' rather than a set of their indices. | 5 | 6 |
69,148,495 | 2021-9-12 | https://stackoverflow.com/questions/69148495/typeerror-import-optional-dependency-got-an-unexpected-keyword-argument-erro | I am trying to work with Featuretools to develop an automated feature engineering workflow for the customer churn dataset. The end outcome is a function that takes in a dataset and label times for customers and builds a feature matrix that can be used to train a machine learning model. As part of this exercise I am trying to execute the below code for plotting a histogram and got "TypeError: import_optional_dependency() got an unexpected keyword argument 'errors' ". Please help resolve this TypeError. import matplotlib.pyplot as plt %matplotlib inline plt.style.use('fivethirtyeight') plt.rcParams['figure.figsize'] = (10, 6) trans.loc[trans['actual_amount_paid'] < 250, 'actual_amount_paid'].dropna().plot.hist(bins = 30) plt.title('Distribution of Actual Amount Paid') Below is the full error I received: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-32-7e19affd5fc1> in <module> 4 plt.rcParams['figure.figsize'] = (10, 6) 5 ----> 6 trans.loc[trans['actual_amount_paid'] < 250, 'actual_amount_paid'].dropna().plot.hist(bins = 30) 7 plt.title('Distribution of Actual Amount Paid') ~\anaconda3\lib\site-packages\pandas\core\ops\common.py in new_method(self, other) 63 break 64 if isinstance(other, cls): ---> 65 return NotImplemented 66 67 other = item_from_zerodim(other) ~\anaconda3\lib\site-packages\pandas\core\arraylike.py in __lt__(self, other) 35 def __ne__(self, other): 36 return self._cmp_method(other, operator.ne) ---> 37 38 @unpack_zerodim_and_defer("__lt__") 39 def __lt__(self, other): ~\anaconda3\lib\site-packages\pandas\core\series.py in _cmp_method(self, other, op) 4937 -------- 4938 >>> s = pd.Series(range(3)) -> 4939 >>> s.memory_usage() 4940 152 4941 ~\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in comparison_op(left, right, op) 248 lvalues = ensure_wrapped_if_datetimelike(left) 249 rvalues = ensure_wrapped_if_datetimelike(right) --> 250 251 rvalues = lib.item_from_zerodim(rvalues) 252 if isinstance(rvalues, list): ~\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in _na_arithmetic_op(left, right, op, is_cmp) 137 138 def _na_arithmetic_op(left, right, op, is_cmp: bool = False): --> 139 140 Return the result of evaluating op on the passed in values. 141 ~\anaconda3\lib\site-packages\pandas\core\computation\expressions.py in <module> 17 from pandas._typing import FuncType 18 ---> 19 from pandas.core.computation.check import NUMEXPR_INSTALLED 20 from pandas.core.ops import roperator 21 ~\anaconda3\lib\site-packages\pandas\core\computation\check.py in <module> 1 from pandas.compat._optional import import_optional_dependency 2 ----> 3 ne = import_optional_dependency("numexpr", errors="warn") 4 NUMEXPR_INSTALLED = ne is not None 5 if NUMEXPR_INSTALLED: TypeError: import_optional_dependency() got an unexpected keyword argument 'errors' | Try to upgrade pandas: pip install pandas --upgrade | 9 | 9 |
69,172,994 | 2021-9-14 | https://stackoverflow.com/questions/69172994/spark-submit-options-for-gcs-connector-to-access-google-storage | I am running a Spark job on a self-managed cluster (a local-like environment) while accessing buckets on Google Storage. ❯ spark-submit --version Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 3.1.2 /_/ Using Scala version 2.12.10, OpenJDK 64-Bit Server VM, 1.8.0_292 Branch HEAD Compiled by user centos on 2021-05-24T04:27:48Z Revision de351e30a90dd988b133b3d00fa6218bfcaba8b8 Url https://github.com/apache/spark Type --help for more information. If I run the job with the following command using a locally downloaded gcs-connector, it finishes successfully. spark-submit\ --name CreateAllDataDFWithSpark\ --jars ./gcs-connector-hadoop3-2.2.2.jar\ --packages org.apache.spark:spark-avro_2.12:3.1.2\ --conf spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem\ <path_to>/.cache/pypoetry/virtualenvs/poc-TZFypELR-py3.7/lib/python3.7/site-packages/luigi/contrib/pyspark_runner.py\ /tmp/CreateAllDataDFWithSpark78itslb5/CreateAllDataDFWithSpark.pickle On the other hand, if I run the job without downloading gcs-connector beforehand as below, spark-submit\ --name CreateAllDataDFWithSpark\ --packages org.apache.spark:spark-avro_2.12:3.1.2,com.google.cloud.bigdataoss:gcs-connector:hadoop3-2.2.2\ --conf spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem\ --conf spark.hadoop.fs.AbstractFileSystem.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS\ <path_to>/.cache/pypoetry/virtualenvs/poc-TZFypELR-py3.7/lib/python3.7/site-packages/luigi/contrib/pyspark_runner.py\ /tmp/CreateAllDataDFWithSpark1gf54xue/CreateAllDataDFWithSpark.pickle it gives the following error. ... py4j.protocol.Py4JJavaError: An error occurred while calling o31.load.
: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:135) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3302) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46) at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:377) at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325) at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:239) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) ... 24 more Caused by: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;JJ)V at com.google.cloud.hadoop.gcsio.cooplock.CooperativeLockingOptions$Builder.build(CooperativeLockingOptions.java:58) at com.google.cloud.hadoop.gcsio.cooplock.CooperativeLockingOptions.<clinit>(CooperativeLockingOptions.java:33) at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemConfiguration.<clinit>(GoogleHadoopFileSystemConfiguration.java:383) at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.<init>(GoogleHadoopFileSystemBase.java:246) at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.<init>(GoogleHadoopFileSystem.java:58) ... 29 more I do not understand why the second command does not work. I appreciate any suggestions or comments. Thanks! | As mentioned in the comments, this stems from a Guava version incompatibility between the GCS connector's dependency vs what you have bundled in your Spark distro. Specifically, the GCS connector hadoop3-2.2.2 depends on Guava 30.1-jre whereas Spark 3.1.2 brings Guava 14.0.1 as a "provided" dependency. 
In the two different commands, it was more-or-less luck of the draw that classpath loading happened in the right order for your first approach to work, and it could end up failing unexpectedly again when other jars are added. Ideally you'll want to host your own jarfile anyways to minimize runtime dependencies on external repositories (Maven repository), so pre-installing the jarfile is the right approach. When you do that, you should consider using the full shaded jarfile (also available on Maven central) instead of the minimal GCS connector jarfile to avoid classloading version issues in the future. | 6 | 5 |
69,152,016 | 2021-9-12 | https://stackoverflow.com/questions/69152016/cant-send-requests-through-socks5-proxy-with-python | I was trying to send http/https requests via a proxy (SOCKS5), but I can't understand if the problem is in my code or in the proxy. I tried using this code and it gives me an error: requests.exceptions.ConnectionError: SOCKSHTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x000001B656AC9608>: Failed to establish a new connection: Connection closed unexpectedly')) This is my code: import requests url = "https://www.google.com" proxies = { "http":"socks5://fsagsa:[email protected]:31112", "https":"socks5://fsagsa:[email protected]:31112", } headers = { "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9", "Sec-Gpc": "1", "Sec-Fetch-Site": "same-origin", "Sec-Fetch-User": "?1", "Accept-Encoding": "gzip, deflate, br", "Sec-Fetch-Mode": "navigate", "Sec-Fetch-Dest": "document", "Accept-Language": "en-GB,en;q=0.9" } r = requests.get(url, headers = headers, proxies = proxies) print(r) Then, I checked the proxy with an online tool. The tool manages to send requests through the proxy. So the problem is in this code? I can't figure out what's wrong. Edit (15/09/2021) I added headers but the problem is still there. | Create a local server/mock to handle the request using pytest or some other testing framework with the responses library to eliminate variables external to your application/script. I'm quite sure Google will reject requests with empty headers. Also, ensure you installed the correct dependencies to enable SOCKS proxy support in requests (python -m pip install requests[socks]). Furthermore, if you are making a remote request to connect to your proxy, you must change socks5 to socks5h in your proxies dictionary. References pytest: https://docs.pytest.org/en/6.2.x/ responses: https://github.com/getsentry/responses requests[socks]: https://docs.python-requests.org/en/master/user/advanced/#socks | 4 | 6 |
69,177,148 | 2021-9-14 | https://stackoverflow.com/questions/69177148/numpyic-way-to-sort-a-matrix-based-on-another-similar-matrix | Say I have a matrix Y of random float numbers from 0 to 10 with shape (10, 3): import numpy as np np.random.seed(99) Y = np.random.uniform(0, 10, (10, 3)) print(Y) Output: [[6.72278559 4.88078399 8.25495174] [0.31446388 8.08049963 5.6561742 ] [2.97622499 0.46695721 9.90627399] [0.06825733 7.69793028 7.46767101] [3.77438936 4.94147452 9.28948392] [3.95454044 9.73956297 5.24414715] [0.93613093 8.13308413 2.11686786] [5.54345785 2.92269116 8.1614236 ] [8.28042566 2.21577372 6.44834702] [0.95181622 4.11663239 0.96865261]] I am now given a matrix X with same shape that can be seen as obtained by adding small noises to Y and then shuffling the rows: X = np.random.normal(Y, scale=0.1) np.random.shuffle(X) print(X) Output: [[ 4.04067271 9.90959141 5.19126867] [ 5.59873104 2.84109306 8.11175891] [ 0.10743952 7.74620162 7.51100441] [ 3.60396019 4.91708372 9.07551354] [ 0.9400948 4.15448712 1.04187208] [ 2.91884302 0.47222752 10.12700505] [ 0.30995155 8.09263241 5.74876947] [ 1.11247872 8.02092335 1.99767444] [ 6.68543696 4.8345869 8.17330513] [ 8.38904822 2.11830619 6.42013343]] Now I want to sort the matrix X based on Y by row. I already know each pair of column values in each matching pair of rows are not different from each other more than a tolerance of 0.5. I managed to write the following code and it is working fine. def sort_X_by_Y(X, Y, tol): idxs = [next(i for i in range(len(X)) if all(abs(X[i] - row) <= tol)) for row in Y] return X[idxs] print(sort_X_by_Y(X, Y, tol=0.5)) Output: [[ 6.68543696 4.8345869 8.17330513] [ 0.30995155 8.09263241 5.74876947] [ 2.91884302 0.47222752 10.12700505] [ 0.10743952 7.74620162 7.51100441] [ 3.60396019 4.91708372 9.07551354] [ 4.04067271 9.90959141 5.19126867] [ 1.11247872 8.02092335 1.99767444] [ 5.59873104 2.84109306 8.11175891] [ 8.38904822 2.11830619 6.42013343] [ 0.9400948 4.15448712 1.04187208]] However, in reality I am sorting (1000, 3) matrices and my code is way too slow. I feel like there should be more numpyic way to code this. Any suggestions? | This is a vectorized version of your algorithm. It runs ~26.5x faster than your implementation for 1000 samples. But an additional boolean array with shape (1000,1000,3) is created. There is a chance that rows will have similar values within the tolerance and a wrong row is selected. tol = .5 X[(np.abs(Y[:, np.newaxis] - X) <= tol).all(2).argmax(1)] Output array([[ 6.68543696, 4.8345869 , 8.17330513], [ 0.30995155, 8.09263241, 5.74876947], [ 2.91884302, 0.47222752, 10.12700505], [ 0.10743952, 7.74620162, 7.51100441], [ 3.60396019, 4.91708372, 9.07551354], [ 4.04067271, 9.90959141, 5.19126867], [ 1.11247872, 8.02092335, 1.99767444], [ 5.59873104, 2.84109306, 8.11175891], [ 8.38904822, 2.11830619, 6.42013343], [ 0.9400948 , 4.15448712, 1.04187208]]) More robust solutions with L1-norm X[np.abs(Y[:, np.newaxis] - X).sum(2).argmin(1)] Or L2-norm X[((Y[:, np.newaxis] - X)**2).sum(2).argmin(1)] | 5 | 2 |
69,175,990 | 2021-9-14 | https://stackoverflow.com/questions/69175990/how-does-password-checking-in-bcrypt-work | So, I found the following example in bcrypt docs: password = b"super secret password" hashed = bcrypt.hashpw(password, bcrypt.gensalt()) if bcrypt.checkpw(password, hashed): print("It Matches!") else: print("It Does not Match :(") And it seems to work. But I don't understand how. Shouldn't we use salt to generate a hash for checking? I mean, we generated salt once and didn't save it in a variable. But then we want to compare the hash and the password with the function checkpw, but how does it know which salt to use to generate a hash for comparison? | The generated "hash" also contains the salt. It is in the Modular Crypt Format, documented here (thanks @Masklinn) $2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy |<--- salt --->||<---- confirmation hash ---->| The "2a" part gives information on the modular hash being used, "10" is the logarithmic cost parameter (i.e. the algorithm is to be iterated 210 times). So, to verify that a password matches, you'll restart the bcrypt using the decoding of N9qo8uLOickgx2ZMRZoMye as a salt. | 7 | 10 |
69,166,262 | 2021-9-13 | https://stackoverflow.com/questions/69166262/fastapi-adding-route-prefix-to-testclient | I have a FastAPI app with a route prefix as /api/v1. When I run the test it throws 404. I see this is because the TestClient is not able to find the route at /ping, and works perfectly when the route in the test case is changed to /api/v1/ping. Is there a way in which I can avoid changing all the routes in all the test functions as per the prefix? This seems to be cumbersome as there are many test cases, and also because I dont want to have a hard-coded dependency of the route prefix in my test cases. Is there a way in which I can configure the prefix in the TestClient just as we did in app, and simply mention the route just as mentioned in the routes.py? routes.py from fastapi import APIRouter router = APIRouter() @router.get("/ping") async def ping_check(): return {"msg": "pong"} main.py from fastapi import FastAPI from routes import router app = FastAPI() app.include_router(prefix="/api/v1") In the test file I have: test.py from main import app from fastapi.testclient import TestClient client = TestClient(app) def test_ping(): response = client.get("/ping") assert response.status_code == 200 assert response.json() == {"msg": "pong"} | Figured out a workaround for this. The TestClient has an option to accept a base_url, which is then urljoined with the route path. So I appended the route prefix to this base_url. source: url = urljoin(self.base_url, url) However, there is a catch to this - urljoin concatenates as expected only when the base_url ends with a / and the path does not start with a /. This SO answer explains it well. This resulted in the below change: test.py from main import app, ROUTE_PREFIX from fastapi.testclient import TestClient client = TestClient(app) client.base_url += ROUTE_PREFIX # adding prefix client.base_url = client.base_url.rstrip("/") + "/" # making sure we have 1 and only 1 `/` def test_ping(): response = client.get("ping") # notice the path no more begins with a `/` assert response.status_code == 200 assert response.json() == {"msg": "pong"} | 12 | 5 |
69,173,608 | 2021-9-14 | https://stackoverflow.com/questions/69173608/why-do-i-get-an-http-error-404-when-using-textblob | I am experiencing some problems using the TextBlob library. I'm trying to run a very simple piece of code like this: from textblob import TextBlob text = 'this is just a test' blob = TextBlob(text) blob.detect_language() And it continually gives me this error: /usr/lib/python3.7/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs) 647 class HTTPDefaultErrorHandler(BaseHandler): 648 def http_error_default(self, req, fp, code, msg, hdrs): --> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp) 650 651 class HTTPRedirectHandler(BaseHandler): HTTPError: HTTP Error 404: Not Found What is the problem? I have tried it on several devices and it gives me the same error every time. | Function dectect_language() sends a request to google translate service: http://translate.google.com/translate_a/t?client=webapp&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1&sl=auto&tk=276174.132528 and this url returns 404. From documentation on detect_language() Deprecated since version 0.16.0: Use the official Google Translate API instead. I wouldn't count on this function to be working in the future. | 8 | 11 |
69,172,802 | 2021-9-14 | https://stackoverflow.com/questions/69172802/clean-boto3-pagination | I am trying to find a very nice Python idiom to use AWS boto3 paginators in the most "pythonic" way. Below is the best I have been able to come up with and I'm still not happy with it. Any ideas on how to make pagination simpler, possibly not using while True:? import boto3 client = boto3.client('acm', region_name='ap-southeast-2') paginator = client.get_paginator('list_certificates') response_iterator = paginator.paginate() while True: for certificates in response_iterator: for certificate in certificates['CertificateSummaryList']: print(certificate) if response_iterator.resume_token: response_iterator = paginator.paginate( PaginationConfig={ 'StartingToken': response_iterator.resume_token }) else: break | Wouldn't the following form work? client = boto3.client('acm', region_name='ap-southeast-2') paginator = client.get_paginator('list_certificates') for page in paginator.paginate(): print(page) | 7 | 5 |
69,167,795 | 2021-9-13 | https://stackoverflow.com/questions/69167795/order-of-evaluation-of-assignment-expressions-walrus-operator | I have the following expression: >>> a = 3 >>> b = 2 >>> a == (a := b) False Now, a == 2 after the operation, as expected. And the result is what I would want, i.e., comparison of a to RHS of assignment before assignment. Reversing the order of the equality operator reverses the result: >>> a = 3 >>> b = 2 >>> (a := b) == a True There does not appear to be anything directly relevant to this corner case in PEP-572, relative precedence section. The next section, change to evaluation order mentions that the evaluation order is left-to-right. Is that what is going on here (stash the value of a, update it, and compare vs update a, then compare against its new value)? Where is this behavior defined, and how reliable is it? | Neither of those PEP sections have to do with this. You just have a == comparison, and the general Evaluation order applies: "Python evaluates expressions from left to right." So your (a := b) == a simply first evaluates the left side (a := b), assigning something to a and evaluating to the same value. And then evaluate the right side a, which is of course still the same (just-assigned) value, so you get True. About those PEP sections: What that first PEP section says is that := groups less tightly than ==, so it would apply if you didn't have parentheses: a == a := b would mean (a == a) := b (you'd get a syntax error for trying to assign to a comparison). a := b == a would mean a := (b == a), where with your values the b == a evaluates to False and that gets assigned to a and becomes the result of the entire expression. (Note that at statement-level, you'd have to write (a := b == a).) What that second PEP section does is just to point out something bad that had already existed but which the := made "more visible", so they suggested to finally fix it. The issue was that a dict comprehension like {X: Y for ...} evaluated Y before X, against the general left-to-right rule and against dict displays like {X: Y} which already did evaluate X before Y as expected. Consider this: >>> a, b = 3, 2 >>> {a: (a := b) for _ in '_'} {3: 2} With that old behavior, it would've resulted in {2: 2}. And given that people might write something like that when := became available, it became more of an issue. | 8 | 9 |
69,169,295 | 2021-9-13 | https://stackoverflow.com/questions/69169295/automatically-activating-conda-environment-in-integrated-terminal | I have an anaconda virtual environment that I wish to use. I am able to use Select Interpreter, which finds and allows me to accurately select said virtual environment. I am also able to use this with jupyter notebooks. What I am not able to do is have the integrated terminal automatically activate this environment. Every time I open a new terminal, I have: (base) PS C:\working_folder If I manually activate the environment in the integrated terminal, I am then able to use it. My issue is that I don't want to have to remember to manually activate it. Things I've tried: Setting "Python.terminal.activateEnvironment" to true Setting "Python.terminal.activateEnvInCurrentTerminal" to true Updating pythonPath in my working folder settings.json | Open Powershell and run as Administrator, execute the following command conda config --set auto_activate_base true Then restart VS Code, add the following in User Settings.json: "python.defaultInterpreterPath": "\\path\\to\\conda\\python.exe", "Python.terminal.activateEnvironment": true, "Python.terminal.activateEnvInCurrentTerminal": true Reload Window from Command Palette, select base:conda as python interpreter then press Ctrl+Shift+` to open a new integrated Terminal, conda environment should be activated automatically in it. | 4 | 8 |
69,164,379 | 2021-9-13 | https://stackoverflow.com/questions/69164379/how-to-find-the-index-of-the-point-closest-to-k-means-cluster-centers-using-skle | I have used python's sklearn package for K-means clustering. So far I am able to get the coordinates of the cluster centers using the following code. import numpy as np from sklearn.cluster import KMeans p50 = np.load('tsnep400.npy') kmeans = KMeans(n_clusters=50).fit(p50) np.savetxt('kmeans_50clusters_centers_tsnep400', kmeans.cluster_centers_, fmt='%1.3f') np.savetxt('kmeans_50clusters_tsnep400.dat', kmeans.labels_, fmt='%1.1d') centroids = {i: np.where(kmeans.labels_ == i)[0] for i in range(kmeans.n_clusters)} np.save('kmeans_50clusters_memebers_tsnep400.npy',centroids) How do I find the index of the point closest to cluster centers? | According to the scikit-learn documentation, the attribute .labels_ contains the labels of each point, by their index. Thus, you can use this to group each of your points into a cluster and then calculate the distance to each cluster center. You can use the following code for this: from scipy.spatial.distance import euclidean # Loop over all clusters and find index of closest point to the cluster center and append to closest_pt_idx list. closest_pt_idx = [] for iclust in range(kmeans.n_clusters): # get all points assigned to each cluster: cluster_pts = p50[kmeans.labels_ == iclust] # get all indices of points assigned to this cluster: cluster_pts_indices = np.where(kmeans.labels_ == iclust)[0] cluster_cen = kmeans.cluster_centers_[iclust] min_idx = np.argmin([euclidean(p50[idx], cluster_cen) for idx in cluster_pts_indices]) # Testing: print('closest point to cluster center: ', cluster_pts[min_idx]) print('closest index of point to cluster center: ', cluster_pts_indices[min_idx]) print(' ', p50[cluster_pts_indices[min_idx]]) closest_pt_idx.append(cluster_pts_indices[min_idx]) | 5 | 3 |
69,108,649 | 2021-9-8 | https://stackoverflow.com/questions/69108649/change-a-matplotlib-3d-figures-frames-into-x-y-and-z-arrows | Can one change the frames of a figure into arrows by superimposing arrows on top of the x, y and z axes to create the illusion of the axes being arrows, or perhaps directly change the settings of the frames (Matplotlib framing) in order to get the same outcome on a 3D plot, showing (x, y, z) with arrows? Turning this fig = plt.figure() ax = fig.add_subplot(111, projection='3d') # generate sample points and straight line z = np.repeat(0, 100) x = np.repeat(1.0, 100) y = np.linspace(start=3.0, stop=6.0, num=100) ax.plot(x, y, z, c='red') # draw straight line ax.view_init(45, -150) # angle to show # set axes limits and labels ax.set_xlabel(r"$x$"); ax.set_ylabel(r"$y$"); ax.set_zlabel(r"$z$") ax.set_xlim(0,1.1) ;ax.set_ylim(6,3) ;ax.set_zlim(0,1.75) # Remove tick marks ax.set_xticks([0,0.25,0.5,0.75,1]) ; ax.set_xticklabels(['0','1','2','4','T']) ax.set_yticks([6.0,5.5,5,4.5,4.0,3.5,3]) ; ax.set_yticklabels(["","","","","","",""]) ax.set_zticks([1.75,1.25,0.75,0.25]) ax.set_zticklabels(['','','','']) # change background colour to white ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) #plt.savefig("sample.png", type="png",dbi=400) # save image plt.tight_layout() plt.show() into something like this: | I don't usually use 3D graphs, and I did a lot of research to answer your question. Here's a great approach I found. I created a new Arrow3D class and implemented it. In your code, I added the class and added arrows to the x-, y-, and z-axes. I manually shifted their positions to align them on the axes. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d.proj3d import proj_transform from matplotlib.patches import FancyArrowPatch from mpl_toolkits.mplot3d import proj3d class Arrow3D(FancyArrowPatch): def __init__(self, x, y, z, dx, dy, dz, *args, **kwargs): super().__init__((0, 0), (0, 0), *args, **kwargs) self._xyz = (x, y, z) self._dxdydz = (dx, dy, dz) def draw(self, renderer): x1, y1, z1 = self._xyz dx, dy, dz = self._dxdydz x2, y2, z2 = (x1 + dx, y1 + dy, z1 + dz) xs, ys, zs = proj_transform((x1, x2), (y1, y2), (z1, z2), self.axes.M) self.set_positions((xs[0], ys[0]), (xs[1], ys[1])) super().draw(renderer) def _arrow3D(ax, x, y, z, dx, dy, dz, *args, **kwargs): '''Add a 3d arrow to an `Axes3D` instance.''' arrow = Arrow3D(x, y, z, dx, dy, dz, *args, **kwargs) ax.add_artist(arrow) setattr(Axes3D, 'arrow3D', _arrow3D) fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot(111, projection='3d') # generate sample points and straight line z = np.repeat(0, 100) x = np.repeat(1.0, 100) y = np.linspace(start=3.0, stop=6.0, num=100) ax.plot(x, y, z, c='red') # draw straight line ax.view_init(45, -150) # angle to show # set axes limits and labels ax.set_xlabel(r"$x$"); ax.set_ylabel(r"$y$"); ax.set_zlabel(r"$z$") ax.set_xlim(0,1.1) ;ax.set_ylim(6,3) ;ax.set_zlim(0,1.75) # Remove tick marks ax.set_xticks([0,0.25,0.5,0.75,1]) ax.set_xticklabels(['0','1','2','4','T']) ax.set_yticks([6.0,5.5,5,4.5,4.0,3.5,3]) ax.set_yticklabels(["","","","","","",""]) ax.set_zticks([1.75,1.25,0.75,0.25]) ax.set_zticklabels(['','','','']) # change background colour to white ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) xlim = plt.gca().get_xlim() ylim = plt.gca().get_ylim() zlim = plt.gca().get_zlim() # print(xlim,ylim,zlim) # (0.0, 1.1) (6.0, 3.0) (0.0, 1.75) ax.arrow3D(-0.03, ylim[0]+0.06, 0, xlim[1]+0.05, 0, 0,
mutation_scale=20, arrowstyle='<|-|>',fc='k') # x axis ax.arrow3D(-0.03, ylim[1], 0, 0, ylim[1]+0.1, 0, mutation_scale=20, arrowstyle='<|-|>', fc='k') # y axis ax.arrow3D(-0.05, ylim[1], 0, 0, 0, zlim[1]+0.1, mutation_scale=20, arrowstyle='<|-|>', fc='k') # z axis ax.text2D(0.05, 0.65,r'$\mathcal{Z}$', fontsize=18, ha='center', transform=ax.transAxes) ax.text2D(0.60, -0.03,r'$\mathcal{Y}$', fontsize=18, ha='center', transform=ax.transAxes) ax.text2D(0.95, 0.40,r'$\mathcal{X}$', fontsize=18, ha='center', transform=ax.transAxes) plt.tick_params(axis='both', color='white') #plt.savefig("sample.png", type="png",dbi=400) # save image # plt.tight_layout() plt.show() | 8 | 6 |
69,160,152 | 2021-9-13 | https://stackoverflow.com/questions/69160152/pymupdf-attributeerror-module-fitz-has-no-attribute-open | pip3 install PyMuPDF Collecting PyMuPDF Using cached PyMuPDF-1.18.17-cp37-cp37m-win_amd64.whl (5.4 MB) Installing collected packages: PyMuPDF Successfully installed PyMuPDF-1.18.17 import fitz doc = fitz.open("my_pdf.pdf") When I look for def open in the fitz.py file, I find nothing. So I understand the error, but I don't understand why the file that I downloaded doesn't have this function. Can someone share the correct files, please? Or maybe I missed something else? FULL TRACE: runfile('D:/Documents/Python_projects/Point_and_area_pdf_to_excel/get_info.py', wdir='D:/Documents/Python_projects/Point_and_area_pdf_to_excel') Reloaded modules: six, dateutil._common, dateutil.relativedelta, dateutil.tz._common, dateutil.tz._factories, dateutil.tz.win, dateutil.tz.tz, dateutil.tz, dateutil.parser._parser, dateutil.parser.isoparser, dateutil.parser, chardet.enums, chardet.charsetprober, chardet.charsetgroupprober, chardet.codingstatemachine, chardet.escsm, chardet.escprober, chardet.latin1prober, chardet.mbcssm, chardet.utf8prober, chardet.mbcharsetprober, chardet.euctwfreq, chardet.euckrfreq, chardet.gb2312freq, chardet.big5freq, chardet.jisfreq, chardet.chardistribution, chardet.jpcntx, chardet.sjisprober, chardet.eucjpprober, chardet.gb2312prober, chardet.euckrprober, chardet.cp949prober, chardet.big5prober, chardet.euctwprober, chardet.mbcsgroupprober, chardet.hebrewprober, chardet.sbcharsetprober, chardet.langbulgarianmodel, chardet.langgreekmodel, chardet.langhebrewmodel, chardet.langrussianmodel, chardet.langthaimodel, chardet.langturkishmodel, chardet.sbcsgroupprober, chardet.universaldetector, chardet.version, chardet Traceback (most recent call last): File "D:\Documents\Python_projects\Point_and_area_pdf_to_excel\get_info.py", line 45, in <module> print(get_dict_list(path)) File "D:\Documents\Python_projects\Point_and_area_pdf_to_excel\get_info.py", line 7, in get_dict_list text_list = get_pdf_page_text_list(pdf_path) File "D:\Documents\Python_projects\Point_and_area_pdf_to_excel\get_info.py", line 19, in get_pdf_page_text_list doc = fitz.open(pdf_path) AttributeError: module 'fitz' has no attribute 'open' | This is likely an installation issue: it looks like there is already a package named fitz installed in your environment that is unrelated to PyMuPDF. So when PyMuPDF calls fitz, it might actually be calling the wrong fitz package. You can consider doing a clean install of all dependencies or creating a virtual environment to work with PyMuPDF. You can also try rolling back fitz to version 1.16.14. | 20 | 7 |
69,134,512 | 2021-9-10 | https://stackoverflow.com/questions/69134512/where-can-i-find-python-print-statements-in-cloud-run-docker-instances | If I am running a container within Cloud Run and use a print statement in my Python code, where can I view it? Cloud logs seem to show logs for the container itself (build, etc.). To debug my code I often write print statements that help me figure out what's going on. Where would that print output be located? | 1] You can find all the logs, including your print statement output, in Cloud Logging, as mentioned in this link. So when you write a print statement from your service, it will be automatically picked up by Cloud Logging. 2] Steps to view logs in Cloud Logging: Logs Explorer -> Cloud Run Revision. 3] You may want to check your logging level. For example: if you have configured the level as logging.ERROR in basicConfig (the default is WARNING) and used logging.info() in your code, it will not be printed. You can refer to this link for more information. 4] Also, you may try flushing stdout, which makes sure buffered output actually gets written out of the buffers. You may refer to this Stack Overflow answer on how to do this. | 4 | 7 |
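To make points 3] and 4] above concrete, here is a minimal sketch of a setup whose output shows up in Cloud Logging; the level and flush settings are the usual culprits:
import logging
import sys

# Emit INFO and above to stdout so Cloud Run's log agent picks them up
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

logging.info("visible in Cloud Logging because level=INFO")
print("plain print output", flush=True)  # flush=True avoids delayed, buffered logs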
69,157,587 | 2021-9-13 | https://stackoverflow.com/questions/69157587/how-to-install-the-dependencies-of-the-submodule-using-poetry | I have a project my-project that uses a submodule my-submodule. The submodule has dependencies different from my-project in poetry.lock & pyproject.toml files. I have installed the dependencies required for my-project using poetry add. These deps are installed and poetry.lock & pyproject.toml files are created in the root folder of my-project. Now, I would also like to install the dependencies of the submodule. Assuming that the path of the submodule is path/to/submodule/from/root, how can I install the dependencies of the submodule and make those deps reflect in the poetry.lock & pyproject.toml files of the root? A similar question has been asked here: Manage dependencies of git submodules with poetry, but there isn't a solution provided there. | You can declare the submodule as a path dependency in the pyproject.toml of the parent project. It will then treat the submodule as a package and include it in dependency install/resolution. Be sure to also include the develop attribute when declaring the dependency, as follows: [tool.poetry.dependencies] my-package = { path = "./path/to/submodule/from/root", develop = true } Link to docs: https://python-poetry.org/docs/dependency-specification/#path-dependencies | 9 | 12 |
69,155,594 | 2021-9-12 | https://stackoverflow.com/questions/69155594/cannot-reset-index-inplace-on-a-series-to-create-a-dataframe | When I try to reset the index of my DataFrame, it is not working.
new = pd.DataFrame(columns=['a','b','Amount1'])
new['Amount1'] = [0,1,6,7,8,9]
new['a'] = ['sarim',1,2,3,4,'sarim']
df_tf = new[new['a']=='sarim']['Amount1']
df_tf.reset_index(inplace=True)
ret_df['Amount1'] = df_tf | Calling reset_index on a Series without drop=True would turn it into a DataFrame, which cannot be done in place — hence the error. Pass drop=True so it stays a Series:
df_tf.reset_index(drop=True, inplace=True)
ret_df['Amount1'] = df_tf
or:
ret_df['Amount1'] = list(df_tf) | 8 | 6 |
69,155,789 | 2021-9-12 | https://stackoverflow.com/questions/69155789/importerror-cannot-import-name-parsemode-from-telegram | I am trying to create a Telegram bot. The code I am trying to execute is: from telegram import ParseMode But it is throwing up this error: ImportError: cannot import name 'ParseMode' from 'telegram' (C:\ProgramData\Anaconda3\lib\site-packages\telegram\__init__.py) Could you please advise how to fix this error? | You have to import it this way: from telegram.ext import ParseMode If the problem is not solved, install the package like this: pip install python_telegram_bot or pip install "python_telegram_bot==12.4.2" | 11 | 2 |
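Note that the correct import location depends on the installed version of python-telegram-bot: in v20 and later, ParseMode lives in telegram.constants, while in v13 and earlier it is importable from telegram directly. A version-tolerant sketch:
try:
    from telegram import ParseMode            # python-telegram-bot <= 13.x
except ImportError:
    from telegram.constants import ParseMode  # python-telegram-bot >= 20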
69,132,009 | 2021-9-10 | https://stackoverflow.com/questions/69132009/django-forms-that-choice-is-not-one-of-the-available-choices | I have a form to update user, the error is on the role field. I am filtering the role based on customer. I am getting the right values for role but anyways the error pops up. Select a valid choice. That choice is not one of the available choices views.py class UserUpdateView(LoginRequiredMixin, SuccessMessageMixin, UpdateView): form_class = UserUpdateForm template_name = 'users/modals/update_profile_modal.html' success_message = "User updated successfully." def get_form_kwargs(self): kw = super().get_form_kwargs() kw['request'] = self.request return kw def get_object(self, *args, **kwargs): user_id = self.request.session['user_detail'] return TbUser.objects.get(id=user_id) def form_invalid(self, form): messages.error(self.request, form.errors) print(form.errors) return redirect('user-detail', pk=self.object.pk) def get_success_url(self): return reverse('user-detail', kwargs={'pk': self.object.pk}) forms.py class UserUpdateForm(forms.ModelForm): email = forms.EmailField() def __init__(self, request, *args, **kwargs): super().__init__(*args, **kwargs) self.request = request if request.user.customer: self.fields['department'].queryset = TbDepartment.objects.filter( customer=request.user.customer) self.fields['role'].queryset = TbRole.objects.filter( customer=request.user.customer) self.fields['username'].required = True self.fields['real_name'].required = True self.fields['email'].required = True self.fields['cellphone'].required = True self.fields['department'].required = True self.fields['role'].required = True class Meta: model = TbUser fields = ['username', 'real_name', 'email', 'cellphone', 'department', 'role'] I am filtering all data using this class, each customer has its own row in the table. 
class TbCustomer(models.Model): id = models.CharField(primary_key=True, max_length=50) short_name = models.CharField(max_length=255) names = models.CharField(max_length=255) descs = models.CharField(max_length=255, blank=True, null=True) creat_time = models.DateTimeField() creat_user = models.CharField(max_length=255) authenticationcode = models.CharField( db_column='authenticationCode', max_length=255, blank=True, null=True) is_available = models.IntegerField(blank=True, null=True) logo_img = models.CharField(max_length=40, blank=True, null=True) response_message = models.CharField(max_length=100, blank=True, null=True) language = models.CharField(max_length=20, blank=True, null=True) class Meta: managed = False db_table = 'tb_customer' def __str__(self): return '%s' % self.short_name I am rendering the form using {{form|crispy}} {% block modal %} {% load static %} {% load crispy_forms_tags %} <!-- Modal --> <div class="modal fade" data-backdrop="static" data-keyboard="false" id="tb-user-profile-update-modal" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true"> <div class="modal-dialog modal-sm" role="document"> <div class="modal-content"> <form enctype="multipart/form-data" action="{% url 'tb-user-update' pk=user.id %}" method="POST"> <div class="row d-flex justify-content-center"> <div class="col-10"> <fieldset class="form-group mt-2"> {{user.username}} {% csrf_token %} {{form|crispy}} </fieldset> <div class="form-group"> <button class="btn btn-secondary" type="submit"> <span>Update</span> </button> <button class="btn btn-secondary" type="button" data-dismiss="modal"> <span>Close</span> </button> </div> </div> </div> </form> </div> </div> </div> {% endblock modal %} | I have removed the role feature, it was redundant in my project. | 5 | 0 |
69,152,401 | 2021-9-12 | https://stackoverflow.com/questions/69152401/print-and-evaluate-in-python3 | Currently for my scientific experiments I use dbg = print # def dbg(*args): pass So I have a lot of dbg(x, y, f(x)) in code, all of which I can "turn off" my commenting one line and uncommenting another. However, the output looks brief, e.g. 0 15 32. Is there a way to make it look like x = 0, y = 15, f(x) = 32? I tried to write something using eval, but couldn't. | Try using the = operator on f-strings: dbg(f"{x=}, {y=}, {f(x)=}") This was introduced in Python3.8 f-strings support = for self-documenting expressions and debugging Added an = specifier to f-strings. An f-string such as f'{expr=}' will expand to the text of the expression, an equal sign, then the representation of the evaluated expression. For example: >>> user = 'eric_idle' >>> member_since = date(1975, 7, 31) >>> f'{user=} {member_since=}' "user='eric_idle' member_since=datetime.date(1975, 7, 31)" | 5 | 13 |
69,148,116 | 2021-9-12 | https://stackoverflow.com/questions/69148116/convert-long-form-dataframe-of-pairwise-distances-to-distance-matrix-in-python | I have a pandas dataframe of pairwise distances in the form of: SampleA SampleB Num_Differences 0 sample_1 sample_2 1 1 sample_1 sample_3 4 2 sample_2 sample_3 8 Note that there are no self-self comparisons (e.g., sample_1 vs sample_1 won't be represented). I would like to convert this table into a squareform distance matrix instead, like so: sample_1 sample_2 sample_3 sample_1 1 4 sample_2 1 8 sample_3 4 8 Can anyone give me some pointers on how to do such a conversion in python? The problem is analogous to a previous question in R (Converting pairwise distances into a distance matrix in R), but I don't know the corresponding python functions to use. The problem also appears to be the opposite of this question (Convert a distance matrix to a list of pairwise distances in Python). Some code to reproduce a dataframe in the form I'm using: df = pd.DataFrame([['sample_1', 'sample_2', 1], ['sample_1', 'sample_3', 4], ['sample_2', 'sample_3', 8]], columns=['SampleA', 'SampleB', 'Num_Differences']) | You can reshape to square, and then make symmetrical by adding the transposed values: # make unique, sorted, common index idx = sorted(set(df['SampleA']).union(df['SampleB'])) # reshape (df.pivot(index='SampleA', columns='SampleB', values='Num_Differences') .reindex(index=idx, columns=idx) .fillna(0, downcast='infer') .pipe(lambda x: x+x.values.T) ) Alternatively, you can use ordered categorical indexes and keep NAs during reshaping with pivot_table. Then add the transposed values to make symmetrical: cat = sorted(set(df['SampleA']).union(df['SampleB'])) (df.assign(SampleA=pd.Categorical(df['SampleA'], categories=cat, ordered=True), SampleB=pd.Categorical(df['SampleB'], categories=cat, ordered=True), ) .pivot_table(index='SampleA', columns='SampleB', values='Num_Differences', dropna=False, fill_value=0) .pipe(lambda x: x+x.values.T) ) Output: SampleB sample_1 sample_2 sample_3 SampleA sample_1 0 1 4 sample_2 1 0 8 sample_3 4 8 0 | 10 | 6 |
69,107,860 | 2021-9-8 | https://stackoverflow.com/questions/69107860/celery-what-is-the-reason-to-have-acks-late-true-without-setting-task-reject-on | After playing with some "defect" scenarios in Celery (with Redis as the broker, for what it's worth) we came to the understanding that there is effectively no point in setting acks_late=true without simultaneously setting task_reject_on_worker_lost=true, because the task won't be rescheduled (again, in our tests) — the task stays in the "unacked" category forever. At the same time, everybody says that acks_late makes the task subject to rescheduling on the same / another worker, so the question is: when does that happen? The official docs say that: Note that the worker will acknowledge the message if the child process executing the task is terminated (either by the task calling sys.exit(), or by signal) even when acks_late is enabled. This behavior is intentional as… We don't want to rerun tasks that forces the kernel to send a SIGSEGV (segmentation fault) or similar signals to the process. We assume that a system administrator deliberately killing the task does not want it to automatically restart. A task that allocates too much memory is in danger of triggering the kernel OOM killer, the same may happen again. A task that always fails when redelivered may cause a high-frequency message loop taking down the system. If you really want a task to be redelivered in these scenarios you should consider enabling the task_reject_on_worker_lost setting. What are possible examples of "something went wrong" that don't fall into the "worker terminated deliberately or due to a signal caught" category? | Reboot, power outage, hardware failure. n.b., all of your examples assume that the prefetch multiplier is 1. | 14 | 1 |
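For reference, a minimal configuration sketch combining the settings discussed above (new-style lowercase setting names as in Celery 4+; the last line reflects the prefetch caveat in the answer):
# celeryconfig.py
task_acks_late = True              # ack only after the task finishes, not on delivery
task_reject_on_worker_lost = True  # requeue the message if the worker process dies abruptly
worker_prefetch_multiplier = 1     # at most one unacked message per worker process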
69,146,994 | 2021-9-11 | https://stackoverflow.com/questions/69146994/how-to-set-specific-color-to-some-bars-in-a-plotly-bar-graph | I'm trying to set different colors for some bars in a plotly express bar graph:
import plotly.express as px
import pandas as pd

data = {'Name':['2020/01', '2020/02', '2020/03', '2020/04', '2020/05', '2020/07', '2020/08'],
        'Value':[34,56,66,78,99,55,22]}
df = pd.DataFrame(data)
color_discrete_sequence = ['#ec7c34']*len(df)
color_discrete_sequence[5] = '#609cd4'
fig=px.bar(df,x='Name',y='Value',color_discrete_sequence=color_discrete_sequence)
fig.show()
My expectation was that one bar (the sixth one) would have a different color; however, I got this result: What am I doing wrong? | This happens because color in px.bar is used to name a category, to illustrate traits or dimensions of a dataset using a colorscale. Or in your case, rather a color cycle, since you're dealing with a categorical / discrete case. color_discrete_sequence is then used to specify which color sequence to follow. One way to achieve your goal using your setup here is to simply define a string column with unique values, for example df['category'] = [str(i) for i in df.index], and then use:
fig=px.bar(df,x='Name',y='Value',
           color = 'category',
           color_discrete_sequence=color_discrete_sequence,
          )
Plot: If df['category'] is a numerical value, color_discrete_sequence will be ignored, and a default continuous sequence will be applied: If anything else is unclear, don't hesitate to let me know. Complete code:
import plotly.express as px
import pandas as pd

data = {'Name':['2020/01', '2020/02', '2020/03', '2020/04', '2020/05', '2020/07', '2020/08'],
        'Value':[34,56,66,78,99,55,22]}
df = pd.DataFrame(data)
df['category'] = [str(i) for i in df.index]
# df['category'] = df.index
color_discrete_sequence = ['#ec7c34']*len(df)
color_discrete_sequence[5] = '#609cd4'
fig=px.bar(df,x='Name',y='Value',
           color = 'category',
           color_discrete_sequence=color_discrete_sequence,
          )
fig.show() | 7 | 9 |
69,146,380 | 2021-9-11 | https://stackoverflow.com/questions/69146380/how-to-parse-datetime-that-is-coming-in-arabic-text-٠٤-٢٥-٢٠٢١-to-english-date | I am reading a JSON file that has some date columns. The issue is that some of the date columns contain dates in Arabic/Urdu text: ٠٤-٢٥-٢٠٢١ I want to convert them to English dates in yyyy-mm-dd format. How can I achieve this in PySpark? | You can convert Arabic-Indic digits to Western digits by casting the type to decimal:
from pyspark.sql.functions import split, concat_ws, col

df = spark.createDataFrame([('٠٤-٢٥-٢٠٢١',)],['arabic'])
df.withColumn('split', split('arabic', '-')) \
  .withColumn('date', concat_ws('-', col('split')[2].cast('decimal'), col('split')[0].cast('decimal'), col('split')[1].cast('decimal'))) \
  .drop('split').show()
+----------+---------+
|    arabic|     date|
+----------+---------+
|٠٤-٢٥-٢٠٢١ |2021-4-25|
+----------+---------+ | 5 | 5 |
69,139,030 | 2021-9-11 | https://stackoverflow.com/questions/69139030/why-and-when-should-use-a-stack-and-unstack-methods | I'm very confused about these two methods: stack() and unstack(). I know that I should use them in the case of multi-indexes; however, I need to know the following: 1- I don't know where I should use stack or unstack. 2- Why should I use them when I can use "pivot"? What I understand is that pivot converts a DataFrame to the unstacked form. If that is correct, I need to know why the following line of code raises an error: data.stack(level=1) # IndexError: Too many levels: Index has only 1 level, not 2 but the following runs: data.unstack().stack(level=1) Sometimes I see that stack has kwargs such as level=-1; I don't know when I have to pass "-1" and what it means. I know that I misunderstand a lot of stuff, but I'm very confused, so any help to understand these terms, please? Thanks in advance | Here is an attempt at a canonical answer on the differences between pivot and unstack. For a complete guide on reshaping, pandas's official documentation on reshaping and pivot tables is a must read. pivot and unstack perform roughly the same operation, but they operate on different logical levels: columns and index levels, respectively. I will use this example dataframe as input:
df = pd.DataFrame({'col1': list('ABCABC'),
                   'col2': list('aaabbb'),
                   'col3': list('uvwxyz'),
                  })

  col1 col2 col3
0    A    a    u
1    B    a    v
2    C    a    w
3    A    b    x
4    B    b    y
5    C    b    z
Using pivot on columns pandas.DataFrame.pivot operates on columns. NB. when the index argument is left unused, it will use the current index.
df.pivot(index='col1', columns='col2', values='col3')

col2  a  b
col1
A     u  x
B     v  y
C     w  z
Using unstack on MultiIndexes There are two use cases here, depending on whether the input is a Series or a DataFrame. pandas.Series.unstack We will first generate a Series with a MultiIndex from the initial DataFrame:
series = df.set_index(['col1', 'col2'])['col3']

col1  col2
A     a       u
B     a       v
C     a       w
A     b       x
B     b       y
C     b       z
Name: col3, dtype: object
We see that the data is very similar to the original DataFrame, but col1 and col2 are now index levels, and the data itself is now one-dimensional (i.e., a Series). Now, we can apply unstack to pivot (by default) the right-most (last) index level as columns to generate a DataFrame. There are many ways to specify the index level to unstack, so all these options are equivalent:
series.unstack()
series.unstack('col2')   # by level name
series.unstack(1)        # by level position from the left
series.unstack(-1)       # by level position from the end (-1 = last)

col2  a  b
col1
A     u  x
B     v  y
C     w  z
This means that df.pivot(index='col1', columns='col2', values='col3') and df.set_index(['col1', 'col2'])['col3'].unstack() are logically equivalent. pandas.DataFrame.unstack The DataFrame version of unstack is very similar to the Series one, with the exception that, as the data is already two-dimensional, it will create an extra level of index for the columns.
df.set_index(['col1', 'col2']).unstack(level='col2')

     col3
col2    a  b
col1
A       u  x
B       v  y
C       w  z
Here again, the same output can be obtained using pivot, by passing a list-encapsulated column name to values:
df.pivot(index='col1', columns='col2', values=['col3'])

     col3
col2    a  b
col1
A       u  x
B       v  y
C       w  z | 6 | 11 |
69,133,906 | 2021-9-10 | https://stackoverflow.com/questions/69133906/taking-gradients-when-using-tf-function | I am puzzled by the behavior I observe in the following example:
import tensorflow as tf

@tf.function
def f(a):
    c = a * 2
    b = tf.reduce_sum(c ** 2 + 2 * c)
    return b, c

def fplain(a):
    c = a * 2
    b = tf.reduce_sum(c ** 2 + 2 * c)
    return b, c

a = tf.Variable([[0., 1.], [1., 0.]])

with tf.GradientTape() as tape:
    b, c = f(a)
print('tf.function gradient: ', tape.gradient([b], [c]))
# outputs: tf.function gradient:  [None]

with tf.GradientTape() as tape:
    b, c = fplain(a)
print('plain gradient: ', tape.gradient([b], [c]))
# outputs: plain gradient:  [<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
#          array([[2., 6.],
#                 [6., 2.]], dtype=float32)>]
The lower behavior is what I would expect. How can I understand the @tf.function case? Thank you very much in advance! (Note that this problem is distinct from: Missing gradient when using tf.function, since here all calculations are inside the function.) | Gradient tape does not record the operations inside the tf.Graph generated by @tf.function; it treats the function as a whole. Roughly, f is applied to a, and gradient tape has recorded the gradients of the outputs of f with respect to input a (it is the only watched variable, tape.watched_variables()). In the second case, there is no graph generated, and operations are applied in Eager mode. So everything works as expected. A good practice is to wrap the most computationally expensive function in the @tf.function (often a training loop). In your case, it will be something like:
@tf.function
def f(a):
    with tf.GradientTape() as tape:
        c = a * 2
        b = tf.reduce_sum(c ** 2 + 2 * c)
    grads = tape.gradient([b], [c])
    print('tf.function gradient: ', grads)
    return grads | 7 | 9 |
69,127,120 | 2021-9-10 | https://stackoverflow.com/questions/69127120/gensim-fasttext-cannot-get-latest-training-loss | Problem description It seems that the get_latest_training_loss function in fasttext returns only 0. Both gensim 4.1.0 and 4.0.0 do not work. from gensim.models.callbacks import CallbackAny2Vec from pprint import pprint as print from gensim.models.fasttext import FastText from gensim.test.utils import datapath class callback(CallbackAny2Vec): '''Callback to print loss after each epoch.''' def __init__(self): self.epoch = 0 def on_epoch_end(self, model): loss = model.get_latest_training_loss() print('Loss after epoch {}: {}'.format(self.epoch, loss)) self.epoch += 1 # Set file names for train and test data corpus_file = datapath('lee_background.cor') model = FastText(vector_size=100, callbacks=[callback()]) # build the vocabulary model.build_vocab(corpus_file=corpus_file) # train the model model.train( corpus_file=corpus_file, epochs=model.epochs, total_examples=model.corpus_count, total_words=model.corpus_total_words, callbacks=model.callbacks, compute_loss=True, ) print(model) 'Loss after epoch 0: 0.0' 'Loss after epoch 1: 0.0' 'Loss after epoch 2: 0.0' 'Loss after epoch 3: 0.0' 'Loss after epoch 4: 0.0' If currently FastText does not support get_latest_training_loss, the documentation here needs to be removed: https://radimrehurek.com/gensim/models/fasttext.html#gensim.models.fasttext.FastText.get_latest_training_loss Versions I have tried this in three different environments and neither of them works. First environment: [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import platform; print(platform.platform()) Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17 >>> import sys; print("Python", sys.version) Python 3.9.6 | packaged by conda-forge | (default, Jul 11 2021, 03:39:48) [GCC 9.3.0] >>> import struct; print("Bits", 8 * struct.calcsize("P")) Bits 64 >>> import numpy; print("NumPy", numpy.__version__) NumPy 1.21.2 >>> import scipy; print("SciPy", scipy.__version__) SciPy 1.7.1 >>> import gensim; print("gensim", gensim.__version__) gensim 4.1.0 >>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) FAST_VERSION 0 Second environment: Python 3.9.5 (default, May 18 2021, 12:31:01) [Clang 10.0.0 ] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import platform; print(platform.platform()) macOS-10.16-x86_64-i386-64bit >>> import sys; print("Python", sys.version) Python 3.9.5 (default, May 18 2021, 12:31:01) [Clang 10.0.0 ] >>> import struct; print("Bits", 8 * struct.calcsize("P")) Bits 64 >>> import numpy; print("NumPy", numpy.__version__) NumPy 1.20.3 >>> import scipy; print("SciPy", scipy.__version__) SciPy 1.7.1 >>> import gensim; print("gensim", gensim.__version__) gensim 4.1.0 >>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) FAST_VERSION 0 Third environment: Python 3.9.5 (default, May 18 2021, 12:31:01) [Clang 10.0.0 ] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import platform; print(platform.platform()) macOS-10.16-x86_64-i386-64bit >>> import sys; print("Python", sys.version) Python 3.9.5 (default, May 18 2021, 12:31:01) [Clang 10.0.0 ] >>> import struct; print("Bits", 8 * struct.calcsize("P")) Bits 64 >>> import numpy; print("NumPy", numpy.__version__) NumPy 1.20.3 >>> import scipy; print("SciPy", scipy.__version__) SciPy 1.7.1 >>> import gensim; print("gensim", gensim.__version__) /Users/jinhuawang/miniconda3/lib/python3.9/site-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning. warnings.warn(msg) gensim 4.0.0 >>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION) FAST_VERSION 0 | Indeed, loss-tracking hasn't ever been implemented in Gensim's FastText model, at least through release 4.1.0 (August 2021). The docs for that method appear in error, due to the inherited method from the Word2Vec superclass not being overriden to prevent the default assumption that superclass methods work. There is a long-open issue to fill the gaps & fix the problems in Gensim's loss-tracking (which is also somewhat buggy & incomplete for Word2Vec). But, at the moment I don't think any contributor is working on it, & it hasn't been prioritized for any upcoming release. It may require someone to volunteer to step forward & fix things. | 5 | 5 |
69,106,483 | 2021-9-8 | https://stackoverflow.com/questions/69106483/python-project-with-poetry-how-to-debug-it-in-visual-studio-code | I have a Python project which I created according to basic Poetry instructions. The project folder is something like this: my-project +----my_project | +-- my_project.py | +-- File1.py | +-- File2.py | +----pyproject.toml Example of how I import stuff from one file to another: in my_project.py I have the code from . import File1, File2 If I want to debug this from VS Code, if I try F5 in the my_project.py, I get the error: Exception has occurred: ImportError attempted relative import with no known parent package However, if I don't express the imports like above, I can't run it using the poetry command. In the pyproject.toml file, I have this: [tool.poetry.scripts] my-project = "my_project.my_project:run" run is the entry-point method in the my_project.py file. To run the project from command prompt, I go to the project folder (where the package folder is) and I type poetry run my-project Again, up to this point, everything according to the Poetry documentation. How could I debug this project in VS Code? I know I need to create a launch.json file, but I don't know how the configuration should look. | For Visual Studio Code, you could try this: add an __init__.py file in the sub-directory my_project in the .vscode directory, add a lauch.json file with the following content: { "version": "0.1.0", "configurations": [ { "name": "my-project", "type": "python", "request": "launch", "cwd": "${workspaceFolder}", "module": "my_project", "args": [] } ] } Here, cwd points to your workspace folder, which should be the parent directory of my-project. You should then be able to run successfully the Run and Debug module of Visual Studio Code. As for Poetry, try modifying your pyproject.toml like this (there seems to be a typo, hyphen vs underscore): [tool.poetry.scripts] my-project = "my-project.my_project:run" And make sure to set the parent directory of my-project as your current working directory when you run poetry run my-project. See this post for additional guidance. | 25 | 15 |
69,118,694 | 2021-9-9 | https://stackoverflow.com/questions/69118694/pandas-transpose-rows-to-columns-based-on-first-column | I have the below dataframe. Column_1 Column_2 Name Xxxx Age 28 Gender M Name yyyy Age 26 Gender F My expected output is Name Age Gender Xxxx 28 M yyyy 26 F I tried df.T(), but it's writing each name, age and gender to separate columns. How to achieve the above output in python/pandas. | Try with groupby and pivot: df["idx"] = df.groupby("Column_1").cumcount() >>> df.pivot("idx", "Column_1", "Column_2").reset_index(drop=True).rename_axis(columns=None) Age Gender Name 0 28 M Xxxx 1 26 F Yyyy | 4 | 2 |
69,118,377 | 2021-9-9 | https://stackoverflow.com/questions/69118377/what-is-the-point-of-having-to-put-await-in-front-of-each-async-function-in-pyth | In Python, we need an await keyword before each coroutine object to have it called by the event loop. But when we put await, it makes the call blocking. It follows that we end up doing the same thing as we do in the blocking fashion. What is the point of having such a construct? https://www.aeracode.org/2018/02/19/python-async-simplified/ https://stackabuse.com/python-async-await-tutorial/ | await makes the call locally blocking, but the "wait" is transmitted through the async function (which is itself awaited), such that when it reaches the reactor the entire task can be moved to a waitlist and another can be run instead. Furthermore, you do not need an await: you could also spawn the coroutine (to a separate task, which you may or may not wait on), or use one of the "future combinators" (asyncio.gather, asyncio.wait, ...) to run it concurrently with others. | 13 | 4 |
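To illustrate the "future combinators" mentioned above — a minimal sketch where two coroutines run concurrently even though the call site uses await:
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)  # yields control; another task runs in the meantime
    return name

async def main():
    # Both coroutines are in flight at once: total time is ~2s, not 3s
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results)  # ['a', 'b']

asyncio.run(main())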
69,117,594 | 2021-9-9 | https://stackoverflow.com/questions/69117594/problem-with-curve-fitting-overflow-encountered-in-exp | I want to fit a curve with the following data, but I get the error: ipython-input>:2: RuntimeWarning: overflow encountered in exp Does anyone know the reason for this problem? I fitted this curve with a different datatype in Matlab and it worked fine. I used the initial conditions from my Matlab code. Both curves are the same, but the values on the y-axes are much higher in this case.
import numpy as np
import scipy.optimize
#sympy.init_printing()
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

list_R1_fit = [
    19.53218114920747, 42.52167990454083, 60.95540646861309,
    70.10646960395906, 73.99897337091254, 75.36736639556727,
    75.69578522881915, 75.62147077733012, 75.42605227692485,
    75.21657113589387, 75.04519265636262, 74.94144261816007,
    74.92153132015117, 74.99475606015201, 75.15746897265564
]
tau_list = [
    0.052, 0.12, 0.252, 0.464, 0.792, 1.264, 1.928, 2.824,
    4, 5.600, 7.795, 10.806, 14.928, 20.599, 28.000
]

array_R1_fit = np.asarray(list_R1_fit)
tau_array = np.asarray(tau_list)

plt.plot(tau_array, array_R1_fit, 'o')

def func_R1_fit(t, a0, a1, a2, a3, a4):
    R1_fit_Curve = (
        a0 * np.exp(a1 * (1 - (28 / t)**(4 / 5))) +
        a2 * (t / ((a3)**a4 + t**a4))
    )
    return R1_fit_Curve

pars, cov = curve_fit(
    f=func_R1_fit, xdata=tau_array, ydata=array_R1_fit,
    p0=[
        0.249714296337621, 0.101851223776512, 0.209343265573669,
        0.306273529630680, 1.511897539010256
    ],
    bounds=(-np.inf, np.inf), maxfev=100000
)
I have generated more data in the first part of the chart. Now I get another error. <ipython-input-7-f61377d66140>:2: RuntimeWarning: invalid value encountered in double_scalars R1_fit_Curve=a0*np.exp(a1*(1-(28/t)**(4/5)))+a2*(t/((a3)**a4+t**a4)) The new lists are as follows:
list_R1_fit=[8.889450414508385, 13.832704635961235, 3.0955553517738656, 6.944672155278666, 19.53218114920747, 23.06912497313617, 32.92595485184, 42.52167990454083, 54.23640835031421, 60.95540646861309, 66.91996368676925, 70.10646960395906, 72.69136093289741, 73.99897337091254, 74.93277916119311, 75.36736639556727, 75.62190347933995, 75.69578522881915, 75.68268608294542, 75.62147077733012, 75.52270979845973, 75.42605227692485, 75.21657113589387, 75.04519265636262, 74.94144261816007, 74.92153132015117, 74.99475606015201, 75.15746897265564]
tau_list=[0.03,0.04,0.052/3,0.052/2,0.052,0.12/2,(0.052+0.12)/2,0.12,(0.252+0.12)/2,0.252,(0.464+0.252)/2,0.464,(0.464+0.792)/2,0.792,(1.264+0.792)/2,1.264,(1.264+1.928)/2,1.928,(2.824+1.928)/2,2.824,(2.824+4)/2,4,5.600,7.795,10.806,14.928,20.599,28.000] | If you set the dtypes of array_R1_fit and tau_array to np.longdouble or np.float64, it should fix the RuntimeWarning: overflow encountered in exp, that is:
array_R1_fit = np.asarray(list_R1_fit, dtype=np.longdouble)
tau_array = np.asarray(tau_list, dtype=np.longdouble)
Note that if you are on a Windows 64-bit computer, np.longdouble will not result in float128 but is defined to be float64. You can try to run it on a Linux system. | 5 | 1 |
69,110,065 | 2021-9-8 | https://stackoverflow.com/questions/69110065/plotly-how-to-add-a-text-box-under-legend | I use this example code given on the Plotly website.
import plotly.express as px

df = px.data.medals_long()

fig = px.bar(df, x="medal", y="count", color="nation",
             pattern_shape="nation", pattern_shape_sequence=[".", "x", "+"])
fig.show()
This gives a plot like below. How can I add a text box under the legend in the plot to get it like in the picture below? I see some examples using Text and Annotations but was wondering if there is any other approach to do it. | From what I could find, adding an annotation is the only way to get the expected output natively in Plotly. An implementation of the annotation will look like:
fig.add_annotation(text='South Korea: Asia <br>China: Asia <br>Canada: North America',
                   align='left',
                   showarrow=False,
                   xref='paper',
                   yref='paper',
                   x=1.1,
                   y=0.8,
                   bordercolor='black',
                   borderwidth=1)
Some additional HTML attributes you can add to the text attribute can be found here: https://plotly.com/chart-studio-help/adding-HTML-and-links-to-charts/ | 4 | 6 |
69,109,730 | 2021-9-8 | https://stackoverflow.com/questions/69109730/apache-airflow-dag-with-single-task | I'm a newbie in Apache Airflow. There are a lot of examples of basic DAGs on the Internet. Unfortunately, I didn't find any examples of a single-task DAG. Most DAG examples contain a bitshift operator at the end of the .py script, which defines the task order. For example: # ...our DAG's code... task1 >> task2 >> task3 But what if my DAG has just a single task at the moment? My question is - do I need to use this single task's name at the end of the Python file? Or if we have only 1 task in scope, will Airflow handle it itself, making the last line of the code below redundant?
from datetime import timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
}

with DAG(
    'tutorial',
    default_args=default_args,
    description='A simple tutorial DAG',
    schedule_interval=timedelta(days=1),
    start_date=days_ago(2),
    tags=['example'],
) as dag:
    t1 = BashOperator(
        task_id='print_date',
        bash_command='date',
    )
    t1 # IS THIS LINE OF CODE NECESSARY? | The answer is NO, you don't need to include the last line. You could also avoid the assignment of the variable t1, leaving the DAG like this:
with DAG(
    'tutorial',
    default_args=default_args,
    description='A simple tutorial DAG',
    schedule_interval=timedelta(days=1),
    start_date=days_ago(2),
    tags=['example'],
) as dag:
    BashOperator(
        task_id='print_date',
        bash_command='date',
    )
The reason to perform the assignment of an instance of an Operator (such as BashOperator) to a variable (called a Task in this scope) is similar to any other object in OOP. In your example there is no other "operation" performed on the t1 variable (you are not reading it or consuming any method from it), so there is no reason to declare it. When starting with Airflow, I think it is very clarifying to use the DebugExecutor to perform quick tests like this and understand how everything is working. If you are using VS Code you can find an example config file here. | 9 | 11 |
69,109,316 | 2021-9-8 | https://stackoverflow.com/questions/69109316/how-can-i-see-the-creation-dates-for-my-conda-environments | I created four different versions of conda virtual environments (envs) for image processing tasks. Each env includes GDAL and OpenCV, and some subset of related libs and dependencies. I want to cull my list of image processing envs down to the most recently created one, which will have the most complete set of the libs I use. But I don't remember the order I created the envs. Is there a way to see the creation date of individual envs or perhaps a list of creation dates for all of my conda envs? UPDATE Here is the output of the command suggested by @Timur Shtatland $ conda env list -v -v -v | grep -v '^#' | perl -lane 'print $F[-1]' | xargs ls -lrt1d $ miniconda3/envs/opencv2_imgproc $ miniconda3/envs/GDAL_OSGEO_env | Conda history files Aside from file/folder dates, Conda also records the history of all environment changes in the conda-meta/history file relative to each environment folder, so that could also be consulted. All entries begin with a date stamp (==> YYYY-MM-DD HH:MM:SS <==), so assuming the first entry corresponds with env creation, one could do something like #!/bin/bash for env_hist in path/to/envs/*/conda-meta/history; do env_prefix=$(dirname $(dirname $env_hist)) echo "$(head -n1 $env_hist) $env_prefix" done | sort to print something like ==> 2020-09-28 16:12:49 <== path/to/envs/pymc39 ==> 2020-11-08 18:15:26 <== path/to/envs/bioc_3_12 ==> 2020-11-22 17:19:08 <== path/to/envs/snakemake_5_29 ==> 2021-01-23 00:08:33 <== path/to/envs/pymc3_11 ==> 2021-01-23 00:12:53 <== path/to/envs/jupyter ==> 2021-03-09 22:50:38 <== path/to/envs/multiqc ==> 2021-03-24 13:20:07 <== path/to/envs/grayskull ==> 2021-04-05 23:40:01 <== path/to/envs/snakemake_6_1 | 4 | 3 |
69,107,300 | 2021-9-8 | https://stackoverflow.com/questions/69107300/get-access-token-from-google-oauth2-credentials | Currently, I am building the async frontend to my TF2 model. Now it works as two services: the 1st service is a Twisted service, and the 2nd service is TensorFlow Serving. An async web client is being used to query the model asynchronously. For practical reasons, I've deployed the model into the GCP AI Platform, and I can get data from it using the Python code from the examples, and it works fine. But the thing is that the Google API client is synchronous, and I would like to use an asynchronous client. Since, AFAIK, there are no actively supported async clients for GCP, I tried the straightforward route and used REST. The model input is the same as on TensorFlow Serving (GCP AI Platform uses TensorFlow Serving internally, I believe). To perform the async call, I need to have: Model URL. (I have it) Input data. (I also have it) Access token. I saw some examples that are:
import googleapiclient.discovery

credentials = service_account.Credentials.from_service_account_file(
    '/path/to/key.json',
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
But the issue is that credentials.token is None, so I can't use it. So I have a question: how could I get the access token to use in the REST request then? Or maybe there is another, better way of doing that? I already saw the following question: How to get access token from instance of google.oauth2.service_account.Credentials object? but I think that it is slightly irrelevant. | The following code sets up the data structures for managing credentials (OAuth tokens) from a service account. No tokens are requested at this point.
credentials = service_account.Credentials.from_service_account_file(
    '/path/to/key.json',
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
Tokens are not requested from the Google auth server until required. There are several reasons: a) network calls take time - a significant amount of time for network failures; b) tokens expire; c) tokens are cached until they (almost) expire. To generate a token, call the refresh() method:
import google.auth.transport.requests

request = google.auth.transport.requests.Request()
credentials.refresh(request)
credentials.token will now contain an OAuth Access Token; otherwise an exception will be thrown (network error, etc.). | 5 | 14 |
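Once credentials.token is populated, it can be used directly as a bearer token in a hand-rolled REST call — a sketch assuming an async HTTP client such as aiohttp, with url and instances as placeholders for the model endpoint and input data:
import aiohttp

async def predict(credentials, url, instances):
    headers = {"Authorization": f"Bearer {credentials.token}"}
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json={"instances": instances},
                                headers=headers) as resp:
            return await resp.json()
Remember to call credentials.refresh(request) again before the token expires; credentials.valid can be checked first to decide whether a refresh is needed.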
69,100,302 | 2021-9-8 | https://stackoverflow.com/questions/69100302/setting-results-of-torch-gather-calls | I have a 2D pytorch tensor of shape n by m. I want to index the second dimension using a list of indices (which could be done with torch.gather) and then also set new values to the result of the indexing. Example:
data = torch.tensor([[0,1,2], [3,4,5], [6,7,8]]) # shape (3,3)
indices = torch.tensor([1,2,1], dtype=torch.long).unsqueeze(-1) # shape (3,1)
# data tensor:
# tensor([[0, 1, 2],
#         [3, 4, 5],
#         [6, 7, 8]])
I want to select the specified indices per row (which would be [1,5,7]) but then also set these values to another number - e.g. 42 I can select the desired columns row-wise by doing:
data.gather(1, indices)
tensor([[1],
        [5],
        [7]])
data.gather(1, indices)[:] = 42 # **This does NOT work**, since the result of gather
                                # does not use the same storage as the original tensor
which is fine, but I would like to change these values now, and have the change also affect the data tensor. I can do what I want to achieve using this, but it seems to be very un-pythonic:
max_index = torch.max(indices)
for i in range(0, max_index + 1):
    mask = (indices == i).nonzero(as_tuple=True)[0]
    data[mask, i] = 42
print(data)
# tensor([[ 0, 42,  2],
#         [ 3,  4, 42],
#         [ 6, 42,  8]])
Any hints on how to do that more elegantly? | What you are looking for is torch.scatter_ with the value option. Tensor.scatter_(dim, index, src, reduce=None) → Tensor Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. With 2D tensors as input and dim=1, the operation is: self[i][index[i][j]] = src[i][j] No mention of the value parameter though... With value=42, and dim=1, this will have the following effect on data: data[i][index[i][j]] = 42 Here applied in-place:
>>> data.scatter_(index=indices, dim=1, value=42)
>>> data
tensor([[ 0, 42,  2],
        [ 3,  4, 42],
        [ 6, 42,  8]]) | 6 | 4 |
69,101,233 | 2021-9-8 | https://stackoverflow.com/questions/69101233/using-dateformatter-resets-starting-date-to-1970 | I have a dataframe where the index is the first date of each month and the size column is the frequency for that month, e.g. Using .index on the dataframe confirms that the type of the index is DatetimeIndex: DatetimeIndex(['2006-12-01', ...], dtype='datetime64[ns]', name='created_at_month', length=175, freq=None) Using .plot() on the DataFrame I can produce a line graph per month: However, it only lists every other year on the x axis, and I'd like it to list each year on the axis. I would expect to be able to do
ax.xaxis.set_major_locator(mdates.YearLocator(1))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
However, this doesn't output any labels at all. If I add a minor formatter (ax.xaxis.set_minor_formatter(mdates.DateFormatter('%d %m %Y'))), I get this: What am I doing wrong here to cause the dates to change? The relevant versions are: Matplotlib: 3.3.4 Pandas: 1.2.4 Python: 3.8.8 | As reported here, for some reason pandas's plot shows this issue. You can overcome this issue by replacing pandas' plot with matplotlib.pyplot.plot. You can take this answer as a reference for 2 datetime ticks on the x axis (month and year, or month and day, or day and hour, as you need), using two different axes in place of minor and major ticks.
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np

df = pd.DataFrame({'created_at_month': pd.date_range(start = '2006-12-01', end = '2020-12-01', freq = 'MS')})
df['size'] = np.random.randint(0, 200, len(df))
df = df.set_index('created_at_month')

fig, ax = plt.subplots()

ax.plot(df.index, df['size'])

ax.xaxis.set_major_locator(mdates.YearLocator(base = 1, month = 1, day = 1))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))

plt.show() | 5 | 5 |
69,099,250 | 2021-9-8 | https://stackoverflow.com/questions/69099250/how-does-threadpoolexecutor-utilise-32-cpu-cores-for-cpu-bound-tasks | From ThreadPoolExecutor Changed in version 3.8: Default value of max_workers is changed to min(32, os.cpu_count() + 4). This default value preserves at least 5 workers for I/O bound tasks. It utilizes at most 32 CPU cores for CPU bound tasks which release the GIL. And it avoids using very large resources implicitly on many-core machines. According to my understanding of GIL, thread based concurrency is only possible for I/O bound tasks. For CPU bound tasks, thread based concurrency is NOT possible, meaning for CPU bound tasks, GIL forces only single threaded execution. My understanding appears to contradict the bolded line in the ThreadPoolExecutor. What am I misunderstanding here? Furthermore, what does which release the GIL mean? Don't CPU bound tasks keep hold of the GIL (unless it is preempted)? From this answer, I suspect this has something to do with spending most of its time in an external library designed to release the GIL (like NumPy) Does that mean thread based concurrency for CPU bound tasks is actually possible provided that threads are doing the CPU bound tasks within a some specially designed external library "designed to release the GIL"? | Yes, exactly. Since the GIL protects python interpreter state, a library can release the lock if it has a significant amount of work to do that doesn't involve accessing Python variables or calling Python functions. NumPy is one such library that can frequently do this. | 5 | 4 |
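A small sketch of the idea in the answer above: because NumPy releases the GIL during heavy array math (the BLAS-backed matrix product here), threads can genuinely overlap CPU-bound work. The sizes and worker counts are illustrative only:
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def work(_):
    a = np.random.rand(1000, 1000)
    return (a @ a).sum()  # the matrix product runs with the GIL released

with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(work, range(8)))  # tasks overlap across cores
The same loop written in pure Python (e.g. summing in a for loop) would hold the GIL throughout and see no speedup from threads.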
69,096,752 | 2021-9-8 | https://stackoverflow.com/questions/69096752/how-can-i-run-python-on-my-hp-prime-graphing-calculator | According to this firmware post, the HP Prime graphing calculator supports Python. However, I cannot find any guide as to how to run python files in the calculator (even within HP's own 700 page long user manual). Does anyone know how to execute these files? For reference, I have HP Prime's connectivity kit (CK) installed, so I am somewhat able to transfer python code (by copy-pasting into CK's "Programs" section). However, I think it's reading it as Prime Programming Language instead, as it does not run. Edit: HP Prime has rebooted and now there is a Python app, allowing me to run some files. Unfortunately, I cannot access any Python libraries. That is, I can only run files that do not have "import _____" in them. This seems like a problem; anyone know how to resolve? Also a further problem is that the files are not actually saved in my calculator, as far as I can tell. | I couldn't really find a good source to read about Python support in this particular brand, but in general, graphing calculators have much more limited memory than personal computers, so they do not choose CPython or any of the heftier implementations of the Python language. They will instead use lightweight implementations like MicroPython or CircuitPython (not these ones exactly but maybe a derivation). These implementations don't have the full standard library of CPython and can have different modules particular to their intended contexts. There probably ARE some modules you can use, but without proper documentation it's hard to say which. It may go without saying but you are certainly restricted from downloading arbitrary Python libraries. | 4 | 1 |
69,096,931 | 2021-9-8 | https://stackoverflow.com/questions/69096931/how-do-i-combine-two-plots-into-one-figure-using-plotly | I have 2 CSV files; my code is below.
df = pd.read_csv("test.csv", sep='\t',skiprows=range(9),names=['A', 'B', 'C','D'])
df2 = pd.read_csv("LoadMatch_Limit.csv",skiprows=range(1),names=['X','Y'])
fig = px.line([df,df2], x=['A','X'] , y=['D','Y'])
I would like my line chart's x-axis to take values from columns 'A' and 'X', and my y-axis to take values from columns 'D' and 'Y'. Is there any way I can plot these 2 charts as one figure? | You could create the two plots and combine them with plotly graph objects:
import plotly.express as px
import plotly.graph_objects as go

fig1 = px.line(df, x='A', y='D')
fig2 = px.line(df2, x='X', y='Y')

fig = go.Figure(data = fig1.data + fig2.data)
fig.show() | 6 | 20 |
69,087,044 | 2021-9-7 | https://stackoverflow.com/questions/69087044/early-stopping-in-bert-trainer-instances | I am fine-tuning a BERT model for a multiclass classification task. My problem is that I don't know how to add "early stopping" to those Trainer instances. Any ideas? | There are a couple of modifications you need to perform, prior to correctly using the EarlyStoppingCallback(). from transformers import EarlyStoppingCallback, IntervalStrategy ... ... # Defining the TrainingArguments() arguments args = TrainingArguments( output_dir = "training_with_callbacks", evaluation_strategy = IntervalStrategy.STEPS, # "steps" eval_steps = 50, # Evaluation and Save happens every 50 steps save_total_limit = 5, # Only last 5 models are saved. Older ones are deleted. learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=5, weight_decay=0.01, push_to_hub=False, metric_for_best_model = 'f1', load_best_model_at_end=True) You need to: Use load_best_model_at_end = True (EarlyStoppingCallback() requires this to be True). evaluation_strategy = 'steps' or IntervalStrategy.STEPS instead of 'epoch'. eval_steps = 50 (evaluate the metrics after N steps). metric_for_best_model = 'f1' In your Trainer(): trainer = Trainer( model, args, ... compute_metrics=compute_metrics, callbacks = [EarlyStoppingCallback(early_stopping_patience=3)] ) Of course, when you use compute_metrics(), for example it can be a function like: def compute_metrics(p): pred, labels = p pred = np.argmax(pred, axis=1) accuracy = accuracy_score(y_true=labels, y_pred=pred) recall = recall_score(y_true=labels, y_pred=pred) precision = precision_score(y_true=labels, y_pred=pred) f1 = f1_score(y_true=labels, y_pred=pred) return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1} The return of the compute_metrics() should be a dictionary and you can access whatever metric you want/compute inside the function and return. Note: In newer transformers version, the usage of Enum IntervalStrategy.steps is recommended (see TrainingArguments()) instead of plain steps string, the latter being soon subject to deprecation. | 31 | 68 |
69,023,252 | 2021-9-2 | https://stackoverflow.com/questions/69023252/conda-init-polluting-environment | I have a project set up in Pycharm, with an existing conda environment. My scripts work when run from within the console. I would like to be able to run python -m path_to_my_script/script.py from any location, but I need conda activated. Conda recommends I do conda init but I'm worried it may change settings someplace and break things. What does conda init do? | Strategy for Answering Exactly what the conda init command does and its consequences are shell-specific. Instead of trying to cover all cases, let's walk through a case, noting along the way that one can replicate this analysis by substituting their shell of interest. Case Study: conda init zsh Let's look at zsh as the shell. This is a common shell (default for macOS 10.15+) and very close to bash. Plus, I don't already have it configured. Probing the Command: Dry Run Many Conda commands include some form of dry run functionality via a --dry-run, -d flag, which - combined with verbosity flags - enables seeing what this would do without doing them. For the init command, dry run alone will only tell us what files it would modify: $ conda init -d zsh no change /Users/mfansler/miniconda3/condabin/conda no change /Users/mfansler/miniconda3/bin/conda no change /Users/mfansler/miniconda3/bin/conda-env no change /Users/mfansler/miniconda3/bin/activate no change /Users/mfansler/miniconda3/bin/deactivate no change /Users/mfansler/miniconda3/etc/profile.d/conda.sh no change /Users/mfansler/miniconda3/etc/fish/conf.d/conda.fish no change /Users/mfansler/miniconda3/shell/condabin/Conda.psm1 no change /Users/mfansler/miniconda3/shell/condabin/conda-hook.ps1 no change /Users/mfansler/miniconda3/lib/python3.7/site-packages/xontrib/conda.xsh no change /Users/mfansler/miniconda3/etc/profile.d/conda.csh modified /Users/mfansler/.zshrc ==> For changes to take effect, close and re-open your current shell. <== Here we can see that it plans to target the user-level resources file for zsh, /Users/mfansler/.zshrc, but it doesn't tell us how it will modified it. Also, OMG! the UX here is awful, because it in no way reflects the fact that I used the -d flag. But don't worry: as long as the -d flag is there it won't actually change things. Patch Preview To see what exactly it will do, add a single verbosity flag (-v) to the command. This will give everything from the previous output, but will now shows us the diff it will use to patch (update) the .zshrc file. $ conda init -dv zsh /Users/mfansler/.zshrc --- +++ @@ -0,0 +1,16 @@ + +# >>> conda initialize >>> +# !! Contents within this block are managed by 'conda init' !! +__conda_setup="$('/Users/mfansler/miniconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)" +if [ $? -eq 0 ]; then + eval "$__conda_setup" +else + if [ -f "/Users/mfansler/miniconda3/etc/profile.d/conda.sh" ]; then + . "/Users/mfansler/miniconda3/etc/profile.d/conda.sh" + else + export PATH="/Users/mfansler/miniconda3/bin:$PATH" + fi +fi +unset __conda_setup +# <<< conda initialize <<< + # ...the rest is exactly as above That is, the plan of action is to add these 16 lines to the .zshrc file. In this case, I don't have an existing .zshrc file, so it plans to add it at line 1. If the file had already existed, it would append these lines. Interpreting the Shell Code Let's overview this code, before focusing on the details. Essentially, this is a redundant sequence of attempts to set up some shell functionality. 
They are ordered from most to least functional. What Conda Hopes To Do The code __conda_setup="$('/Users/mfansler/miniconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)" if [ $? -eq 0 ]; then eval "$__conda_setup" gets something from conda itself, storing the result to a string, and then evaluates that string if the command had a clean exit ($? -eq 0). The neat engineering here is that the subprocess (technically python -m conda) passes back a result that can be run within this current process (zsh), allowing it to define shell functions. I'll dig deeper into what is going on here in a second. Fallback 1: Hardcoded Shell Functions If that strange internal command fails, the devs included a hardcoded version of some essential shell functions (specifically conda activate). This is in: miniconda3/etc/profile.d/conda.sh and they simply check the file exists and source it. Let's hit that last option, then we'll swing back to look at the functionality. Fallback 2: The Last Resort The absolute last resort is to literally violate the standing recommendation since Conda v4.4, which is to simply put the base environment's bin directory on PATH. In this case, there is no conda activate functionality; this only ensures that Conda is on your PATH. Details: Shell Functionality Coming back to the intended case, we can inspect exactly what it would evaluate by simply getting that string result: $ conda shell.zsh hook __add_sys_prefix_to_path() { # In dev-mode CONDA_EXE is python.exe and on Windows # it is in a different relative location to condabin. if [ -n "${_CE_CONDA}" ] && [ -n "${WINDIR+x}" ]; then SYSP=$(\dirname "${CONDA_EXE}") else SYSP=$(\dirname "${CONDA_EXE}") SYSP=$(\dirname "${SYSP}") fi if [ -n "${WINDIR+x}" ]; then PATH="${SYSP}/bin:${PATH}" PATH="${SYSP}/Scripts:${PATH}" PATH="${SYSP}/Library/bin:${PATH}" PATH="${SYSP}/Library/usr/bin:${PATH}" PATH="${SYSP}/Library/mingw-w64/bin:${PATH}" PATH="${SYSP}:${PATH}" else PATH="${SYSP}/bin:${PATH}" fi \export PATH } __conda_exe() ( __add_sys_prefix_to_path "$CONDA_EXE" $_CE_M $_CE_CONDA "$@" ) __conda_hashr() { if [ -n "${ZSH_VERSION:+x}" ]; then \rehash elif [ -n "${POSH_VERSION:+x}" ]; then : # pass else \hash -r fi } __conda_activate() { if [ -n "${CONDA_PS1_BACKUP:+x}" ]; then # Handle transition from shell activated with conda <= 4.3 to a subsequent activation # after conda updated to >= 4.4. See issue #6173. PS1="$CONDA_PS1_BACKUP" \unset CONDA_PS1_BACKUP fi \local ask_conda ask_conda="$(PS1="${PS1:-}" __conda_exe shell.posix "$@")" || \return \eval "$ask_conda" __conda_hashr } __conda_reactivate() { \local ask_conda ask_conda="$(PS1="${PS1:-}" __conda_exe shell.posix reactivate)" || \return \eval "$ask_conda" __conda_hashr } conda() { \local cmd="${1-__missing__}" case "$cmd" in activate|deactivate) __conda_activate "$@" ;; install|update|upgrade|remove|uninstall) __conda_exe "$@" || \return __conda_reactivate ;; *) __conda_exe "$@" ;; esac } if [ -z "${CONDA_SHLVL+x}" ]; then \export CONDA_SHLVL=0 # In dev-mode CONDA_EXE is python.exe and on Windows # it is in a different relative location to condabin. if [ -n "${_CE_CONDA:+x}" ] && [ -n "${WINDIR+x}" ]; then PATH="$(\dirname "$CONDA_EXE")/condabin${PATH:+":${PATH}"}" else PATH="$(\dirname "$(\dirname "$CONDA_EXE")")/condabin${PATH:+":${PATH}"}" fi \export PATH # We're not allowing PS1 to be unbound. It must at least be set. # However, we're not exporting it, which can cause problems when starting a second shell # via a first shell (i.e. starting zsh from bash). 
if [ -z "${PS1+x}" ]; then PS1= fi fi conda activate base I'm not going to walk through all this, but the main part is that instead of directly putting bin on PATH, it defines a shell function called conda and this serves as a wrapper for the condabin/conda entrypoint. This also defines a new functionality conda activate, which uses a shell function, __conda_activate(), behind the scenes. At the final step, it then activates the base environment. Why do it this way? This is engineered like this in order to be responsive to the configuration settings. Configuration options like auto_activate_base and change_ps1 affect how Conda manipulates the shell, and so that changes what functionality Conda includes in its shell functions. Does Conda "Pollute the Environment"? Not really. The main behavioral things like auto-activation and prompt modification can be disabled through configuration settings, so that conda init ultimately just adds the conda activate function to the shell, enabling clean switching between environments without ever having to manually manipulate PATH. | 10 | 18 |
69,074,128 | 2021-9-6 | https://stackoverflow.com/questions/69074128/how-to-package-a-python-project-into-msix-package | I currently work on a Python project, which I'd like to upload to the Microsoft Store in the future. As far as I am aware, in order to upload applications to the Microsoft Store, the application must be packed into the MSIX format. Now the question is: is it possible to pack a Python project into the MSIX format? I have already tried two possible approaches.

The first approach

I assumed that it would be much easier to pack an .exe file into an MSIX package. Since .py files require an interpreter in order to run, I froze the Python project into a standalone runnable .exe file, and it works pretty well. I found a useful tool made by Microsoft, which is supposed to pack .exe files into the MSIX format. The tool is the MSIX Packaging Tool, which is available in the Microsoft Store. I did manage to create an .msix file, but I can't run it, since Windows says that I have to sign the .exe first.

The second approach

I found out that it is possible to pack a project into an MSIX package by using built-in tools inside Visual Studio 2019. So I moved my whole Python project into Visual Studio and followed the necessary steps to pack my project. The problem is that already in the early stages, when adding the reference to my Python project, the following error occurs:

I'd love to know if you have any other possible approaches for packing a Python project into an MSIX package. | Use PyInstaller or a similar tool to package your Python application. You can find more information on how to do this in the PyInstaller documentation. Once you have the output from PyInstaller (either a single .exe file or the "dist" folder), you can use a program like Advanced Installer to create .msix files.

Note: If you're using the "dist" folder, don't add the folder itself to Advanced Installer. Instead, you'll find a folder inside the "dist" folder with the same name as your Python package or script. Add that folder to Advanced Installer. In Advanced Installer, you can create shortcuts that point to the .exe file inside the folder.

PyInstaller: https://pyinstaller.org/en/stable/
AdvancedInstaller: https://www.advancedinstaller.com/
Creating an MSIX package using AdvancedInstaller: https://www.advancedinstaller.com/user-guide/tutorial-create-msix-package.html

Edit: Your first approach actually works; you do indeed have to sign the .exe. I found this guide for signing the exe free of charge: https://adangel.org/2021/09/16/code-signing-lets-encrypt-github-pages/

Edit 2: Here's a useful link from Microsoft: https://learn.microsoft.com/en-us/windows/apps/publish/publish-your-app/overview?pivots=store-installer-msi-exe | 8 | 1 |
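A minimal sketch of that flow (the script name and signing details are illustrative placeholders, not taken from the posts above):

```shell
# freeze the project into a standalone executable (app.py is a placeholder)
pyinstaller --onefile --windowed app.py

# sign the frozen exe before packaging it as MSIX
# (assumes a code-signing certificate is installed; /a auto-selects it)
signtool sign /fd SHA256 /a dist\app.exe
```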
69,049,818 | 2021-9-3 | https://stackoverflow.com/questions/69049818/how-to-export-jupyter-notebook-by-vscode-in-pdf-format-windows-10 | When I try to export my Jupyter Notebook in PDF format in VSCode like this: then I get this error: Export failed. Please check the 'Jupyter' output panel for further details. And the Jupyter output panel says: [error] If you have not installed xelatex (TeX), you will need to do so before you can export to PDF. For further instructions, please see https://nbconvert.readthedocs.io/en/latest/install.html#installing-tex. To avoid installing xelatex (TeX), you might want to try exporting to HTML and using your browser's "Print to PDF" feature. So I tried to install MiKTeX and update the required packages, but I still can't export Jupyter notebooks in PDF format from VSCode. How can I fix this problem? Note that I know I can convert the notebook to HTML and then use Ctrl+P to save it as a PDF, but I want to convert it to PDF directly. | Since I'm using conda venvs, I did these steps:

1. Activate the conda venv using conda activate <NAME_OF_VENV> in the Anaconda prompt.
2. Install nbconvert using conda install -c anaconda nbconvert.

Now it's all okay, and I can export Jupyter notebooks in both HTML and PDF formats.

Update 11/17/2023: nbconvert is compatible with Python 3.8-3.11, based on the official docs. | 25 | 17 |
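For reference (an editorial addition, not from the original answer), the same export can be run from the command line once nbconvert and a TeX distribution are installed; the notebook name is a placeholder:

```shell
# requires a working xelatex (TeX) installation
jupyter nbconvert --to pdf my_notebook.ipynb

# alternative that renders through a headless browser instead of TeX
# (needs: pip install "nbconvert[webpdf]")
jupyter nbconvert --to webpdf my_notebook.ipynb
```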
69,083,256 | 2021-9-7 | https://stackoverflow.com/questions/69083256/the-naming-rules-for-your-virtual-environments-in-python | I'm looking for some sort of naming scheme for my virtual environments. How do you usually name them? Is there naming convention for python virtual environments? | If you are storing your environment inside the project folder some common names are env, venv, .env, .venv, but besides that, I don't think there are any common conventions. The official docs.python.org's tutorial on venv also suggests using .venv as the name. A common directory location for a virtual environment is .venv. This name keeps the directory typically hidden in your shell and thus out of the way while giving it a name that explains why the directory exists. It also prevents clashing with .env environment variable definition files that some tooling supports. - docs.python.org/3/tutorial/venv.html | 8 | 22 |
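As a small illustration of that convention (an editorial addition):

```shell
# create the environment in a hidden .venv folder inside the project
python -m venv .venv

# activate it (POSIX shells; on Windows use .venv\Scripts\activate)
source .venv/bin/activate
```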
69,024,302 | 2021-9-2 | https://stackoverflow.com/questions/69024302/matplotlib-pie-chart-label-does-not-match-value | I am working on this https://www.kaggle.com/edqian/twitter-climate-change-sentiment-dataset. I have already converted the sentiment from numeric to its character description (i.e. 0 will be Neutral, 1 will be Pro, -1 will be Anti):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

tweets_df = pd.read_csv('twitter_sentiment_data.csv')
tweets_df.loc[tweets_df['sentiment'] == 0, 'twt_sentiment'] = 'Neutral'
tweets_df.loc[tweets_df['sentiment'] == -1, 'twt_sentiment'] = 'Anti'
tweets_df.loc[tweets_df['sentiment'] == 1, 'twt_sentiment'] = 'Pro'
tweets_df = tweets_df.drop(['sentiment'], axis=1)
# display(tweets_df.head())
```

```
                                             message              tweetid twt_sentiment
0  @tiniebeany climate change is an interesting hustle as it was global warming but the planet stopped warming for 15 yes while the suv boom    792927353886371840  Anti
1  RT @NatGeoChannel: Watch #BeforeTheFlood right here, as @LeoDiCaprio travels the world to tackle climate change https://toco/LkDehj3tNn htt…  793124211518832641  Pro
2  Fabulous! Leonardo #DiCaprio's film on #climate change is brilliant!!! Do watch. https://toco/7rV6BrmxjW via @youtube                        793124402388832256  Pro
3  RT @Mick_Fanning: Just watched this amazing documentary by leonardodicaprio on climate change. We all think this… https://toco/kNSTE8K8im    793124635873275904  Pro
4  RT @cnalive: Pranita Biswasi, a Lutheran from Odisha, gives testimony on effects of climate change & natural disasters on the po…            793125156185137153  NaN
```

I want to create a graph with subplots that shows the sentiment as a count and as a percentage. The code I tried:

```python
sns.set(font_scale=1.5)
style.use("seaborn-poster")
fig, axes = plt.subplots(1, 2, figsize=(20, 10), dpi=100)
sns.countplot(tweets_df["twt_sentiment"], ax=axes[0])
labels = list(tweets_df["twt_sentiment"].unique())
axes[1].pie(tweets_df["twt_sentiment"].value_counts(), autopct="%1.0f%%",
            labels=labels, startangle=90, explode=tuple([0.1] * len(labels)))
fig.suptitle("Distribution of Tweets", fontsize=20)
plt.show()
```

The result is not what I wanted, as the pie chart labels are wrong. After using sort=False in value_counts, the pie chart looks like this: | labels = list(tweets_df["twt_sentiment"].unique()) does not put the labels in the same order as the index of tweets_df.twt_sentiment.value_counts(). The index determines the slice order. Therefore, it's best to use the .value_counts() index as the labels. Labels can easily be added to the bar plot, and then the pie chart is unnecessary.

```python
import pandas as pd
import matplotlib.pyplot as plt

tweets_df = pd.read_csv('data/kaggle/twitter_climate_change_sentiment/twitter_sentiment_data.csv')

tweets_df.loc[tweets_df['sentiment'] == -1, 'twt_sentiment'] = 'Anti'
tweets_df.loc[tweets_df['sentiment'] == 1, 'twt_sentiment'] = 'Pro'
tweets_df.loc[tweets_df['sentiment'] == 0, 'twt_sentiment'] = 'Neutral'

# assign value_counts to a variable; this is a pandas.Series
vc = tweets_df.twt_sentiment.value_counts()

# assign the value_counts index as the labels
labels = vc.index

# custom colors
colors = ['tab:blue', 'tab:orange', 'tab:green']

fig, axes = plt.subplots(1, 2, figsize=(10, 5), dpi=100)

# plot the pandas.Series directly with pandas.Series.plot
p1 = vc.plot(kind='bar', ax=axes[0], color=colors, rot=0, xlabel='Tweet Sentiment', width=.75)

# add count labels
axes[0].bar_label(p1.containers[0], label_type='center')

# add percent labels
blabels = [f'{(v / vc.sum())*100:0.0f}%' for v in vc]
axes[0].bar_label(p1.containers[0], labels=blabels, label_type='edge')

# make space at the top of the bar plot
axes[0].margins(y=0.1)

# add the pie plot
axes[1].pie(vc, labels=labels, autopct="%1.0f%%", startangle=90,
            explode=tuple([0.1] * len(labels)), colors=colors)

fig.suptitle("Distribution of Tweets", fontsize=20)
plt.show()
```
| 5 | 3 |
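To see why the original labels were wrong (a quick illustrative check, an editorial addition): Series.unique() returns values in order of first appearance, while value_counts() orders by descending count, so the two lists can disagree.

```python
import pandas as pd

s = pd.Series(['Anti', 'Pro', 'Pro', 'Neutral', 'Pro', 'Anti'])
print(list(s.unique()))               # ['Anti', 'Pro', 'Neutral'] -- first appearance
print(list(s.value_counts().index))   # ['Pro', 'Anti', 'Neutral'] -- descending counts
```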
69,040,420 | 2021-9-3 | https://stackoverflow.com/questions/69040420/assertionerror-tried-to-export-a-function-which-references-untracked-resource | I wrote a unit-test in order to safe a model after noticing that I am not able to do so (anymore) during training. @pytest.mark.usefixtures("maybe_run_functions_eagerly") def test_save_model(speech_model: Tuple[TransducerBase, SpeechFeaturesConfig]): model, speech_features_config = speech_model speech_features_config: SpeechFeaturesConfig channels = 3 if speech_features_config.add_delta_deltas else 1 num_mel_bins = speech_features_config.num_mel_bins enc_inputs = np.random.rand(1, 50, num_mel_bins, channels) dec_inputs = np.expand_dims(np.random.randint(0, 25, size=10), axis=1) inputs = enc_inputs, dec_inputs model(inputs) # Throws KeyError: # graph = tf.compat.v1.get_default_graph() # tensor = graph.get_tensor_by_name("77040:0") directory = tempfile.mkdtemp(prefix=f"{model.__class__.__name__}_") try: model.save(directory) finally: shutil.rmtree(directory) Trying to save the model will always throw the following error: E AssertionError: Tried to export a function which references untracked resource Tensor("77040:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly. E E Trackable Python objects referring to this tensor (from gc.get_referrers, limited to two hops): E <tf.Variable 'transformer_transducer/transducer_encoder/inputs_embedding/convolution_stack/conv2d/kernel:0' shape=(3, 3, 3, 32) dtype=float32> Note: As you can see in the code above, but I am not able to retrieve this tensor with tf.compat.v1.get_default_graph().get_tensor_by_name("77040:0"). I tried the following too, but the result is always empty: model(batch) # Build the model tensor_name = "77040" var_names = [var.name for var in model.trainable_weights] weights = list(filter(lambda var: tensor_name in var, var_names)) var_names = [var.name for var in model.trainable_variables] variables = list(filter(lambda var: tensor_name in var, var_names)) print(weights) print(variables) The problem is that I do not understand why I am getting this because the affected layer is tracked by Keras as you can see in the screenshot below. I took it during a debug-session in the call() function. I have no explanation for this and I am running out of ideas what the issue might be here. 
The transformations list in the screenshot is a property of and getting constructed by a layer InputsEmbedding like so: class InputsEmbedding(layers.Layer, TimeReduction): def __init__(self, config: InputsEmbeddingConfig, **kwargs): super().__init__(**kwargs) if config.transformations is None or not len(config.transformations): raise RuntimeError("No transformations provided.") self.config = config self.transformations = list() for transformation in self.config.transformations: layer_name, layer_params = list(transformation.items())[0] layer = _get_layer(layer_name, layer_params) self.transformations.append(layer) self.init_time_reduction_layer() def get_config(self): return self.config.dict() def _get_layer(name: str, params: dict) -> layers.Layer: if name == "conv2d_stack": return ConvolutionStack(**params) elif name == "stack_frames": return StackFrames(**params) else: raise RuntimeError(f"Unsupported or unknown time-reduction layer {name}") In order to verify that the problem is not the InputsEmbedding, I created a unit-text for saving a model that is using just this particular layer. @pytest.mark.usefixtures("maybe_run_functions_eagerly") def test_inputs_embedding_save_model(): convolutions = [ "filters=2, kernel_size=(3, 3), strides=(2, 1)", "filters=4, kernel_size=(3, 3), strides=(2, 1)", "filters=8, kernel_size=(3, 4), strides=(1, 1)", ] config = InputsEmbeddingConfig() config.transformations = [dict(conv2d_stack=dict(convolutions=convolutions)), dict(stack_frames=dict(n=2))] num_features = 8 num_channels = 3 inputs = layers.Input(shape=(None, num_features, num_channels)) x = inputs x, _ = InputsEmbedding(config)(x) model = keras.Model(inputs=inputs, outputs=x) model.build(input_shape=(1, 20, num_features, num_channels)) directory = tempfile.mkdtemp(prefix=f"{model.__class__.__name__}_") try: model.save(directory) finally: shutil.rmtree(directory) Here I am able to save this layer without any issues: ConvolutionStack As it seems to be relevant, here is the (rather ugly) implementation of ConvolutionStack: from typing import List import tensorflow as tf from tensorflow.keras import layers from tensorflow.python.keras.layers import convolutional from speech.lab.layers import InputsRequirements from speech.lab.models import conv_util, models_util class ConvolutionStack(layers.Layer): def __init__( self, convolutions: List[str], kernel_regularizer: dict = None, bias_regularizer: dict = None, **kwargs ): super().__init__(**kwargs) self.config = dict( convolutions=convolutions, kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer ) self.conv_stack_config = [eval(f"dict({convolution})") for convolution in convolutions] self.conv_blocks = list() if kernel_regularizer is not None: kernel_regularizer = models_util.maybe_to_regularizer(kernel_regularizer) if bias_regularizer is not None: bias_regularizer = models_util.maybe_to_regularizer(bias_regularizer) for block_config in self.conv_stack_config: block = _new_convolution_block( **block_config, kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer, ) self.conv_blocks.append(block) self.drop_dim2 = layers.Lambda(tf.squeeze, arguments=dict(axis=-2)) self.expand_last = layers.Lambda(tf.expand_dims, arguments=dict(axis=-1)) @property def inputs_requirements(self) -> InputsRequirements: requirements, frame_look_back = conv_util.get_conv2d_stack_requirements(self.conv_stack_config) first = requirements[0] t_min, f_size = first["min_size"] t_grow, f_grow = first["grow_size"] return InputsRequirements( 
frame_look_back=frame_look_back, t_min=t_min, t_grow=t_grow, f_min=f_size, f_grow=f_grow, ) def call(self, inputs, training=None, mask=None, **kwargs): """ :param inputs: Tensor taking the form [batch, time, freq, channel] :param training: :param mask: :param kwargs: :return: Tensor taking the form [batch, time, freq, 1] """ if training: t_min = self.inputs_requirements.t_min t_grow = self.inputs_requirements.t_grow pad = conv_util.get_padding_for_loss(tf.shape(inputs)[1], t_min=t_min, t_grow=t_grow) inputs = tf.pad(inputs, ((0, 0), (0, pad), (0, 0), (0, 0))) if mask is not None: mask = tf.pad(mask, ((0, 0), (0, pad))) f_min = self.inputs_requirements.f_min f_grow = self.inputs_requirements.f_grow assert (inputs.shape[2] - f_min) % f_grow == 0, ( f'Inputs dimension "freq" ' f"expected to be {f_min} + n * {f_grow} but got {inputs.shape[2]} instead." ) x = inputs for block in self.conv_blocks: for layer in block: if mask is not None and isinstance(layer, convolutional.Conv): st, _ = layer.strides kt = tf.maximum(layer.kernel_size[0] - 1, 1) mask = mask[:, :-kt][:, ::st] mask = tf.pad(mask, ((0, 0), (0, tf.maximum(2 - layer.kernel_size[0], 0)))) x = layer(x, training=training) return self.expand_last(self.drop_dim2(x)), mask def get_config(self): return self.config def _new_convolution_block( filters: int, kernel_size: tuple, strides: tuple, use_bias: bool = False, use_norm: bool = True, kernel_regularizer=None, bias_regularizer=None, activation=None, ): assert strides[0] % 2 == 0 or strides[0] == 1, "Strides on the time axis must be divisible by 2 or be exactly 1." if activation is not None: activation_layer = layers.Activation(activation) else: activation_layer = layers.Lambda(lambda x: x) if use_norm: norm_layer = layers.LayerNormalization() else: norm_layer = layers.Lambda(lambda x: x) return ( layers.Conv2D( filters=filters, kernel_size=kernel_size, strides=strides, use_bias=use_bias, kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer, ), norm_layer, activation_layer, ) See also tensorflow/serving #1719 | Using tensorflow v2.5.0 Python: 3.9 It appears that the problem occurs when we declare/define a layer as class-variable. I can only assume that the problem has to do with the internal Keras logic, which probably makes sense, but imo it's not obvious to the user and I don't think I have ever seen a hint pointing out that this can be an issue. So, in my project I am having the following: class Model(keras.Model): inputs_embedding: InputsEmbedding = None # <-- This caused the problem def __init__(config, *args, **kwargs): super().__init__(*args, **kwargs) if config.embeddings is not None: self.inputs_embedding = InputsEmbedding(config.embeddings) # ... MVP Example The following example creates instances of ModelA, ModelB, ModelC and ModelD. Model A and B can be saved but C cannot. From what I can tell, it does not work to declarea layer which has trainable weights as class-variable. However, it does seem to work for layers which do not have trainable weights (see ModelB). Please note how ModelD can be saved though. The difference to ModelB is that the layer gets only declared and not defined as None which leads to the question why ModelC works though. 
Source Code

```python
import tempfile

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


class ModelA(tf.keras.Model):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_layer = layers.LayerNormalization()

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


class ModelB(tf.keras.Model):
    model_layer: layers.Layer = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # This is probably working because layers.Lambda has no trainable variables
        self.model_layer = layers.Lambda(lambda x: x)

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


class ModelC(tf.keras.Model):
    model_layer: layers.Layer = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_layer = layers.LayerNormalization()

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


class ModelD(tf.keras.Model):
    model_layer: layers.Layer

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_layer = layers.LayerNormalization()

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


def save_tmp_model(model: tf.keras.Model):
    name = model.__class__.__name__
    print(f'Saving model {name}')
    try:
        model.save(tempfile.mkdtemp(prefix=f"{name}_"))
    except Exception as e:
        print(f"Unable to save model: {name}")
        print('Error message:')
        print(str(e))
        return
    print(f".. success!")


def main():
    inputs = np.random.rand(1, 50, 16)

    model_a = ModelA()
    model_b = ModelB()
    model_c = ModelC()
    model_d = ModelD()

    # Build models
    model_a(inputs)
    model_b(inputs)
    model_c(inputs)
    model_d(inputs)

    # Save models
    save_tmp_model(model_a)
    save_tmp_model(model_b)
    save_tmp_model(model_c)
    save_tmp_model(model_d)


if __name__ == '__main__':
    main()
```

Output

```
Saving model ModelA
.. success!
Saving model ModelB
.. success!
Saving model ModelC
Unable to save model: ModelC
Error message:
Tried to export a function which references untracked resource Tensor("1198:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.

Trackable Python objects referring to this tensor (from gc.get_referrers, limited to two hops):
<tf.Variable 'model_c/layer_normalization_1/gamma:0' shape=(16,) dtype=float32>
Saving model ModelD
.. success!
```
| 9 | 4 |
69,071,684 | 2021-9-6 | https://stackoverflow.com/questions/69071684/how-to-optimize-for-multiple-metrics-in-optuna | How do I optimize for multiple metrics simultaneously inside the objective function of Optuna? For example, I am training an LGBM classifier and want to find the best hyperparameter set for all common classification metrics like F1, precision, recall, accuracy, AUC, etc.

```python
def objective(trial):
    # Train
    gbm = lgb.train(param, dtrain)
    preds = gbm.predict(X_test)
    pred_labels = np.rint(preds)

    # Calculate metrics
    accuracy = sklearn.metrics.accuracy_score(y_test, pred_labels)
    recall = metrics.recall_score(pred_labels, y_test)
    precision = metrics.precision_score(pred_labels, y_test)
    f1 = metrics.f1_score(pred_labels, y_test, pos_label=1)
    ...
```

How do I do it? | After defining the grid, fitting the model with these params, and generating predictions, calculate all the metrics you want to optimize for:

```python
def objective(trial):
    param_grid = {"n_estimators": trial.suggest_int("n_estimators", 2000, 10000, step=200)}
    clf = lgbm.LGBMClassifier(objective='binary', **param_grid)
    clf.fit(X_train, y_train)
    preds = clf.predict(X_valid)
    probs = clf.predict_proba(X_valid)

    # Metrics
    f1 = sklearn.metrics.f1_score(y_valid, preds)
    accuracy = ...
    precision = ...
    recall = ...
    logloss = ...
```

and return them in the order you want:

```python
def objective(trial):
    ...
    return f1, logloss, accuracy, precision, recall
```

Then, in the study object, specify whether you want to minimize or maximize each metric by passing directions, like so:

```python
study = optuna.create_study(directions=['maximize', 'minimize', 'maximize', 'maximize', 'maximize'])
study.optimize(objective, n_trials=100)
```

For more details, see Multi-objective Optimization with Optuna in the documentation. | 11 | 24 |
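One follow-up worth knowing (an editorial addition, based on Optuna's documented multi-objective API): a multi-objective study has no single best_trial; the Pareto-optimal trials are exposed as study.best_trials instead.

```python
# after study.optimize(...) finishes
for trial in study.best_trials:
    print(f"trial #{trial.number}: values={trial.values}, params={trial.params}")
```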
69,019,206 | 2021-9-1 | https://stackoverflow.com/questions/69019206/what-it-means-register-anaconda-as-my-default-python | During installation process (Windows OS) I have 2 options: Add Miniconda to my PATH environment variable Register Miniconda as my default Python The first option is pretty obvious. I understand it completely. But what about the second? What is meant by the word "register"? It creates the file with the string "Ok I have registered your Python"or what? What specific operations will be performed? It's so confusing. I have read the whole documentation on their site, but I couldn't find anything about this. | I've investigated this issue today and revealed the secret within. You mean the following Anaconda installer UI, right? I give my conclusion first. What he says is: Register Anaconda3 as my default Python 3.9 NOT Register Anaconda3 as my default Python That means, he can NOT determine what Python version, 3.5, 3.8 or 3.9, becomes your default version. He can only determine that: If the Python Launcher(py.exe), at one run, determines that 3.9 should be used, py.exe should choose Anaconda's Python 3.9 instance. I verify my conclusion using Anaconda3-2022.05-Windows-x86_64.exe using a series of experiments. EXP.0 : Preparation Write a t3.py with content: #!/usr/bin/python3 import sys print(sys.executable) This, when run, will show us which python.exe is being used. Install python.org official(just call it 'official' for brevity) python-3.5.4.exe, with default options. We'll get a regkey [HKCU\SOFTWARE\Python\ PythonCore \3.5-32] and python.exe location is registered there. According to PEP-514, PythonCore is the "company name" chosen by python.org for itself. Now, run t3.py, we see that this very Python 3.5 is used. EXP.1 : Add Anaconda over official 3.5 Now, install Anaconda, ticking the "Register 3.9" option, I install it to d:\Anaconda3. Then we see two regkeys are created. First, [HKCU\SOFTWARE\Python\ContinuumAnalytics\Anaconda39-64], this marks Anaconda's own Python instance. Second, [HKCU\SOFTWARE\Python\PythonCore\3.9], this pretends that an official Python instance is installed. Due to this, t3.py now says d:\anaconda3\python.exe is used. Yes, If we didn't tick the "Register 3.9" option, the [HKCU\SOFTWARE\Python\PythonCore\3.9] regkey will not be created, so Anaconda will not be chosen by py.exe . EXP.2 : Add Anaconda over official 3.5 & 3.10 Now go back to EXP.0 state -- I personally use a VM and revert to EXP.0 snapshot. We install an official python 3.10 , followed by installing Anaconda, still ticking "Register 3.9" option. Then what will happen to t3.py? The answer is, C:\Users\chj\AppData\Local\Programs\Python\ Python310 \python.exe Checking the registry again, the 3.9 (marked as orange) node is still created by Anaconda installer, but it is not picked by py.exe as the default version. Apparently, py.exe picks the latest version number. EXP.3 : Anaconda installer BUG Things are not over. If we already have official Python 3.9, and we got further install Anaconda(providing his own 3.9), then what happens? Anaconda will first kindly prompt us he will overwrite the 3.9 pointer, which looks quite reasonable according to EXP.1 and EXP.2 . However, after installing, t3.py shows the default 3.9 still points to the official one, Anaconda's overwriting does NOT take place. Hmm, apparently, that's BUGGY. | 12 | 8 |
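A quick way to cross-check experiments like these (an editorial addition; py.exe ships with the python.org installers): the launcher can list every interpreter it can see, with paths, so you can confirm which installation a given version number resolves to.

```
py -0p
py -3.9 -c "import sys; print(sys.executable)"
```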
69,062,195 | 2021-9-5 | https://stackoverflow.com/questions/69062195/scikit-learn-column-transformer-does-not-return-back-feature-names | I'm trying to use ColumnTransformer with OneHotEncoder to transform my categorical data. A quick look at my data: I want to do one-hot encoding for 3 features: 'sex', 'smoker', 'region', so I use ColumnTransformer from scikit-learn. (I don't want to separate the numerical and categorical columns and transform them separately; I just want to perform the transformation on a single dataset.)

My code:

```python
cat_feature = X.select_dtypes(include='object')  # select only categorical columns
enc = ColumnTransformer([
    ('one_hot_encoder', OneHotEncoder(), cat_feature)
], remainder='passthrough')
X_transformed = enc.fit_transform(X)  # transformed version of original data
```

My problem is that X_transformed has all the feature names removed, which is a little bit confusing for me to debug: So is there any way to retain my columns' names after doing this transformation? I want to incorporate this transformer into a pipeline, so I can't use pd.get_dummies. Thank you!! | The user will have to write a custom transformer which does passthrough and supports get_feature_names. Steps:

1. Custom transformer which will return the passthrough column names via get_feature_names
2. Don't use remainder='passthrough'; rather, use our custom transformer
3. Use enc.get_feature_names() to get the feature list

Sample:

```python
from sklearn.base import BaseEstimator

df = pd.DataFrame({
    'age': [1, 2, 3, 4],
    'sex': ['male', 'female'] * 2,
    'bmi': [1.1, 2.2, 3.3, 4.4],
    'children': [1] * 4,
    'smoker': ['yes', 'no'] * 2
})

cat_features = df.select_dtypes(include='object').columns
passthrough_features = [c for c in df.columns if c not in cat_features]

class PassthroughTransformer(BaseEstimator):
    def fit(self, X, y=None):
        self.cols = X.columns
        return self

    def transform(self, X, y=None):
        self.cols = X.columns
        return X.values

    def get_feature_names(self):
        return self.cols

enc = ColumnTransformer([
    ('1hot', OneHotEncoder(), cat_features),
    ('pass', PassthroughTransformer(), passthrough_features)])
X_transformed = enc.fit_transform(df)
pd.DataFrame(X_transformed, columns=enc.get_feature_names())
```

Output:

```
   1hot__x0_female  1hot__x0_male  1hot__x1_no  1hot__x1_yes  pass__age  pass__bmi  pass__children
0              0.0            1.0          0.0           1.0        1.0        1.1             1.0
1              1.0            0.0          1.0           0.0        2.0        2.2             1.0
2              0.0            1.0          0.0           1.0        3.0        3.3             1.0
3              1.0            0.0          1.0           0.0        4.0        4.4             1.0
```
| 6 | 6 |
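Worth noting for newer versions (an editorial addition; applies to scikit-learn >= 1.0, released after this question): ColumnTransformer now supports this directly via get_feature_names_out, so the custom passthrough transformer is no longer needed there.

```python
# sklearn >= 1.0; X is the question's feature DataFrame
cat_cols = X.select_dtypes(include='object').columns
enc = ColumnTransformer(
    [('one_hot_encoder', OneHotEncoder(), cat_cols)],
    remainder='passthrough',
)
out = pd.DataFrame(enc.fit_transform(X), columns=enc.get_feature_names_out())
```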
69,084,646 | 2021-9-7 | https://stackoverflow.com/questions/69084646/np-random-rand-or-random-random | While analyzing some code, I stumbled upon the following snippet: msk = np.random.rand(len(df)) < 0.8. The variables "msk" and "df" are irrelevant for my question. After doing some research, I think this usage is also related to the "random" class. It gives True with 80% chance and False with 20% chance on random elements. It is done for masking. I understand why it is used, but I don't understand how it works. Isn't the random method supposed to give float numbers? Why do we get booleans when the result is compared against a threshold? | np.random.rand(len(df)) returns an array of uniform random numbers between 0 and 1; np.random.rand(len(df)) < 0.8 will transform it into an array of booleans based on the condition. As there is an 80% chance for each value to be below 0.8, about 80% of the values are True. A more explicit approach would be to use numpy.random.choice:

```python
np.random.choice([True, False], p=[0.8, 0.2], size=len(df))
```

An even better approach, if your goal is to subset a dataframe, would be to use:

```python
df.sample(frac=0.8)
```

How to split a dataframe 0.8/0.2:

```python
df1 = df.sample(frac=0.8)
df2 = df.drop(df1.index)
```
| 5 | 14 |
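If reproducibility matters (an editorial addition, not in the original answer), both approaches accept a seed:

```python
import numpy as np

rng = np.random.default_rng(42)
msk = rng.random(len(df)) < 0.8        # same mask on every run

# the pandas route can be seeded too
df1 = df.sample(frac=0.8, random_state=42)
df2 = df.drop(df1.index)
```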
69,046,990 | 2021-9-3 | https://stackoverflow.com/questions/69046990/how-to-pass-dependency-files-to-sagemaker-sklearnprocessor-and-use-it-in-pipelin | I need to import function from different python scripts, which will used inside preprocessing.py file. I was not able to find a way to pass the dependent files to SKLearnProcessor Object, due to which I am getting ModuleNotFoundError. Code: from sagemaker.sklearn.processing import SKLearnProcessor from sagemaker.processing import ProcessingInput, ProcessingOutput sklearn_processor = SKLearnProcessor(framework_version='0.20.0', role=role, instance_type='ml.m5.xlarge', instance_count=1) sklearn_processor.run(code='preprocessing.py', inputs=[ProcessingInput( source=input_data, destination='/opt/ml/processing/input')], outputs=[ProcessingOutput(output_name='train_data', source='/opt/ml/processing/train'), ProcessingOutput(output_name='test_data', source='/opt/ml/processing/test')], arguments=['--train-test-split-ratio', '0.2'] ) I would like to pass, dependent_files = ['file1.py', 'file2.py', 'requirements.txt']. So, that preprocessing.py have access to all the dependent modules. And also need to install libraries from requirements.txt file. Can you share any work around or a right way to do this? Update-25-11-2021: Q1.(Answered but looking to solve using FrameworkProcessor) Here, the get_run_args function, is handling dependencies, source_dir and code parameters by using FrameworkProcessor. Is there any way that we can set this parameters from ScriptProcessor or SKLearnProcessor or any other Processor to set them? Q2. Can you also please show some reference to use our Processor as sagemaker.workflow.steps.ProcessingStep and then use in sagemaker.workflow.pipeline.Pipeline? For having Pipeline, do we need sagemaker-project as mandatory or can we create Pipeline directly without any Sagemaker-Project? | There are a couple of options for you to accomplish that. One that is really simple is adding all additional files to a folder, example: . ├── my_package │ ├── file1.py │ ├── file2.py │ └── requirements.txt └── preprocessing.py Then send this entire folder as another input under the same /opt/ml/processing/input/code/, example: from sagemaker.sklearn.processing import SKLearnProcessor from sagemaker.processing import ProcessingInput, ProcessingOutput sklearn_processor = SKLearnProcessor( framework_version="0.20.0", role=role, instance_type="ml.m5.xlarge", instance_count=1, ) sklearn_processor.run( code="preprocessing.py", # <- this gets uploaded as /opt/ml/processing/input/code/preprocessing.py inputs=[ ProcessingInput(source=input_data, destination='/opt/ml/processing/input'), # Send my_package as /opt/ml/processing/input/code/my_package/ ProcessingInput(source='my_package/', destination="/opt/ml/processing/input/code/my_package/") ], outputs=[ ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"), ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"), ], arguments=["--train-test-split-ratio", "0.2"], ) What happens is that sagemaker-python-sdk is going to put your argument code="preprocessing.py" under /opt/ml/processing/input/code/ and you will have my_package/ under the same directory. Edit: For the requirements.txt, you can add to your preprocessing.py: import sys import subprocess subprocess.check_call([ sys.executable, "-m", "pip", "install", "-r", "/opt/ml/processing/input/code/my_package/requirements.txt", ]) | 17 | 26 |
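To actually import the shipped modules from preprocessing.py (an editorial addition; the path matches the destination used in the answer, and my_package/file1/file2 are the question's placeholder names):

```python
import sys
import subprocess

# the script itself runs from /opt/ml/processing/input/code, so this line
# may be redundant, but it makes the import path explicit
sys.path.append("/opt/ml/processing/input/code")

# install the shipped requirements before importing anything that needs them
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-r",
    "/opt/ml/processing/input/code/my_package/requirements.txt",
])

from my_package import file1, file2  # noqa: E402
```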
69,025,133 | 2021-9-2 | https://stackoverflow.com/questions/69025133/filtering-list-of-tuples-based-on-condition | For a given list of tuples, if multiple tuples in the list have the first element of tuple the same - among them select only the tuple with the maximum last element. For example: sample_list = [(5,16,2),(5,10,3),(5,8,1),(21,24,1)] In the sample_list above since the first 3 tuples has the similar first element 5 in this case among them only the 2nd tuple should be retained since it has the max last element => 3. Expected op: op = [(5,10,3),(21,24,1)] Code: op = [] for m in range(len(sample_list)): li = [sample_list[m]] for n in range(len(sample_list)): if(sample_list[m][0] == sample_list[n][0] and sample_list[m][2] != sample_list[n][2]): li.append(sample_list[n]) op.append(sorted(li,key=lambda dd:dd[2],reverse=True)[0]) print (list(set(op))) This works. But it is very slow for long list. Is there a more pythonic or efficient way to do this? | TL;DR Use collections.defaultdict is the fastest alternative and arguably the most pythonic: from collections import defaultdict sample_list = [(5, 16, 2), (5, 10, 3), (5, 8, 1), (21, 24, 1)] d = defaultdict(lambda: (0, 0, float("-inf"))) for e in sample_list: first, _, last = e if d[first][2] < last: d[first] = e res = [*d.values()] print(res) Output [(5, 10, 3), (21, 24, 1)] This is a single pass O(n) which is not only asymptotically optimal but also performant in practice. Detailed Explanation Performance To show that is performant one could design an experiment considering the two main variables of the problem, the number of unique keys (values in the firs position of the tuple) and the length of the input list and the following alternatives approaches: def defaultdict_max_approach(lst): d = defaultdict(lambda: (0, 0, float("-inf"))) for e in lst: first, _, last = e if d[first][2] < last: d[first] = e return [*d.values()] def dict_max_approach(lst): # https://stackoverflow.com/a/69025193/4001592 d = {} for tpl in lst: first, *_, last = tpl if first not in d or last > d[first][-1]: d[first] = tpl return [*d.values()] def groupby_max_approach(lst): # https://stackoverflow.com/a/69025193/4001592 return [max(g, key=ig(-1)) for _, g in groupby(sorted(lst), key=ig(0))] As shown in the plots below the approach using defaultdict is the most performant method for a varying number of unique keys (500, 1000, 5000, 10000) and also for collections up to 1000000 elements (note that the x axis in is in thousands). The above experiments are in concordance with experiments done by others (1, 2). The code for reproducing the experiments can be found here. Pythonic Stating that is the most pythonic is subjective, but here are the main arguments in favor: Is a well known Python idiom Using a defaultdict for grouping a sequence key-value pairs, and aggregating afterwards, is a well known Python idiom. Read the defaultdict examples in the Python documentation. In the PyCon 2013 talk Transforming Code into Beautiful, Idiomatic Python by Raymond Hettinger also says that using defaultdict for such operations is the better way. Is compliant with the Zen of Python In the Zen of Python it can be read that Flat is better than nested. Sparse is better than dense. Using a defaultdict is as flat as using a plain dict only a for-loop and a simple if statement. In the case of defaultdict the if condition is even simpler. 
Both solutions are sparser than using itertools.groupby, notice this approach also involves calling sorted, itemgetter and max all inside a list comprehension. Original Answer You could use a collections.defaultdict to group tuples that have the same first element and then take the maximum of each group based on the third: from collections import defaultdict sample_list = [(5,16,2),(5,10,3),(5,8,1),(21,24,1)] d = defaultdict(list) for e in sample_list: d[e[0]].append(e) res = [max(val, key=lambda x: x[2]) for val in d.values()] print(res) Output [(5, 10, 3), (21, 24, 1)] This approach is O(n). | 21 | 24 |
69,057,820 | 2021-9-4 | https://stackoverflow.com/questions/69057820/how-to-structure-a-mixed-python-rust-package-with-pyo3 | I'm looking for info on how to structure a Python package that wraps an extension module written in Rust, where both languages are mixed. I'm using PyO3 for FFI but can't seem to find an example on how to do this. To be specific: my Rust library exposes a type that is later wrapped by a Python class. Only the Python class should be exposed for later users, and the package should be structured such that it can be pushed to PyPI. For example: On the Rust side

```rust
#[pyclass]
pub struct Point {
    x: f64,
    y: f64
}

#[pymethods]
impl Point {
    #[new]
    pub fn new(x: f64, y: f64) -> Self {
        Self{x, y}
    }
}
```

and on the Python side

```python
from ??? import Point

class Points:
    points: List[Point]

    def __init__(self, points: List[Tuple[float, float]]):
        self.points = []
        for point in points:
            x, y = point
            self.points.append(Point(x, y))
```

I would be thankful for any infos, sources, examples etc.! | I found a way to do this using Maturin. So, in case anyone else is trying to find out how to do this, here's one way. The project needs to have the following structure:

```
my_project
├── Cargo.toml
├── my_project
│   ├── __init__.py
│   └── sum.py
└── src
    └── lib.rs
```

Cargo.toml can be:

```toml
[package]
name = "my_project"
version = "0.1.0"
edition = "2018"

[lib]
name = "my_project"
crate-type = ["cdylib"]

[dependencies.pyo3]
version = "0.14.5"
features = ["extension-module"]
```

One example for lib.rs would be:

```rust
use pyo3::prelude::*;

#[pyfunction]
fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
    Ok((a + b).to_string())
}

#[pymodule]
fn my_project(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
    Ok(())
}
```

Now in sum.py the function can be accessed (after using maturin develop during development, and when publishing automatically after maturin build):

```python
from .my_project import sum_as_string

class Sum:
    sum: str

    def __init__(self, lhs: int, rhs: int):
        self.sum = sum_as_string(lhs, rhs)
```

The __init__.py file can, for example, only expose the Sum class:

```python
from .sum import Sum
```
| 9 | 7 |
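For completeness (an editorial addition, following Maturin's standard CLI): the develop/build commands referenced above look like this; run them from the project root inside an activated virtualenv.

```shell
pip install maturin

# compile the Rust crate and install the package into the current venv
maturin develop

# build a release wheel (placed in target/wheels/) ready for distribution
maturin build --release

# optionally publish straight to PyPI
maturin publish
```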
69,082,602 | 2021-9-7 | https://stackoverflow.com/questions/69082602/the-websocket-transport-is-not-available-you-must-install-a-websocket-server-th | When I development some socket.io service in python environment by using python-socketio and gunicorn, I meet an issue here. I am using Mac OS X and I am using python 3.7. Environment setting $ pip install python-socketio $ pip install gunicorn server-side code app.py import socketio sio = socketio.Server() app = socketio.WSGIApp(sio, static_files = { '/': './socketio-client.html' }) @sio.event def connect(sid, environ): print(sid, 'connected') @sio.event def disconnect(sid): print(sid, 'disconnected') client-side code socketio-client.html <html> <head> <title>Socket.IO Demo</title> <script src="https://cdn.socket.io/3.1.3/socket.io.min.js" integrity="sha384-cPwlPLvBTa3sKAgddT6krw0cJat7egBga3DJepJyrLl4Q9/5WLra3rrnMcyTyOnh" crossorigin="anonymous"></script> </head> <body> <h1>Socket.IO Demo</h1> <script> const sio = io(); sio.on('connect', () => { console.log('connected'); }); sio.on('disconnect', () => { console.log('disconnected'); }); </script> </body> </html> Start program I put the files in the same folder. And start it by following the command. $ gunicorn --thread 50 app:app I use the browser to open http://localhost:8000 and that works. Issue But there is an issue with the server The WebSocket transport is not available, you must install a WebSocket server that is compatible with your async mode to enable it. See the documentation for details. (further occurrences of this error will be logged with level INFO) I try to do $ gunicorn --log-level INFO --thread 50 app:app But I still cannot get helpful information from INFO log-level. The code can keep going to run but the message shows me to install WebSocket server and I don't know which kind of package I need to install for this case of python-socketio or gunicorn. What kind of package I missed to install? It's my first time to use python-socketio and gunicorn. What should I do next? Thanks for your help. | It just needs to install more packages here. $ pip install gevent-websocket $ pip install eventlet And then $ gunicorn --thread 50 app:app Update 1: If the server-side need to active emit to client side, it will need this environment. Because this command $ gunicorn --thread 50 app:app cannot support this situation. The worked environment should be set by following. $ pip install eventlet==0.30.2 $ gunicorn -k eventlet -w 1 app:app | 13 | 11 |
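An equivalent setup with gevent instead of eventlet (an editorial addition, following python-socketio's deployment documentation); note the single worker, which python-socketio requires unless a message queue is configured:

```shell
pip install gevent gevent-websocket
gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 app:app
```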
69,038,398 | 2021-9-3 | https://stackoverflow.com/questions/69038398/python-module-distutilis-hack | I was looking through my pip list and cleaning all my third party modules when I came across a module named 'distutils_hack'. I don't remember installing this, is this something I should be concerned about? The version I was using was Python 3.9. Thanks | It's used by setuptools to replace the stdlib distutils with setuptools' bundled distutils library. Quoting ncoghlan from pypa/setuptools#417 on why this is necessary: While CPython as a whole has many contributors, we don't have many folks contributing to distutils any more - folks need build tools that let them target currently deployed versions of Python, which setuptools provides, but distutils really doesn't. So the challenge for pypa/packaging-problems#127 is as follows: assume that in some future version of Python (3.9? 3.10? 4.0? We dunno yet) distutils is no longer in the standard library per se, but is instead bundled the way we bundle pip (i.e. by installing it from a wheel at Python installation time). eventually, for new Python versions (where distutils isn't in the standard library), we'd like the "real" distutils code to live in setuptools, and the code you get when you do import distutils to just be a backward compatibility shim that aliases the setuptools components into the right place however, for old Python versions (where distutils is still in the standard library), then setuptools will still need to do its monkeypatching magic to allow plain distutils projects to emit the installation database metadata and to be built as wheel archives The scope of the eventual distutils-compat shim probably wouldn't need to be the full distutils API, but it wouldn't be the empty set either. So @jaraco's suggestion sounds like a reasonable starting point to me: give setuptools its own pre-patched copy of distutils so it can work on a distutils-free copy of Python, without providing an importable distutils facade. Building a suitable facade (with a setuptools dependency) would then be part of removing distutils from the standard library. Here is where you can find the module's source code: https://github.com/pypa/setuptools/blob/main/_distutils_hack/__init__.py | 6 | 3 |
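If you want to see the hack in action (an editorial addition; SETUPTOOLS_USE_DISTUTILS is a documented setuptools switch), you can toggle which distutils gets imported:

```shell
# use the stdlib distutils (only exists on Python < 3.12)
SETUPTOOLS_USE_DISTUTILS=stdlib python -c "import distutils; print(distutils.__file__)"

# use setuptools' bundled copy, wired up by _distutils_hack
SETUPTOOLS_USE_DISTUTILS=local python -c "import distutils; print(distutils.__file__)"
```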
69,090,545 | 2021-9-7 | https://stackoverflow.com/questions/69090545/typehint-importing-module-dynamically-using-importlib | Give something as follows: import importlib module_path = "mod" mod = importlib.import_module(module_path, package=None) print(mod.Foo.Bar.x) where mod.py is: class Foo: class Bar: x = 1 mypy file.py --strict raises the following error: file.py:7: error: Module has no attribute "Foo" [attr-defined] I'm wondering how one is supposed to go about type-hinting this, or if this is something which would typically just be ignored with # type: ignore[attr-defined] (assuming that the code is necessary, and the only options are type-hinting or ignoring the type-hint)? Why I am using importlib in this situation The way that importlib is being used is that there's some path: x.y.<changes>.z Where <changes> is dynamic, but the others are fixed. I'm confident that the module will contain the attributes which are being called, but due to <changes>, importlib is used for the import. Which might be summarised as: I do not know precisely which module I will be importing, but I know it will have a class Foo in it. | As was alluded to by @MisterMiyagi in the comments, I think the solution here is to use structural, rather than nominal, subtyping. Nominal subtyping is where we use direct class inheritance to define type relationships. For example, collections.Counter is a subtype of dict because it directly inherits from dict. Structural subtyping, however, is where we define types based on certain properties a class has or certain behaviours it displays. int is a subtype of typing.SupportsFloat not because it directly inherits from SupportsFloat (it doesn't), but because SupportsFloat is defined as a certain interface, and int satisfies that interface. When type-hinting, we can define structural types using typing.Protocol. You could satisfy MyPy in this situation like this: import importlib from typing import cast, Protocol class BarProto(Protocol): x: int class FooProto(Protocol): Bar: type[BarProto] class ModProto(Protocol): Foo: type[FooProto] module_path = "mod" mod = cast(ModProto, importlib.import_module(module_path, package=None)) print(mod.Foo.Bar.x) reveal_type(mod) reveal_type(mod.Foo) reveal_type(mod.Foo.Bar) reveal_type(mod.Foo.Bar.x) We've defined several interfaces here: BarProto: in order to satisfy this interface, a type has to have an attribute x that's of type int. FooProto: in order to satisfy this interface, a type has to have an attribute Bar that is a class of which instances satisfy the BarProto protocol. ModProto: in order to satisfy this interface, a type has to have an attribute Foo that is a class of which instances satisfy the FooProto protocol. Then, when importing the module, we use typing.cast to assert to the type-checker that the module we're importing satisfies the ModProto protocol. Run it through MyPy, and it informs us it has inferred the following types: main.py:18: note: Revealed type is "__main__.ModProto" main.py:19: note: Revealed type is "Type[__main__.FooProto]" main.py:20: note: Revealed type is "Type[__main__.BarProto]" main.py:21: note: Revealed type is "builtins.int" Read more about structural subtyping in python here and here. | 6 | 4 |
69,091,760 | 2021-9-7 | https://stackoverflow.com/questions/69091760/how-can-i-import-a-testclass-properly-to-inherit-from-without-it-being-run-as-a | Context I have a test class where all my tests inherit from. It cant run by itself as it really doesnt contain any setup info I wanted to add a test which is executed by ALL tests (adding it to the baseclass seems logical) But now I notice the basetestclass( => Foo) which I import is being detected as a test itself and runs and is visible in the reports Code the base class in base.py from unittest import TestCase class Foo(TestCase): @classmethod def setUpClass(cls): # prepare the generic setup stuff based on what is defined in the child class print("setupclass Foo done") def test_run_in_all_inherited_tests(self): print("fooBar") assert True the real test in test_something.py from base import Foo # <= This is being detected as a testclass object and thus will be executed class TestFoo(Foo): @classmethod def setUpClass(cls): # define specific test setup super().setUpClass() print("setup TestFoo done") def test_pass(self): pass def test_assert(self): assert False This triggers a testrun of the imported Foo The Question How can I import Foo without that its being detected as a 'test' If I remove the test to run in all tests all is fine. Adding @nottest decorator to Foo wont work since then also all inherited classes are defined nottest. It needs to run on nose, pytest and unittest testrunners I noticed if I changed the import statement like below that it also works. But that would mean adjusting a few hundreds of testfiles in different repos. (I'd like to avoid that) import base class TestFoo(base.Foo): | The key to the answer seems to be that each test has an attribute __test__ which is set to True when it is a test. Setting it to False when the class should not be a test will then let the test collector ignore this class. The answer assumes I can only do changes in the base.py In python 3.9 classmethod and property decorators can be combined so I wrote a separate answer for that answer for < py3.9 the base class in base.py from unittest import TestCase class MetaFoo(type): @property def __test__(cls): return cls != Foo class Foo(TestCase, metaclass=MetaFoo): @classmethod def setUpClass(cls): # prepare the generic setup stuff based on what is defined in the child class print("setupclass Foo done") def test_run_in_all_inherited_tests(self): print("fooBar") assert True answer for >= py3.9 the base class in base.py from unittest import TestCase class Foo(TestCase): @classmethod @property def __test__(cls): return cls != Foo @classmethod def setUpClass(cls): # prepare the generic setup stuff based on what is defined in the child class print("setupclass Foo done") def test_run_in_all_inherited_tests(self): print("fooBar") assert True the actual test test_something.py from base import Foo # <= This will not be detected as a test anymore as __test__ returns False class TestFoo(Foo): @classmethod def setUpClass(cls): # define specific test setup super().setUpClass() print("setup TestFoo done") def test_pass(self): pass def test_assert(self): assert False This doesnt trigger a testrun of the imported Foo anymore | 5 | 6 |
69,092,874 | 2021-9-7 | https://stackoverflow.com/questions/69092874/check-if-list-is-valid-sequence-of-chunks | I want to check whether a list is a valid sequence of chunks, where each chunk begins with some value and ends with the next occurrence of the same value. For example, this is a valid sequence of three chunks: lst = [2, 7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5, 9, 0, 4, 5, 2] \___________/ \_____/ \_______________________/ And this is one is invalid: lst = [2, 7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5, 9, 0, 4] \___________/ \_____/ \_____ ... missing the 2 to end the chunk I have a solution but it's bad. Do you see something better? def is_valid(lst): while lst: start = lst.pop(0) if start not in lst: return False while lst[0] != start: lst.pop(0) lst.remove(start) return True # Tests, should print: True, False, True, False, True print(is_valid([2, 7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5, 9, 0, 4, 5, 2])) print(is_valid([2, 7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5, 9, 0, 4])) print(is_valid(['I', 'N', 'O', 'A', 'I', 'L', 'L', 'T', 'R', 'X', 'I', 'I', 'N', 'X', 'F', 'T'])) print(is_valid(['T', 'I', 'N', 'I', 'X', 'R', 'O', 'F', 'T', 'I', 'N', 'I', 'X', 'L', 'L', 'A'])) print(is_valid([])) | How about this, creating an iter from the list and searching forward on that iter until the next matching element is found. Note that this might fail is None can be an element of the list; then you should rather define and compare against a sentinel obj = object(). def is_valid(lst): it = iter(lst) for x in it: if next((y for y in it if y == x), None) is None: return False return True Since we don't actually need the value returned by next, we can also just use any instead, at the same time solving the problem of the default element. Like next, any will consume the iterator just as far as the matching element, if any: def is_valid(lst): it = iter(lst) for x in it: if not any(y == x for y in it): return False return True This can be further shortened using all instead of the outer for loop: def is_valid(lst): it = iter(lst) return all(any(y == x for y in it) for x in it) And this can finally be reduced to the equally cryptic and intriguing: def is_valid(lst): it = iter(lst) return all(x in it for x in it) Each way, each element is visited exactly once, the original list is not changed, little to no extra space, and IMHO it's even somewhat easy to read and understand. This never was about speed, but anyway: Here are some benchmarks of the different solutions (and some more variations), running the test cases from the question as well as two random lists of 1,000 integers, one valid and one invalid, 10,000 times, on Python 3.8.10: # with long lists # only short test lists 1.52 is_valid_index 0.22 is_valid_index 3.28 is_valid_next 0.30 is_valid_next 2.78 is_valid_for_for_else 0.13 is_valid_for_for_else 5.26 is_valid_for_any 0.32 is_valid_for_any 5.29 is_valid_all_any 0.38 is_valid_all_any 3.42 is_valid_all_any_if 0.36 is_valid_all_any_if 2.02 is_valid_all_in 0.18 is_valid_all_in 1.97 is_valid_all_in_if 0.17 is_valid_all_in_if 1.87 is_valid_for_in 0.11 is_valid_for_in Of course, all are O(n). With the long 1000-element-lists, the solution using index is fastest, but the one with x in it is not too bad, either. The any solutions lag somewhat behind, but are about as fast (or slow) as next when using a generator with condition, but still slower than when using plain for loops. With only the short test-lists, it's a bit different: Here, the solutions using one iterator and for-for-else and for-in are fastest by quite some margin. 
| 8 | 10 |
69,085,037 | 2021-9-7 | https://stackoverflow.com/questions/69085037/get-type-argument-of-arbitrarily-high-generic-parent-class-at-runtime | Given this: from typing import Generic, TypeVar T = TypeVar('T') class Parent(Generic[T]): pass I can get int from Parent[int] using typing.get_args(Parent[int])[0]. The problem becomes a bit more complicated with the following: class Child1(Parent[int]): pass class Child2(Child1): pass To support an arbitrarily long inheritance hierarchy, I made the below solution: import typing from dataclasses import dataclass @dataclass(frozen=True) class Found: value: Any def get_parent_type_parameter(child: type) -> Optional[Found]: for base in child.mro(): # If no base classes of `base` are generic, then `__orig_bases__` is nonexistent causing an `AttributeError`. # Instead, we want to skip iteration. for generic_base in getattr(base, "__orig_bases__", ()): if typing.get_origin(generic_base) is Parent: [type_argument] = typing.get_args(generic_base) # Return `Found(type_argument)` instead of `type_argument` to differentiate between `Parent[None]` # as a base class and `Parent` not appearing as a base class. return Found(type_argument) return None such that get_parent_type_parameter(Child2) returns int. I am only interested in the type argument of one particular base class (Parent), so I've hardcoded that class into get_parent_type_parameter and ignore any other base classes. But my above solution breaks down with chains like this: class Child3(Parent[T], Generic[T]): pass where get_parent_type_parameter(Child3[int]) returns T instead of int. While any answers that tackle Child3 are already great, being able to deal with situations like Child4 would be even better: from typing import Sequence class Child4(Parent[Sequence[T]], Generic[T]): pass so get_parent_type_parameter(Child4[int]) returns Sequence[int]. Is there a more robust way of accessing the type argument of a class X at runtime given an annotation A where issubclass(typing.get_origin(A), X) is True? Why I need this: Recent Python HTTP frameworks generate the endpoint documentation (and response schema) from the function's annotated return type. For example: app = ... @dataclass class Data: hello: str @app.get("/") def hello() -> Data: return Data(hello="world") I am trying to expand this to account for status code and other non-body components: @dataclass class Error: detail: str class ClientResponse(Generic[T]): status_code: ClassVar[int] body: T class OkResponse(ClientResponse[Data]): status_code: ClassVar[int] = 200 class BadResponse(ClientResponse[Error]): status_code: ClassVar[int] = 400 @app.get("/") def hello() -> Union[OkResponse, BadResponse]: if random.randint(1, 2) == 1: return OkResponse(Data(hello="world")) return BadResponse(Error(detail="a_custom_error_label")) To generate the OpenAPI documentation, my framework would evaluate get_parent_type_parameter(E) (with ClientResponse hardcoded as the parent in get_parent_type_parameter) on each E within the Union after inspecting the annotated return type of the function passed to app.get. So E would be OkResponse first resulting in Data. Then it would be ErrorResponse, resulting in Error. My framework then iterates through the __annotations__ of each of the body types and generates the response schema in the documentation for the client. | The following approach is based on __class_getitem__ and __init_subclass__. It might serve your use case, but it has some severe limitations (see below), so use at your own judgement. 
from __future__ import annotations from typing import Generic, Sequence, TypeVar T = TypeVar('T') NO_ARG = object() class Parent(Generic[T]): arg = NO_ARG # using `arg` to store the current type argument def __class_getitem__(cls, key): if cls.arg is NO_ARG or cls.arg is T: cls.arg = key else: try: cls.arg = cls.arg[key] except TypeError: cls.arg = key return super().__class_getitem__(key) def __init_subclass__(cls): if Parent.arg is not NO_ARG: cls.arg, Parent.arg = Parent.arg, NO_ARG class Child1(Parent[int]): pass class Child2(Child1): pass class Child3(Parent[T], Generic[T]): pass class Child4(Parent[Sequence[T]], Generic[T]): pass def get_parent_type_parameter(cls): return cls.arg classes = [ Parent[str], Child1, Child2, Child3[int], Child4[float], ] for cls in classes: print(cls, get_parent_type_parameter(cls)) Which outputs the following: __main__.Parent[str] <class 'str'> <class '__main__.Child1'> <class 'int'> <class '__main__.Child2'> <class 'int'> __main__.Child3[int] <class 'int'> __main__.Child4[float] typing.Sequence[float] This approach requires that every Parent[...] (i.e. __class_getitem__) is followed by an __init_subclass__ because otherwise the former information may be overwritten by a second Parent[...]. For that reason it won't work with type aliases for example. Consider the following: classes = [ Parent[str], Parent[int], Parent[float], ] for cls in classes: print(cls, get_parent_type_parameter(cls)) which outputs: __main__.Parent[str] <class 'float'> __main__.Parent[int] <class 'float'> __main__.Parent[float] <class 'float'> | 7 | 6 |
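If the class-level state above is a concern, a purely introspective sketch is also possible with typing.get_origin/get_args plus TypeVar substitution. This is an illustrative reconstruction for strict subclasses of Parent, not the answer's code, and it does not chain substitutions through several re-parameterized levels:

import typing

def get_parent_type_parameter(ann):
    origin = typing.get_origin(ann) or ann
    # map the class's TypeVars to the concrete arguments, e.g. {T: int}
    subst = dict(zip(getattr(origin, "__parameters__", ()), typing.get_args(ann)))
    for base in origin.mro():
        for generic_base in getattr(base, "__orig_bases__", ()):
            if typing.get_origin(generic_base) is Parent:
                (arg,) = typing.get_args(generic_base)
                if isinstance(arg, typing.TypeVar):
                    return subst.get(arg, arg)             # Child3[int] -> int
                params = getattr(arg, "__parameters__", ())
                if params:                                  # Child4[int] -> Sequence[int]
                    return arg[tuple(subst.get(p, p) for p in params)]
                return arg                                  # Child1/Child2 -> int
    return None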
69,024,209 | 2021-9-2 | https://stackoverflow.com/questions/69024209/chromedriver-executable-path-not-found-in-docker-container | I have created a docker image with the Docker file below. It installs the latest versions of Google Chrome and the chrome driver, as well as the other pip packages. Dockerfile FROM python:3.9 # Install Chrome WebDriver RUN CHROMEDRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` && \ mkdir -p /opt/chromedriver-$CHROMEDRIVER_VERSION && \ curl -sS -o /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \ unzip -qq /tmp/chromedriver_linux64.zip -d /opt/chromedriver-$CHROMEDRIVER_VERSION && \ rm /tmp/chromedriver_linux64.zip && \ chmod +x /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver && \ ln -fs /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver /usr/local/bin/chromedriver # Install Google Chrome RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \ echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list && \ apt-get -yqq update && \ apt-get -yqq install google-chrome-stable && \ rm -rf /var/lib/apt/lists/* COPY requirements.txt . RUN pip install -r requirements.txt WORKDIR /seltesting COPY ./app ./app CMD ["python", "./app/main.py"] The chromedriver.exe file is in the container as I have found it in the CLI. It is in this directory '/usr/local/bin/chromedriver'. python code driver = webdriver.Chrome(options=options, executable_path='/usr/local/bin/chromedriver') I am using a venv as I am also using flask to create a micro service that uses the chrome driver. Would that be causing an issue? Any assistance would be much appreciated as I have been stuck on this for a long time. | I have found the problem: you need to add all the Python files into the Dockerfile. Please find the Dockerfile to install the Chromedriver and Chrome onto the image and the default path for the chromedriver within the container. Dockerfile FROM python:3.9 ADD /app/main.py . ADD /app/connectdriver.py . # Install Chrome WebDriver RUN CHROMEDRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` && \ mkdir -p /opt/chromedriver-$CHROMEDRIVER_VERSION && \ curl -sS -o /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \ unzip -qq /tmp/chromedriver_linux64.zip -d /opt/chromedriver-$CHROMEDRIVER_VERSION && \ rm /tmp/chromedriver_linux64.zip && \ chmod +x /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver && \ ln -fs /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver /usr/local/bin/chromedriver # Install Google Chrome RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \ echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list && \ apt-get -yqq update && \ apt-get -yqq install google-chrome-stable && \ rm -rf /var/lib/apt/lists/* COPY requirements.txt . RUN pip install -r requirements.txt WORKDIR /seleniumtesting COPY ./app ./app CMD ["python","./app/main.py"] python driver script driver = webdriver.Chrome(options=options, executable_path='/usr/local/bin/chromedriver') | 11 | 7 |
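One pitfall not covered above: Chrome usually needs a few extra switches to run inside a container. These are standard Chrome flags (an assumption about this setup, not taken from the question):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")               # no X server inside the container
options.add_argument("--no-sandbox")             # Chrome's sandbox needs privileges Docker may not grant
options.add_argument("--disable-dev-shm-usage")  # /dev/shm is tiny by default in Docker
driver = webdriver.Chrome(options=options, executable_path="/usr/local/bin/chromedriver")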
69,079,181 | 2021-9-6 | https://stackoverflow.com/questions/69079181/how-is-the-s-sc-string-concat-optimization-decided | Short version: If s is a string, then s = s + 'c' might modify the string in place, while t = s + 'c' can't. But how does the operation s + 'c' know which scenario it's in? Long version: t = s + 'c' needs to create a separate string because the program afterwards wants both the old string as s and the new string as t. s = s + 'c' can modify the string in place if s is the only reference, as the program only wants s to be the extended string. CPython actually does this optimization, if there's space at the end for the extra character. Consider these functions, which repeatedly add a character: def fast(n): s = '' for _ in range(n): s = s + 'c' t = s del t def slow(n): s = '' for _ in range(n): t = s + 'c' s = t del t Benchmark results with n = 100_000 (Try it online!): fast : 9 ms 9 ms 9 ms 9 ms 10 ms slow : 924 ms 927 ms 931 ms 933 ms 945 ms Note that the extra t = s or s = t makes both variables equivalent references to the string and then del t leaves only s, so at the next loop iteration, s is again the only reference to the string. Thus the only difference between the two functions is the order in which s + 'c' is assigned to s and t. Let's also disassemble the bytecode. I marked the only three differences with != in the middle. As expected, only the variables for STORE_FAST and LOAD_FAST differ. But up to and including the BINARY_ADD, the bytecode is identical. So how does the BINARY_ADD know whether to optimize or not? import dis import dis dis.dis(fast) dis.dis(slow) --------------------------------------------------------------------------- 0 LOAD_CONST 1 ('') 0 LOAD_CONST 1 ('') 2 STORE_FAST 1 (s) 2 STORE_FAST 1 (s) 4 LOAD_GLOBAL 0 (range) 4 LOAD_GLOBAL 0 (range) 6 LOAD_FAST 0 (n) 6 LOAD_FAST 0 (n) 8 CALL_FUNCTION 1 8 CALL_FUNCTION 1 10 GET_ITER 10 GET_ITER >> 12 FOR_ITER 18 (to 32) >> 12 FOR_ITER 18 (to 32) 14 STORE_FAST 2 (_) 14 STORE_FAST 2 (_) 16 LOAD_FAST 1 (s) 16 LOAD_FAST 1 (s) 18 LOAD_CONST 2 ('c') 18 LOAD_CONST 2 ('c') 20 BINARY_ADD 20 BINARY_ADD 22 STORE_FAST 1 (s) != 22 STORE_FAST 3 (t) 24 LOAD_FAST 1 (s) != 24 LOAD_FAST 3 (t) 26 STORE_FAST 3 (t) != 26 STORE_FAST 1 (s) 28 DELETE_FAST 3 (t) 28 DELETE_FAST 3 (t) 30 JUMP_ABSOLUTE 12 30 JUMP_ABSOLUTE 12 >> 32 LOAD_CONST 0 (None) >> 32 LOAD_CONST 0 (None) 34 RETURN_VALUE 34 RETURN_VALUE | Here's the code in question, from the Python 3.10 branch (in ceval.c, and called from the same file's implementation of the BINARY_ADD opcode). As @jasonharper noted in a comment, it peeks ahead to see whether the result of the BINARY_ADD will next be bound to the same name from which the left-hand addend came. In fast(), it is (operand came from s and result stored into s), but in slow() it isn't (operand came from s but stored into t). There's no guarantee this optimization will persist, though. For example, I noticed that your fast() is no faster than your slow() on the current development CPython main branch (which is the current work-in-progress toward an eventual 3.11 release). Should people rely on this? As noted, there's no guarantee this optimization will persist. "Serious" Python programmers should know better than to rely on dodgy CPython-specific tricks, and, indeed, PEP 8 explicitly warns against relying on this specific one: Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). 
For example, do not rely on CPython's efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b ... | 18 | 16 |
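For reference, the portable pattern PEP 8 is alluding to collects the pieces and joins once at the end, which is O(n) total on any Python implementation; a minimal sketch:

def fast_portable(n):
    parts = []
    for _ in range(n):
        parts.append('c')
    return ''.join(parts)  # single allocation for the final string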
69,065,682 | 2021-9-5 | https://stackoverflow.com/questions/69065682/randomizedsearchcv-all-estimators-failed-to-fit | I am currently working on the "French Motor Claims Datasets freMTPL2freq" Kaggle competition (https://www.kaggle.com/floser/french-motor-claims-datasets-fremtpl2freq). Unfortunately I get a "NotFittedError: All estimators failed to fit" error whenever I am using RandomizedSearchCV and I cannot figure out why that is. Any help is much appreciated. import numpy as np import statsmodels.api as sm import scipy.stats as stats from matplotlib import pyplot as plt from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import KBinsDiscretizer from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from sklearn.model_selection import GridSearchCV from sklearn.model_selection import RandomizedSearchCV from sklearn.model_selection import StratifiedKFold from sklearn.metrics import mean_poisson_deviance from sklearn.metrics import mean_squared_error from sklearn.ensemble import VotingRegressor from sklearn.ensemble import StackingRegressor from sklearn.metrics import mean_gamma_deviance from sklearn.metrics import mean_squared_error from xgboost import XGBRegressor data_freq = pd.read_csv('freMTPL2freq.csv') data_freq['Area'] = data_freq['Area'].str.replace('\'','') data_freq['VehBrand'] = data_freq['VehBrand'].str.replace('\'','') data_freq['VehGas'] = data_freq['VehGas'].str.replace('\'','') data_freq['Region'] = data_freq['Region'].str.replace('\'','') data_freq['frequency'] = data_freq['ClaimNb'] / data_freq['Exposure'] y = data_freq['frequency'] X = data_freq.drop(['frequency', 'ClaimNb', 'IDpol'], axis = 1) X_train, X_val, y_train, y_val = train_test_split(X,y, test_size=0.2, shuffle = True, random_state = 42) pt_columns = ['VehPower', 'VehAge', 'DrivAge', 'BonusMalus', 'Density'] cat_columns = ['Area', 'Region', 'VehBrand', 'VehGas'] from xgboost import XGBRegressor ct = ColumnTransformer([('pt', 'passthrough', pt_columns), ('ohe', OneHotEncoder(), cat_columns)]) pipe_xgbr = Pipeline([('cf_trans', ct), ('ssc', StandardScaler(with_mean = False)), ('xgb_regressor', XGBRegressor()) ]) param = {'xgb_regressor__n_estimators':[3, 5], 'xgb_regressor__max_depth':[3, 5, 7], 'xgb_regressor__learning_rate':[0.1, 0.5], 'xgb_regressor__colsample_bytree':[0.5, 0.8], 'xgb_regressor__subsample':[0.5, 0.8] } rscv = RandomizedSearchCV(pipe_xgbr, param_distributions = param, n_iter = 2, scoring = mean_squared_error, n_jobs = -1, cv = 5, error_score = 'raise') rscv.fit(X_train, y_train, xgbr_regressor__sample_weight = X_train['Exposure']) The first five rows of the original dataframe data_freq look like this: IDpol ClaimNb Exposure Area VehPower VehAge DrivAge BonusMalus VehBrand VehGas Density Region 0 1.0 1 0.10 D 5 0 55 50 B12 Regular 1217 R82 1 3.0 1 0.77 D 5 0 55 50 B12 Regular 1217 R82 2 5.0 1 0.75 B 6 2 52 50 B12 Diesel 54 R22 3 10.0 1 0.09 B 7 0 46 50 B12 Diesel 76 R72 4 11.0 1 0.84 B 7 0 46 50 B12 Diesel 76 R72 The error I get is as follows: --------------------------------------------------------------------------- _RemoteTraceback Traceback (most recent call last) _RemoteTraceback: """ Traceback (most recent call last): File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\externals\loky\process_executor.py", line 418, in _process_worker r = 
call_item() File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\externals\loky\process_executor.py", line 272, in __call__ return self.fn(*self.args, **self.kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\_parallel_backends.py", line 608, in __call__ return self.func(*args, **kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\parallel.py", line 256, in __call__ for func, args, kwargs in self.items] File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\parallel.py", line 256, in <listcomp> for func, args, kwargs in self.items] File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 222, in __call__ return self.function(*args, **kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\model_selection\_validation.py", line 598, in _fit_and_score estimator.fit(X_train, y_train, **fit_params) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\pipeline.py", line 340, in fit fit_params_steps = self._check_fit_params(**fit_params) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\pipeline.py", line 261, in _check_fit_params fit_params_steps[step][param] = pval KeyError: 'xgbr_regressor' """ The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) <ipython-input-68-0c1886d1e985> in <module> ----> 1 rscv.fit(X_train, y_train, xgbr_regressor__sample_weight = X_train['Exposure']) 2 #pipe_xgbr.fit(X_train, y_train) 3 #X_train.describe(include = 'all') ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args <= 0: ---> 63 return f(*args, **kwargs) 64 65 # extra_args > 0 ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params) 839 return results 840 --> 841 self._run_search(evaluate_candidates) 842 843 # multimetric is determined here because in the case of a callable ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates) 1633 evaluate_candidates(ParameterSampler( 1634 self.param_distributions, self.n_iter, -> 1635 random_state=self.random_state)) ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params, cv, more_results) 807 (split_idx, (train, test)) in product( 808 enumerate(candidate_params), --> 809 enumerate(cv.split(X, y, groups)))) 810 811 if len(out) < 1: ~\anaconda3\lib\site-packages\joblib\parallel.py in __call__(self, iterable) 1015 1016 with self._backend.retrieval_context(): -> 1017 self.retrieve() 1018 # Make sure that we get a last message telling us we are done 1019 elapsed_time = time.time() - self._start_time ~\anaconda3\lib\site-packages\joblib\parallel.py in retrieve(self) 907 try: 908 if getattr(self._backend, 'supports_timeout', False): --> 909 self._output.extend(job.get(timeout=self.timeout)) 910 else: 911 self._output.extend(job.get()) ~\anaconda3\lib\site-packages\joblib\_parallel_backends.py in wrap_future_result(future, timeout) 560 AsyncResults.get from multiprocessing.""" 561 try: --> 562 return future.result(timeout=timeout) 563 except LokyTimeoutError: 564 raise TimeoutError() ~\anaconda3\lib\concurrent\futures\_base.py in result(self, timeout) 433 raise CancelledError() 434 elif self._state == FINISHED: --> 435 return self.__get_result() 436 else: 437 raise TimeoutError() ~\anaconda3\lib\concurrent\futures\_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 
else: 386 return self._result KeyError: 'xgbr_regressor' I also tried running fit without the sample_weight parameter. In this case the error changes to: --------------------------------------------------------------------------- _RemoteTraceback Traceback (most recent call last) _RemoteTraceback: """ Traceback (most recent call last): File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\externals\loky\process_executor.py", line 418, in _process_worker r = call_item() File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\externals\loky\process_executor.py", line 272, in __call__ return self.fn(*self.args, **self.kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\_parallel_backends.py", line 608, in __call__ return self.func(*args, **kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\parallel.py", line 256, in __call__ for func, args, kwargs in self.items] File "C:\Users\Jan\anaconda3\lib\site-packages\joblib\parallel.py", line 256, in <listcomp> for func, args, kwargs in self.items] File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 222, in __call__ return self.function(*args, **kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\model_selection\_validation.py", line 625, in _fit_and_score test_scores = _score(estimator, X_test, y_test, scorer, error_score) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\model_selection\_validation.py", line 687, in _score scores = scorer(estimator, X_test, y_test) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 74, in inner_f return f(**kwargs) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\metrics\_regression.py", line 336, in mean_squared_error y_true, y_pred, multioutput) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\metrics\_regression.py", line 88, in _check_reg_targets check_consistent_length(y_true, y_pred) File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 316, in check_consistent_length lengths = [_num_samples(X) for X in arrays if X is not None] File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 316, in <listcomp> lengths = [_num_samples(X) for X in arrays if X is not None] File "C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 249, in _num_samples raise TypeError(message) TypeError: Expected sequence or array-like, got <class 'sklearn.pipeline.Pipeline'> """ The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) <ipython-input-69-a9be9cc5df4a> in <module> ----> 1 rscv.fit(X_train, y_train)#, xgbr_regressor__sample_weight = X_train['Exposure']) 2 #pipe_xgbr.fit(X_train, y_train) 3 #X_train.describe(include = 'all') ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args <= 0: ---> 63 return f(*args, **kwargs) 64 65 # extra_args > 0 ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params) 839 return results 840 --> 841 self._run_search(evaluate_candidates) 842 843 # multimetric is determined here because in the case of a callable ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates) 1633 evaluate_candidates(ParameterSampler( 1634 self.param_distributions, self.n_iter, -> 1635 random_state=self.random_state)) ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in 
evaluate_candidates(candidate_params, cv, more_results) 807 (split_idx, (train, test)) in product( 808 enumerate(candidate_params), --> 809 enumerate(cv.split(X, y, groups)))) 810 811 if len(out) < 1: ~\anaconda3\lib\site-packages\joblib\parallel.py in __call__(self, iterable) 1015 1016 with self._backend.retrieval_context(): -> 1017 self.retrieve() 1018 # Make sure that we get a last message telling us we are done 1019 elapsed_time = time.time() - self._start_time ~\anaconda3\lib\site-packages\joblib\parallel.py in retrieve(self) 907 try: 908 if getattr(self._backend, 'supports_timeout', False): --> 909 self._output.extend(job.get(timeout=self.timeout)) 910 else: 911 self._output.extend(job.get()) ~\anaconda3\lib\site-packages\joblib\_parallel_backends.py in wrap_future_result(future, timeout) 560 AsyncResults.get from multiprocessing.""" 561 try: --> 562 return future.result(timeout=timeout) 563 except LokyTimeoutError: 564 raise TimeoutError() ~\anaconda3\lib\concurrent\futures\_base.py in result(self, timeout) 433 raise CancelledError() 434 elif self._state == FINISHED: --> 435 return self.__get_result() 436 else: 437 raise TimeoutError() ~\anaconda3\lib\concurrent\futures\_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 else: 386 return self._result TypeError: Expected sequence or array-like, got <class 'sklearn.pipeline.Pipeline'> When setting verbose = 10 and n_jobs = 1 the following error message shows up: Fitting 5 folds for each of 2 candidates, totalling 10 fits [CV 1/5; 1/2] START xgb_regressor__colsample_bytree=0.5, xgb_regressor__learning_rate=0.5, xgb_regressor__max_depth=5, xgb_regressor__n_estimators=5, xgb_regressor__subsample=0.5 C:\Users\Jan\anaconda3\lib\site-packages\sklearn\utils\validation.py:72: FutureWarning: Pass sample_weight=406477 1.0 393150 0.0 252885 0.0 260652 0.0 661256 0.0 ... 154663 0.0 398414 0.0 42890 0.0 640774 0.0 114446 0.0 Name: frequency, Length: 108482, dtype: float64 as keyword args. From version 1.0 (renaming of 0.25) passing these as positional arguments will result in an error "will result in an error", FutureWarning) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-84-74435f74c470> in <module> ----> 1 rscv.fit(X_train, y_train, xgb_regressor__sample_weight = X_train['Exposure']) 2 #pipe_xgbr.fit(X_train, y_train) 3 #X_train.describe(include = 'all') ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args <= 0: ---> 63 return f(*args, **kwargs) 64 65 # extra_args > 0 ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params) 839 return results 840 --> 841 self._run_search(evaluate_candidates) 842 843 # multimetric is determined here because in the case of a callable ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates) 1633 evaluate_candidates(ParameterSampler( 1634 self.param_distributions, self.n_iter, -> 1635 random_state=self.random_state)) ~\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params, cv, more_results) 807 (split_idx, (train, test)) in product( 808 enumerate(candidate_params), --> 809 enumerate(cv.split(X, y, groups)))) 810 811 if len(out) < 1: ~\anaconda3\lib\site-packages\joblib\parallel.py in __call__(self, iterable) 1002 # remaining jobs. 
1003 self._iterating = False -> 1004 if self.dispatch_one_batch(iterator): 1005 self._iterating = self._original_iterator is not None 1006 ~\anaconda3\lib\site-packages\joblib\parallel.py in dispatch_one_batch(self, iterator) 833 return False 834 else: --> 835 self._dispatch(tasks) 836 return True 837 ~\anaconda3\lib\site-packages\joblib\parallel.py in _dispatch(self, batch) 752 with self._lock: 753 job_idx = len(self._jobs) --> 754 job = self._backend.apply_async(batch, callback=cb) 755 # A job can complete so quickly than its callback is 756 # called before we get here, causing self._jobs to ~\anaconda3\lib\site-packages\joblib\_parallel_backends.py in apply_async(self, func, callback) 207 def apply_async(self, func, callback=None): 208 """Schedule a func to be run""" --> 209 result = ImmediateResult(func) 210 if callback: 211 callback(result) ~\anaconda3\lib\site-packages\joblib\_parallel_backends.py in __init__(self, batch) 588 # Don't delay the application, to avoid keeping the input 589 # arguments in memory --> 590 self.results = batch() 591 592 def get(self): ~\anaconda3\lib\site-packages\joblib\parallel.py in __call__(self) 254 with parallel_backend(self._backend, n_jobs=self._n_jobs): 255 return [func(*args, **kwargs) --> 256 for func, args, kwargs in self.items] 257 258 def __len__(self): ~\anaconda3\lib\site-packages\joblib\parallel.py in <listcomp>(.0) 254 with parallel_backend(self._backend, n_jobs=self._n_jobs): 255 return [func(*args, **kwargs) --> 256 for func, args, kwargs in self.items] 257 258 def __len__(self): ~\anaconda3\lib\site-packages\sklearn\utils\fixes.py in __call__(self, *args, **kwargs) 220 def __call__(self, *args, **kwargs): 221 with config_context(**self.config): --> 222 return self.function(*args, **kwargs) ~\anaconda3\lib\site-packages\sklearn\model_selection\_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, return_n_test_samples, return_times, return_estimator, split_progress, candidate_progress, error_score) 623 624 fit_time = time.time() - start_time --> 625 test_scores = _score(estimator, X_test, y_test, scorer, error_score) 626 score_time = time.time() - start_time - fit_time 627 if return_train_score: ~\anaconda3\lib\site-packages\sklearn\model_selection\_validation.py in _score(estimator, X_test, y_test, scorer, error_score) 685 scores = scorer(estimator, X_test) 686 else: --> 687 scores = scorer(estimator, X_test, y_test) 688 except Exception: 689 if error_score == 'raise': ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs) 72 "will result in an error", FutureWarning) 73 kwargs.update(zip(sig.parameters, args)) ---> 74 return f(**kwargs) 75 return inner_f 76 ~\anaconda3\lib\site-packages\sklearn\metrics\_regression.py in mean_squared_error(y_true, y_pred, sample_weight, multioutput, squared) 334 """ 335 y_type, y_true, y_pred, multioutput = _check_reg_targets( --> 336 y_true, y_pred, multioutput) 337 check_consistent_length(y_true, y_pred, sample_weight) 338 output_errors = np.average((y_true - y_pred) ** 2, axis=0, ~\anaconda3\lib\site-packages\sklearn\metrics\_regression.py in _check_reg_targets(y_true, y_pred, multioutput, dtype) 86 the dtype argument passed to check_array. 
87 """ ---> 88 check_consistent_length(y_true, y_pred) 89 y_true = check_array(y_true, ensure_2d=False, dtype=dtype) 90 y_pred = check_array(y_pred, ensure_2d=False, dtype=dtype) ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in check_consistent_length(*arrays) 314 """ 315 --> 316 lengths = [_num_samples(X) for X in arrays if X is not None] 317 uniques = np.unique(lengths) 318 if len(uniques) > 1: ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in <listcomp>(.0) 314 """ 315 --> 316 lengths = [_num_samples(X) for X in arrays if X is not None] 317 uniques = np.unique(lengths) 318 if len(uniques) > 1: ~\anaconda3\lib\site-packages\sklearn\utils\validation.py in _num_samples(x) 247 if hasattr(x, 'fit') and callable(x.fit): 248 # Don't get num_samples from an ensembles length! --> 249 raise TypeError(message) 250 251 if not hasattr(x, '__len__') and not hasattr(x, 'shape'): TypeError: Expected sequence or array-like, got <class 'sklearn.pipeline.Pipeline'> | Wow, that was a mess of a traceback, but I think I've finally found it. You set scoring=mean_squared_error, and should instead use scoring="neg_mean_squared_error". The metric function mean_squared_error has signature (y_true, y_pred, *, <kwargs>), whereas the scorer obtained by using the string "neg_mean_squared_error" has signature (estimator, X_test, y_test). So in the traceback, where you see --> 687 scores = scorer(estimator, X_test, y_test) it is calling mean_squared_error with y_true=estimator, y_test=X_test, and sample_weight=y_test (the first kwarg, and hence the FutureWarning about specifying keyword arguments as positional). Going deeper into the traceback, we see a check that the shapes of y_true and y_pred are compatible, but it thinks the former is your pipeline object (and hence the final error message)! | 4 | 6 |
69,090,253 | 2021-9-7 | https://stackoverflow.com/questions/69090253/how-to-iterate-over-attributes-of-dataclass-in-python | Is it possible to iterate over attributes of an instance of a dataclass in python? For example, I would like to double the integer attributes in __post_init__: from dataclasses import dataclass, fields @dataclass class Foo: a: int b: int def __post_init__(self): self.double_attributes() def double_attributes(self): for field in fields(Foo): field = field*2 x = { 'a': 1, 'b': 2 } y = Foo(**x) >>> TypeError: unsupported operand type(s) for *: 'Field' and 'int' How can I access the values of an instance of a class and set them to something else, like below, but in a loop? @dataclass class Foo: a: int b: int def __post_init__(self): self.double_a() self.double_b() def double_a(self): self.a = self.a*2 def double_b(self): self.b = self.b*2 | You are very close, but dataclasses.fields actually returns a tuple of Field objects. At least in my case, it looks like the return type is not properly annotated, but that's easy enough to fix. from dataclasses import dataclass, fields, Field from typing import Tuple @dataclass class Foo: a: int b: int def __post_init__(self): self.double_attributes() def double_attributes(self): # Note the annotation added here (tuple of one or more # `dataclasses.Field`s) cls_fields: Tuple[Field, ...] = fields(self.__class__) for field in cls_fields: # This check is to avoid fields annotated with other types # such as `str` if issubclass(field.type, int): new_val = getattr(self, field.name) * 2 setattr(self, field.name, new_val) But if you're running this multiple times (for example creating many Foo objects) then it might be slightly more efficient to cache the list of fields which are integers. For example the following is pseudo code which I might suggest: integer_fields: ClassVar[Frozenset[Field]] = frozenset(f for f in fields(cls) if issubclass(f.type, int)) | 13 | 16 |
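A quick usage check of the accepted approach, with the expected output derived from the code above:

f = Foo(a=1, b=2)
print(f)  # Foo(a=2, b=4) -- both int-annotated fields were doubled in __post_init__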
69,071,531 | 2021-9-6 | https://stackoverflow.com/questions/69071531/how-to-use-django-serializer-to-update-an-instance | in Django PUT method, I want to update an instance: sv= SV.objects.get(pk=pk) serializer = SVSerializer(sv, data=request.data) if serializer.is_valid(): Here, in request.data, I just want to pass some of the variables of SV. But as some fields are missing, is_valid will be false. What I want is, just update the fields in request.data, for the other ones, keep the value in sv. How could I do that? | Perform a partial update by setting partial=True: sv= SV.objects.get(pk=pk) serializer = SVSerializer(sv, data=request.data, partial=True) if serializer.is_valid(): serializer.save() else: # Do something else This allows a PATCH request. Edit If you want a default field during partial update (as requested in a comment) override the update method: class SVSerializer(serializers.ModelSerializer): # Expose the related sv_state by its flag instead of the raw relation state_flag = serializers.SlugRelatedField(source='sv_state', queryset=SVState.objects.all(), slug_field='flag') def update(self, instance, validated_data): if self.partial and validated_data.get('state_flag') is None: validated_data['state_flag'] = 0 return super().update(instance=instance, validated_data=validated_data) | 7 | 10 |
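For completeness, a hypothetical DRF view method wiring this together (names like SV and SVSerializer come from the question; the Response import is an assumption about the rest of the setup):

from rest_framework.response import Response

def patch(self, request, pk):
    sv = SV.objects.get(pk=pk)
    serializer = SVSerializer(sv, data=request.data, partial=True)
    serializer.is_valid(raise_exception=True)  # returns a 400 with field errors on failure
    serializer.save()
    return Response(serializer.data)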
69,036,579 | 2021-9-2 | https://stackoverflow.com/questions/69036579/how-to-display-buttons-in-pyvis-visualization-of-networkx-graph | I am trying to modify this function in order to correctly display the interactive buttons. I am using pyvis to visualize a graph created on Networkx. Despite including N.show_buttons(filter_=True), the buttons do not appear in the corresponding html file. Also, how can I add a title to the html page that is produced? def visualize(identity_id,html_name): #create subgraph using the provided identity_id classx = [n for n in G.nodes() if G.nodes[n]['modularity'] == G.nodes[identity_id]['modularity']] SG = G.subgraph(classx) #instantiate the Network object N = Network(height='100%', width='100%', bgcolor='#ffffff', font_color='black',notebook = True, directed=False) #this line effects the physics of the html File N.barnes_hut(spring_strength=0.006) #Change colors of nodes and edges for n in SG: if (SG.nodes[n]['category']=='cust') and (SG.nodes[n]['status']=='ACTIVE'): # assign color to nodes based on cust status color = 'green' shape = 'square' if (SG.nodes[n]['category']=='cust') and (SG.nodes[n]['status']=='CLOSED'): # assign color to nodes based on cust status color = 'red' shape = 'square' elif SG.nodes[n]['category']=='app':# assign shape to nodes based on cust versus app color = 'blue' shape = 'triangle' N.add_node(n, label=n, color=color,shape = shape) for e in SG.edges: if e in SG.edges: # add the edges to the graph color = 'black' width = 2 N.add_edge(e[0],e[1],color=color, width=width) N.show_buttons(filter_=True) #generate html file N.show(f'subgraph_{html_name}.html') | The problem is you've set the height and width both to '100%' when instantiating the visualization: N = Network(height='100%', width='100%', bgcolor='#ffffff', font_color='black',notebook = True, directed=False) Since the network is set to take up all of the space in the browser window, the buttons simply aren't rendered in the window. Depending on where you want the buttons to appear, I'd suggest setting height to a fixed pixel value (say, height='800px') or changing width to be a smaller percentage of the available space (say, 75%). I made up dummy data to get your code to work, but here is full copy-pasteable code below for other readers looking to recreate this question. 
import networkx as nx from pyvis.network import Network def visualize(identity_id,html_name): # Generate synthetic data G = nx.complete_bipartite_graph(3, 4) nx.set_node_attributes(G, 3, 'modularity') nx.set_node_attributes(G, 'cust', 'category') nx.set_node_attributes(G, 'ACTIVE', 'status') #create subgraph using the provided identity_id classx = [n for n in G.nodes() if G.nodes[n]['modularity'] == G.nodes[identity_id]['modularity']] SG = G.subgraph(classx) #instantiate the Network object N = Network(height='800px', width='100%', bgcolor='#ffffff', # Changed height font_color='black',notebook = True, directed=False) #this line affects the physics of the html File N.barnes_hut(spring_strength=0.006) #Change colors of nodes and edges for n in SG: if (SG.nodes[n]['category']=='cust') and (SG.nodes[n]['status']=='ACTIVE'): # assign color to nodes based on cust status color = 'green' shape = 'square' elif (SG.nodes[n]['category']=='cust') and (SG.nodes[n]['status']=='CLOSED'): # assign color to nodes based on cust status color = 'red' shape = 'square' elif SG.nodes[n]['category']=='app':# assign shape to nodes based on cust versus app color = 'blue' shape = 'triangle' else: color = 'blue' shape = 'triangle' N.add_node(n, label=n, color=color,shape = shape) for e in SG.edges: if e in SG.edges: # add the edges to the graph color = 'black' width = 2 N.add_edge(e[0],e[1],color=color, width=width) N.show_buttons(filter_=True) #generate html file N.show(f'subgraph_{html_name}.html') visualize(3, 'test_name') | 5 | 6 |
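On the second half of the question (a page title): pyvis's Network constructor also accepts a heading argument in recent versions. Assuming your pyvis version supports it, something like the following should render a title above the graph:

N = Network(height='800px', width='100%', bgcolor='#ffffff',
            font_color='black', notebook=True, directed=False,
            heading='My subgraph title')  # heading support depends on the pyvis version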
69,091,017 | 2021-9-7 | https://stackoverflow.com/questions/69091017/python-type-hint-for-has-method | For example, we have a class: class A: def send(self, msg: bytes) -> None: # implementation... pass def recv(self, n: int) -> bytes: # implementation pass and a function: def a(obj, n: int) -> None: received = obj.recv(n) obj.send(received) It's fairly obvious that not only instances of class A can be passed as the obj argument, but also instances of socket.socket, maybe other classes, which have recv and send implemented. How can one annotate/type hint obj argument, so that it says something like: obj type must possess methods send and recv send method must be of type Callable[[bytes], None] recv method must be of type Callable[[int], bytes] | What you need here is duck-typing (structural subtyping) via typing.Protocol. Some examples are in this list. Protocol classes are defined like this: class Proto(Protocol): def meth(self) -> int: ... Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing), for example: class C: def meth(self) -> int: return 0 def func(x: Proto) -> int: return x.meth() func(C()) # Passes static type check A builtin example is typing.SupportsIndex, an ABC with one abstract method __index__. So for your case, it could be something like: from typing import Protocol class SupportsSendReceive(Protocol): def send(self, msg: bytes) -> None: ... def recv(self, n: int) -> bytes: ... def a(obj: SupportsSendReceive, n: int) -> None: received = obj.recv(n) obj.send(received) Note that the ellipsis ... doesn't mean you have to substitute code into it. It is really how it should be. Or you could also put pass in there if the 3 dots are bothering you :) | 9 | 8 |
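If a runtime isinstance check is also wanted (not just static checking), the protocol can be marked runtime_checkable; note that isinstance then only verifies that the method names exist, not their signatures:

from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsSendReceive(Protocol):
    def send(self, msg: bytes) -> None: ...
    def recv(self, n: int) -> bytes: ...

assert isinstance(A(), SupportsSendReceive)  # checks method presence only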
69,085,675 | 2021-9-7 | https://stackoverflow.com/questions/69085675/pyspark-dataframe-with-multiple-array-columns-into-multiple-rows-with-one-valu | We have a pyspark dataframe with several columns containing arrays with multiple values. Our goal is to have each of these values of these columns in several rows, keeping the initial different columns. So, starting with something like this: data = [ ("A", ["a", "c"], ["1", "5"]), ("B", ["a", "b"], None), ("C", [], ["1"]), ] That is: +---+------+------+ |id |list_a|list_b| +---+------+------+ |A |[a, c]|[1, 5]| |B |[a, b]|null | |C |[] |[1] | +---+------+------+ We would like to end up having: +---+----+----+ |id |col |col | +---+----+----+ |A |a |null| |A |c |null| |A |null|1 | |A |null|5 | |B |a |null| |B |b |null| |C |null|1 | +---+----+----+ We are thinking about several approaches: prefixing each value with a column indicator, merge all the arrays into a single one, explode it and reorganize the different values into different columns split the dataframe into several, each one with one of these array columns, explode the array column and then, concatenating the dataframes But all of them smell like dirty, complex, error prone and inefficient workarounds. Does anyone have an idea about how to solve this in an elegant manner? | In case both columns list_a and list_b could be empty, I would add a 4th case in the dataset data = [ ("A", ["a", "c"], ["1", "5"]), ("B", ["a", "b"], None), ("C", [], ["1"]), ("D", None, None), ] df = spark.createDataFrame(data,["id","list_a","list_b"]) I would then split the original df in 3 (both nulls, list_a exploded and list_b exploded) and then execute a unionByName dfnulls = df.filter(col("list_a").isNull() & col("list_b").isNull())\ .withColumn("list_a", lit(None))\ .withColumn("list_b", lit(None)) df1 = df\ .withColumn("list_a", explode_outer(col("list_a")))\ .withColumn("list_b", lit(None))\ .filter(~col("list_a").isNull()) df2 = df\ .withColumn("list_b", explode_outer(col("list_b")))\ .withColumn("list_a", lit(None))\ .filter(~col("list_b").isNull()) merged_df = df1.unionByName(df2).unionByName(dfnulls) merged_df.show() +---+------+------+ | id|list_a|list_b| +---+------+------+ | A| a| null| | A| c| null| | B| a| null| | B| b| null| | A| null| 1| | A| null| 5| | C| null| 1| | D| null| null| +---+------+------+ | 5 | 2 |
69,087,228 | 2021-9-7 | https://stackoverflow.com/questions/69087228/python-multiprocessing-making-same-object-instance-for-every-process | I have written a simple example to illustrate what exactly I'm banging my head against. Probably there is some very simple explanation that I just miss. import time import multiprocessing as mp import os class SomeOtherClass: def __init__(self): self.a = 'b' class SomeProcessor(mp.Process): def __init__(self, queue): super().__init__() self.queue = queue def run(self): soc = SomeOtherClass() print("PID: ", os.getpid()) print(soc) if __name__ == "__main__": queue = mp.Queue() for n in range(10): queue.put(n) processes = [] for proc in range(mp.cpu_count()): p = SomeProcessor(queue) p.start() processes.append(p) for p in processes: p.join() Result is: PID: 11853 <__main__.SomeOtherClass object at 0x7fa637d3f588> PID: 11854 <__main__.SomeOtherClass object at 0x7fa637d3f588> PID: 11855 <__main__.SomeOtherClass object at 0x7fa637d3f588> PID: 11856 <__main__.SomeOtherClass object at 0x7fa637d3f588> The object address is the same for all, even though every initialization happened in a new process. Can anyone point out what the problem is? Thanks. Also I wonder about this behaviour, when I first initialize the same object in the main process then cache some values on it and then initialize the same object on every process. Then the processes inherit the main process object. import time import multiprocessing as mp import os import random class SomeOtherClass: c = {} def get(self, a): if a in self.c: print('Retrieved cached value ...') return self.c[a] b = random.randint(1,999) self.c[a] = b return b class SomeProcessor(mp.Process): def __init__(self, queue): super().__init__() self.queue = queue def run(self): pid = os.getpid() soc = SomeOtherClass() val = soc.get('new') print("Value from process {0} is {1}".format(pid, val)) if __name__ == "__main__": queue = mp.Queue() for n in range(10): queue.put(n) pid = os.getpid() soc = SomeOtherClass() val = soc.get('new') print("Value from main process {0} is {1}".format(pid, val)) processes = [] for proc in range(mp.cpu_count()): p = SomeProcessor(queue) p.start() processes.append(p) for p in processes: p.join() Output here is: Value from main process 13052 is 676 Retrieved cached value ... Value from process 13054 is 676 Retrieved cached value ... Value from process 13056 is 676 Retrieved cached value ... Value from process 13057 is 676 Retrieved cached value ... Value from process 13055 is 676 | To expand on the comments and discussion: On Linux, multiprocessing defaults to the fork start method. Forking a process means child processes will share a copy-on-write version of the parent process's data. This is why the globally created objects have the same address in the subprocesses. On macOS and Windows, the default start method is spawn – no objects are shared in that case. The subprocesses will have their unique copies of the objects as soon as they write to them (and internally in CPython, in fact, when they even read them, due to the reference counter being in the object header). A variable defined as class SomeClass: container = {} is class-level, not instance-level and will be shared between all instances of SomeClass.
That is, a = SomeClass() b = SomeClass() print(a is b) # False print(a.container is b.container is SomeClass.container) # True a.container["x"] = True print("x" in b.container) # True print("x" in SomeClass.container) # True By virtue of the class's state being forked into the subprocess, the shared container also seems shared. However, writing into the container in a subprocess will not appear in the parent or sibling processes. Only certain special multiprocessing types (and certain lower-level primitives) can span process boundaries. To correctly separate that container between instances and processes, it will need to be instance-level: class SomeClass: def __init__(self): self.container = {} (However, of course, if a SomeClass is globally instantiated, and a process is forked, its state at the time of the fork will be available in subprocesses.) | 5 | 8 |
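If genuine cross-process sharing of the cache is the goal (rather than per-process copies), one standard sketch uses a multiprocessing.Manager dict; note this replaces the class-level dict, it is not a drop-in change to the code above:

import os
from multiprocessing import Manager, Process

def worker(shared):
    # writes here are visible to the parent, unlike a forked class attribute
    shared[os.getpid()] = True

if __name__ == "__main__":
    with Manager() as mgr:
        cache = mgr.dict()
        procs = [Process(target=worker, args=(cache,)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(dict(cache))  # entries written by all four children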
69,087,572 | 2021-9-7 | https://stackoverflow.com/questions/69087572/how-to-create-mongodb-time-series-collection-using-pymongo | The documentation shows how to do it with mongosh, but how do you create Time Series Collection using pymongo from within a python script? import pymongo import time from datetime import datetime client = pymongo.MongoClient() db = client['time-series-db'] col = db['time-series-col'] # ... do something here to make it 'time-series collection' ... js = { "1": "A", "2": "B", "3": "C", "4": "D", "5": "E", } # create BSON type datetime object needed for 'time-series collection' ts = time.time() js['timestamp'] = datetime.utcfromtimestamp(ts) col.insert_one(js) | You can try this: conn = pymongo.MongoClient('mongodb://localhost') db = conn.testDB db.create_collection('testColl', timeseries={ 'timeField': 'timestamp' }) # - OR - db.command('create', 'testColl', timeseries={ 'timeField': 'timestamp', 'metaField': 'data', 'granularity': 'hours' }) General Reference: Time Series Collections | 8 | 11 |
69,087,045 | 2021-9-7 | https://stackoverflow.com/questions/69087045/datetime-formatting-in-pandas-to-markdown | I have a pandas DataFrame which has a column of dtype datetime: import pandas as pd # Mock-up data df = pd.DataFrame({'year': [2015, 2016], 'month': [2, 3], 'day': [4, 5]}) df = pd.to_datetime(df) print(df) # 0 2015-02-04 # 1 2016-03-05 # dtype: datetime64[ns] I would like to use the .to_markdown() method to display this DataFrame. However, the .to_markdown() method displays the datetimes in scientific notation: print(df.to_markdown()) # | | 0 | # |---:|------------:| # | 0 | 1.42301e+18 | # | 1 | 1.45714e+18 | Is there a way to have the .to_markdown() method display these dates in a more human-readable manner? The .to_latex(), .to_csv(), and .to_string() methods already behave this way: # Other .to_ methods behave as desired, eg. print(df.to_latex()) # \begin{tabular}{ll} # \toprule # {} & 0 \\ # \midrule # 0 & 2015-02-04 \\ # 1 & 2016-03-05 \\ # \bottomrule # \end{tabular} pandas version: 1.3.2 tabulate version: 0.8.9 | Under the hood the .to_markdown() method uses the tabulate package. The floatfmt named argument can be used to control the formatting of floats, but I cannot see how this could be useful here. The best solution I can currently find is simply to format the datetime column as a column of strings before calling the .to_markdown() method: print(df.astype(str).to_markdown()) # | | 0 | # |---:|:-----------| # | 0 | 2015-02-04 | # | 1 | 2016-03-05 | | 7 | 8 |
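The string conversion also gives control over the date format; for example, via Series.dt.strftime (df here is the datetime Series from the question):

print(df.dt.strftime('%Y-%m-%d').to_markdown())
# |    | 0          |
# |---:|:-----------|
# |  0 | 2015-02-04 |
# |  1 | 2016-03-05 |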
69,083,878 | 2021-9-7 | https://stackoverflow.com/questions/69083878/fastapi-how-to-define-a-global-variable-once | I want to define a dict variable once, generated from a text file, and use it to answer API requests. This variable should always be available until the server stops. In the example below: from fastapi import FastAPI import uvicorn app = FastAPI() def init_data(path): print("init call") data = {} data[1] = "123" data[2] = "abc" return data data = init_data('path') @app.get('/') def example_method(): # data is defined return {'Data': data[1]} if __name__ == '__main__': uvicorn.run(f'example_trouble:app', host='localhost', port=8000) I will get: init call init call INFO: Started server process [9356] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit) and requests to localhost:8000 don't raise any errors How should I define a variable once, that would be accessed as a global variable to any request? Is there a common way to define it once and use it? requirements if necessary: fastapi==0.68.1 pydantic==1.8.2 starlette==0.14.2 typing-extensions==3.10.0.2 | One approach would be to use the FastAPI startup event to define the variable data once on app startup. An example similar to what you provided in your question: from fastapi import FastAPI import uvicorn app = FastAPI() data = {} @app.on_event('startup') def init_data(): print("init call") path='/an/example/path' data[1] = "123" data[2] = "abc" return data @app.get('/') def example_method(): # data is defined return {'Data': data[1]} if __name__ == '__main__': uvicorn.run(f'example_trouble:app', host='localhost', port=8000) When running the app, you'll see that the function is only executed once: INFO: Started server process [37992] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) init call | 8 | 7 |
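A variant that avoids the module-level dict stores the data on app.state, a Starlette feature that FastAPI inherits. Note that startup runs once per worker process, so under multiple uvicorn/gunicorn workers each worker gets its own copy:

from fastapi import FastAPI, Request

app = FastAPI()

@app.on_event('startup')
def load_data():
    app.state.data = {1: "123", 2: "abc"}  # runs once per worker process

@app.get('/')
def example_method(request: Request):
    return {'Data': request.app.state.data[1]}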
69,084,242 | 2021-9-7 | https://stackoverflow.com/questions/69084242/valueerror-the-truth-value-of-a-dataframe-is-ambiguous-use-a-empty-a-bool | I have a list of data frames but in a few cases, the list can also contain a string. df = pd.DataFrame({"df_column":["df_value"]}) a = ['skip',df] if "skip" in a: print("yes") The above gives output as yes because the list contains a string. But in case if the list doesn't contain a string for eg df = pd.DataFrame({"df_column":["df_value"]}) a = [df,df] if "skip" in a: print("yes") Now the above code gives an error ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). How can I handle this? | You need to avoid letting the in check compare against the DataFrames altogether; e.g. [df, 'skip'] would fail too - whether it works is just a matter of element order, since in stops as soon as it finds a match. As a fix, you can filter a down to its strings first: if "skip" in filter(lambda x: isinstance(x, str), a): print("yes") | 5 | 4 |
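An equivalent spelling that is robust to element order and never triggers DataFrame.__bool__:

found = any(isinstance(x, str) and x == "skip" for x in a)
if found:
    print("yes")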
69,073,058 | 2021-9-6 | https://stackoverflow.com/questions/69073058/mapping-aij-array-to-ai-j-matrix | I have an array a, and want to create a new matrix A for which A[i,j] = a[i+j]. Example: import numpy as np a = np.random.rand(3) A = np.zeros((2,2)); for i in range(2): for j in range(2): A[i,j] = a[i+j] Is there a way of doing this without a for loop? (with numpy) | Using stride_tricks.as_strided This would be a perfect use case for stride_tricks: from numpy.lib.stride_tricks import as_strided Set the strides as (8, 8) (i.e. (1, 1) in terms of slots). This way we essentially map the resulting array A as i, j -> k = i + j. A more detailed description would be: we map every i, j pair to a natural number k defined by the strides as k = i*s_i + j*s_j, where s_i and s_j are the strides set to 1s, i.e. k = i + j. Thus ending up with the desired result: A[i, j] = a[k] = a[i + j]. >>> a array([0.53954179, 0.51789927, 0.33982179]) >>> A = as_strided(a, shape=(2,2), strides=(8,8)) array([[0.53954179, 0.51789927], [0.51789927, 0.33982179]]) Additional considerations A more general solution would be to get the shapes as well as the strides from a's metadata. The shape of A is given by (len(a)//2+1,)*2. As noted by @Daniel F, the memory slot size does not always equal 8, this indeed depends on the dtype of your array. It would be better to define strides from a's strides instead: a.strides*2. This comes down to: >>> A = as_strided(a, shape=(len(a)//2+1,)*2, strides=a.strides*2) Using grid indexing Alternatively, you can construct a grid of coordinates (you can do so using itertools.product) then copy the appropriate values from a to A: from itertools import product i, j = np.array(list(product(range(2), range(2)))).T Then initialize A and copy: >>> A = np.zeros((2,2)) >>> A[i, j] = a[i + j] This will however double the memory usage, compared to the as_strided method. | 5 | 3 |
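Two library shortcuts give the same Hankel-structured result without manual stride arithmetic (both are standard APIs; sliding_window_view requires NumPy >= 1.20):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.linalg import hankel

a = np.random.rand(3)
A1 = sliding_window_view(a, 2)   # read-only view; A1[i, j] == a[i + j]
A2 = hankel(a[:2], a[1:])        # first column a[:2], last row a[1:]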
69,073,516 | 2021-9-6 | https://stackoverflow.com/questions/69073516/pandas-grouping-and-transform-ignoring-nan | I'm facing an issue with grouping and transforming on non-NA values in my dataframe. So my dataframe is something like this: Name Value A 1 A 2 A NaN B 3 B 7 B 9 B NaN Final output I want: Name Value Weight 1 Weight 2 A 1 0.33 0.5 A 2 0.33 0.5 A NaN 0.33 NaN B 3 0.25 0.33 B 7 0.25 0.33 B 9 0.25 0.33 B NaN 0.25 NaN I know this may sound trivial but I'm not able to get the Weight 2 working perfectly across different grouping categories of column Name. Here's how I get column Weight 1: df['Weight 1'] = df.groupby(['Name']).transform(lambda x: 1/len(x)) So far I tried following on Weight 2, but raises DivisionByZero warning. Output is incorrect. df['Weight 2'] = df.groupby(['Name']).transform(lambda x: 1/np.sum(~np.isnan(x))) Any help is appreciated. | You can use GroupBy.count to count Non-NaN values in each group. Then use pd.Series.map with pd.Series.mask mapping = (1 / df.groupby('Name')['Value'].count()).squeeze() df['Weight 2'] = df['Name'].map(mapping).mask(df['Value'].isna()) Name Value Weight 2 0 A 1.0 0.500000 1 A 2.0 0.500000 2 A NaN NaN 3 B 3.0 0.333333 4 B 7.0 0.333333 5 B 9.0 0.333333 6 B NaN NaN | 5 | 5 |
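An equivalent alternative with transform avoids the map step, since 'count' also ignores NaNs and broadcasts back to the original index:

df['Weight 2'] = 1 / df.groupby('Name')['Value'].transform('count')
df.loc[df['Value'].isna(), 'Weight 2'] = np.nan  # blank out rows whose Value is missing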
69,057,272 | 2021-9-4 | https://stackoverflow.com/questions/69057272/passing-arguments-in-dataclass-representation | I have a below NormalClass that I want to structure as a dataclass. However I was not sure how I can pass the date_str param without __init__ in the dataclass. Any thoughts? class FieldDateTime(): def __init__(self, data, d_format='%m/%d/%y %I:%M %p'): try: self.data = datetime.strptime(data, d_format) except ValueError as e: raise ValueError('Dateformat incorrect') def __call__(self): return self.data class NormalClass: def __init__(self, id, date_str): self.id: int = id self.dt: FieldDateTime = FieldDateTime(date_str) @dataclass class DataClassObj: id: int dt: FieldDateTime(date_str) How do I pass the date_str as an argument in the data class representation (DataClassObj) without the init? | Your question has to be more detailed, but I think this is what you're looking for: from __future__ import annotations from datetime import datetime from dataclasses import dataclass, InitVar, field class FieldDateTime: def __init__(self, data, d_format='%m/%d/%y %I:%M %p'): try: self.data = datetime.strptime(data, d_format) except ValueError as e: raise ValueError('Dateformat incorrect') def __repr__(self): return f"{self.__class__.__name__}({self.data})" def __call__(self): return self.data @dataclass class NormalDataClass: id: int date_str: InitVar[str] dt: FieldDateTime = field(init=False) def __post_init__(self, date_str): self.dt = FieldDateTime(date_str) print(NormalDataClass(10, '09/04/21 08:11 PM')) output (given the dataclass-like __repr__ implementation on FieldDateTime to make it look a bit better): NormalDataClass(id=10, dt=FieldDateTime(2021-09-04 20:11:00)) from doc: Init-only fields are added as parameters to the generated __init__ method, and are passed to the optional __post_init__ method. So we can use InitVar for our date_str and pass it to __post_init__ to create our FieldDateTime object. | 9 | 10 |
69,024,982 | 2021-9-2 | https://stackoverflow.com/questions/69024982/fastest-way-to-find-a-pandas-index-column-value-pair | I have a largish DataFrame with a date index ['Date'] and several columns. One column is a string identifier ['Type'], with related data in the remaining columns. I need to add newData to the DataFrame, but only if the date-type pair (i.e. index-ColumnValue pair) is not already present in the DataFrame. Checking for the existing pairing takes up ~95% of my code computing time, so I really need to find a quicker way to do it. Options already considered, with timings in increasing order of speed: existing_pair = len(compiledData[(compiledData['Type'] == newData['Type']) & (compiledData.index == newData['Date'])]) > 0 # average = 114 ms existing_pair = newData['Date'] in compiledData[compiledData['Type'] == newData['Type']].index # average = 68 ms existing_pair = compiledData[compiledData.index == newData['Type']]['Type']. isin([newData['Date']]).any() # average = 44 ms I am relatively new to Python so I am sure there are better (= faster) ways of checking for an index-colVal pair. Or, it may be that my entire data structure is wrong. Would appreciate any pointers anyone can offer. Edit: Sample of the compiledData dataframe: Type ColA ColB ColC ColD ColE ColF 2021-01-19 B 83.0 -122.15 0.0 11.0 11.000 11.0 2021-01-19 D 83.0 -1495.48 0.0 11.0 11.000 11.0 2021-03-25 D 83.0 432.00 0.0 11.0 11.000 11.0 2021-04-14 D 83.0 646.00 0.0 11.0 11.000 11.0 2021-04-16 A 20.0 11.00 0.0 30.0 11.000 11.0 2021-04-25 D 83.0 -26.82 0.0 11.0 11.000 11.0 2021-04-28 B 83.0 -651.00 0.0 11.0 11.000 11.0 | Index value lookup is faster than column value lookup. I don't know the implementation details (it looks like lookup depends on number of rows). Here is a performance comparison: def test_value_matches(df, v1, v2): # return True if v1, v2 found in df columns, else return False if not df[(df.c1 == v1) & (df.c2 == v2)].empty: return True return False def test_index_matches(df, v1, v2): # returns True if (v1, v2) found in (multi) index, else returns False if (v1, v2) in df.index: return True return False # test dependence of funcs above on num rows in df: for n in [int(j) for j in [1e4, 1e5, 1e6, 1e7]]: df = pd.DataFrame(np.random.random(size=(n, 2)), columns=["c1", "c2"]) v1, v2 = df.sample(n=1).iloc[0] %timeit test_value_matches(df, v1, v2) # create an index based on column values: df2 = df.set_index(["c1", "c2"]) %timeit test_index_matches(df2, v1, v2) output 421 µs ± 22.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 10.5 µs ± 175 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 557 µs ± 5.35 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 10.3 µs ± 143 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 3.77 ms ± 166 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 16.5 µs ± 185 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 22.4 ms ± 2.06 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 28.1 µs ± 10.2 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) Note that this ignores indexing time itself, which may be significant; this approach probably works best with repeated lookups on the same df. For n=1e7, the performance is something like your problem on my machine; the indexed version is ~1000x faster (although apparently growing with n). | 5 | 2 |
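One caveat worth adding: MultiIndex lookups are only fast when the index is lexsorted; otherwise pandas can fall back to a scan (and may emit a PerformanceWarning). Sorting once after indexing is the usual fix:

df2 = df.set_index(["c1", "c2"]).sort_index()  # lexsorted index enables binary-search lookups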
69,066,012 | 2021-9-5 | https://stackoverflow.com/questions/69066012/numpy-function-to-get-the-quantile-that-corresponds-to-a-given-value | I see a lot of questions like this one for R, but I couldn't find one specifically for Python, preferably using numpy. Let's say I have an array of observations stored in x. I can get the value that accumulates q * 100 per cent of the population. # Import numpy import numpy as np # Get 75th percentile np.quantile(a=x, q=0.75) However, I was wondering if there's a function that does the inverse. That is, a numpy function that takes a value as an input and returns q. To further expand on this, scipy distribution objects have a ppf method that allows me to do this. I'm looking for something similar in numpy. Does it exist? | Not a ready-made function but a compact and reasonably fast snippet: (a<value).mean() You can (at least on my machine) squeeze out a few percent better performance by using np.count_nonzero np.count_nonzero(a<value) / a.size but tbh I wouldn't even bother. | 15 | 22 |
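As a sanity check, the snippet can be wrapped in a small function and round-tripped through `np.quantile`. This is a sketch of my own (the name `inverse_quantile` is not from the answer), and the result only matches `q` up to ties and quantile interpolation:

```python
import numpy as np

def inverse_quantile(a, value):
    # fraction of observations strictly below `value`
    return np.count_nonzero(a < value) / a.size

rng = np.random.default_rng(0)
a = rng.normal(size=10_000)

v = np.quantile(a, 0.75)
print(inverse_quantile(a, v))  # ~0.75
```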
69,068,803 | 2021-9-6 | https://stackoverflow.com/questions/69068803/python-assert-all-elements-in-list-is-not-none | I was wondering if we could assert that all elements in a list are not None, so that e.g. a = None raises an error. The sample list is [a, b, c]. I have tried assert [a, b, c] is not None, but it passes regardless of the elements, since the list object itself is never None — it doesn't verify the individual elements. Could you help me figure it out? Thanks!! | Unless you have a weird element that claims it equals None: assert None not in [a, b, c] | 7 | 9
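To spell out the difference between the asker's attempt and the accepted answer, a small sketch (the variable values here are mine); the `all(... is not None ...)` variant is the safest, since it compares by identity rather than `==`:

```python
a, b, c = 1, "x", 3.0

# The original attempt always passes: a list object is never None,
# regardless of what it contains.
assert [a, b, c] is not None

# These actually fail if any element is None (try setting c = None):
assert None not in [a, b, c]                  # relies on == comparison
assert all(x is not None for x in [a, b, c])  # identity check, immune to odd __eq__
```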
69,059,121 | 2021-9-4 | https://stackoverflow.com/questions/69059121/how-to-draw-a-normal-curve-on-seaborn-displot | distplot was deprecated in favour of displot. The previous function had the option to draw a normal curve. import seaborn as sns import matplotlib.pyplot as plt from scipy import stats ax = sns.distplot(df.extracted, bins=40, kde=False, fit=stats.norm) the fit=stats.norm doesn't work with displot anymore. In the answer to this question, I see the approach to plot the normal later, however it is done on some random data averaged around 0. | seaborn.displot is a figure-level plot where the kind parameter specifies the approach. When kind='hist' the parameters for seaborn.histplot are available. For axes-level plots see How to add a standard normal pdf over a seaborn histogram seaborn.axisgrid.FacetGrid.map expects dataframe column names, as such, to map the pdf onto seaborn.displot, the data needs to be in a dataframe. An issue is that x_pdf is calculated for each axes: x0, x1 = p1.axes[0][0].get_xlim() If the axes are different for multiple Facets (sharex=False), then there's not a way to get xlim for each axes within .map. References: seaborn histplot and displot output doesn't match Building structured multi-plot grids Tested in python 3.8.11, pandas 1.3.2, matplotlib 3.4.2, seaborn 0.11.2 Single Facet .map can be used import pandas as pd import seaborn as sns import numpy as np import scipy # data np.random.seed(365) x1 = np.random.normal(10, 3.4, size=1000) # mean of 10 df = pd.DataFrame({'x1': x1}) # display(df.head(3)) x1 0 10.570932 1 11.779918 2 12.779077 # function for mapping the pdf def map_pdf(x, **kwargs): mu, std = scipy.stats.norm.fit(x) x0, x1 = p1.axes[0][0].get_xlim() # axes for p1 is required to determine x_pdf x_pdf = np.linspace(x0, x1, 100) y_pdf = scipy.stats.norm.pdf(x_pdf, mu, std) plt.plot(x_pdf, y_pdf, c='r') p1 = sns.displot(data=df, x='x1', kind='hist', bins=40, stat='density') p1.map(map_pdf, 'x1') Single or Multiple Facets It's easier to iterate through each axes and add the pdf # data np.random.seed(365) x1 = np.random.normal(10, 3.4, size=1000) # mean of 10 x2 = np.random.standard_normal(1000) # mean of 0 df = pd.DataFrame({'x1': x1, 'x2': x2}).melt() # create long dataframe # display(df.head(3)) variable value 0 x1 10.570932 1 x1 11.779918 2 x1 12.779077 p1 = sns.displot(data=df, x='value', col='variable', kind='hist', bins=40, stat='density', common_bins=False, common_norm=False, facet_kws={'sharey': True, 'sharex': False}) # extract and flatten the axes from the figure axes = p1.axes.ravel() # iterate through each axes for ax in axes: # extract the variable name var = ax.get_title().split(' = ')[1] # select the data for the variable data = df[df.variable.eq(var)] mu, std = scipy.stats.norm.fit(data['value']) x0, x1 = ax.get_xlim() x_pdf = np.linspace(x0, x1, 100) y_pdf = scipy.stats.norm.pdf(x_pdf, mu, std) ax.plot(x_pdf, y_pdf, c='r') | 6 | 4 |
69,064,948 | 2021-9-5 | https://stackoverflow.com/questions/69064948/how-to-import-gensim-summarize | I got gensim to work in Google Colab by following this process: !pip install gensim from gensim.summarization import summarize Then I was able to call summarize(some_text) Now I'm trying to run the same thing in VS Code: I've installed gensim: pip3 install gensim but when I run from gensim.summarization import summarize I get the error Import "gensim.summarization" could not be resolved Pylance(reportMissingImports) I've also tried from gensim.summarization.summarizer import summarize with the same error. Regardless, I haven't been able to call the function summarize(some_text) outside of Google Colab. | The summarization code was removed from Gensim 4.0. See: https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4#12-removed-gensimsummarization 12. Removed gensim.summarization Despite its general-sounding name, the module will not satisfy the majority of use cases in production and is likely to waste people's time. See this Github ticket for more motivation behind this. If you need it, you could try: installing an older gensim version (such as 3.8.3, the last official release in which it remained); or… copy the source code out to your own local module However, I expect you'd likely be disappointed by its inflexibility and how little it can do. It was only extractive summarization - choosing a few key sentences from those that already exist. That only gives impressive results when the source text was already well-written in an expository style mixing high-level overview sentences with separate detail sentences. And, its method of analyzing/ranking words was very crude & hard-to-customize – totally unconnected to the more generic/configurable/swappable approaches used elsewhere in Gensim or in other text libraries. | 9 | 11
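If pinning the old version is acceptable, the import then works in VS Code just as it did in Colab. A minimal sketch, assuming gensim 3.8.3 installs cleanly in that environment (the sample text is mine; `summarize` logs a warning on very short inputs):

```python
# Requires: pip install "gensim==3.8.3"  (last release shipping gensim.summarization)
from gensim.summarization import summarize

text = (
    "Automatic summarization shortens a text document with software. "
    "Extractive methods select a subset of existing sentences. "
    "Abstractive methods generate new sentences instead. "
    "Gensim's summarize implemented the extractive TextRank algorithm."
)
print(summarize(text, ratio=0.5))  # keep roughly half of the sentences
```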
69,062,015 | 2021-9-5 | https://stackoverflow.com/questions/69062015/fastest-method-to-update-all-list-entries-with-union-of-all-intersecting-entries | I am looking for a fast method to traverse a list of sets, and to expand each set by finding its union with any other element of the list with which it shares at least one element. For example, suppose that I have four rows of data, where each row corresponds to a set of unique elements 0, 5, 101 8, 9, 19, 21 78, 79 5, 7, 63, 64 The first and the last rows have the intersecting element 5 and so after performing my operation I want to have the unions 0, 5, 7, 63, 64, 101 8, 9, 19, 21 78, 79 0, 5, 7, 63, 64, 101 Right now, I can nearly do this with two loops: def consolidate_list(arr): """ arr (list) : A list of lists, where the inner lists correspond to sets of unique integers """ arr_out = list() for item1 in arr: item_additional = list() # a list containing all overlapping elements for item2 in arr: if len(np.intersect1d(item1, item2)) > 0: item_additional.append(np.copy(item2)) out_val = np.unique(np.hstack([np.copy(item1)] + item_additional)) # find union of all lists arr_out.append(out_val) return arr_out The issue with this approach is that it needs to be run multiple times, until the output stops changing. Since the input might be jagged (ie, different numbers of elements per set), I can't see a way to vectorize this function. | This problem is about creating disjoint sets and so I would use union-find methods. Now Python is not particularly known for being fast, but for the sake of showing the algorithm, here is an implementation of a DisjointSet class without libraries: class DisjointSet: class Element: def __init__(self): self.parent = self self.rank = 0 def __init__(self): self.elements = {} def find(self, key): el = self.elements.get(key, None) if not el: el = self.Element() self.elements[key] = el else: # Path splitting algorithm while el.parent != el: el, el.parent = el.parent, el.parent.parent return el def union(self, key=None, *otherkeys): if key is not None: root = self.find(key) for otherkey in otherkeys: el = self.find(otherkey) if el != root: # Union by rank if root.rank < el.rank: root, el = el, root el.parent = root if root.rank == el.rank: root.rank += 1 def groups(self): result = { el: [] for el in self.elements.values() if el.parent == el } for key in self.elements: result[self.find(key)].append(key) return result Here is how you could use it for this particular problem: def solve(lists): disjoint = DisjointSet() for lst in lists: disjoint.union(*lst) groups = disjoint.groups() return [lst and groups[disjoint.find(lst[0])] for lst in lists] Example call: data = [ [0, 5, 101], [8, 9, 19, 21], [], [78, 79], [5, 7, 63, 64] ] result = solve(data) The result will be: [[0, 5, 101, 7, 63, 64], [8, 9, 19, 21], [], [78, 79], [0, 5, 101, 7, 63, 64]] Note that I added an empty list in the input list, so to illustrate that this boundary case remains unaltered. NB: There are libraries out there that provide union-find/disjoint set functionality, each with a slightly different API, but I suppose that using one of those can give a better performance. | 5 | 5 |
69,064,372 | 2021-9-5 | https://stackoverflow.com/questions/69064372/check-if-the-type-of-a-variable-is-dictstr-any-in-python | I want to check if the type of a variable is: dict[str, Any]. (in python) What I have tried (unsuccessfully) is this: myvar = { 'att1' : 'some value', 'att2' : 1 } if not isinstance(myvar, dict[str, Any]): raise Exception('Input has the wrong type') I get the following error message: TypeError: isinstance() argument 2 cannot be a parameterized generic How should I do this? Thank you! | Try the below - make sure you have a dict and the keys of the dict are strings. data1 = { 'att1': 'some value', 'att2': 1 } data2 = { 'att1': 'some value', 13: 1 } def check_if_dict_with_str_keys(data): return isinstance(data, dict) and all(isinstance(x, str) for x in data.keys()) print(check_if_dict_with_str_keys(data1)) print(check_if_dict_with_str_keys(data2)) output True False | 6 | 7 |
69,050,355 | 2021-9-3 | https://stackoverflow.com/questions/69050355/get-all-possible-order-combinations-in-python | I have a list of 1s and 2s, e.g. [2, 1, 1, 1] I need to get all possible combinations: [[2, 1, 1, 1], [1, 2, 1, 1], [1, 1, 2, 1], [1, 1, 1, 2]] I tried to use itertools' product; however, it returns the same result (e.g. [2, 1, 1, 1]) multiple times, and it is inefficient when the input is bigger. Is there some built-in function for something like this? | What you are looking for is permutations: >>> import itertools >>> a = [2, 1, 1, 1] >>> list(set(itertools.permutations(a))) [(1, 1, 1, 2), (1, 1, 2, 1), (2, 1, 1, 1), (1, 2, 1, 1)] | 4 | 7
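Note that `set(permutations(a))` still generates all n! tuples before deduplicating, which is the inefficiency the asker noticed with `product`. If sympy is available, `multiset_permutations` yields each distinct ordering exactly once (a sketch, assuming sympy is installed):

```python
# Requires: pip install sympy
from sympy.utilities.iterables import multiset_permutations

a = [2, 1, 1, 1]
print(list(multiset_permutations(a)))
# [[1, 1, 1, 2], [1, 1, 2, 1], [1, 2, 1, 1], [2, 1, 1, 1]]
```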
69,048,016 | 2021-9-3 | https://stackoverflow.com/questions/69048016/make-a-list-from-multiple-list | I have three lists: list_01 = ['DOG','CAT','BEAR'] list_02 = ['V','W','X','Y','Z'] list_03 = ['A','B','C','D','E','F','G','H'] What I hope to get is a list like the following: list_04 = ['DOG','V','A','CAT','W','B','BEAR','X','C','Y','D','Z','E','F','G','H'] This list is supposed to contain one item from list 1, then one from list 2, and one from list 3. This then continues until list 1 is exhausted; list 1 should then be ignored, and the same process should happen on just lists 2 and 3, continuing until all lists are empty. | It seems like you want to do this in order, not randomly. If so, you can use zip_longest() from itertools and make a nested list comprehension: from itertools import zip_longest list_01 = ['DOG','CAT','BEAR'] list_02 = ['V','W','X','Y','Z'] list_03 = ['A','B','C','D','E','F','G','H'] list_04 = [n for group in zip_longest(list_01, list_02, list_03) for n in group if n is not None] # ['DOG', 'V', 'A', 'CAT', 'W', 'B', 'BEAR', 'X', 'C', 'Y', 'D', 'Z', 'E', 'F', 'G', 'H'] Note: zip_longest will produce None values when one list runs out. That's why we are filtering for None in the comprehension. | 6 | 6 |
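An alternative that avoids the None sentinels (and the `is not None` filter) is the `roundrobin` recipe from the itertools documentation; a sketch applied to the asker's lists:

```python
from itertools import cycle, islice

def roundrobin(*iterables):
    # itertools-docs recipe: roundrobin('ABC', 'D', 'EF') --> A D E B F C
    num_active = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while num_active:
        try:
            for next_ in nexts:
                yield next_()
        except StopIteration:
            # One iterable is exhausted; drop its slot and keep cycling.
            num_active -= 1
            nexts = cycle(islice(nexts, num_active))

list_01 = ['DOG', 'CAT', 'BEAR']
list_02 = ['V', 'W', 'X', 'Y', 'Z']
list_03 = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
print(list(roundrobin(list_01, list_02, list_03)))
# ['DOG', 'V', 'A', 'CAT', 'W', 'B', 'BEAR', 'X', 'C', 'Y', 'D', 'Z', 'E', 'F', 'G', 'H']
```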
69,045,499 | 2021-9-3 | https://stackoverflow.com/questions/69045499/how-to-get-rid-of-scientific-notation-on-bar-labels-in-matplotlib | How could I format the bar labels to remove the scientific notation? highest_enrollment = course_data.groupby( "course_organization")["course_students_enrolled"].sum().nlargest(10) ax = sns.barplot(x=highest_enrollment.index, y=highest_enrollment.values, ci=None, palette="ch: s=.5, r=-.5") ax.ticklabel_format(style='plain', axis="y") plt.xticks(rotation=90) ax.bar_label(ax.containers[0]) plt.show() | As already suggested by BigBen in the comment, you can pass fmt parameter to matplotlib.axes.Axes.bar_label; you can use %d for integers: import matplotlib.pyplot as plt import seaborn as sns import pandas as pd highest_enrollment = pd.DataFrame({'class': ['A', 'B', 'C'], 'values': [30000000, 20000000, 10000000]}) ax = sns.barplot(data = highest_enrollment, x = 'class', y = 'values', palette="ch: s=.5, r=-.5") ax.ticklabel_format(style='plain', axis="y") plt.xticks(rotation=90) ax.bar_label(ax.containers[0], fmt = '%d') plt.show() | 5 | 12 |
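For formats beyond printf-style `%d` (e.g., thousands separators), `bar_label` also accepts precomputed strings via `labels=`. A self-contained sketch mirroring the answer's example data:

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.DataFrame({'class': ['A', 'B', 'C'],
                   'values': [30_000_000, 20_000_000, 10_000_000]})
ax = sns.barplot(data=df, x='class', y='values')
ax.ticklabel_format(style='plain', axis='y')
# `labels=` takes arbitrary precomputed strings, so any Python formatting works:
ax.bar_label(ax.containers[0], labels=[f'{v:,.0f}' for v in df['values']])
plt.show()
```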
69,031,604 | 2021-9-2 | https://stackoverflow.com/questions/69031604/tensorflow-running-out-of-gpu-memory-allocator-gpu-0-bfc-ran-out-of-memory-tr | I am fairly new to TensorFlow and I am having trouble with Dataset. I work on Windows 10, and the TensorFlow version is 2.6.0 used with CUDA. I have 2 numpy arrays that are X_train and X_test (already split). The training set is 5 GB and the test set is 1.5 GB. The shapes are: X_train: (259018, 30, 30, 3), <class 'numpy.ndarray'> Y_train: (259018, 1), <class 'numpy.ndarray'> I create Datasets using the following code: dataset_train = tf.data.Dataset.from_tensor_slices((X_train , Y_train)).batch(BATCH_SIZE) And BATCH_SIZE = 32. But I cannot create a Dataset, I get the following error: 2021-09-02 15:26:35.429930: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-09-02 15:26:35.772235: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3495 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6 2021-09-02 15:26:36.414627: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 2700000000 exceeds 10% of free system memory. 2021-09-02 15:26:47.146977: W tensorflow/core/common_runtime/bfc_allocator.cc:457] Allocator (GPU_0_bfc) ran out of memory trying to allocate 607.1KiB (rounded to 621824)requested by op _EagerConst If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation. Current allocation summary follows. Current allocation summary follows. 2021-09-02 15:26:47.147299: I tensorflow/core/common_runtime/bfc_allocator.cc:1004] BFCAllocator dump for GPU_0_bfc 2021-09-02 15:26:47.147383: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (256): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.147514: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (512): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.147636: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (1024): Total Chunks: 1, Chunks in use: 1. 1.2KiB allocated for chunks. 1.2KiB in use in bin. 1.0KiB client-requested in use in bin. 2021-09-02 15:26:47.147761: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (2048): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.147905: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (4096): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148040: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (8192): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148157: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (16384): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2021-09-02 15:26:47.148276: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (32768): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148402: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (65536): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148518: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (131072): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148645: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (262144): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148786: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (524288): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.148918: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (1048576): Total Chunks: 1, Chunks in use: 1. 1.91MiB allocated for chunks. 1.91MiB in use in bin. 1.91MiB client-requested in use in bin. 2021-09-02 15:26:47.149079: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (2097152): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.149212: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (4194304): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.149342: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (8388608): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.149477: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (16777216): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.164471: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (33554432): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.164619: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (67108864): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.164765: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin. 2021-09-02 15:26:47.164884: I tensorflow/core/common_runtime/bfc_allocator.cc:1011] Bin (268435456): Total Chunks: 2, Chunks in use: 2. 3.41GiB allocated for chunks. 3.41GiB in use in bin. 3.30GiB client-requested in use in bin.
2021-09-02 15:26:47.164982: I tensorflow/core/common_runtime/bfc_allocator.cc:1027] Bin for 607.2KiB was 512.0KiB, Chunk State: 2021-09-02 15:26:47.165040: I tensorflow/core/common_runtime/bfc_allocator.cc:1040] Next region of size 3665166336 2021-09-02 15:26:47.165106: I tensorflow/core/common_runtime/bfc_allocator.cc:1060] InUse at b0e200000 of size 2700000000 next 1 2021-09-02 15:26:47.165159: I tensorflow/core/common_runtime/bfc_allocator.cc:1060] InUse at baf0ebb00 of size 1280 next 2 2021-09-02 15:26:47.165208: I tensorflow/core/common_runtime/bfc_allocator.cc:1060] InUse at baf0ec000 of size 2000128 next 3 2021-09-02 15:26:47.165250: I tensorflow/core/common_runtime/bfc_allocator.cc:1060] InUse at baf2d4500 of size 963164928 next 18446744073709551615 2021-09-02 15:26:47.165297: I tensorflow/core/common_runtime/bfc_allocator.cc:1065] Summary of in-use Chunks by size: 2021-09-02 15:26:47.165341: I tensorflow/core/common_runtime/bfc_allocator.cc:1068] 1 Chunks of size 1280 totalling 1.2KiB 2021-09-02 15:26:47.165382: I tensorflow/core/common_runtime/bfc_allocator.cc:1068] 1 Chunks of size 2000128 totalling 1.91MiB 2021-09-02 15:26:47.165426: I tensorflow/core/common_runtime/bfc_allocator.cc:1068] 1 Chunks of size 963164928 totalling 918.54MiB 2021-09-02 15:26:47.165470: I tensorflow/core/common_runtime/bfc_allocator.cc:1068] 1 Chunks of size 2700000000 totalling 2.51GiB 2021-09-02 15:26:47.165514: I tensorflow/core/common_runtime/bfc_allocator.cc:1072] Sum Total of in-use chunks: 3.41GiB 2021-09-02 15:26:47.165558: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] total_region_allocated_bytes_: 3665166336 memory_limit_: 3665166336 available bytes: 0 curr_region_allocation_bytes_: 7330332672 2021-09-02 15:26:47.165633: I tensorflow/core/common_runtime/bfc_allocator.cc:1080] Stats: Limit: 3665166336 InUse: 3665166336 MaxInUse: 3665166336 NumAllocs: 4 MaxAllocSize: 2700000000 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0 2021-09-02 15:26:47.165771: W tensorflow/core/common_runtime/bfc_allocator.cc:468] *************************************************************************************************xxx Traceback (most recent call last): File "C:/Users/headl/Documents/github projects/datascience/DL_model_deep_insight.py", line 100, in <module> dataset_train, dataset_test = prepare_tf_dataset(path_to_x_train, config.y_train_combined, File "C:/Users/headl/Documents/github projects/datascience/DL_model_deep_insight.py", line 28, in prepare_tf_dataset dataset_test = tf.data.Dataset.from_tensor_slices((X_test , Y_test)).batch(BATCH_SIZE) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 685, in from_tensor_slices return TensorSliceDataset(tensors) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3844, in __init__ element = structure.normalize_element(element) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\data\util\structure.py", line 129, in normalize_element ops.convert_to_tensor(t, name="component_%d" % i, dtype=dtype)) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped return func(*args, **kwargs) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\framework\ops.py", line 1566, in convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File
"C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function return constant_op.constant(value, dtype, name=name) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\framework\constant_op.py", line 271, in constant return _constant_impl(value, dtype, shape, name, verify_shape=False, File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\framework\constant_op.py", line 283, in _constant_impl return _constant_eager_impl(ctx, value, dtype, shape, verify_shape) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\framework\constant_op.py", line 308, in _constant_eager_impl t = convert_to_eager_tensor(value, ctx, dtype) File "C:\Users\headl\Documents\virtual_env\datascience\lib\site-packages\tensorflow\python\framework\constant_op.py", line 106, in convert_to_eager_tensor return ops.EagerTensor(value, ctx.device_name, dtype) tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized. Process finished with exit code 1 There seems to be a problem of running out of GPU memory, and indeed, when I follow this process in the Windows task manager I can see a peak in GPU usage just before the script dies. I tried to use only part of X_train. I can create a Dataset up to X_train[:240000]. When I add more rows after that, the error appears. I thought that the TensorFlow Dataset is a generator that was supposed to take care of the memory problem, along with batches? Also, reducing the batch size did not have any effect. I also tried the suggested 'TF_GPU_ALLOCATOR=cuda_malloc_async' but it didn't work either. What can I do to load all the data? Thank you very much in advance! | That's working as designed. from_tensor_slices is really only useful for small amounts of data. Dataset is designed for large datasets that need to be streamed from disk. The hard but ideal way to do this would be to write your numpy array data to TFRecords, then read them in as a dataset via TFRecordDataset. Here's the guide: https://www.tensorflow.org/tutorials/load_data/tfrecord The easier but less performant way to do this would be Dataset.from_generator. Here is a minimal example: >>> ds = tf.data.Dataset.from_generator(lambda: np.arange(100), output_signature=tf.TensorSpec(shape=(), dtype=tf.int32)) >>> for d in ds: ... print(d) ... tf.Tensor(0, shape=(), dtype=int32) tf.Tensor(1, shape=(), dtype=int32) ... | 5 | 6
69,038,533 | 2021-9-3 | https://stackoverflow.com/questions/69038533/getting-a-powerset-of-a-list-of-lists | I'm given a list of lists s: s = [["a1", "A"], ["b4", "B"], ["a3", "A"], ["d6", "D"], ["c4", "C"]] (note that the elements in a list do not necessarily begin with the same letter. I modified the data here for convenience.) My goal is to assign each list to a category by its second element, and get all possible combinations by picking at most one element in each category. I first hashed the list of lists to a dictionary: dic = {i[1]: [] for i in s} for i in s: # set the value of the first item key to the second item dic[i[1]].append(i[0]) dic >>> {'A': ['a1', 'a3'], 'B': ['b4'], 'C': ['c4'], 'D': ['d6']} The number of all possible combinations, hence the length of a powerset of s, should return 23: {'a1'}, {'a3'}, {'b4'}, {'c4'}, {'d6'}, {'a1', 'b4'}, {'a1', 'c4'}, {'a1', 'd6'}, {'a3', 'b4'}, {'a3', 'c4'}, {'a3', 'd6'}, {'b4', 'c4'}, {'b4', 'd6'}, {'c4', 'd6'}, {'a1', 'b4', 'c4'}, {'a1', 'b4', 'd6'}, {'a1', 'c4', 'd6'}, {'a3', 'b4', 'c4'}, {'a3', 'b4', 'd6'}, {'a3', 'c4', 'd6'}, {'b4', 'c4', 'd6'}, {'a1', 'b4', 'c4', 'd6'}, {'a3', 'b4', 'c4', 'd6'} I initially was going to put multiple for loops, but since I have no guarantee of how many keys I would have in my s (which would also put my time complexity at O(N^x)), I used itertools.chain and itertools.combinations as per this post: def powerset(s:list): return chain.from_iterable(combinations(s, r) for r in range(1, len(s)+1)) The problem is that this only takes elements in a single list into account, hence neglects the constraint: 'taking at most one element from each list'. Flattening a list would disregard the categories, so I've not attempted to do so. Any insights to tackle this problem would be appreciated. | @don'ttalkjustcode's answer works but unnecessarily incurs the overhead of adding dummy values, and also produces an empty set, which is not required by the question. A more direct approach would be to use itertools.combinations to pick lists from the dict of lists to pass to itertools.product to produce the desired combinations: from itertools import product, combinations print(*( set(p) for r in range(len(dic)) for c in combinations(dic.values(), r + 1) for p in product(*c) ), sep='\n') This outputs: {'a1'} {'a3'} {'b4'} {'c4'} {'d6'} {'a1', 'b4'} {'a3', 'b4'} {'a1', 'c4'} {'a3', 'c4'} {'d6', 'a1'} {'d6', 'a3'} {'c4', 'b4'} {'d6', 'b4'} {'d6', 'c4'} {'a1', 'c4', 'b4'} {'a3', 'c4', 'b4'} {'d6', 'a1', 'b4'} {'d6', 'a3', 'b4'} {'d6', 'a1', 'c4'} {'d6', 'a3', 'c4'} {'d6', 'c4', 'b4'} {'d6', 'a1', 'c4', 'b4'} {'d6', 'a3', 'c4', 'b4'} | 4 | 5
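For reference, an end-to-end version combining the question's dictionary-building step with the answer's comprehension (a sketch; only the dict construction is restated):

```python
from itertools import combinations, product

s = [["a1", "A"], ["b4", "B"], ["a3", "A"], ["d6", "D"], ["c4", "C"]]

dic = {}
for value, category in s:
    dic.setdefault(category, []).append(value)
# {'A': ['a1', 'a3'], 'B': ['b4'], 'C': ['c4'], 'D': ['d6']}

result = [
    set(p)
    for r in range(len(dic))
    for c in combinations(dic.values(), r + 1)
    for p in product(*c)
]
print(len(result))  # 23
```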
69,036,090 | 2021-9-2 | https://stackoverflow.com/questions/69036090/python-dataframe-yes-no-checker | I would like to make a table that evaluates whether a user is in a group or not. How can I get my dictionary sorted like in the example I have down below? I would like the index and columns populated automatically by the key and value. d = { 'user1': ['group1', 'group2', 'group3'], 'user2': ['group1', 'group2'], 'user3': ['group2']} df = pd.DataFrame.from_dict(d,orient='index') print(df) Current result user1 user2 user3 0 group1 group1 group2 1 group2 group2 NaN 2 group3 NaN NaN Desired result - I would like the rows based on the key and the columns based on the values. group1 group2 group3 user1 Y Y Y user2 Y Y N user3 N Y N | Try: d = { "user1": ["group1", "group2", "group3"], "user2": ["group1", "group2"], "user3": ["group2"], } df = pd.DataFrame.from_dict(d, orient="index") x = df.stack().droplevel(level=1) x = pd.crosstab(x.index, x).replace({1: "Y", 0: "N"}) x.index.name, x.columns.name = None, None print(x) Prints: group1 group2 group3 user1 Y Y Y user2 Y Y N user3 N Y N | 4 | 4 |
69,034,186 | 2021-9-2 | https://stackoverflow.com/questions/69034186/diffrence-between-np-int16-and-int16-matlab | I am converting MATLAB code to Python. In MATLAB there is a line which converts a complex number to int16: real = int16(real(-3.406578165491512e+04 + 9.054663292273188e+03i)); imag= int16(imag(-3.406578165491512e+04 + 9.054663292273188e+03i)); real= -32768 imag=9055 In Python I have tried this: real = np.int16(round(np.real(-3.406578165491512e+04 + 9.054663292273188e+03j))) imag = np.int16(round(np.imag(-3.406578165491512e+04 + 9.054663292273188e+03j))) real= 31470 imag=9055 The results are different (I have had many other values such as (1.815808483565253e+04 + 3.533772674703890e+04j) with different answers!) Would you please help me to get the same answer? | Wolfie gets at the difference, this is about how to solve it. If you're OK with clipping, then you can use iinfo to get the min and max values of an integer type (or hard-code it, if you know you won't be changing it from int16 ever) and then use clip to constrain the float to be within those bounds before casting it. n = -3.406578165491512e+04 ii = np.iinfo(np.int16) print(f"min = {ii.min}") # min = -32768 print(f"max = {ii.max}") # max = 32767 np.int16(np.clip(n, ii.min, ii.max)) # -32768 IMPORTANT NOTE: This is only reliable if the size of your float is larger than the size of the int, because it relies upon being able to represent ii.max exactly as a float. See here for a discussion of when this is not true. Here's an example of that failing n = np.float64(1e100) ii = np.iinfo(np.int64) print(f"max: {ii.max}") # max = 9223372036854775807 clipped = np.clip(n, ii.min, ii.max) print(f"clipped to: {int(clipped)}") # clipped to: 9223372036854775808 print(f"as int: {np.int64(clipped)}") # as int: -9223372036854775808 (This happens because ii.max cannot be represented as a float. Past 9007199254740992, we lose the 1's place of precision and can only specify even integers, so the bounds of the clipping become incorrect.) | 5 | 5
69,034,478 | 2021-9-2 | https://stackoverflow.com/questions/69034478/how-can-i-find-the-k-th-largest-element-in-an-exponentially-large-list | Suppose there are n sets of real numbers: S[1], S[2], ..., S[n]. We know two things about these sets: Each set S[i] has exactly 3 elements. All elements in each of the sets S[i] are real numbers in the [0, 1] range. (I don't know if this detail can be helpful for the solution, though). Let's consider a set T of all numbers that can be represented as p[1] * p[2] * p[3] * ... * p[n] where p[i] is an element of S[i]. This set T, obviously, has 3^n elements. My question is, given the sets S[1], S[2], ..., S[n] (1 <= n <= 30) and some 1 <= k <= 10 as input, can we find the k-th largest number in T faster than in O(3^n) time? It's important that I need not only the k-th largest number, but also the corresponding numbers (p[1], p[2], p[3], ... , p[n]) that produce it. Even if the answer is no, I would appreciate any hints on how you would solve this problem approximately, maybe, by using some heuristics? I know about beam search, but maybe you could suggest something else? And even for beam search, it is not really clear how to implement it here the best way. If the exact answer can be obtained algorithmically in less than O(3^n) time, I would greatly appreciate it if you could point out the solution. | Well, you know that the largest product is the one that uses the largest factor from each set. Furthermore, every other product can be formed by starting with a larger one, and then decreasing the factor chosen in exactly one set. That leads to a simple search: Put the largest product in a max-first priority queue. Repeat k times: a. Remove the largest product p from the priority queue b. For each set that has a smaller number than the one selected in p, generate the product formed by decreasing that number to the next lower one in that set. If this selection of factors hasn't been seen before, then add it to the priority queue. Products will be removed from the queue in decreasing order, so the kth one you take out is the kth largest. Complexity is about N*(k log kN), depending on how you implement things. Note that there may be multiple ways to select the factors that produce the same product. This solution considers those ways to be distinct products, i.e., each way is counted when finding the kth largest. That may or may not be what you want. | 7 | 13 |
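A minimal implementation of the search described in the answer, using heapq with negated products as the max-first priority queue (a sketch; the example sets and the function name are mine):

```python
import heapq

def kth_largest_products(sets, k):
    """Return the k largest products along with their chosen factors.

    `sets` is a list of n lists, each holding the candidates for one factor.
    Ties in value are kept as distinct factor selections, as in the answer.
    """
    s = [sorted(si, reverse=True) for si in sets]  # descending within each set
    n = len(s)

    def product_of(idx):
        p = 1.0
        for i, j in enumerate(idx):
            p *= s[i][j]
        return p

    start = tuple([0] * n)                     # largest factor from every set
    heap = [(-product_of(start), start)]       # min-heap on negated products
    seen = {start}
    out = []
    while heap and len(out) < k:
        neg_p, idx = heapq.heappop(heap)
        out.append((-neg_p, [s[i][j] for i, j in enumerate(idx)]))
        # Decrease the factor chosen in exactly one set, as the answer describes.
        for i in range(n):
            if idx[i] + 1 < len(s[i]):
                nxt = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(heap, (-product_of(nxt), nxt))
    return out

sets = [[0.9, 0.5, 0.1], [0.8, 0.4, 0.2], [0.7, 0.6, 0.3]]
for rank, (p, factors) in enumerate(kth_largest_products(sets, 5), 1):
    print(rank, round(p, 4), factors)
```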
69,031,699 | 2021-9-2 | https://stackoverflow.com/questions/69031699/calculation-on-my-for-loop-and-want-to-do-it-without-for-loop-using-some-functio | dec = 0.1 data = np.array([100,200,300,400,500]) I have a for loop like this y = np.zeros(len(data)) for i in range(len(data)): if i == 0: y[i] = (1.0 - dec) * data[i] else: y[i] = (1.0 - dec) * data[i] + (dec * y[i - 1]) Output y is: array([ 90. , 189. , 288.9 , 388.89 , 488.889]) And now I want to do the above calculation without a loop, so if I break the code and do data[0] = (1.0 - dec) * data[0] data[1:] = (1.0 - dec) * data[1:] + (dec * data[0]) Output data is: array([ 90, 189, 279, 369, 459]) When you compare y and data output first two values are correct because it is getting multiplied with data[0] which makes sense but later on it should continue as the loop does in loop code, so how can we achieve that? Is there a function that can handle this? This is mainly to optimize my code so that it runs faster for thousands of data. The expected output is the same as the y output. | We can do this with scipy.linalg.toeplitz to make a matrix of shifts of the data and then multiplying that by powers of dec and summing columns: import numpy as np from scipy.linalg import toeplitz dec = 0.1 data = np.array([100,200,300,400,500]) decs = np.power(dec, np.arange(len(data))) r = np.zeros_like(data) r[0] = data[0] toep = toeplitz(r, data) output = (1 - dec) * np.sum(toep * decs.reshape(-1, 1), axis=0) First decs is a vector of powers of dec: print(decs) #[1.e+00 1.e-01 1.e-02 1.e-03 1.e-04] Next, we use toeplitz to make a matrix of shifts of data: print(toep) #[[100 200 300 400 500] # [ 0 100 200 300 400] # [ 0 0 100 200 300] # [ 0 0 0 100 200] # [ 0 0 0 0 100]]) Finally we reshape decs into a column, multiply it by toep and sum along columns. This result needs to be scaled by 1 - dec. This all works because we can rewrite our definition of data[i] as a sum instead of recursively: y[i] = (1.0 - dec) * data[i] + (dec * y[i - 1]) y[i] = (1.0 - dec) * data[i] + (dec * ((1.0 - dec) * data[i - 1] + (dec * y[i - 2]))) ... y[i] = (1.0 - dec) * (data[i] + dec * data[i - 1] + dec ** 2 * data[i - 2] + ... dec ** i * data[0]) y[i] = (1.0 - dec) * sum(dec ** j * data[i - j] for j in range(i + 1)) This can be proven by induction. From there it follows from rewriting those sums as columns of a matrix and translating that matrix to a calculation in numpy/scipy. | 5 | 2 |
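Worth noting: the recurrence y[i] = (1 - dec) * data[i] + dec * y[i - 1] is exactly a first-order IIR filter, so scipy.signal.lfilter evaluates it in optimized C without building the Toeplitz matrix — a sketch that may be preferable for very long inputs:

```python
import numpy as np
from scipy.signal import lfilter

dec = 0.1
data = np.array([100, 200, 300, 400, 500], dtype=float)

# lfilter computes a[0]*y[n] = b[0]*x[n] - a[1]*y[n-1],
# so b = [1 - dec] and a = [1, -dec] reproduce the loop exactly.
y = lfilter([1.0 - dec], [1.0, -dec], data)
print(y)  # [ 90.     189.     288.9    388.89   488.889]
```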