question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
69,462,888 | 2021-10-6 | https://stackoverflow.com/questions/69462888/converting-spherical-coordinates-into-cartesian-and-then-converting-back-into-ca | I'm trying to write two functions for converting Cartesian coordinates to spherical coordinates and vice-versa. Here are the equations that I've used for the conversions (also could be found on this Wikipedia page): And Here is my spherical_to_cartesian function: def spherical_to_cartesian(theta, phi): x = math.cos(phi) * math.sin(theta) y = math.sin(phi) * math.sin(theta) z = math.cos(theta) return x, y, z Here is my cartesian_to_spherical function: def cartesian_to_spherical(x, y, z): theta = math.atan2(math.sqrt(x ** 2 + y ** 2), z) phi = math.atan2(y, x) if x >= 0 else math.atan2(y, x) + math.pi return theta, phi And, here is the driver code: >>> t, p = 27.500, 7.500 >>> x, y, z = spherical_to_cartesian(t, p) >>> print(f"Cartesian coordinates:\tx={x}\ty={y}\tz={z}") Cartesian coordinates: x=0.24238129061573832 y=0.6558871334524494 z=-0.7148869687796651 >>> theta, phi = cartesian_to_spherical(x, y, z) >>> print(f"Spherical coordinates:\ttheta={theta}\tphi={phi}") Spherical coordinates: theta=2.367258771281654 phi=1.2168146928204135 I can't figure out why I'm getting different values for theta and phi than my initial values (the output values aren't even close to the input values). Did I make a mistake in my code which I can't see? | You seem to be giving your angles in degrees, while all trigonometric functions expect radians. Multiply degrees with math.pi/180 to get radians, and multiply radians with 180/math.pi to get degrees. | 6 | 5 |
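The sketch below is an addition (not part of the accepted answer) showing the fix it describes: convert the degree inputs to radians with `math.radians` before calling the question's functions, and convert the results back with `math.degrees` for display. `phi` is taken straight from `atan2`, which already handles quadrant signs.

```python
import math

def spherical_to_cartesian(theta, phi):
    # theta and phi must be in radians
    x = math.cos(phi) * math.sin(theta)
    y = math.sin(phi) * math.sin(theta)
    z = math.cos(theta)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    theta = math.atan2(math.sqrt(x ** 2 + y ** 2), z)
    phi = math.atan2(y, x)  # atan2 already picks the correct quadrant
    return theta, phi

t, p = math.radians(27.500), math.radians(7.500)   # degrees -> radians
x, y, z = spherical_to_cartesian(t, p)
theta, phi = cartesian_to_spherical(x, y, z)
print(math.degrees(theta), math.degrees(phi))      # ~27.5 ~7.5, matching the inputs
```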
69,462,119 | 2021-10-6 | https://stackoverflow.com/questions/69462119/determine-the-range-of-a-value-using-a-look-up-table | I have a df with numbers: numbers = pd.DataFrame(columns=['number'], data=[ 50, 65, 75, 85, 90 ]) and a df with ranges (look up table): ranges = pd.DataFrame( columns=['range','range_min','range_max'], data=[ ['A',90,100], ['B',85,95], ['C',70,80] ] ) I want to determine what range (in second table) a value (in the first table) falls in. Please note ranges overlap, and limits are inclusive. Also please note the vanilla dataframe above has 3 ranges, however this dataframe gets generated dynamically. It could have from 2 to 7 ranges. Desired result: numbers = pd.DataFrame(columns=['number','detected_range'], data=[ [50,'out_of_range'], [65, 'out_of_range'], [75,'C'], [85,'B'], [90,'overlap'] * could be A or B * ]) I solved this with a for loop but this doesn't scale well to a big dataset I am using. Also code is too extensive and inelegant. See below: numbers['detected_range'] = nan for i, row1 in number.iterrows(): for j, row2 in ranges.iterrows(): if row1.number<row2.range_min and row1.number>row2.range_max: numbers.loc[i,'detected_range'] = row1.loc[j,'range'] else if (other cases...): ...and so on... How could I do this? | You can use a bit of numpy vectorial operations to generate masks, and use them to select your labels: import numpy as np a = numbers['number'].values # numpy array of numbers r = ranges.set_index('range') # dataframe of min/max with labels as index m1 = (a>=r['range_min'].values[:,None]).T # is number above each min m2 = (a<r['range_max'].values[:,None]).T # is number below each max m3 = (m1&m2) # combine both conditions above # NB. the two operations could be done without the intermediate variables m1/m2 m4 = m3.sum(1) # how many matches? # 0 -> out_of_range # 2 -> overlap # 1 -> get column name # now we select the label according to the conditions numbers['detected_range'] = np.select([m4==0, m4==2], # out_of_range and overlap ['out_of_range', 'overlap'], # otherwise get column name default=np.take(r.index, m3.argmax(1)) ) output: number detected_range 0 50 out_of_range 1 65 out_of_range 2 75 C 3 85 B 4 90 overlap edit: It works with any number of intervals in ranges example output with extra['D',50,51]: number detected_range 0 50 D 1 65 out_of_range 2 75 C 3 85 B 4 90 overlap | 8 | 7 |
69,396,320 | 2021-9-30 | https://stackoverflow.com/questions/69396320/could-not-install-packages-due-to-an-environmenterror-winerror-5-access-is-de | ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\Users\Sampath\anaconda3\Lib\site-packages\~5py\defs.cp38-win_amd64.pyd' Consider using the --user option or check the permissions. I tried pip install mediapipe | EnvironmentError: Access is denied errors usually stem from one of three reasons: You do not have the proper permissions to install these files, and you should try running the same commands in an Administrator Command Prompt. 90% of the time, this should solve the problem. You don't have permission to install the package system-wide. You can instead install it for only your user by adding the --user flag: pip install --user <PACKAGE> If that doesn't work, then the problem is usually from an external program accessing a file, and you (or the installation script) are trying to delete that file (you cannot delete a file that is opened by another program). Try to restart your computer, so that whatever process is using that file will be shut down. Then, try the command again. | 10 | 23 |
69,446,189 | 2021-10-5 | https://stackoverflow.com/questions/69446189/python-equivalent-for-typedef | What is the python way to define a (non-class) type like: typedef Dict[Union[int, str], Set[str]] RecordType | This would simply do it? from typing import Dict, Union, Set RecordType = Dict[Union[int, str], Set[str]] def my_func(rec: RecordType): pass my_func({1: {'2'}}) my_func({1: {2}}) This code will generate a warning from your IDE on the second call to my_func, but not on the first. As @sahasrara62 indicated, more here https://docs.python.org/3/library/stdtypes.html#types-genericalias Since Python 3.9, the preferred syntax would be: from typing import Union RecordType = dict[Union[int, str], set[str]] The built-in types can be used directly for type hints and the added imports are no longer required. Since Python 3.10, the preferred syntax is RecordType = dict[int | str, set[str]] The | operator is a simpler way to create a union of types, and the import of Union is no longer required. Since Python 3.12, the preferred syntax is type RecordType = dict[int | str, set[str]] The type keyword explicitly indicates that this is a type alias. | 27 | 31 |
69,388,833 | 2021-9-30 | https://stackoverflow.com/questions/69388833/triggering-a-function-on-creation-of-an-pydantic-object | is there a clean way of triggering a function call whenever I create/ instantiate a pydantic object? Currently I am "misusing" the root_validator for this: from pydantic import BaseModel class PydanticClass(BaseModel): name: str @root_validator() def on_create(cls, values): print("Put your logic here!") return values So on PydanticClass(name="Test") executes my logic and simply returns the same object values. This works but I have two issues, which is why I would be interested in a cleaner solution: I basically don't do a validation (return the same values). I think this function will also be executed once the object is changed, which I don't want. So I am happy to learn about any better approaches/ solutions. | Your intentions are not fully clear. But I can suggest overriding the __init__ model method. In this case, your code will be executed once at object instantiation: from pydantic import BaseModel class PydanticClass(BaseModel): name: str def __init__(self, **data) -> None: super().__init__(**data) print("Put your logic here!") | 7 | 11 |
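A short usage sketch (added here, not from the answer) confirming the two concerns raised in the question: the overridden `__init__` runs exactly once per instantiation, and a later attribute assignment does not trigger it again.

```python
from pydantic import BaseModel

class PydanticClass(BaseModel):
    name: str

    def __init__(self, **data) -> None:
        super().__init__(**data)
        print("Put your logic here!")

obj = PydanticClass(name="Test")  # prints "Put your logic here!" once
obj.name = "Changed"              # plain reassignment does not re-run __init__
```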
69,433,904 | 2021-10-4 | https://stackoverflow.com/questions/69433904/assigning-pydantic-fields-not-by-alias | How can I create a pydantic object, without useing alias names? from pydantic import BaseModel, Field class Params(BaseModel): var_name: int = Field(alias='var_alias') Params(var_alias=5) # works Params(var_name=5) # does not work | As of the pydantic 2.0 release, this behaviour has been updated to use model_config populate_by_name option which is False by default. from pydantic import BaseModel, Field, ConfigDict class Params(BaseModel): var_name: int = Field(alias='var_alias') model_config = ConfigDict( populate_by_name=True, ) Params(var_alias=5) # works Params(var_name=5) # works For pydantic 1.x, you need to use allow_population_by_field_name model config option. from pydantic import BaseModel, Field class Params(BaseModel): var_name: int = Field(alias='var_alias') class Config: allow_population_by_field_name = True Params(var_alias=5) # works Params(var_name=5) # works | 45 | 65 |
69,381,312 | 2021-9-29 | https://stackoverflow.com/questions/69381312/importerror-cannot-import-name-from-collections-using-python-3-10 | I am trying to run my program which uses various dependencies, but since upgrading to Python 3.10 this does not work anymore. When I run "python3" in the terminal and from there import my dependencies I get an error: ImportError: cannot import name 'Mapping' from 'collections' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) This seems to be a general problem, but here is the traceback of my specific case: Traceback (most recent call last): File "/Users/mk/Flasktut/app.py", line 2, in <module> from flask import Flask, render_template File "/Users/mk/Flasktut/env/lib/python3.10/site-packages/flask/__init__.py", line 14, in <module> from jinja2 import escape File "/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/__init__.py", line 33, in <module> from jinja2.environment import Environment, Template File "/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/environment.py", line 16, in <module> from jinja2.defaults import BLOCK_START_STRING, \ File "/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/defaults.py", line 32, in <module> from jinja2.tests import TESTS as DEFAULT_TESTS File "/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/tests.py", line 13, in <module> from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) | Change: from collections import Mapping to from collections.abc import Mapping | 80 | 96 |
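If the failing import is in your own code and it still has to run on very old interpreters, a small compatibility shim (an addition, not part of the answer) covers both spellings; when the import sits in a third-party package, as in the Jinja2 traceback above, upgrading that package is the practical fix.

```python
try:
    from collections.abc import Mapping  # Python 3.3+
except ImportError:                       # very old fallback
    from collections import Mapping
```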
69,435,073 | 2021-10-4 | https://stackoverflow.com/questions/69435073/what-is-the-correct-way-of-using-typing-literal | My code looks something like this, which runs fine BDW without any errors from typing import Literal def verify(word: str) -> Literal['Hello XY']: a = 'Hello ' + word return a a = verify('XY') Although, when I'm trying to do the type-checking using mypy, it throws an error error: Incompatible return value type (got "str", expected "Literal['Hello XY']") NOTE: To perform type-checking simply do mypy ./filename.py, after pip installing mypy. ALSO, When I do this, the type-checking works fine from typing import Literal def verify(word: str) -> Literal['Hello XY']: a = 'Hello ' + word return 'Hello XY' #changed here a = verify('XY') What am I missing? | word can be any string, so this seems like a good thing that mypy complains because it cannot guess that you will always call it with the appropriate argument. In other words, for mypy, if you concatenate 'Hello ' with some str, it can give any str and not only 'Hello XY'. What you could do to check if the function is called appropriately, is instead to type word with a literal: from typing import Literal, cast hello_t = Literal['Hello there', 'Hello world'] def verify(word: Literal['there', 'world']) -> hello_t: a = cast(hello_t, 'Hello ' + word) return a a = verify('there') # mypy OK a = verify('world') # mypy OK a = verify('you') # mypy error Note that a cast is still required because mypy cannot guess that the concatenation of 'Hello ' with a Literal['there', 'world'] is of type hello_t. | 12 | 14 |
69,445,500 | 2021-10-5 | https://stackoverflow.com/questions/69445500/setuptools-distribute-package-composed-of-a-single-module | I'm learning how to distribute python packages using setuptools and I have a problem. setuptools is setting the name of the folder containing a single python file as the name of my package. Below is the structure of my repository: gerador_endereco/ -- setup.py -- my_package/ -- __init__.py -- gerador_endereco.py My setup.py is: setup( name='gerador_endereco', version='1.0.4', author='Michel Metran', description='API para criação ...', url='https://github.com/open-dsa/gerador_endereco', packages=find_packages(), install_requires=requirements, ) I understand that setuptools is related to the distribution of packages, composed of several modules. But I know that it is possible to distribute a package composed of a single module, but how can I import the package correctly, without the folder name appearing? # Install !pip install gerador-endereco # Import work using "my_package" directory: bad... from my_package.gerador_endereco import * # I'd like import like this!!! from gerador_endereco import * # Run listas = get_list_ceps_bairros(estado='sp', municipio='piracicaba') The PyPi Package is in https://pypi.org/project/gerador-endereco/ | setuptools is related to the distribution of packages, period. To install a module restructure you project: gerador_endereco/ -- setup.py -- gerador_endereco.py and change setup.py; remove packages=find_packages(), and add py_modules = ['gerador_endereco'] instead. See the docs at https://docs.python.org/3/distutils/setupscript.html#listing-individual-modules and https://packaging.python.org/guides/distributing-packages-using-setuptools/?#py-modules | 4 | 7 |
69,403,103 | 2021-10-1 | https://stackoverflow.com/questions/69403103/functionally-is-torch-multinomial-the-same-as-torch-distributions-categorical-ca | For example, if I provide a probability array of [0.5, 0.5], both functions will sample the index [0,1] with equal probability? | Yes: [torch.distributions.categorical.Categorical()] is equivalent to the distribution that torch.multinomial() samples from. https://pytorch.org/docs/stable/distributions.html#categorical | 7 | 7 |
69,381,928 | 2021-9-29 | https://stackoverflow.com/questions/69381928/why-arent-augmented-assignment-expressions-allowed | I was recently reading over PEP 572 on assignment expressions and stumbled upon an interesting use case: # Compute partial sums in a list comprehension total = 0 partial_sums = [total := total + v for v in values] print("Total:", total) I began exploring the snippet on my own and soon discovered that :+= wasn't valid Python syntax. # Compute partial sums in a list comprehension total = 0 partial_sums = [total :+= v for v in values] print("Total:", total) I suspect there may be some underlying reason in how := is implemented that wisely precludes :+=, but I'm not sure what it could be. If someone wiser in the ways of Python knows why :+= is unfeasible or impractical or otherwise unimplemented, please share your understanding. | The short version: The addition of the walrus operator was incredibly controversial, and they wanted to discourage overuse, so they limited it to only those cases for which a strong motivating use case was put forward, leaving = the convenient tool for all other cases. There's a lot of things the walrus operator won't do that it could do (assigning to things looked up on a sequence or mapping, assigning to attributes, etc.), but it would encourage using it all the time, to the detriment of the readability of typical code. Sure, you could choose to write more readable code, ignoring the opportunity to use weird and terrible punctuation-filled nonsense, but if my (and many people's) experience with Perl is any guide, people will use shortcuts to get it done faster now even if the resulting code is unreadable, even by them, a month later. There are other minor hurdles that get in the way (supporting all of the augmented assignment approaches with the walrus would add a ton of new byte codes to the interpreter, expanding the eval loop's switch significantly, and potentially inhibiting optimizations/spilling from CPU cache), but fundamentally, your motivating case of using a list comprehension for side-effects is a misuse of list comprehensions (a functional construct that, like all functional programming tools, is not intended to have side-effects), as are most cases that would rely on augmented assignment expressions. The strong motivations for introducing this feature were things you could reasonably want to do and couldn't without the walrus, e.g. Regexes, replacing this terrible, verbose arrow pattern: m = re.match(r'pattern', string) if m: do_thing(m) else: m = re.match(r'anotherpattern', string) if m: do_another_thing(m) else: m = re.match(r'athirdpattern', string) if m: do_a_third_thing(m) with this clean chain of tests: if m := re.match(r'pattern', string): do_thing(m) elif m := re.match(r'anotherpattern', string): do_another_thing(m) elif m := re.match(r'athirdpattern', string): do_a_third_thing(m) Reading a file by block, replacing: while True: block = file.read(4096) if not block: break with the clean: while block := file.read(4096): Those are useful things people really need to do with some frequency, and even the "canonical" versions I posted are often misimplemented in other ways (e.g. applying the regex test twice to avoid the arrow pattern, duplicating the block = file.read(4096) once before loop and once at the end so you can run while block:, but in exchange now continue doesn't work properly, and you risk the size of the block changing in one place but not another); the walrus operator allowed for better code. 
The listcomp accumulator isn't (much) better code. itertools.accumulate exists, and even if it didn't, the problem can be worked around in other ways with simple generator functions or hand-rolled loops. The PEP does describe it as a benefit (it's why they allowed walrus assignments to "escape" comprehensions), but the discussion relating to this scoping special case was even more divided than the discussion over adding the walrus itself; you can do it, and it's arguably useful, but it's not something where you look at the two options and immediately say "Man, if not for the walrus, this would be awful". And just to satisfy the folks who want evidence of this, note that they explicitly blocked some use cases that would have worked for free (it took extra work to make the grammar prohibit these usages), specifically to prevent walrus overuse. For example, you can't do: x := 1 on a line by itself. There's no technical reason you can't, at all, but they intentionally made it a syntax error for the walrus to be used without being wrapped in something. (x := 1) works at top level, but it's annoying enough no one will ever choose it over x = 1, and that was the goal. Until someone comes up with a common code pattern for which the lack of :+= makes it infeasibly/unnecessarily ugly (and it would have to be really common and really ugly to justify the punctuation filled monstrosity that is :+=), they won't consider it. | 11 | 8 |
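For comparison (an addition, not part of the answer), the PEP's partial-sums example needs no walrus at all when written with `itertools.accumulate`:

```python
from itertools import accumulate

values = [1, 2, 3, 4]
partial_sums = list(accumulate(values))          # [1, 3, 6, 10]
total = partial_sums[-1] if partial_sums else 0  # running total without a walrus
print("Total:", total)                           # Total: 10
```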
69,396,816 | 2021-9-30 | https://stackoverflow.com/questions/69396816/django-merge-queryset-while-keeping-the-order | i'm trying to join together 2 QuerySets. Right now, I'm using the | operator, but doing it this way won't function as an "append". My current code is: df = RegForm((querysetA.all() | querysetB.all()).distinct()) I need the elements from querysetA to be before querysetB. Is it even possible to accomplish while keeping them just queries? | This can be solved by using annotate to add a custom field for ordering on the querysets, and use that in a union like this: from django.db.models import Value a = querysetA.annotate(custom_order=Value(1)) b = querysetB.annotate(custom_order=Value(2)) a.union(b).order_by('custom_order') Prior to django-3.2, you need to specify the output_field for Value: from django.db.models import IntegerField a = querysetA.annotate(custom_order=Value(1, IntegerField())) b = querysetB.annotate(custom_order=Value(2, IntegerField())) | 5 | 8 |
69,371,882 | 2021-9-29 | https://stackoverflow.com/questions/69371882/sending-entire-ethereum-address-balance-in-post-eip-1559-world | I'm trying to figure out how to send an entire address balance in a post EIP-1559 transaction (essentially emptying the wallet). Before the London fork, I could get the actual value as Total balance - (gasPrice * gas), but now it's impossible to know the exact remaining balance after the transaction fees because the base fee is not known beforehand. Is there an algorithm that would get me as close to the actual balance without going over? My end goal is to minimize the remaining Ether balance, which is essentially going to be wasted. | This can be done by setting the 'Max Fee' and the 'Max Priority Fee' to the same value. This will then use a deterministic amount of gas. Just be sure to set it high enough - comfortably well over and above the estimated 'Base Fee' to ensure it does not get stuck. | 5 | 2 |
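A hedged web3.py sketch of this approach (an addition, not part of the answer): the transaction field names follow web3.py's EIP-1559 conventions, the 21,000 gas limit assumes a plain ETH transfer, and `w3`, `sender`, and `recipient` are assumed to be set up elsewhere.

```python
max_fee = 50 * 10**9            # 50 gwei in wei; pick a value comfortably above the base fee
priority_fee = max_fee          # same value as max fee -> total fee is deterministic
gas_limit = 21_000              # plain ETH transfer

balance = w3.eth.get_balance(sender)
value = balance - gas_limit * max_fee   # exact remainder after the known fee

tx = {
    'type': 2,                              # EIP-1559 (dynamic fee) transaction
    'to': recipient,
    'value': value,
    'gas': gas_limit,
    'maxFeePerGas': max_fee,
    'maxPriorityFeePerGas': priority_fee,
    'nonce': w3.eth.get_transaction_count(sender),
}
# sign and send tx as usual; the account is left with a zero balance
```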
69,422,116 | 2021-10-3 | https://stackoverflow.com/questions/69422116/what-is-the-best-way-to-get-accurate-text-similarity-in-python-for-comparing-sin | I've got similar product data in both the products_a array and products_b array: products_a = [{color: "White", size: "2' 3\""}, {color: "Blue", size: "5' 8\""} ] products_b = [{color: "Black", size: "2' 3\""}, {color: "Sky blue", size: "5' 8\""} ] I would like to be able to accurately tell similarity between the colors in the two arrays, with a score between 0 and 1. For example, comparing "Blue" against "Sky blue" should be scored near 1.00 (probably like 0.78 or similar). Spacy Similarity I tried using spacy to solve this: import spacy nlp = spacy.load('en_core_web_sm') def similarityscore(text1, text2 ): doc1 = nlp( text1 ) doc2 = nlp( text2 ) similarity = doc1.similarity( doc2 ) return similarity Yeah, well when passing in "Blue" against "Sky blue" it scores it as 0.6545742918773636. Ok, but what happens when passing in "White" against "Black"? The score is 0.8176945362451089... as in spacy is saying "White" against "Black" is ~81% similar! This is a failure when trying to make sure product colors are not similar. Jaccard Similarity I tried Jaccard Similarity on "White" against "Black" using this and got a score of 0.0 (maybe overkill on single words but room for future larger corpuses): # remove punctuation and lowercase all words function def simplify_text(text): for punctuation in ['.', ',', '!', '?', '"']: text = text.replace(punctuation, '') return text.lower() # Jaccard function def jaccardSimilarity(text_a, text_b ): word_set_a, word_set_b = [set(self.simplify_text(text).split()) for text in [text_a, text_b]] num_shared = len(word_set_a & word_set_b) num_total = len(word_set_a | word_set_b) jaccard = num_shared / num_total return jaccard Getting differing scores of 0.0 and 0.8176945362451089 on "White" against "Black" is not acceptable to me. I keep seeking a more accurate way of solving this issue. Even taking the mean of the two would be not accurate. Please let me know if you have any better ways. | NLP packages may be better at longer text fragments and more sophisticated text analysis. As you've discovered with 'black' and 'white', they make assumptions about similarity that are not right in the context of a simple list of products. Instead you can see this not as an NLP problem, but as a data transformation problem. This is how I would tackle it. To get the unique list of colors in both lists use set operations on the colors found in the two product lists. "set comprehensions" get a unique set of colors from each product list, then a union() on the two sets gets the unique colors from both product lists, with no duplicates. (Not really needed for 4 products, but very useful for 400, or 4000.) products_a = [{'color': "White", 'size': "2' 3\""}, {'color': "Blue", 'size': "5' 8\""} ] products_b = [{'color': "Black", 'size': "2' 3\""}, {'color': "Sky blue", 'size': "5' 8\""} ] products_a_colors = {product['color'].lower() for product in products_a} products_b_colors = {product['color'].lower() for product in products_b} unique_colors = products_a_colors.union(products_b_colors) print(unique_colors) The colors are lowercased because in Python 'Blue' != 'blue' and both spellings are found in your product lists. The above code finds these unique colors: {'black', 'white', 'sky blue', 'blue'} The next step is to build an empty color map. 
colormap = {color: '' for color in unique_colors} import pprint pp = pprint.PrettyPrinter(indent=4, width=10, sort_dicts=True) pp.pprint(colormap) Result: { 'sky blue': '', 'white': '', 'black': '', 'blue': '' } Paste the empty map into your code and fill out mappings for your complex colors like 'Sky blue'. Delete simple colors like 'white', 'black' and 'blue'. You'll see why below. Here's an example, assuming a slightly bigger range of products with more complex or unusual colors: colormap = { 'sky blue': 'blue', 'dark blue': 'blue', 'bright red': 'red', 'dark red': 'red', 'burgundy': 'red' } This function helps you to group together colors that are similar based on your color map. Function color() maps complex colors onto base colors and drops everything into lower case to allow 'Blue' to be considered the same as 'blue'. (NOTE: the colormap dictionary should only use lowercase in its keys.) def color(product_color): return colormap.get(product_color.lower(), product_color).lower() Examples: >>> color('Burgundy') 'red' >>> color('Sky blue') 'blue' >>> color('Blue') 'blue' If a color doesn't have a key in the colormap, it passes through unchanged, except that it is converted to lowercase: >>> color('Red') 'red' >>> color('Turquoise') 'turquoise' This is the scoring part. The product function from the standard library is used to pair items from product_a with items from product_b. Each pair is numbered using enumerate() because, as will become clear later, a score for a pair is of the form (pair_id, score). This way each pair can have more than one score. 'cartesian product' is just a mathematical name for what itertools.product() does. I've renamed it to avoid confusion with product_a and product_b. itertools.product() returns all possible pairs between two lists. from itertools import product as cartesian_product product_pairs = { pair_id: product_pair for pair_id, product_pair in enumerate(cartesian_product(products_a, products_b)) } print(product_pairs) Result: {0: ({'color': 'White', 'size': '2\' 3"'}, {'color': 'Black', 'size': '2\' 3"'}), 1: ({'color': 'White', 'size': '2\' 3"'}, {'color': 'Sky blue', 'size': '5\' 8"'}), 2: ({'color': 'Blue', 'size': '5\' 8"'}, {'color': 'Black', 'size': '2\' 3"'}), 3: ({'color': 'Blue', 'size': '5\' 8"'}, {'color': 'Sky blue', 'size': '5\' 8"'}) } The list will be much longer if you have 100s of products. Then here's how you might compile color scores: color_scores = [(pair_id, 0.8) for pair_id, (product_a, product_b) in product_pairs.items() if color(product_a['color']) == color(product_b['color'])] print(color_scores) In the example data, one product pair matches via the color() function: pair number 3, with the 'Blue' product in product_a and the 'Sky blue' item in product_b. As the color() function evaluates both 'Sky blue' and 'blue' to the value 'blue', this pair is awarded a score, 0.8: [(3, 0.8)] "deep unpacking" is used to extract product details and the "pair id" of the current product pair, and put them in local variables for processing or display. There's a nice tutorial article about "deep unpacking" here. The above is a blueprint for other rules. For example, you could write a rule based on size, and give that a different score, say, 0.5: size_scores = [(pair_id, 0.5) for pair_id, (product_a, product_b) in product_pairs.items() if product_a['size'] == product_b['size']] print(size_scores) and here are the resulting scores based on the 'size' attribute. 
[(0, 0.5), (3, 0.5)] This means pair 0 scores 0.5 and pair 3 scores 0.5 because their sizes match exactly. To get the total score for a product pair you might average the color and size scores: print() print("Totals") score_sources = [color_scores, size_scores] # add more scores to this list all_scores = sorted(itertools.chain(*score_sources)) pair_scores = itertools.groupby(all_scores, lambda x: x[0]) for pair_id, pairs in pair_scores: scores = [score for _, score in pairs] average = sum(scores) / len(scores) print(f"Pair {pair_id}: score {average}") for n, product in enumerate(product_pairs[pair_id]): print(f" --> Item {n+1}: {product}") Results: Totals Pair 0: score 0.5 --> Item 1: {'color': 'White', 'size': '2\' 3"'} --> Item 2: {'color': 'Black', 'size': '2\' 3"'} Pair 3: score 0.65 --> Item 1: {'color': 'Blue', 'size': '5\' 8"'} --> Item 2: {'color': 'Sky blue', 'size': '5\' 8"'} Pair 3, which matches colors and sizes, has the highest score and pair 0, which matches on size only, scores lower. The other two pairs have no score. | 7 | 2 |
69,437,526 | 2021-10-4 | https://stackoverflow.com/questions/69437526/what-is-this-odd-sorting-algorithm | Some answer originally had this sorting algorithm: for i from 0 to n-1: for j from 0 to n-1: if A[j] > A[i]: swap A[i] and A[j] Note that both i and j go the full range and thus j can be both larger and smaller than i, so it can make pairs both correct and wrong order (and it actually does do both!). I thought that's a mistake (and the author later called it that) and that this would jumble the array, but it does appear to sort correctly. It's not obvious why, though. But the code simplicity (going full ranges, and no +1 as in bubble sort) makes it interesting. Is it correct? If so, why does it work? And does it have a name? Python implementation with testing: from random import shuffle for _ in range(3): n = 20 A = list(range(n)) shuffle(A) print('before:', A) for i in range(n): for j in range(n): if A[j] > A[i]: A[i], A[j] = A[j], A[i] print('after: ', A, '\n') Sample output (Try it online!): before: [9, 14, 8, 12, 16, 19, 2, 1, 10, 11, 18, 4, 15, 3, 6, 17, 7, 0, 5, 13] after: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] before: [5, 1, 18, 10, 19, 14, 17, 7, 12, 16, 2, 0, 6, 8, 9, 11, 4, 3, 15, 13] after: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] before: [11, 15, 7, 14, 0, 2, 9, 4, 13, 17, 8, 10, 1, 12, 6, 16, 18, 3, 5, 19] after: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] Edit: Someone pointed out a very nice brand new paper about this algorithm. Just to clarify: We're unrelated, it's a coincidence. As far as I can tell it was submitted to arXiv before that answer that sparked my question and published by arXiv after my question. | To prove that it's correct, you have to find some sort of invariant. Something that's true during every pass of the loop. Looking at it, after the very first pass of the inner loop, the largest element of the list will actually be in the first position. Now in the second pass of the inner loop, i = 1, and the very first comparison is between i = 1 and j = 0. So, the largest element was in position 0, and after this comparison, it will be swapped to position 1. In general, then it's not hard to see that after each step of the outer loop, the largest element will have moved one to the right. So after the full steps, we know at least the largest element will be in the correct position. What about all the rest? Let's say the second-largest element sits at position i of the current loop. We know that the largest element sits at position i-1 as per the previous discussion. Counter j starts at 0. So now we're looking for the first A[j] such that it's A[j] > A[i]. Well, the A[i] is the second largest element, so the first time that happens is when j = i-1, at the first largest element. Thus, they're adjacent and get swapped, and are now in the "right" order. Now A[i] again points to the largest element, and hence for the rest of the inner loop no more swaps are performed. So we can say: Once the outer loop index has moved past the location of the second largest element, the second and first largest elements will be in the right order. They will now slide up together, in every iteration of the outer loop, so we know that at the end of the algorithm both the first and second-largest elements will be in the right position. What about the third-largest element? 
Well, we can use the same logic again: Once the outer loop counter i is at the position of the third-largest element, it'll be swapped such that it'll be just below the second largest element (if we have found that one already!) or otherwise just below the first largest element. Ah. And here we now have our invariant: After k iterations of the outer loop, the k-length sequence of elements, ending at position k-1, will be in sorted order: After the 1st iteration, the 1-length sequence, at position 0, will be in the correct order. That's trivial. After the 2nd iteration, we know the largest element is at position 1, so obviously the sequence A[0], A[1] is in the correct order. Now let's assume we're at step k, so all the elements up to position k-1 will be in order. Now i = k and we iterate over j. What this does is basically find the position at which the new element needs to be slotted into the existing sorted sequence so that it'll be properly sorted. Once that happens, the rest of the elements "bubble one up" until now the largest element sits at position i = k and no further swaps happen. Thus finally at the end of step N, all the elements up to position N-1 are in the correct order, QED. | 102 | 50 |
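To make the invariant concrete, here is an instrumented version of the question's loop (an addition, not part of the answer) that asserts the prefix ending at position i is sorted after each pass of the outer loop:

```python
from random import shuffle

n = 20
A = list(range(n))
shuffle(A)

for i in range(n):
    for j in range(n):
        if A[j] > A[i]:
            A[i], A[j] = A[j], A[i]
    # invariant from the answer: after pass i, A[0..i] is in sorted order
    assert A[:i + 1] == sorted(A[:i + 1])

assert A == sorted(A)
print(A)
```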
69,390,411 | 2021-9-30 | https://stackoverflow.com/questions/69390411/attributeerror-module-html5lib-treebuilders-etree-has-no-attribute-getetreem | Suggestions please, thanks :) pip list --outdated --format=freeze Gives the following error: ERROR: Exception: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 223, in _main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/_internal/commands/list.py", line 175, in run packages = self.get_outdated(packages, options) File "/usr/lib/python3/dist-packages/pip/_internal/commands/list.py", line 184, in get_outdated return [ File "/usr/lib/python3/dist-packages/pip/_internal/commands/list.py", line 184, in <listcomp> return [ File "/usr/lib/python3/dist-packages/pip/_internal/commands/list.py", line 237, in iter_packages_latest_infos for dist in map_multithread(latest_info, packages): File "/usr/lib/python3.9/multiprocessing/pool.py", line 870, in next raise value File "/usr/lib/python3.9/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3/dist-packages/pip/_internal/commands/list.py", line 214, in latest_info all_candidates = finder.find_all_candidates(dist.key) File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 825, in find_all_candidates package_links = self.process_project_url( File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 793, in process_project_url page_links = list(parse_links(html_page)) File "/usr/lib/python3/dist-packages/pip/_internal/index/collector.py", line 324, in wrapper_wrapper return list(fn(page)) File "/usr/lib/python3/dist-packages/pip/_internal/index/collector.py", line 335, in parse_links document = html5lib.parse( File "/usr/share/python-wheels/html5lib-1.1-py2.py3-none-any.whl/html5lib/html5parser.py", line 44, in parse tb = treebuilders.getTreeBuilder(treebuilder) File "/usr/share/python-wheels/html5lib-1.1-py2.py3-none-any.whl/html5lib/treebuilders/__init__.py", line 85, in getTreeBuilder return etree.getETreeModule(implementation, **kwargs).TreeBuilder AttributeError: module 'html5lib.treebuilders.etree' has no attribute 'getETreeModule' | I solved this problem updating pip, i updated from pip 20.3.4 to 21.3 so just type: pip install pip -U Seems like there is some bug in pip itself. | 8 | 10 |
69,418,576 | 2021-10-2 | https://stackoverflow.com/questions/69418576/sqlalchemy-adding-a-foreignkeyconstraint-to-a-many-to-many-table-that-is-based | Forgive me if this has been answered elsewhere. I've been searching SO and haven't been able to translate the seemingly relevant Q&As to my scenerio. I'm working on a fun personal project where I have 4 main schemas (barring relationships for now): Persona (name, bio) Episode (title, plot) Clip (url, timestamp) Image (url) Restrictions (Basis of Relationships): A Persona can show up in multiple episodes, as well as multiple clips and images from those episodes (but might not be in all clips/images related to an episode). An Episode can contain multiple personas, clips, and images. An Image/Clip can only be related to a single Episode, but can be related to multiple personas. If a Persona is already assigned to episode(s), then any clip/image assigned to the persona can only be from one of those episodes or (if new) must only be capable of having one of the episodes that the persona appeared in associated to the clip/image. If an Episode is already assigned persona(s), then any clip/image assigned to the episode must be related to aleast one of those personas or (if new) must only be capable of having one or more of the personas from the episode associated to the clip/image. I've designed the database structure like so: This generates the following sql: DROP TABLE IF EXISTS episodes; DROP TABLE IF EXISTS personas; DROP TABLE IF EXISTS personas_episodes; DROP TABLE IF EXISTS clips; DROP TABLE IF EXISTS personas_clips; DROP TABLE IF EXISTS images; DROP TABLE IF EXISTS personas_images; CREATE TABLE episodes ( id INT NOT NULL PRIMARY KEY, title VARCHAR(120) NOT NULL UNIQUE, plot TEXT, tmdb_id VARCHAR(10) NOT NULL, tvdb_id VARCHAR(10) NOT NULL, imdb_id VARCHAR(10) NOT NULL); CREATE TABLE personas ( id INT NOT NULL PRIMARY KEY, name VARCHAR(30) NOT NULL, bio TEXT NOT NULL); CREATE TABLE personas_episodes ( persona_id INT NOT NULL, episode_id INT NOT NULL, PRIMARY KEY (persona_id,episode_id), FOREIGN KEY(persona_id) REFERENCES personas(id), FOREIGN KEY(episode_id) REFERENCES episodes(id)); CREATE TABLE clips ( id INT NOT NULL PRIMARY KEY, title VARCHAR(100) NOT NULL, timestamp VARCHAR(7) NOT NULL, link VARCHAR(100) NOT NULL, episode_id INT NOT NULL, FOREIGN KEY(episode_id) REFERENCES episodes(id)); CREATE TABLE personas_clips ( clip_id INT NOT NULL, persona_id INT NOT NULL, PRIMARY KEY (clip_id,persona_id), FOREIGN KEY(clip_id) REFERENCES clips(id), FOREIGN KEY(persona_id) REFERENCES personas(id)); CREATE TABLE images ( id INT NOT NULL PRIMARY KEY, link VARCHAR(120) NOT NULL UNIQUE, path VARCHAR(120) NOT NULL UNIQUE, episode_id INT NOT NULL, FOREIGN KEY(episode_id) REFERENCES episodes(id)); CREATE TABLE personas_images ( persona_id INT NOT NULL, image_id INT NOT NULL, PRIMARY KEY (persona_id,image_id), FOREIGN KEY(persona_id) REFERENCES personas(id), FOREIGN KEY(image_id) REFERENCES images(id)); And I've attempted to create the same schema in SQLAchemy models (keeping in mind SQLite for testing, PostgreSQL for production) like so: # db is a configured Flask-SQLAlchemy instance from app import db # Alias common SQLAlchemy names Column = db.Column relationship = db.relationship class PkModel(Model): """Base model class that adds a 'primary key' column named ``id``.""" __abstract__ = True id = Column(db.Integer, primary_key=True) def reference_col( tablename, nullable=False, pk_name="id", foreign_key_kwargs=None, column_kwargs=None ): """Column that 
adds primary key foreign key reference. Usage: :: category_id = reference_col('category') category = relationship('Category', backref='categories') """ foreign_key_kwargs = foreign_key_kwargs or {} column_kwargs = column_kwargs or {} return Column( db.ForeignKey(f"{tablename}.{pk_name}", **foreign_key_kwargs), nullable=nullable, **column_kwargs, ) personas_episodes = db.Table( "personas_episodes", db.Column("persona_id", db.ForeignKey("personas.id"), primary_key=True), db.Column("episode_id", db.ForeignKey("episodes.id"), primary_key=True), ) personas_clips = db.Table( "personas_clips", db.Column("persona_id", db.ForeignKey("personas.id"), primary_key=True), db.Column("clip_id", db.ForeignKey("clips.id"), primary_key=True), ) personas_images = db.Table( "personas_images", db.Column("persona_id", db.ForeignKey("personas.id"), primary_key=True), db.Column("image_id", db.ForeignKey("images.id"), primary_key=True), ) class Persona(PkModel): """One of Roger's personas.""" __tablename__ = "personas" name = Column(db.String(80), unique=True, nullable=False) bio = Column(db.Text) # relationships episodes = relationship("Episode", secondary=personas_episodes, back_populates="personas") clips = relationship("Clip", secondary=personas_clips, back_populates="personas") images = relationship("Image", secondary=personas_images, back_populates="personas") def __repr__(self): """Represent instance as a unique string.""" return f"<Persona({self.name!r})>" class Image(PkModel): """An image of one of Roger's personas from an episode of American Dad.""" __tablename__ = "images" link = Column(db.String(120), unique=True) path = Column(db.String(120), unique=True) episode_id = reference_col("episodes") # relationships personas = relationship("Persona", secondary=personas_images, back_populates="images") class Episode(PkModel): """An episode of American Dad.""" # FIXME: We can add Clips and Images linked to Personas that are not assigned to this episode __tablename__ = "episodes" title = Column(db.String(120), unique=True, nullable=False) plot = Column(db.Text) tmdb_id = Column(db.String(10)) tvdb_id = Column(db.String(10)) imdb_id = Column(db.String(10)) # relationships personas = relationship("Persona", secondary=personas_episodes, back_populates="episodes") images = relationship("Image", backref="episode") clips = relationship("Clip", backref="episode") def __repr__(self): """Represent instance as a unique string.""" return f"<Episode({self.title!r})>" class Clip(PkModel): """A clip from an episode of American Dad that contains one or more of Roger's personas.""" __tablename__ = "clips" title = Column(db.String(80), unique=True, nullable=False) timestamp = Column(db.String(7), nullable=True) # 00M:00S link = Column(db.String(7), nullable=True) episode_id = reference_col("episodes") # relationships personas = relationship("Persona", secondary=personas_clips, back_populates="clips") However, notice the FIXME comment. I'm having trouble figuring out how to constrain the many-to-many relationships on personas+images, personas+clips, and personas+episodes in a way that they all look at each other before adding a new entry to restrict the possible additions to the subset of items that meet the criteria of those other many-to-many relationships. Can someone please provide a solution to ensure the many-to-many relationships respect the episode_id relationship in the parent tables? 
Edit to add pseudo model example of expected behavior # omitting some detail fields for brevity e1 = Episode(title="Some Episode") e2 = Episode(title="Another Episode") p1 = Persona(name="Raider Dave", episodes=[e1]) p2 = Persona(name="Ricky Spanish", episodes=[e2]) c1 = Clip(title="A clip", episode=e1, personas=[p2]) # should fail i1 = Image(title="An image", episode=e2, personas=[p1]) # should fail c2 = Clip(title="Another clip", episode=e1, personas=[p1]) # should succeed i2 = Image(title="Another image", episode=e2, personas=[p2]) # should succeed | Add: a non-nullable column episode_id, a composite foreign key referencing personas_episode, and a trigger to autofill episode_id. The non-nullable column and the composite foreign key are sufficient to produce the correct constraints on a database-level as well as ensure that only proper data can be added outside of the SQLAlchemy models. The trigger is proposed as a result of the lack of support within SQLAlchemy models to intercept before_insert event for Table referenced in relationship.secondary. Implementation SQLite doesn't support modifying NEW.episode_id in a BEFORE INSERT trigger, which means we have to autofill in an AFTER INSERT trigger. So, we allow the column to be nullable and add 2 more triggers to check episode_id constraint later. episode_id_nullable = db.engine.dialect.name == "sqlite" # Add this personas_clips = db.Table( "personas_clips", db.Column("persona_id", db.ForeignKey("personas.id"), primary_key=True), db.Column("episode_id", db.Integer, nullable=episode_id_nullable), # Add this db.Column("clip_id", db.ForeignKey("clips.id"), primary_key=True), db.ForeignKeyConstraint(["persona_id", "episode_id"], ["personas_episodes.persona_id", "personas_episodes.episode_id"]), # Add this ) personas_images = db.Table( "personas_images", db.Column("persona_id", db.ForeignKey("personas.id"), primary_key=True), db.Column("episode_id", db.Integer, nullable=episode_id_nullable), # Add this db.Column("image_id", db.ForeignKey("images.id"), primary_key=True), db.ForeignKeyConstraint(["persona_id", "episode_id"], ["personas_episodes.persona_id", "personas_episodes.episode_id"]), # Add this ) SQLite triggers: Before insert, check that the clip_id/image_id references a clip/image in an episode where persona is in (based on persona_episodes). Before update, check that the episode_id is not set to NULL. After insert, autofill the episode_id. 
SQLITE_CHECK_EPISODE_ID_BEFORE_INSERT = """ CREATE TRIGGER {table_name}_check_episode_id_before_insert BEFORE INSERT ON {table_name} FOR EACH ROW WHEN NEW.episode_id IS NULL BEGIN SELECT RAISE(ABORT, 'NOT NULL constraint failed: {table_name}.episode_id') WHERE NOT EXISTS ( SELECT 1 FROM {fk_target_table_name} JOIN personas_episodes ON {fk_target_table_name}.episode_id = personas_episodes.episode_id WHERE {fk_target_table_name}.{fk_target_name} = NEW.{fk_name} AND personas_episodes.persona_id = NEW.persona_id ); END; """ SQLITE_CHECK_EPISODE_ID_BEFORE_UPDATE = """ CREATE TRIGGER {table_name}_check_episode_id_before_update BEFORE UPDATE ON {table_name} FOR EACH ROW WHEN NEW.episode_id IS NULL BEGIN SELECT RAISE(ABORT, 'NOT NULL constraint failed: {table_name}.episode_id'); END; """ SQLITE_AUTOFILL_EPISODE_ID = """ CREATE TRIGGER {table_name}_autofill_episode_id AFTER INSERT ON {table_name} FOR EACH ROW WHEN NEW.episode_id IS NULL BEGIN UPDATE {table_name} SET episode_id = (SELECT {fk_target_table_name}.episode_id FROM {fk_target_table_name} JOIN personas_episodes ON {fk_target_table_name}.episode_id = personas_episodes.episode_id WHERE {fk_target_table_name}.{fk_target_name} = NEW.{fk_name} AND personas_episodes.persona_id = NEW.persona_id) WHERE {fk_name} = NEW.{fk_name} AND persona_id = NEW.persona_id; END; """ PostgreSQL trigger: Before insert, autofill the episode_id. POSTGRESQL_AUTOFILL_EPISODE_ID = """ CREATE OR REPLACE FUNCTION {table_name}_autofill_episode_id() RETURNS TRIGGER AS ${table_name}_autofill_episode_id$ DECLARE _episode_id INT; in_episode BOOL; BEGIN IF NEW.episode_id IS NULL THEN SELECT episode_id INTO _episode_id FROM {fk_target_table_name} WHERE {fk_target_name} = NEW.{fk_name}; SELECT TRUE INTO in_episode FROM personas_episodes WHERE persona_id = NEW.persona_id AND episode_id = _episode_id; IF in_episode IS NOT NULL THEN NEW.episode_id = _episode_id; END IF; END IF; RETURN NEW; END; ${table_name}_autofill_episode_id$ LANGUAGE plpgsql; CREATE TRIGGER {table_name}_autofill_episode_id BEFORE INSERT OR UPDATE ON {table_name} FOR EACH ROW EXECUTE PROCEDURE {table_name}_autofill_episode_id(); """ Adding the triggers after_create the tables personas_clips and personas_images: from sqlalchemy import event, text def after_create_trigger_autofill_episode_id(target, connection, **kw): fk = next(fk for fk in target.foreign_keys if "personas" not in fk.column.table.name) if connection.dialect.name == "sqlite": connection.execute(text(SQLITE_CHECK_EPISODE_ID_BEFORE_INSERT.format(table_name=target.name, fk_target_table_name=fk.column.table.name, fk_target_name=fk.column.name,fk_name=fk.parent.name))) connection.execute(text(SQLITE_CHECK_EPISODE_ID_BEFORE_UPDATE.format(table_name=target.name, fk_target_table_name=fk.column.table.name, fk_target_name=fk.column.name, fk_name=fk.parent.name))) connection.execute(text(SQLITE_AUTOFILL_EPISODE_ID.format(table_name=target.name, fk_target_table_name=fk.column.table.name, fk_target_name=fk.column.name, fk_name=fk.parent.name))) elif connection.dialect.name == "postgresql": connection.execute(text(POSTGRESQL_AUTOFILL_EPISODE_ID.format(table_name=target.name, fk_target_table_name=fk.column.table.name, fk_target_name=fk.column.name, fk_name=fk.parent.name))) event.listen(personas_clips, "after_create", after_create_trigger_autofill_episode_id) event.listen(personas_images, "after_create", after_create_trigger_autofill_episode_id) Test cases Here's what I have at the moment based on the expected behaviour in the question. 
from sqlalchemy.exc import IntegrityError from sqlalchemy.sql import select from models import * if db.engine.dialect.name == "sqlite": db.session.execute("pragma foreign_keys=on") else: db.session.execute(""" DROP TABLE IF EXISTS episodes CASCADE; DROP TABLE IF EXISTS personas CASCADE; DROP TABLE IF EXISTS personas_episodes CASCADE; DROP TABLE IF EXISTS clips CASCADE; DROP TABLE IF EXISTS personas_clips; DROP TABLE IF EXISTS images CASCADE; DROP TABLE IF EXISTS personas_images; """) db.session.commit() db.create_all() e1 = Episode(title="Some Episode") e2 = Episode(title="Another Episode") db.session.add(e1) db.session.add(e2) db.session.commit() p1 = Persona(name="Raider Dave", episodes=[e1]) p2 = Persona(name="Ricky Spanish", episodes=[e2]) db.session.add(p1) db.session.add(p2) db.session.commit() c1 = Clip(title="A clip", episode=e1, personas=[p2]) # should fail db.session.add(c1) try: db.session.commit() assert False except IntegrityError: db.session.rollback() assert Clip.query.first() is None, list(db.session.execute(select(personas_clips))) i1 = Image(link="An image", episode=e2, personas=[p1]) # should fail db.session.add(i1) try: db.session.commit() assert False except IntegrityError: db.session.rollback() assert Image.query.first() is None, list(db.session.execute(select(personas_images))) c2 = Clip(title="Another clip", episode=e1, personas=[p1]) # should succeed db.session.add(c2) db.session.commit() assert Clip.query.first() is not None i2 = Image(link="Another image", episode=e2, personas=[p2]) # should succeed db.session.add(i2) db.session.commit() assert Image.query.first() is not None Alternatives that didn't work out SQLAlchemy doesn't appear to support before_insert event for Table, only Model. https://docs.sqlalchemy.org/en/14/orm/events.html#sqlalchemy.orm.MapperEvents.before_insert I tried using Association Proxy, but could not support c2.personas.remove(p1) cleanly. https://docs.sqlalchemy.org/en/14/orm/extensions/associationproxy.html | 7 | 2 |
69,417,027 | 2021-10-2 | https://stackoverflow.com/questions/69417027/how-to-typecheck-class-with-method-inserted-by-metaclass-in-python | In the following code some_method has been added by metaclass: from abc import ABC from abc import ABCMeta from typing import Type def some_method(cls, x: str) -> str: return f"result {x}" class MyMeta(ABCMeta): def __new__(mcs, *args, **kwargs): cls = super().__new__(mcs, *args, **kwargs) cls.some_method = classmethod(some_method) return cls class MyABC(ABC): @classmethod def some_method(cls, x: str) -> str: return x class MyClassWithSomeMethod(metaclass=MyMeta): pass def call_some_method(cls: Type[MyClassWithSomeMethod]) -> str: return cls.some_method("A") if __name__ == "__main__": mc = MyClassWithSomeMethod() assert isinstance(mc, MyClassWithSomeMethod) assert call_some_method(MyClassWithSomeMethod) == "result A" However, MyPy is quite expectedly unhappy about it: minimal_example.py:27: error: "Type[MyClassWithSomeMethod]" has no attribute "some_method" Found 1 error in 1 file (checked 1 source file) Is there any elegant way to tell type checker, that the type is really ok? By elegant, I mean I do not need to change these kinds of definitions everywhere: class MyClassWithSomeMethod(metaclass=MyMeta): ... Note, that I do not want to go with subclassing (like with MyABC in the code above). That is, my classes are to be defined with metaclass=. What options are there? I've also tried Protocol: from typing import Protocol class SupportsSomeMethod(Protocol): @classmethod def some_method(cls, x: str) -> str: ... class MyClassWithSomeMethod(SupportsSomeMethod, metaclass=MyMeta): pass def call_some_method(cls: SupportsSomeMethod) -> str: return cls.some_method("A") But this leads to: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases | As is explained in the MyPy documentation, MyPy's support for metaclasses only goes so far: Mypy does not and cannot understand arbitrary metaclass code. The issue is that if you monkey-patch a method onto a class in your metaclass's __new__ method, you could be adding anything to your class's definition. This is much too dynamic for Mypy to understand. However, all is not lost! You have a few options here. Option 1: Statically define the method as an instance method on the metaclass Classes are instances of their metaclass, so instance methods on a metaclass work very similarly to classmethods defined in a class. As such, you can rewrite minimal_example.py as follows, and MyPy will be happy: from abc import ABCMeta from typing import Type class MyMeta(ABCMeta): def some_method(cls, x: str) -> str: return f"result {x}" class MyClassWithSomeMethod(metaclass=MyMeta): pass def call_some_method(cls: Type[MyClassWithSomeMethod]) -> str: return cls.some_method("A") if __name__ == "__main__": mc = MyClassWithSomeMethod() assert isinstance(mc, MyClassWithSomeMethod) assert call_some_method(MyClassWithSomeMethod) == "result A" The only big difference between a metaclass instance-method and your average classmethod is that metaclass instance-methods aren't avaliable from instances of the class using the metaclass: >>> from abc import ABCMeta >>> class MyMeta(ABCMeta): ... def some_method(cls, x: str) -> str: ... return f"result {x}" ... >>> class MyClassWithSomeMethod(metaclass=MyMeta): ... pass ... 
>>> MyClassWithSomeMethod.some_method('foo') 'result foo' >>> m = MyClassWithSomeMethod() >>> m.some_method('foo') Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: 'MyClassWithSomeMethod' object has no attribute 'some_method' >>> type(m).some_method('foo') 'result foo' Option 2: Promise MyPy a method exists, without actually defining it In lots of situations, you'll be using a metaclass because you want to be more dynamic than is possible if you're statically defining methods. For example, you might want to dynamically generate method definitions on the fly and add them to classes that use your metaclass. In these situations, Option 1 won't do at all. Another option, in these situations, is to "promise" MyPy that a method exists, without actually defining it. You can do this using standard annotations syntax: from abc import ABCMeta from typing import Type, Callable def some_method(cls, x: str) -> str: return f"result {x}" class MyMeta(ABCMeta): some_method: Callable[['MyMeta', str], str] def __new__(mcs, *args, **kwargs): cls = super().__new__(mcs, *args, **kwargs) cls.some_method = classmethod(some_method) return cls class MyClassWithSomeMethod(metaclass=MyMeta): pass def call_some_method(cls: Type[MyClassWithSomeMethod]) -> str: return cls.some_method("A") if __name__ == "__main__": mc = MyClassWithSomeMethod() assert isinstance(mc, MyClassWithSomeMethod) assert call_some_method(MyClassWithSomeMethod) == "result A" This passes MyPy fine, and is actually fairly clean. However, there are limitations to this approach, as the full complexities of a callable cannot be expressed using the shorthand typing.Callable syntax. Option 3: Lie to MyPy A third option is to lie to MyPy. There are two obvious ways you could do this. Option 3(a). Lie to MyPy using the typing.TYPE_CHECKING constant The typing.TYPE_CHECKING constant is always True for static type-checkers, and always False at runtime. So, you can use this constant to feed different definitions of your class to MyPy than the ones you'll use at runtime. from typing import Type, TYPE_CHECKING from abc import ABCMeta if not TYPE_CHECKING: def some_method(cls, x: str) -> str: return f"result {x}" class MyMeta(ABCMeta): if TYPE_CHECKING: def some_method(cls, x: str) -> str: ... else: def __new__(mcs, *args, **kwargs): cls = super().__new__(mcs, *args, **kwargs) cls.some_method = classmethod(some_method) return cls class MyClassWithSomeMethod(metaclass=MyMeta): pass def call_some_method(cls: Type[MyClassWithSomeMethod]) -> str: return cls.some_method("A") if __name__ == "__main__": mc = MyClassWithSomeMethod() assert isinstance(mc, MyClassWithSomeMethod) assert call_some_method(MyClassWithSomeMethod) == "result A" This passes MyPy. The main disadvantage of this approach is that it's just plain ugly to have if TYPE_CHECKING checks across your code base. Option 3(b): Lie to MyPy using a .pyi stub file Another way of lying to MyPy would be to use a .pyi stub file. You could have a minimal_example.py file like this: from abc import ABCMeta def some_method(cls, x: str) -> str: return f"result {x}" class MyMeta(ABCMeta): def __new__(mcs, *args, **kwargs): cls = super().__new__(mcs, *args, **kwargs) cls.some_method = classmethod(some_method) return cls And you could have a minimal_example.pyi stub file in the same directory like this: from abc import ABCMeta class MyMeta(ABCMeta): def some_method(cls, x: str) -> str: ... 
If MyPy finds a .py file and a .pyi file in the same directory, it will always ignore the definitions in the .py file in favour of the stubs in the .pyi file. Meanwhile, at runtime, Python does the opposite, ignoring the stubs in the .pyi file entirely in favour of the runtime implementation in the .py file. So, you can be as dynamic as you like at runtime, and MyPy will be none the wiser. (As you can see, there is no need to replicate the full method definition in your .pyi file. MyPy only needs the signature of these methods, so the convention is simply to fill the body of a function in a .pyi file with a literal ellipsis ....) This solution is cleaner than using the TYPE_CHECKING constant. However, I would not get carried away with using .pyi files. Use them as little as possible. If you have a class in your .py file that you do not have a copy of in your stub file, MyPy will be completely ignorant of its existence and raise all sorts of false-positive errors. Remember: if you have a .pyi file, MyPy will completely ignore the .py file that has your runtime implementation in it. Duplicating class definitions in .pyi files goes against DRY, and runs the risk that you will update your runtime definitions in your .py file but forget to update your .pyi file. If possible, you should isolate the code that truly needs a separate .pyi stub into a single, short file. You should then annotate types as normal in the rest of your project, and import the necessary classes from very_dynamic_classes.py as normal when they are required in the rest of your code. | 5 | 12 |
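To complement Option 2 in the answer above: when the shorthand typing.Callable syntax is too limited (for example, you want named parameters or keyword-only arguments in the promised signature), a callback protocol can spell out the full signature. This is only a sketch; the 'strict' keyword argument is hypothetical and exists purely to show something Callable cannot express, and Protocol needs Python 3.8+ or typing_extensions.

from typing import Protocol

class SupportsSomeMethod(Protocol):
    # Spells out parameter names and a keyword-only flag, which the shorthand
    # Callable[['MyMeta', str], str] form cannot express.
    # 'strict' is a hypothetical parameter, added only for illustration.
    def __call__(self, cls: type, x: str, *, strict: bool = False) -> str: ...

The attribute promise in Option 2 could then read some_method: SupportsSomeMethod instead of the Callable[...] annotation.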
69,427,175 | 2021-10-3 | https://stackoverflow.com/questions/69427175/how-to-pass-forwardref-as-args-to-typevar-in-python-3-6 | I'm working on a library that currently supports Python 3.6+, but having a bit of trouble with how forward references are defined in the typing module in Python 3.6. I've setup pyenv on my local Windows machine so that I can switch between different Python versions at ease for local testing, as my system interpreter defaults to Python 3.9. The use case here essentially is I'm trying to define a TypeVar with the valid forward reference types, which I can then use for type annotation purposes. I've confirmed the following code runs without issue when I'm on 3.7+ and import ForwardRef from the typing module directly, but I'm unable to get it on Python 3.6 since I noticed forward refs can't be used as arguments to TypeVar for some reason. I also tried passing the forward ref type as an argument to Union , but I ran into a similar issue. Here are the imports and the definition forTypeVar that I'm trying to get to work on both python 3.6.0 as well as more recent versions like 3.6.8 - I did notice I get different errors between minor versions: from typing import _ForwardRef as PyForwardRef, TypeVar # Errors on PY 3.6: # 3.6.2+ -> AttributeError: type object '_ForwardRef' has no attribute '_gorg' # 3.6.2 or earlier -> AssertionError: assert isinstance(a, GenericMeta) FREF = TypeVar('FREF', str, PyForwardRef) Here is a sample usage I've been able to test out, which appears to type check as expected for Python 3.7+: class MyClass: ... def my_func(typ: FREF): pass # Type checks my_func('testing') my_func(PyForwardRef('MyClass')) # Does not type check my_func(23) my_func(MyClass) What I've Done So Far Here's my current workaround I'm using to support Python 3.6. This isn't pretty but it seems to at least get the code to run without any errors. However this does not appear to type check as expected though - at least not in Pycharm. import typing # This is needed to avoid an`AttributeError` when using PyForwardRef # as an argument to `TypeVar`, as we do below. if hasattr(typing, '_gorg'): # Python 3.6.2 or lower _gorg = typing._gorg typing._gorg = lambda a: None if a is PyForwardRef else _gorg(a) else: # Python 3.6.3+ PyForwardRef._gorg = None Wondering if I'm on the right track, or if there's a simpler solution I can use to support ForwardRef types as arguments to TypeVar or Union in Python 3.6. | To state the obvious, the issue here appears to be due to several changes in the typing module between Python 3.6 and Python 3.7. In both Python 3.6 and Python 3.7: All constraints on a TypeVar are checked using the typing._type_check function (links are to the 3.6 branch of the source code on GitHub) before the TypeVar is allowed to be instantiated. TypeVar.__init__ looks something like this in the 3.6 branch: class TypeVar(_TypingBase, _root=True): # <-- several lines skipped --> def __init__(self, name, *constraints, bound=None, covariant=False, contravariant=False): # <-- several lines skipped --> if constraints and bound is not None: raise TypeError("Constraints cannot be combined with bound=...") if constraints and len(constraints) == 1: raise TypeError("A single constraint is not allowed") msg = "TypeVar(name, constraint, ...): constraints must be types." self.__constraints__ = tuple(_type_check(t, msg) for t in constraints) # etc. In Python 3.6: There was a class called _ForwardRef. 
This class was given a name with a leading underscore to warn users that it was an implementation detail of the module, and that therefore the API of the class could change unexpectedly between Python versions. It appears that typing._type_check did not account for the possibility that _ForwardRef might be passed to it, hence the strange AttributeError: type object '_ForwardRef' has no attribute '_gorg' error message. I assume that this possibility was not accounted for because it was assumed that users would know not to use classes marked as implementation details. In Python 3.7: _ForwardRef has been replaced with a ForwardRef class: this class is no longer an implementation detail; it is now part of the module's public API. typing._type_check now explicitly accounts for the possibility that ForwardRef might be passed to it: def _type_check(arg, msg, is_argument=True): """Check that the argument is a type, and return it (internal helper). As a special case, accept None and return type(None) instead. Also wrap strings into ForwardRef instances. Consider several corner cases, for example plain special forms like Union are not valid, while Union[int, str] is OK, etc. The msg argument is a human-readable error message, e.g:: "Union[arg, ...]: arg should be a type." We append the repr() of the actual value (truncated to 100 chars). """ # <-- several lines skipped --> if isinstance(arg, (type, TypeVar, ForwardRef)): return arg # etc. Solutions I'm tempted to argue that it's not really worth the effort to support Python 3.6 at this point, given that Python 3.6 is kind of old now, and will be officially unsupported from December 2021. However, if you do want to continue to support Python 3.6, a slightly cleaner solution might be to monkey-patch typing._type_check rather than monkey-patching _ForwardRef. (By "cleaner" I mean "comes closer to tackling the root of the problem, rather than a symptom of the problem" — it's obviously less concise than your existing solution.) import sys from typing import TypeVar if sys.version_info < (3, 7): import typing from typing import _ForwardRef as PyForwardRef from functools import wraps _old_type_check = typing._type_check @wraps(_old_type_check) def _new_type_check(arg, message): if arg is PyForwardRef: return arg return _old_type_check(arg, message) typing._type_check = _new_type_check # ensure the global namespace is the same for users # regardless of the version of Python they're using del _old_type_check, _new_type_check, typing, wraps else: from typing import ForwardRef as PyForwardRef However, while this kind of thing works fine as a runtime solution, I have honestly no idea whether there is a way to make type-checkers happy with this kind of monkey-patching. Pycharm, MyPy and the like certainly won't be expecting you to do something like this, and probably have their support for TypeVars hardcoded for each version of Python. | 7 | 7 |
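For completeness, a sketch of how the shim above would typically be wired up at import time, with the version check deciding which name to expose; on 3.6 the typing._type_check patch from the answer has to run before the TypeVar is created.

import sys
from typing import TypeVar

if sys.version_info < (3, 7):
    from typing import _ForwardRef as PyForwardRef
    # ... apply the typing._type_check patch shown above here ...
else:
    from typing import ForwardRef as PyForwardRef

# With the 3.6 shim applied, this constraint constructs without raising.
FREF = TypeVar('FREF', str, PyForwardRef)

def my_func(typ: FREF) -> None:
    ...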
69,386,603 | 2021-9-30 | https://stackoverflow.com/questions/69386603/complexity-of-sparse-matrix-cholesky-decomposition | I am having trouble finding a straightforward answer to the following question: If you compute the Cholesky decomposition of an nxn positive definite symmetric matrix A, i.e. factor A=LL^T with L a lower triangular matrix, the complexity is O(n^3). For sparse matrices, there are apparently faster algorithms, but how much faster? What complexity can we achieve for such a matrix with, say, m<n^2 nonzero entries? Edit: my matrix is also approximately main diagonal (only the diagonal and some adjacent diagonals below and above are nonzero). P.S. I am eventually interested in implementations in either Julia or Python. Python has the sksparse.cholmod module (https://scikit-sparse.readthedocs.io/en/latest/cholmod.html) but it isn't clear to me what algorithm they are using and what its complexity is. Not sure about Julia, if anyone can tell me. | This can only be answered exactly for arbitrary matrices if P=NP ... so it's not possible to answer in general. The time complexity depends on the fill-reducing ordering used, which is attempting to get an approximate solution to an NP-hard problem. However, for the very special case of a matrix coming from a regular square 2D or 3D mesh, there is an answer. In this case, nested dissection gives an ordering that is asymptotically optimal. For a 2D s-by-s mesh, the matrix has dimension n = s^2 and I think about 5n entries. In this case, L has 31*(n log2(n)/8)+O(n) nonzeros, and the work is 829*(n^(3/2))/84+O(n log n). For a 3D s-by-s-by-s mesh with n = s^3, there are O(n^(4/3)) nonzeros in L and O(n^2) operations are required to compute L. | 5 | 3 |
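Since the matrix in the question is banded (a main diagonal plus a few adjacent diagonals), a concrete Python sketch may help: for a fixed bandwidth k the Cholesky factor stays within the band, so the factorization costs roughly O(n*k^2) rather than O(n^3). SciPy exposes this directly through its banded Cholesky routines; the tridiagonal example below is purely illustrative.

import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

n = 8
# Tridiagonal SPD matrix (4 on the diagonal, 1 on the off-diagonals) in SciPy's
# upper banded storage: row 0 holds the superdiagonal, row 1 the main diagonal.
ab = np.zeros((2, n))
ab[0, 1:] = 1.0   # superdiagonal (first entry is padding)
ab[1, :] = 4.0    # main diagonal

c = cholesky_banded(ab)                        # O(n * bandwidth^2) work, O(n * bandwidth) storage
x = cho_solve_banded((c, False), np.ones(n))   # reuse the factor to solve A x = b

For general sparse SPD matrices, sksparse.cholmod (mentioned in the question) wraps CHOLMOD, which applies a fill-reducing ordering and then a supernodal factorization, so its cost depends on the fill-in exactly as the answer describes.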
69,429,950 | 2021-10-4 | https://stackoverflow.com/questions/69429950/dask-what-does-memory-limit-control | In dask's LocalCluster, there is a parameter memory_limit. I can't find in the documentation (https://distributed.dask.org/en/latest/worker.html#memory-management) details about whether the limit is per worker, per thread, or for the whole cluster. This is probably at least in part because I have trouble following how keywords are passed around. For example, in this code: cluster = LocalCluster(n_workers=2, threads_per_worker=4, memory_target_fraction=0.95, memory_limit='32GB') will that be 32 GB for each worker? For both workers together? Or for each thread? My question is motivated partly by running a LocalCluster with n_workers=1 and memory_limit=32GB, but it gets killed by the Slurm Out-Of-Memory killer for using too much memory. | The memory_limit keyword argument to LocalCluster sets the limit per worker. Related documentation: https://github.com/dask/distributed/blob/7bf884b941363242c3884b598205c75373287190/distributed/deploy/local.py#L76-L78 Note: if the memory_limit given is greater than the available memory, the total available memory will be set as the limit for each worker. This behavior hasn't been documented yet, but a relevant issue is here: https://github.com/dask/dask/issues/8224 Screenshot of cluster with code: from dask.distributed import LocalCluster, Client cluster = LocalCluster(n_workers=2, threads_per_worker=4, memory_target_fraction=0.95, memory_limit='8GB') client = Client(cluster) client | 5 | 5 |
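To make the per-worker semantics concrete for the Slurm case in the question, here is a small illustrative sketch (the numbers are arbitrary): the cluster's total budget is n_workers times memory_limit, and the threads inside a worker share that worker's limit.

from dask.distributed import LocalCluster, Client

n_workers = 2
per_worker_limit = "8GB"   # interpreted per worker, not per thread or per cluster

cluster = LocalCluster(
    n_workers=n_workers,
    threads_per_worker=4,
    memory_limit=per_worker_limit,
)
client = Client(cluster)

# Total memory the cluster may try to use: n_workers * 8 GB = 16 GB here.
# A Slurm allocation needs to cover at least that sum (plus some overhead),
# otherwise the OOM killer can fire even though each worker honours its own limit.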
69,396,898 | 2021-9-30 | https://stackoverflow.com/questions/69396898/trying-to-figure-out-uwsgi-thread-workers-configuration | So, I started working with uWSGI for my python application just two days ago and I'm trying to understand the various parameters we specify in an .ini file. This is what my app.ini file currently looks like: # The following article was referenced while creating this configuration # https://www.techatbloomberg.com/blog/configuring-uwsgi-production-deployment/ [uwsgi] strict = true ; Only valid uWSGI options are tolerated master = true ; The master uWSGI process is necessary to gracefully re-spawn and pre-fork workers, ; consolidate logs, and manage many other features enable-threads = true ; To run uWSGI in multithreading mode vacuum = true ; Delete sockets during shutdown single-interpreter = true ; Sets only one service per worker process die-on-term = true ; Shutdown when receiving SIGTERM (default is respawn) need-app = true ;disable-logging = true ; By default, uWSGI has rather verbose logging. Ensure that your ;log-4xx = true ; application emits concise and meaningful logs. Uncomment these lines ;log-5xx = true ; if you want to disable logging cheaper-algo = busyness processes = 128 ; Maximum number of workers allowed cheaper = 1 ; Minimum number of workers allowed - default 1 cheaper-initial = 2 ; Workers created at startup cheaper-overload = 60 ; Will check busyness every 60 seconds. cheaper-step = 3 ; How many workers to spawn at a time auto-procname = true ; Identify the workers procname-prefix = "rhs-svc " ; Note the space. uWSGI logs will be prefixed with "rhs-svc" When I start uWSGI - this is what I see: [uWSGI] getting INI configuration from app.ini *** Starting uWSGI 2.0.19.1 (64bit) on [Thu Sep 30 10:49:45 2021] *** compiled with version: Apple LLVM 12.0.0 (clang-1200.0.32.29) on 29 September 2021 23:55:27 os: Darwin-19.6.0 Darwin Kernel Version 19.6.0: Thu Sep 16 20:58:47 PDT 2021; root:xnu-6153.141.40.1~1/RELEASE_X86_64 nodename: sth-sth-sth machine: x86_64 clock source: unix pcre jit disabled detected number of CPU cores: 12 current working directory: /Users/sth.sth/My-Microservice detected binary path: /Users/sth.sth/My-Microservice/venv/bin/uwsgi your processes number limit is 2784 your memory page size is 4096 bytes detected max file descriptor number: 10240 lock engine: OSX spinlocks thunder lock: disabled (you can enable it with --thunder-lock) uWSGI http bound on :9000 fd 4 [busyness] settings: min=25%, max=50%, overload=60, multiplier=10, respawn penalty=2 uwsgi socket 0 bound to TCP address 127.0.0.1:57164 (port auto-assigned) fd 3 Python version: 3.9.6 (default, Jun 29 2021, 06:20:32) [Clang 12.0.0 (clang-1200.0.32.29)] Python main interpreter initialized at 0x7fd32b905bf0 python threads support enabled your server socket listen backlog is limited to 100 connections your mercy for graceful operations on workers is 60 seconds mapped 9403584 bytes (9183 KB) for 128 cores *** Operational MODE: preforking *** WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x7fd32b905bf0 pid: 78422 (default app) spawned uWSGI master process (pid: 78422) spawned uWSGI worker 1 (pid: 78423, cores: 1) spawned uWSGI worker 2 (pid: 78424, cores: 1) spawned uWSGI http 1 (pid: 78425) I'm running this on MacOS Catalina with a 6-Core i7 CPU. Why does it say detected cores: 12 when I have just 6? It says process number limit: 2784 - can I really set processes = 128 to processes = 2784? 
In the docs, it is mentioned that processes = 2 * cpucores is too simple a metric to adhere to. What metrics should I ideally be considering? My application is currently a shell (i.e. no business logic - just in memory stuff get-value-set-value stuff for now, I'm essentially building a template) and we don't expect any DB connections/IO intensive operations for now. How do I determine what's a good thread:process ratio? I apologize if my questions are too basic, but I'm very new to this | Cores vs Processors First of all, the number of cores is not necessarily the number of processors. On early computers days it was like 1-1, but with modern improvements one processor can offer more than one core. (Check this: https://www.tomshardware.com/news/cpu-core-definition,37658.html). So, if it detected 12 cores, you can use it as the basis for your calculus. WSGI processes The number of processes means how many different parallel instances of your web application will be running in that server. WSGI first creates a master process that will coordinate things. Then it bootstraps your application and creates N clones of it (fork). These child forked processes are isolated, they don't share resources. If one process get's unhealthy for any reason (e.g. I/O problems), it can terminate or even be voluntarily killed by the master process while the rest of the clones keep working, so your application is still up and running. When a process is terminated/killed the master process can create another fresh clone to replace it (re-spawn). It is OK to set the number of processes as a ratio of the available cores. But there's no benefit of increasing it too much. So, you definitely shouldn't set it to the limit (2784). Remember that the operating system will round-robin across all processes to give each one a chance of have some instructions processed. So, if it offers 12 cores and you create like 1000 different processes, you are just putting stress on the system and you'll end up getting the same throughput (or even a worse throughput since there's so much chaos). Number of threads inside a process Then we move on to the number of threads. For the sake of simplicity, let's just say that the number of threads means the number of parallel requests each of these child process can handle. While one thread is waiting for a database response to answer a request, another thread could be doing something else to answer another request. You may say: why do I need multiple threads if I already have multiple processes? A process is an expensive thing, but threads are just the way you can parallel the workload a process can handle. Imagine that a process is a Coffee Shop, and threads are the number of attendants you have inside. You can spread 10 different Coffee Shops units around the city. If one of them closes, there's still another 9 somewhere else the customer can go. But each shop need an amount of attendants to serve people the best way possible. How to set these numbers correctly If you set just a single process with 100 threads, that means that 100 is your concurrency limit. If at some point there's 101 concurrent requests to your application, that last one will have to wait for one of the 100 first to be finished. That's when you start to get an increasing response time for some users. The more the requests are queued, the worse it gets (queuing theory). Besides that, since you have a single process, if it crashes, all these 100 requests will fail with a server error (500). 
So, it's wiser to have more processes, let's say 4 processes handling 25 threads each one. You still have the 100 concurrency limit, but your application is more resilient. It's up to you to get to know your application expected load so you can adjust these numbers properly. When you have external integrations like databases, you have to consider it's limitations also. Let's say a PostgreSQL Server that can handle 100 simultaneous connections. If you have 10 WSGI processes, 40 threads each (with a connection pool of size 40 as well), then there's the possibility that you stress the database with 400 connections, and then you have a big problem, but that's not your case! So, just use the suggested number of processes (12 * 2 = 24) and set as much threads as needed to offer a certain desired level of concurrency. If you don't know the expected load, I suggest you to make some sort of performance test that can simulate requests to your application and then you can experiment different loads and settings and check for it's side effects. Extra: Containers If you are running your application in a container orchestration platform, like Kubernetes, then you can probably have multiple balanced containers serving the same application. You can even make it dynamic so that the number of containers increases if memory or processing go beyond a threshold. That means that on top of all those WSGI fine tuning for a single server, there's also other modern layers of configurations that can help you face peaks and high load scenarios. | 11 | 28 |
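The sizing arithmetic in the answer can be captured in a few lines of Python, as a back-of-the-envelope helper rather than anything uWSGI-specific; the target concurrency figure is an assumption you would replace with numbers from your own load tests.

import multiprocessing

cores = multiprocessing.cpu_count()   # logical cores, e.g. 12 on a 6-core i7 with hyper-threading
processes = 2 * cores                 # the common rule of thumb mentioned in the answer

target_concurrency = 100              # assumed peak of simultaneous requests
threads_per_process = -(-target_concurrency // processes)   # ceiling division

print(f"processes = {processes}, threads per process = {threads_per_process}, "
      f"max concurrent requests = {processes * threads_per_process}")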
69,458,399 | 2021-10-5 | https://stackoverflow.com/questions/69458399/numpy-1-21-2-may-not-yet-support-python-3-10 | Python 3.10 is released and when I try to install NumPy it gives me this: NumPy 1.21.2 may not yet support Python 3.10. What should I do? | If on Windows, numpy has not yet released a precompiled wheel for Python 3.10. However, you can try the unofficial wheels available at https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy . Specifically, look for numpy-1.21.2+mkl-cp310-cp310-win_amd64.whl or numpy-1.21.2+mkl-cp310-cp310-win32.whl depending on your system architecture. After downloading the file, go to the download directory and run pip install "<filename>.whl". (I have personally installed numpy-1.21.2+mkl-cp310-cp310-win_amd64.whl and it worked for me.) | 28 | 23 |
69,441,767 | 2021-10-4 | https://stackoverflow.com/questions/69441767/error-using-selenium-chrome-webdriver-with-python | hi im using chrome driver but i cant fix this error mycode: options = Options() options.add_argument('--disable-gpu') options.add_argument('--disable-dev-shm-usage') self.site = webdriver.Chrome(executable_path="C:\chromedriver.exe",chrome_options=options) self.site.get("https://sgite.com/en/site/") error: [23468:14696:1004/232130.459:ERROR:chrome_browser_main_extra_parts_metrics.cc(228)] crbug.com/1216328: Checking Bluetooth availability started. Please report if there is no report that this ends. [23468:14696:1004/232130.468:ERROR:chrome_browser_main_extra_parts_metrics.cc(231)] crbug.com/1216328: Checking Bluetooth availability ended. [23468:14696:1004/232130.514:ERROR:chrome_browser_main_extra_parts_metrics.cc(234)] crbug.com/1216328: Checking default browser status started. Please report if there is no report that this ends. [23468:14696:1004/232130.588:ERROR:chrome_browser_main_extra_parts_metrics.cc(238)] crbug.com/1216328: Checking default browser status ended. | If you are using Selenium with Python then add these extra options into your Selenium code- options = webdriver.ChromeOptions() options.add_experimental_option('excludeSwitches', ['enable-logging']) driver = webdriver.Chrome(options=options) | 4 | 13 |
69,444,526 | 2021-10-5 | https://stackoverflow.com/questions/69444526/python-grpc-failed-to-pick-subchannel | I'm trying to setup a GRPC client in Python to hit a particular server. The server is setup to require authentication via access token. Therefore, my implementation looks like this: def create_connection(target, access_token): credentials = composite_channel_credentials( ssl_channel_credentials(), access_token_call_credentials(access_token)) target = target if target else DEFAULT_ENDPOINT return secure_channel(target = target, credentials = credentials) conn = create_connection(svc = "myservice", session = Session(client_id = id, client_secret = secret) stub = FakeStub(conn) stub.CreateObject(CreateObjectRequest()) The issue I'm having is that, when I attempt to use this connection I get the following error: File "<stdin>", line 1, in <module> File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 216, in __call__ response, ignored_call = self._with_call(request, File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call return call.result(), call File "anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result raise self File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation response, call = self._thunk(new_method).with_call( File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 266, in with_call return self._with_call(request, File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call return call.result(), call File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result raise self File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation response, call = self._thunk(new_method).with_call( File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 957, in with_call return _end_unary_response_blocking(state, call, True, None) File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses" debug_error_string = "{ "created":"@1633399048.828000000", "description":"Failed to pick subchannel", "file":"src/core/ext/filters/client_channel/client_channel.cc", "file_line":3159, "referenced_errors":[ { "created":"@1633399048.828000000", "description": "failed to connect to all addresses", "file":"src/core/lib/transport/error_utils.cc", "file_line":147, "grpc_status":14 } ] }" I looked up the status code associated with this response and it seems that the server is unavailable. So, I tried waiting for the connection to be ready: channel_ready_future(conn).result() but this hangs. What am I doing wrong here? UPDATE 1 I converted the code to use the async connection instead of the synchronous connection but the issue still persists. Also, I saw that this question had also been posted on SO but none of the solutions presented there fixed the problem I'm having. 
UPDATE 2 I assumed that this issue was occurring because the client couldn't find the TLS certificate issued by the server so I added the following code: def _get_cert(target: str) -> bytes: split_around_port = target.split(":") data = ssl.get_server_certificate((split_around_port[0], split_around_port[1])) return str.encode(data) and then changed ssl_channel_credentials() to ssl_channel_credentials(_get_cert(target)). However, this also hasn't fixed the problem. | The issue here was actually fairly deep. First, I turned on tracing and set GRPC log-level to debug and then found this line: D1006 12:01:33.694000000 9032 src/core/lib/security/transport/security_handshaker.cc:182] Security handshake failed: {"created":"@1633489293.693000000","description":"Cannot check peer: missing selected ALPN property.","file":"src/core/lib/security/security_connector/ssl_utils.cc","file_line":160} This lead me to this GitHub issue, which stated that the issue was with grpcio not inserting the h2 protocol into requests, which would cause ALPN-enabled servers to return that specific error. Some further digging led me to this issue, and since the server I connected to also uses Envoy, it was just a matter of modifying the envoy deployment file so that: clusters: - name: my-server connect_timeout: 10s type: strict_dns lb_policy: round_robin http2_protocol_options: {} hosts: - socket_address: address: python-server port_value: 1337 tls_context: common_tls_context: tls_certificates: alpn_protocols: ["h2"] <====== Add this. | 5 | 5 |
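The debugging step mentioned in the answer ("I turned on tracing and set GRPC log-level to debug") can be reproduced with gRPC's standard environment variables, which have to be set before grpc is imported; the channel target below is a placeholder.

import os

# Verbose gRPC core logging; 'handshaker,secure_endpoint' narrows the trace to the
# TLS handshake (use 'all' to trace everything). Set these before importing grpc.
os.environ["GRPC_VERBOSITY"] = "DEBUG"
os.environ["GRPC_TRACE"] = "handshaker,secure_endpoint"

import grpc

channel = grpc.secure_channel("my-service.example.com:443",   # placeholder target
                              grpc.ssl_channel_credentials())
try:
    grpc.channel_ready_future(channel).result(timeout=10)
    print("channel ready")
except grpc.FutureTimeoutError:
    print("handshake failed; check the trace output for ALPN/TLS errors")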
69,442,203 | 2021-10-4 | https://stackoverflow.com/questions/69442203/how-to-hide-legend-selectively-in-a-plotly-line-plot | I'm struggling to hide the legend for some but not all of the lines in my line plot. Here is what the plot looks like now. Plot: Essentially I want to hide the legend for the light grey lines while keeping it in place for the coloured lines. Here's my code: import plotly.graph_objects as go fig = go.Figure() fig.update_layout(autosize=False, width=800, height=500, template='none') fig.update_layout(title = 'Title', xaxis_title = 'Games', yaxis_title = 'Profit') for team in rest_teams: fig.add_traces(go.Scatter(x=df['x'], y = df[team], name = team, line = {'color': '#F5F5F5'})) for team in big_eight: line_dict = {'color': cmap[team]} fig.add_traces(go.Scatter(x=df['x'], y = df[team], name = team, line = line_dict)) fig.show() I can update layout with fig.update_layout(showlegend=False) which hides the whole thing and isn't optimal. Help would be appreciated. | If I understand your desired output correctly, you can use showlegend = False for the traces where you've set a grey color with color = #F5F5F5: for c in cols1: fig.add_trace(go.Scatter(x = df.index, y = df[c], line_color = '#F5F5F5', showlegend = False)) And then leave that out for the lines you'd like colors assigned to, and make sure to include name = c in: for c in cols2: fig.add_trace(go.Scatter(x = df.index, y = df[c], name = c)) Plot: Complete code: import plotly.graph_objects as go import plotly.express as px import pandas as pd df = px.data.stocks() df = df.set_index('date') fig = go.Figure() cols1 = df.columns[:2] cols2 = df.columns[2:] for c in cols1: fig.add_trace(go.Scatter(x = df.index, y = df[c], line_color = '#F5F5F5', showlegend = False)) for c in cols2: fig.add_trace(go.Scatter(x = df.index, y = df[c], name = c)) fig.update_layout(template=template fig.show() | 7 | 5 |
69,452,755 | 2021-10-5 | https://stackoverflow.com/questions/69452755/how-to-parse-xml-from-string-in-python | I'm trying to parse an XML from a string in Python with no success. The string I'm trying to parse is: <?xml version="1.0" encoding="UTF-8"?> <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:573a453c-72c0-4185-8c54-9010593dd102"> <data> <config xmlns="http://www.calix.com/ns/exa/base"> <profile> <policy-map> <name>ELINE_PM_1</name> <class-map-ethernet> <name>Eth-match-any-1</name> <ingress> <meter-type>meter-mef</meter-type> <eir>1000000</eir> </ingress> </class-map-ethernet> </policy-map> <policy-map> <name>ELINE_PM_2</name> <class-map-ethernet> <name>Eth-match-any-2</name> <ingress> <meter-type>meter-mef</meter-type> <eir>10000000</eir> </ingress> </class-map-ethernet> </policy-map> </profile> </config> </data> </rpc-reply> I'm trying to use xml.etree.ElementTree library to parse the xml and I also tried to remove the first line related to xml version and encoding with no results. The code snippet to reproduce the issue I'm facing is: import xml.etree.ElementTree as ET reply_xml=''' <data> <config> <profile> <policy-map> <name>ELINE_PM_1</name> <class-map-ethernet> <name>Eth-match-any-1</name> <ingress> <meter-type>meter-mef</meter-type> <eir>1000000</eir> </ingress> </class-map-ethernet> </policy-map> <policy-map> <name>ELINE_PM_2</name> <class-map-ethernet> <name>Eth-match-any-2</name> <ingress> <meter-type>meter-mef</meter-type> <eir>10000000</eir> </ingress> </class-map-ethernet> </policy-map> </profile> </config> </data> ''' root = ET.fromstring(reply_xml) for child in root: print(child.tag, child.attrib) reply_xml is a string containing the above mentioned xml so it should work but if I inspect the root variable using the debugger I see that it is not being populated correctly. It seems that the first xml tag (<?xml version="1.0" encoding="UTF-8"?>) creates some problems but even if I manually remove it I am not able to parse the xml correctly. Any clue to parse that xml? | Your original XML has namespaces. You need to honor them in your XPath queries. import xml.etree.ElementTree as ET reply_xml '''<?xml version="1.0" encoding="UTF-8"?> <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:573a453c-72c0-4185-8c54-9010593dd102"> <data> <config xmlns="http://www.calix.com/ns/exa/base"> <!-- ... the rest of it ... --> </config> </data> </rpc-reply>''' ns = { 'calix': 'http://www.calix.com/ns/exa/base' } root = ET.fromstring(reply_xml) for eir in root.findall('.//calix:eir', ns): print(eir.text) prints 1000000 10000000 | 4 | 6 |
69,452,329 | 2021-10-5 | https://stackoverflow.com/questions/69452329/walrus-operator-in-dict-comprehension | I wanted to avoid double evaluation of a mean in a dict comprehension, and I tried using the walrus operator: >>> dic = {"A": [45,58,75], "B": [55,82,80,92], "C": [78,95,90], "D":[98,75]} >>> q = {x: (mean := (sum(dic[x]) // len(dic[x]))) for x in dic if mean > 65} but this gave me the following error: Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> q = {x: (mean := (sum(dic[x]) // len(dic[x]))) for x in dic if mean > 65} File "<pyshell#2>", line 1, in <dictcomp> q = {x: (mean := (sum(dic[x]) // len(dic[x]))) for x in dic if mean > 65} NameError: name 'mean' is not defined This error only happens when I try to use the variable, no problem while defining it: >>> q = {x: (mean := (sum(dic[x]) // len(dic[x]))) for x in dic if (sum(dic[x]) // len(dic[x])) > 65} >>> mean 86 >>> q {'B': 77, 'C': 87, 'D': 86} Why? Where did I get it wrong? | Your code is roughly equivalent to q = {} for x in dic: if mean > 65: mean := ... q[x] = mean which means you are using mean before assigning it. You need to move the definition to the if-clause-section of the dict-comprehension. >>> dic = {"A": [45,58,75], "B": [55,82,80,92], "C": [78,95,90], "D":[98,75]} >>> q = {x: mean for x in dic if (mean := (sum(dic[x]) // len(dic[x]))) > 65} >>> q {'B': 77, 'C': 87, 'D': 86} This translates to q = {} for x in dic: if (mean := ...) > 65: q[x] = mean | 14 | 14 |
69,451,227 | 2021-10-5 | https://stackoverflow.com/questions/69451227/how-to-replace-multiple-substrings-at-the-same-time | I have a string like a = "X1+X2*X3*X1" b = {"X1":"XX0","X2":"XX1","X0":"XX2"} I want to replace the substrings X1, X2, X3 using dict b. However, when I replace using the below code, for x in b: a = a.replace(x,b[x]) print(a) 'XXX2+XX1*X3' Expected result is XX0 + XX1*X3*XX0 I know it is because the substrings are replaced in a loop, but I don't know how to solve it. | You can build a single regex pattern by joining the keys with '|', then look each match up in the dictionary inside the substitution, like below. Try this: import re a = "X1+X2*X3*X1" b = {"X1":"XX0","X2":"XX1","X0":"XX2"} pattern = re.compile("|".join(b.keys())) out = pattern.sub(lambda x: b[re.escape(x.group(0))], a) Output: >>> out 'XX0+XX1*X3*XX0' | 6 | 8 |
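One caveat worth adding as a hedged sketch: if the keys can overlap (say 'X1' and 'X10') or contain regex metacharacters, escape them and sort longer keys first so the alternation prefers the longest match.

import re

a = "X10+X1*X2"
b = {"X1": "XX0", "X10": "YY9", "X2": "XX1"}

# Escape each key and sort longest-first so "X10" wins over "X1".
pattern = re.compile("|".join(map(re.escape, sorted(b, key=len, reverse=True))))
out = pattern.sub(lambda m: b[m.group(0)], a)
print(out)   # YY9+XX0*XX1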
69,440,494 | 2021-10-4 | https://stackoverflow.com/questions/69440494/python-3-10-optionaltype-or-type-none | Now that Python 3.10 has been released, is there any preference when indicating that a parameter or returned value might be optional, i.e., can be None. So what is preferred: Option 1: def f(parameter: Optional[int]) -> Optional[str]: Option 2: def f(parameter: int | None) -> str | None: Also, is there any preference between Type | None and None | Type? | PEP 604 covers these topics in the specification section. The existing typing.Union and | syntax should be equivalent. int | str == typing.Union[int, str] The order of the items in the Union should not matter for equality. (int | str) == (str | int) (int | str | float) == typing.Union[str, float, int] Optional values should be equivalent to the new union syntax None | t == typing.Optional[t] As @jonrsharpe comments, Union and Optional are not deprecated, so the Union and | syntax are acceptable. Łukasz Langa, a Python core developer, replied on a YouTube live related to the Python 3.10 release that Type | None is preferred over Optional[Type] for Python 3.10+. | 105 | 115 |
69,447,823 | 2021-10-5 | https://stackoverflow.com/questions/69447823/how-to-convert-array-to-string-in-python | i have array like below arr = [1,2,3] how can i convert it to '1,2,3' using python i have a usecase where i want to add a filter parameter to url like below url = some_url?id__in=1,2,3 i was initially thinking to pass it like so url = some_url?id__in={arr} but this is incorrect. i am new to python and i have browsed how to do this. " ".join(str(x) for x in arr) but this will give output '1 2 3' how can i fix this. could someone help me with this. thanks. | This gives you the 1,2,3 that you asked for. ",".join(str(x) for x in arr) | 9 | 11 |
69,444,593 | 2021-10-5 | https://stackoverflow.com/questions/69444593/best-way-to-forward-redirect-methods-attributes-in-python-class-without-redundan | I have a project, with some modules each of which contains a class doing their respective thing. Then I have an API class for the user. The API class instantiate those classes, and should forward/redirect to those who are doing the actual processing. I have the following questions: How do I do the forwarding without rewriting what seems to me redundant code? For example, say Foo is the API class, Bar is a module class, then now I am writing something like: class Foo: def __init__(self, bar: Bar): self.bar = bar def name(self): return self.bar.name() I explicitly wrote name method in Foo, which just returns name() of Bar. Isn't this redundant? Is there an "automatic" way to forward the call? In the bar class I'd write some docstrings. Is there a way to "port" these docstrings to the API class Foo? Writing them again in Foo would be redundant and difficult to maintain. | Try redirecting the __getattr__ magic method: class Foo: def __init__(self, bar: Bar): self.bar = bar def __getattr__(self, attr): return getattr(self.bar, attr) This would redirect all functions to the bar variable. For multiple classes: class Foo: def __init__(self, bar: Bar, foo: Foo, blah: Blah): self.bar = bar self.foo = foo self.blah = blah def __getattr__(self, attr): if hasattr(self.bar, attr): return getattr(self.bar, attr) elif hasattr(self.foo, attr): return getattr(self.foo, attr) else: return getattr(self.blah, attr) | 7 | 5 |
69,442,971 | 2021-10-4 | https://stackoverflow.com/questions/69442971/error-in-importing-environment-openai-gym | I am trying to run an OpenAI Gym environment however I get the following error: import gym env = gym.make('Breakout-v0') ERROR /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ale_py/gym/environment.py:11: DeprecationWarning: Importing atari-py roms won't be supported in future releases of ale-py. import ale_py.roms as roms A.L.E: Arcade Learning Environment (version +a54a328) [Powered by Stella] Traceback (most recent call last): File "/Users/username/Desktop/OpenAI Gym Stuff/OpenAI_Exp.py", line 2, in <module> env = gym.make('Breakout-v0') File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/gym/envs/registration.py", line 200, in make return registry.make(id, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/gym/envs/registration.py", line 105, in make env = spec.make(**kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/gym/envs/registration.py", line 75, in make env = cls(**_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ale_py/gym/environment.py", line 123, in __init__ self.seed() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ale_py/gym/environment.py", line 171, in seed raise error.Error( gym.error.Error: Unable to find game "Breakout", did you import Breakout with ale-import-roms? | Code works for me with gym 0.18.0 and 0.19.0 but not with 0.20.0 You may downgrade it with pip install --upgrade gym==0.19.0 BTW: it may also need to install gym[atari] or gym[all] to have all elements to work. Base on information in Release Note for 0.21.0 (which is not ready on pip but you can install from GitHub) there was some change in ALE (Arcade Learning Environment) and it made all problem but it is fixed in 0.21.0. -The old Atari entry point that was broken with the last release and the upgrade to ALE-Py is fixed But new gym[atari] not installs ROMs and you will need to use module AutoROM -pip install gym[atari] no longer distributes Atari ROMs that the ALE (the Atari emulator used) needs to run the various games. The easiest way to install ROMs into the ALE has been to use AutoROM. EDIT: Version 0.21.0 from GitHub works for me after installing (it may need program git for it) pip install --upgrade git+https://github.com/openai/gym pip install autorom AutoRom pip install --upgrade gym[atari] AutoRom runs program which asks if you have license for ROMs and install ROMs in AutoROM/roms but I didn't have to move ROMs to other place. AutoROM will download the Atari 2600 ROMs. They will be installed to: /usr/local/lib/python3.8/dist-packages/AutoROM/roms Existing ROMs will be overwritten. I own a license to these Atari 2600 ROMs. 
I agree to not distribute these ROMs and wish to proceed: [Y/n]: Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/adventure.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/air_raid.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/alien.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/amidar.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/assault.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/asterix.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/asteroids.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/atlantis.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/atlantis2.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/backgammon.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/bank_heist.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/basic_math.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/battle_zone.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/beam_rider.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/berzerk.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/blackjack.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/bowling.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/boxing.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/breakout.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/carnival.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/casino.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/centipede.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/chopper_command.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/combat.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/crazy_climber.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/crossbow.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/darkchambers.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/defender.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/demon_attack.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/donkey_kong.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/double_dunk.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/earthworld.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/elevator_action.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/enduro.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/entombed.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/et.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/fishing_derby.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/flag_capture.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/freeway.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/frogger.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/frostbite.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/galaxian.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/gopher.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/gravitar.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/hangman.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/haunted_house.bin 
Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/hero.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/human_cannonball.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/ice_hockey.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/jamesbond.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/journey_escape.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/joust.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/kaboom.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/kangaroo.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/keystone_kapers.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/king_kong.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/klax.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/koolaid.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/krull.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/kung_fu_master.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/laser_gates.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/lost_luggage.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/mario_bros.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/maze_craze.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/miniature_golf.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/montezuma_revenge.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/mr_do.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/ms_pacman.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/name_this_game.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/othello.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/pacman.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/phoenix.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/pitfall.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/pitfall2.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/pong.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/pooyan.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/private_eye.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/qbert.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/riverraid.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/road_runner.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/robotank.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/seaquest.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/sir_lancelot.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/skiing.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/solaris.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/space_invaders.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/space_war.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/star_gunner.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/superman.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/surround.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/tennis.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/tetris.bin Installed 
/usr/local/lib/python3.8/dist-packages/AutoROM/roms/tic_tac_toe_3d.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/time_pilot.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/trondead.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/turmoil.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/tutankham.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/up_n_down.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/venture.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/video_checkers.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/video_chess.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/video_cube.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/video_pinball.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/warlords.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/wizard_of_wor.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/word_zapper.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/yars_revenge.bin Installed /usr/local/lib/python3.8/dist-packages/AutoROM/roms/zaxxon.bin Done! After installation this code works for me without error and without DeprecationWarning. import gym import ale_py print('gym:', gym.__version__) print('ale_py:', ale_py.__version__) env = gym.make('Breakout-v0') and it gives gym: 0.21.0 ale_py: 0.7.1 A.L.E: Arcade Learning Environment (version +b7b0c1a) [Powered by Stella] | 23 | 4 |
69,416,986 | 2021-10-2 | https://stackoverflow.com/questions/69416986/attributeerror-module-tweepy-has-no-attribute-streamlistener-with-python | class MyStreamListener(tweepy.StreamListener): def on_status(self, status): print(status.text) # prints every tweet received def on_error(self, status_code): if status_code == 420: # end of monthly limit rate (500k) return False I use Python 3.9 and installed Tweepy via pip. I get the AttributeError on the class line. My import is just import tweepy. Authentication gets correctly handled. In the streaming.py file, I have the class Stream. But using this class ends in that. There is for example no status.text, even if there is the on_status function. I am a little bit confused. | Tweepy v4.0.0 was released recently and it merged StreamListener into Stream. I recommend updating your code to subclass Stream instead. Alternatively, you can downgrade to v3.10.0. | 7 | 12 |
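For reference, a minimal migration of the snippet in the question to Tweepy v4's Stream class might look like the sketch below; the four credential strings are placeholders, and the callback names follow the v4 API as I understand it.

import tweepy

class MyStream(tweepy.Stream):
    def on_status(self, status):
        print(status.text)           # prints every tweet received

    def on_request_error(self, status_code):
        if status_code == 420:       # rate limited
            return False             # returning False disconnects the stream

stream = MyStream("CONSUMER_KEY", "CONSUMER_SECRET",        # placeholders
                  "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
stream.filter(track=["python"])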
69,429,846 | 2021-10-4 | https://stackoverflow.com/questions/69429846/how-to-use-jax-vmap-for-nested-loops | I want to use vmap to vectorise this code for performance. def matrix(dataA, dataB): return jnp.array([[func(a, b) for b in dataB] for a in dataA]) matrix(data, data) I tried this: def f(x, y): return func(x, y) mapped = jax.vmap(f) mapped(data, data) But this only gives the diagonal entries. Basically I have a vector data = [1,2,3,4,5] (example), I want to get a matrix such that each entry (i, j) of the matrix is f(data[i], data[j]). Thus, the resulting matrix shape will be (len(data), len(data)). | jax.vmap maps across one set of axes at a time. If you want to map across two independent sets of axes, you can do so by nesting two vmap transformations: mapped = jax.vmap(jax.vmap(f, in_axes=(None, 0)), in_axes=(0, None)) result = mapped(data, data) | 5 | 4 |
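A self-contained version of the nested-vmap pattern from the answer, with a made-up func so the shapes can be checked end to end:

import jax
import jax.numpy as jnp

def func(a, b):
    return 10 * a + b          # stand-in pairwise function

data = jnp.arange(1, 6)        # [1, 2, 3, 4, 5]

# The inner vmap varies the second argument, the outer vmap the first,
# so entry (i, j) of the result is func(data[i], data[j]).
mapped = jax.vmap(jax.vmap(func, in_axes=(None, 0)), in_axes=(0, None))
result = mapped(data, data)

print(result.shape)            # (5, 5)
print(result[2, 4])            # func(3, 5) == 35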
69,432,439 | 2021-10-4 | https://stackoverflow.com/questions/69432439/how-to-add-transparency-to-a-line-with-opencv-python | I can draw a line with OpenCV Python but I can't make the line transparent def draw_from_pitch_to_image(image, reverse_output_points): for i in range(0, len(reverse_output_points), 2): x1, y1 = reverse_output_points[i] x2, y2 = reverse_output_points[i + 1] x1 = int(x1) y1 = int(y1) x2 = int(x2) y2 = int(y2) color = [255, 0, 0] if i < 1 else [0, 0, 255] cv2.line(image, (x1, y1), (x2, y2), color, 2) I changed the code but the line is still not transparent. I don't know why, could someone please help me to solve this problem? def draw_from_pitch_to_image(image, reverse_output_points): for i in range(0, len(reverse_output_points), 2): x1, y1 = reverse_output_points[i] x2, y2 = reverse_output_points[i + 1] alpha = 0.4 # Transparency factor. overlay = image.copy() x1 = int(x1) y1 = int(y1) x2 = int(x2) y2 = int(y2) color = [255, 0, 0] if i < 1 else [0, 0, 255] cv2.line(overlay, (x1, y1), (x2, y2), color, 2) cv2.addWeighted(overlay, alpha, output, 1 - alpha, 0, output) | One approach is to create a mask "overlay" image (copy of input image), draw a line onto this overlay image, and then perform the weighted addition of the two images using cv2.addWeighted() to mimic an alpha channel. Here's an example: Line with no transparency -> Result with alpha=0.5 Result with alpha=0.25 This method to apply transparency could be generalized to work with any other drawing function. Here's an example with cv2.rectangle() and cv2.circle() using an alpha=0.5 transparency value. No transparency -> Result with alpha=0.5 Code import cv2 # Load image and create a "overlay" image (copy of input image) image = cv2.imread('2.jpg') overlay = image.copy() original = image.copy() # To show no transparency # Test coordinates to draw a line x, y, w, h = 108, 107, 193, 204 # Draw line on overlay and original input image to show difference cv2.line(overlay, (x, y), (x + w, x + h), (36, 255, 12), 6) cv2.line(original, (x, y), (x + w, x + h), (36, 255, 12), 6) # Could also work with any other drawing function # cv2.rectangle(overlay, (x, y), (x + w, y + h), (36, 255, 12), -1) # cv2.rectangle(original, (x, y), (x + w, y + h), (36, 255, 12), -1) # cv2.circle(overlay, (x, y), 80, (36, 255, 12), -1) # cv2.circle(original, (x, y), 80, (36, 255, 12), -1) # Transparency value alpha = 0.50 # Perform weighted addition of the input image and the overlay result = cv2.addWeighted(overlay, alpha, image, 1 - alpha, 0) cv2.imshow('result', result) cv2.imshow('original', original) cv2.waitKey() | 6 | 11 |
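Applying the accepted approach back to the loop in the question: the key change is to copy the image once, draw every line on that single overlay, and blend once at the end (the version in the question re-copies the image on each iteration, which discards earlier lines, and blends into an undefined output variable). A sketch keeping the original function shape:

import cv2

def draw_from_pitch_to_image(image, reverse_output_points, alpha=0.4):
    overlay = image.copy()                      # copy once, outside the loop
    for i in range(0, len(reverse_output_points), 2):
        x1, y1 = map(int, reverse_output_points[i])
        x2, y2 = map(int, reverse_output_points[i + 1])
        color = [255, 0, 0] if i < 1 else [0, 0, 255]
        cv2.line(overlay, (x1, y1), (x2, y2), color, 2)
    # Blend once, writing the result back into the original image.
    cv2.addWeighted(overlay, alpha, image, 1 - alpha, 0, image)
    return image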
69,433,024 | 2021-10-4 | https://stackoverflow.com/questions/69433024/error-message-with-sklearn-function-roccurvedisplay-has-no-attribute-from-p | I am trying to use the sklearn function RocCurveDisplay.from_predictions as presented in https://scikit-learn.org/stable/modules/generated/sklearn.metrics.RocCurveDisplay.html#sklearn.metrics.RocCurveDisplay.from_predictions. I run the function like this: from sklearn.metrics import RocCurveDisplay true = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) prediction = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0.]) RocCurveDisplay.from_predictions(true,prediction) plt.show() I get the error message "AttributeError: type object 'RocCurveDisplay' has no attribute 'from_predictions' ". Is this a version problem? I am using the latest one, 0.24.1 | Your version 0.24.1 is not the latest version, You'll need to upgrade to scikit-learn 1.0 as from_predictions is supported in 1.0 You can upgrade using: pip install --upgrade scikit-learn | 6 | 8 |
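If upgrading scikit-learn is not an option, a similar plot can be produced on 0.24 by computing the curve with roc_curve and handing the arrays to the display object directly; the tiny label/score arrays below are stand-ins for the ones in the question.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc, RocCurveDisplay

true = np.array([0, 0, 1, 1, 1, 0])          # stand-in labels
prediction = np.array([0, 1, 1, 1, 0, 0])    # stand-in scores/predictions

fpr, tpr, _ = roc_curve(true, prediction)
RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=auc(fpr, tpr)).plot()
plt.show()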
69,426,664 | 2021-10-3 | https://stackoverflow.com/questions/69426664/modulenotfounderror-no-module-named-jsonschema-compat | I have been working with the Bybit API for the last week when I encountered the title problem yesterday. I have started a new env and installed only the bybit wrapper again and the issue still arises. From what I can see I have jsonschema installed and in my env PATH. It was working a few days ago, so I do believe this to be separate from whatever API I am trying to use. Included is a picture of the response when run in an interpreter. Any help would be greatly appreciated. ModuleNotFoundError: No module named 'jsonschema.compat' is the error that comes up. | That module was removed in jsonschema 4.0. Your packages haven't been pinned to only use jsonschema 3.x, so that might happen. For now, you can downgrade to version 3.x of the jsonschema package with pip install -U 'jsonschema<4.0' and things should work. | 17 | 27 |
69,426,453 | 2021-10-3 | https://stackoverflow.com/questions/69426453/declaration-of-list-of-type-python | Let's suppose to have a class Bag that contains a list of item. What we know about item is just that it has a method called : printDescription(). Now I want to define a method printAllItemsDescription inside Bag, that invokes the method printDescription() on each item inside items list. This should be the code (it's wrong but I think should looks like this) : class Bag: items:list[item] = [] . . . def printAllItemsDescription(this): for item in this.items: item.printDescription() My problem is that I don't know how to tell python that my items is a list of item. I know I can do something like item:Item but don't know how to do it with lists. Then while iterating on items it will know that each item contains a method called printDescription(), but on this moment item is just a variable of undefined type. P.S. : I tried also to do something like x:list[item] but I got this error : Subscript for class "list" will generate runtime exception; enclose type annotation in quotes | Python's type hinting is ever-evolving, and they've made some changes over time. Older versions of Python don't support subscripting list as in list[item]. Fortunately, we can get around all of this using a future import. The annotations import from __future__ works in all Python versions starting from 3.7 and effectively pretends that all type signatures are wrapped in quotes so they don't cause issues at runtime. from __future__ import annotations class Bag: items: list[Item] def __init__(self): self.items = [] def printAllItemsDescription(self): for item in self.items: item.printDescription() Also, I've made a few minor changes to fit with Python's style. Instance methods should generally have their first argument called self for consistency, and classes (like item) should be written starting with a capital letter, so if you have control over the class item, I would recommend changing it to class Item. | 11 | 22 |
69,422,878 | 2021-10-3 | https://stackoverflow.com/questions/69422878/how-to-check-for-boolean-without-using-for-loop | I have this running well, however am unable to avoid the loop, how can I get an only True if present or False if not present without having to loop through the list using a for loop? my_list = [[1, 2], [4, 6], [8, 3]] combined = [3, 8] for value in my_list: print(value) if set(combined) == set(value): print("present") else: print("absent") | To avoid the multiple absent printing, you can use a for/else construct: for value in my_list: if set(combined) == set(value): print("present") break else: print("absent") But if you're looking to optimize/reduce the loop, you can "hide" the loop inside an any call: any(set(combined) == set(value) for value in my_list) Or alternatively use a list comprehension to convert all sub-lists to sets: set(combined) in [set(sublist) for sublist in my_list] But as you can see, some sort of looping is necessary. I believe that by removing the loop you meant to reduce its code size. | 4 | 3 |
69,397,039 | 2021-9-30 | https://stackoverflow.com/questions/69397039/pymongo-ssl-certificate-verify-failed-certificate-has-expired-on-mongo-atlas | I am using MongoDB (Mongo Atlas) in my Django app. All was working fine till yesterday. But today, when I ran the server, it showed me the following error on the console Exception in thread django-main-thread: Traceback (most recent call last): File "c:\users\admin\appdata\local\programs\python\python39\lib\threading.py", line 973, in _bootstrap_inner self.run() File "c:\users\admin\appdata\local\programs\python\python39\lib\threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\core\management\commands\runserver.py", line 121, in inner_run self.check_migrations() File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\core\management\base.py", line 486, in check_migrations executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS]) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\executor.py", line 18, in __init__ self.loader = MigrationLoader(self.connection) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\loader.py", line 53, in __init__ self.build_graph() File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\loader.py", line 220, in build_graph self.applied_migrations = recorder.applied_migrations() File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\recorder.py", line 77, in applied_migrations if self.has_table(): File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\migrations\recorder.py", line 56, in has_table tables = self.connection.introspection.table_names(cursor) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\backends\base\introspection.py", line 52, in table_names return get_names(cursor) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\django\db\backends\base\introspection.py", line 47, in get_names return sorted(ti.name for ti in self.get_table_list(cursor) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\djongo\introspection.py", line 47, in get_table_list for c in cursor.db_conn.list_collection_names() File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\database.py", line 880, in list_collection_names for result in self.list_collections(session=session, **kwargs)] File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\database.py", line 842, in list_collections return self.__client._retryable_read( File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\mongo_client.py", line 1514, in _retryable_read server = self._select_server( File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\mongo_client.py", line 1346, in _select_server server = topology.select_server(server_selector) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 244, in select_server return random.choice(self.select_servers(selector, File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 202, in select_servers server_descriptions = self._select_servers_loop( File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\Lib\site-packages\pymongo\topology.py", line 218, in _select_servers_loop raise ServerSelectionTimeoutError( pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-02.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),cluster0-shard-00-01.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129),cluster0-shard-00-00.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129), Timeout: 30s, Topology Description: <TopologyDescription id: 6155f0c9148b07ff5851a1b3, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-00.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>, <ServerDescription ('cluster0-shard-00-01.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-01.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>, <ServerDescription ('cluster0-shard-00-02.mny7y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-02.mny7y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')>]> I am using djongo as the database engine DATABASES = { 'default': { 'ENGINE': 'djongo', 'NAME': 'DbName', 'ENFORCE_SCHEMA': False, 'CLIENT': { 'host': 'mongodb+srv://username:[email protected]/DbName?retryWrites=true&w=majority' } } } And the following dependencies are being used in the app dj-database-url==0.5.0 Django==3.2.5 djangorestframework==3.12.4 django-cors-headers==3.7.0 gunicorn==20.1.0 psycopg2==2.9.1 pytz==2021.1 whitenoise==5.3.0 djongo==1.3.6 dnspython==2.1.0 What should be done in order to resolve this error? | This is because a root CA that Let’s Encrypt used (and Mongo Atlas uses Let's Encrypt) expired on 2021-09-30 - namely the "IdenTrust DST Root CA X3" one. The fix is to manually install in the Windows certificate store the "ISRG Root X1" and "ISRG Root X2" root certificates, and the "Let’s Encrypt R3" intermediate one - link to their official site - https://letsencrypt.org/certificates/ Copied from the comments: download the .der file from the first category, double-click it and follow the wizard to install it. | 15 | 17 |
69,403,190 | 2021-10-1 | https://stackoverflow.com/questions/69403190/mypy-returns-error-unexpected-keyword-argument-for-subclass-of-a-decorated-cla | I have two decorated classes using attrs package as follows: @attr.s(kw_only=True) class Entity: """ base class of all entities """ entity_id = attr.ib(type=str) # ... @attr.s(kw_only=True) class Customer(Entity): customer_name = attr.ib(type=Name) # ... I get Unexpected keyword argument "entity_id" for "Customer" for code like this: def register_customer(customer_name: str): return Customer( entity_id=unique_id_generator(), customer_name=Name(full_name=customer_name), ) So how can I make Mypy aware of the __init__ method of my parent class. I should mention that the code works perfectly and there is (at least it seems) no runtime error. | Your code is correct and should work. If I run the following simplified version: import attr @attr.s(kw_only=True) class Entity: """ base class of all entities """ entity_id = attr.ib(type=str) # ... @attr.s(kw_only=True) class Customer(Entity): customer_name = attr.ib(type=str) def register_customer(customer_name: str) -> Customer: return Customer( entity_id="abc", customer_name=customer_name, ) # ... through Mypy 0.910 with attrs 21.2.0 on Python 3.9.7 I get: Success: no issues found in 1 source file My theories: Old Mypy (there's a lot of changes all times, sometimes it takes time for the attrs plugin to be updated with new features). Old attrs (we try to keep up with the changes in attrs and the features provided by Mypy). Python 2 (since you're using the old syntax). kw_only used to be Python 3-only and I wouldn't be surprised if mypy has some resident logic around it? | 5 | 4 |
69,406,767 | 2021-10-1 | https://stackoverflow.com/questions/69406767/cant-load-spacy-en-core-web-trf | As the self guide says, I've installed it with (conda environment) conda install -c conda-forge spacy python -m spacy download en_core_web_trf I have spacy-transformers already installed. But when I simply do: import spacy spacy.load("en_core_web_trf") It shows me this error: ValueError: [E002] Can't find factory for 'transformer' for language English (en). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components). Available factories: attribute_ruler, tok2vec, merge_noun_chunks, merge_entities, merge_subtokens, token_splitter, parser, beam_parser, entity_linker, ner, beam_ner, entity_ruler, lemmatizer, tagger, morphologizer, senter, sentencizer, textcat, spancat, textcat_multilabel, en.lemmatizer More info about the error: ValueError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_11108/2648447056.py in <module> ----> 1 nlp_en = spacy.load("en_core_web_trf") ~\Anaconda3\envs\rl\lib\site-packages\spacy\__init__.py in load(name, vocab, disable, exclude, config) 49 RETURNS (Language): The loaded nlp object. 50 """ ---> 51 return util.load_model( 52 name, vocab=vocab, disable=disable, exclude=exclude, config=config 53 ) ~\Anaconda3\envs\rl\lib\site-packages\spacy\util.py in load_model(name, vocab, disable, exclude, config) 345 return get_lang_class(name.replace("blank:", ""))() 346 if is_package(name): # installed as package --> 347 return load_model_from_package(name, **kwargs) 348 if Path(name).exists(): # path to model data directory 349 return load_model_from_path(Path(name), **kwargs) ~\Anaconda3\envs\rl\lib\site-packages\spacy\util.py in load_model_from_package(name, vocab, disable, exclude, config) 378 """ 379 cls = importlib.import_module(name) --> 380 return cls.load(vocab=vocab, disable=disable, exclude=exclude, config=config) 381 382 ~\Anaconda3\envs\rl\lib\site-packages\en_core_web_trf\__init__.py in load(**overrides) 8 9 def load(**overrides): ---> 10 return load_model_from_init_py(__file__, **overrides) ~\Anaconda3\envs\rl\lib\site-packages\spacy\util.py in load_model_from_init_py(init_file, vocab, disable, exclude, config) 538 if not model_path.exists(): 539 raise IOError(Errors.E052.format(path=data_path)) --> 540 return load_model_from_path( 541 data_path, 542 vocab=vocab, ~\Anaconda3\envs\rl\lib\site-packages\spacy\util.py in load_model_from_path(model_path, meta, vocab, disable, exclude, config) 413 overrides = dict_to_dot(config) 414 config = load_config(config_path, overrides=overrides) --> 415 nlp = load_model_from_config(config, vocab=vocab, disable=disable, exclude=exclude) 416 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides) 417 ~\Anaconda3\envs\rl\lib\site-packages\spacy\util.py in load_model_from_config(config, vocab, disable, exclude, auto_fill, validate) 450 # registry, including custom subclasses provided via entry points 451 lang_cls = get_lang_class(nlp_config["lang"]) --> 452 nlp = lang_cls.from_config( 453 config, 454 vocab=vocab, ~\Anaconda3\envs\rl\lib\site-packages\spacy\language.py in from_config(cls, config, vocab, disable, exclude, meta, auto_fill, validate) 1712 # The pipe name (key in the config) 
here is the unique name 1713 # of the component, not necessarily the factory -> 1714 nlp.add_pipe( 1715 factory, 1716 name=pipe_name, ~\Anaconda3\envs\rl\lib\site-packages\spacy\language.py in add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate) 774 lang_code=self.lang, 775 ) --> 776 pipe_component = self.create_pipe( 777 factory_name, 778 name=name, ~\Anaconda3\envs\rl\lib\site-packages\spacy\language.py in create_pipe(self, factory_name, name, config, raw_config, validate) 639 lang_code=self.lang, 640 ) --> 641 raise ValueError(err) 642 pipe_meta = self.get_factory_meta(factory_name) 643 # This is unideal, but the alternative would mean you always need to | Are you sure you did install spacy-transformers? After installing spacy? I am using pip: pip install spacy-transformers and I have no problems loading the en_core_web_trf. | 15 | 12 |
69,409,255 | 2021-10-1 | https://stackoverflow.com/questions/69409255/how-to-get-city-state-and-country-from-a-list-of-latitude-and-longitude-coordi | I have a 500,000 list of latitudes and longitudes coordinates like below: Latitude Longitude 42.022506 -88.168156 41.877445 -87.723846 29.986801 -90.166314 I am looking to use python to get the city, state, and country for each coordinate in a new column like below: Latitude Longitude City State Country 42.022506 -88.168156 Streamwood IL United States 41.877445 -87.723846 Chicago IL United States 29.986801 -90.166314 Metairie LA United States With this large of a dataset, how can this be achieved in python? I have heard of Google's API, Nominatim's API, and Geopy package. How do I get to run through all of the rows into this code? Right now I have to manually input the latitude and longitude into the last line. import csv import pandas as pd import numpy as np import math from geopy.geocoders import Nominatim input_file = "Lat-Log.csv" # file contains ID, Latitude, Longitude output_file = "output.csv" df = pd.read_csv(input_file) geolocator = Nominatim(user_agent="geoapiExercises") def city_state_country(coord): location = geolocator.reverse(coord, exactly_one=True) address = location.raw['address'] city = address.get('city', '') state = address.get('state', '') country = address.get('country', '') return city, state, country print(city_state_country("47.470706, -99.704723")) The output gives me ('Bowdon', 'North Dakota', 'USA'). I am looking to replace the coordinates with my columns (latitude and longitude) to run through my list. How do I input my columns into the code to run through the whole document? | You want to run a function on each row, which can be done using apply(). There are two complications, which is that you want to 1) provide multiple arguments to the function, and 2) get back multiple results. These questions explain how to do those things: python pandas- apply function with two arguments to columns Return multiple columns from pandas apply() Here's how to adapt your code to do this: import pandas as pd import io from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent="geoapiExercises") s = """Latitude Longitude 42.022506 -88.168156 41.877445 -87.723846 29.986801 -90.166314""" df = pd.read_csv(io.StringIO(s), delim_whitespace=True) def city_state_country(row): coord = f"{row['Latitude']}, {row['Longitude']}" location = geolocator.reverse(coord, exactly_one=True) address = location.raw['address'] city = address.get('city', '') state = address.get('state', '') country = address.get('country', '') row['city'] = city row['state'] = state row['country'] = country return row df = df.apply(city_state_country, axis=1) print(df) (I replaced your read_csv() call with an inline definition of the dataframe. Ignore that. It's not important to the example. I did that to make the example self-contained.) The city_state_country() function gets called with every row of the dataframe. (The axis=1 argument makes apply() run using rows rather than columns.) The function gets the lat and lon, and does a query. Then, it modifies the row to include the information from the query. This gets the following result: Latitude Longitude city state country 0 42.022506 -88.168156 Illinois United States 1 41.877445 -87.723846 Chicago Illinois United States 2 29.986801 -90.166314 Louisiana United States Not the same as your example, but Nominatim doesn't seem to return a city for two your your coordinates. 
(It calls them towns, not cities.) | 8 | 12 |
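Since Nominatim reports some places under 'town' or 'village' rather than 'city' (which is why some rows above come back blank), a small fallback in the lookup helps. This is a sketch built on the answer's function, not part of the original answer:

```python
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="geoapiExercises")

def city_state_country(row):
    coord = f"{row['Latitude']}, {row['Longitude']}"
    address = geolocator.reverse(coord, exactly_one=True).raw['address']
    # Nominatim may use 'town' or 'village' instead of 'city' for smaller places
    row['city'] = address.get('city') or address.get('town') or address.get('village') or ''
    row['state'] = address.get('state', '')
    row['country'] = address.get('country', '')
    return row

# df = df.apply(city_state_country, axis=1)  # same usage as in the answer
```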
69,403,474 | 2021-10-1 | https://stackoverflow.com/questions/69403474/hiding-bar-labels-less-than-n-in-matplotlib-bar-label | Am loving ease of ax.bar_label in recent matpolotlib update. I'm keen to hide low-value data labels for readability in the final plot to avoid overlapping labels. How would I hide labels less than a predefined value (here, let's say less than 0.025) in the code below? df_plot = pd.crosstab(df['Yr_Lvl_Cd'], df['Achievement_Cd'], normalize='index') ax = df_plot.plot(kind = 'bar', stacked = True, figsize= (10,12)) for c in ax.containers: ax.bar_label(c, label_type='center', color = "white") | You can filter the container's datavalues attribute (requires matplotlib >= 3.4.0) using your threshold: threshold = 0.025 for c in ax.containers: # Filter the labels labels = [v if v > threshold else "" for v in c.datavalues] ax.bar_label(c, labels=labels, label_type="center") | 9 | 14 |
69,388,274 | 2021-9-30 | https://stackoverflow.com/questions/69388274/python-pandas-drop-rows-by-condition | Hello, I need help to drop some rows by a condition: if the estimated price minus the price is more than 1500 (positive), drop the row. price estimated price 0 13295 13795 1 19990 22275 2 7295 6498 For example, only index 1 would be dropped. Thank you! | You can use DataFrame.drop(), which drops specific rows by index: >>> df.drop(df[(df['estimated price']-df['price'] >= 1500)].index) price estimated price 0 13295 13795 2 7295 6498 Index 1 is dropped. Note that this method assumes that your index is unique. If it isn't, boolean indexing (shown below) is the better solution. | 4 | 5 |
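For the boolean-indexing alternative mentioned at the end of the answer, here is a minimal sketch; it keeps the rows you want instead of dropping by index, so it also works when the index is not unique:

```python
import pandas as pd

df = pd.DataFrame({'price': [13295, 19990, 7295],
                   'estimated price': [13795, 22275, 6498]})

# Keep only the rows where the difference stays under the threshold
filtered = df[df['estimated price'] - df['price'] < 1500]
print(filtered)
```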
69,400,900 | 2021-10-1 | https://stackoverflow.com/questions/69400900/how-to-call-on-a-class-using-3-functions | I would like to create a Python class with 3 functions. Function 1 will ask the user to input one number. Function 2 will ask the user to input another number. Function 3 will multiply function 1 * function 2 and return to the product. Here is the code I have so far: class Product: def __init__(self, x, y): self.x = x self.y = y def get_x(self): x = int(input('What is the first number?: ') def get_y(self): y = int(input('What is the second number?: ') def mult_XY(self): return x * y When I try calling on the class I get 'y' is not defined. I tried using the following code: num = Product(x, y) print(num.mult_XY) | Here is a working solution. Compare it to your current solution and spot the differences. After the following code snippet, I will highlight concepts you need to research in order to understand this program better. Here is a correct version of your code (note: there is potentially more than one solution): class Product: def __init__(self): return def get_x(self): self.x = int(input('What is the first number?: ')) return self.x def get_y(self): self.y = int(input('What is the second number?: ')) return self.y def mult_XY(self): return self.x * self.y p = Product() x = p.get_x() y = p.get_y() result = p.mult_XY() print('RESULT of {} * {} = {}'.format(x, y, result)) Is this the best answer? No. Depending on the specifics of your program, the structure of the code could be different. You have gaps of knowledge on the following concepts: Objects and classes in Python Functions in Python Variables and scoping in Python In order to understand better, you need to learn more about the fundamentals of Python. Here is a good resource to get you started: https://python-textbok.readthedocs.io/en/1.0/Introduction.html After reading that, you will be able to not only answer this question, but also set the foundation for your programming knowledge. Don't give up and wish you the best. | 7 | 5 |
69,400,871 | 2021-10-1 | https://stackoverflow.com/questions/69400871/how-do-you-join-multiple-rows-into-one-row-in-pandas | I have a list that I'm trying to add to a dataframe. It looks something like this: list_one = ['apple','banana','cherry',' ', 'grape', 'orange', 'pineapple',''] If I add the list to a dataframe, using df = pd.DataFrame({'list_one':list_one}) it'll look like this: list_one ------------- 0 apple 1 banana 2 cherry 3 4 grape 5 orange 6 pineapple 7 I want to combine some of the rows into one row, so that the dataframe looks something like this: list_one ----------------------------- 0 apple, banana, cherry 1 grape, orange, pineapple Is there a simple way to do this? Thank you for taking the time to read my question and help in any way you can. | Create a mask of rows that contain a word with Series.str.contains, invert it with ~ and create groups with Series.cumsum, then keep only the matched rows and pass them to GroupBy.agg with a join function: m = df['list_one'].str.contains(r'\w+') df = df[m].groupby((~m).cumsum(), as_index=False).agg(', '.join) print (df) list_one 0 apple, banana, cherry 1 grape, orange, pineapple | 8 | 9 |
69,399,503 | 2021-9-30 | https://stackoverflow.com/questions/69399503/how-to-return-the-last-character-of-a-string-in-python | How can I get the last character of this string? seed_name = "Cocoa" | As shown in the official Python tutorial, >>> word = 'Python' [...] Indices may also be negative numbers, to start counting from the right: >>> word[-1] # last character 'n' | 8 | 29 |
69,398,013 | 2021-9-30 | https://stackoverflow.com/questions/69398013/seaborn-jointplot-turn-off-one-of-the-marginal-plots | Is there a way to turn off the top marginal plot in a Seaborn Jointplot? tips = sns.load_dataset('tips') g = sns.jointplot( data=tips, x="total_bill", y="tip", hue="smoker", ) | Perhaps just call remove on .ax_marg_x. g.ax_marg_x.remove() Output: (Note that ax_marg_y and ax_joint are the remaining plots, as detailed in the JointGrid docs). | 9 | 7 |
69,396,635 | 2021-9-30 | https://stackoverflow.com/questions/69396635/how-to-detect-eof-when-reading-a-file-with-readline-in-python | I need to read the file line by line with readline() and cannot easily change that. Roughly it is: with open(file_name, 'r') as i_file: while True: line = i_file.readline() # I need to check that EOF has not been reached, so that readline() really returned something The real logic is more involved, so I can't read the file at once with readlines() or write something like for line in i_file:. Is there a way to check readline() for EOF? Does it throw an exception maybe? It was very hard to find the answer on the internet because the documentation search redirects to something non-relevant (a tutorial rather than the reference, or GNU readline), and the noise on the internet is mostly about readlines() function. The solution should work in Python 3.6+. | From the documentation: f.readline() reads a single line from the file; a newline character (\n) is left at the end of the string, and is only omitted on the last line of the file if the file doesn’t end in a newline. This makes the return value unambiguous; if f.readline() returns an empty string, the end of the file has been reached, while a blank line is represented by '\n', a string containing only a single newline. with open(file_name, 'r') as i_file: while True: line = i_file.readline() if not line: break # do something with line | 10 | 10 |
69,395,834 | 2021-9-30 | https://stackoverflow.com/questions/69395834/deep-python-dictionary-recursion | I have a python dictionary: d = { "config": { "application": { "payment": { "dev": { "modes": {"credit,debit,emi": {}}, "company": { "address": { "city": {"London": {}}, "pincode": {"LD568162": {}}, }, "country": {"United Kingdom": {}}, "phone": {"7865432765": {}}, }, "levels": {"0,1,2": {}}, }, "prod": {"modes": {"credit,debit": {}}, "levels": {"0,1": {}}}, } } } } I want to change it to something like this(if the value is empty{} then make the key as value for its parent) d = { "config": { "application": { "payment": { "dev": { "modes": "credit,debit,emi", "company": { "address": { "city": "London", "pincode": "LD568162" }, "country": "United Kingdom", "phone": "7865432765" }, "levels": "0,1,2" }, "prod": { "modes": "credit,debit", "levels": "0,1" } } } } } i tried to write the code to traverse this deep dictionary, but couldn't modify it to get the above output. Please help. def recur(json_object): for x in list(json_object.items()): print(x) recur(json_object[x]) d={'config': {'application': {'payment': {'dev': {'modes': {'credit,debit,emi': {}}, 'company': {'address': {'city': {'London': {}}, 'pincode': {'LD568162': {}}}, 'country': {'United Kingdom': {}}, 'phone': {'7865432765': {}}}, 'levels': {'0,1,2': {}}}, 'prod': {'modes': {'credit,debit': {}}, 'levels': {'0,1': {}}}}}}} | Solution 1 We can use a non-recursive approach with queues to enqueue each inner/nested element of the document and put as value if the nested value is just {}: # d = ... queue = [d] while queue: data = queue.pop() for key, value in data.items(): if isinstance(value, dict) and list(value.values()) == [{}]: data[key] = list(value.keys())[0] elif isinstance(value, dict): queue.append(value) print(d) Output { "config": { "application": { "payment": { "dev": { "modes": "credit,debit,emi", "company": { "address": { "city": "London", "pincode": "LD568162" }, "country": "United Kingdom", "phone": "7865432765" }, "levels": "0,1,2" }, "prod": { "modes": "credit,debit", "levels": "0,1" } } } } } Solution 2 Here's a recursive approach # d = ... def recur(data): for key, value in data.items(): if isinstance(value, dict) and list(value.values()) == [{}]: data[key] = list(value.keys())[0] elif isinstance(value, dict): recur(value) recur(d) print(d) Output Same as Solution 1 | 5 | 2 |
69,395,130 | 2021-9-30 | https://stackoverflow.com/questions/69395130/understanding-the-asterisk-operator-in-python-when-its-before-the-function-in-a | I know that the asterisk is used to unpack values like system args or when you unpack lists into variables. But I have not seen this syntax here before in this example of asyncio. I was reading this article here, https://realpython.com/async-io-python/#the-10000-foot-view-of-async-io , but I don't understand what the asterisk operator is doing in this context. #!/usr/bin/env python3 # rand.py import asyncio import random # ANSI colors c = ( "\033[0m", # End of color "\033[36m", # Cyan "\033[91m", # Red "\033[35m", # Magenta ) async def makerandom(idx: int, threshold: int = 6) -> int: print(c[idx + 1] + f"Initiated makerandom({idx}).") i = random.randint(0, 10) while i <= threshold: print(c[idx + 1] + f"makerandom({idx}) == {i} too low; retrying.") await asyncio.sleep(idx + 1) i = random.randint(0, 10) print(c[idx + 1] + f"---> Finished: makerandom({idx}) == {i}" + c[0]) return i async def main(): res = await asyncio.gather(*(makerandom(i, 10 - i - 1) for i in range(3))) return res if __name__ == "__main__": random.seed(444) r1, r2, r3 = asyncio.run(main()) print() print(f"r1: {r1}, r2: {r2}, r3: {r3}") Under the async def main function right before the makerandom there is an asterisk. Could someone explain what it does in this context ? I am trying to understand how async / await works. I looked at this answer, Python asterisk before function but it doesn't really explain it. | The asterisk isn't before makerandom, it's before the generator expression (makerandom(i, 10 - i - 1) for i in range(3)) asyncio.gather doesn't take an iterable as its first argument; it accepts a variable number of awaitables as positional arguments. In order to get from a generator expression to that, you need to unpack the generator. In this particular instance, the asterisk unpacks asyncio.gather(*(makerandom(i, 10 - i - 1) for i in range(3))) into asyncio.gather(makerandom(0, 9), makerandom(1, 8), makerandom(2, 7)) | 6 | 9 |
69,392,936 | 2021-9-30 | https://stackoverflow.com/questions/69392936/can-i-introduce-the-output-of-variables-in-the-message-of-asserts | I am doing some validation of the input data of one program I have created. I am doing this with assert. If assertion arises I want to know in which part of the data occurs so I want to get the value that arises the assertion. assert all(isinstance(e, int) for l1 in sequence.values() for l2 in l1 for e in l2),"Values of the dictionnary aren't lists of integers. Assert found in '{l2}'" # This code doesn't work | Not when using all as it does not expose the "iteration variable". You will need an explicit, nested loop. You also forgot the f'' prefix to denote an f-string: for l1 in [['a']]: for l2 in l1: for e in l2: assert isinstance(e, int), f"Values of the dictionnary aren't lists of integers. Assert found in '{l2}'" AssertionError: Values of the dictionnary aren't lists of integers. Assert found in 'a' | 6 | 4 |
69,389,252 | 2021-9-30 | https://stackoverflow.com/questions/69389252/multiply-all-lists-in-list-of-lists | I have a list of masks and I want to obtain the resulting mask by multiplying all of them. My Donkey Kong approach is the following: a = [[1, 1], [1, 0], [1, 0]] b = a[0] for i in range(1, len(a)): b = b * np.array(a[i]) which I think it works as returns [1,0] as value of b. Is there a nicer way of doing this? EDIT: I am looking for common ranges in the mask. To find all the non zero ranges I do the following: label = 0 for i in range(1, len(labels)): label = label + np.array(labels[i]) label = [1 if x > 0 else 0 for x in label] | Take a look at np.prod, which returns the product of array elements over a given axis: import numpy as np a = [[1, 1], [1, 0], [1, 0]] np.prod(a, axis=0) | 5 | 8 |
69,372,488 | 2021-9-29 | https://stackoverflow.com/questions/69372488/python-3-6-9-importerror-no-module-named-setuptools-rust-and-a-command-pytho | I am trying to install pyOpenSSl and it shows the following error Requirement already satisfied: six>=1.5.2 in /home/tony/hx-preinstaller-venv/lib/python3.6/site-packages (from pyOpenSSL) Collecting cryptography>=3.3 (from pyOpenSSL) Using cached https://files.pythonhosted.org/packages/cc/98/8a258ab4787e6f835d350639792527d2eb7946ff9fc0caca9c3f4cf5dcfe/cryptography-3.4.8.tar.gz Complete output from command python setup.py egg_info: =============================DEBUG ASSISTANCE========================== If you are seeing an error here please try the following to successfully install cryptography: Upgrade to the latest pip and try again. This will fix errors for most users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip =============================DEBUG ASSISTANCE========================== Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-build-8toyhikv/cryptography/setup.py", line 14, in <module> from setuptools_rust import RustExtension ModuleNotFoundError: No module named 'setuptools_rust' The command I ran: pip install pyOpenSSL | Try upgrade pip and install setuptools-rust: pip install --upgrade pip pip install setuptools-rust | 11 | 23 |
69,378,901 | 2021-9-29 | https://stackoverflow.com/questions/69378901/get-ranges-where-values-are-not-none | The Goal I would like to get the ranges where values are not None in a list, so for example: test1 = [None, 0, None] test2 = [2,1,None] test3 = [None,None,3] test4 = [1,0,None,0,0,None,None,1,None,0] res1 = [[1,1]] res2 = [[0,1]] res3 = [[2,2]] res4 = [[0,1],[3,4],[7,7],[9,9]] What I have tried This is my super lengthy implementation, which does not perfectly work... def get_not_None_ranges(list_): # Example [0, 2, None, 1, 4] -> [[0, 1], [3, 4]] r = [] end_i = len(list_)-1 if list_[0] == None: s = None else: s = 0 for i, elem in enumerate(list_): if s != None: if elem == None and end_i != i: r.append([s,i-1]) s = i+1 if end_i == i: if s > i: r=r elif s==i and elem == None: r=r else: r.append([s,i]) else: if elem != None: s = i if end_i == i: if s > i: r=r else: r.append([s,i]) return r As you can see the results are sometimes wrong: print(get_not_None_ranges(test1)) print(get_not_None_ranges(test2)) print(get_not_None_ranges(test3)) print(get_not_None_ranges(test4)) [[1, 2]] [[0, 2]] [[2, 2]] [[0, 1], [3, 4], [6, 5], [7, 7], [9, 9]] So, I was wondering if you guys know a much better way to achieve this? | Use itertools.groupby: from itertools import groupby test1 = [None, 0, None] test2 = [2, 1, None] test3 = [None, None, 3] test4 = [1, 0, None, 0, 0, None, None, 1, None, 0] def get_not_None_ranges(lst): result = [] for key, group in groupby(enumerate(lst), key=lambda x: x[1] is not None): if key: index, _ = next(group) result.append([index, index + sum(1 for _ in group)]) return result print(get_not_None_ranges(test1)) print(get_not_None_ranges(test2)) print(get_not_None_ranges(test3)) print(get_not_None_ranges(test4)) Output [[1, 1]] [[0, 1]] [[2, 2]] [[0, 1], [3, 4], [7, 7], [9, 9]] | 7 | 4 |
69,376,943 | 2021-9-29 | https://stackoverflow.com/questions/69376943/how-can-i-insert-linebreak-in-yaml-with-ruamel-yaml | Here is the code I have from ruamel.yaml import YAML yaml = YAML() user = [{"login":"login1","fullName":"First1 Last1", "list":["a"]},{"login":"login2","fullName":"First2 Last2", "list":["b"]}] test = {"category":[{"year":2023,"users":user}]} yaml.indent(mapping=4, sequence=4, offset=2) yaml.width = 2048 with open(r'test.yml', 'w') as file: documents = yaml.dump(test, file) And I get this YAML file category: - year: 2023 users: - login: login1 fullName: First1 Last1 list: - a - login: login2 fullName: First2 Last2 list: - b I need to insert a linebreak between the two users (the final YAML should look like that) category: - year: 2023 users: - login: login1 fullName: First1 Last1 list: - a - login: login2 fullName: First2 Last2 list: - b How can I add this empty line? | You should load the result that you want in ruamel.yaml. For good measure you can then dump it back to see if the extra line is preserved. If it isn't you might not be able to write out such a format in the first place. As you will see the extra line is preserved, so you should be able to get it inothe output in some way if you can reconstruct the loaded data from scratch. ruamel.yaml normally attaches comments to the node just parsed, so you should investigate the sequence that is the value for the first key 'list': import sys import ruamel.yaml yaml_str = """\ category: - year: 2023 users: - login: login1 fullName: First1 Last1 list: - a - login: login2 fullName: First2 Last2 list: - b """ yaml = ruamel.yaml.YAML() # yaml.indent(mapping=4, sequence=4, offset=2) # yaml.preserve_quotes = True data = yaml.load(yaml_str) # yaml.dump(data, sys.stdout) seq = data['category'][0]['users'][0]['list'] print('seq', type(seq), seq.ca) which gives: seq <class 'ruamel.yaml.comments.CommentedSeq'> Comment(comment=None, items={0: [CommentToken('\n\n', line: 6, col: 12), None, None, None]}) In that seq.ca is the comment attribute. Such an attribute cannot be added to a normal list, so at least for that part of your data structure user you need to create a CommentedSeq instance. You can also see that the comment token consists of two newlines (\n\n). The first newline indicates that the end-of-line comment following 'a' the first element (indicated by 0) is empty, the second newline is your actual empty line before the second "user". However it is actually easier to insert the line before the second user. The CommentedSeq instance has a method that allows you to add a comment before a key (in this case the key is the element index): import sys import ruamel.yaml CS = ruamel.yaml.comments.CommentedSeq yaml = ruamel.yaml.YAML() user = CS([{"login":"login1","fullName":"First1 Last1", "list":["a"]},{"login":"login2","fullName":"First2 Last2", "list":["b"]}]) user.yaml_set_comment_before_after_key(1, before='\n') test = {"category":[{"year":2023,"users":user}]} yaml.indent(mapping=4, sequence=4, offset=2) yaml.width = 2048 documents = yaml.dump(test, sys.stdout) which gives: category: - year: 2023 users: - login: login1 fullName: First1 Last1 list: - a - login: login2 fullName: First2 Last2 list: - b Make sure you fix the version of ruamel.yaml you are using in your installation, routines like yaml_set_comment_before_after_key may still change. During testing I usually write to stdout, easier to see if the output is correct. 
If you write the YAML document to a file, you should consider using the extension .yaml, which has been the recommended extension for such files for at least 15 years. | 6 | 3 |
69,375,868 | 2021-9-29 | https://stackoverflow.com/questions/69375868/extract-month-from-datetime-column-in-pandas-dataframe | I have a DataFrame read from Excel with one of the columns of type DateTime. sales_data=pandas.read_excel(r'Sample Sales Data.xlsx') I was able to extract substrings from other columns using str.extract/lambda functions. But I was unable to process the column "Order Date". The command sales_data['Order Date'] gives the below output As recommended in other StackOverflow questions, I tried with sales_data['Order Date'].apply(lambda x:x.str.slice()) I got this error: AttributeError: 'datetime.datetime' object has no attribute 'str' To check the type of the Order Date column, I tried sales_data['Order Date'].apply(lambda x:type(x)) I got the type datetime.datetime But when I tried the datetime operation sales_data['Order Date'].apply(lambda x:x.strftime("m")) I got the error: AttributeError: 'int' object has no attribute 'strftime' I got a similar error for the command sales_data['Order Date'].apply(lambda x:x.dt.month) Please suggest a method to extract the month from the datetime object into another column without iterating through the DataFrame. I am not able to use datetime or int functions with this column since it is behaving as both a datetime and int column. | I found the issue. The sales_data['Order Date'] column had a mix of both date and int values due to some input data inaccuracy. I found this by running sales_data['DateType']=sales_data['Order Date'].apply(lambda x:type(x)) sales_data['DateType'].unique() returned array([<class 'datetime.datetime'>, <class 'int'>], dtype=object) I cleaned this DataFrame by filtering out the rows whose type is not datetime. type1=sales_data['DateType'][0] new_df=sales_data[sales_data['DateType']==type1] Now the new dataframe supports the date and string operations. pd.to_datetime(new_df['Order Date']).dt.month This can be assigned to other columns. | 6 | 0 |
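A slightly more compact sketch of the same cleanup (assuming sales_data as loaded in the question), filtering on the type directly instead of storing it in a helper column first:

```python
import datetime
import pandas as pd

# Keep only the rows whose 'Order Date' really is a datetime object
mask = sales_data['Order Date'].apply(lambda x: isinstance(x, datetime.datetime))
clean = sales_data[mask].copy()

# The month is then available through the .dt accessor
clean['Order Month'] = pd.to_datetime(clean['Order Date']).dt.month
```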
69,372,201 | 2021-9-29 | https://stackoverflow.com/questions/69372201/get-excel-column-letter-based-on-column-header-python | This seems relatively straight forward, but I have yet to find a duplicate that answers my question, or a method with the needed functionality. I have an Excel spreadsheet where each column that contains data has a unique header. I would like to use pandas to get the letter key of the column by passing this header string into a function. For example, if I pass "Output Parameters" to the function, I would like to return "M": The closest thing that I have found is the xlsxwriter.utility.xl_col_to_name(index) method outlined in the second answer in Convert spreadsheet number to column letter This is very similar to what I am trying to do, however the column numbers within my sheets will not remain constant (unlike the headers, which will). This being said, a method that can return the column number based on the header would also work, as I would then be able to apply the above. Does pandas or xlsxwriter have a method that can handle the above case? | You can try: import xlsxwriter col_no = df.columns.get_loc("col_name") print(xlsxwriter.utility.xl_col_to_name(col_no)) | 5 | 8 |
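Wrapped into the kind of helper the question describes — pass a header string, get the column letter back. This is an illustrative sketch; the function name and toy DataFrame are mine:

```python
import pandas as pd
from xlsxwriter.utility import xl_col_to_name

def column_letter_for_header(df: pd.DataFrame, header: str) -> str:
    # get_loc returns the 0-based position, which xl_col_to_name expects
    return xl_col_to_name(df.columns.get_loc(header))

df = pd.DataFrame(columns=['Input Parameters', 'Output Parameters'])
print(column_letter_for_header(df, 'Output Parameters'))  # 'B' for this toy frame
```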
69,370,507 | 2021-9-29 | https://stackoverflow.com/questions/69370507/prompting-importerror-no-module-named-py27-urlquote-when-running-dev-appserve | When I run dev_appserver.py on google-cloud-sdk, I get ImportError: No module named py27_urlquote. Traceback (most recent call last): File "/Users/user/Downloads/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 109, in <module> _run_file(__file__, globals()) File "/Users/user/Downloads/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 103, in _run_file _execfile(_PATHS.script_file(script_name), globals_) File "/Users/user/Downloads/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 83, in _execfile execfile(fn, scope) File "/Users/user/Downloads/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 44, in <module> from google.appengine.tools.devappserver2 import dispatcher File "/Users/user/Downloads/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 43, in <module> from google.appengine.tools.devappserver2 import module File "/Users/user/Downloads/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 39, in <module> import py27_urlquote ImportError: No module named py27_urlquote I have tried the following: Reinstall Cloud SDK Find out about the py27_urlquote module (I couldn't find any such information ...) Change the version of CLOUDSDK_PYTHON to 2.7 or 3.8 and execute | Right now this is a public issue and is currently being addressed by our Google Engineering Team. A workaround was provided for you to run your local development server: Install pip for Python 2 sudo apt update sudo apt install python-pip Install urlquote instead of py27_urlquote pip install urlquote Modify module.py located on your local directory from the error message /Users/user/Downloads/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py Replace py27_urlquote to urlquote. There are 3 lines of code that uses py27_urlquote. Specifically lines 39, 833, and 836. You can check this public tracker similar to your issue for more information and updates. | 5 | 3 |
69,363,867 | 2021-9-28 | https://stackoverflow.com/questions/69363867/difference-between-os-replace-and-os-rename | I want to move a file form one directory to another in linux with python. I wish to achieve a behavior similar to bash mv command. What is the difference in practice between the two commands os.replace() os.rename() Is it simply that os.rename() will raise an error if file exists in destination while os.replace() will overwrite it? Also if another secondary difference that I see it that the os.replace() needs a file as a destination not just the directory. I can find a direct answer anywhere. | On POSIX systems, the rename system call will silently replace the destination file if the user has sufficient permissions. The same is not true on Windows: a FileExistsError is always raised. os.replace and os.rename are the same function on POSIX systems, but on Windows os.replace will call MoveFileExW with the MOVEFILE_REPLACE_EXISTING flag set to give the same effect as on POSIX systems. If you want consistent cross-platform behaviour you should consider using os.replace throughout. | 14 | 16 |
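A short sketch of how this plays out in practice; the fallback to shutil.move is my addition for the mv-like case where source and destination may sit on different filesystems:

```python
import os
import shutil

def move(src: str, dst: str) -> None:
    """mv-like move: silently overwrite dst, and cope with cross-filesystem moves."""
    try:
        os.replace(src, dst)   # atomic overwrite when src and dst share a filesystem
    except OSError:
        shutil.move(src, dst)  # copies + deletes when a plain rename isn't possible

# Tiny demo with throwaway files in the current directory
with open("demo_src.txt", "w") as f:
    f.write("hello")
move("demo_src.txt", "demo_dst.txt")
print(os.path.exists("demo_dst.txt"))  # True
```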
69,363,178 | 2021-9-28 | https://stackoverflow.com/questions/69363178/resize-to-specific-height-and-width-with-pyvips | I find this answer, and I want to use pyvips to resize images. In the mentioned answer and the official documentation image resized by scale. However, I want to resize the image to a specific height and width. Is there any way to achieve this with pyvips? | The thumbnail operation in pyvips will load an image to fit a box, for example: thumb = pyvips.Image.thumbnail("some-file.jpg", 128) Will load some-file.jpg and make an image that fits within 128x128 pixels. Variations on thumbnail can load from strings, buffers or pipes, load to fit other boxes, check the chapter on vipsthumbnail for details. If you already have a loaded image, thumbnail_image will resize to pixel dimensions in the same way, but can be quite a bit slower. | 6 | 5 |
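A sketch of loading to an exact width and height; I'm assuming the height and size options of thumbnail/thumbnail_image here (with size="force" ignoring the aspect ratio), so check them against your pyvips version:

```python
import pyvips

# Fit within a 640x480 box, preserving the aspect ratio
thumb = pyvips.Image.thumbnail("some-file.jpg", 640, height=480)

# Resize to exactly 640x480, ignoring the aspect ratio
exact = pyvips.Image.thumbnail("some-file.jpg", 640, height=480, size="force")

# Same options on an image that is already loaded (slower, as the answer notes)
image = pyvips.Image.new_from_file("some-file.jpg")
exact_from_image = image.thumbnail_image(640, height=480, size="force")
```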
69,352,179 | 2021-9-27 | https://stackoverflow.com/questions/69352179/package-streamlit-app-and-run-executable-on-windows | this is my first question on Stackoverflow. I hope my question is clear, otherwise let me know and don't hesitate to ask me more details. I'm trying to package a streamlit app for a personal project. I'm developing under linux but I have to deploy the app on Windows. I want it to be a standalone executable, which once run opens the browser tab to display the app, and exits when the tab is closed. I would like to use pynsist library to package the app (already used for another project and it worked fine). I followed the suggestion found in this discussion. It worked fine on ubuntu, and apparently also on Windows after packaging the app with pynsist. "Apparently" because the executable run, but no browser tab was open to display the app. Here is some snippets of my code. Project structure |- installer.cfg |- src |- main.py |- run_app.py main.py import streamlit as st st.title("Test") st.title("My first app deployed with Pynsist!") run_app.py (EDIT 2 after comment by Thomas K) import os import subprocess import sys from src.config import EnvironmentalVariableNames as EnvVar, get_env def main(): executable = sys.executable result = subprocess.run( f"{executable} -m streamlit run {os.path.join(get_env(EnvVar.EMPORIO_VESTIARIO_DASHBOARD_WORKING_DIR), 'src', 'main.py')}", shell=True, capture_output=True, text=True, ) if __name__ == "__main__": main() EMPORIO_VESTIARIO_DASHBOARD_WORKING_DIR is an environmental variable to make the app work on both linux and windows (on windows, it is set to the installation directory). pynsist installer.cfg EDIT: including dependencies of streamlit discovered through pip list EDIT 2: added MarkupSafe as dependency of Jinja2 [Application] name=Emporio Vestiario Dashboard version=0.1.0 # How to lunch the app - this calls the 'main' function from the 'myapp' package: entry_point=src.run_app:main icon=resources/caritas-logo.ico [Python] version=3.8.10 bitness=64 [Include] # Packages from PyPI that your application requires, one per line # These must have wheels on PyPI: pypi_wheels = altair==4.1.0 astor==0.8.1 attrs==21.2.0 backcall==0.2.0 backports.zoneinfo==0.2.1 base58==2.1.0 bleach==4.1.0 blinker==1.4 cachetools==4.2.2 certifi==2021.5.30 cffi==1.14.6 charset-normalizer==2.0.6 click==7.1.2 decorator==5.1.0 defusedxml==0.7.1 distlib==0.3.3 entrypoints==0.3 idna==3.2 jsonschema==3.2.0 mistune==0.8.4 mypy-extensions==0.4.3 numpy==1.21.1 packaging==21.0 pandas==1.3.3 pandocfilters==1.5.0 parso==0.8.2 pillow==8.3.2 platformdirs==2.4.0 prompt-toolkit==3.0.20 protobuf==3.18.0 pyarrow==5.0.0 pycparser==2.20 pydeck==0.7.0 pyparsing==2.4.7 pyrsistent==0.18.0 python-dateutil==2.8.2 pytz==2021.1 requests==2.26.0 requests-download==0.1.2 send2trash==1.8.0 setuptools==57.0.0 six==1.14.0 smmap==4.0.0 streamlit==0.89.0 terminado==0.12.1 testpath==0.5.0 toml==0.10.2 tomli==1.2.1 toolz==0.11.1 tornado==6.1 traitlets==5.1.0 typing-extensions==3.10.0.2 tzlocal==3.0 urllib3==1.26.7 validators==0.18.2 Jinja2==3.0.1 MarkupSafe==2.0.1 Looking at the executable output on Windows, the current working directory is correctly printed, but no other output (streamlit app initialization message, or error messages) is printed. I tried to open the browser and go to localhost:8501, but I got connection error. Any hints on how to make the code execute and automatically open the browser tab? Any help is greatly appreciated! 
EDIT: as pointed out in the comment to the last package in installer.cfg, the app (with Jinja2 dependency) is correctly installed on windows, but when launched, the app still cannot find Jinja2 dependency. This is the traceback: Traceback (most recent call last): File "Emporio_Vestiario_Dashboard.launch.pyw", line 34, in <module> from src.run_app import main File "C:\Users\tantardini\develop\caritas\pkgs\src\run_app.py", line 6, in <module> import streamlit File "C:\Users\tantardini\develop\caritas\pkgs\streamlit\__init__.py", line 75, in <module> from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File "C:\Users\tantardini\develop\caritas\pkgs\streamlit\delta_generator.py", line 70, in <module> from streamlit.elements.arrow import ArrowMixin File "C:\Users\tantardini\develop\caritas\pkgs\streamlit\elements\arrow.py", line 20, in <module> from pandas.io.formats.style import Styler File "C:\Users\tantardini\develop\caritas\pkgs\pandas\io\formats\style.py", line 49, in <module> jinja2 = import_optional_dependency("jinja2", extra="DataFrame.style requires jinja2.") File "C:\Users\tantardini\develop\caritas\pkgs\pandas\compat\_optional.py", line 118, in import_optional_dependency raise ImportError(msg) from None ImportError: Missing optional dependency 'Jinja2'. DataFrame.style requires jinja2. Use pip or conda to install Jinja2. EDIT 2: thanks to the helpful hints by Thomas K, I came up with half a solution. The app runs and streamlit is started. But. These are the log messages: Welcome to Streamlit! If you're one of our development partners or you're interested in getting personal technical support or Streamlit updates, please enter your email address below. Otherwise, you may leave the field blank. Email: 2021-10-11 20:56:53.202 WARNING streamlit.config: Warning: the config option 'server.enableCORS=false' is not compatible with 'server.enableXsrfProtection=true'. As a result, 'server.enableCORS' is being overridden to 'true'. More information: In order to protect against CSRF attacks, we send a cookie with each request. To do so, we must specify allowable origins, which places a restriction on cross-origin resource sharing. If cross origin resource sharing is required, please disable server.enableXsrfProtection. 2021-10-11 20:56:53.202 DEBUG streamlit.logger: Initialized tornado logs 2021-10-11 20:56:53.202 ERROR streamlit.credentials: It seems that the execution of the app is stopped becuase it is waiting for some credentials. I found here that a .streamlit/credentials.toml can be added, but I'm not sure on the exact location on windows. I've also tried to explicitly add --server.headless=false in the subprocess.run command, but again with no effect. Why the app doesn't start automatically like on Linux? Is there a way to start the app without additional configurations by the user? | EDIT: a streamlit example was added to the examples of pynsist repo. Here you can find a minimal and refined example of a working application (which also includes plotly). ORIGINAL ANSWER Finally I get it to work. In my last attempt, I made a mistake by setting --server.headless=false, while it must be true instead. I found that an additional flag to the streamlit run command is needed: --global.developmentMode=false. This make the deploy work, even if I could not find any reference to this configuration in the streamlit configurations. Working code follows. 
Project structure |- wheels/ |- installer.cfg |- src |- main.py |- run_app.py main.py import streamlit as st st.title("Test") st.title("My first app deployed with Pynsist!") run_app.py import os import subprocess import sys import webbrowser from src.config import EnvironmentalVariableNames as EnvVar, get_env def main(): # Getting path to python executable (full path of deployed python on Windows) executable = sys.executable # Open browser tab. May temporarily display error until streamlit server is started. webbrowser.open("http://localhost:8501") # Run streamlit server path_to_main = os.path.join( get_env(EnvVar.EMPORIO_VESTIARIO_DASHBOARD_WORKING_DIR), "src", "app.py" ) result = subprocess.run( f"{executable} -m streamlit run {path_to_main} --server.headless=true --global.developmentMode=false", shell=True, capture_output=True, text=True, ) # These are printed only when server is stopped. # NOTE: you have to manually stop streamlit server killing process. print(result.stdout) print(result.stderr) if __name__ == "__main__": main() Some notes: webbrowser.open is necessary to automatically open a new tab in the browser to show the streamlit app. The subprocess.run lines only starts a new streamlit server. As I pointed out in the comments, once exiting the streamlit tab in the browser, the streamlit server is still there and active. You may access again the dashboard by only typing localhost:8501 in the address bar. If you click multiple times on the Windows app icon, multiple streamlit servers are started. I've tried with only two active at the same time, and they do not show conflicting behaviour. To stop them you have to manually end tasks through task manager, for instance. installer.cfg [Application] name=Emporio Vestiario Dashboard version=0.1.0 # How to lunch the app - this calls the 'main' function from the 'myapp' package: entry_point=src.run_app:main icon=resources/caritas-logo.ico [Python] version=3.8.10 bitness=64 [Include] # Packages from PyPI that your application requires, one per line # These must have wheels on PyPI: pypi_wheels = altair==4.1.0 astor==0.8.1 attrs==21.2.0 backcall==0.2.0 backports.zoneinfo==0.2.1 base58==2.1.0 bleach==4.1.0 blinker==1.4 cachetools==4.2.2 certifi==2021.5.30 cffi==1.14.6 charset-normalizer==2.0.6 click==7.1.2 decorator==5.1.0 defusedxml==0.7.1 distlib==0.3.3 entrypoints==0.3 idna==3.2 jsonschema==3.2.0 mistune==0.8.4 mypy-extensions==0.4.3 numpy==1.21.1 packaging==21.0 pandas==1.3.3 pandocfilters==1.5.0 parso==0.8.2 pillow==8.3.2 platformdirs==2.4.0 prompt-toolkit==3.0.20 protobuf==3.18.0 pyarrow==5.0.0 pycparser==2.20 pydeck==0.7.0 pyparsing==2.4.7 pyrsistent==0.18.0 python-dateutil==2.8.2 pytz==2021.1 requests==2.26.0 requests-download==0.1.2 send2trash==1.8.0 setuptools==57.0.0 six==1.14.0 smmap==4.0.0 streamlit==0.89.0 terminado==0.12.1 testpath==0.5.0 toml==0.10.2 tomli==1.2.1 toolz==0.11.1 tornado==6.1 traitlets==5.1.0 typing-extensions==3.10.0.2 tzlocal==3.0 urllib3==1.26.7 validators==0.18.2 Jinja2==3.0.1 MarkupSafe==2.0.1 extra_wheel_sources = ./wheels Note: blinker extra wheels is required. EDIT 2 As @ananvodo pointed out, it may not be immediately clear what EnvVar and get_env are. I copy here the relevant parts of config.py for sake of completeness. 
import os class EnvironmentalVariableNames: """Defines the names of the environmental variables used in the code and useful shortcuts""" # Environmental variables for Zaccheo app EMPORIO_VESTIARIO_DASHBOARD_WORKING_DIR= "EMPORIO_VESTIARIO_DASHBOARD_WORKING_DIR" # Other environmental variable names defined hereafter def get_env(env_var): """Returns the value of the environment variable env_var""" return os.environ[env_var] | 5 | 5 |
69,306,103 | 2021-9-23 | https://stackoverflow.com/questions/69306103/is-it-possible-to-change-the-output-alias-in-pydantic | Setup: # Pydantic Models class TMDB_Category(BaseModel): name: str = Field(alias="strCategory") description: str = Field(alias="strCategoryDescription") class TMDB_GetCategoriesResponse(BaseModel): categories: list[TMDB_Category] @router.get(path="category", response_model=TMDB_GetCategoriesResponse) async def get_all_categories(): async with httpx.AsyncClient() as client: response = await client.get(Endpoint.GET_CATEGORIES) return TMDB_GetCategoriesResponse.parse_obj(response.json()) Problem: Alias is being used when creating a response, and I want to avoid it. I only need this alias to correctly map the incoming data but when returning a response, I want to use actual field names. Actual response: { "categories": [ { "strCategory": "Beef", "strCategoryDescription": "Beef is ..." }, { "strCategory": "Chicken", "strCategoryDescription": "Chicken is ..." } ] } Expected response: { "categories": [ { "name": "Beef", "description": "Beef is ..." }, { "name": "Chicken", "description": "Chicken is ..." } ] } | Use the by_alias argument of .dict(). from fastapi import FastAPI, Path, Query from pydantic import BaseModel, Field app = FastAPI() class Item(BaseModel): name: str = Field(..., alias="keck") @app.post("/item") async def read_items( item: Item, ): return item.dict(by_alias=False) Given the request: { "keck": "string" } this will return { "name": "string" } | 37 | 18 |
69,355,161 | 2021-9-28 | https://stackoverflow.com/questions/69355161/git-filter-repo-commands-output-nothing-on-windows | I installed git-filter-repo via scoop, tried multiple git filter-repo commands e.g. git filter-repo -h, they all logged nothing, no warning or error, just nothing. Tried rebooting, reinstalling, and installing it on another Windows 10 computer, all reproduced it. git-filter-repo: v2.33.0 git: v2.33.0.windows.2 python: v3.9.7 scoop: Current Scoop version: 09200504 (HEAD -> master, origin/master, origin/HEAD) reset: skip when app instance is running (#4359) 'main' bucket: b71f4a842 (HEAD -> master, origin/master, origin/HEAD) nunit-extension-vs-project-loader: Update to version 3.9.0 How to solve this issue? | (Now updated for newer Python installers.) When I installed git-filter-repo on Windows earlier this year, the following steps worked for me: Download and install Python for Windows. In newer installers you need to go into the Advanced Options to make sure Python is added to your Path: Confirm python was added to your path and that you can run either the command python --version or python3 --version from your Git command line. (I recommend Git Bash.) In my case, my executable name is python and if yours is too, you will need this in step #7 below. Clone git-filter-repo from GitHub. git clone https://github.com/newren/git-filter-repo.git Run the command git --exec-path to see your Git exe directory. From the git-filter-repo repo's root directory, copy the file git-filter-repo (about 160KB) into your Git exe directory. In your command line where you use Git, type the command git filter-repo. If it works, you should get the message "No arguments specified." and you can skip step #7. If it doesn't work, it's likely that your python exe is python instead of python3 as determined in step #2. Go to the next step. If you get no message or an error message similar to "/usr/bin/env: ‘python3’: No such file or directory", then edit the file git-filter-repo that you copied into your Git exe directory in step #5, and change the first line from "python3" to "python". Now be amazed at how fast and awesome git-filter-repo is. Still having problems? If you didn't add the environment variable in step #1, some people have had luck in step #7 by changing their python command to just "py". This is the python launcher which can auto-detect the highest version installed on your machine. More info here. I should point out that this did not work for me with python 3.10.7. I actually tried this first but ended up re-installing and enabling the option to "Add python to environment variables" as described above in step #1. | 15 | 77 |
69,298,452 | 2021-9-23 | https://stackoverflow.com/questions/69298452/how-to-write-or-in-a-glob-pattern | glob.glob() does not use regex. It uses Unix path expansion rules. How can I emulate this regex in glob: ".*.jpg|.*.png" | @U12-Forward is correct that there isn't an exact solution, but depending on your use case you might be able to solve it with the [...] wildcard. For your example with .png or .jpg, you could use this: .*.[jp]*, which will match any extension that starts with a j or p. If you have other extensions that start with j or p, you can be more specific: .*.[jp][pn]g | 20 | 12 |
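When the [...] wildcard is not precise enough, another common workaround (a sketch, not part of the answer above; the patterns are illustrative) is to glob once per pattern and merge the results:

# Run one glob per extension and merge, emulating the "|" alternation of a regex.
import glob
from itertools import chain

patterns = ("*.jpg", "*.png")  # illustrative patterns
matches = sorted(set(chain.from_iterable(glob.glob(p) for p in patterns)))
print(matches)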
69,294,350 | 2021-9-23 | https://stackoverflow.com/questions/69294350/1d-cnn-in-tensorflow-for-time-series-classification | My Time-Series is a 30000 x 500 table representing points from three different types of graphs: Linear, Quadratic, and Cubic Sinusoidal. Thus, there are 10000 Rows for Linear Graphs, 10000 for Quadratics, and 10000 for Cubics. I have sampled 500 points from every graph. Here's an image to illustrate my point: I've built a 98% accurate 2D CNN using TensorFlow, but now I want to build a 1D CNN using TensorFlow. Do I just replace every Conv2D layer with Conv1D? If so, what would my filters and kernel_size be? I don't even know how to import my 1D pandas dataframe. My 2D CNN has the following architecture: model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.Rescaling(1./255), tf.keras.layers.Conv1D( 32, 3, activation='relu', input_shape=input_shape[2:])(x), #32 FILTERS and square stride of size 3 tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(num_classes) ]) If anyone can help, that would be great. Thank you. Below is an MWE and my 2D CNN is here. num_classes = 3 model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.Rescaling(1./255), tf.keras.layers.Conv2D(32, 3, activation='relu'), #32 FILTERS and square stride of size 3 tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(num_classes) ]) epochs = 5 initial_learning_rate = 1 decay = initial_learning_rate / epochs def lr_time_based_decay(epoch, lr): return lr * 1 / (1 + decay * epoch) history = model.fit( train_ds, validation_data=val_ds, epochs= epochs, callbacks= [tensorboard_callback, tf.keras.callbacks.LearningRateScheduler(lr_time_based_decay, verbose=1)] ) | Conv1D equivalent code. Conv1D layer expects 3D input and outputs 3D shape. Maxpooling2D expects 4D input. You need to use maxpooling1D layer. 
Sample code import tensorflow as tf input_shape = (4, 7, 10, 128) num_classes = 3 model = tf.keras.models.Sequential() model.add(tf.keras.layers.Conv1D(filters= 32, kernel_size=3, activation='relu',padding='same',input_shape= input_shape[2:])) model.add(tf.keras.layers.MaxPooling1D()) model.add(tf.keras.layers.Conv1D(filters=32, kernel_size=3,padding='same',activation='relu')) model.add(tf.keras.layers.MaxPooling1D()) model.add(tf.keras.layers.Conv1D(filters=32, kernel_size=3,padding='same',activation='relu')) model.add(tf.keras.layers.MaxPooling1D()) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(128, activation='relu')) model.add(tf.keras.layers.Dense(num_classes, activation='softmax')) model.summary() Output Model: "sequential_13" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv1d_35 (Conv1D) (None, 10, 32) 12320 max_pooling1d_20 (MaxPoolin (None, 5, 32) 0 g1D) conv1d_36 (Conv1D) (None, 5, 32) 3104 max_pooling1d_21 (MaxPoolin (None, 2, 32) 0 g1D) conv1d_37 (Conv1D) (None, 2, 32) 3104 max_pooling1d_22 (MaxPoolin (None, 1, 32) 0 g1D) flatten_8 (Flatten) (None, 32) 0 dense_15 (Dense) (None, 128) 4224 dense_16 (Dense) (None, 3) 387 ================================================================= Total params: 23,139 Trainable params: 23,139 Non-trainable params: 0 | 5 | 4 |
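The question above also asks how to feed the 30000 x 500 pandas table into such a Conv1D model, which the answer does not cover. The following is a rough sketch under the assumption that each row of the table is one sample and that a separate array of integer class labels exists; the random data here is only a stand-in for the real table.

# Reshape a (samples, 500) table into Conv1D input of shape (samples, timesteps, channels).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(300, 500)))   # stand-in for the 30000 x 500 table
labels = rng.integers(0, 3, size=len(df))        # stand-in class ids: 0=linear, 1=quadratic, 2=cubic sinusoidal

X = df.to_numpy(dtype="float32")[..., np.newaxis]  # (samples, 500, 1) -> one channel per point
y = labels.astype("int64")
print(X.shape, y.shape)
# A Conv1D stack like the one above would then use input_shape=(500, 1),
# and model.fit(X, y, ...) with a sparse categorical cross-entropy loss.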
69,330,668 | 2021-9-25 | https://stackoverflow.com/questions/69330668/efficient-way-to-extract-data-from-netcdf-files | I have a number of coordinates (roughly 20000) for which I need to extract data from a number of NetCDF files, each of which comes with roughly 30000 timesteps (future climate scenarios). Using the solution here is not efficient and the reason is the time spent at each i,j to convert "dsloc" to "dataframe" (look at the code below). ** an example NetCDF file could be downloaded from here ** import pandas as pd import xarray as xr import time #Generate some coordinates coords_data = [{'lat': 68.04, 'lon': 15.20, 'stid':1}, {'lat':67.96, 'lon': 14.95, 'stid': 2}] crd= pd.DataFrame(coords_data) lat = crd["lat"] lon = crd["lon"] stid=crd["stid"] NC = xr.open_dataset(nc_file) point_list = zip(lat,lon,stid) start_time = time.time() for i,j,id in point_list: print(i,j) dsloc = NC.sel(lat=i,lon=j,method='nearest') print("--- %s seconds ---" % (time.time() - start_time)) DT=dsloc.to_dataframe() DT.insert(loc=0,column="station",value=id) DT.reset_index(inplace=True) temp=temp.append(DT,sort=True) print("--- %s seconds ---" % (time.time() - start_time)) which results in: 68.04 15.2 --- 0.005853414535522461 seconds --- --- 9.02660846710205 seconds --- 67.96 14.95 --- 9.028568267822266 seconds --- --- 16.429600715637207 seconds --- which means each i,j takes around 9 seconds to process. Given lots of coordinates and netcdf files with large timesteps, I wonder if there is a pythonic way in which the code could be optimized. I could also use CDO and NCO operators but I found a similar issue using them too. | This is a perfect use case for xarray's advanced indexing using a DataArray index. # Make the index on your coordinates DataFrame the station ID, # then convert to a dataset. # This results in a Dataset with two DataArrays, lat and lon, each # of which are indexed by a single dimension, stid crd_ix = crd.set_index('stid').to_xarray() # now, select using the arrays, and the data will be re-oriented to have # the data only for the desired pixels, indexed by 'stid'. The # non-indexing coordinates lat and lon will be indexed by (stid) as well. NC.sel(lon=crd_ix.lon, lat=crd_ix.lat, method='nearest') Other dimensions in the data will be ignored, so if your original data has dimensions (lat, lon, z, time) your new data would have dimensions (stid, z, time). | 8 | 9 |
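If the station-indexed table that the original loop was building is still needed, the vectorized selection can be flattened back to pandas afterwards. This is only a sketch that reuses NC and crd from the question; the exact output columns depend on the dimensions of the dataset.

# Vectorized point extraction, then back to a long-format DataFrame keyed by station id.
crd_ix = crd.set_index('stid').to_xarray()
extracted = NC.sel(lon=crd_ix.lon, lat=crd_ix.lat, method='nearest')
df_out = extracted.to_dataframe().reset_index()  # roughly one row per (stid, time) with lat/lon and the data variables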
69,349,620 | 2021-9-27 | https://stackoverflow.com/questions/69349620/in-operator-chaining-true-in-true-in-true-output-false | I'm trying to figure out in what order this code runs: print( True in [True] in [True] ) False even though: print( ( True in [True] ) in [True] ) True and: print( True in ( [True] in [True] ) ) TypeError If the first snippet is equivalent to neither of the last two, then what is it? | in is a comparison operator, and comparison operators chain, so True in [True] in [True] is equivalent to (True in [True]) and ([True] in [True]) (except that the middle [True] is evaluated only once), which is True and False, which is False. This is similar to 2 < 4 < 12, which is equivalent to (2 < 4) and (4 < 12). | 6 | 11 |
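A small side-effect experiment (not part of the original answer, just a sketch) makes the "evaluated once" detail of chained comparisons visible:

# The middle operand of a chained comparison is evaluated exactly once.
def middle():
    print("middle evaluated")
    return [True]

result = True in middle() in [True]  # prints "middle evaluated" a single time
print(result)                        # False, i.e. (True in [True]) and ([True] in [True])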
69,352,472 | 2021-9-27 | https://stackoverflow.com/questions/69352472/lookup-values-by-corresponding-column-header-in-pandas-1-2-0-or-newer | The operation pandas.DataFrame.lookup is "Deprecated since version 1.2.0", and has since invalidated a lot of previous answers. This post attempts to function as a canonical resource for looking up corresponding row col pairs in pandas versions 1.2.0 and newer. Standard LookUp Values With Default Range Index Given the following DataFrame: df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}) Col A B 0 B 1 5 1 A 2 6 2 A 3 7 3 B 4 8 I would like to be able to lookup the corresponding value in the column specified in Col: I would like my result to look like: Col A B Val 0 B 1 5 5 1 A 2 6 2 2 A 3 7 3 3 B 4 8 8 Standard LookUp Values With a Non-Default Index Non-Contiguous Range Index Given the following DataFrame: df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=[0, 2, 8, 9]) Col A B 0 B 1 5 2 A 2 6 8 A 3 7 9 B 4 8 I would like to preserve the index but still find the correct corresponding Value: Col A B Val 0 B 1 5 5 2 A 2 6 2 8 A 3 7 3 9 B 4 8 8 MultiIndex df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']])) Col A B C E B 1 5 F A 2 6 D E A 3 7 F B 4 8 I would like to preserve the index but still find the correct corresponding Value: Col A B Val C E B 1 5 5 F A 2 6 2 D E A 3 7 3 F B 4 8 8 LookUp with Default For Unmatched/Not-Found Values Given the following DataFrame df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}) Col A B 0 B 1 5 1 A 2 6 2 A 3 7 3 C 4 8 # Column C does not correspond with any column I would like to look up the corresponding values if one exists otherwise I'd like to have it default to 0 Col A B Val 0 B 1 5 5 1 A 2 6 2 2 A 3 7 3 3 C 4 8 0 # Default value 0 since C does not correspond LookUp with Missing Values in the lookup Col Given the following DataFrame: Col A B 0 B 1 5 1 A 2 6 2 A 3 7 3 NaN 4 8 # <- Missing Lookup Key I would like any NaN values in Col to result in a NaN value in Val Col A B Val 0 B 1 5 5.0 1 A 2 6 2.0 2 A 3 7 3.0 3 NaN 4 8 NaN # NaN to indicate missing | Standard LookUp Values With Any Index The documentation on Looking up values by index/column labels recommends using NumPy indexing via factorize and reindex as the replacement for the deprecated DataFrame.lookup. import numpy as np import pandas as pd df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=[0, 2, 8, 9]) idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx] df Col A B Val 0 B 1 5 5 1 A 2 6 2 2 A 3 7 3 3 B 4 8 8 factorize is used to convert the column encode the values as an "enumerated type". idx, col = pd.factorize(df['Col']) # idx = array([0, 1, 1, 0], dtype=int64) # col = Index(['B', 'A'], dtype='object') Notice that B corresponds to 0 and A corresponds to 1. reindex is used to ensure that columns appear in the same order as the enumeration: df.reindex(columns=col) B A # B appears First (location 0) A appers second (location 1) 0 5 1 1 6 2 2 7 3 3 8 4 We need to create an appropriate range indexer compatible with NumPy indexing. 
The standard approach is to use np.arange based on the length of the DataFrame: np.arange(len(df)) [0 1 2 3] Now NumPy indexing will work to select values from the DataFrame: df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx] [5 2 3 8] *Note: This approach will always work regardless of type of index. MultiIndex import numpy as np import pandas as pd df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']])) idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx] Col A B Val C E B 1 5 5 F A 2 6 2 D E A 3 7 3 F B 4 8 8 Why use np.arange and not df.index directly? Standard Contiguous Range Index import pandas as pd df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}) idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx] In this case only, there is no error as the result from np.arange is the same as the df.index. df Col A B Val 0 B 1 5 5 1 A 2 6 2 2 A 3 7 3 3 B 4 8 8 Non-Contiguous Range Index Error Raises IndexError: df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=[0, 2, 8, 9]) idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx] df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx] IndexError: index 8 is out of bounds for axis 0 with size 4 MultiIndex Error df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']])) idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx] Raises IndexError: df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx] IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices LookUp with Default For Unmatched/Not-Found Values There are a few approaches. First let's look at what happens by default if there is a non-corresponding value: import numpy as np import pandas as pd df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}) # Col A B # 0 B 1 5 # 1 A 2 6 # 2 A 3 7 # 3 C 4 8 idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx] Col A B Val 0 B 1 5 5.0 1 A 2 6 2.0 2 A 3 7 3.0 3 C 4 8 NaN # NaN Represents the Missing Value in C If we look at why the NaN values are introduced, we will find that when factorize goes through the column it will enumerate all groups present regardless of whether they correspond to a column or not. 
For this reason, when we reindex the DataFrame we will end up with the following result: idx, col = pd.factorize(df['Col']) df.reindex(columns=col) idx = array([0, 1, 1, 2], dtype=int64) col = Index(['B', 'A', 'C'], dtype='object') df.reindex(columns=col) B A C 0 5 1 NaN 1 6 2 NaN 2 7 3 NaN 3 8 4 NaN # Reindex adds the missing column with the Default `NaN` If we want to specify a default value, we can specify the fill_value argument of reindex which allows us to modify the behaviour as it relates to missing column values: idx, col = pd.factorize(df['Col']) df.reindex(columns=col, fill_value=0) idx = array([0, 1, 1, 2], dtype=int64) col = Index(['B', 'A', 'C'], dtype='object') df.reindex(columns=col, fill_value=0) B A C 0 5 1 0 1 6 2 0 2 7 3 0 3 8 4 0 # Notice reindex adds missing column with specified value `0` This means that we can do: idx, col = pd.factorize(df['Col']) df['Val'] = df.reindex( columns=col, fill_value=0 # Default value for Missing column values ).to_numpy()[np.arange(len(df)), idx] df: Col A B Val 0 B 1 5 5 1 A 2 6 2 2 A 3 7 3 3 C 4 8 0 *Notice the dtype of the column is int, since NaN was never introduced, and, therefore, the column type was not changed. LookUp with Missing Values in the lookup Col factorize has a default na_sentinel=-1, meaning that when NaN values appear in the column being factorized the resulting idx value is -1 import numpy as np import pandas as pd df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}) # Col A B # 0 B 1 5 # 1 A 2 6 # 2 A 3 7 # 3 NaN 4 8 # <- Missing Lookup Key idx, col = pd.factorize(df['Col']) # idx = array([ 0, 1, 1, -1], dtype=int64) # col = Index(['B', 'A'], dtype='object') df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx] # Col A B Val # 0 B 1 5 5 # 1 A 2 6 2 # 2 A 3 7 3 # 3 NaN 4 8 4 <- Value From A This -1 means that, by default, we'll be pulling from the last column when we reindex. Notice the col still only contains the values B and A. Meaning, that we will end up with the value from A in Val for the last row. The easiest way to handle this is to fillna Col with some value that cannot be found in the column headers. Here I use the empty string '': idx, col = pd.factorize(df['Col'].fillna('')) # idx = array([0, 1, 1, 2], dtype=int64) # col = Index(['B', 'A', ''], dtype='object') Now when I reindex, the '' column will contain NaN values meaning that the lookup produces the desired result: import numpy as np import pandas as pd df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}) idx, col = pd.factorize(df['Col'].fillna('')) df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx] df: Col A B Val 0 B 1 5 5.0 1 A 2 6 2.0 2 A 3 7 3.0 3 NaN 4 8 NaN # Missing as expected | 10 | 12 |
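For reuse, the factorize/reindex recipe above can be bundled into one helper. This is just a convenience wrapper over the exact steps shown, with the fillna('') and fill_value tricks folded in; note that the single default parameter here covers both missing keys and unmatched column labels.

import numpy as np
import pandas as pd

def lookup(df: pd.DataFrame, key_col: str, default=np.nan) -> np.ndarray:
    """For each row, return the value in the column named by df[key_col]; `default` for missing/unmatched keys."""
    idx, cols = pd.factorize(df[key_col].fillna(''))
    values = df.reindex(columns=cols, fill_value=default).to_numpy()
    return values[np.arange(len(df)), idx]

df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]})
df['Val'] = lookup(df, 'Col')
print(df)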
69,312,922 | 2021-9-24 | https://stackoverflow.com/questions/69312922/how-to-encrypt-large-file-using-python | I'm trying to encrypt file that is larger than 1GB. I don't want to read it all to memory. I chose Fernet (cryptography.fernet) for this task, because it was most recommended (faster than asymetric solutions). I generated the key. Then I've created a script to encrypt: key = Fernet(read_key()) with open(source, "rb") as src, open(destination, "wb") as dest: for chunk in iter(lambda: src.read(4096), b""): encrypted = key.encrypt(chunk) dest.write(encrypted) and for decryption: key = Fernet(read_key()) with open(source, "rb") as src, open(destination, "wb") as dest: for chunk in iter(lambda: src.read(4096), b""): decrypted = key.decrypt(chunk) dest.write(decrypted) Encryption works - no surprise, but decryption is not. Firstly I thought that it might work, but it's not. I guess chunk size increases when encrypted, and then when I'm reading 4096 bytes, it's not a whole encrypted chunk. I've got an error trying to decrypt: Traceback (most recent call last): File "/redacted/path/venv/lib/python3.7/site-packages/cryptography/fernet.py", line 119, in _verify_signature h.verify(data[-32:]) File "/redacted/path/venv/lib/python3.7/site-packages/cryptography/hazmat/primitives/hmac.py", line 74, in verify ctx.verify(signature) File "/redacted/path/venv/lib/python3.7/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 75, in verify raise InvalidSignature("Signature did not match digest.") cryptography.exceptions.InvalidSignature: Signature did not match digest. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/redacted/path/main.py", line 63, in <module> decrypted = key.decrypt(chunk) File "/redacted/path/venv/lib/python3.7/site-packages/cryptography/fernet.py", line 80, in decrypt return self._decrypt_data(data, timestamp, time_info) File "/redacted/path/venv/lib/python3.7/site-packages/cryptography/fernet.py", line 137, in _decrypt_data self._verify_signature(data) File "/redacted/path/venv/lib/python3.7/site-packages/cryptography/fernet.py", line 121, in _verify_signature raise InvalidToken cryptography.fernet.InvalidToken Is there's a way to solve this? Maybe there's a better (simpler) approach with different solution than fernet? | Fernet is not supposed to be used in a streaming fashion. They explain that in the documentation: From the documentation (last section): Limitations Fernet is ideal for encrypting data that easily fits in memory. As a design feature it does not expose unauthenticated bytes. This means that the complete message contents must be available in memory, making Fernet generally unsuitable for very large files at this time. | 5 | 4 |
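One chunked scheme that does round-trip with Fernet is to encrypt each chunk into its own token and store one token per line; Fernet tokens are URL-safe base64, so they never contain a newline. This is only a sketch (file names and the 4096-byte chunk size are illustrative, not an official recipe), and each chunk is authenticated individually, so an attacker could still drop or reorder whole chunks unless you add your own framing.

import os
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

with open("big.bin", "wb") as fh:          # create a dummy input file for the demo
    fh.write(os.urandom(1024 * 1024))

with open("big.bin", "rb") as src, open("big.enc", "wb") as dest:
    for chunk in iter(lambda: src.read(4096), b""):
        dest.write(f.encrypt(chunk) + b"\n")   # one token per line

with open("big.enc", "rb") as src, open("big.out", "wb") as dest:
    for token in src:
        dest.write(f.decrypt(token.rstrip(b"\n")))

assert open("big.bin", "rb").read() == open("big.out", "rb").read()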
69,334,475 | 2021-9-26 | https://stackoverflow.com/questions/69334475/how-to-hint-at-number-types-i-e-subclasses-of-number-not-numbers-themselv | Assuming I want to write a function that accepts any type of number in Python, I can annotate it as follows: from numbers import Number def foo(bar: Number): print(bar) Taking this concept one step further, I am writing functions which accept number types, i.e. int, float or numpy dtypes, as arguments. Currently, I am writing: from typing import Type def foo(bar: Type): assert issubclass(bar, Number) print(bar) I thought I could substitute Type with something like NumberType (similar to NotImplementedType and friends, re-introduced in Python 3.10), because all number types are subclasses of Number: from numbers import Number import numpy as np assert issubclass(int, Number) assert issubclass(np.uint8, Number) As it turns out (or at least as far as I can tell), there is no such thing as a generic NumberType in Python (3.9): >>> type(Number) abc.ABCMeta Is there a clean way (i.e. without runtime checks) to achieve the desired kind of annotation? | In general, how do we hint classes, rather than instances of classes? In general, if we want to tell a type-checker that any instance of a certain class (or any instance of a subclass of that class) should be accepted as an argument to a function, we do it like so: def accepts_int_instances(x: int) -> None: pass class IntSubclass(int): pass accepts_int_instances(42) # passes MyPy (an instance of `int`) accepts_int_instances(IntSubclass(666)) # passes MyPy (an instance of a subclass of `int`) accepts_int_instances(3.14) # fails MyPy (an instance of `float` — `float` is not a subclass of `int`) Try it on MyPy playground here! If, on the other hand, we have a class C, and we want to hint that the class C itself (or a subclass of C) should be passed as an argument to a function, we use type[C] rather than C. (In Python <= 3.8, you will need to use typing.Type instead of the builtin type function, but as of Python 3.9 and PEP 585, we can parameterise type directly.) def accepts_int_and_subclasses(x: type[int]) -> None: pass class IntSubclass(int): pass accepts_int_and_subclasses(int) # passes MyPy accepts_int_and_subclasses(float) # fails Mypy (not a subclass of `int`) accepts_int_and_subclasses(IntSubclass) # passes MyPy Try it on MyPy playground here! Read the MyPy docs on "the type of class objects". See also: Subclass in type hinting. How can we annotate a function to say that any numeric class should be accepted for a certain parameter? int, float, and all the numpy numeric types are all subclasses of numbers.Number, so we should be able to just use type[Number] if we want to say that all numeric classes are permitted! At least, Python says that float and int are subclasses of Number: >>> from numbers import Number >>> issubclass(int, Number) True >>> issubclass(float, Number) True And if we're using a runtime type-checking library such as typeguard, using type[Number] seems to work fine: >>> from typeguard import typechecked >>> from fractions import Fraction >>> from decimal import Decimal >>> import numpy as np >>> >>> @typechecked ... def foo(bar: type[Number]) -> None: ... pass ... >>> foo(str) Traceback (most recent call last): TypeError: argument "bar" must be a subclass of numbers.Number; got str instead >>> foo(int) >>> foo(float) >>> foo(complex) >>> foo(Decimal) >>> foo(Fraction) >>> foo(np.int64) >>> foo(np.float32) >>> foo(np.ulonglong) >>> # etc. But wait! 
If we try using type[Number] with a static type-checker, it doesn't seem to work. If we run the following snippet through MyPy, it raises an error for each class except for fractions.Fraction: from numbers import Number from fractions import Fraction from decimal import Decimal NumberType = type[Number] def foo(bar: NumberType) -> None: pass foo(float) # fails foo(int) # fails foo(Fraction) # succeeds! foo(Decimal) # fails Try it on MyPy playground here! Surely Python wouldn't lie to us about float and int being subclasses of Number. What's going on? Why type[Number] doesn't work as a static type hint for numeric classes While issubclass(float, Number) and issubclass(int, Number) both evaluate to True, neither float nor int is, in fact, a "strict" subclass of numbers.Number. numbers.Number is an Abstract Base Class, and int and float are both registered as "virtual subclasses" of Number. This causes Python at runtime to recognise float and int as "subclasses" of Number, even though Number is not in the method resolution order of either of them. See this StackOverflow question for an explanation of what a class's "method resolution order", or "mro", is. >>> # All classes have `object` in their mro >>> class Foo: pass >>> Foo.__mro__ (<class '__main__.Foo'>, <class 'object'>) >>> >>> # Subclasses of a class have that class in their mro >>> class IntSubclass(int): pass >>> IntSubclass.__mro__ (<class '__main__.IntSubclass'>, <class 'int'>, <class 'object'>) >>> issubclass(IntSubclass, int) True >>> >>> # But `Number` is not in the mro of `int`... >>> int.__mro__ (<class 'int'>, <class 'object'>) >>> # ...Yet `int` still pretends to be a subclass of `Number`! >>> from numbers import Number >>> issubclass(int, Number) True >>> #?!?!!?? What's an Abstract Base Class? Why is numbers.Number an Abstract Base Class? What's "virtual subclassing"? The docs for Abstract Base Classes ("ABCs") are here. PEP 3119, introducing Abstract Base Classes, is here. The docs for the numbers module are here. PEP 3141, which introduced numbers.Number, is here. I can recommend this talk by Raymond Hettinger, which has a detailed explanation of ABCs and the purposes of virtual subclassing. The issue is that MyPy does not understand the "virtual subclassing" mechanism that ABCs use (and, perhaps never will). MyPy does understand some ABCs in the standard library. For instance, MyPy knows that list is a subtype of collections.abc.MutableSequence, even though MutableSequence is an ABC, and list is only a virtual subclass of MutableSequence. However, MyPy only understands list to be a subtype of MutableSequence because we've been lying to MyPy about the method resolution order for list. MyPy, along with all other major type-checkers, uses the stubs found in the typeshed repository for its static analysis of the classes and modules found in the standard library. If you look at the stub for list in typeshed, you'll see that list is given as being a direct subclass of collections.abc.MutableSequence. That's not true at all — MutableSequence is written in pure Python, whereas list is an optimised data structure written in C. But for static analysis, it's useful for MyPy to think that this is true. Other collections classes in the standard library (for example, tuple, set, and dict) are special-cased by typeshed in much the same way, but numeric types such as int and float are not. If we lie to MyPy about collections classes, why don't we also lie to MyPy about numeric classes? A lot of people (including me!) 
think we should, and discussions have been ongoing for a long time about whether this change should be made (e.g. typeshed proposal, MyPy issue). However, there are various complications to doing so. Credit goes to @chepner in the comments for finding the link to the MyPy issue. Possible solution: use duck-typing One possible (albeit slightly icky) solution here might be to use typing.SupportsFloat. SupportsFloat is a runtime-checkable protocol that has a single abstractmethod, __float__. This means that any classes that have a __float__ method are recognised — both at runtime and by static type-checkers — as being subtypes of SupportsFloat, even if SupportsFloat is not in the class's method resolution order. What's a protocol? What's duck-typing? How do protocols do what they do? Why are some protocols, but not all protocols, checkable at runtime? PEP 544, introducing typing.Protocol and structural-typing/duck-typing, explains in detail how typing.Protocol works. It also explains how static type-checkers are able to recognise classes such as float and int as subtypes of SupportsFloat, even though SupportsFloat does not appear in the method resolution order for int or float. The Python docs for structural subtyping are here. The Python docs for typing.Protocol are here. The MyPy docs for typing.Protocol are here. The Python docs for typing.SupportsFloat are here. The source code for typing.SupportsFloat is here. By default, protocols cannot be checked at runtime with isinstance and issubclass. SupportsFloat is checkable at runtime because it is decorated with the @runtime_checkable decorator. Read the documentation for that decorator here. Note: although user-defined protocols are only available in Python >= 3.8, SupportsFloat has been in the typing module since the module's addition to the standard library in Python 3.5. Advantages of this solution Comprehensive support* for all major numeric types: fractions.Fraction, decimal.Decimal, int, float, np.int32, np.int16, np.int8, np.int64, np.int0, np.float16, np.float32, np.float64, np.float128, np.intc, np.uintc, np.int_, np.uint, np.longlong, np.ulonglong, np.half, np.single, np.double, np.longdouble, np.csingle, np.cdouble, and np.clongdouble all have a __float__ method. If we annotate a function argument as being of type[SupportsFloat], MyPy correctly accepts* the types that conform to the protocol, and correctly rejects the types that do not conform to the protocol. It is a fairly general solution — you do not need to explicitly enumerate all possible types that you wish to accept. Works with both static type-checkers and runtime type-checking libraries such as typeguard. Disadvantages of this solution It feels like (and is) a hack. Having a __float__ method isn't anybody's reasonable idea of what defines a "number" in the abstract. Mypy does not recognise complex as a subtype of SupportsFloat. complex does, in fact, have a __float__ method in Python <= 3.9. However, it does not have a __float__ method in the typeshed stub for complex. Since MyPy (along with all other major type-checkers) uses typeshed stubs for its static analysis, this means it is unaware that complex has this method. complex.__float__ is likely omitted from the typeshed stub due to the fact that the method always raises TypeError; for this reason, the __float__ method has in fact been removed from the complex class in Python 3.10. Any user-defined class, even if it is not a numeric class, could potentially define __float__. 
In fact, there are even several non-numeric classes in the standard library that define __float__. For example, although the str type in Python (which is written in C) does not have a __float__ method, collections.UserString (which is written in pure Python) does. (The source code for str is here, and the source code for collections.UserString is here.) Example usage This passes MyPy for all numeric types I tested it with, except for complex: from typing import SupportsFloat NumberType = type[SupportsFloat] def foo(bar: NumberType) -> None: pass Try it on MyPy playground here! If we want complex to be accepted as well, a naive tweak to this solution would just be to use the following snippet instead, special-casing complex. This satisfies MyPy for every numeric type I could think of. I've also thrown type[Number] into the type hint, as it could be useful to catch a hypothetical class that does directly inherit from numbers.Number and does not have a __float__ method. I don't know why anybody would write such a class, but there are some classes that directly inherit from numbers.Number (e.g. fractions.Fraction), and it certainly would be theoretically possible to create a direct subclass of Number without a __float__ method. Number itself is an empty class that has no methods — it exists solely to provide a "virtual base class" for other numeric classes in the standard library. from typing import SupportsFloat, Union from numbers import Number NumberType = Union[type[SupportsFloat], type[complex], type[Number]] # You can also write this more succinctly as: # NumberType = type[Union[SupportsFloat, complex, Number]] # The two are equivalent. # In Python >= 3.10, we can even write it like this: # NumberType = type[SupportsFloat | complex | Number] # See PEP 604: https://www.python.org/dev/peps/pep-0604/ def foo(bar: NumberType) -> None: pass Try it on MyPy playground here! Translated into English, NumberType here is equivalent to: Any class, if and only if: It has a __float__ method; AND/OR it is complex; AND/OR it is a subclass of complex; AND/OR it is numbers.Number; AND/OR it is a "strict" (non-virtual) subclass of numbers.Number. I don't see this as a "solution" to the problem with complex — it's more of a workaround. The issue with complex is illustrative of the dangers of this approach in general. There may be other unusual numeric types in third-party libraries, for example, that do not directly subclass numbers.Number or have a __float__ method. It would be exceedingly difficult to know what they might look like in advance, and special-case them all. Addenda Why SupportsFloat instead of typing.SupportsInt? fractions.Fraction has a __float__ method (inherited from numbers.Rational) but does not have an __int__ method. Why SupportsFloat instead of SupportsAbs? Even complex has an __abs__ method, so typing.SupportsAbs looks like a promising alternative at first glance! However, there are several other classes in the standard library that have an __abs__ method and do not have a __float__ method, and it would be a stretch to argue that they are all numeric classes. (datetime.timedelta doesn't feel very number-like to me.) If you used SupportsAbs rather than SupportsFloat, you would risk casting your net too wide and allowing all sorts of non-numeric classes. Why SupportsFloat instead of SupportsRound? As an alternative to SupportsFloat, you could also consider using typing.SupportsRound, which accepts all classes that have a __round__ method. 
This is just as comprehensive as SupportsFloat (it covers all major numeric types other than complex). It also has the advantage that collection.UserString has no __round__ method whereas, as discussed above, it does have a __float__ method. Lastly, it seems less likely that third-party non-numeric classes would include a __round__ method. However, if you went for SupportsRound rather than SupportsFloat, you would, in my opinion, run a greater risk of excluding valid third-party numeric classes that, for whatever reason, do not define __round__. "Having a __float__ method" and "having a __round__ method" are both pretty poor definitions of what it means for a class to be a "number". However, the former feels much closer to the "true" definition than the latter. As such, it feels safer to count on third-party numeric classes having a __float__ method than it does to count on them having a __round__ method. If you wanted to be "extra safe" when it comes to ensuring a valid third-party numeric type is accepted by your function, I can't see any particular harm in extending NumberType even further with SupportsRound: from typing import SupportsFloat, SupportsRound, Union from numbers import Number NumberType = Union[type[SupportsFloat], type[SupportsRound], type[complex], type[Number]] However, I would question whether it's really necessary to include SupportsRound, given that any type that has a __round__ method is very likely to have a __float__ method as well. *...except for complex | 30 | 60 |
69,332,196 | 2021-9-26 | https://stackoverflow.com/questions/69332196/mutiprocessing-with-spawn-context-cannot-access-shared-variables-in-linux | I have to use the Process method with "spawn" context in Linux. Then I write a sample code as follows: from multiprocessing import Value import multiprocessing class Test(object): def __init__(self, m_val): print("step1") self.m_val = m_val print("step2") self.m_val_val = m_val.value self.prints() def prints(self): print("self.m_val_val:%d"%self.m_val_val) def main(m_val): t = Test(m_val) if __name__ == "__main__": N = 2 procs = [] v = Value("i",10) for i in range(0,N): proc_i = multiprocessing.get_context("spawn").Process(target=main,args=(v,)) proc_i.daemon=True procs.append(proc_i) for i in range(0,N): procs[i].start() for i in range(0,N): procs[i].join() When I run this code in Linux, it will print: step1 step2 step1 step2 while in Windows, the print content will be: step1 step2 self.m_val_val:10 step1 step2 self.m_val_val:10 Besides, there is no error information printed on the screen. So, how can I solve this problem, i.e., how to use multiprocessing Value in among processes while using "spawn" context in Linux? | The problem is that you are creating Value in the default context, which is fork on Unix. You can resolve this by setting the default start context to "spawn": multiprocessing.set_start_method("spawn") # Add this v = Value("i",10) Better yet, create the Value in the context explicitly: # v = Value("i",10) # Change this ctx = multiprocessing.get_context("spawn") # to this v = ctx.Value("i",10) # for i in range(0,N): # proc_i = multiprocessing.get_context("spawn").Process(target=main,args=(v,)) # (Optional) Refactor this proc_i = ctx.Process(target=main,args=(v,)) # to this Reference From https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods: spawn Available on Unix and Windows. The default on Windows and macOS. fork Available on Unix only. The default on Unix. Note that objects related to one context may not be compatible with processes for a different context. In particular, locks created using the fork context cannot be passed to processes started using the spawn or forkserver start methods. Reproducing this issue on macOS This issue can be reproduced on macOS with Python < 3.8. From https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods: Changed in version 3.8: On macOS, the spawn start method is now the default. To reproduce this issue on macOS with Python 3.8 and above: multiprocessing.set_start_method("fork") # Add this v = Value("i",10) Error message and stack trace OSError: [Errno 9] Bad file descriptor Traceback (most recent call last): File "/path/to/python/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/path/to/python/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/path/to/file.py", line 18, in main t = Test(m_val) File "/path/to/file.py", line 10, in __init__ self.m_val_val = m_val.value File "<string>", line 3, in getvalue OSError: [Errno 9] Bad file descriptor | 6 | 4 |
69,326,748 | 2021-9-25 | https://stackoverflow.com/questions/69326748/poetry-install-command-fails-whl-files-are-not-found | I am managing dependencies in my Python project via Poetry. Now I want to run this project in a machine which is different from my dev machine. To install dependecies, I simply run this command from the root directory: $ poetry install but then it raises the following errors: Updating dependencies Resolving dependencies... Writing lock file Package operations: 70 installs, 0 updates, 0 removals • Installing colorama (0.4.4) • Installing tzdata (2021.1) ValueError File \C:\Users\tteguayco\AppData\Local\pypoetry\Cache\artifacts\9e\b3\11\7d87ac44fdb2d557301f1f4086a37c080d1482a98751abe7cdbabbad26\colorama-0.4.4-py2.py3-none-any.whl does not exist at ~\AppData\Local\Programs\Python\Python39\lib\site-packages\poetry\core\packages\file_dependency.py:40 in __init__ 36│ except FileNotFoundError: 37│ raise ValueError("Directory {} does not exist".format(self._path)) 38│ 39│ if not self._full_path.exists(): → 40│ raise ValueError("File {} does not exist".format(self._path)) 41│ 42│ if self._full_path.is_dir(): 43│ raise ValueError("{} is a directory, expected a file".format(self._path)) 44│ ValueError File \C:\Users\tteguayco\AppData\Local\pypoetry\Cache\artifacts\45\2d\cb\6443e36999e7ab3926d5385dfac9ee9ea2a62f8111ff71abb6aff70674\tzdata-2021.1-py2.py3-none-any.whl does not exist at ~\AppData\Local\Programs\Python\Python39\lib\site-packages\poetry\core\packages\file_dependency.py:40 in __init__ 36│ except FileNotFoundError: 37│ raise ValueError("Directory {} does not exist".format(self._path)) 38│ 39│ if not self._full_path.exists(): → 40│ raise ValueError("File {} does not exist".format(self._path)) 41│ 42│ if self._full_path.is_dir(): 43│ raise ValueError("{} is a directory, expected a file".format(self._path)) 44│ It would be good to know what these *.whl are and how they are used by Poetry. | Specifically I found that deleting the AppData\Local\pypoetry\Cache\artifacts folder (I'm on Windows 10) worked for me. virtualenvs for other projects may be in AppData\Local\pypoetry\Cache\virtualenvs so you might not want to delete the root cache folder at AppData\Local\pypoetry\Cache in its entirety. | 22 | 38 |
69,312,333 | 2021-9-24 | https://stackoverflow.com/questions/69312333/django-admin-two-listfilter-spanning-multi-valued-relationships | I have a Blog model and an Entry model, following the example in django's documentation. Entry has a ForeignKey to Blog: one Blog has several Entries. I have two FieldListFilters for Blog: one for "Entry title", one for "Entry published year". If in the Blog list admin page I filter for both entry__title='Lennon' and entry__published_year=2008, then I see all Blogs which have at least one entry with title "Lennon" and at least one entry from 2008. They do not have to be the same entry. However, that's not what I want. What I want is to filter blogs which have entries that have both got the title "Lennon" and are from 2008. So for example say I have this data: Blog Entry Title Entry year A McCartney 2008 A Lennon 2009 B Lennon 2008 The admin list page for Blog currently filters in Blog A, because it has one entry from 2008 and one entry for "Lennon", as well as Blog B. I only want to see Blog B. This is because django does this when it builds the queryset: qs = qs.filter(title_filter) qs = qs.filter(published_filter) As per the docs, to get the desired result it would need to make just one filter call: qs = qs.filter(title_filter & published_filter) How can I achieve this behaviour with filtering in the admin? Background: Both filters are different concerning filtering on many-to-many relationships. See above link to the docs. MyModel.filter(a=b).filter(c=d) MyModel.filter(a=b, c=d) | So the fundamental problem as you point out is that django builds the queryset by doing a sequence of filters, and once a filter is "in" the queryset, it's not easy to alter it, because each filter builds up the queryset's Query object. However, it's not impossible. This solution is generic and requires no knowledge of the models / fields you're acting on, but probably only works for SQL backends, uses non-public APIs (although in my experience these internal APIs in django are pretty stable), and it could get funky if you are using other custom FieldListFilter. The name was the best I could come up with: from django.contrib.admin import ( FieldListFilter, AllValuesFieldListFilter, DateFieldListFilter, ) def first(iter_): for item in iter_: return item return None class RelatedANDFieldListFilter(FieldListFilter): def queryset(self, request, queryset): # clone queryset to avoid mutating the one passed in queryset = queryset.all() qs = super().queryset(request, queryset) if len(qs.query.where.children) == 0: # no filters on this queryset yet, so just do the normal thing return qs new_lookup = qs.query.where.children[-1] new_lookup_table = first( table_name for table_name, aliases in queryset.query.table_map.items() if new_lookup.lhs.alias in aliases ) if new_lookup_table is None: # this is the first filter on this table, so nothing to do. return qs # find the table being joined to for this filter main_table_lookup = first( lookup for lookup in queryset.query.where.children if lookup.lhs.alias == new_lookup_table ) assert main_table_lookup is not None # Rebuild the lookup using the first joined table, instead of the new join to the same # table but with a different alias in the query. 
# # This results in queries like: # # select * from table # inner join other_table on ( # other_table.field1 == 'a' AND other_table.field2 == 'b' # ) # # instead of queries like: # # select * from table # inner join other_table other_table on other_table.field1 == 'a' # inner join other_table T1 on T1.field2 == 'b' # # which is why this works. new_lookup_on_main_table_lhs = new_lookup.lhs.relabeled_clone( {new_lookup.lhs.alias: new_lookup_table} ) new_lookup_on_main_table = type(new_lookup)(new_lookup_on_main_table_lhs, new_lookup.rhs) queryset.query.where.add(new_lookup_on_main_table, 'AND') return queryset Now you can just make FieldListFilter subclasses and mix it in, I've just done the ones you wanted from the example: class RelatedANDAllValuesFieldListFilter(RelatedANDFieldListFilter, AllValuesFieldListFilter): pass class RelatedANDDateFieldListFilter(RelatedANDFieldListFilter, DateFieldListFilter): pass @admin.register(Blog) class BlogAdmin(admin.ModelAdmin): list_filter = ( ("entry__pub_date", RelatedANDDateFieldListFilter), ("entry__title", RelatedANDAllValuesFieldListFilter), ) | 5 | 3 |
69,314,257 | 2021-9-24 | https://stackoverflow.com/questions/69314257/explosion-of-memory-when-using-pandas-loc-with-umatching-indices-assignment-g | This is an observation from Most pythonic way to concatenate pandas cells with conditions I am not able to understand why third solution one takes more memory compared to first one. If I don't sample the third solution does not give runtime error, clearly something is weird To emulate large dataframe I tried to resample, but never expected to run into this kind of error Background Pretty self explanatory, one line, looks pythonic df['city'] + (df['city'] == 'paris')*('_' + df['arr'].astype(str)) s = """city,arr,final_target paris,11,paris_11 paris,12,paris_12 dallas,22,dallas miami,15,miami paris,16,paris_16""" import pandas as pd import io df = pd.read_csv(io.StringIO(s)).sample(1000000, replace=True) df Speeds %%timeit df['city'] + (df['city'] == 'paris')*('_' + df['arr'].astype(str)) # 877 ms ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %%timeit df['final_target'] = np.where(df['city'].eq('paris'), df['city'] + '_' + df['arr'].astype(str), df['city']) # 874 ms ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) If I dont sample, there is no error and output also match exactly Error(Updated)(Only happens when I sample from dataframe) %%timeit df['final_target'] = df['city'] df.loc[df['city'] == 'paris', 'final_target'] += '_' + df['arr'].astype(str) MemoryError: Unable to allocate 892. GiB for an array with shape (119671145392,) and data type int64 For smaller input(sample size 100) we get different error, telling a problem due to different sizes, but whats up with memory allocations and sampling? ValueError: cannot reindex from a duplicate axis --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-5-57c5b10090b2> in <module> 1 df['final_target'] = df['city'] ----> 2 df.loc[df['city'] == 'paris', 'final_target'] += '_' + df['arr'].astype(str) ~/anaconda3/lib/python3.8/site-packages/pandas/core/ops/methods.py in f(self, other) 99 # we are updating inplace so we want to ignore is_copy 100 self._update_inplace( --> 101 result.reindex_like(self, copy=False), verify_is_copy=False 102 ) 103 I rerun them from scratch each time Update This is part of what I figured s = """city,arr,final_target paris,11,paris_11 paris,12,paris_12 dallas,22,dallas miami,15,miami paris,16,paris_16""" import pandas as pd import io df = pd.read_csv(io.StringIO(s)).sample(10, replace=True) df city arr final_target 1 paris 12 paris_12 0 paris 11 paris_11 2 dallas 22 dallas 2 dallas 22 dallas 3 miami 15 miami 3 miami 15 miami 2 dallas 22 dallas 1 paris 12 paris_12 0 paris 11 paris_11 3 miami 15 miami Indices are repeated when sampled with replacement So resetting the indices resolved the problem even if df.arr and df.loc have essentially different sizes or replacing with df.loc[df['city'] == 'paris', 'arr'].astype(str) will solve it. Just as 2e0byo pointed out. Still can someone explain how .loc works and also explosion of memory When indices have duplicates in them and don't match?! | @2e0byo hit the nail on the head saying pandas' algorithm is "inefficient" in this case. As far as .loc, it's not really doing anything remarkable. 
Its use here is analogous to indexing a numpy array with a boolean array of the same shape, with an added dict-key-like access to a specific column - that is, df['city'] == 'paris' is itself a dataframe, with the same number of rows and the same indexes as df, with a single column of boolean values. df.loc[df['city'] == 'paris'] then gives a dataframe consisting of only the rows that are true in df['city'] == 'paris' (that have 'paris' in the 'city' column). Adding the additional argument 'final_target' then just returns only the 'final_target' column of those rows, instead of all three (and because it only has one column, it's technically a Series object - the same goes for df['arr']). The memory explosion happens when pandas actually tries to add the two Series. As @2e0byo pointed out, it has to reshape the Series to do this, and it does this by calling the first Series' align() method. During the align operation, the function pandas.core.reshape.merge.get_join_indexers() calls pandas._libs.join.full_outer_join() (line 155) with three arguments: left, right, and max_groups (point of clarification: these are their names inside the function full_outer_join). left and right are integer arrays containing the indexes of the two Series objects (the values in the index column), and max_groups is the maximum number of unique elements in either left or right (in our case, that's five, corresponding to the five original rows in s). full_outer_join immediately turns and calls pandas._libs.algos.groupsort_indexer() (line 194), once with left and max_groups as arguments and once with right and max_groups. groupsort_indexer returns two arrays - generically, indexer and counts (for the invocation with left, these are called left_sorter and left_count, and correspondingly for right). counts has length max_groups + 1, and each element (excepting the first one, which is unused) contains the count of how many times the corresponding index group appears in the input array. So for our case, with max_groups = 5, the count arrays have shape (6,), and elements 1-5 represent the number of times the 5 unique index values appear in left and right. The other array, indexer, is constructed so that indexing the original input array with it returns all the elements grouped in ascending order - hence "sorter." After having done this for both left and right, full_outer_join chops up the two sorters and strings them up across from each other. full_outer_join returns two arrays of the same size, left_idx and right_idx - these are the arrays that get really big and throw the error. The order of elements in the sorters determines the order they appear in the final two output arrays, and the count arrays determine how often each one appears. Since left goes first, its elements stay together - in left_idx, the first left_count[1] elements in left_sorter are repeated right_count[1] times each (aaabbbccc...). At the same place in right_idx, the first right_count[1] elements are repeated in a row left_count[1] times (abcabcabc...). (Conveniently, since the 0 row in s is a 'paris' row, left_count[1] and right_count[1] are always equal, so you get x amount of repeats x amount of times to start off). Then the next left_count[2] elements of left_sorter are repeated right_count[2] times, and so on... 
If any of the counts elements are zero, the corresponding spots in the idx arrays are filled with -1, to be masked later (as in, right_count[i] = 0 means elements in right_idx are -1, and vice versa - this is always the case for left_count[3] and left_count[4], because rows 2 and 3 in s are non-'paris'). In the end, the _idx arrays have an amount of elements equal to N_elements, which can be calculated as follows: left_nonzero = (left_count[1:] != 0) right_nonzero = (right_count[1:] != 0) left_repeats = left_count[1:]*left_nonzero + np.ones(len(left_counts)-1)*(1 - left_nonzero) right_repeats = right_count[1:]*right_nonzero + np.ones(len(right_counts)-1)*(1 - right_nonzero) N_elements = sum(left_repeats*right_repeats) The corresponding elements of the count arrays are multiplied together (with all the zeros replaced with ones), and added together to get N_elements. You can see this figure grows pretty quickly (O(n^2)). For an original dataframe with 1,000,000 sampled rows, each one appearing about equally, then the count arrays look something like: left_count = array([0, 2e5, 2e5, 0, 0, 2e5]) right_count = array([0, 2e5, 2e5, 2e5, 2e5, 2e5]) for a total length of about 1.2e11. In general for an initial sample N (df = pd.read_csv(io.StringIO(s)).sample(N, replace=True)), the final size is approximately 0.12*N**2 An Example It's probably helpful to look at a small example to see what full_outer_join and groupsort_indexer are trying to do when they make those ginormous arrays. We'll start with a small sample of only 10 rows, and follow the various arrays to the final output, left_idx and right_idx. We'll start by defining the initial dataframe: df = pd.read_csv(io.StringIO(s)).sample(10, replace=True) df['final_target'] = df['city'] # this line doesn't change much, but meh which looks like: city arr final_target 3 miami 15 miami 1 paris 11 paris 0 paris 12 paris 0 paris 12 paris 0 paris 12 paris 1 paris 11 paris 2 dallas 22 dallas 3 miami 15 miami 2 dallas 22 dallas 4 paris 16 paris df.loc[df['city'] == 'paris', 'final_target'] looks like: 1 paris 0 paris 0 paris 0 paris 1 paris 4 paris and df['arr'].astype(str): 3 15 1 11 0 12 0 12 0 12 1 11 2 22 3 15 2 22 4 16 Then, in the call to full_outer_join, our arguments look like: left = array([1,0,0,0,1,4]) # indexes of df.loc[df['city'] == 'paris', 'final_target'] right = array([3,1,0,0,0,1,2,3,2,4]) # indexes of df['arr'].astype(str) max_groups = 5 # the max number of unique elements in either left or right The function call groupsort_indexer(left, max_groups) returns the following two arrays: left_sorter = array([1, 2, 3, 0, 4, 5]) left_count = array([0, 3, 2, 0, 0, 1]) left_count holds the number of appearances of each unique value in left - the first element is unused, but then there a 3 zeros, 2 ones, 0 twos, 0 threes, and 1 four in left. left_sorter is such that left[left_sorter] = array([0, 0, 0, 1, 1, 4]) - all in order. Now right: groupsort_indexer(right, max_groups) returns right_sorter = array([2, 3, 4, 1, 5, 6, 8, 0, 7, 9]) right_count = array([0, 3, 2, 2, 2, 1]) Once again, right_count contains the number of times each count appears: the unused first element, and then 3 zeros, 2 ones, 2 twos, 2 threes, and 1 four (note that elements 1, 2, and 5 of both count arrays are the same: these are the rows in s with 'city' = 'paris'). 
Also, right[right_sorter] = array([0, 0, 0, 1, 1, 2, 2, 3, 3, 4]) With both count arrays calculated, we can calculate what size the idx arrays will be (a bit simpler with actual numbers than with the formula above): N_total = 3*3 + 2*2 + 2 + 2 + 1*1 = 18 3 is element 1 for both counts arrays, so we can expect something like [1,1,1,2,2,2,3,3,3] to start left_idx, since [1,2,3] starts left_sorter, and [2,3,4,2,3,4,2,3,4] to start right_idx, since right_sorter begins with [2,3,4]. Then we have twos, so [0,0,4,4] for left_idx and [1,5,1,5] for right_idx. Then left_count has two zeros, and right_count has two twos, so next go 4 -1's in left_idx and the next four elements in right_sorter go into right_idx: [6,8,0,7]. Both count's finish with a one, so one each of the last elements in the sorters go in the idx: 5 for left_idx and 9 for right_idx, leaving: left_idx = array([1, 1, 1, 2, 2, 2, 3, 3, 3, 0, 0, 4, 4,-1, -1, -1, -1, 5]) right_idx = array([2, 3, 4, 2, 3, 4, 2, 3, 4, 1, 5, 1, 5, 6, 8, 0 , 7, 9]) which is indeed 18 elements. With both index arrays the same shape, pandas can construct two Series of the same shape from our original ones to do any operations it needs to, and then it can mask these arrays to get back sorted indexes. Using a simple boolean filter to look at how we just sorted left and right with the outputs, we get: left[left_idx[left_idx != -1]] = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 4]) right[right_idx[right_idx != -1]] = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 4]) After going back up through all the function calls and modules, the result of the addition at this point is: 0 paris_12 0 paris_12 0 paris_12 0 paris_12 0 paris_12 0 paris_12 0 paris_12 0 paris_12 0 paris_12 1 paris_11 1 paris_11 1 paris_11 1 paris_11 2 NaN 2 NaN 3 NaN 3 NaN 4 paris_16 which is result in the line result = op(self, other) in pandas.core.generic.NDFrame._inplace_method (line 11066), with op = pandas.core.series.Series.__add__ and self and other the two Series from before that we're adding. So, as far as I can tell, pandas basically tries to perform the operation for every combination of identically-indexed rows (like, any and all rows with index 1 in the first Series should be operated with all rows index 1 in the other Series). If one of the Series has indexes that the other one doesn't, those rows get masked out. It just so happens in this case that every row with the same index is identical. It works (albeit redundantly) as long as you don't need to do anything in place - the trouble for the small dataframes arises after this when pandas tries to reindex this result back into the shape of the original dataframe df. The split (the line that smaller dataframes make it past, but larger ones don't) is that line result = op(self, other) from above. Later in the same function (called, note, _inplace_method), the program exits at self._update_inplace(result.reindex_like(self, copy=False), verify_is_copy=False). It tries to reindex result so it looks like self, so it can replace self with result (self is the original Series, the first one in the addition, df.loc[df['city'] == 'paris', 'final_target']). And this is where the smaller case fails, because, obviously, result has a bunch of repeated indexes, and pandas doesn't want to lose any information when it deletes some of them. One Last Thing It's probably worth mentioning that this behaviour isn't particular to the addition operation here. 
It happens any time you try an arithmetic operation on two large dataframes with a lot of repeated indexes - for example, try just defining a second dataframe the exact same way as the first, df2 = pd.read_csv(io.StringIO(s)).sample(1000000, replace=True), and then try running df.arr*df2.arr. You'll get the same memory error. Interestingly, logical and comparison operators have protections against doing this - they require identical indexes, and check for it before calling their operator method. I did all my stuff in pandas 1.2.4, python 3.7.10, but I've given links to the pandas Github, which is currently in version 1.3.3. As far as I can tell, the differences don't affect the results. | 9 | 6 |
69,330,379 | 2021-9-25 | https://stackoverflow.com/questions/69330379/how-to-properly-type-hint-a-class-decorator | Assume we have some function func that maps instances of class A to instances of class B, i.e. it has the signature Callable[[A], B]. I want to write a class decorator autofunc for subclasses of A that automatically applies func to instances as they are created. For example, think of automatic jit-compilation based on a global environment variable. This can be done via from functools import wraps def autofunc(basecls): @wraps(basecls, updated=()) class WrappedClass(basecls): def __new__(cls, *args, **kwargs): instance = basecls(*args, **kwargs) return func(instance) return WrappedClass Then the following two are roughly equivalent: class C(A): ... instance = func(C()) @autofunc class C(A): ... instance = C() In my naivety, I tried def autofunc(basecls: type[A]) -> type[B]: @wraps(basecls, updated=()) class WrappedClass(basecls): def __new__(cls, *args, **kwargs): instance = basecls(*args, **kwargs) return func(instance) return WrappedClass which mypy really dislikes, raising erros: error: Variable "basecls" is not valid as a type [valid-type] error: Invalid base class "basecls" [misc] also, there is the issue that WrapperClass.__new__ returns an instance of B and not of WrapperClass. Is there any way to properly type hint such a class decorator currently, or is mypy not capable to work this yet? Example code: from functools import wraps class A: pass class B: pass def func(cl: A) -> B: print(f"Replacing {cl=}") return B() def autofunc(basecls: type[A]) -> type[B]: @wraps(basecls, updated=()) class WrappedClass(basecls): def __new__(cls, *args, **kwargs): instance = basecls() return func(instance) return WrappedClass | This is simply a bug in mypy: https://github.com/python/mypy/issues/5865 | 5 | 3 |
69,367,479 | 2021-9-28 | https://stackoverflow.com/questions/69367479/cant-show-info-level-logging-in-aws-lambda | I'm trying to run a script in AWS lambda and I want to output info level logs to the console after the script runs. I've tried looking for help from This post on using logs in lambda buy haven't had any success. I think AWS Cloudwatch is overriding my configuration shown bellow. import logging # log configuration logging.basicConfig( format='%(levelname)s: %(message)s', level=logging.INFO, encoding="utf-8" ) I want to set the logging level to logging.info. How can I do this? The runtime is python 3.9 | from my understanding, I think one fix would be to add this: logging.getLogger().setLevel('INFO') I believe that logging.basicConfig(level=...) affects the minimum log level at which logs show up in the console, but across all loggers. The one above explicitly sets the minimum enabled level for the root logger, i.e. logging.getLogger(). The enabled level for each logger determines at what level messages will be logged - otherwise, each call to a logger method like logging.info is basically a no-op. So essentially, the basicConfig and setLevel calls are separate, but they work together to determine if a library's logs are printed to the console. For example, you can set basicConfig(level='DEBUG') so that the debug-level logs for all libraries should get printed out. But if you want to make an exception for one library like botocore for example, you can use logging.getLogger('botocore').setLevel('WARNING'), and that will set the minimum enabled level for that library to WARNING, so only messages logged by this library above that minimum level get printed out to the console. | 5 | 7 |
69,363,012 | 2021-9-28 | https://stackoverflow.com/questions/69363012/how-to-make-itertools-combinations-increase-evenly | Consider the following example: import itertools import numpy as np a = np.arange(0,5) b = np.arange(0,3) c = np.arange(0,7) prods = itertools.product(a,b,c) for p in prods: print(p) This iterate over the products in the following order: (0, 0, 0) (0, 0, 1) (0, 0, 2) (0, 0, 3) (0, 0, 4) (0, 1, 0) But I would much rather have the products given in order of their sum, e.g. (0, 0, 0) (0, 0, 1) (0, 1, 0) (1, 0, 0) (0, 1, 1) (1, 0, 1) (1, 1, 0) (0, 0, 2) How can I achieve this without storing all combinations in memory? Note: a b and c are always ranges, but not necessarily with the same maximum. There is also no 2nd-level ordering when the sums of two products are equal, i.e. (0,1,1) is equivalent to (2,0,0). | The easiest way to do this without storing extra products in memory is with recursion. Instead of range(a,b), pass in a list of (a,b) pairs and do the iteration yourself: def prod_by_sum(range_bounds: List[Tuple[int, int]]): """ Yield from the Cartesian product of input ranges, produced in order of sum. >>> range_bounds = [(2, 4), (3, 6), (0, 2)] >>> for prod in prod_by_sum(range_bounds): ... print(prod) (2, 3, 0) (2, 3, 1) (2, 4, 0) (3, 3, 0) (2, 4, 1) (2, 5, 0) (3, 3, 1) (3, 4, 0) (2, 5, 1) (3, 4, 1) (3, 5, 0) (3, 5, 1) """ def prod_by_sum_helper(start: int, goal_sum: int): low, high = range_bounds[start] if start == len(range_bounds) - 1: if low <= goal_sum < high: yield (goal_sum,) return for current in range(low, min(high, goal_sum + 1)): yield from ((current,) + extra for extra in prod_by_sum_helper(start + 1, goal_sum - current)) lowest_sum = sum(lo for lo, hi in range_bounds) highest_sum = sum(hi - 1 for lo, hi in range_bounds) for goal_sum in range(lowest_sum, highest_sum + 1): yield from prod_by_sum_helper(0, goal_sum) which has output for range_bounds = [(0, 5), (0, 3), (0, 7)] starting with: (0, 0, 0) (0, 0, 1) (0, 1, 0) (1, 0, 0) (0, 0, 2) (0, 1, 1) (0, 2, 0) (1, 0, 1) (1, 1, 0) (2, 0, 0) You can do this exact process iteratively by modifying a single list and yielding copies of it, but the code either becomes more complicated or less efficient. You can also trivially modify this to support steps besides 1, however that does work less efficiently with larger and larger steps since the last range might not contain the element needed to produce the current sum. That seems unavoidable, because at that point you'd need to solve a difficult computational problem to efficiently loop over those products by sum. | 6 | 3 |
69,322,595 | 2021-9-25 | https://stackoverflow.com/questions/69322595/duplicate-information-in-typing-and-docstring | I am confused about the use of type hints and docstrings. Aren't they duplicate information? For example: def my_func(name: str): """ print a name. Parameters ---------- name : str a given name """ print(name) Isn't the information name: str given twice? | Putting the type as a type hint and as part of the docstring would be redundant. It is also prone to human errors since one could easily forget updating one of them, thus effort is constantly needed to keep them both in sync. The documentation for type hints also mentions it: Docstrings. There is an existing convention for docstrings, based on the Sphinx notation (:type arg1: description). This is pretty verbose (an extra line per parameter), and not very elegant. We could also make up something new, but the annotation syntax is hard to beat (because it was designed for this very purpose). But all of this depends on your usecase. If you are using a static-type checker such as MyPy (e.g. it would fail if you passed an int 123 to a variable with type hint str) or an IDE such as PyCharm (e.g. it would highlight inconsistencies between type hint and passed arguments), then you should keep on using type hints. This is the preferable way, as it points out possible errors in the code you've written. If you are using tools such as Sphinx, or Swagger, or something else, note that these tools display your classes and methods along with their docstrings for the purposes of documentation. So if you want to maintain such documentation for clarity to other readers and want to have the types included, then you might want to put the type hint in the docstring as well. But if you think such detail isn't needed as most of the time only the description in the docstring is relevant, then there is no need to add the type hint in the docstring. In the long run, the more sustainable way is to have those tools (Sphinx, Swagger, etc.) use the type hint as part of the documentation instead of relying solely on the text in the docstring. For sphinx, I found this library sphinx-autodoc-typehints that performs it already. Allowing you to migrate from this: def format_unit(value, unit): """ Formats the given value as a human readable string using the given units. :param float|int value: a numeric value :param str unit: the unit for the value (kg, m, etc.) :rtype: str """ return '{} {}'.format(value, unit) to this: from typing import Union def format_unit(value: Union[float, int], unit: str) -> str: """ Formats the given value as a human readable string using the given units. :param value: a numeric value :param unit: the unit for the value (kg, m, etc.) """ return '{} {}'.format(value, unit) So, it seems like there are already enhancements on this topic that will make the definition of types more consistent and less redundant. | 5 | 6 |
69,360,198 | 2021-9-28 | https://stackoverflow.com/questions/69360198/correctly-typing-a-function-that-can-return-a-provided-default-value | I have a function of the following structure: def get_something_from_data(data: Mapping[str, str], default: Optional[str] = None) -> Optional[str]: """Get something out of `data` if it is there, if not return the value of `default`. If `default` is not provided, return None. """ So this can return either Optional[str] if default is omitted, or will always return str if default has an str value. Now I call this in code similar to: has_value = get_something_from_data(data, "fallback") return has_value.endswith("k") Even though in this case has_value will always be an str, running mypy on this code generates an error because it thinks has_value might be None and will not have endswith. So I tried typing my function differently: DT = TypeVar('DT', str, None) def get_something_from_data(data: Mapping[str, str], default: DT = None) -> Union[str, DT]: pass This works, but now when I call mypy on code like: has_value = get_something_from_data(data) return has_value.endswith("k") # might fail as has_value can be None mypy doesn't throw an error, even though the code is potentially dangerous. Is there a way to properly type this function so mypy generates errors based on whether or not a default value was provided? | Using typing.overload you can describe multiple combinations of arguments and return types for a function from typing import overload, Mapping, Optional @overload def get_something_from_data(data: Mapping[str, str], default: None = None) -> Optional[str]: ... @overload def get_something_from_data(data: Mapping[str, str], default: str) -> str: ... def get_something_from_data(data, default=None): pass | 5 | 5 |
69,356,599 | 2021-9-28 | https://stackoverflow.com/questions/69356599/delay-between-different-function-calls | I have a question about adding delay after calling various functions. Let's say I've function like: def my_func1(): print("Function 1") def my_func2(): print("Function 2") def my_func3(): print("Function 3") Currently I've added delay between invoking them like below: delay = 1 my_func1() time.sleep(delay) my_func2() time.sleep(delay) my_func3() time.sleep(delay) As you can see I needed a few times time.sleep, which I would like to avoid. Using decorator is also not an option, since it might be that I would like to avoid delay when calling one of this function not in a group. Do you have any tip how to beautify this? | I've tested this based on "How to Make Decorators Optionally Turn On Or Off" (How to Make Decorators Optionally Turn On Or Off) from time import sleep def funcdelay(func): def inner(): func() print('inner') sleep(1) inner.nodelay = func return inner @funcdelay def my_func1(): print("Function 1") @funcdelay def my_func2(): print("Function 2") @funcdelay def my_func3(): print("Function 3") my_func1() my_func2() my_func3() my_func1.nodelay() my_func2.nodelay() my_func3.nodelay() Output: Function 1 inner Function 2 inner Function 3 inner Function 1 Function 2 Function 3 You can see that it can bypass the delay. | 10 | 6 |
69,356,332 | 2021-9-28 | https://stackoverflow.com/questions/69356332/counting-contiguous-sawtooth-subarrays | Given an array of integers arr, your task is to count the number of contiguous subarrays that represent a sawtooth sequence of at least two elements. For arr = [9, 8, 7, 6, 5], the output should be countSawSubarrays(arr) = 4. Since all the elements are arranged in decreasing order, it won’t be possible to form any sawtooth subarray of length 3 or more. There are 4 possible subarrays containing two elements, so the answer is 4. For arr = [10, 10, 10], the output should be countSawSubarrays(arr) = 0. Since all of the elements are equal, none of subarrays can be sawtooth, so the answer is 0. For arr = [1, 2, 1, 2, 1], the output should be countSawSubarrays(arr) = 10. All contiguous subarrays containing at least two elements satisfy the condition of the problem. There are 10 possible contiguous subarrays containing at least two elements, so the answer is 10. What would be the best way to solve this question? I saw a possible solution here:https://medium.com/swlh/sawtooth-sequence-java-solution-460bd92c064 But this code fails for the case [1,2,1,3,4,-2] where the answer should be 9 but it comes as 12. I have even tried a brute force approach but I am not able to wrap my head around it. Any help would be appreciated! EDIT: Thanks to Vishal for the response, after a few tweaks, here is the updated solution in python. Time Complexity: O(n) Space Complexity: O(1) def samesign(a,b): if a/abs(a) == b/abs(b): return True else: return False def countSawSubarrays(arr): n = len(arr) if n<2: return 0 s = 0 e = 1 count = 0 while(e<n): sign = arr[e] - arr[s] while(e<n and arr[e] != arr[e-1] and samesign(arr[e] - arr[e-1], sign)): sign = -1*sign e+=1 size = e-s if (size==1): e+=1 count += (size*(size-1))//2 s = e-1 e = s+1 return count arr1 = [9,8,7,6,5] print(countSawSubarrays(arr1)) arr2 = [1,2,1,3,4,-2] print(countSawSubarrays(arr2)) arr3 = [1,2,1,2,1] print(countSawSubarrays(arr3)) arr4 = [10,10,10] print(countSawSubarrays(arr4)) Result: 4 9 10 0 | This can be solved by just splitting the array into multiple sawtooth sequences..which is O(n) operation. For example [1,2,1,3,4,-2] can be splitted into two sequence [1,2,1,3] and [3,4,-2] and now we just have to do C(size,2) operation for both the parts. Here is psedo code explaining the idea ( does not have all corner cases handled ) public int countSeq(int[] arr) { int len = arr.length; if (len < 2) { return 0; } int s = 0; int e = 1; int sign = arr[e] - arr[s]; int count = 0; while (e < len) { while (e < len && arr[e] - arr[e-1] != 0 && isSameSign(arr[e] - arr[e-1], sign)) { sign = -1 * sign; e++; } // the biggest continue subsequence starting from s ends at e-1; int size = e - s; count = count + (size * (size - 1)/2); // basically doing C(size,2) s = e - 1; e = s + 1; } return count; } | 7 | 4 |
69,294,075 | 2021-9-23 | https://stackoverflow.com/questions/69294075/how-can-i-play-video-or-audio-on-a-jupyter-notebook-through-vs-code | I'm running a Jupyter Notebook on VS code and trying to display/play a video. From all the other forums, I've seen that using IPython.display is the standard method; however, it isn't working for me. For example, for Video: from IPython.display import Video Video('test.mp4') This code generates a video box in the output and I don't have any errors but I can't press play. The same happens when I try playing an Audio file. I've ensured that the file is in the current folder and I'm using Python 3.8.2 in a virtual environment (venv) and IPython 7.27.0. | To get it to work I did the following: Uninstalled and reinstalled VS Code and installed the extensions Python, Jupyter and Jupyter Keymap Installed FFmpeg through Homebrew: brew install ffmpeg Converted the video codec from "MPEG4" to "H.264": ffmpeg -i test.mp4 video.mp4 Then used the following code to display the video: from IPython.display import Video Video('video.mp4', width=128, height=128) | 6 | 5 |
69,355,736 | 2021-9-28 | https://stackoverflow.com/questions/69355736/how-can-i-find-the-sum-of-a-users-input | print("Fazli's Vet Services\n") print("Exam: 50") print("Vaccinations: 25") print("Trim Nails: 5") print("Bath: 20\n") exam = "exam" vaccinations = "vaccinations" trim_nails = "trim nails" bath = "bath" none = "none" exam_price = 50 vaccination_price = 25 trim_nails_price = 5 bath_price = 20 none_price = 0 first_service = input("Select first service:") second_service = input("Select second service:") print("\nFazli's Vet Invoice") if first_service == exam: print("Service 1 - Exam: " + str(exam_price)) elif first_service == vaccinations: print("Service 1 - Vaccinations: " + str(vaccination_price)) elif first_service == trim_nails: print("Service 1 - Trim Nails: " + str(trim_nails_price)) elif first_service == bath: print("Service 1 - Bath: " + str(bath_price)) elif first_service == none: print("Service 1 - None " + str(none_price)) else: print("Service 1 - None " + str(none_price)) if second_service == exam: print("Service 2 - Exam: " + str(exam_price)) elif second_service == vaccinations: print("Service 2 - Vaccinations: " + str(vaccination_price)) elif second_service == trim_nails: print("Service 2 - Trim Nails: " + str(trim_nails_price)) elif second_service == bath: print("Service 2 - Bath: " + str(bath_price)) elif second_service == none: print("Service 2 - None " + str(none_price)) else: print("Service 2 - None " + str(none_price)) Above is a code I have so far. It prints: Fazli's Vet Services Exam: 50 Vaccinations: 25 Trim Nails: 5 Bath: 20 Select first service: exam Select second service: bath Fazli's Vet Invoice Service 1 - Exam: 50 Service 2 - Bath: 20 My goal is for the code to add up the two services and make a total price. My end goal should look like this: Chanucey's Vet Services Exam: 45 Vaccinations: 32 Trim Nails: 8 Bath: 15 Select first service: Exam Select second service: none Chauncey's Vet Invoice Service 1 - Exam: 45 Service 2 - None: 0 Total: 45 Notice how the code added both prices and made a "Total." Is there any way I can do this? I'm a beginner computer science major, so we aren't too far into Python. ALL CODE IS IN PYTHON | Use dictionary - print("Fazli's Vet Services\n") print("Exam: 50") print("Vaccinations: 25") print("Trim Nails: 5") print("Bath: 20\n") dictionary = {'exam':50,'vaccinations':25,'trim nails':5,'bath':20,'none':0} first_service = input("Select first service:").lower() second_service = input("Select second service:").lower() print("\nFazli's Vet Invoice") if first_service in dictionary: price1 = int(dictionary[first_service]) print(f'Service 1 - {first_service.capitalize()} : {price1}') else: price1 = 0 print(f'Service 1 - None : 0') if second_service in dictionary: price2 = int(dictionary[second_service]) print(f'Service 1 - {second_service.capitalize()} : {price2}') else: price2 = 0 print(f'Service 1 - None : 0') print('Total : ',price1+price2) | 5 | 2 |
69,355,100 | 2021-9-28 | https://stackoverflow.com/questions/69355100/reducing-python-zip-size-to-use-with-aws-lambda | I'm following this blog post to create a runtime environment using Docker for use with AWS Lambda. I'm creating a layer for using with Python 3.8: docker run -v "$PWD":/var/task "lambci/lambda:build-python3.8" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit" And then archiving the layer as zip: zip -9 -r mylayer.zip python All standard so far. The problem arises in the .zip size, which is > 250mb and so creates the following error in Lambda: Failed to create layer version: Unzipped size must be smaller than 262144000 bytes. Here's my requirements.txt: s3fs scrapy pandas requests I'm including s3fs since I get the following error when trying to save a parquet file to an S3 bucket using pandas: [ERROR] ImportError: Install s3fs to access S3. This problem is that including s3fs massively increases the layer size. Without s3fs the layer is < 200mb unzipped. My most direct question would be: How can I reduce the layer size to < 250mb while still using Docker and keeping s3fs in my requirements.txt? I can't explain the 50mb+ difference, especially since s3fs < 100kb on PyPi. Finally, for those questioning my use of Lambda with Scrapy: my scraper is trivial, and spinning up an EC2 instance would be overkill. | The key idea behind shrinking your layers is to identify what pip installs and what you can get rid off, usually manually. In your case, since you are only slightly above the limit, I would get rid off pandas/tests. So before you create your zip layer, you can run the following in the layer's folder (mylayer from your past question): rm -rvf python/lib/python3.8/site-packages/pandas/tests This should trim your layer below the 262MB limit after unpacking. In my test it is now 244MB. Alternatively, you can go over python folder manually, and start removing any other tests, documentations, examples, etc, that are not needed. | 5 | 3 |
69,350,888 | 2021-9-27 | https://stackoverflow.com/questions/69350888/list-from-key-in-django-querydict-return-one-element-instead-of-the-whole-list | I am using Django and I am accessing request.POST from my view. The code is as follows: data = request.POST print(data) Which returns: <QueryDict: {'name': ['Sam'], 'phone': ['+10795524594'], 'message': ['Es-sénia'], 'Coupon': [''], 'csrfmiddlewaretoken': ['xcGnoJOtnAmXcUBXe01t7ItuMC8BAFHE 6H9Egqd8BuooxLbp3ZrqvwzTZAxukMJW', 'xcGnoJOtnAmXcUBXe01t7ItuMC8BAFHE6H9Egqd8BuooxLbp3Zrq``vwzTZAxukMJW'], 'Size': ['S', 'M']}> But, whether using .dict() method or using data.get("Size"), I only get one element; not the whole list. How can I fix this? | Use data.getlist(key). It is a bit weird, see the docs: https://docs.djangoproject.com/en/3.2/ref/request-response/#django.http.QueryDict.getlist | 5 | 7 |
69,341,607 | 2021-9-27 | https://stackoverflow.com/questions/69341607/mypy-error-overloaded-function-signature-2-will-never-be-matched-signature-1 | I'm trying to understand how to use the overload decorator when typing functions. If I write the following code and run it through mypy: from typing import Union, overload @overload def myfunc(a: float, b: float) -> float: ... @overload def myfunc(a: int, b: int) -> int: ... def myfunc(a: Union[float, int], b: Union[float, int]) -> Union[float, int]: return a + b Then I get a "error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader Found 1 error in 1 file (checked 1 source file)" I don't understand why the parameter types of signature 1 (ie. floats) are broader that signature 2 (ie. ints). What is going on here? | mypy has a weird special case that treats int as valid where float is expected, because requiring people to write Union[int, float] all over the place would have been awkward enough to seriously hinder adoption of type annotations. That means myfunc(1, 2) matches both signatures. When multiple signatures of an overloaded function match a call, mypy will pick the signature defined first. With the way you've written your code, it's impossible for mypy to pick the int signature. The float signature will always be picked for any call. You can fix this by putting the int signature first: @overload def myfunc(a: int, b: int) -> int: ... @overload def myfunc(a: float, b: float) -> float: ... | 7 | 12 |
69,339,497 | 2021-9-26 | https://stackoverflow.com/questions/69339497/why-wont-mypy-understand-this-object-instantiation | I'm trying to define a class that takes another class as an attribute _model and will instantiate objects of that class. from abc import ABC from typing import Generic, TypeVar, Any, ClassVar, Type Item = TypeVar("Item", bound=Any) class SomeClass(Generic[Item], ABC): _model: ClassVar[Type[Item]] def _compose_item(self, **attrs: Any) -> Item: return self._model(**attrs) I think it should be obvious that self._model(**attrs) returns an instance of Item, since _model is explicitly declared as Type[Item] and attrs is declared as Dict[str, Any]. But what I'm getting from mypy 0.910 is: test.py: note: In member "_compose_item" of class "SomeClass": test.py:11: error: Returning Any from function declared to return "Item" return self._model(**attrs) ^ What am I doing wrong? | MyPy can sometimes be a bit funny about the types of classes. You can solve this by specifying _model as Callable[..., Item] (which, after all, isn't a lie) instead of Type[Item]: from abc import ABC from typing import Generic, TypeVar, Any, ClassVar, Callable Item = TypeVar("Item") class SomeClass(Generic[Item], ABC): _model: ClassVar[Callable[..., Item]] def _compose_item(self, **attrs: Any) -> Item: return self._model(**attrs) | 6 | 5 |
69,338,089 | 2021-9-26 | https://stackoverflow.com/questions/69338089/cant-import-streamlistener | I'm trying to create a data stream in Python using the Twitter API, but I'm unable to import the StreamListener correctly. Here's my code: import tweepy from tweepy import Stream from tweepy.streaming import StreamListener class MyListener(StreamListener): def on_data(self, data): try: with open('python.json', 'a') as f: f.write(data) return True except BaseException as e: print("Error on_data: %s" % str(e)) return True def on_error(self, status): print(status) return True twitter_stream = Stream(auth, MyListener()) twitter_stream.filter(track=['#python']) And I'm getting this error: Traceback (most recent call last): File "c:\Users\User\Documents\GitHub\tempCodeRunnerFile.python", line 6, in <module> from tweepy.streaming import StreamListener ImportError: cannot import name 'StreamListener' from 'tweepy.streaming' (C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\tweepy\streaming.py) | Tweepy v4.0.0 was released yesterday and it merged StreamListener into Stream. I recommend updating your code to subclass Stream instead. Alternatively, you can downgrade to v3.10.0. | 11 | 15 |
69,336,816 | 2021-9-26 | https://stackoverflow.com/questions/69336816/how-can-i-use-a-custom-search-field-model-property-to-search-in-django-admin | This is very similar to this question, but unfortunately, I still couldn't get it working. I have a model, with a property that combines a few fields: class Specimen(models.Model): lab_number = ... patient_name = ... specimen_type = ... @property def specimen_name(self): return f"{self.lab_number}_{self.patient_name}_{self.specimen_type}" In Django Admin, when someone does a search, I can use the search_fields attribute in the Model Admin to specify real fields, but not the specimen_name custom field: def specimen_name(inst): return inst.specimen_name specimen_name.short_description = "Specimen Name" class SpecimenModelAdmin(admin.ModelAdmin): list_display = ('specimen_name', 'patient_name', 'lab_number', 'specimen_type') search_fields = ('patient_name', 'lab_number', 'specimen_type') Doing a search using the code above, it will search the individual fields, but if I try to search for a full specimen_name in Django Admin, it won't find it, because none of the fields contain the exact, full specimen name. The SO question I linked to above pointed me in the right direction - using get_search_results. My code now looks something like this: class SpecimenModelAdmin(admin.ModelAdmin): ... search_fields = ('patient_name', 'lab_number', 'specimen_type') def get_search_results(self, request, queryset, search_term): if not search_term: return queryset, False queryset, may_have_duplicates = super().get_search_results( request, queryset, search_term, ) search_term_list = search_term.split(' ') specimen_names = [q.specimen_name for q in queryset.all()] results = [] for term in search_term_list: for name in specimen_names: if term in name: results.append(name) break # Return original queryset, AND any new results we found by searching the specimen_name field # The True indicates that it's possible that we will end up with duplicates # I assume that means Django will make sure only unique results are returned when that's set return queryset + results, True As far as I know, I can't do a queryset.filter(specimen_name=SOMETHING). .filter won't recognize the @property method as a field in needs to search. That's why I write my own loop to do the searching. The code above will obviously not work. You can't just add a list to a queryset. How would I return an actual queryset? | The correct way to filter on a property is to make an equivalent annotation for the property and filter on that instead. Looking at your property all it does is it concatenates some of the fields, corresponding to that Django has the Concat database function. Hence you can do the following annotation: from django.db.models import Value from django.db.models.functions import Concat queryset = queryset.annotate( specimen_name=Concat("lab_number", Value("_"), "patient_name", Value("_"), "specimen_type") ) # Note: If you use Django version >=3.2 you can use "alias" instead of "annotate" Then you can change your get_search_results as follows: from django.db.models import Value, Q from django.db.models.functions import Concat from django.utils.text import ( smart_split, unescape_string_literal ) class SpecimenModelAdmin(admin.ModelAdmin): ... 
search_fields = ('patient_name', 'lab_number', 'specimen_type') def get_search_results(self, request, queryset, search_term): queryset = queryset.annotate( specimen_name=Concat( "lab_number", Value("_"), "patient_name", Value("_"), "specimen_type" ) ) queryset, may_have_duplicates = super().get_search_results(request, queryset, search_term) for bit in smart_split(search_term): if bit.startswith(('"', "'")) and bit[0] == bit[-1]: bit = unescape_string_literal(bit) queryset = queryset.filter(Q(specimen_name__icontains=bit)) return queryset, may_have_duplicates Note: The above will likely stop giving you results unless you set search_fields to an empty tuple / list. Continuing down this line perhaps with the annotation you can have specimen_name in search_fields by overriding get_queryset and hence skip overriding get_search_results: class SpecimenModelAdmin(admin.ModelAdmin): ... search_fields = ('patient_name', 'lab_number', 'specimen_type', 'specimen_name') def get_queryset(self, request): qs = super().get_queryset(request) qs = qs.annotate( specimen_name=Concat( "lab_number", Value("_"), "patient_name", Value("_"), "specimen_type" ) ) return qs | 6 | 6 |
69,328,143 | 2021-9-25 | https://stackoverflow.com/questions/69328143/why-doesnt-nan-raise-any-errors-in-python | In my opinion, things like float('nan') should be optimized, but apparently they aren't in Python. >>> NaN = float('nan') >>> a = [ 1, 2, 3, NaN ] >>> NaN in a True >>> float('nan') in a False Does it have any meaning with not optimizing nan like other things? In my thought, nan is only nan. As well as this, when you use sorted on these things, they give weird results: >>> sorted([3, nan, 4, 2, nan, 1]) [3, nan, 1, 2, 4, nan] >>> 3 > float('nan') False >>> 3 < float('nan') False The comparison on nan is defined like this, but it doesn't seems 'pythonic' to me. Why doesn't it raise an error? | Membership testing Two different instances of float('nan') are not equal to each other. They are "Not a Number" so it makes sense that they shouldn't also have to be equal. They are different instances of objects which are not numbers: print(float('nan') == float('nan')) # False As documented here: For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression x in y is equivalent to any(x is e or x == e for e in y). There is a checking for identity! that's why you see that behavior in your question and why NaN in a returns True and float('nan') in a doesn't. Sorting in Python Python uses the Timsort algorithm for its sorted() function. (Also see this for a textual explanation.) I'm not going to go into that. I just want to demonstrate a simple example: This is my class A. It's going to be our float('nan') object. It acts like float('nan') in that it returns False for all comparison operations: class A: def __init__(self, n): self.n = n def __lt__(self, other): print(self, 'lt is calling', other) return False def __gt__(self, other): print(self, 'gt is calling', other) return False def __repr__(self): return f'A({self.n})' class B: def __init__(self, n): self.n = n def __lt__(self, other): print(self, 'lt is calling', other) return False def __gt__(self, other): print(self, 'gt is calling', other) return False def __repr__(self): return f'B({self.n})' When we use the sorted() function (or the .sort() method of a list) without the reverse=True argument, we're asking for the iterable to be sorted in ascending order. To do this, Python tries to call the __lt__ method successively, starting from the second object in the list to see if it is less than its previous object and so on: lst = [A(1), B(2), A(3), B(4)] print(sorted(lst)) output : B(2) lt is calling A(1) A(3) lt is calling B(2) B(4) lt is calling A(3) [A(1), B(2), A(3), B(4)] Now, switching back to your example: lst = [3, A(1), 4, 2, A(1), 1] print(sorted(lst)) output: A(1) lt is calling 3 A(1) gt is calling 4 A(1) gt is calling 2 A(1) lt is calling 2 A(1) lt is calling 4 A(1) gt is calling 1 [3, A(1), 1, 2, 4, A(1)] A(1).__lt__(3) will return False. This means A(1) is not less than 3 or This means 3 is in correct position relative to A(1). Then here int.__lt__(4, A(1)) gets called and because it returns NotImplemented object, Python checks to see if A(1) has implemented __gt__ and yes, so A(1).__gt__(4) will return False again and this means the A(1) object is in correct place relative to 4. (Etc.) This is why the result of sorted() seems to be weird, but it's predictable. A(1) object in both cases, I mean when int class returns NotImplemented and when __lt__ gets called from A(1), will return False. It's better to check the Timsort algorithm and consider those points. 
I would have included the remaining steps here if I had gone through the Timsort algorithm more carefully. | 9 | 9 |
69,327,629 | 2021-9-25 | https://stackoverflow.com/questions/69327629/aws-lambda-in-container-python-works-locally-but-not-deployed | I try to wrap an R based application that is deployed to a docker container. I changed the base image to lambda/python3.9 and added another app with its own Dockerfile. This contains a simple python script as the handler for the function. In this handler I call the code to run the R script and upload the result to S3. Now when I'm testing locally everything works fine. But when I push the image to ECR and deploy the container as a Lambda Function I get following error: { "errorMessage": "Unable to import module 'app': No module named 'app'", "errorType": "Runtime.ImportModuleError", "requestId": "1a7fa818-62e4-4374-8318-625b15e2ae8a", "stackTrace": [] } Further logs of the AWS call: 2021-09-25T17:11:32.570+02:00 START RequestId: 1a7fa818-62e4-4374-8318-625b15e2ae8a Version: $LATEST 2021-09-25T17:11:32.571+02:00 [ERROR] Runtime.ImportModuleError: Unable to import module 'app': No module named 'app' Traceback (most recent call last): 2021-09-25T17:11:32.574+02:00 END RequestId: 1a7fa818-62e4-4374-8318-625b15e2ae8a 2021-09-25T17:11:32.574+02:00 REPORT RequestId: 1a7fa818-62e4-4374-8318-625b15e2ae8a Duration: 1.40 ms Billed Duration: 2395 ms Memory Size: 3200 MB Max Memory Used: 48 MB Init Duration: 2393.47 ms Command to run the container locally: docker run -e AWS_SECRET_ACCESS_KEY=XXX -e AWS_ACCESS_KEY_ID=YYY -p 9000:8080 palmid-lambda:latest Output of the local run: EY=XXXX -e AWS_ACCESS_KEY_ID=YYYY -p 9000:8080 palmid-lambda:latest time="2021-09-25T14:29:42.318" level=info msg="exec '/var/runtime/bootstrap' (cwd=/home/palmid, handler=)" time="2021-09-25T14:29:47.899" level=info msg="extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory" time="2021-09-25T14:29:47.899" level=warning msg="Cannot list external agents" error="open /opt/extensions: no such file or directory" START RequestId: 1b867a67-e778-4418-9139-ff1123331b34 Version: $LATEST ... application logs Command to call the Lambda function locally: curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{ "sequence": ">SRR9968562_waxsystermes_virus_microassembly\nPIWDRVLEPLMRASPGIGRYMLTDVSPVGLLRVFKEKVDTTPHMPPEGMEDFKKASKEVE\nKTLPTTLRELSWDEVKEMIRNDAAVGDPRWKTALEAKESEEFWREVQAEDLNHRNGVCLR\nGVFHTMAKREKKEKNKWGQKTSRMIAYYDLIERACEMRTLGALNADHWAGEENTPEGVSG\nIPQHLYGEKALNRLKMNRMTGETTEGQVFQGDIAGWDTRVSEYELQNEQRICEERAESED\nHRRKIRTIYECYRSPIIRVQDADGNLMWLHGRGQRMSGTIVTYAMNTITNAIIQQAVSKD\nLGNTYGRENRLISGDDCLVLYDTQHPEETLVAAFAKYGKVLKFEPGEPTWSKNIENTWFC\nSHTYSRVKVGNDIRIMLDRSEIEILGKARIVLGGYKTGEVEQAMAKGYANYLLLTFPQRR\nNVRLAANMVRAIVPRGLLPMGRAKDPWWREQPWMSTNNMIQAFNQIWEGWPPISSMKDIK\nYVGRAREQMLDST", "hash": "132xx"}' Base Image built with: https://github.com/ababaian/palmid Building with following Python Code: https://github.com/ababaian/palmid-lambda | I wasn't able to get my container to run. But I decided to take the approach, where I use amazonlinux:2 as a baseimage and add the python stuff around afterwards and now it's working. Seems that the R stuff has some dependencies that interfere with the amazon libraries. By using this approach I faced another issue regarding the libraries that I depend on, which I fixed as following: https://github.com/aws/aws-lambda-python-runtime-interface-client/issues/24 | 6 | 1 |
69,329,521 | 2021-9-25 | https://stackoverflow.com/questions/69329521/create-an-enum-class-from-a-list-of-strings-in-python | When I run this from enum import Enum class MyEnumType(str, Enum): RED = 'RED' BLUE = 'BLUE' GREEN = 'GREEN' for x in MyEnumType: print(x) I get the following as expected: MyEnumType.RED MyEnumType.BLUE MyEnumType.GREEN Is it possible to create a class like this from a list or tuple that has been obtained from elsewhere? Something vaguely similar to this perhaps: myEnumStrings = ('RED', 'GREEN', 'BLUE') class MyEnumType(str, Enum): def __init__(self): for x in myEnumStrings : self.setattr(self, x,x) However, in the same way as in the original, I don't want to have to explicitly instantiate an object. | You can use the enum functional API for this: from enum import Enum myEnumStrings = ('RED', 'GREEN', 'BLUE') MyEnumType = Enum('MyEnumType', myEnumStrings) From the docs: The first argument of the call to Enum is the name of the enumeration. The second argument is the source of enumeration member names. It can be a whitespace-separated string of names, a sequence of names, a sequence of 2-tuples with key/value pairs, or a mapping (e.g. dictionary) of names to values. | 6 | 13 |
69,327,239 | 2021-9-25 | https://stackoverflow.com/questions/69327239/how-to-validate-a-complex-nested-data-structure-with-pydantic | I had complex and nested data structure like below: { 0: { 0: {'S': 'str1', 'T': 4, 'V': 0x3ff}, 1: {'S': 'str2', 'T': 5, 'V': 0x2ff}}, 1: { 0: {'S': 'str3', 'T': 8, 'V': 0x1ff}, 1: {'S': 'str4', 'T': 7, 'V': 0x0ff}}, ...... } It's a 2D dictionary basically. The innermost dict follows {Str: str, str:int, str:int}, while its outer dict always has integer as the key to index. Is there a way for Pydantic to validate the data-type and data structure? I mean if someone changes the data with a string as a key to the outer dict, the code should prompt an error. Or if someones tweaks the inner dict with putting 'V' value to a string, a checker needs to complain about it. I am new to Pydantic, and found it always requires a str-type field for any data... Any ideas? | You could use Dict as custom root type with int as key type (with nested dict). Like so: from pydantic import BaseModel, StrictInt from typing import Union, Literal, Dict sample = {0: {0: {'S': 'str1', 'T': 4, 'V': 0x3ff}, 1: {'S': 'str2', 'T': 5, 'V': 0x2ff}}, 1: {0: {'S': 'str3', 'T': 8, 'V': 0x1ff}, 1: {'S': 'str4', 'T': 7, 'V': 0x0ff}} } # innermost model class Data(BaseModel): S: str T: int V: int class Model(BaseModel): __root__: Dict[int, Dict[int, Data]] print(Model.parse_obj(sample)) | 5 | 5 |
69,328,274 | 2021-9-25 | https://stackoverflow.com/questions/69328274/enum-raises-attributeerror-dict-object-has-no-attribute-member-names | I am trying to create an Enum class dynamically, using type(name, base, dict). from enum import Enum class FriendlyEnum(Enum): def hello(self): print(self.name + ' says hello!') The normal way works fine: class MyEnum(FriendlyEnum): foo = 1 bar = 2 MyEnum.foo.hello() # -> foo says hello! But if I try dynamically: MyEnum = type('MyEnum', (FriendlyEnum,), {'foo':1,'bar':2} ) MyEnum.foo.hello() # -> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/enum.py", line 146, in __new__ enum_members = {k: classdict[k] for k in classdict._member_names} AttributeError: 'dict' object has no attribute '_member_names' Any suggestions? It might be useful for testing the methods of a parent Enum class, dynamically creating new Enum classes with different attributes and same inherited methods. | The Enum class can be called to dynamically create a new Enum type. This will also work for subclasses of Enum MyEnum = FriendlyEnum('MyEnum', {'foo': 1, 'bar': 2}) MyEnum.foo.hello() | 6 | 8 |
69,322,097 | 2021-9-24 | https://stackoverflow.com/questions/69322097/distinguishing-between-pydantic-models-with-same-fields | I'm using Pydantic to define hierarchical data in which there are models with identical attributes. However, when I save and load these models, Pydantic can no longer distinguish which model was used and picks the first one in the field type annotation. I understand that this is expected behavior based on the documentation. However, the class type information is important to my application. What is the recommended way to distinguish between different classes in Pydantic? One hack is to simply add an extraneous field to one of the models, but I'd like to find a more elegant solution. See the simplified example below: container is initialized with data of type DataB, but after exporting and loading, the new container has data of type DataA as it's the first element in the type declaration of container.data. Thanks for your help! from abc import ABC from pydantic import BaseModel #pydantic 1.8.2 from typing import Union class Data(BaseModel, ABC): """ base class for a Member """ number: float class DataA(Data): """ A type of Data""" pass class DataB(Data): """ Another type of Data """ pass class Container(BaseModel): """ container holds a subclass of Data """ data: Union[DataA, DataB] # initialize container with DataB data = DataB(number=1.0) container = Container(data=data) # export container to string and load new container from string string = container.json() new_container = Container.parse_raw(string) # look at type of container.data print(type(new_container.data).__name__) # >>> DataA | As correctly noted in the comments, without storing additional information models cannot be distinguished when parsing. As of today (pydantic v1.8.2), the most canonical way to distinguish models when parsing in a Union (in case of ambiguity) is to explicitly add a type specifier Literal. It will look like this: from abc import ABC from pydantic import BaseModel from typing import Union, Literal class Data(BaseModel, ABC): """ base class for a Member """ number: float class DataA(Data): """ A type of Data""" tag: Literal['A'] = 'A' class DataB(Data): """ Another type of Data """ tag: Literal['B'] = 'B' class Container(BaseModel): """ container holds a subclass of Data """ data: Union[DataA, DataB] # initialize container with DataB data = DataB(number=1.0) container = Container(data=data) # export container to string and load new container from string string = container.json() new_container = Container.parse_raw(string) # look at type of container.data print(type(new_container.data).__name__) # >>> DataB This method can be automated, but you can use it at your own responsibility, since it breaks static typing and uses objects that may change in future versions: from pydantic.fields import ModelField class Data(BaseModel, ABC): """ base class for a Member """ number: float def __init_subclass__(cls, **kwargs): name = 'tag' value = cls.__name__ annotation = Literal[value] tag_field = ModelField.infer(name=name, value=value, annotation=annotation, class_validators=None, config=cls.__config__) cls.__fields__[name] = tag_field cls.__annotations__[name] = annotation class DataA(Data): """ A type of Data""" pass class DataB(Data): """ Another type of Data """ pass | 11 | 6 |
69,315,586 | 2021-9-24 | https://stackoverflow.com/questions/69315586/when-are-model-call-and-train-step-called | I am going through this tutorial on how to customize the training loop https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/customizing_what_happens_in_fit.ipynb#scrollTo=46832f2077ac The last example shows a GAN implemented with a custom training, where only __init__, train_step, and compile methods are defined class GAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super(GAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim def compile(self, d_optimizer, g_optimizer, loss_fn): super(GAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn def train_step(self, real_images): if isinstance(real_images, tuple): real_images = real_images[0] ... What happens if my model also has a call() custom function? Does train_step() overrides call()? Aren't call() and train_step() both called by fit() and what is the difference between both ? Below another piece of code "I" wrote where I wonder what is called into fit(), call() or train_step(): class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True, reset_after=True ) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x @tf.function def train_step(self, inputs): # unpack the data inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) # forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss=self.compiled_loss(labels, predictions, regularization_losses=self.losses) # compute the gradients grads=tape.gradient(loss, model.trainable_variables) # Update weights self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) # Update metrics (includes the metric that tracks the loss) self.compiled_metrics.update_state(labels, predictions) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} | These are different concepts and are used like this: train_step is called by fit. Basically, fit loops over the dataset and provide each batch to train_step (and then handles metrics, bookkeeping, etc., of course). call is used when you, well, call the model. To be precise, writing model(inputs) or in your case self(inputs) will use the function __call__, but the Model class has that function defined such that it will in turn use call. Those are the technical aspects. Intuitively: call should define the forward-pass of your model. i.e. how is the input transformed to the output. train_step defines the logic of a training step, usually with gradient descent. It will often make use of call since the training step tends to include a forward pass of the model to compute gradients. As for the GAN tutorial you linked, I would say that can actually be considered incomplete. 
It works without defining call because the custom train_step explicitly calls the generator/discriminator fields (as these are predefined models, they can be called as usual). If you tried to call the GAN model like gan(inputs), I would assume you get an error message (I did not test this). So you would always have to call gan.generator(inputs) to generate, for example. Finally (this part may be a bit confusing), note that you can subclass a Model to define a custom training step, but then initialize it via the functional API (like model = Model(inputs, outputs)), in which case you can make use of call in the training step without ever defining it yourself because the functional API takes care of that. | 9 | 16 |
69,295,870 | 2021-9-23 | https://stackoverflow.com/questions/69295870/aws-data-wrangler-error-waitererror-waiter-bucketexists-failed-max-attempts-e | I am trying to read data from athena into python's pandas dataframe. However, I encounter this error WaiterError: Waiter BucketExists failed: Max attempts exceeded. Previously accepted state: Matched expected HTTP status code: 404 Do anyone have the same problem when using data wrangler? This is my code below import awswrangler as wr import pandas as pd wr.athena.read_sql_query('select * from ath_bi_orders limit 10', database='default') | I have faced the same issue and resolved it by specifying AWS_DEFAULT_REGION env variable. Like this. os.environ['AWS_DEFAULT_REGION'] = 'ap-northeast-1' # specify your AWS region. Execute it before you throw the query. | 5 | 10 |