question_id int64 [59.5M, 79.4M] | creation_date stringlengths [8, 10] | link stringlengths [60, 163] | question stringlengths [53, 28.9k] | accepted_answer stringlengths [26, 29.3k] | question_vote int64 [1, 410] | answer_vote int64 [-9, 482] |
---|---|---|---|---|---|---|
69,777,732 | 2021-10-30 | https://stackoverflow.com/questions/69777732/how-can-python-merges-a-list-of-sets-and-return-them-as-a-set | I have a list of sets like this: set_list = [{1, 2, 3}, {4, 5, 1, 6}, {2, 3, 6}, {1, 5, 8}] Now I want to merge all of the sets together and return a set of all sets like this: final_set = {1, 2, 3, 4, 5, 6, 8} I have used this code but it is not working correctly: tmp_list = [] final_set = set(tmp_list.append(elem) for elem in set_list) What should I do? | You can use unpacking with set().union for a clean one-liner. >>> set().union(*set_list) {1, 2, 3, 4, 5, 6, 8} | 4 | 16 |
69,817,054 | 2021-11-2 | https://stackoverflow.com/questions/69817054/python-detection-of-delimiter-separator-in-a-csv-file | I have a function that reads and handles *.csv files in several dataframes. However, not all of the .csv files have the same separator. Is there a way to detect which type of separator the .csv file has, and then use it in the Pandas' read_csv() function? df = pd.read_csv(path, sep = 'xxx',header = None, index_col = 0) | Update In fact, use engine='python' as parameter of read_csv. It will try to automatically detect the right delimiter. sepstr, default ‘,’ Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'. Use csv.Sniffer: import csv def find_delimiter(filename): sniffer = csv.Sniffer() with open(filename) as fp: delimiter = sniffer.sniff(fp.read(5000)).delimiter return delimiter Demo: >>> find_delimiter('data.csv') ',' >>> find_delimiter('data.txt') ' ' | 8 | 15 |
69,778,351 | 2021-10-30 | https://stackoverflow.com/questions/69778351/how-to-fill-the-values-in-numpy-to-create-a-spectrum | I have done the following code but do not understand properly what is going on there. Can anyone explain how to fill colors in Numpy? Also I want to set in values in a way from 1 to 0 to give spectrum an intensity. E.g-: 0 means low intensity, 1 means high intensity import numpy as np import matplotlib.pyplot as plt a= np.zeros([256*6,256*6, 3], dtype=np.uint8) # init the array # fill the array with rgb values to create the spectrum without the use of loops #red a[:,:,0] = np.concatenate(([255]*256, np.linspace(255,0,256), [0]*256, [0]*256, np.linspace(0,255,256), [255]*256)) #green a[:,:,1] = np.concatenate((np.linspace(0,255,256), [255]*256, [255]*256, np.linspace(255,0,256), [0]*256,[0]*256)) #blue a[:,:,2] = np.concatenate(([0]*256, [0]*256,np.linspace(0,255,256),[255]*256, [255]*256, np.linspace(255,0,256))) plt.imshow(a) # this is different than what I am looking for Expected Output-: | res=256 op=np.zeros([res,res, 3]) # init the array #RGB op[:,:,0]= np.linspace(0,1,res) op[:,:,1]=np.linspace(0,1,res).reshape (256, 1) op[:,:,2]= np.linspace(1,0,res) plt.imshow(op) it will give the exact thing that you are looking for! let me know if it does not work | 6 | 7 |
69,796,358 | 2021-11-1 | https://stackoverflow.com/questions/69796358/repeating-triangle-pattern-in-python | I need to make a triangle of triangle pattern of * depending on the integer input. For example: n = 2 * *** * * * ********* n = 3 * *** ***** * * * *** *** *** *************** * * * * * *** *** *** *** *** ************************* I've already figured out the code for a single triangle, but I don't know how to duplicate them so they'll appear like a triangle of triangles. Here's my code for one triangle: rows = int(input()) for i in range(rows): for j in range(i, rows): print(" ", end="") for j in range(i): print("*", end="") for j in range(i + 1): print("*", end="") print() | Code I have simplified the following code so it should now look more clear easy to understand than it used to be. n = int(input("n = ")) rows = n ** 2 base = n * 2 - 1 for row in range(rows): triangle = row // n level = row % n a_space = " " * (n - triangle - 1) * base b_space = " " * (n - level - 1) line = (a_space + (b_space + "*" * (level * 2 + 1) + b_space) * (triangle * 2 + 1)).rstrip() print(line) How it works This approach prints out everything in one pass. In the following explanation, I will break down the components of the code, what these variables mean, and how they work. The height The first variable that I should mention is n, the number of levels, or the height of every triangle. It is also the number of triangles stacked on top of each other. Rows The second variable is rows, the number of rows in the output. We can see a pattern that the number of rows is equal to n squared. Base The next variable is base. It is the number of asterisks at the bottom level of the triangle. It also follows a pattern of odd numbers as we may have noticed. After that, our loop begins, iterates through every row, and prints out the result. Triangle and level These variables tell us which triangle and level of the triangle we are currently at. You can try it yourself by printing out triangle and level every iteration. for row in range(rows): triangle = row // n level = row % n print(triangle, level) The two types of space The next part is the indentation. Just to give you a brief idea of what a_space and b_space are, here is a visual representation that describes the spaces. a_space shrinks after every triangle, and b_space shrinks after every level. a_space = " " * (n - triangle - 1) * base b_space = " " * (n - level - 1) We multiply a_space by base because one a_space has a length equal to the base. Printing out the line Let's look at it step-by-step. First, we start our line with a_space. a_space Then, we print out the asterisks. The number of asterisks follows the pattern of odd numbers. "*" * (level * 2 + 1) We open and close the asterisks with b_space. b_space + "*" * (level * 2 + 1) + b_space Then, we simply multiply the whole thing by the number of triangles stacked next to each other horizontally. (b_space + "*" * (level * 2 + 1) + b_space) * (triangle * 2 + 1) Putting everything together, we get the line, right-strip it, and print it out. line = (a_space + (b_space + "*" * (level * 2 + 1) + b_space) * (triangle * 2 + 1)).rstrip() print(line) Output Let's try it out with a few test cases. 
n = 1 * n = 2 * *** * * * ********* n = 3 * *** ***** * * * *** *** *** *************** * * * * * *** *** *** *** *** ************************* n = 4 * *** ***** ******* * * * *** *** *** ***** ***** ***** ********************* * * * * * *** *** *** *** *** ***** ***** ***** ***** ***** *********************************** * * * * * * * *** *** *** *** *** *** *** ***** ***** ***** ***** ***** ***** ***** ************************************************* | 25 | 55 |
69,802,585 | 2021-11-1 | https://stackoverflow.com/questions/69802585/jupyter-notebook-in-vs-code-no-color-for-the-python-code-in-ipynb-file-what-a | Code of .ipynb file: Python is detected: The code has color in jupyter notebook: I tried setting up a jupyter notebook in vs code in an anaconda environment. I have tried - Python: Select interpreter and selected my anaconda environment. Made sure python is in the environment: python --version: Python 3.8.8. Tried clicking on CVE(in the bottom right corner) to change to python (picture 2). The colorization of the code works fine in the interactive window of jupyter notebook (picture 3). | Jupyter extension detects your code as CVE instead of Python so Python syntax highlighting is not applied successfully. Refer to Jupyter in vscode can't execute syntax highlighting, the Dependency Analytics extension should be the reason. Remove or Disable it then reload window, the question should go away. | 5 | 11 |
69,797,728 | 2021-11-1 | https://stackoverflow.com/questions/69797728/extract-epsg-code-from-geodataframe-crs-result | Let's say I have a GeoDataFrame with a CRS set. gdf.crs gives me <Projected CRS: EPSG:25833> Name: ETRS89 / UTM zone 33N Axis Info [cartesian]: - [east]: Easting (metre) - [north]: Northing (metre) Area of Use: - undefined Coordinate Operation: - name: UTM zone 33N - method: Transverse Mercator Datum: European Terrestrial Reference System 1989 - Ellipsoid: GRS 1980 - Prime Meridian: Greenwich This is of type <class 'pyproj.crs.crs.CRS'>. Is there a way to extract the EPSG Code from this, thus 25833? | .crs returns a pyroj.CRS object. This should get you the EPSG code from the object: gdf.crs.to_epsg() pyproj docs | 6 | 13 |
69,778,356 | 2021-10-30 | https://stackoverflow.com/questions/69778356/iterable-pytorch-dataset-with-multiple-workers | So I have a text file bigger than my ram memory, I would like to create a dataset in PyTorch that reads line by line, so I don't have to load it all at once in memory. I found pytorch IterableDataset as potential solution for my problem. It only works as expected when using 1 worker, if using more than one worker it will create duplicate recods. Let me show you an example: Having a testfile.txt containing: 0 - Dummy line 1 - Dummy line 2 - Dummy line 3 - Dummy line 4 - Dummy line 5 - Dummy line 6 - Dummy line 7 - Dummy line 8 - Dummy line 9 - Dummy line Defining a IterableDataset: class CustomIterableDatasetv1(IterableDataset): def __init__(self, filename): #Store the filename in object's memory self.filename = filename def preprocess(self, text): ### Do something with text here text_pp = text.lower().strip() ### return text_pp def line_mapper(self, line): #Splits the line into text and label and applies preprocessing to the text text, label = line.split('-') text = self.preprocess(text) return text, label def __iter__(self): #Create an iterator file_itr = open(self.filename) #Map each element using the line_mapper mapped_itr = map(self.line_mapper, file_itr) return mapped_itr We can now test it: base_dataset = CustomIterableDatasetv1("testfile.txt") #Wrap it around a dataloader dataloader = DataLoader(base_dataset, batch_size = 1, num_workers = 1) for X, y in dataloader: print(X,y) It outputs: ('0',) (' Dummy line\n',) ('1',) (' Dummy line\n',) ('2',) (' Dummy line\n',) ('3',) (' Dummy line\n',) ('4',) (' Dummy line\n',) ('5',) (' Dummy line\n',) ('6',) (' Dummy line\n',) ('7',) (' Dummy line\n',) ('8',) (' Dummy line\n',) ('9',) (' Dummy line',) That is correct. But If I change the number of workers to 2 the output becomes ('0',) (' Dummy line\n',) ('0',) (' Dummy line\n',) ('1',) (' Dummy line\n',) ('1',) (' Dummy line\n',) ('2',) (' Dummy line\n',) ('2',) (' Dummy line\n',) ('3',) (' Dummy line\n',) ('3',) (' Dummy line\n',) ('4',) (' Dummy line\n',) ('4',) (' Dummy line\n',) ('5',) (' Dummy line\n',) ('5',) (' Dummy line\n',) ('6',) (' Dummy line\n',) ('6',) (' Dummy line\n',) ('7',) (' Dummy line\n',) ('7',) (' Dummy line\n',) ('8',) (' Dummy line\n',) ('8',) (' Dummy line\n',) ('9',) (' Dummy line',) ('9',) (' Dummy line',) Which is incorrect, as is creating duplicates of each sample per worker in the data loader. Is there a way to solve this issue with pytorch? So a dataloader can be created to not load all file in memory with support for multiple workers. | So I found an answer within the torch discuss forum https://discuss.pytorch.org/t/iterable-pytorch-dataset-with-multiple-workers/135475/3 where they pointed out I should be using the worker info to slice consecutively to the batch size. 
The new dataset would look like this: class CustomIterableDatasetv1(IterableDataset): def __init__(self, filename): #Store the filename in object's memory self.filename = filename def preprocess(self, text): ### Do something with text here text_pp = text.lower().strip() ### return text_pp def line_mapper(self, line): #Splits the line into text and label and applies preprocessing to the text text, label = line.split('-') text = self.preprocess(text) return text, label def __iter__(self): worker_total_num = torch.utils.data.get_worker_info().num_workers worker_id = torch.utils.data.get_worker_info().id #Create an iterator file_itr = open(self.filename) #Map each element using the line_mapper mapped_itr = map(self.line_mapper, file_itr) #Add multiworker functionality mapped_itr = itertools.islice(mapped_itr, worker_id, None, worker_total_num) return mapped_itr Special thanks to @Ivan who also pointed out the slicing solution. With two workers it returns the same data as 1 worker only | 7 | 10 |
69,794,239 | 2021-11-1 | https://stackoverflow.com/questions/69794239/how-can-i-handle-the-error-certificate-verify-failed-certificate-has-expired | I'm just starting to create an image classification program in Python with TensorFlow using the CIFAR10 dataset, following this tutorial. Here is my code so far: import tensorflow as tf from tensorflow.keras import datasets, layers, models import matplotlib.pyplot as plt import numpy as np datasets.cifar10.load_data() I'm getting an error as follows: File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 1346, in do_open h.request(req.get_method(), req.selector, req.data, headers, File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1257, in request self._send_request(method, url, body, headers, encode_chunked) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1303, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1252, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1012, in _send_output self.send(msg) File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 952, in send self.connect() File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1426, in connect self.sock = self._context.wrap_socket(self.sock, File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 500, in wrap_socket return self.sslsocket_class._create( File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1040, in _create self.do_handshake() File "C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129) What does this error mean, and what can I do to fix it? I'm a beginner with TensorFlow and haven't been able to find an exact solution for this issue. | Recently had a similar error using python 3.7 and just turned off the verification like this: import tensorflow as tf from tensorflow.keras import datasets, layers, models import matplotlib.pyplot as plt import numpy as np import ssl ssl._create_default_https_context = ssl._create_unverified_context datasets.cifar10.load_data() Check out this issue for more information. You can of course try to install the missing certificates for your Python version. I can confirm that your code works on Google Colab. So it is definitely due to some missing certificates. | 4 | 8 |
69,735,307 | 2021-10-27 | https://stackoverflow.com/questions/69735307/how-to-display-a-heatmap-on-a-specific-parameter-with-geopandas | In my very simple case I would like to display the heatmap of the points in the points GeoJSON file but not on the geographic density (lat, long). In the points file each point has a confidence property (a value from 0 to 1), how to display the heatmap on this parameter? weight=points.confidence don't seem to work. for exemple: #points.geojson { "type": "FeatureCollection", "crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } }, "features": [ { "type": "Feature", "properties": {"confidence": 0.67}, "geometry": { "type": "Point", "coordinates": [ 37.703471404215918, 26.541625492300192 ] } }, { "type": "Feature", "properties": {"confidence": 0.76}, "geometry": { "type": "Point", "coordinates": [ 37.009744331225093, 26.710090585532761 ] } }, { "type": "Feature", "properties": {"confidence": 0.94}, "geometry": { "type": "Point", "coordinates": [ 37.541708538306224, 26.160111944646022 ] } }, { "type": "Feature", "properties": {"confidence": 0.52}, "geometry": { "type": "Point", "coordinates": [ 37.628566642215354, 25.917300595223857 ] } }, { "type": "Feature", "properties": {"confidence": 0.46}, "geometry": { "type": "Point", "coordinates": [ 37.676499267124271, 26.653959791866598 ] } }, { "type": "Feature", "properties": {"confidence": 0.55}, "geometry": { "type": "Point", "coordinates": [ 37.677033863264533, 26.654033815175087 ] } }, { "type": "Feature", "properties": {"confidence": 0.12}, "geometry": { "type": "Point", "coordinates": [ 37.37522057234797, 26.353271000367258 ] } }, { "type": "Feature", "properties": {"confidence": 0.62}, "geometry": { "type": "Point", "coordinates": [ 37.396556958266373, 26.459196264023291 ] } }, { "type": "Feature", "properties": {"confidence": 0.21}, "geometry": { "type": "Point", "coordinates": [ 36.879775221618168, 26.901743663072878 ] } } ] } The image below shows my result but it is on the geographic density not confidence score density. import geoplot as gplt import geopandas as gpd import geoplot.crs as gcrs import matplotlib.pyplot as plt points = gpd.read_file('points.geojson') polygons = gpd.read_file('polygons.geojson') ax = gplt.polyplot(polygons, projection=gcrs.AlbersEqualArea(), zorder=1) gplt.kdeplot(points, cmap='Reds', shade=True, clip=polygons, ax=ax) #weight=points.confidence don’t work inside kdeplot() plt.show() | using your sample data for points these points are in Saudi Arabia, so assumed that polygons are regional boundaries in Saudi Arabia. 
Downloaded this from http://www.naturalearthdata.com/downloads/10m-cultural-vectors/ polygon data is a shape file loaded into geopandas to allow interface to GEOJSON __geo__interface dynamically filtered this to Saudi using pandas .loc confidence data is just a straight https://plotly.com/python/mapbox-density-heatmaps/ boundaries are https://plotly.com/python/mapbox-layers/ # fmt: off points = { "type": "FeatureCollection", "crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } }, "features": [ { "type": "Feature", "properties": {"confidence": 0.67}, "geometry": { "type": "Point", "coordinates": [ 37.703471404215918, 26.541625492300192 ] } }, { "type": "Feature", "properties": {"confidence": 0.76}, "geometry": { "type": "Point", "coordinates": [ 37.009744331225093, 26.710090585532761 ] } }, { "type": "Feature", "properties": {"confidence": 0.94}, "geometry": { "type": "Point", "coordinates": [ 37.541708538306224, 26.160111944646022 ] } }, { "type": "Feature", "properties": {"confidence": 0.52}, "geometry": { "type": "Point", "coordinates": [ 37.628566642215354, 25.917300595223857 ] } }, { "type": "Feature", "properties": {"confidence": 0.46}, "geometry": { "type": "Point", "coordinates": [ 37.676499267124271, 26.653959791866598 ] } }, { "type": "Feature", "properties": {"confidence": 0.55}, "geometry": { "type": "Point", "coordinates": [ 37.677033863264533, 26.654033815175087 ] } }, { "type": "Feature", "properties": {"confidence": 0.12}, "geometry": { "type": "Point", "coordinates": [ 37.37522057234797, 26.353271000367258 ] } }, { "type": "Feature", "properties": {"confidence": 0.62}, "geometry": { "type": "Point", "coordinates": [ 37.396556958266373, 26.459196264023291 ] } }, { "type": "Feature", "properties": {"confidence": 0.21}, "geometry": { "type": "Point", "coordinates": [ 36.879775221618168, 26.901743663072878 ] } } ] } # fmt: on import geopandas as gpd import plotly.express as px import requests from pathlib import Path from zipfile import ZipFile import urllib # fmt: off # download boundaries url = "https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_1_states_provinces.zip" f = Path.cwd().joinpath(urllib.parse.urlparse(url).path.split("/")[-1]) # fmt: on if not f.exists(): r = requests.get(url, stream=True, headers={"User-Agent": "XY"}) with open(f, "wb") as fd: for chunk in r.iter_content(chunk_size=128): fd.write(chunk) zfile = ZipFile(f) zfile.extractall(f.stem) # load downloaded boundaries gdf2 = gpd.read_file(str(f.parent.joinpath(f.stem).joinpath(f"{f.stem}.shp"))) # confidence data gdf = gpd.GeoDataFrame.from_features(points) # now the simple bit, densitity plot data and Saudi Arabia regional boundaries as a layer fig = px.density_mapbox( gdf, lat=gdf.geometry.y, lon=gdf.geometry.x, z="confidence" ).update_layout( mapbox={ "style": "carto-positron", "zoom": 6, "layers": [ { "source": gdf2.loc[gdf2["iso_a2"].eq("SA")].geometry.__geo_interface__, "type": "line", } ], }, margin={"l":0,"r":0,"t":0,"b":0} ) fig | 5 | 5 |
69,792,774 | 2021-11-1 | https://stackoverflow.com/questions/69792774/how-to-draw-a-single-contour-line-in-matplotlib | Following code is borrowed from here: import numpy as np import matplotlib.cm as cm import matplotlib.pyplot as plt delta = 0.025 x = np.arange(-3.0, 3.0, delta) y = np.arange(-2.0, 2.0, delta) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 fig, ax = plt.subplots() CS = ax.contour(X, Y, Z) ax.clabel(CS, inline=True, fontsize=10) ax.set_title('Simplest default with labels') The above code produces the contour plot with 7 contour lines. How to draw a single contour line say with the value 0.5? The output of the above code is: | Just add contour value: CS = ax.contour(X, Y, Z, [0.5]) import numpy as np import matplotlib.cm as cm import matplotlib.pyplot as plt delta = 0.025 x = np.arange(-3.0, 3.0, delta) y = np.arange(-2.0, 2.0, delta) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 fig, ax = plt.subplots() CS = ax.contour(X, Y, Z, [0.5]) ax.clabel(CS, inline=True, fontsize=10) ax.set_title('Simplest default with labels') plt.show() | 5 | 6 |
69,784,120 | 2021-10-31 | https://stackoverflow.com/questions/69784120/pre-commit-x-vscode-env-python3-9-no-such-file-or-directory | I use pre-commit to run black flake8 and isort on my code. I ran pre-commit install and as expected it created .git/hooks/pre-commit which starts like: #!/usr/bin/env python3.9 # File generated by pre-commit: https://pre-commit.com # ID: 138fd403232d2ddd5efb44317e38bf03 import os import sys ... The hook works fine in the terminal: $ git commit -am "remove commented block" isort....................................................................Passed black....................................................................Passed flake8...................................................................Passed [main f30007d] remove commented block 1 file changed, 4 deletions(-) But running it from VSCode's Source Control Panel yields an error (command output): > git -c user.useConfigOnly=true commit --quiet --allow-empty-message --file - env: python3.9: No such file or directory Not sure where that comes from. In addition (not sure it matters though) I double checked: both the terminal's python and VSCode's selected Python interpreter point to the same /Users/victor/.pyenv/shims/python | It seemed to be an issue with pyenv not being loaded by VSCode's source control panel when executing the git commands. I tried moving some stuff (like $(pyenv init -)) into an earlier zsh configuration file like .zshenv but that did not help. In the end, specifying the complete path fixed it #!/usr/bin/env /Users/victor/.pyenv/shims/python3.9 # File generated by pre-commit: https://pre-commit.com # ID: 138fd403232d2ddd5efb44317e38bf03 import os import sys ... | 5 | 0 |
69,791,917 | 2021-11-1 | https://stackoverflow.com/questions/69791917/correct-way-to-initialize-an-optional-dict-with-type-hint-in-python3 | I want to initialize a dataclass dictionary with type hint(key should be string, the value should be another dataclass's instances) called usr_orders that does NOT require user init upon initializing instances. Based on my understanding Optional is the union(.., None), so I went with: @dataclass class User: usr_orders: Optional[Dict[str, BookOrder]] = Dict But when I try to modify it using key-value I got the following error: TypeError: '_GenericAlias' object does not support item assignment def update_usr_orders(self, book_order:BookOrder): self.usr_orders[book_order.order_id] = book_order # book_order: BookOrder is a dataclass instance This error seems to be caused by the optional Dict definition. But I don't know how to solve this. Does anyone know why I cannot update the usr_orders? Thanks! | You'll need to import a field and use a default_factory: from dataclasses import dataclass, field from typing import Optional, Dict @dataclass class BookOrder(): id: str @dataclass class User: usr_orders: Optional[Dict[str, BookOrder]] = field(default_factory=dict) def update_usr_orders(self, book_order:BookOrder): self.usr_orders[book_order.id] = book_order a = BookOrder('1') u = User() u.update_usr_orders(a) print(u.usr_orders) Part of the issue is that you were using Dict as a default field for the data class. Dict is a type hint; I think you were meaning to use dict() or {} - however, this would not work. Try it and see for yourself! You'll end up with this error: ValueError: mutable default <class 'dict'> for field usr_orders is not allowed: use default_factory. This is a dataclass thing where it prevents you from using 1 dictionary for multiple instances. Further reading | 4 | 3 |
69,778,194 | 2021-10-30 | https://stackoverflow.com/questions/69778194/how-can-i-check-whether-a-unicode-codepoint-is-assigned-or-not | I am using Python 3 and I know all about hex, int, chr, ord, '\uxxxx' escape and '\U00xxxxxx' escape and Unicode has 1114111 codepoints... How can I check if a Unicode codepoint is valid? That is, it is unambiguously mapped to a authoritatively defined character. For example, codepoint 720 is valid; it is 0x2d0 in hex, and U+02D0 points to ː: In [135]: hex(720) Out[135]: '0x2d0' In [136]: '\u02d0' Out[136]: 'ː' And 888 is not valid: In [137]: hex(888) Out[137]: '0x378' In [138]: '\u0378' Out[138]: '\u0378' And 127744 is valid: In [139]: chr(127744) Out[139]: '🌀' And 0xe0000 is invalid: In [140]: '\U000e0000' Out[140]: '\U000e0000' I have come up with a rather hacky solution: if a codepoint is valid, trying to convert it to a character will either result in the decoded character or the '\xhh' escape sequence, else it will return the undecoded escape sequence exactly same as original, I can check the return value of chr and check if it starts with '\u' or '\U'... Now is the hacky part, chr doesn't decode invalid codepoints but it doesn't raise exceptions either, and the escape sequences will have length of 1 since they are treated as a single character, I have to repr the return value and check the results... I have used this method to identify all invalid codepoints: In [130]: invalid = [] In [131]: for i in range(1114112): ...: if any(f'{chr(i)!r}'.startswith(j) for j in ("'\\U", "'\\u")): ...: invalid.append(i) In [132]: from pathlib import Path In [133]: invalid = [(hex(i).removeprefix('0x'), i) for i in invalid] In [134]: Path('D:/invalid_unicode.txt').write_text(',\n'.join(map(repr, invalid))) Out[134]: 18574537 Can anyone offer a better solution? | I believe the most straight-forward approach is to use unicodedata.category(). The examples from the OP are unassigned codepoints, which have a category of Cn ("Other, not assigned"). >>> import unicodedata as ud >>> ud.category('\u02d0') 'Lm' >>> ud.category('\u0378') # unassigned 'Cn' >>> ud.category(chr(127744)) 'So' >>> ud.category('\U000e0000') # unassigned 'Cn' It also works for the control characters in the ASCII range: >>> ud.category('\x00') 'Cc' Further categories for invalid codepoints (according to comments) are Cs ("Other, surrogate") and Co ("Other, private use"): >>> ud.category('\ud800') # lower surrogate 'Cs' >>> ud.category('\uf8ff') # private use 'Co' So a function for codepoint validity (as per the OP's definition) could look like this: def is_valid(char): return ud.category(char) not in ('Cn', 'Cs', 'Co') Important caveat: Python's unicodedata module embeds a certain version of Unicode, so the information is potentially out of date. For example, in my installation of Python 3.8, the Unicode version is 12.1.0, so it doesn't know about codepoints assigned in later versions of Unicode: >>> ud.unidata_version '12.1.0' >>> ud.category('\U0001fae0') # melting face emoji added in Unicode v14 'Cn' If you need a more recent version of Unicode than the one of your Python version, you probably need to fetch an appropriate table directly from Unicode. | 5 | 4 |
69,789,910 | 2021-10-31 | https://stackoverflow.com/questions/69789910/find-the-value-of-a-masked-ci-cd-variable | I am currently trying to find the value of a CI/CD variable in a VM. I tried to output it but I find out the variable’s value is masked in job logs. This is the code I used in my .gitlab-ci.yml. image: python:3 stages: - deploy deploy: stage: deploy script: - echo "List all CI/CD variables" - export The line in question is ... declare -x Secret_variable ="[MASKED]" ... Is there a way for me to get the find value without modifying the checkbox in the Variables section of Gitlab? | You can reveal the values of CI/CD variables in the settings page for the project (or group, if it is a group variable) by clicking the "reveal values" button. You must have maintainer permission or higher to do this. Alternatively, you can expose secrets in the job log if you transform the value such that it won't be masked in the job log. This is a bad idea because you'd be exposing sensitive values in plaintext, but it can be done nevertheless. Unlike other CI platforms, GitLab only masks the exact value. For example, you can print the values base64 encoded, print the characters in reverse order, mangle them in any way, etc. and they will not be masked in the job log. It's also good to know this can happen accidentally, for example, if you use secrets in curl as basic auth requests with verbosity enabled, since the verbose logs will show the parameters base64 encoded. curl -vvv --user "${USERNAME}" --password "${SECRET_PASSWORD}" will expose your CI/CD variables in the job log, for example. You can, of course, do this intentionally as well... expose_secrets: script: - echo $SUPER_SECRET | base64 GitLab's security model works around trusted developers. You should obviously not do this unless you have legitimate reason to do this. Doing this can probably get you in trouble a lot of places, unless it's your job to pentest systems, for example. When secrets are exposed in the job log like this, even in base64 form, they should be treated as compromised and be rotated immediately. | 5 | 10 |
69,788,338 | 2021-10-31 | https://stackoverflow.com/questions/69788338/what-is-the-fastest-way-to-calculate-create-powers-of-ten | If as the input you provide the (integer) power, what is the fastest way to create the corresponding power of ten? Here are four alternatives I could come up with, and the fastest way seems to be using an f-string: from functools import partial from time import time import numpy as np def fstring(power): return float(f'1e{power}') def asterisk(power): return 10**power methods = { 'fstring': fstring, 'asterisk': asterisk, 'pow': partial(pow, 10), 'np.pow': partial(np.power, 10, dtype=float) } # "dtype=float" is necessary because otherwise it will raise: # ValueError: Integers to negative integer powers are not allowed. # see https://stackoverflow.com/a/43287598/5472354 powers = [int(i) for i in np.arange(-10000, 10000)] for name, method in methods.items(): start = time() for i in powers: method(i) print(f'{name}: {time() - start}') Results: fstring: 0.008975982666015625 asterisk: 0.5190775394439697 pow: 0.4863283634185791 np.pow: 0.046906232833862305 I guess the f-string approach is the fastest because nothing is actually calculated, though it only works for integer powers of ten, whereas the other methods are more complicated operations that also work with any real number as the base and power. So is the f-string actually the best way to go about it? | You're comparing apples to oranges here. 10 ** n computes an integer (when n is non-negative), whereas float(f'1e{n}') computes a floating-point number. Those won't take the same amount of time, but they solve different problems so it doesn't matter which one is faster. But it's worse than that, because there is the overhead of calling a function, which is included in your timing for all of your alternatives, but only some of them actually involve calling a function. If you write 10 ** n then you aren't calling a function, but if you use partial(pow, 10) then you have to call it as a function to get a result. So you're not actually comparing the speed of 10 ** n fairly. Instead of rolling your own timing code, use the timeit library, which is designed for doing this properly. The results are in seconds for 1,000,000 repetitions (by default), or equivalently they are the average time in microseconds for one repetiton. Here's a comparison for computing integer powers of 10: >>> from timeit import timeit >>> timeit('10 ** n', setup='n = 500') 1.09881673199925 >>> timeit('pow(10, n)', setup='n = 500') 1.1821871869997267 >>> timeit('f(n)', setup='n = 500; from functools import partial; f = partial(pow, 10)') 1.1401332350014854 And here's a comparison for computing floating-point powers of 10: note that computing 10.0 ** 500 or 1e500 is pointless because the result is simply an OverflowError or inf. >>> timeit('10.0 ** n', setup='n = 200') 0.12391662099980749 >>> timeit('pow(10.0, n)', setup='n = 200') 0.17336435099969094 >>> timeit('f(n)', setup='n = 200; from functools import partial; f = partial(pow, 10.0)') 0.18887039500077663 >>> timeit('float(f"1e{n}")', setup='n = 200') 0.44305286100097874 >>> timeit('np.power(10.0, n, dtype=float)', setup='n = 200; import numpy as np') 1.491982370000187 >>> timeit('f(n)', setup='n = 200; from functools import partial; import numpy as np; f = partial(np.power, 10.0, dtype=float)') 1.6273324920002779 So the fastest of these options in both cases is the obvious one: 10 ** n for integers and 10.0 ** n for floats. | 7 | 7 |
69,787,332 | 2021-10-31 | https://stackoverflow.com/questions/69787332/how-to-detect-if-init-subclass-has-been-overridden-in-a-subclass | Normally in Python, it is possible to detect whether a method has been overridden in a subclass using the following technique: >>> class Foo: ... def mymethod(self): pass ... >>> class Bar(Foo): pass ... >>> Bar.mymethod is Foo.mymethod True The expression Bar.mymethod is Foo.mymethod will evaluate to True if the method from Foo has not been overridden in Bar, but will evaluate to False if the method has been overridden in Bar. This technique works well with dunder methods inherited from object, as well as non-dunder methods: >>> Bar.__new__ is Foo.__new__ True >>> Bar.__eq__ is Foo.__eq__ True We can formalise this logic in a function like so: def method_has_been_overridden(superclass, subclass, method_name): """ Return `True` if the method with the name `method_name` has been overridden in the subclass or an intermediate class in the method resolution order """ if not issubclass(subclass, superclass): raise ValueError( "This function only makes sense if `subclass` is a subclass of `superclass`" ) subclass_method = getattr(subclass, method_name) if not callable(method): raise ValueError(f"'{subclass.__name__}.{method_name}' is not a method") return subclass_method is not getattr(superclass, method_name, object()) However, this technique fails when it comes to two methods: __init_subclass__ and __subclasshook__: >>> class Foo: pass ... >>> class Bar(Foo): pass ... >>> Bar.__init_subclass__ is Foo.__init_subclass__ False >>> Bar.__subclasshook__ is Foo.__subclasshook__ False And, for an even more perplexing example: >>> type.__init_subclass__ is type.__init_subclass__ False I have two questions: Why does this technique fail with these methods, and these methods only? (I have not been able to find any other examples where this technique fails -- but if there are any, I'd love to know about them!) Is there an alternative technique that can be used to detect if __init_subclass__ or __subclasshook__ have been defined in a subclass after being left undefined in the superclass? | __init_subclass__ is a special method, in that it is implicitly a classmethod even if you do not decorate it with @classmethod when defining it. However, the issue here does not arise from the fact that __init_subclass__ is a special method. Instead, there is a fundamental error in the technique you are using to detect whether a method has been overridden in a subclass: it won't work with any classmethods at all: >>> class Foo: ... def mymethod(self): pass ... @classmethod ... def my_classmethod(cls): pass ... >>> class Bar(Foo): pass ... >>> Bar.mymethod is Foo.mymethod True >>> Bar.my_classmethod is Foo.my_classmethod False This is because of how bound methods in Python work: methods, in Python, are descriptors. Observe the equivalence of the following lines of code with respect to instance methods. The first (more normal) way of calling mymethod on the instance f is merely syntactic sugar for the second way of calling the method on the instance f: >>> class Foo: ... def mymethod(self): ... print('Instance method') ... >>> f = Foo() >>> f.mymethod() Instance method >>> Foo.__dict__['mymethod'].__get__(f, Foo)() Instance method Calling __get__ on the unbound method in Foo.__dict__ produces a new object each time; it is only by accessing the instance method on the class that we can test for identity, as you do in your question. 
However, with regards to classmethods, even accessing the method from the class will call __get__ on the method: >>> class Foo: ... @classmethod ... def my_classmethod(cls): ... print('Class method') ... >>> Foo.my_classmethod is Foo.my_classmethod False >>> Foo.my_classmethod() Class method >>> Foo.__dict__['my_classmethod'].__get__(Foo, Foo)() Class method What about __new__? Your question points out that your existing method works with __new__. That's strange -- we've just established that this method doesn't work with classmethods, and __new__ certainly looks like a classmethod. The first parameter of __new__ is named cls, after all! However, the Python documentation makes clear that this isn't the case at all: object.__new__(cls[, ...]) Called to create a new instance of class cls. __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. It's a staticmethod, not a classmethod! Mystery solved. A better way of detecting whether a subclass overrides a method from a superclass The only sure-fire way of knowing for sure if a method has been overridden in a subclass is by traversing the __dict__ of each class in the method resolution order: def method_has_been_overridden(superclass, subclass, method_name): """ Return `True` if the method with the name `method_name` has been overridden in the subclass or an intermediate class in the method resolution order """ if not issubclass(subclass, superclass): raise ValueError( "This function only makes sense if `subclass` is a subclass of `superclass`" ) subclass_method = getattr(subclass, method_name) if not callable(subclass_method): raise ValueError(f"'{subclass.__name__}.{method_name}' is not a method") for cls in subclass.__mro__: if cls is superclass: return False if method_name in cls.__dict__: return True This function can correctly determine whether or not __init_subclass__, or any other classmethod, has been overridden in a subclass: >>> class Foo: pass ... >>> class Bar(Foo): pass ... >>> class Baz(Foo): ... def __init_subclass__(cls, *args, **kwargs): ... return super().__init_subclass__(*args, **kwargs) >>> method_has_been_overridden(Foo, Bar, '__init_subclass__') False >>> method_has_been_overridden(Foo, Baz, '__init_subclass__') True Many thanks to @chepner and U12-Forward, whose excellent answers helped me figure this problem out. | 6 | 4 |
69,786,379 | 2021-10-31 | https://stackoverflow.com/questions/69786379/transforming-a-list-of-points-in-a-rank-of-indexes | Let's say that I have a list of points from a torunament points = [0, 12, 9] And I want to have a ranking of the players, so the expected outputs would be [1, 2, 0] Because the index 1 is first in the ranking, followed by index 2 and then index 0. My idea was to use a for to iterate through all the numbers, getting the biggest value, find the index of the biggest value and then assign the value in the ranking, but it seems unnecessary long and complicated. Any tips? | Use sorted: points = [0, 12, 9] res = sorted(range(len(points)), key=lambda x: points[x], reverse=True) print(res) Output [1, 2, 0] The idea is to sort the indexes of the list (range(3)) according to their value on the list, hence the key=lambda x: points[x]. The reverse True is because you want a descending ranking. | 5 | 5 |
69,780,574 | 2021-10-30 | https://stackoverflow.com/questions/69780574/why-do-i-get-a-bad-handler-aws-lambda-not-enough-values-to-unpack-error | I'm trying to execute a Lambda function but I get the following error: { "errorMessage": "Bad handler 'AlertMetricSender': not enough values to unpack (expected 2, got 1)", "errorType": "Runtime.MalformedHandlerName", "stackTrace": [] } My Lambda handler is specified in AlertMetricSender.py: from modules.ZabbixSender import ZabbixSender def lambda_handler(event, context): sender = ZabbixSender("10.10.10.10", 10051) sender.add("Zabbix server", "lambda.test", 5.65) sender.send() | This is normally caused by an incorrect value specified for the "Handler" setting for the Lambda function. It is a reference to the method in your function code that processes events i.e. the entry point. The value of the handler argument is comprised of the below, separated by a dot: The name of the file in which the Lambda handler function is located The name of the Python handler function. Make sure you have not missed the filename. In this case, it looks like the handler should be set to AlertMetricSender.lambda_handler. | 21 | 33 |
69,773,539 | 2021-10-29 | https://stackoverflow.com/questions/69773539/how-do-i-convert-a-json-file-to-a-python-class | Consider this json file named h.json I want to convert this into a python dataclass. { "acc1":{ "email":"[email protected]", "password":"acc1", "name":"ACC1", "salary":1 }, "acc2":{ "email":"[email protected]", "password":"acc2", "name":"ACC2", "salary":2 } } I could use an alternative constructor for getting each account, for example: import json from dataclasses import dataclass @dataclass class Account(object): email:str password:str name:str salary:int @classmethod def from_json(cls, json_key): file = json.load(open("h.json")) return cls(**file[json_key]) but this is limited to what arguments (email, name, etc.) were defined in the dataclass. What if I were to modify the json to include another thing, say age? The script would end up returning a TypeError, specifically TypeError: __init__() got an unexpected keyword argument 'age'. Is there a way to dynamically adjust the class attributes based on the keys of the dict (json object), so that I don't have to add attributes each time I add a new key to the json? | This way you lose some dataclass features. Such as determining whether it is optional or not Such as auto-completion feature However, you are more familiar with your project and decide accordingly There must be many methods, but this is one of them: @dataclass class Account(object): email: str password: str name: str salary: int @classmethod def from_json(cls, json_key): file = json.load(open("1.txt")) keys = [f.name for f in fields(cls)] # or: keys = cls.__dataclass_fields__.keys() json_data = file[json_key] normal_json_data = {key: json_data[key] for key in json_data if key in keys} anormal_json_data = {key: json_data[key] for key in json_data if key not in keys} tmp = cls(**normal_json_data) for anormal_key in anormal_json_data: setattr(tmp,anormal_key,anormal_json_data[anormal_key]) return tmp test = Account.from_json("acc1") print(test.age) | 9 | 7 |
69,774,045 | 2021-10-29 | https://stackoverflow.com/questions/69774045/selenium-chromedriver-issue-using-webdriver-manager-for-python | When running this code: from selenium import webdriver from selenium.webdriver.common.keys import Keys from webdrivermanager.chrome import ChromeDriverManager driver = webdriver.Chrome(ChromeDriverManager().download_and_install()) driver.get("http://www.python.org") This results in the following exception at the line where the chromedriver is installed: TypeError: expected str, bytes or os.PathLike object, not tuple Note that I am aware that there already exist many threads about this topic but since the webdrivermanager seems to have been updated majorly the previous solutions do not work. Also a quick side note: I installed webdrivermager via conda instead of pip. but that should not be of concern. EDIT: Entire stack trace: Traceback (most recent call last): File "C:\Users\stefa\OneDrive - Johannes Kepler Universität Linz\Dokumente\GitHub\briefly\src\crawler\crawler.py", line 19, in driver = webdriver.Chrome(ChromeDriverManager().download_and_install()) File "C:\Users\stefa\anaconda3\envs\briefly\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 73, in init self.service.start() File "C:\Users\stefa\anaconda3\envs\briefly\lib\site-packages\selenium\webdriver\common\service.py", line 72, in start self.process = subprocess.Popen(cmd, env=self.env, File "C:\Users\stefa\anaconda3\envs\briefly\lib\subprocess.py", line 951, in init self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Users\stefa\anaconda3\envs\briefly\lib\subprocess.py", line 1360, in _execute_child args = list2cmdline(args) File "C:\Users\stefa\anaconda3\envs\briefly\lib\subprocess.py", line 565, in list2cmdline for arg in map(os.fsdecode, seq): File "C:\Users\stefa\anaconda3\envs\briefly\lib\os.py", line 822, in fsdecode filename = fspath(filename) # Does type-checking of filename. TypeError: expected str, bytes or os.PathLike object, not tuple | There are two issues in your code block as follows: You need to import ChromeDriverManager from webdriver_manager.chrome As per Webdriver Manager for Python download_and_install() isn't supported and you have to use install() So your effective code block will be: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(ChromeDriverManager().install()) driver.get("http://www.python.org") On windows-10 system the console output will be: C:\Users\Admin\Desktop\Python Programs>python webdriver-manager_ChromeDriverManager.py [WDM] - [WDM] - ====== WebDriver manager ====== [WDM] - Current google-chrome version is 95.0.4638 [WDM] - Get LATEST driver version for 95.0.4638 [WDM] - There is no [win32] chromedriver for browser 95.0.4638 in cache [WDM] - Get LATEST driver version for 95.0.4638 [WDM] - Trying to download new driver from https://chromedriver.storage.googleapis.com/95.0.4638.54/chromedriver_win32.zip [WDM] - Driver has been saved in cache [C:\Users\Admin\.wdm\drivers\chromedriver\win32\95.0.4638.54] DevTools listening on ws://127.0.0.1:50921/devtools/browser/c26df2aa-67aa-4264-b1dc-34d6148b9174 You can find a relevant detailed discussion in ModuleNotFoundError: No module named 'webdriver_manager' error even after installing webdrivermanager | 7 | 6 |
69,758,301 | 2021-10-28 | https://stackoverflow.com/questions/69758301/efficient-algorithm-to-get-all-the-combinations-of-numbers-that-are-within-a-cer | Suppose I have two lists list_1 and list_2 list_1 = [1, 5, 10] list_2 = [3, 4, 15] I want to get a list of tuples containing elements from both list_1 and list_2 such that the difference between the numbers in a tuple is under a some constant c. E.g. suppose c is 2 then the tuples I would have would be: [(1, 3), (5, 3), (5, 4)] Of course one can iterate over both lists and check that the difference between 2 elements is less than c, but that has a complexity of n^2 and I would rather reduce that complexity. | Here is an implementation of the idea of Marat from the comments: import bisect def close_pairs(list1,list2,c): #assumes that list2 is sorted for x in list1: i = bisect.bisect_left(list2,x-c) j = bisect.bisect_right(list2,x+c) yield from ((x,y) for y in list2[i:j]) list_1 = [1, 5, 10] list_2 = [3, 4, 15] print(list(close_pairs(list_1,list_2,2))) #prints [(1, 3), (5, 3), (5, 4)] To demonstrate the potential improvement of this strategy over what might be thought of as the "naive" approach, let's timeit. import timeit setup_naive = ''' import numpy list_a = numpy.random.randint(0, 2500, 500).tolist() list_b = numpy.random.randint(0, 2500, 500).tolist() c = 2 def close_pairs(list_a, list_b, c): yield from ((x,y) for x in list_a for y in list_b if abs(x-y) <= c) ''' setup_john_coleman = ''' import bisect import numpy list_a = numpy.random.randint(0, 2500, 500).tolist() list_b = numpy.random.randint(0, 2500, 500).tolist() c = 2 def close_pairs(list_a, list_b, c): list_a = sorted(list_a) list_b = sorted(list_b) for x in list_a: i = bisect.bisect_left(list_b,x-c) j = bisect.bisect_right(list_b,x+c) yield from ((x,y) for y in list_b[i:j]) ''' print(f"john_coleman: {timeit.timeit('list(close_pairs(list_a, list_b, c))', setup=setup_john_coleman, number=1000):.2f}") print(f"naive: {timeit.timeit('list(close_pairs(list_a, list_b, c))', setup=setup_naive, number=1000):.2f}") On a handy laptop that gives result like: john_coleman: 0.50 naive: 18.35 | 8 | 4 |
69,748,145 | 2021-10-28 | https://stackoverflow.com/questions/69748145/continue-to-other-generators-once-a-generator-has-been-exhausted-in-a-list-of-ge | I have a list of generators in a function alternate_all(*args) that alternates between each generator in the list to print their first item, second item, ..., etc. until all generators are exhausted. My code works until a generator is exhausted and once the StopIteration occurs, it stops printing, when I want it to continue with the rest of the generators and ignore the exhausted one: def alternate_all(*args): iter_list = [] for iterable in args: iter_list.append(iter(iterable)) try: while True: for iterable in iter_list: val = next(iter_list[0]) iter_list.append(iter_list.pop(0)) yield val except StopIteration: pass if __name__ == '__main__': for i in alternate_all('abcde','fg','hijk'): print(i,end='') My output is: afhbgic When it should be: afhbgicjdke How can I get this to ignore the exhausted generator? I would prefer not to use itertools and keep this same structure. | This works. I tried to stay close to how your original code works (though I did replace your first loop with a list comprehension for simplicity). def alternate_all(*args): iter_list = [iter(arg) for arg in args] while iter_list: i = iter_list.pop(0) try: val = next(i) except StopIteration: pass else: yield val iter_list.append(i) The main problem with your code was that your try/except was outside of the loop, meaning the first exhausted iterator would exit from the loop. Instead, you want to catch StopIteration inside the loop so you can keep going, and the loop should keep going while iter_list still has any iterators in it. | 5 | 4 |
69,767,707 | 2021-10-29 | https://stackoverflow.com/questions/69767707/jax-flax-very-slow-rnn-forward-pass-compared-to-pytorch | I recently implemented a two-layer GRU network in Jax and was disappointed by its performance (it was unusable). So, i tried a little speed comparison with Pytorch. Minimal working example This is my minimal working example and the output was created on Google Colab with GPU-runtime. notebook in colab import flax.linen as jnn import jax import torch import torch.nn as tnn import numpy as np import jax.numpy as jnp def keyGen(seed): key1 = jax.random.PRNGKey(seed) while True: key1, key2 = jax.random.split(key1) yield key2 key = keyGen(1) hidden_size=200 seq_length = 1000 in_features = 6 out_features = 4 batch_size = 8 class RNN_jax(jnn.Module): @jnn.compact def __call__(self, x, carry_gru1, carry_gru2): carry_gru1, x = jnn.GRUCell()(carry_gru1, x) carry_gru2, x = jnn.GRUCell()(carry_gru2, x) x = jnn.Dense(4)(x) x = x/jnp.linalg.norm(x) return x, carry_gru1, carry_gru2 class RNN_torch(tnn.Module): def __init__(self, batch_size, hidden_size, in_features, out_features): super().__init__() self.gru = tnn.GRU( input_size=in_features, hidden_size=hidden_size, num_layers=2 ) self.dense = tnn.Linear(hidden_size, out_features) self.init_carry = torch.zeros((2, batch_size, hidden_size)) def forward(self, X): X, final_carry = self.gru(X, self.init_carry) X = self.dense(X) return X/X.norm(dim=-1).unsqueeze(-1).repeat((1, 1, 4)) rnn_jax = RNN_jax() rnn_torch = RNN_torch(batch_size, hidden_size, in_features, out_features) Xj = jax.random.normal(next(key), (seq_length, batch_size, in_features)) Yj = jax.random.normal(next(key), (seq_length, batch_size, out_features)) Xt = torch.from_numpy(np.array(Xj)) Yt = torch.from_numpy(np.array(Yj)) initial_carry_gru1 = jnp.zeros((batch_size, hidden_size)) initial_carry_gru2 = jnp.zeros((batch_size, hidden_size)) params = rnn_jax.init(next(key), Xj[0], initial_carry_gru1, initial_carry_gru2) def forward(params, X): carry_gru1, carry_gru2 = initial_carry_gru1, initial_carry_gru2 Yhat = [] for x in X: # x.shape = (batch_size, in_features) yhat, carry_gru1, carry_gru2 = rnn_jax.apply(params, x, carry_gru1, carry_gru2) Yhat.append(yhat) # y.shape = (batch_size, out_features) #return jnp.concatenate(Y, axis=0) jitted_forward = jax.jit(forward) Results # uncompiled jax version %time forward(params, Xj) CPU times: user 7min 17s, sys: 8.18 s, total: 7min 25s Wall time: 7min 17s # time for compiling %time jitted_forward(params, Xj) CPU times: user 8min 9s, sys: 4.46 s, total: 8min 13s Wall time: 8min 12s # compiled jax version %timeit jitted_forward(params, Xj) The slowest run took 204.20 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 5: 115 µs per loop # torch version %timeit lambda: rnn_torch(Xt) 10000000 loops, best of 5: 65.7 ns per loop Questions Why is my Jax-implementation so slow? What am i doing wrong? Also, why is compiling taking so long? The sequence is not that long.. Thank you :) | The reason the JAX code compiles slowly is that during JIT compilation JAX unrolls loops. So in terms of XLA compilation, your function is actually very large: you call rnn_jax.apply() 1000 times, and compilation times tend to be roughly quadratic in the number of statements. By contrast, your pytorch function uses no Python loops, and so under the hood it is relying on vectorized operations that run much faster. 
Any time you use a for loop over data in Python, a good bet is that your code will be slow: this is true whether you're using JAX, torch, numpy, pandas, etc. I'd suggest finding an approach to the problem in JAX that relies on vectorized operations rather than relying on slow Python looping. | 5 | 1 |
69,765,183 | 2021-10-29 | https://stackoverflow.com/questions/69765183/why-does-starred-assignment-produce-lists-and-not-tuples | In python, I can write something like this: some_list = [(1, 2, 3), (3, 2, 1)] for i, *args in some_list: print(args) I will get the next output: [2, 3] [2, 1] When we use *args as function arguments, it is unpacked into a tuple. Why do we receive a list in this situation? | It is just a design decision. Making it a tuple was debated in the PEP 3132, but rejected on usability grounds: Make the starred target a tuple instead of a list. This would be consistent with a function's *args, but make further processing of the result harder. Simlarly, making it of the same type as the iterable on the rhs of the assignment, was rejected: Try to give the starred target the same type as the source iterable, for example, b in a, *b = 'hello' would be assigned the string 'ello'. This may seem nice, but is impossible to get right consistently with all iterables. The very example of yours is listed in the same PEP under specification. Some reasoning is found in the mailing list of that debate. When dealing with an iterator, you don't know the length in advance, so the only way to get a tuple would be to produce a list first and then create a tuple from it. | 11 | 11 |
69,763,127 | 2021-10-29 | https://stackoverflow.com/questions/69763127/django-rest-api-accept-list-instead-of-dictionary-in-post-request | I am trying to consume data from a callback API that sends the POST request in this format: [ { "key1": "asd", "key2": "123" } ] However my API currently only works when it is sent like this: { "key1": "asd", "key2": "123" } serializers.py: class RawIncomingDataSerializer(serializers.ModelSerializer): class Meta: model = RawIncomingData fields = '__all__' views.py: class RawIncomingDataViewSet(viewsets.ModelViewSet): queryset = RawIncomingData.objects.all() serializer_class = RawIncomingDataSerializer There will only ever be one object in the post data, so I am looking for a simple work around without having to rewrite my serializer to interpret multiple objects in one post request. | In that case you can override create and explicitly specify many=True in the get_serializer call: class RawIncomingDataViewSet(viewsets.ModelViewSet): ... def create(self, request, *args, **kwargs): serializer = self.get_serializer(data=request.data, many=True) serializer.is_valid(raise_exception=True) self.perform_create(serializer) headers = self.get_success_headers(serializer.data) return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers) | 6 | 6 |
69,741,177 | 2021-10-27 | https://stackoverflow.com/questions/69741177/run-multiple-async-loops-in-separate-processes-within-a-main-async-app | Ok so this is a bit convoluted but I have a async class with a lot of async code. I wish to parallelize a task inside that class and I want to spawn multiple processes to run a blocking task and also within each of this processes I want to create an asyncio loop to handle various subtasks. SO I short of managed to do this with a ThreadPollExecutor but when I try to use a ProcessPoolExecutor I get a Can't pickle local object error. This is a simplified version of my code that runs with ThreadPoolExecutor. How can this be parallelized with ProcessPoolExecutor? import asyncio import time from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor class MyClass: def __init__(self) -> None: self.event_loop = None # self.pool_executor = ProcessPoolExecutor(max_workers=8) self.pool_executor = ThreadPoolExecutor(max_workers=8) self.words = ["one", "two", "three", "four", "five"] self.multiplier = int(2) async def subtask(self, letter: str): await asyncio.sleep(1) return letter * self.multiplier async def task_gatherer(self, subtasks: list): return await asyncio.gather(*subtasks) def blocking_task(self, word: str): time.sleep(1) subtasks = [self.subtask(letter) for letter in word] result = asyncio.run(self.task_gatherer(subtasks)) return result async def master_method(self): self.event_loop = asyncio.get_running_loop() master_tasks = [ self.event_loop.run_in_executor( self.pool_executor, self.blocking_task, word, ) for word in self.words ] results = await asyncio.gather(*master_tasks) print(results) if __name__ == "__main__": my_class = MyClass() asyncio.run(my_class.master_method()) | This is a very good question. Both the problem and the solution are quite interesting. The Problem One difference between multithreading and multiprocessing is how memory is handled. Threads share a memory space. Processes do not (in general, see below). Objects are passed to a ThreadPoolExecutor simply by reference. There is no need to create new objects. But a ProcessPoolExecutor lives in a separate memory space. To pass objects to it, the implementation pickles the objects and unpickles them again on the other side. This detail is often important. Look carefully at the arguments to blocking_task in the original question. I don't mean word - I mean the first argument: self. The one that's always there. We've seen it a million times and hardly even think about it. To execute the function blocking_task, a value is required for the argument named "self." To run this function in a ProcessPoolExecutor, "self" must get pickled and unpickled. Now look at some of the member objects of "self": there's an event loop and also the executor itself. Neither of which is pickleable. That's the problem. There is no way we can run that function, as is, in another Process. Admittedly, the traceback message "Cannot pickle local object" leaves a lot to be desired. So does the documentation. But it actually makes total sense that the program works with a ThreadPool but not with a ProcessPool. Note: There are mechanisms for sharing ctypes objects between Processes. However, as far as I'm aware, there is no way to share Python objects directly. That's why the pickle/unpickle mechanism is used. The Solution Refactor MyClass to separate the data from the multiprocessing framework. I created a second class, MyTask, which can be pickled and unpickled. 
I moved a few of the functions from MyClass into it. Nothing of importance has been modified from the original listing - just rearranged. The script runs successfully with both ProcessPoolExecutor and ThreadPoolExecutor. import asyncio import time # from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ProcessPoolExecutor # Refactored MyClass to break out MyTask class MyTask: def __init__(self): self.multiplier = 2 async def subtask(self, letter: str): await asyncio.sleep(1) return letter * self.multiplier async def task_gatherer(self, subtasks: list): return await asyncio.gather(*subtasks) def blocking_task(self, word: str): time.sleep(1) subtasks = [self.subtask(letter) for letter in word] result = asyncio.run(self.task_gatherer(subtasks)) return result class MyClass: def __init__(self): self.task = MyTask() self.event_loop: asyncio.AbstractEventLoop = None self.pool_executor = ProcessPoolExecutor(max_workers=8) # self.pool_executor = ThreadPoolExecutor(max_workers=8) self.words = ["one", "two", "three", "four", "five"] async def master_method(self): self.event_loop = asyncio.get_running_loop() master_tasks = [ self.event_loop.run_in_executor( self.pool_executor, self.task.blocking_task, word, ) for word in self.words ] results = await asyncio.gather(*master_tasks) print(results) if __name__ == "__main__": my_class = MyClass() asyncio.run(my_class.master_method()) | 5 | 9 |
69,759,506 | 2021-10-28 | https://stackoverflow.com/questions/69759506/is-it-possible-to-construct-a-shebang-that-works-for-both-python-2-and-3 | I need to write a Python script that's compatible with both Python 2 and Python 3, can be called directly from a shell (meaning it has a shebang), and can run on systems that have either Python version installed, but possibly not both. Normally I can write a shebang for Python 2 like so: #!/usr/bin/env python and for Python 3: #!/usr/bin/env python3 But both of these will fail if their corresponding Python version is not installed, since as far as I'm aware, systems that have Python 3 but not Python 2 do not alias or symlink python to python3. (Do correct me if I'm wrong.) So is there a way to write a Python script with a shebang that will execute the script using either Python 2 or Python 3 so long as one of them is installed? This is mainly intended to solve the removal of Python 2 from upcoming releases of macOS, (Note: I originally wrote that under the mistaken understanding that Python 2 would be replaced with Python 3 in macOS, but in reality Apple is removing Python completely.) but can apply to Linux as well. | Realistically I would just specify python3 and make that a prerequisite. However, it's technically possible to trick a Python file into being its own shell script wrapper: #!/bin/sh ''':' for name in python3 python2 python do type "$name" > /dev/null 2>&1 && exec "$name" "$0" "$@" done echo >&2 "Please install python" exit 1 ':''' print("Hello from Python") | 8 | 10 |
69,751,853 | 2021-10-28 | https://stackoverflow.com/questions/69751853/attributeerror-api-object-has-no-attribute-followers-ids | I am trying to extract a list of followers from a particular user using Tweepy. However, I ran into an error saying that AttributeError: 'API' object has no attribute 'followers_ids'. This code, however, runs smoothly on another machine but not on mine. I tried to reinstall Tweepy but nothing changed. Please help T.T import os import sys import json import time import math from tweepy import Cursor import tweepy from tweepy import OAuthHandler import datetime auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, wait_on_rate_limit=True) MAX_FRIENDS = 10000 def paginate(items, n): """Generate n-sized chunks from items""" for i in range(0, len(items), n): yield items[i:i+n] def get_followers(screen_name): # get followers for a given user fname = "users/{}/followers.json".format(screen_name) max_pages = math.ceil(MAX_FRIENDS / 5000) with open(fname, 'w') as f: for followers in Cursor(api.followers_ids, screen_name=screen_name).pages(max_pages): for chunk in paginate(followers, 100): users = api.lookup_users(user_ids=chunk) for user in users: f.write(json.dumps([user.id, user.screen_name, user.location, str(user.created_at)], sort_keys=True)+"\n") if len(followers) == 5000: print("More results available. Sleeping for 60 seconds to avoid rate limit") time.sleep(60) print("task completed for " + screen_name) | Tweepy v4.0.0 renamed API.followers_ids to API.get_follower_ids. You're likely using an older version of Tweepy on the other machine. | 4 | 9 |
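Applying that rename to the question's pagination loop might look like the following sketch, assuming Tweepy >= 4.0 and the authenticated `api` object defined in the question.

```python
import tweepy

# followers_ids (Tweepy 3.x) is now get_follower_ids (Tweepy 4.x);
# Cursor pagination works the same way as before.
def iter_follower_id_pages(api, screen_name, max_pages):
    for page in tweepy.Cursor(api.get_follower_ids, screen_name=screen_name).pages(max_pages):
        yield page
```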
69,738,316 | 2021-10-27 | https://stackoverflow.com/questions/69738316/what-is-the-difference-between-poetry-run-black-myscript-py-and-black-myscrip | Based on the poetry docs: Likewise if you have command line tools such as pytest or black you can run them using poetry run pytest The suggested way to use black is: poetry run black myscript.py However, I do not notice any difference in behaviour if I just use black myscript.py What is the difference between these two methods? | It allows you to run black (or whichever command comes after run) installed in your virtual environment without needing to activate your virtual environment first. The relevant note is in the poetry run docs (emphasis mine): The run command executes the given command inside the project’s virtualenv. Let's say you have this poetry-demo project with a main.py, and you installed black: poetry-demo$ ls README.rst main.py poetry.lock poetry_demo pyproject.toml tests poetry-demo$ poetry add black The following packages are already present in the pyproject.toml and will be skipped: • black ... If you don't activate your virtual environment first (i.e. poetry shell) and if you have no black installed anywhere else on your system, simply doing black file.py would fail: poetry-demo$ which black poetry-demo$ black main.py -bash: black: command not found But, with poetry run, even without activating your virtual environment, you can run black: poetry-demo$ poetry run black main.py All done! ✨ 🍰 ✨ 1 file left unchanged. The source of your confusion is probably because you already have your virtual environment activated, so then there really is no difference: poetry-demo$ poetry shell Spawning shell within /path/to/virtualenvs/poetry-demo-hCA44HQ0-py3.8 poetry-demo$ . /path/to/virtualenvs/poetry-demo-hCA44HQ0-py3.8/bin/activate (poetry-demo-hCA44HQ0-py3.8) poetry-demo$ black main.py All done! ✨ 🍰 ✨ 1 file left unchanged. (poetry-demo-hCA44HQ0-py3.8) poetry-demo$ poetry run black main.py All done! ✨ 🍰 ✨ 1 file left unchanged. | 8 | 12 |
69,747,328 | 2021-10-28 | https://stackoverflow.com/questions/69747328/pyqt6-qt-module-enums-raise-attributeerror | I need to translate the following code from PyQt5 (It works there) to PyQt6: self.setWindowFlags(Qt.FramelessWindowHint) This is the error: AttributeError: type object 'Qt' has no attribute 'FramelessWindowHint' I've already tried this: self.setWindowFlags(Qt.WindowFlags.FramelessWindowHint) It says: AttributeError: type object 'Qt' has no attribute 'WindowFlags' | That flag now lives here: QtCore.Qt.WindowType.FramelessWindowHint | 7 | 12 |
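A minimal, self-contained sketch of that enum path in use (the widget and window size are illustrative only):

```python
from PyQt6.QtCore import Qt
from PyQt6.QtWidgets import QApplication, QWidget

app = QApplication([])
window = QWidget()
# PyQt6 moved most Qt namespace enums into scoped enums such as Qt.WindowType
window.setWindowFlags(Qt.WindowType.FramelessWindowHint)
window.resize(300, 200)
window.show()
app.exec()
```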
69,746,102 | 2021-10-27 | https://stackoverflow.com/questions/69746102/python-socketio-keyerror-session-is-disconnected | On a small Flask webserver running on a RaspberryPi with about 10-20 clients, we periodically get this error: Error on request: Traceback (most recent call last): File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/werkzeug/serving.py", line 270, in run_wsgi execute(self.server.app) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/werkzeug/serving.py", line 258, in execute application_iter = app(environ, start_response) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__ return self.wsgi_app(environ, start_response) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/flask_socketio/__init__.py", line 43, in __call__ start_response) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/middleware.py", line 47, in __call__ return self.engineio_app.handle_request(environ, start_response) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/socketio/server.py", line 360, in handle_request return self.eio.handle_request(environ, start_response) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/server.py", line 291, in handle_request socket = self._get_socket(sid) File "/home/pi/3D_printer_control/env/lib/python3.7/site-packages/engineio/server.py", line 427, in _get_socket raise KeyError('Session is disconnected') KeyError: 'Session is disconnected' The error is generated automatically from inside python-socketio. What does this error really mean and how can I prevent or suppress it? | As far as I can tell, this usually means the server can't keep up with supplying data to all of the clients. Some possible mitigation techniques include disconnecting inactive clients, reducing the amount of data sent where possible, sending live data in larger chunks, or upgrading the server. If you need a lot of data throughput, there may be also be a better option than socketIO. I have been able to reproduce it by setting a really high ping rate and low timeout in the socketIO constructor: from flask_socketio import SocketIO socketio = SocketIO(engineio_logger=True, ping_timeout=5, ping_interval=5) This means the server has to do a lot of messaging to all of the clients and they don't have long to respond. I then open around 10 clients and I start to see the KeyError. Further debugging of our server found a process that was posting lots of live data which ran fine with only a few clients but starts to issue the occasional KeyError once I get up to about a dozen. | 8 | 7 |
69,733,618 | 2021-10-27 | https://stackoverflow.com/questions/69733618/sklearn-tree-plot-tree-show-returns-chunk-of-text-instead-of-visualised-tree | I'm trying to show a tree visualisation using plot_tree, but it shows a chunk of text instead: from sklearn.tree import plot_tree plot_tree(t) (where t is an instance of DecisionTreeClassifier) This is the output: [Text(464.99999999999994, 831.6, 'X[3] <= 0.8\nentropy = 1.581\nsamples = 120\nvalue = [39, 37, 44]'), Text(393.46153846153845, 646.8, 'entropy = 0.0\nsamples = 39\nvalue = [39, 0, 0]'), and so on and so forth. How do I make it show the visual tree instead? I'm using Jupyter 6.4.1 and I already imported matplotlib earlier in the code. Thanks! | In my case, it works with a simple "show": plot_tree(t) plt.show() | 7 | 7 |
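A self-contained version of the fix; the question's classifier `t` is not shown, so this trains a small one on the iris dataset purely for illustration.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

X, y = load_iris(return_X_y=True)
t = DecisionTreeClassifier(max_depth=3).fit(X, y)

plt.figure(figsize=(10, 6))  # a larger figure keeps the node text readable
plot_tree(t, filled=True)
plt.show()                   # without this, Jupyter echoes the returned Text objects instead
```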
69,680,566 | 2021-10-22 | https://stackoverflow.com/questions/69680566/environment-variable-error-when-running-python-pyspark-script | Is there an easy way to fix this error: Missing Python executable 'python3', defaulting to 'C:\Users\user1\Anaconda3\Lib\site-packages\pyspark\bin\..' for SPARK_HOME environment variable. Please install Python or specify the correct Python executable in PYSPARK_DRIVER_PYTHON or PYSPARK_PYTHON environment variable to detect SPARK_HOME safely. Would I have to modify the PATH system variable? Or export/create the environment variables PYSPARK_DRIVER_PYTHON and PYSPARK_PYTHON? I have Python 3.8.8. | You need to add an environment variable called SPARK_HOME: this variable contains the path to the installed pyspark library. In my case, pyspark is installed under my home directory, so this is the content of the variable: SPARK_HOME=/home/zied/.local/lib/python3.8/site-packages/pyspark You also need another variable called PYSPARK_PYTHON, which holds the python version you are using, like this: PYSPARK_PYTHON=python3.8 EDIT: for Windows use PYSPARK_PYTHON=python | 6 | 7 |
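If editing system environment variables is inconvenient, the same variables can also be set from Python before Spark starts. A hedged sketch follows; the commented path and the app name are placeholders, not part of the original answer.

```python
import os

os.environ["PYSPARK_PYTHON"] = "python"          # "python3" on Linux/macOS, per the answer
os.environ["PYSPARK_DRIVER_PYTHON"] = "python"
# os.environ["SPARK_HOME"] = r"C:\Users\user1\Anaconda3\Lib\site-packages\pyspark"  # only if detection still fails

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()
print(spark.version)
```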
69,687,794 | 2021-10-23 | https://stackoverflow.com/questions/69687794/unable-to-manually-load-cifar10-dataset | First, I tried to load using: (X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data() But it gave an error: Exception: URL fetch failure on https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz: None -- [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1125) So I manually downloaded the dataset and put it in C:\Users\SAHAN\.keras\datasets and renamed it to cifar-10-batches-py.tar.gz. But then it gives an error: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\SAHAN\\.keras\\datasets\\cifar-10-batches-py.tar.gz' How can I load this dataset? | I was having a similar CERTIFICATE_VERIFY_FAILED error downloading CIFAR-10. Putting this in my python file worked: import ssl ssl._create_default_https_context = ssl._create_unverified_context Reference: https://programmerah.com/python-error-certificate-verify-failed-certificate-has-expired-40374/ | 21 | 50 |
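Putting the workaround together with the original load call — the SSL override has to run before the download is attempted. Note that it disables certificate verification, so it is a workaround rather than a proper fix.

```python
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

from tensorflow.keras import datasets

(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()
print(X_train.shape, X_test.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)
```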
69,647,590 | 2021-10-20 | https://stackoverflow.com/questions/69647590/specifying-package-data-in-pyproject-toml | I use setuptools. In setup.cfg, I can define [options.package_data] myModule = '*.csv' to make sure that data will be installed for my users. Can I achieve the same with pyproject.toml instead? | Starting in setuptools version 61.0.0 there is beta support for this feature. See the documentation https://setuptools.pypa.io/en/stable/userguide/datafiles.html The syntax is [tool.setuptools.package-data] myModule = ["*.csv"] Update 2024-01-12: This is no longer in beta. A handful of different mechanisms can enable data inclusion. One mechanism is to ask for automatic discovery based on git. It should suffice to set the setuptools version and enable the -scm plugin. Now all files tracked by git will be considered data files. The relevant section is... [build-system] requires = [ "setuptools>=60", "setuptools-scm>=8.0"] I produced a complete example and posted it on GitHub: https://github.com/el-hult/mypkg Read the full documentation for more options. | 52 | 51 |
69,728,330 | 2021-10-26 | https://stackoverflow.com/questions/69728330/type-hinting-pandas-dataframe-with-specific-columns | Suppose I have the following function: def foo(df: pd.DataFrame) -> pd.DataFrame: x = df["x"] y = df["y"] df["xy"] = x * y return df Is there a way I could hint that my function is accepting a data frame that must have the "x" and "y" column and that it will return a data frame with the "x", "y" and "xy" columns, instead of just a general data frame? | Okay, so, I'm not sure if this is the correct way of implementing it, but seems to work for me. If you see any mistakes or alternatives let me know and I can edit the response but my solution was basically creating a new class and implementing the __class_getitem__ method as seen in the Pep 560, this was my final code: from typing import List import pandas as pd GenericAlias = type(List[str]) class MyDataFrame(pd.DataFrame): __class_getitem__ = classmethod(GenericAlias) def foo(df: MyDataFrame[["x", "y"]]) -> MyDataFrame[["x", "y", "xy"]]: df["xy"] = df["x"] * df["y"] return df Edit The Pandera library is a better alternative that I did not know about when I posted the question. import pandas as pd from pandera.typing import Series, DataFrame class InputSchema(pa.DataFrameModel): x: Series[float] y: Series[float] class ReturnSchema(pa.DataFrameModel): x: Series[float] y: Series[float] xy: Series[float] def foo(df: DataFrame[InputSchema]) -> DataFrame[ReturnSchema]: x = df['x'] y = df['y'] df['xy'] = x * y return df | 10 | 6 |
69,710,333 | 2021-10-25 | https://stackoverflow.com/questions/69710333/is-there-a-way-to-match-inequalities-in-python-%e2%89%a5-3-10 | The new structural pattern matching feature in Python 3.10 is a very welcome feature. Is there a way to match inequalities using this statement? Prototype example: match a: case < 42: print('Less') case == 42: print('The answer') case > 42: print('Greater') | You can use guards: match a: case _ if a < 42: print('Less') case _ if a == 42: print('The answer') case _ if a > 42: print('Greater') Another option, without guards, using pure pattern matching: match [a < 42, a == 42]: case [True, False]: print('Less') case [_, True]: print('The answer') case [False, False]: print('Greater') | 52 | 71 |
69,686,925 | 2021-10-23 | https://stackoverflow.com/questions/69686925/i-had-a-problem-with-python-library-pikepdf | When trying to install the python moduel pikepdf using pip, this error pops up: Building wheels for collected packages: pikepdf Building wheel for pikepdf (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for pikepdf (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [54 lines of output] ... creating build\temp.win-amd64-cpython-310\Release\src\qpdf "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DPOINTERHOLDER_TRANSITION=4 -IC:\Users\ME\AppData\Local\Temp\pip-build-env-dpc9ltd5\overlay\Lib\site-packages\pybind11\include "-IC:\Program Files\Python310\include" "-IC:\Program Files\Python310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc/qpdf\annotation.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src/qpdf\annotation.obj /EHsc /bigobj /std:c++17 annotation.cpp src/qpdf\annotation.cpp(4): fatal error C1083: Cannot open include file: 'qpdf/Constants.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Professional\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pikepdf Failed to build pikepdf ERROR: Could not build wheels for pikepdf, which is required to install pyproject.toml-based projects Creating the wheel fails due to a missing header file: src/qpdf\annotation.cpp(4): fatal error C1083: Cannot open include file: 'qpdf/Constants.h': No such file or directory This is for pikepdf v6.0.0. My previous version was v4.0.1.post1, which worked fine. Is this something that can be remedied from my side? | Solution for Macbook, M1 brew install qpdf After it use pip install pikepdf Solution from https://github.com/pikepdf/pikepdf/issues/274 | 16 | 22 |
69,711,606 | 2021-10-25 | https://stackoverflow.com/questions/69711606/how-to-install-a-package-using-pip-in-editable-mode-with-pyproject-toml | When a project is specified only via pyproject.toml (i.e. no setup.{py,cfg} files), how can it be installed in editable mode via pip (i.e. python -m pip install -e .)? I tried both setuptools and poetry for the build system, but neither worked: [build-system] requires = ["setuptools", "wheel"] build-backend = "setuptools.build_meta" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" I get the same error for both build systems: ERROR: Project file:///tmp/demo has a 'pyproject.toml' and its build backend is missing the 'build_editable' hook. Since it does not have a 'setup.py' nor a 'setup.cfg', it cannot be installed in editable mode. Consider using a build backend that supports PEP 660. I'm using this inside a conda environment, the following is my version of setuptools and pip: $ conda list | grep setuptools setuptools 58.3.0 pypi_0 pypi $ python -m pip --version pip 21.3.1 | PEP 660 – Editable installs for pyproject.toml based builds defines how to build projects that only use pyproject.toml. Build tools must implement PEP 660 for editable installs to work. You need a front-end (such as pip ≥ 21.3) and a backend. The statuses of some popular backends are: Setuptools implements PEP 660 as of version 64. Flit implements PEP 660 as of version 3.4. Poetry implements PEP 660 as of version 1.0.8. Note: To be able to do an editable installation to your user site (pip install -e --user), you need a system installed setuptools v62.0.0 or newer. | 113 | 101 |
69,642,889 | 2021-10-20 | https://stackoverflow.com/questions/69642889/how-to-use-multiple-cases-in-match-switch-in-other-languages-cases-in-python-3 | I am trying to use multiple cases in a function similar to the one shown below so that I can be able to execute multiple cases using match cases in python 3.10 def sayHi(name): match name: case ['Egide', 'Eric']: return f"Hi Mr {name}" case 'Egidia': return f"Hi Ms {name}" print(sayHi('Egide')) This is just returning None instead of the message, even if I remove square brackets. | According to What’s New In Python 3.10, PEP 636, and the docs, you use a | between patterns: case 'Egide' | 'Eric': | 108 | 206 |
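The question's function with that fix applied, plus a wildcard fallback so unmatched names do not return None:

```python
def sayHi(name):
    match name:
        case 'Egide' | 'Eric':       # one case arm matching several literals
            return f"Hi Mr {name}"
        case 'Egidia':
            return f"Hi Ms {name}"
        case _:
            return f"Hi {name}"

print(sayHi('Egide'))   # Hi Mr Egide
print(sayHi('Egidia'))  # Hi Ms Egidia
```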
69,711,886 | 2021-10-25 | https://stackoverflow.com/questions/69711886/python-dataclasses-inheritance-and-default-values | Given the following hierarchy of python dataclasses: @dataclass class A: a: str aa: str @dataclass class B(A): b: str @dataclass class C(A): c: str @dataclass class D(B, C): d: str Is there any general method that allows you to insert an attribute with a default value in this inheritance hierarchy? For instance, the following produces a Non-default argument(s) follows default argument(s) defined in 'B' error for class D. @dataclass class A: a: str aa: str @dataclass class B(A): b: str = 'b' # <-- default value @dataclass class C(A): c: str @dataclass class D(B, C): d: str I suppose one solution could be to give all attributes in the hierarchy a default value of None, and add a __post_init__() validation which raises some exception if any attributes are None, but this doesn't seem right and you would need it in every dataclass in the hierarchy. Class inheritance in Python 3.7 dataclasses proposes a solution where you create two separate branches of the inheritance hierarchy; one for default values and one without. The drawback here is that the hierarchy quickly becomes mangled with a lot of classes, but if there are no better options, I suppose this is as good as it gets. Edit I've come up with a meta_dataclass which does some hackery to at least solve my problem. I'm sure it breaks all kinds of rules, but perhaps someone can improve upon it. It allows a hierarchy of meta_dataclasses where any attribute in the hierarchy can have a default value. It is motivated by the solution presented in Class inheritance in Python 3.7 dataclasses. It effectively creates two inheritance branches, one with default values and one without, but the creation of the extra classes required to achieve this is encapsulated in the mete_dataclass decorator. A current drawback is that a meta_dataclass can only inherit from other meta_dataclasses. Full code: from dataclasses import make_dataclass, MISSING, field class Dataclass: pass class Declarations: pass class Definitions: pass def meta_dataclass(cls=None, /, *, init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False): def wrap(cls): declaration_bases = [] definition_bases = [] for base in cls.__bases__: if issubclass(base, Dataclass): declaration_bases += [c for c in base.__bases__ if issubclass(c, Declarations)] definition_bases += [c for c in base.__bases__ if issubclass(c, Definitions)] elif len(cls.__bases__) == 1 and base != object: raise ValueError(f'meta_dataclasses can only inherit from other meta_dataclasses. 
' f'{cls} inherits from {base}') declaration_bases.append(Declarations) definition_bases.append(Definitions) fields = [] if hasattr(cls, '__annotations__'): for field_name, field_type in cls.__annotations__.items(): f = field(default=cls.__dict__[field_name]) if field_name in cls.__dict__ else field() fields.append((field_name, field_type, f)) declarations = make_dataclass(cls_name=f'{cls.__name__}_Declarations', bases=tuple(declaration_bases), fields=[f for f in fields if isinstance(f[2].default, type(MISSING))], init=init, repr=repr, eq=eq, order=order, unsafe_hash=unsafe_hash, frozen=frozen) definitions = make_dataclass(cls_name=f'{cls.__name__}_Definitions', bases=tuple(definition_bases), fields=[f for f in fields if not isinstance(f[2].default, type(MISSING))], init=init, repr=repr, eq=eq, order=order, unsafe_hash=unsafe_hash, frozen=frozen) cls_wrapper = make_dataclass(cls_name=cls.__name__, fields=[], bases=(Dataclass, definitions, declarations), namespace=cls.__dict__, init=init, repr=repr, eq=eq, order=order, unsafe_hash=unsafe_hash, frozen=frozen) return cls_wrapper if cls is None: return wrap else: return wrap(cls) Example: @meta_dataclass class A: a: str aa: str @meta_dataclass class B(A): b: str = 'b' @meta_dataclass class C(A): c: str @meta_dataclass class D(B, C): d: str >>> help(D) Help on class D in module types: class D(__main__.Dataclass, D_Definitions, D_Declarations) | D(a: str, aa: str, c: str, d: str, b: str = 'b') -> None | | D(a: str, aa: str, c: str, d: str, b: str = 'b') | | Method resolution order: | D | __main__.Dataclass | D_Definitions | B_Definitions | C_Definitions | A_Definitions | __main__.Definitions | D_Declarations | B_Declarations | C_Declarations | A_Declarations | __main__.Declarations | builtins.object Illustration: The below image illustrates what the @meta_dataclass decorator does to a single class. | This is a well-known issue for data classes, there are several workarounds but this is solved very elegantly in Python 3.10, here is the PR that solved the issue 43532. It would work the following way: from dataclasses import dataclass @dataclass(kw_only=True) class Base: type: str counter: int = 0 @dataclass(kw_only=True) class Foo(Base): id: int This is deeply explained at: https://medium.com/@aniscampos/python-dataclass-inheritance-finally-686eaf60fbb5 For the original question, C and D are the minimum required classes to have the kw_only flag set to True: @dataclass class A: a: str aa: str @dataclass class B(A): b: str = 'b' # <-- default value @dataclass(kw_only=True) class C(A): c: str @dataclass(kw_only=True) class D(B, C): d: str Although all of them could have the flag set to True for this matter. | 20 | 22 |
69,670,597 | 2021-10-22 | https://stackoverflow.com/questions/69670597/how-do-i-mock-a-file-open-for-a-specific-path-in-python | So I know that in my unit test I can mock a context manager open(), i.e.: with open('file_path', 'r') as stats: mocked with with mock.patch('builtins.open', mock.mock_open(read_data=mock_json)): but is there a way for me to only mock it for a specific file path? Or maybe some other way to ensure that the context manager gets called with the correct path in a unit test? | To mock open only for a specific path, you have to provide your own mock object that handles open differently, depending on the path. Assuming we have some function: def do_open(path): with open(path, "r") as f: return f.read() where open shall be mocked to return a file with the content "bar" if path is "foo", but otherwise just work as usual, you could do something like this: from unittest import mock from my_module.do_open import do_open builtin_open = open # save the unpatched version def mock_open(*args, **kwargs): if args[0] == "foo": # mocked open for path "foo" return mock.mock_open(read_data="bar")(*args, **kwargs) # unpatched version for every other path return builtin_open(*args, **kwargs) @mock.patch("builtins.open", mock_open) def test_open(): assert do_open("foo") == "bar" assert do_open(__file__) != "bar" If you don't want to save the original open in a global variable, you could also wrap that into a class: class MockOpen: builtin_open = open def open(self, *args, **kwargs): if args[0] == "foo": return mock.mock_open(read_data="bar")(*args, **kwargs) return self.builtin_open(*args, **kwargs) @mock.patch("builtins.open", MockOpen().open) def test_open(): ... | 5 | 11 |
69,702,485 | 2021-10-25 | https://stackoverflow.com/questions/69702485/numba-and-numpy-concatenate | I am trying to speed up some code using numba, but it is tough sledding. For example, the following function does not numba-fy, @jit(nopython=True) def returns(Ft, x, delta): T = len(x) rets = Ft[0:T - 1] * x[1:T] - delta * np.abs(Ft[1:T] - Ft[0:T - 1]) return np.concatenate([[0], rets]) because numba cannot find the signature of np.concatenate. Is there a canonical fix for this? | A bit late, but I hope still useful. Since you asked for the "canonical fix", I would like to explain why concatenate is a bad idea when working with arrays and especially if you indicate that you want to remove bottlenecks and therefore use the numba jit. An array is a continuous sequence of bytes in memory (numpy knows some tricks to change the order without copying by creating views, but that is another topic, see https://towardsdatascience.com/advanced-numpy-master-stride-tricks-with-25-illustrated-exercises-923a9393ab20). If you want to prepend the value x to an array of N elements, you will need to create a new array with N+1 elements, set the first value to x and copy the remaining part. As a side note, a similar argument holds for prepending items to a python list, which is the reason why collections.deque exists. Now, in your jit decorated function, you could hope that the compiler understands what you want to do, but writing compilers that always understands what you are trying to do is nearly impossible. Therefore, better be kind to the compiler and help out with the memory layout whenever you know the right choice. Thus, IMHO the "canonical fix" to your example code would be something like the following: @jit(nopython=True) def returns(Ft, x, delta): T = len(x) rets = np.empty_like(x) rets[0] = 0 rets[1:T] = Ft[0:T - 1] * x[1:T] - delta * np.abs(Ft[1:T] - Ft[0:T - 1]) return rets In general, I agree with @Aaron's comment, meaning that you should always be as explicit as possible with input types to any function you call within jit decorated functions. In your case, ask yourself as a compiler "what is [[0], rets]?". Thinking in strict types, you see a list containing a list of an integer and an array of floating point (or complex) numbers. That is a challenging mixture of types for a compiler. Should the output become an array of integers or floats? | 7 | 5 |
69,664,189 | 2021-10-21 | https://stackoverflow.com/questions/69664189/pandas-rename-columns-with-method-chaining | I have a dataframe and did some feature engineering and now would like to change the column names. I know how to change them if I do a new assignment but I would like to do it with method chaining. I tried the below (the rename row) but it doesn't work. How could I write it so it works? df = pd.DataFrame({'ID':[1,2,2,3,3,3], 'date': ['2021-10-12','2021-10-16','2021-10-15','2021-10-10','2021-10-19','2021-10-01'], 'location':['up','up','down','up','up','down'], 'code':[False, False, False, True, False, False]}) df = (df .assign(date = lambda x: pd.to_datetime(x.date)) .assign(entries_per_ID = lambda x: x.groupby('ID').ID.transform('size')) .pivot_table(values=['entries_per_ID'], index=['ID','date','code'], columns=['location'], aggfunc=np.max) .reset_index() #.rename(columns=lambda x: dict(zip(x.columns, ['_'.join(col).strip() if col[1]!='' else col[0] for col in x.columns.values]))) ) This here works, but that's not how I would like to write it. df.columns = ['_'.join(col).strip() if col[1]!='' else col[0] for col in df.columns.values ] | Renaming columns in a chain Use set_axis along axis=1: df.set_axis(['foo', 'bar', 'baz'], axis=1) Using groupby, pivot, melt, etc If the new columns depend on some earlier step in the chain, combine set_axis with pipe. For example, to capitalize pivoted columns in a chain: We cannot directly chain set_axis: # does NOT work since df.columns are the original columns, not pivoted columns df.pivot(...).set_axis(df.columns.str.upper(), axis=1)) But we can pipe the pivoted result into set_axis: # does work since we've piped the pivoted df df.pivot(...).pipe(lambda piv: piv.set_axis(piv.columns.str.upper(), axis=1))) # ^ ^ ^ OP's example Since OP has created a pivot_table and wants to conditionally collapse those pivoted MultiIndex, we pipe the pivot_table into the list comprehension: (df.assign(date=pd.to_datetime(df.date)) .assign(entries_per_ID=df.groupby('ID').ID.transform('size')) .pivot_table(index=['ID', 'date', 'code'], columns='location', values='entries_per_ID', aggfunc='max') .reset_index() .pipe(lambda piv: piv.set_axis(['_'.join(col).strip() if col[1] else col[0] for col in piv.columns], axis=1))) # ID date code entries_per_ID_down entries_per_ID_up # 0 1 2021-10-12 False NaN 1.0 # 1 2 2021-10-15 False 2.0 NaN # 2 2 2021-10-16 False NaN 2.0 # 3 3 2021-10-01 False 3.0 NaN # 4 3 2021-10-10 True NaN 3.0 # 5 3 2021-10-19 False NaN 3.0 | 9 | 16 |
69,697,724 | 2021-10-24 | https://stackoverflow.com/questions/69697724/auto-save-adds-two-empty-lines-between-comment-and-function-header-in-python-in | I write code in Python in VS Code. If I add a comment before a function and hit the save button, VS Code adds two empty lines: # comment def MyMethod(): return 0 In settings I see that I use the autopep8 formatter: I wasn't able to find what causes this annoying issue. Maybe I can configure settings somewhere? | A comment documenting function behavior should be placed inside the function, just below the signature. If it is not describing the function (i.e. it is placed outside the function), it should have those blank lines. Don't mess with the language's conventions; it's a very bad idea. Edit, further clarification: """Module-level comment """ def function(): """Function level comment. There are multiple conventions explaining how a comment's body should be formed. """ return 0 | 5 | 3 |
69,687,604 | 2021-10-23 | https://stackoverflow.com/questions/69687604/running-into-an-error-when-trying-to-pip-install-python-docx | I just did a fresh install of windows to clean up my computer, moved everything over to my D drive and installed Python through Windows Store (somehow it defaulted to my C drive, so I left it there because Pycharm was getting confused about its location), now I'm trying to pip install the python-docx module for the first time and I'm stuck. I have a recent version of Microsoft C++ Visual Build Tools installed. Excuse me for any irrelevant information I provided, just wishing to be thorough. Here's what's returning in command: .>pip install python-docx Collecting python-docx Using cached python-docx-0.8.11.tar.gz (5.6 MB) Preparing metadata (setup.py) ... done Collecting lxml>=2.3.2 Using cached lxml-4.6.3.tar.gz (3.2 MB) Preparing metadata (setup.py) ... done Using legacy 'setup.py install' for python-docx, since package 'wheel' is not installed. Using legacy 'setup.py install' for lxml, since package 'wheel' is not installed. Installing collected packages: lxml, python-docx Running setup.py install for lxml ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\cahez\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\cahez\\AppData\\Local\\Temp\\pip-install-8rz9vrlv\\lxml_69e9fa188fd042d6953641882e4b3a17\\setup.py'"'"'; __file__='"'"'C:\\Users\\cahez\\AppData\\Local\\Temp\\pip-install-8rz9vrlv\\lxml_69e9fa188fd042d6953641882e4b3a17\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\cahez\AppData\Local\Temp\pip-record-xpg_v_i_\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\cahez\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\Include\lxml' cwd: C:\Users\cahez\AppData\Local\Temp\pip-install-8rz9vrlv\lxml_69e9fa188fd042d6953641882e4b3a17\ Complete output (76 lines): Building lxml version 4.6.3. Building without Cython. Building against pre-built libxml2 andl libxslt libraries running install C:\Users\cahez\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-3.10 creating build\lib.win-amd64-3.10\lxml copying src\lxml\builder.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\cssselect.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\doctestcompare.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\ElementInclude.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\sax.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\usedoctest.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\_elementpath.py -> build\lib.win-amd64-3.10\lxml copying src\lxml\__init__.py -> build\lib.win-amd64-3.10\lxml creating build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\__init__.py -> build\lib.win-amd64-3.10\lxml\includes creating build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\builder.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\clean.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\defs.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\diff.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\formfill.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\html5parser.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\soupparser.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-3.10\lxml\html copying src\lxml\html\__init__.py -> build\lib.win-amd64-3.10\lxml\html creating build\lib.win-amd64-3.10\lxml\isoschematron copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-3.10\lxml\isoschematron copying src\lxml\etree.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\etree_api.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\lxml.etree.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-3.10\lxml copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\config.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\etree_defs.h -> 
build\lib.win-amd64-3.10\lxml\includes copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-3.10\lxml\includes creating build\lib.win-amd64-3.10\lxml\isoschematron\resources creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\rng copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\rng creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl creating build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-3.10\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\cahez\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\cahez\\AppData\\Local\\Temp\\pip-install-8rz9vrlv\\lxml_69e9fa188fd042d6953641882e4b3a17\\setup.py'"'"'; __file__='"'"'C:\\Users\\cahez\\AppData\\Local\\Temp\\pip-install-8rz9vrlv\\lxml_69e9fa188fd042d6953641882e4b3a17\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\cahez\AppData\Local\Temp\pip-record-xpg_v_i_\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\cahez\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\Include\lxml' Check the logs for full command output. | One of the dependencies for python-docx is lxml. The latest stable version of lxml is 4.6.3, released on March 21, 2021. On PyPI there is no lxml wheel for 3.10, yet. So it try to compile from source and for that Microsoft Visual C++ 14.0 or greater is required, as stated in the error. However you can manually install lxml, before install python-docx. 
Download and install an unofficial binary from Gohlke. Alternatively, you can use pipwin to install it from Gohlke. Note there may still be problems with dependencies for lxml. Of course, you can also downgrade to Python 3.9. EDIT: As of 14 Dec 2021, the latest lxml version 4.7.1 supports Python 3.10 | 5 | 6 |
69,673,807 | 2021-10-22 | https://stackoverflow.com/questions/69673807/python-coding-standard-for-safety-critical-applications | Coming from C/C++ background, I am aware of coding standards that apply for Safety Critical applications (like the classic trio Medical-Automotive-Aerospace) in the context of embedded systems , such as MISRA, SEI CERT, Barr etc. Skipping the question if it should or if it is applicable as a language, I want to create Python applications for embedded systems that -even vaguely- follow some safety standard, but couldn't find any by searching, except from generic Python coding standards (like PEP8) Is there a Python coding guideline that specificallly apply to safety-critical systems ? | Top layer safety standards for "functional safety" like IEC 61508 (industrial), ISO 26262 (automotive) or DO-178 (aerospace) etc come with a software part (for example IEC 61508-3), where they list a number of suitable programming languages. These are exclusively old languages proven in use for a long time, where all flaws and poorly-defined behavior is regarded as well-known and execution can be regarded as predictable. In practice, for the highest safety levels it means that you are pretty much restricted to C with safe subset (MISRA C) or Ada with safe subset (SPARK). A bunch of other old languages like Modula-2, Pascal and Fortran are also mentioned, but the tool support for these in the context of modern safety MCUs is non-existent. As is support for Python for such MCUs. Languages like Python and C++ are not even mentioned for the lowest safety levels, so between the lines they are dismissed as entirely unsuitable. Even less so than pure assembler, which is actually mentioned as something that may used for the lower safety levels. | 6 | 5 |
69,670,125 | 2021-10-22 | https://stackoverflow.com/questions/69670125/how-to-log-raw-http-request-response-in-python-fastapi | We are writing a web service using FastAPI that is going to be hosted in Kubernetes. For auditing purposes, we need to save the raw JSON body of the request/response for specific routes. The body size of both request and response JSON is about 1MB, and preferably, this should not impact the response time. How can we do that? | You may try to customize APIRouter like in FastAPI official documentation: import time from typing import Callable from fastapi import APIRouter, FastAPI, Request, Response from fastapi.routing import APIRoute class TimedRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: before = time.time() response: Response = await original_route_handler(request) duration = time.time() - before response.headers["X-Response-Time"] = str(duration) print(f"route duration: {duration}") print(f"route response: {response}") print(f"route response headers: {response.headers}") return response return custom_route_handler app = FastAPI() router = APIRouter(route_class=TimedRoute) @app.get("/") async def not_timed(): return {"message": "Not timed"} @router.get("/timed") async def timed(): return {"message": "It's the time of my life"} app.include_router(router) | 33 | 3 |
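The accepted answer times the route but does not capture bodies. Below is a sketch of the same custom-route pattern extended toward the audit use case in the question; the print calls stand in for whatever audit sink is used, the `/audited` route is purely illustrative, and `.body` is only available on non-streaming responses.

```python
from typing import Callable
from fastapi import FastAPI, Request, Response, APIRouter
from fastapi.routing import APIRoute

class AuditRoute(APIRoute):
    def get_route_handler(self) -> Callable:
        original_route_handler = super().get_route_handler()

        async def custom_route_handler(request: Request) -> Response:
            req_body = await request.body()            # raw request bytes; Starlette caches them
            response = await original_route_handler(request)
            res_body = getattr(response, "body", b"")  # plain/JSON responses expose .body
            print(request.url.path, req_body, res_body)  # replace with a real audit sink
            return response

        return custom_route_handler

app = FastAPI()
router = APIRouter(route_class=AuditRoute)

@router.post("/audited")
async def audited(payload: dict):
    return {"received": payload}

app.include_router(router)
```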
69,641,363 | 2021-10-20 | https://stackoverflow.com/questions/69641363/how-to-run-fastapi-app-on-multiple-ports | I have a FastAPI application that I am running on port 30000 using Uvicorn programmatically. Now I want to run the same application on port 8443 too. The same application needs to run on both these ports. How can I do this within the Python code? Minimum Reproducible code: from fastapi import FastAPI import uvicorn app = FastAPI() @app.get("/healthcheck/") def healthcheck(): return 'Health - OK' if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=30000) I want to something like if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", ports=[30000,8443]) Explanation: My application will be running on my organizations Azure Kubernetes Service. Apps running on port 30000 are reserved for Internal HTTP traffic and apps running on 8443 are mapped to 443 of Kubernetes Service to be exposed to external traffic. Further Details: I will be creating a Docker Container out of this application and the idea is to include CMD ["python3", "app.py"] at the end to run the application. I am looking for a solution that would either provide a way to change the python code ( uvicorn.run(app, host="0.0.0.0", ports=[30000,8443]) ) or a change to the CMD command in the Dockerfile like This GitHub Issue Comment - gunicorn -k uvicorn.workers.UvicornWorker -w 1 --bind ip1:port1 --bind ip2:port2 --bind ip3:port3 | The following solution worked for me. It runs one gunicorn process in the background and then another process to bind it to two ports. One of them will use HTTP and one can use HTTPS. Dockerfile: FROM python:3.7 WORKDIR /app COPY requirements.txt . RUN pip3 install -r requirements.txt COPY . . ENTRYPOINT ./docker-starter.sh EXPOSE 30000 8443 docker-starter.sh: gunicorn -k uvicorn.workers.UvicornWorker -w 3 -b 0.0.0.0:30000 -t 360 --reload --access-logfile - app:app & gunicorn --access-logfile - -k --ca_certs ca_certs.txt uvicorn.workers.UvicornWorker -w 3 -b 0.0.0.0:8443 -t 360 --reload --access-logfile - app:app The python app can remain minimal: from fastapi import FastAPI import uvicorn app = FastAPI() @app.get("/healthcheck/") def healthcheck(): return 'Health - OK' if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0") | 15 | 3 |
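For completeness, a pure-Python alternative sketch (not the accepted Docker/gunicorn approach): run two uvicorn servers for the same app on different ports in a single asyncio loop. Signal handling with multiple servers in one process may need extra care, so treat this as a starting point rather than a drop-in solution.

```python
import asyncio
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthcheck/")
def healthcheck():
    return "Health - OK"

async def serve_on_ports(ports):
    # One uvicorn Server per port, all serving the same FastAPI app
    servers = [uvicorn.Server(uvicorn.Config(app, host="0.0.0.0", port=p)) for p in ports]
    await asyncio.gather(*(server.serve() for server in servers))

if __name__ == "__main__":
    asyncio.run(serve_on_ports([30000, 8443]))
```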
69,720,476 | 2021-10-26 | https://stackoverflow.com/questions/69720476/how-to-use-pytest-to-simulate-full-reboot | How do I test that my program is robust to unexpected shut-downs? My python code will run on a microcontroller that shuts off unexpectedly. I would like to test each part of the code rebooting unexpectedly and verify that it handles this correctly. Attempt: I tried putting code into its own process, then terminating it early, but this doesn't work because MyClass calls 7zip from the command line which continues even after process dies: import multiprocessing import os def MyClass(multiprocessing.Process): ... def run(): os.system("7z a myfile.7z myfile") process = MyClass() process.start() time.sleep(4) print("terminating early") process.terminate() print("done") What I want: class TestMyClass(unittest.TestCase): def test_MyClass_continuity(self): myclass = MyClass().start() myclass.kill_everything() myclass = MyClass().start() self.assert_everything_worked_as_expected() Is there an easy way to do this? If not, how do you design robust code that could terminate at any point (e.g. testing state machines)? Similar question (unanswered as of 26/10/21): Simulating abnormal termination in pytest Thanks a lot! | Your logic starts a process wrapped within the MyClass object which itself spawns a new process via the os.system call. When you terminate the MyClass process, you kill the parent process but you leave the 7zip process running as orphan. Moreover, the process.terminate method sends a SIGTERM signal to the child process. The child process can intercept said signal and perform some cleanup routines before terminating. This is not ideal if you want to simulate a situation where there is no chance to clean up (a power loss). You most likely want to send a SIGKILL signal instead (on Linux). To kill the parent and child process, you need to address the entire process group. import os import time import signal import multiprocessing class MyClass(multiprocessing.Process): def run(self): # Ping localhost for a limited amount of time os.system("ping -c 12 127.0.0.1") process = MyClass() process.start() time.sleep(4) print("terminating early") # Send SIGKILL signal to the entire process group group_id = os.getpgid(process.pid) os.killpg(group_id, signal.SIGKILL) print("done") The above works only on Unix OSes and not on Windows ones. For Windows, you need to use the psutil module. import os import time import multiprocessing import psutil class MyClass(multiprocessing.Process): def run(self): # Ping localhost for a limited amount of time os.system("ping -c 12 127.0.0.1") def kill_process_group(pid): process = psutil.Process(pid) children = process.children(recursive=True) # First terminate all children for child in children: child.kill() psutil.wait_procs(children) # Then terminate the parent process process.kill() process.wait() process = MyClass() process.start() time.sleep(4) print("terminating early") kill_process_group(process.pid) print("done") | 6 | 2 |
69,660,200 | 2021-10-21 | https://stackoverflow.com/questions/69660200/how-to-render-svg-image-to-png-file-in-python | So I want to render SVG from python code having target resolution WxH (having SVG text as str, like this that I generate dynamically): <svg width="200" height="200" viewBox="0 0 220 220" xmlns="http://www.w3.org/2000/svg"> <filter id="displacementFilter"> <feTurbulence type="turbulence" baseFrequency="0.05" numOctaves="2" result="turbulence"/> <feDisplacementMap in2="turbulence" in="SourceGraphic" scale="50" xChannelSelector="R" yChannelSelector="G"/> </filter> <circle cx="100" cy="100" r="100" style="filter: url(#displacementFilter)"/> </svg> into a png image. How to do such a thing in Python? | there are multiple solutions available for converting svgs to pngs in python, but not all of them will work for your particular use case since you're working with svg filters. solution filter works? alpha channel? call directly from python? cairosvg some* yes yes svglib no no yes inkscape yes yes via subprocess wand yes yes yes * from cairosvg documentation: Only feOffset, feBlend and feFlood filters are supported. note: i've added a solid white background to all the sample images to make them easier to see on a dark background, the originals did have transparent backgrounds unless stated in the table above cairosvg import cairosvg # read svg file -> write png file cairosvg.svg2png(url=input_svg_path, write_to=output_png_path, output_width=width, output_height=height) # read svg file -> png data png_data = cairosvg.svg2png(url=input_svg_path, output_width=width, output_height=height) # svg string -> write png file cairosvg.svg2png(bytestring=svg_str.encode(), write_to=output_png_path, output_width=width, output_height=height) # svg string -> png data png_data = cairosvg.svg2png(bytestring=svg_str.encode(), output_width=width, output_height=height) svglib from svglib.svglib import svg2rlg from reportlab.graphics import renderPM # read svg -> write png renderPM.drawToFile(svg2rlg(input_svg_path), output_png_path, fmt='PNG') inkscape to read a file as input, put the path to the file as the last argument. to use a string as input, add the --pipe argument and pass the string to stdin. to write to a file as output, add the argument --export-filename=+path to output file. to get the output contents directly without writing to a file, use --export-filename=- and it will be sent to stdout instead. full documentation for CLI options here import subprocess inkscape = ... 
# path to inkscape executable # read svg file -> write png file subprocess.run([inkscape, '--export-type=png', f'--export-filename={output_png_path}', f'--export-width={width}', f'--export-height={height}', input_svg_path]) # read svg file -> png data result = subprocess.run([inkscape, '--export-type=png', '--export-filename=-', f'--export-width={width}', f'--export-height={height}', input_svg_path], capture_output=True) # (result.stdout will have the png data) # svg string -> write png file subprocess.run([inkscape, '--export-type=png', f'--export-filename={output_png_path}', f'--export-width={width}', f'--export-height={height}', '--pipe'], input=svg_str.encode()) # svg string -> png data result = subprocess.run([inkscape, '--export-type=png', '--export-filename=-', f'--export-width={width}', f'--export-height={height}', '--pipe'], input=svg_str.encode(), capture_output=True) # (result.stdout will have the png data) wand from wand.image import Image from wand.color import Color with Color('#00000000') as bgcolor,\ # to read input from a file: Image(filename=input_svg_path, width=width, height=height, background=bgcolor) as img: # or, to use input from a string: Image(blob=svg_str.encode(), format='svg', width=width, height=height, background=bgcolor) as img: # to save output to a file: with img.convert('png') as output_img: output_img.save(filename=output_png_path) # or, to get the output data in a variable: png_data = img.make_blob(format='png') | 16 | 34 |
69,725,429 | 2021-10-26 | https://stackoverflow.com/questions/69725429/filter-enum-in-list-of-enums-sqlalchemy | I have the following enum in Python: class Status(enum.Enum): produced = 1 consumed = 2 success = 3 failed = 4 and the following SQLAlchemy model class Message(Base): __tablename__ = 'table' id = Column(Integer, primary_key=True) name = Column(String) status = Column(Enum(Status)) I want to filter on several statuses in a query like this: session.query(Message).filter(Message.status in [Status.failed, Status.success]) But no matter what I have in my DB, the results are always empty, probably because it doesn't understand the type of Message.status. However, this does work: session.query(Message).filter(Message.status == Status.failed or Message.status == Status.success) | You should use the in_ operator: session.query(Message).filter(Message.status.in_((Status.failed, Status.success))) | 8 | 8 |
69,686,581 | 2021-10-23 | https://stackoverflow.com/questions/69686581/the-term-pipx-is-not-recognized-as-the-name-of-a-cmdlet | I have followed the instructions on their website to install Brownie in Visual Studio Code. python3 -m pip install --user pipx python3 -m pipx ensurepath The two lines above pose no problem. I restarted the terminal and entered the line: pipx install eth-brownie pipx : The term 'pipx' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. I was wondering what went wrong. Any form of assistance is greatly appreciated. | Check your environment variables. python3 -m pipx ensurepath adds the directories to PATH, but they are added in all lowercase for some reason. Fix the directory paths to match the case. | 7 | 9 |
69,724,598 | 2021-10-26 | https://stackoverflow.com/questions/69724598/python-sqlmodel-truncate-table-or-delete-all-and-get-number-of-rows | Using Python SQLModel, I want to truncate a table, or remove all rows, and get the number of deleted rows, in the most SQLModel standard way. How can I do this? I am using this: with Session(engine) as session: count = session.exec(delete(MyModel)).fetchall() session.commit() But it raises an error: ResourceClosedError('This result object does not return rows. It has been closed automatically.') I have also tried scalar() and fetchone() instead of fetchall() without success. | with Session(engine) as session: statement = delete(MyModel) result = session.exec(statement) session.commit() print(result.rowcount) Matched Row Counts | 5 | 8 |
69,646,771 | 2021-10-20 | https://stackoverflow.com/questions/69646771/sql-alchemy-insert-statement-failing-to-insert-but-no-error | I am attempting to execute a raw sql insert statement in Sqlalchemy, SQL Alchemy throws no errors when the constructed insert statement is executed but the lines do not appear in the database. As far as I can tell, it isn't a syntax error (see no 2), it isn't an engine error as the ORM can execute an equivalent write properly (see no 1), it's finding the table it's supposed to write too (see no 3). I think it's a problem with a transaction not being commited and have attempted to address this (see no 4) but this hasn't solved the issue. Is it possible to create a nested transaction and what would start the 'first' so to speak? Thankyou for any answers. Some background: I know that the ORM facilitates this and have used this feature and it works, but is too slow for our application. We decided to try using raw sql for this particular write function due to how often it's called and the ORM for everything else. An equivalent method using the ORM works perfectly, and the same engine is used for both, so it can't be an engine problem right? I've issued an example of the SQL that the method using raw sql constructs to the database directly and that reads in fine, so I don't think it's a syntax error. it's communicating with the database properly and can find the table as any syntax errors with table and column names throw a programmatic error so it's not just throwing stuff into the 'void' so to speak. My first thought after reading around was that it was transaction error and that a transaction was being created and not closed, and so constructed the execute statement as such to ensure a transaction was properly created and commited. with self.Engine.connect() as connection: connection.execute(Insert_Statement) connection.commit The so called 'Insert Statement' has been converted to text using the sqlalchemy 'text' function, I don't quite understand why it won't execute if I pass the constructed string directly to the execute statement but mention it in case it's relevant. Other things that may be relevant: Python3 is running on an individual ec2 instance the postgres database on another. The table in particular is a timescaledb hypertable taking realtime data, hence the need for very fast writes, but probably not relevant. Currently using pg8000 as dialect for no particular reason other than psycopg2 was throwing errors when trying the execute an equivalent method using the ORM. | Just so this question is answered in case anyone else ends up here: The issue was a failure to call commit as a method, as @snakecharmerb pointed out. Gord Thompson also provided an alternate method using 'begin' which automatically commits rather than connection which is a 'commit as you go' style transaction. | 8 | 9 |
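For reference, a minimal sketch of the two working patterns this answer alludes to (calling `commit()` as a method, or using `engine.begin()`, which commits automatically); the table, values, and connection URL are placeholders, and a SQLAlchemy 1.4+ `future`-style engine is assumed:

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+pg8000://user:pass@host/db", future=True)  # placeholder URL
insert_statement = text("INSERT INTO my_table (id, value) VALUES (:id, :value)")

# commit-as-you-go: note that commit() is actually *called*, not just referenced
with engine.connect() as connection:
    connection.execute(insert_statement, {"id": 1, "value": "x"})
    connection.commit()

# "begin once": the block commits automatically on success and rolls back on error
with engine.begin() as connection:
    connection.execute(insert_statement, {"id": 2, "value": "y"})
```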
69,723,468 | 2021-10-26 | https://stackoverflow.com/questions/69723468/is-the-key-order-the-same-for-ordereddict-and-dict | dict keeps insertion order since Python 3.6 (see this). OrderedDict was developed just for this purpose (before Python 3.6). Since Python 3.6, is the key order always the same for dict or OrderedDict? I wonder whether I can do this in my code and have always the same behavior (except of equality, and some extended methods in OrderedDict) but more efficiently: if sys.version_info[:2] >= (3, 6): OrderedDict = dict else: from collections import OrderedDict Or phrased differently, for Python >=3.6, is there any reason to use OrderedDict? | Both OrderedDict and dict are insertion-ordered¹ for iteration. There is practically no reason to use OrderedDict if iteration order is the only deciding point, especially if re-ordering is not needed. Obviously, if comparison order is desired OrderedDict and dict are not interchangeable. Or phrased differently, for Python >=3.6, is there any reason to use OrderedDict? These days OrderedDict is to dict what deque is to list, basically. OrderedDict/deque are based on linked lists² whereas dict/list are based on arrays. The former have better pop/move/FIFO semantics, since items can be removed from the start/middle without moving other items. Since arrays are generally very cache friendly, the linked list advantage only comes into play for very large containers. Also, OrderedDict (unlike deque) does not have guarantees for its linked list semantics and its advantage may thus not be portable. OrderedDict should primarily be used if many pop/move/FIFO operations are needed and benchmarking can compare the performance of dict vs. OrderedDict in practice. ¹This applies to all currently supported implementations compliant with the Python language spec, i.e. CPython and PyPy since Python 3.6. ²OrderedDict in CPython still preserves O(1) key access. This is realised by also having a "regular" lookup table, using the linked list for order between items and the lookup table for direct item access. It's complicated. | 13 | 9 |
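A small illustration of the two behavioural differences called out above (order-sensitive equality and the pop/move/FIFO conveniences):

```python
from collections import OrderedDict

d1, d2 = {"a": 1, "b": 2}, {"b": 2, "a": 1}
od1, od2 = OrderedDict(d1), OrderedDict(d2)

print(d1 == d2)    # True  -> plain dict comparison ignores order
print(od1 == od2)  # False -> OrderedDict comparison is order-sensitive

od1.move_to_end("a")             # O(1) reordering; plain dict has no equivalent
print(list(od1))                 # ['b', 'a']
print(od1.popitem(last=False))   # FIFO pop from the front: ('b', 2)
```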
69,710,875 | 2021-10-25 | https://stackoverflow.com/questions/69710875/how-can-i-have-a-synchronous-facade-over-asyncpg-apis-with-python-asyncio | Imagine an asynchronous aiohttp web application that is supported by a Postgresql database connected via asyncpg and does no other I/O. How can I have a middle-layer hosting the application logic, that is not async? (I know I can simply make everything async -- but imagine my app to have massive application logic, only bound by database I/O, and I cannot touch everything of it). Pseudo code: async def handler(request): # call into layers over layers of application code, that simply emits SQL ... def application_logic(): ... # This doesn't work, obviously, as await is a syntax # error inside synchronous code. data = await asyncpg_conn.execute("SQL") ... # What I want is this: data = asyncpg_facade.execute("SQL") ... How can a synchronous façade over asyncpg be built, that allows the application logic to make database calls? The recipes floating around like using async.run() or asyncio.run_coroutine_threadsafe() etc. do not work in this case, as we're coming from an already asynchronous context. I'd assume this cannot be impossible, as there already is an event loop that could in principle run the asyncpg coroutine. Bonus question: what is the design rationale of making await inside sync a syntax error? Wouldn't it be pretty useful to allow await from any context that originated from a coroutine, so we'd have simple means to decompose an application in functional building blocks? EDIT Extra bonus: beyond Paul's very good answer, that stays inside the "safe zone", I'd be interested in solutions that avoid blocking the main thread (leading to something more gevent-ish). See also my comment on Paul's answer ... | You need to create a secondary thread where you run your async code. You initialize the secondary thread with its own event loop, which runs forever. Execute each async function by calling run_coroutine_threadsafe(), and calling result() on the returned object. That's an instance of concurrent.futures.Future, and its result() method doesn't return until the coroutine's result is ready from the secondary thread. Your main thread is then, in effect, calling each async function as if it were a sync function. The main thread doesn't proceed until each function call is finished. BTW it doesn't matter if your sync function is actually running in an event loop context or not. The calls to result() will, of course, block the main thread's event loop. That can't be avoided if you want to get the effect of running an async function from sync code. Needless to say, this is an ugly thing to do and it's suggestive of the wrong program structure. But you're trying to convert a legacy program, and it may help with that. import asyncio import threading from datetime import datetime def main(): def thr(loop): asyncio.set_event_loop(loop) loop.run_forever() loop = asyncio.new_event_loop() t = threading.Thread(target=thr, args=(loop, ), daemon=True) t.start() print("Hello", datetime.now()) t1 = asyncio.run_coroutine_threadsafe(f1(1.0), loop).result() t2 = asyncio.run_coroutine_threadsafe(f1(2.0), loop).result() print(t1, t2) if __name__ == "__main__": main() >>> Hello 2021-10-26 20:37:00.454577 >>> Hello 1.0 2021-10-26 20:37:01.464127 >>> Hello 2.0 2021-10-26 20:37:03.468691 >>> 1.0 2.0 | 10 | 9 |
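The snippet above calls `f1()` without showing its definition; judging from the printed output, the intended coroutine was something along these lines (an assumption, not part of the original answer):

```python
async def f1(x):
    # sleep in the secondary thread's event loop, then report back
    await asyncio.sleep(x)
    print("Hello", x, datetime.now())
    return x
```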
69,727,314 | 2021-10-26 | https://stackoverflow.com/questions/69727314/numpy-construct-squares-along-diagonal-of-matrix-expand-diagonal-matrix | Suppose you have either two arrays: index = [1, 2, 3] counts = [2, 3, 2] or a singular array arr = [1, 1, 2, 2, 2, 3, 3] How can I efficiently construct the matrix [ [1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0], [0, 0, 2, 2, 2, 0, 0], [0, 0, 2, 2, 2, 0, 0], [0, 0, 2, 2, 2, 0, 0], [0, 0, 0, 0, 0, 3, 3], [0, 0, 0, 0, 0, 3, 3] ] with NumPy? I know that square = np.zeros((7, 7)) np.fill_diagnol(square, arr) # see arr above produces [ [1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0], [0, 0, 2, 0, 0, 0, 0], [0, 0, 0, 2, 0, 0, 0], [0, 0, 0, 0, 2, 0, 0], [0, 0, 0, 0, 0, 3, 0], [0, 0, 0, 0, 0, 0, 3] ] How do I "expand" the diagonal by n where n is counts[index-1] for the values specified by index[I] tmp = np.array((arr * N)).reshape((len(arr), len(arr)) np.floor( (tmp + tmp.T) / 2 ) # <-- this is closer array([[1., 1., 1., 1., 1., 2., 2.], [1., 1., 1., 1., 1., 2., 2.], [1., 1., 2., 2., 2., 2., 2.], [1., 1., 2., 2., 2., 2., 2.], [1., 1., 2., 2., 2., 2., 2.], [2., 2., 2., 2., 2., 3., 3.], [2., 2., 2., 2., 2., 3., 3.]]) This gets what I want, but probably doesn't scale that well? riffled = list(zip(index, counts)) riffled # [(1, 2), (2, 3), (3, 2)] a = np.zeros((len(arr), len(arr))) # 7, 7 square last = 0 # <-- keep track of current sub square for i, c in riffled: a[last:last+c, last:last+c] = np.ones((c, c)) * i last += c # <-- shift square yield array([[1., 1., 0., 0., 0., 0., 0.], [1., 1., 0., 0., 0., 0., 0.], [0., 0., 2., 2., 2., 0., 0.], [0., 0., 2., 2., 2., 0., 0.], [0., 0., 2., 2., 2., 0., 0.], [0., 0., 0., 0., 0., 3., 3.], [0., 0., 0., 0., 0., 3., 3.]]) | Try broadcasting: idx = np.repeat(np.arange(len(counts)), counts) np.where(idx==idx[:,None], arr, 0) # or # arr * (idx==idx[:,None]) Output; array([[1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0], [0, 0, 2, 2, 2, 0, 0], [0, 0, 2, 2, 2, 0, 0], [0, 0, 2, 2, 2, 0, 0], [0, 0, 0, 0, 0, 3, 3], [0, 0, 0, 0, 0, 3, 3]]) | 17 | 3 |
69,704,561 | 2021-10-25 | https://stackoverflow.com/questions/69704561/cannot-update-spyder-5-1-5-on-new-anaconda-install | I installed anaconda and spyder came with the installation. Spyder 4.2.5 came with the installation and I got a pop up notification that spyder=5.1.5 is available. I tried conda update anaconda conda install spyder=5.1.5 and gets an error: Solving environment: failed with initial frozen solve. Retrying with flexible solve. I tried letting it run for more than 8 hours, but I had to cancel it because I got tired. Tried conda install anaconda spyder=5.1.5 and gets another error: `Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: ananconda Current channels: https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. Can someone please help on how to solve this? | (Spyder maintainer here) Our regular instructions to update Spyder don't work in this case because there are some incompatible dependencies between Spyder 5.0.5 and 5.1.5. To workaround this problem, you need to close Spyder and run the following commands in the Anaconda Prompt (or your system terminal on Linux or macOS): conda remove spyder conda remove python-language-server conda update anaconda conda install spyder=5.1.5 The second or third commands (i.e. conda remove python-language-server or conda update anaconda) could give you errors, but that's fine. Simply ignore them and continue with the other commands. | 24 | 37 |
69,717,154 | 2021-10-26 | https://stackoverflow.com/questions/69717154/typeerror-cannot-concatenate-object-of-type-class-list-only-series-and-d | I have a list of 10 dataframes named d0, d1, d2,...d9. All have 3 columns and 100 rows. d0.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 100 entries, 0 to 99 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 0 100 non-null float64 1 1 100 non-null float64 2 2 100 non-null float64 dtypes: float64(3) memory usage: 2.5 KB I want to merge all dataframes so that I can have 3 columns and 1000 rows and then convert it into an array. s1=[d0,d1,d2,d3,d4,d5,d6,d7,d8,d9] s2=pd.concat([s1]) The above code throws error: TypeError: cannot concatenate object of type '<class 'list'>'; only Series and DataFrame objs are valid type(s1) list I used the solution suggested in pd.concat in pandas is giving a TypeError: cannot concatenate object of type '<class 'str'>'; only Series and DataFrame objs are valid ; however, got the above error. | s1 is already a list. Doing what you did called pd.concat with a list of a list with DataFrames, which pandas doesn't allow. You should do it like this instead: s2=pd.concat(s1) | 6 | 10 |
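The question also asks to convert the merged result into an array; a minimal continuation of the accepted answer (assuming a NumPy array is what's wanted; the random frames below just stand in for d0…d9):

```python
import numpy as np
import pandas as pd

# stand-ins for the ten 100-row, 3-column dataframes from the question
s1 = [pd.DataFrame(np.random.rand(100, 3)) for _ in range(10)]

s2 = pd.concat(s1, ignore_index=True)  # 1000 rows, 3 columns
arr = s2.to_numpy()                    # ndarray of shape (1000, 3)
print(arr.shape)
```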
69,716,911 | 2021-10-26 | https://stackoverflow.com/questions/69716911/convert-subset-of-columns-to-rows-by-combining-columns | Pandas 1.1.4 MRE: df = pd.DataFrame({"Code":[1,2], "view_A":[3000, 2300], "click_A":[3, 23], "view_B":[1200, 300], "click_B":[5, 3]}) df.set_index("Code", inplace=True) >>> view_A click_A view_B click_B Code 1 3000 3 1200 5 2 2300 23 300 3 Want to make it into view click Code type 1 A 3000 3 2 A 2300 23 1 B 1200 5 2 B 300 3 I can do it, but want to explore more (clean) options. My sol'tn a_df = df[["view_A", "click_A"]].rename(columns={"view_A":"view", "click_A":"click"}) a_df["type"] = "A" b_df = df[["view_B", "click_B"]].rename(columns={"view_B":"view", "click_B":"click"}) b_df["type"] = "B" final_df = pd.concat([a_df, b_df]) But code is dirty. | This is essentially a reshape operation using stack df.columns = df.columns.str.split('_', expand=True) df.stack().rename_axis(['code', 'type']) click view code type 1 A 3 3000 B 5 1200 2 A 23 2300 B 3 300 | 17 | 12 |
69,699,744 | 2021-10-24 | https://stackoverflow.com/questions/69699744/plotly-express-line-with-continuous-color-scale | I have the following piece of code import plotly.express as px import pandas as pd import numpy as np x = [1,2,3,4,5,6] df = pd.DataFrame( { 'x': x*3, 'y': list(np.array(x)) + list(np.array(x)**2) + list(np.array(x)**.5), 'color': list(np.array(x)*0) + list(np.array(x)*0+1) + list(np.array(x)*0+2), } ) for plotting_function in [px.scatter, px.line]: fig = plotting_function( df, x = 'x', y = 'y', color = 'color', title = f'Using {plotting_function.__name__}', ) fig.show() which produces the following two plots: For some reason px.line is not producing the continuous color scale that I want, and in the documentation for px.scatter I cannot find how to join the points with lines. How can I produce a plot with a continuous color scale and lines joining the points for each trace? This is the plot I want to produce: | I am not sure this is possible using only plotly.express. If you use px.line, then you can pass the argument markers=True as described in this answer, but from the px.line documentation it doesn't look like continuous color scales are supported. UPDATED ANSWER: in order to have both a legend that groups both the lines and markers together, it's probably simpest to use go.Scatter with the argument mode='lines+markers'. You'll need to add the traces one at a time (by plotting each unique color portion of the data one at a time) in order to be able to control each line+marker group from the legend. When plotting these traces, you will need some functions to retrieve the colors of the lines from the continuous color scale because go.Scatter won't know what color your lines are supposed to be unless you specify them - thankfully that has been answered here. Also you won't be able to generate a colorbar adding the markers one color at a time, so to add a colorbar, you can plot all of the markers at once using go.Scatter, but use the argument marker=dict(size=0, color="rgba(0,0,0,0)", colorscale='Plasma', colorbar=dict(thickness=20)) to display a colorbar, but ensure that these duplicate markers are not visible. Putting all of this together: # import plotly.express as px import plotly.graph_objects as go import pandas as pd import numpy as np x = [1,2,3,4,5,6] df = pd.DataFrame( { 'x': x*3, 'y': list(np.array(x)) + list(np.array(x)**2) + list(np.array(x)**.5), 'color': list(np.array(x)*0) + list(np.array(x)*0+1) + list(np.array(x)*0+2), } ) # This function allows you to retrieve colors from a continuous color scale # by providing the name of the color scale, and the normalized location between 0 and 1 # Reference: https://stackoverflow.com/questions/62710057/access-color-from-plotly-color-scale def get_color(colorscale_name, loc): from _plotly_utils.basevalidators import ColorscaleValidator # first parameter: Name of the property being validated # second parameter: a string, doesn't really matter in our use case cv = ColorscaleValidator("colorscale", "") # colorscale will be a list of lists: [[loc1, "rgb1"], [loc2, "rgb2"], ...] colorscale = cv.validate_coerce(colorscale_name) if hasattr(loc, "__iter__"): return [get_continuous_color(colorscale, x) for x in loc] return get_continuous_color(colorscale, loc) # Identical to Adam's answer import plotly.colors from PIL import ImageColor def get_continuous_color(colorscale, intermed): """ Plotly continuous colorscales assign colors to the range [0, 1]. This function computes the intermediate color for any value in that range. 
Plotly doesn't make the colorscales directly accessible in a common format. Some are ready to use: colorscale = plotly.colors.PLOTLY_SCALES["Greens"] Others are just swatches that need to be constructed into a colorscale: viridis_colors, scale = plotly.colors.convert_colors_to_same_type(plotly.colors.sequential.Viridis) colorscale = plotly.colors.make_colorscale(viridis_colors, scale=scale) :param colorscale: A plotly continuous colorscale defined with RGB string colors. :param intermed: value in the range [0, 1] :return: color in rgb string format :rtype: str """ if len(colorscale) < 1: raise ValueError("colorscale must have at least one color") hex_to_rgb = lambda c: "rgb" + str(ImageColor.getcolor(c, "RGB")) if intermed <= 0 or len(colorscale) == 1: c = colorscale[0][1] return c if c[0] != "#" else hex_to_rgb(c) if intermed >= 1: c = colorscale[-1][1] return c if c[0] != "#" else hex_to_rgb(c) for cutoff, color in colorscale: if intermed > cutoff: low_cutoff, low_color = cutoff, color else: high_cutoff, high_color = cutoff, color break if (low_color[0] == "#") or (high_color[0] == "#"): # some color scale names (such as cividis) returns: # [[loc1, "hex1"], [loc2, "hex2"], ...] low_color = hex_to_rgb(low_color) high_color = hex_to_rgb(high_color) return plotly.colors.find_intermediate_color( lowcolor=low_color, highcolor=high_color, intermed=((intermed - low_cutoff) / (high_cutoff - low_cutoff)), colortype="rgb", ) fig = go.Figure() ## add the lines+markers for color_val in df.color.unique(): color_val_normalized = (color_val - min(df.color)) / (max(df.color) - min(df.color)) # print(f"color_val={color_val}, color_val_normalized={color_val_normalized}") df_subset = df[df['color'] == color_val] fig.add_trace(go.Scatter( x=df_subset['x'], y=df_subset['y'], mode='lines+markers', marker=dict(color=get_color('Plasma', color_val_normalized)), name=f"line+marker {color_val}", legendgroup=f"line+marker {color_val}" )) ## add invisible markers to display the colorbar without displaying the markers fig.add_trace(go.Scatter( x=df['x'], y=df['y'], mode='markers', marker=dict( size=0, color="rgba(0,0,0,0)", colorscale='Plasma', cmin=min(df.color), cmax=max(df.color), colorbar=dict(thickness=40) ), showlegend=False )) fig.update_layout( legend=dict( yanchor="top", y=0.99, xanchor="left", x=0.01), yaxis_range=[min(df.y)-2,max(df.y)+2] ) fig.show() | 7 | 4 |
69,709,227 | 2021-10-25 | https://stackoverflow.com/questions/69709227/how-to-reference-the-output-of-a-rule-in-snakemake | I was wondering if it is possible to use the output of a rule directly as the input of the next rule, without having to specify the path again. I thought maybe something like this would work, but it does not in my tests: rule A: input: in_file = "path/to/in_file" output: out_file = "path/to/out_file" shell: "...." rule B: input: in_file = A.output.out_file # reference the output of rule A doesnt work like this # in_file = "path/to/out_file" -> this works but is less elegant i think output: out_file = "path/to/out_file" shell: "...." Any help or insights are appreciated! Cheers! | Maybe this is the syntax you are looking for: rule B: input: in_file = rules.A.output.out_file, ... Although I prefer to hardcode the filenames since it makes the script more readable. | 5 | 6 |
69,710,194 | 2021-10-25 | https://stackoverflow.com/questions/69710194/get-all-objects-with-today-date-django | I have a model like this class Maca(models.Model): created_at = models.DateTimeField( auto_now_add=True ) Now, in the views.py file, I want to get all the entries that were created today. I'm trying this Maca.objects.filter(created_at=datetime.today().date()) But this also compares the time of day the object was created. P.S. I can't change the field in the model because I need the time for other purposes. Can someone help me to select all entries that have been created today? Thanks in advance | You just have to write a filter on the date parts, like this: from datetime import datetime today = datetime.today() year = today.year month = today.month day = today.day maca = Maca.objects.filter(created_at__year=year, created_at__month=month, created_at__day=day) | 5 | 8 |
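On reasonably recent Django versions (1.9+), the same filter can be written more compactly with the `__date` lookup; a small sketch (time-zone behaviour depends on the project's `USE_TZ` setting):

```python
from datetime import date

macas_today = Maca.objects.filter(created_at__date=date.today())
```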
69,708,821 | 2021-10-25 | https://stackoverflow.com/questions/69708821/can-i-have-multiple-subfolders-with-virtual-python-environments-in-vs-code | I have a monorepo structured like this: myRepo/ ├─ project_1/ │ ├─ .venv/ │ ├─ main.py ├─ project_2/ │ ├─ .venv/ │ ├─ main.py ├─ .gitignore ├─ README.md Can VS Code handle multiple python venvs in subfolders? After some googling I managed a find one solution, but its not very elegant. I created a workspace and added the folders /project_1 and /project_2, that worked and I can easily switch and select Python Interpreter. I also need to modify files in /root from time to time so I added this folder as well. All this makes the Explorer folder structure bigger and more cluttered with duplicates of itself: workspace/ ├── myRepo/ │ ├── project_1/ │ │ ├── .venv/ │ │ └── main.py │ ├── project_2/ │ │ ├── .venv/ │ │ └── main.py │ ├── .gitignore │ ├── README.md │ └── myRepo/ ├── project_1/ │ ├── .venv/ │ └── main.py └── project_2/ ├── .venv/ └── main.py | VS Code has a list of places, where it looks for virtual environments. Only environments located directly under the workspace are picked up automatically. You can also enter custom paths when running the Python: Select Interpreter command, though. Simply select "Enter interpreter path..." and navigate to your venv's /bin/python executable: Once you have used a cutom interpreter path, it is known to VS code and will be directly selectable using the Python: Select Interpreter command. | 16 | 4 |
69,699,729 | 2021-10-24 | https://stackoverflow.com/questions/69699729/pydantic-is-confused-with-my-types-and-their-union | I have following Pydantic model type scheme specification: class RequestPayloadPositionsParams(BaseModel): """ Request payload positions parameters """ account: str = Field(default="COMBINED ACCOUNT") fields: List[str] = Field(default=["QUANTITY", "OPEN_PRICE", "OPEN_COST"]) class RequestPayloadPositions(BaseModel): """ Request payload positions service """ header: RequestPayloadHeader = Field( default=RequestPayloadHeader(service="positions", id="positions", ver=0) ) params: RequestPayloadPositionsParams = Field( default=RequestPayloadPositionsParams() ) class RequestPayloadOrdersParams(BaseModel): """ Request payload orders parameters """ account: str = Field(default="COMBINED ACCOUNT") types: List[str] = Field(default=["WORKING", "FILLED", "CANCELED"]) class RequestPayloadOrders(BaseModel): """ Request payload orders service """ header: RequestPayloadHeader = Field( default=RequestPayloadHeader(service="order_events", id="order_events", ver=0) ) params: RequestPayloadOrdersParams = Field(default=RequestPayloadOrdersParams()) class RequestPayload(BaseModel): """ Request payload data """ payload: List[Union[RequestPayloadPositions, RequestPayloadOrders]] = Field(...) Now, I want to create a payload object for both, orders and positions service: positions = requests.RequestPayload(payload=[requests.RequestPayloadPositions()]) orders = requests.RequestPayload(payload=[requests.RequestPayloadOrders()]) Now, positions has type requests.RequestPayload[payload= requests.RequestPayloadPositions ... but orders has not requests.RequestPayload[payload= requests.RequestPayloadOrders but the same like positions. This is wrong. I can fix this by change the model specification from payload: List[Union[RequestPayloadPositions, RequestPayloadOrders]] = Field(...) to payload: List[Any] = Field(...) ... but I want to explicitly define the allowed types. Any idea how to solve this, or should I explain more detailed? Do you understand my problem? EDIT Working code sample, shows that the second assert in the last line fails, but should not fail ... from typing import List, Union from pydantic import BaseModel, Field class RequestPayloadHeader(BaseModel): """ Request payload header """ service: str = Field(...) id: str = Field(...) ver: int = Field(...) class RequestPayloadLoginParams(BaseModel): """ Request payload login parameters """ domain: str = Field(default="TOS") platform: str = Field(default="PROD") token: str = Field(...) accessToken: str = Field(default="") tag: str = Field(default="TOSWeb") class RequestPayloadLogin(BaseModel): """ Request payload login service """ header: RequestPayloadHeader = Field( default=RequestPayloadHeader(service="login", id="login", ver=0) ) params: RequestPayloadLoginParams = Field(...) class RequestPayloadPositionsParams(BaseModel): """ Request payload positions parameters """ account: str = Field(default="COMBINED ACCOUNT") fields: List[str] = Field(default=["QUANTITY", "OPEN_PRICE", "OPEN_COST"]) class RequestPayloadOrdersParams(BaseModel): """ Request payload orders parameters """ account: str = Field(default="COMBINED ACCOUNT") types: List[str] = Field(default=["WORKING", "FILLED", "CANCELED"]) class RequestPayloadService(BaseModel): """ Request payload service """ header: RequestPayloadHeader = Field(...) params: Union[RequestPayloadPositionsParams, RequestPayloadOrdersParams] = Field( ... 
) class RequestPayload(BaseModel): """ Request payload data """ payload: List[Union[RequestPayloadLogin, RequestPayloadService]] = Field(...) if __name__ == "__main__": positions = RequestPayload( payload=[ RequestPayloadService( header=RequestPayloadHeader(service="positions", id="positions", ver=0), params=RequestPayloadPositionsParams(), ) ] ) assert isinstance(positions.payload[0].params, RequestPayloadPositionsParams) orders = RequestPayload( payload=[ RequestPayloadService( header=RequestPayloadHeader( service="order_events", id="order_events", ver=0 ), params=RequestPayloadOrdersParams(), ) ] ) assert isinstance(orders.payload[0].params, RequestPayloadOrdersParams) EDIT2 The solution by Alex does not cover the scenario when I have two models with same field names but different types for them, like this: class ResponseReplacePatchStr(BaseModel): op: str = Field(default="replace") path: str = Field(...) value: str = Field(...) class Config: extra = "forbid" class ResponseReplacePatchFloat(BaseModel): op: str = Field(default="replace") path: str = Field(...) value: float = Field(...) class Config: extra = "forbid" It is always converted to type str for value field if ResponseReplacePatchStr is the first type mentioned in Union[ ResponseReplacePatchStr, ResponseReplacePatchFloat ] How can I also solve this, so that Pydantic take care about my types? | This is one of the features of Pydantic matching Union, which is described as: However, as can be seen above, pydantic will attempt to 'match' any of the types defined under Union and will use the first one that matches.[...] As such, it is recommended that, when defining Union annotations, the most specific type is included first and followed by less specific types. At the same time, extra fields are ignored by default and default values of declared fields are used in your case. Therefore, the solution might be adding extra = 'forbid' model config option: class RequestPayloadPositionsParams(BaseModel): """ Request payload positions parameters """ account: str = Field(default="COMBINED ACCOUNT") fields: List[str] = Field(default=["QUANTITY", "OPEN_PRICE", "OPEN_COST"]) class Config: extra = 'forbid' class RequestPayloadOrdersParams(BaseModel): """ Request payload orders parameters """ account: str = Field(default="COMBINED ACCOUNT") types: List[str] = Field(default=["WORKING", "FILLED", "CANCELED"]) class Config: extra = 'forbid' | 8 | 14 |
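The accepted answer addresses the original question but not EDIT2 (two models whose `value` fields differ only in type). One hedged option under Pydantic v1 is to make those fields non-coercing with `StrictStr`/`StrictFloat`, so the wrong Union branch cannot silently accept the value; the `Patch` wrapper below is just an illustrative container, not from the question:

```python
from typing import Union

from pydantic import BaseModel, Field, StrictFloat, StrictStr

class ResponseReplacePatchStr(BaseModel):
    op: str = Field(default="replace")
    path: str = Field(...)
    value: StrictStr = Field(...)

    class Config:
        extra = "forbid"

class ResponseReplacePatchFloat(BaseModel):
    op: str = Field(default="replace")
    path: str = Field(...)
    value: StrictFloat = Field(...)

    class Config:
        extra = "forbid"

class Patch(BaseModel):
    item: Union[ResponseReplacePatchStr, ResponseReplacePatchFloat]

print(type(Patch(item={"op": "replace", "path": "/x", "value": 1.5}).item).__name__)
# ResponseReplacePatchFloat -- StrictStr rejects the float, so the second branch matches
print(type(Patch(item={"op": "replace", "path": "/x", "value": "abc"}).item).__name__)
# ResponseReplacePatchStr
```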
69,678,621 | 2021-10-22 | https://stackoverflow.com/questions/69678621/input-with-multiple-removable-values-in-dash | Is there a dash component like this: Basically a text input, but with a list of removable items, added by hitting enter. I know a dropdown can be made to work like this, but I don't have a specified list of options beforehand. | If you're open to dash components that are not standard, I would go for Tag input from Tauffer consulting. And if you would like to use approaches that ship with Dash, you can put together a few components to mimic a very similar functionality. Both approaches are described below. 1 - Tag input from Tauffer consulting Figure 1 - Tag input The snippet below is a complete example for JupyterDash. For a complete Dash example you should take a look at the main source import dash_cool_components import dash from dash.dependencies import Input, Output import dash_html_components as html import dash_bootstrap_components as dbc import dash_html_components as html from dash.dependencies import Input, Output, State from jupyter_dash import JupyterDash import dash_core_components as dcc app = JupyterDash() app.layout = dbc.Container([ dbc.Row([ dbc.Col( width={'size':4} ), dbc.Col( dash_cool_components.TagInput( id="input", ), width={'size':4} ), dbc.Col([ dbc.Label('Current tags'), dbc.Input( id="output", ), ], width={'size':4}) ]), html.Div(id='hidden') ], style={'marginTop': '200px'}) @app.callback( Output('output', 'value'), [Input('input', 'value')] ) def timezone_test(value): if value: tags = [e['displayValue'] for e in value] return tags app.run_server(mode='inline', port = 9889) 2 - Built-in dash components This suggestion is based on elements from Allow User to Create New Options in dcc.Dropdown on the Plotly community forum and will produce the following setup with an Input field, a submit button and a dropdown menu where multiple selections are allowed: ['a', 'b', 'c'] are specified as existing options (although you have speciefied that you have none) and 'a' is set up as a pre-defined choice. From here you can simply add an option d in the input field, and it will become available in the dropdown menu upon clicking Add Option. If you'd like to remove any of the options, just click the x next to it. The following code snippet is written for JupyterDash, but can easily be converted to Dash. If anything is unclear, just let me know. Complete code: import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output, State from jupyter_dash import JupyterDash import dash_core_components as dcc # app = dash.Dash(__name__) app = JupyterDash() app.layout = html.Div([ dcc.Input(id='input', value=''), html.Button('Add Option', id='submit'), dcc.Dropdown( id='dropdown', options=[ {'label': 'a', 'value': 'a'}, {'label': 'b', 'value': 'b'}, {'label': 'c', 'value': 'c'}, ], value='a', multi = True ), ]) @app.callback( Output('dropdown', 'options'), [Input('submit', 'n_clicks')], [State('input', 'value'), State('dropdown', 'options')], ) def callback(n_clicks, new_value, current_options): print(n_clicks) if not n_clicks: return current_options current_options.append({'label': new_value, 'value': new_value}) return current_options app.run_server(mode='inline', port = 9889) | 6 | 3 |
69,692,703 | 2021-10-23 | https://stackoverflow.com/questions/69692703/compiling-python-3-10-at-amazon-linux-2 | I'm trying to compile and install Python 3.10 into a Amazon Linux 2 but I'm not being able to get it with https support. Here the commands that I'm using to compile it: sudo yum -y update sudo yum -y groupinstall "Development Tools" sudo yum -y install openssl-devel bzip2-devel libffi-devel wget https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tgz tar xzf Python-3.10.0.tgz cd Python-3.10.0 sudo ./configure --enable-optimizations sudo make altinstall The binary works, but when I try to use it with for reach an https endpoint, I get this message: Traceback (most recent call last): File "<stdin>", line 1113, in <module> File "<stdin>", line 1087, in main File "/usr/local/lib/python3.10/urllib/request.py", line 216, in urlopen return opener.open(url, data, timeout) File "/usr/local/lib/python3.10/urllib/request.py", line 519, in open response = self._open(req, data) File "/usr/local/lib/python3.10/urllib/request.py", line 541, in _open return self._call_chain(self.handle_open, 'unknown', File "/usr/local/lib/python3.10/urllib/request.py", line 496, in _call_chain result = func(*args) File "/usr/local/lib/python3.10/urllib/request.py", line 1419, in unknown_open raise URLError('unknown url type: %s' % type) urllib.error.URLError: <urlopen error unknown url type: https> I'm not sure what I'm missing :/ | From Python3.10, OpenSSL 1.1.1 or newer is required. (Ref: PEP 644, Python3.10 release) I have tried changing some of your code like below and worked. delete: openssl-devel add: openssl11 openssl11-devel. | 6 | 16 |
69,691,395 | 2021-10-23 | https://stackoverflow.com/questions/69691395/invalid-python-sdk-error-right-after-creating-a-new-project-in-pycharm | Background Some time ago I seriously crashed my Windows computer while using PyCharm - I remember some errors about memory and then a hard crash with no blue screen - just black with some thin vertical lines and reboot to Windows installation / fixing screen. Since then, I had this problem, with no way I found online to fix this. Edit : Apparently, this has nothing to do with the problem. The problem Whenever I open a project, or create a new one, an error appears with the Invalid Python SDK error message. **Invalid Python SDK** Cannot set up a Python SDKat Python 3.9 (%projectName%) (%projectPath%).The SDK seems to be invalid. Also, this is what the work environment looks like the moment I close this message. In the Project window, the venv directory (and every directory under it) is marked as an Exclusion, and in the code, the print(f'Hi, {name}') function is marked as an unresolved reference error shown below. The program, however, executes flawlessly. What's more, when I go to Python Interpreter settings at File -> Settings -> Project -> Python Interpreter there's a yellow bar on the bottom which says: Non-zero exit code (4). which after some time says: Python packaging tools not found. Upon installing, nothing changes, and I can't add packages from this screen (the '+' button is greyed out): When I try to check Python interpreter paths, there are no paths shown, and I don't know what that means: In short, all of the default Python functions like print are marked as errors, even though they work when executed. This makes coding extremely confusing, as I can't quickly distinguish between real errors and 'errors'. The search for solution Normally this would be a problem with interpreter set-up or path, but I've tried most of the methods proposed in other answers to similar questions. To name a few : PyCharm shows unresolved references error for valid code 'Cannot setup a Python SDK' in PyCharm project using virtualenv after OS reinstallation Why do I get an 'SDK seems invalid' error when setting up my Project Interpreter in PyCharm? Invalid Python SDK Error while using python 3.4 on PyCharm Invalid Python SDK when setting a venv There were supposed to be links, but I don't have enough reputation on Stack Overflow to post them with the questions. These, however, can be easily looked up in Google, all of them are posted to Stack Overflow. What I tried I should mention that the first things I tried were removing and installing PyCharm, all user configurations and Python itself as well. I installed Python from the official site, and from the PyCharm application, both methods ended with the same result. File -> Invalidate Caches... -> Invalidate and restart. Didn't work. Checking file interpreter in Edit Configurations. Don't know what to make of it. The result: Refreshing the interpreter paths. Even now, the paths yield no results. Removing the interpreter and adding it again. No result. Deleting the .idea folder. No result. Deleting PyCharm user preferences under %homepath%/.PyCharm50. I don't have that folder though. Switching interpreter back and forth. No result. Creating a new interpreter in a different location. No result. Marking project directory as root ProjectName -> Mark Directory as -> Sources Root and unmarking other directories as Excluded. No result. Using no interpreter. Yeah, it doesn't mark non-errors as errors anymore. 
But the code doesn't work. That's not a solution for me. Checking if venv/pyvenv.cfg has paths set correctly. These look fine to me. Checking Windows environment variables - Path variable. It was in the user section, but wasn't in the system section. I added it, restarted but still no result. Changing account name in Windows. My account name was 'username' and that's how my User folder is called `C:\Users\username', but I later connected it to Microsoft account and my user name is now User Name with a space and I can't really change it. My folder stayed the same. Not sure if I can fix it that way. To the two last things I tried I should also add that I changed my Windows username from 'username' to 'user name' with a space, but that wasn't until recently. I'm attaching the idea.log file for you to check. I replaced my real username with 'User Name' to highlight the existence of a space. | OK, that was a lucky one! I'm thus posting my comment as an answer: The problem is caused by the non-ASCII characters in the path, and the solution is to remove them. As indicated by @TheLazyScripter this is a known issue. | 6 | 4 |
69,682,403 | 2021-10-22 | https://stackoverflow.com/questions/69682403/rename-duplicate-column-name-by-order-in-pandas | I have a dataframe, df, where I would like to rename two duplicate columns in consecutive order: Data DD Nice Nice Hello 0 1 1 2 Desired DD Nice1 Nice2 Hello 0 1 1 2 Doing df.rename(columns={"Name": "Name1", "Name": "Name2"}) I am running the rename function, however, because both column names are identical, the results are not desirable. | You could use an itertools.count() counter and a list expression to create new column headers, then assign them to the data frame. For example: >>> import itertools >>> df = pd.DataFrame([[1, 2, 3]], columns=["Nice", "Nice", "Hello"]) >>> df Nice Nice Hello 0 1 2 3 >>> count = itertools.count(1) >>> new_cols = [f"Nice{next(count)}" if col == "Nice" else col for col in df.columns] >>> df.columns = new_cols >>> df Nice1 Nice2 Hello 0 1 2 3 (Python 3.6+ required for the f-strings) EDIT: Alternatively, per the comment below, the list expression can replace any label that may contain "Nice" in case there are unexpected spaces or other characters: new_cols = [f"Nice{next(count)}" if "Nice" in col else col for col in df.columns] | 7 | 1 |
69,681,378 | 2021-10-22 | https://stackoverflow.com/questions/69681378/why-does-this-python-snippet-regarding-dictionaries-work | Say we have this >>> x = {'a': 1, 'b': 2} >>> y = {} >>> for k, y[k] in x.items(): pass ... >>> y {'a': 1, 'b': 2} Why does this work? Note: I saw this first here | a, b = (c, d) unpacks the tuple from left to right and assigns a = c and b = d in that order. x.items() iterates over key-value pairs in x. E.g. doing list(x.items()) will give [('a', 1), ('b', 2)] for a, b in x.items() assigns the key to a, and the value to b for each key-value pair in x. for k, y[k] in x.items() assigns the key to k, and the value to y[k] for each key-value pair in x. You can use k in y[k] because k has already been assigned since unpacking happens left-right You don't need to do anything in the loop because whatever you needed is done already. Because the loop already assigned every value in x to y[k], y is now a shallow copy of x. As the tweet you reference says, this is indeed a "terse, unintuitive, and confusing" way to do x.copy() | 14 | 7 |
69,660,109 | 2021-10-21 | https://stackoverflow.com/questions/69660109/google-calendar-api-does-not-refresh-refresh-token | I use the google API for a personal project, so I don't have my app verified with google. I use exactly this code example myself and a token.json file gets generated when logging in. Everything works fine, the access token changes more or less every time I make a request (every 10 min). After a week, the request fails. Exactly a week after the "expiry" field in the token.json file. google.auth.exceptions.RefreshError: ('invalid_grant: Token has been expired or revoked.', {'error': 'invalid_grant', 'error_description': 'Token has been expired or revoked.'}) If I understand everything correctly, google should update refresh_token as well, but this did not happen. I thought this part would handle getting a new refresh token: if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) ## HERE ## else: Might this be caused by my app not being verified? I did not find any information and hardly none about the behavior of the refresh_token. | You need to publish your app to production in order to remove the 7 days limitation. In APIs & Services / Oauth consent screen: From google documentation about refresh token expiration: A Google Cloud Platform project with an OAuth consent screen configured for an external user type and a publishing status of "Testing" is issued a refresh token expiring in 7 days. Also about Testing publishing status: Projects configured with a publishing status of Testing are limited to up to 100 test users listed in the OAuth consent screen. A test user consumes a project's test user quota once added to the project. Google will display a warning message before allowing a specified test user to authorize scopes requested by your project's OAuth clients. The warning message confirms the user has test access to your project but should consider the risks associated with granting access to their data to an unverified app. Authorizations by a test user will expire seven days from the time of consent. If your OAuth client requests an offline access type and receives a refresh token, that token will also expire. A Brand Account may authorize scopes requested by your project's OAuth clients if a specified test user manages the Brand Account. A test user may be unable to authorize scopes requested by your project's OAuth clients due to the availability of Google Services for the account or configured restrictions. A Google Workspace may control which third-party apps access its data or an account enrolled in Advanced Protection may block most non-Google apps. | 5 | 3 |
69,673,518 | 2021-10-22 | https://stackoverflow.com/questions/69673518/return-pydantic-model-with-field-names-instead-of-alias-as-fastapi-response | I am trying to return my model with the defined field names instead of its aliases. class FooModel(BaseModel): foo: str = Field(..., alias="bar") @app.get("/") -> FooModel: return FooModel(**{"bar": "baz"}) The response will be {"bar": "baz"} while I want {"foo": "baz"}. I know it's somewhat possible when using the dict method of the model, but it doesn't feel right and messes up the typing of the request handler. @app.get("/") -> FooModel: return FooModel(**{"bar": "baz"}).dict(by_alias=False) I feel like it should be possible to set this in the config class, but I can't find the right option. | You can add response_model_by_alias=False to path operation decorator. This key is mentioned here in the documentation. For example: @app.get("/model", response_model=Model, response_model_by_alias=False) def read_model(): return Model(alias="Foo") | 16 | 34 |
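Applied to the `FooModel` from the question, the whole route would look roughly like this (a sketch, not taken verbatim from the answer):

```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class FooModel(BaseModel):
    foo: str = Field(..., alias="bar")

@app.get("/", response_model=FooModel, response_model_by_alias=False)
def read_root():
    return FooModel(**{"bar": "baz"})  # serialized as {"foo": "baz"}
```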
69,668,805 | 2021-10-21 | https://stackoverflow.com/questions/69668805/why-does-this-code-to-get-airflow-context-get-run-on-dag-import | I have an Airflow DAG where I need to get the parameters the DAG was triggered with from the Airflow context. Previously, I had the code to get those parameters within a DAG step (I'm using the Taskflow API from Airflow 2) -- similar to this: from typing import Dict, Any, List from airflow.decorators import dag, task from airflow.operators.python import get_current_context from airflow.utils.dates import days_ago default_args = {"owner": "airflow"} @dag( default_args=default_args, start_date=days_ago(1), schedule_interval=None, tags=["my_pipeline"], ) def my_pipeline(): @task(multiple_outputs=True) def get_params() -> Dict[str, Any]: context = get_current_context() params = context["params"] assert isinstance(params, dict) return params params = get_params() pipeline = my_pipeline() This worked as expected. However, I needed to get these parameters in several steps, so I thought it would be a good idea to move code to get them into a separate function in the global scope, like this: # ... from airflow.operators.python import get_current_context # other top-level code here def get_params() -> Dict[str, Any]: context = get_current_context() params = context["params"] return params @dag(...) def my_pipeline(): @task() def get_data(): params = get_params() # other DAG tasks here get_data() pipeline = my_pipeline() Now, this breaks right on DAG import, with the following error (names changed to match the examples above): Broken DAG: [/home/airflow/gcs/dags/my_pipeline.py] Traceback (most recent call last): File "/home/airflow/gcs/dags/my_pipeline.py", line 26, in get_params context = get_context() File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 467, in get_context raise AirflowException( airflow.exceptions.AirflowException: Current context was requested but no context was found! Are you running within an airflow task? And I get what the error is saying and how to fix it (move the code to get context back inside a @task). But my question is -- why does the error come up right on DAG import? get_params doesn't get called anywhere outside of other tasks, and those tasks are obviously not run until the DAG runs. So why does the code in get_params run at all right when the DAG gets imported? At this point, I want to understand this just because the fact that this error comes up when it comes up is breaking my understanding of how Python modules are evaluated on import. Code within a function shouldn't run until the function is run, and the only error that can come up before it's run is SyntaxError (and maybe some other core errors that I'm not remembering right now). Is Airflow doing some special magic, or is there something simpler going on that I'm missing? I am running Airflow 2.1.2 managed by Google Cloud Composer 1.17.2. | Unfortunately I am not able to reproduce your issue. 
The similar code below parses, renders a DAG, and completes successfully on Airflow 2.0, 2,1, and 2.2: from datetime import datetime from typing import Any, Dict from airflow.decorators import dag, task from airflow.operators.python import get_current_context def get_params() -> Dict[str, Any]: context = get_current_context() params = context["params"] return params @dag( dag_id="get_current_context_test", start_date=datetime(2021, 1, 1), schedule_interval=None, params={"my_param": "param_value"}, ) def my_pipeline(): @task() def get_data(): params = get_params() print(params) get_data() pipeline = my_pipeline() Task log snippet: However, context objects are directly accessible in task-decorated functions. You can update the task signature(s) to include an arg for params=None (default value used so the file parses without a TypeError exception) and then apply whatever logic you need with that arg. This can be done with ti, dag_run, etc. too. Perhaps this helps? @dag( dag_id="get_current_context_test", start_date=datetime(2021, 1, 1), schedule_interval=None, params={"my_param": "param_value"}, ) def my_pipeline(): @task() def get_data(params=None): print(params) get_data() pipeline = my_pipeline() | 5 | 5 |
69,669,155 | 2021-10-21 | https://stackoverflow.com/questions/69669155/why-doesnt-mypy-pass-when-typeddict-calls-update-method | Example: from typing import TypedDict class MyType(TypedDict): a: int b: int t = MyType(a=1, b=2) t.update(b=3) mypy toy.py complains toy.py:9:1: error: Unexpected keyword argument "b" for "update" of "TypedDict" Found 1 error in 1 file (checked 1 source file) | It appears this is a known open issue for mypy: https://github.com/python/mypy/issues/6019 For now if you want mypy to not bother you with this error, you'll need to tell it to ignore it: t.update(b=3) # type: ignore[call-arg] | 10 | 6 |
69,660,175 | 2021-10-21 | https://stackoverflow.com/questions/69660175/why-does-redefining-a-variable-used-in-a-generator-give-strange-results | One of my friends asked me about this piece of code: array = [1, 8, 15] gen = (x for x in array if array.count(x) > 0) array = [2, 8, 22] print(list(gen)) The output: [8] Where did the other elements go? | The answer is in the PEP for generator expressions, in particular the section Early Binding vs Late Binding: After much discussion, it was decided that the first (outermost) for-expression should be evaluated immediately and that the remaining expressions be evaluated when the generator is executed. So basically the array in: x for x in array is evaluated using the original list [1, 8, 15] (i.e. immediately), while the other one: if array.count(x) > 0 is evaluated when the generator is executed using: print(list(gen)) at which point array refers to a new list [2, 8, 22] | 31 | 32 |
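A short illustrative follow-up to the answer above: if the intent is for every reference to use the original list, the binding can be forced early, for example by pulling the list into the outermost for-clause or into a function parameter (a sketch, not part of the original answer):

```python
array = [1, 8, 15]

# The outermost iterable [array] is evaluated immediately, so `a` stays
# bound to the original list even after the name `array` is rebound.
gen = (x for a in [array] for x in a if a.count(x) > 0)

def make_gen(arr):
    # `arr` captures the object passed in, independent of the name `array`.
    return (x for x in arr if arr.count(x) > 0)

gen2 = make_gen(array)
array = [2, 8, 22]
print(list(gen))   # [1, 8, 15]
print(list(gen2))  # [1, 8, 15]
```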
69,665,304 | 2021-10-21 | https://stackoverflow.com/questions/69665304/return-reference-to-member-field-in-pyo3 | Suppose I have a Rust struct like this struct X{...} struct Y{ x:X } I'd like to be able to write python code that accesses X through Y y = Y() y.x.some_method() What would be the best way to implement it in PyO3? Currently I made two wrapper classes #[pyclass] struct XWrapper{ x:X } #[pyclass] struct YWrapper{ y:Y } #[pymethods] impl YWrapper{ #[getter] pub fn x(&self)->XWrapper{ XWrapper{x:self.y.clone()} } } However, this requires clone(). I'd rather want to return reference. Of course I know that if X was a pyclass, then I could easily return PyRef to it. But the problem is that X and Y come from a Rust library and I cannot nilly-wily add #[pyclass] to them. | I don't think what you say is possible without some rejigging of the interface: Your XWrapper owns the x and your Y owns its x as well. That means creating an XWrapper will always involve a clone (or a new). Could we change XWrapper so that it merely contains a reference to an x? Not really, because that would require giving XWrapper a lifetime annotation, and PyO3 afaik doesn't allow pyclasses with lifetime annotation. Makes sense, because passing an object to python puts it on the python heap, at which point rust loses control over the object. So what can we do? Some thoughts: Do you really need to expose the composition structure of y to the python module? Just because that's the way it's organized within Rust doesn't mean it needs to be that way in Python. Your YWrapper could provide methods to the python interface that behind the scenes forward the request to the x instance: #[pymethods] impl YWrapper{ pub fn some_method(&self) { self.y.x.some_method(); } } This would also be a welcome sight to strict adherents of the Law of Demeter ;) I'm trying to think of other clever ways. Depending on some of the details of how y.x is accessed and modified by the methods of y itself, it might be possible to add a field x: XWrapper to the YWrapper. Then you create the XWrapper (including a clone of y.x) once when YWrapper is created, and from then on you can return references to that XWrapper in your pub fn x. Of course that becomes much more cumbersome when x gets frequently changed and updated via the methods of y... In a way, this demonstrates the clash between Python's ref-counted object model and Rust's ownership object model. Rust enforces that you can't arbitrarily mess with objects unless you're their owner. | 6 | 2 |
69,665,765 | 2021-10-21 | https://stackoverflow.com/questions/69665765/featuretoolsentityset-object-has-no-attribute-entity-from-dataframe | I tried to learn featuretools following documentation from featuretools.com. An error came up: AttributeError: 'EntitySet' object has no attribute 'entity_from_dataframe' Could you help me? Thank you. Code: import featuretools as ft data = ft.demo.load_mock_customer() transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"]) transactions_df.sample(10) products_df = data["products"] products_df es = ft.EntitySet(id="customer_data") es = es.entity_from_dataframe(entity_id="transactions", dataframe=transactions_df, index="transaction_id", time_index="transaction_time", variable_types={"product_id": ft.variable_types.Categorical, "zip_code": ft.variable_types.ZIPCode}) es Code source: https://docs.featuretools.com/en/v0.16.0/loading_data/using_entitysets.html#creating-entity-from-existing-table | The documentation you are using is for an older version of Featuretools. You can find the updated Getting Started documentation that works with Featuretools version 1.0 here: https://featuretools.alteryx.com/en/stable/getting_started/getting_started_index.html | 6 | 3 |
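Since the accepted answer only links to the new docs, here is a rough sketch of what the question's snippet looks like under the Featuretools 1.x API (the method and logical-type names follow the 1.0 documentation; verify them against your installed version):

```python
import featuretools as ft
from woodwork.logical_types import Categorical, PostalCode

data = ft.demo.load_mock_customer()
transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"])

es = ft.EntitySet(id="customer_data")
# entity_from_dataframe was replaced by add_dataframe in Featuretools 1.0
es = es.add_dataframe(
    dataframe_name="transactions",
    dataframe=transactions_df,
    index="transaction_id",
    time_index="transaction_time",
    logical_types={"product_id": Categorical, "zip_code": PostalCode},
)
```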
69,659,712 | 2021-10-21 | https://stackoverflow.com/questions/69659712/poetry-run-worker-py-filenotfound-errno-2-no-such-file-or-directory-b-snap | I execute the below command, in the same working directory as file worker.py: poetry run worker.py Terminal: me@LAPTOP-G1DAPU88:~/.ssh/workers-python/workers/composite_key/compositekey$ poetry run worker.py FileNotFoundError [Errno 2] No such file or directory: b'/snap/bin/worker.py' at /usr/lib/python3.8/os.py:601 in _execvpe 597│ path_list = map(fsencode, path_list) 598│ for dir in path_list: 599│ fullname = path.join(dir, file) 600│ try: → 601│ exec_func(fullname, *argrest) 602│ except (FileNotFoundError, NotADirectoryError) as e: 603│ last_exc = e 604│ except OSError as e: 605│ last_exc = e me@LAPTOP-G1DAPU88:~/.ssh/workers-python/workers/composite_key/compositekey$ ls Citizenship.csv __pycache__ dagster.yaml pytest.ini simulate_alien_dict.py tasks.py "Gordian Algorithms' Times.xlsx" config.yaml data run_pipeline.yaml simulate_data.ipynb tests __init__.py currency_symbols_map.json modules simulate_alien_dict.ipynb simulate_data.py worker.py Clearly, we see the file is there (bottom-right). Questions Why is this an issue caused? How would you resolve this issue, in future? Please let me know if there's anything else I should add to post | poetry run means "run the following command in the venv managed by poetry". So the correct way of using it in your case is: poetry run python worker.py | 5 | 7 |
69,650,839 | 2021-10-20 | https://stackoverflow.com/questions/69650839/filter-by-partial-string-match-in-json-using-jmespath | Below is a sample JSON: { "School": [ {"@id": "ABC_1", "SchoolType": {"@tc": "10023204", "#text": "BLUE FOX"}}, {"@id": "ABC_2", "SchoolType": {"@tc": "143", "#text": "AN EAGLE"}}, {"@id": "ABC_3", "SchoolType": {"@tc": "21474836", "#text": "OTHER REASONS"}, "SchoolStatus": {"@tc": "21474836", "#text": "FINE"}, "Teacher": [ {"@id": "XYZ_1", "TeacherType": {"@tc": "5", "#text": "GENDER"}, "Gender": "FEMALE", "Extension": {"@VC": "23", "Extension_Teacher": {"DateDuration": {"@tc": "10023111", "#text": "0-6 MONTHS"}}}}, {"@id": "XYZ_2", "TeacherType": {"@tc": "23", "#text": "EDUCATED"}, "Extension": {"@VC": "23", "Extension_Teacher": {"DateDuration": {"@tc": "10023111", "#text": "CURRENT"}}}}]}, {"@id": "ABC_4", "SchoolType": {"@tc": "21474836", "#text": "OTHER DAYS"}, "SchoolStatus": {"@tc": "1", "#text": "DOING OKAY"}, "Extension": {"Extension_School": {"AdditionalDetails": "CHRISTMAS DAY"}}}] } I want to extract the Teacher information (TeacherType, Gender etc.) for each Teacher @id where the associated SchoolType.\"#text\" contains "OTHER" for any School @id for all School. I tried the below query but it doesn't work: School[?SchoolType.\"#text\".contains(@, "OTHER")].Teacher[*].TeacherType.\"#text\"[]] | You have to warp the contains function around the array of your condition this way: [?contains(SchoolType."#text", 'OTHER')] So a way to get the full Teacher object would be: School[?contains(SchoolType."#text", 'OTHER')].Teacher Or, getting rid of the array of array with a flatten operator: School[?contains(SchoolType."#text", 'OTHER')].Teacher | [] This would give: [ { "@id": "XYZ_1", "TeacherType": { "@tc": "5", "#text": "GENDER" }, "Gender": "FEMALE", "Extension": { "@VC": "23", "Extension_Teacher": { "DateDuration": { "@tc": "10023111", "#text": "0-6 MONTHS" } } } }, { "@id": "XYZ_2", "TeacherType": { "@tc": "23", "#text": "EDUCATED" }, "Extension": { "@VC": "23", "Extension_Teacher": { "DateDuration": { "@tc": "10023111", "#text": "CURRENT" } } } } ] | 6 | 7 |
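For completeness, the expression from the answer can be evaluated from Python with the jmespath package (the file name below is a stand-in for wherever the question's JSON lives):

```python
import json
import jmespath

with open("schools.json") as f:   # hypothetical file containing the JSON from the question
    data = json.load(f)

expr = 'School[?contains(SchoolType."#text", \'OTHER\')].Teacher | []'
teachers = jmespath.search(expr, data)
for t in teachers:
    print(t["@id"], t["TeacherType"]["#text"])
```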
69,650,599 | 2021-10-20 | https://stackoverflow.com/questions/69650599/how-to-deactivate-virtualenv-in-powershell | So I am aware of the virtualenv in PowerShell? question, specifically this answer, stating that all you need to do to activate is to execute venv/Scripts/activate.ps1 Of course it is required to set an appropriate Execution Policy etc. But my question is: How to disable a virtualenv activated inside PowerShell? I tried: venv/Scripts/deactivate.ps1 and venv/Scripts/activate.ps1 deactivate but the first one fails because deactivate.ps1 does not exist and the second one does not change anything. | Just a moment after creating this question I've realized that the answer is much simpler than I expected. All I need is to type deactivate inside the virtualenv. | 15 | 39 |
69,644,987 | 2021-10-20 | https://stackoverflow.com/questions/69644987/why-i-do-get-an-error-when-installing-the-psutil-package-for-python | I installed pyhton on a windows 64bit machine (Version 3.10). Afterwards I like to install psutil with: python -m pip install psutil but I get following error message: Collecting psutil Using cached psutil-5.8.0.tar.gz (470 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Using legacy 'setup.py install' for psutil, since package 'wheel' is not installed. Installing collected packages: psutil Running setup.py install for psutil: started Running setup.py install for psutil: finished with status 'error' python : ERROR: Command errored out with exit status 1: At line:1 char:1 + python -m pip install psutil + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: ( ERROR: Comm... exit status 1::String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError command: 'C:\Users\FHBG1244\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\FHBG1244\\AppData\\Local\\Temp\\pip-install-9hd47gcx\\psutil_0500efeadf404b148d11c1599b34df08\\setup.py'"'"'; __file__= '"'"'C:\\Users\\FHBG1244\\AppData\\Local\\Temp\\pip-install-9hd47gcx\\psutil_0500efeadf404b148d11c1599b34df08\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\FHBG1244\AppData\Local\Temp\pip-record-2fhy6awl\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\FHBG1244\AppData\Local\Programs\Python\Python310\Include\psutil' cwd: C:\Users\FHBG1244\AppData\Local\Temp\pip-install-9hd47gcx\psutil_0500efeadf404b148d11c1599b34df08\ Complete output (38 lines): running install running build running build_py creating build creating build\lib.win-amd64-3.10 creating build\lib.win-amd64-3.10\psutil copying psutil\_common.py -> build\lib.win-amd64-3.10\psutil copying psutil\_compat.py -> build\lib.win-amd64-3.10\psutil copying psutil\_psaix.py -> build\lib.win-amd64-3.10\psutil copying psutil\_psbsd.py -> build\lib.win-amd64-3.10\psutil copying psutil\_pslinux.py -> build\lib.win-amd64-3.10\psutil copying psutil\_psosx.py -> build\lib.win-amd64-3.10\psutil copying psutil\_psposix.py -> build\lib.win-amd64-3.10\psutil copying psutil\_pssunos.py -> build\lib.win-amd64-3.10\psutil copying psutil\_pswindows.py -> build\lib.win-amd64-3.10\psutil copying psutil\__init__.py -> build\lib.win-amd64-3.10\psutil creating build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\runner.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_aix.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_bsd.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_connections.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_contracts.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_linux.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_memleaks.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_misc.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_osx.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_posix.py -> 
build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_process.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_sunos.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_system.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_testutils.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_unicode.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\test_windows.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\__init__.py -> build\lib.win-amd64-3.10\psutil\tests copying psutil\tests\__main__.py -> build\lib.win-amd64-3.10\psutil\tests running build_ext building 'psutil._psutil_windows' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\FHBG1244\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\FHBG1244\\AppData\\Local\\Temp\\pip-install-9hd47gcx\\psutil_0500efeadf404b148d11c1599b34df08\\setup.py'"'"'; __file__= '"'"'C:\\Users\\FHBG1244\\AppData\\Local\\Temp\\pip-install-9hd47gcx\\psutil_0500efeadf404b148d11c1599b34df08\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\FHBG1244\AppData\Local\Temp\pip-record-2fhy6awl\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\FHBG1244\AppData\Local\Programs\Python\Python310\Include\psutil' Check the logs for full command output. What is the problem here?? | Install missing package pip install wheel Install this tools https://visualstudio.microsoft.com/visual-cpp-build-tools/ mark this option during the installation => Restart Your Machine. Run this command in cmd pip install --upgrade setuptools Now You Can Run this command in cmd pip install psutil Finally It's up and running ! :-D ;-) Tested now, on my machine(win10, python 3.10.0, pip 21.3)! Good Luck ! Best regards :-D | 5 | 9 |
69,642,668 | 2021-10-20 | https://stackoverflow.com/questions/69642668/the-indices-of-the-two-geoseries-are-different-understanding-indices | I am working with GeoPandas and I have two GeoDataframes with the same CRS. One of them contains a geometry column with a polygon geometry, the other one a column with point geometry. I want to check which points are inside the polygon. Naively I tried shape.contains(points) This gave me > The indices of the two GeoSeries are different I do not understand this message. When I check the documentation, it says We can also check two GeoSeries against each other, row by row. The GeoSeries above have different indices. We can either align both GeoSeries based on index values and compare elements with the same index using align=True or ignore index and compare elements based on their matching order using align=False: What are these indices? Why are they checked against each other and not the geometry columns? Online I read that I have to convert my geometries into shapely geometries. But isn't the whole point of using GeoPandas that I can perform geographical operations on the data? I am confused about this. How to check if the geometries in shape contain any of the geometries in points? | What you are describing is effectively a spatial join. The example below constructs points from lon/lat of cities in the UK and then finds which administrative area polygon each city is in. This is an NxM comparison. import pandas as pd import numpy as np import geopandas as gpd import shapely.geometry import requests # source some points and polygons # fmt: off dfp = pd.read_html("https://www.latlong.net/category/cities-235-15.html")[0] dfp = gpd.GeoDataFrame(dfp, geometry=dfp.loc[:,["Longitude", "Latitude",]].apply(shapely.geometry.Point, axis=1)) res = requests.get("https://opendata.arcgis.com/datasets/69dc11c7386943b4ad8893c45648b1e1_0.geojson") df_poly = gpd.GeoDataFrame.from_features(res.json()) # fmt: on gpd.sjoin(dfp, df_poly) | 8 | 4 |
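As a smaller-scale complement to the spatial-join answer: when there is only a single polygon, the index-alignment message can be sidestepped by comparing against the scalar shapely geometry itself (variable names here are illustrative, not from the question):

```python
# `polys` holds the polygon GeoDataFrame, `points` the point GeoDataFrame.
polygon = polys.geometry.iloc[0]      # one shapely Polygon, no index attached

mask = points.within(polygon)         # element-wise test against a scalar geometry
points_inside = points[mask]

# With many polygons, a spatial join is the better fit, e.g.
# gpd.sjoin(points, polys, predicate="within")   # `op="within"` on older GeoPandas
```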
69,591,233 | 2021-10-15 | https://stackoverflow.com/questions/69591233/how-to-select-rows-between-a-certain-date-range-in-python-polars | If a DataFrame is constructed like the following using polars-python: import polars as pl from polars import col from datetime import datetime df = pl.DataFrame({ "dates": ["2016-07-02", "2016-08-10", "2016-08-31", "2016-09-10"], "values": [1, 2, 3, 4] }) How to select the rows between a certain date range, i.e. between between "2016-08-10" and "2016-08-31", so that the desired outcome is: ┌────────────┬────────┐ │ dates ┆ values │ │ --- ┆ --- │ │ date ┆ i64 │ ╞════════════╪════════╡ │ 2016-08-10 ┆ 2 │ │ 2016-08-31 ┆ 3 │ └────────────┴────────┘ | First you need transform the string values to date types then filter: # eager (df.with_columns(pl.col("dates").str.to_date()) .filter(col("dates").is_between(datetime(2016, 8, 9), datetime(2016, 9, 1))) ) # lazy (df.lazy() .with_columns(pl.col("dates").str.to_date()) .filter(col("dates").is_between(datetime(2016, 8, 9), datetime(2016, 9, 1))) .collect() ) both result in the desired output: ┌────────────┬────────┐ │ dates ┆ values │ │ --- ┆ --- │ │ date ┆ i64 │ ╞════════════╪════════╡ │ 2016-08-10 ┆ 2 │ │ 2016-08-31 ┆ 3 │ └────────────┴────────┘ | 8 | 9 |
69,575,496 | 2021-10-14 | https://stackoverflow.com/questions/69575496/how-to-use-group-by-and-apply-a-custom-function-with-polars | I am breaking my head trying to figure out how to use group_by and apply a custom function using Polars. Coming from Pandas, I was using: import pandas as pd from scipy.stats import spearmanr def get_score(df): return spearmanr(df["prediction"], df["target"]).correlation df = pd.DataFrame({ "era": [1, 1, 1, 2, 2, 2, 5], "prediction": [2, 4, 5, 190, 1, 4, 1], "target": [1, 3, 2, 1, 43, 3, 1] }) correlations = df.groupby("era").apply(get_score) Polars has map_groups() to apply a custom function over groups, which I tried: correlations = df.group_by("era").map_groups(get_score) But this fails with the error message: 'Could not get DataFrame attribute '_df'. Make sure that you return a DataFrame object.: PyErr { type: <class 'AttributeError'>, value: AttributeError("'float' object has no attribute '_df'"), traceback: None } Any ideas? | Polars has the pl.corr() function which supports method="spearman" If you want to use a custom function you could do it like this: Custom function on multiple columns/expressions import polars as pl from typing import List from scipy import stats df = pl.DataFrame({ "g": [1, 1, 1, 2, 2, 2, 5], "a": [2, 4, 5, 190, 1, 4, 1], "b": [1, 3, 2, 1, 43, 3, 1] }) def get_score(args: List[pl.Series]) -> pl.Series: return pl.Series([stats.spearmanr(args[0], args[1]).correlation], dtype=pl.Float64) (df.group_by("g", maintain_order=True) .agg( pl.map_groups( exprs=["a", "b"], function=get_score).alias("corr") )) Polars provided function (df.group_by("g", maintain_order=True) .agg( pl.corr("a", "b", method="spearman").alias("corr") )) Both output: shape: (3, 2) ┌─────┬──────┐ │ g ┆ corr │ │ --- ┆ --- │ │ i64 ┆ f64 │ ╞═════╪══════╡ │ 1 ┆ 0.5 │ │ 2 ┆ -1.0 │ │ 5 ┆ NaN │ └─────┴──────┘ Custom function on a a single column/expression We can also apply custom functions on single expressions, via .map_elements Below is an example of how we can square a column with a custom function and with normal polars expressions. The expression syntax should always be preferred, as its a lot faster. (df.group_by("g") .agg( pl.col("a").map_elements(lambda group: group**2).alias("squared1"), (pl.col("a")**2).alias("squared2") )) | 22 | 34 |
69,627,609 | 2021-10-19 | https://stackoverflow.com/questions/69627609/point-accepts-0-positional-sub-patterns-2-given | I'm trying to run an example from the docs, but get the error: Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: Point() accepts 0 positional sub-patterns (2 given) Can someone explain what I doing wrong here? class Point(): def __init__(self, x, y): self.x = x self.y = y x, y = 5 ,5 point = Point(x, y) match point: case Point(x, y) if x == y: print(f"Y=X at {x}") case Point(x, y): print(f"Not on the diagonal") | You need to define __match_args__ in your class. As pointed at in this section of the "What's new in 3.10" page: You can use positional parameters with some builtin classes that provide an ordering for their attributes (e.g. dataclasses). You can also define a specific position for attributes in patterns by setting the __match_args__ special attribute in your classes. If it’s set to (“x”, “y”), the following patterns are all equivalent (and all bind the y attribute to the var variable): Point(1, var) Point(1, y=var) Point(x=1, y=var) Point(y=var, x=1) So your class will need to look as follows: class Point: __match_args__ = ("x", "y") def __init__(self, x, y): self.x = x self.y = y Alternatively, you could change your match structure to the following: match point: case Point(x=x, y=y) if x == y: print(f"Y=X at {x}") case Point(x=x, y=y): print(f"Not on the diagonal") (Note that you don't need a both: a class with __match_args__ defined, does not need to have its arguments specified in the match-case statements.) For full details, I'll refer readers to PEP 634, which is the specification for structural pattern matching. The details on this particular point are in the section Class Patterns. For a better introduction or tutorial, don't use the "What's New" documentation, as it tends to provide an overview, but may skip over some things. Instead, use PEP 636 -- Structural Pattern Matching: Tutorial, or the language reference on match statements for more details. It is mentioned in the quoted text that a dataclass will already have an ordering, and in your example, a dataclass also works fine: from dataclasses import dataclass @dataclass class Point: x: int y: int x, y = 5, 5 point = Point(x, y) match point: case Point(x, y) if x == y: print(f"Y=X at {x}") case Point(x, y): print(f"Not on the diagonal") | 8 | 15 |
69,610,231 | 2021-10-18 | https://stackoverflow.com/questions/69610231/why-cant-pip-find-winrt | I just bought a new laptop and I'm trying to set it up with python. I am using python 3.10.0, windows 10, pip v21.3. For the most part, pip seems to be working correctly, I've already used it to install multiple packages such as pygame. When I try to install winrt, however, I get this error C:\Users\matth>pip install winrt ERROR: Could not find a version that satisfies the requirement winrt (from versions: none) ERROR: No matching distribution found for winrt My old laptop is still able to uninstall and reinstall winrt using pip without a problem, and again pip works on my new laptop for other packages, just not winrt. Any idea what the problem is and how I fix it? | Microsoft has not been maintaining the winrt package. There is no binary wheel for Python 3.10 as seen on PyPI. There is also a request for this on GitHub. I have started a community-maintained fork of the PyWinRT project. You can install and use winsdk instead. It supports Python 3.10. Just replace winrt with winsdk in your imports. EDIT: Since September 2023, the monolithic winsdk package was getting too big, so it has been replaced by individual winrt-* packages for each WinRT namespace. winrt-runtime is the base shared runtime support package and each namespace has a package similar to winrt-Windows.Foundation. Documentation can be found here. A list of improvements and bug fixes compared to the winrt package can be found in the changelog. | 13 | 17 |
69,617,489 | 2021-10-18 | https://stackoverflow.com/questions/69617489/can-i-get-incoming-extra-fields-from-pydantic | I have defined a pydantic Schema with extra = Extra.allow in Pydantic Config. Is it possible to get a list or set of extra fields passed to the Schema separately. For ex: from pydantic import BaseModel as pydanticBaseModel class BaseModel(pydanticBaseModel): name: str class Config: allow_population_by_field_name = True extra = Extra.allow I pass below JSON: { "name": "Name", "address": "bla bla", "post": "post" } I need a function from pydantic, if available which will return all extra fields passed. like: {"address", "post"} | NOTE: This answer applies to version Pydantic 1.x. See robcxyz's answer for a 2.x solution. From the pydantic docs: extra whether to ignore, allow, or forbid extra attributes during model initialization. Accepts the string values of 'ignore', 'allow', or 'forbid', or values of the Extra enum (default: Extra.ignore). 'forbid' will cause validation to fail if extra attributes are included, 'ignore' will silently ignore any extra attributes, and 'allow' will assign the attributes to the model. This can either be included in the model Config class, or as arguments when inheriting BaseModel. from pydantic import BaseModel, Extra class BaseModel(BaseModel, extra=Extra.allow): name: str model = Model.parse_obj( {"name": "Name", "address": "bla bla", "post": "post"} ) print(model) # name='Name' post='post' address='bla bla' To get the extra values you could do something simple, like comparing the set of __fields__ defined on the class to the values in __dict__ on an instance: class Model(BaseModel, extra=Extra.allow): python_name: str = Field(alias="name") @property def extra_fields(self) -> set[str]: return set(self.__dict__) - set(self.__fields__) >>> Model.parse_obj({"name": "Name", "address": "bla bla", "post": "post"}).extra_fields {'address', 'post'} >>> Model.parse_obj({"name": "Name", "foobar": "fizz"}).extra_fields {'foobar'} >>> Model.parse_obj({"name": "Name"}).extra_fields set() | 35 | 37 |
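The answer above targets Pydantic 1.x; a rough 2.x equivalent (attribute and config names taken from the 2.x docs, so double-check against your version) exposes the extras directly:

```python
from pydantic import BaseModel, ConfigDict

class Model(BaseModel):
    model_config = ConfigDict(extra="allow")
    name: str

m = Model(**{"name": "Name", "address": "bla bla", "post": "post"})
print(m.model_extra)        # {'address': 'bla bla', 'post': 'post'}
print(set(m.model_extra))   # {'address', 'post'}
```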
69,549,954 | 2021-10-13 | https://stackoverflow.com/questions/69549954/python-plotly-radar-chart-with-style | I'm trying to the image shown below and I thought python would be a good idea to do this but I'm not sure. I want to randomize lots of football players' stats, make a radar chart for each and save the charts as images. But the plotly radar charts are not so stylish and I really want to make something stylish. How to turn the below demo code into the reference image and is it possible? Here's a demo code: import plotly.graph_objects as go categories = ['Defending','Speed','Attacking', 'Technical', 'Team play'] fig = go.Figure() fig.add_trace(go.Scatterpolar( r=[1, 5, 2, 2, 3], theta=categories, fill='toself', name='Alice' )) fig.add_trace(go.Scatterpolar( r=[4, 3, 2.5, 1, 2], theta=categories, fill='toself', name='Bob' )) fig.update_layout( polar=dict( radialaxis=dict( visible=True, range=[0, 5] )), showlegend=False ) fig.show() | From the documentation for polar layout, it seems that Plotly does not offer much options when it comes the grid shape itself. However, Plotly allows you to create your own templates/themes or examine built-in themes. As a starting point, you should probably analyze the plotly_dark theme as it has some features similar to your picture. simple example with built-in template dataset.csv categories,player,points Defending,alice,1 Speed,alice,5 Attacking,alice,2 Technical,alice,2 Team play,alice,3 Defending,bob,4 Speed,bob,3 Attacking,bob,2.5 Technical,bob,1 Team play,bob,2 code import plotly.express as px import pandas as pd df = pd.read_csv("dataset.csv") fig = px.line_polar(df, r="points", theta="categories", color="player", line_close=True, color_discrete_sequence=["#00eb93", "#4ed2ff"], template="plotly_dark") fig.update_polars(angularaxis_showgrid=False, radialaxis_gridwidth=0, gridshape='linear', bgcolor="#494b5a", radialaxis_showticklabels=False ) fig.update_layout(paper_bgcolor="#2c2f36") fig.show() With the above code I don't think it is possible to modify the color of each nested shape. To be able to do so, you will probably have to create your own template and color each nested shape separately. creating grid shape You might have to try something similar the code below to create your desired grid shape. import plotly.graph_objects as go bgcolors = ["#353841", "#3f414d", "#494b5a", "#494b5a", "#58596a"] fig = go.Figure(go.Scatterpolar( r=[42]*8, theta=[0, 45, 90, 135, 180, 225, 270, 315], marker_line_width=2, opacity=0.8, marker=dict(color=bgcolors[0]) )) for i in range(1, 5): fig.add_trace(go.Scatterpolar( r=[44-6*i]*8, theta=[0, 45, 90, 135, 180, 225, 270, 315], marker_line_width=2, marker=dict(color=bgcolors[i]) )) fig.update_polars(angularaxis_dtick='') fig.update_traces(fill='toself') fig.update_polars(angularaxis_showgrid=False, radialaxis_showgrid=False, radialaxis_gridwidth=0, gridshape='linear', radialaxis_showticklabels=False, angularaxis_layer='above traces' ) fig.show() The colors are off but the general shape is good. | 6 | 4 |
69,572,265 | 2021-10-14 | https://stackoverflow.com/questions/69572265/ingest-several-types-of-csvs-with-databricks-auto-loader | I'm trying to load several types of csv files using Autoloader; it currently merges all the csv files that I drop into one big parquet table, but what I want is to create a parquet table for each type of schema/csv file. Current code does: #Streaming files/ waiting a file to be dropped spark.readStream.format("cloudFiles") \ .option("cloudFiles.format", "csv") \ .option("delimiter", "~|~") \ .option("cloudFiles.inferColumnTypes","true") \ .option("cloudFiles.schemaLocation", pathCheckpoint) \ .load(sourcePath) \ .writeStream \ .format("delta") \ .option("mergeSchema", "true") \ .option("checkpointLocation", pathCheckpoint) \ .start(pathResult) What I want | You can create different autoloader streams for each file type from the same source directory and filter the filenames to consume by using the pathGlobFilter option on Autoloader (databricks documentation). By doing that you will end up having different checkpoints for each table and you can have the different schemas working. This solution is similar to the one pointed out here: How to filter files in Databricks Autoloader stream. | 5 | 2 |
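A sketch of what the answer describes, reusing the variables from the question's snippet (the glob patterns and per-type checkpoint/result subpaths are assumptions for illustration):

```python
def start_stream(glob_pattern, checkpoint_path, result_path):
    # One Auto Loader stream per CSV "type", filtered with pathGlobFilter.
    return (spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("delimiter", "~|~")
        .option("cloudFiles.inferColumnTypes", "true")
        .option("cloudFiles.schemaLocation", checkpoint_path)
        .option("pathGlobFilter", glob_pattern)
        .load(sourcePath)
        .writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint_path)
        .start(result_path))

start_stream("sales_*.csv", pathCheckpoint + "/sales", pathResult + "/sales")
start_stream("orders_*.csv", pathCheckpoint + "/orders", pathResult + "/orders")
```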
69,598,542 | 2021-10-16 | https://stackoverflow.com/questions/69598542/failed-to-query-the-value-of-task-appcompiledebugjavawithjavac-property-opt | Can anyone help me to fix this error? FAILURE: Build failed with an exception. What went wrong: Execution failed for task ':app:compileDebugJavaWithJavac'. Failed to query the value of task ':app:compileDebugJavaWithJavac' property 'options.generatedSourceOutputDirectory'. Querying the mapped value of map(java.io.File property(org.gradle.api.file.Directory, fixed(class org.gradle.api.internal.file.DefaultFilePropertyFactory$FixedDirectory, C:\Users\any user\Documents\Desktop\Chaquopy 9.1\app\build\generated\ap_generated_sources\debug\out)) org.gradle.api.internal.file.DefaultFilePropertyFactory$ToFileTransformer@7424ad41) before task ':app:compileDebugJavaWithJavac' has completed is not supported Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. Get more help at https://help.gradle.org Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0. Use '--warning-mode all' to show the individual deprecation warnings. See https://docs.gradle.org/7.0.2/userguide/command_line_interface.html#sec:command_line_warnings BUILD FAILED in 2s 12 actionable tasks: 12 up-to-date | It's an AGP related issue: https://youtrack.jetbrains.com/issue/KTIJ-19252 Updating AGP up to 4.2.2 or higher will probably solve this issue. | 8 | 5 |
69,629,495 | 2021-10-19 | https://stackoverflow.com/questions/69629495/export-pyvis-graph-as-vector-or-png-image-is-there-a-way | I'm looking for a way to export a huge Graph generated with pyvis in to a vector graphic .svg or at least .png format. Is there a way to do it? So far I've only found the option to save / export as .html file. Thanks in advance. | For me this worked: save the graph to HTML file open the HTML in a new tab right-click -> save image as ... | 8 | 2 |
69,570,717 | 2021-10-14 | https://stackoverflow.com/questions/69570717/dask-dataframe-can-set-index-put-a-single-index-into-multiple-partitions | Empirically it seems that whenever you set_index on a Dask dataframe, Dask will always put rows with equal indexes into a single partition, even if it results in wildly imbalanced partitions. Here is a demonstration: import pandas as pd import dask.dataframe as dd users = [1]*1000 + [2]*1000 + [3]*1000 df = pd.DataFrame({'user': users}) ddf = dd.from_pandas(df, npartitions=1000) ddf = ddf.set_index('user') counts = ddf.map_partitions(lambda x: len(x)).compute() counts.loc[counts > 0] # 500 1000 # 999 2000 # dtype: int64 However, I found no guarantee of this behaviour anywhere. I have tried to sift through the code myself but gave up. I believe one of these inter-related functions probably holds the answer: set_index set_partitions rearrange_by_column rearrange_by_column_tasks SimpleShuffleLayer When you set_index, is it the case that a single index can never be in two different partitions? If not, then under what conditions does this property hold? Bounty: I will award the bounty to an answer that draws from a reputable source. For example, referring to the implementation to show that this property has to hold. | is it the case that a single index can never be in two different partitions? No, it's certainly allowed. Dask will even intend for this to happen. However, because of a bug in set_index, all the data will still end up in one partition. An extreme example (every row is the same value except one): In [1]: import dask.dataframe as dd In [2]: import pandas as pd In [3]: df = pd.DataFrame({"A": [0] + [1] * 20}) In [4]: ddf = dd.from_pandas(df, npartitions=10) In [5]: s = ddf.set_index("A") In [6]: s.divisions Out[6]: (0, 0, 0, 0, 0, 0, 0, 1) As you can see, Dask intends for the 0s to be split up between multiple partitions. Yet when the shuffle actually happens, all the 0s still end up in one partition: In [7]: import dask In [8]: dask.compute(s.to_delayed()) # easy way to see the partitions separately Out[8]: ([Empty DataFrame Columns: [] Index: [], Empty DataFrame Columns: [] Index: [], Empty DataFrame Columns: [] Index: [], Empty DataFrame Columns: [] Index: [], Empty DataFrame Columns: [] Index: [], Empty DataFrame Columns: [] Index: [], Empty DataFrame Columns: [] Index: [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],) This is because the code deciding which output partition a row belongs doesn't consider duplicates in divisions. Treating divisions as a Series, it uses searchsorted with side="right", hence why all the data always ends up in the last partition. I'll update this answer when the issue is fixed. | 6 | 2 |
69,582,550 | 2021-10-15 | https://stackoverflow.com/questions/69582550/using-requests-module-for-post-i-want-to-pick-up-entire-json-body-from-whole | I've been trying to write a little python script to automate some API testing. I need it to pick up the whole JSON body from a CSV or other format file, but not just a single body per file, rather iterate over all the "bodies" in it. The way I concocted it is, each cell, or value, is an entire body. This comes from how I'm managing various tests in Google Sheets, with the whole JSON bodies in their own cells, and can then be easily exported as CSV files. The issue is that I keep hitting "wrong format" type errors. I think the problem is that, as it's picking it up as a CSV "value", it inputs the data weirdly and that's why it won't work. Sample "problematic" input, i.e. the value that is picked up from the CSV file, as caught through a breakpoint: '{"post":2027,"name":"Test User","email":"[email protected]","body":"lorem ipsum4"}' I've already tried a lot of things. This is where I'm at right now. Further below is some sample data and more explanations. Code: from os import read import requests import csv import json filename = 'file.csv' url = "https://gorest.co.in/public/v1/posts/[id]/comments" headers = { "Content-Type": "application/json", "Authorization": "Bearer tricked ya" } with open(filename) as cF: reader = csv.reader(cF) for idx,row in enumerate(reader): for col in row: print("Body: ", col) r = requests.request("POST", url, headers = headers, json = col) print("Status: ", r.status_code) print("Status: ", r.reason) print("Text response: ", r.text) print("\nTest number: ",idx,"\n") Sample data. In here, each row is a row in a csv file: {"post":2027,"name":"Test User","email":"[email protected]","body":"lorem ipsum1"} {"post":2027,"name":"Test User","email":"[email protected]","body":"lorem ipsum2"} {"post":2027,"name":"Test User","email":"[email protected]","body":"lorem ipsum3"} {"post":2027,"name":"Test User","email":"[email protected]","body":"lorem ipsum4"} Sample output: ("Text response" slightly post-formatted for readability) Body: {"post":2027,"name":"Test User","email":"[email protected]","body":"lorem ipsum4"} Status: 422 Status: Unprocessable Entity Text response: { "meta": null, "data": [{ "field": "name", "message": "can't be blank" }, { "field": "email", "message": "can't be blank" }, { "field": "body", "message": "can't be blank" }] } Test number: 4 The "odd" thing I've noticed, is that I can sometimes (in previous versions) input the body that is printed out (such as in the sample output) back into the JSON, when I'm using breakpoints, and that will work perfectly. So I tried using something to "capture" that "working printed body", but that wasn't really doable, or I didn't do it right. | csv.reader returns rows of strings, so the strings each need to be converted to a Python object for the json keyword argument in requests.request. We can use json.loads to deserialize a string. req_obj = json.loads(col) r = requests.request("POST", url, headers = headers, json = req_obj) | 5 | 1 |
69,581,672 | 2021-10-15 | https://stackoverflow.com/questions/69581672/how-to-pass-dictionary-elements-from-hydra-config-file | I am trying to instantiate objects with hydra, I have a class torchio.transforms.RemapLabels that I am using in my config file: _target_: torchio.transforms.RemapLabels The problem is that torchio.transforms.RemapLabels takes dictionary elements as input, how do I pass those from my hydra config file? (config.yaml)? I am getting error when instantiating: TypeError: Error instantiating 'torchio.transforms.preprocessing.label.remap_labels.RemapLabels' : __init__() missing 1 required positional argument: 'remapping' example usage of remap label: transform = torchio.RemapLabels({2:1, 4:3, 6:5, 8:7}) | There are two options: you can pass the inputs as positional arguments or as named arguments. Using named arguments (a.k.a. keyword arguments) in your yaml file: _target_: torchio.transforms.RemapLabels remapping: 2: 1 4: 3 6: 5 8: 7 masking_method: "Anterior" or, using json-style maps: _target_: torchio.transforms.RemapLabels remapping: {2: 1, 4: 3, 6: 5, 8: 7} masking_method: "Anterior" Using positional arguments in your yaml file: _target_: torchio.transforms.RemapLabels _args_: - 2: 1 4: 3 6: 5 8: 7 - "Anterior" Or, equivalently: _target_: torchio.transforms.RemapLabels _args_: - {2: 1, 4: 3, 6: 5, 8: 7} - "Anterior" For more information, please refer to the docs on Instantiating objects with Hydra. | 5 | 5 |
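To round out the YAML examples, the configured object is typically built from Python with hydra.utils.instantiate (the config key transform and the conf/config.yaml layout are assumptions for illustration):

```python
import hydra
from hydra.utils import instantiate

@hydra.main(config_path="conf", config_name="config")
def main(cfg):
    # Builds torchio.transforms.RemapLabels(...) from the _target_ block.
    transform = instantiate(cfg.transform)
    print(transform)

if __name__ == "__main__":
    main()
```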
69,593,336 | 2021-10-16 | https://stackoverflow.com/questions/69593336/why-does-pandas-allow-non-unique-index | Pandas allows both unique and non-unique indices. Some of the operations only allow unique indices. In what situation would it make sense to use non-unique indices? I think enforcing uniqueness of the indices can help discover data integrity problems upfront. | Disclaimer: a unique RangeIndex is always going to be the most performant option. This question seems to favour using a unique index and is specifically looking for cases where allowing non-unique indexes is desired. For this reason, from this point forward unique indexes are not going to be discussed, nor is performance, only the useful benefits of non-unique indexes. The general places non-unique indexes are preferable whenever we need to keep track of where data originally came from. There are many cases where, in the intermediary phases, we need to know what row the data was at. This lets us do computations with respect to information that would either be lost if the index was unique, or would require adding an additional column to track it. Below are just a few examples: Interleaving multiple DataFrames: Consider the following 2 DataFrames, let's assume that each DataFrame represents a day's worth of data. We would like to review this daily data by Sample number rather than by day: df1 = pd.DataFrame([['10:05', 'Day 1', 'Sample 1'], ['11:14', 'Day 1', 'Sample 2']]) df2 = pd.DataFrame([['10:03', 'Day 2', 'Sample 1'], ['11:12', 'Day 1', 'Sample 2']]) # df1 0 1 2 0 10:05 Day 1 Sample 1 1 11:14 Day 1 Sample 2 #df2 0 1 2 0 10:03 Day 2 Sample 1 1 11:12 Day 1 Sample 2 Because pandas allows non-unique indexes we can concat then sort_index: pd.concat([df1, df2]).sort_index() 0 1 2 0 10:05 Day 1 Sample 1 0 10:03 Day 2 Sample 1 1 11:14 Day 1 Sample 2 1 11:12 Day 1 Sample 2 Notice this is the fastest way to interleave two DataFrames by row index. Also notice that it would not be feasible to instead sort by columns 1 and 2 as the words Day 1 Sample 1 etc. will be lexicographically sorted which will run into issues for values like Day 10, or would require a bunch of additional computation to handle the numeric values correctly. We can add ignore_index=True to sort_index, but this only hides away overwriting with a new range index and still relies on the fact that concat returns a DataFrame with non-unique indexes. pd.concat([df1, df2]).sort_index(ignore_index=True) 0 1 2 0 10:05 Day 1 Sample 1 1 10:03 Day 2 Sample 1 2 11:14 Day 1 Sample 2 3 11:12 Day 1 Sample 2 Explode and Reduce explode, particularly on Series, is a common operation and not losing the index (allowing duplicates) makes it so much easier to do expand and reduce type operations. The goal is to remove any duplicate values from within a comma separated string within a column: df = pd.DataFrame({ 'corresponding_id': [10, 20, 30], 'col': ['a,b,c,a', 'b,c,c,b', 'a,a,a,a'] }) df: corresponding_id col 0 10 a,b,c,a 1 20 b,c,c,b 2 30 a,a,a,a A common solution may look something like: df['col'] = ( df['col'].str.split(',').explode() .groupby(level=0).apply(lambda s: ','.join(np.unique(s))) ) df: corresponding_id col 0 10 a,b,c 1 20 b,c 2 30 a After exploding the result looks like: df['col'].str.split(',').explode() 0 a 0 b 0 c 0 a 1 b 1 c 1 c 1 b 2 a 2 a 2 a 2 a Name: col, dtype: object Because there are duplicate indexes we can groupby relative to level=0 (the index) this is only possible because the index was preserved. 
If the index did not allow duplicates we would have: 0 a 1 b 2 c 3 a 4 b 5 c 6 c 7 b 8 a 9 a 10 a 11 a Name: col, dtype: object There would be no way to easily determine from which rows the values came from, making it even more difficult to put them back in place. Scaling Up a DataFrame The ability to select from a DataFrame using duplicate labels is extremely helpful in scaling up a DataFrame. df = pd.DataFrame({ 'Count': [2, 4], 'Value': [1, 6] }) On occasion we need to scale up a DataFrame, in these cases we use loc to select from the DataFrame: df.loc[[0, 0, 1, 1, 1, 1], :] Notice the result is: Count Value 0 2 1 0 2 1 1 4 6 1 4 6 1 4 6 1 4 6 We were able to select the same row multiple times from the DataFrame based on duplicate labels (and the resulting index is non-unique). This is so common that there is a method Index.repeat that does this dynamically based on a column: df.loc[df.index.repeat(df['Count']), :] Count Value 0 2 1 0 2 1 1 4 6 1 4 6 1 4 6 1 4 6 | 7 | 4 |
69,571,374 | 2021-10-14 | https://stackoverflow.com/questions/69571374/audio-recording-in-python-with-pyaudio-error-pamaccore-auhal-msg-audi | I am using pyaudio to record sounds on my Mac BigSur 11.6 (20G165). Specifically, I'm redirecting sound from an application to the input using BlackHole, which works fine. It usually works fine but, sometimes, I get this error in the Terminal: ||PaMacCore (AUHAL)|| Error on line 2500: err='-10863', msg=Audio Unit: cannot do in current context Any idea why or how I could prevent it from happening (like, waiting until PaMacCore is ready to record again or something)? I already tried reinstalling but it doesn't help brew install portaudio or brew install portaudio --HEAD Here's my code: import pyaudio import wave class Recorder(object): def __init__(self, fname, mode, channels, rate, frames_per_buffer): self.fname = fname self.mode = mode self.channels = channels self.rate = rate self.frames_per_buffer = frames_per_buffer self._pa = pyaudio.PyAudio() self.wavefile = self._prepare_file(self.fname, self.mode) self._stream = None def __enter__(self): return self def __exit__(self, exception, value, traceback): self.close() def record(self, duration): # Use a stream with no callback function in blocking mode self._stream = self._pa.open(format=pyaudio.paInt16, channels=self.channels, rate=self.rate, input=True, frames_per_buffer=self.frames_per_buffer) for _ in range(int(self.rate / self.frames_per_buffer * duration)): audio = self._stream.read(self.frames_per_buffer) self.wavefile.writeframes(audio) return None def start_recording(self): # Use a stream with a callback in non-blocking mode self._stream = self._pa.open(format=pyaudio.paInt16, channels=self.channels, rate=self.rate, input=True, frames_per_buffer=self.frames_per_buffer, stream_callback=self.get_callback()) self._stream.start_stream() return self def stop_recording(self): self._stream.stop_stream() return self def get_callback(self): def callback(in_data, frame_count, time_info, status): self.wavefile.writeframes(in_data) return in_data, pyaudio.paContinue return callback def close(self): self._stream.close() self._pa.terminate() self.wavefile.close() def _prepare_file(self, fname, mode='wb'): wavefile = wave.open(fname, mode) wavefile.setnchannels(self.channels) wavefile.setsampwidth(self._pa.get_sample_size(pyaudio.paInt16)) wavefile.setframerate(self.rate) return wavefile Usage example: recfile = Recorder(filename, 'wb', 2, 44100, 1024) recfile.start_recording() ... recfile.stop_recording() | Apparently the problem were mismatched bitrates in BlackHole's aggregated output device. I was aggregating Blackhole's output (44,1kHz) and the Mac Speakers (48kHz). This did not cause any consistent bad behaviour but sometimes led to these errors. | 9 | 6 |
69,613,336 | 2021-10-18 | https://stackoverflow.com/questions/69613336/how-to-catch-python-warnings-with-sentry | With sentry_sdk, the sentry documentation explain how to automatically catch exceptions or logging messages. However, how can I catch a python warning, like a DeprecationWarning that would be raised with warnings.warn(DeprecationWarning, "warning message") | There's no certain API in sentry for sending warnings, However, you need to ensure you're logging these with the general logging infrastructure that you are using. For example, if you are using Django, you have to change the logging level to be Warning like below in the settings.py file LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'verbose': { 'format': '%(asctime)s %(levelname)s [%(name)s:%(lineno)s] %(module)s %(process)d %(thread)d %(message)s' } }, 'handlers': { 'console': { 'level': 'WARNING', 'class': 'logging.StreamHandler' }, }, 'loggers': { "": { "level": "WARNING", 'handlers': ['console'], "propagate": True } } } and no change in sentry config import sentry_sdk from sentry_sdk.integrations.django import DjangoIntegration sentry_config = { 'dsn': os.getenv("SENTRY_DSN", "YOUR CDN"), 'integrations': [DjangoIntegration()], # Set traces_sample_rate to 1.0 to capture 100% # of transactions for performance monitoring. # We recommend adjusting this value in production. 'traces_sample_rate': 1.0, # If you wish to associate users to errors (assuming you are using # django.contrib.auth) you may enable sending PII data. 'send_default_pii': True } sentry_sdk.init(**sentry_config) In case you don't have Logging infrastructure, you can implement your own, check this Question, it has a lot of examples of how you would create a custom logger. It's all about changing your level to be WARNING and creating a console handler(StreamHandler), then Sentry will take care of the rest Edit: I meant to capture logging.warning(), but for warnings.warn() you have to log them, Python provides a built-in integration between the logging module and the warnings module to let you do this; just call logging.captureWarnings(True) at the start of your script or your custom logger and all warnings emitted by the warnings module will automatically be logged at level WARNING. | 7 | 2 |
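A minimal framework-free sketch of the same idea: route warnings.warn() through the logging module so the Sentry logging integration reports it (the WARNING thresholds shown are assumptions; tune them to taste):

```python
import logging
import warnings

import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration

sentry_sdk.init(
    dsn="YOUR_DSN",
    integrations=[
        LoggingIntegration(
            level=logging.WARNING,        # breadcrumbs from WARNING and above
            event_level=logging.WARNING,  # send WARNING records as Sentry events
        )
    ],
)

logging.basicConfig(level=logging.WARNING)
logging.captureWarnings(True)  # warnings.warn() is forwarded to the "py.warnings" logger

warnings.warn("warning message", DeprecationWarning)
```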
69,606,274 | 2021-10-17 | https://stackoverflow.com/questions/69606274/pytorch-equivalent-of-tensorflow-keras-stringlookup | I'm working with pytorch now, and I'm missing a layer: tf.keras.layers.StringLookup that helped with the processing of ids. Is there any workaround to do something similar with pytorch? An example of the functionality I'm looking for: vocab = ["a", "b", "c", "d"] data = tf.constant([["a", "c", "d"], ["d", "a", "b"]]) layer = tf.keras.layers.StringLookup(vocabulary=vocab) layer(data) Outputs: <tf.Tensor: shape=(2, 3), dtype=int64, numpy= array([[1, 3, 4], [4, 1, 2]])> | Package torchnlp, pip install pytorch-nlp from torchnlp.encoders import LabelEncoder data = ["a", "c", "d", "e", "d"] encoder = LabelEncoder(data, reserved_labels=['unknown'], unknown_index=0) enl = encoder.batch_encode(data) print(enl) tensor([1, 2, 3, 4, 3]) | 4 | 7 |
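If pulling in torchnlp is undesirable, the same mapping can be reproduced with a plain dict plus torch, mirroring the Keras output from the question (index 0 reserved for out-of-vocabulary tokens, as in StringLookup):

```python
import torch

vocab = ["a", "b", "c", "d"]
lookup = {token: idx + 1 for idx, token in enumerate(vocab)}  # {"a": 1, "b": 2, ...}

data = [["a", "c", "d"], ["d", "a", "b"]]
encoded = torch.tensor([[lookup.get(tok, 0) for tok in row] for row in data])
print(encoded)
# tensor([[1, 3, 4],
#         [4, 1, 2]])
```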
69,609,618 | 2021-10-18 | https://stackoverflow.com/questions/69609618/github-action-i-wrote-doesnt-have-access-to-repos-files-that-is-calling-the-ac | A sample repo with the directory structure of what I'm working on is on GitHub here. To run the GitHub Action, you just need to go to the Action tab of the repo and run the Action manually. I have a custom GitHub Action I've written as well with python as the base image in the Docker container but want the python version to be an input for the GitHub Action. In order to do so, I am creating a second intermediate Docker container to run with the python version input argument. The problem I'm running into is I don't have access to the original repo's files that is calling the GitHub Action. For example, say the repo is called python-sample-project and has folder structure: python-sample-project │ main.py │ file1.py │ └───folder1 │ │ file2.py I see main.py, file1.py, and folder1/file2.py in entrypoint.sh. However, in docker-action/entrypoint.sh I only see the linux folder structure and the entrypoint.sh file copied over in docker-action/Dockerfile. In the Alpine example I'm using, the action entrypoint.sh script looks like this: #!/bin/sh -l ALPINE_VERSION=$1 cd /docker-action docker build -t docker-action --build-arg alpine_version="$ALPINE_VERSION" . && docker run docker-action In docker-action/ I have a Dockerfile and entrypoint.sh script that should run for the inner container with the dynamic version of Alpine (or Python) The docker-action/Dockerfile is as follows: # Container image that runs your code ARG alpine_version FROM alpine:${alpine_version} # Copies your code file from your action repository to the filesystem path `/` of the container COPY entrypoint.sh /entrypoint.sh RUN ["chmod", "+x", "/entrypoint.sh"] # Code file to execute when the docker container starts up (`entrypoint.sh`) ENTRYPOINT ["/entrypoint.sh"] In the docker-action/entrypoint I run ls but I do not see the repository files. Is it possible to access the main.py, file1.py, and folder1/file2.py in entrypoint.sh in the docker-action/entrypoint.sh? | There's generally two ways to get files from your repository available to a docker container you build and run. You either (1) add the files to the image when you build it or (2) mount the files into the container when you run it. There are some other ways, like specifying volumes, but that's probably out of scope for this case. The Dockerfile docker-action/Dockerfile does not copy any files except for the entrypoint.sh script. Your entrypoint.sh also does not provide any mount points when running the container. Hence, the outcome you observe is the expected outcome based on these facts. In order to resolve this, you must either (1) add COPY/ADD statements to your Dockerfile to copy files into the image (and set appropriate build context) OR (2) mount the files into the container when it runs by adding -v /source-path:/container-path to the docker run command in your entrypoint.sh. See references: COPY reference Docker run reference Though, this approach of building another container just to get a user-provided python version is a highly questionable practice for GitHub Actions and should probably be avoided. Consider leaning on the setup-python action instead. 
The docker-in-docker problem Nevertheless, if you continue this route and want to go about mounting the directory, you'll have to keep in mind that, when invoking docker from within a docker action on GitHub, the filesystem in the mount specification refers to the filesystem of the docker host, not the filesystem of the container. It works on my machine?! Counter to what you might experience running docker on a local system for example, this does not work in GitHub -- the working directory is not mounted: docker run -v $(pwd):/opt/workspace \ --workdir /opt/workspace \ --entrypoint /bin/ls \ my-container "-R" This doesn't work either: docker run -v $GITHUB_WORKSPACE:$GITHUB_WORKSPACE \ --workdir $GITHUB_WORKSPACE \ --entrypoint /bin/ls \ my-container "-R" This kind of thing would work perfectly fine if you tried it on a system running docker locally. What gives? Dealing with the devil (daemon) In Actions, the starting working directory where files are checked out into $GITHUB_WORKSPACE. In docker actions, that's /github/workspace. The workspace files populate into the workspace when your action runs by the Actions runner mounting the workspace from the host where the docker daemon is running. You can see that in the command run when your action starts: /usr/bin/docker run --name f884202608aa2bfab75b6b7e1f87b3cd153444_f687df --label f88420 --workdir /github/workspace --rm -e INPUT_ALPINE-VERSION -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/my-repo/my-repo":"/github/workspace" f88420:2608aa2bfab75b6b7e1f87b3cd153444 "3.9.5" The important bits are this: -v "/home/runner/work/my-repo/my-repo":"/github/workspace" -v "/var/run/docker.sock":"/var/run/docker.sock" /home/runner/work/my-repo/my-repo is the path on the host, where the repository files are. As mentioned, that first line is what gets it mounted into /github/workspace in your action container when it gets run. The second line is mounting the docker socket from the host to the action container. This means any time you call docker within your action, you're actually talking to the docker daemon outside of your container. This is important because that means when you use the -v argument inside your action, the arguments need to reflect directories that exist outside of the container. So, what you would actually need to do instead is this: docker run -v /home/runner/work/my-repo/my-repo:/opt/workspace \ --workdir /opt/workspace \ --entrypoint /bin/ls \ my-container "-R" Becoming useful to others And that works. If you only use it for the project itself. However, you have (among others) a remaining problem if you want this action to be consumable by other projects. 
How do you know where the workspace is on the host? This path will change for each repository, after all. GitHub does not guarantee these paths, either. They may be different on different platforms, or your action may be running on a self-hosted runner. So how do you contend with that problem? There is no inbuilt environment variable that points to this directory you need specifically, unfortunately. However, by relying on implementation detail, you might be able to get away with using the $RUNNER_WORKSPACE variable, which will point, in this case, to /home/runner/work/your-project. This is not the same place as the origin of $GITHUB_WORKSPACE but it's close. You can use the GITHUB_REPOSITORY variable to build the path, though this isn't guaranteed to always be the case afaik: PROJECT_NAME="$(basename ${GITHUB_REPOSITORY})" WORKSPACE="${RUNNER_WORKSPACE}/${PROJECT_NAME}" You also have some other things to fix, like the working directory from which you build. TL;DR You need to mount files in the container when you run it. In GitHub, you're running docker-in-docker, so the paths you need to use to mount files work differently, so you need to find the correct paths to pass to docker when called from within your action container. A minimally working solution for the example project you linked is the following entrypoint.sh in the root of the repo: #!/usr/bin/env sh ALPINE_VERSION=$1 docker build -t docker-action \ -f ./docker-action/Dockerfile \ --build-arg alpine_version="$ALPINE_VERSION" \ ./docker-action PROJECT_NAME="$(basename ${GITHUB_REPOSITORY})" WORKSPACE="${RUNNER_WORKSPACE}/${PROJECT_NAME}" docker run --workdir=$GITHUB_WORKSPACE \ -v $WORKSPACE:$GITHUB_WORKSPACE \ docker-action "$@" There are probably further concerns with your action, depending on what it does, like making available all the default and user-defined environment variables for the action to the 'inner' container, if that's important. So, is this possible? Sure. Is it reasonable just to get a dynamic version of alpine/python? I don't think so. There are probably better ways of accomplishing what you want to do, like using setup-python, but that sounds like a different question. | 9 | 11 |