Dataset schema (one row per Stack Overflow question and accepted answer):
question_id: int64, 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-14 00:00:00
link: string, 60 to 163 characters
question: string, 53 to 28.9k characters
accepted_answer: string, 26 to 29.3k characters
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
72,282,253
2022-5-18
https://stackoverflow.com/questions/72282253/using-conda-activate-or-specifying-python-path-in-bash-script
I'm running some Python scripts on Linux clusters using SGE or SLURM. I already have my conda environment set up properly on the login node. To activate the environment properly, I have been writing something like source ~/.bashrc module purge #Kill all active modules conda init bash conda deactivate conda deactivate conda activate my_env python my_script.py (I have done a lot of work to figure this out.) However, I just found example code like /anaconda3/envs/my_env/bin/python my_script.py that seems to do the same thing without the tedious deactivation and activation. Are they actually doing the same thing? If so, which is the better practice?
Programmatic execution with an environment is usually better done through the conda run subcommand. E.g., my_slurm_script.sh #!/bin/bash -l conda run -n my_env python my_script.py Read the conda run --help for details.
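A fuller SLURM submission script built around that idea might look like the sketch below (the environment name my_env and script my_script.py come from the question; the #SBATCH resource values are illustrative assumptions only):

#!/bin/bash -l
#SBATCH --job-name=my_job        # illustrative resource requests
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# conda run executes the command inside the named environment,
# so no conda activate is needed in the batch shell.
conda run -n my_env python my_script.py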
4
5
72,214,347
2022-5-12
https://stackoverflow.com/questions/72214347/how-to-document-default-none-null-in-openapi-swagger-using-fastapi
Using an ORM, I want to do a POST request leaving some fields with a null value, which will be translated in the database into the default value specified there. The problem is that the OpenAPI (Swagger) docs ignore the default None and still show a UUID by default. from fastapi import FastAPI from pydantic import BaseModel from typing import Optional from uuid import UUID import uvicorn class Table(BaseModel): # ID: Optional[UUID] # the docs show an example UUID, ok ID: Optional[UUID] = None # the docs still show a uuid, when they should show a null or valid None value. app = FastAPI() @app.post("/table/", response_model=Table) def create_table(table: Table): # here we call the sqlalchemy orm etc. return 'nothing important, the important thing is in the docs' if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000) In the OpenAPI schema example (request body) shown in the docs we find: { "ID": "3fa85f64-5717-4562-b3fc-2c963f66afa6" } This is not ok, because I specified that the default value is None, so I expected this instead: { "ID": null, # null is the equivalent of None here } This would pass a null to the ID, which is finally resolved in the db to the default value (a newly generated UUID).
When you declare Optional parameters, users shouldn't include those parameters in the request specified with null or None (in Python), in order to be None. By default, the value of such parameters will be None, unless the user specifies some other value when sending the request. Hence, all you have to do is to declare a custom example for the Pydantic model using Config and schema_extra, as described in the documentation and as shown below. The below example will create an empty (i.e., {}) request body in OpenAPI (Swagger UI), which can be successfully submitted (as ID is the only attribute of the model and is optional). class Table(BaseModel): ID: Optional[UUID] = None class Config: schema_extra = { "example": { } } @app.post("/table/", response_model=Table) def create_table(table: Table): return table If the Table model included some other required attributes, you could add example values for those, as demonstrated below: class Table(BaseModel): ID: Optional[UUID] = None some_attr: str class Config: schema_extra = { "example": { "some_attr": "Foo" } } If you would like to keep the auto-generated examples for the rest of the attributes except the one for the ID attribute, you could use the below to remove ID from the model's properties in the generated schema (inspired by Schema customization): class Table(BaseModel): ID: Optional[UUID] = None some_attr: str some_attr2: float some_attr3: bool class Config: @staticmethod def schema_extra(schema: Dict[str, Any], model: Type['Table']) -> None: del schema.get('properties')['ID'] Also, if you would like to add custom example to some of the attributes, you could use Field() (as described here); for example, some_attr: str = Field(example="Foo"). Another possible solution would be to modify the generated OpenAPI schema, as described in Solution 3 of this answer. Though, the above solution is likely more suited to this case. Note ID: Optional[UUID] = None is the same as ID: UUID = None. As previously documented in FastAPI website (see this answer as well): The Optional in Optional[str] is not used by FastAPI, but will allow your editor to give you better support and detect errors. Since then, FastAPI has revised their documentation with the following: The Union in Union[str, None] will allow your editor to give you better support and detect errors. Hence, ID: Union[UUID, None] = None is the same as ID: Optional[UUID] = None and ID: UUID = None. In Python 3.10+, one could also use ID: UUID| None = None (see here). As per FastAPI documentation (see Info section in the link provided): Have in mind that the most important part to make a parameter optional is the part: = None or the: = Query(default=None) as it will use that None as the default value, and that way make the parameter not required. The Union[str, None] part allows your editor to provide better support, but it is not what tells FastAPI that this parameter is not required.
5
11
72,245,243
2022-5-15
https://stackoverflow.com/questions/72245243/polars-how-to-add-a-column-with-numerical
In pandas, we can just assign directly: import pandas as pd df = pd.DataFrame({"a": [1, 2]}) # add a single value df["b"] = 3 # add an existing Series df["c"] = pd.Series([4, 5]) a b c 0 1 3 4 1 2 3 5 Notice that the new numerical Series is not in the original df, it is a result of some computation. How do we do the same thing in polars? import polars as pl df = pl.DataFrame({"a": [1, 2]}) df = df.with_columns(...) # ????
Let's start with this DataFrame: import polars as pl df = pl.DataFrame( { "col1": [1, 2, 3, 4, 5], } ) shape: (5, 1) β”Œβ”€β”€β”€β”€β”€β”€β” β”‚ col1 β”‚ β”‚ --- β”‚ β”‚ i64 β”‚ β•žβ•β•β•β•β•β•β•‘ β”‚ 1 β”‚ β”‚ 2 β”‚ β”‚ 3 β”‚ β”‚ 4 β”‚ β”‚ 5 β”‚ β””β”€β”€β”€β”€β”€β”€β”˜ To add a scalar (single value) Use polars.lit. my_scalar = -1 df.with_columns(pl.lit(my_scalar).alias("col_scalar")) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col_scalar β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i32 β”‚ β•žβ•β•β•β•β•β•β•ͺ════════════║ β”‚ 1 ┆ -1 β”‚ β”‚ 2 ┆ -1 β”‚ β”‚ 3 ┆ -1 β”‚ β”‚ 4 ┆ -1 β”‚ β”‚ 5 ┆ -1 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ You can also choose the datatype of the new column using the dtype keyword. df.with_columns(pl.lit(my_scalar, dtype=pl.Float64).alias("col_scalar_float")) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col_scalar_float β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════════════════║ β”‚ 1 ┆ -1.0 β”‚ β”‚ 2 ┆ -1.0 β”‚ β”‚ 3 ┆ -1.0 β”‚ β”‚ 4 ┆ -1.0 β”‚ β”‚ 5 ┆ -1.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ To add a list To add a list of values (perhaps from some external computation), use the polars.Series constructor and provide a name to the Series constructor. my_list = [10, 20, 30, 40, 50] df.with_columns(pl.Series(name="col_list", values=my_list)) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col_list β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════════║ β”‚ 1 ┆ 10 β”‚ β”‚ 2 ┆ 20 β”‚ β”‚ 3 ┆ 30 β”‚ β”‚ 4 ┆ 40 β”‚ β”‚ 5 ┆ 50 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ You can use the dtype keyword to control the datatype of the new series, if needed. df.with_columns(pl.Series(name="col_list", values=my_list, dtype=pl.Float64)) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col_list β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════════║ β”‚ 1 ┆ 10.0 β”‚ β”‚ 2 ┆ 20.0 β”‚ β”‚ 3 ┆ 30.0 β”‚ β”‚ 4 ┆ 40.0 β”‚ β”‚ 5 ┆ 50.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ To add a Series If you already have a Series, you can just provide a reference to it. my_series = pl.Series(name="my_series_name", values=[10, 20, 30, 40, 50]) df.with_columns(my_series) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ my_series_name β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ════════════════║ β”‚ 1 ┆ 10 β”‚ β”‚ 2 ┆ 20 β”‚ β”‚ 3 ┆ 30 β”‚ β”‚ 4 ┆ 40 β”‚ β”‚ 5 ┆ 50 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ If your Series does not already have a name, you can provide one using the alias Expression. my_series_no_name = pl.Series(values=[10, 20, 30, 40, 50]) df.with_columns(my_series_no_name.alias('col_no_name')) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col_no_name β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════════════║ β”‚ 1 ┆ 10 β”‚ β”‚ 2 ┆ 20 β”‚ β”‚ 3 ┆ 30 β”‚ β”‚ 4 ┆ 40 β”‚ β”‚ 5 ┆ 50 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
14
23
72,275,772
2022-5-17
https://stackoverflow.com/questions/72275772/what-is-a-valid-binding-name-for-azure-function
When I try to run the Azure Function defined below, I get the following error log: The 'my_function' function is in error: The binding name my_function_timer is invalid. Please assign a valid name to the binding. What is the format of a valid binding name for an Azure Function? Function definition I have two files in the my_function directory: __init__.py contains the Python code of the function function.json contains the configuration of the function Here is the content of those two files. __init__.py import azure.functions as func import logging def main(my_function_timer: func.TimerRequest) -> None: logging.info("My function starts") print("hello world") logging.info("My function stops") function.json { "scriptFile": "__init__.py", "bindings": [ { "name": "my_function_timer", "type": "timerTrigger", "direction": "in", "schedule": "0 0 1 * * *" } ] } I deploy this function using the Azure/functions-action@v1 GitHub action.
I couldn't find anything in the documentation either, but looking at the source code of azure-functions-host (which contains the code for the runtime host used by the Azure Functions service), it uses the following regex to validate the binding name: ^([a-zA-Z][a-zA-Z0-9]{0,127}|\$return)$ This means that a valid binding name is either a name whose first character is a letter, followed by letters or digits (at most 127 further characters), OR the literal string $return. Since your binding name contains an underscore (_), the regex does not match, which results in the validation error.
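As a quick way to check candidate binding names against that pattern, here is a small Python sketch (the regex is copied from the answer; the example names are made up):

import re

# Regex used by the Azure Functions host to validate binding names (from the answer above)
BINDING_NAME = re.compile(r'^([a-zA-Z][a-zA-Z0-9]{0,127}|\$return)$')

for name in ('my_function_timer', 'myFunctionTimer', '$return', '9timer'):
    print(name, bool(BINDING_NAME.match(name)))
# my_function_timer -> False (underscore), myFunctionTimer -> True,
# $return -> True, 9timer -> False (starts with a digit)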
16
25
72,232,875
2022-5-13
https://stackoverflow.com/questions/72232875/python-should-i-save-pypi-packages-offline-as-a-backup
My Python projects depend heavily on PyPI packages. I want to make sure that, at any time in the future, the packages required by my apps will always be available online on PyPI. For example: I found a project on GitHub that requires PyQt4. When I tried to run it on my Linux machine, it crashed on startup because it couldn't find the PyQt4 package on PyPI. (NB: I know that PyQt4 is deprecated.) I searched a lot for a PyPI archive that still holds the PyQt4 package, but I couldn't find one anywhere, so I had to rewrite that app to make it work with PyQt5. I only changed the code related to the UI (i.e. PyQt4); the other functions were still working. So the only problem with that app was that the PyQt4 package was removed from PyPI. So, my question is: should I save a backup of the PyPI packages I use?
Short version: YES, if you want availability... The next big question is how best to keep a backup version of the dependencies; there are some suggestions at the end of this answer. Long version: Your question touches on the concept of "Availability", which is one of the three pillars of Information Assurance (or Information Security). The other two pillars are Confidentiality and Integrity... the CIA triad. PyPI packages are maintained by the owners of those packages; a project that depends on a package and lists it as a dependency must take into account the possibility that the owner of the package will pull the package, or a version of the package, out of PyPI at any moment. Important Python packages with many dependencies are usually maintained by foundations or organizations that are more responsible about dealing with downstream dependent packages and projects. However, keeping support for old packages is very costly and requires extra effort, and usually maintainers set a date for end of support, or publish a package lifecycle where they state when a specific version will be removed from the public PyPI server. Once that happens, the dependents have to update their code (as you did), or provide the original dependency via alternative means. This topic is very important for procurement in libraries, universities, laboratories, companies, and government agencies, where a software tool might have dependencies on other software packages (or an ecosystem), and where "availability" should be addressed adequately. Addressing this risk might mean anything from ensuring high availability at all costs, to living with the risk of losing one or more dependencies... A risk management approach should be used to make informed choices affecting the "security" of your project. It should also be noted that some packages require binary executables, binary libraries, or access to an online API service, which must also be available for the package to work properly; that complicates the risk analysis and the activities necessary to address availability. Now, to make sure that dependencies are always available, I quickly compiled the following list. Note that each option has pros and cons; you should evaluate these and other options based on your needs: Store the virtual environment along with the code. Once you create a virtual environment and install the packages you require for the project in that virtual environment, you can keep the virtual environment as part of your repository, for example for posterity. Host your own PyPI instance (or mirror) and keep a copy of the packages you depend upon hosted on it: https://packaging.python.org/en/latest/guides/hosting-your-own-index/ Use an "artifact management tool" such as Artifactory from https://jfrog.com/artifact-management/, where you can host not only Python packages but also Docker images, npm packages, and other kinds of artifacts. Get the source code of all dependencies, and always build from source. Create a Docker image where the project works properly and keep backups of the image. If the package requires an online API service, think about replacing that service or mocking it with one you can control.
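One concrete, low-effort variant of the "keep your own copy" options above is to mirror the exact distributions you depend on with pip download and install from that local directory later. A minimal sketch (the directory name is illustrative; note that pip download fetches artifacts for the current platform and Python version by default):

# Save local copies of every requirement (and their dependencies)
pip download -r requirements.txt -d ./pypi-backup

# Later, install from the local backup without contacting PyPI
pip install --no-index --find-links ./pypi-backup -r requirements.txt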
8
5
72,274,613
2022-5-17
https://stackoverflow.com/questions/72274613/how-to-type-overload-functions-with-multiple-optional-args
I have a function with multiple kwargs with defaults. One of them (in the middle somewhere) is a boolean toggle that controls the return type. I would like to create two overloads for this method with Literal[True/False] but keeping the default value. My idea was the following: from typing import overload, Literal @overload def x(a: int = 5, t: Literal[True] = True, b: int = 5) -> int: ... @overload def x(a: int = 5, t: Literal[False] = False, b: int = 5) -> str: ... def x(a: int = 5, t: bool = True, b: int = 5) -> int | str: if t: return 5 return "asd" But mypy raises: error: Overloaded function signatures 1 and 2 overlap with incompatible return types I assume that is because x() will conflict. But I cannot remove the default = False value in the second overload since it is preceded by arg a with a default. How can I overload this properly such that x() -> int x(t=True) -> int x(t=False) -> str
It is an old problem. The reason is that you specify default value in both branches, so x() is possible in both and return type is undefined. I have the following pattern for such cases: from typing import overload, Literal @overload def x(a: int = ..., t: Literal[True] = True, b: int = ...) -> int: ... @overload def x(a: int = ..., *, t: Literal[False], b: int = ...) -> str: ... @overload def x(a: int, t: Literal[False], b: int = ...) -> str: ... def x(a: int = 5, t: bool = True, b: int = 1) -> int | str: if t: return 5 return "asd" Why and how? You have to think about ways to call your function. First, you can provide a, then t can be given as kwarg (#2) or arg (#3). You can also leave a default, then t is always a kwarg (#2 again). This is needed to prevent putting arg after kwarg, which is SyntaxError. Overloading on more than one parameter is more difficult, but possible this way too: @overload def f(a: int = ..., b: Literal[True] = ..., c: Literal[True] = ...) -> int: ... @overload def f(a: int = ..., *, b: Literal[False], c: Literal[True] = ...) -> Literal['True']: ... @overload def f(a: int = ..., *, b: Literal[False], c: Literal[False]) -> Literal['False']: ... @overload def f(a: int, b: Literal[False], c: Literal[True] = ...) -> Literal['True']: ... @overload def f(a: int, b: Literal[False], c: Literal[False]) -> Literal['False']: ... def f(a: int = 1, b: bool = True, c: bool = True) -> int | Literal['True', 'False']: return a if b else ('True' if c else 'False') # mypy doesn't like str(c) You can play with overloading here. Ellipsis (...) in overloaded signatures default values means "Has a default, see implementation for its value". It is no different from the actual value for the type checker, but makes your code saner (default values are defined only in actual signatures and not repeated).
9
16
72,214,043
2022-5-12
https://stackoverflow.com/questions/72214043/how-to-debug-python-2-7-code-with-vs-code
For work I have to use Python 2.7. But when I use the "debug my python file" function in VS Code, I get an error, even with a simple program like: print()
As rioV8 said in a comment, you have to install a previous version of the Python extension, because in the meanwhile support for Python 2 has been dropped. To install a previous version you have to: Open the Extensions pane from the bar on the left and find Python Click on the gear icon and select "Install another version" Choose 2021.9.1246542782. After it's finished, restart VS Code. If you want to understand why you need version 2021.9.1246542782: The component that provides support to the language is Jedi, and the release notes of version 0.17.2 (2020-07-17) say that This will be the last release that supports Python 2 and Python 3.5. 0.18.0 will be Python 3.6+. And according to the release notes of the Python extension, the latest version that was based on Jedi 0.17 was 2021.9.3 (20 September 2021), because the following one (2021.10.0, 7 October 2021) says Phase out Jedi 0.17 Is that all? No, because the selection that VS Code offers when selecting previous versions uses a different numbering scheme. Anyway, the latest one of the v2021.9.* branch is v2021.9.1246542782, which I suppose corresponds to 2021.9.3, so it's the one you need.
9
31
72,212,413
2022-5-12
https://stackoverflow.com/questions/72212413/how-to-replace-a-number-in-a-string-in-python
I need to search a string and check if it contains numbers in its name. If it does, I want to replace it with nothing. I've started doing something like this but I didn't find a solution for my problem. table = "table1" if any(chr.isdigit() for chr in table) == True: table = table.replace(chr, "_") print(table) # The output should be "table" Any ideas?
You could do this in many different ways. Here's how it could be done with the re module: import re table = 'table1' table = re.sub(r'\d+', '', table)
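If you would rather avoid the re module, a comprehension-based sketch does the same thing with only built-ins:

table = "table1"
table = "".join(ch for ch in table if not ch.isdigit())
print(table)  # table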
5
8
72,270,548
2022-5-17
https://stackoverflow.com/questions/72270548/how-to-ignore-pylance-type-checking-on-notebooks
I have a Python project in which I have Python files and notebooks. I use strict typing in my project, but I would like to disable it only for notebooks. I use VS Code with the setting: "python.analysis.typeCheckingMode": "strict" I know how to ignore type checking on a Python file, but it does not seem to work on notebooks: I get the following type error: Type of "y" is partially unknown; Type of "y" is "Unknown | None" (Pylance). How can I ignore type checking on notebooks?
That is a Pylance error. You can create a pyrightconfig.json file at the root of your workspace and define the files to be exclude-d from analysis or completely ignore-d: { "ignore": [ "**/*.ipynb", ], } You can even list specific filenames: { "ignore": [ "notimportant.ipynb", "test.ipynb", ], } Historical notes: It initially didn't work for Jupyter Notebooks (.ipynb): https://github.com/microsoft/pylance-release/issues/2135 This happens because pyright doesn't see the file as a "*.ipynb". The file is being preprocessed (to combine all of the cells) in the notebook by the VS Code Python extension, and the resulting combined file is then passed to pyright for analysis. The pylance team is actively working on changing the way this works. I'm going to transfer this bug to the pylance-release repo so it gets the attention it deserves. That GitHub issue has since been resolved, and the fix was deployed as part of Pylance 2022.8.51: https://github.com/microsoft/pylance-release/blob/main/CHANGELOG.md#2022851-31-august-2022-prerelease Notable changes: ... Bug Fix: Ignoring *.ipynb files does not work (pylance-release#2135) If it somehow still does not work, check the version of Pylance in your VS Code.
7
8
72,239,693
2022-5-14
https://stackoverflow.com/questions/72239693/why-get-pip-install-error-when-run-docker-build-on-only-m1-mac
I'm using both an Intel Mac and an Apple Silicon (M1) Mac. When I build the Dockerfile on the Intel Mac, it succeeds. However, when I build the same file on the M1 Mac, I get this error: #10 3.254 ERROR: Could not find a version that satisfies the requirement google-python-cloud-debugger (from versions: none) #10 3.255 ERROR: No matching distribution found for google-python-cloud-debugger ------ executor failed running [/bin/sh -c python -m pip install --upgrade pip && pip install -r requirements.txt]: exit code: 1 This is not limited to google-python-cloud-debugger; if I remove it, another module hits the same error. How can I solve this problem? My requirements.txt and Dockerfile are below. gunicorn==19.9.0 selenium==3.141.0 webdriver-manager==3.5.2 google-python-cloud-debugger google-cloud-storage django==3.2 mysqlclient==2.1.0 pysqlite3==0.4.6 django-storages==1.12.3 Pillow==9.0.0 requests==2.27.1 FROM python:3.9-buster # Install manually all the missing libraries RUN apt-get update && apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget fonts-takao-* # Install Chrome RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install # Install Python dependencies. COPY requirements.txt requirements.txt RUN python -m pip install --upgrade pip && pip install -r requirements.txt # Copy chrome driver COPY drivers /root/.wdm/drivers # Copy local code to the container image. COPY . /app ENV APP_HOME /app/website WORKDIR $APP_HOME # Run the web service on container startup. Here we use the gunicorn CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 website.wsgi
I solved this problem! When we build a Dockerfile on an M1 Mac, we need to specify the platform. So you need to add --platform=linux/x86_64 after FROM in the Dockerfile. Example: FROM --platform=linux/x86_64 python:3.9-buster Or you can add the option when you build: docker build . --platform=linux/x86_64 -t my_tag If linux/x86_64 does not work, you can try linux/amd64, which is also effective.
5
10
72,278,311
2022-5-17
https://stackoverflow.com/questions/72278311/performancewarning-dropping-on-a-non-lexsorted-multi-index-without-a-level-para
I have the following line of code: end_df['Soma Internet'] = end_df.iloc[:,end_df.columns.get_level_values(1) == 'Internet'].drop('site',axis=1).sum(axis=1) It basically filters my multi-index df by a specific level 1 column, drops a few unwanted columns, and sums all the other ones. I took a glance at some of the documentation and other questions, but I didn't quite understand what causes the warning, and I would also love to rewrite this code so I can get rid of it.
Let's try with an example (without data for simplicity): import pandas as pd # Column MultiIndex. idx = pd.MultiIndex(levels=[['Col1', 'Col2', 'Col3'], ['subcol1', 'subcol2']], codes=[[2, 1, 0], [0, 1, 1]]) df = pd.DataFrame(columns=range(len(idx))) df.columns = idx print(df) Col3 Col2 Col1 subcol1 subcol2 subcol2 Clearly, the column MultiIndex is not sorted. We can check it with: print(df.columns.is_monotonic_increasing) False This matters because Pandas performs index lookup and other operations much faster if the index is sorted, because it can use operations that assume the sorted order and are faster. Indeed, if we try to drop a column: df.drop('Col1', axis=1) PerformanceWarning: dropping on a non-lexsorted multi-index without a level parameter may impact performance. df.drop('Col1', axis=1) Instead, if we sort the index before dropping, the warning disappears: print(df.sort_index(axis=1)) # Index is now sorted in lexicographical order. Col1 Col2 Col3 subcol2 subcol2 subcol1 # No warning here. df.sort_index(axis=1).drop('Col1', axis=1) EDIT (see comments): As the warning suggests, this happens when we do not specify the level from which we want to drop the column. This is because to drop the column, pandas has to traverse the whole index (happens here). By specifying it we do not need such traversal: # Also no warning. df.drop('Col1', axis=1, level=0) However, in general this problem relates more on row indices, as usually column multi-indices are way smaller. But definitely to keep it in mind for larger indices and dataframes. In fact, this is in particular relevant for slicing by index and for lookups. In those cases, you want your index to be sorted for better performance.
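Applied to the question's own line, either fix would look roughly like this sketch (end_df and the column labels come from the question; treating 'site' as a level-0 label is an assumption):

# Option 1: sort the column MultiIndex once, then drop as before (no warning)
end_df = end_df.sort_index(axis=1)
end_df['Soma Internet'] = (
    end_df.iloc[:, end_df.columns.get_level_values(1) == 'Internet']
          .drop('site', axis=1)
          .sum(axis=1)
)

# Option 2: keep the original column order but tell drop() which level 'site' lives in
end_df['Soma Internet'] = (
    end_df.iloc[:, end_df.columns.get_level_values(1) == 'Internet']
          .drop('site', axis=1, level=0)
          .sum(axis=1)
)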
7
9
72,255,562
2022-5-16
https://stackoverflow.com/questions/72255562/cannot-import-name-dtensor-from-tensorflow-compat-v2-experimental
I am having problems trying to run TensorFlow on my Windows 10 machine. Code runs fine on my MacOS machine. Traceback (most recent call last): File "c:\Users\Fynn\Documents\GitHub\AlpacaTradingBot\ai.py", line 15, in <module> from keras.models import Sequential, load_model File "C:\Users\Fynn\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\__init__.py", line 24, in <module> from keras import models File "C:\Users\Fynn\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\models\__init__.py", line 18, in <module> from keras.engine.functional import Functional File "C:\Users\Fynn\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\functional.py", line 24, in <module> from keras.dtensor import layout_map as layout_map_lib File "C:\Users\Fynn\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\dtensor\__init__.py", line 22, in <module> from tensorflow.compat.v2.experimental import dtensor as dtensor_api # pylint: disable=g-import-not-at-top ImportError: cannot import name 'dtensor' from 'tensorflow.compat.v2.experimental' (C:\Users\Fynn\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\_api\v2\compat\v2\experimental\__init__.py)
I tried many solutions to no avail, in the end this worked for me! pip3 uninstall tensorflow absl-py astunparse flatbuffers gast google-pasta grpcio h5py keras keras-preprocessing libclang numpy opt-einsum protobuf setuptools six tensorboard tensorflow-io-gcs-filesystem termcolor tf-estimator-nightly typing-extensions wrapt pip3 install --disable-pip-version-check --no-cache-dir tensorflow
21
0
72,217,828
2022-5-12
https://stackoverflow.com/questions/72217828/fastapi-how-to-get-raw-url-path-from-request
I have a GET method with requested parameter in path: @router.get('/users/{user_id}') async def get_user_from_string(user_id: str): return User(user_id) Is it possible to get base url raw path (i.e., '/users/{user_id}') from the request? I have tried to use the following way: path = [route for route in request.scope['router'].routes if route.endpoint == request.scope['endpoint']][0].path But it doesn't work and I get: AttributeError: 'Mount' object has no attribute 'endpoint'
You can use the APIRoute object stored in the request's scope to get the actual route path. Example: raw_path = request.scope['route'].path # '/users/{user_id}'
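A slightly fuller sketch of that idea, using FastAPI's Request object (the route and parameter names are taken from the question):

from fastapi import FastAPI, Request

app = FastAPI()

@app.get('/users/{user_id}')
async def get_user_from_string(user_id: str, request: Request):
    raw_path = request.scope['route'].path  # '/users/{user_id}'
    return {'user_id': user_id, 'route': raw_path}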
6
-2
72,199,354
2022-5-11
https://stackoverflow.com/questions/72199354/python-type-hinting-for-a-generic-mutable-tuple-fixed-length-sequence-with-mul
I am currently working on adding type hints to a project and can't figure out how to get this right. I have a list of lists, with the nested list containing two elements of type int and float. The first element of the nested list is always an int and the second is always a float. my_list = [[1000, 5.5], [1432, 2.2], [1234, 0.3]] I would like to type annotate it so that unpacking the inner list in for loops or loop comprehensions keeps the type information. I could change the inner lists to tuples and would get what I'm looking for: def some_function(list_arg: list[tuple[int, float]]): pass However, I need the inner lists to be mutable. Is there a nice way to do this for lists? I know that abstract classes like Sequence and Collection do not support multiple types.
I think the question highlights a fundamental difference between statically typed Python and dynamically typed Python. For someone who is used to dynamically typed Python (or Perl or JavaScript or any number of other scripting languages), it's perfectly normal to have diverse data types in a list. It's convenient, flexible, and doesn't require you to define custom data types. However, when you introduce static typing, you step into a tighter box that requires more rigorous design. As several others have already pointed out, type annotations for lists require all elements of the list to be the same type, and don't allow you to specify a length. Rather than viewing this as a shortcoming of the type system, you should consider that the flaw is in your own design. What you are really looking for is a class with two data members. The first data member is named 0, and has type int, and the second is named 1, and has type float. As your friend, I would recommend that you define a proper class, with meaningful names for these data members. As I'm not sure what your data type represents, I'll make up names, for illustration. class Sample: def __init__(self, atomCount: int, atomicMass: float): self.atomCount = atomCount self.atomicMass = atomicMass This not only solves the typing problem, but also gives a major boost to readability. Your code would now look more like this: my_list = [Sample(1000, 5.5), Sample(1432, 2.2), Sample(1234, 0.3)] def some_function(list_arg: list[Sample]): pass I do think it's worth highlighting Stef's comment, which points to this question. The answers given highlight two useful features related to this. First, as of Python 3.7, you can mark a class as a data class, which will automatically generate methods like __init__(). The Sample class would look like this, using the @dataclass decorator: from dataclasses import dataclass @dataclass class Sample: atomCount: int atomicMass: float Another answer to that question mentions a PyPi package called recordclass, which it says is basically a mutable namedtuple. The typed version is called RecordClass from recordclass import RecordClass class Sample(RecordClass): atomCount: int atomicMass: float
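To illustrate that this approach stays mutable while keeping the per-field type information the question asked about, here is a short usage sketch (it reuses the answer's illustrative Sample class):

from dataclasses import dataclass

@dataclass
class Sample:
    atomCount: int
    atomicMass: float

my_list = [Sample(1000, 5.5), Sample(1432, 2.2), Sample(1234, 0.3)]

def some_function(list_arg: list[Sample]) -> None:
    for sample in list_arg:
        sample.atomCount += 1  # instances remain mutable
        # both attributes keep their declared types for the type checker
        ratio: float = sample.atomicMass / sample.atomCount

some_function(my_list)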
8
5
72,202,728
2022-5-11
https://stackoverflow.com/questions/72202728/conda-to-poetry-environment
I have a conda environment that I would like to convert to a poetry environment. What I have tried is to translate the environment.yaml of the conda environment into a pyproject.toml file that poetry can read. Here are the steps: Generate the yaml file: conda env export --from-history > environment.yaml The --from-history flag includes only the packages that I explicitly asked for. Here is how the file looks after installing numpy: # environment.yaml name: C:\Users\EDOCIC\Screepts\My_projects\Tests\conda2poetry\condaenv channels: - defaults dependencies: - numpy Manually create the pyproject.toml file out of environment.yaml. I added the numpy version, which I got from conda env export. Here is the result: # pyproject.toml [tool.poetry] name = "conda2poetry" version = "0.1.0" description = "" authors = [""] [tool.poetry.dependencies] python = "~3.7" numpy = "^1.21.5" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" Create the environment with poetry init, which will automatically read the toml file. The process seems to work, but it's quite manual and prone to mistakes. Is there a better way?
No, there is not a better way. Conda is a generic package manager and does not distinguish Python from non-Python packages, so this has to be done with manual curation. Additionally, package names might also differ, for example py-opencv (conda-forge) vs. opencv-python (PyPI). Tips: In addition to pulling down the --from-history YAML, it may also help to dump out a pip list --format=freeze. This could help with resolving any tricky packages that use different names in Conda versus PyPI. If the environment uses any PyPI packages directly, this won't be seen from a conda env export --from-history. However, these will appear when using conda list (entries with channel pypi) or plain conda env export, which would have a dependencies.pip: section if there are any.
11
5
72,243,852
2022-5-14
https://stackoverflow.com/questions/72243852/pyinstaller-cant-find-a-module-error-loading-python-dll
I compiled my program with PyInstaller, and it works fine on my computer, but whenever I try to run it on another computer (with no Python), I get the following error: Error loading Python DLL 'C:\Users\perez\AppData\Local\Temp\_MEI28162\python310.dll'. LoadLibrary: Cannot find specified module What can I do? I'm not allowed to install Python on the other computer.
OK, it was not working because I compiled the script with PyInstaller under Python 3.10, but the highest Python version supported on Windows 7 is 3.8.
4
2
72,224,866
2022-5-13
https://stackoverflow.com/questions/72224866/how-to-get-time-taken-for-each-layer-in-pytorch
I want to know the inference time of a layer in Alexnet. This code measures the inference time of the first fully connected layer of Alexnet as the batch size changes. And I have a few questions about this. Is it possible to measure the inference time accurately with the following code? Is there a time difference because the CPU and GPU run separately? Is there a module used to measure layer inference time in Pytorch? Given the following code: import torch import torch.optim as optim import torch.nn as nn import torch.nn.functional as F from torchvision import transforms import time from tqdm import tqdm class AlexNet(nn.Module): def __init__(self): super(AlexNet, self).__init__() self.relu = nn.ReLU(inplace=True) self.maxpool2D = nn.MaxPool2d(kernel_size=3, stride=2, padding=0) self.adaptive_avg_polling = nn.AdaptiveAvgPool2d((6, 6)) self.dropout = nn.Dropout(p=0.5) self.conv1 = nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2) self.conv2 = nn.Conv2d(64, 192, kernel_size=5, padding=2) self.conv3 = nn.Conv2d(192, 384, kernel_size=3, padding=1) self.conv4 = nn.Conv2d(384, 256, kernel_size=3, padding=1) self.conv5 = nn.Conv2d(256, 256, kernel_size=3, padding=1) self.fc1 = nn.Linear(256 * 6 * 6, 4096) self.fc2 = nn.Linear(4096, 4096) self.fc3 = nn.Linear(4096, 1000) def time(self, x): x = self.maxpool2D(self.relu(self.conv1(x))) x = self.maxpool2D(self.relu(self.conv2(x))) x = self.relu(self.conv3(x)) x = self.relu(self.conv4(x)) x = self.maxpool2D(self.relu(self.conv5(x))) x = self.adaptive_avg_polling(x) x = x.view(x.size(0), -1) x = self.dropout(x) start1 = time.time() x = self.fc1(x) finish1 = time.time() x = self.dropout(self.relu(x)) x = self.fc2(x) x = self.relu(x) x = self.fc3(x) return finish1 - start1 def layer_time(): use_cuda = torch.cuda.is_available() print("use_cuda : ", use_cuda) FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor device= torch.device("cuda:0" if use_cuda else "cpu") net = AlexNet().to(device) test_iter = 10000 batch_size = 1 for i in range(10): X = torch.randn(size=(batch_size, 3, 227, 227)).type(FloatTensor) s = 0.0 for i in tqdm(range(test_iter)): s += net.time(X) print(s) batch_size *= 2 layer_time()
I found a way to measure inference time by studying the AMP document. Using this, the GPU and CPU are synchronized and the inference time can be measured accurately. import torch, time, gc # Timing utilities start_time = None def start_timer(): global start_time gc.collect() torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.synchronize() start_time = time.time() def end_timer(): torch.cuda.synchronize() end_time = time.time() return end_time - start_time So my code changes as follows: import torch, time, gc from tqdm import tqdm import torch.nn as nn import torch # Timing utilities start_time = None def start_timer(): global start_time gc.collect() torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.synchronize() start_time = time.time() def end_timer(): torch.cuda.synchronize() end_time = time.time() return end_time - start_time class AlexNet(nn.Module): def __init__(self): super(AlexNet, self).__init__() self.relu = nn.ReLU(inplace=True) self.maxpool2D = nn.MaxPool2d(kernel_size=3, stride=2, padding=0) self.adaptive_avg_polling = nn.AdaptiveAvgPool2d((6, 6)) self.dropout = nn.Dropout(p=0.5) self.conv1 = nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2) self.conv2 = nn.Conv2d(64, 192, kernel_size=5, padding=2) self.conv3 = nn.Conv2d(192, 384, kernel_size=3, padding=1) self.conv4 = nn.Conv2d(384, 256, kernel_size=3, padding=1) self.conv5 = nn.Conv2d(256, 256, kernel_size=3, padding=1) self.fc1 = nn.Linear(256 * 6 * 6, 4096) self.fc2 = nn.Linear(4096, 4096) self.fc3 = nn.Linear(4096, 1000) def time(self, x): x = self.maxpool2D(self.relu(self.conv1(x))) x = self.maxpool2D(self.relu(self.conv2(x))) x = self.relu(self.conv3(x)) x = self.relu(self.conv4(x)) x = self.maxpool2D(self.relu(self.conv5(x))) x = self.adaptive_avg_polling(x) x = x.view(x.size(0), -1) x = self.dropout(x) # Check first linear layer inference time start_timer() x = self.fc1(x) result = end_timer() x = self.dropout(self.relu(x)) x = self.fc2(x) x = self.relu(x) x = self.fc3(x) return result def layer_time(): use_cuda = torch.cuda.is_available() print("use_cuda : ", use_cuda) FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor device= torch.device("cuda:0" if use_cuda else "cpu") net = AlexNet().to(device) test_iter = 1000 batch_size = 1 for i in range(10): X = torch.randn(size=(batch_size, 3, 227, 227)).type(FloatTensor) s = 0.0 for i in tqdm(range(test_iter)): s += net.time(X) print(s) batch_size *= 2 layer_time()
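A different option, not used in the answer above but worth knowing, is timing with CUDA events, which likewise accounts for asynchronous GPU execution. A minimal sketch of how the fc1 measurement inside time() could look (assuming, as in the code above, that the model and input are already on the GPU):

start_event = torch.cuda.Event(enable_timing=True)
end_event = torch.cuda.Event(enable_timing=True)

start_event.record()          # enqueue a marker before the layer
x = self.fc1(x)
end_event.record()            # enqueue a marker after the layer

torch.cuda.synchronize()      # wait until both events have completed
elapsed_ms = start_event.elapsed_time(end_event)  # elapsed GPU time in milliseconds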
6
0
72,233,773
2022-5-13
https://stackoverflow.com/questions/72233773/import-in-jupyter-notebook-to-use-a-method-from-another-file
I am using Jupyter Notebook. I have 3 Jupyter notebook files written in Python in the same directory: parser, preprocess, and temp. I am trying to import parser and import preprocess in the temp file so that I can use the methods written in those files. Example: there is a method named extract in the parser file, and I want to use it from the temp file. How can I do that?
The easiest way is to convert the files you need to import into .py files. As an example, parser.ipynb can be converted to a Python file parser.py, and you can then import it from another notebook file. If you want to use a function named extract() from parser.py, just import it: from parser import extract
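If you prefer to keep editing the notebooks and just regenerate the importable module, nbconvert can produce the .py file for you (a sketch; run it in the folder that contains the notebooks):

jupyter nbconvert --to script parser.ipynb   # writes parser.py next to the notebook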
5
2
72,279,755
2022-5-17
https://stackoverflow.com/questions/72279755/python-gstreamer-bindings-with-pygobject-only-has-core-modules-no-plugins
I have gstreamer installed on OSX 12.0.1 Monterey. I just installed the python bindings inside of a virtual environment running python 3.9 with: pip3 install pycairo PyGObject I can import gi and gi.repository.Gst without an issue. However it seems that almost all gstreamer plugins are missing. This is my test script: import gi gi.require_versions({'Gst': '1.0'}) from gi.repository import Gst, GLib Gst.init(None) Gst.debug_set_active(True) Gst.debug_set_default_threshold(5) if not Gst.init_check()[0]: print("gstreamer initialization failed") class Main: def __init__(self): self.pipeline = Gst.parse_launch('playbin uri=https://gstreamer.freedesktop.org/data/media/small/sintel.mkv') self.pipeline.set_state(Gst.State.PLAYING) self.main_loop = GLib.MainLoop.new(None, False) GLib.MainLoop.run(self.main_loop) self.bus = self.pipeline.get_bus() self.msg = self.bus.timed_pop_filtered( Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS ) if self.msg is not None: self.msg.unref() self.bus.unref() self.pipeline.set_state(Gst.State.NULL) self.pipeline.unref() Main() It fails with: 0:00:00.006178000 92472 0x7fbd7d049210 INFO GST_PIPELINE gstparse.c:345:gst_parse_launch_full: parsing pipeline description 'playbin uri=https://gstreamer.freedesktop.org/data/media/small/sintel.mkv' 0:00:00.006205000 92472 0x7fbd7d049210 DEBUG GST_PIPELINE parse.l:135:priv_gst_parse_yylex: flex: IDENTIFIER: playbin 0:00:00.006217000 92472 0x7fbd7d049210 WARN GST_ELEMENT_FACTORY gstelementfactory.c:701:gst_element_factory_make_with_properties: no such element factory "playbin"! 0:00:00.006229000 92472 0x7fbd7d049210 ERROR GST_PIPELINE gst/parse/grammar.y:851:priv_gst_parse_yyparse: no element "playbin" 0:00:00.006237000 92472 0x7fbd7d049210 DEBUG GST_PIPELINE parse.l:181:priv_gst_parse_yylex: flex: SPACE: [ ] 0:00:00.006243000 92472 0x7fbd7d049210 DEBUG GST_PIPELINE parse.l:93:priv_gst_parse_yylex: flex: ASSIGNMENT: uri=https://gstreamer.freedesktop.org/data/media/small/sintel.mkv 0:00:00.006261000 92472 0x7fbd7d049210 DEBUG GST_PIPELINE gst/parse/grammar.y:1228:priv_gst_parse_launch: got 0 elements and 0 links Traceback (most recent call last): File "/python_experiments/playbin-example-audio.py", line 32, in <module> Main() File "/python_experiments/playbin-example-audio.py", line 16, in __init__ self.pipeline = Gst.parse_launch('playbin uri=https://gstreamer.freedesktop.org/data/media/small/sintel.mkv') Here is the output of gst-inspect-1.0 | grep playbin: (gst-plugin-scanner:92783): GLib-GObject-WARNING **: 15:29:32.244: type name '-a-png-encoder-pred' contains invalid characters (gst-plugin-scanner:92783): GLib-GObject-CRITICAL **: 15:29:32.245: g_type_set_qdata: assertion 'node != NULL' failed (gst-plugin-scanner:92783): GLib-GObject-CRITICAL **: 15:29:32.245: g_type_set_qdata: assertion 'node != NULL' failed (gst-plugin-scanner:92783): GLib-GObject-WARNING **: 15:29:32.293: type name '-a-png-encoder-pred' contains invalid characters (gst-plugin-scanner:92783): GLib-GObject-CRITICAL **: 15:29:32.293: g_type_set_qdata: assertion 'node != NULL' failed (gst-plugin-scanner:92783): GLib-GObject-CRITICAL **: 15:29:32.293: g_type_set_qdata: assertion 'node != NULL' failed playback: playbin: Player Bin 2 playback: playbin3: Player Bin 3 Do the GLib errors thrown have something to do with this? gst-launch-1.0 playbin uri=https://gstreamer.freedesktop.org/data/media/small/sintel.mkv has no problem with video playback it just appears to be the python bindings. 
Are there any further debugging steps I should take before attempting to purge and reinstall gstreamer entirely? Edit: I reinstalled gstreamer using the command: brew reinstall gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav I then used pip to uninstall cairo and PyGObject from my venv and my system install. I then used brew install pygobject3 and tried to run the script again, this time from my python system install. Still failed Edit: Revisiting this as my bounty expires soon. I do have access to the gstreamer core. I can make filesrc with ElementFactory.make but nothing useful. Edit: REPL using Gst.ElementFactory.make() >>> import gi >>> gi.require_versions({'Gst': '1.0'}) >>> from gi.repository import Gst, GLib >>> Gst.init(None) [] >>> Gst.debug_set_active(True) >>> Gst.debug_set_default_threshold(5) >>> Gst.ElementFactory.make('playbin', 'playbin') 0:00:12.767487000 49323 0x7fc9a2321c10 WARN GST_ELEMENT_FACTORY gstelementfactory.c:754:gst_element_factory_make_valist: no such element factory "playbin"! >>>
I found the registry paths used from python are different to those used by the gst command line tools. You can check this as follows. Run gst-inspect-1.0 playbin and check the path in the "Filename" line - this was /usr/local/lib/gstreamer-1.0/ on the Mac I used. But this didn't match the path from Python. Run this snippet and it will print all the plugins and the paths they are loaded from. import gi gi.require_version('Gst', '1.0') from gi.repository import Gst Gst.init(None) reg = Gst.Registry.get() for x in reg.get_plugin_list(): print (x.get_name(), x.get_filename()) If there is a difference here, you have to force the registry to look in the same location that gst-inspect is reporting. You can do this after import with: Gst.Registry.get().scan_path("/usr/local/lib/gstreamer-1.0/") I think you can also set the GST_PLUGIN_PATH environment variable - but I didn't test this.
5
2
72,202,295
2022-5-11
https://stackoverflow.com/questions/72202295/how-to-apply-max-length-to-truncate-the-token-sequence-from-the-left-in-a-huggin
In the HuggingFace tokenizer, applying the max_length argument specifies the length of the tokenized text. I believe it truncates the sequence to max_length-2 (if truncation=True) by cutting the excess tokens from the right. For the purposes of utterance classification, I need to cut the excess tokens from the left, i.e. the start of the sequence in order to preserve the last tokens. How can I do that? from transformers import AutoTokenizer train_texts = ['text 1', ...] tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base') encodings = tokenizer(train_texts, max_length=128, truncation=True)
Tokenizers have a truncation_side parameter that should set exactly this. See the docs.
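Applied to the question's snippet, that looks roughly like the sketch below (xlm-roberta-base and the other arguments come from the question; depending on your transformers version you can pass the parameter at load time or set the attribute afterwards):

from transformers import AutoTokenizer

train_texts = ['text 1']

# Option 1: request left-side truncation when loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', truncation_side='left')

# Option 2: set it on an already-loaded tokenizer
tokenizer.truncation_side = 'left'

encodings = tokenizer(train_texts, max_length=128, truncation=True)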
5
6
72,244,046
2022-5-14
https://stackoverflow.com/questions/72244046/use-or-for-compatibilty-across-systems
My goal is a simple and proper way to export my venv. In the optimal case, the resulting requirements.txt works on all compatible systems. At the moment I use pip freeze > requirements.txt. This uses the == "Version matching clause". On an other system the file might not work due to conflicting versions, although it was compatible. In PEP 440 there is also a ~= "Compatible clause". However, I cannot find an option for that in pip freeze docs. Using "find and replace" or a tool like awk to replace == with ~= works okay. My naive conclusion is that ~= would be the ideal clause to use in requirements.txt. However, when I look at popular packages they often use >= to specify a version. E.g. at urllib3. Is there a drawback to ~=, which I do not see? If that is not the case: Why is >= used in so many packages? Edit: Pigar has an option to use >= natively and there is a comparison to freeze here. Apparently, they also do not use ~=. Yet, I am still not sure which one to use, as >= could break when there is a major version change. Also packages which are a lower minor version would be marked incompatible, although they should be compatible.
Your question is not simple to answer and touches on some nuances in the social dynamics around versioning. Easy stuff first: sometimes versions use a terminal suffix to indicate something like prerelease builds, and if you're dependent on a prerelease build or some other situation where you expect the terminal suffix to iterate repeatedly (especially in a non-ordered way), ~= helps you by letting you accept all iterations on a build. PEP 440 contains a good example: ~= 2.2.post3 >= 2.2.post3, == 2.* Second, pip freeze is not meant to be used to generate a requirements list. It just dumps a list of everything you've currently got, which is far more than actually needs to go in a requirements file. So it makes sense that it would only use ==: for example, it's meant to let you replicate a set of packages to an 'identical' environment elsewhere. Hard stuff next. Under semantic versioning, the only backwards-incompatible revisions should be major revisions. (This depends on how much you trust the maintainer - put a pin in that.) However, if specifying a patch number, ~= won't upgrade to a new minor rev even if one is available and it should, in principle, be backwards-compatible. This is important to talk about clearly, because "compatible release" has two different meanings: in semantic versioning, a "compatible release" is (colloquially) any rev between this one and the next major rev; in requirements files, a "compatible release" is a revision that patches the same terminal rev only. Let me be clear now: when I say "backwards-compatible," I mean it in the semantic versioning sense only. (If the package in question doesn’t use semantic versioning, or has a fourth version number, well - generally ~= will still match all patches, but check to be sure.) So, there's a trade to be made between >= and ~=, and it has to do with chains of trust in dependency management. Here are three principles - then after, I'll offer some speculation on why so many package maintainers use >=. In general, it's the responsibility of a package maintainer to ensure that all version numbers matching their requirements.txt are compatible with that package, with the occasional exception of deprecated patch revs. This includes ensuring that the requirements.txt is as small as possible and contains only that package's requirements. (More broadly, β€œrequire as little as possible and validate it as much as possible.”) In general, no matter the language and no matter the package, dependencies reflect a chain of trust. I am implementing a package; I trust you to maintain your package (and its requirements file) in a way that continues to function. You are trusting your dependencies to maintain their packages in a way that continues to function. In turn, your downstream consumers are expecting you to maintain your package in a way that means it continues to function for them. This is based on human trust. The number is 'just' a convenient communication tool. In general, no matter the change set, package maintainers try extremely hard to avoid major versions. No one wants to be the guy who releases a major rev and forces consumers to version their package through a substantial rewrite - or consign their projects to an old and unsupported version. We accept major revs as necessary (that's why we have systems to track them), but folks are typically loath to use them until they really don't have another option. Synthesize these three. 
From the perspective of a package maintainer, supposing one trusts the maintainers one is dependent upon (as one should), it is broadly speaking more reasonable to expect major revisions to be rare, than it is to expect minor revisions to be backwards-incompatible by accident. This means the number of reactive updates you'll need to make in the >= scheme should be small (but, of course, nonzero). That's a lot of groundwork. I know this is long, but this is the good part: the trade. For example, suppose I developed a package, helloworld == 0.7.10. You developed a package atop helloworld == 0.7.10, and then I later rev helloworld to 0.8. Let's start by considering the best case situation: that I am still offering support for the 0.7.10 version and (ex.) patch it to 0.7.11 at a later date, even while maintaining 0.8 separately. This allows your downstream consumers to accept patches without losing compatibility with your package, even when using ~=. And, you are "guaranteed" that future patches won't break your current implementation or require maintenance in event of mistakes - I’m doing that work for you. Of course, this only works if I go to the trouble of maintaining both 0.7 and 0.8, but this does seem advantageous... So, why does it break? Well, one example. What happens if you specify helloworld ~= 0.7.10 in your package, but another upstream dependency of yours (that isn't me!) upgrades, and now uses helloworld >= 0.8.1? Since you relied on a minor version's compatibility requirements, there's now a conflict. Worse, what if a consumer of your package wants to use new features from helloworld == 0.8.1 that aren't available in 0.7? They can't. But remember, a semver-compliant package built on helloworld v0.7 should be just fine running on helloworld v0.8 - there should be no breaking changes. It's your specification of ~= that is the most likely to have broken a dependency or consumer need for no good reason - not helloworld. If instead you had used helloworld >= 0.7.10, then you would've allowed for the installation of 0.8, even when your package was not explicitly written using it. If 0.8 doesn't break your implementation, which is supposed to be true, then allowing its use would be the correct manual decision anyway. You don't even necessarily need to know what I'm doing or how I'm writing 0.8, because minor versions should only be adding functionality - functionality you're obviously not using, but someone else might want to. The chain of trust is leaky, though. As the maintainer of helloworld, I might not know for certain whether my revision 0.8 introduces bugs or potential issues that could interfere with the usage of a package originally written for 0.7. Sure, by naming it 0.8 and not 1.0, I claim that I will (and should be expected to!) provide patches to helloworld as needed to address failures to maintain backwards-compatibility. But in practice, that might become untenable, or simply not happen, especially in the very unusual case (joke) where a package does not have rigorous unit and regression tests. So your trade, as a package developer and maintainer, boils down to this: Do you trust me, the maintainer of helloworld, to infrequently release major revs, and to ensure that minor revs do not risk breaking backwards-compatibility, more than you need your downstream consumers to be guaranteed a stable release? Using >= means: (Rare): If I release a major rev, you'll need to update your requirements file to specify which major rev you are referring to. 
(Uncommon): If I release a minor rev, but a bug, review, regression failure, etc. cause that minor rev to break packages built atop old versions, you'll either need to update your requirements file to specify which minor rev you are referring to, or wait for me to patch it further. (What if I decline to patch it further, or worse, take my sweet time doing so?) Using ~= means: If any of your upstream packages end up using a different minor revision than the one your package was originally built to use, you risk a dependency conflict between you and your upstream providers. If any of your downstream consumers want or need to use features introduced in a later minor revision of a package you depend upon, they can't - not without overriding your requirements file and hoping for the best. If I stop supporting a minor revision of a package you use, and release critical patches on a future minor rev only, you and your consumers won't get them. (What if these are important, ex. security updates? urllib3 could be a great example.) If those 'rare' or 'uncommon' events are so disruptive to your project that you just can't conceive of a world in which you'd want to take that risk, use ~=, even at the cost of convenience/security to your downstream consumers. But if you want to give downstream consumers the most flexibility possible, don't mind dealing with the occasional breaking-change event, and want to make sure your own code typically runs on the most recent version it can, using >= is the safer way to go. It's usually the right decision, anyway. For this reason, I expect most maintainers deliberately use >= most of the time. Or maybe it's force of habit. Or maybe I'm just reading too much into it.
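The practical difference between the two specifiers can be checked directly with the packaging library; a small sketch using the made-up helloworld version numbers from this answer:

from packaging.specifiers import SpecifierSet

compatible = SpecifierSet("~=0.7.10")   # per PEP 440, equivalent to >=0.7.10, ==0.7.*
minimum = SpecifierSet(">=0.7.10")

print("0.7.11" in compatible, "0.7.11" in minimum)  # True True  (patch rev accepted by both)
print("0.8.0" in compatible, "0.8.0" in minimum)    # False True (minor rev only accepted by >=)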
10
13
72,251,787
2022-5-15
https://stackoverflow.com/questions/72251787/permission-artifactregistry-repositories-downloadartifacts-denied-on-resource
While the artifact repository was created successfully, running a docker push to push the image to the Google Artifact Registry fails with a permissions error even after granting all artifact permissions to the account I am using on the gcloud CLI. Command used to push image: docker push us-central1-docker.pkg.dev/project-id/repo-name:v2 Error message: The push refers to repository [us-central1-docker.pkg.dev/project-id/repo-name] 6f6f4a472f31: Preparing bc096d7549c4: Preparing 5f70bf18a086: Preparing 20bed28d4def: Preparing 2a3255c6d9fb: Preparing 3f5d38b4936d: Waiting 7be8268e2fb0: Waiting b889a93a79dd: Waiting 9d4550089a93: Waiting a7934564e6b9: Waiting 1b7cceb6a07c: Waiting b274e8788e0c: Waiting 78658088978a: Waiting denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/project-id/locations/us-central1/repositories/repo-name" (or it may not exist)
I was able to recreate your use case. This happens when you try to push an image to a repository whose specific hostname (associated with its repository location) has not yet been added to the credential helper configuration for authentication. You may refer to Setting up authentication for Docker, as also provided by @DazWilkin in the comments, for more details. In my example, I was trying to push an image to a repository with a location of us-east1 and got the same error since it was not yet added to the credential helper configuration. After I ran the authentication using the command below (specifically for us-east1, since that is the location of my repository), the image was successfully pushed: gcloud auth configure-docker us-east1-docker.pkg.dev QUICK TIP: You can get the authentication command specific to your repository when you open the desired repository in the console and then click on SETUP INSTRUCTIONS.
27
58
72,225,191
2022-5-13
https://stackoverflow.com/questions/72225191/how-can-i-apply-gettext-translations-to-string-literals-in-case-statements
I need to add gettext translation to all the string literals in our code, but it doesn't work with literals in case statements. This failed attempt gives SyntaxError: Expected ':': from gettext import gettext as _ direction = input(_('Enter a direction: ')) # <-- This works match direction: case _('north'): # <-- This fails adj = 1, 0 case _('south'): adj = -1, 0 case _('east'): adj = 0, 1 case _('west'): adj = 0, -1 case _: raise ValueError(_('Unknown direction')) What does the error mean and how can the directions be marked for translation?
What does the error mean? The grammar for the match/case statement treats the _ as a wildcard pattern. The only acceptable token that can follow is a colon. Since your code uses an open parenthesis, a SyntaxError is raised. How to fix it Switch from a literal pattern such as case "north": ... to a value pattern such as case Directions.north: ... which uses the dot operator. The translation can then be performed upstream, outside of the case statement: from gettext import gettext as _ class Directions: north = _('north') south = _('south') east = _('east') west = _('west') direction = input(_('Enter a direction: ')) match direction: case Directions.north: adj = 1, 0 case Directions.south: adj = -1, 0 case Directions.east: adj = 0, 1 case Directions.west: adj = 0, -1 case _: raise ValueError(_('Unknown direction')) Not only do the string literals get translated, the case statements are more readable as well. More advanced and dynamic solution The above solution only works if the choice of language is constant. If the language can change (perhaps in an online application serving users from different countries), dynamic lookups are needed. First we need a descriptor to dynamically forward value pattern attribute lookups to function calls: class FuncCall: "Descriptor to convert fc.name to func(name)." def __init__(self, func): self.func = func def __set_name__(self, owner, name): self.name = name def __get__(self, obj, objtype=None): return self.func(self.name) We use it like this: class Directions: north = FuncCall(_) # calls _('north') for every lookup south = FuncCall(_) east = FuncCall(_) west = FuncCall(_) def convert(direction): match direction: case Directions.north: return 1, 0 case Directions.south: return -1, 0 case Directions.east: return 0, 1 case Directions.west: return 0, -1 case _: raise ValueError(_('Unknown direction')) Here is a sample session: >>> set_language('es') # Spanish >>> convert('sur') (-1, 0) >>> set_language('fr') # French >>> convert('nord') (1, 0) Namespaces for the Value Pattern Any namespace with dotted lookup can be used in the value pattern: SimpleNamespace, Enum, modules, classes, instances, etc. Here a class was chosen because it is simple and will work with the descriptor needed for the more advanced solution. Enum wasn't considered because it is much more complex and because its metaclass logic interferes with the descriptors. Also, Enum is intended for giving symbolic names to predefined constants rather than for dynamically computed values like we're using here.
18
18
72,204,649
2022-5-11
https://stackoverflow.com/questions/72204649/single-file-history-format-library-for-binary-files
My application is going to edit a bunch of large files, completely unrelated to each other (belonging to different users), and I need to store checkpoints of the previous state of the files. Delta compression should work extremely well on this file format. I only need a linear history, not branches or merges. There are low-level libraries that give part of the solution, for example xdelta3 sounds like a good binary diff/patch system. RCS actually seems like a pretty close match to my problem, but doesn't handle binary files well. git provides a complete solution to my problem, but is an enormous suite of programs, and its storage format is an entire directory. Is there anything less complicated than git that would: work on binary files perform delta compression let me commit new "newest" versions let me recall old versions Bonus points if it would: have a single-file storage format be available as a C, C++, or Python library I can't even find the right combination of words to google for this category of program, so that would also be helpful.
From the RCS manual (1. Overview): [RCS] can handle text as well as binary files, although functionality is reduced for the latter. RCS seems a good option worth trying. I work for a Foundation which has been using RCS to keep under version control tens of thousands of completely unrelated files (git or hg are not an option). Mostly text, but also some media files, which are binary in nature. RCS does work quite well with binary files; just make sure not to use the Substitute mode options, to avoid inadvertently substituting binary bits that look like the $Id$ keyword. To see if this could work for you, you could for example try with a Photoshop image: put it under version control with RCS, then change a part, or add a layer, and commit the change. You could then verify how well RCS can manage binary files for you. RCS has been serving us quite well. It is well maintained, reliable, predictable, and definitely worth a try.
6
4
72,249,268
2022-5-15
https://stackoverflow.com/questions/72249268/pandas-drop-rows-lower-then-others-in-all-colums
I have a dataframe with a lot of rows with numerical columns, such as: A B C D 12 7 1 0 7 1 2 0 1 1 1 1 2 2 0 0 I need to reduce the size of the dataframe by removing those rows that have another row with all values bigger. In the previous example I need to remove the last row, because the first row has all values bigger (in case of duplicate rows I need to keep one of them). And return this: A B C D 12 7 1 0 7 1 2 0 1 1 1 1 My fastest solution so far is the following: def complete_reduction(df, columns): def _single_reduction(row): df["check"] = True for col in columns: df["check"] = df["check"] & (df[col] >= row[col]) drop_index.append(df["check"].sum() == 1) df = df.drop_duplicates(subset=columns) drop_index = [] df.apply(lambda x: _single_reduction(x), axis=1) df = df[numpy.array(drop_index).astype(bool)] return df Any better ideas? Update: A new solution has been found here https://stackoverflow.com/a/68528943/11327160 but I hope for something faster.
A more memory-efficient and faster solution than the ones proposed so far is to use Numba. There is no need to create huge temporary arrays with Numba. Moreover, it is easy to write a parallel implementation that makes use of all CPU cores. Here is the implementation: import numba as nb import numpy as np @nb.njit def is_dominated(arr, k): n, m = arr.shape for i in range(n): if i != k: dominated = True for j in range(m): if arr[i, j] < arr[k, j]: dominated = False if dominated: return True return False # Precompile the function to native code for the most common types @nb.njit(['(i4[:,::1],)', '(i8[:,::1],)'], parallel=True, cache=True) def dominated_rows(arr): n, m = arr.shape toRemove = np.empty(n, dtype=np.bool_) for i in nb.prange(n): toRemove[i] = is_dominated(arr, i) return toRemove # Special case df2 = df.drop_duplicates() # Main computation result = df2[~dominated_rows(np.ascontiguousarray(df2.values))] Benchmark The input test is two random dataframes of shape 20000x5 and 5000x100 containing small integers (i.e. [0;100[). Tests have been done on a (6-core) i5-9600KF processor with 16 GiB of RAM on Windows. The version of @BingWang is the updated one from 2022-05-24. Here are the performance results of the approaches proposed so far: Dataframe with shape 5000x100 - Initial code: 114_340 ms - BENY: 2_716 ms (consumes a few GiB of RAM) - Bing Wang: 2_619 ms - Numba: 303 ms <---- Dataframe with shape 20000x5 - Initial code: (too long) - BENY: 8.775 ms (consumes a few GiB of RAM) - Bing Wang: 578 ms - Numba: 21 ms <---- This solution is respectively about 9 to 28 times faster than the fastest one (of @BingWang). It also has the benefit of consuming far less memory. Indeed, the @BENY implementation consumes a few GiB of RAM while this one (and the one of @BingWang) consumes no more than a few MiB for this use case. The speed gain over the @BingWang implementation is due to the early stop, parallelism and the native execution. One can see that this Numba implementation and the one of @BingWang are quite efficient when the number of columns is small. This makes sense for the @BingWang one since the complexity should be O(N(logN)^(d-2)) where d is the number of columns. As for Numba, it is significantly faster because most rows are dominated in the second random dataset, causing the early stop to be very effective in practice. I think the @BingWang algorithm might be faster when most rows are not dominated. However, this case should be very uncommon on dataframes with few columns and a lot of rows (at least, clearly on uniformly random ones).
8
7
72,199,498
2022-5-11
https://stackoverflow.com/questions/72199498/error-in-importing-cats-vs-dogs-dataset-in-google-colab
While trying to download the "Cats_vs_Dogs" TensorFlow dataset using the tfds module, I get the following error 👇 DownloadError Traceback (most recent call last) <ipython-input-2-244305a07c33> in <module>() 7 split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], 8 with_info=True, ----> 9 as_supervised=True, 10 ) 21 frames /usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/download/downloader.py in _assert_status(response) 257 if response.status_code != 200: 258 raise DownloadError('Failed to get url {}. HTTP code: {}.'.format( --> 259 response.url, response.status_code)) DownloadError: Failed to get url https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip. HTTP code: 404. The code I have used is: import tensorflow_datasets as tfds tfds.disable_progress_bar() # split the data manually into 80% training, 10% testing, 10% validation (raw_train, raw_validation, raw_test), metadata = tfds.load( 'cats_vs_dogs', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True, ) It worked yesterday but suddenly gave an error today.
You can add this before loading to set the new URL: setattr(tfds.image_classification.cats_vs_dogs, '_URL', "https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_5340.zip")
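Putting it together with the load call from the question, the override just has to run before tfds.load triggers the download. A sketch (not re-tested here), assuming the tfds.image_classification.cats_vs_dogs module path from the answer above:

```python
import tensorflow_datasets as tfds

# Point the builder at the renamed archive before any download starts.
setattr(
    tfds.image_classification.cats_vs_dogs,
    "_URL",
    "https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_5340.zip",
)

(raw_train, raw_validation, raw_test), metadata = tfds.load(
    "cats_vs_dogs",
    split=["train[:80%]", "train[80%:90%]", "train[90%:]"],
    with_info=True,
    as_supervised=True,
)
```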
8
12
72,249,616
2022-5-15
https://stackoverflow.com/questions/72249616/module-dlt-has-no-attribute-table-databricks-and-delta-live-tables
I am new to Databricks and Delta Live Tables. I have a problem with creating a Delta Live Table in Python. How can I create a Delta Live Table from JSON files in the filestore?
It is a decorator, so I think you also need a function after it. Meaning: @dlt.table(comment="your comment") def get_bronze(): df = spark.sql("""select * from myDb.MyRegisterdTable""") # If you want to check logs: # print("bronze", df.take(5), "end") return df In the silver function you can then read it as: @dlt.table def get_silver(): df = dlt.read("get_bronze") [..do_stuff...] return df Also, from your screenshots I am not sure: are you running all this as a pipeline, or are you trying to run a notebook? The latter does not work.
7
3
72,236,497
2022-5-14
https://stackoverflow.com/questions/72236497/how-to-deal-with-pythons-requirements-txt-while-using-a-docker-development-envi
Suppose I wrote a docker-compose.dev.yml file to set the development environment of a Flask project (web application) using Docker. In docker-compose.dev.yml I have set up two services, one for the database and one to run the Flask application in debug mode (which allows me to do hot changes without having to recreate/restart the containers). This allows everyone on the development team to use the same development environment very easily. However, there is a problem: it is evident that while developing an application it is necessary to install libraries, as well as to list them in the requirements.txt file (in the case of Python). For this I only see two alternatives using a Docker development environment: Enter the console of the container where the Flask application is running and use the pip install ... and pip freeze > requirements.txt commands. Manually write the dependencies to the requirements.txt file and rebuild the containers. The first option is a bit laborious, while the second is a bit "dirty". Is there any more suitable option than the two mentioned alternatives? Edit: I don't know if I'm asking something that doesn't make sense, but I'd appreciate if someone could give me some guidance on what I'm trying to accomplish.
For something like this I use multi-layer docker images. Disclaimer: The below examples are not tested. Please consider them a mere description written in pseudo code ;) As a very simple example, this approach could look like this: # Make sure all layers are based on the same python version. FROM python:3.10-slim-buster as base # The actual dev/test image. # This is where you can install additional dev/test requirements. FROM base as test COPY ./requirements_test.txt /code/requirements_test.txt RUN python -m pip install --no-cache-dir --upgrade -r /code/requirements_test.txt ENTRYPOINT ["python"] # Assuming you run tests using pytest. CMD ["-m", "pytest", "..."] # The actual production image. FROM base as runtime COPY ./requirements.txt /code/requirements.txt RUN python -m pip install --no-cache-dir --upgrade -r /code/requirements.txt ENTRYPOINT ["python"] # Assuming you want to run main.py as a script. CMD ["/path/to/main.py"] With requirements.txt like this (just an example): requests With requirements_test.txt like this (just an example): -r requirements.txt pytest In your docker-compose.yml file you only need to pass the --target (of the multi-layered Dockerfile, in this example: test and runtime) like this (not complete): services: service: build: context: . dockerfile: ./Dockerfile target: runtime # or test for running tests A final thought: As I mentioned in my comment, a much better approach for dealing with such dependency requirements might be using tools like poetry or pip-tools - or whatever else is out there. Update 2022-05-23: As mentioned in the comment, for the sake of completeness and because this approach might be close to a possible solution (as requested in the question): An example of a fire-and-forget approach could look like this - assuming the container has a specific name (<container_name>): # This requires mounting the file 'requirements_dev.txt' into the container - as a volume. docker exec -it <container_name> python -m pip install --upgrade -r requirements_dev.txt This command simply installs new dependencies into the running container.
5
6
72,230,151
2022-5-13
https://stackoverflow.com/questions/72230151/how-to-open-a-secure-channel-in-python-grpc-client-without-a-client-ssl-certific
I have a grpc server (in Go) that has a valid TLS certificate and does not require client side TLS. For some reason I can not implement the client without mTLS in Python, even though I can do so in Golang. In Python I have os.environ["GRPC_VERBOSITY"] = "DEBUG" # os.environ["GRPC_DEFAULT_SSL_ROOTS_FILE_PATH"] = "/etc/ssl/certs/ca-bundle.crt" channel = grpc.secure_channel(ADDR, grpc.ssl_channel_credentials()) grpc.channel_ready_future(channel).result(timeout=10) This gives me the following error D0513 08:02:08.147319164 21092 security_handshaker.cc:181] Security handshake failed: {"created":"@1652446928.147311309","description":"Handshake failed","file":"src/core/lib/security/transport/security_handshaker.cc","file_line":377,"tsi_code":10,"tsi_error":"TSI_PROTOCOL_FAILURE"} I can get this to work if I use SSL certificates by uncommenting the commented out line. I know for a fact that my server does not request, require or verify client certificates as The following Go code work perfectly conn, err := grpc.DialContext( ctx, gRPCAddr, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")), ) dummyClient := dummy.NewDummyServiceClient(conn) if _, err := dummyClient.Ping(context.Background(), &dummy.PingRequest{ Ping: "go client ping", }); err != nil { return fmt.Errorf("failed to ping: %w", err) }
https://grpc.github.io/grpc/python/_modules/grpc.html#secure_channel has the docs for channel = grpc.secure_channel(ADDR, grpc.ssl_channel_credentials()). This function relies on the Channel class; see the docs at https://grpc.github.io/grpc/python/_modules/grpc/aio/_channel.html. Basically, class Channel wraps C code to provide a secure channel. That wrapped C code expects the certificate. If you can implement in C, it might be easiest to just change the C code.
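If the goal is simply a TLS channel without any client certificate, one workaround worth trying (my sketch, not from the answer above) is to hand grpc.ssl_channel_credentials an explicit CA bundle, for example the one shipped with the certifi package, instead of relying on the default root-certificate lookup that seems to be failing:

```python
import certifi
import grpc

ADDR = "my.server.example:443"  # placeholder for the real endpoint from the question

# Load a CA bundle explicitly; only root certificates are passed, so no
# client certificate or key is involved (i.e. no mTLS).
with open(certifi.where(), "rb") as f:
    credentials = grpc.ssl_channel_credentials(root_certificates=f.read())

channel = grpc.secure_channel(ADDR, credentials)
grpc.channel_ready_future(channel).result(timeout=10)
```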
8
4
72,200,552
2022-5-11
https://stackoverflow.com/questions/72200552/fastapi-firebase-authentication-with-jwts
I'm trying to use fastapi to return some basic ML models to users. Currently, I secure user details with firebase auth. I want to use the JWT's users have when using the basic application to authenticate their request for the ML model. With fastapi, there doesn't seem to be a straightforward answer to doing this. I've followed two main threads as ways to work out how to do this, but am a bit lost as to how I can simply take the JWT from the header of a request and check it against firebase admin or whathave you? Following this tutorial and using this package, I end up with something like this, https://github.com/tokusumi/fastapi-cloudauth . This doesn't really do anything - it doesn't authenticate the JWT for me, bit confused as to if this package is actually worthwhile? from fastapi import FastAPI, HTTPException, Header,Depends from fastapi.middleware.cors import CORSMiddleware from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer from fastapi_cloudauth.firebase import FirebaseCurrentUser, FirebaseClaims app = FastAPI() security = HTTPBearer() origins = [ xxxx ] app.add_middleware( xxxx ) get_current_user = FirebaseCurrentUser( project_id=os.environ["PROJECT_ID"] ) @app.get("/user/") def secure_user(current_user: FirebaseClaims = Depends(get_current_user)): # ID token is valid and getting user info from ID token return f"Hello, {current_user.user_id}" Alternatively, looking at this, https://github.com/tiangolo/fastapi/issues/4768 It seems like something like this would work, security = HTTPBearer() api = FastAPI() security = HTTPBearer() firebase_client = FirebaseClient( firebase_admin_credentials_url=firebase_test_admin_credentials_url # ... ) user_roles = [test_role] async def firebase_authentication(token: HTTPAuthorizationCredentials = Depends(security)) -> dict: user = firebase_client.verify_token(token.credentials) return user async def firebase_authorization(user: dict = Depends(firebase_authentication)): roles = firebase_client.get_user_roles(user) for role in roles: if role in user_roles: return user raise HTTPException(detail="User does not have the required roles", status_code=HTTPStatus.FORBIDDEN) @api.get("/") async def root(uid: str = Depends(firebase_authorization)): return {"message": "Successfully authenticated & authorized!"} But honestly I'm a bit confused about how I would set up the firebase environment variables, what packages I would need (firebaseadmin?) Would love some helpers, thanks!
I hope this will help: Create functions to work with Firebase Admin, and create credentials from Firebase as a JSON file: from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials from fastapi import Depends, HTTPException, status, Response from firebase_admin import auth, credentials, initialize_app credential = credentials.Certificate('./key.json') initialize_app(credential) def get_user_token(res: Response, credential: HTTPAuthorizationCredentials=Depends(HTTPBearer(auto_error=False))): if credential is None: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Bearer authentication is needed", headers={'WWW-Authenticate': 'Bearer realm="auth_required"'}, ) try: decoded_token = auth.verify_id_token(credential.credentials) except Exception as err: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail=f"Invalid authentication from Firebase. {err}", headers={'WWW-Authenticate': 'Bearer error="invalid_token"'}, ) res.headers['WWW-Authenticate'] = 'Bearer realm="auth_required"' return decoded_token Then put it inside your FastAPI main function: from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials from fastapi import Depends, HTTPException, status, Response, FastAPI from firebase_admin import auth, credentials, initialize_app credential = credentials.Certificate('./key.json') initialize_app(credential) def get_user_token(res: Response, credential: HTTPAuthorizationCredentials=Depends(HTTPBearer(auto_error=False))): if credential is None: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Bearer authentication is needed", headers={'WWW-Authenticate': 'Bearer realm="auth_required"'}, ) try: decoded_token = auth.verify_id_token(credential.credentials) except Exception as err: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail=f"Invalid authentication from Firebase. {err}", headers={'WWW-Authenticate': 'Bearer error="invalid_token"'}, ) res.headers['WWW-Authenticate'] = 'Bearer realm="auth_required"' return decoded_token app = FastAPI() @app.get("/api/") async def hello(): return {"msg":"Hello, this is API server"} @app.get("/api/user_token") async def hello_user(user = Depends(get_user_token)): return {"msg":"Hello, user","uid":user['uid']} P.S. Do not forget to install the requirements: pip3 install firebase_admin
6
13
72,277,284
2022-5-17
https://stackoverflow.com/questions/72277284/retrieve-pytest-results-programmatically-when-run-via-pytest-main
I'd like to run pytest and then store results and present them to users on demand (e.g. store pytest results to a db and then expose them through web service). I could run pytest from the command line with an option to save the results report into file, then find and parse the file, but it feels silly to have the results in a (pytest) python app, then store them to a file and then instantly look for the file, and parse it back into python code for further processing. I know I can run pytest programmatically via pytest.main(args), however, it only returns some exit code, and not details about test results. How can I retrieve the results when using pytest.main()? I'm looking for something like: args = ARGUMENTS ret_code = pytest.main(args=args) # pytest.main() as is only returns trivial return code my_own_method_to_process(pytest.results) # how to retrieve any kind of pytest.results object that would contain test execution results data (list of executed tests, pass fail info, etc as pytest is displaying into console or saves into file reports) There are couple of similar questions, but always with some deviation that doesn't work for me. I simply want to run pytest from my code, and - whatever format the output would be - directly grab it and further process it. Note: I'm in a corporate environment, where installing new packages (i.e. pytest plugins) is limited, so I'd like to achieve this without installing any other module/pytest plugin into my environment.
Write a small plugin that collects and stores reports for each test. Example: import time import pytest class ResultsCollector: def __init__(self): self.reports = [] self.collected = 0 self.exitcode = 0 self.passed = 0 self.failed = 0 self.xfailed = 0 self.skipped = 0 self.total_duration = 0 @pytest.hookimpl(hookwrapper=True) def pytest_runtest_makereport(self, item, call): outcome = yield report = outcome.get_result() if report.when == 'call': self.reports.append(report) def pytest_collection_modifyitems(self, items): self.collected = len(items) def pytest_terminal_summary(self, terminalreporter, exitstatus): print(exitstatus, dir(exitstatus)) self.exitcode = exitstatus.value self.passed = len(terminalreporter.stats.get('passed', [])) self.failed = len(terminalreporter.stats.get('failed', [])) self.xfailed = len(terminalreporter.stats.get('xfailed', [])) self.skipped = len(terminalreporter.stats.get('skipped', [])) self.total_duration = time.time() - terminalreporter._sessionstarttime def run(): collector = ResultsCollector() pytest.main(plugins=[collector]) for report in collector.reports: print('id:', report.nodeid, 'outcome:', report.outcome) # etc print('exit code:', collector.exitcode) print('passed:', collector.passed, 'failed:', collector.failed, 'xfailed:', collector.xfailed, 'skipped:', collector.skipped) print('total duration:', collector.total_duration) if __name__ == '__main__': run()
10
9
72,277,275
2022-5-17
https://stackoverflow.com/questions/72277275/how-to-monitor-per-process-network-usage-in-python
I'm looking to troubleshoot my internet issue so I need a way to track both my latency and which application is using how much network bandwidth. I've already sorted out checking latency, but now I need a way to monitor each process' network usage (KB/s), like how it appears in Windows Task Manager. Before you suggest a program, unless it's able to record the values with a timestamp then that's not what I'm looking for. I'm asking for a Pythonic way because I need to record the network bandwidth and latency values at the same time so I can figure out if a specific process is causing latency spikes. So here's the info I need: Time | Process ID | Process Name | Down Usage | Up Usage | Network Latency | Also, please don't link to another Stackoverflow question unless you know their solution works. I've looked through plenty already and none of them work, which is why I'm asking again.
Following the third section of this guide provided me with all of the information listed in the post, minus latency. Given that you said you already had measuring latency figured out, I assume this isn't an issue. Logging this to csv/json/whatever is pretty easy, as all of the information is stored in panda data frames. As this shows the time the process was created, you can use datetime to generate a new timestamp at the time of logging. I tested this by logging to a csv after the printing_df variable was initialized, and had no issues.
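For the logging part the answer describes, a minimal sketch (mine; printing_df is assumed to be the per-process DataFrame built by the guide's code, and latency_ms is whatever your existing latency measurement returns) could append timestamped snapshots to a CSV:

```python
import os
from datetime import datetime
import pandas as pd

def log_snapshot(printing_df: pd.DataFrame, latency_ms: float, path: str = "net_usage_log.csv") -> None:
    snapshot = printing_df.copy()
    snapshot["timestamp"] = datetime.now().isoformat()
    snapshot["latency_ms"] = latency_ms
    # Append to the CSV, writing the header only the first time.
    snapshot.to_csv(path, mode="a", header=not os.path.exists(path), index=False)
```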
7
4
72,250,629
2022-5-15
https://stackoverflow.com/questions/72250629/draw-text-with-background-color
I want to know how can I draw text like this check image As you can see text is on a green image and text has pink color background My code, this is part of my code I'm using PIL draw = ImageDraw.Draw(background) font = ImageFont.truetype("assets/font2.ttf", 40) font2 = ImageFont.truetype("assets/font2.ttf", 70) arial = ImageFont.truetype("assets/font2.ttf", 30) name_font = ImageFont.truetype("assets/font.ttf", 30) para = textwrap.wrap(title, width=32) j = 0 draw.text( (10, 10), f"{h}", fill="red", font=name_font ) draw.text( (600, 150), "NOW PLAYING", fill="white", stroke_width=2, stroke_fill="white", font=font2, ) Thanks in advance :-)
You can use the draw.textbbox method to get a bounding box for your text string and fill it using the draw.rectangle method. from PIL import Image, ImageDraw, ImageFont image = Image.new("RGB", (500, 100), "white") font = ImageFont.truetype("segoeui.ttf", 40) draw = ImageDraw.Draw(image) position = (10, 10) text = "Hello world" bbox = draw.textbbox(position, text, font=font) draw.rectangle(bbox, fill="red") draw.text(position, text, font=font, fill="black") image.show() If you want a larger margin for the background rectangle, you can adjust the returned bounding box like so: left, top, right, bottom = draw.textbbox(position, text, font=font) draw.rectangle((left-5, top-5, right+5, bottom+5), fill="red") draw.text(position, text, font=font, fill="black")
8
20
72,215,976
2022-5-12
https://stackoverflow.com/questions/72215976/pypdf2-errors-pdfreaderror-pdf-starts-with-but-pdf-expected
I have a folder containing a lot of sub-folders, with PDF files inside. It's a real mess to find information in these files, so I'm making a program to parse these folders and files, searching for a keyword in the PDF files, and returning the names of the PDF files containing the keyword. And it's working. Almost, actually. I have this error: PyPDF2.errors.PdfReadError: PDF starts with '♣▬', but '%PDF-' expected when my program reaches some folders (hard to know which one exactly). From my point of view, all the PDF files in my folders are the same, so I don't understand why my program works with some files and doesn't work with others. Thank you in advance for your responses.
disclaimer: I am the author of borb, the library mentioned in this answer PDF documents caught in the wild will sometimes start with non-pdf bytes (a header that is not really part of the PDF spec). This can cause all kinds of problems. PDF will (internally) keep track of all the byte offsets of objects in the file (e.g. "object 10 starts at byte 10202"). This header makes it harder to know where an object starts. Do we start counting at the start of the file? Or at the start of where the file behaves like a PDF? If you just want to extract text from a PDF (to be able to check it for content and keywords), you can try to use borb. borb will look for the start of the PDF within the first 1MB of the file (thus potentially ignoring your faulty header). If this turns out to corrupt the XREF (cross reference table, containing all byte addresses of objects) it will simply build a new one. This is an example of how to extract text from a PDF using borb: import typing from borb.pdf.document.document import Document from borb.pdf.pdf import PDF from borb.toolkit.text.simple_text_extraction import SimpleTextExtraction def main(): # read the Document doc: typing.Optional[Document] = None l: SimpleTextExtraction = SimpleTextExtraction() with open("output.pdf", "rb") as in_file_handle: doc = PDF.loads(in_file_handle, [l]) # check whether we have read a Document assert doc is not None # print the text on the first Page print(l.get_text_for_page(0)) if __name__ == "__main__": main() You can find more examples in the examples repository.
5
6
72,262,763
2022-5-16
https://stackoverflow.com/questions/72262763/detect-whether-package-is-being-run-as-a-program-within-init-py
I have a Python package with an __init__.py that imports some things to be exposed as the package API. # __init__.py from .mymodule import MyClass # ... I also want to be able to use the package as a command-line application, as in python -m mypackage, so I have a __main__.py file for that purpose: # __main__.py if __name__ == '__main__': from .main import main main() So far so good. The problem is that, when the package is run as a program like this, I want to be able to do some stuff before importing any of the submodules - namely changing some environment variables before some third-party dependencies are loaded. I do not know how to do this, at least not in a reasonable way. Ideally, I would like to do something like: # __init__.py def thePackageIsRunningAsAnApplication(): # ??? def prepareEnvironment(): # ... if thePackageIsRunningAsAnApplication(): prepareEnvironment() from .mymodule import MyClass # ... The problem is I don't think thePackageIsRunningAsAnApplication() can be implemented. The usual __name__ == '__main__' does not work here, because the main module being run is __main__.py, not __init__.py. In fact, I would prefer to define and run prepareEnvironment within __main__.py, but I don't know how to get that to run before the inner modules are loaded by __init__.py. I might (not sure, actually) work around it by lazily loading dependencies on my module, or somehow delaying the internal module loading or something, but I would prefer to avoid doing something like that just for this. EDIT: Thinking more about it, lazy loading probably would not work either. In the example, MyClass is, well, a class, not a submodule, so I cannot lazily load it. Moreover, MyClass happens to inherit from a class from that third-party dependency I was mentioning, so I cannot even define it without loading it.
It might make sense to add a separate entry point for running your code as a script, rather than using __main__.py, which, as you've noticed, can only be run after the package's __init__.py is fully loaded. A simple script like run_mypackage.py located at the top level could contain the environment variable tweaking code, and then could import and run the package afterwards. def prepare_environment(): ... if __name__ == "__main__": prepare_environment() # adjust the environment first from mypackage.main import main # then load the package afterwards main()
5
1
72,280,762
2022-5-17
https://stackoverflow.com/questions/72280762/pip-broke-after-downlading-python-certifi-win32
I have downloaded Python for the first time on a new computer (ver 3.10.4). I then downloaded the package python-certifi-win32, after someone suggested it as a solution to an SSL certificate problem in a question similar to a problem I had. Since then, pip has completely stopped working, to the point where I cannot even run pip --version. Every time the same error is printed; it is mostly junk (just a deep stack trace), but the file at the end is different. start of the printed log: Traceback (most recent call last): File "C:\Users\---\AppData\Local\Programs\Python\Python310\lib\importlib\_common.py", line 89, in _tempfile os.write(fd, reader()) File "C:\Users\---\AppData\Local\Programs\Python\Python310\lib\importlib\abc.py", line 371, in read_bytes with self.open('rb') as strm: File "C:\Users\---\AppData\Local\Programs\Python\Python310\lib\importlib\_adapters.py", line 54, in open raise ValueError() ValueError During handling of the above exception, another exception occurred: last row of the printed log: PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\-----\\AppData\\Local\\Temp\\tmpunox3fhw'
I found the answer in another question - PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: after installing python-certifi-win32. Basically, you should remove two files that initialize python-certifi-win32 when running pip. The files are located in the directory: C:\Users\<username>\AppData\Local\Programs\Python\Python310\Lib\site-packages and their names are: python-certifi-win32-init.pth distutils-precedence.pth Shoutout to Richard from the mentioned post :)
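Since pip is broken but the interpreter itself still runs, the cleanup can also be scripted. A small sketch (mine, untested on Windows) that disables the two files named above by renaming them, which is easier to undo than deleting:

```python
import site
from pathlib import Path

# File names come from the answer above; the site-packages directories are
# looked up instead of hard-coding the Windows path.
for sp in site.getsitepackages():
    for name in ("python-certifi-win32-init.pth", "distutils-precedence.pth"):
        pth = Path(sp) / name
        if pth.exists():
            pth.rename(pth.with_name(pth.name + ".bak"))  # keep a backup instead of deleting
            print("disabled", pth)
```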
9
12
72,221,091
2022-5-12
https://stackoverflow.com/questions/72221091/im-trying-to-type-annotate-around-boto3-but-module-botocore-client-has-no-at
I'm writing my own wrapper around boto3 for quick firing functions. I'm trying to type annotate what boto3.session().client('ec2') returns. Debugger says it's <class 'botocore.client.EC2'>, but if I write it down like that, python crashes with a runtime error ec2: botocore.client.EC2 AttributeError: module 'botocore.client' has no attribute 'EC2' Removing type annotation works for runtime, but it makes linting very limited. Is there a reasonably fast way or a hack to get typing working with this boto3 case? The code I'm talking about is following: class AWS_Client: # debugger says it's <class 'botocore.client.EC2'>, # but mypy says this class does not exist ec2: botocore.client.EC2 asg: botocore.client.AutoScaling region: str instance_id: str def __init__(self, profile_name: Optional[str] = None, instance_id: str = '', region_name: str = '', **kwargs) -> None: global config self.instance_id = instance_id or ec2_meta('instance-id').text self.region = region_name or ec2_meta( 'placement/availability-zone').text[:-1] boto3.set_stream_logger('botocore', level=logging.DEBUG) self.session = call_partial( boto3.Session, region_name=self.region, profile_name=profile_name, **kwargs) self.ec2 = self.session.client('ec2', region_name=self.region, **kwargs) self.asg = self.session.client( 'autoscaling', region_name=self.region, **kwargs) def get_tags(self) -> Dict[str, str]: self.tags = self.ec2.describe_tags(Filters=[{ 'Name': 'resource-id', 'Values': [self.instance_id] }])['Tags'] return self.tags aws = AWS_Client() print(aws.get_tags())
Try mypy-boto3 mypy plugin. You can install it via python -m pip install 'boto3-stubs[ec2]' import boto3 from mypy_boto3_ec2 import EC2Client def foo(ec2_client: EC2Client) -> None: ... ec2_client = boto3.client("ec2") foo(ec2_client) # OK
6
7
72,280,858
2022-5-17
https://stackoverflow.com/questions/72280858/rotate-axis-labels
I have a plot that looks like this (this is the famous Wine dataset): As you can see, the x-axis labels overlap and thus I need to be rotated. NB! I am not interested in rotating the x-ticks (as explained here), but the label text, i.e. alcohol, malic_acid, etc. The logic of creating the plot is the following: I create a grid using axd = fig.subplot_mosaic(...) and then for the bottom plots I set the labels with axd[...].set_xlabel("something"). Would be great if set_xlabel would take a rotation parameter, but unfortunately that is not the case.
Based on the documentation, set_xlabel accepts Text keyword arguments, of which rotation is one. The example I used to test this is shown below. import matplotlib.pyplot as plt plt.plot() plt.gca().set_xlabel('Test', rotation='vertical')
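Applied to the subplot_mosaic layout described in the question, it might look like this (my sketch; the mosaic keys, angle and alignment are illustrative assumptions, not from the original post):

```python
import matplotlib.pyplot as plt

fig = plt.figure(constrained_layout=True)
axd = fig.subplot_mosaic([["alcohol", "malic_acid"]])

for key in axd:
    axd[key].plot([0, 1], [0, 1])
    # rotation (and any other Text property) is accepted directly by set_xlabel
    axd[key].set_xlabel(key, rotation=45, ha="right")

plt.show()
```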
4
5
72,280,047
2022-5-17
https://stackoverflow.com/questions/72280047/how-can-i-override-a-special-method-defined-in-a-metaclass-with-a-custom-classme
As an example, consider the following: class FooMeta(type): def __len__(cls): return 9000 class GoodBar(metaclass=FooMeta): def __len__(self): return 9001 class BadBar(metaclass=FooMeta): @classmethod def __len__(cls): return 9002 len(GoodBar) -> 9000 len(GoodBar()) -> 9001 GoodBar.__len__() -> TypeError (missing 1 required positional argument) GoodBar().__len__() -> 9001 len(BadBar) -> 9000 (!!!) len(BadBar()) -> 9002 BadBar.__len__() -> 9002 BadBar().__len__() -> 9002 The issue being with len(BadBar) returning 9000 instead of 9002 which is the intended behaviour. This behaviour is (somewhat) documented in Python Data Model - Special Method Lookup, but it doesn't mention anything about classmethods, and I don't really understand the interaction with the @classmethod decorator. Aside from the obvious metaclass solution (ie, replace/extend FooMeta) is there a way to override or extend the metaclass function so that len(BadBar) -> 9002? Edit: To clarify, in my specific use case I can't edit the metaclass, and I don't want to subclass it and/or make my own metaclass, unless it is the only possible way of doing this.
The __len__ defined in the class will always be ignored when using len(...) for the class itself: when executing its operators, and methods like "hash", "iter", "len" can be roughly said to have "operator status", Python always retrieves the corresponding method from the class of the target, by directly accessing the memory structure of the class. These dunder methods have a "physical" slot in the memory layout for the class: if the method exists in the class of your instance (and in this case, the "instances" are the classes "GoodBar" and "BadBar", instances of "FooMeta"), or one of its superclasses, it is called - otherwise the operator fails. So, this is the reasoning that applies to len(GoodBar()): it will call the __len__ defined in GoodBar()'s class, and len(GoodBar) and len(BadBar) will call the __len__ defined in their class, FooMeta. "I don't really understand the interaction with the @classmethod decorator." The "classmethod" decorator creates a special descriptor out of the decorated function, so that when it is retrieved, via "getattr" from the class it is bound to, Python creates a "partial" object with the "cls" argument already in place, just as retrieving an ordinary method from an instance creates an object with "self" pre-bound. Both things are carried through the "descriptor" protocol - which means both an ordinary method and a classmethod are retrieved by calling their __get__ method. This method takes 3 parameters: "self", the descriptor itself, "instance", the instance it is bound to, and "owner": the class it is bound to. The thing is that for ordinary methods (functions), when the second (instance) parameter to __get__ is None, the function itself is returned. @classmethod wraps a function with an object with a different __get__: one that returns the equivalent of partial(method, cls), regardless of the second parameter to __get__. In other words, this simple pure Python code replicates the working of the classmethod decorator: class myclassmethod: def __init__(self, meth): self.meth = meth def __get__(self, instance, owner): return lambda *args, **kwargs: self.meth(owner, *args, **kwargs) That is why you see the same behavior when calling a classmethod explicitly with klass.__get__() and klass().__get__(): the instance is ignored. TL;DR: len(klass) will always go through the metaclass slot, and klass.__len__() will retrieve __len__ via the getattr mechanism, and then bind the classmethod properly before calling it. "Aside from the obvious metaclass solution (ie, replace/extend FooMeta) is there a way to override or extend the metaclass function so that len(BadBar) -> 9002? (...) To clarify, in my specific use case I can't edit the metaclass, and I don't want to subclass it and/or make my own metaclass, unless it is the only possible way of doing this." There is no other way. len(BadBar) will always go through the metaclass __len__. Extending the metaclass might not be all that painful, though. It can be done with a simple call to type passing the new __len__ method: In [13]: class BadBar(metaclass=type("", (FooMeta,), {"__len__": lambda cls: 9002})): ...: pass In [14]: len(BadBar) Out[14]: 9002 Only if BadBar will later be combined in multiple inheritance with another class hierarchy with a different custom metaclass will you have to worry. Even if there are other classes that have FooMeta as metaclass, the snippet above will work: the dynamically created metaclass will be the metaclass for the new subclass, as the "most derived subclass".
If, however, there is a hierarchy of subclasses and they have differing metaclasses, even if created by this method, you will have to combine both metaclasses in a common subclass_of_the_metaclasses before creating the new "ordinary" subclass. If that is the case, note that you can have one single parametrizable metaclass, extending your original one (can't dodge that, though): class SubMeta(FooMeta): def __new__(mcls, name, bases, ns, *, class_len): cls = super().__new__(mcls, name, bases, ns) cls._class_len = class_len return cls def __len__(cls): return cls._class_len if hasattr(cls, "_class_len") else super().__len__() And: In [19]: class Foo2(metaclass=SubMeta, class_len=9002): pass In [20]: len(Foo2) Out[20]: 9002
6
1
72,280,007
2022-5-17
https://stackoverflow.com/questions/72280007/pandas-convert-columns-of-lists-into-a-single-list
I have a dataframe of lists that looks similar to the one below (fig a). There is a single key column followed by n columns containing lists. My goal is that for each row, I will combine the lists from each column (excluding the key) into a single list in the new column, combined. An example of my desired result is below in figure B. I've tried some methods with iteritems(), but these dataframes have the potential to be hundreds of thousands to millions of rows long which made it incredibly slow. So I am trying to avoid solutions that use that. I'd like to use something like the list comprehension seen in this SO post, but I haven't been able to get it working with pandas. # example data data = {'key': ['1_1', '1_2', '1_3'], 'valueA': [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]], 'valueB': [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]], 'valueN': [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]} dataSet = pd.DataFrame(data) Figure A Figure B Edit: I really appreciate all the answers I have gotten so far! I'm currently going through and timing each on my full size dataset so I can figure out which one will work best this this case. I'll update with my result shortly! Edit 2: I tested the main solutions provided here on a few of my larger datasets and their average times are below. # Lambda/Apply a nested list comprehension shakiba.mrd: 1.12 s # Sum columns jfaccioni: 2.21 s # Nested list comprehension with iterrows mozway: 0.95 s # Adding column lists together politinsa: 3.50 s Thanks again to everyone for their contributions!
You can use a nested list comprehension: dataSet['combined'] = [[e for l in x for e in l] for _,x in dataSet.filter(like='value').iterrows()] Output: key valueA valueB valueN combined 0 1_1 [1, 2, 3, 4, 5, 6] [1, 2, 3, 4, 5, 6] [1, 2, 3, 4, 5, 6] [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6] 1 1_2 [7, 8, 9, 10, 11, 12] [7, 8, 9, 10, 11, 12] [7, 8, 9, 10, 11, 12] [7, 8, 9, 10, 11, 12, 7, 8, 9, 10, 11, 12, 7, 8, 9, 10, 11, 12] 2 1_3 [13, 14, 15, 16, 17, 18] [13, 14, 15, 16, 17, 18] [13, 14, 15, 16, 17, 18] [13, 14, 15, 16, 17, 18, 13, 14, 15, 16, 17, 18, 13, 14, 15, 16, 17, 18] Timing comparison with repeated addition (100 rows, 100 columns, 1000 items per list): # repeated addition of the lists 8.66 s Β± 309 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) # nested list comprehension 729 ms Β± 285 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
6
2
72,269,651
2022-5-17
https://stackoverflow.com/questions/72269651/numpy-way-of-splitting-array-when-cumaltive-sum-x
Data Let's take the following 2d array: starts = [0, 4, 10, 13, 23, 27] ends = [4, 10, 13, 23, 27, 32] lengths = [4, 6, 3, 10, 4, 5] arr = np.array([starts, ends, lengths]).T Thus looking like: [[ 0 4 4] [ 4 10 6] [10 13 3] [13 23 10] [23 27 4] [27 32 5]] Goal Now I want to "loop" through the lengths and as soon as the cumulative sum reaches 10 I want to output the starts and ends and then restart the cumulative counting. Working code tot_size = 0 start = 0 for i, numb in enumerate(arr[:,-1]): # update sum tot_size += numb # Check if target size is reached if tot_size >= 10: start_loc, end_loc = arr[:,0][start], arr[:,1][i] print('Start: {}\nEnd: {}\nSum: {}\n'.format(start_loc, end_loc, tot_size)) start = i + 1 tot_size = 0 # Last part start_loc, end_loc = arr[:,0][start], arr[:,1][i] print('Start: {}\nEnd: {}\nSum: {}\n'.format(start_loc, end_loc, tot_size)) Which will print: Start: 0 End: 10 Sum: 10 Start: 10 End: 23 Sum: 13 Start: 23 End: 32 Sum: 9 (I don't need to know the resulting sum, but I do need to know the starts and ends) Numpy try I suppose there must be a much more straightforward, or a vectorized, way of doing this with numpy. cumsum + remainder I was thinking of something like np.remainder(np.cumsum(arr[:,-1]), 10), however it will be "hard" to say when something is close to the target number (10 here), which is different from just splitting when sum > x stride_tricks Since the above doesn't work in a window I thought of strides, but these windows are of fixed sizes All ideas are welcome :)
Numpy is not designed for solving such a problem efficiently. You can still solve this using some tricks or the usual combination of cumsum + division + diff + where or similar ones (like @Kevin proposed), but AFAIK they are all inefficient. Indeed, they require many temporary arrays and expensive operations. Temporary arrays are expensive for two reasons: for small arrays, the overhead of a Numpy function is typically several microseconds per call, resulting in dozens of microseconds for the whole operation; and for big arrays, each operation will be memory bound, and memory bandwidth is small on modern platforms. Actually, it is even worse since writing to a newly allocated array is much slower due to page faults, and Numpy array writes are currently not optimized on most platforms (including the mainstream x86-64 one). As for "expensive operations", this includes sorting, which runs in O(n log n) (quick-sort is used by default) and is generally memory bound, finding the unique values (which currently does a sort internally), and integer division, which is known to be very slow. One solution to this problem is to use Numba (or Cython). Numba uses a just-in-time compiler so you can write fast optimized functions. It is especially useful for writing your own efficient basic Numpy built-ins. Here is an example based on your code: import numba as nb import numpy as np @nb.njit def compute(arr): n = len(arr) tot_size, start, cur = 0, 0, 0 slices = np.empty((n, 2), arr.dtype) for i in range(n): tot_size += arr[i, 2] if tot_size >= 10: slices[cur, 0] = arr[start, 0] slices[cur, 1] = arr[i, 1] start = i + 1 cur += 1 tot_size = 0 slices[cur, 0] = arr[start, 0] slices[cur, 1] = arr[i, 1] return slices[:cur+1] For your small example, the Numba function takes about 0.75 us on my machine while the initial solution takes 3.5 us. In comparison, the Numpy solutions provided by @Kevin (returning the indices) take 24 us for the np.unique one and 6 us for the division-based solution. In fact, the basic np.cumsum alone already takes 0.65 us on my machine. Thus, the Numba solution is the fastest. This should be especially true for larger arrays.
6
1
72,266,207
2022-5-16
https://stackoverflow.com/questions/72266207/django-filter-error-meta-fields-must-not-contain-non-model-field-names
I am working with Django REST framework and django-filters and and I'd like to use the reverse relationship annotation_set as one of filters for a GET API that uses the model Detection. The models are the following: class Detection(models.Model): image = models.ImageField(upload_to="detections/images") def local_image_path(self): return os.path.join('images' f"{self.id}.jpg") class Annotation(models.Model): detection = models.ForeignKey(Detection, on_delete=models.CASCADE) attribute = models.CharField(max_length=255) The serializer is: class DetectionSerializer(UniqueFieldsMixin, serializers.ModelSerializer): local_image_path = serializers.CharField() class Meta: model = Detection fields = '__all__' And the viewset is: class DetectionTrainingViewSet( mixins.ListModelMixin, mixins.RetrieveModelMixin, viewsets.GenericViewSet ): queryset = Detection.objects.all() serializer_class = DetectionSerializer filterset_fields = ('annotation_set__id', ) @action(methods=['GET'], detail=False) def list_ids(self, request): queryset = self.get_queryset() filtered_queryset = self.filter_queryset(queryset) return Response(filtered_queryset.values_list('id', flat=True)) When I make a call to the endpoint, I get the error: 'Meta.fields' must not contain non-model field names: annotation_set__id Shouldn't the field exist? Note: I tried to add other fields to the Annotation model and then use annotation_set__newfield but I still have the error. I can confirm that the newfield exists because it is correctly serialized and return by the API when I comment out the line that set the filterset_fields.
Apparently I had to explicitly state the name of the reverse relationship: class Annotation(models.Model): detection = models.ForeignKey(Detection, on_delete=models.CASCADE, related_name='annotation_set') attribute = models.CharField(max_length=255) If anybody knows why, I'd love to know it! Thanks!
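As for the "why": Django gives a reverse relation two different names. annotation_set is only the default manager attribute used for instance access, while the name used in query lookups (which is what filterset_fields is validated against) defaults to the lowercase model name. So an untested alternative that keeps the models untouched would be to filter on that default lookup name instead, reusing the imports and classes from the question:

```python
class DetectionTrainingViewSet(
    mixins.ListModelMixin, mixins.RetrieveModelMixin, viewsets.GenericViewSet
):
    queryset = Detection.objects.all()
    serializer_class = DetectionSerializer
    # "annotation" is the default reverse lookup name for Annotation.detection
    filterset_fields = ('annotation__id',)
```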
5
2
72,238,460
2022-5-14
https://stackoverflow.com/questions/72238460/python-importerror-sys-meta-path-is-none-python-is-likely-shutting-down
When using __del__ datetime.date.today() throws ImportError: sys.meta_path is None, Python is likely shutting down import datetime import time import sys class Bug(object): def __init__(self): print_meta_path() def __del__(self): print_meta_path() try_date('time') try_date('datetime') def print_meta_path(): print(f'meta_path: {sys.meta_path}') def try_date(date_type): try: print('----------------------------------------------') print(date_type) if date_type == 'time': print(datetime.date.fromtimestamp(time.time())) if date_type == 'datetime': print(datetime.date.today()) except Exception as ex: print(ex) if __name__ == '__main__': print(sys.version) bug = Bug() output with different envs (3.10, 3.9, 3.7): 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] meta_path: [<_distutils_hack.DistutilsMetaFinder object at 0x7ff8731f6860>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>] meta_path: None ---------------------------------------------- time 2022-05-17 ---------------------------------------------- datetime sys.meta_path is None, Python is likely shutting down 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:22:55) [GCC 10.3.0] meta_path: [<_distutils_hack.DistutilsMetaFinder object at 0x7fb01126e490>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>] meta_path: None ---------------------------------------------- time 2022-05-17 ---------------------------------------------- datetime sys.meta_path is None, Python is likely shutting down 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0] meta_path: [<class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>] meta_path: None ---------------------------------------------- time 2022-05-17 ---------------------------------------------- datetime sys.meta_path is None, Python is likely shutting down 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] meta_path: [<class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>] meta_path: None ---------------------------------------------- time 2022-05-17 ---------------------------------------------- datetime sys.meta_path is None, Python is likely shutting down Why is that happening? 
I need to use requests which use urllib3 connection.py 380: is_time_off = datetime.date.today() < RECENT_DATE File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/requests/api.py", line 117, in post File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/requests/api.py", line 61, in request File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/requests/sessions.py", line 529, in request File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/requests/sessions.py", line 645, in send File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/requests/adapters.py", line 440, in send File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn File "/home/liron/mambaforge/envs/dm-sdk-dev/lib/python3.10/site-packages/urllib3/connection.py", line 380, in connect ImportError: sys.meta_path is None, Python is likely shutting down switching the line to 380: is_time_off = datetime.date.fromtimestamp(time.time()) < RECENT_DATE solve it. OS Linux-5.13.0-41-generic-x86_64-with-glibc2.31 urllib3 1.26.9 I already tried to rebind __del__ arguments default def __del__(self, datetime=datetime):.... Does anyone have an idea? thanks
Using atexit provide the same behavior as __del__ but works import datetime import time import sys import atexit class Bug(object): def __init__(self): print_meta_path() atexit.register(self.__close) def __close(self): print_meta_path() try_date('time') try_date('datetime') def print_meta_path(): print(f'meta_path: {sys.meta_path}') def try_date(date_type): try: print('----------------------------------------------') print(date_type) if date_type == 'time': print(datetime.date.fromtimestamp(time.time())) if date_type == 'datetime': print(datetime.date.today()) except ImportError: print('') if __name__ == '__main__': print(sys.version) bug = Bug() output: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:38:57) [GCC 10.3.0] meta_path: [<_distutils_hack.DistutilsMetaFinder object at 0x7fd912112860>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>] meta_path: [<_distutils_hack.DistutilsMetaFinder object at 0x7fd912112860>, <class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib_external.PathFinder'>] ---------------------------------------------- time 2022-05-17 ---------------------------------------------- datetime 2022-05-17 Process finished with exit code 0
8
4
72,206,141
2022-5-11
https://stackoverflow.com/questions/72206141/mypy-item-none-of-optionalcustomattrsmodel-has-no-attribute-country
When I run mypy checkings I am getting an error. I am no able to ignore it or turn it off the strict optional checking. It there a way to solve this. Here is the line that is throwing the error: if tree.data.attributes.custom != JAPAN: where attributes is declared as: class TreeAttributesModel(BaseModel): id: Optional[TreeId] name: Optional[str] = None status: StatusEnum custom: Optional[CustomAttrsModel] = None and CustomAttrsModel is declared as it follows: class CustomAttrsModel(BaseModel): seller: Optional[str] buyed_at: Optional[datetime] country: Optional[Union[CountryEnum, str]] Could you please help me with this?
I had to tweak your snippets a bit to get a MWE, but here we go: import enum import dataclasses from datetime import datetime from typing import Optional, Union class StatusEnum(enum.Enum): OK = enum.auto() NOK = enum.auto() class CountryEnum(enum.Enum): JAPAN = enum.auto() RAPTURE = enum.auto() @dataclasses.dataclass class TreeAttributesModel: id: Optional[str] name: Optional[str] # = None had to remove default, attribs w/o default cannot follow attribs w/ one status: StatusEnum custom: Optional[CustomAttrsModel] = None @dataclasses.dataclass class CustomAttrsModel: seller: Optional[str] buyed_at: Optional[datetime] country: Optional[Union[CountryEnum, str]] custom = CustomAttrsModel(seller="test", buyed_at=None, country=CountryEnum.JAPAN) attribs = TreeAttributesModel(id="test", name="test", status=StatusEnum.OK, custom=custom) assert attribs.custom is not None # this is typed as being optional, so make sure it isn't None assert attribs.custom.country is not None # same as above result = attribs.custom.country != CountryEnum.JAPAN The message is: just use assert something is not None whenever something is Optional ;)
8
11
72,274,073
2022-5-17
https://stackoverflow.com/questions/72274073/python-count-files-in-a-directory-and-all-its-subdirectories
I am trying to count all the files in a folder and all its subfolders For exemple, if my folder looks like this: file1.txt subfolder1/ β”œβ”€β”€ file2.txt β”œβ”€β”€ subfolder2/ β”‚ β”œβ”€β”€ file3.txt β”‚ β”œβ”€β”€ file4.txt β”‚ └── subfolder3/ β”‚ └── file5.txt └── file6.txt file7.txt I would like get the number 7. The first thing I tried is a recursive function who count all files and calls itself for each folder def get_file_count(directory: str) -> int: count = 0 for filename in os.listdir(directory): file = (os.path.join(directory, filename)) if os.path.isfile(file): count += 1 elif os.path.isdir(file): count += get_file_count(file) return count This way works but takes a lot of time for big directories. I also remembered this post, which shows a quick way to count the total size of a folder using win32com and I wondered if this librairy also offered a way to do what I was looking for. But after searching, I only found this fso = com.Dispatch("Scripting.FileSystemObject") folder = fso.GetFolder(".") size = folder.Files.Count But this only returns the number of files in only the targeted folder (and not in its subfolders) So, do you know if there is an optimal function in python that returns the number of files in a folder and all its subfolders?
IIUC, you can just do sum(len(files) for _, _, files in os.walk('path/to/folder')) or, if you prefer to avoid the len call and count the files one at a time: sum(1 for _, _, files in os.walk('path/to/folder') for f in files)
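A complementary sketch using pathlib, in case that reads better in your codebase (same hypothetical folder path as above):
from pathlib import Path

# Count every regular file under the folder, recursively
file_count = sum(1 for p in Path('path/to/folder').rglob('*') if p.is_file())
print(file_count)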
5
3
72,260,211
2022-5-16
https://stackoverflow.com/questions/72260211/importerror-cannot-import-name-document-from-borb-pdf-document
I have installed the 'borb' library using pip install borb and after installation, I got the following message: Requirement already satisfied: borb in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (2.0.25) WARNING: You are using pip version 22.0.3; however, version 22.0.4 is available. You should consider upgrading via the 'c:\users\dell\appdata\local\programs\python\python37\python.exe -m pip install --upgrade pip' command. Requirement already satisfied: qrcode[pil]>=6.1 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from borb) (7.3.1) Requirement already satisfied: Pillow>=7.1.0 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from borb) (9.0.0) Requirement already satisfied: requests>=2.24.0 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from borb) (2.25.1) Requirement already satisfied: setuptools>=51.1.1 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from borb) (54.2.0) Requirement already satisfied: fonttools>=4.22.1 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from borb) (4.33.3) Requirement already satisfied: python-barcode>=0.13.1 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from borb) (0.13.1) Requirement already satisfied: colorama in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from qrcode[pil]>=6.1->borb) (0.4.4) Requirement already satisfied: certifi>=2017.4.17 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from requests>=2.24.0->borb) (2020.12.5) Requirement already satisfied: chardet<5,>=3.0.2 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from requests>=2.24.0->borb) (3.0.4) Requirement already satisfied: idna<3,>=2.5 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from requests>=2.24.0->borb) (2.10) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\dell\appdata\local\programs\python\python37\lib\site-packages (from requests>=2.24.0->borb) (1.26.4) Now, if I run the following code snippet I am getting an ImportError: import typing from borb.pdf.document import Document from borb.pdf.pdf import PDF from borb.toolkit.text.simple_text_extraction import SimpleTextExtraction def main(): # variable to hold Document instance doc: typing.Optional[Document] = None # this implementation of EventListener handles text-rendering instructions l: SimpleTextExtraction = SimpleTextExtraction() # open the document, passing along the array of listeners with open("PDFs/1.pdf", "rb") as in_file_handle: doc = PDF.loads(in_file_handle, [l]) # were we able to read the document? assert doc is not None # print the text on page 0 print(l.get_text(0)) if __name__ == "__main__": main() I get the following error: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_5400/405389204.py in <module> 1 import typing ----> 2 from borb.pdf.document import Document 3 from borb.pdf.pdf import PDF 4 from borb.toolkit.text.simple_text_extraction import SimpleTextExtraction 5 ImportError: cannot import name 'Document' from 'borb.pdf.document' (C:\Users\DELL\AppData\Local\Programs\Python\Python37\lib\site-packages\borb\pdf\document\__init__.py) Any inputs on how to resolve this...?
Import the Document class from borb.pdf instead. Usage: from borb.pdf import Document
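For context, a minimal sketch of the fix applied to the snippet from the question; only the Document import is changed here, and the remaining imports are left as they were (depending on your borb version they may need a similar top-level adjustment, which is an assumption, not confirmed):
import typing
from borb.pdf import Document  # changed: import Document from borb.pdf
from borb.pdf.pdf import PDF
from borb.toolkit.text.simple_text_extraction import SimpleTextExtraction

def main():
    doc: typing.Optional[Document] = None
    l: SimpleTextExtraction = SimpleTextExtraction()
    with open("PDFs/1.pdf", "rb") as in_file_handle:
        doc = PDF.loads(in_file_handle, [l])
    assert doc is not None
    print(l.get_text(0))

if __name__ == "__main__":
    main()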
4
10
72,258,087
2022-5-16
https://stackoverflow.com/questions/72258087/unexpected-keyword-argument-tenant-id-while-accessing-azure-key-vault-in-pytho
I was trying to accessing my key vault, but I got always the same error: AppServiceCredential.get_token failed: request() got an unexpected keyword argument 'tenant_id' ManagedIdentityCredential.get_token failed: request() got an unexpected keyword argument 'tenant_id' This was the code I used in an Azure Machine Learning notebook, copied from the docs: from azure.identity import ManagedIdentityCredential from azure.keyvault.secrets import SecretClient credential = ManagedIdentityCredential() secret_client = SecretClient(vault_url="https://XXXX.vault.azure.net/", credential=credential) secretName = 'test' retrieved_secret = secret_client.get_secret(secretName) # here's the error retrieved_secret What is wrong? Could you help me? Thank you in advance.
This error is because of a bug that has since been fixed in azure-identity's ManagedIdentityCredential. Key Vault clients in recent packages include a tenant ID in token requests to support cross-tenant authentication, but some azure-identity credentials didn't correctly handle this keyword argument until the bug was fixed in version 1.8.0. Installing azure-identity>=1.8.0 should fix the error you're getting. (Disclaimer: I work for the Azure SDK for Python)
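For example, the fix can be applied directly in the notebook environment (the pin is shown only as an illustration):
pip install --upgrade "azure-identity>=1.8.0"
After upgrading, restart the kernel so the new package version is picked up.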
5
9
72,260,808
2022-5-16
https://stackoverflow.com/questions/72260808/mismatch-between-statsmodels-and-sklearn-ridge-regression
I'm exploring ridge regression. While comparing statsmodels and sklearn, I found that the two libraries result in different output for ridge regression. Below is a simple example of the difference import numpy as np import pandas as pd import statsmodels.api as sm from sklearn.linear_model import Lasso, Ridge np.random.seed(142131) n = 500 d = pd.DataFrame() d['A'] = np.random.normal(size=n) d['B'] = d['A'] + np.random.normal(scale=0.25, size=n) d['C'] = np.random.normal(size=n) d['D'] = np.random.normal(size=n) d['intercept'] = 1 d['Y'] = 5 - 2*d['A'] + 1*d['D'] + np.random.normal(size=n) y = np.asarray(d['Y']) X = np.asarray(d[['intercept', 'A', 'B', 'C', 'D']]) First, using sklearn and Ridge: ridge = Ridge(alpha=1, fit_intercept=True) ridge.fit(X=np.asarray(d[['A', 'B', 'C', 'D']]), y=y) ridge.intercept_, ridge.coef_ which outputs [4.99721, -2.00968, 0.03363, -0.02145, 1.02895]. Next, statsmodels and OLS.fit_regularized: penalty = np.array([0, 1., 1., 1., 1.]) ols = sm.OLS(y, X).fit_regularized(L1_wt=0., alpha=penalty) ols.params which outputs [5.01623, -0.69164, -0.63901, 0.00156, 0.55158]. However, since both of these implement ridge regression, I would expect them to be the same. Note that neither of these penalizes the intercept term (already checked that as a potential difference). I also don't think this is an error on my part. Specifically, I find both implementations provide the same output for LASSO. Below is a demonstration with the previous data # sklearn LASSO lasso = Lasso(alpha=0.5, fit_intercept=True) lasso.fit(X=np.asarray(d[['A', 'B', 'C', 'D']]), y=y) lasso.intercept_, lasso.coef_ # statsmodels LASSO penalty = np.array([0, 0.5, 0.5, 0.5, 0.5]) ols = sm.OLS(y, X).fit_regularized(L1_wt=1., alpha=penalty) ols.params which both output [5.01465, -1.51832, 0., 0., 0.57799]. So my question is: why do the estimated coefficients for ridge regression differ across implementations in sklearn and statsmodels?
After digging around a little more, I discovered the answer as to why they differ. The difference is that sklearn's Ridge scales the penalty term as alpha / n where n is the number of observations. statsmodels does not apply this scaling of the tuning parameter. You can have the ridge implementations match if you re-scale the penalty for statsmodels. Using my posted example, here is how you would have the output match between the two: # sklearn # NOTE: there is no difference from above ridge = Ridge(alpha=1, fit_intercept=True) ridge.fit(X=np.asarray(d[['A', 'B', 'C', 'D']]), y=y) ridge.intercept_, ridge.coef_ # statsmodels # NOTE: going to re-scale the penalties based on n observations n = X.shape[0] penalty = np.array([0, 1., 1., 1., 1.]) / n # scaling penalties ols = sm.OLS(y, X).fit_regularized(L1_wt=0., alpha=penalty) ols.params Now both output [ 4.99721, -2.00968, 0.03363, -0.02145, 1.02895]. I am posting this, so if someone else finds them in my situation they can find the answer more easily (since I haven't seen any discussion of this difference before). I'm not sure of the rationale for the re-scaling. It is also odd to me that Ridge re-scales the tuning parameter but Lasso does not. Looks like important behavior to be aware of. Reading the sklearn documentation for Ridge and LASSO, I did not see the difference in re-scaling behavior for Ridge discussed.
5
6
72,239,086
2022-5-14
https://stackoverflow.com/questions/72239086/pytorch-gather-failed-with-sparse-grad-true
With even very simple example, backward() cannot work if sparse_grad=True, please see the error below. Is this error expected, or I'm using gather in a wrong way? In [1]: import torch as th In [2]: x = th.rand((3,3), requires_grad=True) # sparse_grad = False, the backward could work as expetecd In [3]: th.gather(x @ x, 1, th.LongTensor([[0], [1]]), sparse_grad=False).sum().backward() # sparse_grad = True, backward CANNOT work In [4]: th.gather(x @ x, 1, th.LongTensor([[0], [1]]), sparse_grad=True).sum().backward() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) ----> 1 th.gather(x @ x, 1, th.LongTensor([[0], [1]]), sparse_grad=True).sum().backward() ~/miniconda3/lib/python3.9/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs) 305 create_graph=create_graph, 306 inputs=inputs) --> 307 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) 308 309 def register_hook(self, hook): ~/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 152 retain_graph = create_graph 153 --> 154 Variable._execution_engine.run_backward( 155 tensors, grad_tensors_, retain_graph, create_graph, inputs, 156 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: sparse tensors do not have strides
I think torch.gather does not support sparse operators. For example, torch.gather(x, 1, torch.LongTensor([[0], [1]]).to_sparse()) results in: NotImplementedError: Could not run 'aten::gather.out' with arguments from the 'SparseCPU' backend. I think you should open an issue or a feature request on PyTorch's GitHub.
5
2
72,232,152
2022-5-13
https://stackoverflow.com/questions/72232152/visual-studio-code-freezes-after-short-while-the-window-is-not-responding
Problem: I can open VS code and start typing but after ~30s (sometimes minutes) the window freezes showing this message: I am using python on a jupyter notebook with only a couple of unspectacular lines which I don't show because I tried different content. Last time it crashed after about 2 min of just import pandas as pd, df = pd.read_csv(path_to_file), print(df) I think it does not crash when I don't start typing but just wait, at least not as fast Yesterday everything was still fine. Today with the same setup not. System information: Windows 11 home The only VS extensions installed are Python v2022.7.11331006 Pylance v2022.5.1 Jupyter v2022.5.1001351004 Jupyter Keymap v1.0.0 Jupyter Notebook Renderers Which other information is needed to help? Attempts (not resolving the problem): Work from a different folder with a new file/code Disabling extensions (but notebook will not work without some) Uninstall/reinstall same version (x64-1.67.1) Uninstall/install previous version (1.66.1) Unistall/install insider version (x64-1.68.0-insider) Disable all online connecting services in VS, disconnecting from internet. Rollback of last windows patch update Delete backup folders in %AppData%/Code/.. Graphic card is up-to-date I am reluctant to disable stuff that makes the dell xps faster (and expensive)
This is caused by the latest Pylance extension. You can revert to an earlier Pylance version to solve the problem. According to the GitHub issue tracker, this will be fixed in a new release in a few days.
7
2
72,244,540
2022-5-14
https://stackoverflow.com/questions/72244540/why-win32com-doesnt-show-me-all-emails
I want to parse emails in python through the Outlook application. Running this code I get only a few of my emails. import win32com.client outlook = win32com.client.Dispatch('outlook.application') mapi = outlook.GetNamespace("MAPI") inbox= mapi.GetDefaultFolder(6) messages= inbox.Items for i in messages: message=i.subject print(message) I have tried changing the Default Folder and it happens everywhere. What have I done wrong?
Emails were not visible because they were kept on the server! The fix: File -> Account Settings -> Account Settings... -> double-click on your Exchange account -> set the "Mail to keep offline" slider to: All.
4
3
72,248,577
2022-5-15
https://stackoverflow.com/questions/72248577/find-the-minimum-number-of-steps-to-half-the-sum-of-elements-in-a-list-where-eac
I came across an interview question that went like this: There are factories in an area which produce a pollutive gas and filters are to be installed at each factory to reduce the pollution. Each filter installed would half the pollution in that factory. Each factory can have multiple filters. There is a list of N integers representing the level of pollution in each of the N factories in the area. Find the minimum number of filters needed to half the overall pollution. E.g. - Let [3, 5, 6, 1, 18] be the list of pollution levels in 5 factories Overall pollution = 3+5+6+1+18 = 33 (target is 33/2 = 16.5) Install a filter in factory given by index=4 -- > pollution levels will be [3, 5, 6, 1, 9] Install a filter in factory given by index=4 -- > pollution levels will be [3, 5, 6, 1, 4.5] Install a filter in factory given by index=2 -- > pollution levels will be [3, 5, 3, 1, 4.5] Need 3 filters minimum to half the overall pollution. N is an integer within the range [1....30,000]. Each element in the list is an integer within the range [0....70,000] The solution I came up with for this was simple: Find the max in the list and half in every time until the sum is <=target def solution(A): total = sum(A) target = total/2 count = 0 while total>target: count+=1 max_p = max(A) total-= max_p/2 A.remove(max_p) A.append(max_p/2) return count This works well, except that the time complexity seems to be O(N^2). Can someone please suggest an approach to solve this with less time complexity (preferably O(N))?
Maybe you could utilize a max heap to retrieve the worst factory more efficiently than you are right now, i.e., using a heap would allow for an O(N log N) solution: import heapq def filters_required(factories: list[int]) -> int: """Returns minimum filters required to halve pollution.""" current_pollution = sum(factories) goal_pollution = current_pollution / 2 filters = 0 factory_pollution_max_heap = [-p for p in factories] heapq.heapify(factory_pollution_max_heap) while current_pollution > goal_pollution: worst_factory = heapq.heappop(factory_pollution_max_heap) pollution = worst_factory / 2 current_pollution += pollution # Use += since pollution will be a negative number. heapq.heappush(factory_pollution_max_heap, pollution) print('DEBUG:', [-p for p in factory_pollution_max_heap], current_pollution) filters += 1 return filters def main() -> None: print(f'{filters_required(factories=[3, 5, 6, 1, 18]) = }') if __name__ == '__main__': main() Output: DEBUG: [9.0, 6, 3, 1, 5] 24.0 DEBUG: [6, 5, 3, 1, 4.5] 19.5 DEBUG: [5, 4.5, 3, 1, 3.0] 16.5 filters_required(factories=[3, 5, 6, 1, 18]) = 3
6
4
72,245,490
2022-5-15
https://stackoverflow.com/questions/72245490/while-loop-not-working-function-and-time-sleep
My while loop prints nothing when it is running. import os import time place = 10 running = True def Write(): j = 1 for i in range(place - 1): print("-", end = "") j += 1 print("a", end = "") for k in range(10 - j): print("-", end = "") while running: Write() time.sleep(5) if place > 1: place -= 1 os.system("clear") When there is just the print and a time.sleep, the while loop works. while running: print("Looping...") time.sleep(5) When there is the function and a time.sleep, the code doesn't work. while running: Write() time.sleep(5) Please tell me how to fix this.
I got puzzled by what you discovered, and now found a solution to this interesting behavior; you can use flush=True parameter to force flushing: import os import time place = 10 running = True def Write(): j = 1 for i in range(place - 1): print("-", end = "") j += 1 print("a", end = "", flush=True) for k in range(10 - j): print("-", end = "", flush=True) while running: Write() time.sleep(1) if place > 1: place -= 1 os.system("clear") Whether the output is buffered is usually determined by file, but if the flush keyword argument is true, the stream is forcibly flushed.β€”https://docs.python.org/3/library/functions.html#print Alternatively, (i) putting print() (without end) at the end of Write, or (ii) making Write to return a string (not print inside the function) and printing the string outside the function (in the while loop) seems to work. Green Cloak Guy's solution in the comment section, i.e., sys.stdout.flush() works too. It seems to me that end='' makes python or console reluctant to show the characters eagerly (in some cases), waiting for a line to end.
4
5
72,243,349
2022-5-14
https://stackoverflow.com/questions/72243349/problems-with-pip-und-command-line
I am trying to create a Python pip package. This works also well. I can successfully upload and download the package and use it in the Python code. What I can't do is to use the Python package via the command line. In another StackOverflow post I found the link to a tutorial. I tried to follow it. Obviously I made a mistake. Can you guys help me please ? Installation of the package via pip here you can see that the installation worked. Unfortunately, not the whole script fit on the image. Pip does not find the package. Unfortunately, I can't embed the images directly, so I'll just embed them as links. I have created a simple Python package. It represents here only an example. Here you can see the structure of the folder Riffecs | .gitignore | .pylintrc | LICENSE | README.md | requirements.txt | setup.py | | \---riffecs __init__.py __main__.py Here are the basic files shown. main.py from . import hello_world if __name__ == '__main__': hello_world() and init.py def hello_world(): print("Hello world") In the following you can see the "setup.py". I am of the opinion that I have followed the instructions. But obviously I made a mistake somewhere. Can you please help me to correct this mistake. import io import os import setuptools def read_description(): url = "README.md" """ Read and Return the description """ return io.open(os.path.join(os.path.dirname(__file__), url), encoding="utf-8").read() def def_requirements(): """ Check PIP Requirements """ with open('requirements.txt', encoding='utf-8') as file_content: pip_lines = file_content.read().splitlines() return pip_lines setuptools.setup( name="riffecs", version='0.0.3', description='test', entry_points={'console_scripts': ['hello-world=riffecs:hello_world',]}, long_description=read_description(), long_description_content_type="text/markdown", license="MIT", keywords="test - riffecs", url="https://github.com/Riffecs/riffecs", packages=["riffecs"], install_requires=def_requirements(), python_requires=">=3.6", classifiers=[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", ], )
In your setup.py file you have this line... entry_points={'console_scripts': ['hello-world=riffecs:hello_world',]}, This is the entry point for calling your package via the command line. This configuration sets the entry point to hello-world, which I tried and it runs fine. In your image, however, you run riffecx, which is not configured as an entry point to the package. If you want the entry point to be riffecx, change the line to: entry_points={'console_scripts': ['riffecx=riffecs:hello_world']}, Hope this helped.
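One follow-up worth noting: console scripts are only (re)generated when the package is installed, so after editing entry_points you need to reinstall, e.g. from the project root:
pip install --upgrade .
# or, while developing, an editable install
pip install -e .
After that the riffecx command should be available on your PATH.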
6
1
72,240,674
2022-5-14
https://stackoverflow.com/questions/72240674/how-do-i-use-pyscript-in-my-html-code-and-return-output
I am trying to call my python function created. But not getting any output and no resource how to achieve. Code : <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> </head> <body> <p>Click on the "Choose File" button to upload a file:</p> <form> <input type="file" id="myFile" name="filename"> <input type="submit" onClick="readfile(filename)" name="SUBMIT"> </form> <py-script> def readfile(filename): with open(filename) as mfile: head = [next(mfile) for x in range(1,5)] print(head) </py-script> </body> </html> It would be great helpful if some one provide input like … How to pass selected file to my function and python will be executed & return output on screen.
When writing Python in the browser, you must rethink how actions are performed. Typically, Python programs are procedural. Browser-based applications are asynchronous. The first step is to enable asynchronous features: import asyncio Browser based programs cannot directly access the local file system. Your code is not doing what you think it is. def readfile(filename): with open(filename) as mfile: head = [next(mfile) for x in range(1,5)] print(head) Your code is reading from the browser virtual file system which is allowed. You are trying to process the event from the ` element but trying to access a different location. Note: I do not know what the file contents of "filename" are, so I did not incorporate your code after the file is read. Your code cannot read files. You must ask the browser to read a file for your application. This is performed with the FileReader class. Example: async def process_file(event): # Currently, PyScript print() does not work in this # type of code (async callbacks) # use console.log() to debug output fileList = event.target.files.to_py() for f in fileList: data = await f.text() document.getElementById("content").innerHTML = data # Add your own code to process the "data" variable # which contains the content of the selected file Another problem is that you are passing a Python function as a callback. That will not work. Instead, you must call create_proxy() to create a callback proxy for the Python function. The browser will call the proxy which then calls your Python function. Example: # Create a Python proxy for the callback function # process_file() is your function to process events from FileReader file_event = create_proxy(process_file) # Set the listener to the callback document.getElementById("myfile").addEventListener("change", file_event, False) I put a copy of this solution on my website. You can right-click on the page to download the source code. File Example Demo Complete solution: <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> <title>File Example</title> </head> <body> <p>This example shows how to read a file from the local file system and display its contents</p> <br /> <p>Warning: Not every file type will display. PyScript removes content with tags such as XML, HTML, PHP, etc. Normal text files will work.</p> <br /> <p>No content type checking is performed to detect images, executables, etc.</p> <br /> <label for="myfile">Select a file:</label> <input type="file" id="myfile" name="myfile"> <br /> <br /> <div id="print_output"></div> <br /> <p>File Content:</p> <div style="border:2px inset #AAA;cursor:text;height:120px;overflow:auto;width:600px; resize:both"> <div id="content"> </div> </div> <py-script output="print_output"> import asyncio from js import document, FileReader from pyodide import create_proxy async def process_file(event): fileList = event.target.files.to_py() for f in fileList: data = await f.text() document.getElementById("content").innerHTML = data def main(): # Create a Python proxy for the callback function # process_file() is your function to process events from FileReader file_event = create_proxy(process_file) # Set the listener to the callback e = document.getElementById("myfile") e.addEventListener("change", file_event, False) main() </py-script> </body> </html>
6
5
72,235,819
2022-5-13
https://stackoverflow.com/questions/72235819/how-can-i-redirect-module-imports-with-modern-python
I am maintaining a python package in which I did some restructuring. Now, I want to support clients who still do from my_package.old_subpackage.foo import Foo instead of the new from my_package.new_subpackage.foo import Foo, without explicitly reintroducing many files that do the forwarding. (old_subpackage still exists, but no longer contains foo.py.) I have learned that there are "loaders" and "finders", and my impression was that I should implement a loader for my purpose, but I only managed to implement a finder so far: RENAMED_PACKAGES = { 'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo', } # TODO: ideally, we would not just implement a "finder", but also a "loader" # (using the importlib.util.module_for_loader decorator); this would enable us # to get module contents that also pass identity checks class RenamedFinder: @classmethod def find_spec(cls, fullname, path, target=None): renamed = RENAMED_PACKAGES.get(fullname) if renamed is not None: sys.stderr.write( f'WARNING: {fullname} was renamed to {renamed}; please adapt import accordingly!\n') return importlib.util.find_spec(renamed) return None sys.meta_path.append(RenamedFinder()) https://docs.python.org/3.5/library/importlib.html#importlib.util.module_for_loader and related functionality, however, seem to be deprecated. I know it's not a very pythonic thing I am trying to achieve, but I would be glad to learn that it's achievable.
On import of your package's __init__.py, you can place whatever objects you want into sys.modules; the values you put in there will be returned by import statements: from . import new_package from .new_package import module1, module2 import sys sys.modules["my_lib.old_package"] = new_package sys.modules["my_lib.old_package.module1"] = module1 sys.modules["my_lib.old_package.module2"] = module2 If someone now uses import my_lib.old_package or import my_lib.old_package.module1 they will obtain a reference to my_lib.new_package.module1. Since the import machinery already finds the keys in the sys.modules dictionary, it never even begins looking for the old files. If you want to avoid importing all the submodules immediately, you can emulate a bit of lazy loading by placing a module with a __getattr__ in sys.modules: from types import ModuleType import importlib import sys class LazyModule(ModuleType): def __init__(self, name, mod_name): super().__init__(name) self.__mod_name = mod_name def __getattr__(self, attr): if "_lazy_module" not in self.__dict__: self._lazy_module = importlib.import_module(self.__mod_name, package="my_lib") return getattr(self._lazy_module, attr) sys.modules["my_lib.old_package"] = LazyModule("my_lib.old_package", "my_lib.new_package")
8
9
72,243,698
2022-5-14
https://stackoverflow.com/questions/72243698/python-3-10-type-hinting-causes-syntax-error
I have defined two classes. A Bookshelf class and a Book class and have defined each with its own methods and type hints. When I run the below code in VS Code using python 3.10 it comes up with the following error: class Bookshelf: SyntaxError: Invalid syntax Which is referring to the init of the BookShelf class below. Can any of you spot the issue? class Bookshelf: def __init__(self, books: list[Book]): self.books = books def __str__(self) -> str: return f"Bookshelf with {len(self.books)}" class Book: def __init__(self, name: str, page_count: int): self.name=name self.page_count = page_count
It is not a SyntaxError; it is a NameError, because the Book class is not defined yet when you use it in your type hints. 1. The first solution is to move the definition of the Book class before Bookshelf. 2. The second solution is to use a string instead of the class itself: def __init__(self, books: list["Book"]): I think in Python 3.11 they will allow using it as it is; evaluation of type annotations is going to be postponed: https://peps.python.org/pep-0563/ 3. The third solution: if you want this behavior now, you can add from __future__ import annotations at the top of the file, and then your code will work.
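A minimal sketch of the third option applied to the classes from the question (the future import must be the first statement in the module):
from __future__ import annotations


class Bookshelf:
    def __init__(self, books: list[Book]):  # Book may now be referenced before it is defined
        self.books = books

    def __str__(self) -> str:
        return f"Bookshelf with {len(self.books)}"


class Book:
    def __init__(self, name: str, page_count: int):
        self.name = name
        self.page_count = page_count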
4
3
72,240,803
2022-5-14
https://stackoverflow.com/questions/72240803/shap-not-working-with-lightgbm-categorical-features
My model uses LGBMClassifier. I'd like to use Shap (Shapley) to interpret features. However, Shap gave me errors on categorical features. For example, I have a feature "Smoker" and its values include "Yes" and "No". I got an error from Shap: ValueError: could not convert string to float: 'Yes'. Am I missing any settings? BTW, I know that I could use one-hot encoding to convert categorical features but I don't want to, since LGBMClassifier can handle categorical features without one-hot encoding. Here's the sample code: (shap version is 0.40.0, lightgbm version is 3.3.2) import pandas as pd from lightgbm import LGBMClassifier #My version is 3.3.2 import shap #My version is 0.40.0 #The training data X_train = pd.DataFrame() X_train["Age"] = [50, 20, 60, 30] X_train["Smoker"] = ["Yes", "No", "No", "Yes"] #Target: whether the person had a certain disease y_train = [1, 0, 0, 0] #I did convert categorical features to the Category data type. X_train["Smoker"] = X_train["Smoker"].astype("category") #The test data X_test = pd.DataFrame() X_test["Age"] = [50] X_test["Smoker"] = ["Yes"] X_test["Smoker"] = X_test["Smoker"].astype("category") #the classifier clf = LGBMClassifier() clf.fit(X_train, y_train) predicted = clf.predict(X_test) #shap explainer = shap.TreeExplainer(clf) #I see this setting from google search but it did not really help explainer.model.original_model.params = {"categorical_feature":["Smoker"]} shap_values = explainer(X_train) #the error came out here: ValueError: could not convert string to float: 'Yes'
Let's try slightly different: from lightgbm import LGBMClassifier import shap X_train = pd.DataFrame({ "Age": [50, 20, 60, 30], "Smoker": ["Yes", "No", "No", "Yes"]} ) X_train["Smoker"] = X_train["Smoker"].astype("category") y_train = [1, 0, 0, 0] X_test = pd.DataFrame({"Age": [50], "Smoker": ["Yes"]}) X_test["Smoker"] = X_test["Smoker"].astype("category") clf = LGBMClassifier(verbose=-1).fit(X_train, y_train) predicted = clf.predict(X_test) print("Predictions:", predicted) exp = shap.TreeExplainer(clf) sv = exp.shap_values(X_train) # <-- here print(f"Expected values: {exp.expected_value}") print(f"SHAP values for 0th data point: {sv[1][0]}") Predictions: [0] Expected values: [1.0986122886681098, -1.0986122886681098] SHAP values for 0th data point: [0. 0.] Note, you don't need to tinker with explainer.model.original_model.params as it gives you non-intended public access to the model's params, which are already set for you by virtue of training model.
7
3
72,236,445
2022-5-14
https://stackoverflow.com/questions/72236445/how-can-i-wrap-a-python-function-in-a-way-that-works-with-with-inspect-signature
Some uncontroversial background experimentation up front: import inspect def func(foo, bar): pass print(inspect.signature(func)) # Prints "(foo, bar)" like you'd expect def decorator(fn): def _wrapper(baz, *args, **kwargs): fn(*args, **kwargs) return _wrapper wrapped = decorator(func) print(inspect.signature(wrapped)) # Prints "(baz, *args, **kwargs)" which is totally understandable The Question How can I implement my decorator so that print(inspect.signature(wrapped)) spits out "(baz, foo, bar)"? Can I build _wrapper dynamically somehow by adding the arguments of whatever fn is passed in, then gluing baz on to the list? The answer is NOT def decorator(fn): @functools.wraps(fn) def _wrapper(baz, *args, **kwargs): fn(*args, **kwargs) return _wrapper That gives "(foo, bar)" again - which is totally wrong. Calling wrapped(foo=1, bar=2) is a type error - "Missing 1 required positional argument: 'baz'" I don't think it's necessary to be this pedantic, but def decorator(fn): def _wrapper(baz, foo, bar): fn(foo=foo, bar=bar) return _wrapper is also not the answer I'm looking for - I'd like the decorator to work for all functions.
You can use __signature__ (PEP) attribute to modify returned signature of wrapped object. For example: import inspect def func(foo, bar): pass def decorator(fn): def _wrapper(baz, *args, **kwargs): fn(*args, **kwargs) f = inspect.getfullargspec(fn) fn_params = [] if f.args: for a in f.args: fn_params.append( inspect.Parameter(a, inspect.Parameter.POSITIONAL_OR_KEYWORD) ) if f.varargs: fn_params.append( inspect.Parameter(f.varargs, inspect.Parameter.VAR_POSITIONAL) ) if f.varkw: fn_params.append( inspect.Parameter(f.varkw, inspect.Parameter.VAR_KEYWORD) ) _wrapper.__signature__ = inspect.Signature( [ inspect.Parameter("baz", inspect.Parameter.POSITIONAL_OR_KEYWORD), *fn_params, ] ) return _wrapper wrapped = decorator(func) print(inspect.signature(wrapped)) Prints: (baz, foo, bar) If the func is: def func(foo, bar, *xxx, **yyy): pass Then print(inspect.signature(wrapped)) prints: (baz, foo, bar, *xxx, **yyy)
7
3
72,203,899
2022-5-11
https://stackoverflow.com/questions/72203899/how-can-i-see-the-service-account-that-the-python-bigquery-client-uses
To create a default bigquery client I use: from google.cloud import bigquery client = bigquery.Client() This uses the (default) credentials available in the environment. But how I see then which (default) service account is used?
While you can interrogate the credentials directly (be it json keys, metadata server, etc), I have occasionally found it valuable to simply query bigquery using the SESSION_USER() function. Something quick like this should suffice: client = bigquery.Client() query_job = client.query("SELECT SESSION_USER() as whoami") results = query_job.result() for row in results: print("i am {}".format(row.whoami))
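As a sketch of the "interrogate the credentials directly" route mentioned above (this assumes the application default credentials are service-account based; other credential types may not expose the attribute):
import google.auth

credentials, project_id = google.auth.default()
# Service-account credentials expose the account email directly
print(getattr(credentials, "service_account_email", "not a service-account credential"))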
4
3
72,239,193
2022-5-14
https://stackoverflow.com/questions/72239193/how-to-trigger-aws-lambda-functions-manually-which-is-already-scheduled-using-ev
I am using event bridge to trigger a lambda function at 8 everyday to perform some ETL operations. At times i receive requests to trigger the lambda manually ondemand. How can i achieve that using the same lambda function.
There are many ways to run the Lambda on demand, like running it in the AWS Console, or connecting it to an API Gateway and triggering it from there, etc. But the easiest way is to use Lambda URLs. It will give you a URL that you can invoke to run the Lambda.
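For a one-off manual run you can also invoke the function directly with the AWS CLI, without adding any new trigger (the function name below is a placeholder):
aws lambda invoke --function-name my-etl-function response.json
If you go the Lambda URL route, the URL can be created from the console, or (assuming a reasonably recent CLI) with something like aws lambda create-function-url-config --function-name my-etl-function --auth-type AWS_IAM.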
5
6
72,238,384
2022-5-14
https://stackoverflow.com/questions/72238384/how-to-plot-pairs-in-different-subplots-with-difference-on-the-side
I want to make a plot in seaborn but I am having some difficulties. The data has 2 variable: time (2 levels) and state (2 levels). I want to plot time on the x axis and state as different subplots, showing individual data lines. Finally, to the right of these I want to show a difference plot of the difference between time 2 and time 1, for each of the levels of state. I cannot do it very well, because I cannot get the second plot to show onto the right. Here has been my try: import numpy as np import pandas as pd import seaborn as sns # Just making some fake data ids = [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5] times = [1,1,2,2,1,1,2,2,1,1,2,2,1,1,2,2,1,1,2,2] states = ['A', 'B', 'A', 'B'] * 5 np.random.seed(121) resps = [(i*t) + np.random.normal() for i, t in zip(ids, times)] DATA = { 'identity': ids, 'time': times, 'state': states, 'resps': resps } df = pd.DataFrame(DATA) # Done with data g = sns.relplot( data=df, kind='line', col='state', x='time', y='resps', units='identity', estimator=None, alpha=.5, height=5, aspect=.7) # # Draw a line onto each Axes g.map(sns.lineplot,"time", "resps", lw=5, ci=None) # Make a wide data to make the difference wide = df.set_index(['identity', 'state', 'time']).unstack().reset_index() A = wide['state']=='A' B = wide['state']=='B' wide['diffA'] = wide[A][('resps', 2)] - wide[A][('resps', 1)] wide['diffB'] = wide[B][('resps', 2)] - wide[B][('resps', 1)] wide['difference'] = wide[['diffA', 'diffB']].sum(axis=1) wide = wide.drop(columns=[('diffA', ''), ('diffB', '')]) sns.pointplot(x='state', y='difference', data=wide, join=False) Output from the first And output from the second: Is there no way to put them together? Even though they are different data? I did try to use matplotlib. And then achieved slightly better results but this still had a problem because I wanted the two left plots to have a shared y axis but not the difference. This created lots of work as well, because I want to be flexible for different numbers of the state variable, but only kept to 2 for simplicity. Here is a paint version of what I want to do (sorry for the poor quality), hopefully with some more control over appearance but this is secondary: Is there a reliable way to do this in a simpler way? Thanks!
The problem is that sns.relplot operates at a figure level. This means it creates its own figure object and we cannot control the axes it uses. If you want to leverage seaborn for the creation of the lines without using "pure" matplotlib, you can copy the lines on matplotlib axes: import numpy as np import pandas as pd import seaborn as sns # Just making some fake data ids = [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5] times = [1,1,2,2,1,1,2,2,1,1,2,2,1,1,2,2,1,1,2,2] states = ['A', 'B', 'A', 'B'] * 5 np.random.seed(121) resps = [(i*t) + np.random.normal() for i, t in zip(ids, times)] DATA = { 'identity': ids, 'time': times, 'state': states, 'resps': resps } df = pd.DataFrame(DATA) # Done with data g = sns.relplot( data=df, kind='line', col='state', x='time', y='resps', units='identity', estimator=None, alpha=.5, height=5, aspect=.7) # # Draw a line onto each Axes g.map(sns.lineplot,"time", "resps", lw=5, ci=None) # Make a wide data to make the difference wide = df.set_index(['identity', 'state', 'time']).unstack().reset_index() A = wide['state']=='A' B = wide['state']=='B' wide['diffA'] = wide[A][('resps', 2)] - wide[A][('resps', 1)] wide['diffB'] = wide[B][('resps', 2)] - wide[B][('resps', 1)] wide['difference'] = wide[['diffA', 'diffB']].sum(axis=1) wide = wide.drop(columns=[('diffA', ''), ('diffB', '')]) # New code ---------------------------------------- import matplotlib.pyplot as plt plt.close(g.figure) fig = plt.figure(figsize=(12, 4)) ax1 = fig.add_subplot(1, 3, 1) ax2 = fig.add_subplot(1, 3, 2, sharey=ax1) ax3 = fig.add_subplot(1, 3, 3) l = list(g.axes[0][0].get_lines()) l2 = list(g.axes[0][1].get_lines()) for ax, g_ax in zip([ax1, ax2], g.axes[0]): l = list(g_ax.get_lines()) for line in l: ax.plot(line.get_data()[0], line.get_data()[1], color=line.get_color(), lw=line.get_linewidth()) ax.set_title(g_ax.get_title()) sns.pointplot(ax=ax3, x='state', y='difference', data=wide, join=False) # End of new code ---------------------------------- plt.show() Result:
6
2
72,238,058
2022-5-14
https://stackoverflow.com/questions/72238058/can-not-import-nvidia-smi
I'm tending to collect my GPU status during my python code is running. I need to import nvidia_smi in my code to do this. but even by installing it by pip install nvidia_smi hit this error: No module named 'nvidia_smi' Any Idea?
I found the answer: the module comes from the nvidia-ml-py3 package, so just pip install nvidia-ml-py3 and then: import nvidia_smi nvidia_smi.nvmlInit() handle = nvidia_smi.nvmlDeviceGetHandleByIndex(0)
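Building on that, a small sketch of actually collecting GPU status during a run; this assumes the nvidia_smi module re-exports the underlying NVML bindings (as pynvml does), so verify the call names against your installed version:
import nvidia_smi

nvidia_smi.nvmlInit()
handle = nvidia_smi.nvmlDeviceGetHandleByIndex(0)
info = nvidia_smi.nvmlDeviceGetMemoryInfo(handle)  # assumed NVML re-export
print(f"GPU memory used: {info.used / 1024**2:.0f} MiB of {info.total / 1024**2:.0f} MiB")
nvidia_smi.nvmlShutdown()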
5
11
72,230,915
2022-5-13
https://stackoverflow.com/questions/72230915/django-how-to-get-url-by-its-name
I want to get url by name specified in urls.py. Like {% url 'some_name' %} in template, but in Python. My urls.py: urlpatterns = [ ... path('admin_section/transactions/', AdminTransactionsView.as_view(), name='admin_transactions'), ... ] I want to something like: Input: url('admin_transactions') Output: '/admin_section/transactions/' I know about django.urls.reverse function, but its argument must be View, not url name
Django has the reverse() utility function for this. Example from the docs: given the following url: from news import views path('archive/', views.archive, name='news-archive') you can use the following to reverse the URL: from django.urls import reverse reverse('news-archive') The documentation goes further into the function's use, so I suggest reading it.
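Applied to the urls.py from the question, that would be:
from django.urls import reverse

url = reverse('admin_transactions')
# url == '/admin_section/transactions/'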
8
14
72,230,363
2022-5-13
https://stackoverflow.com/questions/72230363/how-to-format-very-small-numbers-in-python
How to format 1.3435434533e-8 into 1.34e-8 in python? Keep only two digits. The round() method will round this number to zero.
The "g" presentation type in the string format mini-language, used both by f-strings and the .format method, will do that: In [1]: a = 1.34434325435e-8 In [2]: a Out[2]: 1.34434325435e-08 In [4]: f"{a:.03g}" Out[4]: '1.34e-08' # in contrast with: In [5]: f"{a:.03f}" Out[5]: '0.000'
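If you always want scientific notation (g can switch to fixed-point form for larger magnitudes), the e presentation type is an alternative:
a = 1.3435434533e-8
print(f"{a:.2e}")  # 1.34e-08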
5
10
72,228,607
2022-5-13
https://stackoverflow.com/questions/72228607/pycaret-time-series-tsforecastingexperiment-importerror-cannot-import-name-ch
I am getting the below error, while importing Pycaret time-series(beta) module in the databricks (we were running successfully earlier). Request your help in solving the issues. pycaret version in use: import pycaret pycaret.__version__ # Out[1]: '3.0.0' python version in use: import sys sys.version #Out[9]: '3.8.10 (default, Mar 15 2022, 12:22:08) \n[GCC 9.4.0]' Below is the stack trace for the issue. from pycaret.time_series import TSForecastingExperiment /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 160 # Import the desired module. If you’re seeing this while debugging a failed import, 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 164 is_root_import = thread_local._nest_level == 1 /databricks/python/lib/python3.8/site-packages/pycaret/time_series/__init__.py in <module> ----> 1 from pycaret.time_series.forecasting.oop import TSForecastingExperiment 2 from pycaret.time_series.forecasting.functional import ( 3 setup, 4 create_model, 5 compare_models, /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 160 # Import the desired module. If you’re seeing this while debugging a failed import, 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 164 is_root_import = thread_local._nest_level == 1 /databricks/python/lib/python3.8/site-packages/pycaret/time_series/forecasting/oop.py in <module> 14 from sklearn.base import clone 15 from sktime.forecasting.base import ForecastingHorizon ---> 16 from sktime.forecasting.model_selection import ( 17 temporal_train_test_split, 18 ExpandingWindowSplitter, /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 160 # Import the desired module. If you’re seeing this while debugging a failed import, 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 164 is_root_import = thread_local._nest_level == 1 /databricks/python/lib/python3.8/site-packages/sktime/forecasting/model_selection/__init__.py in <module> 20 from sktime.forecasting.model_selection._split import SlidingWindowSplitter 21 from sktime.forecasting.model_selection._split import temporal_train_test_split ---> 22 from sktime.forecasting.model_selection._tune import ForecastingGridSearchCV 23 from sktime.forecasting.model_selection._tune import ForecastingRandomizedSearchCV /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 160 # Import the desired module. If you’re seeing this while debugging a failed import, 161 # look at preceding stack frames for relevant error information. 
--> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 164 is_root_import = thread_local._nest_level == 1 /databricks/python/lib/python3.8/site-packages/sktime/forecasting/model_selection/_tune.py in <module> 11 from sklearn.base import clone 12 from sklearn.model_selection import ParameterGrid, ParameterSampler, check_cv ---> 13 from sklearn.model_selection._search import _check_param_grid 14 from sklearn.utils.metaestimators import if_delegate_has_method
This is due to the use of a private method from sklearn in the sktime dependency. Since sklearn updated to 1.1.0, this private method was removed/moved, hence it is breaking. The sktime team is working on fixing this. In the meantime, you can fix this by force installing sklearn 1.0.2. Please see the details in the GitHub Repo: https://github.com/pycaret/pycaret/issues/2542
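For example, the pin can be applied with (version as discussed above; drop the pin again once the upstream fix lands):
pip install "scikit-learn==1.0.2"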
5
7
72,223,046
2022-5-12
https://stackoverflow.com/questions/72223046/how-to-apply-a-list-of-functions-sequentially-to-a-string-using-python-reduce-or
Problem Statement I would like to apply a list of functions fs = [ f, g, h ] sequentially to a string text=' abCdEf ' Something like f( g( h( text) ) ). This could easily be accomplished with the following code: # initial text text = ' abCDef ' # list of functions to apply sequentially fs = [str.rstrip, str.lstrip, str.lower] for f in fs: text = f(text) # expected result is 'abcdef' with spaces stripped, and all lowercase print(text) Using functools.reduce It seems that functools.reduce should do the job here, since it "consumes" the list of functions at each iteration. from functools import reduce # I know `reduce` requires two arguments, but I don't even know # which one to chose as text of function from the list reduce(f(text), fs) # first interaction should call y = str.rstrip(' abCDef ') --> ' abCDef' # next iterations fails, because tries to call ' abCDef'() -- as a function Unfortunately, this code doesn't work, since each iteration returns a string istead of a function, and fails with TypeError : 'str' object is not callable. QUESTION: Is there any solution using map, reduce or list comprehension to this problem?
reduce can take three arguments: reduce(function, iterable, initializer) What are these three arguments in general? function is a function of two arguments. Let's call these two arguments t and f. the first argument, t, will start as initializer; then will continue as the return value of the previous call of function. the second argument, f, is taken from iterable. What are these three arguments in our case? the iterable is your list of function; the second argument f is going to be one of the functions; the first argument t must be the text; the initializer must be the initial text; the return of function must be the resulting text; function(t, f) must be f(t). Finally: from functools import reduce # initial text text = ' abCDef ' # list of functions to apply sequentially fs = [str.rstrip, str.lstrip, str.lower] result = reduce(lambda t,f: f(t), fs, text) print(repr(result)) # 'abcdef'
6
8
72,225,655
2022-5-13
https://stackoverflow.com/questions/72225655/extracting-multiple-sets-of-rows-columns-from-a-2d-numpy-array
I have a 2D numpy array from which I want to extract multiple sets of rows/ columns. # img is 2D array img = np.arange(25).reshape(5,5) array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) I know the syntax to extract one set of row/ column. The following will extract the first 4 rows and the 3rd and 4th column as shown below img[0:4, 2:4] array([[ 2, 3], [ 7, 8], [12, 13], [17, 18]]) However, what is the syntax if I want to extract multiple sets of rows and/or columns? I tried the following but it leads to an invalid syntax error img[[0,2:4],2] The output that I am looking for from the above command is array([[ 2], [12], [17]]) I tried searching for this but it only leads to results for one set of rows/ columns or extracting discrete rows/ columns which I know how to do, like using np.ix. For context, the 2D array that I am actually dealing with has the dimensions ~800X1200, and from this array I want to extract multiple ranges of rows and columns in one go. So something like img[[0:100, 120:210, 400, 500:600], [1:450, 500:550, 600, 700:950]].
IIUC, you can use numpy.r_ to generate the indices from the slice: img[np.r_[0,2:4][:,None],2] output: array([[ 2], [12], [17]]) intermediates: np.r_[0,2:4] # array([0, 2, 3]) np.r_[0,2:4][:,None] # variant: np.c_[np.r_[0,2:4]] # array([[0], # [2], # [3]])
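For the broader use case at the end of the question (several row ranges and several column ranges at once), a sketch combining numpy.r_ with numpy.ix_, which builds the cross-product index; the ranges below are only illustrative:
import numpy as np

img = np.arange(25).reshape(5, 5)
rows = np.r_[0:2, 3:5]          # row ranges to keep
cols = np.r_[1, 2:4]            # column ranges to keep
sub = img[np.ix_(rows, cols)]   # cross-product of the selected rows and columns
print(sub)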
4
4
72,222,059
2022-5-12
https://stackoverflow.com/questions/72222059/difference-between-cupy-asnumpy-and-get
Given a CuPy array a, there are two ways to get a numpy array from it: a.get() and cupy.asnumpy(a). Is there any practical difference between them? import cupy as cp a = cp.random.randint(10, size=(4,5,6,7)) b = a.get() c = cp.asnumpy(a) assert type(b) == type(c) and (b == c).all()
cp.asnumpy is a wrapper calling ndarray.get. You can see that in the code of cp.asnumpy: def asnumpy(a, stream=None, order='C', out=None): """Returns an array on the host memory from an arbitrary source array. Args: a: Arbitrary object that can be converted to :class:`numpy.ndarray`. stream (cupy.cuda.Stream): CUDA stream object. If it is specified, then the device-to-host copy runs asynchronously. Otherwise, the copy is synchronous. Note that if ``a`` is not a :class:`cupy.ndarray` object, then this argument has no effect. order ({'C', 'F', 'A'}): The desired memory layout of the host array. When ``order`` is 'A', it uses 'F' if ``a`` is fortran-contiguous and 'C' otherwise. out (numpy.ndarray): The output array to be written to. It must have compatible shape and dtype with those of ``a``'s. Returns: numpy.ndarray: Converted array on the host memory. """ if isinstance(a, ndarray): return a.get(stream=stream, order=order, out=out) elif hasattr(a, "__cuda_array_interface__"): return array(a).get(stream=stream, order=order, out=out) else: temp = _numpy.asarray(a, order=order) if out is not None: out[...] = temp else: out = temp return out As you can see (both in the documentation and in the code), cp.asnumpy supports more input types than just CuPy arrays. It supports inputs that are CUDA objects with the __cuda_array_interface__ attribute and any objects that can be actually converted to a Numpy array. This includes Numpy arrays themselves and iterables (eg. list, generators, etc.).
6
3
72,200,654
2022-5-11
https://stackoverflow.com/questions/72200654/how-to-get-client-secret-in-keycloak-with-admin-user-in-different-realm-using-py
I have this workflow in place which works. I get a token from keycloak with admin username/password to this endpoint auth/realms/master/protocol/openid-connect/token With this token I request about client-secret of a specific client which is connected to another realms, not master. So, I request to this endpoint providing my brand new token in the header as bearer to this endpoint auth/admin/realms/realm_name/clients/client_name/client-secret And I can have client-secret With this client-secret I can get a client token requesting with client credentials to this endpoint auth/realms/realm_name/protocol/openid-connect/token And finally I use this client token to my stuff. I can use python-keycloak to get admin token keycloak_admin = KeycloakAdmin(server_url=url, username='user', password='password', verify=True) But once I'm here, I cannot have client secret, due to my client is not in admin realm. Using browser with my regular admin, I can change between realms and accessing to other realm clients. And as I said, using a set of request to specific endpoints I can have what I want working, but I don't know how to do it using python-keycloak. Thanks a lot for your help. I guess I've made my self clear enough. Regards
Maybe I am wrong, but I would expect that the following would work: keycloak_admin = KeycloakAdmin(server_url=serv_url, username='user', password='pass', realm_name='{realm_name}', user_realm_name='master', verify=True) Get the client client_id = keycloak_admin.get_client_id("{client_name}") Get the Secret: secret = keycloak_admin.get_client_secrets(client_id)
4
3
72,206,932
2022-5-11
https://stackoverflow.com/questions/72206932/why-do-any-and-all-not-appear-to-use-short-circuit-evaluation-here
Does all() return False right after finding a False in a sequence? Try to run this code: def return_true(): print('I have just been printed') return True print(all((False, return_true()))) As you can see, I have just been printed is printed even though there is False before it. Another example: def return_false(): print('I have just been printed') return False print(any((True, return_false()))) In this case, I have just been printed is printed even though there is True before it.
Yes, all() and any() both short circuit the way you describe. all() will return early if any item is false-y, and any() will if any item is truthy. The reason you're seeing the printouts is because return_true() and return_false() are being called before all and any are even invoked. They must be. A function's arguments have to be evaluated before the function is called, after all. This: print(all((False, return_true()))) is equivalent to: x = return_true() print(all((False, x))) i.e. return_true() is evaluated unconditionally. To get the desired short-circuiting behavior, the sequence itself needs to be evaluated lazily. One simple way to do this is to make an iterable, not of the values we want to test, but of things we can call to get those values; and then use a generator expression to create a sequence that lazily calls them. See also. Here, that might look like: print(all( x() for x in (lambda: False, return_true) )) print(any( x() for x in (lambda: True, return_false) ))
5
8
72,206,172
2022-5-11
https://stackoverflow.com/questions/72206172/what-is-the-time-complexity-of-numpy-linalg-det
The documentation for numpy.linalg.det states that The determinant is computed via LU factorization using the LAPACK routine z/dgetrf. I ran the following run time tests and fit polynomials of degrees 2, 3, and 4 because that covers the least worst options in this table. That table also mentions that an LU decomposition approach takes $O(n^3)$ time, but then the theoretical complexity of LU decomposition given here is $O(n^{2.376})$. Naturally the choice of algorithm matters, but I am not sure what available time complexities I should expect from numpy.linalg.det. from timeit import timeit import matplotlib.pyplot as plt import numpy as np from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures sizes = np.arange(1,10001, 100) times = [] for size in sizes: A = np.ones((size, size)) time = timeit('np.linalg.det(A)', globals={'np':np, 'A':A}, number=1) times.append(time) print(size, time) sizes = sizes.reshape(-1,1) times = np.array(times).reshape(-1,1) quad_sizes = PolynomialFeatures(degree=2).fit_transform(sizes) quad_times = LinearRegression().fit(quad_sizes, times).predict(quad_sizes) cubic_sizes = PolynomialFeatures(degree=3).fit_transform(sizes) cubic_times = LinearRegression().fit(cubic_sizes, times).predict(cubic_sizes) quartic_sizes = PolynomialFeatures(degree=4).fit_transform(sizes) quartic_times = LinearRegression().fit(quartic_sizes, times).predict(quartic_sizes) plt.scatter(sizes, times, label='Data', color='k', alpha=0.5) plt.plot(sizes, quad_times, label='Quadratic', color='r') plt.plot(sizes, cubic_times, label='Cubic', color='g') plt.plot(sizes, quartic_times, label='Quartic', color='b') plt.xlabel('Matrix Dimension n') plt.ylabel('Time (seconds)') plt.legend() plt.show() The output of the above is given as the following plot. Since none of the available complexities get down to quadratic time, I am unsurprising that visually the quadratic model had the worst fit. Both the cubic and quartic models had excellent visual fit, and unsurprisingly their residuals are closely correlated. Some related questions exist, but they do not have an answer for this specific implementation. Space complexity of matrix inversion, determinant and adjoint Time and space complexity of determinant of a matrix Experimentally determining computing complexity of matrix determinant Since this implementation is used by a lot of Python programmers world-wide, it may benefit the understanding of a lot of people if an answer was tracked down.
TL;DR: it is between O(n^2.81) and O(n^3) depending on the target BLAS implementation. Indeed, Numpy uses an LU decomposition (in the log space). The actual implementation can be found here. It indeed uses the sgetrf/dgetrf primitive of LAPACK. Multiple libraries provide such a primitive. The most famous is the one from NetLib, though it is not the fastest. The Intel MKL is an example of a library providing a fast implementation. Fast LU decomposition algorithms use tiling methods so as to use matrix multiplication internally. They do that because matrix multiplication is one of the most optimized routines in linear algebra libraries (for example the MKL, BLIS, and OpenBLAS generally succeed in reaching nearly optimal performance on modern processors). More generally, the complexity of the LU decomposition is the same as that of matrix multiplication. The complexity of the naive square matrix multiplication is O(n^3). Faster algorithms exist, like Strassen (running in ~O(n^2.81) time), which is often used for big matrices. The Coppersmith–Winograd algorithm achieves a significantly better complexity (~O(n^2.38)), but no linear algebra library actually uses it since it is a galactic algorithm. Put shortly, such an algorithm is theoretically asymptotically better than the others, but the hidden constants make it impractical for any real-world usage. For more information about the complexity of matrix multiplication, please read this article. Thus, in practice, the complexity of the matrix multiplication is between O(n^2.81) and O(n^3) depending on the target BLAS implementation (which depends on your platform and your configuration of Numpy).
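As a rough empirical check (a sketch added here, not part of the original answer; the exact numbers depend on your BLAS and hardware), you can estimate the effective exponent by fitting a straight line to log(time) versus log(n), reusing the kind of timing experiment shown in the question:

import numpy as np
from timeit import timeit

sizes = np.array([500, 1000, 2000, 4000])
times = []
for n in sizes:
    A = np.random.rand(n, n)
    times.append(timeit(lambda: np.linalg.det(A), number=3) / 3)

# the slope of log(time) vs log(n) approximates the empirical exponent
slope, _ = np.polyfit(np.log(sizes), np.log(times), 1)
print(f"empirical exponent: {slope:.2f}")   # typically somewhere between ~2.8 and 3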
6
3
72,205,522
2022-5-11
https://stackoverflow.com/questions/72205522/glibcxx-3-4-29-not-found
I am trying to install mujuco onto my linux laptop and everything works until I try to import it into a python file. When I try to import it/run a python script that already has mujuco in it I get the following errors: Import error. Trying to rebuild mujoco_py. running build_ext building 'mujoco_py.cymj' extension gcc -pthread -B /home/daniel/miniconda3/envs/mujoco_py/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/daniel/.mujoco/mujoco-py/mujoco_py -I/home/daniel/.mujoco/mujoco210/include -I/home/daniel/miniconda3/envs/mujoco_py/lib/python3.8/site-packages/numpy/core/include -I/home/daniel/miniconda3/envs/mujoco_py/include/python3.8 -c /home/daniel/.mujoco/mujoco-py/mujoco_py/cymj.c -o /home/daniel/.mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.1.2.14_38_linuxcpuextensionbuilder/temp.linux-x86_64-3.8/home/daniel/.mujoco/mujoco-py/mujoco_py/cymj.o -fopenmp -w gcc -pthread -B /home/daniel/miniconda3/envs/mujoco_py/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/daniel/.mujoco/mujoco-py/mujoco_py -I/home/daniel/.mujoco/mujoco210/include -I/home/daniel/miniconda3/envs/mujoco_py/lib/python3.8/site-packages/numpy/core/include -I/home/daniel/miniconda3/envs/mujoco_py/include/python3.8 -c /home/daniel/.mujoco/mujoco-py/mujoco_py/gl/osmesashim.c -o /home/daniel/.mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.1.2.14_38_linuxcpuextensionbuilder/temp.linux-x86_64-3.8/home/daniel/.mujoco/mujoco-py/mujoco_py/gl/osmesashim.o -fopenmp -w gcc -pthread -shared -B /home/daniel/miniconda3/envs/mujoco_py/compiler_compat -L/home/daniel/miniconda3/envs/mujoco_py/lib -Wl,-rpath=/home/daniel/miniconda3/envs/mujoco_py/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/daniel/.mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.1.2.14_38_linuxcpuextensionbuilder/temp.linux-x86_64-3.8/home/daniel/.mujoco/mujoco-py/mujoco_py/cymj.o /home/daniel/.mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.1.2.14_38_linuxcpuextensionbuilder/temp.linux-x86_64-3.8/home/daniel/.mujoco/mujoco-py/mujoco_py/gl/osmesashim.o -L/home/daniel/.mujoco/mujoco210/bin -Wl,-R/home/daniel/.mujoco/mujoco210/bin -lmujoco210 -lglewosmesa -lOSMesa -lGL -o /home/daniel/.mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.1.2.14_38_linuxcpuextensionbuilder/lib.linux-x86_64-3.8/mujoco_py/cymj.cpython-38-x86_64-linux-gnu.so -fopenmp Traceback (most recent call last): File "setting_state.py", line 7, in <module> from mujoco_py import load_model_from_xml, MjSim, MjViewer File "/home/daniel/.mujoco/mujoco-py/mujoco_py/__init__.py", line 2, in <module> from mujoco_py.builder import cymj, ignore_mujoco_warnings, functions, MujocoException File "/home/daniel/.mujoco/mujoco-py/mujoco_py/builder.py", line 504, in <module> cymj = load_cython_ext(mujoco_path) File "/home/daniel/.mujoco/mujoco-py/mujoco_py/builder.py", line 111, in load_cython_ext mod = load_dynamic_ext('cymj', cext_so_path) File "/home/daniel/.mujoco/mujoco-py/mujoco_py/builder.py", line 130, in load_dynamic_ext return loader.load_module() ImportError: /home/daniel/miniconda3/envs/mujoco_py/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /lib/x86_64-linux-gnu/libOSMesa.so.8) [1]: https://i.sstatic.net/gUhXI.png I have gcc 11.0.2 installed and I'm using python3.8 in my virtual environment. 
Here are my exact steps: https://docs.google.com/document/d/1eBvfKoczKmImUgoGMbqypODBXmI1bD91/edit Everything works accordingly until the very last step, where I try to run an actual Python module. I really don't know why this is happening and I've tried just about everything on the internet. I would really appreciate it if someone can help.
Where does /home/daniel/miniconda3/envs/mujoco_py/lib/libstdc++.so.6 come from? Something bundles a version of libstdc++.so.6 which is older than your system version, and other system libraries depend on the newer version. You should be able to fix this issue by just deleting the file in your home directory.
4
11
72,195,236
2022-5-11
https://stackoverflow.com/questions/72195236/link-github-repo-with-package-on-pypi
I uploaded a python package on PyPI, but I'd also like to upload it to GitHub, so it can be open source and anyone can contribute. Is it possible to link the GitHub repo with the already uploaded package on PyPI, so whenever I push something to the master branch, it also updates on PyPI?
In your GitHub repository there is a tab called Actions (next to Pull requests) where there are several actions like "Publish Python Package". Selecting it will automatically add the relevant code to your repository. You then only need to store your credentials, like username & password. You can do so under Settings > Secrets > Actions.
5
7
72,133,316
2022-5-5
https://stackoverflow.com/questions/72133316/libssl-so-1-1-cannot-open-shared-object-file-no-such-file-or-directory
I've just updated to Ubuntu 22.04 LTS and my libs using OpenSSL just stopped working. Looks like Ubuntu switched to the version 3.0 of OpenSSL. For example, poetry stopped working: Traceback (most recent call last): File "/home/robz/.local/bin/poetry", line 5, in <module> from poetry.console import main File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/__init__.py", line 1, in <module> from .application import Application File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/application.py", line 7, in <module> from .commands.about import AboutCommand File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/commands/__init__.py", line 4, in <module> from .check import CheckCommand File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/commands/check.py", line 2, in <module> from poetry.factory import Factory File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/factory.py", line 18, in <module> from .repositories.pypi_repository import PyPiRepository File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/repositories/pypi_repository.py", line 33, in <module> from ..inspection.info import PackageInfo File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/inspection/info.py", line 25, in <module> from poetry.utils.env import EnvCommandError File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py", line 23, in <module> import virtualenv File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/__init__.py", line 3, in <module> from .run import cli_run, session_via_cli File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/run/__init__.py", line 11, in <module> from ..seed.wheels.periodic_update import manual_upgrade File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/__init__.py", line 3, in <module> from .acquire import get_wheel, pip_wheel_env_run File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/acquire.py", line 12, in <module> from .bundle import from_bundle File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/bundle.py", line 4, in <module> from .periodic_update import periodic_update File "/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/periodic_update.py", line 10, in <module> import ssl File "/home/robz/.pyenv/versions/3.9.10/lib/python3.9/ssl.py", line 98, in <module> import _ssl # if we can't import it, let the error propagate ImportError: libssl.so.1.1: cannot open shared object file: No such file or directory Is there an easy fix ? For example, having libssl.so.1.1 available without having to uninstall OpenSSL 3 (I don't know if it's even possible).
This fixes it (a problem with packaging in 22.04): wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.23_amd64.deb sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.23_amd64.deb PS: If the link is expired, check http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/?C=M;O=D for a valid one. Current version (as of 2024-08-05) is: libssl1.1_1.1.1f-1ubuntu2.23_amd64.deb
136
287
72,157,487
2022-5-8
https://stackoverflow.com/questions/72157487/group-by-result-is-inconsistent-in-polars
Based on the example from the Aggregation section of the User Guide import polars as pl from datetime import date def compute_age() -> pl.Expr: return date(2021, 1, 1).year - pl.col("birthday").dt.year() def avg_birthday(gender: str) -> pl.Expr: return compute_age().filter( pl.col("gender") == gender ).mean().alias(f"avg {gender} birthday") dataset = pl.read_csv(b""" state,gender,birthday GA,M,1861-06-06 MD,F,1920-09-17 PA,M,1778-10-13 KS,M,1926-02-23 CO,M,1959-02-16 IL,F,1937-08-15 NY,M,1803-04-30 TX,F,1935-12-03 MD,M,1756-06-03 OH,M,1786-11-15 """.strip(), try_parse_dates=True).lazy() q = ( dataset .group_by("state") .agg( avg_birthday("M"), avg_birthday("F"), (pl.col("gender") == "M").count().alias("# male"), (pl.col("gender") == "F").sum().alias("# female"), ) ) The result is inconsistent. For example, the first time I run q.collect().head() shape: (5, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ state ┆ avg M birthday ┆ avg F birthday ┆ # male ┆ # female β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ════════════════β•ͺ════════════════β•ͺ════════β•ͺ══════════║ β”‚ GA ┆ 160.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ OH ┆ 235.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ CO ┆ 62.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ KS ┆ 95.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ TX ┆ null ┆ 86.0 ┆ 1 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The second time: shape: (5, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ state ┆ avg M birthday ┆ avg F birthday ┆ # male ┆ # female β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ════════════════β•ͺ════════════════β•ͺ════════β•ͺ══════════║ β”‚ TX ┆ null ┆ 86.0 ┆ 1 ┆ 1 β”‚ β”‚ OH ┆ 235.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ CO ┆ 62.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ NY ┆ 218.0 ┆ null ┆ 1 ┆ 0 β”‚ β”‚ GA ┆ 160.0 ┆ null ┆ 1 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I guess it may be caused by parallelization? Is this a bug or a feature? How to keep the result consistent?
Use maintain_order=True on group_by. maintain_order: Ensure that the order of the groups is consistent with the input data. This is slower than a default group by. Setting this to True blocks the possibility to run on the streaming engine.
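Applied to the query from the question, the only change is the extra keyword argument (a sketch; everything else stays as in the original code):

q = (
    dataset
    .group_by("state", maintain_order=True)
    .agg(
        avg_birthday("M"),
        avg_birthday("F"),
        (pl.col("gender") == "M").count().alias("# male"),
        (pl.col("gender") == "F").sum().alias("# female"),
    )
)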
4
4
72,181,600
2022-5-10
https://stackoverflow.com/questions/72181600/is-it-possible-to-run-game-made-with-pygame-on-browser-using-pyscript
I have made a small space invader game using pygame and I was wondering if I could play it on the browser using pyscript. Is this even possible ? Do I have to rewrite everything ?
No, Pygame is not supported in PyScript at this time. I'm not sure what is the best way to find out what packages are supported, but I have managed to piece together the following: PyScript uses Pyodide to load packages, so only packages supported by Pyodide will be loadable in PyScript. That means either packages built with Pyodide or pure Python packages with wheels available on PyPI or from other URLs. Pygame is not yet supported by Pyodide. Edit: Pygame has been merged in Pyodide and will be available in a future release. You can use the following test script to see if a package is supported: <html> <head> <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> <py-env> - pygame </py-env> </head> <body> <h1>PyScript import test script</h1> <py-script> import pygame </py-script> </body> </html> This is basically just a "try it and see what happens" script that will throw an error in the console if a package is not supported. The error you'll see will be ValueError: Couldn't find a pure Python 3 wheel for 'pygame'. You can use micropip.install(..., keep_going=True) to get a list of all packages with missing wheels.
7
8
72,163,312
2022-5-8
https://stackoverflow.com/questions/72163312/python-run-shell-command-get-stdout-and-stderr-as-a-variable-but-hide-from-u
I would like to have my Python script run a Linux shell command and store the output in a variable, without the command's output being shown to the user. I have tried this with os.system, subprocess.check_output, subprocess.run, subprocess.Popen, and os.popen with no luck. My current method is running os.system("ls -l &> /tmp/test_file") so the command stdout and stderr are piped to /tmp/test_file, and then I have my Python code read the file into a variable and then delete it. Is there a better way of doing this so that I can have the command output sent directly into the variable without having to create and delete a file, but keep it hidden from the user?
You can use the subprocess.run function. One update: as @Ashley Kleynhans says, "The results of stdout and stderr are bytes objects so you will need to decode them if you want to handle them as strings". For this, you do not need to decode stdout or stderr, because you can pass one more argument to the run method, text=True, to get the return data as a string. "If used it must be a byte sequence, or a string if encoding or errors is specified or text is true" - from Documentation from subprocess import run data = run("ANY COMMAND HERE", capture_output=True, shell=True, text=True) print(data.stdout) # If you want you can save it to a variable print(data.stderr) # ^^
7
10
72,170,947
2022-5-9
https://stackoverflow.com/questions/72170947/how-to-use-ordinalencoder-to-set-custom-order
I have a column in my Used cars price prediction dataset named "Owner_Type". It has four unique values which are ['First', 'Second', 'Third', 'Fourth']. Now the order that makes the most sense is First > Second > Third > Fourth as the price decreases with respect to this order. How can I give this order to the values with OrdinalEncoder()? Please help me, thank you!
OrdinalEncoder has a categories parameter which accepts a list of arrays of categories. Here is a code example: from sklearn.preprocessing import OrdinalEncoder enc = OrdinalEncoder(categories=[['first','second','third','forth']]) X = [['third'], ['second'], ['first']] enc.fit(X) print(enc.transform([['second'], ['first'], ['third'],['forth']]))
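Applied to the column from the question, a sketch could look like this (assuming your DataFrame is called df and that you want 'First' to receive the highest code because it corresponds to the highest price; reverse the category list if you prefer the opposite mapping):

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({'Owner_Type': ['First', 'Second', 'Third', 'Fourth', 'Second']})

# 'Fourth' -> 0, 'Third' -> 1, 'Second' -> 2, 'First' -> 3
enc = OrdinalEncoder(categories=[['Fourth', 'Third', 'Second', 'First']])
df['Owner_Type_encoded'] = enc.fit_transform(df[['Owner_Type']]).ravel()
print(df)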
5
14
72,167,802
2022-5-9
https://stackoverflow.com/questions/72167802/adding-version-attribute-to-python-module
I am building a Python module with a structure like: mypackage/ mypackage/ __init__.py etc.py setup.py setup.cfg pyproject.toml To build it, I am running $ python -m build. I noticed that version numbers weren't available (e.g. mypackage.__version__ is undefined after installing), and currently I am just setting it manually like: setup.py setup(..., version='0.0.1' ) pyproject.toml [project] version = '0.0.1' I am new to Python package development and there are a few posts on this, but there does not seem to be a standard way of doing it. The package is quite small and ideally I'd like to just update one thing like __version__ = '0.0.1' inside __init__.py, and then have this parsed automatically in setup.py and pyproject.toml.
Do you really need/want a __version__ attribute at all? Keeping a __version__ attribute available in the module namespace is a popular convention, but it's possibly falling out of fashion these days because stdlib importlib.metadata is no longer provisional. The one obvious place for a version string is in the package metadata, duplicating that same information in a module attribute may be considered unnecessary and redundant. It also presents some conundrums for users when the version string exists in two different places - where should we look for it first, in the package metadata or in a module top-level namespace? And which one should we trust if the version information found in each of these places is different? So, there is some benefit to only storing it in one place, and that place must be the package's metadata. This is because the Version field is a required field in the Core metadata specifications, but packages which opt-in to providing a __version__ attribute are just following a convention. Getting/setting the version in package metadata If you're using a modern build system, then you would specify the version string directly in pyproject.toml as described in PEP 621 – Storing project metadata in pyproject.toml. The way already shown in the question is correct: [project] name = "mypkg" version = "0.0.1" Users of mypkg could retrieve the version like so: from importlib.metadata import version version("mypkg") Note that unlike accessing a __version__ attribute, this version is retrieved from the package metadata only, and the actual package doesn't even need to be imported. That's useful in some cases, e.g. packages which have import side-effects such as numpy, or the ability to retrieve versions of packages even if they have unsatisfied dependencies / complicated environment setup requirements. What if you want access to the version number within the package itself? Sometimes this is useful, for example to add a --version option to your command-line interface. But this doesn't imply you need a __version__ attribute hanging around, you can just retrieve the version from your own package metadata the same way: parser = argparse.ArgumentParser(...) ... parser.add_argument( "--version", action="version", version=importlib.metadata.version("mypkg"), )
11
17
72,157,296
2022-5-8
https://stackoverflow.com/questions/72157296/what-is-the-difference-between-type-hinting-a-variable-as-an-iterable-versus-a-s
I don't understand the difference when hinting Iterable and Sequence. What is the main difference between those two, and when should I use which? I think set is an Iterable but not a Sequence. Is there any built-in data type that is a Sequence but not an Iterable? def foo(baz: Sequence[float]): ... # What is the difference? def bar(baz: Iterable[float]): ...
The Sequence and Iterable abstract base classes (can also be used as type annotations) mostly* follow Python's definition of sequence and iterable. To be specific: Iterable is any object that defines __iter__ or __getitem__. Sequence is any object that defines __getitem__ and __len__. By definition, any sequence is an iterable. The Sequence class also defines other methods such as __contains__, __reversed__ that calls the two required methods. Some examples: list, tuple, str are the most common sequences. Some built-in iterables are not sequences. For example, reversed returns a reversed object (or list_reverseiterator for lists) that cannot be subscripted. * Iterable does not exactly conform to Python's definition of iterables β€” it only checks if the object defines __iter__, and does not work for objects that's only iterable via __getitem__ (see this table for details). The gold standard of checking if an object is iterable is using the iter builtin.
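A small runnable sketch of the runtime distinction, using the collections.abc ABCs (the typing generics behave the same way for static type checkers):

from collections.abc import Iterable, Sequence

print(isinstance([1.0, 2.0], Sequence))            # True  (list: __getitem__ and __len__)
print(isinstance((1.0, 2.0), Sequence))            # True  (tuple)
print(isinstance({1.0, 2.0}, Sequence))            # False (set has no __getitem__)
print(isinstance({1.0, 2.0}, Iterable))            # True  (set defines __iter__)
print(isinstance(reversed([1.0, 2.0]), Sequence))  # False
print(isinstance(reversed([1.0, 2.0]), Iterable))  # True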
92
88
72,191,674
2022-5-10
https://stackoverflow.com/questions/72191674/modx-in-event-state-in-tkinter
I've been figuring out how to parse tkinter events via event.state to reduce the number of times that I have to call root.bind() (e.g., I can avoid binding both "<ButtonPress-1>" and "<Shift-ButtonPress-1>" by finding if shift was pressed via event.state). Of course, I've relied heavily on the tkinter source code (specifically the definition for __repr__, starting on line 234) to convert the integer of event.state to something I can understand: def getStatefromInt(state_int): # from https://github.com/python/cpython/blob/3.8/Lib/tkinter/__init__.py if isinstance(state_int, int): state = state_int mods = ('Shift', 'Lock', 'Control', 'Mod1', 'Mod2', 'Mod3', 'Mod4', 'Mod5', 'Button1', 'Button2', 'Button3', 'Button4', 'Button5') s = [] for i, n in enumerate(mods): if state & (1 << i): s.append(n) state = state & ~((1<< len(mods)) - 1) if state or not s: s.append(hex(state)) return s One of the things that keeps coming up out of state when events occur is Mod1. What do Mod1 and the other ModX states represent? I thought the number might correspond to the type of button press, but all types of mouse clicks cause only Mod1. I have not been able to find information on what this means online, and I'm having a hard time seeing from the source code what it might mean.
ModX represents a modification, a Modifier Key. In computing, a modifier key is a special key (or combination) on a computer keyboard that temporarily modifies the normal action of another key when pressed together. By themselves, modifier keys usually do nothing; that is, pressing any of the ⇧ Shift, Alt, or Ctrl keys alone does not (generally) trigger any action from the computer. Tkinter is a cross-platform GUI toolkit and uses system-specific facilities. Different operating systems use different methods; for example, a PC that runs Linux signals Alt Gr as Mod 5, while a PC running Windows signals the same keystroke as Control-Alt. You can look at the Modifier Keys on tcl-lang.org. Event.state is a bit mask indicating which of certain modifier keys and mouse buttons were down or active when the event was triggered, and it is not reliable because of the system-specific facilities, as pointed out here.
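For a quick check of a single modifier you can mask event.state with the bit positions from the mods tuple in the question (a sketch; Mod1 is bit 3, i.e. 0x0008, but which physical key it maps to is system-specific, as explained above):

import tkinter as tk

SHIFT_MASK = 0x0001  # bit 0 in the mods tuple
MOD1_MASK = 0x0008   # bit 3 in the mods tuple; actual key is system-specific

def on_click(event):
    print("Shift held:", bool(event.state & SHIFT_MASK))
    print("Mod1 active:", bool(event.state & MOD1_MASK))

root = tk.Tk()
root.bind("<ButtonPress-1>", on_click)
root.mainloop()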
5
5
72,162,359
2022-5-8
https://stackoverflow.com/questions/72162359/skip-django-allauth-you-are-about-to-sign-in-using-a-third-party-account-from
How can I skip the page and automatically log users in when they click on "Log in With Google"?
You need to set up SOCIALACCOUNT_LOGIN_ON_GET=True in your configuration (by default it's False). according to https://django-allauth.readthedocs.io/en/latest/configuration.html: SOCIALACCOUNT_LOGIN_ON_GET (=False) Controls whether or not the endpoints for initiating a social login (for example, β€œ/accounts/google/login/”) require a POST request to initiate the handshake. For security considerations, it is strongly recommended to require POST requests.
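In practice that is a single line in your Django settings module (a sketch; the file is usually settings.py, but the exact path depends on your project layout):

# settings.py
SOCIALACCOUNT_LOGIN_ON_GET = True  # skip the intermediate "You are about to sign in..." page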
11
22
72,119,683
2022-5-4
https://stackoverflow.com/questions/72119683/import-modules-to-pyscript
When we are writing Python code, we typically use packages and modules that we import. For example, when we are coding we may write: import numpy import requests from bs4 import BeautifulSoup When we are trying to integrate Python with HTML using PyScript (https://pyscript.net/), it just says that it doesn’t have the package installed. However, when this happens in normal Python we use pip and import it from there. So what should we do when we need a package in PyScript? Thank you!
At this time, bs4 is not supported. You will receive an error ValueError: Couldn't find a pure Python 3 wheel for 'bs4' You will also have problems using the requests package in pyscript. Use pyfetch instead of requests.get. To import numpy and requests, use <py-env> before <py-script>. Example: <body> <py-env> - numpy </py-env> <py-script> import numpy as np print(np.random.randn(10, 4)) </py-script> </body> Pyscript also supports package versions: <py-env> - numpy==1.22.3 </py-env>
7
14
72,122,939
2022-5-5
https://stackoverflow.com/questions/72122939/resourceexhaustederror-graph-execution-error-when-trying-to-train-tensorflow
A few days back, I got the same error at 12th epoch. This time, it happens at the 1st. I have no idea why that is happening as I did not make any changes to the model. I only normalized the input to give X_train.max() as 1 after scaling like it should be. Does it have something to do with patch size? Should I reduce it? Why do I get this error and how can I fix it? my_model.summary() Model: "U-Net" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_6 (InputLayer) [(None, 64, 64, 64, 0 [] 3)] conv3d_95 (Conv3D) (None, 64, 64, 64, 5248 ['input_6[0][0]'] 64) batch_normalization_90 (BatchN (None, 64, 64, 64, 256 ['conv3d_95[0][0]'] ormalization) 64) activation_90 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_90[0][0]'] 64) conv3d_96 (Conv3D) (None, 64, 64, 64, 110656 ['activation_90[0][0]'] 64) batch_normalization_91 (BatchN (None, 64, 64, 64, 256 ['conv3d_96[0][0]'] ormalization) 64) activation_91 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_91[0][0]'] 64) max_pooling3d_20 (MaxPooling3D (None, 32, 32, 32, 0 ['activation_91[0][0]'] ) 64) conv3d_97 (Conv3D) (None, 32, 32, 32, 221312 ['max_pooling3d_20[0][0]'] 128) batch_normalization_92 (BatchN (None, 32, 32, 32, 512 ['conv3d_97[0][0]'] ormalization) 128) activation_92 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_92[0][0]'] 128) conv3d_98 (Conv3D) (None, 32, 32, 32, 442496 ['activation_92[0][0]'] 128) batch_normalization_93 (BatchN (None, 32, 32, 32, 512 ['conv3d_98[0][0]'] ormalization) 128) activation_93 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_93[0][0]'] 128) max_pooling3d_21 (MaxPooling3D (None, 16, 16, 16, 0 ['activation_93[0][0]'] ) 128) conv3d_99 (Conv3D) (None, 16, 16, 16, 884992 ['max_pooling3d_21[0][0]'] 256) batch_normalization_94 (BatchN (None, 16, 16, 16, 1024 ['conv3d_99[0][0]'] ormalization) 256) activation_94 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_94[0][0]'] 256) conv3d_100 (Conv3D) (None, 16, 16, 16, 1769728 ['activation_94[0][0]'] 256) batch_normalization_95 (BatchN (None, 16, 16, 16, 1024 ['conv3d_100[0][0]'] ormalization) 256) activation_95 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_95[0][0]'] 256) max_pooling3d_22 (MaxPooling3D (None, 8, 8, 8, 256 0 ['activation_95[0][0]'] ) ) conv3d_101 (Conv3D) (None, 8, 8, 8, 512 3539456 ['max_pooling3d_22[0][0]'] ) batch_normalization_96 (BatchN (None, 8, 8, 8, 512 2048 ['conv3d_101[0][0]'] ormalization) ) activation_96 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_96[0][0]'] ) conv3d_102 (Conv3D) (None, 8, 8, 8, 512 7078400 ['activation_96[0][0]'] ) batch_normalization_97 (BatchN (None, 8, 8, 8, 512 2048 ['conv3d_102[0][0]'] ormalization) ) activation_97 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_97[0][0]'] ) max_pooling3d_23 (MaxPooling3D (None, 4, 4, 4, 512 0 ['activation_97[0][0]'] ) ) conv3d_103 (Conv3D) (None, 4, 4, 4, 102 14156800 ['max_pooling3d_23[0][0]'] 4) batch_normalization_98 (BatchN (None, 4, 4, 4, 102 4096 ['conv3d_103[0][0]'] ormalization) 4) activation_98 (Activation) (None, 4, 4, 4, 102 0 ['batch_normalization_98[0][0]'] 4) conv3d_104 (Conv3D) (None, 4, 4, 4, 102 28312576 ['activation_98[0][0]'] 4) batch_normalization_99 (BatchN (None, 4, 4, 4, 102 4096 ['conv3d_104[0][0]'] ormalization) 4) activation_99 (Activation) (None, 4, 4, 4, 102 0 ['batch_normalization_99[0][0]'] 
4) conv3d_transpose_20 (Conv3DTra (None, 8, 8, 8, 512 4194816 ['activation_99[0][0]'] nspose) ) concatenate_20 (Concatenate) (None, 8, 8, 8, 102 0 ['conv3d_transpose_20[0][0]', 4) 'activation_97[0][0]'] conv3d_105 (Conv3D) (None, 8, 8, 8, 512 14156288 ['concatenate_20[0][0]'] ) batch_normalization_100 (Batch (None, 8, 8, 8, 512 2048 ['conv3d_105[0][0]'] Normalization) ) activation_100 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_100[0][0]'] ) conv3d_106 (Conv3D) (None, 8, 8, 8, 512 7078400 ['activation_100[0][0]'] ) batch_normalization_101 (Batch (None, 8, 8, 8, 512 2048 ['conv3d_106[0][0]'] Normalization) ) activation_101 (Activation) (None, 8, 8, 8, 512 0 ['batch_normalization_101[0][0]'] ) conv3d_transpose_21 (Conv3DTra (None, 16, 16, 16, 1048832 ['activation_101[0][0]'] nspose) 256) concatenate_21 (Concatenate) (None, 16, 16, 16, 0 ['conv3d_transpose_21[0][0]', 512) 'activation_95[0][0]'] conv3d_107 (Conv3D) (None, 16, 16, 16, 3539200 ['concatenate_21[0][0]'] 256) batch_normalization_102 (Batch (None, 16, 16, 16, 1024 ['conv3d_107[0][0]'] Normalization) 256) activation_102 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_102[0][0]'] 256) conv3d_108 (Conv3D) (None, 16, 16, 16, 1769728 ['activation_102[0][0]'] 256) batch_normalization_103 (Batch (None, 16, 16, 16, 1024 ['conv3d_108[0][0]'] Normalization) 256) activation_103 (Activation) (None, 16, 16, 16, 0 ['batch_normalization_103[0][0]'] 256) conv3d_transpose_22 (Conv3DTra (None, 32, 32, 32, 262272 ['activation_103[0][0]'] nspose) 128) concatenate_22 (Concatenate) (None, 32, 32, 32, 0 ['conv3d_transpose_22[0][0]', 256) 'activation_93[0][0]'] conv3d_109 (Conv3D) (None, 32, 32, 32, 884864 ['concatenate_22[0][0]'] 128) batch_normalization_104 (Batch (None, 32, 32, 32, 512 ['conv3d_109[0][0]'] Normalization) 128) activation_104 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_104[0][0]'] 128) conv3d_110 (Conv3D) (None, 32, 32, 32, 442496 ['activation_104[0][0]'] 128) batch_normalization_105 (Batch (None, 32, 32, 32, 512 ['conv3d_110[0][0]'] Normalization) 128) activation_105 (Activation) (None, 32, 32, 32, 0 ['batch_normalization_105[0][0]'] 128) conv3d_transpose_23 (Conv3DTra (None, 64, 64, 64, 65600 ['activation_105[0][0]'] nspose) 64) concatenate_23 (Concatenate) (None, 64, 64, 64, 0 ['conv3d_transpose_23[0][0]', 128) 'activation_91[0][0]'] conv3d_111 (Conv3D) (None, 64, 64, 64, 221248 ['concatenate_23[0][0]'] 64) batch_normalization_106 (Batch (None, 64, 64, 64, 256 ['conv3d_111[0][0]'] Normalization) 64) activation_106 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_106[0][0]'] 64) conv3d_112 (Conv3D) (None, 64, 64, 64, 110656 ['activation_106[0][0]'] 64) batch_normalization_107 (Batch (None, 64, 64, 64, 256 ['conv3d_112[0][0]'] Normalization) 64) activation_107 (Activation) (None, 64, 64, 64, 0 ['batch_normalization_107[0][0]'] 64) conv3d_113 (Conv3D) (None, 64, 64, 64, 260 ['activation_107[0][0]'] 4) ================================================================================================== Total params: 90,319,876 Trainable params: 90,308,100 Non-trainable params: 11,776 __________________________________________________________________________________________________ None Error Message Log: Epoch 1/100 --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) <ipython-input-52-ec522ff5ad08> in <module>() 5 epochs=100, 6 verbose=1, ----> 7 validation_data=(X_test, y_test)) 1 frames 
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 53 ctx.ensure_initialized() 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: ResourceExhaustedError: Graph execution error: Detected at node 'U-Net/concatenate_23/concat' defined at (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance app.start() File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start self.io_loop.start() File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever self._run_once() File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once handle._run() File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events handler_func(fileobj, events) File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 452, in _handle_events self._handle_recv() File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 481, in _handle_recv self._run_callback(callback, msg) File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 431, in _run_callback callback(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes if self.run_code(code, result): File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-52-ec522ff5ad08>", line 7, in <module> validation_data=(X_test, y_test)) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return 
fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit tmp_logs = self.train_function(iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function return step_function(self, iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step outputs = model.train_step(data) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step y_pred = self(x, training=True) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) packages/keras/layers/merge.py", line 531, in _merge_function return backend.concatenate(inputs, axis=self.axis) File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 3313, in concatenate return tf.concat([to_dense(x) for x in tensors], axis) Node: 'U-Net/concatenate_23/concat' OOM when allocating tensor with shape[8,128,64,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node U-Net/concatenate_23/concat}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [Op:__inference_train_function_24517] GPU details: nvidia-smi command: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 73W / 149W | 11077MiB / 11441MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ I'm new to Tensorflow and all of this ML stuff honestly. Would really appreciate any help. Thanks.
I had the same error as you; it's a resource-exhausted problem, resolved by just reducing the batch_size value (I had a model which tries to learn from a dataset of big images; I reduced the batch size from 32 to 16) and it worked fine.
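If it helps, the only change is the batch_size argument in the fit call (a sketch based on the call visible in the traceback; the variable names are assumed from the question and the largest value you can afford depends on your GPU memory):

history = my_model.fit(X_train, y_train,
                       batch_size=2,   # try 8, 4, 2, ... until the OOM error disappears
                       epochs=100,
                       verbose=1,
                       validation_data=(X_test, y_test))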
4
11
72,193,393
2022-5-10
https://stackoverflow.com/questions/72193393/find-the-value-of-variables-to-maximize-return-of-function-in-python
I'd want to achieve similar result as how the Solver-function in Excel is working. I've been reading of Scipy optimization and been trying to build a function which outputs what I would like to find the maximal value of. The equation is based on four different variables which, see my code below: import pandas as pd import numpy as np from scipy import optimize cols = { 'Dividend2': [9390, 7448, 177], 'Probability': [341, 376, 452], 'EV': [0.53, 0.60, 0.55], 'Dividend': [185, 55, 755], 'EV2': [123, 139, 544], } df = pd.DataFrame(cols) def myFunc(params): """myFunc metric.""" (ev, bv, vc, dv) = params df['Number'] = np.where(df['Dividend2'] <= vc, 1, 0) \ + np.where(df['EV2'] <= dv, 1, 0) df['Return'] = np.where( df['EV'] <= ev, 0, np.where( df['Probability'] >= bv, 0, df['Number'] * df['Dividend'] - (vc + dv) ) ) return -1 * (df['Return'].sum()) b1 = [(0.2,4), (300,600), (0,1000), (0,1000)] start = [0.2, 600, 1000, 1000] result = optimize.minimize(fun=myFunc, bounds=b1, x0=start) print(result) So I'd like to find the maximum value of the column Return in df when changing the variables ev,bv,vc & dv. I'd like them to be between in the intervals of ev: 0.2-4, bv: 300-600, vc: 0-1000 & dv: 0-1000. When running my code it seem like the function stops at x0.
Solution I will use optuna library to give you a solution to the type of problem you are trying to solve. I have tried using scipy.optimize.minimize and it appears that the loss-landscape is probably quite flat in most places, and hence the tolerances enforce the minimizing algorithm (L-BFGS-B) to stop prematurely. Optuna Docs: https://optuna.readthedocs.io/en/stable/index.html With optuna, it rather straight forward. Optuna only requires an objective function and a study. The study send various trials to the objective function, which in turn, evaluates the metric of your choice. I have defined another metric function myFunc2 by mostly removing the np.where calls, as you can do-away with them (reduces number of steps) and make the function slightly faster. # install optuna with pip pip install -Uqq optuna Although I looked into using a rather smooth loss landscape, sometimes it is necessary to visualize the landscape itself. The answer in section B elaborates on visualization. But, what if you want to use a smoother metric function? Section D sheds some light on this. Order of code-execution should be: Sections: C >> B >> B.1 >> B.2 >> B.3 >> A.1 >> A.2 >> D A. Building Intuition If you create a hiplot (also known as a plot with parallel-coordinates) with all the possible parameter values as mentioned in the search_space for Section B.2, and plot the lowest 50 outputs of myFunc2, it would look like this: Plotting all such points from the search_space would look like this: A.1. Loss Landscape Views for Various Parameter-Pairs These figures show that mostly the loss-landscape is flat for any two of the four parameters (ev, bv, vc, dv). This could be a reason why, only GridSampler (which brute-forces the searching process) does better, compared to the other two samplers (TPESampler and RandomSampler). Please click on any of the images below to view them enlarged. This could also be the reason why scipy.optimize.minimize(method="L-BFGS-B") fails right off the bat. 01. dv-vc 02. dv-bv 03. dv-ev 04. bv-ev 05. cv-ev 06. vc-bv # Create contour plots for parameter-pairs study_name = "GridSampler" study = studies.get(study_name) views = [("dv", "vc"), ("dv", "bv"), ("dv", "ev"), ("bv", "ev"), ("vc", "ev"), ("vc", "bv")] for i, (x, y) in enumerate(views): print(f"Figure: {i}/{len(views)}") study_contour_plot(study=study, params=(x, y)) A.2. Parameter Importance study_name = "GridSampler" study = studies.get(study_name) fig = optuna.visualization.plot_param_importances(study) fig.update_layout(title=f'Hyperparameter Importances: {study.study_name}', autosize=False, width=800, height=500, margin=dict(l=65, r=50, b=65, t=90)) fig.show() B. Code Section B.3. 
finds the lowest metric -88.333 for: {'ev': 0.2, 'bv': 500.0, 'vc': 222.2222, 'dv': 0.0} import warnings from functools import partial from typing import Iterable, Optional, Callable, List import pandas as pd import numpy as np import optuna from tqdm.notebook import tqdm warnings.filterwarnings("ignore", category=optuna.exceptions.ExperimentalWarning) optuna.logging.set_verbosity(optuna.logging.WARNING) PARAM_NAMES: List[str] = ["ev", "bv", "vc", "dv",] DEFAULT_METRIC_FUNC: Callable = myFunc2 def myFunc2(params): """myFunc metric v2 with lesser steps.""" global df # define as a global variable (ev, bv, vc, dv) = params df['Number'] = (df['Dividend2'] <= vc) * 1 + (df['EV2'] <= dv) * 1 df['Return'] = ( (df['EV'] > ev) * (df['Probability'] < bv) * (df['Number'] * df['Dividend'] - (vc + dv)) ) return -1 * (df['Return'].sum()) def make_param_grid( bounds: List[Tuple[float, float]], param_names: Optional[List[str]]=None, num_points: int=10, as_dict: bool=True, ) -> Union[pd.DataFrame, Dict[str, List[float]]]: """ Create parameter search space. Example: grid = make_param_grid(bounds=b1, num_points=10, as_dict=True) """ if param_names is None: param_names = PARAM_NAMES # ["ev", "bv", "vc", "dv"] bounds = np.array(bounds) grid = np.linspace(start=bounds[:,0], stop=bounds[:,1], num=num_points, endpoint=True, axis=0) grid = pd.DataFrame(grid, columns=param_names) if as_dict: grid = grid.to_dict() for k,v in grid.items(): grid.update({k: list(v.values())}) return grid def objective(trial, bounds: Optional[Iterable]=None, func: Optional[Callable]=None, param_names: Optional[List[str]]=None): """Objective function, necessary for optimizing with optuna.""" if param_names is None: param_names = PARAM_NAMES if (bounds is None): bounds = ((-10, 10) for _ in param_names) if not isinstance(bounds, dict): bounds = dict((p, (min(b), max(b))) for p, b in zip(param_names, bounds)) if func is None: func = DEFAULT_METRIC_FUNC params = dict( (p, trial.suggest_float(p, bounds.get(p)[0], bounds.get(p)[1])) for p in param_names ) # x = trial.suggest_float('x', -10, 10) return func((params[p] for p in param_names)) def optimize(objective: Callable, sampler: Optional[optuna.samplers.BaseSampler]=None, func: Optional[Callable]=None, n_trials: int=2, study_direction: str="minimize", study_name: Optional[str]=None, formatstr: str=".4f", verbose: bool=True): """Optimizing function using optuna: creates a study.""" if func is None: func = DEFAULT_METRIC_FUNC study = optuna.create_study( direction=study_direction, sampler=sampler, study_name=study_name) study.optimize( objective, n_trials=n_trials, show_progress_bar=True, n_jobs=1, ) if verbose: metric = eval_metric(study.best_params, func=myFunc2) msg = format_result(study.best_params, metric, header=study.study_name, format=formatstr) print(msg) return study def format_dict(d: Dict[str, float], format: str=".4f") -> Dict[str, float]: """ Returns formatted output for a dictionary with string keys and float values. """ return dict((k, float(f'{v:{format}}')) for k,v in d.items()) def format_result(d: Dict[str, float], metric_value: float, header: str='', format: str=".4f"): """Returns formatted result.""" msg = f"""Study Name: {header}\n{'='*30} βœ… study.best_params: \n\t{format_dict(d)} βœ… metric: {metric_value} """ return msg def study_contour_plot(study: optuna.Study, params: Optional[List[str]]=None, width: int=560, height: int=500): """ Create contour plots for a study, given a list or tuple of two parameter names. 
""" if params is None: params = ["dv", "vc"] fig = optuna.visualization.plot_contour(study, params=params) fig.update_layout( title=f'Contour Plot: {study.study_name} ({params[0]}, {params[1]})', autosize=False, width=width, height=height, margin=dict(l=65, r=50, b=65, t=90)) fig.show() bounds = [(0.2, 4), (300, 600), (0, 1000), (0, 1000)] param_names = PARAM_NAMES # ["ev", "bv", "vc", "dv",] pobjective = partial(objective, bounds=bounds) # Create an empty dict to contain # various subsequent studies. studies = dict() Optuna comes with a few different types of Samplers. Samplers provide the strategy of how optuna is going to sample points from the parametr-space and evaluate the objective function. https://optuna.readthedocs.io/en/stable/reference/samplers.html B.1 Use TPESampler from optuna.samplers import TPESampler sampler = TPESampler(seed=42) study_name = "TPESampler" studies[study_name] = optimize( pobjective, sampler=sampler, n_trials=100, study_name=study_name, ) # Study Name: TPESampler # ============================== # # βœ… study.best_params: # {'ev': 1.6233, 'bv': 585.2143, 'vc': 731.9939, 'dv': 598.6585} # βœ… metric: -0.0 B.2. Use GridSampler GridSampler requires a parameter search grid. Here we are using the following search_space. from optuna.samplers import GridSampler # create search-space search_space = make_param_grid(bounds=bounds, num_points=10, as_dict=True) sampler = GridSampler(search_space) study_name = "GridSampler" studies[study_name] = optimize( pobjective, sampler=sampler, n_trials=2000, study_name=study_name, ) # Study Name: GridSampler # ============================== # # βœ… study.best_params: # {'ev': 0.2, 'bv': 500.0, 'vc': 222.2222, 'dv': 0.0} # βœ… metric: -88.33333333333337 B.3. Use RandomSampler from optuna.samplers import RandomSampler sampler = RandomSampler(seed=42) study_name = "RandomSampler" studies[study_name] = optimize( pobjective, sampler=sampler, n_trials=300, study_name=study_name, ) # Study Name: RandomSampler # ============================== # # βœ… study.best_params: # {'ev': 1.6233, 'bv': 585.2143, 'vc': 731.9939, 'dv': 598.6585} # βœ… metric: -0.0 C. Dummy Data For the sake of reproducibility, I am keeping a record of the dummy data used here. import pandas as pd import numpy as np from scipy import optimize cols = { 'Dividend2': [9390, 7448, 177], 'Probability': [341, 376, 452], 'EV': [0.53, 0.60, 0.55], 'Dividend': [185, 55, 755], 'EV2': [123, 139, 544], } df = pd.DataFrame(cols) def myFunc(params): """myFunc metric.""" (ev, bv, vc, dv) = params df['Number'] = np.where(df['Dividend2'] <= vc, 1, 0) \ + np.where(df['EV2'] <= dv, 1, 0) df['Return'] = np.where( df['EV'] <= ev, 0, np.where( df['Probability'] >= bv, 0, df['Number'] * df['Dividend'] - (vc + dv) ) ) return -1 * (df['Return'].sum()) b1 = [(0.2,4), (300,600), (0,1000), (0,1000)] start = [0.2, 600, 1000, 1000] result = optimize.minimize(fun=myFunc, bounds=b1, x0=start) print(result) C.1. An Observation So, it seems at first glance that the code executed properly and did not throw any error. It says it had success in finding the minimized solution. fun: -0.0 hess_inv: <4x4 LbfgsInvHessProduct with dtype=float64> jac: array([0., 0., 3., 3.]) message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL' # πŸ’‘ nfev: 35 nit: 2 status: 0 success: True x: array([2.e-01, 6.e+02, 0.e+00, 0.e+00]) # πŸ”₯ A close observation reveals that the solution (see πŸ”₯) is no different from the starting point [0.2, 600, 1000, 1000]. 
So, seems like nothing really happened and the algorithm just finished prematurely?!! Now look at the message above (see πŸ’‘). If we run a google search on this, you could find something like this: Summary b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL' If the loss-landscape does not have a smoothely changing topography, the gradient descent algorithms will soon find that from one iteration to the next, there isn't much change happening and hence, will terminate further seeking. Also, if the loss-landscape is rather flat, this could see similar fate and get early-termination. scipy-optimize-minimize does not perform the optimization - CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL D. Making the Loss Landscape Smoother A binary evaluation of value = 1 if x>5 else 0 is essentially a step-function that assigns 1 for all values of x that are greater than 5 and 0 otherwise. But this introduces a kink - a discontinuity in smoothness and this could potentially introduce problems in traversing the loss-landscape. What if we use a sigmoid function to introduce some smoothness? # Define sigmoid function def sigmoid(x): """Sigmoid function.""" return 1 / (1 + np.exp(-x)) For the above example, we could modify it as follows. You can additionally introduce another factor (gamma: Ξ³) as follows and try to optimize it to make the landscape smoother. Thus by controlling the gamma factor, you could make the function smoother and change how quickly it changes around x = 5 The above figure is created with the following code-snippet. import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'svg' # 'svg', 'retina' plt.style.use('seaborn-white') def make_figure(figtitle: str="Sigmoid Function"): """Make the demo figure for using sigmoid.""" x = np.arange(-20, 20.01, 0.01) y1 = sigmoid(x) y2 = sigmoid(x - 5) y3 = sigmoid((x - 5)/3) y4 = sigmoid((x - 5)/0.3) fig, ax = plt.subplots(figsize=(10,5)) plt.sca(ax) plt.plot(x, y1, ls="-", label="$\sigma(x)$") plt.plot(x, y2, ls="--", label="$\sigma(x - 5)$") plt.plot(x, y3, ls="-.", label="$\sigma((x - 5) / 3)$") plt.plot(x, y4, ls=":", label="$\sigma((x - 5) / 0.3)$") plt.axvline(x=0, ls="-", lw=1.3, color="cyan", alpha=0.9) plt.axvline(x=5, ls="-", lw=1.3, color="magenta", alpha=0.9) plt.legend() plt.title(figtitle) plt.show() make_figure() D.1. Example of Metric Smoothing The following is an example of how you could apply function smoothing. from functools import partial def sig(x, gamma: float=1.): return sigmoid(x/gamma) def myFunc3(params, gamma: float=0.5): """myFunc metric v3 with smoother metric.""" (ev, bv, vc, dv) = params _sig = partial(sig, gamma=gamma) df['Number'] = _sig(x = -(df['Dividend2'] - vc)) * 1 \ + _sig(x = -(df['EV2'] - dv)) * 1 df['Return'] = ( _sig(x = df['EV'] - ev) * _sig(x = -(df['Probability'] - bv)) * _sig(x = df['Number'] * df['Dividend'] - (vc + dv)) ) return -1 * (df['Return'].sum())
7
3
72,169,984
2022-5-9
https://stackoverflow.com/questions/72169984/how-do-i-find-the-repetition-size-in-a-scanned-texture
Given the scan of a fabric (snippet included here, it could easily be A4 size at 600 dpi), what would be the best method for finding the repetition pattern in the scan?

I have tried:

splitting the image into 4 quarters and trying to find points via SIFT and OpenCV
FFT as suggested here

I am aware of other answers on Stack Overflow and other sites (this, this, and this), but they tend to be a bit too terse for an OpenCV beginner. I am thinking of eyeballing an area and optimizing via row-by-row and column-by-column comparison of pixels, but I am wondering if there is another, better path.
The image's 2D autocorrelation is a good generic way to find repeating structures, as others have commented. There are some details to doing this effectively: For this analysis, it is often fine to just convert the image to grayscale; that's what the code snippet below does. To extend this to a color-aware analysis, you could compute the autocorrelation for each color channel and aggregate the results. It helps to apply a window function first, to avoid boundary artifacts. For efficient computation, it helps to compute the autocorrelation through FFTs. Rather than use the whole spectrum in computing the autocorrelation, it helps to zero out very low frequencies, since these are irrelevant for texture. Rather than the plain autocorrelation, it helps to partially "equalize" or "whiten" the spectrum, as suggested by the Generalized Cross Correlation with Phase Transform (GCC-PHAT) technique. Python code: # Copyright 2022 Google LLC. # SPDX-License-Identifier: Apache-2.0 from PIL import Image import matplotlib.pyplot as plt import numpy as np import scipy.signal image = np.array(Image.open('texture.jpg').convert('L'), float) # Window the image. window_x = np.hanning(image.shape[1]) window_y = np.hanning(image.shape[0]) image *= np.outer(window_y, window_x) # Transform to frequency domain. spectrum = np.fft.rfft2(image) # Partially whiten the spectrum. This tends to make the autocorrelation sharper, # but it also amplifies noise. The -0.6 exponent is the strength of the # whitening normalization, where -1.0 would be full normalization and 0.0 would # be the usual unnormalized autocorrelation. spectrum *= (1e-12 + np.abs(spectrum))**-0.6 # Exclude some very low frequencies, since these are irrelevant to the texture. fx = np.arange(spectrum.shape[1]) fy = np.fft.fftshift(np.arange(spectrum.shape[0]) - spectrum.shape[0] // 2) fx, fy = np.meshgrid(fx, fy) spectrum[np.sqrt(fx**2 + fy**2) < 10] = 0 # Compute the autocorrelation and inverse transform. acorr = np.real(np.fft.irfft2(np.abs(spectrum)**2)) plt.figure(figsize=(10, 10)) plt.imshow(acorr, cmap='Blues', vmin=0, vmax=np.percentile(acorr, 99.5)) plt.xlim(0, image.shape[1] / 2) plt.ylim(0, image.shape[0] / 2) plt.title('2D autocorrelation', fontsize=18) plt.xlabel('Horizontal lag (px)', fontsize=15) plt.ylabel('Vertical lag (px)', fontsize=15) plt.show() Output: The period of the flannel texture is visible at the circled point at 282 px horizontally and 290 px vertically.
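As a possible follow-up (this is my own sketch, not part of the snippet above; min_lag is an assumed threshold you would tune to your scan), the repeat period can also be read off programmatically by locating the strongest autocorrelation peak away from the origin:

# Sketch: estimate the repeat period by finding the strongest off-origin peak.
min_lag = 50  # assumed smallest plausible repeat period in pixels; tune for your scan
h, w = image.shape[0] // 2, image.shape[1] // 2
quadrant = acorr[:h, :w].copy()
yy, xx = np.mgrid[:h, :w]
quadrant[np.hypot(xx, yy) < min_lag] = -np.inf  # suppress the zero-lag peak
peak_y, peak_x = np.unravel_index(np.argmax(quadrant), quadrant.shape)
print(f'Estimated repeat: {peak_x} px horizontally, {peak_y} px vertically')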
4
2
72,142,248
2022-5-6
https://stackoverflow.com/questions/72142248/inserting-rows-into-microsoft-sql-server-using-pandas-raises-precision-error
I am trying to insert data into an MSSQL database. I needed a fast method for this, so I set the fast_executemany param to True. The upload works fine for the most part, but if one of the columns is a datetime with a timezone, it crashes raising:

(pyodbc.Error) ('HY104', '[HY104] [Microsoft][ODBC Driver 17 for SQL Server]Invalid precision value (0) (SQLBindParameter)')

If I insert the same data with fast_executemany as False then everything works perfectly. Has anyone come across a similar problem or know what might be the issue?

Sample code

from sqlalchemy import create_engine, engine
import pytz
import pandas as pd
from sqlalchemy.dialects.mssql import DATETIMEOFFSET
import datetime

engine_url = engine.URL.create(
    drivername='mssql',
    username='admin',
    password='**',
    host='**',
    port='1433',
    database='mytestdb1',
    query={'driver': "ODBC Driver 17 for SQL Server"}
)
mssql_engine = create_engine(engine_url, echo=False, fast_executemany=True)

base = datetime.datetime.today().replace(tzinfo=pytz.utc)
date_list = [base - datetime.timedelta(days=x) for x in range(20)]
df = pd.DataFrame(date_list, columns=['date_time'])

df.to_sql('test_insert', mssql_engine, schema='testschema1', if_exists='replace', dtype={'date_time': DATETIMEOFFSET})

response:

DBAPIError: (pyodbc.Error) ('HY104', '[HY104] [Microsoft][ODBC Driver 17 for SQL Server]Invalid precision value (0) (SQLBindParameter)')
[SQL: INSERT INTO testschema1.test_datetime ([index], date_time) VALUES (?, ?)]
[parameters: ((0, '2022-05-06 16:40:05.434984 +00:00'), (1, '2022-05-05 16:40:05.434984 +00:00'), (2, '2022-05-04 16:40:05.434984 +00:00'), (3, '2022-05-03 16:40:05.434984 +00:00'), (4, '2022-05-02 16:40:05.434984 +00:00'), (5, '2022-05-01 16:40:05.434984 +00:00'), (6, '2022-04-30 16:40:05.434984 +00:00'), (7, '2022-04-29 16:40:05.434984 +00:00') ... displaying 10 of 20 total bound parameter sets ... (18, '2022-04-18 16:40:05.434984 +00:00'), (19, '2022-04-17 16:40:05.434984 +00:00'))]
(Background on this error at: http://sqlalche.me/e/14/dbapi)

sqlalchemy==1.4, pyodbc==4.0.32 and pandas==1.2.0

As I said, the code works perfectly if I don't use fast_executemany.
This issue can be reproduced using SQLAlchemy 1.4.0. It was fixed in SQLAlchemy 1.4.1.
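A minimal way to pick up the fix, assuming you install from PyPI, is to upgrade the package and then confirm the version:

pip install --upgrade "SQLAlchemy>=1.4.1"
python -c "import sqlalchemy; print(sqlalchemy.__version__)"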
7
0
72,166,259
2022-5-9
https://stackoverflow.com/questions/72166259/werkzeug-server-is-shutting-down-in-django-application
After updating the Werkzeug version from 2.0.3 to 2.1.0, I keep getting errors every time I run the server. Here is the error log:

Exception happened during processing of request from ('127.0.0.1', 44612)
Traceback (most recent call last):
  File "/usr/lib/python3.8/socketserver.py", line 683, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib/python3.8/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python3.8/socketserver.py", line 747, in __init__
    self.handle()
  File "/home/oladhari/.virtualenvs/reachat/lib/python3.8/site-packages/werkzeug/serving.py", line 363, in handle
    super().handle()
  File "/usr/lib/python3.8/http/server.py", line 427, in handle
    self.handle_one_request()
  File "/usr/lib/python3.8/http/server.py", line 415, in handle_one_request
    method()
  File "/home/oladhari/.virtualenvs/reachat/lib/python3.8/site-packages/werkzeug/serving.py", line 243, in run_wsgi
    self.environ = environ = self.make_environ()
  File "/home/oladhari/.virtualenvs/reachat/lib/python3.8/site-packages/django_extensions/management/commands/runserver_plus.py", line 326, in make_environ
    del environ['werkzeug.server.shutdown']
KeyError: 'werkzeug.server.shutdown'

This exception keeps appearing with the port incrementing by 2 (('127.0.0.1', 44612) -> ('127.0.0.1', 44628)) and the server crashes.

Checking the changelog, I found this detail:

Remove previously deprecated code. #2276
Remove the non-standard shutdown function from the WSGI environ when running the development server. See the docs for alternatives.

Here is the link to the changelog. It asks to check the documentation for alternatives, but I cannot find any. Please let me know how I can resolve this error, thank you.

NB: my Python version is 3.8
Literally just ran into this today. According to their git repo (issue 1715), and assuming you are running runserver_plus, there are three options that worked for some users. The first worked for me:

Not altering your files and adding the option --keep-meta-shutdown. My full command looks like:
python manage.py runserver_plus --cert-file /path/to/cert.pem --key-file /path/to/key.pem --keep-meta-shutdown localhost:9000
Commenting out lines 325 and 326 in your runserver_plus.py
Upgrading Python to 3.10

Hope this helps!
15
25
72,191,842
2022-5-10
https://stackoverflow.com/questions/72191842/python-attrs-inheriting-from-class-which-values-have-default-values-raises-erro
I have a situation where an attrs class inherits from another class whose attributes have default values. This raises a ValueError. Here's an example:

from attrs import define

@define
class A:
    a: int = 1

@define
class B(A):
    b: int

test = B(b=1)

>>> ValueError: No mandatory attributes allowed after an attribute with a default value or factory. Attribute in question: Attribute(name='b', ...

How do I avoid this kind of behavior?
You're running into a limitation of Python. The __init__ you're asking attrs to write for you looks like this: def __init__(self, b=1, a): self.b = b self.a = a which can't exist. You can work around that by either declaring class B or attribute b keyword-only: from attrs import define, field @define class A: a: int = 1 @define(kw_only=True) class B1(A): b: int @define class B2(A): b: int = field(kw_only=True) test1 = B1(b=1) test2 = B2(b=1)
5
8
72,191,298
2022-5-10
https://stackoverflow.com/questions/72191298/what-is-the-pythonic-way-of-iterating-over-a-single-item
I come across this issue often, and I would be surprised if there wasn't some very simple and pythonic one-liner solution to it. Suppose I have a method or a function that takes a list or some other iterable object as an argument. I want for an operation to be performed once for each item in the object. Sometimes, only a single item (say, a float value) is passed to this function. In this situation, my for-loop doesn't know what to do. And so, I find myself peppering my code with the following snippet of code: from collections.abc import Sequence def my_function(value): if not isinstance(value, Sequence): value = [value] # rest of my function This works, but it seems wasteful and not particularly legible. In searching StackOverflow I've also discovered that strings are considered sequences, and so this code could easily break given the wrong argument. It just doesn't feel like the right approach. I come from a MATLAB background, and this is neatly solved in that language since scalars are treated like 1x1 matrices. I'd expect, at the very least, for there to be a built-in, something like numpy's atleast_1d function, that automatically converts anything into an iterable if it isn't one.
The short answer is nope, there is no simple built-in. And yep, if you want str (or bytes or bytes-like stuff or whatever) to act as a scalar value, it gets uglier. Python expects callers to adhere to the interface contract; if you say you accept sequences, say so, and it's on the caller to wrap any individual arguments. If you must do this, there's two obvious ways to do it: First is to make your function accept varargs instead of a single argument, and leave it up to the caller to unpack any sequences, so you can always iterate the varargs received: def my_function(*values): for val in values: # Rest of function A caller with individual items calls you with my_function(a, b), a caller with a sequence calls you with my_function(*seq). The latter does incur some overhead to unpack the sequence to a new tuple to be received by my_function, but in many cases this is fine. If that's not acceptable for whatever reason, the other solution is to roll your own "ensure iterable" converter function, following whatever rules you care about: from collections.abc import ByteString def ensure_iterable(obj): if isinstance(obj, (str, ByteString)): return (obj,) # Treat strings and bytes-like stuff as scalars and wrap try: iter(obj) # Simplest way to test if something is iterable is to try to make it an iterator except TypeError: return (obj,) # Not iterable, wrap else: return obj # Already iterable which my_function can use with: def my_function(value): value = ensure_iterable(value)
5
3
72,187,949
2022-5-10
https://stackoverflow.com/questions/72187949/extracting-start-and-end-indices-of-a-token-using-spacy
I am looking at lots of sentences and looking to extract the start and end indices of a word in a given sentence. For example, the input is as follows: "This is a sentence written in English by a native English speaker." And What I want is the span of the word 'English' which in this case is : (30,37) and (50, 57). Note: I was pointed to this answer (Get position of word in sentence with spacy) But this answer doesn't solve my problem. It can help me in getting the start character of the token but not the end index. All help appreciated
You can do this with re in pure Python:

s = "This is a sentence written in english by a native English speaker."

import re
[(i.start(), i.end()) for i in re.finditer('ENGLISH', s.upper())]

# output
[(30, 37), (50, 57)]

You can do it in spaCy as well:

import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence written in english by a native English speaker.")
for ent in doc.ents:
    if ent.text.upper() == 'ENGLISH':
        print(ent.start_char, ent.end_char)
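If the word is not reliably picked up as a named entity, a token-based sketch (my own addition; it relies on spaCy's standard token.idx character offset, and the case-insensitive match rule is just an assumption for this example) avoids depending on the NER model:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence written in english by a native English speaker.")

spans = [
    (token.idx, token.idx + len(token.text))  # start/end character offsets
    for token in doc
    if token.text.lower() == "english"        # case-insensitive token match
]
print(spans)  # [(30, 37), (50, 57)]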
5
1
72,176,662
2022-5-9
https://stackoverflow.com/questions/72176662/problem-authorizing-client-with-django-oauth-toolkit-authorization-code-flow
I have been following the django-oauth-toolkit documentation. In the Authorization Code step, I have registered an application as shown in the screenshot. But then the next step is given like this:

To start the Authorization code flow go to this URL which is the same as shown below:
http://127.0.0.1:8000/o/authorize/?response_type=code&client_id=vW1RcAl7Mb0d5gyHNQIAcH110lWoOW2BmWJIero8&redirect_uri=http://127.0.0.1:8000/noexist/callback

But when I replace my client id and ping that URL, it redirects me to the following URL:
http://localhost:8000/noexist/callback?error=invalid_request&error_description=Code+challenge+required.

I have tried to Google that error, but it's such a common keyword that I am unable to find anything related to my issue. I am probably missing something obvious; I am new to Python and Django.

Note: In the documentation screenshot there is one form field missing that is present in my local environment: the algorithm field.
After debugging for many hours, I came to this: please include it in your settings.py file and it works. Maybe it is a bug, since we defined our app as confidential with the authorization_code grant type, but oauth_provider is treating it as public and trying to validate PKCE.

OAUTH2_PROVIDER = {
    "PKCE_REQUIRED": False
}
7
13
72,180,701
2022-5-10
https://stackoverflow.com/questions/72180701/one-of-many-recursive-calls-of-a-function-found-the-correct-result-but-it-cant
Recently, I was experimenting with writing a function to find a primitive value anywhere within an arbitrarily deeply nested sequence, and return the path taken to get there (as a list of indices inside each successive nested sequence, in order). I encountered a very unexpected obstacle: the function was finding the result, but not returning it! Instead of the correct output, the function kept returning the output which should only have been produced when attempting to find an item not in the sequence. By placing print statements at various points in the function, I found that the problem was that after the recursive call which actually found the item returned, others which did not find the item were also returning, and evidently later in time than the one that found it. This meant that the final result was getting reset to the 'fail' value from the 'success' value unless the 'success' value was the last thing to be encountered. I tried fixing this by putting an extra conditional inside the function to return early in the success case, trying to preempt the additional, unnecessary recursive calls which were causing the incorrect final result. Now, this is where I ran into the root cause of the problem: There is no way of knowing which recursive call (if any) will find the item beforehand, and once one of them does find it, it has no way of 'communicating' with the others! The only way I could come up with of avoiding this deeper issue was to completely refactor the function to 'set' a variable outside itself with the 'success' output if and only if the 'success' condition is encountered. The external, global variable starts out set to the 'failed to find item in sequence' value, and is not reset except in the 'success' case. All the other recursive calls just return without doing anything. This seems very ugly and inefficient, but it does work. FIRST ATTEMPT # ITERATIVE/RECURSIVE SEQUENCE TRAVERSER (First Attempt) # Works on 'result1' but not on 'result2' # Searches for 'item' in sequence (list or tuple) S, and returns a tuple # containing the indices (in order of increasing depth) at which the item # can be found, plus the depth in S at which 'item' was found. # If the item is *not* found, returns a tuple containing an empty list and -1 def traverse(S, item, indices=[], atDepth=0): # If the sequence is empty, return the 'item not found' result if not S: return ([], -1) else: # For each element in the sequence (breadth-first) for i in range(len(S)): # Success condition base case: found the item! 
if S[i] == item: return (indices + [i], atDepth) # Recursive step (depth-first): enter nested sequence # and repeat procedure from beginning elif type(S[i]) in (list, tuple): return traverse(S[i], item, indices + [i], atDepth + 1) # Fail condition base case: searched the entire length # and depth of the sequence and didn't find the item, so # return the 'item not found' result else: print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".") return ([], -1) L = [0, 1, 2, [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])], [[8, ()]], (([9], ), 10)] result1 = traverse(L, 7) result2 = traverse(L, 9) print("-------------------------------------------") print(result1) print("-------------------------------------------") print(result2) SECOND ATTEMPT # ITERATIVE/RECURSIVE SEQUENCE TRAVERSER (Second Attempt) # Does not work on either test case # Searches for 'item' in sequence (list or tuple) S, and returns a tuple # containing the indices (in order of increasing depth) at which the item # can be found, plus the depth in S at which 'item' was found. # If the item is *not* found, returns a tuple containing an empty list and -1 def traverse(S, item, indices=[], atDepth=0, returnValue=None): # If the sequence is empty, return the 'item not found' result if not S: print("Sequence S is empty.") return ([], -1) # --- ATTEMPTED FIX: # If the item is found before the end of S is reached, # do not perform additional searches. In addition to being # inefficient, doing extra steps would cause incorrect false # negatives for the item being in S. # --- DOES NOT WORK: the underlying issue is that the multiple recursive # calls generated at the same time can't communicate with each other, # so the others don't 'know' if one of them already found the item. elif returnValue: print("Inside 'elif' statement!") return returnValue else: # For each element in the sequence (breadth-first) for i in range(len(S)): # Success condition base case: found the item! if S[i] == item: # Return the depth and index at that depth of the item print("--- Found item " + str(item) + " at index path " + str(indices) + " in current sequence") returnValue2 = (indices + [i], atDepth) print("--- Item " + str(item) + " is at index path " + str(returnValue2) + " in S, SHOULD RETURN") #return returnValue2 # THIS DIDN'T FIX THE PROBLEM #break # NEITHER DID THIS # Recursive step (depth-first): enter nested sequence # and repeat procedure from beginning elif type(S[i]) in (list, tuple): # CANNOT USE 'return' BEFORE RECURSIVE CALL, as it would cause any items # in the outer sequence which come after the first occurrence of a nested # sequence to be missed (i.e. the item could exist in S, but if it is # after the first nested sequence, it won't be found) traverse(S[i], item, indices + [i], atDepth + 1, returnValue) # CAN'T USE 'returnValue2' HERE (out of scope); # so parameter can't be updated in 'if' condition # Fail condition base case: searched the entire length # and depth of the sequence and didn't find the item, so # return the 'item not found' result else: print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".") return ([], -1) L = [0, 1, 2, [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])], [[8, ()]], (([9], ), 10)] result1 = traverse(L, 7) result2 = traverse(L, 9) print("-------------------------------------------") print(result1) print("-------------------------------------------") print(result2) THIRD AND FINAL ATTEMPT -- Working, but not ideal! 
# ITERATIVE/RECURSIVE SEQUENCE TRAVERSER (Third Attempt) # This 'kludge' is ** HIDEOUSLY UGLY **, but it works! # Searches for 'item' in sequence (list or tuple) S, and generates a tuple # containing the indices (in order of increasing depth) at which the item # can be found, plus the depth in S at which 'item' was found. # If the item is *not* found, returns nothing (implicitly None) # The results of calling the function are obtained via external global variables. # This 3rd version of 'traverse' is thus actually a void function, # and relies on altering the global state instead of producing an output. # ----- WORKAROUND: If the result is found, have the recursive call that found it # send it to global scope and use this global variable as the final result of calling # the 'traverse' function. # Initialize the global variables to the "didn't find the item" result, # so the result will still be correct if the item actually isn't in the sequence. globalVars = {'result1': ([], -1), 'result2': ([], -1)} def traverse(S, item, send_output_to_var, indices=[], atDepth=0): # If the sequence is empty, return *without* doing anything to the global variable. # It is already initialized to the "didn't find item" result. if not S: return else: # For each element in the sequence (breadth-first) for i in range(len(S)): # Success condition base case: found the item! if S[i] == item: # Set the global variable to the index path of 'item' in 'S'. globalVars[send_output_to_var] = (indices + [i], atDepth) # No need to keep on doing unnecessary work! return # Recursive step (depth-first): enter nested sequence # and repeat procedure from beginning elif type(S[i]) in (list, tuple): # Don't use 'return' before the recursive call, or it will miss items # in the outer sequence after a nested sequence is encountered. traverse(S[i], item, send_output_to_var, indices + [i], atDepth + 1) # Fail condition base case: searched the entire length # and depth of the sequence and didn't find the item. else: # Return *without* setting the global variable, as it is # already initialized to the "didn't find item" result. return L = [0, 1, 2, [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])], [[8, ()]], (([9], ), 10)] traverse(L, 7, 'result1') traverse(L, 9, 'result2') print("-------------------------------------------") print(globalVars['result1']) print("-------------------------------------------") print(globalVars['result2']) I was wondering if I'm missing something and there is in fact a way of making this work without the use of external variables. The best possible solution would be somehow 'shutting down' all the other recursive calls as soon as one of them returns the success result, but I don't believe this is possible (I'd love to be wrong about this!). Or maybe some kind of 'priority queue' which delays the return of the 'success' case recursive call (if it exists) until after all the 'fail' case recursive calls have returned? I looked at this similar question: Recursively locate nested dictionary containing a target key and value but although the accepted answer here https://stackoverflow.com/a/59538362/18248018 by ggorlen solved OP's problem and even mentions what seems to be this exact issue ("matched result isn't being passed up the call stack correctly"), it is tailored towards performing a specific task, and doesn't offer the insight I'm looking for into the more general case.
Your first attempt is almost perfect, the only mistake is that you return the result of searching through the first list/tuple at the current depth, regardless of whether the item was found or not. Instead, you need to check for a positive result, and only return if it is one. That way you keep iterating through the current depth until you either find the item or it is not found at all. So you need to change: return traverse(S[i], item, indices + [i], atDepth + 1) to something like: t = traverse(S[i], item, indices + [i], atDepth + 1) if t != ([], -1): return t Full code: def traverse(S, item, indices=[], atDepth=0): # If the sequence is empty, return the 'item not found' result if not S: return ([], -1) else: # For each element in the sequence (breadth-first) for i in range(len(S)): # Success condition base case: found the item! if S[i] == item: return (indices + [i], atDepth) # Recursive step (depth-first): enter nested sequence # and repeat procedure from beginning elif type(S[i]) in (list, tuple): t = traverse(S[i], item, indices + [i], atDepth + 1) if t != ([], -1): return t # Fail condition base case: searched the entire length # and depth of the sequence and didn't find the item, so # return the 'item not found' result else: print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".") return ([], -1) Output for your two test cases: >>> traverse(L, 7) ([3, 1, 2, 4], 3) >>> traverse(L, 9) We looked everywhere but didn't find 9 in [6, 6.25, 6.5, 6.75, 7]. We looked everywhere but didn't find 9 in (4, 5, [6, 6.25, 6.5, 6.75, 7]). We looked everywhere but didn't find 9 in [3, (4, 5, [6, 6.25, 6.5, 6.75, 7])]. We looked everywhere but didn't find 9 in [8, ()]. We looked everywhere but didn't find 9 in [[8, ()]]. ([5, 0, 0, 0], 3) Note as pointed out by @FreddyMcloughlan, atDepth is simply the length of the returned list minus 1. So you can remove that parameter from the function call and just use: def traverse(S, item, indices=[]): # If the sequence is empty, return the 'item not found' result if not S: return ([], -1) else: # For each element in the sequence (breadth-first) for i in range(len(S)): # Success condition base case: found the item! if S[i] == item: return (indices + [i], len(indices)) # Recursive step (depth-first): enter nested sequence # and repeat procedure from beginning elif type(S[i]) in (list, tuple): t = traverse(S[i], item, indices + [i]) if t != ([], -1): return t # Fail condition base case: searched the entire length # and depth of the sequence and didn't find the item, so # return the 'item not found' result else: print("We looked everywhere but didn't find " + str(item) + " in " + str(S) + ".") return ([], -1)
4
4
72,179,103
2022-5-9
https://stackoverflow.com/questions/72179103/xarray-select-the-data-at-specific-x-and-y-coordinates
When selecting data with xarray at x,y locations, I get data for every pair of x,y. I would like to have a 1-D array, not a 2-D array, from the selection. Is there an efficient way to do this? (For now I am doing it with a for-loop...)

x = [x1,x2,x3,x4]
y = [y1,y2,y3,y4]
DS = 2-D array
subset = Dataset.sel(longitude=x, latitude=y, method='nearest')

To rephrase, I would like to have the dataset at [x1,y1],[x2,y2],[x3,y3],[x4,y4], not at other locations, e.g. [x1,y2].
A list of points can be selected along multiple indices if the indexers are DataArrays with a common dimension. This will result in the array being reindexed along the indexers' common dimension. Straight from the docs on More Advanced Indexing: In [78]: da = xr.DataArray(np.arange(56).reshape((7, 8)), dims=['x', 'y']) In [79]: da Out[79]: <xarray.DataArray (x: 7, y: 8)> array([[ 0, 1, 2, 3, 4, 5, 6, 7], [ 8, 9, 10, 11, 12, 13, 14, 15], [16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31], [32, 33, 34, 35, 36, 37, 38, 39], [40, 41, 42, 43, 44, 45, 46, 47], [48, 49, 50, 51, 52, 53, 54, 55]]) Dimensions without coordinates: x, y In [80]: da.isel(x=xr.DataArray([0, 1, 6], dims='z'), ....: y=xr.DataArray([0, 1, 0], dims='z')) ....: Out[80]: <xarray.DataArray (z: 3)> array([ 0, 9, 48]) Dimensions without coordinates: z The indexing array can also be easily pulled out of a pandas DataFrame, with something like da.sel(longitude=df.longitude.to_xarray(), latitude=df.latitude.to_xarray()), which will result in the DataArray being reindexed by the DataFrame's index. So in your case, rather than selecting with the lists or arrays x, y, turn them into DataArrays with a common dim - let's call it location: x = xr.DataArray([x1,x2,x3,x4], dims=['location']) y = xr.DataArray([y1,y2,y3,y4], dims=['location']) Now your selection will work as you hope: ds.sel(longitude=x, latitude=y, method='nearest')
6
12
72,175,135
2022-5-9
https://stackoverflow.com/questions/72175135/mypy-how-to-mark-line-as-unreachable
I've got a function of the form: def get_new_file(prefix: str) -> pathlib.Path: for i in itertools.count(0): p = pathlib.Path(f'{prefix}_{i}') if not p.is_file(): return p # This line is unreachable. mypy understandably complains that the function is missing a return statement. Is there a way to mark a line so as to inform mypy that the line should be considered unreachable?
This was raised as an issue. The recommended solution is assert False. def get_new_file(prefix: str) -> pathlib.Path: for i in itertools.count(0): p = pathlib.Path(f'{prefix}_{i}') if not p.is_file(): return p assert False
9
9
72,148,941
2022-5-7
https://stackoverflow.com/questions/72148941/how-to-tag-and-store-files-by-metadata-in-python
I want to build a manual file tagging system like this. Given that a folder contains these files:

data/
    budget.xls
    world_building_budget.txt
a.txt
b.exe
hello_world.dat
world_builder.spec

I want to write a tagging system where executing

py -3 tag_tool.py -filter=world -tag="World-Building Tool"

will output:

These files were tagged with "World-Building Tool":
data/world_building_budget.txt
hello_world.dat
world_builder.spec

Another example. If I execute:

py -3 tag_tool.py -filter="\.txt" -add_tag="Human Readable"

it will output:

These files were tagged with "Human Readable":
data/world_building_budget.txt
a.txt

I am not asking "Do my homework for me". I want to know what approach I can take to build something like this. What data structure should I use? How should I tag contents in a directory?
First, I am not clear if this is actually homework, but my first recommendation is always to see if it's already done (and it seems to be): https://pypi.org/project/pytaggit/ If I were to ignore that and build it myself, I would consider what a tagging systems structure is. Long story (skip ahead if not interested): consider a simple file system... It has exactly one path to every file. You can do a string search by file name or even properties, but the organization is such that a file can only exist in one place. This is much like a physical file system. In virtual file systems, we also have file links. There are two types: soft (short cuts in Windows) and hard. Soft links make a file appear as though they were in multiple locations. This is like being able to file Soccer under "S" and creating another file called Football in "F" that just says "see Soccer". By contrast, hard links actually make it so that the file effectively exists in multiple locations. This would be like being able to pull the exact same file "Soccer" in both "F" and "S". If someone makes a change to one, the change is made to both. This is still a very limited organization restricted to file location. If you wanted to be nimble and apply arbitrary organizations, hard links become heavy to maintain. Tagging is another way to accomplish this without too much overhead. ...... Past the skipped part ...... There is more than one way to accomplish this, but here is a generic look at what is needed. Tags need to be able to have a many-to-many relationship between files and tags. I.e. you should be able to look at a file and see all tags associated AND you should be able to look at a tag and see all files associated with it. If you want to store the data once, you will have to choose which way to optimize as you are choosing to organize your data only one way. Therefore, forward lookup will be natural and reverse lookup will require processing. If you want to maintain two data sets (or indexes), you can store both forward and reverse lookups. If you know that your data won't grow past a certain size and/or the usage will typically only require one direction, then one index should be fine. Otherwise, I would choose two. The trade-off is the overhead of keeping them in-sync. If you want to optimize for tags(filename), then you would probably use a dict with something like filenameTags = {'myFileName': ['tag1', 'tag2', ...]} Getting filenames from tags with this structure would require a process of searching all of the embedded lists and returning the key associated, if there is a match. You can reverse this structure (filenames(tag)) if you want to optimize the other way. You can also create both file structures, but then you have the overhead of keeping both in sync. Lastly to keep this persistent, save to a file or DB. Redis supports this nicely.
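To make the two-index option concrete, here is a small sketch (the function names and the tags.json filename are made up for illustration; this is not an existing library) that keeps a forward index (file -> tags) and a reverse index (tag -> files) in sync, and persists the result as JSON:

import json
from collections import defaultdict

file_tags = defaultdict(set)  # forward index: filename -> set of tags
tag_files = defaultdict(set)  # reverse index: tag -> set of filenames

def add_tag(filename, tag):
    """Tag a file, updating both indexes so lookups stay cheap in either direction."""
    file_tags[filename].add(tag)
    tag_files[tag].add(filename)

def files_with_tag(tag):
    """Reverse lookup is cheap because the second index is maintained."""
    return sorted(tag_files[tag])

add_tag('data/world_building_budget.txt', 'World-Building Tool')
add_tag('world_builder.spec', 'World-Building Tool')
print(files_with_tag('World-Building Tool'))

# Persist the forward index (sets converted to sorted lists so they are JSON-serializable).
with open('tags.json', 'w') as f:
    json.dump({name: sorted(tags) for name, tags in file_tags.items()}, f, indent=2)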
4
2
72,161,257
2022-5-8
https://stackoverflow.com/questions/72161257/exclude-default-fields-from-python-dataclass-repr
Summary

I have a dataclass with 10+ fields. print()ing them buries interesting context in a wall of defaults - let's make them friendlier by not needlessly repeating those.

Dataclasses in Python

Python's @dataclasses.dataclass() (PEP 557) provides automatic printable representations (__repr__()). Assume this example, based on python.org's:

from dataclasses import dataclass

@dataclass
class InventoryItem:
    name: str
    unit_price: float = 1.00
    quantity_on_hand: int = 0

The decorator, through @dataclass(repr=True) (the default), will print() a nice output:

InventoryItem(name='Apple', unit_price='1.00', quantity_on_hand=0)

What I want: Skip printing the defaults repr

It prints all the fields, including implied defaults you wouldn't want to show.

print(InventoryItem("Apple"))
# Outputs: InventoryItem(name='Apple', unit_price='1.00', quantity_on_hand=0)
# I want: InventoryItem(name='Apple')

print(InventoryItem("Apple", unit_price="1.05"))
# Outputs: InventoryItem(name='Apple', unit_price='1.05', quantity_on_hand=0)
# I want: InventoryItem(name='Apple', unit_price='1.05')

print(InventoryItem("Apple", quantity_on_hand=3))
# Outputs: InventoryItem(name='Apple', unit_price=1.00, quantity_on_hand=3)
# I want: InventoryItem(name='Apple', quantity_on_hand=3)

print(InventoryItem("Apple", unit_price='2.10', quantity_on_hand=3))
# Output is fine (everything's custom):
# InventoryItem(name='Apple', unit_price=2.10, quantity_on_hand=3)

Discussion

Internally, here's the machinery of the dataclass repr generator as of Python 3.10.4: cls.__repr__ = _repr_fn(flds, globals) -> _recursive_repr(fn)

It may be that the automatic repr should be switched off with @dataclass(repr=False) and a custom def __repr__(self): added. If so, what would that look like? We don't want to include the optional defaults.

Context

To repeat, in practice, my dataclass has 10+ fields. I'm print()ing instances via running the code and the REPL, and via @pytest.mark.parametrize when running pytest with -vvv. A big dataclass' non-defaults (sometimes the inputs) are impossible to see as they're buried in the default fields and, worse, each one is disproportionately and distractingly huge, obscuring other valuable stuff being printed.

Related questions

As of today there aren't many dataclass questions yet (this may change):

Extend dataclass' __repr__ programmatically: This is trying to limit the repr. It should show fewer fields unless they're explicitly overridden.
Python dataclass generate hash and exclude unsafe fields: This is for hashing and not related to defaults.
You could do it like this: import dataclasses from dataclasses import dataclass from operator import attrgetter @dataclass(repr=False) class InventoryItem: name: str unit_price: float = 1.00 quantity_on_hand: int = 0 def __repr__(self): nodef_f_vals = ( (f.name, attrgetter(f.name)(self)) for f in dataclasses.fields(self) if attrgetter(f.name)(self) != f.default ) nodef_f_repr = ", ".join(f"{name}={value}" for name, value in nodef_f_vals) return f"{self.__class__.__name__}({nodef_f_repr})" # Prints: InventoryItem(name=Apple) print(InventoryItem("Apple")) # Prints: InventoryItem(name=Apple,unit_price=1.05) print(InventoryItem("Apple", unit_price="1.05")) # Prints: InventoryItem(name=Apple,unit_price=2.10,quantity_on_hand=3) print(InventoryItem("Apple", unit_price='2.10', quantity_on_hand=3))
8
8
72,140,681
2022-5-6
https://stackoverflow.com/questions/72140681/matplotlib-set-up-font-computer-modern-and-bold
I would like to have a plot where the fonts are in "computer modern" (i.e. LaTeX style) but with x-ticks and y-ticks in bold. Due to the recent upgrade of matplotlib, my previous procedure does not work anymore. This is my old procedure:

plt.rc('font', family='serif', size=24)
matplotlib.rc('text', usetex=True)
matplotlib.rc('legend', fontsize=24)
matplotlib.rcParams['text.latex.preamble'] = [r'\boldmath']

This is the output message:

test_font.py:26: MatplotlibDeprecationWarning: Support for setting an rcParam that expects a str value to a non-str value is deprecated since 3.5 and support will be removed two minor releases later. matplotlib.rcParams['text.latex.preamble'] = [r'\boldmath']

I have decided that a possible solution could be to use "computer modern" as the font. This is my example:

import matplotlib
import matplotlib.pyplot as plt
import numpy as np

font = {'family' : 'serif',
        'weight' : 'bold',
        'size'   : 12
        }
matplotlib.rc('font', **font)

# Data for plotting
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)

fig, ax = plt.subplots(1, figsize=(9,6))
ax.plot(t, s)
ax.set(xlabel='time (s) $a_1$', ylabel='voltage (mV)',
       title='About as simple as it gets, folks')
ax.grid()
fig.savefig("test.png")
plt.show()

This is the result:

I am not able, however, to set the font style in the font dict. I have tried to set the font family as "cmr10". This is the code:

font = {'family' : 'serif',
        'weight' : 'bold',
        'size'   : 12,
        'serif'  : 'cmr10'
        }
matplotlib.rc('font', **font)

It seems that "cmr10" makes the bold option disappear. Have I made some errors? Do you have any other possible solutions in mind? Thanks
You still can use your old procedure, but with a slight change. The MatplotlibDeprecationWarning that you get states that the parameter expects a str value but it's getting something else. In this case what is happening is that you are passing it as a list. Removing the brackets will do the trick: import matplotlib import matplotlib.pyplot as plt import numpy as np plt.rc('font', family='serif',size=24) matplotlib.rc('text', usetex=True) matplotlib.rc('legend', fontsize=24) matplotlib.rcParams['text.latex.preamble'] = r'\boldmath' # Data for plotting t = np.arange(0.0, 2.0, 0.01) s = 1 + np.sin(2 * np.pi * t) fig, ax = plt.subplots(1,figsize=(9,6)) ax.plot(t, s) ax.set(xlabel='time (s) $a_1$', ylabel='voltage (mV)', title='About as simple as it gets, folks') ax.grid() fig.savefig("test.png") plt.show() The code above produces this plot without any errors:
4
3
72,166,020
2022-5-9
https://stackoverflow.com/questions/72166020/how-to-install-multiple-packages-in-one-line-using-conda
I need to install the multiple packages below using conda. I am not sure what conda-forge is; some packages use conda-forge and some don't. Is it possible to install them in one line without installing them one by one? Thanks

conda install -c conda-forge dash-daq
conda install -c conda-forge dash-core-components
conda install -c conda-forge dash-html-components
conda install -c conda-forge dash-bootstrap-components
conda install -c conda-forge dash-table
conda install -c plotly jupyter-dash
Why some packages have to be installed through conda-forge: Conda's official repository only features a few verified packages. A vast portion of Python packages that are otherwise available through pip are installed through the community-led channel called conda-forge. You can visit their site to learn more about it.

How to install multiple packages in a single line? The recommended way to install multiple packages is to create a .yml file and feed it to conda. You can specify the version number for each package as well. The following example file can be fed to conda through conda install --file:

appdirs=1.4.3
asn1crypto=0.24.0
...
zope=1.0
zope.interface=4.5.0

To specify a different channel for each package in this environment.yml file, you can use the :: syntax:

dependencies:
  - python=3.6
  - bagit
  - conda-forge::beautifulsoup4
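Applied to the packages from the question, such a file might look like the sketch below (the environment name is arbitrary); it could then be built with conda env create -f environment.yml:

name: dash-env            # arbitrary environment name
channels:
  - conda-forge
  - plotly
dependencies:
  - conda-forge::dash-daq
  - conda-forge::dash-core-components
  - conda-forge::dash-html-components
  - conda-forge::dash-bootstrap-components
  - conda-forge::dash-table
  - plotly::jupyter-dash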
16
17