Columns: question_id (int64, 59.5M to 79.4M), creation_date (string, 8 to 10 chars), link (string, 60 to 163 chars), question (string, 53 to 28.9k chars), accepted_answer (string, 26 to 29.3k chars), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482)
70,866,415
2022-1-26
https://stackoverflow.com/questions/70866415/how-to-install-python-specific-version-on-docker
I need to install python 3.8.10 in a container running ubuntu 16.04. 16.04 has no support anymore, so I need a way to install it there manually.
This follows from here. Add the following to your Dockerfile, changing the Python version as needed. Once the container is up, python3.8 will be available at /usr/local/bin/python3.8:

# compile python from source - avoid unsupported library problems
RUN apt update -y && sudo apt upgrade -y && \
    apt-get install -y wget build-essential checkinstall libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev && \
    cd /usr/src && \
    sudo wget https://www.python.org/ftp/python/3.8.10/Python-3.8.10.tgz && \
    sudo tar xzf Python-3.8.10.tgz && \
    cd Python-3.8.10 && \
    sudo ./configure --enable-optimizations && \
    sudo make altinstall

Please note that the following (the standard, and quicker, way of installing) does not work for old Ubuntu versions, because they are past end of support:

RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt install -y python3.8

See also this to install into /usr/bin.
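A minimal sketch of how that block might sit in a complete Dockerfile, assuming an ubuntu:16.04 base image as in the question (the verification and pip lines are an illustrative addition, not part of the original recipe):

FROM ubuntu:16.04

# ... the compile-from-source RUN block shown above goes here ...

# Sanity-check the interpreter and make sure pip is available for it
RUN /usr/local/bin/python3.8 --version && \
    /usr/local/bin/python3.8 -m ensurepip --upgrade && \
    /usr/local/bin/python3.8 -m pip install --upgrade pip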
16
23
70,874,423
2022-1-27
https://stackoverflow.com/questions/70874423/fastapi-importerror-attempted-relative-import-with-no-known-parent-package
I am new to FastAPI and I've been having this problem with importing my other files. I get the error:

from . import schemas
ImportError: attempted relative import with no known parent package

For context, the file I am importing from is a folder called Blog. I saw certain StackOverflow answers saying that instead of from . import schemas I should write from Blog import schemas. And even though their solution is right and I don't get any errors while running the Python program, when I try running FastAPI using uvicorn, I get this error and my localhost page doesn't load:

File "./main.py", line 2, in <module>
    from Blog import schemas
ModuleNotFoundError: No module named 'Blog'

The file structure looks like this: The code to the main file looks like this:

from fastapi import FastAPI
from Blog import schemas, models
from database import engine

app = FastAPI()
models.Base.metadata.create_all(engine)

@app.post('/blog')
def create(request: schemas.Blog):
    return request

schemas.py

from pydantic import BaseModel

class Blog(BaseModel):
    title: str
    body: str

database.py

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

SQLALCHAMY_DATABASE_URL = 'sqlite:///./blog.db'
engine = create_engine(SQLALCHAMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
Base = declarative_base()

models.py

from sqlalchemy import *
from database import Base

class Blog(Base):
    __tablename__ = 'blogs'
    id = Column(Integer, primary_key=True, index=True)
    title = Column(String)
    body = Column(String)

The SwaggerUI is not loading either. Any help would be greatly appreciated! :)
Since your schemas.py and models.py files are in the same directory as your main.py file, you should import those two modules as follows, instead of using from Blog import schemas, models:

import schemas, models

For more details and examples, please have a look at this answer, as well as this answer and this answer.
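A minimal sketch of how main.py could look after that change, with all four files sitting in the same directory (the uvicorn command is just one example of how the app might be started):

# main.py
from fastapi import FastAPI

import models
import schemas
from database import engine

app = FastAPI()
models.Base.metadata.create_all(engine)

@app.post('/blog')
def create(request: schemas.Blog):
    return request

# started from that directory, e.g.: uvicorn main:app --reload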
10
9
70,884,910
2022-1-27
https://stackoverflow.com/questions/70884910/converting-dates-into-a-specific-format-in-side-a-csv
I am new to Python and I am trying to manipulate some data, but it keeps showing me this error message:

UserWarning: Parsing '13/01/2021' in DD/MM/YYYY format. Provide format or specify infer_datetime_format=True for consistent parsing.
  cache_array = _maybe_cache(arg, format, cache, convert_listlike)

This is my code:

import pandas as pd
import matplotlib.pyplot as plt

dataLake = pd.read_csv("datalake - Data lake.csv", parse_dates=["Day"])
dataLake = dataLake.rename(columns={"Day":"day"})
dataLake = dataLake.rename(columns={"Agent":"agent"})
dataLake["day"] = pd.to_datetime(dataLake.day)
print(dataLake.head())
In your case you need to set the dayfirst param to True, like this:

pd.to_datetime(dataLake.day, dayfirst=True)

or you can set a format explicitly (though you don't need to in your case), like this:

pd.to_datetime(dataLake.day, format="%d/%m/%Y")
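A small self-contained sketch of both options (the column name day mirrors the question; the sample dates are made up):

import pandas as pd

df = pd.DataFrame({"day": ["13/01/2021", "05/02/2021"]})

# Option 1: tell pandas the day comes first
parsed = pd.to_datetime(df["day"], dayfirst=True)

# Option 2: give the exact format (four-digit year, so %Y)
parsed = pd.to_datetime(df["day"], format="%d/%m/%Y")
print(parsed)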
7
17
70,862,894
2022-1-26
https://stackoverflow.com/questions/70862894/vscode-pylance-doesnt-work-via-ssh-connection
There is a problem: Pylance (IntelliSense) does not work on the remote server. At the same time it works locally. Pylance itself is installed both locally and on the server. Imports are just white and only "Loading..." pops up when I hover over it. "Go to definition" also doesn't work. Have a such properties: Python: 3.10.2; Pylance: 2022.1.3; Python extension: v2021.12.1559732655; Remote - SSH: v0.70.0 VSCode: 1.63.2; Local OS: Windows 10 Pro; Remote OS: Ubuntu 20.04.3 LTS Virtualenv as env; I've already tried a bunch of options: Installed other versions of Pylance; Older versions of the Python extension itself; Updated Python to the latest version from 3.8.10 to 3.10.2; Changed the language server to Jedi and reverted to Pylance; Reinstalled extensions, VSCode; Recreated the environment with new python. Added to the remote settings.json this settings: "python.insidersChannel": "daily", "python.languageServer": "Pylance". "Python: Show output" gives this output: Experiment 'pythonaacf' is active Experiment 'pythonTensorboardExperiment' is active Experiment 'pythonSurveyNotification' is active Experiment 'PythonPyTorchProfiler' is active Experiment 'pythonDeprecatePythonPath' is active > conda info --json > ~/jupyter_env/bin/python ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > ~/.anaconda_backup/bin/conda info --json Python interpreter path: ./jupyter_env/bin/python > conda --version > /bin/python ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > /bin/python2 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > /bin/python3 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > /bin/python3.10 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > /usr/bin/python2 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > /usr/bin/python3 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py > ". /home/db/jupyter_env/bin/activate && echo 'e8b39361-0157-4923-80e1-22d70d46dee6' && python /home/db/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/printEnvVariables.py" Starting Jedi language server. > ~/jupyter_env/bin/python -m pylint --msg-template='{line},{column},{category},{symbol}:{msg} --reports=n --output-format=text ~/data/qualityControl/core/data_verification/dataQualityControl.py cwd: ~/ ##########Linting Output - pylint########## ************* Module core.data_verification.dataQualityControl 18,53,error,syntax-error:non-default argument follows default argument (<unknown>, line 18)
Basically, the problem was that if a large workspace is selected in VSCode, it will try to index it all, and until it finishes, the highlighting won't turn on. In my case, I had several AWS buckets mounted and since there was about 100TB of data, the file indexing simply never finished. If I select a specific project folder, however, the problem disappears. So in case of such a problem, try to specify the working directory. Good luck!
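If opening a narrower folder is not practical, another option is to keep Pylance from analysing the mounted bucket paths at all. A rough sketch of a remote .vscode/settings.json, where python.analysis.exclude is a Pylance setting and the two mount paths are placeholders for your own:

{
    "python.analysis.exclude": [
        "**/mounted_bucket_a/**",
        "**/mounted_bucket_b/**"
    ]
}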
6
3
70,911,608
2022-1-30
https://stackoverflow.com/questions/70911608/plot-3d-cube-and-draw-line-on-3d-in-python
I know that for those who know Python well, this is a piece-of-cake question. I have an Excel file and it looks like this:

1 7 5 8 2 4 6 3
1 7 4 6 8 2 5 3
6 1 5 2 8 3 7 4

My purpose is to draw a cube in Python and draw a line according to the order of these numbers. Note: there is no number greater than 8 in the arrays. I can explain better with pictures. First step: Second step: Last step: I need to print the final version of the 3D cube for each row in the Excel file. My way to a solution:

import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection
import matplotlib.pyplot as plt

df = pd.read_csv("uniquesolutions.csv", header=None, sep='\t')
myArray = df.values
points = solutionsarray

def connectpoints(x, y, p1, p2):
    x1, x2 = x[p1], x[p2]
    y1, y2 = y[p1], y[p2]
    plt.plot([x1, x2], [y1, y2], 'k-')

# cube[0][0][0] = 1
# cube[0][0][1] = 2
# cube[0][1][0] = 3
# cube[0][1][1] = 4
# cube[1][0][0] = 5
# cube[1][0][1] = 6
# cube[1][1][0] = 7
# cube[1][1][1] = 8

for i in range():
    connectpoints(cube[i][i][i], cube[], points[i], points[i+1])  # Confused!

ax = fig.add_subplot(111, projection='3d')

# plot sides
ax.add_collection3d(Poly3DCollection(verts, facecolors='cyan', linewidths=1, edgecolors='r', alpha=.25))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()

In the question here, they managed to draw something with the points given inside the cube. I tried to use this 2D connection function. Last question: can I print the result of the red lines in 3D? How can I do this in Python?
First, it looks like you are using pandas with pd.read_csv without importing it. Since you are not reading the headers and just want a list of values, it is probably sufficient to just use the numpy read function instead. Since I don't have access to your csv, I will define the vertex lists as variables below.

vertices = np.zeros([3,8], dtype=int)
vertices[0,:] = [1, 7, 5, 8, 2, 4, 6, 3]
vertices[1,:] = [1, 7, 4, 6, 8, 2, 5, 3]
vertices[2,:] = [6, 1, 5, 2, 8, 3, 7, 4]
vertices = vertices - 1  # (adjust the vertex numbers by one since python starts with zero indexing)

Here I used a 2d numpy array to define the vertices. The first dimension, with length 3, is the number of vertex lists, and the second dimension, with length 8, is each vertex list. I subtract 1 from the vertices because we will use this list to index another array and python indexing starts at 0, not 1.

Then, define the cube coordinates.

# Initialize an array with dimensions 8 by 3
# 8 for each vertex
# -> indices will be vertex1=0, v2=1, v3=2 ...
# 3 for each coordinate
# -> indices will be x=0, y=1, z=2
cube = np.zeros([8,3])

# Define x values
cube[:,0] = [0, 0, 0, 0, 1, 1, 1, 1]
# Define y values
cube[:,1] = [0, 1, 0, 1, 0, 1, 0, 1]
# Define z values
cube[:,2] = [0, 0, 1, 1, 0, 0, 1, 1]

Then initialize the plot.

# First initialize the fig variable to a figure
fig = plt.figure()
# Add a 3d axis to the figure
ax = fig.add_subplot(111, projection='3d')

Then add the red lines for vertex list 1. You can repeat this for the other vertex lists by increasing the first index of vertices.

# Plot first vertex list
ax.plot(cube[vertices[0,:],0], cube[vertices[0,:],1], cube[vertices[0,:],2], color='r')
# Plot second vertex list
ax.plot(cube[vertices[1,:],0], cube[vertices[1,:],1], cube[vertices[1,:],2], color='r')

The faces can be added by defining the edges of each face. There is a numpy array for each face. In each array there are 5 vertices, where the edges are defined by the lines between successive vertices. So the 5 vertices create 4 edges.

# Initialize a list of vertex coordinates for each face
# faces = [np.zeros([5,3])]*3
faces = []
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
# Bottom face
faces[0][:,0] = [0,0,1,1,0]
faces[0][:,1] = [0,1,1,0,0]
faces[0][:,2] = [0,0,0,0,0]
# Top face
faces[1][:,0] = [0,0,1,1,0]
faces[1][:,1] = [0,1,1,0,0]
faces[1][:,2] = [1,1,1,1,1]
# Left face
faces[2][:,0] = [0,0,0,0,0]
faces[2][:,1] = [0,1,1,0,0]
faces[2][:,2] = [0,0,1,1,0]
# Right face
faces[3][:,0] = [1,1,1,1,1]
faces[3][:,1] = [0,1,1,0,0]
faces[3][:,2] = [0,0,1,1,0]
# Front face
faces[4][:,0] = [0,1,1,0,0]
faces[4][:,1] = [0,0,0,0,0]
faces[4][:,2] = [0,0,1,1,0]
# Back face
faces[5][:,0] = [0,1,1,0,0]
faces[5][:,1] = [1,1,1,1,1]
faces[5][:,2] = [0,0,1,1,0]

ax.add_collection3d(Poly3DCollection(faces, facecolors='cyan', linewidths=1, edgecolors='k', alpha=.25))

All together it looks like this.

import numpy as np
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import matplotlib.pyplot as plt

vertices = np.zeros([3,8], dtype=int)
vertices[0,:] = [1, 7, 5, 8, 2, 4, 6, 3]
vertices[1,:] = [1, 7, 4, 6, 8, 2, 5, 3]
vertices[2,:] = [6, 1, 5, 2, 8, 3, 7, 4]
vertices = vertices - 1  # (adjust the indices by one since python starts with zero indexing)

# Define an array with dimensions 8 by 3
# 8 for each vertex
# -> indices will be vertex1=0, v2=1, v3=2 ...
# 3 for each coordinate
# -> indices will be x=0, y=1, z=2
cube = np.zeros([8,3])

# Define x values
cube[:,0] = [0, 0, 0, 0, 1, 1, 1, 1]
# Define y values
cube[:,1] = [0, 1, 0, 1, 0, 1, 0, 1]
# Define z values
cube[:,2] = [0, 0, 1, 1, 0, 0, 1, 1]

# First initialize the fig variable to a figure
fig = plt.figure()
# Add a 3d axis to the figure
ax = fig.add_subplot(111, projection='3d')

# plotting cube
# Initialize a list of vertex coordinates for each face
# faces = [np.zeros([5,3])]*3
faces = []
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
faces.append(np.zeros([5,3]))
# Bottom face
faces[0][:,0] = [0,0,1,1,0]
faces[0][:,1] = [0,1,1,0,0]
faces[0][:,2] = [0,0,0,0,0]
# Top face
faces[1][:,0] = [0,0,1,1,0]
faces[1][:,1] = [0,1,1,0,0]
faces[1][:,2] = [1,1,1,1,1]
# Left face
faces[2][:,0] = [0,0,0,0,0]
faces[2][:,1] = [0,1,1,0,0]
faces[2][:,2] = [0,0,1,1,0]
# Right face
faces[3][:,0] = [1,1,1,1,1]
faces[3][:,1] = [0,1,1,0,0]
faces[3][:,2] = [0,0,1,1,0]
# Front face
faces[4][:,0] = [0,1,1,0,0]
faces[4][:,1] = [0,0,0,0,0]
faces[4][:,2] = [0,0,1,1,0]
# Back face
faces[5][:,0] = [0,1,1,0,0]
faces[5][:,1] = [1,1,1,1,1]
faces[5][:,2] = [0,0,1,1,0]

ax.add_collection3d(Poly3DCollection(faces, facecolors='cyan', linewidths=1, edgecolors='k', alpha=.25))

# plotting lines
ax.plot(cube[vertices[0,:],0], cube[vertices[0,:],1], cube[vertices[0,:],2], color='r')
ax.plot(cube[vertices[1,:],0], cube[vertices[1,:],1], cube[vertices[1,:],2], color='r')
ax.plot(cube[vertices[2,:],0], cube[vertices[2,:],1], cube[vertices[2,:],2], color='r')

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()

Alternatively, if you want each set of lines to have its own color, replace

ax.plot(cube[vertices[0,:],0], cube[vertices[0,:],1], cube[vertices[0,:],2], color='r')
ax.plot(cube[vertices[1,:],0], cube[vertices[1,:],1], cube[vertices[1,:],2], color='r')
ax.plot(cube[vertices[2,:],0], cube[vertices[2,:],1], cube[vertices[2,:],2], color='r')

with

colors = ['r', 'g', 'b']
for i in range(3):
    ax.plot(cube[vertices[i,:],0], cube[vertices[i,:],1], cube[vertices[i,:],2], color=colors[i])
5
6
70,870,041
2022-1-26
https://stackoverflow.com/questions/70870041/cannot-import-name-mutablemapping-from-collections
I'm getting the following error: File "/home/ron/rzg2l_bsp_v1.3/poky/bitbake/lib/bb/compat.py", line 7, in <module> from collections import MutableMapping, KeysView, ValuesView, ItemsView, OrderedDict ImportError: cannot import name 'MutableMapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py) and Googling revealed that flask has to be >=2.0, so I did $ sudo pacman -Syu python-flask which installed version (2.0.2-3) which did not resolve the issue. Further searching revealed that babelfish needs to be upgraded too, so I did: $ python3.10 -m pip install babelfish -U which showed me: Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: babelfish in /home/ron/.local/lib/python3.10/site-packages (0.6.0) Collecting babelfish Using cached babelfish-0.6.0-py3-none-any.whl (93 kB) Downloading babelfish-0.5.5.tar.gz (90 kB) |████████████████████████████████| 90 kB 406 kB/s but I'm still getting the same error. Can anyone tell what else I'm missing?
You need to import from collections.abc. Here is the link to the doc:

>>> from collections import MutableMapping
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'MutableMapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
>>> from collections.abc import MutableMapping

Deprecated since version 3.3, will be removed in version 3.10: Moved Collections Abstract Base Classes to the collections.abc module. For backwards compatibility, they continue to be visible in this module through Python 3.9.

Ref. https://docs.python.org/3.9/library/collections.html
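If you need code that runs on both older and newer interpreters, a common compatibility sketch (a general pattern, not something bitbake itself ships) is to try the new location first and fall back:

try:
    from collections.abc import MutableMapping, KeysView, ValuesView, ItemsView
except ImportError:  # very old Pythons kept the ABCs directly in collections
    from collections import MutableMapping, KeysView, ValuesView, ItemsView

from collections import OrderedDict  # OrderedDict still lives in collections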
36
41
70,893,521
2022-1-28
https://stackoverflow.com/questions/70893521/how-to-sort-a-pyarrow-table
How do I sort an Arrow table in PyArrow? There does not appear to be a single function that will do this, the closest is sort_indices.
PyArrow includes Table.sort_by since 7.0.0, so there is no need to manually call the compute functions (reference):

table = pa.table([
    pa.array(["a", "a", "b", "b", "b", "c", "d", "d", "e", "c"]),
    pa.array([15, 20, 3, 4, 5, 6, 10, 1, 14, 123]),
], names=["keys", "values"])

sorted_table = table.sort_by([("values", "ascending")])
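On older PyArrow versions that lack Table.sort_by, a sketch of the manual route the question alludes to, using sort_indices plus take on the same toy table (assuming a version recent enough to ship these compute functions):

import pyarrow.compute as pc

# Compute the permutation that sorts the table, then apply it
indices = pc.sort_indices(table, sort_keys=[("values", "ascending")])
sorted_table = table.take(indices)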
5
4
70,878,545
2022-1-27
https://stackoverflow.com/questions/70878545/vs-code-how-to-launch-an-interactive-python-terminal-while-debugging
I have recently started using VS code for Python development. I am unable to figure out how to launch an interactive terminal while debugging, with the program state loaded-in . For example, consider the following code, import numpy as np A = np.array([1, 2, 3]) B = np.zeros() C = A/B \\ <--- Breakpoint here I want to set a breakpoint at C = A/B and as soon as the breakpoint hit, I want to launch an interactive terminal that holds the state of my program. So that I can play around with variables in the terminal. This is simple and straightforward in other Python IDEs like Spyder and Pycharm. How do I do this with VS Code?
There's the Python debugging console in VSCode. When your code stops on a breakpoint, you can click on the debug console button to open an interactive Python console with your current program state loaded in.
24
22
70,944,716
2022-2-1
https://stackoverflow.com/questions/70944716/pydantic-sqlalchemy-how-to-work-with-enums
What is the best way to convert a sqlalchemy model to a pydantic schema (model) if it includes an enum field? Sqlalchemy import enum from sqlalchemy import Enum, Column, String from sqlalchemy.orm import declarative_base Base = declarative_base() class StateEnum(enum.Enum): CREATED = 'CREATED' UPDATED = 'UPDATED' class Adapter(Base): __tablename__ = 'adapters' id = Column(String, primary_key=True) friendly_name = Column(String(256), nullable=False) state: StateEnum = Column(Enum(StateEnum)) Pydantic from pydantic import BaseModel from enum import Enum class StateEnumDTO(str, Enum): CREATED = 'CREATED' UPDATED = 'UPDATED' class AdapterDTO(BaseModel): friendly_name: str state: StateEnumDTO # This currently cannot be converted? class Config: allow_population_by_field_name = True orm_mode = True use_enum_values = True Conversion AdapterDTO.from_orm(Adapter(friendly_name='test', state=StateEnum.CREATED)) This leads to the error value is not a valid enumeration member; permitted: 'CREATED', 'UPDATED' (type=type_error.enum; enum_values=[<StateEnumDTO.CREATED: 'CREATED'>, <StateEnumDTO.UPDATED: 'UPDATED'>]) How can I configure either a.) the serialization with the from_orm method? or b.) the creation of the state field? c.) How to convert it the other way around? Is there a native way to do this with pydantic or how is this typically done? Update: Test case def test_enum_conversion_to_dto(): adapter = Adapter(id='1', friendly_name='test', state=StateEnum.CREATED) adapter_dto = AdapterDTO.from_orm(adapter) assert adapter_dto.state == StateEnumDTO.CREATED assert adapter_dto.state.value == StateEnum.CREATED.value
Pydantic requires that both enum classes have the same type definition. In your case, StateEnum inherits from enum.Enum, but StateEnumDTO inherits from both str and enum.Enum. You can fix this issue by changing your SQLAlchemy enum definition:

class StateEnum(str, enum.Enum):
    CREATED = 'CREATED'
    UPDATED = 'UPDATED'
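A compact sketch of the changed definition together with a round-trip check, assuming the Adapter, AdapterDTO and StateEnumDTO definitions from the question are otherwise unchanged and in scope:

import enum

class StateEnum(str, enum.Enum):  # now matches StateEnumDTO(str, Enum)
    CREATED = 'CREATED'
    UPDATED = 'UPDATED'

adapter = Adapter(id='1', friendly_name='test', state=StateEnum.CREATED)
adapter_dto = AdapterDTO.from_orm(adapter)
assert adapter_dto.state == StateEnumDTO.CREATED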
5
3
70,948,998
2022-2-1
https://stackoverflow.com/questions/70948998/how-to-re-use-the-return-values-of-matplotlib-axes-hist
Suppose I want to plot a histogram of the same data twice: import matplotlib.pyplot as plt fig = plt.figure(figsize=(8,6)) ax1,ax2 = fig.subplots(nrows=2,ncols=1) ax1.hist(foo) ax2.hist(foo) ax2.set_yscale("log") ax2.set_xlabel("foo") fig.show() Note that I call Axes.hist twice, and it could be expensive. I wonder if there is an easy way to re-use the return value of the 1st call to make the second one cheap.
In the ax.hist docs, there is a related example of reusing np.histogram output: The weights parameter can be used to draw a histogram of data that has already been binned by treating each bin as a single point with a weight equal to its count. counts, bins = np.histogram(data) plt.hist(bins[:-1], bins, weights=counts) We can use the same approach with ax.hist since it also returns counts and bins (along with a bar container): x = np.random.default_rng(123).integers(10, size=100) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 3)) counts, bins, bars = ax1.hist(x) # original hist ax2.hist(bins[:-1], bins, weights=counts) # rebuilt via weights params Alternatively, reconstruct the original histogram using ax.bar and restyle the width/alignment to match ax.hist: fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 3)) counts, bins, bars = ax1.hist(x) # original hist ax2.bar(bins[:-1], counts, width=1.0, align='edge') # rebuilt via ax.bar
5
5
70,946,151
2022-2-1
https://stackoverflow.com/questions/70946151/how-to-set-default-on-update-current-timestamp-in-postgres-with-sqlalchemy
This is a sister question to How to set DEFAULT ON UPDATE CURRENT_TIMESTAMP in mysql with sqlalchemy?, but focused on Postgres instead of MySQL. Say we want to create a table users with a column datemodified that updates by default to the current timestamp whenever a row is updated. The solution given in the sister question for MySQL is:

user = Table(
    "users",
    Metadata,
    Column(
        "datemodified",
        TIMESTAMP,
        server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"),
    ),
)

How can I get the same functionality with a Postgres backend?
Eventually I implemented this using triggers as suggested by a_horse_with_no_name in the comments. Full SQLAlchemy implementation and integration with Alembic follow. SQLAlchemy implementation # models.py class User(Base): __tablename__ = "user" id = Column(Integer, primary_key=True) name = Column(Text) created_at = Column(DateTime, server_default=sqlalchemy.func.now(), nullable=False) updated_at = Column(DateTime) # your_application_code.py import sqlalchemy as sa create_refresh_updated_at_func = """ CREATE FUNCTION {schema}.refresh_updated_at() RETURNS TRIGGER LANGUAGE plpgsql AS $func$ BEGIN NEW.updated_at := now(); RETURN NEW; END $func$; """ create_trigger = """ CREATE TRIGGER trig_{table}_updated BEFORE UPDATE ON {schema}.{table} FOR EACH ROW EXECUTE PROCEDURE {schema}.refresh_updated_at(); """ my_schema = "foo" engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema))) engine.execute(sa.text(create_trigger.format(schema=my_schema, table="user"))) Integration with Alembic In my case it was important to integrate the trigger creation with Alembic, and to add the trigger to n dimension tables (all of them having an updated_at column). # alembic/versions/your_version.py import sqlalchemy as sa create_refresh_updated_at_func = """ CREATE FUNCTION {schema}.refresh_updated_at() RETURNS TRIGGER LANGUAGE plpgsql AS $func$ BEGIN NEW.updated_at := now(); RETURN NEW; END $func$; """ create_trigger = """ CREATE TRIGGER trig_{table}_updated BEFORE UPDATE ON {schema}.{table} FOR EACH ROW EXECUTE PROCEDURE {schema}.refresh_updated_at(); """ def upgrade(): op.create_table(..., schema="foo") ... # Add updated_at triggers for all tables op.execute(sa.text(create_refresh_updated_at_func.format(schema="foo"))) for table in MY_LIST_OF_TABLES: op.execute(sa.text(create_trigger.format(schema="foo", table=table))) def downgrade(): op.drop_table(..., schema="foo") ... op.execute(sa.text("DROP FUNCTION foo.refresh_updated_at() CASCADE"))
5
8
70,903,401
2022-1-29
https://stackoverflow.com/questions/70903401/how-do-i-get-mobile-status-for-discord-bot-by-directly-modifying-identify-packet
Apparently, discord bots can have mobile status as opposed to the desktop (online) status that one gets by default. After a bit of digging I found out that such a status is achieved by modifying the IDENTIFY packet in discord.gateway.DiscordWebSocket.identify modifying the value of $browser to Discord Android or Discord iOS should theoretically get us the mobile status. After modifying code snippets I found online which does this, I end up with this : def get_mobile(): """ The Gateway's IDENTIFY packet contains a properties field, containing $os, $browser and $device fields. Discord uses that information to know when your phone client and only your phone client has connected to Discord, from there they send the extended presence object. The exact field that is checked is the $browser field. If it's set to Discord Android on desktop, the mobile indicator is is triggered by the desktop client. If it's set to Discord Client on mobile, the mobile indicator is not triggered by the mobile client. The specific values for the $os, $browser, and $device fields are can change from time to time. """ import ast import inspect import re import discord def source(o): s = inspect.getsource(o).split("\n") indent = len(s[0]) - len(s[0].lstrip()) return "\n".join(i[indent:] for i in s) source_ = source(discord.gateway.DiscordWebSocket.identify) patched = re.sub( r'([\'"]\$browser[\'"]:\s?[\'"]).+([\'"])', r"\1Discord Android\2", source_, ) loc = {} exec(compile(ast.parse(patched), "<string>", "exec"), discord.gateway.__dict__, loc) return loc["identify"] Now all there is left to do is overwrite the discord.gateway.DiscordWebSocket.identify during runtime in the main file, something like this : import discord import os from discord.ext import commands import mobile_status discord.gateway.DiscordWebSocket.identify = mobile_status.get_mobile() bot = commands.Bot(command_prefix="?") @bot.event async def on_ready(): print(f"Sucessfully logged in as {bot.user}") bot.run(os.getenv("DISCORD_TOKEN")) And we do get the mobile status successfully But here's the problem, I wanted to directly modify the file (which held the function) rather than monkey-patching it during runtime. So I cloned the dpy lib locally and edited the file on my machine, it ended up looking like this : async def identify(self): """Sends the IDENTIFY packet.""" payload = { 'op': self.IDENTIFY, 'd': { 'token': self.token, 'properties': { '$os': sys.platform, '$browser': 'Discord Android', '$device': 'Discord Android', '$referrer': '', '$referring_domain': '' }, 'compress': True, 'large_threshold': 250, 'v': 3 } } # ... (edited both $browser and $device to Discord Android just to be safe) But this does not work and just gives me the regular desktop online icon. 
So the next thing I did is to inspect the identify function after it has been monkey-patched, so I could just look at the source code and see what went wrong earlier, but due to hard luck I got this error : Traceback (most recent call last): File "c:\Users\Achxy\Desktop\fresh\file.py", line 8, in <module> print(inspect.getsource(discord.gateway.DiscordWebSocket.identify)) File "C:\Users\Achxy\AppData\Local\Programs\Python\Python39\lib\inspect.py", line 1024, in getsource lines, lnum = getsourcelines(object) File "C:\Users\Achxy\AppData\Local\Programs\Python\Python39\lib\inspect.py", line 1006, in getsourcelines lines, lnum = findsource(object) File "C:\Users\Achxy\AppData\Local\Programs\Python\Python39\lib\inspect.py", line 835, in findsource raise OSError('could not get source code') OSError: could not get source code Code : import discord import os from discord.ext import commands import mobile_status import inspect discord.gateway.DiscordWebSocket.identify = mobile_status.get_mobile() print(inspect.getsource(discord.gateway.DiscordWebSocket.identify)) bot = commands.Bot(command_prefix="?") @bot.event async def on_ready(): print(f"Sucessfully logged in as {bot.user}") bot.run(os.getenv("DISCORD_TOKEN")) Since this same behavior was exhibited for every patched function (aforementioned one and the loc["identify"]) I could no longer use inspect.getsource(...) and then relied upon dis.dis which lead to much more disappointing results The disassembled data looks exactly identical to the monkey-patched working version, so the directly modified version simply does not work despite function content being the exact same. (In regards to disassembled data) Notes: Doing Discord iOS directly does not work either, changing the $device to some other value but keeping $browser does not work, I have tried all combinations, none of them work. TL;DR: How to get mobile status for discord bot without monkey-patching it during runtime?
The following works by subclassing the relevant class, and duplicating code with the relevant changes. We also have to subclass the Client class, to overwrite the place where the gateway/websocket class is used. This results in a lot of duplicated code, however it does work, and requires neither dirty monkey-patching nor editing the library source code. However, it does come with many of the same problems as editing the library source code - mainly that as the library is updated, this code will become out of date (if you're using the archived and obsolete version of the library, you have bigger problems instead). import asyncio import sys import aiohttp import discord from discord.gateway import DiscordWebSocket, _log from discord.ext.commands import Bot class MyGateway(DiscordWebSocket): async def identify(self): payload = { 'op': self.IDENTIFY, 'd': { 'token': self.token, 'properties': { '$os': sys.platform, '$browser': 'Discord Android', '$device': 'Discord Android', '$referrer': '', '$referring_domain': '' }, 'compress': True, 'large_threshold': 250, 'v': 3 } } if self.shard_id is not None and self.shard_count is not None: payload['d']['shard'] = [self.shard_id, self.shard_count] state = self._connection if state._activity is not None or state._status is not None: payload['d']['presence'] = { 'status': state._status, 'game': state._activity, 'since': 0, 'afk': False } if state._intents is not None: payload['d']['intents'] = state._intents.value await self.call_hooks('before_identify', self.shard_id, initial=self._initial_identify) await self.send_as_json(payload) _log.info('Shard ID %s has sent the IDENTIFY payload.', self.shard_id) class MyBot(Bot): async def connect(self, *, reconnect: bool = True) -> None: """|coro| Creates a websocket connection and lets the websocket listen to messages from Discord. This is a loop that runs the entire event system and miscellaneous aspects of the library. Control is not resumed until the WebSocket connection is terminated. Parameters ----------- reconnect: :class:`bool` If we should attempt reconnecting, either due to internet failure or a specific failure on Discord's part. Certain disconnects that lead to bad state will not be handled (such as invalid sharding payloads or bad tokens). Raises ------- :exc:`.GatewayNotFound` If the gateway to connect to Discord is not found. Usually if this is thrown then there is a Discord API outage. :exc:`.ConnectionClosed` The websocket connection has been terminated. 
""" backoff = discord.client.ExponentialBackoff() ws_params = { 'initial': True, 'shard_id': self.shard_id, } while not self.is_closed(): try: coro = MyGateway.from_client(self, **ws_params) self.ws = await asyncio.wait_for(coro, timeout=60.0) ws_params['initial'] = False while True: await self.ws.poll_event() except discord.client.ReconnectWebSocket as e: _log.info('Got a request to %s the websocket.', e.op) self.dispatch('disconnect') ws_params.update(sequence=self.ws.sequence, resume=e.resume, session=self.ws.session_id) continue except (OSError, discord.HTTPException, discord.GatewayNotFound, discord.ConnectionClosed, aiohttp.ClientError, asyncio.TimeoutError) as exc: self.dispatch('disconnect') if not reconnect: await self.close() if isinstance(exc, discord.ConnectionClosed) and exc.code == 1000: # clean close, don't re-raise this return raise if self.is_closed(): return # If we get connection reset by peer then try to RESUME if isinstance(exc, OSError) and exc.errno in (54, 10054): ws_params.update(sequence=self.ws.sequence, initial=False, resume=True, session=self.ws.session_id) continue # We should only get this when an unhandled close code happens, # such as a clean disconnect (1000) or a bad state (bad token, no sharding, etc) # sometimes, discord sends us 1000 for unknown reasons so we should reconnect # regardless and rely on is_closed instead if isinstance(exc, discord.ConnectionClosed): if exc.code == 4014: raise discord.PrivilegedIntentsRequired(exc.shard_id) from None if exc.code != 1000: await self.close() raise retry = backoff.delay() _log.exception("Attempting a reconnect in %.2fs", retry) await asyncio.sleep(retry) # Always try to RESUME the connection # If the connection is not RESUME-able then the gateway will invalidate the session. # This is apparently what the official Discord client does. ws_params.update(sequence=self.ws.sequence, resume=True, session=self.ws.session_id) bot = MyBot(command_prefix="?") @bot.event async def on_ready(): print(f"Sucessfully logged in as {bot.user}") bot.run("YOUR_BOT_TOKEN") Personally, I think that the following approach, which does include some runtime monkey-patching (but no AST manipulation) is cleaner for this purpose: import sys from discord.gateway import DiscordWebSocket, _log from discord.ext.commands import Bot async def identify(self): payload = { 'op': self.IDENTIFY, 'd': { 'token': self.token, 'properties': { '$os': sys.platform, '$browser': 'Discord Android', '$device': 'Discord Android', '$referrer': '', '$referring_domain': '' }, 'compress': True, 'large_threshold': 250, 'v': 3 } } if self.shard_id is not None and self.shard_count is not None: payload['d']['shard'] = [self.shard_id, self.shard_count] state = self._connection if state._activity is not None or state._status is not None: payload['d']['presence'] = { 'status': state._status, 'game': state._activity, 'since': 0, 'afk': False } if state._intents is not None: payload['d']['intents'] = state._intents.value await self.call_hooks('before_identify', self.shard_id, initial=self._initial_identify) await self.send_as_json(payload) _log.info('Shard ID %s has sent the IDENTIFY payload.', self.shard_id) DiscordWebSocket.identify = identify bot = Bot(command_prefix="?") @bot.event async def on_ready(): print(f"Sucessfully logged in as {bot.user}") bot.run("YOUR_DISCORD_TOKEN") As to why editing the library source code did not work for you, I can only assume that you have edited the wrong copy of the file, as people have commented.
10
1
70,864,474
2022-1-26
https://stackoverflow.com/questions/70864474/uvicorn-async-workers-are-still-working-synchronously
Question in short I have migrated my project from Django 2.2 to Django 3.2, and now I want to start using the possibility for asynchronous views. I have created an async view, setup asgi configuration, and run gunicorn with a Uvicorn worker. When swarming this server with 10 users concurrently, they are served synchronously. What do I need to configure in order to serve 10 concurrent users an async view? Question in detail This is what I did so far in my local environment: I am working with Django 3.2.10 and Python 3.9. I have installed gunicorn and uvicorn through pip I have created an asgi.py file with the following contents import os from django.core.asgi import get_asgi_application os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'MyService.settings.local') application = get_asgi_application() I have created a view with the following implementation, and connected it in urlpatterns: import asyncio import json from django.http import HttpResponse async def async_sleep(request): await asyncio.sleep(1) return HttpResponse(json.dumps({'mode': 'async', 'time': 1).encode()) I run locally a gunicorn server with a Uvicorn worker: gunicorn MyService.asgi:application -k uvicorn.workers.UvicornWorker [2022-01-26 14:37:14 +0100] [8732] [INFO] Starting gunicorn 20.1.0 [2022-01-26 14:37:14 +0100] [8732] [INFO] Listening at: http://127.0.0.1:8000 (8732) [2022-01-26 14:37:14 +0100] [8732] [INFO] Using worker: uvicorn.workers.UvicornWorker [2022-01-26 14:37:14 +0100] [8733] [INFO] Booting worker with pid: 8733 [2022-01-26 13:37:15 +0000] [8733] [INFO] Started server process [8733] [2022-01-26 13:37:15 +0000] [8733] [INFO] Waiting for application startup. [2022-01-26 13:37:15 +0000] [8733] [INFO] ASGI 'lifespan' protocol appears unsupported. [2022-01-26 13:37:15 +0000] [8733] [INFO] Application startup complete. I hit the API from a local client once. After 1 second, I get a 200 OK, as expected. I set up a locust server to spawn concurrent users. When I let it make requests with 1 concurrent user, every 1 second an API call is completed. When I let it make requests with 10 concurrent users, every 1 second an API call is completed. All other requests are waiting. This last thing is not what I expect. I expect the worker, while sleeping asynchronously, to pick up the next request already. Am I missing some configuration? I also tried it by using Daphne instead of Uvicorn, but with the same result. Locust This is how I have set up my locust. 
Start a new virtualenv pip install locust Create a locustfile.py with the following content: from locust import HttpUser, task class SleepUser(HttpUser): @task def async_sleep(self): self.client.get('/api/async_sleep/') Run the locust executable from the shell Visit http://0.0.0.0:8089 in the browser Set number of workers to 10, spawn rate to 1 and host to http://127.0.0.1:8000 Middleware These are my middleware settings MIDDLEWARE = [ 'django_prometheus.middleware.PrometheusBeforeMiddleware', 'corsheaders.middleware.CorsMiddleware', 'django.middleware.gzip.GZipMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django.middleware.security.SecurityMiddleware', 'shared.common.middleware.ApiLoggerMiddleware', 'django_prometheus.middleware.PrometheusAfterMiddleware', ] The ApiLoggerMiddleware from shared is from our own code, I will investigate this one first. This is the implementation of it. import logging import os from typing import List from django.http import HttpRequest, HttpResponse from django.utils import timezone from shared.common.authentication_service import BaseAuthenticationService class ApiLoggerMiddleware: TOO_BIG_FOR_LOG_BYTES = 2 * 1024 def __init__(self, get_response): # The get_response callable is provided by Django, it is a function # that takes a request and returns a response. Plainly put, once we're # done with the incoming request, we need to pass it along to get the # response which we need to ultimately return. self._get_response = get_response self.logger = logging.getLogger('api') self.pid = os.getpid() self.request_time = None self.response_time = None def __call__(self, request: HttpRequest) -> HttpResponse: common_data = self.on_request(request) response = self._get_response(request) self.on_response(response, common_data) return response def truncate_body(self, request: HttpRequest) -> str: return f"{request.body[:self.TOO_BIG_FOR_LOG_BYTES]}" def on_request(self, request: HttpRequest) -> List[str]: self.request_time = timezone.now() remote_address = self.get_remote_address(request) user_agent = request.headers.get('User-Agent') or '' customer_uuid = self.get_customer_from_request_auth(request) method = request.method uri = request.get_raw_uri() common = [ remote_address, user_agent, customer_uuid, method, uri ] in_line = [ "IN", str(self.pid), str(self.request_time), ] + common + [ self.truncate_body(request) ] self.logger.info(', '.join(in_line)) return common def on_response(self, response: HttpResponse, common: List[str]) -> None: self.response_time = timezone.now() out_line = [ "OUT", str(self.pid), str(self.response_time) ] + common + [ str(self.response_time - self.request_time), str(response.status_code), ] self.logger.info(", ".join(out_line)) @classmethod def get_customer_from_request_auth(cls, request: HttpRequest) -> str: token = request.headers.get('Authorization') if not token: return 'no token' try: payload = BaseAuthenticationService.validate_access_token(token) return payload.get('amsOrganizationId', '') except Exception: return 'unknown' @classmethod def get_remote_address(cls, request: HttpRequest) -> str: if 'X-Forwarded-For' in request.headers: # in case the request comes in through a proxy, the remote address # will be just the last proxy that passed it along, 
that's why we # have to get the remote from X-Forwarded-For # https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For addresses = request.headers['X-Forwarded-For'].split(',') client = addresses[0] return client else: return request.META.get('REMOTE_ADDR', '') Sources Sources I have used: A Guide to ASGI in Django 3.0 and its performance How to use Django with Uvicorn
Your ApiLoggerMiddleware is a synchronous middleware. From https://docs.djangoproject.com/en/4.0/topics/async/#async-views, emphasis mine: You will only get the benefits of a fully-asynchronous request stack if you have no synchronous middleware loaded into your site. If there is a piece of synchronous middleware, then Django must use a thread per request to safely emulate a synchronous environment for it. Middleware can be built to support both sync and async contexts. Some of Django’s middleware is built like this, but not all. To see what middleware Django has to adapt, you can turn on debug logging for the django.request logger and look for log messages about “Synchronous middleware … adapted”. (The log message currently says "Asynchronous middleware ... adapted", bug reported at #33495.) Turn on debug logging for the django.request logger by adding this to your LOGGING setting: 'django.request': { 'handlers': ['console'], 'level': 'DEBUG', }, Solution To make ApiLoggerMiddleware asynchronous: Inherit django.utils.deprecation.MiddlewareMixin. call super().__init__(get_response) in __init__. remove __call__; MiddlewareMixin.__call__ makes your middleware asynchronous. Refactor on_request to process_request. return None instead of common. attach common to request instead: request.common = common. remember to update references to request.common. attach request_time to request instead of self to make it (and the middleware) thread-safe. remember to update references to request.request_time. Refactor on_response(self, response, common) to process_response(self, request, response). return response. don't attach response_time to self; leave it as a variable since it's not used in other functions. The result: class ApiLoggerMiddleware(MiddlewareMixin): TOO_BIG_FOR_LOG_BYTES = 2 * 1024 def __init__(self, get_response): # The get_response callable is provided by Django, it is a function # that takes a request and returns a response. Plainly put, once we're # done with the incoming request, we need to pass it along to get the # response which we need to ultimately return. 
super().__init__(get_response) # + self._get_response = get_response self.logger = logging.getLogger('api') self.pid = os.getpid() # self.request_time = None # - # self.response_time = None # - # def __call__(self, request: HttpRequest) -> HttpResponse: # - # common_data = self.on_request(request) # - # response = self._get_response(request) # - # self.on_response(response, common_data) # - # return response # - def truncate_body(self, request: HttpRequest) -> str: return f"{request.body[:self.TOO_BIG_FOR_LOG_BYTES]}" # def on_request(self, request: HttpRequest) -> List[str]: # - def process_request(self, request: HttpRequest) -> None: # + # self.request_time = timezone.now() # - request.request_time = timezone.now() # + remote_address = self.get_remote_address(request) user_agent = request.headers.get('User-Agent') or '' customer_uuid = self.get_customer_from_request_auth(request) method = request.method uri = request.get_raw_uri() common = [ remote_address, user_agent, customer_uuid, method, uri ] in_line = [ "IN", str(self.pid), # str(self.request_time), # - str(request.request_time), # + ] + common + [ self.truncate_body(request) ] self.logger.info(', '.join(in_line)) # return common # - request.common = common # + return None # + # def on_response(self, response: HttpResponse, common: List[str]) -> None: # - def process_response(self, request: HttpRequest, response: HttpResponse) -> HttpResponse: # + # self.response_time = timezone.now() # - response_time = timezone.now() # + out_line = [ "OUT", str(self.pid), # str(self.response_time) # - str(response_time) # + # ] + common + [ # - ] + getattr(request, 'common', []) + [ # + # str(self.response_time - self.request_time), # - str(response_time - getattr(request, 'request_time', 0)), # + str(response.status_code), ] self.logger.info(", ".join(out_line)) return response # + @classmethod def get_customer_from_request_auth(cls, request: HttpRequest) -> str: token = request.headers.get('Authorization') if not token: return 'no token' try: payload = BaseAuthenticationService.validate_access_token(token) return payload.get('amsOrganizationId', '') except Exception: return 'unknown' @classmethod def get_remote_address(cls, request: HttpRequest) -> str: if 'X-Forwarded-For' in request.headers: # in case the request comes in through a proxy, the remote address # will be just the last proxy that passed it along, that's why we # have to get the remote from X-Forwarded-For # https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For addresses = request.headers['X-Forwarded-For'].split(',') client = addresses[0] return client else: return request.META.get('REMOTE_ADDR', '')
10
5
70,883,863
2022-1-27
https://stackoverflow.com/questions/70883863/poetry-install-fails-on-macos-11-6-xcode-13-dyld-library-not-loaded-executab
New poetry install fails on macOS 11.6/Xcode 13, using official installer: u@s-MacBook-Pro ~ % curl -sSL https://install.python-poetry.org | python3 - Retrieving Poetry metadata # Welcome to Poetry! This will download and install the latest version of Poetry, a dependency and package manager for Python. It will add the `poetry` command to Poetry's bin directory, located at: /Users/u/Library/Python/3.8/bin You can uninstall at any time by executing this script with the --uninstall option, and these changes will be reverted. Installing Poetry (1.1.12): An error occurred. Removing partial environment. Poetry installation failed. See /Users/u/poetry-installer-error-h3v75uw9.log for error logs. u@s-MacBook-Pro ~ % more /Users/u/poetry-installer-error-h3v75uw9.log dyld: Library not loaded: @executable_path/../Python3 Referenced from: /Users/u/Library/Application Support/pypoetry/venv/bin/python3 Reason: image not found Traceback: File "<stdin>", line 877, in main File "<stdin>", line 514, in run u@s-MacBook-Pro ~ % There is no previous poetry install or reference to poetry in caches or Application Support: u@s-MacBook-Pro ~ % ls -al Library/Application\ Support/pypoetry total 0 drwxr-xr-x 2 u staff 64 26 Jan 16:58 . drwx------+ 55 u staff 1760 26 Jan 16:43 .. u@s-MacBook-Pro ~ % ls -al Library/Caches/pypoetry ls: Library/Caches/pypoetry: No such file or directory Python version 3.8.9: u@s-MacBook-Pro ~ % python3 --version Python 3.8.9 u@s-MacBook-Pro ~ % Xcode version: u@s-MacBook-Pro ~ % xcodebuild -version Xcode 13.2.1 Build version 13C100 u@s-MacBook-Pro ~ %
I worked through this by installing a non-system version of python3, then installing poetry. brew install pyenv pyenv install 3.8.12 pyenv local 3.8.12 curl -sSL https://install.python-poetry.org | python3 -
7
5
70,934,699
2022-2-1
https://stackoverflow.com/questions/70934699/how-to-fix-error-when-building-conda-package-related-to-icon-file
I honestly can't figure out what is happening with this error. I thought it was something in my manifest file but apparently it's not. Note, this directory is in my Google Drive. Here is my MANIFEST.in file: graft soothsayer_utils include setup.py include LICENSE.txt include README.md global-exclude Icon* global-exclude *.py[co] global-exclude .DS_Store I'm running conda build . in the directory and get the following error: Packaging soothsayer_utils INFO:conda_build.build:Packaging soothsayer_utils INFO conda_build.build:build(2214): Packaging soothsayer_utils Packaging soothsayer_utils-2022.01.19-py_0 INFO:conda_build.build:Packaging soothsayer_utils-2022.01.19-py_0 INFO conda_build.build:bundle_conda(1454): Packaging soothsayer_utils-2022.01.19-py_0 number of files: 11 Fixing permissions Packaged license file/s. INFO :: Time taken to mark (prefix) 0 replacements in 0 files was 0.11 seconds 'site-packages/soothsayer_utils-2022.1.19.dist-info/Icon' not in tarball 'site-packages/soothsayer_utils-2022.1.19.dist-info/Icon\r' not in info/files Traceback (most recent call last): File "/Users/jespinoz/anaconda3/bin/conda-build", line 11, in <module> sys.exit(main()) File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 474, in main execute(sys.argv[1:]) File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 463, in execute outputs = api.build(args.recipe, post=args.post, test_run_post=args.test_run_post, File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/api.py", line 186, in build return build_tree( File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/build.py", line 3008, in build_tree packages_from_this = build(metadata, stats, File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/build.py", line 2291, in build newly_built_packages = bundlers[pkg_type](output_d, m, env, stats) File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/build.py", line 1619, in bundle_conda tarcheck.check_all(tmp_path, metadata.config) File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/tarcheck.py", line 89, in check_all x.info_files() File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/tarcheck.py", line 53, in info_files raise Exception('info/files') Exception: info/files Here's the complete log I can confirm that the Icon files do not exist in my soothsayer_utils-2022.1.19.tar.gz file: (base) jespinoz@x86_64-apple-darwin13 Downloads % tree soothsayer_utils-2022.1.19 soothsayer_utils-2022.1.19 ├── LICENSE.txt ├── MANIFEST.in ├── PKG-INFO ├── README.md ├── setup.cfg ├── setup.py ├── soothsayer_utils │ ├── __init__.py │ └── soothsayer_utils.py └── soothsayer_utils.egg-info ├── PKG-INFO ├── SOURCES.txt ├── dependency_links.txt ├── requires.txt └── top_level.txt 2 directories, 13 files Can someone help me get conda build to work for my package?
There are a few symptoms I would suggest looking into:

1. There is a WARNING in your error log: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.

2. Your MANIFEST.in, setup.py and setup.cfg are probably conflicting with each other, because setup.py is the build script for setuptools: it tells setuptools about your package (such as the name and version) as well as which code files to include. Also, an existing generated MANIFEST will be regenerated without sdist comparing its modification time to the one of MANIFEST.in or setup.py, as explained here. Please refer to Building and Distributing Packages with Setuptools, as well as Configuring setup() using setup.cfg files and Quickstart, for more information.

3. Maybe not so important, but another thing worth looking into is the fact that two different Python distributions are being used at different stages. Python 3.10 is used at:

Using pip 22.0.2 from $PREFIX/lib/python3.10/site-packages/pip (python 3.10)

(it is also in your conda dependencies), while Python 3.8 is used at:

File "/Users/jespinoz/anaconda3/lib/python3.8/site-packages/conda_build/tarcheck.py", line 53, in info_files
    raise Exception('info/files')

which is where the error happens. So this may be another configuration conflict related to that.
6
2
70,876,473
2022-1-27
https://stackoverflow.com/questions/70876473/kubernetespodoperator-how-to-use-cmds-or-cmds-and-arguments-to-run-multiple-comm
I'm using GCP composer to run an algorithm and at the end of the stream I want to run a task that will perform several operations copying and deleting files and folders from a volume to a bucket I'm trying to perform these copying and deleting operations via a kubernetespodoperator. I'm having hardship finding the right way to run several commands using "cmds" I also tried using "cmds" with "arguments". Here is my KubernetesPodOperator and the cmds and arguments combinations I tried: post_algo_run = kubernetes_pod_operator.KubernetesPodOperator( task_id="multi-coher-post-operations", name="multi-coher-post-operations", namespace="default", image="google/cloud-sdk:alpine", ### doesn't work ### cmds=["gsutil", "cp", "/data/splitter-output\*.csv", "gs://my_bucket/data" , "&" , "gsutil", "rm", "-r", "/input"], #Error: #[2022-01-27 09:31:38,407] {pod_manager.py:197} INFO - CommandException: Destination URL must name a directory, bucket, or bucket #[2022-01-27 09:31:38,408] {pod_manager.py:197} INFO - subdirectory for the multiple source form of the cp command. #################### ### doesn't work ### # cmds=["gsutil", "cp", "/data/splitter-output\*.csv", "gs://my_bucket/data ;","gsutil", "rm", "-r", "/input"], # [2022-01-27 09:34:06,865] {pod_manager.py:197} INFO - CommandException: Destination URL must name a directory, bucket, or bucket # [2022-01-27 09:34:06,866] {pod_manager.py:197} INFO - subdirectory for the multiple source form of the cp command. #################### ### only preform the first command - only copying ### # cmds=["bash", "-cx"], # arguments=["gsutil cp /data/splitter-output\*.csv gs://my_bucket/data","gsutil rm -r /input"], # [2022-01-27 09:36:09,164] {pod_manager.py:197} INFO - + gsutil cp '/data/splitter-output*.csv' gs://my_bucket/data # [2022-01-27 09:36:11,200] {pod_manager.py:197} INFO - Copying file:///data/splitter-output\Coherence Results-26-Jan-2022-1025Part1.csv [Content-Type=text/csv]... # [2022-01-27 09:36:11,300] {pod_manager.py:197} INFO - / [0 files][ 0.0 B/ 93.0 KiB] # / [1 files][ 93.0 KiB/ 93.0 KiB] # [2022-01-27 09:36:11,302] {pod_manager.py:197} INFO - Operation completed over 1 objects/93.0 KiB. # [20 22-01-27 09:36:12,317] {kubernetes_pod.py:459} INFO - Deleting pod: multi-coher-post-operations.d66b4c91c9024bd289171c4d3ce35fdd #################### volumes=[ Volume( name="nfs-pvc", configs={ "persistentVolumeClaim": {"claimName": "nfs-pvc"} }, ) ], volume_mounts=[ VolumeMount( name="nfs-pvc", mount_path="/data/", sub_path=None, read_only=False, ) ], )
I found a technique for running multiple commands. First I found the relationship between the KubernetesPodOperator cmds and arguments properties and Docker's ENTRYPOINT and CMD: the KubernetesPodOperator cmds overwrites the image's original ENTRYPOINT, and the KubernetesPodOperator arguments is equivalent to Docker's CMD. So in order to run multiple commands from the KubernetesPodOperator, I used the following syntax. I set the KubernetesPodOperator cmds to run bash with -c:

cmds=["/bin/bash", "-c"],

and I set the KubernetesPodOperator arguments to run two echo commands separated by &&:

arguments=["echo hello && echo goodbye"],

So my KubernetesPodOperator looks like this:

stajoverflow_test = KubernetesPodOperator(
    task_id="stajoverflow_test",
    name="stajoverflow_test",
    namespace="default",
    image="google/cloud-sdk:alpine",
    cmds=["/bin/bash", "-c"],
    arguments=["echo hello && echo goodbye"],
)
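Applied back to the original copy-then-delete use case, the operator from the question might look roughly like this (image, paths and bucket name are copied from the question; && means the delete only runs if the copy succeeded, use ; instead if it should run regardless):

post_algo_run = kubernetes_pod_operator.KubernetesPodOperator(
    task_id="multi-coher-post-operations",
    name="multi-coher-post-operations",
    namespace="default",
    image="google/cloud-sdk:alpine",
    cmds=["/bin/bash", "-c"],
    arguments=["gsutil cp /data/splitter-output*.csv gs://my_bucket/data && gsutil rm -r /input"],
    volumes=[...],        # same Volume definition as in the question
    volume_mounts=[...],  # same VolumeMount definition as in the question
)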
5
4
70,882,092
2022-1-27
https://stackoverflow.com/questions/70882092/can-we-make-1-2-true
Python ints are objects that encapsulate the actual number value. Can we mess with that value, for example setting the value of the object 1 to 2? So that 1 == 2 becomes True?
Yes, we can. But don't do this at home. Seriously, the 1 object is used in many places and I have no clue what this might break and what that might do to your computer. I reject all responsibility. But I found it interesting to learn about these things. The id function gives us the memory address and the ctypes module lets us mess with memory: import ctypes ctypes.memmove(id(1) + 24, id(2) + 24, 4) print(1 == 2) x = 40 print(x + 1) Output: True 42 Try it online!. I tried it there because such sites have got to be protected from our hacking anyway. More explanation / analysis: The memmove copied the value from the 2 object into the 1 object. Their size is 28 bytes each, but I skipped the first 24 bytes, because that's the object's reference count, type address, and value size, as we can view/verify as well: import ctypes, struct, sys x = 1 data = ctypes.string_at(id(x), 28) ref_count, type_address, number_of_digits, lowest_digit = \ struct.unpack('qqqi', data) print('reference count: ', ref_count, sys.getrefcount(x)) print('type address: ', type_address, id(type(x))) print('number of digits:', number_of_digits, -(-x.bit_length() // 30)) print('lowest digit: ', lowest_digit, x % 2**30) Output (Try it online!): reference count: 135 138 type address: 140259718753696 140259718753696 number of digits: 1 1 lowest digit: 1 1 The reference count gets increased by the getrefcount call, but I don't know why by 3. Anyway, ~134 things other than us reference the 1 object, and we're potentially messing all of them up, so... really don't try this at home. The "digits" refer to how CPython stores ints as digits in base 230. For example, x = 2 ** 3000 has 101 such digits. Output for x = 123 ** 456 for a better test: reference count: 1 2 type address: 140078560107936 140078560107936 number of digits: 106 106 lowest digit: 970169057 970169057
101
163
70,946,286
2022-2-1
https://stackoverflow.com/questions/70946286/pip-compile-raising-assertionerror-on-its-logging-handler
I have a dockerfile that currently only installs pip-tools FROM python:3.9 RUN pip install --upgrade pip && \ pip install pip-tools COPY ./ /root/project WORKDIR /root/project ENTRYPOINT ["tail", "-f", "/dev/null"] I build and open a shell in the container using the following commands: docker build -t brunoapi_image . docker run --rm -ti --name brunoapi_container --entrypoint bash brunoapi_image Then, when I try to run pip-compile inside the container I get this very weird error (full traceback): root@727f1f38f095:~/project# pip-compile Traceback (most recent call last): File "/usr/local/bin/pip-compile", line 8, in <module> sys.exit(cli()) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1128, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1053, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.9/site-packages/click/core.py", line 754, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func return f(get_current_context(), *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/piptools/scripts/compile.py", line 342, in cli repository = PyPIRepository(pip_args, cache_dir=cache_dir) File "/usr/local/lib/python3.9/site-packages/piptools/repositories/pypi.py", line 106, in __init__ self._setup_logging() File "/usr/local/lib/python3.9/site-packages/piptools/repositories/pypi.py", line 455, in _setup_logging assert isinstance(handler, logging.StreamHandler) AssertionError I have no clue what's going on and I've never seen this error before. Can anyone shed some light into this? Running on macOS Monterey
It is a bug; you can work around it by downgrading pip: pip install "pip<22" See https://github.com/jazzband/pip-tools/issues/1558
32
7
70,861,001
2022-1-26
https://stackoverflow.com/questions/70861001/annotate-dataclass-class-variable-with-type-value
We have a number of dataclasses representing various results with common ancestor Result. Each result then provides its data using its own subclass of ResultData. But we have trouble to annotate the case properly. We came up with following solution: from dataclasses import dataclass from typing import ClassVar, Generic, Optional, Sequence, Type, TypeVar class ResultData: ... T = TypeVar('T', bound=ResultData) @dataclass class Result(Generic[T]): _data_cls: ClassVar[Type[T]] data: Sequence[T] @classmethod def parse(cls, ...) -> T: self = cls() self.data = [self._data_cls.parse(...)] return self class FooResultData(ResultData): ... class FooResult(Result): _data_cls = FooResultData but it stopped working lately with mypy error ClassVar cannot contain type variables [misc]. It is also against PEP 526, see https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations, which we missed earlier. Is there a way to annotate this case properly?
At the end I just replaced the variable in _data_cls annotation with the base class and fixed the annotation of subclasses as noted by @rv.kvetch in his answer. The downside is the need to define the result class twice in every subclass, but in my opinion it is more legible than extracting the class in property. The complete solution: from dataclasses import dataclass from typing import ClassVar, Generic, Optional, Sequence, Type, TypeVar class ResultData: ... T = TypeVar('T', bound=ResultData) @dataclass class Result(Generic[T]): _data_cls: ClassVar[Type[ResultData]] # Fixed annotation here data: Sequence[T] @classmethod def parse(cls, ...) -> T: self = cls() self.data = [self._data_cls.parse(...)] return self class FooResultData(ResultData): ... class FooResult(Result[FooResultData]): # Fixed annotation here _data_cls = FooResultData
6
2
70,897,060
2022-1-28
https://stackoverflow.com/questions/70897060/py2-to-py3-add-future-imports
I need to make an old code base compatible with Python 3. The code needs to support Python 2.7 and Python 3 for some months. I would like to add this to every file: # -*- coding: utf-8 -*- from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals I tried this: futurize --both-stages --unicode-literals --write --nobackups . But this adds the unicode_literals future import only, not the other future imports. I would like to avoid writing my own script to add this, since adding it blindly does not work because some files already have this header.
From the docs: Only those __future__ imports deemed necessary will be added unless the --all-imports command-line option is passed to futurize, in which case they are all added. If you want futurize to add all those imports unconditionally, you need to pass it the --all-imports flag.
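For example, assuming the same invocation as in the question, adding the flag the docs describe would look like this (a sketch, not tested against your code base):

futurize --both-stages --unicode-literals --all-imports --write --nobackups .

As far as I know, futurize does not duplicate __future__ imports that already exist, so files that already have the header should be left intact.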
5
4
70,932,129
2022-1-31
https://stackoverflow.com/questions/70932129/how-to-extract-bold-text-from-pdf-using-python
The list below provides examples of items and services that should not be billed separately. Please note that the list is not all inclusive. 1. Surgical rooms and services – To include surgical suites, major and minor, treatment rooms, endoscopy labs, cardiac cath labs, X-ray. 2. Facility Basic Charges - pulmonary and cardiology procedural rooms. The hospital’s charge for surgical suites and services shall include the entire above listed nursing personnel services, supplies, and equipment I want output like: Surgical rooms and services Facility Basic Charges The first sentence is also bold, but we need to omit it; we need to extract only the bold text that appears in the numbered items.
Use This Code: import pdfplumber import re demo = [] with pdfplumber.open('HCSC IL Inpatient_Outpatient Unbundling Policy- Facility.pdf') as pdf: for i in range(0, 50): try: text = pdf.pages[i] clean_text = text.filter(lambda obj: obj["object_type"] == "char" and "Bold" in obj["fontname"]) demo.append(str(re.findall(r'(\d+\.\s.*\n?)+', clean_text.extract_text())).replace('[]', ' ')) except IndexError: print("") break
6
2
70,938,215
2022-2-1
https://stackoverflow.com/questions/70938215/why-does-mypy-flag-item-none-has-no-attribute-x-error-even-if-i-check-for-none
I'm trying to use Python (3.8.8) with type hinting and getting errors from mypy (0.931) that I can't really understand. import xml.etree.ElementTree as ET tree = ET.parse('plant_catalog.xml') # read in file and parse as XML root = tree.getroot() # get root node for plant in root: # loop through children if plant.find("LIGHT") and plant.find("LIGHT").text == "sun": print("foo") This raises the mypy error Item "None" of "Optional[Element]" has no attribute "text". But why? I do check for the possibility of plant.find("LIGHT") returning None in the first half of the if clause. The second part accessing the .text attribute isn't even executed if the first part fails. If I modify it to lights = plant.find("LIGHT") if lights: if lights.text == selection: print("foo") the error is gone. So is this because the plant object might still change in between the first check and the second? But assigning to a variable doesn't automatically copy the content, it's still just a reference to an object that might change. So why does it pass the second time? (Yes, I know that repeating the .find() twice is also not time-efficient.)
mypy doesn't know that plant.find("LIGHT") always returns the same value, so it doesn't know that your test is a proper guard. So you need to assign it to a variable. As far as mypy is concerned, the variable can't change from one object to another without being reassigned, and its contents can't change if you don't perform some other operation on it.
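As a side note, if you want to keep this as a one-liner, an assignment expression should also satisfy mypy, since the narrowing happens on the bound name — a minimal sketch of the same check (the "sun" comparison is taken from the question):

for plant in root:
    # mypy narrows `light` to Element after the `is not None` test
    if (light := plant.find("LIGHT")) is not None and light.text == "sun":
        print("foo")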
7
9
70,935,209
2022-2-1
https://stackoverflow.com/questions/70935209/how-to-explode-dynamically-using-pandas-column
I have a dataframe that looks like this import pandas as pd import numpy as np # Create data set. dataSet = {'id': ['A', 'A', 'B'], 'id_2': [1, 2, 1] , 'number': [320, 169, 120], 'add_number' : [4,6,3]} # Create dataframe with data set and named columns. df = pd.DataFrame(dataSet, columns= ['id', 'id_2','number', 'add_number']) id id_2 number add_number 0 A 1 320 4 1 A 2 169 6 2 B 1 120 3 I would like use number and add_number so that I can explode this dynamically, ie) 320 + 4 would have [320,321,322,323,324] (up to 324, and would like to explode on this) DESIRED OUTPUT id id_2 number 0 A 1 320 1 A 1 321 2 A 1 322 3 A 1 323 4 A 1 324 5 A 2 169 6 A 2 170 7 A 2 171 8 A 2 172 9 A 2 173 10 A 2 174 11 A 2 175 12 B 1 120 13 B 1 121 14 B 1 122 15 B 1 123 I looked over explode, wide_to_long pandas function, but I do not know where to start, any sense of direction would be appreciated!!
You try using np.arange and explode: df['range'] = df.apply(lambda x: np.arange(x['number'], x['number']+x['add_number']+1), axis=1) df.explode('range') or df['range'] = [np.arange(n, n+a+1) for n, a in zip(df['number'],df['add_number'])] df.explode('range') Output: id id_2 number add_number range 0 A 1 320 4 320 0 A 1 320 4 321 0 A 1 320 4 322 0 A 1 320 4 323 0 A 1 320 4 324 1 A 2 169 6 169 1 A 2 169 6 170 1 A 2 169 6 171 1 A 2 169 6 172 1 A 2 169 6 173 1 A 2 169 6 174 1 A 2 169 6 175 2 B 1 120 3 120 2 B 1 120 3 121 2 B 1 120 3 122 2 B 1 120 3 123
6
2
70,927,513
2022-1-31
https://stackoverflow.com/questions/70927513/replacing-whole-string-is-faster-than-replacing-only-its-first-character
I tried to replace the character a with b in a given large string. I did an experiment - first I replaced it in the whole string, then I replaced it only at its beginning. import re # pattern = re.compile('a') pattern = re.compile('^a') string = 'x' * 100000 pattern.sub('b', string) I expected that replacing the beginning would have to be much faster than replacing the whole string because you have to check only 1 position instead of 100000. I did some measuring: python -m timeit --setup "import re; p=re.compile('a'); string='x'*100000" "p.sub('b', string)" 10000 loops, best of 3: 19.1 usec per loop python -m timeit --setup "import re; p=re.compile('^a'); string='x'*100000" "p.sub('b', string)" 1000 loops, best of 3: 613 usec per loop The results show that, on the contrary, trying to replace the whole string is about 30x faster. Would you expect such a result? Can you explain that?
The functions provided in the Python re module do not optimize based on anchors. In particular, functions that try to apply a regex at every position - .search, .sub, .findall etc. - will do so even when the regex can only possibly match at the beginning. I.e., even without multi-line mode specified, such that ^ can only match at the beginning of the string, the call is not re-routed internally. Thus: $ # .match only looks at the first position regardless $ python -m timeit --setup "import re; p=re.compile('a'); string='x'*100000" "p.match(string)" 2000000 loops, best of 5: 155 nsec per loop $ python -m timeit --setup "import re; p=re.compile('^a'); string='x'*100000" "p.match(string)" 2000000 loops, best of 5: 157 nsec per loop $ # .search looks at every position, even if there is an anchor $ python -m timeit --setup "import re; p=re.compile('a'); string='x'*100000" "p.search(string)" 10000 loops, best of 5: 22.4 usec per loop $ # and the anchor only adds complexity to the matching process $ python -m timeit --setup "import re; p=re.compile('^a'); string='x'*100000" "p.search(string)" 500 loops, best of 5: 746 usec per loop While re does not optimize for anchors, it does optimize for several other things that could occur at the start of a pattern. One of those optimizations is for a pattern starting with a single constant character: if (prefix_len == 1) { /* pattern starts with a literal character */ SRE_CHAR c = (SRE_CHAR) prefix[0]; #if SIZEOF_SRE_CHAR < 4 if ((SRE_CODE) c != prefix[0]) return 0; /* literal can't match: doesn't fit in char width */ #endif end = (SRE_CHAR *)state->end; state->must_advance = 0; while (ptr < end) { while (*ptr != c) { if (++ptr >= end) return 0; } ... This optimization performs a simple character comparison to skip candidate matches that don't start with the required character, instead of invoking the full match engine. This optimization is why the unanchored regex was so much faster - there are 3 separate optimizations like this in the code, one for a single constant character, one for a multi-character constant prefix, and one for a character class, but nothing for a ^ anchor. I think a reasonable case can be made to file a bug report against this - not having such an obvious optimization implemented clearly violates expectations. Aside from which, while it's easy to replace .search with an anchor using .match, it's not so straightforward to replace .sub with an anchor - you have to .match, check the result, and then call .replace on the string yourself. If you need to anchor to the end of the string and not the start, it gets much more difficult; I recall ancient Perl advice to try reversing the string first, but it's hard in general to write a pattern that matches the reverse of what you want.
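To illustrate that last point, here is a minimal sketch of an anchored substitution done with .match plus plain string handling instead of an anchored .sub (sub_prefix is a made-up helper name, and it uses slicing rather than str.replace):

import re

def sub_prefix(pattern, repl, string):
    # Apply the (unanchored) pattern only at the start of the string.
    m = pattern.match(string)
    if m is None:
        return string
    return repl + string[m.end():]

p = re.compile('a')
print(sub_prefix(p, 'b', 'abc'))  # 'bbc'
print(sub_prefix(p, 'b', 'xa'))   # 'xa' (no match at the start)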
11
4
70,931,002
2022-1-31
https://stackoverflow.com/questions/70931002/pandas-get-cell-value-by-row-index-and-column-name
Let's say we have a pandas dataframe: name age sal 0 Alex 20 100 1 Jane 15 200 2 John 25 300 3 Lsd 23 392 4 Mari 21 380 Let's say, a few rows are now deleted and we don't know the indexes that have been deleted. For example, we delete row index 1 using df.drop([1]). And now the data frame comes down to this: fname age sal 0 Alex 20 100 2 John 25 300 3 Lsd 23 392 4 Mari 21 380 I would like to get the value from row index 3 and column "age". It should return 23. How do I do that? df.iloc[3, df.columns.get_loc('age')] does not work because it will return 21. I guess iloc takes the consecutive row index?
Use .loc to get rows by label and .iloc to get rows by position: >>> df.loc[3, 'age'] 23 >>> df.iloc[2, df.columns.get_loc('age')] 23 More about Indexing and selecting data
15
25
70,927,544
2022-1-31
https://stackoverflow.com/questions/70927544/saving-a-pymupdf-fitz-object-to-s3-as-a-pdf
I am trying to crop a pdf and save it to s3 with same name using lambda. I am getting error on the data type being a fitz.fitz.page import os import json import boto3 from urllib.parse import unquote_plus import fitz, sys from io import BytesIO OUTPUT_BUCKET_NAME = os.environ["OUTPUT_BUCKET_NAME"] OUTPUT_S3_PREFIX = os.environ["OUTPUT_S3_PREFIX"] SNS_TOPIC_ARN = os.environ["SNS_TOPIC_ARN"] SNS_ROLE_ARN = os.environ["SNS_ROLE_ARN"] def lambda_handler(event, context): textract = boto3.client("textract") if event: file_obj = event["Records"][0] bucketname = str(file_obj["s3"]["bucket"]["name"]) filename = unquote_plus(str(file_obj["s3"]["object"]["key"])) doc = fitz.open() s3 = boto3.resource('s3') obj = s3.Object(bucketname, filename) fs = obj.get()['Body'].read() pdf=fitz.open("pdf", stream=BytesIO(fs)) #pdf.close() rect=fitz.Rect(0.0, 0.0, 595.0, 842.0) #page = pdf[0] page1 = doc.new_page(width = rect.width, # new page with ... height = rect.height) page1.show_pdf_page(rect, pdf, 0) print(type(doc)) print(type(page1)) s3.Bucket(bucketname).put_object(Key=filename, Body=page1)
This is happening because the page1 object is of type fitz.fitz.Page, while the type expected by the S3 put_object call is bytes. To solve the issue, you can use the write function of the new PDF (doc) to get its output as bytes, which you can then pass to S3. # Save the document as bytes first. new_bytes = doc.write() s3.Bucket(bucketname).put_object(Key=filename, Body=new_bytes)
5
2
70,923,969
2022-1-31
https://stackoverflow.com/questions/70923969/how-to-remove-the-user-agent-header-when-send-request-in-python
I'm using the Python requests library, and I need to send a request without a User-Agent header. I found this question, but it's for urllib2. I'm trying to simulate an Android app which does this when calling a private API. I tried to set User-Agent to None as in the following code, but it doesn't work; it still sends User-Agent: python-requests/2.27.1. Is there any way? headers = requests.utils.default_headers() headers['User-Agent'] = None requests.post(url, *args, headers=headers, **kwargs)
The requests library is built on top of the urllib3 library. So, when you pass a None User-Agent header to requests's post method, urllib3 sets its own default User-Agent import requests r = requests.post("https://httpbin.org/post", headers={ "User-Agent": None, }) print(r.json()["headers"]["User-Agent"]) Output python-urllib3/1.26.7 Here is the urllib3 source of connection.py class HTTPConnection(_HTTPConnection, object): ... def request(self, method, url, body=None, headers=None): if headers is None: headers = {} else: # Avoid modifying the headers passed into .request() headers = headers.copy() if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): headers["User-Agent"] = _get_default_user_agent() super(HTTPConnection, self).request(method, url, body=body, headers=headers) So, you can monkey-patch it to disable the default User-Agent header import requests from urllib3 import connection def request(self, method, url, body=None, headers=None): if headers is None: headers = {} else: # Avoid modifying the headers passed into .request() headers = headers.copy() super(connection.HTTPConnection, self).request(method, url, body=body, headers=headers) connection.HTTPConnection.request = request r = requests.post("https://httpbin.org/post", headers={ "User-Agent": None, }) print(r.json()["headers"]) Output { 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Content-Length': '0', 'Host': 'httpbin.org', 'X-Amzn-Trace-Id': 'Root=1-61f7b53b-26c4c8f6498c86a24ff05940' } Also, consider providing a browser-like User-Agent like this: Mozilla/5.0 (Macintosh; Intel Mac OS X 12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36. Maybe that solves your task with less effort
10
9
70,910,391
2022-1-29
https://stackoverflow.com/questions/70910391/seaborn-lineplot-connecting-dots-of-scatterplot
I have a problem with sns lineplot and scatterplot. Basically what I'm trying to do is connect the dots of a scatterplot with the closest line joining the mapped points. Somehow the lineplot changes width when facing points with the same x-axis values. I want the lineplot to be the same solid line all the way. The code: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline data = {'X': [13, 13, 13, 12, 11], 'Y':[14, 11, 13, 15, 20], 'NumberOfPlanets':[2, 5, 2, 1, 2]} cts = pd.DataFrame(data=data) plt.figure(figsize=(10,10)) sns.scatterplot(data=cts, x='X', y='Y', size='NumberOfPlanets', sizes=(50,500), legend=False) sns.lineplot(data=cts, x='X', y='Y',estimator='max', color='red') plt.show() The outcome: Any ideas? EDIT: If I try using pyplot it doesn't work either: Code: plt.plot(cts['X'], cts['Y']) Outcome: I need one line which connects the closest points (basically what is presented in image one, but with the same solid line).
Ok, I have finally figured it out. The reason the lineplot was so messy is that the data was not properly sorted. When I sorted the dataframe by 'Y' values, the outcome was satisfactory. data = {'X': [13, 13, 13, 12, 11], 'Y':[14, 11, 13, 15, 20], 'NumberOfPlanets':[2, 5, 2, 1, 2]} cts = pd.DataFrame(data=data) cts = cts.sort_values('Y') plt.figure(figsize=(10,10)) plt.scatter(cts['X'], cts['Y'], zorder=1) plt.plot(cts['X'], cts['Y'], zorder=2) plt.show() Now it works. I also tested it on other similar scatter points. Everything is fine :) Thanks!
5
1
70,916,649
2022-1-30
https://stackoverflow.com/questions/70916649/how-to-change-the-x-axis-and-y-axis-labels-in-plotly
How can I change the x- and y-axis labels in plotly? In matplotlib I can simply use plt.xlabel, but I am unable to do that in plotly. By using this code on a dataframe: Date = df[df.Country=="India"].Date New_cases = df[df.Country=="India"]['7day_rolling_avg'] px.line(df,x=Date, y=New_cases, title="India Daily New Covid Cases") I get this output: Here the x and y axes are labeled as X and Y. How can I change the names of the x and y axes to "Date" and "Cases"?
simple case of setting axis title update_layout( xaxis_title="Date", yaxis_title="7 day avg" ) full code as MWE import pandas as pd import io, requests df = pd.read_csv( io.StringIO( requests.get( "https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/vaccinations.csv" ).text ) ) df["Date"] = pd.to_datetime(df["date"]) df["Country"] = df["location"] df["7day_rolling_avg"] = df["daily_people_vaccinated_per_hundred"] Date = df[df.Country == "India"].Date New_cases = df[df.Country == "India"]["7day_rolling_avg"] px.line(df, x=Date, y=New_cases, title="India Daily New Covid Cases").update_layout( xaxis_title="Date", yaxis_title="7 day avg" )
35
54
70,905,872
2022-1-29
https://stackoverflow.com/questions/70905872/overflowerror-when-reading-from-s3-signed-integer-is-greater-than-maximum
Reading a large file from S3 ( >5GB) into lambda with the following code: import json import boto3 s3 = boto3.client('s3') def lambda_handler(event, context): response = s3.get_object( Bucket="my-bucket", Key="my-key" ) text_bytes = response['Body'].read() ... return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } However I get the following error: "errorMessage": "signed integer is greater than maximum" "errorType": "OverflowError" "stackTrace": [ " File \"/var/task/lambda_function.py\", line 13, in lambda_handler\n text_bytes = response['Body'].read()\n" " File \"/var/runtime/botocore/response.py\", line 77, in read\n chunk = self._raw_stream.read(amt)\n" " File \"/var/runtime/urllib3/response.py\", line 515, in read\n data = self._fp.read() if not fp_closed else b\"\"\n" " File \"/var/lang/lib/python3.8/http/client.py\", line 472, in read\n s = self._safe_read(self.length)\n" " File \"/var/lang/lib/python3.8/http/client.py\", line 613, in _safe_read\n data = self.fp.read(amt)\n" " File \"/var/lang/lib/python3.8/socket.py\", line 669, in readinto\n return self._sock.recv_into(b)\n" " File \"/var/lang/lib/python3.8/ssl.py\", line 1241, in recv_into\n return self.read(nbytes, buffer)\n" " File \"/var/lang/lib/python3.8/ssl.py\", line 1099, in read\n return self._sslobj.read(len, buffer)\n" ] I am using Python 3.8, and I found here an issue with Python 3.8/9 that might be why: https://bugs.python.org/issue42853 Is there any way around this?
As mentioned in the bug you linked to, the core issue in Python 3.8 is the bug with reading more than 1gb at a time. You can use a variant of the workaround suggested in the bug to read the file in chunks. import boto3 s3 = boto3.client('s3') def lambda_handler(event, context): response = s3.get_object( Bucket="-example-bucket-", Key="path/to/key.dat" ) buf = bytearray(response['ContentLength']) view = memoryview(buf) pos = 0 while True: chunk = response['Body'].read(67108864) if len(chunk) == 0: break view[pos:pos+len(chunk)] = chunk pos += len(chunk) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } At best, however, you're going to spend a minute or more of each Lambda run just reading from S3. It would be much better if you could store the file in EFS and read it from there in the Lambda, or use another solution like ECS to avoid reading from a remote data source.
5
7
70,858,169
2022-1-26
https://stackoverflow.com/questions/70858169/networkx-entropy-of-subgraphs-generated-from-detected-communities
I have 4 functions for some statistical calculations in complex networks analysis. import networkx as nx import numpy as np import math from astropy.io import fits Degree distribution of graph: def degree_distribution(G): vk = dict(G.degree()) vk = list(vk.values()) # we get only the degree values maxk = np.max(vk) mink = np.min(min) kvalues= np.arange(0,maxk+1) # possible values of k Pk = np.zeros(maxk+1) # P(k) for k in vk: Pk[k] = Pk[k] + 1 Pk = Pk/sum(Pk) # the sum of the elements of P(k) must to be equal to one return kvalues,Pk Community detection of graph: def calculate_community_modularity(graph): communities = greedy_modularity_communities(graph) # algorithm modularity_dict = {} # Create a blank dictionary for i,c in enumerate(communities): # Loop through the list of communities, keeping track of the number for the community for name in c: # Loop through each neuron in a community modularity_dict[name] = i # Create an entry in the dictionary for the neuron, where the value is which group they belong to. nx.set_node_attributes(graph, modularity_dict, 'modularity') print (graph_name) for i,c in enumerate(communities): # Loop through the list of communities #if len(c) > 2: # Filter out modularity classes with 2 or fewer nodes print('Class '+str(i)+':', len(c)) # Print out the classes and their member numbers return modularity_dict Modularity score of graph: def modularity_score(graph): return nx_comm.modularity(graph, nx_comm.label_propagation_communities(graph)) and finally graph Entropy: def shannon_entropy(G): k,Pk = degree_distribution(G) H = 0 for p in Pk: if(p > 0): H = H - p*math.log(p, 2) return H Question What I would like to achieve now is find local entropy for each community (turned into a subgraph), with preserved edges information. Is this possible? How so? Edit Matrix being used is in this link: dataset with fits.open('mind_dataset/matrix_CEREBELLUM_large.fits') as data: matrix = pd.DataFrame(data[0].data.byteswap().newbyteorder()) and then turn the adjacency matrix into a graph, 'graph', or 'G' like so: def matrix_to_graph(matrix): from_matrix = matrix.copy() to_numpy = from_matrix.to_numpy() G = nx.from_numpy_matrix(to_numpy) return G Edit 2 Based on the proposed answer below I have created another function: def community_entropy(modularity_dict): communities = {} #create communities as lists of nodes for node, community in modularity_dict.items(): if community not in communities.keys(): communities[community] = [node] else: communities[community].append(node) print(communities) #transform lists of nodes to actual subgraphs for subgraph, community in communities.items(): communities[community] = nx.Graph.subgraph(subgraph) local_entropy = {} for subgraph, community in communities.items(): local_entropy[community] = shannon_entropy(subgraph) return local_entropy and: cerebellum_graph = matrix_to_graph(matrix) modularity_dict_cereb = calculate_community_modularity(cerebellum_graph) community_entropy_cereb = community_entropy(modularity_dict_cereb) But it throws the error: TypeError: subgraph() missing 1 required positional argument: 'nodes'
Using the code I provided as an answer to your question here to create graphs from communities. You can first create different graphs for each of your communities (based on the community edge attribute of your graph). You can then compute the entropy for each community with your shannon_entropy and degree_distribution function. See code below based on the karate club example you provided in your other question referenced above: import networkx as nx import networkx.algorithms.community as nx_comm import matplotlib.pyplot as plt import numpy as np import math def degree_distribution(G): vk = dict(G.degree()) vk = list(vk.values()) # we get only the degree values maxk = np.max(vk) mink = np.min(min) kvalues= np.arange(0,maxk+1) # possible values of k Pk = np.zeros(maxk+1) # P(k) for k in vk: Pk[k] = Pk[k] + 1 Pk = Pk/sum(Pk) # the sum of the elements of P(k) must to be equal to one return kvalues,Pk def shannon_entropy(G): k,Pk = degree_distribution(G) H = 0 for p in Pk: if(p > 0): H = H - p*math.log(p, 2) return H G = nx.karate_club_graph() # Find the communities communities = sorted(nx_comm.greedy_modularity_communities(G), key=len, reverse=True) # Count the communities print(f"The club has {len(communities)} communities.") '''Add community to node attributes''' for c, v_c in enumerate(communities): for v in v_c: # Add 1 to save 0 for external edges G.nodes[v]['community'] = c + 1 '''Find internal edges and add their community to their attributes''' for v, w, in G.edges: if G.nodes[v]['community'] == G.nodes[w]['community']: # Internal edge, mark with community G.edges[v, w]['community'] = G.nodes[v]['community'] else: # External edge, mark as 0 G.edges[v, w]['community'] = 0 N_coms=len(communities) edges_coms=[]#edge list for each community coms_G=[nx.Graph() for _ in range(N_coms)] #community graphs colors=['tab:blue','tab:orange','tab:green'] fig=plt.figure(figsize=(12,5)) for i in range(N_coms): edges_coms.append([(u,v,d) for u,v,d in G.edges(data=True) if d['community'] == i+1])#identify edges of interest using the edge attribute coms_G[i].add_edges_from(edges_coms[i]) #add edges ent_coms=[shannon_entropy(coms_G[i]) for i in range(N_coms)] #Compute entropy for i in range(N_coms): plt.subplot(1,3,i+1)#plot communities plt.title('Community '+str(i+1)+ ', entropy: '+str(np.round(ent_coms[i],1))) pos=nx.circular_layout(coms_G[i]) nx.draw(coms_G[i],pos=pos,with_labels=True,node_color=colors[i]) And the output gives:
5
3
70,904,128
2022-1-29
https://stackoverflow.com/questions/70904128/print-a-hyperlink-in-the-terminal
I can use this special escape sequence to print a hyperlink in bash: echo -e '\e]8;;http://example.com\e\\This is a link\e]8;;\e\\\n' Result (Link I can click on): This is a link Now I want to generate this in Python: print('\e]8;;http://example.com\e\\This is a link\e]8;;\e\\\n') \e]8;;http://example.com\e\This is a link\e]8;;\e\ print(r'\e]8;;http://example.com\e\\This is a link\e]8;;\e\\\n') \e]8;;http://example.com\e\\This is a link\e]8;;\e\\\n As you can see, the escape sequence is not interpreted by the shell. Other escape sequences, like the one for bold text, work: print('\033[1mYOUR_STRING\033[0m') YOUR_STRING # <- is actually bold How can I get Python to format the URL correctly?
From This answer, after some tries: print('\x1b]8;;' + 'http://example.com' + '\x1b\\' + 'This is a link' + '\x1b]8;;\x1b\\\n' ) Then better: print( '\x1b]8;;%s\x1b\\%s\x1b]8;;\x1b\\' % ( 'http://example.com' , 'This is a link' ) )
6
3
70,896,932
2022-1-28
https://stackoverflow.com/questions/70896932/simultaneously-reassign-values-of-two-variables-in-c
Is there a way in C++ to emulate this Python syntax: a,b = b,(a+b)? I understand this is trivially possible with a temporary variable, but I am curious whether it is possible without using one.
You can use the standard C++ function std::exchange like #include <utility> //... a = std::exchange( b, a + b ); Here is a demonstration program #include <iostream> #include <utility> int main() { int a = 1; int b = 2; std::cout << "a = " << a << '\n'; std::cout << "b = " << b << '\n'; a = std::exchange( b, a + b ); std::cout << "a = " << a << '\n'; std::cout << "b = " << b << '\n'; } The program output is a = 1 b = 2 a = 2 b = 3 You can use this approach in a function that calculates Fibonacci numbers.
4
8
70,895,037
2022-1-28
https://stackoverflow.com/questions/70895037/how-many-times-can-a-list-be-split-in-a-way-that-every-element-on-the-left-is-sm
For example if the list is: [2,1,2,5,7,6,9] there's 3 possible ways of splitting: [2,1,2] [5,7,6,9] [2,1,2,5] [7,6,9] [2,1,2,5,7,6] [9] I'm supposed to calculate how many times the list can be split in a way that every element on the left is smaller than every element on the right. So with this list, the output would be 3. Here's my current solution: def count(t): c= 0 for i in range(len(t)): try: if max(t[:i]) < min(t[i:]): c+=1 except: continue return c The above code does the right thing, but it's not of O(n) time complexity. How could I achieve the same result, but faster?
Here's my final answer: def count(t): c = 0 maxx = max(t) right = [0]*len(t) left = [0]*len(t) maxx = t[0] for i in range(0, len(t)): if maxx >= t[i]: left[i] = maxx if maxx < t[i]: maxx = t[i] left[i] = maxx minn = t[-1] for i in range(len(t)-1,-1,-1): if minn <= t[i]: right[i] = minn if minn > t[i]: minn = t[i] right[i] = minn for i in range(0, len(t)-1): if left[i] < right[i+1] : c += 1 return c
5
1
70,894,409
2022-1-28
https://stackoverflow.com/questions/70894409/pyspark-get-element-from-array-column-of-struct-based-on-condition
I have a spark df with the following schema: |-- col1 : string |-- col2 : string |-- customer: struct | |-- smt: string | |-- attributes: array (nullable = true) | | |-- element: struct | | | |-- key: string | | | |-- value: string df: #+-------+-------+---------------------------------------------------------------------------+ #|col1 |col2 |customer | #+-------+-------+---------------------------------------------------------------------------+ #|col1_XX|col2_XX|"attributes":[[{"key": "A", "value": "123"},{"key": "B", "value": "456"}] | #+-------+-------+---------------------------------------------------------------------------+ and the json input for the array look like this: ... "attributes": [ { "key": "A", "value": "123" }, { "key": "B", "value": "456" } ], I would like to loop attributes array and get the element with key="B" and then select the corresponding value. I don't want to use explode because I would like to avoid join dataframes. Is it possible to perform this kind of operation directly using spark 'Column' ? Expected output will be: #+-------+-------+-----+ #|col1 |col2 |B | | #+-------+-------+-----+ #|col1_XX|col2_XX|456 | #+-------+-------+-----+ any help would be appreciated
You can use filter function to filter the array of structs then get value: from pyspark.sql import functions as F df2 = df.withColumn( "B", F.expr("filter(customer.attributes, x -> x.key = 'B')")[0]["value"] )
8
10
70,891,225
2022-1-28
https://stackoverflow.com/questions/70891225/how-to-get-outer-html-from-python-playwright-locator-object
I could not find any method that returns the outer HTML from Python Playwright's page.locator(selector, **kwargs). Am I missing something? locator.inner_html(**kwargs) does exist. However, I am trying to use pandas.read_html and it fails on the table locator's inner HTML because it strips the table tag. What I'm currently doing is using bs4 to parse page.content(), something like: soup = BeautifulSoup(page.content(), 'lxml') df = pd.read_html(str(soup.select('table.selector')))
There is no outer_html out of the box. But it's not hard to implement it: locator.evaluate("el => el.outerHTML")
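Applied to the pandas use case from the question, the outer HTML can then be fed straight to read_html — a minimal sketch, where "table.selector" is just the placeholder selector from the question:

html = page.locator("table.selector").evaluate("el => el.outerHTML")
df = pd.read_html(html)[0]  # read_html returns a list of DataFrames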
7
10
70,892,143
2022-1-28
https://stackoverflow.com/questions/70892143/psycopg2-connection-sql-database-to-pandas-dataframe
I am working on a project where I am using a psycopg2 connection to fetch data from the database like this: cursor = connection.execute("select * from table") cursor.fetchall() After getting the data from the table, I am running some extra operations to convert the data from the cursor to a pandas dataframe. I am looking for some library or some more robust way to convert the data to a pandas dataframe from a psycopg2 connection. Any help or guidance will be appreciated. Thanks
You can use the pandas sqlio module to run a query and store the result in a pandas dataframe. Let's say you have a psycopg2 connection; then you can use pandas sqlio like this: import pandas.io.sql as sqlio data = sqlio.read_sql_query("SELECT * FROM table", connection) # Now data is a pandas dataframe holding the results of the above query. data.head() For me, the pandas sqlio module works fine. Please have a look at it and let me know if this is what you are looking for.
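The same functionality is also exposed through the public top-level pandas API — a minimal sketch using the connection object from the question (note that the pandas docs recommend an SQLAlchemy engine for non-sqlite databases, although a raw psycopg2 connection generally works too):

import pandas as pd

# data is a DataFrame with the query results
data = pd.read_sql_query("SELECT * FROM table", connection)
data.head()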
6
10
70,887,626
2022-1-28
https://stackoverflow.com/questions/70887626/python-type-hinting-a-classmethod-that-returns-an-instance-of-the-class-for-a
Consider the following: from __future__ import annotations class A: def __init__(self): print("A") self.hello = "hello" # how do I type this so that the return type is A for A.bobo() # and B for B.bobo()? @classmethod def bobo(cls) -> UnknownType: return cls() class B(A): def __init__(self): print("B") super().__init__() self.world = "world" instance_of_B = B.bobo() # prints "B", then "A", and returns an instance of B I want to type-hint the bobo class method, so that mypy can know that in the case of B's bobo method, it's not just an instance of A that's returned, but actually an instance of B. I'm really unclear on how to do that, or if it's even possible. I thought that something like Type[cls] might do the trick, but I'm not sure if that even makes syntactic sense to mypy.
You will have to use a TypeVar, thankfully, in Python 3.11 the typing.Self type is coming out. This PEP describes it in detail. It also specifies how to use the TypeVar until then.
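A minimal sketch of the TypeVar approach for the classes in the question (before Python 3.11), annotating cls so that B.bobo() is inferred as B:

from typing import Type, TypeVar

T = TypeVar("T", bound="A")

class A:
    def __init__(self):
        self.hello = "hello"

    @classmethod
    def bobo(cls: Type[T]) -> T:
        return cls()

class B(A):
    def __init__(self):
        super().__init__()
        self.world = "world"

instance_of_B = B.bobo()  # mypy now infers this as B, not A

On 3.11+ (or with typing_extensions), from typing import Self and a plain -> Self return annotation replaces the TypeVar.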
14
6
70,863,543
2022-1-26
https://stackoverflow.com/questions/70863543/can-a-python-docstring-be-calculated-f-string-or-expression
Is it possible to have a Python docstring calculated? I have a lot of repetitive things in my docstrings, so I'd like to either use f-strings or a %-style format expression. When I use an f-string in place of a docstring, importing the module invokes the processing, but when I check the __doc__ of such a function it is empty, and sphinx barfs when the docstring is an f-string. I do know how to process the docstrings after the import, but that doesn't work for object 'doc' strings, which are recognized by sphinx but are not real __doc__ attributes of the object.
Docstrings in Python must be regular string literals. This is pretty easy to test - the following program does not show the docstring: BAR = "Hello world!" def foo(): f"""This is {BAR}""" pass assert foo.__doc__ is None help(foo) The Python syntax docs say that the docstring must be a "string literal", and the tail end of the f-string reference says they "cannot be used as docstrings". So unfortunately you must use the __doc__ attribute. However, you should be able to use a decorator to read the __doc__ attribute and replace it with whatever you want.
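A minimal sketch of that decorator idea, using str.format-style placeholders (the docfmt name and the BAR value are made up for this example):

BAR = "Hello world!"

def docfmt(**values):
    """Fill in {placeholders} in the wrapped function's docstring."""
    def wrap(func):
        if func.__doc__:
            func.__doc__ = func.__doc__.format(**values)
        return func
    return wrap

@docfmt(bar=BAR)
def foo():
    """This is {bar}"""

print(foo.__doc__)  # This is Hello world!
help(foo)           # shows the formatted docstring

Since sphinx autodoc imports the module and reads __doc__, docstrings rewritten this way should also show up in the generated docs.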
30
28
70,882,733
2022-1-27
https://stackoverflow.com/questions/70882733/how-to-display-two-decimal-points-in-python-when-a-number-is-perfectly-divisibl
Currently I am trying to solve a problem where I am supposed to print the answer up to two decimal places without rounding off. I have used the code below for this purpose: import math a=1.175 #value of a after some division print(math.floor(a*100)/100) The output we get is: 1.17 #Notice the value has two decimal places & is not rounded But the real problem starts when I try to print a number which is evenly divisible; after the decimal point only one zero is displayed. I have used the same code as above, but now a=25/5 #Now a is perfectly divisible print(math.floor(a*100)/100) The output displayed now is 5.0 #Notice only one decimal place is printed What must be done to rectify this bug?
The division works and returns adequate precision in result. So your problem is just about visualization or exactly: string-representation of floating-point numbers Formatting a decimal You can use string-formatting for that. For example in Python 3, use f-strings: twoFractionDigits = f"{result:.2f}" or print(f"{result:.2f}") The trick does .2f, a string formatting literal or format specifier that represents a floating-point number (f) with two fractional digits after decimal-point (.2). See also: Fixed digits after decimal with f-strings How to format a floating number to fixed width in Python Try on the Python-shell: Python 3.6.9 (default, Dec 8 2021, 21:08:43) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import math >>> a=1.175 #value of a after some division >>> result = math.floor(a*100)/100 >>> result 1.17 >>> print(result) 1.17 >>> a=25/5 #Now a is perfectly divisible >>> result = math.floor(a*100)/100 >>> result 5.0 >>> print(result) 5.0 >>> print(f"{result:.2f}") 5.00 Formatting a decimal as percentage Similar you can represent the ratio as percentage: print(f"{result:.2f} %") prints: 5.00 % A formatting shortcut for percentage can be: print(f"{25/100:.2%}") Which converts the result of 25/100 == 0.25 to: 25.00% Note: The formatting-literal .2% automatically converts from ratio to percentage with 2 digits after the decimal-point and adds the percent-symbol. Formatting a decimal with specific scale (rounded or truncated ?) Now the part without rounding-off, just truncation. As example we can use the repeating decimal, e.g. 1/6 which needs to be either rounded or truncated (cut-off) after a fixed number of fractional digits - the scale (in contrast to precision). >>> print(f"{1/6:.2}") 0.17 >>> print(f"{1/6:.2%}") 16.67% Note how the formatted string is not truncated (to 0.16) but rounded (to 0.17). Here the scale was specified inside formatting-literal as 2 (after the dot). See also: Truncate to three decimals in Python How do I interpret precision and scale of a number in a database? What is the difference between precision and scale? Formatting many decimals in fixed width (leading spaces) Another example is to print multiple decimals, like in a column as right-aligned, so you can easily compare them. Then use string-formatting literal 6.2f to add leading spaces (here a fixed-width of 6): >>> print(f"{result:6.2f}") 5.00 >>> print(f"{100/25*100:6.2f}") 400.00 >>> print(f"{25/100*100:6.2f}") 25.00 See also All the formatting-literals demonstrated here can also be applied using old-style %-formatting (also known as "Modulo string formatting") which was inherited from printf method of C language. Benefit: This way is also compatible with Python before 3.6). new-style .format method on strings (introduced with Python 3) See theherk's answer which demonstrates those alternatives. Learn more about string-formatting in Python: Real Python: Python 3's f-Strings: An Improved String Formatting Syntax (Guide) Real Python: Python String Formatting Best Practices
5
17
70,882,944
2022-1-27
https://stackoverflow.com/questions/70882944/no-such-file-or-directory-dev-fd-11-during-pytest-collection-in-docker
I have a simple Dockerfile with Python and NodeJS. I install pytest, a local library and run tests: FROM nikolaik/python-nodejs:latest ADD . . RUN pip3 install --upgrade pip RUN pip3 install -e . RUN pip3 install pytest CMD ["pytest"] However, pytest collection fails: ============================= test session starts ============================== platform linux -- Python 3.10.2, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: / collected 0 items / 1 error ==================================== ERRORS ==================================== ________________________ ERROR collecting test session _________________________ usr/local/lib/python3.10/site-packages/_pytest/runner.py:311: in from_call result: Optional[TResult] = func() usr/local/lib/python3.10/site-packages/_pytest/runner.py:341: in <lambda> call = CallInfo.from_call(lambda: list(collector.collect()), "collect") usr/local/lib/python3.10/site-packages/_pytest/main.py:690: in collect for direntry in visit(str(argpath), self._recurse): usr/local/lib/python3.10/site-packages/_pytest/pathlib.py:606: in visit yield from visit(entry.path, recurse) usr/local/lib/python3.10/site-packages/_pytest/pathlib.py:606: in visit yield from visit(entry.path, recurse) usr/local/lib/python3.10/site-packages/_pytest/pathlib.py:606: in visit yield from visit(entry.path, recurse) usr/local/lib/python3.10/site-packages/_pytest/pathlib.py:591: in visit for entry in os.scandir(path): E FileNotFoundError: [Errno 2] No such file or directory: '/dev/fd/11' =========================== short test summary info ============================ ERROR - FileNotFoundError: [Errno 2] No such file or directory: '/dev/fd/11' !!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!! =============================== 1 error in 0.25s ===============================
I found a solution on pytest GitHub: https://github.com/pytest-dev/pytest/issues/8960 your working directory is / so pytest is attempting to recurse through everything in the filesystem (probably not what you want!) Added WORKDIR /tests/ to the Dockerfile and the issue is fixed.
7
9
70,864,604
2022-1-26
https://stackoverflow.com/questions/70864604/websockets-exceptions-connectionclosedok-code-1000-ok-no-reason
I am trying to receive data from a website which uses websockets. It acts like this: websocket handshaking Here is the code to catch the data: async def hello(symb_id: int): async with websockets.connect("wss://ws.bitpin.ir/", extra_headers = request_header, timeout=15) as websocket: await websocket.send('{"method":"sub_to_price_info"}') recv_msg = await websocket.recv() if recv_msg == '{"message": "sub to price info"}': await websocket.send(json.dumps({"method":"sub_to_market","id":symb_id})) recv_msg = await websocket.recv() counter = 1 while(1): msg = await websocket.recv() print(counter, msg[:100], end='\n\n') counter+=1 asyncio.run(hello(1)) After receiving about 100 messages, I am facing this error: websockets.exceptions.ConnectionClosedOK: code = 1000 (OK), no reason I tried to set the timeout and request headers, but these were not helpful.
I solved this error by creating a task for sending PING. async def hello(symb_id: int): async with websockets.connect("wss://ws.bitpin.ir/", extra_headers = request_header, timeout=10, ping_interval=None) as websocket: await websocket.send('{"method":"sub_to_price_info"}') recv_msg = await websocket.recv() if recv_msg == '{"message": "sub to price info"}': await websocket.send(json.dumps({"method":"sub_to_market","id":symb_id})) recv_msg = await websocket.recv() counter = 1 task = asyncio.create_task(ping(websocket)) while True: msg = await websocket.recv() await return_func(msg) print(counter, msg[:100], end='\n\n') counter+=1 async def ping(websocket): while True: await websocket.send('{"message":"PING"}') print('------ ping') await asyncio.sleep(5)
7
5
70,876,394
2022-1-27
https://stackoverflow.com/questions/70876394/async-generator-object-is-not-iterable
I need to return a value in an async function. I tried to use the synchronous form of return: import asyncio async def main(): for i in range(10): return i await asyncio.sleep(1) print(asyncio.run(main())) output: 0 [Finished in 204ms] But it just returns the value of the first loop iteration, which is not expected. So I changed the code as below: import asyncio async def main(): for i in range(10): yield i await asyncio.sleep(1) for _ in main(): print(_) output: TypeError: 'async_generator' object is not iterable By using an async generator I am facing this error. How can I return a value for every loop iteration of the async function? Thanks
You need to use an async for which itself needs to be inside an async function: async def get_result(): async for i in main(): print(i) asyncio.run(get_result())
17
28
70,865,732
2022-1-26
https://stackoverflow.com/questions/70865732/faster-numpy-isin-alternative-for-strings-using-numba
I'm trying to implement a faster version of the np.isin in numba, this is what I have so far: import numpy as np import numba as nb @nb.njit(parallel=True) def isin(a, b): out=np.empty(a.shape[0], dtype=nb.boolean) b = set(b) for i in nb.prange(a.shape[0]): if a[i] in b: out[i]=True else: out[i]=False return out For numbers it works, as seen in this example: a = np.array([1,2,3,4]) b = np.array([2,4]) isin(a,b) >>> array([False, True, False, True]) And it's faster than np.isin: a = np.random.rand(20000) b = np.random.rand(5000) %time isin(a,b) CPU times: user 3.96 ms, sys: 0 ns, total: 3.96 ms Wall time: 1.05 ms %time np.isin(a,b) CPU times: user 11 ms, sys: 0 ns, total: 11 ms Wall time: 8.48 ms However, I would like to use this function with arrays containing strings. The problem is that whenever I try to pass an array of strings, numba complains that it cannot interpret the set() operation with these data. a = np.array(['A','B','C','D']) b = np.array(['B','D']) isin(a,b) TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<class 'set'>) found for signature: >>> set(array([unichr x 1], 1d, C)) There are 2 candidate implementations: - Of which 2 did not match due to: Overload of function 'set': File: numba/core/typing/setdecl.py: Line 20. With argument(s): '(array([unichr x 1], 1d, C))': No match. During: resolving callee type: Function(<class 'set'>) During: typing of call at /tmp/ipykernel_20582/4221597969.py (7) File "../../../../tmp/ipykernel_20582/4221597969.py", line 7: <source missing, REPL/exec in use?> Is there a way, like specifying a signature, that will allow me to use this directly on arrays of strings? I know I could assign a numerical value to each string, but for large arrays I think this will take a while and will make the whole thing slower than just using np.isin. Any ideas?
Strings are barely supported by Numba (like bytes although the support is slightly better). Set and dictionary are supported with some strict restriction and are quite experimental/new. Sets of strings are not supported yet regarding the documentation: Sets must be strictly homogeneous: Numba will reject any set containing objects of different types, even if the types are compatible (for example, {1, 2.5} is rejected as it contains a int and a float). The use of reference counted types, e.g. strings, in sets is unsupported. You can try to cheat using a binary search. Unfortunately, np.searchsorted is not yet implemented for string-typed Numpy array (although np.unique is). I think you can implement a binary search yourself but this is cumbersome to do and in the end. I am not sure this will be faster in the end, but I think it should because of the O(Ns Na log Nb)) running time complexity (with Ns the average size of string length of b unique items, Na the number of items in a and Nb the number of unique items in b). Indeed, the running time complexity of np.isin is O(Ns (Na+Nb) log (Na+Nb)) if arrays are about similar sizes and O(Ns Na Nb) if Nb is much smaller than Na. Note that the best theoretical running time complexity is AFAIK O(Ns (Na + Nb)) thanks to a hash table with a good hash function (Tries can also achieve this but they should be practically slower unless the hash function is not great). Note that typed dictionaries support statically declared string but not dynamic ones (and this is an experimental feature for static ones). Another cheat (that should work) is to store string hashes as the key of a typed dictionary and associate each hash to an array of indices referencing the string locations in b having the hash of the associated key. The parallel loop needs to hash a items and fetch the location of strings item in b having this hash so you can then compare the strings. A faster implementation would be to assume that the hash function for b strings is perfect and that there is no collision so you can directly use a TypedDict[np.int64, np.int64] hash table. You can test this hypothesis at runtime when you build b. Writing such a code is a bit tedious. Note that this implementation may not be faster than Numpy in the end in sequential because Numba TypedDicts are currently pretty slow... However, this should be faster in parallel on processors with enough cores.
5
2
70,865,699
2022-1-26
https://stackoverflow.com/questions/70865699/what-is-different-between-dataloader-and-dataloader2-in-pytorch
I developed a custom dataset by using the PyTorch dataset class. The code is like that: class CustomDataset(torch.utils.data.Dataset): def __init__(self, root_path, transform=None): self.path = root_path self.mean = mean self.std = std self.transform = transform self.images = [] self.masks = [] for add in os.listdir(self.path): # Some script to load file from directory and appending address to relative array ... self.masks.sort() self.images.sort() def __len__(self): return len(self.images) def __getitem__(self, item): image_address = self.images[item] mask_address = self.masks[item] if self.transform is not None: augment = self.transform(image=np.asarray(Image.open(image_address, 'r', None)), mask=np.asarray(Image.open(mask_address, 'r', None))) image = Image.fromarray(augment['image']) mask = augment['mask'] if self.transform is None: image = np.asarray(Image.open(image_address, 'r', None)) mask = np.asarray(Image.open(mask_address, 'r', None)) # Handle Augmentation here return image, mask Then I created an object from this class and passed it to torch.utils.data.DataLoader. Although this works well with DataLoader but with torch.utils.data.DataLoader2 I got a problem. The error is this: dataloader = torch.utils.data.DataLoader2(dataset=dataset, batch_size=2, pin_memory=True, num_workers=4) Exception: thread parallelism mode is not supported for old DataSets My question is why DataLoader2 module was added to PyTorch what is different with DataLoader and what are its benefits? PyTorch Version: 1.10.1
You should definitely not use DataLoader2. torch.utils.data.DataLoader2 (actually torch.utils.data.dataloader_experimental.DataLoader2) was added as an experimental "feature" and as a future replacement for DataLoader. It is defined here. Currently, it is only accessible on the master branch (unstable) and is, of course, not documented on the official pages.
6
3
70,864,887
2022-1-26
https://stackoverflow.com/questions/70864887/how-to-create-batches-using-pytorch-dataloader-such-that-each-example-in-a-given
Suppose I have a list, datalist which contains several examples (which are of type torch_geometric.data.Data for my use case). Each example has an attribute num_nodes For demo purpose, such datalist can be created using the following snippet of code import torch from torch_geometric.data import Data # each example is of this type import networkx as nx # for creating random data import numpy as np # the python list containing the examples datalist = [] for num_node in [9, 11]: for _ in range(1024): edge_index = torch.from_numpy( np.array(nx.fast_gnp_random_graph(num_node, 0.5).edges()) ).t().contiguous() datalist.append( Data( x=torch.rand(num_node, 5), edge_index=edge_index, edge_attr=torch.rand(edge_index.size(1)) ) ) From the above datalist object, I can create a torch_geometric.loader.DataLoader (which subclasses torch.utils.data.DataLoader) naively (without any constraints) by using the DataLoader constructor as: from torch_geometric.loader import DataLoader dataloader = DataLoader( datalist, batch_size=128, shuffle=True ) My question is, how can I use the DataLoader class to ensure that each example in a given batch has the same value for num_nodes attribute? PS: I tried to solve it and came up with a hacky solution by combining multiple DataLoader objects using the combine_iterators function snippet from here as follows: def get_combined_iterator(*iterables): nexts = [iter(iterable).__next__ for iterable in iterables] while nexts: next = random.choice(nexts) try: yield next() except StopIteration: nexts.remove(next) datalists = defaultdict(list) for data in datalist: datalists[data.num_nodes].append(data) dataloaders = ( DataLoader(data, batch_size=128, shuffle=True) for data in datalists.values() ) batches = get_combined_iterator(*dataloaders) But, I think that there must be some elegant/better method of doing it, hence this question.
If your underlying dataset is map-style, you can define a torch.utils.data.Sampler which returns the indices of the examples you want to batch together. An instance of this is passed as the batch_sampler kwarg to your DataLoader, and you can remove the batch_size kwarg, as the sampler will form batches for you depending on how you implement it.
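A rough, untested sketch of such a batch sampler for the datalist in the question (the grouping key num_nodes and the batch size of 128 come from the question; the shuffling behaviour and class name are assumptions, and depending on your torch_geometric version you may need to check that batch_sampler is forwarded cleanly to the underlying torch.utils.data.DataLoader):

import random
from collections import defaultdict
from torch.utils.data import Sampler

class SameNumNodesBatchSampler(Sampler):
    # hypothetical sampler: yields lists of indices whose examples share num_nodes
    def __init__(self, datalist, batch_size, shuffle=True):
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.groups = defaultdict(list)
        for idx, data in enumerate(datalist):
            self.groups[data.num_nodes].append(idx)

    def __iter__(self):
        batches = []
        for group in self.groups.values():
            indices = list(group)
            if self.shuffle:
                random.shuffle(indices)
            # split each same-num_nodes group into batches; the last (smaller) batch is kept
            batches += [indices[i:i + self.batch_size]
                        for i in range(0, len(indices), self.batch_size)]
        if self.shuffle:
            random.shuffle(batches)
        return iter(batches)

    def __len__(self):
        return sum(-(-len(g) // self.batch_size) for g in self.groups.values())

dataloader = DataLoader(datalist, batch_sampler=SameNumNodesBatchSampler(datalist, batch_size=128))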
5
3
70,862,692
2022-1-26
https://stackoverflow.com/questions/70862692/how-to-use-pythons-structural-pattern-matching-to-test-built-in-types
I'm trying to use SPM to determine if a certain type is an int or an str. The following code: from typing import Type def main(type_to_match: Type): match type_to_match: case str(): print("This is a String") case int(): print("This is an Int") case _: print("\nhttps://en.meming.world/images/en/0/03/I%27ve_Never_Met_This_Man_In_My_Life.jpg") if __name__ == "__main__": test_type = str main(test_type) returns https://en.meming.world/images/en/0/03/I%27ve_Never_Met_This_Man_In_My_Life.jpg Most of the documentation I found talks about how to test if a certain variable is an instance of a type. But not how to test if a type is of a certain type. Any ideas on how to make it work?
If you just pass a type directly, it will consider it to be a "name capture" rather than a "value capture." You can coerce it to use a value capture by importing the builtins module, and using a dotted notation to check for the type. import builtins from typing import Type def main(type_: Type): match (type_): case builtins.str: # it works with the dotted notation print(f"{type_} is a String") case builtins.int: print(f"{type_} is an Int") case _: print("\nhttps://en.meming.world/images/en/0/03/I%27ve_Never_Met_This_Man_In_My_Life.jpg") if __name__ == "__main__": main(type("hello")) # <class 'str'> is a String main(str) # <class 'str'> is a String main(type(42)) # <class 'int'> is an Int main(int) # <class 'int'> is an Int
11
22
70,863,757
2022-1-26
https://stackoverflow.com/questions/70863757/python-send-message-to-specific-telegram-user
I would like to send a message to a specific Telegram user. So I created a bot called Rapid1898Bot and got the API key for it. I also sent a message to the bot and retrieved the chat id with https://api.telegram.org/bot<Bot_token>/getUpdates. With the following code, sending a message to the bot works fine:

import os
from dotenv import load_dotenv, find_dotenv
import requests

load_dotenv(find_dotenv())
TELEGRAM_API = os.environ.get("TELEGRAM_API")
# CHAT_ID = os.environ.get("CHAT_ID_Rapid1898Bot")
CHAT_ID = os.environ.get("CHAT_ID_Rapid1898")
print(CHAT_ID)

botMessage = "This is a test message from python!"
sendText = f"https://api.telegram.org/bot{TELEGRAM_API}/sendMessage?chat_id={CHAT_ID}&parse_mode=Markdown&text={botMessage}"
response = requests.get(sendText)
print(response.json())

But now I also want to send a message to a specific Telegram user. According to this explanation: How can I send a message to someone with my telegram bot using their Username, I was expecting to a) send a message from e.g. my Telegram account to the bot and b) then open https://api.telegram.org/bot<Bot_token>/getUpdates. But unfortunately it seems that it is always the same chat id, with which I can send messages to Rapid1898Bot but NOT to MY Telegram account. Why is this not working, and why am I always getting the same chat id?
You already have a bot and its token. After that, you need to get the chat_id:

Write a message in the chat.
Visit https://api.telegram.org/bot<YourBOTToken>/getUpdates and get the chat_id under the key message['chat']['id'].

import requests

def telegram_bot_sendtext(bot_message):
    bot_token = ''
    bot_chatID = ''
    send_text = 'https://api.telegram.org/bot' + bot_token + '/sendMessage?chat_id=' + bot_chatID + '&parse_mode=Markdown&text=' + bot_message
    response = requests.get(send_text)
    return response.json()

test = telegram_bot_sendtext("Testing Telegram bot")
print(test)

More info: https://medium.com/@ManHay_Hong/how-to-create-a-telegram-bot-and-send-messages-with-python-4cf314d9fa3e
6
4
70,862,614
2022-1-26
https://stackoverflow.com/questions/70862614/how-does-python-dict-comprehension-work-with-lambda-functions-inside
My goal is to aggregate a pandas DataFrameGroupBy Object using the agg function. In order to do that, I am generating a dictionary that I'm going to unpack to kwargs using dict unpacking through **dict. This dictionary is required to contain the new column name as the key and a tuple as the value. The first value of the tuple is the column name that gets squeezed to a series and given to the second value as the input of lambda series: .... agg_dict = { f"{cat_name}_count": ('movement_state', lambda series: series.value_counts()[cat_name]) for cat_name in ml_data['category_column'].cat.categories } # Aggregating agg_ml_data = ml_data.groupby(['col1', 'col2']).agg(**agg_dict) What actually happens now is kinda weird for me. Assuming: ml_data['category_column'].cat.categories Index(['cat1', 'cat2', 'cat3'], dtype='object') The correct value counts for one group are one_group['category_column'].value_counts() | category_column cat1 | 2 cat2 | 9 cat3 | 6 Expected output for one group: cat1_count cat2_count cat3_count 2 9 6 Actual output for one group cat1_count cat2_count cat3_count 6 6 6 Somehow, python executes the dict comprehension for the lambda function not as expected and uses just the last category value cat3 when indexing series.value_counts()[cat_name]. I would expect, that the lambda functions are created as the dictionary itself is. Any idea on how to resolve that problem?
This is a classic Python trap. When you use a free variable (cat_name, in this case) in a lambda expression, the lambda captures which variable the name refers to, not the value of that variable. So in this case, the lambda "remembers" that cat_name was "the loop variable of that dict comprehension". When the lambda is called, it looks up the value of "the loop variable of that dict comprehension", which now, since the dict comprehension has finished, remains at the last value of the list. The usual way of working around this is to use a default argument to "freeze" the value, something like lambda series, cat=cat_name: series.blah[cat] effectively using one trap (Python computing default arguments at function definition time) to climb out of another. :-)
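Applied to the dict comprehension from the question, the fix might look like this (a sketch reusing the column names from the question):

agg_dict = {
    f"{cat_name}_count": (
        'movement_state',
        # bind the current cat_name as a default argument so each lambda keeps its own value
        lambda series, cat=cat_name: series.value_counts()[cat]
    )
    for cat_name in ml_data['category_column'].cat.categories
}

agg_ml_data = ml_data.groupby(['col1', 'col2']).agg(**agg_dict)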
5
7
70,860,798
2022-1-26
https://stackoverflow.com/questions/70860798/how-can-i-reach-a-spark-cluster-in-a-docker-container-with-spark-submit-and-a-py
I've created a Spark cluster with one master and two slaves, each one on a Docker container. I launch it with the command start-all.sh. I can reach the UI from my local machine at localhost:8080 and it shows me that the cluster is well launched : Screenshot of Spark UI Then I try to submit a simple Python script from my host machine (not from the Docker container) with this command spark-submit : spark-submit --master spark://spark-master:7077 test.py test.py : import pyspark conf = pyspark.SparkConf().setAppName('MyApp').setMaster('spark://spark-master:7077') sc = pyspark.SparkContext(conf=conf) But the console returned me this error : 22/01/26 09:20:39 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077... 22/01/26 09:20:40 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-master:7077 org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101) at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:109) at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.io.IOException: Failed to connect to spark-master:7077 at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245) at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187) at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198) at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194) at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190) ... 
4 more Caused by: java.net.UnknownHostException: spark-master at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302) at java.base/java.net.InetAddress.getByName(InetAddress.java:1252) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143) at java.base/java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143) at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32) at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108) at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:202) at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:48) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:182) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:168) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:985) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:505) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:416) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:475) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518) at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more I also try with a simple scala script, just to try to reach the cluster but I've had the same error. Do you have any idea how can I reach my cluster with a python script? (Edit) I forgot to specify i've created a docker network between my master and my slaves. So with the help of MrTshoot and Gaarv, i replace spark-master (in spark://spark-master:7077) by the ip of my master container (you can get it with the command docker network inspect my-network). And it's work! Thanks!
When you specify .setMaster('spark://spark-master:7077') it means "reach the Spark cluster at DNS address "spark-master" and port 7077", which your local machine cannot resolve. So in order for your host machine to reach the cluster, you must instead specify the Docker DNS / IP address of your Spark cluster: check the "docker0" interface on your local machine and replace "spark-master" with it.
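As the question's edit notes, in practice this can look like the following (the network name my-network comes from the question; the IP address is a placeholder for whatever docker network inspect reports on your machine):

# on the host: find the master container's address
# docker network inspect my-network   ->  e.g. "IPv4Address": "172.18.0.2/16"

import pyspark

conf = pyspark.SparkConf() \
    .setAppName('MyApp') \
    .setMaster('spark://172.18.0.2:7077')  # container IP instead of the unresolvable hostname
sc = pyspark.SparkContext(conf=conf)

Alternatively, mapping the spark-master hostname to that IP in the host's /etc/hosts achieves the same result without changing the code.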
7
2
70,859,757
2022-1-26
https://stackoverflow.com/questions/70859757/how-to-add-vertically-centered-labels-in-bar-chart-matplotlib
I have a problem which I simplified as below, I would love if anyone suggest me the code in seaborn like what I want to achieve. import matplotlib.pyplot as plt a = [2000, 4000, 3000, 8000, 6000, 3000, 3000, 4000, 2000, 4000, 3000, 8000, 6000, 3000, 3000, 4000, 2000, 4000, 3000, 8000, 6000, 3000, 3000, 4000] b = [0.8, 0.9, 0.83, 0.81, 0.86, 0.89, 0.89, 0.8, 0.8, 0.9, 0.83, 0.81, 0.86, 0.89, 0.89, 0.8, 0.8, 0.9, 0.83, 0.81, 0.86, 0.89, 0.89, 0.8] c = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] fig1, ax1 = plt.subplots(figsize=(12, 6)) ax12 = ax1.twinx() ax1.bar(c, a) ax12.plot(c, b, 'o-', color="red", markersize=12, markerfacecolor='Yellow', markeredgewidth=2, linewidth=2) ax12.set_ylim(bottom=0, top=1, emit=True, auto=False) plt.grid() plt.show() I am trying to achieve the labels in the center and vertical as shown in the following figure.
As of matplotlib 3.4.0, use Axes.bar_label: label_type='center' places the labels at the center of the bars rotation=90 rotates them 90 deg Since this is a regular bar chart, we only need to label one bar container ax1.containers[0]: ax1.bar_label(ax1.containers[0], label_type='center', rotation=90, color='white') But if this were a grouped/stacked bar chart, we should iterate all ax1.containers: for container in ax1.containers: ax1.bar_label(container, label_type='center', rotation=90, color='white') seaborn version I just noticed the question text asks about seaborn, in which case we can use sns.barplot and sns.pointplot. We can still use bar_label with seaborn via the underlying axes. import pandas as pd import seaborn as sns # put the lists into a DataFrame df = pd.DataFrame({'a': a, 'b': b, 'c': c}) # create the barplot and vertically centered labels ax1 = sns.barplot(data=df, x='c', y='a', color='green') ax1.bar_label(ax1.containers[0], label_type='center', rotation=90, color='white') ax12 = ax1.twinx() ax12.set_ylim(bottom=0, top=1, emit=True, auto=False) # create the pointplot with x=[0, 1, 2, ...] # this is because that's where the bars are located (due to being categorical) sns.pointplot(ax=ax12, data=df.reset_index(), x='index', y='b', color='red')
5
7
70,809,438
2022-1-22
https://stackoverflow.com/questions/70809438/python-dataclasses-with-optional-attributes
How do you make Optional attr's of a dataclass? from dataclasses import dataclass @dataclass class CampingEquipment: knife: bool fork: bool missing_flask_size: # what to write here? kennys_stuff = { 'knife': True, 'fork': True } print(CampingEquipment(**kennys_stuff)) I tried field(init=False), but it gave me: TypeError: CampingEquipment.__init__() missing 1 required positional argument: 'missing_flask_size' By Optional I mean __dict__ may contain the key "missing_flask_size" or not. If I set a default value then the key will be there and it shouldn't be in some cases. I want to check its type if it is there. I tried moving the field(init=False) to the type location (after the colon) so I could make it more explicit as to the thing I wanted optional would be the key and not the value. So I want this test to pass: with pytest.raises(AttributeError): ce = CampingEquipment(**kennys_stuff) print(ce.missing_flask_size)
It's not possible to use a dataclass to make an attribute that sometimes exists and sometimes doesn't because the generated __init__, __eq__, __repr__, etc hard-code which attributes they check. However, it is possible to make a dataclass with an optional argument that uses a default value for an attribute (when it's not provided). from dataclasses import dataclass from typing import Optional @dataclass class CampingEquipment: knife: bool fork: bool missing_flask_size: Optional[int] = None kennys_stuff = { 'knife':True, 'fork': True } print(CampingEquipment(**kennys_stuff)) And it's possible to make a dataclass with an argument that's accepted to __init__ but isn't an actual field. So you could do something like this: from dataclasses import dataclass, InitVar from typing import Optional @dataclass class CampingEquipment: knife: bool fork: bool missing_flask_size: InitVar[Optional[int]] = None def __post_init__(self, missing_flask_size): if missing_flask_size is not None: self.missing_flask_size = missing_flask_size If you really want classes to either to have that attribute present or not have it at all, you could subclass your dataclass and make a factory function that creates one class or the other based on whether that missing_flask_size attribute is present: from dataclasses import dataclass @dataclass class CampingEquipment: knife: bool fork: bool @dataclass class CampingEquipmentWithFlask(CampingEquipment): missing_flask_size: int def equipment(**fields): if 'missing_flask_size' in fields: return CampingEquipmentWithFlask(**fields) return CampingEquipment(**fields) kennys_stuff = { 'knife':True, 'fork': True } print(equipment(**kennys_stuff)) If you really wanted to (I wouldn't recommend it though), you could even customize the __new__ of CampingEquipment to return an instance of that special subclass when that missing_flask_size argument is given (though then you'd need to set init=False and make your own __init__ as well on that class).
62
85
70,772,733
2022-1-19
https://stackoverflow.com/questions/70772733/how-to-post-a-json-having-a-single-body-parameter-in-fastapi
I have a file called main.py in which I put a POST call with only one input parameter (integer). Simplified code is given below: from fastapi import FastAPI app = FastAPI() @app.post("/do_something/") async def do_something(process_id: int): # some code return {"process_id": process_id} Now, if I run the code for the test, saved in the file test_main.py, that is: from fastapi.testclient import TestClient from main import app client = TestClient(app) def test_do_something(): response = client.post( "/do_something/", json={ "process_id": 16 } ) return response.json() print(test_do_something()) I get: {'detail': [{'loc': ['query', 'process_id'], 'msg': 'field required', 'type': 'value_error.missing'}]} I can't figure out what the mistake is. It is necessary that it remains a POST call.
The error, basically, says that the required query parameter process_id is missing. The reason for that error is that you send a POST request with request body, i.e., JSON payload; however, your endpoint expects a query parameter. To receive the data in JSON format instead, one needs to create a Pydantic BaseModel—as shown below—and send the data from the client in the same way you already do. from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): process_id: int @app.post("/do_something") async def do_something(item: Item): return item Test the above as shown in your question: def test_do_something(): response = client.post("/do_something", json={"process_id": 16}) return response.json() If, however, you had to pass a query parameter, then you would create an endpoint in the same way you did, that is: @app.post("/do_something") async def do_something(process_id: int): return {"process_id": process_id} but on client side, you would need to add the parameter to the URL itself, as described in the documentation (e.g., "/do_something?process_id=16"), or use the params attribute and as shown below: def test_do_something(): response = client.post("/do_something", params={"process_id": 16}) return response.json() Update Alternatively, another way to pass JSON data when having a single body parameter is to use Body(..., embed=True), as shown below: @app.post("/do_something") def do_something(process_id: int = Body(..., embed=True)): return process_id For more details and options on how to post JSON data in FastAPI, please have a look at this answer and this answer.
10
13
70,799,693
2022-1-21
https://stackoverflow.com/questions/70799693/repeat-python-function-at-every-system-clock-minute
I've seen that I can repeat a function with Python every x seconds by using an event loop library, as in this post:

import sched, time

s = sched.scheduler(time.time, time.sleep)

def do_something(sc):
    print("Doing stuff...")
    # do your stuff
    s.enter(60, 1, do_something, (sc,))

s.enter(60, 1, do_something, (s,))
s.run()

But I need something slightly different: I need the function to be called at every system clock minute: at 11:44:00PM, 11:45:00PM and so on. How can I achieve this result?
Use schedule. import schedule import time schedule.every().minute.at(':00').do(do_something, sc) while True: schedule.run_pending() time.sleep(.1) If do_something takes more than a minute, turn it into a thread before passing it to do. import threading def do_something_threaded(sc): threading.Thread(target=do_something, args=(sc,)).start()
5
7
70,792,895
2022-1-20
https://stackoverflow.com/questions/70792895/python-3-type-dict-with-required-and-arbitrary-keys
I'm trying to type a function that returns a dictionary with one required key, and some additional ones. I've run into TypedDict, but it is too strict for my purpose. At the same time Dict is too lenient. To give some examples with what I have in mind: class Schema(PartiallyTypedDict): name: str year: int a: Schema = {"name": "a", "year": 1} # OK b: Schema = {"name": "a", "year": 1, rating: 5} # OK, "rating" key can be added c: Schema = {"year": 1, rating: 5} # NOT OK, "name" is missing It would be great if there was also a way of forcing values of all of the optional/arbitrary key to be of a specific type. For example int in example above due to "rating" being one. Does there exist such a type, or is there a way of creating such a type in python typing framework?
Since PEP 655 there is a solution for this problem. In Python 3.11+ there are typing.Required and typing.NotRequired. This means we have two ways to do this. typing.Required approach The typing.Required type qualifier is used to indicate that a variable declared in a TypedDict definition is a required key. This means we can use the totality flag to mark any keys as not required. And then use the typing.Required to mark explicitly keys as required. from typing import TypedDict, Required class Schema(TypedDict, total=False): name: Required[str] year: Required[int] rating: int typing.NotRequired approach the typing.NotRequired type qualifier is used to indicate that a variable declared in a TypedDict definition is a potentially-missing key This means we can use the totality flag to mark any keys as required and use typing.NotRequired to mark explicitly some keys as not required. from typing import TypedDict, NotRequired # total=True is the default value and therefore can be omitted. # class Schema(TypedDict, total=True): class Schema(TypedDict): name: str year: int rating: NotRequired[int] Result Both approaches enforce dicts to have a name and a year key. It also enforces that, if a rating key is in the dict, the value is of type int. There is no restriction in adding further keys and there is no value-type restriction for those keys. This means: # OK a: Schema = {"name": "a", "year": 1} # OK, "rating" key can be added b: Schema = {"name": "a", "year": 1, rating: 5} # NOT OK, "name" is missing c: Schema = {"year": 1, rating: 5} # NOT OK, "rating" is no int d: Schema = {"name": "a", "year": 1, "rating": "invalid type"} # OK, because any (undefined) key can be added, with any type e: Schema = {"name": "a", "year": 1, "rating": 5, "undefined-key": "undefined-value-type" } Just for completeness: Required and NotRequired are not allowed outside of TypedDict declarations. It is allowed to have Required and NotRequired Keys in the same dict. A key can only be Required or NotRequired -> This is forbidden: Required[NotRequired[Any]] Addendum There should be soon a way, to give type restrictions to arbitrary keys. PEP 728 brings up the extra_items parameter for TypedDict. So it should be possible to do something like: from typing import TypedDict class Schema(TypedDict(extra_items=int)): name: str year: int Which should result in: # Ok a: Schema = {"name": "a", "year": 1, rating: 5} # Ok b: Schema = {"name": "a", "year": 1, rating: 5, another_arbitrary_key: 4} # NOT Ok, because the arbitrary rating key is no int a: Schema = {"name": "a", "year": 1, rating: "five"} You may have expected the use of "should" this is, because at the time of writing this addendum the PEP 728 is not yet implemented.
5
1
70,851,048
2022-1-25
https://stackoverflow.com/questions/70851048/does-it-make-sense-to-use-conda-poetry
Does it make sense to use Conda + Poetry for a Machine Learning project? Allow me to share my (novice) understanding and please correct or enlighten me: As far as I understand, Conda and Poetry have different purposes but are largely redundant: Conda is primarily a environment manager (in fact not necessarily Python), but it can also manage packages and dependencies. Poetry is primarily a Python package manager (say, an upgrade of pip), but it can also create and manage Python environments (say, an upgrade of Pyenv). My idea is to use both and compartmentalize their roles: let Conda be the environment manager and Poetry the package manager. My reasoning is that (it sounds like) Conda is best for managing environments and can be used for compiling and installing non-python packages, especially CUDA drivers (for GPU capability), while Poetry is more powerful than Conda as a Python package manager. I've managed to make this work fairly easily by using Poetry within a Conda environment. The trick is to not use Poetry to manage the Python environment: I'm not using commands like poetry shell or poetry run, only poetry init, poetry install etc (after activating the Conda environment). For full disclosure, my environment.yml file (for Conda) looks like this: name: N channels: - defaults - conda-forge dependencies: - python=3.9 - cudatoolkit - cudnn and my poetry.toml file looks like that: [tool.poetry] name = "N" authors = ["B"] [tool.poetry.dependencies] python = "3.9" torch = "^1.10.1" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" To be honest, one of the reasons I proceeded this way is that I was struggling to install CUDA (for GPU support) without Conda. Does this project design look reasonable to you?
2024-04-05 update: It looks like my tips proved to be useful to many people, but they are not needed anymore. Just use Pixi. It's still alpha, but it works great, and provides the features of the Conda + Poetry setup in a simpler and more unified way. In particular, Pixi supports: installing packages both from Conda channels and from PyPi, lockfiles, creating multiple features and environments (prod, dev, etc.), very efficient package version resolution, not just faster than Conda (which is very slow), but in my experience also faster than Mamba, Poetry and pip. Making a Pixi env look like a Conda env One non-obvious tip about Pixi is that you can easily make your project's Pixi environment visible as a Conda environment, which may be useful e.g. in VS Code, which allows choosing Python interpreters and Jupyter kernels from detected Conda environments. All you need to do is something like: ln -s /path/to/my/project/.pixi/envs/default /path/to/conda/base/envs/conda-name-of-my-env The first path is the path to your Pixi environment, which resides in your project directory, under .pixi/envs, and the second path needs to be within one of Conda's environment directories, which can be found with conda config --show envs_dirs. Original answer: I have experience with a Conda + Poetry setup, and it's been working fine. The great majority of my dependencies are specified in pyproject.toml, but when there's something that's unavailable in PyPI, or installing it with Conda is easier, I add it to environment.yml. Moreover, Conda is used as a virtual environment manager, which works well with Poetry: there is no need to use poetry run or poetry shell, it is enough to activate the right Conda environment. Tips for creating a reproducible environment Add Poetry, possibly with a version number (if needed), as a dependency in environment.yml, so that you get Poetry installed when you run conda create, along with Python and other non-PyPI dependencies. Add conda-lock, which gives you lock files for Conda dependencies, just like you have poetry.lock for Poetry dependencies. Consider using mamba which is generally compatible with conda, but is better at resolving conflicts, and is also much faster. An additional benefit is that all users of your setup will use the same package resolver, independent from the locally-installed version of Conda. By default, use Poetry for adding Python dependencies. Install packages via Conda if there's a reason to do so (e.g. in order to get a CUDA-enabled version). In such a case, it is best to specify the package's exact version in environment.yml, and after it's installed, to add an entry with the same version specification to Poetry's pyproject.toml (without ^ or ~ before the version number). This will let Poetry know that the package is there and should not be upgraded. If you use a different channels that provide the same packages, it might be not obvious which channel a particular package will be downloaded from. One solution is to specify the channel for the package using the :: notation (see the pytorch entry below), and another solution is to enable strict channel priority. Unfortunately, in Conda 4.x there is no way to enable this option through environment.yml. Note that Python adds user site-packages to sys.path, which may cause lack of reproducibility if the user has installed Python packages outside Conda environments. One possible solution is to make sure that the PYTHONNOUSERSITE environment variable is set to True (or to any other non-empty value). 
Example environment.yml: name: my_project_env channels: - pytorch - conda-forge # We want to have a reproducible setup, so we don't want default channels, # which may be different for different users. All required channels should # be listed explicitly here. - nodefaults dependencies: - python=3.10.* # or don't specify the version and use the latest stable Python - mamba - pip # pip must be mentioned explicitly, or conda-lock will fail - poetry=1.* # or 1.1.*, or no version at all -- as you want - tensorflow=2.8.0 - pytorch::pytorch=1.11.0 - pytorch::torchaudio=0.11.0 - pytorch::torchvision=0.12.0 # Non-standard section listing target platforms for conda-lock: platforms: - linux-64 virtual-packages.yml (may be used e.g. when we want conda-lock to generate CUDA-enabled lock files even on platforms without CUDA): subdirs: linux-64: packages: __cuda: 11.5 First-time setup You can avoid playing with the bootstrap env and simplify the example below if you have conda-lock, mamba and poetry already installed outside your target environment. # Create a bootstrap env conda create -p /tmp/bootstrap -c conda-forge mamba conda-lock poetry='1.*' conda activate /tmp/bootstrap # Create Conda lock file(s) from environment.yml conda-lock -k explicit --conda mamba # Set up Poetry poetry init --python=~3.10 # version spec should match the one from environment.yml # Fix package versions installed by Conda to prevent upgrades poetry add --lock tensorflow=2.8.0 torch=1.11.0 torchaudio=0.11.0 torchvision=0.12.0 # Add conda-lock (and other packages, as needed) to pyproject.toml and poetry.lock poetry add --lock conda-lock # Remove the bootstrap env conda deactivate rm -rf /tmp/bootstrap # Add Conda spec and lock files git add environment.yml virtual-packages.yml conda-linux-64.lock # Add Poetry spec and lock files git add pyproject.toml poetry.lock git commit Usage The above setup may seem complex, but it can be used in a fairly simple way. Creating the environment conda create --name my_project_env --file conda-linux-64.lock conda activate my_project_env poetry install Activating the environment conda activate my_project_env Updating the environment # Re-generate Conda lock file(s) based on environment.yml conda-lock -k explicit --conda mamba # Update Conda packages based on re-generated lock file mamba update --file conda-linux-64.lock # Update Poetry packages and re-generate poetry.lock poetry update
198
241
70,771,319
2022-1-19
https://stackoverflow.com/questions/70771319/determining-if-object-is-of-typing-literal-type
I need to check whether an object is a descendant of typing.Literal. I have an annotation like this:

GameState: Literal['start', 'stop']

And I need to check the GameState annotation type:

def parse_values(ann):
    if isinstance(ann, str):
        # do sth
    if isinstance(ann, int):
        # do sth
    if isinstance(ann, Literal):
        # do sth

But it causes an error, so I swapped the last one to:

if type(ann) == Literal:
    # do sth

But it never returns True, so does anyone know a workaround for this?
typing.get_origin(tp) is the proper way It was implemented in Python 3.8 (Same as typing.Literal) The docstring is thoroughly instructive: def get_origin(tp): """Get the unsubscripted version of a type. This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar and Annotated. Return None for unsupported types. Examples:: get_origin(Literal[42]) is Literal get_origin(int) is None get_origin(ClassVar[int]) is ClassVar get_origin(Generic) is Generic get_origin(Generic[T]) is Generic get_origin(Union[T, int]) is Union get_origin(List[Tuple[T, T]][int]) == list """ In your use case it would be: from typing import Literal, get_origin def parse_values(ann): if isinstance(ann, str): return "str" elif isinstance(ann, int): return "int" elif get_origin(ann) is Literal: return "Literal" assert parse_values("foo") == "str" assert parse_values(5) == "int" assert parse_values(Literal["bar", 6]) == "Literal"
8
2
70,815,197
2022-1-22
https://stackoverflow.com/questions/70815197/how-to-do-structural-pattern-matching-in-python-3-10-with-a-type-to-match
I am trying to match a type in Python 3.10 using the console: t = 12.0 match type(t): case int: print("int") case float: print("float") And I get this error: File "<stdin>", line 2 SyntaxError: name capture 'int' makes remaining patterns unreachable How can I fix this issue?
First, let's explain the code in the question: t = 12.0 match type(t): case int: print("int") case float: print("float") In Python, the match statement operates by trying to fit the subject (the value of type(t), i.e. float) into one of the patterns. The above code has two patterns (case int: and case float:). These patterns, by nature of their syntax, are capture patterns, meaning that they attempt to capture the subject (the value of type(t), i.e. float) into the variable name specified in each pattern (the names int and float, respectively). The patterns case int: and case float: are identical to each other. Moreover, they will both match any subject. Hence, the error: SyntaxError: name capture 'int' makes remaining patterns unreachable. Additionally, note that these two patterns are functionally identical to the conventional default pattern of case _:. case _: is also simply a capture pattern, nothing more. (The only difference between case int:, case float: and case _: is the name of the variable that will capture the subject.) Now let's explain the code in @mmohaveri's answer. t = 12.0 match t: case int(): print("int") case float(): print("float") The above match statement will attempt to fit the value 12.0 first into the pattern case int():, and then into the pattern case float():. These patterns, by nature of their syntax, are class patterns. The parentheses can be used to specify positional arguments and keyword arguments for the pattern. (The parentheses are not function calls.) For the first pattern (case int():), 12.0 is not an instance of int, so the pattern match fails. For the second pattern, (case float():), 12.0 is an instance of float, and no positional or keyword arguments are specified inside the parentheses, so the pattern match succeeds. Here is more information about the matching process.
16
8
70,793,490
2022-1-20
https://stackoverflow.com/questions/70793490/how-do-i-calculate-square-root-in-python
I need to calculate the square root of some numbers, for example √9 = 3 and √2 = 1.4142. How can I do it in Python? The inputs will probably be all positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break? Note: This is an attempt at a canonical question after a discussion on Meta about an existing question with the same title. Related Integer square root in python How to find integer nth roots? Is there a short-hand for nth root of x in Python? Difference between **(1/2), math.sqrt and cmath.sqrt? Why is math.sqrt() incorrect for large numbers? Python sqrt limit for very large numbers? square root of a number greater than 10^2000 in Python 3 Which is faster in Python: x**.5 or math.sqrt(x)? Why does Python give the "wrong" answer for square root? (specific to Python 2) calculating n-th roots using Python 3's decimal module How can I take the square root of -1 using python? (focused on NumPy) Arbitrary precision of square roots
Option 1: math.sqrt() The math module from the standard library has a sqrt function to calculate the square root of a number. It takes any type that can be converted to float (which includes int) and returns a float. >>> import math >>> math.sqrt(9) 3.0 Option 2: Fractional exponent The power operator (**) or the built-in pow() function can also be used to calculate a square root. Mathematically speaking, the square root of a equals a to the power of 1/2. The power operator requires numeric types and matches the conversion rules for binary arithmetic operators, so in this case it will return either a float or a complex number. >>> 9 ** (1/2) 3.0 >>> 9 ** .5 # Same thing 3.0 >>> 2 ** .5 1.4142135623730951 (Note: in Python 2, 1/2 is truncated to 0, so you have to force floating point arithmetic with 1.0/2 or similar. See Why does Python give the "wrong" answer for square root?) This method can be generalized to nth root, though fractions that can't be exactly represented as a float (like 1/3 or any denominator that's not a power of 2) may cause some inaccuracy: >>> 8 ** (1/3) 2.0 >>> 125 ** (1/3) 4.999999999999999 Edge cases Negative and complex Exponentiation works with negative numbers and complex numbers, though the results have some slight inaccuracy: >>> (-25) ** .5 # Should be 5j (3.061616997868383e-16+5j) >>> 8j ** .5 # Should be 2+2j (2.0000000000000004+2j) (Note: the parentheses are required on -25, otherwise it's parsed as -(25**.5) because exponentiation is more tightly binding than negation.) Meanwhile, math is only built for floats, so for x<0, math.sqrt(x) will raise ValueError: math domain error and for complex x, it'll raise TypeError: can't convert complex to float. Instead, you can use cmath.sqrt(x), which is more more accurate than exponentiation (and will likely be faster too): >>> import cmath >>> cmath.sqrt(-25) 5j >>> cmath.sqrt(8j) (2+2j) Precision Both options involve an implicit conversion to float, so floating point precision is a factor. For example let's try a big number: >>> n = 10**30 >>> x = n**2 >>> root = x**.5 >>> root == n False >>> root - n # how far off are they? 0.0 >>> int(root) - n # how far off is the float from the int? 19884624838656 Very large numbers might not even fit in a float and you'll get OverflowError: int too large to convert to float. See Python sqrt limit for very large numbers? Other types Let's look at Decimal for example: Exponentiation fails unless the exponent is also Decimal: >>> decimal.Decimal('9') ** .5 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for ** or pow(): 'decimal.Decimal' and 'float' >>> decimal.Decimal('9') ** decimal.Decimal('.5') Decimal('3.000000000000000000000000000') Meanwhile, math and cmath will silently convert their arguments to float and complex respectively, which could mean loss of precision. decimal also has its own .sqrt(). See also calculating n-th roots using Python 3's decimal module
57
98
70,821,737
2022-1-23
https://stackoverflow.com/questions/70821737/webdriverexception-message-service-geckodriver-unexpectedly-exited-status-cod
For some tests, I've set up a plain new TrueNAS 12.3 FreeBSD Jail and started it, then installed python3, firefox, geckodriver and pip using the following commands: pkg install python3 firefox geckodriver py38-pip pip install --upgrade pip setenv CRYPTOGRAPHY_DONT_BUILD_RUST 1 pip install cryptography==3.4.7 pip install selenium Afterwards, when I want to use Selenium with Firefox in my Python code, it does not work: from selenium import webdriver from selenium.webdriver.firefox.options import Options options = Options() options.headless = True driver = webdriver.Firefox(options=options) it brings Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/firefox/webdriver.py", line 174, in __init__ self.service.start() File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/common/service.py", line 98, in start self.assert_process_still_running() File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/common/service.py", line 110, in assert_process_still_running raise WebDriverException( selenium.common.exceptions.WebDriverException: Message: Service geckodriver unexpectedly exited. Status code was: 64 Funnily, on another Jail that I've set up approximately a year ago (approximately in the mentioned way as well), it just works and does not throw the error (so different versions maybe?)! This is the only content of geckodriver.log: geckodriver: error: Found argument '--websocket-port' which wasn't expected, orisn't valid in this context USAGE: geckodriver [FLAGS] [OPTIONS] For more information try --help Is there anything I could try to get it working? I've already seen this question, but it seems fairly outdated. Firefox 95.0.2, geckodriver 0.26.0, Python 3.8.12, Selenium 4.1.0
This error message... selenium.common.exceptions.WebDriverException: Message: Service geckodriver unexpectedly exited. Status code was: 64 and the GeckoDriver log... geckodriver: error: Found argument '--websocket-port' which wasn't expected, or isn't valid in this context ...implies that the GeckoDriver was unable to initiate/spawn a new Browsing Context i.e. firefox session. Your main issue is the incompatibility between the version of the binaries you are using as follows: Your Selenium Client version is 4.1.0. But your GeckoDriver version is 0.26.0. As @ernstki mentions in their comment: You are running a geckodriver older than 0.30.0, and it is missing the --websocket-port option, which newer/new-ish versions of Selenium seem to depend on. To put it in simple words, till the previous GeckoDriver release of v0.29.0 the --websocket-port option wasn't in use, which is now mandatory with Selenium v4.0.1. Further @whimboo also confirmed in his comment: As it has been manifested the problem here is not geckodriver but Selenium. As such you should create an issue on the Selenium repository instead, so that an option could be added to not always pass the --websocket-port argument. If that request gets denied you will have to use older releases of Selenium if testing with older geckodriver releases is really needed. Solution Ensure that: Selenium is upgraded to current levels Version 4.1.0. GeckoDriver is upgraded to GeckoDriver v0.30.0 level. Firefox is upgraded to current Firefox v96.0.2 levels. FreeBSD versions Incase you are using FreeBSD versions where the GeckoDriver versions are older, in those cases you have to downgrade Selenium to v3.x levels. Commands (courtesy: Kurtibert): Uninstall Selenium: pip3 uninstall selenium; Install Selenium: pip3 install 'selenium<4.0.0'
17
26
70,816,149
2022-1-22
https://stackoverflow.com/questions/70816149/vscode-intellisense-not-working-for-modules-when-using-sys-path-append-to-add-pa
I am adding paths that are higher up or in sibling directories using the following code, and I am not getting IntelliSense for modules inside these folders. Any idea how to get this IntelliSense? The function colorPrint is defined inside the LoggingHelper module in the Utility folder.
I solved it as follows. I add the parent folder to sys.path and resolve all modules relative to that parent folder. This way, I get IntelliSense:

import sys
from pathlib import Path

HERE = Path(__file__).parent
sys.path.append(str(HERE / '..'))

from Utility.LoggingKotakHelper import (colorPrint, logKotakInfo, logKotakWarning)
5
3
70,826,659
2022-1-23
https://stackoverflow.com/questions/70826659/bar-labels-with-new-f-string-format-style
As of matplotlib 3.4.0, Axes.bar_label method allows for labelling bar charts. However, the labelling format option works with old style formatting, e.g. fmt='%g' How can I make it work with new style formatting that would allow me to do things like percentages, thousands separators, etc: '{:,.2f}', '{:.2%}', ... The first thing that comes to my mind is somehow taking the initial labels from ax.containers and then reformatting them but it also needs to work for different bar structures, grouped bars with different formats and so on.
How can I make bar_label work with new style formatting like percentages, thousands separators, etc? As of matplotlib 3.7 The fmt param now directly supports {}-based format strings, e.g.: # >= 3.7 plt.bar_label(bars, fmt='{:,.2f}') # ^no f here (not an actual f-string) Prior to matplotlib 3.7 The fmt param does not support {}-based format strings, so use the labels param. Format the bar container's datavalues with an f-string and set those as the labels, e.g.: # < 3.7 plt.bar_label(bars, labels=[f'{x:,.2f}' for x in bars.datavalues]) Examples: Thousands separator labels bars = plt.bar(list('ABC'), [12344.56, 23456.78, 34567.89]) # >= v3.7 plt.bar_label(bars, fmt='${:,.2f}') # < v3.7 plt.bar_label(bars, labels=[f'${x:,.2f}' for x in bars.datavalues]) Percentage labels bars = plt.bar(list('ABC'), [0.123456, 0.567890, 0.789012]) # >= 3.7 plt.bar_label(bars, fmt='{:.2%}') # >= 3.7 # < 3.7 plt.bar_label(bars, labels=[f'{x:.2%}' for x in bars.datavalues]) Stacked percentage labels x = list('ABC') y = [0.7654, 0.6543, 0.5432] fig, ax = plt.subplots() ax.bar(x, y) ax.bar(x, 1 - np.array(y), bottom=y) # now 2 bar containers: white labels for blue bars, black labels for orange bars colors = list('wk') # >= 3.7 for bars, color in zip(ax.containers, colors): ax.bar_label(bars, fmt='{:.1%}', color=color, label_type='center') # < 3.7 for bars, color in zip(ax.containers, colors): labels = [f'{x:.1%}' for x in bars.datavalues] ax.bar_label(bars, labels=labels, color=color, label_type='center')
6
18
70,837,669
2022-1-24
https://stackoverflow.com/questions/70837669/how-can-i-parse-package-in-a-urdf-file-path
I have a robot URDF that points to mesh files using "package://". <geometry> <mesh filename="package://a1_rw/meshes/hip.dae" scale="1 1 1"/> </geometry> I would like to use urdfpy to parse this URDF. However, it is unable to interpret the meaning of "package://". import os from urdfpy import URDF a1_rw = { "model": "a1", "csvpath": "a1_rw/urdf/a1_rw.csv", "urdfpath": "a1_rw/urdf/a1_rw.urdf" } model = a1_rw curdir = os.getcwd() path_parent = os.path.dirname(curdir) print("path parent = ", path_parent) model_path = model["urdfpath"] robot = URDF.load(os.path.join(path_parent, model_path)) Here is the error message: $ python3.8 calc_parallax.py path parent = /home/ben/Documents/git_workspace/a1_test Traceback (most recent call last): File "calc_parallax.py", line 18, in <module> robot = URDF.load(os.path.join(path_parent, model_path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 3729, in load return URDF._from_xml(node, path) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 3926, in _from_xml kwargs = cls._parse(node, path) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 161, in _parse kwargs.update(cls._parse_simple_elements(node, path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 137, in _parse_simple_elements v = [t._from_xml(n, path) for n in vs] File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 137, in <listcomp> v = [t._from_xml(n, path) for n in vs] File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 181, in _from_xml return cls(**cls._parse(node, path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 161, in _parse kwargs.update(cls._parse_simple_elements(node, path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 137, in _parse_simple_elements v = [t._from_xml(n, path) for n in vs] File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 137, in <listcomp> v = [t._from_xml(n, path) for n in vs] File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 1146, in _from_xml kwargs = cls._parse(node, path) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 161, in _parse kwargs.update(cls._parse_simple_elements(node, path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 127, in _parse_simple_elements v = t._from_xml(v, path) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 181, in _from_xml return cls(**cls._parse(node, path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 161, in _parse kwargs.update(cls._parse_simple_elements(node, path)) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 127, in _parse_simple_elements v = t._from_xml(v, path) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/urdf.py", line 581, in _from_xml meshes = load_meshes(fn) File "/home/ben/.local/lib/python3.8/site-packages/urdfpy/utils.py", line 225, in load_meshes meshes = trimesh.load(filename) File "/home/ben/.local/lib/python3.8/site-packages/trimesh/exchange/load.py", line 111, in load ) = parse_file_args(file_obj=file_obj, File "/home/ben/.local/lib/python3.8/site-packages/trimesh/exchange/load.py", line 623, in parse_file_args raise ValueError('string is not a file: {}'.format(file_obj)) ValueError: string is not a file: /home/ben/Documents/git_workspace/a1_test/a1_rw/urdf/package://a1_rw/meshes/trunk.dae Is there any way to get urdfpy (or another 
urdf parser) to parse this correctly?
The behavior you're observing is expected. The documentation for urdfpy.URDF.load() specifically states: Any paths in the URDF should be specified as relative paths to the .urdf file instead of as ROS resources. If you want to keep using the same library, the only way to sort this out is to replace the strings in the .urdf file. To do so, I suggest using a temporary file so that it is automatically discarded once you've loaded your URDF and doesn't actually impact your original urdf.

from pathlib import Path
from urdfpy import URDF
import tempfile

# URDF file (pathlib is a little nicer but not mandatory)
urdf_file_path = Path("path/to/my/file.urdf")

# Define how you replace your string. Adjust it so it fits your file organization
ros_url_prefix = "package://"
abs_path_prefix = "/path/to/my/meshes/folder"

# Start by opening a temp dir (the context manager makes it easy to handle)
with tempfile.TemporaryDirectory() as tmpdirname:
    # Where your tmp file will be
    tmp_file_path = Path(tmpdirname) / urdf_file_path.name
    # Write each line to fout, replacing the ROS url prefix with the abs path
    with open(urdf_file_path, 'r') as fin:
        with open(tmp_file_path, 'w') as fout:
            for line in fin:
                fout.write(line.replace(ros_url_prefix, abs_path_prefix))
    # Load the urdf from the corrected tmp file
    robot_urdf = URDF.load(str(tmp_file_path))

# Here we get out of the tmp file context manager, so the tmp dir and all its content is erased
robot_urdf.show()  # The robot urdf is still accessible
5
1
70,783,994
2022-1-20
https://stackoverflow.com/questions/70783994/reload-routes-in-fastapi-during-runtime
I have a FastAPI app in which routes are dynamically generated based on a DB config. However, once the routes are defined and the app is running, if the config changes, there seems to be no way to reload it so that the routes reflect the new config. The only solution I have for now is to manually restart the ASGI app by restarting uvicorn. Is there any way to fully regenerate the routes without stopping the app, ideally triggered by calling a URL?
It is possible to modify routes at runtime. FastAPI apps have the method add_api_route which allows you to dynamically define new endpoints. To remove an endpoint you will need to fiddle directly with the routes of the underlying Router. The following code shows how to dynamically add and remove routes. import fastapi app = fastapi.FastAPI() @app.get("/add") async def add(name: str): async def dynamic_controller(): return f"dynamic: {name}" app.add_api_route(f"/dyn/{name}", dynamic_controller, methods=["GET"]) return "ok" def route_matches(route, name): return route.path_format == f"/dyn/{name}" @app.get("/remove") async def remove(name: str): for i, r in enumerate(app.router.routes): if route_matches(r, name): del app.router.routes[i] return "ok" return "not found" And below is shown how to use it $ curl 127.0.0.1:8000/dyn/test {"detail":"Not Found"} $ curl 127.0.0.1:8000/add?name=test "ok" $ curl 127.0.0.1:8000/dyn/test "dynamic: test" $ curl 127.0.0.1:8000/add?name=test2 "ok" $ curl 127.0.0.1:8000/dyn/test2 "dynamic: test2" $ curl 127.0.0.1:8000/remove?name=test "ok" $ curl 127.0.0.1:8000/dyn/test {"detail":"Not Found"} $ curl 127.0.0.1:8000/dyn/test2 "dynamic: test2" Note though, that if you change the routes dynamically you will need to invalidate the cache of the OpenAPI endpoint.
10
8
70,808,757
2022-1-21
https://stackoverflow.com/questions/70808757/pydantic-inconsistent-and-automatic-conversion-between-float-and-int
I am using pydantic python package in FastAPI for a web app, and I noticed there is some inconsistent float-int conversions with different typing checks. For example: class model(BaseModel): data: Optional[Union[int, float]] = None m = model(data=3.33) m.data --> 3.33 class model(BaseModel): data: Optional[Union[int, float, str]] = None m = model(data=3.33) m.data --> 3 class model(BaseModel): data: Union[int, float, str] = None m = model(data=3.33) m.data --> 3 class model(BaseModel): data: Union[str, int, float] = None m = model(data=3.33) m.data --> '3.33' As shown here, different orders/combinations of typings have different behaviors. I checked out thread https://github.com/samuelcolvin/pydantic/issues/360, and https://github.com/samuelcolvin/pydantic/issues/284, but they seem not to be the exact same problem. What causes such behavior under the hood? Is there a specific reason for this? Or did I do anything wrong/inappropriate here? I'm using python 3.8, pydantic 1.8.2 Thank you for helping! ------ Update ------ In pydantic==1.9.1 this seems has been fixed -> refer to @JacekK's answer.
To understand what happened there we have to know how the Union works and how pydantic uses typing to validate values. Union According to the documentation: Union type; Union[X, Y] is equivalent to X | Y and means either X or Y. OR means that at least one element has to be true to make the whole sentence true. So, if the first element is true there's no need to check the second element. Because the whole sentence is true no matter the value of the second element. Classical disjunction Pydantic Pydantic gets the given value and tries to map it to the type declared in the class attribute definition. If it succeeds then pydantic assigns value to the field. Otherwise it raises a ValidationError. pseudo code: x.data: int x.data = "3" # string x.data = int("3") #convertion, x.data → 3 how it works in your case: x.data = Union[int, float, str] x.data = 3.33 x.data = int(3.33) #convertion to int, which is first in your Union # because previous was success, then: x.data → 3 This behavior is known and well documented: As such, it is recommended that when defining Union annotations, the most specific type is included first and followed by less specific types. class MyModel(BaseModel): data: Union[float, int, str] m = MyModel(data=3.33) print(m.data) # output is: 3.33 m = MyModel(data="3.33") print(m.data) # output is: 3.33 m = MyModel(data=3) print(m.data) # output is: 3.0
6
3
70,780,898
2022-1-20
https://stackoverflow.com/questions/70780898/pydantic-set-variables-from-a-list
Is there a way to set a pydantic model from a list? I tried this and it didn't work for me. If it's not possible with pydantic, what is the best way to do this if I still need type validation and conversion, constraints, etc.? Order is important here. from pydantic import BaseModel from datetime import date class User(BaseModel): id: int name = 'John Doe' sex: str money: float = None dt: date data = [1, 'Tike Myson', 'male', None, '2022-01-20'] user = User(*data) >>> TypeError: __init__() takes exactly 1 positional argument (6 given)
I partially answered it here: Initialize FastAPI BaseModel using non-keywords arguments (a.k.a *args) but I'll give some more dynamic options here. Option 1: use the order of the attributes Your case has the problem that Pydantic does not maintain the order of all fields (it depends at least on whether you set the type). If you specify the type of name then this works: from pydantic import BaseModel from datetime import date class User(BaseModel): id: int name: str = 'John Doe' sex: str money: float = None dt: date def __init__(self, *args): # Get a "list" of field names (or key view) field_names = self.__fields__.keys() # Combine the field names and args to a dict # using the positions. kwargs = dict(zip(field_names, args)) super().__init__(**kwargs) data = [1, 'Tike Myson', 'male', None, '2022-01-20'] user = User(*data) Option 2: set the order as class variable This has the downside of not being as dynamic but does not have the problem of the order being undesirable. from pydantic import BaseModel from datetime import date from typing import ClassVar class User(BaseModel): id: int name = 'John Doe' sex: str money: float = None dt: date field_names: ClassVar = ('id', 'name', 'sex', 'money', 'dt') def __init__(self, *args): # Combine the field names and args to a dict # using the positions. kwargs = dict(zip(self.field_names, args)) super().__init__(**kwargs) data = [1, 'Tike Myson', 'male', None, '2022-01-20'] user = User(*data) Option 3: just hard-code them in the __init__ This is similar to option 2 but a bit simpler (and less reusable). from pydantic import BaseModel from datetime import date class User(BaseModel): id: int name = 'John Doe' sex: str money: float = None dt: date def __init__(self, id, name, sex, money, dt): super().__init__(id=id, name=name, sex=sex, money=money, dt=dt) data = [1, 'Tike Myson', 'male', None, '2022-01-20'] user = User(*data) Option 4: Without overriding One more solution that does not require overriding the __init__. However, this requires more code when creating an instance: from pydantic import BaseModel from datetime import date class User(BaseModel): id: int name = 'John Doe' sex: str money: float = None dt: date data = [1, 'Tike Myson', 'male', None, '2022-01-20'] # Combine the field names and data field_names = ['id', 'name', 'sex', 'money', 'dt'] kwargs = dict(zip(field_names, data)) # Create an instance of User from a dict user = User(**kwargs) Note for options 1, 2 & 3 If you have multiple places where you need this, create a base class that has the __init__ overridden and subclass that.
6
5
70,806,778
2022-1-21
https://stackoverflow.com/questions/70806778/python-selenium-proxy-network
Overview I am using a proxy network and want to configure it with Selenium on Python. I have seen many post use the HOST:PORT method, but proxy networks uses the "URL method" of http://USER:PASSWORD@PROXY:PORT SeleniumWire I found SeleniumWire to be a way to connect the "URL method" of proxy networks to a Selenium Scraper. See basic SeleniumWire configuration: from seleniumwire import webdriver options = { 'proxy': { 'http': 'http://USER:PASSWORD@PROXY:PORT', 'https': 'http://USER:PASSWORD@PROXY:PORT' }, } driver = webdriver.Chrome(seleniumwire_options=options) driver.get("https://some_url.com") This correctly adds and cycles a proxy to the driver, however on many websites the scraper is quickly blocked by CloudFlare. This blocking is something that does not happen when running on Local IP. After searching through SeleniumWire's GitHub Repository Issues, I found that this is caused by TLS fingerprinting and that there is no current solution to this issue. Selenium Options I tried to configure proxies the conventional selenium way: from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument("--proxy-server=http://USER:PASSWORD@PROXY:PORT") driver = webdriver.Chrome(options=options) driver.get("https://some_url.com") A browser instance does open but fails because of a network error. Browser instance does not load in established URL. Docker Configuration The end result of this configuration would be running python code within a docker container that is within a Lambda function. Don't know whether or not that introduces a new level of abstraction or not. Summary What other resources can I use to correctly configure my Selenium scraper to use the "URL method" of IP cycling? Versions python 3.9 selenium 3.141.0 docker 20.10.11 Support Tickets Github: https://github.com/SeleniumHQ/selenium/issues/10605 ChromeDriver: https://bugs.chromium.org/p/chromedriver/issues/detail?id=4118
Selenium Extension: A proxy network, or "URL" proxy, can be configured with Selenium as an extension. Create the following JS script and JSON file: JS script ("background.js") var config = { mode: "fixed_servers", rules: { singleProxy: { scheme: "http", host: "<PROXY>", port: parseInt(<PORT>) }, bypassList: ["foobar.com"] } }; chrome.proxy.settings.set({value: config, scope: "regular"}, function() {}); function callbackFn(details) { return { authCredentials: { username: "<USER>", password: "<PASSWORD>" } }; } chrome.webRequest.onAuthRequired.addListener( callbackFn, {urls: ["<all_urls>"]}, ['blocking'] ); JSON File ("manifest.json") { "version": "1.0.0", "manifest_version": 2, "name": "Chrome Proxy", "permissions": [ "proxy", "tabs", "unlimitedStorage", "storage", "<all_urls>", "webRequest", "webRequestBlocking" ], "background": { "scripts": ["background.js"] }, "minimum_chrome_version":"22.0.0" } Zip background.js and manifest.json as proxy.zip and write the following: from selenium import webdriver options = webdriver.ChromeOptions() options.add_extension("proxy.zip") driver = webdriver.Chrome(options=options) driver.get("https://whatismyipaddress.com/")
5
1
70,787,868
2022-1-20
https://stackoverflow.com/questions/70787868/how-to-change-youtube-dl-output-location-with-python
I wrote a python script to download a list of YouTube URLs, and I want to change the output folder by the subject I'm downlaoding. For example, When I'm downloading a playlist, I want the videos in this playlist be downloaded into a folder named by the current playlist. But if it's a channel, the videos inside it should go into a folder named bt it's uploader. How do I know the URL I'm downloading is a playlist or a channel? Since the options are passed before the download starts, I can't find a way to do this. Here is my code: import sys import yt_dlp URLS = [ 'playlist_url', 'channel_url', ] dl_ops = { 'outtmpl': 'd:/YouTube/%(uploader)s/%(title)s.%(ext)s' } retry_count = 0 def download_video(urls): try: with yt_dlp.YoutubeDL(dl_ops) as ydl: ydl.download(urls) except KeyboardInterrupt: print('Interruptted by user') sys.exit() except Exception as e: print(e) global retry_count if retry_count == 50: print('Retry count exceeded') sys.exit() retry_count += 1 download_video(urls) if __name__ == '__main__': download_video(URLS)
dl_ops = { 'outtmpl': 'd:/YouTube/%(uploader)s/%(title)s.%(ext)s' } You can see that in this line you specify the path to be D:/YouTube/ {uploader name} / {title of the video} . {extension that you specified}. In order to make all the videos go into the same folder you just need to remove the uploader part (and its "/") from the template, so your videos don't get a separate folder for each uploader. So if you want the file name to be just the video name you would do something like this: 'outtmpl': 'D:/YouTube/%(title)s.%(ext)s' So for a video named "Youtube vlog part 321" the file path will be: D:/YouTube/Youtube vlog part 321.mp4 or any other format that you choose (mp3, wav, avi, mkv...).
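As a minimal sketch (assuming the d:/YouTube output directory from the question and a placeholder URL), the simplified template drops straight into the asker's options dict:
import yt_dlp

dl_ops = {
    # every download lands directly in d:/YouTube, named only by the video title
    'outtmpl': 'd:/YouTube/%(title)s.%(ext)s'
}

with yt_dlp.YoutubeDL(dl_ops) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=PLACEHOLDER'])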
6
6
70,793,174
2022-1-20
https://stackoverflow.com/questions/70793174/fastapi-schemahidden-true-not-working-when-trying-to-hide-the-schema-sectio
I'm trying to hide the entire schemas section of the FastAPI generated swagger docs. I've checked the docs and tried this but the schema section still shows. @Schema(hidden=True) class theSchema(BaseModel): category: str How do I omit one particular schema or the entire schemas section from the returned swagger docs. docExpansion does not appear to work either. What am I missing? app = FastAPI( openapi_tags=tags_metadata, title="Documentation", description="API endpoints", version="0.1", docExpansion="None" )
Swagger has the UI parameter "defaultModelsExpandDepth" for controlling the models' view in the schema section. This parameter can be forwarded using the "swagger_ui_parameters" parameter on FastAPI initialization. app = FastAPI(swagger_ui_parameters={"defaultModelsExpandDepth": -1}) Values: -1: schema section hidden 0: schema section is closed 1: schema section is open (default) More options can be found here: https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/
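A hedged sketch of how this combines with the asker's FastAPI() call (title, description and version are copied from the question; the openapi_tags argument is omitted to keep the snippet self-contained, and swagger_ui_parameters is only available in reasonably recent FastAPI releases):
from fastapi import FastAPI

app = FastAPI(
    title="Documentation",
    description="API endpoints",
    version="0.1",
    swagger_ui_parameters={"defaultModelsExpandDepth": -1},  # hides the Schemas section
)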
8
12
70,839,312
2022-1-24
https://stackoverflow.com/questions/70839312/module-numpy-distutils-config-has-no-attribute-blas-opt-info
I'm trying to study the neural-network-and-deep-learning (http://neuralnetworksanddeeplearning.com/chap1.html). Using the updated version for Python 3 by MichalDanielDobrzanski (https://github.com/MichalDanielDobrzanski/DeepLearningPython). Tried to run it in my command console and it gives an error below. I've tried uninstalling and reinstalling setuptools, theano, and numpy but none have worked thus far. Any help is very appreciated!! Here's the full error log: WARNING (theano.configdefaults): g++ not available, if using conda: `conda install m2w64-toolchain` C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\configdefaults.py:560: UserWarning: DeprecationWarning: there is no c++ compiler.This is deprecated and with Theano 0.11 a c++ compiler will be mandatory warnings.warn("DeprecationWarning: there is no c++ compiler." WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string. Traceback (most recent call last): File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\configparser.py", line 168, in fetch_val_for_key return theano_cfg.get(section, option) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\configparser.py", line 781, in get d = self._unify_values(section, vars) File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\configparser.py", line 1149, in _unify_values raise NoSectionError(section) from None configparser.NoSectionError: No section: 'blas' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\configparser.py", line 327, in __get__ val_str = fetch_val_for_key(self.fullname, File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\configparser.py", line 172, in fetch_val_for_key raise KeyError(key) KeyError: 'blas.ldflags' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\ASUS\Documents\GitHub\Neural-network-and-deep-learning-but-for-python-3\test.py", line 156, in <module> import network3 File "C:\Users\ASUS\Documents\GitHub\Neural-network-and-deep-learning-but-for-python-3\network3.py", line 37, in <module> import theano File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\__init__.py", line 124, in <module> from theano.scan_module import (scan, map, reduce, foldl, foldr, clone, File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\scan_module\__init__.py", line 41, in <module> from theano.scan_module import scan_opt File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\scan_module\scan_opt.py", line 60, in <module> from theano import tensor, scalar File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\tensor\__init__.py", line 17, in <module> from theano.tensor import blas File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\tensor\blas.py", line 155, in <module> from theano.tensor.blas_headers import blas_header_text File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\tensor\blas_headers.py", line 987, in <module> if not config.blas.ldflags: File 
"C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\configparser.py", line 332, in __get__ val_str = self.default() File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\theano\configdefaults.py", line 1284, in default_blas_ldflags blas_info = np.distutils.__config__.blas_opt_info AttributeError: module 'numpy.distutils.__config__' has no attribute 'blas_opt_info'
I had the same issue and solved it by downgrading numpy to version 1.20.3: pip3 install --upgrade numpy==1.20.3
14
18
70,854,314
2022-1-25
https://stackoverflow.com/questions/70854314/use-fastapi-to-interact-with-async-loop
I am running coroutines of 'workers' whose job it is to wait 5s, get values from an asyncio.Queue() and print them out continually. q = asyncio.Queue() def worker(): while True: await asyncio.sleep(5) i = await q.get() print(i) q.task_done() async def main(q): workers = [asyncio.create_task(worker()) for n in range(10)] await asyncio.gather(*workers) if __name__ == "__main__": asyncio.run(main()) I would like to be able to interact with the queue through http requests using FastAPI. For example POST requests that would 'put' items in the queue for the workers to print. I'm unsure how I can run the coroutines of the workers concurrently with FastAPI to achieve this effect. Uvicorn has its own event loop I believe and my attempts to use asyncio methods have been unsuccessful. The router would look something like this I think. @app.post("/") async def put_queue(data:str): return q.put(data) And I'm hoping there's something that would have an effect like this: await asyncio.gather(main(),{FastApi() app run})
One option would be to add a task that wraps your main coroutine in an on-startup event: import asyncio @app.on_event("startup") async def startup_event(): asyncio.create_task(main()) This would schedule your main coroutine before the app has fully started. The important part is that you don't await the created task, as that would basically block startup_event forever.
8
12
70,823,915
2022-1-23
https://stackoverflow.com/questions/70823915/random-stealing-calls-to-child-initializer
There's a situation involving sub-classing I can't figure out. I'm sub-classing Random (the reason is besides the point). Here's a basic example of what I have: import random class MyRandom(random.Random): def __init__(self, x): # x isn't used here, but it's necessary to show the problem. print("Before") super().__init__() # Nothing passed to parent print("After") MyRandom([]) The above code, when run, gives the following error (and doesn't print "Before"): >>> import test Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\_\PycharmProjects\first\test.py", line 11, in <module> MyRandom([]) TypeError: unhashable type: 'list' To me, this doesn't make any sense. Somehow, the argument to MyRandom is apparently being passed directly to Random.__init__ even though I'm not passing it along, and the list is being treated as a seed. "Before" never prints, so apparently my initializer is never even being called. I thought maybe this was somehow due to the parent of Random being implemented in C and this was causing weirdness, but a similar case with list sub-classing doesn't yield an error saying that ints aren't iterable: class MyList(list): def __init__(self, y): print("Before") super().__init__() print("After") r = MyList(2) # Prints "Before", "After" I have no clue how to even approach this. I rarely ever sub-class, and even rarer is it that I sub-class a built-in, so I must have developed a hole in my knowledge. This is not how I expect sub-classing to work. If anyone can explain what's going on here, I'd appreciate it. Python 3.9
Instantiating a class causes its __new__ method to be called. It is passed the class and the arguments from the constructor call. So MyRandom([1, 2]) results in the call MyRandom.__new__(MyRandom, [1, 2]). (3.9.10 documentation). Because there isn't a MyRandom.__new__() method, the base classes are searched. random.Random does have a __new__() method (see random_new() in _randommodule.c). So we get a call something like this: random_new(MyRandom, [1, 2]). Looking at the C code for random_new(), it calls random_seed(self, [1, 2]). Because the second argument isn't NULL, or None, or an int, or a subclass of int, the code calls PyObject_Hash([1, 2]). But a list isn't hashable, hence the error. If __new__() returns an instance of the class, then the __init__() method is called with the arguments from the constructor call. One possible fix is to define a MyRandom.__new__() method, which calls super().__new__() but only passes the appropriate args. class MyRandom(random.Random): def __new__(cls, *args, **kwargs): #print(f"In __new__: {args=}, {kwargs=}") # Random.__new__ expects an optional seed. We are going to # implement our own RNG, so ignore args and kwargs. Pass in a # junk integer value so that Random.__new__ doesn't waste time # trying to access urandom or calling time to initialize the MT RNG # since we aren't going to use it anyway. return super().__new__(cls, 123) def __init__(cls, *args, **kwargs): #print(f"In __init__: {args=}, {kwargs=}") # initialize your custom RNG here pass Also override the methods: random(), seed(), getstate(), setstate(), and optionally getrandbits(). An alternative fix is to only use keyword arguments in the __init__() methods of the subclasses. The C code for random_new() checks to see if an instance of random.Random is being created. If true, the code throws an error if there are any keyword arguments. However, if a subclass is being created, any keyword arguments are ignored by random_new(), but can be used in the subclass __init__(). class MyRandom(random.Random): def __init__(self, *, x): # make x a keyword only argument print("Before") super().__init__() # Nothing passed to parent print("After") MyRandom(x=[]) Interestingly, in Python 3.10, the code for random_new has been changed to raise an error if more than 1 positional argument is supplied.
5
2
70,786,543
2022-1-20
https://stackoverflow.com/questions/70786543/remove-all-the-special-chars-from-a-list
I have a list of strings where some of the entries are special characters. What would be the approach to exclude them from the resultant list? list = ['ben','kenny',',','=','Sean',100,'tag242'] expected output = ['ben','kenny','Sean',100,'tag242'] Please guide me with the approach to achieve this. Thanks
The string module has a list of punctuation marks that you can use and exclude from your list of words: import string punctuations = list(string.punctuation) input_list = ['ben','kenny',',','=','Sean',100,'tag242'] output = [x for x in input_list if x not in punctuations] print(output) Output: ['ben', 'kenny', 'Sean', 100, 'tag242'] This list of punctuation marks includes the following characters: ['!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~']
4
11
70,812,698
2022-1-22
https://stackoverflow.com/questions/70812698/add-a-python-path-to-a-module-in-visual-studio-code
I'm having difficulty specifying python path containing modules/packages in another directory or even folder of the same project. When I try to import I get the error: ModuleNotFoundError: No module named 'perception' In Spyder this is simply done using the UI to select an additional pythonpath that python will look in, but I am unable to do so in VSC. Note I have tried following other answers on editing the settings.json files, and .env files but the problem still persists. The only solution I have is to use sys.path.append() in every script which is not what I am looking for. As an example my settings.json file is: { "terminal.integrated.env.osx": { "PYTHONPATH": "pathtoprojectfolder" }, "python.envFile": "${workspaceFolder}/.env", "jupyter.interactiveWindowMode": "perFile", "python.terminal.executeInFileDir": true, "terminal.integrated.inheritEnv": true, "jupyter.themeMatplotlibPlots": true, "window.zoomLevel": 2, "python.condaPath": "path_to_conda_python", "python.defaultInterpreterPath": "path_to_conda_python", }
Instructions for MacOS only. Adding .env files and navigating the settings json files is not as intuitive and simple as adding an additional python path in Spyder. However, this worked for me on VSC: create a .env file in the project folder. add the full path that you want to add to PYTHONPATH as such: PYTHONPATH=/Users/../projectFolder:/Users/.../AnotherProjectFolder Crucially, you must use the appropriate separator between paths; on Mac you must use ':', otherwise the paths will not be added. go to Code->preferences->settings and search for "terminal.integrated.osx". click edit "settings.json" add to the json settings "python.envFile": "${workspaceFolder}/.env", As an example you might have settings.json showing the following: { "python.envFile": "${workspaceFolder}/.env", "jupyter.interactiveWindowMode": "perFile", "python.terminal.executeInFileDir": true, } Quite an involved process to just add a path, but I like the ability to run Jupyter notebooks in VSC, which Spyder doesn't have in the standalone installers yet. This page has the information required: https://code.visualstudio.com/docs/python/environments#_environment-variable-definitions-file
5
3
70,844,974
2022-1-25
https://stackoverflow.com/questions/70844974/onnxruntime-vs-onnxruntimeopenvinoep-inference-time-difference
I'm trying to accelerate my model's performance by converting it to OnnxRuntime. However, I'm getting weird results, when trying to measure inference time. While running only 1 iteration OnnxRuntime's CPUExecutionProvider greatly outperforms OpenVINOExecutionProvider: CPUExecutionProvider - 0.72 seconds OpenVINOExecutionProvider - 4.47 seconds But if I run let's say 5 iterations the result is different: CPUExecutionProvider - 3.83 seconds OpenVINOExecutionProvider - 14.13 seconds And if I run 100 iterations, the result is drastically different: CPUExecutionProvider - 74.19 seconds OpenVINOExecutionProvider - 46.96seconds It seems to me, that the inference time of OpenVinoEP is not linear, but I don't understand why. So my questions are: Why does OpenVINOExecutionProvider behave this way? What ExecutionProvider should I use? The code is very basic: import onnxruntime as rt import numpy as np import time from tqdm import tqdm limit = 5 # MODEL device = 'CPU_FP32' model_file_path = 'road.onnx' image = np.random.rand(1, 3, 512, 512).astype(np.float32) # OnnxRuntime sess = rt.InferenceSession(model_file_path, providers=['CPUExecutionProvider'], provider_options=[{'device_type' : device}]) input_name = sess.get_inputs()[0].name start = time.time() for i in tqdm(range(limit)): out = sess.run(None, {input_name: image}) end = time.time() inference_time = end - start print(inference_time) # OnnxRuntime + OpenVinoEP sess = rt.InferenceSession(model_file_path, providers=['OpenVINOExecutionProvider'], provider_options=[{'device_type' : device}]) input_name = sess.get_inputs()[0].name start = time.time() for i in tqdm(range(limit)): out = sess.run(None, {input_name: image}) end = time.time() inference_time = end - start print(inference_time)
The use of ONNX Runtime with the OpenVINO Execution Provider enables the inferencing of ONNX models using the ONNX Runtime API while the OpenVINO toolkit runs in the backend. This accelerates ONNX models' performance on the same hardware compared to generic acceleration on Intel® CPU, GPU, VPU and FPGA. Generally, the CPU Execution Provider works best with small iteration counts since its intention is to keep the binary size small. Meanwhile, the OpenVINO Execution Provider is intended for Deep Learning inference on Intel CPUs, Intel integrated GPUs, and Intel® Movidius™ Vision Processing Units (VPUs). This is why the OpenVINO Execution Provider outperforms the CPU Execution Provider with larger iteration counts. You should choose the Execution Provider that best suits your requirements. If you are going to execute complex DL models with many iterations, then go for the OpenVINO Execution Provider. For a simpler use case, where you need the binary size to be smaller and the iteration counts are low, you can choose the CPU Execution Provider instead. For more information, you may refer to the ONNX Runtime Performance Tuning documentation.
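As a small sketch that is not part of the original answer, you can also check which execution providers your installed onnxruntime build actually exposes before choosing one (the model file name is taken from the question):
import onnxruntime as rt

# Providers compiled into this onnxruntime build
available = rt.get_available_providers()
print(available)

# Prefer OpenVINO when present, otherwise fall back to the default CPU provider
providers = (['OpenVINOExecutionProvider']
             if 'OpenVINOExecutionProvider' in available
             else ['CPUExecutionProvider'])

sess = rt.InferenceSession('road.onnx', providers=providers)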
7
6
70,836,912
2022-1-24
https://stackoverflow.com/questions/70836912/use-mysql-connector-but-get-importerror-missing-optional-dependency-sqlalche
I work on a program for two months. Today I suddenly got an error when connecting to the database while using mysql.connector. Interestingly, this error is not seen when running previous versions. import mysql.connector import pandas as pd mydb = mysql.connector.connect(host="localhost", user="root", password="*****", database="****") Q = f'SELECT * FROM table' df = pd.read_sql_query(Q, con=mydb) print(df) but I get this error : Traceback (most recent call last): df = pd.read_sql_query(Q, con=mydb) File "g.v1.6\venv\lib\site-packages\pandas\io\sql.py", line 398, in read_sql_query pandas_sql = pandasSQL_builder(con) File "g.v1.6\venv\lib\site-packages\pandas\io\sql.py", line 750, in pandasSQL_builder sqlalchemy = import_optional_dependency("sqlalchemy") File "g.v1.6\venv\lib\site- packages\pandas\compat\_optional.py", line 129, in import_optional_dependency raise ImportError(msg) ImportError: Missing optional dependency 'SQLAlchemy'. Use pip or conda to install SQLAlchemy. What has this got to do with SQLAlchemy??
I just ran into something similar. It looks like Pandas 1.4 was released on January 22, 2022: https://pandas.pydata.org/docs/dev/whatsnew/v1.4.0.html It has an "optional" dependency on SQLAlchemy, which is required to communicate with any database other than sqlite now, as the comment by snakecharmerb mentioned. Once I added that to my requirements and installed SQLAlchemy, it resolved my problem.
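One hedged sketch of what the fix can look like for the asker's setup: build an SQLAlchemy engine for the mysql-connector driver and pass that to read_sql_query (the credentials, database and table names below are placeholders taken from the question):
import pandas as pd
from sqlalchemy import create_engine

# SQLAlchemy engine using the mysql-connector driver; credentials are placeholders
engine = create_engine("mysql+mysqlconnector://root:*****@localhost/somedatabase")

df = pd.read_sql_query("SELECT * FROM sometable", con=engine)
print(df)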
6
9
70,848,406
2022-1-25
https://stackoverflow.com/questions/70848406/how-to-approve-github-pull-request-using-access-token
As per API documentation https://docs.github.com/en/rest/reference/pulls#create-a-review-for-a-pull-request We can use a CURL to approve pull request i.e. curl -s -H "Authorization: token ghp_TOKEN" \ -X POST -d '{"event": "APPROVE"}' \ "https://api.github.com/repos/{owner}/{repo}/pulls/{pull_number}/reviews" but I get this error: { "message": "Not Found", "documentation_url": "https://docs.github.com/rest" } Although, the CURL is working fine for other APIs like: curl -s -H "Authorization: token ghp_TOKEN" \ -X POST -d '{"body": "some message"}' \ "https://api.github.com/repos/{owner}/{repo}/issues/{pull_number}/reviews" I have tried everything to make this work. Can somebody please help me with this?
After a bit of experimentation, it worked with the API /repos/{owner}/{repo}/pulls/{pull_number}/reviews I must say that the GitHub documentation is so poor that I had to spend almost 3 hours to figure this out. A small but proper CURL example would have helped in a few seconds and would have saved me time. Anyway, I'm leaving this solution on StackOverflow so that it helps other people and saves their precious time. CURL: curl -s -H "Authorization: token ghp_BWuQzbiDANEvrQP9vZbqa5LHBAxxIzwi2gM7" \ -X POST -d '{"event":"APPROVE"}' \ "https://api.github.com/repos/tech-security/chatbot/pulls/4/reviews" Python Code: import requests headers = { 'Authorization': 'token ghp_BWuQzbiDANEvrQP9vZbqa5LHBAxxIzwi2gM7', } data = '{"event":"APPROVE"}' response = requests.post('https://api.github.com/repos/tech-security/chatbot/pulls/4/reviews', headers=headers, data=data) print(response.json()) Note: the above GitHub token is a dummy, so don't freak out please! :D
5
6
70,846,882
2022-1-25
https://stackoverflow.com/questions/70846882/specialize-the-regex-type-re-pattern
Specializing the type of re.Pattern to re.Pattern[bytes], mypy correctly detects the type error: import re REGEX: re.Pattern[bytes] = re.compile(b"\xab.{2}") def check(pattern: str) -> bool: if str == "xyz": return REGEX.fullmatch(pattern) is not None return True print(check("abcd")) Type mismatch detected: $ mypy ~/main.py /home/oren/main.py:5: error: Argument 1 to "fullmatch" of "Pattern" has incompatible type "str"; expected "bytes" Found 1 error in 1 file (checked 1 source file) However, when I try to actually run the code I get a weird (?) message: $ python ~/main.py Traceback (most recent call last): File "/home/oren/main.py", line 2, in <module> REGEX: re.Pattern[bytes] = re.compile(b"\xab.{2}") TypeError: 'type' object is not subscriptable How come the type annotation bothers Python?
The ability to specialize the generic re.Pattern and re.Match types using [str] or [bytes] was added in Python 3.9. It seems you are using an older Python version. For Python versions earlier than 3.8 the typing module provides a typing.re namespace which contains replacement types for this purpose. Since Python 3.8, they are directly available in the typing module and the typing.re namespace is deprecated (will be removed in Python 3.12). Reference: https://docs.python.org/3/library/typing.html#typing.Pattern Summary: for Python <3.8, use typing.re.Pattern[bytes] for Python 3.8, use typing.Pattern[bytes] for Python 3.9+, use re.Pattern[bytes]
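For example, on Python 3.8 the asker's annotation could be written as follows (a minimal sketch of the 3.8 variant only):
import re
from typing import Pattern

REGEX: Pattern[bytes] = re.compile(b"\xab.{2}")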
5
5
70,843,273
2022-1-25
https://stackoverflow.com/questions/70843273/is-there-any-way-to-override-inherited-class-attribute-type-in-python-with-mypy
I wrote a defaultdict subclass that can call default_factory with key as argument. from collections import defaultdict from typing import TypeVar, Any, Callable, Generic K = TypeVar('K') class keydefaultdict(defaultdict, Generic[K]): ''' Drop-in replacement for defaultdict accepting key as argument ''' default_factory: Callable[[], Any] | Callable[[K], Any] | None def __missing__(self, key: K) -> Any: if self.default_factory is None: raise KeyError(key) else: try: ret = self[key] = self.default_factory(key) except TypeError: # try no-key signature ret = self[key] = self.default_factory() # if failed, let the error propagate as usual return ret mypy complains on the default_factory type hint: incompatible type in assignment (expression has type "Callable[[], Any] | Callable[[K], Any] | None", base class "defaultdict" defined the type as "Optional[Callable[[], Any]]") Is there any way to override the type? mypy complains also on this lines self.default_factory(...) - too many arguments (or too few arguments) and in places where this dict is instantiated (incompatible type): data = keydefaultdict(lambda key: [key]) Argument 1 to "keydefaultdict" has incompatible type "Callable[[Any], list[Any]]"; expected "Optional[Callable[[], Any]]"
This is intended behavior, as what you're describing violates the Liskov substitution principle. See the mypy documentation for more details. Using defaultdict as a subclass is a bad idea for this reason. But, if you really want to get around this (not recommended), you can use # type: ignore[override], like so: default_factory: Callable[[], Any] | Callable[[K], Any] | None # type: ignore[override] This is described in further detail in the mypy documentation if you want more information.
8
7
70,766,875
2022-1-19
https://stackoverflow.com/questions/70766875/how-to-fix-x-does-not-have-valid-feature-names-but-isolationforest-was-fitted-w
Here is my code: import numpy as np import pandas as pd import seaborn as sns from sklearn.ensemble import IsolationForest data = pd.read_csv('marks1.csv', encoding='latin-1', on_bad_lines='skip', index_col=0, header=0 ) random_state = np.random.RandomState(42) model = IsolationForest(n_estimators=100, max_samples='auto', contamination=float(0.2) , random_state=random_state) model.fit(data[['Mark']]) random_state = np.random.RandomState(42) data['scores'] = model.decision_function(data[['Mark']]) data['anomaly_score'] = model.predict(data[['Mark']]) data[data['anomaly_score'] == -1].head() Error: C:\Program Files\Python39\lib\site-packages\sklearn\base.py:450: UserWarning: X does not have valid feature names, but IsolationForest was fitted with feature names warnings.warn(
It depends on the version of sklearn you are using. In versions from 1.0 onwards, estimators record the column names (the feature_names_in_ attribute) when they are fitted with a dataframe. There was a bug in this version that raised this warning when fitting with dataframes: https://github.com/scikit-learn/scikit-learn/issues/21577 I'm not up to date with the new best practices for this yet, so I cannot say definitively how it should be set up, but I just side-stepped the issue in my code for now. To get around this, I convert my dataframes to a numpy array before training with df.to_numpy()
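Applied to the asker's code (reusing the data DataFrame and model from the question), the workaround is a small change; this is a sketch, not a definitive fix:
# Convert the single-column DataFrame to a plain NumPy array so the model
# is fitted and queried without feature names.
X = data[['Mark']].to_numpy()

model.fit(X)
data['scores'] = model.decision_function(X)
data['anomaly_score'] = model.predict(X)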
10
16
70,837,397
2022-1-24
https://stackoverflow.com/questions/70837397/good-alternative-to-pandas-append-method-now-that-it-has-been-deprecated
I use the following method a lot to append a single row to a dataframe. However, it has been deprecated. One thing I really like about it is that it allows you to append a simple dict object. For example: # Creating an empty dataframe df = pd.DataFrame(columns=['a', 'b']) # Appending a row df = df.append({ 'a': 1, 'b': 2 }, ignore_index=True) Again, what I like most about this is that the code is very clean and requires very few lines. Now I suppose the recommended alternative is: # Create the new row as its own dataframe df_new_row = pd.DataFrame({ 'a': [1], 'b': [2] }) df = pd.concat([df, df_new_row]) So what was one line of code before is now two lines with a throwaway variable and extra cruft where I create the new dataframe. :( Is there a good way to do this that just uses a dict like I have in the past (that is not deprecated)?
Create a list with your dictionaries, if they are needed, and then create a new dataframe with df = pd.DataFrame.from_records(your_list). A list's append method is very efficient and won't ever be deprecated. Dataframes, on the other hand, frequently have to be recreated and all data copied over on each append, due to their design - that is why the method was deprecated.
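A minimal sketch of that pattern, mirroring the single-row example from the question:
import pandas as pd

rows = []  # plain Python list; appending to it is cheap

# collect dicts one at a time, e.g. inside a loop
rows.append({'a': 1, 'b': 2})
rows.append({'a': 3, 'b': 4})

# build the DataFrame once, at the end
df = pd.DataFrame.from_records(rows)
print(df)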
179
73
70,836,444
2022-1-24
https://stackoverflow.com/questions/70836444/no-module-named-symbol
I have problem with my pip . Lately I was getting error when I was trying to install any packages The Error was: ( Pyautogui ) Traceback (most recent call last): File "C:\Users\rati_\OneDrive\Desktop\PyAutoGUI-0.9.53.tar\PyAutoGUI-0.9.53\PyAutoGUI-0.9.53\setup.py", line 4, in <module> from setuptools import setup File "C:\Users\rati_\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\__init__.py", line 12, in <module> from setuptools.extension import Extension File "C:\Users\rati_\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\extension.py", line 7, in <module> from setuptools.dist import _get_unpatched File "C:\Users\rati_\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py", line 16, in <module> import pkg_resources File "C:\Users\rati_\AppData\Local\Programs\Python\Python310\lib\site-packages\pkg_resources.py", line 29, in <module> import symbol ModuleNotFoundError: No module named 'symbol' I reinstalled pip , python but couldn't fix the error ... There was no information online so I couldn't fix it. Any Tips?
Module symbol was a part of the standard library since the dawn of time. It was declared deprecated in Python 3.9 and finally removed in 3.10. For Python 3.10 one has to upgrade any 3rd-party library that imports symbol. In your case the libraries are pip/setuptools: pip install --upgrade pip setuptools If upgrade is not possible or there is no newer version of libraries updated for Python 3.10 the only solution is to downgrade Python.
5
15
70,825,917
2022-1-23
https://stackoverflow.com/questions/70825917/selenium-common-exceptions-webdriverexception-message-unknown-error-devtoolsa
I want to run selenium through chromium. I wrote this code: from selenium import webdriver from selenium.webdriver.chrome.options import Options options = Options() options.add_argument("start-maximized") options.add_argument("disable-infobars") options.add_argument("--disable-extensions") options.add_argument("--disable-gpu") options.add_argument("--disable-dev-shm-usage") options.add_argument("--no-sandbox") options.binary_location = "/snap/bin/chromium" driver = webdriver.Chrome(chrome_options=options) But this code throws an error: selenium.common.exceptions.WebDriverException: Message: unknown error: DevToolsActivePort file doesn't exist Stacktrace: #0 0x55efd7355a23 <unknown> #1 0x55efd6e20e18 <unknown> #2 0x55efd6e46e12 <unknown> The chromedriver of the correct version is in usr/bin. What am I doing wrong?
I solved the problem by reinstalling chromium through apt sudo apt install chromium-browser (before that it was installed through snap). My working code looks like this options = Options() options.add_argument("start-maximized") options.add_argument("disable-infobars") options.add_argument("--disable-extensions") options.add_argument("--disable-dev-shm-usage") options.add_argument("--no-sandbox") if headless: options.add_argument('--headless') options.binary_location = "/usr/bin/chromium-browser" self.driver = webdriver.Chrome(options=options)
8
7
70,833,411
2022-1-24
https://stackoverflow.com/questions/70833411/custom-truncfunc-in-django-orm
I have a Django model with the following structure: class BBPerformance(models.Model): marketcap_change = models.FloatField(verbose_name="marketcap change", null=True, blank=True) bb_change = models.FloatField(verbose_name="bestbuy change", null=True, blank=True) created_at = models.DateTimeField(verbose_name="created at", auto_now_add=True) updated_at = models.DateTimeField(verbose_name="updated at", auto_now=True) I would like to have an Avg aggregate function on objects for every 3 days. for example I write a queryset that do this aggregation for each day or with something like TruncDay function. queryset = BBPerformance.objects.annotate(day=TruncDay('created_at')).values('day').annotate(marketcap_avg=Avg('marketcap_change'),bb_avg=Avg('bb_change') How can I have a queryset of the aggregated value with 3-days interval and the index of the second day of that interval?
I guess it's impossible on the DB level (and Trunc is a DB-level function), as only months, days, weeks and so on are supported in Postgres and Oracle. So what I would suggest is to use TruncDay and then add Python code to group those daily results into 3-day buckets.
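A hedged sketch of that suggestion, reusing the BBPerformance model from the question. The bucket key (calendar day number divided by 3) and the choice of the bucket's middle day as its index are assumptions, and note that averaging per-day averages only equals a true 3-day average when every day has the same number of rows:
from collections import defaultdict

from django.db.models import Avg
from django.db.models.functions import TruncDay

daily = (
    BBPerformance.objects
    .annotate(day=TruncDay('created_at'))
    .values('day')
    .annotate(marketcap_avg=Avg('marketcap_change'), bb_avg=Avg('bb_change'))
    .order_by('day')
)

# Group the per-day rows into 3-day buckets in Python
buckets = defaultdict(list)
for row in daily:
    buckets[row['day'].toordinal() // 3].append(row)

three_day_stats = [
    {
        'index_day': rows[len(rows) // 2]['day'],  # middle day of the bucket
        'marketcap_avg': sum(r['marketcap_avg'] for r in rows) / len(rows),
        'bb_avg': sum(r['bb_avg'] for r in rows) / len(rows),
    }
    for rows in buckets.values()
]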
5
1
70,790,849
2022-1-20
https://stackoverflow.com/questions/70790849/typeerror-set-ticks-got-an-unexpected-keyword-argument-labels
I am trying to run the example of a heatmap from: https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html When I am running the code in PyCharm (Professional) in an Anaconda3 environment it results in an error message. TypeError: set_ticks() got an unexpected keyword argument 'labels'
There is no need to update your matplotlib version if you are using matplotlib 3.4.3. Refer to the matplotlib 3.4.3 documentation and avoid using the labels keyword in the ax.set_xticks() function, as said by JohnC.
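For example, with matplotlib 3.4.x the tick positions and tick labels can be set in two separate calls; the small dummy data below is only for illustration:
import numpy as np
import matplotlib.pyplot as plt

labels = ["A", "B", "C"]
data = np.random.rand(3, 3)

fig, ax = plt.subplots()
im = ax.imshow(data)

# set the tick positions first, then the tick labels (no labels= keyword needed)
ax.set_xticks(np.arange(len(labels)))
ax.set_xticklabels(labels)
ax.set_yticks(np.arange(len(labels)))
ax.set_yticklabels(labels)

plt.show()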
10
8
70,801,888
2022-1-21
https://stackoverflow.com/questions/70801888/ignore-the-first-space-in-csv
I have a CSV file like this: Time Latitude Longitude 2021-09-12 23:13 44.63 -63.56 2021-09-14 23:13 43.78 -62 2021-09-16 23:14 44.83 -54.6 2021-09-12 23:13 is under Time column. I would like to open it using pandas. But there is a problem with the first column. It contains a space. If I open it using: import pandas as pd points = pd.read_csv("test.csv", delim_whitespace=True) I get Time Latitude Longitude 2021-09-12 23:13 44.630 -63.560 2021-09-14 23:13 43.780 -62.000 2021-09-16 23:14 44.830 -54.600 But I would like to skip the space in the first column in CSV (2021-09-12 23:13 should be under Time column) like: Time Latitude Longitude 0 2021-09-12 23:13 44.630 -63.560 1 2021-09-14 23:13 43.780 -62.000 2 2021-09-16 23:14 44.830 -54.600 How can I ignore the first space when using pd.read_csv? Please do not stick to this csv file. This is a general question to skip (not to consider as a delimiter) the first space(s) in the first column. Because everyone knows that the first space is part of the time value, not a delimiter.
Ideally you should be parsing the first two parts as a datetime. By using a space as a delimiter, it would imply the header has three columns. The space after the date though is being seen as an extra column. A workaround is to skip the header entirely and supply your own column names. The parse_dates parameter can be used to tell Pandas to parse the first two columns as a single combined datetime object. For example: import pandas as pd points = pd.read_csv("test.csv", delimiter=" ", skipinitialspace=True, skiprows=1, index_col=None, parse_dates=[[0, 1]], names=["Date", "Time", "Latitude", "Longitude"]) print(points) Should give you the following dataframe: Date_Time Latitude Longitude 0 2021-09-12 23:13:00 44.63 -63.56 1 2021-09-14 23:13:00 43.78 -62.00 2 2021-09-16 23:14:00 44.83 -54.60
7
1
70,832,297
2022-1-24
https://stackoverflow.com/questions/70832297/install-poppler-in-aws-base-python-image-for-lambda
I am trying to deploy my docker container on AWS Lambda. However, I use pdf2image package in my code which depends on poppler. To install poppler, I need to insert the following line in the Dockerfile. RUN apt-get install -y poppler-utils This is the full view of the dockerfile. FROM ubuntu:18.04 RUN apt-get update RUN apt-get install -y poppler-utils RUN apt-get install python3 -y RUN apt-get install python3-pip -y RUN pip3 install --upgrade pip WORKDIR / COPY app.py . COPY requirements.txt . RUN pip3 install -r requirements.txt ENTRYPOINT [ "python3", "app.py" ] However, to deploy on Lambda, I need to use AWS base python image for Lambda. This is my attempt to rewrite the above dockerfile to use the Lambda base image. FROM public.ecr.aws/lambda/python:3.6 # Cannot run the follow lines: apt-get: command not found # RUN apt-get update # RUN apt-get install -y poppler-utils COPY app.py . COPY requirements.txt . RUN pip install -r requirements.txt CMD ["app.handler"] Based on the dockerfile above, you can see that the apt-get command cannot be run. Understandable because it is not from ubuntu image like I did earlier. My question is, how can I install the poppler in the Lambda base image?
It uses the yum package manager, so you can do the following instead: FROM public.ecr.aws/lambda/python:3.6 RUN yum install -y poppler-utils
6
17
70,818,269
2022-1-23
https://stackoverflow.com/questions/70818269/tensorflow-valueerror-unexpected-result-of-train-function-empty-logs-pleas
I'm trying to make a face detection model with CNN. I used codes that I made for number detection. When I use number images, program work. But, when I use my face images, I get an error that is: Unexpected result of train_function (Empty logs). Please use Model.compile(..., run_eagerly=True), or tf.config.run_functions_eagerly(True) for more information of where went wrong, or file a issue/bug to tf.keras. Notebook link: https://github.com/AkifCanSonmez/ImageProccessingCourse/blob/main/CNN/Number%20Classification%20Project/Building%20Model/Building%20Number%20Classification%20Model%20with%20Keras.ipynb Number image: Face image:
Your input images have a shape of (32,32,3) while your first Conv2D layer sets the input_shape to (32,32,1). Most likely your numbers have only 1 channel since they are grayscale, while your face images have 3 color channels. Change: model.add(tf.keras.layers.Conv2D(input_shape = (32,32,1), filters = 8, kernel_size = (5,5),activation = "relu", padding = "same" )) to model.add(tf.keras.layers.Conv2D(input_shape = (32,32,3), filters = 8, kernel_size = (5,5),activation = "relu", padding = "same" ))
9
7
70,825,086
2022-1-23
https://stackoverflow.com/questions/70825086/python-lowpass-filter-with-only-numpy
I need to implement a lowpass filter in Python, but the only module I can use is numpy (not scipy). I tried using np.fft.fft() on the signal, then setting all frequencies which are higher than the cutoff frequency to 0 and then using np.fft.ifft(). However this didn't work and I'm not sure how to apply the filter at all. EDIT: after changing np.abs() to np.real() the result was almost correct. But in the spectrogram the amplitudes are smaller than in the original and the filtered reference (difference of 6dB). So it looks like it's not completely right. Any ideas what could be done to fix that? My lowpass function should take the following arguments: signal: audio signal to be filtered cutoff_freq: cut off frequency in Hz above which to cut off frequencies sampling_rate: sampling rate in samples/second The filtered signal should be returned. My current function: def low_pass_filter(adata: np.ndarray, bandlimit: int = 1000, sampling_rate: int = 44100) -> np.ndarray: # translate bandlimit from Hz to dataindex according to sampling rate and data size bandlimit_index = int(bandlimit * adata.size / sampling_rate) fsig = np.fft.fft(adata) for i in range(bandlimit_index + 1, len(fsig)): fsig[i] = 0 adata_filtered = np.fft.ifft(fsig) return np.real(adata_filtered)
I see that the comments of @Cris Luengo have already steered your solution in the right direction. The last thing you're missing now is that the spectrum you obtain from np.fft.fft is composed of the positive frequency components in the first half and the 'mirrored' negative frequency components in the second half. If you now set all components beyond your bandlimit_index to zero, you're eradicating these negative frequency components. That explains the 6dB drop in signal amplitude: you're eliminating half the signal power (plus, as you already noticed, every real signal has to have a conjugate-symmetric frequency spectrum). The np.fft.ifft function documentation (ifft documentation) explains the expected format quite nicely. It states: "The input should be ordered in the same way as is returned by fft, i.e.," a[0] should contain the zero frequency term, a[1:n//2] should contain the positive-frequency terms, a[n//2 + 1:] should contain the negative-frequency terms, in increasing order starting from the most negative frequency. That's essentially the symmetry you have to preserve. So in order to preserve these components, just set the components between bandlimit_index + 1 and (len(fsig) - bandlimit_index) to zero. def low_pass_filter(adata: np.ndarray, bandlimit: int = 1000, sampling_rate: int = 44100) -> np.ndarray: # translate bandlimit from Hz to dataindex according to sampling rate and data size bandlimit_index = int(bandlimit * adata.size / sampling_rate) fsig = np.fft.fft(adata) for i in range(bandlimit_index + 1, len(fsig) - bandlimit_index ): fsig[i] = 0 adata_filtered = np.fft.ifft(fsig) return np.real(adata_filtered) or maybe if you want to be slightly more 'pythonic', you can also set the components to zero like this: fsig[bandlimit_index+1 : -bandlimit_index] = 0 Here's a colab to walk through: https://colab.research.google.com/drive/1RR_9EYlApDMg4jAS2HuJIpSqwg5RLzGW?usp=sharing
8
8
70,823,561
2022-1-23
https://stackoverflow.com/questions/70823561/aws-cdk-python-passing-multiple-variables-between-stacks
Background I have two Stacks within my CDK App. One is a Stack that defines Network Constructs (such as a VPC, Security Groups, etc), and one is a Stack responsible for creating an EKS cluster. The EKS Cluster needs to be able to consume variables and output from the Networking Stack as part of the cluster provisioning process. Examples below: NetworkStack: class NetworkingStack(Stack): def __init__(self, scope: Construct, id: str, MyCidr,**kwargs) -> None: super().__init__(scope, id) subnet_count = range(1,3) public_subnets = [] worker_subnets = [] control_subnets = [] for x in subnet_count: x = str(x) control_subnets.append(ec2.SubnetConfiguration( name = 'Control-0{}'.format(x), cidr_mask=28, subnet_type = ec2.SubnetType.PRIVATE_WITH_NAT, reserved = False )) worker_subnets.append(ec2.SubnetConfiguration( name = 'Worker-0{}'.format(x), cidr_mask=24, subnet_type = ec2.SubnetType.PRIVATE_WITH_NAT, reserved = False )) public_subnets.append(ec2.SubnetConfiguration( name = 'Public-0{}'.format(x), cidr_mask=27, map_public_ip_on_launch=True, subnet_type = ec2.SubnetType.PUBLIC, reserved = False )) kubernetes_vpc = ec2.Vpc(self, "Kubernetes", cidr=MyCidr, default_instance_tenancy=ec2.DefaultInstanceTenancy.DEFAULT, enable_dns_hostnames=True, enable_dns_support=True, flow_logs=None, gateway_endpoints=None, max_azs=2, nat_gateway_provider=ec2.NatProvider.gateway(), nat_gateways=1, # this is 1 PER AZ subnet_configuration=[*public_subnets,*control_subnets,*worker_subnets], vpc_name="Kubernetes", vpn_connections=None ) controlplane_security_group = ec2.SecurityGroup(self, "ControlPlaneSecurityGroup", vpc=kubernetes_vpc) workerplane_security_group = ec2.SecurityGroup(self, "WorkerPlaneSecurityGroup", vpc=kubernetes_vpc) public_security_group = ec2.SecurityGroup(self, "PublicPlaneSecurityGroup", vpc=kubernetes_vpc) Now, the Cluster Stack needs to consume several of the variables created here. Specifically, it needs to be able to access the kubernetes_vpc, public/control/worker subnets, and the public/control/workerplane security groups In the same region, same account, same CDK environment I have another Stack, the Cluster Stack: ClusterStack: cluster = eks.Cluster( self, "InfrastructureCluster", default_capacity_type=eks.DefaultCapacityType.NODEGROUP, alb_controller=eks.AlbControllerOptions(version="latest"), endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE, version=eks.KubernetesVersion.V1_21, cluster_name="InfrastructureCluster", security_group=MyNetworkStack.controlplane_security_group, vpc=MyNetworkStack.kubernetes_vpc, vpc_subnets=ec2.SubnetSelection( subnets=[*MyNetworkStack.control_subnets, *MyNetworkStack.worker_subnets, *MyNetworkStack.public_subnets], ) ) The official documentation uses the example of sharing a bucket between Stacks in order to access the StackA bucket from StackB. They do this by passing the object between Stacks. This would mean I need to create an argument for each variable that I'm looking for a pass it between Stacks. Is there an easier way? I tried passing the actual NetworkingStack to the ClusterStack as such: class ClusterStack(Stack): def __init__(self, scope: Construct, id: str, MyNetworkStack, **kwargs) -> None: super().__init__(scope, id, **kwargs) and then in app.py: NetworkingStack(app, "NetworkingStack", MyCidr="10.40.0.0/16") ClusterStack(app, "ClusterStack", MyNetworkStack=NetworkingStack) but that doesn't seem to work when I reference the variables in NetworkingStack as attributes e.g. security_group=MyNetworkStack.controlplane_security_group. 
The error I get usually relates to this variables specifically AttributeError: type object 'NetworkingStack' has no attribute 'controlplane_security_group' Question: What is the best way to pass in multiple variables between Stacks, without having to define each variable as an argument?
Step 1 Change any variable you want to reference to an attribute of the NetworkStack class. So, instead of: controlplane_security_group = ec2.SecurityGroup(self, "ControlPlaneSecurityGroup", vpc=kubernetes_vpc) You should do: self.controlplane_security_group = ec2.SecurityGroup(self, "ControlPlaneSecurityGroup", vpc=kubernetes_vpc) Step 2 Instead of passing the class, you need to pass the object. So, instead of: NetworkingStack(app, "NetworkingStack", MyCidr="10.40.0.0/16") ClusterStack(app, "ClusterStack", MyNetworkStack=NetworkingStack) You should do: networking_stack = NetworkingStack(app, "NetworkingStack", MyCidr="10.40.0.0/16") ClusterStack(app, "ClusterStack", MyNetworkStack=networking_stack) Now MyNetworkStack.controlplane_security_group would work correctly. I would also suggest one more change which is to change the name MyNetworkStack to my_network_stack since that's the general Python convention for method argument names according to PEP8.
9
8
70,822,030
2022-1-23
https://stackoverflow.com/questions/70822030/how-to-fix-template-does-not-exist-in-django
I am a beginner in the Django framework. I created my project created my app and test it, it work fine till I decided to add a template. I don't know where the error is coming from because I follow what Django docs say by creating folder name templates in your app folder creating a folder with your app name and lastly creating the HTML file in the folder. NOTE: other routes are working fine except the template Please view the screenshot of my file structure and error below. ERROR TemplateDoesNotExist at /blog/ Blog/index Request Method: GET Request URL: http://127.0.0.1:8000/blog/ Django Version: 4.0.1 Exception Type: TemplateDoesNotExist Exception Value: Blog/index Exception Location: C:\Python39\lib\site-packages\django\template\loader.py, line 19, in get_template Python Executable: C:\Python39\python.exe Python Version: 3.9.4 Python Path: ['C:\\Users\\Maxwell\\Desktop\\Django\\WebApp', 'C:\\Python39\\python39.zip', 'C:\\Python39\\DLLs', 'C:\\Python39\\lib', 'C:\\Python39', 'C:\\Users\\Maxwell\\AppData\\Roaming\\Python\\Python39\\site-packages', 'C:\\Python39\\lib\\site-packages', 'C:\\Python39\\lib\\site-packages\\win32', 'C:\\Python39\\lib\\site-packages\\win32\\lib', 'C:\\Python39\\lib\\site-packages\\Pythonwin'] Server time: Sun, 23 Jan 2022 12:04:18 +0000 Template-loader postmortem Django tried loading these templates, in this order: Using engine django: django.template.loaders.app_directories.Loader: C:\Users\Maxwell\Desktop\Django\WebApp\Blog\templates\ Blog\index (Source does not exist) django.template.loaders.app_directories.Loader: C:\Python39\lib\site-packages\django\contrib\admin\templates\ Blog\index (Source does not exist) django.template.loaders.app_directories.Loader: C:\Python39\lib\site-packages\django\contrib\auth\templates\ Blog\index (Source does not exist) App/views.py from django.http import HttpResponse from django.shortcuts import render # Create your views here. def index(request,name): return HttpResponse(f'Hello {name}') def html(request): return render(request,' Blog/index.html')
It seems that you render the template with Blog/index, but you need to specify the entire file name, so Blog/index.html and without a leading (or trailing) space: def html(request): return render(request, 'Blog/index.html')
11
3
70,780,758
2022-1-20
https://stackoverflow.com/questions/70780758/how-to-generate-random-normal-distribution-without-numpy-google-interview
So I have a data science interview at Google, and I'm trying to prepare. One of the questions I see a lot (on Glassdoor) from people who have interviewed there before has been: "Write code to generate random normal distribution." While this is easy to do using numpy, I know sometimes Google asks the candidate to code without using any packages or libraries, so basically from scratch. Any ideas?
According to the Central Limit Theorem a normalised summation of independent random variables will approach a normal distribution. The simplest demonstration of this is adding two dice together. So maybe something like: import random import matplotlib.pyplot as plt def pseudo_norm(): """Generate a value between 1-100 in a normal distribution""" count = 10 values = sum([random.randint(1, 100) for x in range(count)]) return round(values/count) dist = [pseudo_norm() for x in range(10_000)] n_bins = 100 fig, ax = plt.subplots() ax.set_title('Pseudo-normal') hist = ax.hist(dist, bins=n_bins) plt.show() Which generates something like:
7
7
70,820,707
2022-1-23
https://stackoverflow.com/questions/70820707/export-module-components-in-python
In Julia, it is possible to export a function (or a variable, struct, etc.) of a module, after which it can be called in another script without the namespace (once it has been imported). For example: # helpers.jl module helpers function ordinary_function() # bla bla bla end function exported_function() # bla bla bla end export exported_function() end # main.jl include("helpers.jl") using .helpers #we have to call the non-exported function with namespace helpers.ordinary_function() #but we can call the exported function without namespace exported_function() Is this feature available in Python?
In Python importing is easier than this:

# module.py
def foo():
    pass

# file.py
from module import foo
foo()

# file2.py
from file import foo
foo()

This works with classes too. In Python you can also do something like this:

import module

# You have to call it like this:
module.foo()

When you import a module, all the names the module itself imported are considered part of that module. Using the example below:

# file.py
import module

# file2.py
import file
file.module.foo()

# or...
from file import module
module.foo()

# ...or (only if file is a package and module is a submodule of it)...
from file.module import foo
foo()

Notice that in Python no export is needed. Look at the documentation: no export keyword exists in Python.
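If the goal is specifically to control which names a consumer gets, the closest Python relative of Julia's export list is the module-level __all__ variable, which restricts what a wildcard import pulls in. A small sketch - the file names helpers.py and main.py simply mirror the Julia example above:

# helpers.py
__all__ = ["exported_function"]   # only this name is picked up by "from helpers import *"

def ordinary_function():
    pass

def exported_function():
    pass

# main.py
from helpers import *

exported_function()      # available without the namespace
# ordinary_function()    # NameError: filtered out by __all__

Unlike Julia's export, __all__ only affects wildcard imports; explicit imports such as from helpers import ordinary_function still work.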
5
5
70,819,525
2022-1-23
https://stackoverflow.com/questions/70819525/send-long-message-in-telegram-bot-python
I have a Telegram bot and I want it to send me a message built from the data collected from my log files. My code is:

path = 'C:\\Bot\\Log\\aaa\\*.log'
files = glob.glob(path)
nlines = 0
data = "Servers : \n"
for name in files:
    with open(name) as f:
        for line in f:
            nlines += 1
            if (line.find("Total") >= 0):
                data += line
                for i in range(5):
                    data += next(f)
                data += f'\n{emoji.emojize(":blue_heart:")} ----------------------------------------------------{emoji.emojize(":blue_heart:")}\n'
            if (line.find("Source") >= 0):
                data += line

query.edit_message_text(
    text=f"{data}",
    reply_markup=build_keyboard(number_list),
)

My error is:

telegram.error.BadRequest: Message_too_long

Given this code, how can I send my message with the bot?
It's still an open issue, but you can split your text into chunks of at most 4096 characters per send. You have 2 options:

if len(info) > 4096:
    for x in range(0, len(info), 4096):
        bot.send_message(message.chat.id, info[x:x+4096])
else:
    bot.send_message(message.chat.id, info)

or

msgs = [message[i:i + 4096] for i in range(0, len(message), 4096)]
for text in msgs:
    update.message.reply_text(text=text)
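If you need this in several handlers, the same splitting can be factored into a tiny helper; the function name send_long_message and the send_func parameter are my own, not from the original answer:

TELEGRAM_MAX_LEN = 4096  # Telegram's per-message text limit

def send_long_message(send_func, text, max_len=TELEGRAM_MAX_LEN):
    """Send `text` in as many chunks as needed, calling `send_func(chunk)` for each."""
    for start in range(0, len(text), max_len):
        send_func(text[start:start + max_len])

# Example usage with python-telegram-bot style handlers:
# send_long_message(lambda chunk: update.message.reply_text(text=chunk), data)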
7
8
70,813,009
2022-1-22
https://stackoverflow.com/questions/70813009/parsing-env-files
How do I safely store passwords and API keys in an .env file and properly parse them using Python? I want to store passwords that I do not wish to push to public repos.
To read the key/value pairs of an .env file you can use os.getenv(key), where you replace key with the name of the value you want to access. Suppose the contents of the .env file are:

A=B
FOO=BAR
SECRET=VERYMUCH

You can read the values like this:

import os

print(os.getenv("A"))
print(os.getenv("FOO"))
print(os.getenv("SECRET"))
# And so on...

Which would result in this output (stdout):

B
BAR
VERYMUCH

Now, if you are using git version control and you never want to push an environment file accidentally, add this to your .gitignore file and it will be conveniently ignored:

.env

(Although you can make your life a bit easier by using an existing .gitignore template, all of which ignore .env files by default.)

Lastly: you can load .env just like a text file by doing

with open(".env") as env:
    ...

and then proceed to parse it manually, e.g. with a regex.

In case the .env file isn't picked up by your Python script, use the python-dotenv module. You can make use of that library like this:

from dotenv import load_dotenv

load_dotenv()  # take environment variables from .env

# Code of your application, which uses environment variables (e.g. from `os.environ` or
# `os.getenv`) as if they came from the actual environment.
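The "parse it manually" route mentioned above can be as small as the following sketch for simple KEY=VALUE files (the function name parse_env_file is my own, not from the answer):

import os

def parse_env_file(path=".env"):
    """Read simple KEY=VALUE lines into a dict, skipping blanks and # comments."""
    values = {}
    with open(path) as env:
        for line in env:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"').strip("'")
    return values

secrets = parse_env_file()
os.environ.update(secrets)        # optional: expose them via os.getenv()
print(secrets.get("SECRET"))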
6
7
70,820,178
2022-1-23
https://stackoverflow.com/questions/70820178/some-problems-about-python-inherited-classmethod
I have this code:

from typing import Callable, Any

class Test(classmethod):
    def __init__(self, f: Callable[..., Any]):
        super().__init__(f)

    def __get__(self, *args, **kwargs):
        print(args)  # why is the output (None, <class '__main__.A'>)? where does None come from, why not the parameter 123, and where was this called?
        return super().__get__(*args, **kwargs)

class A:
    @Test
    def b(cls, v_b):
        print(cls, v_b)

A.b(123)

Why is the output (None, <class '__main__.A'>)? Where did None come from, and why is it not the parameter 123, which is the value I called it with?
The __get__ method is called when the method b is retrieved from the A class. It has nothing to do with the actual calling of b. To illustrate this, separate the access to b from the actual call of b:

print("Getting a reference to method A.b")
method = A.b
print("I have a reference to the method now. Let's call it.")
method(123)

This results in this output:

Getting a reference to method A.b
(None, <class '__main__.A'>)
I have a reference to the method now. Let's call it.
<class '__main__.A'> 123

So you see, it is normal that the output in __get__ does not show anything about the argument you call b with, because you haven't made the call yet.

The output None, <class '__main__.A'> is in line with the Python documentation on __get__:

object.__get__(self, instance, owner=None)
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional owner argument is the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner.

In your case you are using it for accessing an attribute (b) of a class (A) -- not of an instance of A -- so that explains why the instance argument is None and the owner argument is your class A.

The second output, made with print(cls, v_b), will print <class '__main__.A'> for cls, because that is what happens when you call class methods (as opposed to instance methods). Again, from the documentation:

When a class attribute reference (for class C, say) would yield a class method object, it is transformed into an instance method object whose __self__ attribute is C.

Your case is described here, where A is the class, and so the first parameter (which you called cls) will get A as its value.
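To see the counterpart case the documentation describes - access through an instance rather than through the owner class - you can extend the question's example as follows; with the Test descriptor above, the first element of args is then the instance instead of None:

a = A()
a.b(123)

# __get__ now prints something like:
#   (<__main__.A object at 0x...>, <class '__main__.A'>)
# and the call itself still prints:
#   <class '__main__.A'> 123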
5
3
70,819,937
2022-1-23
https://stackoverflow.com/questions/70819937/using-pd-concat-instead-of-df-append-with-pandas-1-4
I am using df.append() in my code to append the percentage change across dataframe columns. With df.append() being deprecated in pandas 1.4, I am trying to use pd.concat but I am not able to replicate the output. So here is what I have now:

import numpy as np
import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo", "bar", "bar", "bar", "bar"],
                   "B": ["one", "one", "one", "two", "two", "one", "one", "two", "two"],
                   "C": ["small", "large", "large", "small", "small", "large", "small", "small", "large"],
                   "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
                   "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})

table = pd.pivot_table(df, values='D', index=['A', 'B'],
                       columns=['C'], aggfunc=np.sum, fill_value=0, margins=True)

table.append(
    (table
     .iloc[-1]
     .pct_change(periods=1, fill_method=None)
     .fillna('')
     .apply(lambda x: '{:.1%}'.format(x) if x else '')
     )
)

The output is:

C        large  small  All
A   B
bar one      4      5    9
    two      7      6   13
foo one      4      1    5
    two      0      6    6
All         15     18   33
                20.0%  83.3%

which is what I am after, but I am getting the deprecation warning:

FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.

I changed my code to use pd.concat(), as follows:

pd.concat([table, (table
                   .iloc[-1]
                   .pct_change(periods=1, fill_method=None)
                   .fillna('')
                   .apply(lambda x: '{:.1%}'.format(x) if x else '')
                   )],
          )

now I am getting:

            large  small   All      0
(bar, one)    4.0    5.0   9.0    NaN
(bar, two)    7.0    6.0  13.0    NaN
(foo, one)    4.0    1.0   5.0    NaN
(foo, two)    0.0    6.0   6.0    NaN
(All, )      15.0   18.0  33.0    NaN
large         NaN    NaN   NaN
small         NaN    NaN   NaN  20.0%
All           NaN    NaN   NaN  83.3%

which is not what I expected - note the percentage changes (20% and 83.3%) compared to the output from append. Any input would be appreciated.
Use pd.concat like this. Convert your inner command to a DataFrame using Series.to_frame and then transpose it using df.T:

In [74]: pd.concat([table,
                    table.iloc[-1]
                         .pct_change(periods=1, fill_method=None)
                         .fillna('')
                         .apply(lambda x: '{:.1%}'.format(x) if x else '')
                         .to_frame()
                         .T])
Out[74]:
C        large  small  All
A   B
bar one      4      5    9
    two      7      6   13
foo one      4      1    5
    two      0      6    6
All         15     18   33
                20.0%  83.3%
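If this pattern appears in several places, the same idea can be wrapped in a small helper; the function name append_pct_change is my own, not part of the original answer:

import pandas as pd

def append_pct_change(table: pd.DataFrame) -> pd.DataFrame:
    """Return `table` with a row of formatted pct changes of its last row appended."""
    pct_row = (table.iloc[-1]
                    .pct_change(periods=1, fill_method=None)
                    .fillna('')
                    .apply(lambda x: '{:.1%}'.format(x) if x else '')
                    .to_frame()
                    .T)
    return pd.concat([table, pct_row])

result = append_pct_change(table)   # `table` as built in the question
print(result)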
5
4
70,817,125
2022-1-22
https://stackoverflow.com/questions/70817125/poetry-how-to-keep-using-the-old-virtual-environment-when-changing-the-name-of
I am using Poetry in some of my Python projects. It's not unusual that at some stage I want to rename the root folder of my project. When I do that and run poetry shell, Poetry creates a new virtual environment. But I don't want a new virtual environment, I just want to keep using the existing one. I know I can manually activate the old one by running source {path to the old venv}/bin/activate, but then I would have to keep track of the old environment name separately and refrain from using poetry shell. Is there something I can do about this? It's quite time consuming to start installing the dependencies again, pointing an IDE to the new environment and deleting the old virtual env, just because you have changed the root folder name - something that can happen multiple times. This question suggests that there is no solution to the problem, but I would like to confirm this, because it seems like quite an annoying issue to me.
One option is to enable the virtualenvs.in-project option, e.g. by running

poetry config virtualenvs.in-project true

If set to true, the virtualenv wil [sic] be created and expected in a folder named .venv within the root directory of the project.

This will cause Poetry to create new environments in $project_root/.venv/. If you rename the project directory, the environment should continue to work.
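For an existing project the switch typically looks like this (a sketch, not from the original answer): run poetry config virtualenvs.in-project true once, check the old cache-based environment with poetry env list, optionally remove it with poetry env remove <old-env-name>, and run poetry install so that .venv/ is created inside the project root. This does mean one last reinstall of the dependencies, but after that, renaming the folder no longer loses the environment.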
8
5
70,815,781
2022-1-22
https://stackoverflow.com/questions/70815781/why-does-mypy-strict-not-throw-an-error-in-this-simple-code
I have the following in test.py:

def f(x: int) -> float:
    pass

if __name__ == "__main__":
    f(4)

When I run mypy --strict test.py, I get no errors. I expected mypy would be able to infer that there is a problem with my definition of f. It obviously has no return statement and can never return a float. I feel like there is something fundamental which I do not understand here. The presence (or not) of a return statement is something that can be checked statically. Why does mypy miss it?
The syntax you use is recognized as a function stub, not as a function implementation. Normally, a function stub is written as:

def f(x: int) -> float:
    ...

but this is merely a convenience for

def f(x: int) -> float:
    pass

From the mypy documentation:

Function bodies cannot be completely removed. By convention, we replace them with ... instead of the pass statement.

As mypy does no checks on the function body of stubs, you get no error in this case.

And as pointed out by @domarm in the comments, adding any statement besides pass (even a second pass statement) to the function body will result in the expected mypy error.
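To illustrate that last point, a body containing any real statement is no longer treated as a stub, so with --strict mypy reports the missing return (the exact message wording may vary between mypy versions):

def f(x: int) -> float:
    x += 1          # any statement other than a bare pass/... body

if __name__ == "__main__":
    f(4)

# mypy --strict test.py now reports something like:
#   error: Missing return statement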
5
4
70,810,857
2022-1-22
https://stackoverflow.com/questions/70810857/split-geometric-progression-efficiently-in-python-pythonic-way
I am trying to achieve a calculation involving a geometric progression (split). Is there any effective/efficient way of doing it? The data set has millions of rows. I need the column "Traded_quantity":

                  Marker Action  Traded_quantity
2019-11-05 09:25       0                       0
           09:35       2    BUY                3
           09:45       0                       0
           09:55       1    BUY                4
           10:05       0                       0
           10:15       3    BUY               56
           10:24       6    BUY             8128

turtle = 2         # (User defined)
base_quantity = 1  # (User defined)

def turtle_split(row):
    if row['Action'] == 'BUY':
        return base_quantity * (turtle ** row['Marker'] - 1) // (turtle - 1)
    else:
        return 0

df['Traded_quantity'] = df.apply(turtle_split, axis=1).round(0).astype(int)

Calculation

For the 0th row, Traded_quantity should be zero (because the Marker is zero).
For the 1st row, Traded_quantity should be (1x1) + (1x2) = 3. Marker 2 is split into 1 and 1: the first 1 is multiplied by base_quantity (1x1), the second 1 is multiplied by the result of the first times turtle (1x2), and the two numbers are summed.
For the 2nd row, Traded_quantity should be zero (because the Marker is zero).
For the 3rd row, Traded_quantity should be (2x2) = 4. Marker 1 is multiplied by the last split from row 1 times turtle, i.e. 2x2.
For the 4th row, Traded_quantity should be zero (because the Marker is zero).
For the 5th row, Traded_quantity should be (4x2)+(4x2x2)+(4x2x2x2) = 56. Marker 3 is split into 1, 1 and 1: the first 1 is multiplied by the last split from row 3 times turtle (4x2), the second 1 by the result of the first times turtle (8x2), the third 1 by the result of the second times turtle (16x2), and the three numbers are summed.
For the 6th row, Traded_quantity should be (32x2)+(32x2x2)+(32x2x2x2)+(32x2x2x2x2)+(32x2x2x2x2x2) = 8128.

Whenever there is a BUY, the traded quantity is calculated using the last batch from Traded_quantity times turtle. It turns out the code generates the correct Traded_quantity when there is no zero in Marker. Once there is a gap with a couple of zeros, the geometric progression formula alone does not help; I would need the previous figure (from a cache) to recalculate Traded_quantity. I tried lru_cache with recursion, but it didn't work.
This should work:

def turtle_split(row):
    global base_quantity
    if row['Action'] == 'BUY':
        summation = base_quantity * (turtle ** row['Marker'] - 1) // (turtle - 1)
        base_quantity = base_quantity * (turtle ** (row['Marker'] - 1)) * turtle
        return summation
    else:
        return 0
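Because this version mutates a global inside df.apply, it only gives the right result if the function runs exactly once per row, in order, and base_quantity is reset before each run. A plain loop that carries the state explicitly is a safer equivalent of the same formula - a sketch assuming the same turtle and base_quantity settings as in the question; the function name compute_traded_quantity is my own:

def compute_traded_quantity(df, turtle=2, base_quantity=1):
    """Apply the same formula row by row, tracking base_quantity locally."""
    result = []
    for marker, action in zip(df['Marker'], df['Action']):
        if action == 'BUY':
            summation = base_quantity * (turtle ** marker - 1) // (turtle - 1)
            base_quantity = base_quantity * (turtle ** (marker - 1)) * turtle
            result.append(summation)
        else:
            result.append(0)
    return result

df['Traded_quantity'] = compute_traded_quantity(df)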
7
5