question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
75,384,141 | 2023-2-8 | https://stackoverflow.com/questions/75384141/python-polars-find-the-length-of-a-string-in-a-dataframe | I am trying to count the number of letters in a string in Polars. I could probably just use an apply method and get the len(Name). However, I was wondering if there is a polars specific method? import polars as pl df = pl.DataFrame({ "start_date": ["2020-01-02", "2020-01-03", "2020-01-04", "2020-01-05"], "Name": ["John", "Joe", "James", "Jörg"] }) In Pandas I can use .str.len() >>> df.to_pandas()["Name"].str.len() 0 4 1 3 2 5 3 4 Name: Name, dtype: int64 But that does not exist in Polars: df.with_columns(pl.col("Name").str.len()) # AttributeError: 'ExprStringNameSpace' object has no attribute 'len' | You can use .str.len_bytes() that counts number of bytes in the UTF8 string .str.len_chars() that counts number of characters df.with_columns( pl.col("Name").str.len_bytes().alias("bytes"), pl.col("Name").str.len_chars().alias("chars") ) shape: (4, 4) ┌────────────┬───────┬───────┬───────┐ │ start_date │ Name │ bytes │ chars │ │ --- │ --- │ --- │ --- │ │ str │ str │ u32 │ u32 │ ╞════════════╪═══════╪═══════╪═══════╡ │ 2020-01-02 │ John │ 4 │ 4 │ │ 2020-01-03 │ Joe │ 3 │ 3 │ │ 2020-01-04 │ James │ 5 │ 5 │ │ 2020-01-05 │ Jörg │ 5 │ 4 │ └────────────┴───────┴───────┴───────┘ | 3 | 6 |
75,445,290 | 2023-2-14 | https://stackoverflow.com/questions/75445290/poetry-no-file-folder-for-package | I have a simple project layout myproject on ๎ main [$!?] is ๐ฆ v1.0.0 via ๎ v18.14.0 via ๐ v3.10.9 โฏ tree -L 1 . โโโ build โโโ deploy โโโ Dockerfile โโโ poetry.lock โโโ pyproject.toml โโโ README.md โโโ scripts.py โโโ src The pyproject.toml is: [tool.poetry] name = "myproject" version = "0.1.0" description = "" authors = [""] [tool.poetry.scripts] test = "scripts:test" The scripts.py is: import subprocess def test(): """ Run all unittests. """ subprocess.run( ['python', '-u', '-m', 'pytest'] ) if __name__ == '__main__': test() When I run poetry run test: myproject on main [$!?] is ๐ฆ v1.0.0 via ๎ v18.14.0 via ๐ v3.10.9 No file/folder found for package myproject | Short answer: The directory where your pyproject.toml file sits needs to be share the same name, e.g., if the name config in pyproject.toml is name = "myproject", the directory needs to be also named myproject. Therefore, you must either: Rename the directory to match your name configuration in the pyproject.toml or Move the pyproject.toml file to the correct directory or If you don't care about making your project packageable by Poetry and only use poetry as your package manager for this project, add package-mode = false to your pyproject.toml. Explanation: There's a thread on the official GitHub for Poetry that discusses this issue. The gist of the issue is that whatever you choose for your project name, i.e., name = "myproject", needs to be the direct parent directory of where your pyproject.toml file lives. When there is a mismatch, you get the error you are getting. Sources: Breakdown of the cause of the issue on thread on GitHub A Poetry contributor's response on thread on GitHub | 11 | 10 |
75,383,650 | 2023-2-8 | https://stackoverflow.com/questions/75383650/how-to-limit-rows-in-pandas-dataframe | How to limit number of rows in pandas dataframe in python code. I needed last 1000 rows the rest need to delete. For example 1000 rows, in pandas dataframe -> 1000 rows in csv. I tried df.iloc[:1000] I needed autoclean pandas dataframe and saving last 1000 rows. | With df.iloc[:1000] you get the first 1000 rows. Since you want to get the last 1000 rows, you have to change this line a bit to df_last_1000 = df.iloc[-1000:] To safe it as a csv file you can use pandas' to_csv() method: df_last_1000.to_csv("last_1000.csv") Update - Speed Comparison: Both .tail(1000) and .iloc[-1000:, :] return the last 1000 rows, so let's compare their performance: import pandas as pd import numpy as np import timeit df = pd.DataFrame(np.random.rand(1000000, 5), columns=['A', 'B', 'C', 'D', 'E']) def tail_operation(): _ = df.tail(1000) def iloc_operation(): _ = df.iloc[-1000:, :] tail_time = timeit.timeit(tail_operation, number=1000) iloc_time = timeit.timeit(iloc_operation, number=1000) print(f"Execution time for tail operation: {tail_time} seconds") print(f"Execution time for iloc operation: {iloc_time} seconds") Execution time for tail operation: 0.0280200999986846 seconds Execution time for iloc operation: 0.07651790000090841 seconds | 3 | 7 |
75,391,653 | 2023-2-8 | https://stackoverflow.com/questions/75391653/importerror-cannot-import-name-association-proxy-from-sqlalchemy-ext-associa | I am working on a small flask API using flask-admin and flask-sqlalchemy. The api runs well but whenever I install a new package I am faced with an error. Error: While importing 'app', an ImportError was raised: Traceback (most recent call last): File "/Users/brandoncreed/.local/share/virtualenvs/management-System-Wyc_ba1N/lib/python3.8/site-packages/flask/cli.py", line 218, in locate_app __import__(module_name) File "/Users/brandoncreed/Desktop/Brandon Coding Projects/management-System/src/app.py", line 12, in <module> from api.admin import setup_admin File "/Users/brandoncreed/Desktop/Brandon Coding Projects/management-System/src/api/admin.py", line 5, in <module> from flask_admin.contrib.sqla import ModelView File "/Users/brandoncreed/.local/share/virtualenvs/management-System-Wyc_ba1N/lib/python3.8/site-packages/flask_admin/contrib/sqla/__init__.py", line 2, in <module> from .view import ModelView File "/Users/brandoncreed/.local/share/virtualenvs/management-System-Wyc_ba1N/lib/python3.8/site-packages/flask_admin/contrib/sqla/view.py", line 18, in <module> from flask_admin.contrib.sqla.tools import is_relationship File "/Users/brandoncreed/.local/share/virtualenvs/management-System-Wyc_ba1N/lib/python3.8/site-packages/flask_admin/contrib/sqla/tools.py", line 11, in <module> from sqlalchemy.ext.associationproxy import ASSOCIATION_PROXY ImportError: cannot import name 'ASSOCIATION_PROXY' from 'sqlalchemy.ext.associationproxy' (/Users/brandoncreed/.local/share/virtualenvs/management-System-Wyc_ba1N/lib/python3.8/site-packages/sqlalchemy/ext/associationproxy.py) I am not sure what is causing this but it seems there is a conflict between flask_admin and sqlalchemy after the new package is installed. I deleted the pipfile.lock in my virtual environment and ran pipenv install to see if the new file would work but it did not. I also tried uninstalling the new package to see if it would resolve the issue but the same error still persists. I am wondering if it was to do with updating either flask-admin or sqlalchemy. | This issue has been fixed in Flask-Admin v1.6.1, which can be installed like this: python3 -m pip install --upgrade flask-admin Flask-Admin versions earler than v1.6.1 are not compatible with SQLAlchemy 2.0. The was an issue for this specific problem. The recommended workaround was to install an earlier version of SQLALchemy, for example: python3 -m pip install --upgrade 'sqlalchemy<2.0' | 3 | 8 |
75,453,995 | 2023-2-14 | https://stackoverflow.com/questions/75453995/pandas-plot-vars-argument-must-have-dict-attribute | It was working perfectly earlier but for some reason now I am getting strange errors. pandas version: 1.2.3 matplotlib version: 3.7.0 sample dataframe: df cap Date 0 1 2022-01-04 1 2 2022-01-06 2 3 2022-01-07 3 4 2022-01-08 df.plot(x='cap', y='Date') plt.show() df.dtypes cap int64 Date datetime64[ns] dtype: object I get a traceback: Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_core.py", line 955, in __call__ return plot_backend.plot(data, kind=kind, **kwargs) File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/__init__.py", line 61, in plot plot_obj.generate() File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/core.py", line 279, in generate self._setup_subplots() File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/core.py", line 337, in _setup_subplots fig = self.plt.figure(figsize=self.figsize) File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/_api/deprecation.py", line 454, in wrapper return func(*args, **kwargs) File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 813, in figure manager = new_figure_manager( File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 382, in new_figure_manager _warn_if_gui_out_of_main_thread() File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 360, in _warn_if_gui_out_of_main_thread if _get_required_interactive_framework(_get_backend_mod()): File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 208, in _get_backend_mod switch_backend(rcParams._get("backend")) File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 331, in switch_backend manager_pyplot_show = vars(manager_class).get("pyplot_show") TypeError: vars() argument must have __dict__ attribute | In fact, this problem may be caused by running your code like default script (or in PyCharm interactive console), not in Jupyter. If it is true, you can fix this error by setting up backend directly in your file with use function: import matplotlib as mpl import matplotlib.pyplot as plt mpl.use('TkAgg') # !IMPORTANT fig, ax = plt.subplots() res = ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) # Plt some data on the axes.o # plt.show() # optionally show the result. In some cases TkAgg may not be available. When firstly check, which backend you use. for this, run this simple code: import matplotlib as mpl print(mpl.get_backend()) BUT! this must be run just by your hands in default terminal, outside of PyCharm. (e.g create simple test.py file, paste code, and run python test.py) Why? Because PyCharm (at least in scientific projects) runs all files with the interactive console, where backend module://backend_interagg is used. And this backend causes same error as you have. So. add mpl.use('TkAgg') in the head of your file, or checkout which backend you can use and paste those names in this function. | 11 | 15 |
75,394,143 | 2023-2-9 | https://stackoverflow.com/questions/75394143/openai-api-error-no-module-named-openai-embeddings-utils-openai-is-not-a | I want to use openai.embeddings_utils import get_embeddings So already install openai Name: openai Version: 0.26.5 Summary: Python client library for the OpenAI API Home-page: https://github.com/openai/openai-python Author: OpenAI Author-email: [email protected] License: Location: /Users/lima/Desktop/Paprika/Openai/.venv/lib/python3.9/site-packages Requires: aiohttp, requests, tqdm Required-by: This is my openai But why not use openai.embeddings_utils?? | For my case, check the version of openai. openai.embeddings_utils does not exist in latest openai 1.2.0, but exists in 0.27.7 | 10 | 8 |
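A quick way to confirm which client line is installed before attempting the import (the helper names below are the ones exposed by the pre-1.0 module, as used in the question):

```python
from importlib.metadata import version

print(version("openai"))  # e.g. "0.27.7" (has embeddings_utils) vs "1.2.0" (does not)

# On openai < 1.0 this import works, provided its numpy/pandas extras are installed;
# on the 1.x line the module no longer exists and a ModuleNotFoundError is raised.
from openai.embeddings_utils import get_embedding, get_embeddings
```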
75,440,354 | 2023-2-13 | https://stackoverflow.com/questions/75440354/why-does-pandas-read-excel-fail-on-an-openpyxl-error-saying-readonlyworksheet | This bug suddenly came up literally today after read_excel previously was working fine. Fails no matter which version of python3 I use - either 10 or 11. Do folks know the fix? File "/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py", line 38, in <module> main() File "/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py", line 25, in main sb = diana.superbills.load_superbills_births(args.site, ath) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/aizenman/My Drive/code/daily_new_clients/code/diana/superbills.py", line 148, in load_superbills_births sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name="Births", parse_dates=["DOS", "DOB"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 482, in read_excel io = ExcelFile(io, storage_options=storage_options, engine=engine) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 1695, in __init__ self._reader = self._engines[engine](self._io, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py", line 557, in __init__ super().__init__(filepath_or_buffer, storage_options=storage_options) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 545, in __init__ self.book = self.load_workbook(self.handles.handle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py", line 568, in load_workbook return load_workbook( ^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py", line 346, in load_workbook reader.read() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py", line 303, in read self.parser.assign_names() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/workbook.py", line 109, in assign_names sheet.defined_names[name] = defn ^^^^^^^^^^^^^^^^^^^ AttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names' | Currently, we face the issue on python3.8 environment (Rocky 8.8 OS). We downgrade the openpyxl module to 3.0.9 to fix this issue. sudo pip3 install openpyxl==3.0.9 Enjoy it! | 13 | 0 |
75,442,308 | 2023-2-14 | https://stackoverflow.com/questions/75442308/how-to-calculate-the-month-begin-and-end-month-date-from-date-in-polars | Is there an efficient way to get the month end date on a date column. Like if date = "2023-02-13" to return "2023-02-28", also beginning of the month would be great as well. Thanks! df = pl.DataFrame({'DateColumn': ['2022-02-13']}) test_df = df.with_columns([ pl.col('DateColumn').str.strptime(pl.Date).cast(pl.Date) ] ) ┌────────────┐ │ DateColumn │ │ --- │ │ date │ ╞════════════╡ │ 2022-02-13 │ └────────────┘ Two new columns would be perfect. | [Update]: Polars has since added .month_start() and .month_end() methods. See the answer from @n-maks You could use .truncate and .offset_by test_df.with_columns( MonthStart = pl.col("DateColumn").dt.truncate("1mo"), MonthEnd = pl.col("DateColumn").dt.offset_by("1mo").dt.truncate("1mo").dt.offset_by("-1d") ) shape: (1, 3) ┌────────────┬────────────┬────────────┐ │ DateColumn | MonthStart | MonthEnd │ │ --- | --- | --- │ │ date | date | date │ ╞════════════╪════════════╪════════════╡ │ 2022-02-13 | 2022-02-01 | 2022-02-28 │ └────────────┴────────────┴────────────┘ | 3 | 7 |
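For completeness, the newer expressions mentioned in the update can be used like this (a short sketch; whether .dt.month_start()/.dt.month_end() are available depends on the installed Polars release):

```python
import polars as pl

df = pl.DataFrame({"DateColumn": ["2022-02-13"]}).with_columns(
    pl.col("DateColumn").str.strptime(pl.Date)
)

out = df.with_columns(
    MonthStart=pl.col("DateColumn").dt.month_start(),
    MonthEnd=pl.col("DateColumn").dt.month_end(),
)
print(out)  # 2022-02-13 -> 2022-02-01 and 2022-02-28
```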
75,406,182 | 2023-2-10 | https://stackoverflow.com/questions/75406182/pyexcel-get-book-and-get-records-functions-throw-exceptions-for-xlsx-files | I'm trying to open an XLSX file using pyexcel. But it fails for both get_book and get_records with the following error. However if I try to read the same file converted to xls it does work. I get the files uploaded by users: so can not restrict uploading files in XLSX format. >>> import pyexcel >>> workbook = pyexcel.get_book(file_name='Sample_Employee_data_xls.xlsx') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/me/env/lib/python3.10/site-packages/pyexcel/core.py", line 47, in get_book book_stream = sources.get_book_stream(**keywords) File "/home/me/env/lib/python3.10/site-packages/pyexcel/internal/core.py", line 38, in get_book_stream sheets = a_source.get_data() File "/home/me/env/lib/python3.10/site-packages/pyexcel/plugins/sources/file_input.py", line 38, in get_data sheets = self.__parser.parse_file(self.__file_name, **self._keywords) File "/home/me/env/lib/python3.10/site-packages/pyexcel/plugins/parsers/excel.py", line 19, in parse_file return self._parse_any(file_name, **keywords) File "/home/me/env/lib/python3.10/site-packages/pyexcel/plugins/parsers/excel.py", line 40, in _parse_any sheets = get_data(anything, file_type=file_type, **keywords) File "/home/me/env/lib/python3.10/site-packages/pyexcel_io/io.py", line 86, in get_data data, _ = _get_data( File "/home/me/env/lib/python3.10/site-packages/pyexcel_io/io.py", line 105, in _get_data return load_data(**keywords) File "/home/me/env/lib/python3.10/site-packages/pyexcel_io/io.py", line 205, in load_data result = reader.read_all() File "/home/me/env/lib/python3.10/site-packages/pyexcel_io/reader.py", line 95, in read_all content_dict = self.read_sheet_by_index(sheet_index) File "/home/me/env/lib/python3.10/site-packages/pyexcel_io/reader.py", line 84, in read_sheet_by_index sheet_reader = self.reader.read_sheet(sheet_index) File "/home/me/env/lib/python3.10/site-packages/pyexcel_xlsx/xlsxr.py", line 148, in read_sheet sheet = SlowSheet(native_sheet, **self.keywords) File "/home/me/env/lib/python3.10/site-packages/pyexcel_xlsx/xlsxr.py", line 72, in __init__ for ranges in sheet.merged_cells.ranges[:]: TypeError: 'set' object is not subscriptable >>> workbook = pyexcel.get_book(file_name='Sample_Employee_data_xls.xls') # working Here is my requirements file. asgiref==3.6.0 asttokens==2.2.1 autopep8==2.0.1 backcall==0.2.0 certifi==2022.12.7 chardet==5.1.0 charset-normalizer==2.1.1 decorator==5.1.1 Django==3.2.16 django-cors-headers==3.13.0 django-filter==22.1 djangorestframework==3.13.1 et-xmlfile==1.1.0 executing==1.2.0 idna==3.4 ipython==8.8.0 jedi==0.18.2 lml==0.1.0 matplotlib-inline==0.1.6 openpyxl==3.1.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 prompt-toolkit==3.0.36 ptyprocess==0.7.0 pure-eval==0.2.2 pycodestyle==2.10.0 pyexcel==0.7.0 pyexcel-io==0.6.6 pyexcel-xls==0.7.0 pyexcel-xlsx==0.6.0 Pygments==2.14.0 pytz==2022.7 requests==2.28.1 six==1.16.0 sqlparse==0.4.3 stack-data==0.6.2 texttable==1.6.7 tomli==2.0.1 traitlets==5.8.1 urllib3==1.26.13 wcwidth==0.2.5 xlrd==2.0.1 xlwt==1.3.0 | You can downgrade openpyxl to 3.0.10 for now. I referenced your issue here: https://foss.heptapod.net/openpyxl/openpyxl/-/issues/1960 Pyexcel uses openpyxl to open xlsx files. The commands are: pip uninstall openpyxl pip install openpyxl==3.0.10 | 3 | 11 |
75,382,340 | 2023-2-8 | https://stackoverflow.com/questions/75382340/python-pandas-read-excel-error-value-must-be-either-numerical-or-a-string-conta | I dont know why this error occurs. pd.read_excel('data/A.xlsx', usecols=["B", "C"]) Then I get this error: "Value must be either numerical or a string containing a wild card" So i change my code use nrows all data pd.read_excel('data/A.xlsx', usecols=["B","C"], nrows=172033) Then there is no error and a dataframe is created. my excel file has 172034 rows, 1st is column name. | This problem is unique to the latest version of the Openpyxl library, v3.1.2. Downgrading to v.3.0.10 will fix this issue. | 14 | 24 |
75,424,730 | 2023-2-12 | https://stackoverflow.com/questions/75424730/what-type-hint-should-i-write-when-the-return-type-is-uncertain | For example, if I define this function: def open_pkl(src: str) -> ?: with open('serialized.pkl', 'rb') as f: data = pickle.load(f) return data what type hint should I write for the return value? Now, I write the function as: def open_pkl(src: str): with open('serialized.pkl', 'rb') as f: data = pickle.load(f) return data Is there type a hint for an uncertain return type? | There are two options: object and typing.Any. Returning an object signals to the caller of the function that nothing can be assumed about the returned object (since everything is an object, saying that something is an object gives no information). So, if a user were to do def open_pkl(src: str) -> object: ... something = open_pkl('some/file') print(len(something)) that would be a type violation, even if the object were a list, because objects per se don't have a __len__ method. typing.Any, on the other hand, is like a wild card which could hypothetically be anything. So, if you reworked the above example to have a typing.Any return type, there would be no type violation. Does a typing.Any have a __len__ method? Maybe. Who says it couldn't? To summarize, you should use object if you want to "force" (because type hints are just suggestions) your users to verify the type of any object returned by this function. Use typing.Any to be more lax. | 3 | 4 |
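A small sketch of the difference described in the answer, applied to the question's function (the isinstance check is just one way to narrow the type for the checker):

```python
import pickle
from typing import Any


def open_pkl_any(src: str) -> Any:
    with open(src, "rb") as f:
        return pickle.load(f)  # caller may treat the result as anything


def open_pkl_object(src: str) -> object:
    with open(src, "rb") as f:
        return pickle.load(f)  # caller must narrow the type before use


data = open_pkl_any("serialized.pkl")
print(len(data))  # accepted by the type checker, may still fail at runtime

data2 = open_pkl_object("serialized.pkl")
if isinstance(data2, (list, dict, str)):
    print(len(data2))  # narrowing makes len() acceptable to the checker
```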
75,418,252 | 2023-2-11 | https://stackoverflow.com/questions/75418252/how-to-create-3d-torus-from-circle-revolved-about-x-2r-r-is-the-radius-of-circl | I need help to create a torus out of a circle by revolving it about x=2r, r is the radius of the circle. I am open to either JULIA code or Python code. Whichever that can solve my problem the most efficient. I have Julia code to plot circle and the x=2r as the axis of revolution. using Plots, LaTeXStrings, Plots.PlotMeasures gr() ฮธ = 0:0.1:2.1ฯ x = 0 .+ 2cos.(ฮธ) y = 0 .+ 2sin.(ฮธ) plot(x, y, label=L"x^{2} + y^{2} = a^{2}", framestyle=:zerolines, legend=:outertop) plot!([4], seriestype="vline", color=:green, label="x=2a") I want to create a torus out of it, but unable, meanwhile I have solid of revolution Python code like this: # Calculate the surface area of y = sqrt(r^2 - x^2) # revolved about the x-axis import matplotlib.pyplot as plt import numpy as np import sympy as sy x = sy.Symbol("x", nonnegative=True) r = sy.Symbol("r", nonnegative=True) def f(x): return sy.sqrt(r**2 - x**2) def fd(x): return sy.simplify(sy.diff(f(x), x)) def f2(x): return sy.sqrt((1 + (fd(x)**2))) def vx(x): return 2*sy.pi*(f(x)*sy.sqrt(1 + (fd(x) ** 2))) vxi = sy.Integral(vx(x), (x, -r, r)) vxf = vxi.simplify().doit() vxn = vxf.evalf() n = 100 fig = plt.figure(figsize=(14, 7)) ax1 = fig.add_subplot(221) ax2 = fig.add_subplot(222, projection='3d') ax3 = fig.add_subplot(223) ax4 = fig.add_subplot(224, projection='3d') # 1 is the starting point. The first 3 is the end point. # The last 200 is the number of discretization points. # help(np.linspace) to read its documentation. x = np.linspace(1, 3, 200) # Plot the circle y = np.sqrt(2 ** 2 - x ** 2) t = np.linspace(0, np.pi * 2, n) xn = np.outer(x, np.cos(t)) yn = np.outer(x, np.sin(t)) zn = np.zeros_like(xn) for i in range(len(x)): zn[i:i + 1, :] = np.full_like(zn[0, :], y[i]) ax1.plot(x, y) ax1.set_title("$f(x)$") ax2.plot_surface(xn, yn, zn) ax2.set_title("$f(x)$: Revolution around $y$") # find the inverse of the function y_inverse = x x_inverse = np.power(2 ** 2 - y_inverse ** 2, 1 / 2) xn_inverse = np.outer(x_inverse, np.cos(t)) yn_inverse = np.outer(x_inverse, np.sin(t)) zn_inverse = np.zeros_like(xn_inverse) for i in range(len(x_inverse)): zn_inverse[i:i + 1, :] = np.full_like(zn_inverse[0, :], y_inverse[i]) ax3.plot(x_inverse, y_inverse) ax3.set_title("Inverse of $f(x)$") ax4.plot_surface(xn_inverse, yn_inverse, zn_inverse) ax4.set_title("$f(x)$: Revolution around $x$ \n Surface Area = {}".format(vxn)) plt.tight_layout() plt.show() | My take with Makie: using GLMakie Base.@kwdef mutable struct Torus R::Float64 = 2 r::Float64 = 1 end function generate_torus(torus::Torus, resolution=100; upto=1.0) u = range(0, stop=2ฯ*upto, length=resolution) v = range(0, stop=2ฯ*upto, length=resolution) x = [ ( torus.R + torus.r*cos(vแตข) ) * cos(uแตข) for vแตข in v, uแตข in u ] y = [ ( torus.R + torus.r*cos(vแตข) ) * sin(uแตข) for vแตข in v, uแตข in u ] z = [ torus.r * sin(vแตข) for vแตข in v, _ in u ] return x, y, z end function animate_torus(torus::Torus, filename="torus.gif") fig = Figure(resolution = (500, 500)) ax = fig[1, 1] = LScene(fig) cam3d!(ax.scene) # Initialize surface plot with full torus x, y, z = generate_torus(torus) p = surface!(ax, x, y, z, colormap=:viridis, shading = false) # Set the limits xlims!(ax.scene, (-4, 4)) ylims!(ax.scene, (-4, 4)) zlims!(ax.scene, (-1, 1)) record(fig, filename, range(0, stop=1, length=100)) do i # Update data of surface plot x, y, z = generate_torus(torus, upto=i) p[1] = x 
p[2] = y p[3] = z end end And with Plots using Plots Base.@kwdef mutable struct Torus x::Float64 = cos(0) + cos(0)*cos(0) y::Float64 = sin(0) + cos(0)*sin(0) z::Float64 = sin(0) end function step!(torus::Torus, t, u; r=1) torus.x = r*cos(t) + cos(u)*cos(t) torus.y = r*sin(t) + cos(u)*sin(t) torus.z = sin(u) return torus end torus = Torus() plt = plot([0], [0], [0], aspect_ratio=:equal, xlims=(-3, 3), ylims=(-3, 3), zlims=(-3, 3)) @gif for t in range(0, stop=2ฯ, length=100) for u in range(0, stop=2ฯ, length=100) step!(torus, t, u, r=2) push!(plt, torus.x, torus.y, torus.z) end end every 2 | 4 | 1 |
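Since the question accepts Python as well, a minimal matplotlib sketch of the same surface (tube radius r, revolved about x = 2r, so the centre-circle radius is R = 2r) could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

r = 2.0        # radius of the original circle (the "a" in the question)
R = 2 * r      # distance from the axis of revolution x = 2r to the circle centre

u = np.linspace(0, 2 * np.pi, 100)   # angle around the tube (the original circle)
v = np.linspace(0, 2 * np.pi, 100)   # angle of revolution
u, v = np.meshgrid(u, v)

x = (R + r * np.cos(u)) * np.cos(v)
y = (R + r * np.cos(u)) * np.sin(v)
z = r * np.sin(u)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z, cmap="viridis")
ax.set_box_aspect((1, 1, 0.4))       # keep the torus from looking stretched
plt.show()
```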
75,392,950 | 2023-2-9 | https://stackoverflow.com/questions/75392950/how-does-python-version-affect-azure-functions | I'm developing Azure Functions using Python 3.10.10 on my machine, deploying the Function through Azure DevOps which is building the artifact using Python 3.6.8, and the Python Version shown for the Function App host is 3.8. There was a recent update of Azure Functions Runtime which deprecated Python 3.6. (see breaking changes here). How does python version affect Azure Functions? How do we keep the versions aligned? | Alignment Always keep in Azure DevOps a version of python venv that matches the App host and also keep the same dependencies within a requirements.txt file so that you don't have conflicts from different libraries. On your local you should also have a python venv that matches the same version of the host. I would suggest to downgrade from your current 3.10.10 to 3.8.x just to avoid having conflicts. Python version Python version per se shouldn't generally affect functionality (unless there are big changes between minor versions), it's usually the library dependencies that are breaking the functionalities (deprecation and implementations of methods in different way) | 3 | 3 |
75,419,693 | 2023-2-11 | https://stackoverflow.com/questions/75419693/transparent-textures-being-rendered-as-black-in-opengl | I have a texture with transparent parts, but instead of being rendered transparent they're black. To test if the RGBA values get passed on correctly to the shader, I made everything render in greyscale. And as I thought the alpha values weren't getting passed on correctly. So anyway here is how the textures get loaded: @classmethod def load_texture(cls, file_name: str): try: img = Image.open(f"{sys.path[0]}/res/{file_name}.png").convert('RGBA') img = img.transpose(Image.FLIP_TOP_BOTTOM) # flip image upside down except Exception as e: print(e) img = Image.open(f"{sys.path[0]}/res/missing_texture.png").convert('RGBA') img = img.transpose(Image.FLIP_TOP_BOTTOM) # flip image upside down ix, iy, image = img.size[0], img.size[1], img.tobytes("raw", "RGBA", 0, -1) texture_id = glGenTextures(1) # generate a texture ID cls.__textures.append(texture_id) glBindTexture(GL_TEXTURE_2D, texture_id) # make it current # copy the texture into the current texture texture_id glTexImage2D(GL_TEXTURE_2D, 0, 3, ix, iy, 0, GL_RGBA, GL_UNSIGNED_BYTE, image) glPixelStorei(GL_UNPACK_ALIGNMENT, 1) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) return texture_id And this is my fragment shader (the commented out part is obviously not working): #version 400 core in vec2 pass_texture_coords; in vec3 surface_normal; in vec3 to_light_vector; in vec3 to_camera_vector; out vec4 out_color; uniform sampler2D model_texture; uniform vec3 light_color; uniform float shine_damper; uniform float reflectivity; void main(void){ vec4 texture_color = texture(model_texture, pass_texture_coords); texture_color = vec4(vec3(texture_color.a), 1.0); //if(texture_color.a < 0.5){ // discard; //} vec3 unit_normal = normalize(surface_normal); vec3 unit_light_vector = normalize(to_light_vector); float n_dot1 = dot(unit_normal, unit_light_vector); float brightness = max(n_dot1, 0.1); vec3 diffuse = brightness * light_color; vec3 unit_vector_to_camera = normalize(to_camera_vector); vec3 light_direction = -unit_light_vector; vec3 reflected_light_direction = reflect(light_direction, unit_normal); float specular_factor = dot(reflected_light_direction, unit_vector_to_camera); specular_factor = max(specular_factor, 0.0); float damped_factor = pow(specular_factor, shine_damper); vec3 final_specular = damped_factor * reflectivity * light_color; out_color = vec4(diffuse, 1.0) * texture_color + vec4(final_specular, 1.0); } Also before rendering anything, this method gets called: @staticmethod def update_display(): InputController().apply_input() # apply inputs every frame glEnable(GL_BLEND) glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) # so that transparent bits of the texture won't get rendered glEnable(GL_ALPHA_TEST) glEnable(GL_DEPTH_TEST) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) # Remove everything from screen (i.e. displays all white) glClearColor(0, 0.3, 0, 1) # set backdrop color glLoadIdentity() # Reset all graphic/shape's position | From glTexImage2D: glTexImage2D( GL_TEXTURE_2D, // Target 0, // Level 3, // Internalformat -> 3 -> GL_RGB -> no alpha // Framebuffer does not necessarily // need an alpha channel. // But if you want transparent textures // you have to specify one, // E.g. GL_RGBA (or any other format // from Table 2 with alpha). 
ix, // Width iy, // Height 0, // Border, must be 0 GL_RGBA, // Format of data GL_UNSIGNED_BYTE, // Type of data image // Data, which gets converted from format // to internalformat. In your case, the // alpha components will be discarded ); Hint: always use symbols, e.g., GL_XXXX. Legacy OpenGL accepts a numeric value for internalformat, but the specification in the link above explicitly mandates one of the symbols from the tables. | 3 | 2 |
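Applied to the question's PyOpenGL loader, the fix is the single glTexImage2D call inside load_texture: request an internal format that keeps alpha instead of the bare 3:

```python
# before: internalformat 3 (GL_RGB) silently drops the alpha channel
# glTexImage2D(GL_TEXTURE_2D, 0, 3, ix, iy, 0, GL_RGBA, GL_UNSIGNED_BYTE, image)

# after: keep the alpha channel so blending / alpha test can use it
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ix, iy, 0, GL_RGBA, GL_UNSIGNED_BYTE, image)
```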
75,395,892 | 2023-2-9 | https://stackoverflow.com/questions/75395892/writing-a-tuple-search-with-django-orm | I'm trying to write a search based on tuples with the Django ORM syntax. The final sql statement should look something like: SELECT * FROM mytable WHERE (field_a,field_b) IN ((1,2),(3,4)); I know I can achieve this in django using the extra keyword: MyModel.objects.extra( where=["(field_a, field_b) IN %s"], params=[((1,2),(3,4))] ) but the "extra" keyword will be deprecated at some point in django so I'd like a pure ORM/django solution. Searching the web, I found https://code.djangoproject.com/ticket/33015 and the comment from Simon Charette, something like the snippet below could be OK, but I can't get it to work. from django.db.models import Func, lookups class ExpressionTuple(Func): template = '(%(expressions)s)' arg_joiner = "," MyModel.objects.filter(lookups.In( ExpressionTuple('field_a', 'field_b'), ((1,2),(3,4)), )) I'm using Django 3.2 but I don't expect Django 4.x to do a big difference here. My db backend is posgresql in case it matters. | For reference and inspired from akshay-jain proposal, I managed to write something that works: from django.db.models import Func,Value def ValueTuple(items): return tuple(Value(i) for i in items) class Tuple(Func): function = '' qs = ( MyModel.objects .alias(a=Tuple('field_a', 'field_b')) .filter(a__in=ValueTuple([(1, 2), (3, 4)]) ) It does produces a sql query like SELECT * FROM table WHERE (field_a,field_b) IN ((1,2),(3,4)); And can be extended to more fields than just two. I didn't do any benchmarks to compare it to Q objects filtering though. | 5 | 4 |
75,426,868 | 2023-2-12 | https://stackoverflow.com/questions/75426868/tensorflow-gpu-problem-libnvinfer-so-7-and-libnvinfer-so-7-could-not-load | I installed TensorFlow under WSL 2, Ubuntu 22.04 (Jammy Jellyfish), I followed the instructions in Install TensorFlow with pip. *I also installed Nvidia drivers for Windows and in my other WSL 2, I use GPU-supported simulation program. Everything seemed OK. I didn't get any error message during installation, but when I imported TensorFlow in Python 3, I got this error: 2023-02-12 14:49:58.544771: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvrtc.so.11.0: cannot open shared object file: No such file or directory 2023-02-12 14:49:58.544845: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory 2023-02-12 14:49:58.544874: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. I searched my libnvinfer_plugin.so.7 files: sudo find / -name libnvinfer.so.7 2> /dev/null and I found them in this directory: cat /usr/lib/x86_64-linux-gnu/libnvinfer.so.7 and I added this directory to LD_LIBRARY_PATH like in Could not load dynamic library 'libnvinfer.so.7', but nothing changed. Still TensorFlow is working, but I can't use the GPU. nvidia-smi: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 515.65.01 Driver Version: 516.94 CUDA Version: 11.7 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... On | 00000000:01:00.0 Off | N/A | | N/A 43C P0 22W / N/A | 0MiB / 6144MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ nvcc--version: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Fri_Jan__6_16:45:21_PST_2023 Cuda compilation tools, release 12.0, V12.0.140 Build cuda_12.0.r12.0/compiler.32267302_0 *The TensorFlow version is: 2.11.0 So, how can I fix this problem? | I changed version and the problem was solved: pip install --upgrade tensorflow==2.8 Note: When I use v2.10, I get the same error message. v2.8 is stable now. | 3 | 8 |
75,451,239 | 2023-2-14 | https://stackoverflow.com/questions/75451239/how-to-reference-input-in-params-section-of-snakemake-rule | I need to process my input file values, turning them into a comma-separated string (instead of white space) in order to pass them to a CLI program. To do this, I want to run the input files through a Python function. How can I reference the input files of a rule in the params section of the same rule? This is what I've tried, but it doesn't work: rule a: input: foo="a.txt", bar=expand({build}.txt,build=config["build"]), output: baz=result.txt, params: joined_bar=lambda w: ",".join(input.bar), # this doesn't work shell: """ qux --comma-separated-files {params.joined_bar} \ --foo {input.foo} \ >{output.baz} """ It fails with: InputFunctionException: AttributeError: 'builtin_function_or_method' object has no attribute 'bar' Potentially related but (over-)complicated questions: How to define parameters for a snakemake rule with expand input Is Snakemake params function evaluated before input file existence? | Turns out I need to explicitly add input to the lambda w: part: rule a: input: foo="a.txt", bar=expand({build}.txt,build=config["build"]), output: baz=result.txt, params: joined_bar=lambda w, input: ",".join(input.bar), # ', input' was added shell: """ qux --comma-separated-files {params.joined_bar} \ --foo {input.foo} \ >{output.baz} """ Interestingly, I found that one needs to use input in the lambda w, input. In my testing, lambda w, i did not work. And alternative is to refer to the rule input in the standard way: rules.a.input.bar: rule a: input: foo="a.txt", bar=expand({build}.txt,build=config["build"]), output: baz=result.txt, params: joined_bar=lambda w: ",".join(rules.a.input.bar), # 'rules.a.' was added shell: """ qux --comma-separated-files {params.joined_bar} \ --foo {input.foo} \ >{output.baz} """ Also see http://biolearnr.blogspot.com/2017/11/snakemake-using-inputoutput-values-in.html for a discussion. | 4 | 7 |
75,407,052 | 2023-2-10 | https://stackoverflow.com/questions/75407052/installing-test-files-with-pyproject-toml-and-setuptools | I'm migrating an old python project to the new pyproject.toml based system and am having trouble with getting files that are required by tests to install. Inside the pyproject.toml I have: [tool.setuptools] package-data = {"my_pkg_name" = ["tests/*.sdf", "tests/*.urdf", "tests/*.xml", "tests/meshes/*.obj"]} [build-system] requires = ["setuptools>=43.0.0", "wheel"] build-backend = "setuptools.build_meta" The tests that are run with pytest require the files described under package-data. After I build and install the build, the test files are not there. How do I get those files to be installed? How to include package data with setuptools/distutils? may be related, but things have changed, and I would rather not have to create a manifest file. The project structure looks something like: . โโโ LICENSE.txt โโโ pyproject.toml โโโ README.md โโโ src โ โโโ my_pkg_name โ โ โโโ __init__.py โโโ tests โโโ ant.xml โโโ humanoid.xml โโโ __init__.py โโโ kuka_iiwa.urdf โโโ meshes โ โโโ link_0.obj โ โโโ link_1.obj โ โโโ link_2.obj โ โโโ link_3.obj โ โโโ link_4.obj โ โโโ link_5.obj โ โโโ link_6.obj โ โโโ link_7.obj โโโ test_transform.py The pyproject.toml has no specific package discovery related settings. | Looking more into this, specifically: https://setuptools.pypa.io/en/stable/userguide/datafiles.html#non-package-data-files my problem is that these files are not part of the default found packages which are just the ones under src. It is also not clear whether test files should be installed or not - many projects explicitly exclude test files from installation. To answer the original question of how to install the test files, the solution is to make tests a package (as I already have above), then specify to find it with setup tools like so: [tool.setuptools.packages.find] where = ["src", "tests"] [tool.setuptools.package-data] "*" = ["*.sdf", "*.urdf", "*.xml", "*.obj"] I believe package-data requires setuptools >= 61 which you can specify like: [build-system] requires = ["setuptools>=61.0.0", "wheel"] build-backend = "setuptools.build_meta" Further answering the deeper issue of installed tests failing due to not being able to find test files, the solution in my github actions workflow for doing automated testing on commits was to install in editable mode. Specifically: - name: Install dependencies run: | python -m pip install --upgrade pip python -m pip install -e . python -m pip install flake8 pytest if [ -f requirements.txt ]; then pip install -r requirements.txt; fi This is because in my tests, I refer to resource files based on a module's path: open(os.path.join(cfg.TEST_DIR, "simple_arm.sdf")) in the cfg module: import os ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')) TEST_DIR = os.path.join(ROOT_DIR, 'tests') EDIT: A better solution is to reference the test files based on the test script path rather than the package's module's path. This allows us to still test the non-editable installed project. To do this, I have instead TEST_DIR = os.path.dirname(__file__) in each test script, and replace references to cfg. | 3 | 3 |
75,439,401 | 2023-2-13 | https://stackoverflow.com/questions/75439401/child-class-from-magicmock-object-has-weird-spec-str-and-cant-use-or-mock-met | When a class is created deriving from a MagicMock() object it has an unwanted spec='str'. Does anyone know why this happens? Does anyone know any operations that could be done to the MagicMock() object in this case such that it doesn't have the spec='str' or can use methods of the class? from unittest.mock import MagicMock a = MagicMock() class b(): @staticmethod def x(): return 1 class c(a): @staticmethod def x(): return 1 print(a) print(b) print(c) print(a.x()) print(b.x()) print(c.x()) which returns MagicMock id='140670188364408'> <class '__main__.b'> <MagicMock spec='str' id='140670220499320'> <MagicMock name='mock.x()' id='140670220574848'> 1 Traceback (most recent call last): File "/xyz/test.py", line 19, in <module> print(c.x()) File "/xyz/lib/python3.7/unittest/mock.py", line 580, in _getattr_ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'x' Basically I need the AttributeError to not be here. Is there something I can do to 'a' such that c.x() is valid? edit - the issue seems to be with _mock_add_spec in mock.py still not sure how to fix this. | In Python, classes are actually instances of the type class. A class statement like this: class c(a): @staticmethod def x(): return 1 is really syntactic sugar of calling type with the name of the class, the base classes and the class members: c = type('c', (a,), {'x': staticmethod(lambda: 1)}) The above statement would go through the given base classes and call the __new__ method of the type of the first base class with the __new__ method defined, which in this case is a. The return value gets assigned to c to become a new class. Normally, a would be an actual class--an instance of type or a subclass of type. But in this case, a is not an instance of type, but rather an instance of MagicMock, so MagicMock.__new__, instead of type.__new__, is called with these 3 arguments. And here lies the problem: MagicMock is not a subclass of type, so its __new__ method is not meant to take the same arguments as type.__new__. And yet, when MagicMock.__new__ is called with these 3 arguments, it takes them without complaint anyway because according to the signature of MagicMock's constructor (which is the same as Mock's): class unittest.mock.Mock(spec=None, side_effect=None, return_value=DEFAULT, wraps=None, name=None, spec_set=None, unsafe=False, **kwargs) MagicMock.__new__ would assign the 3 positional arguments as spec, side_effect and return_value, respectively. As you now see, the first argument, the class name ('c' in this case), an instance of str, becomes spec, which is why your class c becomes an instance of MagicMock with a spec of str. 
The solution Luckily, a magic method named __mro_entries__ was introduced since Python 3.7 that can solve this problem by providing a non-class base class with a substitute base class, so that when a, an instance of MagicMock, is used as a base class, we can use __mro_entries__ to force its child class to instead use a's class, MagicMock (or SubclassableMagicMock in the following example), as a base class: from unittest.mock import MagicMock class SubclassableMagicMock(MagicMock): def __mro_entries__(self, bases): return self.__class__, so that: a = SubclassableMagicMock() class b(): @staticmethod def x(): return 1 class c(a): @staticmethod def x(): return 1 print(a) print(b) print(c) print(a.x()) print(b.x()) print(c.x()) outputs: <SubclassableMagicMock id='140127365021408'> <class '__main__.b'> <class '__main__.c'> <SubclassableMagicMock name='mock.x()' id='140127351680080'> 1 1 Demo: https://replit.com/@blhsing/HotAcademicCases | 3 | 2 |
75,442,675 | 2023-2-14 | https://stackoverflow.com/questions/75442675/lxml-fails-to-import-with-error-symbol-not-found-in-flat-namespace-xsltdocde | With the code: from lxml.etree import HTML, XML I get the traceback: Traceback (most recent call last): File "/Users/username/code/project/lxml-test.py", line 3, in <module> from lxml.etree import HTML, XML ImportError: dlopen(/Users/username/.virtualenvs/project-venv/lib/python3.11/site-packages/lxml/etree.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_xsltDocDefaultLoader' I'm on a mac m1 chip. I installed libxml2 and libxslt via brew. I'm running python 3.11 inside of a virtualenv. What I've tried: Uninstalling and re-installing lxml with pip, and tried several different versions. (4.7.1 & 4.8.0 didn't compile. All of the 4.9.0,1,2 versions give me the above error) Installing libxml2 and libxslt via brew and then reinstalling python-lxml. Installing python-lxml via conda (as suggested here) EDIT: I posted this bug in lxml's bug report forum, and was notified that this is a highly-duplicated bug report of Missing wheel for macos with M1 Edit | I solved my problem by cloning lxml, building it, and installing it via pip install -e /path/to/lxml | 3 | 1 |
75,427,538 | 2023-2-12 | https://stackoverflow.com/questions/75427538/regulargridinterpolator-excruciatingly-slow-compared-to-interp2d | Consider the following code example: # %% import numpy from scipy.interpolate import interp2d, RegularGridInterpolator x = numpy.arange(9000) y = numpy.arange(9000) z = numpy.random.randint(-1000, high=1000, size=(9000, 9000)) f = interp2d(x, y, z, kind='linear', copy=False) f2 = RegularGridInterpolator((x, y), z, "linear") mx, my = np.meshgrid(x, y) M = np.stack([mx, my], axis=-1) # %% %timeit f(x, y) # %% %timeit f2(M) It sets up some example interpolators using scipy.interpolate.interp2d and scipy.interpolate.RegularGridInterpolator. The output of the two cells above is 1.09 s ยฑ 4.38 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) and 10 s ยฑ 17.6 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) respectively. The RegularGridInterpolator is about 10 times slower than the interp2d. The problem is that interp2d has been marked as deprecated in scipy 1.10.0. And new code should use RegularGridInterpolator. This seems a bit strange to me since it would be such a bad replacement. Is there maybe a problem in my code example above? How can I speed this interpolation process up? | There is no problem with your code, it's probably a bug in scipy. I've reported it on github | 4 | 3 |
75,414,955 | 2023-2-10 | https://stackoverflow.com/questions/75414955/how-to-view-runtime-warnings-in-pycharm-when-running-tests-using-pytest | When running tests in PyCharm 2022.3.2 (Professional Edition) using pytest (6.2.4) and Python 3.9 I get the following result in the PyCharm console window: D:\cenv\python.exe "D:/Program Files (x86)/JetBrains/PyCharm 2022.3.2/plugins/python/helpers/pycharm/_jb_pytest_runner.py" --path D:\tests\test_k.py Testing started at 6:49 PM ... Launching pytest with arguments D:\tests\test_k.py --no-header --no-summary -q in D:\tests ============================= test session starts ============================= collecting ... collected 5 items test_k.py::test_init test_k.py::test_1 test_k.py::test_2 test_k.py::test_3 test_k.py::test_4 ======================= 5 passed, 278 warnings in 4.50s ======================= Process finished with exit code 0 PASSED [ 20%]PASSED [ 40%]PASSED [ 60%]PASSED [ 80%]PASSED [100%] So the actual warnings don't show. Only the number of warnings (278) is shown. I tried: selecting: Pytest: do not add "--no-header --no-summary -q" in advanced settings Setting Additional arguments to -Wall in the Run/Debug configurations window Setting Interpreter options to -Wall in the Run/Debug configurations window and all permutations, all to no avail. Is there a way to show all runtime warnings when running tests using pytest in PyCharm in the PyCharm Console window? EDIT: @Override12 When I select do not add "--no-header --no-summary -q" in advanced settings I get the following output: D:\Projects\S\SHARK\development_SCE\cenv\python.exe "D:/Program Files (x86)/JetBrains/PyCharm 2020.3.4/plugins/python/helpers/pycharm/_jb_pytest_runner.py" --path D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages\SHARK\analysis\tests\test_fairing_1_plus_k.py -- --jb-show-summary Testing started at 10:07 AM ... Launching pytest with arguments D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages\SHARK\analysis\tests\test_fairing_1_plus_k.py in D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages ============================= test session starts ============================= platform win32 -- Python 3.9.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- D:\Projects\S\SHARK\development_SCE\cenv\python.exe cachedir: .pytest_cache rootdir: D:\Projects\S\SHARK\development_SCE\cenv\Lib\site-packages\vistrails-3.5.0rc0-py3.9.egg\vistrails\packages plugins: pytest_check-1.0.5 collecting ... 
collected 5 items SHARK/analysis/tests/test_fairing_1_plus_k.py::test_init SHARK/analysis/tests/test_fairing_1_plus_k.py::test_without_1_k_fairing SHARK/analysis/tests/test_fairing_1_plus_k.py::test_1_k_fairing_given SHARK/analysis/tests/test_fairing_1_plus_k.py::test_without_1_k_fairing_only_3_values_under_threshold SHARK/analysis/tests/test_fairing_1_plus_k.py::test_1_k_fairing_given_only_3_values_under_threshold ============================== warnings summary =============================== ......\pyreadline\py3k_compat.py:8 D:\Projects\S\SHARK\development_SCE\cenv\lib\site-packages\pyreadline\py3k_compat.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working return isinstance(x, collections.Callable) ......\nose\importer.py:12 D:\Projects\S\SHARK\development_SCE\cenv\lib\site-packages\nose\importer.py:12: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses from imp import find_module, load_module, acquire_lock, release_lock SHARK/analysis/tests/test_fairing_1_plus_k.py: 276 warnings D:\Projects\S\SHARK\development_SCE\cenv\lib\site-packages\pymarin\objects\key.py:1101: UserWarning: siUnits is deprecated, use siUnit warnings.warn('siUnits is deprecated, use siUnit') -- Docs: https://docs.pytest.org/en/stable/warnings.html ======================= 5 passed, 278 warnings in 5.79s ======================= Process finished with exit code 0 PASSED [ 20%]PASSED [ 40%]PASSED [ 60%]PASSED [ 80%]PASSED [100%] So 4 warnings are displayed. However I would like to see all 278 warnings. When I run pytest from the command line outside PyCharm I get the same result. So it seems to be a pytest problem and it seems that it has nothing to do with PyCharm. | The solution is a combination of two things: Setting 'do not add "--no-header --no-summary -q"' in advanced settings as @Override12 suggested. When the same warning is issued multiple times, only the first time is displayed. In my case solving the first warning reduced the number of warnings from 278 to 2. | 8 | 9 |
75,433,179 | 2023-2-13 | https://stackoverflow.com/questions/75433179/how-to-form-an-opcua-connection-in-python-from-server-ip-address-port-security | I have never used OPC-UA before, but now faced with a task where I have to pull data from a OPC-UA machine to push to a SQL database using python. I can handle the database part, but how to basically connect to the OPCUA server when I have only the following fields available? IP address 192.168.38.94 Port 8080 Security policy: Basic256 Username: della_client Password: amorphous@# Some tutorials I saw directly use a url, but is there any way to form the URL from these parameters, or should I ask the machine owners something more specific to be able to connect? I just want to be sure of what I need before I approach him. Related, how to use the same parameters in the application called UA-Expert to verify the connections as well? Is it possible? If it is relevant, I am using python 3.10 on Ubuntu 22.04. | You need to know which protocol is used. Then you can create the URLs by using the IP address as domain: OPC UA binary: opc.tcp://ip:port https https://ip:port OPC UA WebSockets opc.wss://ip:port http http://ip:port (Deprecated in Version 1.03) In your example this could be opc.tcp://192.168.38.94:8080 or https://192.168.38.94:8080 In most cases, the binary protocol is used. But the port 8080 is a typical http(s) port. The credential and the securityPolice are needed later in the connection process. And yes: You can test the URLs with the UaExpert. You can finde a step-by-step tutorial in the documention | 3 | 2 |
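A hedged connection sketch using the python-opcua (freeopcua) client, assuming the server exposes the usual binary endpoint on that port and accepts a self-made client certificate for Basic256 with SignAndEncrypt; the node id at the end is purely illustrative:

```python
from opcua import Client

url = "opc.tcp://192.168.38.94:8080"   # assumed binary endpoint built from the IP and port
client = Client(url)
client.set_user("della_client")
client.set_password("amorphous@#")
# "Policy,Mode,client certificate,private key" (the .pem paths are placeholders)
client.set_security_string("Basic256,SignAndEncrypt,client_cert.pem,client_key.pem")

client.connect()
try:
    node = client.get_node("ns=2;i=2")   # example node id, replace with a real one
    print(node.get_value())
finally:
    client.disconnect()
```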
75,449,889 | 2023-2-14 | https://stackoverflow.com/questions/75449889/check-if-request-is-coming-from-swagger-ui | Using Python and Starlette or FastAPI, How can I know if the request is coming from the Swagger UI or anywhere else (Postman, Frontend app)? I tried to see if there's something in Request object which I can use: from fastapi import Request @app.get("/") async def root(request: Request): # request.client.host just returns some IP # request.headers doesn't contain any hint # request.scope ? request_from_swagger = request.hints_on_whether_request_is_coming_from_swagger_ui if request_from_swagger: return {"message": "Hello Swagger UI"} return {"message": "Hello World"} I need to take some actions based of that. So is there anyway I can tell, whether the request is coming from the Swagger UI? | You could always use the referer header of the request: from fastapi import Request @app.get("/") async def root(request: Request): request_from_swagger = request.headers['referer'].endswith(app.docs_url) if request_from_swagger: return {"message": "Hello Swagger UI"} return {"message": "Hello World"} | 4 | 3 |
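One caveat: clients such as curl or Postman may not send a referer header at all, so a slightly more defensive variant of the same idea is to fall back to an empty string:

```python
from fastapi import FastAPI, Request

app = FastAPI()


@app.get("/")
async def root(request: Request):
    referer = request.headers.get("referer", "")   # header may be missing entirely
    if app.docs_url and referer.endswith(app.docs_url):
        return {"message": "Hello Swagger UI"}
    return {"message": "Hello World"}
```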
75,424,120 | 2023-2-12 | https://stackoverflow.com/questions/75424120/tensorflow-nvidia-gpu-not-detected | Hi I'm struggling to get Tensorflow V2.11 to find my eGPU (RTX 3060 Ti) I am currently on Windows 11 CUDA version is 12 I am currently downloading CUDA 11 as well as CUDnn as I've heard it is recommended I have tried the following code: import tensorflow as tf tf.config.list_physical_devices('GPU') which outputs: [] any help would be great | Tensorflow 2.11 is not supporting GPU on Windows machine. TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. So you can try by installing Tensorflow 2.10 for the GPU setup. Also you need to install the specific version of CUDA and cuDNN for GPU support in your system which is CUDA 11.2 and cuDNN 8.1 for Tensorflow 2.10(Tensorflow>=2.5). Please check the Hardware/Software requirements as mentioned in the link and set the path to the bin directory after installing these software. Now follow the step by step instructions mentioned in the same link and verify the GPU setup using below code. python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))" | 6 | 5 |
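Once a matching stack is in place, a short check with standard TensorFlow calls confirms whether the installed build is CUDA-enabled and sees the card:

```python
import tensorflow as tf

print(tf.__version__)                            # expect 2.10.x for native-Windows GPU support
print(tf.test.is_built_with_cuda())              # True only for a CUDA-enabled build
print(tf.config.list_physical_devices("GPU"))    # should list the RTX 3060 Ti once CUDA 11.2 / cuDNN 8.1 are on the PATH
```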
75,446,123 | 2023-2-14 | https://stackoverflow.com/questions/75446123/cant-correctly-dump-ansible-vault-into-yaml-with-python | I have a python dictionary with an Ansible vault as a value. I can't seem to be able to correctly dump it into a yaml output with the correct formatting. I'm using the ansible-vault package to generate the encrypted data as follows: from ansible_vault import Vault import yaml vault = Vault('secretpassword') data = "secretvalue" To encrypt the data I do as follows: password = (vault.dump(data)) Which generates a correct Ansible vault $ANSIBLE_VAULT;1.1;AES256 36323830636261313833333932326661613063663361656362626432303232636463643030396366 3132623365633138613862333034626261336638613233650a653765386332666537613231626537 37666461653439616564343133656263613134643235363539396435616461626338393338616365 3339383030373532310a366662386665326132373535393930663737383136363735646361383066 65623033343262643138633839666237643735366465373932316233373339643835 To be able to use this in a vault I append the following to it: password = "!vault |\n" + (vault.dump(data)) This is so that I am able to use it in a host_var. And afterwards I add it to a dictionary with some other values which in turn I will dump with yaml. hostvar_dict = { "a": "1", "secret_item": password, "b": "2" } Everything looks fine until here, things go wrong when I try to output the above dict to a yaml output. print(yaml.dump(hostvar_dict)) Gives me the following output: a: '1' b: '2' secret_item: '!vault | $ANSIBLE_VAULT;1.1;AES256 36353763313938663936303630306161393433633765353936656139363937373365376563623937 3762633462623434393036316264646535316233346166660a396634386439656437343162613365 34613661366163643333393163333335343632356330343939396133333665336566623037306432 3539366466353030310a313936376361366366316338636161303564346633373237363463373966 39353731323564393365633465303663373932613631353364626437633561643134 ' I've looked at the answers from yaml.dump adding unwanted newlines in multiline strings but these didn't give the right output. What I'm looking to get is: a: "1" b: "2" secret_item: !vault | $ANSIBLE_VAULT;1.1;AES256 36353763313938663936303630306161393433633765353936656139363937373365376563623937 3762633462623434393036316264646535316233346166660a396634386439656437343162613365 34613661366163643333393163333335343632356330343939396133333665336566623037306432 3539366466353030310a313936376361366366316338636161303564346633373237363463373966 39353731323564393365633465303663373932613631353364626437633561643134 Options with yaml.dump I've tried are: default_style="|",default_flow_style=False. Is there a way to correctly dump the ansible-vault value in a yaml file in the way I want it? | The !vault in your expected output YAML document is a tag. Tags start with an exclamation mark, and if you dump a string to YAML that starts with an exclacmation mark, that string needs to be quoted. In a similar vein, the pipe (|) indicates you want a literal style scalar, and including that in your string, will not get you that. 
I don't know if you can do this with PyYAML, but with ruamel.yaml you can do: import sys import ruamel.yaml yaml = ruamel.yaml.YAML() secret = ruamel.yaml.comments.TaggedScalar("""\ $ANSIBLE_VAULT;1.1;AES256 36353763313938663936303630306161393433633765353936656139363937373365376563623937 3762633462623434393036316264646535316233346166660a396634386439656437343162613365 34613661366163643333393163333335343632356330343939396133333665336566623037306432 3539366466353030310a313936376361366366316338636161303564346633373237363463373966 39353731323564393365633465303663373932613631353364626437633561643134 """, style='|', tag='!vault') data = dict(a=1, b=2, secret_item=secret) yaml.dump(data, sys.stdout) which gives: a: 1 b: 2 secret_item: !vault | $ANSIBLE_VAULT;1.1;AES256 36353763313938663936303630306161393433633765353936656139363937373365376563623937 3762633462623434393036316264646535316233346166660a396634386439656437343162613365 34613661366163643333393163333335343632356330343939396133333665336566623037306432 3539366466353030310a313936376361366366316338636161303564346633373237363463373966 39353731323564393365633465303663373932613631353364626437633561643134 As written before, the best thing to do if is try and round-trip your expected output. Once it is clear that ruamel.yaml can preserve the layout, analyse the loaded data. In which case you'll see that the value for the key secret_item will be a TaggedScalar. | 3 | 5 |
75,446,361 | 2023-2-14 | https://stackoverflow.com/questions/75446361/run-short-python-code-directly-on-snakemake | I have a snakemake pipeline where I need to do a small step of processing the data (applying a rolling average to a dataframe). I would like to write something like this: rule average_df: input: # script = , df_raw = "{sample}_raw.csv" params: window = 83 output: df_avg = "{sample}_avg.csv" shell: """ python import pandas as pd df=pd.read_csv("{input.df_raw}") df=df.rolling(window={params.window}, center=True, min_periods=1).mean() df.to_csv("{output.df_avg}") """ However it does not work. Do I have to create a python file with those 4 lines of code? The alternative that occurs to me is a bit cumbersome. It would be average_df.py import pandas as pd def average_df(i_path, o_path, window): df=pd.read_csv(path) df=df.rolling(window=window, center=True, min_periods=1).mean() df.to_csv(o_path) return None if __name__ == "__main__": import argparse parser = argparse.ArgumentParser(description='Description of your program') parser.add_argument('-i_path', '--input_path', help='csv file', required=True) parser.add_argument('-o_path', '--output_path', help='csv file ', required=True) parser.add_argument('-w', '--window', help='window for averaging', required=True) args = vars(parser.parse_args()) i_path = args['input_path'] o_path = args['output_path'] window = args['window'] average_df(i_path, o_path, window) And then have the snakemake rule like this: rule average_df: input: script = average_df.py, df_raw = "{sample}_raw.csv" params: window = 83 output: df_avg = "{sample}_avg.csv" shell: """ python average_df.py --input_path {input.df_raw} --ouput_path {output.df_avg} -window {params.window} """ Is there a smarter or more efficient way to do this? That would be great! Looking forward to your input! | This can be achieved via run directive: rule average_df: input: # script = , df_raw = "{sample}_raw.csv" params: window = 83 output: df_avg = "{sample}_avg.csv" run: import pandas as pd df=pd.read_csv(input.df_raw) df=df.rolling(window=params.window, center=True, min_periods=1).mean() df.to_csv(output.df_avg) Note that all snakemake objects are available directly via input, output, params, etc. | 3 | 2 |
75,439,217 | 2023-2-13 | https://stackoverflow.com/questions/75439217/error-debugging-python-in-vs-code-pythonpath-is-not-valid-if-python-is-spec | I get a prompt with: Invalid Message: "pythonPath" is not valid if "python" is specified and the option to open launch.json. But my launch.json doesn't contain anything that says "pythonPath": { "configurations": [ { "name": "Docker: Python - General", "type": "docker", "request": "launch", "preLaunchTask": "docker-run: debug", "python": { "pathMappings": [ { "localRoot": "${workspaceFolder}", "remoteRoot": "/app" } ], "projectType": "general" } } ] } and this is my tasks.json: { "version": "2.0.0", "tasks": [ { "type": "docker-build", "label": "docker-build", "platform": "python", "dockerBuild": { "tag": "cachepurger:latest", "dockerfile": "${workspaceFolder}/Dockerfile", "context": "${workspaceFolder}", "pull": true } }, { "type": "docker-run", "label": "docker-run: debug", "dependsOn": ["docker-build"], "python": { "file": "src/main.py" } } ] } and the Dockerfile: # For more information, please refer to https://aka.ms/vscode-docker-python FROM python:3.9.6-slim # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 # Install pip requirements COPY requirements.txt . RUN python -m pip install -r requirements.txt WORKDIR /app COPY . /app # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app USER appuser # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug CMD ["python", "src/main.py"] all created by the "Docker: Add Dockerfiles to Workspace..." -> "Python: General" command. It worked before but stopped working now. I tried to use Python 3.9.6 image, since that is the one from my .venv but no success. The image gets build without an error and also docker run works fine. The moment it tries to "attach" I get this message. This is the terminal output: * Executing task: docker-build > docker image build --pull --file '/Users/philipp/Sites/cache-purger/Dockerfile' --tag 'cachepurger:latest' --label 'com.microsoft.created-by=visual-studio-code' '/Users/philipp/Sites/cache-purger' < #1 [internal] load build definition from Dockerfile #1 sha256:c141506ce29f9a418e9f7aa174d1f48de1cda1b771a6710f311fe64b6591e190 #1 transferring dockerfile: 37B done#1 DONE 0.0s #2 [internal] load .dockerignore #2 sha256:7b7281b332cffe701548a5bd1f2ad8aa5ef2ad5c7b67d8432ca152c58ad529f9 #2 transferring context: 120B done #2 DONE 0.0s #3 [internal] load metadata for docker.io/library/python:3.9.6-slim #3 sha256:fc1a7a5428ef0e03d295fd81aa9c55606d4e031ac5c881a3941276f70f00d422#3 DONE 0.6s #4 [1/6] FROM docker.io/library/python:3.9.6-slim@sha256:4115592fd02679fb3d9e8c513cae33ad3fdd64747b64d32b504419d7118bcd7c #4 sha256:e45c2d35d6435658167a7e046bb6121c498edab8c8777f3cd5a56f585eead583 #4 DONE 0.0s #5 [internal] load build context #5 sha256:c20427bc601f92346d0d4519c594683dbcd8b4a64b1ca4f59a16ce42d0f217db#5 transferring context: 176.69kB 0.2s done #5 DONE 0.2s #8 [4/6] WORKDIR /app #8 sha256:cd260eca990853ac7a3228b8435b728cf978b08e035684cf2988a4ce596004e6 #8 CACHED #9 [5/6] COPY . 
/app #9 sha256:8261e563e3b0b0cd32854e76a77543449ceca6f5b3fe76de9cd0a1ea8fae3fab #9 CACHED #6 [2/6] COPY requirements.txt . #6 sha256:3d3741721306ef2859b965575ba3fe04d426fb55b1d22dde909fde88f9be8998 #6 CACHED #7 [3/6] RUN python -m pip install -r requirements.txt #7 sha256:92a4bcf37798995b7fcd368e190a189642affda38a1ecc4f004f0827567520e6 #7 CACHED #10 [6/6] RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app #10 sha256:b0b00f3b01ee74a6a813c94699340d78511ccdb89b4b326fd106625803062cb1 #10 CACHED #11 exporting to image #11 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00 #11 exporting layers done #11 writing image sha256:1a8bf7fd9f7f83ddc25d421eef1c4fb986ef79cdc9bb9709d10ed530c2b4c6c0 done #11 naming to docker.io/library/cachepurger:latest done #11 DONE 0.0s * Terminal will be reused by tasks, press any key to close it. * Executing task: docker-run: debug > docker container run --detach --tty --name 'cachepurger-dev' --publish-all --mount 'type=bind,source=/Users/philipp/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy,destination=/debugpy,readonly' --label 'com.microsoft.created-by=visual-studio-code' --entrypoint 'python3' cachepurger:latest < 8baa3597aa97f39641a43f6886b884e139165f83edaca9ae1f99e9f8a760e50e * Terminal will be reused by tasks, press any key to close it. | Your code is fine. It's VSCode / Python Extension that got updated to version 1.75 and 2023.02 respectively and hence this error is new. Please refer to this issue for constant development on the bug. As for now, uninstall VSCode and re-install 1.74, and re-install the python extension before 2023.02 and you should be good to go. | 4 | 9 |
75,434,681 | 2023-2-13 | https://stackoverflow.com/questions/75434681/type-hint-decorator-for-sync-async-functions | How can I type hint decorator that is meant to be used for both sync & async functions? I've tried something like below, but mypy raises errors: x/decorator.py:130: error: Incompatible types in "await" (actual type "Union[Awaitable[Any], R]", expected type "Awaitable[Any]") [misc] x/decorator.py:136: error: Incompatible return value type (got "Union[Awaitable[Any], R]", expected "R") [return-value] def log_execution_time(foo: Callable[P, AR | R]) -> Callable[P, AR | R]: module: Any = inspect.getmodule(foo) module_spec: Any = module.__spec__ if module else None module_name: str = module_spec.name if module_spec else foo.__module__ # noqa @contextmanager def log_timing(): start = time() try: yield finally: exec_time_ms = (time() - start) * 1000 STATS_CLIENT.timing( metric_key.FUNCTION_TIMING.format(module_name, foo.__name__), exec_time_ms, ) async def async_inner(*args: P.args, **kwargs: P.kwargs) -> R: with log_timing(): result = await foo(*args, **kwargs) <- error return result def sync_inner(*args: P.args, **kwargs: P.kwargs) -> R: with log_timing(): result = foo(*args, **kwargs) return result <- error if inspect.iscoroutinefunction(foo): return wraps(foo)(async_inner) return wraps(foo)(sync_inner) I know there's a trick like this: if inspect.iscoroutinefunction(foo): async_inner: foo # type: ignore[no-redef, valid-type] return wraps(foo)(async_inner) sync_inner: foo # type: ignore[no-redef, valid-type] return wraps(foo)(sync_inner) But I was hoping that there's a way to properly type hint this. I'm on python 3.10.10. PS. I forgot to say that it's important that PyCharm picks it up & suggests proper types. | Say you want to write a function decorator that performs some actions before and/or after the actual function call. Let's call that surrounding context my_context. If you want the decorator to be applicable to both asynchronous and regular functions, you'll need to accommodate both types in it. How can we properly annotate it to ensure type safety and consistency, while also retaining all possible type information of the wrapped function? Here is the cleanest solution I could come up with: from collections.abc import Awaitable, Callable from functools import wraps from inspect import iscoroutinefunction from typing import ParamSpec, TypeVar, cast, overload P = ParamSpec("P") R = TypeVar("R") @overload def decorator(func: Callable[P, Awaitable[R]]) -> Callable[P, Awaitable[R]]: ... @overload def decorator(func: Callable[P, R]) -> Callable[P, R]: ... def decorator(func: Callable[P, R]) -> Callable[P, R] | Callable[P, Awaitable[R]]: if iscoroutinefunction(func): async def async_inner(*args: P.args, **kwargs: P.kwargs) -> R: with my_context(): result = await cast(Awaitable[R], func(*args, **kwargs)) return result return wraps(func)(async_inner) def sync_inner(*args: P.args, **kwargs: P.kwargs) -> R: with my_context(): result = func(*args, **kwargs) return result return wraps(func)(sync_inner) Apply it like this: @decorator def foo(x: str) -> str: ... @decorator async def bar(y: int) -> int: ... This passes mypy --strict and revealing the types of foo and bar after decoration shows what we would expect, i.e. def (x: builtins.str) -> builtins.str and def (y: builtins.int) -> typing.Awaitable[builtins.int] respectively. Details The problem is that no matter what type guards we apply outside of the inner wrapper, they don't carry over to the inside of the wrapper. 
This has been a long-standing issue of mypy, which is not trivial to deal with. In our scenario this means any inspect.iscoroutinefunction check done in the decorator but outside of the inner wrapper will only narrow the type to something awaitable in the scope of the decorator, but is ignored inside the wrapper. (The reason is that assignment to the non-local variable holding the function reference is possible, even after the wrapper definition. See the issue thread for details/examples.) The typing.cast is the most straight-forward workaround in my opinion. The typing.overloads are a way to deal with the distinction between coroutines and normal functions on the caller's side. But I am curious to see, what other can come up with. | 4 | 6 |
75,416,108 | 2023-2-10 | https://stackoverflow.com/questions/75416108/polars-yyyy-week-into-a-date | Does anyone know how to parse YYYY Week into a date column in Polars? I have tried this code but it throws an error. import polars as pl pl.DataFrame({ "week": [201901, 201902, 201903, 201942, 201943, 201944] }).with_columns(pl.col("week").cast(pl.String).str.to_date("%Y%U").alias("date")) InvalidOperationError: conversion from `str` to `date` failed in column 'week' for 6 out of 6 values: ["201901", "201902", โฆ "201944"] | This seems like a bug (although one with the underlying rust package chrono rather than polars itself). I tried using base python's strptime and it ignores the %U and just gives the first of the year for all cases so you can either do string manipulation and math like this (assuming you don't need an exact response) pl.DataFrame({ "week": [201901, 201902, 201903, 201942, 201943, 201944] }) \ .with_columns(pl.col('week').cast(pl.Utf8)) \ .with_columns([pl.col('week').str.slice(0,4).cast(pl.Int32).alias('year'), pl.col('week').str.slice(4,2).cast(pl.Int32).alias('week')]) \ .select(pl.date(pl.col('year'),1,1) + pl.duration(days=(pl.col('week')-1)*7).alias('date')) If you look at the definition of %U, it's supposed to be based the xth Sunday of the year whereas my math is just multiplying by 7. Another approach is to make a df of dates, then make the strftime of them and then join the dfs. So that might be like this: dfdates=pl.DataFrame({'date':pl.date_range(datetime(2019,1,1), datetime(2019,12,31),'1d').cast(pl.Date())}) \ .with_columns(pl.col('date').dt.strftime("%Y%U").alias('week')) \ .groupby('week').agg(pl.col('date').min()) And then joining it with what you have pl.DataFrame({ "week": [201901, 201902, 201903, 201942, 201943, 201944] }).with_columns(pl.col('week').cast(pl.Utf8())).join(dfdates, on='week') shape: (6, 2) โโโโโโโโโโฌโโโโโโโโโโโโโ โ week โ date โ โ --- โ --- โ โ str โ date โ โโโโโโโโโโชโโโโโโโโโโโโโก โ 201903 โ 2019-01-20 โ โ 201944 โ 2019-11-03 โ โ 201902 โ 2019-01-13 โ โ 201943 โ 2019-10-27 โ โ 201942 โ 2019-10-20 โ โ 201901 โ 2019-01-06 โ โโโโโโโโโโดโโโโโโโโโโโโโ | 3 | 4 |
75,438,567 | 2023-2-13 | https://stackoverflow.com/questions/75438567/r-style-formulas-when-implementing-a-power-i-e-square-in-a-glm-misbehaves | In the python code below, the glm model specification does not include the third power in the in model1 but it does in model2: model1 = glm(formula="wage ~ workhours + workhours**3 + C(gender)", data=df, family=sm.families.Gaussian()) model2 = glm(formula="wage ~ workhours + np.power(workhours, 3) + C(gender)", data=df, family=sm.families.Gaussian()) Is this a bug? According to the documentation **x raises something to the power 3. | ** in a formula is treated as a formula operator, not as regular exponentiation. (This is similar to how ^ works in an R formula.) (a+b+c+d)**3 means that the model should include a, b, c, d, and all interactions between these variables up to 3rd order. workhours**3 means that the model should include workhours and all interactions between... just workhours... up to 3rd order... but there are no such interaction terms, so it's equivalent to just workhours. In contrast, np.power(workhours, 3) is treated as Python code, and computes the power you wanted. statsmodels uses patsy for formula handling, so for full details on the formula language, you can check the patsy docs. | 3 | 6 |
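A brief hedged illustration of the distinction made in the answer above, reusing the question's df and the statsmodels formula API: patsy's I() wrapper evaluates its contents as plain Python, so ** becomes real exponentiation, equivalent to the np.power form.
import statsmodels.api as sm
from statsmodels.formula.api import glm
# df is the question's dataframe with wage, workhours and gender columns
model3 = glm(formula="wage ~ workhours + I(workhours**3) + C(gender)",
             data=df, family=sm.families.Gaussian())  # I(...) shields ** from the formula parser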
75,433,717 | 2023-2-13 | https://stackoverflow.com/questions/75433717/module-keras-utils-generic-utils-has-no-attribute-get-custom-objects-when-im | I am working on google colab with the segmentation_models library. It worked perfectly the first week using it, but now it seems that I can't import the library anymore. Here is the error message, when I execute import segmentation_models as sm : --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-3-6f48ce46383f> in <module> 1 import tensorflow as tf ----> 2 import segmentation_models as sm 3 frames /usr/local/lib/python3.8/dist-packages/efficientnet/__init__.py in init_keras_custom_objects() 69 } 70 ---> 71 keras.utils.generic_utils.get_custom_objects().update(custom_objects) 72 73 AttributeError: module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' Colab uses tensorflow version 2.11.0. I did not find any information about this particular error message. Does anyone know where the problem may come from ? | Encountered the same issue sometimes. How I solved it: open the file keras.py, change all the 'init_keras_custom_objects' to 'init_tfkeras_custom_objects'. the location of the keras.py is in the error message. In your case, it should be in /usr/local/lib/python3.8/dist-packages/efficientnet/ | 7 | 6 |
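A hedged sketch that automates the manual edit described in the answer above (the path is taken from the traceback in the question and may differ on other installs; keep a backup of the file before patching):
from pathlib import Path
target = Path("/usr/local/lib/python3.8/dist-packages/efficientnet/keras.py")  # location reported in the error message
patched = target.read_text().replace("init_keras_custom_objects", "init_tfkeras_custom_objects")
target.write_text(patched)  # rewrite the module so it registers the custom objects through tf.keras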
75,431,587 | 2023-2-13 | https://stackoverflow.com/questions/75431587/type-hinting-with-unions-and-collectables-3-9-or-greater | I've been on Python 3.8 for quite some time. I usually type hint with the convention: from typing import List, Union some_stuff: List[Union[int, str, float, List[str]]] = [98, "Fido", -34.925, ["Phantom", "Tollbooth"]] I understand that with python 3.9 or greater you can type hint lists and collectible like: some_ints: list[int] If you have a union of types that a variable can occupy do you still need to import the Union class? like so: from typing import Union some_stuff: list[Union[int, str, float, list[str]]] = [98, "Fido", -34.925, ["Phantom", "Tollbooth"]] Or is there a new convention for Union type hinting as well? | PEP 604, which was implemented in Python 3.10 allows Union types to be formed using the | operator (which has existed with exclusively numeric meanings for most of Python's history). So in sufficiently recent versions of Python, you can write your type hint like this, with no imports required from typing: some_stuff: list[int|str|float|list[str]] = [98, "Fido", -34.925, ["Phantom", "Tollbooth"]] While that specifically answers your question, I'd note that it's a little suspicious that your example data exactly matches the order of the types you're listing in the Union. If you are always going to have exactly one of each kind of data (rather than some unknown mixture of them), you probably want to be type hinting the value differently, e.g. with tuple[int, str, float, list[str]] which specifies that there will be one value of each, in that order. You might need to use an actual tuple for the value, I'm not sure there's a way to hint a fixed pattern of values within a list object. | 5 | 4 |
75,430,161 | 2023-2-12 | https://stackoverflow.com/questions/75430161/cursor-count-gives-attributeerror-in-pymongo-4-3-3 | As the title suggests, I am trying to use count() with a find() on a collection but it keeps throwing the error AttributeError: 'Cursor' object has no attribute 'count'. For reference, I went through this question but count_documents() seems to be there for collections themselves, and not cursors. The other option mentioned was len(list(cursor)) but I can't use that as it consumes the cursor itself (can't iterate over it after this). I went through a couple more answers, but these seem to be the main ways out. Moreover, my pymongo version is 4.3.3 which can't be changed due to some restrictions. Is there any operation I can perform directly on Cursor which doesn't consume it? Sample code def temp(col): return col.find().count() print(temp(collection)) Thanks! | list() will exhaust the cursor, but save its output to a variable and you can access it multiple times, e.g. records = list(col.find()) num_records = len(records) for record in records: # do stuff | 4 | 3 |
75,420,574 | 2023-2-11 | https://stackoverflow.com/questions/75420574/as-of-2023-is-there-any-way-to-line-profile-cython-at-all | Something has substantially changed with the way that line profiling Cython works, such that previous answers no longer work. I am not sure if something subtle has changed, or if it is simply totally broken. For instance, here is a very highly upvoted question about this from about 8 years ago. The notebook in there no longer seems to work, even with the updates referenced in the post. For instance, here is a new version of the notebook incorporating the updates suggested in the original post: https://nbviewer.org/gist/battaglia01/f138f6b85235a530f7f62f5af5a002f0?flush_cache=true The output of line_profiler to profiling that Cython function is simply Timer unit: 1e-09 s with no line-by-line infomration at all. I'm making a new question about this because other comments I've seen on the site all seem to reference these older answers, and they all seem to be broken. If anyone even has the beginnings of a starting point it would be much appreciated - either for the Jupyter notebook, or with something built using cythonize. My notes on what I've tried at least on the Jupyter side are in that notebook. Is there any way to get this to work? | The current status is: line_profiler v4 and Cython don't get on (for reasons that haven't yet been diagnosed but could be on either end). The line_profiler tests run as part of Cython's CI test-suite have line_profiler pinned to <4. It obviously isn't the long-term plan to leave this version pin, but if you need to it work now then do the same! The most recent run on the master branch as today used line_profiler v3.5.1 and this passed. Relevant issue: https://github.com/cython/cython/issues/5141 | 3 | 3 |
75,425,406 | 2023-2-12 | https://stackoverflow.com/questions/75425406/creating-video-from-images-using-pyav | I am trying to write a function that creates a new MP4 video from a set of frames taken from another video. The frames will be given in PIL.Image format and is often cropped to include only a part of the input video, but all images will have the same dimension. What I have tried: def modify_image(img): return img test_input = av.open('input_vid.mp4') test_output =av.open('output_vid.mp4', 'w') in_stream = test_input.streams.video[0] out_stream = test_output.add_stream(template=in_stream) for frame in test_input.decode(in_stream): img_frame = frame.to_image() # Some possible modifications to img_frame... img_frame = modify_image(img_frame) out_frame = av.VideoFrame.from_image(img_frame) out_packet = out_stream.encode(out_frame) print(out_packet) test_input.close() test_output.close() And the error that I got: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[23], line 11 8 img_frame = frame.to_image() 10 out_frame = av.VideoFrame.from_image(img_frame) ---> 11 out_packet = out_stream.encode(out_frame) 12 print(out_packet) 15 test_input.close() File av\stream.pyx:153, in av.stream.Stream.encode() File av\codec\context.pyx:490, in av.codec.context.CodecContext.encode() File av\frame.pyx:52, in av.frame.Frame._rebase_time() ValueError: Cannot rebase to zero time. I followed the answer given in How to create a video out of frames without saving it to disk using python?, and met with the same issue. Comparing the original VideoFrame and the VideoFrame created from the image, I found that the pts value of the new frames are saved as None instead of integer values. Overwriting the pts value of the new frame with the original values still causes the same error, and overwriting the dts value of the new frame gives the following error: AttributeError: attribute 'dts' of 'av.frame.Frame' objects is not writable Is there a way to modify the dts value, or possibly another method to create a video from a set of PIL.Image objects? | Using add_stream(template=in_stream) is only documented in the Remuxing example. It's probably possible to use template=in_stream when re-encoding, but we have to set the time-base, and set the PTS timestamp of each encoded packet. I found a discussion here (I didn't try it). Instead of using template=in_stream, we may stick to the code sample from my other answer, and copy few parameters from the input stream to the output stream. Example: in_stream = test_input.streams.video[0] codec_name = in_stream.codec_context.name # Get the codec name from the input video stream. fps = in_stream.codec_context.rate # Get the framerate from the input video stream. out_stream = test_output.add_stream(codec_name, str(fps)) out_stream.width = in_stream.codec_context.width # Set frame width to be the same as the width of the input stream out_stream.height = in_stream.codec_context.height # Set frame height to be the same as the height of the input stream out_stream.pix_fmt = in_stream.codec_context.pix_fmt # Copy pixel format from input stream to output stream #stream.options = {'crf': '17'} # Select low crf for high quality (the price is larger file size). 
We also have to "Mux" the video frame: test_output.mux(out_packet) At the end, we have to flush the encoder before closing the file: out_packet = out_stream.encode(None) test_output.mux(out_packet) Code sample: import av # Build input_vid.mp4 using FFmpeg CLI (for testing): # ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=1:duration=100 -vcodec libx264 -crf 10 -pix_fmt yuv444p input_vid.mp4 test_input = av.open('input_vid.mp4') test_output = av.open('output_vid.mp4', 'w') in_stream = test_input.streams.video[0] #out_stream = test_output.add_stream(template=in_stream) # Using template=in_stream is not working (probably meant to be used for re-muxing and not for re-encoding). codec_name = in_stream.codec_context.name # Get the codec name from the input video stream. fps = in_stream.codec_context.rate # Get the framerate from the input video stream. out_stream = test_output.add_stream(codec_name, str(fps)) out_stream.width = in_stream.codec_context.width # Set frame width to be the same as the width of the input stream out_stream.height = in_stream.codec_context.height # Set frame height to be the same as the height of the input stream out_stream.pix_fmt = in_stream.codec_context.pix_fmt # Copy pixel format from input stream to output stream #stream.options = {'crf': '17'} # Select low crf for high quality (the price is larger file size). for frame in test_input.decode(in_stream): img_frame = frame.to_image() out_frame = av.VideoFrame.from_image(img_frame) # Note: to_image and from_image is not required in this specific example. out_packet = out_stream.encode(out_frame) # Encode video frame test_output.mux(out_packet) # "Mux" the encoded frame (add the encoded frame to MP4 file). print(out_packet) # Flush the encoder out_packet = out_stream.encode(None) test_output.mux(out_packet) test_input.close() test_output.close() | 3 | 5 |
75,393,856 | 2023-2-9 | https://stackoverflow.com/questions/75393856/tqdm-4-27-distribution-was-not-found-error-while-executing-a-exe-file-create | I am trying to create a application which checks for sentence similarity. .exe file got created. I get the below error message while executing .exe file after giving required inputs. Error Message The 'tqdm>=4.27' distribution was not found and is required by this application. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main spec file # -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis(['render_ui.py'], pathex=[], binaries=[], datas=[('Config\\favicon.ico',"."), ('Config\\miniLM.sav',".")], hiddenimports=['sklearn.metrics._pairwise_distances_reduction._datasets_pair', 'sklearn.metrics._pairwise_distances_reduction._middle_term_computer', 'sklearn.metrics._pairwise_distances_reduction._argkmin', 'sklearn.metrics._pairwise_distances_reduction._base', 'sklearn.metrics._pairwise_distances_reduction._radius_neighbors', 'sentence_transformers.SentenceTransformer'], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='App', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=True, disable_windowed_traceback=False, target_arch=None, codesign_identity=None, entitlements_file=None, icon='Config\\favicon.ico') I already have "tqdm==4.64.1" installed. I am not sure what is causing this problem. Can someone please help me out on this? | after adding below instructions in spec file I was able to resolve the issue datas += copy_metadata('tqdm') datas += copy_metadata('regex') datas += copy_metadata('requests') datas += copy_metadata('packaging') datas += copy_metadata('filelock') datas += copy_metadata('numpy') datas += copy_metadata('tokenizers') datas += copy_metadata('importlib_metadata') datas += copy_metadata('tensorflow') | 3 | 3 |
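A hedged note on the spec additions in the answer above: copy_metadata is provided by PyInstaller's hook utilities, and the accumulated list has to reach Analysis through its datas argument, roughly as in this sketch:
from PyInstaller.utils.hooks import copy_metadata
datas = [('Config\\favicon.ico', "."), ('Config\\miniLM.sav', ".")]  # the entries already present in the spec file
datas += copy_metadata('tqdm')
datas += copy_metadata('regex')
# ...repeat for the other distributions listed above, then pass datas=datas inside Analysis(...)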
75,424,785 | 2023-2-12 | https://stackoverflow.com/questions/75424785/case-insensitive-array-any-filter-in-sqlalchemy | I'm migrating some code from SqlAlchemy 1.3 to SqlAlchemy 1.4 with Postgres 12. I found a query that looks like this: session.query(Horse) .filter(Horse.nicknames.any("charlie", operator=ColumnOperators.ilike)) The type of the column nicknames is Column(ARRAY(String(64))). It seems to me that what this is doing is queryng any Horse whose one of their nicknames is charlie in a case-insensitive (ilike) way. This code seems to work fine in SqlAlchemy==1.3.0 and fails in version 1.4.40 with the following error: sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.postgresql.psycopg2.PGCompiler_psycopg2 object at 0x7fce54c80f10> can't render element of type <function ColumnOperators.ilike at 0x7fce92944280> (Background on this error at: https://sqlalche.me/e/14/l7de) What would be an equivalent way of doing this that works, ideally for both versions? | This query seems to have broken in SQLAlchemy version 1.3.20*. In 1.3.0 it generated this SQL (aliases removed for clarity): SELECT id, nicknames FROM horse WHERE 'charlie' ILIKE ANY (nicknames) The docs for any mention that it has been superseded by any_, though it doesn't seem to have been formally deprecated. With this knowledge we can build an alternative that generates the same SQL: import sqlalchemy as sa ... session.query(Horse).filter( sa.literal('able').ilike(sa.any_(Horse.nicknames)) ) or in 2.0-style: sa.select(Horse).where( sa.literal('charlie').ilike(sa.any_(Horse.nicknames)) ) * I don't see anything in the release notes regarding this change in behaviour, so it might be a regression, but at this stage it's probably better to go with the any_ construct. | 3 | 2 |
75,424,530 | 2023-2-12 | https://stackoverflow.com/questions/75424530/why-does-the-id-function-in-python-return-the-same-value-for-different-integer-o | I have a function to retrieve an object in Python using the ctypes module: import ctypes def object_at_addr(obj_addr): try: val = ctypes.cast(obj_addr, ctypes.py_object).value except: return None return val I know that the id of an object shows the memory address the object is at (for the most part as far as I've seen with builtins and custom objects). On my terminal shell, I tried this on the number 0 and "recursively" found the id of each number to see if it was cyclic in some form. Here is the terminal output: >>> id(0) # num1 4343595216 >>> id(4343595216) # num2 4344636112 >>> id(4344636112) # num3 4344636112 To my surprise, the id of two numbers were the same. However, when I use my function and call object_at_addr(4344636112), it doesn't point to either number, but instead returned a different int value as shown below: >>> object_at_addr(4344636112) 4411205344 How can two different numbers have the same id value? Does the id function actually return the memory address for all objects, or is it only for custom objects, or is it just a Python design choice to make the memory address the id of an object, but it's different for builtin objects? | The id is always the memory address in the CPython implementation. The reason that you saw numbers with the same id here is that memory addresses can be re-used. The id is only guaranteed to be unique for the lifetime of the object, and since nothing else was holding a reference to the integer 4343595216 it got deleted immediately after the function call returns. That memory location was freed, and then immediately reused by the integer 4344636112 instance. The docs actually mention this explicitly: Two objects with non-overlapping lifetimes may have the same id() value. | 3 | 7 |
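A small sketch of the lifetime point made in the answer: two objects that are alive at the same time always have distinct ids, while short-lived temporaries can end up reusing the same address.
x = 4343595216
y = 4344636112
print(id(x) == id(y))  # False: both objects are alive simultaneously, so their addresses must differ
print(id(4343595216) == id(4344636112))  # may print True: each temporary is freed before the next one is allocated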
75,424,382 | 2023-2-12 | https://stackoverflow.com/questions/75424382/use-different-values-of-expandtabs-in-the-same-string-python | How can we define several tab lengths in a python string? For example, we want to print the keys, value types and values of a dict nicely aligned (with varying sizes of keys and types): my_dict = { "short_key": 4, "very_very_very_very_very_long_keys": 5.0 } formatted_string_1 = '\n'.join([f"{k}:\t({type(v).__name__})\t{v}".expandtabs(10) for k, v in my_dict.items()]) print(f"Option 1 (.expandtabs(10)), first tab is too small:\n{formatted_string_1}") formatted_string_2 = '\n'.join([f"{k}:\t({type(v).__name__})\t{v}".expandtabs(40) for k, v in my_dict.items()]) print(f"\n\nOption 2 (.expandtabs(40)), second tab is too large:\n{formatted_string_2}") Running this we get: Option 1 (.expandtabs(10)), first tab is too small: short_key: (int) 4 very_very_very_very_very_long_keys: (float) 5.0 and: Option 2 (.expandtabs(40)), second tab is too large: short_key: (int) 4 very_very_very_very_very_long_keys: (float) 5.0 I would like to be able to define a long tab for the first space, and a short tab for the second one, something like .expandtabs([40, 10]), such that we get two nice alignments: short_key: (int) 4 very_very_very_very_very_long_keys: (float) 5.0 Any idea? | Don't use tabs for alignment. You can specify your desired widths directly in the f-string's format spec: print( '\n'.join( f"{f'{k}:':40}" f"{f'({type(v).__name__})':10}" f"{v:<}" for k, v in my_dict.items() ) ) outputs short_key: (int) 4 very_very_very_very_very_long_keys: (float) 5.0 You can even use variable widths computed from the members of my_dict: key_width = max(len(k) for k in my_dict) + 6 type_width = max(len(type(v).__name__) for v in my_dict.values()) + 5 print( '\n'.join( f"{f'{k}:':{key_width}}" f"{f'({type(v).__name__})':{type_width}}" f"{v:<}" for k, v in my_dict.items() ) ) which again outputs short_key: (int) 4 very_very_very_very_very_long_keys: (float) 5.0 | 4 | 3 |
75,423,382 | 2023-2-11 | https://stackoverflow.com/questions/75423382/how-to-remove-carriage-return-characters-from-string-as-if-it-was-printed | I would like to remove all occurrences of \r from a string as if it was printed via print() and store the result in another variable. Example: >>> s = "hello\rworld" >>> print(s) world In this example, how do I "print" s to a new variable which then contains the string "world"? Background: I am using the subprocess module to capture the stdout which contains a lot of \r characters. In order to effectively analyze the string I would like to only have the resulting output. | Using a regex: import re s = "hello\rworld" out = re.sub(r'([^\r]+)\r([^\r\n]+)', lambda m: m.group(2)+m.group(1)[len(m.group(2)):], s) Output: 'world' More complex example: import re s = "hello\r..\nworld" out = re.sub(r'([^\r]+)\r([^\r\n]+)', lambda m: m.group(2)+m.group(1)[len(m.group(2)):], s) Output: ..llo world | 4 | 1 |
75,418,560 | 2023-2-11 | https://stackoverflow.com/questions/75418560/how-do-i-format-both-a-string-and-variable-in-an-f-string | I'm trying to move both the "$" and totalTransactionCost to the right side of the field. My current code is : print(f"Total Cost Of All Transactions: ${totalTransactionCost:>63,.2f}") The code is able to move the totalTransactionCost to the right side of the field, but how can I include the "$" too? | You can use nested f-strings, which basically divides formatting into two steps: first, format the number as a comma-separated two-decimal string, and attach the $, and then fill the whole string with leading spaces. >>> totalTransactionCost = 10000 >>> print(f"Total Cost Of All Transactions: {f'${totalTransactionCost:,.2f}':>64}") Total Cost Of All Transactions: $10,000.00 | 3 | 5 |
75,417,119 | 2023-2-10 | https://stackoverflow.com/questions/75417119/how-to-find-what-is-the-latest-version-of-python-that-pytorch | When I try pip install torch, I get ERROR: Could not find a version that satisfies the requirement torch (from versions: none) ERROR: No matching distribution found for torch Searching on here stackoverflow I find that the issue is I need an older verson of python, currently I'm using 3.11. That post said 3.8 but was written some time ago, so how do I find the latest version of python that will run pytorch? I couldn't find it easily on the PyTorch pages. | You can always check torch archive or torch nightly to see if your desired version is supported. While the Python3.11 is not officially supported as of now (Feb 11, 2023), if you are on Linux you can install the Python3.11 version of Pytorch 1.13.1: wget https://download.pytorch.org/whl/cu117/torch-1.13.1%2Bcu117-cp311-cp311-linux_x86_64.whl pip3 install torch-1.13.1+cu117-cp311-cp311-linux_x86_64.whl Just note that this is not yet available on Windows or other major OSes. If you want to give the new version a try on Other OSes such as Windows or Mac, you need to use the nighly builds. For example for Windows inside powershell do : wget https://download.pytorch.org/whl/nightly/cu117/torch-2.0.0.dev20230210%2Bcu117-cp311-cp311-win_amd64.whl -OutFile torch-2.0.0.dev20230210+cu117-cp311-cp311-win_amd64.whl pip install torch-2.0.0.dev20230210+cu117-cp311-cp311-win_amd64.whl | 7 | 2 |
75,416,188 | 2023-2-10 | https://stackoverflow.com/questions/75416188/get-evenly-spaced-points-from-a-curved-shape | How may I take a shape that was created with more points at its curves and subdivide it so that the points are distributed more equally along the curve? In my research I thought that numpy's interp might be the right function to use, but I don't know what to use for the parameters (x, xp, fp, left, right, & period). Any help would be very appreciated! Here is an animation showing the desired output. This is the code for the input rounded rectangle: from matplotlib import pyplot as plt import numpy as np x_values = [1321.4, 598.6, 580.6, 563.8, 548.6, 535.4, 524.5, 516.2, 511, 509.2, 509.2, 511, 516.2, 524.5, 535.4, 548.6, 563.8, 580.6, 598.6, 1321.4, 1339.4, 1356.2, 1371.4, 1384.6, 1395.5, 1403.8, 1409, 1410.8, 1410.8, 1409, 1403.8, 1395.5, 1384.6, 1371.4, 1356.2, 1339.4, 1321.4] y_values = [805.4, 805.4, 803.5, 798.3, 790.1, 779.2, 766, 750.8, 734, 716, 364, 346, 329.2, 314, 300.8, 289.9, 281.7, 276.5, 274.6, 274.6, 276.5, 281.7, 289.9, 300.8, 314, 329.2, 346, 364, 716, 734, 750.8, 766, 779.2, 790.1, 798.3, 803.5, 805.4] fig, ax = plt.subplots(1) ax.plot(x_values,y_values) ax.scatter(x_values,y_values) ax.set_aspect('equal') plt.show() Thank you! | from matplotlib import pyplot as plt import numpy as np x = np.array([1321.4, 598.6, 580.6, 563.8, 548.6, 535.4, 524.5, 516.2, 511, 509.2, 509.2, 511, 516.2, 524.5, 535.4, 548.6, 563.8, 580.6, 598.6, 1321.4, 1339.4, 1356.2, 1371.4, 1384.6, 1395.5, 1403.8, 1409, 1410.8, 1410.8, 1409, 1403.8, 1395.5, 1384.6, 1371.4, 1356.2, 1339.4, 1321.4]) y = np.array([805.4, 805.4, 803.5, 798.3, 790.1, 779.2, 766, 750.8, 734, 716, 364, 346, 329.2, 314, 300.8, 289.9, 281.7, 276.5, 274.6, 274.6, 276.5, 281.7, 289.9, 300.8, 314, 329.2, 346, 364, 716, 734, 750.8, 766, 779.2, 790.1, 798.3, 803.5, 805.4]) fig, ax = plt.subplots(1) ax.set_aspect('equal') ax.scatter(x, y, s=40, zorder=3, alpha=0.3) # compute the distances, ds, between points dx, dy = x[+1:]-x[:-1], y[+1:]-y[:-1] ds = np.array((0, *np.sqrt(dx*dx+dy*dy))) # compute the total distance from the 1st point, measured on the curve s = np.cumsum(ds) # interpolate using 200 point xinter = np.interp(np.linspace(0,s[-1], 200), s, x) yinter = np.interp(np.linspace(0,s[-1], 200), s, y) # plot the interpolated points ax.scatter(xinter, yinter, s=5, zorder=4) plt.show() | 3 | 5 |
75,411,163 | 2023-2-10 | https://stackoverflow.com/questions/75411163/apache-airflow-create-tasks-using-for-loop-in-one-dag-i-want-tasks-made-of-fo | A task that performs the same task in one dag was created using a for loop. It is hoped to be divided into two branches that depend on the result of this task. However, all tasks created using the for loop return the xcom of the last task. How can tasks created using for loop return each xcom? Each task a,b,c returns xcom_a, xcom_b, and xcom_c. However, branch tasks all get the same xcom_c. What should I do? default_args ={'start_date':days_ago(1)} dag=DAG( dag_id='batch_test', default_args=default_args, schedule_interval=None) def count(**context): name = context['params']['name'] dict = {'a':50, 'b':100, 'c':150} if dict[name]<100: task_id=f'add_{name}' return task_id elif dict[name]>=100: task_id=f'times_{name}' return task_id def branch(**context): task_id = context['ti'].xcom_pull(task_ids=f'count_task_{name}') return task_id def add(**context): ans = context['ti'].xcom_pull(task_ids=f'branch_task_{name}') ans_dict = {'add_a':50+100, 'add_b':100+100, 'add_c':150+100} ans = ans_dict[ans] return print(ans) def times(**context): ans = context['ti'].xcom_pull(task_ids=f'branch_task_{name}') ans_dict = {'times_a':50*100, 'times_b':100*100, 'times_c':150*100} ans = ans_dict[ans] return print(ans) name_list = ['a','b','c'] for name in name_list: exec_count_task = PythonOperator( task_id = f'count_task_{name}', python_callable = count, provide_context=True, params = {'name':name}, dag=dag ) exec_branch_task = BranchPythonOperator( task_id = f'branch_task_{name}', python_callable = branch, provide_context = True, dag = dag ) exec_add_count = PythonOperator( task_id = f'add_{name}', python_callable = add, provide_context = True, dag = dag ) exec_times_count = PythonOperator( task_id = f'times_{name}', python_callable = times, provide_context = True, dag = dag ) exec_count_task >> exec_branch_task >> [exec_add_count, exec_times_count] i want this... task_a >> branch_a (branch python operator, xcom pull returned by task_a) >> [task_a1, task_a2] task_b >> branch_b (branch python operator, xcom pull returned by task_b) >> [task_b1, task_b2] task_c (>> branch_c (branch python operator, xcom pull returned by task_c) >> [task_c1, task_c2] but task_a >> branch_a (branch python operator, xcom pull returned by task_c) >> [task_a1, task_a2] task_b >> branch_b (branch python operator, xcom pull returned by task_c) >> [task_b1, task_b2] task_c >> branch_c (branch python operator, xcom pull returned by task_c) >> [task_c1, task_c2] | I'm unable to reproduce the behavior you describe using classic operators and the TaskFlow API. If you are able to add more context and code of what you are actually executing that would be most helpful. In the meantime, here are the examples I used should it give you some guidance for troubleshooting. I added a task at the end of the streams to check that the first task indeed pushes its expected value. Classic Operators from pendulum import datetime from airflow.models import DAG from airflow.operators.python import BranchPythonOperator, PythonOperator from airflow.utils.trigger_rule import TriggerRule with DAG(dag_id="multiple_branch_loop", start_date=datetime(2023, 1, 1), schedule=None): def xcom_push(val): return val def func(): ... 
def choose(val): return f"task_{val}" def check_xcom_output_from_first(val, expected_val): assert val == expected_val stuff = ["a", "b", "c"] for i in stuff: first = PythonOperator(task_id=f"first_task_{i}", python_callable=xcom_push, op_kwargs={"val": i}) branch = BranchPythonOperator(task_id=f"branch_{i}", python_callable=choose, op_kwargs={"val": i}) second = PythonOperator(task_id=f"task_{i}", python_callable=func) third = PythonOperator(task_id=f"task_{i}a", python_callable=func) check = PythonOperator( task_id=f"check_{i}", trigger_rule=TriggerRule.ALL_DONE, python_callable=check_xcom_output_from_first, op_kwargs={"val": first.output, "expected_val": i}, ) first >> branch >> [second, third] >> check The check* tasks succeed meaning the first task in a given stream does push its value and not the last stream's. TaskFlow API from pendulum import datetime from airflow.decorators import dag, task from airflow.utils.trigger_rule import TriggerRule @dag(start_date=datetime(2023, 1, 1), schedule=None) def multiple_branch_loop(): @task() def xcom_push(val): return val @task() def func(): ... @task.branch() def choose(val): return f"task_{val}" @task(trigger_rule=TriggerRule.ALL_DONE) def check_xcom_output_from_first(val, expected_val): assert val == expected_val stuff = ["a", "b", "c"] for i in stuff: first = xcom_push.override(task_id=f"first_task_{i}")(val=i) branch = choose.override(task_id=f"branch_{i}")(val=first) second = func.override(task_id=f"task_{i}")() third = func.override(task_id=f"task_{i}a")() check = check_xcom_output_from_first.override(task_id=f"check_{i}")(val=first, expected_val=i) first >> branch >> [second, third] >> check multiple_branch_loop() Same expected behavior as well confirmed in the check* tasks: | 3 | 4 |
75,387,306 | 2023-2-8 | https://stackoverflow.com/questions/75387306/azure-ml-experiment-using-custom-gpu-cuda-environment | During the last week I have been trying to create a python experiment in Azure ML studio. The job consists on training a PyTorch (1.12.1) Neural Network using a custom environment with CUDA 11.6 for GPU acceleration. However, when attempting any movement operation I get a Runtime Error: device = torch.device("cuda") test_tensor = torch.rand((3, 4), device = "cpu") test_tensor.to(device) CUDA error: all CUDA-capable devices are busy or unavailable CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. I have tried to set CUDA_LAUNCH_BLOCKING=1, but this does not change the result. I have also tried to check if CUDA is available: print(f"Is cuda available? {torch.cuda.is_available()}") print(f"Which is the current device? {torch.cuda.current_device()}") print(f"How many devices do we have? {torch.cuda.device_count()}") print(f"How is the current device named? {torch.cuda.get_device_name(torch.cuda.current_device())}") and the result is completely normal: Is cuda available? True Which is the current device? 0 How many devices do we have? 1 How is the current device named? Tesla K80 I also tried to downgrade and change the CUDA, Torch and Python versions, but this does not seem to affect the error. As far as I found this error appears only when using a custom environment. When a curated environment is used, the scripts runs with no problem. However, as the script needs of some libraries like OpenCV, I am forced to use a custom DockerFile to create my environment, which you can read here for reference: FROM mcr.microsoft.com/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121:biweekly.202301.1 USER root RUN apt update # Necessary dependencies for OpenCV RUN apt install ffmpeg libsm6 libxext6 libgl1-mesa-glx -y RUN pip install numpy matplotlib pandas opencv-python Pillow scipy tqdm mlflow joblib onnx ultralytics RUN pip install 'ipykernel~=6.0' \ 'azureml-core' \ 'azureml-dataset-runtime' \ 'azureml-defaults' \ 'azure-ml' \ 'azure-ml-component' \ 'azureml-mlflow' \ 'azureml-telemetry' \ 'azureml-contrib-services' COPY --from=mcr.microsoft.com/azureml/o16n-base/python-assets:20220607.v1 /artifacts /var/ RUN /var/requirements/install_system_requirements.sh && \ cp /var/configuration/rsyslog.conf /etc/rsyslog.conf && \ cp /var/configuration/nginx.conf /etc/nginx/sites-available/app && \ ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/app && \ rm -f /etc/nginx/sites-enabled/default ENV SVDIR=/var/runit ENV WORKER_TIMEOUT=400 EXPOSE 5001 8883 8888 The code from the COPY statement is a copy from one of the curated environments already predefined by Azure. I would like to highlight that I tried using the DockerFile given in one of these environments, without any modification and I get the same result. Hence, my question is: How can I run a CUDA job using a custom environment? Is it possible? I have tried to find a solution for this but I have not been able of finding any person with the same problem, nor any place in the Microsoft documentation where I could ask for this. I hope this is not duplicated and that any of you can help me out here. | The problem is indeed sensitive and hard to debug. 
I suspect it has to do with the underlying hardware on which the docker container is deployed, not with the actual custom Docker container and its corresponding dependencies. Since you have a Tesla K80, I suspect NC series video cards (upon which the environments are deployed). As of writing this comment (10th of February 2023), the following observation is valid (https://learn.microsoft.com/en-us/azure/machine-learning/resource-curated-environments): Note Currently, due to underlying cuda and cluster incompatibilities, on NC series only AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu with cuda 11.3 can be used. Therefore, in my opinion, this can be traced back to the supported versions of CUDA + PyTorch and Python. What I did in my case, I just installed my dependences via a .yaml dependency file when creating the environment, starting from this base image: Azure container registry mcr.microsoft.com/azureml/curated/acpt-pytorch-1.11-py38-cuda11.3-gpu:9 You can start building your docker container from this URI as base image in order to work properly on Tesla K80s. IMPORTANT NOTE : Using this base image did work in my case, I was able to train PyTorch models. | 3 | 5 |
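A hedged sketch of one way to apply this with the v1 azureml-core SDK, assuming the extra dependencies (OpenCV and friends) live in a conda.yaml file; the image URI is the curated one quoted above:
from azureml.core import Environment
env = Environment.from_conda_specification(name="pytorch-cuda113-k80", file_path="conda.yaml")  # name and file path are illustrative
env.docker.base_image = "mcr.microsoft.com/azureml/curated/acpt-pytorch-1.11-py38-cuda11.3-gpu:9"  # supported CUDA 11.3 image for NC-series (Tesla K80) nodes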
75,404,979 | 2023-2-9 | https://stackoverflow.com/questions/75404979/why-does-my-context-manager-not-exit-on-exception | I am learning about context managers and was trying to build one myself. The following is a dummy context manager that opens a file in read mode (I know I can just do with open(...): .... this is just an example I built to help me understand how to make my own context managers): @contextmanager def open_read(path: str): f = open(path, 'r') print('open') yield f f.close() print('closed') def foo(): try: with open_read('main.py') as f: print(f.readline()) raise Exception('oopsie') except Exception: pass print(f.readline()) foo() I expect this code to print: open <line 1 of a.txt> closed ValueError: I/O operation on closed file. But instead it prints: open <line 1 of a.txt> <line 2 of a.txt> It didn't close the file! This seems to contradict python's docs which state that __exit__ will be called whether the with statement exited successfully or with an exception: object.exit(self, exc_type, exc_value, traceback) Exit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be None. Interestingly, when I reimplemented the context manager as shown below, it worked as expected: class open_read(ContextDecorator): def __init__(self, path: str): self.path = path self.f = None def __enter__(self): self.f = open(self.path, 'r') print('open') return self.f def __exit__(self, exc_type, exc_val, exc_tb): self.f.close() print('closed') Why didn't my original implementation work? | The line f.close() is never reached (we exit that frame early due to unhandled exception), and then the exception was "handled" in the outer frame (i.e. within foo). If you want it to close regardless, you'll have to implement it like that: @contextmanager def open_read(path: str): f = open(path, 'r') try: print('open') yield f finally: f.close() print('closed') However, I'd like to point out that that the built-in open is already returning a context-manager, and you may be reinventing stdlib contextlib.closing. | 6 | 7 |
75,392,769 | 2023-2-8 | https://stackoverflow.com/questions/75392769/how-to-use-apache-arrow-ipc-from-multiple-processes-possibly-from-different-lan | I'm not sure where to begin, so looking for some guidance. I'm looking for a way to create some arrays/tables in one process, and have it accessible (read-only) from another. So I create a pyarrow.Table like this: a1 = pa.array(list(range(3))) a2 = pa.array(["foo", "bar", "baz"]) a1 # <pyarrow.lib.Int64Array object at 0x7fd7c4510100> # [ # 0, # 1, # 2 # ] a2 # <pyarrow.lib.StringArray object at 0x7fd7c5d6fa00> # [ # "foo", # "bar", # "baz" # ] tbl = pa.Table.from_arrays([a1, a2], names=["num", "name"]) tbl # pyarrow.Table # num: int64 # name: string # ---- # num: [[0,1,2]] # name: [["foo","bar","baz"]] Now how do I read this from a different process? I thought I would use multiprocessing.shared_memory.SharedMemory, but that didn't quite work: shm = shared_memory.SharedMemory(name='pa_test', create=True, size=tbl.nbytes) with pa.ipc.new_stream(shm.buf, tbl.schema) as out: for batch in tbl.to_batches(): out.write(batch) # TypeError: Unable to read from object of type: <class 'memoryview'> Do I need to wrap the shm.buf with something? Even if I get this to work, it seems very fiddly. How would I do this in a robust manner? Do I need something like zmq? I'm not clear how this is zero copy though. When I write the record batches, isn't that serialisation? What am I missing? In my real use case, I also want to talk to Julia, but maybe that should be a separate question when I come to it. PS: I have gone through the docs, it didn't clarify this part for me. | Do I need to wrap the shm.buf with something? Yes, you can use pa.py_buffer() to wrap it: size = calculate_ipc_size(table) shm = shared_memory.SharedMemory(create=True, name=name, size=size) stream = pa.FixedSizeBufferWriter(pa.py_buffer(shm.buf)) with pa.RecordBatchStreamWriter(stream, table.schema) as writer: writer.write_table(table) Also, for size you need to calculate the size of the IPC output, which may be a bit larger than Table.nbytes. The function you can use for that is: def calculate_ipc_size(table: pa.Table) -> int: sink = pa.MockOutputStream() with pa.ipc.new_stream(sink, table.schema) as writer: writer.write_table(table) return sink.size() How would I do this in a robust manner? Not sure of this part yet. In my experience the original process needs to stay alive while the others are reusing the buffers, but there might be a way to get around that. This is likely connected to this bug in CPython: https://bugs.python.org/issue38119 I'm not clear how this is zero copy though. When I write the record batches, isn't that serialisation? What am I missing? You are correct that writing the Arrow data into an IPC buffer does involve copies. The zero-copy part is when other processes read the data from shared memory. The columns of the Arrow table will reference the relevant segments of the IPC buffer, rather than a copy. | 9 | 10 |
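A hedged sketch of the reading side implied by the answer above (attach to the shared memory block by the name used when writing, then open the IPC stream zero-copy; as noted, the writer process has to stay alive while readers use the buffer):
from multiprocessing import shared_memory
import pyarrow as pa
shm = shared_memory.SharedMemory(name="pa_test")  # attach to the existing block, do not create it
with pa.ipc.open_stream(pa.py_buffer(shm.buf)) as reader:
    table = reader.read_all()  # the resulting columns reference the shared buffer instead of copying it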
75,401,197 | 2023-2-9 | https://stackoverflow.com/questions/75401197/pandas-extensions-usage-without-importing-it | I have created pandas extensions as mentioned here. The extending classes are defined in a module named pd_extensions, and I would like to use them in a different module, my_module for example. The two modules are in the same package, called source. Currently, to be able to use the extensions I'm importing the pd_extensions module into my_module like this: import source.pd_extensions Is there a way to use the extensions I created without importing the module? I find myself importing this module into every module in the package that wants to use the extensions, and I thought there might be a better way of doing it (maybe through the __init__ module). I tried just using the extensions without importing the module they are defined in, but it did not work, obviously. I'm thinking about importing it in the __init__ file so all the modules in the package would have access to it without having to import it themselves, but I can't figure out if it's possible. | I think you can import the extension module in the __init__ file, since the extension module will first import pandas and then register the accessor; therefore the pandas module will be cached in sys.modules and any subsequent import of pandas from other modules will simply retrieve the entry from the cache. Here is a simple example: source ├── __init__.py ├── my_module.py └── pd_extension.py The following are the contents of the files: # pd_extension.py import pandas as pd @pd.api.extensions.register_dataframe_accessor('spam') class Spam: def __init__(self, df): self.df = df @property def shape(self): return self.df.shape # my_module.py import pandas as pd df = pd.DataFrame([[1, 2, 3], [4, 5, 6]]) print(df.spam.shape) # __init__.py import source.pd_extension Now let's test the code by executing my_module.py, which works as expected: $ python -m source.my_module (2, 3) | 3 | 2 |
75,401,348 | 2023-2-9 | https://stackoverflow.com/questions/75401348/selenium-chrome-driver-headless-mode-not-working | My code worked perfectly until yesterday, when I updated Google Chrome to version 110.0.5481.77. Now it's not working in headless mode: options.add_argument("--headless") I even tried adding options.add_argument("--window-size=1280,700") but it is still not working. However, if I remove the headless option it works correctly again! | According to this answer and the Google Chrome release notes, you should add the headless mode option like below: options.add_argument("--headless=new") and there is no need to specify the window size | 6 | 19
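For completeness, a minimal sketch of how that option might be wired into a full Selenium 4 setup; the URL is just a placeholder.

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")    # new headless mode for recent Chrome builds

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")         # placeholder URL
print(driver.title)
driver.quit()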
75,387,685 | 2023-2-8 | https://stackoverflow.com/questions/75387685/files-not-being-included-by-hatchling-when-specified-in-pyproject-toml | I am trying to package my tool with Hatch and want to include some extra files found in /docs in the below directory tree: this_project โ .gitattributes โ .gitignore โ LICENSE โ MANIFEST.in โ pyproject.toml โ README.md โ โโโโdocs โ default.primers โ โโโโribdif __init__.py __main__.py I am installing the tool with pip install git+https://github.com/Rob-murphys/ribdif.git but am only getting the expected file inside ribdif despite specifying in the pyproject.toml per https://hatch.pypa.io/latest/config/build/#file-selection: [build-system] requires = ["hatchling"] build-backend = "hatchling.build" [project] name = "ribdif" version = "1.1.2" authors = [ { name="Robert Murphy", email="[email protected]" }, ] description = "A program to analyse and correct for the usefulness of amplicon sequences" readme = "README.md" requires-python = ">=3.11" classifiers = [ "Programming Language :: Python :: 3.11", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Operating System :: OS Independent", ] [tool.hatch.build] include = [ "ribdif/*.py", "/docs", ] [project.scripts] ribdif = "ribdif.__main__:main" | When installing from github using pip I believe I am populating my site-packages with the content of the produced wheel and given that I need the extra files at run time I need to add to the wheel and not the source distribution. [tool.hatch.build.targets.wheel.force-include] "ribdif" = "ribdif" "docs/default.primers" = "ribdif/default.primers" This directs the default.primers file to be in the ribdif/ directory ofthe site-packages and thus available at run time! | 4 | 1 |
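Once default.primers ships inside the installed ribdif package as configured above, it can be located at run time without hard-coding an install path. A small sketch using importlib.resources (Python 3.9+), assuming the file really ends up at ribdif/default.primers:

from importlib.resources import files

# resolve the packaged data file relative to the installed ribdif package
primer_file = files("ribdif").joinpath("default.primers")
primer_text = primer_file.read_text()
print(primer_text.splitlines()[:5])   # peek at the first few lines as a sanity check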
75,393,757 | 2023-2-9 | https://stackoverflow.com/questions/75393757/how-to-escape-dot-in-a-key-to-get-json-value-with-redis-py | I had some JSON data containing dot(.) in keys that are written to redis using redis-py like this: r = redis.Redis() r.json().set(_id, "$", {'First.Last': "John.Smith"}) It works if reading the whole JSON data like r.json().get(_id) but error throws if directly getting the value with a path containing the dot: r.json().get(_id, "First\.Last") # ResponseError Obviously, the dot is not correctly escaped. Also tried to quote it like "'First.Last'"but doesn't work either. What's the correct way to deal with this special character in a key? The redis instance is a redis-stack-server docker container running on linux. Thank you! | RedisJSON path supports both dot and bracket notation https://redis.io/docs/stack/json/path/, so you could use this r.json().get(_id, '["First.Last"]') | 4 | 4 |
75,388,906 | 2023-2-8 | https://stackoverflow.com/questions/75388906/how-to-rotate-and-translate-an-image-with-opencv-without-losing-off-screen-data | I'm trying to use opencv to perform subsequent image transformations. I have an image that I want to both rotate and translate while keeping the overall image size constant. I've been using the warpAffine function with rotation and translation matrixes to perform the transformations, but the problem is that after performing one transformation some image data is lost and not carried over to the next transformation. Original Rotated Translated and rotated. Note how the corners of the original image have been clipped off. Desired output What I would like is to have an image that's very similar to the translated image here, but without the corners clipped off. I understand that this occurs because after performing the first Affine transform the corner data is removed. However, I'm not really sure how I can preserve this data while still maintaining the image's original size with respect to its center. I am very new to computer vision and do not have a strong background in linear algebra or matrix math. Most of the code I've managed to make work has been learned from online tutorials, so any and all accessible help fixing this would be greatly appreciated! Here is the code used to generate the above images: import numpy as np import cv2 def rotate_image(image, angle): w, h = (image.shape[1], image.shape[0]) cx, cy = (w//2,h//2) M = cv2.getRotationMatrix2D((cx,cy), -1*angle, 1.0) rotated = cv2.warpAffine(image, M, (w,h)) return rotated def translate_image(image, d_x, d_y): M = np.float32([ [1,0,d_x], [0,1,d_y] ]) return cv2.warpAffine(image, M, (image.shape[1], image.shape[0])) path = "dog.jpg" image = cv2.imread(path) angle = 30.0 d_x = 200 d_y = 300 rotated = rotate_image(image, angle) translated = translate_image(rotated, d_x, d_y) | Chaining the rotation and translation transformations is what you are looking for. Instead of applying the rotation and translation one after the other, we may apply cv2.warpAffine with the equivalent chained transformation matrix. Using cv2.warpAffine only once, prevents the corner cutting that resulted by the intermediate image (intermediate rotated only image with cutted corners). Chaining two transformation matrices is done by multiplying the two matrices. In OpenCV convention, the first transformation is multiplied from the left side. The last row of 2D affine transformation matrix is always [0, 0, 1]. OpenCV conversion is omitting the last row, so M is 2x3 matrix. Chaining OpenCV transformations M0 and M1 applies the following stages: Insert the "omitting last row" [0, 0, 1] to M0 and to M1. T0 = np.vstack((M0, np.array([0, 0, 1]))) T1 = np.vstack((M1, np.array([0, 0, 1]))) Chain transformations by matrix multiplication: T = T1 @ T0 Remove the last row (equals [0, 0, 1]) for matching OpenCV 2x3 convention: M = T[0:2, :] The higher level solution applies the following stages: Compute rotation transformation matrix. Compute translation transformation matrix. Chain the rotation and translation transformations. Apply affine transformation with the chained transformations matrix. 
Code sample: import numpy as np import cv2 def get_rotation_mat(image, angle): w, h = (image.shape[1], image.shape[0]) cx, cy = (w//2,h//2) M = cv2.getRotationMatrix2D((cx, cy), -1*angle, 1.0) #rotated = cv2.warpAffine(image, M, (w,h)) return M def get_translation_mat(d_x, d_y): M = np.float64([ [1, 0, d_x], [0, 1, d_y] ]) #return cv2.warpAffine(image, M, (image.shape[1], image.shape[0])) return M def chain_affine_transformation_mats(M0, M1): """ Chaining affine transformations given by M0 and M1 matrices. M0 - 2x3 matrix applying the first affine transformation (e.g rotation). M1 - 2x3 matrix applying the second affine transformation (e.g translation). The method returns M - 2x3 matrix that chains the two transformations M0 and M1 (e.g rotation then translation in a single matrix). """ T0 = np.vstack((M0, np.array([0, 0, 1]))) # Add row [0, 0, 1] to the bottom of M0 ([0, 0, 1] applies last row of eye matrix), T0 is 3x3 matrix. T1 = np.vstack((M1, np.array([0, 0, 1]))) # Add row [0, 0, 1] to the bottom of M1. T = T1 @ T0 # Chain transformations T0 and T1 using matrix multiplication. M = T[0:2, :] # Remove the last row from T (the last row of affine transformations is always [0, 0, 1] and OpenCV conversion is omitting the last row). return M path = "dog.jpg" image = cv2.imread(path) angle = 30.0 d_x = 200 d_y = 300 #rotated = rotate_image(image, angle) #translated = translate_image(rotated, d_x, d_y) rotationM = get_rotation_mat(image, angle) # Compute rotation transformation matrix translationM = get_translation_mat(d_x, d_y) # Compute translation transformation matrix M = chain_affine_transformation_mats(rotationM, translationM) # Chain rotation and translation transformations (translation after rotation) transformed_image = cv2.warpAffine(image, M, (image.shape[1], image.shape[0])) # Apply affine transformation with the chained (unified) matrix M. cv2.imwrite("transformed_dog.jpg", transformed_image) # Store output for testing Output: | 3 | 6 |
75,387,904 | 2023-2-8 | https://stackoverflow.com/questions/75387904/how-to-exclude-tests-folder-from-the-wheel-of-a-pyproject-toml-managed-lib | I try my best to move from a setup.py managed lib to a pure pyproject.toml one. I have the following folder structure: tests โโโ <files> docs โโโ <files> sepal_ui โโโ <files> pyproject.toml and in my pyproject.toml the following setup for file and packages discovery: [build-system] requires = ["setuptools>=61.2", "wheel"] [tool.setuptools] include-package-data = false [tool.setuptools.packages.find] include = ["sepal_ui*"] exclude = ["docs*", "tests*"] and in the produce wheel, I get the following: tests โโโ <files> docs โโโ <files> sepal_ui โโโ <files> sepal_ui.egg-info โโโ top-level.txt looking at the top-level.txt, I see that only sepal_ui is included so my question is simple why do the extra "docs" and "tests" folder are still included even if they are not used? how to get rid of them ? PS: I'm aware of the MANIFEST.in solution that I will accept if it's really the only one but I found it redundant to specify in 2 files. | Fun fact, it was working from the start.... Small debugging workflow for the next person that does not want to spend hours for nothing. configuration of the pyproject.toml the following configuration is the minimal to remove files from a docs/ and tests/ folders that are at the root of the repository. If you disseminated your tests in each modules, consider adding *.tests*: [build-system] requires = ["setuptools>=61.2", "wheel"] [tool.setuptools] include-package-data = false [tool.setuptools.packages.find] include = ["sepal_ui*"] exclude = ["docs*", "tests*"] Clean environment That was my mistake. Python is cahing information in the build/ and egg-info folder. setuptools will use the lest of files stored in egg-info/SOURCE.txt so first step: get rid of previous build files. Then simply run: python -m build check both wheel and tar.gz From the start I was checking only tar.gz thinking (naively) that .whl and .tar.gz were the same. It's not while the tests folder remains in the tar.gz file, it's absent from the wheel. | 18 | 17 |
75,389,166 | 2023-2-8 | https://stackoverflow.com/questions/75389166/how-to-match-an-empty-dictionary | Python supports Structural Pattern Matching since version 3.10. I came to notice that matching an empty dict doesn't work by simply matching {} as it does for lists. According to my naive approach, non-empty dicts are also matched (Python 3.10.4): def match_empty(m): match m: case []: print("empty list") case {}: print("empty dict") case _: print("not empty") match_empty([]) # empty list match_empty([1, 2]) # not empty match_empty({}) # empty dict match_empty({'a': 1}) # empty dict Matching the constructors even breaks the empty list matching: def match_empty(m): match m: case list(): print("empty list") case dict(): print("empty dict") case _: print("not empty") match_empty([]) # empty list match_empty([1, 2]) # empty list match_empty({}) # empty dict match_empty({'a': 1}) # empty dict Here is a solution, that works as I expect: def match_empty(m): match m: case []: print("empty list") case d: if isinstance(d, dict) and len(d) == 0: print("empty dict") return print("not empty") match_empty([]) # empty list match_empty([1, 2]) # not empty match_empty({}) # empty dict match_empty({'a': 1}) # not empty Now my questions are: Why do my first 2 approaches not work (as expected)? Is there a way to use structural pattern matching to match only an empty dict (without checking the dict length explicitly)? | Using a mapping (dict) as the match pattern works a bit differently than using a sequence (list). You can match the dict's structure by key-value pairs where the key is a literal and the value can be a capture pattern so it is used in the case. You can use **rest within a mapping pattern to capture additional keys in the subject. The main difference with lists is - "extra keys in the subject will be ignored while matching". So when you use {} as a case, what you're really telling Python is "match a dict, no constraints whatsoever", and not "match an empty dict". So one way that might be slightly more elegant than your last attempt is: def match_empty(m): match m: case []: print("empty list") case {**keys}: if keys: print("non-empty dict") else: print("empty dict") case _: print("not empty") I think the main reason this feels awkward and doesn't work good is because the feature wasn't intended to work with mixed types like this. i.e. you're using the feature as a type-checker first and then as pattern matching. If you knew that m is a dict (and wanted to match its "insides"), this would work much nicer. | 4 | 1 |
75,382,397 | 2023-2-8 | https://stackoverflow.com/questions/75382397/python-write-bytes-to-file-using-redirect-of-print | using perl, $ perl -e 'print "\xca"' > out now $ xxd out we have 00000000: ca But with Python, I tried $ python3 -c 'print("\xca", end="")' > out $ xxd out what I got is 00000000: c38a I'm not sure what is going on. | So in Python, a str object is a series of unicode code points. How this is printed to the screen depends on the encoding of your sys.stdout. This is picked based on your locale (or possibly various environment variables can affect this, but by default, it is your locale). So yours must be set to UTF-8. That's my default too: (py311) Juans-MBP:~ juan$ locale LANG="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_CTYPE="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_ALL= (py311) Juans-MBP:~ juan$ python -c "print('\xca', end='')" | xxd 00000000: c38a However, if I override my locale and tell it to use en_US.ISO8859-1 (latin-1), a single-byte encoding, we get what you expect: (py311) Juans-MBP:~ juan$ LC_ALL="en_US.ISO8859-1" python -c "print('\xca', end='')" | xxd 00000000: ca The solution is to work with raw bytes if you want raw bytes. The way to do that in Python source code is to use a bytes literal (or a string literal and then .encode it). We can use the raw buffer at sys.stdout.buffer: (py311) Juans-MBP:~ juan$ python -c "import sys; sys.stdout.buffer.write(b'\xca')" | xxd 00000000: ca Or by encoding a string to a bytes object: (py311) Juans-MBP:~ juan$ python -c "import sys; sys.stdout.buffer.write('\xca'.encode('latin'))" | xxd 00000000: ca | 5 | 3 |
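If the goal is specifically to make print() emit single raw bytes for code points 0-255 regardless of locale, another option (Python 3.7+) is to reconfigure the stream's encoding; a small sketch, with latin-1 chosen to mirror the answer above.

import sys

# make print() encode code points 0-255 as single bytes, like the Perl one-liner
sys.stdout.reconfigure(encoding="latin-1")
print("\xca", end="")   # now writes the single byte 0xCA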
75,366,567 | 2023-2-6 | https://stackoverflow.com/questions/75366567/how-do-i-use-a-custom-pip-conf-in-a-docker-image | How can I configure a Docker container to use a custom pip.conf file? This does not (seem to) work for me: from python:3.9 COPY pip.conf ~/.config/pip/pip.conf where pip.conf is a copy of the pip configuration that points to a proprietary package repository. | The problem is the ~ expansion. This is a shell feature, and it does not work in a Dockerfile. Just define it like this, with the destination path written out explicitly: from python:3.9 COPY pip.conf /root/.config/pip/pip.conf If you want to use something other than /root/.config, consider adding a WORKDIR instruction and specifying paths relative to that. | 6 | 7
75,310,143 | 2023-2-1 | https://stackoverflow.com/questions/75310143/polars-adding-days-to-a-date | I am using Polars in Python to try and add thirty days to a date I run the code, get no errors but also get no new dates Can anyone see my mistake? import polars as pl df = pl.DataFrame( {"start_date": ["2020-01-02", "2020-01-03", "2020-01-04"]}) df = df.with_columns( pl.col("start_date").str.to_date() ) # Generate the days above and below df = df.with_columns( pl.col("start_date") + pl.duration(days=30).alias("date_plus_delta") ) df = df.with_columns( pl.col("start_date") + pl.duration(days=-30).alias("date_minus_delta") ) print(df) shape: (3, 1) โโโโโโโโโโโโโโ โ start_date โ โ --- โ โ date โ โโโโโโโโโโโโโโก โ 2020-01-02 โ โ 2020-01-03 โ โ 2020-01-04 โ โโโโโโโโโโโโโโ Quick References The Manual: https://docs.pola.rs/user-guide/transformations/time-series/parsing/ strftime formats: https://docs.rs/chrono/latest/chrono/format/strftime/index.html SO Answer from a previous Post: How to add a duration to datetime in Python polars | You're supposed to call .alias on the entire operation pl.col('start_date') + pl.duration(days=30). Instead you're only alias-ing on pl.duration(days=30). So the correct way would be: import polars as pl df = pl.DataFrame({"start_date": ["2020-01-02", "2020-01-03", "2020-01-04"]}) df = df.with_columns(pl.col("start_date").str.to_date()) # Generate the days above and below df = df.with_columns((pl.col("start_date") + pl.duration(days=30)).alias("date_plus_delta")) df = df.with_columns((pl.col("start_date") - pl.duration(days=30)).alias("date_minus_delta")) print(df) Output shape: (3, 3) โโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโโโโโ โ start_date โ date_plus_delta โ date_minus_delta โ โ --- โ --- โ --- โ โ date โ date โ date โ โโโโโโโโโโโโโโชโโโโโโโโโโโโโโโโโโชโโโโโโโโโโโโโโโโโโโก โ 2020-01-02 โ 2020-02-01 โ 2019-12-03 โ โ 2020-01-03 โ 2020-02-02 โ 2019-12-04 โ โ 2020-01-04 โ 2020-02-03 โ 2019-12-05 โ โโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโ | 3 | 4 |
75,313,457 | 2023-2-1 | https://stackoverflow.com/questions/75313457/openai-api-openai-api-key-os-getenv-not-working | I am just trying some simple functions in Python with OpenAI APIs but running into an error: I have a valid API secret key which I am using. Code: >>> import os >>> import openai >>> openai.api_key = os.getenv("I have placed the key here") >>> response = openai.Completion.create(model="text-davinci-003", prompt="Say this is a test", temperature=0, max_tokens=7) | Option 1: OpenAI API key not set as an environment variable Change this... openai.api_key = os.getenv('sk-xxxxxxxxxxxxxxxxxxxx') ...to this. openai.api_key = 'sk-xxxxxxxxxxxxxxxxxxxx' Option 2: OpenAI API key set as an environment variable (recommended) There are two ways to set the OpenAI API key as an environment variable: using an .env file (easier, but don't forget to create a .gitignore file) or using Windows Environment Variables. Way 1: Using an .env file Change this... openai.api_key = os.getenv('sk-xxxxxxxxxxxxxxxxxxxx') ...to this... openai.api_key = os.getenv('OPENAI_API_KEY') Also, don't forget to use the python-dotenv package. Your final Python file should look as follows: # main.py import os from dotenv import load_dotenv from openai import OpenAI # Load environment variables from the .env file load_dotenv() # Initialize OpenAI client with the API key from environment variables client = OpenAI( api_key=os.getenv("OPENAI_API_KEY"), ) It's crucial that you create a .gitignore file not to push the .env file to your GitHub/GitLab and leak your OpenAI API key! # .gitignore .env Way 2: Using Windows Environment Variables (source) STEP 1: Open System properties and select Advanced system settings STEP 2: Select Environment Variables STEP 3: Select New STEP 4: Add your name/key value pair Variable name: OPENAI_API_KEY Variable value: sk-xxxxxxxxxxxxxxxxxxxx STEP 5: Restart your computer (IMPORTANT!) Your final Python file should look as follows: # main.py import os from dotenv import load_dotenv from openai import OpenAI # Initialize OpenAI client # It will automatically use your OpenAI API key set via Windows Environment Variables client = OpenAI() | 3 | 15 |
75,378,025 | 2023-2-7 | https://stackoverflow.com/questions/75378025/how-to-complete-a-self-join-in-python-polars-vs-pandas-sql | I am trying to use python polars over pandas sql for a large dataframe as I am running into memory errors. There are two where conditions that are utilized in this dataframe but can't get the syntax right. Here is what the data looks like: Key Field DateColumn 1234 Plumb 2020-02-01 1234 Plumb 2020-03-01 1234 Pear 2020-04-01 import pandas as pd import datetime as dt import pandasql as ps d = {'Key': [1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754], 'Field':[ "Plumb", "Plumb", "Pear", "Plumb", "Orange", "Pear", "Plumb", "Plumb", "Pear", "Apple", "Plumb", "Orange", "Orange", "Apple", "Apple", "Pear", "Apple", "Plumb", "Plumb", "Orange", "Orange", "Pear", "Plumb", "Pear", "Plumb", "Pear", "Apple", "Plumb", "Orange", "Pear", "Apple", "Pear", "Apple"], 'DateColumn':[ '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01', '2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01', '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01', '2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01', '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01', '2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01' ]} df = pd.DataFrame(data=d) df['DateColumn'] = pd.to_datetime(df['DateColumn']) df['PreviousMonth'] = df['DateColumn'] - pd.DateOffset(months=1) df_output = ps.sqldf(""" select a.Key ,a.Field ,b.Field as PreviousField ,a.DateColumn ,b.DateColumn as PreviousDate from df as a, df as b where a.Key = b.Key and b.DateColumn = a.PreviousMonth """) print(df_output.head()) Key Field DateColumn PreviousDate 0 1234 Plumb 2020-03-01 00:00:00.000000 2020-02-01 00:00:00.000000 1 1234 Pear 2020-04-01 00:00:00.000000 2020-03-01 00:00:00.000000 2 1234 Plumb 2020-05-01 00:00:00.000000 2020-04-01 00:00:00.000000 3 1234 Orange 2020-06-01 00:00:00.000000 2020-05-01 00:00:00.000000 4 1234 Pear 2020-07-01 00:00:00.000000 2020-06-01 00:00:00.000000 I have tried to do data_output = df.join(df, left_on='Key', right_on='Key') But unable to find a good example on how to put the two conditions on the join condition. | Let's accomplish everything using Polars. 
import polars as pl df = ( pl.DataFrame(d) .with_columns( pl.col('DateColumn').str.to_date() ) ) ( df .join( df .with_columns(pl.col('DateColumn').alias('PreviousDate')) .rename({'Field': 'PreviousField'}), left_on=['Key', 'DateColumn'], right_on=['Key', pl.col('DateColumn').dt.offset_by('1mo')], how="inner" ) ) shape: (30, 5) โโโโโโโโฌโโโโโโโโโฌโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโ โ Key โ Field โ DateColumn โ PreviousField โ PreviousDate โ โ --- โ --- โ --- โ --- โ --- โ โ i64 โ str โ date โ str โ date โ โโโโโโโโชโโโโโโโโโชโโโโโโโโโโโโโชโโโโโโโโโโโโโโโโชโโโโโโโโโโโโโโโก โ 1234 โ Plumb โ 2020-03-01 โ Plumb โ 2020-02-01 โ โ 1234 โ Pear โ 2020-04-01 โ Plumb โ 2020-03-01 โ โ 1234 โ Plumb โ 2020-05-01 โ Pear โ 2020-04-01 โ โ 1234 โ Orange โ 2020-06-01 โ Plumb โ 2020-05-01 โ โ 1234 โ Pear โ 2020-07-01 โ Orange โ 2020-06-01 โ โ 1234 โ Plumb โ 2020-08-01 โ Pear โ 2020-07-01 โ โ 1234 โ Plumb โ 2020-09-01 โ Plumb โ 2020-08-01 โ โ 1234 โ Pear โ 2020-10-01 โ Plumb โ 2020-09-01 โ โ 1234 โ Apple โ 2020-11-01 โ Pear โ 2020-10-01 โ โ 1234 โ Plumb โ 2020-12-01 โ Apple โ 2020-11-01 โ โ 2456 โ Orange โ 2020-03-01 โ Orange โ 2020-02-01 โ โ 2456 โ Apple โ 2020-04-01 โ Orange โ 2020-03-01 โ โ 2456 โ Apple โ 2020-05-01 โ Apple โ 2020-04-01 โ โ 2456 โ Pear โ 2020-06-01 โ Apple โ 2020-05-01 โ โ 2456 โ Apple โ 2020-07-01 โ Pear โ 2020-06-01 โ โ 2456 โ Plumb โ 2020-08-01 โ Apple โ 2020-07-01 โ โ 2456 โ Plumb โ 2020-09-01 โ Plumb โ 2020-08-01 โ โ 2456 โ Orange โ 2020-10-01 โ Plumb โ 2020-09-01 โ โ 2456 โ Orange โ 2020-11-01 โ Orange โ 2020-10-01 โ โ 2456 โ Pear โ 2020-12-01 โ Orange โ 2020-11-01 โ โ 3754 โ Pear โ 2020-03-01 โ Plumb โ 2020-02-01 โ โ 3754 โ Plumb โ 2020-04-01 โ Pear โ 2020-03-01 โ โ 3754 โ Pear โ 2020-05-01 โ Plumb โ 2020-04-01 โ โ 3754 โ Apple โ 2020-06-01 โ Pear โ 2020-05-01 โ โ 3754 โ Plumb โ 2020-07-01 โ Apple โ 2020-06-01 โ โ 3754 โ Orange โ 2020-08-01 โ Plumb โ 2020-07-01 โ โ 3754 โ Pear โ 2020-09-01 โ Orange โ 2020-08-01 โ โ 3754 โ Apple โ 2020-10-01 โ Pear โ 2020-09-01 โ โ 3754 โ Pear โ 2020-11-01 โ Apple โ 2020-10-01 โ โ 3754 โ Apple โ 2020-12-01 โ Pear โ 2020-11-01 โ โโโโโโโโดโโโโโโโโโดโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโ Note that we use an Expression in the right_on columns to generate our offset date column on-the-fly. | 3 | 4 |
75,352,810 | 2023-2-5 | https://stackoverflow.com/questions/75352810/how-to-web-scrap-economic-calendar-data-from-tradingview-and-load-into-dataframe | I want to load the Economic Calendar data from TradingView link and load into Dataframe ? Link: https://in.tradingview.com/economic-calendar/ Filter-1: Select Data for India and United States Filter-2: Data for This Week | Update: 2024-07-04: you have to specify 'Origin' as headers You can request this url: https://economic-calendar.tradingview.com/events import pandas as pd import requests url = 'https://economic-calendar.tradingview.com/events' today = pd.Timestamp.today().normalize() headers = { 'Origin': 'https://in.tradingview.com' } payload = { 'from': (today + pd.offsets.Hour(23)).isoformat() + '.000Z', 'to': (today + pd.offsets.Day(7) + pd.offsets.Hour(22)).isoformat() + '.000Z', 'countries': ','.join(['US', 'IN']) } data = requests.get(url, headers=headers, params=payload).json() df = pd.DataFrame(data['result']) Output: >>> df id title country ... ticker comment scale 0 312843 3-Month Bill Auction US ... NaN NaN NaN 1 312844 6-Month Bill Auction US ... NaN NaN NaN 2 316430 LMI Logistics Managers Index Current US ... USLMIC The Logistics Managers Survey is a monthly stu... NaN 3 316503 Exports US ... USEXP The United States is the world's third biggest... B 4 316504 Imports US ... USIMP The United States is the world's second-bigges... B 5 316505 Balance of Trade US ... USBOT The United States has been running consistent ... B 6 312845 Redbook YoY US ... USRI The Johnson Redbook Index is a sales-weighted ... NaN 7 316509 IBD/TIPP Economic Optimism US ... USEOI IBD/TIPP Economic Optimism Index measures Amer... NaN 8 337599 Fed Chair Powell Speech US ... USINTR In the United States, the authority to set int... NaN 9 334599 3-Year Note Auction US ... NaN NaN NaN 10 337600 Fed Barr Speech US ... USINTR In the United States, the authority to set int... NaN 11 316449 Consumer Credit Change US ... USCCR In the United States, Consumer Credit refers t... B 12 312846 API Crude Oil Stock Change US ... USCSC Stocks of crude oil refer to the weekly change... M 13 316575 Cash Reserve Ratio IN ... INCRR Cash Reserve Ratio is a specified minimum frac... NaN 14 334653 RBI Interest Rate Decision IN ... ININTR In India, interest rate decisions are taken by... NaN 15 312847 MBA 30-Year Mortgage Rate US ... USMR MBA 30-Year Mortgage Rate is average 30-year f... NaN 16 312848 MBA Mortgage Applications US ... USMAPL In the US, the MBA Weekly Mortgage Application... NaN 17 312849 MBA Mortgage Refinance Index US ... USMRI The MBA Weekly Mortgage Application Survey is ... NaN 18 312850 MBA Mortgage Market Index US ... USMMI The MBA Weekly Mortgage Application Survey is ... NaN 19 312851 MBA Purchase Index US ... USPIND NaN NaN 20 337604 Fed Williams Speech US ... USINTR In the United States, the authority to set int... NaN 21 316553 Wholesale Inventories MoM US ... USWI The Wholesale Inventories are the stock of uns... NaN 22 337601 Fed Barr Speech US ... USINTR In the United States, the authority to set int... NaN 23 312852 EIA Refinery Crude Runs Change US ... USRCR Crude Runs refer to the volume of crude oil co... M 24 312853 EIA Crude Oil Stocks Change US ... USCOSC Stocks of crude oil refer to the weekly change... M 25 312854 EIA Distillate Stocks Change US ... USDFS NaN M 26 312855 EIA Heating Oil Stocks Change US ... USHOS NaN M 27 312856 EIA Gasoline Production Change US ... USGPRO NaN M 28 312857 EIA Crude Oil Imports Change US ... 
USCOI NaN M 29 312858 EIA Gasoline Stocks Change US ... USGSCH Stocks of gasoline refers to the weekly change... M 30 312859 EIA Cushing Crude Oil Stocks Change US ... USCCOS Change in the number of barrels of crude oil h... M 31 312860 EIA Distillate Fuel Production Change US ... USDFP NaN M 32 337598 17-Week Bill Auction US ... NaN NaN NaN 33 334575 WASDE Report US ... NaN NaN NaN 34 334586 10-Year Note Auction US ... NaN Generally, a government bond is issued by a na... NaN 35 337602 Fed Waller Speech US ... USINTR In the United States, the authority to set int... NaN 36 312933 M3 Money Supply YoY IN ... INM3 India Money Supply M3 includes M2 plus long-te... NaN 37 312863 Jobless Claims 4-week Average US ... USJC4W NaN K 38 312864 Continuing Jobless Claims US ... USCJC Continuing Jobless Claims refer to actual numb... K 39 312865 Initial Jobless Claims US ... USIJC Initial jobless claims have a big impact in fi... K 40 312866 EIA Natural Gas Stocks Change US ... USNGSC Natural Gas Stocks Change refers to the weekly... B 41 312867 8-Week Bill Auction US ... NaN NaN NaN 42 312868 4-Week Bill Auction US ... NaN NaN NaN 43 334602 30-Year Bond Auction US ... NaN NaN NaN 44 312827 Deposit Growth YoY IN ... INDG In India, deposit growth refers to the year-ov... NaN 45 312869 Foreign Exchange Reserves IN ... INFER In India, Foreign Exchange Reserves are the fo... B 46 337022 Bank Loan Growth YoY IN ... INLG In India, bank loan growth refers to the year-... NaN 47 316685 Industrial Production YoY IN ... INIPYY In India, industrial production measures the o... NaN 48 316687 Manufacturing Production YoY IN ... INMPRYY Manufacturing production measures the output o... NaN 49 312902 Michigan Consumer Expectations Prel US ... USMCE The Index of Consumer Expectations focuses on ... NaN 50 312903 Michigan Current Conditions Prel US ... USMCEC The Index of Consumer Expectations focuses on ... NaN 51 312904 Michigan 5 Year Inflation Expectations Prel US ... USMIE5Y The Index of Consumer Expectations focuses on ... NaN 52 312905 Michigan Inflation Expectations Prel US ... USMIE1Y The Index of Consumer Expectations focuses on ... NaN 53 312906 Michigan Consumer Sentiment Prel US ... USCCI The Index of Consumer Expectations focuses on ... NaN 54 337603 Fed Waller Speech US ... USINTR In the United States, the authority to set int... NaN 55 312870 Baker Hughes Oil Rig Count US ... USCOR US Crude Oil Rigs refer to the number of activ... NaN 56 335652 Baker Hughes Total Rig Count US ... NaN US Total Rigs refer to the number of active US... NaN 57 335824 Monthly Budget Statement US ... USGBV Federal Government budget balance is the diffe... B 58 337605 Fed Harker Speech US ... USINTR In the United States, the authority to set int... NaN [59 rows x 16 columns] Info: >>> df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 59 entries, 0 to 58 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 59 non-null object 1 title 59 non-null object 2 country 59 non-null object 3 indicator 59 non-null object 4 period 59 non-null object 5 source 59 non-null object 6 actual 0 non-null object 7 previous 51 non-null float64 8 forecast 9 non-null float64 9 currency 59 non-null object 10 unit 28 non-null object 11 importance 59 non-null int64 12 date 59 non-null object 13 ticker 49 non-null object 14 comment 44 non-null object 15 scale 20 non-null object dtypes: float64(2), int64(1), object(13) memory usage: 7.5+ KB | 4 | 13 |
75,372,275 | 2023-2-7 | https://stackoverflow.com/questions/75372275/importerror-cannot-import-name-gdal-array-from-osgeo | I create a fresh environment, install numpy, then install GDAL. GDAL imports successfully and I can open images using gdal.Open(, but I get the ImportError: cannot import name '_gdal_array' from 'osgeo' error when trying to use ReadAsRaster. pip list returns: GDAL 3.6.2 numpy 1.24.2 pip 23.0 setuptools 65.6.3 wheel 0.38.4 Completely stumped, has anyone come across this? Google tells me that installing numpy first is the solution (but that doesn't help). Help would be much appreciated. | Pip is most likely caching and reinstalling your bad version of GDAL over and over, even though you installed numpy. Here's what fixed it for me: pip3 install --no-cache-dir --force-reinstall 'GDAL[numpy]==3.6.2' Installing without --no-cache-dir causes pip to reuse the compiled wheel: % pip3 install --force-reinstall 'GDAL[numpy]==3.6.2' Collecting GDAL[numpy]==3.6.2 Using cached GDAL-3.6.2-cp311-cp311-macosx_13_0_x86_64.whl Collecting numpy>1.0.0 Using cached numpy-1.24.2-cp311-cp311-macosx_10_9_x86_64.whl (19.8 MB) Installing collected packages: numpy, GDAL Attempting uninstall: numpy Found existing installation: numpy 1.24.2 Uninstalling numpy-1.24.2: Successfully uninstalled numpy-1.24.2 Attempting uninstall: GDAL Found existing installation: GDAL 3.6.2 Uninstalling GDAL-3.6.2: Successfully uninstalled GDAL-3.6.2 Successfully installed GDAL-3.6.2 numpy-1.24.2 For others still encountering this issue, make sure wheel and setuptools are installed. Pip will think setuptools>=48 is enough, but it's wrong, as you'll need at least setuptools 67.` Pip will download wheel and/or setuptools as part of the building process, but it will not work (as of March 2024) and instead blame numpy: Building wheels for collected packages: GDAL Running command Building wheel for GDAL (pyproject.toml) WARNING: numpy not available! Array support will not be enabled The only indication that it's a wheel or setuptools problem shows up when building without build isolation (--no-build-isolation): error: invalid command 'bdist_wheel' error: subprocess-exited-with-error ร Preparing metadata (pyproject.toml) did not run successfully. โ exit code: 1 โฐโ> See above for output. Preparing metadata (pyproject.toml) ... done ERROR: Exception: Traceback (most recent call last): ... ModuleNotFoundError: No module named 'setuptools' To fix these errors, make sure wheel and setuptools are installed with pip3 install wheel 'setuptools>=67' | 9 | 16 |
75,324,341 | 2023-2-2 | https://stackoverflow.com/questions/75324341/yolov8-get-predicted-bounding-box | I want to integrate OpenCV with YOLOv8 from ultralytics, so I want to obtain the bounding box coordinates from the model prediction. How do I do this? from ultralytics import YOLO import cv2 model = YOLO('yolov8n.pt') cap = cv2.VideoCapture(0) cap.set(3, 640) cap.set(4, 480) while True: _, frame = cap.read() img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) results = model.predict(img) for r in results: for c in r.boxes.cls: print(model.names[int(c)]) cv2.imshow('YOLO V8 Detection', frame) if cv2.waitKey(1) & 0xFF == ord(' '): break cap.release() cv2.destroyAllWindows() I want to display the YOLO annotated image in OpenCV. I know I can use the stream parameter in model.predict(source='0', show=True). But I want to continuously monitor the predicted class names for my program, at the same time displaying the image output. | This will: Loop through each frame in the video Pass each frame to Yolov8 which will generate bounding boxes Draw the bounding boxes on the frame using the built in ultralytics' annotator: from ultralytics import YOLO import cv2 from ultralytics.utils.plotting import Annotator # ultralytics.yolo.utils.plotting is deprecated model = YOLO('yolov8n.pt') cap = cv2.VideoCapture(0) cap.set(3, 640) cap.set(4, 480) while True: _, img = cap.read() # BGR to RGB conversion is performed under the hood # see: https://github.com/ultralytics/ultralytics/issues/2575 results = model.predict(img) for r in results: annotator = Annotator(img) boxes = r.boxes for box in boxes: b = box.xyxy[0] # get box coordinates in (left, top, right, bottom) format c = box.cls annotator.box_label(b, model.names[int(c)]) img = annotator.result() cv2.imshow('YOLO V8 Detection', img) if cv2.waitKey(1) & 0xFF == ord(' '): break cap.release() cv2.destroyAllWindows() | 14 | 29 |
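If the built-in Annotator is not wanted, the same coordinates can be drawn with plain OpenCV while still printing the class names the question wants to monitor. A sketch of a small helper under that assumption; it expects the img, results and names objects produced as in the answer above.

import cv2

def draw_detections(img, results, names):
    # draw YOLOv8 boxes with plain OpenCV and return the class names seen
    seen = []
    for r in results:
        for box in r.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])   # corner coordinates
            cls_name = names[int(box.cls)]
            seen.append(cls_name)
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(img, cls_name, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return seen

# inside the capture loop from the answer:
#   print(draw_detections(img, model.predict(img), model.names))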
75,312,537 | 2023-2-1 | https://stackoverflow.com/questions/75312537/sqlalchemy-is-biginteger-identity-column-possible-in-orm | I want to create BigInteger Identity column in SQLAlchemy ORM. Documentation does not have any example of either ORM Identity or BigInteger Identity. Is this possible at all? I don't see any parameter for Identity type that would allow specifying inner integer type How to do this? Do I have to create custom type and pass it inside Mapping[] brackets? | This seems to work: import sqlalchemy as sa from sqlalchemy.orm import mapped_column, Mapped, DeclarativeBase class Base(DeclarativeBase): pass class Test(Base): __tablename__ = 't75312537' id: Mapped[int] = mapped_column( sa.BigInteger, sa.Identity(), primary_key=True ) engine = sa.create_engine('postgresql+psycopg2:///test', echo=True) Base.metadata.drop_all(engine, checkfirst=True) Base.metadata.create_all(engine) Output: CREATE TABLE t75312537 ( id BIGINT GENERATED BY DEFAULT AS IDENTITY, PRIMARY KEY (id) ) See the docs at Identity Columns (GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY). | 6 | 11 |
75,367,828 | 2023-2-7 | https://stackoverflow.com/questions/75367828/runtimeerror-reentrant-call-inside-io-bufferedwriter-name-stdout | I'm writing a program which starts one thread to generate "work" and add it to a queue every N seconds. Then, I have a thread pool which processes items in the queue. The program below works perfectly fine, until I comment out/delete line #97 (time.sleep(0.5) in the main function). Once I do that, it generates a RuntimeError which attempting to gracefully stop the program (by sending a SIGINT or SIGTERM to the main process). It even works fine with an extremely small sleep like 0.1s, but has an issue with none at all. I tried researching "reentrancy" but it went a bit over my head unfortunately. Can anyone help me to understand this? Code: import random import signal import threading import time from concurrent.futures import Future, ThreadPoolExecutor from datetime import datetime from queue import Empty, Queue, SimpleQueue from typing import Any class UniqueQueue: """ A thread safe queue which can only ever contain unique items. """ def __init__(self) -> None: self._q = Queue() self._items = [] self._l = threading.Lock() def get(self, block: bool = False, timeout: float | None = None) -> Any: with self._l: try: item = self._q.get(block=block, timeout=timeout) except Empty: raise else: self._items.pop(0) return item def put(self, item: Any, block: bool = False, timeout: float | None = None) -> None: with self._l: if item in self._items: return None self._items.append(item) self._q.put(item, block=block, timeout=timeout) def size(self) -> int: return self._q.qsize() def empty(self) -> bool: return self._q.empty() def stop_app(sig_num, sig_frame) -> None: # global stop_app_event print("Signal received to stop the app") stop_app_event.set() def work_generator(q: UniqueQueue) -> None: last_execution = time.time() is_first_execution = True while not stop_app_event.is_set(): elapsed_seconds = int(time.time() - last_execution) if elapsed_seconds <= 10 and not is_first_execution: time.sleep(0.5) continue last_execution = time.time() is_first_execution = False print("Generating work...") for _ in range(100): q.put({"n": random.randint(0, 500)}) def print_work(w) -> None: print(f"{datetime.now()}: {w}") def main(): # Create a work queue work_queue = UniqueQueue() # Create a thread to generate the work and add to the queue t = threading.Thread(target=work_generator, args=(work_queue,)) t.start() # Create a thread pool, get work from the queue, and submit to the pool for processing pool = ThreadPoolExecutor(max_workers=20) futures: list[Future] = [] while True: print("Processing work...") if stop_app_event.is_set(): print("stop_app_event is set:", stop_app_event.is_set()) for future in futures: future.cancel() break print("Queue Size:", work_queue.size()) try: while not work_queue.empty(): work = work_queue.get() future = pool.submit(print_work, work) futures.append(future) except Empty: pass time.sleep(0.5) print("Stopping the work generator thread...") t.join(timeout=10) print("Work generator stopped") print("Stopping the thread pool...") pool.shutdown(wait=True) print("Thread pool stopped") if __name__ == "__main__": stop_app_event = threading.Event() signal.signal(signalnum=signal.SIGINT, handler=stop_app) signal.signal(signalnum=signal.SIGTERM, handler=stop_app) main() | It's because you called the print() in the signal handler, stop_app(). A signal handler is executed in a background thread in C, but in Python it is executed in the main thread. 
(See the reference.) In your case, while executing a print() call, another print() was called, and the term 'reentrant' fits perfectly here. And the current IO stack prohibits a reentrant call.(See the implementation if you are interested.) You can remedy this by using the os.write() and the sys.stdout like the following. import sys import os ... def stop_app(sig_num, sig_frame): os.write(sys.stdout.fileno(), b"Signal received to stop the app\n") stop_app_event.set() | 3 | 5 |
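An alternative that sidesteps the reentrancy entirely is to keep the handler down to the single Event.set() call and do any printing from the main loop once it notices the flag; a sketch of just those two pieces, reusing the names from the question.

def stop_app(sig_num, sig_frame) -> None:
    # touch nothing that writes to stdout here -- just record the request
    stop_app_event.set()

# ...and in main(), where the flag is already being polled:
#     if stop_app_event.is_set():
#         print("Signal received to stop the app")   # safe: ordinary main-loop context
#         ...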
75,314,250 | 2023-2-1 | https://stackoverflow.com/questions/75314250/python-weakkeydictionary-for-unhashable-types | As raised in cpython issue 88306, python WeakKeyDictionary fails for non hashable types. According to the discussion in the python issue above, this is an unnecessary restriction, using ids of the keys instead of hash would work just fine: In this special case ids are unique identifiers for the keys in the WeakKeyDictionary, because the keys are automatically removed when the original object is deleted. It is important to be aware that using ids instead of hashes is only feasible in this very special case. We can tweak weakref.WeakKeyDictionary (see gist) to achieve the desired behaviour. In summary, this implementation wraps the weakref keys as follows: class _IdKey: def __init__(self, key): self._id = id(key) def __hash__(self): return self._id def __eq__(self, other: typing_extensions.Self): return self._id == other._id def __repr__(self): return f"<_IdKey(_id={self._id})>" class _IdWeakRef(_IdKey): def __init__(self, key, remove: typing.Callable[[typing.Any], None]): super().__init__(key) # hold weak ref to avoid garbage collection of the remove callback self._ref = weakref.ref(key, lambda _: remove(self)) def __call__(self): # used in weakref.WeakKeyDictionary.__copy__ return self._ref() def __repr__(self): return f"<_IdKey(_id={self._id},{self._ref})>" class WeakKeyIdDictionary(weakref.WeakKeyDictionary): """ overrides all methods involving dictionary access key """ ... https://gist.github.com/barmettl/b198f0cf6c22047df77483e8aa28f408 However, this depends on the details of the implementation of weakref.WeakKeyDictionary (using python3.10 here) and is likely to break in future (or even past) versions of python. Of course, alternatively one can just rewrite an entirely new class. It is also possible to implement a custom __hash__ method for all classes, but this won't work when dealing with external code and will give unreliable hashes for use cases beyond weakref.WeakKeyDictionary. We can also monkey patch __hash__, but this is not possible in particular for built in classes and will have unintended effects in other parts of the code. Thus the following question: How should one store non hashable items in a WeakKeyDictionary? | There is a way which does not rely on knowing the internals of WeakKeyDictionary: from weakref import WeakKeyDictionary, WeakValueDictionary class Id: def __init__(self, key): self._id = id(key) def __hash__(self): return self._id def __eq__(self, other): return self._id == other._id class WeakUnhashableKeyDictionary: def __init__(self, *args, **kwargs): # TODO Do something to initialize given args and kwargs. self.keys = WeakValueDictionary() self.values = WeakKeyDictionary() def __getitem__(self, key): return self.values.__getitem__(Id(key)) def __setitem__(self, key, value): _id = Id(key) # NOTE This works because key holds on _id iif key exists, # and _id holds on value iif _id exists. Transitivity. QED. # Because key is only stored as a value, it does not need to be hashable. self.keys.__setitem__(_id, key) self.values.__setitem__(_id, value) def __delitem__(self, key): self.keys.__delitem__(Id(key)) self.values.__delitem__(Id(key)) # etc. other methods should be relatively simple to implement. # TODO Might require some locks or care in the ordering of operations to work threaded. # TODO Add clean error handling. This is just a generalization of my answer to a method caching problem. | 7 | 3 |
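As an illustration of the "etc." in that answer, a few of the remaining mapping methods might look like the following; this sketch assumes the same Id and WeakUnhashableKeyDictionary classes defined above and is not tested against every weakref corner case.

# possible additions to WeakUnhashableKeyDictionary from the answer above
def __contains__(self, key):
    return Id(key) in self.values

def __len__(self):
    return len(self.values)

def __iter__(self):
    # iterate over the original (still-alive) key objects
    return iter(self.keys.values())

def get(self, key, default=None):
    return self.values.get(Id(key), default)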
75,305,169 | 2023-2-1 | https://stackoverflow.com/questions/75305169/decoding-hidden-layer-embeddings-in-t5 | I'm new to NLP (pardon the very noob question!), and am looking for a way to perform vector operations on sentence embeddings (e.g., randomization in embedding-space in a uniform ball around a given sentence) and then decode them. I'm currently attempting to use the following strategy with T5 and Huggingface Transformers: Encode the text with T5Tokenizer. Run a forward pass through the encoder with model.encoder. Use the last hidden state as the embedding. (I've tried .generate as well, but it doesn't allow me to use the decoder separately from the encoder.) Perform any desired operations on the embedding. The problematic step: Pass it through model.decoder and decode with the tokenizer. I'm having trouble with (4). My sanity check: I set (3) to do nothing (no change to the embedding), and I check whether the resulting text is the same as the input. So far, that check always fails. I get the sense that I'm missing something rather important (something to do with the lack of beam search or some other similar generation method?). I'm unsure of whether what I think is an embedding (as in (2)) is even correct. How would I go about encoding a sentence embedding with T5, modifying it in that vector space, and then decoding it into generated text? Also, might another model be a better fit? As a sample, below is my incredibly broken code, based on this: t5_model = transformers.T5ForConditionalGeneration.from_pretrained("t5-large") t5_tok = transformers.T5Tokenizer.from_pretrained("t5-large") text = "Foo bar is typing some words." input_ids = t5_tok(text, return_tensors="pt").input_ids encoder_output_vectors = t5_model.encoder(input_ids, return_dict=True).last_hidden_state # The rest is what I think is problematic: decoder_input_ids = t5_tok("<pad>", return_tensors="pt", add_special_tokens=False).input_ids decoder_output = t5_model.decoder(decoder_input_ids, encoder_hidden_states=encoder_output_vectors) t5_tok.decode(decoder_output.last_hidden_state[0].softmax(0).argmax(1)) | Much easier than anticipated! For anyone else looking for an answer, this page in HuggingFace's docs wound up helping me the most. Below is an example with code based heavily on that page. First, to get the hidden layer embeddings: encoder_input_ids = self.tokenizer(encoder_input_str, return_tensors="pt").input_ids embeds = self.model.get_encoder()( encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ) Note that using repeat_interleave above is only necessary for decoding methods such as beam search. Otherwise, no repetition in the hidden layer embedding is necessary. HuggingFace provides many methods for decoding, just as would be accessible via generate()'s options. These are documented in the article linked above. 
To provide an example of decoding using beam search with num_beams beams: model_kwargs = { "encoder_outputs": encoder_outputs } # Define decoder start token ids input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) input_ids = input_ids * model.config.decoder_start_token_id # Instantiate three configuration objects for scoring beam_scorer = BeamSearchScorer( batch_size=1, num_beams=num_beams, device=model.device, ) logits_processor = LogitsProcessorList( [ MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), ] ) stopping_criteria = StoppingCriteriaList([ MaxLengthCriteria(max_length=max_length), ]) outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, stopping_criteria=stopping_criteria, **model_kwargs) results = tokenizer.batch_decode(outputs, skip_special_tokens=True) Similar approaches can be taken for greedy and contrastive search, with different parameters. Similarly, different stopping criteria can be used. | 4 | 1 |
75,353,488 | 2023-2-5 | https://stackoverflow.com/questions/75353488/modulenotfounderror-no-module-named-numpy-but-numpy-module-already-installed | Error: I have already installed the numpy module (pip show numpy confirms it); this is how it shows when I try to install numpy again. I tried to import the numpy module, which is already installed, but it throws ModuleNotFoundError | If you are using VS Code, you need to select the Python interpreter explicitly: press Ctrl+Shift+P to open the command palette, then search for Python: Select Interpreter, and choose the appropriate Python interpreter | 4 | 1
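A quick way to confirm an interpreter/pip mismatch outside the editor is to check which interpreter actually runs the failing script and install into exactly that one; a short sketch.

import sys

# run this with the same interpreter that raises ModuleNotFoundError
print(sys.executable)   # path of the interpreter executing the script
print(sys.path)         # where it looks for installed packages

# then install numpy into exactly that interpreter, e.g. from a terminal:
#   <path printed above> -m pip install numpy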
75,316,998 | 2023-2-1 | https://stackoverflow.com/questions/75316998/disable-logging-bad-requests-while-unittest-django-app | I have a tests in my Django app. They're working well, but i want to disable showing console logging like .Bad Request: /api/v1/users/register/ One of my tests code def test_user_register_username_error(self): data = { 'username': 'us', 'email': '[email protected]', 'password': 'pass123123', 'password_again': 'pass123123' } url = self.register_url response = client.post(url, data=data) self.assertEqual(response.status_code, 400) self.assertFalse(User.objects.filter(username='us').first()) Console output Found 1 test(s). Creating test database for alias 'default'... System check identified no issues (0 silenced). .Bad Request: /api/v1/users/register/ ---------------------------------------------------------------------- Ran 1 tests in 0.414s OK Everything works nice, but i want to disable Bad Request: /api/v1/users/register/ output to console. I double checked, there's no print or logging functions, that can possibly log this to console. How can i disable messages like Bad Request: /api/v1/users/register/ logging while tests EDIT To make question more understandable here's current console output: Found 22 test(s). Creating test database for alias 'default'... System check identified no issues (0 silenced). ...........Bad Request: /api/v1/users/register/ .Bad Request: /api/v1/users/register/ .Bad Request: /api/v1/users/register/ .Bad Request: /api/v1/users/register/ .Bad Request: /api/v1/users/register/ ..Bad Request: /api/v1/users/register/ .Bad Request: /api/v1/users/activate/ .Bad Request: /api/v1/users/login/ .Bad Request: /api/v1/users/recovery/ Bad Request: /api/v1/users/recovery/ .Bad Request: /api/v1/users/register/ . ---------------------------------------------------------------------- Ran 22 tests in 6.350s OK And what i expect: Found 22 test(s). Creating test database for alias 'default'... System check identified no issues (0 silenced). ...................... ---------------------------------------------------------------------- Ran 22 tests in 6.350s OK | You can log only errors during the tests, and after they complete, return the normal logging level. class SampleTestCase(TestCase): def setUp(self) -> None: """Reduce the log level to avoid messages like 'bad request'""" logger = logging.getLogger("django.request") self.previous_level = logger.getEffectiveLevel() logger.setLevel(logging.ERROR) def tearDown(self) -> None: """Reset the log level back to normal""" logger = logging.getLogger("django.request") logger.setLevel(self.previous_level) | 3 | 2 |
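If several test cases need the same silencing, the setUp/tearDown pair above can be folded into a small reusable context manager; a sketch using only the standard library.

import logging
from contextlib import contextmanager

@contextmanager
def silence_logger(name="django.request", level=logging.ERROR):
    # temporarily raise the log level so 4xx warnings don't clutter test output
    logger = logging.getLogger(name)
    previous = logger.getEffectiveLevel()
    logger.setLevel(level)
    try:
        yield
    finally:
        logger.setLevel(previous)

# usage inside a test:
#   with silence_logger():
#       response = client.post(url, data=data)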
75,323,732 | 2023-2-2 | https://stackoverflow.com/questions/75323732/how-to-download-streamlit-output-data-frame-as-excel-file | I want to know if there is any way to download the output dataframe of streamlit as an Excel file using the streamlit button? | I suggest you edit your question to include a minimal reproducible example so that it's easier for people to understand your question and to help you. Here is the answer if I understand you correctly. Basically it provides 2 ways to download your data df as either csv or xlsx. IMPORTANT: You need to install xlsxwriter package to make this work. import streamlit as st import pandas as pd import io # buffer to use for excel writer buffer = io.BytesIO() data = { "calories": [420, 380, 390], "duration": [50, 40, 45], "random1": [5, 12, 1], "random2": [230, 23, 1] } df = pd.DataFrame(data) @st.cache def convert_to_csv(df): # IMPORTANT: Cache the conversion to prevent computation on every rerun return df.to_csv(index=False).encode('utf-8') csv = convert_to_csv(df) # display the dataframe on streamlit app st.write(df) # download button 1 to download dataframe as csv download1 = st.download_button( label="Download data as CSV", data=csv, file_name='large_df.csv', mime='text/csv' ) # download button 2 to download dataframe as xlsx with pd.ExcelWriter(buffer, engine='xlsxwriter') as writer: # Write each dataframe to a different worksheet. df.to_excel(writer, sheet_name='Sheet1', index=False) download2 = st.download_button( label="Download data as Excel", data=buffer, file_name='large_df.xlsx', mime='application/vnd.ms-excel' ) | 4 | 8 |
75,306,422 | 2023-2-1 | https://stackoverflow.com/questions/75306422/how-to-solve-python-c-api-error-this-is-an-issue-with-the-package-mentioned-abo | I'm trying to implement an algorithm in the form of the C programming language into my system that runs using the python programming language. I'm trying to implement the Python C API with the intention that my algorithm will run in a python environment. As a result it produces an error which I have been trying to fix for several days but still can't find it. Here is the result of the error I got: $ pip install . Processing c:\users\angga danar\documents\skripsi\project\gimli Preparing metadata (setup.py) ... done Installing collected packages: gimlihash DEPRECATION: gimlihash is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559 Running setup.py install for gimlihash ... error error: subprocess-exited-with-error ร Running setup.py install for gimlihash did not run successfully. โ exit code: 1 โฐโ> [17 lines of output] running install C:\python3.10\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_ext building 'gimli' extension creating build creating build\temp.win-amd64-cpython-310 creating build\temp.win-amd64-cpython-310\Release creating build\temp.win-amd64-cpython-310\Release\Hash "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\python3.10\include -IC:\python3.10\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /TcHash/hashing.c /Fobuild\temp.win-amd64-cpython-310\Release\Hash/hashing.obj hashing.c creating C:\Users\Angga Danar\Documents\skripsi\Project\Gimli\build\lib.win-amd64-cpython-310 "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\link.exe" /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\python3.10\libs /LIBPATH:C:\python3.10 /LIBPATH:C:\python3.10\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\ATLMFC\lib\x64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22621.0\\um\x64" /EXPORT:PyInit_gimli 
build\temp.win-amd64-cpython-310\Release\Hash/hashing.obj /OUT:build\lib.win-amd64-cpython-310\gimli.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-cpython-310\Release\Hash\gimli.cp310-win_amd64.lib LINK : error LNK2001: unresolved external symbol PyInit_gimli build\temp.win-amd64-cpython-310\Release\Hash\gimli.cp310-win_amd64.lib : fatal error LNK1120: 1 unresolved externals error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\link.exe' failed with exit code 1120 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure ร Encountered error while trying to install package. โฐโ> gimlihash note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. Here is the code for my hashing.c file: char print_tex(char* string, char* hex_string) { uint8_t output[32]; size_t l = strlen(string); Gimli_hash(string, strlen(string), output, 32); int i; for (i = 0;i < 32;++i) fprintf( stderr, "%02x",output[i]); hex_string[i] = (char)output[i]; return 0; } static PyObject* hash(PyObject *self, PyObject *args) { char *str; char final[32]; if (!PyArg_ParseTuple(args, "s", &str)){ return NULL; } print_tex(str, final); return PyUnicode_FromString(final); } static PyMethodDef hashMethods[] = { {"gimli", hash, METH_VARARGS, "Gimli Algorithm Hash"}, {NULL, NULL, 0, NULL} }; static struct PyModuleDef hashModule = { PyModuleDef_HEAD_INIT, "hashModule", "hash module", -1, hashMethods }; PyMODINIT_FUNC PyInit_hash(void) { return PyModule_Create(&hashModule); } Basically I implemented a hashing algorithm where the input is in the form of a string and the resulting output is also in the form of the result string of the algorithm. I tried to install the file using the following setup.py file using the pip install . command from setuptools import setup, Extension setup(name = "gimlihash", version = "0.1", ext_modules = [Extension("gimli", ["Hash/hashing.c"])]) Hopefully I can create a local library for my project and I can call the library. Please help because I'm still a beginner. | According to [Python.Docs]: Extending Python with C or C++ - The Moduleโs Method Table and Initialization Function (emphasis is mine): This structure, in turn, must be passed to the interpreter in the moduleโs initialization function. The initialization function must be named PyInit_name(), where name is the name of the module, and should be the only non-static item defined in the module file: So, there's a discrepancy between: Module name: gimli (setup.py) Initialization function name: PyInit_hash (hashing.c) In order for things to be OK, the 2 must match, so either: Use hash everywhere (Extension("hash", ...) Use gimli everywhere (PyMODINIT_FUNC PyInit_gimli(..., as @RetiredNinja suggested in the comment) Since we're talking about an extension module (which most likely will be part of a package), there's a general convention (not necessarily followed by everyone) that their names start with an UnderScore (_). Therefore, I suggest to name your module _hash. The required changes would be: setup.py: Module name # ... ext_modules = [Extension("_hash", ["Hash/hashing.c"])]) hashing.c: Function name: PyMODINIT_FUNC PyInit__hash(void) { // @TODO - cfati: Notice the double UnderScore: __ // ... Module name (optional): // ... PyModuleDef_HEAD_INIT, "_hash", // @TODO - cfati: This is what `_hash.__name__` will return "hash module", // ... 
Might also want to check: [SO]: c program SWIG to python gives 'ImportError: dynamic module does not define init function' (@CristiFati's answer) [SO]: How to create python C++ extension with submodule that can be imported (@CristiFati's answer) | 3 | 3 |
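Putting the pieces of the answer above together, a minimal setup.py consistent with the suggested _hash naming could look like the sketch below. It assumes the C file has been updated to define PyMODINIT_FUNC PyInit__hash(void) and to pass "_hash" to PyModuleDef, exactly as described in the answer.

from setuptools import setup, Extension

setup(
    name="gimlihash",
    version="0.1",
    # the extension name must match the PyInit_<name> function in hashing.c
    ext_modules=[Extension("_hash", ["Hash/hashing.c"])],
)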
75,310,294 | 2023-2-1 | https://stackoverflow.com/questions/75310294/github-actions-poetry-installs-black-but-ci-workflow-does-not-find-it | I am setting up a python code quality workflow locally (pre-commit) and on Github Actions (GHA). Environment is managed with poetry. While the local precommit works fine, the remote GHA workflow fails, saying it does not find black, while looking at the workflow logs it seems it was installed just fine. Workflow was largely copied from this great writeup: https://jacobian.org/til/github-actions-poetry/ Where am I making a mistake? Here are the relevant files: codequal.yml name: Python QA on: push: branches: [ "main" ] pull_request: branches: [ "main" ] permissions: contents: read jobs: pylint: runs-on: ubuntu-latest name: Python QA steps: - name: Check out uses: actions/checkout@v3 - name: Set up Python 3.10 uses: actions/setup-python@v4 with: python-version: "3.10" # Cache the installation of Poetry itself, e.g. the next step. This prevents the workflow # from installing Poetry every time, which can be slow. Note the use of the Poetry version # number in the cache key, and the "-0" suffix: this allows you to invalidate the cache # manually if/when you want to upgrade Poetry, or if something goes wrong. This could be # mildly cleaner by using an environment variable, but I don't really care. - name: cache poetry install uses: actions/cache@v3 with: path: ~/.local key: poetry-1.3.2 # Install Poetry. You could do this manually, or there are several actions that do this. # `snok/install-poetry` seems to be minimal yet complete, and really just calls out to # Poetry's default install script, which feels correct. I pin the Poetry version here # because Poetry does occasionally change APIs between versions and I don't want my # actions to break if it does. # # The key configuration value here is `virtualenvs-in-project: true`: this creates the # venv as a `.venv` in your testing directory, which allows the next step to easily # cache it. - uses: snok/[email protected] with: version: 1.3.2 virtualenvs-create: true virtualenvs-in-project: true installer-parallel: true # Cache your dependencies (i.e. all the stuff in your `pyproject.toml`). Note the cache # key: if you're using multiple Python versions, or multiple OSes, you'd need to include # them in the cache key. I'm not, so it can be simple and just depend on the poetry.lock. - name: cache deps id: cache-deps uses: actions/cache@v3 with: path: .venv key: pydeps-${{ hashFiles('**/poetry.lock') }} # Install dependencies. `--no-root` means "install all dependencies but not the project # itself", which is what you want to avoid caching _your_ code. The `if` statement # ensures this only runs on a cache miss. - run: poetry install --no-interaction --no-root if: steps.cache-deps.outputs.cache-hit != 'true' # Now install _your_ project. This isn't necessary for many types of projects -- particularly # things like Django apps don't need this. But it's a good idea since it fully-exercises the # pyproject.toml and makes that if you add things like console-scripts at some point that # they'll be installed and working. 
- run: poetry install --no-interaction ################################################################ # Now finally run your code quality tools ################################################################ - name: Format with black run: | black 'src' - name: Lint with pylint run: | pylint --fail-under=7.0 --recursive=y --enable=W 'src' Relevant section of GitHub Action logging of codequal.yml Run poetry install --no-interaction --no-root Creating virtualenv rssita in /home/runner/work/rssita/rssita/.venv Installing dependencies from lock file Package operations: 49 installs, 1 update, 0 removals โข Updating setuptools (65.6.3 -> 67.0.0) โข Installing attrs (22.2.0) โข Installing certifi (2022.12.7) โข Installing charset-normalizer (3.0.1) โข Installing distlib (0.3.6) โข Installing exceptiongroup (1.1.0) โข Installing filelock (3.9.0) โข Installing idna (3.4) โข Installing iniconfig (2.0.0) โข Installing nvidia-cublas-cu11 (11.10.3.66) โข Installing nvidia-cuda-nvrtc-cu11 (11.7.99) โข Installing nvidia-cuda-runtime-cu11 (11.7.99) โข Installing nvidia-cudnn-cu11 (8.5.0.96) โข Installing packaging (23.0) โข Installing platformdirs (2.6.2) โข Installing pluggy (1.0.0) โข Installing tomli (2.0.1) โข Installing typing-extensions (4.4.0) โข Installing urllib3 (1.26.14) โข Installing cfgv (3.3.1) โข Installing click (8.1.3) โข Installing coverage (7.1.0) โข Installing identify (2.5.17) โข Installing emoji (2.2.0) โข Installing mccabe (0.7.0) โข Installing mypy-extensions (0.4.3) โข Installing nodeenv (1.7.0) โข Installing numpy (1.24.1) โข Installing pathspec (0.11.0) โข Installing protobuf (4.21.12) โข Installing pycodestyle (2.10.0) โข Installing pyflakes (3.0.1) โข Installing pytest (7.2.1) โข Installing pyyaml (6.0) โข Installing requests (2.28.2) โข Installing sgmllib3k (1.0.0) โข Installing six (1.16.0) โข Installing torch (1.13.1) โข Installing tqdm (4.64.1) โข Installing types-urllib3 (1.26.25.4) โข Installing virtualenv (20.17.1) โข Installing black (22.12.0) โข Installing feedparser (6.0.10) โข Installing flake8 (6.0.0) โข Installing isort (5.12.0) โข Installing mypy (0.991) โข Installing pre-commit (3.0.2) โข Installing pytest-cov (4.0.0) โข Installing stanza (1.4.2) โข Installing types-requests (2.28.11.8) 1s Run poetry install --no-interaction Installing dependencies from lock file No dependencies to install or update Installing the current project: rssita (0.1.0) 0s Run black 'src' /home/runner/work/_temp/0fc25aa8-4903-45ae-9d8e-9c11f60dca11.sh: line 1: black: command not found Error: Process completed with exit code 127. Project tree on local: (base) bob@Roberts-Mac-mini rssita % tree . โโโ README.md โโโ __pycache__ โ โโโ test_feeds.cpython-310-pytest-7.2.1.pyc โโโ poetry.lock โโโ pyproject.toml โโโ setup.cfg โโโ src โ โโโ rssita โ โโโ __init__.py โ โโโ __pycache__ โ โ โโโ __init__.cpython-310-pytest-7.2.1.pyc โ โ โโโ __init__.cpython-310.pyc โ โ โโโ feeds.cpython-310-pytest-7.2.1.pyc โ โ โโโ feeds.cpython-310.pyc โ โ โโโ rssita.cpython-310-pytest-7.2.1.pyc โ โ โโโ termcolors.cpython-310-pytest-7.2.1.pyc โ โโโ feeds.py โ โโโ rssita.py โ โโโ termcolors.py โโโ tests โโโ __init__.py โโโ __pycache__ โ โโโ __init__.cpython-310.pyc โ โโโ test_feeds.cpython-310-pytest-7.2.1.pyc โโโ test_feeds.py | Since the dependencies are installed in a virtual environment managed by Poetry, you need to use Poetry to run Black: - name: Format with black run: poetry run black ./src | 3 | 5 |
75,349,025 | 2023-2-4 | https://stackoverflow.com/questions/75349025/vs-code-jupyter-notebook-iprogress-not-found | I'm trying to run a very simple tqdm script: from tqdm.notebook import tqdm for i in tqdm(range(10)): time.sleep(1) but am met with: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html My ipywidgets is v8.0.4 and jupyter v1.0.0... does tqdm not work anymore with VS Code Jupyter Notebook? | I had the same problem. Using this post I resolved it by running: %pip install --upgrade jupyter ipywidgets !jupyter nbextension enable --py widgetsnbextension # note: jupyter is a CLI command, not an IPython line magic, so it is run with ! (the install line was changed from !pip to %pip on the recommendation of a comment) import time from tqdm.notebook import tqdm for i in tqdm(range(10)): time.sleep(0.1) | 6 | 7 |
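An alternative worth mentioning, assuming a reasonably recent tqdm: import the bar from tqdm.auto instead of tqdm.notebook. It uses the widget-based bar when ipywidgets is working and silently falls back to the plain text bar otherwise, so the IProgress error disappears either way:

import time
from tqdm.auto import tqdm  # falls back to the console bar if widgets are unavailable

for i in tqdm(range(10)):
    time.sleep(0.1)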
75,364,863 | 2023-2-6 | https://stackoverflow.com/questions/75364863/folders-included-in-the-tar-gz-not-in-the-wheel-setuptools-build | The automatic discovery of setuptools.build_meta includes top-level folders into the tarball that shouldn't be included. We were trying to build a python package with python3 -m build. Our project has a src-layout and we are using setuptools as the backend for the build. According to the documentation, the automatic discovery should figure out the project structure and build the tarball and the wheel accordingly. For the wheel build it works as expected, i.e. only the files under src/... are included on the package. On the other hand, in the tarball build also top-level folders that are on the same hierarchical level as src are included. For example, the tests and the docs folder, which shouldn't be included. Interestingly, only a few files from the docs were included, not all of them. We tried to explicitly exclude the tests and docs folder in the pyproject.toml, the setup.cfg and the MANIFEST.in, following the respective documentations, but none of them helped. We configured the build backend in the pyproject.toml like so: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" ... We don't have a setup.py and our setup.cfg only contains flake8 specifications. We are using setuptools-67.1.0, we tried python versions 3.8, 3.9 and 3.10. We tried it with an Ubuntu 20.04 and a Debian 11 running conda 3.8.13. The tarball and the wheel can be seen here. | Full answer: https://github.com/pypa/setuptools/issues/3883#issuecomment-1494308302 Summary: These files are included by default in the sdist. The set of files included in a wheel is smaller. Thus, the behavior described above is expected. If you want to exclude e.g. the tests and the docs folder from the sdist, add these two lines to your MANIFEST.in: prune tests prune docs | 7 | 4 |
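To confirm that the prune directives above actually took effect, you can list the contents of the rebuilt sdist from Python; the archive path below is hypothetical and depends on your project name and version:

import tarfile

# e.g. produced by `python -m build`; adjust the filename to your project
with tarfile.open("dist/myproject-0.1.0.tar.gz") as sdist:
    for member in sdist.getnames():
        print(member)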
75,363,018 | 2023-2-6 | https://stackoverflow.com/questions/75363018/how-to-resolve-typeerror-init-missing-1-required-positional-argument-up | I want to create a Telegram bot that checks for a new post on a website (currently every 15s for testing purposes). If so, it should send a message with content from the post into the Telegram channel. For this I already have the following "code skeleton": (The fine work in terms of formatting and additions comes later) import requests import asyncio from bs4 import BeautifulSoup from telegram import InputMediaPhoto from telegram.ext import Updater # Telegram Bot API Token API_TOKEN = 'XXXXXXXXXXXXXXXXX' # URL of the website URL = 'https://chemiehalle.de' # List for storing seen posts seen_posts = [] # Function for fetching posts def get_posts(): # Send request to the website res = requests.get(URL) # Parse HTML content soup = BeautifulSoup(res.content, 'html.parser') # Find all posts on the website posts = soup.find_all('article') # Iterate over each post for post in posts: # Get title of the post title = post.find('h2', class_='entry-title').text # Check if post has already been seen if title not in seen_posts: # Get image URL image_src = post.find('img')['src'] # Get short text of the post text = post.find('div', class_='entry-content clearfix').find('p').text # Send image, title, and text as message bot.bot.send_media_group(chat_id='@chemiehalleBot', media=[InputMediaPhoto(media=image_src, caption=title + '\n\n' + text)]) # Add title of the post to the list of seen posts seen_posts.append(title) # Main loop async def main(): while True: # Call get_posts function every 15s get_posts() print("Check for new posts") await asyncio.sleep(15) # Initialize Telegram Bot updater = Updater(API_TOKEN) bot = updater.bot # Start main loop asyncio.run(main()) So far I have found out that updater = Updater(API_TOKEN, use_context=True) produces errors and so I have removed use_context=True following the instructions from other posts on this site. Since that I am encountering the error TypeError: __init__() missing 1 required positional argument: 'update_queue' in the updater = Updater(API_TOKEN) line. But unfortunately I don't know what to change. According to this, the constructor of Updater needs an additional argument update_queue. But I have no idea which one this should be and where I should get it from. Can you help me please? Thank you very much for the support! | You probably have the wrong telegram version. I am a bit old fashion, but for me version 13 still works great. So simply replace your library version by running: pip install python-telegram-bot==13.13 | 6 | 3 |
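If upgrading is preferable to pinning version 13, the sketch below shows the v20-style initialisation that replaces constructing Updater directly. It is only a rough outline under the assumption of python-telegram-bot >= 20, reusing the placeholder token and channel from the question; note that in v20 all bot methods are coroutines and must be awaited.

import asyncio
from telegram.ext import ApplicationBuilder

API_TOKEN = "XXXXXXXXXXXXXXXXX"  # placeholder from the question

async def main() -> None:
    application = ApplicationBuilder().token(API_TOKEN).build()
    bot = application.bot
    # v20 bot methods are coroutines, so they must be awaited
    await bot.send_message(chat_id="@chemiehalleBot", text="bot is running")

asyncio.run(main())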
75,365,431 | 2023-2-6 | https://stackoverflow.com/questions/75365431/mediapipe-display-body-landmarks-only | I have installed Mediapipe (0.9.0.1) using Python (3.7.0) on windows 11. I have been able to successfully get Mediapipe to generate landmarks (for face and body); for an image, video, and webcam stream. I would like to now get Mediapipe to only draw body specific landmarks (i.e. exclude facial landmarks). I understand that I may use OpenCV (or Czone) to accomplish this goal, however, I am looking to achieve my objective using Mediapipe (i.e. using the draw_landmarks function in the MediaPipe library). The specific bit of code I am trying (but with errors) is the following: #Initialize a list to store the detected landmarks. landmarks = [] # Iterate over the Mediapipe detected landmarks. for landmark in results.pose_landmarks.landmark: # Append the Mediapipe landmark into the list. landmarks.append((int(landmark.x * width), int(landmark.y * height), (landmark.z * width))) #create index list for specific landmarks body_landmark_indices = [11,12,13,14,15,16,23,24,25,26,27,28,29,30,31,32] landmark_list_body = [] #Create a list which only has the required landmarks for index in body_landmark_indices: landmark_list_body.append(landmarks[index - 1]) mp_drawing.draw_landmarks( image=output_image, landmark_list=landmark_list_body.pose_landmarks, connections=mp_pose.POSE_CONNECTIONS, landmark_drawing_spec=landmark_drawing_spec, connection_drawing_spec=connection_drawing_spec)` Executing the above I get the error `'list' object has no attribute 'pose_landmarks' I have replaced landmark_list=landmark_list_body.pose_landmarks, with landmark_list=landmark_list_body but with errors. I am now very tiered and out of ideas. Is there a capeless hero out there? Thanks. 
| You can try the following approach: import cv2 import mediapipe as mp import numpy as np from mediapipe.python.solutions.pose import PoseLandmark from mediapipe.python.solutions.drawing_utils import DrawingSpec mp_drawing = mp.solutions.drawing_utils mp_drawing_styles = mp.solutions.drawing_styles mp_pose = mp.solutions.pose custom_style = mp_drawing_styles.get_default_pose_landmarks_style() custom_connections = list(mp_pose.POSE_CONNECTIONS) # list of landmarks to exclude from the drawing excluded_landmarks = [ PoseLandmark.LEFT_EYE, PoseLandmark.RIGHT_EYE, PoseLandmark.LEFT_EYE_INNER, PoseLandmark.RIGHT_EYE_INNER, PoseLandmark.LEFT_EAR, PoseLandmark.RIGHT_EAR, PoseLandmark.LEFT_EYE_OUTER, PoseLandmark.RIGHT_EYE_OUTER, PoseLandmark.NOSE, PoseLandmark.MOUTH_LEFT, PoseLandmark.MOUTH_RIGHT ] for landmark in excluded_landmarks: # we change the way the excluded landmarks are drawn custom_style[landmark] = DrawingSpec(color=(255,255,0), thickness=None) # we remove all connections which contain these landmarks custom_connections = [connection_tuple for connection_tuple in custom_connections if landmark.value not in connection_tuple] IMAGE_FILES = ["test.jpg"] BG_COLOR = (192, 192, 192) with mp_pose.Pose( static_image_mode=True, model_complexity=2, enable_segmentation=True, min_detection_confidence=0.5) as pose: for idx, file in enumerate(IMAGE_FILES): image = cv2.imread(file) image_height, image_width, _ = image.shape results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) annotated_image = image.copy() mp_drawing.draw_landmarks( annotated_image, results.pose_landmarks, connections = custom_connections, # passing the modified connections list landmark_drawing_spec=custom_style) # and drawing style cv2.imshow('landmarks', annotated_image) cv2.waitKey(0) It modifies the DrawingSpec and POSE_CONNECTIONS to "hide" a subset of landmarks. However, due to the way the draw_landmarks() function is implemented in Mediapipe, it is also required to add a condition in drawing_utils.py (located in site-packages/mediapipe/python/solutions): if drawing_spec.thickness == None: continue Add it before Line 188 (see MediaPipe GitHub repo, # White circle border). The result should look like this: ... drawing_spec = landmark_drawing_spec[idx] if isinstance( landmark_drawing_spec, Mapping) else landmark_drawing_spec if drawing_spec.thickness == None: continue # White circle border circle_border_radius = max(drawing_spec.circle_radius + 1, int(drawing_spec.circle_radius * 1.2)) ... This change is required in order to completely eliminate the white border that is drawn around landmarks regardless of their drawing specification. Hope it helps. | 3 | 7 |
75,330,032 | 2023-2-2 | https://stackoverflow.com/questions/75330032/unable-to-start-jupyter-notebook-kernel-in-vs-code | I am trying to run a Jupyter Notebook in VS Code. However, I'm getting the following error message whenever I try to execute a cell: Failed to start the Kernel. Jupyter server crashed. Unable to connect. Error code from Jupyter: 1 usage: jupyter.py [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir] [--paths] [--json] [--debug] [subcommand] Jupyter: Interactive Computing positional arguments: subcommand the subcommand to launch options: -h, --help show this help message and exit --version show the versions of core jupyter packages and exit --config-dir show Jupyter config dir --data-dir show Jupyter data dir --runtime-dir show Jupyter runtime dir --paths show all Jupyter paths. Add --json for machine-readable format. --json output paths as machine-readable json --debug output debug information about paths Available subcommands: Jupyter command `jupyter-notebook` not found. View Jupyter log for further details. The Jupyter log referred to by the diagnostic message just contains the same text as the above diagnostic message repeated multiple times. I believe this post refers to the same issue. Unfortunately, the accepted answer does not work for me because I do not have Python: Select Interpreter to Start Jupyter server in my Command Palette. The file was working normally this morning. I also tried uninstalling and reinstalling the extensions. How can I get the Kernel to start? | This sounds like it might be a bug that found in the 2023.1 version of the Jupyter extension that affects MacOS users: Starting a Jupyter server kernel fails (when zmq does not work) #12714 (duplicates: Failed to start the Kernel, version v2023.1.2000312134 #12726, Jupyter server crashed. Unable to connect. #12746) The solution recommended there was to switch to the pre-release version while waiting for the fix to come to the regular-release-channel released the fix. Others in the threads there also found that rolling back to older extension versions also worked for them (you can rollback clicking into the down menu button next to the extension's uninstall button). If none of those solutions work, try this: pip install --upgrade --force-reinstall --no-cache-dir jupyter And then restart VS Code. If you already have it open, you can do this in the VS Code command palette with the Developer: Reload Window command. Credit: The above command is based on this answer by @Spandana-r to the question After installing with pip, "jupyter: command not found". | 6 | 12 |
75,312,569 | 2023-2-1 | https://stackoverflow.com/questions/75312569/error-backend-subprocess-exited-when-trying-to-invoke-get-requires-for-build-sdi | I was creating a Python library, I needed to compile the pyproject.toml file. I ran this command: pip-compile pyproject.toml --resolver=backtracking I got: Backend subprocess exited when trying to invoke get_requires_for_build_wheel Failed to parse .\pyproject.toml My pyproject.toml: [build-system] requires = ["setuptools>=61.0"] build-backend = "setuptools.build_meta" [project] name = "filedump" version = "1.0.0" description = "Save multiple values to a .svf file (not encrypted)" readme = "README.md" authors = [{ name = "------", email = "-------------------" }] license = { file = "LICENSE" } classifiers = [ "License :: OSI Approved :: MIT License", "Programming Language :: Python", "Programming Language :: Python :: 3", ] keywords = ["file", "encoding"] requires-python = ">=3.9" [project.optional-dependencies] dev = ["pip-tools"] [project.urls] Homepage = "https://github.com/-----------------------" [project.scripts] file = "filedump.FileOperation()" My project root: filedump\ |-- src\ |-- |___ __init__.py |-- LICENSE |-- MANIFEST.in |-- pyproject.toml |-- README.md | Try adding [tool.setuptools] packages = ["src"] to your pyproject.toml. See the following page for details, major thanks to Keith R. Petersen on this one :) | 7 | 4 |
75,326,238 | 2023-2-2 | https://stackoverflow.com/questions/75326238/how-to-omit-dependency-when-exporting-requirements-txt-using-poetry | I have a Python3 Poetry project with a pyproject.toml file specifying the dependencies: [tool.poetry.dependencies] python = "^3.10" nltk = "^3.7" numpy = "^1.23.4" scipy = "^1.9.3" scikit-learn = "^1.1.3" joblib = "^1.2.0" [tool.poetry.dev-dependencies] pytest = "^5.2" I export those dependencies to a requirements.txt file using the command poetry export --without-hashes -f requirements.txt --output requirements.txt resulting in the following file requirements.txt: click==8.1.3 ; python_version >= "3.10" and python_version < "4.0" colorama==0.4.6 ; python_version >= "3.10" and python_version < "4.0" and platform_system == "Windows" joblib==1.2.0 ; python_version >= "3.10" and python_version < "4.0" nltk==3.8.1 ; python_version >= "3.10" and python_version < "4.0" numpy==1.24.1 ; python_version >= "3.10" and python_version < "4.0" regex==2022.10.31 ; python_version >= "3.10" and python_version < "4.0" scikit-learn==1.2.1 ; python_version >= "3.10" and python_version < "4.0" scipy==1.9.3 ; python_version >= "3.10" and python_version < "4.0" threadpoolctl==3.1.0 ; python_version >= "3.10" and python_version < "4.0" tqdm==4.64.1 ; python_version >= "3.10" and python_version < "4.0" that I use to install the dependencies when building a Docker image. My question: How can I omit the the colorama dependency in the above list of requirements when calling poetry export --without-hashes -f requirements.txt --output requirements.txt? Possible solution: I could filter out the line with colorama by producing the requirements.txt file using poetry export --without-hashes -f requirements.txt | grep -v colorama > requirements.txt. But that seems hacky and may break things in case the Colorama requirement is expressed across multiple lines in that file. Is there a better and less hacky way? Background: When installing this list of requirements while building the Docker image using pip install -r requirements.txt I get the message Ignoring colorama: markers 'python_version >= "3.10" and python_version < "4.0" and platform_system == "Windows"' don't match your environment A coworker thinks that message looks ugly and would like it not to be visible (but personally I don't care). A call to poetry show --tree reveals that the Colorama dependency is required by pytest and is used to make terminal colors work on Windows. Omitting the library as a requirement when installing on Linux is not likely a problem in this context. | The colorama dependency is required by pytest. I suppose your Docker image is for production use, so it doesn't have to contain pytest which is clearly a development dependency. You can use poetry export --without-hashes --without dev -f requirements.txt -o requirements.txt to prevent your dev packages to be exported to the requirements.txt file so they won't be installed by pip install. You can find some poetry export options Here. | 11 | 6 |
75,380,003 | 2023-2-7 | https://stackoverflow.com/questions/75380003/clean-setup-of-pip-tools-doesnt-compile-very-basic-pyproject-toml | Using a completely new pip-tools setup always results in a Backend subprocess exited error. pyproject.toml: [project] dependencies = [ 'openpyxl >= 3.0.9, < 4', ] Running pip-tools in an empty directory that only contains the above pyproject.toml: % python -m venv .venv % source .venv/bin/activate % python -m pip install pip-tools % pip-compile -v -o requirements.txt --resolver=backtracking pyproject.toml Creating venv isolated environment... Installing packages in isolated environment... (setuptools >= 40.8.0, wheel) Getting build dependencies for wheel... Backend subprocess exited when trying to invoke get_requires_for_build_wheel Failed to parse .../pyproject.toml No requirements.txt gets created. Ideas on what might be missing here are appreciated. | Your pyproject.toml most likely is invalid, try pip install -e . and you'll see a detailed explanation. For now, pip-tools can't show a nice error message, but work is in progress. | 5 | 12 |
75,376,359 | 2023-2-7 | https://stackoverflow.com/questions/75376359/how-to-use-my-own-python-packages-modules-with-pyscript | Question I came across pyscript hoping to use it to document python code with mkdocs. I have looked into importing my own module. Individual files work. How do I import my own module using pyscript instead? Requirements for running the example: python package numpy ($ pip install numpy) python package matplotlib ($ pip install matplotlib) local webserver for live preview on localhost (eq. $ npm install -g live-server) Below is an example that works with the 'just import a python file' approach, see line from example import myplot. When I change the line to from package.example import myplot it is not working and I get the following error in firefox/chromium: JsException(PythonError: Traceback (most recent call last): File "/lib/python3.10/site-packages/_pyodide/_base.py", line 429, in eval_code .run(globals, locals) File "/lib/python3.10/site-packages/_pyodide/_base.py", line 300, in run coroutine = eval(self.code, globals, locals) File "", line 1, in ModuleNotFoundError: No module named 'package' ) Any help is appreciated. I found this discussion on github, but I am lost when trying to follow. Example Folder structure โโโ index.html โโโ pycode/ โโโ example.py โโโ package/ โโโ example.py โโโ __init__.py index.html <!doctype html> <html> <head> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css"/> <py-env> - numpy - matplotlib - paths: - ./pycode/example.py - ./pycode/package </py-env> </head> <body> <div id="lineplot"></div> <py-script> from example import myplot import matplotlib.pyplot as plt theta,r = myplot(4) fig, ax = plt.subplots( subplot_kw = {'projection': 'polar'} ) ax.plot(theta, r) ax.set_rticks([0.5, 1, 1.5, 2]) ax.grid(True) pyscript.write('lineplot',fig) </py-script> </body> </html> example.py import numpy as np def myplot(val:int): r = np.arange(0, 2, 0.01) theta = val * np.pi * r return theta,r __init__.py __all__ = [ 'example.py' ] Intended result in the webbrowser | The folder structure you want to achieve wasn't possible in PyScipt Alpha - <py-env>'s paths functionality was fairly limited. Thankfully, it is possible in PyScript 2022.12.1, the latest version at time of writing. First, you'll want want to point your <script> and <link> tags at the newest release: <script defer src="https://pyscript.net/releases/2022.12.1/pyscript.js"></script> <link rel="stylesheet" href="https://pyscript.net/releases/2022.12.1/pyscript.css"/> <py-env> has been renamed <py-config> and its syntax is quite different. To load packages, the keyword is packages: <py-config> packages = ['numpy', 'matplotlib'] </py-config> Finally, to load your files into the in-browser file system in a structured way, you can make use of a fetch configuration, another new feature of <py-config>. 
Check out the use cases section of the py-config docs, in in particular example #6, which is similar to your situation: <py-config> packages = ['numpy', 'matplotlib'] [[fetch]] files = ['__init__.py', 'example.py'] from = 'package' to_folder = 'package' </py-config> So finally, your full working example looks like: <!doctype html> <html> <head> <script defer src="https://pyscript.net/releases/2022.12.1/pyscript.js"></script> <link rel="stylesheet" href="https://pyscript.net/releases/2022.12.1/pyscript.css"/> </head> <body> <py-config> packages = ['numpy', 'matplotlib'] [[fetch]] files = ['__init__.py', 'example.py'] from = 'package' to_folder = 'package' </py-config> <div id="lineplot"></div> <py-script> from package.example import myplot import matplotlib.pyplot as plt theta,r = myplot(4) fig, ax = plt.subplots( subplot_kw = {'projection': 'polar'} ) ax.plot(theta, r) ax.set_rticks([0.5, 1, 1.5, 2]) ax.grid(True) pyscript.write('lineplot',fig) </py-script> </body> </html> For more context on fetch configurations and how they work, see this writeup from the 2022.12.1 release blog post. | 7 | 6 |
75,333,570 | 2023-2-3 | https://stackoverflow.com/questions/75333570/generate-unique-id-code-in-faker-data-set | im trying to create a data set with a unique id code but i get a 'ValueError not enough values to unpack (expected 6, got 5)' on line 8, basically, I am trying to: generate a unique 6 digit id code append dataset value with 'ID' ex: ID123456 UPDATE: fixed the error and ID append, now how do i make sure the generated id is unique in the dataset? from faker import Faker import random import pandas as pd Faker.seed(0) random.seed(0) fake = Faker("en_US") fixed_digits = 6 concatid = 'ID' idcode,name, city, country, job, age = [[] for k in range(0,6)] for row in range(0,100): idcode.append(concatid + str(random.randrange(111111, 999999, fixed_digits))) name.append(fake.name()) city.append(fake.city()) country.append(fake.country()) job.append(fake.job()) age.append(random.randint(20,100)) d = {"ID Code":idcode, "Name":name, "Age":age, "City":city, "Country":country, "Job":job} df = pd.DataFrame(d) df.head() planning to generate 1k rows | To answer the question: now how do I make sure the generated id is unique in the dataset? You have to use: unique.random_int So, your code will be like this as you see below: from faker import Faker import random import pandas as pd Faker.seed(0) random.seed(0) fake = Faker("en_US") fixed_digits = 6 concatid = 'ID' idcode,name, city, country, job, age = [[] for k in range(0,6)] for row in range(0,100): idcode.append(concatid + str(fake.unique.random_int(min=111111, max=999999))) name.append(fake.name()) city.append(fake.city()) country.append(fake.country()) job.append(fake.job()) age.append(random.randint(20,100)) d = {"ID Code":idcode, "Name":name, "Age":age, "City":city, "Country":country, "Job":job} df = pd.DataFrame(d) df.head() | 3 | 5 |
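If you would rather not rely on Faker's unique proxy, another option is to draw all the codes without replacement up front. This sketch assumes the number of rows stays well below the size of the 111111-999999 range:

import random

rows = 1000
# sampling without replacement guarantees uniqueness
idcode = [f"ID{n}" for n in random.sample(range(111111, 1000000), rows)]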
75,313,204 | 2023-2-1 | https://stackoverflow.com/questions/75313204/correct-way-to-append-to-string-in-python | I've read this reply which explains that CPython has an optimization to do an in-place append without copy when appending to a string using a = a + b or a += b. I've also read this PEP8 recommendation: Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). For example, do not rely on CPythonโs efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isnโt present at all in implementations that donโt use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations. So if I understand correctly, instead of doing a += b + c in order to trigger this CPython optimization which does the replacement in-place, the proper way is to call a = ''.join([a, b, c]) ? But then why is this form with join significantly slower than the form in += in this example (In loop1 I'm using a = a + b + c on purpose in order to not trigger the CPython optimization)? import os import time if __name__ == "__main__": start_time = time.time() print("begin: %s " % (start_time)) s = "" for i in range(100000): s = s + str(i) + '3' time1 = time.time() print("end loop1: %s " % (time1 - start_time)) s2 = "" for i in range(100000): s2 += str(i) + '3' time2 = time.time() print("end loop2: %s " % (time2 - time1)) s3 = "" for i in range(100000): s3 = ''.join([s3, str(i), '3']) time3 = time.time() print("end loop3: %s " % (time3 - time2)) The results show join is significantly slower in this case: ~/testdir$ python --version Python 3.10.6 ~/testdir$ python concatenate.py begin: 1675268345.0761461 end loop1: 3.9019 end loop2: 0.0260 end loop3: 0.9289 Is my version with join wrong? | In "loop3" you bypass a lot of the gain of join() by continuously calling it in an unneeded way. It would be better to build up the full list of characters then join() once. Check out: import time iterations = 100_000 ##---------------- s = "" start_time = time.time() for i in range(iterations): s = s + "." + '3' end_time = time.time() print("end loop1: %s " % (end_time - start_time)) ##---------------- ##---------------- s = "" start_time = time.time() for i in range(iterations): s += "." + '3' end_time = time.time() print("end loop2: %s " % (end_time - start_time)) ##---------------- ##---------------- s = "" start_time = time.time() for i in range(iterations): s = ''.join([s, ".", '3']) end_time = time.time() print("end loop3: %s " % (end_time - start_time)) ##---------------- ##---------------- s = [] start_time = time.time() for i in range(iterations): s.append(".") s.append("3") s = "".join(s) end_time = time.time() print("end loop4: %s " % (end_time - start_time)) ##---------------- ##---------------- s = [] start_time = time.time() for i in range(iterations): s.extend((".", "3")) s = "".join(s) end_time = time.time() print("end loop5: %s " % (end_time - start_time)) ##---------------- Just to be clear, you can run this with: iterations = 10_000_000 If you like, just be sure to remove "loop1" and "loop3" as they get dramatically slower after about 300k. 
When I run this with 10 million iterations I see: end loop2: 16.977502584457397 end loop4: 1.6301295757293701 end loop5: 1.0435805320739746 So, clearly there is a way to use join() that is fast :-) ADDENDUM: @รtienne has suggested that making the string to append longer reverses the findings and that optimization of loop2 does not happen unless it is in a function. I do not see the same. import time iterations = 10_000_000 string_to_append = "345678912" def loop2(iterations): s = "" for i in range(iterations): s += "." + string_to_append return s def loop4(iterations): s = [] for i in range(iterations): s.append(".") s.append(string_to_append) return "".join(s) def loop5(iterations): s = [] for i in range(iterations): s.extend((".", string_to_append)) return "".join(s) ##---------------- start_time = time.time() s = loop2(iterations) end_time = time.time() print("end loop2: %s " % (end_time - start_time)) ##---------------- ##---------------- start_time = time.time() s = loop4(iterations) end_time = time.time() print("end loop4: %s " % (end_time - start_time)) ##---------------- ##---------------- start_time = time.time() s = loop5(iterations) end_time = time.time() print("end loop5: %s " % (end_time - start_time)) ##---------------- On python 3.10 and 3.11 the results are similar. I get results like the following: end loop2: 336.98531889915466 end loop4: 1.0211727619171143 end loop5: 1.1640543937683105 that continue to suggest to me that join() is overwhelmingly faster. | 3 | 7 |
75,324,072 | 2023-2-2 | https://stackoverflow.com/questions/75324072/pandas-json-orient-autodetection | I'm trying to find out if Pandas.read_json performs some level of autodetection. For example, I have the following data: data_records = [ { "device": "rtr1", "dc": "London", "vendor": "Cisco", }, { "device": "rtr2", "dc": "London", "vendor": "Cisco", }, { "device": "rtr3", "dc": "London", "vendor": "Cisco", }, ] data_index = { "rtr1": {"dc": "London", "vendor": "Cisco"}, "rtr2": {"dc": "London", "vendor": "Cisco"}, "rtr3": {"dc": "London", "vendor": "Cisco"}, } If I do the following: import pandas as pd import json pd.read_json(json.dumps(data_records)) --- device dc vendor 0 rtr1 London Cisco 1 rtr2 London Cisco 2 rtr3 London Cisco though I get the output that I desired, the data is record based. Being that the default orient is columns, I would have not thought this would have worked. Therefore is there some level of autodetection going on? With index based inputs the behaviour seems more inline. As this shows appears to have parsed the data based on a column orient by default. pd.read_json(json.dumps(data_index)) rtr1 rtr2 rtr3 dc London London London vendor Cisco Cisco Cisco pd.read_json(json.dumps(data_index), orient="index") dc vendor rtr1 London Cisco rtr2 London Cisco rtr3 London Cisco | TL;DR When using pd.read_json() with orient=None, the representation of the data is automatically determined through pd.DataFrame(). Explanation The pandas documentation is a bit misleading here. When not specifying orient, the parser for 'columns' is used, which is self.obj = pd.DataFrame(json.loads(json)). So pd.read_json(json.dumps(data_records)) is equivalent to pd.DataFrame(json.loads(json.dumps(data_records))) which again is equivalent to pd.DataFrame(data_records) I.e., you pass a list of dicts to the DataFrame constructor, which then performs the automatic determination of the data representation. Note that this does not mean that orient is auto-detected. Instead, simple heuristics (see below) on how the data should be loaded into a DataFrame are applied. Loading JSON-like data through pd.DataFrame() For the 3 most relevant cases of JSON-structured data, the DataFrame construction through pd.DataFrame() is: Dict of lists In[1]: data = {"a": [1, 2, 3], "b": [9, 8, 7]} ...: pd.DataFrame(data) Out[1]: a b 0 1 9 1 2 8 2 3 7 Dict of dicts In[2]: data = {"a": {"x": 1, "y": 2, "z": 3}, "b": {"x": 9, "y": 8, "z": 7}} ...: pd.DataFrame(data) Out[2]: a b x 1 9 y 2 8 z 3 7 List of dicts In[3]: data = [{'a': 1, 'b': 9}, {'a': 2, 'b': 8}, {'a': 3, 'b': 7}] ...: pd.DataFrame(data) Out[3]: a b 0 1 9 1 2 8 2 3 7 | 6 | 2 |
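If you prefer not to rely on the constructor heuristics described above, you can pass orient explicitly. The sketch below reuses data_records and data_index from the question and wraps the JSON string in StringIO, since newer pandas versions warn when a raw string is passed to read_json:

import json
from io import StringIO
import pandas as pd

records_df = pd.read_json(StringIO(json.dumps(data_records)), orient="records")
index_df = pd.read_json(StringIO(json.dumps(data_index)), orient="index")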
75,334,838 | 2023-2-3 | https://stackoverflow.com/questions/75334838/is-there-any-downside-in-using-multiple-n-jobs-1-statements | In the context of model selection for a classification problem, while running cross validation, is it ok to specify n_jobs=-1 both in model specification and cross validation function in order to take full advantage of the power of the machine? For example, comparing sklearn RandomForestClassifier and xgboost XGBClassifier: RF_model = RandomForestClassifier( ..., n_jobs=-1) XGB_model = XGBClassifier( ..., n_jobs=-1) RF_cv = cross_validate(RF_model, ..., n_jobs=-1) XGB_cv = cross_validate(XGB_model, ..., n_jobs=-1) is it ok to specify the parameters in both? Or should I specify it only once? And in which of them, model or cross validation statement? I used for the example models from two different libraries (sklearn and xgboost) because maybe there is a difference in how it works, also cross_validate function is from sklearn. | Specifying n_jobs twice does have an effect, though whether it has a positive or negative effect is complicated. When you specify n_jobs twice, you get two levels of parallelism. Imagine you have N cores. The cross-validation function creates N copies of your model. Each model creates N threads to run fitting and predictions. You then have N*N threads. This can blow up pretty spectacularly. I once worked on a program which needed to apply ARIMA to tens of thousands of time-series. Since each ARIMA is independent, I parallelized it and ran one ARIMA on each core of a 12-core CPU. I ran this, and it performed very poorly. I opened up htop, and was surprised to find 144 threads running. It turned out that this library, pmdarima, internally parallelized ARIMA operations. (It doesn't parallelize them well, but it does try.) I got a massive speedup just by turning off this inner layer of parallelism. Having two levels of parallelism is not necessarily better than having one. In your specific case, I benchmarked a random forest with cross validation, and I benchmarked four configurations: No parallelism Parallelize across different CV folds, but no model parallelism Parallelize within the model, but not on CV folds Do both (Error bars represent 95% confidence interval. All tests used RandomForestClassifier. Test was performed using cv=5, 100K samples, and 100 trees. Test system had 4 cores with SMT disabled. Scores are mean duration of 7 runs.) This graph shows that no parallelism is the slowest, CV parallelism is third fastest, and model parallelism and combined parallelism are tied for first place. However, this is closely tied to what classifiers I'm using - a benchmark for pmdarima, for example, would find that cross-val parallelism is faster than model parallelism or combined parallelism. If you don't know which one is faster, then test it. | 3 | 4 |
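A minimal sketch of how the four configurations in the answer above can be compared on your own machine, assuming scikit-learn and a synthetic dataset (the sample size is kept small so it runs quickly; absolute timings will differ from the figures quoted in the answer):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

configs = {
    "no parallelism": (1, 1),
    "CV parallelism": (1, -1),
    "model parallelism": (-1, 1),
    "both": (-1, -1),
}

for label, (model_jobs, cv_jobs) in configs.items():
    clf = RandomForestClassifier(n_estimators=100, n_jobs=model_jobs, random_state=0)
    result = cross_validate(clf, X, y, cv=5, n_jobs=cv_jobs)
    print(f"{label}: {result['fit_time'].sum():.1f}s total fit time")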
75,349,276 | 2023-2-5 | https://stackoverflow.com/questions/75349276/python-pandas-vectorized-way-of-cleaning-buy-and-sell-signals | I'm trying to simulate financial trades using a vectorized approach in python. Part of this includes removing duplicate signals. To elaborate, I've developed a buy_signal column and a sell_signal column. These columns contain booleans in the form of 1s and 0s. Looking at the signals from the top-down, I don't want to trigger a second buy_signal before a sell_signal triggers, AKA if a 'position' is open. Same thing with sell signals, I do not want duplicate sell signals if a 'position' is closed. If a sell_signal and buy_signal are 1, set them both to 0. What is the best way to remove these irrelevant signals? Here's an example: import pandas as pd df = pd.DataFrame( { "buy_signal": [1, 1, 1, 1, 0, 0, 1, 1, 1, 0], "sell_signal": [0, 0, 1, 1, 1, 0, 0, 0, 1, 0], } ) print(df) buy_signal sell_signal 0 1 0 1 1 0 2 1 1 3 1 1 4 0 1 5 0 0 6 1 0 7 1 0 8 1 1 9 0 0 Here's the result I want: buy_signal sell_signal 0 1 0 1 0 0 2 0 1 3 0 0 4 0 0 5 0 0 6 1 0 7 0 0 8 0 1 9 0 0 | As I said earlier (in a comment about a response since then deleted), one must consider the interaction between buy and sell signals, and cannot simply operate on each independently. The key idea is to consider a quantity q (or "position") that is the amount currently held, and that the OP says would like bounded to [0, 1]. That quantity is cumsum(buy - sell) after cleaning. Therefore, the problem reduces to "cumulative sum with limits", which unfortunately cannot be done in a vectorized way with numpy or pandas, but that we can code quite efficiently using numba. The code below processes 1 million rows in 37 ms. import numpy as np from numba import njit @njit def cumsum_clip(a, xmin=-np.inf, xmax=np.inf): res = np.empty_like(a) c = 0 for i in range(len(a)): c = min(max(c + a[i], xmin), xmax) res[i] = c return res def clean_buy_sell(df, xmin=0, xmax=1): # model the quantity held: cumulative sum of buy-sell clipped in # [xmin, xmax] # note that, when buy and sell are equal, there is no change q = cumsum_clip( (df['buy_signal'] - df['sell_signal']).values, xmin=xmin, xmax=xmax) # derive actual transactions: positive for buy, negative for sell, 0 for hold trans = np.diff(np.r_[0, q]) df = df.assign( buy_signal=np.clip(trans, 0, None), sell_signal=np.clip(-trans, 0, None), ) return df Now: df = pd.DataFrame( { "buy_signal": [1, 1, 1, 1, 0, 0, 1, 1, 1, 0], "sell_signal": [0, 0, 1, 1, 1, 0, 0, 0, 1, 0], } ) new_df = clean_buy_sell(df) >>> new_df buy_signal sell_signal 0 1 0 1 0 0 2 0 0 3 0 0 4 0 1 5 0 0 6 1 0 7 0 0 8 0 0 9 0 0 Speed and correctness n = 1_000_000 np.random.seed(0) # repeatable example df = pd.DataFrame(np.random.choice([0, 1], (n, 2)), columns=['buy_signal', 'sell_signal']) %timeit clean_buy_sell(df) 37.3 ms ยฑ 104 ยตs per loop (mean ยฑ std. dev. 
of 7 runs, 10 loops each) Correctness tests: z = clean_buy_sell(df) q = (z['buy_signal'] - z['sell_signal']).cumsum() # q is quantity held through time; must be in {0, 1} assert q.isin({0, 1}).all() # we should not have introduced any new buy signal: # check that any buy == 1 in z was also 1 in df assert not (z['buy_signal'] & ~df['buy_signal']).any() # same for sell signal: assert not (z['sell_signal'] & ~df['sell_signal']).any() # finally, buy and sell should never be 1 on the same row: assert not (z['buy_signal'] & z['sell_signal']).any() Bonus: other limits, fractional buys and sells For fun, we can consider the more general case where buy and sell values are fractional (or any float value), and the limits are not [0, 1]. There is nothing to change to the current version of clean_buy_sell, which is general enough to handle these conditions. np.random.seed(0) df = pd.DataFrame( np.random.uniform(0, 1, (100, 2)), columns=['buy_signal', 'sell_signal'], ) # set limits to -1, 2: we can sell short (borrow) up to 1 unit # and own up to 2 units. z = clean_buy_sell(df, -1, 2) (z['buy_signal'] - z['sell_signal']).cumsum().plot() | 3 | 3 |
75,333,571 | 2023-2-3 | https://stackoverflow.com/questions/75333571/get-aerospike-hyperlogloghll-intersection-count-of-multiple-hll-unions | I have 2 or more HLLs that are unioned, I want to get the intersection count of that unions. I have used the example from here hll-python example Following is my code ops = [hll_ops.hll_get_union(HLL_BIN, records)] _, _, result1 = client.operate(getKey(value), ops) ops = [hll_ops.hll_get_union(HLL_BIN, records2)] _, _, result2 = client.operate(getKey(value2), ops) ops = [hll_ops.hll_get_intersect_count(HLL_BIN, [result1[HLL_BIN]] + [result2[HLL_BIN]])] _, _, resultVal = client.operate(getKey(value), ops) print(f'intersectAll={resultVal}') _, _, resultVal2 = client.operate(getKey(value2), ops) print(f'intersectAll={resultVal2}') I get 2 different results when I use different keys for the intersection using hll_get_intersect_count, i.e resultVal and resultVal2 are not same. This does not happen in the case of union count using function hll_get_union_count. Ideally the value of intersection should be the same. Can any one tell me why is this happening and what is the right way to do it? | Was able to figure out the solutions (with the help of Aerospike support, the same question was posted here and discussed more elaboratively aerospike forum). Posting my code for others having the same issue. Intersection of HLLs is not supported in Aerospike. However, If I am to get intersection of multiple HLLs I will have to save one union into aerospike and then get intersection count of one vs the rest of the union. The key we provide in client.operate function for hll_get_intersect_count is used to get the intersection with the union. Following is the code I came up with ops = [hll_ops.hll_get_union(HLL_BIN, records)] _, _, result1 = client.operate(getKey(value), ops) # init HLL bucket ops = [hll_ops.hll_init(HLL_BIN, NUM_INDEX_BITS, NUM_MH_BITS)] _, _, _ = client.operate(getKey('dummy'), ops) # get set union and insert to inited hll bucket and save it for 5 mins(300 sec) # use hll_set_union to save the HLL into aeropike temporarily ops = [hll_ops.hll_set_union(HLL_BIN, [records2])] _, _, _ = client.operate(getKey('dummy'), ops, meta={"ttl": 300}) ops = [hll_ops.hll_get_intersect_count(HLL_BIN, [result1[HLL_BIN]])] _, _, resultVal = client.operate(getKey('dummy'), ops) print(f'intersectAll={resultVal}') For more reference, you can look here for hll_set_union reference. More elaborate discussion can be found here | 3 | 2 |
75,372,032 | 2023-2-7 | https://stackoverflow.com/questions/75372032/in-python-what-is-the-difference-between-async-for-x-in-async-iterator-and-f | The subject contains the whole idea. I came accross code sample where it shows something like: async for item in getItems(): await item.process() And others where the code is: for item in await getItems(): await item.process() Is there a notable difference in these two approaches? | TL;DR While both of them could theoretically work with the same object (without causing an error), they most likely do not. In general those two notations are not equivalent at all, but invoke entirely different protocols and are applied to very distinct use cases. Different protocols Iterable To understand the difference, you first need to understand the concept of an iterable. Abstractly speaking, an object is iterable, if it implements the __iter__ method or (less common for iteration) a sequence-like __getitem__ method. Practically speaking, an object is iterable, if you can use it in a for-loop, so for _ in iterable. A for-loop implicitly invokes the __iter__ method of the iterable and expects it to return an iterator, which implements the __next__ method. That method is called at the start of each iteration in the for-loop and its return value is what is assigned to the loop variable. Asynchronous iterable The async-world introduced a variation of that, namely the asynchronous iterable. An object is asynchronously iterable, if it implements the __aiter__ method. Again, practically speaking, an object is asynchronously iterable, if it can be used in an async for-loop, so async for _ in async_iterable. An async for-loop calls the __aiter__ method of the asynchronous iterable and expects it to return an asynchronous iterator, which implements the __anext__ coroutine method. That method is awaited at the start of each iteration of the async for-loop. Awaitable Typically speaking, an asynchronous iterable is not awaitable, i.e. it is not a coroutine and it does not implement an __await__ method and vice versa. Although they are not necessarily mutually exclusive. You could design an object that is both awaitable by itself and also (asynchronously) iterable, though that seems like a very strange design. (Asynchronous) Iterator Just to be very clear in the terminology used, the iterator is a subtype of the iterable. Meaning an iterator also implements the iterable protocol by providing an __iter__ method, but it also provides the __next__ method. Analogously, the asynchronous iterator is a subtype of the asynchronous iterable because it implements the __aiter__ method, but also provides the __anext__ coroutine method. You do not need the object to be an iterator for it to be used in a for-loop, you need it to return an iterator. The fact that you can use an (asynchronous) iterator in a (async) for-loop is because it is also an (asynchronous) iterable. It is just rare for something to be an iterable but not an iterator. In most cases the object will be both (i.e. the latter). Inferences from your example async for _ in get_items() That code implies that whatever is returned by the get_items function is an asynchronous iterable. Note that get_items is just a normal non-async function, but the object it returns implements the asynchronous iterable protocol. That means we could write the following instead: async_iterable = get_items() async for item in async_iterable: ... 
for _ in await get_items() Whereas this snippet implies that get_items is in fact a coroutine function (i.e. a callable returning an awaitable) and the return value of that coroutine is a normal iterable. Note that we know for certain that the object returned by the get_items coroutine is a normal iterable because otherwise the regular for-loop would not work with it. The equivalent code would be: iterable = await get_items() for item in iterable: ... Implications Another implication of those code snippets is that in the first one the function (returning the asynchronous iterator) is non-asynchronous, i.e. calling it will not yield control to the event loop, whereas each iteration of the async for-loop is asynchronous (and thus will allow context switches). Conversely, in the second one the function returning the normal iterator is an asynchronous call, but all of the iterations (the calls to __next__) are non-asynchronous. Key difference The practical takeaway should be that those two snippets you showed are never equivalent. The main reason is that get_items either is or is not a coroutine function. If it is not, you cannot do await get_items(). But whether or not you can do async for or for depends on whatever is returned by get_items. Possible combinations For the sake of completion, it should be noted that combinations of the aforementioned protocols are entirely feasible, although not all too common. Consider the following example: from __future__ import annotations class Foo: x = 0 def __iter__(self) -> Foo: return self def __next__(self) -> int: if self.x >= 2: raise StopIteration self.x += 1 return self.x def __aiter__(self) -> Foo: return self async def __anext__(self) -> int: if self.x >= 3: raise StopAsyncIteration self.x += 1 return self.x * 10 async def main() -> None: for i in Foo(): print(i) async for i in Foo(): print(i) if __name__ == "__main__": from asyncio import run run(main()) In this example, Foo implements four distinct protocols: iterable (def __iter__) iterator (iterable + def __next__) asynchronous iterable (def __aiter__) asynchronous iterator (asynchronous iterable + async def __anext__) Running the main coroutine gives the following output: 1 2 10 20 30 This shows that objects can absolutely be all those things at the same time. Since Foo is both a synchronous and an asynchronous iterable, we could write two functions -- one coroutine, one regular -- that each returns an instance of Foo and then replicate your example a bit: from collections.abc import AsyncIterable, Iterable def get_items_sync() -> AsyncIterable[int]: return Foo() async def get_items_async() -> Iterable[int]: return Foo() async def main() -> None: async for i in get_items_sync(): print(i) for i in await get_items_async(): print(i) async for i in await get_items_async(): print(i) if __name__ == "__main__": from asyncio import run run(main()) Output: 10 20 30 1 2 10 20 30 This illustrates very clearly that the only thing determining which of our Foo methods is called (__next__ or __anext__) is whether we use a for-loop or an async for-loop. The for-loop will always call the __next__ method at least once and continue calling it for each iteration, until it intercepts a StopIteration exception. The async for-loop will always await the __anext__ coroutine at least once and continue calling and awaiting it for each subsequent iteration, until it intercepts a StopAsyncIteration exception. | 7 | 6 |
75,354,384 | 2023-2-5 | https://stackoverflow.com/questions/75354384/why-is-b-pop0-over-200-times-slower-than-del-b0-for-bytearray | Letting them compete three times (a million pops/dels each time): from timeit import timeit for _ in range(3): t1 = timeit('b.pop(0)', 'b = bytearray(1000000)') t2 = timeit('del b[0]', 'b = bytearray(1000000)') print(t1 / t2) Time ratios (Try it online!): 274.6037053753368 219.38099365582403 252.08691226683823 Why is pop that much slower at doing the same thing? | When you run b.pop(0), Python moves all the elements back by one as you might expect. This takes O(n) time. When you del b[0], Python simply increases the start pointer of the object by 1. In both cases, PyByteArray_Resize is called to adjust the size. When the new size is smaller than half the allocated size, the allocated memory will be shrunk. In the del b[0] case, this is the only point where the data will be copied. As a result, this case will take O(1) amortized time. Relevant code: bytearray_pop_impl function: Always calls memmove(buf + index, buf + index + 1, n - index); The bytearray_setslice_linear function is called for del b[0] with lo == 0, hi == 1, bytes_len == 0. It reaches this code (with growth == -1): if (lo == 0) { /* Shrink the buffer by advancing its logical start */ self->ob_start -= growth; /* 0 lo hi old_size | |<----avail----->|<-----tail------>| | |<-bytes_len->|<-----tail------>| 0 new_lo new_hi new_size */ } else { /* 0 lo hi old_size | |<----avail----->|<-----tomove------>| | |<-bytes_len->|<-----tomove------>| 0 lo new_hi new_size */ memmove(buf + lo + bytes_len, buf + hi, Py_SIZE(self) - hi); } | 57 | 82 |
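A quick way to observe the amortized shrinking described in that answer is to watch sys.getsizeof, which in CPython includes the bytearray's allocated buffer. This is only an illustrative sketch and the exact numbers depend on the interpreter version:

import sys

b = bytearray(1000)
allocated = sys.getsizeof(b)
for i in range(1000):
    del b[0]                      # normally just advances the logical start pointer
    new_allocated = sys.getsizeof(b)
    if new_allocated != allocated:
        # The buffer only shrinks at these occasional resize points.
        print(f"after {i + 1} deletions: {allocated} -> {new_allocated} bytes allocated")
        allocated = new_allocated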
75,379,958 | 2023-2-7 | https://stackoverflow.com/questions/75379958/error-int-object-is-not-subscriptable-when-using-lambda-in-reduce-function | When running the following code, I get the following Error: Traceback (most recent call last): File "/Users/crosseyedbum/Documents/Visual Studio Code/Fundamentals of Python_5.py", line 127, in <module> sumo = reduce(lambda a, b : a[1] + b[1], exp) File "/Users/crosseyedbum/Documents/Visual Studio Code/Fundamentals of Python_5.py", line 127, in <lambda> sumo = reduce(lambda a, b : a[1] + b[1], exp) TypeError: 'int' object is not subscriptable I am attempting to sum the integers in each tuple and set the value to sumo. from functools import reduce exp = [ ('Dinner', 80), ('Car repair', 120), ('Netflix', 30), ('Rocket Fuel', 32) ] #stored as tuples sumo = reduce(lambda a, b : a[1] + b[1], exp) print(sumo) OUTPUT: 200 However, the following code runs to sum the integers to 200 just fine. Can someone explain this to me. Or am I only able to use this method with a max of two items in the list** If someone can explain to a beginner why this is behaving the way it is, I would greatly appreciate it. I added tuples to my list and expected the code to continue to add the index one, but I only seem to be able to run the code with two items in my list max. That doesn't make any sense to me. The error makes no sense to me, and even less sense why the number of tuples in the list would cause it not work. | The issue is that the a parameter to your lambda function is not what you think it is. From the functools.reduce docs: The left argument, x, is the accumulated value and the right argument, y, is the update value from the iterable In your case, x is a and y is b due to how you named your parameters. So, a is not a tuple, it is the sum. If you don't provide a default start value, the initial accumulated value will be the first value in the iterator (('Dinner', 80) in your case). Because you cannot add integers and tuples, you get an error. Instead, pass a third parameter that will be the default (use an integer, like 0). Try this code: from functools import reduce exp = [ ('Dinner', 80), ('Car repair', 120), ('Netflix', 30), ('Rocket Fuel', 32) ] sumo = reduce(lambda a, b: a + b[1], exp, 0) print(sumo) # => 262 | 3 | 2 |
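One way to see what reduce actually passes to the function is to swap the lambda for a small step function that prints its arguments (an illustrative sketch, not part of the accepted answer). With the initial value 0, the first argument is always the running integer total, never a tuple:

from functools import reduce

exp = [('Dinner', 80), ('Car repair', 120), ('Netflix', 30), ('Rocket Fuel', 32)]

def step(total, item):
    # total is the accumulated int, item is the current ('name', amount) tuple
    print(f"total={total!r}, item={item!r}")
    return total + item[1]

print(reduce(step, exp, 0))  # => 262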
75,379,184 | 2023-2-7 | https://stackoverflow.com/questions/75379184/plotly-range-slider-without-showing-line-in-small | I want to use Plotly to generate a line chart with a range slider. the range slider shows the displayed line again. this code is just an example. in my case, I have a lot of subplots and everything is shown twice. is it possible to show nothing or only the date in the range slider? import plotly.express as px import yfinance as yf yf.pdr_override() df = yf.download(tickers='aapl' ,period='1d',interval='1m') fig = px.line(df, x = df.index, y = 'Close', title='Apple Stock Price') fig.update_layout( xaxis=dict( rangeselector=dict( buttons=list([ dict(count=1, label="1m", step="month", stepmode="backward"), dict(step="all") ]) ), rangeslider=dict( visible=True ), type="date" ) ) fig.show() | Looking through the rangeslider documentation, there aren't any arguments that can directly impact the rangeslider line because I believe that line will have all of the same properties as the line in the figure (including color, visibility, thickness). The best workaround I can come up with is to change the background color of the rangeslider to be the same color as the line (and this only works if all traces in the figure are the same color). I would probably also make the rangeslider have a relatively smaller thickness so it's less obtrusive, but that's a matter of personal taste. Here is an example: fig.update_layout( xaxis=dict( rangeselector=dict( buttons=list([ dict(count=1, label="1m", step="month", stepmode="backward"), dict(step="all") ]) ), rangeslider=dict( visible=True, bgcolor="#636EFA", thickness=0.05 ), type="date" ) ) fig.show() | 4 | 3 |
75,378,987 | 2023-2-7 | https://stackoverflow.com/questions/75378987/unable-to-install-cx-oracle-with-pip | I am currently using the latest version of Python and attempting to install cx_Oracle through the command pip install cx_Oracle. On my first attempt, I encountered an error that stated: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools". To address this, I installed both Microsoft C++ Build Tools from this link and Visual C++ 17 from this link. However, upon my second try, I encountered another error: Temp\pip-install-ocqmu9mg\cx-oracle_a585471535c345cea9e48a083457ccd7\odpi\src\dpiImpl.h(34): fatal error C1083: Cannot open include file: 'stdlib.h': No such file or directory I researched this issue by consulting this, this and this post, but none of them provided a solution for my problem. | Nice effort. Unfortunately, pre-compiled binaries for Python 3.11 are not currently available. To utilize this version, you can either wait for their release or configure your system properly to build them from source. Alternatively, you could consider downgrading to a previous version of Python, although this is not recommended. | 4 | 4 |
75,378,406 | 2023-2-7 | https://stackoverflow.com/questions/75378406/how-to-repeat-the-inner-row-of-a-3d-matrixnxmxv-to-get-a-new-matrixnx2mxv-us | Here is an example of what I want: original matrix: [[[5 1 4]] [[0 9 5]] [[8 0 9]]] matrix I want: [[[5 1 4] [5 1 4]] [[0 9 5] [0 9 5]] [[8 0 9] [8 0 9]]] I have tried np.repeat(A, 2, axis=0), which apparently does not work since it gives the output: [[[5 1 4]] [[5 1 4]] [[0 9 5]] [[0 9 5]] [[8 0 9]]] | You want to repeat on axis=1: np.repeat(A, 2, axis=1) Output: array([[[5, 1, 4], [5, 1, 4]], [[0, 9, 5], [0, 9, 5]], [[8, 0, 9], [8, 0, 9]]]) NB. remember to check the shape of your arrays: A.shape -> (3, 1, 3). You want to make it (3, 2, 3), not (6, 1, 3). | 3 | 5 |
75,370,436 | 2023-2-7 | https://stackoverflow.com/questions/75370436/keras-categoryencoding-layer-with-time-sequences | For a LSTM, I create time sequences by means of tensorflow.keras.utils.timeseries_dataset_from_array(). For some of the features, I would like to do one-hot encoding by means of Keras preprocessing layers. I have the following code: n_timesteps = 20 n_categorical_features = 1 from tensorflow.keras.layers import Input, IntegerLookup, CategoryEncoding cat_inp = keras.layers.Input(shape=(n_timesteps, n_categorical_features), name = "categorical_input") index = IntegerLookup() index.adapt(X["br"]) encoder = CategoryEncoding(num_tokens=index.vocabulary_size(), output_mode = "one_hot")(cat_inp) However, the last line gives me the error ValueError: Exception encountered when calling layer "category_encoding_22" (type CategoryEncoding). When output_mode is not 'int', maximum supported output rank is 2. Received output_mode one_hot and input shape (None, 20, 1), which would result in output rank 3. The problem seems to be that CategoryEncoding does not support the shape of my input tensor (None, n_timesteps, n_categorical_features). How can I one-hot encode the input tensor produced by timeseries_dataset_from_array()? | Please try to use TimeDistributed layer: encoder = tf.keras.layers.TimeDistributed(CategoryEncoding(num_tokens=index.vocabulary_size(), output_mode = "one_hot"))(cat_inp) It will apply CategoryEncoding to each item in your time sequence. Please see https://keras.io/api/layers/recurrent_layers/time_distributed/ for more information. | 3 | 4 |
75,373,164 | 2023-2-7 | https://stackoverflow.com/questions/75373164/is-there-something-like-python-setup-py-version-for-pyproject-toml | With a simple setup.py file: from setuptools import setup setup( name='foo', version='1.2.3', ) I can do $> python setup.py --version 1.2.3 without installing the package. Is there similar functionality for the equivalent pyproject.toml file: [project] name = "foo" version = "1.2.3" | With Python 3.11+, something like this should work: python3.11 -c "import tomllib; print(tomllib.load(open('pyproject.toml', 'rb'))['project']['version'])" This parses the TOML file directly, and assumes that version is not dynamic. In some cases, version is declared dynamic in pyproject.toml, so it can not be parsed directly from this file and a solution (the only one?) is to actually build the project, or at least its metadata. For this purpose, we can use the build.util.project_wheel_metadata() function from the build project, for example with a small script like this: #!/usr/bin/env python import argparse import pathlib import build.util def _main(): args_parser = argparse.ArgumentParser() args_parser.add_argument('path') args = args_parser.parse_args() path_name = getattr(args, 'path') path = pathlib.Path(path_name) # metadata = build.util.project_wheel_metadata(path) version = metadata.get('Version') print(version) if __name__ == '__main__': _main() Or as a one-liner: python -c "import build.util; print(build.util.project_wheel_metadata('.').get('Version'))" | 6 | 3 |
75,370,815 | 2023-2-7 | https://stackoverflow.com/questions/75370815/pylint-specifying-exception-names-in-the-overgeneral-exceptions-option-without | pylint: Command line or configuration file:1: UserWarning: Specifying exception names in the overgeneral-exceptions option without module name is deprecated and support for it will be removed in pylint 3.0. Use fully qualified name (maybe 'builtins.BaseException' ?) instead. I am getting this pylint warning even though I don't have any except clause in my file. | This is just a warning for future pylint releases; you can ignore it. If you want to address it now, open your .pylintrc configuration file (usually located at ~/.pylintrc) and replace: overgeneral-exceptions=BaseException, Exception with: overgeneral-exceptions=builtins.BaseException, builtins.Exception | 26 | 50
75,365,839 | 2023-2-6 | https://stackoverflow.com/questions/75365839/mypy-with-dictionarys-get-function | I have some code that looks like this: di: dict[str, float] max_val = max(di, key=di.get) When running mypy on it, it complains: error: Argument "key" to "max" has incompatible type overloaded function; expected "Callable[[str], Union[SupportsDunderLT[Any], SupportsDunderGT[Any]]]" My guess is that this is because the return type of di.get is Union[float, None] in general, but in this specific instance, it should always return a float since the function is only being called on keys from di. The three workarounds I've discovered are to define a custom lambda function, ignore the di.get call, or cast it as follows. get_lambda = lambda key: di[key] max_val = max(di, key=get_lambda) # --------- max_val = max( di, key=di.get, # type: ignore [arg-type] ) # --------- from typing import cast, Callable max_val = max(di, key=cast(Callable[[str], float], di.get)) Is there a more proper/standard way of letting mypy know that the get method will not be returning None? | The dunder (double underscore) member function __getitem__ of a dict object is what you want here: max(di, key=di.__getitem__) raises no complaints from mypy. The reason your lambda works is that it precisely emulates the intent of __getitem__, which is to implement evaluation of self[key], as explained in the documentation. | 4 | 3
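If you also want the maximal value itself, and not only the key, another pattern that type-checks cleanly is to take the max over the items. This is an alternative sketch rather than part of the accepted answer, and the variable names are illustrative:

best_key, best_value = max(di.items(), key=lambda kv: kv[1])
# best_key: str, best_value: float -- mypy infers both without complaint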
75,342,160 | 2023-2-4 | https://stackoverflow.com/questions/75342160/partition-of-a-list-of-integers-into-k-sublists-with-equal-sum | Similar questions are 1 and 2 but the answers didn't help. Assume we have a list of integers. We want to find K disjoint lists such that they completely cover the given list and all have the same sum. For example, if A = [4, 3, 5, 6, 4, 3, 1] and K = 2 then the answer should be: [[3, 4, 6], [1, 3, 4, 5]] or [[4, 4, 5], [1, 3, 3, 6]] I have written a code that only works when K = 2 and it works fine with small lists as input but with very larger lists, because of the code's high complexity, OS terminates the task. My code is: def subarrays_equal_sum(l): from itertools import combinations if len(l) < 2 or sum(l) % 2 != 0: return [] l = sorted(l) list_sum = sum(l) all_combinations = [] for i in range(1, len(l)): all_combinations += (list(combinations(l, i))) combinations_list = [i for i in all_combinations if sum(i) == list_sum / 2] if not combinations_list: return [] final_result = [] for i in range(len(combinations_list)): for j in range(i + 1, len(combinations_list)): first = combinations_list[i] second = combinations_list[j] concat = sorted(first + second) if concat == l and [list(first), list(second)] not in final_result: final_result.append([list(first), list(second)]) return final_result An answer for any value of K is available here. But if we pass the arguments A = [4, 3, 5, 6, 4, 3, 1] and K = 2, their code only returns [[5, 4, 3, 1],[4, 3, 6]] whereas my code returns all possible lists i.e., [[[3, 4, 6], [1, 3, 4, 5]], [[4, 4, 5], [1, 3, 3, 6]]] My questions are: How to improve the complexity and cost of my code? How to make my code work with any value of k? | Here is a solution that deals with duplicates. First of all the problem of finding any solution is, as noted, NP-complete. So there are cases where this will churn for a long time to realize that there are none. I've applied reasonable heuristics to limit how often this happens. The heuristics can be improved. But be warned that there will be cases that simply nothing works. The first step in this solution is to take a list of numbers and turn it into [(value1, repeat), (value2, repeat), ...]. One of those heuristics requires that the values be sorted first by descending absolute value, and then by decreasing value. That is because I try to use the first elements first, and we expect a bunch of small leftover numbers to still give us sums. Next, I'm going to try to split it into a possible maximal subset with the right target sum, and all remaining elements. Then I'm going to split the remaining into a possible maximal remaining subset that is no bigger than the first, and the ones that result after that. Do this recursively and we find a solution. Which we yield back up the chain. But, and here is where it gets tricky, I'm not going to do the split by looking at combinations. Instead I'm going to use dynamic programming like we would for the usual subset-sum pseudo-polynomial algorithm, except I'll use it to construct a data structure from which we can do the split. This data structure will contain the following fields: value is the value of this element. repeat is how many times we used it in the subset sum. skip is how many copies we had and didn't use it in the subset sum. tail is the tail of these solutions. prev are some other solutions where we did something else. 
Here is a class that constructs this data structure, with a method to split elements into a subset and elements still available for further splitting. from collections import namedtuple class RecursiveSums ( namedtuple('BaseRecursiveSums', ['value', 'repeat', 'skip', 'tail', 'prev'])): def sum_and_rest(self): if self.tail is None: if self.skip: yield ([self.value] * self.repeat, [(self.value, self.skip)]) else: yield ([self.value] * self.repeat, []) else: for partial_sum, rest in self.tail.sum_and_rest(): for _ in range(self.repeat): partial_sum.append(self.value) if self.skip: rest.append((self.value, self.skip)) yield (partial_sum, rest) if self.prev is not None: yield from self.prev.sum_and_rest() You might have to look at this a few times to see how it works. Next, remember I said that I used a heuristic to try to use large elements before small ones. Here is some code that we'll need to do that comparison. class AbsComparator(int): def __lt__ (self, other): if abs(int(self)) < abs(int(other)): return True elif abs(other) < abs(self): return False else: return int(self) < int(other) def abs_lt (x, y): return AbsComparator(x) < AbsComparator(y) We'll need both forms. The function for a direct comparison, the class for Python's key argument to the sort function. See Using a comparator function to sort for more on the latter. And now the heart of the method. This finds all ways to split into a subset (that is no larger than bound in the comparison metric we are using) and the remaining elements to split more. The idea is the same as the dynamic programming approach to subset sum https://www.geeksforgeeks.org/count-of-subsets-with-sum-equal-to-x/ except with two major differences. The first is that instead of counting the answers we are building up our data structure. The second is that our keys are (partial_sum, bound_index) so we know whether our bound is currently satisfied, and if it is not we know what element to compare next to test it. def lexically_maximal_subset_rest (elements, target, bound=None): """ elements = [(value, count), (value, count), ...] with largest absolute values first. target = target sum bound = a lexical bound on the maximal subset. """ # First let's deal with all of the trivial cases. if 0 == len(elements): if 0 == target: yield [] elif bound is None or 0 == len(bound): # Set the bound to something that trivially works. yield from lexically_maximal_subset_rest(elements, target, [abs(elements[0][0]) + 1]) elif abs_lt(bound[0], elements[0][0]): pass # we automatically use more than the bound. else: # The trivial checks are done. bound_satisfied = (bound[0] != elements[0][0]) # recurse_by_sum will have a key of (partial_sum, bound_index). # If the bound_index is None, the bound is satisfied. # Otherwise it will be the last used index in the bound. recurse_by_sum = {} # Populate it with all of the ways to use the first element at least once. (init_value, init_count) = elements[0] for i in range(init_count): if not bound_satisfied: if len(bound) <= i or abs_lt(bound[i], init_value): # Bound exceeded. break elif abs_lt(init_value, bound[i]): bound_satisfied = True if bound_satisfied: key = (init_value * (i+1), None) else: key = (init_value * (i+1), i) recurse_by_sum[key] = RecursiveSums( init_value, i+1, init_count-i-1, None, recurse_by_sum.get(key)) # And now we do the dynamic programming thing. 
for j in range(1, len(elements)): value, repeat = elements[j] next_recurse_by_sum = {} for key, tail in recurse_by_sum.items(): partial_sum, bound_index = key # Record not using this value at all. next_recurse_by_sum[key] = RecursiveSums( value, 0, repeat, tail, next_recurse_by_sum.get(key)) # Now record the rest. for i in range(1, repeat+1): if bound_index is not None: # Bounds check. if len(bound) <= bound_index + i: break # bound exceeded. elif abs_lt(bound[bound_index + i], value): break # bound exceeded. elif abs_lt(value, bound[bound_index + i]): bound_index = None # bound satisfied! if bound_index is None: next_key = (partial_sum + value * i, None) else: next_key = (partial_sum + value * i, bound_index + i) next_recurse_by_sum[next_key] = RecursiveSums( value, i, repeat - i, tail, next_recurse_by_sum.get(next_key)) recurse_by_sum = next_recurse_by_sum # We now have all of the answers in recurse_by_sum, but in several keys. # Find all that may have answers. bound_index = len(bound) while 0 < bound_index: bound_index -= 1 if (target, bound_index) in recurse_by_sum: yield from recurse_by_sum[(target, bound_index)].sum_and_rest() if (target, None) in recurse_by_sum: yield from recurse_by_sum[(target, None)].sum_and_rest() And now we implement the rest. def elements_split (elements, target, k, bound=None): if 0 == len(elements): if k == 0: yield [] elif k == 0: pass # still have elements left over. else: for (subset, rest) in lexically_maximal_subset_rest(elements, target, bound): for answer in elements_split(rest, target, k-1, subset): answer.append(subset) yield answer def subset_split (raw_elements, k): total = sum(raw_elements) if 0 == (total % k): target = total // k counts = {} for e in sorted(raw_elements, key=AbsComparator, reverse=True): counts[e] = 1 + counts.get(e, 0) elements = list(counts.items()) yield from elements_split(elements, target, k) And here is a demonstration using your list, doubled. Which we split into 4 equal parts. On my laptop it finds all 10 solutions in 0.084 seconds. n = 0 for s in subset_split([4, 3, 5, 6, 4, 3, 1]*2, 4): n += 1 print(n, s) So...no performance guarantees. But this should usually be able to find splits pretty quickly per split. Of course there are also usually an exponential number of splits. For example if you take 16 copies of your list and try to split into 32 groups, it takes about 8 minutes on my laptop to find all 224082 solutions. If I didn't try to deal with negatives, this could be sped up quite a bit. (Use cheaper comparisons, drop all partial sums that have exceeded target to avoid calculating most of the dynamic programming table.) And here is the sped up version. For the case with only nonnegative numbers it is about twice as fast. If there are negative numbers it will produce wrong results. from collections import namedtuple class RecursiveSums ( namedtuple('BaseRecursiveSums', ['value', 'repeat', 'skip', 'tail', 'prev'])): def sum_and_rest(self): if self.tail is None: if self.skip: yield ([self.value] * self.repeat, [(self.value, self.skip)]) else: yield ([self.value] * self.repeat, []) else: for partial_sum, rest in self.tail.sum_and_rest(): for _ in range(self.repeat): partial_sum.append(self.value) if self.skip: rest.append((self.value, self.skip)) yield (partial_sum, rest) if self.prev is not None: yield from self.prev.sum_and_rest() def lexically_maximal_subset_rest (elements, target, bound=None): """ elements = [(value, count), (value, count), ...] with largest absolute values first. 
target = target sum bound = a lexical bound on the maximal subset. """ # First let's deal with all of the trivial cases. if 0 == len(elements): if 0 == target: yield [] elif bound is None or 0 == len(bound): # Set the bound to something that trivially works. yield from lexically_maximal_subset_rest(elements, target, [abs(elements[0][0]) + 1]) elif bound[0] < elements[0][0]: pass # we automatically use more than the bound. else: # The trivial checks are done. bound_satisfied = (bound[0] != elements[0][0]) # recurse_by_sum will have a key of (partial_sum, bound_index). # If the bound_index is None, the bound is satisfied. # Otherwise it will be the last used index in the bound. recurse_by_sum = {} # Populate it with all of the ways to use the first element at least once. (init_value, init_count) = elements[0] for i in range(init_count): if not bound_satisfied: if len(bound) <= i or bound[i] < init_value: # Bound exceeded. break elif init_value < bound[i]: bound_satisfied = True if bound_satisfied: key = (init_value * (i+1), None) else: key = (init_value * (i+1), i) recurse_by_sum[key] = RecursiveSums( init_value, i+1, init_count-i-1, None, recurse_by_sum.get(key)) # And now we do the dynamic programming thing. for j in range(1, len(elements)): value, repeat = elements[j] next_recurse_by_sum = {} for key, tail in recurse_by_sum.items(): partial_sum, bound_index = key # Record not using this value at all. next_recurse_by_sum[key] = RecursiveSums( value, 0, repeat, tail, next_recurse_by_sum.get(key)) # Now record the rest. for i in range(1, repeat+1): if target < partial_sum + value * i: break # these are too big. if bound_index is not None: # Bounds check. if len(bound) <= bound_index + i: break # bound exceeded. elif bound[bound_index + i] < value: break # bound exceeded. elif value < bound[bound_index + i]: bound_index = None # bound satisfied! if bound_index is None: next_key = (partial_sum + value * i, None) else: next_key = (partial_sum + value * i, bound_index + i) next_recurse_by_sum[next_key] = RecursiveSums( value, i, repeat - i, tail, next_recurse_by_sum.get(next_key)) recurse_by_sum = next_recurse_by_sum # We now have all of the answers in recurse_by_sum, but in several keys. # Find all that may have answers. bound_index = len(bound) while 0 < bound_index: bound_index -= 1 if (target, bound_index) in recurse_by_sum: yield from recurse_by_sum[(target, bound_index)].sum_and_rest() if (target, None) in recurse_by_sum: yield from recurse_by_sum[(target, None)].sum_and_rest() def elements_split (elements, target, k, bound=None): if 0 == len(elements): if k == 0: yield [] elif k == 0: pass # still have elements left over. else: for (subset, rest) in lexically_maximal_subset_rest(elements, target, bound): for answer in elements_split(rest, target, k-1, subset): answer.append(subset) yield answer def subset_split (raw_elements, k): total = sum(raw_elements) if 0 == (total % k): target = total // k counts = {} for e in sorted(raw_elements, key=AbsComparator, reverse=True): counts[e] = 1 + counts.get(e, 0) elements = list(counts.items()) yield from elements_split(elements, target, k) n = 0 for s in subset_split([4, 3, 5, 6, 4, 3, 1]*16, 32): n += 1 print(n, s) | 6 | 4 |
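Since subset_split is a generator, a single partition can also be pulled out lazily instead of enumerating every solution. A small usage sketch, assuming the functions above are defined (the particular split returned is implementation-dependent):

first = next(subset_split([4, 3, 5, 6, 4, 3, 1], 2), None)
print(first)  # one equal-sum split such as [[1, 3, 4, 5], [3, 4, 6]], or None if no split exists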
75,363,733 | 2023-2-6 | https://stackoverflow.com/questions/75363733/sqlalchemy-2-0-orm-model-datetime-insertion | I am having some real trouble getting a created_date column working with SQLAlchemy 2.0 with the ORM model. The best answer so far I've found is at this comment: https://stackoverflow.com/a/33532154 however I haven't been able to make that function work. In my (simplified) models.py file I have: import datetime from sqlalchemy import Integer, String, DateTime from sqlalchemy.sql import func from sqlalchemy.orm import DeclarativeBase from sqlalchemy.orm import Mapped from sqlalchemy.orm import mapped_column class Base(DeclarativeBase): pass class MyTable(Base): __tablename__ = "my_table" id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column(String, nullable=False) created_date: Mapped[datetime.datetime] = mapped_column(DateTime(timezone=True), server_default=func.now()) So far, so good, thinks I. In the simplified engine.py I have: from sqlalchemy import create_engine from sqlalchemy import select from sqlalchemy.orm import Session import models def add_entry(engine, name_str): this_row = models.MyTable() this_row.name = name_str with Session(engine) as session: session.add(this_row) session.commit() If I'm understanding correctly, the default value for the created_date to be a SQL function, and SQLAlchemy maps now() to SQLite3's datetime(). With the engine set to echo=True, I get the following result when it tries to run this insert command (Please note, this is data from the non-simplified form but it's still pretty simple, had 3 strings instead of the one I described) 2023-02-06 09:47:07,080 INFO sqlalchemy.engine.Engine BEGIN (implicit) 2023-02-06 09:47:07,080 INFO sqlalchemy.engine.Engine INSERT INTO coaches (d_name, bb2_name, bb3_name) VALUES (?, ?, ?) RETURNING id, created_date 2023-02-06 09:47:07,081 INFO sqlalchemy.engine.Engine [generated in 0.00016s] ('andy#1111', 'AndyAnderson', 'Killer Andy') 2023-02-06 09:47:07,081 INFO sqlalchemy.engine.Engine ROLLBACK This causes an exception when it gets to the time function: IntegrityError: NOT NULL constraint failed: coaches.created_date Some additional data (I have been using the rich library which produces an enormous amount of debug information so I'm trying to get the best bits: โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ โ โ exc_tb = <traceback object at 0x00000108BD2565C0> โ โ โ โ exc_type = <class 'sqlalchemy.exc.IntegrityError'> โ โ โ โ exc_value = IntegrityError('(sqlite3.IntegrityError) NOT NULL constraint failed: โ โ โ โ coaches.created_date') โ โ โ โ self = <sqlalchemy.util.langhelpers.safe_reraise object at 0x00000108BD1B79A0> โ โ โ โ traceback = None โ โ โ โ type_ = None โ โ โ โ value = None โ โ โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ In any event, I feel like I've gotten the wrong end of the stick on the way to make a table column automatically execute a SQL command with the func call. Any notions on this one? I haven't found any direct example in the SQLAlchemy 2.0 docs, and aside from the pretty awesome comment to a similar question, I haven't found any working solutions. Thanks for considering! I implemented a SQLAlchemy 2.0 mapped_column with a server_default of func.now() expecting the column to automatically fill during an INSERT operation. 
During the insert operation, SQLAlchemy threw an exception claiming the column NOT NULLABLE constraint was violated -- thus it was not automatically filling. | Posting an answer to my own question to note what actually did work (actual problem still exists, but a simplified variation does work just dandy the way I expect it to.) import datetime from sqlalchemy import Integer, String, DateTime from sqlalchemy import create_engine from sqlalchemy.sql import func from sqlalchemy.orm import DeclarativeBase from sqlalchemy.orm import Mapped from sqlalchemy.orm import mapped_column from sqlalchemy.orm import Session class Base(DeclarativeBase): pass class MyTable(Base): __tablename__ = "my_table" id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column(String) created_date: Mapped[datetime.datetime] = mapped_column( DateTime(timezone=True), server_default=func.now() ) def initialize_engine(filename): return create_engine(f"sqlite+pysqlite:///{filename}", echo=True) def initialize_tables(engine): Base.metadata.create_all(engine) def add_row(engine, name): this_row = MyTable(name=name) print(this_row) with Session(engine) as session: session.add(this_row) session.commit() my_file = "test.db" my_engine = initialize_engine(my_file) initialize_tables(my_engine) add_row(my_engine, "Dave") This produces the result: python datetest.py 2023-02-06 11:02:41,157 INFO sqlalchemy.engine.Engine BEGIN (implicit) 2023-02-06 11:02:41,158 INFO sqlalchemy.engine.Engine PRAGMA main.table_info("my_table") 2023-02-06 11:02:41,158 INFO sqlalchemy.engine.Engine [raw sql] () 2023-02-06 11:02:41,158 INFO sqlalchemy.engine.Engine COMMIT <__main__.MyTable object at 0x000002CC767ECD50> 2023-02-06 11:02:41,159 INFO sqlalchemy.engine.Engine BEGIN (implicit) 2023-02-06 11:02:41,160 INFO sqlalchemy.engine.Engine INSERT INTO my_table (name) VALUES (?) RETURNING id, created_date 2023-02-06 11:02:41,160 INFO sqlalchemy.engine.Engine [generated in 0.00020s] ('Dave',) 2023-02-06 11:02:41,171 INFO sqlalchemy.engine.Engine COMMIT The schema in the correctly working database reads: sqlite> .schema my_table CREATE TABLE my_table ( id INTEGER NOT NULL, name VARCHAR NOT NULL, created_date DATETIME DEFAULT (CURRENT_TIMESTAMP) NOT NULL, PRIMARY KEY (id) ); So... all I have to do is figure out why my original code isn't doing the simple variation! | 17 | 10 |
75,357,653 | 2023-2-6 | https://stackoverflow.com/questions/75357653/how-to-resume-a-pytorch-training-of-a-deep-learning-model-while-training-stopped | I am training a deep learning model and want to save checkpoints of the model, but training stops when the power goes off, and I then have to restart from the point where it was interrupted. For example, if 10 epochs have completed, I want to resume from epoch 11 with those parameters. | In PyTorch, you can resume from a specific point by using the epoch key from the checkpoint dictionary as follows: # Load model checkpoint checkpoint = torch.load("checkpoint.pth") model.load_state_dict(checkpoint['model']) epoch = checkpoint['epoch'] # Resume training from a specific epoch for epoch in range(epoch + 1, num_epochs): ... | 5 | 4
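For completeness, here is a minimal end-to-end sketch of saving and restoring such a checkpoint; model, optimizer, num_epochs and train_one_epoch are placeholders for your own objects, and saving the optimizer state alongside the model weights is usually needed to resume training faithfully:

import os
import torch

checkpoint_path = "checkpoint.pth"
start_epoch = 0

# Restore after an interruption, if a checkpoint exists.
if os.path.exists(checkpoint_path):
    checkpoint = torch.load(checkpoint_path)
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    start_epoch = checkpoint["epoch"] + 1  # continue with the next epoch

for epoch in range(start_epoch, num_epochs):
    train_one_epoch(model, optimizer)  # placeholder for the actual training loop
    # Overwrite the checkpoint at the end of every epoch.
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "epoch": epoch},
        checkpoint_path,
    )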