Dataset columns: question_id (int64), creation_date (string), link (string), question (string), accepted_answer (string), question_vote (int64), answer_vote (int64).
70,664,467
2022-1-11
https://stackoverflow.com/questions/70664467/invalid-python-sdk-in-pycharm
Since this morning, I'm no longer able to run projects in PyCharm. When generating a new virtual environment, I get an "Invalid Python SDK" error. Cannot set up a python SDK at Python 3.11... The SDK seems invalid. What I noticed: No matter what base interpreter I select (3.8, 3.9, 3.10) Pycharm always generates a Python 3.11 interpreter. I did completely uninstall PyCharm, as well as all my python installations and reinstalled everything. I also went through the "Repair IDE" option in PyCharm. I also removed and recreated all virtual environments. When I run "cmd" and type 'python' then python 3.10.1 opens without a problem. This morning, I installed a new antivirus software that did some checks and deleted some "unnecessary files" - maybe it is related (antivirus software is uninstalled again).
Dealt with the same issue despite using python and pycharm without issue for months. Recently kept giving me the error despite changing the PATH variable of my system and even manually pathing within pycharm. After hours of reinstalling pycharm, python and even jumping around versions with no success it turned out it was because my python directory had a space in it that it just randomly decided to break. For anyone who has tried what seems like everything to no avail ensure that NO part of the path to your python directory contains spaces
23
7
70,658,151
2022-1-10
https://stackoverflow.com/questions/70658151/how-to-log-production-database-changes-made-via-the-django-shell
I would like to automatically generate some sort of log of all the database changes that are made via the Django shell in the production environment. We use schema and data migration scripts to alter the production database and they are version controlled. Therefore if we introduce a bug, it's easy to track it back. But if a developer in the team changes the database via the Django shell which then introduces an issue, at the moment we can only hope that they remember what they did or/and we can find their commands in the Python shell history. Example. Let's imagine that the following code was executed by a developer in the team via the Python shell: >>> tm = TeamMembership.objects.get(person=alice) >>> tm.end_date = date(2022,1,1) >>> tm.save() It changes a team membership object in the database. I would like to log this somehow. I'm aware that there are a bunch of Django packages related to audit logging, but I'm only interested in the changes that are triggered from the Django shell, and I want to log the Python code that updated the data. So the questions I have in mind: I can log the statements from IPython but how do I know which one touched the database? I can listen to the pre_save signal for all model to know if data changes, but how do I know if the source was from the Python shell? How do I know what was the original Python statement?
This solution logs all commands in the session if any database changes were made. How to detect database changes Wrap execute_sql of SQLInsertCompiler, SQLUpdateCompiler and SQLDeleteCompiler. SQLDeleteCompiler.execute_sql returns a cursor wrapper. from django.db.models.sql.compiler import SQLInsertCompiler, SQLUpdateCompiler, SQLDeleteCompiler changed = False def check_changed(func): def _func(*args, **kwargs): nonlocal changed result = func(*args, **kwargs) if not changed and result: changed = not hasattr(result, 'cursor') or bool(result.cursor.rowcount) return result return _func SQLInsertCompiler.execute_sql = check_changed(SQLInsertCompiler.execute_sql) SQLUpdateCompiler.execute_sql = check_changed(SQLUpdateCompiler.execute_sql) SQLDeleteCompiler.execute_sql = check_changed(SQLDeleteCompiler.execute_sql) How to log commands made via the Django shell atexit.register() an exit handler that does readline.write_history_file(). import atexit import readline def exit_handler(): filename = 'history.py' readline.write_history_file(filename) atexit.register(exit_handler) IPython Check whether IPython was used by comparing HistoryAccessor.get_last_session_id(). import atexit import io import readline ipython_last_session_id = None try: from IPython.core.history import HistoryAccessor except ImportError: pass else: ha = HistoryAccessor() ipython_last_session_id = ha.get_last_session_id() def exit_handler(): filename = 'history.py' if ipython_last_session_id and ipython_last_session_id != ha.get_last_session_id(): cmds = '\n'.join(cmd for _, _, cmd in ha.get_range(ha.get_last_session_id())) with io.open(filename, 'a', encoding='utf-8') as f: f.write(cmds) f.write('\n') else: readline.write_history_file(filename) atexit.register(exit_handler) Put it all together Add the following in manage.py before execute_from_command_line(sys.argv). if sys.argv[1] == 'shell': import atexit import io import readline from django.db.models.sql.compiler import SQLInsertCompiler, SQLUpdateCompiler, SQLDeleteCompiler changed = False def check_changed(func): def _func(*args, **kwargs): nonlocal changed result = func(*args, **kwargs) if not changed and result: changed = not hasattr(result, 'cursor') or bool(result.cursor.rowcount) return result return _func SQLInsertCompiler.execute_sql = check_changed(SQLInsertCompiler.execute_sql) SQLUpdateCompiler.execute_sql = check_changed(SQLUpdateCompiler.execute_sql) SQLDeleteCompiler.execute_sql = check_changed(SQLDeleteCompiler.execute_sql) ipython_last_session_id = None try: from IPython.core.history import HistoryAccessor except ImportError: pass else: ha = HistoryAccessor() ipython_last_session_id = ha.get_last_session_id() def exit_handler(): if changed: filename = 'history.py' if ipython_last_session_id and ipython_last_session_id != ha.get_last_session_id(): cmds = '\n'.join(cmd for _, _, cmd in ha.get_range(ha.get_last_session_id())) with io.open(filename, 'a', encoding='utf-8') as f: f.write(cmds) f.write('\n') else: readline.write_history_file(filename) atexit.register(exit_handler)
9
6
70,608,619
2022-1-6
https://stackoverflow.com/questions/70608619/how-to-get-message-from-logging-function
I have a logger function from logging package that after I call it, I can send the message through logging level. I would like to send this message also to another function, which is a Telegram function called SendTelegramMsg(). How can I get the message after I call the funcion setup_logger send a message through logger.info("Start") for example, and then send this exatcly same message to SendTelegramMsg() function which is inside setup_logger function? My currently setup_logger function: # Define the logging level and the file name def setup_logger(telegram_integration=False): """To setup as many loggers as you want""" filename = os.path.join(os.path.sep, pathlib.Path(__file__).parent.resolve(), 'logs', str(dt.date.today()) + '.log') formatter = logging.Formatter('%(levelname)s: %(asctime)s: %(message)s', datefmt='%m/%d/%Y %H:%M:%S') level = logging.DEBUG handler = logging.FileHandler(filename, 'a') handler.setFormatter(formatter) consolehandler = logging.StreamHandler() consolehandler.setFormatter(formatter) logger = logging.getLogger('logs') if logger.hasHandlers(): # Logger is already configured, remove all handlers logger.handlers = [] else: logger.setLevel(level) logger.addHandler(handler) logger.addHandler(consolehandler) #if telegram_integration == True: #SendTelegramMsg(message goes here) return logger After I call the function setup_logger(): logger = setup_logger() logger.info("Start") The output: INFO: 01/06/2022 11:07:12: Start How am I able to get this message and send to SendTelegramMsg() if I enable the integration to True?
Implement a custom logging.Handler: class TelegramHandler(logging.Handler): def emit(self, record): message = self.format(record) SendTelegramMsg(message) # SendTelegramMsg(message, record.levelno) # Passing level # SendTelegramMsg(message, record.levelname) # Passing level name Add the handler: def setup_logger(telegram_integration=False): # ... if telegram_integration: telegram_handler = TelegramHandler() logger.addHandler(telegram_handler) return logger Usage, no change: logger = setup_logger() logger.info("Start")
7
9
70,603,855
2022-1-6
https://stackoverflow.com/questions/70603855/how-to-set-python-function-as-callback-for-c-using-pybind11
typedef bool (*ftype_callback)(ClientInterface* client, const Member* member ,int member_num); struct Member{ char x[64]; int y; }; class ClientInterface { public: virtual int calc()=0; virtual bool join()=0; virtual bool set_callback(ftype_callback on_member_join)=0; }; It is from SDK which I can call the client from dynamic library in c++ codes. bool cb(ClientInterface* client, const Member* member ,int member_num) { // do something } cli->set_callback(cb); cli->join(); I want to port it to python bindings use pybind11. How do I set_callback in python? I have seen the doc and try: PYBIND11_MODULE(xxx, m) { m.def("set_callback", [](xxx &self, py::function cb ){ self.set_callback(cb); }); } The code just failed to compile. My question, how do I convert the py::function to ftype_callback or there is other way to make it?
You need a little C++ to get things going. I'm going to use a simpler structure to make the answer more readable. In your binding code: #include <pybind11/pybind11.h> #include <functional> #include <string> namespace py = pybind11; struct Foo { int i; float f; std::string s; }; struct Bar { std::function<bool(const Foo &foo)> python_handler; std::function<bool(const Foo *foo)> cxx_handler; Bar() { cxx_handler = [this](const Foo *foo) { return python_handler(*foo); }; } }; PYBIND11_MODULE(example, m) { py::class_<Foo>(m, "Foo") // .def_readwrite("i", &Foo::i) .def_readwrite("f", &Foo::f) .def_readwrite("s", &Foo::i); py::class_<Bar>(m, "Bar") // .def_readwrite("handler", &Bar::python_handler); } Here, Foo is the object that is passed to the callback, and Bar is the object that needs its callback function set. Since you use pointers, I have wrapped the python_handler function with cxx_handler that is meant to be used in C++, and converted the pointer to reference. To be complete, I'll give a possible example of usage of the module here: import module.example as impl class Bar: def __init__(self): self.bar = impl.Bar() self.bar.handler = self.handler def handler(self, foo): print(foo) return True I have used this structure successfully in one of my projects. I don't know how you want to proceed, but perhaps if you don't want to change your original structure you can write wrapper classes upon them that use the given structure. Update: I thought that you controlled the structure when I wrote the answer above (I'll keep it for anyone who needs it). If you have a single cli instance, you can do something like: using Handler = std::function<bool(std::string, int, int)>; Handler handler; bool cb(ClientInterface *client, const Member *member, int member_num) { return handler(std::string(member->x), member->y, member_num); } // We have created cli somehow // cli->set_callback(cb); // cli->join(); PYBIND11_MODULE(example, m) { m.def("set_callback", [](Handler h) { handler = h; }); } If you have multiple ClientInterface instances, you can map ClientInterface pointers to handlers and call the appropriate handler in the cb function based on given ClientInterface pointer. Note: I haven't tested the above with a python script but it should work. Another Update If you want to handle multiple instances, the code can roughly look like this: using Handler = std::function<bool(std::string, int, int)>; std::map<ClientInterface *, handler> map; bool cb(ClientInterface *client, const Member *member, int member_num) { // Check if <client> instance exists in map return map[client](std::string(member->x), member->y, member_num); } PYBIND11_MODULE(example, m) { m.def("set_callback", [](int clientid, Handler h) { // Somehow map <clientid> to <client> pointer map[client] = h; }); } Note that this isn't a runnable code and you need to complete it.
6
5
70,598,913
2022-1-5
https://stackoverflow.com/questions/70598913/problem-resizing-plot-on-tkinter-figure-canvas
Python 3.9 on Mac running OS 11.6.1. My application involves placing a plot on a frame inside my root window, and I'm struggling to get the plot to take up a larger portion of the window. I thought rcParams in matplotlib.pyplot would take care of this, but I must be overlooking something. Here's what I have so far: import numpy as np from tkinter import Tk,Frame,TOP,BOTH import matplotlib from matplotlib import pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg plt.rcParams["figure.figsize"] = [18,10] root=Tk() root.wm_title("Root Window") root.geometry('1500x1000') x = np.linspace(0, 2 * np.pi, 400) y = np.sin(x ** 2) fig, ax = plt.subplots() ax.plot(x, y) canvas_frame=Frame(root) # also tried adjusting size of frame but that didn't help canvas_frame.pack(side=TOP,expand=True) canvas = FigureCanvasTkAgg(fig, master=canvas_frame) canvas.draw() canvas.get_tk_widget().pack(side=TOP,fill=BOTH,expand=True) root.mainloop() For my actual application, I need for canvas to have a frame as its parent and not simply root, which is why canvas_frame is introduced above.
Try something like this: fig.subplots_adjust(left=0.05, bottom=0.07, right=0.95, top=0.95, wspace=0, hspace=0) With this adjustment the figure takes up more of the screen area.
6
4
70,626,218
2022-1-7
https://stackoverflow.com/questions/70626218/how-to-find-the-nearest-linestring-to-a-point
How do I find the nearest LINESTRING to a point? I have a list of LINESTRINGs and a point value. How do I get the nearest LINESTRING to the POINT (5.41 3.9), and maybe the distance? from shapely.geometry import Point, LineString line_string = [LINESTRING (-1.15.12 9.9, -1.15.13 9.93), LINESTRING (-2.15.12 8.9, -2.15.13 8.93)] point = POINT (5.41 3.9) #distance line_string[0].distance(point) So far I think I got the distance value by doing line_string[0].distance(point) for the first LINESTRING, but I just want to make sure I am going about it the right way.
your sample geometry is invalid for line strings, have modified it's simple to achieve with sjoin_nearest() import geopandas as gpd import shapely.wkt import shapely.geometry line_string = ["LINESTRING (-1.15.12 9.9, -1.15.13 9.93)", "LINESTRING (-2.15.12 8.9, -2.15.13 8.93)"] # fix invalid wkt string... line_string = ["LINESTRING (-1.15 9.9, -1.15 9.93)", "LINESTRING (-2.15 8.9, -2.15 8.93)"] point = "POINT (5.41 3.9)" gdf_p = gpd.GeoDataFrame(geometry=[shapely.wkt.loads(point)]) gdf_l = gpd.GeoDataFrame(geometry=pd.Series(line_string).apply(shapely.wkt.loads)) df_n = gpd.sjoin_nearest(gdf_p, gdf_l).merge(gdf_l, left_on="index_right", right_index=True) df_n["distance"] = df_n.apply(lambda r: r["geometry_x"].distance(r["geometry_y"]), axis=1) df_n geometry_x index_right geometry_y distance 0 POINT (5.41 3.9) 0 LINESTRING (-1.15 9.9, -1.15 9.93) 8.89008 distance in meters use a CRS that is in meters. UTM has it's limitations if all points are not in same zone import geopandas as gpd import shapely.wkt import shapely.geometry line_string = ["LINESTRING (-1.15.12 9.9, -1.15.13 9.93)", "LINESTRING (-2.15.12 8.9, -2.15.13 8.93)"] # fix invalid wkt string... line_string = ["LINESTRING (-1.15 9.9, -1.15 9.93)", "LINESTRING (-2.15 8.9, -2.15 8.93)"] point = "POINT (5.41 3.9)" gdf_p = gpd.GeoDataFrame(geometry=[shapely.wkt.loads(point)], crs="epsg:4326") gdf_l = gpd.GeoDataFrame(geometry=pd.Series(line_string).apply(shapely.wkt.loads), crs="epsg:4326") gdf_p = gdf_p.to_crs(gdf_p.estimate_utm_crs()) gdf_l = gdf_l.to_crs(gdf_p.crs) df_n = gpd.sjoin_nearest(gdf_p, gdf_l).merge(gdf_l, left_on="index_right", right_index=True) df_n["distance"] = df_n.apply(lambda r: r["geometry_x"].distance(r["geometry_y"]), axis=1) df_n
9
4
70,651,053
2022-1-10
https://stackoverflow.com/questions/70651053/how-can-i-send-dynamic-website-content-to-scrapy-with-the-html-content-generated
I am working on certain stock-related projects where I have had a task to scrape all data on a daily basis for the last 5 years. i.e from 2016 to date. I particularly thought of using selenium because I can use crawler and bot to scrape the data based on the date. So I used the use of button click with selenium and now I want the same data that is displayed by the selenium browser to be fed by scrappy. This is the website I am working on right now. I have written the following code inside scrappy spider. class FloorSheetSpider(scrapy.Spider): name = "nepse" def start_requests(self): driver = webdriver.Firefox(executable_path=GeckoDriverManager().install()) floorsheet_dates = ['01/03/2016','01/04/2016', up to till date '01/10/2022'] for date in floorsheet_dates: driver.get( "https://merolagani.com/Floorsheet.aspx") driver.find_element(By.XPATH, "//input[@name='ctl00$ContentPlaceHolder1$txtFloorsheetDateFilter']" ).send_keys(date) driver.find_element(By.XPATH, "(//a[@title='Search'])[3]").click() total_length = driver.find_element(By.XPATH, "//span[@id='ctl00_ContentPlaceHolder1_PagerControl2_litRecords']").text z = int((total_length.split()[-1]).replace(']', '')) for data in range(z, z + 1): driver.find_element(By.XPATH, "(//a[@title='Page {}'])[2]".format(data)).click() self.url = driver.page_source yield Request(url=self.url, callback=self.parse) def parse(self, response, **kwargs): for value in response.xpath('//tbody/tr'): print(value.css('td::text').extract()[1]) print("ok"*200) Update: Error after answer is 2022-01-14 14:11:36 [twisted] CRITICAL: Traceback (most recent call last): File "/home/navaraj/PycharmProjects/first_scrapy/env/lib/python3.8/site-packages/twisted/internet/defer.py", line 1661, in _inlineCallbacks result = current_context.run(gen.send, result) File "/home/navaraj/PycharmProjects/first_scrapy/env/lib/python3.8/site-packages/scrapy/crawler.py", line 88, in crawl start_requests = iter(self.spider.start_requests()) TypeError: 'NoneType' object is not iterable I want to send current web html content to scrapy feeder but I am getting unusal error for past 2 days any help or suggestions will be very much appreciated.
The 2 solutions are not very different. Solution #2 fits better to your question, but choose whatever you prefer. Solution 1 - create a response with the html's body from the driver and scraping it right away (you can also pass it as an argument to a function): import scrapy from selenium import webdriver from selenium.webdriver.common.by import By from scrapy.http import HtmlResponse class FloorSheetSpider(scrapy.Spider): name = "nepse" def start_requests(self): # driver = webdriver.Firefox(executable_path=GeckoDriverManager().install()) driver = webdriver.Chrome() floorsheet_dates = ['01/03/2016','01/04/2016']#, up to till date '01/10/2022'] for date in floorsheet_dates: driver.get( "https://merolagani.com/Floorsheet.aspx") driver.find_element(By.XPATH, "//input[@name='ctl00$ContentPlaceHolder1$txtFloorsheetDateFilter']" ).send_keys(date) driver.find_element(By.XPATH, "(//a[@title='Search'])[3]").click() total_length = driver.find_element(By.XPATH, "//span[@id='ctl00_ContentPlaceHolder1_PagerControl2_litRecords']").text z = int((total_length.split()[-1]).replace(']', '')) for data in range(1, z + 1): driver.find_element(By.XPATH, "(//a[@title='Page {}'])[2]".format(data)).click() self.body = driver.page_source response = HtmlResponse(url=driver.current_url, body=self.body, encoding='utf-8') for value in response.xpath('//tbody/tr'): print(value.css('td::text').extract()[1]) print("ok"*200) # return an empty requests list return [] Solution 2 - with super simple downloader middleware: (You might have a delay here in parse method so be patient). import scrapy from scrapy import Request from scrapy.http import HtmlResponse from selenium import webdriver from selenium.webdriver.common.by import By class SeleniumMiddleware(object): def process_request(self, request, spider): url = spider.driver.current_url body = spider.driver.page_source return HtmlResponse(url=url, body=body, encoding='utf-8', request=request) class FloorSheetSpider(scrapy.Spider): name = "nepse" custom_settings = { 'DOWNLOADER_MIDDLEWARES': { 'tempbuffer.spiders.yetanotherspider.SeleniumMiddleware': 543, # 'projects_name.path.to.your.pipeline': 543 } } driver = webdriver.Chrome() def start_requests(self): # driver = webdriver.Firefox(executable_path=GeckoDriverManager().install()) floorsheet_dates = ['01/03/2016','01/04/2016']#, up to till date '01/10/2022'] for date in floorsheet_dates: self.driver.get( "https://merolagani.com/Floorsheet.aspx") self.driver.find_element(By.XPATH, "//input[@name='ctl00$ContentPlaceHolder1$txtFloorsheetDateFilter']" ).send_keys(date) self.driver.find_element(By.XPATH, "(//a[@title='Search'])[3]").click() total_length = self.driver.find_element(By.XPATH, "//span[@id='ctl00_ContentPlaceHolder1_PagerControl2_litRecords']").text z = int((total_length.split()[-1]).replace(']', '')) for data in range(1, z + 1): self.driver.find_element(By.XPATH, "(//a[@title='Page {}'])[2]".format(data)).click() self.body = self.driver.page_source self.url = self.driver.current_url yield Request(url=self.url, callback=self.parse, dont_filter=True) def parse(self, response, **kwargs): print('test ok') for value in response.xpath('//tbody/tr'): print(value.css('td::text').extract()[1]) print("ok"*200) Notice that I've used chrome so change it back to firefox like in your original code.
7
3
70,673,065
2022-1-11
https://stackoverflow.com/questions/70673065/where-is-conda-env-documented
I am wondering why the official documentation of conda does not mention anything about the command conda env? That makes me wonder if it would be possible to do every operation of conda env with the commands listed here and which one is recommended to use in practice. Right now I would assume that conda env creates an easy way to manipulate and work with conda environments.
The reason why the conda env commands are not similarly documented is historical. Namely, after conda was developed, others then developed an add-on package called conda-env that provided some convenience methods for operating on whole environments rather than package operations within environments. Eventually, the conda-env package was integrated directly into the conda package, but apparently there was never any systematic effort to unify the documentation. Instead, most of the high-level documentation on using conda env commands is found under the "Managing environments" section of the Conda documentation. As an end user, I typically use conda env for creating (from YAML), archiving/serializing (to YAML), and deleting whole environments. More directly, the documentation for conda env is consulted with $ conda env --help usage: conda-env [-h] {create,export,list,remove,update,config} ... positional arguments: {create,export,list,remove,update,config} create Create an environment based on an environment file export Export a given environment list List the Conda environments remove Remove an environment update Update the current environment based on environment file config Configure a conda environment optional arguments: -h, --help Show this help message and exit. and documentation of individual subcommands can be similarly consulted with conda env <subcommand> --help.
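For illustration, a few typical invocations of these subcommands (the environment and file names here are placeholders, not taken from the answer):
conda env create -f environment.yml            # create an environment from a YAML spec
conda env export -n myenv > environment.yml    # serialize an environment to YAML
conda env list                                 # list all Conda environments
conda env remove -n myenv                      # delete an environment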
5
4
70,672,108
2022-1-11
https://stackoverflow.com/questions/70672108/airflow-s3hook-read-files-in-s3-with-pandas-read-csv
I'm trying to read some files with pandas using the s3Hook to get the keys. I'm able to get the keys, however I'm not sure how to get pandas to find the files, when I run the below I get: No such file or directory: Here is my code: def transform_pages(company, **context): ds = context.get("execution_date").strftime('%Y-%m-%d') s3 = S3Hook('aws_default') s3_conn = s3.get_conn() keys = s3.list_keys(bucket_name=Variable.get('s3_bucket'), prefix=f'S/{company}/pages/date={ds}/', delimiter="/") prefix = f'S/{company}/pages/date={ds}/' logging.info(f'keys from function: {keys}') """ transforming pages and loading data back to S3 """ for file in keys: df = pd.read_csv(file, sep='\t', skiprows=1, header=None)
The format you are looking for is the following: filepath = f"s3://{bucket_name}/{key}" So in your specific case, something like: for file in keys: filepath = f"s3://s3_bucket/{file}" df = pd.read_csv(filepath, sep='\t', skiprows=1, header=None) Just make sure you have s3fs installed though (pip install s3fs).
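As a sketch building on this, with the bucket pulled from the same Airflow Variable used in the question (the storage_options keys follow s3fs conventions and are only needed if the worker's default AWS credential chain is not available):
bucket = Variable.get('s3_bucket')
for file in keys:
    df = pd.read_csv(
        f"s3://{bucket}/{file}",
        sep='\t', skiprows=1, header=None,
        # storage_options={"key": "<access_key>", "secret": "<secret_key>"},  # optional explicit credentials for s3fs
    )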
6
3
70,670,079
2022-1-11
https://stackoverflow.com/questions/70670079/get-indexes-of-pandas-rolling-window
I would like to get the indexes of the elements in each rolling window of a Pandas Series. A solution that works for me is from this answer to an existing question: I get the window.index for each window obtained from the rolling function described in the answer. I am only interested in step=1 for the aforementioned function. But this function is not specific for DataFrames and Series, it would work on basic Python lists. Isn't there some functionality that takes advantage of Pandas rolling operations? I tried the Rolling.apply method: s = pd.Series([1, 2, 3, 4, 5, 6, 7]) rolling = s.rolling(window=3) indexes = rolling.apply(lambda x: x.index) But it result in a TypeError: must be real number, not RangeIndex. Apparently, the Rolling.apply method only accepts functions that return a number based on each window. The functions cannot return other kinds of objects. Are there other methods of the Pandas Rolling class I could use? Even private methods. Or are there any other Pandas-specific functionalities to get the indexes of overlapping rolling windows? Expected output As output, I expect some kind of list-of-lists object. Each inner list should countain the index values of each window. The original s Series has [0, 1, 2, 3, 4, 5, 6] as index. So, rolling with a window=3, I expect as outcome something like: [ [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], ]
The apply function after rolling must return a numeric value for each window. One possible workaround is to use a list comprehension to iterate over each window and apply the custom transformation as required: [[*l.index] for l in s.rolling(3) if len(l) == 3] Alternatively you can also use sliding_window_view to accomplish the same: np.lib.stride_tricks.sliding_window_view(s.index, 3) Or even an list comprehension would do the job just fine: w = 3 [[*s.index[i : i + w]] for i in range(len(s) - w + 1)] Result [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]
6
5
70,658,955
2022-1-10
https://stackoverflow.com/questions/70658955/how-do-i-display-bar-plot-for-values-that-are-zero-in-plotly
How do I make the bar appear when one of the value of y is zero? It just leaves a gap by default. Is there a way I can enable it to plot for zero values? I am able to see a line on the x-axis at y=0 for the same if just plotted using go.Box. I would like to see this in the Bar plot as well. So far, I set the base to zero. But that doesn't plot for y=0 either. Here is my sample code. My actual code contains multiple traces, that's why I would like to see the plot for y=0 Here is the sample python code: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar(x=[1, 2, 3], y=[0, 3, 2])) fig.show()
Bar charts come with a line around the bars that by default are set to the same color as the background. In your case '#E5ECF6'. If you change that, the line will appear as a border around each bar that will remain visible even when y = 0 for any given x. fig.update_traces(marker_line_color = 'blue', marker_line_width = 12) If you set the line color to match that of the bar itself, you'll get this: Plot 1: Bars with identical fill and line colors If I understand correctly, this should be pretty close to what you're trying to achieve. At least visually. I would perhaps consider adjusting the yaxis range a bit to make it a bit clearer that the y value displayed is in fact 0. Plot 2: Adjusted y axis and separate colors Complete code for Plot 1: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar(x=[1, 2, 3], y=[0, 3, 2], marker_color = 'blue')) fig.update_traces(marker_line_color = 'blue', marker_line_width = 12) fig.show() Complete code for Plot 2: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar(x=[1, 2, 3], y=[0, 3, 2], marker_color = '#00CC96')) f = fig.full_figure_for_development(warn=False) fig.update_traces(marker_line_color = '#636EFA', marker_line_width = 4) fig.update_yaxes(range=[-1, 4]) fig.show() Edit after comments Just to verify that the line color is the same as the background color using plotly version 5.4.0 Plot 1: Plot 2: Zoomed in
6
6
70,648,325
2022-1-10
https://stackoverflow.com/questions/70648325/saving-the-progress-of-a-python-script-through-reboot
I'd like to start by saying that I'm very new to Python, and I started this project for fun. Specifically, it’s simply a program which sends compliments to you as notifications periodically throughout the day. This is not for school, and I was actually just trying to make it for my girlfriend while introducing myself to Python. With that in mind, here's my problem. I started this project by writing the simplest version of it: one you have to start each time your computer loads, and runs while you're actively using the computer. This portion works perfectly; however, I can't seem to figure out how to do the next step: have the program carry on as normal after reboot and save its progress. I know how to get it to start up again after reboot. Still, I'm not sure how to save its progress. Particularly, since I'm pulling the compliments out of a text file, I'm not sure how to have the program save what line it's on before rebooting. This is needed as I don't want the program to start from the first compliment each time, as there are over 300 unique ones as of now. In order to help you understand where my code currently is as for the best advice, I've shown it below: import datetime import time from plyer import notification Compliment = None try: with open('C:/Users/conno/Documents/compliments.txt') as f: lines = f.readlines() except: print("I'm sorry, I can't give you a new compliment today because I can't find the file.") for compliment in lines: notification.notify( title = "Your New Compliment for {}".format(datetime.date.today()), message = compliment, app_icon = "C:/Users/conno/Downloads/Paomedia-Small-N-Flat-Bell.ico", timeout = 10 ) time.sleep(60*30) I know I could easily have a variable count which line it is on, but how do I save that value?
You can simply save the count (which is the index of the last compliment line) as an integer in a pickle file, or easier in a text file and read from it every time your script starts after reboot. import datetime import time from plyer import notification Compliment = None compliment_index = 0 try: with open('C:/Users/conno/Documents/compliments.txt') as f: lines = f.readlines() except: print("I'm sorry, I can't give you a new compliment today because I can't find the file.") try: with open('C:/Users/conno/Documents/compliments_counter.txt') as f: compliment_index = int(f.readlines()[0]) except: with open('C:/Users/conno/Documents/compliments_counter.txt', "w") as f: f.write(str(compliment_index)) for index in range(compliment_index, len(lines)): compliment = lines[index] notification.notify( title = "Your New Compliment for {}".format(datetime.date.today()), message = compliment, app_icon = "C:/Users/conno/Downloads/Paomedia-Small-N-Flat-Bell.ico", timeout = 10) with open('C:/Users/conno/Documents/compliments_counter.txt', "w") as f: f.write(str(index)) time.sleep(60*30)
5
6
70,630,962
2022-1-8
https://stackoverflow.com/questions/70630962/finding-2nd-order-relations-sqlalchemy-throws-please-use-the-select-from-m
I have a User model, a Contact model, and a Group model. I'm looking to find all of the 2nd-order User's Groups given a particular user in a single query. That is, I'd like to: Use all contacts of a particular user... ... to get all the users who are also a contact of the given user, and use that to... ... get all groups of those (2nd-order) users Right now I've got something like this (where user.id is the particular user whose contacts-of-contacts I'd like to find): from sqlalchemy.orm import aliased SecondOrderUser = aliased(User) # This returns the phone number of all contacts who're a contact of the particular user subquery = User.query \ .join(Contact, Contact.user_id == User.id) \ .filter(User.id == user.id) \ .with_entities(Contact.contact_phone) \ .subquery() # This filters all users by the phone numbers in the above query, gets their contacts, and gets the group # IDs of those contacts who are themselves users contacts = User.query \ .filter(User.phone.in_(subquery)) \ .join(UserContact, UserContact.user_id == User.id) \ .join(SecondOrderUser, SecondOrderUser.phone == UserContact.phone) \ .join(Group, Group.user_id == SecondOrderUser.id) \ .with_entities(Group.id) \ .all() The only thing that Contact and User share (to link them together—that is, to find contacts that are themselves users) is a common phone number. I [think I] could also do it with four join statements and aliases, but this gives me the same error. Namely: sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join from, there are multiple FROMS which can join to this entity. Please use the .select_from() method to establish an explicit left side, as well as providing an explicit ON clause if not present already to help resolve the ambiguity. What am I doing incorrectly here? Where/how to join feels clear to me, which indicates that I'm totally missing something.
The problem here was two-fold. We have to alias a table every time we use it (not just from the second use onward) and, when using with_entities, we have to include all columns that we compare on, even if we don't intend to use their data in the end. My final code looked something like this: from sqlalchemy.orm import aliased User1 = aliased(User) User2 = aliased(User) User3 = aliased(User) Contact1 = aliased(Contact) Contact2 = aliased(Contact) contacts = User1.query \ .join(Contact1, Contact1.user_id == User1.id) \ .join(User2, User2.phone == Contact1.phone) \ .join(Contact2, Contact2.user_id == User2.id) \ .join(User3, User3.phone == Contact2.phone) \ .join(Group, Group.user_id == User3.id) \ .with_entities( Contact1.phone, Contact1.user_id, Contact2.phone, Contact2.user_id, Group.id, Group.user_id, User1.id, User2.id, User2.phone, User3.id, User3.phone, ) \ .all()
9
7
70,655,157
2022-1-10
https://stackoverflow.com/questions/70655157/how-can-i-alias-a-pytest-fixture
I have a few pytest fixtures I use from third-party libraries and sometimes their names are overly long and cumbersome. Is there a way to create a short alias for them? For example: the django_assert_max_num_queries fixture from pytest-django. I would like to call this max_queries in my tests.
You cannot just add an alias in the form of max_queries = django_assert_max_num_queries because fixtures are looked up by name at run-time and not imported (and even if they can be imported in some cases, this is not recommended). But you can always write your own fixture that just yields another fixture: @pytest.fixture def max_queries(django_assert_max_num_queries): yield django_assert_max_num_queries Done this way, max_queries will behave exactly the same as django_assert_max_num_queries. Note that you should use yield and not return, to make sure that the control is returned to the fixture.
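For completeness, a test would then use the alias exactly like the original fixture, which acts as a context manager; a sketch (the URL is hypothetical):
def test_list_view_query_count(max_queries, client):
    with max_queries(10):
        client.get("/some-url/")  # hypothetical view under test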
6
5
70,656,932
2022-1-10
https://stackoverflow.com/questions/70656932/how-to-test-python-file-with-pytest-having-if-name-main-with-argum
I want to test a python file with pytest which contains a if __name__ == '__main__': it also has arguments parsed in it. the code is something like this: if __name__ == '__main__': parser = argparse.ArgumentParser(description='Execute job.') parser.add_argument('--env', required=True, choices=['qa', 'staging', 'prod']) args = parser.parse_args() # some logic The limitation here is I cannot add a main() method and wrap the logic in if __name__ == '__main__': inside it and run it from there! The code is a legacy code and cannot be changed! I want to test this with pytest, I wonder how can I run this python file with some arguments inside my test?
Command line arguments are an input/output mechanism like any other. The key is to isolate them in a "boundary layer" and have your main program not depend on them directly. In this case, rather than making your program access sys.argv directly (which is essentially a global variable), wrap your program in an "entry point" function (e.g. "main") that takes args as an explicit parameter. If you don't pass any arguments to ArgumentParser.parse_args, it defaults to reading sys.argv for you. Instead, just pass along your args parameter. It might look something like: def main(args): # some logic pass if __name__ == '__main__': main(sys.argv[1:]) Your unit tests can call this entry point function and pass in any args they want: def test_main_does_foo_when_bar(): result = main(["bar"]) assert "foo" == result
5
4
70,656,586
2022-1-10
https://stackoverflow.com/questions/70656586/how-to-clear-oled-display-in-micropython
I'm doing this on an ESP8266 with MicroPython. There is a way to clear the OLED display in Arduino, but I don't know how to clear the display in MicroPython. I used the ssd1306 library to control my OLED. I've written code that prints to the OLED from a list in a loop, but the OLED draws each item on top of the text that was printed before it (one on top of the other, instead of clearing and then printing): display = [ip, request_uri, country, region, city] for real_time in display: oled.text(real_time, 0, 20) oled.show() time.sleep(2) print(real_time)
The fill() method is used to clean the OLED screen: oled.fill(0) oled.show()
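Applied to the loop from the question, that means clearing the framebuffer before drawing each item; a sketch assuming the same oled object and display list:
for real_time in display:
    oled.fill(0)                # clear the previous frame
    oled.text(real_time, 0, 20)
    oled.show()                 # push the new frame to the display
    time.sleep(2)
    print(real_time)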
5
9
70,654,589
2022-1-10
https://stackoverflow.com/questions/70654589/python-poetry-and-script-entrypoints
I'm trying to use Poetry and the scripts option to run a script. Like so: pyproject.toml [tool.poetry.scripts] xyz = "src.cli:main" Folder layout . ├── poetry.lock ├── pyproject.toml ├── run-book.txt └── src ├── __init__.py └── cli.py I then perform an install like so: ❯ poetry install Installing dependencies from lock file No dependencies to install or update If I then try to run the command, it's not found: ❯ xyz zsh: command not found: xyz Am I missing something here? Thanks,
Poetry is likely installing the script in your user local directory. On Ubuntu, for example, this is $HOME/.local/bin. If that directory isn't in your path, your shell will not find the script. A side note: It is generally a good idea to put a subdirectory with your package name in the src directory. It's generally better to not have an __init__.py in your src directory. Also consider renaming cli.py to __main__.py. This will allow your package to be run as a script using python -m package_name.
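A sketch of the suggested layout and the matching pyproject.toml entries; the package name my_package is an assumption, not something from the question:
src/
└── my_package/
    ├── __init__.py
    └── __main__.py    # defines main()
[tool.poetry]
packages = [{ include = "my_package", from = "src" }]
[tool.poetry.scripts]
xyz = "my_package.__main__:main"
With this layout, poetry run xyz should also work even when the user-local bin directory is not on PATH.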
13
9
70,649,979
2022-1-10
https://stackoverflow.com/questions/70649979/migrate-to-arm64-on-aws-lambda-show-error-unable-to-import-module-encryptor-la
I have a lambda function runs on Python 3.7 with architecture x86_64 before. Now I would like to migrate it to arm64 to use the Graviton processor and upgrade to Python 3.9 as well. While I success to create the Python 3.9 virtual environment layer with the dependencies that I need, which is aws-encryption-sdk, when I change the architecture of my lambda function to arm64 and runtime to Python 3.9, below error shows after I test my code: Unable to import module 'encryptor-lambda': /opt/python/cryptography/hazmat/bindings/_rust.abi3.so: cannot open shared object file: No such file or directory", I went and check my virtual env layer and pretty sure the file /opt/python/cryptography/hazmat/bindings/_rust.abi3.so is existed there. Then I tried to keep my runtime at Python 3.9 and switched back the architecture to x86, it works! Only if I try to change it to arm64, it has that error above. I look up online and can't seems to have a solution or as of why is that. Is it not possible to migrate for the lambda functions that requires dependencies? Or am I missing anything?
Libraries like aws-encryption-sdk-python sometimes contain code/dependencies that are not pure Python and need to be compiled. When code needs to be "compiled" it is usually compiled for a target architecture (like ARM or x86) to run properly. You can not run code compiled for one architecture on different architecture. So I suspect that is the reason for your error. Looking at the error message I suspect it is the cryptography library causing this issue. The library uses Rust. If you check your error, you will see that the shared library for the Rust binding is the causing your error (_rust.abi3.so). According to the documentation of the library, the ARM architecture is supported. Therefore, I suspect that the way you are packaging your Lambda deployment package and it's dependencies is the issue. You are probably doing that on a computer with x86 architecture. Package manager like pip usually detect the OS and architecture they are run on and download dependencies for those OS's and architectures. So I guess you have two options: Run your build/deployment on an ARM machine Somehow manage to "cross compile" with tools like crossenv Both options are not really great. Unfortunately, this is one of those areas where Python Lambdas can become very cumbersome to develop/deploy. Every time a depdency uses a non-Python extension (like a C extension), packaging/deployment becomes a problem. Maybe someone else has a great tool to recommend.
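A third, lighter option that sometimes works is to ask pip on an x86 machine to fetch prebuilt aarch64 wheels instead of compiling locally; this is only a sketch, and it succeeds only if every dependency publishes manylinux aarch64 wheels:
pip install \
    --platform manylinux2014_aarch64 \
    --implementation cp \
    --python-version 3.9 \
    --only-binary=:all: \
    --target python/ \
    -r requirements.txt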
5
5
70,624,600
2022-1-7
https://stackoverflow.com/questions/70624600/faiss-how-to-retrieve-vector-by-id-from-python
I have a faiss index and want to use some of the embeddings in my python script. Selection of Embeddings should be done by id. As faiss is written in C++, swig is used as an API. I guess the function I need is reconstruct : /** Reconstruct a stored vector (or an approximation if lossy coding) * * this function may not be defined for some indexes * @param key id of the vector to reconstruct * @param recons reconstucted vector (size d) */ virtual void reconstruct(idx_t key, float* recons) const; Therefore, I call this method in python, for example: vector = index.reconstruct(0) But this results in the following error: vector = index.reconstruct(0) File "lib/python3.8/site-packages/faiss/init.py", line 406, in replacement_reconstruct self.reconstruct_c(key, swig_ptr(x)) File "lib/python3.8/site-packages/faiss/swigfaiss.py", line 1897, in reconstruct return _swigfaiss.IndexFlat_reconstruct(self, key, recons) TypeError: in method 'IndexFlat_reconstruct', argument 2 of type 'faiss::Index::idx_t' python-BaseException Has someone an idea what is wrong with my approach?
This is the only way I found manually. import faiss import numpy as np a = np.random.uniform(size=30) a = a.reshape(-1,10).astype(np.float32) d = 10 index = faiss.index_factory(d,'Flat', faiss.METRIC_L2) index.add(a) xb = index.xb print(xb.at(0) == a[0][0]) Output: True You can get any vector with a loop required_vector_id = 1 vector = np.array([xb.at(required_vector_id*index.d + i) for i in range(index.d)]) print(np.all(vector== a[1])) Output: True
5
4
70,644,434
2022-1-9
https://stackoverflow.com/questions/70644434/mypy-using-unions-in-mapping-types-does-not-work-as-expected
Consider the following code: def foo(a: dict[str | tuple[str, str], str]) -> None: pass def bar(b: dict[str, str]) -> None: foo(b) def baz(b: dict[tuple[str, str], str]) -> None: foo(b) foo({"foo": "bar"}) foo({("foo", "bar"): "bar"}) When checked with mypy in strict mode it produces the following errors: file.py:6: error: Argument 1 to "foo" has incompatible type "Dict[str, str]"; expected "Dict[Union[str, Tuple[str, str]], str]" file.py:9: error: Argument 1 to "foo" has incompatible type "Dict[Tuple[str, str], str]"; expected "Dict[Union[str, Tuple[str, str]], str]" Which doesn't seem to make sense to me. The parameter is defined to accept a dict with either a string or a tuple as keys and strings as values. However, both variants are not accepted when explicitly annotated as such. They do however work when passing a dict like this directly to the function. It seems to me that mypy expects a dict that has to be able to have both options of the union as keys. I fail to understand why? If the constraints for the key are to be either a string or a tuple of to strings, passing either should be fine. Right? Am I missing something here?
A dict[str | tuple[str, str], str] isn't just a dict with either str or tuple[str, str] keys. It's a dict you can add more str or tuple[str, str] keys to. You can't add str keys to a dict[tuple[str, str], str], and you can't add tuple[str, str] keys to a dict[str, str], so those types aren't compatible. If you pass a literal dict directly to foo (or to bar or baz), that literal has no static type. mypy infers a type for the dict based on the context. Many different types may be inferred for a literal based on its context. When you pass b to foo inside bar or baz, b already has a static type, and that type is incompatible with foo's signature.
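A short illustration of why this is unsafe; the code is hypothetical and only meant to show what foo could legally do with its declared parameter type:
def foo(a: dict[str | tuple[str, str], str]) -> None:
    a[("x", "y")] = "boom"   # allowed for dict[str | tuple[str, str], str]

def bar(b: dict[str, str]) -> None:
    foo(b)                   # rejected by mypy: b could end up holding a tuple key
    for key in b:
        key.upper()          # would crash at runtime on that tuple key if the call were allowed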
6
4
70,643,142
2022-1-9
https://stackoverflow.com/questions/70643142/repeat-values-of-an-array-on-both-the-axes
Say I have this array: array = np.array([[1,2,3],[4,5,6],[7,8,9]]) Returns: 123 456 789 How should I go about getting it to return something like this? 111222333 111222333 111222333 444555666 444555666 444555666 777888999 777888999 777888999
You'd have to use np.repeat twice here. np.repeat(np.repeat(array, 3, axis=1), 3, axis=0) # [[1 1 1 2 2 2 3 3 3] # [1 1 1 2 2 2 3 3 3] # [1 1 1 2 2 2 3 3 3] # [4 4 4 5 5 5 6 6 6] # [4 4 4 5 5 5 6 6 6] # [4 4 4 5 5 5 6 6 6] # [7 7 7 8 8 8 9 9 9] # [7 7 7 8 8 8 9 9 9] # [7 7 7 8 8 8 9 9 9]]
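An equivalent one-liner, if it reads more naturally, is a Kronecker product with a block of ones (it produces the same 9x9 result):
np.kron(array, np.ones((3, 3), dtype=array.dtype))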
15
17
70,640,923
2022-1-9
https://stackoverflow.com/questions/70640923/countvectorizer-object-has-no-attribute-get-feature-names-out
Why do I keep getting this error? I tried different versions of Anaconda 3 but did not manage to get it working. What should I install to make it work properly? I used scikit-learn versions from 0.20 to 0.23. Error message: Code: import pandas as pd import matplotlib.pyplot as plt import plotly.express as px from sklearn.feature_extraction.text import CountVectorizer from collections import Counter from wordcloud import WordCloud vectorizer = CountVectorizer(ngram_range=(2,2), analyzer='word') sparse_matrix = vectorizer.fit_transform(df['content'][:2000]) frequencies = sum(sparse_matrix).toarray()[0] ngrams = pd.DataFrame(frequencies, index=vectorizer.get_feature_names_out(), columns=['frequency']) ngrams = ngrams.sort_values(by='frequency', ascending=False) ngrams
You are using an old version of scikit-learn. If I'm not mistaken, get_feature_names_out() was only introduced in version 1.0. Upgrade to a newer version, or, to get similar functionality in an earlier version, you can use get_feature_names().
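If upgrading is not an option, a small compatibility shim along these lines keeps the rest of the code from the question unchanged (sketch):
try:
    feature_names = vectorizer.get_feature_names_out()   # scikit-learn >= 1.0
except AttributeError:
    feature_names = vectorizer.get_feature_names()        # older releases
ngrams = pd.DataFrame(frequencies, index=feature_names, columns=['frequency'])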
12
34
70,639,443
2022-1-9
https://stackoverflow.com/questions/70639443/convert-a-bytes-iterable-to-an-iterable-of-str-where-each-value-is-a-line
I have an iterable of bytes, such as bytes_iter = ( b'col_1,', b'c', b'ol_2\n1', b',"val', b'ue"\n', ) (but typically this would not be hard coded or available all at once, but supplied from a generator say) and I want to convert this to an iterable of str lines, where line breaks are unknown up front, but could be any of \r, \n or \r\n. So in this case would be: lines_iter = ( 'col_1,col_2', '1,"value"', ) (but again, just as an iterable, not so it's all in memory at once). How can I do this? Context: my aim is to then pass the iterable of str lines to csv.reader (that I think needs whole lines?), but I'm interested in this answer just in general.
Use the io module to do most of the work for you: class ReadableIterator(io.IOBase): def __init__(self, it): self.it = iter(it) def read(self, n): # ignore argument, nobody actually cares # note that it is *critical* that we suppress the `StopIteration` here return next(self.it, b'') def readable(self): return True then just call io.TextIOWrapper(ReadableIterator(some_iterable_of_bytes)).
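Given the csv.reader context from the question, usage might look like the sketch below; newline='' follows the csv module's recommendation for the text layer:
import csv
import io

text_stream = io.TextIOWrapper(ReadableIterator(bytes_iter), encoding='utf-8', newline='')
for row in csv.reader(text_stream):
    print(row)   # ['col_1', 'col_2'] then ['1', 'value'] for the example input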
8
6
70,636,801
2022-1-8
https://stackoverflow.com/questions/70636801/map-unique-values-in-2-columns-to-integers
I have a dataframe with 2 categorical columns (col1, col2). col1 col2 0 A DE 1 A B 2 B BA 3 A A 4 C C I want to map the unique string values to integers, for example (A:0, B:1, BA:2, C:3, DE:4) col1 col2 ideal1 ideal2 0 A DE 0 4 1 A B 0 1 2 B BA 1 2 3 A A 0 0 4 C C 3 3 I am have tried to use factorize or category, but I am not getting the same unique value for both columns, as can be seen from ROW C: Here is my code: df = pd.DataFrame({'col1': ["A", "A", "B", "A" , "C"], 'col2': ["DE", "B", "BA", "A", "C"]}) #ideal map alphabetical: A:0, B:1, BA:2, C:3, DE:4 #ideal result df["ideal1"] = [0, 0, 1,0, 3] df["ideal2"] = [4,1,2,0,3] #trial #1 --> C value 2 & 3 : not matching df["cat1"] = df['col1'].astype("category").cat.codes df["cat2"] = df['col2'].astype("category").cat.codes #trial #2 --> C value 2 & 4 : not matching df["fac1"] = pd.factorize(df["col1"])[0] df["fac2"] = pd.factorize(df["col2"])[0] print (df) OUT: col1 col2 ideal1 ideal2 cat1 cat2 fac1 fac2 0 A DE 0 4 0 4 0 0 1 A B 0 1 0 1 0 1 2 B BA 1 2 1 2 1 2 3 A A 0 0 0 0 0 3 4 C C 3 3 2 3 2 4
To get the same categories across columns you need to reshape to a single dimension first. Then use factorize and restore the original shape. Here is an example using stack/unstack: x = df.stack() x[:] = x.factorize()[0] df2 = x.unstack() Output: col1 col2 0 0 1 1 0 2 2 2 3 3 0 0 4 4 4 Joining to the original data: x = df.stack() x[:] = x.factorize()[0] df2 = df.join(x.unstack().add_suffix('_cat')) Output: col1 col2 col1_cat col2_cat 0 A DE 0 1 1 A B 0 2 2 B BA 2 3 3 A A 0 0 4 C C 4 4 alphabetical order If you really want alphabetical order, you could create you own custom mapping dictionary: import numpy as np cats = {k:v for v,k in enumerate(np.unique(df.values))} df.replace(cats) Output: col1 col2 0 0 4 1 0 1 2 1 2 3 0 0 4 3 3
5
5
70,586,483
2022-1-5
https://stackoverflow.com/questions/70586483/returning-array-from-recursive-binary-tree-search
Hi I've made a simple Binary Tree and added a pre-order traversal method. After throwing around some ideas I got stuck on finding a way to return each value from the traverse_pre() method in an array. class BST: def __init__(self, val): self.value = val self.left = None self.right = None def add_child(self, val): if self.value: if val < self.value: if self.left == None: self.left = BST(val) else: self.left.add_child(val) else: if val > self.value: if self.right == None: self.right = BST(val) else: self.right.add_child(val) else: self.value = val def traverse_pre(self): if self.left: self.left.traverse_pre() print(self.value) if self.right: self.right.traverse_pre() Tree = BST(5) Tree.add_child(10) Tree.add_child(8) Tree.add_child(2) Tree.add_child(4) Tree.add_child(7) Tree.traverse_pre() How would I modify the traverse_pre() function to return an array consisting of the node values. Is there a good example of this process for me to understand this further, I'm a bit stuck on how values can be appended to an array within recursion.
I would not recommend copying the entire tree to an intermediate list using .append or .extend. Instead use yield which makes your tree iterable and capable of working directly with many built-in Python functions - class BST: # ... def preorder(self): # value yield self.value # left if self.left: yield from self.left.preorder() # right if self.right: yield from self.right.preorder() We can simply reorder the lines this to offer different traversals like inorder - class BST: # ... def inorder(self): # left if self.left: yield from self.left.inorder() # value yield self.value # right if self.right: yield from self.right.inorder() And postorder - class BST: # ... def postorder(self): # left if self.left: yield from self.left.postorder() # right if self.right: yield from self.right.postorder() # value yield self.value Usage of generators provides inversion of control. Rather than the traversal function deciding what happens to each node, the the caller is left with the decision on what to do. If a list is indeed the desired target, simply use list - list(mytree.preorder()) # => [ ... ] That said, there's room for improvement with the rest of your code. There's no need to mutate nodes and tangle self context and recursive methods within your BST class directly. A functional approach with a thin class wrapper will make it easier for you to grow the functionality of your tree. For more information on this technique, see this related Q&A. If you need to facilitate trees of significant size, a different traversal technique may be required. Just ask in the comments and someone can help you find what you are looking for.
6
0
70,602,290
2022-1-6
https://stackoverflow.com/questions/70602290/google-app-engine-deployment-fails-error-while-finding-module-specification-for
We are using command prompt c:\gcloud app deploy app.yaml, but get the following error: Running "python3 -m pip install --requirement requirements.txt --upgrade --upgrade-strategy only-if-needed --no-warn-script-location --no-warn-conflicts --force-reinstall --no-compile (PIP_CACHE_DIR=/layers/google.python.pip/pipcache PIP_DISABLE_PIP_VERSION_CHECK=1)" Step #2 - "build": /layers/google.python.pip/pip/bin/python3: Error while finding module specification for 'pip' (AttributeError: module '__main__' has no attribute '__file__') Step #2 - "build": Done "python3 -m pip install --requirement requirements.txt --upgr..." (34.49892ms) Step #2 - "build": Failure: (ID: 0ea8a540) /layers/google.python.pip/pip/bin/python3: Error while finding module specification for 'pip' (AttributeError: module '__main__' has no attribute '__file__') Step #2 - "build": -------------------------------------------------------------------------------- Step #2 - "build": Running "mv -f /builder/outputs/output-5577006791947779410 /builder/outputs/output" Step #2 - "build": Done "mv -f /builder/outputs/output-5577006791947779410 /builder/o..." (12.758866ms) Step #2 - "build": ERROR: failed to build: exit status 1 Finished Step #2 - "build" ERROR ERROR: build step 2 "us.gcr.io/gae-runtimes/buildpacks/python37/builder:python37_20211201_3_7_12_RC00" failed: step exited with non-zero status: 145 Our Requirements.txt is as below. We are currently on Python 3.7 standard app engine firebase_admin==3.0.0 sendgrid==6.9.3 google-auth==1.35.0 google-auth-httplib2==0.1.0 jinja2==3.0.3 MarkupSafe==2.0.1 pytz==2021.3 Flask==2.0.2 twilio==6.46.0 httplib2==0.20.2 requests==2.24.0 requests_toolbelt==0.9.1 google-cloud-tasks==2.7.1 google-cloud-logging==1.15.1 googleapis-common-protos==1.54.0 Please help.The above code was working well before updating the requirements.txt file. We tried to remove gunicorn to allow the system pickup the latest according to documentation here. We have a subdirectory structure that stores all the .py files in controllers and db definitions in models. Our main.py has the following - sys.path.append(os.path.join(os.path.dirname(__file__), '../controllers')) sys.path.append(os.path.join(os.path.dirname(__file__), '../models')) Does anyone know how to debug this error - Error while finding module specification for 'pip' (AttributeError: module '__main__' has no attribute '__file__'). What does this mean?
I had the same issue when deploying a Google Cloud Function. The error cloud function Error while finding module specification for 'pip' (AttributeError: module 'main' has no attribute 'file'); Error ID: c84b3231 appeared after commenting out some packages in the requirements.txt, but that was nothing important and likely did not cause it. I guess that it is more a problem of an instability in Google Storage, since that same Cloud Function I was working on had lost its archive already some time before, all of a sudden, out of nowhere, showing: Archive not found in the storage location cloud function and I did not delete or change anything that might explain this, as Archive not found in the storage location: Google Function would suggest. Though that answer has one very interesting guess that might explain at least the very first time the "Archive not found" error came up and thus made the CF instable: I might have changed the timezone city of the bucket during browsing the Google Storage. It is too long ago, but I know I browsed the GS, therefore, I cannot exclude this. Quote: "It [the Archive not found error] may occurr too if GCS bucket's region is not matched to your Cloud function region." After this "Archive not found" crash, I manually added main.py and requirements.txt and filled them again with code from the backup. This worked for some time, but there seems to be some general instability in the Google Storage. Therefore, always keep backups of your deployed scripts. Then, after getting this pip error of the question in that already instable Cloud Function, waiting for a day or two, Google Function again showed Archive not found in the storage location cloud function If you run into this pip error in a Cloud Function, you might consider updating pip in the "requirements.txt" but if you are in such an unstable Cloud Function the better workaround seems to be to create a new Cloud Function and copy everything in there. The pip error probably just shows that the source script, in this case the requirements.txt, cannot be run since the source code is not fully embedded anymore or has lost some embedding in the Google Storage. Or you give that Cloud Function a second chance and edit, go to Source tab, click on Dropdown Source code to choose Inline Editor and add main.py and requirements.txt manually (Runtime: Python).
18
1
70,632,673
2022-1-8
https://stackoverflow.com/questions/70632673/fastapi-is-not-loading-static-files
So, I'm swapping my project from node.js to Python FastAPI. Everything has been working fine with node, but here it says that my static files are not present. Here's the code:

from fastapi import FastAPI, Request, WebSocket
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

app = FastAPI()
app.mount("/static", StaticFiles(directory="../static"), name="static")
templates = Jinja2Templates(directory='../templates')

@app.get('/')
async def index_loader(request: Request):
    return templates.TemplateResponse('index.html', {"request": request})

The project's structure looks like this: [screenshot of the project tree]

Files are clearly where they should be, but when I connect to the website, the following errors occur:

INFO:     connection closed
INFO:     127.0.0.1:54295 - "GET /img/separator.png HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54296 - "GET /css/rajdhani.css HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54295 - "GET /js/pixi.min.js HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54296 - "GET /js/ease.js HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54298 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:54298 - "GET /img/separator.png HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54299 - "GET /css/rajdhani.css HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54298 - "GET /js/pixi.min.js HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:54299 - "GET /js/ease.js HTTP/1.1" 404 Not Found

So, basically, any static file that I'm using is missing, and I have no idea what I am doing wrong. How do I fix it?
Here: app.mount("/static", StaticFiles(directory="../static"), name="static") You mount your static directory under /static path. That means, if you want access static files in your html you need to use static prefix, e.g. <img src="static/img/separator.png"/>
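If the HTML is rendered through the Jinja2Templates instance from the question, you can also let Starlette build those URLs via url_for and the mount's name instead of hard-coding the static prefix. A small sketch of what the template could contain; the file names are just the ones from the question's logs:

<img src="{{ url_for('static', path='/img/separator.png') }}">
<link href="{{ url_for('static', path='/css/rajdhani.css') }}" rel="stylesheet">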
6
4
70,631,807
2022-1-8
https://stackoverflow.com/questions/70631807/python-pandas-pivot-of-two-columns-columnname-and-value
I have a pandas dataframe that contains two columns, as well as a default index. The first column is the intended 'Column Name' and the second column is the required value for that column.

   name           returnattribute
0  Customer Name  Customer One Name
1  Customer Code  CGLOSPA
2  Customer Name  Customer Two Name
3  Customer Code  COTHABA
4  Customer Name  Customer Three Name
5  Customer Code  CGLOADS
6  Customer Name  Customer Four Name
7  Customer Code  CAPRCANBRA
8  Customer Name  Customer Five Name
9  Customer Code  COTHAMO

I would like to pivot this so that instead of 10 rows, I have 5 rows with two columns ('Customer Name' and 'Customer Code'). The hoped-for result is as below:

   Customer Code  Customer Name
0  CGLOSPA        Customer One Name
1  COTHABA        Customer Two Name
2  CGLOADS        Customer Three Name
3  CAPRCANBRA     Customer Four Name
4  COTHAMO        Customer Five Name

I have tried to use the pandas pivot function:

df.pivot(columns='name', values='returnattribute')

But this still results in ten rows with alternating blanks:

   Customer Code  Customer Name
0  NaN            Customer One Name
1  CGLOSPA        NaN
2  NaN            Customer Two Name
3  COTHABA        NaN
4  NaN            Customer Three Name
5  CGLOADS        NaN
6  NaN            Customer Four Name
7  CAPRCANBRA     NaN
8  NaN            Customer Five Name
9  COTHAMO        NaN

How do I pivot the dataframe to get just 5 rows of two columns?
You can also pass directly the new index to pivot_table, use aggfunc='first' as you have non numeric data: df.pivot_table(index=df.index//2, columns='name', values='returnattribute', aggfunc='first') output: name Customer Code Customer Name 0 CGLOSPA Customer One Name 1 COTHABA Customer Two Name 2 CGLOADS Customer Three Name 3 CAPRCANBRA Customer Four Name 4 COTHAMO Customer Five Name
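An equivalent sketch without aggfunc, using plain pivot once the pairing index is made explicit as a column; the column name pair is just an illustrative choice:

out = (df.assign(pair=df.index // 2)
         .pivot(index='pair', columns='name', values='returnattribute'))
print(out)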
5
2
70,630,932
2022-1-8
https://stackoverflow.com/questions/70630932/how-to-use-tweepy-for-twitter-api-v2-in-getting-user-id-by-username
I'm trying to replicate this snippet from GeeksforGeeks, except that it uses OAuth for Twitter API v1.1 while I am using API v2.

# the screen name of the user
screen_name = "PracticeGfG"

# fetching the user
user = api.get_user(screen_name)

# fetching the ID
ID = user.id_str

print("The ID of the user is : " + ID)

OUTPUT: The ID of the user is: 4802800777.

And here's mine:

import os
import tweepy

API_KEY = os.getenv('API_KEY')
API_KEY_SECRET = os.getenv('API_KEY_SECRET')
BEARER_TOKEN = os.getenv('BEARER_TOKEN')
ACCESS_TOKEN = os.getenv('ACCESS_TOKEN')
ACCESS_TOKEN_SECRET = os.getenv('ACCESS_TOKEN_SECRET')

screen_name = 'nytimes'

client = tweepy.Client(consumer_key=API_KEY,
                       consumer_secret=API_KEY_SECRET,
                       access_token=ACCESS_TOKEN,
                       access_token_secret=ACCESS_TOKEN_SECRET)

user = client.get_user(screen_name)
id = user.id_str
print(id)

When I run mine, it returns this error: TypeError: get_user() takes 1 positional argument but 2 were given. Can you give me hints on what I missed? Thanks in advance.
get_user has the following signature:

Client.get_user(*, id, username, user_auth=False, expansions, tweet_fields, user_fields)

Notice the *. It is used to force the caller to use named arguments. For example, this won't work:

>>> def add(first, *, second):
...     print(first, second)
...
>>> add(1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: add() takes 1 positional argument but 2 were given

But this will:

>>> add(1, second=2)
1 2

So to answer your question, you should call the get_user method like this:

client.get_user(username=screen_name)
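To mirror the original v1.1 snippet end to end: with API v2 the client returns a Response object whose data attribute holds the user, so printing the ID looks roughly like this sketch:

user = client.get_user(username=screen_name)
print("The ID of the user is : " + str(user.data.id))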
6
6
70,617,258
2022-1-7
https://stackoverflow.com/questions/70617258/session-object-in-fastapi-similar-to-flask
I am trying to use a session to pass variables across view functions in FastAPI. However, I cannot find any documentation that specifically talks about a session object; everywhere I look, cookies are used. Is there any way to convert the Flask code below to FastAPI? I want to keep the session implementation as simple as possible.

from flask import Flask, session, render_template, request, redirect, url_for

app = Flask(__name__)
app.secret_key = 'asdsdfsdfs13sdf_df%&'

@app.route('/a')
def a():
    session['my_var'] = '1234'
    return redirect(url_for('b'))

@app.route('/b')
def b():
    my_var = session.get('my_var', None)
    return my_var

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
Take a look at Starlette's SessionMiddleware. FastAPI uses Starlette under the hood so it is compatible. After you register SessionMiddleware, you can access Request.session, which is a dictionary. Documentation: SessionMiddleware An implementation in FastAPI may look like: @app.route("/a") async def a(request: Request) -> RedirectResponse: request.session["my_var"] = "1234" return RedirectResponse("/b") @app.route("/b") async def b(request: Request) -> PlainTextResponse: my_var = request.session.get("my_var", None) return PlainTextResponse(my_var)
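For completeness, the middleware has to be registered with a secret key before request.session becomes available (it also needs the itsdangerous package installed). A minimal sketch assuming the same app object:

from starlette.middleware.sessions import SessionMiddleware

app.add_middleware(SessionMiddleware, secret_key="asdsdfsdfs13sdf_df%&")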
8
13
70,627,163
2022-1-7
https://stackoverflow.com/questions/70627163/how-to-work-with-regex-in-pathlib-correctly
I want to find all images and am trying to use pathlib, but my regular expression doesn't work. Where did I go wrong?

from pathlib import Path

FILE_PATHS = list(Path('./photos/test').rglob('*.(jpe?g|png)'))
print(len(FILE_PATHS))

FILE_PATHS = list(Path('./photos/test').rglob('*.jpg'))  # 11104
print(len(FILE_PATHS))

Output:

0
11104
Get a list of files using a regex:

import re
from pathlib import Path

p = Path('C:/Users/user/Pictures')
files = []
for x in p.iterdir():
    a = re.search('.*(jpe?g|png)', str(x))
    if a is not None:
        files.append(a.group())
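If you want to keep the recursive search from the question, another option is to let rglob('*') walk the tree and filter the results with a compiled regex. A minimal sketch; anchoring the pattern to the end of the file name and ignoring case are assumptions you may want to adjust:

import re
from pathlib import Path

pattern = re.compile(r'\.(jpe?g|png)$', re.IGNORECASE)
file_paths = [p for p in Path('./photos/test').rglob('*') if pattern.search(p.name)]
print(len(file_paths))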
8
7
70,620,319
2022-1-7
https://stackoverflow.com/questions/70620319/plotting-pd-df-with-datetime-index-in-matplotlib-results-in-valueerror-due-to-wr
I am trying to plot a pandas.DataFrame, but getting an unexplainable ValueError. Here is sample code causing the problem: import pandas as pd import matplotlib.pyplot as plt from io import StringIO import matplotlib.dates as mdates weekday_fmt = mdates.DateFormatter('%a %H:%M') test_csv = 'datetime,x1,x2,x3,x4,x5,x6\n' \ '2021-12-06 00:00:00,8,42,14,23,12,2\n' \ '2021-12-06 00:15:00,17,86,68,86,92,45\n' \ '2021-12-06 00:30:00,44,49,81,26,2,95\n' \ '2021-12-06 00:45:00,35,78,33,18,80,67' test_df = pd.read_csv(StringIO(test_csv), index_col=0) test_df.index = pd.to_datetime(test_df.index) plt.figure() ax = test_df.plot() ax.set_xlabel(f'Weekly aggregation') ax.set_ylabel('y-label') fig = plt.gcf() fig.set_size_inches(12.15, 5) ax.get_legend().remove() ax.xaxis.set_major_formatter(weekday_fmt) # This and the following line are the ones causing the issues ax.xaxis.set_minor_formatter(weekday_fmt) plt.show() If the two formatting lines are removed, the code runs through, but if I leave them in there, I get a ValueError: ValueError: Date ordinal 27312480 converts to 76749-01-12T00:00:00.000000 (using epoch 1970-01-01T00:00:00), but Matplotlib dates must be between year 0001 and 9999. The reason seems to be that the conversion of datetime in pandas and matplotlib are incompatible. This could probably be circumvented by not using the built-in plot-function of pandas. Is there another way? Thanks! My package versions are: pandas 1.3.4 numpy 1.19.5 matplotlib 3.4.2 python 3.8.10
Thanks to the comments by Jody Klymak and MrFuppes, I found the answer to simply be ax = test_df.plot(x_compat=True). For anybody stumbling upon this in future, here comes the full explanation of what is happening: When using the plot-function, pandas takes over the formatting of x-tick (and possibly other features). The selected x-tick-values shown to matplotlib do not need to correspond with what one would expect. In the shown example, the function ax.get_xlim() returns (27312480.0, 27312525.0). Using x_compat=True forces pandas to hand the correct values over to matplotlib where the formatting then happens. Since this was not clear to me from the error message I received, this post might help future viewers searching for that error message.
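For reference, a sketch of the fix applied to the question's snippet; only the plotting line changes and everything else stays as it was:

ax = test_df.plot(x_compat=True)
ax.xaxis.set_major_formatter(weekday_fmt)
ax.xaxis.set_minor_formatter(weekday_fmt)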
7
10
70,623,704
2022-1-7
https://stackoverflow.com/questions/70623704/enumerate-causes-incompatible-type-mypy-error
The following code:

from typing import Union

def process(actions: Union[list[str], list[int]]) -> None:
    for pos, action in enumerate(actions):
        act(action)

def act(action: Union[str, int]) -> None:
    print(action)

generates a mypy error:

Argument 1 to "act" has incompatible type "object"; expected "Union[str, int]"

However, when removing the enumerate function the typing is fine:

from typing import Union

def process(actions: Union[list[str], list[int]]) -> None:
    for action in actions:
        act(action)

def act(action: Union[str, int]) -> None:
    print(action)

Does anyone know what the enumerate function is doing to affect the types? This is Python 3.9 and mypy 0.921.
enumerate.__next__ needs more context than is available to have a return type more specific than Tuple[int, Any], so I believe mypy itself would need to be modified to make the inference that enumerate(actions) produces Tuple[int,Union[str,int]] values. Until that happens, you can explicitly cast the value of action before passing it to act. from typing import Union, cast StrOrInt = Union[str, int] def process(actions: Union[list[str], list[int]]) -> None: for pos, action in enumerate(actions): act(cast(StrOrInt, action)) def act(action: Union[str, int]) -> None: print(action) You can also make process generic (which now that I've thought of it, is probably a better idea, as it avoids the overhead of calling cast at runtime). from typing import Union, cast, Iterable, TypeVar T = TypeVar("T", str, int) def process(actions: Iterable[T]) -> None: for pos, action in enumerate(actions): act(action) def act(action: T) -> None: print(action) Here, T is not a union of types, but a single concrete type whose identity is fixed by the call to process. Iterable[T] is either Iterable[str] or Iterable[int], depending on which type you pass to process. That fixes T for the rest of the call to process, which every call to act must take the same type of argument. An Iterable[str] or an Iterable[int] is a valid argument, binding T to int or str in the process. Now enumerate.__next__ apparently can have a specific return type Tuple[int, T].
5
4
70,610,919
2022-1-6
https://stackoverflow.com/questions/70610919/installing-python-in-dockerfile-without-using-python-image-as-base
I have a python script that uses DigitalOcean tools (doctl and kubectl) I want to containerize. This means my container will need python, doctl, and kubectl installed. The trouble is, I figure out how to install both python and DigitalOcean tools in the dockerfile. I can install python using the base image "python:3" and I can also install the DigitalOcean tools using the base image "alpine/doctl". However, the rule is you can only use one base image in a dockerfile. So I can include the python base image and install the DigitalOcean tools another way: FROM python:3 RUN <somehow install doctl and kubectl> RUN pip install firebase-admin COPY script.py CMD ["python", "script.py"] Or I can include the alpine/doctl base image and install python3 another way. FROM alpine/doctl RUN <somehow install python> RUN pip install firebase-admin COPY script.py CMD ["python", "script.py"] Unfortunately, I'm not sure how I would do this. Any help in how I can get all these tools installed would be great!
Just add this along with anything else you want to apt-get install:

RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip

In Alpine it should be something like:

RUN apk add --update --no-cache python3 && \
    ln -sf python3 /usr/bin/python && \
    python3 -m ensurepip && \
    pip3 install --no-cache --upgrade pip setuptools
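Putting it together for the python:3 base from the question, here is a rough Dockerfile sketch that adds kubectl via the install method from the Kubernetes docs and pulls a doctl release binary; the doctl version number is only a placeholder, so check the doctl releases page for the one you want:

FROM python:3

RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
    && install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl \
    && rm kubectl

RUN curl -sL https://github.com/digitalocean/doctl/releases/download/v1.92.0/doctl-1.92.0-linux-amd64.tar.gz \
    | tar -xz -C /usr/local/bin doctl

RUN pip install firebase-admin

COPY script.py .
CMD ["python", "script.py"]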
9
8
70,589,218
2022-1-5
https://stackoverflow.com/questions/70589218/can-python-cursor-execute-accept-multiple-queries-in-one-go
Can the cursor.execute call below execute multiple SQL queries in one go? cursor.execute("use testdb;CREATE USER MyLogin") I don't have python setup yet but want to know if above form is supported by cursor.execute? import pyodbc # Some other example server values are # server = 'localhost\sqlexpress' # for a named instance # server = 'myserver,port' # to specify an alternate port server = 'tcp:myserver.database.windows.net' database = 'mydb' username = 'myusername' password = 'mypassword' cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password) cursor = cnxn.cursor() #Sample select query cursor.execute("SELECT @@version;") row = cursor.fetchone() while row: print(row[0]) row = cursor.fetchone()
Multiple SQL statements in a single string is often referred to as an "anonymous code block". There is nothing in pyodbc (or pypyodbc) to prevent you from passing a string containing an anonymous code block to the Cursor.execute() method. They simply pass the string to the ODBC Driver Manager (DM) which in turn passes it to the ODBC Driver. However, not all ODBC drivers accept anonymous code blocks by default. Some databases default to allowing only a single SQL statement per .execute() to protect us from SQL injection issues. For example, MySQL/Connector ODBC defaults MULTI_STATEMENTS to 0 (off) so if you want to run an anonymous code block you will have to include MULTI_STATEMENTS=1 in your connection string. Note also that changing the current database by including a USE … statement in an anonymous code block can sometimes cause problems because the database context changes in the middle of a transaction. It is often better to execute a USE … statement by itself and then continue executing other SQL statements.
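A sketch of the safer pattern described in the last paragraph, reusing the connection and cursor from the question: run the USE on its own, then the next statement separately.

cursor.execute("use testdb;")
cursor.execute("CREATE USER MyLogin")
cnxn.commit()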
5
2
70,608,096
2022-1-6
https://stackoverflow.com/questions/70608096/conda-install-different-packages-from-different-channels-in-one-line
When using conda install, is it possible to install different packages from different channels in one line? For example could one do something like this? conda install -c <channel_1> <package_1> -c <channel_2> <package_2> ...?
The --channel argument The --channel, -c flag tells Conda where to search for packages, but does not necessarily constrain where a specific package should be sourced. Moreover, the order that channels are specified applies to the whole solving process, and has no contextual relationship with adjacent package specifications. For example, the following commands are all completely identical after parsing: conda install -c A -c B pkg1 pkg2 conda install -c A pkg1 -c B pkg2 conda install -c A pkg2 -c B pkg1 conda install pkg1 pkg2 -c A -c B and all of these will prioritize channel A over B (under channel_priority: strict or flexible). It should be emphasized that this does not guarantee either of these channels will be used, only considered. I think it helps to have a prosaic translation of the above command: With channels A and B prioritized, ensure that packages pkg1 and pkg2 are installed in the current environment. Specifying channels per package However, the MatchSpec grammar is rather expressive and fully supports specifying the channel from which to source a given package. For example, if we want pkg1 from channel A and pkg2 from channel B, this would be expressed as: conda install A::pkg1 B::pkg2 which translates to the imperative: Ensure that pkg1 from channel A and pkg2 from channel B are installed in the current environment. Note that we don't even need to include the channel via a --channel argument, because the package specification itself indicates the channel. One only needs to include the --channel, -c argument if they want to source additional packages (e.g., dependencies of pkg1) from the additional channel.
10
19
70,610,001
2022-1-6
https://stackoverflow.com/questions/70610001/pandas-method-chaining-when-df-not-assigned-yet
Is it possible to do method chaining in pandas when no variable referring to the dataframe has been assigned yet AND the method needs to refer to the dataframe? Example where the data frame can be referred to by a variable name:

df = pd.DataFrame({"a": [1, 2, 3], "b": list("abc")})

df = (df
      .drop(df.tail(1).index)
      #.other_methods
      #...
     )
df

Is it possible to do this without having assigned the dataframe to a variable name?

df = (pd.DataFrame({"a": [1, 2, 3], "b": list("abc")})
      .drop(??.tail(1).index)
      #.other_methods
      #...
     )
df

Thanks!
You need some reference to the dataframe in order to use it in multiple independent places. That means binding a reusable name to the value returned by pd.DataFrame. A "functional" way to create such a binding is to use a lambda expression instead of an assignment statement. df = (lambda df: df.drop(df.tail(1).index)....)(pd.DataFrame(...)) The lambda expression defines some function that uses whatever value is passed as an argument as the value of the name df; you then immediately call that function on your original dataframe.
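pandas also has a built-in way to express the same idea: DataFrame.pipe passes the frame itself to a callable, so the temporary name only lives inside the lambda. A minimal sketch with the frame from the question:

import pandas as pd

df = (
    pd.DataFrame({"a": [1, 2, 3], "b": list("abc")})
    .pipe(lambda d: d.drop(d.tail(1).index))
    # .other_methods
)
print(df)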
5
5
70,608,253
2022-1-6
https://stackoverflow.com/questions/70608253/why-does-mypy-fail-with-incompatible-type-in-enum-classmethod
In my Enum, i have defined a classmethod for coercing a given value to an Enum member. The given value may already be an instance of the Enum, or it may be a string holding an Enum value. In order to decide whether it needs conversion, i check if the argument is an instance of the class, and only pass it on to int() if it is not. At which point – according to the type hints for the argument 'item' – it must be a string. The class look like this: T = TypeVar('T', bound='MyEnum') class MyEnum(Enum): A = 0 B = 1 @classmethod def coerce(cls: Type[T], item: Union[int, T]) -> T: return item if isinstance(item, cls) else cls(int(item)) mypy fails with: error: Argument 1 to "int" has incompatible type "Union[str, T]"; expected "Union[str, bytes, SupportsInt, SupportsIndex, _SupportsTrunc]" Why?
It's because int cannot be constructed from Enum. From documentation of int(x) If x is not a number or if base is given, then x must be a string, bytes, or bytearray instance representing an integer literal in radix base So to make it work you can make your Enum inherit from str class MyEnum(str, Enum): A = 0 B = 1 Or use IntEnum class MyEnum(IntEnum): A = 0 B = 1 Edit: The question why isinstance doesn't narrow T type enough still left. Couldn't find any prove of my theory, but my (hopefully logical) explanation is - T is bounded to MyEnum so it's MyEnum and any subclass of it. If we have subclass scenario, isinstance(item, MyEnumSubclass) won't exclude case when item is just MyEnum class. If we use MyEnum directly in typing, problem will disappear. @classmethod def coerce(cls: t.Type["MyEnum"], item: t.Union[int, T]) -> "MyEnum": return item if isinstance(item, cls) else cls(int(item))
6
6
70,603,144
2022-1-6
https://stackoverflow.com/questions/70603144/how-to-read-a-file-in-julia-like-python
Why is there a difference between these?: # Python f = open("./text.txt", "r") for i in f.readlines(): for l in i: print(print(l == "\n", ":", l)) f.close() # ----------------------------- # Julia f = open("./text.txt", "r") while !eof(f) for l in readline(file) println(l == '\n', " : ", l) end end close(f) The Python one outputs this: False : h False : e False : l False : l False : o False : False : W False : o False : r False : l False : d True : <--- yep, it is as expected False : y False : a False : y The Julia one outputs this: false : h false : e false : l false : l false : o false : <--- is this not a \n?? false : W false : o false : r false : l false : d false : y false : a false : y text.txt is this: hello World yay As you can see the outputs are different. How can I make the Julia one behaves like the Python one? Are there other ways of reading a file in Julia?
You can do this:

# Julia
f = open("./text.txt", "r")
while !eof(f)
    for l in readline(f, keep=true)
        println(l == '\n', " : ", l)
    end
end
close(f)

By default, readline discards the \n's, but you can keep them by adding keep=true.
5
7
70,602,796
2022-1-6
https://stackoverflow.com/questions/70602796/pytorch-gpu-memory-keeps-increasing-with-every-batch
I'm training a CNN model on images. Initially, I was training on image patches of size (256, 256) and everything was fine. Then I changed my dataloader to load full HD images (1080, 1920) and I was cropping the images after some processing. In this case, the GPU memory keeps increasing with every batch. Why is this happening? PS: While tracking losses, I'm doing loss.detach().item() so that loss is not retained in the graph.
As suggested here, deleting the input, output and loss data helped. Additionally, I had the data as a dictionary. Just deleting the dictionary isn't sufficient. I had to iterate over the dict elements and delete all of them.
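A minimal sketch of what that cleanup can look like at the end of each iteration; the names (batch, output, loss, and so on) are placeholders for whatever your loop actually uses:

for batch in dataloader:
    output = model(batch['image'].to(device))
    loss = criterion(output, batch['target'].to(device))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    running_loss += loss.detach().item()

    # drop references so the graph and the full-HD tensors can be freed
    del output, loss
    for key in list(batch.keys()):
        del batch[key]
    del batch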
7
5
70,601,601
2022-1-6
https://stackoverflow.com/questions/70601601/how-can-i-use-a-value-in-a-dataframe-to-look-up-an-attribute
Say I have the 2 Dataframes below; one with a list of students and test scores, and different student sessions that made up of the students. Say I want to add a new column, "Sum", to df with the sum of the scores for each session and a new column for the number of years passed since the most recent year that either student took the test, "Years Elapsed". What is the best way to accomplish this? I can make the students a class and make each student an object but then I am stuck on how to link the object to their name in the dataframe. data1 = {'Student': ['John','Kim','Adam','Sonia'], 'Score': [92,100,76,82], 'Year': [2015,2013,2016,2018]} df_students = pd.DataFrame(data1, columns=['Student','Score','Year']) data2 = {'Session': [1,2,3,4], 'Student1': ['Sonia','Kim','John','Adam'], 'Student2': ['Adam','Sonia','Kim','John']} df = pd.DataFrame(data2, columns=['Session','Student1','Student2']) The desired outcome: outcome = {'Session': [1,2,3,4], 'Student1': ['Sonia','Kim','John','Adam'], 'Student2': ['Adam','Sonia','Kim','John'], 'Sum': [158, 182, 192, 168], 'Years Elapsed': [4,4,7,6]} df_outcome = pd.DataFrame(outcome, columns=['Session','Student1','Student2','Sum','Years Elasped']) I have made a class called Student and made each student an object but after this is where I am stuck. df_students.columns = df_students.columns.str.lower() class Student: def __init__(self, s, sc, yr): self.student = s self.score = sc self.year = yr students = [Student(row.student, row.score, row.year) for index, row in df_students.iterrows()] #check to see if list of objects was created correctly s1 = students[1] s1.__dict__ Thanks in advance!
Using apply method: import pandas as pd data1 = {'Student': ['John','Kim','Adam','Sonia'], 'Score': [92,100,76,82], 'Year': [2015,2013,2016,2018]} df_students = pd.DataFrame(data1, columns=['Student','Score','Year']) data2 = {'Session': [1,2,3,4], 'Student1': ['Sonia','Kim','John','Adam'], 'Student2': ['Adam','Sonia','Kim','John']} df = pd.DataFrame(data2, columns=['Session','Student1','Student2']) # SOLUTION def sum_scores(student1, student2): _score_s1 = df_students.loc[(df_students['Student']==student1)]['Score'].values[0] _score_s2 = df_students.loc[(df_students['Student']==student2)]['Score'].values[0] return _score_s1 + _score_s2 def years_elapsed(student1, student2): _year = pd.to_datetime("today").year _year_s1 = df_students.loc[(df_students['Student']==student1)]['Year'].values[0] _year_s2 = df_students.loc[(df_students['Student']==student2)]['Year'].values[0] return _year - max(_year_s1, _year_s2) df['sum_score'] = df.apply(lambda row: sum_scores(row['Student1'], row['Student2']), axis=1) df['years_elapsed'] = df.apply(lambda row: years_elapsed(row['Student1'], row['Student2']), axis=1) df
5
1
70,588,461
2022-1-5
https://stackoverflow.com/questions/70588461/simpler-way-to-return-functions-in-python
so - I have built a bit of a rules engine in python - but I'm fairly new to python... my engine is fairly nice to use - but adding a new rule is pretty ugly, and I'm wondering if there's a way to clean it up. The key thing to remember is that rules have side-effects, rules can be combined with ands, ors, etc - and you only apply the side effects if the whole rule succeeded - ie the check if the rule succeeded can't be combined with perfoming the side effect. So every rule ends up looking something like this: def sample_rule(): def check( item ): if item.doesnt_pass_some_condition(): return None def action_to_perform(): item.set_some_value() item.set_some_other_value() return action_to_perform return Rule(check) which seems horribly ugly - but you don't seem to be able to do multiline lambdas or zero line lambas... I guess I'm looking for something like: def sample_rule(): return Rule( lambda x: x.passes_condition(), lambda x: {x.set_some_value(), x.set_some_other_value)} but both the condition and the side effect could be multiple lines, and the side effect is often empty. so is there a simpler pattern that i can apply that will apply to every case? (I really don't want to use the above pattern when I have exactly one line of condition and one line of side effect, and a completely different pattern in the other cases) just out of interest, at the end you end up with something like rule1 = sample_rule().andalso( other_rule_1().or(other_rule_2)).butnot( other_rule_3) ... ... for thing_to_check in lots_of_things: for rule in lots_of_rules: if rule.apply_to( thing_to_check): break # take the first rule that applies
Instead of defining multi-line lambdas (which python doesn't allow), you could define multiple lambdas in a list and then use all lambdas in the list as required: class Rule: def __init__(self, checks=None, actions=None): self.checks = checks if checks else [] self.actions = actions if actions else [] def apply_to(self, item): if all([check(item) for check in self.checks]): return self.actions else: return None sample_rule = Rule(checks=[lambda x: x.passes_condition()], actions=[lambda x: x.set_some_value(), lambda x: x.set_some_other_value()]) # simpler rule when no actions/side effects are needed simple_rule = Rule(checks=[lambda x: x.passes_condition()]) Giving checks and actions a default value of None in the class also means you don't have to define them if there are no checks/actions for a given rule; further simplifying the definition of new rules.
5
1
70,598,062
2022-1-5
https://stackoverflow.com/questions/70598062/codility-oddoccurrencesinarray-problem-recursion-and-python
I am trying to use recursion to solve the OddOccurrencesInArray Problem in Codility, in which we are given an array with N elements, N is always odd all of the elements of the array except for one has a total even number of occurrences we need to write code that returns the one unpaired value For example, if the array given is [9, 3, 9, 3, 7, 9, 9], the code must return 7, because that is the only element in the array which is unpaired. My solution pseudocode/thought process was: sort the array if the first two elements are equal to each other, remove them and run the solution algorithm again recursively on the array minus the first two elements (after sorting) i.e. if the unpaired element is not found, we keep reducing the size of the array if the first two elements are NOT equal to each other, the first element of the array must be the unpaired item My implementation was: def solution(A): # write your code in Python 3.6 if len(A) > 1: A = sorted(A) if A[0] != A[1]: return A[0] else: solution(A[2:]) else: return A[0] I keep getting the error message Invalid result type, int expected, <class 'NoneType'> found. RUNTIME ERROR (tested program terminated with exit code 1) Can anyone help me figure out what this means and how I can correct it? Algorithmically, I think my solution is sound, and I don't understand why it isn't returning the integer values as I specified.
I would suggest a different approach altogether. A recursive approach is not incorrect, however repeated calls to sorted are highly inefficient, especially if the input is significantly large.

def solve(t):
  s = set()
  for v in t:
    s.add(v) if v not in s else s.remove(v)
  return list(s)

input = [9, 3, 9, 3, 7, 9, 9]

solve(input)

We can visualize s over the course of the evaluation -

{}      # <- initial s
{9}     # <- 9 is added
{9,3}   # <- 3 is added
{3}     # <- 9 is removed
{}      # <- 3 is removed
{7}     # <- 7 is added
{7,9}   # <- 9 is added
{7}     # <- 9 is removed

Finally, list(s) is returned, converting {7} to [7]. To output the answer we can write a simple if/elif/else -

unpaired = solve(input)

if (len(unpaired) < 1):
  print("there are no unpaired elements")
elif (len(unpaired) > 1):
  print("there is more than one unpaired element")
else:
  print("answer:", unpaired[0])

Another option is to have solve return the first unpaired element or None -

def solve(t):
  s = set()
  for v in t:
    s.add(v) if v not in s else s.remove(v)
  for v in s:
    return v  # <- immediately return first element

answer = solve(input)

if answer is None:
  print("no solution")
else:
  print("the solution is", answer)
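Since every value except one appears an even number of times, another classic approach for this particular Codility task is to XOR all elements together, because each paired value cancels itself out. This is an alternative to the set approach above rather than part of it:

from functools import reduce
from operator import xor

def solution(A):
    return reduce(xor, A)

print(solution([9, 3, 9, 3, 7, 9, 9]))  # 7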
4
12
70,596,809
2022-1-5
https://stackoverflow.com/questions/70596809/can-a-class-attribute-shadow-a-built-in-in-python
If have some code like this: class Foo(): def open(self, bar): # Doing some fancy stuff here, i.e. opening "bar" pass When I run flake8 with the flake8-builtins plug-in I get the error A003 class attribute "open" is shadowing a python builtin I don't understand how the method could possibly shadow the built-in open-function, because the method can only be called using an instance (i.e. self.open("") or someFoo.open("")). Is there some other way code expecting to call the built-in ends up calling the method? Or is this a false positive of the flake8-builtins plug-in?
Not really a practical case, but your code would fail if you wanted to use the built-in function at class level after your shadowing method has been defined:

class Foo:
    def open(self, bar):
        pass

    with open('myfile.txt'):
        print('did I get here?')

>>> TypeError: open() missing 1 required positional argument: 'bar'

The same would also be true with other built-in functions, such as print:

class Foo:
    def print(self, bar):
        pass

    print('did I get here?')

>>> TypeError: print() missing 1 required positional argument: 'bar'
6
6
70,595,450
2022-1-5
https://stackoverflow.com/questions/70595450/cant-install-numba-on-python-3-10
Python 3.10 on Mac running OS 11.6.1 I uninstalled Python 3.9 from my machine and upgraded to version 3.10. No problems installing standard packages such as pandas, scipy, etc. However one package, epycom, requires numba. When I enter pip3 install numba, I receive the lengthy error message below with the key phrase FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config' Wondering if I should uninstall 3.10 and go back to 3.9? Collecting numba Using cached numba-0.51.2.tar.gz (2.1 MB) Preparing metadata (setup.py) ... done Collecting llvmlite<0.35,>=0.34.0.dev0 Using cached llvmlite-0.34.0.tar.gz (107 kB) Preparing metadata (setup.py) ... done Requirement already satisfied: numpy>=1.15 in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from numba) (1.22.0) Requirement already satisfied: setuptools in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from numba) (58.1.0) Using legacy 'setup.py install' for numba, since package 'wheel' is not installed. Using legacy 'setup.py install' for llvmlite, since package 'wheel' is not installed. Installing collected packages: llvmlite, numba Running setup.py install for llvmlite ... error ERROR: Command errored out with exit status 1: command: /Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/setup.py'"'"'; __file__='"'"'/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-record-6u_7985j/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Frameworks/Python.framework/Versions/3.10/include/python3.10/llvmlite cwd: /private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/ Complete output (29 lines): running install running build got version from file /private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/llvmlite/_version.py {'version': '0.34.0', 'full': 'c5889c9e98c6b19d5d85ebdd982d64a03931f8e2'} running build_ext /Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 /private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/ffi/build.py LLVM version... 
Traceback (most recent call last): File "/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/ffi/build.py", line 105, in main_posix out = subprocess.check_output([llvm_config, '--version']) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 420, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 501, in run with Popen(*popenargs, **kwargs) as process: File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 966, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1842, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/ffi/build.py", line 191, in <module> main() File "/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/ffi/build.py", line 185, in main main_posix('osx', '.dylib') File "/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/ffi/build.py", line 107, in main_posix raise RuntimeError("%s failed executing, please point LLVM_CONFIG " RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config error: command '/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10' failed with exit code 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/setup.py'"'"'; __file__='"'"'/private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-install-42hw6q4a/llvmlite_a0abee749e71467a998628e47a3a1a24/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/pip-record-6u_7985j/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Frameworks/Python.framework/Versions/3.10/include/python3.10/llvmlite Check the logs for full command output. fishbacp@fishbacpK0ML85 ~ % pip3 install llvm ERROR: Could not find a version that satisfies the requirement llvm (from versions: none) ERROR: No matching distribution found for llvm
Based on the historical issues submitted on GitHub, numba is slow to adopt new Python versions; my guess would be that it currently does not support Python 3.10. Reference: https://github.com/numba/llvmlite/issues/621 https://github.com/numba/llvmlite/issues/531
6
8
70,588,917
2022-1-5
https://stackoverflow.com/questions/70588917/django-migrations-calculate-new-fields-value-based-on-old-fields-before-deletin
We are intending to rework one of our models from old start-end date values to using a starting date and a length. However, this does pose a challenge in that we want to give default values to our new fields. In this case, is it possible to run a migration where we create the new fields and give them values based on the model's old start-end fields?

from datetime import date as Date
from django.db import models

class Period(models.Model):
    # These are our new fields to replace the old fields
    period_length = models.IntegerField(default=12)
    starting_date = models.DateField(default=Date.today)

    # Old fields. We want to calculate the period length based on these before we remove them
    start_day = models.IntegerField(default=1)
    start_month = models.IntegerField(default=1)
    end_day = models.IntegerField(default=31)
    end_month = models.IntegerField(default=12)

Starting months and ending months can be anywhere between 1 and 12, so we need to run a bunch of calculations to get the correct length. Is there a way for us to run a function in migrations that, after adding the new fields, calculates their new values before calling for removal of the old fields? I do know I can create basic add/remove field migrations with makemigrations, but I want to add the value calculations in between. The other option I have considered is to first run a migration to add the fields, then a custom command to calculate the fields, and then a second migration that deletes the old fields, but this feels like it has a greater chance of breaking something.
What I would do is create a custom migration and define the following series of operations there: Add length field. Update length field with calculations. Remove old field. So you can create a custom migration with: python manage.py makemigrations --name migration_name app_name --empty And then define there the series of operations you need: operations = [ migrations.AddField (... your length field...), migrations.RunPython (... the name of your function to compute and store length field ...), migrations.RemoveField (... your end_date field ...), ] Edit: Your migration should be something as below (update_length_field would be your function, with the same parameters): class Migration(migrations.Migration): dependencies = [ ('app_name', 'your_previous_migration'), ] def update_length_field(apps, schema_editor): for period in Period.objects.all(): period.length = ... whatever calculations you need ... period.save() operations = [ migrations.AddField (... your length field...), migrations.RunPython(update_length_field), migrations.RemoveField (... your end_date field ...), ] At a basic level, it would be like this. Now, if you want the migration to be able to be rolled back, you will have to define a second function that does exactly the opposite of what update_length_field does. And put it as the second parameter of migrations.RunPython. Also, if you want the migration to be compatible with future changes to the model (this is not necessary if the migration is to be deployed only once), you must take the model from the historical version of the code, something like: def update_length_field(apps, schema_editor): Period = apps.get_model("app_name", "Period") for period in Period.objects.all(): period.length = ... period.save() More information here: https://docs.djangoproject.com/en/4.0/ref/migration-operations/
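If there is nothing meaningful to undo for the data step itself (the old columns keep their values until RemoveField runs), Django's built-in no-op callable keeps the RunPython operation reversible without writing a real inverse. A small sketch, with the placeholder AddField/RemoveField arguments kept from the answer above:

operations = [
    migrations.AddField(... your length field ...),
    migrations.RunPython(update_length_field, migrations.RunPython.noop),
    migrations.RemoveField(... your end_date field ...),
]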
5
10
70,591,591
2022-1-5
https://stackoverflow.com/questions/70591591/how-to-make-a-character-jump-in-pygame
I want to make my character jump. In my current attempt, the player moves up as long as I hold down SPACEv and falls down when I release SPACE. import pygame pygame.init() window = pygame.display.set_mode((300, 300)) clock = pygame.time.Clock() rect = pygame.Rect(135, 220, 30, 30) vel = 5 run = True while run: clock.tick(100) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False keys = pygame.key.get_pressed() rect.centerx = (rect.centerx + (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * vel) % 300 if keys[pygame.K_SPACE]: rect.y -= 1 elif rect.y < 220: rect.y += 1 window.fill((0, 0, 64)) pygame.draw.rect(window, (64, 64, 64), (0, 250, 300, 100)) pygame.draw.circle(window, (255, 0, 0), rect.center, 15) pygame.display.flip() pygame.quit() exit() However, I want the character to jump if I hit the SPACE once. I want a smooth jump animation to start when SPACE is pressed once. How would I go about this step by step?
To make a character jump you have to use the KEYDOWN event, but not pygame.key.get_pressed(). pygame.key.get_pressed () is for continuous movement when a key is held down. The keyboard events are used to trigger a single action or to start an animation such as a jump. See alos How to get keyboard input in pygame? pygame.key.get_pressed() returns a sequence with the state of each key. If a key is held down, the state for the key is True, otherwise False. Use pygame.key.get_pressed() to evaluate the current state of a button and get continuous movement. while True: for event in pygame.event.get(): if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE: jump = True Use pygame.time.Clock ("This method should be called once per frame.") you control the frames per second and thus the game speed and the duration of the jump. clock = pygame.time.Clock() while True: clock.tick(100) The jumping should be independent of the player's movement or the general flow of control of the game. Therefore, the jump animation in the application loop must be executed in parallel to the running game. When you throw a ball or something jumps, the object makes a parabolic curve. The object gains height quickly at the beginning, but this slows down until the object begins to fall faster and faster again. The change in height of a jumping object can be described with the following sequence: [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10] Such a series can be generated with the following algorithm (y is the y coordinate of the object): jumpMax = 10 if jump: y -= jumpCount if jumpCount > -jumpMax: jumpCount -= 1 else: jump = False A more sophisticated approach is to define constants for the gravity and player's acceleration as the player jumps: acceleration = 10 gravity = 0.5 The acceleration exerted on the player in each frame is the gravity constant, if the player jumps then the acceleration changes to the "jump" acceleration for a single frame: acc_y = gravity for event in pygame.event.get(): if event.type == pygame.KEYDOWN: if vel_y == 0 and event.key == pygame.K_SPACE: acc_y = -acceleration In each frame the vertical velocity is changed depending on the acceleration and the y-coordinate is changed depending on the velocity. 
When the player touches the ground, the vertical movement will stop: vel_y += acc_y y += vel_y if y > ground_y: y = ground_y vel_y = 0 acc_y = 0 See also Jump Example 1: replit.com/@Rabbid76/PyGame-Jump import pygame pygame.init() window = pygame.display.set_mode((300, 300)) clock = pygame.time.Clock() rect = pygame.Rect(135, 220, 30, 30) vel = 5 jump = False jumpCount = 0 jumpMax = 15 run = True while run: clock.tick(50) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.KEYDOWN: if not jump and event.key == pygame.K_SPACE: jump = True jumpCount = jumpMax keys = pygame.key.get_pressed() rect.centerx = (rect.centerx + (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * vel) % 300 if jump: rect.y -= jumpCount if jumpCount > -jumpMax: jumpCount -= 1 else: jump = False window.fill((0, 0, 64)) pygame.draw.rect(window, (64, 64, 64), (0, 250, 300, 100)) pygame.draw.circle(window, (255, 0, 0), rect.center, 15) pygame.display.flip() pygame.quit() exit() Example 2: replit.com/@Rabbid76/PyGame-JumpAcceleration import pygame pygame.init() window = pygame.display.set_mode((300, 300)) clock = pygame.time.Clock() player = pygame.sprite.Sprite() player.image = pygame.Surface((30, 30), pygame.SRCALPHA) pygame.draw.circle(player.image, (255, 0, 0), (15, 15), 15) player.rect = player.image.get_rect(center = (150, 235)) all_sprites = pygame.sprite.Group([player]) y, vel_y = player.rect.bottom, 0 vel = 5 ground_y = 250 acceleration = 10 gravity = 0.5 run = True while run: clock.tick(100) acc_y = gravity for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.KEYDOWN: if vel_y == 0 and event.key == pygame.K_SPACE: acc_y = -acceleration keys = pygame.key.get_pressed() player.rect.centerx = (player.rect.centerx + (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * vel) % 300 vel_y += acc_y y += vel_y if y > ground_y: y = ground_y vel_y = 0 acc_y = 0 player.rect.bottom = round(y) window.fill((0, 0, 64)) pygame.draw.rect(window, (64, 64, 64), (0, 250, 300, 100)) all_sprites.draw(window) pygame.display.flip() pygame.quit() exit()
5
10
70,587,271
2022-1-5
https://stackoverflow.com/questions/70587271/is-there-a-pythonic-way-of-filtering-substrings-of-strings-in-a-list
I have a list with strings as below. candidates = ["Hello", "World", "HelloWorld", "Foo", "bar", "ar"] And I want the list to be filtered as ["HelloWorld", "Foo", "Bar"], because others are substrings. I can do it like this, but don't think it's fast or elegant. def filter_not_substring(candidates): survive = [] for a in candidates: for b in candidates: if a == b: continue if a in b: break else: survive.append(a) return survive Is there any fast way to do it?
How about: candidates = ["Hello", "World", "HelloWorld", "Foo", "bar", "ar"] result = [c for c in candidates if not any(c in o and len(o) > len(c) for o in candidates)] print(result) Counter to what was suggested in the comments: from timeit import timeit def filter_not_substring(candidates): survive = [] for a in candidates: for b in candidates: if a == b: continue if a in b: break else: survive.append(a) return survive def filter_not_substring2a(candidates): return [c for c in candidates if not any(len(o) > len(c) and c in o for o in candidates)] def filter_not_substring2b(candidates): return [c for c in candidates if not any(c in o and len(o) > len(c) for o in candidates)] xs = ["Hello", "World", "HelloWorld", "Foo", "bar", "ar", "bar"] print(filter_not_substring(xs), filter_not_substring2a(xs), filter_not_substring2b(xs)) print(timeit(lambda: filter_not_substring(xs))) print(timeit(lambda: filter_not_substring2a(xs))) print(timeit(lambda: filter_not_substring2b(xs))) Result: ['HelloWorld', 'Foo', 'bar', 'bar'] ['HelloWorld', 'Foo', 'bar', 'bar'] ['HelloWorld', 'Foo', 'bar', 'bar'] 1.5163685 4.6516653 3.8334089999999996 So, OP's solution is substantially faster, but filter_not_substring2b is still about 20% faster than 2a. So, putting the len comparison first doesn't save time. For any production scenario, OP's function is probably optimal - a way to speed it up might be to bring the whole problem into C, but I doubt that would show great gains, since the logic is pretty straightforward already and I'd expect Python to do a fairly good job of it as well. User @ming noted that OP's solution can be improved a bit: def filter_not_substring_b(candidates): survive = [] for a in candidates: for b in candidates: if a in b and a != b: break else: survive.append(a) return survive This version of the function is somewhat faster, for me about 10-15% Finally, note that this is only just faster than 2b, even though it is very similar to the optimised solution by @ming, but almost 3x slower than their solution. It's unclear to me why that would be - if anyone has fairly certain thoughts on that, please share in the comments: def filter_not_substring_c(candidates): return [a for a in candidates if all(a not in b or a == b for b in candidates)]
6
7
70,585,611
2022-1-4
https://stackoverflow.com/questions/70585611/how-to-add-python-and-pip-or-conda-packages-to-ddev
I need to execute a Python script inside the DDEV web Docker image, but I am having trouble figuring out which Debian Python packages are required to get a Python binary, together with additional Python package dependencies, working.
Most of this is obsolete, because from DDEV v1.23.0 you can't easily get python 2 on DDEV at all, since it's been dropped from upstream. However, see @stasadev answer below for a great add-on that solves this. ddev add-on get stasadev/ddev-python2, see https://github.com/stasadev/ddev-python2. Python 2 on Ddev You really don’t want to be using Python 2 do you? (See caveats 1 & 2 below) Add the following in .ddev/config.yml: webimage_extra_packages: [python] If your Python 2 scripts need additional package dependencies installed via pip, you'll need to instead use a custom Dockerfile: ARG BASE_IMAGE FROM $BASE_IMAGE RUN apt-key adv --refresh-keys --keyserver keyserver.ubuntu.com \ && apt update RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y -o Dpkg::Options::="--force-confold" --no-install-recommends --no-install-suggests python python-pip RUN pip install somepackage anotherpackage Note: a custom Dockerfile will override webimage_extra_packages configurations in .ddev/config.yaml. Caveat 1: As of 2022, Ddev web image runs Debian 11 and sudo apt-get python still installs Python 2. This may change in future versions of Debian, so be careful when you upgrade Ddev. Caveat 2: Python 2 has reached its End Of Life and is unsupported. Additionally, important package manager pip is no longer able to natively install (without workarounds) on latest Python 2, so you're probably better off upgrading your scripts to Python 3 using the 2to3 utility. Python 3 on Ddev Use the following Ddev configuration to install Python 3 into /usr/bin/python along with most of any additional package dependencies for your py scripts. webimage_extra_packages: [python3, python-is-python3] Note that by default, Python 3 is installed to /usr/bin/python3 so add the python-is-python3 package to make python execute Python 3. You can also usually work around needing to install the python3-pip package, because most Python 3 packages are already bundled for Debian. Therefore, additional Python 3 package dependencies can be added by comma-separated name to webimage_extra_packages. See list of stable Python packages for Debian here. If your dependencies are not bundled and you need to use pip, Conda, or another python package manager, then you must implement a custom Dockerfile at .ddev/web-build/Dockerfile like this: ARG BASE_IMAGE FROM $BASE_IMAGE RUN apt-key adv --refresh-keys --keyserver keyserver.ubuntu.com \ && apt update RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y -o Dpkg::Options::="--force-confold" --no-install-recommends --no-install-suggests python3 python-is-python3 python3-pip RUN pip3 install somepackage anotherpackage Note: a custom Dockerfile will override webimage_extra_packages configurations in .ddev/config.yaml.
9
9
70,584,730
2022-1-4
https://stackoverflow.com/questions/70584730/how-to-use-a-reserved-keyword-in-pydantic-model
I need to create a schema, but it has a column called global, and when I try to write this, I get an error.

class User(BaseModel):
    id: int
    global: bool

I tried to use another name, but that gives another error when I try to save it in the db.
It looks like you are using a pydantic module. You can't use the name global because it's a reserved keyword so you need to use this trick to convert it. pydantic v1: class User(BaseModel): id: int global_: bool class Config: fields = { 'global_': 'global' } or pydantic v1 & v2: class User(BaseModel): id: int global_: bool = Field(..., alias='global') To create a class you have to use a dictionary (because User(id=1, global=False) also throws an error): user = User(id=1, global=False) > Traceback (most recent call last): > (...) > File "<input>", line 1 > User(id=1, global=False) > ^^^^^^ > SyntaxError: invalid syntax user = User(**{'id': 1, 'global': False}) Set allow_population_by_field_name = True or populate_by_name=True in config to allow creating models using both global and global_ names (thanks @GooDeeJAY). pydantic v1: class User(BaseModel): id: int global_: bool = Field(..., alias='global') class Config: allow_population_by_field_name = True pydantic v2: class User(BaseModel): id: int global_: bool = Field(..., alias='global') model_config = ConfigDict(populate_by_name=True) user1 = User(**{'id': 1, 'global': False}) user2 = User(id=1, global_=False) assert user1 == user2 By default schema dump will not use aliased fields: user.dict() # for pydantic v1 user.model_dump() # for pydantic v2 > {'id': 1, 'global_': False} To get data in the correct schema use by_alias: user.dict(by_alias=True) # for pydantic v1 user.model_dump(by_alias=True) # for pydantic v2 > {'id': 1, 'global': False}
11
34
70,565,965
2022-1-3
https://stackoverflow.com/questions/70565965/error-failed-building-wheel-for-numpy-error-could-not-build-wheels-for-numpy
I'm using Python Poetry (https://python-poetry.org/) for dependency management in my project, but when I run poetry install, it gives me the error below.

ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects

I have Python 3.9 installed on my laptop. I installed numpy 1.21.5 using pip install numpy, and I even tried downgrading it to 1.19.5, but I get the same error. I found out that many people get ERROR: Failed building wheel for numpy on Python 3.10 and solved it by downgrading Python to 3.9, but that didn't work for me.
I solved it with the following steps: I updated pyproject.toml (this file contains all the libraries/dependencies/dev dependencies) with the numpy version that I installed using the pip install numpy command. Run poetry lock to update the poetry.lock file (it contains detailed information about the libraries). Run poetry install again, and it should work fine. In a nutshell, you just have to install the correct version of numpy. Click me to check the compatibility. Then install the required version using pip install numpy==version. Example: to install NumPy version 1.23.5, use the following: pip install numpy==1.23.5. If you have any problems, you can comment and I'll try to answer.
54
14
70,515,542
2021-12-29
https://stackoverflow.com/questions/70515542/adding-comma-to-bar-labels
I have been using the ax.bar_label method to add data values to bar graphs. The numbers are huge, such as 143858918. How can I add commas to the data values using the ax.bar_label method? I do know how to add commas using the annotate method, but I am not sure whether it is possible using bar_label. Is it possible using the fmt keyword argument that is available?
Is it possible using the fmt keyword argument of ax.bar_label? Yes, but only in matplotlib 3.7+. Prior to 3.7, fmt only accepted % formatters (no comma support), so labels was needed to f-format the container's datavalues. If matplotlib ≥ 3.7, use fmt: for c in ax.containers: ax.bar_label(c, fmt='{:,.0f}') # ≥ 3.7 # ^no f here (not an actual f-string) If matplotlib < 3.7, use labels: for c in ax.containers: ax.bar_label(c, labels=[f'{x:,.0f}' for x in c.datavalues]) # < 3.7 Toy example: fig, ax = plt.subplots() ax.bar(['foo', 'bar', 'baz'], [3200, 9025, 800]) # ≥ v3.7 for c in ax.containers: ax.bar_label(c, fmt='{:,.0f}') # < v3.7 for c in ax.containers: ax.bar_label(c, labels=[f'{x:,.0f}' for x in c.datavalues])
6
14
70,541,710
2021-12-31
https://stackoverflow.com/questions/70541710/pandas-df-to-stata-dataframe-object-has-no-attribute-dtype
Until now the pandas function df.to_stata() worked just fine with my datasets. I am trying to export a dataframe that includes 29,778 rows and 37 columns to a Stata file using the following code: df.to_stata("Stata_File.dta", write_index=False, version=118) However, I receive the following error message: AttributeError: 'DataFrame' object has no attribute 'dtype' I would really appreciate any help on how to fix this.
This error can arise when you have multiple columns with the same name in your dataframe: selecting a duplicated column by name returns a DataFrame rather than a Series, which has no .dtype attribute.
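A quick sketch of how to confirm and work around this (it assumes simply dropping the later duplicates is acceptable for your data):
# list any duplicated column names
print(df.columns[df.columns.duplicated()])

# keep only the first occurrence of each duplicated column, then export again
df = df.loc[:, ~df.columns.duplicated()]
df.to_stata("Stata_File.dta", write_index=False, version=118)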
8
5
70,583,166
2022-1-4
https://stackoverflow.com/questions/70583166/how-do-i-write-an-efficient-pair-matching-algorithm
I need help with an algorithm that efficiently groups people into pairs, and ensures that previous pairs are not repeated. For example, say we have 10 candidates; candidates = [0,1,2,3,4,5,6,7,8,9] And say we have a dictionary of previous matches such that each key-value pair i.e. candidate:matches represents a candidate and an array of candidates that they have been paired with so far; prev_matches = {0: [6, 5, 1, 2], 1: [4, 9, 0, 7], 2: [9, 8, 6, 0], 3: [5, 4, 8, 9], 4: [1, 3, 9, 6], 5: [3, 0, 7, 8], 6: [0, 7, 2, 4], 7: [8, 6, 5, 1], 8: [7, 2, 3, 5], 9: [2, 1, 4, 3]} So for Candidate 0, they were first paired with Candidate 6, and in the subsequent pairing rounds, they were paired with Candidate 5, Candidate 1, and Candidate 2. The same follows for the other key-value pairs in the dictionary. There have already been four rounds of matches, as indicated by the length of all the matches in prev_matches. How do I script an algorithm that creates a fifth, sixth...nth(up to numberOfCandidates - 1) round of matches such that candidates do not have duplicate pairs? So Candidate 0 can no longer be paired with Candidate 6, Candidate 5, Candidate 1, and Candidate 2. And after a possible fifth round of matches, we could have our prev_matches as such: prev_matches: {0: [6, 5, 1, 2, 3], 1: [4, 9, 0, 7, 2], 2: [9, 8, 6, 0, 1], 3: [5, 4, 8, 9, 0], 4: [1, 3, 9, 6, 7], 5: [3, 0, 7, 8, 9], 6: [0, 7, 2, 4, 8], 7: [8, 6, 5, 1, 4], 8: [7, 2, 3, 5, 8], 9: [2, 1, 4, 3, 5]}. Here is a naive solution I tried: def make_match(prev_matches): paired_candidates = set() for candidate, matches in prev_matches.items(): i = 0 while i < 10: if i != candidate and i not in matches and i not in paired_candidates and candidate not in paired_candidates: prev_matches[candidate].append(i) prev_matches[i].append(candidate) paired_candidates.add(candidate) paired_candidates.add(i) break i += 1 return prev_matches It worked for the fifth round and returned the following: prev_matches = {0: [6, 5, 1, 2, 3], 1: [4, 9, 0, 7, 2], 2: [9, 8, 6 0, 1], 3: [5, 4, 8, 9, 0], 4: [1, 3, 9, 6, 5], 5: [3, 0, 7, 8, 4], 6: [0, 7, 2, 4, 8], 7: [8, 6, 5, 1, 9], 8: [7, 2, 3, 5, 6], 9: [2, 1, 4, 3, 7]} For the sixth round however, it failed to work as some candidates (7 and 8) couldn't find valid pairs: prev_matches = {0: [6, 5, 1, 2, 3, 4], 1: [4, 9, 0, 7, 2, 3], 2: [9, 8, 6, 0, 1, 5], 3: [5, 4, 8, 9, 0, 1], 4: [1, 3, 9, 6, 5, 0], 5: [3, 0, 7, 8, 4, 2], 6: [0, 7, 2, 4, 8, 9], 7: [8, 6, 5, 1, 9], 8: [7, 2, 3, 5, 6], 9: [2, 1, 4, 3, 7, 6]} As such, it's neither a reliable nor acceptable solution. I'm considering treating it as a backtracking problem such that I'd explore all possible pairings across the rounds till I reach a wholly acceptable and valid solution after the nth round. But the concern here would be how to make it work efficiently. I'd appreciate any help I can get.
If you are in charge of the tournament from the beginning, then the simplest solution is to organise the pairings according to a round-robin tournament. If you have no control on the pairings of the first rounds, and must organise the following rounds, here is a solution using module networkx to compute a maximum matching in a graph: from networkx import Graph from networkx.algorithms.matching import max_weight_matching, is_perfect_matching def next_rounds(candidates, prev_matches): G = Graph() G.add_nodes_from(candidates) G.add_edges_from((u,v) for u,p in prev_matches.items() for v in candidates.difference(p).difference({u})) m = max_weight_matching(G) while is_perfect_matching(G, m): yield m G.remove_edges_from(m) m = max_weight_matching(G) for r in next_rounds({0,1,2,3,4,5,6,7,8,9}, {0: [6, 5, 1, 2], 1: [4, 9, 0, 7], 2: [9, 8, 6, 0], 3: [5, 4, 8, 9], 4: [1, 3, 9, 6], 5: [3, 0, 7, 8], 6: [0, 7, 2, 4], 7: [8, 6, 5, 1], 8: [7, 2, 3, 5], 9: [2, 1, 4, 3]}): print(r) Output: {(2, 7), (8, 1), (0, 9), (4, 5), (3, 6)} {(2, 4), (3, 7), (8, 0), (9, 5), (1, 6)} {(0, 7), (8, 4), (1, 5), (9, 6), (2, 3)} {(9, 7), (0, 4), (8, 6), (2, 5), (1, 3)} {(1, 2), (0, 3), (8, 9), (5, 6), (4, 7)}
5
3
70,558,558
2022-1-2
https://stackoverflow.com/questions/70558558/how-to-mask-environment-variables-created-in-github-when-running-a-workflow
I created a Github workflow that runs a python script with a cron schedule. On every run of the workflow an access_token is generated, which is required during the next run. To save the token the python script writes the token to the GITHUB_ENV file. In the next step, I use the hmanzur/[email protected] action to save the token to a Github secret. All works fine. My only problem is, that the token gets displayed in the logs of the second step as an environment variable. Here is a minimal version of the workflow file: name: Tests on: schedule: - cron: "0 1 * * *" jobs: test: runs-on: ubuntu-latest strategy: matrix: python: ['3.9'] steps: - uses: actions/checkout@v1 - uses: actions/setup-python@v1 with: python-version: ${{ matrix.python }} - name: Install dependencies run: pip install -r requirements.txt - name: Run tests working-directory: ./src run: python -m unittest env: ACCESS_TOKEN: ${{secrets.ACCESS_TOKEN}} - uses: hmanzur/[email protected] with: name: 'ACCESS_TOKEN' value: ${{env.ACCESS_TOKEN}} repository: Me/MyRepository token: ${{ secrets.REPO_ACCESS_TOKEN }} I tried applying ::add-mask::. Adding echo "ACCESS_TOKEN=::add-mask::$ACCESS_TOKEN" >> $GITHUB_ENV only added ::add-mask:: to the string. Is there a way of masking all environment variables in the GITHUB_ENV file I can apply in the first step? Can I apply the masking to the variable while writing to the GITHUB_ENV file in python? Or is there a way to disable the display of the environment variables during the workflow?
Your usage of "::add-mask::" is wrong (not your fault, I hate GHA doc). What you need to do is: echo "::add-mask::$ACCESS_TOKEN" echo "ACCESS_TOKEN=$ACCESS_TOKEN" >> $GITHUB_ENV
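If you want to do it from the Python script itself (as asked in the question), the same two steps can be emitted there. A sketch, assuming the token is already held in a variable named access_token:
import os

# ask the runner to mask the value in all subsequent log output
print(f"::add-mask::{access_token}")

# persist it for later steps via the GITHUB_ENV file
with open(os.environ["GITHUB_ENV"], "a") as env_file:
    env_file.write(f"ACCESS_TOKEN={access_token}\n")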
14
16
70,552,618
2022-1-1
https://stackoverflow.com/questions/70552618/vscode-fails-to-export-jupyter-notebook-to-html-jupyter-nbconvert-not-found
I keep getting the error message: Available subcommands: 1.0.0 Jupyter command `jupyter-nbconvert` not found. I've tried to reinstall nbconvert using pip, to no avail. I've also tried the tip from this thread, VSCode fails to export jupyter notebook to html, of running pip install jupyter in the VSCode terminal, but it shows "Requirement already satisfied". I've also tried to manually edit the Jupyter settings.json file to the following: "python.pythonPath": "C:\\Users\\XYZ\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\Scripts" I have Python 3.9 installed via the Windows Store. Any tips on why VSCode doesn't want to export the notebook?
Unsure exactly what fixed the issue, but here's a summary. Updated to Python 3.10. Installed pandoc and MiKTeX. Reinstalled nbconvert via PowerShell. Received a warning that the nbconvert script file is installed in a location not on Path; copied said location to System Properties - Environment Variables - Path. Restarted and installed all MiKTeX packages on the go. PDF export and HTML export seem to work as intended now.
13
3
70,524,028
2021-12-29
https://stackoverflow.com/questions/70524028/importerror-cannot-import-name-force-text-from-django-utils-encoding-usr
I get the error below when I add 'graphene_django' inside INSTALLED_APPS in the settings.py. After running python3 manage.py runserver graphene_django is installed successfully using pip install django graphene_django This is full error that I get: Watching for file changes with StatReloader Exception in thread django-main-thread: Traceback (most recent call last): File "/usr/local/Cellar/[email protected]/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/Cellar/[email protected]/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 115, in inner_run autoreload.raise_last_exception() File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception raise _exception[1] File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 381, in execute autoreload.check_errors(django.setup)() File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 91, in populate app_config = AppConfig.create(entry) File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 223, in create import_module(entry) File "/usr/local/Cellar/[email protected]/3.9.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/usr/local/lib/python3.9/site-packages/graphene_django/__init__.py", line 1, in <module> from .fields import DjangoConnectionField, DjangoListField File "/usr/local/lib/python3.9/site-packages/graphene_django/fields.py", line 18, in <module> from .utils import maybe_queryset File "/usr/local/lib/python3.9/site-packages/graphene_django/utils/__init__.py", line 2, in <module> from .utils import ( File "/usr/local/lib/python3.9/site-packages/graphene_django/utils/utils.py", line 6, in <module> from django.utils.encoding import force_text ImportError: cannot import name 'force_text' from 'django.utils.encoding' (/usr/local/lib/python3.9/site-packages/django/utils/encoding.py) Any idea on what's going wrong here?
force_text was removed in Django 4.0. You can add this code to the top of your settings.py: import django from django.utils.encoding import force_str django.utils.encoding.force_text = force_str
7
11
70,585,068
2022-1-4
https://stackoverflow.com/questions/70585068/how-do-i-get-libpq-to-be-found-by-ctypes-find-library
I am building a simple DB interface in Python (3.9.9) and I am using psycopg (3.0.7) to connect to my Postgres (14.1) database. Until recently, the development of this app took place on Linux, but now I am using macOS Monterey on an M1 Mac mini. This seems to be causing some troubles with ctypes, which psycopg uses extensively. The error I am getting is the following: ImportError: no pq wrapper available. Attempts made: - couldn't import psycopg 'c' implementation: No module named 'psycopg_c' - couldn't import psycopg 'binary' implementation: No module named 'psycopg_binary' - couldn't import psycopg 'python' implementation: libpq library not found Based on the source code of psycopg, this is an error of ctypes not being able to util.find_library libpq.dylib. Postgres is installed as Postgres.app, meaning that libpq.dylib's path is /Applications/Postgres.app/Contents/Versions/14/bin/lib I have tried adding this to PATH, but it did not work. I then created a symlink to the path in /usr/local/lib, but (unsurprisingly) it also did not work. I then did some digging and found this issue describing the same problem. I am not a big macOS expert, so I am unsure on how to interpret some of the points raised. Do I need to add the path to the shared cache? Also, I do not want to fork the Python repo and implement the dlopen() method as suggested, as it seems to lead to other problems. Anyhow, is there a solution to quickly bypass this problem? As an additional reference, the code producing the above error is just: import psycopg print(psycopg.__version__)
I had this problem, but the solution was suggested to me by this answer to a related question: try setting the environment variable DYLD_LIBRARY_PATH to the path you identified. NB, to get it working myself, I used the path /Applications/Postgres.app/Contents/Versions/latest/lib and had to install Python 3.9.
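A minimal sketch of checking this from Python, assuming ctypes reads DYLD_LIBRARY_PATH from os.environ at lookup time (setting the variable in the shell before launching Python is the more robust option):
import os

# path from Postgres.app; adjust to your installation
os.environ.setdefault(
    "DYLD_LIBRARY_PATH",
    "/Applications/Postgres.app/Contents/Versions/latest/lib",
)

import psycopg
print(psycopg.__version__)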
9
2
70,524,577
2021-12-29
https://stackoverflow.com/questions/70524577/how-can-i-create-a-script-to-switch-between-my-arm-conda-and-x86-conda
I am on an apple silicon M1 MacBook Pro. I would like to have a native ARM python environment, and an environment that runs on x86 architecture with rosetta 2. I have installed two mini forge distributions, both in the home directory: miniforge3 for the native ARM installation and miniforge3_x86_64 for the x86 installation.
So far, the best solution I've found is to start the terminal with Rosetta 2, then run a function I have saved in .zshrc to initialize the correct conda installation so that I can use the correct architecture for my needs depending on the situation. My current solution is the following function named x86: x86 () { conda deactivate # >>> conda initialize >>> # !! Contents within this block are managed by 'conda init' !! __conda_setup="$('/Users/$USERNAME/miniforge3_x86_64/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)" if [ $? -eq 0 ]; then eval "$__conda_setup" else if [ -f "/Users/$USERNAME/miniforge3_x86_64/etc/profile.d/conda.sh" ]; then . "/Users/$USERNAME/miniforge3_x86_64/etc/profile.d/conda.sh" else export PATH="/Users/$USERNAME/miniforge3_x86_64/bin:$PATH" fi fi unset __conda_setup # <<< conda initialize <<< export PATH="/Users/$USERNAME/miniforge3_x86_64/bin:$PATH" export PATH="/Users/$USERNAME/miniforge3_x86_64/condabin:$PATH" } I am still feeling this out. I may add some aliases within the function as well so things like pip do not conflict, but I hope that by prepending the x86 paths the correct packages will be referenced
4
6
70,565,357
2022-1-3
https://stackoverflow.com/questions/70565357/paramiko-authentication-fails-with-agreed-upon-rsa-sha2-512-pubkey-algorithm
I have a Python 3 application running on CentOS Linux 7.7 executing SSH commands against remote hosts. It works properly but today I encountered an odd error executing a command against a "new" remote server (server based on RHEL 6.10): encountered RSA key, expected OPENSSH key Executing the same command from the system shell (using the same private key of course) works perfectly fine. On the remote server I discovered in /var/log/secure that when SSH connection and commands are issued from the source server with Python (using Paramiko) sshd complains about unsupported public key algorithm: userauth_pubkey: unsupported public key algorithm: rsa-sha2-512 Note that target servers with higher RHEL/CentOS like 7.x don't encounter the issue. It seems like Paramiko picks/offers the wrong algorithm when negotiating with the remote server when on the contrary SSH shell performs the negotiation properly in the context of this "old" target server. How to get the Python program to work as expected? Python code import paramiko import logging ssh_user = "my_user" ssh_keypath = "/path/to/.ssh/my_key.rsa" server = "server.tld" ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh_client.connect(server,port=22,username=ssh_user, key_filename=ssh_keypath) # SSH command cmd = "echo TEST : $(hostname)" stdin, stdout, stderr = ssh_client.exec_command(cmd, get_pty=True) exit_code = stdout.channel.recv_exit_status() cmd_raw_output = stdout.readlines() out = "".join(cmd_raw_output) out_msg = out.strip() # Ouput (logger code omitted) logger.debug(out_msg) if ssh_client is not None: ssh_client.close() Shell command equivalent ssh -i /path/to/.ssh/my_key.rsa [email protected] "echo TEST : $(hostname)" Paramiko logs (DEBUG) DEB [YYYYmmdd-HH:MM:30.475] thr=1 paramiko.transport: starting thread (client mode): 0xf6054ac8 DEB [YYYYmmdd-HH:MM:30.476] thr=1 paramiko.transport: Local version/idstring: SSH-2.0-paramiko_2.9.1 DEB [YYYYmmdd-HH:MM:30.490] thr=1 paramiko.transport: Remote version/idstring: SSH-2.0-OpenSSH_5.3 INF [YYYYmmdd-HH:MM:30.490] thr=1 paramiko.transport: Connected (version 2.0, client OpenSSH_5.3) DEB [YYYYmmdd-HH:MM:30.498] thr=1 paramiko.transport: === Key exchange possibilities === DEB [YYYYmmdd-HH:MM:30.498] thr=1 paramiko.transport: kex algos: diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1, diffie-hellman-group14-sha1, diffie-hellman-group1-sha1 DEB [YYYYmmdd-HH:MM:30.498] thr=1 paramiko.transport: server key: ssh-rsa, ssh-dss DEB [YYYYmmdd-HH:MM:30.498] thr=1 paramiko.transport: client encrypt: aes128-ctr, aes192-ctr, aes256-ctr, arcfour256, arcfour128, aes128-cbc, 3des-cbc, blowfish-cbc, cast128-cbc, aes192-cbc, aes256-cbc, arcfour, [email protected] DEB [YYYYmmdd-HH:MM:30.498] thr=1 paramiko.transport: server encrypt: aes128-ctr, aes192-ctr, aes256-ctr, arcfour256, arcfour128, aes128-cbc, 3des-cbc, blowfish-cbc, cast128-cbc, aes192-cbc, aes256-cbc, arcfour, [email protected] DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: client mac: hmac-md5, hmac-sha1, [email protected], hmac-sha2-256, hmac-sha2-512, hmac-ripemd160, [email protected], hmac-sha1-96, hmac-md5-96 DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: server mac: hmac-md5, hmac-sha1, [email protected], hmac-sha2-256, hmac-sha2-512, hmac-ripemd160, [email protected], hmac-sha1-96, hmac-md5-96 DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: client compress: none, [email protected] DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: server 
compress: none, [email protected] DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: client lang: <none> DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: server lang: <none>. DEB [YYYYmmdd-HH:MM:30.499] thr=1 paramiko.transport: kex follows: False DEB [YYYYmmdd-HH:MM:30.500] thr=1 paramiko.transport: === Key exchange agreements === DEB [YYYYmmdd-HH:MM:30.500] thr=1 paramiko.transport: Kex: diffie-hellman-group-exchange-sha256 DEB [YYYYmmdd-HH:MM:30.500] thr=1 paramiko.transport: HostKey: ssh-rsa DEB [YYYYmmdd-HH:MM:30.500] thr=1 paramiko.transport: Cipher: aes128-ctr DEB [YYYYmmdd-HH:MM:30.500] thr=1 paramiko.transport: MAC: hmac-sha2-256 DEB [YYYYmmdd-HH:MM:30.501] thr=1 paramiko.transport: Compression: none DEB [YYYYmmdd-HH:MM:30.501] thr=1 paramiko.transport: === End of kex handshake === DEB [YYYYmmdd-HH:MM:30.548] thr=1 paramiko.transport: Got server p (2048 bits) DEB [YYYYmmdd-HH:MM:30.666] thr=1 paramiko.transport: kex engine KexGexSHA256 specified hash_algo <built-in function openssl_sha256> DEB [YYYYmmdd-HH:MM:30.667] thr=1 paramiko.transport: Switch to new keys ... DEB [YYYYmmdd-HH:MM:30.669] thr=2 paramiko.transport: Adding ssh-rsa host key for server.tld: b'caea********************.' DEB [YYYYmmdd-HH:MM:30.674] thr=2 paramiko.transport: Trying discovered key b'b49c********************' in /path/to/.ssh/my_key.rsa DEB [YYYYmmdd-HH:MM:30.722] thr=1 paramiko.transport: userauth is OK DEB [YYYYmmdd-HH:MM:30.722] thr=1 paramiko.transport: Finalizing pubkey algorithm for key of type 'ssh-rsa' DEB [YYYYmmdd-HH:MM:30.722] thr=1 paramiko.transport: Our pubkey algorithm list: ['rsa-sha2-512', 'rsa-sha2-256', 'ssh-rsa'] DEB [YYYYmmdd-HH:MM:30.723] thr=1 paramiko.transport: Server-side algorithm list: [''] DEB [YYYYmmdd-HH:MM:30.723] thr=1 paramiko.transport: Agreed upon 'rsa-sha2-512' pubkey algorithm INF [YYYYmmdd-HH:MM:30.735] thr=1 paramiko.transport: Authentication (publickey) failed. DEB [YYYYmmdd-HH:MM:30.739] thr=2 paramiko.transport: Trying SSH agent key b'9d37********************' DEB [YYYYmmdd-HH:MM:30.747] thr=1 paramiko.transport: userauth is OK. DEB [YYYYmmdd-HH:MM:30.748] thr=1 paramiko.transport: Finalizing pubkey algorithm for key of type 'ssh-rsa' DEB [YYYYmmdd-HH:MM:30.748] thr=1 paramiko.transport: Our pubkey algorithm list: ['rsa-sha2-512', 'rsa-sha2-256', 'ssh-rsa'] DEB [YYYYmmdd-HH:MM:30.748] thr=1 paramiko.transport: Server-side algorithm list: [''] DEB [YYYYmmdd-HH:MM:30.748] thr=1 paramiko.transport: Agreed upon 'rsa-sha2-512' pubkey algorithm INF [YYYYmmdd-HH:MM:30.868] thr=1 paramiko.transport: Authentication (publickey) failed... Shell command logs OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 58: Applying options for * debug2: resolving "server.tld" port 22 debug2: ssh_connect_direct: needpriv 0 debug1: Connecting to server.tld [server.tld] port 22. debug1: Connection established. 
debug1: permanently_set_uid: 0/0 debug1: key_load_public: No such file or directory debug1: identity file /path/to/.ssh/my_key.rsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /path/to/.ssh/my_key.rsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.4 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH_5* compat 0x0c000000 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to server.tld:22 as 'my_user' debug3: hostkeys_foreach: reading file "/path/to/.ssh/known_hosts" debug3: record_hostkey: found key type RSA in file /path/to/.ssh/known_hosts:82 debug3: load_hostkeys: loaded 1 keys from server.tld debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],rsa-sha2-512,rsa-sha2-256,ssh-rsa debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,ext-info-c debug2: host key algorithms: [email protected],rsa-sha2-512,rsa-sha2-256,ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected],zlib debug2: compression stoc: none,[email protected],zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: host key algorithms: ssh-rsa,ssh-dss debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: MACs ctos: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: MACs stoc: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: compression ctos: none,[email protected] debug2: compression stoc: none,[email protected] debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: 
diffie-hellman-group-exchange-sha256 debug1: kex: host key algorithm: ssh-rsa debug1: kex: server->client cipher: aes128-ctr MAC: [email protected] compression: none debug1: kex: client->server cipher: aes128-ctr MAC: [email protected] compression: none debug1: kex: diffie-hellman-group-exchange-sha256 need=16 dh_need=16 debug1: kex: diffie-hellman-group-exchange-sha256 need=16 dh_need=16 debug3: send packet: type 34 debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sent debug3: receive packet: type 31 debug1: got SSH2_MSG_KEX_DH_GEX_GROUP debug2: bits set: 1502/3072 debug3: send packet: type 32 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug3: receive packet: type 33 debug1: got SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: ssh-.:************************************************** debug3: hostkeys_foreach: reading file "/path/to/.ssh/known_hosts" debug3: record_hostkey: found key type RSA in file /path/to/.ssh/known_hosts:8..2 debug3: load_hostkeys: loaded 1 keys from server.tld debug1: Host 'server.tld' is known and matches the RSA host key. debug1: Found key in /path/to/.ssh/known_hosts:82 debug2: bits set: 1562/3072 debug3: send packet: type 21 debug2: set_newkeys: mode 1 debug1: rekey after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: set_newkeys: mode 0 debug1: rekey after 4294967296 blocks debug2: key: <foo> (0x55bcf6d1d320), agent debug2: key: /path/to/.ssh/my_key.rsa ((nil)), explicit debug3: send packet: type 5 debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-keyex debug3: remaining preferred: gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-keyex debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug2: we did not send a packet, disable method debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information No Kerberos credentials available (default cache: KEYRING:persistent:0) debug1: Unspecified GSS failure. 
Minor code may provide more information No Kerberos credentials available (default cache: KEYRING:persistent:0) debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: <foo> debug3: send_pubkey_test debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /path/to/.ssh/my_key.rsa debug3: sign_and_send_pubkey: RSA SHA256:********************************** debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: receive packet: type 52 debug1: Authentication succeeded (publickey). Authenticated to server.tld ([server.tld]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug3: send packet: type 90 debug1: Requesting [email protected] debug3: send packet: type 80 debug1: Entering interactive session. debug1: pledge: network debug3: receive packet: type 91 debug2: callback start debug2: fd 3 setting TCP_NODELAY debug3: ssh_packet_set_tos: set IP_TOS 0x08 debug2: client_session2_setup: id 0 debug1: Sending environment. debug3: Ignored env XDG_SESSION_ID debug3: Ignored env HOSTNAME debug3: Ignored env SELINUX_ROLE_REQUESTED debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env HISTSIZE debug3: Ignored env SSH_CLIENT debug3: Ignored env SELINUX_USE_CURRENT_RANGE debug3: Ignored env SSH_TTY debug3: Ignored env CDPATH debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env MAIL debug3: Ignored env PATH debug3: Ignored env PWD debug1: Sending env LANG = xx_XX.UTF-8 debug2: channel 0: request env confirm 0 debug3: send packet: type 98 debug3: Ignored env SELINUX_LEVEL_REQUESTED debug3: Ignored env HISTCONTROL debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env LOGNAME debug3: Ignored env SSH_CONNECTION debug3: Ignored env LESSOPEN debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env _ debug1: Sending command: echo TEST : $(hostname) debug2: channel 0: request exec confirm 1 debug3: send packet: type 98 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel 0: rcvd adjust 2097152 debug3: receive packet: type 99 debug2: channel_input_status_confirm: type 99 id 0 debug2: exec request accepted on channel 0 TEST : server.tld debug3: receive packet: type 96 debug2: channel 0: rcvd eof debug2: channel 0: output open -> drain debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug3: receive packet: type 98 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug3: receive packet: type 98 debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0 debug2: channel 0: rcvd eow debug2: channel 0: close_read debug2: channel 0: input open -> closed debug3: receive packet: type 97 debug2: channel 0: rcvd close debug3: channel 0: will not send data after close debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug3: send packet: type 97 debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 
1 debug3: channel 0: status: The following connections are open: #0 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1) debug3: send packet: type 1 Transferred: sent 3264, received 2656 bytes, in 0.0 seconds. Bytes per second: sent 92349.8, received 75147.4 debug1: Exit status 0 .
Imo, it's a bug in Paramiko. It does not handle correctly absence of server-sig-algs extension on the server side. Try disabling rsa-sha2-* on Paramiko side altogether: ssh_client.connect( server, username=ssh_user, key_filename=ssh_keypath, disabled_algorithms=dict(pubkeys=["rsa-sha2-512", "rsa-sha2-256"])) (note that there's no need to specify port=22, as that's the default) I've found related Paramiko issue: RSA key auth failing from paramiko 2.9.x client to dropbear server Though it refers to Paramiko 2.9.0 change log, which seems to imply that the behavior is deliberate: When the server does not send server-sig-algs, Paramiko will attempt the first algorithm in the above list. Clients connecting to legacy servers should thus use disabled_algorithms to turn off SHA2. Since 2.9.2, Paramiko will say: DEB [20220113-14:46:13.882] thr=1 paramiko.transport: Server did not send a server-sig-algs list; defaulting to our first preferred algo ('rsa-sha2-512') DEB [20220113-14:46:13.882] thr=1 paramiko.transport: NOTE: you may use the 'disabled_algorithms' SSHClient/Transport init kwarg to disable that or other algorithms if your server does not support them! Obligatory warning: Do not use AutoAddPolicy – You are losing a protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server". Your code for waiting for command to complete and reading its output is flawed too. See Wait to finish command executed with Python Paramiko. And for most purposes, the get_pty=True is not a good idea either.
24
41
70,561,769
2022-1-3
https://stackoverflow.com/questions/70561769/apache-beam-cloud-dataflow-streaming-stuck-side-input
I'm currently building PoC Apache Beam pipeline in GCP Dataflow. In this case, I want to create streaming pipeline with main input from PubSub and side input from BigQuery and store processed data back to BigQuery. Side pipeline code side_pipeline = ( p | "periodic" >> PeriodicImpulse(fire_interval=3600, apply_windowing=True) | "map to read request" >> beam.Map(lambda x:beam.io.gcp.bigquery.ReadFromBigQueryRequest(table=side_table)) | beam.io.ReadAllFromBigQuery() ) Function with side input code def enrich_payload(payload, equipments): id = payload["id"] for equipment in equipments: if id == equipment["id"]: payload["type"] = equipment["type"] payload["brand"] = equipment["brand"] payload["year"] = equipment["year"] break return payload Main pipeline code main_pipeline = ( p | "read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/topiq") | "bytes to dict" >> beam.Map(lambda x: json.loads(x.decode("utf-8"))) | "transform" >> beam.Map(transform_function) | "timestamping" >> beam.Map(lambda src: window.TimestampedValue( src, dt.datetime.fromisoformat(src["timestamp"]).timestamp() )) | "windowing" >> beam.WindowInto(window.FixedWindows(30)) ) final_pipeline = ( main_pipeline | "enrich data" >> beam.Map(enrich_payload, equipments=beam.pvalue.AsIter(side_pipeline)) | "store" >> beam.io.WriteToBigQuery(bq_table) ) result = p.run() result.wait_until_finish() After deploy it to Dataflow, everything looks fine and no error. But then I noticed that enrich data step has two nodes instead of one. And also, the side input stuck as you can see it has Elements Added with 21 counts in Input Collections and - value in Elements Added in Output Collections. You can find the full pipeline code here I already follow all instruction in these documentations: https://beam.apache.org/documentation/patterns/side-inputs/ https://beam.apache.org/releases/pydoc/2.35.0/apache_beam.io.gcp.bigquery.html Yet still found this error. Please help me. Thanks!
Here you have a working example: mytopic = "" sql = "SELECT station_id, CURRENT_TIMESTAMP() timestamp FROM `bigquery-public-data.austin_bikeshare.bikeshare_stations` LIMIT 10" def to_bqrequest(e, sql): from apache_beam.io import ReadFromBigQueryRequest yield ReadFromBigQueryRequest(query=sql) def merge(e, side): for i in side: yield f"Main {e.decode('utf-8')} Side {i}" pubsub = p | "Read PubSub topic" >> ReadFromPubSub(topic=mytopic) side_pcol = (p | PeriodicImpulse(fire_interval=300, apply_windowing=False) | "ApplyGlobalWindow" >> WindowInto(window.GlobalWindows(), trigger=trigger.Repeatedly(trigger.AfterProcessingTime(5)), accumulation_mode=trigger.AccumulationMode.DISCARDING) | "To BQ Request" >> ParDo(to_bqrequest, sql=sql) | ReadAllFromBigQuery() ) final = (pubsub | "Merge" >> ParDo(merge, side=beam.pvalue.AsList(side_pcol)) | Map(logging.info) ) p.run() Note this uses a GlobalWindow (so that both inputs have the same window). I used a processing time trigger so that the pane contains multiple rows. 5 was chosen arbitrarily, using 1 would work too. Please note matching the data between side and main inputs is non deterministic, and you may see fluctuating values from older fired panes. In theory, using FixedWindows should fix this, but I cannot get the FixedWindows to work.
9
7
70,573,108
2022-1-4
https://stackoverflow.com/questions/70573108/speeding-up-the-loops-or-different-ideas-for-counting-primitive-triples
def pythag_triples(n): i = 0 start = time.time() for x in range(1, int(sqrt(n) + sqrt(n)) + 1, 2): for m in range(x+2,int(sqrt(n) + sqrt(n)) + 1, 2): if gcd(x, m) == 1: # q = x*m # l = (m**2 - x**2)/2 c = (m**2 + x**2)/2 # trips.append((q,l,c)) if c < n: i += 1 end = time.time() return i, end-start print(pythag_triples(3141592653589793)) I'm trying to calculate primitive pythagorean triples using the idea that all triples are generated from using m, n that are both odd and coprime. I already know that the function works up to 1000000 but when doing it to the larger number its taken longer than 24 hours. Any ideas on how to speed this up/ not brute force it. I am trying to count the triples.
This new answer brings the total time for big_n down to 4min 6s. An profiling of my initial answer revealed these facts: Total time: 1h 42min 33s Time spent factorizing numbers: almost 100% of the time In contrast, generating all primes from 3 to sqrt(2*N - 1) takes only 38.5s (using Atkin's sieve). I therefore decided to try a version where we generate all numbers m as known products of prime numbers. That is, the generator yields the number itself as well as the distinct prime factors involved. No factorization needed. The result is still 500_000_000_002_841, off by 4 as @Koder noticed. I do not know yet where that problem comes from. Edit: after correction of the xmax bound (isqrt(2*N - m**2) instead of isqrt(2*N - m**2 - 1), since we do want to include triangles with hypothenuse equal to N), we now get the correct result. The code for the primes generator is included at the end. Basically, I used Atkin's sieve, adapted (without spending much time on it) to Python. I am quite sure it could be sped up (e.g. using numpy and perhaps even numba). To generate integers from primes (which we know we can do thanks to the Fundamental theorem of arithmetic), we just need to iterate through all the possible products prod(p_i**k_i) where p_i is the i^th prime number and k_i is any non-negative integer. The easiest formulation is a recursive one: def gen_ints_from_primes(p_list, upto): if p_list and upto >= p_list[0]: p, *p_list = p_list pk = 1 p_tup = tuple() while pk <= upto: for q, p_distinct in gen_ints_from_primes(p_list, upto=upto // pk): yield pk * q, p_tup + p_distinct pk *= p p_tup = (p, ) else: yield 1, tuple() Unfortunately, we quickly run into memory constraints (and recursion limit). So here is a non-recursive version which uses no extra memory aside from the list of primes themselves. Essentially, the current value of q (the integer in process of being generated) and an index in the list are all the information we need to generate the next integer. Of course, the values come unsorted, but that doesn't matter, as long as they are all covered. def rem_p(q, p, p_distinct): q0 = q while q % p == 0: q //= p if q != q0: if p_distinct[-1] != p: raise ValueError(f'rem({q}, {p}, ...{p_distinct[-4:]}): p expected at end of p_distinct if q % p == 0') p_distinct = p_distinct[:-1] return q, p_distinct def add_p(q, p, p_distinct): if len(p_distinct) == 0 or p_distinct[-1] != p: p_distinct += (p, ) q *= p return q, p_distinct def gen_prod_primes(p, upto=None): if upto is None: upto = p[-1] if upto >= p[-1]: p = p + [upto + 1] # sentinel q = 1 i = 0 p_distinct = tuple() while True: while q * p[i] <= upto: i += 1 while q * p[i] > upto: yield q, p_distinct if i <= 0: return q, p_distinct = rem_p(q, p[i], p_distinct) i -= 1 q, p_distinct = add_p(q, p[i], p_distinct) Example- >>> p_list = list(primes(20)) >>> p_list [2, 3, 5, 7, 11, 13, 17, 19] >>> sorted(gen_prod_primes(p_list, 20)) [(1, ()), (2, (2,)), (3, (3,)), (4, (2,)), (5, (5,)), (6, (2, 3)), (7, (7,)), (8, (2,)), (9, (3,)), (10, (2, 5)), (11, (11,)), (12, (2, 3)), (13, (13,)), (14, (2, 7)), (15, (3, 5)), (16, (2,)), (17, (17,)), (18, (2, 3)), (19, (19,)), (20, (2, 5))] As you can see, we don't need to factorize any number, as they conveniently come along with the distinct primes involved. 
To get only odd numbers, simply remove 2 from the list of primes: >>> sorted(gen_prod_primes(p_list[1:]), 20) [(1, ()), (3, (3,)), (5, (5,)), (7, (7,)), (9, (3,)), (11, (11,)), (13, (13,)), (15, (3, 5)), (17, (17,)), (19, (19,))] In order to exploit this number-and-factors presentation, we need to amend a bit the function given in the original answer: def phi(n, upto=None, p_list=None): # Euler's totient or "phi" function if upto is None or upto > n: upto = n if p_list is None: p_list = list(distinct_factors(n)) if upto < n: # custom version: all co-primes of n up to the `upto` bound cnt = upto for q in products_of(p_list, upto): cnt += upto // q if q > 0 else -(upto // -q) return cnt # standard formulation: all co-primes of n up to n-1 cnt = n for p in p_list: cnt = cnt * (p - 1) // p return cnt With all this, we can now rewrite our counting functions: def pt_count_m(N): # yield tuples (m, count(x) where 0 < x < m and odd(x) # and odd(m) and coprime(x, m) and m**2 + x**2 <= 2*N)) # in this version, m is generated from primes, and the values # are iterated through unordered. mmax = isqrt(2*N - 1) p_list = list(primes(mmax))[1:] # skip 2 for m, p_distinct in gen_prod_primes(p_list, upto=mmax): if m < 3: continue # requirement: (m**2 + x**2) // 2 <= N # note, both m and x are odd (so (m**2 + x**2) // 2 == (m**2 + x**2) / 2) xmax = isqrt(2*N - m*m) cnt_m = phi(m+1, upto=xmax, p_list=(2,) + tuple(p_distinct)) if cnt_m > 0: yield m, cnt_m def pt_count(N, progress=False): mmax = isqrt(2*N - 1) it = pt_count_m(N) if progress: it = tqdm(it, total=(mmax - 3 + 1) // 2) return sum(cnt_m for m, cnt_m in it) And now: %timeit pt_count(100_000_000) 31.1 ms ± 38.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit pt_count(1_000_000_000) 104 ms ± 299 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # the speedup is still very moderate at that stage # however: %%time big_n = 3_141_592_653_589_793 N = big_n res = pt_count(N) CPU times: user 4min 5s, sys: 662 ms, total: 4min 6s Wall time: 4min 6s >>> res 500000000002845 Addendum: Atkin's sieve As promised, here is my version of Atkin's sieve. It can definitely be sped up. def primes(limit): # Generates prime numbers between 2 and n # Atkin's sieve -- see http://en.wikipedia.org/wiki/Prime_number sqrtLimit = isqrt(limit) + 1 # initialize the sieve is_prime = [False, False, True, True, False] + [False for _ in range(5, limit + 1)] # put in candidate primes: # integers which have an odd number of # representations by certain quadratic forms for x in range(1, sqrtLimit): x2 = x * x for y in range(1, sqrtLimit): y2 = y*y n = 4 * x2 + y2 if n <= limit and (n % 12 == 1 or n % 12 == 5): is_prime[n] ^= True n = 3 * x2 + y2 if n <= limit and (n % 12 == 7): is_prime[n] ^= True n = 3*x2-y2 if n <= limit and x > y and n % 12 == 11: is_prime[n] ^= True # eliminate composites by sieving for n in range(5, sqrtLimit): if is_prime[n]: sqN = n**2 # n is prime, omit multiples of its square; this is sufficient because # composites which managed to get on the list cannot be square-free for i in range(1, int(limit/sqN) + 1): k = i * sqN # k ∈ {n², 2n², 3n², ..., limit} is_prime[k] = False for i, truth in enumerate(is_prime): if truth: yield i
8
3
70,573,780
2022-1-4
https://stackoverflow.com/questions/70573780/unknown-opencv-exception-while-using-easyocr
Code: import easyocr reader = easyocr.Reader(['en']) result = reader.readtext('R.png') Output: CUDA not available - defaulting to CPU. Note: This module is much faster with a GPU. cv2.error: Unknown C++ exception from OpenCV code I would truly appreciate any support!
The new version of OpenCV has some issues. Uninstall the newer version of OpenCV and install the older one using: pip install opencv-python==4.5.4.60
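A quick way to verify the downgrade took effect before re-running the original snippet (gpu=False just silences the CUDA warning; drop it if you do have a GPU):
import cv2
import easyocr

print(cv2.__version__)  # should now report 4.5.4
reader = easyocr.Reader(['en'], gpu=False)
print(reader.readtext('R.png'))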
5
6
70,583,652
2022-1-4
https://stackoverflow.com/questions/70583652/grabbing-video-title-from-yt-dlp-command-line-output
from yt_dlp import YoutubeDL with YoutubeDL() as ydl: ydl.download('https://youtu.be/0KFSuoHEYm0') this is the relevant bit of code producing the output. what I would like to do is grab the 2nd last line from the output below, specifying the video title. I have tried a few variations of output = subprocess.getoutput(ydl) as well as output = subprocess.Popen( ydl, stdout=subprocess.PIPE ).communicate()[0] the output I am attempting to capture is the 2nd last line here: [youtube] 0KFSuoHEYm0: Downloading webpage [youtube] 0KFSuoHEYm0: Downloading android player API JSON [info] 0KFSuoHEYm0: Downloading 1 format(s): 22 [download] Destination: TJ Watt gets his 4th sack of the game vs. Browns [0KFSuoHEYm0].mp4 [download] 100% of 13.10MiB in 00:01 There is also documentation on yt-dlp on how to pull title from metadata or include as something in the brackets behind YoutubeDL(), but I can not quite figure it out. This is part of the first project I am making in python. I am missing an understanding of many concepts any help would be much appreciated.
Credits: answer to question: How to get information from youtube-dl in python ?? Modify your code as follows: from yt_dlp import YoutubeDL with YoutubeDL() as ydl: info_dict = ydl.extract_info('https://youtu.be/0KFSuoHEYm0', download=False) video_url = info_dict.get("url", None) video_id = info_dict.get("id", None) video_title = info_dict.get('title', None) print("Title: " + video_title) # <= Here, you got the video title This is the output: #[youtube] 0KFSuoHEYm0: Downloading webpage #[youtube] 0KFSuoHEYm0: Downloading android player API JSON #Title: TJ Watt gets his 4th sack of the game vs. Browns
9
16
70,556,110
2022-1-2
https://stackoverflow.com/questions/70556110/how-to-remove-the-background-from-an-image
I want to remove the background, and draw the outline of the box shown in the image(there are multiple such images with a similar background) . I tried multiple methods in OpenCV, however I am unable to determine the combination of features which can help remove background for this image. Some of the approaches tried out were: Edge Detection - Since the background itself has edges of its own, using edge detection on its own (such as Canny and Sobel) didn't seem to give good results. Channel Filtering / Thresholding - Both the background and foreground have a similar white color, so I was unable to find a correct threshold to filter the foreground. Contour Detection - Since the background itself has a lot of contours, just using the largest contour area, as is often used for background removal, also didn't work. I would be open to tools in Computer Vision or of Deep Learning (in Python) to solve this particular problem.
The Concept This is one of the cases where it is really useful to fine-tune the kernels of which you are using to dilate and erode the canny edges detected from the images. Here is an example, where the dilation kernel is np.ones((4, 2)) and the erosion kernel is np.ones((13, 7)): The Code import cv2 import numpy as np def process(img): img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img_blur = cv2.GaussianBlur(img_gray, (3, 3), 2) img_canny = cv2.Canny(img_blur, 50, 9) img_dilate = cv2.dilate(img_canny, np.ones((4, 2)), iterations=11) img_erode = cv2.erode(img_dilate, np.ones((13, 7)), iterations=4) return cv2.bitwise_not(img_erode) def get_contours(img): contours, _ = cv2.findContours(process(img), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE) cnt = max(contours, key=cv2.contourArea) cv2.drawContours(img, [cv2.convexHull(cnt)], -1, (0, 0, 255), 2) img = cv2.imread("image2.png") get_contours(img) cv2.imshow("result", img) cv2.waitKey(0) cv2.destroyAllWindows() The Output Output for each of the two images provided: Image 1: Image 2: Notes Note that the processed image (which is binary) is inverted at cv2.bitwise_not(img_erode). Observe the processed version of both images (returned by the process() function defined above), with the inversion: Processed Image 1: Processed Image 2: Tools Finally, if you happen to have other images where the above program doesn't work properly on, you can use OpenCV Trackbars to adjust the values passed into the methods with the program below: import cv2 import numpy as np def process(img, b_k, b_s, c_t1, c_t2, k1, k2, k3, k4, iter1, iter2): img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) b_k = b_k // 2 * 2 + 1 img_blur = cv2.GaussianBlur(img_gray, (b_k, b_k), b_s) img_canny = cv2.Canny(img_blur, c_t1, c_t2) img_dilate = cv2.dilate(img_canny, np.ones((k1, k2)), iterations=iter1) img_erode = cv2.erode(img_dilate, np.ones((k3, k4)), iterations=iter2) return cv2.bitwise_not(img_erode) d = {"Blur Kernel": (3, 50), "Blur Sigma": (2, 30), "Canny Threshold 1": (50, 500), "Canny Threshold 2": (9, 500), "Dilate Kernel1": (4, 50), "Dilate Kernel2": (2, 50), "Erode Kernel1": (13, 50), "Erode Kernel2": (7, 50), "Dilate Iterations": (11, 40), "Erode Iterations": (4, 40)} cv2.namedWindow("Track Bars") for i in d: cv2.createTrackbar(i, "Track Bars", *d[i], id) img = cv2.imread("image1.png") while True: img_copy = img.copy() processed = process(img, *(cv2.getTrackbarPos(i, "Track Bars") for i in d)) contours, _ = cv2.findContours(processed, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE) if contours: cnt = max(contours, key=cv2.contourArea) cv2.drawContours(img_copy, [cv2.convexHull(cnt)], -1, (0, 0, 255), 2) cv2.imshow("result", img_copy) if cv2.waitKey(1) & 0xFF == ord("q"): break cv2.waitKey(0) cv2.destroyAllWindows()
8
22
70,557,824
2022-1-2
https://stackoverflow.com/questions/70557824/python-itterate-down-dictionairy-move-down-tree-conditionally
I have a some python code below that walk down a tree but I want it to work down a tree checking taking some paths conditioally based on values. I want to get the LandedPrice for branches of tree based on condition and fulfillmentChannel parsed_results['LowestLanded'] = sku_multi_sku['Summary']['LowestPrices']['LowestPrice']['LandedPrice']['Amount']['value'] That walks down this tree but values because there are two LowestPrice records/dicts returned one for each condition and fulfillmentChannel pair. I want to filter on condition=new and fulfillmentChannel=Amazon so I only get back one record. When I parse XML data I can do it with code similar to LowestPrices/LowestPrice[@condition='new'][@fulfillmentChannel='Merchant']/LandedPrice/Amount" but couldn't get similar code to work here. How do I do this with dictionaries? "LowestPrices":{ "value":"\n ", "LowestPrice":[ { "value":"\n ", "condition":{ "value":"new" #condtion new }, "fulfillmentChannel":{ "value":"Amazon" ## fulfilllmentChannel #1 }, "LandedPrice":{ "value":"\n ", "CurrencyCode":{ "value":"USD" }, "Amount":{ "value":"19.57" } }, "ListingPrice":{ "value":"\n ", "CurrencyCode":{ "value":"USD" }, "Amount":{ "value":"19.57" } }, "Shipping":{ "value":"\n ", "CurrencyCode":{ "value":"USD" }, "Amount":{ "value":"0.00" } } }, { "value":"\n ", "condition":{ "value":"new" }, "fulfillmentChannel":{ "value":"Merchant" }, "LandedPrice":{ "value":"\n ", "CurrencyCode":{ "value":"USD" }, "Amount":{ "value":"19.25" } }, "ListingPrice":{ "value":"\n ", "CurrencyCode":{ "value":"USD" }, "Amount":{ "value":"19.25" } }, "Shipping":{ "value":"\n ", "CurrencyCode":{ "value":"USD" }, "Amount":{ "value":"0.00" } } } ] },
You can use list comprehensions with conditional logic for your purposes like this: my_dict = { "LowestPrices": { "value": "\n ", "LowestPrice": [{ "value": "\n ", "condition": { "value": "new" }, "fulfillmentChannel": { "value": "Amazon" }, "LandedPrice": { "value": "\n ", "CurrencyCode": { "value": "USD" }, "Amount": { "value": "19.57" } }, "ListingPrice": { "value": "\n ", "CurrencyCode": { "value": "USD" }, "Amount": { "value": "19.57" } }, "Shipping": { "value": "\n ", "CurrencyCode": { "value": "USD" }, "Amount": { "value": "0.00" } } }, { "value": "\n ", "condition": { "value": "new" }, "fulfillmentChannel": { "value": "Merchant" }, "LandedPrice": { "value": "\n ", "CurrencyCode": { "value": "USD" }, "Amount": { "value": "19.25" } }, "ListingPrice": { "value": "\n ", "CurrencyCode": { "value": "USD" }, "Amount": { "value": "19.25" } }, "Shipping": { "value": "\n ", "CurrencyCode": { "value": "USD" }, "Amount": { "value": "0.00" } } } ] }, } lowest_prices = [x for x in my_dict["LowestPrices"]["LowestPrice"] if x["condition"]["value"] == "new" and x["fulfillmentChannel"]["value"] == "Amazon"] lowest_prices is a list of all dicts that satisfy the required conditions. If you sure that you have only one dictionary in your case that satisfy conditions or you just want to get the amount of the first one, you just do this: if len(lowest_prices) > 0: amount = lowest_prices[0]["LandedPrice"]["Amount"]["value"] print(amount)
4
7
70,575,617
2022-1-4
https://stackoverflow.com/questions/70575617/memory-efficiency-of-nested-functions-in-python
Let's say we have the following functions: def functionA(b, c): def _innerFunction(b, c): return b + c return _innerFunction(b, c) def _outerFunction(b, c): return b + c def functionB(b, c): return _outerFunction(b, c) functionA and functionB will do the same. _outerFunction is globally available, while _innerFunction is only available for functionA. Nested functions are useful for data hiding and privacy, but what about their memory efficiency? For my understanding, the _outerFunction must only be loaded once, while the _innerFunction works like a "local" variable and hence must be loaded each time functionA is called. Is that correct?
Regarding memory, both of them have almost the same memory footprint. A function is comprised of a code object, containing the actual compiled code, and a function object containing the closure, the name and other dynamic variables. The code object is compiled for all functions, inner and outer, before the code is run. It is what resides in the .pyc file. The difference between an inner and an outer function is the creation of the function object. An outer function will create the function only once, while the inner function will load the same constant code object and create the function every run. As the code object is equivalent, _inner and _outer's memory footprint is equivalent: In both cases you have the name of the function as a constant. In the functionA the name will be used to construct the inner function object on each run, while in functionB the name will be used to refer to the global module and search for the outer function. In both cases you need to hold a code object, either in the global module or in functionA. In both cases you have the same parameters and same space saved for variables. Runtime however is not equivalent: functionB needs to call a global function which is slightly slower than an inner function, but functionA needs to create a new function object on each run which is significantly slower. In order to prove how equivalent they are, let's check the code itself: >>> functionA.__code__.co_consts (None, <code object _innerFunction at 0x00000296F0B6A600, file "<stdin>", line 2>, 'functionA.<locals>._innerFunction') We can see the code object as a const stored inside functionA. Let's extract the actual compiled bytecode: >>> functionA.__code__.co_consts[1].co_code b'|\x00|\x01\x17\x00S\x00' Now let's extract the bytecode for the outer function: >>> _outerFunction.__code__.co_code b'|\x00|\x01\x17\x00S\x00' It's exactly the same code! The local variable positions are the same, the code is written the same, and so the actual compiled code is exactly the same. >>> functionA.__code__.co_names () >>> functionB.__code__.co_names ('_outerFunction',) In functionB, instead of saving the name in the consts, the name is saved in a co_names which is later used for calling the global function. The only difference in memory footprint is thus the code of functionA and functionB: >>> functionA.__code__.co_code b'd\x01d\x02\x84\x00}\x02|\x02|\x00|\x01\x83\x02S\x00' >>> functionB.__code__.co_code b't\x00|\x00|\x01\x83\x02S\x00' functionA needs to create a function object on each run, and the name for the inner function includes functionA.<locals>., which entails a few extra bytes (which is negligible) and a slower run. In terms of runtime, if you're calling the inner function multiple times, functionA is slightly faster: def functionA(b, c): def _innerFunction(): return b + c for i in range(10_000_000): _innerFunction() # Faster def _outerFunction(b, c): return b + c def functionB(b, c): for i in range(10_000_000): _outerFunction(b, c) # Slower def functionC(b, c): outerFunction = _outerFunction for i in range(10_000_000): outerFunction(b, c) # Almost same as A but still slower. 
py -m timeit -s "import temp;" "temp.functionA(1,2)" 1 loop, best of 5: 2.45 sec per loop py -m timeit -s "import temp;" "temp.functionB(1,2)" 1 loop, best of 5: 3.21 sec per loop py -m timeit -s "import temp;" "temp.functionC(1,2)" 1 loop, best of 5: 2.66 sec per loop If you're calling the outer function multiple times, functionB is significantly faster as you avoid creating the function object: def functionA(b, c): # Significantly slower def _innerFunction(): return b + c return _innerFunction() def _outerFunction(b, c): return b + c def functionB(b, c): # Significantly faster return _outerFunction(b, c) py -m timeit -s "import temp;" "for i in range(10_000_000): temp.functionA(1,2)" 1 loop, best of 5: 9.46 sec per loop py -m timeit -s "import temp;" "for i in range(10_000_000): temp.functionB(1,2)" 1 loop, best of 5: 5.48 sec per loop @KellyBundy: What about recursion? My answer is only true for sequential runs. If the inner function recurses inside both A and B, there's no real difference in runtime or memory consumption, other than A being slightly faster. If function A and B recurse themselves, B will not allow a deeper recursion but will be significantly faster and will require less memory. On a sequential run, there is no difference in memory as there is one function object either stored in the global module, or stored as a local variable that is constantly recreated. In case of outside (A | B) recursion there is a memory difference: The local variable where the _innerFunction object is stored is not cleared, meaning there is an additional function object created for every recursion inwards. In this specific example, we can see an important distinction between Python and other languages - Python does not have a tail-call optimization, meaning the frames aren't reused and the variables aren't removed when we recurse inwards, even though no one will reference them anymore. You're welcome to play with the following visualization. I guess stack space is exactly identical, all differences are on the heap? When you're working with Python, it's hard to divide things into a stack and heap. The C-stack is irrelevant as Python uses its own virtual stack. Python's stack is actually linked by the function's currently running frame, and is loaded, or created when a function is invoked. It is the reason they're also called stack frames - there's a linked list or "stack" of frames, and each frame has its own mini-stack called a value-stack. Both the stack and the frame are stored on the heap. There are plenty of benefits for using this approach, and a nice anecdote would be generator functions. I've actually written an article about the subject, but in short, being able to load and unload the stack at will, allows us to pause a function in the middle of its execution, and is the basis for both generators and asyncio.
4
9
70,584,497
2022-1-4
https://stackoverflow.com/questions/70584497/ti-is-not-defined-while-pulling-xcom-variable-in-s3toredshiftoperator
I am using S3ToRedshiftOperator to load csv file into Redshift database. Kindly help to pass xcom variable to S3ToRedshiftOperator. How can we push xcom without using custom function? Error: NameError: name 'ti' is not defined Using below code: from airflow.operators.s3_to_redshift_operator import S3ToRedshiftOperator def export_db_fn(**kwargs): session = settings.Session() outkey = S3_KEY.format(MWAA_ENV_NAME, name[6:]) print(outkey) s3_client.put_object(Bucket=S3_BUCKET, Key=outkey, Body=f.getvalue()) ti.xcom_push(key='FILE_PATH', value=outkey) return "OK" with DAG(dag_id="export_info", schedule_interval=None, catchup=False, start_date=days_ago(1)) as dag: export_info = PythonOperator( task_id="export_info", python_callable=export_db_fn, provide_context=True ) transfer_s3_to_redshift = S3ToRedshiftOperator( s3_bucket=S3_BUCKET, s3_key="{{ti.xcom_pull(key='FILE_PATH', task_ids='export_info')}}", schema="dw_stage", table=REDSHIFT_TABLE, copy_options=['csv',"IGNOREHEADER 1"], redshift_conn_id='redshift', autocommit=True, task_id='transfer_s3_to_redshift', ) start >> export_info >> transfer_s3_to_redshift >> end
The error message tells the problem. ti is not defined. When you set provide_context=True, Airflow makes Context available for you in the python callable. One of the attributes is ti (see source code). So you need to extract it from kwargs or set it in the function signature. Your code should be: def export_db_fn(**kwargs): ... ti = kwargs['ti'] ti.xcom_push(key='FILE_PATH', value=outkey) ... Or if you want to use ti directly then: def export_db_fn(ti, **kwargs): ... ti.xcom_push(key='FILE_PATH', value=outkey) ... Note: In Airflow >= 2.0 there is no need to set provide_context=True
4
5
70,583,230
2022-1-4
https://stackoverflow.com/questions/70583230/union-of-generic-types-that-is-also-generic
Say I have two types (one of them generic) like this from typing import Generic, TypeVar T = TypeVar('T') class A(Generic[T]): pass class B: pass And a union of A and B like this C = A|B Or, in pre-Python-3.10/PEP 604-syntax: C = Union[A,B] How do I have to change the definition of C, so that C is also generic? e.g. if an object is of type C[int], it is either of type A[int] (type parameter is passed down) or of type B (type parameter is ignored)
Rereading the mypy documentation I believe I have found my answer: Type aliases can be generic. In this case they can be used in two ways: Subscripted aliases are equivalent to original types with substituted type variables, so the number of type arguments must match the number of free type variables in the generic type alias. Unsubscripted aliases are treated as original types with free variables replaced with Any So, to answer my question: C = A[T]|B should do the trick. And it does!
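To make the solution concrete, here is a minimal sketch of the generic alias in use (the handle function is an illustrative name of mine, and the Union spelling keeps it runnable before Python 3.10):

from typing import Generic, TypeVar, Union

T = TypeVar('T')

class A(Generic[T]):
    pass

class B:
    pass

C = Union[A[T], B]  # equivalent to C = A[T] | B on Python 3.10+

def handle(value: C[int]) -> None:  # accepts either A[int] or B
    ...

handle(A[int]())
handle(B())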
10
4
70,586,364
2022-1-4
https://stackoverflow.com/questions/70586364/how-to-elegantly-generate-all-prefixes-of-an-iterable-cumulative-iterable
From an iterable, I'd like to generate an iterable of its prefixes (including the original iterable itself). for prefix in prefixes(range(5)): print(tuple(prefix)) should result in (0,) (0, 1) (0, 1, 2) (0, 1, 2, 3) (0, 1, 2, 3, 4) or in () (0,) (0, 1) (0, 1, 2) (0, 1, 2, 3) (0, 1, 2, 3, 4) and for prefix in prefixes('Hello'): print(''.join(prefix)) should result in H He Hel Hell Hello or in H He Hel Hell Hello (Whether the empty prefix is part of the result doesn't matter too much for me, nor does the exact type of the inner or outer resulting iterables.) I was able to devise several ways to implement this, but all feel at least slightly clunky: using slicing & len: (works if the iterable is a sequence) def prefixes(seq): for i in range(len(seq)): yield seq[:i + 1] or using a list comprehension: def prefixes(seq): return [seq[:i + 1] for i in range(len(seq))] ... or a generator expression def prefixes(seq): return (seq[:i + 1] for i in range(len(seq))) (These don't yield the empty prefix. To include it, replace [i + 1] by just [i] and range(len(seq)) by range(len(seq) + 1) in any of the above.) These feel clunky: because they don't work for all kinds iterable inputs because of the need for the + 1 offset calling range on the len of something (though enumerate wouldn't make it better here) using concatenation def prefixes(iterable): result = () for elem in iterable: result += (elem,) yield result (Doesn't include the empty prefix. This can be changed by yielding result already once before the for-loop.) or using itertools.accumulate from itertools import accumulate as acc def prefixes(iterable): return acc(iterable, lambda t, elem: t + (elem,), initial=()) or a bit more readable: from itertools import accumulate def _append(iterable, elem): return iterable + (elem,) def prefixes(iterable): return accumulate(iterable, _append, initial=()) (These two include the empty prefix. Drop it if unwanted.) These feel clunky due to the need to pack elements into length-one containers just to concatenate them to an existing one. Solutions that are more elegant? I feel like I must be missing something from itertools, functools, operator or more-itertools that would allow for a slightly or even significantly less clunky implementation. I mean, this is eerily similar to more_itertools.powerset, just a, well, rather specific subset of it.
It may be considered elegant to write the prefixes function in any generalized way that works, put it in a module, and then import it in the code where it is needed, so that it doesn't matter how it is implemented. On the other hand, requiring an extra import can be perceived as less elegant than a short local function that is less generic but more tailored to the specific use case. This is one possible quite generic solution: def prefixes(iterable): return itertools.accumulate(map(lambda x: (x,), iterable)) There are reasons for it to be considered elegant: It uses a function that is already available in the standard library and achieves the primary goal, it does not explicitly mention the concatenation which accumulate already does implicitly, it does not require the initial argument to accumulate. But some find using map and lambda to be less elegant than a for loop.
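As a quick usage check of the accumulate-based helper (my own sketch, reusing the range(5) example from the question):

import itertools

def prefixes(iterable):
    return itertools.accumulate(map(lambda x: (x,), iterable))

for prefix in prefixes(range(5)):
    print(tuple(prefix))
# (0,)
# (0, 1)
# (0, 1, 2)
# (0, 1, 2, 3)
# (0, 1, 2, 3, 4)

Note that, like the concatenation-based version in the question, this variant does not yield the empty prefix.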
6
3
70,537,825
2021-12-30
https://stackoverflow.com/questions/70537825/problem-dealing-with-a-space-when-moving-json-to-python
I am high school math teacher who is teaching myself programming. My apologies in advance if I don't phrase some of this correctly. I am collecting CSV data from the user and trying to move it to a SQLite database via Python. Everything works fine unless one of the values has a space in it. For example, here is part of my JavaScript object: Firstname: "Bruce" Grade: "" Lastname: "Wayne Jr" Nickname: "" Here is the corresponding piece after applying JSON.stringify: {"Firstname":"Bruce","Lastname":"Wayne Jr","Nickname":"","Grade":""} This is then passed to Python via a form. In Python, I use: data = request.form.get("data") print(data) data2 = json.loads(data) print(data2) I get a bunch of error messages, ending with: json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 250 (char 249) and the log of the first print gives: [{"Firstname":"Jason","Lastname":"Bourne","Nickname":"","Grade":"10"}, {"Firstname":"Steve","Lastname":"McGarret","Nickname":"5-0","Grade":""}, {"Firstname":"Danny","Lastname":"Williams","Nickname":"Dano","Grade":"12"}, {"Firstname":"Bruce","Lastname":"Wayne So it seems to break on the space in "Wayne Jr". I used what I learned here to build the basics: https://bl.ocks.org/HarryStevens/0ce529b9b5e4ea17f8db25324423818f I believe this JavaScript function is parsing the user data: function changeDataFromField(cb){ var arr = []; $('#enter-data-field').val().replace( /\n/g, "^^^xyz" ).split( "^^^xyz" ).forEach(function(d){ arr.push(d.replace( /\t/g, "^^^xyz" ).split( "^^^xyz" )) }); cb(csvToJson(arr)); } Updates based on comments: I am using a POST request. No AJAX. There are actually 2 inputs for the user. A text box where they can paste CSV data and a file upload option. Here is some more of the JavaScript. // Use the HTML5 File API to read the CSV function changeDataFromUpload(evt, cb){ if (!browserSupportFileUpload()) { console.error("The File APIs are not fully supported in this browser!"); } else { var data = null; var file = evt.target.files[0]; var fileName = file.name; $("#filename").html(fileName); if (file !== "") { var reader = new FileReader(); reader.onload = function(event) { var csvData = event.target.result; var parsed = Papa.parse(csvData); cb(csvToJson(parsed.data)); }; reader.onerror = function() { console.error("Unable to read " + file.fileName); }; } reader.readAsText(file); $("#update-data-from-file")[0].value = ""; } } // Method that checks that the browser supports the HTML5 File API function browserSupportFileUpload() { var isCompatible = false; if (window.File && window.FileReader && window.FileList && window.Blob) { isCompatible = true; } return isCompatible; } // Parse the CSV input into JSON function csvToJson(data) { var cols = ["Firstname","Lastname","Nickname","Grade"]; var out = []; for (var i = 0; i < data.length; i++){ var obj = {}; var row = data[i]; cols.forEach(function(col, index){ if (row[index]) { obj[col] = row[index]; } else { obj[col] = ""; } }); out.push(obj); } return out; } // Produces table for user to check appearance of data and button to complete upload function makeTable(data) { console.log(data); send_data = JSON.stringify(data); console.log(send_data); var table_data = '<table style="table-layout: fixed; width: 100%" class="table table-striped">'; table_data += '<th>First name</th><th>Last name</th><th>Nickname</th><th>Grade</th>' for(var count = 0; count < data.length; count++) { table_data += '<tr>'; table_data += '<td>'+data[count]['Firstname']+'</td>'; table_data += 
'<td>'+data[count]['Lastname']+'</td>'; table_data += '<td>'+data[count]['Nickname']+'</td>'; table_data += '<td>'+data[count]['Grade']+'</td>'; table_data += '</tr>'; } table_data += '</table>'; table_data += '<p><form action="/uploaded" method="post">'; table_data += 'Does the data look OK? If so, click to upload. '; table_data += '<button class="btn btn-primary" type="submit">Upload</button><p>'; table_data += '<input type="hidden" id="data" name="data" value='+send_data+'>'; table_data += '<input type="hidden" name="class_id" value="{{ class_id }}">'; table_data += '</form>'; table_data += 'Otherwise, fix the file and reload.'; document.getElementById("result_table").innerHTML = table_data; } </script>
The problem was the way I was sending the JSON string -- it wasn't in quotes, so any time there was a space in a value, there was a problem. To fix it: I got the JSON from the answer above, then before sending the JSON string via a POST request, I enclosed it in quotes.

send_data = JSON.stringify(json);
send_data = "'" + send_data + "'";

I am now able to send values that have spaces in them.
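For readers wondering why the quotes matter: an unquoted HTML attribute value ends at the first space, so the server received a cut-off JSON string. A small Python sketch of the before/after (the literals are copied from the question's example data):

import json

# What the server saw before the fix: the attribute value was cut at the first space.
broken = '{"Firstname":"Bruce","Lastname":"Wayne'
# json.loads(broken) raises json.decoder.JSONDecodeError: Unterminated string ...

# With the attribute value quoted, the full payload arrives and parses cleanly.
fixed = '{"Firstname":"Bruce","Lastname":"Wayne Jr","Nickname":"","Grade":""}'
print(json.loads(fixed)["Lastname"])  # Wayne Jr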
5
0
70,583,705
2022-1-4
https://stackoverflow.com/questions/70583705/matplotlib-share-x-axis-between-imshow-and-plot
I am trying to plot two imshow and one plot above each other sharing their x-axis. The figure layout is set up using gridspec. Here is a MWE: import matplotlib as mpl from matplotlib import pyplot as plt import numpy as np fig = plt.figure(figsize=(10,8)) gs = fig.add_gridspec(3,2,width_ratios=(1,2),height_ratios=(1,2,2), left=0.1,right=0.9,bottom=0.1,top=0.99, wspace=0.1, hspace=0.1) ax=fig.add_subplot(gs[2,1]) ax2=fig.add_subplot(gs[2,0], sharey=ax) ax3=fig.add_subplot(gs[1,0]) ax4=fig.add_subplot(gs[1,1], sharex=ax, sharey=ax3) ax5=fig.add_subplot(gs[0,1], sharex=ax) dates = pd.date_range("2020-01-01","2020-01-10 23:00", freq="H") xs = mpl.dates.date2num(dates) ys = np.random.random(xs.size) N = 10 arr = np.random.random((N, N)) arr2 = np.random.random((N, N)) norm=mpl.colors.Normalize(0, arr.max()) # change the min to stretch the color spectrum pcm = ax.imshow(arr, extent=[xs[0],xs[-1],10,0],norm=norm,aspect='auto') cax = fig.colorbar(pcm, ax=ax, extend='max') # , location='left' ax.set_xlabel('date') cax.set_label('fraction [-]') # ax.xaxis_date() myFmt = mpl.dates.DateFormatter('%d.%m') ax.xaxis.set_major_formatter(myFmt) norm=mpl.colors.Normalize(0, arr2.max()) # change the min to stretch the color spectrum pcm = ax4.imshow(arr2, extent=[xs[0],xs[-1],1,0],norm=norm,aspect='auto') cax4 = fig.colorbar(pcm, ax=ax4, extend='max') cax4.set_label('fraction [-]') ax5.plot(xs,ys) con1 = ConnectionPatch(xyA=(ax2.get_xlim()[0],1), xyB=(ax2.get_xlim()[0],1), coordsA="data", coordsB="data", connectionstyle=mpl.patches.ConnectionStyle("Bar", fraction=-0.05), axesA=ax2, axesB=ax3, arrowstyle="-", color='r') con2 = ConnectionPatch(xyA=(ax2.get_xlim()[0],0), xyB=(ax2.get_xlim()[0],0), coordsA="data", coordsB="data", connectionstyle=mpl.patches.ConnectionStyle("Bar", fraction=-0.02), axesA=ax2, axesB=ax3, arrowstyle="-", color='r') fig.add_artist(con1) fig.add_artist(con2) The plot ends up like this: While the axes seem to be linked (date format applied to all of them), they do not have the same extent. NOTE: The two left axes must not share the same x-axis. EDIT: Added ConnectionPatch connections which break when using constrained_layout.
Constrained_layout was specifically designed with this case in mind. It will work with your gridspec solution above, but more idiomatically: import datetime as dt import matplotlib as mpl from matplotlib import pyplot as plt import numpy as np import pandas as pd fig, axs = plt.subplot_mosaic([['.', 'plot'], ['empty1', 'imtop'], ['empty2', 'imbottom']], constrained_layout=True, gridspec_kw={'width_ratios':(1,2),'height_ratios':(1,2,2)}) axs['imtop'].sharex(axs['imbottom']) axs['plot'].sharex(axs['imtop']) dates = pd.date_range("2020-01-01","2020-01-10 23:00", freq="H") xs = mpl.dates.date2num(dates) ys = np.random.random(xs.size) N = 10 arr = np.random.random((N, N)) arr2 = np.random.random((N, N)) norm=mpl.colors.Normalize(0, arr.max()) # change the min to stretch the color spectrum pcm = axs['imtop'].imshow(arr, extent=[xs[0],xs[-1],10,0],norm=norm,aspect='auto') cax = fig.colorbar(pcm, ax=axs['imtop'], extend='max') norm=mpl.colors.Normalize(0, arr2.max()) # change the min to stretch the color spectrum pcm = axs['imbottom'].imshow(arr2, extent=[xs[0],xs[-1],1,0],norm=norm,aspect='auto') cax4 = fig.colorbar(pcm, ax=axs['imbottom'], extend='max') axs['plot'].plot(xs,ys)
4
6
70,583,980
2022-1-4
https://stackoverflow.com/questions/70583980/i-am-unable-to-create-a-new-virtualenv-in-ubuntu
So, I installed virtualenv in ubuntu terminal. I installed using the following commands: sudo apt install python3-virtualenv pip install virtualenv But when I try creating a new virtualenv using: virtualenv -p python3 venv I am getting the following error: AttributeError: module 'virtualenv.create.via_global_ref.builtin.cpython.mac_os' has no attribute 'CPython2macOsArmFramework' How can I solve it?
You don't need to use virtualenv. You can use this: python3 -m venv ./some_env
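A small usage note (my addition): after creating the environment, activate it with source ./some_env/bin/activate, install packages with pip while it is active, and leave it again with deactivate. The python and pip found while the environment is active belong to it, so nothing touches the system installation.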
17
16
70,570,165
2022-1-3
https://stackoverflow.com/questions/70570165/how-to-solve-importerror-with-pytest
There were already questions regarding this topic. Sometimes programmers put some __init__.py at some places, often it is said one should use absolute paths. However, I don't get it to work here: How do I import a class from a package so that tests in pytest run and the code can be used? At the moment I get pytest or the code passing respective running. My example project structure is . ├── testingonly │ ├── cli.py │ ├── __init__.py │ └── testingonly.py └── tests ├── __init__.py └── test_testingonly.py __init__.py is in both cases an empty file. $ cat testingonly/cli.py """Console script for testingonly.""" from testingonly import Tester def main(args=None): """Console script for testingonly.""" te = Tester() return 0 main() $ cat testingonly/testingonly.py """Main module.""" class Tester(): def __init__(self): print("Hello") This gives - as expected: $ python3 testingonly/cli.py Hello Trying to test this, however, fails: $ pytest ========================================================= test session starts ========================================================= platform linux -- Python 3.7.3, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: /home/stefan/Development/testingonly collected 0 items / 1 error =============================================================== ERRORS ================================================================ _____________________________________________ ERROR collecting tests/test_testingonly.py ______________________________________________ ImportError while importing test module '/home/stefan/Development/testingonly/tests/test_testingonly.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.7/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_testingonly.py:10: in <module> from testingonly import cli testingonly/cli.py:2: in <module> from testingonly import Tester E ImportError: cannot import name 'Tester' from 'testingonly' (/home/stefan/Development/testingonly/testingonly/__init__.py) Renaming testingonly/testingonly.py to testingonly/mytest.py and changing the imports in test_testingonly.py (from testingonly import mytest) and cli.py (from mytest import Tester) gives $ pytest ========================================================= test session starts ========================================================= platform linux -- Python 3.7.3, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: /home/stefan/Development/testingonly collected 0 items / 1 error =============================================================== ERRORS ================================================================ _____________________________________________ ERROR collecting tests/test_testingonly.py ______________________________________________ ImportError while importing test module '/home/stefan/Development/testingonly/tests/test_testingonly.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.7/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_testingonly.py:10: in <module> from testingonly import cli testingonly/cli.py:2: in <module> from mytest import Tester E ModuleNotFoundError: No module named 'mytest' ======================================================= short test summary info ======================================================= ERROR tests/test_testingonly.py !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ========================================================== 1 error in 0.37s =========================================================== $ python3 testingonly/cli.py Hello The other proposed solution with renaming to mytest.py lets the tests pass, but in cli.py using from testingonly.mytest import Tester gives a NameNotFound error. $ python3 testingonly/cli.py Traceback (most recent call last): File "testingonly/cli.py", line 2, in <module> from testingonly.mytest import Tester ModuleNotFoundError: No module named 'testingonly' $ pytest ========================================================= test session starts ========================================================= platform linux -- Python 3.7.3, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: /home/stefan/Development/testingonly collected 1 item tests/test_testingonly.py . [100%] ========================================================== 1 passed in 0.12s ==========================================================
The self-named module testingonly and file name of testingonly.py may be causing some issues with the way the modules are imported. Remove the __init__.py from the tests directory. Ref this answer. Try renaming testingonly.py to mytest.py and then importing it into your project again. In cli.py, it should be:

from testingonly.mytest import Tester

And then for your test file test_testingonly.py:

from testingonly.mytest import Tester

Your test_testingonly.py file should look like this:

import pytest
from testingonly.mytest import Tester  # Import the Tester class

def test_tester(capsys):
    # Create a Tester instance
    te = Tester()
    # Get the captured output
    captured = capsys.readouterr()
    # Assert that the captured output is what we expect
    assert captured.out == "Hello\n"

Finally, run your tests with:

python -m pytest tests/

Here is a fully working example based off of your code: https://github.com/cdesch/testingonly
9
2
70,579,291
2022-1-4
https://stackoverflow.com/questions/70579291/create-new-column-using-str-contains-and-based-on-if-else-condition
I have a list of names 'pattern' that I wish to match with strings in column 'url_text'. If there is a match i.e. True the name should be printed in a new column 'pol_names_block' and if False leave the row empty. pattern = '|'.join(pol_names_list) print(pattern) 'Jon Kyl|Doug Jones|Tim Kaine|Lindsey Graham|Cory Booker|Kamala Harris|Orrin Hatch|Bernie Sanders|Thom Tillis|Jerry Moran|Shelly Moore Capito|Maggie Hassan|Tom Carper|Martin Heinrich|Steve Daines|Pat Toomey|Todd Young|Bill Nelson|John Barrasso|Chris Murphy|Mike Rounds|Mike Crapo|John Thune|John. McCain|Susan Collins|Patty Murray|Dianne Feinstein|Claire McCaskill|Lamar Alexander|Jack Reed|Chuck Grassley|Catherine Masto|Pat Roberts|Ben Cardin|Dean Heller|Ron Wyden|Dick Durbin|Jeanne Shaheen|Tammy Duckworth|Sheldon Whitehouse|Tom Cotton|Sherrod Brown|Bob Corker|Tom Udall|Mitch McConnell|James Lankford|Ted Cruz|Mike Enzi|Gary Peters|Jeff Flake|Johnny Isakson|Jim Inhofe|Lindsey Graham|Marco Rubio|Angus King|Kirsten Gillibrand|Bob Casey|Chris Van Hollen|Thad Cochran|Richard Burr|Rob Portman|Jon Tester|Bob Menendez|John Boozman|Mazie Hirono|Joe Manchin|Deb Fischer|Michael Bennet|Debbie Stabenow|Ben Sasse|Brian Schatz|Jim Risch|Mike Lee|Elizabeth Warren|Richard Blumenthal|David Perdue|Al Franken|Bill Cassidy|Cory Gardner|Lisa Murkowski|Maria Cantwell|Tammy Baldwin|Joe Donnelly|Roger Wicker|Amy Klobuchar|Joel Heitkamp|Joni Ernst|Chris Coons|Mark Warner|John Cornyn|Ron Johnson|Patrick Leahy|Chuck Schumer|John Kennedy|Jeff Merkley|Roy Blunt|Richard Shelby|John Hoeven|Rand Paul|Dan Sullivan|Tim Scott|Ed Markey' I am using the following code df['url_text'].str.contains(pattern) which results in True in case a name in 'pattern' is present in a row in column 'url_text' and False otherwise. With that I have tried the following code: df['pol_name_block'] = df.apply( lambda row: pol_names_list if df['url_text'].str.contains(pattern) in row['url_text'] else ' ', axis=1 ) I get the error: TypeError: 'in <string>' requires string as left operand, not Series
From this toy Dataframe : >>> import pandas as pd >>> from io import StringIO >>> df = pd.read_csv(StringIO(""" ... id,url_text ... 1,Tim Kaine ... 2,Tim Kain ... 3,Tim ... 4,Lindsey Graham.com ... """), sep=',') >>> df id url_text 0 1 Tim Kaine 1 2 Tim Kain 2 3 Tim 3 4 Lindsey Graham.com From pol_names_list, we build patterns by formating it like so : patterns = '(%s)' % '|'.join(pol_names_list) Then, we can use the extract method to assign the value to the column pol_name_block to get the expected result : df['pol_name_block'] = df['url_text'].str.extract(patterns) Output : id url_text pol_name_block 0 1 Tim Kaine Tim Kaine 1 2 Tim Kain NaN 2 3 Tim NaN 3 4 Lindsey Graham.com Lindsey Graham
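Since the question asks for empty cells rather than NaN when no name matches, a small follow-up to the line above (my addition; expand=False keeps the result a Series):

df['pol_name_block'] = df['url_text'].str.extract(patterns, expand=False).fillna('')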
5
2
70,573,362
2022-1-4
https://stackoverflow.com/questions/70573362/tensorflow-how-to-extract-attention-scores-for-graphing
If you have a MultiHeadAttention layer in Keras, then it can return attention scores like so: x, attention_scores = MultiHeadAttention(1, 10, 10)(x, return_attention_scores=True) How do you extract the attention scores from the network graph? I would like to graph them.
Option 1: If you want to plot the attention scores during training, you can create a Callback and pass data to it. It can be triggered for example, after every epoch. Here is an example where I am using 2 attention heads and plotting them after every epoch: import tensorflow as tf import seaborn as sb import matplotlib.pyplot as plt class CustomCallback(tf.keras.callbacks.Callback): def __init__(self, data): self.data = data def on_epoch_end(self, epoch, logs=None): test_targets, test_sources = self.data _, attention_scores = attention_layer(test_targets[:1], test_sources[:1], return_attention_scores=True) # take one sample fig, axs = plt.subplots(ncols=3, gridspec_kw=dict(width_ratios=[5,5,0.2])) sb.heatmap(attention_scores[0, 0, :, :], annot=True, cbar=False, ax=axs[0]) sb.heatmap(attention_scores[0, 1, :, :], annot=True, yticklabels=False, cbar=False, ax=axs[1]) fig.colorbar(axs[1].collections[0], cax=axs[2]) plt.show() layer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=2) target = tf.keras.layers.Input(shape=[8, 16]) source = tf.keras.layers.Input(shape=[4, 16]) output_tensor, weights = layer(target, source, return_attention_scores=True) output = tf.keras.layers.Flatten()(output_tensor) output = tf.keras.layers.Dense(1, activation='sigmoid')(output) model = tf.keras.Model([target, source], output) model.compile(optimizer = 'adam', loss = tf.keras.losses.BinaryCrossentropy()) attention_layer = model.layers[2] samples = 5 train_targets = tf.random.normal((samples, 8, 16)) train_sources = tf.random.normal((samples, 4, 16)) test_targets = tf.random.normal((samples, 8, 16)) test_sources = tf.random.normal((samples, 4, 16)) y = tf.random.uniform((samples,), maxval=2, dtype=tf.int32) model.fit([train_targets, train_sources], y, batch_size=2, epochs=2, callbacks=[CustomCallback([test_targets, test_sources])]) Epoch 1/2 1/3 [=========>....................] - ETA: 2s - loss: 0.7142 3/3 [==============================] - 3s 649ms/step - loss: 0.6992 Epoch 2/2 1/3 [=========>....................] 
- ETA: 0s - loss: 0.7265 3/3 [==============================] - 1s 650ms/step - loss: 0.6863 <keras.callbacks.History at 0x7fcc839dc590> Option 2: If you just want to plot the attention scores after training, you can just pass some data to the model's attention layer and plot the scores: import tensorflow as tf import seaborn as sb import matplotlib.pyplot as plt layer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=2) target = tf.keras.layers.Input(shape=[8, 16]) source = tf.keras.layers.Input(shape=[4, 16]) output_tensor, weights = layer(target, source, return_attention_scores=True) output = tf.keras.layers.Flatten()(output_tensor) output = tf.keras.layers.Dense(1, activation='sigmoid')(output) model = tf.keras.Model([target, source], output) model.compile(optimizer = 'adam', loss = tf.keras.losses.BinaryCrossentropy()) samples = 5 train_targets = tf.random.normal((samples, 8, 16)) train_sources = tf.random.normal((samples, 4, 16)) test_targets = tf.random.normal((samples, 8, 16)) test_sources = tf.random.normal((samples, 4, 16)) y = tf.random.uniform((samples,), maxval=2, dtype=tf.int32) model.fit([train_targets, train_sources], y, batch_size=2, epochs=2) attention_layer = model.layers[2] _, attention_scores = attention_layer(test_targets[:1], test_sources[:1], return_attention_scores=True) # take one sample fig, axs = plt.subplots(ncols=3, gridspec_kw=dict(width_ratios=[5,5,0.2])) sb.heatmap(attention_scores[0, 0, :, :], annot=True, cbar=False, ax=axs[0]) sb.heatmap(attention_scores[0, 1, :, :], annot=True, yticklabels=False, cbar=False, ax=axs[1]) fig.colorbar(axs[1].collections[0], cax=axs[2]) plt.show() Epoch 1/2 3/3 [==============================] - 1s 7ms/step - loss: 0.6727 Epoch 2/2 3/3 [==============================] - 0s 6ms/step - loss: 0.6503
4
9
70,574,499
2022-1-4
https://stackoverflow.com/questions/70574499/how-do-i-make-a-decorator-to-wrap-an-async-function-with-a-try-except-statement
Let's say I have an async function like this: async def foobar(argOne, argTwo, argThree): print(argOne, argTwo, argThree) I want to make a decorator and use it on this function in a way that it wraps the above code in a try except statement like this: try: print(argOne, argTwo, argThree) except: print('Something went wrong.) Is there any way to do this?
Because the wrapper replaces the decorated coroutine and is what actually gets called, it should itself be defined as an async function: async def wrapper(*arg, **kwargs):

import asyncio

def decorator(f):
    async def wrapper(*arg, **kwargs):
        try:
            await f(*arg, **kwargs)
        except Exception as e:
            print('Something went wrong.', e)
    return wrapper

@decorator
async def foobar(argOne, argTwo, argThree):
    print(argOne, argTwo, argThree)
    await asyncio.sleep(1)

asyncio.run(foobar("a", "b", "c"))
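An optional refinement (my addition, not part of the answer above): functools.wraps can be added so the decorated coroutine keeps its original name and docstring, and the wrapper can pass through the wrapped result:

import asyncio
import functools

def decorator(f):
    @functools.wraps(f)
    async def wrapper(*args, **kwargs):
        try:
            return await f(*args, **kwargs)
        except Exception as e:
            print('Something went wrong.', e)
    return wrapper

@decorator
async def foobar(argOne, argTwo, argThree):
    print(argOne, argTwo, argThree)

asyncio.run(foobar("a", "b", "c"))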
11
13
70,534,207
2021-12-30
https://stackoverflow.com/questions/70534207/how-to-use-intel-oneapi-in-right-way
Today, I'm wondering what the difference is between the conda that ships with oneAPI and the conda in Anaconda, and how to use oneAPI the right way to get the most out of the latest Intel Core gen 12. The oneAPI installation includes its own conda, but I cannot use it like a normal conda:
- It does not contain conda-build and several other packages that the normal conda in Anaconda has.
- I cannot create or clone environments from the "base" of the oneAPI conda. If I clone "base" to a new environment with conda create --name new_env --clone base and then activate "new_env", conda no longer works and warns me that it does not exist. The warning is as below.
'conda' is not recognized as an internal or external command, operable program or batch file.
However, training any DNN model with the oneAPI conda is about 30% faster than with the conda in Anaconda, and it also performs better in data preprocessing tasks. I really want to keep the advantages of Python in the oneAPI conda environment while using it like a normal conda in Anaconda. So, how can I merge them into one to make it easier to use, or how can I fix this problem with the conda environment of the oneAPI toolkit?
The conda executable in oneAPI does not support all the features of the conda in Anaconda, but it can be used to download both Intel-optimized packages and regular Anaconda packages, and it gives a performance improvement for the Intel-optimized packages. You are getting the warning 'conda' is not recognized as an internal or external command, operable program or batch file because setvars has not been sourced.
Using Intel conda packages with Continuum's Python: if you want to install Intel packages into an environment with Continuum's Python, do not add the "intel" channel to your configuration file, because that will cause all your Continuum packages to be replaced with Intel builds where available. Rather, specify the "intel" channel on the command line with the "-c intel" parameter and the "--no-update-deps" flag to avoid switching other packages, such as Python itself, to Intel's builds.
Use the following command to install Intel-optimized packages with the conda executable in oneAPI: conda install "Package_name" -c intel --no-update-deps, where Package_name can be mkl, numpy, etc. Available Intel packages can be viewed here: https://anaconda.org/intel/packages
Sample installation for the Intel-optimized numpy package: conda install numpy -c intel --no-update-deps
5
1
70,573,066
2022-1-4
https://stackoverflow.com/questions/70573066/conditional-counting-in-pandas-df
I have a dataframe of stock prices: df = pd.DataFrame([100, 101, 99, 100,105,104,106], columns=['P']) I would like to create a counter column, that counts either if the current price is higher than the previous row's price, BUT if the current price is lower than the previous row's price, only counts again, once that price is exceeded (like a watermark). Below is the desired column: df['counter'] = [np.nan, 1, 1, 1,2,2,3] So the second row's price is 101 which exceeds 100, so the counter is 1, then the price drops to 99 and comes back to 100, but the counter is still 1, because we have not reached the 101 price (which is the watermark), then once we exceed 101 in row 4, with a price of 105, the counter goes to 2, then the price drops to 104 again, so we stay at 2, and then when it goes to 106 we increase the counter to 3.
Algorithm: Find what current maximum previously observed value was at each row (inclusive of the current row). See what the maximum previously observed value was for the preceding row. Each time a difference exists between these two values, we know that a new water mark has been hit within the current row. Calculate the cumulative sum of the number of times a new water mark has been hit. df["current_observed_max"] = df["p"].cummax() df["previous_observed_max"] = df["current_observed_max"].shift(1) df["is_new_watermark"] =(df["current_observed_max"] != df["previous_observed_max"]).astype(int) df["counter"] = df["is_new_watermark"].cumsum() With this you may need to subtract 1 depending on how you would like to handle the first observed number.
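Wrapping up the steps above on the data from the question (my own sketch; the subtraction and the NaN in the first row reproduce the desired output exactly):

import numpy as np
import pandas as pd

df = pd.DataFrame([100, 101, 99, 100, 105, 104, 106], columns=['P'])

current_max = df['P'].cummax()
is_new_watermark = (current_max != current_max.shift(1)).astype(int)

df['counter'] = (is_new_watermark.cumsum() - 1).astype(float)  # float so the first row can hold NaN
df.loc[0, 'counter'] = np.nan  # the first price has nothing to exceed

print(df['counter'].tolist())  # [nan, 1.0, 1.0, 1.0, 2.0, 2.0, 3.0]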
4
3
70,552,775
2022-1-2
https://stackoverflow.com/questions/70552775/multiprocess-inherently-shared-memory-in-no-longer-working-on-python-3-10-comin
I understand there are a variety of techniques for sharing memory and data structures between processes in python. This question is specifically about this inherently shared memory in python scripts that existed in python 3.6 but seems to no longer exist in 3.10. Does anyone know why and if it's possible to bring this back in 3.10? Or what this change that I'm observing is? I've upgraded my Mac to Monterey and it no longer supports python 3.6, so I'm forced to upgrade to either 3.9 or 3.10+. Note: I tend to develop on Mac and run production on Ubuntu. Not sure if that factors in here. Historically with 3.6, everything behaved the same regardless of OS. Make a simple project with the following python files myLibrary.py MyDict = {} test.py import threading import time import multiprocessing import myLibrary def InitMyDict(): myLibrary.MyDict = {'woot': 1, 'sauce': 2} print('initialized myLibrary.MyDict to ', myLibrary.MyDict) def MainLoop(): numOfSubProcessesToStart = 3 for i in range(numOfSubProcessesToStart): t = threading.Thread( target=CoolFeature(), args=()) t.start() while True: time.sleep(1) def CoolFeature(): MyProcess = multiprocessing.Process( target=SubProcessFunction, args=()) MyProcess.start() def SubProcessFunction(): print('SubProcessFunction: ', myLibrary.MyDict) if __name__ == '__main__': InitMyDict() MainLoop() When I run this on 3.6 it has a significantly different behavior than 3.10. I do understand that a subprocess cannot modify the memory of the main process, but it is still super convenient to access the main process' data structure that was previously set up as opposed to moving every little tiny thing into shared memory just to read a simple dictionary/int/string/etc. Python 3.10 output: python3.10 test.py initialized myLibrary.MyDict to {'woot': 1, 'sauce': 2} SubProcessFunction: {} SubProcessFunction: {} SubProcessFunction: {} Python 3.6 output: python3.6 test.py initialized myLibrary.MyDict to {'woot': 1, 'sauce': 2} SubProcessFunction: {'woot': 1, 'sauce': 2} SubProcessFunction: {'woot': 1, 'sauce': 2} SubProcessFunction: {'woot': 1, 'sauce': 2} Observation: Notice that in 3.6, the subprocess can view the value that was set from the main process. But in 3.10, the subprocess sees an empty dictionary.
In short, since 3.8, CPython uses the spawn start method on macOS. Before that it used the fork method. On other UNIX platforms the fork start method is used, which means that every new multiprocessing process is an exact copy of the parent at the time of the fork. The spawn method means that it starts a new Python interpreter for each new multiprocessing process. According to the documentation: The child process will only inherit those resources necessary to run the process object's run() method. It will import your program into this new interpreter, so starting processes et cetera should only be done from within the if __name__ == '__main__': block! This means you cannot count on variables from the parent process being available in the children, unless they are module-level constants, which would be imported. So the change is significant.

What can be done?

If the required information could be a module-level constant, that would solve the problem in the simplest way. If that is not possible (e.g. because the data needs to be generated at runtime) you could have the parent write the information to be shared to a file, e.g. in JSON format, before it starts the other processes. Then the children could simply read this. That is probably the next simplest solution.

Using a multiprocessing.Manager would allow you to share a dict between processes. There is however a certain amount of overhead associated with this.

Or you could try calling multiprocessing.set_start_method("fork") before creating processes or pools and see if it doesn't crash in your case. That would revert to the pre-3.8 method on macOS. But as documented in this bug, there are real problems with using the fork method on macOS. Reading the issue indicates that fork might be OK as long as you don't use threads.
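A minimal sketch of the set_start_method suggestion applied to the script from the question (forcing "fork" restores the pre-3.8 behaviour on macOS, with the caveats mentioned above):

import multiprocessing
import myLibrary

def InitMyDict():
    myLibrary.MyDict = {'woot': 1, 'sauce': 2}

def SubProcessFunction():
    print('SubProcessFunction: ', myLibrary.MyDict)

if __name__ == '__main__':
    multiprocessing.set_start_method("fork")  # must be called once, before any Process is created
    InitMyDict()
    multiprocessing.Process(target=SubProcessFunction).start()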
6
10
70,566,660
2022-1-3
https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow
I'm reading a table on PostgreSQL using pandas.read_sql, then I'm converting it as a Pyarrow table and saving it partitioned in local filesystem. # Retrieve schema.table data from database def basename_file(date_partition): basename_file = f"{table_schema}.{table_name}-{date}.parquet" return basename_file def get_table_data(table_schema, table_name, date): s = "" s += "SELECT" s += " *" s += " , date(created_on) as date_partition" s += " FROM {table_schema}.{table_name}" s += " WHERE created_on = '{date}';" sql = s.format(table_schema = table_schema, table_name = table_name, date = date) # print(sql) df = pd.read_sql(sql, db_conn) result = pa.Table.from_pandas(df) pq.write_to_dataset(result, root_path = f"{dir_name}", partition_cols = ['date_partition'], partition_filename_cb = basename_file, use_legacy_dataset = True ) # print(result) return df Problem is that my SELECT has a column with some rows as null. When I partition this to write (write_to_dataset) in local filesystem, a few files has only rows with that column as null, so the partitioned Parquet files doesn't have this column. When I try to read that by multiple partitions, I get a schema error, because one of the columns cannot be casted correctly. Why is that? Is there any setting I could apply to write_to_dataset to manage this? I've been looking for a workaround for this without success... My main goal here is to export data daily, partitioned by transaction date and read data from any period needed, not caring about schema evolution: that way, row value for null columns will appear as null, simply put.
If you can post the exact error message that might be more helpful. I did some experiments with pyarrow 6.0.1 and I found that things work ok as long as the first file contains some valid values for all columns (pyarrow will use this first file to infer the schema for the entire dataset). The "first" file is not technically well defined when doing dataset discovery but, at the moment, for a local dataset it should be the first file in alphabetical order. If the first file does not have values for all columns then I get the following error: Error: Unsupported cast from string to null using function cast_null I'm a bit surprised as this sort of cast should be pretty easy (to cast to null just throw away all the data). That being said, you probably don't want all your data thrown away anyways. The easiest solution is to provide the full expected schema when you are creating your dataset. If you do not know this ahead of time you can figure it out yourself by inspecting all of the files in the dataset and using pyarrow's unify_schemas. I have an example of doing this in this answer. Here is some code demonstrating my findings: import os import pyarrow as pa import pyarrow.parquet as pq import pyarrow.dataset as ds tab = pa.Table.from_pydict({'x': [1, 2, 3], 'y': [None, None, None]}) tab2 = pa.Table.from_pydict({'x': [4, 5, 6], 'y': ['x', 'y', 'z']}) os.makedirs('/tmp/null_first_dataset', exist_ok=True) pq.write_table(tab, '/tmp/null_first_dataset/0.parquet') pq.write_table(tab2, '/tmp/null_first_dataset/1.parquet') os.makedirs('/tmp/null_second_dataset', exist_ok=True) pq.write_table(tab, '/tmp/null_second_dataset/1.parquet') pq.write_table(tab2, '/tmp/null_second_dataset/0.parquet') try: dataset = ds.dataset('/tmp/null_first_dataset') tab = dataset.to_table() print(f'Was able to read in null_first_dataset without schema.') print(tab) except Exception as ex: print('Was not able to read in null_first_dataset without schema') print(f' Error: {ex}') print() try: dataset = ds.dataset('/tmp/null_second_dataset') tab = dataset.to_table() print(f'Was able to read in null_second_dataset without schema.') print(tab) except: print('Was not able to read in null_second_dataset without schema') print(f' Error: {ex}') print() dataset = ds.dataset('/tmp/null_first_dataset', schema=tab2.schema) tab = dataset.to_table() print(f'Was able to read in null_first_dataset by specifying schema.') print(tab)
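A rough sketch of the "inspect all the files and unify their schemas" idea mentioned above (the glob-based file discovery is my own assumption about the dataset layout, and the null-typed column should be promoted to the richer type during unification):

import glob
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as ds

files = sorted(glob.glob('/tmp/null_first_dataset/*.parquet'))
schemas = [pq.read_schema(f) for f in files]  # reads only the file footers, not the data
full_schema = pa.unify_schemas(schemas)
dataset = ds.dataset('/tmp/null_first_dataset', schema=full_schema)
print(dataset.to_table())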
4
7
70,567,344
2022-1-3
https://stackoverflow.com/questions/70567344/easyocr-segmentation-fault-core-dumped
I got this issue after pip install easyocr in a Python env:

import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('./reports/dilate/NP6221833_126.png', workers=1)

which finally ends with

Segmentation fault (core dumped)
Solved by downgrading to the November 2021 version of OpenCV: pip install opencv-python-headless==4.5.4.60
6
10
70,563,360
2022-1-3
https://stackoverflow.com/questions/70563360/grouping-aggregating-on-level-1-index-assigning-different-aggregation-functi
I have a dataframe df:

     2019  2020  2021  2022
A 1    10    15    15    31
  2     5     4     7     9
  3   0.3   0.4   0.4   0.7
  4   500   600    70    90
B 1    10    15    15    31
  2     5     4     7     9
  3   0.3   0.4   0.4   0.7
  4   500   600    70    90
C 1    10    15    15    31
  2     5     4     7     9
  3   0.3   0.4   0.4   0.7
  4   500   600    70    90
D 1    10    15    15    31
  2     5     4     7     9
  3   0.3   0.4   0.4   0.7
  4   500   600    70    90

I am trying to group by the level 1 index, 1, 2, 3, 4, and assign a different aggregation function to each of those indexes, so that 1 is aggregated by sum, 2 by mean, and so on. The end result would look like this:

   2019  2020  2021  2022
1    40   ...   ...         # sum
2     5   ...   ...         # mean
3   0.3   ...   ...         # mean
4  2000   ...   ...         # sum

I tried:

df.groupby(level = 1).agg({'1':'sum', '2':'mean', '3':'sum', '4':'mean'})

But I get that none of 1, 2, 3, 4 are in the columns, which they are not, so I am not sure how I should proceed with this problem.
You could use apply with a custom function as follows:

import numpy as np

aggs = {1: np.sum, 2: np.mean, 3: np.mean, 4: np.sum}

def f(x):
    func = aggs.get(x.name, np.sum)
    return func(x)

df.groupby(level=1).apply(f)

The above code uses sum by default so 1 and 4 could be removed from aggs without any different results. In this way, only groups that should be handled differently from the rest need to be specified. Result:

      2019    2020   2021   2022
1     40.0    60.0   60.0  124.0
2      5.0     4.0    7.0    9.0
3      0.3     0.4    0.4    0.7
4   2000.0  2400.0  280.0  360.0
5
4
70,520,120
2021-12-29
https://stackoverflow.com/questions/70520120/attributeerror-module-setuptools-distutils-has-no-attribute-version
I was trying to train a model using tensorboard. While executing, I got this error: $ python train.py Traceback (most recent call last): File "train.py", line 6, in <module> from torch.utils.tensorboard import SummaryWriter File "C:\Users\91960\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\tensorboard\__init__.py", line 4, in <module> LooseVersion = distutils.version.LooseVersion AttributeError: module 'setuptools._distutils' has no attribute 'version'. I'm using python 3.8.9 64-bit & tensorflow with distutils already installed which is required by tensorboard. Why is this happening?
This command did the trick for me: python3 -m pip install setuptools==59.5.0 pip successfully installed this version: Successfully installed setuptools-60.1.0 instead of setuptools-60.2.0
44
42
70,554,095
2022-1-2
https://stackoverflow.com/questions/70554095/counting-triangles-in-a-graph-by-iteratively-removing-high-degree-nodes
Computing nx.triangles(G) on an undirected graph with about 150 thousand nodes and 2 million edges, is currently very slow (on the scale of 80 hours). If the node degree distribution is highly skewed, is there any problem with counting triangles using the following procedure? import networkx as nx def largest_degree_node(G): # this was improved using suggestion by Stef in the comments return max(G.degree(), key=lambda x: x[1])[0] def count_triangles(G): G=G.copy() triangle_counts = 0 while len(G.nodes()): focal_node = largest_degree_node(G) triangle_counts += nx.triangles(G, nodes=[focal_node])[focal_node] G.remove_node(focal_node) return triangle_counts G = nx.erdos_renyi_graph(1000, 0.1) # compute triangles with nx triangles_nx = int(sum(v for k, v in nx.triangles(G).items()) / 3) # compute triangles iteratively triangles_iterative = count_triangles(G) # assertion passes assert int(triangles_nx) == int(triangles_iterative) The assertion passes, but I am wary that there are some edge cases where this iterative approach will not work.
Assuming the graph is not directed (ie. G.is_directed() == False), the number of triangles can be efficiently found by finding nodes that are both neighbors of neighbors and direct neighbors of a same node. Pre-computing and pre-filtering the neighbors of nodes so that each triangle is counted only once helps to improve a lot the execution time. Here is the code: nodeNeighbours = { # The filtering of the set ensure each triangle is only computed once node: set(n for n in edgeInfos.keys() if n > node) for node, edgeInfos in G.adjacency() } triangleCount = sum( len(neighbours & nodeNeighbours[node2]) for node1, neighbours in nodeNeighbours.items() for node2 in neighbours ) The above code is about 12 times faster than the original iterative solution on the example graph. And up to 72 times faster on nx.erdos_renyi_graph(15000, 0.005).
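As a quick sanity check of the neighbour-set count against networkx's own triangle count, in the same spirit as the assertion in the question (my addition):

import networkx as nx

G = nx.erdos_renyi_graph(1000, 0.1)

nodeNeighbours = {
    node: set(n for n in edgeInfos.keys() if n > node)
    for node, edgeInfos in G.adjacency()
}
triangleCount = sum(
    len(neighbours & nodeNeighbours[node2])
    for node1, neighbours in nodeNeighbours.items()
    for node2 in neighbours
)

assert triangleCount == int(sum(nx.triangles(G).values()) / 3)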
5
2
70,556,229
2022-1-2
https://stackoverflow.com/questions/70556229/how-should-we-type-a-callable-with-additional-properties
As a toy example, let's use the Fibonacci sequence: def fib(n: int) -> int: if n < 2: return 1 return fib(n - 2) + fib(n - 1) Of course, this will hang the computer if we try to: print(fib(100)) So we decide to add memoization. To keep the logic of fib clear, we decide not to change fib and instead add memoization via a decorator: from typing import Callable from functools import wraps def remember(f: Callable[[int], int]) -> Callable[[int], int]: @wraps(f) def wrapper(n: int) -> int: if n not in wrapper.memory: wrapper.memory[n] = f(n) return wrapper.memory[n] wrapper.memory = dict[int, int]() return wrapper @remember def fib(n: int) -> int: if n < 2: return 1 return fib(n - 2) + fib(n - 1) Now there is no problem if we: print(fib(100)) 573147844013817084101 However, mypy complains that "Callable[[int], int]" has no attribute "memory", which makes sense, and usually I would want this complaint if I tried to access a property that is not part of the declared type... So, how should we use typing to indicate that wrapper, while a Callable, also has the property memory?
To describe something as "a callable with a memory attribute", you could define a protocol (Python 3.8+, or earlier versions with typing_extensions): from typing import Protocol class Wrapper(Protocol): memory: dict[int, int] def __call__(self, n: int) -> int: ... In use, the type checker knows that a Wrapper is valid as a Callable[[int], int] and allows return wrapper as well as the assignment to wrapper.memory: from functools import wraps from typing import Callable, cast def remember(f: Callable[[int], int]) -> Callable[[int], int]: @wraps(f) def _wrapper(n: int) -> int: if n not in wrapper.memory: wrapper.memory[n] = f(n) return wrapper.memory[n] wrapper = cast(Wrapper, _wrapper) wrapper.memory = dict() return wrapper Playground Unfortunately this requires wrapper = cast(Wrapper, _wrapper), which is not type safe - wrapper = cast(Wrapper, "foo") would also check just fine.
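If the cast is bothersome, one way to avoid it entirely (my own variation, not part of the answer above) is to implement the wrapper as a small class, so that memory is declared on a real type and no Protocol or cast is needed:

from typing import Callable

class Memoized:
    def __init__(self, f: Callable[[int], int]) -> None:
        self.f = f
        self.memory: dict[int, int] = {}

    def __call__(self, n: int) -> int:
        if n not in self.memory:
            self.memory[n] = self.f(n)
        return self.memory[n]

@Memoized
def fib(n: int) -> int:
    if n < 2:
        return 1
    return fib(n - 2) + fib(n - 1)

print(fib(100))         # 573147844013817084101
print(len(fib.memory))  # 101

The trade-off is that fib is now a Memoized instance rather than a plain function, so functools.wraps-style metadata is not carried over automatically.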
7
3
70,545,797
2021-12-31
https://stackoverflow.com/questions/70545797/finding-straight-lines-from-tightly-coupled-lines-and-noise-curvy-lines
I have this image for a treeline crop. I need to find the general direction in which the crop is aligned. I'm trying to get the Hough lines of the image, and then find the mode of distribution of angles. I've been following this tutorialon crop lines, however in that one, the crop lines are sparse. Here they are densely pack, and after grayscaling, blurring, and using canny edge detection, this is what i get import cv2 import numpy as np import matplotlib.pyplot as plt img = cv2.imread('drive/MyDrive/tree/sample.jpg') gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) gauss = cv2.GaussianBlur(gray, (3,3), 3) plt.figure(figsize=(15,15)) plt.subplot(1,2,1) plt.imshow(gauss) gscale = cv2.Canny(gauss, 80, 140) plt.subplot(1,2,2) plt.imshow(gscale) plt.show() (Left side blurred image without canny, left one preprocessed with canny) After that, I followed the tutorial and "skeletonized" the preprocessed image size = np.size(gscale) skel = np.zeros(gscale.shape, np.uint8) ret, gscale = cv2.threshold(gscale, 128, 255,0) element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3,3)) done = False while not done: eroded = cv2.erode(gscale, element) temp = cv2.dilate(eroded, element) temp = cv2.subtract(gscale, temp) skel = cv2.bitwise_or(skel, temp) gscale = eroded.copy() zeros = size - cv2.countNonZero(gscale) if zeros==size: done = True Giving me As you can see, there are a bunch of curvy lines still. When using the HoughLines algorithm on it, there are 11k lines scattered everywhere lines = cv2.HoughLinesP(skel,1,np.pi/180,130) a,b,c = lines.shape for i in range(a): rho = lines[i][0][0] theta = lines[i][0][1] a = np.cos(theta) b = np.sin(theta) x0 = a*rho y0 = b*rho x1 = int(x0 + 1000*(-b)) y1 = int(y0 + 1000*(a)) x2 = int(x0 - 1000*(-b)) y2 = int(y0 - 1000*(a)) cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2, cv2.LINE_AA)#showing the results: plt.figure(figsize=(15,15)) plt.subplot(121)#OpenCV reads images as BGR, this corrects so it is displayed as RGB plt.plot() plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) plt.title('Row Detection') plt.xticks([]) plt.yticks([]) plt.subplot(122) plt.plot() plt.imshow(skel,cmap='gray') plt.title('Skeletal Image') plt.xticks([]) plt.yticks([]) plt.show() I am a newbie when it comes to cv2, so I have 0 clue what to do. Searched and tried a bunch of stuff but none works. How can I remove the mildly big dots, and remove the squiggly lines?
You can use a 2D FFT to find the general direction in which the crop is aligned (as proposed by mozway in the comments). The idea is that the general direction can be easily extracted from centred beaming rays appearing in the magnitude spectrum when the input contains many lines in the same direction. You can find more information about how it works in this previous post. It works directly with the input image, but it is better to apply the Gaussian + Canny filters. Here is the interesting part of the magnitude spectrum of the filtered gray image: The main beaming ray can be easily seen. You can extract its angle by iterating over many lines with an increasing angle and sum the magnitude values on each line as in the following figure: Here is the magnitude sum of each line plotted against the angle (in radian) of the line: Based on that, you just need to find the angle that maximize the computed sum. Here is the resulting code: def computeAngle(arr): # Naive inefficient algorithm n, m = arr.shape yCenter, xCenter = (n-1, m//2-1) lineLen = m//2-2 sMax = 0.0 bestAngle = np.nan for angle in np.arange(0, math.pi, math.pi/300): i = np.arange(lineLen) y, x = (np.sin(angle) * i + 0.5).astype(np.int_), (np.cos(angle) * i + 0.5).astype(np.int_) s = np.sum(arr[yCenter-y, xCenter+x]) if s > sMax: bestAngle = angle sMax = s return bestAngle # Load the image in gray img = cv2.imread('lines.jpg') gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Apply some filters gauss = cv2.GaussianBlur(gray, (3,3), 3) gscale = cv2.Canny(gauss, 80, 140) # Compute the 2D FFT of real values freqs = np.fft.rfft2(gscale) # Shift the frequencies (centering) and select the low frequencies upperPart = freqs[:freqs.shape[0]//4,:freqs.shape[1]//2] lowerPart = freqs[-freqs.shape[0]//4:,:freqs.shape[1]//2] filteredFreqs = np.vstack((lowerPart, upperPart)) # Compute the magnitude spectrum magnitude = np.log(np.abs(filteredFreqs)) # Correct the angle magnitude = np.rot90(magnitude).copy() # Find the major angle bestAngle = computeAngle(magnitude)
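Note that the snippet above assumes import math, import numpy as np and import cv2 at the top (they are used but not shown). The returned angle is in radians; for reporting it can be converted in one line, e.g.:

print(round(math.degrees(bestAngle), 3))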
6
4
70,546,198
2021-12-31
https://stackoverflow.com/questions/70546198/python-beautiful-soup-get-correct-column-headers-for-each-table
The following code gets player data but each dataset is different. The first data it sees is the quarterback data, so it uses these columns for all the data going forward. How can I change the header so that for every different dataset it encounters, the correct headers are used with the correct data? import pandas as pd import csv from pprint import pprint from bs4 import BeautifulSoup import requests url = 'https://www.espn.com/nfl/boxscore/_/gameId/401326313'# Create object page soup = BeautifulSoup(requests.get(url).content, "html.parser") rows = soup.select("table.mod-data tr") #rows = soup.find_all("table.mod-data tr") headers = [header.get_text(strip=True).encode("utf-8") for header in rows[0].find_all("th")] data = [dict(zip(headers, [cell.get_text(strip=True).encode("utf-8") for cell in row.find_all("td")])) for row in rows[1:]] df = pd.DataFrame(data) df.to_csv('_Data_{}.csv'.format(pd.datetime.now().strftime("%Y-%m-%d %H%M%S")),index=False) # see what the data looks like at this point pprint(data)
As mentioned expected result is not that clear, but if you just wanna read the tables use pandas.read_html to achieve your goal - index_col=0 avoids that the first column, that has no header is named Unnamed_0. pd.read_html('https://www.espn.com/nfl/boxscore/_/gameId/401326313',index_col=0) Example import pandas as pd for table in pd.read_html('https://www.espn.com/nfl/boxscore/_/gameId/401326313',index_col=0): pd.DataFrame(table).to_csv('_Data_{}.csv'.format(pd.datetime.today().strftime("%Y-%m-%d %H%M%S.%f"))) As alternative you can reset_index() and use to_csv(index=False): pd.DataFrame(table).rename_axis('').reset_index().to_csv('_Data_{}.csv'.format(pd.datetime.today().strftime("%Y-%m-%d %H%M%S.%f")),index=False) EDIT Using captions in tables and to store results in named csv files: import pandas as pd from bs4 import BeautifulSoup url = 'https://www.espn.com/nfl/boxscore/_/gameId/401326313' soup = BeautifulSoup(requests.get(url).content, "html.parser") for table in soup.select('article.boxscore-tabs table'): caption = '_'.join(table.parent.select_one('.table-caption').text.split(' ')) df = pd.read_html(table.prettify(),index_col=0)[0].rename_axis('').reset_index() df.insert(0, 'caption', caption) df.to_csv(f'_DATA_{caption}_{pd.datetime.now().strftime("%Y-%m-%d %H%M%S")}.csv',index=False) Output of your csv files caption,,C/ATT,YDS,AVG,TD,INT,SACKS,QBR,RTG Miami_Passing,Tua Tagovailoa T. Tagovailoa,16/27,202,7.5,1,1,2-17,47.5,79.6 Miami_Passing,TEAM,16/27,185,7.5,1,1,2-17,--,79.6 caption,,C/ATT,YDS,AVG,TD,INT,SACKS,QBR,RTG New_England_Passing,Mac Jones M. Jones,29/39,281,7.2,1,0,1-13,76.9,102.6 New_England_Passing,TEAM,29/39,268,7.2,1,0,1-13,--,102.6 caption,,CAR,YDS,AVG,TD,LONG Miami_Rushing,Myles Gaskin M. Gaskin,9,49,5.4,0,15 Miami_Rushing,Malcolm Brown M. Brown,5,16,3.2,0,5 Miami_Rushing,Jacoby Brissett J. Brissett,2,4,2.0,0,2 Miami_Rushing,Salvon Ahmed S. Ahmed,3,4,1.3,0,8 Miami_Rushing,Tua Tagovailoa T. Tagovailoa,4,1,0.3,1,3 Miami_Rushing,TEAM,23,74,3.2,1,15 caption,,CAR,YDS,AVG,TD,LONG New_England_Rushing,Damien Harris D. Harris,23,100,4.3,0,35 New_England_Rushing,James White J. White,4,12,3.0,0,10 New_England_Rushing,Jonnu Smith J. Smith,1,6,6.0,0,6 New_England_Rushing,Brandon Bolden B. Bolden,1,5,5.0,0,5 New_England_Rushing,Rhamondre Stevenson R. Stevenson,1,2,2.0,0,2 New_England_Rushing,TEAM,30,125,4.2,0,35 caption,,REC,YDS,AVG,TD,LONG,TGTS Miami_Receiving,DeVante Parker D. Parker,4,81,20.3,0,30,7 Miami_Receiving,Jaylen Waddle J. Waddle,4,61,15.3,1,36,5 Miami_Receiving,Myles Gaskin M. Gaskin,5,27,5.4,0,12,5 Miami_Receiving,Salvon Ahmed S. Ahmed,2,24,12.0,0,18,3 Miami_Receiving,Durham Smythe D. Smythe,1,9,9.0,0,9,2 Miami_Receiving,Albert Wilson A. Wilson,0,0,0.0,0,0,2 Miami_Receiving,Mike Gesicki M. Gesicki,0,0,0.0,0,0,3 Miami_Receiving,TEAM,16,202,12.6,1,36,27 caption,,REC,YDS,AVG,TD,LONG,TGTS New_England_Receiving,Nelson Agholor N. Agholor,5,72,14.4,1,25,7 New_England_Receiving,James White J. White,6,49,8.2,0,26,7 New_England_Receiving,Jakobi Meyers J. Meyers,6,44,7.3,0,22,9 New_England_Receiving,Jonnu Smith J. Smith,5,42,8.4,0,11,5 New_England_Receiving,Hunter Henry H. Henry,3,31,10.3,0,16,3 New_England_Receiving,Kendrick Bourne K. Bourne,1,17,17.0,0,17,3 New_England_Receiving,Damien Harris D. Harris,2,17,8.5,0,9,3 New_England_Receiving,Rhamondre Stevenson R. Stevenson,1,9,9.0,0,9,1 New_England_Receiving,TEAM,29,281,9.7,1,26,38 caption,,FUM,LOST,REC Miami_Fumbles,Xavien Howard X. Howard,0,0,1 Miami_Fumbles,Zach Sieler Z. 
Sieler,0,0,1 Miami_Fumbles,TEAM,0,0,2 caption,,FUM,LOST,REC New_England_Fumbles,David Andrews D. Andrews,0,0,1 New_England_Fumbles,Damien Harris D. Harris,1,1,0 New_England_Fumbles,Rhamondre Stevenson R. Stevenson,1,1,0 New_England_Fumbles,Jonnu Smith J. Smith,1,0,1 New_England_Fumbles,Mac Jones M. Jones,1,0,0 New_England_Fumbles,TEAM,4,2,2 caption,,tackles,tackles,tackles,tackles,misc,misc,misc,misc ,,TOT,SOLO,SACKS,TFL,PD,QB HTS,TD,Unnamed: 8_level_1 Miami_Defensive,Jerome Baker J. Baker,12,9,0,0,0,0,0, Miami_Defensive,Eric Rowe E. Rowe,9,6,0,0,0,0,0, Miami_Defensive,Byron Jones B. Jones,6,5,0,0,1,0,0, Miami_Defensive,Nik Needham N. Needham,6,5,0,0,0,0,0, Miami_Defensive,Sam Eguavoen S. Eguavoen,6,2,0,0,0,3,0, Miami_Defensive,Xavien Howard X. Howard,5,4,0,0,0,0,0, Miami_Defensive,Jason McCourty J. McCourty,5,3,0,0,1,0,0, Miami_Defensive,Brennan Scarlett B. Scarlett,5,2,0,0,1,1,0, Miami_Defensive,Andrew Van Ginkel A. Van Ginkel,5,2,0,0,0,1,0, Miami_Defensive,John Jenkins J. Jenkins,4,4,0,0,0,0,0, Miami_Defensive,Emmanuel Ogbah E. Ogbah,3,3,0,1,1,1,0, Miami_Defensive,Zach Sieler Z. Sieler,3,2,0,1,0,0,0, Miami_Defensive,Christian Wilkins C. Wilkins,3,2,0,0,0,1,0, Miami_Defensive,Elandon Roberts E. Roberts,2,2,0,0,1,1,0, Miami_Defensive,Jamal Perry J. Perry,2,2,0,0,0,0,0, Miami_Defensive,Brandon Jones B. Jones,2,2,0,0,0,0,0, Miami_Defensive,Jevon Holland J. Holland,2,2,0,0,0,0,0, Miami_Defensive,Adam Butler A. Butler,2,1,0,0,0,0,0, Miami_Defensive,Mack Hollins M. Hollins,2,0,0,0,0,0,0, Miami_Defensive,Team Team,1,1,1,0,0,0,0, Miami_Defensive,Mike Gesicki M. Gesicki,1,1,0,0,0,0,0, Miami_Defensive,Durham Smythe D. Smythe,1,0,0,0,0,0,0, Miami_Defensive,Jaelan Phillips J. Phillips,0,0,0,0,0,1,0, Miami_Defensive,TEAM,87,60,1,2,5,9,0, caption,,tackles,tackles,tackles,tackles,misc,misc,misc,misc ,,TOT,SOLO,SACKS,TFL,PD,QB HTS,TD,Unnamed: 8_level_1 New_England_Defensive,Kyle Dugger K. Dugger,7,6,0,1,0,0,0, New_England_Defensive,Devin McCourty D. McCourty,7,4,0,0,0,0,0, New_England_Defensive,Ja'Whaun Bentley J. Bentley,4,4,0,1,0,0,0, New_England_Defensive,Matthew Judon M. Judon,4,3,0,1,0,1,0, New_England_Defensive,Lawrence Guy L. Guy,4,2,0,0,0,1,0, New_England_Defensive,Dont'a Hightower D. Hightower,4,2,0,0,0,0,0, New_England_Defensive,J.C. Jackson J.C. Jackson,3,3,0,0,1,0,0, New_England_Defensive,Kyle Van Noy K. Van Noy,3,2,1,1,1,1,0, New_England_Defensive,Adrian Phillips A. Phillips,3,2,0,2,0,0,0, New_England_Defensive,Davon Godchaux D. Godchaux,3,2,0,0,0,0,0, New_England_Defensive,Jalen Mills J. Mills,2,2,0,0,1,0,0, New_England_Defensive,Josh Uche J. Uche,1,1,1,1,0,1,0, New_England_Defensive,Carl Davis C. Davis,1,1,0,0,0,0,0, New_England_Defensive,Chase Winovich C. Winovich,1,1,0,0,0,0,0, New_England_Defensive,Joejuan Williams J. Williams,1,1,0,0,0,0,0, New_England_Defensive,Christian Barmore C. Barmore,1,0,0,0,0,0,0, New_England_Defensive,Jonathan Jones J. Jones,0,0,0,0,1,0,0, New_England_Defensive,TEAM,49,36,2,7,4,4,0, caption,,INT,YDS,TD Miami_Interceptions,No Miami Interceptions,,, caption,,INT,YDS,TD New_England_Interceptions,Jonathan Jones J. Jones,1,0,0 New_England_Interceptions,TEAM,1,0,0 caption,,NO,YDS,AVG,LONG,TD Miami_Kick_Returns,No Miami Kick Returns,,,,, caption,,NO,YDS,AVG,LONG,TD New_England_Kick_Returns,Brandon Bolden B. Bolden,1,23,23.0,23,0 New_England_Kick_Returns,Gunner Olszewski G. Olszewski,1,17,17.0,17,0 New_England_Kick_Returns,TEAM,2,40,20.0,23,0 caption,,NO,YDS,AVG,LONG,TD Miami_Punt_Returns,Jakeem Grant Sr. J. 
Grant Sr.,1,18,18.0,18,0 Miami_Punt_Returns,TEAM,1,18,18.0,18,0 caption,,NO,YDS,AVG,LONG,TD New_England_Punt_Returns,Gunner Olszewski G. Olszewski,3,20,6.7,14,0 New_England_Punt_Returns,TEAM,3,20,6.7,14,0 caption,,FG,PCT,LONG,XP,PTS Miami_Kicking,Jason Sanders J. Sanders,1/1,100.0,48,2/2,5 Miami_Kicking,TEAM,1/1,100.0,48,2/2,5 caption,,FG,PCT,LONG,XP,PTS New_England_Kicking,Nick Folk N. Folk,3/3,100.0,42,1/1,10 New_England_Kicking,TEAM,3/3,100.0,42,1/1,10 caption,,NO,YDS,AVG,TB,In 20,LONG Miami_Punting,Michael Palardy M. Palardy,4,180,45.0,1,0,52 Miami_Punting,TEAM,4,180,45.0,1,0,52 caption,,NO,YDS,AVG,TB,In 20,LONG New_England_Punting,Jake Bailey J. Bailey,2,99,49.5,1,0,62 New_England_Punting,TEAM,2,99,49.5,1,0,62
5
1
70,534,339
2021-12-30
https://stackoverflow.com/questions/70534339/adding-nodes-to-a-disconnected-graph-in-order-to-fully-connect-the-graph-compone
I have a graph where each node has a spatial position given by (x,y), and two nodes are only connected by an edge if the Euclidean distance between them is sqrt(2) or less. Here's my example:

import networkx as nx

G = nx.Graph()
G.add_node(1, pos=(1,1))
G.add_node(2, pos=(2,2))
G.add_node(3, pos=(1,2))
G.add_node(4, pos=(1,4))
G.add_node(5, pos=(2,5))
G.add_node(6, pos=(4,2))
G.add_node(7, pos=(5,2))
G.add_node(8, pos=(5,3))

# Connect component one
G.add_edge(1,2)
G.add_edge(1,3)
G.add_edge(2,3)

# Connect component two
G.add_edge(6,7)

# Connect component three
G.add_edge(6,8)
G.add_edge(7,8)
G.add_edge(4,5)

pos = nx.get_node_attributes(G, 'pos')
nx.draw(G, pos)

My question is, how can I determine the optimal position and number of additional nodes such that the graph components are connected, whilst ensuring that any additional node is always within sqrt(2) of an existing node?
I am quite convinced that this problem is NP-hard. The closest problem I know is the geometric Steiner tree problem with octilinear metric. I have two rather quick-and-dirty suggestions. Both are heuristic.

1st idea: Formulate the problem as a Euclidean Steiner tree problem (https://en.wikipedia.org/wiki/Steiner_tree_problem#Euclidean_Steiner_tree), where you consider just the nodes of your problem and forget about the edges at first. Solve the problem by using GeoSteiner: http://www.geosteiner.com/ This should quickly give you a solution for problems with 10000 or more nodes (if you need to solve bigger problems, you can write the problem out with GeoSteiner after the full Steiner tree generation and use https://scipjack.zib.de/). There is no Python interface, but you can just write your problem to a plain text file; the syntax is quite easy. Afterward, put additional nodes into the solution provided by GeoSteiner such that the sqrt(2) condition is satisfied. Finally, you need to do some clean-up to get rid of redundant edges, because the solution will not take into account that you already have edges in your original problem. Take all the edges and nodes that you have computed so far and define a weighted graph by giving all of your original edges weight 0 and all of the newly added edges weight 1. Consider a Steiner tree problem in graphs (https://en.wikipedia.org/wiki/Steiner_tree_problem#Steiner_tree_in_graphs_and_variants) on this weighted graph, where the terminal set corresponds to your original nodes. Solve this Steiner tree problem with SCIP-Jack: https://scipjack.zib.de/.

2nd idea: Consider your problem directly as a Steiner tree problem in graphs, as follows. Assign each of the original edges weight 0 and consider all original nodes as terminals. Add additional nodes and edges at distance at most sqrt(2) from each other. For example, you could put a big rectangle around all your connected components and, from each node, recursively add 8 additional nodes at angles of 0, 45, 90, ... degrees, at a distance of sqrt(2) and connected by an edge of weight 1 in the Steiner tree problem in graphs, as long as they stay within the rectangle. If one of these nodes is within distance sqrt(2) of one of your original nodes, connect them directly with an edge of weight 1. Solve the corresponding Steiner tree problem in graphs with SCIP-Jack.
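Below is a rough, illustrative sketch of the 2nd idea in Python, using networkx's built-in 2-approximation for the Steiner tree in graphs instead of SCIP-Jack, and a plain square grid of candidate nodes instead of the recursive 8-direction construction. The grid spacing, the helper name connect_components and the candidate-node labels are my own assumptions, not part of the original answer; it expects the graph G with 'pos' attributes from the question.

import itertools
import math
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def connect_components(G, step=1.0, max_dist=math.sqrt(2)):
    pos = nx.get_node_attributes(G, 'pos')
    terminals = list(G.nodes)

    # Weighted auxiliary graph: original edges are free (weight 0)
    H = nx.Graph()
    H.add_nodes_from((n, {'pos': p}) for n, p in pos.items())
    H.add_edges_from((u, v, {'weight': 0}) for u, v in G.edges)

    # Candidate Steiner nodes on a grid covering the bounding box
    xs = [p[0] for p in pos.values()]
    ys = [p[1] for p in pos.values()]
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            H.add_node(('cand', round(x, 3), round(y, 3)), pos=(x, y))
            y += step
        x += step

    # Any two nodes within sqrt(2) of each other get an edge; new edges cost 1
    all_pos = nx.get_node_attributes(H, 'pos')
    for u, v in itertools.combinations(H.nodes, 2):
        if not H.has_edge(u, v) and math.dist(all_pos[u], all_pos[v]) <= max_dist:
            H.add_edge(u, v, weight=1)

    # 2-approximate Steiner tree spanning the original nodes
    T = steiner_tree(H, terminals, weight='weight')
    added = [n for n in T.nodes if n not in G.nodes]
    return T, added

T, added_nodes = connect_components(G)  # G as defined in the question
print(added_nodes)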
7
1
70,546,823
2022-1-1
https://stackoverflow.com/questions/70546823/pandas-how-to-save-a-styled-dataframe-to-image
I have styled a dataframe output and have gotten it to display how I want it in a Jupyter Notebook but I am having issues find a good way to save this as an image. I have tried https://pypi.org/project/dataframe-image/ but the way I have this working it seem to be a NoneType as it's a styler object and errors out when trying to use this library. This is just a snippet of the whole code, this is intended to loop through several 'col_names' and I want to save these as images (to explain some of the coding). import pandas as pd import numpy as np col_name = 'TestColumn' temp_df = pd.DataFrame({'TestColumn':['A','B','A',np.nan]}) t1 = (temp_df[col_name].fillna("Unknown").value_counts()/len(temp_df)*100).to_frame().reset_index() t1.rename(columns={'index':' '}, inplace=True) t1[' '] = t1[' '].astype(str) display(t1.style.bar(subset=[col_name], color='#5e81f2', vmax=100, vmin=0).set_table_attributes('style="font-size: 17px"').set_properties( **{'color': 'black !important', 'border': '1px black solid !important'} ).set_table_styles([{ 'selector': 'th', 'props': [('border', '1px black solid !important')] }]).set_properties( **{'width': '500px'}).hide_index().set_properties(subset=[" "], **{'text-align': 'left'})) [OUTPUT]
Was able to change how I was using dataframe-image on the styler object and got it working. Passing it into the export() function rather than calling it off the object directly seems to be the right way to do this. The .render() did get the HTML but was often losing much of the styling when converting it to image or when not viewed with Ipython HTML display. See comparision below. Working Code: import pandas as pd import numpy as np import dataframe_image as dfi col_name = 'TestColumn' temp_df = pd.DataFrame({'TestColumn':['A','B','A',np.nan]}) t1 = (temp_df[col_name].fillna("Unknown").value_counts()/len(temp_df)*100).to_frame().reset_index() t1.rename(columns={'index':' '}, inplace=True) t1[' '] = t1[' '].astype(str) style_test = t1.style.bar(subset=[col_name], color='#5e81f2', vmax=100, vmin=0).set_table_attributes('style="font-size: 17px"').set_properties( **{'color': 'black !important', 'border': '1px black solid !important'} ).set_table_styles([{ 'selector': 'th', 'props': [('border', '1px black solid !important')] }]).set_properties( **{'width': '500px'}).hide_index().set_properties(subset=[" "], **{'text-align': 'left'}) dfi.export(style_test, 'successful_test.png')
7
2
70,542,577
2021-12-31
https://stackoverflow.com/questions/70542577/from-base64-encoded-public-key-in-der-format-to-cose-key-in-python
I have a base64-encoded public key in DER format. In Python, how can I convert it into a COSE key? Here is my failed attempt: from base64 import b64decode from cose.keys import CoseKey pubkeyder = "...==" decCborData.key = CoseKey.decode(b64decode(pubkeyder))
The posted key is an EC key for curve P-256 in X.509 format. With an ASN.1 parser (e.g. https://lapo.it/asn1js/) the x and y coordinates can be determined: x: 0x1AF1EA7FB498B65BDEBCEC80FE7A3E8B5FD67264B46CE60FD5B80FFA92538D39 y: 0x013A9422F9FEC87BAE35E56165F5AA2ACCC98A449984E94AF81FE6FD55B6BB14 Then the COSE key can be generated simply as follows: from cose.keys import EC2Key pub_x = bytes.fromhex('1AF1EA7FB498B65BDEBCEC80FE7A3E8B5FD67264B46CE60FD5B80FFA92538D39') pub_y = bytes.fromhex('013A9422F9FEC87BAE35E56165F5AA2ACCC98A449984E94AF81FE6FD55B6BB14') cose_pub_key = EC2Key(crv='P_256', x=pub_x, y=pub_y) For details, s. the cose library documentation and RFC8152, CBOR Object Signing and Encryption (COSE), especially chapter 13. The determination of the x and y coordinates can also be done programmatically, e.g. with PyCryptodome: from Crypto.PublicKey import ECC import base64 der = base64.b64decode('MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEGvHqf7SYtlvevOyA/no+i1/WcmS0bOYP1bgP+pJTjTkBOpQi+f7Ie6415WFl9aoqzMmKRJmE6Ur4H+b9Vba7FA==') key = ECC.import_key(der) pub_x = key.pointQ.x.to_bytes() pub_y = key.pointQ.y.to_bytes()
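For convenience, a minimal end-to-end sketch chaining the two snippets above (my combination; the only assumption is that der_b64 holds the same base64-encoded DER string as in the question and answer):

import base64
from Crypto.PublicKey import ECC
from cose.keys import EC2Key

der_b64 = '...'  # the base64-encoded DER public key shown above
key = ECC.import_key(base64.b64decode(der_b64))
cose_pub_key = EC2Key(crv='P_256',
                      x=key.pointQ.x.to_bytes(),
                      y=key.pointQ.y.to_bytes())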
5
4
70,543,710
2021-12-31
https://stackoverflow.com/questions/70543710/pythons-enumerate-equivalent-in-c
I am learning C# and have been taking a lot of online courses. I am looking for a simpler/neater way to enumerate a list within a list. In python we can do something like this in just one line: newListofList=[[n,i] for n,i in enumerate([List1,List2,List3])] Does it have to involve lambda and Linq in C#? if so, what would be the solution? I tried it with Dictionary in C# but my gut tells me this is not a perfect solution. List<List<string>> familyListss = new List<List<string>>(); familyListss.Add(new List<string> { "Mary", "Mary_sister", "Mary_father", "Mary_mother", "Mary_brother" }); familyListss.Add(new List<string> { "Peter", "Peter_sister", "Peter_father", "Peter_mother", "Peter_brother" }); familyListss.Add(new List<string> { "John", "John_sister", "John_father", "John_mother", "John_brother" }); Dictionary<int, List<string>> familyData = new Dictionary<int, List<string>>(); for (int i = 0; i < familyListss.Count; i++) { familyData.Add(i, familyListss[i]); }
Just a constructor will be enough: List<List<string>> familyListss = new List<List<string>>() { new List<string> { "Mary", "Mary_sister", "Mary_father", "Mary_mother", "Mary_brother" }, new List<string> { "Peter", "Peter_sister", "Peter_father", "Peter_mother", "Peter_brother" }, new List<string> { "John", "John_sister", "John_father", "John_mother", "John_brother" } }; If you want to mimic enumerate you can use Linq, Select((value, index) => your lambda here): using System.Linq; ... var list = new List<string>() { "a", "b", "c", "d"}; var result = list .Select((value, index) => $"item[{index}] = {value}"); Console.Write(string.Join(Environment.NewLine, result)); Outcome: item[0] = a item[1] = b item[2] = c item[3] = d
4
6
70,541,783
2021-12-31
https://stackoverflow.com/questions/70541783/the-simplest-way-to-check-for-nans-in-columns-r
I'm a Python user learning R. Frequently, I need to check whether columns of a dataframe contain NaN(s). In Python, I can simply do:

import pandas as pd
df = pd.DataFrame({'colA': [1, 2, None, 3], 'colB': ['A', 'B', 'C', 'D']})
df.isna().any()

giving me:

colA     True
colB    False
dtype: bool

In R I'm struggling to find an easy solution. People refer to some apply-like methods but that seems overly complex for such a primitive task. The closest solution I've found is this:

library(tidyverse)
df = data.frame(colA = c(1, 2, NA, 3), colB = c('A', 'B', 'C', 'D'))
!complete.cases(t(df))

giving:

[1] TRUE FALSE

That's OK-ish but I don't see the column names. If the dataframe has 50 columns I don't know which one has NaNs. Is there a better R solution?
You can use anyNA, which checks for NA in a vector:

df = data.frame(colA = c(1, 2, NA, 3), colB = c('A', 'B', 'C', 'D'))
sapply(df, anyNA)

 colA  colB
 TRUE FALSE

Edit: jay.sf is right. This will check for NaNs:

df = data.frame(colA = c(1, 2, NA, 3), colB = c('A', 'B', 'C', 'D'))
anyNAN <- function(x) {
  any(is.nan(x))
}
sapply(df, anyNAN)
5
8
70,539,415
2021-12-31
https://stackoverflow.com/questions/70539415/is-onedrive-sdk-python-api-still-alive
The author of python-onedrive warns that his library is archived and obsoleted by an official library from Microsoft, and refers to the official SDK's git repo. What perplexes me is that the archived, obsoleted library is being maintained while the official repo has been dead for 6 years. What's going on? Thanks.
It appears the project was written before Microsoft authored their own SDK that solves the goal the creator originally had - namely that there was no Python SDK for OneDrive. Looking at the commit history, there's been no meaningful changes in six years. The only changes were typos in the documentation, which was cleaned up for long-term archival purposes. The author probably came to the conclusion that it's not worth maintaining a parallel SDK when an official one exists, which in theory should be maintained by the owner of OneDrive. Now, as to why Microsoft has not updated that SDK in six years is another issue entirely. There have been pull requests made to improve that library, but it's likely someone responsible for maintaining the repo needs to be contacted.
6
3
70,537,488
2021-12-30
https://stackoverflow.com/questions/70537488/cannot-import-name-registermattype-from-cv2-cv2
I got the below error message when I ran model_main_tf2.py on the Object Detection API:

Traceback (most recent call last):
  File "/content/models/research/object_detection/model_main_tf2.py", line 32, in <module>
    from object_detection import model_lib_v2
  File "/usr/local/lib/python3.7/dist-packages/object_detection/model_lib_v2.py", line 29, in <module>
    from object_detection import eval_util
  File "/usr/local/lib/python3.7/dist-packages/object_detection/eval_util.py", line 36, in <module>
    from object_detection.metrics import lvis_evaluation
  File "/usr/local/lib/python3.7/dist-packages/object_detection/metrics/lvis_evaluation.py", line 23, in <module>
    from lvis import results as lvis_results
  File "/usr/local/lib/python3.7/dist-packages/lvis/__init__.py", line 5, in <module>
    from lvis.vis import LVISVis
  File "/usr/local/lib/python3.7/dist-packages/lvis/vis.py", line 1, in <module>
    import cv2
  File "/usr/local/lib/python3.7/dist-packages/cv2/__init__.py", line 9, in <module>
    from .cv2 import _registerMatType
ImportError: cannot import name '_registerMatType' from 'cv2.cv2' (/usr/local/lib/python3.7/dist-packages/cv2/cv2.cpython-37m-x86_64-linux-gnu.so)

The weird thing is that I ran the same code before and it worked well, but now it gives me an error.
The same thing occurred to me yesterday when I used Colab. A possible reason is that the version of opencv-python (4.1.2.30) does not match opencv-python-headless (4.5.5.62). Or the latest version 4.5.5 may have something wrong with it... I uninstalled opencv-python-headless==4.5.5.62, installed 4.1.2.30 instead, and that fixed it.
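Concretely, the downgrade described above would look something like the following commands (my phrasing; the version pin is the one reported in this answer):

pip uninstall -y opencv-python-headless
pip install opencv-python-headless==4.1.2.30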
51
76
70,534,875
2021-12-30
https://stackoverflow.com/questions/70534875/typeerror-init-got-an-unexpected-keyword-argument-service-error-using-p
I've been struggling with this problem for sometime, but now I'm coming back around to it. I'm attempting to use selenium to scrape data from a URL behind a company proxy using a pac file. I'm using Chromedriver, which my browser uses the pac file in it's configuration. I've been trying to use desired_capabilities, but the documentation is horrible or I'm not grasping something. Originally, I was attempting to webscrape with beautifulsoup, which I had working except the data I need now is in javascript, which can't be read with bs4. Below is my code: import pandas as pd from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.proxy import Proxy, ProxyType from selenium.webdriver.common.desired_capabilities import DesiredCapabilities from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC desired_capabilities = webdriver.DesiredCapabilities.CHROME.copy() PAC_PROXY = { 'proxyAutoconfigUrl': 'http://proxy-pac/proxy.pac', } proxy = Proxy() proxy.proxy_autoconfig_url = PAC_PROXY['proxyAutoconfigUrl'] desired_capabilities = {} proxy.add_to_capabilities(desired_capabilities) URL = "https://mor.nlm.nih.gov/RxClass/search?query=ALIMENTARY%20TRACT%20AND%20METABOLISM%7CATC1-4&searchBy=class&sourceIds=a&drugSources=atc1-4%7Catc%2Cepc%7Cdailymed%2Cmeshpa%7Cmesh%2Cdisease%7Cmedrt%2Cchem%7Cdailymed%2Cmoa%7Cdailymed%2Cpe%7Cdailymed%2Cpk%7Cmedrt%2Ctc%7Cfmtsme%2Cva%7Cva%2Cdispos%7Csnomedct%2Cstruct%7Csnomedct%2Cschedule%7Crxnorm" service = Service('C:\Program Files\Chrome Driver\chromedriver.exe') driver = webdriver.Chrome(service=service) driver.get(URL) print(driver.requests[0].headers, driver.requests[0].response) WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, 'tr.dbsearch'))) print(pd.read_html(driver.page_source)[1].iloc[:,:-1]) pd.read_html(driver.page_source)[1].iloc[:,:-1].to_csv('table.csv',index=False) I'm not sure why I'm receiving an: TypeError: __init__() got an unexpected keyword argument 'service' even when I have the path added correctly to my system environment variables as shown below: Essentially what I'm attempting to do is scrape the data in the table from https://mor.nlm.nih.gov/RxClass/search?query=ALIMENTARY%20TRACT%20AND%20METABOLISM%7CATC1-4&searchBy=class&sourceIds=a&drugSources=atc1-4%7Catc%2Cepc%7Cdailymed%2Cmeshpa%7Cmesh%2Cdisease%7Cmedrt%2Cchem%7Cdailymed%2Cmoa%7Cdailymed%2Cpe%7Cdailymed%2Cpk%7Cmedrt%2Ctc%7Cfmtsme%2Cva%7Cva%2Cdispos%7Csnomedct%2Cstruct%7Csnomedct%2Cschedule%7Crxnorm then store it to a pandas dataframe and pass it to a csv file.
If you are still using Selenium v3.x then you shouldn't use Service(), and in that case the executable_path key is relevant. The lines of code will then be:

driver = webdriver.Chrome(executable_path='C:\Program Files\Chrome Driver\chromedriver.exe')

Else, if you are using Selenium 4 then you have to use Service(), and in that case the executable_path key is no longer relevant. So you need to change the lines of code:

service = Service(executable_path='C:\Program Files\Chrome Driver\chromedriver.exe')
driver = webdriver.Chrome(service=service)

to:

service = Service('C:\Program Files\Chrome Driver\chromedriver.exe')
16
20
70,535,336
2021-12-30
https://stackoverflow.com/questions/70535336/how-to-ignore-function-arguments-with-cachetools-ttl-cache
I'm exploiting the cachetools @ttl_cache decorator (not @cached). I need to ignore some params in the cache key. E.g.,

@ttl_cache(maxsize=1024, ttl=600)
def my_func(foo, ignore_bar, ignore_baz):
    # do stuff

Working that way, I get this:

>>> my_func("foo", "ignore_bar", "ignore_baz") # cache miss
>>> my_func("foo", "ignore_bar", "ignore_baz") # cache hit
>>> my_func("foo", "ignore_bar_bar", "ignore_baz_baz") # cache miss!

What I need:

>>> my_func("foo", "ignore_bar", "ignore_baz") # cache miss
>>> my_func("foo", "ignore_bar", "ignore_baz") # cache hit
>>> my_func("foo", "ignore_bar_bar", "ignore_baz_baz") # cache hit!!!!!

Is there a way to get that using @ttl_cache?
I haven't used cachetools, but I've looked at the docs out of interest. Apparently, there's no built-in way. If you really need this functionality, I can suggest a hack like the following:

class PackedArgs(tuple):
    def __hash__(self):
        return hash(self[0])

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self[0] == other[0]
        else:
            return NotImplemented


def pack_args(func):
    def inner(packed_args):
        return func(*packed_args)
    return inner


def unpack_args(func):
    def inner(*args):
        return func(PackedArgs(args))
    return inner


@unpack_args
@ttl_cache(maxsize=1024, ttl=600)
@pack_args
def my_func(foo, ignore_bar, ignore_baz):
    # do stuff
    ...

Essentially: "pack" your function's arguments into a single tuple-like object that hashes as its 0th element and causes @ttl_cache to behave like you need. Then, "unpack" them to restore the normal interface and avoid actually having to pass this one big argument when calling. Please note that this is just a (very hacky) concept, I haven't tested this code at all. It probably won't work as is. By the way, your requirement sounds interesting. You can submit it to cachetools as a feature request and link this thread. I can imagine it being implemented as something like a key= kwarg lambda, similar to builtin sort's.
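Side note (my addition, not part of the original answer): if switching from the convenience decorator @ttl_cache to the lower-level @cached decorator is acceptable, cachetools already lets you pass a custom key function that simply drops the arguments you want to ignore:

from cachetools import cached, TTLCache
from cachetools.keys import hashkey

@cached(cache=TTLCache(maxsize=1024, ttl=600),
        key=lambda foo, ignore_bar, ignore_baz: hashkey(foo))
def my_func(foo, ignore_bar, ignore_baz):
    # do stuff
    ...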
6
2
70,536,166
2021-12-30
https://stackoverflow.com/questions/70536166/improving-performance-of-finding-out-how-many-possible-triangles-can-be-made-wit
I am doing an assessment that asks: given "n" as input, which is the length of a stick, how many triangles can you make? (3 < n < 1,000,000)

For example:
input: N=8 output: 1 explanation: (3,3,2)
input: N=12 output: 3 explanation: (4,4,4) (4,5,3) (5,5,2)

The code I wrote is only scoring 33% because the web assessment is throwing a time limit error.

ans = 0
n = int(input())
for a in range(1, n + 1):
    for b in range(a, n - a + 1):
        c = n - a - b
        if a + b > c >= b:
            ans += 1
print(ans)

code b:

ans = 0
n = int(input())
for i in range(1,n):
    for j in range(i,n):
        for c in range(j,n):
            if(i+j+c==n and i+j>c):
                ans+=1
print(ans)

How can this be made faster?
This is an intuitive O(n) algorithm I came up with:

def main():
    n = int(input())
    if n < 3:
        print(0)
        return
    ans = n % 2
    for a in range(2, n//2+1):
        diff = n - a
        if diff // 2 < a:
            break
        if diff % 2 == 0:
            b = diff // 2
        else:
            b = diff // 2 + 1
        b = max(b - a // 2, a)
        c = n - b - a
        if abs(b - c) >= a:
            b += 1
            c -= 1
        ans += abs(b-c)//2 + 1
    print(ans)

main()

I find the upper bound and lower bound for b and c and count the values in that range.
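To sanity-check the counting logic on small inputs, a brute-force cross-check like the following can help (my addition, not part of the original answer):

def brute_force(n):
    # count triples a <= b <= c with a + b + c == n that satisfy the triangle inequality
    count = 0
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = n - a - b
            if c >= b and a + b > c:
                count += 1
    return count

# brute_force(8) == 1 and brute_force(12) == 3, matching the examples in the question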
5
1
70,527,241
2021-12-30
https://stackoverflow.com/questions/70527241/python-pandas-dataframe-assign-a-list-to-multiple-cells
I have a DataFrame like name col1 col2 a aa 123 a bb 123 b aa 234 and a list [1, 2, 3] I want to replace the col2 of every row with col1 = 'aa' with the list like name col1 col2 a aa [1, 2, 3] a bb 123 b aa [1, 2, 3] I tried something like df.loc[df[col1] == 'aa', col2] = [1, 2, 3] but it gives me the error: ValueError: could not broadcast input array from shape (xx,) into shape (yy,) How should I get around this?
import pandas as pd df = pd.DataFrame({"name":["a","a","b"],"col1":["aa","bb","aa"],"col2":[123,123,234]}) l = [1,2,3] df["col2"] = df.apply(lambda x: l if x.col1 == "aa" else x.col2, axis =1) df
7
2
70,523,639
2021-12-29
https://stackoverflow.com/questions/70523639/store-formatted-strings-pass-in-values-later
I have a dictionary with a lot of strings. Is it possible to store a formatted string with placeholders and pass in the actual values later? I'm thinking of something like this:

d = {
    "message": f"Hi There, {0}"
}

print(d["message"].format("Dave"))

The above code obviously doesn't work but I'm looking for something similar.
You used an f-string; it already interpolated the 0 in there. You might want to remove the f there:

d = {
    # no f here
    "message": "Hi There, {0}"
}

print(d["message"].format("Dave"))

Output:

Hi There, Dave
7
17