question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
63,505,647 | 2020-8-20 | https://stackoverflow.com/questions/63505647/add-external-margins-with-constrained-layout | When generating a figure to save to a pdf file, I'd like to adjust the positioning of the figure relative to the edges of the page, for example to add an inch margin along all sides. As far as I can tell, the solutions to do this (for example, in this question) either: don't work with constrained_layout mode -- applying plt.subplots_adjust() after creating the figure but prior to fig.savefig() messes up the constrained layout don't actually quantitatively adjust the positioning of the figure -- adding bbox_inches="tight" or pad=-1 don't seem to do anything meaningful Is there a straightforward way to adjust external margins of a constrained layout figure? For example: fig = plt.figure(constrained_layout=True, figsize=(11, 8.5)) page_grid = gridspec.GridSpec(nrows=2, ncols=1, figure=fig) # this doesn't appear to do anything with constrained_layout=True page_grid.update(left=0.2, right=0.8, bottom=0.2, top=0.8) top_row_grid = gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=page_grid[0]) for i in range(3): ax = fig.add_subplot(top_row_grid[:, i], aspect="equal") n_bottom_row_plots = 10 qc_grid = gridspec.GridSpecFromSubplotSpec(1, n_bottom_row_plots, subplot_spec=page_grid[1]) for i, metric in enumerate(range(n_bottom_row_plots)): ax = fig.add_subplot(qc_grid[:, i]) plt.plot(np.arange(5), np.arange(5)) fig.suptitle("my big label", fontweight="bold", fontsize="x-large", y=0.9) # this ruins the constrained layout # plt.subplots_adjust(left=0.2,right=0.8, bottom=0.2, top=0.8) fig.savefig("temp.png", facecolor="coral") Yields the following (I'd like to see more coral around the edges!): | You can set the rectangle that the layout engine operates within. See the rect parameter for each engine at https://matplotlib.org/stable/api/layout_engine_api.html. It's unfortunately not a very friendly part of the API, especially because TightLayoutEngine and ConstrainedLayoutEngine have different semantics for rect: TightLayoutEngine uses rect = (left, bottom, right, top) and ConstrainedLayoutEngine uses rect = (left, bottom, width, height). 
def set_margins(fig, margins): """Set figure margins as [left, right, top, bottom] in inches from the edges of the figure.""" left,right,top,bottom = margins width, height = fig.get_size_inches() #convert to figure coordinates: left, right = left/width, 1-right/width bottom, top = bottom/height, 1-top/height #get the layout engine and convert to its desired format engine = fig.get_layout_engine() if isinstance(engine, matplotlib.layout_engine.TightLayoutEngine): rect = (left, bottom, right, top) elif isinstance(engine, matplotlib.layout_engine.ConstrainedLayoutEngine): rect = (left, bottom, right-left, top-bottom) else: raise RuntimeError('Cannot adjust margins of unsupported layout engine') #set and recompute the layout engine.set(rect=rect) engine.execute(fig) With your example: fig = plt.figure(constrained_layout=True, figsize=(11, 8.5)) page_grid = gridspec.GridSpec(nrows=2, ncols=1, figure=fig) #your margins were [0.2, 0.8, 0.2, 0.8] in figure coordinates #which are 0.2*11 and 0.2*8.5 in inches from the edge set_margins(fig,[0.2*11, 0.2*11, 0.2*8.5, 0.2*8.5]) top_row_grid = gridspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=page_grid[0]) for i in range(3): ax = fig.add_subplot(top_row_grid[:, i], aspect="equal") n_bottom_row_plots = 10 qc_grid = gridspec.GridSpecFromSubplotSpec(1, n_bottom_row_plots, subplot_spec=page_grid[1]) for i, metric in enumerate(range(n_bottom_row_plots)): ax = fig.add_subplot(qc_grid[:, i]) plt.plot(np.arange(5), np.arange(5)) fig.suptitle("my big label", fontweight="bold", fontsize="x-large", y=0.9) fig.savefig("temp.png", facecolor="coral") Note: fig.suptitle text is apparently not handled by the layout engine, so it doesn't move. | 12 | 1 |
63,580,229 | 2020-8-25 | https://stackoverflow.com/questions/63580229/how-to-save-uploadfile-in-fastapi | I accept the file via POST. When I save it locally, I can read the content using file.read (), but the name via file.name incorrect(16) is displayed. When I try to find it by this name, I get an error. What might be the problem? My code: @router.post( path="/upload", response_model=schema.ContentUploadedResponse, ) async def upload_file( background_tasks: BackgroundTasks, uploaded_file: UploadFile = File(...)): uploaded_file.file.rollover() uploaded_file.file.flush() #shutil.copy(uploaded_file.file.name, f'../api/{uploaded_file.filename}') background_tasks.add_task(s3_upload, uploaded_file=fp) return schema.ContentUploadedResponse() | Background UploadFile is just a wrapper around SpooledTemporaryFile, which can be accessed as UploadFile.file. SpooledTemporaryFile() [...] function operates exactly as TemporaryFile() does And documentation about TemporaryFile says: Return a file-like object that can be used as a temporary storage area. [..] It will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected). Under Unix, the directory entry for the file is either not created at all or is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system. async def endpoint You should use the following async methods of UploadFile: write, read, seek and close. They are executed in a thread pool and awaited asynchronously. For async writing files to disk you can use aiofiles. Example: @app.post("/") async def post_endpoint(in_file: UploadFile=File(...)): # ... async with aiofiles.open(out_file_path, 'wb') as out_file: content = await in_file.read() # async read await out_file.write(content) # async write return {"Result": "OK"} Or in the chunked manner, so as not to load the entire file into memory: @app.post("/") async def post_endpoint(in_file: UploadFile=File(...)): # ... async with aiofiles.open(out_file_path, 'wb') as out_file: while content := await in_file.read(1024): # async read chunk await out_file.write(content) # async write chunk return {"Result": "OK"} def endpoint Also, I would like to cite several useful utility functions from this topic (all credits @dmontagu) using shutil.copyfileobj with internal UploadFile.file. This functions can be invoked from def endpoints: import shutil from pathlib import Path from tempfile import NamedTemporaryFile from typing import Callable from fastapi import UploadFile def save_upload_file(upload_file: UploadFile, destination: Path) -> None: try: with destination.open("wb") as buffer: shutil.copyfileobj(upload_file.file, buffer) finally: upload_file.file.close() def save_upload_file_tmp(upload_file: UploadFile) -> Path: try: suffix = Path(upload_file.filename).suffix with NamedTemporaryFile(delete=False, suffix=suffix) as tmp: shutil.copyfileobj(upload_file.file, tmp) tmp_path = Path(tmp.name) finally: upload_file.file.close() return tmp_path def handle_upload_file( upload_file: UploadFile, handler: Callable[[Path], None] ) -> None: tmp_path = save_upload_file_tmp(upload_file) try: handler(tmp_path) # Do something with the saved temp file finally: tmp_path.unlink() # Delete the temp file Note: you'd want to use the above functions inside of def endpoints, not async def, since they make use of blocking APIs. | 50 | 70 |
63,500,773 | 2020-8-20 | https://stackoverflow.com/questions/63500773/python-security-update-installation-on-windows | Python security updates are source only updates. There is no windows installer. For instance the page for python 3.6.12 states: Security fix releases are produced periodically as needed and are source-only releases; binary installers are not provided. Could someone explain how I can update/patch a Python installation done by the windows installer, so that latest Python security fixes are applied: eg., going from ython 3.6.6 to Python 3.6.12 Or if not possible how to install from Python source code directly. | To install security patches after the last full bugfix release, you must build Python from source: Compile the Binaries Install Visual Studio 2019 Community and select: the Python development workload, and the Python native development tools (this is under Optional, but is necessary in order to build python from source) Download the python source code and unzip it. Navigate to the folder and install required external dependencies by running PCbuild\get_externals.bat Build the debug (python_d.exe) and release (python.exe) binaries > PCbuild\build.bat -p x64 -c Debug > PCbuild\build.bat -p x64 -c Release you can also build the Profile Guided Optimization (pgo) binary > PCbuild\build.bat -p x64 --pgo The binaries on python.org are run through PGO by default, so a --pgo binary will be faster than a -c Release binary. The debug binary is necessary for adding breakpoints and debugging your code. All built binaries are placed in PCbuild\amd64. Build the Installer The instructions for building an installer are in Tools\msi\README.txt. Download the extra build dependencies by running Tools\msi\get_externals.bat NOTE: This is done in addition to running PCbuild/get_externals.bat. It installs additional binaries to externals\windows-installer that are needed for making installers. Specifically, WiX (wix.exe), which is a toolset that lets developers create installers for Windows Installer, the Windows installation engine. HTML Help (htmlhelp), which is for building documentation. Turn on .NET Framework 3.5 Features under Turn Windows features on or off NOTE: This is required by WiX. Build the installer by running > .\Tools\msi\buildrelease.bat -x64 NOTE: Be sure the following environment variables are properly set or left blank so the script can set them: PYTHON=<path to python.exe> SPHINXBUILD=<path to sphinx-build.exe> The installer will be placed in PCbuild\amd64\en-us. It is a single .exe (the installer entry point). The folder will also have a number of additional CAB and MSI files. Each MSI contains the logic required to install a component or feature of Python, but these should not be run directly. Specify --pack to build an installer that does not require all MSIs to be available alongside. This takes longer, but is easier to share. | 20 | 15 |
63,511,090 | 2020-8-20 | https://stackoverflow.com/questions/63511090/how-can-i-smooth-data-in-python | I'm using Python to detect some patterns on OHLC data. My problem is that the data I have is very noisy (I'm using Open data from the Open/High/Low/Close dataset), and it often leads me to incorrect or weak outcomes. Is there any way to "smooth" this data, or to make it less noisy, to improve my results? What algorithms or libraries can I use for this task? Here is a sample of my data, which is a normal array: DataPoints = [6903.79, 6838.04, 6868.57, 6621.25, 7101.99, 7026.78, 7248.6, 7121.4, 6828.98, 6841.36, 7125.12, 7483.96, 7505.0, 7539.03, 7693.1, 7773.51, 7738.58, 8778.58, 8620.0, 8825.67, 8972.58, 8894.15, 8871.92, 9021.36, 9143.4, 9986.3, 9800.02, 9539.1, 8722.77, 8562.04, 8810.99, 9309.35, 9791.97, 9315.96, 9380.81, 9681.11, 9733.93, 9775.13, 9511.43, 9067.51, 9170.0, 9179.01, 8718.14, 8900.35, 8841.0, 9204.07, 9575.87, 9426.6, 9697.72, 9448.27, 10202.71, 9518.02, 9666.32, 9788.14, 9621.17, 9666.85, 9746.99, 9782.0, 9772.44, 9885.22, 9278.88, 9464.96, 9473.34, 9342.1, 9426.05, 9526.97, 9465.13, 9386.32, 9310.23, 9358.95, 9294.69, 9685.69, 9624.33, 9298.33, 9249.49, 9162.21, 9012.0, 9116.16, 9192.93, 9138.08, 9231.99, 9086.54, 9057.79, 9135.0, 9069.41, 9342.47, 9257.4, 9436.06, 9232.42, 9288.34, 9234.02, 9303.31, 9242.61, 9255.85, 9197.6, 9133.72, 9154.31, 9170.3, 9208.99, 9160.78, 9390.0, 9518.16, 9603.27, 9538.1, 9700.42, 9931.54, 11029.96, 10906.27, 11100.52, 11099.79, 11335.46, 11801.17, 11071.36, 11219.68, 11191.99, 11744.91, 11762.47, 11594.36, 11761.02, 11681.69, 11892.9, 11392.09, 11564.34, 11779.77, 11760.55, 11852.4, 11910.99, 12281.15, 11945.1, 11754.38] plt.plot(DataPoints) | Smoothing is a pretty rich subject; there are several methods each with features and drawbacks. Here is one using scipy: import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.signal import savgol_filter # noisy data x = [6903.79, 6838.04, 6868.57, 6621.25, 7101.99, 7026.78, 7248.6, 7121.4, 6828.98, 6841.36, 7125.12, 7483.96, 7505.0, 7539.03, 7693.1, 7773.51, 7738.58, 8778.58, 8620.0, 8825.67, 8972.58, 8894.15, 8871.92, 9021.36, 9143.4, 9986.3, 9800.02, 9539.1, 8722.77, 8562.04, 8810.99, 9309.35, 9791.97, 9315.96, 9380.81, 9681.11, 9733.93, 9775.13, 9511.43, 9067.51, 9170.0, 9179.01, 8718.14, 8900.35, 8841.0, 9204.07, 9575.87, 9426.6, 9697.72, 9448.27, 10202.71, 9518.02, 9666.32, 9788.14, 9621.17, 9666.85, 9746.99, 9782.0, 9772.44, 9885.22, 9278.88, 9464.96, 9473.34, 9342.1, 9426.05, 9526.97, 9465.13, 9386.32, 9310.23, 9358.95, 9294.69, 9685.69, 9624.33, 9298.33, 9249.49, 9162.21, 9012.0, 9116.16, 9192.93, 9138.08, 9231.99, 9086.54, 9057.79, 9135.0, 9069.41, 9342.47, 9257.4, 9436.06, 9232.42, 9288.34, 9234.02, 9303.31, 9242.61, 9255.85, 9197.6, 9133.72, 9154.31, 9170.3, 9208.99, 9160.78, 9390.0, 9518.16, 9603.27, 9538.1, 9700.42, 9931.54, 11029.96, 10906.27, 11100.52, 11099.79, 11335.46, 11801.17, 11071.36, 11219.68, 11191.99, 11744.91, 11762.47, 11594.36, 11761.02, 11681.69, 11892.9, 11392.09, 11564.34, 11779.77, 11760.55, 11852.4, 11910.99, 12281.15, 11945.1, 11754.38] df = pd.DataFrame(dict(x=data)) x_filtered = df[["x"]].apply(savgol_filter, window_length=31, polyorder=2) plt.ion() plt.plot(x) plt.plot(x_filtered) plt.show() | 6 | 11 |
63,564,559 | 2020-8-24 | https://stackoverflow.com/questions/63564559/greenlet-error-cannot-switch-to-a-different-thread | I have a Flask application, getting this error while trying to integrate flask with faust. app.py import mode.loop.eventlet import logging import logging.config import json from flask import Flask from elasticapm.contrib.flask import ElasticAPM def create_app(): app = Flask(__name__) configure_apm(app) configure_logging() register_blueprints(app) register_commands(app) return app main.py from flask import jsonify from litmus.app import create_app from intercepter import Intercepter app = create_app() app.wsgi_app = Intercepter(app.wsgi_app , app) @app.route('/status') def status(): return jsonify({'status': 'online'}), 200 another controller @api_blue_print.route('/v1/analyse', methods=['POST']) def analyse(): analyse_with_historic_data.send(value=[somedata]) return jsonify({'message': 'Enqueued'}), 201 analyse_with_historic_data.py @app.agent(analysis_topic) async def analyse_with_historic_data(self, stream): async for op in stream: entity_log = EntityLog.where('id', op.entity_log_id).first() Error Trace: Traceback (most recent call last): File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 461, in fire_timers timer() File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/timer.py", line 59, in __call__ cb(*args, **kw) File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/semaphore.py", line 147, in _do_acquire waiter.switch() greenlet.error: cannot switch to a different thread Traceback (most recent call last): File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 461, in fire_timers timer() File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/timer.py", line 59, in __call__ cb(*args, **kw) File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/semaphore.py", line 147, in _do_acquire waiter.switch() greenlet.error: cannot switch to a different thread Traceback (most recent call last): File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/queue.py", line 118, in switch self.greenlet.switch(value) greenlet.error: cannot switch to a different thread ^CError in atexit._run_exitfuncs: Traceback (most recent call last): File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/threading.py", line 551, in wait signaled = self._cond.wait(timeout) File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/threading.py", line 299, in wait gotit = waiter.acquire(True, timeout) File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/semaphore.py", line 107, in acquire hubs.get_hub().switch() File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in switch return self.greenlet.switch() File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 350, in run self.wait(sleep_time) File "/Users/sahilpaudel/.pyenv/versions/3.6.5/lib/python3.6/site-packages/eventlet/hubs/kqueue.py", line 96, in wait time.sleep(seconds) I have trying to fix this issue by monkey.patch_all but that too it didn't work out giving another stacktrace that lock cannot be released something. | Something similar happened to me when I tried to debug a flask application using Pycharm. 
What I finally did to eventually solve my issue was to enable Gevent compatibility in PyCharm: File -> Settings -> Build, Execution, Deployment -> Python Debugger -> Gevent compatible | 12 | 18 |
63,536,505 | 2020-8-22 | https://stackoverflow.com/questions/63536505/tkinter-geometry-management | I see and saw a lot of questions for tkinter that quite often asks not about errors in their code, but asks how do I organize my GUI. So I would like to have an answer that focus on that and help beginners to orientate them a little bit. | Basic knowlege about tkinters geometry management The geometry management of tkinter is characterized by this Quote here: By default a top-level window appears on the screen in its natural size, which is the one determined internally by its widgets and geometry managers. Toplevels Your Toplevel is the first question you should have to answer with: wm_geometry: size, position in your screen? wm_minsize \ wm_maxsize are there minimal or maximal bounderies? wm_resizable has the user the ability to resize it? wm_attributes are there attributes like topmost or fullscreen? pack_propagate \ grid_propagate ignore requested width and height of children. Note: You can skip this question and let the process decide what will be needed after all. Arrange children To arrange your children you've got 3 options, each of them are designed to satisfy specific needs: The packer: The pack command is used to communicate with the packer, a geometry manager that arranges the children of a parent by packing them in order around the edges of the parent. -> I use pack to arrange quickly a few widgets beside eachother in the master. The placer The placer is a geometry manager for Tk. It provides simple fixed placement of windows, where you specify the exact size and location of one window, called the slave, within another window, called the master. The placer also provides rubber-sheet placement, where you specify the size and location of the slave in terms of the dimensions of the master, so that the slave changes size and location in response to changes in the size of the master. Lastly, the placer allows you to mix these styles of placement so that, for example, the slave has a fixed width and height but is centered inside the master. -> I use place sometimes for One-Sheet applications or to set a background image. The gridder The grid command is used to communicate with the grid geometry manager that arranges widgets in rows and columns inside of another window, called the geometry master (or master window). -> Grid is the best choice for more complex applications that contains many widgets. So the question you need to answer here, before picking one of these managers is, how do I organise my application in the best way? Note: Warning: Never mix grid and pack in the same master window. Tkinter will happily spend the rest of your lifetime trying to negotiate a solution that both managers are happy with. Instead of waiting, kill the application, and take another look at your code. A common mistake is to use the wrong parent for some of the widgets. -> You can create a nested layout, in each master(window/frame) you've freedom of choice Most important features Most important features of each manger can help to answer your question. Because you will need to know if the manager can do what you wanna do. For pack I think it is: fill stretch the slave horizontally, vertically or both expand The slaves should be expanded to consume extra space in their master. side Specifies which side of the master the slave(s) will be packed against. anchor it specifies where to position each slave in its parcel. 
For place it should be: relheight -relheight=1.0, -height=-2 makes the slave 2 pixels shorter than the master. relwidth -relwidth=1.0, -width=5 makes the slave 5 pixels wider than the master. relx -relx=0.5, -x=-2 positions the left edge of the slave 2 pixels to the left out of the center. rely -rely=0.5, -x=3 positions the top edge of the slave 3 pixels below the center of its master. And for grid it should be: columnspan Insert the slave so that it occupies n columns in the grid. rowspan Insert the slave so that it occupies n rows in the grid. sticky this option may be used to position (or stretch) the slave within its cell. grid_remove the configuration options for that window are remembered grid_columnconfigure grid_rowconfigure for the last two options I recommend this answer here. Read the docs A working exampel to play with can be found here: import tkinter as tk root=tk.Tk() holderframe = tk.Frame(root,bg='red') holderframe.pack() display = tk.Frame(holderframe, width=600, height=25,bg='green') display2 = tk.Frame(holderframe, width=300, height=145,bg='orange') display3 = tk.Frame(holderframe, width=300, height=300,bg='black') display4 = tk.Frame(holderframe, width=300, height=20,bg='yellow') display5 = tk.Frame(holderframe, bg='purple') ##display_green display.grid(column = 0, row = 0, columnspan=3) display.pack_propagate(0) #when using pack inside of the display #display.grid_propagate(0) #when using grid inside of the display #left b =tk.Button(display, width =10,text='b') b1 =tk.Button(display, width =10,text='b1') b.pack(side='left') b1.pack(side='left') #right b2 =tk.Button(display, width =20,text='b2') b2.pack(side='right') #center l = tk.Label(display, text ='My_Layout',bg='grey') l.pack(fill='both',expand=1) #the order by using pack can be important. #you will notice if you swip right with center. 
##display2_orange display2.grid(column=0,row=1, sticky='n') display2.grid_propagate(0) #column0 lab = tk.Label(display2, text='test2') lab1 = tk.Label(display2, text='test2') lab2 = tk.Label(display2, text='test2') lab3 = tk.Label(display2, text='test2') lab4 = tk.Label(display2, text='test2') lab5 = tk.Label(display2, text='test2') lab6 = tk.Label(display2, text='test2') lab.grid(column=0,row=0) lab1.grid(column=0,row=1) lab2.grid(column=0,row=2) lab3.grid(column=0,row=3) lab4.grid(column=0,row=4) lab5.grid(column=0,row=5) lab6.grid(column=0,row=6) #column1 lab10 = tk.Label(display2, text='test2') lab11 = tk.Label(display2, text='test2') lab12 = tk.Label(display2, text='test2') lab13 = tk.Label(display2, text='test2') lab14 = tk.Label(display2, text='test2') lab15 = tk.Label(display2, text='test2') lab16 = tk.Label(display2, text='test2') lab10.grid(column=2,row=0) lab11.grid(column=2,row=1) lab12.grid(column=2,row=2) lab13.grid(column=2,row=3) lab14.grid(column=2,row=4) lab15.grid(column=2,row=5) lab16.grid(column=2,row=6) display2.grid_columnconfigure(1, weight=1) #the empty column gets the space for left and right effect ##display3_black display3.grid(column=1,row=1,sticky='nswe') display3.grid_propagate(0) ##display4_yellow display4.grid(column=0,row=1,sticky='s') display4.grid_propagate(0) lab20 = tk.Label(display4, bg='black') lab21 = tk.Label(display4, bg='red') lab22 = tk.Label(display4, bg='orange') lab23 = tk.Label(display4, bg='grey') lab20.grid(column=0,row=0,sticky='ew') lab21.grid(column=1,row=0,stick='e') lab22.grid(column=2,row=0,sticky='e') lab23.grid(column=3,row=0,stick='ew') display4.grid_columnconfigure(0, weight=4) display4.grid_columnconfigure(1, weight=2) display4.grid_columnconfigure(2, weight=2) display4.grid_columnconfigure(3, weight=1) ##display5_purple display5.place(x=0,y=170,relwidth=0.5,height=20) display5.grid_propagate(0) root.mainloop() | 6 | 12 |
63,506,885 | 2020-8-20 | https://stackoverflow.com/questions/63506885/how-to-authenticate-google-apis-google-drive-api-from-google-compute-engine-an | Our company is working on processing data from Google Sheets (within Google Drive) from Google Cloud Platform and we are having some problems with the authentication. There are two different places where we need to run code that makes API calls to Google Drive: within production in Google Compute Engine, and within development environments i.e. locally on our developers' laptops. Our company is quite strict about credentials and does not allow the downloading of Service Account credential JSON keys (this is better practice and provides higher security). Seemingly all of the docs from GCP say to simply download the JSON key for a Service Account and use that. Or Google APIs/Developers docs say to create an OAuth2 Client ID and download it’s key like here. They often use code like this: from google.oauth2 import service_account SCOPES = ['https://www.googleapis.com/auth/sqlservice.admin'] SERVICE_ACCOUNT_FILE = '/path/to/service.json' credentials = service_account.Credentials.from_service_account_file( SERVICE_ACCOUNT_FILE, scopes=SCOPES) But we can't (or just don't want to) download our Service Account JSON keys, so we're stuck if we just follow the docs. For the Google Compute Engine environment we have been able to authenticate by using GCP Application Default Credentials (ADCs) - i.e. not explicitly specifying credentials to use in code and letting the client libraries “just work” - this works great as long as one ensures that the VM is created with the correct scopes https://www.googleapis.com/auth/drive, and the default compute Service Account email is given permission to the Sheet that needs to be accessed - this is explained in the docs here. You can do this like so; from googleapiclient.discovery import build service = build('sheets', 'v4') SPREADSHEET_ID="<sheet_id>" RANGE_NAME="A1:A2" s = service.spreadsheets().values().get( spreadsheetId=SPREADSHEET_ID, range=RANGE_NAME, majorDimension="COLUMNS" ).execute() However, how do we do this for development, i.e. locally on our developers' laptops? Again, without downloading any JSON keys, and preferably with the most “just works” approach possible? Usually we use gcloud auth application-default login to create default application credentials that the Google client libraries use which “just work”, such as for Google Storage. However this doesn't work for Google APIs outside of GCP, like Google Drive API service = build('sheets', 'v4') which fails with this error: “Request had insufficient authentication scopes.”. Then we tried all kinds of solutions like: credentials, project_id = google.auth.default(scopes=["https://www.googleapis.com/auth/drive"]) and credentials, project_id = google.auth.default() credentials = google_auth_oauthlib.get_user_credentials( ["https://www.googleapis.com/auth/drive"], credentials._client_id, credentials._client_secret) ) and more... Which all give a myriad of errors/issues we can’t get past when trying to do authentication to Google Drive API :( Any thoughts? | One method for making the authentication from development environments easy is to use Service Account impersonation. Here is a blog about using service account impersonation, including the benefits of doing this. @johnhanley (who wrote the blog post) is a great guy and has lots of very informative answers on SO also! 
To be able to have your local machine authenticate for Google Drive API you will need to create default application credentials on your local machine that impersonates a Service Account and apply the scopes needed for the APIs you want to access. To be able to impersonate a Service Account your user must have the role roles/iam.serviceAccountTokenCreator. This role can be applied to an entire project or to an individual Service Account. You can use the gcloud to do this: gcloud iam service-accounts add-iam-policy-binding [COMPUTE_SERVICE_ACCOUNT_FULL_EMAIL] \ --member user:[USER_EMAIL] \ --role roles/iam.serviceAccountTokenCreator Once this is done create the local credentials: gcloud auth application-default login \ --scopes=openid,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/accounts.reauth \ --impersonate-service-account=[COMPUTE_SERVICE_ACCOUNT_FULL_EMAIL] This will solve the scopes error you got. The three extra scopes added beyond the Drive API scope are the default scopes that gcloud auth application-default login applies and are needed. If you apply scopes without impersonation you will get an error like this when trying to authenticate: HttpError: <HttpError 403 when requesting https://sheets.googleapis.com/v4/spreadsheets?fields=spreadsheetId&alt=json returned "Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the sheets.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/."> Once you have set up the credentials you can use the same code that is run on Google Compute Engine on your local machine :) Note: it is also possible to set the impersonation for all gcloud commands: gcloud config set auth/impersonate_service_account [COMPUTE_SERVICE_ACCOUNT_FULL_EMAIL] Creating default application credentails on your local machine by impersonating a service account is a slick way of authenticating development code. It means that the code will have exactly the same permissions as the Service Account that it is impersonating. If this is the same Service Account that will run the code in production you know that code in development runs the same as production. It also means that you never have to create or download any Service Account keys. | 9 | 6 |
63,511,413 | 2020-8-20 | https://stackoverflow.com/questions/63511413/fastapi-redirection-for-trailing-slash-returns-non-ssl-link | When we call an endpoint and a redirect occurs due to a missing trailing slash. As you can see in the image below, when a request is made to https://.../notifications, the FastAPI server responds with a redirect to http://.../notifications/ I suspect that it's an app configuration issue rather than a server configuration issue. Does anyone have an idea of how to resolve this issue? | This is because your application isn't trusting the reverse proxy's headers overriding the scheme (the X-Forwarded-Proto header that's passed when it handles a TLS request). There's a few ways we can fix that: If you're running the application straight from uvicorn server, try using the flag --forwarded-allow-ips '*'. If you're running gunicorn you can set as well the flag --forwarded-allow-ips="*". In either application, you can additionally use the FORWARDED_ALLOW_IPS environment variable. Important: the * should be used only as a test, as it'll lead your application to trust the X-Forwarded-* headers from any source. I suggest you read uvicorn's docs and gunicorn's docs for a deeper knowledge of what to set in this flag and why. | 59 | 42 |
63,560,005 | 2020-8-24 | https://stackoverflow.com/questions/63560005/draw-curved-lines-to-connect-points-in-matplotlib | So I am trying to plot curved lines to join points, here is the code I am using:- def hanging_line(point1, point2): a = (point2[1] - point1[1])/(np.cosh(point2[0]) - np.cosh(point1[0])) b = point1[1] - a*np.cosh(point1[0]) x = np.linspace(point1[0], point2[0], 100) y = a*np.cosh(x) + b return (x,y) n_teams = 4 n_weeks = 4 fig, ax = plt.subplots(figsize=(6,6)) t = np.array([ [1, 2, 4, 3], [4, 3, 3, 2], [3, 4, 1, 4], [2, 1, 2, 1] ]) fig.patch.set_facecolor('#1b1b1b') for nw in range(n_weeks): ax.scatter([nw] * n_weeks, t[:, nw], marker='o', color='#4F535C', s=100, zorder=2) ax.axis('off') for team in t: x1, x2 = 0, 1 for rank in range(0, len(team) - 1): y1 = n_weeks - team[rank] + 1 y2 = n_weeks - team[rank + 1] + 1 x, y = hanging_line([x1, y1], [x2, y2]) ax.plot(x, y, color='#4F535C', zorder=1) x1 += 1 x2 += 1 The code is producing the following output:- But I want the curved lines to look somewhat like this: What changes should I have to do in my code to get the required result? | Here is an approach using bezier curves. The sequence [...., i-indent, i, i + 0.8, ...] will put control points at each integer position i and some space before and after. The plot below used indent=0.8; indent=0 would create straight lines; with indent>1 the curves would be intersecting more. Other variations will make the curves more or less "cornered". import matplotlib.pyplot as plt from matplotlib.path import Path import matplotlib.patches as patches import numpy as np n_teams = 4 n_weeks = 4 t = np.array([[1, 2, 4, 3], [4, 3, 3, 2], [3, 4, 1, 4], [2, 1, 2, 1]]) fig, ax = plt.subplots(figsize=(10, 4), facecolor='#1b1b1b') ax.set_facecolor('#1b1b1b') indent = 0.8 for tj in t: ax.scatter(np.arange(len(tj)), tj, marker='o', color='#4F535C', s=100, zorder=3) # create bezier curves verts = [(i + d, tij) for i, tij in enumerate(tj) for d in (-indent, 0, indent)][1:-1] codes = [Path.MOVETO] + [Path.CURVE4] * (len(verts) - 1) path = Path(verts, codes) patch = patches.PathPatch(path, facecolor='none', lw=2, edgecolor='#4F535C') ax.add_patch(patch) ax.set_xticks([]) ax.set_yticks([]) ax.autoscale() # sets the xlim and ylim for the added patches plt.show() A colored version could look like: colors = ['crimson', 'skyblue', 'lime', 'gold'] for tj, color in zip(t, colors): ax.scatter(np.arange(len(tj)), tj, marker='o', color=color, s=100, zorder=3) verts = [(i + d, tij) for i, tij in enumerate(tj) for d in (-indent, 0, indent)][1:-1] codes = [Path.MOVETO] + [Path.CURVE4] * (len(verts) - 1) path = Path(verts, codes) patch = patches.PathPatch(path, facecolor='none', lw=2, edgecolor=color) ax.add_patch(patch) The following plot compares different values for indent: | 22 | 21 |
63,492,123 | 2020-8-19 | https://stackoverflow.com/questions/63492123/how-do-add-an-assembled-field-to-a-pydantic-model | Say I have model class UserDB(BaseModel): first_name: Optional[str] = None last_name: Optional[str] = None How do I make another model that is constructed from this one and has a field that changes based on the fields in this model? For instance, something like this class User(BaseModel): full_name: str = first_name + ' ' + last_name Constructed like this maybe User.parse_obj(UserDB) Thanks! | If you do not want to keep first_name and last_name in User then you can customize __init__. use validator for setting full_name. Both methods do what you want: from typing import Optional from pydantic import BaseModel, validator class UserDB(BaseModel): first_name: Optional[str] = None last_name: Optional[str] = None class User_1(BaseModel): location: str # for a change full_name: Optional[str] = None def __init__(self, user_db: UserDB, **data): super().__init__(full_name=f"{user_db.first_name} {user_db.last_name}", **data) user_db = UserDB(first_name="John", last_name="Stark") user = User_1(user_db, location="Mars") print(user) class User_2(BaseModel): first_name: Optional[str] = None last_name: Optional[str] = None full_name: Optional[str] = None @validator('full_name', always=True) def ab(cls, v, values) -> str: return f"{values['first_name']} {values['last_name']}" user = User_2(**user_db.dict()) print(user) output location='Mars' full_name='John Stark' first_name='John' last_name='Stark' full_name='John Stark' UPDATE: For working with response_model you can customize __init__ in such way: class User_1(BaseModel): location: str # for a change full_name: Optional[str] = None # def __init__(self, user_db: UserDB, **data): def __init__(self, first_name, last_name, **data): super().__init__(full_name=f"{first_name} {last_name}", **data) user_db = UserDB(first_name="John", last_name="Stark") user = User_1(**user_db.dict(), location="Mars") print(user) | 25 | 24 |
63,561,028 | 2020-8-24 | https://stackoverflow.com/questions/63561028/how-to-detect-collisions-between-two-rectangular-objects-or-images-in-pygame | I am making a game in which the player has to use a bowl to catch falling items. I have some images of items in a list and an image of a bowl that is used to catch the items. The items keep on falling and reset to the top of the screen if they reach the boundary (bottom edge). I got this logic done which allows the items to fall but I do not know how to detect when there is a collision between the bowl and item. My code: import math import pygame import random pygame.init() display_width = 800 display_height = 600 game_display = pygame.display.set_mode((display_width, display_height)) clock = pygame.time.Clock() pygame.display.set_caption("Catch the Ball") white = (255, 255, 255) black = (0, 0, 0) red = (255, 0, 0) blue = (0, 255, 0) player_img = pygame.image.load("Images/soup.png") thing_imgs = [pygame.image.load('Images/muffin.png'), pygame.image.load('Images/dessert.png'), pygame.image.load('Images/cheese.png'), pygame.image.load('Images/fruit.png')] def player(x, y): game_display.blit(player_img, (x, y)) def things(x, y, img): game_display.blit(img, (x, y)) def game_loop(): running = True x = display_width * 0.45 y = display_height * 0.8 x_change = 0 player_width = 64 player_height = 64 things_cor = [[random.randint(0, display_width), 32]] things_added = [random.choice(thing_imgs)] thing_height = 32 thing_width = 32 y_change = 5 caught = 0 while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: x_change = -5 if event.key == pygame.K_RIGHT: x_change = 5 if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT: x_change = 0 game_display.fill(white) player(x, y) x += x_change for i in range(len(things_cor)): thing_x, thing_y = things_cor[i] things(thing_x, thing_y, things_added[i]) for i in range(len(things_cor)): things_cor[i][1] += y_change if things_cor[i][1] > display_height: things_cor[i][1] = random.randint(-2000, -1000) things_cor[i][0] = random.randint(0, display_width) things_added[i] = random.choice(thing_imgs) things_added.append(random.choice(thing_imgs)) if len(things_added) < 6: things_cor.append( [random.randint(0, display_width), -10]) if x < 0: x = 0 elif x > display_width - player_width: x = display_width - player_width clock.tick(60) pygame.display.update() game_loop() | Use pygame.Rect objects and colliderect() to detect the collision between the bounding rectangles of 2 objects or 2 images: rect1 = pygame.Rect(x1, y1, w1, h1) rect2 = pygame.Rect(x2, y2, w2, h2) if rect1.colliderect(rect2): # [...] If you have to images (pygame.Surface objects), the bounding rectangle of can be get by get_rect(), where the location of the Surface has to be set by an keyword argument, since the returned rectangle always starts at (0, 0): (see Why is my collision test not working and why is the position of the rectangle of the image always wrong (0, 0)?) def game_loop(): # [...] while running: # [...] 
player_rect = player_img.get_rect(topleft = (x, y)) for i in range(len(things_cor)): thing_rect = things_added[i].get_rect(topleft = things_cor[i]) if player_rect.colliderect(thing_rect): print("hit") player(x, y) x += x_change for i in range(len(things_cor)): thing_x, thing_y = things_cor[i] things(thing_x, thing_y, things_added[i]) Use pygame.time.get_ticks() to delay the start of the game for a certain time. pygame.time.get_ticks() return the number of milliseconds since pygame.init() was called. For instance: def game_loop(): # [...] while running: passed_time = pygame.time.get_ticks() # passed time in milliseconds start_time = 100 * 1000 # start time in milliseconds (100 seconds) # [...] # move player if passed_time >= start_time: x += x_change if x < 0: x = 0 elif x > display_width - player_width: x = display_width - player_width # move things if passed_time >= start_time: for i in range(len(things_cor)): things_cor[i][1] += y_change if things_cor[i][1] > display_height: things_cor[i][1] = random.randint(-2000, -1000) things_cor[i][0] = random.randint(0, display_width) things_added[i] = random.choice(thing_imgs) things_added.append(random.choice(thing_imgs)) if len(things_added) < 6: things_cor.append( [random.randint(0, display_width), -10]) # draw scene and update dispaly game_display.fill(white) player(x, y) for i in range(len(things_cor)): thing_x, thing_y = things_cor[i] things(thing_x, thing_y, things_added[i]) pygame.display.update() clock.tick(60) | 9 | 4 |
63,514,464 | 2020-8-20 | https://stackoverflow.com/questions/63514464/graph-to-connect-sentences | I have a list of sentences of a few topics (two) like the below: Sentences Trump says that it is useful to win the next presidential election. The Prime Minister suggests the name of the winner of the next presidential election. In yesterday's conference, the Prime Minister said that it is very important to win the next presidential election. The Chinese Minister is in London to discuss about climate change. The president Donald Trump states that he wants to win the presidential election. This will require a strong media engagement. The president Donald Trump states that he wants to win the presidential election. The UK has proposed collaboration. The president Donald Trump states that he wants to win the presidential election. He has the support of his electors. As you can see there is similarity in sentences. I am trying to relate multiple sentences and visualise the characteristics of them by using a graph (directed). The graph is built from a similarity matrix, by applying row ordering of sentences as shown above. I created a new column, Time, to show the order of sentences, so first row (Trump says that....) is at time 1; second row (The Prime Minister suggests...) is at time 2, and so on. Something like this Time Sentences 1 Trump said that it is useful to win the next presidential election. 2 The Prime Minister suggests the name of the winner of the next presidential election. 3 In today's conference, the Prime Minister said that it is very important to win the next presidential election. ... I would like then to find the relationships in order to have a clear overview of the topic. Multiple paths for a sentence would show that there are multiple information associated with it. To determine similarity between two sentences, I tried to extract nouns and verbs as follows: noun=[] verb=[] for index, row in df.iterrows(): nouns.append([word for word,pos in pos_tag(row[0]) if pos == 'NN']) verb.append([word for word,pos in pos_tag(row[0]) if pos == 'VB']) as they are keywords in whatever sentence. So when a keyword (noun or verb) appears in sentence x but not in the other sentences, it represents a difference between these two sentences. I think a better approach, however, could be using word2vec or gensim (WMD). This similarity has to be calculated for each sentence. I would like to build a graph which shows the content of the sentence in my example above. Since there are two topics (Trump and Chinese Minister), for each of them I need to look for sub-topics. Trump has sub-topic presidential election, for example. A node in my graph should represent a sentence. Words in each node represent differences for the sentences, showing new info in the sentence. For example, the word states in sentence at time 5 is in adjacent sentences at time 6 and 7. I would like just to find a way to have similar results as shown in picture below. I have tried using mainly nouns and verbs extraction, but probably it is not the right way to proceed. What I tried to do has been to consider sentence at time 1 and compare it with other sentences, assigning a similarity score (with noun and verbs extraction but also with word2vec), and repeat it for all the other sentences. But my problem is now on how to extract difference to create a graph that can make sense. For the part of the graph, I would consider to use networkx (DiGraph): G = nx.DiGraph() N = Network(directed=True) to show direction of relationships. 
I provided a different example to make it be clearer (but if you worked with the previous example, it would be fine as well. Apologies for the inconvenience, but since my first question was not so clear, I had to provide also a better, probably easier, example). | Didn't implement NLP for verb / noun separation, just added a list of good words. They can be extracted and normalized with spacy relatively easy. Please note that walk occurs in 1,2,5 sentences and forms a triad. import re import networkx as nx import matplotlib.pyplot as plt plt.style.use("ggplot") sentences = [ "I went out for a walk or walking.", "When I was walking, I saw a cat. ", "The cat was injured. ", "My mum's name is Marylin.", "While I was walking, I met John. ", "Nothing has happened.", ] G = nx.Graph() # set of possible good words good_words = {"went", "walk", "cat", "walking"} # remove punctuation and keep only good words inside sentences words = list( map( lambda x: set(re.sub(r"[^\w\s]", "", x).lower().split()).intersection( good_words ), sentences, ) ) # convert sentences to dict for furtehr labeling sentences = {k: v for k, v in enumerate(sentences)} # add nodes for i, sentence in sentences.items(): G.add_node(i) # add edges if two nodes have the same word inside for i in range(len(words)): for j in range(i + 1, len(words)): for edge_label in words[i].intersection(words[j]): G.add_edge(i, j, r=edge_label) # compute layout coords coord = nx.spring_layout(G) plt.figure(figsize=(20, 14)) # set label coords a bit upper the nodes node_label_coords = {} for node, coords in coord.items(): node_label_coords[node] = (coords[0], coords[1] + 0.04) # draw the network nodes = nx.draw_networkx_nodes(G, pos=coord) edges = nx.draw_networkx_edges(G, pos=coord) edge_labels = nx.draw_networkx_edge_labels(G, pos=coord) node_labels = nx.draw_networkx_labels(G, pos=node_label_coords, labels=sentences) plt.title("Sentences network") plt.axis("off") Update If you want to measure the similarity between different sentences, you may want to calculate the difference between sentence embedding. This gives you an opportunity to find semantic similarity between sentences with different words like "A soccer game with multiple males playing" and "Some men are playing a sport". Almost SoTA approach using BERT can be found here, more simple approaches are here. Since you have similarity measure, just replace add_edge block to add new edge only if similarity measure is greater than some threshold. Resulting add edges code will look like this: # add edges if two nodes have the same word inside tresold = 0.90 for i in range(len(words)): for j in range(i + 1, len(words)): # suppose you have some similarity function using BERT or PCA similarity = check_similarity(sentences[i], sentences[j]) if similarity > tresold: G.add_edge(i, j, r=similarity) | 9 | 4 |
63,515,267 | 2020-8-21 | https://stackoverflow.com/questions/63515267/how-to-get-all-or-multiple-pairs-historical-klines-from-binance-api-in-one-re | I have a trading bot that trades multiple pairs (30-40). It uses the previous 5m candle for the price input. Therefore, I get 5m history for ALL pairs one by one. Currently, the full cycle takes about 10 minutes, so the 5m candles get updated once in 10m, which is no good. Any ideas on how to speed things up? | I think the best option for you will be websocket connection. You cannot recieve kline data once per eg. 5 minutes, but you can recieve every change in candle like you see it in graph. Binance API provide only this, but in compound with websocket connection it will by realy fast, not 10 minutes. After recieve data you only must to specify when candle was closed, you can do it from timestamps that are in json data ('t' and 'T'). [documentation here] You must install websockets library. pip install websockets And here is some sample code how it can work. import asyncio import websockets async def candle_stick_data(): url = "wss://stream.binance.com:9443/ws/" #steam address first_pair = 'bnbbtc@kline_1m' #first pair async with websockets.connect(url+first_pair) as sock: pairs = '{"method": "SUBSCRIBE", "params": ["xrpbtc@kline_1m","ethbtc@kline_1m" ], "id": 1}' #other pairs await sock.send(pairs) print(f"> {pairs}") while True: resp = await sock.recv() print(f"< {resp}") asyncio.get_event_loop().run_until_complete(candle_stick_data()) Output: < {"e":"kline","E":1599828802835,"s":"XRPBTC","k":{"t":1599828780000,"T":1599828839999,"s":"XRPBTC","i":"1m","f":76140140,"L":76140145,"o":"0.00002346","c":"0.00002346","h":"0.00002346","l":"0.00002345","v":"700.00000000","n":6,"x":false,"q":"0.01641578","V":"78.00000000","Q":"0.00182988","B":"0"}} < {"e":"kline","E":1599828804297,"s":"BNBBTC","k":{"t":1599828780000,"T":1599828839999,"s":"BNBBTC","i":"1m","f":87599856,"L":87599935,"o":"0.00229400","c":"0.00229610","h":"0.00229710","l":"0.00229400","v":"417.88000000","n":80,"x":false,"q":"0.95933156","V":"406.63000000","Q":"0.93351653","B":"0"}} < {"e":"kline","E":1599828804853,"s":"ETHBTC","k":{"t":1599828780000,"T":1599828839999,"s":"ETHBTC","i":"1m","f":193235180,"L":193235214,"o":"0.03551300","c":"0.03551700","h":"0.03551800","l":"0.03551300","v":"21.52300000","n":35,"x":false,"q":"0.76437246","V":"11.53400000","Q":"0.40962829","B":"0"}} < {"e":"kline","E":1599828806303,"s":"BNBBTC","k":{"t":1599828780000,"T":1599828839999,"s":"BNBBTC","i":"1m","f":87599856,"L":87599938,"o":"0.00229400","c":"0.00229620","h":"0.00229710","l":"0.00229400","v":"420.34000000","n":83,"x":false,"q":"0.96497998","V":"406.63000000","Q":"0.93351653","B":"0"}} | 12 | 11 |
63,546,429 | 2020-8-23 | https://stackoverflow.com/questions/63546429/binascii-error-incorrect-padding-in-python-django | I am trying to save the base64 encoded image in the django rest framework. First of all, we make a code to insert the base64 encoded image into the imagefield and test it, and the following error appears. binascii.Error: Incorrect padding What I don't understand is that I've used the same code before and there was no such error. Can you help me? Here is my code. serializers.py from rest_framework import serializers from .models import post, comment class Base64ImageField (serializers.ImageField) : def to_internal_value (self, data) : from django.core.files.base import ContentFile import base64 import six import uuid if isinstance(data, six.string_types): if 'data:' in data and ';base64,' in data : header, data = data.split(';base64,') try : decoded_file = base64.b64decode(data) except TypeError : self.fail('invalid_image') file_name = str(uuid.uuid4())[:12] file_extension = self.get_file_extension(file_name, decoded_file) complete_file_name = "%s.%s" % (file_name, file_extension, ) data = ContentFile(decoded_file, name=complete_file_name) return super(Base64ImageField, self).to_internal_value(data) def get_file_extension (self, file_name, decoded_file) : import imghdr extension = imghdr.what(file_name, decoded_file) extension = "jpg" if extension == "jpeg" else extension return extension class commentSerializer (serializers.ModelSerializer) : class Meta : model = comment fields = '__all__' class postSerializer (serializers.ModelSerializer) : author = serializers.CharField(source='author.username', read_only=True) image1 = Base64ImageField(use_url=True) image2 = Base64ImageField(use_url=True) image3 = Base64ImageField(use_url=True) image4 = Base64ImageField(use_url=True) image5 = Base64ImageField(use_url=True) comment = commentSerializer(many=True, read_only=True) class Meta: model = post fields = ['pk', 'author', 'title', 'text', 'image1', 'image2', 'image3', 'image4', 'image5', 'tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'comment'] | I'm not sure this applies to your situation, depending on where you're storing your encoded data. I had the same error, but it related to some encoded session data. I cleared out the session data (cookies, cache etc) in the browser Devtools, and it fixed my issue. Just posting this in case it applies or helps others who come along for the same reason. | 7 | 23 |
63,552,169 | 2020-8-23 | https://stackoverflow.com/questions/63552169/some-python-objects-were-not-bound-to-checkpointed-values | I am trying to get started with Tensorflow 2.0 Object Detection API. I have gone through the installation following the official tutorial and I pass all the tests. However, I keep getting an error message that I don't understand when I try to run the main module. This is how I run it: python model_main_tf2.py --model_dir=ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 --pipeline_config_path=ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/pipeline.config This is the beginning of the error message: Traceback (most recent call last): File "model_main_tf2.py", line 113, in <module> tf.compat.v1.app.run() File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/absl/app.py", line 299, in run _run_main(main, args) File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "model_main_tf2.py", line 110, in main record_summaries=FLAGS.record_summaries) File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 569, in train_loop unpad_groundtruth_tensors) File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 383, in load_fine_tune_checkpoint ckpt.restore(checkpoint_path).assert_existing_objects_matched() File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/tensorflow/python/training/tracking/util.py", line 791, in assert_existing_objects_matched (list(unused_python_objects),)) AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program: [SyncOnReadVariable:{ 0: <tf.Variable 'conv2_block1_0_bn/moving_variance:0' shape=(256,) dtype=float32, numpy= array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., In the pipeline.config, I specify a checkpoint like this: fine_tune_checkpoint: "ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0" These are the contents of ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ : checkpoint ckpt-0.data-00000-of-00001 ckpt-0.index I have searched Google but couldn't find any answer. In this issue, the suggested solution is outdated (the code they suggest to replace is not there anymore). Question: What is the problem and how can I solve it? I am doing this on a server with CentOS Linux 7. I am using Python 3.7. I am new to Tensorflow so please if I am missing any important information, let me know. | From the file name you provided (ssd_resnet50_v1_fpn_640x640_coco17_tpu-8), I can see you are trying to work with an object detection task. Therefore, in your pipeline.config file change this line: fine_tune_checkpoint_type: "classification" To: fine_tune_checkpoint_type: "detection" This should solve your problem. | 17 | 52 |
63,583,502 | 2020-8-25 | https://stackoverflow.com/questions/63583502/removing-duplicates-from-pandas-rows-replace-them-with-nans-shift-nans-to-end | Problem How to remove duplicate cells from each row, considering each row separately (and perhaps replace them with NaNs) in a Pandas dataframe? It would be even better if we could shift all newly created NaNs to the end of each row. Related but different posts Posts on how to remove entire rows which are deemed duplicate: how do I remove rows with duplicate values of columns in pandas data frame? Drop all duplicate rows across multiple columns in Python Pandas Remove duplicate rows from Pandas dataframe where only some columns have the same value Post on how to remove duplicates from a list which is in a Pandas column: Remove duplicates from rows and columns (cell) in a dataframe, python Answer given here returns a series of strings, not a dataframe. Reproducible setup import pandas as pd Let's create a dataframe: df = pd.DataFrame({'a': ['A', 'A', 'C', 'B'], 'b': ['B', 'D', 'B', 'B'], 'c': ['C', 'C', 'C', 'A'], 'd': ['D', 'D', 'B', 'A']}, index=[0, 1, 2, 3]) df created: +----+-----+-----+-----+-----+ | | a | b | c | d | |----+-----+-----+-----+-----| | 0 | A | B | C | D | | 1 | A | D | C | D | | 2 | C | B | C | B | | 3 | B | B | A | A | +----+-----+-----+-----+-----+ (Printed using this.) A solution One way of dropping duplicates from each row, considering each row separately: df = df.apply(lambda row: pd.Series(row).drop_duplicates(keep='first'),axis='columns') using apply(), a lambda function, pd.Series(), & Series.drop_duplicates(). Shove all NaNs to the end of each row, using Shift NaNs to the end of their respective rows: df.apply(lambda x : pd.Series(x[x.notnull()].values.tolist()+x[x.isnull()].values.tolist()),axis='columns') Output: +----+-----+-----+-----+-----+ | | 0 | 1 | 2 | 3 | |----+-----+-----+-----+-----| | 0 | A | B | C | D | | 1 | A | D | C | nan | | 2 | C | B | nan | nan | | 3 | B | A | nan | nan | +----+-----+-----+-----+-----+ Just as we wished. Question Is there a more efficient way to do this? Perhaps with some built-in Pandas functions? | You can stack and then drop_duplicates that way. Then we need to pivot with the help of a cumcount level. The stack preserves the order the values appear in along the rows and the cumcount ensures that the NaN will appear in the end. df1 = df.stack().reset_index().drop(columns='level_1').drop_duplicates() df1['col'] = df1.groupby('level_0').cumcount() df1 = (df1.pivot(index='level_0', columns='col', values=0) .rename_axis(index=None, columns=None)) 0 1 2 3 0 A B C D 1 A D C NaN 2 C B NaN NaN 3 B A NaN NaN Timings Assuming 4 columns, let's see how a bunch of these methods compare as the number of rows grow. The map and apply solutions have a good advantage when things are small, but they become a bit slower than the more involved stack + drop_duplicates + pivot solution as the DataFrame gets longer. Regardless, they all start to take a while for a large DataFrame. 
import perfplot import pandas as pd import numpy as np def stack(df): df1 = df.stack().reset_index().drop(columns='level_1').drop_duplicates() df1['col'] = df1.groupby('level_0').cumcount() df1 = (df1.pivot(index='level_0', columns='col', values=0) .rename_axis(index=None, columns=None)) return df1 def apply_drop_dup(df): return pd.DataFrame.from_dict(df.apply(lambda x: x.drop_duplicates().tolist(), axis=1).to_dict(), orient='index') def apply_unique(df): return pd.DataFrame(df.apply(pd.Series.unique, axis=1).tolist()) def list_map(df): return pd.DataFrame(list(map(pd.unique, df.values))) perfplot.show( setup=lambda n: pd.DataFrame(np.random.choice(list('ABCD'), (n, 4)), columns=list('abcd')), kernels=[ lambda df: stack(df), lambda df: apply_drop_dup(df), lambda df: apply_unique(df), lambda df: list_map(df), ], labels=['stack', 'apply_drop_dup', 'apply_unique', 'list_map'], n_range=[2 ** k for k in range(18)], equality_check=lambda x,y: x.compare(y).empty, xlabel='~len(df)' ) Finally, if preserving the order in which the values originally appeared within each row is unimportant, you can use numpy. To de-duplicate you sort then check for differences. Then create an output array that shifts values to the right. Because this method will always return 4 columns, we require a dropna to match the other output in the case that every row has fewer than 4 unique values. def with_numpy(df): arr = np.sort(df.to_numpy(), axis=1) r = np.roll(arr, 1, axis=1) r[:, 0] = np.NaN arr = np.where((arr != r), arr, np.NaN) # Move all NaN to the right. Credit @Divakar mask = pd.notnull(arr) justified_mask = np.flip(np.sort(mask, axis=1), 1) out = np.full(arr.shape, np.NaN, dtype=object) out[justified_mask] = arr[mask] return pd.DataFrame(out, index=df.index).dropna(how='all', axis='columns') with_numpy(df) # 0 1 2 3 #0 A B C D #1 A C D NaN #2 B C NaN NaN # B/c this method sorts, B before C #3 A B NaN NaN perfplot.show( setup=lambda n: pd.DataFrame(np.random.choice(list('ABCD'), (n, 4)), columns=list('abcd')), kernels=[ lambda df: stack(df), lambda df: with_numpy(df), ], labels=['stack', 'with_numpy'], n_range=[2 ** k for k in range(3, 22)], # Lazy check to deal with string/NaN and irrespective of sort order. equality_check=lambda x, y: (np.sort(x.fillna('ZZ').to_numpy(), 1) == np.sort(y.fillna('ZZ').to_numpy(), 1)).all(), xlabel='len(df)' ) | 34 | 30 |
63,564,017 | 2020-8-24 | https://stackoverflow.com/questions/63564017/keras-accuracy-doesnt-improve-more-than-59-percent | Here is the code I tried: # normalizing the train data cols_to_norm = ["WORK_EDUCATION", "SHOP", "OTHER",'AM','PM','MIDDAY','NIGHT', 'AVG_VEH_CNT', 'work_traveltime', 'shop_traveltime','work_tripmile','shop_tripmile', 'TRPMILES_sum', 'TRVL_MIN_sum', 'TRPMILES_mean', 'HBO', 'HBSHOP', 'HBW', 'NHB', 'DWELTIME_mean','TRVL_MIN_mean', 'work_dweltime', 'shop_dweltime', 'firsttrip_time', 'lasttrip_time'] dataframe[cols_to_norm] = dataframe[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max()-x.min())) # labels y = dataframe.R_SEX.values # splitting train and test set X_train, X_test, y_train, y_test =train_test_split(X, y, test_size=0.33, random_state=42) model = Sequential() model.add(Dense(256, input_shape=(X_train.shape[1],), activation='relu')) model.add(Dense(256, activation='relu')) model.add(layers.Dropout(0.3)) model.add(Dense(256, activation='relu')) model.add(layers.Dropout(0.3)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam' , metrics=['acc']) print(model.summary()) model.fit(X_train, y_train , batch_size=128, epochs=30, validation_split=0.2) Epoch 23/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6623 - acc: 0.5985 - val_loss: 0.6677 - val_acc: 0.5918 Epoch 24/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6618 - acc: 0.5993 - val_loss: 0.6671 - val_acc: 0.5925 Epoch 25/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6618 - acc: 0.5997 - val_loss: 0.6674 - val_acc: 0.5904 Epoch 26/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6614 - acc: 0.6001 - val_loss: 0.6669 - val_acc: 0.5911 Epoch 27/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6608 - acc: 0.6004 - val_loss: 0.6668 - val_acc: 0.5920 Epoch 28/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6605 - acc: 0.6002 - val_loss: 0.6679 - val_acc: 0.5895 Epoch 29/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6602 - acc: 0.6009 - val_loss: 0.6663 - val_acc: 0.5932 Epoch 30/30 1014/1014 [==============================] - 4s 4ms/step - loss: 0.6597 - acc: 0.6027 - val_loss: 0.6674 - val_acc: 0.5910 <tensorflow.python.keras.callbacks.History at 0x7fdd8143a278> I have tried modifying the neural network and double-cheking the data. Is there anything I can do to improve the outcome? Is the model not deep enough? Is there any alternative models suited for my data? Does this mean these features have no predictive value? I'm kind of confused what to do next. thank you Update: I tried adding new column do my dataframe which is the outcome of a KNN model for sex classification. Here is what I did: #Import knearest neighbors Classifier model from sklearn.neighbors import KNeighborsClassifier #Create KNN Classifier knn = KNeighborsClassifier(n_neighbors=41) #Train the model using the training sets knn.fit(X, y) #predict sex for the train set so that it can be fed to the nueral net y_pred = knn.predict(X) #add the outcome of knn to the train set X = X.assign(KNN_result=y_pred) It improved the training and validation accuracy up to 61 percent. 
Epoch 26/30 1294/1294 [==============================] - 8s 6ms/step - loss: 0.6525 - acc: 0.6166 - val_loss: 0.6604 - val_acc: 0.6095 Epoch 27/30 1294/1294 [==============================] - 8s 6ms/step - loss: 0.6523 - acc: 0.6173 - val_loss: 0.6596 - val_acc: 0.6111 Epoch 28/30 1294/1294 [==============================] - 8s 6ms/step - loss: 0.6519 - acc: 0.6177 - val_loss: 0.6614 - val_acc: 0.6101 Epoch 29/30 1294/1294 [==============================] - 8s 6ms/step - loss: 0.6512 - acc: 0.6178 - val_loss: 0.6594 - val_acc: 0.6131 Epoch 30/30 1294/1294 [==============================] - 8s 6ms/step - loss: 0.6510 - acc: 0.6183 - val_loss: 0.6603 - val_acc: 0.6103 <tensorflow.python.keras.callbacks.History at 0x7fe981bbe438> Thank you | In short: NNs are rarely the best models for classifying either small amounts data or the data that is already compactly represented by a few non-heterogeneous columns. Often enough, boosted methods or GLM would produce better results from a similar amount of effort. What can you do with your model? Counterintuitively, sometimes hindering the network capacity can be beneficial, especially when the number of network parameters exceeds number of training points. One can reduce the number of neurons, like in your case setting layer sizes to 16 or so and simultaneously removing layers; introduce regularizations (label smoothing, weight decay, etc); or generate more data by adding more derived columns in different (log, binary) scales. Another approach would be to search for NNs models designed for your type of data. As, for example, Self-Normalizing Neural Networks or Wide & Deep Learning for Recommender Systems. If you get to try only 1 thing, I would recommend doing a grid search of the learning rate or trying a few different optimizers. How to make a better decision about which model to use? Look through finished kaggle.com competitions and find datasets similar to the one at hand, then check out the techniques used by the top places. | 7 | 3 |
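A minimal sketch of the capacity-reduction, regularization, and learning-rate ideas from the answer above, assuming the same normalized X_train/y_train arrays built in the question; the layer sizes, L2 strength, and candidate learning rates are illustrative placeholders rather than tuned values.

```python
# Illustrative sketch of the suggestions above (smaller layers, L2 weight decay,
# and a coarse learning-rate sweep). X_train/y_train are assumed to be the same
# normalized arrays from the question; sizes and rates are placeholder guesses.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_model(learning_rate):
    model = keras.Sequential([
        layers.Dense(16, activation='relu', input_shape=(X_train.shape[1],),
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dropout(0.3),
        layers.Dense(16, activation='relu',
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss='binary_crossentropy', metrics=['acc'])
    return model

for lr in (1e-2, 1e-3, 1e-4):
    history = build_model(lr).fit(X_train, y_train, batch_size=128, epochs=30,
                                  validation_split=0.2, verbose=0)
    print(lr, max(history.history['val_acc']))
```

Comparing the best validation accuracy across the swept rates is usually a cheaper first step than adding more depth to a tabular model like this one.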
63,478,525 | 2020-8-19 | https://stackoverflow.com/questions/63478525/pdb-skip-restart-when-finished | With python -m pdb -c "c" script.py the debug mode is entered, when a problem occurs. From the doc, I figured out that the option -c "c" (Python 3.2+) saves me to hit c + Enter each time at program start. Yet, when the program finishes normally, it outputs The program finished and will be restarted and I still have to hit q + Enter to the quit the program. Is there a way to skip this as well? | You can add multiple commands for -c in a sequence. METHOD 1: Quitting only if no error is encountered You can just give another command q to jump out of pdb mode incase no error is encountered. If an error is encountered, however, it will enter the debug mode where you will have to continue hitting c and enter to move forward. python -mpdb -c "c" -c "q" script.py Not encountering an error (quit immediately!)- (base) $ python -mpdb -c "c" -c "q" script.py The program finished and will be restarted (base) $ Encountering an error (enter debug mode!)- (base) $ python -mpdb -c "c" -c "q" script.py Traceback (most recent call last): File "/anaconda3/lib/python3.7/pdb.py", line 1701, in main pdb._runscript(mainpyfile) File "/anaconda3/lib/python3.7/pdb.py", line 1570, in _runscript self.run(statement) File "/anaconda3/lib/python3.7/bdb.py", line 585, in run exec(cmd, globals, locals) File "<string>", line 1, in <module> File "/Projects/Random/script.py", line 6, in <module> """ ModuleNotFoundError: No module named 'thispackagedoesntexist' Uncaught exception. Entering post mortem debugging Running 'cont' or 'step' will restart the program Post mortem debugger finished. The script.py will be restarted > /Projects/Random/script.py(6)<module>() -> """ (Pdb) METHOD 2: Quitting irrespective of error or not You can use echo "q" and pass it to the next (pdb) command by using the | in the following way. This will run the second command once and immediately take the output of echo "q" to quit - echo "q" | python -mpdb -c "c" script.py This hits q after the program is done running the script in debug mode. Debug automatically quits after encountering (or not encountering) an error. Not encountering an error (quit immediately!)- (base) $ echo "q" | python -mpdb -c "c" script.py The program finished and will be restarted > /Projects/Random/script.py(6)<module>() -> """ (Pdb) (base) $ Encountering an error (quit immediately!)- (base) $ echo "q" | python -mpdb -c "c" script.py Traceback (most recent call last): File "/anaconda3/lib/python3.7/pdb.py", line 1701, in main pdb._runscript(mainpyfile) File "/anaconda3/lib/python3.7/pdb.py", line 1570, in _runscript self.run(statement) File "/anaconda3/lib/python3.7/bdb.py", line 585, in run exec(cmd, globals, locals) File "<string>", line 1, in <module> File /Projects/Random/script.py", line 6, in <module> """ ModuleNotFoundError: No module named 'thispackagedoesntexist' Uncaught exception. Entering post mortem debugging Running 'cont' or 'step' will restart the program > /Projects/Random/script.py(6)<module>() -> """ (Pdb) Post mortem debugger finished. 
The script.py will be restarted > /Projects/Random/script.py(6)<module>() -> """ (Pdb) (base) $ Here is the list of commands you can use with pdb - (Pdb) help Documented commands (type help <topic>): ======================================== EOF c d h list q rv undisplay a cl debug help ll quit s unt alias clear disable ignore longlist r source until args commands display interact n restart step up b condition down j next return tbreak w break cont enable jump p retval u whatis bt continue exit l pp run unalias where | 7 | 5 |
63,490,533 | 2020-8-19 | https://stackoverflow.com/questions/63490533/how-does-the-predict-proba-function-in-lightgbm-work-internally | This is in reference to understanding, internally, how the probabilities for a class are predicted using LightGBM. Other packages, like sklearn, provide thorough detail for their classifiers. For example: LogisticRegression returns: Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to be “multinomial” the softmax function is used to find the predicted probability of each class. Else use a one-vs-rest approach, i.e calculate the probability of each class assuming it to be positive using the logistic function. and normalize these values across all the classes. RandomForest returns: Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf. There are additional Stack Overflow questions which provide additional details, such as for: Support Vector Machines Multilayer Perceptron I am trying to uncover those same details for LightGBM's predict_proba function. The documentation does not list the details of how the probabilities are calculated. The documentation simply states: Return the predicted probability for each class for each sample. The source code is below: def predict_proba(self, X, raw_score=False, start_iteration=0, num_iteration=None, pred_leaf=False, pred_contrib=False, **kwargs): """Return the predicted probability for each class for each sample. Parameters ---------- X : array-like or sparse matrix of shape = [n_samples, n_features] Input features matrix. raw_score : bool, optional (default=False) Whether to predict raw scores. start_iteration : int, optional (default=0) Start index of the iteration to predict. If <= 0, starts from the first iteration. num_iteration : int or None, optional (default=None) Total number of iterations used in the prediction. If None, if the best iteration exists and start_iteration <= 0, the best iteration is used; otherwise, all iterations from ``start_iteration`` are used (no limits). If <= 0, all iterations from ``start_iteration`` are used (no limits). pred_leaf : bool, optional (default=False) Whether to predict leaf index. pred_contrib : bool, optional (default=False) Whether to predict feature contributions. .. note:: If you want to get more explanations for your model's predictions using SHAP values, like SHAP interaction values, you can install the shap package (https://github.com/slundberg/shap). Note that unlike the shap package, with ``pred_contrib`` we return a matrix with an extra column, where the last column is the expected value. **kwargs Other parameters for the prediction. Returns ------- predicted_probability : array-like of shape = [n_samples, n_classes] The predicted probability for each class for each sample. X_leaves : array-like of shape = [n_samples, n_trees * n_classes] If ``pred_leaf=True``, the predicted leaf of every tree for each sample. X_SHAP_values : array-like of shape = [n_samples, (n_features + 1) * n_classes] or list with n_classes length of such objects If ``pred_contrib=True``, the feature contributions for each sample. 
""" result = super(LGBMClassifier, self).predict(X, raw_score, start_iteration, num_iteration, pred_leaf, pred_contrib, **kwargs) if callable(self._objective) and not (raw_score or pred_leaf or pred_contrib): warnings.warn("Cannot compute class probabilities or labels " "due to the usage of customized objective function.\n" "Returning raw scores instead.") return result elif self._n_classes > 2 or raw_score or pred_leaf or pred_contrib: return result else: return np.vstack((1. - result, result)).transpose() How can I understand how exactly the predict_proba function for LightGBM is working internally? | LightGBM, like all gradient boosting methods for classification, essentially combines decision trees and logistic regression. We start with the same logistic function representing the probabilities (a.k.a. softmax): P(y = 1 | X) = 1/(1 + exp(Xw)) The interesting twist is that the feature matrix X is composed from the terminal nodes from a decision tree ensemble. These are all then weighted by w, a parameter that must be learned. The mechanism used to learn the weights depends on the precise learning algorithm used. Similarly, the construction of X also depends on the algorithm. LightGBM, for example, introduced two novel features which won them the performance improvements over XGBoost: "Gradient-based One-Side Sampling" and "Exclusive Feature Bundling". Generally though, each row collects the terminal leafs for each sample and the columns represent the terminal leafs. So here is what the docs could say... Probability estimates. The predicted class probabilities of an input sample are computed as the softmax of the weighted terminal leaves from the decision tree ensemble corresponding to the provided sample. For further details, you'd have to delve into the details of boosting, XGBoost, and finally the LightGBM paper, but that seems a bit heavy handed given the other documentation examples you've given. | 19 | 15 |
63,581,844 | 2020-8-25 | https://stackoverflow.com/questions/63581844/pycharm-run-tool-window-run-tab-window-is-missing | Recently my PyCharm has been missing its Run tool window, which usually shows the run/debug results. It is now replaced with the Python Console and Services windows, which is really frustrating because it just shows gibberish in a command-prompt-like format. How do I get the Run tool window back as my main run/debug window? I have circled the tabs/windows that I mean in this pic with a red circle. Note: usually I can access this Run tool window by pressing Alt + 4. Please see the red circle: This is my run config: This is my View tab bar, it doesn't show Run (Alt+4): | From what I understand you want the Run icon pinned to your lower toolbar. (This corresponds to running whatever your last chosen configuration was.) Two easy steps: 1º View -> Tool Windows -> Run 2º Right-click the Run icon on the lower tool bar -> View Mode -> Dock Pinned Edit after OP feedback: If your Run (Alt+4) option has disappeared completely, besides trying a PyCharm reinstall it's advisable to manually clean the preference files that might be hidden. Check the following paths: C:\Users\user_name\AppData\Local\JetBrains and C:\Users\user_name\AppData\Roaming\JetBrains, C:\Users\user_name\.PyCharmCE2020.x, and C:\Path_to_your_Project\.idea. Some of these directories might be hidden, so you'll have to check that you've set them to be visible. Even after reinstalling PyCharm, some of the above configurations are likely to be kept. There's a strong possibility that the state of whatever changes caused Run to disappear is kept in files inside the above-mentioned directories. | 9 | 7 |
63,556,777 | 2020-8-24 | https://stackoverflow.com/questions/63556777/sqlalchemy-add-all-ignore-duplicate-key-integrityerror | I'm adding a list of objects entries to a database. Sometimes it may happen that one of this objects is already in the database (I do not have any control on that). With only one IntegrityError all the transactions will fail, i.e. all the objects in entries will not be inserted into the database. try: session.add_all(entries) session.commit() except: logger.error(f"Error! Rolling back") session.rollback() raise finally: session.close() My desired behavior would be: if there is a IntegrityError in one of the entries, catch it and do not add that object to the database, otherwise continue normally (do not fail) Edit: I'm usign MySQL as backend. | I depends on what backend you're using. PostgreSQL has a wonderful INSERT() ON CONFLICT DO NOTHING clause which you can use with SQLAlchemy: from sqlalchemy.dialects.postgresql import insert session.execute(insert(MyTable) .values(my_entries) .on_conflict_do_nothing()) MySQL has the similar INSERT IGNORE clause, but SQLAlchemy has less support for it. Luckily, according to this answer, there is a workaround, using prefix_with: session.execute(MyTable.__table__ .insert() .prefix_with('IGNORE') .values(my_entries)) The only thing is that my_entries needs to be a list of column to value mappings. That means [{ 'id': 1, 'name': 'Ringo' }, { 'id': 2, 'name': 'Paul' }, ...] et cetera. | 15 | 18 |
63,519,761 | 2020-8-21 | https://stackoverflow.com/questions/63519761/python-typeerror-required-field-type-ignores-missing-from-module-in-jupyter | I have been having issues with my jupyter notebook for a few days. I didn't fix them at the time but have decided to now. Earlier whenever I executed anything in the jupyter notebook, It showed a lengthy list of errors in the terminal(not in the notebook). I tried the same in jupyterlab but again, the same error. I upgraded my ipykernel and somehow it started working again.But this time it only executes a few statements such as print(hello world) I tried using a few other things like this: a = 1 b = 2 a+b But it gave me this error: TypeError Traceback (most recent call last) /usr/lib/python3.8/codeop.py in __call__(self, source, filename, symbol) 134 135 def __call__(self, source, filename, symbol): --> 136 codeob = compile(source, filename, symbol, self.flags, 1) 137 for feature in _features: 138 if codeob.co_flags & feature.compiler_flag: TypeError: required field "type_ignores" missing from Module I tried other things like importing a module but the same error. I am using Ubuntu 20.04 with python 3.8.2. What do I do? Here are all the installed libraries: absl-py==0.9.0 aiohttp==3.6.2 altgraph==0.17 appdirs==1.4.4 apptools==4.5.0 apturl==0.5.2 argon2-cffi==20.1.0 asgiref==3.2.10 astroid==2.4.2 astunparse==1.6.3 async-timeout==3.0.1 attrs==19.3.0 autobahn==20.7.1 backcall==0.2.0 bcrypt==3.1.7 beautifulsoup4==4.9.1 bleach==3.1.5 blinker==1.4 blis==0.4.1 Brlapi==0.7.0 bs4==0.0.1 cachetools==4.1.0 catalogue==1.0.0 certifi==2020.6.20 cffi==1.14.1 chardet==3.0.4 chrome-gnome-shell==0.0.0 click==7.1.2 cloudpickle==1.3.0 colorama==0.4.3 command-not-found==0.3 configobj==5.0.6 cryptography==2.8 cupshelpers==1.0 cycler==0.10.0 cymem==2.0.3 Cython==0.29.20 dbus-python==1.2.16 decorator==4.4.2 defer==1.0.6 defusedxml==0.6.0 discord.py==1.3.4 distlib==0.3.1 distro==1.4.0 distro-info===0.23ubuntu1 Django==3.0.8 docker-py==1.10.6 docker-pycreds==0.2.1 duplicity==0.8.12.0 entrypoints==0.3 envisage==4.9.2 Faker==0.9.1 fasteners==0.14.1 fastzbarlight==0.0.14 filelock==3.0.12 Flask==1.1.2 future==0.18.2 gast==0.3.3 google-auth==1.19.2 google-auth-oauthlib==0.4.1 google-pasta==0.2.0 grpcio==1.30.0 gym==0.17.2 h5py==2.10.0 httplib2==0.14.0 idna==2.8 ipykernel==5.3.4 ipython==5.5.0 ipython-genutils==0.2.0 ipywidgets==7.5.1 isort==4.3.21 itsdangerous==1.1.0 jedi==0.17.1 Jinja2==2.11.2 joblib==0.15.1 json5==0.9.5 jsonschema==3.2.0 jupyter==1.0.0 jupyter-client==6.1.3 jupyter-console==6.1.0 jupyter-core==4.6.3 jupyterlab==2.2.5 jupyterlab-server==1.2.0 Keras==2.4.3 keras-models==0.0.7 Keras-Preprocessing==1.1.2 keyring==18.0.1 Kivy==1.10.1 kiwisolver==1.2.0 language-selector==0.1 launchpadlib==1.10.13 lazr.restfulclient==0.14.2 lazr.uri==1.0.3 lazy-object-proxy==1.4.3 lockfile==0.12.2 louis==3.12.0 lxml==4.5.2 macaroonbakery==1.3.1 Mako==1.1.0 Markdown==3.2.2 MarkupSafe==1.1.0 matplotlib==3.3.1 mccabe==0.6.1 mistune==0.8.4 mixer==6.1.3 monotonic==1.5 multidict==4.7.6 murmurhash==1.0.2 nbconvert==5.6.1 nbformat==5.0.7 netifaces==0.10.4 notebook==6.1.3 numpy==1.17.4 oauthlib==3.1.0 olefile==0.46 opencv-python==4.2.0.34 OpenTimelineIO==0.12.1 opt-einsum==3.2.1 packaging==20.4 pandas==1.0.5 pandocfilters==1.4.2 paramiko==2.6.0 parso==0.7.0 pathlib==1.0.1 pexpect==4.6.0 pickle-mixin==1.0.2 pickleshare==0.7.5 Pillow==7.0.0 plac==1.1.3 plotly==4.9.0 preshed==3.0.2 prometheus-client==0.8.0 prompt-toolkit==3.0.6 protobuf==3.12.2 psutil==5.5.1 ptyprocess==0.6.0 pyaaf2==1.2.0 
pyasn1==0.4.8 pyasn1-modules==0.2.8 pycairo==1.16.2 pycparser==2.20 pycups==1.9.73 pyface==7.0.1 pygame==1.9.6 pyglet==1.5.0 Pygments==2.6.1 PyGObject==3.36.0 PyInquirer==1.0.3 PyInstaller==3.6 PyJWT==1.7.1 pylint==2.5.3 pymacaroons==0.13.0 PyNaCl==1.3.0 pyparsing==2.4.7 Pyphen==0.9.5 pyRFC3339==1.1 pyrsistent==0.16.0 python-apt==2.0.0+ubuntu0.20.4.1 python-dateutil==2.7.3 python-debian===0.1.36ubuntu1 pytz==2019.3 pyxdg==0.26 PyYAML==5.3.1 pyzmq==19.0.1 qtconsole==4.7.6 QtPy==1.9.0 regex==2020.7.14 reportlab==3.5.34 requests==2.24.0 requests-oauthlib==1.3.0 requests-unixsocket==0.2.0 retrying==1.3.3 rsa==4.6 scikit-learn==0.23.1 scipy==1.4.1 seaborn==0.10.1 SecretStorage==2.3.1 selenium==3.141.0 Send2Trash==1.5.0 simplegeneric==0.8.1 simplejson==3.16.0 six==1.12.0 sklearn==0.0 soupsieve==2.0.1 spacy==2.3.2 sqlparse==0.3.1 srsly==1.0.2 syllables==0.1.0 system-service==0.3 systemd-python==234 tensorboard==2.3.0 tensorboard-plugin-wit==1.6.0.post3 tensorflow==2.3.0 tensorflow-cpu==2.2.0 tensorflow-estimator==2.3.0 tensorflow-hub==0.7.0 tensorflowjs==2.0.1.post1 termcolor==1.1.0 terminado==0.8.3 testpath==0.4.4 text-unidecode==1.2 thinc==7.4.1 threadpoolctl==2.1.0 toml==0.10.1 tornado==6.0.4 tqdm==4.48.2 traitlets==4.3.3 traits==6.1.1 traitsui==7.0.1 txaio==20.4.1 typing-extensions==3.7.4.2 ubuntu-advantage-tools==20.3 ubuntu-drivers-common==0.0.0 ufw==0.36 unattended-upgrades==0.1 unity-tweak-tool==0.0.7 urllib3==1.25.8 usb-creator==0.3.7 vtk==9.0.1 wadllib==1.3.3 wasabi==0.7.1 wcwidth==0.2.5 webencodings==0.5.1 websocket-client==0.57.0 websockets==8.1 Werkzeug==1.0.1 widgetsnbextension==3.5.1 wrapt==1.12.1 xkit==0.0.0 yarl==1.5.0 | As stated here https://github.com/aiidateam/aiida-core/issues/3559 This might be due to ipython 5.8.0 incompatible with Python 3.8 issue. | 10 | 20 |
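A quick way to confirm the linked diagnosis from inside the failing environment; the explanation in the comments is an assumption based on the answer above, not a verified root cause for this exact setup.

```python
# Assumption (following the answer above): the "type_ignores" error comes from
# an IPython release that predates Python 3.8, which builds ast.Module nodes
# without the type_ignores field that Python 3.8's compile() requires. The
# package list in the question pins ipython==5.5.0, which is such a release.
import sys
import IPython

print(sys.version.split()[0], IPython.__version__)
# If the printed IPython version is older than the 7.x line, upgrading it in
# this environment (for example "python -m pip install --upgrade ipython") and
# restarting the Jupyter kernel should let cells such as `a + b` compile again.
```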
63,570,108 | 2020-8-24 | https://stackoverflow.com/questions/63570108/how-can-i-set-max-line-length-in-vscode-for-python | For JavaScript formatter works fine but not for Python. I have installed autopep8 but it seems that I can't set max line length. I tried this: "python.formatting.autopep8Args": [ "--max-line-length", "79", "--experimental" ] and my settings.json looks like this: { "workbench.colorTheme": "One Dark Pro", "git.autofetch": true, "workbench.iconTheme": "material-icon-theme", "git.enableSmartCommit": true, "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe", "[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "[html]": { "editor.defaultFormatter": "vscode.html-language-features" }, "javascript.updateImportsOnFileMove.enabled": "always", "[javascriptreact]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "liveServer.settings.donotShowInfoMsg": true, "editor.formatOnSave": true, "window.zoomLevel": 1, "vscode-w3cvalidation.validator-token": "Fri, 07 Aug 2020 07:35:05 GMT", "python.formatting.provider": "autopep8", "python.formatting.autopep8Args": [ "--max-line-length", "79", "--experimental" ], "python.autoComplete.addBrackets": true, "python.autoComplete.extraPaths": [] } Any ideas how to fix that? | From autopep8-usage, the default value of max-line-length is 79, so you can change to other value and have a try. About the effect of autopep8 in vscode, I made a test with the same settings as yours, like the following screenshot shows: every print sentence line-length is over 79, the first and the second print() parameters are expressions, and the setting works for the first one, not for the second. This is because setting applicable rules are provided by python extension and it has own caculation mechanism. When it comes to print strings, the setting doesn't work, so if you mean this in your question, you can add the following code in user settings.json. "editor.wordWrap": "wordWrapColumn", "editor.wordWrapColumn": 79 | 17 | 9 |
63,587,821 | 2020-8-25 | https://stackoverflow.com/questions/63587821/divide-two-pandas-columns-of-lists-by-each-other | I have a df like this: col1 col2 [1,3,4,5] [3,3,6,2] [1,4,5,5] [3,8,4,3] [1,3,4,8] [8,3,7,2] Trying to divide the elements in the lists in col1 and col2 together to get what's in the result column: col1 col2 result [1,3,4,5] [3,3,6,2] [.33,1,.66,2.5] [1,4,5,5] [3,8,4,3] [.33,.5,1.25,1.66] [1,3,4,8] [8,3,7,2] [.33,1,.57,4] Tried a lot of different approaches - but always get an error. Attempts: #attempt1 df['col1'].div(df['col2'], axis=0) #attempt2 from operator import truediv for i in df.col1: a = np.array(df['col1']) for t in df.col2: b = np.array(df['col2']) x = a/b print(x) #attempt3 for i in df.index: a = col1 b = col2 x = map(truediv, a, b) #attempt4 a = col1 b = col2 result = [x/y for x, y in zip(a, b)] #then apply to df #attempt5 a = col1 b = col2 result = a/b print(percent_matched) #then #apply to df >>>TypeError: unsupported operand type(s) for /: 'list' and 'list' Any ideas? | You can use list comprehension with apply, this is conditional on both the lists being of same length df['result'] = df.apply(lambda x: [np.round(x['col1'][i]/x['col2'][i], 2) for i in range(len(x['col1']))], axis = 1) col1 col2 result 0 [1, 3, 4, 5] [3, 3, 6, 2] [0.33, 1.0, 0.67, 2.5] 1 [1, 4, 5, 5] [3, 8, 4, 3] [0.33, 0.5, 1.25, 1.67] 2 [1, 3, 4, 8] [8, 3, 7, 2] [0.12, 1.0, 0.57, 4.0] Edit: As @TrentonMcKinney suggested, this can be done without using LC. This solution capitalized on Numpy's vectorized operations, df.apply(lambda x: np.round(np.array(x[0]) / np.array(x[1]), 3), axis=1) | 8 | 5 |
63,587,766 | 2020-8-25 | https://stackoverflow.com/questions/63587766/in-a-plotly-scatter-plot-how-do-you-join-two-set-of-points-with-a-line | I have the following code import plotly.graph_objs as go layout1= go.Layout(title=go.layout.Title(text="A graph",x=0.5), xaxis={'title':'x[m]'}, yaxis={'title':'y[m]','range':[-10,10]}) point_plot=[ go.Scatter(x=[3,4],y=[1,2],name="V0"), go.Scatter(x=[1,2],y=[1,1],name="V0"), go.Scatter(x=[5,6],y=[2,3],name="GT") ] go.Figure(data=point_plot, layout=layout1).show() which produces the following plot However this is not what I want exactly. What I want is that the two sets marked with "V0" must be of the same color and have only one mark in the legend. (In fact I am going to plot much more than two sets, like 20 sets of pairs joined by a line and they all have to be the same color and have only one mark in the legend) | You can combine two V0 segments in a single scatter and add an extra point with np.nan to split two segments value as follows: import plotly.graph_objs as go import numpy as np layout1= go.Layout(title=go.layout.Title(text="A graph",x=0.5), xaxis={'title':'x[m]'}, yaxis={'title':'y[m]','range':[-10,10]}) point_plot=[ go.Scatter(x=[1,2,3,3,4],y=[1,1,np.nan, 1,2],name="V0"), go.Scatter(x=[5,6],y=[2,3],name="GT") ] go.Figure(data=point_plot, layout=layout1).show() | 6 | 3 |
63,580,313 | 2020-8-25 | https://stackoverflow.com/questions/63580313/update-specific-subplot-axes-in-plotly | Setup: I'm tring to plot a subplots with plotly library, but can't figure out how to reference a specific subplots' axis to change its' name (or other properties). In Code 1 I show a simple example where I add two plots one on thop of the other with plotly.subplots.make_subplots. Code 1 import numpy as np from plotly.subplots import make_subplots from math import exp fig = make_subplots(2, 1) x = np.linspace(0, 10, 1000) y = np.array(list(map(lambda x: 1 / (1 + exp(-0.1 * x + 5)), x))) fig.add_trace( go.Scatter( x=x, y=y, name=f'\N{Greek Small Letter Sigma}(x)', showlegend=True ), row=1, col=1 ) x = np.where(np.random.randint(0, 2, 100)==1)[0] fig.add_trace( go.Scatter( x=x, y=np.zeros_like(x), name=f'Plot 2', mode='markers', marker=dict( symbol='circle-open', color='green', size=5 ), showlegend=True ), row=2, col=1 ) fig.show() What I've Tried I've tried using the fig.update_xaxes() after each trace addition, but it messes the plots and does not produce the desired output, as shown in Code 2. Code 2: import numpy as np from plotly.subplots import make_subplots from math import exp fig = make_subplots(2, 1) x = np.linspace(0, 10, 1000) y = np.array(list(map(lambda x: 1 / (1 + exp(-0.1 * x + 5)), x))) fig.add_trace( go.Scatter( x=x, y=y, name=f'\N{Greek Small Letter Sigma}(x)', showlegend=True ), row=1, col=1 ) fig.update_xaxes(title_text='x') x = np.where(np.random.randint(0, 2, 100)==1)[0] fig.add_trace( go.Scatter( x=x, y=np.zeros_like(x), name=f'Plot 2', mode='markers', marker=dict( symbol='circle-open', color='green', size=5 ), showlegend=True ), row=2, col=1 ) fig.update_xaxes(title_text='active users') fig.show() which results in (note the active users being printed on the top): My Questions: How can I assign the top plot x axis the label x, and active users label to the x axis of the bottom plot? And in general - how can I access the properties of an individual subplot? | With the help from this answer I as able to solve it, by referencing the xaxis for plot on the position row=1, col=1 and the xaxis1 for the plot on the row=2, col=1 position. The full solution is in Code 1. Code 1: import numpy as np from plotly.subplots import make_subplots from math import exp fig = make_subplots(2, 1) x = np.linspace(0, 10, 1000) y = np.array(list(map(lambda x: 1 / (1 + exp(-0.1 * x + 5)), x))) fig.add_trace( go.Scatter( x=x, y=y, name=f'\N{Greek Small Letter Sigma}(x)', showlegend=True ), row=1, col=1 ) fig['layout']['xaxis'].update(title_text='x') x = np.where(np.random.randint(0, 2, 100)==1)[0] fig.add_trace( go.Scatter( x=x, y=np.zeros_like(x), name=f'Plot 2', mode='markers', marker=dict( symbol='circle-open', color='green', size=5 ), showlegend=True ), row=2, col=1 ) fig['layout']['xaxis2'].update(title_text='active users') fig.show() Cheers. | 7 | 15 |
63,584,368 | 2020-8-25 | https://stackoverflow.com/questions/63584368/pip-install-psycopg2-error-command-x86-64-linux-gnu-gcc-failed-with-exit-st | I get the following error when trying to install psycopg2: (venv) root@scw-determined-panini:/app# pip install psycopg2 Collecting psycopg2 Using cached https://files.pythonhosted.org/packages/a8/8f/1c5690eebf148d1d1554fc00ccf9101e134636553dbb75bdfef4f85d7647/psycopg2-2.8.5.tar.gz Building wheels for collected packages: psycopg2 Running setup.py bdist_wheel for psycopg2 ... error Complete output from command /app/venv/bin/python3.8 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-psll6xe_/psycopg2/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp52or8xexpip-wheel- --python-tag cp38: running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/psycopg2 copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2 running build_ext building 'psycopg2._psycopg' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/psycopg x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120004 -DHAVE_LO64=1 -I/app/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement In file included from psycopg/psycopgmodule.c:28:0: ./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory #include <libpq-fe.h> ^~~~~~~~~~~~ compilation terminated. It appears you are missing some prerequisite to build the package from source. You may install a binary package by installing 'psycopg2-binary' from PyPI. If you want to install psycopg2 from source, please install the packages required for the build and try again. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>). error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Failed building wheel for psycopg2 Running setup.py clean for psycopg2 Failed to build psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2 ... 
error Complete output from command /app/venv/bin/python3.8 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-psll6xe_/psycopg2/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-x62dcj9d-record/install-record.txt --single-version-externally-managed --compile --install-headers /app/venv/include/site/python3.8/psycopg2: running install running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/psycopg2 copying lib/_json.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/pool.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/tz.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/_lru_cache.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/_ipaddress.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/extensions.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/_range.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/errors.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/compat.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/sql.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/errorcodes.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/extras.py -> build/lib.linux-x86_64-3.8/psycopg2 copying lib/__init__.py -> build/lib.linux-x86_64-3.8/psycopg2 running build_ext building 'psycopg2._psycopg' extension creating build/temp.linux-x86_64-3.8 creating build/temp.linux-x86_64-3.8/psycopg x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSYCOPG_VERSION=2.8.5 (dt dec pq3 ext lo64) -DPG_VERSION_NUM=120004 -DHAVE_LO64=1 -I/app/venv/include -I/usr/include/python3.8 -I. -I/usr/include/postgresql -I/usr/include/postgresql/12/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-3.8/psycopg/psycopgmodule.o -Wdeclaration-after-statement In file included from psycopg/psycopgmodule.c:28:0: ./psycopg/psycopg.h:36:10: fatal error: libpq-fe.h: No such file or directory #include <libpq-fe.h> ^~~~~~~~~~~~ compilation terminated. It appears you are missing some prerequisite to build the package from source. You may install a binary package by installing 'psycopg2-binary' from PyPI. If you want to install psycopg2 from source, please install the packages required for the build and try again. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>). 
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Command "/app/venv/bin/python3.8 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-psll6xe_/psycopg2/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-x62dcj9d-record/install-record.txt --single-version-externally-managed --compile --install-headers /app/venv/include/site/python3.8/psycopg2" failed with error code 1 in /tmp/pip-build-psll6xe_/psycopg2/ (venv) root@scw-determined-panini:/app# psycopg2-binary psycopg2-binary: command not found (venv) root@scw-determined-panini:/app# pip install psycopg2-binary Requirement already satisfied: psycopg2-binary in ./venv/lib/python3.8/site-packages | For Ubuntu use sudo apt install libpq-dev thanks | 22 | 89 |
63,581,308 | 2020-8-25 | https://stackoverflow.com/questions/63581308/edit-yaml-file-with-bash | I'm trying to edit the following YAML file db: host: 'x.x.x.x.x' main: password: 'password_main' admin: password: 'password_admin' To edit the host part, I got it working with sed -i "/^\([[:space:]]*host: \).*/s//\1'$DNS_ENDPOINT'/" config.yml But I can't find a way to update the password for main and admin (which are different values). I tried to play around with \n and [[:space:]] and got different flavours of: sed -i "/^\([[:space:]]*main:\n*[[:space:]]*password: \).*/s//\1'$DNS_ENDPOINT'/" config.yml But never got it to work. Any help greatly appreciated! Edit - Requirement: no external binaries/tools. Just good ol' bash. | $ awk -v new="'sumthin'" 'prev=="main:"{sub(/\047.*/,""); $0=$0 new} {prev=$1} 1' file db: host: 'x.x.x.x.x' main: password: 'sumthin' admin: password: 'password_admin' or if your new text can contain escape sequences that you don't want expanded (e.g. \t or \n), as seems likely when setting a password, then: new="'sumthin'" awk 'prev=="main:"{sub(/\047.*/,""); $0=$0 ENVIRON["new"]} {prev=$1} 1' file See How do I use shell variables in an awk script? for why/how I use ENVIRON[] to access a shell variable rather than setting an awk variable in that second script. | 17 | 5 |
63,553,845 | 2020-8-24 | https://stackoverflow.com/questions/63553845/pandas-read-json-valueerror-protocol-not-known | I ran the following code a while ago and it worked, but now it raises the following error. How do I solve it? ValueError: protocol not known. import json temp = json.dumps([status._json for status in tweet]) # create JSON newdf = pd.read_json(temp, orient='records') | As far as I could debug, this issue is caused by an update of pandas. The 1.1.0 release changed a few things in the read_json function. I could make my code work by setting the pandas version to 1.0.5: https://pandas.pydata.org/docs/whatsnew/v1.1.0.html | 23 | 7 |
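An alternative workaround that keeps pandas 1.1.0 is sketched below; the mechanism described in the comments is an assumption, not something stated in the accepted answer, so pinning to 1.0.5 as above remains the confirmed fix.

```python
# Hypothetical workaround (assumption: the tweet JSON contains URLs with "://",
# and pandas 1.1.0 inspects a plain string for a protocol before parsing, which
# is what raises "protocol not known"). Passing a buffer skips that path check.
# `tweet` is the same list of statuses used in the question, so this snippet is
# not runnable on its own.
import json
from io import StringIO
import pandas as pd

temp = json.dumps([status._json for status in tweet])      # create JSON
newdf = pd.read_json(StringIO(temp), orient='records')     # buffer, not a path
```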
63,577,356 | 2020-8-25 | https://stackoverflow.com/questions/63577356/get-hour-of-year-from-a-datetime | Is there a simple way to obtain the hour of the year from a datetime? dt = datetime(2019, 1, 3, 00, 00, 00) # 03/01/2019 00:00 dt_hour = dt.hour_of_year() # should be something like that Expected output: dt_hour = 48 It would be nice as well to obtain minutes_of_year and seconds_of_year | One way of implementing this yourself is this: def hour_of_year(dt): beginning_of_year = datetime.datetime(dt.year, 1, 1, tzinfo=dt.tzinfo) return (dt - beginning_of_year).total_seconds() // 3600 This first creates a new datetime object representing the beginning of the year. We then compute the time since the beginning of the year in seconds, divide by 3600 and take the integer part to get the full hours that have passed since the beginning of the year. Note that using the days attribute of the timedelta object will only return the number of full days since the beginning of the year. | 11 | 6 |
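The question also asks for minutes_of_year and seconds_of_year; the same subtraction used in the answer above extends directly, as in this small sketch (floor division keeps whole units):

```python
# Sketch following the answer above: reuse the elapsed-seconds computation and
# divide by the unit size; floor division returns whole hours/minutes.
import datetime

def _elapsed_seconds(dt):
    beginning_of_year = datetime.datetime(dt.year, 1, 1, tzinfo=dt.tzinfo)
    return (dt - beginning_of_year).total_seconds()

def hour_of_year(dt):
    return _elapsed_seconds(dt) // 3600

def minutes_of_year(dt):
    return _elapsed_seconds(dt) // 60

def seconds_of_year(dt):
    return _elapsed_seconds(dt)

dt = datetime.datetime(2019, 1, 3)
print(hour_of_year(dt), minutes_of_year(dt), seconds_of_year(dt))
# 48.0 2880.0 172800.0
```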
63,563,496 | 2020-8-24 | https://stackoverflow.com/questions/63563496/exclude-tests-in-pytest-configuration-file | I would like to be able to exclude instead of include certain python test files in the pytest.ini configuration file. According to the docs including tests boils down to something like this: # content of pytest.ini [pytest] pytest_files=test_main.py test_common.py To exclude files however, only command line options are suggested: --ignore=test_common.py How can I actually select files to ignore at the pytest.ini file level? | You can add any command line options in pytest.ini under addopts. In your case this should work: pytest.ini [pytest] addopts = --ignore=test_common.py As has been noted in the comments, --ignore takes a path (relative or absolute), not just a module name. From the output of pytest -h: --ignore=path ignore path during collection (multi-allowed). --ignore-glob=path ignore path pattern during collection (multi-allowed). | 14 | 20 |
63,520,908 | 2020-8-21 | https://stackoverflow.com/questions/63520908/does-imblearn-pipeline-turn-off-sampling-for-testing | Let us suppose the following code (from imblearn example on pipelines) ... # Instanciate a PCA object for the sake of easy visualisation pca = PCA(n_components=2) # Create the samplers enn = EditedNearestNeighbours() renn = RepeatedEditedNearestNeighbours() # Create the classifier knn = KNN(1) # Make the splits X_train, X_test, y_train, y_test = tts(X, y, random_state=42) # Add one transformers and two samplers in the pipeline object pipeline = make_pipeline(pca, enn, renn, knn) pipeline.fit(X_train, y_train) y_hat = pipeline.predict(X_test) I want to make it sure that when executing the pipeline.predict(X_test) the sampling procedures enn and renn will not be executed (but of course the pca must be executed). First, it is clear to me that over-, under-, and mixed-sampling are procedures to be applied to the training set, not to the test/validation set. Please correct me here if I am wrong. I browsed though the imblearn Pipeline code but I could not find the predict method there. I also would like to be sure that this correct behavior works when the pipeline is inside a GridSearchCV I just need some assurance that this is what happens with the imblearn.Pipeline. EDIT: 2020-08-28 @wundermahn answer is all I needed. This edit is just to add that this is the reason one should use the imblearn.Pipeline for imbalanced pre-processing and not sklearn.Pipeline Nowhere in the imblearn documentation I found an explanation why the need for imblearn.Pipeline when there is sklearn.Pipeline | Great question(s). To go through them in the order you posted: First, it is clear to me that over-, under-, and mixed-sampling are procedures to be applied to the training set, not to the test/validation set. Please correct me here if I am wrong. That is correct. You certainly do not want to test (whether that be on your test or validation data) on data that is not representative of the actual, live, "production" dataset. You should really only apply this to training. Please note, that if you are using techniques like cross-fold validation, you should apply the sampling to each fold individually, as indicated by this IEEE paper. I browsed though the imblearn Pipeline code but I could not find the predict method there. I'm assuming you found the imblearn.pipeline source code, and so if you did, what you want to do is take a look at the fit_predict method: @if_delegate_has_method(delegate="_final_estimator") def fit_predict(self, X, y=None, **fit_params): """Apply `fit_predict` of last step in pipeline after transforms. Applies fit_transforms of a pipeline to the data, followed by the fit_predict method of the final estimator in the pipeline. Valid only if the final estimator implements fit_predict. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_params : dict of string -> object Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. Returns ------- y_pred : ndarray of shape (n_samples,) The predicted target. 
""" Xt, yt, fit_params = self._fit(X, y, **fit_params) with _print_elapsed_time('Pipeline', self._log_message(len(self.steps) - 1)): y_pred = self.steps[-1][-1].fit_predict(Xt, yt, **fit_params) return y_pred Here, we can see that the pipeline utilizes the .predict method of the final estimator in the pipeline, in the example you posted, scikit-learn's knn: def predict(self, X): """Predict the class labels for the provided data. Parameters ---------- X : array-like of shape (n_queries, n_features), \ or (n_queries, n_indexed) if metric == 'precomputed' Test samples. Returns ------- y : ndarray of shape (n_queries,) or (n_queries, n_outputs) Class labels for each data sample. """ X = check_array(X, accept_sparse='csr') neigh_dist, neigh_ind = self.kneighbors(X) classes_ = self.classes_ _y = self._y if not self.outputs_2d_: _y = self._y.reshape((-1, 1)) classes_ = [self.classes_] n_outputs = len(classes_) n_queries = _num_samples(X) weights = _get_weights(neigh_dist, self.weights) y_pred = np.empty((n_queries, n_outputs), dtype=classes_[0].dtype) for k, classes_k in enumerate(classes_): if weights is None: mode, _ = stats.mode(_y[neigh_ind, k], axis=1) else: mode, _ = weighted_mode(_y[neigh_ind, k], weights, axis=1) mode = np.asarray(mode.ravel(), dtype=np.intp) y_pred[:, k] = classes_k.take(mode) if not self.outputs_2d_: y_pred = y_pred.ravel() return y_pred I also would like to be sure that this correct behaviour works when the pipeline is inside a GridSearchCV This sort of assumes the above two assumptions are true, and I am taking this to mean you want a complete, minimal, reproducible example of this working in a GridSearchCV. There is extensive documentation from scikit-learn on this, but an example I created using knn is below: import pandas as pd, numpy as np from imblearn.over_sampling import SMOTE from imblearn.pipeline import Pipeline from sklearn.neighbors import KNeighborsClassifier from sklearn.datasets import load_digits from sklearn.model_selection import GridSearchCV, train_test_split param_grid = [ { 'classification__n_neighbors': [1,3,5,7,10], } ] X, y = load_digits(return_X_y=True) X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.20) pipe = Pipeline([ ('sampling', SMOTE()), ('classification', KNeighborsClassifier()) ]) grid = GridSearchCV(pipe, param_grid=param_grid) grid.fit(X_train, y_train) mean_scores = np.array(grid.cv_results_['mean_test_score']) print(mean_scores) # [0.98051926 0.98121129 0.97981998 0.98050474 0.97494193] Your intuition was spot on, good job :) | 10 | 13 |
63,561,537 | 2020-8-24 | https://stackoverflow.com/questions/63561537/how-to-mark-individual-parameterized-tests-with-a-marker | I have been trying to parameterize my tests using @pytest.mark.parametrize, and I have a marketer @pytest.mark.test("1234"), I use the value from the test marker to do post the results to JIRA. Note the value given for the marker changes for every test_data. Essentially the code looks something like below. @pytest.mark.foo @pytest.mark.parametrize(("n", "expected"),[ (1, 2), (2, 3)]) def test_increment(n, expected): assert n + 1 == expected I want to do something like @pytest.mark.foo @pytest.mark.parametrize(("n", "expected"), [ (1, 2,@pytest.mark.test("T1")), (2, 3,@pytest.mark.test("T2")) ]) How to add the marker when using parameterized tests given that the value of the marker is expected to change with each test? | It's explained here in the documentation: https://docs.pytest.org/en/stable/example/markers.html#marking-individual-tests-when-using-parametrize To show it here as well, it'd be: @pytest.mark.foo @pytest.mark.parametrize(("n", "expected"), [ pytest.param(1, 2, marks=pytest.mark.T1), pytest.param(2, 3, marks=pytest.mark.T2), (4, 5) ]) | 11 | 15 |
63,552,044 | 2020-8-23 | https://stackoverflow.com/questions/63552044/how-to-extract-feature-vector-from-single-image-in-pytorch | I am attempting to understand more about computer vision models, and I'm trying to do some exploring of how they work. In an attempt to understand how to interpret feature vectors more I'm trying to use Pytorch to extract a feature vector. Below is my code that I've pieced together from various places. import torch import torch.nn as nn import torchvision.models as models import torchvision.transforms as transforms from torch.autograd import Variable from PIL import Image img=Image.open("Documents/01235.png") # Load the pretrained model model = models.resnet18(pretrained=True) # Use the model object to select the desired layer layer = model._modules.get('avgpool') # Set model to evaluation mode model.eval() transforms = torchvision.transforms.Compose([ torchvision.transforms.Resize(256), torchvision.transforms.CenterCrop(224), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) def get_vector(image_name): # Load the image with Pillow library img = Image.open("Documents/Documents/Driven Data Competitions/Hateful Memes Identification/data/01235.png") # Create a PyTorch Variable with the transformed image t_img = transforms(img) # Create a vector of zeros that will hold our feature vector # The 'avgpool' layer has an output size of 512 my_embedding = torch.zeros(512) # Define a function that will copy the output of a layer def copy_data(m, i, o): my_embedding.copy_(o.data) # Attach that function to our selected layer h = layer.register_forward_hook(copy_data) # Run the model on our transformed image model(t_img) # Detach our copy function from the layer h.remove() # Return the feature vector return my_embedding pic_vector = get_vector(img) When I do this I get the following error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead I'm sure this is an elementary error, but I can't seem to figure out how to fix this. It was my impression that the "totensor" transformation would make my data 4-d, but it seems it's either not working correctly or I'm misunderstanding it. Appreciate any help or resources I can use to learn more about this! | All the default nn.Modules in pytorch expect an additional batch dimension. If the input to a module is shape (B, ...) then the output will be (B, ...) as well (though the later dimensions may change depending on the layer). This behavior allows efficient inference on batches of B inputs simultaneously. To make your code conform you can just unsqueeze an additional unitary dimension onto the front of t_img tensor before sending it into your model to make it a (1, ...) tensor. You will also need to flatten the output of layer before storing it if you want to copy it into your one-dimensional my_embedding tensor. A couple of other things: You should infer within a torch.no_grad() context to avoid computing gradients since you won't be needing them (note that model.eval() just changes the behavior of certain layers like dropout and batch normalization, it doesn't disable construction of the computation graph, but torch.no_grad() does). I assume this is just a copy paste issue but transforms is the name of an imported module as well as a global variable. o.data is just returning a copy of o. 
In the old Variable interface (circa PyTorch 0.3.1 and earlier) this used to be necessary, but the Variable interface was deprecated way back in PyTorch 0.4.0 and no longer does anything useful; now its use just creates confusion. Unfortunately, many tutorials are still being written using this old and unnecessary interface. Updated code is then as follows: import torch import torchvision import torchvision.models as models from PIL import Image img = Image.open("Documents/01235.png") # Load the pretrained model model = models.resnet18(pretrained=True) # Use the model object to select the desired layer layer = model._modules.get('avgpool') # Set model to evaluation mode model.eval() transforms = torchvision.transforms.Compose([ torchvision.transforms.Resize(256), torchvision.transforms.CenterCrop(224), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) def get_vector(image): # Create a PyTorch tensor with the transformed image t_img = transforms(image) # Create a vector of zeros that will hold our feature vector # The 'avgpool' layer has an output size of 512 my_embedding = torch.zeros(512) # Define a function that will copy the output of a layer def copy_data(m, i, o): my_embedding.copy_(o.flatten()) # <-- flatten # Attach that function to our selected layer h = layer.register_forward_hook(copy_data) # Run the model on our transformed image with torch.no_grad(): # <-- no_grad context model(t_img.unsqueeze(0)) # <-- unsqueeze # Detach our copy function from the layer h.remove() # Return the feature vector return my_embedding pic_vector = get_vector(img) | 7 | 8 |
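A hook-free alternative to the approach in the answer above (an illustrative sketch, not part of the accepted answer): truncate resnet18 just after its average-pooling layer and call the truncated model directly, which yields the same 512-dimensional embedding. t_img below is assumed to be the transformed (3, 224, 224) tensor produced by the answer's transforms.

```python
# Keep every child module of resnet18 except the final fully connected layer,
# so the forward pass stops at the average-pooling output.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

with torch.no_grad():
    # t_img is the transformed (3, 224, 224) tensor from the answer's get_vector
    pic_vector = feature_extractor(t_img.unsqueeze(0)).flatten()

print(pic_vector.shape)   # torch.Size([512])
```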
63,528,693 | 2020-8-21 | https://stackoverflow.com/questions/63528693/whats-the-difference-between-mock-magicmockspec-someclass-and-mock-create-aut | I'm trying to understand the difference between these two mock constructs and when is it appropriate to use either. I tested it in the interpreter, e.g.: >>> mm = mock.MagicMock(spec=list) >>> ca = mock.create_autospec(list) >>> mm <MagicMock spec='list' id='140372375801232'> >>> mm() <MagicMock name='mock()' id='140372384057808'> >>> mm.append() <MagicMock name='mock.append()' id='140372375724720'> >>> mm().append() <MagicMock name='mock().append()' id='140372375753104'> >>> ca <MagicMock spec='list' id='140372384059248'> >>> ca() <NonCallableMagicMock name='mock()' spec='list' id='140372384057040'> >>> ca.append() <MagicMock name='mock.append()' id='140372375719744'> >>> ca().append() <MagicMock name='mock().append()' id='140372375796848'> >>> But I can't understand why "constructing" the mock created using create_autospec gives me a NonCallableMagicMock and the MagicMock gives me more MagicMock. The documentation isn't helping much. | The main difference between using the spec argument and using create_autospec is recursiveness. In the first case, the object itself is specced, while the called object is not: >>> mm = mock.MagicMock(spec=list) >>> mm <MagicMock spec='list' id='2868486557120'> >>> mm.foo Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python\Python38\lib\unittest\mock.py", line 635, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'foo' >>> mm.append <MagicMock name='mock.append' id='2868486430240'> >>> mm.append.foo <MagicMock name='mock.append.foo' id='2868486451408'> In the second case, the called objects are also specced (lazily): >>> ca = mock.create_autospec(list) >>> ca <MagicMock spec='list' id='2868486254848'> >>> ca.foo Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python\Python38\lib\unittest\mock.py", line 635, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'foo' >>> ca.append <MagicMock name='mock.append' spec='method_descriptor' id='2868486256336'> >>> ca.append.foo Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python\Python38\lib\unittest\mock.py", line 635, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'foo' There is one caveat, that is shown in your example code. If you use create_autospec as shown here, it behaves as if the object is a class, not an instance, so you are able to call it (creating an instance): >>> ca = mock.create_autospec(list) >>> ca() <NonCallableMagicMock name='mock()' spec='list' id='2868485877280'> If you want to behave it like an instance, you have to use instance=True: >>> ca = mock.create_autospec(list, instance=True) >>> ca <NonCallableMagicMock spec='list' id='2868485875024'> >>> ca() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'NonCallableMagicMock' object is not callable Note that using mock.patch with autospec=True creates a mock that behaves like the one created using mock.create_autospec, as described in the documentation. Also note that the return value of a call is always a MagicMock, regardless of the return value of the real call. 
So, even if a function returns None, like list.append, a mock is returned if calling the method from a mock, regardless of the spec. | 14 | 13 |
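To make the practical difference concrete, here is a small illustrative sketch (the Greeter class is made up, not from the post): a plain spec only restricts attribute names, while create_autospec also validates call signatures recursively.

from unittest import mock

class Greeter:
    def greet(self, name):
        return f"hello {name}"

shallow = mock.MagicMock(spec=Greeter)
shallow.greet("a", "b", "c")              # accepted: only attribute names are checked

deep = mock.create_autospec(Greeter, instance=True)
deep.greet("a")                           # fine
# deep.greet("a", "b", "c")               # would raise TypeError: too many positional arguments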
63,550,773 | 2020-8-23 | https://stackoverflow.com/questions/63550773/dynamic-adding-function-to-class-and-make-it-as-bound-method | I am new to Python. I want to write an add_function method for this class to add any function to my class dynamically. This function can manipulate some attributes of my class, but it is written outside the class. Assume this class and its method: import numpy as np import feature_function as ffun class Features: def __init__(self,x,fs=None): self.x = x self.fs = fs self.label = [] self.features = [] def __mean(self, nan_omit = False, min_samples = 1): out = ffun.mean(self.x, nan_omit = nan_omit, min_samples = min_samples) self.label.extend(['mean']) self.features = np.append(self.features,out) Now I want to write this function outside of the scope of the class: def __max(self, nan_omit = False, min_samples = 1): out = ffun.max(self.x, nan_omit = nan_omit, min_samples = min_samples) self.label.extend(['max']) self.features = np.append(self.features,out) and add it to the class. In order to add a function to the class, I have an add_function method inside my class whose arguments are the function that is passed from outside the class and also the name of the function. # Add function dynamically by user def add_function(self, name, methodToRun, type ): name2 = '__' + name # setattr(self, name2, methodToRun) setattr(self, mangle_attr(self, name2), methodToRun) mangle_attr is a function used to set a private function on a class: def mangle_attr(source, attr): # if source is an object, get the class if not hasattr(source, "__bases__"): source = source.__class__ # mangle attr tmp = {} code = _mangle_template.format(cls=source.__name__, attr=attr) eval(compile(code, '', 'exec'), {}, tmp); return tmp['cls'].mangle.__code__.co_varnames[0] Using setattr doesn't bind the __max function to my class as a method. For example, __mean is bound in my class; if I call the function I can see: <bound method Features.__mean of <__main__.Features object at 0x7fd4c91aac10>> however for __max I still see this: <function skewness at 0x7fd2b9dbfb90> Do I have a solution for this problem? | Your add_function binds its argument to the instance self, not to the class. Bind the argument to the class, e.g. via type(self) or a classmethod: class Features: ... # Add function dynamically by user @classmethod def add_function(cls, name, methodToRun, type ): name2 = '__' + name setattr(cls, mangle_attr(cls, name2), methodToRun) A bound method is a function on the class, looked up through the instance. Adding a function to an instance merely makes it an attribute, which happens to be callable. | 6 | 5 |
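A minimal, self-contained sketch of the point made in the answer, stripped of the name-mangling details (the total function and attribute names here are illustrative):

class Features:
    def __init__(self, x):
        self.x = x

def total(self):
    return sum(self.x)

# Attach the function to the class, not to an instance
setattr(Features, "total", total)

f = Features([1, 2, 3])
print(f.total)    # <bound method total of <__main__.Features object at ...>>
print(f.total())  # 6 -- looked up through the instance, so it arrives bound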
63,548,053 | 2020-8-23 | https://stackoverflow.com/questions/63548053/python-fill-missing-values-according-to-frequency | I have seen a lot of cases missing values are either filled by mean or medians. I was wondering how can we fill misssing values with frequency. Here is my setup: import numpy as np import pandas as pd df = pd.DataFrame({'sex': [1,1,1,1,0,0,np.nan,np.nan,np.nan]}) df['sex_fillna'] = df['sex'].fillna(df.sex.mode()[0]) print(df) sex sex_fillna 0 1.0 1.0 We have 4 males 1 1.0 1.0 2 1.0 1.0 3 1.0 1.0 4 0.0 0.0 we have 2 females, so ratio is 2 5 0.0 0.0 6 NaN 1.0 Here, I want random choice of [1,1,0] 7 NaN 1.0 eg. 1,1,0 or 1,0,1 or 0,1,1 randomly 8 NaN 1.0 Is there a generic way it can be done so? My attempt df['sex_fillan2'] = df['sex'].fillna(np.random.randint(0,2)) # here the ratio is not guaranteed to approx 4/2 = 2 NOTE This example is only for binary values, I was looking for categorical values having more than two categories. For example: class: A B C 20% 40% 60% Then instead of filling all nans by class C I would like to fill according to frequency counts. But, is this a good idea? As per some comments, this might or might not be a good idea to impute missing values with different values for different rows, I have created a question in CrossValidated, if you want to give some inputs or see if this is a good idea visit the page: https://stats.stackexchange.com/questions/484467/is-it-better-to-fillnans-based-on-frequency-rather-than-all-values-with-mean-or | Check with value_counts + np.random.choice s = df.sex.value_counts(normalize=True) df['sex_fillna'] = df['sex'] df.loc[df.sex.isna(), 'sex_fillna'] = np.random.choice(s.index, p=s.values, size=df.sex.isna().sum()) df Out[119]: sex sex_fillna 0 1.0 1.0 1 1.0 1.0 2 1.0 1.0 3 1.0 1.0 4 0.0 0.0 5 0.0 0.0 6 NaN 0.0 7 NaN 1.0 8 NaN 1.0 The output for s index is the category and the value is the probability s Out[120]: 1.0 0.666667 0.0 0.333333 Name: sex, dtype: float64 | 12 | 6 |
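The same idea can be wrapped in a small reusable helper; this is just a sketch building on the accepted answer (the seed argument is illustrative, added for reproducibility):

import numpy as np
import pandas as pd

def fill_by_frequency(df, column, seed=None):
    # Sample replacement values with the same probabilities as the observed categories
    rng = np.random.default_rng(seed)
    freq = df[column].value_counts(normalize=True)
    mask = df[column].isna()
    df.loc[mask, column] = rng.choice(freq.index.to_numpy(), p=freq.to_numpy(), size=mask.sum())
    return df

df = pd.DataFrame({'sex': [1, 1, 1, 1, 0, 0, np.nan, np.nan, np.nan]})
print(fill_by_frequency(df, 'sex', seed=0))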
63,530,701 | 2020-8-21 | https://stackoverflow.com/questions/63530701/python-package-to-plot-two-heatmaps-in-one-split-each-square-into-two-triangles | I've been searching around but couldn't find an easy solution to plot two heatmaps in one graphic by having each square in the heatmap split into two triangles (similar to the attached graphic I saw in a paper). Does anybody know a Python package that is able to do this? I tried seaborn but I don't think it has an easy way to achieve this. Thank you for your time! -Peter | plt.tripcolor colors a mesh of triangles similar to how plt.pcolormesh colors a rectangular mesh. Also similar to pcolormesh, care has to be taken that there is one row and one column of vertices less than there are triangles. Furthermore, the arrays need to be made 1D (np.ravel). All this renumbering to 1D can be a bit tricky. As an example, the code below creates a coloring depending on x*y mod 10 and uses two different colormaps for the upper and the lower triangles. import numpy as np import matplotlib.pyplot as plt from matplotlib.tri import Triangulation M = 30 N = 20 x = np.arange(M + 1) y = np.arange(N + 1) xs, ys = np.meshgrid(x, y) zs = (xs * ys) % 10 zs = zs[:-1, :-1].ravel() triangles1 = [(i + j*(M+1), i+1 + j*(M+1), i + (j+1)*(M+1)) for j in range(N) for i in range(M)] triangles2 = [(i+1 + j*(M+1), i+1 + (j+1)*(M+1), i + (j+1)*(M+1)) for j in range(N) for i in range(M)] triang1 = Triangulation(xs.ravel(), ys.ravel(), triangles1) triang2 = Triangulation(xs.ravel(), ys.ravel(), triangles2) img1 = plt.tripcolor(triang1, zs, cmap=plt.get_cmap('inferno', 10), vmax=10) img2 = plt.tripcolor(triang2, zs, cmap=plt.get_cmap('viridis', 10), vmax=10) plt.colorbar(img2, ticks=range(10), pad=-0.05) plt.colorbar(img1, ticks=range(10)) plt.xlim(x[0], x[-1]) plt.ylim(y[0], y[-1]) plt.xticks(x, rotation=90) plt.yticks(y) plt.show() PS: to have the integer ticks nicely in the center of the cells (instead of at their borders), following changes would be needed: triang1 = Triangulation(xs.ravel()-0.5, ys.ravel()-0.5, triangles1) triang2 = Triangulation(xs.ravel()-0.5, ys.ravel()-0.5, triangles2) # ... plt.xlim(x[0]-0.5, x[-1]-0.5) plt.ylim(y[0]-0.5, y[-1]-0.5) plt.xticks(x[:-1], rotation=90) plt.yticks(y[:-1]) | 8 | 12 |
63,533,664 | 2020-8-22 | https://stackoverflow.com/questions/63533664/matplotlib-vertical-space-between-legend-symbols | I have an issue with customizing the legend of my plot. I did lot's of customizing but couldnt get my head around this one. I want the symbols (not the labels) to be equally spaced in the legend. As you can see in the example, the space between the circles in the legend, gets smaller as the circles get bigger. any ideas? Also, how can I also add a color bar (in addition to the size), with smaller circles being light red (for example) and bigger circle being blue (for example) here is my code so far: import pandas as pd import matplotlib.pyplot as plt from vega_datasets import data as vega_data gap = pd.read_json(vega_data.gapminder.url) df = gap.loc[gap['year'] == 2000] fig, ax = plt.subplots(1, 1,figsize=[14,12]) ax=ax.scatter(df['life_expect'], df['fertility'], s = df['pop']/100000,alpha=0.7, edgecolor="black",cmap="viridis") plt.xlabel("X") plt.ylabel("Y"); kw = dict(prop="sizes", num=6, color="lightgrey", markeredgecolor='black',markeredgewidth=2) plt.legend(*ax.legend_elements(**kw),bbox_to_anchor=(1, 0),frameon=False, loc="lower left",markerscale=1,ncol=1,borderpad=2,labelspacing=4,handletextpad=2) plt.grid() plt.show() | It's a bit tricky, but you could measure the legend elements and reposition them to have a constant inbetween distance. Due to the pixel positioning, the plot can't be resized afterwards. I tested the code inside PyCharm with the 'Qt5Agg' backend. And in a Jupyter notebook, both with %matplotlib inline and with %matplotlib notebook. I'm not sure whether it would work well in all environments. Note that ax.scatter doesn't return an ax (countrary to e.g. sns.scatterplot) but a list of the created scatter dots. import pandas as pd import matplotlib.pyplot as plt from matplotlib.transforms import IdentityTransform from vega_datasets import data as vega_data gap = pd.read_json(vega_data.gapminder.url) df = gap.loc[gap['year'] == 2000] fig, ax = plt.subplots(1, 1, figsize=[14, 12]) fig.subplots_adjust(right=0.8) scat = ax.scatter(df['life_expect'], df['fertility'], s=df['pop'] / 100000, alpha=0.7, edgecolor="black", cmap="viridis") plt.xlabel("X") plt.ylabel("Y") x = 1.1 y = 0.1 is_first = True kw = dict(prop="sizes", num=6, color="lightgrey", markeredgecolor='black', markeredgewidth=2) handles, labels = scat.legend_elements(**kw) inverted_transData = ax.transData.inverted() for handle, label in zip(handles[::-1], labels[::-1]): plt.setp(handle, clip_on=False) for _ in range(1 if is_first else 2): plt.setp(handle, transform=ax.transAxes) if is_first: xd, yd = x, y else: xd, yd = inverted_transData.transform((x, y)) handle.set_xdata([xd]) handle.set_ydata([yd]) ax.add_artist(handle) bbox = handle.get_window_extent(fig.canvas.get_renderer()) y += y - bbox.y0 + 15 # 15 pixels inbetween x = (bbox.x0 + bbox.x1) / 2 if is_first: xd_text, _ = inverted_transData.transform((bbox.x1+10, y)) ax.text(xd_text, yd, label, transform=ax.transAxes, ha='left', va='center') y = bbox.y1 is_first = False plt.show() | 7 | 3 |
63,538,588 | 2020-8-22 | https://stackoverflow.com/questions/63538588/python-dictionary-object-syntaxerror-expression-cannot-contain-assignment-per | I am creating a dictionary object, it gets created while I use "Statement 1", however I get an error message while try create a dictionary object using same keys and values with "Statement 2". Statement 1: dmap = {0: 'Mon', 1: 'Tue', 2: 'Wed', 3: 'Thu', 4: 'Fri', 5: 'Sat', 6: 'Sun'} Statement 2: dmap = dict(0='Mon', 1='Tue', 2='Wed', 3='Thu', 4='Fri', 5='Sat', 6='Sun' Error message: File "<stdin>", line 1 SyntaxError: expression cannot contain assignment, perhaps you meant "=="? Can someone tell me, why am I allowed to create dictionary with integer keys using Statement 1, but not with Statement 2? Edited Using an updated version of Statement 2, I am able to create dictionary object with below code: dmap = dict(day_0='Mon', day_1='Tue', day_2='Wed', day_3='Thu', day_4='Fri', day_5='Sat', day_6='Sun') | dict is a regular callable which accepts keyword arguments. As per the Python syntax, keyword arguments are of the form identifier '=' expression. An identifier may not start with a digit, which excludes number literals. keyword_item ::= identifier "=" expression That dict does by default create a dictionary that accepts arbitrary keys does not change the syntax of calls. | 8 | 6 |
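For completeness, a few equivalent ways to build an integer-keyed mapping without keyword arguments (a short sketch consistent with the answer above):

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

dmap_literal = {i: d for i, d in enumerate(days)}   # dict comprehension: keys are expressions
dmap_pairs = dict(enumerate(days))                  # dict() fed key/value pairs, not keywords
dmap_zip = dict(zip(range(7), days))                # same result

assert dmap_literal == dmap_pairs == dmap_zip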
63,528,797 | 2020-8-21 | https://stackoverflow.com/questions/63528797/how-do-i-count-the-letters-in-llanfairpwllgwyngyllgogerychwyrndrobwllllantysilio | How do I count the letters in Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch? print(len('Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch')) Says 58 Well if it was that easy I wouldn't be asking you, now would I?! Wikipedia says (https://en.wikipedia.org/wiki/Llanfairpwllgwyngyll#Placename_and_toponymy) The long form of the name is the longest place name in the United Kingdom and one of the longest in the world at 58 characters (51 "letters" since "ch" and "ll" are digraphs, and are treated as single letters in the Welsh language). So I want to count that and get the answer 51. Okey dokey. print(len(['Ll','a','n','f','a','i','r','p','w','ll','g','w','y','n','g','y','ll','g','o','g','e','r','y','ch','w','y','r','n','d','r','o','b','w','ll','ll','a','n','t','y','s','i','l','i','o','g','o','g','o','g','o','ch'])) 51 Yeh but that's cheating, obviously I want to use the word as input, not the list. Wikipedia also says that the digraphs in Welsh are ch, dd, ff, ng, ll, ph, rh, th https://en.wikipedia.org/wiki/Welsh_orthography#Digraphs So off we go. Let's add up the length and then take off the double counting. word='Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch' count=len(word) print('starting with count of',count) for index in range(len(word)-1): substring=word[index]+word[index+1] if substring.lower() in ['ch','dd','ff','ng','ll','ph','rh','th']: print('taking off double counting of',substring) count=count-1 print(count) This gets me this far starting with count of 58 taking off double counting of Ll taking off double counting of ll taking off double counting of ng taking off double counting of ll taking off double counting of ch taking off double counting of ll taking off double counting of ll taking off double counting of ll taking off double counting of ch 49 It appears that I've subtracted too many then. I'm supposed to get 51. Now one problem is that with the llll it has found 3 lls and taken off three instead of two. So that's going to need to be fixed. (Must not overlap.) And then there's another problem. The ng. Wikipedia didn't say anything about there being a letter "ng" in the name, but it's listed as one of the digraphs on the page I quoted above. Wikipedia gives us some more clue here: "additional information may be needed to distinguish a genuine digraph from a juxtaposition of letters". And it gives the example of "llongyfarch" where the ng is just a "juxtaposition of letters", and "llong" where it is a digraph. So it seems that 'Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch' is one of those words where the -ng- is bit just a "juxtaposition of letters". And obviously there's no way that the computer can know that. So I'm going to have to give it that "additional information" that Wikipedia talks about. So anyways, I decided to look in an online dictionary http://geiriadur.ac.uk/gpc/gpc.html and you can see that if you look up llongyfarch (the example from Wikipedia that has the "juxtaposition of letters") it displays it with a vertical line between the n and the g but if you look up "llong" then it doesn't do this. So I've decided okay what we need to do is provide the additional information by putting a | in the input string like it does in the dictionary, just so that the algorithm knows that the ng bit is really two letters. 
But obviously I don't want the | itself to be counted as a letter. So now I've got these inputs: word='llong' ANSWER NEEDS TO BE 3 (ll o ng) word='llon|gyfarch' ANSWER NEEDS TO BE 9 (ll o n g y f a r ch) word='Llanfairpwllgwyn|gyllgogerychwyrndrobwllllantysiliogogogoch' ANSWER NEEDS TO BE 51 (Ll a n f a i r p w ll g w y n g y ll g o g e r y ch w y r n d r o b w ll ll a n t y s i l i o g o g o g o ch) and still this list of digraphs: ['ch','dd','ff','ng','ll','ph','rh','th'] and the rules are going to be: ignore case if you see a digraph then count it as 1 work from left to right so that llll is ll + ll, not l + ll + l if you see a | don't count it, but you can't ignore it completely, it is there to stop ng being a digraph and I want it to count it as 51 and to do it for the right reasons, not just fluke it. Now I am getting 51 but it is fluking it because it is counting the | as a letter (1 too high), and then it is taking off one too many with the llll (1 too low) - ERRORS CANCEL OUT It is getting llong right (3). It is getting llon|gyfarch wrong (10) - counting the | again How can I fix it the right way? | Like many problems to do with strings, this can be done in a simple way with a regex. >>> word = 'Llanfairpwllgwyn|gyllgogerychwyrndrobwllllantysiliogogogoch' >>> import re >>> pattern = re.compile(r'ch|dd|ff|ng|ll|ph|rh|th|[^\W\d_]', flags=re.IGNORECASE) >>> len(pattern.findall(word)) 51 The character class [^\W\d_] (from here) matches word-characters that are not digits or underscores, i.e. letters, including those with diacritics. | 82 | 58 |
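The pattern can be checked against all three target counts listed in the question; a quick verification sketch (note that '|' is not a word character, so it blocks the 'ng' digraph without being counted itself):

import re

pattern = re.compile(r'ch|dd|ff|ng|ll|ph|rh|th|[^\W\d_]', flags=re.IGNORECASE)

def count_welsh_letters(word):
    return len(pattern.findall(word))

assert count_welsh_letters('llong') == 3
assert count_welsh_letters('llon|gyfarch') == 9
assert count_welsh_letters('Llanfairpwllgwyn|gyllgogerychwyrndrobwllllantysiliogogogoch') == 51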
63,533,237 | 2020-8-22 | https://stackoverflow.com/questions/63533237/how-to-see-where-exactly-torch-is-installed-pip-vs-conda-torch-installation | On my machine I can't "pip install torch" - I get the infamous "single source externally managed" error - I could not fix it and used "conda install torch" from Anaconda. Still, checking the version is easy - torch.__version__ But how do I see where it is installed - the home directory of torch? Suppose I had both torch installations, via pip and conda - how do I know which one is used in a project? import torch print(torch.__version__) | You can get the location of the torch module that is imported in your script: import torch print(torch.__file__) | 7 | 14 |
63,532,921 | 2020-8-22 | https://stackoverflow.com/questions/63532921/deep-copy-of-list-in-python | CODE IN PYCHARM [1]: https://i.sstatic.net/aBP1r.png I tried to make a deep copy of a list l, but seems like the slicing method doesn't work somehow?I don't want the change in x to be reflected in l. So how should I make a deep copy and what is wrong in my code? This was my code- def processed(matrix,r,i): matrix[r].append(i) return matrix l=[[1, 2, 3], [4, 5, 6], [7, 8, 9]] x=l[:] print(processed(x,0,10)) print(l) OUTPUT- [[1, 2, 3, 10], [4, 5, 6], [7, 8, 9]] [[1, 2, 3, 10], [4, 5, 6], [7, 8, 9]] | Your code does indeed succeed in creating a shallow copy. This can be seen by inspecting the IDs of the two outer lists, and noting that they differ. >>> id(l) 140505607684808 >>> id(x) 140505607684680 Or simply comparing using is: >>> x is l False However, because it is a shallow copy rather than a deep copy, the corresponding elements of the list are the same object as each other: >>> x[0] is l[0] True This gives you the behaviour that you observed when the sub-lists are appended to. If in fact what you wanted was a deep copy, then you could use copy.deepcopy. In this case the sublists are also new objects, and can be appended to without affecting the originals. >>> from copy import deepcopy >>> l=[[1, 2, 3], [4, 5, 6], [7, 8, 9]] >>> xdeep = deepcopy(l) >>> xdeep == l True >>> xdeep is l False <==== A shallow copy does the same here >>> xdeep[0] is l[0] False <==== But THIS is different from with a shallow copy >>> xdeep[0].append(10) >>> print(l) [[1, 2, 3], [4, 5, 6], [7, 8, 9]] >>> print(xdeep) [[1, 2, 3, 10], [4, 5, 6], [7, 8, 9]] If you wanted to apply this in your function, you could do: from copy import deepcopy def processed(matrix,r,i): new_matrix = deepcopy(matrix) new_matrix[r].append(i) return new_matrix l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] x = processed(l,0,10) print(x) print(l) If in fact you know that the matrix is always exactly 2 deep, then you could do it more efficiently than using deepcopy and without need for the import: def processed(matrix,r,i): new_matrix = [sublist[:] for sublist in matrix] new_matrix[r].append(i) return new_matrix l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] x = processed(l,0,10) print(x) print(l) | 6 | 11 |
63,529,904 | 2020-8-21 | https://stackoverflow.com/questions/63529904/how-do-i-address-oserror-mysql-config-not-found-error-during-elastic-beanstal | Problem I am trying to deploy a very simple app on Elastic Beanstalk with a small database backend. I try to install mysqlclient as part of the process outlined by AWS here. However, when I deploy my app, I get the following error from my Elastic Beanstalk logs as it tries to download the package: Collecting mysqlclient Using cached mysqlclient-2.0.1.tar.gz (87 kB) 2020/08/21 20:30:16.419082 [ERROR] An error occurred during execution of command [app-deploy] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr: ERROR: Command errored out with exit status 1: command: /var/app/venv/staging-LQM1lest/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bz45889a/mysqlclient/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bz45889a/mysqlclient/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-1tyle8mv cwd: /tmp/pip-install-bz45889a/mysqlclient/ Complete output (12 lines): /bin/sh: mysql_config: command not found /bin/sh: mariadb_config: command not found /bin/sh: mysql_config: command not found Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-bz45889a/mysqlclient/setup.py", line 15, in <module> metadata, options = get_config() File "/tmp/pip-install-bz45889a/mysqlclient/setup_posix.py", line 65, in get_config libs = mysql_config("libs") File "/tmp/pip-install-bz45889a/mysqlclient/setup_posix.py", line 31, in mysql_config raise OSError("{} not found".format(_mysql_config_path)) OSError: mysql_config not found Question The mysqlclient is the AWS recommended driver to connect to a MySQL database in RDS using Flask or Django. How do I get it installed with Elastic Beanstalk? Context I've tried to implement my architecture with either (1) a database within the Elastic Beanstalk environment or (2) a regularly deployed RDS instance outside the Elastic Beanstalk environment. In this case, we're going with option (1). I can't install mysql-server or other packages using apt-get as suggested here very easily, which is why I hope this isn't labeled as a duplicate. I don't have access to the underlying servers. I attempted to using platform hooks and .ebextensions, but I wasn't able to get that to work. In this post, I'm trying to step back and see if there is another avenue. | For Amazon linux 2 you should be able to install it with yum. Thus, you can have in your .ebextensions a file called, e.g. 01_packages.config with the content: packages: yum: MySQL-python: [] You can add further yum dependencies if you require more. | 11 | 9 |
63,529,127 | 2020-8-21 | https://stackoverflow.com/questions/63529127/identify-all-overlapping-tuples-in-list | I've currently got a list of tuples (though I control creation of the list and tuples, so they could be altered in type if needed). Each tuple has a start and end integer and a string with an ID for that range's source. What I'm trying to do is identify all of the overlapping ranges within the tuples. Currently I have a = [(0, 98, '122:R'), (100, 210, '124:R'), (180, 398, '125:R'), (200, 298, '123:R')] highNum = 0 highNumItem = '' for item in a: if item[0] < highNum: print(highNumItem + ' overlaps ' + item[2]) if item[1] > highNum: highNum = item[1] highNumItem = item[2] # 124:R overlaps 125:R # 125:R overlaps 123:R Which outputs enough information that overlaps should be able to be manually review and fixed. But, it misses identifying some sets of overlaps. I can't help thinking there's a relatively obvious solution I'm just missing or not using the right search terms to find examples of. But ideally I'd like the output to actually be 124:R overlaps 125:R & 123:R 125:R overlaps 123:R But using my comparison method, I can't see a way to catch the rare instance where an overlap spans more than just 2 adjacent ranges. If anyone could point me to a function or comparison method appropriate to this, I'd greatly appreciate. Also, if it matters, I'm currently stuck with python 2.7, but need to be able to port solution to 3.x when 3rd party applications allow it. | This should work: import operator def get_overlaps(end, remaining): output = [] for r in remaining: if r[0] < end: # starts before the end output.append(r[2]) continue break return output def get_all_overlaps(lst): # thanks @Elan-R for this simplification for i, (start, end, name) in enumerate(lst): overlaps = get_overlaps(end, lst[i+1:]) if overlaps: print(name, "overlaps", " & ".join(overlaps)) a = [(0, 98, '122:R'), (100, 210, '124:R'), (180, 398, '125:R'), (200, 298, '123:R')] # sort by start time a.sort(key=operator.itemgetter(0)) # thanks to @moonGoose get_all_overlaps(a) Output: 124:R overlaps 125:R & 123:R 125:R overlaps 123:R This code iterates over each item in the list, and then checks every subsequent item to see if the start time is less than the end time of the current item. If so, it adds the name to the list of overlaps. If not, it stops checking for the current item, as the start times increase, so there will be no further overlaps. (Tested for Python 3.6, but should work with any version) | 6 | 2 |
63,521,088 | 2020-8-21 | https://stackoverflow.com/questions/63521088/destroy-object-of-class-python | Hi, I'm trying to destroy a class object if the condition of an if statement (inside a while loop) is met. global variablecheck class createobject: def __init__(self,userinput): self.userinput = input self.method() def method(self): while True: if self.userinput == variablecheck: print('the object created using this class is still alive') else: print('user object created using class(createobject) is dead') #what code can i put here to delete the object of this class? | Think of it this way: you're asking a class to self-destruct using an inner method, which is kind of like trying to eat your own mouth. Luckily for you, Python features garbage collection, meaning your class will be automatically destroyed once all of its references have gone out of scope. If you need to do something specific when the instance is being destroyed, you can still override __del__(), which will kind of act like a destructor. Here's a silly example: class SelfDestruct: def __init__(self): print("Hi! I'm being instantiated!") def __del__(self): print("I'm being automatically destroyed. Goodbye!") def do_stuff(self): print("I'm doing some stuff...") Now, try instantiating this class in a local scope (such as a function): def make_a_suicidal_class(): my_suicidal_class = SelfDestruct() for i in range(5): my_suicidal_class.do_stuff() return None Here, the lifespan of the object is bound by the function, meaning it'll be automatically destroyed once the call is completed. Thus the output should look like: >>> make_a_suicidal_class() "Hi! I'm being instantiated!" "I'm doing some stuff..." "I'm doing some stuff..." "I'm doing some stuff..." "I'm doing some stuff..." "I'm doing some stuff..." "I'm being automatically destroyed. Goodbye!" >>> If your class was instantiated in a global scope, then it won't be destroyed until your program ends. Also, it should be noted that manually calling the __del__() destructor does NOT actually destroy the object. Doing this: foo = SelfDestruct() foo.__del__() foo.do_stuff() Results in this output: "Hi! I'm being instantiated!" "I'm being automatically destroyed. Goodbye!" "I'm doing some stuff..." ergo, the instance still has a pulse... If you really need to prevent the instance from being referenced again in the current scope, you have to call del foo to do so. Though as previously stated, Python actually reference-counts classes and variables. So if your class object is used elsewhere, invoking del foo will not actually release it from memory. Here's an exhaustive explanation in the Python docs https://docs.python.org/2.5/ref/customization.html "del x" doesn't directly call x.__del__() -- the former decrements the reference count for x by one, and the latter is only called when x's reference count reaches zero. Long story short: Don't think about it! Let Python deal with memory management. The whole point of garbage collection is to stop worrying about the lifespan of your variables! | 8 | 27 |
63,518,441 | 2020-8-21 | https://stackoverflow.com/questions/63518441/how-to-read-a-bearer-token-from-postman-into-python-code | I am trying to create an API that receives arguments from postman. The body of the api contains two arguments: { "db":"EUR", "env":"test" } I parsed these two arguments in the code as below: parser = reqparse.RequestParser() parser.add_argument('fab', type=str, required=True, help='Fab name must be provided.') parser.add_argument('env', type=str, required=False, help='Env is an optional parameter.') Lately I was asked to add a token validation in the code. The token is passed from Authorization-> Type(Bearer Token) -> Token value: eeb867bd2bcca05 But I don't know how can I read the bearer token from postman into Python code. Could anyone let me know how to read the token value that is being passed from Postman's bearer token into my Python code ? Any help is much appreciated. | The Bearer token is sent in the headers of the request as 'Authorization' header, so you can get it in python flask as follows: headers = flask.request.headers bearer = headers.get('Authorization') # Bearer YourTokenHere token = bearer.split()[1] # YourTokenHere | 10 | 26 |
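Building on the answer, a slightly more defensive sketch that rejects missing or malformed headers (the function name is illustrative):

from flask import request, abort

def get_bearer_token():
    auth = request.headers.get("Authorization", "")
    parts = auth.split()
    # Expect exactly "Bearer <token>"
    if len(parts) != 2 or parts[0].lower() != "bearer":
        abort(401, description="Missing or malformed Authorization header")
    return parts[1]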
63,516,924 | 2020-8-21 | https://stackoverflow.com/questions/63516924/typeerror-init-got-an-unexpected-keyword-argument-requote | Traceback (most recent call last): File "Chiyo.py", line 1, in <module> import discord File "/home/ubuntu/.local/lib/python3.6/site-packages/discord/__init__.py", line 25, in <module> from .client import Client File "/home/ubuntu/.local/lib/python3.6/site-packages/discord/client.py", line 34, in <module> import aiohttp File "/home/ubuntu/.local/lib/python3.6/site-packages/aiohttp/__init__.py", line 6, in <module> from .client import BaseConnector as BaseConnector File "/home/ubuntu/.local/lib/python3.6/site-packages/aiohttp/client.py", line 30, in <module> from yarl import URL File "/home/ubuntu/.local/lib/python3.6/site-packages/yarl/__init__.py", line 1, in <module> from ._url import URL, cache_clear, cache_configure, cache_info File "/home/ubuntu/.local/lib/python3.6/site-packages/yarl/_url.py", line 56, in <module> @rewrite_module File "/home/ubuntu/.local/lib/python3.6/site-packages/yarl/_url.py", line 132, in URL _QUERY_PART_QUOTER = _Quoter(safe="?/:@", qs=True, requote=False) File "yarl/_quoting.pyx", line 192, in yarl._quoting._Quoter.__init__ TypeError: __init__() got an unexpected keyword argument 'requote' I got this error after updated discord.py and I don't know how to fix it :c using python3.6 | Install yarl 1.4.2 pip install -U yarl==1.4.2 | 11 | 14 |
63,485,231 | 2020-8-19 | https://stackoverflow.com/questions/63485231/whats-the-computational-complexity-of-iloc-in-pandas-dataframes | I'm trying to understand what's the execution complexity of the iloc function in pandas. I read the following Stack Exchange thread (Pandas DataFrame search is linear time or constant time?) that: "accessing single row by index (index is sorted and unique) should have runtime O(m) where m << n_rows" mentioning that iloc runs on O(m) time. What is m (linear, log, constant,...)? Some experiments I ran: import pandas as pd >>> a = pd.DataFrame([[1,2,3],[1,3,4],[2,3,4],[2,4,5]], columns=['a','b','c']) >>> a = a.set_index('a').sort_index() >>> a b c a 1 3 4 1 4 5 2 2 3 2 3 4 >>> a.iloc[[0,1,2,3]] b c a 1 3 4 1 4 5 2 2 3 2 3 4 So iloc clearly works with offsets and not on the integer-based index (column a). Even if we delete few rows at the top, the iloc offset-based lookup works correctly: >>> a.drop([1]).iloc[[0,1]] b c a 2 2 3 2 3 4 So why isn't iloc offset-lookup running on a comparable time to numpy arrays when each column is simply a numpy array that can be accessed in constant time (few operations)? And what's its complexity? UPDATE: I tried to compare the efficiency of pandas vs numpy on a 10000000x2 matrix. Comparing the efficiency of a value increment per row in a DataFrame df and an array arr, with and without a for loop: # Initialization SIZE = 10000000 arr = np.ones((SIZE,2), dtype=np.uint32) df = pd.DataFrame(arr) # numpy, no for-loop arr[range(SIZE),1] += 1 # pandas, no for-loop df.iloc[range(SIZE),1] += 1 # numpy, for-loop for i in range(SIZE): arr[i,1] += 1 # pandas, for-loop for i in range(SIZE): df.iloc[i,1] += 1 Method Execution time numpy, no for-loop 7 seconds pandas, no for-loop 24 seconds numpy, with for-loop 27 seconds pandas, with for-loop > 2 hours | There likely isn't one answer for the runtime complexity of iloc. The method accepts a huge range of input types, and that flexibility necessarily comes with costs. These costs are likely to include both large constant factors and non-constant costs that are almost certainly dependent on the way in which it is used. One way to sort of answer your question is to step through the code in the two cases. Indexing with range First, indexing with range(SIZE). Assuming df is defined as you did, you can run: import pdb pdb.run('df.iloc[range(SIZE), 1]') and then step through the code to follow the path. Ultimately, this arrives at this line: self._values.take(indices) where indices is an ndarray of integers constructed from the original range, and self._values is the source ndarray of the data frame. There are two things to note about this. First, the range is materialized into an ndarray, which means you have a memory allocation of at least SIZE elements. So...that's going to cost you some time :). I don't know how the indexing happens in NumPy itself, but given the time measurements you've produced, it's possible that there is no (or a much smaller) allocation happening. The second thing to note is that numpy.take makes a copy. You can verify this by looking at the .flags attribute of the object returned from calling this method, which indicates that it owns its data and is not a view onto the original. (Also note that np.may_share_memory returns False.) So...there's another allocation there :). Take home: It's not obvious that there's any non-linear runtime here, but there are clearly large constant factors. 
Multiple allocations are probably the big killer, but the complex branching logic in the call tree under the .iloc property surely doesn't help. Indexing in a loop The code taken in this path is much shorter. It pretty quickly arrives here: return self.obj._get_value(*key, takeable=self._takeable) The really crappy runtime here is probably due to that tuple-unpacking. That repeatedly unpacks and repacks key as a tuple on each loop iteration. (Note that key is already the tuple (i, 1), so that sucks. The cost of accepting a generic iterable.) Runtime analysis In any case, we can get an estimate of the actual runtime of your particular use case by profiling. The following script will generate a log-spaced list of array sizes from 10 to 10e9, index with a range, and print out the time it takes to run the __getitem__ method. (There are only two such calls in the tree, so it's easy to see which is the one we care about.) import pandas as pd import numpy as np import cProfile import pstats sizes = [10 ** i for i in range(1, 9)] for size in sizes: df = pd.DataFrame(data=np.zeros((size, 2))) with cProfile.Profile() as pr: pr.enable() df.iloc[range(size), 1] pr.disable() stats = pstats.Stats(pr) print(size) stats.print_stats("__getitem__") Once the output gets above the minimum resolution, you can see pretty clear linear behavior here: Size | Runtime ------------------ 10000 | 0.002 100000 | 0.021 1000000 | 0.206 10000000 | 2.145 100000000| 24.843 So I'm not sure what sources you're referring to that talk about nonlinear runtime of indexing. They could be out of date, or considering a different code path than the one using range. | 10 | 5 |
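The practical takeaway from the measurements above is usually to replace per-row .iloc calls in hot loops with a single vectorized operation, so the indexing overhead is paid once; an illustrative sketch:

import numpy as np
import pandas as pd

SIZE = 10_000_000
df = pd.DataFrame(np.ones((SIZE, 2), dtype=np.uint32), columns=["a", "b"])

# One vectorized update instead of SIZE separate df.iloc[i, 1] += 1 calls
df["b"] += 1
# or, when positional access is genuinely needed:
df.iloc[:, 1] += 1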
63,494,812 | 2020-8-19 | https://stackoverflow.com/questions/63494812/how-can-i-distinguish-a-digitally-created-pdf-from-a-searchable-pdf | I am currently analyzing a set of PDF files. I want to know how many of the PDF files fall in those 3 categories: Digitally Created PDF: The text is there (copyable) and it is guaranteed to be correct as it was created directly e.g. from Word Image-only PDF: A scanned document Searchable PDF: A scanned document, but an OCR engine was used. The OCR engine put text "below" the image so that you can search / copy the content. As OCR is pretty good, this is correct most of the time. But it is not guaranteed to be correct. It is easy to identify Image-only PDFs in my domain as every PDF contains text. If I cannot extract any text, it is image only. But how do I know if it is "just" a searchable PDF or if it is a digially created PDF? By the way, it is not as simple as just looking at the producer as I have seen scanned documents where the Producer field said "Microsoft Word". Note: As a human, it is easy. I just zoom in on the text. If I see pixels, it's "just" searchable. Here are 3 example PDF files to test solutions: Digitally Created PDF Scanned PDF: Well.. not really; I used a script to create images and then put them together as a PDF. But that only means that the quality is very good. It should be very similar to a scan. Searchable PDF What I tried/thought about Using the creator/producer: I see "Microsoft Word" in scanned documents. Also this would be tedious. Embedded fonts: You can extract embedded fonts. The idea was that a scanned document would not have embedded fonts but just use the default. The idea was wrong, as one can see with the example. | With PyMuPDF you can easily remove all text as is required for @ypnos' suggestion. As an alternative, with PyMuPDF you can also check whether text is hidden in a PDF. In PDF's relevant "mini-language" this is triggered by the command 3 Tr ("text render mode", e.g. see page 402 of https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/pdf_reference_1-7.pdf). So if all text is under the influence of this command, then none of it will be rendered - allowing the conclusion "this is an OCR'ed page". | 15 | 4 |
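A rough classification sketch along these lines, using only basic PyMuPDF calls (method names follow recent snake_case PyMuPDF releases; older versions used camelCase, and the text-render-mode check described above would be more precise than this text/image heuristic):

import fitz  # PyMuPDF

def classify_pdf(path):
    doc = fitz.open(path)
    has_text = any(page.get_text().strip() for page in doc)
    has_images = any(page.get_images(full=True) for page in doc)
    if not has_text:
        return "image-only"        # scanned, no OCR layer
    if has_images:
        return "searchable"        # text present alongside page images, likely OCR
    return "digitally created"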
63,503,512 | 2020-8-20 | https://stackoverflow.com/questions/63503512/python-type-hinting-own-class-in-method | Edit: I notice people commenting about how the type hint should not be used with __eq__, and granted, it shouldn't. But that's not the point of my question. My question is why can't the class be used as type hint in the method parameters, but can be used in the method itself? Python type hinting has proven very useful for me when working with PyCharm. However, I've come across behaviour I find strange, when trying to use a class' own type in its methods. For example: class Foo: def __init__(self, id): self.id = id pass def __eq__(self, other): return self.id == other.id Here, when typing other., the property id is not offered automatically. I was hoping to solve it by defining __eq__ as follows: def __eq__(self, other: Foo): return self.id == other.id However, this gives NameError: name 'Foo' is not defined. But when I use the type within the method, id is offered after writing other.: def __eq__(self, other): other: Foo return self.id == other.id My question is, why is it not possible to use the class' own type for type hinting the parameters, while it is possible within the method? | The name Foo doesn't yet exist, so you need to use 'Foo' instead. (mypy and other type checkers should recognize this as a forward reference.) def __eq__(self, other: 'Foo'): return self.id == other.id Alternately, you can use from __future__ import annotations which prevents evaluation of all annotations and simply stores them as strings for later reference. (This will be the default in Python 3.10.) Finally, as also pointed out in the comments, __eq__ should not be hinted this way in the first place. The second argument should be an arbitrary object; you'll return NotImplemented if you don't know how to compare your instance to it. (Who knows, maybe it knows how to compare itself to your instance. If Foo.__eq__(Foo(), Bar()) returns NotImplemented, then Python will try Bar.__eq__(Bar(), Foo()).) from typing import Any def __eq__(self, other: Any) -> bool: if isinstance(other, Foo): return self.id == other.id return NotImplemented or using duck-typing, def __eq__(self, other: Any) -> bool: # Compare to anything with an `id` attribute try: return self.id == other.id except AttributeError: return NotImplemented In either case, the Any hint is optional. | 27 | 36 |
63,502,556 | 2020-8-20 | https://stackoverflow.com/questions/63502556/pyspark-read-nested-json-from-a-string-type-column-and-create-columns | I have a dataframe in PySpark with 3 columns - json, date and object_id: ----------------------------------------------------------------------------------------- |json |date |object_id| ----------------------------------------------------------------------------------------- |{'a':{'b':0,'c':{'50':0.005,'60':0,'100':0},'d':0.01,'e':0,'f':2}}|2020-08-01|xyz123 | |{'a':{'m':0,'n':{'50':0.005,'60':0,'100':0},'d':0.01,'e':0,'f':2}}|2020-08-02|xyz123 | |{'g':{'h':0,'j':{'50':0.005,'80':0,'100':0},'d':0.02}} |2020-08-03|xyz123 | ----------------------------------------------------------------------------------------- Now I have a list of variables: [a.c.60, a.n.60, a.d, g.h]. I need to extract only these variables from the json column of above mentioned dataframe and to add those variables as columns in the dataframe with their respective values. So in the end, the dataframe should look like: ------------------------------------------------------------------------------------------------------- |json |date |object_id|a.c.60|a.n.60|a.d |g.h| ------------------------------------------------------------------------------------------------------- |{'a':{'b':0,'c':{'50':0.005,'60':0,'100':0},'d':0.01,...|2020-08-01|xyz123 |0 |null |0.01|null| |{'a':{'m':0,'n':{'50':0.005,'60':0,'100':0},'d':0.01,...|2020-08-02|xyz123 |null |0 |0.01|null| |{'g':{'h':0,'j':{'k':0.005,'':0,'100':0},'d':0.01}} |2020-08-03|xyz123 |null |null |0.02|0 | ------------------------------------------------------------------------------------------------------- Please help to get this result dataframe. The main problem I am facing is due to no fixed structure for the incoming json data. The json data can be anything in nested form but I need to extract only the given four variables. I have achieved this in Pandas by flattening out the json string and then to extract the 4 variables but in Spark it is getting difficult. 
| There are 2 ways to do it: use the get_json_object function, like this: import pyspark.sql.functions as F df = spark.createDataFrame(['{"a":{"b":0,"c":{"50":0.005,"60":0,"100":0},"d":0.01,"e":0,"f":2}}', '{"a":{"m":0,"n":{"50":0.005,"60":0,"100":0},"d":0.01,"e":0,"f":2}}', '{"g":{"h":0,"j":{"50":0.005,"80":0,"100":0},"d":0.02}}'], StringType()) df3 = df.select(F.get_json_object(F.col("value"), "$.a.c.60").alias("a_c_60"), F.get_json_object(F.col("value"), "$.a.n.60").alias("a_n_60"), F.get_json_object(F.col("value"), "$.a.d").alias("a_d"), F.get_json_object(F.col("value"), "$.g.h").alias("g_h")) will give: >>> df3.show() +------+------+----+----+ |a_c_60|a_n_60| a_d| g_h| +------+------+----+----+ | 0| null|0.01|null| | null| 0|0.01|null| | null| null|null| 0| +------+------+----+----+ Declare schema explicitly (only necessary fields), convert JSON into structus using the from_json function with the schema, and then extract individual values from structures - this could be more performant than JSON Path: from pyspark.sql.types import * import pyspark.sql.functions as F aSchema = StructType([ StructField("c", StructType([ StructField("60", DoubleType(), True) ]), True), StructField("n", StructType([ StructField("60", DoubleType(), True) ]), True), StructField("d", DoubleType(), True), ]) gSchema = StructType([ StructField("h", DoubleType(), True) ]) schema = StructType([ StructField("a", aSchema, True), StructField("g", gSchema, True) ]) df = spark.createDataFrame(['{"a":{"b":0,"c":{"50":0.005,"60":0,"100":0},"d":0.01,"e":0,"f":2}}', '{"a":{"m":0,"n":{"50":0.005,"60":0,"100":0},"d":0.01,"e":0,"f":2}}', '{"g":{"h":0,"j":{"50":0.005,"80":0,"100":0},"d":0.02}}'], StringType()) df2 = df.select(F.from_json("value", schema=schema).alias('data')).select('data.*') df2.select(df2.a.c['60'], df2.a.n['60'], df2.a.d, df2.g.h).show() will give +------+------+----+----+ |a.c.60|a.n.60| a.d| g.h| +------+------+----+----+ | 0.0| null|0.01|null| | null| 0.0|0.01|null| | null| null|null| 0.0| +------+------+----+----+ | 8 | 15 |
63,483,246 | 2020-8-19 | https://stackoverflow.com/questions/63483246/how-to-call-an-api-from-another-api-in-fastapi | I was able to get the response of one API from another but I am unable to store it somewhere (in a file or something) before returning the response: response=RedirectResponse(url="/apiname/") (I want to access a POST request with header and body.) I want to store this response content without returning it. Yes, if I return the function I will get the results, but when I print it I don't find the results. Also, if I send a POST request then I get an "Entity not found" error. I read the Starlette and FastAPI docs but couldn't find a workaround. The callbacks also didn't help. | I didn't find a way to store the response without returning it using FastAPI/Starlette directly, but I found a workaround for completing this task. For people trying to implement the same thing, please consider this approach. import json import requests def test_function(request: Request, path_parameter: path_param): request_example = {"test": "in"} host = request.client.host data_source_id = path_parameter.id get_test_url = f"http://{host}/test/{data_source_id}/" get_inp_url = f"http://{host}/test/{data_source_id}/inp" test_get_response = requests.get(get_test_url) inp_post_response = requests.post(get_inp_url, json=request_example) if inp_post_response.status_code == 200: print(json.loads(test_get_response.content.decode('utf-8'))) Please let me know if there are better approaches. | 17 | 14 |
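One of the "better approaches" the answer asks about is to avoid blocking the event loop: an async sketch with httpx (the URLs and parameter names mirror the workaround above and are illustrative):

import httpx
from fastapi import Request

async def call_sibling_endpoints(request: Request, data_source_id: int):
    base = f"http://{request.client.host}"
    async with httpx.AsyncClient() as client:
        test_resp = await client.get(f"{base}/test/{data_source_id}/")
        inp_resp = await client.post(f"{base}/test/{data_source_id}/inp", json={"test": "in"})
    if inp_resp.status_code == 200:
        return test_resp.json()    # inspect or store here instead of returning a redirect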
63,491,991 | 2020-8-19 | https://stackoverflow.com/questions/63491991/how-to-use-the-ccf-method-in-the-statsmodels-library | I am having some trouble with the ccf() method in the (Python) statsmodels library. The equivalent operation works fine in R. ccf produces a cross-correlation function between two variables, A and B in my example. I am interested to understand the extent to which A is a leading indicator for B. I am using the following: import pandas as pd import numpy as np import statsmodels.tsa.stattools as smt I can simulate A and B as follows: np.random.seed(123) test = pd.DataFrame(np.random.randint(0,25,size=(79, 2)), columns=list('AB')) When I run ccf, I get the following: ccf_output = smt.ccf(test['A'],test['B'], unbiased=False) ccf_output array([ 0.09447372, -0.12810284, 0.15581492, -0.05123683, 0.23403344, 0.0771812 , 0.01434263, 0.00986775, -0.23812752, -0.03996113, -0.14383829, 0.0178347 , 0.23224969, 0.0829421 , 0.14981321, -0.07094772, -0.17713121, 0.15377192, -0.19161986, 0.08006699, -0.01044449, -0.04913098, 0.06682942, -0.02087582, 0.06453489, 0.01995989, -0.08961562, 0.02076603, 0.01085041, -0.01357792, 0.17009109, -0.07586774, -0.0183845 , -0.0327533 , -0.19266634, -0.00433252, -0.00915397, 0.11568826, -0.02069836, -0.03110162, 0.08500599, 0.01171839, -0.04837527, 0.10352341, -0.14512205, -0.00203772, 0.13876788, -0.20846099, 0.30174408, -0.05674962, -0.03824093, 0.04494932, -0.21788683, 0.00113469, 0.07381456, -0.04039815, 0.06661601, -0.04302084, 0.01624429, -0.00399155, -0.0359768 , 0.10264208, -0.09216649, 0.06391548, 0.04904064, -0.05930197, 0.11127125, -0.06346119, -0.08973581, 0.06459495, -0.09600202, 0.02720553, 0.05152299, -0.0220437 , 0.04818264, -0.02235086, -0.05485135, -0.01077366, 0.02566737]) Here is the outcome I am trying to get to (produced in R): The problem is this: ccf_output is giving me only the correlation values for lag 0 and to the right of Lag 0. Ideally, I would like the full set of lag values (lag -60 to lag 60) so that I can produce something like the above plot. Is there a way to do this? | The statsmodels ccf function only produces forward lags, i.e. Corr(x_[t+k], y_[t]) for k >= 0. But one way to compute the backwards lags is by reversing the order of the both the input series and the output. backwards = smt.ccf(test['A'][::-1], test['B'][::-1], adjusted=False)[::-1] forwards = smt.ccf(test['A'], test['B'], adjusted=False) ccf_output = np.r_[backwards[:-1], forwards] Note that both backwards and forwards contained lag 0, so we had to remove that from one of them when combining them. Edit another alternative is to reverse the order of the arguments and the output: backwards = sm.tsa.ccf(test['B'], test['A'], adjusted=False)[::-1] | 8 | 14 |
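To reproduce an R-style ccf plot from the combined output above, a matplotlib sketch (the ±1.96/√n lines are the usual approximate 95% bounds that R draws; test and ccf_output come from the code above):

import numpy as np
import matplotlib.pyplot as plt

n = len(test)                          # 79 in the example above
lags = np.arange(-(n - 1), n)          # matches np.r_[backwards[:-1], forwards]
conf = 1.96 / np.sqrt(n)

plt.stem(lags, ccf_output)
plt.axhline(conf, linestyle='--', color='grey')
plt.axhline(-conf, linestyle='--', color='grey')
plt.xlabel('Lag')
plt.ylabel('CCF')
plt.show()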
63,491,221 | 2020-8-19 | https://stackoverflow.com/questions/63491221/modulenotfounderror-no-module-named-virtualenv-seed-embed-via-app-data-when-i | I was creating a new virtual environment on Ubuntu 20.04: $ virtualenv my_env But it gave an error: ModuleNotFoundError: No module named 'virtualenv.seed.embed.via_app_data' Other info: $ virtualenv --version virtualenv 20.0.17 from /usr/lib/python3/dist-packages/virtualenv/__init__.py | Try to create the virtual environment using directly venv module python3 -m venv my_env | 113 | 65 |
63,484,742 | 2020-8-19 | https://stackoverflow.com/questions/63484742/how-to-write-in-env-file-from-python-code | I want to write to the .env file using Python code. This is what I tried, but it's not working: os.environ['username'] = 'John' os.environ['email'] = '[email protected]' | os.environ is a Python dictionary containing the environment. In order to change the environment variables in your currently running process, and any child processes spawned with fork, you should use os.putenv as follows: import os os.putenv("username", "John") os.putenv("email", "[email protected]") Note that these changes are not permanent; they only affect the currently running process. If you want the changes to be permanent, you can write them to a .env file, and read and reset them on startup: with open(".env", "r") as f: for line in f.readlines(): try: key, value = line.strip().split('=', 1) os.putenv(key, value) except ValueError: # syntax error pass To generate the file you should: with open(".env", "w") as f: f.write("username=John\n") f.write("email=[email protected]\n") If you want to permanently change these environment variables at the OS level, you need an OS-specific solution, since each operating system has its own way of changing environment variables. This method will set the environment variable globally, which will affect all applications run and not just yours, so be extremely careful about what you write. Unix-like systems like Linux and macOS let you set these variables in .profile, so you may do something like this (Python 3.5+): from pathlib import Path with open(str(Path.home()) + "/.profile", "a") as f: f.write("export USERNAME=John\nexport EMAIL=[email protected]\n") On Windows, on the other hand, you should call setx: import subprocess subprocess.call(["setx", "USERNAME", "John"]) subprocess.call(["setx", "EMAIL", "[email protected]"]) | 8 | 8 |
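If the project can take a dependency, the python-dotenv package handles the .env file format (quoting, updating existing keys) directly; a short sketch (pip install python-dotenv, the values are placeholders):

from dotenv import set_key, dotenv_values

set_key(".env", "username", "John")
set_key(".env", "email", "john@example.com")

print(dotenv_values(".env"))   # {'username': 'John', 'email': 'john@example.com'}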
63,480,172 | 2020-8-19 | https://stackoverflow.com/questions/63480172/permanently-saving-train-data-in-google-colab | I have train data for 50GB. My google drive capacity was 15GB so I upgraded it to 200GB and I uploaded my train data to my google drive I connected to colab, but I can not find my train data in colab session, So I manually uploaded to colab which has 150GB capacity. It says, it will be deleted when my colab connection is off. It is impossible to save train data for colab permanently? And colab is free for 150GB? And I see colab support nvidia P4 that is almost 5000$. can I use it 100% or it is shared to some portion(like 0.1%) to me? (When P4 is assigned to me) | The way you can do this is to mount your google drive into colab environment. Assume your files are kept under a folder named myfolder in your google drive. This is what I would suggest, do this before you read/write any file: import os from google.colab import drive MOUNTPOINT = '/content/gdrive' DATADIR = os.path.join(MOUNTPOINT, 'My Drive', 'myfolder') drive.mount(MOUNTPOINT) then, for example, your file bigthing.zip reside under myfolder in your google drive will be available in colab as path=os.path.join(DATADIR, 'bigthing.zip') Similarly, when you save a file to a path like the above, you can find your file in Google Drive under the same directory. | 7 | 7 |
63,459,424 | 2020-8-17 | https://stackoverflow.com/questions/63459424/how-to-add-multiple-graphs-to-dash-app-on-a-single-browser-page | How do I add multiple graphs show in in picture on a same page? I am trying to add html.Div components to following code to update the page layout to add more graphs like that on single page, but these newly added graphs do not get shown on a page, only old graph is shown in picture is visible. What element should I modify, to let's say to add graph shown in uploaded image 3 times on single page of dash app on browser? import dash import dash_core_components as dcc import dash_html_components as html i[enter image description here][1]mport plotly.express as px import pandas as pd external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) # assume you have a "long-form" data frame # see https://plotly.com/python/px-arguments/ for more options df = pd.DataFrame({ "Fruit": ["Apples", "Oranges", "Bananas", "Apples", "Oranges", "Bananas"], "Amount": [4, 1, 2, 2, 4, 5], "City": ["SF", "SF", "SF", "Montreal", "Montreal", "Montreal"] }) fig = px.bar(df, x="Fruit", y="Amount", color="City", barmode="group") app.layout = html.Div(children=[ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='example-graph', figure=fig ) ]) if __name__ == '__main__': app.run_server(debug=True) | To add the same figure multiple times, you just need to extend your app.layout. I have extended you code below as an example. import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output import pandas as pd import plotly.express as px external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) # assume you have a "long-form" data frame # see https://plotly.com/python/px-arguments/ for more options df_bar = pd.DataFrame({ "Fruit": ["Apples", "Oranges", "Bananas", "Apples", "Oranges", "Bananas"], "Amount": [4, 1, 2, 2, 4, 5], "City": ["SF", "SF", "SF", "Montreal", "Montreal", "Montreal"] }) fig = px.bar(df_bar, x="Fruit", y="Amount", color="City", barmode="group") app.layout = html.Div(children=[ # All elements from the top of the page html.Div([ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='graph1', figure=fig ), ]), # New Div for all elements in the new 'row' of the page html.Div([ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='graph2', figure=fig ), ]), ]) if __name__ == '__main__': app.run_server(debug=True) The way I have structured the layout is by nesting the html.Div components. For every figure and corresponding titles, text, etc. we make another html.Div that makes a new 'row' in our application. The one thing to keep in mind is that different components need unique ids. In this example we have the same graph displayed twice, but they are not the exact same object. We are making two dcc.Graph objects using the same plotly.express figure I have made another example for you where I have a added another figure that is dynamic. The second figure is updated every time a new colorscale is selected from the dropdown menu. This is were the real potential of Dash lies. 
You can read more about callback functions in this tutorial import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output import pandas as pd import plotly.express as px external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) # assume you have a "long-form" data frame # see https://plotly.com/python/px-arguments/ for more options df_bar = pd.DataFrame({ "Fruit": ["Apples", "Oranges", "Bananas", "Apples", "Oranges", "Bananas"], "Amount": [4, 1, 2, 2, 4, 5], "City": ["SF", "SF", "SF", "Montreal", "Montreal", "Montreal"] }) fig = px.bar(df_bar, x="Fruit", y="Amount", color="City", barmode="group") # Data for the tip-graph df_tip = px.data.tips() app.layout = html.Div(children=[ # All elements from the top of the page html.Div([ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='example-graph', figure=fig ), ]), # New Div for all elements in the new 'row' of the page html.Div([ dcc.Graph(id='tip-graph'), html.Label([ "colorscale", dcc.Dropdown( id='colorscale-dropdown', clearable=False, value='bluyl', options=[ {'label': c, 'value': c} for c in px.colors.named_colorscales() ]) ]), ]) ]) # Callback function that automatically updates the tip-graph based on chosen colorscale @app.callback( Output('tip-graph', 'figure'), [Input("colorscale-dropdown", "value")] ) def update_tip_figure(colorscale): return px.scatter( df_tip, x="total_bill", y="tip", color="size", color_continuous_scale=colorscale, render_mode="webgl", title="Tips" ) if __name__ == '__main__': app.run_server(debug=True) Your next question may be, how do i place multiple figures side by side? This is where CSS and stylesheets are important. You have already added an external stylesheet https://codepen.io/chriddyp/pen/bWLwgP.css, which enables us to better structure our layout using the className component of divs. The width of a web page is set to 12 columns no matter the screen size. So if we want to have two figures side by side, each occupying 50% of the screen they need to fill 6 columns each. We can achieve this by nesting another html.Div as our top half row. In this upper div we can have another two divs in which we specify the style according to classname six columns. This splits the first row in two halves import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output import pandas as pd import plotly.express as px from jupyter_dash import JupyterDash external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) # assume you have a "long-form" data frame # see https://plotly.com/python/px-arguments/ for more options df_bar = pd.DataFrame({ "Fruit": ["Apples", "Oranges", "Bananas", "Apples", "Oranges", "Bananas"], "Amount": [4, 1, 2, 2, 4, 5], "City": ["SF", "SF", "SF", "Montreal", "Montreal", "Montreal"] }) fig = px.bar(df_bar, x="Fruit", y="Amount", color="City", barmode="group") app.layout = html.Div(children=[ # All elements from the top of the page html.Div([ html.Div([ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='graph1', figure=fig ), ], className='six columns'), html.Div([ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. 
'''), dcc.Graph( id='graph2', figure=fig ), ], className='six columns'), ], className='row'), # New Div for all elements in the new 'row' of the page html.Div([ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='graph3', figure=fig ), ], className='row'), ]) if __name__ == '__main__': app.run_server(debug=True) | 19 | 50 |
63,400,683 | 2020-8-13 | https://stackoverflow.com/questions/63400683/python-logging-with-loguru-log-request-params-on-fastapi-app | I have a fastapi application and I want to log every request made on it. I'm trying to use loguru and uvicorn for this, but I don't know how to print the headers and request params (if have one) associated with each request. I want something like this: INFO 2020-08-13 13:36:33.494 uvicorn.protocols.http.h11_impl:send - 127.0.0.1:52660 - "GET /url1/url2/ HTTP/1.1" 400 params={"some": value, "some1":value} Is there a way ? thanks for your help. Here some links: loguru uvicorn fastapi | A dependency on a router level could be used (thanks to @lsabi in the comments below): import sys import uvicorn from fastapi import FastAPI, Request, APIRouter, Depends from loguru import logger from starlette.routing import Match logger.remove() logger.add(sys.stdout, colorize=True, format="<green>{time:HH:mm:ss}</green> | {level} | <level>{message}</level>") app = FastAPI() router = APIRouter() async def logging_dependency(request: Request): logger.debug(f"{request.method} {request.url}") logger.debug("Params:") for name, value in request.path_params.items(): logger.debug(f"\t{name}: {value}") logger.debug("Headers:") for name, value in request.headers.items(): logger.debug(f"\t{name}: {value}") @router.get("/{param1}/{param2}") async def path_operation(param1: str, param2: str): return {'param1': param1, 'param2': param2} app.include_router(router, dependencies=[Depends(logging_dependency)]) if __name__ == "__main__": uvicorn.run("app:app", host="localhost", port=8001) More sophisticated approach is using a middleware for logging every request and doing matching manually: import sys import uvicorn from fastapi import FastAPI, Request from loguru import logger from starlette.routing import Match logger.remove() logger.add(sys.stdout, colorize=True, format="<green>{time:HH:mm:ss}</green> | {level} | <level>{message}</level>") app = FastAPI() @app.middleware("http") async def log_middle(request: Request, call_next): logger.debug(f"{request.method} {request.url}") routes = request.app.router.routes logger.debug("Params:") for route in routes: match, scope = route.matches(request) if match == Match.FULL: for name, value in scope["path_params"].items(): logger.debug(f"\t{name}: {value}") logger.debug("Headers:") for name, value in request.headers.items(): logger.debug(f"\t{name}: {value}") response = await call_next(request) return response @app.get("/{param1}/{param2}") async def path_operation(param1: str, param2: str): return {'param1': param1, 'param2': param2} if __name__ == "__main__": uvicorn.run("app:app", host="localhost", port=8001) curl http://localhost:8001/admin/home Output: 16:06:43 | DEBUG | GET http://localhost:8001/admin/home 16:06:43 | DEBUG | Params: 16:06:43 | DEBUG | param1: admin 16:06:43 | DEBUG | param2: home 16:06:43 | DEBUG | Headers: 16:06:43 | DEBUG | host: localhost:8001 16:06:43 | DEBUG | user-agent: curl/7.64.0 16:06:43 | DEBUG | accept: */* | 10 | 19 |
63,392,426 | 2020-8-13 | https://stackoverflow.com/questions/63392426/how-to-use-tailwindcss-with-django | How to use all features of TailwindCSS in a Django project (not only the CDN), including a clean workflow with auto-reloading and CSS minify step to be production-ready? | There are (at least) 3 different methods to install Tailwind with Django properly. 1st method: NPM This is the preferred method if you need node in your project (e.g : add plugins like Daisy UI, or have a SPA) Installing tailwindCSS and build/minify processes Create a new directory within your Django project, in which you'll install tailwindCSS like in any vanilla JS project setup: cd your-django-folder; mkdir jstoolchain; cd jstoolchain npm init -y npm install -D tailwindcss npx tailwindcss init Configure your template paths in tailwind.config.js that have just been created, by specifying the right place to parse your content. This could be something like below or a little different, depending on where your templates are located: ... content: ["../templates/**/*.{html,js}"], ... In your-django-folder, create an input.css file and add at least this in it: @tailwind base; @tailwind components; @tailwind utilities; In your package.json file, you can prepare npm scripts to ease execution of build / minify tasks (adapt the paths according to your Django static folder location): "scripts": { // use in local environment "tailwind-watch": "tailwindcss -i ../input.css -o ../static/css/output.css --watch", // use in remote environment "tailwind-build": "tailwindcss -i ../input.css -o ../static/css/output.css --minify" } In your jstoolchains folder, keep running npm run tailwind-watch while you're coding. This will ensure that your output.css file is regenerated as soon as you add a new tailwind class to your code. Add this file to .gitignore. If tailwind-watch is running without error, output.css file should now be filled with CSS. Now you can actually use tailwindCSS classes, by including the outputted css file into a Django template file along with Django's call to load the static files: {% load static %} <head> <link rel="stylesheet" href="{% static "css/output.css" %}"> </head> Don't forget to include the npm run tailwind-build script in your deployment process. This will build the output and remove unused classes to ensure a lower file size. Handling auto-reload locally What's missing now to ease development, is to auto-reload the django development server when an HTML file is changed and saved. The best extension to deal with this is Django-browser-reload. Just follow setup instructions, this will work as expected out of the box 2nd method: standalone CLI This is the preferred method if your project does not require node at all (eg: you don't have SPA for your front, you don't need plugins like daisyUI, etc.). 
You can install it manually following the official instructions, or automate it using a shell script like this: #!/bin/sh
set -e
TAILWIND_ARCHITECTURE=arm64 # choose the right architecture for you
TAILWIND_VERSION=v3.1.4 # choose the right version
SOURCE_NAME=tailwindcss-linux-${TAILWIND_ARCHITECTURE}
OUTPUT_NAME=tailwindcss
DOWNLOAD_URL=https://github.com/tailwindlabs/tailwindcss/releases/download/${TAILWIND_VERSION}/${SOURCE_NAME}
curl -sLO ${DOWNLOAD_URL} && chmod +x ${SOURCE_NAME}
mv ${SOURCE_NAME} ${OUTPUT_NAME} # rename it
mv ${OUTPUT_NAME} /usr/bin # move it to be used globally, into a folder already on the PATH
For Tailwind configuration itself, please refer to the 1st method where it's explained in detail. 3rd method: django-tailwind plugin This plugin produces more or less the same results as you get manually with the npm method. The plugin is well documented, up to date, and people seem to be satisfied with it. As a personal preference, I think abstractions like this create a little too much magic and I prefer building the toolchain myself to know what's happening behind the scenes. But feel free to experiment with this method as well and pick it if it suits you! | 43 | 106 |
63,448,679 | 2020-8-17 | https://stackoverflow.com/questions/63448679/typeerror-init-subclass-takes-no-keyword-arguments | I'm trying to create a metaclass but when I assign it to another class I receive the error: TypeError: __init_subclass__() takes no keyword arguments But I don't implement any __init_subclass__. Why is this function being called? class Meta(type): def __new__(cls, name, bases, dct): return super().__new__(cls, name, bases, dct) class MyClass(meta=Meta): pass | Change meta to metaclass. Any keyword arguments passed to the signature of your class are passed to its parent's __init_subclass__ method. Since you entered meta instead of metaclass this meta kwarg is passed to its parent's (object) __init_subclass__ method: >>> object.__init_subclass__(meta=5) TypeError: __init_subclass__() takes no keyword arguments A similar error would be raised if you actually implemented a __init_subclass__ but made a typo: class Parent: def __init_subclass__(cls, handler=None): super().__init_subclass__() cls.handler = handler class CorrectChild(Parent, handler=5): pass class TypoChild(Parent, typo=5): # TypeError: __init_subclass__() got an unexpected keyword argument 'typo' pass | 16 | 20 |
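For reference, a minimal sketch of the corrected class definition from the answer above; the only change to the question's code is spelling the keyword argument metaclass instead of meta:

class Meta(type):
    def __new__(cls, name, bases, dct):
        # build the class object exactly as type would
        return super().__new__(cls, name, bases, dct)

class MyClass(metaclass=Meta):  # 'metaclass=', not 'meta='
    pass

print(type(MyClass))  # <class '__main__.Meta'>, so the metaclass is now actually used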
63,449,011 | 2020-8-17 | https://stackoverflow.com/questions/63449011/why-do-i-get-cuda-out-of-memory-when-running-pytorch-model-with-enough-gpu-memo | I am asking this question because I am successfully training a segmentation network on my GTX 2070 on laptop with 8GB VRAM and I use exactly the same code and exactly the same software libraries installed on my desktop PC with a GTX 1080TI and it still throws out of memory. Why does this happen, considering that: The same Windows 10 + CUDA 10.1 + CUDNN 7.6.5.32 + Nvidia Driver 418.96 (comes along with CUDA 10.1) are both on laptop and on PC. The fact that training with TensorFlow 2.3 runs smoothly on the GPU on my PC, yet it fails allocating memory for training only with PyTorch. PyTorch recognises the GPU (prints GTX 1080 TI) via the command : print(torch.cuda.get_device_name(0)) PyTorch allocates memory when running this command: torch.rand(20000, 20000).cuda() #allocated 1.5GB of VRAM. What is the solution to this? | Most of the people (even in the thread below) jump to suggest that decreasing the batch_size will solve this problem. In fact, it does not in this case. For example, it would have been illogical for a network to train on 8GB VRAM and yet to fail to train on 11GB VRAM, considering that there were no other applications consuming video memory on the system with 11GB VRAM and the exact same configuration is installed and used. The reason why this happened in my case was that, when using the DataLoader object, I set a very high (12) value for the workers parameter. Decreasing this value to 4 in my case solved the problem. In fact, although at the bottom of the thread, the answer provided by Yurasyk at https://github.com/pytorch/pytorch/issues/16417#issuecomment-599137646 pointed me in the right direction. Solution: Decrease the number of workers in the PyTorch DataLoader. Although I do not exactly understand why this solution works, I assume it is related to the threads spawned behind the scenes for data fetching; it may be the case that, on some processors, such an error appears. | 8 | 12 |
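A hedged sketch of the fix described above, lowering num_workers when building the DataLoader. The dataset here is a stand-in, not the asker's segmentation data; only the num_workers value is the point:

import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in dataset; replace with the real segmentation Dataset
train_dataset = TensorDataset(torch.randn(64, 3, 32, 32),
                              torch.zeros(64, dtype=torch.long))

if __name__ == "__main__":  # required on Windows when num_workers > 0
    train_loader = DataLoader(
        train_dataset,
        batch_size=8,     # the batch size was not the culprit in this case
        shuffle=True,
        num_workers=4,    # was 12 in the failing setup; fewer worker processes avoided the error
        pin_memory=True,
    )
    for images, targets in train_loader:
        pass  # training step goes here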
63,460,126 | 2020-8-18 | https://stackoverflow.com/questions/63460126/typeerror-type-object-is-not-subscriptable-in-a-function-signature | Why am I receiving this error when I run this code? Traceback (most recent call last): File "main.py", line 13, in <module> def twoSum(self, nums: list[int], target: int) -> list[int]: TypeError: 'type' object is not subscriptable nums = [4,5,6,7,8,9] target = 13 def twoSum(self, nums: list[int], target: int) -> list[int]: dictionary = {} answer = [] for i in range(len(nums)): secondNumber = target-nums[i] if(secondNumber in dictionary.keys()): secondIndex = nums.index(secondNumber) if(i != secondIndex): return sorted([i, secondIndex]) dictionary.update({nums[i]: i}) print(twoSum(nums, target)) | The following answer only applies to Python < 3.9 The expression list[int] is attempting to subscript the object list, which is a class. Class objects are of the type of their metaclass, which is type in this case. Since type does not define a __getitem__ method, you can't do list[...]. To do this correctly, you need to import typing.List and use that instead of the built-in list in your type hints: from typing import List ... def twoSum(self, nums: List[int], target: int) -> List[int]: If you want to avoid the extra import, you can simplify the type hints to exclude generics: def twoSum(self, nums: list, target: int) -> list: Alternatively, you can get rid of type hinting completely: def twoSum(self, nums, target): | 59 | 69 |
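As a side note, on Python 3.7 and 3.8 the built-in list can also be used in annotations without typing.List by deferring annotation evaluation (PEP 563); on 3.9+ it works natively. A small sketch, with the function body simplified to a one-pass two-sum rather than the asker's exact code:

from __future__ import annotations  # annotations are no longer evaluated at definition time

def twoSum(nums: list[int], target: int) -> list[int]:
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

print(twoSum([4, 5, 6, 7, 8, 9], 13))  # [2, 3]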
63,403,758 | 2020-8-13 | https://stackoverflow.com/questions/63403758/is-oop-possible-using-discord-py-without-cogs | These last few days, I've been trying to adapt the structure of a discord bot written in discord.py to a more OOP one (because having functions lying around isn't ideal). But I have found way more problems that I could have ever expected. The thing is that I want to encapsulate all my commands into a single class, but I don't know what decorators to use and how and which classes I must inherit. What I've achieved so far is code like the snippet down below. It runs, but at the moment of executing a command it throws errors like discord.ext.commands.errors.CommandNotFound: Command "status" is not found I'm using Python 3.6. from discord.ext import commands class MyBot(commands.Bot): def __init__(self, command_prefix, self_bot): commands.Bot.__init__(self, command_prefix=command_prefix, self_bot=self_bot) self.message1 = "[INFO]: Bot now online" self.message2 = "Bot still online {}" async def on_ready(self): print(self.message1) @commands.command(name="status", pass_context=True) async def status(self, ctx): print(ctx) await ctx.channel.send(self.message2 + ctx.author) bot = MyBot(command_prefix="!", self_bot=False) bot.run("token") | To register the command you should use self.add_command(setup), but you can't have the self argument in the setup method, so you could do something like this: from discord.ext import commands class MyBot(commands.Bot): def __init__(self, command_prefix, self_bot): commands.Bot.__init__(self, command_prefix=command_prefix, self_bot=self_bot) self.message1 = "[INFO]: Bot now online" self.message2 = "Bot still online" self.add_commands() async def on_ready(self): print(self.message1) def add_commands(self): @self.command(name="status", pass_context=True) async def status(ctx): print(ctx) await ctx.channel.send(self.message2, ctx.author.name) bot = MyBot(command_prefix="!", self_bot=False) bot.run("token") | 10 | 19 |
63,414,448 | 2020-8-14 | https://stackoverflow.com/questions/63414448/pip3-throws-undefined-symbol-xml-sethashsalt | I am having python 3.6.8 on oracle Linux EL7 I installed pip3 using yum install python36-pip however, when ever I invoke pip3 it is having library error pip3 Traceback (most recent call last): File "/bin/pip3", line 8, in <module> from pip import main File "/usr/lib/python3.6/site-packages/pip/__init__.py", line 42, in <module> from pip.utils import get_installed_distributions, get_prog File "/usr/lib/python3.6/site-packages/pip/utils/__init__.py", line 27, in <module> from pip._vendor import pkg_resources File "/usr/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 35, in <module> import plistlib File "/usr/lib64/python3.6/plistlib.py", line 65, in <module> from xml.parsers.expat import ParserCreate File "/usr/lib64/python3.6/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/bin/pip3", line 12, in <module> from pip._internal import main File "/usr/lib/python3.6/site-packages/pip/__init__.py", line 42, in <module> from pip.utils import get_installed_distributions, get_prog File "/usr/lib/python3.6/site-packages/pip/utils/__init__.py", line 27, in <module> from pip._vendor import pkg_resources File "/usr/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 35, in <module> import plistlib File "/usr/lib64/python3.6/plistlib.py", line 65, in <module> from xml.parsers.expat import ParserCreate File "/usr/lib64/python3.6/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt I tried to see if there is any alternative to pyexpat.*.so but it seems there is none did ldd on the last line below is the out put [root@whf00jkd python3.6]# ldd /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so linux-vdso.so.1 => (0x00007ffd76bf9000) libexpat.so.1 => /scratch/oraofss/app/oraofss/product/18.0.0/client_1/lib/libexpat.so.1 (0x00007fec3a94a000) libpython3.6m.so.1.0 => /lib64/libpython3.6m.so.1.0 (0x00007fec3a422000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fec3a206000) libc.so.6 => /lib64/libc.so.6 (0x00007fec39e38000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fec39c34000) libutil.so.1 => /lib64/libutil.so.1 (0x00007fec39a31000) libm.so.6 => /lib64/libm.so.6 (0x00007fec3972f000) /lib64/ld-linux-x86-64.so.2 (0x00007fec3ad81000) I did a search for libexpat.so.1 [root@whf00jkd python3.6]# whereis libexpat.so.1 libexpat.so: /usr/lib/libexpat.so.1 /usr/lib/libexpat.so /usr/lib64/libexpat.so.1 /usr/lib64/libexpat.so [root@whf00jkd python3.6]# ls -ltr /usr/lib/libexpat.so.1 lrwxrwxrwx. 1 root root 17 Apr 1 2019 /usr/lib/libexpat.so.1 -> libexpat.so.1.6.0 [root@whf00jkd python3.6]# ls -ltr /usr/lib64/libexpat.so.1 lrwxrwxrwx. 1 root root 17 Apr 1 2019 /usr/lib64/libexpat.so.1 -> libexpat.so.1.6.0 and added a link libexpat.so.1 -> /usr/lib/libexpat.so.1 in /usr/lib64/python3.6/lib-dynload/ but that is not removing the error Please help | libexpat.so.1 pointing to wrong location. 
Fixed it with export LD_LIBRARY_PATH=/lib64/:${LD_LIBRARY_PATH} ldd /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so linux-vdso.so.1 => (0x00007fff073f1000) libexpat.so.1 => /lib64/libexpat.so.1 (0x00007f9ba53ce000) libpython3.6m.so.1.0 => /lib64/libpython3.6m.so.1.0 (0x00007f9ba4ea9000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f9ba4c8d000) libc.so.6 => /lib64/libc.so.6 (0x00007f9ba48bf000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f9ba46bb000) libutil.so.1 => /lib64/libutil.so.1 (0x00007f9ba44b8000) libm.so.6 => /lib64/libm.so.6 (0x00007f9ba41b6000) /lib64/ld-linux-x86-64.so.2 (0x00007f9ba5807000) | 7 | 13 |
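To confirm which shared object pyexpat ends up loading after the fix, one option on Linux is to import it and inspect the process's own memory map. A small sketch; the expected path is an assumption about this particular machine:

import xml.parsers.expat  # raises "undefined symbol: XML_SetHashSalt" if the wrong libexpat is picked up

with open("/proc/self/maps") as maps:
    expat_libs = sorted({line.split()[-1] for line in maps if "libexpat" in line})

# expect something like /lib64/libexpat.so.1.6.0 here,
# not the copy under the Oracle client's lib directory
print(expat_libs)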
63,443,583 | 2020-8-17 | https://stackoverflow.com/questions/63443583/seaborn-valueerror-zero-size-array-to-reduction-operation-minimum-which-has-no | I ran this scatter plot seaborn example from their own website, import seaborn as sns; sns.set() import matplotlib.pyplot as plt tips = sns.load_dataset("tips") # this works: ax = sns.scatterplot(x="total_bill", y="tip", data=tips) # But adding 'hue' gives the error below: ax = sns.scatterplot(x="total_bill", y="tip", hue="time", data=tips) This error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) e:\Anaconda3\lib\site-packages\IPython\core\formatters.py in __call__(self, obj) 339 pass 340 else: --> 341 return printer(obj) 342 # Finally look for special method names 343 method = get_real_method(obj, self.print_method) e:\Anaconda3\lib\site-packages\IPython\core\pylabtools.py in <lambda>(fig) 246 247 if 'png' in formats: --> 248 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs)) 249 if 'retina' in formats or 'png2x' in formats: 250 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs)) e:\Anaconda3\lib\site-packages\IPython\core\pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs) 130 FigureCanvasBase(fig) 131 --> 132 fig.canvas.print_figure(bytes_io, **kw) 133 data = bytes_io.getvalue() 134 if fmt == 'svg': e:\Anaconda3\lib\site-packages\matplotlib\backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs) 2191 else suppress()) 2192 with ctx: -> 2193 self.figure.draw(renderer) 2194 2195 bbox_inches = self.figure.get_tightbbox( e:\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 39 renderer.start_filter() 40 ---> 41 return draw(artist, renderer, *args, **kwargs) 42 finally: 43 if artist.get_agg_filter() is not None: e:\Anaconda3\lib\site-packages\matplotlib\figure.py in draw(self, renderer) 1861 1862 self.patch.draw(renderer) -> 1863 mimage._draw_list_compositing_images( 1864 renderer, self, artists, self.suppressComposite) 1865 e:\Anaconda3\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite) 129 if not_composite or not has_images: 130 for a in artists: --> 131 a.draw(renderer) 132 else: 133 # Composite any adjacent images together e:\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 39 renderer.start_filter() 40 ---> 41 return draw(artist, renderer, *args, **kwargs) 42 finally: 43 if artist.get_agg_filter() is not None: e:\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py in wrapper(*inner_args, **inner_kwargs) 409 else deprecation_addendum, 410 **kwargs) --> 411 return func(*inner_args, **inner_kwargs) 412 413 return wrapper e:\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in draw(self, renderer, inframe) 2746 renderer.stop_rasterizing() 2747 -> 2748 mimage._draw_list_compositing_images(renderer, self, artists) 2749 2750 renderer.close_group('axes') e:\Anaconda3\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite) 129 if not_composite or not has_images: 130 for a in artists: --> 131 a.draw(renderer) 132 else: 133 # Composite any adjacent images together e:\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 39 
renderer.start_filter() 40 ---> 41 return draw(artist, renderer, *args, **kwargs) 42 finally: 43 if artist.get_agg_filter() is not None: e:\Anaconda3\lib\site-packages\matplotlib\collections.py in draw(self, renderer) 929 def draw(self, renderer): 930 self.set_sizes(self._sizes, self.figure.dpi) --> 931 Collection.draw(self, renderer) 932 933 e:\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 39 renderer.start_filter() 40 ---> 41 return draw(artist, renderer, *args, **kwargs) 42 finally: 43 if artist.get_agg_filter() is not None: e:\Anaconda3\lib\site-packages\matplotlib\collections.py in draw(self, renderer) 383 else: 384 combined_transform = transform --> 385 extents = paths[0].get_extents(combined_transform) 386 if (extents.width < self.figure.bbox.width 387 and extents.height < self.figure.bbox.height): e:\Anaconda3\lib\site-packages\matplotlib\path.py in get_extents(self, transform, **kwargs) 601 xys.append(curve([0, *dzeros, 1])) 602 xys = np.concatenate(xys) --> 603 return Bbox([xys.min(axis=0), xys.max(axis=0)]) 604 605 def intersects_path(self, other, filled=True): e:\Anaconda3\lib\site-packages\numpy\core\_methods.py in _amin(a, axis, out, keepdims, initial, where) 41 def _amin(a, axis=None, out=None, keepdims=False, 42 initial=_NoValue, where=True): ---> 43 return umr_minimum(a, axis, None, out, keepdims, initial, where) 44 45 def _sum(a, axis=None, dtype=None, out=None, keepdims=False, ValueError: zero-size array to reduction operation minimum which has no identity Yesterday it did work. However, I ran an update of using conda update --all. Has something changed? What's going on? I run python on a Linux machine. Pandas: 1.1.0. Numpy: 1.19.1. Seaborn api: 0.10.1. | This issue seems to be resolved for matplotlib==3.3.2. seaborn: Scatterplot fails with matplotlib==3.3.1 #2194 With matplotlib version 3.3.1 A workaround is to send a list to hue, by using .tolist() Use hue=tips.time.tolist(). The normal behavior adds a title to the legend, but sending a list to hue does not add the legend title. The legend title can be added manually. import seaborn as sns # load data tips = sns.load_dataset("tips") # But adding 'hue' gives the error below: ax = sns.scatterplot(x="total_bill", y="tip", hue=tips.time.tolist(), data=tips) ax.legend(title='time') # add a title to the legend | 36 | 37 |
63,404,192 | 2020-8-13 | https://stackoverflow.com/questions/63404192/pip-install-tensorflow-cannot-find-file-called-client-load-reporting-filter-h | I keep failing to run pip install on the tensorflow package. First it downloads the .whl file, then goes through a bunch of already satisfied requirements until it gets to installing collected packages: tensorflow, at which point here's the error I get: ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\Borik\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\tensorflow\\include\\external\\com_github_grpc_grpc\\src\\core\\ext\\filters\\client_channel\\lb_policy\\grpclb\\client_load_reporting_filter.h' I've never seen anything like this before and can't seem to find anything on the net. I'm using Windows 10 and the latest versions of Python and pip. | I hit the same issue on Win10. Rather than renaming folders to shorten the offending path, I found a good solution in this Python documentation. To summarize the instructions there for raising MAX_PATH, either: Enable the "Enable Win32 long paths" group policy: Run gpedit (or search for "Edit Group Policy" in the Control Panel) Find the "Enable Win32 long paths" option in the sidebar. It should be under Local Computer Policy -> Computer Configuration -> Administrative Templates -> System -> Filesystem (under both Windows 10 and Windows 11, currently). Or edit the registry setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem and set LongPathsEnabled to 1. This raises the effective path limit from the old 260-character MAX_PATH to roughly 32,000 characters. After this change, my 'pip install tensorflow' succeeded. | 14 | 40 |
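If you prefer to check the current setting from Python instead of opening gpedit, here is a small read-only sketch using the standard-library winreg module (Windows-only; changing the value still needs the group policy edit or admin rights):

import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\FileSystem"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
    except FileNotFoundError:
        value = 0  # value missing means long paths are disabled

print("Long paths enabled" if value == 1
      else "Long paths disabled: pip may hit MAX_PATH errors on deep site-packages trees")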
63,449,770 | 2020-8-17 | https://stackoverflow.com/questions/63449770/oserror-cannot-load-library-gobject-2-0-error-0x7e | I installed the package weasyprint according to the instructions Installing weasyprint (Django project). My system: win 10. I have installed gtk3 and it is present in my PATH import weasyprint ... @staff_member_required def order_admin_pdf(request, order_id): # Получаем заказ по ID: order = get_object_or_404(Order, id=order_id) # Передаем объект в функцию render_to через генерацию шаблона pdf.html HTML в виде строки: html = render_to_string('shop/orders/order_admin_pdf.html', {'order': order}) # Создаем объект овтета с типом содержимого application/pdf и заголовком Content-Disposition: response = HttpResponse(content_type='application/pdf') response['Content-Disposition'] = 'filename=order_{}.pdf"'.format(order.id) # Вызов метода weasyprint для получения PDF документа: weasyprint.HTML(string=html).write_pdf(response, stylesheets=[weasyprint.CSS( settings.STATIC_ROOT + 'css/pdf.css')]) return response OSError: cannot load library 'gobject-2.0': error 0x7e. Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0' | Starting from Python 3.8 DLL dependencies for extension modules and DLLs loaded with ctypes on Windows are now resolved more securely. Only the system paths, the directory containing the DLL or PYD file, and directories added with add_dll_directory() are searched for load-time dependencies. Specifically, PATH and the current working directory are no longer used, and modifications to these will no longer have any effect on normal DLL resolution. If you followed the installation guide from the official documentation then the following example works. import os os.add_dll_directory(r"C:\Program Files\GTK3-Runtime Win64\bin") from weasyprint import HTML HTML('https://weasyprint.org/').write_pdf('weasyprint-website.pdf') In essence you need to call add_dll_directory() before interacting with WeasyPrint. | 21 | 13 |
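Applied to the Django project in the question, the setup might look like the sketch below. The GTK path is the default from the WeasyPrint installation guide and is an assumption about your machine; the directory must be registered before weasyprint is imported:

import os

GTK_BIN = r"C:\Program Files\GTK3-Runtime Win64\bin"  # adjust to your GTK runtime location
if os.name == "nt" and os.path.isdir(GTK_BIN):
    os.add_dll_directory(GTK_BIN)  # must run before "import weasyprint"

import weasyprint  # gobject-2.0 and the other GTK DLLs now resolve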
63,421,086 | 2020-8-14 | https://stackoverflow.com/questions/63421086/modulenotfounderror-no-module-named-webdriver-manager-error-even-after-instal | I've installed webdrivermanager on my windows-10 system C:\Users\username>pip install webdrivermanager Requirement already satisfied: webdrivermanager in c:\python\lib\site-packages (0.8.0) Requirement already satisfied: lxml in c:\python\lib\site-packages (from webdrivermanager) (4.5.1) Requirement already satisfied: requests in c:\python\lib\site-packages (from webdrivermanager) (2.20.1) Requirement already satisfied: tqdm in c:\python\lib\site-packages (from webdrivermanager) (4.46.1) Requirement already satisfied: appdirs in c:\python\lib\site-packages (from webdrivermanager) (1.4.4) Requirement already satisfied: BeautifulSoup4 in c:\python\lib\site-packages (from webdrivermanager) (4.6.0) Requirement already satisfied: certifi>=2017.4.17 in c:\python\lib\site-packages (from requests->webdrivermanager) (2018.11.29) Requirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\python\lib\site-packages (from requests->webdrivermanager) (3.0.4) Requirement already satisfied: idna<2.8,>=2.5 in c:\python\lib\site-packages (from requests->webdrivermanager) (2.7) Requirement already satisfied: urllib3<1.25,>=1.21.1 in c:\python\lib\site-packages (from requests->webdrivermanager) (1.23) Still whenever I am trying to use webdrivermanager I'm facing an error. Code Block: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(executable_path=ChromeDriverManager().install()) driver.get('https://www.google.com/') Console Output: C:\Users\username\Desktop\Debanjan\PyPrograms>webdriverManagerChrome.py Traceback (most recent call last): File "C:\Users\username\Desktop\Debanjan\PyPrograms\webdriverManagerChrome.py", line 2, in <module> from webdriver_manager.chrome import ChromeDriverManager ModuleNotFoundError: No module named 'webdriver_manager' Can someone help me, if I'm missing something? Incase it adds any value, I'm using sublimetext3 | Update (thanks to Vishal Kharde) The documentation now suggests: pip install webdriver-manager Solution: Install it like that: pip install webdriver_manager instead of pip install webdrivermanager. Requirements: The newest version, according to the documentation supports python 3.6 or newer versions: Reference: https://pypi.org/project/webdriver-manager/ | 44 | 87 |
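If the import still fails after installing, a common cause is that pip installed the package into a different interpreter than the one Sublime Text (or its build system) runs. A quick sanity-check sketch:

import sys

print(sys.executable)  # the interpreter actually running this script
print(sys.path)        # where it looks for modules

# to install into exactly this interpreter, run in a shell:
#   <that executable> -m pip install webdriver-manager
try:
    import webdriver_manager
    print("webdriver_manager found at", webdriver_manager.__file__)
except ModuleNotFoundError:
    print("webdriver_manager is not visible to this interpreter")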
63,467,815 | 2020-8-18 | https://stackoverflow.com/questions/63467815/how-to-access-columntransformer-elements-in-gridsearchcv | I wanted to find out the correct naming convention when referring to individual preprocessor included in ColumnTransformer (which is part of a pipeline) in param_grid for grid_search. Environment & sample data: import seaborn as sns from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder, KBinsDiscretizer, MinMaxScaler from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression df = sns.load_dataset('titanic')[['survived', 'age', 'embarked']] X_train, X_test, y_train, y_test = train_test_split(df.drop(columns='survived'), df['survived'], test_size=0.2, random_state=123) Pipeline: num = ['age'] cat = ['embarked'] num_transformer = Pipeline(steps=[('imputer', SimpleImputer()), ('discritiser', KBinsDiscretizer(encode='ordinal', strategy='uniform')), ('scaler', MinMaxScaler())]) cat_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer(transformers=[('num', num_transformer, num), ('cat', cat_transformer, cat)]) pipe = Pipeline(steps=[('preprocessor', preprocessor), ('classiffier', LogisticRegression(random_state=1, max_iter=10000))]) param_grid = dict([SOMETHING]imputer__strategy = ['mean', 'median'], [SOMETHING]discritiser__nbins = range(5,10), classiffier__C = [0.1, 10, 100], classiffier__solver = ['liblinear', 'saga']) grid_search = GridSearchCV(pipe, param_grid=param_grid, cv=10) grid_search.fit(X_train, y_train) Basically, what should I write instead of [SOMETHING] in my code? I have looked at this answer which answered the question for make_pipeline - so using the similar idea, I tried 'preprocessor__num__', 'preprocessor__num_', 'pipeline__num__', 'pipeline__num_' - no luck so far. 
Thank you | You were close, the correct way to declare it is like this: param_grid = {'preprocessor__num__imputer__strategy' : ['mean', 'median'], 'preprocessor__num__discritiser__n_bins' : range(5,10), 'classiffier__C' : [0.1, 10, 100], 'classiffier__solver' : ['liblinear', 'saga']} Here is the full code: import seaborn as sns from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder, KBinsDiscretizer, MinMaxScaler from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression df = sns.load_dataset('titanic')[['survived', 'age', 'embarked']] X_train, X_test, y_train, y_test = train_test_split(df.drop(columns='survived'), df['survived'], test_size=0.2, random_state=123) num = ['age'] cat = ['embarked'] num_transformer = Pipeline(steps=[('imputer', SimpleImputer()), ('discritiser', KBinsDiscretizer(encode='ordinal', strategy='uniform')), ('scaler', MinMaxScaler())]) cat_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer(transformers=[('num', num_transformer, num), ('cat', cat_transformer, cat)]) pipe = Pipeline(steps=[('preprocessor', preprocessor), ('classiffier', LogisticRegression(random_state=1, max_iter=10000))]) param_grid = {'preprocessor__num__imputer__strategy' : ['mean', 'median'], 'preprocessor__num__discritiser__n_bins' : range(5,10), 'classiffier__C' : [0.1, 10, 100], 'classiffier__solver' : ['liblinear', 'saga']} grid_search = GridSearchCV(pipe, param_grid=param_grid, cv=10) grid_search.fit(X_train, y_train) One simply way to check the available parameter names is like this: print(pipe.get_params().keys()) This will print out the list of all the available parameters which you can copy directly into your params dictionary. I have written a utility function which you can use to check if a parameter exist in a pipeline/classifier by simply passing in a keyword. def check_params_exist(esitmator, params_keyword): all_params = esitmator.get_params().keys() available_params = [x for x in all_params if params_keyword in x] if len(available_params)==0: return "No matching params found!" else: return available_params Now if you are unsure of the exact name, just pass imputer as the keyword print(check_params_exist(pipe, 'imputer')) This will print the following list: ['preprocessor__num__imputer', 'preprocessor__num__imputer__add_indicator', 'preprocessor__num__imputer__copy', 'preprocessor__num__imputer__fill_value', 'preprocessor__num__imputer__missing_values', 'preprocessor__num__imputer__strategy', 'preprocessor__num__imputer__verbose', 'preprocessor__cat__imputer', 'preprocessor__cat__imputer__add_indicator', 'preprocessor__cat__imputer__copy', 'preprocessor__cat__imputer__fill_value', 'preprocessor__cat__imputer__missing_values', 'preprocessor__cat__imputer__strategy', 'preprocessor__cat__imputer__verbose'] | 8 | 13 |
63,466,010 | 2020-8-18 | https://stackoverflow.com/questions/63466010/what-is-the-recommended-way-to-access-data-from-r-data-table-in-python-can-i-av | Is there some recommended way to pass data from R (in the form of data.table) to Python without having to save the data to disc? I know I could use python modules from R using reticulate (and I suppose the same thing can be done on the other side using rpy2), but from what I've read that hurts the overall performance of the libraries and therefore there is quite a big chance that it's better to store to disc my r data.table and read that same data from disc using python and running, say, lightgbm, than to try to run lightgbm using reticulate or data.table using rpy2. Why don't I just stick to either R or Python: I prefer using r data.table (as opposed to Pandas) for my data manipulations, because it is way faster, more memory efficient, and has a lot of features which I like, such as inequi joins, rolling joins, cartesian joins, and pretty straightforward melting and casting. I also like that whenever I ask a data.table related question in stack overflow, I get a high-quality answer pretty fast, while for Pandas i haven't been so successful. However, there are tasks for which I prefer python, such as when it comes to gradient boosting or neural networks. | There is no recommended way. In theory you have to dump R data.frame to disk and read it in python. In practice (assuming production grade operating system), you can use "RAM disk" location /dev/shm/ so you essentially write data to a file that resides in RAM memory and then read it from RAM directly, without the need to dump data to disk memory. Example usage: fwrite(iris, "/dev/shm/data.csv") d = fread("/dev/shm/data.csv") unlink("/dev/shm/data.csv") As for the format, you have the following options: csv - universal and portable format data.table's fwrite function is super fast and produces portable csv data file. Be sure to enable all cpu threads with setDTthreads(0L) before using fwrite on a multi-core machine. Then in python you need to read csv file, for which python datatable module will be very fast, and then, if needed, object can be converted to python pandas using x.to_pandas(). feather - "portable" binary format Another option is to use R's arrow package and function write_feather, and then read data in python using pyarrow module and read_feather. This format should be faster than csv in most cases, see timings below. In case of writing data the difference might not be that big, but reading data will be much faster in most cases, especially when it comes to reading many character variables in R (although it is not your use case because you read in python). On the other hand it is not really portable yet (see apache/arrow#8732). Moreover, eventually if new version 3 will be released, then files saved with current feather might not be compatible anymore. fst - fast binary format fst can be used as faster alternative to feather format but it is not yet possible to read fst data in python, so this method cannot be applied to solve your problem as of now. You can track progress of this FR in https://github.com/fstpackage/fst/issues/184 and when issue will be resolved, then it will probably address your question in the fastest manner. Using following scripts library(data.table) setDTthreads(0L) ## 40 N = 1e8L x = setDT(lapply(1:10, function(...) 
sample.int(N))) system.time(arrow::write_feather(x, "/dev/shm/data.feather")) system.time(fwrite(x, "/dev/shm/data.csv", showProgress=FALSE)) rm(x) ## run python unlink(paste0("/dev/shm/data.",c("csv","feather"))) N = 1e8L x = setDT(lapply(1:10, function(...) runif(N))) system.time(arrow::write_feather(x, "/dev/shm/data.feather")) system.time(fwrite(x, "/dev/shm/data.csv", showProgress=FALSE)) rm(x) ## run python unlink(paste0("/dev/shm/data.",c("csv","feather"))) N = 1e7L x = setDT(lapply(1:10, function(...) paste0("id",sample.int(N)))) system.time(arrow::write_feather(x, "/dev/shm/data.feather")) system.time(fwrite(x, "/dev/shm/data.csv", showProgress=FALSE)) rm(x) ## run python unlink(paste0("/dev/shm/data.",c("csv","feather"))) import datatable as dt import timeit import gc from pyarrow import feather gc.collect() t_start = timeit.default_timer() x = dt.fread("/dev/shm/data.csv") print(timeit.default_timer() - t_start, flush=True) gc.collect() t_start = timeit.default_timer() y = x.to_pandas() print(timeit.default_timer() - t_start, flush=True) del x, y gc.collect() t_start = timeit.default_timer() x = feather.read_feather("/dev/shm/data.feather", memory_map=False) print(timeit.default_timer() - t_start, flush=True) del x I got the following timings: integer: write: feather 2.7s vs csv 5.7s read: feather 2.8s vs csv 111s+3s double: write: feather 5.7s vs csv 10.8s read: feather 5.1s vs csv 180s+4.9s character: write: feather 50.2s vs csv 2.8s read: feather 35s vs csv 14s+16s Based on the presented data cases (1e8 rows for int/double, 1e7 rows for character; 10 columns: int/double/character) we can conclude the following: writing int and double is around 2 times slower for csv than feather writing character is around 20 times faster for csv than feather reading int and double is much slower for csv than feather conversion int and double from python datatable to pandas is relatively cheap reading character is around 2 times faster for csv than feather conversion character from python datatable to pandas is expensive Note that these are very basic data cases, be sure to check timings on your actual data. | 11 | 7 |
63,460,213 | 2020-8-18 | https://stackoverflow.com/questions/63460213/how-to-define-colors-in-a-figure-using-plotly-graph-objects-and-plotly-express | There are many questions and answers that touch upon this topic one way or another. With this contribution I'd like to clearly show why an easy approch such as marker = {'color' : 'red'} will work for plotly.graph_objects (go), but color='red' will not for plotly.express (px) although color is an attribute of both px.Line and px.Scatter. And I'd like to demonstrate why it's awesome that it doesn't. So, if px is supposed to be the easiest way to make a plotly figure, then why does something as apparently obvious as color='red' return the error ValueError: Value of 'color' is not the name of a column in 'data_frame'. To put it short, it's because color in px does not accept an arbitrary color name or code, but rather a variable name in your dataset in order to assign a color cycle to unique values and display them as lines with different colors. Let me demonstrate by applyig a gapminder dataset and show a scatterplot of Life expectancy versus GDP per capita for all (at least most) countries across the world as of 2007. A basic setup like below will produce the following plot Figure 1, plot using go: The color is set by a cycle named plotly but is here specified using marker = {'color' : 'red'} Figure 2, code: import plotly.graph_objects as go df = px.data.gapminder() df=df.query("year==2007") fig = go.Figure() fig.add_traces(go.Scatter(x=df['gdpPercap'], y=df["lifeExp"], mode = 'markers', marker = {'color' : 'red'} )) fig.show() So let's try this with px, and assume that color='red' would do the trick: Code 2, attempt at scatter plot with defined color using px: # imports import plotly.express as px import pandas as pd # dataframe df = px.data.gapminder() df=df.query("year==2007") # plotly express scatter plot px.scatter(df, x="gdpPercap", y="lifeExp", color = 'red', ) Result: ValueError: Value of 'color' is not the name of a column in 'data_frame'. Expected one of ['country', 'continent', 'year', 'lifeExp', 'pop', 'gdpPercap', 'iso_alpha', 'iso_num'] but received: red So what's going on here? | First, if an explanation of the broader differences between go and px is required, please take a look here and here. And if absolutely no explanations are needed, you'll find a complete code snippet at the very end of the answer which will reveal many of the powers with colors in plotly.express Part 1: The Essence: It might not seem so at first, but there are very good reasons why color='red' does not work as you might expect using px. But first of all, if all you'd like to do is manually set a particular color for all markers you can do so using .update_traces(marker=dict(color='red')) thanks to pythons chaining method. But first, lets look at the deafult settings: 1.1 Plotly express defaults Figure 1, px default scatterplot using px.Scatter Code 1, px default scatterplot using px.Scatter # imports import plotly.express as px import pandas as pd # dataframe df = px.data.gapminder() df=df.query("year==2007") # plotly express scatter plot px.scatter(df, x="gdpPercap", y="lifeExp") Here, as already mentioned in the question, the color is set as the first color in the default plotly sequence available through px.colors.qualitative.Plotly: ['#636EFA', # the plotly blue you can see above '#EF553B', '#00CC96', '#AB63FA', '#FFA15A', '#19D3F3', '#FF6692', '#B6E880', '#FF97FF', '#FECB52'] And that looks pretty good. 
But what if you want to change things and even add more information at the same time? 1.2: How to override the defaults and do exactly what you want with px colors: As we alread touched upon with px.scatter, the color attribute does not take a color like red as an argument. Rather, you can for example use color='continent' to easily distinguish between different variables in a dataset. But there's so much more to colors in px: The combination of the six following methods will let you do exactly what you'd like with colors using plotly express. Bear in mind that you do not even have to choose. You can use one, some, or all of the methods below at the same time. And one particular useful approach will reveal itself as a combinatino of 1 and 3. But we'll get to that in a bit. This is what you need to know: 1. Change the color sequence used by px with: color_discrete_sequence=px.colors.qualitative.Alphabet 2. Assign different colors to different variables with the color argument color = 'continent' 3. customize one or more variable colors with color_discrete_map={"Asia": 'red'} 4. Easily group a larger subset of your variables using dict comprehension and color_discrete_map subset = {"Asia", "Africa", "Oceania"} group_color = {i: 'red' for i in subset} 5. Set opacity using rgba() color codes. color_discrete_map={"Asia": 'rgba(255,0,0,0.4)'} 6. Override all settings with: .update_traces(marker=dict(color='red')) Part 2: The details and the plots The following snippet will produce the plot below that shows life expectany for all continents for varying levels of GDP. The size of the markers representes different levels of populations to make things more interesting right from the get go. Plot 2: Code 2: import plotly.express as px import pandas as pd # dataframe, input df = px.data.gapminder() df=df.query("year==2007") px.scatter(df, x="gdpPercap", y="lifeExp", color = 'continent', size='pop', ) To illustrate the flexibility of the methods above, lets first just change the color sequence. Since we for starters are only showing one category and one color, you'll have to wait for the subsequent steps to see the real effects. But here's the same plot now with color_discrete_sequence=px.colors.qualitative.Alphabet as per step 1: 1. Change the color sequence used by px with color_discrete_sequence=px.colors.qualitative.Alphabet Now, let's apply the colors from the Alphabet color sequence to the different continents: 2. Assign different colors to different variables with the color argument color = 'continent' If you, like me, think that this particular color sequence is easy on the eye but perhaps a bit indistinguishable, you can assign a color of your choosing to one or more continents like this: 3. customize one or more variable colors with color_discrete_map={"Asia": 'red'} And this is pretty awesome: Now you can change the sequence and choose any color you'd like for particularly interesting variables. But the method above can get a bit tedious if you'd like to assign a particular color to a larger subset. So here's how you can do that too with a dict comprehension: 4. 
Assign colors to a group using a dict comprehension and color_discrete_map # imports import plotly.express as px import pandas as pd # dataframe df = px.data.gapminder() df=df.query("year==2007") subset = {"Asia", "Europe", "Oceania"} group_color = {i: 'red' for i in subset} # plotly express scatter plot px.scatter(df, x="gdpPercap", y="lifeExp", size='pop', color='continent', color_discrete_sequence=px.colors.qualitative.Alphabet, color_discrete_map=group_color ) 5. Set opacity using rgba() color codes. Now let's take one step back. If you think red suits Asia just fine, but is perhaps a bit too strong, you can adjust the opacity using a rgba color like 'rgba(255,0,0,0.4)' to get this: Complete code for the last plot: import plotly.express as px import pandas as pd # dataframe, input df = px.data.gapminder() df=df.query("year==2007") px.scatter(df, x="gdpPercap", y="lifeExp", color_discrete_sequence=px.colors.qualitative.Alphabet, color = 'continent', size='pop', color_discrete_map={"Asia": 'rgba(255,0,0,0.4)'} ) And if you think we're getting a bit too complicated by now, you can override all settings like this again: 6. Override all settings with: .update_traces(marker=dict(color='red')) And this brings us right back to where we started. I hope you'll find this useful! Complete code snippet with all options available: # imports import plotly.express as px import pandas as pd # dataframe df = px.data.gapminder() df=df.query("year==2007") subset = {"Asia", "Europe", "Oceania"} group_color = {i: 'red' for i in subset} # plotly express scatter plot px.scatter(df, x="gdpPercap", y="lifeExp", size='pop', color='continent', color_discrete_sequence=px.colors.qualitative.Alphabet, #color_discrete_map=group_color color_discrete_map={"Asia": 'rgba(255,0,0,0.4)'} )#.update_traces(marker=dict(color='red')) | 38 | 63 |
63,432,473 | 2020-8-16 | https://stackoverflow.com/questions/63432473/access-to-fetch-url-been-blocked-by-cors-policy-no-access-control-allow-orig | I'm am trying to fetch a serverless function from a react app in development mode with the following code. let response = await fetch(url, { method: 'POST', mode: 'cors', body: "param=" + paramVar, }) .then(response => response.json()); The backend function is a Python Cloud function with the following code: def main(request): # CORS handling if request.method == 'OPTIONS': # Allows GET requests from any origin with the Content-Type # header and caches preflight response for an 3600s headers = { 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET, POST', 'Access-Control-Allow-Headers': 'Content-Type', 'Access-Control-Max-Age': '3600' } return ('', 204, headers) # Set CORS headers for the main request headers = { 'Content-Type':'application/json', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Headers': 'Content-Type', } # Process request return (json.dumps(response), 200, headers) But I keep getting the following error: Access to fetch at 'url' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. When I try to perform the same request using curl I get a proper response. Using curl to get the options gives me the following: HTTP/2 204 access-control-allow-headers: Content-Type access-control-allow-methods: GET, POST access-control-allow-origin: * access-control-max-age: 3600 content-type: text/html; charset=utf-8 date: Sun, 16 Aug 2020 01:29:41 GMT Anyone can help me understand why I'm not able to get a response at my front-end? The 'Access-Control-Allow-Origin' is present in the headers so I really don't understand what is the cause of this error. | There was actually a bug in the backend that was only triggered by some additional headers added by the browser. In that particular case, the server was returning a 404 error which wouldn't contain my header definitions and would cause the CORS policy block. I was only able to identify the bug after I used devtools to track the request sent by the browser and replicated all the headers in my curl request. After fixing the function logic the problem was fixed. | 22 | -3 |
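One way to reproduce such a hidden backend error outside the browser is to replay the request with the extra headers the browser adds. A hedged sketch using the requests library; the URL, parameter and header values are placeholders to copy from devtools, not known values:

import requests

url = "https://REGION-PROJECT.cloudfunctions.net/main"  # placeholder Cloud Function URL
param_value = "test"                                    # placeholder body value
headers = {
    # copy these from the failing request in the browser devtools Network tab
    "Origin": "http://localhost:3000",
    "Content-Type": "text/plain;charset=UTF-8",
}

resp = requests.post(url, data=f"param={param_value}", headers=headers)
print(resp.status_code)  # a 404/500 here shows the CORS message is only a symptom
print(resp.headers.get("Access-Control-Allow-Origin"))
print(resp.text)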
63,388,030 | 2020-8-13 | https://stackoverflow.com/questions/63388030/warningtensorflowwrite-grads-will-be-ignored-in-tensorflow-2-0-for-the-tens | I am using the following lines of codes to visualise the gradients of an ANN model using tensorboard tensorboard_callback = tf.compat.v1.keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=1, write_graph = True, write_grads =True, write_images = False) tensorboard_callback .set_model(model) %tensorboard --logdir ./Graph I received a warning message saying "WARNING:tensorflow:write_grads will be ignored in TensorFlow 2.0 for the TensorBoard Callback." I get the tensorboard output, but without gradients. What could be the possible reason? (Note: I use 2.3.0 tensorflow version) Thank you. | Write_Grads was not implemented in TF2.x. This is one of the highly expected feature request that is still open. Please check this GitHub issue as feature request. So, we only need to import TF1.x modules and use write_grads as shown in the following code. # Load the TensorBoard notebook extension %load_ext tensorboard import tensorflow as tf import datetime # Clear any logs from previous runs !rm -rf ./logs/ # Disable V2 behavior tf.compat.v1.disable_v2_behavior() mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 def create_model(): return tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model = create_model() model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.compat.v1.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, write_grads =True) model.fit(x=x_train, y=y_train, epochs=1, validation_data=(x_test, y_test), callbacks=[tensorboard_callback]) %tensorboard --logdir logs/fit Output: Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step Train on 60000 samples, validate on 10000 samples WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_v1.py:2048: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically. 32/60000 [..............................] - ETA: 0s - loss: 2.3311 - acc: 0.0312WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0055s vs `on_train_batch_end` time: 0.0235s). Check your callbacks. 60000/60000 [==============================] - 17s 288us/sample - loss: 0.2187 - acc: 0.9349 - val_loss: 0.1012 - val_acc: 0.9690 <tensorflow.python.keras.callbacks.History at 0x7f7ebd1d3d30> | 7 | 6 |
63,475,461 | 2020-8-18 | https://stackoverflow.com/questions/63475461/unable-to-import-opengl-gl-in-python-on-macos | I am using OpenGL to render a scene in python. My code works perfectly fine on windows but, for some reason, I'm having issues when importing opengl.gl on MacOS. The issue arises when calling from OpenGL.GL import ... in both python scripts and the python console. More specifically here is the exact call in my script: from OpenGL.GL import glGenBuffers, glBindBuffer, glBufferData, \ glGenVertexArrays, glBindVertexArray, glEnableVertexAttribArray, glVertexAttribPointer, \ glDrawArrays, glUseProgram, glEnable, glDisable, \ GL_ARRAY_BUFFER, GL_STATIC_DRAW, GL_DEPTH_TEST, \ GL_FLOAT, GL_FALSE, \ GL_TRIANGLES, GL_LINES, GL_LINE_STRIP This results in the following error: Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/darwin.py", line 35, in GL return ctypesloader.loadLibrary( File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/ctypesloader.py", line 36, in loadLibrary return _loadLibraryWindows(dllType, name, mode) File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/ctypesloader.py", line 89, in _loadLibraryWindows return dllType( name, mode ) File "/usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/ctypes/__init__.py", line 373, in __init__ self._handle = _dlopen(self._name, mode) OSError: ('dlopen(OpenGL, 10): image not found', 'OpenGL', None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iyadboustany/Desktop/lensmaster/Lensmaster.py", line 18, in <module> from MainWindow import MainWindow File "/Users/iyadboustany/Desktop/lensmaster/MainWindow.py", line 14, in <module> from Robot import Robot File "/Users/iyadboustany/Desktop/lensmaster/Robot.py", line 8, in <module> from Graphics.Scene import DHNode File "/Users/iyadboustany/Desktop/lensmaster/Graphics/Scene.py", line 13, in <module> from OpenGL.GL import glGenBuffers, glBindBuffer, glBufferData, \ File "/usr/local/lib/python3.8/site-packages/OpenGL/GL/__init__.py", line 3, in <module> from OpenGL import error as _error File "/usr/local/lib/python3.8/site-packages/OpenGL/error.py", line 12, in <module> from OpenGL import platform, _configflags File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/__init__.py", line 36, in <module> _load() File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/__init__.py", line 33, in _load plugin.install(globals()) File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 97, in install namespace[ name ] = getattr(self,name,None) File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 15, in __get__ value = self.fget( obj ) File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/darwin.py", line 62, in GetCurrentContext return self.CGL.CGLGetCurrentContext File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 15, in __get__ value = self.fget( obj ) File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/darwin.py", line 45, in CGL def CGL(self): return self.GL File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 15, in __get__ value = self.fget( obj ) File "/usr/local/lib/python3.8/site-packages/OpenGL/platform/darwin.py", line 41, in GL raise ImportError("Unable to load OpenGL library", *err.args) ImportError: ('Unable to load OpenGL library', 'dlopen(OpenGL, 10): image not 
found', 'OpenGL', None) Notes: Running glxgears works just fine. I'm running macOS Big Sur beta (20A5343i) I'm using python 3.8.5 I installed opengl using pip: pip3 install PyOpenGL PyOpenGL_accelerate | This error is because Big Sur no longer has the OpenGL library nor other system libraries in standard locations in the file system and instead uses a cache. PyOpenGL uses ctypes to try to locate the OpenGL library and it fails to find it. Fixing ctypes in Python so that it will find the library is the subject of this pull request https://github.com/python/cpython/pull/21241 So a future version of Python should resolve the problem. To fix it now you can edit PyOpenGL file OpenGL/platform/ctypesloader.py changing line fullName = util.find_library( name ) to fullName = '/System/Library/Frameworks/OpenGL.framework/OpenGL' | 13 | 33 |
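A minimal runtime alternative to the accepted answer's file edit, assuming the standard Big Sur framework path: patch ctypes.util.find_library before PyOpenGL is imported so the loader falls back to the framework location that no longer exists on disk (the path and the set of patched names are assumptions, not part of the original answer).

import ctypes.util

_orig_find_library = ctypes.util.find_library

def _patched_find_library(name):
    # assumption: fall back to the dyld-cached framework path on Big Sur
    result = _orig_find_library(name)
    if result is None and name in ("OpenGL", "GLUT", "GLU"):
        result = f"/System/Library/Frameworks/{name}.framework/{name}"
    return result

ctypes.util.find_library = _patched_find_library

from OpenGL.GL import glEnable, GL_DEPTH_TEST  # should now resolve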
63,379,968 | 2020-8-12 | https://stackoverflow.com/questions/63379968/using-requirements-txt-to-automatically-install-packages-from-conda-channels-and | I'm trying to set a conda environment using a requirements.txt file that a coworker shared with me. My coworker uses Python in a Mac without Anaconda, and I'm using it in a Windows machine with Anaconda. The file requirements.txt was generated with the command pip freeze and looks like this: absl-py==0.7.1 affine==2.3.0 agate==1.6.0 agate-dbf==0.2.0 agate-excel==0.2.1 agate-sql==0.5.2 ... After checking the answer of this question, I tried the following in the Anaconda terminal: conda create --name my-env-name --file requirements.txt Which fails with the following error message: PackagesNotFoundError: The following packages are not available from current channels: - appscript==1.0.1 - style==1.1.0 - senticnet==1.3 - scikits.optimization==0.3 ... My understanding is that this happens because those packages are not available in the Anaconda package installation channels, and that they should be installed instead via pip with my conda environment activated, using pip install -r requirements.txt The problem is that this list of packages is very long, and I would like to avoid having to manually check and separating which packages are included in Anaconda channels and which should be installed via pip. Then, is there a way to tell Anaconda to create an environment by automatically recognizing the packages included in its channels, installing them, and then installing the rest using pip? | Using requirements.txt with conda There's no problem at all using a requirements.txt file when creating a conda environment. In fact, you can also set additional channels at creation time: conda create --name my-env-name --file requirements.txt --channel <NAME_OF_CHANNEL> for example, in the case of the first package you mention, you can install it from anaconda channel. So you could run: conda create --name my-env-name --file requirements.txt --channel default --channel anaconda Why using default channel first? Well, just to give preference to the default one (the priority of channels is specified by the order they are listed: higher priority from left to right). When at least some of the packages are not available using conda Well, when no conda channel can provide any of your required packages, there are several alternatives: Install through conda those packages available in any of its channels. Install through pip the rest. Create a conda environment.yml file: conda env export > environment.yml When you need to recreate this environment, then you can do: conda env create --name my-env-name --file environment.yml and it will install the packages using conda, will install pip, and then will install those packages only available with the latter one. This approach has good and bad properties: one of the good properties is that it separates those packages installed through conda from those installed using pip. one of the bad properties is that it's only useful for conda, but not for pip alone. | 15 | 22 |
63,436,496 | 2020-8-16 | https://stackoverflow.com/questions/63436496/is-it-possible-to-have-python-ides-offer-autocompletion-for-dynamically-generate | Are there any tricks I can employ to get IDEs to offer code completion for dynamically generated class attributes? For instance class A: def __init__(self): setattr(self, "a", 5) This code will set the class attribute of A called a to the value of 5. But IDEs do not know about a and therefore you do not get code completion for it. I've read that the __dir__ method can be hooked, but the suggestion made in that answer has not worked for me. Does anybody have any ideas? | I believe you will find the answer here. In short, Pycharm currently does not (and probably in the observable future will not) support dynamic attributes. That's because it analyzes code statically, and can't "see" what attributes a class might have. On the other hand, when you run your code in (say) iPython, once an instance of such class is created, the editor can get the attributes of the specific instance that resides in the memory, and thus show them in autocomplete. In such cases __dir__ helps. I assume that would be the case also with other IDEs. So if you really need the autocompletion feature, you may want to try using Jupyter notebook. Still, you will have to instantiate your variable prior to getting autocomplete. Another possible solution is to exploit that IDE supports .pyi files (Python interface file stub). You can add to your code a little snippet that instantiate the class with dynamic attributes, and writes down a class interface file (.pyi). Then IDE will use it for autocompletion. Obviously, this solution will have a "one run delay", but as in PyCharm you can switch to such file just by clicking on the asterisk near the class name, you can manually update it when having massive changes. | 18 | 15 |
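A rough sketch of the .pyi-stub idea described in the answer; write_stub is a hypothetical helper, and the stub file name must match the module that defines the class (e.g. mymodule.pyi) for the IDE to pick it up.

def write_stub(instance, path):
    # emit a simple class stub from a live instance's attributes
    cls_name = type(instance).__name__
    lines = [f"class {cls_name}:"]
    for attr, value in vars(instance).items():
        lines.append(f"    {attr}: {type(value).__name__}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

a = A()                        # the class from the question, which setattr()s 'a'
write_stub(a, "mymodule.pyi")  # stub contains: class A: / a: int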
63,472,664 | 2020-8-18 | https://stackoverflow.com/questions/63472664/pandas-explode-function-not-working-for-list-of-string-column | To explode list like column to row, we can use pandas explode() function. My pandas' version '0.25.3' The given example worked for me and another answer of Stackoverflow.com works as expected but it doesn't work for my dataset. city nested_city 0 soto ['Soto'] 1 tera-kora ['Daniel'] 2 jan-thiel ['Jan Thiel'] 3 westpunt ['Westpunt'] 4 nieuwpoort ['Nieuwpoort', 'Santa Barbara Plantation'] What I have tried: test_data['nested_city'].explode() and test_data.set_index(['nested_city']).apply(pd.Series.explode).reset_index() Output 0 ['Soto'] 1 ['Daniel'] 2 ['Jan Thiel'] 3 ['Westpunt'] 4 ['Nieuwpoort', 'Santa Barbara Plantation'] Name: neighbors, dtype: object | You need to ensure that your column is of list type to be able to use pandas' explode(). Here is a working solution: from ast import literal_eval test_data['nested_city'] = test_data['nested_city'].apply(literal_eval) #convert to list type test_data['nested_city'].explode() To explode multiple columns at a time, you can do the following: not_list_cols = [col for col in test_data.columns if col not in ['col1', 'col2']] #list of columns you are not exploding (assume col1 and col2 are being exploded) test_data = test_data.set_index(not_list_cols).apply(pd.Series.explode).reset_index() | 10 | 26 |
63,439,648 | 2020-8-16 | https://stackoverflow.com/questions/63439648/why-protobuf-is-smaller-in-memory-than-normal-dictlist-in-python | I have a large structure of primitive types within nested dict/list. The structure is quite complicated and doesn't really matter. If I represent it in python's built-in types (dict/list/float/int/str) it takes 1.1 GB, but if I store it in protobuf and load it to memory it is significantly smaller. ~250 MB total. I'm wondering how can this be. Are the built-in types in python inefficient in comparison to some external library? Edit: The structure is loaded from json file. So no internal references between objects | "Simple" python objects, such as int or float, need much more memory than their C-counterparts used by protobuf. Let's take a list of Python integers as example compared to an array of integers, as for example in an array.array (i.e. array.array('i', ...)). The analysis for array.array is simple: discarding some overhead from the array.arrays-object, only 4 bytes (size of a C-integer) are needed per element. The situation is completely different for a list of integers: the list holds not the integer-objects themselves but pointers to the objects (8 additional bytes for a 64bit executable) even a small non-zero integer needs at least 28 bytes (see import sys; sys.getsizeof(1) returns 28): 8 bytes are needed for reference counting, 8 bytes to hold a pointer to the integer-function table, 8 bytes are needed for the size of the integer value (Python's integers can be much bigger than 2^32), and at least 4 byte to hold the integer value itself. there is also an overhead for memory management of 4.5 bytes. This means there is a whopping cost of 40.5 bytes per Python integer compared to the possible 4 bytes (or 8 bytes if we use long long int, i.e. 64bit integers). A situation is similar for a list with Python floats compared to an array of doubles( i.e. array.array('d',...)), which only needs about 8 bytes per element. But for list we have: the list holds not the float objects themselves but pointers to the objects (8 additional bytes for a 64bit executable) a float object needs 24 bytes (see import sys; sys.getsizeof(1.0) returns 24): 8 bytes are needed for reference counting, 8 bytes to hold a pointer to the float-function table, and 8 bytes to hold the double-value itself. because 24 is a multiple of 8, the overhead for memory management is "only" about 0.5 bytes. Which means 32.5 bytes for a Python float object vs. 8 byte for a C-double. protobuf uses internally the same representation of the data as array.array and thus needs much less memory (about 4-5 times less, as you observe). numpy.array is another example for a data type, which holds raw C-values and thus needs much less memory than lists. If one doesn't need to search in a dictionary, then saving the key-values-pairs in a list will need less memory than in a dictionary, because one doesn't have to maintain a structure for searching (which imposes some memory costs) - this is also another thing that leads to smaller memory footprint of protobuf-data. To answer your other question: There are no built-in modules which are to Python-dict, what array.array are to Python-list, so I use this opportunity to shamelessly plug-in an advertisement for a library of mine: cykhash. Sets and maps from cykhash need less than 25% of Python'S-dict/set memory but are about the same fast. | 7 | 11 |
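A quick way to make the numbers above concrete with sys.getsizeof; the figures are indicative only (this ignores CPython's small-int cache and allocator overhead).

import sys
from array import array

values = list(range(1000, 2000))   # 1000 distinct Python ints
packed = array('i', values)        # raw 4-byte C ints

list_cost = sys.getsizeof(values) + sum(sys.getsizeof(v) for v in values)
array_cost = sys.getsizeof(packed)

print(list_cost // len(values), "bytes per int in a list")        # roughly 36-40
print(array_cost // len(values), "bytes per int in array('i')")   # roughly 4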
63,413,064 | 2020-8-14 | https://stackoverflow.com/questions/63413064/how-to-build-hybrid-model-of-random-forest-and-particle-swarm-optimizer-to-find | I need to find optimal discount for each product (in e.g. A, B, C) so that I can maximize total sales. I have existing Random Forest models for each product that map discount and season to sales. How do I combine these models and feed them to an optimiser to find the optimum discount per product? Reason for model selection: RF: it's able to give better(w.r.t linear models) relation between predictors and response(sales_uplift_norm). PSO: suggested in many white papers (available at researchgate/IEEE), also availability of the package in python here and here. Input data: sample data used to build model at product level. Glance of the data as below: Idea/Steps followed by me: Build RF model per products # pre-processed data products_pre_processed_data = {key:pre_process_data(df, key) for key, df in df_basepack_dict.items()} # rf models products_rf_model = {key:rf_fit(df) for key, df in products_pre_processed_data .items()} Pass the model to optimizer Objective function: maximize sales_uplift_norm (the response variable of RF model) Constraint: total spend(spends of A + B + C <= 20), spends = total_units_sold_of_products * discount_percentage * mrp_of_products lower bound of products(A, B, C): [0.0, 0.0, 0.0] # discount percentage lower bounds upper bound of products(A, B, C): [0.3, 0.4, 0.4] # discount percentage upper bounds sudo/sample code # as I am unable to find a way to pass the product_models into optimizer. from pyswarm import pso def obj(x): model1 = products_rf_model.get('A') model2 = products_rf_model.get('B') model3 = products_rf_model.get('C') return -(model1 + model2 + model3) # -ve sign as to maximize def con(x): x1 = x[0] x2 = x[1] x3 = x[2] return np.sum(units_A*x*mrp_A + units_B*x*mrp_B + units_C* x *spend_C)-20 # spend budget lb = [0.0, 0.0, 0.0] ub = [0.3, 0.4, 0.4] xopt, fopt = pso(obj, lb, ub, f_ieqcons=con) How to use the PSO optimizer (or any other optimizer if I am not following right one) with RF? Adding functions used for model: def pre_process_data(df,product): data = df.copy().reset_index() # print(data) bp = product print("----------product: {}----------".format(bp)) # Pre-processing steps print("pre process df.shape {}".format(df.shape)) #1. Reponse var transformation response = data.sales_uplift_norm # already transformed #2. predictor numeric var transformation numeric_vars = ['discount_percentage'] # may include mrp, depth df_numeric = data[numeric_vars] df_norm = df_numeric.apply(lambda x: scale(x), axis = 0) # center and scale #3. char fields dummification #select category fields cat_cols = data.select_dtypes('category').columns #select string fields str_to_cat_cols = data.drop(['product'], axis = 1).select_dtypes('object').astype('category').columns # combine all categorical fields all_cat_cols = [*cat_cols,*str_to_cat_cols] # print(all_cat_cols) #convert cat to dummies df_dummies = pd.get_dummies(data[all_cat_cols]) #4. 
combine num and char df together df_combined = pd.concat([df_dummies.reset_index(drop=True), df_norm.reset_index(drop=True)], axis=1) df_combined['sales_uplift_norm'] = response df_processed = df_combined.copy() print("post process df.shape {}".format(df_processed.shape)) # print("model fields: {}".format(df_processed.columns)) return(df_processed) def rf_fit(df, random_state = 12): train_features = df.drop('sales_uplift_norm', axis = 1) train_labels = df['sales_uplift_norm'] # Random Forest Regressor rf = RandomForestRegressor(n_estimators = 500, random_state = random_state, bootstrap = True, oob_score=True) # RF model rf_fit = rf.fit(train_features, train_labels) return(rf_fit) | you can find a complete solution below ! The fundamental differences with your approach are the following : Since the Random Forest model takes as input the season feature, optimal discounts must be computed for every season. Inspecting the documentation of pyswarm, the con function yields an output that must comply with con(x) >= 0.0. The correct constraint is therefore 20 - sum(...) and not the other way around. In addition, the units and mrp variable were not given ; I just assumed a value of 1, you might want to change those values. Additional modifications to your original code include : Preprocessing and pipeline wrappers of sklearn in order to simplify the preprocessing steps. Optimal parameters are stored in an output .xlsx file. The maxiter parameter of the PSO has been set to 5 to speed-up debugging, you might want to set its value to another one (default = 100). The code is therefore : import pandas as pd from sklearn.pipeline import Pipeline from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.compose import ColumnTransformer from sklearn.ensemble import RandomForestRegressor from sklearn.base import clone # ====================== RF TRAINING ====================== # Preprocessing def build_sample(season, discount_percentage): return pd.DataFrame({ 'season': [season], 'discount_percentage': [discount_percentage] }) columns_to_encode = ["season"] columns_to_scale = ["discount_percentage"] encoder = OneHotEncoder() scaler = StandardScaler() preproc = ColumnTransformer( transformers=[ ("encoder", Pipeline([("OneHotEncoder", encoder)]), columns_to_encode), ("scaler", Pipeline([("StandardScaler", scaler)]), columns_to_scale) ] ) # Model myRFClassifier = RandomForestRegressor( n_estimators = 500, random_state = 12, bootstrap = True, oob_score = True) pipeline_list = [ ('preproc', preproc), ('clf', myRFClassifier) ] pipe = Pipeline(pipeline_list) # Dataset df_tot = pd.read_excel("so_data.xlsx") df_dict = { product: df_tot[df_tot['product'] == product].drop(columns=['product']) for product in pd.unique(df_tot['product']) } # Fit print("Training ...") pipe_dict = { product: clone(pipe) for product in df_dict.keys() } for product, df in df_dict.items(): X = df.drop(columns=["sales_uplift_norm"]) y = df["sales_uplift_norm"] pipe_dict[product].fit(X,y) # ====================== OPTIMIZATION ====================== from pyswarm import pso # Parameter of PSO maxiter = 5 n_product = len(pipe_dict.keys()) # Constraints budget = 20 units = [1, 1, 1] mrp = [1, 1, 1] lb = [0.0, 0.0, 0.0] ub = [0.3, 0.4, 0.4] # Must always remain >= 0 def con(x): s = 0 for i in range(n_product): s += units[i] * mrp[i] * x[i] return budget - s print("Optimization ...") # Save optimal discounts for every product and every season df_opti = pd.DataFrame(data=None, columns=df_tot.columns) for season in 
pd.unique(df_tot['season']): # Objective function to minimize def obj(x): s = 0 for i, product in enumerate(pipe_dict.keys()): s += pipe_dict[product].predict(build_sample(season, x[i])) return -s # PSO xopt, fopt = pso(obj, lb, ub, f_ieqcons=con, maxiter=maxiter) print("Season: {}\t xopt: {}".format(season, xopt)) # Store result df_opti = pd.concat([ df_opti, pd.DataFrame({ 'product': list(pipe_dict.keys()), 'season': [season] * n_product, 'discount_percentage': xopt, 'sales_uplift_norm': [ pipe_dict[product].predict(build_sample(season, xopt[i]))[0] for i, product in enumerate(pipe_dict.keys()) ] }) ]) # Save result df_opti = df_opti.reset_index().drop(columns=['index']) df_opti.to_excel("so_result.xlsx") print("Summary") print(df_opti) It gives : Training ... Optimization ... Stopping search: maximum iterations reached --> 5 Season: summer xopt: [0.1941521 0.11233673 0.36548761] Stopping search: maximum iterations reached --> 5 Season: winter xopt: [0.18670604 0.37829516 0.21857777] Stopping search: maximum iterations reached --> 5 Season: monsoon xopt: [0.14898102 0.39847885 0.18889792] Summary product season discount_percentage sales_uplift_norm 0 A summer 0.194152 0.175973 1 B summer 0.112337 0.229735 2 C summer 0.365488 0.374510 3 A winter 0.186706 -0.028205 4 B winter 0.378295 0.266675 5 C winter 0.218578 0.146012 6 A monsoon 0.148981 0.199073 7 B monsoon 0.398479 0.307632 8 C monsoon 0.188898 0.210134 | 12 | 4 |
63,472,161 | 2020-8-18 | https://stackoverflow.com/questions/63472161/google-coding-challenge-question-2020-unspecified-words | I got the following problem for the Google Coding Challenge which happened on 16th August 2020. I tried to solve it but couldn't. There are N words in a dictionary such that each word is of fixed length and M consists only of lowercase English letters, that is ('a', 'b', ...,'z') A query word is denoted by Q. The length of query word is M. These words contain lowercase English letters but at some places instead of a letter between 'a', 'b', ...,'z' there is '?'. Refer to the Sample input section to understand this case. A match count of Q, denoted by match_count(Q) is the count of words that are in the dictionary and contain the same English letters(excluding a letter that can be in the position of ?) in the same position as the letters are there in the query word Q. In other words, a word in the dictionary can contain any letters at the position of '?' but the remaining alphabets must match with the query word. You are given a query word Q and you are required to compute match_count. Input Format The first line contains two space-separated integers N and M denoting the number of words in the dictionary and length of each word respectively. The next N lines contain one word each from the dictionary. The next line contains an integer Q denoting the number of query words for which you have to compute match_count. The next Q lines contain one query word each. Output Format For each query word, print match_count for a specific word in a new line. Constraints 1 <= N <= 5X10^4 1 <= M <= 7 1 <= Q <= 10^5 So, I got 30 minutes for this question and I could write the following code which is incorrect and hence didn't give the expected output. def Solve(N, M, Words, Q, Query): output = [] count = 0 for i in range(Q): x = Query[i].split('?') for k in range(N): if x in Words: count += 1 else: pass output.append(count) return output N, M = map(int , input().split()) Words = [] for _ in range(N): Words.append(input()) Q = int(input()) Query = [] for _ in range(Q): Query.append(input()) out = Solve(N, M, Words, Q, Query) for x in out_: print(x) Can somebody help me with some pseudocode or algorithm which can solve this problem, please? | I guess my first try would have been to replace the ? with a . in the query, i.e. change ?at to .at, and then use those as regular expressions and match them against all the words in the dictionary, something as simple as this: import re for q in queries: p = re.compile(q.replace("?", ".")) print(sum(1 for w in words if p.match(w))) However, seeing the input sizes as N up to 5x104 and Q up to 105, this might be too slow, just as any other algorithm comparing all pairs of words and queries. On the other hand, note that M, the number of letters per word, is constant and rather low. So instead, you could create Mx26 sets of words for all letters in all positions and then get the intersection of those sets. from collections import defaultdict from functools import reduce M = 3 words = ["cat", "map", "bat", "man", "pen"] queries = ["?at", "ma?", "?a?", "??n"] sets = defaultdict(set) for word in words: for i, c in enumerate(word): sets[i,c].add(word) all_words = set(words) for q in queries: possible_words = (sets[i,c] for i, c in enumerate(q) if c != "?") w = reduce(set.intersection, possible_words, all_words) print(q, len(w), w) In the worst case (a query that has a non-? 
letter that is common to most or all words in the dictionary) this may still be slow, but should be much faster in filtering down the words than iterating all the words for each query. (Assuming random letters in both words and queries, the set of words for the first letter will contain N/26 words, the intersection for the first two has N/26² words, etc.) This could probably be improved a bit by taking the different cases into account, e.g. (a) if the query does not contain any ?, just check whether it is in the set (!) of words without creating all those intersections; (b) if the query is all-?, just return the set of all words; and (c) sort the possible-words-sets by size and start the intersection with the smallest sets first to reduce the size of temporarily created sets. About time complexity: To be honest, I am not sure what time complexity this algorithm has. With N, Q, and M being the number of words, number of queries, and length of words and queries, respectively, creating the initial sets will have complexity O(N*M). After that, the complexity of the queries obviously depends on the number of non-? in the queries (and thus the number of set intersections to create), and the average size of the sets. For queries with zero, one, or M non-? characters, the query will execute in O(M) (evaluating the situation and then a single set/dict lookup), but for queries with two or more non-?-characters, the first set intersections will have on average complexity O(N/26), which strictly speaking is still O(N). (All following intersections will only have to consider N/26², N/26³ etc. elements and are thus negligible.) I don't know how this compares to The Trie Approach and would be very interested if any of the other answers could elaborate on that. | 29 | 20 |
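A sketch folding the refinements (a)-(c) above into a single helper; it reuses the sets and all_words built in the answer's code and intersects the smallest candidate sets first.

def match_count(q):
    if "?" not in q:                      # (a) no wildcards: plain membership test
        return int(q in all_words)
    fixed = [(i, c) for i, c in enumerate(q) if c != "?"]
    if not fixed:                         # (b) all wildcards: every word matches
        return len(all_words)
    candidates = sorted((sets[i, c] for i, c in fixed), key=len)  # (c) smallest first
    result = candidates[0]
    for s in candidates[1:]:
        result = result & s
        if not result:
            break
    return len(result)

for q in queries:
    print(q, match_count(q))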
63,380,108 | 2020-8-12 | https://stackoverflow.com/questions/63380108/indexing-different-sized-ranges-in-a-2d-numpy-array-using-a-pythonic-vectorized | I have a numpy 2D array, and I would like to select different sized ranges of this array, depending on the column index. Here is the input array a = np.reshape(np.array(range(15)), (5, 3)) example [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11] [12 13 14]] Then, list b = [4,3,1] determines the different range sizes for each column slice, so that we would get the arrays [0 3 6 9] [1 4 7] [2] which we can concatenate and flatten to get the final desired output [0 3 6 9 1 4 7 2] Currently, to perform this task, I am using the following code slices = [] for i in range(a.shape[1]): slices.append(a[:b[i],i]) c = np.concatenate(slices) and, if possible, I want to convert it to a pythonic format. Bonus: The same question but now considering that b determines row slices instead of columns. | We can use broadcasting to generate an appropriate mask and then masking does the job - In [150]: a Out[150]: array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [12, 13, 14]]) In [151]: b Out[151]: [4, 3, 1] In [152]: mask = np.arange(len(a))[:,None] < b In [153]: a.T[mask.T] Out[153]: array([0, 3, 6, 9, 1, 4, 7, 2]) Another way to mask would be - In [156]: a.T[np.greater.outer(b, np.arange(len(a)))] Out[156]: array([0, 3, 6, 9, 1, 4, 7, 2]) Bonus : Slice per row If we are required to slice per row based on chunk sizes, we would need to modify few things - In [51]: a Out[51]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) # slice lengths per row In [52]: b Out[52]: [4, 3, 1] # Usual loop based solution : In [53]: np.concatenate([a[i,:b_i] for i,b_i in enumerate(b)]) Out[53]: array([ 0, 1, 2, 3, 5, 6, 7, 10]) # Vectorized mask based solution : In [54]: a[np.greater.outer(b, np.arange(a.shape[1]))] Out[54]: array([ 0, 1, 2, 3, 5, 6, 7, 10]) | 9 | 6 |
63,370,330 | 2020-8-12 | https://stackoverflow.com/questions/63370330/show-progress-bar-while-uploading-to-s3-using-presigned-url | I'm trying to upload a file in my s3 bucket using a pre-signed URL, it works perfectly and uploads the data to the bucket successfully, however, the files that I upload are very large and I need to be able to show the progress bar. I have tried many solutions available on StackOverflow and other blog posts but nothing seems to be helping. Following is the code snippet that uploads the data to s3 using a pre-signed URL. object_name = 'DataSet.csv' response = create_presigned_post("mybucket_name",object_name) fields = response['fields'] with open(object_name, 'rb') as f: files = {'file': (object_name, f)} http_response = requests.post(response['url'], data=fields, files=files,stream=True) print (http_response.status_code) it returns the 204 status which is for a successful upload. Now, what changes I can make to this code to show the progress bar. P.S I have tried stream=True in requests not working. I have tried iterating over the response using tqdm but it not works in that case also. | The following code would work fine for Python, I found it here import logging import argparse from boto3 import Session import requests logging.basicConfig() logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) class S3MultipartUploadUtil: """ AWS S3 Multipart Upload Uril """ def __init__(self, session: Session): self.session = session self.s3 = session.client('s3') self.upload_id = None self.bucket_name = None self.key = None def start(self, bucket_name: str, key: str): """ Start Multipart Upload :param bucket_name: :param key: :return: """ self.bucket_name = bucket_name self.key = key res = self.s3.create_multipart_upload(Bucket=bucket_name, Key=key) self.upload_id = res['UploadId'] logger.debug(f"Start multipart upload '{self.upload_id}'") def create_presigned_url(self, part_no: int, expire: int=3600) -> str: """ Create pre-signed URL for upload part. :param part_no: :param expire: :return: """ signed_url = self.s3.generate_presigned_url( ClientMethod='upload_part', Params={'Bucket': self.bucket_name, 'Key': self.key, 'UploadId': self.upload_id, 'PartNumber': part_no}, ExpiresIn=expire) logger.debug(f"Create presigned url for upload part '{signed_url}'") return signed_url def complete(self, parts): """ Complete Multipart Uploading. `parts` is list of dictionary below. ``` [ {'ETag': etag, 'PartNumber': 1}, {'ETag': etag, 'PartNumber': 2}, ... ] ``` you can get `ETag` from upload part response header. :param parts: Sent part info. 
:return: """ res = self.s3.complete_multipart_upload( Bucket=self.bucket_name, Key=self.key, MultipartUpload={ 'Parts': parts }, UploadId=self.upload_id ) logger.debug(f"Complete multipart upload '{self.upload_id}'") logger.debug(res) self.upload_id = None self.bucket_name = None self.key = None def main(): parser = argparse.ArgumentParser() parser.add_argument('target_file') parser.add_argument('--bucket', required=True) args = parser.parse_args() target_file = Path(args.target_file) bucket_name = args.bucket key = target_file.name max_size = 5 * 1024 * 1024 file_size = target_file.stat().st_size upload_by = int(file_size / max_size) + 1 session = Session() s3util = S3MultipartUploadUtil(session) s3util.start(bucket_name, key) urls = [] for part in range(1, upload_by + 1): signed_url = s3util.create_presigned_url(part) urls.append(signed_url) parts = [] with target_file.open('rb') as fin: for num, url in enumerate(urls): part = num + 1 file_data = fin.read(max_size) print(f"upload part {part} size={len(file_data)}") res = requests.put(url, data=file_data) print(res) if res.status_code != 200: return etag = res.headers['ETag'] parts.append({'ETag': etag, 'PartNumber': part}) print(parts) s3util.complete(parts) if __name__ == '__main__': main() | 9 | 3 |
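The script above uploads the parts silently, and its main() also needs from pathlib import Path; below is a sketch of the same part-upload loop with a tqdm progress bar (tqdm is an extra dependency, and urls, max_size, file_size, target_file and s3util come from the answer's main()).

from pathlib import Path
from tqdm import tqdm

parts = []
with target_file.open('rb') as fin, \
        tqdm(total=file_size, unit='B', unit_scale=True, desc=target_file.name) as bar:
    for num, url in enumerate(urls, start=1):
        chunk = fin.read(max_size)
        res = requests.put(url, data=chunk)        # one pre-signed PUT per part
        res.raise_for_status()
        parts.append({'ETag': res.headers['ETag'], 'PartNumber': num})
        bar.update(len(chunk))                     # advance the bar by bytes sent
s3util.complete(parts)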
63,457,762 | 2020-8-17 | https://stackoverflow.com/questions/63457762/error-could-not-find-a-version-that-satisfies-the-requirement-pprint-from-r-r | I am trying to install a NLP suite on my macbook pro, which is updated to the most recent software version Catalina 10.15.6. So far, I have installed Anaconda 3.8, created a version 3.7 NLP environment by conda create -n NLP python=3.7, and activated the NLP environment by conda activate NLP. My next step is to install all python packages that are written in the file "requirements.txt" with the following command pip install -r requirements.txt. However, it showcases this message: "ERROR: Could not find a version that satisfies the requirement pprint (from -r requirements.txt (line 67)) (from versions: none) ERROR: No matching distribution found for pprint (from -r requirements.txt (line 67))" I also tried installing the package alone, however, the same error message appears. Any advice would be appreciated! Please let me know if any additional information I can provide. | pprint is part of the standard library, therefore cannot be present in requirements.txt. If one of your requirements is stated to require pprint you'll get an error. To install without dependencies use the --no-deps command for pip. However, this does not guarantee that the installation actually worked as you are likely missing out on other packages. So a better option is installing each requirement one by one until you find the one that needs it and install its other dependencies and install that package with no-deps. An alternative is to use https://pypi.org/project/pipdeptree/ to inspect the dependency tree. If there are many packages and there is a version freeze, try dropping the versions. It is a bit of trial and error detective work, so one can be smart about it: it is likely a less used dependency that is the culprit. | 10 | 10 |
63,475,529 | 2020-8-18 | https://stackoverflow.com/questions/63475529/what-is-r-called | I am seeing this for the first time. I wanted to know what the !r in the last line of the code is called so that I can search for it. I found this piece of code on: https://adamj.eu/tech/2020/08/10/a-guide-to-python-lambda-functions/ class Puppy: def __init__(self, name, cuteness): self.name = name self.cuteness = cuteness def __repr__(self): return f"Puppy({self.name!r}, {self.cuteness!r})" | It's a format string conversion flag that tells the formatter to call repr on the object before formatting the string. Three conversion flags are currently supported: '!s', which calls str() on the value; '!r', which calls repr(); and '!a', which calls ascii(). | 7 | 13 |
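A short demonstration of the three conversion flags (the strings are arbitrary examples):

name = "Ellis"
print(f"{name}")      # Ellis          (plain str())
print(f"{name!s}")    # Ellis          (explicit str())
print(f"{name!r}")    # 'Ellis'        (repr() adds the quotes)
print(f"{'café'!a}")  # 'caf\xe9'      (ascii() escapes non-ASCII)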
63,471,726 | 2020-8-18 | https://stackoverflow.com/questions/63471726/get-full-request-url-from-inside-apiview-in-django-rest-framework | Is there a method, or an attribute on the request object, that I can access to get the URL exactly as the client requested it, with the query params included? I've checked request.build_absolute_uri after looking at this question, but it just returns the URL without the query params. I need the URL because my API response returns the URL for the "next page" of results. I could build it from the query_params attributes, but this view takes a lot of query params and some exclude others, so having access to the request URL would save me a lot of pain. | To get the full path, including the query string, use request.get_full_path() | 9 | 15 |
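A hedged sketch of using it for a "next page" link inside an APIView; ProductList and the URL in the comment are illustrative, not from the original question.

from rest_framework.response import Response
from rest_framework.views import APIView

class ProductList(APIView):
    def get(self, request):
        current = request.get_full_path()               # e.g. '/api/products/?page=2&size=mega'
        absolute = request.build_absolute_uri(current)  # adds scheme and host if needed
        return Response({"next": absolute})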
63,473,901 | 2020-8-18 | https://stackoverflow.com/questions/63473901/python-dynamically-create-class-while-providing-arguments-to-init-subclass | How can I dynamically create a subclass of my class and provide arguments to its __init_subclass__() method? Example class: class MyClass: def __init_subclass__(cls, my_name): print(f"Subclass created and my name is {my_name}") Normally I'd implement my subclass as such: class MySubclass(MyClass, my_name="Ellis"): pass But how would I pass in my_name when dynamically creating a subclass of MyClass using a metaclass? Normally I could use type() but it doesn't have the option of providing my_name. MyDynamicSubclass = type("MyDynamicSubclass", (MyClass,), {}) | The basic documentation for type does not mention that it accepts an unlimited number of keyword-only arguments, which you would supply through the keywords in a class statement. The only place this is hinted in is in the Data Model in the section Creating the class object: Once the class namespace has been populated by executing the class body, the class object is created by calling metaclass(name, bases, namespace, **kwds) (the additional keywords passed here are the same as those passed to __prepare__). Normally, you would not use this feature with type exactly because of __init_subclass__: The default implementation object.__init_subclass__ does nothing, but raises an error if it is called with any arguments. Since you have overriden the default implementation, you can create your dynamic class as MyDynamicSubclass = type("MyDynamicSubclass", (MyClass,), {}, my_name="Ellis") | 9 | 12 |
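Putting the two pieces together as a runnable sketch; the **kwargs pass-through is an addition so the hook still cooperates with other base classes, not something the question requires.

class MyClass:
    def __init_subclass__(cls, my_name, **kwargs):
        super().__init_subclass__(**kwargs)
        print(f"Subclass created and my name is {my_name}")

MyDynamicSubclass = type("MyDynamicSubclass", (MyClass,), {}, my_name="Ellis")
# prints: Subclass created and my name is Ellis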
63,464,944 | 2020-8-18 | https://stackoverflow.com/questions/63464944/keras-loss-and-metrics-values-do-not-match-with-same-function-in-each | I am using keras with a custom loss function like below: def custom_fn(y_true, y_pred): # changing y_true, y_pred values systematically return mean_absolute_percentage_error(y_true, y_pred) Then I am calling model.compile(loss=custom_fn) and model.fit(X, y,..validation_data=(X_val, y_val)..) Keras is then saving loss and val_loss in model history. As a sanity check, when the model finishes training, I am using model.predict(X_val) so I can calculate validation loss manually with my custom_fn using the trained model. I am saving the model with the best epoch using this callback: callbacks.append(ModelCheckpoint(path, save_best_only=True, monitor='val_loss', mode='min')) so after calculating this, the validation loss should match keras' val_loss value of the best epoch. But this is not happening. As another attempt to figure this issue out, I am also doing this: model.compile(loss=custom_fn, metrics=[custom_fn]) And to my surprise, val_loss and val_custom_fn do not match (neither loss or loss_custom_fn for that matter). This is really strange, my custom_fn is essentially keras' built in mape with the y_true and y_pred slightly manipulated. what is going on here? PS: the layers I am using are LSTM layers and a final Dense layer. But I think this information is not relevant to the problem. I am also using regularisation as hyperparameter but not dropout. Update Even removing custom_fn and using keras' built in mape as a loss function and metric like so: model.compile(loss='mape', metrics=['mape']) and for simplicity, removing ModelCheckpoint callback is having the same effect; val_loss and val_mape for each epoch are not equivalent. This is extremely strange to me. I am either missing something or there is a bug in Keras code..the former might be more realistic. | This blog post suggests that keras adds any regularisation used in the training when calculating the validation loss. And obviously, when calculating the metric of choice no regularisation is applied. This is why it occurs with any loss function of choice as stated in the question. This is something I could not find any documentation on from Keras. However, it seems to hold up since when I remove all regularisation hyperparameters, the val_loss and val_custom_fn match exactly in each epoch. An easy workaround is to either use the custom_fn as a metric and save the best model based on the metric (val_custom_fn) than on the val_loss. Or else Loop through each epoch manually and calculate the correct val_loss manually after training each epoch. The latter seems to make more sense since there is no reason to include custom_fn both as a metric and as a loss function. If anyone can find any evidence of this in the Keras documentation that would be helpful. | 7 | 5 |
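A sketch of the first workaround (checkpointing on the regularisation-free metric instead of val_loss); with tf.keras the metric is normally reported under the function's name, so 'val_custom_fn' is an assumption to verify against the keys in history.history. model, custom_fn, path and the data come from the question.

from tensorflow.keras.callbacks import ModelCheckpoint

model.compile(loss=custom_fn, metrics=[custom_fn])
checkpoint = ModelCheckpoint(path, save_best_only=True,
                             monitor='val_custom_fn',  # assumed metric name
                             mode='min')
history = model.fit(X, y, validation_data=(X_val, y_val), callbacks=[checkpoint])
print(history.history.keys())   # confirm the exact metric names to monitor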
63,463,514 | 2020-8-18 | https://stackoverflow.com/questions/63463514/django-how-to-check-if-django-is-hitting-database-for-certain-query | I want to optimize the django app and for this I would like to know How can I check if my query is hitting database or I am getting result/return value from cached version? For example: products = Products.objects.filter(product_name__icontains="natural") if not products.exist(): return Response(...) total_products = products.count() first_product = product.first() I like to execute this in shell and want to check which line hits the database and which one just return result from cached version so I can write optimized queries in my view. I know about django-toolbar but I couldn't find if it supports things like this(does certain line hit database or result is from cached version). | Check the lenth of connection.queries in this way, from django.conf import settings settings.DEBUG = True from django.db import connection print(len(connection.queries)) # do something with the database products = Products.objects.filter(product_name__icontains="natural") print(len(connection.queries)) # and execute the print statement again and again total_products = products.count() print(len(connection.queries)) first_product = product.first() print(len(connection.queries)) Reference: Get SQL query count during a Django shell session | 10 | 24 |
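An alternative sketch that avoids flipping settings.DEBUG by hand: CaptureQueriesContext from django.test.utils records every query issued inside the block (Products is the model from the question).

from django.db import connection
from django.test.utils import CaptureQueriesContext

with CaptureQueriesContext(connection) as ctx:
    products = Products.objects.filter(product_name__icontains="natural")
    products.exists()    # hits the database
    products.count()     # hits it again
    products.first()     # and again

print(len(ctx.captured_queries))   # number of queries run inside the block
for query in ctx.captured_queries:
    print(query['sql'])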
63,455,683 | 2020-8-17 | https://stackoverflow.com/questions/63455683/when-is-asyncios-default-scheduler-fair | It's my understanding that asyncio.gather is intended to run its arguments concurrently and also that when a coroutine executes an await expression it provides an opportunity for the event loop to schedule other tasks. With that in mind, I was surprised to see that the following snippet ignores one of the inputs to asyncio.gather. import asyncio async def aprint(s): print(s) async def forever(s): while True: await aprint(s) async def main(): await asyncio.gather(forever('a'), forever('b')) asyncio.run(main()) As I understand it, the following things happen: asyncio.run(main()) does any necessary global initialization of the event loop and schedules main() for execution. main() schedules asyncio.gather(...) for execution and waits for its result asyncio.gather schedules the executions of forever('a') and forever('b') whichever of the those executes first, they immediately await aprint() and give the scheduler the opportunity to run another coroutine if desired (e.g. if we start with 'a' then we have a chance to start trying to evaluate 'b', which should already be scheduled for execution). In the output we'll see a stream of lines each containing 'a' or 'b', and the scheduler ought to be fair enough that we see at least one of each over a long enough period of time. In practice this isn't what I observe. Instead, the entire program is equivalent to while True: print('a'). What I found extremely interesting is that even minor changes to the code seem to reintroduce fairness. E.g., if we instead have the following code then we get a roughly equal mix of 'a' and 'b' in the output. async def forever(s): while True: await aprint(s) await asyncio.sleep(1.) Verifying that it doesn't seem to have anything to do with how long we spend in vs out of the infinite loop I found that the following change also provides fairness. async def forever(s): while True: await aprint(s) await asyncio.sleep(0.) Does anyone know why this unfairness might happen and how to avoid it? I suppose when in doubt I could proactively add an empty sleep statement everywhere and hope that suffices, but it's incredibly non-obvious to me why the original code doesn't behave as expected. In case it matters since asyncio seems to have gone through quite a few API changes, I'm using a vanilla installation of Python 3.8.4 on an Ubuntu box. | whichever of the those executes first, they immediately await aprint() and give the scheduler the opportunity to run another coroutine if desired This part is a common misconception. Python's await doesn't mean "yield control to the event loop", it means "start executing the awaitable, allowing it to suspend us along with it". So yes, if the awaited object chooses to suspend, the current coroutine will suspend as well, and so will the coroutine that awaits it and so on, all the way to the event loop. But if the awaited object doesn't choose to suspend, as is the case with aprint, neither will the coroutine that awaits it. This is occasionally a source of bugs, as seen here or here. Does anyone know why this unfairness might happen and how to avoid it? Fortunately this effect is most pronounced in toy examples that don't really communicate with the outside world. And although you can fix them by adding await asyncio.sleep(0) to strategic places (which is even documented to force a context switch), you probably shouldn't do that in production code. 
A real program will depend on input from the outside world, be it data coming from the network, from a local database, or from a work queue populated by another thread or process. Actual data will rarely arrive so fast to starve the rest of the program, and if it does, the starvation will likely be temporary because the program will eventually suspend due to backpressure from its output side. In the rare possibility that the program receives data from one source faster than it can process it, but still needs to observe data coming from another source, you could have a starvation issue, but that can be fixed with forced context switches if it is ever shown to occur. (I haven't heard of anyone encountering it in production.) Aside from bugs mentioned above, what happens much more often is that a coroutine invokes CPU-heavy or legacy blocking code, and that ends up hogging the event loop. Such situations should be handled by passing the CPU/blocking part to run_in_executor. | 8 | 9 |
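A small sketch of the run_in_executor advice above: the blocking call is pushed to a thread pool, so awaiting it genuinely suspends and both tasks interleave (time.sleep stands in for CPU-heavy or legacy blocking work).

import asyncio
import time

def blocking_print(s):
    time.sleep(0.1)
    print(s)

async def forever(s):
    loop = asyncio.get_running_loop()
    while True:
        await loop.run_in_executor(None, blocking_print, s)

async def main():
    await asyncio.gather(forever('a'), forever('b'))

asyncio.run(main())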
63,387,253 | 2020-8-13 | https://stackoverflow.com/questions/63387253/pyspark-setting-executors-cores-and-memory-local-machine | So I looked at a bunch of posts on Pyspark, Jupyter and setting memory/cores/executors (and the associated memory). But I appear to be stuck - Question 1: I dont see my machine utilizing either the cores or the memory. Why? Can I do some adjustments to the excutors/cores/memory to optimize speed of reading the file? Question 2: Also is there any way for me to see a progress bar showing how much of the file ahs been imported (spark-monitor doesnt seem to do it). I am importing a 33.5gb file into pyspark. Machine has 112 gb or RAM 8 Cores/16 virtual cores. from pyspark.sql import SparkSession spark = SparkSession \ .builder \ .appName("Summaries") \ .config("spark.some.config.option", "some-value") \ .getOrCreate() conf = spark.sparkContext._conf.setAll([('spark.executor.memory', '4g'), ('spark.app.name', 'Spark Updated Conf'), ('spark.driver.cores', '4'), ('spark.executor.cores', '16'), ('spark.driver.memory','90g')]) spark.sparkContext.stop() spark = SparkSession.builder.config(conf=conf).getOrCreate() df = spark.read.json("../Data/inasnelylargefile.json.gz") I assume that the pyspark is doing its magic even while reading a file (so I should see heavy core/memory utilization). But I am not seeing it.Help! Update: Tested with a smaller zip file (89 MB) Pyspark take 72 seconds Pandas takes 10.6 seconds Code used : start = time.time() df = spark.read.json("../Data/small.json.gz") end = time.time() print(end - start) start = time.time() df = pa.read_json('../Data/small.json.gz',compression='gzip', lines = True) end = time.time() print(end - start) | spark.sparkContext.stop() spark = SparkSession.builder.config(conf=conf).getOrCreate() df = spark.read.json("../Data/inasnelylargefile.json.gz") Add this: df.show() ##OR df.persist() The comparison you are doing is not apples to apples, spark performs lazy evaluation, meaning if you don't call an action over your operation, it will do nothing but just compile and keep the DAG ready for you. In Spark, there are two concepts, Transformation: Evaluated lazily Actions: (like collect(), take(), show(),persist()) evaluated instantly. In your case, read() is just a transformation, adding an action should trigger the computation. More about actions vs transformation: https://training.databricks.com/visualapi.pdf | 12 | 2 |
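To see where the time actually goes, time the read and an action separately (building on the snippet from the question; for JSON without an explicit schema, part of the read cost is schema inference).

import time

start = time.time()
df = spark.read.json("../Data/inasnelylargefile.json.gz")   # builds the plan (+ schema inference)
print("read():", time.time() - start, "seconds")

start = time.time()
row_count = df.count()                                       # action: triggers the real job
print("count():", time.time() - start, "seconds for", row_count, "rows")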
63,446,690 | 2020-8-17 | https://stackoverflow.com/questions/63446690/how-to-pack-a-python-to-exe-while-keeping-py-source-code-editable | I am creating a python script that should modify itself and be portable. I can achieve each one of those goals separately, but not together. I use cx_freeze or pyinstaller to pack my .py to exe, so it's portable; but then I have a lot of .pyc compiled files and I can't edit my .py file from the software itself. Is there a way to keep a script portable and lightweight (so a 70mb portable python environment is not an option) but still editable? The idea is to have a sort of exe "interpreter" like python.exe but with all the libraries linked, as pyinstaller allows, that runs the .py file, so the .py script can edit itself or be edited by other scripts and still be executed with the interpreter. | First define your main script (cannot be changed) main_script.py. In a subfolder (e.g. named data) create patch_script.py main_script.py: import sys sys.path.append('./data') import patch_script inside the subfolder: data\patch_script.py: print('This is the original file') In the root folder create a spec file e.g. by running pyinstaller main_script.py. Inside the spec file, add the patch script as a data resource: ... datas=[('./data/patch_script.py', 'data' ) ], ... Run pyinstaller main_sript.spec. Execute the exe file, it should print This is the original file Edit the patch script to e.g. say: print('This is the patched file') Rerun the exe file, it should print This is the patched file Note: As this is a PoC, this works but is prone to security issues, as the python file inside the data directory can be used for injection of arbitrary code (which you don't have any control of). You might want to consider using proper packages and update scripts as used by PIP etc. | 10 | 4 |
63,376,024 | 2020-8-12 | https://stackoverflow.com/questions/63376024/vs-code-intellisense-not-working-with-conda-python-environment | Intellisense: not working with conda (above), working fine when normal Python (below) As shown above, Intellisense does not work in VS Code when Conda Environment is set as Python interpreter, it is just keeps “Loading…”. When normal Python interpreter is set (that comes when installing Python extension), Intellisense is working fine. There are no problems to run or debug files with both environment, only issues is Intellisense in Conda Environment. I have tried at least following things without any success. Restart VS Code several times Uninstalled and re-installed Anaconda Extension Pack (which installs Python extension) Run Python “Build Workspace Symbols” as suggested in How to enable intellisense for python in Visual Studio Code with anaconda3? I also tried command conda init powershell Opening VS Code from Anaconda Prompt as suggested in vscode IntelliSense / code completion doesn't work when I am not in base conda environment System info: Version: 1.47.3 (user setup) Commit: 91899dcef7b8110878ea59626991a18c8a6a1b3e Date: 2020-07-23T13:12:49.994Z Electron: 7.3.2 Chrome: 78.0.3904.130 Node.js: 12.8.1 V8: 7.8.279.23-electron.0 OS: Windows_NT x64 10.0.16299 | I found a similar problem, they solve it through explicitly set the python.pythonPath, you can refer to this page. In your problem, only when selecting the conda interpreter the Intellisense not work as the Intellisense was provided by the Language Server, Could you try these? Select a different Language Server, The Language Server includes: 'Jedi'(build-in Python extension ), 'Microsoft', 'Pylance'(need install Pylance extension). downgrade or upgrade the 'Python' extension. If it still not work, you can try these to find more information which will be helpful to solve the problem: Look in the OUTPUT panel, select the 'Python Language Server' channel to check whether the Language Server works well. Open Help -> Toggle Developer Tools select the Console panel to take a check. | 12 | 9 |
63,442,415 | 2020-8-16 | https://stackoverflow.com/questions/63442415/changing-font-size-of-all-qlabel-objects-pyqt5 | I had written a gui using PyQt5 and recently I wanted to increase the font size of all my QLabels to a particular size. I could go through the entire code and individually and change the qfont. But that is not efficient and I thought I could just override the class and set all QLabel font sizes to the desired size. However, I need to understand the class written in python so I can figure out how to override it. But I did not find any python documentation that shows what the code looks like for QLabel. There is just documentation for c++. Hence, I wanted to know where I can get the python code for all of PyQt5 if that exists? If not, how can I change the font size of all QLabels used in my code? | While the provided answers should have already given you enough suggestions, I'd like to add some insight. Are there python sources for Qt? First of all, you cannot find "the class written in python", because (luckily) there's none. PyQt is a binding: it is an interface to the actual Qt library, which is written in C++. As you might already know, while Python is pretty fast on nowadays computers, it's not that fast, so using a binding is a very good compromise: it allows the simple syntax Python provides, and gives all speed provided by C++ compiled libraries under the hood. You can find the source code for Qt widgets here (official mirror), or here. How to override the default font? Well, this depends on how you're going to manage your project. Generally speaking, you can set the default font [size] for a specific widget, for its child widgets, for the top level window or even for the whole application. And there are at least two ways to do it. use setFont(): it sets the default font for the target; you can get the current default font using something.font(), then use font.setPointSize() (or setPointSizeF() for float values, if the font allows it) and then call setFont(font) on the target. use font[-*] in the target setStyleSheet(); Target? The target might be the widget itself, one of its parents or even the QApplication.instance(). You can use both setFont() or setStyleSheet() on any of them: font = self.font() font.setPointSize(24) # set the font for the widget: self.pushButton.setFont(someFont) # set the font for the top level window (and any of its children): self.window().setFont(someFont) # set the font for *any* widget created in this QApplication: QApplication.instance().setFont(someFont) # the same as... self.pushButton.setStyleSheet(''' font-size: 24px; ''') # etc... Also, consider setting the Qt.AA_UseStyleSheetPropagationInWidgetStyles attribute for the application instance. Setting and inheritance By default, Qt uses font propagation (as much as palette propagation) for both setFont and setStyleSheet, but whenever a style sheet is set, it takes precedence, even if it's set on any of the parent widgets (up to the top level window OR the QApplication instance). 
Whenever stylesheets are applied, there are various possibilities, based on CSS Selectors: 'font-size: 24px;': no selector, the current widget and any of its child will use the specified font size; 'QClass { font-size: 24px; }': classes and subclasses selector, any widget (including the current instance) and its children of the same class/subclass will use the specified font size: 'QClass[property="value"] {...}': property selector, as the above, but only if the property matches the value; note that values are always quoted, and bool values are always lower case; '.QClass {...}': classes selector, but not subclasses: if you're using a subclass of QLabel and the stylesheet is set for .QLabel, that stylesheet won't be applied; 'QClass#objectName {...}': apply only for widgets for which objectName() matches; 'QParentClass QClass {...}': apply for widget of class QClass that are children of QParentClass 'QParentClass > QClass {...}': apply for widget of class QClass that are direct children of QParentClass Note that both setFont and setStyleSheet support propagation, but setStyle only works on children when set to the QApplication instance: if you use widget.setStyle() it won't have effect on any of the widget's children. Finally, remember: whenever a widget gets reparented, it receives the font, palette and stylesheet of its parent, in "cascading" mode (the closest parent has precedence); stylesheets have precedence on both palette and font, whenever any of the related properties are set, and palette/font properties are not compatible with stylesheets (or, at least, they behave in unexpected ways); | 8 | 15 |
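A minimal runnable sketch of the application-wide stylesheet route: every QLabel (and QLabel subclass) picks up the 24px font through propagation; the 24px value and the two labels are just examples.

import sys
from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget

app = QApplication(sys.argv)
app.setStyleSheet("QLabel { font-size: 24px; }")   # applies to all QLabels in the app

window = QWidget()
layout = QVBoxLayout(window)
layout.addWidget(QLabel("first label"))
layout.addWidget(QLabel("second label"))
window.show()
sys.exit(app.exec_())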
63,419,097 | 2020-8-14 | https://stackoverflow.com/questions/63419097/prefect-how-to-avoid-rerunning-a-task | In Prefect, suppose I have some pipeline which runs f(date) for every date in a list, and saves it to a file. This is a pretty common ETL operation. In airflow, if I run this once, it will backfill for all historical dates. If I run it again, it will know that the task has been run, and only run any new tasks that have appeared (ie latest date). In Prefect, to my knowledge, it will run the entire pipeline every day, even if 99% of the tasks were completed the day before. What are some solutions to dealing with this, without switching to Prefect Cloud? Do you just do something like make every task cache it's completion in redis before exiting? | Prefect has many first-class ways of handling caching, depending on how much control you want. For every task, you can specify whether results should be cached, how long they should be cached, and how the cache should be invalidated (age, different inputs to the task, flow parameter values, etc.). The simplest way to cache a task is to use targets, which lets you specify that the task has a templatable side effect (usually a file in local or Cloud storage, but could be e.g. a database entry, redis key, or anything else). Before the task is run, it checks if the side effect exists and if it does, skips the run. For example, this task will write its result to a local file automatically templated with the task name and the current date: @task(result=LocalResult(), target="{task_name}-{today}") def get_data(): return [1, 2, 3, 4, 5] As long as a matching file exists, the task won't re-run. Because {today} is part of the target name, that will implicitly cache the task's value for one day. You could also use a parameter in the template, like the backfill date, to replicate Airflow's behavior. For more control, you can use Prefect's full cache mechanism by setting cache_for, cache_validator, and cache_key on any task. If set, the task will finish in a Cached state instead of a Success state. When paired with a proper orchestration backend like Prefect Server or Prefect Cloud, the Cached state can be queried by future runs of the same task (or any task with the same cache_key). That future task will return the Cached state as its own result. | 10 | 14 |
63,399,459 | 2020-8-13 | https://stackoverflow.com/questions/63399459/how-to-save-the-browser-sessions-in-selenium | I am using Selenium to log in to an account. After logging in, I would like to save the session and access it again the next time I run the Python script so I don't have to log in again. Basically, I want the Chrome driver to work like the real Google Chrome, where all the cookies/sessions are saved. This way I don't have to log in to the website on every run.

```python
browser.get("https://www.website.com")
browser.find_element_by_xpath("//*[@id='identifierId']").send_keys(email)
browser.find_element_by_xpath("//*[@id='identifierNext']").click()
time.sleep(3)
browser.find_element_by_xpath("//*[@id='password']/div[1]/div/div[1]/input").send_keys(password)
browser.find_element_by_xpath("//*[@id='passwordNext']").click()
time.sleep(5)
```

| This is the solution I used:

```python
# I am giving myself enough time to manually log in to the website and then printing the cookies
time.sleep(60)
print(driver.get_cookies())

# Then I am using add_cookie() to add the cookie/s that I got from get_cookies()
driver.add_cookie({'domain': ''})
```

This may not be the best way to implement it, but it does what I was looking for. | 13 | 4
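A fuller sketch of the accepted approach above: persist the cookies to disk with pickle on the first run and restore them on later runs. This is not the original poster's code; the file name and URL are illustrative, and the site must already be loaded before add_cookie() will accept cookies for its domain:

```python
import os
import pickle
from selenium import webdriver

COOKIE_FILE = "cookies.pkl"  # illustrative path
driver = webdriver.Chrome()
driver.get("https://www.website.com")

if os.path.exists(COOKIE_FILE):
    # Restore the previous session's cookies, then reload the page.
    with open(COOKIE_FILE, "rb") as f:
        for cookie in pickle.load(f):
            driver.add_cookie(cookie)
    driver.refresh()
else:
    # First run: log in (manually or via the script in the question), then save the cookies.
    input("Log in in the browser window, then press Enter...")
    with open(COOKIE_FILE, "wb") as f:
        pickle.dump(driver.get_cookies(), f)
```

An alternative that behaves even more like a regular Chrome profile is to pass a persistent `user-data-dir` argument through ChromeOptions, so Chrome itself keeps the session on disk between runs.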
63,436,120 | 2020-8-16 | https://stackoverflow.com/questions/63436120/django-new-version-3-1-the-settings-file-have-some-changes | In the new Django version 3.1 the settings file has some changes, and I came to ask how I should proceed to set up my static files. The way I usually did it no longer works.

Last versions:

```python
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
```

Version 3.1:

```python
from pathlib import Path
BASE_DIR = Path(__file__).resolve(strict=True).parent.parent
```

I usually set my static files like this:

```python
STATIC_URL = '/static/'
MEDIA_URL = '/media/'

STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'static')
]

STATIC_ROOT = os.path.join(BASE_DIR, 'static_root')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media_root')
```

If I insert the import os it will work, but is that the right practice? What is the best practice to set this? Thank you! | This change makes it a lot easier for you to define your STATIC and MEDIA variables. You don't even need to import os for this purpose; all you need to do is add the following to your settings.py:

```python
BASE_DIR = Path(__file__).resolve(strict=True).parent.parent  # the root directory of your project

STATIC_ROOT = BASE_DIR / 'static'  # equal to os.path.join(BASE_DIR, 'static/')
STATIC_URL = '/static/'

MEDIA_ROOT = BASE_DIR / 'media'  # equal to os.path.join(BASE_DIR, 'media/')
MEDIA_URL = '/media/'
```

| 9 | 10
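The question also sets STATICFILES_DIRS, which the accepted answer above doesn't cover; a sketch of the same settings rewritten with pathlib (the directory names simply mirror the question's):

```python
from pathlib import Path

BASE_DIR = Path(__file__).resolve(strict=True).parent.parent

STATIC_URL = '/static/'
MEDIA_URL = '/media/'

# Path objects can be used anywhere a string path was used before.
STATICFILES_DIRS = [BASE_DIR / 'static']
STATIC_ROOT = BASE_DIR / 'static_root'
MEDIA_ROOT = BASE_DIR / 'media_root'
```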
63,435,483 | 2020-8-16 | https://stackoverflow.com/questions/63435483/tensorflow-in-python-3-9-installation-error-failure-to-install-tensorflow-win | While installing TensorFlow on Windows (with Python 3.9 installed) using the following command:

```
pip install tensorflow
```

the following error occurred, together with a warning:

```
WARNING: Failed to write executable - trying to use .deleteme logic
ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: 'c:\\python39\\Scripts\\pyrsa-decrypt.exe' -> 'c:\\python39\\Scripts\\pyrsa-decrypt.exe.deleteme'
```

How can this be resolved? | Run the same command using --user:

```
pip install --user package_name
```

or you can try restarting the terminal and running it as admin. | 16 | 54
63,416,894 | 2020-8-14 | https://stackoverflow.com/questions/63416894/correlation-values-in-pairplot | Is there a way to show pair-correlation values with seaborn.pairplot(), as in the example below (created with ggpairs() in R)? I can make the plots using the attached code, but cannot add the correlations. Thanks

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

iris = sns.load_dataset('iris')
g = sns.pairplot(iris, kind='scatter', diag_kind='kde')

# remove upper triangle plots
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
    g.axes[i, j].set_visible(False)
plt.show()
```

| If you use PairGrid instead of pairplot, then you can pass a custom function that calculates the correlation coefficient and displays it on the graph:

```python
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

def reg_coef(x, y, label=None, color=None, **kwargs):
    ax = plt.gca()
    r, p = pearsonr(x, y)
    ax.annotate('r = {:.2f}'.format(r), xy=(0.5, 0.5), xycoords='axes fraction', ha='center')
    ax.set_axis_off()

iris = sns.load_dataset("iris")
g = sns.PairGrid(iris)
g.map_diag(sns.distplot)
g.map_lower(sns.regplot)
g.map_upper(reg_coef)
```

| 9 | 15
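If you'd rather keep the question's pairplot() call, here is a hedged sketch that annotates the upper-triangle axes directly; it assumes the DataFrame's numeric column order matches the pairplot grid, which holds for the iris example:

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

iris = sns.load_dataset('iris')
g = sns.pairplot(iris, kind='scatter', diag_kind='kde')

corr = iris.drop(columns='species').corr()  # correlations of the numeric columns
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
    ax = g.axes[i, j]
    ax.clear()  # drop the scatter that pairplot drew in this cell
    ax.annotate('r = {:.2f}'.format(corr.iloc[i, j]),
                xy=(0.5, 0.5), xycoords='axes fraction', ha='center', va='center')
    ax.set_axis_off()
plt.show()
```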
63,416,546 | 2020-8-14 | https://stackoverflow.com/questions/63416546/twine-is-defaulting-long-description-content-type-to-text-x-rst | Here is my setup:

```python
setup(
    name="...",
    version="...",
    description=...,
    long_description_content_type="text/markdown",
    long_description=README,
    author="...",
    classifiers=[...],
    packages=["..."],
    include_package_data=True,
)
```

I used the following command to package my project:

```
python setup.py sdist bdist_wheel
```

but when I run `twine check dist/*` I get the following error:

```
Checking dist\Futshane_TBG-1.0.0-py3-none-any.whl: FAILED
`long_description` has syntax errors in markup and would not be rendered on PyPI.
line 9: Error: Unexpected indentation.
warning: `long_description_content_type` missing. defaulting to `text/x-rst`.
Checking dist\Futshane_TBG-1.0.0.tar.gz: FAILED
`long_description` has syntax errors in markup and would not be rendered on PyPI.
line 9: Error: Unexpected indentation.
warning: `long_description_content_type` missing. defaulting to `text/x-rst`.
```

Why is it failing to identify the type provided, when I've obviously provided one? | I attempted to switch the order of the "long_description_content_type" and "long_description" arguments, and instead of assigning the description argument from a variable, I assigned the description string directly. Doing so resolved my issue:

```python
setup(
    name="Futshane_TBG",
    version="1.0.0",
    description="""
        The description of the package
    """,
    long_description_content_type="text/markdown",
    long_description=README,
    url="https://github.com/ElLoko233/Text-Based-Game-Package",
    author="Lelethu Futshane",
    classifiers=["License :: OSI Approved :: MIT License",
                 "Programming Language :: Python :: 3",
                 "Programming Language :: Python :: 3.8"],
    packages=["TBG"],
    include_package_data=True,
)
```

| 9 | 12
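For context, a common pattern for populating the README variable used in the question; this is an assumption rather than the poster's actual file (the README file name and package name are placeholders), and dist/ has to be rebuilt after any setup() change before twine check will see it:

```python
# setup.py: a minimal sketch, not the poster's actual configuration
import pathlib
from setuptools import setup

README = (pathlib.Path(__file__).parent / "README.md").read_text()  # assumed file name

setup(
    name="example-package",  # placeholder
    version="0.0.1",
    long_description=README,
    long_description_content_type="text/markdown",
)
```

followed by rebuilding and re-checking:

```
python setup.py sdist bdist_wheel
twine check dist/*
```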