question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
64,520,995 | 2020-10-25 | https://stackoverflow.com/questions/64520995/odoo-14-add-a-section-functionality-in-tree-view | I need add a section functionality like sales > quotation view has, in one of my tree views. . Code of my view is something like this: <record id="view_qualification_form_inh" model="ir.ui.view"> <field name="name">hr.applicant.form</field> <field name="model">hr.applicant</field> <field name="inherit_id" ref="hr_recruitment.hr_applicant_view_form" /> <field name="arch" type="xml"> <xpath expr="//field[@name = 'description']" position="after"> <notebook> <page string="Qualification"> <field name="qualification_lines"> <tree editable="bottom"> <control> <create name="add_line_control" string="Add a line"/> <create name="add_section_control" string="Add a section" context="{'default_display_type': 'line_section'}"/> </control> <field name="qualification_type_id"/> <field name="qualification_type_line_id" domain="[('qualification_type_id','=',qualification_type_id)]"/> <field name="score" /> </tree> </field> <group class="oe_subtotal_footer"> <field name="avg_score" class="oe_subtotal_footer_separator"/> </group> </page> </notebook> </xpath> </field> </record> Code of my model: class HrApplicant(models.Model): _inherit = 'hr.applicant' qualification_lines = fields.One2many('hr.applicant.qualification', 'qualification_data',) avg_score = fields.Float(compute='compute_score_average',store=True,index=True, string='Average Score') display_type = fields.Selection([ ('line_section', "Section"), ('line_note', "Note")], default=False, help="Technical field for UX purpose.") class Qualification(models.Model): _name = 'hr.applicant.qualification' _description = 'Applicant Qualification' qualification_data = fields.Many2one('hr.applicant', string='Qualification') qualification_type_id = fields.Many2one('hr.applicant.qualification.rule', string='Qualification Type') qualification_type_line_id = fields.Many2one(related='qualification_type_id.qualification_type_line_id') score = fields.Float(related='qualification_type_line_id.score') The thing is, I got a add a section option but it is working same like default "add a line". I know it has lot of things to do with python code, even tried to get it from sales' addon but it has very complex structure. I am a beginner, so if anyone can help me out with code or at-least steps for it. | You need to set the qualification_lines widget attribute to section_and_note_one2many and define the display_type in the applicant qualification model instead of the applicant model, it will be used to check if you need to add a section (help: Technical field for UX purpose). 
In the following example the section text will be stored in the name field: View definition: <field name="qualification_lines" widget="section_and_note_one2many"> <tree editable="bottom"> <control> <create name="add_line_control" string="Add a line"/> <create name="add_section_control" string="Add a section" context="{'default_display_type': 'line_section'}"/> </control> <field name="name" widget="section_and_note_text" optional="show"/> <field name="display_type" invisible="1"/> <field name="score"/> </tree> </field> Model definition: class Qualification(models.Model): _name = 'hr.applicant.qualification' _description = 'Applicant Qualification' name = fields.Char(required=True) display_type = fields.Selection([ ('line_section', "Section"), ('line_note', "Note")], default=False, help="Technical field for UX purpose.") qualification_data = fields.Many2one('hr.applicant', string='Qualification') | 5 | 7 |
64,519,479 | 2020-10-25 | https://stackoverflow.com/questions/64519479/modulenotfounderror-no-module-named-sksurv-in-python | I am trying to run survival analysis in python (pycharm) in linux, here is a part of the code import numpy as np import matplotlib.pyplot as plt #matplotlib inline import pandas as pd from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from sklearn.model_selection import train_test_split from sksurv.datasets import load_flchain from sksurv.linear_model import CoxPHSurvivalAnalysis I get the error "ModuleNotFoundError: No module named 'sksurv'", I tried everything, but nothing works. | The required dependencies for scikit-survival, cvxpy cvxopt joblib numexpr numpy 1.12 or later osqp pandas 0.21 or later scikit-learn 0.22 scipy 1.0 or later ...will be automatically installed by pip when you run: pip install scikit-survival However, one module in particular, osqp, has CMake as one of its dependencies. If you don't have CMake installed, pip install scikit-survival will throw an error and the installation will fail. You can download CMake for your OS at cmake.org/download After CMake has installed, you should be able to successfully run pip install scikit-survival Notes: GCC needs to be installed also scikit-survival works with Python 3.5 or higher More information is available in the docs | 6 | 3 |
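Once CMake and GCC are in place and pip install scikit-survival has completed, a minimal sanity check like the sketch below (which assumes the install ran in the same interpreter/virtualenv that PyCharm is configured to use) confirms the module is importable:

```python
# Quick check that scikit-survival installed into the interpreter PyCharm is using.
import sksurv
from sksurv.datasets import load_flchain

print(sksurv.__version__)      # should print the installed version
X, y = load_flchain()          # small bundled dataset, same one imported in the question
print(X.shape, y.shape)
```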
64,484,905 | 2020-10-22 | https://stackoverflow.com/questions/64484905/getting-celery-task-results-using-rpc-backend | I'm struggling with getting results from the Celery task. My app entry point looks like this: from app import create_app,celery celery.conf.task_default_queue = 'order_master' order_app = create_app('../config.order_master.py') Now, before I start the application I start the RabbitMQ and ensure it has no queues: root@3d2e6b124780:/# rabbitmqctl list_queues Timeout: 60.0 seconds ... Listing queues for vhost / ... root@3d2e6b124780:/# Now I start the application. After the start I still see no queues in the RabbitMQ. When I start the task from the application jobs.add_together.delay(2, 3) I get the task ID: ralfeus@web-2 /v/w/order (multiple-instances)> (order) curl localhost/test {"result":"a2c07de4-f9f2-4b21-ae47-c6d92f2a7dfe"} ralfeus@web-2 /v/w/order (multiple-instances)> (order) At that moment I can see that my queue has one message: root@3d2e6b124780:/# rabbitmqctl list_queues Timeout: 60.0 seconds ... Listing queues for vhost / ... name messages dd65ba89-cce9-3e0b-8252-c2216912a910 0 order_master 1 root@3d2e6b124780:/# Now I start Celery worker: ralfeus@web-2 /v/w/order (multiple-instances)> /usr/virtualfish/order/bin/celery -A main_order_master:celery worker --loglevel=INFO -n order_master -Q order_master --concurrency 2 INFO:app:Blueprints are registered -------------- celery@order_master v5.0.0 (singularity) --- ***** ----- -- ******* ---- Linux-5.4.0-51-generic-x86_64-with-glibc2.29 2020-10-22 16:38:56 - *** --- * --- - ** ---------- [config] - ** ---------- .> app: app:0x7f374715c5b0 - ** ---------- .> transport: amqp://guest:**@172.17.0.1:5672// - ** ---------- .> results: rpc:// - *** --- * --- .> concurrency: 2 (prefork) -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .> order_master exchange=order_master(direct) key=order_master [tasks] . app.jobs.add_together . app.jobs.post_purchase_orders [2020-10-22 16:38:57,263: INFO/MainProcess] Connected to amqp://guest:**@172.17.0.1:5672// [2020-10-22 16:38:57,304: INFO/MainProcess] mingle: searching for neighbors [2020-10-22 16:38:58,354: INFO/MainProcess] mingle: all alone [2020-10-22 16:38:58,375: INFO/MainProcess] celery@order_master ready. [2020-10-22 16:38:58,377: INFO/MainProcess] Received task: app.jobs.add_together[f855bec7-307d-4570-ab04-3d036005a87b] [2020-10-22 16:40:38,616: INFO/ForkPoolWorker-2] Task app.jobs.add_together[f855bec7-307d-4570-ab04-3d036005a87b] succeeded in 100.13561034202576s: 5 So it's visible the worker could pick up the task and execute it and produce a result. However I can't get the result. Instead, when I request the result I get following: curl localhost/test/f855bec7-307d-4570-ab04-3d036005a87b {"state":"PENDING"} ralfeus@web-2 /v/w/order (multiple-instance)> (order) If I check the queues now I see that: root@3d2e6b124780:/# rabbitmqctl list_queues Timeout: 60.0 seconds ... Listing queues for vhost / ... name messages dd65ba89-cce9-3e0b-8252-c2216912a910 1 65d80661-6195-3986-9fa2-e468eaab656e 0 celeryev.9ca5a092-9a0c-4bd5-935b-f5690cf9665b 0 order_master 0 celery@order_master.celery.pidbox 0 root@3d2e6b124780:/# I see the queue dd65ba89-cce9-3e0b-8252-c2216912a910 has one message, which as I check contains result. So why has it appeared there and how do I get that? All manuals say I just need to get task by ID. But in my case the task is still in pending state. 
| According to Celery documentation: RPC Result Backend (RabbitMQ/QPid) The RPC result backend (rpc://) is special as it doesn’t actually store the states, but rather sends them as messages. This is an important difference as it means that a result can only be retrieved once, and only by the client that initiated the task. Two different processes can’t wait for the same result. So using rpc:// isn't suitable for retrieving results later by another request. | 7 | 15 |
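A minimal sketch of pointing Celery at a persistent result backend instead of rpc://, so a result can be fetched later and by a different process; Redis is assumed here only as an example, and any supported backend (database, cache, etc.) works the same way:

```python
# celery_app.py -- hypothetical module; the broker/backend URLs are assumptions
from celery import Celery

celery = Celery(
    'order_master',
    broker='amqp://guest:guest@172.17.0.1:5672//',  # same RabbitMQ broker as before
    backend='redis://localhost:6379/0',             # persistent store instead of rpc://
)

@celery.task
def add_together(a, b):
    return a + b

# Later, from any process that can reach the backend:
# from celery.result import AsyncResult
# result = AsyncResult(task_id, app=celery)
# print(result.state, result.get(timeout=10))
```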
64,550,426 | 2020-10-27 | https://stackoverflow.com/questions/64550426/using-progress-bars-of-pip | I want to use progress bars in my Python code. I know there are many libraries for that, but I want to use the progress bars used by pip [the package manager]. Please tell me if there is a way to do this. | The progress package available on PyPI is the one used by pip. It can be imported by including the following line in your Python file: from pip._vendor import progress Usage is documented at https://pypi.org/project/progress/ | 13 | 11 |
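A short usage sketch of that package; whether pip still vendors progress depends on the pip version, so falling back to the standalone PyPI package is assumed to be acceptable:

```python
# Use pip's vendored copy if it exists, otherwise the standalone package (pip install progress).
try:
    from pip._vendor.progress.bar import Bar
except ImportError:
    from progress.bar import Bar

bar = Bar('Processing', max=20)
for _ in range(20):
    # ... do one unit of work here ...
    bar.next()
bar.finish()
```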
64,534,844 | 2020-10-26 | https://stackoverflow.com/questions/64534844/python-asyncio-aiohttp-timeout | Word of notice: This is my first approach with asyncio, so I might have done something really stupid. Scenario is as follows: I need to "http-ping" a humongous list of urls to check if they respond 200 or any other value. I get timeouts for each and every request, though tools like gobuster report 200,403, etc. My code is sth similar to this: import asyncio,aiohttp import datetime #------------------------------------------------------------------------------------- async def get_data_coroutine(session,url,follow_redirects,timeout_seconds,retries): #print('#DEBUG '+datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')+' '+url) try: async with session.get(url,allow_redirects=False,timeout=timeout_seconds) as response: status = response.status #res = await response.text() if( status==404): pass elif(300<=status and status<400): location = str(response).split("Location': \'")[1].split("\'")[0] print('#HIT '+datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')+' '+str(status)+' '+url+' ---> '+location) if(follow_redirects==True): return await get_data_coroutine(session,location,follow_redirects,timeout_seconds,retries) else: print('#HIT '+datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')+' '+str(status)+' '+url) return None except asyncio.exceptions.TimeoutError as e: print('#ERROR '+datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')+' '+' '+' '+url+' TIMEOUT '+str(e)) return None #--------------------------------------------------------------------------- async def main(loop): base_url = 'http://192.168.59.37' extensions = ['','.html','php'] fd = open('/usr/share/wordlists/dirb/common.txt','r') words_without_suffix = [x.strip() for x in fd.readlines()]#[-5:] #DEBUG! words_with_suffix = [base_url+'/'+x+y for x in words_without_suffix for y in extensions] follow = True total_timeout = aiohttp.ClientTimeout(total=60*60*24) timeout_seconds = 10 retries = 1 async with aiohttp.ClientSession(loop=loop,timeout=total_timeout) as session: tasks = [get_data_coroutine(session,url,follow,timeout_seconds,retries) for url in words_with_suffix] await asyncio.gather(*tasks) print('DONE') #--------------------------------------------------------------------------- if(__name__=='__main__'): loop = asyncio.get_event_loop() result = loop.run_until_complete(main(loop)) Did I do something really wrong? Any word of advice? Thank you SO much! | Actually, I ended up finding an open issue in aio-libs/aiohttp: https://github.com/aio-libs/aiohttp/issues/3203 This way, they suggest a workaround that achieves my needs: session_timeout = aiohttp.ClientTimeout(total=None,sock_connect=timeout_seconds,sock_read=timeout_seconds) async with aiohttp.ClientSession(timeout=session_timeout) as session: async with session.get(url,allow_redirects=False,timeout=1) as response: ... | 12 | 25 |
64,563,105 | 2020-10-27 | https://stackoverflow.com/questions/64563105/aws-lambda-read-csv-and-convert-to-pandas-dataframe | I have got a simple Lambda code to read the csv file from S3 Bucket. All is working fine however I tried to get the csv data to pandas data frame and the error comes up string indices must be integers My code is bog-standard but I just need to use the csv as a data frame for further manipulation. The hashed line is the source of the error. I can print data with no problems so the bucket and file details are configured properly. updated code import json import pandas as pd import numpy as np import requests import glob import time import os from datetime import datetime from csv import reader import boto3 import traceback import io s3_client = boto3.client('s3') def lambda_handler(event, context): try: bucket_name = event["Records"][0]["s3"]["bucket"]["name"] s3_file_name = event["Records"][0]["s3"]["object"]["key"] resp = s3_client.get_object(Bucket=bucket_name, Key=s3_file_name) data = resp['Body'].read().decode('utf-8') df=pd.DataFrame( list(reader(data))) print (df.head()) except Exception as err: print(err) # TODO implement return { 'statusCode': 200, 'body': json.dumps('Hello fr2om Lambda!') } traceback.print_exc() | I believe that your problem is likely tied to this line - df=pd.DataFrame( list(reader(data))) in your function. The answer below should allow you to read the csv file into the pandas dataframe for processes. import boto3 import pandas as pd from io import BytesIO s3_client = boto3.client('s3') def lambda_handler(event, context): try: bucket_name = event["Records"][0]["s3"]["bucket"]["name"] s3_file_name = event["Records"][0]["s3"]["object"]["key"] resp = s3_client.get_object(Bucket=bucket_name, Key=s3_file_name) ########################################### # one of these methods should work for you. # Method 1 # df_s3_data = pd.read_csv(resp['Body'], sep=',') # # Method 2 # df_s3_data = pd.read_csv(BytesIO(resp['Body'].read().decode('utf-8'))) ########################################### print(df_s3_data.head()) except Exception as err: print(err) | 8 | 11 |
64,497,615 | 2020-10-23 | https://stackoverflow.com/questions/64497615/how-to-add-a-custom-decorator-to-a-fastapi-route | I want to add an auth_required decorator to my endpoints. (Please consider that this question is about decorators, not middleware) So a simple decorator looks like this: def auth_required(func): def wrapper(*args, **kwargs): if user_ctx.get() is None: raise HTTPException(...) return func(*args, **kwargs) return wrapper So there are 2 usages: @auth_required @router.post(...) or @router.post(...) @auth_required The first way doesn't work because router.post creates a router that saved into self.routes of APIRouter object. The second way doesn't work because it fails to verify pydantic object. For any request model, it says missing args, missing kwargs. So my question is - how can I add any decorators to FastAPI endpoints? Should I get into router.routes and modify the existing endpoint? Or use some functools.wraps like functions? | How can I add any decorators to FastAPI endpoints? As you said, you need to use @functools.wraps(...)--(PyDoc) decorator as, from functools import wraps from fastapi import FastAPI from pydantic import BaseModel class SampleModel(BaseModel): name: str age: int app = FastAPI() def auth_required(func): @wraps(func) async def wrapper(*args, **kwargs): return await func(*args, **kwargs) return wrapper @app.post("/") @auth_required # Custom decorator async def root(payload: SampleModel): return {"message": "Hello World", "payload": payload} The main caveat of this method is that you can't access the request object in the wrapper and I assume it is your primary intention. If you need to access the request, you must add the argument to the router function as, from fastapi import Request @app.post("/") @auth_required # Custom decorator async def root(request: Request, payload: SampleModel): return {"message": "Hello World", "payload": payload} I am not sure what's wrong with the FastAPI middleware, after all, the @app.middleware(...) is also a decorator. | 67 | 102 |
64,502,578 | 2020-10-23 | https://stackoverflow.com/questions/64502578/mutlithreading-with-raw-pymysql-for-celery | In the project I am currently working on, I am not allowed to use an ORM so I made my own It works great but I am having problems with Celery and it's concurrency. For a while, I had it set to 1 (using --concurrency=1) but I'm adding new tasks which take more time to process than they need to be run with celery beat, which causes a huge backlog of tasks. When I set celery's concurrency to > 1, here's what happens (pastebin because it's big): https://pastebin.com/M4HZXTDC Any idea on how I could implement some kind of lock/wait on the other processes so that the different workers don't cross each other? Edit: Here is where I setup my PyMySQL instance and how the open and close are handled | PyMSQL does not allow threads to share the same connection (the module can be shared, but threads cannot share a connection). Your Model class is reusing the same connection everywhere. So, when different workers call on the models to do queries, they are using the same connection object, causing conflicts. Make sure your connection objects are thread-local. Instead of having a db class attribute, consider a method that will retrieve a thread-local connection object, instead of reusing one potentially created in a different thread. For instance, create your connection in the task. Right now, you're using a global connection everywhere for every model. # Connect to the database connection = pymysql.connect(**database_config) class Model(object): """ Base Model class, all other Models will inherit from this """ db = connection To avoid this you can create the DB in the __init__ method instead... class Model(object): """ Base Model class, all other Models will inherit from this """ def __init__(self, *args, **kwargs): self.db = pymysql.connect(**database_config) However, this may not be efficient/practical because every instance of the db object will create a session. To improve upon this, you could use an approach using threading.local to keep connections local to threads. class Model(object): """ Base Model class, all other Models will inherit from this """ _conn = threading.local() @property def db(self): if not hasattr(self._conn, 'db'): self._conn.db = pymysql.connect(**database_config) return self._conn.db Note, a thread-local solution works assuming you're using a threading concurrency model. Note also that celery uses multiple processes (prefork) by default. This may or may not be a problem. If it is a problem, you may be able to work around it if you change the workers to use eventlet instead. | 6 | 1 |
64,497,080 | 2020-10-23 | https://stackoverflow.com/questions/64497080/how-to-speed-up-the-performance-of-array-masking-from-the-results-of-numpy-searc | I want to generate a mask from the results of numpy.searchsorted(): import numpy as np # generate test examples x = np.random.rand(1000000) y = np.random.rand(200) # sort x idx = np.argsort(x) sorted_x = np.take_along_axis(x, idx, axis=-1) # searchsort y in x pt = np.searchsorted(sorted_x, y) pt is an array. Then I want to create a boolean mask of size (200, 1000000) with True values when its indices are idx[0:pt[i]], and I come up with a for-loop like this: mask = np.zeros((200, 1000000), dtype='bool') for i in range(200): mask[i, idx[0:pt[i]]] = True Anyone has an idea to speed up the for-loop? | Approach #1 Going by the new-found information picked up off OP's comments that states only y is changing in real-time, we can pre-process lots of stuffs around x and hence do much better. We will create a hashing array that will store stepped masks. For the part that involves y, we will simply index into the hashing array with the indices obtained off searchsorted which will approximate the final mask array. A final step of assigning the remaining bools could be offloaded to numba given its ragged nature. This should also be beneficial if we decide to scale up the lengths of y. Let's look at the implementation. Pre-processing with x : sidx = x.argsort() ssidx = x.argsort().argsort() # Choose a scale factor. # 1. A small one would store more mapping info, hence faster but occupy more mem # 2. A big one would store less mapping info, hence slower, but memory efficient. scale_factor = 100 mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx Remaining steps with y : import numba as nb @nb.njit(parallel=True,fastmath=True) def array_masking3(out, starts, idx, sidx): N = len(out) for i in nb.prange(N): for j in nb.prange(starts[i], idx[i]): out[i,sidx[j]] = True return out idx = np.searchsorted(x,y,sorter=sidx) s0 = idx//scale_factor starts = s0*scale_factor out = mapar[s0] out = array_masking3(out, starts, idx, sidx) Benchmarking In [2]: x = np.random.rand(1000000) ...: y = np.random.rand(200) In [3]: ## Pre-processing step with "x" ...: sidx = x.argsort() ...: ssidx = x.argsort().argsort() ...: scale_factor = 100 ...: mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx In [4]: %%timeit ...: idx = np.searchsorted(x,y,sorter=sidx) ...: s0 = idx//scale_factor ...: starts = s0*scale_factor ...: out = mapar[s0] ...: out = array_masking3(out, starts, idx, sidx) 41 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # A 1/10th smaller hashing array has similar timings In [7]: scale_factor = 1000 ...: mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx In [8]: %%timeit ...: idx = np.searchsorted(x,y,sorter=sidx) ...: s0 = idx//scale_factor ...: starts = s0*scale_factor ...: out = mapar[s0] ...: out = array_masking3(out, starts, idx, sidx) 40.6 ms ± 196 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # @silgon's soln In [5]: %timeit x[np.newaxis,:] < y[:,np.newaxis] 138 ms ± 896 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) Approach #2 This borrowed a good part from OP's solution. 
import numba as nb @nb.njit(parallel=True) def array_masking2(mask1D, mask_out, idx, pt): n = len(idx) for j in nb.prange(len(pt)): if mask1D[j]: for i in nb.prange(pt[j],n): mask_out[j, idx[i]] = False else: for i in nb.prange(pt[j]): mask_out[j, idx[i]] = True return mask_out def app2(idx, pt): m,n = len(pt), len(idx) mask1 = pt>len(x)//2 mask2 = np.broadcast_to(mask1[:,None], (m,n)).copy() return array_masking2(mask1, mask2, idx, pt) So, the idea is once, we have larger than half of indices to be set True, we switch over to set False instead after pre-assigning those rows as all True. This results in lesser memory accesses and hence some noticeable performance boost. Benchmarking OP's solution : @nb.njit(parallel=True,fastmath=True) def array_masking(mask, idx, pt): for j in nb.prange(pt.shape[0]): for i in nb.prange(pt[j]): mask[j, idx[i]] = True return mask def app1(idx, pt): m,n = len(pt), len(idx) mask = np.zeros((m, n), dtype='bool') return array_masking(mask, idx, pt) Timings - In [5]: np.random.seed(0) ...: x = np.random.rand(1000000) ...: y = np.random.rand(200) In [6]: %timeit app1(idx, pt) 264 ms ± 8.91 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [7]: %timeit app2(idx, pt) 165 ms ± 3.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) | 7 | 3 |
64,561,637 | 2020-10-27 | https://stackoverflow.com/questions/64561637/cant-import-module-situated-in-parent-folder-from-jupyter-lab-notebook-and-path | Here's my situation. I have some jupyter notebooks inside some folder and I would like to share some code between those notebooks trough a library I made. The folder structure is the following: 1.FirstFolder/ notebookA.ipynb 2.SecondFolder/ notebookB.ipynb mylib/ __init__.py otherfiles.py I tried putting the following code at the beginning of the notebook: # to use modules in parent folder import sys import os from pathlib import Path libpath = os.path.join(Path.cwd().parent,'mylib') print(f"custom library functions are in the module:\n\t{libpath}") sys.path.append(libpath) import mylib The print outputs the correct path of the module and then a ModuleNotFoundError comes up and the program crashes: ---> 10 import mylib 11 from mylib import * ModuleNotFoundError: No module named 'mylib' Looking up on SO I found that that should have been the way to import a module from a non-default folder. Where is the error? EDIT: after FinleyGibson's answer I tried sys.path.append(Path.cwd().parent) and restarted the kernel but I still have the same problem. EDIT2: I tried this and it worked, but I still would like to know why the previous approaches haven't worked. import sys import os from pathlib import Path tmp = Path.cwd() os.chdir(Path.cwd().parent) sys.path.append(Path.cwd()) import mylib from mylib.dataloading import * os.chdir(tmp) | You have added the contents of os.path.join(Path.cwd().parent,'mylib') to your path, this means python will look inside this dir for the module you are importing. mylib is not located in this dir, but rather the parent dir. Also Path.cwd().parent returns a pathlib.PosixPath object. Convert this to a string to use it with import (or, just use sys.path.append('../'): try: import sys import os from pathlib import Path sys.path.append(str(Path.cwd().parent)) import mylib doing this allows me to import a variable X = 'import success' located in otherfiles.py like so: ans = mylib.otherfiles.X print(ans) >>> 'import success' | 7 | 5 |
64,464,111 | 2020-10-21 | https://stackoverflow.com/questions/64464111/sendgrid-authenticate-with-api-keys | I got the following mail from SentGrid, We are emailing to inform you of an upcoming requirement to update your authentication method with Twilio SendGrid to API keys exclusively by December 9th, 2020 in order to ensure uninterrupted service and improve the security of your account. Our records show that you have used basic authentication with username and password for one or more of your API requests with 1 users of your SendGrid account in the last 180 days. Why API keys? This is an effort to enhance security for all of our users. Using your account username and password for authentication is less secure than using an API Key. Unlike your username and password, API Keys are uniquely generated and can be set to limit the access and specify permissions for a given request. What action is required? Follow these steps to identify and replace your authentication method to API Keys and then implement Two-Factor Authentication (2FA) for enhanced security. What happens if no action is taken? On December 9th, 2020 we will no longer accept basic authentication with username and password, and we will be requiring 2FA to login to your account. If you attempt to authenticate your API requests or SMTP configuration with username and password for any of your users after that date, your requests will be rejected. We’d like to thank you in advance for your prompt attention to these requirements. If you’d like to learn more about how you can enhance the security of your account, view this post. If you have any questions or need assistance, please visit our documentation or reach out to our Support team. Thank you, The Twilio SendGrid Team Presently I am sending mails to sendgrid by using following credentials, EMAIL_HOST = 'smtp.sendgrid.net' EMAIL_USE_TLS = False EMAIL_PORT = 587 EMAIL_HOST_USER = '[email protected]' EMAIL_HOST_PASSWORD = 'xxx''' Is this change affect me? | Yes, once they force two factor authentication (2FA), your application will not be able to do basic authentication by just using username/email & password. So, you need to start using API keys. Migration is simple: Login to sendgrid account Goto https://app.sendgrid.com/settings/api_keys "Generate API Key" - generate a new API key and copy paste to be used later Code changes: EMAIL_HOST_USER = 'apikey' (username should be this only) EMAIL_HOST_PASSWORD = 'YOUR_API_KEY' Test it If the changes work, you are good to go and have migrated from basic authentication to API keys. | 7 | 12 |
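A sketch of the migrated Django settings once an API key replaces basic authentication (reading the key from an environment variable is an assumption, not a SendGrid requirement):

```python
# settings.py -- after switching from username/password to an API key
import os

EMAIL_HOST = 'smtp.sendgrid.net'
EMAIL_PORT = 587
EMAIL_USE_TLS = True                                   # TLS is the usual choice on port 587
EMAIL_HOST_USER = 'apikey'                             # literally the string 'apikey'
EMAIL_HOST_PASSWORD = os.environ['SENDGRID_API_KEY']   # the generated API key
```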
64,483,856 | 2020-10-22 | https://stackoverflow.com/questions/64483856/use-pre-trained-nodes-from-past-runs-pytorch-biggraph | After struggling with this amazing facebookresearch / PyTorch-BigGraph project, and its impossible API, I managed to get a grip on how to run it (thanks to stand alone simple example) My system restrictions do not allow me to train the dense (embedding) representation of all edges, and I need from time to time to upload past embeddings and train the model using both new edges and existing nodes, notice that nodes in past and new edge list do not necessarily overlap. I tried to understand from here: see the context section how to do it, so far with no success. Following is a stand-alone PGD code, that turned batch_edges into an embedding node list, however, I need it to use pre-trained nodes list past_trained_nodes. import os import shutil from pathlib import Path from torchbiggraph.config import parse_config from torchbiggraph.converters.importers import TSVEdgelistReader, convert_input_data from torchbiggraph.train import train from torchbiggraph.util import SubprocessInitializer, setup_logging DIMENSION = 4 DATA_DIR = 'data' GRAPH_PATH = DATA_DIR + '/output1.tsv' MODEL_DIR = 'model' raw_config = dict( entity_path=DATA_DIR, edge_paths=[DATA_DIR + '/edges_partitioned', ], checkpoint_path=MODEL_DIR, entities={"n": {"num_partitions": 1}}, relations=[{"name": "doesnt_matter", "lhs": "n", "rhs": "n", "operator": "complex_diagonal", }], dynamic_relations=False, dimension=DIMENSION, global_emb=False, comparator="dot", num_epochs=7, num_uniform_negs=1000, loss_fn="softmax", lr=0.1, eval_fraction=0.,) batch_edges = [["A", "B"], ["B", "C"], ["C", "D"], ["D", "B"], ["B", "D"]] # I want the model to use these pretrained nodes, Notice that Node A exist, And F Does not #I dont have all past nodes, as some are gained from data past_trained_nodes = {'A': [0.5, 0.3, 1.5, 8.1], 'F': [3, 0.6, 1.2, 4.3]} try: shutil.rmtree('data') except: pass try: shutil.rmtree(MODEL_DIR) except: pass os.makedirs(DATA_DIR, exist_ok=True) with open(GRAPH_PATH, 'w') as f: for edge in batch_edges: f.write('\t'.join(edge) + '\n') setup_logging() config = parse_config(raw_config) subprocess_init = SubprocessInitializer() input_edge_paths = [Path(GRAPH_PATH)] convert_input_data(config.entities, config.relations, config.entity_path, config.edge_paths, input_edge_paths, TSVEdgelistReader(lhs_col=0, rel_col=None, rhs_col=1), dynamic_relations=config.dynamic_relations, ) train(config, subprocess_init=subprocess_init) How can I use my pre-trained nodes in the current model? Thanks in advance! | Since torchbiggraph is file based, you can modify the saved files to load pre-trained embeddings and add new nodes. I wrote a function to achieve this import json def pretrained_and_new_nodes(pretrained_nodes,new_nodes,entity_name,data_dir,embeddings_path): """ pretrained_nodes: A dictionary of nodes and their embeddings new_nodes: A list of new nodes,each new node must have an embedding in pretrained_nodes. 
If no new nodes, use [] entity_name: The entity's name, for example, WHATEVER_0 data_dir: The path to the files that record graph nodes and edges embeddings_path: The path to the .h5 file of embeddings """ with open('%s/entity_names_%s.json' % (data_dir,entity_name),'r') as source: nodes = json.load(source) dist = {item:ind for ind,item in enumerate(nodes)} if len(new_nodes) > 0: # modify both the node names and the node count extended = nodes.copy() extended.extend(new_nodes) with open('%s/entity_names_%s.json' % (data_dir,entity_name),'w') as source: json.dump(extended,source) with open('%s/entity_count_%s.txt' % (data_dir,entity_name),'w') as source: source.write('%i' % len(extended)) if len(new_nodes) == 0: # if no new nodes are added, we won't bother create a new .h5 file, but just modify the original one with h5py.File(embeddings_path,'r+') as source: for node,embedding in pretrained_nodes.items(): if node in nodes: source['embeddings'][dist[node]] = embedding else: # if there are new nodes, then we must create a new .h5 file # see https://stackoverflow.com/a/47074545/8366805 with h5py.File(embeddings_path,'r+') as source: embeddings = list(source['embeddings']) optimizer = list(source['optimizer']) for node,embedding in pretrained_nodes.items(): if node in nodes: embeddings[dist[node]] = embedding # append new nodes in order for node in new_nodes: if node not in list(pretrained_nodes.keys()): raise ValueError else: embeddings.append(pretrained_nodes[node]) # write a new .h5 file for the embedding with h5py.File(embeddings_path,'w') as source: source.create_dataset('embeddings',data=embeddings,) optimizer = [item.encode('ascii') for item in optimizer] source.create_dataset('optimizer',data=optimizer) After you trained a model (let's say the stand along simple example you linked in your post), and you want to change the learned embedding of node A to [0.5, 0.3, 1.5, 8.1]. Moreover, you also want to add a new node F to the graph with embedding [3, 0.6, 1.2, 4.3] (This newly added node F has no connections with other nodes). You can run my function with past_trained_nodes = {'A': [0.5, 0.3, 1.5, 8.1], 'F': [3, 0.6, 1.2, 4.3]} pretrained_and_new_nodes(pretrained_nodes=past_trained_nodes, new_nodes=['F'], entity_name='WHATEVER_0', data_dir='data/example_1', embeddings_path='model_1/embeddings_WHATEVER_0.v7.h5') After you ran this function, you can check the modified file of embeddings embeddings_WHATEVER_0.v7.h5 filename = "model_1/embeddings_WHATEVER_0.v7.h5" with h5py.File(filename, "r") as source: embeddings = list(source['embeddings']) embeddings and you will see, the embedding of A is changed, and also the embedding of F is added (the order of the embeddings is consistent with the order of nodes in entity_names_WHATEVER_0.json). With the files modified, you can use the pre-trained nodes in a new training session. | 8 | 4 |
64,468,858 | 2020-10-21 | https://stackoverflow.com/questions/64468858/trouble-updating-to-anaconda-navigator-1-10-0-macos | My Anaconda Navigator (v1.9.12) has been prompting me to upgrade to 1.10.0. The only problem is, when I click "yes" on the update prompt (which should close the navigator and update it), nothing happens. No problem, I thought. I ran conda update anaconda-navigator in the terminal. To no avail (and yes, I read the docs online and ran "conda deactivate" beforehand), same with conda install anaconda-navigator=1.10 Both ran for a while, but the desktop navigator is still on the old version. One thing to note: the Looking for incompatible packages line was taking way too long (hours with no notable progress), so I ctrl-c'ed out. But when I ran these commands again, they managed to finish running. Now I'm out of ideas; would anyone know what I can do to go through with the update? Thanks a lot! | I am having exactly the same issue (same Navigator version on macOS). I have spent several hours trying every possible solution and nothing helped. The only thing that worked was to uninstall and install again. The environment setup remains the same, so there is nothing to lose (though it still feels like a strange way to update). I followed the process from the answer to this question: How to uninstall Anaconda completely from macOS | 26 | 3 |
64,484,166 | 2020-10-22 | https://stackoverflow.com/questions/64484166/exclude-folder-from-pycharms-duplicate-check | Q: How do I exclude a folder from pycharm's duplicate check? Minimal Example: Say my pycharm project folder structure looks like this: project/main.py project/.backup/main_copy.py How do I tell pycharm not to warn me that main.py and main_copy.py contain duplicate code? | Try to mark .backup as excluded via right-click on it -> Mark Directory as. | 13 | 21 |
64,492,922 | 2020-10-23 | https://stackoverflow.com/questions/64492922/pytube-only-works-periodically-keyerror-assets | Five out of ten times Pytube will send me this error when attempting to run my small testing script. Here's the script: import pytube import urllib.request from pytube import YouTube yt = YouTube('https://www.youtube.com/watch?v=3NCyD3XoJgM') print('Youtube video title is: ' + yt.title + '! Downloading now!') Here's what I get: Traceback (most recent call last): File "youtube.py", line 6, in <module> yt = YouTube('https://www.youtube.com/watch?v=3NCyD3XoJgM') File "C:\Users\test\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pytube\__main__.py", line 91, in __init__ self.prefetch() File "C:\Users\test\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pytube\__main__.py", line 183, in prefetch self.js_url = extract.js_url(self.watch_html) File "C:\Users\test\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\pytube\extract.py", line 143, in js_url base_js = get_ytplayer_config(html)["assets"]["js"] KeyError: 'assets' I am very confused. I attempted to reinstall Python plus pytube but I can't seem to remedy this issue. It's increasingly perplexing that the script works half of the time, but not the other half. | For now fixed 100% with this: https://github.com/nficano/pytube/pull/767#issuecomment-716184994 With anyone else getting this error or issue, run this command in a terminal or cmd: python -m pip install git+https://github.com/nficano/pytube An update to pytubeX that hasn't been released with the pip installation yet. The GitHub link is the current dev explaining the situation. | 5 | 11 |
64,530,316 | 2020-10-26 | https://stackoverflow.com/questions/64530316/euclidean-distance-of-delaney-triangulation-scipy | The spatial package imported from Scipy can measure the Euclidean distance between specified points. Is it possible to return the same measurement by using the Delaunay package? Using the df below, the average distance between all points is measured grouped by Time. However, I'm hoping to use Delaunay triangulation to measure the average distance. import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Delaunay df = pd.DataFrame({ 'Time' : [1,1,1,1,2,2,2,2], 'A_X' : [5, 5, 6, 6, 4, 3, 3, 4], 'A_Y' : [5, 6, 6, 5, 5, 6, 5, 6], }) def make_points(x): return np.array(list(zip(x['A_X'], x['A_Y']))) points = df.groupby("Time").apply(make_points) for p in points: tri = Delaunay(p) ax.triplot(*p.T, tri.simplices) Average distance between all points can be measured using below but I'm hoping to incorporate Delaunay. avg_dist = (df.groupby(['Time']) .apply(lambda x: spatial.distance.pdist (np.array(list(zip(x['A_X'], x['A_Y'])))) .mean() if len(x) > 1 else 0) .reset_index() ) Intended Output: Time 0 0 1 1.082842 1 2 1.082842 | You can try this function from itertools import combinations import numpy as np def edges_with_no_replacement(points): # get the unique coordinates points = np.unique(points.loc[:,['A_X','A_Y']].values,return_index=False,axis=0) if len(points) <= 1: return 0 # for two points, no triangle # I think return the distance between the two points make more sense? You can change the return value to zero. if len(points) == 2: return np.linalg.norm(points[0]-points[1]) tri = Delaunay(points) triangles = tri.simplices # get all the unique edges all_edges = set([tuple(sorted(edge)) for item in triangles for edge in combinations(item,2)]) # compute the average dist return np.mean([np.linalg.norm(points[edge[0]]-points[edge[1]]) for edge in all_edges]) This function will first find all the unique edges given triangles, then return the average length of the triangle edges. Apply this function avg_dist = (df.groupby(['Time']).apply(edges_with_no_replacement).reset_index()) The output is Time 0 0 1 1.082843 1 2 1.082843 Note that the function edges_with_no_replacement will still throw QhullError if points are on the same line, for example Delaunay(np.array([[1,2],[1,3],[1,4]])) So, you have to make sure the points are not on the same line. | 5 | 6 |
64,503,039 | 2020-10-23 | https://stackoverflow.com/questions/64503039/how-do-i-call-pyspark-code-with-whl-file | I have used poetry to create a wheel file. I am running the following spark-submit command, but it is not working. I think I am missing something: spark-submit --py-files /path/to/wheel Please note that I have referred to the question below as well, but did not get much detail from it, as I am new to Python: how to pass python package to spark job and invoke main file from package with arguments | The wheel file can be executed as part of the spark-submit command below, with the wheel distributed via --py-files and a separate entry-point script passed as the application file: spark-submit --deploy-mode cluster --py-files /path/to/wheel main_file.py | 7 | 3 |
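A sketch of how the entry-point script and the wheel fit together; the package, module and function names below are hypothetical and only illustrate the structure:

```python
# main_file.py -- hypothetical driver script passed to spark-submit as the application file.
# The reusable code lives in my_package, which ships inside the wheel given to --py-files.
from pyspark.sql import SparkSession
from my_package.jobs import run_job   # hypothetical module/function inside the wheel

if __name__ == "__main__":
    spark = SparkSession.builder.appName("wheel-example").getOrCreate()
    run_job(spark)
    spark.stop()
```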
64,464,861 | 2020-10-21 | https://stackoverflow.com/questions/64464861/how-can-i-convert-a-two-column-array-to-a-matrix-with-counts-of-occurences | I have the following numpy array: import numpy as np pair_array = np.array([(205, 254), (205, 382), (254, 382), (18, 69), (205, 382), (31, 183), (31, 267), (31, 382), (183, 267), (183, 382)]) print(pair_array) #[[205 254] # [205 382] # [254 382] # [ 18 69] # [205 382] # [ 31 183] # [ 31 267] # [ 31 382] # [183 267] # [183 382]] Is there a way to transform this array to a symmetric pandas Dataframe that contains the count of occurences for all possible combinations? I expect something along the lines of this: # 18 31 69 183 205 254 267 382 # 18 0 0 1 0 0 0 0 0 # 31 0 0 0 1 0 0 1 1 # 69 1 0 0 0 0 0 0 0 # 183 0 1 0 0 0 0 1 1 # 205 0 0 0 0 0 1 0 2 # 254 0 0 0 0 1 0 0 1 # 267 0 1 0 1 0 0 0 0 # 382 0 1 0 1 2 1 0 0 | One way could be to build a graph using NetworkX and obtain the adjacency matrix directly as a dataframe with nx.to_pandas_adjacency. To account for the co-occurrences of the edges in the graph, we can create a nx.MultiGraph, which allows for multiple edges connecting the same pair of nodes: import networkx as nx G = nx.from_edgelist(pair_array, create_using=nx.MultiGraph) nx.to_pandas_adjacency(G, nodelist=sorted(G.nodes()), dtype='int') 18 31 69 183 205 254 267 382 18 0 0 1 0 0 0 0 0 31 0 0 0 1 0 0 1 1 69 1 0 0 0 0 0 0 0 183 0 1 0 0 0 0 1 1 205 0 0 0 0 0 1 0 2 254 0 0 0 0 1 0 0 1 267 0 1 0 1 0 0 0 0 382 0 1 0 1 2 1 0 0 Building a NetworkX graph, will also enable to create an adjacency matrix or another depending on the behaviour we expect. We can either create it using a: nx.Graph: If we want to set to 1 both entries (x,y) and (y,x) for a (x,y) (or (y,x)) edge. This will hence produce a symmetric adjacency matrix nx.DiGraph: If (x,y) should only set the (x,y) the entry to 1 nx.MultiGraph: For the same behaviour as a nx.Graph but accounting for edge co-occurrences nx.MultiDiGraph: For the same behaviour as a nx.DiGraph but also accounting for edge co-occurrences | 31 | 19 |
64,556,120 | 2020-10-27 | https://stackoverflow.com/questions/64556120/early-stopping-with-multiple-conditions | I am doing multi-class classification for a recommender system (item recommendations), and I'm currently training my network using sparse_categorical_crossentropy loss. Therefore, it is reasonable to perform EarlyStopping by monitoring my validation loss, val_loss as such: tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10) which works as expected. However, the performance of the network (recommender system) is measured by Average-Precision-at-10, and is tracked as a metric during training, as average_precision_at_k10. Because of this, I could also perform early stopping with this metric as such: tf.keras.callbacks.EarlyStopping(monitor='average_precision_at_k10', patience=10) which also works as expected. My problem: Sometimes the validation loss increases, whilst the Average-Precision-at-10 is improving and vice-versa. Because of this, I would need to monitor both, and perform early stopping, if and only if both are deteriorating. What I would like to do: tf.keras.callbacks.EarlyStopping(monitor=['val_loss', 'average_precision_at_k10'], patience=10) which obviously does not work. Any ideas how this could be done? | With guidance from Gerry P above I managed to create my own custom EarlyStopping callback, and thought I post it here in case anyone else are looking to implement something similar. If both the validation loss and the mean average precision at 10 does not improve for patience number of epochs, early stopping is performed. class CustomEarlyStopping(keras.callbacks.Callback): def __init__(self, patience=0): super(CustomEarlyStopping, self).__init__() self.patience = patience self.best_weights = None def on_train_begin(self, logs=None): # The number of epoch it has waited when loss is no longer minimum. self.wait = 0 # The epoch the training stops at. self.stopped_epoch = 0 # Initialize the best as infinity. self.best_v_loss = np.Inf self.best_map10 = 0 def on_epoch_end(self, epoch, logs=None): v_loss=logs.get('val_loss') map10=logs.get('val_average_precision_at_k10') # If BOTH the validation loss AND map10 does not improve for 'patience' epochs, stop training early. if np.less(v_loss, self.best_v_loss) and np.greater(map10, self.best_map10): self.best_v_loss = v_loss self.best_map10 = map10 self.wait = 0 # Record the best weights if current results is better (less). self.best_weights = self.model.get_weights() else: self.wait += 1 if self.wait >= self.patience: self.stopped_epoch = epoch self.model.stop_training = True print("Restoring model weights from the end of the best epoch.") self.model.set_weights(self.best_weights) def on_train_end(self, logs=None): if self.stopped_epoch > 0: print("Epoch %05d: early stopping" % (self.stopped_epoch + 1)) It is then used as: model.fit( x_train, y_train, batch_size=64, steps_per_epoch=5, epochs=30, verbose=0, callbacks=[CustomEarlyStopping(patience=10)], ) | 13 | 10 |
64,556,874 | 2020-10-27 | https://stackoverflow.com/questions/64556874/how-can-i-debug-python-console-script-command-line-apps-with-the-vscode-debugger | I've a Python package package_name which provides a command line application command-line-app-name as console_script: setup.py: setup( ... entry_points={"console_scripts": ["command-line-app-name=package_name.cli:main"]}, ... ) The virtualenv is located in <project>/.venv and managed with pipenv. pipenv managed venvs should support VSCode debugging integration. I've created a debugger configuration launch.json file with setting the Python path to the venv (pythonPath): { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: command-line-app-name", "type": "python", "request": "launch", "stopOnEntry": false, "program": "command-line-app-name", "linux": { "pythonPath": "${workspaceFolder}/.venv/bin/python", "args": ["-r", "/home/florian/gitlab/package_name/data/Test_MRM.d"] }, "windows": { "pythonPath": "${workspaceFolder}/.venv/Scripts/python.exe", "args": ["-r", "D:\\MassHunter\\Data\\demo_0000.d"], }, "console": "integratedTerminal" } ] } The Windows and Linux specific venv python executable and command line arguments should not have an impact. If I run the debugger I get: FileNotFoundError: [Errno 2] No such file or directory: '/home/florian/gitlab/package-name/command-line-app-name'. It seems like I'm miss-interpreting the documentation somehow. I tried to find help w.r.t. vscode-python as well as debugpy without success. How can I debug a console script command line app (instead of a package module)? | console_scripts cannot be debugged out-of-the-box. The solution is to call the entry point function directly instead ("program": "${workspaceRoot}/package_name/cli.py",). This requires to add the if __name__ == '__main__': idiom in the corresponding module (here: cli.py). In my case the command line argument parser used is click. However the pseudo-code should be very similar for other command line parser libs. package_name/cli.py: @click.command() @click.option(...) def main(<args>, <kwargs>): ... if __name__ == '__main__': main() # pylint: disable=no-value-for-parameter .vscode/launch.json: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: command-line-app-name", "type": "python", "request": "launch", "stopOnEntry": false, "program": "${workspaceRoot}/package_name/cli.py", "linux": { "pythonPath": "${workspaceFolder}/.venv/bin/python", "args": ["-r", "/home/florian/gitlab/package_name/data/Test_MRM.d"] }, "windows": { "pythonPath": "${workspaceFolder}/.venv/Scripts/python.exe", "args": ["-r", "D:\\MassHunter\\Data\\demo_0000.d"], }, "console": "integratedTerminal" } ] } NOTE: The tool used to manage the venv makes a difference. This solution does work in case the venv is managed with pipenv. The solution does not work in case the venv is managed with poetry. | 7 | 8 |
64,554,908 | 2020-10-27 | https://stackoverflow.com/questions/64554908/how-to-count-number-of-elements-in-a-row-greater-than-zero | I need to count the number of values in each row that are greater than zero and store them in a new column. The df below: team goals goals_against games_in_domestic_league 0 juventus 1 0 0 1 barcelona 0 1 1 2 santos 2 1 2 should become: team goals goals_against games_in_domestic_league total 0 juventus 1 0 0 1 1 barcelona 0 1 1 2 2 santos 2 1 2 3 | The first idea is to select the numeric columns, test whether the values are greater than 0, and count the Trues by summing along the rows: df['total'] = df.select_dtypes(np.number).gt(0).sum(axis=1) If you want to specify the columns by a list: cols = ['goals','goals_against','games_in_domestic_league'] df['total'] = df[cols].gt(0).sum(axis=1) | 5 | 5 |
64,543,449 | 2020-10-26 | https://stackoverflow.com/questions/64543449/update-during-resize-in-pygame | I'm developing a grid based game in pygame, and want the window to be resizable. I accomplish this with the following init code: pygame.display.set_mode((740, 440), pygame.RESIZABLE) As well as the following in my event handler: elif event.type == pygame.VIDEORESIZE: game.screen = pygame.display.set_mode((event.w, event.h), pygame.RESIZABLE) # Code to re-size other important surfaces The problem I'm having is that it seems a pygame.VIDEORESIZE event is only pushed once the user is done resizing the window, i.e. lets go of the border. screen.get_size() updates similarly. Since the graphics of my game are very simple, I'd really prefer for them to resize as the user drags the window. This is trivial in many other languages, but I can't find any reference for it in pygame - although I can't imagine a feature this basic would be impossible. How can I update my game as the screen is being resized in pygame? EDIT: Here is a minimal working example. Running on Windows 10, pygame 1.9.4, the following code will only draw the updated rectangle after the user finishes dragging the window. import sys import pygame pygame.init() size = 320, 240 black = 0, 0, 0 red = 255, 0, 0 screen = pygame.display.set_mode(size, pygame.RESIZABLE) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() elif event.type == pygame.VIDEORESIZE: pygame.display.set_mode((event.w, event.h), pygame.RESIZABLE) screen.fill(black) pygame.draw.rect(screen, red, (10,10,screen.get_width(),screen.get_height())) pygame.display.flip() | If you run into this kind of problem, it's always worth to google it using SDL instead of pygame, since pygame is a pretty low-level SDL wrapper. So that's not a problem of pygame itself, but rather how sdl and your window manager interact, e.g. see this SDL bug report. Nonetheless, if you really need to update the window while resizing, if you're using Windows, you can listen for the actual WM_SIZE event of Windows, redraw your screen, and update the "Windows"-window by calling RedrawWindow. Here's a simple example: import pygame import win32gui import win32con def wndProc(oldWndProc, draw_callback, hWnd, message, wParam, lParam): if message == win32con.WM_SIZE: draw_callback() win32gui.RedrawWindow(hWnd, None, None, win32con.RDW_INVALIDATE | win32con.RDW_ERASE) return win32gui.CallWindowProc(oldWndProc, hWnd, message, wParam, lParam) def main(): pygame.init() screen = pygame.display.set_mode((320, 240), pygame.RESIZABLE | pygame.DOUBLEBUF) def draw_game(): screen.fill(pygame.Color('black')) pygame.draw.rect(screen, pygame.Color('red'), pygame.Rect(0,0,screen.get_width(),screen.get_height()).inflate(-10, -10)) pygame.display.flip() oldWndProc = win32gui.SetWindowLong(win32gui.GetForegroundWindow(), win32con.GWL_WNDPROC, lambda *args: wndProc(oldWndProc, draw_game, *args)) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: return elif event.type == pygame.VIDEORESIZE: pygame.display.set_mode((event.w, event.h), pygame.RESIZABLE| pygame.DOUBLEBUF) draw_game() if __name__ == '__main__': main() Default behaviour: With RedrawWindow: | 8 | 7 |
64,504,406 | 2020-10-23 | https://stackoverflow.com/questions/64504406/how-to-hot-reload-grpc-server-in-python | I'm developing some python microservices with grpc and i'm using docker for the cassandra database and the microservices. Is there a way to setup reload on change within docker-compose? I'm guessing that first I need the code mounted as a volume but I don't see a way to reload on GRPC server like for example flask does. | We use watchdog[watchmedo] with our grpc services and Docker. Install watchdog or add to your requirements.txt file python -m pip install watchdog[watchmedo] Then in your docker-compose.yml add watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/" python -- -m app to your container where --directory is the directory to where your app is contained inside the docker container, and python -- -m app is the file that starts your grpc Server. In this example the file that starts the server is called app.py: app: build: context: ./app/ dockerfile: ./Dockerfile target: app command: watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/" python -- -m app volumes: - ./app/:/usr/src/app/ | 8 | 8 |
64,533,731 | 2020-10-26 | https://stackoverflow.com/questions/64533731/how-is-floor-division-not-giving-result-according-to-the-documented-rule | >>> print (12//0.2) 59.0 >>> print(floor(12/0.2)) 60 Why floor division is not working according to the rule in this case? p.s. Here Python is treating 0.2 as 0.20000000001 in the floor division case So (12/0.2000000001) is resulting in 59.999999... And floor(59.999999999) outputting 59 But don't know why python is treating 0.2 as 0.2000000001in the floor division case but not in the division case? | The reason why 12 / 0.2 results in 60.0, is not because 0.2 is treated differently, but because the error in the floating point division cancels the error in the representation of 0.2. The float always has the same value (greater than decimal 0.2), but depending on the operations those errors will either accumulate or be cancelled. In other cases the error is not completely cancelled and shows up in the result: >>> (12 / (0.2 * 0.2)) * 0.2 59.99999999999999 In CPython integer division for these specific types (float // float after the first param is automatically converted) and relative magnitudes is performed as follows (see Python's source code for the full method): mod = a % b result = (a - mod) / b If b was actually 0.2, then mod would be 0, but in floating point it is slightly larger, so mod is just under 0.2. If you do this manually you can see how we end up with 59.0: >>> a = 12.0 >>> b = 0.2 >>> mod = a % b >>> mod 0.19999999999999934 >>> (a - mod) / b 59.0 The OP is also asking about the error in the floating point division, here's that as well: The values (mantissa * base^exponent): 12: 1.1000000000000000000000000000000000000000000000000000 * 2^3 0.2: 1.1001100110011001100110011001100110011001100110011010 * 2^(-3) Remember 0.2 is not really 0.2, it's 0.200000000000000011102230246251565404236316680908203125. The result of dividing 12 by a value that is > 0.2 should be < 60. To divide the values, we divide the mantissa and subtract the exponent, so we get: 12 / 0.2: 0.1110111111111111111111111111111111111111111111111111111 * 2^6 But the last 3 bits don't fit into a double, which only has 53 bits for the mantissa (including the sign) and we're currently using 56. Since the result starts with 0, we first normalise, multiplying the mantissa by 2 and subtracting one from the exponent. And then we have to round to the nearest 53 bit mantissa: normalised: 1.110111111111111111111111111111111111111111111111111111 * 2^5 rounded: 1.1110000000000000000000000000000000000000000000000000 * 2^5 1.1110000000000000000000000000000000000000000000000000 * 2^5 is equal to 60. The difference between the correct result (1.110111111111111111111111111111111111111111111111111111 * 2^5) and the closest value we can represent as a 64 bit double (1.1110000000000000000000000000000000000000000000000000 * 2^5) is the error in the floating point division. | 9 | 9 |
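A short illustration of the two effects described above, namely the true stored value of 0.2 and how the rounding error cancels in 12 / 0.2 but not in 12 // 0.2:

```python
from decimal import Decimal
from fractions import Fraction

print(Decimal(0.2))   # 0.200000000000000011102230246251565404236316680908203125
print(Fraction(0.2))  # 3602879701896397/18014398509481984, slightly more than 1/5

print(12 / 0.2)       # 60.0 -> the division error cancels the representation error
print(12 % 0.2)       # 0.19999999999999934, just under 0.2
print(12 // 0.2)      # 59.0 -> computed as (12 - 12 % 0.2) / 0.2
```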
64,546,583 | 2020-10-26 | https://stackoverflow.com/questions/64546583/plot-multiple-arrows-between-scatter-points | I'm trying to plot multiple arrows between two sets of scatter points. Plotting a line is easy enough with ax.plot, but I'm trying to draw an arrow instead of a line. The arrows don't appear to be aligning between the points. If I use the line plot initialised below, it works fine, but the quiver plot does not plot along the same lines. import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.DataFrame(np.random.randint(-50,50,size=(100, 4)), columns=list('ABCD')) fig, ax = plt.subplots() x1 = df['A'] y1 = df['B'] x2 = df['C'] y2 = df['D'] AB = plt.scatter(x1, y1, c = 'blue', marker = 'o', s = 10, zorder = 3) CD = plt.scatter(x2, y2, c = 'red', marker = 'o', s = 10, zorder = 2) # plot line between points #ax.plot([x1,x2],[y1,y2], color = 'black', linestyle = '--', linewidth = 0.5) ax.quiver([x1, x2], [y1, y2]) | According to the documentation (see the scale_units option), you need: angles='xy', scale_units='xy', scale=1 in quiver: AB = ax.scatter(x1, y1, c = 'blue', marker = 'o', s = 10, zorder = 3) CD = ax.scatter(x2, y2, c = 'red', marker = 'o', s = 10, zorder = 2) ax.quiver(x1, y1, (x2-x1), (y2-y1), angles='xy', scale_units='xy', scale=1) plt.show() Output: | 5 | 6 |
64,545,132 | 2020-10-26 | https://stackoverflow.com/questions/64545132/will-run-in-executor-ever-block | suppose if I have a web server like this: from fastapi import FastAPI import uvicorn import asyncio app = FastAPI() def blocking_function(): import time time.sleep(5) return 42 @app.get("/") async def root(): loop = asyncio.get_running_loop() result = await loop.run_in_executor(None, blocking_function) return result @app.get("/ok") async def ok(): return {"ok": 1} if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", workers=1) As I understand, the code will spawn another thread in the default ThreadExecutorPool and then execute the blocking function in the thread pool. On the other side, thinking about how the GIL works, the CPython interpreter will only execute a thread for 100 ticks and then it will switch to another thread to be fair and give other threads a chance to progress. In this case, what if the Python interpreter decides to switch to the threads where the blocking_function is executing? Will it block the who interpreter to wait for whatever remaining on the time.sleep(5)? The reason I am asking this is that I have observed sometimes my application will block on the blocking_function, however I am not entirely sure what's in play here as my blocking_function is quite special -- it talks to a COM API object through the win32com library. I am trying to rule out that this is some GIL pitfalls I am falling into. | Both time.sleep (as explained in this question) and the win32com library (according to this mailing list post) release the GIL when they are called, so they will not prevent other threads from making progress while they are blocking. To answer the "high-level" question - "can run_in_executor ever (directly or indirectly) block the event-loop?" - the answer would only be "yes" if you used a ThreadPoolExecutor, and the code you executed in run_in_executor did blocking work that didn't release the GIL. While that wouldn't completely block the event loop, it would mean both your event loop thread and the executor thread could not run in parallel, since both would need to acquire the GIL to make progress. | 6 | 6 |
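A related sketch, not specific to the win32com case above: if the blocking work were pure-Python CPU code that does hold the GIL, a ProcessPoolExecutor keeps the event loop responsive, because the work runs in a separate interpreter process. The function must be picklable, i.e. defined at module level:

import asyncio
from concurrent.futures import ProcessPoolExecutor


def cpu_bound_blocking_function():
    # Pure-Python number crunching holds the GIL, unlike time.sleep or win32com calls.
    return sum(i * i for i in range(10_000_000))


async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        return await loop.run_in_executor(pool, cpu_bound_blocking_function)


if __name__ == "__main__":
    print(asyncio.run(main()))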
64,540,868 | 2020-10-26 | https://stackoverflow.com/questions/64540868/faster-for-loops-with-arrays-in-python | N, M = 1000, 4000000 a = np.random.uniform(0, 1, (N, M)) k = np.random.randint(0, N, (N, M)) out = np.zeros((N, M)) for i in range(N): for j in range(M): out[k[i, j], j] += a[i, j] I work with very long for-loops; %%timeit on above with pass replacing the operation yields 1min 19s ± 663 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) this is unacceptable in context (C++ took 6.5 sec). There's no reason for above to be done with Python objects; arrays have well-defined types. Implementing this in C/C++ as an extension is an overkill on both developer and user ends; I'm just passing arrays to loop and do arithmetic on. Is there a way to tell Numpy "move this logic to C", or another library that can handle nested loops involving only arrays? I seek it for the general case, not workarounds for this specific example (but if you have one I can open a separate Q&A). | This is basically the idea behind Numba. Not as fast as C, but it can get close... It uses a jit compiler to compile python code to machine and it's compatible with most Numpy functions. (In the docs you find all the details) import numpy as np from numba import njit @njit def f(N, M): a = np.random.uniform(0, 1, (N, M)) k = np.random.randint(0, N, (N, M)) out = np.zeros((N, M)) for i in range(N): for j in range(M): out[k[i, j], j] += a[i, j] return out def f_python(N, M): a = np.random.uniform(0, 1, (N, M)) k = np.random.randint(0, N, (N, M)) out = np.zeros((N, M)) for i in range(N): for j in range(M): out[k[i, j], j] += a[i, j] return out Pure Python: %%timeit N, M = 100, 4000 f_python(M, N) 338 ms ± 12.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) With Numba: %%timeit N, M = 100, 4000 f(M, N) 12 ms ± 534 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 7 | 5 |
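As a follow-up sketch on the same idea, passing the arrays in as arguments separates data generation from the kernel being timed, and numba's prange can parallelise the column loop. It has to be the column loop rather than the row loop, because parallelising over i would race on out[k[i, j], j]:

import numpy as np
from numba import njit, prange


@njit(parallel=True)
def accumulate(a, k, out):
    N, M = a.shape
    for j in prange(M):        # columns are independent of each other
        for i in range(N):
            out[k[i, j], j] += a[i, j]
    return out


N, M = 100, 4000
a = np.random.uniform(0, 1, (N, M))
k = np.random.randint(0, N, (N, M))
out = accumulate(a, k, np.zeros((N, M)))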
64,527,464 | 2020-10-25 | https://stackoverflow.com/questions/64527464/clickable-link-inside-message-discord-py | I would like my bot to send message into chat like this: await ctx.send("This country is not supported, you can ask me to add it here") But to make "here" into clickable link, In HTML I would do it like this, right? <a href="https://www.youtube.com/" > This country is not supported, you can ask me to add it here </a> How can I do it in python? | As the other answer explained, you can't add hyperlinks in normal messages, but you can in Embeds. I don't see why you wouldn't want to use an Embed for an error message, especially considering it adds more functionality, so you should consider using that. embed = discord.Embed() embed.description = "This country is not supported, you can ask me to add it [here](your_link_goes_here)." await ctx.send(embed=embed) Feel free to mess around with the Embed & add some fields, a title, a colour, and whatever else you might want to do to make it look better. More info in the relevant API docs. | 6 | 11 |
64,523,533 | 2020-10-25 | https://stackoverflow.com/questions/64523533/environment-properties-are-not-passed-to-application-in-elastic-beanstalk | When deploying my Django project, database settings are not configured because 'RDS_HOSTNAME' in os.environ returns false. In fact no environment properties are available at the time of deployment. All these properties are available after the deployment. Running /opt/elasticbeanstalk/bin/get-config environment returns following: {"DJANGO_SETTINGS_MODULE":"myApp.settings","PYTHONPATH":"/var/app/venv/staging-LQM1lest/bin:$PYTHONPATH","RDS_DB_NAME":"ebdb","RDS_HOSTNAME":"xxxx.amazonaws.com","RDS_PASSWORD":"xxxx","RDS_PORT":"xxxx","RDS_USERNAME":"xxxx"} All RDS prefixed properties are set but still somehow os.environ is unable to read it. setting.py file: # [...] if 'RDS_HOSTNAME' in os.environ: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': os.environ['RDS_DB_NAME'], 'USER': os.environ['RDS_USERNAME'], 'PASSWORD': os.environ['RDS_PASSWORD'], 'HOST': os.environ['RDS_HOSTNAME'], 'PORT': os.environ['RDS_PORT'], } } # [...] Do I have to make any changes to make these properties available at the time of deployment? | Seems like this is a serious bug and AWS doesn't care about it. There are few ways I came up with to make this work but all of them require logging into the EB environment and do some manual work. Solution 1 As suggested in comment by hephalump Create an AWS secret manager Check IAM instance profile in EB's environment Configuration->Security->Edit. Then go to IAM user console and go to Roles. From there you can attach policy to the instance profile for secret manager. Once it's done, deploy the project Then login to the environment (eb ssh environment_name). Go to /var/app/current/ directory and run this command: source /var/app/venv/*/bin/activate. Finally run python3 manage.py migrate. Solution 2 Edit .bash_profile and add export these variables at the end of the file: export RDS_DB_NAME=your_dbname export RDS_USERNAME=user export RDS_PASSWORD=pass export RDS_HOSTNAME=host_endpoint export RDS_PORT=3306 Run source ~/.bash_profile Now you can deploy your project. Solution 3 Set all environment properties in EB environment's configuration. (Go to Configuration->Software->Edit->Environment properties and add the key and values). 2. Add this snippet at the beginning of settings.py from pathlib import Path import os import subprocess import ast def get_environ_vars(): completed_process = subprocess.run( ['/opt/elasticbeanstalk/bin/get-config', 'environment'], stdout=subprocess.PIPE, text=True, check=True ) return ast.literal_eval(completed_process.stdout) Go to Database section and replace it with this snippet if 'RDS_HOSTNAME' in os.environ: DATABASES = { 'default': { ' ENGINE': 'django.db.backends.mysql', 'NAME': os.environ['RDS_DB_NAME'], 'USER': os.environ['RDS_USERNAME'], 'PASSWORD': os.environ['RDS_PASSWORD'], 'HOST': os.environ['RDS_HOSTNAME'], 'PORT': os.environ['RDS_PORT'], } } else: env_vars = get_environ_vars() DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': env_vars['RDS_DB_NAME'], 'USER': env_vars['RDS_USERNAME'], 'PASSWORD': env_vars['RDS_PASSWORD'], 'HOST': env_vars['RDS_HOSTNAME'], 'PORT': env_vars['RDS_PORT'], } } Deploy the project. Login to the environment (eb ssh environment_name). Go to /var/app/current/ directory and run this command: source /var/app/venv/*/bin/activate. Finally run python3 manage.py migrate. 
Conclusion: Solution 1 is a little complex and Secrets Manager is not free (30-day trial only). Solution 2 is the simplest one, but I do not recommend tampering with any file manually on EB. Solution 3 is a clean solution, which I will use. This solution also keeps working once this bug is fixed in the future. | 6 | 8 |
64,524,963 | 2020-10-25 | https://stackoverflow.com/questions/64524963/efficient-elementwise-argmin-of-matrix-vector-difference | Suppose an array a.shape == (N, M) and a vector v.shape == (N,). The goal is to compute argmin of abs of v subtracted from every element of a - that is, out = np.zeros(N, M) for i in range(N): for j in range(M): out[i, j] = np.argmin(np.abs(a[i, j] - v)) I have a vectorized implementation via np.matlib.repmat, and it's much faster, but takes O(M*N^2) memory, unacceptable in practice. Computation's still done on CPU so best bet seems to be implementing the for-loop in C as an extension, but maybe Numpy already has this logic implemented. Does it? Any use-ready Numpy functions implementing above efficiently? | Inspired by this post, we can leverage np.searchsorted - def find_closest(a, v): sidx = v.argsort() v_s = v[sidx] idx = np.searchsorted(v_s, a) idx[idx==len(v)] = len(v)-1 idx0 = (idx-1).clip(min=0) m = np.abs(a-v_s[idx]) >= np.abs(v_s[idx0]-a) m[idx==0] = 0 idx[m] -= 1 out = sidx[idx] return out Some more perf. boost with numexpr on large datasets : import numexpr as ne def find_closest_v2(a, v): sidx = v.argsort() v_s = v[sidx] idx = np.searchsorted(v_s, a) idx[idx==len(v)] = len(v)-1 idx0 = (idx-1).clip(min=0) p1 = v_s[idx] p2 = v_s[idx0] m = ne.evaluate('(idx!=0) & (abs(a-p1) >= abs(p2-a))', {'p1':p1, 'p2':p2, 'idx':idx}) idx[m] -= 1 out = sidx[idx] return out Timings Setup : N,M = 500,100000 a = np.random.rand(N,M) v = np.random.rand(N) In [22]: %timeit find_closest_v2(a, v) 4.35 s ± 21.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [23]: %timeit find_closest(a, v) 4.69 s ± 173 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | 9 | 4 |
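A quick sanity check for the answer above, assuming the find_closest from the answer is already defined: it compares the result against the naive double loop from the question on a small random case (with continuous random data, exact distance ties where the two could legitimately differ essentially never occur):

import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 6))
v = rng.random(4)

naive = np.empty(a.shape, dtype=np.int64)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        naive[i, j] = np.argmin(np.abs(a[i, j] - v))

assert np.array_equal(find_closest(a, v), naive)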
64,525,237 | 2020-10-25 | https://stackoverflow.com/questions/64525237/how-to-calculate-the-size-of-blocks-of-values-in-a-list | I have a list like this: list_1 = [0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1] How can I calculate the size of blocks of values of 1 and 0 in this list? The resulting list will look like : list_2 = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 1, 1] | Try with cumsum with diff then transform count s = pd.Series(list_1) s.groupby(s.diff().ne(0).cumsum()).transform('count') Out[91]: 0 1 1 2 2 2 3 3 4 3 5 3 6 4 7 4 8 4 9 4 10 1 11 1 dtype: int64 | 7 | 6 |
64,455,605 | 2020-10-21 | https://stackoverflow.com/questions/64455605/show-all-colums-of-a-pandas-dataframe-in-describe | I am stuck here, but I it's a two part question. Looking at the output of .describe(include = 'all'), not all columns are showing; how do I get all columns to show? This is a common problem that I have all of the time with Spyder, how to have all columns to show in Console. Any help is appreciated. import matplotlib.pyplot as plt import pandas as pd import scipy.stats as stats import seaborn as sns mydata = pd.read_csv("E:\ho11.csv") mydata.head() print(mydata.describe(include="all", exclude = None)) mydata.info() OUTPUT: code output | Solution You could use either of the following methods: Method-1: source pd.options.display.max_columns = None Method-2: source pd.set_option('display.max_columns', None) # to reset this pd.reset_option('display.max_columns') Method-3: source # assuming df is your dataframe pd.set_option('display.max_columns', df.columns.size) # to reset this pd.reset_option('display.max_columns') Method-4: source # assuming df is your dataframe pd.set_option('max_columns', df.columns.size) # to reset this pd.reset_option('max_columns') To not wrap the output into multiple lines do this source pd.set_option('display.expand_frame_repr', False) References I will recommend you to explore the following resources for more details and examples. How to show all of columns name on pandas dataframe? How do I expand the output display to see more columns of a pandas DataFrame? How to show all columns / rows of a Pandas Dataframe? | 7 | 14 |
64,480,047 | 2020-10-22 | https://stackoverflow.com/questions/64480047/how-to-use-intrinsic-functions-sub-method-in-aws-cdk | I want to use the resource below in my CDK app; I am using Python for CDK: 'arn:aws:s3:::${LoggingBucket}/AWSLogs/${AWSAccoutID}/*' Therefore I need to substitute the values of LoggingBucket and AWSAccountID. Here is what I tried: bucket = s3.Bucket(self, "my-bucket", bucket_name = 'my-bucket') core.Fn.sub('arn:aws:s3:::${LoggingBucket}/AWSLogs/${AWSAccoutID}/*',[bucket.bucket_name, core.Environment.account]) But I get this error on the core.Fn.sub line: AttributeError: type object 'property' has no attribute '__jsii_type__' Subprocess exited with error 1 Then I tried this as well: mappings = { 'LoggingBucket': bucket.bucket_name, 'AWSAccountID': core.Environment.account } core.Fn.sub('arn:aws:s3:::${LoggingBucket}/AWSLogs/${AWSAccoutID}/*',mappings) $ cdk synth I am still getting the same error as above. Question: Please give me a solution on how to use the !sub function from CloudFormation in CDK. Let me know what I am doing wrong as well. Thank you. | Since you are using Python (or another programming language), there is no need to use the intrinsic functions that CloudFormation provides. I suggest a more elegant and easy way to format the arn: arn= f'arn:aws:s3:::{bucket.bucket_name}/AWSLogs/{core.Environment.account}/*' | 5 | 3 |
64,501,193 | 2020-10-23 | https://stackoverflow.com/questions/64501193/fastapi-how-to-use-httpexception-in-responses | The documentation suggests raising an HTTPException with client errors, which is great. But how can I show those specific errors in the documentation following HTTPException's model? Meaning a dict with the "detail" key. The following does not work because HTTPException is not a Pydantic model. @app.get( '/test', responses={ 409 : { 'model' : HTTPException, 'description': 'This endpoint always raises an error' } } ) def raises_error(): raise HTTPException(409, detail='Error raised') | Yes it is not a valid Pydantic type however since you can create your own models, it is easy to create a Model for it. from fastapi import FastAPI from fastapi.exceptions import HTTPException from pydantic import BaseModel class Dummy(BaseModel): name: str class HTTPError(BaseModel): detail: str class Config: schema_extra = { "example": {"detail": "HTTPException raised."}, } app = FastAPI() @app.get( "/test", responses={ 200: {"model": Dummy}, 409: { "model": HTTPError, "description": "This endpoint always raises an error", }, }, ) def raises_error(): raise HTTPException(409, detail="Error raised") I believe this is what you are expecting | 22 | 30 |
64,497,319 | 2020-10-23 | https://stackoverflow.com/questions/64497319/python-discord-py-error-could-not-build-wheels-for-multidict-yarl-which-use | trying to download discord.py using pip install, gave me the error message in the title. I installed using cmd and the commands py -m pip install -U discord, the cmd was also run in admin. tried using pip, pip3, and pip3.9, all of which didnt work. I tried uninstalling/reinstalling/upgrading (in that order) the said libraries: pip yarl multidict wheel setuptools versions of python that I tried (in all versions are downloaded with default settings with nothing changed): python-3.9.0-amd64.exe python-3.9.0.exe I tried researching about wheels and tried installing with --no-binary :all: as well, but it gave the same error message below. in all the iterations of what I have tried, it churned out the exact same error message without any deviation ples help :< Using cached discord-1.0.1-py3-none-any.whl (1.1 kB) Collecting discord.py>=1.0.1 Using cached discord.py-1.5.1-py3-none-any.whl (701 kB) Processing c:\users\mt\appdata\local\pip\cache\wheels\b6\9c\bd\6b99bc6ec9dab11f3756d31fb8506d3ecf07aea58b6201f539\aiohttp-3.6.3-py3-none-any.whl Collecting attrs>=17.3.0 Using cached attrs-20.2.0-py2.py3-none-any.whl (48 kB) Collecting chardet<4.0,>=2.0 Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB) Collecting async-timeout<4.0,>=3.0 Using cached async_timeout-3.0.1-py3-none-any.whl (8.2 kB) Collecting yarl<1.6.0,>=1.0 Using cached yarl-1.5.1.tar.gz (173 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Collecting multidict<5.0,>=4.5 Using cached multidict-4.7.6.tar.gz (50 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Collecting idna>=2.0 Using cached idna-2.10-py2.py3-none-any.whl (58 kB) Building wheels for collected packages: yarl, multidict Building wheel for yarl (PEP 517) ... 
error ERROR: Command errored out with exit status 1: command: 'c:\users\mt\appdata\local\programs\python\python39\python.exe' 'c:\users\mt\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' build_wheel 'C:\Users\MT\AppData\Local\Temp\tmptlhkh7zi' cwd: C:\Users\MT\AppData\Local\Temp\pip-install-nztu4nu2\yarl Complete output (35 lines): ********************** * Accellerated build * ********************** running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.9 creating build\lib.win-amd64-3.9\yarl copying yarl\_quoting.py -> build\lib.win-amd64-3.9\yarl copying yarl\_quoting_py.py -> build\lib.win-amd64-3.9\yarl copying yarl\_url.py -> build\lib.win-amd64-3.9\yarl copying yarl\__init__.py -> build\lib.win-amd64-3.9\yarl running egg_info writing yarl.egg-info\PKG-INFO writing dependency_links to yarl.egg-info\dependency_links.txt writing requirements to yarl.egg-info\requires.txt writing top-level names to yarl.egg-info\top_level.txt reading manifest file 'yarl.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.cache' found anywhere in distribution warning: no previously-included files found matching 'yarl\*.html' warning: no previously-included files found matching 'yarl\*.so' warning: no previously-included files found matching 'yarl\*.pyd' no previously-included directories found matching 'docs\_build' writing manifest file 'yarl.egg-info\SOURCES.txt' copying yarl\__init__.pyi -> build\lib.win-amd64-3.9\yarl copying yarl\_quoting_c.c -> build\lib.win-amd64-3.9\yarl copying yarl\_quoting_c.pyi -> build\lib.win-amd64-3.9\yarl copying yarl\_quoting_c.pyx -> build\lib.win-amd64-3.9\yarl copying yarl\py.typed -> build\lib.win-amd64-3.9\yarl running build_ext building 'yarl._quoting_c' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for yarl Building wheel for multidict (PEP 517) ... 
error ERROR: Command errored out with exit status 1: command: 'c:\users\mt\appdata\local\programs\python\python39\python.exe' 'c:\users\mt\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' build_wheel 'C:\Users\MT\AppData\Local\Temp\tmpzb98brnr' cwd: C:\Users\MT\AppData\Local\Temp\pip-install-nztu4nu2\multidict Complete output (40 lines): ********************** * Accellerated build * ********************** running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.9 creating build\lib.win-amd64-3.9\multidict copying multidict\_abc.py -> build\lib.win-amd64-3.9\multidict copying multidict\_compat.py -> build\lib.win-amd64-3.9\multidict copying multidict\_multidict_base.py -> build\lib.win-amd64-3.9\multidict copying multidict\_multidict_py.py -> build\lib.win-amd64-3.9\multidict copying multidict\__init__.py -> build\lib.win-amd64-3.9\multidict running egg_info writing multidict.egg-info\PKG-INFO writing dependency_links to multidict.egg-info\dependency_links.txt writing top-level names to multidict.egg-info\top_level.txt reading manifest file 'multidict.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files found matching 'multidict\_multidict.html' warning: no previously-included files found matching 'multidict\*.so' warning: no previously-included files found matching 'multidict\*.pyd' warning: no previously-included files found matching 'multidict\*.pyd' no previously-included directories found matching 'docs\_build' writing manifest file 'multidict.egg-info\SOURCES.txt' copying multidict\__init__.pyi -> build\lib.win-amd64-3.9\multidict copying multidict\_multidict.c -> build\lib.win-amd64-3.9\multidict copying multidict\py.typed -> build\lib.win-amd64-3.9\multidict creating build\lib.win-amd64-3.9\multidict\_multilib copying multidict\_multilib\defs.h -> build\lib.win-amd64-3.9\multidict\_multilib copying multidict\_multilib\dict.h -> build\lib.win-amd64-3.9\multidict\_multilib copying multidict\_multilib\istr.h -> build\lib.win-amd64-3.9\multidict\_multilib copying multidict\_multilib\iter.h -> build\lib.win-amd64-3.9\multidict\_multilib copying multidict\_multilib\pair_list.h -> build\lib.win-amd64-3.9\multidict\_multilib copying multidict\_multilib\views.h -> build\lib.win-amd64-3.9\multidict\_multilib running build_ext building 'multidict._multidict' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for multidict Failed to build yarl multidict ERROR: Could not build wheels for yarl, multidict which use PEP 517 and cannot be installed directly``` | I also had the exact same issue today, since i downloaded node.js and it updated my python 8 to python 9 and i had to reinstall all of my moduels including dpy. The solution is to follow what it says error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ https://visualstudio.microsoft.com/visual-cpp-build-tools/ | 23 | 6 |
64,503,929 | 2020-10-23 | https://stackoverflow.com/questions/64503929/convert-x-and-y-arrays-into-a-frequencies-grid | I would like to convert two arrays (x and y) into a frequency n x n matrix (n = 5), indicating in each cell the number of points it contains. It consists of resampling both variables into five intervals and counting the number of points per cell. I have tried using pandas pivot_table but don't know how to reference each axis coordinate. The X and Y arrays are two dependent variables that contain values between 0 and 100. I would really appreciate someone's aid. Thank you very much in advance. This is an example of the code: import pandas as pd import numpy as np import matplotlib.pyplot as plt # Arrays example. They are always float type and ranging 0-100. (n_size array = 15) x = 100 * np.random.random(15) y = 100 * np.random.random(15) # Df created for trying to pivot and counting values per cell df = pd.DataFrame({'X':x,'Y':y}) # Plot the example data: df.plot(x = 'X',y = 'Y', style = 'o') This is what I have: This is the objective matrix, saved as a df: | If you do not explicitly need to use pandas (which you don't, if it's just about a frequency matrix), consider using numpy.histogram2d: # Sample data x = 100*np.random.random(15) y = 100*np.random.random(15) Construct your bins (since your x and y bins are the same, one set is enough) bins = np.linspace(0, 100, 5+1) # bins = array([ 0., 20., 40., 60., 80., 100.]) Now use the histogram function: binned, binx, biny = np.histogram2d(x, y, bins = [bins, bins]) # To get the result you desire, transpose objmat = binned.T Note: x-values are binned along the first dimension (axis 0), which visually means 'vertical'. Hence the transpose. Plotting: fig, ax = plt.subplots() ax.grid() ax.set_xlim(0, 100) ax.set_ylim(0, 100) ax.scatter(x, y) for i in range(objmat.shape[0]): for j in range(objmat.shape[1]): c = int(objmat[::-1][j,i]) ax.text((bins[i]+bins[i+1])/2, (bins[j]+bins[j+1])/2, str(c), fontdict={'fontsize' : 16, 'ha' : 'center', 'va' : 'center'}) Result: | 19 | 7 |
64,483,136 | 2020-10-22 | https://stackoverflow.com/questions/64483136/how-can-you-identify-what-versions-of-vs-code-an-extensions-will-work-with | I'm trying to install the MS Python extension (ms-python.python-2020.7.96456.vsix) on a VS Code (1.40.2) install and I'm receiving the following error. "Unable to install extension 'ms-python-python' as it is not compatible with VS Code '1.40.2'". How do I go about finding out what version would be compatible? I'm in an environment where I can't connect to the internet, so the vsix must be used in an offline mode. | Thanks rioV8! After looking into the package.json I've found that the "engines" field is what details the minimum version of VS Code required. Per code.visualstudio.com (https://code.visualstudio.com/api/working-with-extensions/publishing-extension) Visual Studio Code compatibility When authoring an extension, you will need to describe what is the extension's compatibility to Visual Studio Code itself. This can be done via the engines.vscode field inside package.json: { "engines": { "vscode": "^1.8.0" } } A value of 1.8.0 means that your extension is compatible only with VS Code 1.8.0. A value of ^1.8.0 means that your extension is compatible with VS Code 1.8.0 and onwards, including 1.8.1, 1.9.0, etc. You can use the engines.vscode field to make sure the extension only gets installed for clients that contain the API you depend on. This mechanism plays well with the Stable release as well as the Insiders one. For example, imagine that the latest Stable version of VS Code is 1.8.0 and that during 1.9.0's development a new API is introduced and thus made available in the Insider release through version 1.9.0-insider. If you want to publish an extension version that benefits from this API, you should indicate a version dependency of ^1.9.0. Your new extension version will be installed only on VS Code greater than or equal to 1.9.0, which means all current Insider customers will get it, while the Stable ones will only get the update when Stable reaches 1.9.0. | 7 | 4 |
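For the offline situation in the question, a small helper sketch that reads the engines.vscode field straight out of a downloaded .vsix, assuming the usual layout where a .vsix is a zip archive with the manifest at extension/package.json:

import json
import zipfile


def vscode_engine_of(vsix_path):
    """Return the engines.vscode constraint declared inside a .vsix file."""
    with zipfile.ZipFile(vsix_path) as vsix:
        manifest = json.loads(vsix.read("extension/package.json"))
    return manifest["engines"]["vscode"]


print(vscode_engine_of("ms-python.python-2020.7.96456.vsix"))  # prints the required VS Code range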
64,500,342 | 2020-10-23 | https://stackoverflow.com/questions/64500342/creating-requirements-txt-in-pip-compatible-format-in-a-conda-virtual-environmen | I have created a conda virtual environment on a Windows 10 PC to work on a project. To install the required packages and dependencies, I am using conda install <package> instead of pip install <package> as per the best practices mentioned in https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#using-pip-in-an-environment In order to distribute my software, I choose to create an environment.yml and a requirements.txt file targeting the conda and non-conda users respectively. I am able to export the current virtual environment into a yml file, so the conda users are taken care of. But, for the non-conda users to be able to replicate the same environment, I need to create and share the requirements.txt file. This file can be created using conda list --export > requirements.txt but this format is not compatible with pip and other users can't use pip install -r requirements.txt on their systems. Using pip freeze > requiremens.txt is a solution that is mentioned here and here. This means that non-conda users can simply execute pip install -r requirements.txt inside a virtual environment which they may create using virtualenv in the absence of conda. However, if you generate a requiremets.txt file in the above style, you will end up with a requirements.txt file that has symbolic links. This is because we tried to create a requirements.txt file for packages that are installed using conda install and not pip install. For example, the requirements.txt file that I generated in a similar fashion looks like this. certifi==2020.6.20 cycler==0.10.0 kiwisolver==1.2.0 matplotlib @ file:///C:/ci/matplotlib-base_1603355780617/work mkl-fft==1.2.0 mkl-random==1.1.1 mkl-service==2.3.0 numpy @ file:///C:/ci/numpy_and_numpy_base_1596215850360/work olefile==0.46 pandas @ file:///C:/ci/pandas_1602083338010/work Pillow @ file:///C:/ci/pillow_1602770972588/work pyparsing==2.4.7 python-dateutil==2.8.1 pytz==2020.1 sip==4.19.13 six==1.15.0 tornado==6.0.4 wincertstore==0.2 These symbolic links will lead to errors when this file is used to install the dependencies. Steps I took that landed me to the above requirements.txt file: Created a new conda virtual environment using conda create -n myenv python=3.8 Activated the newly created conda virtual environment using conda activate myenv Installed pip using conda install pip Installed pandas using conda intall pandas Installed matplotlib using conda install matplotlib generated a pip compatible requrements.txt file using pip freeze > requirements.txt So, my question is how do you stick to the best practice of using conda install instead of pip install while still being able to distribute your software package to both conda and non-conda users? | The best solution I've found for the above is the combination I will describe below. For conda, I would first export the environment list as environment.yml and omit the package build numbers, which is often what makes it hard to reproduce the environment on another OS: conda env export > environment.yml --no-builds Output: name: myenv channels: - defaults - conda-forge dependencies: - blas=1.0 - ca-certificates=2020.10.14 - certifi=2020.6.20 ... For pip, what you describe above is apparently a well-known issue in more recent versions of pip. 
The workaround to get a "clean" requirements.txt file, is to export as such: pip list --format=freeze > requirements.txt Output: certifi==2020.6.20 cycler==0.10.0 kiwisolver==1.2.0 matplotlib==3.3.2 mkl-fft==1.2.0 ... Notice that the above are different between pip and conda and that is most likely because conda is more generic than pip and includes not only Python packages. Personally, I have found that for distributing a package, it is perhaps more concise to determine the minimum set of packages required and their versions by inspecting your code (what imports do you make?), instead of blindly exporting the full pip or conda lists, which might end up (accidentally or not) including packages that are not really necessary to use the package. | 20 | 45 |
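On the last point of the answer, determining the minimum set of packages by inspecting your imports, here is a rough helper sketch that lists the top-level modules imported anywhere under a source tree. Note that import names and PyPI distribution names can differ (e.g. cv2 vs opencv-python), so the output still needs a manual pass:

import ast
import pathlib


def top_level_imports(source_dir="."):
    found = set()
    for path in pathlib.Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                # level == 0 skips relative imports, which are local packages, not dependencies
                found.add(node.module.split(".")[0])
    return found


print(sorted(top_level_imports(".")))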
64,499,551 | 2020-10-23 | https://stackoverflow.com/questions/64499551/formatting-of-df-to-latex | I want to export a Pandas DataFrame to LaTeX with . as a thousand seperator and , as a decimal seperator and two decimal digits. E.g. 4.511,34 import numpy as np import pandas as pd df = pd.DataFrame( np.array([[4511.34242, 4842.47565]]), columns=['col_1', 'col_2'] ) df.to_latex('table.tex', float_format="{:0.2f}".format) Ho can I achieve this? If I change the . to an , in the code I receive ValueError: Invalid format specifier. Thank you! | I would format with _ as the thousands seperator and . as the decimal seperator and then replace those with str.replace. df.applymap(lambda x: str.format("{:0_.2f}", x).replace('.', ',').replace('_', '.')).to_latex('table.tex') Gives the following latex: \begin{tabular}{lll} \toprule {} & col\_1 & col\_2 \\ \midrule 0 & 4.511,34 & 4.842,48 \\ \bottomrule \end{tabular} | 8 | 1 |
64,498,561 | 2020-10-23 | https://stackoverflow.com/questions/64498561/activate-conda-environment-using-subprocess | I am trying to find version of pandas: def check_library_version(): print("Checking library version") subprocess.run(f'bash -c "conda activate {ENV_NAME};"', shell=True) import pandas pandas.__version__ Desired output: 1.1.3 Output: Checking library version CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init <SHELL_NAME> Currently supported shells are: bash fish tcsh xonsh zsh powershell See 'conda init --help' for more information and options. IMPORTANT: You may need to close and restart your shell after running 'conda init'. To clarify, I don't seek to update the environment of the currently running script; I just want to briefly activate that environment and find out which Pandas version is installed there. | This doesn't make any sense at all; the Conda environment you activated is terminated when the subprocess terminates. You should (conda init and) conda activate your virtual environment before you run any Python code. If you just want to activate, run a simple Python script as a subprocess of your current Python, and then proceed with the current script outside of the virtual environment, try something like subprocess.run(f"""conda init bash conda activate {ENV_NAME} python -c 'import pandas; print(pandas.__version__)'""", shell=True, executable='/bin/bash', check=True) This just prints the output to the user; if your Python program wants to receive it, you need to add the correct flags; check = subprocess.run(...whatever..., text=True, capture_output=True) pandas_version = check.stdout (It is unfortunate that there is no conda init sh; I don't think anything in the above depends on executable='/bin/bash' otherwise. Perhaps there is a way to run this in POSIX sh and drop the Bash requirement.) | 6 | 3 |
64,499,180 | 2020-10-23 | https://stackoverflow.com/questions/64499180/pandas-find-the-nearest-value-for-in-a-column | I have the following table: year pop1 pop2 0 0 100000 100000 1 1 999000 850000 2 2 860000 700000 3 3 770000 650000 I want to find, for each pop (pop1, pop2), the year the pop was closest to a given number, for example, the year the pop was closest to 830000. Is there any way to find the nearest value inside a column based on a given value? I have seen this post (How do I find the closest values in a Pandas series to an input number?) but it seems that there the result is both above and below, and I want to end up with only one number. I don't have a code example because I can't find any argument to use to get the nearest value. | Convert column year to index, then subtract the value, take absolute values, and get the index (here year) of the nearest value, i.e. the minimum, with DataFrame.idxmin: val = 830000 s = df.set_index('year').sub(val).abs().idxmin() print (s) pop1 2 pop2 1 dtype: int64 | 6 | 7 |
64,496,437 | 2020-10-23 | https://stackoverflow.com/questions/64496437/python-list-type-declaration | I tried to set variable types in my functions. There is no problem when I use a normal variable type. For example, def myString(name:str) -> str: return "hello " + name However, I ran into a problem with lists. Many examples on the internet say to use List, but that gave an error. Now I use list, and there is no error. Is it OK to use this? Another problem: I found that someone can use def myListString() -> list[str]: return ["ABC", "CDE"] but I get an error. TypeError: 'type' object is not subscriptable How should I correct this? Another problem I found is that I cannot use myClass as a type hint inside myClass itself. For example, class Point: def __init__(self, x:int, y:int): self.x:int = x self.y:int = y def isSamePoint(self, p:Point) -> bool: return ((self.x==p.x) and (self.y==p.y)) p0 = Point(10, 5) p1 = Point(5, 5) p0.isSamePoint(p1) I get an error, def isSamePoint(self, p:Point): NameError: name 'Point' is not defined Please help me solve the problem. | TypeError: 'type' object is not subscriptable Python 3.9 allows for list[str]. Earlier you had to import List from typing and do -> List[str]. NameError: name 'Point' is not defined If you want to declare the type of "self" you can either put that in a string def isSamePoint(self, p: "Point") -> bool: or create an interface. >>> class A: pass >>> class B(A): pass >>> b = B() >>> isinstance(b, A) True so def isSamePoint(self, p: A) would do the trick. Also, for checking whether two points are the same, you might want to consider defining your own __eq__. | 6 | 12 |
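Pulling the two fixes from the answer together, a sketch that runs on Python 3.7+: typing.List covers the pre-3.9 subscripting problem, and from __future__ import annotations makes every annotation lazily evaluated, so the class can reference itself without quotes:

from __future__ import annotations  # Python 3.7+: annotations become lazy strings

from typing import List


def myListString() -> List[str]:    # works where list[str] would raise TypeError
    return ["ABC", "CDE"]


class Point:
    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

    def isSamePoint(self, p: Point) -> bool:  # no NameError thanks to the __future__ import
        return (self.x == p.x) and (self.y == p.y)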
64,496,264 | 2020-10-23 | https://stackoverflow.com/questions/64496264/python-class-self-value-error-expected-type-str-got-tuplestr-instead-azure | I've created a class and trying to assign one of its values to something that expects a string, however it is saying it is getting a Tuple[str] instead, and I don't see how? from azure.identity import ClientSecretCredential class ServicePrincipal: """ Service Principal class is used to authorise the service """ def __init__(self): self.tenant_id = "123-xyz", self.client_id = "123-abc", self.client_secret = "123-lmn", def credentials(self): """ Returns a ClientServiceCredential object using service principal details :return: """ # ISSUE IS HERE return ClientSecretCredential( tenant_id=self.tenant_id, # <---- Getting Tuple[str] client_id=self.client_id, # <---- Getting Tuple[str] client_secret=self.client_secret, # <---- Getting Tuple[str] ) if I copy paste the string directly into the parameter its fine. So the self.value is causing an issue somehow? | You should remove the commas here: def __init__(self): self.tenant_id = "123-xyz", # remove the comma self.client_id = "123-abc", # remove the comma self.client_secret = "123-lmn", # remove the comma Comma make the variable be a Tuple | 9 | 16 |
64,493,332 | 2020-10-23 | https://stackoverflow.com/questions/64493332/jinja-templating-in-airflow-along-with-formatted-text | I'm trying to run a SQL statement to be rendered by Airflow, but I'm also trying to include a variable inside the statement which is passed in from Python. The SQL statement is just a where clause, and after the WHERE, I'm trying to add a datetime, minus a few seconds: f" ' {{ ts - macros.timedelta(seconds={lower_delay} + 1) }} ' " so I want what's in the double curly braces to be computed and rendered in Airflow, but I want to pass in this variable called lower_delay before it does that. I've tried varying combinations with zero, one, or two additional curly braces around lower_delay as well as the entire string, but I seem to get a different error each time. What is the proper way to pass in this lower_delay variable (it's just a number) so that it ends up rendering correctly? | Jinja templating requires two curly braces, when you use f-strings or str.format it will replace two braces with one while rendering: Format strings contain “replacement fields” surrounded by curly braces {}. Anything that is not contained in braces is considered literal text, which is copied unchanged to the output. If you need to include a brace character in the literal text, it can be escaped by doubling: {{ and }}. So the following should work: f" ' {{{{ ts - macros.timedelta(seconds={lower_delay} + 1) }}}} ' " | 9 | 19 |
64,489,249 | 2020-10-22 | https://stackoverflow.com/questions/64489249/generating-better-help-from-argparse-when-nargs | Like many command line tools, mine accepts optional filenames. Argparse seems to support this via nargs='*', which is working for me as expected: import argparse parser = argparse.ArgumentParser() parser.add_argument( 'files', help='file(s) to parse instead of stdin', nargs='*') parser.parse_args() However, the help output is bizarre: $ ./help.py -h usage: help.py [-h] [files [files ...]] How can I avoid the nested optional and repeated parameter name? The repetition adds no information beyond [files ...], which is the traditional way optional parameter lists are indicated on Unix: $ grep --help usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]] [-e pattern] [-f file] [--binary-files=value] [--color=when] [--context[=num]] [--directories=action] [--label] [--line-buffered] [--null] [pattern] [file ...] $ ls --help Usage: exa [options] [files...] $ vim --help Usage: nvim [options] [file ...] Edit file(s) Any help is appreciated. I'm trying argparse because using it seems to be a Python best practice, but this help output is a dealbreaker for me. | This was fixed in Python 3.9, see https://bugs.python.org/issue38438 and commit a0ed99bc that fixed it. Your code produces the usage message you expect if run on 3.9: Python 3.9.0 (default, Oct 12 2020, 02:44:01) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument('files', help='file(s) to parse instead of stdin', nargs='*') _StoreAction(option_strings=[], dest='files', nargs='*', const=None, default=None, type=None, choices=None, help='file(s) to parse instead of stdin', metavar=None) >>> parser.print_help() usage: [-h] [files ...] | 5 | 6 |
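For anyone stuck below Python 3.9, a small workaround sketch: argparse accepts a hand-written usage string, so the odd auto-generated form can simply be overridden:

import argparse

parser = argparse.ArgumentParser(usage="%(prog)s [-h] [files ...]")
parser.add_argument("files", nargs="*", help="file(s) to parse instead of stdin")
args = parser.parse_args()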
64,470,052 | 2020-10-21 | https://stackoverflow.com/questions/64470052/how-to-set-default-branch-for-gitpython | With GitPython, I can create a new repo with the following: from git.repo.base import Repo Repo.init('/tmp/some-repo/') The repo is created with the default branch master. How can I modify this default branch? Update: As suggested in the answers below, I have tried using Repo.init('/tmp/some-repo', initial_branch="main"), however it renders this exception: Traceback (most recent call last): File "/app/checker/tests.py", line 280, in test_alternative_compare_branch comp_repo_main = Repo.init( File "/usr/local/lib/python3.9/site-packages/git/repo/base.py", line 937, in init git.init(**kwargs) File "/usr/local/lib/python3.9/site-packages/git/cmd.py", line 542, in <lambda> return lambda *args, **kwargs: self._call_process(name, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/git/cmd.py", line 1005, in _call_process return self.execute(call, **exec_kwargs) File "/usr/local/lib/python3.9/site-packages/git/cmd.py", line 822, in execute raise GitCommandError(command, status, stderr_value, stdout_value) git.exc.GitCommandError: Cmd('git') failed due to: exit code(129) cmdline: git init --initial-branch=main stderr: 'error: unknown option `initial-branch=main' In the git docs, it states that the command for setting initial branch is --initial-branch (https://git-scm.com/docs/git-init/2.28.0#Documentation/git-init.txt---initial-branchltbranch-namegt). Judging by the error, I think that the additional kwargs feature of GitPython is not including the -- prefix. | According to the docs, init takes the same arguments as git init as keyword arguments. You do have to turn - into _. from git import Repo Repo.init('/tmp/some-repo/', initial_branch='main') UPDATE initial-branch was added very recently in v2.28.0. You'll need to upgrade Git to use it. If you can't, manually change the branch name with branch.rename(new_name). Unfortunately you can't do this until after the first commit, no branches truly exist yet. That's a Git limitation and why they added initial-branch and also the init.defaultBranch config option. | 5 | 8 |
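A sketch of the fallback mentioned in the update, for machines where Git itself is older than 2.28: the branch only truly exists after the first commit, at which point it can be renamed:

from git import Repo

repo = Repo.init("/tmp/some-repo/")
repo.index.commit("initial commit")   # the default branch now exists
repo.active_branch.rename("main")     # rename it from master to main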
64,482,562 | 2020-10-22 | https://stackoverflow.com/questions/64482562/specify-per-file-ignores-with-pyproject-toml-and-flake8 | I am using flake8 (with flakehell but that should not interfere) and keep its configuration in a pyproject.toml file. I want to add a per-file-ignores config but nothing works and there is no documentation on how it is supposed to be formatted in a toml file. Flake8 docs show only the 'native' config file format: per-file-ignores = project/__init__.py:F401 setup.py:E121 other_project/*:W9 There is no description / example for pyproject.toml. I tried: per-file-ignores=["file1.py:W0621", "file2.py:W0621"] and per-file-ignores={"file1.py" = "W0621", "file2.py" = "W0621"} both of which silently fail and have no effect (the warning is still raised). What is the proper syntax for per-file-ignores setting in flake8/flakehell while using pyproject.toml? | flake8 does not have support for pyproject.toml, only .flake8, setup.cfg, and tox.ini disclaimer: I am the flake8 maintainer | 27 | 50 |
64,483,854 | 2020-10-22 | https://stackoverflow.com/questions/64483854/efficient-way-of-filtering-by-datetime-in-groupby | Given the DataFrame generated by: import numpy as np import pandas as pd from datetime import timedelta np.random.seed(0) rng = pd.date_range('2015-02-24', periods=14, freq='9H') ids = [1]*5 + [2]*2 + [3]*7 df = pd.DataFrame({'id': ids, 'time_entered': rng, 'val': np.random.randn(len(rng))}) df: id time_entered val 0 1 2015-02-24 00:00:00 1.764052 1 1 2015-02-24 09:00:00 0.400157 2 1 2015-02-24 18:00:00 0.978738 3 1 2015-02-25 03:00:00 2.240893 4 1 2015-02-25 12:00:00 1.867558 5 2 2015-02-25 21:00:00 -0.977278 6 2 2015-02-26 06:00:00 0.950088 7 3 2015-02-26 15:00:00 -0.151357 8 3 2015-02-27 00:00:00 -0.103219 9 3 2015-02-27 09:00:00 0.410599 10 3 2015-02-27 18:00:00 0.144044 11 3 2015-02-28 03:00:00 1.454274 12 3 2015-02-28 12:00:00 0.761038 13 3 2015-02-28 21:00:00 0.121675 I need to, for each id, remove rows which are more than 24hours (1 day) from the latest time_entered, for that id. My current solution: def custom_transform(x): datetime_from = x["time_entered"].max() - timedelta(days=1) x = x[x["time_entered"] > datetime_from] return x df.groupby("id").apply(lambda x: custom_transform(x)).reset_index(drop=True) which gives the correct, expected, output: id time_entered val 0 1 2015-02-24 18:00:00 0.978738 1 1 2015-02-25 03:00:00 2.240893 2 1 2015-02-25 12:00:00 1.867558 3 2 2015-02-25 21:00:00 -0.977278 4 2 2015-02-26 06:00:00 0.950088 5 3 2015-02-28 03:00:00 1.454274 6 3 2015-02-28 12:00:00 0.761038 7 3 2015-02-28 21:00:00 0.121675 However, my real data is tens of millions of rows, and hundreds of thousands of unique ids, because of this, this solution is infeasible (takes very long time). Is there a more efficient way to filter the data? I appreciate all ideas! | Generally, avoid groupby().apply() since it's not vectorized across groups, not to mention the overhead for memory allocation if you are returning new dataframes as in your case. How about finding the time threshold with groupby().transform then use boolean indexing on the whole data: time_max_by_id = df.groupby('id')['time_entered'].transform('max') - pd.Timedelta('1D') df[df['time_entered'] > time_max_by_id] Output: id time_entered val 2 1 2015-02-24 18:00:00 0.978738 3 1 2015-02-25 03:00:00 2.240893 4 1 2015-02-25 12:00:00 1.867558 5 2 2015-02-25 21:00:00 -0.977278 6 2 2015-02-26 06:00:00 0.950088 11 3 2015-02-28 03:00:00 1.454274 12 3 2015-02-28 12:00:00 0.761038 13 3 2015-02-28 21:00:00 0.121675 | 7 | 4 |
64,481,847 | 2020-10-22 | https://stackoverflow.com/questions/64481847/partial-disallow-overriding-given-keyword-arguments | Is there a way to disallow overriding given keyword arguments in a partial? Say I want to create function bar which always has a set to 1. In the following code: from functools import partial def foo(a, b): print(a) print(b) bar = partial(foo, a=1) bar(b=3) # This is fine and prints 1, 3 bar(a=3, b=3) # This prints 3, 3 You can happily call bar and set a to 3. Is it possible to create bar out of foo and make sure that calling bar(a=3, b=3) either raises an error or silently ignores a=3 and keeps using a=1 as in the partial? | This is by design. The documentation for partial says (emphasize mine): functools.partial(func, /, *args, **keywords) Return a new partial object which when called will behave like func called with the positional arguments args and keyword arguments keywords. If more arguments are supplied to the call, they are appended to args. If additional keyword arguments are supplied, they extend and override keywords. If you do not want that, you can manually reject frozen keyword arguments: def freeze(f, **kwargs): frozen = kwargs def wrapper(*args, **kwargs): kwargs.update(frozen) return f(*args, **kwargs) return wrapper You can now do: >>> baz = freeze(foo, a=1) >>> baz(b=3, a=2) 1 3 | 6 | 3 |
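A stricter variant of the freeze helper from the answer, assuming the foo from the question: instead of silently ignoring an override attempt it raises, which makes the mistake visible at the call site:

def freeze_strict(f, **frozen):
    def wrapper(*args, **kwargs):
        clashes = frozen.keys() & kwargs.keys()
        if clashes:
            raise TypeError("cannot override frozen argument(s): " + ", ".join(sorted(clashes)))
        return f(*args, **frozen, **kwargs)
    return wrapper


baz = freeze_strict(foo, a=1)
baz(b=3)         # prints 1 then 3
baz(a=2, b=3)    # TypeError: cannot override frozen argument(s): a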
64,465,836 | 2020-10-21 | https://stackoverflow.com/questions/64465836/python17874-0x111e92dc0-malloc-cant-allocate-region | I am building a Python web scraping script and i have to use cv2 (OpenCV). So I install using pip install opencv-python as the website directs. And it also installs numpy as a dependency. However, right after installing that, I'm unable to run my python script. It crashes with the error below: I think the issue is from Numpy but I don't know how to fix this. Please help. This is my environment: MacOS 10.15.7 Python 3.9.0 pip 20.2.4 from /usr/local/lib/python3.9/site-packages/pip (python 3.9) Numpy 1.19.2 opencv-python 4.4.0.44 | I had the same issue when I tried to run a project I made on Windows on Mac OS. The solution I found was to install an older version of numpy (e.g. numpy 1.18). Here is the command I ran to do so : sudo python -m pip install numpy==1.18 --force I don't think it is a good solution but it is ok for a temporary fix. | 7 | 8 |
64,464,513 | 2020-10-21 | https://stackoverflow.com/questions/64464513/how-to-resize-sg-window-in-pysimplegui | I am using PYsimpleGUI in my python code, and while using the window element to create the main window, this is my code. My code: import PySimpleGUI as sg layout = [ [sg.Button('Close')] ] window = sg.Window('This is a long heading.', layout) while True: event, values = window.read() if event == sg.WIN_CLOSED or event == 'Close': break break window.close() I notice that when I run this program, the full heading is not shown as the window resizes itself and becomes small. Is there a way to resize sg.window? | You can add size argument in the sg.Window. Try this : import PySimpleGUI as sg layout = [ [sg.Button('Close')] ] window = sg.Window('This is a long heading.', layout,size=(290, 50)) while True: event, values = window.read() if event == sg.WIN_CLOSED or event == 'Close': break break window.close() | 7 | 9 |
64,470,110 | 2020-10-21 | https://stackoverflow.com/questions/64470110/how-to-delete-multiple-files-in-gcs-except-1-using-gsutil | I currently have this: gsutil ls gs://basty/*_TZ001.* gs://basty/20201007_TZ001.csv gs://basty/20201008_TZ001.csv gs://basty/20201009_TZ001.csv My problem is that I have a bucket with many files and I want to delete all of them except one (20201009_TZ001.csv). I thought of using bash or Python, but I don't know how. | You can filter the results with grep (using the -v flag to invert the match) and pipe them to xargs: gsutil ls gs://basty/*_TZ001.* |\ grep -v 20201009_TZ001.csv |\ xargs -i{} gsutil rm {} To be sure that is precisely what you want, you could first execute a dry-run command: gsutil ls gs://basty/*_TZ001.* |\ grep -v 20201009_TZ001.csv |\ xargs -i{} echo "Will delete: " {} | 5 | 9 |
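Since the question leaves Python open as an option, here is a sketch with the google-cloud-storage client doing the same thing. The bucket name, pattern and the file to keep are taken from the question, and credentials are assumed to be already configured:

import fnmatch

from google.cloud import storage

KEEP = "20201009_TZ001.csv"

client = storage.Client()
bucket = client.bucket("basty")
for blob in bucket.list_blobs():
    if fnmatch.fnmatch(blob.name, "*_TZ001.*") and blob.name != KEEP:
        print("deleting", blob.name)
        blob.delete()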
64,466,231 | 2020-10-21 | https://stackoverflow.com/questions/64466231/pythonic-way-to-operate-comma-separated-list-of-ranges-1-5-10-25-27-30 | I'm currently working on an API where they send me a str range in this format: "1-5,10-25,27-30" and I need to add or remove numbers while preserving the format. If they send me "1-5,10-25,27-30" and I remove "15", the result must be "1-5,10-14,16-25,27-30", and if they send me "1-5,10-25,27-30" and I add "26", the result must be "1-5,10-30". I've been trying to convert the entire range into a list of numbers, delete the target and convert it back, but doing it that way is very slow because they send 8-digit numbers, so iterating over everything is not the best way. How can I do this? Is there a library for working with this format? Thanks! | intspan deals with ranges of integers and operations on them >>> from intspan import intspan >>> s = "1-5,10-25,27-30" >>> span = intspan(s) >>> str(span) '1-5,10-25,27-30' >>> span.add(26) >>> str(span) '1-5,10-30' >>> span.discard(15) >>> str(span) '1-5,10-14,16-30' | 5 | 7 |
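If adding a dependency is not an option, a small stdlib-only sketch of the same two operations (no input validation, and negative numbers are not handled):

def parse_spans(text):
    """'1-5,10' -> {1, 2, 3, 4, 5, 10}"""
    numbers = set()
    for part in text.split(","):
        lo, _, hi = part.partition("-")
        numbers.update(range(int(lo), int(hi or lo) + 1))
    return numbers


def format_spans(numbers):
    """{1, 2, 3, 5} -> '1-3,5'"""
    nums, pieces = sorted(numbers), []
    start = prev = nums[0]
    for n in nums[1:]:
        if n != prev + 1:                  # gap found: close the current run
            pieces.append(f"{start}-{prev}" if start != prev else str(start))
            start = n
        prev = n
    pieces.append(f"{start}-{prev}" if start != prev else str(start))
    return ",".join(pieces)


spans = parse_spans("1-5,10-25,27-30")
spans.discard(15)
print(format_spans(spans))   # 1-5,10-14,16-25,27-30

spans = parse_spans("1-5,10-25,27-30")
spans.add(26)
print(format_spans(spans))   # 1-5,10-30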
64,459,175 | 2020-10-21 | https://stackoverflow.com/questions/64459175/how-to-replace-multiple-forward-slashes-in-a-directory-by-a-single-slash | My path: '/home//user////document/test.jpg' I want this to be converted into: '/home/user/document/test.jpg' How to do this? | Use os.path.abspath or normpath to canonicalise the path: >>> import os.path >>> os.path.abspath('/home//user////document/test.jpg') '/home/user/document/test.jpg' | 9 | 9 |
64,371,174 | 2020-10-15 | https://stackoverflow.com/questions/64371174/how-to-change-variable-label-names-for-the-legend-in-a-plotly-express-line-chart | I want to change the variable/label names in plotly express in python. I first create a plot: import pandas as pd import plotly.express as px d = {'col1': [1, 2, 3], 'col2': [3, 4, 5]} df = pd.DataFrame(data=d) fig = px.line(df, x=df.index, y=['col1', 'col2']) fig.show() Which yields: I want to change the label names from col1 to hello and from col2 to hi. I have tried using labels in the figure, but I cannot get it to work: fig = px.line(df, x=df.index, y=['col1', 'col2'], labels={'col1': "hello", 'col2': "hi"}) fig.show() But this seems to do nothing, while not producing an error. Obviously I could achieve my goals by changing the column names, but the actual plot i'm trying to create doesn't really allow for that since it comes from several different dataframes. | The answer: Without changing the data source, a complete replacement of names both in the legend, legendgroup and hovertemplate will require: newnames = {'col1':'hello', 'col2': 'hi'} fig.for_each_trace(lambda t: t.update(name = newnames[t.name], legendgroup = newnames[t.name], hovertemplate = t.hovertemplate.replace(t.name, newnames[t.name]) ) ) Plot: The details: Using fig.for_each_trace(lambda t: t.update(name = newnames[t.name])) ...you can change the names in the legend without changing the source by using a dict newnames = {'col1':'hello', 'col2': 'hi'} ...and map new names to the existing col1 and col2 in the following part of the figure structure (for your first trace, col1): {'hovertemplate': 'variable=col1<br>index=%{x}<br>value=%{y}<extra></extra>', 'legendgroup': 'col1', 'line': {'color': '#636efa', 'dash': 'solid'}, 'mode': 'lines', 'name': 'hello', # <============================= here! 'orientation': 'v', 'showlegend': True, 'type': 'scatter', 'x': array([0, 1, 2], dtype=int64), 'xaxis': 'x', 'y': array([1, 2, 3], dtype=int64), 'yaxis': 'y'}, But as you can see, this doesn't do anything with 'legendgroup': 'col1', nor 'hovertemplate': 'variable=col1<br>index=%{x}<br>value=%{y}<extra></extra>' And depending on the complexity of your figure, this can pose a problem. So I would add legendgroup = newnames[t.name] and hovertemplate = t.hovertemplate.replace(t.name, newnames[t.name])into the mix. Complete code: import pandas as pd import plotly.express as px from itertools import cycle d = {'col1': [1, 2, 3], 'col2': [3, 4, 5]} df = pd.DataFrame(data=d) fig = px.line(df, x=df.index, y=['col1', 'col2']) newnames = {'col1':'hello', 'col2': 'hi'} fig.for_each_trace(lambda t: t.update(name = newnames[t.name], legendgroup = newnames[t.name], hovertemplate = t.hovertemplate.replace(t.name, newnames[t.name]) ) ) | 55 | 58 |
64,368,565 | 2020-10-15 | https://stackoverflow.com/questions/64368565/delete-and-release-memory-of-a-single-pandas-dataframe | I am running a long ETL pipeline in pandas. I have to create different pandas dataframes and I want to release memory for some of the dataframes. I have been reading how to release memory and I saw that runing this command doesn't release the memory: del dataframe Following this link: How to delete multiple pandas (python) dataframes from memory to save RAM?, one of the answer say that del statement does not delete an instance, it merely deletes a name. In the answer they say about put the dataframe in a list and then del the list: lst = [pd.DataFrame(), pd.DataFrame(), pd.DataFrame()] del lst If I only want to release one dataframe I need to put it in a list and then delete a list like this: lst = [pd.DataFrame()] del lst I have seen also this question: How do I release memory used by a pandas dataframe? There are different answers like: import gc del df_1 gc.collect() Or just at the end of the dataframe use df = "" or there is a better way to achieve that? | From the original link that you included, you have to include variable in the list, delete the variable and then delete the list. If you just add to the list, it won't delete the original dataframe, when you delete the list. import pandas import psutil import gc psutil.virtual_memory().available * 100 / psutil.virtual_memory().total >> 68.44267845153809 df = pd.read_csv('pythonSRC/bigFile.txt',sep='|') len(df) >> 20082056 psutil.virtual_memory().available * 100 / psutil.virtual_memory().total >> 56.380510330200195 lst = [df] del lst psutil.virtual_memory().available * 100 / psutil.virtual_memory().total >> 56.22601509094238 lst = [df] del df del lst psutil.virtual_memory().available * 100 / psutil.virtual_memory().total >> 76.77617073059082 gc.collect() >> 0 I tried also just deleting the dataframe and using gc.collect() with the same result! del df gc.collect() psutil.virtual_memory().available * 100 / psutil.virtual_memory().total >> 76.59363746643066 However, the execution time of adding the dataframe to the list and deleting the list and the variable is a bit faster then calling gc.collect(). I used time.time() to measure the difference and gc.collect() was almost a full second slower! EDIT: according to the correct comment below, del df and del [df] indeed generate the same code. The problem with the original post, and my original answer is that as soon as you give a name to the list as in lst=[df], you are no longer referencing the original dataframe. lst=[df] del lst is not the same as: del [df] | 9 | 13 |
64,413,061 | 2020-10-18 | https://stackoverflow.com/questions/64413061/python-pip-install-ends-with-command-errored-out-with-exit-status-1 | I'm new to python, and I'm trying to run some basic codes that require some libraries. And when I'm trying to install a library (e.g. pip install matplotlib-venn) I get this long error: ERROR: Command errored out with exit status 1: command: 'c:\users\scurt\appdata\local\programs\python\python39\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\scurt\\AppData\\Local\\Temp\\pip-install-msmwhfl3\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\scurt\\AppData\\Local\\Temp\\pip-install-msmwhfl3\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\scurt\AppData\Local\Temp\pip-pip-egg-info-g0f_wde0' cwd: C:\Users\scurt\AppData\Local\Temp\pip-install-msmwhfl3\matplotlib\ Complete output (249 lines): WARNING: The wheel package is not available. ERROR: Command errored out with exit status 1: command: 'c:\users\scurt\appdata\local\programs\python\python39\python.exe' 'c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\scurt\AppData\Local\Temp\tmpd9x4v47l' cwd: C:\Users\scurt\AppData\Local\Temp\pip-wheel-24k214oa\numpy Complete output (200 lines): Running from numpy source directory. setup.py:470: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE blis_info: libraries blis not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE openblas_info: libraries openblas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize 
IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE C:\Users\scurt\AppData\Local\Temp\pip-wheel-24k214oa\numpy\numpy\distutils\system_info.py:1914: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): blas_info: libraries blas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE C:\Users\scurt\AppData\Local\Temp\pip-wheel-24k214oa\numpy\numpy\distutils\system_info.py:1914: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\scurt\AppData\Local\Temp\pip-wheel-24k214oa\numpy\numpy\distutils\system_info.py:1914: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE flame_info: libraries flame not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries tatlas,tatlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs libraries tatlas,tatlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries satlas,satlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs libraries satlas,satlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries ptf77blas,ptcblas,atlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs libraries ptf77blas,ptcblas,atlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries f77blas,cblas,atlas not found in c:\users\scurt\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs libraries f77blas,cblas,atlas not found in c:\users\scurt\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: libraries lapack not found in ['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE C:\Users\scurt\AppData\Local\Temp\pip-wheel-24k214oa\numpy\numpy\distutils\system_info.py:1748: 
UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\scurt\AppData\Local\Temp\pip-wheel-24k214oa\numpy\numpy\distutils\system_info.py:1748: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] c:\users\scurt\appdata\local\programs\python\python39\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running dist_info running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.9 creating build\src.win-amd64-3.9\numpy creating build\src.win-amd64-3.9\numpy\distutils building library "npymath" sources error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\scurt\appdata\local\programs\python\python39\python.exe' 'c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\scurt\AppData\Local\Temp\tmpd9x4v47l' Check the logs for full command output. Traceback (most recent call last): File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\setuptools\installer.py", line 126, in fetch_build_egg subprocess.check_call(cmd) File "c:\users\scurt\appdata\local\programs\python\python39\lib\subprocess.py", line 373, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\scurt\\AppData\\Local\\Temp\\tmpnf74y62m', '--quiet', 'numpy>=1.15']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\scurt\AppData\Local\Temp\pip-install-msmwhfl3\matplotlib\setup.py", line 242, in <module> setup( # Finally, pass this all along to distutils to do the heavy lifting. 
File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\setuptools\__init__.py", line 152, in setup _install_setup_requires(attrs) File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\setuptools\__init__.py", line 147, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\setuptools\dist.py", line 673, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\pkg_resources\__init__.py", line 764, in resolve dist = best[req.key] = env.best_match( File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\pkg_resources\__init__.py", line 1049, in best_match return self.obtain(req, installer) File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\pkg_resources\__init__.py", line 1061, in obtain return installer(requirement) File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\setuptools\dist.py", line 732, in fetch_build_egg return fetch_build_egg(self, req) File "c:\users\scurt\appdata\local\programs\python\python39\lib\site-packages\setuptools\installer.py", line 128, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['c:\\users\\scurt\\appdata\\local\\programs\\python\\python39\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\scurt\\AppData\\Local\\Temp\\tmpnf74y62m', '--quiet', 'numpy>=1.15']' returned non-zero exit status 1. Edit setup.cfg to change the build options; suppress output with --quiet. BUILDING MATPLOTLIB matplotlib: yes [3.3.2] python: yes [3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]] platform: yes [win32] sample_data: yes [installing] tests: no [skipping due to configuration] macosx: no [Mac OS-X only] ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. I tried to reinstall python multiple times (I'm using the latest version 3.9). I updated the path variables as I've seen in other posts. I tried to run the same command but with pip3, same thing. | I had a similar problem while trying to run "pip install seaborn". The problem is that the new Python 3.9 does not provide binary wheels scipy and numpy do not as discussed here. What I did was uninstall Python 3.9 and reinstall Python 3.8 and everything worked perfectly. | 9 | 1 |
64,430,805 | 2020-10-19 | https://stackoverflow.com/questions/64430805/how-to-compress-video-to-target-size-by-python | I am uploading text and videos to a site by Python program. This site says they only receive video files of up to 50 MB in size. Otherwise, they will reject the video and other associated information. To ensure I can send video continuously, I want to compress it to target size (e.g. 50 MB) before sending. Because no loss of quality is impossible, it is ok to have moderate clarity loss in video or audio. Is there any elegant way in Python for this purpose? Thanks! | Compress video files by Python and FFmpeg Tools FFmpeg is a powerful tool for video editing. And there is a great Python binding named ffmpeg-python (API Reference) for this. Firstly, pip install ffmpeg-python and install FFmpeg. Steps Probe the configuration of video by function ffmpeg.probe() to get duration, audio & video bit rate and so on. And calculate the bit rate of the target file based on what we have. Then, construct commands by ffmpeg.input() and ffmpeg.output(). Finally, run it. Codes Following is the example code. Change the compression algo for your situation if you want. For easy follow-up, I hided the code of boundary condition. The code I am using is in GitHub Gist. Any bug report is welcomed! import os, ffmpeg def compress_video(video_full_path, output_file_name, target_size): # Reference: https://en.wikipedia.org/wiki/Bit_rate#Encoding_bit_rate min_audio_bitrate = 32000 max_audio_bitrate = 256000 probe = ffmpeg.probe(video_full_path) # Video duration, in s. duration = float(probe['format']['duration']) # Audio bitrate, in bps. audio_bitrate = float(next((s for s in probe['streams'] if s['codec_type'] == 'audio'), None)['bit_rate']) # Target total bitrate, in bps. target_total_bitrate = (target_size * 1024 * 8) / (1.073741824 * duration) # Target audio bitrate, in bps if 10 * audio_bitrate > target_total_bitrate: audio_bitrate = target_total_bitrate / 10 if audio_bitrate < min_audio_bitrate < target_total_bitrate: audio_bitrate = min_audio_bitrate elif audio_bitrate > max_audio_bitrate: audio_bitrate = max_audio_bitrate # Target video bitrate, in bps. video_bitrate = target_total_bitrate - audio_bitrate i = ffmpeg.input(video_full_path) ffmpeg.output(i, os.devnull, **{'c:v': 'libx264', 'b:v': video_bitrate, 'pass': 1, 'f': 'mp4'} ).overwrite_output().run() ffmpeg.output(i, output_file_name, **{'c:v': 'libx264', 'b:v': video_bitrate, 'pass': 2, 'c:a': 'aac', 'b:a': audio_bitrate} ).overwrite_output().run() # Compress input.mp4 to 50MB and save as output.mp4 compress_video('input.mp4', 'output.mp4', 50 * 1000) Notes Don't waste your time! Judge the file size before compressing. You can disable two-pass function by only keeping second ffmpeg.output() without parameter 'pass': 2. If video bit rate < 1000, it will throw exception Bitrate is extremely low. The biggest min file size I recommend is: # Best min size, in kB. best_min_size = (32000 + 100000) * (1.073741824 * duration) / (8 * 1024) If you specify a extremely small target file size, the size of generated file maybe exceed it. For most time, this will not happen. | 12 | 21 |
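A small sketch of the "don't waste your time" note from the answer: check the current size before invoking the two-pass encode. compress_video refers to the function defined in the answer above.

```python
import os

def compress_if_needed(video_path, output_path, target_size_kb):
    # skip the expensive two-pass encode when the file already fits the target
    current_kb = os.path.getsize(video_path) / 1024
    if current_kb <= target_size_kb:
        print(f"{video_path} is already {current_kb:.0f} kB; no compression needed")
        return video_path
    compress_video(video_path, output_path, target_size_kb)  # from the answer above
    return output_path
```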
64,445,167 | 2020-10-20 | https://stackoverflow.com/questions/64445167/how-to-convert-positive-numbers-to-negative-in-python | I know that abs() can be used to convert numbers to positive, but is there something that does the opposite? I have an array full of numbers which I need to convert to negative: import random array1 = [] arrayLength = 25 for i in range(arrayLength): array1.append(random.randint(0, arrayLength)) I thought perhaps I could convert the numbers as they're being added, not after the array is finished. Does anyone know the code for that? Many thanks in advance | If you want to force a number to negative, regardless of whether it's initially positive or negative, you can use: -abs(n) Note that integer 0 will remain 0. | 24 | 47 |
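Applied to the loop from the question, converting while the numbers are generated becomes a one-liner. A minimal sketch:

```python
import random

arrayLength = 25
array1 = [-abs(random.randint(0, arrayLength)) for _ in range(arrayLength)]
print(array1)   # every element is now zero or negative
```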
64,435,497 | 2020-10-19 | https://stackoverflow.com/questions/64435497/how-to-properly-insert-pandas-nat-datetime-values-to-my-postgresql-table | I am tying to bulk insert a dataframe to my postgres dB. Some columns in my dataframe are date types with NaT as a null value. Which is not supported by PostgreSQL, I've tried to replace NaT (using pandas) with other NULL type identifies but that did not work during my inserts. I used df = df.where(pd.notnull(df), 'None') to replace all the NaTs, Example of errors that keep coming up due to datatype issues. Error: invalid input syntax for type date: "None" LINE 1: ...0,1.68757,'2022-11-30T00:29:59.679000'::timestamp,'None','20... My driver and insert statement to postgresql dB: def execute_values(conn, df, table): """ Using psycopg2.extras.execute_values() to insert the dataframe """ # Create a list of tupples from the dataframe values tuples = [tuple(x) for x in df.to_numpy()] # Comma-separated dataframe columns cols = ','.join(list(df.columns)) # SQL quert to execute query = "INSERT INTO %s(%s) VALUES %%s" % (table, cols) cursor = conn.cursor() try: extras.execute_values(cursor, query, tuples) conn.commit() except (Exception, psycopg2.DatabaseError) as error: print("Error: %s" % error) conn.rollback() cursor.close() return 1 print("execute_values() done") cursor.close() Info about my dataframe: for this case the culprits are the datetime columns only. how is this commonly solved? | You're re-inventing the wheel. Just use pandas' to_sql method and it will match up the column names, and take care of the NaT values. Use method="multi" to give you the same effect as psycopg2's execute_values. from pprint import pprint import pandas as pd import sqlalchemy as sa table_name = "so64435497" engine = sa.create_engine("postgresql://scott:[email protected]/test") with engine.begin() as conn: # set up test environment conn.exec_driver_sql(f"DROP TABLE IF EXISTS {table_name}") conn.exec_driver_sql( f"CREATE TABLE {table_name} (" "id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY, " "txt varchar(50), " "txt2 varchar(50), " "dt timestamp)" ) df = pd.read_csv(r"C:\Users\Gord\Desktop\so64435497.csv") df["dt"] = pd.to_datetime(df["dt"]) print(df) """console output: dt txt2 txt 0 2020-01-01 00:00:00 foo2 foo 1 NaT bar2 bar 2 2020-01-02 03:04:05 baz2 baz """ # run test df.to_sql( table_name, conn, index=False, if_exists="append", method="multi" ) pprint( conn.exec_driver_sql( f"SELECT id, txt, txt2, dt FROM {table_name}" ).all() ) """console output: [(1, 'foo', 'foo2', datetime.datetime(2020, 1, 1, 0, 0)), (2, 'baz', 'baz2', None), (3, 'bar', 'bar2', datetime.datetime(2020, 1, 2, 3, 4, 5))] """ | 12 | 4 |
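If the psycopg2 execute_values() path from the question is kept instead of to_sql, the key step is turning NaT/NaN into real Python None (not the string 'None') before building the tuples. A minimal sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "dt": pd.to_datetime(["2022-11-30 00:29:59.679", None]),
    "x": [1.68757, np.nan],
})

# cast to object first so None is not coerced straight back to NaT/NaN
clean = df.astype(object).where(pd.notnull(df), None)
tuples = [tuple(row) for row in clean.to_numpy()]
print(tuples)   # None values map to SQL NULL when passed to psycopg2
```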
64,381,297 | 2020-10-16 | https://stackoverflow.com/questions/64381297/cant-fully-disable-python-linting-pylance-vscode | I've been searching online for quite a while now and can't seem to find a solution for my problem. I installed Pylance (the newest Microsoft interpreter for Python) and can't seem to disable linting at all. I've tried a lot of options but none worked. Here's a screenshot of how annoying linting is in my code now. Here's how my VSCode Settings file looks like: { // "python.pythonPath": "C://Anaconda3//envs//py34//python.exe", // "python.pythonPath": "C://Anaconda3_2020//python.exe", // "python.pythonPath": "C://Anaconda3_2020_07//python.exe", "python.pythonPath": "C://Anaconda3//python.exe", "python.analysis.disabled": [ "unresolved-import" ], "editor.suggestSelection": "first", "editor.fontSize": 15, "typescript.tsserver.useSeparateSyntaxServer": false, "workbench.colorTheme": "Monokai ST3", "workbench.colorCustomizations": { "editor.background": "#000000", "statusBar.background": "#000000", "statusBar.noFolderBackground": "#212121", "statusBar.debuggingBackground": "#263238" }, "window.zoomLevel": 0, "editor.renderLineHighlight": "none", "editor.fontFamily": "Meslo LG L", "editor.tabCompletion": "on", "editor.parameterHints.enabled": true, "python.terminal.executeInFileDir": true, "python.terminal.launchArgs": [ "-u" ], "terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe", "editor.lineHeight": 0, "workbench.editor.scrollToSwitchTabs": true, "python.autoComplete.showAdvancedMembers": false, "python.languageServer": "Pylance", "python.linting.enabled": false, "python.linting.pylintEnabled": false, "python.linting.lintOnSave": false, "python.linting.flake8Enabled": false, "python.linting.mypyEnabled": false, "python.linting.banditEnabled": false, "python.linting.pylamaEnabled": false, "python.linting.pylintArgs": [ "--unsafe-load-any-extension=y", "--load-plugin", "pylint_protobuf", "--disable=all", "--disable=undefined-variable", ], "python.linting.mypyArgs": [ "--ignore-missing-imports", "--follow-imports=silent", "--show-column-numbers", "--extension-pkg-whitelist=all", "--disable=all", "--disable=undefined-variable", ], } Any thoughts? Any help is much appreciated. | You can disable the language server with: "python.languageServer": "None" | 33 | 35 |
64,382,706 | 2020-10-16 | https://stackoverflow.com/questions/64382706/dask-distributed-scheduler-error-couldnt-gather-keys | import joblib from sklearn.externals.joblib import parallel_backend with joblib.parallel_backend('dask'): from dask_ml.model_selection import GridSearchCV import xgboost from xgboost import XGBRegressor grid_search = GridSearchCV(estimator= XGBRegressor(), param_grid = param_grid, cv = 3, n_jobs = -1) grid_search.fit(df2,df3) I created a dask cluster using two local machines using client = dask.distributed.client('tcp://191.xxx.xx.xxx:8786') I am trying to find best parameters using dask gridsearchcv. I am facing the following error. istributed.scheduler - ERROR - Couldn't gather keys {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1202, 2)": ['tcp://127.0.0.1:3738']} state: ['processing'] workers: ['tcp://127.0.0.1:3738'] NoneType: None distributed.scheduler - ERROR - Workers don't have promised key: ['tcp://127.0.0.1:3738'], ('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1202, 2) NoneType: None distributed.client - WARNING - Couldn't gather 1 keys, rescheduling {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1202, 2)": ('tcp://127.0.0.1:3738',)} distributed.nanny - WARNING - Restarting worker distributed.scheduler - ERROR - Couldn't gather keys {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1, 2)": ['tcp://127.0.0.1:3730']} state: ['processing'] workers: ['tcp://127.0.0.1:3730'] NoneType: None distributed.scheduler - ERROR - Couldn't gather keys {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 0, 1)": ['tcp://127.0.0.1:3730'], "('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 5, 1)": ['tcp://127.0.0.1:3729'], "('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 4, 2)": ['tcp://127.0.0.1:3729'], "('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 2, 1)": ['tcp://127.0.0.1:3730']} state: ['processing', 'processing', 'processing', 'processing'] workers: ['tcp://127.0.0.1:3730', 'tcp://127.0.0.1:3729'] NoneType: None distributed.scheduler - ERROR - Couldn't gather keys {'cv-n-samples-7cb7087b3aff75a31f487cfe5a9cedb0': ['tcp://127.0.0.1:3729']} state: ['processing'] workers: ['tcp://127.0.0.1:3729'] NoneType: None distributed.scheduler - ERROR - Couldn't gather keys {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 4, 0)": ['tcp://127.0.0.1:3729'], "('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 2, 0)": ['tcp://127.0.0.1:3729'], "('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 0, 0)": ['tcp://127.0.0.1:3729']} state: ['processing', 'processing', 'processing'] workers: ['tcp://127.0.0.1:3729'] NoneType: None distributed.scheduler - ERROR - Couldn't gather keys {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 0, 2)": ['tcp://127.0.0.1:3729'], "('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 2, 2)": ['tcp://127.0.0.1:3729']} state: ['processing', 'processing'] workers: ['tcp://127.0.0.1:3729'] NoneType: None distributed.scheduler - ERROR - Workers don't have promised key: ['tcp://127.0.0.1:3730'], ('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1, 2) NoneType: None I hope someone helps in solving this issue. Thanks in advance. | I also meet the same issue, and I find it's likely to be caused by firewall. Suppose we have two machines, 191.168.1.1 for scheduler and 191.168.1.2 for worker. 
When we start scheduler, we may get following info: distributed.scheduler - INFO - ----------------------------------------------- distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy distributed.scheduler - INFO - ----------------------------------------------- distributed.scheduler - INFO - Clear task state distributed.scheduler - INFO - Scheduler at: tcp://191.168.1.1:8786 distributed.scheduler - INFO - dashboard at: :8787 so for scheduler, we should confirm that port 8786 and port 8786 can be accessed. Simlilarly, we can check worker's info: istributed.nanny - INFO - Start Nanny at: 'tcp://191.168.1.2:39042' distributed.diskutils - INFO - Found stale lock file and directory '/root/dask-worker-space/worker-39rf_n28', purging distributed.worker - INFO - Start worker at: tcp://191.168.1.2:39040 distributed.worker - INFO - Listening to: tcp://191.168.1.2:39040 distributed.worker - INFO - dashboard at: 191.168.1.2:39041 distributed.worker - INFO - Waiting to connect to: tcp://191.168.1.1:8786 distributed.worker - INFO - ------------------------------------------------- nanny port is 39042, worker port is 39040 and dashboard port is 39041. set these ports open for both 191.168.1.1 and 191.168.1.2: firewall-cmd --permanent --add-port=8786/tcp firewall-cmd --permanent --add-port=8787/tcp firewall-cmd --permanent --add-port=39040/tcp firewall-cmd --permanent --add-port=39041/tcp firewall-cmd --permanent --add-port=39042/tcp firewall-cmd --reload and task can run sucessfully. Finally, Dask will choose ports for worker randomly, we can also start worker with customized ports: dask-worker 191.168.1.1:8786 --worker-port 39040 --dashboard-address 39041 --nanny-port 39042 More parameters can be referred here. | 8 | 1 |
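Once the ports are open, a quick connectivity check from Python confirms the scheduler and workers are reachable; the address below is the example one used above:

```python
from dask.distributed import Client

client = Client("tcp://191.168.1.1:8786", timeout=10)   # example scheduler address
print(list(client.scheduler_info()["workers"]))          # remote workers should be listed
print(client.submit(lambda x: x + 1, 41).result())       # a round trip through a worker
client.close()
```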
64,429,113 | 2020-10-19 | https://stackoverflow.com/questions/64429113/how-should-a-namedtemporaryfile-be-annotated | I tried typing.IO as suggested in Type hint for a file or file-like object?, but it doesn't work: from __future__ import annotations from tempfile import NamedTemporaryFile from typing import IO def example(tmp: IO) -> str: print(tmp.file) return tmp.name print(example(NamedTemporaryFile())) for this, mypy tells me: test.py:6: error: "IO[Any]" has no attribute "file"; maybe "fileno"? and Python runs fine. So the code is ok. | I don't think this can be easily type hinted. If you check the definition of NamedTemporaryFile, you'll see that it's a function that ends in: return _TemporaryFileWrapper(file, name, delete) And _TemporaryFileWrapper is defined as: class _TemporaryFileWrapper: Which means there isn't a super-class that can be indicated, and _TemporaryFileWrapper is "module-private". It also doesn't look like it has any members that make it a part of an existing Protocol * (except for Iterable and ContextManager; but you aren't using those methods here). I think you'll need to use _TemporaryFileWrapper and ignore the warnings: from tempfile import _TemporaryFileWrapper # Weak error def example(tmp: _TemporaryFileWrapper) -> str: print(tmp.file) return tmp.name If you really want a clean solution, you could implement your own Protocol that includes the attributes you need, and have it also inherit from Iterable and ContextManager. Then you can type-hint using your custom Protocol. * It was later pointed out that it does fulfil IO, but the OP requires attributes that aren't in IO, so that can't be used. | 15 | 12 |
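The "clean solution" mentioned at the end can stay quite small. A sketch of a custom Protocol covering just the attributes the function touches follows; the protocol name is made up, and strict mypy settings may need further tweaks:

```python
from __future__ import annotations

from tempfile import NamedTemporaryFile
from typing import IO, Any, Protocol


class NamedFileLike(Protocol):
    # only the members example() actually uses
    file: IO[Any]
    name: str


def example(tmp: NamedFileLike) -> str:
    print(tmp.file)
    return tmp.name


print(example(NamedTemporaryFile()))
```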
64,420,348 | 2020-10-19 | https://stackoverflow.com/questions/64420348/ignore-userwarning-from-openpyxl-using-pandas | I have tons of .xlsm files that I have to load. Each Excel file has 6 sheets. Because of that, I'm opening each Excel file like this, using pandas: for excel_file in files_list: with pd.ExcelFile(excel_file, engine = "openpyxl") as f: df1 = pd.read_excel(f, "Sheet1") df2 = pd.read_excel(f, "Sheet2") df3 = pd.read_excel(f, "Sheet3") ... After each iteration I am passing the df to other function and do some stuff with it. I am using pd.ExcelFile to load the file into memory just once and then separate it on DataFrames. However, when doing this, I am getting the following warning: /opt/anaconda3/lib/python3.8/site-packages/openpyxl/worksheet/_reader.py:300: UserWarning: Data Validation extension is not supported and will be removed warn(msg) No matter the warning, the information is loaded correctly from the Excel file and no data is missing. It takes about 0.8s to load each Excel file and all of its sheets into df. If I use the default engine on pandas to load each Excel file, the warning goes away, but the time it takes for each file goes up to 5 or even 6 seconds. I saw this post, but there wasn't an answer on how to remove the warning, which is what I need, as everything's working correctly. How can I disable said UserWarning? | You can do this using warnings core module: import warnings warnings.filterwarnings('ignore', category=UserWarning, module='openpyxl') You can also specify the particular module you'd like to silence warnings for by adding an argument module="openpyxl". | 9 | 25 |
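A scoped variant, in case silencing every openpyxl UserWarning globally is too broad: suppress only around the reads with a context manager. The file name below is hypothetical.

```python
import warnings

import pandas as pd

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=UserWarning)
    with pd.ExcelFile("example.xlsm", engine="openpyxl") as f:
        df1 = pd.read_excel(f, "Sheet1")
        df2 = pd.read_excel(f, "Sheet2")
# warnings behave normally again outside the with-block
```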
64,348,889 | 2020-10-14 | https://stackoverflow.com/questions/64348889/how-to-get-local-ip-address-python | There's some code I found on the internet that supposedly gives my machine's local network IP address: import socket hostname = socket.gethostname() local_ip = socket.gethostbyname(hostname) but the IP it returns is 192.168.94.2, while my IP address on the Wi-Fi network is actually 192.168.1.107. How can I get only the Wi-Fi network local IP address using only Python? I want it to work on Windows, Linux and macOS. | You can use this code: import socket hostname = socket.getfqdn() print("IP Address:", socket.gethostbyname_ex(hostname)[2][1]) or this to get the public IP: import requests import json print(json.loads(requests.get("https://ip.seeip.org/jsonip?").text)["ip"]) | 5 | 10 |
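Another common approach, which tends to return the address of the interface actually used for outbound traffic (e.g. the Wi-Fi adapter) rather than whatever the hostname resolves to. A sketch:

```python
import socket

def get_lan_ip():
    # connecting a UDP socket does not send any packets, but it makes the OS
    # pick the interface (and therefore the local IP) it would route through
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(get_lan_ip())
```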
64,414,009 | 2020-10-18 | https://stackoverflow.com/questions/64414009/why-is-user-is-authenticated-asserting-true-after-logout | I am trying to write a test for logging out a user in Django. Here is the code: urls.py from django.conf.urls import url from django.contrib import admin from accounts.views import LoginView, LogoutView urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^login/', LoginView.as_view(), name='login'), url(r'^logout/', LogoutView.as_view(), name='logout'), ] views.py from django.http import HttpResponseRedirect from django.contrib.auth import login, logout from django.views.generic import View class LogoutView(View): def get(self, request): logout(request) return HttpResponseRedirect('/') tests.py from django.test import TestCase, Client from django.contrib.auth.models import User class LogoutTest(TestCase): def setUp(self): self.client = Client() self.user = User.objects.create_user( username='user1', email='[email protected]', password='top_secret123' ) def test_user_logs_out(self): self.client.login(email=self.user.email, password=self.user.password) self.assertTrue(self.user.is_authenticated) response = self.client.get('/logout/') self.assertFalse(self.user.is_authenticated) self.assertRedirects(response, '/', 302) The assertion self.assertFalse(self.user.is_authenticated) is failing. Testing through the browser seems to work fine. It seems like the user would not be authenticated after calling logout(). Am I missing something? | It seems like the user would not be authenticated after calling logout(). Am I missing something? .is_authenticated [Django-doc] does not check if a user is logged in. Every real User returns always True for is_authenticated. An AnonymousUser [Django-doc] will return False for example. If you thus log out, then request.user will be the AnonymousUser, and thus not be authenticated. In other words, if you use request.user.is_authenticated, you will call this on the logged-in user if the session is bounded to a user (you logged in with the browser), and you call this on the AnonymousUser in case the browser did not log in a user/the browser logged out. | 6 | 6 |
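Because is_authenticated is always True on a real User instance, a test has to look at the client session (or at request.user on a later request) instead. A sketch of a version that detects the logout, reusing the URLs from the question; note that login() needs the raw password, not the stored hash:

```python
from django.contrib.auth.models import User
from django.test import TestCase


class LogoutTest(TestCase):
    def setUp(self):
        self.user = User.objects.create_user(
            username='user1', password='top_secret123')

    def test_user_logs_out(self):
        # login() takes the raw password (self.user.password holds the hash)
        self.assertTrue(self.client.login(
            username='user1', password='top_secret123'))
        self.assertIn('_auth_user_id', self.client.session)      # session bound to a user

        response = self.client.get('/logout/')

        self.assertNotIn('_auth_user_id', self.client.session)   # session no longer bound
        self.assertRedirects(response, '/', 302,
                             fetch_redirect_response=False)
```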
64,390,904 | 2020-10-16 | https://stackoverflow.com/questions/64390904/how-can-i-extract-the-weight-and-bias-of-linear-layers-in-pytorch | In model.state_dict(), model.parameters() and model.named_parameters(), the weights and biases of nn.Linear() modules are stored separately, e.g. fc1.weight and fc1.bias. Is there a simple, pythonic way to get both of them at once? The expected usage would look similar to this: layer = model['fc1'] print(layer.weight) print(layer.bias) | You can recover the named parameters for each linear layer in your model like so: from torch import nn for layer in model.children(): if isinstance(layer, nn.Linear): print(layer.state_dict()['weight']) print(layer.state_dict()['bias']) | 6 | 7 |
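For access by name, close to the model['fc1'] style the question asks for, the named-module views work as well. A sketch with a stand-in model; the layer name 'fc1' is the one assumed in the question:

```python
from torch import nn

model = nn.Sequential()
model.add_module("fc1", nn.Linear(4, 2))   # stand-in model

layer = dict(model.named_modules())["fc1"]
print(layer.weight)
print(layer.bias)

# recent PyTorch versions also offer a direct lookup
layer = model.get_submodule("fc1")
print(layer.weight, layer.bias)
```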
64,436,317 | 2020-10-19 | https://stackoverflow.com/questions/64436317/how-to-check-ocsp-client-certificate-revocation-using-python-requests-library | How do I make a simple request for certificate revocation status to an EJBCA OSCP Responder using the Python requests library? Example: # Determine if certificate has been revoked ocsp_url = req_cert.extensions[2].value[0].access_location.value ocsp_headers = {"whatGoes: here?"} ocsp_body = {"What goes here?"} ocsp_response = requests.get(ocsp_url, ocsp_headers, ocsp_body) if (ocsp_response == 'revoked'): return func.HttpResponse( "Certificate is not valid (Revoked)." ) | Basically it involves the following steps: retrieve the corresponding cert for a hostname if a corresponding entry is contained in the certificate, you can query the extensions via AuthorityInformationAccessOID.CA_ISSUERS, which will provide you with a link to the issuer certificate if successful retrieve the issuer cert with this link similarly you get via AuthorityInformationAccessOID.OCSP the corresponding OCSP server with this information about the current cert, the issuer_cert and the ocsp server you can feed OCSPRequestBuilder to create an OCSP request use requests.get to get the OCSP response from the OCSP response retrieve the certificate_status To retrieve a cert for a hostname and port, you can use this fine answer: https://stackoverflow.com/a/49132495. The OCSP handling in Python is documented here: https://cryptography.io/en/latest/x509/ocsp.html. Code If you convert the above points into a self-contained example, it looks something like this: import base64 import ssl import requests from urllib.parse import urljoin from cryptography import x509 from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import serialization from cryptography.hazmat.primitives.hashes import SHA256 from cryptography.x509 import ocsp from cryptography.x509.ocsp import OCSPResponseStatus from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID def get_cert_for_hostname(hostname, port): conn = ssl.create_connection((hostname, port)) context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) sock = context.wrap_socket(conn, server_hostname=hostname) certDER = sock.getpeercert(True) certPEM = ssl.DER_cert_to_PEM_cert(certDER) return x509.load_pem_x509_certificate(certPEM.encode('ascii'), default_backend()) def get_issuer(cert): aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value issuers = [ia for ia in aia if ia.access_method == AuthorityInformationAccessOID.CA_ISSUERS] if not issuers: raise Exception(f'no issuers entry in AIA') return issuers[0].access_location.value def get_ocsp_server(cert): aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value ocsps = [ia for ia in aia if ia.access_method == AuthorityInformationAccessOID.OCSP] if not ocsps: raise Exception(f'no ocsp server entry in AIA') return ocsps[0].access_location.value def get_issuer_cert(ca_issuer): issuer_response = requests.get(ca_issuer) if issuer_response.ok: issuerDER = issuer_response.content issuerPEM = ssl.DER_cert_to_PEM_cert(issuerDER) return x509.load_pem_x509_certificate(issuerPEM.encode('ascii'), default_backend()) raise Exception(f'fetching issuer cert failed with response status: {issuer_response.status_code}') def get_oscp_request(ocsp_server, cert, issuer_cert): builder = ocsp.OCSPRequestBuilder() builder = builder.add_certificate(cert, issuer_cert, SHA256()) req = builder.build() 
req_path = base64.b64encode(req.public_bytes(serialization.Encoding.DER)) return urljoin(ocsp_server + '/', req_path.decode('ascii')) def get_ocsp_cert_status(ocsp_server, cert, issuer_cert): ocsp_resp = requests.get(get_oscp_request(ocsp_server, cert, issuer_cert)) if ocsp_resp.ok: ocsp_decoded = ocsp.load_der_ocsp_response(ocsp_resp.content) if ocsp_decoded.response_status == OCSPResponseStatus.SUCCESSFUL: return ocsp_decoded.certificate_status else: raise Exception(f'decoding ocsp response failed: {ocsp_decoded.response_status}') raise Exception(f'fetching ocsp cert status failed with response status: {ocsp_resp.status_code}') def get_cert_status_for_host(hostname, port): print(' hostname:', hostname, "port:", port) cert = get_cert_for_hostname(hostname, port) ca_issuer = get_issuer(cert) print(' issuer ->', ca_issuer) issuer_cert = get_issuer_cert(ca_issuer) ocsp_server = get_ocsp_server(cert) print(' ocsp_server ->', ocsp_server) return get_ocsp_cert_status(ocsp_server, cert, issuer_cert) Test 1: Good Certificate A test call like the following with a good certificate status = get_cert_status_for_host('software7.com', 443) print('software7.com:', status, '\n') results in the following output: hostname: software7.com port: 443 issuer -> http://cacerts.digicert.com/EncryptionEverywhereDVTLSCA-G1.crt ocsp_server -> http://ocsp.digicert.com software7.com: OCSPCertStatus.GOOD Test 2: Revoked Certificate Of course you also have to do a counter test with a revoked cert. Here revoked.badssl.com is the first choice: status = get_cert_status_for_host('revoked.badssl.com', 443) print('revoked.badssl.com:', status, '\n') This gives as output: hostname: revoked.badssl.com port: 443 issuer -> http://cacerts.digicert.com/DigiCertSHA2SecureServerCA.crt ocsp_server -> http://ocsp.digicert.com revoked.badssl.com: OCSPCertStatus.REVOKED AIA Retrieval of the Issuer Certificate A typical scenario for a certificate relationship looks as follows: The server provides the server certificate and usually one or more intermediate certificates during the TLS handshake. The word 'usually' is used intentionally: some servers are configured not to deliver intermediate certificates. The browsers then use AIA fetching to build the certification chain. Up to two entries can be present in the Certificate Authority Information Access extension: The entry for downloading the issuer certificate and the link to the OCSP server. These entries may also be missing, but a short test script that checks the certs of the 100 most popular servers shows that these entries are usually included in certificates issued by public certification authorities. The CA Issuers entry may also be missing, but while the information about an OCSP server is available, it can be tested e.g. with OpenSSL using a self-signed certificate: In this case you would have to determine the issuer certificate from the chain in the TLS handshake, it is the certificate that comes directly after the server certificate in the chain, see also the figure above. Just for the sake of completeness: There is another case that can sometimes occur especially in conjunction with self-signed certificates: If no intermediate certificates are used, the corresponding root certificate (e.g. available in the local trust store) must be used as issuer certificate. | 7 | 15 |
64,362,772 | 2020-10-14 | https://stackoverflow.com/questions/64362772/switching-python-version-installed-by-homebrew | I have Python 3.8 and 3.9 installed via Homebrew: ~ brew list | grep python python@3.8 python@3.9 I want to use Python 3.9 as my default one with the python3 command. I tried the following: ~ brew switch python 3.9 Error: python does not have a version "3.9" in the Cellar. python's installed versions: 3.8.6 I tried to uninstall Python and reinstall it, but it's used by other packages: ~ brew uninstall python Error: Refusing to uninstall /usr/local/Cellar/python@3.8/3.8.6 because it is required by glib and php, which are currently installed. You can override this and force removal with: brew uninstall --ignore-dependencies python How can I use Python 3.9? | There is a known Homebrew issue related to the side-by-side install of Python 3.8 / 3.9. As a workaround, the following commands should work for you: brew unlink python@3.9 brew unlink python@3.8 brew link --force python@3.9 Re-opening your terminal or executing rehash may be required for the change to take effect. | 68 | 107 |
64,448,567 | 2020-10-20 | https://stackoverflow.com/questions/64448567/python3-8-no-such-file-or-directory-when-trying-to-git-commit-to-bitbucket-on-ma | I am currently on a new mac with python 3.8.2 installed. I have a bitbucket repo I cloned down. When I modify a file and git add that works fine. But when I make a git commit I get this error message env: python3.8: No such file or directory My path env variable looks like this PATH=/Users/rach/bin:/Users/rach/bin:/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin When I type whereis python3 I get /usr/bin/python3 I cannot figure out why I can't make a git commit with python3 already installed and the location in the path | Since you're using pre-commit, you can uninstall hooks by: pre-commit uninstall To install them again, run: pre-commit install | 25 | 40 |
64,381,222 | 2020-10-16 | https://stackoverflow.com/questions/64381222/python-click-access-option-values-globally | Say I have an flag --debug/--no-debug defined for the base command. This flag will affect the behavior of many operations in my program. Right now I find myself passing this flag as function parameters all over the place, which doesn't seem elegant. Especially when I need to access this flag in a deep call stack, I'll have to add this parameter to every single function on the stack. I can instead create a global variable is_debug and set its value at the beginning of the command function that receives the value of this flag. But this doesn't seem elegant to me either. Is there a better way to make some option values globally accessible using the Click library? | There are two ways to do so, depending on your needs. Both of them end up using the click Context. Personally, I'm a fan of Option 2 because then I don't have to modify function signatures (and I rarely write multi-threaded programs). It also sounds more like what you're looking for. Option 1: Pass the Context to the function Use the click.pass_context decorator to pass the click context to the function. Docs: Usage: https://click.palletsprojects.com/en/7.x/commands/#nested-handling-and-contexts API: https://click.palletsprojects.com/en/7.x/api/#click.pass_context # test1.py import click @click.pass_context def some_func(ctx, bar): foo = ctx.params["foo"] print(f"The value of foo is: {foo}") @click.command() @click.option("--foo") @click.option("--bar") def main(foo, bar): some_func(bar) if __name__ == "__main__": main() $ python test1.py --foo 1 --bar "bbb" The value of foo is: 1 Option 2: click.get_current_context() Pull the context directly from the current thread via click.get_current_context(). Available starting in Click 5.0. Docs: Usage: https://click.palletsprojects.com/en/7.x/advanced/#global-context-access API: https://click.palletsprojects.com/en/7.x/api/#click.get_current_context Note: This only works if you're in the current thread (the same thread as what was used to set up the click commands originally). # test2.py import click def some_func(bar): c = click.get_current_context() foo = c.params["foo"] print(f"The value of foo is: {foo}") @click.command() @click.option("--foo") @click.option("--bar") def main(foo, bar): some_func(bar) if __name__ == "__main__": main() $ python test2.py --foo 1 --bar "bbb" The value of foo is: 1 | 6 | 7 |
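A third pattern worth mentioning is click's documented ctx.obj mechanism: store the flag once on the context object in the group callback and read it from any subcommand, however deep. A sketch using the --debug flag from the question:

```python
# app.py
import click

@click.group()
@click.option("--debug/--no-debug", default=False)
@click.pass_context
def cli(ctx, debug):
    ctx.ensure_object(dict)      # make sure ctx.obj is a dict
    ctx.obj["debug"] = debug     # stash the flag once

@cli.command()
@click.pass_context
def build(ctx):
    if ctx.obj["debug"]:
        click.echo("debug mode is on")
    click.echo("building...")

if __name__ == "__main__":
    cli(obj={})

# $ python app.py --debug build
# debug mode is on
# building...
```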
64,406,727 | 2020-10-17 | https://stackoverflow.com/questions/64406727/is-there-any-solution-to-packaging-a-python-app-that-uses-cppyy | I'm no novice when creating cross-platform runtimes of my python desktop apps. I create various tools for my undergraduates using mostly pyinstaller, cxfreeze, sometimes fbs, and sometimes briefcase. Anyone who does this one a regular basis knows that there are lots of quirks and adjustments needed to target Linux, windows, and macos when using arbitrary collections of python modules, but I've managed to figure everything out until now. I have a python GUI app that uses a c++ library that is huge and ever-changing, so I can't just re-write it in python. I've successfully written python code that uses the c++ library using the amazing (and possibly magical) library called cppyy that allows you to run c++ code from python without hardly any effort. Everything runs great on Linux, mac, and windows, but I cannot get it packaged into runtimes and I've tried all the systems above. All of them have no problem producing the runtimes (i.e., no errors), but they fail when you run them. Essentially they all give some sort of error about not being able to find cppyy-backend (e.g., pyinstaller and fbs which uses pyinstaller gives this message when you run the binary): /home/nogard/Desktop/cppyytest/target/MyApp/cppyy_backend/loader.py:113: UserWarning: No precompiled header available ([Errno 2] No such file or directory: '/home/nogard/Desktop/cppyytest/target/MyApp/cppyy_backend'); this may impact performance. Traceback (most recent call last): File "main.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "/home/nogard/Desktop/cppyytest/venv/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 628, in exec_module exec(bytecode, module.__dict__) File "cppyy/__init__.py", line 74, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "/home/nogard/Desktop/cppyytest/venv/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 628, in exec_module exec(bytecode, module.__dict__) File "cppyy/_cpython_cppyy.py", line 20, in <module> File "cppyy_backend/loader.py", line 74, in load_cpp_backend RuntimeError: could not load cppyy_backend library [11195] Failed to execute script main I'm really stumped. Usually, you install cppyy with pip, which installs cppyy-backend and other packages. I've even used the cppyy docs methods to compile each dependency as well as cppyy, but the result is the same. I'll use any build system that works...has anyone had success? I know I could use docker, but I tried this before and many of my students freaked out at docker asking them to change their bios settings to support virtualization So I'd like to use a normal packaging system that produces some sort of runnable binary. If you know how to get pyinstaller, cxfreeze, fbs, or briefcase to work with cppyy (e.g, if you know how to deal with the error above), please let me know. However, if you've gotten a cppyy app packaged with some other system, let me know and I'll use that one. 
If you're looking for some code to run, I've been testing out packaging methods using this minimal code: import cppyy print('hello world from python\n') cppyy.cppexec(''' #include <string> using namespace std; string mystring("hello world from c++"); std::cout << mystring << std::endl; ''') | EDIT: figured out the pyinstaller hooks; this should all be fully automatic once released With the caveat that I have no experience whatsoever with packaging run-times, so I may be missing something obvious, but I've just tried pyinstaller, and the following appears to work. First, saving your script above as example.py, then create a spec file: $ pyi-makespec example.py Then, add the headers and libraries from cppyy_backend as datas (skipping the python files, which are added by default). The simplest seems to be to pick up all directories from the backend, so change the generated example.spec by adding at the top: def backend_files(): import cppyy_backend, glob, os all_files = glob.glob(os.path.join( os.path.dirname(cppyy_backend.__file__), '*')) def datafile(path): return path, os.path.join('cppyy_backend', os.path.basename(path)) return [datafile(filename) for filename in all_files if os.path.isdir(filename)] and replace the empty datas in the Analysis object with: datas=backend_files(), If you also need the API headers from CPyCppyy, then these can be found e.g. like so: def api_files(): import cppyy, os paths = str(cppyy.gbl.gInterpreter.GetIncludePath()).split('-I') for p in paths: if not p: continue apipath = os.path.join(p.strip()[1:-1], 'CPyCppyy') if os.path.exists(apipath): return [(apipath, os.path.join('include', 'CPyCppyy'))] return [] and added to the Analysis object: datas=backend_files()+api_files(), Note however, that Python.h then also needs to exist on the system where the package will be deployed. If need be, Python.h can be found through module sysconfig and its path provided through cppyy.add_include_path in the bootstrap.py file discussed below. Next, consider the precompiled header (file cppyy_backend/etc/allDict.cxx.pch): this contains the C++ standard headers in LLVM intermediate representation. If addded, it pre-empts the need for a system compiler where the package is deployed. However, if there is a system compiler, then ideally, the PCH should be recreated on first use after deployment. As is, however, the loader.py script in cppyy_backend uses sys.executable which is broken by the freezing (meaning, it's the top-level script, not python, leading to an infinite recursion). And even when the PCH is available, its timestamp is compared to the timestamp of the include directory, and rebuild if older. Since both the PCH and the include directory get new timestamps based on copy order, not build order, this is unreliable and may lead to spurious rebuilding. Therefore, either disable the PCH, or disable the time stamp checking. To do so, choose one of these two options and write it in a file called bootstrap.py, by uncommenting the desired behavior: ### option 1: disable the PCH altogether # import os # os.environ['CLING_STANDARD_PCH'] = 'none' ### option 2: force the loader to declare the PCH up-to-date # import cppyy_backend.loader # # def _is_uptodate(*args): # return True # # cppyy_backend.loader._is_uptodate = _is_uptodate then add the bootstrap as a hook to the spec file in the Analysis object: runtime_hooks=['bootstrap.py'], As discussed above, the bootstrap.py is also a good place to add more include paths as necessary, e.g. for Python.h. 
Finally, run as usual: $ pyinstaller example.spec | 7 | 4 |
64,428,208 | 2020-10-19 | https://stackoverflow.com/questions/64428208/why-is-listx-for-x-in-a-faster-for-a-0-than-for-a | I tested list(x for x in a) with three different CPython versions. On a = [0] it's significantly faster than on a = []: 3.9.0 64-bit 3.9.0 32-bit 3.7.8 64-bit a = [] a = [0] a = [] a = [0] a = [] a = [0] 465 ns 412 ns 543 ns 515 ns 513 ns 457 ns 450 ns 406 ns 544 ns 515 ns 506 ns 491 ns 456 ns 408 ns 551 ns 513 ns 515 ns 487 ns 455 ns 413 ns 548 ns 516 ns 513 ns 491 ns 452 ns 404 ns 549 ns 511 ns 508 ns 486 ns With tuple instead of list, it's the expected other way around: 3.9.0 64-bit 3.9.0 32-bit 3.7.8 64-bit a = [] a = [0] a = [] a = [0] a = [] a = [0] 354 ns 405 ns 467 ns 514 ns 421 ns 465 ns 364 ns 407 ns 467 ns 527 ns 425 ns 464 ns 353 ns 399 ns 490 ns 549 ns 419 ns 465 ns 352 ns 400 ns 500 ns 556 ns 414 ns 474 ns 354 ns 405 ns 494 ns 560 ns 420 ns 474 ns So why is list faster when it (and the underlying generator iterator) has to do more? Tested on Windows 10 Pro 2004 64-bit. Benchmark code: from timeit import repeat setups = 'a = []', 'a = [0]' number = 10**6 print(*setups, sep=' ') for _ in range(5): for setup in setups: t = min(repeat('list(x for x in a)', setup, number=number)) / number print('%d ns' % (t * 1e9), end=' ') print() Byte sizes, showing that it doesn't overallocate for input [] but does for input [0]: >>> [].__sizeof__() 40 >>> list(x for x in []).__sizeof__() 40 >>> [0].__sizeof__() 48 >>> list(x for x in [0]).__sizeof__() 72 | What you observe, is that pymalloc (Python memory manager) is faster than the memory manager provided by your C-runtime. It is easy to see in the profiler, that the main difference between both versions is that list_resize and _PyObjectRealloc need more time for the a=[]-case. But why? When a new list is created from an iterable, the list tries to get a hint how many elements are in the iterator: n = PyObject_LengthHint(iterable, 8); However, this doesn't work for generators and thus the hint is the default value 8. After the iterator is exhausted, the list tries to shrink, because there are only 0 or 1 element (and not the original capacity allocated due to a too large size-hint). For 1 element this would lead to (due to over-allocation) capacity of 4 elements. However, there is a special handling for the case of 0 elements: it will not be over-allocated: // ... if (newsize == 0) new_allocated = 0; num_allocated_bytes = new_allocated * sizeof(PyObject *); items = (PyObject **)PyMem_Realloc(self->ob_item, num_allocated_bytes); // ... So in the "empty" case, PyMem_Realloc will be asked for 0 bytes. This call will be passed via _PyObject_Malloc down to pymalloc_alloc, which in case of 0 bytes returns NULL: if (UNLIKELY(nbytes == 0)) { return NULL; } However, _PyObject_Malloc falls back to the "raw" malloc, if pymalloc returns NULL: static void * _PyObject_Malloc(void *ctx, size_t nbytes) { void* ptr = pymalloc_alloc(ctx, nbytes); if (LIKELY(ptr != NULL)) { return ptr; } ptr = PyMem_RawMalloc(nbytes); if (ptr != NULL) { raw_allocated_blocks++; } return ptr; } as can be easily seen in the definition of _PyMem_RawMalloc: static void * _PyMem_RawMalloc(void *ctx, size_t size) { /* PyMem_RawMalloc(0) means malloc(1). Some systems would return NULL for malloc(0), which would be treated as an error. Some platforms would return a pointer with no memory behind it, which would break pymalloc. To solve these problems, allocate an extra byte. 
*/ if (size == 0) size = 1; return malloc(size); } Thus, the case a=[0] will use pymalloc, while a=[] will use the memory manager of the underlying c-runtime, which explains the observed difference. Now, this all can be seen as missed optimization, because for newsize=0, we could just set the ob_item to NULL, adjust other members and return. Let's try it out: static int list_resize(PyListObject *self, Py_ssize_t newsize) { // ... if (newsize == 0) { PyMem_Del(self->ob_item); self->ob_item = NULL; Py_SIZE(self) = 0; self->allocated = 0; return 0; } // ... } with this fix, the empty-case becomes slightly faster (about 10%) than the a=[0] case, as expected. My claim, that pymalloc is faster for smaller sizes than the C-runtime memory manager, can be easily tested with bytes: if more than 512 bytes need to be allocated, pymalloc will fallback to simple malloc: print(bytes(479).__sizeof__()) # 512 %timeit bytes(479) # 189 ns ± 20.4 ns print(bytes(480).__sizeof__()) # 513 %timeit bytes(480) # 296 ns ± 24.8 ns the actual difference is more than the shown 50% (this jump cannot be explained by change of the size by one byte alone), as at least some part of the time is used for initialization of byte object and so on. Here is a more direct comparison with help of cython: %%cython from libc.stdlib cimport malloc, free from cpython cimport PyMem_Malloc, PyMem_Del def with_pymalloc(int size): cdef int i for i in range(1000): PyMem_Del(PyMem_Malloc(size)) def with_cmalloc(int size): cdef int i for i in range(1000): free(malloc(size)) and now %timeit with_pymalloc(1) # 15.8 µs ± 566 ns %timeit with_cmalloc(1) # 51.9 µs ± 2.17 µs i.e. pymalloc is about 3 times faster (or about 35ns per allocation). Note: some compilers would optimize free(malloc(size)) out, but MSVC doesn't. As another example: some time ago I have replaced the default allocator through pymalloc for a c++'s std::map which led to a speed up of factor 4. For profiling the following script was used: a=[0] # or a=[] for _ in range(10000000): list(x for x in a) together with VisualStudio's built-in performance profiler in Release-mode. a=[0]-version needed 6.6 seconds (in profiler) while a=[] version needed 6.9 seconds (i.e. ca. 5% slower). After the "fix", a=[] needed only 5.8 seconds. The share of time spent in list_resize and _PyObject_Realloc: a=[0] a=[] a=[], fixed list_resize 3.5% 10.2% 3% _PyObject_Realloc 3.2% 9.3% 1% Obviously, there is variance from run to run, but the differences in running times are significant and can explain the lion's share of observed time difference. Note: the difference of 0.3 second for 10^7 allocations is about 30ns per allocation - a number similar to the one we get for the difference between pymalloc's and c-runtime's allocations. When verifying the above with debugger, one must be aware, that in the debug-mode Python uses a debug version of pymalloc, which appends additional data to the required memory, thus pymalloc will never be asked to allocate 0 bytes in debug-version, but 0 bytes + debug-overhead and there will be no fallback to malloc. Thus, one should either debug in release mode of switch to realease-pymalloc in debug-build (there is probably an option for it - I just don't know it, the relevant part in code is here and here). | 37 | 40 |
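The "length hint" of 8 described in the answer can be observed from the Python side with operator.length_hint, which mirrors PyObject_LengthHint:
import operator
gen = (x for x in [0])
print(operator.length_hint(gen, 8))        # 8 -- generators give no hint, so the default guess is used
print(operator.length_hint(iter([0]), 8))  # 1 -- a list iterator does report how many items are left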
64,445,333 | 2020-10-20 | https://stackoverflow.com/questions/64445333/opencv-probabilistic-hough-line-transform-giving-different-results-with-c-and | I was working on a project using OpenCV, Python that uses Probabilistic Hough Line Transform function "HoughLinesP" in some part of the project. My code worked just fine and there was no problem. Then I thought of converting the same code to C++. After converting the code to C++, the output is not the same as that of the Python code. After long hours of debugging, I found out that everything else works fine but the "HoughLinesP" function is giving different output in the case of C++. The input to this function in both the languages is the same and the values of parameters are also the same but the output from it is different. Can someone explain me why is this happening and any possible fixes for it? Also, I have checked the version of OpenCV for both the languages, and it is the same: 4.5.0 dev Also, I have tried playing with the values passed to the C++ code, but I am not able to obtain similar results. Input Edge Image: Python HoughLinesP() output: C++ HoughLinesP() output: Following are the codes in each language: Python: Lines = cv2.HoughLinesP(EdgeImage, 1, np.pi / 180, 50, 10, 15) C++: std::vector<cv::Vec4i> Lines; cv::HoughLinesP(EdgeImage, Lines, 1, CV_PI / 180, 50, 10, 15); It would be a great help if anyone could suggest something. | Explanation & Fix The problem arises because in the Python version you are not setting the arguments that you think you are setting. In contrast to some other functions for which the argument lists are adapted in the Python interface, HoughLinesP does not only return the lines but also still takes a parameter lines for the line output. You can see that in the help for HoughLinesP: import cv2 help(cv2.HoughLinesP) which gives you (ellipsis mine): Help on built-in function HoughLinesP: HoughLinesP(...) HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]]) -> lines . @brief Finds line segments in a binary image using the probabilistic Hough transform. . ... . @param lines Output vector of lines. Each line is represented by a 4-element vector . \f$(x_1, y_1, x_2, y_2)\f$ , where \f$(x_1,y_1)\f$ and \f$(x_2, y_2)\f$ are the ending points of each detected . line segment. ... So basically, in your python example you pass 10 as lines instead of as minLineLength. To fix this, you can either pass an empty array as lines or you can pass the parameters as keyword arguments: Lines = cv2.HoughLinesP(EdgeImage, rho=1, theta=np.pi/180, threshold=50, minLineLength=10, maxLineGap=15) Doing that should make your Python version's output match the C++ version's. Alternatively, if you are happy with the results of the Python version, you have to leave out parameter lines (i.e. only setting minLineLength to 15 and using the default of 0 for maxLineGap [see docs]): std::vector<cv::Vec4i> Lines; cv::HoughLinesP(EdgeImage, Lines, 1, CV_PI / 180, 50, 15); This should then reproduce your Python version. Example Using the example listed in the openCV documentation of HoughLinesP, you can see that this fixes the issue. C++ version (Taken from openCV documentation listed above and adapted to save image instead.) 
#include <opencv2/imgproc.hpp> #include <opencv2/highgui.hpp> using namespace cv; using namespace std; int main(int argc, char** argv) { Mat src, dst, color_dst; if( argc != 3 || !(src=imread(argv[1], 0)).data) return -1; Canny( src, dst, 50, 200, 3 ); cvtColor( dst, color_dst, COLOR_GRAY2BGR ); vector<Vec4i> lines; HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 ); for( size_t i = 0; i < lines.size(); i++ ) { line( color_dst, Point(lines[i][0], lines[i][1]), Point( lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 ); } imwrite( argv[2], color_dst ); return 0; } If you compile this and run it over the example picture provided in the docs, you get the following result: Incorrect Python version (Basically, just the translated C++ version without the lines parameter.) import argparse import cv2 import numpy as np parser = argparse.ArgumentParser() parser.add_argument("input_file", type=str) parser.add_argument("output_file", type=str) args = parser.parse_args() src = cv2.imread(args.input_file, 0) dst = cv2.Canny(src, 50., 200., 3) color_dst = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR) lines = cv2.HoughLinesP(dst, 1., np.pi/180., 80, 30, 10.) for this_line in lines: cv2.line(color_dst, (this_line[0][0], this_line[0][1]), (this_line[0][2], this_line[0][3]), [0, 0, 255], 3, 8) cv2.imwrite(args.output_file, color_dst) Running this gives the following (different) result: Corrected python version (Fixed by passing keyword args instead) import argparse import cv2 import numpy as np parser = argparse.ArgumentParser() parser.add_argument("input_file", type=str) parser.add_argument("output_file", type=str) args = parser.parse_args() src = cv2.imread(args.input_file, 0) dst = cv2.Canny(src, 50., 200., 3) color_dst = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR) lines = cv2.HoughLinesP(dst, rho=1., theta=np.pi/180., threshold=80, minLineLength=30, maxLineGap=10.) for this_line in lines: cv2.line(color_dst, (this_line[0][0], this_line[0][1]), (this_line[0][2], this_line[0][3]), [0, 0, 255], 3, 8) cv2.imwrite(args.output_file, color_dst) This gives the correct result (i.e. the same result as the C++ version): | 7 | 17 |
64,401,570 | 2020-10-17 | https://stackoverflow.com/questions/64401570/error-using-shap-with-simplernn-sequential-model | In the code below, I import a saved sparse numpy matrix, created with python, densify it, add a masking, batchnorm and dense ouptput layer to a many to one SimpleRNN. The keras sequential model works fine, however, I am unable to use shap. This is run in Jupyter lab from Winpython 3830 on a Windows 10 desktop. The X matrix has a shape of (4754, 500, 64): 4754 examples with 500 timesteps and 64 variables. I've created a function to simulate the data so the code can be tested. The simulated data returns the same error. from sklearn.model_selection import train_test_split import tensorflow as tf from tensorflow.keras.models import Sequential import tensorflow.keras.backend as Kb from tensorflow.keras import layers from tensorflow.keras.layers import BatchNormalization from tensorflow import keras as K import numpy as np import shap import random def create_x(): dims = [10,500,64] data = [] y = [] for i in range(dims[0]): data.append([]) for j in range(dims[1]): data[i].append([]) for k in range(dims[2]): isnp = random.random() if isnp > .2: data[i][j].append(np.nan) else: data[i][j].append(random.random()) if isnp > .5: y.append(0) else: y.append(1) return np.asarray(data), np.asarray(y) def first_valid(arr, axis, invalid_val=0): #return the 2nd index of 3 for the first non np.nan on the 3rd axis mask = np.invert(np.isnan(arr)) return np.where(mask.any(axis=axis), mask.argmax(axis=axis), invalid_val) def densify_np(X): X_copy = np.empty_like (X) X_copy[:] = X #loop over the first index for i in range(len(X_copy)): old_row = [] #get the 2nd index of the first valid value for each 3rd index indices = first_valid(X_copy[i,:,:],axis=0, invalid_val=0) for j in range(len(indices)): if np.isnan(X_copy[i,indices[j],j]): old_row.append(0) else: old_row.append(X_copy[i,indices[j],j]) X_copy[i,0,:]= old_row for k in range(1,len(X_copy[i,:])): for l in range(len(X_copy[i,k,:])): if np.isnan(X_copy[i,k,l]): X_copy[i,k,l] = X_copy[i,k-1,l] return(X_copy) #this is what I do in the actual code #X = np.load('C:/WinPython/WPy64-3830/data/X.npy') #Y = np.load('C:/WinPython/WPy64-3830/scripts/Y.npy') #simulated junk data X, Y = create_x() #create a dense matrix from the sparse one. X = densify_np(X) seed = 7 np.random.seed(seed) array_size = 64 X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed) batch = 64 model = Sequential() model.add(layers.Input(shape=(500,array_size))) model.add(layers.Masking(mask_value=0.,input_shape=(500, array_size))) model.add(BatchNormalization()) model.add(layers.SimpleRNN(1, activation=None, dropout = 0, recurrent_dropout=.2)) model.add(layers.Dense(1, activation = 'sigmoid')) opt = K.optimizers.Adam(learning_rate=.001) model.compile(loss='binary_crossentropy', optimizer=opt) model.fit(X_train, y_train.astype(int), validation_data=(X_test,y_test.astype(int)), epochs=25, batch_size=batch) explainer = shap.DeepExplainer(model, X_test) shap_values = explainer.shap_values(X_train) Running the last line to create the shap_values yields the error below. 
StagingError Traceback (most recent call last) <ipython-input-6-f789203da9c8> in <module> 1 import shap 2 explainer = shap.DeepExplainer(model, X_test) ----> 3 shap_values = explainer.shap_values(X_train) 4 print('done') C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\shap\explainers\deep\__init__.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity) 117 were chosen as "top". 118 """ --> 119 return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity) C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\shap\explainers\deep\deep_tf.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity) 302 # run attribution computation graph 303 feature_ind = model_output_ranks[j,i] --> 304 sample_phis = self.run(self.phi_symbolic(feature_ind), self.model_inputs, joint_input) 305 306 # assign the attributions to the right part of the output arrays C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\shap\explainers\deep\deep_tf.py in run(self, out, model_inputs, X) 359 360 return final_out --> 361 return self.execute_with_overridden_gradients(anon) 362 363 def custom_grad(self, op, *grads): C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\shap\explainers\deep\deep_tf.py in execute_with_overridden_gradients(self, f) 395 # define the computation graph for the attribution values using a custom gradient-like computation 396 try: --> 397 out = f() 398 finally: 399 # reinstate the backpropagatable check C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\shap\explainers\deep\deep_tf.py in anon() 355 v = tf.constant(data, dtype=self.model_inputs[i].dtype) 356 inputs.append(v) --> 357 final_out = out(inputs) 358 tf_execute.record_gradient = tf_backprop._record_gradient 359 C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds) 778 else: 779 compiler = "nonXla" --> 780 result = self._call(*args, **kwds) 781 782 new_tracing_count = self._get_tracing_count() C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds) 821 # This is the first call of __call__, so we have to initialize. 
822 initializers = [] --> 823 self._initialize(args, kwds, add_initializers_to=initializers) 824 finally: 825 # At this point we know that the initialization is complete (or less C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to) 694 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph) 695 self._concrete_stateful_fn = ( --> 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access 697 *args, **kwds)) 698 C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2853 args, kwargs = None, None 2854 with self._lock: -> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2856 return graph_function 2857 C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs) 3211 3212 self._function_cache.missed.add(call_context_key) -> 3213 graph_function = self._create_graph_function(args, kwargs) 3214 self._function_cache.primary[cache_key] = graph_function 3215 return graph_function, args, kwargs C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3063 arg_names = base_arg_names + missing_arg_names 3064 graph_function = ConcreteFunction( -> 3065 func_graph_module.func_graph_from_py_func( 3066 self._name, 3067 self._python_function, C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 984 _, original_func = tf_decorator.unwrap(python_func) 985 --> 986 func_outputs = python_func(*func_args, **func_kwargs) 987 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors, C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds) 598 # __wrapped__ allows AutoGraph to swap in a converted function. We give 599 # the function a weak reference to itself to avoid a reference cycle. 
--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds) 601 weak_wrapped_fn = weakref.ref(wrapped_fn) 602 C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs) 971 except Exception as e: # pylint:disable=broad-except 972 if hasattr(e, "ag_error_metadata"): --> 973 raise e.ag_error_metadata.to_exception(e) 974 else: 975 raise StagingError: in user code: C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\shap\explainers\deep\deep_tf.py:244 grad_graph * x_grad = tape.gradient(out, shap_rAnD) C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\backprop.py:1067 gradient ** flat_grad = imperative_grad.imperative_grad( C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\imperative_grad.py:71 imperative_grad return pywrap_tfe.TFE_Py_TapeGradient( C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\eager\backprop.py:151 _gradient_function grad_fn = ops._gradient_registry.lookup(op_name) # pylint: disable=protected-access C:\WinPython\WPy64-3830\python-3.8.3.amd64\lib\site-packages\tensorflow\python\framework\registry.py:96 lookup raise LookupError( LookupError: gradient registry has no entry for: shap_TensorListStack | The owner of the shap repo said: The fundamental issue here is that DeepExplainer does not yet support TF 2.0. That was on 11 Dec 2019. Is this still the case? Try it with Tensorflow 1.15 and see if that works. Another issue on the shap repo about this (2 Jun 2020) says: Alright, thank you. I did not see the post by Lundberg. I will stick to the workaround of using TF 1.15 until a new version of SHAP is released. | 6 | 4 |
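A hedged aside, not part of the accepted answer: a workaround often suggested in the shap issue tracker for running DeepExplainer on TF 2.x is to disable the 2.x behavior before the model is built; it reportedly works for some models but is not guaranteed to work for this one:
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()  # must run before the model is constructed
# ... build, compile and fit the Sequential model exactly as in the question ...
# explainer = shap.DeepExplainer(model, X_test)
# shap_values = explainer.shap_values(X_train)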
64,434,461 | 2020-10-19 | https://stackoverflow.com/questions/64434461/how-to-abort-cancel-http-request-in-python-thread | I'm looking to abort/cancel an HTTP request in a Python thread. I have to stick with threads. I can't use asyncio or anything outside the standard library. This code works fine with sockets: """Demo for Canceling IO by Closing the Socket Works! """ import socket import time from concurrent import futures start_time = time.time() sock = socket.socket() def read(): "Read data with 10 second delay." sock.connect(('httpbin.org', 80)) sock.sendall(b'GET /delay/10 HTTP/1.0\r\n\r\n') while True: data = sock.recv(1024) if not data: break print(data.decode(), end='') with futures.ThreadPoolExecutor() as pool: future = pool.submit(read) futures.wait([future], timeout=5) sock.close() # <-- Interrupt sock.recv(1024) in Thread:read(). end_time = time.time() print(f'Duration: {end_time - start_time:.3f}') # Duration is ~5s as expected. Closing the socket in the main thread is used to interrupt the recv() in the executor pool's thread. The HTTP request should take 10 seconds but we only wait 5 seconds for it an then close the socket (effectively canceling the HTTP request/response). Now I try using http.client: """Demo for Canceling IO in Threads with HTTP Client Doesn't work! """ import time from concurrent import futures from http.client import HTTPConnection def get(con, url): con.request('GET', url) response = con.getresponse() return response start_time = time.time() with futures.ThreadPoolExecutor() as executor: con = HTTPConnection('httpbin.org') future = executor.submit(get, con, '/delay/10') done, not_done = futures.wait([future], timeout=5) con.sock.close() end_time = time.time() print(f'Duration: {end_time - start_time:.3f}') # Duration is ~10s unfortunately. Unfortunately, the total duration is ~10 seconds here. Closing the socket does not interrupt the recv_into() in the client. Seems like I am making some wrong assumptions. How do I interrupt the socket used in an http client from a separate thread? | What you describe is the intended well documented behavior: Note close() releases the resource associated with a connection but does not necessarily close the connection immediately. If you want to close the connection in a timely fashion, call shutdown() before close(). Some further details regarding this behavior can still be found in CPython howto docs: Strictly speaking, you're supposed to use shutdown on a socket before you close it. The shutdown is an advisory to the socket at the other end. Depending on the argument you pass it, it can mean "I'm not going to send anymore, but I'll still listen", or "I'm not listening, good riddance!". Most socket libraries, however, are so used to programmers neglecting to use this piece of etiquette that normally a close is the same as shutdown(); close(). So in most situations, an explicit shutdown is not needed. One way to use shutdown effectively is in an HTTP-like exchange. The client sends a request and then does a shutdown(1). This tells the server "This client is done sending, but can still receive." The server can detect "EOF" by a receive of 0 bytes. It can assume it has the complete request. The server sends a reply. If the send completes successfully then, indeed, the client was still receiving. Python takes the automatic shutdown a step further, and says that when a socket is garbage collected, it will automatically do a close if it's needed. But relying on this is a very bad habit. 
If your socket just disappears without doing a close, the socket at the other end may hang indefinitely, thinking you're just being slow. Please close your sockets when you're done. Solution Call shutdown (with socket.SHUT_RDWR) before close. Example import socket with futures.ThreadPoolExecutor() as executor: con = HTTPConnection('httpbin.org') future = executor.submit(get, con, '/delay/10') done, not_done = futures.wait([future], timeout=5) con.sock.shutdown(socket.SHUT_RDWR) con.sock.close() References Python Socket objects - close: https://docs.python.org/3/library/socket.html#socket.socket.close CPython Howto Sockets - disconnecting: https://github.com/python/cpython/blob/65460565df99fbda6a74b6bb4bf99affaaf8bd95/Doc/howto/sockets.rst#disconnecting | 11 | 5 |
64,448,442 | 2020-10-20 | https://stackoverflow.com/questions/64448442/replace-data-of-an-array-by-two-values-of-a-second-array | I have two numpy arrays "Elements" and "nodes". My aim is to gather some data of these arrays. I need to replace "Elements" data of the two last columns by the two coordinates contains in "nodes" array. The two arrays are very huge, I have to automate it. This posts refers to an old one: Replace data of an array by 2 values of a second array with a difference that arrays are very huge (Elements: (3342558,5) and nodes: (581589,4)) and the previous way out does not work. An example : import numpy as np Elements = np.array([[1.,11.,14.],[2.,12.,13.]]) nodes = np.array([[11.,0.,0.],[12.,1.,1.],[13.,2.,2.],[14.,3.,3.]]) results = np.array([[1., 0., 0., 3., 3.], [2., 1., 1., 2., 2.]]) The previous way out proposed by hpaulj e = Elements[:,1:].ravel().astype(int) n=nodes[:,0].astype(int) I, J = np.where(e==n[:,None]) results = np.zeros((e.shape[0],2),nodes.dtype) results[J] = nodes[I,:1] results = results.reshape(2,4) But with huge arrays, this script does not work: DepreciationWarning: elementwise comparison failed; this will raise an error in the future... | Most of the game would be to figure out the corresponding matching indices from Elements in nodes. Approach #1 Since it seems you are open to conversion to integer, let's assume we could take them as integers. With that, we could use an array-assignment + mapping based method, as shown below : ar = Elements.astype(int) a = ar[:,1:].ravel() nd = nodes[:,0].astype(int) n = a.max()+1 # for generalized case of neagtive ints in a or nodes having non-matching values: # n = max(a.max()-min(0,a.min()), nd.max()-min(0,nd.min()))+1 lookup = np.empty(n, dtype=int) lookup[nd] = np.arange(len(nd)) indices = lookup[a] nc = (Elements.shape[1]-1)*(nodes.shape[1]-1) # 4 for given setup out = np.concatenate((ar[:,0,None], nodes[indices,1:].reshape(-1,nc)),axis=1) Approach #2 We could also use np.searchsorted to get those indices. For nodes having rows sorted based on first col and matching case, we can simply use : indices = np.searchsorted(nd, a) For not-necessarily sorted case and matching case : sidx = nd.argsort() idx = np.searchsorted(nd, a, sorter=sidx) indices = sidx[idx] For non-matching case, use an invalid bool array : invalid = idx==len(nd) idx[invalid] = 0 indices = sidx[idx] Approach #3 Another with concatenation + sorting - b = np.concatenate((nd,a)) sidx = b.argsort(kind='stable') n = len(nd) v = sidx<n counts = np.diff(np.flatnonzero(np.r_[v,True])) r = np.repeat(sidx[v], counts) indices = np.empty(len(a), dtype=int) indices[sidx[~v]-n] = r[sidx>=n] To detect non-matching ones, use : nd[indices] != a Port the idea here to numba : from numba import njit def numba1(Elements, nodes): a = Elements[:,1:].ravel() nd = nodes[:,0] b = np.concatenate((nd,a)) sidx = b.argsort(kind='stable') n = len(nodes) ncols = Elements.shape[1]-1 size = nodes.shape[1]-1 dt = np.result_type(Elements.dtype, nodes.dtype) nc = ncols*size out = np.empty((len(Elements),1+nc), dtype=dt) out[:,0] = Elements[:,0] return numba1_func(out, sidx, nodes, n, ncols, size) @njit def numba1_func(out, sidx, nodes, n, ncols, size): N = len(sidx) for i in range(N): if sidx[i]<n: cur_id = sidx[i] continue else: idx = sidx[i]-n row = idx//ncols col = idx-row*ncols cc = col*size+1 for ii in range(size): out[row, cc+ii] = nodes[cur_id,ii+1] return out | 8 | 2 |
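As a quick sanity check, Approach #1 reproduces the expected results array from the question's small example:
import numpy as np
Elements = np.array([[1., 11., 14.], [2., 12., 13.]])
nodes = np.array([[11., 0., 0.], [12., 1., 1.], [13., 2., 2.], [14., 3., 3.]])
ar = Elements.astype(int)
a = ar[:, 1:].ravel()
nd = nodes[:, 0].astype(int)
lookup = np.empty(a.max() + 1, dtype=int)
lookup[nd] = np.arange(len(nd))            # node id -> row index in nodes
indices = lookup[a]
nc = (Elements.shape[1] - 1) * (nodes.shape[1] - 1)
out = np.concatenate((ar[:, 0, None], nodes[indices, 1:].reshape(-1, nc)), axis=1)
print(out)                                  # [[1. 0. 0. 3. 3.] [2. 1. 1. 2. 2.]]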
64,449,971 | 2020-10-20 | https://stackoverflow.com/questions/64449971/pip-install-pyodbc-failed-error-failed-building-wheel-for-pyodbc | I'am trying to import pyodbc library into google colab, but i'am getting this error. Just in case, I have Anaconda installed in my notebook, and I never had problem with pyodbc in there. Can you help me please? Tks! Collecting pyodbc Using cached https://files.pythonhosted.org/packages/81/0d/bb08bb16c97765244791c73e49de9fd4c24bb3ef00313aed82e5640dee5d/pyodbc-4.0.30.tar.gz Building wheels for collected packages: pyodbc Building wheel for pyodbc (setup.py) ... error ERROR: Failed building wheel for pyodbc Running setup.py clean for pyodbc Failed to build pyodbc Installing collected packages: pyodbc Running setup.py install for pyodbc ... error ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u5dmb223/pyodbc/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u5dmb223/pyodbc/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-9jmhckrt/install-record.txt --single-version-externally-managed --compile Check the logs for full command output. | You can try the following: !apt install unixodbc-dev !pip install pyodbc | 19 | 59 |
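In Colab both commands can go into a single cell, and a quick import afterwards confirms that the wheel built (a sketch assuming a standard Ubuntu-based Colab runtime):
!apt install unixodbc-dev
!pip install pyodbc
import pyodbc
print(pyodbc.version)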
64,437,677 | 2020-10-20 | https://stackoverflow.com/questions/64437677/aws-throws-the-following-error-bad-interpreter-no-such-file-or-directory | I'm not aware of anything on my system having changed, but the aws CLI tool has stopped working. $ aws-bash: /Users/user_name/Library/Python/3.7/bin/aws:/usr/local/opt/python/bin/ python3.7: bad interpreter: No such file or directory I've tried brew reinstall awscli which is suggested elsewhere, but with no luck. | Option 1 Type brew uninstall awscli Then brew install awscli update python to 3.9. look in the following post. If this approach does not work for you, then try : Option 2 Go to https://www.python.org/ and use the GUI installer for your OS pip3 install awscli | 16 | 27 |
64,452,984 | 2020-10-20 | https://stackoverflow.com/questions/64452984/how-to-share-mmap-between-python-and-node-processes | I'm trying to share memory between a python process and a nodejs process started from the python process using an anonymous mmap. Essentially, the python process begins, initializes the mmap and starts a subprocess using either call or Popen to launch a child that runs some node code. This nodejs code uses mmap to try to access the same area in memory. However I get two different mappings and no data is shared between them. Why is this? import mmap, math, os from subprocess import call mm = mmap.mmap( -1, 1024, flags=mmap.MAP_SHARED | mmap.MAP_ANONYMOUS, prot= mmap.PROT_READ | mmap.PROT_WRITE ) mm.seek(0) mm.write('hello world!\n'.encode('utf-8')) call([ 'node', '-e', """ const mmap = require('mmap.js'); const fileBuf = mmap.alloc( 1024, mmap.PROT_READ | mmap.PROT_WRITE, mmap.MAP_SHARED| mmap.MAP_ANONYMOUS, -1, 0 ) console.log(fileBuf.toString('utf-8')); """ ]) The mmap.js that I am using is a NAPI of the original mmap c function. This is the github for this library. EDIT: Thanks to 'that other guy' for his answer. It was the correct one. Here's some sample code that works out of the box!: test_mmap.py import os, ctypes, posix_ipc, sys, mmap from subprocess import call SHARED_MEMORY_NAME = "/shared_memory" memory = posix_ipc.SharedMemory(SHARED_MEMORY_NAME, posix_ipc.O_CREX, size=1024) mapFile = mmap.mmap(memory.fd, memory.size) memory.close_fd() mapFile.seek(0) mapFile.write("Hello world!\n".encode('utf-8')) mapFile.seek(0) print("FROM PYTHON MAIN PROCESS: ", mapFile.readline().decode('utf-8')) mapFile.seek(0) call([ "node", "./test_mmap.js", SHARED_MEMORY_NAME ]) mapFile.close() posix_ipc.unlink_shared_memory(SHARED_MEMORY_NAME) test_mmap.js const args = process.argv; const mmap = require('mmap.js'); const shm = require('nodeshm'); const SHM_FILE_NAME=args[args.length-1]; let fd = shm.shm_open(SHM_FILE_NAME, shm.O_RDWR, 0600); if (fd == -1){ console.log("FD COULD NOT BE OPENED!"); throw "here"; } let mm = mmap.alloc(1024, mmap.PROT_READ | mmap.PROT_WRITE, mmap.MAP_SHARED, fd, 0); console.log("FROM NODE: ", mm.slice(0, mm.indexOf('\n')).toString('utf-8')); Sample output: FROM PYTHON MAIN PROCESS: Hello world! FROM NODE: Hello world! | Fortunately this doesn't work: imagine how confusing if all of the system's MAP_ANONYMOUS mappings were against the same area and kept overwriting each other. Instead, use shm_open to create a new handle you can mmap in both processes. This is a portable wrapper around the equally valid but less portable strategy of creating and mmap'ing a file in /dev/shm/. | 6 | 3 |
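On Python 3.8+ the same named-shared-memory idea is available in the standard library through multiprocessing.shared_memory; a minimal sketch (the name "demo_shm" is just an example, and the node side would still shm_open the matching name, which on Linux typically lives under /dev/shm):
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=1024, name="demo_shm")
shm.buf[:13] = b"hello world!\n"
# A second process (Python, or C/node via shm_open) can attach to the same block by name:
# other = shared_memory.SharedMemory(name="demo_shm")
shm.close()
shm.unlink()  # remove the segment once every process is done with it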
64,451,966 | 2020-10-20 | https://stackoverflow.com/questions/64451966/how-to-embed-code-examples-into-a-docstring | How can I embed code into a docstring to tell Sphinx to format the code similarly to how it would be rendered in Markdown (different background colour, monospaced sans-serif font)? For example, to document a usage example of the code. """ This is a module documentation Use this module like this: res = aFunction(something, goes, in) print(res.avalue) """ | There are a few ways to do it. I think the most sensible in your case would be .. code-block:: """ This is a module documentation Use this module like this: .. code-block:: python res = aFunction(something, goes, in) print(res.avalue) """ Notice the blank line between the directive and the code block - it must be there in order for the block to render properly. | 11 | 16 |
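Besides .. code-block::, Sphinx also renders reST doctest blocks -- lines starting with >>> and set off by blank lines -- as highlighted Python, which reads naturally for usage examples; using the question's placeholder names:
"""
This is a module documentation

Use this module like this:

>>> res = aFunction(something, goes, in)
>>> print(res.avalue)
"""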
64,436,858 | 2020-10-20 | https://stackoverflow.com/questions/64436858/fastest-algorithm-to-find-the-minimum-sum-of-absolute-differences-through-list-r | By rotating 2 lists either from left to right, Find the smallest possible sum of the absolute value of the differences between each corresponding item in the two lists given they're the same length. Rotation Sample: List [0, 1, 2, 3, 4, 5] rotated to the left = [1, 2, 3, 4, 5, 0] List [0, 1, 2, 3, 4, 5] rotated to the right= [5, 0, 1, 2, 3, 4] Sum of Absolute Difference: List 1 = [1, 2, 3, 4] List 2 = [5, 6, 7, 8] Sum of Abs. Diff. = |1-5| + |2-6| + |3-7| + |4-8| = 16 Once again, for any arbitrary length of list and integer values, task is to look for the least possible sum by simply rotating to the left/right of either or both lists. I had no problem with the rotation and acquiring of the minimum sum of absolute difference. I just want to know the smarter approach since my algorithm checks for every possible combination which is quite slow. Here is my bruteforce approach: list1 = [45, 21, 64, 33, 49] list2 = [90, 12, 77, 52, 28] choices = [] # Put all possible sums into a list to find the minimum value. for j in range(len(list1)): # List1 does a full rotation total = 0 for k in range(len(list1)): total += abs(list1[k] - list2[k]) list1.append(list1.pop(0)) choices.append(total) print(min(choices)) What's a smarter approach? I would appreciate a shorter code and time complexity as well. I managed to make it faster by applying generators. Credits to @kuriboh for the idea! But since I'm still new in generator implementation, just want to know if this is the best way of implementing it to reduce my time complexity especially for my loop. Can we still go faster than this configuration? list1 = [45, 21, 64, 33, 49] list2 = [90, 12, 77, 52, 28] choices = [] l = len(list1) for j in range(l): total = sum([abs(int(list1[k])-int(list2[k])) for k in range(l)]) list1.append(list1.pop(0)) choices.append(total) print(min(choices)) | I haven't cracked the full problem, but in the special case where the input values are all 0 or 1 (or any two different values, or any of O(1) different values, but we'll need another idea to get much further than that), we can get an O(n log n)-time algorithm by applying fast convolution. The idea is to compute all of the sums of absolute differences as List1 * reverse(1 - List2) + (1 - List1) * reverse(List2) where 1 - List means doing that operation point-wise and * denotes circular convolution (computable in time O(n log n) using a pair of FFTs). The definition of circular convolution here is n-1 __ \ (f * g)(i) = /_ f(j) g((i - j) mod n). j=0 Substituting List1 for f and reverse(1 - List2) for g, we get n-1 __ \ (List1 * reverse(1 - List2))(i) = /_ List1(j) (1 - List2((n-1-(i-j)) mod n)) j=0 n-1 __ \ = /_ List1(j) (1 - List2((j-(i+1)) mod n)). j=0 The product List1(j) (1 - List2((j-(i+1)) mod n)) is 1 if and only if List1(j) = 1 and List2((j-(i+1)) mod n) = 0, and 0 otherwise. Thus the i value of the convolution counts the number of places where List1 has a 1 offset i+1 circularly to the left of where List2 has a 0. The other convolution counts 0s corresponding to 1s. Given our input restrictions, this is the sum of absolute differences. 
Code: import numpy def convolve_circularly(a1, a2): return numpy.round(numpy.abs(numpy.fft.ifft(numpy.fft.fft(a1) * numpy.fft.fft(a2)))) def min_sum_abs_diff(a1, a2): a1 = numpy.array(a1) a2 = numpy.array(a2)[::-1] return numpy.min(convolve_circularly(a1, 1 - a2) + convolve_circularly(1 - a1, a2)) def slow_min_sum_abs_diff(a1, a2): return min( sum(abs(a1[i] - a2[i - k]) for i in range(len(a1))) for k in range(len(a2)) ) def main(): n = 100 for r in range(100000): a1 = numpy.random.randint(2, size=n) a2 = numpy.random.randint(2, size=n) r = min_sum_abs_diff(a1, a2) slow_r = slow_min_sum_abs_diff(a1, a2) if r != slow_r: print(a1, a2, r, slow_r) break if __name__ == "__main__": main() | 9 | 2 |
64,448,221 | 2020-10-20 | https://stackoverflow.com/questions/64448221/python-mean-doesnt-work-when-groupby-aggregates-dataframe-to-one-line | I have dataframe: time_to_rent = {'rentId': {0: 43.0, 1: 87.0, 2: 140.0, 3: 454.0, 4: 1458.0}, 'creditCardId': {0: 40, 1: 40, 2: 40, 3: 40, 4: 40}, 'createdAt': {0: Timestamp('2020-08-24 16:13:11.850216'), 1: Timestamp('2020-09-10 10:47:31.748628'), 2: Timestamp('2020-09-13 15:29:06.077622'), 3: Timestamp('2020-09-24 08:08:39.852348'), 4: Timestamp('2020-10-19 08:54:09.891518')}, 'updatedAt': {0: Timestamp('2020-08-24 20:26:31.805939'), 1: Timestamp('2020-09-10 20:05:18.759421'), 2: Timestamp('2020-09-13 18:38:10.044112'), 3: Timestamp('2020-09-24 08:53:22.512533'), 4: Timestamp('2020-10-19 17:10:09.110038')}, 'rent_time': {0: Timedelta('0 days 04:13:19.955723'), 1: Timedelta('0 days 09:17:47.010793'), 2: Timedelta('0 days 03:09:03.966490'), 3: Timedelta('0 days 00:44:42.660185'), 4: Timedelta('0 days 08:15:59.218520')}} The idea to aggregate dataframe by column 'creditCardId' and have mean value of 'rent_time'. Ideal output should be: creditCardId rent_time mean 40 0 days 05:08:10.562342 if I run code: print (time_to_rent['rent_time'].mean()) it works fine and i have "0 days 05:08:10.562342" as output. But when i am trying to get grouping by: time_to_rent.groupby('creditCardId', as_index=False)[['rent_time']].mean() I got error back: ~\anaconda3\lib\site-packages\pandas\core\groupby\generic.py in _cython_agg_blocks(self, how, alt, numeric_only, min_count) 1093 1094 if not (agg_blocks or split_frames): -> 1095 raise DataError("No numeric types to aggregate") 1096 1097 if split_items: DataError: No numeric types to aggregate if I use the command: time_to_rent = time_to_rent.groupby('creditCardId', as_index=False)[['rent_time']] it returns only "<pandas.core.groupby.generic.DataFrameGroupBy object at 0x000000000B5F2EE0>" May you please help me understand where my mistake is? | It's not your mistake, possibly a bug in Pandas since Timedelta can be averaged. A work-around is apply: time_to_rent.groupby('creditCardId')['rent_time'].apply(lambda x: x.mean()) Output: creditCardId 40 0 days 05:08:10.562342200 Name: rent_time, dtype: timedelta64[ns] | 6 | 1 |
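Another workaround, if one prefers the plain groupby mean, is to average the timedeltas as seconds and convert back (a sketch, with df built from the question's dict):
import pandas as pd

df = pd.DataFrame(time_to_rent)  # the dict from the question
result = pd.to_timedelta(
    df['rent_time'].dt.total_seconds().groupby(df['creditCardId']).mean(),
    unit='s',
)
print(result)  # creditCardId 40 -> approx. 0 days 05:08:10.562342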
64,447,085 | 2020-10-20 | https://stackoverflow.com/questions/64447085/how-to-delete-char-after-without-using-a-regular-expression | Given a string s representing characters typed into an editor, with "->" representing a delete, return the current state of the editor. Every single "->" should delete one char. If there are two "->", i.e. "->->", it should delete 2 chars after the symbol. Example 1 Input s = "a->bcz" Output "acz" Explanation The "b" got deleted by the delete. Example 2 Input s = "->x->z" Output empty string Explanation All characters are deleted. Also note you can type delete when the editor is empty as well. I have tried the following function, but it didn't work: def delete_forward(text): """ return the current state of the editor after deletion of characters """ f = "->" for i in text: if (i==f): del(text[i+1]) How can I complete this without using regular expressions? | Here's a simple recursive solution-
64,423,083 | 2020-10-19 | https://stackoverflow.com/questions/64423083/why-cant-both-args-and-keyword-only-arguments-be-mixed-with-args-and-kwargs | The usage of *args and **kwargs in python is clear to me and there are many questions out there in SO (eg Use of *args and **kwargs and What does ** (double star/asterisk) and * (star/asterisk) do for parameters?). But one thing I would like to understand is: why is it not possible to simultaneously define mandatory positional args, mandatory kwarg arguments and eventually still allow catching other args and kwargs as in cant_do_that below? def one_kwarg_is_mandatory(*, b, **kwargs): print(b) for key, value in kwargs.items(): print(key, value) def one_pos_arg_and_one_kwarg_are_mandatory(a, *, b, **kwargs): print(a, b) for key, value in kwargs.items(): print(key, value) # I wanted a mandatory arg (a) and possibly parse other args (*args), # then a mandatory kwarg (b) and eventually other kwargs (**kwargs) def cant_do_that(a, *args, *, b, **kwargs): print(a, b) print(args) for key, value in kwargs.items(): print(key, value) # not really interested on this because "b" must be a kwarg and hiding # it under **kwargs would not be explicit enough for my customer (sometimes myself ;)) def could_do_this_but(a, b, *args, **kwargs): print(a, b) print(args) print(kwargs) Yes one could get rid of b in the could_do_this_but function's signature, perform (for instance) a kwargs.get("b", None) at the top of the function and raise some appropriate error if found None... but having "b" directly on the function signature would allow faster and more explicit code development employing the function down the road. | The correct syntax is def cant_do_that(a, *args, b, **kwargs):. Note that * is used only once, both to mark the end of positional arguments and to set the name for variadic positional arguments. The * in a function definition is syntactically unique at the separation between positional-or-keyword and keyword-only arguments: parameter_list_starargs ::= "*" [parameter] ("," defparameter)* ["," ["**" parameter [","]]] | "**" parameter [","] In short, the grammar "*" [parameter] means * and *args are syntactically the same thing – a literal * and optional name – which may occur only once. Use a bare * to start keyword-only arguments without taking variadic positional arguments, and use a named *args to start keyword-only arguments with taking variadic positional arguments. If the form “*identifier” is present, it is initialized to a tuple receiving any excess positional parameters, defaulting to the empty tuple. [...] Parameters after “*” or “*identifier” are keyword-only parameters and may only be passed used keyword arguments. | 11 | 14 |
64,440,753 | 2020-10-20 | https://stackoverflow.com/questions/64440753/bigqueryoperator-changes-the-table-schema-and-column-modes-when-write-dispositio | I am using Airflow's BigQueryOperator to populate the BQ table with write_disposition='WRITE_TRUNCATE'. The problem is that every time the task runs, it alters the table schema and also the column mode from Required to Nullable. The create_disposition I am using is 'CREATE_NEVER'. Since my tables are pre-created, I don't want the schemas or column modes to be altered. Using write_disposition='WRITE_APPEND' fixes the issue but my requirement is to use WRITE_TRUNCATE. Any idea why BigQueryOperator alters the schema and mode? | I had a similar issue, not with the required/nullable shcema value, but on policy tags, and the behavior is the same: the policy tags are overriden (and lost). Here the answer of the Google support team: If you overwrite to a destination table, any existing policy tags are removed from the table, unless you use the --destination_schema flag to specify a schema with policy tags. For WRITE_TRUNCATE, the disposition overwrites the existing table and the schema. If you want to keep the policy tags, you can use "--destination_schema" to specify a schema with policy tags. However, with my test in python, I observed 2 different behaviors between the QueryJob (job based on sql query and that sinks the result in a table) and the LoadJob (job that loads data from a file and that sinks the data in a table). If you perform a LoadJob, Remove the schema autodetec Get the schema of the original table Perform the load job Like this in Python job_config = bigquery.job.LoadJobConfig() job_config.create_disposition = bigquery.job.CreateDisposition.CREATE_IF_NEEDED job_config.write_disposition = bigquery.job.WriteDisposition.WRITE_TRUNCATE job_config.skip_leading_rows = 1 # job_config.autodetect = True job_config.schema = client.get_table(table).schema query_job = client.load_table_from_uri(uri, table, job_config=job_config) res = query_job.result() This solution, to copy the schema, doesn't work with a QueryJob The workaround is the following (works for LoadJob and QueryJob) Truncate the table Perform a job in WRITE_EMPTY mode Trade-off: the WRITE_TRUNCATE is atomic: if the write write failed, the data aren't truncated the workaround is in 2 steps: if the write write failed, the data are already deleted config = bigquery.job.QueryJobConfig() config.create_disposition = bigquery.job.CreateDisposition.CREATE_IF_NEEDED config.write_disposition = bigquery.job.WriteDisposition.WRITE_EMPTY # Don't work # config.schema = client.get_table(table).schema config.destination = table # Step 1 truncate the table query_job = client.query(f'TRUNCATE TABLE `{table}`') res = query_job.result() # Step 2: Load the new data query_job = client.query(request, job_config=job_config) res = query_job.result() All of this to tell you that the BigQuery operator on Airflow isn't the problem. It's a BigQuery issue. You have a workaround to achieve what you want. | 7 | 10 |
64,401,503 | 2020-10-17 | https://stackoverflow.com/questions/64401503/is-there-a-way-to-further-improve-sparse-solution-times-using-python | I have been trying different sparse solvers available in Python 3 and comparing the performance between them and also against Octave and Matlab. I have chosen both direct and iterative approaches, I will explain this more in detail below. To generate a proper sparse matrix, with a banded structure, a Poisson's problem is solved using finite elements with squared grids of N=250, N=500 and N=1000. This results in dimensions of a matrix A=N^2xN^2 and a vector b=N^2x1, i.e., the largest NxN is a million. If one is interested in replicating my results, I have uploaded the matrices A and the vectors b in the following link (it will expire en 30 days) Get systems used here. The matrices are stored in triplets I,J,V, i.e. the first two columns are the indices for the rows and columns, respectively, and the third column are the values corresponding to such indices. Observe that there are some values in V, which are nearly zero, are left on purpose. Still, the banded structure is preserved after a "spy" matrix command in both Matlab and Python. For comparison, I have used the following solvers: Matlab and Octave, direct solver: The canonical x=A\b. Matlab and Octave, pcg solver: The preconditioned conjugated gradient, pcg solver pcg(A,b,1e-5,size(b,1)) (not preconditioner is used). Scipy (Python), direct solver: linalg.spsolve(A, b) where A is previously formatted in csr_matrix format. Scipy (Python), pcg solver: sp.linalg.cg(A, b, x0=None, tol=1e-05) Scipy (Python), UMFPACK solver: spsolve(A, b) using from scikits.umfpack import spsolve. This solver is apparently available (only?) under Linux, since it make use of the libsuitesparse [Timothy Davis, Texas A&M]. In ubuntu, this has to first be installed as sudo apt-get install libsuitesparse-dev. Furthermore, the aforementioned python solvers are tested in: Windows. Linux. Mac OS. Conditions: Timing is done right before and after the solution of the systems. I.e., the overhead for reading the matrices is not considered. Timing is done ten times for each system and an average and a standard deviation is computed. Hardware: Windows and Linux: Dell intel (R) Core(TM) i7-8850H CPU @2.6GHz 2.59GHz, 32 Gb RAM DDR4. Mac OS: Macbook Pro retina mid 2014 intel (R) quad-core(TM) i7 2.2GHz 16 Gb Ram DDR3. Results: Observations: Matlab A\b is the fastest despite being in an older computer. There are notable differences between Linux and Windows versions. See for instance the direct solver at NxN=1e6. This is despite Linux is running under windows (WSL). One can have a huge scatter in Scipy solvers. This is, if the same solution is run several times, one of the times can just increase more than twice. The fastest option in python can be nearly four times slower than the Matlab running in a more limited hardware. Really? If you want to reproduce the tests, I leave here very simple scripts. 
For matlab/octave: IJS=load('KbN1M.txt'); b=load('FbN1M.txt'); I=IJS(:,1); J=IJS(:,2); S=IJS(:,3); Neval=10; tsparse=zeros(Neval,1); tsolve_direct=zeros(Neval,1); tsolve_sparse=zeros(Neval,1); tsolve_pcg=zeros(Neval,1); for i=1:Neval tic A=sparse(I,J,S); tsparse(i)=toc; tic x=A\b; tsolve_direct(i)=toc; tic x2=pcg(A,b,1e-5,size(b,1)); tsolve_pcg(i)=toc; end save -ascii octave_n1M_tsparse.txt tsparse save -ascii octave_n1M_tsolvedirect.txt tsolve_direct save -ascii octave_n1M_tsolvepcg.txt tsolve_pcg For python: import time from scipy import sparse as sp from scipy.sparse import linalg import numpy as np from scikits.umfpack import spsolve, splu #NEEDS LINUX b=np.loadtxt('FbN1M.txt') triplets=np.loadtxt('KbN1M.txt') I=triplets[:,0]-1 J=triplets[:,1]-1 V=triplets[:,2] I=I.astype(int) J=J.astype(int) NN=int(b.shape[0]) Neval=10 time_sparse=np.zeros((Neval,1)) time_direct=np.zeros((Neval,1)) time_conj=np.zeros((Neval,1)) time_umfpack=np.zeros((Neval,1)) for i in range(Neval): t = time.time() A=sp.coo_matrix((V, (I, J)), shape=(NN, NN)) A=sp.csr_matrix(A) time_sparse[i,0]=time.time()-t t = time.time() x=linalg.spsolve(A, b) time_direct[i,0] = time.time() - t t = time.time() x2=sp.linalg.cg(A, b, x0=None, tol=1e-05) time_conj[i,0] = time.time() - t t = time.time() x3 = spsolve(A, b) #ONLY IN LINUX time_umfpack[i,0] = time.time() - t np.savetxt('pythonlinux_n1M_tsparse.txt',time_sparse,fmt='%.18f') np.savetxt('pythonlinux_n1M_tsolvedirect.txt',time_direct,fmt='%.18f') np.savetxt('pythonlinux_n1M_tsolvepcg.txt',time_conj,fmt='%.18f') np.savetxt('pythonlinux_n1M_tsolveumfpack.txt',time_umfpack,fmt='%.18f') Is there a way to further improve sparse solution times using python? or at least be in a similar order of performance as Matlab? I am open to suggestions using C/C++ or Fortran and a wrapper for python, but I belive it will not get much better than the UMFPACK choice. Suggestions are very welcome. P.S. I am aware of previous posts, e.g. scipy slow sparse matrix solver Issues using the scipy.sparse.linalg linear system solvers How to use Numba to speed up sparse linear system solvers in Python that are provided in scipy.sparse.linalg? But I think none is as comprehensive as this one, highlighting even more issues between operative systems when using python libraries. EDIT_1: I add a new plot with results using the QR solver from intel MKL using a python wrapper as suggested in the comments. This is, however, still behind Matlab's performance. To do this, one needs to add: from sparse_dot_mkl import sparse_qr_solve_mkl and sparse_qr_solve_mkl(A.astype(np.float32), b.astype(np.float32)) to the scripts provided in the original post. The ".astype(np.float32)" can be omitted, and the performance gets slighlty worse (about 10 %) for this system. | I will try to answer to myself. To provide an answer, I tried an even more demanding example, with a matrix of size of (N,N) of about half a million by half a million and the corresponding vector (N,1). This, however, is much less sparse (more dense) than the one provided in the question. This matrix stored in ascii is of about 1.7 Gb, compared to the one of the example, which is of about 0.25 Gb (despite its "size" is larger). See its shape here, Then, I tried to solve Ax=b using again Matlab, Octave and Python using the aforementioned the direct solvers from scipy, the intel MKL wrapper, the UMFPACK from Tim Davis. 
My first surprise is that both Matlab and Octave could solve the systems using the A\b, which is not for certain that it is a direct solver, since it chooses the best solver based on the characteristics of the matrix, see Matlab's x=A\b. However, the python's linalg.spsolve , the MKL wrapper and the UMFPACK were throwing out-of-memory errors in Windows and Linux. In mac, the linalg.spsolve was somehow computing a solution, and alghouth it was with a very poor performance, it never through memory errors. I wonder if the memory is handled differently depending on the OS. To me, it seems that mac swapped memory to the hard drive rather than using it from the RAM. The performance of the CG solver in Python was rather poor, compared to the matlab. However, to improve the performance in the CG solver in python, one can get a huge improvement in performance if A=0.5(A+A') is computed first (if one obviously, have a symmetric system). Using a preconditioner in Python did not help. I tried using the sp.linalg.spilu method together with sp.linalg.LinearOperator to compute a preconditioner, but the performance was rather poor. In matlab, one can use the incomplete Cholesky decomposition. For the out-of-memory problem the solution was to use an LU decomposition and solve two nested systems, such as Ax=b, A=LL', y=L\b and x=y\L'. I put here the min. solution times, Matlab mac, A\b = 294 s. Matlab mac, PCG (without conditioner)= 17.9 s. Matlab mac, PCG (with incomplete Cholesky conditioner) = 9.8 s. Scipy mac, direct = 4797 s. Octave, A\b = 302 s. Octave, PCG (without conditioner)= 28.6 s. Octave, PCG (with incomplete Cholesky conditioner) = 11.4 s. Scipy, PCG (without A=0.5(A+A'))= 119 s. Scipy, PCG (with A=0.5(A+A'))= 12.7 s. Scipy, LU decomposition using UMFPACK (Linux) = 3.7 s total. So the answer is YES, there are ways to improve the solution times in scipy. The use of the wrappers for UMFPACK (Linux) or intel MKL QR solver is highly recommended, if the memmory of the workstation allows it. Otherwise, performing A=0.5(A+A') prior to using the conjugate gradient solver can have a positive effect in the solution performance if one is dealing with symmetric systems. Let me know if someone would be interested in having this new system, so I can upload it. | 10 | 6 |
64,434,655 | 2020-10-19 | https://stackoverflow.com/questions/64434655/stop-tensorflow-from-printing-to-the-console | I've been using tensorflow without issue, until I added the following lines of code: log_dir = os.path.join("logs", "fit", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = TensorBoard(log_dir) After running this I get an large amount of information printed to the console. I've tried looking at the tf.keras.callbacks.TensorBoard documentation to see if I can reduce verbosity but I don't see an option. From various stackoverflow answers I've also tried setting the verbosity of tf down but to no avail: tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) tf.get_logger().setLevel('ERROR') tf.autograph.set_verbosity(3) os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' I have the following specifications: Python = 3.8 Tensorflow = 2.3.1 Cuda Toolkit = 10.1 cuDNN = 7.6.4 GPU=Nvidia RTX2060 The information being printed to the console are all I messages, I've pasted these below if they add any important detail. 2020-10-19 20:59:45.205887: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll 2020-10-19 20:59:47.463539: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library nvcuda.dll 2020-10-19 20:59:48.540417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2060 computeCapability: 7.5 coreClock: 1.2GHz coreCount: 30 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 312.97GiB/s 2020-10-19 20:59:48.542360: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll 2020-10-19 20:59:48.562444: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll 2020-10-19 20:59:48.569770: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll 2020-10-19 20:59:48.572530: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll 2020-10-19 20:59:48.581126: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll 2020-10-19 20:59:48.586315: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll 2020-10-19 20:59:48.604682: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll 2020-10-19 20:59:48.605112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-19 21:00:02.120333: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-19 21:00:02.128143: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x255e4b0b0a0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-19 21:00:02.128792: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-19 21:00:03.014080: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2060 computeCapability: 7.5 coreClock: 1.2GHz coreCount: 30 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 312.97GiB/s 2020-10-19 21:00:03.014776: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll 2020-10-19 21:00:03.015127: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll 2020-10-19 21:00:03.015477: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll 2020-10-19 21:00:03.015822: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll 2020-10-19 21:00:03.016172: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll 2020-10-19 21:00:03.016565: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll 2020-10-19 21:00:03.016911: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll 2020-10-19 21:00:03.017288: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-19 21:00:03.722569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-19 21:00:03.722942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-19 21:00:03.723166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-19 21:00:03.723522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4594 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5) 2020-10-19 21:00:03.726833: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x255883632b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-19 21:00:03.727297: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce RTX 2060, Compute Capability 7.5 2020-10-19 21:00:08.908192: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session started. 2020-10-19 21:00:08.908485: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1391] Profiler found 1 GPUs 2020-10-19 21:00:08.910553: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cupti64_101.dll 2020-10-19 21:00:09.007043: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1513] CUPTI activity buffer flushed 2020-10-19 21:06:09.402869: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session started. 2020-10-19 21:06:09.403307: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1513] CUPTI activity buffer flushed Can anyone please help me to stop these messages printing to the console its making analysing other info on the console very difficult! Thanks! | You can disable debugging logs with os.environ. 
import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorflow as tf Possible values are as follows: 0 = all messages are logged (default behavior) 1 = INFO messages are not printed 2 = INFO and WARNING messages are not printed 3 = INFO, WARNING, and ERROR messages are not printed | 8 | 10 |
64,433,923 | 2020-10-19 | https://stackoverflow.com/questions/64433923/how-can-i-get-all-the-subplots-to-zoom-and-pan-the-same-way-on-the-x-axis-with | I have a plotly graph with subplots drawn this way: fig = make_subplots( rows=4, cols=1, subplot_titles=("Price, orders and positions", "Margin use", "PnL and fees", "Volume traded"), row_heights=[0.5, 0.2, 0.2, 0.1], vertical_spacing=0.1 ) # price, orders, etc fig.add_traces( [ # draw price, average price and min / max go.Scatter(name='Price', x=df.index, y=df['price'], mode='lines', line=dict(color='rgba(31, 119, 180, 1.)')), go.Scatter(name='Average Price', x=df.index, y=df['average_price'], mode='lines', line=dict(color='rgba(31, 119, 180, 0.5)')), go.Scatter(x=df.index, y=df['price_max'], mode='lines', marker=dict(color="#444"), line=dict(width=0), showlegend=False), go.Scatter(x=df.index, y=df['price_min'], marker=dict(color="#444"), line=dict(width=0), mode='lines', fillcolor='rgba(68, 68, 68, 0.3)', fill='tonexty', showlegend=False), # draw the long / short orders go.Scatter(name='Long Open Price', x=df.index, y=df['orderlo_price'], mode='lines', line=dict(color='rgb(180, 119, 31)')), go.Scatter(name='Long Close Price', x=df.index, y=df['orderlc_price'], mode='lines', line=dict(width=2, color='rgb(220, 159, 31)')), go.Scatter(name='Short Open Price', x=df.index, y=df['orderso_price'], mode='lines', line=dict(color='rgb(119, 180, 31)')), go.Scatter(name='Short Close Price', x=df.index, y=df['ordersc_price'], mode='lines', line=dict(width=2, color='rgb(159, 220, 31)')), # add the position go.Scatter(name='position', x=df.index, y=df['position_price'], mode='lines', line=dict(color='rgb(240, 200, 40)')) ], rows=[1, 1, 1, 1, 1, 1, 1, 1, 1], cols=[1, 1, 1, 1, 1, 1, 1, 1, 1] ) # margin use fig.add_traces( [ go.Scatter(name='Margin use', x=df.index, y=df['margin_use'], mode='lines', line=dict(color='rgba(31, 119, 180, 1.)')), #go.Scatter(name='Margin max', x=df.index, y=df['margin_max'], mode='lines', line=dict(color='rgba(31, 180, 119, 1.)')) ], rows=[2], #, 2], cols=[1], #, 1] ) # PnL and fees fig.add_traces( [ go.Scatter(name='Unrealized PnL', x=df.index, y=df['unrealized_pnl'], line=dict(color='rgb(64, 255, 64, 1.0)')), go.Scatter(name='Realized PnL', x=df.index, y=df['realized_pnl'], fill='tozeroy', line=dict(color='rgb(32, 192, 32, 0.8)')), go.Scatter(name='Fees Paid', x=df.index, y=df['fees_paid'], fill='tozeroy', line=dict(color='rgb(192, 32, 32, 0.4)')) ], rows=[3, 3, 3], cols=[1, 1, 1] ) # Volume traded fig.add_trace( go.Scatter(name='Volume Traded', x=df.index, y=df['volume_traded'], fill='tozeroy', line=dict(color='rgb(32, 128, 192, 1.0)')), row=4, col=1 ) # remove the margins fig.update( layout=go.Layout( margin=go.layout.Margin(l=0.5, r=0.5, b=0.5, t=50) ) ) how can I make sure all the subplots are synchronized regarding the X axis zoom and panning? Right now you can zoom in one subplot and suddenly the graph in that subplot has no relations to the other subplots. | Have a look at the Shared X-Axes section of the Plotly docs. I believe this is what you're looking for. Essentially, add shared_xaxes=True to the make_subplots() function. | 11 | 17 |
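To make the accepted fix above concrete, here is a minimal, hedged sketch; the two placeholder traces and their data are invented for illustration and have nothing to do with the questioner's dataframe — only the shared_xaxes=True argument is the actual fix.

```python
import plotly.graph_objects as go
from plotly.subplots import make_subplots

# shared_xaxes=True links the x-axes: zooming or panning one subplot
# moves every other subplot's x-axis in lockstep.
fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1)

fig.add_trace(go.Scatter(x=[1, 2, 3], y=[10.0, 11.5, 12.2], name="price"), row=1, col=1)
fig.add_trace(go.Scatter(x=[1, 2, 3], y=[0.2, 0.4, 0.1], name="margin"), row=2, col=1)

fig.show()
```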
64,431,313 | 2020-10-19 | https://stackoverflow.com/questions/64431313/split-multiple-columns-in-pandas-dataframe-by-delimiter | I have survey data which annoying has returned multiple choice questions in the following way. It's in an excel sheet There is about 60 columns with responses from single to multiple that are split by /. This is what I have so far, is there any way to do this quicker without having to do this for each individual column data = {'q1': ['one', 'two', 'three'], 'q2' : ['one/two/three', 'a/b/c', 'd/e/f'], 'q3' : ['a/b/c', 'd/e/f','g/h/i']} df = pd.DataFrame(data) df[['q2a', 'q2b', 'q2c']]= df['q2'].str.split('/', expand = True, n=0) df[['q3a', 'q3b', 'q3c']]= df['q2'].str.split('/', expand = True, n=0) clean_df = df.drop(df[['q2', 'q3']], axis=1) | We can use list comprehension with add_prefix, then we use pd.concat to concatenate everything to your final df: splits = [df[col].str.split(pat='/', expand=True).add_prefix(col) for col in df.columns] clean_df = pd.concat(splits, axis=1) q10 q20 q21 q22 q30 q31 q32 0 one one two three a b c 1 two a b c d e f 2 three d e f g h i If you actually want your column names to be suffixed by a letter, you can do the following with string.ascii_lowercase: from string import ascii_lowercase dfs = [] for col in df.columns: d = df[col].str.split('/', expand=True) c = d.shape[1] d.columns = [col + l for l in ascii_lowercase[:c]] dfs.append(d) clean_df = pd.concat(dfs, axis=1) q1a q2a q2b q2c q3a q3b q3c 0 one one two three a b c 1 two a b c d e f 2 three d e f g h i | 10 | 6 |
64,428,794 | 2020-10-19 | https://stackoverflow.com/questions/64428794/flake8-disable-linter-only-for-a-block-of-code | I have a file in python like: def test_constructor_for_legacy_json(): """Test if constructor works for a legacy JSON in an old database""" a = A(**{ 'field1': 'BIG TEXT WITH MORE THAN 500 CHARACTERS....(...)', 'field2': 'BIG TEXT WITH MORE THAN 500 CHARACTERS....(...)', 'field3': 'BIG TEXT WITH MORE THAN 500 CHARACTERS....(...)', # (...) 'field1000': 'BIG TEXT WITH MORE THAN 500 CHARACTERS....(...)', }) assert type(a) == A When I run flake8 + hacking I receive an error because the lines are too big. If I put this command at the beginning of the file # flake8: noqa all file will be ignored from linter. But I only want to exclude from linter the block where a is declared. I want to lint the rest of the file, and I cannot put at the end of each fieldx an # noqa: E501. Some one know how can I solve this? Thanks | There isn't a way in flake8 to ignore a block of code Your options are: ignore each line that produces an error by putting # noqa: E501 on it ignore the entire file (but this turns off all other errors as well) with a # flake8: noqa on a line by itself ignore E501 in the entire file by using per-file-ignores: [flake8] per-file-ignores = path/to/file.py: E501 generally I'd prefer the third one, maybe even sequestering your long-strings into their own file to be ignored disclaimer: I'm the current flake8 maintainer | 49 | 53 |
64,425,864 | 2020-10-19 | https://stackoverflow.com/questions/64425864/zeep-client-throws-service-has-no-operation-error | I am using zeep to call a SOAP webservice. It is throwing an error even if the method exists in the WSDL client = Client(self.const.soap_url) client.service.getPlansDetails(id) I get this error AttributeError: Service has no operation 'getPlansDetails' Here is the information from the python -m zeep <wsd_url> Prefixes: xsd: http://www.w3.org/2001/XMLSchema ns0: http://tempuri.org/Imports ns1: http://tempuri.org ns2: http://schemas.datacontract.org/2004/07/Dynamics.Ax.Application ns3: http://schemas.datacontract.org/2004/07/Microsoft.Dynamics.Ax.Xpp ns4: http://schemas.microsoft.com/dynamics/2010/01/datacontracts ns5: http://schemas.microsoft.com/2003/10/Serialization/Arrays ns6: http://schemas.microsoft.com/dynamics/2008/01/documents/Fault ns7: http://schemas.datacontract.org/2004/07/Microsoft.Dynamics.AX.Framework.Services ns8: http://schemas.microsoft.com/2003/10/Serialization/ Global elements: ns2:LFCPaymentPlanDetailsContract(ns2:LFCPaymentPlanDetailsContract) ns7:ArrayOfInfologMessage(ns7:ArrayOfInfologMessage) ns7:InfologMessage(ns7:InfologMessage) ns7:InfologMessageType(ns7:InfologMessageType) ns3:XppObjectBase(ns3:XppObjectBase) ns5:ArrayOfKeyValueOfstringstring(ns5:ArrayOfKeyValueOfstringstring) ns8:QName(xsd:QName) ns8:anyType(None) ns8:anyURI(xsd:anyURI) ns8:base64Binary(xsd:base64Binary) ns8:boolean(xsd:boolean) ns8:byte(xsd:byte) ns8:char(ns8:char) ns8:dateTime(xsd:dateTime) ns8:decimal(xsd:decimal) ns8:double(xsd:double) ns8:duration(ns8:duration) ns8:float(xsd:float) ns8:guid(ns8:guid) ns8:int(xsd:int) ns8:long(xsd:long) ns8:short(xsd:short) ns8:string(xsd:string) ns8:unsignedByte(xsd:unsignedByte) ns8:unsignedInt(xsd:unsignedInt) ns8:unsignedLong(xsd:unsignedLong) ns8:unsignedShort(xsd:unsignedShort) ns6:AifFault(ns6:AifFault) ns6:ArrayOfFaultMessage(ns6:ArrayOfFaultMessage) ns6:ArrayOfFaultMessageList(ns6:ArrayOfFaultMessageList) ns6:FaultMessage(ns6:FaultMessage) ns6:FaultMessageList(ns6:FaultMessageList) ns4:CallContext(ns4:CallContext) ns1:LFCPaymentPlanDetailsServicesGetPlansDetailsRequest(_crmno: xsd:string) ns1:LFCPaymentPlanDetailsServicesGetPlansDetailsResponse(response: ns2:LFCPaymentPlanDetailsContract) Global types: xsd:anyType ns2:LFCPaymentPlanDetailsContract(Amount: xsd:decimal, CRMNumber: xsd:string, CustomerName: xsd:string, CustomerRequestedType: xsd:string, ErrorMessage: xsd:string, OrderId: xsd:long, PlanType: xsd:string) ns7:ArrayOfInfologMessage(InfologMessage: ns7:InfologMessage[]) ns7:InfologMessage(InfologMessageType: ns7:InfologMessageType, Message: xsd:string) ns7:InfologMessageType ns3:XppObjectBase() ns5:ArrayOfKeyValueOfstringstring(KeyValueOfstringstring: {Key: xsd:string, Value: xsd:string}[]) ns8:char ns8:duration ns8:guid ns6:AifFault(CustomDetailXml: xsd:string, FaultMessageListArray: ns6:ArrayOfFaultMessageList, InfologMessageList: ns7:ArrayOfInfologMessage, StackTrace: xsd:string, XppExceptionType: xsd:int) ns6:ArrayOfFaultMessage(FaultMessage: ns6:FaultMessage[]) ns6:ArrayOfFaultMessageList(FaultMessageList: ns6:FaultMessageList[]) ns6:FaultMessage(Code: xsd:string, Message: xsd:string) ns6:FaultMessageList(Document: xsd:string, DocumentOperation: xsd:string, FaultMessageArray: ns6:ArrayOfFaultMessage, Field: xsd:string, Service: xsd:string, ServiceOperation: xsd:string, ServiceOperationParameter: xsd:string, XPath: xsd:string, XmlLine: xsd:string, XmlPosition: xsd:string) ns4:CallContext(Company: xsd:string, 
Language: xsd:string, LogonAsUser: xsd:string, MessageId: xsd:string, PartitionKey: xsd:string, PropertyBag: ns5:ArrayOfKeyValueOfstringstring) xsd:ENTITIES xsd:ENTITY xsd:ID xsd:IDREF xsd:IDREFS xsd:NCName xsd:NMTOKEN xsd:NMTOKENS xsd:NOTATION xsd:Name xsd:QName xsd:anySimpleType xsd:anyURI xsd:base64Binary xsd:boolean xsd:byte xsd:date xsd:dateTime xsd:decimal xsd:double xsd:duration xsd:float xsd:gDay xsd:gMonth xsd:gMonthDay xsd:gYear xsd:gYearMonth xsd:hexBinary xsd:int xsd:integer xsd:language xsd:long xsd:negativeInteger xsd:nonNegativeInteger xsd:nonPositiveInteger xsd:normalizedString xsd:positiveInteger xsd:short xsd:string xsd:time xsd:token xsd:unsignedByte xsd:unsignedInt xsd:unsignedLong xsd:unsignedShort Bindings: Soap11Binding: {http://tempuri.org/}BasicHttpBinding_LFCPaymentPlanDetailsServices Soap11Binding: {http://tempuri.org/}serviceEndpoint Service: RoutingService Port: serviceEndpoint (Soap11Binding: {http://tempuri.org/}serviceEndpoint) Operations: Port: BasicHttpBinding_LFCPaymentPlanDetailsServices (Soap11Binding: {http://tempuri.org/}BasicHttpBinding_LFCPaymentPlanDetailsServices) Operations: getPlansDetails(_crmno: xsd:string, _soapheaders={context: ns4:CallContext}) -> response: ns2:LFCPaymentPlanDetailsContract | The reason why it's giving that error is because by default, zeep binds to 1st service and 1st port. But in your case you are trying to call method from 2nd port. So this should work: client = Client(self.const.soap_url) plan_client = client.bind('RoutingService', 'BasicHttpBinding_LFCPaymentPlanDetailsServices') plan_client.service.getPlansDetails(id) Let us know if it doesn't work. | 6 | 5 |
64,427,593 | 2020-10-19 | https://stackoverflow.com/questions/64427593/how-to-check-if-default-value-for-python-function-argument-is-set-using-inspect | I'm trying to identify the parameters of a function for which default values are not set. I'm using inspect.signature(func).parameters.value() function which gives a list of function parameters. Since I'm using PyCharm, I can see that the parameters for which the default value is not set have their Parameter.default attribute set to inspect._empty. I'm declaring the function in the following way: def f(a, b=1): pass So, the default value of a is inspect._empty. Since inspect._empty is a private attribute, I thought that there might be a method for checking if a value is inspect._empty, but I couldn't find it. | You can do it like this: import inspect def foo(a, b=1): pass for param in inspect.signature(foo).parameters.values(): if param.default is param.empty: print(param.name) Output: a param.empty holds the same object inspect._empty. I suppose that this way of using it is recommended because of the example in the official documentation of inspect module: Example: print all keyword-only arguments without default values: >>> >>> def foo(a, b, *, c, d=10): ... pass >>> sig = signature(foo) >>> for param in sig.parameters.values(): ... if (param.kind == param.KEYWORD_ONLY and ... param.default is param.empty): ... print('Parameter:', param) Parameter: c | 8 | 12 |
64,422,367 | 2020-10-19 | https://stackoverflow.com/questions/64422367/how-to-overwrite-python-dataclass-asdict-method | I have a dataclass, which looks like this: @dataclass class myClass: id: str mode: str value: float This results in: dataclasses.asdict(myClass) {"id": id, "mode": mode, "value": value} But what I want is {id:{"mode": mode, "value": value}} I thought I could achive this by adding a to_dict method to my dataclass, which returns the desired dict, but that didn't work. How could I get my desired result? | from dataclasses import dataclass, asdict @dataclass class myClass: id: str mode: str value: float def my_dict(data): return { data[0][1]: { field: value for field, value in data[1:] } } instance = myClass("123", "read", 1.23) data = {"123": {"mode": "read", "value": 1.23}} assert asdict(instance, dict_factory=my_dict) == data | 15 | 7 |
64,422,974 | 2020-10-19 | https://stackoverflow.com/questions/64422974/pandas-select-rows-that-contain-any-substring-from-a-list | I would like to select those rows in a column that contains any of the substrings in a list. This is what I have for now. product = ['LID', 'TABLEWARE', 'CUP', 'COVER', 'CONTAINER', 'PACKAGING'] df_plastic_prod = df_plastic[df_plastic['Goods Shipped'].str.contains(product)] df_plastic_prod.info() Sample df_plastic Name Product David PLASTIC BOTTLE Meghan PLASTIC COVER Melanie PLASTIC CUP Aaron PLASTIC BOWL Venus PLASTIC KNIFE Abigail PLASTIC CONTAINER Sophia PLASTIC LID Desired df_plastic_prod Name Product Meghan PLASTIC COVER Melanie PLASTIC CUP Abigail PLASTIC CONTAINER Sophia PLASTIC LID Thanks in advance! I appreciate any assistance on this! | For match values by subtrings join all values of list by | for regex or - so get values LID or TABLEWARE ...: Solution working well also with 2 or more words in list. pat = '|'.join(r"\b{}\b".format(x) for x in product) df_plastic_prod = df_plastic[df_plastic['Product'].str.contains(pat)] print (df_plastic_prod) Name Product 1 Meghan PLASTIC COVER 2 Melanie PLASTIC CUP 5 Abigail PLASTIC CONTAINER 6 Sophia PLASTIC LID | 6 | 6 |
64,421,671 | 2020-10-19 | https://stackoverflow.com/questions/64421671/error-getting-for-src-type-cv-8uc3-to-cv-8uc1-in-opencv-python | import cv2 as cv import numpy as np from matplotlib import pyplot as plt img = cv.imread('t2.jpg', cv.COLOR_BGR2GRAY) blur = cv.GaussianBlur(img, (5, 5), 0) ret3, th3 = cv.threshold(blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU) I get the error: Traceback (most recent call last): File "F:/l4 project docs/project/l4proTest1/projectionStepTest.py", line 11, in <module> ret3,th3 = cv.threshold(blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU) cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-9d_dfo3_\opencv\modules\imgproc\src\thresh.cpp:1557: error: (-2:Unspecified error) in function 'double __cdecl cv::threshold(const class cv::_InputArray &,const class cv::_OutputArray &,double,double,int)' THRESH_OTSU mode: 'src_type == CV_8UC1 || src_type == CV_16UC1' where 'src_type' is 16 (CV_8UC3) Process finished with exit code 1 | img = cv.imread('t2.jpg', cv.IMREAD_GRAYSCALE) or img = cv.imread('t2.jpg', cv.IMREAD_COLOR) img = cv.cvtColor(img, cv.COLOR_BGR2GRAY) | 10 | 15 |
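Putting the accepted fix back into the questioner's pipeline, a complete sketch could look like the following ('t2.jpg' is simply whatever image file is on disk). The key point is that cv.COLOR_BGR2GRAY is a colour-conversion code for cv.cvtColor, not a flag for cv.imread, which is why the original read produced a 3-channel (CV_8UC3) image.

```python
import cv2 as cv

# Read directly as a single-channel (CV_8UC1) image, which THRESH_OTSU requires.
img = cv.imread('t2.jpg', cv.IMREAD_GRAYSCALE)

blur = cv.GaussianBlur(img, (5, 5), 0)
ret3, th3 = cv.threshold(blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
```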
64,414,486 | 2020-10-18 | https://stackoverflow.com/questions/64414486/how-to-check-if-a-user-is-subscribed-to-a-specific-telegram-channel-python-py | I am writing a Telegram bot using the PyTelegramBotApi library, I would like to implement the function of checking the user's subscription to a certain telegram channel, and if there is none, offer to subscribe. Thanks in advance for your answers! | use getChatMember method to check if a user is member in a channel or not. getChatMember Use this method to get information about a member of a chat. Returns a ChatMember object on success. import telebot bot = telebot.TeleBot("TOKEN") CHAT_ID = -1001... USER_ID = 700... result = bot.get_chat_member(CHAT_ID, USER_ID) print(result) bot.polling() Sample result: You receive a user info if the user is a member {'user': {'id': 700..., 'is_bot': False, 'first_name': '', 'username': None, 'last_name': None, ... } or an Exception otherwise telebot.apihelper.ApiTelegramException: A request to the Telegram API was unsuccessful. Error code: 400 Description: Bad Request: user not found example on how to use it inside your project import telebot from telebot.apihelper import ApiTelegramException bot = telebot.TeleBot("BOT_TOKEN") CHAT_ID = -1001... USER_ID = 700... def is_subscribed(chat_id, user_id): try: bot.get_chat_member(chat_id, user_id) return True except ApiTelegramException as e: if e.result_json['description'] == 'Bad Request: user not found': return False if not is_subscribed(CHAT_ID, USER_ID): # user is not subscribed. send message to the user bot.send_message(CHAT_ID, 'Please subscribe to the channel') else: # user is subscribed. continue with the rest of the logic # ... bot.polling() | 8 | 11 |
64,408,338 | 2020-10-17 | https://stackoverflow.com/questions/64408338/include-raw-tab-literal-character-in-doctest | I can't figure out how to avoid this doctest error: Failed example: print(test()) Expected: output <BLANKLINE> Got: output <BLANKLINE> For this code def test(): r'''Produce string according to specification. >>> print(test()) output <BLANKLINE> ''' return '\toutput\n' I have put a tab literal into the source code, line 5 in front of output. It looks like doctest (or python docstrings?) ignores that tab literal and converts it to four spaces. The so-called "Expected" value is literally not what my source specifies it to be. What's a solution for this? I don't want to replace the print statement with >>> test() '\toutput\n' Because a huge part of why I usually like doctests is because they demonstrate examples, and the most important part of this function I am writing is the shape of the output. | The tabs in the docstring get expanded to 8 spaces, but the tabs in the output are not expanded. From the doctest documentation (emphasis added): All hard tab characters are expanded to spaces, using 8-column tab stops. Tabs in output generated by the tested code are not modified. Because any hard tabs in the sample output are expanded, this means that if the code output includes hard tabs, the only way the doctest can pass is if the NORMALIZE_WHITESPACE option or directive is in effect. Alternatively, the test can be rewritten to capture the output and compare it to an expected value as part of the test. This handling of tabs in the source was arrived at through trial and error, and has proven to be the least error prone way of handling them. It is possible to use a different algorithm for handling tabs by writing a custom DocTestParser class. Unfortunately, all examples of directive use in the doctest docs have been mangled by Sphinx. Here's how you would use NORMALIZE_WHITESPACE: def test(): r'''Produce string according to specification. >>> print(test()) # doctest: +NORMALIZE_WHITESPACE output <BLANKLINE> ''' return '\toutput\n' Note that this treats all runs of whitespace as equal, rather than disabling the tab processing. Disabling the tab processing is absurdly cumbersome, and it's impossible through the usual doctest interface - you would need to subclass DocTestParser, manually reimplement doctest parsing, then construct instances of DocTestFinder, DocTestRunner, and your DocTestParser subclass and call their methods manually. | 8 | 8 |
64,412,233 | 2020-10-18 | https://stackoverflow.com/questions/64412233/how-to-format-pandas-matplotlib-graph-so-the-x-axis-ticks-are-only-hours-and-m | I am trying to plot temperature with respect to time data from a csv file. My goal is to have a graph which shows the temperature data per day. My problem is the x-axis: I would like to show the time for uniformly and only be in hours and minutes with 15 minute intervals, for example: 00:00, 00:15, 00:30. The csv is loaded into a pandas dataframe, where I filter the data to be shown based on what day it is, in the code I want only temperature data for 18th day of the month. Here is the csv data that I am loading in: date,temp,humid 2020-10-17 23:50:02,20.57,87.5 2020-10-17 23:55:02,20.57,87.5 2020-10-18 00:00:02,20.55,87.31 2020-10-18 00:05:02,20.54,87.17 2020-10-18 00:10:02,20.54,87.16 2020-10-18 00:15:02,20.52,87.22 2020-10-18 00:20:02,20.5,87.24 2020-10-18 00:25:02,20.5,87.24 here is the python code to make the graph: import pandas as pd import datetime import matplotlib.pyplot as plt df = pd.read_csv("saveData2020.csv") #make new columns in dataframe so data can be filtered df["New_Date"] = pd.to_datetime(df["date"]).dt.date df["New_Time"] = pd.to_datetime(df["date"]).dt.time df["New_hrs"] = pd.to_datetime(df["date"]).dt.hour df["New_mins"] = pd.to_datetime(df["date"]).dt.minute df["day"] = pd.DatetimeIndex(df['New_Date']).day #filter the data to be only day 18 ndf = df[df["day"]==18] #display dataframe in console pd.set_option('display.max_rows', ndf.shape[0]+1) print(ndf.head(10)) #plot a graph ndf.plot(kind='line',x='New_Time',y='temp',color='red') #edit graph to be sexy plt.setp(plt.gca().xaxis.get_majorticklabels(),'rotation', 30) plt.xlabel("time") plt.ylabel("temp in C") #show graph with the sexiness edits plt.show() here is the graph I get: | Answer First of all, you have to convert "New Time" (your x axis) from str to datetime type with: ndf["New_Time"] = pd.to_datetime(ndf["New_Time"], format = "%H:%M:%S") Then you can simply add this line of code before showing the plot (and import the proper matplotlib library, matplotlib.dates as md) to tell matplotlib you want only hours and minutes: plt.gca().xaxis.set_major_formatter(md.DateFormatter('%H:%M')) And this line of code to fix the 15 minutes span for the ticks: plt.gca().xaxis.set_major_locator(md.MinuteLocator(byminute = [0, 15, 30, 45])) For more info on x axis time formatting you can check this answer. 
Code import pandas as pd import datetime import matplotlib.pyplot as plt import matplotlib.dates as md df = pd.read_csv("saveData2020.csv") #make new columns in dataframe so data can be filtered df["New_Date"] = pd.to_datetime(df["date"]).dt.date df["New_Time"] = pd.to_datetime(df["date"]).dt.time df["New_hrs"] = pd.to_datetime(df["date"]).dt.hour df["New_mins"] = pd.to_datetime(df["date"]).dt.minute df["day"] = pd.DatetimeIndex(df['New_Date']).day #filter the data to be only day 18 ndf = df[df["day"]==18] ndf["New_Time"] = pd.to_datetime(ndf["New_Time"], format = "%H:%M:%S") #display dataframe in console pd.set_option('display.max_rows', ndf.shape[0]+1) print(ndf.head(10)) #plot a graph ndf.plot(kind='line',x='New_Time',y='temp',color='red') #edit graph to be sexy plt.setp(plt.gca().xaxis.get_majorticklabels(),'rotation', 30) plt.xlabel("time") plt.ylabel("temp in C") plt.gca().xaxis.set_major_locator(md.MinuteLocator(byminute = [0, 15, 30, 45])) plt.gca().xaxis.set_major_formatter(md.DateFormatter('%H:%M')) #show graph with the sexiness edits plt.show() Plot Notes If you do not need "New_Date", "New_Time", "New hrs", "New_mins" and "day" columns for other purposes than plotting, you can use a shorter version of the above code, getting rid of those columns and appling the day filter directly on "date" column as here: import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as md df = pd.read_csv("saveData2020.csv") # convert date from string to datetime df["date"] = pd.to_datetime(df["date"], format = "%Y-%m-%d %H:%M:%S") #filter the data to be only day 18 ndf = df[df["date"].dt.day == 18] #display dataframe in console pd.set_option('display.max_rows', ndf.shape[0]+1) print(ndf.head(10)) #plot a graph ndf.plot(kind='line',x='date',y='temp',color='red') #edit graph to be sexy plt.setp(plt.gca().xaxis.get_majorticklabels(),'rotation', 30) plt.xlabel("time") plt.ylabel("temp in C") plt.gca().xaxis.set_major_locator(md.MinuteLocator(byminute = [0, 15, 30, 45])) plt.gca().xaxis.set_major_formatter(md.DateFormatter('%H:%M')) #show graph with the sexiness edits plt.show() This code will reproduce exactly the same plot as before. | 6 | 4 |
64,409,191 | 2020-10-18 | https://stackoverflow.com/questions/64409191/angle-between-two-vectors-in-the-interval-0-360 | I'm trying to find the angle between two vectors. The following is the code that I use to evaluate the angle between vectors ba and bc: import numpy as np import scipy.linalg as la a = np.array([6,0]) b = np.array([0,0]) c = np.array([1,1]) ba = a - b bc = c - b cosine_angle = np.dot(ba, bc) / (la.norm(ba) * la.norm(bc)) angle = np.arccos(cosine_angle) print (np.degrees(angle)) My question is: in this code, for both c = np.array([1,1]) and c = np.array([1,-1]) you get 45 degrees as the answer. I can understand this from a mathematical viewpoint because, with the dot product, you always get the angle in the interval [0,180]. But geometrically this is misleading, as the point c is in two different locations for [1,1] and [1,-1]. So is there a way that I can get the angle in the interval [0,360] for a general starting point b = np.array([x,y])? Appreciate your help. | Conceptually, obtaining the angle between two vectors using the dot product is perfectly alright. However, since the angle between two vectors is invariant upon translation/rotation of the coordinate system, we can find the angle subtended by each vector to the positive direction of the x-axis and subtract one value from the other. The advantage is that we'll use np.arctan2 to find the angles, which returns angles in the range [-π,π] and hence gives you an idea of the quadrant your vector lies in. # Syntax: np.arctan2(y, x) - put the y value first! # Instead of explicitly referring by indices, you can unpack each vector in reverse, like so: # np.arctan2(*bc[::-1]) angle = np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0]) Which you can then appropriately transform to get a value within [0, 2π]. | 6 | 5 |
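The "appropriately transform" step at the end of the answer above is just a wrap into [0, 2π); here is a small sketch of the full computation, reusing the same a, b, c points from the question (the 315-degree result for c = [1, -1] is what distinguishes it from c = [1, 1]).

```python
import numpy as np

a, b, c = np.array([6, 0]), np.array([0, 0]), np.array([1, -1])
ba, bc = a - b, c - b

# Signed angle from ba to bc, then wrapped into [0, 2*pi).
angle = np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0])
angle = angle % (2 * np.pi)

print(np.degrees(angle))  # 315.0 here; 45.0 if c = np.array([1, 1])
```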