Dataset columns (type and value/length range):
question_id: int64, 59.5M to 79.4M
creation_date: string, length 8 to 10
link: string, length 60 to 163
question: string, length 53 to 28.9k
accepted_answer: string, length 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
62,814,607
2020-7-9
https://stackoverflow.com/questions/62814607/pdfkit-warning-blocked-access-to-file
I am getting an error (Blocked access to the file) during HTML to PDF conversion with the pdfkit library when my HTML file uses a local image. How can I use local images in my HTML file?
I faced the same problem. I solved it by adding the "enable-local-file-access" option to pdfkit.from_file(): options = { "enable-local-file-access": None } pdfkit.from_file(html_file_name, pdf_file_name, options=options)
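A minimal end-to-end sketch of that fix (the file names are placeholders, and it assumes the wkhtmltopdf binary that pdfkit wraps is installed):

import pdfkit

# "enable-local-file-access" lets wkhtmltopdf read local images referenced by the HTML.
# Passing None as the value emits the flag without an argument.
options = {"enable-local-file-access": None}

# Hypothetical input/output paths.
pdfkit.from_file("report.html", "report.pdf", options=options)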
12
38
62,803,633
2020-7-8
https://stackoverflow.com/questions/62803633/timestamp-object-has-no-attribute-dt
I am trying to convert a new column in a dataframe through a function based on the values in the date column, but get an error indicating "Timestamp object has no attribute dt." However, if I run this outside of a function, the dt attribute works fine. Any guidance would be appreciated. This code runs with no issues: sample = {'Date': ['2015-07-02 11:47:00', '2015-08-02 11:30:00']} dftest = pd.DataFrame.from_dict(sample) dftest['Date'] = pd.to_datetime(dftest['Date']) display(dftest.info()) dftest['year'] = dftest['Date'].dt.year dftest['month'] = dftest['Date'].dt.month This code gives me the error message: sample = {'Date': ['2015-07-02 11:47:00', '2015-08-02 11:30:00']} dftest = pd.DataFrame.from_dict(sample) dftest['Date'] = pd.to_datetime(dftest['Date']) def CALLYMD(dftest): if dftest['Date'].dt.month>9: return str(dftest['Date'].dt.year) + '1231' elif dftest['Date'].dt.month>6: return str(dftest['Date'].dt.year) + '0930' elif dftest['Date'].dt.month>3: return str(dftest['Date'].dt.year) + '0630' else: return str(dftest['Date'].dt.year) + '0331' dftest['CALLYMD'] = dftest.apply(CALLYMD, axis=1) Lastly, I'm open to any suggestions on how to make this code better as I'm still learning.
I'm guessing you should remove .dt in the second case. When you use apply with axis=1, the function receives one row at a time, so dftest['Date'] is a single Timestamp rather than a whole datetime Series. The .dt accessor is only needed on a Series; a single Timestamp exposes .year and .month directly, and using .dt on it raises AttributeError: 'Timestamp' object has no attribute 'dt'. Reference: https://stackoverflow.com/a/48967889/13720936
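A sketch of the corrected function under that advice (it reuses the sample data from the question and only drops the .dt accessors):

import pandas as pd

sample = {'Date': ['2015-07-02 11:47:00', '2015-08-02 11:30:00']}
dftest = pd.DataFrame.from_dict(sample)
dftest['Date'] = pd.to_datetime(dftest['Date'])

def CALLYMD(row):
    # With apply(axis=1), row['Date'] is a single Timestamp, so no .dt is needed.
    if row['Date'].month > 9:
        return str(row['Date'].year) + '1231'
    elif row['Date'].month > 6:
        return str(row['Date'].year) + '0930'
    elif row['Date'].month > 3:
        return str(row['Date'].year) + '0630'
    else:
        return str(row['Date'].year) + '0331'

dftest['CALLYMD'] = dftest.apply(CALLYMD, axis=1)
print(dftest)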
34
56
62,895,219
2020-7-14
https://stackoverflow.com/questions/62895219/getting-error-in-airflow-dag-unsupported-operand-types-for-list-and-lis
I am new to Apache airflow and DAG. There are total 6 tasks in the DAG (task1, task2, task3, task4, task5, task6). But at the time of running the DAG we are getting the error below. DAG unsupported operand type(s) for >>: 'list' and 'list' Below is my code for the DAG. Please help. I am new to airflow. from airflow import DAG from datetime import datetime from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator default_args = { 'owner': 'airflow', 'depends_on_past': False } dag = DAG('DAG_FOR_TEST',default_args=default_args,schedule_interval=None,max_active_runs=3, start_date=datetime(2020, 7, 14)) #################### CREATE TASK ##################################### task_1 = DatabricksSubmitRunOperator( task_id='task_1', databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_1/task_1.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_1.driver.TestClass1', 'parameters' : [ '{{ dag_run.conf.json }}' ] } ) task_2 = DatabricksSubmitRunOperator( task_id='task_2', databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_2/task_2.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_2.driver.TestClass2', 'parameters' : [ '{{ dag_run.conf.json }}' ] } ) task_3 = DatabricksSubmitRunOperator( task_id='task_3', databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_3/task_3.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_3.driver.TestClass3', 'parameters' : [ '{{ dag_run.conf.json }}' ] } ) task_4 = DatabricksSubmitRunOperator( task_id='task_4', databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_4/task_4.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_4.driver.TestClass4', 'parameters' : [ '{{ dag_run.conf.json }}' ] } ) task_5 = DatabricksSubmitRunOperator( task_id='task_5', databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_5/task_5.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_5.driver.TestClass5', 'parameters' : [ 'json ={{ dag_run.conf.json }}' ] } ) task_6 = DatabricksSubmitRunOperator( task_id='task_6', databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_6/task_6.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_6.driver.TestClass6', 'parameters' : ['{{ dag_run.conf.json }}' ] } ) #################### ORDER OF OPERATORS ########################### task_1.dag = dag task_2.dag = dag task_3.dag = dag task_4.dag = dag task_5.dag = dag task_6.dag = dag task_1 >> [task_2 , task_3] >> [ task_4 , task_5 ] >> task_6
What is your desired Task Dependency? Do you want to run task_4 after task_2 only or after task_2 and task_3 Based on that answer, use one of the following: (use this if task_4 should run after both task_2 and task_3 are completed) task_1 >> [task_2 , task_3] task_2 >> [task_4, task_5] >> task_6 task_3 >> [task_4, task_5] OR (use this if task_4 should run after task_2 is completed and task_5 should run after task_3 is completed) task_1 >> [task_2 , task_3] task_2 >> task_4 task_3 >> task_5 [task_4, task_5] >> task_6 A tip, you don't need to do the following: task_1.dag = dag task_2.dag = dag task_3.dag = dag task_4.dag = dag task_5.dag = dag task_6.dag = dag You can pass the dag parameter to your task itself, example: task_6 = DatabricksSubmitRunOperator( task_id='task_6', dag=dag, databricks_conn_id='connection_id_details', existing_cluster_id='{{ dag_run.conf.clusterId }}', libraries= [ { 'jar': 'dbfs:/task_6/task_6.jar' } ], spark_jar_task={ 'main_class_name': 'com.task_6.driver.TestClass6', 'parameters' : ['{{ dag_run.conf.json }}' ] } ) or use DAG as your context manager as documented in https://airflow.apache.org/docs/stable/concepts.html#context-manager and Point (1) in https://medium.com/datareply/airflow-lesser-known-tips-tricks-and-best-practises-cf4d4a90f8f
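A compact sketch of the first dependency layout combined with the context-manager style (DummyOperator stands in for the six DatabricksSubmitRunOperator tasks, purely for illustration):

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

with DAG('DAG_FOR_TEST', schedule_interval=None, start_date=datetime(2020, 7, 14)) as dag:
    # Placeholder tasks; inside the "with DAG" block no dag= argument is needed.
    task_1, task_2, task_3, task_4, task_5, task_6 = [
        DummyOperator(task_id='task_%d' % i) for i in range(1, 7)
    ]

    # task_4 and task_5 run only after both task_2 and task_3 have completed.
    task_1 >> [task_2, task_3]
    task_2 >> [task_4, task_5]
    task_3 >> [task_4, task_5]
    [task_4, task_5] >> task_6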
9
11
62,876,777
2020-7-13
https://stackoverflow.com/questions/62876777/documenting-and-detailing-a-single-script-based-on-the-comments-inside
I am going to write a set of scripts, each independent from the others but with some similarities. The structure will most likely be the same for all the scripts and probably looks like: # -*- coding: utf-8 -*- """ Small description and information @author: Author """ # Imports import numpy as np import math from scipy import signal ... # Constant definition (always with variable in capital letters) CONSTANT_1 = 5 CONSTANT_2 = 10 # Main class class Test(): def __init__(self, run_id, parameters): # Some stuff not too important def _run(self, parameters): # Main program returning a result object. For each script, I would like to write documentation and export it in PDF. I need a library/module/parser which reads the scripts, extracts the noted comment, code and puts it back together in the desired output format. For instance, in the _run() method, there might be several steps detailed in the comments: def _run(self, parameters): # Step 1: we start by doing this code to do it # Step 2: then we do this code to do it code code # this code does that Which library/parser could I use to analyze the python script and output a PDF? At first, I was thinking of sphinx, but it is not suited to my need as I would have to design a custom extension. Moreover, sphinx strength lies in the links and hierarchy between multiple scripts of a same or of different modules. In my case, I will only be documenting one script, one file at a time. Then, my second idea is to use the RST format and RST2PDF to create the PDF. For the parser, I could then design a parser which reads the .py file and extract the commented/decorated lines or set of lines as proposed below, and then write the RST file. #-description ## Title of something # doing this here #- #-code some code to extract and put in the doc some more code #- Finally, I would also like to be able to execute some code and catch the result in order to put it in the output PDF file. For instance, I could run a python code to compute the SHA1 hash of the .py file content and include this as a reference in the PDF documentation.
Docstrings instead of comments In order to make things easier for yourself, you probably want to make use of docstrings rather than comments: A docstring is a string literal that occurs as the first statement in a module, function, class, or method definition. Such a docstring becomes the __doc__ special attribute of that object. This way, you can make use of the __doc__ attribute when parsing the scripts when generating documentation. The three double quoted string placed immediately after the function/module definition that becomes the docstring is just syntactic sugaring. You can edit the __doc__ attribute programmatically as needed. For instance, you can make use of decorators to make the creation of docstrings nicer in your specific case. For instance, to let you comment the steps inline, but still adding the comments to the docstring (programmed in browser, probably with errors): def with_steps(func): def add_step(n, doc): func.__doc__ = func.__doc__ + "\nStep %d: %s" % (n, doc) func.add_step = add_step @with_steps def _run(self, parameters): """Initial description that is turned into the initial docstring""" _run.add_step(1, "we start by doing this") code to do it _run.add_step(2, "then we do this") code to do it code Which would create a docstring like this: Initial description that is turned into the initial docstring Step 1: we start by doing this Step 2: then we do this You get the idea. Generating PDF from documented scripts Sphinx Personally, I'd just try the PDF-builders available for Sphinx, via the bundled LaTeXBuilder or using rinoh if you don't want to depend on LaTeX. However, you would have to use a docstring format that Sphinx understands, such as reStructuredText or Google Style Docstrings. AST An alternative is to use ast to extract the docstrings. This is probably what the Sphinx autodoc extension uses internally to extract the documentation from the source files. There are a few examples out there on how to do this, like this gist or this blog post. This way you can write a script that parses and outputs any formats you want. For instance, you can output Markdown or reST and convert it to PDF using pandoc. You could write marked up text directly in the docstrings, which would give you a lot of flexibility. Let's say you wanted to write your documentation using markdown – just write markdown directly in your docstring. def _run(self, parameters): """Example script ================ This script does a, b, c 1. Does something first 2. Does something else next 3. Returns something else Usage example: result = script(parameters) foo = [r.foo for r in results] """ This string can be extracted using ast and parsed/processed using whatever library you see fit.
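The decorator above is sketched "in browser, probably with errors"; a runnable version of the same idea (simplified to a plain function, and with the return func the inline version omits) might look like this. Note that the steps are appended to __doc__ only when the function actually runs, and they accumulate on every call:

def with_steps(func):
    func.__doc__ = func.__doc__ or ""

    def add_step(n, doc):
        func.__doc__ = func.__doc__ + "\nStep %d: %s" % (n, doc)

    func.add_step = add_step
    return func

@with_steps
def _run(parameters):
    """Initial description that is turned into the initial docstring"""
    _run.add_step(1, "we start by doing this")
    # ... code for step 1 ...
    _run.add_step(2, "then we do this")
    # ... code for step 2 ...

_run(None)
print(_run.__doc__)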
7
3
62,854,761
2020-7-11
https://stackoverflow.com/questions/62854761/python3-8-whats-the-difference-between-importerror-and-modulenotfounderror
In python3.8, what's the difference between ImportError and ModuleNotFoundError? I'm just wondering what the difference is and why they matter.
According to the Python docs: ImportError is raised when an import statement has trouble successfully importing the specified module. ModuleNotFoundError, added in Python 3.6, is a subclass of ImportError that is raised specifically when the module cannot be located (for example because of an invalid or incorrect path). Other import failures, such as importing a name that does not exist in a module, still raise a plain ImportError.
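A small sketch of the relationship (the package name in the first block is hypothetical and assumed not to be installed):

try:
    import some_missing_package  # hypothetical, not installed
except ModuleNotFoundError as exc:
    print("module not found:", exc)

# Because ModuleNotFoundError subclasses ImportError, catching ImportError also
# covers missing modules, plus failures like importing a name that doesn't exist:
try:
    from math import no_such_name
except ImportError as exc:   # raised as plain ImportError, not ModuleNotFoundError
    print("import problem:", exc)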
27
8
62,903,056
2020-7-14
https://stackoverflow.com/questions/62903056/elementclickinterceptedexception-message-element-click-intercepted-element-is
I am trying to click on the first box (ASN / DSD) But I get this error message: selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <input type="radio" name="docTypes" ng-model="$ctrl.documentTypes.selected" id="documentType-0" ng-change="$ctrl.onChangeDocumentType()" ng-value="documentType" tabindex="0" class="ng-pristine ng-untouched ng-valid ng-empty" value="[object Object]" aria-invalid="false"> is not clickable at point (338, 202). Other element would receive the click: <label translate-attr="{title: 'fulfillment.documentAction.createNew.modal.documentType.document.title'}" translate-values="{documentName: documentType.name}" for="documentType-0" translate="ASN - DSD" tabindex="0" title="Select ASN - DSD document type">...</label> (Session info: chrome=83.0.4103.116) I know I have entered the right iframe because it can find the element, just not click on it. My code is driver.switch_to.default_content() iframes = driver.find_elements_by_tag_name("iframe") driver.switch_to.frame(iframes[0]) time.sleep(5) driver.find_element_by_xpath('//*[@id="documentType-0"]').click() I saw that DebanjanB answered a similar question on here: link I am trying to do his third solution of using execute script. I don't know what CSS selector to use for this model. The model looks like this WebDriverWait(driver, 20).until(EC.invisibility_of_element((By.CSS_SELECTOR, "span.taLnk.ulBlueLinks"))) driver.execute_script("arguments[0].click();", WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='loadingWhiteBox']")))) My question is what css selector do I need to use on the first line, and then is it just initial xpath I was using on the second line? Here is the HTML for reference. I get the click intercept error when I try to click on the input section. If use xpath to click on the label tag, it does not error out but also does not click on it. It just moves on to the next section of code without doing anything. <li ng-repeat="documentType in selectDocumentType.documentTypes.displayedList | orderBy:selectDocumentType.formOrder"> <input type="radio" name="docTypes" ng model="selectDocumentType.documentTypes.selected" id="documentType-0" ng-value="documentType" tabindex="0" class="ng-valid ng-not-empty ng-dirty ng-valid-parse ng-touched" value="[object Object]" aria-invalid="false"> <label translate-attr="{title:'fulfillment.documentAction.createNew.modal.documentType.document.title'}" translate-values={documentName: documentType.name}" for="documentType-0" translate="ASN - DSD" tabindex="0" title= "Select ASN - DSD document type"><span>ASN - DSD</span></label> </li> Any suggestions on how to stop having the click intercepted?
This error message... selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <input type="radio" name="docTypes" ng-model="$ctrl.documentTypes.selected" id="documentType-0" ng-change="$ctrl.onChangeDocumentType()" ng-value="documentType" tabindex="0" class="ng-pristine ng-untouched ng-valid ng-empty" value="[object Object]" aria-invalid="false"> is not clickable at point (338, 202). Other element would receive the click: <label translate-attr="{title: 'fulfillment.documentAction.createNew.modal.documentType.document.title'}" translate-values="{documentName: documentType.name}" for="documentType-0" translate="ASN - DSD" tabindex="0" title="Select ASN - DSD document type">...</label> ...implies that the desired element wasn't clickable as some other element obscures it. The desired element is a Angular element so to invoke click() on the element you have to induce WebDriverWait for the element_to_be_clickable() and you can use either of the following Locator Strategies: Using CSS_SELECTOR: WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "label[for='documentType-0']"))).click() Using XPATH: WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//label[@for='documentType-0']"))).click() Note : You have to add the following imports : from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC Update As an alternative you can use the execute_script() method as follows: Using CSS_SELECTOR: driver.execute_script("arguments[0].click();", WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "label[for='documentType-0']")))) Using XPATH: driver.execute_script("arguments[0].click();", WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//label[@for='documentType-0']")))) References You can find a couple of relevant discussions in: Element MyElement is not clickable at point (x, y)… Other element would receive the click Selenium Web Driver & Java. Element is not clickable at point (x, y). Other element would receive the click ElementClickInterceptedException: Message: element click intercepted: Element is not clickable with Selenium and Python
7
20
62,827,291
2020-7-10
https://stackoverflow.com/questions/62827291/warning-pip-is-configured-with-locations-that-require-tls-ssl-however-the-ssl
I would like to use Python3.8.x on Google Cloud Compute Engine. First, I created an instance with gcloud command. gcloud compute instances create \ pegasus-test \ --zone=asia-northeast1-b \ --machine-type=n1-highmem-8 \ --boot-disk-size=500GB \ --image-project=ml-images \ --image-family=tf-1-15 \ --maintenance-policy TERMINATE --restart-on-failure In default, Python version is 3.5.3. python3 -V Python 3.5.3 Therefore, I upgraded Python. I followed this instruction. (google cloud compute engine change to python 3.6) cd /tmp wget https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz tar -xvf Python-3.8.3.tgz cd Python-3.8.3 ./configure sudo apt-get install zlib1g-dev sudo make sudo make install I got no error message. Now, I have Python3.8.3. python3 -V Python 3.8.3 Next, I would like to use PEGASUS. (https://github.com/google-research/pegasus) git clone https://github.com/google-research/pegasus cd pegasus export PYTHONPATH=. pip3 install -r requirements.txt Then, I got an error message. WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Collecting absl-py (from -r requirements.txt (line 1)) WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(" Can't connect to HTTPS URL because the SSL module is not available.")': /simple/absl-py/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(" Can't connect to HTTPS URL because the SSL module is not available.")': /simple/absl-py/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(" Can't connect to HTTPS URL because the SSL module is not available.")': /simple/absl-py/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(" Can't connect to HTTPS URL because the SSL module is not available.")': /simple/absl-py/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(" Can't connect to HTTPS URL because the SSL module is not available.")': /simple/absl-py/ Could not fetch URL https://pypi.org/simple/absl-py/: There was a problem confirming the ssl certificate: HTTPSConnectionPool( host='pypi.org', port=443): Max retries exceeded with url: /simple/absl-py/ (Caused by SSLError("Can't connect to HTTPS URL beca use the SSL module is not available.")) - skipping ERROR: Could not find a version that satisfies the requirement absl-py (from -r requirements.txt (line 1)) (from versions: non e) ERROR: No matching distribution found for absl-py (from -r requirements.txt (line 1)) WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host=' pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SS L module is not available.")) - skipping I checked pip's version. pip3 -V pip 19.2.3 from /usr/local/lib/python3.8/site-packages/pip (python 3.8) So, I tried to upgrade pip. pip3 install --upgrade pip Then, I got this error message. WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. 
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/pip/ Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping Requirement already up-to-date: pip in /usr/local/lib/python3.8/site-packages (19.2.3) WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping So, next I used pip instead of pip3. I input pip install -r requirements.txt This is the result.
Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: absl-py in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 1)) (0.9.0) Requirement already satisfied: mock in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 2)) (3.0.5) Requirement already satisfied: numpy in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 3)) (1.16.6) Collecting rouge-score Downloading rouge_score-0.0.4-py2.py3-none-any.whl (22 kB) Collecting sacrebleu Downloading sacrebleu-1.3.7.tar.gz (26 kB) ERROR: Command errored out with exit status 1: command: /usr/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-HCHhuX/sacrebleu/setup.py'"'"'; __file__='"'"'/tmp/pip-install-HCHhuX/sacrebleu/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-HCHhuX/sacrebleu/pip-egg-info cwd: /tmp/pip-install-HCHhuX/sacrebleu/ Complete output (7 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-HCHhuX/sacrebleu/setup.py", line 65, in <module> version = get_version(), File "/tmp/pip-install-HCHhuX/sacrebleu/setup.py", line 56, in get_version with open(os.path.join(os.path.dirname(__file__), 'sacrebleu.py'), encoding='utf-8') as fin: TypeError: 'encoding' is an invalid keyword argument for this function ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. WARNING: You are using pip version 20.0.2; however, version 20.1.1 is available. You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command. How can I get pip3 install -r requirements.txt to work? Could you give me any advice, please?
I had the same issue and had to spend a few days tackling it. After exploring many different solutions, this is what worked for the pip SSL issue.
16
18
62,891,917
2020-7-14
https://stackoverflow.com/questions/62891917/how-to-change-the-colour-of-an-image-using-a-mask
I am writing a code to change the color of hair in the facial picture of a person. Doing this I made a model and was able to get a mask of the parts of the hair. But now I am stuck at a problem how to change the color of it. Below is the output mask and input image passed. Can you suggest me the method that could be used to change the color of the hair into different colors?
Since they both have the same shape, you can mask the image of the face using mask image. We first need to perform binary thresholding on it, so it can be used as a b&w mask. Then we can perform boolean indexing based on whether a value is 0 or 255, and assign a new color, such as green? import cv2 mask = cv2.imread('eBB2Q.jpg') face = cv2.imread('luraB.jpg') _, mask = cv2.threshold(mask, thresh=180, maxval=255, type=cv2.THRESH_BINARY) # copy where we'll assign the new values green_hair = np.copy(face) # boolean indexing and assignment based on mask green_hair[(mask==255).all(-1)] = [0,255,0] fig, ax = plt.subplots(1,2,figsize=(12,6)) ax[0].imshow(cv2.cvtColor(face, cv2.COLOR_BGR2RGB)) ax[1].imshow(cv2.cvtColor(green_hair, cv2.COLOR_BGR2RGB)) Now we can combine the new image with the original using cv2.addWeighted, which will return the weighted sum of both images, hence we'll only see a difference on the masked region: green_hair_w = cv2.addWeighted(green_hair, 0.3, face, 0.7, 0, green_hair) fig, ax = plt.subplots(1,2,figsize=(12,6)) ax[0].imshow(cv2.cvtColor(face, cv2.COLOR_BGR2RGB)) ax[1].imshow(cv2.cvtColor(green_hair_w, cv2.COLOR_BGR2RGB)) Note that you can set the weights in the weighted sum via the alpha and beta parameters, depending on how much you want the new colour to predominate. Note that, as mentioned earlier the new image will be obtained from the weighted sum dst = src1*alpha + src2*beta + gamma. Let's try with another colour and setting the weights as a convex combination with alpha values ranging from say 0.5 and 0.9: green_hair = np.copy(face) # boolean indexing and assignment based on mask green_hair[(mask==255).all(-1)] = [0,0,255] fig, axes = plt.subplots(2,2,figsize=(8,8)) for ax, alpha in zip(axes.flatten(), np.linspace(.6, .95, 4)): green_hair_w = cv2.addWeighted(green_hair, 1-alpha, face, alpha, 0, green_hair_w) ax.imshow(cv2.cvtColor(green_hair_w, cv2.COLOR_BGR2RGB)) ax.axis('off')
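A condensed, self-contained version of those steps (the file names are placeholders for the mask and face images, and the result is written to disk instead of plotted):

import cv2
import numpy as np

mask = cv2.imread('hair_mask.jpg')    # placeholder paths
face = cv2.imread('face.jpg')

# Binarise the mask so it only contains 0 and 255.
_, mask = cv2.threshold(mask, thresh=180, maxval=255, type=cv2.THRESH_BINARY)

# Paint the masked pixels green (BGR order), then blend with the original
# so the hair texture is preserved: dst = src1*alpha + src2*beta + gamma.
green_hair = np.copy(face)
green_hair[(mask == 255).all(-1)] = [0, 255, 0]
green_hair_w = cv2.addWeighted(green_hair, 0.3, face, 0.7, 0)

cv2.imwrite('green_hair.jpg', green_hair_w)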
12
31
62,899,860
2020-7-14
https://stackoverflow.com/questions/62899860/how-can-i-resolve-typeerror-cannot-safely-cast-non-equivalent-float64-to-int6
I'm trying to convert a few float columns to int in a DataFrame but I'm getting the above error. I've tried both converting directly and using fillna to 0 first (which I prefer not to do, as in my dataset the NA values are required). What am I doing wrong? I've tried both: orginalData[NumericColumns] = orginalData[NumericColumns].astype('Int64') #orginalData[NumericColumns] = orginalData[NumericColumns].fillna(0).astype('Int64') but it keeps resulting in the same error TypeError: cannot safely cast non-equivalent float64 to int64 What can I do to convert the columns?
import numpy as np orginalData[NumericColumns] = orginalData[NumericColumns].fillna(0).astype(np.int64, errors='ignore') For NaNs you need to replace the NaNs with 0, then do the type casting
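A self-contained sketch of that fix (the column data is made up). The error often appears because some float values are not whole numbers, which the nullable 'Int64' dtype refuses to cast, whereas plain numpy int64 truncates them:

import numpy as np
import pandas as pd

orginalData = pd.DataFrame({'col_a': [1.0, 2.5, np.nan]})   # hypothetical data
NumericColumns = ['col_a']

# Fill the missing values first, then cast with numpy's int64.
# Note that non-integer floats such as 2.5 are silently truncated to 2 here.
orginalData[NumericColumns] = (
    orginalData[NumericColumns].fillna(0).astype(np.int64, errors='ignore')
)
print(orginalData.dtypes)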
20
4
62,796,591
2020-7-8
https://stackoverflow.com/questions/62796591/breakpoint-in-except-clause-doesnt-have-access-to-the-bound-exception
Consider the following example: try: raise ValueError('test') except ValueError as err: breakpoint() # at this point in the debugger, name 'err' is not defined Here, after the breakpoint is entered, the debugger doesn't have access to the exception instance bound to err: $ python test.py --Return-- > test.py(4)<module>()->None -> breakpoint() (Pdb) p err *** NameError: name 'err' is not defined Why is this the case? How can I access the exception instance? Currently I'm using the following workaround but it feels awkward: try: raise ValueError('test') except ValueError as err: def _tmp(): breakpoint() _tmp() # (lambda: breakpoint())() # or this one alternatively Interestingly, using this version, I can also access the bound exception err when moving one frame up in the debugger: $ python test.py --Return-- > test.py(5)_tmp()->None -> breakpoint() (Pdb) up > test.py(6)<module>() -> _tmp() (Pdb) p err ValueError('test') Disassembly via dis In the following I compared two versions, one using breakpoint directly and the other wrapping it in a custom function _breakpoint: def _breakpoint(): breakpoint() try: raise ValueError('test') except ValueError as err: breakpoint() # version (a), cannot refer to 'err' # _breakpoint() # version (b), can refer to 'err' The output of dis is similar except for some memory locations and the name of the function of course: So it must be the additional stack frame that allows pdb to refer to the bound exception instance. However it is not clear why this is the case, since within the except block anything can refer to the bound exception instance.
breakpoint() is not a breakpoint in the sense that it halts execution at the exact location of this function call. Instead it's a shorthand for import pdb; pdb.set_trace() which will halt execution at the next line of code (it calls sys.settrace under the covers). Since there is no more code inside the except block, execution will halt after that block has been exited and hence the name err is already deleted. This can be seen more clearly by putting an additional line of code after the except block: try: raise ValueError('test') except ValueError as err: breakpoint() print() which gives the following: $ python test.py > test.py(5)<module>() -> print() This means the interpreter is about to execute the print() statement in line 5 and it has already executed everything prior to it (including deletion of the name err). When using another function to wrap the breakpoint() then the interpreter will halt execution at the return event of that function and hence the except block is not yet exited (and err is still available): $ python test.py --Return-- > test.py(5)<lambda>()->None -> (lambda: breakpoint())() Exiting of the except block can also be delayed by putting an additional pass statement after the breakpoint(): try: raise ValueError('test') except ValueError as err: breakpoint() pass which results in: $ python test.py > test.py(5)<module>() -> pass (Pdb) p err ValueError('test') Note that the pass has to be put on a separate line, otherwise it will be skipped: $ python test.py --Return-- > test.py(4)<module>()->None -> breakpoint(); pass (Pdb) p err *** NameError: name 'err' is not defined Note the --Return-- which means the interpreter has already reached the end of the module.
10
12
62,880,911
2020-7-13
https://stackoverflow.com/questions/62880911/generate-video-from-numpy-arrays-with-opencv
I am trying to use the openCV VideoWriter class to generate a video from numpy arrays. I am using the following code: import numpy as np import cv2 size = 720*16//9, 720 duration = 2 fps = 25 out = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'X264'), fps, size) for _ in range(fps * duration): data = np.random.randint(0, 256, size, dtype='uint8') out.write(data) out.release() The codec seems to be installed as ffmpeg can do conversions to the x264 codec and libx264 is installed. The code runs without warnings, however the videos generated seem to contain no data since I always get the following message when trying to read them with mpv: [ffmpeg/demuxer] avi: Could not find codec parameters for stream 0 (Video: h264 (X264 / 0x34363258), none, 1280x720): unspecified pixel format What could be the cause of this issue?
The first issue is that you are trying to create a video using black and white frames while VideoWriter assumes color by default. VideoWriter has a 5th boolean parameter where you can pass False to specify that the video is to be black and white. The second issue is that the dimensions that cv2 expects are the opposite of numpy. Thus, size should be (size[1], size[0]). A possible further problem is the codec being used. I have never been able to get "X264" to work on my machine and instead have been using "mp4v" as the codec and ".mp4" for the container type in order to get an H.264 encoded output. After all of these issues are fixed, this is the result. Please try this and see if it works for you: import numpy as np import cv2 size = 720*16//9, 720 duration = 2 fps = 25 out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (size[1], size[0]), False) for _ in range(fps * duration): data = np.random.randint(0, 256, size, dtype='uint8') out.write(data) out.release()
14
20
62,886,283
2020-7-14
https://stackoverflow.com/questions/62886283/python-requests-post-how-to-send-request-body-encoded-as-application-x-www-fo
I'm doing an app with the Spotify API. My problem is that I'm trying to get an access_token but it's not working. The docs say I need to send the body encoded as application/x-www-form-urlencoded, so I searched a little bit and it should work by just setting request_body as a dictionary. This is the code of my function: def get_access_token(self): auth_code, code_verifier = self.get_auth_code() endpoint = "https://accounts.spotify.com/api/token" # as the docs said, data should be encoded as application/x-www-form-urlencoded # as the internet says, I just need to send it as a dictionary. However it's not working request_body = { "client_id": f"{self.client_ID}", "grant_type": "authorization_code", "code": f"{auth_code}", "redirect_uri": f"{self.redirect_uri}", "code_verifier": f"{code_verifier}" } response = requests.post(endpoint, data=request_body) print(response) The response I'm getting is always <Response [400]> Here are the docs, step 4: https://developer.spotify.com/documentation/general/guides/authorization-guide/#authorization-code-flow-with-proof-key-for-code-exchange-pkce NOTE: I tried executing this as a curl command and it works fine, so I'm not sure what I'm doing wrong in the Python code. Here's the command: curl -d client_id={self.client_ID} -d grant_type=authorization_code -d code={auth_code} -d redirect_uri={self.redirect_uri} -d code_verifier={code_verifier} https://accounts.spotify.com/api/token
You can specify the content type explicitly in the request headers. headers = {'Content-Type': 'application/x-www-form-urlencoded'} response = requests.post(endpoint, data=request_body, headers=headers) print(response)
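A self-contained sketch of the token request with that header (the credential values are placeholders). As background, requests already form-encodes a dict passed via data=, so if the 400 persists, printing response.json() usually shows Spotify's error description:

import requests

endpoint = "https://accounts.spotify.com/api/token"

request_body = {
    "client_id": "your-client-id",             # placeholder values
    "grant_type": "authorization_code",
    "code": "code-returned-by-spotify",
    "redirect_uri": "http://localhost:8888/callback",
    "code_verifier": "your-original-code-verifier",
}
headers = {"Content-Type": "application/x-www-form-urlencoded"}

response = requests.post(endpoint, data=request_body, headers=headers)
print(response.status_code)
print(response.json())   # on failure Spotify typically returns an error description here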
11
22
62,885,911
2020-7-13
https://stackoverflow.com/questions/62885911/pip-freeze-creates-some-weird-path-instead-of-the-package-version
I am working on developing a Python package. I use pip freeze > requirements.txt to add the required packages to the requirements.txt file. However, I realized that some of the packages show a local path instead of the package version: numpy==1.19.0 packaging==20.4 pandas @ file:///opt/concourse/worker/volumes/live/38d1301c-8fa9-4d2f-662e-34dddf33b183/volume/pandas_1592841668171/work pandocfilters==1.4.2 Whereas, inside the environment, I get: >>> pandas.__version__ '1.0.5' Do you have any idea how to address this problem?
It looks like this is an open issue with pip freeze in version 20.1; the current workaround is to use: pip list --format=freeze > requirements.txt In a nutshell, this is caused by a change in the behavior of pip freeze: it now emits direct references for distributions that were installed from direct URL references. You can read more about the issue on GitHub: pip freeze does not show version for in-place installs Output of "pip freeze" and "pip list --format=freeze" differ for packages installed via Direct URLs Better freeze of distributions installed from direct URL references
178
367
62,883,329
2020-7-13
https://stackoverflow.com/questions/62883329/how-to-deal-with-large-dependencies-in-aws-lambda
I am using AWS Lambda and the functions I need to deploy require many different packages. Using serverless-python-requirements the zip file that is generated is 169.5MB, far greater than the 50MB limit. I have tried using Lambda Layers, but this doesn't solve the size issue. I have also tried dumping the zip file in an s3 bucket, but it is still too large to load when invoking the function. I need all of these packages and I'm not sure how I can deploy them all. My requirements.txt file looks like: bs4==0.0.1 gensim==3.8.3 matplotlib==3.2.2 nltk==3.5 numpy==1.19.0 openpyxl==3.0.4 pandas==1.0.5 pyLDAvis==2.1.2 spacy==2.3.1 XlsxWriter==1.2.9
Very recently, AWS announced support for EFS in Lambda. Read the announcement here. EFS, or Elastic File System, is a managed NFS file system for compute nodes. Read more about it here. With this you can essentially attach network storage to your Lambda function. I have personally used it to load huge reference files which are over the limit of Lambda's file storage. For a walkthrough you can refer to this article by AWS. To pick the conclusion from the article: EFS for Lambda allows you to share data across function invocations, read large reference data files, and write function output to a persistent and shared store. After configuring EFS, you provide the Lambda function with an access point ARN, allowing you to read and write to this file system. Lambda securely connects the function instances to the EFS mount targets in the same Availability Zone and subnet.
11
15
62,875,416
2020-7-13
https://stackoverflow.com/questions/62875416/python-peewee-improperlyconfigured-mysql-driver-not-installed
I tried to make a MySQL connection with peewee and followed the tutorial from their website: peewee quickstart So my code is the following: from peewee import * db = MySQLDatabase( host='127.0.0.1', user='root', password='', database='db_test' ) class Person(Model): name = CharField() birthday = DateField() class Meta: database = db class Pet(Model): owner = ForeignKeyField(Person, backref='pets') name = CharField() animal_type = CharField() class Meta: database = db db.connect() db.create_tables([Person, Pet]) db.close() (My database is from XAMPP.) But when I execute this code I get this error message: peewee.ImproperlyConfigured: MySQL driver not installed! I tried to fix this by installing this MySQL Driver, but that changed absolutely nothing. Since I am new to Python, I have no idea how to fix this: am I just missing an import, or do I have to install a library with pip?
The docs are clear, as is the error message: http://docs.peewee-orm.com/en/latest/peewee/database.html#using-mysql Install pymysql or mysqldb. To use the non-standard mysql-connector driver, you need to import the playhouse.mysql_ext module and use the MySQLConnectorDatabase implementation: http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#mysql-ext
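A minimal sketch assuming the pymysql driver: after installing it, peewee's MySQLDatabase can find a driver and the question's connection code works unchanged, for example:

# First install a driver in the shell, e.g.:  pip install pymysql
from peewee import MySQLDatabase, Model, CharField, DateField

db = MySQLDatabase('db_test', host='127.0.0.1', user='root', password='')

class Person(Model):
    name = CharField()
    birthday = DateField()

    class Meta:
        database = db

db.connect()
db.create_tables([Person])
db.close()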
12
6
62,853,875
2020-7-11
https://stackoverflow.com/questions/62853875/stopping-python-container-is-slow-sigterm-not-passed-to-python-process
I made a simple python webserver based on this example, which runs inside Docker FROM python:3-alpine WORKDIR /app COPY entrypoint.sh . RUN chmod +x entrypoint.sh COPY src src CMD ["python", "/app/src/api.py"] ENTRYPOINT ["/app/entrypoint.sh"] Entrypoint: #!/bin/sh echo starting entrypoint set -x exec "$@" Stopping the container took very long, altough the exec statement with the JSON array syntax should pass it to the python process. I assumed a problem with SIGTERM no being passed to the container. I added the following to my api.pyscript to detect SIGTERM def terminate(signal,frame): print("TERMINATING") if __name__ == "__main__": signal.signal(signal.SIGTERM, terminate) webServer = HTTPServer((hostName, serverPort), MyServer) print("Server started http://%s:%s" % (hostName, serverPort)) webServer.serve_forever() Executed without Docker python3 api/src/api.py, I tried kill -15 $(ps -guaxf | grep python | grep -v grep | awk '{print $2}') to send SIGTERM (15 is the number code of it). The script prints TERMINATING, so my event handler works. Now I run the Docker container using docker-compose and press CTRL + C. Docker says gracefully stopping... (press Ctrl+C again to force) but doesn't print my terminating message from the event handler. I also tried to run docker-compose in detached mode, then run docker-compose kill -s SIGTERM api and view the logs. Still no message from the event handler.
Since the script runs as pid 1 as desired and setting init: true in docker-compose.yml doesn't seem to change anything, I took a deeper drive in this topic. This leads me figuring out multiple mistakes I did: Logging The approach of printing a message when SIGTERM is catched was designed as simple test case to see if this basically works before I care about stopping the server. But I noticed that no message appears for two reasons: Output buffering When running a long term process in python like the HTTP server (or any while True loop for example), there is no output displayed when starting the container attached with docker-compose up (no -d flag). To receive live logs, we need to start python with the -u flag or set the env variable PYTHONUNBUFFERED=TRUE. No log piping after stop But the main problem was not the output buffering (this is just a notice since I wonder why there was no log output from the container). When canceling the container, docker-compose stops piping logs to the console. This means that from a logical perspective it can't display anything that happens AFTER CTRL + C is pressed. To fetch those logs, we need to wait until docker-compose has stopped the container and run docker-compose logs. It will print all, including those generated after CTRL + C is pressed. Using docker-compose logs I found out that SIGTERM is passed to the container and my event handler works. Stopping the webserver With those knowledge I tried to stop the webserver instance. First this doesn't work because it's not enough to just call webServer.server_close(). Its required to exit explicitely after any cleanup work is done like this: def terminate(signal,frame): print("Start Terminating: %s" % datetime.now()) webServer.server_close() sys.exit(0) When sys.exit() is not called, the process keeps running which results in ~10s waiting time before Docker kills it. Full working example Here a demo script that implement everything I've learned: from http.server import BaseHTTPRequestHandler, HTTPServer import signal from datetime import datetime import sys, os hostName = "0.0.0.0" serverPort = 80 class MyServer(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header("Content-Type", "text/html") self.end_headers() self.wfile.write(bytes("Hello from Python Webserver", "utf-8")) webServer = None def terminate(signal,frame): print("Start Terminating: %s" % datetime.now()) webServer.server_close() sys.exit(0) if __name__ == "__main__": signal.signal(signal.SIGTERM, terminate) webServer = HTTPServer((hostName, serverPort), MyServer) print("Server started http://%s:%s with pid %i" % ("0.0.0.0", 80, os.getpid())) webServer.serve_forever() Running in a container, it could be stopped very fast without waiting for Docker to kill the process: $ docker-compose up --build -d $ time docker-compose down Stopping python-test_app_1 ... done Removing python-test_app_1 ... done Removing network python-test_default real 0m1,063s user 0m0,424s sys 0m0,077s
16
22
62,869,201
2020-7-13
https://stackoverflow.com/questions/62869201/upgrading-pycharm-venv-python-version
I have Python 3.6 in my venv on PyCharm. However, I want to change that to Python 3.8. I have already installed 3.8, so how do I change my venv's Python version? I am on Windows 10. Changing the version in the project interpreter settings seems to run using a new venv, not my existing venv with all the packages I have installed. Attempting to add a new interpreter also results in the "OK" button being greyed out, possibly because the current venv is not empty.
You need to create a new virtual environment whose interpreter is version 3.8. Go to Settings => Project => Python Interpreter Click on the vertical 3 dots, and click on "Add". Select Virtualenv Environment => New Environment Choose as base interpreter the 3.8 one (the one you just installed) Click on "OK" => "OK" Once you have set the new interpreter, PyCharm will warn you that you need to update some dependencies based on your requirements.txt file or, in this case, Pipfile.lock (I am using pipenv for this project). That's it!
9
7
62,858,552
2020-7-12
https://stackoverflow.com/questions/62858552/why-cant-i-import-geopy-distance-vincenty-on-jupyter-notebook-i-installed-ge
from geopy.distance import vincenty I just installed the geopy package 2.0.0, and I want to use geopy.distance.vincenty() as this doc says. However, it returns ImportError: cannot import name 'vincenty' from 'geopy.distance'. And if I try from geopy import distance it becomes AttributeError: module 'geopy.distance' has no attribute 'vincenty'. About two or three months ago I used this on Google Colab, and it was fine. What happened? Could it be that the latest version dropped this attribute?
Yes, it has been removed. Look at the changelog's Breaking Changes section which contains this entry: Removed geopy.distance.vincenty, use geopy.distance.geodesic instead.
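A small sketch of the replacement call (the coordinates are arbitrary example points):

from geopy.distance import geodesic

newport_ri = (41.49008, -71.312796)       # (latitude, longitude)
cleveland_oh = (41.499498, -81.695391)

d = geodesic(newport_ri, cleveland_oh)
print(d.km, d.miles)                      # the Distance object exposes .km, .miles, .meters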
11
29
62,864,163
2020-7-12
https://stackoverflow.com/questions/62864163/why-does-equality-not-appear-to-be-a-symmetric-relation-in-python
I'm learning about comparison operators, and I was playing around with True and False statements. I ran the following code in the Python shell: not(5>7) == True As expected, this returned True. However, I then ran the following code: True == not(5>7) and there was a syntax error. Why was this? If the first line of code is valid syntax, then surely the second line of code should also be valid. Where have I gone wrong? (To give a bit of background, my understanding is that = in Python is only used for variable assignment, while == is closely related to the mathematical symbol '='.)
The syntax error seems to be caused by the not keyword, not (pun intended) the equality operator: True == not (5 > 7) # SyntaxError: invalid syntax True == (not (5 > 7)) # True The explanation can be found in the docs: not has a lower priority than non-Boolean operators, so not a == b is interpreted as not (a == b), and a == not b is a syntax error. Basically, the interpreter thinks you're comparing True to not.
23
33
62,806,175
2020-7-9
https://stackoverflow.com/questions/62806175/xarray-combine-by-coords-return-the-monotonic-global-index-error
I am trying to combine two spatial xarray datasets using combine_by_coords. These two datasets are two tiles next to each other. So there are overlapping coordinates. In the overlapping regions, the variable values of one of the datasets is nan. I used the "combine_by_coords" with compat='no_conflicts' option. However, it returns the monotonic global indexes along dimension y error. It looks like it was an issue before but it was fixed (here). So I don't really know why I get this error. Here is an example (the netcdf tiles are here): import xarray as xr print(xr.__version__) >>>0.15.1 ds1=xr.open_dataset('Tile1.nc') ds2=xr.open_dataset('Tile2.nc') ds = xr.combine_by_coords([ds1,ds2], compat='no_conflicts') >>>... ValueError: Resulting object does not have monotonic global indexes along dimension y Thanks
This isn't a bug, it's throwing the error it should be throwing given your input. However I can see how the documentation doesn't make it very clear as to why this is happening! combine_by_coords and combine_nested do two things: they concatenate (using xr.concat), and they merge (using xr.merge). merge groups variables of the same size, concat joins variables of different sizes onto the ends of one another. The concatenate step is never supposed to handle partially overlapping coordinates, and the combine functions therefore have the same restriction. That error is an explicit rejection of the input you gave it: "you gave me overlapping coordinates, I don't know how to concatenate those, so I'll reject them." Normally this makes sense - when the overlapping coordinates aren't NaNs then it's ambiguous as to which values to choose. In your case then you are asking it to perform a well-defined operation, and the discussion in the docs about merging overlapping coordinates here implies that compat='no_conflicts' would handle this situation. Unfortunately that's only for xr.merge, not xr.concat, and so it doesn't apply for combine_by_coords either. This is definitely confusing. It might be possible to generalise the combine functions to handle the scenario you're describing (where the overlapping parts of the coordinates are specified entirely by the non-NaN values). Please open an issue proposing this feature if you would like to see it. (Issue #3150 was about something else, an actual bug in the handling of "coordinate dimensions which do not vary between each dataset".) Instead, what you need to do is trim off the overlap first. That shouldn't be hard - presumably you know (or can determine) how big your overlap is, and all your NaNs are on one dataset. You just need to use the .isel() method with a slice. Once you've got rid of the overlapping NaNs then you should be able to combine it fine (and you shouldn't need to specify compat either). If you're using combine_by_coords as part of opening many files with open_mfdataset then it might be easier to write a trimming function which you apply first using the preprocess argument to open_mfdataset.
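A sketch of the trimming idea using the question's tiles. The dimension name y comes from the error message; the overlap width of 10 cells, and the assumption that the NaN-filled overlap sits at the start of the second tile's y axis, are both hypothetical:

import xarray as xr

ds1 = xr.open_dataset('Tile1.nc')
ds2 = xr.open_dataset('Tile2.nc')

overlap = 10                                       # hypothetical overlap width
ds2_trimmed = ds2.isel(y=slice(overlap, None))     # drop the NaN-filled overlapping rows

ds = xr.combine_by_coords([ds1, ds2_trimmed])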
15
16
62,844,211
2020-7-11
https://stackoverflow.com/questions/62844211/updating-sagemaker-endpoint-with-new-endpoint-configuration
I'm a bit confused about automating SageMaker model retraining. Currently I have a notebook instance with a SageMaker LinearLearner model performing a classification task. Using an Estimator I run the training, then deploy the model by creating an endpoint. Afterwards, using a Lambda function to invoke this endpoint, I add it to API Gateway, which gives me an API endpoint that can be used for POST requests and sends back the response with the class. Now I'm facing the problem of retraining. For that I use a serverless approach and a Lambda function that reads environment variables for the training jobs. But the problem is that SageMaker does not allow rewriting a training job; you can only create a new one. My goal is to automate the step where the new training job and the new endpoint config are applied to the existing endpoint, so that I don't need to change anything in API Gateway. Is it somehow possible to automatically attach a new endpoint config to the existing endpoint? Thanks
If I am understanding the question correctly, you should be able to use CreateEndpointConfig near the end of the training job, then use UpdateEndpoint: Deploys the new EndpointConfig specified in the request, switches to using newly created endpoint, and then deletes resources provisioned for the endpoint using the previous EndpointConfig (there is no availability loss). If the API Gateway / Lambda is routed via the endpoint ARN, that should not change after using UpdateEndpoint.
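A hedged boto3 sketch of that flow (all names and variant settings below are placeholders). Because only the endpoint config changes, the endpoint name, and therefore whatever the Lambda/API Gateway integration points at, stays the same:

import boto3

sm = boto3.client('sagemaker')

# Register a config that points at the model produced by the new training job.
sm.create_endpoint_config(
    EndpointConfigName='linear-learner-config-v2',
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': 'linear-learner-model-v2',
        'InstanceType': 'ml.m5.large',
        'InitialInstanceCount': 1,
    }],
)

# Switch the existing endpoint over to the new config; the endpoint name does not change.
sm.update_endpoint(
    EndpointName='linear-learner-endpoint',
    EndpointConfigName='linear-learner-config-v2',
)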
7
4
62,845,884
2020-7-11
https://stackoverflow.com/questions/62845884/how-can-i-show-syntax-highlighted-python-code-in-a-html-page
Is it somehow possible to show syntax-highlighted python code in a webpage? I found this: <pre class="brush: python"> # python code here </pre> However, it shows all the code in black. I want import to be orange, strings to be green. Is it possible to do this? Thank you!
If you wish to only display code, python in this case, consider using Github gist. You can then embed it using the 'embed' option on the top right corner. It will give you a script tag that you can copy and add to your webpage like so: <script src="https://gist.github.com/username/a39a422ebdff6e732753b90573100b16.js"></script>
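If you would rather generate the highlighted HTML yourself instead of embedding a Gist, Pygments is one option (this is a sketch of an alternative, not part of the original answer):

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

code = "import os\nprint('hello')\n"

# full=True produces a standalone HTML page that already includes the colour CSS,
# so keywords, strings, etc. get distinct colours.
html = highlight(code, PythonLexer(), HtmlFormatter(full=True))

with open('snippet.html', 'w') as f:
    f.write(html)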
13
6
62,824,783
2020-7-9
https://stackoverflow.com/questions/62824783/pytest-cov-does-not-read-pyproject-toml
Pytest cov is not reading its setting from the pyproject.toml file. I am using nox, so I run the test with: python3 -m nox It seems I have the same issue even without nox. In fact, after running a poetry install: poetry run pytest --cov=src passes the test poetry run pytest --cov does not pass the test In particular, when failing the test I have the following output (output is cut to the most important stuff): WARNING: Failed to generate report: No data to report. /Users/matteo/Library/Caches/pypoetry/virtualenvs/project-Nz69kfmJ-py3.7/lib/python3.7/site-packages/pytest_cov/plugin.py:271: PytestWarning: Failed to generate report: No data to report. self.cov_controller.finish() ---------- coverage: platform darwin, python 3.7.7-final-0 ----------- FAIL Required test coverage of 100.0% not reached. Total coverage: 0.00% Code with a reproducible error here. To run it you'll need to install poetry and to install nox.
Turning the comment into an answer: Check the current treatment of the src directory. Right now, it seems to be a namespace package which is not what you intend. Either switch to the src layout: # pyproject.toml [tool.poetry] ... packages = [ { include = 'project', from = 'src' } ] [tool.coverage.run] ... source = ['project'] and fix the import in test_code.py: from src.project import code to from project import code or remove the src dir: rootdir ├── project │ └── __init__.py └── tests └── test_code.py and fix the import in test_code.py.
12
8
62,840,719
2020-7-10
https://stackoverflow.com/questions/62840719/how-to-correctly-access-properties-in-a-json-from-python
EDIT: As pointed out by some users, the request does not actually return a JSON object, but a string-encoded JSON. The issue here is not actually parsing the JSON in Python, but writing it in such a way that a request can be sent to the API. Therefore, it's not necessary to use the json Python library. I'm using the various Google APIs (Slides, Drive, Sheets and such) with a Python program. I am having issues accessing the properties in the JSON that the API requests return. Take a Slides presentations().get request for example. It returns an instance of presentation. And I want to access its slides property, which is an array of page objects. So I am trying the code slideInfo = SLIDES.presentations().get(presentationId=presId).execute() slideInfo = slideInfo.get(['slides'][0]['pageType']) But I get an error message saying "TypeError: string indices must be integers" However, I thought that using brackets with a string was an acceptable replacement for accessing with a dot. In fact, I don't know how to translate this to dot accessors because it throws a syntax error since the keys have to be wrapped in quotes. 'slides'[0].'pageType' throws a syntax error because of the dot, and without the dot it doesn't work either.
So with the JSON representation in the docs: { "presentationId": string, "pageSize": { object (Size) }, "slides": [ { object (Page) } ], "title": string, "masters": [ { object (Page) } ], "layouts": [ { object (Page) } ], "locale": string, "revisionId": string, "notesMaster": { object (Page) } } You can access the slides using: slideInfo.get('slides') or slideInfo['slides'] So if you want to get the pageType of the first slide, it would be: slideInfo['slides'][0]['pageType'] or slideInfo.get('slides')[0].get('pageType')
8
6
62,838,129
2020-7-10
https://stackoverflow.com/questions/62838129/using-global-variables-inside-a-nested-function-in-python
I read this code (given below) and my understanding was that, if a variable is declared global inside a function and if it is modified then it's value will change permanently. x = 15 def change(): global x x = x + 5 print("Value of x inside a function :", x) change() print("Value of x outside a function :", x) Output: Value of x inside a function : 20 Value of x outside a function : 20 But the code below shows a different output. How is it that the value of x does not change inside the print("After making change: ", x) and still remains 15 def add(): x = 15 def change(): global x x = 20 print("Before making changes: ", x) print("Making change") change() print("After making change: ", x) add() print("value of x",x) Output: Before making changes: 15 Making change After making change: 15 value of x 20
In add, x is not a global variable; it's local to add. You either need to make it global as well, so that add and change are referring to the same variable def add(): global x x = 15 def change(): global x x = 20 print("Before making changes: ", x) print("Making change") change() print("After making change: ", x) add() print("value of x",x) or you need to declare x in change as nonlocal, rather than global. def add(): x = 15 def change(): nonlocal x x = 20 print("Before making changes: ", x) print("Making change") change() print("After making change: ", x) add() print("value of x",x)
9
12
62,800,189
2020-7-8
https://stackoverflow.com/questions/62800189/pytorch-lightning-move-tensor-to-correct-device-in-validation-epoch-end
I would like to create a new tensor in a validation_epoch_end method of a LightningModule. The official docs (page 48) state that we should avoid direct .cuda() or .to(device) calls: There are no .cuda() or .to() calls. . . Lightning does these for you. and we are encouraged to use the type_as method to transfer to the correct device. new_x = new_x.type_as(x.type()) However, in the validation_epoch_end step I do not have any tensor to copy the device from (by the type_as method) in a clean way. My question is what should I do if I want to create a new tensor in this method and transfer it to the device where the model is? The only thing I can think of is to find a tensor in the outputs dictionary but it feels kinda messy: avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() output = self(self.__test_input.type_as(avg_loss)) Is there any clean way to achieve that?
did you check part 3.4 (page 34) in the doc you linked? LightningModules know what device they are on! Construct tensors on the device directly to avoid the CPU->Device transfer. Bad: t = torch.rand(2, 2).cuda(); good (self is the LightningModule): t = torch.rand(2, 2, device=self.device). I had a similar issue when creating tensors and this helped me. I hope it will help you too.
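A minimal sketch of how this could look in validation_epoch_end, assuming the outputs list carries a 'val_loss' tensor as in the question; the method is meant to live inside the LightningModule so that self.device is available:

import torch

# inside the LightningModule
def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    new_x = torch.rand(2, 2, device=self.device)  # created directly on the model's device
    output = self(new_x)                          # no .cuda()/.to() calls needed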
14
25
62,827,538
2020-7-10
https://stackoverflow.com/questions/62827538/in-cython-class-whats-the-difference-of-using-init-and-cinit
Code block 1 using __init__ %%cython -3 cdef class c: cdef: int a str s def __init__(self): self.a=1 self.s="abc" def get_vals(self): return self.a,self.s m=c() print(m.get_vals()) Code block 2 using __cinit__ %%cython -3 cdef class c: cdef: int a str s def __cinit__(self): # cinit here self.a=1 self.s="abc" def get_vals(self): return self.a,self.s m=c() print(m.get_vals()) I tested both of these codes, and both run without error. In this case, what's the point of using __cinit__ instead of __init__? I've read the official article, I got confused by one sentence: If you need to pass a modified argument list to the base type, you will have to do the relevant part of the initialization in the __init__() method instead, where the normal rules for calling inherited methods apply. What does "modified argument" mean? Here, why should I use init rather than cinit?
It's mainly about inheritance. Suppose I inherit from your class C: class D(C): def __init__(self): pass # oops forgot to call C.__init__ class E(C): def __init__(self): super().__init__() super().__init__() # called it twice How __init__ ends up being called is entirely up to the classes that inherit from it. Bear in mind there may be multiple layers of inheritance. Additionally, a fairly common pattern for creating classes that wrap C/C++ objects is to create a staticmethod cdef function as an alternative constructor: cdef class C: def __cinit__(self): print("In __cinit__") @staticmethod cdef make_from_ptr(void* x): val = C.__new__(C) # do something with pointer return val In this case again, __init__ typically isn't called. In contrast __cinit__ is guaranteed to be called exactly once and this happens automatically by Cython at an early stage of the process. This is most important when you have cdef attributes (such as C pointers) that your class relies on being initialized. It would be impossible for a Python derived class to even set these up, but __cinit__ can ensure that they are. In your case it probably doesn't matter - use whichever you're happy with. In terms of "modified arguments" it's saying you can't replicate this with __cinit__: class NamedValue: def __init__(self, name, value): self.name = name self.value = value class NamedHelloPlus1(NamedValue): def __init__(self, value): super().__init__("Hello", value+1) i.e. NamedHelloPlus1 controls what arguments NamedValue gets. With __cinit__, all the calls to __cinit__ receive exactly the same arguments (because Cython arranges the call - you cannot call it manually).
8
9
62,811,311
2020-7-9
https://stackoverflow.com/questions/62811311/installing-awscli-on-alpine-how-to-fix-modulenotfounderror-no-module-named
Context I had a dockerfile based on postgres:11-alpine that was working in the past (probably a few months since it was last built) with the following definition: FROM postgres:11-alpine RUN apk update # install aws cli # taken from: https://github.com/anigeo/docker-awscli/blob/master/Dockerfile RUN \ apk -Uuv add groff less python py-pip && \ pip install awscli && \ apk --purge -v del py-pip && \ rm /var/cache/apk/* I recently tried to rebuild it before upgrading to postgres 12, but the image build failed with: ERROR: unsatisfiable constraints: python (missing): required by: world[python] I guess the python package is gone now because YOLO? Whatever, I tried to upgrade to python3 by changing the docker file to: RUN \ apk -Uuv add groff less python3 py-pip && \ pip install awscli && \ apk --purge -v del py-pip && \ rm /var/cache/apk/* This looked like it worked, but then when running the aws command it failed with error: ModuleNotFoundError: No module named 'six' Question How to fix this so the awscli will not give the error No module named 'six'?
The problem seems to actually be caused by deleting py-pip. As far as I know, the aim of the apk del was to reduce the size of the final docker image. I'm not sure why deleting py-pip used to work when the file was using the python package. So the following now seems to be working: RUN \ apk -Uuv add groff less python3 py-pip && \ pip install awscli && \ rm /var/cache/apk/*
7
12
62,805,973
2020-7-9
https://stackoverflow.com/questions/62805973/how-do-i-extract-all-of-the-text-from-a-pdf-using-indexing
I am new to Python and coding in general. I'm trying to create a program that will OCR a directory of PDFs then extract the text so I can later pick out specific things. However, I am having trouble getting pdfPlumber to extract all the text from all of the pages. You can index from start to an end, but if the end is unknown, it breaks because the index is out of range. import ocrmypdf import os import requests import pdfplumber import re import logging import sys import PyPDF2 ## test folder C:\Users\adams\OneDrive\Desktop\PDF user_direc = input("Enter the path of your files: ") #walks the path and prints out each PDF in the #OCRs the documents and skips any OCR'd pages. for dir_name, subdirs, file_list in os.walk(user_direc): logging.info(dir_name + '\n') os.chdir(dir_name) for filename in file_list: file_ext = os.path.splitext(filename)[0--1] if file_ext == '.pdf': full_path = dir_name + '/' + filename print(full_path) result = ocrmypdf.ocr(filename, filename, skip_text=True, deskew = True, optimize = 1) logging.info(result) #the next step is to extract the text from each individual document and print directory = os.fsencode(user_direc) for file in os.listdir(directory): filename = os.fsdecode(file) if filename.endswith('.pdf'): with pdfplumber.open(file) as pdf: page = pdf.pages[0] text = page.extract_text() print(text) As is, this will only take the text from the first page of each PDF. I want to extract all of the text from each PDF but pdfPlumber will break if my index is too large and I do not know the number of pages the PDF will have. I've tried page = pdf.pages[0--1] but this breaks as well. I have not been able to find a workaround with PyPDF2, either. I apologize if this sloppy code or unreadable. I've tried to add comments to kind of explain what I am doing.
The pdfplumber git page says pdfplumber.open returns an instance of the pdfplumber.PDF class. That instance has the pages property which is a list of pdfplumber.Page instances - one per Page loaded from your pdf. Looking at your code, if you do: total_pages = len(pdf.pages) You should get the total pages for the currently loaded pdf. To combine all the pdf's text into one giant text string, you could try the 'for in' operation. Try changing your existing code: for file in os.listdir(directory): filename = os.fsdecode(file) if filename.endswith('.pdf'): with pdfplumber.open(file) as pdf: page = pdf.pages[0] text = page.extract_text() print(text) To: for file in os.listdir(directory): filename = os.fsdecode(file) if filename.endswith('.pdf'): all_text = '' # new line with pdfplumber.open(file) as pdf: # page = pdf.pages[0] - comment out or remove line # text = page.extract_text() - comment out or remove line for pdf_page in pdf.pages: single_page_text = pdf_page.extract_text() print( single_page_text ) # separate each page's text with newline all_text = all_text + '\n' + single_page_text print(all_text) # print(text) - comment out or remove line Rather than use the page's index value pdf.page[0] to access individual pages, use for pdf_page in pdf.pages. It will stop looping after it reaches the last page without generating an Exception. You won't have to worry about using an index value that's out of range.
7
19
62,819,600
2020-7-9
https://stackoverflow.com/questions/62819600/detect-and-remove-outliers-as-step-of-a-pipeline
I have a problem, I'm trying to build my own class to put into a pipeline in python, but it doesn't work. The problem I am trying to solve is a multiclass classification problem. What I want to do this to add a step in the pipeline to detect and remove outliers. I found this detect and remove outliers in pipeline python which is very similar to what I did. This is my class: from sklearn.neighbors import LocalOutlierFactor from sklearn.base import BaseEstimator, TransformerMixin import numpy as np class OutlierExtraction(BaseEstimator, TransformerMixin): def __init__(self, **kwargs ): self.kwargs = kwargs def transform(self, X, y): """ X should be of shape (n_samples, n_features) y should be of shape (n_samples,) """ lof = LocalOutlierFactor(**self.kwargs) lof.fit(X) nof = lof.negative_outlier_factor_ return X[nof > np.quantile(nof, 0.95), :], y[nof > np.quantile(nof, 0.95)] def fit(self, X, y = None): return self But i get this error in fit_transform return self.fit(X, y, **fit_params).transform(X) TypeError: transform() missing 1 required positional argument: 'y' The following code is the code i use to call this class: scaler = preprocessing.RobustScaler() outlierExtractor = OutlierExtraction() pca = PCA() classfier = svm.SVC() pipeline = [('scaler', scaler), ('outliers', outlierExtractor), ('reduce_dim', pca), ('classfier', classfier)] pipe = Pipeline(pipeline) params = { 'reduce_dim__n_components': [5, 15], 'classfier__kernel': ['rbf'], 'classfier__gamma': [0.1], 'classfier__C': [1], 'classfier__decision_function_shape':['ovo']} my_scoring = 'f1_macro' n_folds = 5 gscv = GridSearchCV(pipe, param_grid=params, scoring=my_scoring, n_jobs=-1, cv=n_folds, refit=True) gscv.fit(train_x, train_y)
The error is because the transform method def transform(self, X, y) requires both X and y to be passed in, but whatever is calling it is only passing X. (I can't see where it's called from in your code so I assume it's being called by the underlying library). I don't know if making y optional (def transform(self, X, y=None)) and modifying your method would work in this case. Otherwise, you'll have to figure out how to get the calling code to pass y, or provide it another way. I'm not familiar with the library, but looking at the source code shows that transform() should only take a single parameter X: if y is None: # fit method of arity 1 (unsupervised transformation) return self.fit(X, **fit_params).transform(X) else: # fit method of arity 2 (supervised transformation) return self.fit(X, y, **fit_params).transform(X)
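Since plain sklearn pipelines never pass y to transform, one commonly used workaround (not from the answer above, just a hedged sketch) is imbalanced-learn's FunctionSampler, whose fit_resample step does receive both X and y:

import numpy as np
from imblearn import FunctionSampler
from imblearn.pipeline import Pipeline
from sklearn.neighbors import LocalOutlierFactor
from sklearn import svm

def lof_filter(X, y):
    # keep the samples whose negative outlier factor exceeds its 95th percentile,
    # mirroring the filter used in the question
    lof = LocalOutlierFactor()
    lof.fit(X)
    nof = lof.negative_outlier_factor_
    keep = nof > np.quantile(nof, 0.95)
    return X[keep, :], y[keep]

pipe = Pipeline([
    ('outliers', FunctionSampler(func=lof_filter)),
    ('classifier', svm.SVC()),
])

The sampler step is only applied during fit, so prediction on new data is left untouched.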
7
5
62,818,306
2020-7-9
https://stackoverflow.com/questions/62818306/what-is-the-most-efficient-way-to-fill-missing-values-in-this-data-frame
I have the following pandas dataframe : df = pd.DataFrame([ ['A', 2017, 1], ['A', 2019, 1], ['B', 2017, 1], ['B', 2018, 1], ['C', 2016, 1], ['C', 2019, 1], ], columns=['ID', 'year', 'number']) and am looking for the most efficient way to fill the missing years with a default value of 0 for the column number The expected output is: ID year number 0 A 2017 1 1 A 2018 0 2 A 2019 1 3 B 2017 1 4 B 2018 1 5 C 2016 1 6 C 2017 0 7 C 2018 0 8 C 2019 1 The dataframe that I have is relatively big, so I am looking for an efficient solution. Edit: This is the code that I have so far: min_max_dict = df[['ID', 'year']].groupby('ID').agg([min, max]).to_dict('index') new_ix = [[], []] for id_ in df['ID'].unique(): for year in range(min_max_dict[id_][('year', 'min')], min_max_dict[id_][('year', 'max')]+1): new_ix[0].append(id_) new_ix[1].append(year) df.set_index(['ID', 'year'], inplace=True) df = df.reindex(new_ix, fill_value=0).reset_index() Result ID year number 0 A 2017 1 1 A 2018 0 2 A 2019 1 3 B 2017 1 4 B 2018 1 5 C 2016 1 6 C 2017 0 7 C 2018 0 8 C 2019 1
A slightly faster approach rather than using explode is to use pd.Series constructor. And you can use .iloc if years are already sorted from earliest to latest. idx = df.groupby('ID')['year'].apply(lambda x: pd.Series(np.arange(x.iloc[0], x.iloc[-1]+1))).reset_index() df.set_index(['ID','year']).reindex(pd.MultiIndex.from_arrays([idx['ID'], idx['year']]), fill_value=0).reset_index() Output: ID year number 0 A 2017 1 1 A 2018 0 2 A 2019 1 3 B 2017 1 4 B 2018 1 5 C 2016 1 6 C 2017 0 7 C 2018 0 8 C 2019 1
24
20
62,818,625
2020-7-9
https://stackoverflow.com/questions/62818625/read-local-json-file-with-python
I want to read a JSON file with Python : Here is part of my JSON file : { "Jointure":[ { "IDJointure":1, "societe":"S.R.T.K", "date":"2019/01/01", "heure":"05:47:00"}, { "IDJointure":2, "societe":"S.R.T.K", "date":"2019/01/01", "heure":"05:50:00"}]} This is the code : import json data = json.loads('Data2019.json') for i in data['Jointure']: print(i) But, here is the error that was displayed Traceback (most recent call last): File "C:\Users\HP\Desktop\readJSON.py", line 4, in <module> data = json.loads('Data2019.json') File "C:\Users\HP\AppData\Local\Programs\Python\Python38\lib\json\__init__.py", line 357, in loads return _default_decoder.decode(s) File "C:\Users\HP\AppData\Local\Programs\Python\Python38\lib\json\decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "C:\Users\HP\AppData\Local\Programs\Python\Python38\lib\json\decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) >>>
Try pandas import pandas as pd patients_df = pd.read_json('E:/datasets/patients.json') patients_df.head()
12
-9
62,793,544
2020-7-8
https://stackoverflow.com/questions/62793544/efficient-way-to-remove-half-of-the-duplicate-items-in-a-list
If I have a list say l = [1, 8, 8, 8, 1, 3, 3, 8] and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times. So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. So the new list will be something like k=[1,8,8,3] What is the fastest way to do this? I did list.count() for every element but it was very slow.
If order isn't important, a way would be to get the odd or even indexes only after a sort. Those lists will be the same so you only need one of them. l = [1,8,8,8,1,3,3,8] l.sort() # Get all odd indexes odd = l[1::2] # Get all even indexes even = l[::2] print(odd) print(odd == even) Result: [1, 3, 8, 8] True
60
106
62,810,872
2020-7-9
https://stackoverflow.com/questions/62810872/pairwise-distances-between-two-islands-connected-components-in-numpy-array
Consider the following image, stored as a numpy array: a = [[0,0,0,0,0,1,1,0,0,0], [0,0,0,0,1,1,1,1,0,0], [0,0,0,0,0,1,1,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,2,0,0,0,0], [0,0,0,0,0,2,2,0,0,0], [0,0,0,0,0,2,0,0,0,0], [0,0,0,0,3,3,3,0,0,0], [4,0,0,0,0,0,0,0,0,0], [4,4,0,0,0,0,0,0,0,0], [4,4,4,0,0,0,0,0,0,0]] a = np.array(a) Zeros represent background pixels, 1,2,3 and 4 represent pixels that belong to objects. You can see that objects always form contiguous islands or regions in the image. I would like to know the distance between every pair of objects. As distance measure I'd like to have the shortest, staightline distance, between those pixels of the object, that are closest to each other. Example: Distance(2,3) = 1, because they are touching. Distance(1,2) = 2, because there is exactly one background pixel separating the two regions, or in other words, the closest pixels of the objects are two pixels apart. Can anybody tell me how one would approach this problem in Python? Or link me to some resources?
This is what you would need: from scipy.spatial.distance import cdist def Distance(a, m, n): return cdist(np.argwhere(a==m),np.argwhere(a==n),'minkowski',p=1.).min() or similarly per @MaxPowers comment (claim: cityblock is faster): return cdist(np.argwhere(a==m),np.argwhere(a==n),'cityblock').min() Find the locations of islands and calculate pairwise distance of locations and get the minimum. I am not 100% sure of your desired distance, but I think you are looking for l1 norm. If not, you can change the cdist measure to your desired metric. output: Distance(a,2,3) 1.0 Distance(a,2,1) 2.0 Distance(a,3,1) 5.0 Distance(a,4,3) 5.0
9
8
62,802,006
2020-7-8
https://stackoverflow.com/questions/62802006/aws-sam-cli-fresh-install-throws-error-dyld-library-not-loaded-executable-p
I am trying to use the AWS SAM CLI installed through Homebrew and I am seeing the following error when I try to use sam with any command: dyld: Library not loaded: @executable_path/../.Python Referenced from: /usr/local/Cellar/aws-sam-cli/0.53.0/libexec/bin/python3.7 Reason: image not found Looking at the .Python file referenced in the error, it is symlinked to a python folder that doesn't actually exist: drwxr-xr-x 7 RCR staff 224 Jun 16 19:40 . drwxr-xr-x 9 RCR staff 288 Jul 8 14:55 .. lrwxr-xr-x 1 RCR staff 70 Jun 16 19:40 .Python -> ../../../../opt/python/Frameworks/Python.framework/Versions/3.7/Python drwxr-xr-x 39 RCR staff 1248 Jul 8 14:55 bin drwxr-xr-x 3 RCR staff 96 Jun 16 19:40 include drwxr-xr-x 3 RCR staff 96 Jun 16 19:40 lib -rw-r--r-- 1 RCR staff 61 Jun 16 19:40 pip-selfcheck.json I do not have a 3.7 folder at that location, but I do have a 3.8 folder. That said, I am not sure what is the origin of this folder. My Python3 installation is from Homebrew and located in the Cellar as usual (../Cellar/[email protected]/3.8.3_1/bin/python3) and symlinked to /usr/local/bin/python3. Not sure if that is relevant but I figure more info can't hurt. I tried symlinking the .Python file to the 3.8 version I do have at that location but it only produced other errors. Any idea how I can get this CLI working?
Looks like 0.53.0 comes with python3.7 executables; there is a workaround until it is fixed: brew install --build-from-source aws-sam-cli See https://github.com/awslabs/aws-sam-cli/issues/2101 and https://github.com/aws/homebrew-tap/issues/93
13
19
62,808,852
2020-7-9
https://stackoverflow.com/questions/62808852/cant-run-ipython-on-cmd
I successfully installed ipython via pip. I then wanted to use it by launching it through the Windows 10 command prompt, but I am getting the following error: 'ipython' is not recognized as an internal or external command, operable program or batch file. I have gone through many questions on stackoverflow but cannot get a relevant solution. I tried pip install ipython to confirm that ipython is installed and, following the instructions in my tutorial, I typed ipython on cmd to launch the program, but it has never worked. This is slowing down my learning, please help!
Search your machine for the ipython application (the directory in which it is installed) and then add that directory to the PATH environment variable. For example, in my case the location was C:\Users\DELL\AppData\Local\Programs\Python\Python37\Scripts Add this path to the PATH environment variable (see here) and your problem is solved.
12
6
63,438,979
2020-7-8
https://stackoverflow.com/questions/63438979/python-pmdarima-autoarima-does-not-work-with-large-data
I have a Dataframe with around 80,000 observations taken every 15 min. The seasonal parameter m is assumed to be 96, because the pattern repeats every 24h. When I feed this information into my auto_arima call, it takes a long time (some hours) until the following error message is thrown: MemoryError: Unable to allocate 5.50 GiB for an array with shape (99, 99, 75361) and data type float64 The code that I am using: stepwise_fit = auto_arima(df['Hges'], seasonal=True, m=96, stepwise=True, stationary=True, trace=True) print(stepwise_fit.summary()) I tried resampling to hourly values, to reduce the amount of data and bring the m-factor down to 24, but my computer still cannot calculate the result. How do you find the weighting factors with auto_arima when dealing with large data?
I don't recall the exact source where I read this, but neither auto.arima nor pmdarima are really optimized to scale, which might explain the issues you are facing. But there are some more important things to note about your question: With 80K data points at 15 minute intervals, ARIMA probably isn't the best type of model for your use case anyway: With the frequency and density of your data, it is likely that there are multiple cycles/seasonal patterns, and ARIMA can handle only one seasonal component. So at the very least you should try a model that can handle multiple seasonalities like STS or Prophet (TBATS in R can also handle multiple seasonalities, but it is likely to suffer from the same issues as auto.arima, since it is in the same package). At 80K points and 15 minute measurement intervals, I assume you are most likely dealing with a "physical" time series that is the output of a sensor or some other metering/monitoring device (electrical load, network traffic, etc...). These types of time series are usually very good use cases for LSTM or other Deep Learning based models instead of ARIMA.
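As a rough illustration of the multiple-seasonality route (a hedged sketch, not a drop-in replacement; column handling and seasonal settings are assumptions), Prophet can be fed the 15-minute series directly:

import pandas as pd
from prophet import Prophet  # older installs expose this as fbprophet

# assumes df has a DatetimeIndex and the 'Hges' column from the question
prophet_df = df['Hges'].reset_index()
prophet_df.columns = ['ds', 'y']          # Prophet's expected column names

m = Prophet(daily_seasonality=True, weekly_seasonality=True)
m.fit(prophet_df)

future = m.make_future_dataframe(periods=96, freq='15min')  # forecast one day ahead
forecast = m.predict(future)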
7
13
62,798,296
2020-7-8
https://stackoverflow.com/questions/62798296/how-to-hide-axis-lines-but-show-ticks-in-a-chart-in-altair-while-actively-using
I am aware of using axis=None to hide axis lines. But when you have actively used axis to modify the graph, is it possible to keep just the ticks, but hide the axis lines for both X and Y axis? For example, here is a graph I have where I'd like it to happen - import pandas as pd import altair as alt df = pd.DataFrame({'a': [1,2,3,4], 'b':[2000,4000,6000,8000]}) alt.Chart(df).mark_trail().encode( x=alt.X('a:Q', axis=alt.Axis(titleFontSize=12, title='Time →', labelColor='#999999', titleColor='#999999', titleAlign='right', titleAnchor='end', titleY=-30)), y=alt.Y('b:Q', axis=alt.Axis(format="$s", tickCount=3, titleFontSize=12, title='Cost →', labelColor='#999999', titleColor='#999999', titleAnchor='end')), size=alt.Size('b:Q', legend=None) ).configure_view(strokeWidth=0).configure_axis(grid=False) The output should look like the ticks in this SO post. Note: The plot in that post has nothing to do with the demo provided here. its just for understanding purposes.
Vega-Lite calls the axis line the domain. You can hide it by passing domain=False to the axis configuration: import pandas as pd import altair as alt df = pd.DataFrame({'a': [1,2,3,4], 'b':[2000,4000,6000,8000]}) alt.Chart(df).mark_trail().encode( x=alt.X('a:Q', axis=alt.Axis(titleFontSize=12, title='Time →', labelColor='#999999', titleColor='#999999', titleAlign='right', titleAnchor='end', titleY=-30)), y=alt.Y('b:Q', axis=alt.Axis(format="$s", tickCount=3, titleFontSize=12, title='Cost →', labelColor='#999999', titleColor='#999999', titleAnchor='end')), size=alt.Size('b:Q', legend=None) ).configure_view(strokeWidth=0).configure_axis(grid=False, domain=False)
10
11
62,785,679
2020-7-8
https://stackoverflow.com/questions/62785679/typevar-describing-a-class-that-must-subclass-more-than-one-class
I would like to create a type annotation T that describes a type that must be a subclass of both class A and class B. T = TypeVar('T', bound=A) only specifies that T must be a subclass of A. T = TypeVar('T', A, B) only specifies that T must be a subclass of A or a subclass of B but not necessarily both. I actually want something like T = TypeVar('T', bound=[A, B]) which would mean T must subclass both A and B. Is there a standard way of doing this?
What you are looking for is an intersection type. Strictly speaking, I do not believe Python's type annotations support this (at least not yet). However, you can get something similar with a Protocol: from typing import Protocol, TypeVar class A: def foo(self) -> int: return 42 class B: def bar(self) -> bool: return False class C(A, B): pass class D(A): pass class E(B): pass class ABProtocol(Protocol): def foo(self) -> int: ... def bar(self) -> bool: ... T = TypeVar('T', bound=ABProtocol) def frobnicate(obj: T) -> int: if obj.bar(): return obj.foo() return 0 frobnicate(C()) frobnicate(D()) frobnicate(E()) Mypy complains: test.py:26: error: Value of type variable "T" of "frobnicate" cannot be "D" test.py:27: error: Value of type variable "T" of "frobnicate" cannot be "E" Of course, this requires you to explicitly annotate all the methods yourself, unfortunately, something like class ABProtocol(A, B, Protocol): pass isn't allowed
9
8
62,786,028
2020-7-8
https://stackoverflow.com/questions/62786028/importerror-libgthread-2-0-so-0-cannot-open-shared-object-file-no-such-file-o
I was builting a web app with streamlit, OpenCV and Torch on local machine. The whole project went well until I built a Docker file and was about to transport it to my Google Cloud Platform. Can anyone tell me what is really going wrong here? Here is my Dockerfile: FROM pytorch/pytorch:latest RUN pip install virtualenv ENV VIRTUAL_ENV=/venv RUN virtualenv venv -p python3 ENV PATH="VIRTUAL_ENV/bin:$PATH" WORKDIR /app ADD . /app # Install dependencies RUN pip install -r requirements.txt # copying all files over COPY . /app # Expose port ENV PORT 8501 # cmd to launch app when container is run CMD streamlit run app.py # streamlit-specific commands for config ENV LC_ALL=C.UTF-8 ENV LANG=C.UTF-8 RUN mkdir -p /root/.streamlit RUN bash -c 'echo -e "\ [general]\n\ email = \"\"\n\ " > /root/.streamlit/credentials.toml' RUN bash -c 'echo -e "\ [server]\n\ enableCORS = false\n\ " > /root/.streamlit/config.toml' And requirements.txt: albumentations==0.4.5 matplotlib==3.2.2 numpy==1.19.0 opencv-python==4.1.0.25 # opencv-python-headless==4.2.0.34 pandas==1.0.5 Pillow==7.1.2 scipy==1.5.0 streamlit==0.62.0
Maybe you should run the following commands before pip: apt update apt-get install -y libglib2.0-0 libsm6 libxrender1 libxext6
22
42
62,670,991
2020-7-1
https://stackoverflow.com/questions/62670991/read-csv-from-azure-blob-storage-and-store-in-a-dataframe
I'm trying to read multiple CSV files from blob storage using python. The code that I'm using is: blob_service_client = BlobServiceClient.from_connection_string(connection_str) container_client = blob_service_client.get_container_client(container) blobs_list = container_client.list_blobs(folder_root) for blob in blobs_list: blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name") stream = blob_client.download_blob().content_as_text() I'm not sure what the correct way is to store the CSV files I read into a pandas dataframe. I tried to use: df = df.append(pd.read_csv(StringIO(stream))) But this shows me an error. Any idea how I can do this?
Base on @sahaj-raj-malla answer: 2 snippets of code to load (or save) file from blob: shorter load with pandas [necessary to pip install adlfs fsspec ] import pandas as pd account_name = "my_account_stage_name" account_key = "loooooooooooooooooooooong_acccccooooooooount_keeeeeeeeeeeeeeeeey$$$$***$$$$$$$$$$$$$$22222222" connection_string = f"DefaultEndpointsProtocol=https;AccountName={account_name};AccountKey={account_key};EndpointSuffix=core.windows.net" pd.read_csv("abfs:///my_container_name/path/to/my/file/on/blob/file.csv", storage_options={"account_name": account_name, "connection_string": connection_string}) load with pandas & azure [necessary to pip install azure-storage-blob] from azure.storage.blob import BlobServiceClient import pandas as pd account_name = "my_account_stage_name" account_key = "loooooooooooooooooooooong_acccccooooooooount_keeeeeeeeeeeeeeeeey$$$$***$$$$$$$$$$$$$$22222222" connection_string = f"DefaultEndpointsProtocol=https;AccountName={account_name};AccountKey={account_key};EndpointSuffix=core.windows.net" # load file from blob container_name = "my_container_name" blob_name = "path/to/my/file/on/blob/file.csv" blob_service_client = BlobServiceClient.from_connection_string(connection_string) container_client = blob_service_client.get_container_client(container_name) blob_client = container_client.get_blob_client(blob_name) # load to RAM, eg. jupyter notebook pd.read_csv(blob_client.download_blob()) # save file to ROM, eg. local file local_file_name = "path/to/my/file/on/disk/file.csv" with open(local_file_name, "wb") as my_blob_locally: download_stream = blob_client.download_blob() my_blob_locally.write(download_stream.readall()) How to get Connections string goto Storage account -> Access Keys -> Show and copy Connection string
8
5
62,695,786
2020-7-2
https://stackoverflow.com/questions/62695786/error-215assertion-failed-scn-1-m-cols-in-function-cvperspectivetra
Below is a python script that calculates the homography between two images and then map a desired point from one image to another import cv2 import numpy as np if __name__ == '__main__' : # Read source image. im_src = cv2.imread(r'C:/Users/kjbaili/.spyder-py3/webcam_calib/homography/khaledd 35.0 sec.jpg') # Five corners of the book in source image pts_src = np.array([[281, 238], [325, 297], [283, 330],[248, 325],[213, 321]]) # Read destination image. im_dst = cv2.imread(r'C:/Users/kjbaili/.spyder-py3/webcam_calib/homography/20.jpg') # Five corners of the book in destination image. pts_dst = np.array([[377, 251],[377, 322],[316, 315],[289, 284],[263,255]]) # Calculate Homography h, status = cv2.findHomography(pts_src, pts_dst) # provide a point i wish to map from image 1 to image 2 a = np.array([[260, 228]]) pointsOut = cv2.getPerspectiveTransform(a, h) # Display image cv2.imshow("treced_point_image", pointsOut) cv2.waitKey(0) cv2.destroyAllWindows() However, when i display the image that contains the mapped point it returns the following error: error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\core\src\matmul.dispatch.cpp:531: error: (-215:Assertion failed) scn + 1 == m.cols in function 'cv::perspectiveTransform' According to my knowledge this error means that parameter assigned to the function perspective transform is not correct or not being read. I checked the two images at the reading step and everything is fine. So anyone knows why this happens? Thanks in advance Khaled
You are passing wrong arguments to cv2.getPerspectiveTransform(). The function expects a set of four coordinates in the original image and the new coordinates in the transformed image. You can directly pass the pts_src and pts_dst to the function and you will get the transformation matrix. You can then get the transformed coordinates for point "a" by matrix multiplication like a_transformed = np.dot(matrix, a).
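A hedged sketch of that idea (note that cv2.getPerspectiveTransform expects exactly four float32 correspondences, so only four of the question's five pairs are used here, and the dot product needs homogeneous coordinates):

import cv2
import numpy as np

pts_src = np.float32([[281, 238], [325, 297], [283, 330], [248, 325]])
pts_dst = np.float32([[377, 251], [377, 322], [316, 315], [289, 284]])

matrix = cv2.getPerspectiveTransform(pts_src, pts_dst)

a = np.array([260, 228, 1.0])                           # point in homogeneous coordinates
a_transformed = np.dot(matrix, a)
a_transformed = a_transformed[:2] / a_transformed[2]    # divide by w to get pixel coordinates
print(a_transformed)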
8
2
62,671,226
2020-7-1
https://stackoverflow.com/questions/62671226/plotly-dash-how-to-reset-the-n-clicks-attribute-of-a-dash-html-button
I have a basic datatable in plotly/dash. My goal is to upload (or print for the sake of the example...) after I press the upload-button. The issue is, that I can't figure out how to get the n_clicks attribute of the button back to zero. So what happens is that after I clicked the button for the first time it prints continuously whenever something changes (row added or number changed/added), but what I want is for it to print only once whenever I click the button. This is the code: import dash from dash.dependencies import Input, Output, State import dash_table import dash_daq as daq import dash_core_components as dcc import dash_html_components as html import pandas as pd external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] df = pd.read_csv('.../dummy_csv.csv') app = dash.Dash(__name__) app.layout = html.Div([ html.Div(id='my-div'), dash_table.DataTable( id='adding-rows-table', style_data={ 'whiteSpace': 'normal', 'height': 'auto' }, style_table={ 'maxHeight': '800px' , 'overflowY': 'scroll' }, columns=[ {'name': i, 'id': i} for i in df.columns ], data=df.to_dict('records'), editable=True, row_deletable=True ), html.Button('+ Row', id='editing-rows-button', n_clicks=0), html.Button('Update', id='btn-nclicks-1', n_clicks=0 ), ]) @app.callback( Output(component_id='my-div', component_property='children'), [Input('btn-nclicks-1', 'n_clicks'), Input('adding-rows-table', 'data')] ) def update_output_div(n_clicks, data): if n_clicks > 0: print(data) # n_clicks = 0 # return n_clicks else: print("else loop") @app.callback( Output('adding-rows-table', 'data'), [Input('editing-rows-button', 'n_clicks')], [State('adding-rows-table', 'data'), State('adding-rows-table', 'columns')]) def add_row(n_clicks, rows, columns): if n_clicks > 0: rows.append({c['id']: '' for c in columns}) return rows if __name__ == '__main__': app.run_server(debug=True) This the CSV: a,b,c,d 1,1,5,1 2,2,5,1 2,33,6,2 3,4,6,2 And this is the "faulty" output.
You could use the dash.callback_context property to trigger the callback only when the number of clicks has changed rather than after the first click. See the section on "Determining which Button Changed with callback_context" in the Dash documentation. The following is an example of how you could update your callback. @app.callback(Output(component_id='my-div', component_property='children'), [Input('btn-nclicks-1', 'n_clicks'), Input('adding-rows-table', 'data')]) def update_output_div(n_clicks, data): changed_id = [p['prop_id'] for p in dash.callback_context.triggered][0] if 'btn-nclicks-1' in changed_id: print(data) # n_clicks = 0 # return n_clicks else: print("else loop")
11
14
62,748,978
2020-7-6
https://stackoverflow.com/questions/62748978/python-annotate-variable-as-key-of-a-typeddict
Basically a distilled down version of this (as yet unanswered) question. I want to state that a variable should only take on values that are keys in a TypedDict. At present I'm defining a separate Literal type to represent the keys, for example: from typing import Literal, TypedDict class MyTD(TypedDict): a: int b: int mytd = MyTD(a=1, b=2) key = "a" mytd[key] # error: TypedDict key must be a string literal; expected one of ('a', 'b') MyTDKeyT = Literal["a", "b"] typed_key: MyTDKeyT = "b" mytd[typed_key] # no error I would like to be able to replace the Literal definition for all the usual reasons of wanting to minimize duplicated code. Pseudo-code: key: Keys[MyTD] = "a" mytd[key] # would be no error not_key: Keys[MyTD] = "z" # error Is there a way to achieve this? To clarify, given that mypy can tell me that the key type needs to be a literal of "a" or "b", I'm hoping there might be a less error prone way to annotate a variable to that type, rather than having to maintain two separate lists of keys side-by-side, once in the TypedDict definition, once in the Literal definition.
Using MyPy, I don't think this is possible. I ran this experiment: from typing import TypedDict class MyTD(TypedDict): a: str b: int d = MyTD(a='x', b=2) reveal_type(list(d)) The MyPy output was: Revealed type is "builtins.list[builtins.str]" This indicates that internally it is not tracking the keys as literals. Otherwise, we would expect: Revealed type is "builtins.list[Literal['A', 'B']]" Also, this errors out in MyPy, so __required_keys__ isn't even inspectable: reveal_type(MyTD.__required_keys__)
16
4
62,687,193
2020-7-2
https://stackoverflow.com/questions/62687193/how-to-create-a-pathlib-relative-path-with-a-dot-starting-point
I needed to create a relative path starting with the current directory as a "." dot For example, in windows ".\envs\.some.env" or "./envs/.some.env" elsewhere I wanted to do this using pathlib. A solution was found, but it has a kludgy replace statement. Is there a better way to do this using pathlib? The usage was django-environ, and the goal was to support multiple env files. The working folder contained an envs folder with the multiple env files within that folder. import environ from pathlib import Path import os domain_env = Path.cwd() dotdot = Path("../") some_env = dotdot / "envs" / ".some.env" envsome = environ.Env() envsome.read_env(envsome.str(str(domain_env), str(some_env).replace("..", "."))) print(str(some_env)) print(str(some_env).replace("..", ".")) dot = Path("./") # Path(".") gives the same result some_env = dot / "envs" / ".some.env" print(str(some_env)) On windows gives: ..\envs\.some.env .\envs\.some.env envs\.some.env
Here's a multi-platform idea: import ntpath import os import posixpath from pathlib import Path, PurePosixPath, PureWindowsPath def dot_path(pth): """Return path str that may start with '.' if relative.""" if pth.is_absolute(): return os.fsdecode(pth) if isinstance(pth, PureWindowsPath): return ntpath.join(".", pth) elif isinstance(pth, PurePosixPath): return posixpath.join(".", pth) else: return os.path.join(".", pth) print(dot_path(PurePosixPath("file.txt"))) # ./file.txt print(dot_path(PureWindowsPath("file.txt"))) # .\file.txt print(dot_path(Path("file.txt"))) # one of the above, depending on host OS print(dot_path(Path("file.txt").resolve())) # (e.g.) /path/to/file.txt
10
3
62,771,868
2020-7-7
https://stackoverflow.com/questions/62771868/axiserror-axis-1-is-out-of-bounds-for-array-of-dimension-1-when-calculating-acc
I try to predict 10 classes using this code #Predicting the Test set rules y_pred = model.predict(traindata) y_pred = np.argmax(y_pred, axis=1) y_true = np.argmax(testdata, axis=1) target_names = ["akLembut","akMundur","akTajam","caMenaik", "caMenurun", "coretanTengah", "garisAtas", "garisBawah", "garisBawahBanyak", "ttdCangkang"] print("\n"+ classification_report(y_true, y_pred, target_names=target_names)) But then I got an error message like this AxisError Traceback (most recent call last) <ipython-input-13-a2b02b251547> in <module>() 2 y_pred = model.predict(traindata) 3 y_pred = np.argmax(y_pred, axis=1) ----> 4 y_true = np.argmax(testdata, axis=1) 5 6 target_names = ["akLembut","akMundur","akTajam","caMenaik", "caMenurun", "coretanTengah", "garisAtas", "garisBawah", "garisBawahBanyak", "ttdCangkang"] <__array_function__ internals> in argmax(*args, **kwargs) 2 frames /usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in _wrapit(obj, method, *args, **kwds) 45 except AttributeError: 46 wrap = None ---> 47 result = getattr(asarray(obj), method)(*args, **kwds) 48 if wrap: 49 if not isinstance(result, mu.ndarray): AxisError: axis 1 is out of bounds for array of dimension 1 I already train the data and I need to know each accuracy.
My guess is that your testdata array is only one-dimensional, so change to y_true = np.argmax(testdata, axis=0)
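A tiny hedged illustration of why the axis matters here:

import numpy as np

y = np.array([0.1, 0.7, 0.2])      # 1-D, like a flat label array
print(np.argmax(y, axis=0))        # 1 -- works on a 1-D array
# np.argmax(y, axis=1)             # AxisError: axis 1 is out of bounds for array of dimension 1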
14
18
62,764,148
2020-7-6
https://stackoverflow.com/questions/62764148/how-to-import-an-existing-requirements-txt-into-a-poetry-project
I am trying out Poetry in an existing project. It used pyenv and virtual env originally so I have a requirements.txt file with the project's dependencies. I want to import the requirements.txt file using Poetry, so that I can load the dependencies for the first time. I've looked through poetry's documentation, but I haven't found a way to do this. Is there a way to do it? I know that I can add all packages manually, but I was hoping for a more automated process, because there are a lot of packages.
poetry doesn't support this directly. But if you have a handmade list of required packages (ideally without any version numbers) that only contains the main dependencies and not the dependencies of a dependency, you could do this: $ cat requirements.txt | xargs poetry add
196
259
62,714,153
2020-7-3
https://stackoverflow.com/questions/62714153/does-ansible-shell-module-need-python-on-target-server
I have a very basic playbook that simply runs a script using the shell module on the remote target host. However, the output shows it failing with a "python interpreter not found" error. Installing python on each target is not a solution I can pursue. Is it possible to use my Ansible automation to run the playbook and execute the script with the shell module without the python dependency?
Any ansible operation requires python on the target node except the raw and script modules. Please note that these two modules are primarily meant to install ansible requirements (i.e. Python and its mandatory modules) on targets where they are missing. In other words, Python is definitely a requirement to run ansible following all best practices (e.g. using modules when they exists, creating idempotent tasks...). If installing Python on your targets is not an option, don't use ansible, choose an other tool. References: Ansible managed node requirements raw module script module
12
21
62,731,561
2020-7-4
https://stackoverflow.com/questions/62731561/discord-send-message-only-from-python-app-to-discord-channel-one-way-communic
I am designing an app where I can send a notification to my discord channel when something happens in my python code (e.g. a new user signs up on my website). It will be one-way communication, as only the python app will send messages to the discord channel. Here is what I have tried. import os import discord import asyncio TOKEN = "" GUILD = "" def sendMessage(message): client = discord.Client() @client.event async def on_ready(): channel = client.get_channel(706554288985473048) await channel.send(message) print("done") return "" client.run(TOKEN) print("can you see me?") if __name__ == '__main__': sendMessage("abc") sendMessage("def") The issue is that only the first message is being sent (i.e. abc) and then the async function blocks the second call (def). I don't need to listen to discord events and I don't need to keep the network communication open. Is there any way I can just post the text (a plain POST to the API, like we normally use) to the discord server without listening to events? Thanks.
You can send the message to a Discord webhook. First, make a webhook in the Discord channel you'd like to send messages to. Then, use the discord.Webhook.from_url method to fetch a Webhook object from the URL Discord gave you. Finally, use the discord.Webhook.send method to send a message using the webhook. If you're using version 2 of discord.py, you can use this snippet: from discord import SyncWebhook webhook = SyncWebhook.from_url("url-here") webhook.send("Hello World") Otherwise, you can make use of the requests module: import requests from discord import Webhook, RequestsWebhookAdapter webhook = Webhook.from_url("url-here", adapter=RequestsWebhookAdapter()) webhook.send("Hello World")
22
35
62,684,468
2020-7-1
https://stackoverflow.com/questions/62684468/pythons-requests-triggers-cloudflares-security-while-urllib-does-not
I'm working on an automated web scraper for a Restaurant website, but I'm having an issue. The said website uses Cloudflare's anti-bot security, which I would like to bypass, not the Under-Attack-Mode but a captcha test that only triggers when it detects a non-American IP or a bot. I'm trying to bypass it as Cloudflare's security doesn't trigger when I clear cookies, disable javascript or when I use an American proxy. Knowing this, I tried using python's requests library as such: import requests headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'} response = requests.get("https://grimaldis.myguestaccount.com/guest/accountlogin", headers=headers).text print(response) But this ends up triggering Cloudflare, no matter the proxy I use. HOWEVER when using urllib.request with the same headers as such: import urllib.request headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'} request = urllib.request.Request("https://grimaldis.myguestaccount.com/guest/accountlogin", headers=headers) r = urllib.request.urlopen(request).read() print(r.decode('utf-8')) When run with the same American IP, this time it does not trigger Cloudflare's security, even though it uses the same headers and IP used with the requests library. So I'm trying to figure out what exactly is triggering Cloudflare in the requests library that isn't in the urllib library. While the typical answer would be "Just use urllib then", I'd like to figure out what exactly is different with requests, and how I could fix it, first off to understand how requests works and Cloudflare detects bots, but also so that I may apply any fix I can find to other httplibs (notably asynchronous ones) EDIT N°2: Progress so far: Thanks to @TuanGeek we can now bypass the Cloudflare block using requests as long as we connect directly to the host IP rather than the domain name (for some reason, the DNS redirection with requests triggers Cloudflare, but urllib doesn't): import requests from collections import OrderedDict import socket # grab the address using socket.getaddrinfo answers = socket.getaddrinfo('grimaldis.myguestaccount.com', 443) (family, type, proto, canonname, (address, port)) = answers[0] headers = OrderedDict({ 'Host': "grimaldis.myguestaccount.com", 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0', }) s = requests.Session() s.headers = headers response = s.get(f"https://{address}/guest/accountlogin", verify=False).text To note: trying to access via HTTP (rather than HTTPS with the verify variable set to False) will trigger Cloudflare's block Now this is great, but unfortunately, my final goal of making this work asynchronously with the httplib HTTPX still isn't met, as using the following code, the Cloudflare block is still triggered even though we're connecting directly through the Host IP, with proper headers, and with verifying set to False: import trio import httpx import socket from collections import OrderedDict answers = socket.getaddrinfo('grimaldis.myguestaccount.com', 443) (family, type, proto, canonname, (address, port)) = answers[0] headers = OrderedDict({ 'Host': "grimaldis.myguestaccount.com", 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0', }) async def asks_worker(): async with httpx.AsyncClient(headers=headers, verify=False) as s: r = await s.get(f'https://{address}/guest/accountlogin') print(r.text) async def run_task(): async with 
trio.open_nursery() as nursery: nursery.start_soon(asks_worker) trio.run(run_task) EDIT N°1: For additional details, here's the raw HTTP request from urllib and requests REQUESTS: send: b'GET /guest/nologin/account-balance HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: grimaldis.myguestaccount.com\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0\r\nConnection: close\r\n\r\n' reply: 'HTTP/1.1 403 Forbidden\r\n' header: Date: Thu, 02 Jul 2020 20:20:06 GMT header: Content-Type: text/html; charset=UTF-8 header: Transfer-Encoding: chunked header: Connection: close header: CF-Chl-Bypass: 1 header: Set-Cookie: __cfduid=df8902e0b19c21b364f3bf33e0b1ce1981593721256; expires=Sat, 01-Aug-20 20:20:06 GMT; path=/; domain=.myguestaccount.com; HttpOnly; SameSite=Lax; Secure header: Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0 header: Expires: Thu, 01 Jan 1970 00:00:01 GMT header: X-Frame-Options: SAMEORIGIN header: cf-request-id: 03b2c8d09300000ca181928200000001 header: Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" header: Set-Cookie: __cfduid=df8962e1b27c25b364f3bf66e8b1ce1981593723206; expires=Sat, 01-Aug-20 20:20:06 GMT; path=/; domain=.myguestaccount.com; HttpOnly; SameSite=Lax; Secure header: Vary: Accept-Encoding header: Server: cloudflare header: CF-RAY: 5acb25c75c981ca1-EWR URLLIB: send: b'GET /guest/nologin/account-balance HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: grimaldis.myguestaccount.com\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0\r\nConnection: close\r\n\r\n' reply: 'HTTP/1.1 200 OK\r\n' header: Date: Thu, 02 Jul 2020 20:20:01 GMT header: Content-Type: text/html;charset=utf-8 header: Transfer-Encoding: chunked header: Connection: close header: Set-Cookie: __cfduid=db9de9687b6c22e6c12b33250a0ded3251292457801; expires=Sat, 01-Aug-20 20:20:01 GMT; path=/; domain=.myguestaccount.com; HttpOnly; SameSite=Lax; Secure header: Expires: Thu, 2 Jul 2020 20:20:01 GMT header: Cache-Control: no-cache, private, no-store header: X-Powered-By: Undertow/1 header: Pragma: no-cache header: X-Frame-Options: SAMEORIGIN header: Content-Security-Policy: script-src 'self' 'unsafe-inline' 'unsafe-eval' https://www.google-analytics.com https://www.google-analytics.com/analytics.js https://use.typekit.net connect.facebook.net/ https://googleads.g.doubleclick.net/ app.pendo.io cdn.pendo.io pendo-static-6351154740266000.storage.googleapis.com pendo-io-static.storage.googleapis.com https://www.google.com/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/api.js apis.google.com https://www.googletagmanager.com api.instagram.com https://app-rsrc.getbee.io/plugin/BeePlugin.js https://loader.getbee.io api.instagram.com https://bat.bing.com/bat.js https://www.googleadservices.com/pagead/conversion.js https://connect.facebook.net/en_US/fbevents.js https://connect.facebook.net/ https://fonts.googleapis.com/ https://ssl.gstatic.com/ https://tagmanager.google.com/;style-src 'unsafe-inline' *;img-src * data:;connect-src 'self' app.pendo.io api.feedback.us.pendo.io; frame-ancestors 'self' app.pendo.io pxsweb.com *.pxsweb.com;frame-src 'self' *.myguestaccount.com https://app.getbee.io/ *; header: X-Lift-Version: Unknown Lift Version header: CF-Cache-Status: DYNAMIC header: cf-request-id: 01b2c5b1fa00002654a25485710000001 header: Expect-CT: max-age=604800, 
report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" header: Set-Cookie: __cfduid=db9de811004e591f9a12b66980a5dde331592650101; expires=Sat, 01-Aug-20 20:20:01 GMT; path=/; domain=.myguestaccount.com; HttpOnly; SameSite=Lax; Secure header: Set-Cookie: __cfduid=db9de811004e591f9a12b66980a5dde331592650101; expires=Sat, 01-Aug-20 20:20:01 GMT; path=/; domain=.myguestaccount.com; HttpOnly; SameSite=Lax; Secure header: Set-Cookie: __cfduid=db9de811004e591f9a12b66980a5dde331592650101; expires=Sat, 01-Aug-20 20:20:01 GMT; path=/; domain=.myguestaccount.com; HttpOnly; SameSite=Lax; Secure header: Server: cloudflare header: CF-RAY: 5acb58a62c5b5144-EWR
This really piqued my interests. The requests solution that I was able to get working. Solution Finally narrow down the problem. When you use requests it uses urllib3 connection pool. There seems to be some inconsistency between a regular urllib3 connection and a connection pool. A working solution: import requests from collections import OrderedDict from requests import Session import socket # grab the address using socket.getaddrinfo answers = socket.getaddrinfo('grimaldis.myguestaccount.com', 443) (family, type, proto, canonname, (address, port)) = answers[0] s = Session() headers = OrderedDict({ 'Accept-Encoding': 'gzip, deflate, br', 'Host': "grimaldis.myguestaccount.com", 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0' }) s.headers = headers response = s.get(f"https://{address}/guest/accountlogin", headers=headers, verify=False).text print(response) Technical Background So I ran both method through Burp Suite to compare the requests. Below are the raw dumps of the requests using requests GET /guest/accountlogin HTTP/1.1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0 Accept-Encoding: gzip, deflate Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Connection: close Host: grimaldis.myguestaccount.com Accept-Language: en-GB,en;q=0.5 Upgrade-Insecure-Requests: 1 dnt: 1 using urllib GET /guest/accountlogin HTTP/1.1 Host: grimaldis.myguestaccount.com User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Accept-Language: en-GB,en;q=0.5 Accept-Encoding: gzip, deflate Connection: close Upgrade-Insecure-Requests: 1 Dnt: 1 The difference is the ordering of the headers. The difference in the dnt capitalization is not actually the problem. So I was able to make a successful request with the following raw request: GET /guest/accountlogin HTTP/1.1 Host: grimaldis.myguestaccount.com User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0 So the Host header has be sent above User-Agent. So if you want to continue to to use requests. Consider using a OrderedDict to ensure the ordering of the headers.
18
15
62,710,057
2020-7-3
https://stackoverflow.com/questions/62710057/access-color-from-plotly-color-scale
Is there a way in Plotly to access colormap colours at any value along its range? I know I can access the defining colours for a colourscale from plotly.colors.PLOTLY_SCALES["Viridis"] but I am unable to find how to access intermediate / interpolated values. The equivalent in Matplotlib is shown in this question. There is also another question that address a similar question from the colorlover library, but neither offers a nice solution.
This answer extend the already good one provided by Adam. In particular, it deals with the inconsistency of Plotly's color scales. In Plotly, you specify a built-in color scale by writing colorscale="name_of_the_colorscale". This suggests that Plotly already has a built-in tool that somehow convert the color scale to an appropriate value and is capable of dealing with these inconsistencies. By searching Plotly's source code we find the useful ColorscaleValidator class. Let's see how to use it: def get_color(colorscale_name, loc): from _plotly_utils.basevalidators import ColorscaleValidator # first parameter: Name of the property being validated # second parameter: a string, doesn't really matter in our use case cv = ColorscaleValidator("colorscale", "") # colorscale will be a list of lists: [[loc1, "rgb1"], [loc2, "rgb2"], ...] colorscale = cv.validate_coerce(colorscale_name) if hasattr(loc, "__iter__"): return [get_continuous_color(colorscale, x) for x in loc] return get_continuous_color(colorscale, loc) # Identical to Adam's answer import plotly.colors from PIL import ImageColor def get_continuous_color(colorscale, intermed): """ Plotly continuous colorscales assign colors to the range [0, 1]. This function computes the intermediate color for any value in that range. Plotly doesn't make the colorscales directly accessible in a common format. Some are ready to use: colorscale = plotly.colors.PLOTLY_SCALES["Greens"] Others are just swatches that need to be constructed into a colorscale: viridis_colors, scale = plotly.colors.convert_colors_to_same_type(plotly.colors.sequential.Viridis) colorscale = plotly.colors.make_colorscale(viridis_colors, scale=scale) :param colorscale: A plotly continuous colorscale defined with RGB string colors. :param intermed: value in the range [0, 1] :return: color in rgb string format :rtype: str """ if len(colorscale) < 1: raise ValueError("colorscale must have at least one color") hex_to_rgb = lambda c: "rgb" + str(ImageColor.getcolor(c, "RGB")) if intermed <= 0 or len(colorscale) == 1: c = colorscale[0][1] return c if c[0] != "#" else hex_to_rgb(c) if intermed >= 1: c = colorscale[-1][1] return c if c[0] != "#" else hex_to_rgb(c) for cutoff, color in colorscale: if intermed > cutoff: low_cutoff, low_color = cutoff, color else: high_cutoff, high_color = cutoff, color break if (low_color[0] == "#") or (high_color[0] == "#"): # some color scale names (such as cividis) returns: # [[loc1, "hex1"], [loc2, "hex2"], ...] low_color = hex_to_rgb(low_color) high_color = hex_to_rgb(high_color) return plotly.colors.find_intermediate_color( lowcolor=low_color, highcolor=high_color, intermed=((intermed - low_cutoff) / (high_cutoff - low_cutoff)), colortype="rgb", ) At this point, all you have to do is: get_color("phase", 0.5) # 'rgb(123.99999999999999, 112.00000000000001, 236.0)' import numpy as np get_color("phase", np.linspace(0, 1, 256)) # ['rgb(167, 119, 12)', # 'rgb(168.2941176470588, 118.0078431372549, 13.68235294117647)', # ... Edit: improvements to deal with special cases.
12
9
62,725,822
2020-7-4
https://stackoverflow.com/questions/62725822/why-does-a-type-hint-float-accept-int-while-it-is-not-even-a-subclass
On the one hand, I have learned that numbers that can be int or float should be type annotated as float (sources: PEP 484 Type Hints and this stackoverflow question): def add(a: float, b: float): return a + b On the other hand, an int is not an instance of float: issubclass(int, float) returns False isinstance(42, float) returns False I would thus have expected Union[int, float] to be the correct annotation for this use case. Questions: What is the reason for that counter-intuitive behaviour? Does type hinting follow different mechanics than class comparisons (for instance in some case a "lossless casting" rule or so)? Are int/float a special case in type annotations? Are there other examples like this? Is there any linter that would warn me about Union[float, int] if this is an unintended use?
Are int/float a special case in type annotations? float is a special case. int is not. PEP 484 says, in the paragraph below the one referenced by the link in your question: when an argument is annotated as having type float, an argument of type int is acceptable; So accepting int where float is annotated is explicitly a special case, independent of the way annotations generally deal with a class hierarchy. Are there other examples like this? Yes, there's at least one other special case. In that same paragraph PEP 484 goes on to say: for an argument annotated as having type complex, arguments of type float or int are acceptable. Is there any linter that would warn me about Union[float, int] if this is an unintended use? Union[float, int] is perfectly fine. The special treatment of a float annotation is just a convenience (PEP 484 calls it a "shortcut") to allow people to avoid writing out the long-winded Union[float, int] annotation, because arguments that can be a float or an int are very common.
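A small illustration of that numeric-tower shortcut (my own sketch, not taken from the answer) - a type checker such as mypy accepts the following without complaint, while the reverse direction is rejected:

def add(a: float, b: float) -> float:
    return a + b

def scale(z: complex, factor: float) -> complex:
    return z * factor

add(1, 2)      # OK: int accepted where float is annotated
add(1.5, 2)    # OK: mixing float and int
scale(3, 2.0)  # OK: int accepted where complex is annotated

def halve(n: int) -> int:
    return n // 2

# halve(1.5)   # error: float is NOT accepted where int is annotated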
14
14
62,759,863
2020-7-6
https://stackoverflow.com/questions/62759863/how-to-use-pyav-or-opencv-to-decode-a-live-stream-of-raw-h-264-data
The data was received by socket ,with no more shell , they are pure I P B frames begin with NAL Header(something like 00 00 00 01). I am now using pyav to decode the frames ,but i can only decode the data after the second pps info(in key frame) was received(so the chunk of data I send to my decode thread can begin with pps and sps ), otherwise the decode() or demux() will return error "non-existing PPS 0 referenced decode_slice_header error" . I want to feed data to a sustaining decoder which can remember the previous P frame , so after feeding one B frame, the decoder return a decoded video frame. Or someform of IO that can be opened as container and keep writing data into it by another thread. Here is my key code: #read thread... read until get a key frame, then make a new io.BytesIO() to store the new data. rawFrames = io.BytesIO() while flag_get_keyFrame:() .... content= socket.recv(2048) rawFrames.write(content) .... #decode thread... decode content between two key frames .... rawFrames.seek(0) container = av.open(rawFrames) for packet in container.demux(): for frame in packet.decode(): self.frames.append(frame) .... My code will play the video but with a 3~4 seconds delay. So I am not putting all of it here, because I know it's not actually working for what I want to achieve. I want to play the video after receiving the first key frame and decode the following frames right after receiving them . Pyav opencv ffmpeg or something else ,how can I achieve my goal?
After hours of finding an answer for this as well. I figure this out myself. For single thread, you can do the following: rawData = io.BytesIO() container = av.open(rawData, format="h264", mode='r') cur_pos = 0 while True: data = await websocket.recv() rawData.write(data) rawData.seek(cur_pos) for packet in container.demux(): if packet.size == 0: continue cur_pos += packet.size for frame in packet.decode(): self.frames.append(frame) That is the basic idea. I have worked out a generic version that has receiving thread and decoding thread separated. The code will also skip frames if the CPU does not keep up with the decoding speed and will start decoding from the next key frame (so you will not have the teared green screen effect). Here is the full version of the code: import asyncio import av import cv2 import io from multiprocessing import Process, Queue, Event import time import websockets def display_frame(frame, start_time, pts_offset, frame_rate): if frame.pts is not None: play_time = (frame.pts - pts_offset) * frame.time_base.numerator / frame.time_base.denominator if start_time is not None: current_time = time.time() - start_time time_diff = play_time - current_time if time_diff > 1 / frame_rate: return False if time_diff > 0: time.sleep(time_diff) img = frame.to_ndarray(format='bgr24') cv2.imshow('Video', img) return True def get_pts(frame): return frame.pts def render(terminated, data_queue): rawData = io.BytesIO() cur_pos = 0 frames_buffer = [] start_time = None pts_offset = None got_key_frame = False while not terminated.is_set(): try: data = data_queue.get_nowait() except: time.sleep(0.01) continue rawData.write(data) rawData.seek(cur_pos) if cur_pos == 0: container = av.open(rawData, mode='r') original_codec_ctx = container.streams.video[0].codec_context codec = av.codec.CodecContext.create(original_codec_ctx.name, 'r') cur_pos += len(data) dts = None for packet in container.demux(): if packet.size == 0: continue dts = packet.dts if pts_offset is None: pts_offset = packet.pts if not got_key_frame and packet.is_keyframe: got_key_frame = True if data_queue.qsize() > 8 and not packet.is_keyframe: got_key_frame = False continue if not got_key_frame: continue frames = codec.decode(packet) if start_time is None: start_time = time.time() frames_buffer += frames frames_buffer.sort(key=get_pts) for frame in frames_buffer: if display_frame(frame, start_time, pts_offset, codec.framerate): frames_buffer.remove(frame) if cv2.waitKey(1) & 0xFF == ord('q'): break if dts is not None: container.seek(25000) rawData.seek(cur_pos) if cv2.waitKey(1) & 0xFF == ord('q'): break terminated.set() cv2.destroyAllWindows() async def receive_encoded_video(websocket, path): data_queue = Queue() terminated = Event() p = Process( target=render, args=(terminated, data_queue) ) p.start() while not terminated.is_set(): try: data = await websocket.recv() except: break data_queue.put(data) terminated.set()
8
11
62,678,765
2020-7-1
https://stackoverflow.com/questions/62678765/finally-always-runs-just-before-the-return-in-try-block-then-why-update-in-fina
Finally block runs just before the return statement in the try block, as shown in the below example - returns False instead of True: >>> def bool_return(): ... try: ... return True ... finally: ... return False ... >>> bool_return() False Similarly, the following code returns value set in the Finally block: >>> def num_return(): ... try: ... x=100 ... return x ... finally: ... x=90 ... return x ... >>> num_return() 90 However, for variable assignment without return statement in the finally block, why does value of variable updated by the finally block not get returned by the try block? Is the variable from finally block scoped locally in the finally block? Or is the return value from the try block held in memory buffer and unaffected by assignment in finally block? In the below example, why is the output 100 instead of 90? >>> def num_return(): ... try: ... x=100 ... return x ... finally: ... x=90 ... >>> num_return() 100 Similarly the following example: In [1]: def num_return(): ...: try: ...: x=[100] ...: return x ...: finally: ...: x[0] = 90 ...: In [2]: num_return() Out[2]: [90] In [3]: def num_return(): ...: try: ...: x=[100] ...: return x[0] ...: finally: ...: x[0] = 90 ...: In [4]: num_return() Out[4]: 100
I think the problem you have is more related to value assignment than what try and finally do. I suggest to read Facts and myths about Python names and values. When you return a value, it just like assigning the value to a variable, result for example, and finally always execute to reassign the value. Then, your example code may be represent as: # try result = True # return # finally result = False # return (reassign value) print(result) # Output: False # try x = 100 result = x # return # finally x = 90 result = x # return (reassign value) print(result) # Output: 90 # try x = 100 result = x # return # finally x = 90 # no return so result not updated print(result) # Output: 100 print(x) # Output: 90 (x is changed actually) # try x = [100] result = x # return the list (result refer to a list and list is mutable) # finally x[0] = 90 # changing the list in-place so it affects the result print(result) # Output: [90] # try x = [100] result = x[0] # return the integer (result refer to the integer) # finally # changing the list in-place which have no effect to result unless reassign value by return x[0] x[0] = 90 print(result) # Output: 100 print(x) # Output: [90] (x is changed actually)
23
1
62,709,815
2020-7-3
https://stackoverflow.com/questions/62709815/clienterror-an-error-occurred-internalfailure-when-calling-the-publish-operat
I am simply trying to publish to an SNS topic using a lambda function. The function code as follows, with ARN being the actual SNS topic ARN: import boto3 print('Loading function') def lambda_handler(event, context): client = boto3.client('sns') response = client.publish( TargetArn='ARN', Message="Test", ) return response The function execution role as access to SNS. In fact I even gave SNS full access. But I keep getting the error: { "errorMessage": "An error occurred (InternalFailure) when calling the Publish operation (reached max retries: 4): Unknown", "errorType": "ClientError", "stackTrace": [ " File \"/var/task/lambda_function.py\", line 6, in lambda_handler\n response = client.publish(\n", " File \"/var/runtime/botocore/client.py\", line 316, in _api_call\n return self._make_api_call(operation_name, kwargs)\n", " File \"/var/runtime/botocore/client.py\", line 626, in _make_api_call\n raise error_class(parsed_response, operation_name)\n" ] } I do not find any access denied errors in cloudtrail either. Any idea on what is the issue here ? Edit: Its my bad, I used the subscription ARN instead of the topic ARN causing this issue.
In case anyone is facing this issue, make sure you use the correct ARN - use the ARN of the topic instead of the subscription ARN.
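For anyone unsure of the difference, here is a hedged sketch (all ARNs below are placeholders, not real resources): a topic ARN ends with the topic name, while a subscription ARN has an extra id segment appended and is not accepted by publish:

import boto3

client = boto3.client('sns')

# Works: a *topic* ARN (region, account id and topic name are placeholders)
client.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:my-topic',
    Message='Test',
)

# Fails: a *subscription* ARN - note the extra id segment at the end
# client.publish(
#     TopicArn='arn:aws:sns:us-east-1:123456789012:my-topic:1a2b3c4d-0000-0000-0000-000000000000',
#     Message='Test',
# )

TargetArn, as used in the question, also works as long as the value passed is the topic ARN.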
7
6
62,748,241
2020-7-6
https://stackoverflow.com/questions/62748241/check-if-datetime-object-in-pandas-has-a-timezone
I'm importing data into pandas and want to remove any timezones – if they're present in the data. If the data has a time zone, the following code works successfully: col = "my_date_column" df[col] = pd.to_datetime(df[col]).dt.tz_localize(None) # We don't want timezones... If the data does not contain a timezone, I'd like to use the following code: df[col] = pd.to_datetime(df[col]) My issue is that I'm not sure how to test for timezone in the datetime object / series.
Assuming you have a column of type datetime, you can check the tzinfo of each timestamp in the column. It's basically described here (although this is not specific to pytz). Ex: import pandas as pd # example series: s = pd.Series([ pd.Timestamp("2020-06-06").tz_localize("Europe/Berlin"), # tzinfo defined pd.Timestamp("2020-06-07") # tzinfo is None ]) # s # 0 2020-06-06 00:00:00+02:00 # 1 2020-06-07 00:00:00 # dtype: object # now find a mask which is True where the timestamp has a timezone: has_tz = s.apply(lambda t: t.tzinfo is not None) # has_tz # 0 True # 1 False # dtype: bool
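Tying this back to the goal in the question, here is a minimal sketch (my addition; it assumes the parsed column ends up with a uniform datetime dtype rather than mixed offsets) that strips the timezone only when one is present:

import pandas as pd

def to_naive_datetime(df, col):
    # parse first, then drop the timezone only if the column is tz-aware
    parsed = pd.to_datetime(df[col])
    if parsed.dt.tz is not None:
        parsed = parsed.dt.tz_localize(None)
    return parsed

# usage with the names from the question
# df["my_date_column"] = to_naive_datetime(df, "my_date_column")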
8
8
62,681,257
2020-7-1
https://stackoverflow.com/questions/62681257/tf-keras-model-predict-is-slower-than-straight-numpy
Thanks, everyone for trying to help me understand the issue below. I have updated the question and produced a CPU-only run and GPU-only of the run. In general, it also appears that in either case a direct numpy calculation hundreds of times faster than the model. predict(). Hopefully, this clarifies that this does not appear to be a CPU vs GPU issue (if it is, I would love an explanation). Let's create a trained model with keras. import tensorflow as tf (X,Y),(Xt,Yt) = tf.keras.datasets.mnist.load_data() model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(1000,'relu'), tf.keras.layers.Dense(100,'relu'), tf.keras.layers.Dense(10,'softmax'), ]) model.compile('adam','sparse_categorical_crossentropy') model.fit(X,Y,epochs=20,batch_size=1024) Now let's re-create the model.predict function using numpy. import numpy as np W = model.get_weights() def predict(X): X = X.reshape((X.shape[0],-1)) #Flatten X = X @ W[0] + W[1] #Dense X[X<0] = 0 #Relu X = X @ W[2] + W[3] #Dense X[X<0] = 0 #Relu X = X @ W[4] + W[5] #Dense X = np.exp(X)/np.exp(X).sum(1)[...,None] #Softmax return X We can easily verify these are the same function (module machine errors in implementation). print(model.predict(X[:100]).argmax(1)) print(predict(X[:100]).argmax(1)) We can also test out how fast these functions run. Using ipython: %timeit model.predict(X[:10]).argmax(1) # 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # 1000 loops takes 356 µs I get that predict runs about 10,000 times faster than model. predict at low batches and reduces to around 100 times faster at larger batches. Regardless, why is predict so much faster? In fact, predict isn't even optimized, we could use numba, or even straight re-write predict in C code and compile it. Thinking in terms of deployment purposes, why would manually extracting the weights from the model and re-writing the function be thousands of times faster than what keras does internally? This also means that writing a script to utilize a .h5 file or similar, maybe much slower than manually re-writing the prediction function. In general, is this true? Ipython Output (CPU): Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help. 
PyDev console: using IPython 7.19.0 Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] on win32 import os os.environ["CUDA_VISIBLE_DEVICES"]="-1" import tensorflow as tf (X,Y),(Xt,Yt) = tf.keras.datasets.mnist.load_data() model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(1000,'relu'), tf.keras.layers.Dense(100,'relu'), tf.keras.layers.Dense(10,'softmax'), ]) model.compile('adam','sparse_categorical_crossentropy') model.fit(X,Y,epochs=20,batch_size=1024) 2021-04-19 15:10:58.323137: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll 2021-04-19 15:11:01.990590: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll 2021-04-19 15:11:02.039285: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected 2021-04-19 15:11:02.042553: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-G0U8S3P 2021-04-19 15:11:02.043134: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-G0U8S3P 2021-04-19 15:11:02.128834: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:127] None of the MLIR optimization passes are enabled (registered 2) Epoch 1/20 59/59 [==============================] - 4s 60ms/step - loss: 35.3708 Epoch 2/20 59/59 [==============================] - 3s 58ms/step - loss: 0.8671 Epoch 3/20 59/59 [==============================] - 3s 56ms/step - loss: 0.5641 Epoch 4/20 59/59 [==============================] - 3s 56ms/step - loss: 0.4359 Epoch 5/20 59/59 [==============================] - 3s 56ms/step - loss: 0.3447 Epoch 6/20 59/59 [==============================] - 3s 56ms/step - loss: 0.2891 Epoch 7/20 59/59 [==============================] - 3s 56ms/step - loss: 0.2371 Epoch 8/20 59/59 [==============================] - 3s 57ms/step - loss: 0.1977 Epoch 9/20 59/59 [==============================] - 3s 57ms/step - loss: 0.1713 Epoch 10/20 59/59 [==============================] - 3s 57ms/step - loss: 0.1381 Epoch 11/20 59/59 [==============================] - 4s 61ms/step - loss: 0.1203 Epoch 12/20 59/59 [==============================] - 3s 57ms/step - loss: 0.1095 Epoch 13/20 59/59 [==============================] - 3s 56ms/step - loss: 0.0877 Epoch 14/20 59/59 [==============================] - 3s 57ms/step - loss: 0.0793 Epoch 15/20 59/59 [==============================] - 3s 56ms/step - loss: 0.0727 Epoch 16/20 59/59 [==============================] - 3s 56ms/step - loss: 0.0702 Epoch 17/20 59/59 [==============================] - 3s 56ms/step - loss: 0.0701 Epoch 18/20 59/59 [==============================] - 3s 57ms/step - loss: 0.0631 Epoch 19/20 59/59 [==============================] - 3s 56ms/step - loss: 0.0539 Epoch 20/20 59/59 [==============================] - 3s 58ms/step - loss: 0.0493 Out[3]: <tensorflow.python.keras.callbacks.History at 0x143069fdf40> import numpy as np W = model.get_weights() def predict(X): X = X.reshape((X.shape[0],-1)) #Flatten X = X @ W[0] + W[1] #Dense X[X<0] = 0 #Relu X = X @ W[2] + W[3] #Dense X[X<0] = 0 #Relu X = X @ W[4] + W[5] #Dense X = np.exp(X)/np.exp(X).sum(1)[...,None] #Softmax return X %timeit model.predict(X[:10]).argmax(1) # 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # 1000 loops takes 356 µs 52.8 ms ± 2.13 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) 640 µs ± 10.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) Ipython Output (GPU): Python 3.7.7 (default, Mar 26 2020, 15:48:22) Type 'copyright', 'credits' or 'license' for more information IPython 7.4.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import tensorflow as tf ...: ...: (X,Y),(Xt,Yt) = tf.keras.datasets.mnist.load_data() ...: ...: model = tf.keras.models.Sequential([ ...: tf.keras.layers.Flatten(), ...: tf.keras.layers.Dense(1000,'relu'), ...: tf.keras.layers.Dense(100,'relu'), ...: tf.keras.layers.Dense(10,'softmax'), ...: ]) ...: model.compile('adam','sparse_categorical_crossentropy') ...: model.fit(X,Y,epochs=20,batch_size=1024) 2020-07-01 15:50:46.008518: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-07-01 15:50:46.054495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545 pciBusID: 0000:05:00.0 2020-07-01 15:50:46.059582: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0 2020-07-01 15:50:46.114562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 2020-07-01 15:50:46.142058: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0 2020-07-01 15:50:46.152899: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0 2020-07-01 15:50:46.217725: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0 2020-07-01 15:50:46.260758: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0 2020-07-01 15:50:46.374328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-01 15:50:46.376747: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0 2020-07-01 15:50:46.377688: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX FMA 2020-07-01 15:50:46.433422: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 4018875000 Hz 2020-07-01 15:50:46.434383: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563e4d0d71c0 executing computations on platform Host. Devices: 2020-07-01 15:50:46.435119: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version 2020-07-01 15:50:46.596077: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563e4a9379f0 executing computations on platform CUDA. 
Devices: 2020-07-01 15:50:46.596119: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5 2020-07-01 15:50:46.597894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545 pciBusID: 0000:05:00.0 2020-07-01 15:50:46.597961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0 2020-07-01 15:50:46.597988: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 2020-07-01 15:50:46.598014: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0 2020-07-01 15:50:46.598040: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0 2020-07-01 15:50:46.598065: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0 2020-07-01 15:50:46.598090: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0 2020-07-01 15:50:46.598115: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-01 15:50:46.599766: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0 2020-07-01 15:50:46.600611: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0 2020-07-01 15:50:46.603713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-01 15:50:46.603751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 2020-07-01 15:50:46.603763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N 2020-07-01 15:50:46.605917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10311 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5) Train on 60000 samples Epoch 1/20 2020-07-01 15:50:49.995091: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 60000/60000 [==============================] - 2s 26us/sample - loss: 9.9370 Epoch 2/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.6094 Epoch 3/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.3672 Epoch 4/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.2720 Epoch 5/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.2196 Epoch 6/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.1673 Epoch 7/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.1367 Epoch 8/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.1082 Epoch 9/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0895 Epoch 10/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0781 Epoch 11/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0666 Epoch 12/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0537 Epoch 13/20 60000/60000 [==============================] - 0s 4us/sample - 
loss: 0.0459 Epoch 14/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0412 Epoch 15/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0401 Epoch 16/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0318 Epoch 17/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0275 Epoch 18/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0237 Epoch 19/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0212 Epoch 20/20 60000/60000 [==============================] - 0s 4us/sample - loss: 0.0199 Out[1]: <tensorflow.python.keras.callbacks.History at 0x7f7c9000b550> In [2]: import numpy as np ...: ...: W = model.get_weights() ...: ...: def predict(X): ...: X = X.reshape((X.shape[0],-1)) #Flatten ...: X = X @ W[0] + W[1] #Dense ...: X[X<0] = 0 #Relu ...: X = X @ W[2] + W[3] #Dense ...: X[X<0] = 0 #Relu ...: X = X @ W[4] + W[5] #Dense ...: X = np.exp(X)/np.exp(X).sum(1)[...,None] #Softmax ...: return X ...: In [3]: print(model.predict(X[:100]).argmax(1)) ...: print(predict(X[:100]).argmax(1)) [5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 3 8 6 9 0 5 6 0 7 6 1 8 7 9 3 9 8 5 9 3 3 0 7 4 9 8 0 9 4 1 4 4 6 0 4 5 6 1 0 0 1 7 1 6 3 0 2 1 1 7 5 0 2 6 7 8 3 9 0 4 6 7 4 6 8 0 7 8 3 1] /home/bobbyocean/anaconda3/bin/ipython3:12: RuntimeWarning: overflow encountered in exp /home/bobbyocean/anaconda3/bin/ipython3:12: RuntimeWarning: invalid value encountered in true_divide [5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 3 8 6 9 0 5 6 0 7 6 1 8 7 9 3 9 8 5 9 3 3 0 7 4 9 8 0 9 4 1 4 4 6 0 4 5 6 1 0 0 1 7 1 6 3 0 2 1 1 7 5 0 2 6 7 8 3 9 0 4 6 7 4 6 8 0 7 8 3 1] In [4]: %timeit model.predict(X[:10]).argmax(1) 37.7 ms ± 806 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [5]: %timeit predict(X[:10]).argmax(1) 361 µs ± 13.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
We observe that the main issue is the cause of the Eager Execution mode. We give shallow look at your code and corresponding results as per CPU and GPU bases. It is true that numpy doesn't operate on GPU, so unlike tf-gpu, it doesn't encounter any data shifting overhead. But also it's quite noticeable how much fast computation is done by your defined predict method with np compare to model. predict with tf. keras, whereas the input test set is 10 samples only. However, We're not giving any deep analysis, like one piece of art here you may love to read. My Setup is as follows. I'm using the Colab environment and checking with both CPU and GPU mode. TensorFlow 1.15.2 Keras 2.3.1 Numpy 1.19.5 TensorFlow 2.4.1 Keras 2.4.0 Numpy 1.19.5 TF 1.15.2 - CPU %tensorflow_version 1.x import os os.environ["CUDA_VISIBLE_DEVICES"]="-1" import tensorflow as tf from tensorflow.python.client import device_lib print(tf.__version__) print('A: ', tf.test.is_built_with_cuda) print('B: ', tf.test.gpu_device_name()) local_device_protos = device_lib.list_local_devices() ([x.name for x in local_device_protos if x.device_type == 'GPU'], [x.name for x in local_device_protos if x.device_type == 'CPU']) TensorFlow 1.x selected. 1.15.2 A: <function is_built_with_cuda at 0x7f122d58dcb0> B: ([], ['/device:CPU:0']) Now, running your code. import tensorflow as tf import keras print(tf.executing_eagerly()) # False (X,Y),(Xt,Yt) = keras.datasets.mnist.load_data() model = keras.models.Sequential([]) model.compile model.fit %timeit model.predict(X[:10]).argmax(1) # yours: 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # yours: 1000 loops takes 356 µs 1000 loops, best of 5: 1.07 ms per loop 1000 loops, best of 5: 1.48 ms per loop We can see that the execution times are comparable with old keras. Now, let's test with GPU also. TF 1.15.2 - GPU %tensorflow_version 1.x import os os.environ["CUDA_VISIBLE_DEVICES"]="0" import tensorflow as tf from tensorflow.python.client import device_lib print(tf.__version__) print('A: ', tf.test.is_built_with_cuda) print('B: ', tf.test.gpu_device_name()) local_device_protos = device_lib.list_local_devices() ([x.name for x in local_device_protos if x.device_type == 'GPU'], [x.name for x in local_device_protos if x.device_type == 'CPU']) 1.15.2 A: <function is_built_with_cuda at 0x7f0b5ad46830> B: /device:GPU:0 (['/device:GPU:0'], ['/device:CPU:0']) ... ... %timeit model.predict(X[:10]).argmax(1) # yours: 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # yours: 1000 loops takes 356 µs 1000 loops, best of 5: 1.02 ms per loop 1000 loops, best of 5: 1.44 ms per loop Now, the execution time is also comparable here with old keras and no eager mode. Let's now see the new tf. keras with eager mode first and then we observe without eager mode. TF 2.4.1 - CPU Eagerly import os os.environ["CUDA_VISIBLE_DEVICES"]="-1" import tensorflow as tf from tensorflow.python.client import device_lib print(tf.__version__) print('A: ', tf.test.is_built_with_cuda) print('B: ', tf.test.gpu_device_name()) local_device_protos = device_lib.list_local_devices() ([x.name for x in local_device_protos if x.device_type == 'GPU'], [x.name for x in local_device_protos if x.device_type == 'CPU']) 2.4.1 A: <function is_built_with_cuda at 0x7fed85de3560> B: ([], ['/device:CPU:0']) Now, running the code with eager mode. 
import tensorflow as tf import keras print(tf.executing_eagerly()) # True (X,Y),(Xt,Yt) = keras.datasets.mnist.load_data() model = keras.models.Sequential([ ]) model.compile model.fit %timeit model.predict(X[:10]).argmax(1) # yours: 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # yours: 1000 loops takes 356 µs 10 loops, best of 5: 28 ms per loop 1000 loops, best of 5: 1.73 ms per loop Disable Eagerly Now, if we disable the eager mode and run the same code as follows then we will get: import tensorflow as tf import keras # # Disables eager execution tf.compat.v1.disable_eager_execution() # or, # Disables eager execution of tf.functions. # tf.config.run_functions_eagerly(False) print(tf.executing_eagerly()) False (X,Y),(Xt,Yt) = keras.datasets.mnist.load_data() model = keras.models.Sequential([]) model.compile model.fit %timeit model.predict(X[:10]).argmax(1) # yours: 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # yours: 1000 loops takes 356 µs 1000 loops, best of 5: 1.37 ms per loop 1000 loops, best of 5: 1.57 ms per loop Now, we can see the execution times are comparable for disabling the eager mode in new tf. keras. Now, let's test with GPU mode also. TF 2.4.1 - GPU Eagerly import os os.environ["CUDA_VISIBLE_DEVICES"]="0" import tensorflow as tf from tensorflow.python.client import device_lib print(tf.__version__) print('A: ', tf.test.is_built_with_cuda) print('B: ', tf.test.gpu_device_name()) local_device_protos = device_lib.list_local_devices() ([x.name for x in local_device_protos if x.device_type == 'GPU'], [x.name for x in local_device_protos if x.device_type == 'CPU']) 2.4.1 A: <function is_built_with_cuda at 0x7f16ad88f680> B: /device:GPU:0 (['/device:GPU:0'], ['/device:CPU:0']) import tensorflow as tf import keras print(tf.executing_eagerly()) # True (X,Y),(Xt,Yt) = keras.datasets.mnist.load_data() model = keras.models.Sequential([ ]) model.compile model.fit %timeit model.predict(X[:10]).argmax(1) # yours: 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # yours: 1000 loops takes 356 µs 10 loops, best of 5: 26.3 ms per loop 1000 loops, best of 5: 1.48 ms per loop Disable Eagerly And lastly again, if we disable the eager mode and run the same code as follows, we will get: # Disables eager execution tf.compat.v1.disable_eager_execution() # or, # Disables eager execution of tf.functions. # tf.config.run_functions_eagerly(False) print(tf.executing_eagerly()) # False (X,Y),(Xt,Yt) = keras.datasets.mnist.load_data() model = keras.models.Sequential([ ]) model.compile model.fit %timeit model.predict(X[:10]).argmax(1) # yours: 10 loops takes 37.7 ms %timeit predict(X[:10]).argmax(1) # yours: 1000 loops takes 356 µs 1000 loops, best of 5: 1.12 ms per loop 1000 loops, best of 5: 1.45 ms per loop And like before, the execution times are comparable with the non-eager mode in new tf. keras. That's why, the Eager mode is the root cause of the slower performance of tf. keras than straight numpy.
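As a practical footnote (my own suggestion, not part of the answer above): in TF 2.x, when you only need low-latency predictions on small batches, calling the model directly, optionally wrapped in a tf.function, avoids most of the per-call overhead of model.predict while producing the same numbers:

import tensorflow as tf

# `model` and `X` are the objects from the question
x_small = X[:10].astype("float32")

# direct call instead of model.predict()
preds = model(x_small, training=False)

# or a compiled forward pass (traced once, reused afterwards)
fast_infer = tf.function(lambda inputs: model(inputs, training=False))
preds = fast_infer(tf.constant(x_small))
labels = tf.argmax(preds, axis=1)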
13
18
62,732,358
2020-7-4
https://stackoverflow.com/questions/62732358/how-to-find-which-dll-failed-in-importerror-dll-load-failed-while-importing-i
Context Are there commands to enhance the error message that is received such that python displays which .dll file it cannot find? For error: python test_cv2.py Traceback (most recent call last): File "test_cv2.py", line 1, in <module> import cv2 File "E:\Anaconda3\envs\py38\lib\site-packages\cv2\__init__.py", line 5, in <module> from .cv2 import * ImportError: DLL load failed while importing cv2: The specified module could not be found. (py38) E:\somepath> I would like to determine which .dll file is actually not being found. To do so, I downloaded and run DependenciesGui.exe from this repository.. Next I fed the DependenciesGui.exe the cv2.cp38-win_amd64.pyd which indicates api-ms-win-core-wow64-l1-1-1.dll is missing, amongst others. I currently do not have a way to verify that the .dll files that are reported missing by dependenciesGUI.exe are also the files that python 3.8 is not finding in the anaconda environment. A way to implicitly verify that the python 3.8 missing .dll files are the same as the same files reported missing by dependenciesGUI.exe would be to download and paste all the missing .dll files into ../system32/. Followed by inspecting if the error message dissapears/changes. However one of the .dll files reported missing is: api-ms-win-core-wow64-l1-1-1.dll which I am not yet able to find (online). Also I tried to cheat to copy and rename api-ms-win-core-wow64-l1-1-0.dll to api-ms-win-core-wow64-l1-1-1.dll but that (luckily) doesn't enable the dpendenciesGUI.exe to recognize the .dll file as found. Question How can I make the error message/traceback of python explicitly mention which .dll file is (the first .dll file that is) not found? Note This is not about solving the xy-problem of installing opencv.
Short answer: No. Although it is probably not completely impossible, it would require binding a tool like DependenciesGui in Python so that it could be called in that given context (namely, taking into account the actual DLL search path used by Python and the dynamic libraries that are already loaded). It would be quite a lot of work for little gain. Indeed, the default search path in Python >= 3.8 on Windows should be very similar to the one used by DependenciesGui, so the missing DLLs should be the same. Personally, I develop pre-compiled binary distributions for Python, and so far DependenciesGui has been enough to identify the missing libraries at Python import.
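That said, a small diagnostic sketch run from within the interpreter can sometimes narrow things down (my own suggestion, not part of the answer; the DLL names are examples and should be replaced with the ones from your dependency-walker report):

import ctypes
import os

# names flagged as missing by DependenciesGui - adjust to your own report
candidates = [
    "api-ms-win-core-wow64-l1-1-1.dll",
    "concrt140.dll",
]

# Python 3.8+ no longer uses PATH to resolve extension-module dependencies;
# extra directories have to be registered explicitly if you rely on them:
# os.add_dll_directory(r"C:\path\to\extra\dlls")

for name in candidates:
    try:
        ctypes.WinDLL(name)
        print("OK     ", name)
    except OSError as exc:
        print("MISSING", name, "->", exc)

Loading each candidate one by one lets Windows' own error message point at the first truly unresolvable library, which is close to what the question is after.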
18
0
62,681,388
2020-7-1
https://stackoverflow.com/questions/62681388/residual-plot-for-residual-vs-predicted-value-in-python
I have run a KNN model. Now I want to plot the residual vs predicted value plot. Every example from different websites shows that I have to first run a linear regression model. But I couldn't understand how to do this. Can anyone help? Thanks in advance. Here is my model: train, validate, test = np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))]) x_train = train.iloc[:,[2,5]].values y_train = train.iloc[:,4].values x_validate = validate.iloc[:,[2,5]].values y_validate = validate.iloc[:,4].values x_test = test.iloc[:,[2,5]].values y_test = test.iloc[:,4].values clf=neighbors.KNeighborsRegressor(n_neighbors = 6) clf.fit(x_train, y_train) y_pred = clf.predict(x_validate)
Residuals are nothing but how much your predicted values differ from the actual values, i.e. they are calculated as actual values minus predicted values. In your case y_pred was predicted from x_validate, so it's residuals = y_validate - y_pred. Now for the plot, put the predicted values on the x-axis and the residuals on the y-axis: import matplotlib.pyplot as plt plt.scatter(y_pred, residuals) plt.show()
12
9
62,784,718
2020-7-7
https://stackoverflow.com/questions/62784718/how-does-the-value-of-the-name-parameter-to-setuptools-setup-affect-the-results
I recently received a bundle of Python code, written by a graduate student at an academic lab, and consisting of a Python script and about half dozen single-file Python modules, used by by the script. All these files (script and modules) are on the same directory. I wanted to use pip to install this code in a virtual environment, so I tried my hand at writing a setup.py file for it, something I had not done before. I got this installation to work, and I have a vague understanding of what most of the stuff I put in the setup.py means. The one exception to this is the value to the name keyword to the setuptools.setup function. According to the documentation I found, this parameter is supposed to be the "name of the package", but I this doesn't tell me how its value ultimately matters. In other words, is this value important only to human readers, or does it actually affect either the way pip install, or the code this command installs, will work? Therefore, I had no idea what value to give to this parameter, and so I just came up with a reasonably-sounding name, but without any attempt to have it match something else in the code base. To my surprise, nothing broke! By this I mean that the pip installation completed without errors, and the installed code performed correctly in the virtual environment. I experimented a bit, and it seems that pretty much any value I came up was equally OK. For the sake of the following description, suppose I give the name parameter the value whatever. Then, the only effect this has, as far as I can tell, is that a subdirectory with the name whatever.egg-info/ gets created (by pip?) in the same directory as the setup.py file, and this subdirectory contains two files that include the string whatever in them. One of these files is whatever.egg-info/PKG-INFO, which contains the line Name: whatever The other one is whatever.egg-info/SOURCES.txt, which lists several relative paths, including some beginning with whatever.egg-info/. Maybe this was too simple a packaging problem for the value of name to matter? Q: Can someone give me a simple example in which a wrong value for setuptools.setup's name parameter would cause either pip install or the installed code to fail?
Preamble: The Python glossary defines a package as "a Python module which can contain submodules or recursively, subpackages". What setuptools and the like create is usually referred to as a distribution which can bundle one or more packages (hence the parameter setup(packages=...)). I will use this meaning for the terms package and distribution in the following text. The name parameter determines how your distribution will be identified throughout the Python ecosystem. It is not related to the actual layout of the distribution (i.e. its packages) nor to any modules defined within those packages. The documentation precisely specifies what makes a legal distribution name: The name of the distribution. The name field is the primary identifier for a distribution. A valid name consists only of ASCII letters and numbers, period, underscore and hyphen. It must start and end with a letter or number. Distribution names are limited to those which match the following regex (run with re.IGNORECASE): ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$. (History: This specification was refined in PEP 566 to be aligned with the definition according to PEP 508. Before PEP 345 loosely specified distribution names without imposing any restrictions.) In addition to the above limitations there are some other aspects to consider: When you intend to distribute your distribution via PyPI then no distinction is made between _ and -, i.e. hello_world and hello-world are considered to be the same distribution. You also need to make sure that the distribution name is not already taken on PyPI because otherwise you won't be able to upload it (if it's occupied by an abandoned project, you can attempt to claim ownership of that project in order to be able to use the name; see PEP 541 for more information). Most importantly you should make sure that the distribution name is unique within your working environment, i.e. that it doesn't conflict with other distributions' names. Suppose you have already installed the requests project in your virtual environment and you decide to name your distribution requests as well. Then installing your distribution will remove the already existing installation (i.e. the corresponding package) and you won't be able to access it anymore. Top-level package names The second bullet point above also applies to the names of the top-level packages in your distribution. Suppose you have the following distribution layout: . ├── setup.py └── testpkg └── __init__.py └── a.py The setup.py contains: from setuptools import setup setup( name='dist-a', version='1.0', packages=['testpkg'], ) __init__.py and a.py are just empty files. After installing that distribution you can access it by importing testpkg (the top-level package). Now suppose that you have a different distribution with name='dist-b' but using the same packages=['testpkg'] and providing a module b.py (instead of a.py). What happens is that the second install is performed over the already existing one, i.e. 
using the same physical directory (namely testpkg which happens to be the package used by both distributions), possibly replacing already existing modules, though both distributions are actually installed: $ pip freeze | grep dist-* dist-a @ file:///tmp/test-a dist-b @ file:///tmp/test-b $ python >>> import testpkg >>> import testpkg.a >>> import testpkg.b Now uninstalling the first distribution (dist-a) will also remove the contents of the second: $ pip uninstall dist-a $ python >>> import testpkg ModuleNotFoundError: No module named 'testpkg' Hence besides the distribution name it's also important to make sure that its top-level packages don't conflict with the ones of already installed projects. It's those top-level packages that serve as namespaces for the distribution. For that reason it's a good idea to choose a distribution name which resembles the name of the top-level package - often these are chosen to be the same.
9
6
62,743,132
2020-7-5
https://stackoverflow.com/questions/62743132/ubuntu-18-04-command-pyenv-not-found-did-you-mean
So here is my Ubuntu version: No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.2 LTS Release: 18.04 Codename: bionic I'm trying to run the following command: pyenv install 3.6.2 but I get the error: Command 'pyenv' not found, did you mean: command 'pyvenv' from deb python3-venv command 'p7env' from deb libnss3-tools I've searched and this post (Ubuntu 14.04 - Python 3.4 - pyenv: command Not Found) states Ubuntu 14.04 and below use Python 2 by default so one has to use virtualenv instead, but why does my 18.04 Ubuntu not recognize the command?
First, see if you already have curl installed on your machine using the command: $ curl --version If you don't have it, install curl using: $ sudo apt-get install curl After that, install pyenv using the command: $ curl https://pyenv.run | bash After installation, update your bashrc by adding the lines: export PATH="$HOME/.pyenv/bin:$PATH" eval "$(pyenv init -)" eval "$(pyenv virtualenv-init -)" (note $HOME rather than ~, since the tilde is not expanded inside double quotes) Finally, reload the bashrc: $ source ~/.bashrc It should work fine after that. If you installed pyenv before, look at your bashrc to confirm that you added the lines above and reload the bashrc again.
32
82
62,745,734
2020-7-5
https://stackoverflow.com/questions/62745734/mypy-declares-iobytes-incompatible-with-binaryio
Consider the following code: from io import TextIOWrapper from typing import List from zipfile import ZipFile def read_zip_lines(zippath: str, filename: str) -> List[str]: with ZipFile(zippath) as zf: with zf.open(filename) as bfp: with TextIOWrapper(bfp, 'utf-8') as fp: return fp.readlines() Running mypy v0.782 on the above code under Python 3.6.9 fails with the following error: zfopen.py:8: error: Argument 1 to "TextIOWrapper" has incompatible type "IO[bytes]"; expected "BinaryIO" However, I feel that this code should not be regarded as an error, as ZipFile.open() returns a binary filehandle, which TextIOWrapper accepts. Moreover, IO[bytes] and BinaryIO are (as far as I understand) effectively the same thing; it's just that BinaryIO is declared as a subclass of IO[bytes]. I would naïvely expect IO[bytes] to be accepted everywhere that BinaryIO is, except that's not how subclasses work, and I'm not sure how to properly make use of this subclassing when typing. Who is in error here, and how does the error get fixed? Is typeshed in error for declaring the return type of ZipFile.open() as IO[bytes] instead of BinaryIO? Is typeshed in error for declaring the type of the first argument to TextIOWrapper as BinaryIO instead of IO[bytes]? Is the typing module in error for making BinaryIO a subclass of IO[bytes] instead of an alias? Is my code in error for not performing some sort of cast on bfp? Is my thinking in error for expecting bfp to be passable to TextIOWrapper unmodified?
This shorter test case with mypy 0.782 gets the same error: binary_file = io.open('foo.bin', 'rb') text_file = io.TextIOWrapper(binary_file, encoding='utf-8', newline='') whether binary_file is explicitly declared as IO[bytes] or inferred. Fix: Use mypy 0.770 or mypy 0.790. It was a regression in mypy's typeshed (Issue 4349) and the fix is in mypy 0.790, fixing both zipfile.open() and io.open().
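If switching mypy versions is not immediately possible, one hedged workaround (my addition, not from the answer) is to cast the handle at the call site; cast() is a no-op at runtime and merely papers over the stub mismatch in 0.782:

from io import TextIOWrapper
from typing import BinaryIO, List, cast
from zipfile import ZipFile

def read_zip_lines(zippath: str, filename: str) -> List[str]:
    with ZipFile(zippath) as zf:
        with zf.open(filename) as bfp:
            # satisfy the BinaryIO parameter type declared in the 0.782 stubs
            with TextIOWrapper(cast(BinaryIO, bfp), 'utf-8') as fp:
                return fp.readlines()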
11
5
62,739,178
2020-7-5
https://stackoverflow.com/questions/62739178/django-save-multiple-versions-of-an-image
My application needs to save multiple versions of an uploaded Image. One high quality image and another one just for thumbnails use (low quality). Currently this is working most of the time but sometimes the save method simply fails and all of my Thumbnail images are getting deleted, especially then if I use the remove_cover checkbox at my form raise ValueError("The '%s' attribute has no file associated with it." % self.field.name) app | ValueError: The 'postcover_tn' attribute has no file associated with it. -> See full trace here: https://pastebin.com/hgieMGet models.py class Post(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) author = models.ForeignKey(User, on_delete=models.CASCADE) title = models.CharField() content = models.TextField(blank=False) postcover = models.ImageField( verbose_name="Post Cover", blank=True, null=True, upload_to=image_uploads, ) postcover_tn = models.ImageField( verbose_name="Post Cover Thumbnail", blank=True, null=True, upload_to=image_uploads, ) published_date = models.DateTimeField(auto_now_add=True, null=True) def save(self, *args, **kwargs): super(Post, self).save(*args, **kwargs) if self.postcover: if not (self.postcover_tn and os.path.exists(self.postcover_tn.path)): image = Image.open(self.postcover) outputIoStream = BytesIO() baseheight = 500 hpercent = baseheight / image.size[1] wsize = int(image.size[0] * hpercent) imageTemproaryResized = image.resize((wsize, baseheight)) imageTemproaryResized.save(outputIoStream, format='PNG') outputIoStream.seek(0) self.postcover = InMemoryUploadedFile(outputIoStream, 'ImageField', "%s.png" % self.postcover.name.split('.')[0], 'image/png', sys.getsizeof(outputIoStream), None) image = Image.open(self.postcover) outputIoStream = BytesIO() baseheight = 175 hpercent = baseheight / image.size[1] wsize = int(image.size[0] * hpercent) imageTemproaryResized = image.resize((wsize, baseheight)) imageTemproaryResized.save(outputIoStream, format='PNG') outputIoStream.seek(0) self.postcover_tn = InMemoryUploadedFile(outputIoStream, 'ImageField', "%s.png" % self.postcover.name.split('.')[0], 'image/png', sys.getsizeof(outputIoStream), None) elif self.postcover_tn: self.postcover_tn.delete() super(Post, self).save(*args, **kwargs) It also seems that I'm not able to properly resolve: self.postcover_tn.delete() -> Unresolved attribute reference 'delete' for class 'InMemoryUploadedFile' self.postcover_tn.path -> Unresolved attribute reference 'path' for class 'InMemoryUploadedFile' forms.py: def save(self, commit=True): instance = super(PostForm, self).save(commit=False) if self.cleaned_data.get('remove_cover'): try: os.unlink(instance.postcover.path) except OSError: pass instance.postcover = None if commit: instance.save() return instance
maybe if we look at the problem from another angle, we could solve it otherwise, out of the box. signals are very handy when it comes to handle images (add, update and delete) and below how i managed to solve your issue: in models.py: # from django.template.defaultfilters import slugify class Post(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) author = models.ForeignKey(User, on_delete=models.CASCADE) title = models.CharField() # slug = models.SlugField('slug', max_length=255, # unique=True, null=True, blank=True, # help_text='If blank, the slug will be generated automatically from the given title.' # ) content = models.TextField(blank=False) # ------------------------------------------------------------------------------------ # rename images with the current post id/pk (which is UUID) and keep the extension # for cover thumbnail we append "_thumbnail" to the name # e.g: # img/posts/77b122a3d241461b80c51adc41d719fb.jpg # img/posts/77b122a3d241461b80c51adc41d719fb_thumbnail.jpg def upload_cover(instance, filename): ext = filename.split('.')[-1] filename = '{}.{}'.format(instance.id, ext) path = 'img/posts/' return '{}{}'.format(path, filename) postcover = models.ImageField('Post Cover', upload_to=upload_cover, # callback function null=True, blank=True, help_text=_('Upload Post Cover.') ) def upload_thumbnail(instance, filename): ext = filename.split('.')[-1] filename = '{}_thumbnail.{}'.format(instance.id, ext) path = 'img/posts/' return '{}{}'.format(path, filename) postcover_tn = models.ImageField('Post Cover Thumbnail', upload_to=upload_thumbnail, # callback function null=True, blank=True, help_text=_('Upload Post Cover Thumbnail.') ) # ------------------------------------------------------------------------------------ published_date = models.DateTimeField(auto_now_add=True, null=True) def save(self, *args, **kwargs): # i moved the logic to signals # if not self.slug: # self.slug = slugify(self.title) super(Post, self).save(*args, **kwargs) create new file and rename it signals.py (near to models.py): import io import sys from PIL import Image from django.core.files.uploadedfile import InMemoryUploadedFile from django.dispatch import receiver from django.db.models.signals import pre_save, pre_delete from .models import Post # DRY def image_resized(image, h): name = image.name _image = Image.open(image) content_type = Image.MIME[_image.format] r = h / _image.size[1] # ratio w = int(_image.size[0] * r) imageTemproaryResized = _image.resize((w, h)) file = io.BytesIO() imageTemproaryResized.save(file, _image.format) file.seek(0) size = sys.getsizeof(file) return file, name, content_type, size @receiver(pre_save, sender=Post, dispatch_uid='post.save_image') def save_image(sender, instance, **kwargs): # add image (cover | thumbnail) if instance._state.adding: # postcover file, name, content_type, size = image_resized(instance.postcover, 500) instance.postcover = InMemoryUploadedFile(file, 'ImageField', name, content_type, size, None) # postcover_tn file, name, content_type, size = image_resized(instance.postcover_tn, 175) instance.postcover_tn = InMemoryUploadedFile(file, 'ImageField', name, content_type, size, None) # update image (cover | thumbnail) if not instance._state.adding: # we have 2 cases: # - replace old with new # - delete old (when 'clear' checkbox is checked) # postcover old = sender.objects.get(pk=instance.pk).postcover new = instance.postcover if (old and not new) or (old and new and old.url != new.url): old.delete(save=False) # 
postcover_tn old = sender.objects.get(pk=instance.pk).postcover_tn new = instance.postcover_tn if (old and not new) or (old and new and old.url != new.url): old.delete(save=False) @receiver(pre_delete, sender=Post, dispatch_uid='post.delete_image') def delete_image(sender, instance, **kwargs): s = sender.objects.get(pk=instance.pk) if (not s.postcover or s.postcover is not None) and (not s.postcover_tn or s.postcover_tn is not None): s.postcover.delete(False) s.postcover_tn.delete(False) in apps.py: we need to register signals in apps.py since we use decorators @receiver: from django.apps import AppConfig from django.utils.translation import ugettext_lazy as _ class BlogConfig(AppConfig): # change to the name of your app name = 'blog' # and here verbose_name = _('Blog Entries') def ready(self): from . import signals and this is the first screen shot of post admin area since the thumbnail is generated from post cover, as UI/UX good practices there's no need to show a second input file for post cover thumbnail (i kept the second image field read only in admin.py). below is the second screenshot after i uploaded the image PS: the screenshot is taken from another app that i'm working on, so there's little changes, in your case you should see Post Cover instead Featured Image Currently: img/posts/8b0be417db564c53ad06cb493029e2ca.jpg (see upload_cover() in models.py) instead Currently: img/blog/posts/featured/8b0be417db564c53ad06cb493029e2ca.jpg in admin.py # "img/posts/default.jpg" and "img/posts/default_thumbnail.jpg" are placeholders # grab to 2 image placeholders from internet and put them under "/static" folder def get_post_cover(obj): src = obj.postcover.url if obj.postcover and \ hasattr(obj.postcover, 'url') else os.path.join( settings.STATIC_URL, 'img/posts/default.jpg') return mark_safe('<img src="{}" height="500" style="border:1px solid #ccc">'.format(src)) get_post_cover.short_description = '' get_post_cover.allow_tags = True def get_post_cover_thumbnail(obj): src = obj.postcover_tn.url if obj.postcover_tn and \ hasattr(obj.postcover_tn, 'url') else os.path.join( settings.STATIC_URL, 'img/posts/default_thumbnail.jpg') return mark_safe('<img src="{}" height="175" style="border:1px solid #ccc">'.format(src)) get_post_cover_thumbnail.short_description = '' get_post_cover_thumbnail.allow_tags = True class PostAdmin(admin.ModelAdmin): list_display = ('title', .. ) fields = ( 'author', 'title', 'content', get_post_cover, get_post_cover_thumbnail, 'postcover', ) readonly_fields = (get_post_cover, get_post_cover_thumbnail) [..] and finally you don't need any delete logic in save() function in forms.py
7
11
62,767,438
2020-7-7
https://stackoverflow.com/questions/62767438/expand-1-dim-vector-by-using-taylor-series-of-log1ex-in-python
I need to non-linearly expand on each pixel value from 1 dim pixel vector with taylor series expansion of specific non-linear function (e^x or log(x) or log(1+e^x)), but my current implementation is not right to me at least based on taylor series concepts. The basic intuition behind is taking pixel array as input neurons for a CNN model where each pixel should be non-linearly expanded with taylor series expansion of non-linear function. new update 1: From my understanding from taylor series, taylor series is written for a function F of a variable x in terms of the value of the function F and it's derivatives in for another value of variable x0. In my problem, F is function of non-linear transformation of features (a.k.a, pixels), x is each pixel value, x0 is maclaurin series approximation at 0. new update 2 if we use taylor series of log(1+e^x) with approximation order of 2, each pixel value will yield two new pixel by taking first and second expansion terms of taylor series. graphic illustration Here is the graphical illustration of the above formulation: Where X is pixel array, p is approximation order of taylor series, and α is the taylor expansion coefficient. I wanted to non-linearly expand pixel vectors with taylor series expansion of non-linear function like above illustration demonstrated. My current attempt This is my current attempt which is not working correctly for pixel arrays. I was thinking about how to make the same idea applicable to pixel arrays. def taylor_func(x, approx_order=2): x_ = x[..., None] x_ = tf.tile(x_, multiples=[1, 1, approx_order+ 1]) pows = tf.range(0, approx_order + 1, dtype=tf.float32) x_p = tf.pow(x_, pows) x_p_ = x_p[..., None] return x_p_ x = Input(shape=(4,4,3)) x_new = Lambda(lambda x: taylor_func(x, max_pow))(x) my new updated attempt: x_input= Input(shape=(32, 32,3)) def maclurin_exp(x, powers=2): out= 0 for k in range(powers): out+= ((-1)**k) * (x ** (2*k)) / (math.factorial(2 * k)) return res x_input_new = Lambda(lambda x: maclurin_exp(x, max_pow))(x_input) This attempt doesn't yield what the above mathematical formulation describes. I bet I missed something while doing the expansion. Can anyone point me on how to make this correct? Any better idea? goal I wanted to take pixel vector and make non-linearly distributed or expanded with taylor series expansion of certain non-linear function. Is there any possible way to do this? any thoughts? thanks
This is a really interesting question but I can't say that I'm clear on it as of yet. So, while I have some thoughts, I might be missing the thrust of what you're looking to do. It seems like you want to develop your own activation function instead of using something like RELU or softmax. Certainly no harm there. And you gave three candidates: e^x, log(x), and log(1+e^x). Notice log(x) asymptotically approaches negative infinity as x --> 0. So, log(x) is right out. If that was intended as a check on the answers you get or was something jotted down as you were falling asleep, no worries. But if it wasn't, you should spend some time and make sure you understand the underpinnings of what you're doing because the consequences can be quite high. You indicated you were looking for a canonical answer and you get a two for one here. You get both a canonical answer and highly performant code. Consider that you're not likely to be able to write faster, more streamlined code than the folks of SciPy, Numpy, or Pandas. Or, PyPy. Or Cython for that matter. Their stuff is the standard. So don't try to compete against them by writing your own, less performant (and possibly bugged) version which you will then have to maintain as time passes. Instead, maximize your development and run times by using them. Let's take a look at the implementation of e^x in SciPy and give you some code to work with. I know you don't need a graph for what you're doing at this stage but they're pretty and can help you understand how the Taylor (or Maclaurin, aka Euler-Maclaurin) approximations will work as the order of the approximation changes. It just so happens that SciPy has Taylor approximation built-in. import scipy import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import approximate_taylor_polynomial x = np.linspace(-10.0, 10.0, num=100) plt.plot(x, np.exp(x), label="e^x", color = 'black') for degree in np.arange(1, 4, step=1): e_to_the_x_taylor = approximate_taylor_polynomial(np.exp, 0, degree, 1, order=degree + 2) plt.plot(x, e_to_the_x_taylor(x), label=f"degree={degree}") plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.0, shadow=True) plt.tight_layout() plt.axis([-10, 10, -10, 10]) plt.show() That produces this: But let's say you're good with 'the maths', so to speak, and are willing to go with something slightly slower if it's more 'mathy' as in it handles symbolic notation well. For that, let me suggest SymPy. And with that in mind here is a bit of SymPy code with a graph because, well, it looks good AND because we need to go back and hit another point again.
from sympy import series, Symbol, log, E from sympy.functions import exp from sympy.plotting import plot import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = 13,10 plt.rcParams['lines.linewidth'] = 2 x = Symbol('x') def taylor(function, x0, n): """ Defines Taylor approximation of a given function function -- is our function which we want to approximate x0 -- point where to approximate n -- order of approximation """ return function.series(x,x0,n).removeO() # I get eyestain; feel free to get rid of this plt.rcParams['figure.figsize'] = 10, 8 plt.rcParams['lines.linewidth'] = 1 c = log(1 + pow(E, x)) plt = plot(c, taylor(c,0,1), taylor(c,0,2), taylor(c,0,3), taylor(c,0,4), (x,-5,5),legend=True, show=False) plt[0].line_color = 'black' plt[1].line_color = 'red' plt[2].line_color = 'orange' plt[3].line_color = 'green' plt[4].line_color = 'blue' plt.title = 'Taylor Series Expansion for log(1 +e^x)' plt.show() I think either option will get you where you need go. Ok, now for the other point. You clearly stated after a bit of revision that log(1 +e^x) was your first choice. But the others don't pass the sniff test. e^x vacillates wildly as the degree of the polynomial changes. Because of the opaqueness of algorithms and how few people can conceptually understand this stuff, Data Scientists can screw things up to a degree people can't even imagine. So make sure you're very solid on theory for this. One last thing, consider looking at the CDF of the Erlang Distribution as an activation function (assuming I'm right and you're looking to roll your own activation function as an area of research). I don't think anyone has looked at that but it strikes as promising. I think you could break out each channel of the RGB as one of the two parameters, with the other being the physical coordinate.
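To tie the answer back to the pixel-expansion goal in the question, here is a minimal sketch (my own, not from the original post) that computes the Maclaurin coefficients of log(1+e^x) with SymPy, much like the taylor() helper above, and then expands every pixel of an array into its first few series terms with NumPy. The image shape and the order of 3 are arbitrary choices for illustration.

import numpy as np
import sympy as sp

def maclaurin_coeffs(expr, sym, order):
    # Coefficients a_k such that expr ~ sum_k a_k * sym**k around sym = 0
    poly = sp.series(expr, sym, 0, order + 1).removeO()
    return [float(poly.coeff(sym, k)) for k in range(order + 1)]

x = sp.Symbol('x')
coeffs = maclaurin_coeffs(sp.log(1 + sp.exp(x)), x, 3)

def expand_pixels(pixels, coeffs):
    # pixels: (H, W, C) array -> (H, W, C, K), where slice k holds a_k * pixels**k
    powers = np.stack([pixels ** k for k in range(len(coeffs))], axis=-1)
    return powers * np.asarray(coeffs, dtype=pixels.dtype)

img = np.random.rand(4, 4, 3).astype(np.float32)
expanded = expand_pixels(img, coeffs)  # each pixel now contributes len(coeffs) terms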
10
7
62,686,305
2020-7-1
https://stackoverflow.com/questions/62686305/errorbar-in-legend-pandas-bar-plot
Is it possible to show the error bars in the legend? (Like i draw in red) They do not necessarily have to be the correct length, it is enough for me if they are indicated and recognizable. My working sample: import pandas as pd import matplotlib.pyplot as plt test = pd.DataFrame(data={'one':2000,'two':300,'three':50,'four':150}, index=['MAX']) fig, ax = plt.subplots(figsize=(5, 3), dpi=230) ax.set_ylim(-.12,.03) # barplot ax = test.loc[['MAX'],['one']].plot(position=5.5,color=['xkcd:camo green'], xerr=test.loc[['MAX'],['two']].values.T, edgecolor='black',linewidth = 0.3, error_kw=dict(lw=1, capsize=2, capthick=1),ax=ax,kind='barh',width=.025) ax = test.loc[['MAX'],['one']].plot(position=7,color=['xkcd:moss green'], xerr=test.loc[['MAX'],['three']].values.T, edgecolor='black',linewidth = 0.3, error_kw=dict(lw=1, capsize=2, capthick=1),ax=ax,kind='barh',width=.025) ax = test.loc[['MAX'],['one']].plot(position=8.5,color=['xkcd:light olive green'],xerr=test.loc[['MAX'],['four']].values.T, edgecolor='black',linewidth = 0.3, error_kw=dict(lw=1, capsize=2, capthick=1),ax=ax,kind='barh',width=.025) # Legende h0, l0 = ax.get_legend_handles_labels() l0 = [r'MAX $1$', r'MAX $2$', r'MAX $3$'] legend = plt.legend(h0, l0, borderpad=0.15,labelspacing=0.1, frameon=True, edgecolor="xkcd:black", ncol=1, loc='upper left',framealpha=1, facecolor='white') legend.get_frame().set_linewidth(0.3) cur_axes = plt.gca() cur_axes.axes.get_yaxis().set_ticklabels([]) cur_axes.axes.get_yaxis().set_ticks([]) plt.show() I tried a few ways, no one works. With Patch in legend_elements i get no lines for the errorbars, with the errorbar() function i can draw a figure with errorbars, but it semms not to work in the legend: import pandas as pd import matplotlib.pyplot as plt from matplotlib.patches import Patch from matplotlib.lines import Line2D legend_elements = [ Line2D([1,2], [5,4], color='b', lw=1, label='Line'), Patch(facecolor='orange', edgecolor='r', label='Color Patch'), matplotlib.pyplot.errorbar(3, 3, yerr=None, xerr=1, marker='s',mfc='xkcd:camo green', mec='black', ms=20, mew=2, fmt='-', ecolor="black", elinewidth=2, capsize=3, barsabove=True, lolims=False, uplims=False, xlolims=False, xuplims=False, errorevery=2, capthick=None, label="error"), ] test = pd.DataFrame(data={'one':2000,'two':300,'three':50,'four':150}, index=['MAX']) fig, ax = plt.subplots(figsize=(5, 3), dpi=230) ax.set_ylim(-.12,.03) # barplot ax = test.loc[['MAX'],['one']].plot(position=5.5,color=['xkcd:camo green'], xerr=test.loc[['MAX'],['two']].values.T, edgecolor='black',linewidth = 0.3, error_kw=dict(lw=1, capsize=2, capthick=1),ax=ax,kind='barh',width=.025) ax = test.loc[['MAX'],['one']].plot(position=7,color=['xkcd:moss green'], xerr=test.loc[['MAX'],['three']].values.T, edgecolor='black',linewidth = 0.3, error_kw=dict(lw=1, capsize=2, capthick=1),ax=ax,kind='barh',width=.025) ax = test.loc[['MAX'],['one']].plot(position=8.5,color=['xkcd:light olive green'],xerr=test.loc[['MAX'],['four']].values.T, edgecolor='black',linewidth = 0.3, error_kw=dict(lw=1, capsize=2, capthick=1),ax=ax,kind='barh',width=.025) # Legende h0, l0 = ax.get_legend_handles_labels() l0 = [r'MAX $1$', r'MAX $2$', r'MAX $3$'] legend = plt.legend(h0, l0, borderpad=0.15,labelspacing=0.1, frameon=True, edgecolor="xkcd:black", ncol=1, loc='upper left',framealpha=1, facecolor='white') legend.get_frame().set_linewidth(0.3) ax.legend(handles=legend_elements, loc='center') cur_axes = plt.gca() cur_axes.axes.get_yaxis().set_ticklabels([]) cur_axes.axes.get_yaxis().set_ticks([]) 
#plt.show() Implementation based on the idea of r-beginners: import pandas as pd import matplotlib.pyplot as plt test = pd.DataFrame(data={'one':2000,'two':300,'three':50,'four':150}, index=['MAX']) fig, ax = plt.subplots(figsize=(5, 3), dpi=150) ax.set_ylim(0, 6) ax.set_xlim(0, 2400) ax1 = ax.twiny() ax1.set_xlim(0, 2400) ax1.set_xticks([]) ax.barh(1, width=test['one'], color=['xkcd:camo green'], edgecolor='black',linewidth = 0.3, label='MAX1') ax.barh(2, width=test['one'], color=['xkcd:moss green'], edgecolor='black',linewidth = 0.3, label='MAX2') ax.barh(3, width=test['one'], color=['xkcd:light olive green'], edgecolor='black',linewidth = 0.3, label='MAX3') ax1.errorbar(test['one'], 1, xerr=test['two'], color='k', ecolor='k', fmt=',', lw=1, capsize=2, capthick=1, label='MAX1') ax1.errorbar(test['one'], 2, xerr=test['three'], color='k', ecolor='k', fmt=',', lw=1, capsize=2, capthick=1, label='MAX2') ax1.errorbar(test['one'], 3, xerr=test['four'], color='k', ecolor='k', fmt=',', lw=1, capsize=2, capthick=1, label='MAX3') handler, label = ax.get_legend_handles_labels() handler1, label1 = ax1.get_legend_handles_labels() label1 = ['' for l in label1] ax.legend(handler, label, loc='upper left', handletextpad=1.5) ax1.legend(handler1, label1, loc='upper left', handletextpad=1., markerfirst=False, framealpha=0.001) plt.show() Changes: ax1 gets the same limit as ax all strings from label1 are deleted in ax1.legend() the order of handler and label is exchanged and with the handlertextpad the error bars are shifted to the right
The method I came up with was to draw 'ax.barh' and 'ax1.errorbar()' and then superimpose the legends of each on top of each other. On one side, I minimized the transparency so that the legend below is visible; the error bar looks different because I made it biaxial. import pandas as pd import matplotlib.pyplot as plt test = pd.DataFrame(data={'one':2000,'two':300,'three':50,'four':150}, index=['MAX']) fig, ax = plt.subplots(figsize=(5, 3), dpi=230) ax.set_ylim(0, 15) ax.set_xlim(0, 2400) ax1 = ax.twiny() ax.barh(5.5, width=test['one'], color=['xkcd:camo green'], edgecolor='black',linewidth = 0.3, label='MAX1') ax.barh(7.0, width=test['one'], color=['xkcd:moss green'], edgecolor='black',linewidth = 0.3, label='MAX2') ax.barh(8.5, width=test['one'], color=['xkcd:light olive green'], edgecolor='black',linewidth = 0.3, label='MAX3') ax1.errorbar(test['one'], 5.5, xerr=test['two'], color='k', ecolor='k', capsize=3, fmt='|', label='MAX1') ax1.errorbar(test['one'], 7.0, xerr=test['three'], color='k', ecolor='k', capsize=3, fmt='|', label='MAX2') ax1.errorbar(test['one'], 8.5, xerr=test['four'], color='k', ecolor='k', capsize=3, fmt='|', label='MAX3') handler, label = ax.get_legend_handles_labels() handler1, label1 = ax1.get_legend_handles_labels() ax.legend(handler, label, loc='upper left', title='mix legend') ax1.legend(handler1, label1, loc='upper left', title='mix legend', framealpha=0.001) plt.show()
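An alternative worth noting (my own sketch, not part of the answer above, with made-up numbers): matplotlib can merge a bar handle and an errorbar handle into one legend entry via legend_handler.HandlerTuple, which avoids stacking two legends on top of each other.

import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerTuple

fig, ax = plt.subplots(figsize=(5, 3))
bars = ax.barh(1, 2000, height=0.4, color='xkcd:camo green', edgecolor='black')
err = ax.errorbar(2000, 1, xerr=300, fmt='none', ecolor='black', capsize=3)

# One legend entry that shows the bar patch and its error bar together
ax.legend([(bars[0], err)], ['MAX 1'],
          handler_map={tuple: HandlerTuple(ndivide=None)},
          loc='upper left')
plt.show()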
9
2
62,683,732
2020-7-1
https://stackoverflow.com/questions/62683732/combining-strings-and-ints-to-create-a-date-string-results-in-typeerror
I am trying to combine the lists below to display a date in the format 'dd/hh:mm'. the lists are as follows: dd = [23, 23, 24, 24, 24, 24, 25, 25, 25, 25, 25, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27] hh = [21, 23, 7, 9, 16, 19, 2, 5, 12, 15, 22, 1, 8, 11, 18, 21, 2, 8, 12, 12, 13, 13, 18, 22] mm = [18, 39, 3, 42, 52, 43, 46, 41, 42, 35, 41, 27, 37, 30, 0, 58, 57, 51, 11, 20, 18, 30, 35, 5] So combining the lists would look something like 23/21:18, 23/23:39, 24/7:3, 24/9:42 ...... and so on. I tried using a for loop (below) for this, but each time was unsurprisingly met with finaltimes = [] zip_object = zip(dd,hh,mm) for list1, list2, list3 in zip_object: finaltimes.append(list1+'/'+list2+':'+list3) TypeError: unsupported operand type(s) for +: 'int' and 'str' I know I can't combine int and str in this loop but am not sure how to approach this? Any help is appreciated
The following should work: finaltimes = ['{}/{}:{}'.format(*tpl) for tpl in zip(dd, hh, mm)]
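Equivalently (my own variant, not part of the answer above), an f-string comprehension that unpacks the three lists via zip:

finaltimes = [f'{d}/{h}:{m}' for d, h, m in zip(dd, hh, mm)]
# ['23/21:18', '23/23:39', '24/7:3', ...]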
8
11
62,748,654
2020-7-6
https://stackoverflow.com/questions/62748654/python-3-8-shared-memory-resource-tracker-producing-unexpected-warnings-at-appli
I am using a multiprocessing.Pool which calls a function in 1 or more subprocesses to produce a large chunk of data. The worker process creates a multiprocessing.shared_memory.SharedMemory object and uses the default name assigned by shared_memory. The worker returns the string name of the SharedMemory object to the main process. In the main process the SharedMemory object is linked to, consumed, and then unlinked & closed. At shutdown I'm seeing warnings from resource_tracker: /usr/local/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 10 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' /usr/local/lib/python3.8/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/psm_e27e5f9e': [Errno 2] No such file or directory: '/psm_e27e5f9e' warnings.warn('resource_tracker: %r: %s' % (name, e)) /usr/local/lib/python3.8/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/psm_2cf099ac': [Errno 2] No such file or directory: '/psm_2cf099ac' <8 more similar messages omitted> Since I unlinked the shared memory objects in my main process I'm confused about what's happening here. I suspect these messages are occurring in the subprocess (in this example I tested with a process pool of size 1). Here is a minimum reproducible example: import multiprocessing import multiprocessing.shared_memory as shared_memory def create_shm(): shm = shared_memory.SharedMemory(create=True, size=30000000) shm.close() return shm.name def main(): pool = multiprocessing.Pool(processes=4) tasks = [pool.apply_async(create_shm) for _ in range(200)] for task in tasks: name = task.get() print('Getting {}'.format(name)) shm = shared_memory.SharedMemory(name=name, create=False) shm.close() shm.unlink() pool.terminate() pool.join() if __name__ == '__main__': main() I have found that running that example on my own laptop (Linux Mint 19.3) it runs fine, however running it on two different server machines (unknown OS configurations, but both different) it does exhibit the problem. In all cases I'm running the code from a docker container, so Python/software config is identical, the only difference is the Linux kernel/host OS. I notice this documentation that might be relevant: https://docs.python.org/3.8/library/multiprocessing.html#contexts-and-start-methods I also notice that the number of "leaked shared_memory objects" varies from run to run. Since I unlink in main process, then immediately exit, perhaps this resource_tracker (which I think is a separate process) has just not received an update before the main process exits. I don't understand the role of the resource_tracker well enough to fully understand what I just proposed though. Related topics: https://bugs.python.org/issue39959
In theory and based on the current implementation of SharedMemory, the warnings should be expected. The main reason is that every shared memory object you have created is being tracked twice: first, when it's produced by one of the processes in the Pool object; and second, when it's consumed by the main process. This is mainly because the current implementation of the constructor of SharedMemory will register the shared memory object regardless of whether the createargument is set to True or its value is False. So, when you call shm.unlink() in the main process, what you are doing is deleting the shared memory object entirely before its producer (some process in the Pool) gets around to cleaning it up. As a result, when the pool gets destroyed, each of its members (if they ever got a task) has to clean up after itself. The first warning about leaked resources probably refers to the shared memory objects actually created by processes in the Pool that never got unlinked by those same processes. And, the No such file or directory warnings are due to the fact that the main process has unlinked the files associated with the shared memory objects before the processes in the Pool are destroyed. The solution provided in the linked bug report would likely prevent consuming processes from having to spawn additional resource trackers, but it does not quite prevent the issue that arises when a consuming process decides to delete a shared memory object that it did not create. This is because the process that produced the shared memory object will still have to do some clean up, i.e. some unlinking, before it exits or is destroyed. The fact that you are not seeing those warnings is quite puzzling. But it may well have to do with a combination of OS scheduling, unflushed buffers in the child process and the start method used when creating a process pool. For comparison, when I use fork as a start method on my machine, I get the warnings. Otherwise, I see no warnings when spawn and forkserver are used. I added argument parsing to your code to make it easy to test different start methods: #!/usr/bin/env python3 # shm_test_script.py """ Use --start_method or -s to pick a process start method when creating a process Pool. Use --tasks or -t to control how many shared memory objects should be created. Use --pool_size or -p to control the number of child processes in the create pool. """ import argparse import multiprocessing import multiprocessing.shared_memory as shared_memory def create_shm(): shm = shared_memory.SharedMemory(create=True, size=30000000) shm.close() return shm.name def main(tasks, start_method, pool_size): multiprocessing.set_start_method(start_method, force=True) pool = multiprocessing.Pool(processes=pool_size) tasks = [pool.apply_async(create_shm) for _ in range(tasks)] for task in tasks: name = task.get() print('Getting {}'.format(name)) shm = shared_memory.SharedMemory(name=name, create=False) shm.close() shm.unlink() pool.terminate() pool.join() if __name__ == '__main__': parser = argparse.ArgumentParser( description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter ) parser.add_argument( '--start_method', '-s', help='The multiproccessing start method to use. Default: %(default)s', default=multiprocessing.get_start_method(), choices=multiprocessing.get_all_start_methods() ) parser.add_argument( '--pool_size', '-p', help='The number of processes in the pool. 
Default: %(default)s', type=int, default=multiprocessing.cpu_count() ) parser.add_argument( '--tasks', '-t', help='Number of shared memory objects to create. Default: %(default)s', default=200, type=int ) args = parser.parse_args() main(args.tasks, args.start_method, args.pool_size) Given that fork is the only method that ends up displaying the warnings (for me, at least), maybe there is actually something to the following statement about it: The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic. It's not surprising that the warnings from child processes persist/propagate if all resources of the parent are inherited by the child processes. If you're feeling particularly adventurous, you can edit the multiprocessing/resource_tracker.py and update warnings.warn lines by adding os.getpid() to the printed strings. For instance, changing any warning with "resource_tracker:" to "resource_tracker %d: " % (os.getpid()) should be sufficient. If you've done this, you will notice that the warnings come from various processes that are neither the child processes, nor the main process itself. With those changes made, the following should help with double checking that the complaining resource trackers are as many as your Pool size, and their process IDs are different from the main process or the child processes: chmod +x shm_test_script.py ./shm_test_script.py -p 10 -t 50 -s fork > log 2> err awk -F ':' 'length($4) > 1 { print $4 }' err | sort | uniq -c That should display ten lines, each of which prepended with the number of complaints from the corresponding resource tracker. Every line should also contain a PID that should be different from the main and child processes. To recap, each child process should have its own resource tracker if it receives any work. Since you're not explicitly unlinking the shared memory objects in the child processes, the resources will likely get cleaned up when the child processes are destroyed. I hope this helps answer some, if not all, of your questions.
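Building on the double-tracking explanation above, one workaround that is often used (my own sketch, not part of the original answer; it relies on the undocumented multiprocessing.resource_tracker module and the private _name attribute, so treat it with care) is to have the producing worker unregister the segment it created, since ownership is handed to the main process, which unlinks it later:

from multiprocessing import resource_tracker
import multiprocessing.shared_memory as shared_memory

def create_shm():
    shm = shared_memory.SharedMemory(create=True, size=30000000)
    # The constructor registered this segment with the worker's resource
    # tracker; unregister it here because the consumer (the main process)
    # takes over ownership and calls unlink() itself.
    resource_tracker.unregister(shm._name, 'shared_memory')
    shm.close()
    return shm.name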
20
14
62,766,200
2020-7-7
https://stackoverflow.com/questions/62766200/create-csv-from-xml-json-using-python-pandas
I am trying to parse to an xml into multiple different Files - Sample XML <integration-outbound:IntegrationEntity xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <integrationEntityHeader> <integrationTrackingNumber>281#963-4c1d-9d26-877ba40a4b4b#1583507840354</integrationTrackingNumber> <referenceCodeForEntity>25428</referenceCodeForEntity> <attachments> <attachment> <id>d6esd1d518b06019e01</id> <name>durance.pdf</name> <size>0</size> </attachment> <attachment> <id>182e60164ddd4236b5bd96109</id> <name>ssds</name> <size>0</size> </attachment> </attachments> <source>SIM</source> <entity>SUPPLIER</entity> <action>CREATE</action> <timestampUTC>20200306T151721</timestampUTC> <zDocBaseVersion>2.0</zDocBaseVersion> <zDocCustomVersion>0</zDocCustomVersion> </integrationEntityHeader> <integrationEntityDetails> <supplier> <requestId>2614352</requestId> <controlBlock> <dataProcessingInfo> <key>MODE</key> <value>Onboarding</value> </dataProcessingInfo> <dataProcessingInfo> <key>Supplier_Type</key> <value>Operational</value> </dataProcessingInfo> </controlBlock> <id>1647059</id> <facilityCode>0001</facilityCode> <systemCode>1</systemCode> <supplierType>Operational</supplierType> <systemFacilityDetails> <systemFacilityDetail> <facilityCode>0001</facilityCode> <systemCode>1</systemCode> <FacilityStatus>ACTIVE</FacilityStatus> </systemFacilityDetail> </systemFacilityDetails> <status>ACTIVE</status> <companyDetails> <displayGSID>254232128</displayGSID> <legalCompanyName>asdasdsads</legalCompanyName> <dunsNumber>03-175-2493</dunsNumber> <legalStructure>1</legalStructure> <website>www.aaadistributor.com</website> <noEmp>25</noEmp> <companyIndicator1099>No</companyIndicator1099> <taxidAndWxformRequired>NO</taxidAndWxformRequired> <taxidFormat>Fed. Tax</taxidFormat> <wxForm>182e601649ade4c38cd4236b5bd96109</wxForm> <taxid>27-2204474</taxid> <companyTypeFix>SUPPLIER</companyTypeFix> <fields> <field> <id>LOW_CUURENT_SERV</id> <value>1</value> </field> <field> <id>LOW_COI</id> <value>USA</value> </field> <field> <id>LOW_STATE_INCO</id> <value>US-PA</value> </field> <field> <id>CERT_INSURANCE</id> <value>d6e6e460fe8958564c1d518b06019e01</value> </field> <field> <id>COMP_DBA</id> <value>asdadas</value> </field> <field> <id>LOW_AREUDIVE</id> <value>N</value> </field> <field> <id>LOW_BU_SIZE1</id> <value>SMLBUS</value> </field> <field> <id>EDI_CAP</id> <value>Y</value> </field> <field> <id>EDI_WEB</id> <value>N</value> </field> <field> <id>EDI_TRAD</id> <value>N</value> </field> </fields> </companyDetails> <allLocations> <location> <addressInternalid>1704342</addressInternalid> <isDelete>false</isDelete> <internalSupplierid>1647059</internalSupplierid> <acctGrpid>HQ</acctGrpid> <address1>2501 GRANT AVE</address1> <country>USA</country> <state>US-PA</state> <city>PHILADELPHIA</city> <zip>19114</zip> <phone>(215) 745-7900</phone> </location> </allLocations> <contactDetails> <contactDetail> <contactInternalid>12232</contactInternalid> <isDelete>false</isDelete> <addressInternalid>1704312142</addressInternalid> <contactType>Main</contactType> <firstName>Raf</firstName> <lastName>jas</lastName> <title>Admin</title> <email>[email protected]</email> <phoneNo>123-42-23-23</phoneNo> <createPortalLogin>yes</createPortalLogin> <allowedPortalSideProducts>SIM,iSource,iContract</allowedPortalSideProducts> </contactDetail> <contactDetail> <contactInternalid>1944938</contactInternalid> <isDelete>false</isDelete> <addressInternalid>1704342</addressInternalid> <contactType>Rad</contactType> <firstName>AVs</firstName> 
<lastName>asd</lastName> <title>Founder</title> <email>[email protected]</email> <phoneNo>21521-2112-7900</phoneNo> <createPortalLogin>yes</createPortalLogin> <allowedPortalSideProducts>SIM,iContract,iSource</allowedPortalSideProducts> </contactDetail> </contactDetails> <myLocation> <addresses> <myLocationsInternalid>1704342</myLocationsInternalid> <isDelete>false</isDelete> <addressInternalid>1704342</addressInternalid> <usedAt>N</usedAt> </addresses> </myLocation> <bankDetails> <fields> <field> <id>LOW_BANK_KEY</id> <value>123213</value> </field> <field> <id>LOW_EFT</id> <value>123123</value> </field> </fields> </bankDetails> <forms> <form> <id>CATEGORY_PRODSER</id> <records> <record> <Internalid>24348</Internalid> <isDelete>false</isDelete> <fields> <field> <id>CATEGOR_LEVEL_1</id> <value>MR</value> </field> <field> <id>LOW_PRODSERV</id> <value>RES</value> </field> <field> <id>LOW_LEVEL_2</id> <value>keylevel221</value> </field> <field> <id>LOW_LEVEL_3</id> <value>keylevel3127</value> </field> <field> <id>LOW_LEVEL_4</id> <value>keylevel4434</value> </field> <field> <id>LOW_LEVEL_5</id> <value>keylevel5545</value> </field> </fields> </record> <record> <Internalid>24349</Internalid> <isDelete>false</isDelete> <fields> <field> <id>CATEGOR_LEVEL_1</id> <value>MR</value> </field> <field> <id>LOW_PRODSERV</id> <value>RES</value> </field> <field> <id>LOW_LEVEL_2</id> <value>keylevel221</value> </field> <field> <id>LOW_LEVEL_3</id> <value>keylevel3125</value> </field> <field> <id>LOW_LEVEL_4</id> <value>keylevel4268</value> </field> <field> <id>LOW_LEVEL_5</id> <value>keylevel5418</value> </field> </fields> </record> <record> <Internalid>24350</Internalid> <isDelete>false</isDelete> <fields> <field> <id>CATEGOR_LEVEL_1</id> <value>MR</value> </field> <field> <id>LOW_PRODSERV</id> <value>RES</value> </field> <field> <id>LOW_LEVEL_2</id> <value>keylevel221</value> </field> <field> <id>LOW_LEVEL_3</id> <value>keylevel3122</value> </field> <field> <id>LOW_LEVEL_4</id> <value>keylevel425</value> </field> <field> <id>LOW_LEVEL_5</id> <value>keylevel5221</value> </field> </fields> </record> </records> </form> <form> <id>OTHER_INFOR</id> <records> <record> <isDelete>false</isDelete> <fields> <field> <id>S_EAST</id> <value>N</value> </field> <field> <id>W_EST</id> <value>N</value> </field> <field> <id>M_WEST</id> <value>N</value> </field> <field> <id>N_EAST</id> <value>N</value> </field> <field> <id>LOW_AREYOU_ASSET</id> <value>-1</value> </field> <field> <id>LOW_SWART_PROG</id> <value>-1</value> </field> </fields> </record> </records> </form> <form> <id>ABDCEDF</id> <records> <record> <isDelete>false</isDelete> <fields> <field> <id>LOW_COD_CONDUCT</id> <value>-1</value> </field> </fields> </record> </records> </form> <form> <id>CODDUC</id> <records> <record> <isDelete>false</isDelete> <fields> <field> <id>LOW_SUPPLIER_TYPE</id> <value>2</value> </field> <field> <id>LOW_DO_INT_BOTH</id> <value>1</value> </field> </fields> </record> </records> </form> </forms> </supplier> </integrationEntityDetails> </integration-outbound:IntegrationEntity> The goal is to have common xml to csv conversion to be put in place. Based on input file the xml should be flattend and exploded into multiple csv and stored. The input is an xml which is above and config csv file below. 
Need to create 3 csv files with corresponding XPATH mentioned in the file XPATH,ColumName,CSV_File_Name,ParentKey /integration-outbound:IntegrationEntity/integrationEntityHeader/integrationTrackingNumber,integrationTrackingNumber,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/referenceCodeForEntity,referenceCodeForEntity,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/attachments/attachment[]/id,id,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/attachments/attachment[]/name,name,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/attachments/attachment[]/size,size,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/source,source,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/entity,entity,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/action,action,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/timestampUTC,timestampUTC,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/zDocBaseVersion,zDocBaseVersion,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/zDocCustomVersion,zDocCustomVersion,integrationEntityHeader.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/integrationTrackingNumber,integrationTrackingNumber,integrationEntityDetailsControlBlock.csv,Y /integration-outbound:IntegrationEntity/integrationEntityHeader/referenceCodeForEntity,referenceCodeForEntity,integrationEntityDetailsControlBlock.csv,Y /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/requestId,requestId,integrationEntityDetailsControlBlock.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/controlBlock/dataProcessingInfo[]/key,key,integrationEntityDetailsControlBlock.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/controlBlock/dataProcessingInfo[]/value,value,integrationEntityDetailsControlBlock.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/id,supplier_id,integrationEntityDetailsControlBlock.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/forms/form[]/id,id,integrationEntityDetailsForms.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/forms/form[]/records/record[]/Internalid,Internalid,integrationEntityDetailsForms.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/forms/form[]/records/record[]/isDelete,FormId,integrationEntityDetailsForms.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/forms/form[]/records/record[]/fields/field[]/id,SupplierFormRecordFieldId,integrationEntityDetailsForms.csv, /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/forms/form[]/records/record[]/fields/field[]/value,SupplierFormRecordFieldValue,integrationEntityDetailsForms.csv, /integration-outbound:IntegrationEntity/integrationEntityHeader/integrationTrackingNumber,integrationTrackingNumber,integrationEntityDetailsForms.csv,Y /integration-outbound:IntegrationEntity/integrationEntityHeader/referenceCodeForEntity,referenceCodeForEntity,integrationEntityDetailsForms.csv,Y 
/integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/requestId,requestId,integrationEntityDetailsForms.csv,Y /integration-outbound:IntegrationEntity/integrationEntityDetails/supplier/id,supplier_id,integrationEntityDetailsForms.csv,Y I need to create 3 csv files output from it. The design is to pick each csv file and get the xpath and pick the corresponding value from the xml and fetch it Step 1 - Convert to xml to Json - import json import xmltodict with open("/home/s0998hws/test.xml") as xml_file: data_dict = xmltodict.parse(xml_file.read()) xml_file.close() # generate the object using json.dumps() # corresponding to json data json_data = json.dumps(data_dict) # Write the json data to output # json file with open("data.json", "w") as json_file: json_file.write(json_data) json_file.close() with open('data.json') as f: d = json.load(f) Step 2 - Normalize using the panda normalize function - using the xpath / converting to . and [] as other delimter and building the columns to be fecthed from the json i.e code will look for /integration-outbound:IntegrationEntity/integrationEntityHeader/integrationTrackingNumber and convert to .integrationEntityHeader.integrationTrackingNumber and with the first [] it will exlode , there on df_1=pd.json_normalize(data=d['integration-outbound:IntegrationEntity']) df_2=df_1[['integrationEntityHeader.integrationTrackingNumber','integrationEntityDetails.supplier.requestId','integrationEntityHeader.referenceCodeForEntity','integrationEntityDetails.supplier.id','integrationEntityDetails.supplier.forms.form']] df_3=df_2.explode('integrationEntityDetails.supplier.forms.form') df_3['integrationEntityDetails.supplier.forms.form.id']=df_3['integrationEntityDetails.supplier.forms.form'].apply(lambda x: x.get('id')) df_3['integrationEntityDetails.supplier.forms.form.records']=df_3['integrationEntityDetails.supplier.forms.form'].apply(lambda x: x.get('records')) I was trying to use the metadata from the csv file and fecth it but the challenge is df_3['integrationEntityDetails.supplier.forms.form.records.record.Internalid']=df_3['integrationEntityDetails.supplier.forms.form.records.record'].apply(lambda x: x.get('Internalid')) Failed with Error - Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib64/python3.6/site-packages/pandas/core/series.py", line 3848, in apply mapped = lib.map_infer(values, f, convert=convert_dtype) File "pandas/_libs/lib.pyx", line 2327, in pandas._libs.lib.map_infer File "<stdin>", line 1, in <lambda> AttributeError: 'list' object has no attribute 'get' The reason is the data from the panda dataframe is having list when and array and it is unable be fecth using the above method. 
Below is the output generated integrationEntityHeader.integrationTrackingNumber integrationEntityDetails.supplier.requestId integrationEntityHeader.referenceCodeForEntity integrationEntityDetails.supplier.id integrationEntityDetails.supplier.forms.form integrationEntityDetails.supplier.forms.form.id integrationEntityDetails.supplier.forms.form.records 0 281#999eb16e-242c-4239-b33e-ae6f5296fb15#10c7338c-ab63-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 {'id': 'CATEGORY_PRODSER', 'records': {'record': [{'Internalid': '24348', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3127'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4434'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5545'}]}}, {'Internalid': '24349', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3125'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4268'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5418'}]}}, {'Internalid': '24350', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3122'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel425'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5221'}]}}]}} CATEGORY_PRODSER {'record': [{'Internalid': '24348', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3127'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4434'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5545'}]}}, {'Internalid': '24349', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3125'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel4268'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5418'}]}}, {'Internalid': '24350', 'isDelete': 'false', 'fields': {'field': [{'id': 'CATEGOR_LEVEL_1', 'value': 'MR'}, {'id': 'LOW_PRODSERV', 'value': 'RES'}, {'id': 'LOW_LEVEL_2', 'value': 'keylevel221'}, {'id': 'LOW_LEVEL_3', 'value': 'keylevel3122'}, {'id': 'LOW_LEVEL_4', 'value': 'keylevel425'}, {'id': 'LOW_LEVEL_5', 'value': 'keylevel5221'}]}}]} 0 281#999eb16e-242c-4239-b33e-ae6f5296fb15#10c7338c-ab63-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 {'id': 'OTHER_INFOR', 'records': {'record': {'isDelete': 'false', 'fields': {'field': [{'id': 'S_EAST', 'value': 'N'}, {'id': 'W_EST', 'value': 'N'}, {'id': 'M_WEST', 'value': 'N'}, {'id': 'N_EAST', 'value': 'N'}, {'id': 'LOW_AREYOU_ASSET', 'value': '-1'}, {'id': 'LOW_SWART_PROG', 'value': '-1'}]}}}} OTHER_INFOR {'record': {'isDelete': 'false', 'fields': {'field': [{'id': 'S_EAST', 'value': 'N'}, {'id': 'W_EST', 'value': 'N'}, {'id': 'M_WEST', 'value': 'N'}, {'id': 'N_EAST', 'value': 'N'}, {'id': 'LOW_AREYOU_ASSET', 'value': '-1'}, {'id': 'LOW_SWART_PROG', 'value': '-1'}]}}} 0 281#999eb16e-242c-4239-b33e-ae6f5296fb15#10c7338c-ab63-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 {'id': 'CORPORATESUSTAINABILITY', 'records': {'record': {'isDelete': 'false', 'fields': {'field': {'id': 'LOW_COD_CONDUCT', 'value': 
'-1'}}}}} CORPORATESUSTAINABILITY {'record': {'isDelete': 'false', 'fields': {'field': {'id': 'LOW_COD_CONDUCT', 'value': '-1'}}}} 0 281#999eb16e-242c-4239-b33e-ae6f5296fb15#10c7338c-ab63-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 {'id': 'PRODUCTSERVICES', 'records': {'record': {'isDelete': 'false', 'fields': {'field': [{'id': 'LOW_SUPPLIER_TYPE', 'value': '2'}, {'id': 'LOW_DO_INT_BOTH', 'value': '1'}]}}}} PRODUCTSERVICES {'record': {'isDelete': 'false', 'fields': {'field': [{'id': 'LOW_SUPPLIER_TYPE', 'value': '2'}, {'id': 'LOW_DO_INT_BOTH', 'value': '1'}]}}} Expected Ouput integrationEntityDetailsForms.csv integrationTrackingNumber requestId referenceCodeForEntity supplier.id integrationEntityDetails.supplier.forms.form.id InternalId isDelete SupplierFormRecordFieldId SupplierFormRecordFieldValue 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24348 FALSE CATEGOR_LEVEL_1 MR 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24348 FALSE LOW_PRODSERV RES 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24348 FALSE LOW_LEVEL_2 keylevel221 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24348 FALSE LOW_LEVEL_3 keylevel3127 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24348 FALSE LOW_LEVEL_4 keylevel4434 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24348 FALSE LOW_LEVEL_5 keylevel5545 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24350 FALSE CATEGOR_LEVEL_1 MR 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24350 FALSE LOW_PRODSERV RES 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24350 FALSE LOW_LEVEL_2 keylevel221 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24350 FALSE LOW_LEVEL_3 keylevel3122 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24350 FALSE LOW_LEVEL_4 keylevel425 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CATEGORY_PRODSER 24350 FALSE LOW_LEVEL_5 keylevel5221 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 OTHER_INFOR FALSE S_EAST N 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 OTHER_INFOR FALSE W_EST N 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 OTHER_INFOR FALSE M_WEST N 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 OTHER_INFOR FALSE N_EAST N 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 OTHER_INFOR FALSE LOW_AREYOU_ASSET -1 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CORPORATESUSTAINABILITY FALSE LOW_SWART_PROG -1 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 CORPORATESUSTAINABILITY FALSE LOW_COD_CONDUCT -1 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 PRODUCTSERVICES FALSE LOW_SUPPLIER_TYPE 2 281#963-4c1d-9d26-877ba40a4b4b#1583507840354 2614352 25428 1647059 PRODUCTSERVICES FALSE LOW_DO_INT_BOTH 1
The xml is converted to dict and then the parsing logic is written , the reason for this is because the same can be used for json . The stackoverflow is amazingly helpful and the solution is build based on the responses from all these links . For simplicity i have created a 3 level nest xml. This works on Python3 <?xml version="1.0"?><Company><Employee><FirstName>Hal</FirstName><LastName>Thanos</LastName><ContactNo>122131</ContactNo><Email>[email protected]</Email><Addresses><Address><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form></forms></Address></Addresses></Employee><Employee><FirstName>Iron</FirstName><LastName>Man</LastName><ContactNo>12324</ContactNo><Email>[email protected]</Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company> <?xml version="1.0"?><Company><Employee><FirstName>Captain</FirstName><LastName>America</LastName><ContactNo>13322</ContactNo><Email>[email protected]</Email><Addresses><Address><City>Trivandrum</City><State>Kerala</State><Zip>28115</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form></forms></Address></Addresses></Employee><Employee><FirstName>Sword</FirstName><LastName>Man</LastName><ContactNo>12324</ContactNo><Email>[email protected]</Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company> <?xml version="1.0"?><Company><Employee><FirstName>Thor</FirstName><LastName>Odison</LastName><ContactNo>156565</ContactNo><Email>[email protected]</Email><Addresses><Address><City>Tirunelveli</City><State>TamilNadu</State><Zip>36595</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form></forms></Address></Addresses></Employee><Employee><FirstName>Spider</FirstName><LastName>Man</LastName><ContactNo>12324</ContactNo><Email>[email protected]</Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company> <?xml version="1.0"?><Company><Employee><FirstName>Black</FirstName><LastName>Widow</LastName><ContactNo>16767</ContactNo><Email>[email 
protected]</Email><Addresses><Address><City>Mysore</City><State>Karnataka</State><Zip>12478</Zip><forms><form><id>ID1</id><value>LIC</value></form></forms></Address></Addresses></Employee><Employee><FirstName>White</FirstName><LastName>Man</LastName><ContactNo>5634</ContactNo><Email>[email protected]</Email><Addresses><Address><type>Permanent</type><City>Bangalore</City><State>Karnataka</State><Zip>560212</Zip><forms><form><id>ID3</id><value>LIC</value></form></forms></Address><Address><type>Temporary</type><City>Concord</City><State>NC</State><Zip>28027</Zip><forms><form><id>ID1</id><value>LIC</value></form><form><id>ID2</id><value>PAS</value></form><form><id>ID3</id><value>SSN</value></form><form><id>ID2</id><value>CC</value></form></forms></Address></Addresses></Employee></Company> The config file for this xml is all possible array/multiple level/explode columns should be mentioned as []. The header is needed as referred in the code. Chnage the variable as per u store process_config_csv = 'config.csv' xml_file_name = 'test.xml' XPATH,ColumName,CSV_File_Name /Company/Employee[]/FirstName,FirstName,Name.csv /Company/Employee[]/LastName,LastName,Name.csv /Company/Employee[]/ContactNo,ContactNo,Name.csv /Company/Employee[]/Email,Email,Name.csv /Company/Employee[]/FirstName,FirstName,Address.csv /Company/Employee[]/LastName,LastName,Address.csv /Company/Employee[]/ContactNo,ContactNo,Address.csv /Company/Employee[]/Email,Email,Address.csv /Company/Employee[]/Addresses/Address[]/City,City,Address.csv /Company/Employee[]/Addresses/Address[]/State,State,Address.csv /Company/Employee[]/Addresses/Address[]/Zip,Zip,Address.csv /Company/Employee[]/Addresses/Address[]/type,type,Address.csv /Company/Employee[]/FirstName,FirstName,Form.csv /Company/Employee[]/LastName,LastName,Form.csv /Company/Employee[]/ContactNo,ContactNo,Form.csv /Company/Employee[]/Email,Email,Form.csv /Company/Employee[]/Addresses/Address[]/type,type,Form.csv /Company/Employee[]/Addresses/Address[]/forms/form[]/id,id,Form.csv /Company/Employee[]/Addresses/Address[]/forms/form[]/value,value,Form.csv The code to create multiple csv based on the config file is import json import xmltodict import json import os import csv import numpy as np import pandas as pd import sys from collections import defaultdict import numpy as np def getMatches(L1, L2): R = set() for elm in L1: for pat in L2: if elm.find(pat) != -1: if elm.find('.', len(pat)+1) != -1: R.add(elm[:elm.find('.', len(pat)+1)]) else: R.add(elm) return list(R) def xml_parse(xml_file_name): try: process_xml_file = xml_file_name with open(process_xml_file) as xml_file: for xml_string in xml_file: """Converting the xml to Dict""" data_dict = xmltodict.parse(xml_string) """Converting the dict to Pandas DF""" df_processing = pd.json_normalize(data_dict) xml_parse_loop(df_processing) xml_file.close() except Exception as e: s = str(e) print(s) def xml_parse_loop(df_processing_input): CSV_File_Name = [] """Getting the list of csv Files to be created""" with open(process_config_csv, newline='') as csvfile: DataCaptured = csv.DictReader(csvfile) for row in DataCaptured: if row['CSV_File_Name'] not in CSV_File_Name: CSV_File_Name.append(row['CSV_File_Name']) """Iterating the list of CSV""" for items in CSV_File_Name: df_processing = df_processing_input df_subset_process = [] df_subset_list_all_cols = [] df_process_sub_explode_Level = [] df_final_column_name = [] print('Parsing the xml file for creating the file - ' + str(items)) """Fetching the field list for processs from the confic 
File""" with open(process_config_csv, newline='') as csvfile: DataCaptured = csv.DictReader(csvfile) for row in DataCaptured: if row['CSV_File_Name'] in items: df_final_column_name.append(row['ColumName']) """Getting the columns until the first [] """ df_subset_process.append(row['XPATH'].strip('/').replace("/",".").split('[]')[0]) """Getting the All the columnnames""" df_subset_list_all_cols.append(row['XPATH'].strip('/').replace("/",".").replace("[]","")) """Getting the All the Columns to explode""" df_process_sub_explode_Level.append(row['XPATH'].strip('/').replace('/', '.').split('[]')) explode_ld = defaultdict(set) """Putting Level of explode and column names""" for x in df_process_sub_explode_Level: if len(x) > 1: explode_ld[len(x) - 1].add(''.join(x[: -1])) explode_ld = {k: list(v) for k, v in explode_ld.items()} #print(' The All column list is for the file ' + items + " is " + str(df_subset_list_all_cols)) #print(' The first processing for the file ' + items + " is " + str(df_subset_process)) #print('The explode level of attributes for the file ' + items + " is " + str(explode_ld)) """Remove column duplciates""" df_subset_process = list(dict.fromkeys(df_subset_process)) for col in df_subset_process: if col not in df_processing.columns: df_processing[col] = np.nan df_processing = df_processing[df_subset_process] df_processing_col_list = df_processing.columns.tolist() print ('The total levels to be exploded : %d' % len(explode_ld)) i=0 level=len(explode_ld) for i in range(level): print (' Exploding the Level : %d' % i ) df_processing_col_list = df_processing.columns.tolist() list_of_explode=set(df_processing_col_list) & set(explode_ld[i + 1]) #print('List to expolde' + str(list_of_explode)) """If founc in explode list exlplode some xml doesnt need to have a list it could be column handling the same""" for c in list_of_explode: print (' There are column present which needs to be exploded - ' + str(c)) df_processing = pd.concat((df_processing.iloc[[type(item) == list for item in df_processing[c]]].explode(c),df_processing.iloc[[type(item) != list for item in df_processing[c]]])) print(' Finding the columns need to be fetched ') """From the overall column list fecthing the attributes needed to explode""" next_level_pro_lst = getMatches(df_subset_list_all_cols,explode_ld[ i + 1 ]) #print(next_level_pro_lst) df_processing_col_list = df_processing.columns.tolist() for nex in next_level_pro_lst: #print ("Fetching " + nex.rsplit('.', 1)[1] + ' from ' + nex.rsplit('.', 1)[0] + ' from ' + nex ) parent_col=nex.rsplit('.', 1)[0] child_col=nex.rsplit('.', 1)[1] #print(parent_col) #print(df_processing_col_list) if parent_col not in df_processing_col_list: df_processing[nex.rsplit('.', 1)[0]] = "" try: df_processing[nex] = df_processing[parent_col].apply(lambda x: x.get(child_col)) except AttributeError: df_processing[nex] = "" df_processing_col_list = df_processing.columns.tolist() if i == level-1: print('Last Level nothing to be done') else: """Extracting All columns until the next exlode column list is found""" while len(set(df_processing_col_list) & set(explode_ld[i + 2]))==0: next_level_pro_lst = getMatches(df_subset_list_all_cols, next_level_pro_lst) #print(next_level_pro_lst) for nextval in next_level_pro_lst: if nextval not in df_processing_col_list: #print("Fetching " + nextval.rsplit('.', 1)[1] + ' from ' + nextval.rsplit('.', 1)[0] + ' from ' + nextval) if nextval.rsplit('.', 1)[0] not in df_processing.columns: df_processing[nextval.rsplit('.', 1)[0]] = "" try: df_processing[nextval] = 
df_processing[nextval.rsplit('.', 1)[0]].apply(lambda x: x.get(nextval.rsplit('.', 1)[1])) except AttributeError: df_processing[nextval] = "" df_processing_col_list = df_processing.columns.tolist() df_processing = df_processing[df_subset_list_all_cols] df_processing.columns = df_final_column_name # if file does not exist write header if not os.path.isfile(items): print("The file does not exists Exists so writing new") df_processing.to_csv('{}'.format(items), header='column_names',index=None) else: # else it exists so append without writing the header print("The file does exists Exists so appending") df_processing.to_csv('{}'.format(items), mode='a', header=False,index=None) from datetime import datetime startTime = datetime.now().strftime("%Y%m%d_%H%M%S") startTime = str(os.getpid()) + "_" + startTime process_task_name = '' process_config_csv = 'config.csv' xml_file_name = 'test.xml' old_print = print def timestamped_print(*args, **kwargs): now = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f") printheader = now + " xml_parser " + " " + process_task_name + " - " old_print(printheader, *args, **kwargs) print = timestamped_print xml_parse(xml_file_name) The output created are [, ~]$ cat Name.csv FirstName,LastName,ContactNo,Email Hal,Thanos,122131,[email protected] Iron,Man,12324,[email protected] Captain,America,13322,[email protected] Sword,Man,12324,[email protected] Thor,Odison,156565,[email protected] Spider,Man,12324,[email protected] Black,Widow,16767,[email protected] White,Man,5634,[email protected] [, ~]$ cat Address.csv FirstName,LastName,ContactNo,Email,City,State,Zip,type Iron,Man,12324,[email protected],Bangalore,Karnataka,560212,Permanent Iron,Man,12324,[email protected],Concord,NC,28027,Temporary Hal,Thanos,122131,[email protected],Bangalore,Karnataka,560212, Sword,Man,12324,[email protected],Bangalore,Karnataka,560212,Permanent Sword,Man,12324,[email protected],Concord,NC,28027,Temporary Captain,America,13322,[email protected],Trivandrum,Kerala,28115, Spider,Man,12324,[email protected],Bangalore,Karnataka,560212,Permanent Spider,Man,12324,[email protected],Concord,NC,28027,Temporary Thor,Odison,156565,[email protected],Tirunelveli,TamilNadu,36595, White,Man,5634,[email protected],Bangalore,Karnataka,560212,Permanent White,Man,5634,[email protected],Concord,NC,28027,Temporary Black,Widow,16767,[email protected],Mysore,Karnataka,12478, [, ~]$ cat Form.csv FirstName,LastName,ContactNo,Email,type,id,value Iron,Man,12324,[email protected],Temporary,ID1,LIC Iron,Man,12324,[email protected],Temporary,ID2,PAS Iron,Man,12324,[email protected],Temporary,ID3,SSN Iron,Man,12324,[email protected],Temporary,ID2,CC Hal,Thanos,122131,[email protected],,ID1,LIC Hal,Thanos,122131,[email protected],,ID2,PAS Iron,Man,12324,[email protected],Permanent,ID3,LIC Sword,Man,12324,[email protected],Temporary,ID1,LIC Sword,Man,12324,[email protected],Temporary,ID2,PAS Sword,Man,12324,[email protected],Temporary,ID3,SSN Sword,Man,12324,[email protected],Temporary,ID2,CC Captain,America,13322,[email protected],,ID1,LIC Captain,America,13322,[email protected],,ID2,PAS Sword,Man,12324,[email protected],Permanent,ID3,LIC Spider,Man,12324,[email protected],Temporary,ID1,LIC Spider,Man,12324,[email protected],Temporary,ID2,PAS Spider,Man,12324,[email protected],Temporary,ID3,SSN Spider,Man,12324,[email protected],Temporary,ID2,CC Thor,Odison,156565,[email protected],,ID1,LIC Thor,Odison,156565,[email protected],,ID2,PAS Spider,Man,12324,[email protected],Permanent,ID3,LIC White,Man,5634,[email 
protected],Temporary,ID1,LIC White,Man,5634,[email protected],Temporary,ID2,PAS White,Man,5634,[email protected],Temporary,ID3,SSN White,Man,5634,[email protected],Temporary,ID2,CC White,Man,5634,[email protected],Permanent,ID3,LIC Black,Widow,16767,[email protected],,ID1,LIC The pieces of this answer are extracted from different threads, with thanks to @Mark Tolonen, @Mandy007 and @deadshot: Create a dict of list using python from csv, https://stackoverflow.com/questions/62837949/extract-a-list-from-a-list, How to explode Panda column with data having different dict and list of dict. This can definitely be made shorter and more performant, and can be enhanced further.
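As a possible simplification of the form-level flattening (my own sketch, not part of the code above; it uses the d dict loaded from data.json in the question's Step 1 and assumes xmltodict kept form as a list, which only happens when more than one form element is present), pandas.json_normalize can explode a nested list directly via record_path and carry parent keys along with meta:

import pandas as pd

forms = pd.json_normalize(
    data=d['integration-outbound:IntegrationEntity'],
    record_path=['integrationEntityDetails', 'supplier', 'forms', 'form'],
    meta=[
        ['integrationEntityDetails', 'supplier', 'requestId'],
        ['integrationEntityDetails', 'supplier', 'id'],
    ],
    errors='ignore',
)
# Each row is one <form> element; header-level fields such as
# integrationTrackingNumber can be joined on afterwards, and the nested
# records/record level still needs a further explode/normalize pass,
# as the generic parser above does.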
8
2
62,775,254
2020-7-7
https://stackoverflow.com/questions/62775254/why-does-my-pygame-window-not-fit-in-my-4k3840x2160-monitor-scale-of-pygame-w
So I was trying to make a game with python and pygame but I noticed that I couldn't make a high resolution display because when I tried to make a display with more pixels, the pygame window was too big for my 4k (3840x2160) monitor. I should note that my monitor is connected to an old Dell laptop with a resolution of (1366x768). But when I entered this: print(pygame.display.list_modes()) it told me that I could use resolutions up to 4k and not just up to the resolution of my laptop. After a lot of searching and trying I accepted the fact that my game will be low resolution and moved on. As I continued coding the game I wanted to have a pop-up window so I imported pyautogui and my pygame window suddenly became much smaller. BOOM problem solved. I increased the resolution and I had no problems, my game was now running at a very high resolution! I was very confused so I made a very simple pygame program so I could test this and it actually worked. This is low quality and can't fit in my screen: import pygame import sys pygame.init() screen = pygame.display.set_mode((3000, 1500)) font = pygame.font.Font('font.otf', 50) while True: screen.fill((255, 255, 255)) txt = font.render("hello", True, (0, 0, 0)) screen.blit(txt, (100, 100)) pygame.display.update() for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() screenshot1 And this is high resolution and does fit in my screen: import pygame import sys import pyautogui pygame.init() screen = pygame.display.set_mode((3000, 1500)) font = pygame.font.Font('font.otf', 50) while True: screen.fill((255, 255, 255)) txt = font.render("hello", True, (0, 0, 0)) screen.blit(txt, (100, 100)) pygame.display.update() for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() screenshot2 I don't even need to use pyautogui! Can someone explain this to me? Thanks
After a bunch of source diving I believe I have found the solution: pyautogui imports pyscreeze for the functions center, grab, pixel, pixelMatchesColor, screenshot. On lines 63 to 71 of pyscreeze/__init__.py is the following: if sys.platform == 'win32': # On Windows, the monitor scaling can be set to something besides normal 100%. # PyScreeze and Pillow needs to account for this to make accurate screenshots. # TODO - How does macOS and Linux handle monitor scaling? import ctypes try: ctypes.windll.user32.SetProcessDPIAware() except AttributeError: pass # Windows XP doesn't support monitor scaling, so just do nothing. The above code calls SetProcessDPIAware, which makes the process "system DPI aware"; Microsoft describes that mode as follows: System DPI aware. This window does not scale for DPI changes. It will query for the DPI once and use that value for the lifetime of the process. If the DPI changes, the process will not adjust to the new DPI value. It will be automatically scaled up or down by the system when the DPI changes from the system value. If you want to get the same effect without pyautogui, you can just include the same call to SetProcessDPIAware in your own code.
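A minimal sketch of calling SetProcessDPIAware directly, without importing pyautogui (assumes Windows; the window size and the bare event loop are just placeholders mirroring the question):
import sys
import ctypes
import pygame

if sys.platform == "win32":
    try:
        # Opt out of OS DPI virtualization so the window is not scaled by the system.
        ctypes.windll.user32.SetProcessDPIAware()
    except AttributeError:
        pass  # very old Windows versions do not provide this call

pygame.init()
screen = pygame.display.set_mode((3000, 1500))

while True:
    screen.fill((255, 255, 255))
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            sys.exit()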
7
5
62,744,659
2020-7-5
https://stackoverflow.com/questions/62744659/attributeerror-tuple-object-has-no-attribute-rank-when-calling-fit-on-a-ker
I want to build a Neural Network with two inputs: for image data and for numeric data. So I wrote custom data generator for that. The train and validation dataframes contain 11 columns: image_name — path to the image; 9 numeric features; target — class for the item (last column). The code for custom generator (based on this answer): target_size = (224, 224) batch_size = 1 train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True) val_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_dataframe( train, x_col='image_name', y_col=train.columns[1:], target_size=target_size, batch_size=batch_size, shuffle=True, class_mode='raw') validation_generator = val_datagen.flow_from_dataframe( validation, x_col='image_name', y_col=validation.columns[1:], target_size=target_size, shuffle=False, batch_size=batch_size, class_mode='raw') def train_generator_func(): count = 0 while True: if count == len(train.index): train_generator.reset() break count += 1 data = train_generator.next() imgs = [] cols = [] targets = [] for k in range(batch_size): imgs.append(data[0][k]) cols.append(data[1][k][:-1]) targets.append(data[1][k][-1]) yield [imgs, cols], targets def validation_generator_func(): count = 0 while True: if count == len(validation.index): validation_generator.reset() break count += 1 data = validation_generator.next() imgs = [] cols = [] targets = [] for k in range(batch_size): imgs.append(data[0][k]) cols.append(data[1][k][:-1]) targets.append(data[1][k][-1]) yield [imgs, cols], targets Model building: def mlp_model(dim): model = Sequential() model.add(Dense(8, input_dim=dim, activation="relu")) model.add(Dense(4, activation="relu")) return model def vgg16_model(): model = VGG16(weights='imagenet', include_top=False, input_shape=target_size+(3,)) x=Flatten()(model.output) output=Dense(1,activation='sigmoid')(x) # because we have to predict the AUC model=Model(model.input,output) return model def concatenated_model(cnn, mlp): combinedInput = concatenate([cnn.output, mlp.output]) x = Dense(4, activation="relu")(combinedInput) x = Dense(1, activation="sigmoid")(x) model = Model(inputs=[cnn.input, mlp.input], outputs=x) return model def focal_loss(alpha=0.25,gamma=2.0): def focal_crossentropy(y_true, y_pred): bce = K.binary_crossentropy(y_true, y_pred) y_pred = K.clip(y_pred, K.epsilon(), 1.- K.epsilon()) p_t = (y_true*y_pred) + ((1-y_true)*(1-y_pred)) alpha_factor = 1 modulating_factor = 1 alpha_factor = y_true*alpha + ((1-alpha)*(1-y_true)) modulating_factor = K.pow((1-p_t), gamma) # compute the final loss and return return K.mean(alpha_factor*modulating_factor*bce, axis=-1) return focal_crossentropy cnn = vgg16_model() mlp = mlp_model(9) model = concatenated_model(cnn, mlp) opt = Adam(lr=1e-5) model.compile(loss=focal_loss(), metrics=[tf.keras.metrics.AUC()],optimizer=opt) nb_epochs = 2 nb_train_steps = train.shape[0]//batch_size nb_val_steps = validation.shape[0]//batch_size model.fit( train_generator_func(), steps_per_epoch=nb_train_steps, epochs=nb_epochs, validation_data=validation_generator_func(), validation_steps=nb_val_steps) And fitting doesn't work with error message: AttributeError Traceback (most recent call last) <ipython-input-53-253849fd34d6> in <module> 9 epochs=nb_epochs, 10 validation_data=validation_generator_func(), ---> 11 validation_steps=nb_val_steps) d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, 
**kwargs) 106 def _method_wrapper(self, *args, **kwargs): 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access --> 108 return method(self, *args, **kwargs) 109 110 # Running inside `run_distribute_coordinator` already. d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1061 use_multiprocessing=use_multiprocessing, 1062 model=self, -> 1063 steps_per_execution=self._steps_per_execution) 1064 1065 # Container that configures and calls `tf.keras.Callback`s. d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution) 1108 use_multiprocessing=use_multiprocessing, 1109 distribution_strategy=ds_context.get_strategy(), -> 1110 model=model) 1111 1112 strategy = ds_context.get_strategy() d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, workers, use_multiprocessing, max_queue_size, model, **kwargs) 796 return tensor_shape.TensorShape([None for _ in shape.as_list()]) 797 --> 798 output_shapes = nest.map_structure(_get_dynamic_shape, peek) 799 output_types = nest.map_structure(lambda t: t.dtype, peek) 800 d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\util\nest.py in map_structure(func, *structure, **kwargs) 633 634 return pack_sequence_as( --> 635 structure[0], [func(*x) for x in entries], 636 expand_composites=expand_composites) 637 d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\util\nest.py in <listcomp>(.0) 633 634 return pack_sequence_as( --> 635 structure[0], [func(*x) for x in entries], 636 expand_composites=expand_composites) 637 d:\pyenv\keras-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in _get_dynamic_shape(t) 792 shape = t.shape 793 # Unknown number of dimensions, `as_list` cannot be called. --> 794 if shape.rank is None: 795 return shape 796 return tensor_shape.TensorShape([None for _ in shape.as_list()]) AttributeError: 'tuple' object has no attribute 'rank' So I tried to look at Keras sources but without any success. If I use modified train_generator and validation_generator (y_col='target' instead of y_col=train.columns[1:]) everything works fine.
You need to convert all the individual objects returned by both the training and validation generators to Numpy arrays: yield [np.array(imgs), np.array(cols)], np.array(targets) Alternatively, a simpler and much more efficient solution is to not iterate over the data batch at all; instead, we can take advantage of the fact that these objects are already Numpy arrays when returned by ImageDataGenerator, so we can write: imgs = data[0] cols = data[1][:,:-1] targets = data[1][:,-1:] yield [imgs, cols], targets
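For reference, a hedged sketch of the whole generator rewritten with the slicing shown above; it assumes the train DataFrame and train_generator defined in the question, and the validation generator would change in the same way:
def train_generator_func():
    count = 0
    while True:
        if count == len(train.index):
            train_generator.reset()
            break
        count += 1
        data = train_generator.next()
        imgs = data[0]               # NumPy array of images
        cols = data[1][:, :-1]       # numeric feature columns
        targets = data[1][:, -1:]    # last column is the target
        yield [imgs, cols], targets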
10
9
62,701,809
2020-7-2
https://stackoverflow.com/questions/62701809/count-if-in-multiple-index-dataframe
I have a multi-index dataframe and I want to know the percentage of clients who paid a certain threshold of debt for each of the 3 criteria: City, Card and Collateral. This is a working script: import pandas as pd d = {'City': ['Tokyo','Tokyo','Lisbon','Tokyo','Tokyo','Lisbon','Lisbon','Lisbon','Tokyo','Lisbon','Tokyo','Tokyo','Tokyo','Lisbon','Tokyo','Tokyo','Lisbon','Lisbon','Lisbon','Tokyo','Lisbon','Tokyo'], 'Card': ['Visa','Visa','Master Card','Master Card','Visa','Master Card','Visa','Visa','Master Card','Visa','Master Card','Visa','Visa','Master Card','Master Card','Visa','Master Card','Visa','Visa','Master Card','Visa','Master Card'], 'Colateral':['Yes','No','Yes','No','No','No','No','Yes','Yes','No','Yes','Yes','No','Yes','No','No','No','Yes','Yes','No','No','No'], 'Client Number':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22], '% Debt Paid':[0.8,0.1,0.5,0.30,0,0.2,0.4,1,0.60,1,0.5,0.2,0,0.3,0,0,0.2,0,0.1,0.70,0.5,0.1]} df = pd.DataFrame(data=d) df1 = (df.set_index(['City','Card','Colateral']) .drop(['Client Number'],axis=1) .sum(level=[0,1,2])) df2 = df1.reindex(pd.MultiIndex.from_product(df1.index.levels), fill_value=0) And this is the result: To overcome this issue I tried the following without success: df1 = (df.set_index(['City','Card','Colateral']) .drop(['Client Number'],axis=1) [df.Total = 0].count(level=[0,1,2])/[df.Total].count() [df.Total > 0 & df.Total <=0.25 ].count(level=[0,1,2])/[df.Total].count() [df.Total > 0.25 & df.Total <=0.5 ].count(level=[0,1,2])/[df.Total]) [df.Total > 0.5 & df.Total <=0.75 ].count(level=[0,1,2])/[df.Total] [df.Total > 0.75 & df.Total <1 ].count(level=[0,1,2])/[df.Total] [df.Total = 1].count(level=[0,1,2])/[df.Total] [df.Total > 1].count(level=[0,1,2])/[df.Total]) df2 = df1.reindex(pd.MultiIndex.from_product(df1.index.levels), fill_value=0) And this is the result I wish to accomplish for all the criteria. Any thoughts on how to solve this? Thank you.
TL;DR group_cols = ['City', 'Card', 'Colateral'] debt_col = '% Debt Paid' # (1) Bin the data that is in non-zero-width intervals bins = pd.IntervalIndex.from_breaks((0, 0.25, 0.5, 0.75, 1, np.inf), closed='right') ser_pt1 = df.groupby(group_cols, sort=False)[debt_col]\ .value_counts(bins=bins, sort=False, normalize=True) # (2) Get the data from zero width intervals (0% and 100%) ser_pt2 = df[df[debt_col].isin((0, 1))]\ .groupby(group_cols)[debt_col].value_counts() # Take also "zero counts" and normalize ser_pt2 = ser_pt2.reindex( pd.MultiIndex.from_product(ser_pt2.index.levels, names=ser_pt2.index.names), fill_value=0) / df.groupby(group_cols)[debt_col].count() # (3) Combine the results ser_out = pd.concat([ser_pt1, ser_pt2]) Here's the quick-n-dirty answer. Below is a copy-pasteable full answer which also makes the index names and ordering as requested in the question. 1. Summary The problem comes more difficult to solve since the bins you want are intersecting. That is, you want to have bin for ]75, 100] and [100, 100], which both should include the case where % Debt Paid is 1.0. I would handle two cases separately (1) Binning for values ]0, 25]%, ]25, 50]%, ... ,]100%, np.inf]% (2) 0% and 100% 2. Description of solution 2.1 Binned part The binned part is calculated using gp[debt_col].value_counts, which is essentially using pd.Series.value_counts since gp is a DataFrameGroupBy object and gp[debt_col] is a SeriesGroupBy object. The bins needed for the value_counts can be created easily from a list of endpoints using pd.IntervalIndex.from_breaks The >100% is also a bin, with right endpoint at infinity (np.inf). 2.2 The rest (0% and 100%) Use the pd.Series.isin at df[debt_col].isin((0, 1)) to select the 0.0 and 1.0 cases only, and then use value_counts to count the occurences of "0%" and "100%". Then, we also need to include the cases where the count is zero. This can be done by reindexing. So, we use pd.Series.reindex to give a row for each ("City", "Card", "Colateral") combination, and form there combinations with pd.MultiIndex.from_product Lastly, we normalize the counts by dividing with the total counts in each group (df.groupby(group_cols)[debt_col].count()) 2.3 Renaming Our new index (level 3, called 'bin') is now ready, but to get the to same output as in the OP's question, we need to rename the index labels. This is done just looping over the values and using a "lookup dictionary" for new names The ordering of the labels in the index is by default taken from the numerical/alphabetical ordering but this is not what we want. To force the index order after sorting it, we must use pd.Categorical as the index. The order for sorting is given in the categories argument. We rely on the fact that in python 3.6+ dictionaries preserve ordering. For some reason the ser_out.sort_index() did not work out even with a categorical index. I am thinking it might be a bug in the pandas. Therefore, the result Series ser_out is casted to a DataFrame df_out, and the sorting is made using dataframe. Lastly, the resulting dataframe is made MultiIndex with set_index. Code Zero-width bins cause the value_counts to yield really bizarre results. Maybe this is a bug of pandas. 
Therefore, let's divide the problem into two steps (1) Count the data in the non-zero-width bins (2) Count the data in zero-width bins ("0%" and "100%") import pandas as pd import numpy as np d = {'City': ['Tokyo','Tokyo','Lisbon','Tokyo','Tokyo','Lisbon','Lisbon','Lisbon','Tokyo','Lisbon','Tokyo','Tokyo','Tokyo','Lisbon','Tokyo','Tokyo','Lisbon','Lisbon','Lisbon','Tokyo','Lisbon','Tokyo'], 'Card': ['Visa','Visa','Master Card','Master Card','Visa','Master Card','Visa','Visa','Master Card','Visa','Master Card','Visa','Visa','Master Card','Master Card','Visa','Master Card','Visa','Visa','Master Card','Visa','Master Card'], 'Colateral':['Yes','No','Yes','No','No','No','No','Yes','Yes','No','Yes','Yes','No','Yes','No','No','No','Yes','Yes','No','No','No'], 'Client Number':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22], '% Debt Paid':[0.8,0.1,0.5,0.30,0,0.2,0.4,1,0.60,1,0.5,0.2,0,0.3,0,0,0.2,0,0.1,0.70,0.5,0.1]} df = pd.DataFrame(data=d) def _get_binned_part(df, group_cols, debt_col): bins = pd.IntervalIndex.from_breaks((0, 0.25, 0.5, 0.75, 1, np.inf), closed='right') gp = df[group_cols + [debt_col]].groupby(group_cols, sort=False) ser_pt1 = gp[debt_col].value_counts(bins=bins, sort=False, normalize=True) ser_pt1.index.set_names('bin', level=3, inplace=True) return ser_pt1 def _get_non_binned_part(df, group_cols, debt_col): # Count 0% and 100% occurences ser_pt2 = df[df[debt_col].isin((0, 1))]\ .groupby(group_cols)[debt_col].value_counts() # include zero counts ser_pt2 = ser_pt2.reindex(pd.MultiIndex.from_product( ser_pt2.index.levels, names=ser_pt2.index.names), fill_value=0) ser_pt2.index.set_names('bin', level=3, inplace=True) # ser_counts has the counts for normalization. ser_counts = df.groupby(group_cols)[debt_col].count() ser_pt2 = ser_pt2 / ser_counts return ser_pt2 def _rename_bins(ser_out, group_cols, debt_col): bin_names = [] bin_name_dict = { '0.0': '0%', '(0.0, 0.25]': ']0, 25]%', '(0.25, 0.5]': ']25, 50]%', '(0.5, 0.75]': ']50, 75]%', '(0.75, 1.0]': ']75, 100]%', '1.0': '100%', '(1.0, inf]': '>100%', } bin_order = list(bin_name_dict.values()) for val in ser_out.index.levels[3].values: bin_names.append(bin_name_dict.get(val.__str__(), val.__str__())) bin_categories = pd.Categorical(bin_names, categories=bin_order, ordered=True) ser_out.index.set_levels(bin_categories, level=3, inplace=True) # For some reason, .sort_index() does not sort correcly # -> Make it a dataframe and sort there. 
df_out = ser_out.reset_index() df_out['bin'] = pd.Categorical(df_out['bin'].values, bin_order, ordered=True) df_out = df_out.sort_values(group_cols + ['bin']).set_index(group_cols + ['bin']) df_out.rename(columns={debt_col: 'in_bin'}, inplace=True) df_out['in_bin'] = (df_out['in_bin'] * 100).round(2) return df_out def get_results(df): group_cols = ['City', 'Card', 'Colateral'] debt_col = '% Debt Paid' ser_pt1 = _get_binned_part(df, group_cols, debt_col) ser_pt2 = _get_non_binned_part(df, group_cols, debt_col) ser_out = pd.concat([ser_pt1, ser_pt2]) df_out = _rename_bins(ser_out, group_cols, debt_col) return df_out df_out = get_results(df) Example output In [1]: df_out Out[1]: in_bin City Card Colateral bin Lisbon Master Card No 0% 0.00 ]0, 25]% 100.00 ]25, 50]% 0.00 ]50, 75]% 0.00 ]75, 100]% 0.00 100% 0.00 >100% 0.00 Yes 0% 0.00 ]0, 25]% 0.00 ]25, 50]% 100.00 ]50, 75]% 0.00 ]75, 100]% 0.00 100% 0.00 >100% 0.00 Visa No 0% 0.00 ]0, 25]% 0.00 ]25, 50]% 66.67 ]50, 75]% 0.00 ]75, 100]% 33.33 100% 33.33 >100% 0.00 Yes 0% 33.33 ]0, 25]% 33.33 ]25, 50]% 0.00 ]50, 75]% 0.00 ]75, 100]% 33.33 100% 33.33 >100% 0.00 Tokyo Master Card No 0% 25.00 ]0, 25]% 25.00 ]25, 50]% 25.00 ]50, 75]% 25.00 ]75, 100]% 0.00 100% 0.00 >100% 0.00 Yes 0% 0.00 ]0, 25]% 0.00 ]25, 50]% 50.00 ]50, 75]% 50.00 ]75, 100]% 0.00 100% 0.00 >100% 0.00 Visa No 0% 75.00 ]0, 25]% 25.00 ]25, 50]% 0.00 ]50, 75]% 0.00 ]75, 100]% 0.00 100% 0.00 >100% 0.00 Yes 0% 0.00 ]0, 25]% 50.00 ]25, 50]% 0.00 ]50, 75]% 0.00 ]75, 100]% 50.00 100% 0.00 >100% 0.00 Appendix Desired example output: "Lisbon, Visa, No" With this combination In [1]: df.loc[ (df['City'] == 'Lisbon') & (df['Card'] == 'Visa') & (df['Colateral'] == 'No')] Out[1]: City Card Colateral Client Number % Debt Paid 6 Lisbon Visa No 7 0.4 9 Lisbon Visa No 10 1.0 20 Lisbon Visa No 21 0.5 the output data table should have 0% 0% ]0, 25]% 0% ]25, 50]% 66.7% ]50, 75]% 0% ]75, 100]% 33.3% 100% 33.3% >100% 0% Note that the one intersecting bin pair (]75, 100] and [100, 100]) will cause the total sum of the ouput column to be sometimes greater than 100%.
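A minimal toy sketch of the core binning idea used above, with made-up values and pd.cut called directly; note that an exact 0.0 falls outside every right-closed bin (it becomes NaN and is not counted), which is exactly why the answer handles 0% and 100% separately:
import numpy as np
import pandas as pd

s = pd.Series([0.0, 0.1, 0.3, 0.5, 0.8, 1.0, 1.2])
bins = pd.IntervalIndex.from_breaks((0, 0.25, 0.5, 0.75, 1, np.inf), closed='right')
# Count how many values land in each interval; 0.0 is outside all bins and is dropped.
print(pd.cut(s, bins).value_counts(sort=False))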
7
4
62,765,652
2020-7-6
https://stackoverflow.com/questions/62765652/how-to-debug-the-stack-trace-that-causes-a-subsequent-exception-in-python
Python (and ipython) has very powerful post-mortem debugging capabilities, allowing variable inspection and command execution at each scope in the traceback. The up/down debugger commands allow changing frame for the stack trace of the final exception, but what about the __cause__ of that exception, as defined by the raise ... from ... syntax? Python 3.7.6 (default, Jan 8 2020, 13:42:34) Type 'copyright', 'credits' or 'license' for more information IPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: def foo(): ...: bab = 42 ...: raise TypeError ...: In [2]: try: ...: foo() ...: except TypeError as err: ...: barz = 5 ...: raise ValueError from err ...: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-2-dd046d7cece0> in <module> 1 try: ----> 2 foo() 3 except TypeError as err: <ipython-input-1-da9a05838c59> in foo() 2 bab = 42 ----> 3 raise TypeError 4 TypeError: The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) <ipython-input-2-dd046d7cece0> in <module> 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError from err 6 ValueError: In [3]: %debug > <ipython-input-2-dd046d7cece0>(5)<module>() 2 foo() 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError from err 6 ipdb> barz 5 ipdb> bab *** NameError: name 'bab' is not defined ipdb> down *** Newest frame ipdb> up *** Oldest frame Is there a way to access bab from the debugger? EDIT: I realized post-mortem debugging isn't just a feature of ipython and ipdb, it's actually part of vanilla pdb. The above can also be reproduced by putting the code into a script testerr.py and running python -m pdb testerr.py and running continue. After the error, it says Uncaught exception. Entering post mortem debugging Running 'cont' or 'step' will restart the program and gives a debugger at the same spot.
You can use the with_traceback(tb) method to preserve the original exception's traceback: try: foo() except TypeError as err: barz = 5 raise ValueError().with_traceback(err.__traceback__) from err Note that I have updated the code to raise an exception instance rather than the exception class. Here is the full code snippet in iPython: In [1]: def foo(): ...: bab = 42 ...: raise TypeError() ...: In [2]: try: ...: foo() ...: except TypeError as err: ...: barz = 5 ...: raise ValueError().with_traceback(err.__traceback__) from err ...: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-2-a5a6d81e4c1a> in <module> 1 try: ----> 2 foo() 3 except TypeError as err: <ipython-input-1-ca1efd1bee60> in foo() 2 bab = 42 ----> 3 raise TypeError() 4 TypeError: The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) <ipython-input-2-a5a6d81e4c1a> in <module> 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError().with_traceback(err.__traceback__) from err 6 <ipython-input-2-a5a6d81e4c1a> in <module> 1 try: ----> 2 foo() 3 except TypeError as err: 4 barz = 5 5 raise ValueError().with_traceback(err.__traceback__) from err <ipython-input-1-ca1efd1bee60> in foo() 1 def foo(): 2 bab = 42 ----> 3 raise TypeError() 4 ValueError: In [3]: %debug > <ipython-input-1-ca1efd1bee60>(3)foo() 1 def foo(): 2 bab = 42 ----> 3 raise TypeError() 4 ipdb> bab 42 ipdb> u > <ipython-input-2-a5a6d81e4c1a>(2)<module>() 1 try: ----> 2 foo() 3 except TypeError as err: 4 barz = 5 5 raise ValueError().with_traceback(err.__traceback__) from err ipdb> u > <ipython-input-2-a5a6d81e4c1a>(5)<module>() 2 foo() 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError().with_traceback(err.__traceback__) from err 6 ipdb> barz 5 EDIT - An alternative inferior approach Addressing @user2357112supportsMonica's first comment, if you wish to avoid multiple dumps of the original exception's traceback in the log, it's possible to raise from None. However, as @user2357112supportsMonica's second comment states, this hides the original exception's message. This is particularly problematic in the common case where you're not post-mortem debugging but rather inspecting a printed traceback. 
try: foo() except TypeError as err: barz = 5 raise ValueError().with_traceback(err.__traceback__) from None Here is the code snippet in iPython: In [4]: try: ...: foo() ...: except TypeError as err: ...: barz = 5 ...: raise ValueError().with_traceback(err.__traceback__) from None ...: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-b090fb9c510e> in <module> 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError().with_traceback(err.__traceback__) from None 6 <ipython-input-6-b090fb9c510e> in <module> 1 try: ----> 2 foo() 3 except TypeError as err: 4 barz = 5 5 raise ValueError().with_traceback(err.__traceback__) from None <ipython-input-2-ca1efd1bee60> in foo() 1 def foo(): 2 bab = 42 ----> 3 raise TypeError() 4 ValueError: In [5]: %debug > <ipython-input-2-ca1efd1bee60>(3)foo() 1 def foo(): 2 bab = 42 ----> 3 raise TypeError() 4 ipdb> bab 42 ipdb> u > <ipython-input-6-b090fb9c510e>(2)<module>() 1 try: ----> 2 foo() 3 except TypeError as err: 4 barz = 5 5 raise ValueError().with_traceback(err.__traceback__) from None ipdb> u > <ipython-input-6-b090fb9c510e>(5)<module>() 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError().with_traceback(err.__traceback__) from None 6 ipdb> barz 5 Raising from None is required since otherwise the chaining would be done implicitly, attaching the original exception as the new exception’s __context__ attribute. Note that this differs from the __cause__ attribute which is set when the chaining is done explicitly. In [6]: try: ...: foo() ...: except TypeError as err: ...: barz = 5 ...: raise ValueError().with_traceback(err.__traceback__) ...: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-ee78991171cb> in <module> 1 try: ----> 2 foo() 3 except TypeError as err: <ipython-input-2-ca1efd1bee60> in foo() 2 bab = 42 ----> 3 raise TypeError() 4 TypeError: During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-5-ee78991171cb> in <module> 3 except TypeError as err: 4 barz = 5 ----> 5 raise ValueError().with_traceback(err.__traceback__) 6 <ipython-input-5-ee78991171cb> in <module> 1 try: ----> 2 foo() 3 except TypeError as err: 4 barz = 5 5 raise ValueError().with_traceback(err.__traceback__) <ipython-input-2-ca1efd1bee60> in foo() 1 def foo(): 2 bab = 42 ----> 3 raise TypeError() 4 ValueError:
12
8
62,717,970
2020-7-3
https://stackoverflow.com/questions/62717970/how-to-convert-data-type-for-list-of-tuples-string-to-float
g = [('Books', '10.000'),('Pen', '10'),('test', 'a')] Here '10.000' and '10' are strings. How do I convert to the format below, string to float? Expected output: [('Books', 10.000),('Pen', 10),('test', 'a')] Here 10.000 and 10 are floats and 'a' has to remain a string. newresult = [] for x in result: if x.isalpha(): newresult.append(x) elif x.isdigit(): newresult.append(int(x)) else: newresult.append(float(x)) print(newresult) I got the error AttributeError: 'tuple' object has no attribute 'isalpha'
You have a problem in your code because the x that you are using is a tuple. The elements of the list you provided are tuples of type (String, String), so you need one more iteration over the elements of each tuple. I have modified your code to: newresult = [] for tuple in result: temp = [] for x in tuple: if x.isalpha(): temp.append(x) elif x.isdigit(): temp.append(int(x)) else: temp.append(float(x)) newresult.append((temp[0],temp[1])) print(newresult) I have tested the code: //input result= [('Books', '10.000'),('Pen', '10'),('test', 'a')] //output [('Books', 10.0), ('Pen', 10), ('test', 'a')]
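A hedged alternative sketch (not the accepted approach above): a small helper, named convert here only for illustration, that tries int, then float, and falls back to the original string:
def convert(value):
    # Try int first, then float; leave the value unchanged if neither works.
    try:
        return int(value)
    except ValueError:
        try:
            return float(value)
        except ValueError:
            return value

g = [('Books', '10.000'), ('Pen', '10'), ('test', 'a')]
newresult = [tuple(convert(x) for x in item) for item in g]
print(newresult)  # [('Books', 10.0), ('Pen', 10), ('test', 'a')]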
9
4
62,719,641
2020-7-3
https://stackoverflow.com/questions/62719641/why-pytorch-model-takes-multiple-image-size-inside-the-model
I am using a simple object detection model in PyTorch and using the PyTorch model for inferencing. When I am using a simple iterator over the code for k, image_path in enumerate(image_list): image = imgproc.loadImage(image_path) print(image.shape) with torch.no_grad(): y, feature = net(x) result = image.cuda() it prints out variable sized images such as torch.Size([1, 3, 384, 320]) torch.Size([1, 3, 704, 1024]) torch.Size([1, 3, 1280, 1280]) But when I do batch inferencing using a DataLoader, applying the same transformation, the code does not run. However, when I resize all the images to 600x600 the batch processing runs successfully. I have two doubts: first, why is PyTorch capable of accepting dynamically sized inputs in a deep learning model, and second, why does dynamically sized input fail in batch processing?
PyTorch has what is called a Dynamic Computational Graph (other explanation). It allows the graph of the neural network to dynamically adapt to its input size, from one input to the next, during training or inference. This is what you observe in your first example: providing an image as a Tensor of size [1, 3, 384, 320] to your model, then another one as a Tensor of size [1, 3, 384, 1024], and so forth, is completely fine, as, for each input, your model will dynamically adapt. However, if your input is actually a collection of inputs (a batch), it is another story. A batch, for PyTorch, will be transformed to a single Tensor input with one extra dimension. For example, if you provide a list of n images, each of the size [1, 3, 384, 320], PyTorch will stack them, so that your model has a single Tensor input, of the shape [n, 1, 3, 384, 320]. This "stacking" can only happen between images of the same shape. To provide a more "intuitive" explanation than previous answers, this stacking operation cannot be done between images of different shapes, because the network cannot "guess" how the different images should "align" with one another in a batch, if they are not all the same size. No matter if it happens during training or testing, if you create a batch out of images of varying size, PyTorch will refuse your input. Several solutions are commonly used: reshaping as you did, adding padding (often small or null values on the border of your images) to extend your smaller images to the size of the biggest one, and so forth.
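A hedged sketch of the padding option mentioned above (not from the answer; pad_collate and dataset are made-up names): a custom collate_fn that zero-pads every image in a batch to the largest height and width before stacking:
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def pad_collate(batch):
    # batch is a list of (image, target) pairs, each image shaped [C, H, W].
    images, targets = zip(*batch)
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    padded = [
        # F.pad takes (left, right, top, bottom) padding for the last two dimensions.
        F.pad(img, (0, max_w - img.shape[2], 0, max_h - img.shape[1]))
        for img in images
    ]
    return torch.stack(padded), list(targets)

# loader = DataLoader(dataset, batch_size=4, collate_fn=pad_collate)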
9
12
62,679,083
2020-7-1
https://stackoverflow.com/questions/62679083/how-do-i-separate-overlapping-cards-from-each-other-using-python-opencv
I am trying to detect playing cards and transform them to get a bird's eye view of the card using python opencv. My code works fine for simple cases but I didn't stop at the simple cases and want to try out more complex ones. I'm having problems finding correct contours for cards. Here's an attached image where I am trying to detect cards and draw contours: My Code: path1 = "F:\\ComputerVisionPrograms\\images\\cards4.jpeg" g = cv2.imread(path1,0) img = cv2.imread(path1) edge = cv2.Canny(g,50,200) p,c,h = cv2.findContours(edge, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) rect = [] for i in c: p = cv2.arcLength(i, True) ap = cv2.approxPolyDP(i, 0.02 * p, True) if len(ap)==4: rect.append(i) cv2.drawContours(img,rect, -1, (0, 255, 0), 3) plt.imshow(img) plt.show() Result: This is not what I wanted; I wanted only the rectangular cards to be selected, but since they are occluding one another, I am not getting what I expected. I believe I need to apply morphological tricks or other operations to maybe separate them or make the edges more prominent, or maybe something else. It would be really appreciated if you could share your approach to tackle this problem. A few more examples requested by other fellows:
There are lots of approaches to find overlapping objects in the image. The information you have for sure is that your cards are all rectangles, mostly white and of the same size. Your variables are brightness, angle, and maybe some perspective distortion. If you want a robust solution, you need to address all those issues. I suggest using a Hough transform to find card edges. First, run a regular edge detection. Then you need to clean up the results, as many short edges will belong to "face" cards. I suggest using a combination of dilate(11)->erode(15)->dilate(5). This combination fills all the gaps in the "face" card, then "shrinks" down the blobs, removing the original edges along the way, and finally grows back and slightly overlaps the original face picture. Then you remove it from the original image. Now you have an image that has almost all the relevant edges. Find them using the Hough transform. It will give you a set of lines. After filtering them a little you can fit those edges to the rectangular shape of the cards. dst = cv2.Canny(img, 250, 50, None, 3) cn = cv2.dilate(dst, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))) cn = cv2.erode(cn, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))) cn = cv2.dilate(cn, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))) dst -= cn dst[dst < 127] = 0 cv2.imshow("erode-dilated", dst) # Copy edges to the images that will display the results in BGR cdstP = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR) linesP = cv2.HoughLinesP(dst, 0.7, np.pi / 720, 30, None, 20, 15) if linesP is not None: for i in range(0, len(linesP)): l = linesP[i][0] cv2.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0, 255, 0), 2, cv2.LINE_AA) cv2.imshow("Detected edges", cdstP) This will give you the following:
12
7
62,703,400
2020-7-2
https://stackoverflow.com/questions/62703400/python-how-to-type-hint-a-callable-with-wrapped
When passing around functions, I normally type hint them with typing.Callable. The docs for collections.abc.Callable state that it has four dunder methods: class collections.abc.Callable ABCs for classes that provide respectively the methods __contains__(), __hash__(), __len__(), and __call__(). At one point, I want to check if there is a __wrapped__ attribute on a function. This works fine at runtime via a check with hasattr(func, "__wrapped__"). When static type checking with mypy, it reports: error: "Callable[..., Any]" has no attribute "__wrapped__" [attr-defined]. This makes sense to me, as Callable isn't supposed to have a __wrapped__ attribute. How can I properly type hint a Callable with a __wrapped__ attribute? Is there some other type hint or workaround I can do? Code Sample I am using mypy==0.782 and Python==3.8.2: from functools import wraps from typing import Callable def print_int_arg(arg: int) -> None: """Print the integer argument.""" print(arg) @wraps(print_int_arg) def wrap_print_int_arg(arg: int) -> None: print_int_arg(arg) # do other stuff def print_is_wrapped(func: Callable) -> None: """Print if a function is wrapped.""" if hasattr(func, "__wrapped__"): # error: "Callable[..., Any]" has no attribute "__wrapped__" [attr-defined] print(f"func named {func.__name__} wraps {func.__wrapped__.__name__}.") print_is_wrapped(wrap_print_int_arg)
Obviously the easy answer is to add a # type: ignore comment. However, this isn't actually solving the problem, IMO. I decided to make a type stub for a callable with a __wrapped__ attribute. Based on this answer, here is my current solution: from typing import Callable, cast class WrapsCallable: """Stub for a Callable with a __wrapped__ attribute.""" __wrapped__: Callable __name__: str def __call__(self, *args, **kwargs): ... def print_is_wrapped(func: Callable) -> None: """Print if a function is wrapped.""" if hasattr(func, "__wrapped__"): func = cast(WrapsCallable, func) print(f"func named {func.__name__} wraps {func.__wrapped__.__name__}.") And mypy now reports Success: no issues found in 1 source file. I feel as if this is a lot of boiler-plate code, and would love a more streamlined answer.
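A hedged alternative sketch using typing.Protocol, available in the standard library from Python 3.8 (which the question targets); HasWrapped is an illustrative name, not an established one:
from typing import Any, Callable, Protocol, cast

class HasWrapped(Protocol):
    """Structural type for a callable that carries a __wrapped__ attribute."""
    __wrapped__: Callable[..., Any]
    __name__: str

    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...

def print_is_wrapped(func: Callable) -> None:
    """Print if a function is wrapped."""
    if hasattr(func, "__wrapped__"):
        wrapped = cast(HasWrapped, func)
        print(f"func named {wrapped.__name__} wraps {wrapped.__wrapped__.__name__}.")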
8
3
62,742,387
2020-7-5
https://stackoverflow.com/questions/62742387/how-to-use-weights-in-a-logistic-regression
I want to calculate (weighted) logistic regression in Python. The weights were calculated to adjust the distribution of the sample regarding the population. However, the results don´t change if I use weights. import numpy as np import pandas as pd import statsmodels.api as sm The data looks like this. The target variable is VISIT. The features are all other variables except WEIGHT_both (which is the weight I want to use). df.head() WEIGHT_both VISIT Q19_1 Q19_2 Q19_3 Q19_4 Q19_5 Q19_6 Q19_7 Q19_8 ... Q19_23 Q19_24 Q19_25 Q19_26 Q19_27 Q19_28 Q19_29 Q19_30 Q19_31 Q19_32 0 0.022320 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ... 4.0 4.0 1.0 1.0 1.0 1.0 2.0 3.0 3.0 2.0 1 0.027502 1.0 3.0 2.0 2.0 2.0 3.0 4.0 3.0 2.0 ... 3.0 2.0 2.0 2.0 2.0 4.0 2.0 4.0 2.0 2.0 2 0.022320 1.0 2.0 3.0 1.0 4.0 3.0 3.0 3.0 2.0 ... 3.0 3.0 3.0 2.0 2.0 1.0 2.0 2.0 1.0 1.0 3 0.084499 1.0 2.0 2.0 2.0 2.0 2.0 4.0 1.0 1.0 ... 2.0 2.0 1.0 1.0 1.0 2.0 1.0 2.0 1.0 1.0 4 0.022320 1.0 3.0 4.0 3.0 3.0 3.0 2.0 3.0 3.0 ... 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 Without the weight the model looks like this: X = df.drop('WEIGHT_both', axis = 1) X = X.drop('VISIT', axis = 1) X = sm.add_constant(X) w = = df['WEIGHT_both'] Y= df['VISIT'] fit = sm.Logit(Y, X).fit() fit.summary() Dep. Variable: VISIT No. Observations: 7971 Model: Logit Df Residuals: 7938 Method: MLE Df Model: 32 Date: Sun, 05 Jul 2020 Pseudo R-squ.: 0.2485 Time: 16:41:12 Log-Likelihood: -3441.2 converged: True LL-Null: -4578.8 Covariance Type: nonrobust LLR p-value: 0.000 coef std err z P>|z| [0.025 0.975] const 3.8098 0.131 29.126 0.000 3.553 4.066 Q19_1 -0.1116 0.063 -1.772 0.076 -0.235 0.012 Q19_2 -0.2718 0.061 -4.483 0.000 -0.391 -0.153 Q19_3 -0.2145 0.061 -3.519 0.000 -0.334 -0.095 With the sample weight the result looks like this (no change): fit2 = sm.Logit(Y, X, sample_weight = w).fit() # same thing if I use class_weight fit2.summary() Dep. Variable: VISIT No. Observations: 7971 Model: Logit Df Residuals: 7938 Method: MLE Df Model: 32 Date: Sun, 05 Jul 2020 Pseudo R-squ.: 0.2485 Time: 16:41:12 Log-Likelihood: -3441.2 converged: True LL-Null: -4578.8 Covariance Type: nonrobust LLR p-value: 0.000 coef std err z P>|z| [0.025 0.975] const 3.8098 0.131 29.126 0.000 3.553 4.066 Q19_1 -0.1116 0.063 -1.772 0.076 -0.235 0.012 Q19_2 -0.2718 0.061 -4.483 0.000 -0.391 -0.153 Q19_3 -0.2145 0.061 -3.519 0.000 -0.334 -0.095 I calculated the regression with other Programms (e.g. SPSS, R). The weighted result has to be different. Here is an example (R-Code). Without weights (same result as with Python code): fit = glm(VISIT~., data = df[ -c(1)] , family = "binomial") summary(fit) Call: glm(formula = VISIT ~ ., family = "binomial", data = df[-c(1)]) Deviance Residuals: Min 1Q Median 3Q Max -3.1216 -0.6984 0.3722 0.6838 2.1083 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.80983 0.13080 29.126 < 2e-16 *** Q19_1 -0.11158 0.06296 -1.772 0.076374 . Q19_2 -0.27176 0.06062 -4.483 7.36e-06 *** Q19_3 -0.21451 0.06096 -3.519 0.000434 *** Q19_4 0.22417 0.05163 4.342 1.41e-05 *** With weights: fit2 = glm(VISIT~., data = df[ -c(1)], weights = df$WEIGHT_both, family = "binomial") summary(fit2) Call: glm(formula = VISIT ~ ., family = "binomial", data = df[-c(1)], weights = df$WEIGHT_both) Deviance Residuals: Min 1Q Median 3Q Max -2.4894 -0.3315 0.1619 0.2898 3.7878 Coefficients: Estimate Std. 
Error z value Pr(>|z|) (Intercept) 4.950e-01 1.821e-01 2.718 0.006568 ** Q19_1 -6.497e-02 8.712e-02 -0.746 0.455835 Q19_2 -1.720e-02 8.707e-02 -0.198 0.843362 Q19_3 -1.114e-01 8.436e-02 -1.320 0.186743 Q19_4 1.898e-02 7.095e-02 0.268 0.789066 Any idea how to use weights in a logistic regression?
I think one way is to use smf.glm() where you can provide the weights as freq_weights , you should check this section on weighted glm and see whether it is what you want to achieve. Below I provide an example where it is used in the same way as weights= in R : import pandas as pd import numpy as np import seaborn as sns import statsmodels.formula.api as smf import statsmodels.api as sm data = sns.load_dataset("iris") data['species'] = (data['species'] == "versicolor").astype(int) fit = smf.glm("species ~ sepal_length + sepal_width + petal_length + petal_width", family=sm.families.Binomial(),data=data).fit() fit.summary() coef std err z P>|z| [0.025 0.975] Intercept 7.3785 2.499 2.952 0.003 2.480 12.277 sepal_length -0.2454 0.650 -0.378 0.706 -1.518 1.028 sepal_width -2.7966 0.784 -3.569 0.000 -4.332 -1.261 petal_length 1.3136 0.684 1.921 0.055 -0.027 2.654 petal_width -2.7783 1.173 -2.368 0.018 -5.078 -0.479 Now provide weights: wts = np.repeat(np.arange(1,6),30) fit = smf.glm("species ~ sepal_length + sepal_width + petal_length + petal_width", family=sm.families.Binomial(),data=data,freq_weights=wts).fit() fit.summary() coef std err z P>|z| [0.025 0.975] Intercept 8.7146 1.444 6.036 0.000 5.885 11.544 sepal_length -0.2053 0.359 -0.571 0.568 -0.910 0.499 sepal_width -2.7293 0.454 -6.012 0.000 -3.619 -1.839 petal_length 0.8920 0.365 2.440 0.015 0.176 1.608 petal_width -2.8420 0.622 -4.570 0.000 -4.061 -1.623 So in R you have the unweighted: glm(Species ~ .,data=data,family=binomial) Call: glm(formula = Species ~ ., family = binomial, data = data) Coefficients: (Intercept) Sepal.Length Sepal.Width Petal.Length Petal.Width 7.3785 -0.2454 -2.7966 1.3136 -2.7783 Degrees of Freedom: 149 Total (i.e. Null); 145 Residual Null Deviance: 191 Residual Deviance: 145.1 AIC: 155.1 And the weighted model glm(Species ~ .,data=data,family=binomial,weights=rep(1:5,each=30)) Call: glm(formula = Species ~ ., family = binomial, data = data, weights = rep(1:5, each = 30)) Coefficients: (Intercept) Sepal.Length Sepal.Width Petal.Length Petal.Width 8.7146 -0.2053 -2.7293 0.8920 -2.8420 Degrees of Freedom: 149 Total (i.e. Null); 145 Residual Null Deviance: 572.9 Residual Deviance: 448.9 AIC: 458.9
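Mapped back to the question's variables (X, Y and w as defined there), a hedged sketch using the array interface sm.GLM, which also accepts freq_weights; whether freq_weights or var_weights matches the intended weighting scheme is an assumption to verify for your data:
import statsmodels.api as sm

# X, Y and w as built in the question (w = df['WEIGHT_both']).
fit_weighted = sm.GLM(Y, X, family=sm.families.Binomial(), freq_weights=w).fit()
print(fit_weighted.summary())

# For non-integer precision/importance weights, var_weights may be the closer fit:
# fit_weighted = sm.GLM(Y, X, family=sm.families.Binomial(), var_weights=w).fit()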
11
13
62,721,186
2020-7-3
https://stackoverflow.com/questions/62721186/explain-a-surprising-parity-in-the-rounding-direction-of-apparent-ties-in-the-in
Consider the collection of floating-point numbers of the form 0.xx5 between 0.0 and 1.0: [0.005, 0.015, 0.025, 0.035, ..., 0.985, 0.995] I can make a list of all 100 such numbers easily in Python: >>> values = [n/1000 for n in range(5, 1000, 10)] Let's look at the first few and last few values to check we didn't make any mistakes: >>> values[:8] [0.005, 0.015, 0.025, 0.035, 0.045, 0.055, 0.065, 0.075] >>> values[-8:] [0.925, 0.935, 0.945, 0.955, 0.965, 0.975, 0.985, 0.995] Now I want to round each of these numbers to two decimal places after the point. Some of the numbers will be rounded up; some will be rounded down. I'm interested in counting exactly how many round up. I can compute this easily in Python, too: >>> sum(round(value, 2) > value for value in values) 50 So it turns out that exactly half of the 100 numbers were rounded up. If you didn't know that Python was using binary floating-point under the hood, this result wouldn't be surprising. After all, Python's documentation states clearly that the round function uses round-ties-to-even (a.k.a. Banker's rounding) as its rounding mode, so you'd expect the values to round up and round down alternately. But Python does use binary floating-point under the hood, and that means that with a handful of exceptions (namely 0.125, 0.375, 0.625 and 0.875), these values are not exact ties, but merely very good binary approximations to those ties. And not surprisingly, closer inspection of the rounding results shows that the values do not round up and down alternately. Instead, each value rounds up or down depending on which side of the decimal value the binary approximation happens to land. So there's no a priori reason to expect exactly half of the values to round up, and exactly half to round down. That makes it a little surprising that we got a result of exactly 50. But maybe we just got lucky? After all, if you toss a fair coin 100 times, getting exactly 50 heads isn't that unusual an outcome: it'll happen with around an 8% probability. But it turns out that the pattern persists with a higher number of decimal places. Here's the analogous example when rounding to 6 decimal places: >>> values = [n/10**7 for n in range(5, 10**7, 10)] >>> sum(round(value, 6) > value for value in values) 500000 And here it is again rounding apparent ties to 8 decimal places after the point: >>> values = [n/10**9 for n in range(5, 10**9, 10)] >>> sum(round(value, 8) > value for value in values) 50000000 So the question is: why do exactly half of the cases round up? Or put another way, why is it that out of all the binary approximations to these decimal ties, the number of approximations that are larger than the true value exactly matches the number of approximations that are smaller? (One can easily show that for the case that are exact, we will have the same number of rounds up as down, so we can disregard those cases.) Notes I'm assuming Python 3. On a typical desktop or laptop machine, Python's floats will be using the IEEE 754 binary64 ("double precision") floating-point format, and true division of integers and the round function will both be correctly rounded operations, using the round-ties-to-even rounding mode. While none of this is guaranteed by the language itself, the behaviour is overwhelmingly common, and we're assuming that such a typical machine is being used in this question. This question was inspired by a Python bug report: https://bugs.python.org/issue41198
It turns out that one can prove something stronger, that has nothing particularly to do with decimal representations or decimal rounding. Here's that stronger statement: Theorem. Choose a positive integer n <= 2^1021, and consider the sequence of length n consisting of the fractions 1/2n, 3/2n, 5/2n, ..., (2n-1)/2n. Convert each fraction to the nearest IEEE 754 binary64 floating-point value, using the IEEE 754 roundTiesToEven rounding direction. Then the number of fractions for which the converted value is larger than the original fraction will exactly equal the number of fractions for which the converted value is smaller than the original fraction. The original observation involving the sequence [0.005, 0.015, ..., 0.995] of floats then follows from the case n = 100 of the above statement: in 96 of the 100 cases, the result of round(value, 2) depends on the sign of the error introduced when rounding to binary64 format, and by the above statement, 48 of those cases will have positive error, and 48 will have negative error, so 48 will round up and 48 will round down. The remaining 4 cases (0.125, 0.375, 0.625, 0.875) convert to binary64 format with no change in value, and then the Banker's Rounding rule for round kicks in to round 0.125 and 0.625 down, and 0.375 and 0.875 up. Notation. Here and below, I'm using pseudo-mathematical notation, not Python notation: ^ means exponentiation rather than bitwise exclusive or, and / means exact division, not floating-point division. Example Suppose n = 11. Then we're considering the sequence 1/22, 3/22, ..., 21/22. The exact values, expressed in decimal, have a nice simple recurring form: 1/22 = 0.04545454545454545... 3/22 = 0.13636363636363636... 5/22 = 0.22727272727272727... 7/22 = 0.31818181818181818... 9/22 = 0.40909090909090909... 11/22 = 0.50000000000000000... 13/22 = 0.59090909090909090... 15/22 = 0.68181818181818181... 17/22 = 0.77272727272727272... 19/22 = 0.86363636363636363... 21/22 = 0.95454545454545454... The nearest exactly representable IEEE 754 binary64 floating-point values are: 1/22 -> 0.04545454545454545580707161889222334139049053192138671875 3/22 -> 0.13636363636363635354342704886221326887607574462890625 5/22 -> 0.2272727272727272651575702866466599516570568084716796875 7/22 -> 0.318181818181818176771713524431106634438037872314453125 9/22 -> 0.409090909090909116141432377844466827809810638427734375 11/22 -> 0.5 13/22 -> 0.59090909090909093936971885341336019337177276611328125 15/22 -> 0.68181818181818176771713524431106634438037872314453125 17/22 -> 0.7727272727272727070868540977244265377521514892578125 19/22 -> 0.86363636363636364645657295113778673112392425537109375 21/22 -> 0.954545454545454585826291804551146924495697021484375 And we see by direct inspection that when converting to float, 1/22, 9/22, 13/22, 19/22 and 21/22 rounded upward, while 3/22, 5/22, 7/22, 15/22 and 17/22 rounded downward. (11/22 was already exactly representable, so no rounding occurred.) So 5 of the 11 values were rounded up, and 5 were rounded down. The claim is that this perfect balance occurs regardless of the value of n. Computational experiments For those who might be more convinced by numerical experiments than a formal proof, here's some code (in Python). 
First, let's write a function to create the sequences we're interested in, using Python's fractions module: from fractions import Fraction def sequence(n): """ [1/2n, 3/2n, ..., (2n-1)/2n] """ return [Fraction(2*i+1, 2*n) for i in range(n)] Next, here's a function to compute the "rounding direction" of a given fraction f, which we'll define as 1 if the closest float to f is larger than f, -1 if it's smaller, and 0 if it's equal (i.e., if f turns out to be exactly representable in IEEE 754 binary64 format). Note that the conversion from Fraction to float is correctly rounded under roundTiesToEven on a typical IEEE 754-using machine, and that the order comparisons between a Fraction and a float are computed using the exact values of the numbers involved. def rounding_direction(f): """ 1 if float(f) > f, -1 if float(f) < f, 0 otherwise """ x = float(f) if x > f: return 1 elif x < f: return -1 else: return 0 Now to count the various rounding directions for a given sequence, the simplest approach is to use collections.Counter: from collections import Counter def round_direction_counts(n): """ Count of rounding directions for sequence(n). """ return Counter(rounding_direction(value) for value in sequence(n)) Now we can put in any integer we like to observe that the count for 1 always matches the count for -1. Here's a handful of examples, starting with the n = 100 example that started this whole thing: >>> round_direction_counts(100) Counter({1: 48, -1: 48, 0: 4}) >>> round_direction_counts(237) Counter({-1: 118, 1: 118, 0: 1}) >>> round_direction_counts(24) Counter({-1: 8, 0: 8, 1: 8}) >>> round_direction_counts(11523) Counter({1: 5761, -1: 5761, 0: 1}) The code above is unoptimised and fairly slow, but I used it to run tests up to n = 50000 and checked that the counts were balanced in each case. As an extra, here's an easy way to visualise the roundings for small n: it produces a string containing + for cases that round up, - for cases that round down, and . for cases that are exactly representable. So our theorem says that each signature has the same number of + characters as - characters. def signature(n): """ String visualising rounding directions for given n. """ return "".join(".+-"[rounding_direction(value)] for value in sequence(n)) And some examples, demonstrating that there's no immediately obvious pattern: >>> signature(10) '+-.-+++.--' >>> signature(11) '+---+.+--++' >>> signature(23) '---+++-+-+-.-++--++--++' >>> signature(59) '-+-+++--+--+-+++---++---+++--.-+-+--+-+--+-+-++-+-++-+-++-+' >>> signature(50) '+-++-++-++-+.+--+--+--+--+++---+++---.+++---+++---' Proof of the statement The original proof I gave was unnecessarily complicated. Following a suggestion from Tim Peters, I realised that there's a much simpler one. You can find the old one in the edit history, if you're really interested. The proof rests on three simple observations. Two of those are floating-point facts; the third is a number-theoretic observation. Observation 1. For any (non-tiny, non-huge) positive fraction x, x rounds "the same way" as 2x. If y is the closest binary64 float to x, then 2y is the closest binary64 float to 2x. So if x rounds up, so does 2x, and if x rounds down, so does 2x. If x is exactly representable, so is 2x. Small print: "non-tiny, non-huge" should be interpreted to mean that we avoid the extremes of the IEEE 754 binary64 exponent range. Strictly, the above statement applies for all x in the interval [-2^1022, 2^1023). 
There's a corner-case involving infinity to be careful of right at the top end of that range: if x rounds to 2^1023, then 2x rounds to inf, so the statement still holds in that corner case. Observation 1 implies that (again provided that underflow and overflow are avoided), we can scale any fraction x by an arbitrary power of two without affecting the direction it rounds when converting to binary64. Observation 2. If x is a fraction in the closed interval [1, 2], then 3 - x rounds the opposite way to x. This follows because if y is the closest float to x (which implies that y must also be in the interval [1.0, 2.0]), then thanks to the even spacing of floats within [1, 2], 3 - y is also exactly representable and is the closest float to 3 - x. This works even for ties under the roundTiesToEven definition of "closest", since the last bit of y is even if and only if the last bit of 3 - y is. So if x rounds up (i.e., y is greater than x), then 3 - y is smaller than 3 - x and so 3 - x rounds down. Similarly, if x is exactly representable, so is 3 - x. Observation 3. The sequence 1/2n, 3/2n, 5/2n, ..., (2n-1)/2n of fractions is equal to the sequence n/n, (n+1)/n, (n+2)/n, ..., (2n-1)/n, up to scaling by powers of two and reordering. This is just a scaled version of a simpler statement, that the sequence 1, 3, 5, ..., 2n-1 of integers is equal to the sequence n, n+1, ..., 2n-1, up to scaling by powers of two and reordering. That statement is perhaps easiest to see in the reverse direction: start out with the sequence n, n+1, n+2, ...,2n-1, and then divide each integer by its largest power-of-two divisor. What you're left with must be, in each case, an odd integer smaller than 2n, and it's easy to see that no such odd integer can occur twice, so by counting we must get every odd integer in 1, 3, 5, ..., 2n - 1, in some order. With these three observations in place, we can complete the proof. Combining Observation 1 and Observation 3, we get that the cumulative rounding directions (i.e., the total counts of rounds-up, rounds-down, stays-the-same) of 1/2n, 3/2n, ..., (2n-1)/2n exactly match the cumulative rounding directions of n/n, (n+1)/n, ..., (2n-1)/n. Now n/n is exactly one, so is exactly representable. In the case that n is even, 3/2 also occurs in this sequence, and is exactly representable. The rest of the values can be paired with each other in pairs that add up to 3: (n+1)/n pairs with (2n-1)/n, (n+2)/n pairs with (2n-2)/n, and so-on. And now by Observation 2, within each pair either one value rounds up and one value rounds down, or both values are exactly representable. So the sequence n/n, (n+1)/2n, ..., (2n-1)/n has exactly as many rounds-down cases as rounds-up cases, and hence the original sequence 1/2n, 3/2n, ..., (2n-1)/2n has exactly as many rounds-down cases as rounds-up cases. That completes the proof. Note: the restriction on the size of n in the original statement is there to ensure that none of our sequence elements lie in the subnormal range, so that Observation 1 can be used. The smallest positive binary64 normal value is 2^-1022, so our proof works for all n <= 2^1021.
8
4
62,697,599
2020-7-2
https://stackoverflow.com/questions/62697599/unable-to-send-receive-data-via-hc-12-uart-in-python
I've written some code to communicate between two Raspberry Pi's, using identical HC-12 433 MHz transceivers. I was able to successfully echo between the two Pi's using a direct serial connection and echo/cat; however, I am unable to replicate this using the HC-12s, which theoretically work on a similar principle. I'm using the port ttyAMA0 on both for this example, but ttyS0 is also available, and I have tried every combination of these ports. The following code is common to both the sending and receiving sides, written just once for the sake of brevity: import serial import time ser = serial.Serial( port = "/dev/ttyAMA0", baudrate = 9600, parity = serial.PARITY_NONE, stopbits = serial.STOPBITS_ONE, bytesize = serial.EIGHTBITS ) print("Serial status: " + str(ser.isOpen())) This is the sending program: while True: print("Sending...") ser.write("hello\n".encode()) time.sleep(1) And the receiving program: while True: print("Receiving...") data = ser.readlines() print(data.decode()) The sending program simply loops as expected, but the receiver prints "Receiving...", and then nothing. When I keyboard-interrupt the receiving program at that point, it says it is currently up to data = ser.readlines(). Any help would be much appreciated - I've spent the better part of the last week trawling and exhausting forums and READMEs to no avail, and this is literally my last option. Am close to insanity on this one!
The pyserial readlines() function relies on the timeout parameter to know when end-of-file is reached - this is warned about in the doco. So with no timeout, the end never occurs, so it keeps buffering all lines read forever. So you can just add a timeout to the serial port open, and your existing code will begin to work. ser = serial.Serial( port = "/dev/ttyAMA0", baudrate = 9600, parity = serial.PARITY_NONE, stopbits = serial.STOPBITS_ONE, bytesize = serial.EIGHTBITS, timeout = 2 # seconds # <-- HERE ) A better approach might be to use readline() (note singular, no 's'), for each line in turn: print( "Receiving..." ) while True: try: data = ser.readline() print( data.decode() ) # TODO - something with data except: print( "Error reading from port" ) break As that will allow the code to act on the input line-by-line.
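A hedged follow-up sketch reusing the ser object opened above: with a timeout set, readline() returns an empty bytes object when nothing arrives in time, so the loop can skip those reads instead of printing blank lines:
while True:
    data = ser.readline()        # returns b"" if the timeout expired with no data
    if not data:
        continue                 # nothing received this time, keep listening
    print(data.decode().strip())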
8
4
62,782,979
2020-7-7
https://stackoverflow.com/questions/62782979/logger-info-not-working-in-django-logging
Following is the logging snippet I have used in my django settings.py file. All the GET,POST requests are getting written to log but when i wrote logger.info("print something"), its not getting printed/captured in console as well as the log file Please suggest a workaround to capture logger.info() logs views.py import logging logger = logging.getLogger(__name__) def custom_data_generator(request): logger.info("print somethig") # NOT GETTING CAPTURED IN LOG FILE return somethig settings.py (DEBUG = True and DEBUG_MODE = False in settings.py file) LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'simple': { 'format': '[%(asctime)s] %(levelname)s|%(name)s|%(message)s', 'datefmt': '%Y-%m-%d %H:%M:%S', }, }, 'handlers': { 'applogfile': { 'level': 'DEBUG', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/home/mahesh/Documents/refactor/unityapp/unity/media/myproject.log', 'backupCount': 10, 'formatter': 'simple', }, 'console': { 'level': 'DEBUG', 'class': 'logging.StreamHandler', 'formatter': 'simple' } }, 'loggers': { 'django': { 'handlers': ['applogfile', 'console'], 'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO'), } } } log data generated as follows [2020-07-07 11:43:25] ERROR|django.server|"GET /a11y/project-dashboard/? refnum=ACGLOBAL&env_id=4 HTTP/1.1" 500 92016 [2020-07-07 12:05:21] INFO|django.server|"GET /admin/ HTTP/1.1" 200 59501 [2020-07-07 12:05:21] INFO|django.server|"GET /admin/ HTTP/1.1" 200 59501 [2020-07-07 12:05:21] INFO|django.server|"GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 200 85692 [2020-07-07 12:05:21] INFO|django.server|"GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1" 200 86184 [2020-07-07 12:05:21] INFO|django.server|"GET /static/admin/fonts/Roboto-Regular-webfont.woff HTTP/1.1" 200 85876 [2020-07-07 12:05:26] INFO|django.server|"GET /admin/accessibility/axe_json/ HTTP/1.1" 200 1886434 [2020-07-07 12:05:27] INFO|django.server|"GET /admin/jsi18n/ HTTP/1.1" 200 3223 [2020-07-07 12:05:27] INFO|django.server|"GET /static/admin/js/vendor/jquery/jquery.js HTTP/1.1" 200 280364 [2020-07-07 12:05:27] INFO|django.server|"GET /static/admin/js/vendor/xregexp/xregexp.js HTTP/1.1" 200 128820 [2020-07-07 12:05:34] INFO|django.server|"GET /admin/accessibility/axe_json/?page_id=https%3A%2F%2Fjobs.chegg.com%2Fapplythankyou HTTP/1.1" 200 1868950 [2020-07-07 12:05:35] INFO|django.server|"GET /admin/jsi18n/ HTTP/1.1" 200 3223
It's probably because your views module doesn't have a logging level set, so it will inherit the root logger's default level of WARNING. If you add a root entry with a level of INFO, similarly to the documented examples, you should see messages from other modules. Alternatively you can specify logger names under the loggers key for your specific module hierarchy, whatever that is. (Your example only overrides the WARNING level for modules in the django hierarchy, i.e. code in Django itself.)
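For illustration, a stripped-down sketch of that settings change, keeping only the console handler from the question for brevity (the file handler would be added back the same way); treat it as one possible layout rather than the only valid one:

import os

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'level': 'DEBUG', 'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django': {'handlers': ['console'], 'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO')},
    },
    # catch-all for your own code (e.g. the views module above); without this,
    # unconfigured loggers fall back to the root logger's WARNING default
    'root': {'handlers': ['console'], 'level': 'INFO'},
}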
8
8
62,770,893
2020-7-7
https://stackoverflow.com/questions/62770893/how-to-add-another-attribute-in-dictionary-inside-a-one-line-for-loop
I have a list of dictionary and a string. I want to add a selected attribute in each dictionary inside the list. I am wondering if this is possible using a one liner. Here are my inputs: saved_fields = "apple|cherry|banana".split('|') fields = [ { 'name' : 'cherry' }, { 'name' : 'apple' }, { 'name' : 'orange' } ] This is my expected output: [ { 'name' : 'cherry', 'selected' : True }, { 'name' : 'apple', 'selected' : True }, { 'name' : 'orange', 'selected' : False } ] I tried this: new_fields = [item [item['selected'] if item['name'] in saved_fields] for item in fields]
I don't necessarily think "one line way" is the best way. s = set(saved_fields) # set lookup is more efficient for d in fields: d['status'] = d['name'] in s fields # [{'name': 'cherry', 'status': True}, # {'name': 'apple', 'status': True}, # {'name': 'orange', 'status': False}] Simple. Explicit. Obvious. This updates your dictionary in-place, which is better if you have a lot of records or other keys besides "name" and "status" that you haven't told us about. If you insist on a one-liner, this is one preserves other keys: [{**d, 'status': d['name'] in s} for d in fields] # [{'name': 'cherry', 'status': True}, # {'name': 'apple', 'status': True}, # {'name': 'orange', 'status': False}] This is list comprehension syntax and creates a new list of dictionaries, leaving the original untouched. The {**d, ...} portion is necessary to preserve keys that are not otherwise modified. I didn't see any other answers doing this, so thought it was worth calling out. The extended unpacking syntax works for python3.5+ only, for older versions, change {**d, 'status': d['name'] in s} to dict(d, **{'status': d['name'] in s}).
8
15
62,780,290
2020-7-7
https://stackoverflow.com/questions/62780290/more-efficient-way-to-add-columns-with-same-string-values-in-multiple-dataframes
I want to add a new column, Category, in each of my 8 similar dataframes. The values in this column are the same, they are also the df name, like df1_p8 in this example. I have used: In: df61_p8.insert(3,"Category","df61_p8", True) # or simply, df61_p8['Category']='df61_p8' Out: code violation_description Category 89491 9-1-503 Defective or obstructed duct system one- building df61_p8 102045 9-1-503 Defective or obstructed duct system one- building df61_p8 103369 9-1-503 Defective or obstructed duct system one- building df61_p8 130440 9-1-502 Failure to maintain at least one (1) elevator df61_p8 132446 9-1-503 Defective or obstructed duct system one- building df61_p8 Ultimately, I want to append/concat these 8 dataframes into one dataframe. I wonder if there is more efficient way to do it, rather than using .insert one by one on each dataframe. Something like loops or lambdas.. As a beginner, I am not sure how to apply them in my case? thank you. append_alldfs = [] x=[df61_p1,df61_p2,df61_p3,df61_p4,df61_p5,df61_p6,df61_p7,df61_p8] lambdafunc = lambda x: x.insert(3,"Category","x",True)
Keep it simple and explicit. for col_val, df in [ ('df61_p1', df61_p1), ('df61_p2', df61_p2), ('df61_p3', df61_p3), ('df61_p4', df61_p4), ('df61_p5', df61_p5), ('df61_p6', df61_p6), ('df61_p7', df61_p7), ('df61_p8', df61_p8), ]: df['Category'] = col_val While there are certainly more 'meta-programming-ey' ways of accomplishing the same task, these are usually quite convoluted and more complicated to understand and refactor. Given the structure of this code, however, I imagine that there are ways you could get rid of this problem before you even get to this point. For example, at what point did those dataframes get split up? Perhaps by never using separate DataFrames in the first place [keep the original dataframe together/concat at beginning] (and using apply, groupby, pivot and melt operations as needed), you can avoid this problem altogether.
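As a rough sketch of that last suggestion (keeping the frames together and concatenating up front), assuming df61_p1 … df61_p8 exist exactly as in the question:

import pandas as pd

# map each label to its frame once, instead of inserting the label eight times
frames = {
    'df61_p1': df61_p1, 'df61_p2': df61_p2, 'df61_p3': df61_p3, 'df61_p4': df61_p4,
    'df61_p5': df61_p5, 'df61_p6': df61_p6, 'df61_p7': df61_p7, 'df61_p8': df61_p8,
}

# the dict keys become an index level, which is then turned into the Category column
combined = (
    pd.concat(frames)
      .reset_index(level=0)
      .rename(columns={'level_0': 'Category'})
)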
7
2
62,713,741
2020-7-3
https://stackoverflow.com/questions/62713741/tkinter-and-32-bit-unicode-duplicating-any-fix
I only want to show Chip, but I get both Chip AND Dale. It doesn't seem to matter which 32 bit character I put in, tkinter seems to duplicate them - it's not just chipmunks. I'm thinking that I may have to render them to png and then place them as images, but that seems a bit ... heavy-handed. Any other solutions? Is tkinter planning on fixing this? import tkinter as tk # Python 3.8.3 class Application(tk.Frame): def __init__(self, master=None): self.canvas = None self.quit_button = None tk.Frame.__init__(self, master) self.grid() self.create_widgets() def create_widgets(self): self.canvas = tk.Canvas(self, width=500, height=420, bg='yellow') self.canvas.create_text(250, 200, font="* 180", text='\U0001F43F') self.canvas.grid() self.quit_button = tk.Button(self, text='Quit', command=self.quit) self.quit_button.grid() app = Application() app.master.title('Emoji') app.mainloop() Apparently this works fine on Windows - so maybe it’s a MacOS issue. I've run it on two separate Mac - both of them on the latest OS Catalina 10.15.5 - and both show the problem The bug shows with the standard Python installer from python.org - Python 3.8.3 with Tcl/Tk 8.6.8 Supposedly it might be fixed with Tcl/Tk 8.6.10 - but I don't really see how I can upgrade Tcl/Tk using the normal installer. This is also reported as a bug cf. https://bugs.python.org/issue41212 One of the python contributors believes that TCL/Tk can-not/will-not support variable width encoding (it always internally converts fixed width encoding) which indicates to me that Tcl/Tk is not suitable for general UTF-8 development.
The fundamental problem is that Tcl and Tk are not very happy with non-BMP (Unicode Basic Multilingual Plane) characters. Prior to 8.6.10, what happens is anyone's guess; the implementation simply assumed such characters didn't exist and was known to be buggy when they actually turned up (there's several tickets on various aspects of this). 8.7 will have stronger fixes in place (see TIP #389 for the details) — the basic aim is that if you feed non-BMP characters in, they can be got out at the other side so they can be written to a UTF-8 file or displayed by Tk if the font engine deigns to support them — but some operations will still be wrong as the string implementation will still be using surrogates. 9.0 will fix things properly (by changing the fundamental character storage unit to be large enough to accommodate any Unicode codepoint) but that's a disruptive change. With released versions, if you can get the surrogates over the wall from Python to Tcl, they'll probably end up in the GUI engine which might do the right thing. In some cases (not including any build I've currently got, FWIW, but I've got strange builds so don't read very much into that). With 8.7, sending over UTF-8 will be able to work; that's part of the functionality profile that will be guaranteed. (The encoding functions exist in older versions, but with 8.6 releases they will do the wrong thing with non-BMP UTF-8 and break weirdly with older versions than that.)
7
7
62,760,929
2020-7-6
https://stackoverflow.com/questions/62760929/how-can-i-run-a-streamlit-app-from-within-a-python-script
Is there a way to run the command streamlit run APP_NAME.py from within a python script, that might look something like: import streamlit streamlit.run("APP_NAME.py") As the project I'm working on needs to be cross-platform (and packaged), I can't safely rely on a call to os.system(...) or subprocess.
Hopefully this works for others: I looked into the actual streamlit file in my python/conda bin, and it had these lines: import re import sys from streamlit.cli import main if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) sys.exit(main()) From here, you can see that running streamlit run APP_NAME.py on the command line is the same (in python) as: import sys from streamlit import cli as stcli if __name__ == '__main__': sys.argv = ["streamlit", "run", "APP_NAME.py"] sys.exit(stcli.main()) So I put that in another script, and then run that script to run the original app from python, and it seemed to work. I'm not sure how cross-platform this answer is though, as it still relies somewhat on command line args.
28
28
62,768,327
2020-7-7
https://stackoverflow.com/questions/62768327/typing-protocol-class-init-method-not-called-during-explicit-subtype-const
Python's PEP 544 introduces typing.Protocol for structural subtyping, a.k.a. "static duck typing". In this PEP's section on Merging and extending protocols, it is stated that The general philosophy is that protocols are mostly like regular ABCs, but a static type checker will handle them specially. Thus, one would expect to inherit from a subclass of typing.Protocol in much the same way that one expects to inherit from a subclasses of abc.ABC: from abc import ABC from typing import Protocol class AbstractBase(ABC): def method(self): print("AbstractBase.method called") class Concrete1(AbstractBase): ... c1 = Concrete1() c1.method() # prints "AbstractBase.method called" class ProtocolBase(Protocol): def method(self): print("ProtocolBase.method called") class Concrete2(ProtocolBase): ... c2 = Concrete2() c2.method() # prints "ProtocolBase.method called" As expected, the concrete subclasses Concrete1 and Concrete2 inherit method from their respective superclasses. This behavior is documented in the Explicitly declaring implementation section of the PEP: To explicitly declare that a certain class implements a given protocol, it can be used as a regular base class. In this case a class could use default implementations of protocol members. ... Note that there is little difference between explicit and implicit subtypes, the main benefit of explicit subclassing is to get some protocol methods "for free". However, when the protocol class implements the __init__ method, __init__ is not inherited by explicit subclasses of the protocol class. This is in contrast to subclasses of an ABC class, which do inherit the __init__ method: from abc import ABC from typing import Protocol class AbstractBase(ABC): def __init__(self): print("AbstractBase.__init__ called") class Concrete1(AbstractBase): ... c1 = Concrete1() # prints "AbstractBase.__init__ called" class ProtocolBase(Protocol): def __init__(self): print("ProtocolBase.__init__ called") class Concrete2(ProtocolBase): ... c2 = Concrete2() # NOTHING GETS PRINTED We see that, Concrete1 inherits __init__ from AbstractBase, but Concrete2 does not inherit __init__ from ProtocolBase. This is in contrast to the previous example, where Concrete1 and Concrete2 both inherit method from their respective superclasses. My questions are: What is the rationale behind not having __init__ inherited by explicit subtypes of a protocol class? Is there some type-theoretic reason for protocol classes not being able to supply an __init__ method "for free"? Is there any documentation concerning this discrepancy? Or is it a bug?
You can't instantiate a protocol class directly. This is currently implemented by replacing a protocol's __init__ with a method whose sole function is to enforce this restriction: def _no_init(self, *args, **kwargs): if type(self)._is_protocol: raise TypeError('Protocols cannot be instantiated') ... class Protocol(Generic, metaclass=_ProtocolMeta): ... def __init_subclass__(cls, *args, **kwargs): ... cls.__init__ = _no_init Your __init__ doesn't execute because it isn't there any more. This is pretty weird and messes with even more stuff than it looks like at first glance - for example, it interacts poorly with multiple inheritance, interrupting super().__init__ chains.
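You can observe the swap directly on the interpreter versions this answer describes (the behaviour may differ on later Python releases):

from typing import Protocol

class ProtocolBase(Protocol):
    def __init__(self):
        print("ProtocolBase.__init__ called")

class Concrete2(ProtocolBase):
    ...

# the function bound as __init__ is typing's replacement, not the one defined above
print(Concrete2.__init__.__name__)   # '_no_init' on e.g. Python 3.8/3.9
Concrete2()                          # runs silently; nothing is printed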
12
16
62,757,921
2020-7-6
https://stackoverflow.com/questions/62757921/is-aws-boto-python-supporting-ses-signature-version-4
Due to AWS deprecating Signature Version 3 in Oct 2020 for SES, I want to handle this issue with AWS boto (Python). But I didn't see any doc related to boto supporting signature version 4 for SES. Is anyone having similar issue and have solutions?
My recommendation is that you migrate from boto, which is essentially deprecated, to boto3, because boto3 supports Signature Version 4 by default (with the exception of S3 pre-signed URLs, which have to be explicitly configured).
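A minimal boto3 sketch of both cases; the region, bucket and key names here are placeholders, not anything from the original question:

import boto3
from botocore.client import Config

# SES requests made through boto3 are signed with SigV4 automatically
ses = boto3.client("ses", region_name="us-east-1")

# S3 pre-signed URLs are the one place you may need to request SigV4 explicitly
s3 = boto3.client("s3", config=Config(signature_version="s3v4"))
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "report.csv"},  # placeholder names
    ExpiresIn=3600,
)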
8
3
62,731,198
2020-7-4
https://stackoverflow.com/questions/62731198/wsl-2-pycharm-debugger-connection-time-out
I set up PyCharm to use a virtualenv inside WSL 2. It works fine, I mean, I can run my project through the "run" button. The problem is I can't use the debugger; it says connection time out. Let me show you the full [errors][1]. ('Connecting to ', '172.21.176.1', ':', '63597') Could not connect to 172.21.176.1: 63597 It seems that when I run in debug mode, it wants to connect to 172.21.176.1 (the WSL 2 IP address), but it should connect to 127.0.0.1 because the process is launched by ubuntu2004.exe. Can you help me? Error: C:\Users\tux\AppData\Local\Microsoft\WindowsApps\ubuntu2004.exe run "export IDE_PROJECT_ROOTS=/mnt/c/Users/tux/Documents/projects/odoo/13 && export PYCHARM_DEBUG=True && export PYTHONUNBUFFERED=1 && export IPYTHONENABLE=True && export PYCHARM_HOSTED=1 && export PYTHONIOENCODING=UTF-8 && export PYCHARM_DISPLAY_PORT=63342 && export PYTHONDONTWRITEBYTECODE=1 && export PYDEVD_LOAD_VALUES_ASYNC=True && export "LIBRARY_ROOTS=/mnt/c/Users/tux/AppData/Local/JetBrains/PyCharm2020.1/remote_sources/525578736/201545293:/mnt/c/Users/tux/AppData/Local/JetBrains/PyCharm2020.1/remote_sources/525578736/1688665391:/mnt/c/Users/tux/AppData/Local/JetBrains/PyCharm2020.1/python_stubs/525578736:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/python-skeletons:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/typeshed/stdlib/3.7:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/typeshed/stdlib/3:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/typeshed/stdlib/2and3:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/typeshed/third_party/3:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/typeshed/third_party/2and3" && export "PYTHONPATH=/mnt/c/Users/tux/Documents/projects/odoo/13:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pycharm_matplotlib_backend:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pycharm_display:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/third_party/thriftpy:/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev:/mnt/c/Users/tux/AppData/Local/JetBrains/PyCharm2020.1/cythonExtensions:/mnt/c/Users/tux/Documents/projects/odoo/13" && cd /mnt/c/Users/tux/Documents/projects/odoo/13 && /opt/interpreters/python3.8_odoo_13/bin/python3 "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev/pydevd.py" --multiproc --qt-support=auto --client 172.21.176.1 --port 63597 --file /mnt/c/Users/tux/Documents/projects/odoo/13/odoo-bin -c conf/learning.conf" Executing PyCharm's sitecustomize Traceback (most recent call last): File "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pycharm_matplotlib_backend/sitecustomize.py", line 43, in import matplotlib ModuleNotFoundError: No module named 'matplotlib' Unable to load jupyter_debug plugin Executing file /mnt/c/Users/tux/Documents/projects/odoo/13/odoo-bin arguments: ['/mnt/c/Users/tux/Documents/projects/odoo/13/odoo-bin', '-c', 'conf/learning.conf'] PYDEVD_FILTER_LIBRARIES False Started in multiproc mode ('Connecting to ', '172.21.176.1', ':', '63597') Could not connect to 172.21.176.1: 63597 Traceback (most recent call last): File "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 456, in start_client s.connect((host, port)) socket.timeout: timed out Traceback (most recent call last): File "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev/pydevd.py", line 2131, in main() File "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev/pydevd.py", line 2013, in main dispatcher.connect(host, port) File "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev/pydevd.py", line 1788, in connect self.client = start_client(self.host, self.port) File "/mnt/d/Program Files/JetBrains/PyCharm 2020.1.2/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 456, in start_client s.connect((host, port)) socket.timeout: timed out Process finished with exit code 1
Firewall was the case. Unbloking connections from Pycharm (Eset firewall in my case) helped. See https://youtrack.jetbrains.com/issue/PY-39051
9
13
62,678,377
2020-7-1
https://stackoverflow.com/questions/62678377/plotly-how-to-set-up-multiple-subplots-with-grouped-legends
For each subplot I have 3 separate lines: 2017, 2018 and 2019, with 3 calls to "go.Scatter"; each subplot represents one country (25 countries), always with these 3 years. I can use the subplot sample code, but then all 75 legend entries (25 x 3) sit together with different colors and it's messy. I don't need different colors among the subplots; I can just have 3 different colors and 3 legend entries for the 3 years across all subplots. Ideally, if I click on, for example, 2017, all the 2017 curves/lines disappear across the 25 subplots. Can anyone share a sample code? It can be 2 instead of 25 for illustration purposes. I fail to find this sample code on the Plotly website. Edit: this is a sample code: from plotly.subplots import make_subplots import plotly.graph_objects as go from plotly import offline fig = make_subplots(rows=3, cols=1) fig.add_trace(go.Scatter( x=[3, 4, 5], y=[1000, 1100, 1200],name="2017", ), row=1, col=1) fig.add_trace(go.Scatter( x=[2, 3, 4], y=[1200, 1100, 1000],name="2018", ), row=1, col=1) fig.append_trace(go.Scatter( x=[2, 3, 4], y=[100, 110, 120],name="2017", ), row=2, col=1) fig.append_trace(go.Scatter( x=[2, 3, 4], y=[120, 110, 100],name="2018", ), row=2, col=1) fig.append_trace(go.Scatter( x=[0, 1, 2], y=[10, 11, 12],name="2017", ), row=3, col=1) fig.append_trace(go.Scatter( x=[0, 1, 2], y=[12, 11, 10],name="2018", ), row=3, col=1) fig.update_layout(height=600, width=600, title_text="Stacked Subplots") offline.plot(fig,filename="subplots.html") I wish to have only 2 legend entries, 2017 and 2018, instead of 6, and it would be easier if all the 2017 traces had the same color along the 3 subplots
A correct combination of legendgroup and showlegend should do the trick. With the setup below, all 2017 traces are assigned to the same legendgroup="2017". And all 2017 traces except the first have showlegend=False. And of course the same goes for the 2018 traces. Give it a try! Plot Complete code from plotly.subplots import make_subplots import plotly.graph_objects as go from plotly import offline fig = make_subplots(rows=3, cols=1) fig.add_trace(go.Scatter(x=[3, 4, 5], y=[1000, 1100, 1200], name="2017", legendgroup="2017", line=dict(color='blue')), row=1, col=1) fig.add_trace(go.Scatter(x=[2, 3, 4], y=[1200, 1100, 1000], name="2018",legendgroup="2018", line=dict(color='red')), row=1, col=1) fig.add_trace(go.Scatter(x=[2, 3, 4], y=[100, 110, 120], name="2017", legendgroup="2017", line=dict(color='blue'), showlegend=False), row=2, col=1) fig.append_trace(go.Scatter(x=[2, 3, 4], y=[120, 110, 100], name="2018", legendgroup="2018", line=dict(color='red'), showlegend=False), row=2, col=1) fig.append_trace(go.Scatter(x=[0, 1, 2], y=[10, 11, 12], name="2017", legendgroup="2017", line=dict(color='blue'), showlegend=False), row=3, col=1) fig.append_trace(go.Scatter(x=[0, 1, 2], y=[12, 11, 10], name="2018", legendgroup="2018", line=dict(color='red'), showlegend=False), row=3, col=1) fig.update_layout(height=600, width=600, title_text="Stacked Subplots") #offline.plot(fig,filename="subplots.html") fig.show()
10
25
62,740,922
2020-7-5
https://stackoverflow.com/questions/62740922/check-if-value-exists-in-file
I am trying to read the following file line by line and check if a value exists in the file. What I am trying currently is not working. What am I doing wrong? If the value exists I do nothing. If it does not then I write it to the file. file.txt: 123 345 234 556 654 654 Code: file = open("file.txt", "a+") lines = file.readlines() value = '345' if value in lines: print('val ready exists in file') else: # write to file file.write(value)
There are two problems here: .readlines() returns lines with \n not trimmed, so your check will not work properly. a+ mode opens a file with position set to the end of the file. So your readlines() currently returns an empty list! Here is a direct fixed version of your code, also adding context manager to auto-close the file value = '345' with open("file.txt", "a+") as file: file.seek(0) # set position to start of file lines = file.read().splitlines() # now we won't have those newlines if value in lines: print('val ready exists in file') else: # write to file file.write(value + "\n") # in append mode writes will always go to the end, so no need to seek() here However, I agree with @RoadRunner that better is to just use r+ mode; then you don't need the seek(0). But the cleanest is just to split out your read and write phases completely, so you don't run into file position problems.
10
12
62,738,960
2020-7-5
https://stackoverflow.com/questions/62738960/on-aws-elastic-search-messageuser-anonymous-is-not-authorized-to-perform
I have created AWS elasticsearch domain https://search-xx-xx.us-east-1.es.amazonaws.com/ On click both elastic url and kibana below is the error i got {"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"} Below is code which is working fine import boto3 from requests_aws4auth import AWS4Auth from elasticsearch import Elasticsearch, RequestsHttpConnection session = boto3.session.Session() credentials = session.get_credentials() awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, session.region_name, 'es', session_token=credentials.token) es = Elasticsearch( ['https://search-testelastic-2276kyz2u4l3basec63onfq73a.us-east-1.es.amazonaws.com'], http_auth=awsauth, use_ssl=True, verify_certs=True, connection_class=RequestsHttpConnection ) def lambda_handler(event, context): es.cluster.health() es.indices.create(index='my-index', ignore=400) r = [{'Name': 'Dr. Christopher DeSimone', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Tajwar Aamir (Aamir)', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'}, {'Name': 'Eliana M. Aaron', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Joseph J. Aaron', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Michael R. Aaron', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Darryl H. Aarons', 'Specialised and Location': 'Health'}, {'Name': 'Dr. William B. Aarons', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Sirike T. Aasmaa', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Jacobo A. Abadi', 'Specialised and Location': 'Health'}] for e in enumerate(r): es.index(index="my-index", body=e[1]) Below is the access policy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "es:*", "Resource": "arn:aws:es:us-east-1:xxxxxx:domain/xxxxx/*", "Condition": { "IpAddress": { "aws:SourceIp": "*" } } } ] }
This error would indicate your ElasticSearch service does not support anonymous requests (those not signed with valid IAM credentials). Although your policy seems OK, the official allow-all policy looks like the below: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "es:*", "Resource": "arn:aws:es:us-east-1:xxxxxx:domain/xxxxx/*" } ] }
29
37
62,732,402
2020-7-4
https://stackoverflow.com/questions/62732402/can-i-omit-optional-if-i-set-default-to-none
For example: def foo(bar: int = None): pass When I check the type/annotation of bar, PyCharm tells me that it is Optional[int]. bar: int = None looks much cleaner than bar: Optional[int] = None, especially when you have 10+ parameters. So can I simply omit Optional? Will tools like mypy or other linters highlight this case as an error? It looks like Python itself doesn't like the idea: In [1]: from typing import Optional In [2]: from inspect import signature In [3]: def foo(a: int = None): pass In [4]: def bar(a: Optional[int] = None): pass In [5]: signature(foo).parameters['a'].annotation Out[5]: int In [6]: signature(bar).parameters['a'].annotation Out[6]: typing.Union[int, NoneType]
No. Omitting Optional was previously allowed, but has since been removed. A past version of this PEP allowed type checkers to assume an optional type when the default value is None [...] This is no longer the recommended behavior. Type checkers should move towards requiring the optional type to be made explicit. Some tools may still provide the old behaviour for legacy support. Even if that is the case, do not rely on it being supported in the future. In specific, mypy still supports implicit Optional by default, but explicitly notes this may change in the future: Optional types and the None type (mypy v0.782) [...] You can use the --no-implicit-optional command-line option to stop treating arguments with a None default value as having an implicit Optional[...] type. It’s possible that this will become the default behavior in the future. The deprecation of this behaviour is tracked in mypy/#9091
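For completeness, the explicit spellings all mean the same thing, so the extra verbosity is limited to the annotation itself (the X | None union form is runtime syntax only on Python 3.10+; in quotes it depends on your type checker):

from typing import Optional, Union

def f(bar: Optional[int] = None): ...      # explicit form type checkers expect
def g(bar: Union[int, None] = None): ...   # identical meaning, longer spelling
def h(bar: "int | None" = None): ...       # PEP 604 union, 3.10+ without quotes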
56
61
62,723,766
2020-7-3
https://stackoverflow.com/questions/62723766/how-to-get-type-hints-for-an-objects-attributes
I want to get the type hints for an object's attributes. I can only get the hints for the class and not an instance of it. I have tried using foo_instance.__class__ from here but that only shows the class variables. So in the example how do I get the type hint of bar? class foo: var: int = 42 def __init__(self): self.bar: int = 2 print(get_type_hints(foo)) # returns {'var': <class 'int'>}
This information isn't evaluated and only exists in the source code. If you must get this information, you can use the ast module and extract it from the source code yourself, provided you have access to the source. You should also ask yourself whether you really need this information, because in most cases re-evaluating the source code will be too much effort.
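A rough sketch of that ast-based approach; the helper name is this sketch's own choice, the class mirrors the one in the question, ast.unparse needs Python 3.9+, and the class's source must be available to inspect:

import ast
import inspect
import textwrap

def instance_annotations(cls):
    """Collect `self.<name>: <type>` annotations found in cls.__init__'s source."""
    source = textwrap.dedent(inspect.getsource(cls.__init__))
    tree = ast.parse(source)
    found = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.AnnAssign) and isinstance(node.target, ast.Attribute):
            target = node.target
            if isinstance(target.value, ast.Name) and target.value.id == "self":
                # note: this yields the annotation as text, not a real type object
                found[target.attr] = ast.unparse(node.annotation)
    return found

class Foo:
    def __init__(self):
        self.bar: int = 2

print(instance_annotations(Foo))  # {'bar': 'int'}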
12
2
62,728,854
2020-7-4
https://stackoverflow.com/questions/62728854/how-to-place-spacy-en-core-web-md-model-in-python-package
I am building a python package and I am using Spacy library and Spacy model en_core_web_md. It can't be installed using pip. You can install it like this python -m spacy download en_core_web_md I have place en_core_web_md folder in my Python package. simple_eda init.py simple_eda.py en_core_web_md tests setup.py README.md LICENSE I can install package successfully but when I import, it gives me this error. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/shahid/anaconda3/envs/eda_test_6/lib/python3.5/site-packages/simple_nlp/__init__.py", line 1, in <module> from simple_nlp.simple_nlp import SimpleNLP File "/home/shahid/anaconda3/envs/eda_test_6/lib/python3.5/site-packages/simple_nlp/simple_nlp.py", line 22, in <module> nlp = spacy.load("en_core_web_md") File "/home/shahid/anaconda3/envs/eda_test_6/lib/python3.5/site-packages/spacy/__init__.py", line 30, in load return util.load_model(name, **overrides) File "/home/shahid/anaconda3/envs/eda_test_6/lib/python3.5/site-packages/spacy/util.py", line 175, in load_model raise IOError(Errors.E050.format(name=name)) OSError: [E050] Can't find model 'en_core_web_md'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. Where should I place the folder, or should I add a link to that folder in setup.py file?
This solved my issue. import spacy from sys import stderr try: nlp = spacy.load('en') except OSError: print('Downloading language model for the spaCy POS tagger\n' "(don't worry, this will only happen once)", file=stderr) from spacy.cli import download download('en') nlp = spacy.load('en')
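The same retry pattern works with the model name from the question; this assumes network access the first time it runs:

import spacy
from spacy.cli import download

MODEL = "en_core_web_md"
try:
    nlp = spacy.load(MODEL)
except OSError:
    download(MODEL)          # one-off download, equivalent to `python -m spacy download en_core_web_md`
    nlp = spacy.load(MODEL)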
7
16
62,716,521
2020-7-3
https://stackoverflow.com/questions/62716521/plotly-how-to-add-text-to-existing-figure
Is it possible to add some text to the same html file as my plotly graph? For example: This is the code that generates a graph: data = pd.read_csv('file.csv') data.columns = ['price', 'place', 'date'] fig = px.scatter(data, x = "place", y = "price", ) fig.write_html("done.html") This code will generate a plotly graph in an html file, and I want to add some simple text (such as a conclusion line explaining the graph) under the graph. This is an example of the output I would like:
You can use fig.update_layout(margin=dict()) to make room for an explanation, and then fig.add_annotation() to insert any text you'd like below the figure itself to get this: Complete code: import plotly.graph_objects as go import numpy as np x = np.arange(-4,5) y=x**3 yticks=list(range(y.min(), y.max(), 14)) #yticks.append(y.max()) # build figure fig = go.Figure(data=go.Scatter(x=x, y=y)) # make space for explanation / annotation fig.update_layout(margin=dict(l=20, r=20, t=20, b=60),paper_bgcolor="LightSteelBlue") # add annotation fig.add_annotation(dict(font=dict(color='yellow',size=15), x=0, y=-0.12, showarrow=False, text="A very clear explanation", textangle=0, xanchor='left', xref="paper", yref="paper")) fig.show()
11
21
62,722,599
2020-7-3
https://stackoverflow.com/questions/62722599/how-can-i-use-pytest-django-to-create-a-user-object-only-once-per-session
First, I tired this: @pytest.mark.django_db @pytest.fixture(scope='session') def created_user(django_db_blocker): with django_db_blocker.unblock(): return CustomUser.objects.create_user("User", "UserPassword") def test_api_create(created_user): user = created_user() assert user is not None But I got an UndefinedTable error. So marking my fixture with @pytest.mark.django_db somehow didn’t actually register my Django DB. So next I tried pass the db object directly to the fixture: @pytest.fixture(scope='session') def created_user(db, django_db_blocker): with django_db_blocker.unblock(): return CustomUser.objects.create_user("User", "UserPassword") def test_api_create(created_user): user = created_user() assert user is not None But then I got an error ScopeMismatch: You tried to access the 'function' scoped fixture 'db' with a 'session' scoped request object, involved factories So finally, just to confirm everything was working, I tried: @pytest.fixture def created_user(db, django_db_blocker): with django_db_blocker.unblock(): return CustomUser.objects.create_user("User", "UserPassword") def test_api_create(created_user): user = created_user() assert user is not None This works just fine, but now my create_user function is being called every single time my function is being setup or torn down. Whats the solution here?
@hoefling had the answer, I needed to pass django_db_setup instead. @pytest.fixture(scope='session') def created_user(django_db_setup, django_db_blocker): with django_db_blocker.unblock(): return CustomUser.objects.create_user("User", "UserPassword")
7
10
62,705,271
2020-7-2
https://stackoverflow.com/questions/62705271/connect-to-flask-server-from-other-devices-on-same-network
Dear smart people of stackoverflow, I know this question has been asked a lot here but none of the posted solutions have worked for me as of yet. Any help here would be much appreciated: The Problem: Cannot connect to flask app server from other devices (PCs, mobiles) on the same network. (in other words: localhost works perfectly but I cannot connect from external device) What I've Tried: 1) Setting app.run(host='0.0.0.0', port=5000, debug=True, threaded=True) in the app.py so that the server will listen on all available network interfaces. 2) Enabling TCP traffic for port 5000 in local network in Windows Defender Firewall (with inbound and outbound rules added) 3) Using my host PC's IPv4 address in the URL bar of my external device's browser in the following format: http://<host_ipaddress>:<port>/ 4) Using my host PC's hostname in the URL bar of my external device's browser in the following format: http://<host_name>:<port>/ 5) Running the app.py file from Windows Powershell and Python (.py) Executor None of these solutions has worked so far, even after attempting to connect from a few different external devices. Thanks in advance for your help!
I solved the issue by changing my home network profile to private instead of public, which allows my PC to be discoverable. Completely overlooked that! Hope this helps someone!
12
14
62,716,077
2020-7-3
https://stackoverflow.com/questions/62716077/remove-white-border-from-dots-in-a-seaborn-scatterplot
The scatterplot from seaborn produces dots with a small white boarder. This is helpful if there are a few overlapping dots, but it becomes visually noisy once there are many overlaying dots. How can the white borders be removed? import seaborn as sns; sns.set() import matplotlib.pyplot as plt tips = sns.load_dataset("tips") ax = sns.scatterplot(x="total_bill", y="tip", data=tips)
Instead of edgecolors use linewidth = 0: import seaborn as sns; sns.set() import matplotlib.pyplot as plt tips = sns.load_dataset("tips") ax = sns.scatterplot(x="total_bill", y="tip", data=tips, linewidth=0)
17
23
62,715,570
2020-7-3
https://stackoverflow.com/questions/62715570/failing-to-install-psycopg2-binary-on-new-docker-container
I have encountered a problem while trying to run my django project on a new Docker container. It is my first time using Docker and I can't seem to find a good way to run a django project on it. Having tried multiple tutorials, I always get the error about psycopg2 not being installed. requirements.txt: -i https://pypi.org/simple asgiref==3.2.7 django-cors-headers==3.3.0 django==3.0.7 djangorestframework==3.11.0 gunicorn==20.0.4 psycopg2-binary==2.8.5 pytz==2020.1 sqlparse==0.3.1 Dockerfile: # pull official base image FROM python:3.8.3-alpine # set work directory WORKDIR /usr/src/app # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # install dependencies RUN pip install --upgrade pip COPY ./requirements.txt . RUN pip install -r requirements.txt # copy project COPY . . # set project environment variables # grab these via Python's os.environ # these are 100% optional here ENV PORT=8000 ENV SECRET_KEY_TWITTER = "***" While running docker-compose build, I get the following error: Error: pg_config executable not found. pg_config is required to build psycopg2 from source. Please add the directory containing pg_config to the $PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. If you prefer to avoid building psycopg2 from source, please install the PyPI 'psycopg2-binary' package instead. I will gladly answer any questions that might lead to the solution. Also, maybe someone can recommend me a good tutorial on dockerizing django apps?
On Alpine Linux, you will need to compile all packages, even if a pre-compiled binary wheel is available on PyPI. On standard Linux-based images, you won't (https://pythonspeed.com/articles/alpine-docker-python/ - there are also other articles I've written there that might be helpful, e.g. on security). So change your base image to python:3.8.3-slim-buster or python:3.8-slim-buster and it should work.
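Concretely, in the question's Dockerfile that means either swapping the base image or staying on Alpine and installing a build toolchain before pip runs; the Alpine package names below are the usual ones for psycopg2 and may need adjusting:

# Option 1: Debian-based image, the pre-built psycopg2-binary wheel installs as-is
FROM python:3.8-slim-buster

# Option 2: stay on Alpine, but add the compilers/headers before `pip install`
# FROM python:3.8.3-alpine
# RUN apk add --no-cache gcc musl-dev postgresql-dev python3-dev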
41
27
62,706,402
2020-7-2
https://stackoverflow.com/questions/62706402/difference-between-python-console-and-terminal-in-pycharm
I am a beginner in Python. I started using PyCharm recently, but I don't know what the difference between the Terminal and the Console is. Some of the commands in the Terminal do not work in the Console.
Before we can talk about the differences, we need to talk about what the two are in practice. The Terminal essentially replaces your Command Prompt/PowerShell on Windows and the terminal app on Mac, giving you a way to access them without leaving PyCharm. The PyCharm Console, on the other hand, is a more advanced version of the "Python Console", which allows you to run bits of Python. It is also called the Python REPL or Read-Eval-Print Loop. You can invoke the Python Console from the Terminal as well.
8
8
62,712,023
2020-7-3
https://stackoverflow.com/questions/62712023/selenium-with-chrome-driver-taking-screenshots-at-double-resolution-on-retina-di
I am using Selenium with Chrome driver to taking some website screenshots. I need the screenshots to be at very specific resolution (1024x768). I've noticed that although the browser is correctly set at this resolution, the screenshot on disk is saved at double resolution (2048x1536). I suspect this is due the retina resolution of the macbook where I am running the application (mid 2017 macbook pro). This is the code I am using: from selenium import webdriver from selenium.webdriver.chrome.options import Options width = 1024 height = 768 chrome_options = Options() chrome_options.add_argument('--disable-gpu') chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--lang=en') chrome_options.add_argument('--headless') chrome_options.add_argument(f'window-size={width}x{height}') driver = webdriver.Chrome(options=chrome_options) url = 'https://google.com' driver.get(url) print('Window size', driver.get_window_size()) # Window size {'width': 1024, 'height': 768} driver.save_screenshot('test.png') # Image is saved at 2048x1536 Is there a way to prevent the screenshot to be taken at double resolution on retina?
Found a possible solution: chrome_options.add_argument('--force-device-scale-factor=1')
7
13