Columns (with observed value ranges / string lengths):
question_id: int64 (59.5M to 79.4M)
creation_date: string (8 to 10 chars)
link: string (60 to 163 chars)
question: string (53 to 28.9k chars)
accepted_answer: string (26 to 29.3k chars)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
61,552,469
2020-5-1
https://stackoverflow.com/questions/61552469/google-cloud-functions-deploy-allow-unauthenticated-invocations
Whenever I have to deploy a new python function using the gcloud sdk I get this message Allow unauthenticated invocations of new function [function-name]? (y/N)? WARNING: Function created with limited-access IAM policy. To enable unauthorized access consider "gcloud alpha functions add-iam-policy-binding function-name --region=europe-west1 --member=allUsers --role=roles/cloudfunctions.invoker" Is there any flag I can add to the command to make it a NO when deploying? This is a sample command I use to deploy one function: gcloud functions deploy function-name --region=europe-west1 --entry-point function-entry-point --trigger-resource "projects/my-project/databases/(default)/documents/user_ids/{user_id}" --trigger-event providers/cloud.firestore/eventTypes/document.create --runtime python37 --timeout 60 --project my-project
From https://cloud.google.com/sdk/docs/scripting-gcloud#disabling_prompts: You can disable prompts from gcloud CLI commands by setting the disable_prompts property in your configuration to True or by using the global --quiet or -q flag. So for your example, you could run: gcloud functions deploy function-name --quiet --region=europe-west1 --entry-point function-entry-point --trigger-resource "projects/my-project/databases/(default)/documents/user_ids/{user_id}" --trigger-event providers/cloud.firestore/eventTypes/document.create --runtime python37 --timeout 60 --project my-project
20
12
61,550,026
2020-5-1
https://stackoverflow.com/questions/61550026/valueerror-shapes-none-1-and-none-3-are-incompatible
I have a 3 dimensional dataset of audio files where X.shape is (329,20,85). I want to have a simple bare-bones model running, so please don't nitpick and address only the issue at hand. Here is the code: model = tf.keras.models.Sequential() model.add(tf.keras.layers.LSTM(32, return_sequences=True, stateful=False, input_shape = (20,85,1))) model.add(tf.keras.layers.LSTM(20)) model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"]) model.summary() print("Train...") model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=50, validation_data=(X_test, y_test)) But then I had the error mentioned in the title: ValueError: Shapes (None, 1) and (None, 3) are incompatible Here is the model.summary(): Model: "sequential_13" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm_21 (LSTM) (None, 20, 32) 15104 _________________________________________________________________ lstm_22 (LSTM) (None, 20) 4240 _________________________________________________________________ dense_8 (Dense) (None, 3) 63 ================================================================= Total params: 19,407 Trainable params: 19,407 Non-trainable params: 0 _________________________________________________________________ Train... For this, I followed this post and updated Tensorflow to the latest version, but the issue persists. This post is completely unrelated and highly unreliable. This post, although a bit related, has been unanswered for a while now. Update 1.0: I strongly think the problem has something to do with the final Dense layer where I pass nb_classes as 3, since I am classifying for 3 categories in y. So I changed the Dense layer's nb_classes to 1, which ran the model and gives me this output, which I am positive is wrong. Train... 9/9 [==============================] - 2s 177ms/step - loss: 0.0000e+00 - accuracy: 0.1520 - val_loss: 0.0000e+00 - val_accuracy: 0.3418 <tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0> Update 2.0: I one-hot encoded the ys and resolved the shape issue. But now the above output with <tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0> persists. Any help with this? Or should I post a new question for this? Thanks for all the help. How should I proceed, or what should I be changing?
The first problem is with the LSTM input_shape: input_shape = (20,85,1). From the docs (https://keras.io/layers/recurrent/), an LSTM layer expects a 3D tensor with shape (batch_size, timesteps, input_dim), so the per-sample input_shape should be 2D, i.e. (20, 85) for your data. model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) - this suggests you're doing multi-class classification, so your y_train and y_test have to be one-hot encoded. That means they must have dimension (number_of_samples, 3), where 3 denotes the number of classes. You need to apply tensorflow.keras.utils.to_categorical to them. y_train = to_categorical(y_train, 3) y_test = to_categorical(y_test, 3) ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical tf.keras.callbacks.History() - this callback is automatically applied to every Keras model. The History object gets returned by the fit method of models. ref: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History
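Putting the two fixes together, here is a rough sketch of the corrected setup. nb_classes=3, the (20, 85) shape, and batch_size come from the question (nb_epoch is spelled epochs in current Keras); X_train/X_test/y_train/y_test are assumed to come from the asker's own data pipeline, so treat this as an illustration rather than a drop-in script:

import tensorflow as tf
from tensorflow.keras.utils import to_categorical

nb_classes = 3
# one-hot encode the integer labels so they match the softmax output (None, 3)
y_train = to_categorical(y_train, nb_classes)
y_test = to_categorical(y_test, nb_classes)

model = tf.keras.models.Sequential([
    # per-sample shape is (timesteps, features) = (20, 85); no trailing 1
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(20, 85)),
    tf.keras.layers.LSTM(20),
    tf.keras.layers.Dense(nb_classes, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit returns a History object; assigning it also stops the stray
# <...callbacks.History at 0x...> repr from being shown as the notebook cell's output
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=50,
                    validation_data=(X_test, y_test))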
33
44
61,543,768
2020-5-1
https://stackoverflow.com/questions/61543768/super-in-a-typing-namedtuple-subclass-fails-in-python-3-8
I have code which worked in Python 3.6 and fails in Python 3.8. It seems to boil down to calling super in subclass of typing.NamedTuple, as below: <ipython-input-2-fea20b0178f3> in <module> ----> 1 class Test(typing.NamedTuple): 2 a: int 3 b: float 4 def __repr__(self): 5 return super(object, self).__repr__() RuntimeError: __class__ not set defining 'Test' as <class '__main__.Test'>. Was __classcell__ propagated to type.__new__? In [3]: class Test(typing.NamedTuple): ...: a: int ...: b: float ...: #def __repr__(self): ...: # return super(object, self).__repr__() ...: >>> # works The purpose of this super(object, self).__repr__ call is to use the standard '<__main__.Test object at 0x7fa109953cf8>' __repr__ instead of printing out all the contents of the tuple elements (which would happen by default). There are some questions on super resulting in similar errors but they: Refer to the parameter-less version super() Fail already in Python 3.6 (it worked for me before 3.6 -> 3.8 upgrade) I fail to understand how to fix this anyway, given that it's not a custom metaclass I have control over but the stdlib-provided typing.NamedTuple. My question is how can I fix this while maintaining backwards compatibility with Python 3.6 (otherwise I'd just use @dataclasses.dataclass instead of inheriting from typing.NamedTuple)? A side question is how can this fail at definition time given that the offending super call is inside a method which is not even executed yet. For instance: In [3]: class Test(typing.NamedTuple): ...: a: int ...: b: float ...: def __repr__(self): ...: return foo works (until we actually call the __repr__) even though foo is an undefined reference. Is super magical in that regard?
I was slightly wrong in the other question (which I just updated). Apparently, this behavior manifests in both forms of super. In hindsight, I should have tested this. What's happening here is that the metaclass NamedTupleMeta indeed doesn't pass __classcell__ over to type.__new__ because it creates a namedtuple on the fly and returns that. Actually, in Python 3.6 and 3.7 (where this is still a DeprecationWarning), the __classcell__ leaks into the class dictionary since it isn't removed by NamedTupleMeta.__new__. class Test(NamedTuple): a: int b: float def __repr__(self): return super().__repr__() # isn't removed by NamedTupleMeta Test.__classcell__ <cell at 0x7f956562f618: type object at 0x5629b8a2a708> Using object.__repr__ directly as suggested by Azat does the trick. how can this fail at definition time? The same way the following also fails: class Foo(metaclass=1): pass Many checks are performed while the class is being constructed. Among these is checking whether the metaclass has passed the __classcell__ over to type.__new__.
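For reference, a minimal sketch of that object.__repr__ workaround applied to the question's Test class (this works on both 3.6 and 3.8, since no __class__ cell has to be propagated):

import typing

class Test(typing.NamedTuple):
    a: int
    b: float

    def __repr__(self):
        # call object's __repr__ explicitly instead of going through super(),
        # so NamedTupleMeta never needs to forward __classcell__
        return object.__repr__(self)

print(repr(Test(1, 2.0)))  # e.g. <__main__.Test object at 0x...>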
14
5
61,540,156
2020-5-1
https://stackoverflow.com/questions/61540156/python-unittest-setting-a-global-variable-correctly
I have a simple method that sets a global variable to either True or False depending on the method parameter. This global variable is called feedback and has a default value of False. When I call setFeedback('y') the global variable will be changed to be feedback = True. When I call setFeedback('n') the global variable will be changed to be feedback = False. Now I am trying to test this using unittest in Python: class TestMain(unittest.TestCase): def test_setFeedback(self): self.assertFalse(feedback) setFeedback('y') self.assertTrue(feedback) When I run this test I get the following error: AssertionError: False is not true. Since I know that the method works correctly, I assume that the global variables are reset somehow. However, since I am still very new to the Python environment, I don't know exactly what I am doing wrong. I have already read an article here about mocking, but since my method changes a global variable, I don't know if mocking can solve this. I would be grateful for suggestions. Here is the code: main.py: #IMPORTS from colorama import init, Fore, Back, Style from typing import List, Tuple #GLOBAL VARIABLE feedback = False #SET FEEDBACK METHOD def setFeedback(feedbackInput): """This methods sets the feedback variable according to the given parameter. Feedback can be either enabled or disabled. Arguments: feedbackInput {str} -- The feedback input from the user. Values = {'y', 'n'} """ #* ACCESS TO GLOBAL VARIABLES global feedback #* SET FEEDBACK VALUE # Set global variable according to the input if(feedbackInput == 'y'): feedback = True print("\nFeedback:" + Fore.GREEN + " ENABLED\n" + Style.RESET_ALL) input("Press any key to continue...") # Clear the console clearConsole() else: print("\nFeedback:" + Fore.GREEN + " DISABLED\n" + Style.RESET_ALL) input("Press any key to continue...") # Clear the console clearConsole() test_main.py: import unittest from main import * class TestMain(unittest.TestCase): def test_setFeedback(self): self.assertFalse(feedback) setFeedback('y') self.assertTrue(feedback) if __name__ == '__main__': unittest.main()
There are three problems with your test. First, you use input in your feedback function, which will stall the test until you enter a key. You probably should mock input. Also, you may consider that the call to input does not belong in setFeedback (see the comment by @chepner). Second, from main import * will not work here (apart from being bad style), because this way you create a copy of your global variable in the test module - changes to the variable itself will not be propagated to the copy. You should instead import the module, so that you access the variable in the module. Third (this is taken from the answer by @chepner, I had missed that), you have to make sure that the variable is in a known state at test start. Here is what should work: import unittest from unittest import mock import main # importing the module lets you access the original global variable class TestMain(unittest.TestCase): def setUp(self): main.feedback = False # make sure the state is defined at test start @mock.patch('main.input') # patch input to run the test w/o user interaction def test_setFeedback(self, mock_input): self.assertFalse(main.feedback) main.setFeedback('y') self.assertTrue(main.feedback)
8
10
61,545,580
2020-5-1
https://stackoverflow.com/questions/61545580/how-does-mypy-use-typing-type-checking-to-resolve-the-circular-import-annotation
I have the following structure for a package: /prog -- /ui ---- /menus ------ __init__.py ------ main_menu.py ------ file_menu.py -- __init__.py __init__.py prog.py These are my import/classes statements: prog.py: from prog.ui.menus import MainMenu /prog/ui/menus/__init__.py: from prog.ui.menus.file_menu import FileMenu from prog.ui.menus.main_menu import MainMenu main_menu.py: import tkinter as tk from prog.ui.menus import FileMenu class MainMenu(tk.Menu): def __init__(self, master: tk.Tk, **kwargs): super().__init__(master, **kwargs) self.add_cascade(label='File', menu=FileMenu(self, tearoff=False)) [...] file_menu.py: import tkinter as tk from prog.ui.menus import MainMenu class FileMenu(tk.Menu): def __init__(self, master: MainMenu, **kwargs): super().__init__(master, **kwargs) self.add_command(label='Settings') [...] This will lead to a circular import problem in the sequence: prog.py -> __init__.py -> main_menu.py -> file_menu.py -> main_menu.py -> [...] From several searches it was suggested to update the imports to such: file_menu.py import tkinter as tk from typing import TYPE_CHECKING if TYPE_CHECKING: from prog.ui.menus import MainMenu class FileMenu(tk.Menu): def __init__(self, master: 'MainMenu', **kwargs): super().__init__(master, **kwargs) self.add_command(label='Settings') [...] I've read the TYPE_CHECKING docs and the mypy docs on the usage, but I do not follow how using this conditional resolves the cycle. Yes, at runtime it works because it evaluates to False so that is an "operational resolution", but how does it not reappear during type checking: The TYPE_CHECKING constant defined by the typing module is False at runtime but True while type checking. I don't know a great deal about mypy, thus I fail to see how once the conditional evaluates to True that the issue will not reappear. What occurs differently between "runtime" and "type checking"? Does the process of "type checking" mean code is not executed? Notes: This is not a circular import dependency problem so dependency injection isn't needed This is strictly a cycle induced by type hinting for static analysis I am aware of the following import options (which work just fine): Replace from [...] import [...] with import [...] Conduct imports in MainMenu.__init__ and leave file_menu.py alone
Does the process of "type checking" mean code is not executed? Yes, exactly. The type checker never executes your code: instead, it analyzes it. Type checkers are implemented in pretty much the same way compilers are implemented, minus the "generate bytecode/assembly/machine code" step. This means your type checker has more strategies available for resolving import cycles (or cycles of any kind) than the Python interpreter will have during runtime since it doesn't need to try blindly importing modules. For example, what mypy does is basically start by analyzing your code module-by-module, keeping track of each new class/new type that's being defined. During this process, if mypy sees a type hint using a type that hasn't been defined yet, substitute it with a placeholder type. Once we've finished checking all the modules, check and see if there are still any placeholder types floating around. If so, try re-analyzing the code using the type definitions we've collected so far, replacing any placeholders when possible. We rinse and repeat until there are either no more placeholders or we've iterated too many times. After that point, mypy assumes any remaining placeholders are just invalid types and reports an error. In contrast, the Python interpreter doesn't have the luxury of being able to repeatedly re-analyze modules like this. It needs to run each module it sees, and repeatedly re-running modules could break some user code/user expectations. Similarly, the Python interpreter doesn't have the luxury of being able to just swap around the order in which we analyze modules. In contrast, mypy can theoretically analyze your modules in any arbitrary order ignoring what imports what -- the only catch is that it'll just be super inefficient since we'd need lots of iterations to reach fixpoint. (So instead, mypy uses your imports as suggestions to decide in which order to analyze modules. For example, if module A directly imports module B, we probably want to analyze B first. But if A imports B behind if TYPE_CHECKING, it's probably fine to relax the ordering if it'll help us break a cycle.)
31
33
61,514,121
2020-4-30
https://stackoverflow.com/questions/61514121/serving-flask-app-with-waitress-on-windows-using-ssl-public-private-key
How do I run my Flask app which uses SSL keys using waitress. The SSL context is specified in my Flask's run() as in app.run(ssl_context=('cert.pem', 'key.pem')) But app.run() is not used when using waitress as in the code below. So, where do I specify the keys? Thanks for the help. from flask import Flask, request app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" if __name__ == '__main__': # app.run(ssl_context=('../cert.pem', '../key.pem')) from waitress import serve serve(app, host="0.0.0.0", port=5000)
At the current version (1.4.3), Waitress does not natively support TLS. See TLS support in https://github.com/Pylons/waitress/blob/36240c88b1c292d293de25fecaae1f1d0ad9cc22/docs/reverse-proxy.rst You either need a reverse proxy in front to handle the tls/ssl part, or use another WSGI server (CherryPy, Tornado...).
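As an illustration of the second option, a minimal sketch using Tornado to terminate TLS for the same Flask app (the cert.pem/key.pem paths are the ones from the question; this is one possible setup, not the only one):

from flask import Flask
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    # Tornado handles the TLS handshake and forwards requests to the WSGI app
    http_server = HTTPServer(
        WSGIContainer(app),
        ssl_options={"certfile": "cert.pem", "keyfile": "key.pem"},
    )
    http_server.listen(5000)
    IOLoop.current().start()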
11
10
61,514,887
2020-4-30
https://stackoverflow.com/questions/61514887/how-to-trigger-a-dag-on-the-success-of-a-another-dag-in-airflow-using-python
I have a Python DAG Parent Job and a DAG Child Job. The tasks in the Child Job should be triggered on the successful completion of the Parent Job tasks, which are run daily. How can I add an external job trigger? MY CODE from datetime import datetime, timedelta from airflow import DAG from airflow.operators.postgres_operator import PostgresOperator from utils import FAILURE_EMAILS yesterday = datetime.combine(datetime.today() - timedelta(1), datetime.min.time()) default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': yesterday, 'email': FAILURE_EMAILS, 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5) } dag = DAG('Child Job', default_args=default_args, schedule_interval='@daily') execute_notebook = PostgresOperator( task_id='data_sql', postgres_conn_id='REDSHIFT_CONN', sql="SELECT * FROM athena_rs.shipments limit 5", dag=dag )
Answer is in this thread already. Below is demo code: Parent dag: from datetime import datetime from airflow import DAG from airflow.operators.dummy_operator import DummyOperator default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': datetime(2020, 4, 29), } dag = DAG('Parent_dag', default_args=default_args, schedule_interval='@daily') leave_work = DummyOperator( task_id='leave_work', dag=dag, ) cook_dinner = DummyOperator( task_id='cook_dinner', dag=dag, ) leave_work >> cook_dinner Child dag: from datetime import datetime, timedelta from airflow import DAG from airflow.operators.dummy_operator import DummyOperator from airflow.operators.sensors import ExternalTaskSensor default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': datetime(2020, 4, 29), } dag = DAG('Child_dag', default_args=default_args, schedule_interval='@daily') # Use ExternalTaskSensor to listen to the Parent_dag and cook_dinner task # when cook_dinner is finished, Child_dag will be triggered wait_for_dinner = ExternalTaskSensor( task_id='wait_for_dinner', external_dag_id='Parent_dag', external_task_id='cook_dinner', start_date=datetime(2020, 4, 29), execution_delta=timedelta(hours=1), timeout=3600, ) have_dinner = DummyOperator( task_id='have_dinner', dag=dag, ) play_with_food = DummyOperator( task_id='play_with_food', dag=dag, ) wait_for_dinner >> have_dinner wait_for_dinner >> play_with_food Images: Dags Parent_dag Child_dag
28
40
61,528,500
2020-4-30
https://stackoverflow.com/questions/61528500/installing-venv-for-python3-in-wsl-ubuntu
I am trying to configure venv on Windows Subsystem for Linux with Ubuntu. What I have tried: 1) Installing venv through pip (pip3, to be exact) pip3 install venv I get the following error ERROR: Could not find a version that satisfies the requirement venv (from versions: none) ERROR: No matching distribution found for venv 2) Installing venv through apt and apt-get sudo apt install python3-venv In this case the installation seems to complete, but when I try to create a virtual environment with python3 -m venv ./venv, I get an error, telling me to do apt-get install python3-venv (which I just did!) The virtual environment was not created successfully because ensurepip is not available. On Debian/Ubuntu systems, you need to install the python3-venv package using the following command. apt-get install python3-venv You may need to use sudo with that command. After installing the python3-venv package, recreate your virtual environment. Failing command: ['/mnt/c/Users/Vicubso/.../code/venv/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip'] I have also read the following posts post 1, post 2, and several others. None of these seem to solve my problem. Any help would be much appreciated.
Give this approach a shot: Install pip: sudo apt-get install python-pip Install virtualenv: sudo pip install virtualenv Store your virtual environments somewhere: mkdir ~/.storevirtualenvs Now you should be able to create a new virtualenv: virtualenv -p python3 yourVenv To activate it: source yourVenv/bin/activate To exit your new virtualenv, just run deactivate
49
50
61,513,681
2020-4-29
https://stackoverflow.com/questions/61513681/bin-sh-1-python-not-found-when-run-via-cron-in-docker
I want to repeatedly call a script via cron in a docker container, but when I switch from one-time execution to execution via cron the official python image suddenly can't seem to find python. Dockerfile: FROM python:3.7-slim COPY main.py /home/main.py #A: works CMD [ "python", "/home/main.py" ] #B: doesn't work #RUN apt-get update && apt-get -y install -qq --force-yes cron #COPY hello-cron /etc/cron.d/hello-cron #CMD ["cron", "-f"] main.py import time for i in range(90000): print(i) time.sleep(5000) hello-cron: * * * * * root python /home/main.py > /proc/1/fd/1 2> /proc/1/fd/2 # When I switch A for B in the Dockerfile the error message is: /bin/sh: 1: python: not found Thank you all for the quick responses! Adding PATH=/usr/local/bin in the cron file solved my problem.
Cron doesn't set up the PATH environment variable the same as a normal login shell so python can't be found. It should work if you specify a complete path to the Python executable, e.g. replace python with /usr/bin/python (or whatever the path to your Python executable happens to be). Alternatively you can explicitly set the PATH environment variable in the Cron configuration file to include the directory where Python can be found.
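For example, a sketch of the question's hello-cron file with both fixes applied. The /usr/local/bin location is where the official python:3.7-slim image puts its interpreter, which matches the asker's own PATH fix; verify with `which python` inside your container:

# Option 1: give cron jobs a PATH that includes the interpreter's directory
PATH=/usr/local/bin:/usr/bin:/bin
* * * * * root python /home/main.py > /proc/1/fd/1 2> /proc/1/fd/2

# Option 2: call the interpreter by absolute path instead
# * * * * * root /usr/local/bin/python /home/main.py > /proc/1/fd/1 2> /proc/1/fd/2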
9
10
61,494,278
2020-4-29
https://stackoverflow.com/questions/61494278/plotly-how-to-make-a-figure-with-multiple-lines-and-shaded-area-for-standard-de
How can I use Plotly to produce a line plot with a shaded standard deviation? I am trying to achieve something similar to seaborn.tsplot. Any help is appreciated.
The following approach is fully flexible with regards to the number of columns in a pandas dataframe and uses the default color cycle of plotly. If the number of lines exceed the number of colors, the colors will be re-used from the start. As of now px.colors.qualitative.Plotly can be replaced with any hex color sequence that you can find using px.colors.qualitative: Alphabet = ['#AA0DFE', '#3283FE', '#85660D', '#782AB6', '#565656', '#1... Alphabet_r = ['#FA0087', '#FBE426', '#B00068', '#FC1CBF', '#C075A6', '... [...] Complete code: # imports import plotly.graph_objs as go import plotly.express as px import pandas as pd import numpy as np # sample data in a pandas dataframe np.random.seed(1) df=pd.DataFrame(dict(A=np.random.uniform(low=-1, high=2, size=25).tolist(), B=np.random.uniform(low=-4, high=3, size=25).tolist(), C=np.random.uniform(low=-1, high=3, size=25).tolist(), )) df = df.cumsum() # define colors as a list colors = px.colors.qualitative.Plotly # convert plotly hex colors to rgba to enable transparency adjustments def hex_rgba(hex, transparency): col_hex = hex.lstrip('#') col_rgb = list(int(col_hex[i:i+2], 16) for i in (0, 2, 4)) col_rgb.extend([transparency]) areacol = tuple(col_rgb) return areacol rgba = [hex_rgba(c, transparency=0.2) for c in colors] colCycle = ['rgba'+str(elem) for elem in rgba] # Make sure the colors run in cycles if there are more lines than colors def next_col(cols): while True: for col in cols: yield col line_color=next_col(cols=colCycle) # plotly figure fig = go.Figure() # add line and shaded area for each series and standards deviation for i, col in enumerate(df): new_col = next(line_color) x = list(df.index.values+1) y1 = df[col] y1_upper = [(y + np.std(df[col])) for y in df[col]] y1_lower = [(y - np.std(df[col])) for y in df[col]] y1_lower = y1_lower[::-1] # standard deviation area fig.add_traces(go.Scatter(x=x+x[::-1], y=y1_upper+y1_lower, fill='tozerox', fillcolor=new_col, line=dict(color='rgba(255,255,255,0)'), showlegend=False, name=col)) # line trace fig.add_traces(go.Scatter(x=x, y=y1, line=dict(color=new_col, width=2.5), mode='lines', name=col) ) # set x-axis fig.update_layout(xaxis=dict(range=[1,len(df)])) fig.show()
14
9
61,516,930
2020-4-30
https://stackoverflow.com/questions/61516930/does-shap-in-python-support-keras-or-tensorflow-models-while-using-deepexplainer
I am currently using SHAP Package to determine the feature contributions. I have used the approach for XGBoost and RandomForest and it worked really well. Since the data I am working on is a sequential data I tried using LSTM and CNN to train the model and then get the feature importance using the SHAP's DeepExplainer; but it is continuously throwing error. The error I am getting is: AssertionError: <class 'keras.callbacks.History'> is not currently a supported model type!. I am attaching the sample code as well (LSTM). It would be helpful if someone could help me with it. shap.initjs() model = Sequential() model.add(LSTM(n_neurons, input_shape=(X.shape[1],X.shape[2]), return_sequences=True)) model.add(LSTM(n_neurons, return_sequences=False)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') h=model.fit(X, y, epochs=nb_epochs, batch_size=n_batch, verbose=1, shuffle=True) background = X[np.random.choice(X.shape[0],100, replace=False)] explainer = shap.DeepExplainer(h,background)
The returned value of model.fit is not the model instance; rather, it's the history of training (i.e. stats like loss and metric values) as an instance of keras.callbacks.History class. That's why you get the mentioned error when you pass the returned History object to shap.DeepExplainer. Instead, you should pass the model instance itself: explainer = shap.DeepExplainer(model, background)
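In other words, keep the History return value for the training statistics and hand the model itself to SHAP - a small sketch reusing the question's variables (model, X, y, nb_epochs, n_batch):

# model.fit returns a History object; keep it if you want the loss curve
history = model.fit(X, y, epochs=nb_epochs, batch_size=n_batch, verbose=1, shuffle=True)
print(history.history['loss'])  # per-epoch training loss

# pass the model (not the History) to DeepExplainer
background = X[np.random.choice(X.shape[0], 100, replace=False)]
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X[:10])  # explain a handful of samples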
12
11
61,499,350
2020-4-29
https://stackoverflow.com/questions/61499350/combine-audio-files-in-python
How can I combine multiple audio files (wav) into one file in Python? I found this: import wave infiles = ["sound_1.wav", "sound_2.wav"] outfile = "sounds.wav" data = [] for infile in infiles: w = wave.open(infile, 'rb') data.append([w.getparams(), w.readframes(w.getnframes())]) w.close() output = wave.open(outfile, 'wb') output.setparams(data[0][0]) output.writeframes(data[0][1]) output.writeframes(data[1][1]) output.close() but this appends one audio file to the other. What I would like to have is code that "stacks" the audio files (with volume control, please). Is this even possible in Python?
You can use the pydub module. It's one of the easiest ways to cut, edit, merge audio files using Python. Here's an example of how to use it to combine audio files with volume control: from pydub import AudioSegment sound1 = AudioSegment.from_file("/path/to/sound.wav", format="wav") sound2 = AudioSegment.from_file("/path/to/another_sound.wav", format="wav") # sound1 6 dB louder louder = sound1 + 6 # sound1, with sound2 appended (use louder instead of sound1 to append the louder version) combined = sound1 + sound2 # simple export file_handle = combined.export("/path/to/output.mp3", format="mp3") To overlay sounds, try this: from pydub import AudioSegment sound1 = AudioSegment.from_file("1.wav", format="wav") sound2 = AudioSegment.from_file("2.wav", format="wav") # sound1 6 dB louder louder = sound1 + 6 # Overlay sound2 over sound1 at position 0 (use louder instead of sound1 to use the louder version) overlay = sound1.overlay(sound2, position=0) # simple export file_handle = overlay.export("output.mp3", format="mp3") Full documentation here pydub API Documentation
16
41
61,503,183
2020-4-29
https://stackoverflow.com/questions/61503183/how-can-i-add-grid-lines-to-a-catplot-in-seaborn
How can I add grid lines (vertically and horizontally) to a seaborn catplot? I found a possibility to do that on a boxplot, but I have multiple facets and therefore need a catplot instead. And in contrast to this other answer, catplot does not allow an ax argument. This code is borrowed from here. import seaborn as sns sns.set(style="ticks") exercise = sns.load_dataset("exercise") g = sns.catplot(x="time", y="pulse", hue="kind", data=exercise) plt.show() Any ideas? Thank you! EDIT: The provided answer is working, but for faceted plots, only the last plot inherits the grid. import seaborn as sns sns.set(style="ticks") exercise = sns.load_dataset("exercise") g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise) plt.grid() plt.show() Can someone explain to me why and how to fix it?
You can set the grid over seaborn plots in two ways: 1. plt.grid() method: You need to use the grid method inside matplotlib.pyplot. You can do that like so: import seaborn as sns import matplotlib.pyplot as plt sns.set(style="ticks") exercise = sns.load_dataset("exercise") g = sns.catplot(x="time", y="pulse", hue="kind", data=exercise) plt.grid() #just add this plt.show() Which results in this graph: 2. sns.set_style() method You can also use sns.set_style which will enable grid over all subplots in any given FacetGrid. You can do that like so: import seaborn as sns import matplotlib.pyplot as plt sns.set(style="ticks") exercise = sns.load_dataset("exercise") sns.set_style("darkgrid") g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise) plt.show() Which returns this graph:
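For the faceted case from the question's EDIT, another option (not part of the answer above) is to switch the grid on for every facet explicitly, since plt.grid() only affects the current - i.e. the last - axes:

import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style="ticks")
exercise = sns.load_dataset("exercise")
g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise)

# catplot returns a FacetGrid; g.axes holds every subplot's axes
for ax in g.axes.flat:
    ax.grid(True)

plt.show()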
23
41
61,500,121
2020-4-29
https://stackoverflow.com/questions/61500121/opencv-python-reading-image-as-rgb
Is it possible for OpenCV (using Python) to read an image in RGB order by default? The OpenCV documentation says the imread method returns the image in BGR order, but in my code imread seems to return the image in RGB order. I am not doing any conversion; I just use imread and show the image on screen, and it looks the same as in the Windows image viewer. Is this possible? EDIT 1: My code is below. On one side I use the cv2.imshow() method and on the other the plt.imshow() method. cv2.imshow() shows the image as RGB, while plt shows it as OpenCV read it (BGR). image_file = 'image/512-2-1001-18-RGB.jpg' # img = imp.get_image(image_file) img = cv2.imread(image_file) plt.imshow(img) plt.show() cv2.imshow('asd', img) cv2.waitKey(0) cv2.destroyAllWindows() EDIT 2: Somehow the OpenCV imshow method is showing the image as RGB. Below I have attached the first pixel's value of the image; the next image shows the Photoshop pixel values. EDIT 3: Below is just reading the image and showing it with imshow; the second image is the original RGB image. After imshow the image looks the same as the original image, and this confused me. (Original image in order of RGB.)
OpenCV is entirely consistent within itself. It reads images into Numpy arrays with the channels in BGR order, keeps the images in BGR order and its cv2.imshow() and cv2.imwrite() also expect images in BGR order. All your JPEG/PNG/BMP/TIFF files remain in their normal RGB order on disk. Other libraries, such as PIL/Pillow, scikit-image, matplotlib, pyvips store images in conventional RGB order in memory. So, you will only get colour issues if you mix OpenCV with any other library. If you go from/to OpenCV from any of the others, you will need to reverse the channel order. It is the same process either way, you are swapping the first and third channel: RGBimage = BGRimage[...,::-1] or BGRimage = RGBimage[...,::-1] Or, you can use OpenCV cvtColor() to do the transform: RGBimage = cv2.cvtColor(BGRimage, cv2.COLOR_BGR2RGB) You don't need to necessarily make a whole copy in a new variable every time. Say you read an image with OpenCV, in OpenCV BGR order obviously, and you briefly want to display it with matplotlib, you can just reverse the channels as you pass it: # Load image with OpenCV and process in BGR order im = cv2.imread(something) # Briefly display with matplotlib, which will want RGB order plt.imshow(img[...,::-1]) plt.show() # Carry on processing in BGR order with OpenCV ...
13
23
61,497,292
2020-4-29
https://stackoverflow.com/questions/61497292/getting-pep8-invalid-escape-sequence-warning-trying-to-escape-parentheses-in-a
I am trying to escape a string such as this: string = re.split(")(", other_string) Because not escaping those parentheses gives me an error. But if I do this: string = re.split("\)\(", other_string) I get a warning from PEP8 that it's an invalid escape sequence. Is there a way to do this properly? Putting 'r' in front of the string does not fix it.
You are probably looking for raw strings, which would mean your pattern is written with an r prefix, e.g. r"\)\(" - the backslashes escape the parentheses for the regex engine, while the raw-string prefix stops Python from flagging them as invalid string escape sequences. Though that reference is from 2008 and about Python 2, which is being phased out. What the r does is make it a "raw string". See: How to fix "<string> DeprecationWarning: invalid escape sequence" in Python? as well. What you're dealing with is string literals, which are covered in the PEP 8 Style Guide. UPDATE for the edit in the question: If you're trying to split on ")(" then here's a working example that doesn't throw PEP 8 warnings: import re string = r"Hello )(world!" print(string) string = re.split(r'\)\(', string) print(string) Which outputs: Hello )(world! ['Hello ', 'world!'] Or you can escape the backslash explicitly by using two backslashes: import re string = r"Hello )(world!" print(string) string = re.split('\\)\\(', string) print(string) Which outputs the same thing: Hello )(world! ['Hello ', 'world!']
30
30
61,494,374
2020-4-29
https://stackoverflow.com/questions/61494374/how-do-i-run-a-program-installed-with-pip-in-windows
I installed a program with pip (e.g. simple-plotter) in windows using the following command: py -m pip install simple-plotter How do I run the program I installed? In Linux, I can just type the command in a terminal, but if I type in windows, I get a "not recognized as an internal or external command" or if I run in Python I get a "No module named" error.
I think the error occurs because Python's Scripts folder is not on PATH in your environment variables, while on Linux the equivalent directory probably is (I don't know a lot about Linux, so I don't know exactly how environment variables work there). The best way to solve this is to add Python's Scripts folder to PATH. In my case it is C:\Users\MyUser\AppData\Local\Programs\Python\Python37-32\Scripts. So, press WIN (Windows key) + PAUSE_BREAK (near the Print Screen key), then click Advanced System Settings (sorry if the name isn't exact, my system is in Portuguese). After that, click Environment Variables, select Path and click Edit, then click New, add the path, and confirm with OK.
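If you are not sure where that Scripts folder is on your machine, a small helper (not part of the original answer) is to ask Python itself:

import sysconfig

# directory where pip puts console-script executables on Windows,
# e.g. C:\Users\<User>\AppData\Local\Programs\Python\Python37-32\Scripts
print(sysconfig.get_path("scripts"))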
17
9
61,492,879
2020-4-29
https://stackoverflow.com/questions/61492879/why-is-true-true-true-true-true-true-not-true-in-python
Code snippet 1: a = True, True, True b = (True, True, True) print(a == b) returns True. Code snippet 2: (True, True, True) == True, True, True returns (False, True, True).
Operator precedence. You're actually checking equality between (True, True, True) and True in your second code snippet, and then building a tuple with that result as the first item. Recall that in Python by specifying a comma-separated "list" of items without any brackets, it returns a tuple: >>> a = True, True, True >>> print(type(a)) <class 'tuple'> >>> print(a) (True, True, True) Code snippet 2 is no exception here. You're attempting to build a tuple using the same syntax, it just so happens that the first element is (True, True, True) == True, the second element is True, and the third element is True. So code snippet 2 is equivalent to: (((True, True, True) == True), True, True) And since (True, True, True) == True is False (you're comparing a tuple of three objects to a boolean here), the first element becomes False.
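So, to compare whole tuples in code snippet 2, parenthesize the right-hand side as well:

>>> (True, True, True) == (True, True, True)
True
>>> ((True, True, True) == True, True, True)  # what snippet 2 actually evaluates
(False, True, True)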
16
29
61,487,041
2020-4-28
https://stackoverflow.com/questions/61487041/more-perceptually-uniform-colormaps
I am an advocate of using perceptually uniform colormaps when plotting scientific data as grayscale images and applying false colorings. I don't know who invented these, but these colormaps are fantastic and I would not use anything else. Anyways to be honest, I've gotten a bit bored of the 5 colormaps (viridis, plasma, inferno, magma, cividis) which have been implemented in many popular graphing softwares (R-ggplot, python-matplotlib, matlab, JMP, etc.). I'm sure some of you also feel the same monotony... So in addition to those 5 colormaps, what are some other colormaps which are perceptually uniform? BONUS: Is there some algorithm to derive colormaps with perceptually uniform qualities (maybe not since color perception has a psychological aspect)? but if so, what is it? Some examples & refs: https://www.youtube.com/watch?v=xAoljeRJ3lU
If you follow this page: http://bids.github.io/colormap/, you will find all the details required to produce Viridis, Magma, Inferno and Plasma. All the details are too long to enumerate as an answer but using the aforementioned page and viscm, you can regenerate them and some more interactively. Alternatively, and using Colour: import colour import numpy as np CAM16UCS = colour.convert(['#ff0000', '#00ff00'], 'Hexadecimal', 'CAM16UCS') gradient = colour.algebra.lerp( np.linspace(0, 1, 20)[..., np.newaxis], CAM16UCS[0][np.newaxis], CAM16UCS[1][np.newaxis], ) RGB = colour.convert(gradient, 'CAM16UCS', 'Output-Referred RGB') colour.plotting.plot_multi_colour_swatches( [colour.plotting.ColourSwatch(RGB=np.clip(x, 0, 1)) for x in RGB]) print(colour.convert(RGB, 'Output-Referred RGB', 'Hexadecimal')) ['#fe0000' '#fb3209' '#f74811' '#f35918' '#ef671e' '#ea7423' '#e67f28' '#e18a2c' '#dc9430' '#d79e34' '#d1a738' '#cbb03b' '#c4b93d' '#bcc23e' '#b2cc3d' '#a6d53a' '#97df36' '#82e92e' '#62f321' '#00ff00'] Note that the two boundary colours are given as hexadecimal values but you could obviously choose any relevant colourspace. Likewise, CAM16 could be swapped for JzAzBz or alike. You can try that online with this Google Colab notebook.
16
10
61,405,654
2020-4-24
https://stackoverflow.com/questions/61405654/how-to-close-files-using-the-pathlib-module
Historically I have always used the following for reading files in python: with open("file", "r") as f: for line in f: # do thing to line Is this still the recommend approach? Are there any drawbacks to using the following: from pathlib import Path path = Path("file") for line in path.open(): # do thing to line Most of the references I found are using the with keyword for opening files for the convenience of not having to explicitly close the file. Is this applicable for the iterator approach here? with open() docs
If all you wanted to do was read or write a small blob of text (or bytes), then you no longer need to use a with-statement when using pathlib: >>> import pathlib >>> path = pathlib.Path("/tmp/example.txt") >>> path.write_text("hello world") 11 >>> path.read_text() 'hello world' >>> path.read_bytes() b'hello world' These methods still use a context-manager internally (src). Opening a large file to iterate lines should still use a with-statement, as the docs show: >>> with path.open() as f: ... for line in f: ... print(line) ... hello world
25
33
61,368,851
2020-4-22
https://stackoverflow.com/questions/61368851/how-to-rotate-seaborn-barplot-x-axis-tick-labels
I'm trying to get a barplot to rotate it's X Labels in 45° to make them readable (as is, there's overlap). len(genero) is 7, and len(filmes_por_genero) is 20 I'm using a MovieLens dataset and making a graph counting the number of movies in each individual genre. Here's my code as of now: import seaborn as sns import matplotlib.pyplot as plt sns.set_style("whitegrid") filmes_por_genero = filmes["generos"].str.get_dummies('|').sum().sort_values(ascending=False) genero = filmes_com_media.index chart = plt.figure(figsize=(16,8)) sns.barplot(x=genero, y=filmes_por_genero.values, palette=sns.color_palette("BuGn_r", n_colors=len(filmes_por_genero) + 4) ) chart.set_xticklabels( chart.get_xticklabels(), rotation=45, horizontalalignment='right' ) Here's the full error: /usr/local/lib/python3.6/dist-packages/pandas/core/groupby/grouper.py in get_grouper(obj, key, axis, level, sort, observed, mutated, validate) 623 in_axis=in_axis, 624 ) --> 625 if not isinstance(gpr, Grouping) 626 else gpr 627 ) /usr/local/lib/python3.6/dist-packages/pandas/core/groupby/grouper.py in __init__(self, index, grouper, obj, name, level, sort, observed, in_axis) 254 self.name = name 255 self.level = level --> 256 self.grouper = _convert_grouper(index, grouper) 257 self.all_grouper = None 258 self.index = index /usr/local/lib/python3.6/dist-packages/pandas/core/groupby/grouper.py in _convert_grouper(axis, grouper) 653 elif isinstance(grouper, (list, Series, Index, np.ndarray)): 654 if len(grouper) != len(axis): --> 655 raise ValueError("Grouper and axis must be same length") 656 return grouper 657 else: ValueError: Grouper and axis must be same length
Data from MovieLens 25M Dataset at MovieLens The following code uses the explicit Axes interface with the seaborn axes-level functions. See How to rotate xticklabels in a seaborn catplot for the figure-level functions. If there's no need to change the xticklabels, the easiest option is ax.tick_params(axis='x', labelrotation=45), but horizontalalignment / ha can't be set. ax.set_xticks can be used with ax.get_xticks and ax.get_xticklabels ax.set_xticklabels works with ax.get_xticklabels, but results in UserWarning: set_ticklabels() should only be used with a fixed number of ticks, i.e. after set_ticks() or using a FixedLocator. Becasue sns.barplot is a categorical plot, and native_scale=False by default, the xticks are 0 indexed, so the xticklabels can be set, and the warning ignored. Tested in python v3.12.0, pandas v2.1.2, matplotlib v3.8.1, seaborn v0.13.0. import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # data df = pd.read_csv('ml-25m/movies.csv') print(df.head()) movieId title genres 0 1 Toy Story (1995) Adventure|Animation|Children|Comedy|Fantasy 1 2 Jumanji (1995) Adventure|Children|Fantasy 2 3 Grumpier Old Men (1995) Comedy|Romance 3 4 Waiting to Exhale (1995) Comedy|Drama|Romance 4 5 Father of the Bride Part II (1995) Comedy # split the strings in the genres column df['genres'] = df['genres'].str.split('|') # explode the lists that result for str.split df = df.explode('genres', ignore_index=True) print(df.head()) movieId title genres 0 1 Toy Story (1995) Adventure 1 1 Toy Story (1995) Animation 2 1 Toy Story (1995) Children 3 1 Toy Story (1995) Comedy 4 1 Toy Story (1995) Fantasy Genres Counts gc = df.genres.value_counts().reset_index() print(gc) genres count 0 Drama 25606 1 Comedy 16870 2 Thriller 8654 3 Romance 7719 4 Action 7348 5 Horror 5989 6 Documentary 5605 7 Crime 5319 8 (no genres listed) 5062 9 Adventure 4145 10 Sci-Fi 3595 11 Children 2935 12 Animation 2929 13 Mystery 2925 14 Fantasy 2731 15 War 1874 16 Western 1399 17 Musical 1054 18 Film-Noir 353 19 IMAX 195 sns.barplot fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(data=gc, x='genres', y='count', hue='genres', palette=sns.color_palette("BuGn_r", n_colors=len(gc)), ec='k', legend=False, ax=ax) ax.tick_params(axis='x', labelrotation=45) # ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha='right') # ax.set_xticks(ticks=ax.get_xticks(), labels=ax.get_xticklabels(), rotation=45, ha='right') plt.show() plt.figure(figsize=(12, 6)) ax = sns.barplot(data=gc, x='genres', y='count', hue='genres', palette=sns.color_palette("BuGn_r", n_colors=len(gc)), ec='k', legend=False) ax.tick_params(axis='x', labelrotation=45) # ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha='right') # ax.set_xticks(ticks=ax.get_xticks(), labels=ax.get_xticklabels(), rotation=45, ha='right') plt.show() sns.countplot Use sns.countplot to skip using .value_counts() if the plot order doesn't matter. To order the countplot, order=df.genres.value_counts().index must be used, so countplot doesn't really save you from needing .value_counts(), if a descending order is desired. fig, ax = plt.subplots(figsize=(12, 6)) sns.countplot(data=df, x='genres', ax=ax) ax.tick_params(axis='x', labelrotation=45) # ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha='right') # ax.set_xticks(ticks=ax.get_xticks(), labels=ax.get_xticklabels(), rotation=45, ha='right') plt.show() pandas.DataFrame.plot .value_counts can be plotted directly, and the rot= parameter can be used to rotate the xticklabels. 
ax = df.genres.value_counts().plot(kind='bar', rot=45, width=0.85, ec='k', figsize=(12, 6)) In a bar plot with long xticklabels, the cleaner option might be to use horizontal bars. plt.figure(figsize=(6, 4)) ax = sns.barplot(data=gc, y='genres', x='count', orient='h', hue='genres', palette=sns.color_palette("BuGn_r", n_colors=len(gc)), ec='k', legend=False)
13
30
61,414,947
2020-4-24
https://stackoverflow.com/questions/61414947/why-dont-python-sets-preserve-insertion-order
I was surprised to discover recently that while dicts are guaranteed to preserve insertion order in Python 3.7+, sets are not: >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d {'a': 1, 'b': 2, 'c': 3} >>> d['d'] = 4 >>> d {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> s = {'a', 'b', 'c'} >>> s {'b', 'a', 'c'} >>> s.add('d') >>> s {'d', 'b', 'a', 'c'} What is the rationale for this difference? Do the same efficiency improvements that led the Python team to change the dict implementation not apply to sets as well? I'm not looking for pointers to ordered-set implementations or ways to use dicts as stand-ins for sets. I'm just wondering why the Python team didn't make built-in sets preserve order at the same time they did so for dicts.
Sets and dicts are optimized for different use-cases. The primary use of a set is fast membership testing, which is order agnostic. For dicts, cost of the lookup is the most critical operation, and the key is more likely to be present. With sets, the presence or absence of an element is not known in advance, and so the set implementation needs to optimize for both the found and not-found case. Also, some optimizations for common set operations such as union and intersection make it difficult to retain set ordering without degrading performance. While both data structures are hash based, it's a common misconception that sets are just implemented as dicts with null values. Even before the compact dict implementation in CPython 3.6, the set and dict implementations already differed significantly, with little code reuse. For example, dicts use randomized probing, but sets use a combination of linear probing and open addressing, to improve cache locality. The initial linear probe (default 9 steps in CPython) will check a series of adjacent key/hash pairs, improving performance by reducing the cost of hash collision handling - consecutive memory access is cheaper than scattered probes. dictobject.c - main, v3.5.9 setobject.c - main, v3.5.9 issue18771 - changeset to reduce the cost of hash collisions for set objects in Python 3.4. It would be possible in theory to change CPython's set implementation to be similar to the compact dict, but in practice there are drawbacks, and notable core developers were opposed to making such a change. Sets remain unordered. (Why? The usage patterns are different. Also, different implementation.) – Guido van Rossum Sets use a different algorithm that isn't as amendable to retaining insertion order. Set-to-set operations lose their flexibility and optimizations if order is required. Set mathematics are defined in terms of unordered sets. In short, set ordering isn't in the immediate future. – Raymond Hettinger A detailed discussion about whether to compactify sets for 3.7, and why it was decided against, can be found in the python-dev mailing lists. In summary, the main points are: different usage patterns (insertion ordering dicts such as **kwargs is useful, less so for sets), space savings for compacting sets are less significant (because there are only key + hash arrays to densify, as opposed to key + hash + value arrays), and the aforementioned linear probing optimization which sets currently use is incompatible with a compact implementation. I will reproduce Raymond's post below which covers the most important points. On Sep 14, 2016, at 3:50 PM, Eric Snow wrote: Then, I'll do same to sets. Unless I've misunderstood, Raymond was opposed to making a similar change to set. That's right. Here are a few thoughts on the subject before people starting running wild. For the compact dict, the space savings was a net win with the additional space consumed by the indices and the overallocation for the key/value/hash arrays being more than offset by the improved density of key/value/hash arrays. However for sets, the net was much less favorable because we still need the indices and overallocation but can only offset the space cost by densifying only two of the three arrays. In other words, compacting makes more sense when you have wasted space for keys, values, and hashes. If you lose one of those three, it stops being compelling. The use pattern for sets is different from dicts. The former has more hit or miss lookups. The latter tends to have fewer missing key lookups. 
Also, some of the optimizations for the set-to-set operations make it difficult to retain set ordering without impacting performance. I pursued alternative path to improve set performance. Instead of compacting (which wasn't much of space win and incurred the cost of an additional indirection), I added linear probing to reduce the cost of collisions and improve cache performance. This improvement is incompatible with the compacting approach I advocated for dictionaries. For now, the ordering side-effect on dictionaries is non-guaranteed, so it is premature to start insisting the sets become ordered as well. The docs already link to a recipe for creating an OrderedSet ( https://code.activestate.com/recipes/576694/ ) but it seems like the uptake has been nearly zero. Also, now that Eric Snow has given us a fast OrderedDict, it is easier than ever to build an OrderedSet from MutableSet and OrderedDict, but again I haven't observed any real interest because typical set-to-set data analytics don't really need or care about ordering. Likewise, the primary use of fast membership testings is order agnostic. That said, I do think there is room to add alternative set implementations to PyPI. In particular, there are some interesting special cases for orderable data where set-to-set operations can be sped-up by comparing entire ranges of keys (see https://code.activestate.com/recipes/230113-implementation-of-sets-using-sorted-lists for a starting point). IIRC, PyPI already has code for set-like bloom filters and cuckoo hashing. I understanding that it is exciting to have a major block of code accepted into the Python core but that shouldn't open to floodgates to engaging in more major rewrites of other datatypes unless we're sure that it is warranted. – Raymond Hettinger From [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered, Sept 2016.
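If you do need set-like deduplication that keeps insertion order, a common workaround (separate from the discussion above) is to lean on dict's ordering guarantee in 3.7+:

items = ['a', 'b', 'c', 'a', 'd', 'b']

# dict.fromkeys keeps first-seen order and drops duplicates,
# giving an order-preserving "set" without any third-party package
ordered_unique = list(dict.fromkeys(items))
print(ordered_unique)  # ['a', 'b', 'c', 'd']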
67
66
61,415,284
2020-4-24
https://stackoverflow.com/questions/61415284/poetry-cant-find-version-of-dependency-even-though-it-exists
When bumping my python version from 3.7 to 3.8 in poetry, reinstalling all the dependencies fail with a version of the following: ERROR: No matching distribution found for... The distribution for that version is available at pypa, and is often the most recent version. Simply removing the offending package doesn't fix the issue, as poetry will likely fail with other packages. After some investigation, it appears that somehow poetry isn't using pip3 to install underneath, but is instead using pip2.7. Indeed this is supported by a deprecation alert, and the error is always reproducible if I attempt to install the same version with pip (globally or otherwise) and not pip3. This issue is frustrating and deleting the venv alone doesn't seem to help. How can I resolve this dependency issue that shouldn't exist in the first place?
There are two issues here which feed into each other. poetry seems to consistently botch the upgrade of a venv when you modify the python versions. According to finswimmer, the upgrade should create a new virtual env for the new python version, however this process can fail when poetry uses the wrong pip version or loses track of which virtual env it's using. poetry uses whatever pip is no questions asked - with no way to override and force usage of pip3. Here are the distilled steps I used to solve this issue delete the virtual env ( sometimes poetry loses track of the venv/thinks it's already activated. Best to clear the slate ) rm -rf `poetry env list --full-path` create a new virtual env ( the command should fail, but the venv will be created ) poetry install manually activate the virtual env source "$(poetry env list --full-path | tail -1 | sed 's/.\{12\}$//')/bin/activate" poetry install within the virtual env ( this ensures poetry is using the correct version of pip ) poetry install
22
17
61,468,548
2020-4-27
https://stackoverflow.com/questions/61468548/check-if-list-is-not-empty-with-pydantic-in-an-elegant-way
Let's say I have some BaseModel, and I want to check that its options list is not empty. I can perfectly well do it with a validator: class Trait(BaseModel): name: str options: List[str] @validator("options") def options_non_empty(cls, v): assert len(v) > 0 return v Is there any other, more elegant, way to do this?
If you want to use a @validator: return v if v else doSomething() - Python treats an empty list as falsy, so the truthiness check is enough (doSomething stands for whatever handling you want, e.g. raising a ValueError). If you don't want to use a @validator, use Pydantic's conlist: from pydantic import BaseModel, conlist class Trait(BaseModel): name: str options: conlist(str, min_length=1)
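For completeness, a sketch of the validator variant spelled out - the same shape as the question's code, with an explicit error instead of the bare assert (names are taken from the question):

from typing import List
from pydantic import BaseModel, validator

class Trait(BaseModel):
    name: str
    options: List[str]

    @validator("options")
    def options_non_empty(cls, v):
        if not v:  # an empty list is falsy
            raise ValueError("options must not be empty")
        return v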
27
53
61,427,583
2020-4-25
https://stackoverflow.com/questions/61427583/how-do-i-plot-a-keras-tensorflow-subclassing-api-model
I made a model that runs correctly using the Keras Subclassing API. The model.summary() also works correctly. When trying to use tf.keras.utils.plot_model() to visualize my model's architecture, it will just output this image: This almost feels like a joke from the Keras development team. This is the full architecture: import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' from sklearn.datasets import load_diabetes import tensorflow as tf tf.keras.backend.set_floatx('float64') from tensorflow.keras.layers import Dense, GaussianDropout, GRU, Concatenate, Reshape from tensorflow.keras.models import Model X, y = load_diabetes(return_X_y=True) data = tf.data.Dataset.from_tensor_slices((X, y)).\ shuffle(len(X)).\ map(lambda x, y: (tf.divide(x, tf.reduce_max(x)), y)) training = data.take(400).batch(8) testing = data.skip(400).map(lambda x, y: (tf.expand_dims(x, 0), y)) class NeuralNetwork(Model): def __init__(self): super(NeuralNetwork, self).__init__() self.dense1 = Dense(16, input_shape=(10,), activation='relu', name='Dense1') self.dense2 = Dense(32, activation='relu', name='Dense2') self.resha1 = Reshape((1, 32)) self.gru1 = GRU(16, activation='tanh', recurrent_dropout=1e-1) self.dense3 = Dense(64, activation='relu', name='Dense3') self.gauss1 = GaussianDropout(5e-1) self.conca1 = Concatenate() self.dense4 = Dense(128, activation='relu', name='Dense4') self.dense5 = Dense(1, name='Dense5') def call(self, x, *args, **kwargs): x = self.dense1(x) x = self.dense2(x) a = self.resha1(x) a = self.gru1(a) b = self.dense3(x) b = self.gauss1(b) x = self.conca1([a, b]) x = self.dense4(x) x = self.dense5(x) return x skynet = NeuralNetwork() skynet.build(input_shape=(None, 10)) skynet.summary() model = tf.keras.utils.plot_model(model=skynet, show_shapes=True, to_file='/home/nicolas/Desktop/model.png')
I've found a workaround to plot models built with the subclassing API. For obvious reasons, the subclassing API doesn't give you the Sequential/Functional-style model.summary() output or a nice visualization using plot_model out of the box. Here, I will demonstrate both. import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers as L from tensorflow.keras import Input, Model class my_model(keras.Model): def __init__(self, dim): super(my_model, self).__init__() self.Base = keras.applications.VGG16( input_shape=(dim), include_top = False, weights = 'imagenet' ) self.GAP = L.GlobalAveragePooling2D() self.BAT = L.BatchNormalization() self.DROP = L.Dropout(rate=0.1) self.DENS = L.Dense(256, activation='relu', name = 'dense_A') self.OUT = L.Dense(1, activation='sigmoid') def call(self, inputs): x = self.Base(inputs) g = self.GAP(x) b = self.BAT(g) d = self.DROP(b) d = self.DENS(d) return self.OUT(d) # AFAIK: the most convenient way to get a model.summary() # similar to the Sequential or Functional API. def build_graph(self): x = Input(shape=(dim)) return Model(inputs=[x], outputs=self.call(x)) dim = (124,124,3) model = my_model((dim)) model.build((None, *dim)) model.build_graph().summary() It will produce the following: Layer (type) Output Shape Param # ================================================================= input_67 (InputLayer) [(None, 124, 124, 3)] 0 _________________________________________________________________ vgg16 (Functional) (None, 3, 3, 512) 14714688 _________________________________________________________________ global_average_pooling2d_32 (None, 512) 0 _________________________________________________________________ batch_normalization_7 (Batch (None, 512) 2048 _________________________________________________________________ dropout_5 (Dropout) (None, 512) 0 _________________________________________________________________ dense_A (Dense) (None, 256) 402192 _________________________________________________________________ dense_7 (Dense) (None, 1) 785 ================================================================= Total params: 14,848,321 Trainable params: 14,847,297 Non-trainable params: 1,024 Now, by using the build_graph function, we can simply plot the whole architecture. # Just showing all possible arguments for newcomers. tf.keras.utils.plot_model( model.build_graph(), # here is the trick (for now) to_file='model.png', dpi=96, # saving show_shapes=True, show_layer_names=True, # show shapes and layer name expand_nested=False # will show nested block ) It will produce the plot of the full architecture. Similar QnA: Retrieving Keras Layer Properties from a tf.keras.Model Visualize nested keras.Model (SubClassed API) GAN model
16
26
61,392,258
2020-4-23
https://stackoverflow.com/questions/61392258/most-efficient-method-to-concatenate-strings-in-python
At the time of asking this question, I'm using Python 3.8 When I say efficient, I'm only referring to the speed at which the strings are concatenated, or in more technical terms: I'm asking about the time complexity, not accounting the space complexity. The only methods I can think of at the moment are the following 3 given that: a = 'start' b = ' end' Method 1 result = a + b Method 2 result = ''.join((a, b)) Method 3 result = '{0}{1}'.format(a, b) I want to know which of these methods are faster, or if there are other methods that are more efficient. Also, if you know if either of these methods performs differently with more strings or longer strings, please include that in your answer. Edit After seeing all the comments and answers, I have learned a couple of new ways to concatenate strings, and I have also learned about the timeit library. I will report my personal findings below: >>> import timeit >>> print(timeit.Timer('result = a + b', setup='a = "start"; b = " end"').timeit(number=10000)) 0.0005306000000473432 >>> print(timeit.Timer('result = "".join((a, b))', setup='a = "start"; b = " end"').timeit(number=10000)) 0.0011297000000354274 >>> print(timeit.Timer('result = "{0}{1}".format(a, b)', setup='a = "start"; b = " end"').timeit(number=10000)) 0.002327799999989111 >>> print(timeit.Timer('result = f"{a}{b}"', setup='a = "start"; b = " end"').timeit(number=10000)) 0.0005772000000092703 >>> print(timeit.Timer('result = "%s%s" % (a, b)', setup='a = "start"; b = " end"').timeit(number=10000)) 0.0017815999999584164 It seems that for these small strings, the traditional a + b method is the fastest for string concatenation. Thanks for all of the answers!
Let's try it out! We can use timeit.timeit() to run a statement many times and return the overall duration. Here, we use s to set up the variables a and b (not included in the overall time), and then run the various options 10 million times.
>>> from timeit import timeit
>>>
>>> n = 10 * 1000 * 1000
>>> s = "a = 'start'; b = ' end'"
>>>
>>> timeit("c = a + b", setup=s, number=n)
0.4452877212315798
>>>
>>> timeit("c = f'{a}{b}'", setup=s, number=n)
0.5252049304544926
>>>
>>> timeit("c = '%s%s'.format(a, b)", setup=s, number=n)
0.6849184390157461
>>>
>>> timeit("c = ''.join((a, b))", setup=s, number=n)
0.8546998891979456
>>>
>>> timeit("c = '%s%s' % (a, b)", setup=s, number=n)
1.1699129864573479
>>>
>>> timeit("c = '{0}{1}'.format(a, b)", setup=s, number=n)
1.5954962372779846
(One caveat: the '%s%s'.format(a, b) case isn't really concatenating a and b — str.format() ignores %s placeholders, so that call simply returns '%s%s', and its timing only measures the overhead of calling .format() with unused arguments.)
This shows that unless your application's bottleneck is string concatenation, it's probably not worth being too concerned about... The best case is ~0.45 seconds for 10 million iterations, or about 45ns per operation. The worst case is ~1.59 seconds for 10 million iterations, or about 159ns per operation. Depending on the performance of your system, you might see a speed improvement in the order of a few seconds if you're performing literally millions of operations. Note that your results may vary quite drastically depending on the lengths (and number) of the strings you're concatenating, and the hardware you're running on.
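If you're combining many pieces rather than just two, here is a rough sketch (my own extension, not part of the benchmark above) comparing repeated += in a loop against a single str.join() over the same fragments. The part count and repetition numbers are arbitrary example values:

from timeit import timeit

setup = "parts = ['word'] * 10000"

# repeated concatenation in a loop
loop_time = timeit("s = ''\nfor p in parts:\n    s += p", setup=setup, number=100)

# one join over all fragments
join_time = timeit("s = ''.join(parts)", setup=setup, number=100)

print(f"loop +=: {loop_time:.3f}s, join: {join_time:.3f}s")

The exact numbers depend on your interpreter and hardware (CPython can even optimize some += cases in place), so treat this as a way to measure on your own machine rather than a definitive ranking; join() is generally the recommended approach when building a string from many pieces.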
11
11
61,463,224
2020-4-27
https://stackoverflow.com/questions/61463224/when-to-use-raise-for-status-vs-status-code-testing
I have always used: r = requests.get(url) if r.status_code == 200: # my passing code else: # anything else, if this even exists Now I was working on another issue and decided to allow for other errors and am instead now using: try: r = requests.get(url) r.raise_for_status() except requests.exceptions.ConnectionError as err: # eg, no internet raise SystemExit(err) except requests.exceptions.HTTPError as err: # eg, url, server and other errors raise SystemExit(err) # the rest of my code is going here With the exception that various other errors could be tested for at this level, is one method any better than the other?
Response.raise_for_status() is just a built-in method for checking status codes and does essentially the same thing as your first example. There is no "better" here, just about personal preference with flow control. My preference is toward try/except blocks for catching errors in any call, as this informs the future programmer that these conditions are some sort of error. If/else doesn't necessarily indicate an error when scanning code. Edit: Here's my quick-and-dirty pattern. import time from http import HTTPStatus import requests from requests.exceptions import HTTPError url = "https://theurl.com" retries = 3 retry_codes = [ HTTPStatus.TOO_MANY_REQUESTS, HTTPStatus.INTERNAL_SERVER_ERROR, HTTPStatus.BAD_GATEWAY, HTTPStatus.SERVICE_UNAVAILABLE, HTTPStatus.GATEWAY_TIMEOUT, ] for n in range(retries): try: response = requests.get(url) response.raise_for_status() break except HTTPError as exc: code = exc.response.status_code if code in retry_codes: # retry after n seconds time.sleep(n) continue raise However, in most scenarios, I subclass requests.Session, make a custom HTTPAdapter that handles exponential backoffs, and the above lives in an overridden requests.Session.request method. An example of that can be seen here.
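For reference, here is a rough sketch of that Session-subclass idea, using urllib3's Retry via an HTTPAdapter so retries with exponential backoff happen at the transport layer instead of in a manual loop. The retry counts, backoff factor and status list below are just example values, not anything prescribed by requests (and note that older urllib3 versions call the argument method_whitelist instead of allowed_methods):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

class RetryingSession(requests.Session):
    def __init__(self, retries=3, backoff_factor=1):
        super().__init__()
        retry = Retry(
            total=retries,
            backoff_factor=backoff_factor,               # exponential backoff between attempts
            status_forcelist=[429, 500, 502, 503, 504],  # which status codes to retry
            allowed_methods=["GET", "HEAD", "OPTIONS"],  # only retry idempotent methods
        )
        adapter = HTTPAdapter(max_retries=retry)
        self.mount("https://", adapter)
        self.mount("http://", adapter)

session = RetryingSession()
response = session.get("https://theurl.com")
response.raise_for_status()  # still raise for anything that wasn't resolved by retrying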
86
95
61,368,805
2020-4-22
https://stackoverflow.com/questions/61368805/how-to-plot-shaded-error-bands-with-seaborn
I wish to create a plot like the following, where I show some values alongside standard deviations. I have two sets of values, containing the mean and standard deviation obtained by two different methods. I thought of doing this with seaborn, but I don't know exactly how to do it since the official example uses pandas DataFrame objects, which I'm not familiar with. As an example, consider the following starting code: import seaborn as sns mean_1 = [10, 20, 30, 25, 32, 43] std_1 = [2.2, 2.3, 1.2, 2.2, 1.8, 3.5] mean_2 = [12, 22, 30, 13, 33, 39] std_2 = [2.4, 1.3, 2.2, 1.2, 1.9, 3.5] Thank you, G.
Here is a minimal example to create such a plot with the given data. Thanks to vectorization and broadcasting, working with numpy simplifies the code. import matplotlib.pyplot as plt import numpy as np mean_1 = np.array([10, 20, 30, 25, 32, 43]) std_1 = np.array([2.2, 2.3, 1.2, 2.2, 1.8, 3.5]) mean_2 = np.array([12, 22, 30, 13, 33, 39]) std_2 = np.array([2.4, 1.3, 2.2, 1.2, 1.9, 3.5]) x = np.arange(len(mean_1)) plt.plot(x, mean_1, 'b-', label='mean_1') plt.fill_between(x, mean_1 - std_1, mean_1 + std_1, color='b', alpha=0.2) plt.plot(x, mean_2, 'r-', label='mean_2') plt.fill_between(x, mean_2 - std_2, mean_2 + std_2, color='r', alpha=0.2) plt.legend() plt.show() Another example: import matplotlib.pyplot as plt import numpy as np import seaborn as sns sns.set() N = 100 x = np.arange(N) mean_1 = 25 + np.random.normal(0.1, 1, N).cumsum() std_1 = 3 + np.random.normal(0, .08, N).cumsum() mean_2 = 15 + np.random.normal(0.2, 1, N).cumsum() std_2 = 4 + np.random.normal(0, .1, N).cumsum() plt.plot(x, mean_1, 'b-', label='mean_1') plt.fill_between(x, mean_1 - std_1, mean_1 + std_1, color='b', alpha=0.2) plt.plot(x, mean_2, 'r--', label='mean_2') plt.fill_between(x, mean_2 - std_2, mean_2 + std_2, color='r', alpha=0.2) plt.legend(title='title') plt.show() PS: Using matplotlib 3.5 or higher, the line and the fill can be combined in the legend: line_1, = plt.plot(x, mean_1, 'b-') fill_1 = plt.fill_between(x, mean_1 - std_1, mean_1 + std_1, color='b', alpha=0.2) line_2, = plt.plot(x, mean_2, 'r--') fill_2 = plt.fill_between(x, mean_2 - std_2, mean_2 + std_2, color='r', alpha=0.2) plt.margins(x=0) plt.legend([(line_1, fill_1), (line_2, fill_2)], ['Series 1', 'Series 2'], title='title')
14
29
61,374,525
2020-4-22
https://stackoverflow.com/questions/61374525/how-do-i-check-if-alembic-migrations-need-to-be-generated
I'm trying to improve CI pipeline to prevent situations where SQLAlchemy models are added or changed, but no Alembic migration is written or generated by the commit author from hitting the production branch. alembic --help doesn't seem to provide any helpful commands for this case, yet it already has all the metadata required (target_metadata variable) and the database credentials in env.py to make this happen. What would be the best practice for implementing this check in CI?
Here's a solution that I use. It's a check that I have implemented as a test. from alembic.autogenerate import compare_metadata from alembic.command import upgrade from alembic.runtime.migration import MigrationContext from alembic.config import Config from models.base import Base def test_migrations_sane(): """ This test ensures that models defined by SQLAlchemy match what alembic migrations think the database should look like. If these are different, then once we have constructed the database via Alembic (via running all migrations) alembic will generate a set of changes to modify the database to match the schema defined by SQLAlchemy models. If these are the same, the set of changes is going to be empty. Which is exactly what we want to check. """ engine = "SQLAlchemy DB Engine instance" try: with engine.connect() as connection: alembic_conf_file = "location of alembic.ini" alembic_config = Config(alembic_conf_file) upgrade(alembic_config, "head") mc = MigrationContext.configure(connection) diff = compare_metadata(mc, Base.metadata) assert diff == [] finally: with engine.connect() as connection: # Resetting the DB connection.execute( """ DROP SCHEMA public CASCADE; CREATE SCHEMA public; GRANT ALL ON SCHEMA public TO postgres; GRANT ALL ON SCHEMA public TO public; """ ) EDIT: I noticed you linked a library that's supposed to do the same thing. I gave it a go but it seems like it assumes that the database that it's running the check against has to have had alembic run against it. My solution works against a blank db. EDIT EDIT: Since leaving this answer, I have discovered the pytest-alembic package which offers this functionality and more. I have not used it personally but it seems pretty sweet.
14
6
61,366,664
2020-4-22
https://stackoverflow.com/questions/61366664/how-to-upsert-pandas-dataframe-to-postgresql-table
I've scraped some data from web sources and stored it all in a pandas DataFrame. Now, in order to harness the powerful db tools afforded by SQLAlchemy, I want to convert said DataFrame into a Table() object and eventually upsert all data into a PostgreSQL table. If this is practical, what is a workable method of going about accomplishing this task?
Update: You can save yourself some typing by using this method. If you are using PostgreSQL 9.5 or later you can perform the UPSERT using a temporary table and an INSERT ... ON CONFLICT statement: import sqlalchemy as sa # … with engine.begin() as conn: # step 0.0 - create test environment conn.exec_driver_sql("DROP TABLE IF EXISTS main_table") conn.exec_driver_sql( "CREATE TABLE main_table (id int primary key, txt varchar(50))" ) conn.exec_driver_sql( "INSERT INTO main_table (id, txt) VALUES (1, 'row 1 old text')" ) # step 0.1 - create DataFrame to UPSERT df = pd.DataFrame( [(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"] ) # step 1 - create temporary table and upload DataFrame conn.exec_driver_sql( "CREATE TEMPORARY TABLE temp_table AS SELECT * FROM main_table WHERE false" ) df.to_sql("temp_table", conn, index=False, if_exists="append") # step 2 - merge temp_table into main_table conn.exec_driver_sql( """\ INSERT INTO main_table (id, txt) SELECT id, txt FROM temp_table ON CONFLICT (id) DO UPDATE SET txt = EXCLUDED.txt """ ) # step 3 - confirm results result = conn.exec_driver_sql("SELECT * FROM main_table ORDER BY id").all() print(result) # [(1, 'row 1 new text'), (2, 'new row 2 text')]
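If you'd rather not manage the temporary table yourself, another option — a sketch of my own, assuming the target table already exists and has a primary key named id — is to pass a custom method callable to DataFrame.to_sql() that issues INSERT ... ON CONFLICT directly through SQLAlchemy's PostgreSQL dialect:

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.dialects.postgresql import insert

def upsert_on_id(table, conn, keys, data_iter):
    # pandas passes a SQLTable wrapper; table.table is the underlying SQLAlchemy Table
    rows = [dict(zip(keys, row)) for row in data_iter]
    stmt = insert(table.table).values(rows)
    stmt = stmt.on_conflict_do_update(
        index_elements=["id"],  # the conflict target (primary key / unique constraint)
        set_={col.key: col for col in stmt.excluded if col.key != "id"},
    )
    conn.execute(stmt)

engine = create_engine("postgresql+psycopg2://user:pass@localhost/dbname")  # placeholder URL
df = pd.DataFrame([(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"])
df.to_sql("main_table", engine, index=False, if_exists="append", method=upsert_on_id)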
17
24
61,430,552
2020-4-25
https://stackoverflow.com/questions/61430552/dataclass-not-inheriting-eq-method-from-its-parent
I have a parent dataclass and a sub-dataclass inherits the first class. I've redefined __eq__() method in parent dataclass. But when I compare objects sub-dataclass, it doesn't use the __eq__() method defined in parent dataclass. Why is this happening? How can I fix this? MWE: from dataclasses import dataclass @dataclass class A: name: str field1: str = None def __eq__(self, other): print('A class eq') return self.name == other.name @dataclass class B(A): field2: str = None b1 = B('b', 'b1') b2 = B('b', 'b2') print(b1 == b2)
The @dataclass decorator adds a default __eq__ implementation. If you use @dataclass(eq=False) on class B, it will avoid doing that. See https://docs.python.org/3/library/dataclasses.html
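To make that concrete with the classes from the question — a minimal sketch: eq=False stops the dataclass machinery from generating B.__eq__, so the comparison falls back to the __eq__ defined on A:

from dataclasses import dataclass

@dataclass
class A:
    name: str
    field1: str = None

    def __eq__(self, other):
        print('A class eq')
        return self.name == other.name

@dataclass(eq=False)   # don't generate B.__eq__; inherit A's instead
class B(A):
    field2: str = None

b1 = B('b', 'b1')
b2 = B('b', 'b2')
print(b1 == b2)   # prints "A class eq", then True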
16
17
61,358,683
2020-4-22
https://stackoverflow.com/questions/61358683/dependency-inversion-in-python
I've started to apply SOLID principles to my projects. All of them are clear to me except dependency inversion, because in Python there is no obvious way to declare that a variable inside one class must have the type of some other class (or maybe I just don't know how). So I've implemented the Dependency Inversion principle in two forms, and want to know which of them is correct and how I can fix them. Here is my code:

d1.py:

class IFood:
    def bake(self, isTendir: bool):
        pass

class Production:
    def __init__(self):
        self.food = IFood()

    def produce(self):
        self.food.bake(True)

class Bread(IFood):
    def bake(self, isTendir: bool):
        print("Bread was baked")

d2.py:

from abc import ABC, abstractmethod

class Food(ABC):
    @abstractmethod
    def bake(self, isTendir):
        pass

class Production():
    def __init__(self):
        self.bread = Bread()

    def produce(self):
        self.bread.bake(True)

class Bread(Food):
    def bake(self, isTendir: bool):
        print("Bread was baked")
# define a common interface any food should have and implement class IFood: def bake(self): pass def eat(self): pass class Bread(IFood): def bake(self): print("Bread was baked") def eat(self): print("Bread was eaten") class Pastry(IFood): def bake(self): print("Pastry was baked") def eat(self): print("Pastry was eaten") class Production: def __init__(self, food): # food now is any concrete implementation of IFood self.food = food # this is also dependency injection, as it is a parameter not hardcoded def produce(self): self.food.bake() # uses only the common interface def consume(self): self.food.eat() # uses only the common interface Use it: ProduceBread = Production(Bread()) ProducePastry = Production(Pastry())
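If you want the interface enforced rather than implicit (closer to the d2.py attempt in the question), the same structure works with abc — this is just an illustrative variation of the code above, not the only correct form:

from abc import ABC, abstractmethod

class Food(ABC):
    @abstractmethod
    def bake(self) -> None: ...

    @abstractmethod
    def eat(self) -> None: ...

class Bread(Food):
    def bake(self):
        print("Bread was baked")

    def eat(self):
        print("Bread was eaten")

class Production:
    def __init__(self, food: Food):   # depend on the abstraction, inject the concrete food
        self.food = food

    def produce(self):
        self.food.bake()

    def consume(self):
        self.food.eat()

Production(Bread()).produce()   # Food() itself can no longer be instantiated by mistake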
36
34
61,362,948
2020-4-22
https://stackoverflow.com/questions/61362948/seaborn-pairplots-with-continuous-hues
How may I introduce a continuous hue to my seaborn pairplots? I am passing in a pandas data frame train_df in order to visualise the relationship between the multiple features. However I'd also like to add a hue which would use their corresponding target values, target_df. These target values are on a continuous scale (~ floats between 10 and 100). I have defined a sns.color_palette("RdGr") that I'd like to use. Right now I have following pairplot (with no hue): sns.pairplot(train_df) How can I pass in the target_df as a hue using color palette defined above? Many thanks in advance.
You can just assign target_df as a column in train_df and pass it as hue:
sns.pairplot(data=train_df.assign(target=target_df), hue='target')
However, this will be extremely slow if your target is continuous. Instead, you can do a double for loop (this assumes the columns of train_df are labelled 0, 1, 2, ... so they can double as subplot indices):
import matplotlib.pyplot as plt
import seaborn as sns

num_features = len(train_df.columns)
fig, ax = plt.subplots(num_features, num_features, figsize=(10, 10))
for i in train_df.columns:
    for j in train_df.columns:
        if i == j:   # diagonal: distribution of the i-th feature
            sns.distplot(train_df[i], kde=False, ax=ax[i][j])
        else:        # off diagonal: scatter coloured by the continuous target
            sns.scatterplot(x=train_df[i], y=train_df[j], ax=ax[i][j],
                            hue=target_df, palette='BrBG', legend=False)
Which gives you something like this:
9
4
61,400,225
2020-4-24
https://stackoverflow.com/questions/61400225/expected-type-type-got-typetype-instead
My class has a project where we have to build a database. I keep running into this error where python asks expected a type of the same kind that I gave it (see image ) It says Expected type 'TableEntry', got 'Type[TableEntry]' instead TableEntry is a dataclass instance (as per my assignment). I am only calling it for it's creation newTable = Table(id, TableEntry) Where Table is another dataclass with an id and data with type TableEntry*. *I am required to complete the assignment using this format, and am only asking about the syntax, not how to format it
When you see an error like this: Expected type 'TableEntry', got 'Type[TableEntry]' instead it generally means that in the body of your code you said TableEntry (the name of the type) rather than TableEntry() (an expression that constructs an actual object of that type).
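As a sketch — the actual fields of TableEntry aren't shown in the question, so the ones below are made up purely for illustration — the fix is to construct an instance before handing it to Table:

from dataclasses import dataclass

@dataclass
class TableEntry:          # hypothetical fields, just for illustration
    name: str
    value: int

@dataclass
class Table:
    id: int
    data: TableEntry

new_table = Table(1, TableEntry)                     # passes the class -> Type[TableEntry] warning
new_table = Table(1, TableEntry(name="x", value=0))  # passes an instance -> no warning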
8
17
61,430,166
2020-4-25
https://stackoverflow.com/questions/61430166/python-3-7-on-ubuntu-20-04
I am preparing a docker image for Ubuntu 20.04 and due to TensorFlow 2.0 requirement, I need Python 3.7. TensorFlow runs on Python 3.5 to 3.7. Running apt install python3 installs Python 3.8 by default and that breaks my TensorFlow installation. Is there any way I can get an apt package for Python 3.7 for Ubuntu 20.04? Since it is going to be inside docker image, I don't want to get into the business of downloading Python 3.7 source code and compiling. Putting those commands in Dockerfile will be overwhelming for me. Is there any simpler way of getting Python 3.7 for Ubuntu 20.04? Running sudo apt-cache madison python3 returns python3 | 3.8.2-0ubuntu2 | http://in.archive.ubuntu.com/ubuntu focal/main amd64 Packages
Do you need Ubuntu 20.04? Ubuntu 18.04 comes with Python 3.6, and 3.7 available. If you do, the deadsnakes PPA has Python 3.5-3.7 for Ubuntu 20.04 (Focal). To add it and install: sudo add-apt-repository ppa:deadsnakes/ppa sudo apt-get install python3.7 P.s. I'm not a dev and have no experience with Tensorflow so take this with a grain of salt. (Sidenote: add-apt-repository runs apt-get update automatically, but that's not documented in man add-apt-repository, only add-apt-repository --help. This was fixed in a later release.)
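Since the goal is a Docker image, here's a rough, untested sketch of how the deadsnakes approach might look in a Dockerfile — the exact package list and TensorFlow pin are assumptions you may need to adjust:

FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y python3.7 python3.7-dev python3.7-venv python3.7-distutils && \
    rm -rf /var/lib/apt/lists/*

# create an isolated environment and install TensorFlow into it
RUN python3.7 -m venv /opt/venv && \
    /opt/venv/bin/pip install --upgrade pip && \
    /opt/venv/bin/pip install "tensorflow==2.0.*"

ENV PATH="/opt/venv/bin:$PATH"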
60
117
61,386,477
2020-4-23
https://stackoverflow.com/questions/61386477/type-hints-for-a-pandas-dataframe-with-mixed-dtypes
I've been looking for robust type hints for a pandas DataFrame, but cannot seem to find anything useful. This question barely scratches the surface Pythonic type hints with pandas? Normally if I want to hint the type of a function, that has a DataFrame as an input argument I would do: import pandas as pd def func(arg: pd.DataFrame) -> int: return 1 What I cannot seem to find is how do I type hint a DataFrame with mixed dtypes. The DataFrame constructor supports only type definition of the complete DataFrame. So to my knowledge changes in the dtypes can only occur afterwards with the pd.DataFrame().astype(dtypes={}) function. This here works, but doesn't seem very pythonic to me import datetime def func(arg: pd.DataFrame(columns=['integer', 'date']).astype(dtype={'integer': int, 'date': datetime.date})) -> int: return 1 I came across this package: https://pypi.org/project/dataenforce/ with examples such as this one: def process_data(data: Dataset["id": int, "name": object, "latitude": float, "longitude": float]) pass This looks somewhat promising, but sadly the project is old and buggy. As a data scientist, building a machine learning application with long ETL processes I believe that type hints are important. What do you use and does anybody type hint their dataframes in pandas?
I have now found the pandera library that seems very promising: https://github.com/pandera-dev/pandera It allows users to create schemas and use those schemas to create verbose checks. From their docs: https://pandera.readthedocs.io/en/stable/schema_models.html import pandas as pd import pandera as pa from pandera.typing import Index, DataFrame, Series class InputSchema(pa.SchemaModel): year: Series[int] = pa.Field(gt=2000, coerce=True) month: Series[int] = pa.Field(ge=1, le=12, coerce=True) day: Series[int] = pa.Field(ge=0, le=365, coerce=True) class OutputSchema(InputSchema): revenue: Series[float] @pa.check_types def transform(df: DataFrame[InputSchema]) -> DataFrame[OutputSchema]: return df.assign(revenue=100.0) df = pd.DataFrame({ "year": ["2001", "2002", "2003"], "month": ["3", "6", "12"], "day": ["200", "156", "365"], }) transform(df) invalid_df = pd.DataFrame({ "year": ["2001", "2002", "1999"], "month": ["3", "6", "12"], "day": ["200", "156", "365"], }) transform(invalid_df) Also a note from them: Due to current limitations in the pandas library (see discussion here), pandera annotations are only used for run-time validation and cannot be leveraged by static-type checkers like mypy. See the discussion here for more details. But still, even though there is no static-type checking I think that this is going in a very good direction.
11
8
61,370,108
2020-4-22
https://stackoverflow.com/questions/61370108/tf-data-parallelize-loading-step
I have a data input pipeline that has: input datapoints of types that are not castable to a tf.Tensor (dicts and whatnot) preprocessing functions that could not understand tensorflow types and need to work with those datapoints; some of which do data augmentation on the fly I've been trying to fit this into a tf.data pipeline, and I'm stuck on running the preprocessing for multiple datapoints in parallel. So far I've tried this: use Dataset.from_generator(gen) and do the preprocessing in the generator; this works but it processes each datapoint sequentially, no matter what arrangement of prefetch and fake map calls I patch on it. Is it impossible to prefetch in parallel? encapsulate the preprocessing in a tf.py_function so I could map it in parallel over my Dataset, but this requires some pretty ugly (de)serialization to fit exotic types into string tensors, apparently the execution of the py_function would be handed over to the (single-process) python interpreter, so I'd be stuck with the python GIL which would not help me much I saw that you could do some tricks with interleave but haven't found any which does not have issues from the first two ideas. Am I missing anything here? Am I forced to either modify my preprocessing so that it can run in a graph or is there a way to multiprocess it? Our previous way of doing this was using keras.Sequence which worked well but there's just too many people pushing the upgrade to the tf.data API. (hell, even trying the keras.Sequence with tf 2.2 yields WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.) Note: I'm using tf 2.2rc3
I came across the same problem and found a (relatively) easy solution. It turns out that the proper way to do so is indeed to first create a tf.data.Dataset object using the from_generator(gen) method, before applying your custom python processing function (wrapped within a py_function) with the map method. As you mentioned, there is a trick to avoid serialization / deserialization of the input. The trick is to use a generator which will only generates the indexes of your training set. Each called training index will be passed to the wrapped py_function, which can in return evaluate your original dataset at that index. You can then process your datapoint and return your processed data to the rest of your tf.data pipeline. def func(i): i = i.numpy() # decoding from the EagerTensor object x, y = processing_function(training_set[i]) return x, y # numpy arrays of types uint8, float32 z = list(range(len(training_set))) # the index generator dataset = tf.data.Dataset.from_generator(lambda: z, tf.uint8) dataset = dataset.map(lambda i: tf.py_function(func=func, inp=[i], Tout=[tf.uint8, tf.float32]), num_parallel_calls=12) dataset = dataset.batch(1) Note that in practice, depending on the model you train your dataset on, you will probably need to apply another map to your dataset after the batch: def _fixup_shape(x, y): x.set_shape([None, None, None, nb_channels]) y.set_shape([None, nb_classes]) return x, y dataset = dataset.map(_fixup_shape) This is a known issue which seems to be due to the incapacity of the from_generator method to infer the shape properly in some cases. Hence you need to pass the expected output shape explicitly. For more information: https://github.com/tensorflow/tensorflow/issues/32912 as_list() is not defined on an unknown TensorShape on y_t_rank = len(y_t.shape.as_list()) and related to metrics)
11
4
61,419,449
2020-4-25
https://stackoverflow.com/questions/61419449/unable-to-instantiate-python-dataclass-frozen-inside-a-pytest-function-that-us
I'm following along with Architecture Patterns in Python by Harry Percival and Bob Gregory. Around chapter three (3) they introduce testing the ORM of SQLAlchemy. A new test that requires a session fixture, it is throwing AttributeError, FrozenInstanceError due to cannot assign to field '_sa_instance_state' It may be important to note that other tests do not fail when creating instances of OrderLine, but they do fail if I simply include session into the test parameter(s). Anyway I'll get straight into the code. conftest.py @pytest.fixture def local_db(): engine = create_engine('sqlite:///:memory:') metadata.create_all(engine) return engine @pytest.fixture def session(local_db): start_mappers() yield sessionmaker(bind=local_db)() clear_mappers() model.py @dataclass(frozen=True) class OrderLine: id: str sku: str quantity: int test_orm.py def test_orderline_mapper_can_load_lines(session): session.execute( 'INSERT INTO order_lines (order_id, sku, quantity) VALUES ' '("order1", "RED-CHAIR", 12),' '("order1", "RED-TABLE", 13),' '("order2", "BLUE-LIPSTICK", 14)' ) expected = [ model.OrderLine("order1", "RED-CHAIR", 12), model.OrderLine("order1", "RED-TABLE", 13), model.OrderLine("order2", "BLUE-LIPSTICK", 14), ] assert session.query(model.OrderLine).all() == expected Console error for pipenv run pytest test_orm.py ============================= test session starts ============================= platform linux -- Python 3.7.6, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 rootdir: /home/[redacted]/Documents/architecture-patterns-python collected 1 item test_orm.py F [100%] ================================== FAILURES =================================== ____________________ test_orderline_mapper_can_load_lines _____________________ session = <sqlalchemy.orm.session.Session object at 0x7fd919ac5bd0> def test_orderline_mapper_can_load_lines(session): session.execute( 'INSERT INTO order_lines (order_id, sku, quantity) VALUES ' '("order1", "RED-CHAIR", 12),' '("order1", "RED-TABLE", 13),' '("order2", "BLUE-LIPSTICK", 14)' ) expected = [ > model.OrderLine("order1", "RED-CHAIR", 12), model.OrderLine("order1", "RED-TABLE", 13), model.OrderLine("order2", "BLUE-LIPSTICK", 14), ] test_orm.py:13: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <string>:2: in __init__ ??? ../../.local/share/virtualenvs/architecture-patterns-python-Qi2y0bev/lib64/python3.7/site-packages/sqlalchemy/orm/instrumentation.py:377: in _new_state_if_none self._state_setter(instance, state) <string>:1: in set ??? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <[AttributeError("'OrderLine' object has no attribute '_sa_instance_state'") raised in repr()] OrderLine object at 0x7fd919a8cf50> name = '_sa_instance_state' value = <sqlalchemy.orm.state.InstanceState object at 0x7fd9198f7490> > ??? E dataclasses.FrozenInstanceError: cannot assign to field '_sa_instance_state' <string>:4: FrozenInstanceError =========================== short test summary info =========================== FAILED test_orm.py::test_orderline_mapper_can_load_lines - dataclasses.Froze... ============================== 1 failed in 0.06s ============================== Additional Questions I understand the overlying logic and what these files are doing, but correct my if my rudimentary understanding is lacking. 
conftest.py (used for all pytest config) is setting up a session fixture, which basically sets up a temporary database in memory - using start_mappers and clear_mappers to ensure that the ORM model definitions are bound to the db instance. model.py simply defines a dataclass used to represent an atomic OrderLine object. test_orm.py is the test module in which pytest supplies the session fixture, in order to set up, exercise, and tear down a db explicitly for the purpose of running tests. Issue resolution provided by https://github.com/cosmicpython/code/issues/17
SqlAlchemy allows you to override some of the attribute instrumentation that is applied when using mapping classes and tables. In particular the following allows sqla to save the state on an instrumented frozen dataclass. This should be applied before calling the mapper function which associates the dataclass and the sql table. from sqlalchemy.ext.instrumentation import InstrumentationManager ... DEL_ATTR = object() class FrozenDataclassInstrumentationManager(InstrumentationManager): def install_member(self, class_, key, implementation): self.originals.setdefault(key, class_.__dict__.get(key, DEL_ATTR)) setattr(class_, key, implementation) def uninstall_member(self, class_, key): original = self.originals.pop(key, None) if original is not DEL_ATTR: setattr(class_, key, original) else: delattr(class_, key) def dispose(self, class_): del self.originals delattr(class_, "_sa_class_manager") def manager_getter(self, class_): def get(cls): return cls.__dict__["_sa_class_manager"] return get def manage(self, class_, manager): self.originals = {} setattr(class_, "_sa_class_manager", manager) def get_instance_dict(self, class_, instance): return instance.__dict__ def install_state(self, class_, instance, state): instance.__dict__["state"] = state def remove_state(self, class_, instance, state): del instance.__dict__["state"] def state_getter(self, class_): def find(instance): return instance.__dict__["state"] return find OrderLine.__sa_instrumentation_manager__ = FrozenDataclassInstrumentationManager Attribute instrumentation docs Custom instrumentation examples
12
4
61,365,987
2020-4-22
https://stackoverflow.com/questions/61365987/whats-new-in-python-2-7-18
So the final Python 2 release is out. However, I can't find anywhere what has changed with this release. The corresponding news page on GitHub is also empty. Can anyone shed some light on this?
Nice question. You can find out for yourself by downloading and comparing the source code both for 2.7.17 and 2.7.18. Since 2.7 is my favorite flavor of Python I've decided to do it myself; here's a WinMerge screenshot: Looks like there are some differences. On the other hand, Misc\NEWS clearly states: What's New in Python 2.7.18 final? Release date: 2020-04-19 There were no new changes in version 2.7.18. That, IMHO, is not completely true but before getting into it let's first see what most of those differences are about. Again, the changes are listed inside Misc\NEWS and they relate to version 2.7.18rc1: What's New in Python 2.7.18 release candidate 1? Release date: 2020-04-04 Security bpo-38945: Newline characters have been escaped when performing uu encoding to prevent them from overflowing into to content section of the encoded file. This prevents malicious or accidental modification of data during the decoding process. bpo-38804: Fixes a ReDoS vulnerability in :mod:http.cookiejar. Patch by Ben Caller. Core and Builtins bpo-38535: Fixed line numbers and column offsets for AST nodes for calls without arguments in decorators. Library bpo-38576: Disallow control characters in hostnames in http.client, addressing CVE-2019-18348. Such potentially malicious header injection URLs now cause a InvalidURL to be raised. bpo-27973: Fix urllib.urlretrieve failing on subsequent ftp transfers from the same host. Build bpo-38730: Fix problems identified by GCC's -Wstringop-truncation warning. Windows bpo-37025: AddRefActCtx() was needlessly being checked for failure in PC/dl_nt.c. macOS bpo-38295: Prevent failure of test_relative_path in test_py_compile on macOS Catalina. C API bpo-38540: Fixed possible leak in :c:func:PyArg_Parse and similar functions for format units "es#" and "et#" when the macro :c:macro:PY_SSIZE_T_CLEAN is not defined. So, it looks like 2.7.18 is just a wrap-around of 2.7.18rc1 but is that all? To be exact, there are a few changes that belong to 2.7.18 and have to do with marking it as the final release of Python 2 and warn the user about the EOL of the product. For example, in tools\templates\layout.html you can find the block below: {% block header %} {%- if outdated %} {% trans %}This document is for an old version of Python that is {% endtrans %}{% trans %}no longer supported{% endtrans %}. {% trans %}You should upgrade and read the {% endtrans %} {% trans %} Python documentation for the current stable release{% endtrans %}. {%- endif %} {% endblock %} So, I believe, the reasoning behind that final version is twofold: (a) finalize the changes of 2.7.18rc1 and (b) mark it as the final release of Python 2 and let the user know about it.
17
9
61,359,162
2020-4-22
https://stackoverflow.com/questions/61359162/convert-a-list-of-tensors-to-tensors-of-tensors-pytorch
I have this code: import torch list_of_tensors = [ torch.randn(3), torch.randn(3), torch.randn(3)] tensor_of_tensors = torch.tensor(list_of_tensors) I am getting the error: ValueError: only one element tensors can be converted to Python scalars How can I convert the list of tensors to a tensor of tensors in pytorch?
Here is a solution:
tensor_of_tensors = torch.stack(list_of_tensors)
print(tensor_of_tensors)  # shape (3, 3)
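For completeness (my own note, not part of the original answer): torch.stack creates a new dimension, while torch.cat joins along an existing one, so pick whichever shape you actually need:

import torch

list_of_tensors = [torch.randn(3), torch.randn(3), torch.randn(3)]

stacked = torch.stack(list_of_tensors)     # shape (3, 3) -- new leading dimension
concatenated = torch.cat(list_of_tensors)  # shape (9,)   -- joined along dim 0

print(stacked.shape, concatenated.shape)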
9
12
61,419,086
2020-4-24
https://stackoverflow.com/questions/61419086/fatal-error-in-launcher-unable-to-create-process-using-file-path1-file-path2
I am trying to use different versions of Python on my Windows pc and I'm getting this error when using pip: Fatal error in launcher: Unable to create process using '"c:\users\mypc\appdata\local\programs\python\python38\python.exe" "C:\Python38\Scripts\pip.exe" ': The system cannot find the file specified. I understand that this might mean there's two PATH to each of those locations so it's confused, but c:\users\mypc\appdata\local\programs\python\python38\python.exe doesn't even exist on my computer nor in my PATH. Output of where python is: C:\Python38\python.exe` Here is the PATH in readable format C:\Program Files\Intel\WiFi\bin\ C:\Program Files\Common Files\Intel\WirelessCommon\ C:\WINDOWS\system32 C:\WINDOWS C:\WINDOWS\System32\Wbem C:\WINDOWS\System32\WindowsPowerShell\v1.0\ C:\WINDOWS\System32\OpenSSH\ C:\Program Files\nodejs\ C:\Program Files\Git\cmd C:\Program Files\wkhtmltopdf C:\Program Files\Docker\Docker\resources\bin C:\ProgramData\DockerDesktop\version-bin C:\Python38\Scripts C:\Python38\ "C:\Users\mypc\AppData\Local\Microsoft\WindowsApps C:\bin" C:\Program Files\Intel\WiFi\bin\ C:\Program Files\Common Files\Intel\WirelessCommon\ C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2019.2.1\bin C:\Program Files (x86)\Nmap C:\Users\mypc\AppData\Local\Microsoft\WindowsApps C:\Users\mypc\AppData\Local\Programs\Microsoft VS Code\bin C:\Users\mypc\AppData\Roaming\npm C:\Users\mypc\AppData\Local\Programs\MiKTeX 2.9\miktex\bin\x64\ C:\Program Files\wkhtmltopdf C:\ProgramData\mypc\atom\bin C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\bin C:\Program Files\MongoDB\Server\4.2\bin
This error usually means the pip.exe launcher is pointing at a Python interpreter that no longer exists: the launcher has the absolute path of the Python that installed it baked in (here c:\users\mypc\appdata\local\programs\python\python38\python.exe), so if that installation was moved or removed, the launcher fails even though your current Python works fine. The workaround is to run pip through the interpreter itself by prefixing python -m to your commands. Let's say you are trying to install pygame (or any package) with pip. For that, you'll use
python -m pip install pygame   # or any package name
Also, upgrading pip and all other commands will use the same command structure:
python -m pip install --upgrade pip
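If you want to repair the launcher itself instead of always typing the prefix, reinstalling pip from the working interpreter regenerates pip.exe with the correct embedded path (a sketch, run from a normal command prompt):

python -m pip install --upgrade --force-reinstall pip
pip --version

After the reinstall, the plain pip command should resolve to the correct interpreter again.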
7
24
61,379,554
2020-4-23
https://stackoverflow.com/questions/61379554/how-to-bypass-google-recaptcha-while-scraping-with-requests
Python code to request the URL: agent = {"User-Agent":'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'} #using agent to solve the blocking issue response = requests.get('https://www.naukri.com/jobs-in-andhra-pradesh', headers=agent) #making the request to the link Output when printing the html : <!DOCTYPE html> <html> <head> <title>Naukri reCAPTCHA</title> #the title in the actual title of the URL that I am requested for <meta name="robots" content="noindex, nofollow"> <link rel="stylesheet" href="https://static.naukimg.com/s/4/101/c/common_v62.min.css" /> <script src="https://www.google.com/recaptcha/api.js" async defer></script> </head> </html>
Using Google Cache along with a referer (in the header) will help you bypass the captcha. Things to note: Don't send more than 2 requests/sec. You may get blocked. The result you receive is a cache. This will not be effective if you are trying to scrape a real-time data. Example: header = { "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" , 'referer':'https://www.google.com/' } r = requests.get("http://webcache.googleusercontent.com/search?q=cache:www.naukri.com/jobs-in-andhra-pradesh",headers=header) This gives: >>> r.content [Squeezed 2554 lines]
13
25
61,399,162
2020-4-24
https://stackoverflow.com/questions/61399162/is-there-a-way-to-splat-assign-as-tuple-instead-of-list-when-unpacking
I was recently surprised to find that the "splat" (unary *) operator always captures slices as a list during item unpacking, even when the sequence being unpacked has another type: >>> x, *y, z = tuple(range(5)) >>> y [1, 2, 3] # list, was expecting tuple Compare to how this assignment would be written without unpacking: >>> my_tuple = tuple(range(5)) >>> x = my_tuple[0] >>> y = my_tuple[1:-1] >>> z = my_tuple[-1] >>> y (1, 2, 3) It is also inconsistent with how the splat operator behaves in function arguments: >>> def f(*args): ... return args, type(args) ... >>> f() ((), <class 'tuple'>) In order to recover y as a tuple after unpacking, I now have to write: >>> x, *y, z = tuple(range(5)) >>> y = tuple(y) Which is still much better that the slice-based syntax, but nonetheless suffers from what I consider to be a very unnecessary and unexpected loss of elegance. Is there any way to recover y as a tuple instead of a list without post-assignment processing? I tried to force python to interpret y as a tuple by writing x, *(*y,), z = ..., but it still ended up as a list. And of course silly things like x, *tuple(y), z don't work in python. I am currently using Python 3.8.3 but solutions/suggestions/explanations involving higher versions (as they become available) are also welcome.
This is by design. Quoting the official docs about Assignment: ...The first items of the iterable are assigned, from left to right, to the targets before the starred target. The final items of the iterable are assigned to the targets after the starred target. A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty). It is highly probable that the Python user wants to mutate your y afterwards, so the list type was chosen over the tuple. Quoting the Acceptance section of PEP 3132 that I found through a link in this related question: After a short discussion on the python-3000 list [1], the PEP was accepted by Guido in its current form. Possible changes discussed were: Only allow a starred expression as the last item in the exprlist. This would simplify the unpacking code a bit and allow for the starred expression to be assigned an iterator. This behavior was rejected because it would be too surprising. Try to give the starred target the same type as the source iterable, for example, b in a, *b = "hello" would be assigned the string "ello". This may seem nice, but is impossible to get right consistently with all iterables. Make the starred target a tuple instead of a list. This would be consistent with a function's *args, but make further processing of the result harder. So converting with y = tuple(y) afterwards is your only option.
13
5
61,357,038
2020-4-22
https://stackoverflow.com/questions/61357038/how-do-i-install-the-most-recent-tensorflow-here-2-2-on-windows-when-conda-do
I have conda 4.8.3 and Python 3.7.4 on Windows 8.1. I have tf 2.0.0 installed in a conda environment. How do I upgrade to 2.2.x? Or, how do I just install 2.2.x in a conda environment? Edit 1: pip install --upgrade tensorflow says: Requirement already up-to-date: tensorflow in d:\anaconda3\envs\tf2\lib\site-packages (2.1.0) but tf version is still 2.0. Edit 2: conda install tensorflow==2.2.0 says: PackagesNotFoundError: The following packages are not available from current channels: tensorflow==2.2.0 I did have some luck here. Edit 3: (tf2) D:\ray\dev\covid-19>conda list -n tf2 # packages in environment at D:\Anaconda3\envs\tf2: # # Name Version Build Channel _anaconda_depends 2019.03 py37_0 _ipyw_jlab_nb_ext_conf 0.1.0 py37_0 _tflow_select 2.3.0 mkl absl-py 0.9.0 pypi_0 pypi alabaster 0.7.12 py37_0 anaconda custom py37_1 anaconda-client 1.7.2 py37_0 anaconda-navigator 1.9.7 py37_0 anaconda-project 0.8.3 py_0 asn1crypto 1.0.1 py37_0 astor 0.8.1 pypi_0 pypi astroid 2.3.1 py37_0 astropy 3.2.1 py37he774522_0 atomicwrites 1.3.0 py37_1 attrs 19.2.0 py_0 babel 2.7.0 py_0 backcall 0.1.0 py37_0 backports 1.0 py_2 backports.functools_lru_cache 1.5 py_2 backports.os 0.1.1 py37_0 backports.shutil_get_terminal_size 1.0.0 py37_2 backports.tempfile 1.0 py_1 backports.weakref 1.0.post1 py_1 beautifulsoup4 4.8.0 py37_0 bitarray 1.0.1 py37he774522_0 bkcharts 0.2 py37_0 blas 1.0 mkl bleach 3.1.0 py37_0 blosc 1.16.3 h7bd577a_0 bokeh 1.3.4 py37_0 boto 2.49.0 py37_0 bottleneck 1.2.1 py37h452e1ab_1 bzip2 1.0.8 he774522_0 ca-certificates 2020.1.1 0 cachetools 4.0.0 pypi_0 pypi certifi 2019.9.11 py37_0 cffi 1.12.3 py37h7a1dbc1_0 chardet 3.0.4 py37_1003 click 7.0 py37_0 cloudpickle 1.2.2 py_0 clyent 1.2.2 py37_1 colorama 0.4.1 py37_0 comtypes 1.1.7 py37_0 conda-package-handling 1.6.0 py37h62dcd97_0 conda-verify 3.4.2 py_1 console_shortcut 0.1.1 3 contextlib2 0.6.0 py_0 cryptography 2.7 py37h7a1dbc1_0 curl 7.65.3 h2a8f88b_0 cycler 0.10.0 py37_0 cython 0.29.13 py37ha925a31_0 cytoolz 0.10.0 py37he774522_0 dask 2.5.2 py_0 dask-core 2.5.2 py_0 decorator 4.4.0 py37_1 defusedxml 0.6.0 py_0 dill 0.3.1.1 pypi_0 pypi distributed 2.5.2 py_0 docutils 0.15.2 py37_0 entrypoints 0.3 py37_0 et_xmlfile 1.0.1 py37_0 fastcache 1.1.0 py37he774522_0 filelock 3.0.12 py_0 flask 1.1.1 py_0 freetype 2.9.1 ha9979f8_1 fsspec 0.5.2 py_0 future 0.17.1 py37_0 gast 0.2.2 pypi_0 pypi get_terminal_size 1.0.0 h38e98db_0 gevent 1.4.0 py37he774522_0 glob2 0.7 py_0 google-auth 1.11.0 pypi_0 pypi google-auth-oauthlib 0.4.1 pypi_0 pypi google-pasta 0.1.8 py_0 googleapis-common-protos 1.51.0 pypi_0 pypi greenlet 0.4.15 py37hfa6e2cd_0 grpcio 1.26.0 pypi_0 pypi h5py 2.9.0 py37h5e291fa_0 hdf5 1.10.4 h7ebc959_0 heapdict 1.0.1 py_0 html5lib 1.0.1 py37_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 idna 2.8 py37_0 imageio 2.6.0 py37_0 imagesize 1.1.0 py37_0 importlib_metadata 0.23 py37_0 intel-openmp 2019.4 245 ipykernel 5.1.2 py37h39e3cac_0 ipython 7.8.0 py37h39e3cac_0 ipython_genutils 0.2.0 py37_0 ipywidgets 7.5.1 py_0 isort 4.3.21 py37_0 itsdangerous 1.1.0 py37_0 jdcal 1.4.1 py_0 jedi 0.15.1 py37_0 jinja2 2.10.3 py_0 joblib 0.13.2 py37_0 jpeg 9b hb83a4c4_2 json5 0.8.5 py_0 jsonschema 3.0.2 py37_0 jupyter 1.0.0 py37_7 jupyter_client 5.3.3 py37_1 jupyter_console 6.0.0 py37_0 jupyter_core 4.5.0 py_0 jupyterlab 1.1.4 pyhf63ae98_0 jupyterlab_server 1.0.6 py_0 keras 2.3.1 py37h21ff451_0 conda-forge keras-applications 1.0.8 py_0 keras-preprocessing 1.1.0 py_1 keyring 18.0.0 py37_0 kiwisolver 1.1.0 py37ha925a31_0 krb5 1.16.1 hc04afaa_7 lazy-object-proxy 1.4.2 
py37he774522_0 libarchive 3.3.3 h0643e63_5 libcurl 7.65.3 h2a8f88b_0 libgpuarray 0.7.6 hfa6e2cd_0 libiconv 1.15 h1df5818_7 liblief 0.9.0 ha925a31_2 libmklml 2019.0.5 0 libpng 1.6.37 h2a8f88b_0 libprotobuf 3.11.2 h7bd577a_0 libpython 2.1 py37_0 libsodium 1.0.16 h9d3ae62_0 libssh2 1.8.2 h7a1dbc1_0 libtiff 4.0.10 hb898794_2 libxml2 2.9.9 h464c3ec_0 libxslt 1.1.33 h579f668_0 llvmlite 0.29.0 py37ha925a31_0 locket 0.2.0 py37_1 lxml 4.4.1 py37h1350720_0 lz4-c 1.8.1.2 h2fa13f4_0 lzo 2.10 h6df0209_2 m2w64-binutils 2.25.1 5 m2w64-bzip2 1.0.6 6 m2w64-crt-git 5.0.0.4636.2595836 2 m2w64-gcc 5.3.0 6 m2w64-gcc-ada 5.3.0 6 m2w64-gcc-fortran 5.3.0 6 m2w64-gcc-libgfortran 5.3.0 6 m2w64-gcc-libs 5.3.0 7 m2w64-gcc-libs-core 5.3.0 7 m2w64-gcc-objc 5.3.0 6 m2w64-gmp 6.1.0 2 m2w64-headers-git 5.0.0.4636.c0ad18a 2 m2w64-isl 0.16.1 2 m2w64-libiconv 1.14 6 m2w64-libmangle-git 5.0.0.4509.2e5a9a2 2 m2w64-libwinpthread-git 5.0.0.4634.697f757 2 m2w64-make 4.1.2351.a80a8b8 2 m2w64-mpc 1.0.3 3 m2w64-mpfr 3.1.4 4 m2w64-pkg-config 0.29.1 2 m2w64-toolchain 5.3.0 7 m2w64-tools-git 5.0.0.4592.90b8472 2 m2w64-windows-default-manifest 6.4 3 m2w64-winpthreads-git 5.0.0.4634.697f757 2 m2w64-zlib 1.2.8 10 mako 1.1.0 py_0 markdown 3.1.1 py37_0 markupsafe 1.1.1 py37he774522_0 matplotlib 3.1.1 py37hc8f65d3_0 mccabe 0.6.1 py37_1 menuinst 1.4.16 py37he774522_0 mistune 0.8.4 py37he774522_0 mkl 2019.4 245 mkl-service 2.3.0 py37hb782905_0 mkl_fft 1.0.14 py37h14836fe_0 mkl_random 1.1.0 py37h675688f_0 mock 3.0.5 py37_0 more-itertools 7.2.0 py37_0 mpmath 1.1.0 py37_0 msgpack-python 0.6.1 py37h74a9793_1 msys2-conda-epoch 20160418 1 multipledispatch 0.6.0 py37_0 navigator-updater 0.2.1 py37_0 nbconvert 5.6.0 py37_1 nbformat 4.4.0 py37_0 networkx 2.3 py_0 nltk 3.4.5 py37_0 nose 1.3.7 py37_2 notebook 6.0.1 py37_0 numba 0.45.1 py37hf9181ef_0 numexpr 2.7.0 py37hdce8814_0 numpy 1.16.5 py37h19fb1c0_0 numpy-base 1.16.5 py37hc3f5095_0 numpydoc 0.9.1 py_0 oauthlib 3.1.0 pypi_0 pypi olefile 0.46 py37_0 openpyxl 3.0.0 py_0 openssl 1.1.1d he774522_3 opt_einsum 3.1.0 py_0 packaging 19.2 py_0 pandas 0.25.1 pypi_0 pypi pandoc 2.2.3.2 0 pandocfilters 1.4.2 py37_1 parso 0.5.1 py_0 partd 1.0.0 py_0 path.py 12.0.1 py_0 pathlib2 2.3.5 py37_0 patsy 0.5.1 py37_0 pep8 1.7.1 py37_0 pickleshare 0.7.5 py37_0 pillow 6.2.0 py37hdc69c19_0 pip 20.0.2 pypi_0 pypi pkginfo 1.5.0.1 py37_0 pluggy 0.13.0 py37_0 ply 3.11 py37_0 powershell_shortcut 0.0.1 2 prometheus_client 0.7.1 py_0 promise 2.3 pypi_0 pypi prompt_toolkit 2.0.10 py_0 protobuf 3.11.2 pypi_0 pypi psutil 5.6.3 py37he774522_0 py 1.8.0 py37_0 py-lief 0.9.0 py37ha925a31_2 pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pycodestyle 2.5.0 py37_0 pycosat 0.6.3 py37hfa6e2cd_0 pycparser 2.19 py37_0 pycrypto 2.6.1 py37hfa6e2cd_9 pycurl 7.43.0.3 py37h7a1dbc1_0 pyflakes 2.1.1 py37_0 pygments 2.4.2 py_0 pygpu 0.7.6 py37h452e1ab_0 pylint 2.4.2 py37_0 pyodbc 4.0.27 py37ha925a31_0 pyopenssl 19.0.0 py37_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py37h6538335_2 pyreadline 2.1 py37_1 pyrsistent 0.15.4 py37he774522_0 pysocks 1.7.1 py37_0 pytables 3.5.2 py37h1da0976_1 pytest 5.2.1 py37_0 pytest-arraydiff 0.3 py37h39e3cac_0 pytest-astropy 0.5.0 py37_0 pytest-doctestplus 0.4.0 py_0 pytest-openfiles 0.4.0 py_0 pytest-remotedata 0.3.2 py37_0 python 3.7.4 h5263a28_0 python-dateutil 2.8.0 py37_0 python-libarchive-c 2.8 py37_13 pytz 2019.3 py_0 pywavelets 1.0.3 py37h8c2d366_1 pywin32 223 py37hfa6e2cd_1 pywinpty 0.5.5 py37_1000 pyyaml 5.1.2 py37he774522_0 pyzmq 18.1.0 py37ha925a31_0 qt 5.9.7 vc14h73c81de_0 qtawesome 0.6.0 py_0 qtconsole 4.5.5 
py_0 qtpy 1.9.0 py_0 requests 2.22.0 py37_0 requests-oauthlib 1.3.0 pypi_0 pypi rope 0.14.0 py_0 rsa 4.0 pypi_0 pypi ruamel_yaml 0.15.46 py37hfa6e2cd_0 scikit-image 0.15.0 py37ha925a31_0 scikit-learn 0.21.3 py37h6288b17_0 scipy 1.4.1 pypi_0 pypi seaborn 0.9.0 py37_0 send2trash 1.5.0 py37_0 setuptools 41.4.0 py37_0 simplegeneric 0.8.1 py37_2 singledispatch 3.4.0.3 py37_0 sip 4.19.8 py37h6538335_0 six 1.12.0 py37_0 sklearn 0.0 pypi_0 pypi snappy 1.1.7 h777316e_3 snowballstemmer 2.0.0 py_0 sortedcollections 1.1.2 py37_0 sortedcontainers 2.1.0 py37_0 soupsieve 1.9.3 py37_0 sphinx 2.2.0 py_0 sphinxcontrib 1.0 py37_1 sphinxcontrib-applehelp 1.0.1 py_0 sphinxcontrib-devhelp 1.0.1 py_0 sphinxcontrib-htmlhelp 1.0.2 py_0 sphinxcontrib-jsmath 1.0.1 py_0 sphinxcontrib-qthelp 1.0.2 py_0 sphinxcontrib-serializinghtml 1.1.3 py_0 sphinxcontrib-websupport 1.1.2 py_0 spyder 3.3.6 py37_0 spyder-kernels 0.5.2 py37_0 sqlalchemy 1.3.9 py37he774522_0 sqlite 3.30.0 he774522_0 statsmodels 0.10.1 py37h8c2d366_0 sympy 1.4 py37_0 tbb 2019.4 h74a9793_0 tblib 1.4.0 py_0 tensorboard 2.1.0 pypi_0 pypi tensorflow 2.1.0 pypi_0 pypi tensorflow-base 2.0.0 mkl_py37hd1d5974_0 tensorflow-datasets 2.0.0 pypi_0 pypi tensorflow-estimator 2.1.0 pypi_0 pypi tensorflow-metadata 0.21.1 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi terminado 0.8.2 py37_0 testpath 0.4.2 py37_0 theano 1.0.4 py37_0 tk 8.6.8 hfa6e2cd_0 toolz 0.10.0 py_0 tornado 6.0.3 py37he774522_0 tqdm 4.36.1 py_0 traitlets 4.3.3 py37_0 unicodecsv 0.14.1 py37_0 urllib3 1.24.2 py37_0 vc 14.1 h0510ff6_4 vs2015_runtime 14.16.27012 hf0eaf9b_0 wcwidth 0.1.7 py37_0 webencodings 0.5.1 py37_1 werkzeug 0.16.0 py_0 wheel 0.33.6 py37_0 widgetsnbextension 3.5.1 py37_0 win_inet_pton 1.1.0 py37_0 win_unicode_console 0.5 py37_0 wincertstore 0.2 py37_0 winpty 0.4.3 4 wrapt 1.11.2 py37he774522_0 xlrd 1.2.0 py37_0 xlsxwriter 1.2.1 py_0 xlwings 0.15.10 py37_0 xlwt 1.3.0 py37_0 xz 5.2.4 h2fa13f4_4 yaml 0.1.7 hc54c509_2 zeromq 4.3.1 h33f27b4_3 zict 1.0.0 py_0 zipp 0.6.0 py_0 zlib 1.2.11 h62dcd97_3 zstd 1.3.7 h508b16e_0
There are two methods.

1. Install TensorFlow into a virtual environment with pip

virtualenv --system-site-packages -p python3 ./venv

Then you need to activate your new environment:

source ./venv/bin/activate   # on Windows: .\venv\Scripts\activate

pip install --upgrade pip
pip list  # show packages installed within the virtual environment

To leave the environment later, run deactivate (don't exit until you're done using TensorFlow). Finish by installing TensorFlow:

pip install --upgrade tensorflow

2. Install in your system

python3 --version
pip3 --version
virtualenv --version

Ubuntu:

sudo apt update
sudo apt install python3-dev python3-pip
sudo pip3 install -U virtualenv  # system-wide install

then

pip3 install --user --upgrade tensorflow  # install in $HOME
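Since the question is specifically about a conda environment, here is a sketch of the conda-plus-pip route (my own suggestion, assuming the conda channels still lag behind 2.2 and you're happy to let pip manage TensorFlow inside the env):

conda create -n tf22 python=3.7
conda activate tf22
python -m pip install --upgrade pip
python -m pip install tensorflow==2.2.0
python -c "import tensorflow as tf; print(tf.__version__)"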
12
4
61,381,620
2020-4-23
https://stackoverflow.com/questions/61381620/pytest-windows-fatal-exception-access-violation
I'm using pytest to run some tests for my project. Sometimes (about 30 to 50%) I get an error after the test finished. But this is preventing the testengine to create the testreport, which is really a pain. Error: Windows fatal exception: access violation Current thread 0x000019e0 (most recent call first): File "C:\Python38\lib\threading.py", line 1200 in invoke_excepthook File "C:\Python38\lib\threading.py", line 934 in _bootstrap_inner File "C:\Python38\lib\threading.py", line 890 in _bootstrap Thread 0x00001b0c (most recent call first): File "C:\Python38\lib\site-packages\serial\serialwin32.py", line 240 in _close File "C:\Python38\lib\site-packages\serial\serialwin32.py", line 246 in close File "C:\NoBackup\svn\test_system_smets2\pyets\plugins\plugin_zigbee_dongle.py", line 141 in disconnect File "C:\NoBackup\svn\test_system_smets2\pyets\plugins\plugin_zigbee_dongle.py", line 232 in stop File "C:\NoBackup\svn\test_system_smets2\pyets\testengine\plugin_manager.py", line 286 in stop File "C:\NoBackup\svn\test_system_smets2\pyets\testengine\testengine.py", line 112 in session_finalize File "C:\NoBackup\svn\test_system_smets2\pyets\testengine\pytest_test_engine.py", line 58 in test_engine File "C:\Python38\lib\site-packages\_pytest\fixtures.py", line 800 in _teardown_yield_fixture File "C:\Python38\lib\site-packages\_pytest\fixtures.py", line 871 in finish File "C:\Python38\lib\site-packages\_pytest\runner.py", line 318 in _callfinalizers File "C:\Python38\lib\site-packages\_pytest\runner.py", line 328 in _teardown_with_finalization File "C:\Python38\lib\site-packages\_pytest\runner.py", line 310 in _pop_and_teardown File "C:\Python38\lib\site-packages\_pytest\runner.py", line 350 in _teardown_towards File "C:\Python38\lib\site-packages\_pytest\runner.py", line 342 in teardown_exact File "C:\Python38\lib\site-packages\_pytest\runner.py", line 148 in pytest_runtest_teardown File "C:\Python38\lib\site-packages\pluggy\callers.py", line 187 in _multicall File "C:\Python38\lib\site-packages\pluggy\manager.py", line 83 in <lambda> File "C:\Python38\lib\site-packages\pluggy\manager.py", line 92 in _hookexec File "C:\Python38\lib\site-packages\pluggy\hooks.py", line 286 in __call__ File "C:\Python38\lib\site-packages\_pytest\runner.py", line 217 in <lambda> File "C:\Python38\lib\site-packages\_pytest\runner.py", line 244 in from_call File "C:\Python38\lib\site-packages\_pytest\runner.py", line 216 in call_runtest_hook File "C:\Python38\lib\site-packages\_pytest\runner.py", line 186 in call_and_report File "C:\Python38\lib\site-packages\_pytest\runner.py", line 101 in runtestprotocol File "C:\Python38\lib\site-packages\_pytest\runner.py", line 85 in pytest_runtest_protocol File "C:\Python38\lib\site-packages\pluggy\callers.py", line 187 in _multicall File "C:\Python38\lib\site-packages\pluggy\manager.py", line 83 in <lambda> File "C:\Python38\lib\site-packages\pluggy\manager.py", line 92 in _hookexec File "C:\Python38\lib\site-packages\pluggy\hooks.py", line 286 in __call__ File "C:\Python38\lib\site-packages\_pytest\main.py", line 272 in pytest_runtestloop File "C:\Python38\lib\site-packages\pluggy\callers.py", line 187 in _multicall File "C:\Python38\lib\site-packages\pluggy\manager.py", line 83 in <lambda> File "C:\Python38\lib\site-packages\pluggy\manager.py", line 92 in _hookexec File "C:\Python38\lib\site-packages\pluggy\hooks.py", line 286 in __call__ File "C:\Python38\lib\site-packages\_pytest\main.py", line 247 in _main File "C:\Python38\lib\site-packages\_pytest\main.py", line 191 in 
wrap_session File "C:\Python38\lib\site-packages\_pytest\main.py", line 240 in pytest_cmdline_main File "C:\Python38\lib\site-packages\pluggy\callers.py", line 187 in _multicall File "C:\Python38\lib\site-packages\pluggy\manager.py", line 83 in <lambda> File "C:\Python38\lib\site-packages\pluggy\manager.py", line 92 in _hookexec File "C:\Python38\lib\site-packages\pluggy\hooks.py", line 286 in __call__ File "C:\Python38\lib\site-packages\_pytest\config\__init__.py", line 124 in main File "testrun.py", line 1184 in run_single_test File "testrun.py", line 1548 in main File "testrun.py", line 1581 in <module> Has somebody any idea how to fix or debug that? I'm using pytest 5.4.1 with python 3.8.0 on win10. But this is also reproducable with older pytest versions. The plugin plugin_zigbee_dongle.py uses pyserial (3.4) to communicate with an usb-rf-dongle in a thread. The following code is a snippet of this plugin: import serial import threading class ZigbeeDongleSerial(object): def __init__(self, test_engine): self.ser = serial.Serial() self.test_engine = test_engine # ---------------------------------------------------------- def connect(self, port, baud, timeout): self.ser.baudrate = baud self.ser.timeout = timeout self.ser.port = port self.ser.open() # ---------------------------------------------------------- def disconnect(self): try: if self.is_connected(): self.ser.close() # <------- This is line 141 ---------- except: pass # ---------------------------------------------------------- # ---------------------------------------------------------- class PluginZigbeeDongle(PluginBase): # ---------------------------------------------------------- def __init__(self, test_engine): super(PluginZigbeeDongle, self).__init__() self.test_engine = test_engine self.dongle_serial = ZigbeeDongleSerial(self.test_engine) self._startup_lock = threading.Lock() self._startup_lock.acquire() self.reader_thread = threading.Thread(target=self._read_worker, name="ZigbeeDongleThread") # ---------------------------------------------------------- def start(self): super(PluginZigbeeDongle, self).start() # connect the serial port self.dongle_serial.connect() # start the reader thread if self.dongle_serial.is_connected(): self.reader_thread.start() # ---------------------------------------------------------- def stop(self): super(PluginZigbeeDongle, self).stop() # disconnect the serial port if self.dongle_serial.is_connected(): self.dongle_serial.disconnect() # <------- This is line 232 ---------- # stop the reader thread if self.reader_thread.is_alive(): self.reader_thread.join() # ---------------------------------------------------------- def _read_worker(self): # start-up is now complete self._startup_lock.release() # handle the incoming character stream done = False while not done: try: c = self.dongle_serial.read_byte() except serial.SerialException: done = True except AttributeError: done = True if not done: self._read_parser(c)
Downgrading pyserial from Version 3.4 to Version 2.7 fixed the problem
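For anyone hitting the same crash, the pin can be applied explicitly (my own addition; 2.7 was just the version that happened to work here, and later pyserial releases may also have fixed the underlying issue):

pip install "pyserial==2.7"

If the project uses a requirements.txt, pinning pyserial==2.7 there keeps the downgrade from being silently undone on the next install.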
8
-1
61,457,122
2020-4-27
https://stackoverflow.com/questions/61457122/python-assignment-operator-differs-from-non-assignment
I have come across this weird behavior and cannot find an explanation for it. MWE:
l = [1]
l += {'a': 2}
l
[1, 'a']
l + {'B': 3}
Traceback (most recent call last):
  File "<input>", line 1, in <module>
TypeError: can only concatenate list (not "dict") to list
Basically, when I use += Python does not raise an error and appends the key to the list, while when I only compute the + I get the expected TypeError. Note: this is Python 3.6.10
l += ... is actually calling object.__iadd__(self, other) and modifies the object in place when l is mutable. The reason (as @DeepSpace explains in his comment) is that when you do l += {'a': 2} the operation updates l in place if and only if l is mutable. On the other hand, the operation l + {'a': 2} is not done in place, resulting in list + dictionary -> TypeError. (see here) l = [1] l = l.__iadd__({'a': 2}) l #[1, 'a'] This is not the same as +, which calls object.__add__(self, other): l + {'B': 3} TypeError: can only concatenate list (not "dict") to list
20
17
61,425,296
2020-4-25
https://stackoverflow.com/questions/61425296/why-neural-network-predicts-wrong-on-its-own-training-data
I made a LSTM (RNN) neural network with supervised learning for data stock prediction. The problem is why it predicts wrong on its own training data? (note: reproducible example below) I created simple model to predict next 5 days stock price: model = Sequential() model.add(LSTM(32, activation='sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) model.add(Dense(y_train.shape[1])) model.compile(optimizer='adam', loss='mse') es = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True) model.fit(x_train, y_train, batch_size=64, epochs=25, validation_data=(x_test, y_test), callbacks=[es]) The correct results are in y_test (5 values), so model trains, looking back 90 previous days and then restore weights from best (val_loss=0.0030) result with patience=3: Train on 396 samples, validate on 1 samples Epoch 1/25 396/396 [==============================] - 1s 2ms/step - loss: 0.1322 - val_loss: 0.0299 Epoch 2/25 396/396 [==============================] - 0s 402us/step - loss: 0.0478 - val_loss: 0.0129 Epoch 3/25 396/396 [==============================] - 0s 397us/step - loss: 0.0385 - val_loss: 0.0178 Epoch 4/25 396/396 [==============================] - 0s 399us/step - loss: 0.0398 - val_loss: 0.0078 Epoch 5/25 396/396 [==============================] - 0s 391us/step - loss: 0.0343 - val_loss: 0.0030 Epoch 6/25 396/396 [==============================] - 0s 391us/step - loss: 0.0318 - val_loss: 0.0047 Epoch 7/25 396/396 [==============================] - 0s 389us/step - loss: 0.0308 - val_loss: 0.0043 Epoch 8/25 396/396 [==============================] - 0s 393us/step - loss: 0.0292 - val_loss: 0.0056 Prediction result is pretty awesome, isn't it? That's because algorithm restored best weights from #5 epoch. Okey, let's now save this model to .h5 file, move back -10 days and predict last 5 days (at first example we made model and validate on 17-23 April including day off weekends, now let's test on 2-8 April). Result: It shows absolutely wrong direction. As we see that's because model was trained and took #5 epoch best for validation set on 17-23 April, but not on 2-8. If I try train more, playing with what epoch to choose, whatever I do, there are always a lot of time intervals in the past that have wrong prediction. Why does model show wrong results on its own trained data? I trained data, it must remember how to predict data on this piece of set, but predicts wrong. What I also tried: Use large data sets with 50k+ rows, 20 years stock prices, adding more or less features Create different types of model, like adding more hidden layers, different batch_sizes, different layers activations, dropouts, batchnormalization Create custom EarlyStopping callback, get average val_loss from many validation data sets and choose the best Maybe I miss something? What can I improve? Here is very simple and reproducible example. yfinance downloads S&P 500 stock data. 
"""python 3.7.7 tensorflow 2.1.0 keras 2.3.1""" import numpy as np import pandas as pd from keras.callbacks import EarlyStopping, Callback from keras.models import Model, Sequential, load_model from keras.layers import Dense, Dropout, LSTM, BatchNormalization from sklearn.preprocessing import MinMaxScaler import plotly.graph_objects as go import yfinance as yf np.random.seed(4) num_prediction = 5 look_back = 90 new_s_h5 = True # change it to False when you created model and want test on other past dates df = yf.download(tickers="^GSPC", start='2018-05-06', end='2020-04-24', interval="1d") data = df.filter(['Close', 'High', 'Low', 'Volume']) # drop last N days to validate saved model on past df.drop(df.tail(0).index, inplace=True) print(df) class EarlyStoppingCust(Callback): def __init__(self, patience=0, verbose=0, validation_sets=None, restore_best_weights=False): super(EarlyStoppingCust, self).__init__() self.patience = patience self.verbose = verbose self.wait = 0 self.stopped_epoch = 0 self.restore_best_weights = restore_best_weights self.best_weights = None self.validation_sets = validation_sets def on_train_begin(self, logs=None): self.wait = 0 self.stopped_epoch = 0 self.best_avg_loss = (np.Inf, 0) def on_epoch_end(self, epoch, logs=None): loss_ = 0 for i, validation_set in enumerate(self.validation_sets): predicted = self.model.predict(validation_set[0]) loss = self.model.evaluate(validation_set[0], validation_set[1], verbose = 0) loss_ += loss if self.verbose > 0: print('val' + str(i + 1) + '_loss: %.5f' % loss) avg_loss = loss_ / len(self.validation_sets) print('avg_loss: %.5f' % avg_loss) if self.best_avg_loss[0] > avg_loss: self.best_avg_loss = (avg_loss, epoch + 1) self.wait = 0 if self.restore_best_weights: print('new best epoch = %d' % (epoch + 1)) self.best_weights = self.model.get_weights() else: self.wait += 1 if self.wait >= self.patience or self.params['epochs'] == epoch + 1: self.stopped_epoch = epoch self.model.stop_training = True if self.restore_best_weights: if self.verbose > 0: print('Restoring model weights from the end of the best epoch') self.model.set_weights(self.best_weights) def on_train_end(self, logs=None): print('best_avg_loss: %.5f (#%d)' % (self.best_avg_loss[0], self.best_avg_loss[1])) def multivariate_data(dataset, target, start_index, end_index, history_size, target_size, step, single_step=False): data = [] labels = [] start_index = start_index + history_size if end_index is None: end_index = len(dataset) - target_size for i in range(start_index, end_index): indices = range(i-history_size, i, step) data.append(dataset[indices]) if single_step: labels.append(target[i+target_size]) else: labels.append(target[i:i+target_size]) return np.array(data), np.array(labels) def transform_predicted(pr): pr = pr.reshape(pr.shape[1], -1) z = np.zeros((pr.shape[0], x_train.shape[2] - 1), dtype=pr.dtype) pr = np.append(pr, z, axis=1) pr = scaler.inverse_transform(pr) pr = pr[:, 0] return pr step = 1 # creating datasets with look back scaler = MinMaxScaler() df_normalized = scaler.fit_transform(df.values) dataset = df_normalized[:-num_prediction] x_train, y_train = multivariate_data(dataset, dataset[:, 0], 0,len(dataset) - num_prediction + 1, look_back, num_prediction, step) indices = range(len(dataset)-look_back, len(dataset), step) x_test = np.array(dataset[indices]) x_test = np.expand_dims(x_test, axis=0) y_test = np.expand_dims(df_normalized[-num_prediction:, 0], axis=0) # creating past datasets to validate with EarlyStoppingCust number_validates = 50 step_past 
= 5 validation_sets = [(x_test, y_test)] for i in range(1, number_validates * step_past + 1, step_past): indices = range(len(dataset)-look_back-i, len(dataset)-i, step) x_t = np.array(dataset[indices]) x_t = np.expand_dims(x_t, axis=0) y_t = np.expand_dims(df_normalized[-num_prediction-i:len(df_normalized)-i, 0], axis=0) validation_sets.append((x_t, y_t)) if new_s_h5: model = Sequential() model.add(LSTM(32, return_sequences=False, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) # model.add(Dropout(0.2)) # model.add(BatchNormalization()) # model.add(LSTM(units = 16)) model.add(Dense(y_train.shape[1])) model.compile(optimizer = 'adam', loss = 'mse') # EarlyStoppingCust is custom callback to validate each validation_sets and get average # it takes epoch with best "best_avg" value # es = EarlyStoppingCust(patience = 3, restore_best_weights = True, validation_sets = validation_sets, verbose = 1) # or there is keras extension with built-in EarlyStopping, but it validates only 1 set that you pass through fit() es = EarlyStopping(monitor = 'val_loss', patience = 3, restore_best_weights = True) model.fit(x_train, y_train, batch_size = 64, epochs = 25, shuffle = True, validation_data = (x_test, y_test), callbacks = [es]) model.save('s.h5') else: model = load_model('s.h5') predicted = model.predict(x_test) predicted = transform_predicted(predicted) print('predicted', predicted) print('real', df.iloc[-num_prediction:, 0].values) print('val_loss: %.5f' % (model.evaluate(x_test, y_test, verbose=0))) fig = go.Figure() fig.add_trace(go.Scatter( x = df.index[-60:], y = df.iloc[-60:,0], mode='lines+markers', name='real', line=dict(color='#ff9800', width=1) )) fig.add_trace(go.Scatter( x = df.index[-num_prediction:], y = predicted, mode='lines+markers', name='predict', line=dict(color='#2196f3', width=1) )) fig.update_layout(template='plotly_dark', hovermode='x', spikedistance=-1, hoverlabel=dict(font_size=16)) fig.update_xaxes(showspikes=True) fig.update_yaxes(showspikes=True) fig.show()
The OP postulates an interesting finding. Let me simplify the original question as follows. If the model is trained on a particular time series, why can't the model reconstruct previous time series data, which it was already trained on? Well, the answer is embedded in the training progress itself. Since EarlyStopping is used here to avoid overfitting, the best model is saved at epoch=5, where val_loss=0.0030 as mentioned by the OP. At this instance, the training loss is equal to 0.0343, that is, the RMSE of training is 0.185. Since the dataset is scaled using MinMaxScalar, we need to undo the scaling of RMSE to understand what's going on. The minimum and maximum values of the time sequence are found to be 2290 and 3380. Therefore, having 0.185 as the RMSE of training means that, even for the training set, the predicted values may differ from the ground truth values by approximately 0.185*(3380-2290), that is ~200 units on average. This explains why there is a big difference when predicting the training data itself at a previous time step. What should I do to perfectly emulate training data? I asked this question from myself. The simple answer is, make the training loss approaching 0, that is overfit the model. After some training, I realized that a model with only 1 LSTM layer that has 32 cells is not complex enough to reconstruct the training data. Therefore, I have added another LSTM layer as follows. model = Sequential() model.add(LSTM(32, return_sequences=True, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) # model.add(Dropout(0.2)) # model.add(BatchNormalization()) model.add(LSTM(units = 64, return_sequences=False,)) model.add(Dense(y_train.shape[1])) model.compile(optimizer = 'adam', loss = 'mse') And the model is trained for 1000 epochs without considering EarlyStopping. model.fit(x_train, y_train, batch_size = 64, epochs = 1000, shuffle = True, validation_data = (x_test, y_test)) At the end of 1000th epoch we have a training loss of 0.00047 which is much lower than the training loss in your case. So we would expect the model to reconstruct the training data better. Following is the prediction plot for Apr 2-8. A Final Note: Training on a particular database does not necessarily mean that the model should be able to perfectly reconstruct the training data. Especially, when the methods such as early stopping, regularization and dropout are introduced to avoid overfitting, the model tends to be more generalizable rather than memorizing training data.
27
15
61,479,772
2020-4-28
https://stackoverflow.com/questions/61479772/remap-the-values-to-other-and-give-default-value-also
I have a table I have to map with two values: for NY and CA it is Domestic, for WT it is OUTSIDE, and anything other than that has to be OVERSEAS. di = {"NY": "Domestic","CA": "Domestic","WT":"OUTSIDE"} df.replace({'Territory': di}) How do I give OVERSEAS in the above code? So by default (when the value is not in the dictionary) it has to be OVERSEAS.
Use Series.map, which returns missing values for non-matching keys, then add Series.fillna to replace them with the default value: df = pd.DataFrame({'Territory':['NY','CA','WT','SK','DE']}) di = {"NY": "Domestic","CA": "Domestic","WT":"OUTSIDE"} print (df) Territory 0 NY 1 CA 2 WT 3 SK 4 DE df['Territory'] = df['Territory'].map(di).fillna('OVERSEAS') print (df) Territory 0 Domestic 1 Domestic 2 OUTSIDE 3 OVERSEAS 4 OVERSEAS
7
7
61,480,570
2020-4-28
https://stackoverflow.com/questions/61480570/how-to-pass-a-parameterised-fixture-as-a-parameter-to-another-fixture
I am trying to avoid repeating too much boilerplate in my tests, and I want to rewrite them in a more structured way. Let's say that I have two different parsers that both can parse a text into a doc. That doc would then be used in other tests. The end goal is to expose a doc() fixture that can be used in other tests, and that is parameterised in such a way that it runs all combinations of given parsers and texts. @pytest.fixture def parser_a(): return "parser_a" # actually a parser object @pytest.fixture def parser_b(): return "parser_b" # actually a parser object @pytest.fixture def short_text(): return "Lorem ipsum" @pytest.fixture def long_text(): return "If I only knew how to bake cookies I could make everyone happy." The question is, now, how to create a doc() fixture that would look like this: @pytest.fixture(params=???) def doc(parser, text): return parser.parse(text) where parser is parameterised to be parser_a and parser_b, and text to be short_text and long_text. This means that in total doc would test four combinations of parsers and text in total. The documentation on PyTest's parameterised fixtures is quite vague and I could not find an answer on how to approach this. All help welcome.
Not sure if this is exactly what you need, but you could just use functions instead of fixtures, and combine these in fixtures: import pytest class Parser: # dummy parser for testing def __init__(self, name): self.name = name def parse(self, text): return f'{self.name}({text})' class ParserFactory: # do not recreate existing parsers parsers = {} @classmethod def instance(cls, name): if name not in cls.parsers: cls.parsers[name] = Parser(name) return cls.parsers[name] def parser_a(): return ParserFactory.instance("parser_a") def parser_b(): return ParserFactory.instance("parser_b") def short_text(): return "Lorem ipsum" def long_text(): return "If I only knew how to bake cookies I could make everyone happy." @pytest.fixture(params=[long_text, short_text]) def text(request): yield request.param @pytest.fixture(params=[parser_a, parser_b]) def parser(request): yield request.param @pytest.fixture def doc(parser, text): yield parser().parse(text()) def test_doc(doc): print(doc) The resulting pytest output is: ============================= test session starts ============================= ... collecting ... collected 4 items test_combine_fixt.py::test_doc[parser_a-long_text] PASSED [ 25%]parser_a(If I only knew how to bake cookies I could make everyone happy.) test_combine_fixt.py::test_doc[parser_a-short_text] PASSED [ 50%]parser_a(Lorem ipsum) test_combine_fixt.py::test_doc[parser_b-long_text] PASSED [ 75%]parser_b(If I only knew how to bake cookies I could make everyone happy.) test_combine_fixt.py::test_doc[parser_b-short_text] PASSED [100%]parser_b(Lorem ipsum) ============================== 4 passed in 0.05s ============================== UPDATE: I added a singleton factory for the parser as discussed in the comments as an example. NOTE: I tried to use pytest.lazy_fixture as suggested by @hoefling. That works, and makes it possible to pass the parser and text directly from a fixture, but I couldn't get it (yet) to work in a way that each parser is instantiated only once. For reference, here is the changed code if using pytest.lazy_fixture: @pytest.fixture def parser_a(): return Parser("parser_a") @pytest.fixture def parser_b(): return Parser("parser_b") @pytest.fixture def short_text(): return "Lorem ipsum" @pytest.fixture def long_text(): return "If I only knew how to bake cookies I could make everyone happy." @pytest.fixture(params=[pytest.lazy_fixture('long_text'), pytest.lazy_fixture('short_text')]) def text(request): yield request.param @pytest.fixture(params=[pytest.lazy_fixture('parser_a'), pytest.lazy_fixture('parser_b')]) def parser(request): yield request.param @pytest.fixture def doc(parser, text): yield parser.parse(text) def test_doc(doc): print(doc)
9
1
61,473,880
2020-4-28
https://stackoverflow.com/questions/61473880/avro-deserialization-from-kafka-using-fastavro
I am building an application which receives data from Kafka. When using the standard avro library provided by Apache ( https://pypi.org/project/avro-python3/ ) the results are correct; however, the deserialization process is terribly slow. class KafkaReceiver: data = {} def __init__(self, bootstrap='192.168.1.111:9092'): self.client = KafkaConsumer( 'topic', bootstrap_servers=bootstrap, client_id='app', api_version=(0, 10, 1) ) self.schema = avro.schema.parse(open("Schema.avsc", "rb").read()) self.reader = avro.io.DatumReader(self.schema) def do(self): for msg in self.client: bytes_reader = io.BytesIO(msg.value) decoder = BinaryDecoder(bytes_reader) self.data = self.reader.read(decoder) While reading about why this is so slow, I found fastavro, which should be much faster. I am using it this way: def do(self): schema = fastavro.schema.load_schema('Schema.avsc') for msg in self.client: bytes_reader = io.BytesIO(msg.value) bytes_reader.seek(0) for record in reader(bytes_reader, schema): self.data = record And, since everything works when using Apache's library, I would expect that everything will work the same way with fastavro. However, when running this, I am getting File "fastavro/_read.pyx", line 389, in fastavro._read.read_map File "fastavro/_read.pyx", line 290, in fastavro._read.read_utf8 File "fastavro/_six.pyx", line 22, in fastavro._six.py3_btou UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 3: invalid start byte I don't usually program in Python, so I don't exactly know how to approach this. Any ideas?
The fastavro.reader expects the avro file format that includes the header. It looks like what you have is a serialized record without the header. I think you might be able to read this using the fastavro.schemaless_reader. So instead of: for record in reader(bytes_reader, schema): self.data = record You would do: self.data = schemaless_reader(bytes_reader, schema)
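Put together with the consumer loop from the question, a minimal sketch could look like this (untested; it assumes each Kafka message body is exactly one schemaless-encoded record):

import io
import fastavro
from fastavro import schemaless_reader

schema = fastavro.schema.load_schema('Schema.avsc')

def do(self):
    for msg in self.client:
        bytes_reader = io.BytesIO(msg.value)  # wrap the raw Kafka payload
        # schemaless_reader decodes one record written without the avro file header
        self.data = schemaless_reader(bytes_reader, schema)

If your producer prepends a Confluent Schema Registry header (1 magic byte plus a 4-byte schema id), strip those 5 bytes before decoding.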
7
10
61,461,520
2020-4-27
https://stackoverflow.com/questions/61461520/does-anyone-know-the-meaning-of-the-output-of-image-to-data-and-image-to-osd-met
I'm trying to extract the data from an image using pytesseract. This module has image_to_data and image_to_osd methods. These two methods provide lots of info (TextLineOrder, WritingDirection, ScriptDetection, Orientation, etc...) as output. The image below is the output of the image_to_data method. What do the values of these columns (level, block_num, par_num, line_num, word_num) mean? The output of image_to_osd looks as presented below. What is the meaning of each term in it? Page number: 0 Orientation in degrees: 0 Rotate: 0 Orientation confidence: 16.47 Script: Latin Script confidence: 4.00 I referred to the docs but did not find any info regarding these parameters.
Column level: 1 = item with no block_num, paragraph_num, line_num, word_num; 2 = item with block_num and no paragraph_num, line_num, word_num; 3 = item with block_num, paragraph_num and no line_num, word_num; 4 = item with block_num, paragraph_num, line_num and no word_num; 5 = item with all those numbers. Column block_num: block number of the detected text or item. Column par_num: paragraph number of the detected text or item. Column line_num: line number of the detected text or item. Column word_num: word number of the detected text or item. Above all, these 4 columns are interconnected. If the item comes from a new line, then the word number starts counting again from 0; it doesn't continue from the previous line's last word number. The same goes for line_num, par_num and block_num. Check out the image below for reference: 1st column: block_num, 2nd column: par_num, 3rd column: line_num, 4th column: word_num.
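To see those columns per word programmatically, a small sketch using the dictionary output (the image path is a placeholder):

import pytesseract
from PIL import Image

data = pytesseract.image_to_data(Image.open('sample.png'), output_type=pytesseract.Output.DICT)
for i, word in enumerate(data['text']):
    if word.strip():  # levels 1-4 (page/block/para/line) carry no text, only geometry
        print(data['block_num'][i], data['par_num'][i], data['line_num'][i], data['word_num'][i], word)

Each printed row shows which block, paragraph and line a word belongs to, plus its word number within that line.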
10
8
61,479,059
2020-4-28
https://stackoverflow.com/questions/61479059/it-there-any-default-asynchronious-null-context-manager-in-python3-7
I would like to create an optional asynchronous semaphore. Since asyncio.Semaphore does not support None values, I decided to create an asyncio.Semaphore if a connection limit is specified, else some kind of dummy object. There is contextlib.nullcontext, but it supports only a synchronous with. I've created my own dummy: @contextlib.asynccontextmanager async def asyncnullcontext(): yield None Is there any default asynchronous null context manager?
Is there any default asynchronous null context manager? You can use contextlib.AsyncExitStack(). ExitStack() was similarly the way to create a quick-and-dirty null context manager before the introduction of nullcontext.
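A minimal sketch of using it for the optional-semaphore case from the question (the limit variable is just an example name):

import asyncio
import contextlib

limit = None  # optional connection limit; None means unlimited
sem = asyncio.Semaphore(limit) if limit else contextlib.AsyncExitStack()

async def fetch():
    # with no callbacks registered, AsyncExitStack acts as a no-op async context manager
    async with sem:
        ...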
7
8
61,491,893
2020-4-28
https://stackoverflow.com/questions/61491893/i-cannot-install-tensorflow-version-1-15-through-pip
I have checked my pip version and got the following output: Requirement already up-to-date: pip in ./anaconda3/envs/runlee_python3/lib/python3.8/site-packages (20.1) I have a specific situation in which I have to use version 1.15 of Tensorflow, but when I try to install it, it seems like it can‘t find this specific version. pip install tensorflow==1.15 ERROR: Could not find a version that satisfies the requirement tensorflow==1.15 (from versions: 2.2.0rc1, 2.2.0rc2, 2.2.0rc3) ERROR: No matching distribution found for tensorflow==1.15 I can also not find version 1.15 when listing all available options. What am I missing?
You are using python 3.8, which was not officially supported when tensorflow was at version 1.15. You can also check on pypi, there are no files available for cp38, even for 2.10 Onle the versions listed by your command have a cp38 whl file available, see here Since you have conda, simply create a virtual env with the required version conda create -n tf python=3.7 then install tensorflow in this env
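Spelling out the remaining steps (environment name as in the answer; TensorFlow 1.15 has wheels for Python 3.7):

conda create -n tf python=3.7
conda activate tf
pip install tensorflow==1.15
python -c "import tensorflow as tf; print(tf.__version__)"  # expect 1.15.x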
63
86
61,490,351
2020-4-28
https://stackoverflow.com/questions/61490351/scipy-cosine-similarity-vs-sklearn-cosine-similarity
I noticed that both scipy and sklearn have a cosine similarity/cosine distance functions. I wanted to test the speed for each on pairs of vectors: setup1 = "import numpy as np; arrs1 = [np.random.rand(400) for _ in range(60)];arrs2 = [np.random.rand(400) for _ in range(60)]" setup2 = "import numpy as np; arrs1 = [np.random.rand(400) for _ in range(60)];arrs2 = [np.random.rand(400) for _ in range(60)]" import1 = "from sklearn.metrics.pairwise import cosine_similarity" stmt1 = "[float(cosine_similarity(arr1.reshape(1,-1), arr2.reshape(1,-1))) for arr1, arr2 in zip(arrs1, arrs2)]" import2 = "from scipy.spatial.distance import cosine" stmt2 = "[float(1 - cosine(arr1, arr2)) for arr1, arr2 in zip(arrs1, arrs2)]" import timeit print("sklearn: ", timeit.timeit(stmt1, setup=import1 + ";" + setup1, number=1000)) print("scipy: ", timeit.timeit(stmt2, setup=import2 + ";" + setup2, number=1000)) sklearn: 11.072769448000145 scipy: 1.9755544730005568 sklearn runs almost 10 times slower than scipy (even if you remove the array reshape for the sklearn example and generate data that's already in the right shape). Why is one significantly slower than the other?
As mentioned in the comments section, I don't think the comparison is fair mainly because the sklearn.metrics.pairwise.cosine_similarity is designed to compare pairwise distance/similarity of the samples in the given input 2-D arrays. On the other hand, scipy.spatial.distance.cosine is designed to compute cosine distance of two 1-D arrays. Maybe a more fair comparison is to use scipy.spatial.distance.cdist vs. sklearn.metrics.pairwise.cosine_similarity, where both computes pairwise distance of samples in the given arrays. However, to my surprise, that shows the sklearn implementation is much faster than the scipy implementation (which I don't have an explanation for that currently!). Here is the experiment: import numpy as np from sklearn.metrics.pairwise import cosine_similarity from scipy.spatial.distance import cdist x = np.random.rand(1000,1000) y = np.random.rand(1000,1000) def sklearn_cosine(): return cosine_similarity(x, y) def scipy_cosine(): return 1. - cdist(x, y, 'cosine') # Make sure their result is the same. assert np.allclose(sklearn_cosine(), scipy_cosine()) And here is the timing result: %timeit sklearn_cosine() 10 loops, best of 3: 74 ms per loop %timeit scipy_cosine() 1 loop, best of 3: 752 ms per loop
8
18
61,449,954
2020-4-27
https://stackoverflow.com/questions/61449954/pyqt5-datepicker-popup
I am not able to make datepicker in pyqt5. I am using calendarWidget and it working fine now. But i want dropdown datepicker in my menu bar and want to show selected date in lineEdit. I have created a layout in QDesigner and adding 'DateEdit" widget. But i want same exactly as image shown. I searched for datepicker and getting this link :How to add Today Button in QDateEdit Pop-up QCalendarWidget I tried many approaches but it's not working. Note: when i run my file dateEdit widget always show the Date:[01-01-2000]. it not showing the current date. datepicker.py from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 600) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.verticalLayout = QtWidgets.QVBoxLayout(self.centralwidget) self.verticalLayout.setObjectName("verticalLayout") self.frame_2 = QtWidgets.QFrame(self.centralwidget) self.frame_2.setFrameShape(QtWidgets.QFrame.StyledPanel) self.frame_2.setFrameShadow(QtWidgets.QFrame.Raised) self.frame_2.setObjectName("frame_2") self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.frame_2) self.verticalLayout_2.setObjectName("verticalLayout_2") self.frame = QtWidgets.QFrame(self.frame_2) self.frame.setStyleSheet("background-color: rgb(56, 122, 179);") self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel) self.frame.setFrameShadow(QtWidgets.QFrame.Raised) self.frame.setObjectName("frame") self.horizontalLayout = QtWidgets.QHBoxLayout(self.frame) self.horizontalLayout.setObjectName("horizontalLayout") self.label = QtWidgets.QLabel(self.frame) self.label.setStyleSheet("color: rgb(243, 243, 243);") self.label.setObjectName("label") self.horizontalLayout.addWidget(self.label) self.dateEdit = QtWidgets.QDateEdit(self.frame) self.dateEdit.setObjectName("dateEdit") self.horizontalLayout.addWidget(self.dateEdit) spacerItem = QtWidgets.QSpacerItem(40, 20, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum) self.horizontalLayout.addItem(spacerItem) self.verticalLayout_2.addWidget(self.frame) self.tableWidget = QtWidgets.QTableWidget(self.frame_2) self.tableWidget.setObjectName("tableWidget") self.tableWidget.setColumnCount(0) self.tableWidget.setRowCount(0) self.verticalLayout_2.addWidget(self.tableWidget) self.verticalLayout.addWidget(self.frame_2) MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.label.setText(_translate("MainWindow", "Selected Date is :________________________")) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_())
The QDateEdit already provides a QCalendarWidget so you only need to enable the calendarPopup property: import sys from PyQt5 import QtCore, QtWidgets class MainWindow(QtWidgets.QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.dateedit = QtWidgets.QDateEdit(calendarPopup=True) self.menuBar().setCornerWidget(self.dateedit, QtCore.Qt.TopLeftCorner) self.dateedit.setDateTime(QtCore.QDateTime.currentDateTime()) if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) w = MainWindow() w.show() sys.exit(app.exec_())
7
17
61,468,705
2020-4-27
https://stackoverflow.com/questions/61468705/pyspark-using-collect-list-over-window-with-condition
I have the following test data: import pandas as pd import datetime data = {'date': ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05', '2014-01-06'], 'customerid': [2, 2, 2, 3, 4, 3], 'names': ['Andrew', 'Pete', 'Sean', 'Steve', 'Ray', 'Stef'], 'PaymentType': ['OI', 'CC', 'CC', 'OI', 'OI', 'OI']} data = pd.DataFrame(data) data['date'] = pd.to_datetime(data['date']) The following code gives me the list of names with matching customerid's within a two day time-frame: import pandas as pd import datetime from pyspark.sql.window import Window from pyspark.sql import functions as F from pyspark.sql import SparkSession spark = SparkSession.builder \ .master('local[*]') \ .config("spark.driver.memory", "500g") \ .appName('my-pandasToSparkDF-app') \ .config("spark.ui.showConsoleProgress", "false")\ .getOrCreate() spark.conf.set("spark.sql.execution.arrow.enabled", "true") spark.conf.set('spark.sql.execution.arrow.maxRecordsPerBatch', 50000) spark.sparkContext.setLogLevel("OFF") data = {'date': ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05', '2014-01-06'], 'customerid': [2, 2, 2, 3, 4, 3], 'names': ['Andrew', 'Pete', 'Sean', 'Steve', 'Ray', 'Stef'], 'PaymentType': ['OI', 'CC', 'CC', 'OI', 'OI', 'OI']} data = pd.DataFrame(data) data['date'] = pd.to_datetime(data['date']) spark_data= spark.createDataFrame(data) win = Window().partitionBy('customerid').orderBy((F.col('date')).cast("long")).rangeBetween( -(2*86400), Window.currentRow) result_frame = spark_data.withColumn("names_array", F.collect_list('names').over(win)).sort(F.col("date").asc()) pd_result_frame = result_frame.toPandas() Data: <pre> date |customerid|PaymentType|names 2014-01-01|2 |OI |Andrew 2014-01-02|2 |CC |Pete 2014-01-03|2 |CC |Sean 2014-01-04|3 |OI |Steve 2014-01-05|4 |OI |Ray 2014-01-06|3 |OI |Stef </pre> Resulting table: <pre> date |customerid|PaymentType|names_array| 2014-01-01|2 |OI |['Andrew'] 2014-01-02|2 |CC |['Andrew', 'Pete'] 2014-01-03|2 |CC |['Andrew', 'Pete', 'Sean'] 2014-01-04|3 |OI |['Steve'] 2014-01-05|4 |OI |['Ray'] 2014-01-06|3 |OI |['Steve', 'Stef'] </pre> Now I would like to introduce a condition for F.collect_list. Only those names should be collected into the lists for which PaymentType == 'OI'. In the end, the table should look like this: <pre> date |customerid|PaymentType|names_array| 2014-01-01|2 |OI |['Andrew'] 2014-01-02|2 |CC |['Andrew'] 2014-01-03|2 |CC |['Andrew'] 2014-01-04|3 |OI |['Steve'] 2014-01-05|4 |OI |['Ray'] 2014-01-06|3 |OI |['Steve', 'Stef'] </pre> Thank you!
You could put a when/otherwise clause in your collect_list to collect only when PaymentType is 'OI', otherwise collect None. spark_data.withColumn("names_array",\ F.collect_list(F.when(F.col("PaymentType")=='OI',F.col("names"))\ .otherwise(F.lit(None))).over(win)).sort(F.col("date").asc()).show() #+-------------------+----------+------+-----------+-------------+ #| date|customerid| names|PaymentType| names_array| #+-------------------+----------+------+-----------+-------------+ #|2014-01-01 00:00:00| 2|Andrew| OI| [Andrew]| #|2014-01-02 00:00:00| 2| Pete| CC| [Andrew]| #|2014-01-03 00:00:00| 2| Sean| CC| [Andrew]| #|2014-01-04 00:00:00| 3| Steve| OI| [Steve]| #|2014-01-05 00:00:00| 4| Ray| OI| [Ray]| #|2014-01-06 00:00:00| 3| Stef| OI|[Steve, Stef]| #+-------------------+----------+------+-----------+-------------+
8
13
61,457,120
2020-4-27
https://stackoverflow.com/questions/61457120/how-to-use-libreoffice-api-uno-with-python-windows
This question is focused on Windows + LibreOffice + Python 3. I've installed LibreOffice (6.3.4.2), also pip install unoconv and pip install unotools (pip install uno is another unrelated library), but still I get this error after import uno: ModuleNotFoundError: No module named 'uno' More generally, and as an example of use of UNO, how to open a .docx document with LibreOffice UNO and export it to PDF? I've searched extensively on this since a few days, but I haven't found a reproducible sample code working on Windows: headless use of soffice.exe, see my question+answer Headless LibreOffice very slow to export to PDF on Windows (6 times slow than on Linux) and the notes on the answer: it "works" with soffice.exe --headless ... but something closer to a COM interaction (Component Object Model) would be useful for many applications, thus this question here Related forum post, and LibreOffice: Programming with Python Scripts, but the way uno should be installed on Windows, with Python, is not detailed; also Detailed tutorial regarding LibreOffice to Python macro writing, especially for Calc I've also tried this (unsuccessfully): Getting python to import uno / pyuno: import os os.environ["URE_BOOTSTRAP"] = r"vnd.sun.star.pathname:C:\Program Files\LibreOffice\program\fundamental.ini" os.environ["PATH"] += r";C:\Program Files\LibreOffice\program" import uno
In order to interact with LibreOffice, start an instance listening on a socket. I don't use COM much, but I think this is the equivalent of the COM interaction you asked about. This can be done most easily on the command line or using a shell script, but it can also work with a system call using a time delay and subprocess. chdir "%ProgramFiles%\LibreOffice\program\" start soffice -accept=socket,host=localhost,port=2002;urp; Next, run the installation of python that comes with LibreOffice, which has uno installed by default. "C:\Program Files\LibreOffice\program\python.exe" >> import uno If instead you are using an installation of Python on Windows that was not shipped with LibreOffice, then getting it to work with UNO is much more difficult, and I would not recommend it unless you enjoy hacking. Now, here is all the code. In a real project, it's probably best to organize into classes, but this is a simplified version. import os import uno from com.sun.star.beans import PropertyValue def createProp(name, value): prop = PropertyValue() prop.Name = name prop.Value = value return prop localContext = uno.getComponentContext() resolver = localContext.ServiceManager.createInstanceWithContext( "com.sun.star.bridge.UnoUrlResolver", localContext) ctx = resolver.resolve( "uno:socket,host=localhost,port=2002;urp;" "StarOffice.ComponentContext") smgr = ctx.ServiceManager desktop = smgr.createInstanceWithContext( "com.sun.star.frame.Desktop", ctx) dispatcher = smgr.createInstanceWithContext( "com.sun.star.frame.DispatchHelper", ctx) filepath = r"C:\Users\JimStandard\Desktop\Untitled 1.docx" fileUrl = uno.systemPathToFileUrl(os.path.realpath(filepath)) uno_args = ( createProp("Minimized", True), ) document = desktop.loadComponentFromURL( fileUrl, "_default", 0, uno_args) uno_args = ( createProp("FilterName", "writer_pdf_Export"), createProp("Overwrite", False), ) newpath = filepath[:-len("docx")] + "pdf" fileUrl = uno.systemPathToFileUrl(os.path.realpath(newpath)) try: document.storeToURL(fileUrl, uno_args) # Export except ErrorCodeIOException: raise try: document.close(True) except CloseVetoException: raise Finally, since speed is a concern, using a listening instance of LibreOffice can be slow. To do this faster, move the code into a macro. APSO provides a menu to organize Python macros. Then call the macro like this: soffice "vnd.sun.star.script:myscript.py$name_of_maindef?language=Python&location=user" In macros, obtain the document objects from XSCRIPTCONTEXT rather than the resolver.
10
9
61,455,686
2020-4-27
https://stackoverflow.com/questions/61455686/time-complexity-of-python-dictionary-len-method
Example: a = {'a': '1', 'b': '2'} len(a) What is the time complexity of len(a)?
Inspecting the c-source of dictobject.c shows that the structure contains a member responsible for maintaining an explicit count (dk_size) layout: +---------------+ | dk_refcnt | | dk_size | | dk_lookup | | dk_usable | | dk_nentries | +---------------+ ... Thus it will have order O(1)
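A quick empirical check (timings are machine-dependent and shown only to illustrate that the cost does not grow with the dict size):

import timeit

small = {i: i for i in range(10)}
large = {i: i for i in range(1_000_000)}

print(timeit.timeit(lambda: len(small), number=1_000_000))
print(timeit.timeit(lambda: len(large), number=1_000_000))
# both calls take roughly the same total time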
10
16
61,440,990
2020-4-26
https://stackoverflow.com/questions/61440990/how-to-check-whether-user-is-logged-in-or-not
I am working on invoice management system in which user can add invoice data and it will save in database and whenever user logged in the data will appear on home page but whenever user logout and try to access home page but it is giving following error. TypeError at / 'AnonymousUser' object is not iterable i tried AnonymousUser.is_authenticated method but still not working. i want if user is logged in then home.html should open otherwise intro.html here is my code views.py from django.shortcuts import render from django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin from django.views.generic import ( ListView, DetailView, CreateView, UpdateView, DeleteView ) from .models import Invoicelist def home(request): if request.user.is_authenticated(): context = { 'invoices': Invoicelist.objects.all() } return render(request, 'invoicedata/home.html', context) else: return render(request, 'invoicedata/intro.html', context) home.html {% extends "invoicedata/base.html" %} {% block content %} {% for invoice in invoices %} <article class="media content-section"> <div class="media-body"> <div class="article-metadata"> <small class="text-muted">{{ invoice.date_posted|date:"F d, Y" }}</small> <h2><a class="article-title" href="{% url 'invoice-detail' invoice.id %}">{{ invoice.issuer }}</a></h2> </div> <p class="article-content">{{ invoice.invoice_number }}</p> <p class="article-content">{{ invoice.date }}</p> <p class="article-content">{{ invoice.amount }}</p> <p class="article-content">{{ invoice.currency }}</p> <p class="article-content">{{ invoice.other }}</p> <div class="article-metadata"> <small class="text-muted">{{ invoice.author }}</small> </div> </div> </article> {% endfor %} {% endblock content %} intro.html {% extends "invoicedata/base.html" %} {% block content %} <h2>login to your portal for great auditing services</h2> {% endblock content %}
Finally I got the solution that works for me; here it is. Django provides LoginRequiredMixin, which I used in my InvoiceListView class: from django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin class InvoiceListView(LoginRequiredMixin,ListView): model = Invoicelist template_name = 'invoicedata/home.html' context_object_name = 'invoices' def get_queryset(self): return self.model.objects.all().filter(author=self.request.user).order_by('-date_posted')[:2] and that's it. Now whenever the user logs out, they will be redirected to the login page.
14
10
61,452,582
2020-4-27
https://stackoverflow.com/questions/61452582/why-is-using-sudo-pip-a-bad-idea
In a post I was reviewing recently, I read that it's advised not to use 'sudo pip' to install certain items. Can someone clarify why this is and what the downsides/upsides are? Thanks!
Your OS has a Python interpreter to run Python software controlled by your package manager, be it apt, yum, or App Store. Any Python package installed to the system Python installation are dependencies of such software, or that software itself. By installing or updating packages in your system Python, you can break that software. Also, your modifications would be overwritten with the next update of something that required a dependency you've overwritten ("upgraded"), which often occurs when you install something with many dependencies. This can bite you at the most inopportune moment. If you value your time and sanity, always use virtualenv or your favorite wrapper over it. Preferably have one virtualenv per project, and separate virtualenvs for stuff like AWS CLI. Never sudo pip install anything for your development.
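For example, a typical per-project workflow looks something like this (names are placeholders):

python3 -m venv .venv            # create an isolated environment inside the project
source .venv/bin/activate        # on Windows: .venv\Scripts\activate
pip install requests             # installs into .venv, never into the system Python

No sudo is needed at any point, and deleting the project's .venv directory undoes everything.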
7
8
61,451,279
2020-4-27
https://stackoverflow.com/questions/61451279/how-do-setcolumnstretch-and-setrowstretch-work
I have an application built using PySide2 which uses setColumnStretch for column stretching and setRowStretch for row stretching. It works well, but I am unable to understand how it is working. I am stuck on the two values inside those parentheses. For example: glay = QtWidgets.QGridLayout(right_container) glay.addWidget(lineedit, 0, 0) glay.addWidget(button2, 0, 2) glay.addWidget(widget, 2, 0, 1, 3) glay.addWidget(button, 4, 0) glay.addWidget(button1, 4, 2) glay.setColumnStretch(1, 1) # setColumnStretch glay.setRowStretch(1, 1) # setRowStretch glay.setRowStretch(2, 2) # setRowStretch glay.setRowStretch(3, 1) # setRowStretch This produces the output shown in the image below. But how? What do these four values inside glay.addWidget(widget, 2, 0, 1, 3) do? Please explain all this to me with examples.
Short Answer: Read the Qt docs: https://doc.qt.io/qt-5/qgridlayout.html as it is clear and precise. Long Answer: addWidget(): The addWidget method is overload (that concept exists natively in C++ but can be built in python but does not exist by default) which implies that a method (or function) has a different behavior depending on the arguments, in this case: void addWidget(QWidget *widget, int row, int column, Qt::Alignment alignment = Qt::Alignment()) void addWidget(QWidget *widget, int fromRow, int fromColumn, int rowSpan, int columnSpan, Qt::Alignment alignment = Qt::Alignment()) The first method indicates that the widget will be placed in the position "row" and "column" that an item occupies, and in the second method the number of rows or columns that it occupies can be indicated, and they are equivalent to: def addWidget(self, row, column, rowSpan = 1, colSpan = 1, alignment = Qt.Alignment()): pass So in lay.addWidget(widget, 2, 0, 1, 3) it means that "widget" will be placed at position 2x0 and will occupy 1 row and 3 columns. import random import sys from PyQt5 import QtGui, QtWidgets if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) w = QtWidgets.QWidget() glay = QtWidgets.QGridLayout(w) elements = ( (0, 0, 1, 1), # Position: 0x0 1 rowspan 1 colspan (1, 0, 1, 1), # Position: 1x0 1 rowspan 1 colspan (0, 1, 2, 1), # Position: 0x1 2 rowspan 1 colspan (2, 0, 1, 2), # Position: 2x0 1 rowspan 2 colspan ) for i, (row, col, row_span, col_span) in enumerate(elements): label = QtWidgets.QLabel("{}".format(i)) color = QtGui.QColor(*random.sample(range(255), 3)) label.setStyleSheet("background-color: {}".format(color.name())) glay.addWidget(label, row, col, row_span, col_span) w.resize(640, 480) w.show() sys.exit(app.exec_()) setColumnStretch() and setRowStretch(): By default if a QGridLayout is filled with the same widgets and the rowSpan and columnSpan are 1 then all the widgets will be the same size, but many times you want a widget to take up more space, or the sizes are proportional. To understand the logic I will use the following code: import random import sys from PyQt5 import QtGui, QtWidgets if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) w = QtWidgets.QWidget() glay = QtWidgets.QGridLayout(w) for i in range(3): for j in range(3): label = QtWidgets.QLabel("{}x{}".format(i, j)) color = QtGui.QColor(*random.sample(range(255), 3)) label.setStyleSheet("background-color: {}".format(color.name())) glay.addWidget(label, i, j) glay.setRowStretch(0, 1) glay.setRowStretch(1, 2) glay.setRowStretch(2, 3) w.resize(640, 480) w.show() sys.exit(app.exec_()) It was established that the stretch of the first column is 1, the second is 2 and the third is 3, which is the ratio of the size of the rows 1:2:3 What happens if you set stretch to 0? Well, it will occupy the minimum size and those with a stretch > 0 will keep the proportion: glay.setRowStretch(0, 0) glay.setRowStretch(1, 1) glay.setRowStretch(2, 2) The same concept applies to setColumnStretch().
15
30
61,447,877
2020-4-26
https://stackoverflow.com/questions/61447877/python-split-list-into-several-lines-of-code
I have a list in Python which includes up to 50 elements. In order for me to easily add/subtract elements, I'd prefer to either code it vertically (each list element on one Python code line) or alternatively, import a separate CSV file? list_of_elements = ['AA','BB','CC','DD','EE','FF', 'GG'] for i in list_of_elements: more code... I'd prefer code like this: list_of_elements = ['AA', 'BB', 'CC', 'DD', 'EE', 'FF', 'GG'] for i in list_of_elements: more code... Just to clarify, it's not about printing, but about coding. I need to have a better visual overview of all the list elements inside the Python code.
The first line should contain the first element, like this: list_of_elements = ['AA', 'BB', 'CC', 'DD', 'EE', 'FF', 'GG'] or as Naufan Rusyda Faikar commented: Put backslash next to = Or put the left bracket next to =. list_of_elements = \ ['AA', 'BB', 'CC', 'DD', 'EE', 'FF', 'GG'] list_of_elements = [ 'AA', 'BB', 'CC', 'DD', 'EE', 'FF', 'GG'] All three will work.
9
14
61,443,261
2020-4-26
https://stackoverflow.com/questions/61443261/what-is-the-use-of-pd-plotting-register-matplotlib-converters-in-pandas
While learning from an online course on visualising data, I came across this line of code. import pandas as pd pd.plotting.register_matplotlib_converters() import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns Can someone please tell me what is the use of pd.plotting.register_matplotlib_converters() I referred to the official documentation, but a clear explanation is not given. Documentation
I found this in the documentation: This function modifies the global matplotlib.units.registry dictionary. Pandas adds custom converters for pd.Timestamp pd.Period np.datetime64 ... So I guess it makes sure that pandas datatypes like pd.Timestamp can be used in matplotlib plots without having to cast them to another type.
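A small illustration of where it matters (exact warning behaviour differs between pandas/matplotlib versions):

import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters

register_matplotlib_converters()  # registers converters for Timestamp, Period, datetime64

s = pd.Series(range(5), index=pd.date_range('2020-01-01', periods=5))
plt.plot(s.index, s.values)  # the DatetimeIndex is now handled without a conversion warning
plt.show()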
14
6
61,439,815
2020-4-26
https://stackoverflow.com/questions/61439815/how-to-display-an-svg-image-in-python
I was following this tutorial on how to write a chess program in Python. It uses the python-chess engine. The functions from that engine apparently return SVG data, that could be used to display a chessboard. Code from the tutorial: import chess import chess.svg from IPython.display import SVG board = chess.Board() SVG(chess.svg.board(board=board,size=400)) but when I run that code, all I see is a line in the terminal and no image. <IPython.core.display.SVG object> The tutorial makes a passing reference to Jupyter Notebooks and how they can be used to display SVG images. I have no experience with Jupyter Notebooks and even though I installed the package from pip and I dabbled a little into how to use it, I couldn't make much progress with regards to my original chessboard problem. But what I do have, is, experience with Qt development using C++ and since Qt has Python bindings, I decided to use those bindings. Here is what I wrote: import sys import chess import chess.svg from PyQt5 import QtGui, QtSvg from PyQt5.QtWidgets import QApplication from IPython.display import SVG, display app = QApplication(sys.argv); board = chess.Board(); svgWidget = QtSvg.QSvgWidget(chess.svg.board(board=board, size=400)); #svgWidget.setGeometry(50,50,759,668) svgWidget.show() sys.exit(app.exec_()) A Qt window opens and shows nothing and in the terminal I see a lot of text - (apparently the SVG data is ending up in the console and not in the Qt window that is opening?). I figured I have to install some SVG library under python so I installed drawSvg from pip. But it seems that library generates SVG images. And was of no use for me. What is even more strange is, after seeing this SO question, I tried the following: import sys import chess import chess.svg from PyQt5 import QtGui, QtSvg from PyQt5.QtWidgets import QApplication from IPython.display import SVG, display app = QApplication(sys.argv); board = chess.Board(); svgWidget = QtSvg.QSvgWidget('d:\projects\python_chess\Zeichen_123.svg'); #svgWidget.setGeometry(50,50,759,668) svgWidget.show() sys.exit(app.exec_()) And it showed an image - an SVG image! What is the difference then between my case and this case? Question: So my question is, what I am doing wrong in the case of the chessboard SVG data? Is the SVG data generated by the python-chess library not compatible with QtSvg?
I think you are getting confused by the scripting nature of Python. You say, you have experience with Qt development under C++. Wouldn't you create a main window widget there first and add to it your SVG widget within which you would call or load SVG data? I would rewrite your code something like this. import chess import chess.svg from PyQt5.QtSvg import QSvgWidget from PyQt5.QtWidgets import QApplication, QWidget class MainWindow(QWidget): def __init__(self): super().__init__() self.setGeometry(100, 100, 1100, 1100) self.widgetSvg = QSvgWidget(parent=self) self.widgetSvg.setGeometry(10, 10, 1080, 1080) self.chessboard = chess.Board() self.chessboardSvg = chess.svg.board(self.chessboard).encode("UTF-8") self.widgetSvg.load(self.chessboardSvg) if __name__ == "__main__": app = QApplication([]) window = MainWindow() window.show() app.exec() EDIT It would be even better if you would add a paint function to the MainWindow class. Because for sure in future, you would want to repaint your board image many times, whenever you would move a piece. So I would do something like this. def paintEvent(self, event): self.chessboardSvg = chess.svg.board(self.chessboard).encode("UTF-8") self.widgetSvg.load(self.chessboardSvg)
12
8
61,437,320
2020-4-26
https://stackoverflow.com/questions/61437320/pytest-finding-when-each-test-started-and-ended
I have a complex Django-Pytest test suite with lots of tests that are running in parallel processes. I'd like to see the exact timepoint at which each test started and ended. How can I get that information out of Pytest?
The start/stop timestamps for each call phase are stored in the CallInfo objects. However, accessing those for reporting is not very convenient, so it's best to store both timestamps in the report objects. Put the following code in a conftest.py file in your project/test root dir: import pytest @pytest.hookimpl(hookwrapper=True) def pytest_runtest_makereport(item, call): outcome = yield report = outcome.get_result() report.start = call.start report.stop = call.stop Now that you have each report enhanced with start/stop times, process them the way you need to, for example by printing them in a custom section after the test execution. Enhance your conftest.py file with: def pytest_terminal_summary(terminalreporter): terminalreporter.ensure_newline() terminalreporter.section('start/stop times', sep='-', bold=True) for stat in terminalreporter.stats.values(): for report in stat: if report.when == 'call': start = datetime.fromtimestamp(report.start) stop = datetime.fromtimestamp(report.stop) terminalreporter.write_line(f'{report.nodeid:20}: {start:%Y-%m-%d %H:%M:%S} - {stop:%Y-%m-%d %H:%M:%S}') A test execution of sample tests def test_spam(): time.sleep(1) def test_eggs(): time.sleep(2) now yields: test_spam.py .. [100%] ------------------------------ start/stop times ------------------------------- test_spam.py::test_spam: 2020-04-26 13:29:05 - 2020-04-26 13:29:06 test_spam.py::test_eggs: 2020-04-26 13:29:06 - 2020-04-26 13:29:08 ============================== 2 passed in 3.03s ============================== Notice that in the above pytest_terminal_summary hookimpl example, I only print the time of the call phase (execution times of the test function). If you want to see or include the timestamps of the test setup/teardown phases, filter terminalreporter.stats by report objects with report.when == 'setup'/report.when == 'teardown', respectively.
8
6
61,437,756
2020-4-26
https://stackoverflow.com/questions/61437756/no-python-3-8-installation-was-detected
Python installation screenshot. 1. I uninstalled everything of Python with an advanced uninstaller (registry files and ...). 2. I downloaded the latest version of Python from python.org. 3. I enabled the "include PATH" option when starting the installation of Python. But I don't know why it's not installed!
Error Code 0x80070643: I found it. If you are not an administrator of the system and change the installation location, for example to c:\python, this error will appear. So you must install Python in c:\users\'your username'\AppData\Local\Programs\Python, and after installing Python, go to the system environment variables and add that path there.
11
5
61,408,795
2020-4-24
https://stackoverflow.com/questions/61408795/using-sagemath-as-a-python-library
Is it possible to import the SageMath functions inside a Python session? What I wish to do, from a user perspective, is something like this: >>> import sage >>> sage.kronecker_symbol(3,5) # ...or any other sage root functions instead of accessing kronecker_symbol(3,5) from a SageMath session. If possible, it would be very handy, as it would allow embedding all the functionality of SageMath within the Python world.
Importing SageMath functions in a Python session There are several ways to achieve that. SageMath from the operating system's package manager Some operating systems have Sage packaged natively, for example Arch Linux, Debian, Fedora, Gentoo, NixOS, and their derivatives (Linux Mint, Manjaro, Ubuntu...). See the dedicated "Distribution" page on the Sage wiki: SageMath distribution and packaging If using one of those, use the package manager to install sage or sagemath and then the Sage library will be installed on the system's Python, and in that Python it will become possible to do things like >>> from sage.arith.misc import kronecker >>> kronecker(3, 5) -1 Another option is to use a cross-platform package manager such as Conda, Guix and Nix. These should work on most Linux distributions and macOS. Yet another option would be to run a Docker container. I will detail the Conda case below. SageMath with Conda Install Sage with Conda and you will get that. Instructions are here: SageMath installation: install from conda-forge and start by installing a Conda distribution, either Miniconda, Minimamba or Anaconda, and then creating a sage conda environment. Once a sage conda environment is installed, activate it: conda activate sage With that sage conda environment active, run python and importing the sage module or importing functions such as kronecker from that module should work.
12
10
61,428,816
2020-4-25
https://stackoverflow.com/questions/61428816/why-does-unpacking-give-a-list-instead-of-a-tuple-in-python
This is really strange to me, because by default I thought unpacking gives tuples. In my case I want to use the prefix keys for caching, so a tuple is preferred. # The r.h.s is a tuple, equivalent to (True, True, 100) *prefix, seed = ml_logger.get_parameters("Args.attn", "Args.memory_gate", "Args.seed") assert type(prefix) is list But I thought unpacking would return a tuple instead. Here is the relevant PEP: https://www.python.org/dev/peps/pep-3132/ -- Update -- Given the comments and answers below: specifically, I was expecting the unpacking to give a tuple because in function arguments a spread arg is always a tuple instead of a list. As Jason pointed out, during unpacking one would not be able to know the length of the result ahead of time, so implementation-wise the catch-all has to start as a list for dynamic appends. Converting it to a tuple afterwards would be a waste of effort the majority of the time. Semantically, I would prefer to have a tuple for consistency.
This issue was mentioned in that PEP (PEP 3132): After a short discussion on the python-3000 list [1], the PEP was accepted by Guido in its current form. Possible changes discussed were: [...] Try to give the starred target the same type as the source iterable, for example, b in a, *b = 'hello' would be assigned the string 'ello'. This may seem nice, but is impossible to get right consistently with all iterables. Make the starred target a tuple instead of a list. This would be consistent with a function's *args, but make further processing of the result harder. But as you can see, these features currently are not implemented: In [1]: a, *b, c = 'Hello!' In [2]: print(a, b, c) H ['e', 'l', 'l', 'o'] ! Maybe, mutable lists are more appropriate for this type of unpacking.
9
7
61,428,792
2020-4-25
https://stackoverflow.com/questions/61428792/how-do-i-go-back-to-my-system-python-using-pyenv-in-ubuntu
I installed pyenv and switched to Python 3.6.9 (using pyenv global 3.6.9). How do I go back to my system Python? Running pyenv global system didn't work.
Your system Python might be /usr/bin/python or /usr/bin/python3. You have a couple options: Execute that Python interpreter directly: /usr/bin/python --version If you want to run it from a script and you're on a *nix machine, put #!/usr/bin/python at the top of the file, then give it execute permissions (chmod +x my-script.py) and run it directly: ./my-script.py. Turn off pyenv's path hacks. This could mean removing the eval "$(pyenv init -)" from your ~/.bashrc or ~/.bash_profile and loading a new shell. Use the pyenv register plugin - https://github.com/doloopwhile/pyenv-register (or use/build something similar). Here's a portion of the README Installation: git clone https://github.com/doloopwhile/pyenv-register.git $(pyenv root)/plugins/pyenv-register # clone plugin exec "$SHELL" # reload shell Usage: pyenv register /usr/bin/python pyenv versions
11
8
61,426,232
2020-4-25
https://stackoverflow.com/questions/61426232/update-dataclass-fields-from-a-dict-in-python
How can I update the fields of a dataclass using a dict? Example: @dataclass class Sample: field1: str field2: str field3: str field4: str sample = Sample('field1_value1', 'field2_value1', 'field3_value1', 'field4_value1') updated_values = {'field1': 'field1_value2', 'field3': 'field3_value2'} I want to do something like sample.update(updated_values)
One way is to make a small class and inherit from it: class Updateable(object): def update(self, new): for key, value in new.items(): if hasattr(self, key): setattr(self, key, value) @dataclass class Sample(Updateable): field1: str field2: str field3: str field4: str You can read this if you want to learn more about getattr and setattr
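Usage with the Sample and dict from the question would then look roughly like this (a sketch; the field names are taken from the question):
sample = Sample('field1_value1', 'field2_value1', 'field3_value1', 'field4_value1')
updated_values = {'field1': 'field1_value2', 'field3': 'field3_value2'}
sample.update(updated_values)
print(sample.field1, sample.field3)  # field1_value2 field3_value2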
11
10
61,417,426
2020-4-24
https://stackoverflow.com/questions/61417426/is-if-name-main-required-in-a-main-py
In a project that has a __main__.py, rather than # __main__.py # def main... if __name__ == "__main__": main() ...is it OK to just do: # __main__.py # def main... main() Edit: @user2357112-supports-Monica's argument made a lot of sense to me, so I went back and tracked down the library that had been giving me issues, causing me to still be adding the if __... line. It's upon calling python -m pytest --doctest-modules. Maybe that's the only place that makes a mistake in running __main__.py? And maybe that's a bug? Reproduced by putting the first example in the docs in a __main__.py: ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― package/__main__.py ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― package/__main__.py:58: in <module> args = parser.parse_args() /usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py:1755: in parse_args args, argv = self.parse_known_args(args, namespace) /usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py:1787: in parse_known_args namespace, args = self._parse_known_args(args, namespace) /usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py:2022: in _parse_known_args ', '.join(required_actions)) /usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py:2508: in error self.exit(2, _('%(prog)s: error: %(message)s\n') % args) /usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py:2495: in exit _sys.exit(status) E SystemExit: 2 --------------------------------------------------------------------------------------- Captured stderr --------------------------------------------------------------------------------------- usage: pytest.py [-h] [--sum] N [N ...] pytest.py: error: the following arguments are required: N !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Results (4.23s):
It's okay to skip the if __name__ == '__main__' guard in most regular scripts, not just __main__.py. The purpose of the guard is to make specific code not run if the file is imported as a module instead of run as the program's entry point, but importing a __main__.py as a module is usually using it wrong anyway. Even with multiprocessing, you might think you need an if __name__ == '__main__' guard, but in the case of a __main__.py, it wouldn't actually help. It's commonly said that multiprocessing in spawn or forkserver mode imports the __main__ script as a module, but that's a simplification of the real behavior. In particular, one part of the real behavior is that if spawn mode detects the main script was a __main__.py, it just doesn't try to load the original __main__ at all: # __main__.py files for packages, directories, zip archives, etc, run # their "main only" code unconditionally, so we don't even try to # populate anything in __main__, nor do we make any changes to # __main__ attributes current_main = sys.modules['__main__'] if mod_name == "__main__" or mod_name.endswith(".__main__"): return forkserver mode also didn't load __main__.py when I tested it, but forkserver goes through a slightly different code path, and I'm not sure where it decided to skip __main__.py. (This might be different on different Python versions - I only checked 3.8.2.) That said, there's nothing wrong with using an if __name__ == '__main__' guard. Not using it has more weird edge cases than using it, and experienced readers will be more confused by its absence than its presence. Even in a __main__.py, I would probably still use the guard. If you actually do want to import __main__.py for some reason, perhaps to unit test functions defined there, then you will need the guard. However, it might make more sense to move anything worth importing out of __main__.py and into another file.
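A minimal sketch of that last suggestion, with illustrative names (mypkg is a placeholder): the importable logic lives in a sibling module and __main__.py keeps the guard, so tools that import it (such as pytest --doctest-modules) do not trigger the entry-point code.
# mypkg/cli.py
def main():
    print("doing the real work")

# mypkg/__main__.py
from mypkg.cli import main

if __name__ == "__main__":
    main()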
11
6
61,387,304
2020-4-23
https://stackoverflow.com/questions/61387304/tabula-vs-camelot-for-table-extraction-from-pdf
I need to extract tables from PDFs; these tables can be of any type: multiple headers, vertical headers, horizontal headers, etc. I have implemented the basic use cases for both and found tabula doing a bit better than camelot, but still not able to detect all tables perfectly, and I am not sure whether it will work for all kinds of tables or not. So I am seeking suggestions from experts who have implemented a similar use case. Example PDFs: PDF1 PDF2 PDF3 Tabula Implementation: import tabula tab = tabula.read_pdf('pdfs/PDF1.pdf', pages='all') for t in tab: print(t, "\n=========================\n") Camelot Implementation: import camelot tables = camelot.read_pdf('pdfs/PDF1.pdf', pages='all', split_text=True) tables for tabs in tables: print(tabs.df, "\n=================================\n")
Please read this: https://camelot-py.readthedocs.io/en/master/#why-camelot The main advantage of Camelot is that this library is rich in parameters, through which you can improve the extraction. Obviously, the application of these parameters requires some study and various attempts. Here you can find a comparison of Camelot with other PDF Table Extraction libraries.
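As a rough illustration of that parameter tuning (the flavor and tolerance values here are guesses that would need adjusting per document):
import camelot

# 'lattice' targets tables drawn with ruling lines, 'stream' targets whitespace-separated tables
tables = camelot.read_pdf('pdfs/PDF1.pdf', pages='all', flavor='stream', row_tol=10)
for t in tables:
    print(t.parsing_report)  # accuracy/whitespace metrics help judge the extraction quality
    print(t.df)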
7
13
61,384,752
2020-4-23
https://stackoverflow.com/questions/61384752/how-to-type-hint-with-an-optional-import
When using an optional import, i.e. the package is only imported inside a function as I want it to be an optional dependency of my package, is there a way to type hint the return type of the function as one of the classes belonging to this optional dependency? To give a simple example with pandas as an optional dependency: def my_func() -> pd.DataFrame: import pandas as pd return pd.DataFrame() df = my_func() In this case, since the import statement is within my_func, this code will, not surprisingly, raise: NameError: name 'pd' is not defined If the string literal type hint were used instead, i.e.: def my_func() -> 'pd.DataFrame': import pandas as pd return pd.DataFrame() df = my_func() the module can now be executed without issue, but mypy will complain: error: Name 'pd' is not defined How can I make the module execute successfully and retain the static type checking capability, while also having this import be optional?
Try sticking your import inside of an if typing.TYPE_CHECKING statement at the top of your file. This variable is always false at runtime but is treated as always true for the purposes of type hinting. For example: # Lets us avoid needing to use forward references everywhere # for Python 3.7+ from __future__ import annotations from typing import TYPE_CHECKING if TYPE_CHECKING: import pandas as pd def my_func() -> pd.DataFrame: import pandas as pd return pd.DataFrame() You can also do if False:, but I think that makes it a little harder for somebody to tell what's going on. One caveat is that this does mean that while pandas will be an optional dependency at runtime, it'll still be a mandatory one for the purposes of type checking. Another option you can explore using is mypy's --always-true and --always-false flags. This would give you finer-grained control over which parts of your code are typechecked. For example, you could do something like this: try: import pandas as pd PANDAS_EXISTS = True except ImportError: PANDAS_EXISTS = False if PANDAS_EXISTS: def my_func() -> pd.DataFrame: return pd.DataFrame() ...then do mypy --always-true=PANDAS_EXISTS your_code.py to type check it assuming pandas is imported and mypy --always-false=PANDAS_EXISTS your_code.py to type check assuming it's missing. This could help you catch cases where you accidentally use a function that requires pandas from a function that isn't supposed to need it -- though the caveats are that (a) this is a mypy-only solution and (b) having functions that only sometimes exist in your library might be confusing for the end-user.
21
19
61,400,692
2020-4-24
https://stackoverflow.com/questions/61400692/how-to-bypass-bot-detection-and-scrape-a-website-using-python
The problem I was new to web scraping and I was trying to create a scraper which looks at a playlist link and gets the list of the music and the authors. But the site kept rejecting my connection because it thought that I was a bot, so I used UserAgent to create a fake user-agent string to try and bypass the filter. It sort of worked? But the problem was that when you visited the website in a browser, you could see the contents of the playlist, but when you tried to extract the html code with requests, the contents of the playlist were just a big blank space. Maybe I have to wait for the page to load? Or is there a stronger bot filter? My code import requests from bs4 import BeautifulSoup from fake_useragent import UserAgent ua = UserAgent() melon_site="http://kko.to/IU8zwNmjM" headers = {'User-Agent' : ua.random} result = requests.get(melon_site, headers = headers) print(result.status_code) src = result.content soup = BeautifulSoup(src,'html.parser') print(soup) Link of website playlist link html I get when using requests html with blank space where the playlist was supposed to be
You wanna check out this link to get the content you wish to grab. The following attempt should fetch you the artist names and their song names. import requests from bs4 import BeautifulSoup url = 'https://www.melon.com/mymusic/playlist/mymusicplaylistview_listSong.htm?plylstSeq=473505374' r = requests.get(url,headers={"User-Agent":"Mozilla/5.0"}) soup = BeautifulSoup(r.text,"html.parser") for item in soup.select("tr:has(#artistName)"): artist_name = item.select_one("#artistName > a[href*='goArtistDetail']")['title'] song = item.select_one("a[href*='playSong']")['title'] print(artist_name,song) Output are like: Martin Garrix - 페이지 이동 Used To Love (feat. Dean Lewis) 재생 - 새 창 Post Malone - 페이지 이동 Circles 재생 - 새 창 Marshmello - 페이지 이동 Here With Me 재생 - 새 창 Coldplay - 페이지 이동 Cry Cry Cry 재생 - 새 창 Note: your BeautifulSoup version should be 4.7.0 or later in order for the script to support pseudo selector.
7
5
61,394,826
2020-4-23
https://stackoverflow.com/questions/61394826/how-do-i-get-to-show-gaussian-kernel-for-2d-opencv
I am using this: blur = cv2.GaussianBlur(dst,(5,5),0) And I wanted to show the kernel matrix with this: print(cv2.getGaussianKernel(ksize=(5,5),sigma=0)) But I am getting a type error: TypeError: an integer is required (got type tuple) If I only put 5, I get a 5x1 matrix. Isn't the blur kernel 5x5? Or am I missing something fundamental?
The Gaussian kernel is separable. Therefore, the kernel generated is 1D. The GaussianBlur function applies this 1D kernel along each image dimension in turn. The separability property means that this process yields exactly the same result as applying a 2D convolution (or 3D in case of a 3D image). But the amount of work is strongly reduced. For your 5x5 kernel, the 2D convolution does 25 multiplications and additions, the separable implementation does only 5+5=10. For larger kernels, the gains are increasingly significant. To see the full 2D kernel, apply the GaussianBlur function to an image that is all zeros and has a single pixel in the middle set to 1. This is the discrete equivalent to the Dirac delta function, which we can use to analyze linear time-invariant functions (==convolution filters).
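For example, a quick sketch of both views: the outer product of the 1D kernel with itself, and the impulse-response trick described above.
import cv2
import numpy as np

k = cv2.getGaussianKernel(5, 0)    # the 5x1 separable kernel
kernel_2d = k @ k.T                # outer product reconstructs the full 5x5 kernel
print(kernel_2d)

delta = np.zeros((9, 9))
delta[4, 4] = 1.0                  # discrete Dirac delta
print(cv2.GaussianBlur(delta, (5, 5), 0))  # central 5x5 block equals kernel_2d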
7
12
61,392,431
2020-4-23
https://stackoverflow.com/questions/61392431/how-to-create-a-permutation-in-c-using-stl-for-number-of-places-lower-than-the
I have a c++ vector with std::pair<unsigned long, unsigned long> objects. I am trying to generate permutations of the objects of the vector using std::next_permutation(). However, I want the permutations to be of a given size, you know, similar to the permutations function in python where the size of the expected returned permutation is specified. Basically, the c++ equivalent of import itertools list = [1,2,3,4,5,6,7] for permutation in itertools.permutations(list, 3): print(permutation) Python Demo (1, 2, 3) (1, 2, 4) (1, 2, 5) (1, 2, 6) (1, 2, 7) (1, 3, 2) (1, 3, 4) .. (7, 5, 4) (7, 5, 6) (7, 6, 1) (7, 6, 2) (7, 6, 3) (7, 6, 4) (7, 6, 5)
You might use 2 loops: Take each n-tuple iterate over permutations of that n-tuple template <typename F, typename T> void permutation(F f, std::vector<T> v, std::size_t n) { std::vector<bool> bs(v.size() - n, false); bs.resize(v.size(), true); std::sort(v.begin(), v.end()); do { std::vector<T> sub; for (std::size_t i = 0; i != bs.size(); ++i) { if (bs[i]) { sub.push_back(v[i]); } } do { f(sub); } while (std::next_permutation(sub.begin(), sub.end())); } while (std::next_permutation(bs.begin(), bs.end())); } Demo
19
6
61,383,179
2020-4-23
https://stackoverflow.com/questions/61383179/fastapi-passing-json-in-get-request-via-testclient
I'm trying to test the API I wrote with FastAPI. I have the following method in my router : @app.get('/webrecord/check_if_object_exist') async def check_if_object_exist(payload: WebRecord) -> bool: key = get_key_of_obj(payload.data) if payload.key is None else payload.key return await check_if_key_exist(key) and the following test in my test file : client = TestClient(app) class ServiceTest(unittest.TestCase): ..... def test_check_if_object_is_exist(self): webrecord_json = {'a':1} response = client.get("/webrecord/check_if_object_exist", json=webrecord_json) assert response.status_code == 200 assert response.json(), "webrecord should already be in db, expected : True, got : {}".format(response.json()) When I ran the code in debug I realized that the breakpoints inside the get method aren't reached. When I changed the type of the request to post, everything worked fine. What am I doing wrong?
In order to send data to the server via a GET request, you'll have to encode it in the url, as GET does not have any body. This is not advisable if you need a particular format (e.g. JSON), since you'll have to parse the url, decode the parameters and convert them into JSON. Alternatively, you may POST a search request to your server. A POST request allows a body which may be of different formats (including JSON). If you still want a GET request: @app.get('/webrecord/check_if_object_exist/{key}') async def check_if_object_exist(key: str, data: str) -> bool: key = get_key_of_obj(data) if key is None else key return await check_if_key_exist(key) client = TestClient(app) class ServiceTest(unittest.TestCase): ..... def test_check_if_object_is_exist(self): response = client.get("/webrecord/check_if_object_exist/key", params={"data": "my_data"}) assert response.status_code == 200 assert response.json(), "webrecord should already be in db, expected : True, got : {}".format(response.json()) This will allow GET requests at the url mydomain.com/webrecord/check_if_object_exist/{the key of the object}. One final note: I made all the parameters mandatory. You may change that by declaring them to be None by default. See the FastAPI docs
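For completeness, a sketch of the POST variant, which keeps the JSON body from the question (the exact body shape depends on the fields defined on WebRecord, so the keys below are an assumption):
@app.post('/webrecord/check_if_object_exist')
async def check_if_object_exist(payload: WebRecord) -> bool:
    key = get_key_of_obj(payload.data) if payload.key is None else payload.key
    return await check_if_key_exist(key)

# in the test
response = client.post("/webrecord/check_if_object_exist", json={'key': None, 'data': {'a': 1}})
assert response.status_code == 200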
10
6
61,387,845
2020-4-23
https://stackoverflow.com/questions/61387845/python-vs-julia-speed-comparison
I tried to compare these two snippets and see how many iterations could be done in one second. It turns out that Julia achieves 2.5 million iterations whereas Python achieves 4 million. Isn't Julia supposed to be quicker? Or maybe these two snippets are not equivalent? Python: t1 = time.time() i = 0 while True: i += 1 if time.time() - t1 >= 1: break Julia: function f() i = 0 t1 = now() while true i += 1 if now() - t1 >= Base.Dates.Millisecond(1000) break end end return i end
This is kind of an odd performance comparison since typically one measures the time it takes to compute something of substance, rather than seeing how many trivial iterations one can do in a certain amount of time. I had trouble getting your Python and Julia codes to work, so I modified the Julia code to work and just didn't run the Python code. As noted by @chepner in a comment, using now() and doing time comparisons with DateTime objects is fairly expensive. The Python time.time() function just returns a floating-point value. As it turns out, there's a Julia function called time() that does the exact same thing: julia> time() 1.587648091474481e9 Here's the timing of your original f() function (modified to work) on my system: julia> using Dates julia> function f() i = 0 t1 = now() while true i += 1 if now() - t1 >= Millisecond(1000) break end end return i end f (generic function with 1 method) julia> f() 4943739 It did almost 5 million iterations before time was up. As I said, I wasn't able to get your Python code to run on my system without significant fiddling (which I didn't bother doing). But here's a version of f() that uses time() instead, which I will imaginatively call g(): julia> function g() i = 0 t1 = time() while true i += 1 if time() - t1 >= 1 break end end return i end g (generic function with 1 method) julia> g() 36087637 This version did 36 million iterations. So I guess Julia is faster at looping? Yay! Well, actually the main work in this loop is the calls to time() so... Julia is faster at generating lots of time() calls! Why is it odd to time this? As I said, most of actual work here is calling time(). The rest of the loop doesn't really do anything. In an optimizing compiled language, if the compiler sees a loop that doesn't do anything, it will eliminate it entirely. For example: julia> function h() t = 0 for i = 1:100_000_000 t += i end return t end h (generic function with 1 method) julia> h() 5000000050000000 julia> @time h() 0.000000 seconds 5000000050000000 Woah, zero seconds! How is that possible? Well, let's look at the LLVM code (kind of like machine code but for an imaginary machine that is used as an intermediate representation) this lowers to: julia> @code_llvm h() ; @ REPL[16]:1 within `h' define i64 @julia_h_293() { top: ; @ REPL[16]:6 within `h' ret i64 5000000050000000 } The compiler sees the loop, figures out that the result is the same every time, and just returns that constant value instead of actually executing the loop. Which, of course, takes zero time.
13
13
61,367,382
2020-4-22
https://stackoverflow.com/questions/61367382/plot-custom-data-with-tensorboard
I have a personal implementation of an RL algorithm that generates performance metrics every x time steps. That metric is simply a scalar, so I have an array of scalars that I want to display as a simple graph such as: I want to display it in real time in tensorboard like my above example. Thanks in advance
If you really want to use tensorboard you can start looking at tensorflow site and this datacamp tutorial on tensorboard. With tensorflow you can use summary.scalar to plot your custom data (as the example), no need for particular format, as the summary is taking care of that, the only condition is that data has to be a real numeric scalar value, convertible to a float32 Tensor. import tensorflow as tf import numpy as np import os import time now = time.localtime() subdir = time.strftime("%d-%b-%Y_%H.%M.%S", now) summary_dir1 = os.path.join("stackoverflow", subdir, "t1") summary_writer1 = tf.summary.create_file_writer(summary_dir1) for cont in range(200): with summary_writer1.as_default(): tf.summary.scalar(name="unify/sin_x", data=np.math.sin(cont) ,step=cont) tf.summary.scalar(name="unify/sin_x_2", data=np.math.sin(cont/2), step=cont) summary_writer1.flush() That said, if you are not planning on using tensorflow with your implementation, I would suggest you just use matplotlib as this library also enables you to plot data in real time https://youtu.be/Ercd-Ip5PfQ?t=444.
13
8
61,372,172
2020-4-22
https://stackoverflow.com/questions/61372172/lark-grammar-how-does-the-escaped-string-regex-work
The lark parser predefines some common terminals, including a string. It is defined as follows: _STRING_INNER: /.*?/ _STRING_ESC_INNER: _STRING_INNER /(?<!\\)(\\\\)*?/ ESCAPED_STRING : "\"" _STRING_ESC_INNER "\"" I do understand _STRING_INNER. I also understand how ESCAPED_STRING is composed. But what I don't really understand is _STRING_ESC_INNER. If I read the regex correctly, all it says is that whenever I find two consecutive literal backslashes, they must not be preceded by another literal backslash? How can I combine those two into a single regex? And wouldn't it be required for the grammar to only allow escaped double quotes in the string data?
Preliminaries: .*? Non-greedy match, meaning the shortest possible number of repetitions of . (any symbol). This only makes sense when followed by something else. So .*?X on input AAXAAX would match only the AAX part, instead of expanding all the way to the last X. (?<!...) is a "negative look-behind assertion" (link): "Matches if the current position in the string is not preceded by a match for ....". So .*(?<!X)Y would match AY but not XY. Applying this to your example: ESCAPED_STRING: The rule says: "Match ", then _STRING_ESC_INNER, and then " again". _STRING_INNER: Matches the shortest possible number of repetitions of any symbol. As said before, this only makes sense when considering the regular expression that comes after it. _STRING_ESC_INNER: We want this to match the shortest possible string that does not contain a closing quote. That is, for an input "abc"xyz", we want to match "abc", instead of also consuming the xyz" part. However, we have to make sure that the " is really a closing quote, in that it should not be itself escaped. So for input "abc\"xyz", we do not want to match only "abc\", because the \" is escaped. We observe that the closing " has to be directly preceded by an even number of \ (with zero being an even number). So " is ok, \\" is ok, \\\\" is ok etc. But as soon as " is preceded by an odd number of \, that means the " is not really a closing quote. (\\\\) matches \\. The (?<!\\) says "the position before should not have \". So combined (?<!\\)(\\\\) means "match \\, but only if it is not preceded by \". The following *? then does the smallest possible repetitions of this, which again only makes sense when considering the regular expression that comes after this, which is the " from the ESCAPED_STRING rule (possible point of confusion: the \" in the ESCAPED_STRING refers to a literal " in the actual input we want to match, in the same way that \\\\ refers to \\ in the input). So (?<!\\)(\\\\)*?\" means "match the shortest amount of \\ that is followed by " and not preceded by \. So in other words, (?<!\\)(\\\\)*?\" matches only " that are preceded by an even number of \ (including blocks of size 0). Now combining it with the preceding _STRING_INNER, the _STRING_ESC_INNER rule then says: Match the first " preceded by an even number of \, so in other words, the first " where the \ is not itself escaped.
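The behaviour can be checked directly with Python's re module; a small sketch using the combined pattern from ESCAPED_STRING:
import re

escaped_string = re.compile(r'".*?(?<!\\)(\\\\)*?"')

print(escaped_string.match(r'"abc"xyz"').group(0))    # "abc"       -> stops at the first closing quote
print(escaped_string.match(r'"abc\"xyz"').group(0))   # "abc\"xyz"  -> the \" is escaped, so keep going
print(escaped_string.match(r'"abc\\"xyz"').group(0))  # "abc\\"     -> \\ before " means a real closing quote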
7
9
61,363,712
2020-4-22
https://stackoverflow.com/questions/61363712/how-to-print-a-pandas-io-formats-style-styler-object
I have the following code which produces a pandas.io.formats.style.Styler object: import pandas as pd import numpy as np df = pd.DataFrame({'text': ['foo foo', 'bar bar'], 'number': [1, 2]}) df1 = df.style.set_table_styles([dict(selector='th', props=[('text-align', 'center')])]) df2 = df1.set_properties(**{'text-align': 'center'}).hide_index() df2 # df2 is a pandas.io.formats.style.Styler object How do I print df2 if I have more code running underneath the above script, for eg.: import pandas as pd import numpy as np df = pd.DataFrame({'text': ['foo foo', 'bar bar'], 'number': [1, 2]}) df1 = df.style.set_table_styles([dict(selector='th', props=[('text-align', 'center')])]) df2 = df1.set_properties(**{'text-align': 'center'}).hide_index() df2 np.round(0.536, 2) I tried using the print statement but it's giving me an output as below: import pandas as pd import numpy as np df = pd.DataFrame({'text': ['foo foo', 'bar bar'], 'number': [1, 2]}) df1 = df.style.set_table_styles([dict(selector='th', props=[('text-align', 'center')])]) df2 = df1.set_properties(**{'text-align': 'center'}).hide_index() print(df2) np.round(0.536, 2) <pandas.io.formats.style.Styler object at 0x000000000B4FAFC8> 0.54 Any help would really be appreciated. Many thanks in advance.
I found the answer for this: import pandas as pd from IPython.display import display import numpy as np df = pd.DataFrame({'text': ['foo foo', 'bar bar'], 'number': [1, 2]}) df1 = df.style.set_table_styles([dict(selector='th', props=[('text-align', 'center')])]) df2 = df1.set_properties(**{'text-align': 'center'}).hide_index() display(df2) np.round(0.536, 2)
12
14
61,363,534
2020-4-22
https://stackoverflow.com/questions/61363534/whats-the-recommended-way-of-renaming-a-project-in-pypi
I want people that know of the old name to be directed to the new name. For the pypi website, it's easy to upload a package with a README linking to the new package. I'm not sure what's the best way to handle people using pip to install it. I assume it might be possible to show an error on pip install old_name, looking around it seems to be possible using cmdclass in setup.py and maybe throwing an exception in the right place but the documentation around it is scarce to put it mildly. So I was wondering if anyone is aware of proper built-in systems for this, or common practices to handle this sort of thing.
Declare the new package a dependency of the old. See for example how scikit-learn does it: the old package sklearn declares in its setup.py: install_requires=['scikit-learn'], Thus everyone who does pip install sklearn automatically gets scikit-learn.
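A sketch of what the old package's setup.py could look like after the rename (old_name and new_name are placeholders):
from setuptools import setup

setup(
    name='old_name',
    version='1.0.0',
    description='Deprecated: use new_name instead',
    long_description='old_name has been renamed to new_name; this stub only pulls in the new package.',
    install_requires=['new_name'],
)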
12
8
61,238,502
2020-4-15
https://stackoverflow.com/questions/61238502/how-to-require-predefined-string-values-in-python-pydantic-basemodels
Is there any in-built way in pydantic to specify options? For example, let's say I want a string value that must either have the value "foo" or "bar". I know I can use regex validation to do this, but since I use pydantic with FastAPI, the users will only see the required input as a string, but when they enter something, it will give a validation error. All in-built validations of pydantic are displayed in the api interface, so would be great if there was something like class Input(BaseModel): option: "foo" || "bar"
Yes, you can either use an enum: class Choices(Enum): foo = 'foo' bar = 'bar' class Input(BaseModel): option: Choices see here Or you can use Literal: from typing import Literal class Input(BaseModel): option: Literal['foo', 'bar'] see here
61
127
61,351,844
2020-4-21
https://stackoverflow.com/questions/61351844/difference-between-multiprocessing-asyncio-threading-and-concurrency-futures-i
Being new to using concurrency, I am confused about when to use the different python concurrency libraries. To my understanding, multiprocessing, multithreading and asynchronous programming are part of concurrency, while multiprocessing is part of a subset of concurrency called parallelism. I searched around on the web about different ways to approach concurrency in python, and I came across the multiprocessing library, concurrent.futures' ProcessPoolExecutor() and ThreadPoolExecutor(), and asyncio. What confuses me is the difference between these libraries. Especially the multiprocessing library: since it has methods like pool.apply_async, does it also do the job of asyncio? If so, why is it called multiprocessing when it is a different method to achieve concurrency from asyncio (multiple processes vs cooperative multitasking)?
Let's go through the major concurrency-related modules provided by the standard library: threading: interface to OS-level threads. Note that CPU-bound work is mostly serialized by the GIL, so don't expect threading to speed up calculations. Use it when you need to invoke blocking APIs in parallel, and when you require precise control over thread creation. Avoid creating too many threads (e.g. thousands), as they are not free. If possible, don't create threads yourself, use concurrent.futures instead. multiprocessing: interface to spawning python processes with an API intentionally mimicking that of threading. Processes work in parallel unaffected by the GIL, so you can use multiprocessing to utilize multiple cores and speed up calculations. The disadvantage is that you can't share in-memory data structures without using multiprocessing-specific tools. concurrent.futures: A modern interface to threading and multiprocessing, which provides convenient thread/process pools it calls executors. The pool's main entry point is the submit method which returns a handle that you can test for completion or wait for its result. Getting the result gives you the return value of the submitted function and correctly propagates raised exceptions (if any), which would be tedious to do with threading. concurrent.futures should be the tool of choice when considering thread or process based parallelism. asyncio: While the previous options are "async" in the sense that they provide non-blocking APIs (this is what methods like apply_async refer to), they are still relying on thread/process pools to do their magic, and cannot really do more things in parallel than they have workers in the pool. Asyncio is different: it uses a single thread of execution and async system calls across the board. It has no blocking calls at all, the only blocking part being the asyncio.run() entry point. Asyncio code is typically written using coroutines, which use await to suspend until something interesting happens. (Suspending is different than blocking in that it allows the event loop thread to continue to other things while you're waiting.) It has many advantages compared to thread-based solutions, such as being able to spawn thousands of cheap "tasks" without bogging down the system, and being able to cancel tasks or easily wait for multiple things at once. Asyncio should be the tool of choice for servers and for clients connecting to multiple servers. When choosing between asyncio and multithreading/multiprocessing, consider the adage that "threading is for working in parallel, and async is for waiting in parallel". Also note that asyncio can await functions executed in thread or process pools provided by concurrent.futures, so it can serve as glue between all those different models. This is part of the reason why asyncio is often used to build new library infrastructure.
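A small sketch of that glue role, with asyncio awaiting CPU-bound work that runs in a concurrent.futures process pool:
import asyncio
import concurrent.futures

def cpu_bound(n):
    # runs in a separate process, so the GIL of the main process is not a bottleneck
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        # the event loop stays free to service other tasks while this computes
        result = await loop.run_in_executor(pool, cpu_bound, 10_000_000)
    print(result)

if __name__ == "__main__":
    asyncio.run(main())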
61
135
61,321,503
2020-4-20
https://stackoverflow.com/questions/61321503/is-there-a-pathlib-alternate-for-os-path-join
I am currently accessing the parent directory of my file using Pathlib as follows: Path(__file__).parent When I print it, it gives me the following output: print('Parent: ', Path(__file__).parent) #output /home/user/EC/main-folder The main-folder has a .env file which I want to access, and for that I want to join the parent path with .env. Right now, I did: dotenv_path = os.path.join(Path(__file__).parent, ".env") which works. But I would like to know if there is a pathlib alternative to os.path.join(). Something like: dotenv_path = pathlib_alternate_for_join(Path(__file__).parent, ".env")
Use pathlib.Path.joinpath: (Path(__file__).parent).joinpath('.env')
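pathlib also overloads the / operator for the same purpose, which many consider the most idiomatic spelling:
from pathlib import Path

dotenv_path = Path(__file__).parent / ".env"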
150
84
61,274,967
2020-4-17
https://stackoverflow.com/questions/61274967/why-cant-i-exclude-tests-directory-from-my-python-wheel-using-exclude
Consider the following package structure: With the following setup.py contents: from setuptools import setup, find_packages setup( name='dfl_client', packages=find_packages(exclude=['*tests*']), include_package_data=True, package_data={"": ['py.typed', '*.pyi']}, ) When I package it using python setup.py sdist bdist_wheel, the resulting wheel: contains the py.typed file, which is good contains the tests folder, while it should be excluded according to the find_packages doc. I spent hours trying to understand why with no success. Especially because it seems to work for other projects !
(I spent so much time trying to understand this stupid issue that I am answering my own question, hoping it can save time for others facing the same problem.) I finally found the culprit: it is a hidden interaction between setuptools_scm and the include_package_data=True flag. By itself, include_package_data=True does not make the tests directory be included in the wheel. However, if setuptools_scm is installed and the folder is under version control (and the tests directory is in the list of git-managed files), then the exclude directive does not seem to be taken into account anymore. So the solution was simply to remove include_package_data=True, which is actually not needed when package_data is present: from setuptools import setup, find_packages setup( name='dfl_client', packages=find_packages(exclude=['*tests*']), package_data={"": ['py.typed', '*.pyi']}, ) See the setuptools doc on including files (which is actually very straightforward about include_package_data) and this related issue and workaround (the workaround seems to work for the wheel too, not only the sdist).
14
21
61,292,464
2020-4-18
https://stackoverflow.com/questions/61292464/get-confidence-interval-from-sklearn-linear-regression-in-python
I want to get a confidence interval of the result of a linear regression. I'm working with the boston house price dataset. I've found this question: How to calculate the 99% confidence interval for the slope in a linear regression model in python? However, this doesn't quite answer my question. Here is my code: import numpy as np import matplotlib.pyplot as plt from math import pi import pandas as pd import seaborn as sns from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score # import the data boston_dataset = load_boston() boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names) boston['MEDV'] = boston_dataset.target X = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns=['LSTAT', 'RM']) Y = boston['MEDV'] # splits the training and test data set in 80% : 20% # assign random_state to any value.This ensures consistency. X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=5) lin_model = LinearRegression() lin_model.fit(X_train, Y_train) # model evaluation for training set y_train_predict = lin_model.predict(X_train) rmse = (np.sqrt(mean_squared_error(Y_train, y_train_predict))) r2 = r2_score(Y_train, y_train_predict) # model evaluation for testing set y_test_predict = lin_model.predict(X_test) # root mean square error of the model rmse = (np.sqrt(mean_squared_error(Y_test, y_test_predict))) # r-squared score of the model r2 = r2_score(Y_test, y_test_predict) plt.scatter(Y_test, y_test_predict) plt.show() How can I get, for instance, the 95% or 99% confidence interval from this? Is there some sort of in-built function or piece of code?
If you're looking to compute the confidence interval of the regression parameters, one way is to manually compute it using the results of LinearRegression from scikit-learn and numpy methods. The code below computes the 95%-confidence interval (alpha=0.05). alpha=0.01 would compute 99%-confidence interval etc. import numpy as np import pandas as pd from scipy import stats from sklearn.linear_model import LinearRegression alpha = 0.05 # for 95% confidence interval; use 0.01 for 99%-CI. # fit a sklearn LinearRegression model lin_model = LinearRegression().fit(X_train, Y_train) # the coefficients of the regression model coefs = np.r_[[lin_model.intercept_], lin_model.coef_] # build an auxiliary dataframe with the constant term in it X_aux = X_train.copy() X_aux.insert(0, 'const', 1) # degrees of freedom dof = -np.diff(X_aux.shape)[0] # Student's t-distribution table lookup t_val = stats.t.isf(alpha/2, dof) # MSE of the residuals mse = np.sum((Y_train - lin_model.predict(X_train)) ** 2) / dof # inverse of the variance of the parameters var_params = np.diag(np.linalg.inv(X_aux.T.dot(X_aux))) # distance between lower and upper bound of CI gap = t_val * np.sqrt(mse * var_params) conf_int = pd.DataFrame({'lower': coefs - gap, 'upper': coefs + gap}, index=X_aux.columns) Using the Boston housing dataset, the above code produces the dataframe below: If this is too much manual code, you can always resort to the statsmodels and use its conf_int method: import statsmodels.api as sm alpha = 0.05 # 95% confidence interval lr = sm.OLS(Y_train, sm.add_constant(X_train)).fit() conf_interval = lr.conf_int(alpha) Since it uses the same formula, it produces the same output as above. Convenient wrapper function: import numpy as np import pandas as pd from scipy import stats from sklearn.linear_model import LinearRegression def get_conf_int(alpha, lr, X=X_train, y=Y_train): """ Returns (1-alpha) 2-sided confidence intervals for sklearn.LinearRegression coefficients as a pandas DataFrame """ coefs = np.r_[[lr.intercept_], lr.coef_] X_aux = X.copy() X_aux.insert(0, 'const', 1) dof = -np.diff(X_aux.shape)[0] mse = np.sum((y - lr.predict(X)) ** 2) / dof var_params = np.diag(np.linalg.inv(X_aux.T.dot(X_aux))) t_val = stats.t.isf(alpha/2, dof) gap = t_val * np.sqrt(mse * var_params) return pd.DataFrame({ 'lower': coefs - gap, 'upper': coefs + gap }, index=X_aux.columns) # for 95% confidence interval; use 0.01 for 99%-CI. alpha = 0.05 # fit a sklearn LinearRegression model lin_model = LinearRegression().fit(X_train, Y_train) get_conf_int(alpha, lin_model, X_train, Y_train) Stats reference
18
14
61,226,910
2020-4-15
https://stackoverflow.com/questions/61226910/how-to-programmatically-check-if-kafka-broker-is-up-and-running-in-python
I'm trying to consume messages from a Kafka topic. I'm using a wrapper around confluent_kafka consumer. I need to check if connection is established before I start consuming messages. I read that the consumer is lazy, so I need to perform some action for the connection to get established. But I want to check the connection establishment without doing a consume or poll operation. Also, I tried giving some bad configurations to see what the response on a poll would be. The response I got was: b'Broker: No more messages' So, how do I decide if the connection parameters are faulty, the connection is broken, or there actually are no messages in the topic?
I am afraid there is no direct approach for testing whether Kafka brokers are up and running. Also note that if your consumer has already consumed the messages, that doesn't mean it is bad behaviour, and it obviously does not indicate that the Kafka broker is down. A possible workaround would be to perform some sort of quick operation and see if the broker responds. An example would be listing the topics: Using confluent-kafka-python and AdminClient # Example using confluent_kafka from confluent_kafka.admin import AdminClient kafka_broker = {'bootstrap.servers': 'localhost:9092'} admin_client = AdminClient(kafka_broker) topics = admin_client.list_topics().topics if not topics: raise RuntimeError() Using kafka-python and KafkaConsumer # example using kafka-python import kafka consumer = kafka.KafkaConsumer(group_id='test', bootstrap_servers=['localhost:9092']) topics = consumer.topics() if not topics: raise RuntimeError() Use kafka-python at your own risk: the library has not been updated in years and might not be compatible with Amazon MSK or Confluent containers.
11
21
61,222,356
2020-4-15
https://stackoverflow.com/questions/61222356/no-menu-for-adding-wsl-python-interpreter-in-pycharm
I was following this guide from the official JetBrains page, until step 2 came into the picture. The picture mentioned on that page shows many options like SSH, WSL, Vagrant, Docker, etc. In my PyCharm (latest 2019.3.4) it only shows 4 options - venv, conda, pipenv and system interpreter. There is no WSL menu in the Add Python Interpreter dialog. See the below image: I searched the web for an hour and found no results that show how to fix it. I searched the PyCharm plugins in case there's an external plugin to do so, but there was no plugin named WSL. I don't know how to set up the WSL interpreter; I have python3.8 installed on my WSL right now. Any help will be appreciated!
I have solved this by uninstalling PyCharm together with its history and cache, removing the folders completely from C:\Users\%USERNAME%\AppData\Local and C:\Users\%USERNAME%\AppData\Roaming\JetBrains, and doing a clean re-install of PyCharm. After that, the WSL interpreter option shows up as normal. IdeaVim is creating a conflict, I guess.
14
14
61,342,459
2020-4-21
https://stackoverflow.com/questions/61342459/how-can-i-add-text-labels-to-a-plotly-scatter-plot-in-python
I'm trying to add text labels next to the data points in a Plotly scatter plot in Python but I get an error. How can I do that? Here is my dataframe: world_rank university_name country teaching international research citations income total_score num_students student_staff_ratio international_students female_male_ratio year 0 1 Harvard University United States of America 99.7 72.4 98.7 98.8 34.5 96.1 20,152 8.9 25% NaN 2011 Here is my code snippet: citation = go.Scatter( x = "World Rank" + timesData_df_top_50["world_rank"], <--- error y = "Citation" + timesData_df_top_50["citations"], <--- error mode = "lines+markers", name = "citations", marker = dict(color = 'rgba(48, 217, 189, 1)'), text= timesData_df_top_50["university_name"]) The error is shown below. TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32')
You can include the text labels in the text attribute. To make sure that they are displayed on the scatter plot, set mode='lines+markers+text'. See the Plotly documentation on text and annotations. I included an example below based on your code. import plotly.graph_objects as go import pandas as pd df = pd.DataFrame({'world_rank': [1, 2, 3, 4, 5], 'university_name': ['Harvard', 'MIT', 'Stanford', 'Cambridge', 'Oxford'], 'citations': [98.8, 98.7, 97.6, 97.5, 96]}) layout = dict(plot_bgcolor='white', margin=dict(t=20, l=20, r=20, b=20), xaxis=dict(title='World Rank', range=[0.9, 5.5], linecolor='#d9d9d9', showgrid=False, mirror=True), yaxis=dict(title='Citations', range=[95.5, 99.5], linecolor='#d9d9d9', showgrid=False, mirror=True)) data = go.Scatter(x=df['world_rank'], y=df['citations'], text=df['university_name'], textposition='top right', textfont=dict(color='#E58606'), mode='lines+markers+text', marker=dict(color='#5D69B1', size=8), line=dict(color='#52BCA3', width=1, dash='dash'), name='citations') fig = go.Figure(data=data, layout=layout) fig.show()
14
36
61,345,981
2020-4-21
https://stackoverflow.com/questions/61345981/error-running-as-root-without-no-sandbox-is-not-supported
I try to implement scrapy-puppeteer library for my project (https://pypi.org/project/scrapy-puppeteer/) I implement PuppeteerMiddleware according to documentation from library Here is code which I run: import asyncio from twisted.internet import asyncioreactor asyncioreactor.install(asyncio.get_event_loop()) import scrapy from scrapy.crawler import CrawlerRunner from twisted.internet import defer from twisted.trial.unittest import TestCase import scrapy_puppeteer class ScrapyPuppeteerTestCase(TestCase): """Test case for the ``scrapy-puppeteer`` package""" class PuppeteerSpider(scrapy.Spider): name = 'puppeteer_crawl_spider' allowed_domains = ['codesandbox.io'] custom_settings = { 'DOWNLOADER_MIDDLEWARES': { 'scrapy_puppeteer.PuppeteerMiddleware': 800 } } items = [] def start_requests(self): yield scrapy_puppeteer.PuppeteerRequest( 'https://codesandbox.io/search?page=1', wait_until='networkidle2', ) def parse(self, response): for selector_item in response.selector.xpath( '//li[@class="ais-Hits-item"]'): self.items.append(selector_item.xpath('.//h2').extract_first()) def setUp(self): """Store the Scrapy runner to use in the tests""" self.runner = CrawlerRunner() @defer.inlineCallbacks def test_items_number(self): crawler = self.runner.create_crawler(self.PuppeteerSpider) yield crawler.crawl() self.assertEqual(len(crawler.spider.items), 12) When i run it i have next error: 2020-04-21 14:02:13 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: test) 2020-04-21 14:02:13 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.18.0, Twisted 20.3.0, Python 3.6.9 (default, Nov 23 2019, 06:41:34) - [GCC 8.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f 31 Mar 2020), cryptography 2.9, Platform Linux-4.15.0-96-generic-x86_64-with-debian-10.2 2020-04-21 14:02:13 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'AUTOTHROTTLE_MAX_DELAY': 90, 'AUTOTHROTTLE_START_DELAY': 1, 'BOT_NAME': 'test', 'CONCURRENT_REQUESTS': 2, 'CONCURRENT_REQUESTS_PER_DOMAIN': 1, 'DOWNLOAD_DELAY': 0.25, 'DOWNLOAD_MAXSIZE': 36700160, 'DOWNLOAD_TIMEOUT': 90, 'DUPEFILTER_CLASS': 'test.dupefilter.RedisDupeFilter', 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'test.spiders', 'RETRY_HTTP_CODES': [500, 502, 503, 504, 408, 403, 429], 'RETRY_TIMES': 4, 'ROBOTSTXT_OBEY': True, 'SCHEDULER': 'test.scheduler.Scheduler', 'SPIDER_MODULES': ['test.spiders']} 2020-04-21 14:02:13 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.throttle.AutoThrottle'] [W:pyppeteer.chromium_downloader] start chromium download. Download may take a few minutes. 100%|██████████| 106826418/106826418 [00:21<00:00, 4914607.73it/s] [W:pyppeteer.chromium_downloader] chromium download done. [W:pyppeteer.chromium_downloader] chromium extracted to: /root/.local/share/pyppeteer/local-chromium/575458 [I:pyppeteer.launcher] terminate chrome process... 
Unhandled error in Deferred: 2020-04-21 14:02:39 [twisted] CRITICAL: Unhandled error in Deferred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 171, in crawl return self._crawl(crawler, *args, **kwargs) File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 175, in _crawl d = crawler.crawl(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator return _cancellableInlineCallbacks(gen) File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks _inlineCallbacks(None, g, status) --- <exception caught here> --- File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks result = g.send(result) File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 80, in crawl self.engine = self._create_engine() File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 105, in _create_engine return ExecutionEngine(self, lambda _: self.stop()) File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 69, in __init__ self.downloader = downloader_cls(crawler) File "/usr/local/lib/python3.6/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__ self.middleware = DownloaderMiddlewareManager.from_crawler(crawler) File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 58, in from_crawler return cls.from_settings(crawler.settings, crawler) File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 36, in from_settings mw = mwcls.from_crawler(crawler) File "/usr/local/lib/python3.6/site-packages/scrapy_puppeteer/middlewares.py", line 38, in from_crawler asyncio.ensure_future(cls._from_crawler(crawler)) File "/usr/local/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete return future.result() File "/usr/local/lib/python3.6/site-packages/scrapy_puppeteer/middlewares.py", line 27, in _from_crawler middleware.browser = await launch({'logLevel': crawler.settings.get('LOG_LEVEL')}) File "/usr/local/lib/python3.6/site-packages/pyppeteer/launcher.py", line 311, in launch return await Launcher(options, **kwargs).launch() File "/usr/local/lib/python3.6/site-packages/pyppeteer/launcher.py", line 189, in launch self.browserWSEndpoint = self._get_ws_endpoint() File "/usr/local/lib/python3.6/site-packages/pyppeteer/launcher.py", line 233, in _get_ws_endpoint self.proc.stdout.read().decode() pyppeteer.errors.BrowserError: Browser closed unexpectedly: [0421/140239.027694:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180. 
2020-04-21 14:02:39 [twisted] CRITICAL: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks result = g.send(result) File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 80, in crawl self.engine = self._create_engine() File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 105, in _create_engine return ExecutionEngine(self, lambda _: self.stop()) File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 69, in __init__ self.downloader = downloader_cls(crawler) File "/usr/local/lib/python3.6/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__ self.middleware = DownloaderMiddlewareManager.from_crawler(crawler) File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 58, in from_crawler return cls.from_settings(crawler.settings, crawler) File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 36, in from_settings mw = mwcls.from_crawler(crawler) File "/usr/local/lib/python3.6/site-packages/scrapy_puppeteer/middlewares.py", line 38, in from_crawler asyncio.ensure_future(cls._from_crawler(crawler)) File "/usr/local/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete return future.result() File "/usr/local/lib/python3.6/site-packages/scrapy_puppeteer/middlewares.py", line 27, in _from_crawler middleware.browser = await launch({'logLevel': crawler.settings.get('LOG_LEVEL')}) File "/usr/local/lib/python3.6/site-packages/pyppeteer/launcher.py", line 311, in launch return await Launcher(options, **kwargs).launch() File "/usr/local/lib/python3.6/site-packages/pyppeteer/launcher.py", line 189, in launch self.browserWSEndpoint = self._get_ws_endpoint() File "/usr/local/lib/python3.6/site-packages/pyppeteer/launcher.py", line 233, in _get_ws_endpoint self.proc.stdout.read().decode() pyppeteer.errors.BrowserError: Browser closed unexpectedly: [0421/140239.027694:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180. I run my app from docker like separate service This is my dockerfile: FROM python:3.6 ADD . /code/test WORKDIR /code/test RUN pip3 install -r requirements.txt RUN apt-get update && apt-get install -y nmap RUN apt-get install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget -y And this is runner: https://i.sstatic.net/71Sbf.jpg How i can fix it?
How i can fix it? I would presume: by not running as root. It appears your dockerfile only needs root privileges for the apt-get process, since pip3 will cheerfully install either into a virtualenv (highly recommended) or into your dockerfile's user's home directory via --user: FROM python:3.6 RUN set -e ;\ export DEBIAN_FRONTEND=noninteractive ;\ apt-get update ;\ apt-get install -y nmap ;\ apt-get install -y gconf-service libasound2 libatk1.0-0 libc6 libcairo2 \ libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 \ libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 \ libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 \ libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 \ libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates \ fonts-liberation libnss3 lsb-release xdg-utils wget # be aware: I removed libappindicator1 from your list because python:3.6 # (at least as of 1498723101b5 from 3 days ago) reported "E: Unable to locate package libappindicator1" RUN useradd -m -d /code/test -s /bin/bash myuser USER myuser WORKDIR /code/test # this is the directory in which pip3 will install binaries ENV PATH=/code/test/.local/bin:$PATH ADD . /code/test RUN pip3 install --user -r requirements.txt By re-ordering those changes, your container will also build faster, since it won't reinstall every apt package in the world when your code changes. I couldn't optimize it further without knowing what, specifically, is in your requirements.txt, but it is often a best practice to put that pip3 install operation above the ADD . to further help the caching story
9
3
61,341,119
2020-4-21
https://stackoverflow.com/questions/61341119/write-a-text-inside-a-subplot
I'm working on this plot: I need to write something inside the first plot, between the red and the black lines, I tried with ax1.text() but it shows the text between the two plots and not inside the first one. How can I do that? The plot was set out as such: fig, (ax1,ax2) = plt.subplots(nrows=2, ncols=1, figsize = (12,7), tight_layout = True)
Without more code details, it's quite hard to guess what is wrong. The matplotlib.axes.Axes.text works well to show text box on subplots. I encourage you to have a look at the documentation (arguments...) and try by yourself. The text location is based on the 2 followings arguments: transform=ax.transAxes: indicates that the coordinates are given relative to the axes bounding box, with (0, 0) being the lower left of the axes and (1, 1) the upper right. text(x, y,...): where x, y are the position to place the text. The coordinate system can be changed using the below parameter transform. Here is an example: # import modules import matplotlib.pyplot as plt import numpy as np # Create random data x = np.arange(0,20) y1 = np.random.randint(0,10, 20) y2 = np.random.randint(0,10, 20) + 15 # Create figure fig, (ax1,ax2) = plt.subplots(nrows=2, ncols=1, figsize = (12,7), tight_layout = True) # Add subplots ax1.plot(x, y1) ax1.plot(x, y2) ax2.plot(x, y1) ax2.plot(x, y2) # Show texts ax1.text(0.1, 0.5, 'Begin text', horizontalalignment='center', verticalalignment='center', transform=ax1.transAxes) ax2.text(0.9, 0.5, 'End text', horizontalalignment='center', verticalalignment='center', transform=ax2.transAxes) plt.show() output
14
29
61,353,532
2020-4-21
https://stackoverflow.com/questions/61353532/plotly-how-to-get-the-trace-color-attribute-in-order-to-plot-selected-marker-wi
I am trying to plot a selected marker for each of my traces in plotly. I would like to assign the same color to marker and line. Is there a way how to get the color attribute of my traces? fig = go.Figure() fig.add_trace(go.Scatter( x=[0, 1, 2, 3, 4, 5], y=[0, 3, 5, 7, 9, 11], name='trace01', mode='lines+markers', marker=dict(size=[0, 0, 30, 0, 0, 0], color=[0, 0, 10, 0, 0, 0]) )) fig.add_trace(go.Scatter( x=[0, 1, 2, 3, 4, 5], y=[3, 5, 7, 9, 11, 13], name='trace02', mode='lines+markers', marker=dict(size=[0, 0, 0, 30, 0, 0], color=[0, 0, 0, 10, 0, 0]) )) fig.show()
Updated answer for newer versions of plotly: For recent plotly versions, a larger number of the attributes of a plotly figure object are readable through fig.data. Now you can retrive the color for a line without defining it or following a color cycle through: fig.data[0].line.color To make things a bit more flexible compared to my original answer, I've put together an example where you can have multiple markers on the same line. Markers are organized in a dict like so: markers = {'trace01':[[2,5], [4,9]], 'trace02':[[3,9]]} And the essence of my approach to getting the plot below is this snippet: for d in fig.data: if d.name in markers.keys(): for m in markers[d.name]: fig.add_traces(go.Scatter(x=[m[0]], y = [m[1]], mode='markers', name=None, showlegend=False, marker=dict(color=d.line.color,size=15) ) ) Here you can see that I'm not actually using fig.data[0].line.color, but rather color=d.line.color since I've matched the markers with the traces by name through : for d in fig.data: if d.name in markers.keys(): for m in markers[d.name]: Plot: Complete code: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Scatter( x=[0, 1, 2, 3, 4, 5], y=[0, 3, 5, 7, 9, 11], name='trace01', line=dict(color='blue'), mode='lines', )) fig.add_trace(go.Scatter( x=[0, 1, 2, 3, 4, 5], y=[3, 5, 7, 9, 11, 13], name='trace02', line=dict(color='red'), mode='lines' )) markers = {'trace01':[[2,5], [4,9]], 'trace02':[[3,9]]} for d in fig.data: if d.name in markers.keys(): for m in markers[d.name]: fig.add_traces(go.Scatter(x=[m[0]], y = [m[1]], mode='markers', name=None, showlegend=False, marker=dict(color=d.line.color,size=15) ) ) fig.show() Original answer for older versions You can retrieve the color of a trace using: fig['data'][0]['line']['color'] But you'll have to specify the color of the trace to be able to do so. Or you can make sure that the color of the markers follow the same sequence as the traces. But we can get to all the details if this is in fact what you're trying to accomplish: If you study the code snippet below, you'll see that I, unlike you, havent defined the markers in the same go as the lines. Rather, I've added the traces as pure lines with mode='lines' and then added separate traces for the markers with mode='markers'. When doing the latter, I've retrieved the colors of the corresponding lines using color=data['line']['color'] in a loop: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Scatter( x=[0, 1, 2, 3, 4, 5], y=[0, 3, 5, 7, 9, 11], name='trace01', line=dict(color='blue'), mode='lines', )) fig.add_trace(go.Scatter( x=[0, 1, 2, 3, 4, 5], y=[3, 5, 7, 9, 11, 13], name='trace02', line=dict(color='red'), mode='lines' )) markers = [[2,5], [3,9]] for i, data in enumerate(fig['data']): #print(data['line']['color']) fig.add_trace(go.Scatter(x=[markers[i][0]], y=[markers[i][1]], mode='markers', name=None, showlegend=False, marker=dict(color=data['line']['color'], size=15 ))) fig.show() Edit 1: How to do the same by referencing the default color sequence By default, plotly follows a color sequence that can be found using px.colors.qualitative.Plotly: ['#636EFA', '#EF553B', '#00CC96', '#AB63FA', '#FFA15A', '#19D3F3', '#FF6692', '#B6E880', '#FF97FF', '#FECB52'] The following snippet will produce the exact same figure as before, but without having to define the colors for the traces. 
import plotly.graph_objects as go
import plotly.express as px

fig = go.Figure()
fig.add_trace(go.Scatter(
    x=[0, 1, 2, 3, 4, 5],
    y=[0, 3, 5, 7, 9, 11],
    name='trace01',
    mode='lines',
))
fig.add_trace(go.Scatter(
    x=[0, 1, 2, 3, 4, 5],
    y=[3, 5, 7, 9, 11, 13],
    name='trace02',
    mode='lines'
))

colors = px.colors.qualitative.Plotly
markers = [[2, 5], [3, 9]]

for i, data in enumerate(fig['data']):
    # print(data['line']['color'])
    fig.add_trace(go.Scatter(x=[markers[i][0]], y=[markers[i][1]],
                             mode='markers',
                             name=None,
                             showlegend=False,
                             marker=dict(color=colors[i], size=15)))

fig.show()
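If the figure ends up with more traces than the ten entries in the default sequence, cycling the palette avoids index errors. A minimal sketch under that assumption, reusing the fig built above (the cycling approach is an addition for illustration, not part of the original answer):

import itertools
import plotly.express as px

color_cycle = itertools.cycle(px.colors.qualitative.Plotly)
for trace, color in zip(fig.data, color_cycle):
    # pairs each trace with the color plotly would assign it by position
    print(trace.name, color)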
14
22
61,277,709
2020-4-17
https://stackoverflow.com/questions/61277709/unexpected-tokens-in-doctype-html-in-pycharm-community-edition
I am new to PyCharm but I am gradually growing to love it. I am getting a red underline on <!DOCTYPE html> and the error is "Unexpected Token". Why does PyCharm show this? I can't understand it.
This usually happens when Django support is not enabled in PyCharm's settings. To resolve the problem:

1. In PyCharm, open Settings from the File menu
2. Select and expand Languages & Frameworks
3. Select Django and enable it
4. Select your Django project root
5. Select your project's settings.py file
6. Select your project's manage.py file
7. Apply the settings
8
6
61,235,853
2020-4-15
https://stackoverflow.com/questions/61235853/how-to-invoke-cloud-function-from-cloud-scheduler-with-authentication
I've looked everywhere and it seems people either use Pub/Sub, App Engine HTTP, or HTTP with no auth. Not many people out there are showing their work for invoking functions with authentication via OIDC tokens. I checked out: Cannot invoke Google Cloud Function from GCP Scheduler but nothing seemed to work. Documentation I followed: https://cloud.google.com/scheduler/docs/http-target-auth#using-gcloud_1

- created a new service account
- set roles (Cloud Scheduler service agent / Cloud Functions service agent / Cloud Scheduler admin / Cloud Functions invoker... even tried Owner!)
- deployed a Google Cloud Function that doesn't allow public (unauthenticated) access (a simple helloworld function)
- set up a cron job on Cloud Scheduler to run every minute against the newly deployed function with this configuration: url = helloworld function url, oidc-token = newly created service account, audience = helloworld function url

Outcome in the Cloud Scheduler logs:

{
  httpRequest: { }
  insertId: "ibboa4fg7l1s9"
  jsonPayload: {
    @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
    jobName: "projects/project/locations/region/jobs/tester"
    status: "PERMISSION_DENIED"
    targetType: "HTTP"
    url: "https://region-project.cloudfunctions.net/tester"
  }
  logName: "projects/project/logs/cloudscheduler.googleapis.com%2Fexecutions"
  receiveTimestamp: "2020-04-15T17:50:14.287689800Z"
  resource: {…}
  severity: "ERROR"
  timestamp: "2020-04-15T17:50:14.287689800Z"
}

I saw one solution where someone created a new project to get this to work; are there any others? I appreciate any help provided.

UPDATE
New Google Cloud Function - running in central (same as my App Engine app)
New Service Account - with Owner role
[screenshots: New Scheduled Task - Info / Status / Logs]

ACTUAL FIX
If you're missing the Cloud Scheduler service account (e.g. service-1231231231412@gcp-sa-cloudscheduler.iam.gserviceaccount.com), HTTP auth tasks won't work. To fix it, I had to disable the API and re-enable it, which gave me the service account. I didn't use this service account directly, but that was the only changing factor before it started working.
These are the exact steps you have to take. Be sure not to skip the second step: it sets invoker permissions on the service account so that the scheduler is able to invoke the HTTP Cloud Function with that service account's OIDC information. Note: for simplicity, I chose the default service account here; however, it would be wise to create a separate service account for this purpose with fewer privileges.

# Create cloud function
gcloud functions deploy my_function \
  --entry-point=my_entrypoint \
  --runtime=python37 \
  --trigger-http \
  --region=europe-west1 \
  --project=${PROJECT_ID}

# Set invoke permissions
gcloud functions add-iam-policy-binding my_function \
  --region=europe-west1 \
  --member=serviceAccount:${PROJECT_ID}@appspot.gserviceaccount.com \
  --role="roles/cloudfunctions.invoker" \
  --project=${PROJECT_ID}

# Deploy scheduler
gcloud scheduler jobs create http my_job \
  --schedule="every 60 minutes" \
  --uri="https://europe-west1-${PROJECT_ID}.cloudfunctions.net/my_function/" \
  --http-method=POST \
  --oidc-service-account-email="${PROJECT_ID}@appspot.gserviceaccount.com" \
  --oidc-token-audience="https://europe-west1-${PROJECT_ID}.cloudfunctions.net/my_function" \
  --project=${PROJECT_ID}
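For completeness, the function being deployed can stay minimal. A sketch of what main.py might look like for the my_entrypoint entry point used above; the body is a placeholder, not part of the original answer:

def my_entrypoint(request):
    # Cloud Functions passes a Flask request object; the scheduler's OIDC
    # token is verified by the platform before this code ever runs.
    return ("Hello from a private function", 200)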
23
12
61,302,822
2020-4-19
https://stackoverflow.com/questions/61302822/can-you-change-code-in-a-gitlab-pipeline
Is it possible for a GitLab CI/CD pipeline to commit code changes? I would like to run a stage that uses black to format my code automatically whenever I push my work.

gitlab-ci.yml

image: python:3.6

stages:
  - test

before_script:
  - python3 -m pip install -r requirements.txt

test:linting:
  script:
    - black ./

I made sure to include a file that needs reformatting to test whether this works.

Job output

$ black ./
reformatted test.py
All done! ✨ 🍰 ✨
1 file reformatted.

The file in my repository remains unchanged, which leads me to believe that this might not be possible.
Black won't automatically commit corrected Python code unless you use a pre-commit hook. The best way to run black in CI is to include something like:

black . --check --verbose --diff --color

This will fail the job if the Python code does not adhere to the code format and force the user to fix the formatting. Please check out black --help for all the flags.

This is a good reference on Black: https://www.mattlayman.com/blog/2018/python-code-black/
Repo: https://github.com/psf/black
10
17
61,275,551
2020-4-17
https://stackoverflow.com/questions/61275551/python-in-vs-code-can-i-run-cell-in-the-integrated-terminal
In VS Code with Python, we can run a "cell" (a block that starts with #%%) in the "Python Interactive Window". Can we do the same thing in the integrated terminal? I know we can do this in Spyder, where the terminal is generally an IPython terminal, and MATLAB works the same way with its terminal. Can we do this in VS Code?
I've opened an issue on GitHub, and @AdamAL also opened one, but it seems they don't intend to implement this. Here is a workaround for other users.

EDIT: I had previously answered with a workaround that takes 2 VS Code extensions and doesn't use the smart command jupyter.selectCellContents. @AdamAL shared a better solution using this command, but with the caveat that the cursor loses its position and it focuses only on the Python terminal. @Maxime Beau pointed that out, expanding @AdamAL's solution to the active terminal (which could be IPython, for instance). Now I'm taking all the answers and posting a general solution. This solution doesn't lose the cursor position when running only one cell. There is also another command that runs the cell and advances to the next cell (as in Jupyter), and it is general to the active terminal (which could be IPython).

1. Install the macros extension: this extension has some additional commands that multi-command didn't have (like the delay command).

2. In settings.json:

"macros.list": {
    "runCellinTerminal": [
        "jupyter.selectCellContents",
        "workbench.action.terminal.runSelectedText",
        "cursorUndo",
        {"command": "$delay", "args": {"delay": 100}},
        {"command": "workbench.action.terminal.sendSequence", "args": {"text": "\n"}},
    ],
    "runCellinTerminaladvance": [
        "jupyter.selectCellContents",
        "workbench.action.terminal.runSelectedText",
        "cursorDown",
        {"command": "$delay", "args": {"delay": 100}},
        {"command": "workbench.action.terminal.sendSequence", "args": {"text": "\n"}},
    ],
}

OBS: cursorUndo takes the cursor back to the right position. cursorDown takes the cursor to the next cell. The delay and sendSequence \n commands are useful when the terminal is an IPython terminal.

3. In keybindings.json:

{
    "key": "ctrl+alt+enter",
    "command": "macros.runCellinTerminal",
    "when": "editorTextFocus && jupyter.hascodecells"
},
{
    "key": "shift+alt+enter",
    "command": "macros.runCellinTerminaladvance",
    "when": "editorTextFocus && jupyter.hascodecells"
},
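To try the keybindings, any file with #%% markers will do; for example a throwaway script like this (the file name and contents are just an illustration, not from the original answer):

# cells.py - two #%% cells for testing the "run cell in terminal" keybindings
#%%
message = "cell one"
print(message)

#%%
print(message + " -> cell two")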
8
9
61,350,804
2020-4-21
https://stackoverflow.com/questions/61350804/tkinter-treeview-how-to-correctly-select-multiple-items-with-the-mouse
I'm trying to use the mouse to select and deselect multiple items. I have it working sort of but there is a problem when the user moves the mouse to fast. When the mouse is moved fast some items are skipped and are not selected at all. I must be going about this the wrong way. Update 1: I decided to use my own selecting system, but I get the same results as above. Some items are skipped when the mouse is moved to fast and therefore they don't get the correct color tag added and remain unchanged. If the mouse is moved slowly all items get selected correctly. Below is the new code and another Image of the problem. Update 2: I have solved this issue and posted the working code just for completeness and to help others in the future. I ended up using my own selection selecting system instead of the built in one. import tkinter as tk import tkinter.ttk as ttk class App(tk.Tk): def __init__(self): super().__init__() self.title('Treeview Demo') self.geometry('300x650') self.rowconfigure(0, weight=1) self.columnconfigure(0, weight=1) tv = self.tv = ttk.Treeview(self) tv.heading('#0', text='Name') # Populate tree with test data. for idx in range(0, 4): tv.insert('', idx, f'!{idx}', text=f'Item {idx+1}', tags='TkTextFont', open=1) iid = f'!{idx}_!{idx}' tv.insert(f'!{idx}', '0', iid, text=f'Python {idx+1}', tags='TkTextFont', open=1) for i in range(0, 5): tv.insert(iid, f'{i}', f'{iid}_!{i}', text=f'Sub item {i+1}', tags='TkTextFont') tv.grid(sticky='NSEW') self.active_item = None def motion(_): x, y = tv.winfo_pointerxy() item = tv.identify('item', x - tv.winfo_rootx(), y - tv.winfo_rooty()) if not item or item == self.active_item: return if not self.active_item: self.active_item = item tv.selection_toggle(item) self.active_item = item def escape(_): tv.selection_remove(tv.selection()) def button_press(_): self.bind('<Motion>', motion) def button_release(_): self.unbind('<Motion>') self.active_item = None self.bind('<Escape>', escape) self.bind('<Button-1>', button_press) self.bind('<ButtonRelease-1>', button_release) def main(): app = App() app.mainloop() if __name__ == '__main__': main() Second Try: import tkinter as tk import tkinter.ttk as ttk import tkinter.font as tkfont class App(tk.Tk): def __init__(self): super().__init__() self.title('Treeview Demo') self.geometry('700x650') self.rowconfigure(0, weight=1) self.columnconfigure(0, weight=1) tv = self.tv = ttk.Treeview(self) tv.heading('#0', text='Name') tv.tag_configure('odd', background='#aaaaaa') tv.tag_configure('even', background='#ffffff') tv.tag_configure('selected_odd', background='#25a625') tv.tag_configure('selected_even', background='#b0eab2') tag = 'odd' # Populate tree with test data. 
for idx in range(0, 4): tag = 'even' if tag == 'odd' else 'odd' tv.insert('', idx, f'!{idx}', text=f'Item {idx+1}', open=1, tags=(tag,)) tag = 'even' if tag == 'odd' else 'odd' iid = f'!{idx}_!{idx}' tv.insert(f'!{idx}', '0', iid, text=f'Python {idx+1}', open=1, tags=(tag,)) for i in range(0, 5): tag = 'even' if tag == 'odd' else 'odd' tv.insert(iid, i, f'{iid}_!{i}', text=f'Sub item {i+1}', tags=(tag,)) tv.config(selectmode="none") tv.grid(sticky='NSEW') dw = tk.Toplevel() dw.overrideredirect(True) dw.wait_visibility(self) dw.wm_attributes('-alpha', 0.2) dw.wm_attributes("-topmost", True) dw.config(bg='#00aaff') dw.withdraw() self.selected = False self.active_item = None def motion(event): x, y = self.winfo_pointerxy() width = event.x-self.anchor_x height = event.y-self.anchor_y if width < 0: coord_x = event.x+self.winfo_rootx() width = self.anchor_x - event.x else: coord_x = self.anchor_x+self.winfo_rootx() if coord_x+width > self.winfo_rootx()+self.winfo_width(): width -= (coord_x+width)-(self.winfo_rootx()+self.winfo_width()) elif x < self.winfo_rootx(): width -= (self.winfo_rootx() - x) coord_x = self.winfo_rootx() if height < 0: coord_y = event.y+self.winfo_rooty() height = self.anchor_y - event.y else: coord_y = self.anchor_y+self.winfo_rooty() if coord_y+height > self.winfo_rooty()+self.winfo_height(): height -= (coord_y+height)-(self.winfo_rooty()+self.winfo_height()) elif y < self.winfo_rooty(): height -= (self.winfo_rooty() - y) coord_y = self.winfo_rooty() dw.geometry(f'{width}x{height}+{coord_x}+{coord_y}') item = tv.identify('item', coord_x, coord_y-40) if not item or item == self.active_item: self.active_item = None return self.active_item = item tags = list(tv.item(item, 'tags')) if 'odd' in tags: tags.pop(tags.index('odd')) tags.append('selected_odd') if 'even' in tags: tags.pop(tags.index('even')) tags.append('selected_even') tv .item(item, tags=tags) def escape(_=None): for item in tv.tag_has('selected_odd'): tags = list(tv.item(item, 'tags')) tags.pop(tags.index('selected_odd')) tags.append('odd') tv.item(item, tags=tags) for item in tv.tag_has('selected_even'): tags = list(tv.item(item, 'tags')) tags.pop(tags.index('selected_even')) tags.append('even') tv.item(item, tags=tags) def button_press(event): if self.selected and not event.state & 1 << 2: escape() self.selected = False dw.deiconify() self.anchor_item = tv.identify('item', event.x, event.y-40) self.anchor_x, self.anchor_y = event.x, event.y self.bind('<Motion>', motion) self.selected = True def button_release(event): dw.withdraw() dw.geometry('0x0+0+0') self.unbind('<Motion>') self.bind('<Escape>', escape) self.bind('<Button-1>', button_press) self.bind('<ButtonRelease-1>', button_release) def main(): app = App() app.mainloop() if __name__ == '__main__': main()
Working Example: Tested and works in Windows and Linux UPDATE: I have updated the code and everything works in Windows and Linux, although in Windows the blue transparent window is jittery when sizing from right to left, left to right is fine. In Linux the jitters don't happen anyone know why? If someone could let me know if the code below works in MacOS that would be great also. import tkinter as tk import tkinter.ttk as ttk import tkinter.font as tkfont class Treeview(ttk.Treeview): def __init__(self, parent, **kwargs): super().__init__(parent, **kwargs) self.root = parent.winfo_toplevel() self.selected_items = [] self.origin_x = \ self.origin_y = \ self.active_item = \ self.origin_item = None sw = self.select_window = tk.Toplevel(self.root) sw.wait_visibility(self.root) sw.withdraw() sw.config(bg='#00aaff') sw.overrideredirect(True) sw.wm_attributes('-alpha', 0.3) sw.wm_attributes("-topmost", True) self.font = tkfont.nametofont('TkTextFont') self.style = parent.style self.linespace = self.font.metrics('linespace') + 5 self.bind('<Escape>', self.tags_reset) self.bind('<Button-1>', self.button_press) self.bind('<ButtonRelease-1>', self.button_release) def fixed_map(self, option): return [elm for elm in self.style.map("Treeview", query_opt=option) if elm[:2] != ("!disabled", "!selected")] def tag_add(self, tags, item): self.tags_update('add', tags, item) def tag_remove(self, tags, item=None): self.tags_update('remove', tags, item) def tag_replace(self, old, new, item=None): for item in (item,) if item else self.tag_has(old): self.tags_update('add', new, item) self.tags_update('remove', old, item) def tags_reset(self, _=None): self.tag_remove(('selected', '_selected')) self.tag_replace('selected_odd', 'odd') self.tag_replace('selected_even', 'even') def tags_update(self, opt, tags, item): def get_items(node): items.append(node) for node in self.get_children(node): get_items(node) if not tags: return elif isinstance(tags, str): tags = (tags,) if not item: items = [] for child in self.get_children(): get_items(child) else: items = (item,) for item in items: _tags = list(self.item(item, 'tags')) for _tag in tags: if opt == 'add': if _tag not in _tags: _tags.append(_tag) elif opt == 'remove': if _tag in _tags: _tags.pop(_tags.index(_tag)) self.item(item, tags=_tags) def button_press(self, event): self.origin_x, self.origin_y = event.x, event.y item = self.origin_item = self.active_item = self.identify('item', event.x, event.y) sw = self.select_window sw.geometry('0x0+0+0') sw.deiconify() self.bind('<Motion>', self.set_selected) if not item: if not event.state & 1 << 2: self.tags_reset() return if event.state & 1 << 2: if self.tag_has('odd', item): self.tag_add('selected', item) self.tag_replace('odd', 'selected_odd', item) elif self.tag_has('even', item): self.tag_add('selected', item) self.tag_replace('even', 'selected_even', item) elif self.tag_has('selected_odd', item): self.tag_replace('selected_odd', 'odd', item) elif self.tag_has('selected_even', item): self.tag_replace('selected_even', 'even', item) else: self.tags_reset() self.tag_add('selected', item) if self.tag_has('odd', item): self.tag_replace('odd', 'selected_odd', item) elif self.tag_has('even', item): self.tag_replace('even', 'selected_even', item) def button_release(self, _): self.select_window.withdraw() self.unbind('<Motion>') for item in self.selected_items: if self.tag_has('odd', item) or self.tag_has('even', item): self.tag_remove(('selected', '_selected'), item) else: self.tag_replace('_selected', 'selected', item) def 
get_selected(self): return sorted(self.tag_has('selected_odd') + self.tag_has('selected_even')) def set_selected(self, event): def selected_items(): items = [] window_y = int(self.root.geometry().rsplit('+', 1)[-1]) titlebar_height = self.root.winfo_rooty() - window_y sw = self.select_window start = sw.winfo_rooty() - titlebar_height - window_y end = start + sw.winfo_height() while start < end: start += 1 node = self.identify('item', event.x, start) if not node or node in items: continue items.append(node) return sorted(items) def set_row_colors(): items = self.selected_items = selected_items() for item in items: if self.tag_has('selected', item): if item == self.origin_item: continue if self.tag_has('selected_odd', item): self.tag_replace('selected_odd', 'odd', item) elif self.tag_has('selected_even', item): self.tag_replace('selected_even', 'even', item) elif self.tag_has('odd', item): self.tag_replace('odd', 'selected_odd', item) elif self.tag_has('even', item): self.tag_replace('even', 'selected_even', item) self.tag_add('_selected', item) for item in self.tag_has('_selected'): if item not in items: self.tag_remove('_selected', item) if self.tag_has('odd', item): self.tag_replace('odd', 'selected_odd', item) elif self.tag_has('even', item): self.tag_replace('even', 'selected_even', item) elif self.tag_has('selected_odd', item): self.tag_replace('selected_odd', 'odd', item) elif self.tag_has('selected_even', item): self.tag_replace('selected_even', 'even', item) root_x = self.root.winfo_rootx() if event.x < self.origin_x: width = self.origin_x - event.x coord_x = root_x + event.x else: width = event.x - self.origin_x coord_x = root_x + self.origin_x if coord_x+width > root_x+self.winfo_width(): width -= (coord_x+width)-(root_x+self.winfo_width()) elif self.winfo_pointerx() < root_x: width -= (root_x - self.winfo_pointerx()) coord_x = root_x root_y = self.winfo_rooty() if event.y < self.origin_y: height = self.origin_y - event.y coord_y = root_y + event.y else: height = event.y - self.origin_y coord_y = root_y + self.origin_y if coord_y+height > root_y+self.winfo_height(): height -= (coord_y+height)-(root_y+self.winfo_height()) elif self.winfo_pointery() < root_y + self.linespace: height -= (root_y - self.winfo_pointery() + self.linespace) coord_y = root_y + self.linespace if height < 0: height = self.winfo_rooty() + self.origin_y set_row_colors() self.select_window.geometry(f'{width}x{height}+{coord_x}+{coord_y}') class App(tk.Tk): def __init__(self): super().__init__() def print_selected_items(_): print(tv.get_selected()) return 'break' style = self.style = ttk.Style() tv = Treeview(self) style.map("Treeview", foreground=tv.fixed_map("foreground"), background=tv.fixed_map("background")) tv.heading('#0', text='Name') tv.tag_configure('odd', background='#ffffff') tv.tag_configure('even', background='#aaaaaa') tv.tag_configure('selected_odd', background='#b0eab2') tv.tag_configure('selected_even', background='#25a625') color_tag = 'odd' for idx in range(0, 4): # Populating the tree with test data. 
color_tag = 'even' if color_tag == 'odd' else 'odd' tv.insert('', idx, f'{idx}', text=f'Item {idx+1}', open=1, tags=(color_tag,)) color_tag = 'even' if color_tag == 'odd' else 'odd' iid = f'{idx}_{0}' tv.insert(f'{idx}', '0', iid, text=f'Menu {idx+1}', open=1, tags=(color_tag,)) for i in range(0, 5): color_tag = 'even' if color_tag == 'odd' else 'odd' tv.insert(iid, i, f'{iid}_{i}', text=f'Sub item {i+1}', tags=(color_tag,)) color_tag = 'even' if color_tag == 'odd' else 'odd' tv.insert(iid, 5, f'{iid}_{5}', text=f'Another Menu {idx+1}', open=1, tags=(color_tag,)) iid = f'{iid}_{5}' for i in range(0, 3): color_tag = 'even' if color_tag == 'odd' else 'odd' tv.insert(iid, i, f'{iid}_{i}', text=f'Sub item {i+1}', tags=(color_tag,)) self.title('Treeview Demo') self.geometry('275x650+3000+250') self.rowconfigure(0, weight=1) self.columnconfigure(0, weight=1) tv.config(selectmode="none") tv.grid(sticky='NSEW') button = ttk.Button(self, text='Get Selected Items') button.grid() button.bind('<Button-1>', print_selected_items) def main(): app = App() app.mainloop() if __name__ == '__main__': main()
13
0
61,291,741
2020-4-18
https://stackoverflow.com/questions/61291741/passing-list-likes-to-loc-or-with-any-missing-labels-is-no-longer-supported
I want to create a modified dataframe with the specified columns. I tried the following, but it throws the error "Passing list-likes to .loc or [] with any missing labels is no longer supported":

# columns to keep
filtered_columns = ['text', 'agreeCount', 'disagreeCount', 'id',
                    'user.firstName', 'user.lastName', 'user.gender', 'user.id']
tips_filtered = tips_df.loc[:, filtered_columns]

# display tips
tips_filtered

Thank you
It looks like pandas has deprecated this method of indexing. According to their docs:

This behavior is deprecated and will show a warning message pointing to this section. The recommended alternative is to use .reindex()

With the recommended method, you can filter your columns using:

tips_filtered = tips_df.reindex(columns=filtered_columns)

NB: To reindex rows, you would use reindex(index=...) (more information here).
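A quick self-contained check of what reindex does with a missing label, using a toy frame (the column names here are made up for illustration, and the intersection variant is an alternative not mentioned in the original answer):

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
cols = ['a', 'b', 'missing']

print(df.reindex(columns=cols))           # keeps 'missing' as a column of NaN
print(df[df.columns.intersection(cols)])  # silently drops 'missing' instead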
54
54
61,265,125
2020-4-17
https://stackoverflow.com/questions/61265125/jupyter-notebook-module-not-found-even-after-pip-install
I have a module installed in my Jupyter notebook:

!pip install gensim
Requirement already satisfied: gensim in /home/m.gawinecki/virtualenv/la-recoms/lib/python3.7/site-packages (3.8.2)

However, when I try to import it, it fails:

import gensim
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-e70e92d32c6e> in <module>
----> 1 import gensim

ModuleNotFoundError: No module named 'gensim'

It looks like it has been installed properly:

!pip list | grep gensim
gensim 3.8.2

How can I fix it?
Add your virtual environment as a Python kernel in this way (make sure the environment is activated first):

(venv) $ ipython kernel install --name "local-venv-kernel" --user

Now you can select the created kernel "local-venv-kernel" when you start Jupyter Notebook or Lab. You can check the installed libraries using this code in a notebook cell:

!pip freeze
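If it is unclear which interpreter the notebook kernel is actually running on, a quick diagnostic cell (standard library only) makes any mismatch with the virtualenv visible:

import sys
print(sys.executable)  # interpreter the kernel runs on
print(sys.path[:3])    # first few locations it searches for imports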
20
16
61,330,414
2020-4-20
https://stackoverflow.com/questions/61330414/pandas-astype-with-date-or-datetime
This answer contains a very elegant way of setting all the types of your pandas columns in one line:

# convert column "a" to int64 dtype and "b" to complex type
df = df.astype({"a": int, "b": complex})

I am starting to think that this unfortunately has limited application and that you will have to use various other methods of casting the column types sooner or later, over many lines. I tested 'category' and that worked, so it will take things which are actual Python types, like int or complex, as well as pandas terms in quotation marks, like 'category'.

I have a column of dates which looks like this:

25.07.10
08.08.10
07.01.11

I had a look at this answer about casting date columns, but none of the suggestions seem to fit into the elegant syntax above. I tried:

from datetime import date
df = df.astype({"date": date})

but it gave an error:

TypeError: dtype '<class 'datetime.date'>' not understood

I also tried pd.Series.dt.date, which also didn't work. Is it possible to cast all your columns, including the date or datetime column, in one line like this?
This has been answered in the comments, where it was noted that the following works:

df.astype({'date': 'datetime64[ns]'})

In addition, you can set the dtype when reading in the data:

pd.read_csv('path/to/file.csv', parse_dates=['date'])
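One caveat for the day-first dates shown in the question (25.07.10 and so on): both approaches let pandas guess the format, so being explicit is safer. A minimal sketch, assuming the column is named 'date':

import pandas as pd

df = pd.DataFrame({'date': ['25.07.10', '08.08.10', '07.01.11']})
df['date'] = pd.to_datetime(df['date'], format='%d.%m.%y')  # explicit day.month.year parsing
print(df.dtypes)  # date is now datetime64[ns]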
17
33