Dataset columns:
- question_id: int64 (values 59.5M to 79.4M)
- creation_date: string (lengths 8 to 10)
- link: string (lengths 60 to 163)
- question: string (lengths 53 to 28.9k)
- accepted_answer: string (lengths 26 to 29.3k)
- question_vote: int64 (values 1 to 410)
- answer_vote: int64 (values -9 to 482)
question_id: 61,314,440
creation_date: 2020-4-20
link: https://stackoverflow.com/questions/61314440/certbort-commands-return-modulenotfounderror-no-module-named-cffi-backend
I followed a guide to get my python flask app running and I am at the last step where I change http into https with certbot. But when I run my certbot command sudo certbot --nginx -d domainname -d www.domainname I get ModuleNotFoundError: No module named '_cffi_backend' The whole error is: Traceback (most recent call last): File "/usr/bin/certbot", line 11, in <module> load_entry_point('certbot==0.31.0', 'console_scripts', 'certbot')() File "/home/mc-obfuscator/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 490, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/home/mc-obfuscator/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2859, in load_entry_point return ep.load() File "/home/mc-obfuscator/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2450, in load return self.resolve() File "/home/mc-obfuscator/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2456, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/lib/python3/dist-packages/certbot/main.py", line 10, in <module> import josepy as jose File "/usr/lib/python3/dist-packages/josepy/__init__.py", line 44, in <module> from josepy.interfaces import JSONDeSerializable File "/usr/lib/python3/dist-packages/josepy/interfaces.py", line 8, in <module> from josepy import errors, util File "/usr/lib/python3/dist-packages/josepy/util.py", line 4, in <module> import OpenSSL File "/usr/lib/python3/dist-packages/OpenSSL/__init__.py", line 8, in <module> from OpenSSL import crypto, SSL File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 12, in <module> from cryptography import x509 File "/usr/lib/python3/dist-packages/cryptography/x509/__init__.py", line 8, in <module> from cryptography.x509.base import ( File "/usr/lib/python3/dist-packages/cryptography/x509/base.py", line 16, in <module> from cryptography.x509.extensions import Extension, ExtensionType File "/usr/lib/python3/dist-packages/cryptography/x509/extensions.py", line 18, in <module> from cryptography.hazmat.primitives import constant_time, serialization File "/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py", line 9, in <module> from cryptography.hazmat.bindings._constant_time import lib ModuleNotFoundError: No module named '_cffi_backend' I hope someone can help as I have found a lot of people asking questions about this mysterious _cffi_backend thing. Some more info: If I do python3 -m pip install cffi it says requirement already satisfied. I have also gotten this error when installing other things and trying different peoples solutions. 'ModuleNotFoundError: No module named 'apt_pkg' that seems to be fixed by doing sudo apt-get install python3-apt --reinstall but now I get: ImportError: cannot import name '_gi' from partially initialized module 'gi' (most likely due to a circular import) (/usr/lib/python3/dist-packages/gi/__init__.py) I also made a symbolic link with /usr/lib/python3/dist-packages/apt_pkg.so -> apt_pkg.cpython-36m-x86_64-linux-gnu.so something other people said would work. I am running python 3.8 but probably have 3.6 on the server as well. If I do python it opens the 3.8 shell. Also I am running ubuntu 18.04.4. 
ls -al /usr/bin | grep python gives : -rwxr-xr-x 1 root root 1056 Apr 16 2018 dh_python2 lrwxrwxrwx 1 root root 23 Nov 7 10:07 pdb2.7 -> ../lib/python2.7/pdb.py lrwxrwxrwx 1 root root 23 Nov 7 10:44 pdb3.6 -> ../lib/python3.6/pdb.py lrwxrwxrwx 1 root root 23 Nov 7 10:50 pdb3.7 -> ../lib/python3.7/pdb.py lrwxrwxrwx 1 root root 23 Oct 28 16:14 pdb3.8 -> ../lib/python3.8/pdb.py lrwxrwxrwx 1 root root 31 Oct 25 2018 py3versions -> ../share/python3/py3versions.py lrwxrwxrwx 1 root root 24 Jun 19 2019 python -> /etc/alternatives/python lrwxrwxrwx 1 root root 16 Apr 16 2018 python-config -> python2.7-config lrwxrwxrwx 1 root root 9 Apr 16 2018 python2 -> python2.7 lrwxrwxrwx 1 root root 16 Apr 16 2018 python2-config -> python2.7-config -rwxr-xr-x 1 root root 3637096 Nov 7 10:07 python2.7 lrwxrwxrwx 1 root root 33 Nov 7 10:07 python2.7-config -> x86_64-linux-gnu-python2.7-config lrwxrwxrwx 1 root root 25 Jan 5 10:38 python3 -> /etc/alternatives/python3 -rwxr-xr-x 1 root root 384 Feb 5 2018 python3-futurize -rwxr-xr-x 1 root root 388 Feb 5 2018 python3-pasteurize -rwxr-xr-x 1 root root 152 Nov 11 2017 python3-pbr -rwxr-xr-x 2 root root 4526456 Nov 7 10:44 python3.6 -rwxr-xr-x 2 root root 4526456 Nov 7 10:44 python3.6m -rwxr-xr-x 2 root root 4873376 Nov 7 10:50 python3.7 -rwxr-xr-x 2 root root 4873376 Nov 7 10:50 python3.7m -rwxr-xr-x 1 root root 5203488 Oct 28 16:14 python3.8 lrwxrwxrwx 1 root root 10 Oct 25 2018 python3m -> python3.6m lrwxrwxrwx 1 root root 29 Apr 16 2018 pyversions -> ../share/python/pyversions.py lrwxrwxrwx 1 root root 10 Sep 27 2018 uwsgi_python36 -> uwsgi-core lrwxrwxrwx 1 root root 33 Apr 16 2018 x86_64-linux-gnu-python-config -> x86_64-linux-gnu-python2.7-config -rwxr-xr-x 1 root root 2971 Nov 7 10:07 x86_64-linux-gnu-python2.7-config The files do exist on my system because: dpkg -l python3-cffi-backend python3-cryptography Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-=====================================================-===============================-===============================-=============================================================================================================== ii python3-cffi-backend 1.11.5-1 amd64 Foreign Function Interface for Python 3 calling C code - runtime ii python3-cryptography 2.1.4-1ubuntu1.3 amd64 Python library exposing cryptographic recipes and primitives (Python 3)
This fixes the problem: pip install -U cffi
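As an editorial aside, not part of the accepted answer above: since the machine in the question has several Python installations side by side, a quick hedged diagnostic is to ask each interpreter which cffi it actually sees. The interpreter paths below are taken from the question's ls output and may need adjusting.

```python
# Hedged diagnostic sketch (not part of the accepted answer): ask each interpreter
# which cffi / _cffi_backend it actually sees. The certbot entry point may run under
# a different Python than the one 'pip' installed into. Requires Python 3.7+.
import subprocess

probe = "import cffi, _cffi_backend; print(cffi.__version__, _cffi_backend.__file__)"
for interpreter in ("/usr/bin/python3", "/usr/bin/python3.8"):  # paths from the question
    result = subprocess.run([interpreter, "-c", probe], capture_output=True, text=True)
    print(interpreter, "->", (result.stdout or result.stderr).strip())
```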
question_vote: 9
answer_vote: 21

question_id: 61,241,374
creation_date: 2020-4-16
link: https://stackoverflow.com/questions/61241374/attributeerror-module-os-has-no-attribute-uname
When I do:

>>> import os
>>> os.uname()

I get an attribute error which looks like this:

Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    os.uname()
AttributeError: module 'os' has no attribute 'uname'

How can I fix this? Is my Python broken, or is it something else? The function is listed in the docs.
I've run your code the exact same way in IDLE on Windows 10 and got the same result:

>>> print(os.uname())
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    print(os.uname())
AttributeError: module 'os' has no attribute 'uname'

As @Joran Beasley pointed out, this function is only available on certain operating systems. From an online compiler:

posix.uname_result(sysname='Linux', nodename='Check', release='5.4.10-x86_64-linode132', version='#1 SMP PREEMPT Thu Jan 9 21:17:12 UTC 2020', machine='x86_64')

If you want to get the current OS, I recommend the platform module:

>>> import platform
>>> platform.platform()
'Windows-10-10.0.18362-SP0'

Some people prefer the os module, but platform is much more readable.
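A small complementary sketch to the answer above: platform.uname() exposes roughly the same fields as os.uname() but also works on Windows, so it is a portable replacement when the POSIX-only call is unavailable.

```python
# Minimal sketch: platform.uname() works on Windows, macOS and Linux,
# unlike os.uname(), which is POSIX-only. Printed values are examples.
import platform

info = platform.uname()   # namedtuple-like result
print(info.system)        # e.g. 'Windows' or 'Linux'
print(info.release)       # OS release string
print(info.machine)       # e.g. 'AMD64' or 'x86_64'
```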
question_vote: 15
answer_vote: 11

question_id: 61,241,172
creation_date: 2020-4-16
link: https://stackoverflow.com/questions/61241172/plotly-how-to-create-sunburst-subplot-using-graph-objects
My dataframe looks something like this:

   user  age  gender
0    23   12    male
1    24   13    male
2    25   15  female
3    26   20    male
4    27   21    male

Using px.sunburst(df, path=["gender", "age"]) gives me the correct sunburst plot, where gender is in the middle of the chart and each gender has its associated ages. I want to do this using graph_objects instead of plotly express, since I want two sunburst plots side by side. Given the df above, how can I use it with graph_objects? I do not understand what values to pass for labels, parents, ids, etc...

fig = go.Figure()
fig.add_trace(
    go.Sunburst(
        labels=df.age,
        parents=df.gender,
        domain=dict(column=0)
    )
)
fig.show()

I've read the documentation, however I cannot understand how it works. If someone knows, please tell me how I can create a sunburst plot using graph_objects with the df I have above.
The answer: Just build one figure using px, and "steal" all your figure elements from there and use them in a graph_objects figure to get what you need!

The details: If px does in fact give you the desired sunburst chart like this:

Plot 1:

Code 1:

# imports
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px

# data
df = pd.DataFrame({'user': [23, 24, 25, 26, 27],
                   'age': [12, 13, 15, 20, 21],
                   'gender': ['male', 'male', 'female', 'male', 'male']})

# plotly express figure
fig = px.sunburst(df, path=["gender", "age"])
fig.show()

Then, to my knowledge, you'll have to restructure your data in order to use graph_objects. Currently, your data has the form shown in the question, and graph_objects would require label = ['12', '13', '15', '20', '21', 'female', 'male']. So what now? Go through the agonizing pain of finding the correct data structure for each element? No, just build one figure using px, and "steal" all your figure elements from there and use them in a graph_objects figure:

Code 2:

# imports
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px

# data
df = pd.DataFrame({'user': [23, 24, 25, 26, 27],
                   'age': [12, 13, 15, 20, 21],
                   'gender': ['male', 'male', 'female', 'male', 'male']})

# plotly express figure
fig = px.sunburst(df, path=["gender", "age"])

# plotly graph_objects figure
fig2 = go.Figure(go.Sunburst(
    labels=fig['data'][0]['labels'].tolist(),
    parents=fig['data'][0]['parents'].tolist(),
))
fig2.show()

Plot 2:

Now, if you'd like to display some more features of your dataset in the same figure, just add ids=fig['data'][0]['ids'].tolist() to the mix:

Plot 3:

Complete code:

# imports
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px

# data
df = pd.DataFrame({'user': [23, 24, 25, 26, 27],
                   'age': [12, 13, 15, 20, 21],
                   'gender': ['male', 'male', 'female', 'male', 'male']})

# plotly express figure
fig = px.sunburst(df, path=["gender", "age"])

# plotly graph_objects figure
fig2 = go.Figure(go.Sunburst(
    labels=fig['data'][0]['labels'].tolist(),
    parents=fig['data'][0]['parents'].tolist(),
    values=fig['data'][0]['values'].tolist(),
    ids=fig['data'][0]['ids'].tolist(),
    domain={'x': [0.0, 1.0], 'y': [0.0, 1.0]}
))
fig2.show()
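Since the question's stated goal is two sunburst plots side by side, here is a hedged editorial sketch building on the answer's approach: lift labels/parents/ids from the px figure and place two go.Sunburst traces in a grid (the second trace simply reuses the same data for illustration).

```python
# Hedged sketch: two sunburst traces side by side via a grid layout.
# Both traces reuse the labels/parents/ids lifted from the px figure,
# as in the answer above; a real second panel would use a second dataframe.
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go

df = pd.DataFrame({'age': [12, 13, 15, 20, 21],
                   'gender': ['male', 'male', 'female', 'male', 'male']})
px_fig = px.sunburst(df, path=["gender", "age"])
labels = px_fig['data'][0]['labels'].tolist()
parents = px_fig['data'][0]['parents'].tolist()
ids = px_fig['data'][0]['ids'].tolist()

fig = go.Figure()
fig.add_trace(go.Sunburst(labels=labels, parents=parents, ids=ids,
                          domain=dict(column=0)))
fig.add_trace(go.Sunburst(labels=labels, parents=parents, ids=ids,
                          domain=dict(column=1)))
fig.update_layout(grid=dict(columns=2, rows=1))
fig.show()
```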
question_vote: 8
answer_vote: 17

question_id: 61,330,427
creation_date: 2020-4-20
link: https://stackoverflow.com/questions/61330427/set-y-axis-in-millions
I have a problem with this plot: the y-axis is in plain units, but I need it to be in millions, as in the example image. Do you know a method to achieve this? Thanks in advance.
You can use a custom FuncFormatter like this:

from matplotlib.ticker import FuncFormatter
import matplotlib.pyplot as plt

def millions(x, pos):
    'The two args are the value and tick position'
    return '%1.1fM' % (x * 1e-6)

formatter = FuncFormatter(millions)
fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(formatter)

Or you can even replace millions with the following function to support all magnitudes:

def human_format(num, pos):
    magnitude = 0
    while abs(num) >= 1000:
        magnitude += 1
        num /= 1000.0
    # add more suffixes if you need them
    return '%.2f%s' % (num, ['', 'K', 'M', 'G', 'T', 'P'][magnitude])
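For completeness, a self-contained usage sketch of the FuncFormatter approach from the answer above, with made-up sample data so the effect on the tick labels is visible:

```python
# Illustrative end-to-end use of the millions formatter; the bar values are made up.
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

def millions(x, pos):
    # x is the tick value, pos its position
    return '%1.1fM' % (x * 1e-6)

fig, ax = plt.subplots()
ax.bar(['A', 'B', 'C'], [2_500_000, 7_300_000, 12_000_000])
ax.yaxis.set_major_formatter(FuncFormatter(millions))  # ticks now read e.g. 2.0M, 4.0M, ...
plt.show()
```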
question_vote: 12
answer_vote: 21

question_id: 61,321,143
creation_date: 2020-4-20
link: https://stackoverflow.com/questions/61321143/12296266720420-163936-459errorbrowser-switcher-service-cc238-xxx-init-er
I am using Version 81.0.4044.113 (Official Build) (64-bit). It was not happening before and the code was working completely fine. But after few days I ran it again and this error came. I am using these modules-> from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException import csv import time from tkinter import * def Authorization(): time.sleep(15) username = driver.find_element_by_id("userInput") username.send_keys('username') driver.find_element_by_xpath("//*[@id='login-button']").click() time.sleep(5) password = driver.find_element_by_xpath("//*[@id='passwordInput']") password.send_keys('password') submit_button = driver.find_element_by_xpath("//*[@id='login-button']").click() def Extractor(): time.sleep(25) integrated_release = driver.find_elements_by_xpath("//*[@id='versionArea']/div/table/tbody/tr[2]/td[2]") global integrated_release_data integrated_release_data = [x.text for x in integrated_release] impact_release = driver.find_elements_by_xpath("//*[@id='versionArea']/div/table/tbody/tr[5]/td[2]") global impact_release_data impact_release_data = [x.text for x in impact_release] build_platform = driver.find_elements_by_xpath("//*[@id='btkArea']/div/table/tbody/tr[2]/td[2]/span") global build_platform_data build_platform_data = [x.text for x in build_platform] def To_csv(): csvData = [final_data] with open('data.csv', 'a') as csvFile: writers = csv.writer(csvFile) writers.writerows(csvData) csvFile.close() def printtext(): global bugName bugName = e.get() print(bugName) def kinter(): root = Tk() root.geometry("500x100") root.title('xtractor') var = StringVar() label = Label( root, textvariable=var) var.set("Enter") label.pack() global e e = Entry(root) e.pack() e.focus_set() b = Button(root,text='submit',command=printtext) b.pack(side='bottom') root.mainloop() kinter() driver = webdriver.Chrome() bugs = bugName.split(',') driver.get("http........"+bugs[0]) bugname = [bugs[0]] Authorization() Extractor() final_data = a+b+c+d+e To_csv() count = 0 for bug in bugs: try: if count == 0: count += 1 continue driver.get("http:....."+bug) bugname = [bug] Extractor() final_data = a+b+c+d+e To_csv() except: continue and I have installed the same version of webdriver as of chrome. Any idea how can I solve this issue?
This error message...

ERROR:browser_switcher_service.cc(238)] XXX Init()

...implies that the call to on_init_ raised an error.

Analysis

This error is defined in bluetooth_adapter_winrt.cc and was the direct impact of the changes incorporated within google-chrome, as per the details available within the discussion "Chrome no longer accepts certificates that fallback to common name".

Solution

Ensure that:
- Selenium is upgraded to current levels Version 3.141.59.
- ChromeDriver is updated to the current ChromeDriver v84.0 level.
- Chrome is updated to the current Chrome Version 84.0 level (as per the ChromeDriver v84.0 release notes).
- If your base Web Client version is too old, then uninstall it and install a recent GA and released version of the Web Client.

Additional considerations

However, it was observed that this error can be suppressed by running Chrome as root user (administrator) on Linux, but that would be a deviation from the documentation in ChromeDriver - WebDriver for Chrome, where it is mentioned:

A common cause for Chrome to crash during startup is running Chrome as root user (administrator) on Linux. While it is possible to work around this issue by passing the '--no-sandbox' flag when creating your WebDriver session, i.e. the ChromeDriver session, such a configuration is unsupported and highly discouraged. Ideally, you need to configure your environment to run Chrome as a regular user instead.

Suppressing the error

Finally, as per the documentation in Selenium Chrome Driver: Resolve Error Messages Regarding Registry Keys and Experimental Options, these error logs can be suppressed by adding the argument:

excludeSwitches: ['enable-logging']

So your effective code block will be:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["enable-logging"])
driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get("https://www.google.com/")
question_vote: 14
answer_vote: 1

question_id: 61,226,587
creation_date: 2020-4-15
link: https://stackoverflow.com/questions/61226587/pycharm-does-not-recognize-logging-basicconfig-handlers-argument
I have a python application that has been using the python logging library for some time now to print messages both on the screen and to time-rotating files, and it works fine. The logging configuration is as follows:

import logging
from logging.handlers import TimedRotatingFileHandler

logging.basicConfig(level=logging.INFO if debug is not True else logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    handlers=[
                        TimedRotatingFileHandler(log_filename, when='midnight', interval=1),
                        logging.StreamHandler()
                    ])

My problem is that PyCharm keeps highlighting the logging.basicConfig part of the configuration with the following warning:

Unexpected argument(s) Possible callees: basicConfig(*, filename: Optional[str]=..., filemode: str=..., format: str=..., datefmt: Optional[str]=..., level: Union[int, str, None]=..., stream: IO[str]=...) basicConfig() Inspection info: Reports discrepancies between declared parameters and actual arguments, as well as incorrect arguments (e.g. duplicate named arguments) and incorrect argument order. Decorators are analyzed, too.

The warning goes away only if I remove the handlers=[...] part of the code. Did basicConfig's arguments change in a specific version? If yes, what is the proposed way to achieve the same thing? I'm using python 3.6 and pycharm 2020.1 (but have had the same warning for at least the past 3 updates).
This issue is reported on the PyCharm bug tracker at https://youtrack.jetbrains.com/issue/PY-39762. In short: the newer keyword arguments of basicConfig in Python 3, like handlers, are not recognized.

That issue also mentions a workaround: put the caret on basicConfig - Right Click - Go to - Declaration or Usages - click on the star on the left (in the gutter) - logging/__init__.pyi should open - annotate all basicConfig definitions with @overload. I tested it and it worked; in my case PyCharm no longer complains about the force=True argument.

Did basicConfig's arguments change in a specific version? You can always check the docs for that: https://docs.python.org/3/library/logging.html#logging.basicConfig

Changed in version 3.2: The style argument was added.
Changed in version 3.3: The handlers argument was added. Additional checks were added to catch situations where incompatible arguments are specified (e.g. handlers together with stream or filename, or stream together with filename).
Changed in version 3.8: The force argument was added.
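For what it's worth, the handlers argument is accepted at runtime on Python 3.3+ regardless of the PyCharm inspection; a minimal sketch (the log file name is illustrative) that can be run to confirm:

```python
# Minimal runtime check: basicConfig(handlers=...) is valid on Python 3.3+,
# regardless of the PyCharm inspection warning. The log file name is illustrative.
import logging
from logging.handlers import TimedRotatingFileHandler

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
    handlers=[
        TimedRotatingFileHandler('app.log', when='midnight', interval=1),
        logging.StreamHandler(),
    ],
)
logging.getLogger(__name__).info("handlers argument accepted at runtime")
```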
question_vote: 9
answer_vote: 6

question_id: 61,346,100
creation_date: 2020-4-21
link: https://stackoverflow.com/questions/61346100/plotly-how-to-style-a-plotly-figure-so-that-it-doesnt-display-gaps-for-missing
I have a plotly graph of the EUR/JPY exchange rate across a few months in 15-minute intervals, so as a result there is no data from Friday evenings to Sunday evenings. Here is a portion of the data; note the skip in the index (type: DatetimeIndex) over the weekend. Plotting this data in plotly results in a gap over the missing dates. Using the dataframe above:

import plotly.graph_objs as go

candlesticks = go.Candlestick(x=data.index,
                              open=data['Open'],
                              high=data['High'],
                              low=data['Low'],
                              close=data['Close'])
fig = go.Figure(layout=cf_layout)
fig.add_trace(trace=candlesticks)
fig.show()

Output: As you can see, there are gaps where the missing dates are. One solution I've found online is to change the index to text using:

data.index = data.index.strftime("%d-%m-%Y %H:%M:%S")

and plotting it again, which admittedly does work, but has its own problem: the x-axis labels look atrocious. I would like to produce a graph like the second plot, where there are no gaps, but with the x-axis displayed as it is on the first graph, or at least in a much more concise and responsive format, as close to the first graph as possible. Thank you in advance for any help!
Even if some dates are missing in your dataset, plotly interprets your dates as date values and shows even the missing dates on your timeline. One solution is to grab the first and last dates, build a complete timeline, find out which dates are missing from your original dataset, and include those dates in:

fig.update_xaxes(rangebreaks=[dict(values=dt_breaks)])

This will turn the figure with gaps into one without them.

Complete code:

import plotly.graph_objects as go
from datetime import datetime
import pandas as pd
import numpy as np

# sample data
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')

# remove some dates to build a similar case as in the question
df = df.drop(df.index[75:110])
df = df.drop(df.index[210:250])
df = df.drop(df.index[460:480])

# build complete timeline from start date to end date
dt_all = pd.date_range(start=df['Date'].iloc[0], end=df['Date'].iloc[-1])

# retrieve the dates that ARE in the original dataset
dt_obs = [d.strftime("%Y-%m-%d") for d in pd.to_datetime(df['Date'])]

# define dates with missing values
dt_breaks = [d for d in dt_all.strftime("%Y-%m-%d").tolist() if not d in dt_obs]

# make figure
fig = go.Figure(data=[go.Candlestick(x=df['Date'],
                                     open=df['AAPL.Open'],
                                     high=df['AAPL.High'],
                                     low=df['AAPL.Low'],
                                     close=df['AAPL.Close'])])

# hide dates with no values
fig.update_xaxes(rangebreaks=[dict(values=dt_breaks)])
fig.update_layout(yaxis_title='AAPL Stock')
fig.show()
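Because the question is specifically about intraday forex data with weekend gaps, a hedged sketch of the same rangebreaks idea using day-of-week bounds may be simpler than enumerating every missing timestamp (the 15-minute sample data below is synthetic):

```python
# Hedged sketch: hide weekends on an intraday candlestick chart without listing
# every missing timestamp. The 15-minute OHLC data below is synthetic.
import numpy as np
import pandas as pd
import plotly.graph_objects as go

idx = pd.date_range("2020-04-13", "2020-04-24 23:45", freq="15min")
idx = idx[idx.dayofweek < 5]                    # weekdays only, like forex data
close = 117 + np.cumsum(np.random.randn(len(idx))) * 0.02
data = pd.DataFrame({"Open": close, "High": close + 0.05,
                     "Low": close - 0.05, "Close": close}, index=idx)

fig = go.Figure(go.Candlestick(x=data.index, open=data["Open"], high=data["High"],
                               low=data["Low"], close=data["Close"]))

# a day-of-week rangebreak removes Saturday/Sunday from the axis
fig.update_xaxes(rangebreaks=[dict(bounds=["sat", "mon"])])
fig.show()
```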
question_vote: 14
answer_vote: 15

question_id: 61,238,162
creation_date: 2020-4-15
link: https://stackoverflow.com/questions/61238162/why-cant-i-import-candlestick-ohlc-from-mplfinance
So I have been able to successfully install mplfinance with pip, and when I import it alone I receive no error. Though when I do:

from mplfinance import candlestick_ohlc

I get the error:

ImportError: cannot import name 'candlestick_ohlc' from 'mplfinance'

I have checked the command prompt again, and it says it has successfully installed mplfinance. Why am I receiving this error?
So from what I understand, the Matplotlib finance API has changed: to access the old API with the new mplfinance package installed, change statements from

from mpl_finance import candlestick_ohlc

to:

from mplfinance.original_flavor import candlestick_ohlc

and then it should work fine.
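For context, a hedged sketch of how the old-flavor candlestick_ohlc is typically used once the corrected import works; the OHLC quotes are made up:

```python
# Hedged sketch of the old-flavor API after the corrected import.
# The quotes are made up: (date-as-number, open, high, low, close).
import matplotlib.pyplot as plt
from mplfinance.original_flavor import candlestick_ohlc

quotes = [
    (1, 10.0, 10.8, 9.7, 10.5),
    (2, 10.5, 11.2, 10.3, 10.9),
    (3, 10.9, 11.0, 10.1, 10.2),
]

fig, ax = plt.subplots()
candlestick_ohlc(ax, quotes, width=0.6, colorup='g', colordown='r')
plt.show()
```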
question_vote: 17
answer_vote: 36

question_id: 61,296,763
creation_date: 2020-4-18
link: https://stackoverflow.com/questions/61296763/why-cnn-running-in-python-is-extremely-slow-in-comparison-to-matlab
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it was functioning fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in Tensorflow. This is the code I am using to run the ONNX file in python:

import onnx
from onnx_tf.backend import prepare
import numpy as np
from PIL import Image

onnx_model = onnx.load('trainednet.onnx')
tf_rep = prepare(onnx_model)

filepath = 'filepath.png'
img = Image.open(filepath).resize((224,224)).convert("RGB")
img = array(img).transpose((2,0,1))
img = np.expand_dims(img, 0)
img = img.astype(np.uint8)

probabilities = tf_rep.run(img)
print(probabilities)

When trying to use this code to classify the same test set, it seems to classify the images correctly, but it is very slow and freezes my computer, reaching memory usage of 95%+ at some points. I also noticed that while classifying, the command prompt prints this:

2020-04-18 18:26:39.214286: W tensorflow/core/grappler/optimizers/meta_optimizer.cc:530] constant_folding failed: Deadline exceeded: constant_folding exceeded deadline., time = 486776.938ms.

Is there any way I can make this python code classify faster?
In this case, it appears that the Grappler optimization suite has encountered some kind of infinite loop or memory leak. I would recommend filing an issue against the Github repo. It's challenging to debug why constant folding is taking so long, but you may have better performance using the ONNX TensorRT backend as compared to the TensorFlow backend: it achieves better performance on Nvidia GPUs while compiling typical graphs more quickly. Constant folding usually doesn't provide large speedups for well optimized models.

import onnx
import onnx_tensorrt.backend as backend
import numpy as np
from PIL import Image

model = onnx.load('trainednet.onnx')
engine = backend.prepare(model, device='CUDA:1')

filepath = 'filepath.png'
img = Image.open(filepath).resize((224,224)).convert("RGB")
img = np.array(img).transpose((2,0,1))
img = np.expand_dims(img, 0)
img = img.astype(np.uint8)

output_data = engine.run(img)[0]
print(output_data)
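As an editorial aside not in the original answer: if an Nvidia GPU/TensorRT stack is not available, onnxruntime is another common way to run an exported ONNX model quickly on CPU. A minimal hedged sketch, with the file name and input shape following the question and the input name looked up at runtime:

```python
# Hedged alternative (not from the original answer): run the exported model with
# onnxruntime instead of the onnx-tf backend. File name and 224x224 input follow
# the question; adjust dtype/layout to whatever the exported network expects.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession('trainednet.onnx')
input_name = sess.get_inputs()[0].name            # model-defined input name

img = Image.open('filepath.png').resize((224, 224)).convert("RGB")
x = np.array(img).transpose((2, 0, 1))[np.newaxis, ...].astype(np.float32)

probabilities = sess.run(None, {input_name: x})[0]
print(probabilities)
```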
question_vote: 8
answer_vote: 1

question_id: 61,333,273
creation_date: 2020-4-20
link: https://stackoverflow.com/questions/61333273/how-to-use-edge-bundling-with-networkx-and-matplotlib-in-python
I've created a toy graph with the iris dataset. My layout is from PCA ordination which separates out the nodes nicely. I've recently discovered edge bundling. Does anybody know of a way to do this with matplotlib and networkx? from sklearn.decomposition import PCA import pandas as pd import networkx as nx import matplotlib.pyplot as plt import numpy as np # Data X_iris = pd.DataFrame({'sepal_length': {'iris_0': 5.1, 'iris_1': 4.9, 'iris_2': 4.7, 'iris_3': 4.6, 'iris_4': 5.0, 'iris_5': 5.4, 'iris_6': 4.6, 'iris_7': 5.0, 'iris_8': 4.4, 'iris_9': 4.9, 'iris_10': 5.4, 'iris_11': 4.8, 'iris_12': 4.8, 'iris_13': 4.3, 'iris_14': 5.8, 'iris_15': 5.7, 'iris_16': 5.4, 'iris_17': 5.1, 'iris_18': 5.7, 'iris_19': 5.1, 'iris_20': 5.4, 'iris_21': 5.1, 'iris_22': 4.6, 'iris_23': 5.1, 'iris_24': 4.8, 'iris_25': 5.0, 'iris_26': 5.0, 'iris_27': 5.2, 'iris_28': 5.2, 'iris_29': 4.7, 'iris_30': 4.8, 'iris_31': 5.4, 'iris_32': 5.2, 'iris_33': 5.5, 'iris_34': 4.9, 'iris_35': 5.0, 'iris_36': 5.5, 'iris_37': 4.9, 'iris_38': 4.4, 'iris_39': 5.1, 'iris_40': 5.0, 'iris_41': 4.5, 'iris_42': 4.4, 'iris_43': 5.0, 'iris_44': 5.1, 'iris_45': 4.8, 'iris_46': 5.1, 'iris_47': 4.6, 'iris_48': 5.3, 'iris_49': 5.0, 'iris_50': 7.0, 'iris_51': 6.4, 'iris_52': 6.9, 'iris_53': 5.5, 'iris_54': 6.5, 'iris_55': 5.7, 'iris_56': 6.3, 'iris_57': 4.9, 'iris_58': 6.6, 'iris_59': 5.2, 'iris_60': 5.0, 'iris_61': 5.9, 'iris_62': 6.0, 'iris_63': 6.1, 'iris_64': 5.6, 'iris_65': 6.7, 'iris_66': 5.6, 'iris_67': 5.8, 'iris_68': 6.2, 'iris_69': 5.6, 'iris_70': 5.9, 'iris_71': 6.1, 'iris_72': 6.3, 'iris_73': 6.1, 'iris_74': 6.4, 'iris_75': 6.6, 'iris_76': 6.8, 'iris_77': 6.7, 'iris_78': 6.0, 'iris_79': 5.7, 'iris_80': 5.5, 'iris_81': 5.5, 'iris_82': 5.8, 'iris_83': 6.0, 'iris_84': 5.4, 'iris_85': 6.0, 'iris_86': 6.7, 'iris_87': 6.3, 'iris_88': 5.6, 'iris_89': 5.5, 'iris_90': 5.5, 'iris_91': 6.1, 'iris_92': 5.8, 'iris_93': 5.0, 'iris_94': 5.6, 'iris_95': 5.7, 'iris_96': 5.7, 'iris_97': 6.2, 'iris_98': 5.1, 'iris_99': 5.7, 'iris_100': 6.3, 'iris_101': 5.8, 'iris_102': 7.1, 'iris_103': 6.3, 'iris_104': 6.5, 'iris_105': 7.6, 'iris_106': 4.9, 'iris_107': 7.3, 'iris_108': 6.7, 'iris_109': 7.2, 'iris_110': 6.5, 'iris_111': 6.4, 'iris_112': 6.8, 'iris_113': 5.7, 'iris_114': 5.8, 'iris_115': 6.4, 'iris_116': 6.5, 'iris_117': 7.7, 'iris_118': 7.7, 'iris_119': 6.0, 'iris_120': 6.9, 'iris_121': 5.6, 'iris_122': 7.7, 'iris_123': 6.3, 'iris_124': 6.7, 'iris_125': 7.2, 'iris_126': 6.2, 'iris_127': 6.1, 'iris_128': 6.4, 'iris_129': 7.2, 'iris_130': 7.4, 'iris_131': 7.9, 'iris_132': 6.4, 'iris_133': 6.3, 'iris_134': 6.1, 'iris_135': 7.7, 'iris_136': 6.3, 'iris_137': 6.4, 'iris_138': 6.0, 'iris_139': 6.9, 'iris_140': 6.7, 'iris_141': 6.9, 'iris_142': 5.8, 'iris_143': 6.8, 'iris_144': 6.7, 'iris_145': 6.7, 'iris_146': 6.3, 'iris_147': 6.5, 'iris_148': 6.2, 'iris_149': 5.9}, 'sepal_width': {'iris_0': 3.5, 'iris_1': 3.0, 'iris_2': 3.2, 'iris_3': 3.1, 'iris_4': 3.6, 'iris_5': 3.9, 'iris_6': 3.4, 'iris_7': 3.4, 'iris_8': 2.9, 'iris_9': 3.1, 'iris_10': 3.7, 'iris_11': 3.4, 'iris_12': 3.0, 'iris_13': 3.0, 'iris_14': 4.0, 'iris_15': 4.4, 'iris_16': 3.9, 'iris_17': 3.5, 'iris_18': 3.8, 'iris_19': 3.8, 'iris_20': 3.4, 'iris_21': 3.7, 'iris_22': 3.6, 'iris_23': 3.3, 'iris_24': 3.4, 'iris_25': 3.0, 'iris_26': 3.4, 'iris_27': 3.5, 'iris_28': 3.4, 'iris_29': 3.2, 'iris_30': 3.1, 'iris_31': 3.4, 'iris_32': 4.1, 'iris_33': 4.2, 'iris_34': 3.1, 'iris_35': 3.2, 'iris_36': 3.5, 'iris_37': 3.6, 'iris_38': 3.0, 'iris_39': 3.4, 'iris_40': 3.5, 'iris_41': 2.3, 'iris_42': 3.2, 'iris_43': 
3.5, 'iris_44': 3.8, 'iris_45': 3.0, 'iris_46': 3.8, 'iris_47': 3.2, 'iris_48': 3.7, 'iris_49': 3.3, 'iris_50': 3.2, 'iris_51': 3.2, 'iris_52': 3.1, 'iris_53': 2.3, 'iris_54': 2.8, 'iris_55': 2.8, 'iris_56': 3.3, 'iris_57': 2.4, 'iris_58': 2.9, 'iris_59': 2.7, 'iris_60': 2.0, 'iris_61': 3.0, 'iris_62': 2.2, 'iris_63': 2.9, 'iris_64': 2.9, 'iris_65': 3.1, 'iris_66': 3.0, 'iris_67': 2.7, 'iris_68': 2.2, 'iris_69': 2.5, 'iris_70': 3.2, 'iris_71': 2.8, 'iris_72': 2.5, 'iris_73': 2.8, 'iris_74': 2.9, 'iris_75': 3.0, 'iris_76': 2.8, 'iris_77': 3.0, 'iris_78': 2.9, 'iris_79': 2.6, 'iris_80': 2.4, 'iris_81': 2.4, 'iris_82': 2.7, 'iris_83': 2.7, 'iris_84': 3.0, 'iris_85': 3.4, 'iris_86': 3.1, 'iris_87': 2.3, 'iris_88': 3.0, 'iris_89': 2.5, 'iris_90': 2.6, 'iris_91': 3.0, 'iris_92': 2.6, 'iris_93': 2.3, 'iris_94': 2.7, 'iris_95': 3.0, 'iris_96': 2.9, 'iris_97': 2.9, 'iris_98': 2.5, 'iris_99': 2.8, 'iris_100': 3.3, 'iris_101': 2.7, 'iris_102': 3.0, 'iris_103': 2.9, 'iris_104': 3.0, 'iris_105': 3.0, 'iris_106': 2.5, 'iris_107': 2.9, 'iris_108': 2.5, 'iris_109': 3.6, 'iris_110': 3.2, 'iris_111': 2.7, 'iris_112': 3.0, 'iris_113': 2.5, 'iris_114': 2.8, 'iris_115': 3.2, 'iris_116': 3.0, 'iris_117': 3.8, 'iris_118': 2.6, 'iris_119': 2.2, 'iris_120': 3.2, 'iris_121': 2.8, 'iris_122': 2.8, 'iris_123': 2.7, 'iris_124': 3.3, 'iris_125': 3.2, 'iris_126': 2.8, 'iris_127': 3.0, 'iris_128': 2.8, 'iris_129': 3.0, 'iris_130': 2.8, 'iris_131': 3.8, 'iris_132': 2.8, 'iris_133': 2.8, 'iris_134': 2.6, 'iris_135': 3.0, 'iris_136': 3.4, 'iris_137': 3.1, 'iris_138': 3.0, 'iris_139': 3.1, 'iris_140': 3.1, 'iris_141': 3.1, 'iris_142': 2.7, 'iris_143': 3.2, 'iris_144': 3.3, 'iris_145': 3.0, 'iris_146': 2.5, 'iris_147': 3.0, 'iris_148': 3.4, 'iris_149': 3.0}, 'petal_length': {'iris_0': 1.4, 'iris_1': 1.4, 'iris_2': 1.3, 'iris_3': 1.5, 'iris_4': 1.4, 'iris_5': 1.7, 'iris_6': 1.4, 'iris_7': 1.5, 'iris_8': 1.4, 'iris_9': 1.5, 'iris_10': 1.5, 'iris_11': 1.6, 'iris_12': 1.4, 'iris_13': 1.1, 'iris_14': 1.2, 'iris_15': 1.5, 'iris_16': 1.3, 'iris_17': 1.4, 'iris_18': 1.7, 'iris_19': 1.5, 'iris_20': 1.7, 'iris_21': 1.5, 'iris_22': 1.0, 'iris_23': 1.7, 'iris_24': 1.9, 'iris_25': 1.6, 'iris_26': 1.6, 'iris_27': 1.5, 'iris_28': 1.4, 'iris_29': 1.6, 'iris_30': 1.6, 'iris_31': 1.5, 'iris_32': 1.5, 'iris_33': 1.4, 'iris_34': 1.5, 'iris_35': 1.2, 'iris_36': 1.3, 'iris_37': 1.4, 'iris_38': 1.3, 'iris_39': 1.5, 'iris_40': 1.3, 'iris_41': 1.3, 'iris_42': 1.3, 'iris_43': 1.6, 'iris_44': 1.9, 'iris_45': 1.4, 'iris_46': 1.6, 'iris_47': 1.4, 'iris_48': 1.5, 'iris_49': 1.4, 'iris_50': 4.7, 'iris_51': 4.5, 'iris_52': 4.9, 'iris_53': 4.0, 'iris_54': 4.6, 'iris_55': 4.5, 'iris_56': 4.7, 'iris_57': 3.3, 'iris_58': 4.6, 'iris_59': 3.9, 'iris_60': 3.5, 'iris_61': 4.2, 'iris_62': 4.0, 'iris_63': 4.7, 'iris_64': 3.6, 'iris_65': 4.4, 'iris_66': 4.5, 'iris_67': 4.1, 'iris_68': 4.5, 'iris_69': 3.9, 'iris_70': 4.8, 'iris_71': 4.0, 'iris_72': 4.9, 'iris_73': 4.7, 'iris_74': 4.3, 'iris_75': 4.4, 'iris_76': 4.8, 'iris_77': 5.0, 'iris_78': 4.5, 'iris_79': 3.5, 'iris_80': 3.8, 'iris_81': 3.7, 'iris_82': 3.9, 'iris_83': 5.1, 'iris_84': 4.5, 'iris_85': 4.5, 'iris_86': 4.7, 'iris_87': 4.4, 'iris_88': 4.1, 'iris_89': 4.0, 'iris_90': 4.4, 'iris_91': 4.6, 'iris_92': 4.0, 'iris_93': 3.3, 'iris_94': 4.2, 'iris_95': 4.2, 'iris_96': 4.2, 'iris_97': 4.3, 'iris_98': 3.0, 'iris_99': 4.1, 'iris_100': 6.0, 'iris_101': 5.1, 'iris_102': 5.9, 'iris_103': 5.6, 'iris_104': 5.8, 'iris_105': 6.6, 'iris_106': 4.5, 'iris_107': 6.3, 'iris_108': 5.8, 'iris_109': 6.1, 'iris_110': 5.1, 
'iris_111': 5.3, 'iris_112': 5.5, 'iris_113': 5.0, 'iris_114': 5.1, 'iris_115': 5.3, 'iris_116': 5.5, 'iris_117': 6.7, 'iris_118': 6.9, 'iris_119': 5.0, 'iris_120': 5.7, 'iris_121': 4.9, 'iris_122': 6.7, 'iris_123': 4.9, 'iris_124': 5.7, 'iris_125': 6.0, 'iris_126': 4.8, 'iris_127': 4.9, 'iris_128': 5.6, 'iris_129': 5.8, 'iris_130': 6.1, 'iris_131': 6.4, 'iris_132': 5.6, 'iris_133': 5.1, 'iris_134': 5.6, 'iris_135': 6.1, 'iris_136': 5.6, 'iris_137': 5.5, 'iris_138': 4.8, 'iris_139': 5.4, 'iris_140': 5.6, 'iris_141': 5.1, 'iris_142': 5.1, 'iris_143': 5.9, 'iris_144': 5.7, 'iris_145': 5.2, 'iris_146': 5.0, 'iris_147': 5.2, 'iris_148': 5.4, 'iris_149': 5.1}, 'petal_width': {'iris_0': 0.2, 'iris_1': 0.2, 'iris_2': 0.2, 'iris_3': 0.2, 'iris_4': 0.2, 'iris_5': 0.4, 'iris_6': 0.3, 'iris_7': 0.2, 'iris_8': 0.2, 'iris_9': 0.1, 'iris_10': 0.2, 'iris_11': 0.2, 'iris_12': 0.1, 'iris_13': 0.1, 'iris_14': 0.2, 'iris_15': 0.4, 'iris_16': 0.4, 'iris_17': 0.3, 'iris_18': 0.3, 'iris_19': 0.3, 'iris_20': 0.2, 'iris_21': 0.4, 'iris_22': 0.2, 'iris_23': 0.5, 'iris_24': 0.2, 'iris_25': 0.2, 'iris_26': 0.4, 'iris_27': 0.2, 'iris_28': 0.2, 'iris_29': 0.2, 'iris_30': 0.2, 'iris_31': 0.4, 'iris_32': 0.1, 'iris_33': 0.2, 'iris_34': 0.2, 'iris_35': 0.2, 'iris_36': 0.2, 'iris_37': 0.1, 'iris_38': 0.2, 'iris_39': 0.2, 'iris_40': 0.3, 'iris_41': 0.3, 'iris_42': 0.2, 'iris_43': 0.6, 'iris_44': 0.4, 'iris_45': 0.3, 'iris_46': 0.2, 'iris_47': 0.2, 'iris_48': 0.2, 'iris_49': 0.2, 'iris_50': 1.4, 'iris_51': 1.5, 'iris_52': 1.5, 'iris_53': 1.3, 'iris_54': 1.5, 'iris_55': 1.3, 'iris_56': 1.6, 'iris_57': 1.0, 'iris_58': 1.3, 'iris_59': 1.4, 'iris_60': 1.0, 'iris_61': 1.5, 'iris_62': 1.0, 'iris_63': 1.4, 'iris_64': 1.3, 'iris_65': 1.4, 'iris_66': 1.5, 'iris_67': 1.0, 'iris_68': 1.5, 'iris_69': 1.1, 'iris_70': 1.8, 'iris_71': 1.3, 'iris_72': 1.5, 'iris_73': 1.2, 'iris_74': 1.3, 'iris_75': 1.4, 'iris_76': 1.4, 'iris_77': 1.7, 'iris_78': 1.5, 'iris_79': 1.0, 'iris_80': 1.1, 'iris_81': 1.0, 'iris_82': 1.2, 'iris_83': 1.6, 'iris_84': 1.5, 'iris_85': 1.6, 'iris_86': 1.5, 'iris_87': 1.3, 'iris_88': 1.3, 'iris_89': 1.3, 'iris_90': 1.2, 'iris_91': 1.4, 'iris_92': 1.2, 'iris_93': 1.0, 'iris_94': 1.3, 'iris_95': 1.2, 'iris_96': 1.3, 'iris_97': 1.3, 'iris_98': 1.1, 'iris_99': 1.3, 'iris_100': 2.5, 'iris_101': 1.9, 'iris_102': 2.1, 'iris_103': 1.8, 'iris_104': 2.2, 'iris_105': 2.1, 'iris_106': 1.7, 'iris_107': 1.8, 'iris_108': 1.8, 'iris_109': 2.5, 'iris_110': 2.0, 'iris_111': 1.9, 'iris_112': 2.1, 'iris_113': 2.0, 'iris_114': 2.4, 'iris_115': 2.3, 'iris_116': 1.8, 'iris_117': 2.2, 'iris_118': 2.3, 'iris_119': 1.5, 'iris_120': 2.3, 'iris_121': 2.0, 'iris_122': 2.0, 'iris_123': 1.8, 'iris_124': 2.1, 'iris_125': 1.8, 'iris_126': 1.8, 'iris_127': 1.8, 'iris_128': 2.1, 'iris_129': 1.6, 'iris_130': 1.9, 'iris_131': 2.0, 'iris_132': 2.2, 'iris_133': 1.5, 'iris_134': 1.4, 'iris_135': 2.3, 'iris_136': 2.4, 'iris_137': 1.8, 'iris_138': 1.8, 'iris_139': 2.1, 'iris_140': 2.4, 'iris_141': 2.3, 'iris_142': 1.9, 'iris_143': 2.3, 'iris_144': 2.5, 'iris_145': 2.3, 'iris_146': 1.9, 'iris_147': 2.0, 'iris_148': 2.3, 'iris_149': 1.8}}) y_iris = pd.Series({'iris_0': 'setosa', 'iris_1': 'setosa', 'iris_2': 'setosa', 'iris_3': 'setosa', 'iris_4': 'setosa', 'iris_5': 'setosa', 'iris_6': 'setosa', 'iris_7': 'setosa', 'iris_8': 'setosa', 'iris_9': 'setosa', 'iris_10': 'setosa', 'iris_11': 'setosa', 'iris_12': 'setosa', 'iris_13': 'setosa', 'iris_14': 'setosa', 'iris_15': 'setosa', 'iris_16': 'setosa', 'iris_17': 'setosa', 'iris_18': 'setosa', 'iris_19': 'setosa', 
'iris_20': 'setosa', 'iris_21': 'setosa', 'iris_22': 'setosa', 'iris_23': 'setosa', 'iris_24': 'setosa', 'iris_25': 'setosa', 'iris_26': 'setosa', 'iris_27': 'setosa', 'iris_28': 'setosa', 'iris_29': 'setosa', 'iris_30': 'setosa', 'iris_31': 'setosa', 'iris_32': 'setosa', 'iris_33': 'setosa', 'iris_34': 'setosa', 'iris_35': 'setosa', 'iris_36': 'setosa', 'iris_37': 'setosa', 'iris_38': 'setosa', 'iris_39': 'setosa', 'iris_40': 'setosa', 'iris_41': 'setosa', 'iris_42': 'setosa', 'iris_43': 'setosa', 'iris_44': 'setosa', 'iris_45': 'setosa', 'iris_46': 'setosa', 'iris_47': 'setosa', 'iris_48': 'setosa', 'iris_49': 'setosa', 'iris_50': 'versicolor', 'iris_51': 'versicolor', 'iris_52': 'versicolor', 'iris_53': 'versicolor', 'iris_54': 'versicolor', 'iris_55': 'versicolor', 'iris_56': 'versicolor', 'iris_57': 'versicolor', 'iris_58': 'versicolor', 'iris_59': 'versicolor', 'iris_60': 'versicolor', 'iris_61': 'versicolor', 'iris_62': 'versicolor', 'iris_63': 'versicolor', 'iris_64': 'versicolor', 'iris_65': 'versicolor', 'iris_66': 'versicolor', 'iris_67': 'versicolor', 'iris_68': 'versicolor', 'iris_69': 'versicolor', 'iris_70': 'versicolor', 'iris_71': 'versicolor', 'iris_72': 'versicolor', 'iris_73': 'versicolor', 'iris_74': 'versicolor', 'iris_75': 'versicolor', 'iris_76': 'versicolor', 'iris_77': 'versicolor', 'iris_78': 'versicolor', 'iris_79': 'versicolor', 'iris_80': 'versicolor', 'iris_81': 'versicolor', 'iris_82': 'versicolor', 'iris_83': 'versicolor', 'iris_84': 'versicolor', 'iris_85': 'versicolor', 'iris_86': 'versicolor', 'iris_87': 'versicolor', 'iris_88': 'versicolor', 'iris_89': 'versicolor', 'iris_90': 'versicolor', 'iris_91': 'versicolor', 'iris_92': 'versicolor', 'iris_93': 'versicolor', 'iris_94': 'versicolor', 'iris_95': 'versicolor', 'iris_96': 'versicolor', 'iris_97': 'versicolor', 'iris_98': 'versicolor', 'iris_99': 'versicolor', 'iris_100': 'virginica', 'iris_101': 'virginica', 'iris_102': 'virginica', 'iris_103': 'virginica', 'iris_104': 'virginica', 'iris_105': 'virginica', 'iris_106': 'virginica', 'iris_107': 'virginica', 'iris_108': 'virginica', 'iris_109': 'virginica', 'iris_110': 'virginica', 'iris_111': 'virginica', 'iris_112': 'virginica', 'iris_113': 'virginica', 'iris_114': 'virginica', 'iris_115': 'virginica', 'iris_116': 'virginica', 'iris_117': 'virginica', 'iris_118': 'virginica', 'iris_119': 'virginica', 'iris_120': 'virginica', 'iris_121': 'virginica', 'iris_122': 'virginica', 'iris_123': 'virginica', 'iris_124': 'virginica', 'iris_125': 'virginica', 'iris_126': 'virginica', 'iris_127': 'virginica', 'iris_128': 'virginica', 'iris_129': 'virginica', 'iris_130': 'virginica', 'iris_131': 'virginica', 'iris_132': 'virginica', 'iris_133': 'virginica', 'iris_134': 'virginica', 'iris_135': 'virginica', 'iris_136': 'virginica', 'iris_137': 'virginica', 'iris_138': 'virginica', 'iris_139': 'virginica', 'iris_140': 'virginica', 'iris_141': 'virginica', 'iris_142': 'virginica', 'iris_143': 'virginica', 'iris_144': 'virginica', 'iris_145': 'virginica', 'iris_146': 'virginica', 'iris_147': 'virginica', 'iris_148': 'virginica', 'iris_149': 'virginica'}) c_iris = pd.Series({'iris_0': '#db5f57', 'iris_1': '#db5f57', 'iris_2': '#db5f57', 'iris_3': '#db5f57', 'iris_4': '#db5f57', 'iris_5': '#db5f57', 'iris_6': '#db5f57', 'iris_7': '#db5f57', 'iris_8': '#db5f57', 'iris_9': '#db5f57', 'iris_10': '#db5f57', 'iris_11': '#db5f57', 'iris_12': '#db5f57', 'iris_13': '#db5f57', 'iris_14': '#db5f57', 'iris_15': '#db5f57', 'iris_16': '#db5f57', 'iris_17': '#db5f57', 'iris_18': 
'#db5f57', 'iris_19': '#db5f57', 'iris_20': '#db5f57', 'iris_21': '#db5f57', 'iris_22': '#db5f57', 'iris_23': '#db5f57', 'iris_24': '#db5f57', 'iris_25': '#db5f57', 'iris_26': '#db5f57', 'iris_27': '#db5f57', 'iris_28': '#db5f57', 'iris_29': '#db5f57', 'iris_30': '#db5f57', 'iris_31': '#db5f57', 'iris_32': '#db5f57', 'iris_33': '#db5f57', 'iris_34': '#db5f57', 'iris_35': '#db5f57', 'iris_36': '#db5f57', 'iris_37': '#db5f57', 'iris_38': '#db5f57', 'iris_39': '#db5f57', 'iris_40': '#db5f57', 'iris_41': '#db5f57', 'iris_42': '#db5f57', 'iris_43': '#db5f57', 'iris_44': '#db5f57', 'iris_45': '#db5f57', 'iris_46': '#db5f57', 'iris_47': '#db5f57', 'iris_48': '#db5f57', 'iris_49': '#db5f57', 'iris_50': '#57db5f', 'iris_51': '#57db5f', 'iris_52': '#57db5f', 'iris_53': '#57db5f', 'iris_54': '#57db5f', 'iris_55': '#57db5f', 'iris_56': '#57db5f', 'iris_57': '#57db5f', 'iris_58': '#57db5f', 'iris_59': '#57db5f', 'iris_60': '#57db5f', 'iris_61': '#57db5f', 'iris_62': '#57db5f', 'iris_63': '#57db5f', 'iris_64': '#57db5f', 'iris_65': '#57db5f', 'iris_66': '#57db5f', 'iris_67': '#57db5f', 'iris_68': '#57db5f', 'iris_69': '#57db5f', 'iris_70': '#57db5f', 'iris_71': '#57db5f', 'iris_72': '#57db5f', 'iris_73': '#57db5f', 'iris_74': '#57db5f', 'iris_75': '#57db5f', 'iris_76': '#57db5f', 'iris_77': '#57db5f', 'iris_78': '#57db5f', 'iris_79': '#57db5f', 'iris_80': '#57db5f', 'iris_81': '#57db5f', 'iris_82': '#57db5f', 'iris_83': '#57db5f', 'iris_84': '#57db5f', 'iris_85': '#57db5f', 'iris_86': '#57db5f', 'iris_87': '#57db5f', 'iris_88': '#57db5f', 'iris_89': '#57db5f', 'iris_90': '#57db5f', 'iris_91': '#57db5f', 'iris_92': '#57db5f', 'iris_93': '#57db5f', 'iris_94': '#57db5f', 'iris_95': '#57db5f', 'iris_96': '#57db5f', 'iris_97': '#57db5f', 'iris_98': '#57db5f', 'iris_99': '#57db5f', 'iris_100': '#5f57db', 'iris_101': '#5f57db', 'iris_102': '#5f57db', 'iris_103': '#5f57db', 'iris_104': '#5f57db', 'iris_105': '#5f57db', 'iris_106': '#5f57db', 'iris_107': '#5f57db', 'iris_108': '#5f57db', 'iris_109': '#5f57db', 'iris_110': '#5f57db', 'iris_111': '#5f57db', 'iris_112': '#5f57db', 'iris_113': '#5f57db', 'iris_114': '#5f57db', 'iris_115': '#5f57db', 'iris_116': '#5f57db', 'iris_117': '#5f57db', 'iris_118': '#5f57db', 'iris_119': '#5f57db', 'iris_120': '#5f57db', 'iris_121': '#5f57db', 'iris_122': '#5f57db', 'iris_123': '#5f57db', 'iris_124': '#5f57db', 'iris_125': '#5f57db', 'iris_126': '#5f57db', 'iris_127': '#5f57db', 'iris_128': '#5f57db', 'iris_129': '#5f57db', 'iris_130': '#5f57db', 'iris_131': '#5f57db', 'iris_132': '#5f57db', 'iris_133': '#5f57db', 'iris_134': '#5f57db', 'iris_135': '#5f57db', 'iris_136': '#5f57db', 'iris_137': '#5f57db', 'iris_138': '#5f57db', 'iris_139': '#5f57db', 'iris_140': '#5f57db', 'iris_141': '#5f57db', 'iris_142': '#5f57db', 'iris_143': '#5f57db', 'iris_144': '#5f57db', 'iris_145': '#5f57db', 'iris_146': '#5f57db', 'iris_147': '#5f57db', 'iris_148': '#5f57db', 'iris_149': '#5f57db'}) # Connections df_dense = X_iris.T.corr("pearson") y_condensed = defaultdict(dict) for edge, w in df_dense.unstack().items(): y_condensed[frozenset(edge)] = w y_condensed = pd.Series(y_condensed) tol_connection = 0.9 # Graph graph = nx.Graph() for edge, w in y_condensed[lambda x: x > tol_connection].items(): if len(edge) == 1: node = list(edge)[0] edge = (node, node) graph.add_edge(*edge, weight=abs(w)) # Plot network nodes = list(graph.nodes()) weights = np.asarray(list(map(lambda x: x[-1]["weight"], graph.edges(data=True))))**2 pos = dict(zip(nodes, PCA(n_components=2, 
random_state=0).fit_transform(X_iris.loc[nodes]))) with plt.style.context("seaborn-white"): fig, ax = plt.subplots(figsize=(8,8)) nx.draw_networkx_nodes(graph, pos=pos, node_color=c_iris[nodes], ax=ax, node_size=100, edgecolors="white", linewidths=1) nx.draw_networkx_edges(graph, pos=pos, width=weights, alpha=0.1618, ax=ax) Here is an example of edge bundling from datashader (not for this graph). Aside from doing this in datashader and then loading the image separately, how could this be done in matplotlib?
This can be done fairly easily for matplotlib using hammer_bundle from datashader. Datashader is a python library which uses a lot of pandas DataFrames, so getting the data into a format for matplotlib is fairly easy. (I assume the main goal is to plot this easily in matplotlib, but if one really doesn't want to install datashader, the hammer_bundle file from datashader could be used separately without a full install.) hammer_bundle wants two dataframes, one for the nodes (with columns=['name', 'x', 'y']) and one for the edges (with columns=['x', 'y']) -- at least these specifics work, and I don't know fully what flexibility exists). I'll include the original code for completeness, where all I've added is a couple of imports: from sklearn.decomposition import PCA import pandas as pd import networkx as nx import matplotlib.pyplot as plt import numpy as np from collections import defaultdict from datashader.bundling import hammer_bundle # Data X_iris = pd.DataFrame({'sepal_length': {'iris_0': 5.1, 'iris_1': 4.9, 'iris_2': 4.7, 'iris_3': 4.6, 'iris_4': 5.0, 'iris_5': 5.4, 'iris_6': 4.6, 'iris_7': 5.0, 'iris_8': 4.4, 'iris_9': 4.9, 'iris_10': 5.4, 'iris_11': 4.8, 'iris_12': 4.8, 'iris_13': 4.3, 'iris_14': 5.8, 'iris_15': 5.7, 'iris_16': 5.4, 'iris_17': 5.1, 'iris_18': 5.7, 'iris_19': 5.1, 'iris_20': 5.4, 'iris_21': 5.1, 'iris_22': 4.6, 'iris_23': 5.1, 'iris_24': 4.8, 'iris_25': 5.0, 'iris_26': 5.0, 'iris_27': 5.2, 'iris_28': 5.2, 'iris_29': 4.7, 'iris_30': 4.8, 'iris_31': 5.4, 'iris_32': 5.2, 'iris_33': 5.5, 'iris_34': 4.9, 'iris_35': 5.0, 'iris_36': 5.5, 'iris_37': 4.9, 'iris_38': 4.4, 'iris_39': 5.1, 'iris_40': 5.0, 'iris_41': 4.5, 'iris_42': 4.4, 'iris_43': 5.0, 'iris_44': 5.1, 'iris_45': 4.8, 'iris_46': 5.1, 'iris_47': 4.6, 'iris_48': 5.3, 'iris_49': 5.0, 'iris_50': 7.0, 'iris_51': 6.4, 'iris_52': 6.9, 'iris_53': 5.5, 'iris_54': 6.5, 'iris_55': 5.7, 'iris_56': 6.3, 'iris_57': 4.9, 'iris_58': 6.6, 'iris_59': 5.2, 'iris_60': 5.0, 'iris_61': 5.9, 'iris_62': 6.0, 'iris_63': 6.1, 'iris_64': 5.6, 'iris_65': 6.7, 'iris_66': 5.6, 'iris_67': 5.8, 'iris_68': 6.2, 'iris_69': 5.6, 'iris_70': 5.9, 'iris_71': 6.1, 'iris_72': 6.3, 'iris_73': 6.1, 'iris_74': 6.4, 'iris_75': 6.6, 'iris_76': 6.8, 'iris_77': 6.7, 'iris_78': 6.0, 'iris_79': 5.7, 'iris_80': 5.5, 'iris_81': 5.5, 'iris_82': 5.8, 'iris_83': 6.0, 'iris_84': 5.4, 'iris_85': 6.0, 'iris_86': 6.7, 'iris_87': 6.3, 'iris_88': 5.6, 'iris_89': 5.5, 'iris_90': 5.5, 'iris_91': 6.1, 'iris_92': 5.8, 'iris_93': 5.0, 'iris_94': 5.6, 'iris_95': 5.7, 'iris_96': 5.7, 'iris_97': 6.2, 'iris_98': 5.1, 'iris_99': 5.7, 'iris_100': 6.3, 'iris_101': 5.8, 'iris_102': 7.1, 'iris_103': 6.3, 'iris_104': 6.5, 'iris_105': 7.6, 'iris_106': 4.9, 'iris_107': 7.3, 'iris_108': 6.7, 'iris_109': 7.2, 'iris_110': 6.5, 'iris_111': 6.4, 'iris_112': 6.8, 'iris_113': 5.7, 'iris_114': 5.8, 'iris_115': 6.4, 'iris_116': 6.5, 'iris_117': 7.7, 'iris_118': 7.7, 'iris_119': 6.0, 'iris_120': 6.9, 'iris_121': 5.6, 'iris_122': 7.7, 'iris_123': 6.3, 'iris_124': 6.7, 'iris_125': 7.2, 'iris_126': 6.2, 'iris_127': 6.1, 'iris_128': 6.4, 'iris_129': 7.2, 'iris_130': 7.4, 'iris_131': 7.9, 'iris_132': 6.4, 'iris_133': 6.3, 'iris_134': 6.1, 'iris_135': 7.7, 'iris_136': 6.3, 'iris_137': 6.4, 'iris_138': 6.0, 'iris_139': 6.9, 'iris_140': 6.7, 'iris_141': 6.9, 'iris_142': 5.8, 'iris_143': 6.8, 'iris_144': 6.7, 'iris_145': 6.7, 'iris_146': 6.3, 'iris_147': 6.5, 'iris_148': 6.2, 'iris_149': 5.9}, 'sepal_width': {'iris_0': 3.5, 'iris_1': 3.0, 'iris_2': 3.2, 'iris_3': 3.1, 'iris_4': 3.6, 'iris_5': 3.9, 'iris_6': 
3.4, 'iris_7': 3.4, 'iris_8': 2.9, 'iris_9': 3.1, 'iris_10': 3.7, 'iris_11': 3.4, 'iris_12': 3.0, 'iris_13': 3.0, 'iris_14': 4.0, 'iris_15': 4.4, 'iris_16': 3.9, 'iris_17': 3.5, 'iris_18': 3.8, 'iris_19': 3.8, 'iris_20': 3.4, 'iris_21': 3.7, 'iris_22': 3.6, 'iris_23': 3.3, 'iris_24': 3.4, 'iris_25': 3.0, 'iris_26': 3.4, 'iris_27': 3.5, 'iris_28': 3.4, 'iris_29': 3.2, 'iris_30': 3.1, 'iris_31': 3.4, 'iris_32': 4.1, 'iris_33': 4.2, 'iris_34': 3.1, 'iris_35': 3.2, 'iris_36': 3.5, 'iris_37': 3.6, 'iris_38': 3.0, 'iris_39': 3.4, 'iris_40': 3.5, 'iris_41': 2.3, 'iris_42': 3.2, 'iris_43': 3.5, 'iris_44': 3.8, 'iris_45': 3.0, 'iris_46': 3.8, 'iris_47': 3.2, 'iris_48': 3.7, 'iris_49': 3.3, 'iris_50': 3.2, 'iris_51': 3.2, 'iris_52': 3.1, 'iris_53': 2.3, 'iris_54': 2.8, 'iris_55': 2.8, 'iris_56': 3.3, 'iris_57': 2.4, 'iris_58': 2.9, 'iris_59': 2.7, 'iris_60': 2.0, 'iris_61': 3.0, 'iris_62': 2.2, 'iris_63': 2.9, 'iris_64': 2.9, 'iris_65': 3.1, 'iris_66': 3.0, 'iris_67': 2.7, 'iris_68': 2.2, 'iris_69': 2.5, 'iris_70': 3.2, 'iris_71': 2.8, 'iris_72': 2.5, 'iris_73': 2.8, 'iris_74': 2.9, 'iris_75': 3.0, 'iris_76': 2.8, 'iris_77': 3.0, 'iris_78': 2.9, 'iris_79': 2.6, 'iris_80': 2.4, 'iris_81': 2.4, 'iris_82': 2.7, 'iris_83': 2.7, 'iris_84': 3.0, 'iris_85': 3.4, 'iris_86': 3.1, 'iris_87': 2.3, 'iris_88': 3.0, 'iris_89': 2.5, 'iris_90': 2.6, 'iris_91': 3.0, 'iris_92': 2.6, 'iris_93': 2.3, 'iris_94': 2.7, 'iris_95': 3.0, 'iris_96': 2.9, 'iris_97': 2.9, 'iris_98': 2.5, 'iris_99': 2.8, 'iris_100': 3.3, 'iris_101': 2.7, 'iris_102': 3.0, 'iris_103': 2.9, 'iris_104': 3.0, 'iris_105': 3.0, 'iris_106': 2.5, 'iris_107': 2.9, 'iris_108': 2.5, 'iris_109': 3.6, 'iris_110': 3.2, 'iris_111': 2.7, 'iris_112': 3.0, 'iris_113': 2.5, 'iris_114': 2.8, 'iris_115': 3.2, 'iris_116': 3.0, 'iris_117': 3.8, 'iris_118': 2.6, 'iris_119': 2.2, 'iris_120': 3.2, 'iris_121': 2.8, 'iris_122': 2.8, 'iris_123': 2.7, 'iris_124': 3.3, 'iris_125': 3.2, 'iris_126': 2.8, 'iris_127': 3.0, 'iris_128': 2.8, 'iris_129': 3.0, 'iris_130': 2.8, 'iris_131': 3.8, 'iris_132': 2.8, 'iris_133': 2.8, 'iris_134': 2.6, 'iris_135': 3.0, 'iris_136': 3.4, 'iris_137': 3.1, 'iris_138': 3.0, 'iris_139': 3.1, 'iris_140': 3.1, 'iris_141': 3.1, 'iris_142': 2.7, 'iris_143': 3.2, 'iris_144': 3.3, 'iris_145': 3.0, 'iris_146': 2.5, 'iris_147': 3.0, 'iris_148': 3.4, 'iris_149': 3.0}, 'petal_length': {'iris_0': 1.4, 'iris_1': 1.4, 'iris_2': 1.3, 'iris_3': 1.5, 'iris_4': 1.4, 'iris_5': 1.7, 'iris_6': 1.4, 'iris_7': 1.5, 'iris_8': 1.4, 'iris_9': 1.5, 'iris_10': 1.5, 'iris_11': 1.6, 'iris_12': 1.4, 'iris_13': 1.1, 'iris_14': 1.2, 'iris_15': 1.5, 'iris_16': 1.3, 'iris_17': 1.4, 'iris_18': 1.7, 'iris_19': 1.5, 'iris_20': 1.7, 'iris_21': 1.5, 'iris_22': 1.0, 'iris_23': 1.7, 'iris_24': 1.9, 'iris_25': 1.6, 'iris_26': 1.6, 'iris_27': 1.5, 'iris_28': 1.4, 'iris_29': 1.6, 'iris_30': 1.6, 'iris_31': 1.5, 'iris_32': 1.5, 'iris_33': 1.4, 'iris_34': 1.5, 'iris_35': 1.2, 'iris_36': 1.3, 'iris_37': 1.4, 'iris_38': 1.3, 'iris_39': 1.5, 'iris_40': 1.3, 'iris_41': 1.3, 'iris_42': 1.3, 'iris_43': 1.6, 'iris_44': 1.9, 'iris_45': 1.4, 'iris_46': 1.6, 'iris_47': 1.4, 'iris_48': 1.5, 'iris_49': 1.4, 'iris_50': 4.7, 'iris_51': 4.5, 'iris_52': 4.9, 'iris_53': 4.0, 'iris_54': 4.6, 'iris_55': 4.5, 'iris_56': 4.7, 'iris_57': 3.3, 'iris_58': 4.6, 'iris_59': 3.9, 'iris_60': 3.5, 'iris_61': 4.2, 'iris_62': 4.0, 'iris_63': 4.7, 'iris_64': 3.6, 'iris_65': 4.4, 'iris_66': 4.5, 'iris_67': 4.1, 'iris_68': 4.5, 'iris_69': 3.9, 'iris_70': 4.8, 'iris_71': 4.0, 'iris_72': 4.9, 'iris_73': 4.7, 'iris_74': 4.3, 
'iris_75': 4.4, 'iris_76': 4.8, 'iris_77': 5.0, 'iris_78': 4.5, 'iris_79': 3.5, 'iris_80': 3.8, 'iris_81': 3.7, 'iris_82': 3.9, 'iris_83': 5.1, 'iris_84': 4.5, 'iris_85': 4.5, 'iris_86': 4.7, 'iris_87': 4.4, 'iris_88': 4.1, 'iris_89': 4.0, 'iris_90': 4.4, 'iris_91': 4.6, 'iris_92': 4.0, 'iris_93': 3.3, 'iris_94': 4.2, 'iris_95': 4.2, 'iris_96': 4.2, 'iris_97': 4.3, 'iris_98': 3.0, 'iris_99': 4.1, 'iris_100': 6.0, 'iris_101': 5.1, 'iris_102': 5.9, 'iris_103': 5.6, 'iris_104': 5.8, 'iris_105': 6.6, 'iris_106': 4.5, 'iris_107': 6.3, 'iris_108': 5.8, 'iris_109': 6.1, 'iris_110': 5.1, 'iris_111': 5.3, 'iris_112': 5.5, 'iris_113': 5.0, 'iris_114': 5.1, 'iris_115': 5.3, 'iris_116': 5.5, 'iris_117': 6.7, 'iris_118': 6.9, 'iris_119': 5.0, 'iris_120': 5.7, 'iris_121': 4.9, 'iris_122': 6.7, 'iris_123': 4.9, 'iris_124': 5.7, 'iris_125': 6.0, 'iris_126': 4.8, 'iris_127': 4.9, 'iris_128': 5.6, 'iris_129': 5.8, 'iris_130': 6.1, 'iris_131': 6.4, 'iris_132': 5.6, 'iris_133': 5.1, 'iris_134': 5.6, 'iris_135': 6.1, 'iris_136': 5.6, 'iris_137': 5.5, 'iris_138': 4.8, 'iris_139': 5.4, 'iris_140': 5.6, 'iris_141': 5.1, 'iris_142': 5.1, 'iris_143': 5.9, 'iris_144': 5.7, 'iris_145': 5.2, 'iris_146': 5.0, 'iris_147': 5.2, 'iris_148': 5.4, 'iris_149': 5.1}, 'petal_width': {'iris_0': 0.2, 'iris_1': 0.2, 'iris_2': 0.2, 'iris_3': 0.2, 'iris_4': 0.2, 'iris_5': 0.4, 'iris_6': 0.3, 'iris_7': 0.2, 'iris_8': 0.2, 'iris_9': 0.1, 'iris_10': 0.2, 'iris_11': 0.2, 'iris_12': 0.1, 'iris_13': 0.1, 'iris_14': 0.2, 'iris_15': 0.4, 'iris_16': 0.4, 'iris_17': 0.3, 'iris_18': 0.3, 'iris_19': 0.3, 'iris_20': 0.2, 'iris_21': 0.4, 'iris_22': 0.2, 'iris_23': 0.5, 'iris_24': 0.2, 'iris_25': 0.2, 'iris_26': 0.4, 'iris_27': 0.2, 'iris_28': 0.2, 'iris_29': 0.2, 'iris_30': 0.2, 'iris_31': 0.4, 'iris_32': 0.1, 'iris_33': 0.2, 'iris_34': 0.2, 'iris_35': 0.2, 'iris_36': 0.2, 'iris_37': 0.1, 'iris_38': 0.2, 'iris_39': 0.2, 'iris_40': 0.3, 'iris_41': 0.3, 'iris_42': 0.2, 'iris_43': 0.6, 'iris_44': 0.4, 'iris_45': 0.3, 'iris_46': 0.2, 'iris_47': 0.2, 'iris_48': 0.2, 'iris_49': 0.2, 'iris_50': 1.4, 'iris_51': 1.5, 'iris_52': 1.5, 'iris_53': 1.3, 'iris_54': 1.5, 'iris_55': 1.3, 'iris_56': 1.6, 'iris_57': 1.0, 'iris_58': 1.3, 'iris_59': 1.4, 'iris_60': 1.0, 'iris_61': 1.5, 'iris_62': 1.0, 'iris_63': 1.4, 'iris_64': 1.3, 'iris_65': 1.4, 'iris_66': 1.5, 'iris_67': 1.0, 'iris_68': 1.5, 'iris_69': 1.1, 'iris_70': 1.8, 'iris_71': 1.3, 'iris_72': 1.5, 'iris_73': 1.2, 'iris_74': 1.3, 'iris_75': 1.4, 'iris_76': 1.4, 'iris_77': 1.7, 'iris_78': 1.5, 'iris_79': 1.0, 'iris_80': 1.1, 'iris_81': 1.0, 'iris_82': 1.2, 'iris_83': 1.6, 'iris_84': 1.5, 'iris_85': 1.6, 'iris_86': 1.5, 'iris_87': 1.3, 'iris_88': 1.3, 'iris_89': 1.3, 'iris_90': 1.2, 'iris_91': 1.4, 'iris_92': 1.2, 'iris_93': 1.0, 'iris_94': 1.3, 'iris_95': 1.2, 'iris_96': 1.3, 'iris_97': 1.3, 'iris_98': 1.1, 'iris_99': 1.3, 'iris_100': 2.5, 'iris_101': 1.9, 'iris_102': 2.1, 'iris_103': 1.8, 'iris_104': 2.2, 'iris_105': 2.1, 'iris_106': 1.7, 'iris_107': 1.8, 'iris_108': 1.8, 'iris_109': 2.5, 'iris_110': 2.0, 'iris_111': 1.9, 'iris_112': 2.1, 'iris_113': 2.0, 'iris_114': 2.4, 'iris_115': 2.3, 'iris_116': 1.8, 'iris_117': 2.2, 'iris_118': 2.3, 'iris_119': 1.5, 'iris_120': 2.3, 'iris_121': 2.0, 'iris_122': 2.0, 'iris_123': 1.8, 'iris_124': 2.1, 'iris_125': 1.8, 'iris_126': 1.8, 'iris_127': 1.8, 'iris_128': 2.1, 'iris_129': 1.6, 'iris_130': 1.9, 'iris_131': 2.0, 'iris_132': 2.2, 'iris_133': 1.5, 'iris_134': 1.4, 'iris_135': 2.3, 'iris_136': 2.4, 'iris_137': 1.8, 'iris_138': 1.8, 'iris_139': 2.1, 'iris_140': 2.4, 
'iris_141': 2.3, 'iris_142': 1.9, 'iris_143': 2.3, 'iris_144': 2.5, 'iris_145': 2.3, 'iris_146': 1.9, 'iris_147': 2.0, 'iris_148': 2.3, 'iris_149': 1.8}}) y_iris = pd.Series({'iris_0': 'setosa', 'iris_1': 'setosa', 'iris_2': 'setosa', 'iris_3': 'setosa', 'iris_4': 'setosa', 'iris_5': 'setosa', 'iris_6': 'setosa', 'iris_7': 'setosa', 'iris_8': 'setosa', 'iris_9': 'setosa', 'iris_10': 'setosa', 'iris_11': 'setosa', 'iris_12': 'setosa', 'iris_13': 'setosa', 'iris_14': 'setosa', 'iris_15': 'setosa', 'iris_16': 'setosa', 'iris_17': 'setosa', 'iris_18': 'setosa', 'iris_19': 'setosa', 'iris_20': 'setosa', 'iris_21': 'setosa', 'iris_22': 'setosa', 'iris_23': 'setosa', 'iris_24': 'setosa', 'iris_25': 'setosa', 'iris_26': 'setosa', 'iris_27': 'setosa', 'iris_28': 'setosa', 'iris_29': 'setosa', 'iris_30': 'setosa', 'iris_31': 'setosa', 'iris_32': 'setosa', 'iris_33': 'setosa', 'iris_34': 'setosa', 'iris_35': 'setosa', 'iris_36': 'setosa', 'iris_37': 'setosa', 'iris_38': 'setosa', 'iris_39': 'setosa', 'iris_40': 'setosa', 'iris_41': 'setosa', 'iris_42': 'setosa', 'iris_43': 'setosa', 'iris_44': 'setosa', 'iris_45': 'setosa', 'iris_46': 'setosa', 'iris_47': 'setosa', 'iris_48': 'setosa', 'iris_49': 'setosa', 'iris_50': 'versicolor', 'iris_51': 'versicolor', 'iris_52': 'versicolor', 'iris_53': 'versicolor', 'iris_54': 'versicolor', 'iris_55': 'versicolor', 'iris_56': 'versicolor', 'iris_57': 'versicolor', 'iris_58': 'versicolor', 'iris_59': 'versicolor', 'iris_60': 'versicolor', 'iris_61': 'versicolor', 'iris_62': 'versicolor', 'iris_63': 'versicolor', 'iris_64': 'versicolor', 'iris_65': 'versicolor', 'iris_66': 'versicolor', 'iris_67': 'versicolor', 'iris_68': 'versicolor', 'iris_69': 'versicolor', 'iris_70': 'versicolor', 'iris_71': 'versicolor', 'iris_72': 'versicolor', 'iris_73': 'versicolor', 'iris_74': 'versicolor', 'iris_75': 'versicolor', 'iris_76': 'versicolor', 'iris_77': 'versicolor', 'iris_78': 'versicolor', 'iris_79': 'versicolor', 'iris_80': 'versicolor', 'iris_81': 'versicolor', 'iris_82': 'versicolor', 'iris_83': 'versicolor', 'iris_84': 'versicolor', 'iris_85': 'versicolor', 'iris_86': 'versicolor', 'iris_87': 'versicolor', 'iris_88': 'versicolor', 'iris_89': 'versicolor', 'iris_90': 'versicolor', 'iris_91': 'versicolor', 'iris_92': 'versicolor', 'iris_93': 'versicolor', 'iris_94': 'versicolor', 'iris_95': 'versicolor', 'iris_96': 'versicolor', 'iris_97': 'versicolor', 'iris_98': 'versicolor', 'iris_99': 'versicolor', 'iris_100': 'virginica', 'iris_101': 'virginica', 'iris_102': 'virginica', 'iris_103': 'virginica', 'iris_104': 'virginica', 'iris_105': 'virginica', 'iris_106': 'virginica', 'iris_107': 'virginica', 'iris_108': 'virginica', 'iris_109': 'virginica', 'iris_110': 'virginica', 'iris_111': 'virginica', 'iris_112': 'virginica', 'iris_113': 'virginica', 'iris_114': 'virginica', 'iris_115': 'virginica', 'iris_116': 'virginica', 'iris_117': 'virginica', 'iris_118': 'virginica', 'iris_119': 'virginica', 'iris_120': 'virginica', 'iris_121': 'virginica', 'iris_122': 'virginica', 'iris_123': 'virginica', 'iris_124': 'virginica', 'iris_125': 'virginica', 'iris_126': 'virginica', 'iris_127': 'virginica', 'iris_128': 'virginica', 'iris_129': 'virginica', 'iris_130': 'virginica', 'iris_131': 'virginica', 'iris_132': 'virginica', 'iris_133': 'virginica', 'iris_134': 'virginica', 'iris_135': 'virginica', 'iris_136': 'virginica', 'iris_137': 'virginica', 'iris_138': 'virginica', 'iris_139': 'virginica', 'iris_140': 'virginica', 'iris_141': 'virginica', 'iris_142': 'virginica', 'iris_143': 
'virginica', 'iris_144': 'virginica', 'iris_145': 'virginica', 'iris_146': 'virginica', 'iris_147': 'virginica', 'iris_148': 'virginica', 'iris_149': 'virginica'}) c_iris = pd.Series({'iris_0': '#db5f57', 'iris_1': '#db5f57', 'iris_2': '#db5f57', 'iris_3': '#db5f57', 'iris_4': '#db5f57', 'iris_5': '#db5f57', 'iris_6': '#db5f57', 'iris_7': '#db5f57', 'iris_8': '#db5f57', 'iris_9': '#db5f57', 'iris_10': '#db5f57', 'iris_11': '#db5f57', 'iris_12': '#db5f57', 'iris_13': '#db5f57', 'iris_14': '#db5f57', 'iris_15': '#db5f57', 'iris_16': '#db5f57', 'iris_17': '#db5f57', 'iris_18': '#db5f57', 'iris_19': '#db5f57', 'iris_20': '#db5f57', 'iris_21': '#db5f57', 'iris_22': '#db5f57', 'iris_23': '#db5f57', 'iris_24': '#db5f57', 'iris_25': '#db5f57', 'iris_26': '#db5f57', 'iris_27': '#db5f57', 'iris_28': '#db5f57', 'iris_29': '#db5f57', 'iris_30': '#db5f57', 'iris_31': '#db5f57', 'iris_32': '#db5f57', 'iris_33': '#db5f57', 'iris_34': '#db5f57', 'iris_35': '#db5f57', 'iris_36': '#db5f57', 'iris_37': '#db5f57', 'iris_38': '#db5f57', 'iris_39': '#db5f57', 'iris_40': '#db5f57', 'iris_41': '#db5f57', 'iris_42': '#db5f57', 'iris_43': '#db5f57', 'iris_44': '#db5f57', 'iris_45': '#db5f57', 'iris_46': '#db5f57', 'iris_47': '#db5f57', 'iris_48': '#db5f57', 'iris_49': '#db5f57', 'iris_50': '#57db5f', 'iris_51': '#57db5f', 'iris_52': '#57db5f', 'iris_53': '#57db5f', 'iris_54': '#57db5f', 'iris_55': '#57db5f', 'iris_56': '#57db5f', 'iris_57': '#57db5f', 'iris_58': '#57db5f', 'iris_59': '#57db5f', 'iris_60': '#57db5f', 'iris_61': '#57db5f', 'iris_62': '#57db5f', 'iris_63': '#57db5f', 'iris_64': '#57db5f', 'iris_65': '#57db5f', 'iris_66': '#57db5f', 'iris_67': '#57db5f', 'iris_68': '#57db5f', 'iris_69': '#57db5f', 'iris_70': '#57db5f', 'iris_71': '#57db5f', 'iris_72': '#57db5f', 'iris_73': '#57db5f', 'iris_74': '#57db5f', 'iris_75': '#57db5f', 'iris_76': '#57db5f', 'iris_77': '#57db5f', 'iris_78': '#57db5f', 'iris_79': '#57db5f', 'iris_80': '#57db5f', 'iris_81': '#57db5f', 'iris_82': '#57db5f', 'iris_83': '#57db5f', 'iris_84': '#57db5f', 'iris_85': '#57db5f', 'iris_86': '#57db5f', 'iris_87': '#57db5f', 'iris_88': '#57db5f', 'iris_89': '#57db5f', 'iris_90': '#57db5f', 'iris_91': '#57db5f', 'iris_92': '#57db5f', 'iris_93': '#57db5f', 'iris_94': '#57db5f', 'iris_95': '#57db5f', 'iris_96': '#57db5f', 'iris_97': '#57db5f', 'iris_98': '#57db5f', 'iris_99': '#57db5f', 'iris_100': '#5f57db', 'iris_101': '#5f57db', 'iris_102': '#5f57db', 'iris_103': '#5f57db', 'iris_104': '#5f57db', 'iris_105': '#5f57db', 'iris_106': '#5f57db', 'iris_107': '#5f57db', 'iris_108': '#5f57db', 'iris_109': '#5f57db', 'iris_110': '#5f57db', 'iris_111': '#5f57db', 'iris_112': '#5f57db', 'iris_113': '#5f57db', 'iris_114': '#5f57db', 'iris_115': '#5f57db', 'iris_116': '#5f57db', 'iris_117': '#5f57db', 'iris_118': '#5f57db', 'iris_119': '#5f57db', 'iris_120': '#5f57db', 'iris_121': '#5f57db', 'iris_122': '#5f57db', 'iris_123': '#5f57db', 'iris_124': '#5f57db', 'iris_125': '#5f57db', 'iris_126': '#5f57db', 'iris_127': '#5f57db', 'iris_128': '#5f57db', 'iris_129': '#5f57db', 'iris_130': '#5f57db', 'iris_131': '#5f57db', 'iris_132': '#5f57db', 'iris_133': '#5f57db', 'iris_134': '#5f57db', 'iris_135': '#5f57db', 'iris_136': '#5f57db', 'iris_137': '#5f57db', 'iris_138': '#5f57db', 'iris_139': '#5f57db', 'iris_140': '#5f57db', 'iris_141': '#5f57db', 'iris_142': '#5f57db', 'iris_143': '#5f57db', 'iris_144': '#5f57db', 'iris_145': '#5f57db', 'iris_146': '#5f57db', 'iris_147': '#5f57db', 'iris_148': '#5f57db', 'iris_149': '#5f57db'}) # Connections df_dense = 
X_iris.T.corr("pearson") y_condensed = defaultdict(dict) for edge, w in df_dense.unstack().items(): y_condensed[frozenset(edge)] = w y_condensed = pd.Series(y_condensed) tol_connection = 0.9 # Graph graph = nx.Graph() for edge, w in y_condensed[lambda x: x > tol_connection].items(): if len(edge) == 1: node = list(edge)[0] edge = (node, node) graph.add_edge(*edge, weight=abs(w)) # Plot network nodes = list(graph.nodes()) weights = np.asarray(list(map(lambda x: x[-1]["weight"], graph.edges(data=True))))**2 pos = dict(zip(nodes, PCA(n_components=2, random_state=0).fit_transform(X_iris.loc[nodes]))) with plt.style.context("seaborn-white"): fig, ax = plt.subplots(figsize=(8,8)) nx.draw_networkx_nodes(graph, pos=pos, node_color=c_iris[nodes], ax=ax, node_size=100, edgecolors="white", linewidths=1) nx.draw_networkx_edges(graph, pos=pos, width=weights, alpha=0.1618, ax=ax) Now convert to a format for Datashader, and run hammer_bundle, and that's about all there is to do. (The first line is just a modification of a line from the OP, and for the edges I used int(n0.split("_")[1] which relies on the fact the number in the iris_# string is the same as the row number, which is a shortcut that doesn't generalize.) nodes_py = [[name, a[0], a[1]] for name, a in zip(nodes, PCA(n_components=2, random_state=0).fit_transform(X_iris.loc[nodes]))] ds_nodes = pd.DataFrame(nodes_py, columns=['name', 'x', 'y']) ds_edges_py = [[int(n0.split("_")[1]), int(n1.split("_")[1])] for (n0, n1) in graph.edges] ds_edges = pd.DataFrame(ds_edges_py, columns=['source', 'target']) hb = hammer_bundle(ds_nodes, ds_edges) The hammer_bundle, hb is now a DataFrame with columns x and y and points that create the shapes of the curves. It is very large (283553 x 2) because it defines the curves by points rather than, say, Bezier curves, but this format is convenient for matplotlib. Different curves are separated by a row of NaNs (which also works well for matplotlib since when directly plotting, the NaNs will create breaks between the curves). To plot the bundling curves directly from the DataFrame: hb.plot(x="x", y="y", figsize=(9,9)) It can easily be combined, with say, the original analysis: with plt.style.context("seaborn-white"): fig, ax = plt.subplots(figsize=(8,8)) ax.plot(hb.x, hb.y, 'y', zorder=1, linewidth=3) nx.draw_networkx_nodes(graph, pos=pos, node_color=c_iris[nodes], ax=ax, node_size=50, edgecolors='white', linewidths=1) nx.draw_networkx_edges(graph, pos=pos, width=weights, alpha=0.1618, ax=ax) Or plot every bundle in a different color (here, every 50th bundle for visibility): hbnp = hb.to_numpy() splits = (np.isnan(hbnp[:,0])).nonzero()[0] start = 0 segments = [] for stop in splits: seg = hbnp[start:stop, :] segments.append(seg) start = stop fig, ax = plt.subplots(figsize=(7,7)) for seg in segments[::50]: ax.plot(seg[:,0], seg[:,1])
15
10
61,331,079
2020-4-20
https://stackoverflow.com/questions/61331079/how-to-configure-celery-worker-and-beat-for-email-reporting-in-apache-superset-r
I am running Superset via Docker. I enabled the Email Report feature and tried it: However, I only receive the test email report. I don't receive any emails after. This is my CeleryConfig in superset_config.py: class CeleryConfig(object): BROKER_URL = 'sqla+postgresql://superset:superset@db:5432/superset' CELERY_IMPORTS = ( 'superset.sql_lab', 'superset.tasks', ) CELERY_RESULT_BACKEND = 'db+postgresql://superset:superset@db:5432/superset' CELERYD_LOG_LEVEL = 'DEBUG' CELERYD_PREFETCH_MULTIPLIER = 10 CELERY_ACKS_LATE = True CELERY_ANNOTATIONS = { 'sql_lab.get_sql_results': { 'rate_limit': '100/s', }, 'email_reports.send': { 'rate_limit': '1/s', 'time_limit': 120, 'soft_time_limit': 150, 'ignore_result': True, }, } CELERYBEAT_SCHEDULE = { 'email_reports.schedule_hourly': { 'task': 'email_reports.schedule_hourly', 'schedule': crontab(minute=1, hour='*'), }, } The documentation says I need to run the celery worker and beat. celery worker --app=superset.tasks.celery_app:app --pool=prefork -O fair -c 4 celery beat --app=superset.tasks.celery_app:app I added them to the 'docker-compose.yml': superset-worker: build: *superset-build command: > sh -c "celery worker --app=superset.tasks.celery_app:app -Ofair -f /app/celery_worker.log && celery beat --app=superset.tasks.celery_app:app -f /app/celery_beat.log" env_file: docker/.env restart: unless-stopped depends_on: *superset-depends-on volumes: *superset-volumes Celery Worker is indeed working when sending the first email. The log file is also visible. However, the celery beat seems to not be functioning. There is also no 'celery_beat.log' created. If you'd like a deeper insight, here's the commit with the full implementation of the functionality. How do I correctly configure celery beat? How can I debug this?
I managed to solve it by altering the CeleryConfig implementation, and adding a beat service to 'docker-compose.yml' New CeleryConfig class in 'superset_config.py': REDIS_HOST = get_env_variable("REDIS_HOST") REDIS_PORT = get_env_variable("REDIS_PORT") class CeleryConfig(object): BROKER_URL = "redis://%s:%s/0" % (REDIS_HOST, REDIS_PORT) CELERY_IMPORTS = ( 'superset.sql_lab', 'superset.tasks', ) CELERY_RESULT_BACKEND = "redis://%s:%s/1" % (REDIS_HOST, REDIS_PORT) CELERY_ANNOTATIONS = { 'sql_lab.get_sql_results': { 'rate_limit': '100/s', }, 'email_reports.send': { 'rate_limit': '1/s', 'time_limit': 120, 'soft_time_limit': 150, 'ignore_result': True, }, } CELERY_TASK_PROTOCOL = 1 CELERYBEAT_SCHEDULE = { 'email_reports.schedule_hourly': { 'task': 'email_reports.schedule_hourly', 'schedule': crontab(minute='1', hour='*'), }, } Changes in 'docker-compose.yml': superset-worker: build: *superset-build command: ["celery", "worker", "--app=superset.tasks.celery_app:app", "-Ofair"] env_file: docker/.env restart: unless-stopped depends_on: *superset-depends-on volumes: *superset-volumes superset-beat: build: *superset-build command: ["celery", "beat", "--app=superset.tasks.celery_app:app", "--pidfile=", "-f", "/app/celery_beat.log"] env_file: docker/.env restart: unless-stopped depends_on: *superset-depends-on volumes: *superset-volumes
8
3
61,299,553
2020-4-19
https://stackoverflow.com/questions/61299553/subclassing-is-it-possible-to-override-a-property-with-a-conventional-attribute
Let's assume we want to create a family of classes which are different implementations or specializations of an overarching concept. Let's assume there is a plausible default implementation for some derived properties. We'd want to put this into a base class class Math_Set_Base: @property def size(self): return len(self.elements) So a subclass will automatically be able to count its elements in this rather silly example class Concrete_Math_Set(Math_Set_Base): def __init__(self,*elements): self.elements = elements Concrete_Math_Set(1,2,3).size # 3 But what if a subclass doesn't want to use this default? This does not work: import math class Square_Integers_Below(Math_Set_Base): def __init__(self,cap): self.size = int(math.sqrt(cap)) Square_Integers_Below(7) # Traceback (most recent call last): # File "<stdin>", line 1, in <module> # File "<stdin>", line 3, in __init__ # AttributeError: can't set attribute I realize there are ways to override a property with a property, but I'd like to avoid that. Because the purpose of the base class is to make life as easy as possible for its user, not to add bloat by imposing a (from the subclass's narrow point of view) convoluted and superfluous access method. Can it be done? If not what's the next best solution?
A property is a data descriptor which takes precedence over an instance attribute with the same name. You could define a non-data descriptor with a unique __get__() method: an instance attribute takes precedence over the non-data descriptor with the same name, see the docs. The problem here is that the non_data_property defined below is for computation purpose only (you can't define a setter or a deleter) but it seems to be the case in your example. import math class non_data_property: def __init__(self, fget): self.__doc__ = fget.__doc__ self.fget = fget def __get__(self, obj, cls): if obj is None: return self return self.fget(obj) class Math_Set_Base: @non_data_property def size(self, *elements): return len(self.elements) class Concrete_Math_Set(Math_Set_Base): def __init__(self, *elements): self.elements = elements class Square_Integers_Below(Math_Set_Base): def __init__(self, cap): self.size = int(math.sqrt(cap)) print(Concrete_Math_Set(1, 2, 3).size) # 3 print(Square_Integers_Below(1).size) # 1 print(Square_Integers_Below(4).size) # 2 print(Square_Integers_Below(9).size) # 3 However this assumes that you have access to the base class in order to make this changes.
24
13
61,238,840
2020-4-15
https://stackoverflow.com/questions/61238840/xarray-reverse-interpolation-on-coordinate-not-on-data
I have a the following DataArray arr = xr.DataArray([[0.33, 0.25],[0.55, 0.60],[0.85, 0.71],[0.92,0.85],[1.50,0.96],[2.5,1.1]],[('x',[0.25,0.5,0.75,1.0,1.25,1.5]),('y',[1,2])]) This gives the following output <xarray.DataArray (x: 6, y: 2)> array([[0.33, 0.25], [0.55, 0.6 ], [0.85, 0.71], [0.92, 0.85], [1.5 , 0.96], [2.5 , 1.1 ]]) Coordinates: * x (x) float64 0.25 0.5 0.75 1.0 1.25 1.5 * y (y) int32 1 2 or sorted below with x and output (z) next to each other for convenience. x z (y=1) z(y=2) 0.25 0.33 0.25 0.50 0.55 0.60 0.75 0.85 0.71 1.00 0.92 0.85 1.25 1.50 0.96 1.50 2.50 1.10 The data I have is the result of several input values. One of them is the x value. There are several other dimensions (such as y) for other input values. I want to know when my output value (z) is growing larger than 1.00, keeping the other dimensions fixed and vary the x-value. In the above 2-dimensional example, I would like to get the answer [1.03 1.32]. Because a value of 1.03 for x will give me 1.00 for z when y=1 and a value of 1.32 for x will give me 1.00 for z when y=2. edit: Since the output z will grow with increasing x, there is only one point where z will have 1.0 as an output. Is there any efficient way to achieve this with xarray? My actual table is much larger and has 4 inputs (dimensions). Thank you for any help!
The problem I had with jojo's answer is that it is difficult to expand it in many dimensions and to keep the xarray structure. Hence, I decided to look further into this. I used some ideas from jojo's code to make below answer. I make two arrays, one with the condition that the values are smaller than what I look for, and one with the condition they need to be larger. I shift the second one in the x-direction by minus 1. Now I combine them in a normal linear interpolation formula. The two arrays only have values overlapping at the 'edge' of the condition. If not shifted by -1, no values would overlap. In the final line I sum over the x-direction and since al other values are NaN, I extract the correct value and remove the x-direction from the DataArray in the process. def interpolate_dimension_x(arr, target_value, step): M0 = arr.where(arr - target_value <= 0) M1 = arr.where(arr - target_value > 0).shift(x=-1) work_mat = M0.x + step * (target_value - M0) / (M1 - M0) return work_mat.sum(dim='x') interpolate_dimension_x(arr, 1, 0.25) >>> <xarray.DataArray (y: 2)> array([1.034483, 1.321429]) Coordinates: * y (y) int32 1 2 I have some drawbacks with my code. The code only works if M0 and M1 find a value that fulfills the condition. Otherwise all values in that row will be set to NaN. To prevent issues with M0, I decided to just have the x-values start at 0 since my target value is always larger than 0. To prevent issues with M1, I choose my values of x large enough so I know my values are in there. Naturally, these are not ideal solutions and may break the code. If I get a bit more experience with xarray and python I might rewrite. In summary I have the following items I would like to solve: How to extrapolate values outside the x-range? I'm currently just ensuring my x-range is large enough that the answers falls within it. How to make the code robust for a variable stepsize? How to make the code so that my dimension can be chosen dynamically (now it only works for 'x') Any optimizations are appreciated.
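As a partial answer to the second and third open items above, the dimension name and the step can be derived from the DataArray itself. This is only a sketch, not a tested general solution: it assumes a uniform, increasing coordinate, keeps the caveat that the target value must be bracketed by the data, relies on arr from the question, and interpolate_dimension is just a made-up name.

def interpolate_dimension(arr, target_value, dim="x"):
    # values at or below the target, and the next value above it shifted back by one
    m0 = arr.where(arr - target_value <= 0)
    m1 = arr.where(arr - target_value > 0).shift({dim: -1})
    coord = arr[dim]
    step = float(coord[1] - coord[0])  # assumes a uniform grid along `dim`
    # linear interpolation between the bracketing points; every other position is NaN
    crossing = coord + step * (target_value - m0) / (m1 - m0)
    return crossing.sum(dim=dim)

interpolate_dimension(arr, 1, dim="x")  # same result as interpolate_dimension_x(arr, 1, 0.25)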
9
1
61,348,795
2020-4-21
https://stackoverflow.com/questions/61348795/generate-list-of-numbers-and-their-negative-counterparts-in-python
Is there a convenient one-liner to generate a list of numbers and their negative counterparts in Python? For example, say I want to generate a list with the numbers 6 to 9 and -6 to -9. My current approach is: l = [x for x in range(6,10)] l += [-x for x in l] A simple "one-liner" would be: l = [x for x in range(6,10)] + [y for y in range(-9, -5)] However, generating two lists and then joining them together seems inconvenient.
I am unsure if order matters, but you could create a tuple and unpack it in a list comprehension. nums = [y for x in range(6,10) for y in (x,-x)] print(nums) [6, -6, 7, -7, 8, -8, 9, -9]
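An equivalent itertools spelling, in case you prefer that over a nested comprehension (same interleaved order):

from itertools import chain

nums = list(chain.from_iterable((x, -x) for x in range(6, 10)))
print(nums)  # [6, -6, 7, -7, 8, -8, 9, -9]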
63
71
61,346,009
2020-4-21
https://stackoverflow.com/questions/61346009/why-is-pil-used-so-often-with-pytorch
I noticed that a lot of dataloaders use PIL to load and transform images, e.g. the dataset builders in torchvision.datasets.folder. My question is: why use PIL? You would need to do an np.asarray operation before turning it into a tensor. OpenCV seems to load it directly as a numpy array, and is faster too. One reason I can think of is because PIL has a rich transforms library, but I feel like several of those transforms can be quickly implemented.
There is a discussion about adding OpenCV as one of possible backends in torchvision PR. In summary, some reasons provided: OpenCV2 loads images in BGR format which would require wrapper class to handle changing to RGB internally or format of loaded images backend dependent This in turn would lead to code duplication in functional transforms in torchvision many of which use PIL operations (as transformations to support multiple backends would be pretty convoluted) OpenCV loads images as np.array, it's not really easier to do transformations on arrays Different representation might lead to hard to catch by users bugs PyTorch's modelzoo is dependent on RGB format as well and they would like to have it easily supported Doesn't play well with Python's multiprocessing (but it's no-issue as it was an issue for Python 2) To be honest I don't see much movement towards this idea as there exists albumentations which uses OpenCV and can be integrated with PyTorch rather smoothly. A little off-topic, but one can choose faster backend via torchvision.set_image_backend to Intel's accimage. Also Pillow-SIMD can be used as a drop-in replacement for PIL (it is supposedly faster and recommended by fastai project). When it comes to performance benchmarks they do not seem too reliable and it's not that easy to tell AFAIK.
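For instance, switching the backend mentioned above is a one-liner; accimage has to be installed separately and, as far as I know, it supports fewer operations than PIL:

import torchvision

torchvision.set_image_backend("accimage")  # default is "PIL"
print(torchvision.get_image_backend())     # "accimage"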
9
15
61,249,708
2020-4-16
https://stackoverflow.com/questions/61249708/valueerror-no-gradients-provided-for-any-variable-tensorflow-2-0-keras
I am trying to implement a simple sequence-to-sequence model using Keras. However, I keep seeing the following ValueError: ValueError: No gradients provided for any variable: ['simple_model/time_distributed/kernel:0', 'simple_model/time_distributed/bias:0', 'simple_model/embedding/embeddings:0', 'simple_model/conv2d/kernel:0', 'simple_model/conv2d/bias:0', 'simple_model/dense_1/kernel:0', 'simple_model/dense_1/bias:0']. Other questions like this or looking at this issue on Github suggests that this might have something to do with the cross-entropy loss function; but I fail to see what I am doing wrong here. I do not think that this is the problem, but I want to mention that I am on a nightly build of TensorFlow, tf-nightly==2.2.0.dev20200410 to be precise. This following code is a standalone example and should reproduce the exception from above: import random from functools import partial import tensorflow as tf from tensorflow import keras from tensorflow_datasets.core.features.text import SubwordTextEncoder EOS = '<eos>' PAD = '<pad>' RESERVED_TOKENS = [EOS, PAD] EOS_ID = RESERVED_TOKENS.index(EOS) PAD_ID = RESERVED_TOKENS.index(PAD) dictionary = [ 'verstehen', 'verstanden', 'vergessen', 'verlegen', 'verlernen', 'vertun', 'vertan', 'verloren', 'verlieren', 'verlassen', 'verhandeln', ] dictionary = [word.lower() for word in dictionary] class SimpleModel(keras.models.Model): def __init__(self, params, *args, **kwargs): super().__init__(*args, **kwargs) self.params = params self.out_layer = keras.layers.Dense(1, activation='softmax') self.model_layers = [ keras.layers.Embedding(params['vocab_size'], params['vocab_size']), keras.layers.Lambda(lambda l: tf.expand_dims(l, -1)), keras.layers.Conv2D(1, 4), keras.layers.MaxPooling2D(1), keras.layers.Dense(1, activation='relu'), keras.layers.TimeDistributed(self.out_layer) ] def call(self, example, training=None, mask=None): x = example['inputs'] for layer in self.model_layers: x = layer(x) return x def sample_generator(text_encoder: SubwordTextEncoder, max_sample: int = None): count = 0 while True: random.shuffle(dictionary) for word in dictionary: for i in range(1, len(word)): inputs = word[:i] targets = word example = dict( inputs=text_encoder.encode(inputs) + [EOS_ID], targets=text_encoder.encode(targets) + [EOS_ID], ) count += 1 yield example if max_sample is not None and count >= max_sample: print('Reached max_samples (%d)' % max_sample) return def make_dataset(generator_fn, params, training): dataset = tf.data.Dataset.from_generator( generator_fn, output_types={ 'inputs': tf.int64, 'targets': tf.int64, } ).padded_batch( params['batch_size'], padded_shapes={ 'inputs': (None,), 'targets': (None,) }, ) if training: dataset = dataset.map(partial(prepare_example, params=params)).repeat() return dataset def prepare_example(example: dict, params: dict): # Make sure targets are one-hot encoded example['targets'] = tf.one_hot(example['targets'], depth=params['vocab_size']) return example def main(): text_encoder = SubwordTextEncoder.build_from_corpus( iter(dictionary), target_vocab_size=1000, max_subword_length=6, reserved_tokens=RESERVED_TOKENS ) generator_fn = partial(sample_generator, text_encoder=text_encoder, max_sample=10) params = dict( batch_size=20, vocab_size=text_encoder.vocab_size, hidden_size=32, max_input_length=30, max_target_length=30 ) model = SimpleModel(params) model.compile( optimizer='adam', loss='categorical_crossentropy', ) train_dataset = make_dataset(generator_fn, params, training=True) dev_dataset = 
make_dataset(generator_fn, params, training=False) # Peek data for train_batch, dev_batch in zip(train_dataset, dev_dataset): print(train_batch) print(dev_batch) break model.fit( train_dataset, epochs=1000, steps_per_epoch=100, validation_data=dev_dataset, validation_steps=100, ) if __name__ == '__main__': main() Update Gist link Github issue link
There are two different sets of problems in your code, which could be categorized as syntactical and architectural problems. The error raised (i.e. No gradients provided for any variable) is related to the syntactical problems which I would mostly address below, but I would try to give you some pointers about the architectural problems after that as well. The main cause of syntactical problems is about using named inputs and outputs for the model. Named inputs and outputs in Keras is mostly useful when the model has multiple input and/or output layers. However, your model has only one input and one output layer. Therefore, it may not be very useful to use named inputs and outputs here, but if that's your decision I would explain how it could be done properly. First of all, you should keep in mind that when using Keras models, the data generated from any input pipeline (whether it's a Python generator or tf.data.Dataset) should be provided as a tuple i.e. (input_batch, output_batch) or (input_batch, output_batch, sample_weights). And, as I said, this is the expected format everywhere in Keras when dealing with input pipelines, even when we are using named inputs and outputs as dictionaries. For example, if I want to use inputs/outputs naming and my model has two input layers named as "words" and "importance", and also two output layers named as "output1" and "output2", they should be formatted like this: ({'words': words_data, 'importance': importance_data}, {'output1': output1_data, 'output2': output2_data}) So as you can see above, it's a tuple where each element of the tuple is a dictionary; the first element corresponds to inputs of the model and the second element corresponds to outputs of the model. Now, according to this point, let's see what modifications should be done to your code: In sample_generator we should return a tuple of dicts, not a dict. So: example = tuple([ {'inputs': text_encoder.encode(inputs) + [EOS_ID]}, {'targets': text_encoder.encode(targets) + [EOS_ID]}, ]) In make_dataset function, the input arguments of tf.data.Dataset should respect this: output_types=( {'inputs': tf.int64}, {'targets': tf.int64} ) padded_shapes=( {'inputs': (None,)}, {'targets': (None,)} ) The signature of prepare_example and its body should be modified as well: def prepare_example(ex_inputs: dict, ex_outputs: dict, params: dict): # Make sure targets are one-hot encoded ex_outputs['targets'] = tf.one_hot(ex_outputs['targets'], depth=params['vocab_size']) return ex_inputs, ex_outputs And finally, the call method of subclassed model: return {'targets': x} And one more thing: we should also put these names on the corresponding input and output layers using the name argument when constructing the layers (like Dense(..., name='output'); however, since we are using the Model sub-classing here to define our model, that's not necessary to do. All right, these would resolve the input/output problems and the error related to gradients would be gone; however, if you run the code after applying the above modifications, you would still get an error regarding incompatible shapes. As I said earlier, there are architectural issues in your model which I would briefly address below. As you mentioned, this is supposed to be a seq-to-seq model. Therefore, the output is a sequence of one-hot encoded vectors, where the length of each vector is equal to (target sequences) vocabulary size. 
As a result, the softmax classifier should have as much units as vocabulary size, like this (Note: never in any model or problem use a softmax layer with only one unit; that's all wrong! Think about why it's wrong!): self.out_layer = keras.layers.Dense(params['vocab_size'], activation='softmax') The next thing to consider is the fact that we are dealing with 1D sequences (i.e. a sequence of tokens/words). Therefore using 2D-convolution and 2D-pooling layers does not make sense here. You can either use their 1D counterparts or replace them with something else like RNN layers. As a result of this, the Lambda layer should be removed as well. Also, if you want to use convolution and pooling, you should adjust the number of filters in each layer as well as the pool size properly (i.e. one conv filter, Conv1D(1,...) is not probably optimal, and pool size of 1 does not make sense). Further, that Dense layer before the last layer which has only one unit could severely limit the representational capacity of the model (i.e. it is essentially the bottleneck of your model). Either increase its number of units, or remove it. The other thing is that there is no reason for not one-hot encoding the labels of dev set. Rather, they should be one-hot encoded like the labels of training set. Therefore, either the training argument of make_generator should be removed entirely or, if you have some other use case for it, the dev dataset should be created with training=True argument passed to make_dataset function. Finally, after all these changes your model might work and start fitting on data; but after a few batches passed, you might get incompatible shapes error again. That's because you are generating input data with unknown dimension and also use a relaxed padding approach to pad each batch as much as needed (i.e. by using (None,) for padded_shapes). To resolve this you should decide on a fixed input/output dimension (e.g. by considering a fixed length for input/output sequences), and then adjust the architecture or hyper-parameters of the model (e.g. conv kernel size, conv padding, pooling size, adding more layers, etc.) as well as the padded_shapes argument accordingly. Even if you would like your model to support input/output sequences of variable length instead, then you should consider it in model's architecture and hyper-parameters and also the padded_shapes argument. Since this the solution depends on the task and desired design in your mind and there is no one-fits-all solutions, I would not comment further on that and leave it to you to figure it out. But here is a working solution (which may not be, and probably isn't, optimal at all) just to give you an idea: self.out_layer = keras.layers.Dense(params['vocab_size'], activation='softmax') self.model_layers = [ keras.layers.Embedding(params['vocab_size'], params['vocab_size']), keras.layers.Conv1D(32, 4, padding='same'), keras.layers.TimeDistributed(self.out_layer) ] # ... padded_shapes=( {'inputs': (10,)}, {'targets': (10,)} )
8
24
61,336,238
2020-4-21
https://stackoverflow.com/questions/61336238/getting-attributeerror-module-pandas-has-no-attribute-json-normalize-while
I am exploring the Jupiter notebook for Python. While calling this method "Access OutbreakLocation data" I get this exception in Python 3.6: getting AttributeError: module 'pandas' has no attribute 'json_normalize' Any ideas how can we fix this issue?
Make sure to update to pandas 1.0.3 or newer. Versions of pandas before 1.0 don't expose json_normalize at the top level of the package (it only existed as pandas.io.json.json_normalize), which is why pd.json_normalize raises this AttributeError.
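A minimal example of both spellings (the records are made up):

import pandas as pd

records = [{"id": 1, "info": {"name": "a"}}, {"id": 2, "info": {"name": "b"}}]

# pandas >= 1.0
print(pd.json_normalize(records))   # columns: id, info.name

# pandas < 1.0 (older location, deprecated since 1.0)
# from pandas.io.json import json_normalize
# print(json_normalize(records))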
18
16
61,253,928
2020-4-16
https://stackoverflow.com/questions/61253928/writing-pandas-dataframe-to-s3-bucket-aws
I have an AWS Lambda function which queries API and creates a dataframe, I want to write this file to an S3 bucket, I am using: import pandas as pd import s3fs df.to_csv('s3.console.aws.amazon.com/s3/buckets/info/test.csv', index=False) I am getting an error: No such file or directory: 's3.console.aws.amazon.com/s3/buckets/info/test.csv' But that directory exists, because I am reading files from there. What is the problem here? I've read the previous files like this: s3_client = boto3.client('s3') s3_client.download_file('info', 'secrets.json', '/tmp/secrets.json') How can I upload the whole dataframe to an S3 bucket?
You can use boto3 package also for storing data to S3: from io import StringIO # python3 (or BytesIO for python2) import boto3 bucket = 'info' # already created on S3 csv_buffer = StringIO() df.to_csv(csv_buffer) s3_resource = boto3.resource('s3') s3_resource.Object(bucket, 'df.csv').put(Body=csv_buffer.getvalue())
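Alternatively, since the question already imports s3fs: with s3fs installed, pandas can write straight to an s3:// URI. The bucket name below is the one from the question, the dataframe is a stand-in, and credentials are assumed to come from the Lambda execution role:

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})  # stand-in for the real dataframe
df.to_csv("s3://info/test.csv", index=False)   # requires the s3fs package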
14
35
61,341,712
2020-4-21
https://stackoverflow.com/questions/61341712/calculate-projected-point-location-x-y-on-given-line-startx-y-endx-y
If I have three points P1, P2, P3 with their coordinates (x, y): P1 and P2 are the coordinates of the line (start, end) and P3 is the point that needs to be projected. How can I find the coordinates of the point r(x, y) which is the projection of P3 onto the line through P1 and P2?
This solution extends to points with any geometric dimensions (2D, 3D, 4D, ...). It assumes all points are one dimensional numpy arrays (or two dimensional with one dimension shape 1). I am not sure if you require the projection to fall onto line segment or the extension of segment so I include both. You can pick whichever fits your question the best: #distance between p1 and p2 l2 = np.sum((p1-p2)**2) if l2 == 0: print('p1 and p2 are the same points') #The line extending the segment is parameterized as p1 + t (p2 - p1). #The projection falls where t = [(p3-p1) . (p2-p1)] / |p2-p1|^2 #if you need the point to project on line extention connecting p1 and p2 t = np.sum((p3 - p1) * (p2 - p1)) / l2 #if you need to ignore if p3 does not project onto line segment if t > 1 or t < 0: print('p3 does not project onto p1-p2 line segment') #if you need the point to project on line segment between p1 and p2 or closest point of the line segment t = max(0, min(1, np.sum((p3 - p1) * (p2 - p1)) / l2)) projection = p1 + t * (p2 - p1)
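A quick numeric sanity check with made-up points: projecting p3 = (1, 2) onto the segment from (0, 0) to (4, 0) should land at (1, 0).

import numpy as np

p1 = np.array([0.0, 0.0])
p2 = np.array([4.0, 0.0])
p3 = np.array([1.0, 2.0])

l2 = np.sum((p2 - p1) ** 2)
t = np.sum((p3 - p1) * (p2 - p1)) / l2
print(p1 + t * (p2 - p1))  # [1. 0.]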
10
13
61,338,539
2020-4-21
https://stackoverflow.com/questions/61338539/how-to-use-enum-value-in-asdict-function-from-dataclasses-module
I have a dataclass with a field template of type Enum. When using the asdict function it converts my dataclass to a dictionary. Is it possible to use the value attribute of FoobarEnum to return the string value instead of the Enum object? My initial idea was to use the dict_factory=dict parameter of the asdict function and provide my own factory but I couldn't figure out how to do this. from dataclasses import dataclass, asdict from enum import Enum @dataclass class Foobar: name: str template: "FoobarEnum" class FoobarEnum(Enum): FIRST = "foobar" SECOND = "baz" foobar = Foobar(name="John", template=FoobarEnum.FIRST) print(asdict(foobar)) Current output: {'name': 'John', 'template': <FoobarEnum.FIRST: 'foobar'>} Goal: {'name': 'John', 'template': 'foobar'}
This can't be done with standard library except maybe by some metaclass enum hack I'm not aware of. Enum.name and Enum.value are builtin and not supposed to be changed. The approach of using the dataclass default_factory isn't going to work either. Because default_factory is called to produce default values for the dataclass members, not to customize access to members. You can either have the Enum member or the Enum.value as a dataclass member, and that's what asdict() will return. If you want to keep an Enum member -not just the Enum.value- as a dataclass member, and have a function converting it to dictionary that returns the Enum.value instead of the Enum member, the correct way to do it is implementing your own method to return the dataclass as a dictionary. from dataclasses import dataclass from enum import Enum class FoobarEnum(Enum): FIRST = "foobar" SECOND = "baz" @dataclass class Foobar: name: str template: FoobarEnum def as_dict(self): return { 'name': self.name, 'template': self.template.value } # Testing. print(Foobar(name="John", template=FoobarEnum.FIRST).as_dict()) # {'name': 'John', 'template': 'foobar'}
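As an aside, the dict_factory idea mentioned in the question can be used for the conversion itself; it doesn't change the Enum, only how asdict builds the dictionary. A sketch, where enum_to_value is just a made-up helper name:

from dataclasses import dataclass, asdict
from enum import Enum

class FoobarEnum(Enum):
    FIRST = "foobar"
    SECOND = "baz"

@dataclass
class Foobar:
    name: str
    template: FoobarEnum

def enum_to_value(pairs):
    # dict_factory receives a list of (field_name, value) tuples
    return {k: (v.value if isinstance(v, Enum) else v) for k, v in pairs}

foobar = Foobar(name="John", template=FoobarEnum.FIRST)
print(asdict(foobar, dict_factory=enum_to_value))
# {'name': 'John', 'template': 'foobar'}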
38
7
61,339,594
2020-4-21
https://stackoverflow.com/questions/61339594/how-to-convert-a-dictionary-to-dataframe-in-pyspark
I am trying to convert a dictionary: data_dict = {'t1': '1', 't2': '2', 't3': '3'} into a dataframe: key | value| ---------------- t1 1 t2 2 t3 3 To do that, I tried: schema = StructType([StructField("key", StringType(), True), StructField("value", StringType(), True)]) ddf = spark.createDataFrame(data_dict, schema) But I got the below error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/session.py", line 748, in createDataFrame rdd, schema = self._createFromLocal(map(prepare, data), schema) File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/session.py", line 413, in _createFromLocal data = list(data) File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/session.py", line 730, in prepare verify_func(obj) File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/types.py", line 1389, in verify verify_value(obj) File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/pyspark/sql/types.py", line 1377, in verify_struct % (obj, type(obj)))) TypeError: StructType can not accept object 't1' in type <class 'str'> So I tried this without specifying any schema but just the column datatypes: ddf = spark.createDataFrame(data_dict, StringType() & ddf = spark.createDataFrame(data_dict, StringType(), StringType()) But both result in a dataframe with one column which is key of the dictionary as below: +-----+ |value| +-----+ |t1 | |t2 | |t3 | +-----+ Could anyone let me know how to convert a dictionary into a spark dataframe in PySpark ?
You can use data_dict.items() to list key/value pairs: spark.createDataFrame(data_dict.items()).show() Which prints +---+---+ | _1| _2| +---+---+ | t1| 1| | t2| 2| | t3| 3| +---+---+ Of course, you can specify your schema: spark.createDataFrame(data_dict.items(), schema=StructType(fields=[ StructField("key", StringType()), StructField("value", StringType())])).show() Resulting in +---+-----+ |key|value| +---+-----+ | t1| 1| | t2| 2| | t3| 3| +---+-----+
9
11
61,234,609
2020-4-15
https://stackoverflow.com/questions/61234609/how-to-import-python-package-from-another-directory
I have a project that is structured as follows: project ├── api │ ├── __init__.py │ └── api.py ├── instance │ ├── __init__.py │ └── config.py ├── package │ ├── __init__.py │ └── app.py ├── requirements.txt └── tests └── __init__.py I am trying to call the config.py file from the package/app.py as shown below: # package/app.py from instance import config # I've also tried import instance.config import ..instance.config from ..instance import config But I always get the following error: Traceback (most recent call last): File "/home/csymvoul/projects/project/package/app.py", line 1, in <module> from instance import config ModuleNotFoundError: No module named 'instance' Modifying the sys.path is not something I want to do. I know that this question is very much answered but the answers that were given, did not work for me. EDIT: When moving the app.py to the root folder it works just fine. But I need to have it under the package folder.
You can add the parent directory to PYTHONPATH, in order to achieve that, you can use OS depending path in the "module search path" which is listed in sys.path. So you can easily add the parent directory like following: import sys sys.path.insert(0, '..') from instance import config Note that the previous code uses a relative path, so you must launch the file inside the same location or it will likely not work. To launch from anywhere, you can use the pathlib module. from pathlib import Path import sys path = str(Path(Path(__file__).parent.absolute()).parent.absolute()) sys.path.insert(0, path) from instance import config However, the previous approach is more a hack than anything, in order to do things right, you'll first need to reshape your project structure according to this very detailed blog post python packaging, going for the recommended way with a src folder. Your directory layout must look like this: project ├── CHANGELOG.rst ├── README.rst ├── requirements.txt ├── setup.py ├── src │ ├── api │ │ ├── api.py │ │ └── __init__.py │ ├── instance │ │ ├── config.py │ │ └── __init__.py │ └── package │ ├── app.py │ └── __init__.py └── tests └── __init__.py Note that you don't really need the requirements.txt because you can declare the dependencies inside your setup.py. A sample setup.py (adapted from here): #!/usr/bin/env python # -*- encoding: utf-8 -*- from __future__ import absolute_import from __future__ import print_function import io import re from glob import glob from os.path import basename from os.path import dirname from os.path import join from os.path import splitext from setuptools import find_packages from setuptools import setup def read(*names, **kwargs): with io.open( join(dirname(__file__), *names), encoding=kwargs.get('encoding', 'utf8') ) as fh: return fh.read() setup( name='nameless', version='1.644.11', license='BSD-2-Clause', description='An example package. 
Generated with cookiecutter-pylibrary.', author='mpr', author_email='[email protected]', packages=find_packages('src'), package_dir={'': 'src'}, include_package_data=True, zip_safe=False, classifiers=[ # complete classifier list: http://pypi.python.org/pypi?%3Aaction=list_classifiers 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Operating System :: Unix', 'Operating System :: POSIX', 'Operating System :: Microsoft :: Windows', 'Programming Language :: Python', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy', # uncomment if you test on these interpreters: # 'Programming Language :: Python :: Implementation :: IronPython', # 'Programming Language :: Python :: Implementation :: Jython', # 'Programming Language :: Python :: Implementation :: Stackless', 'Topic :: Utilities', ], keywords=[ # eg: 'keyword1', 'keyword2', 'keyword3', ], python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*', install_requires=[ # eg: 'aspectlib==1.1.1', 'six>=1.7', ], extras_require={ # eg: # 'rst': ['docutils>=0.11'], # ':python_version=="2.6"': ['argparse'], }, setup_requires=[ # 'pytest-runner', ], entry_points={ 'console_scripts': [ 'api = api.api:main', ] }, ) The content of my api.py: from instance import config def main(): print("imported") config.config() The content of my config.py: def config(): print("config imported successfully") You can find all the previous here Optional but recommended: create a virtual environment, I use venv (Python 3.3 <=) for that, inside the root of the project: python -m venv . And to activate: source bin/activate Now I can install the package: Using pip install -e . (with the dot) command inside the root of the project Your import from instance import config works now, to confirm you can run api.py with: python src/api/api.py
33
29
61,337,373
2020-4-21
https://stackoverflow.com/questions/61337373/split-on-train-and-test-separating-by-group
I have a sample data as follows: import pandas as pd df = pd.DataFrame({"x": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120], "id": [1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5], "label": ["a", "a", "a", "b", "a", "b", "b", "b", "a", "b", "a", "b"]}) So my data look like this x id label 10 1 a 20 1 a 30 1 a 40 1 b 50 2 a 60 2 b 70 3 a 80 3 a 90 4 b 100 4 a 110 5 b 120 5 a I would like to split this data into two groups (train, test) based on label distribution given the number of test samples (e.g. 6 samples). My settings prefers to define size of test set as integer representing the number of test samples rather than percentage. However, with my specific domain, any id MUST be allocated in ONLY one group. For example, if id 1 was assigned to the training set, other samples with id 1 cannot be assigned to the test set. So the expected output are 2 dataframes as follows: Training set x id label 10 1 a 20 1 a 30 1 a 40 1 b 50 2 a 60 2 b Test set x id label 70 3 a 80 3 a 90 4 b 100 4 a 110 5 b 120 5 a Both training set and test set have the same class distribution (a:b is 4:2) and id 1, 2 were assigned to only the training set while id 3, 4, 5 were assigned to only the test set. I used to do with sklearn train_test_split but I could not figure out how to apply it with such a condition. May I have your suggestions how to handle such conditions?
sklearn.model_selection has several other options other than train_test_split. One of them, aims at solving what you're after. In this case you could use GroupShuffleSplit, which as mentioned inthe docs it provides randomized train/test indices to split data according to a third-party provided group. You also have GroupKFold for these cases which is very useful. from sklearn.model_selection import GroupShuffleSplit X = df.drop('label',1) y=df.label You can now instantiate GroupShuffleSplit, and do as you would with train_test_split, with the only difference of specifying a group column, which will be used to split X and y so the groups are split according the the groups values: gs = GroupShuffleSplit(n_splits=2, test_size=.6, random_state=0) train_ix, test_ix = next(gs.split(X, y, groups=X.id)) Now you can index the dataframe to create the train and test sets: X_train = X.loc[train_ix] y_train = y.loc[train_ix] X_test = X.loc[test_ix] y_test = y.loc[test_ix] Giving: print(X_train) x id 4 50 2 5 60 2 8 90 4 9 100 4 10 110 5 11 120 5 And for the test set: print(X_test) x id 0 10 1 1 20 1 2 30 1 3 40 1 6 70 3 7 80 3
10
10
61,337,007
2020-4-21
https://stackoverflow.com/questions/61337007/pysftp-library-not-working-in-aws-lambda-layer
I want to upload files to an EC2 instance using the pysftp library (Python script). So I created a small Python script which uses the line below to connect: pysftp.Connection( host=Constants.MY_HOST_NAME, username=Constants.MY_EC2_INSTANCE_USERNAME, private_key="./mypemfilelocation.pem", ) some code here ..... pysftp.put(file_to_be_upload, ec2_remote_file_path) This script uploads files from my local Windows machine to the EC2 instance using the .pem file, and it works correctly. Now I want to do the same thing with AWS Lambda and API Gateway. So I uploaded the Python script to AWS Lambda. I was not sure how to use the pysftp library in AWS Lambda, so I found the suggestion to add pysftp as an AWS Lambda layer. I did that with pip3 install pysftp -t ./library_folder and then made a zip of that folder and added it as an AWS Lambda layer. But I still got many errors, one after another: No module named 'pysftp' No module named 'paramiko' Undefined Symbol: PyInt_FromLong cannot import name '_bcrypt' from partially initialized module 'bcrypt' (most likely due to a circular import) cffi module not found I'm fed up with these errors and couldn't find a proper solution. How can I use the pysftp library in my AWS Lambda seamlessly?
I build pysftp layer and tested it on my lambda with python 3.8. Just to see import and basic print: import json import pysftp def lambda_handler(event, context): # TODO implement print(dir(pysftp)) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } I used the following docker tool to build the pysftp layer: https://github.com/lambci/docker-lambda So what I did for pysftp was: # create pysftp fresh python 3.8 environment python -m venv pysftp # activate it source pysftp/bin/activate cd pysftp # install pysftp in the environemnt pip3 install pysftp # generate requirements.txt pip freeze > requirements.txt # use docker to construct the layer docker run --rm -v `pwd`:/var/task:z lambci/lambda:build-python3.8 python3.8 -m pip --isolated install -t ./mylayer -r requirements.txt zip -r pysftp-layer.zip . And the rest is uploading the zip into s3, creating new layer in AWS console, setting Compatible runtime to python 3.8 and using it in my test lambda function. You can also check here how to use this docker tool (the docker command I used is based on what is in that link). Hope this helps
8
8
61,334,085
2020-4-21
https://stackoverflow.com/questions/61334085/breaking-change-for-google-api-python-client-1-8-1-attributeerror-module-goo
After upgrading to the new google-api-python-client 1.8.1 I'm receiving this error. Do we know if python 3.8 breaks the latest google-api-core? And whether there's a solution Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker worker.init_process() File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 129, in init_process self.load_wsgi() File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi self.wsgi = self.app.wsgi() File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 52, in load return self.load_wsgiapp() File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp return util.import_app(self.app_uri) File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 350, in import_app __import__(module) File "/prosebit/prosebit/app.py", line 10, in <module> from prosebit.blueprints.page import page File "/prosebit/prosebit/blueprints/page/__init__.py", line 1, in <module> from prosebit.blueprints.page.views import page File "/prosebit/prosebit/blueprints/page/views.py", line 4, in <module> from prosebit.blueprints.page.gmail import get_google_auth, ListHistory, add_to_watch, is_meter_data, ListMessagesMatchingQuery, GetMessage, send_message, get_daily_threadid, ModifyMessage, create MsgLabels, get_new_messages File "/prosebit/prosebit/blueprints/page/gmail.py", line 32, in <module> from apiclient import errors File "/usr/local/lib/python3.8/site-packages/apiclient/__init__.py", line 22, in <module> __version__ = googleapiclient.__version__ AttributeError: module 'googleapiclient' has no attribute '__version__' Here are my dependencies: Successfully installed Blinker-1.4 Click-6.4 Flask-1.1.1 Flask-Login-0.4.1 Flask-Mail-0.9.1 Flask- SQLAlchemy-2.4.1 Flask-WTF-0.14.3 Jinja2-2.11.2 MarkupSafe-1.1.1 SQLAlchemy-1.3.7 SQLAlchemy-Utils-0.36.3 WTForms-2. 2.1 WTForms-Alchemy-0.16.9 WTForms-Components-0.10.4 amqp-2.5.2 atomicwrites-1.3.0 attrs-19.3.0 billiard-3.6.3.0 cachetools-4.1.0 celery-4.4.0 certifi-2020.4.5.1 chardet-3.0.4 coverage-5.1 decorator-4.4.2 entrypoin ts-0.3 faker-3.0.0 flake8-3.7.8 flask-debugtoolbar-0.10.1 gocardless-pro-1.10.0 google-api-core-1.16.0 google-api-python-client-1.8.1 google-auth-1.14.0 google-auth-httplib2-0.0.3 google-auth-oauthlib-0.4.1 google- cloud-pubsub-1.4.3 googleapis-common-protos-1.51.0 grpc-google-iam-v1-0.12.3 grpcio-1.28.1 gunicorn-19.9.0 httplib2-0.17.2 idna-2.9 infinity-1.4 intervals-0.8.1 itsdangerous-1.1.0 kombu-4.6.8 mccabe-0.6.1 mock-3.0. 5 more-itertools-8.2.0 oauth2client-4.1.3 oauthlib-3.1.0 packaging-20.3 pluggy-0.13.1 protobuf-3.11.3 psycopg2-2.8.4 py-1.8.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycodestyle-2.5.0 pyflakes-2.1.1 pyparsing-2.4.7 pytes t-5.1.0 pytest-cov-2.7.1 python-dateutil-2.8.1 pytz-2019.3 redis-3.3.11 requests-2.23.0 requests-oauthlib-1.3.0 rsa-4.0 six-1.14.0 stripe-2.41.0 text-unidecode-1.3 uritemplate-3.0.1 urllib3-1.25.9 validators-0.14.3 vine-1.3.0 wcwidth-0.1.9 werkzeug-1.0.0
I had the same error and changing the import fixed it for me. The developers recommend importing from googleapiclient instead of apiclient. So you will need to change from apiclient import errors to from googleapiclient import errors
8
9
61,328,571
2020-4-20
https://stackoverflow.com/questions/61328571/standard-init-linux-go211-exec-user-process-caused-no-such-file-or-directory
Dockerfile FROM python:3.7.4-alpine ENV PYTHONUNBUFFERED 1 ENV PYTHONDONTWRITEBYTECODE 1 ENV LANG C.UTF-8 MAINTAINER "[email protected]" RUN apk update && apk add postgresql-dev gcc musl-dev RUN apk --update add build-base jpeg-dev zlib-dev RUN pip install --upgrade setuptools pip RUN mkdir /code WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ #CMD ["gunicorn", "--log-level=DEBUG", "--timeout 90", "--bind", "0.0.0.0:8000", "express_proj.wsgi:application"] ENTRYPOINT ["./docker-entrypoint.sh"] docker-entrypoint.sh #!/bin/bash # Prepare log files and start outputting logs to stdout touch /code/gunicorn.log touch /code/access.log tail -n 0 -f /code/*.log & # Start Gunicorn processes echo Starting Gunicorn. exec gunicorn express_proj.wsgi:application \ --name express \ --bind 0.0.0.0:8000 \ --log-level=info \ --log-file=/code/gunicorn.log \ --access-logfile=/code/access.log \ --workers 2 \ --timeout 90 \ "$@" Getting Error standard_init_linux.go:211: exec user process caused "no such file or directory" Need help. Some saying to use dos2unix(i do not know hoe to use it.)
The "shebang" line at the start of a script says what interpreter to use to run it. In your case, your script has specified #!/bin/bash, but Alpine-based Docker images don't typically include GNU bash; instead, they have a more minimal /bin/sh that includes just the functionality in the POSIX shell specification. Your script isn't using any of the non-standard bash extensions, so you can just change the start of the script to #!/bin/sh
10
18
61,325,817
2020-4-20
https://stackoverflow.com/questions/61325817/differences-between-matplotlib-and-matplotlib-base
While updating my packages I've noticed that there is a package named "matplotlib-base". I couldn't figure out what the difference to "matplotlib" is, neither on the official website nor here on Stack Overflow, and I also couldn't find any repository to compare the code. Any ideas?
The packages are similar, but differ in their dependencies: matplotlib depends on matplotlib-base and pyqt. Therefore installing matplotlib will also pull in the qt stack, while installing matplotlib-base does not. Users that do not need qt backends and prefer a slim installation will prefer matplotlib-base over matplotlib. See also: https://conda-forge.org/docs/maintainer/knowledge_base.html#matplotlib
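For example, with matplotlib-base you can still render figures headlessly by picking a backend that does not need Qt, such as Agg:

import matplotlib
matplotlib.use("Agg")  # non-interactive backend, no Qt required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
fig.savefig("plot.png")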
16
19
61,327,917
2020-4-20
https://stackoverflow.com/questions/61327917/why-does-using-return-a-series-instead-of-bool-in-pandas
I just can't figure out what "==" means on the second line: - It is not a test, there is no if statement... - It is not a variable declaration... I've never seen this before; the thing is, data.categ == cat is a pandas Series and not a test... for cat in data["categ"].unique(): subset = data[data.categ == cat] # Create the subset print("-"*20) print('Catégorie : ' + cat) print("moyenne:\n",subset['montant'].mean()) print("mediane:\n",subset['montant'].median()) print("mode:\n",subset['montant'].mode()) print("VAR:\n",subset['montant'].var()) print("EC:\n",subset['montant'].std()) plt.figure(figsize=(5,5)) subset["montant"].hist(bins=30) # Create the histogram plt.show() # Display the histogram
It is testing each element of data.categ for equality with cat. That produces a vector of True/False values. This is passed as an indexer to data[], which returns the rows from data that correspond to the True values in the vector. To summarize, the whole expression returns the subset of rows from data where the value of data.categ equals cat. (It seems the whole operation could be done more elegantly using data.groupby('categ').apply(someFunc).)
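A minimal illustration, reusing the column names from the question's snippet:

import pandas as pd

data = pd.DataFrame({"categ": ["a", "b", "a"], "montant": [10, 20, 30]})

mask = data.categ == "a"   # element-wise comparison -> boolean Series
print(mask.tolist())       # [True, False, True]
print(data[mask])          # only the rows where the mask is True

# the groupby equivalent of looping over categories
print(data.groupby("categ")["montant"].mean())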
13
13
61,327,385
2020-4-20
https://stackoverflow.com/questions/61327385/trying-to-use-on-path-object-python
The program takes an optional command line argument (which is meant to be a directory path). I am using python pathlib and shutil to move files. Here's the code: from pathlib import Path path = Path(sys.argv[1]) shutil.move(path / file, path / e.upper()) Where e is just a string representing a certain file extension; Input: python3 app.py /home/user/Desktop This code generates an error: 'PosixPath' object has no attribute 'rstrip' The / operator works fine if I don't specify the second argument in the command line (and use Path.cwd() as the path instead)
Use the rename function of Path to move a file, if you're using the pathlib module. ie. (path / file).rename(path / e.upper()) Otherwise, if you wish to use the shutil module, then you must convert your paths to strings before passing them to shutil.move() ie. shutil.move(str(path / file), str(path / e.upper()))
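For reference, a sketch of the shutil variant with the paths converted to strings; the file name and extension are made up, and the target folder is created first so the move has somewhere to go:

import shutil
from pathlib import Path

path = Path("/home/user/Desktop")   # directory passed on the command line in the question
file = "report.txt"                 # hypothetical file name
e = "txt"                           # hypothetical extension string

(path / e.upper()).mkdir(exist_ok=True)
shutil.move(str(path / file), str(path / e.upper()))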
9
16
61,319,140
2020-4-20
https://stackoverflow.com/questions/61319140/difference-between-numpy-and-tensorflow
Are NumPy and TensorFlow the same thing? I just started learning programming; I was learning AI and found TensorFlow. I started to look at videos and I saw the code snippets below: import tensorflow as tf tf.ones([1,2,3]) tf.zeros([2,3,2]) import numpy as np np.zeros([2,3,2]) np.ones([1,2,3])
Although the method names and parameters look identical, they are not the same thing. This becomes clear if you assign the results to variables and inspect them in a debugger: TensorFlow gives you an EagerTensor, while NumPy gives you an ndarray. TensorFlow is a library for artificial intelligence, especially machine learning; NumPy is a library for numerical calculations. They are often used in combination, because data often needs to be pre-processed, which can be done with NumPy, before doing the machine learning on the processed data with TensorFlow.
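If you prefer not to fire up a debugger, the same check in plain code (assuming TensorFlow 2.x with eager execution):

import numpy as np
import tensorflow as tf

t = tf.ones([1, 2, 3])
a = np.ones([1, 2, 3])

print(type(t))  # an EagerTensor class from tensorflow
print(type(a))  # <class 'numpy.ndarray'>

# they interoperate: tensors convert to arrays and back
print(t.numpy().shape)                # (1, 2, 3)
print(tf.convert_to_tensor(a).shape)  # TensorShape([1, 2, 3])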
9
2
61,313,365
2020-4-20
https://stackoverflow.com/questions/61313365/pandas-futurewarning-columnar-iteration-over-characters-will-be-deprecated-in-f
I have an existing solution to split a dataframe with one column into 2 columns. df['A'], df['B'] = df['AB'].str.split(' ', 1).str Recently, I got the following warning FutureWarning: Columnar iteration over characters will be deprecated in future releases. How to fix this warning? I'm using python 3.7
That's not entirely correct, plus the trailing .str does not make sense. Since split with expand returns a DataFrame, this is easier: df[['A', 'B']] = df['AB'].str.split(' ', n=1, expand=True) Your existing method without expand returns a single Series with a list of columns. I'm not sure what version of pandas used to work with your code but AFAIK you'll need to make some tweaks for this to work with pandas (>= 1.0) today. Assignment in this way is tedious but still possible. s = df['AB'].str.split(' ', n=1) df['A'], df['B'] = s.str[0], s.str[1] I prefer the expand solution as it's a line shorter.
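A small self-contained example of the expand version, with made-up data:

import pandas as pd

df = pd.DataFrame({"AB": ["foo bar baz", "spam eggs"]})
df[["A", "B"]] = df["AB"].str.split(" ", n=1, expand=True)
print(df)
#             AB     A        B
# 0  foo bar baz   foo  bar baz
# 1    spam eggs  spam     eggs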
14
23
61,305,921
2020-4-19
https://stackoverflow.com/questions/61305921/pandas-select-columns-using-list-but-ignore-missing-column-names
I have a dataframe 'A' 'B' 'C' 'X' ,'Y' , 'Z' 0 1 2 3 4 5 and a list l=[A,B,C,D,E,F] I want to use that that list to select columns that are in the list but ignore the ones that don't appear. So the expected output is 'A' 'B' 'C' 0 1 2 3 4 5
Use DataFrame.loc to select all rows with : and the columns with a boolean mask created by Index.isin: df = df.loc[:, df.columns.isin(l)] Or get the column names with Index.intersection: df = df[df.columns.intersection(l)] print (df) A B C 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 NaN NaN NaN 4 NaN NaN NaN 5 NaN NaN NaN
7
22
61,292,759
2020-4-18
https://stackoverflow.com/questions/61292759/how-to-fill-elements-between-intervals-of-a-list
I have a list like this: list_1 = [np.NaN, np.NaN, 1, np.NaN, np.NaN, np.NaN, 0, np.NaN, 1, np.NaN, 0, 1, np.NaN, 0, np.NaN, 1, np.NaN] So there are intervals that begin with 1 and end with 0. How can I replace the values in those intervals, say with 1? The outcome will look like this: list_2 = [np.NaN, np.NaN, 1, 1, 1, 1, 0, np.NaN, 1, 1, 0, 1, 1, 0, np.NaN, 1, np.NaN] I use NaN in this example, but a generalized solution that can apply to any value will also be great
Pandas solution: s = pd.Series(list_1) s1 = s.eq(1) s0 = s.eq(0) m = (s1 | s0).where(s1.cumsum().ge(1),False).cumsum().mod(2).eq(1) s.loc[m & s.isna()] = 1 print(s.tolist()) #[nan, nan, 1.0, 1.0, 1.0, 1.0, 0.0, nan, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, nan, 1.0, 1.0] but if there is only 1, 0 or NaN you can do: s = pd.Series(list_1) s.fillna(s.ffill().where(lambda x: x.eq(1))).tolist() output [nan, nan, 1.0, 1.0, 1.0, 1.0, 0.0, nan, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, nan, 1.0, 1.0]
9
5
61,257,658
2020-4-16
https://stackoverflow.com/questions/61257658/python-dataclasses-mocking-the-default-factory-in-a-frozen-dataclass
I'm attempting to use freezegun in my unit tests to patch a field in a dataclass that is set to the current date when the object is initialised. I would imagine the question is relevant to any attempt to patch a function being used as a default_factory outside of just freezegun. The dataclass is frozen so its immutable. For example if my dataclass is: @dataclass(frozen=True) class MyClass: name: str timestamp: datetime.datetime = field(init=False, default_factory=datetime.datetime.now) When I patch datetime with freezegun, it has no impact on the initialisation of the timestamp in MyClass (it still sets timestamp to the current date returned by now() in the unit test, causing the test to fail). I'm assuming it has to do with the default factory and module being loaded well before the patch is in place. I have tried patching datetime, and then reloading the module with importlib.reload but with no luck. The solution I have at the moment is: @dataclass(frozen=True) class MyClass: name: str timestamp: datetime.datetime = field(init=False) def __post_init__(self): object.__setattr__(self, "timestamp", datetime.datetime.now()) which works. Ideally though, I would like a non-invasive solution that doesn't require me changing my production code to enable my unit tests.
You're right, the dataclass creation process does something strange here which leads to your current problem. It binds the factory function during class creation, which means that it holds a reference of the code before freezegun had a chance to patch it. Here is an example without dataclasses that runs into the same issue: from datetime import datetime from freezegun import freeze_time class Foo: # looks up the function at class creation time now_func = datetime.now def __init__(self): # asks datetime for a reference at instance creation time self.timestamp_a = datetime.now() # uses an old reference we couldn't patch self.timestamp_b = Foo.now_func() with freeze_time(datetime(2020, 1, 1)): foo = Foo() assert foo.timestamp_a == datetime(2020, 1, 1) # works assert foo.timestamp_b == datetime(2020, 1, 1) # raises an AssertionError As to how to solve the problem, you can theoretically hack MyClass.__init__.__closure__ during your tests to switch out the functions, but that's a bit mad. Something that is still a bit better than overwriting timestamp in a __post_init__ might be to just delegate the function call with a lambda so that the name lookup is delayed to instantiation time: timestamp: datetime = field(init=False, default_factory=lambda: datetime.now()) Or you can start using a different datetime library like pendulum that supports freezing time out of the box. FWIW, this is what I ended up doing.
8
11
61,273,244
2020-4-17
https://stackoverflow.com/questions/61273244/how-to-downgrade-torch-version-for-google-colab
I would like to downgrade the Torch version used in my Google Colab notebooks. How could I do that?
Run from your cell: !pip install torch==version Where version could be, for example, 1.3.0 (the default is 1.4.0). You may have to downgrade torchvision appropriately as well, so you would go with: !pip install torch==1.3.0 torchvision==0.4.1 On the other hand, PyTorch provides backward compatibility between major versions, so there should be no need for downgrading. Remember you have to restart your Google Colab runtime for the changes to take effect.
10
7
61,269,796
2020-4-17
https://stackoverflow.com/questions/61269796/aws-lambda-returning-json-data-as-string
I am using AWS Lambda to create my APIs and want to return an array's data in JSON format. However, when I call the lambda, it is able to return the required JSON data but it is coming as a string in double quotes. I tried running the same code in my Python IDE and everything works fine but when I try to return it in Lambda it is coming as a string. Has this got something to do with how Lambda handles the return statement in Python functions? This is how I'm returning data in my Lambda: return { 'statusCode': 200, 'headers': { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": 'GET, POST, PUT, DELETE, OPTIONS' }, 'body': json.dumps(json_data,default = myconverter) } Here json_data is a python list which is populated with the data that is being retrieved from the database for the specific unique ID passed by the user and myconverter is the JSON encoder that I've written. The output that I'm getting is: { "statusCode": 200, "headers": { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS" }, "body": "[{\"Dp_Record_Id\": 2, \"DP_TYPE\": \"NSDL\", \"DP_ID\": \"40877589\", \"CLIENT_ID\": \"1232\", \"Default_flag\": \"Y\"}]" } Here, I want the "body" to just return an array of the data without double quotes, like this: "body": [{\"Dp_Record_Id\": 2, \"DP_TYPE\": \"NSDL\", \"DP_ID\": \"40877589\", \"CLIENT_ID\": \"1232\", \"Default_flag\": \"Y\"}] Please let me know if this is possible and how can it be done. Any help on this would be appreciated
I'm not sure if this is what you want, but you can just do 'body': json_data. I tested this now in my λ function: Lambda function import json def lambda_handler(event, context): json_data = [{"Dp_Record_Id": 2, "DP_TYPE": "NSDL", "DP_ID": "40877589", "CLIENT_ID": "1232", "Default_flag": "Y"}] return {'statusCode': 200, 'headers': { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": 'GET, POST, PUT, DELETE, OPTIONS' }, 'body': json_data } Result Response: { "statusCode": 200, "headers": { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS" }, "body": [ { "Dp_Record_Id": 2, "DP_TYPE": "NSDL", "DP_ID": "40877589", "CLIENT_ID": "1232", "Default_flag": "Y" } ] }
10
9
61,261,907
2020-4-16
https://stackoverflow.com/questions/61261907/on-colab-class-weight-is-causing-a-valueerror-the-truth-value-of-an-array-wit
i'm running a CNN with keras sequential on google colab. i'm getting the following error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() when i remove the class_weight argument from the model.fit function, the error is gone and the network is trained succesfully. however, i really want to account for unbalanced data i checked the shape of my class_weights vector and it's good (and nd.array, just like you would get when generating class_Weights from sklearn compute class weights function ) not sure what details are relevant but i wil gladly provide more details regarding version and all that mess. p.s a fact that might be important - my data is the FER2013 data and i'm using FERplus labels. meaning, my samples are not associated with one unique class, rather each sample has it's own probability distribution for each class. bottom line, my labels are vectors of size class_names with all elements adding up to one. just to be super clear, an example: img1 label = [0,0,0,0,0.2,0,0.3,0,0,0.5] anyhow, i computed class_weights as an nd.array of size 10 with elements ranging between 0 and 1, supposed to balance down the more represented classes. i was not sure if that is relevant to the error, but i'm bringing it up just in case. my code: def create_model_plus(): return tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=32,kernel_size=5,strides=1,input_shape=(48, 48, 1),padding='same',use_bias=True,kernel_initializer='normal',bias_initializer=tf.keras.initializers.Constant(0.1),activation=tf.nn.relu), tf.keras.layers.BatchNormalization(), tf.keras.layers.MaxPooling2D((2, 2), strides=2), tf.keras.layers.Conv2D(filters=64,kernel_size=5,strides=1,padding='same',use_bias=True,kernel_initializer='normal',bias_initializer=tf.keras.initializers.Constant(0.1),activation=tf.nn.relu), tf.keras.layers.BatchNormalization(), tf.keras.layers.MaxPooling2D((2, 2), strides=1), tf.keras.layers.Conv2D(filters=128,kernel_size=5,strides=1,padding='same',use_bias=True,kernel_initializer='normal',bias_initializer=tf.keras.initializers.Constant(0.1),activation=tf.nn.relu), tf.keras.layers.BatchNormalization(), tf.keras.layers.MaxPooling2D((2, 2), strides=1), tf.keras.layers.Flatten(), tf.keras.layers.Dense(1008, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) history_df=[] history_object=tf.keras.callbacks.History() #save_best_object=tf.keras.callbacks.ModelCheckpoint('/Users/nimrodros', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1) early_stop_object=tf.keras.callbacks.EarlyStopping(monitor='val_loss',min_delta=0.001, patience=4) gony_adam=tf.keras.optimizers.Adam( lr=0.001 ) reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.3,patience=3, min_lr=0.0001, verbose=1) #log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") #tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) datagen = tf.keras.preprocessing.image.ImageDataGenerator(rotation_range=8, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, validation_split=0.3 ) datagen.fit(images.reshape(28709,48,48,1)) model = create_model_plus() model.compile(optimizer=gony_adam, loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(x=datagen.flow(images.reshape(28709,48,48,1), FER_train_labels, 
batch_size=32,subset='training'),validation_data=datagen.flow(images.reshape(28709,48,48,1), FER_train_labels, batch_size=32,subset='validation'),steps_per_epoch=600,validation_steps=250,epochs=60,callbacks=[history_object,early_stop_object,reduce_lr],class_weight=cl_weigh) history_df=pd.DataFrame(history.history) hope someone knows what to do! thanks!!!
The problem is that the sklearn API returns a numpy array but Keras requires a dictionary as the input for class_weight (see here). You can resolve the error with the method below: import numpy as np from sklearn.utils import class_weight weight = class_weight.compute_class_weight('balanced', np.unique(y_train), y_train) weight = {i: weight[i] for i in range(len(weight))}
14
27
61,266,275
2020-4-17
https://stackoverflow.com/questions/61266275/epoch-1-2-103-unknown-8s-80ms-step-loss-0-0175-model-fit-keeps-running-f
I am developing autoencoder on the dataset https://www.kaggle.com/jessicali9530/celeba-dataset. import tensorflow tensorflow.__version__ Output: '2.2.0-rc3' from tensorflow.keras.preprocessing import image data_gen = image.ImageDataGenerator(rescale=1.0/255) batch_size = 20 train_data_gen = data_gen.flow_from_directory(directory=train_dest_path, target_size=(256, 256), batch_size=batch_size, shuffle=True, class_mode = 'input') test_data_gen = data_gen.flow_from_directory(directory=test_dest_path, target_size=(256,256), batch_size=batch_size, shuffle=True, class_mode= 'input') # autoencoder from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D from tensorflow.keras import Model from tensorflow.keras.optimizers import Adam, SGD #parameters inchannel = 3 x, y = 256, 256 input_img = Input(shape=(x,y,inchannel)) def autoencoder_model(input_img): #encoder conv1 = Conv2D(32, kernel_size=(3,3), activation='relu', padding='same')(input_img) pool1 = MaxPooling2D(pool_size=(2,2))(conv1) conv2 = Conv2D(64, kernel_size=(3,3), activation='relu', padding='same')(pool1) pool2 = MaxPooling2D(pool_size=(2,2))(conv2) conv3 = Conv2D(128, kernel_size=(3,3), activation='relu', padding='same')(pool2) #decoder conv4 = Conv2D(128, kernel_size=(3,3), activation='relu', padding='same')(conv3) pool3 = UpSampling2D(size=(2,2))(conv4) conv5 = Conv2D(64, kernel_size=(3,3), activation='relu', padding='same')(pool3) pool4 = UpSampling2D(size=(2,2))(conv5) decoded = Conv2D(3, kernel_size=(3,3), activation='relu', padding='same')(pool4) return decoded model = Model(inputs=input_img, outputs=autoencoder_model(input_img)) model.compile(loss='mean_squared_error', optimizer=Adam()) model.summary() Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 256, 256, 3)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 256, 256, 32) 896 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 128, 128, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 64, 64, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 64, 64, 128) 73856 _________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 128) 147584 _________________________________________________________________ up_sampling2d (UpSampling2D) (None, 128, 128, 128) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 128, 128, 64) 73792 _________________________________________________________________ up_sampling2d_1 (UpSampling2 (None, 256, 256, 64) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 256, 256, 3) 1731 ================================================================= Total params: 316,355 Trainable params: 316,355 Non-trainable params: 0 from tensorflow.keras.callbacks import ModelCheckpoint epochs = 2 num_training_steps = train_data_gen.samples/batch_size checkpoint_directory = '/gdrive/My Drive/Colab Notebooks' checkpoint = ModelCheckpoint(checkpoint_directory, verbose=1, save_weights_only=False, save_freq='epoch') model.fit(train_data_gen, epochs=epochs, verbose=1, 
callbacks=[checkpoint]) Output: Epoch 1/2 103/Unknown - 8s 80ms/step - loss: 0.0175 After spending a lot of time, I am still not able to understand why I am getting "Unknown" in the output of model.fit(). Also, model.fit() keeps running forever even if I take only 1000 images from the training dataset in flow_from_directory(). It goes above 1000 and I am not able to understand why it is acting like that.
When executing model.fit with a generator as input, you have to set the steps_per_epoch argument. For generators Keras can't know how many batches they will output (and in this case they go on forever), so set it to the number of images in your dataset divided by your batch size.
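A sketch of how that could look with the generator from the question (train_data_gen, batch_size, epochs and checkpoint as defined there):
steps = train_data_gen.samples // batch_size  # images in the dataset / batch size
model.fit(train_data_gen,
          epochs=epochs,
          steps_per_epoch=steps,
          verbose=1,
          callbacks=[checkpoint])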
9
16
61,249,612
2020-4-16
https://stackoverflow.com/questions/61249612/error-unable-to-download-video-data-http-error-403-forbidden-while-using-yout
I am trying to download songs from youtube using python 3.8 and youtube_dl 2020.3.24. But the weird thing is that most songs I try to download don't get downloaded. I'm talking 99% of them. The ones that do get downloaded get the following Error from youtube_dl: ERROR: unable to download video data: HTTP Error 403: Forbidden It is worth saying that this happened overnight and I did not change any code. before this everything worked fine. I have friends who ran the same code and they did not get this Error
I've had the same problem many times. Solution: youtube-dl --rm-cache-dir Cause of the problem: sometimes I download playlists of large videos and force the download to stop; the next time I run the command to resume the download, the 403 problem arises. At the moment, the cache directory is used only to store YouTube players for obfuscated signatures. Since all videos in a playlist use simple signatures, playlist caching is an obvious way to detect changed titles or changed playlists in general.
25
40
61,250,311
2020-4-16
https://stackoverflow.com/questions/61250311/error-importing-bert-module-tensorflow-api-v2-train-has-no-attribute-optimi
I tried to use bert-tensorflow in Google Colab, but I got the following error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () 1 import bert ----> 2 from bert import run_classifier_with_tfhub # run_classifier 3 from bert import optimization 4 from bert import tokenization 1 frames /usr/local/lib/python3.6/dist-packages/bert/optimization.py in () 85 86 ---> 87 class AdamWeightDecayOptimizer(tf.train.Optimizer): 88 """A basic Adam optimizer that includes "correct" L2 weight decay.""" 89 AttributeError: module 'tensorflow._api.v2.train' has no attribute 'Optimizer' Here is the code I tried: Install the libraries: !pip install --upgrade --force-reinstall tensorflow !pip install --upgrade --force-reinstall tensorflow-gpu !pip install tensorflow_hub !pip install sentencepiece !pip install bert-tensorflow Run this code: from sklearn.model_selection import train_test_split import pandas as pd from datetime import datetime from tensorflow.keras import optimizers import bert from bert import run_classifier from bert import optimization from bert import tokenization I've also tried import tensorflow.compat.v1 as tf tf.disable_v2_behavior() But got the same error.
I did some experimentation in my own Colab notebook (please provide a link next time) and found that the error message points to the class header class AdamWeightDecayOptimizer(tf.train.Optimizer): But there is no tf.train.Optimizer in TensorFlow 2; instead the header should be: class AdamWeightDecayOptimizer(tf.compat.v1.train.Optimizer): The link to the issue with exactly the same line is here
10
9
61,251,473
2020-4-16
https://stackoverflow.com/questions/61251473/compare-lists-in-the-same-dictionary-of-lists
In summary, I have two keys in the same dictionary where each one has its corresponding list. I try to compare both lists to check common and differential elements. That means that in the output I will count how many elements are identical or present in only one key's list. From the beginning I am inserting the elements using the files as arguments and they are read in the function def shared(list): dict_shared = {} for i in list: infile = open(i, 'r') if i not in dict_shared: dict_shared[i] = [] for line in infile: dict_shared[spacer].append(record.id) return dict_shared Now I am stuck trying to find a way to compare the lists created and present in the dictionary. dict = {a:[1,2,3,4,5], b:[2,3,4,6]} My intention is to compare the lists in order to have the lines shared between the two texts. a: [1,5] b: [6] a-b: [2,3,4] For now I can't find a way to solve this. Any suggestion?
A solution with list comprehensions would be: dictionary = {'a':[1,2,3,4,5], 'b':[2,3,4,6]} only_in_a = [x for x in dictionary['a'] if x not in dictionary['b']] only_in_b = [x for x in dictionary['b'] if x not in dictionary['a']] in_both = [x for x in dictionary['a'] if x in dictionary['b']] Note that this is not especially wise in terms of complexity for larger lists; see the set-based sketch below.
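If the elements are hashable, sets make the same comparisons clearer and faster; a sketch using the dictionary from the question:
dictionary = {'a': [1, 2, 3, 4, 5], 'b': [2, 3, 4, 6]}
a, b = set(dictionary['a']), set(dictionary['b'])
only_in_a = sorted(a - b)  # [1, 5]
only_in_b = sorted(b - a)  # [6]
in_both = sorted(a & b)    # [2, 3, 4]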
7
4
61,234,309
2020-4-15
https://stackoverflow.com/questions/61234309/how-to-give-space-between-two-dcc-components-in-python-dash
What is the HTML equivalent for &nbsp (space) in Dash? html.Div( [ dcc.Input(), <add horizontal space here> dcc.Input() ] )
If you want to add some space between components you can simply use CSS-properties for this: html.Div( [ dcc.Input(), dcc.Input(style={"margin-left": "15px"}) ] ) This adds a margin to the left of your second Input. Have a look at the layout-section in the Plotly Dash documentation and CSS documentation about margin: https://dash.plotly.com/layout https://www.w3schools.com/csSref/pr_margin-left.asp
19
30
61,242,966
2020-4-16
https://stackoverflow.com/questions/61242966/pytorch-attributeerror-function-object-has-no-attribute-copy
I am trying to load a model state_dict I trained on Google Colab GPU, here is my code to load the model: device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = models.resnet50() num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs, n_classes) model.load_state_dict(copy.deepcopy(torch.load("./models/model.pth",device))) model = model.to(device) model.eval() Here is the error: state_dict = state_dict.copy() AttributeError: 'function' object has no attribute 'copy' Pytorch : >>> import torch >>> print (torch.__version__) 1.4.0 >>> import torchvision >>> print (torchvision.__version__) 0.5.0 Please help, I have searched everywhere to no avail. Full error details: https://i.sstatic.net/s22DL.png
I am guessing this is what you did by mistake. You saved the function torch.save(model.state_dict, 'model_state.pth') instead of the state_dict() torch.save(model.state_dict(), 'model_state.pth') Otherwise, everything should work as expected. (I tested the following code on Colab) Replace model.state_dict() with model.state_dict to reproduce error import copy model = TheModelClass() torch.save(model.state_dict(), 'model_state.pth') device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model.load_state_dict(copy.deepcopy(torch.load("model_state.pth",device)))
16
32
61,147,405
2020-4-10
https://stackoverflow.com/questions/61147405/how-do-i-setup-my-own-time-zone-in-django
I live in Chittagong, Bangladesh and my time zone is GMT+6. How can i change to this time zone in Django settings?
You can specify the timezone as 'Asia/Dhaka' in the TIME_ZONE setting [Django-doc]: # settings.py TIME_ZONE = 'Asia/Dhaka' # … Note that if USE_TZ setting [Django-doc] is set to True, then: When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms.
11
16
61,153,546
2020-4-11
https://stackoverflow.com/questions/61153546/addition-subtraction-of-integers-and-integer-arrays-with-timestamp-is-no-longer
I am using pytrends library to extract google trends and i am getting the following error: Addition/subtraction of integers and integer-arrays with Timestamp is no longer supported. Instead of adding/subtracting n, use n * obj.freq timeframes = [] datelist = pd.date_range('2004-01-01', '2018-01-01', freq="AS") date = datelist[0] while date <= datelist[len(datelist)-1]: start_date = date.strftime("%Y-%m-%d") end_date = (date+4).strftime("%Y-%m-%d") timeframes.append(start_date+' '+end_date) date = date+3
You can't sum a date and a number like date+4 because who knows which unit this is, 4h, 4d,... ? You may use datetime.timedelta, here's an example if you meant days from datetime import timedelta end_date = (date + timedelta(days=4)).strftime("%Y-%m-%d") # ... date = date + timedelta(days=3)
28
30
61,149,803
2020-4-10
https://stackoverflow.com/questions/61149803/threads-is-not-executing-in-parallel-python-with-threadpoolexecutor
I'm new in python threading and I'm experimenting this: When I run something in threads (whenever I print outputs), it never seems to be running in parallel. Also, my functions take the same time that before using the library concurrent.futures (ThreadPoolExecutor). I have to calculate the gains of some attributes over a dataset (I cannot use libraries). Since I have about 1024 attributes and the function was taking about a minute to execute (and I have to use it in a for iteration) I dicided to split the array of attributes into 10 (just as an example) and run the separete function gain(attribute) separetly for each sub array. So I did the following (avoiding some extra unnecessary code): def calculate_gains(self): splited_attributes = np.array_split(self.attributes, 10) result = {} for atts in splited_attributes: with concurrent.futures.ThreadPoolExecutor() as executor: future = executor.submit(self.calculate_gains_helper, atts) return_value = future.result() self.gains = {**self.gains, **return_value} Here's the calculate_gains_helper: def calculate_gains_helper(self, attributes): inter_result = {} for attribute in attributes: inter_result[attribute] = self.gain(attribute) return inter_result Am I doing something wrong? I read some other older posts but I couldn't get any info. Thanks a lot for any help!
I had this same trouble and fixed by moving the iteration to within the context of the ThreadPoolExecutor, or else, you'll have to wait for the context to finish and start another one. Here is a probably fix for your code: def calculate_gains(self): splited_attributes = np.array_split(self.attributes, 10) result = {} with concurrent.futures.ThreadPoolExecutor() as executor: for atts in splited_attributes: future = executor.submit(self.calculate_gains_helper, atts) return_value = future.result() self.gains = {**self.gains, **return_value} To demonstrate better what I mean here is a sample code: Below is a non working code. Threads will execute synchronoulsly... from concurrent.futures import ThreadPoolExecutor, as_completed from time import sleep def t(reference): i = 0 for i in range(10): print(f"{reference} :" + str(i)) i+=1 sleep(1) futures = [] refs = ["a", "b", "c"] for i in refs: with ThreadPoolExecutor(max_workers=3) as executor: futures.append(executor.submit(t, i)) for future in as_completed(futures): print(future.result()) Here is the fixed code: from concurrent.futures import ThreadPoolExecutor, as_completed from time import sleep def t(reference): i = 0 for i in range(10): print(f"{reference} :" + str(i)) i+=1 sleep(1) futures = [] refs = ["a", "b", "c"] with ThreadPoolExecutor(max_workers=3) as executor: #swapped for i in refs: #swapped futures.append(executor.submit(t, i)) for future in as_completed(futures): print(future.result()) You can try this on your terminal and check out the outputs.
11
1
61,140,398
2020-4-10
https://stackoverflow.com/questions/61140398/fastapi-return-a-file-response-with-the-output-of-a-sql-query
I'm using FastAPI and currently I return a csv which I read from SQL server with pandas. (pd.read_sql()) However the csv is quite big for the browser and I want to return it with a File response: https://fastapi.tiangolo.com/advanced/custom-response/ (end of the page). I cannot seem to do this without first writing it to a csv file which seems slow and will clutter the filesystem with csv's on every request. So my questions way, is there way to return a FileResponse from a sql database or pandas dataframe. And if not, is there a way to delete the generated csv files, after it has all been read by the client? Thanks for your help! Kind regards, Stephan
Based HEAVILY off this https://github.com/tiangolo/fastapi/issues/1277 Turn your dataframe into a stream use a streaming response Modify headers so it's a download (optional) from fastapi import FastAPI from fastapi.responses import StreamingResponse import io import pandas as pd app = FastAPI() @app.get("/get_csv") async def get_csv(): df = pd.DataFrame(dict(col1 = 1, col2 = 2), index=[0]) stream = io.StringIO() df.to_csv(stream, index = False) response = StreamingResponse(iter([stream.getvalue()]), media_type="text/csv" ) response.headers["Content-Disposition"] = "attachment; filename=export.csv" return response
31
64
61,150,835
2020-4-11
https://stackoverflow.com/questions/61150835/check-if-string-is-in-string-literal-type
We use static type checking extensively, but we also need some simple runtime type checking. I'd love to use our static types for that runtime type checking. I've seen typeguard and the other libraries, but I'd prefer to have something simpler. I've tried below, but assert value in expected_type doesn't make sense. How do I create a simple function that will check if a string is in a Python string literal? from typing_extensions import Literal def check_str_in_literal(value: str, expected_type: Literal): assert value in expected_type Gender = Literal["Male", "Female", "Other"] def print_gender(gender: Gender): print(gender) # Unknown string as it's been retrieved from elsewhere strRetrievedFromDB = "Male" # type: ignore check_str_in_literal(strRetrievedFromDB, Gender) print_gender(strRetrievedFromDB)
Python 3.8 introduced typing.get_args(tp), making this possible: assert value in get_args(expected_type)
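Applied to the helper from the question, a sketch could look like this (Python 3.8+; on 3.7 get_args can be imported from typing_extensions instead):
from typing import Literal, get_args

Gender = Literal["Male", "Female", "Other"]

def check_str_in_literal(value: str, expected_type) -> None:
    assert value in get_args(expected_type), f"{value!r} is not in {get_args(expected_type)}"

check_str_in_literal("Male", Gender)       # passes
# check_str_in_literal("Unknown", Gender)  # would raise AssertionError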
12
15
61,092,523
2020-4-8
https://stackoverflow.com/questions/61092523/what-is-running-loss-in-pytorch-and-how-is-it-calculated
I had a look at this tutorial in the PyTorch docs for understanding Transfer Learning. There was one line that I failed to understand. After the loss is calculated using loss = criterion(outputs, labels), the running loss is calculated using running_loss += loss.item() * inputs.size(0) and finally, the epoch loss is calculated using running_loss / dataset_sizes[phase]. Isn't loss.item() supposed to be for an entire mini-batch (please correct me if I am wrong). i.e, if the batch_size is 4, loss.item() would give the loss for the entire set of 4 images. If this is true, why is loss.item() being multiplied with inputs.size(0) while calculating running_loss? Isn't this step like an extra multiplication in this case? Any help would be appreciated. Thanks!
It's because the loss given by CrossEntropyLoss or other loss functions is divided by the number of elements, i.e. the reduction parameter is 'mean' by default: torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') Hence, loss.item() contains the loss of the entire mini-batch, but divided by the batch size. That's why loss.item() is multiplied by the batch size, given by inputs.size(0), when calculating running_loss.
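A tiny runnable sketch of that bookkeeping (the model and data here are dummies, invented for illustration):
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()             # reduction='mean' by default
model = nn.Linear(10, 3)
inputs = torch.randn(4, 10)                   # one mini-batch of 4 samples
labels = torch.tensor([0, 1, 2, 0])

outputs = model(inputs)
loss = criterion(outputs, labels)             # mean loss over the 4 samples
running_loss = loss.item() * inputs.size(0)   # undo the mean: summed loss for this batch
epoch_loss = running_loss / 4                 # divide by the dataset size at the end of the epoch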
22
33
61,116,190
2020-4-9
https://stackoverflow.com/questions/61116190/what-are-all-the-formats-to-save-machine-learning-model-in-scikit-learn-keras
There are many ways to save a model and its weights. It is confusing when there are so many ways and not any source where we can read and compare their properties. Some of the formats I know are: 1. YAML File - Structure only 2. JSON File - Structure only 3. H5 Complete Model - Keras 4. H5 Weights only - Keras 5. ProtoBuf - Deployment using TensorFlow serving 6. Pickle - Scikit-learn 7. Joblib - Scikit-learn - replacement for Pickle, for objects containing large data. Discussion: Unlike scikit-learn, Keras does not recommend you save models using pickle. Instead, models are saved as an HDF5 file. The HDF5 file contains everything you need to not only load the model to make predictions (i.e., architecture and trained parameters) but also to restart training (i.e., loss and optimizer settings and the current state). What are other formats to save the model for Scikit-learn, Keras, Tensorflow, and Mxnet? Also what info I am missing about each of the above-discussed formats?
There are also formats like ONNX, which is supported by most of the frameworks and helps remove the confusion of using a different format for each framework.
14
3
61,156,894
2020-4-11
https://stackoverflow.com/questions/61156894/pytorch-torch-max-over-multiple-dimensions
I have a tensor with x.shape = [3, 2, 2]: import torch x = torch.tensor([ [[-0.3000, -0.2926],[-0.2705, -0.2632]], [[-0.1821, -0.1747],[-0.1526, -0.1453]], [[-0.0642, -0.0568],[-0.0347, -0.0274]] ]) I need to take .max() over the 2nd and 3rd dimensions. I expect something like [-0.2632, -0.1453, -0.0274] as output. I tried to use x.max(dim=(1,2)), but this causes an error.
Now, you can do this. The PR was merged (Aug 28 2020) and it is now available in the nightly release. Simply use torch.amax(): import torch x = torch.tensor([ [[-0.3000, -0.2926],[-0.2705, -0.2632]], [[-0.1821, -0.1747],[-0.1526, -0.1453]], [[-0.0642, -0.0568],[-0.0347, -0.0274]] ]) print(torch.amax(x, dim=(1, 2))) # Output: # >>> tensor([-0.2632, -0.1453, -0.0274]) Original Answer As of today (April 11, 2020), there is no way to do .min() or .max() over multiple dimensions in PyTorch. There is an open issue about it that you can follow and see if it ever gets implemented. A workaround in your case would be: import torch x = torch.tensor([ [[-0.3000, -0.2926],[-0.2705, -0.2632]], [[-0.1821, -0.1747],[-0.1526, -0.1453]], [[-0.0642, -0.0568],[-0.0347, -0.0274]] ]) print(x.view(x.size(0), -1).max(dim=-1)) # output: # >>> values=tensor([-0.2632, -0.1453, -0.0274]), # >>> indices=tensor([3, 3, 3])) So, if you need only the values: x.view(x.size(0), -1).max(dim=-1).values. If x is not a contiguous tensor, then .view() will fail. In this case, you should use .reshape() instead. Update August 26, 2020 This feature is being implemented in PR#43092 and the functions will be called amin and amax. They will return only the values. This is probably being merged soon, so you might be able to access these functions on the nightly build by the time you're reading this :) Have fun.
32
44
61,151,832
2020-4-11
https://stackoverflow.com/questions/61151832/how-can-i-set-marker-size-based-on-column-value
I am trying to use plotly (version 4.6.0) to create plots, but having trouble with the markers/size attribute. I am using the Boston housing price dataset in my example. I want to use the value in one of the columns of my dataframe to set a variable size for the marker, but I get an error when I use a direct reference to the column (size='TAX'). I can set the size to a constant (size=1) without issues. I found some examples online, but they generate a ValueError when I try to use them. How can I avoid this error? Code and error are shown below. import chart_studio.plotly as py import plotly.graph_objs as go from plotly.offline import iplot, init_notebook_mode import cufflinks cufflinks.go_offline(connected=True) init_notebook_mode(connected=True) import pandas as pd from sklearn.datasets import load_boston boston = load_boston() df = pd.DataFrame(boston.data, columns=boston.feature_names) y = boston.target df['RAD_CAT']=df['RAD'].astype(str) df.iplot( x='CRIM', y='INDUS', size='TAX', #size=1, text='RAD', mode='markers', layout=dict( xaxis=dict(type='log', title='CRIM'), yaxis=dict(title='INDUS'), title='CRIM vs INDUS Sized by RAD')) ValueError: Invalid value of type 'builtins.str' received for the 'size' property of scatter.marker Received value: 'TAX' The 'size' property is a number and may be specified as: - An int or float in the interval [0, inf] - A tuple, list, or one-dimensional numpy array of the above
import chart_studio.plotly as py import plotly.graph_objs as go from plotly.offline import iplot, init_notebook_mode import cufflinks cufflinks.go_offline(connected=True) init_notebook_mode(connected=True) import pandas as pd from sklearn.datasets import load_boston boston = load_boston() df = pd.DataFrame(boston.data, columns=boston.feature_names) df.iplot( x='CRIM', y='INDUS', size=df['TAX']/20, text='RAD', mode='markers', layout=dict( xaxis=dict(type='log', title='CRIM'), yaxis=dict(title='INDUS'), title='CRIM vs INDUS Sized by TAX'))
9
5
61,152,889
2020-4-11
https://stackoverflow.com/questions/61152889/plotly-how-to-set-node-positions-in-a-sankey-diagram
The sample data is as follows: unique_list = ['home0', 'page_a0', 'page_b0', 'page_a1', 'page_b1', 'page_c1', 'page_b2', 'page_a2', 'page_c2', 'page_c3'] sources = [0, 0, 1, 2, 2, 3, 3, 4, 4, 7, 6] targets = [3, 4, 4, 3, 5, 6, 8, 7, 8, 9, 9] values = [2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2] Using the sample code from the documentation fig = go.Figure(data=[go.Sankey( node = dict( pad = 15, thickness = 20, line = dict(color = "black", width = 0.5), label = unique_list, color = "blue" ), link = dict( source = sources, target = targets, value = values ))]) fig.show() This outputs the following sankey diagram However, I would like to get all the values which end in the same number in the same vertical column, just like how the leftmost column has all of it's nodes ending with a 0. I see in the docs that it is possible to move the node positions, however I was wondering if there was a cleaner way to do it other than manually inputting x and y values. Any help appreciated.
In go.Sankey() set arrangement='snap' and adjust x and y positions in x=<list> and y=<list>. The following setup will place your nodes as requested. Plot: Please note that the y-values are not explicitly set in this example. As soon as there are more than one node for a common x-value, the y-values will be adjusted automatically for all nodes to be displayed in the same vertical position. If you do want to set all positions explicitly, just set arrangement='fixed' Edit: I've added a custom function nodify() that assigns identical x-positions to label names that have a common ending such as '0' in ['home0', 'page_a0', 'page_b0']. Now, if you as an example change page_c1 to page_c2 you'll get this: Complete code: import plotly.graph_objects as go unique_list = ['home0', 'page_a0', 'page_b0', 'page_a1', 'page_b1', 'page_c1', 'page_b2', 'page_a2', 'page_c2', 'page_c3'] sources = [0, 0, 1, 2, 2, 3, 3, 4, 4, 7, 6] targets = [3, 4, 4, 3, 5, 6, 8, 7, 8, 9, 9] values = [2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2] def nodify(node_names): node_names = unique_list # uniqe name endings ends = sorted(list(set([e[-1] for e in node_names]))) # intervals steps = 1/len(ends) # x-values for each unique name ending # for input as node position nodes_x = {} xVal = 0 for e in ends: nodes_x[str(e)] = xVal xVal += steps # x and y values in list form x_values = [nodes_x[n[-1]] for n in node_names] y_values = [0.1]*len(x_values) return x_values, y_values nodified = nodify(node_names=unique_list) # plotly setup fig = go.Figure(data=[go.Sankey( arrangement='snap', node = dict( pad = 15, thickness = 20, line = dict(color = "black", width = 0.5), label = unique_list, color = "blue", x=nodified[0], y=nodified[1] ), link = dict( source = sources, target = targets, value = values ))]) fig.show()
11
15
61,184,906
2020-4-13
https://stackoverflow.com/questions/61184906/difference-between-predict-vs-predict-proba-in-scikit-learn
Suppose I have created a model, and my target variable is either 0, 1 or 2. It seems that if I use predict, the answer is either of 0, or 1 or 2. But if I use predict_proba, I get a row with 3 cols for each row as follows, for example model = ... Classifier # It could be any classifier m1 = model.predict(mytest) m2= model.predict_proba(mytest) # Now suppose m1[3] = [0.6, 0.2, 0.2] Suppose I use both predict and predict_proba. If in index 3, I get the above result with the result of predict_proba, in index 3 of the result of predict I should see 0. Is this the case? I am trying to understand how using both predict and predict_proba on the same model relate to each other.
predict() is used to predict the actual class (in your case one of 0, 1, or 2). predict_proba() is used to predict the class probabilities From the example output that you shared, predict() would output class 0 since the class probability for 0 is 0.6. [0.6, 0.2, 0.2] is the output of predict_proba that simply denotes that the class probability for classes 0, 1, and 2 are 0.6, 0.2, and 0.2 respectively. Now as the documentation mentions for predict_proba, the resulting array is ordered based on the labels you've been using: The returned estimates for all classes are ordered by the label of classes. Therefore, in your case where your class labels are [0, 1, 2], the corresponding output of predict_proba will contain the corresponding probabilities. 0.6 is the probability of the instance to be classified as 0 and 0.2 are the probabilities that the instance is categorised as 1 and 2 respectively. For a more comprehensive explanation, refer to the article What is the difference between predict() and predict_proba() in scikit-learn on TDS.
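A small sketch with a toy classifier (the data is invented; the exact probabilities will differ):
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 1, 1, 2, 2])
clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X[:1])  # class probabilities, e.g. something like [[0.7, 0.2, 0.1]]
pred = clf.predict(X[:1])         # the label of the most probable class
assert pred[0] == clf.classes_[proba.argmax()]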
25
30
61,218,237
2020-4-14
https://stackoverflow.com/questions/61218237/how-can-i-install-tkinter-for-python-on-mac
So I posted this error on a Facebook group, they said I should get pip. I installed pip, when I am wanting to install tkinter it's giving me error: I used this command first : sudo pip install tkinter . . . error: ERROR: Could not find a version that satisfies the requirement tkinter (from versions: none) ERROR: No matching distribution found for tkinter
After a day of headache, this worked for me: $ brew install python-tk
19
46
61,165,055
2020-4-11
https://stackoverflow.com/questions/61165055/storing-factory-boy-relatedfactory-object-on-parent-factory
I have two Django models (Customer and CustomerAddress) that both contain ForeignKeys to each other. I am using factory-boy to manage creation of these models, and cannot save a child factory instance onto the parent factory (using relationships defined using the RelatedFactory class). My two models: class ExampleCustomerAddress(models.Model): # Every customer mailing address is assigned to a single Customer, # though Customers may have multiple addresses. customer = models.ForeignKey('ExampleCustomer', on_delete=models.CASCADE) class ExampleCustomer(models.Model): # Each customer has a single (optional) default billing address: default_billto = models.ForeignKey( 'ExampleCustomerAddress', on_delete=models.SET_NULL, blank=True, null=True, related_name='+') I have two factories, one for each model: class ExampleCustomerAddressFactory(factory.django.DjangoModelFactory): class Meta: model = ExampleCustomerAddress customer = factory.SubFactory( 'ExampleCustomerFactory', default_billto=None) # Set to None to prevent recursive address creation. class ExampleCustomerFactory(factory.django.DjangoModelFactory): class Meta: model = ExampleCustomer default_billto = factory.RelatedFactory(ExampleCustomerAddressFactory, 'customer') When creating a ExampleCustomerFactory, default_billto is None, even though a ExampleCustomerAddress has been created: In [14]: ec = ExampleCustomerFactory.build() In [15]: ec.default_billto is None Out[15]: True (When using create(), a new ExampleCustomerAddress exists in the database. I am using build() here to simplify the example). Creating an ExampleCustomerAddress works as expected, with the Customer being automatically created: In [22]: eca = ExampleCustomerAddressFactory.build() In [23]: eca.customer Out[23]: <ExampleCustomer: ExampleCustomer object> In [24]: eca.customer.default_billto is None Out[24]: True <-- I was expecting this to be set to an `ExampleCustomerAddress!`. I feel like I am going crazy here, missing something very simple. I get the impression I am encountering this error because of how both models contain ForeignKeys to each other.
First, a simple rule of thumb: when you're following a ForeignKey, always prefer a SubFactory; RelatedFactory is intended to follow a reverse relationship. Let's take each factory in turn. ExampleCustomerAddressFactory When we call this factory without a customer, we'll want to get an address, linked to a customer, and used as the default address for that customer. However, when we call it with a customer, don't alter it. The following would work: class ExampleCustomerAddressFactory(factory.django.DjangoModelFactory): class Meta: model = ExampleCustomerAddress # Fill the Customer unless provided customer = factory.SubFactory( ExampleCustomerFactory, # We can't provide ourself there, since we aren't saved to the database yet. default_billto=None, ) @factory.post_generation def set_customer_billto(obj, create, *args, **kwargs): """Set the default billto of the customer to ourselves if empty""" if obj.customer.default_billto is None: obj.customer.default_billto = obj if create: obj.customer.save() Here, we'll set the newly created customer's value to "us"; note that this logic could also be moved to ExampleCustomerAddress.save(). ExampleCustomerFactory For this factory, the rules are simpler: when creating a customer, create a default billing address (unless a value has been provided). class ExampleCustomerFactory(factory.django.DjangoModelFactory): class Meta: model = ExampleCustomer # We can't use a SubFactory here, since that would be evaluated before # the Customer has been saved. default_billto = factory.RelatedFactory( ExampleCustomerAddressFactory, 'customer', ) This factory will run as follows: Create the ExampleCustomer instance with default_billto=None; Call ExampleCustomerAddressFactory(customer=obj) with the newly created customer; That factory will create an ExampleCustomerAddress with that customer; The post-generation hook in that factory will then detect that the customer has no default_billto, and will override it. Notes I didn't test this, so some typos or minor bugs could occur; It's up to you to decide which factory is declared first, using the target factory's path instead of a direct reference; As stated above, the logic to set the default billing address of a customer when it's empty and an address is added to that customer could be moved to your model's .save() method.
8
12
61,154,741
2020-4-11
https://stackoverflow.com/questions/61154741/how-to-display-a-pandas-dataframe-within-a-vbox-using-ipywidgets
i would like to display a pandas dataframe in a interactive way using ipywidgets. So far the code gets some selections and then does some calculation. For this exmaple case, its not really using the input labels. However, my problem is when I would like to display the pandas dataframe, it's not treated as widget. But how can I nicely display then pandas dataframe using widgets? At the end I would like to have a nice table in the main_box here is a code exmaple, which works in any jupyter notebook import pandas as pd import ipywidgets as widgets def button_run_on_click(_): status_label.value = "running...." df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]]) status_label.value = "" result_box = setup_ui(df) main_box.children = [selection, button_run, status_label, result_box] def setup_ui(df): return widgets.VBox([df]) selection_box = widgets.Box() selection_toggles = [] selected_labels = {} default_labels = ['test1', "test2"] labels = {"test1": "test1", "test2": "test2", "test3": "test3"} def update_selection(change): owner = change['owner'] name = owner.description if change['new']: owner.icon = 'check' selected_labels[name] = labels[name] else: owner.icon = "" selected_labels.pop(name) for k in sorted(labels): o = widgets.ToggleButton(description=k) o.observe(update_selection, 'value') o.value = k in default_labels selection_toggles.append(o) selection_box.children = selection_toggles status_label = widgets.Label() status_label.layout.width = '300px' button_run = widgets.Button(description="Run") main_box = widgets.VBox([selection_box, button_run, status_label]) button_run.on_click(button_run_on_click) display(main_box)
from IPython.display import display import ipywidgets as widgets def setup_ui(df): out = widgets.Output() with out: display(df) return out If you change your setup_ui function to this, you can return an Output widget with your dataframe. BUT, in your button_run_on_click function it appears selection is not defined. Should this be something else?
12
14
61,149,073
2020-4-10
https://stackoverflow.com/questions/61149073/null-identity-key-error-using-sqlalchemys-base-automap-to-reflect-a-postgres
I have a postgres database that I'm trying to reflect that uses the now standard "Identity" column for primary keys. Here's my table definition: create table class_label ( class_label_id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY, class_name varchar not null, default_color varchar, created_dttm timestamp default current_timestamp NOT NULL, created_by varchar DEFAULT USER NOT NULL, updated_dttm timestamp default current_timestamp NOT NULL, updated_by varchar DEFAULT user NOT NULL ); And here's my code: from sqlalchemy import create_engine, MetaData, insert, Table, or_, and_, func from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import scoped_session, sessionmaker import os from sqlalchemy.schema import CreateColumn from sqlalchemy.ext.compiler import compiles @compiles(CreateColumn, 'postgresql') def use_identity(element, compiler, **kw): text = compiler.visit_create_column(element, **kw) text = text.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY") return text usr = os.environ.get("POSTGRES_USR") pwd = os.environ.get("POSTGRES_PWD") host = os.environ.get("POSTGRES_HOST") engine = create_engine('postgresql://' + usr + ':' + pwd + host, convert_unicode=True) session = scoped_session(sessionmaker(bind=engine)) metadata = MetaData(bind=engine) metadata.reflect(engine, only=['class_label']) Base = automap_base(metadata=metadata) Base.prepare() Class_Label = Base.classes.class_label session.add(Class_Label(class_name="Testing", default_color="red")) session.commit() When I run my code, I get this error: sqlalchemy.orm.exc.FlushError: Instance <class_label at 0x1091c4ba8> has a NULL identity key. If this is an auto-generated value, check that the database table allows generation of new primary key values, and that the mapped Column object is configured to expect these generated values. Ensure also that this flush() is not occurring at an inappropriate time, such as within a load() event. Per https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#postgresql-10-identity-columns I understand that this is somewhat a shortcoming of SQLAlchemy, but I'm wondering if the work-around they suggest can work for auto-mapped/reflected databases and how I would implement it. I'm using SQLAlchemy 1.3.16 and Postgres 11.
A fix for reflecting IDENTITY columns was added in SQLAlchemy 1.4, so upgrading should resolve the NULL identity key error.
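So upgrading should be enough (assuming pip manages the environment):
pip install --upgrade "SQLAlchemy>=1.4"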
8
2
61,154,740
2020-4-11
https://stackoverflow.com/questions/61154740/attributeerror-module-networkx-has-no-attribute-connected-component-subgraph
B = nx.Graph() B.add_nodes_from(data['movie'].unique(), bipartite=0, label='movie') B.add_nodes_from(data['actor'].unique(), bipartite=1, label='actor') B.add_edges_from(edges, label='acted') A = list(nx.connected_component_subgraphs(B))[0] I am getting the below given error when am trying to use nx.connected_component_subgraphs(G). In the dataset there are two coumns(movie and actor), and it's in the form bipartite graph. I want to get connected components for the movie nodes. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-16-efff4e6fafc4> in <module> ----> 1 A = list(nx.connected_component_subgraphs(B))[0] AttributeError: module 'networkx' has no attribute 'connected_component_subgraphs'
This was deprecated with version 2.1, and finally removed with version 2.4. See these instructions Use (G.subgraph(c) for c in connected_components(G)) Or (G.subgraph(c).copy() for c in connected_components(G))
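Applied to the bipartite graph B from the question, a sketch could be:
import networkx as nx

components = [B.subgraph(c).copy() for c in nx.connected_components(B)]
A = components[0]  # replaces list(nx.connected_component_subgraphs(B))[0]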
23
32
61,206,437
2020-4-14
https://stackoverflow.com/questions/61206437/importerror-cannot-import-name-literal-from-typing
I have recently started using PEP 484 and PEP 586 to make my code clearer and more accessible. So far everything was ok, but when I wanted to use Literal from the package typing it appears it couldn't be imported. What is the most surprising is that PyCharm isn't complaining at all for importing it or using it. The code I want to use in the end is looking like that : SomeVar = TypeVar("SomeVar", Literal['choice1'], Literal['choice2'], someType) It would be used in the cases where you can have a string to describe what you want or an already made solution e.g : def someFunc(my_var: SomeVar = 'choice1'): result = [] if my_var == 'choice1': result.append(...) else: result = my_var return result I use an Anaconda environment with Python 3.7.7.
Using Literal in Python 3.8 and later from typing import Literal Using Literal in all Python versions (1) Literal was added to typing.py in 3.8, but you can use Literal in older versions anyway. First install typing_extensions (pip install typing_extensions) and then from typing_extensions import Literal This approach is supposed to work also in Python 3.8 and later. Using Literal in all Python versions (2) For completeness, I'm also adding the try-except approach to import Literal: try: from typing import Literal except ImportError: from typing_extensions import Literal This should also work for all Python versions, given that typing_extensions is installed if you're using Python 3.7 or older.
43
53
61,198,658
2020-4-13
https://stackoverflow.com/questions/61198658/how-to-flip-numpy-array-along-the-diagonal-efficiently
Lets say that i have the following array (note that there is a 1 in the [2,0] position and a 2 in the [3,4] position): [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [1, 0, 0, 0, 0] [0, 0, 0, 0, 2] [0, 0, 0, 0, 0] and I want to flip it along the diagonal efficiently such that: [0, 0, 1, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 0, 0] [0, 0, 0, 2, 0] This does not work with fliplr or rot90 or flipud. Would like efficient answer rather than just an answer since unfortunately this is not being performed on matrices this small.
Both np.rot90(np.fliplr(x)) and transposing the array solve this. a = np.random.uniform(size=(5,5)) a.T == np.rot90(np.fliplr(a))
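A quick check on the example from the question:
import numpy as np

a = np.zeros((5, 5), dtype=int)
a[2, 0], a[3, 4] = 1, 2

flipped = a.T  # flips along the main diagonal
assert flipped[0, 2] == 1 and flipped[4, 3] == 2
assert np.array_equal(flipped, np.rot90(np.fliplr(a)))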
9
16
61,176,552
2020-4-12
https://stackoverflow.com/questions/61176552/how-to-hide-file-paths-when-running-python-scripts-in-vs-code
Every time I run my code, the terminal at the bottom is displaying this long name (I think the file location) as well as whatever output it's supposed to display. Is there a way to get that to go away? This is what it looks like: administrator@Machintosh-2 Exercise Files Python % /user/bin/python3... Hello world! administrator@Machintosh-2 Exercise Files Python %
AFAIK, there is no way to hide the paths, because the VS Code Integrated Terminal is basically using your OS/system's underlying terminal. And running Python scripts on a terminal requires the following form: <path/to/python/interpreter> <path/to/python/file> If you want a "cleaner" console output, you could create a debug/launch configuration for Python, then set the console configuration to internalConsole: .vscode/launch.json { "version": "0.2.0", "configurations": [ { "name": "run-my-script", "type": "python", "request": "launch", "program": "${workspaceFolder}/path/to/your_script.py", "console": "internalConsole" } ] } That would display the output in the Debug Console tab rather than the Terminal tab. There will be no more paths, just whatever your script prints out to the console. sample output: Unfortunately, if your Python code uses and requires manual input() from the user, using the Debug Console and the internalConsole solution would not work. It will show this error: There is no direct solution to that, to having a "clean" output and allowing manual user input(). You might want to consider instead reading your program inputs from a file or passing them as command line arguments. VS Code supports passing in arguments ([args]) when running your script <path/to/python/interpreter> <path/to/python/file> [args] using the "args" option in the launch.json file. Here is a simple example with sys.argv: python code import sys # This is always path to the script arg_0 = sys.argv[0] # Same sequence as "args" in launch.json arg_1 = sys.argv[1] arg_2 = sys.argv[2] print(arg_1) print(arg_2) .vscode/launch.json { "version": "0.2.0", "configurations": [ { "name": "run-my-script", "type": "python", "request": "launch", "program": "${workspaceFolder}/path/to/your_script.py", "console": "internalConsole", "args": [ "89", "abc" ] } ] } sample output
8
2
61,112,322
2020-4-9
https://stackoverflow.com/questions/61112322/get-userid-cant-find-user-returns-none-self-bot-discord-py
I am trying to DM myself using a self bot. I am trying to use the get_user() function in my code. bot = commands.Bot(command_prefix='', self_bot=True) counter = 0 userID = 695724603406024726 @bot.event async def dm(userID): print('Running Function') global counter if counter <= 0: print('Finding user.') counter += 1 user = bot.get_user(userID) print('user:',user) await user.send("Hello") print('message sent') return bot.loop.create_task(dm(userID)) bot.run(token, bot=False) Instead, I am returned with this error: File "<ipython-input-1-90e5e962a6e9>", line 24, in dm await user.send("Hello") AttributeError: 'NoneType' object has no attribute 'send' The bot can't find the user and returns a None value. I have tested multiple ID's and am unsure what the problem is.
You can use the coroutine client.fetch_user(id) instead. get_user() only looks the user up in the client's cache, so on a freshly started bot the cache is often empty and it returns None, whereas fetch_user() actually queries the API. In your case: bot = commands.Bot(command_prefix='', self_bot=True) counter = 0 userID = 695724603406024726 async def dm(userID): print('Running Function') global counter if counter <= 0: print('Finding user.') counter += 1 user = await bot.fetch_user(userID) print('user:',user) await user.send("Hello") print('message sent') return bot.loop.create_task(dm(userID)) bot.run(token, bot=False)
8
9
61,204,189
2020-4-14
https://stackoverflow.com/questions/61204189/vs-code-pylintimport-error-unable-to-import-subsub-module-from-custom-direct
I have organized my self-written Python scripts within a tree of several sub-directories, starting from the parent directory "Scripts" which is already included in "python.autoComplete.extraPaths" within the settings-json: "python.autoComplete.extraPaths": ["/home/andylu/Dokumente/Allgemeines_material/Sonstiges/Programming/Python/Scripts", "/home/andylu/anaconda3/lib/python3.7/site-packages"] Apart from that, I've included a Python environment-file: "python.envFile": "/home/andylu/Dokumente/Allgemeines_material/Sonstiges/Programming/Visual_studio_code/vscode_own_scripts.env" which contains the line export PYTHONPATH=/home/andylu/Dokumente/Allgemeines_material/Sonstiges/Programming/Python/Scripts:/home/andylu/anaconda3/lib/python3.7/site-packages All of this worked out great before, where all my scripts were distributed just over 1 single directory level, like so: +---Scripts | +---General | | +---script_one.py | | +---script_two.py When I imported within any python-script e.g. script_one.py, I started the script with import sys sys.path.append( "/home/andylu/Dokumente/Allgemeines_material/Sonstiges/Programming/Python/Scripts/" ) import General.script_one as one and pylint recognized this imported script correctly without throwing the aforementioned VS Code pylint(import-error). Now, the situation is different. The scripts had become so many, that I split up the subfolder General to contain an additional sub-directory level in order to get the scripts organized more lucidly: +---Scripts | +---General | | +---Plotting | | | +---script_one.py | | | +---script_two.py | | +---Misc | | | +---script_three.py | | | +---script_four.py .... When starting a Python script with e.g. the following lines, I get the VS Code pylint(import-error) for each of following imports. # Package importing import sys sys.path.append( "/home/andylu/Dokumente/Allgemeines_material/Sonstiges/Programming/Python/Scripts/" ) import General.Plotting.auxiliary_plotting_functions as aux_plot import General.Plotting.plotting as plot #%% # TIME MEASUREMENT for the entire code/function import General.Misc.timing I don't know why pylint stopped recognizing the imports all of the sudden, just because I added an additional sub-directory level. I would like these senseless pylint import errors to disappear, since effectively the subsub-models are being imported correctly when executing the codes. I even tried to modify the .pylintrc - file, which lies under /home/andylu/anaconda3/pkgs/pylint-2.3.1-py37_0/lib/python3.7/site-packages/pylint/test/regrtest_data/.pylintrc : [MASTER] optimize-ast=no init-hook='import sys; sys.path.append("/home/andylu/Dokumente/Allgemeines_material/Sonstiges/Programming/Python/Scripts")' Adding the init-hook - line had no effect either.
I found a great workaround for my problem with this answer. It points towards the message control part of the pylint-docs. Practically, I just had to add the comment # pylint: disable=import-error behind my custom imports like so: import General.Plotting.auxiliary_plotting_functions as aux_plot # pylint: disable=import-error This solves my issue perfectly, as I honestly didn't find an easy solution to this problem via configuring e.g. a .pylintrc file, let alone all my attempts with no avail involving PYTHONPATH and environment-files etc. Put simply, my custom modules get imported correctly when executing the scripts in VS Code, but the only annoying detail was that pylint didn't get it and showed me useless import-errors. Now, pylint doesn't show this non-sense anymore, that's all I wanted :) I'm sure there might be a more elegant solution, but so far the aforementioned workaround came in handy. If anyone comes up with a "better" solution, just post it here and I'll change the answer to yours. PS for those using pylance as alternative linter in VS-Code: A similar workaround (to the above-mentioned regarding pylint) I found here works fine (appending # type: ignore to the import-statement), which I mentioned in this question as well: import General.Misc.general_tools as tools # type: ignore Most likely it's got something to do with the settings.json - file of VS-Code, since using VS-Code is the constant factor here.
11
7
61,124,950
2020-4-9
https://stackoverflow.com/questions/61124950/received-incompatible-instance-in-graphql-query
When I hit Insomnia with the request below, it shows this response. How can I solve this issue? Request: query{ datewiseCoronaCasesList{ updatedAt, affected, death, recovered } } Response: { "errors": [ { "message": "Received incompatible instance \"{'updated_at': datetime.date(2020, 4, 8), 'affected': 137, 'death': 42, 'recovered': 104}\"." } ], "data": { "datewiseCoronaCasesList": [ null ] } } My expectation is the data that already appears inside the error message, but returned properly, like this: { 'updated_at': datetime.date(2020, 4, 8), 'affected': 137, 'death': 42, 'recovered': 104 } My GraphQL query: class CoronaQuery(graphene.ObjectType): datewise_corona_cases_list = graphene.Field(CoronaCaseType) def resolve_datewise_corona_cases_list(self, info, **kwargs): return CoronaCase.objects.values('updated_at').annotate( affected=Sum('affected'),death=Sum('death'), recovered=Sum('recovered')) My model: class CoronaCase(models.Model): affected = models.IntegerField(default=0) death = models.IntegerField(default=0) recovered = models.IntegerField(default=0) district = models.CharField(max_length=265, null=False, blank=False) created_at = models.DateTimeField(default=timezone.now) updated_at = models.DateTimeField(default=timezone.now) def __str__(self): return "Affected from: {}".format(self.district)
Under CoronaQuery class, since you are returning a list of objects (instances), graphene.Field should be changed to graphene.List. I mean: class CoronaQuery(graphene.ObjectType): datewise_corona_cases_list = graphene.List(CoronaCaseType)
10
10
61,155,366
2020-4-11
https://stackoverflow.com/questions/61155366/type-hint-for-finite-iterable
My function foo accepts an argument things which is turned into a list internally. def foo(things): things = list(things) # more code The list constructor accepts any iterable. However, annotating things with typing.Iterable does not give the user a clue that the iterable must be finite, not something like itertools.count(). What's the correct type hint to use in this case?
I am not aware of any way to achieve this in Python, as you cannot provide such constraints in type hints. However, the Collection type might be useful in your context as a workaround: class collections.abc.Collection ABC for sized iterable container classes. This requires objects to have a __len__, which is a stricter requirement than being finite. For example, finite generators don't count as a Collection.
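A minimal sketch of what the annotated signature could look like with this workaround (the foo name comes from the question; typing.Collection is the generic alias of the ABC):

from typing import Collection

def foo(things: Collection[int]) -> None:
    # A Collection is Sized, Iterable and a Container, so it is necessarily finite;
    # itertools.count() or an open-ended generator would be rejected by the type checker.
    items = list(things)
    ...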
18
20
61,186,708
2020-4-13
https://stackoverflow.com/questions/61186708/pandas-read-excel-doesnt-parse-dates-correctly-returns-a-constant-date-instea
I've read a .xlsb file and parsed date columns using a code below: dateparser = lambda x: pd.to_datetime(x) data = pd.read_excel(r"test.xlsb", engine="pyxlsb", parse_dates=["start_date","end_date"], date_parser=dateparser ) My input columns in the .xlsb file have format DD/MM/YYYY (e.g. 26/01/2008). As an output of the above-mentioned code I get, for example: 1970-01-01 00:00:00.000038840. Only the last 5 digits changes. If I read the same file without parsing dates, the same columns are of float64 type and containing only the last 5 digits of output before (e.g. 38840.0). I assume this is a problem associated with date encoding itself. Does anyone know how to fix this issue?
I am not sure if you were able to figure out the answer to this problem. But, below is how I resolved it: from pyxlsb import convert_date self.data: pd.DataFrame = pd.read_excel(self.file, sheet_name=self.sheet, engine='pyxlsb', header=0) self.data["test"] = self.data.apply(lambda x: convert_date(x.SomeStupidDate), axis=1) More details can be found here: https://pypi.org/project/pyxlsb/ by doing ctrl+F for "convert_date".
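Adapted to the file and columns from the question, a sketch without the class attributes could look like this (it assumes the start_date/end_date columns hold the raw Excel serial numbers, e.g. 38840.0, when no date parsing is requested):

import pandas as pd
from pyxlsb import convert_date

# Read without parse_dates/date_parser, so the date columns stay as floats
data = pd.read_excel("test.xlsb", engine="pyxlsb")

# Convert the Excel serial numbers into proper datetimes
for col in ["start_date", "end_date"]:
    data[col] = data[col].map(convert_date)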
7
9
61,166,864
2020-4-12
https://stackoverflow.com/questions/61166864/tensorflow-python-framework-ops-eagertensor-object-has-no-attribute-in-graph
I am trying to visualize CNN filters by optimizing a random 'image' so that it produces a high mean activation on that filter which is somehow similar to the neural style transfer algorithm. For that purpose, I am using TensorFlow==2.2.0-rc. But during the optimization process, an error occurs saying 'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_in_graph_mode'. I tried debugging it and it somehow works when I don't use the opt.apply_gradients() and instead, apply its gradient manually like img = img - lr * grads but I want to use the "Adam" optimizer rather than the simple SGD. Here is my code for the optimization part opt = tf.optimizers.Adam(learning_rate=lr, decay = 1e-6) for _ in range(epoch): with tf.GradientTape() as tape: tape.watch(img) y = model(img)[:, :, :, filter] loss = -tf.math.reduce_mean(y) grads = tape.gradient(loss, img) opt.apply_gradients(zip([grads], [img]))
The reason for the bug is that the tf.keras optimizers apply gradients to variable objects (of type tf.Variable), while you are trying to apply gradients to tensors (of type tf.Tensor). Tensor objects are not mutable in TensorFlow, thus the optimizer cannot apply gradients to it. You should initialize the variable img as a tf.Variable. This is how your code should be: # NOTE: The original image is lost here. If this is not desired, then you can # rename the variable to something like img_var. img = tf.Variable(img) opt = tf.optimizers.Adam(learning_rate=lr, decay = 1e-6) for _ in range(epoch): with tf.GradientTape() as tape: tape.watch(img) y = model(img.value())[:, :, :, filter] loss = -tf.math.reduce_mean(y) grads = tape.gradient(loss, img) opt.apply_gradients(zip([grads], [img])) Also, it is recommended to calculate the gradients outside the tape's context. This is because keeping it in will lead to the tape tracking the gradient calculation itself, leading to higher memory usage. This is only desirable if you want to calculate higher-order gradients. Since you don't need those, I have kept them outside. Note I have changed the line y = model(img)[:, :, :, filter] to y = model(img.value())[:, :, :, filter]. This is because tf.keras models need tensors as input, not variables (bug, or feature?).
11
14
61,128,637
2020-4-9
https://stackoverflow.com/questions/61128637/discord-errors-forbidden-403-forbidden-error-code-50013-missing-permissions
I am trying to setup roles for my discord bot but keep getting this error: discord.errors.Forbidden: 403 Forbidden (error code: 50013) My Code: @client.event async def on_member_join(member): guild = client.get_guild(688568885968109756) role = discord.utils.get(member.guild.roles, id=689916456871133311) await member.add_roles(role)
If your bot has enough permissions, then the error is coming from the role hierarchy: a bot can only assign roles that sit below its own highest role. Check the hierarchy in the server settings. To change the hierarchy, you can move the roles up or down in the settings.
7
19
61,218,501
2020-4-14
https://stackoverflow.com/questions/61218501/plotly-how-to-show-legend-in-single-trace-scatterplot-with-plotly-express
Sorry beforehand for the long post. I'm new to python and to plotly, so please bear with me. I'm trying to make a scatterplot with a trendline to show me the legend of the plot including the regression parameters but for some reason I can't understand why px.scatter doesn't show me the legend of my trace. Here is my code fig1 = px.scatter(data_frame = dataframe, x="xdata", y="ydata", trendline = 'ols') fig1.layout.showlegend = True fig1.show() This displays the scatterplot and the trendline, but no legend even when I tried to override it. I used pio.write_json(fig1, "fig1.plotly") to export it to jupyterlab plotly chart studio and add manually the legend, but even though I enabled it, it won't show either in the chart studio. I printed the variable with print(fig1) to see what's happening, this is (part of) the result (Scatter({ 'hovertemplate': '%co=%{x}<br>RPM=%{y}<extra></extra>', 'legendgroup': '', 'marker': {'color': '#636efa', 'symbol': 'circle'}, 'mode': 'markers', 'name': '', 'showlegend': False, 'x': array([*** some x data ***]), 'xaxis': 'x', 'y': array([*** some y data ***]), 'yaxis': 'y' }), Scatter({ 'hovertemplate': ('<b>OLS trendline</b><br>RPM = ' ... ' <b>(trend)</b><extra></extra>'), 'legendgroup': '', 'marker': {'color': '#636efa', 'symbol': 'circle'}, 'mode': 'lines', 'name': '', 'showlegend': False, 'x': array([*** some x data ***]), 'xaxis': 'x', 'y': array([ *** some y data ***]), 'yaxis': 'y' })) As we can see, creating a figure with px.scatter by default hides the legend when there's a single trace (I experimented adding a color property to px.scatter and it showed the legend), and searching the px.scatter documentation I can't find something related to override the legend setting. I went back to the exported file (fig1.plotly.json) and manually changed the showlegend entries to True and then I could see the legend in the chart studio, but there has to be some way to do it directly from the command. Here's the question: Does anyone know a way to customize px.express graphic objects? Another workaround I see is to use low level plotly graph object creation, but then I don't know how to add a trendline. Thank you again for reading through all of this.
You must specify that you'd like to display a legend and provide a legend name like this: fig['data'][0]['showlegend']=True fig['data'][0]['name']='Sepal length' Plot: Complete code: import plotly.express as px df = px.data.iris() # iris is a pandas DataFrame fig = px.scatter(df, x="sepal_width", y="sepal_length", trendline='ols', trendline_color_override='red') fig['data'][0]['showlegend']=True fig['data'][0]['name']='Sepal length' fig.show()
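An alternative that avoids indexing into fig['data'] is the update_traces helper (a sketch, assuming a plotly version recent enough to ship plotly.express); the selector keeps the OLS trendline trace out of the legend:

import plotly.express as px

df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length",
                 trendline='ols', trendline_color_override='red')

# Only the scatter trace (mode='markers') gets a legend entry and a name
fig.update_traces(showlegend=True, name='Sepal length',
                  selector=dict(mode='markers'))
fig.show()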
11
18
61,165,574
2020-4-12
https://stackoverflow.com/questions/61165574/cast-and-type-env-variables-using-file
For all my projects, I load all env variables at the start and check that all the expected keys exist as described by an .env.example file following the dotenv-safe approach. However, the env variables are strings, which have to be manually cast whenever they're used inside the Python code. This is annoying and error-prone. I'd like to use the information from the .env.example file to cast the env variables and get Python typing support in my IDE (VS Code). How do I do that? env.example PORT: int SSL: boolean Python Ideal Behavior # Set the env in some way (doesn't matter) import os os.environment["SSL"] = "0" os.environment["PORT"] = "99999" env = type_env() if not env["SSL"]: # <-- I'd like this to be cast to boolean and typed as a boolean print("Connecting w/o SSL!") if 65535 < env["PORT"]: # <-- I'd like this to be cast to int and typed as an int print("Invalid port!") In this code example, what would the type_env() function look like assuming it only supported boolean, int, float, and str? It's not too hard to do the casting as shown in e.g. https://stackoverflow.com/a/11781375/1452257, but it's unclear to me how to get it working with typing support.
I will suggest using pydantic. From StackOverflow pydantic tag info Pydantic is a library for data validation and settings management based on Python type hinting (PEP484) and variable annotations (PEP526). It allows for defining schemas in Python for complex structures. let's assume that you have a file with your SSL and PORT envs: with open('.env', 'w') as fp: fp.write('PORT=5000\nSSL=0') then you can use: from pydantic import BaseSettings class Settings(BaseSettings): PORT : int SSL : bool class Config: env_file = '.env' config = Settings() print(type(config.SSL), config.SSL) print(type(config.PORT), config.PORT) # <class 'bool'> False # <class 'int'> 5000 with your code: env = Settings() if not env.SSL: print("Connecting w/o SSL!") if 65535 < env.PORT: print("Invalid port!") output: Connecting w/o SSL!
9
18
61,213,866
2020-4-14
https://stackoverflow.com/questions/61213866/why-do-i-get-this-many-iterations-when-adding-to-and-removing-from-a-set-while-i
Trying to understand the Python for-loop, I thought this would give the result {1} for one iteration, or just get stuck in an infinite loop, depending on if it does the iteration like in C or other languages. But actually it did neither. >>> s = {0} >>> for i in s: ... s.add(i + 1) ... s.remove(i) ... >>> print(s) {16} Why does it do 16 iterations? Where does the result {16} come from? This was using Python 3.8.2. On pypy it makes the expected result {1}.
Python makes no promises about when (if ever) this loop will end. Modifying a set during iteration can lead to skipped elements, repeated elements, and other weirdness. Never rely on such behavior. Everything I am about to say is implementation details, subject to change without notice. If you write a program that relies on any of it, your program may break on any combination of Python implementation and version other than CPython 3.8.2. The short explanation for why the loop ends at 16 is that 16 is the first element that happens to be placed at a lower hash table index than the previous element. The full explanation is below. The internal hash table of a Python set always has a power of 2 size. For a table of size 2^n, if no collisions occur, elements are stored in the position in the hash table corresponding to the n least-significant bits of their hash. You can see this implemented in set_add_entry: mask = so->mask; i = (size_t)hash & mask; entry = &so->table[i]; if (entry->key == NULL) goto found_unused; Most small Python ints hash to themselves; particularly, all ints in your test hash to themselves. You can see this implemented in long_hash. Since your set never contains two elements with equal low bits in their hashes, no collision occurs. A Python set iterator keeps track of its position in a set with a simple integer index into the set's internal hash table. When the next element is requested, the iterator searches for a populated entry in the hash table starting at that index, then sets its stored index to immediately after the found entry and returns the entry's element. You can see this in setiter_iternext: while (i <= mask && (entry[i].key == NULL || entry[i].key == dummy)) i++; si->si_pos = i+1; if (i > mask) goto fail; si->len--; key = entry[i].key; Py_INCREF(key); return key; Your set initially starts with a hash table of size 8, and a pointer to a 0 int object at index 0 in the hash table. The iterator is also positioned at index 0. As you iterate, elements are added to the hash table, each at the next index because that's where their hash says to put them, and that's always the next index the iterator looks at. Removed elements have a dummy marker stored at their old position, for collision resolution purposes. You can see that implemented in set_discard_entry: entry = set_lookkey(so, key, hash); if (entry == NULL) return -1; if (entry->key == NULL) return DISCARD_NOTFOUND; old_key = entry->key; entry->key = dummy; entry->hash = -1; so->used--; Py_DECREF(old_key); return DISCARD_FOUND; When 4 is added to the set, the number of elements and dummies in the set becomes high enough that set_add_entry triggers a hash table rebuild, calling set_table_resize: if ((size_t)so->fill*5 < mask*3) return 0; return set_table_resize(so, so->used>50000 ? so->used*2 : so->used*4); so->used is the number of populated, non-dummy entries in the hash table, which is 2, so set_table_resize receives 8 as its second argument. Based on this, set_table_resize decides the new hash table size should be 16: /* Find the smallest table size > minused. */ /* XXX speed-up with intrinsics */ size_t newsize = PySet_MINSIZE; while (newsize <= (size_t)minused) { newsize <<= 1; // The largest possible value is PY_SSIZE_T_MAX + 1. } It rebuilds the hash table with size 16. All elements still end up at their old indexes in the new hash table, since they didn't have any high bits set in their hashes. As the loop continues, elements keep getting placed at the next index the iterator will look. 
Another hash table rebuild is triggered, but the new size is still 16. The pattern breaks when the loop adds 16 as an element. There is no index 16 to place the new element at. The 4 lowest bits of 16 are 0000, putting 16 at index 0. The iterator's stored index is 16 at this point, and when the loop asks for the next element from the iterator, the iterator sees that it has gone past the end of the hash table. The iterator terminates the loop at this point, leaving only 16 in the set.
69
97
61,212,514
2020-4-14
https://stackoverflow.com/questions/61212514/django-model-objects-became-not-hashable-after-upgrading-to-django-2-2
I'm testing the update of an application from Django 2.1.7 to 2.2.12. I got an error when running my unit tests, which boils down to a model object not being hashable : Station.objects.all().delete() py37\lib\site-packages\django\db\models\query.py:710: in delete collector.collect(del_query) py37\lib\site-packages\django\db\models\deletion.py:192: in collect reverse_dependency=reverse_dependency) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <django.db.models.deletion.Collector object at 0x000001EC78243E80> objs = <QuerySet [<Station(nom='DUNKERQUE')>, <Station(nom='STATION1')>, <Station(nom='STATION2')>]>, source = None, nullable = False reverse_dependency = False def add(self, objs, source=None, nullable=False, reverse_dependency=False): """ Add 'objs' to the collection of objects to be deleted. If the call is the result of a cascade, 'source' should be the model that caused it, and 'nullable' should be set to True if the relation can be null. Return a list of all objects that were not already collected. """ if not objs: return [] new_objs = [] model = objs[0].__class__ instances = self.data.setdefault(model, set()) for obj in objs: > if obj not in instances: E TypeError: unhashable type: 'Station' Instances of model objects are hashable in Django, once they are saved to database and get a primary key. I don't understand where the error comes from and why I get this when running this basic code: In [7]: s = Station.objects.create(nom='SOME PLACE') In [8]: hash(s) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-9333020f3184> in <module> ----> 1 hash(s) TypeError: unhashable type: 'Station' In [9]: s.pk Out[9]: 2035 All this code works fine when I switch back to Django 2.1.7. The same happens with other model objects in the app. I'm using python version 3.7.2 on Windows, with a SQlite backend (on the development workstation). Edit: Here's the definition of the model referred to above: class Station(models.Model): nom = models.CharField(max_length=200, unique=True) def __str__(self): return self.nom def __repr__(self): return "<Station(nom='{}')>".format(self.nom) def __eq__(self, other): return isinstance(other, Station) and self.nom == other.nom
As pointed out by @Alasdair, the issue was a change of behaviour brought in Django 2.2 to comply with how a model class should behave when __eq__() is overriden but not __hash__(). As per the python docs for __hash__(): A class that overrides __eq__() and does not define __hash__() will have its __hash__() implicitly set to None. More information about the implementation of this behaviour in Django can be found in this ticket. The fix can be either the one suggested in the ticket, i.e. re-assigning the __hash__() method of the model to the one of the super class: __hash__ = models.Model.__hash__ Or a more object-oriented way could be: def __hash__(self): return super().__hash__() This seems a bit weird because this should be unnecessary: by default, a call to __hash__() should use the method from the super class where it's implemented. This suggests Django breaks encapsulation somehow. But maybe I don't understand everything. Anyway that's a sidenote. In my case, I still wanted to be able to compare model instances not yet saved to the database for testing purposes and ended up with this implementation : def __hash__(self): if self.pk is None: return hash(self.nom) return super().__hash__()
7
19
61,217,834
2020-4-14
https://stackoverflow.com/questions/61217834/how-to-use-extra-files-for-aws-glue-job
I have an ETL job written in Python, which consists of multiple scripts with the following directory structure; my_etl_job | |--services | | | |-- __init__.py | |-- dynamoDB_service.py | |-- __init__.py |-- main.py |-- logger.py main.py is the entry-point script that imports the other scripts from the above directories. The above code runs perfectly fine on the dev endpoint, after uploading it to the ETL cluster created by the dev endpoint. Since I now want to run it in production, I want to create a proper Glue job for it. But when I compress the whole my_etl_job directory into .zip format, upload it to the artifacts S3 bucket, and specify the .zip file location as the script location as follows s3://<bucket_name>/etl_jobs/my_etl_job.zip this is the code I see on the Glue job UI dashboard; PK ���P__init__.pyUX�'�^"�^A��)PK#7�P logger.pyUX��^1��^A��)]�Mk�0����a�&v+���A�B���`x����q��} ...AND A LOT MORE... It seems like the Glue job doesn't accept the .zip format? If so, then what compression format should I use? UPDATE: I found that the Glue job has an option for taking in extra files, Referenced files path, where I provided a comma-separated list of all the paths of the above files, and changed the script_location to refer only to the main.py file path. But that didn't work either. The Glue job throws the error "no module found logger" (and I defined this module inside the logger.py file)
You'll have to pass the zip file as an extra Python library, or build a wheel package for the code, upload the zip or wheel to S3, and provide that path in the job's extra Python library option. Note: have your main function written in the Glue console script itself, referencing the required functions from the zipped/wheel dependency; your script location should never be a zip file. https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-libraries.html
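As a rough sketch of how this could be wired up with boto3 (the job name, IAM role and the dependency zip name are placeholders; --extra-py-files is the Glue argument that corresponds to the extra Python library option):

import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="my_etl_job",                     # placeholder job name
    Role="MyGlueServiceRole",              # placeholder IAM role
    Command={
        "Name": "glueetl",
        # Plain entry script, NOT a zip
        "ScriptLocation": "s3://<bucket_name>/etl_jobs/main.py",
    },
    DefaultArguments={
        # Zip containing services/ and logger.py, laid out so they are importable
        "--extra-py-files": "s3://<bucket_name>/etl_jobs/my_etl_job_deps.zip",
    },
)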
20
13
61,217,923
2020-4-14
https://stackoverflow.com/questions/61217923/merge-rows-based-on-value-pandas-to-excel-xlsxwriter
I'm trying to output a Pandas dataframe into an excel file using xlsxwriter. However I'm trying to apply some rule-based formatting; specifically trying to merge cells that have the same value, but having trouble coming up with how to write the loop. (New to Python here!) See below for output vs output expected: (As you can see based off the image above I'm trying to merge cells under the Name column when they have the same values). Here is what I have thus far: #This is the logic you use to merge cells in xlsxwriter (just an example) worksheet.merge_range('A3:A4','value you want in merged cells', merge_format) #Merge Car type Loop thought process... #1.Loop through data frame where row n Name = row n -1 Name #2.Get the length of the rows that have the same Name #3.Based off the length run the merge_range function from xlsxwriter, worksheet.merge_range('range_found_from_loop','Name', merge_format) for row_index in range(1,len(car_report)): if car_report.loc[row_index, 'Name'] == car_report.loc[row_index-1, 'Name'] #find starting point based off index, then get range by adding number of rows to starting point. for example lets say rows 0-2 are similar I would get 'A0:A2' which I can then put in the code below #from there apply worksheet.merge_range('A0:A2','[input value]', merge_format) Any help is greatly appreciated! Thank you!
Your logic is almost correct, however i approached your problem through a slightly different approach: 1) Sort the column, make sure that all the values are grouped together. 2) Reset the index (using reset_index() and maybe pass the arg drop=True). 3) Then we have to capture the rows where the value is new. For that purpose create a list and add the first row 1 because we will start for sure from there. 4) Then start iterating over the rows of that list and check some conditions: 4a) If we only have one row with a value the merge_range method will give an error because it can not merge one cell. In that case we need to replace the merge_range with the write method. 4b) With this algorithm you 'll get an index error when trying to write the last value of the list (because it is comparing it with the value in the next index postion, and because it is the last value of the list there is not a next index position). So we need to specifically mention that if we get an index error (which means we are checking the last value) we want to merge or write until the last row of the dataframe. 4c) Finally i did not take into consideration if the column contains blank or null cells. In that case code needs to be adjusted. Lastly code might look a bit confusing, you have to take in mind that the 1st row for pandas is 0 indexed (headers are separate) while for xlsxwriter headers are 0 indexed and the first row is indexed 1. Here is a working example to achieve exactly what you want to do: import pandas as pd # Create a test df df = pd.DataFrame({'Name': ['Tesla','Tesla','Toyota','Ford','Ford','Ford'], 'Type': ['Model X','Model Y','Corolla','Bronco','Fiesta','Mustang']}) # Create the list where we 'll capture the cells that appear for 1st time, # add the 1st row and we start checking from 2nd row until end of df startCells = [1] for row in range(2,len(df)+1): if (df.loc[row-1,'Name'] != df.loc[row-2,'Name']): startCells.append(row) writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter') df.to_excel(writer, sheet_name='Sheet1', index=False) workbook = writer.book worksheet = writer.sheets['Sheet1'] merge_format = workbook.add_format({'align': 'center', 'valign': 'vcenter', 'border': 2}) lastRow = len(df) for row in startCells: try: endRow = startCells[startCells.index(row)+1]-1 if row == endRow: worksheet.write(row, 0, df.loc[row-1,'Name'], merge_format) else: worksheet.merge_range(row, 0, endRow, 0, df.loc[row-1,'Name'], merge_format) except IndexError: if row == lastRow: worksheet.write(row, 0, df.loc[row-1,'Name'], merge_format) else: worksheet.merge_range(row, 0, lastRow, 0, df.loc[row-1,'Name'], merge_format) writer.save() Output:
19
15
61,213,745
2020-4-14
https://stackoverflow.com/questions/61213745/typechecking-dynamically-added-attributes
When writing project-specific pytest plugins, I often find the Config object useful to attach my own properties. Example: from _pytest.config import Config def pytest_configure(config: Config) -> None: config.fizz = "buzz" def pytest_unconfigure(config: Config) -> None: print(config.fizz) Obviously, there's no fizz attribute in _pytest.config.Config class, so running mypy over the above snippet yields conftest.py:5: error: "Config" has no attribute "fizz" conftest.py:8: error: "Config" has no attribute "fizz" (Note that pytest doesn't have a release with type hints yet, so if you want to actually reproduce the error locally, install a fork following the steps in this comment). Sometimes redefining the class for typechecking can offer a quick help: from typing import TYPE_CHECKING if TYPE_CHECKING: from _pytest.config import Config as _Config class Config(_Config): fizz: str else: from _pytest.config import Config def pytest_configure(config: Config) -> None: config.fizz = "buzz" def pytest_unconfigure(config: Config) -> None: print(config.fizz) However, aside from cluttering the code, the subclassing workaround is very limited: adding e.g. from pytest import Session def pytest_sessionstart(session: Session) -> None: session.config.fizz = "buzz" would force me to also override Session for typechecking. What is the best way to resolve this? Config is one example, but I usually have several more in each project (project-specific adjustments for test collection/invocation/reporting etc). I could imagine writing my own version of pytest stubs, but then I would need to repeat this for every project, which is very tedious.
One way of doing this would be to contrive to have your Config object define __getattr__ and __setattr__ methods. If those methods are defined in a class, mypy will use those to type check places where you're accessing or setting some undefined attribute. For example: from typing import Any class Config: def __init__(self) -> None: self.always_available = 1 def __getattr__(self, name: str) -> Any: pass def __setattr__(self, name: str, value: Any) -> None: pass c = Config() # Revealed types are 'int' and 'Any' respectively reveal_type(c.always_available) reveal_type(c.missing_attr) # The first assignment type checks, but the second doesn't: since # 'already_available' is a predefined attr, mypy won't try using # `__setattr__`. c.dummy = "foo" c.always_available = "foo" If you know for certain your ad-hoc properties will always be strs or something, you could type __getattr__ and __setattr__ to return or accept str instead of Any respectively to get tighter types. Unfortunately, you would still need to do the subtyping trick or mess around with making your own stubs -- the only advantage this gives you is that you at least won't have to list out every single custom property you want to set and makes it possible to create something genuinely reusable. This could maybe make the option more palatable to you, not sure. Other options you could explore include: Just adding a # type: ignore comment to every line where you use an ad-hoc property. This would be a somewhat precise, if intrusive, way of suppressing the error messages. Type your pytest_configure and pytest_unconfigure so they accept objects of type Any. This would be a somewhat less intrusive way of suppressing the error messages. If you want to minimize the blast radius of using Any, you could maybe confine any logic that wants to use these custom properties to their own dedicated functions and continue using Config everywhere else. Try using casting instead. For example, inside pytest_configure you could do config = cast(MutableConfig, config) where MutableConfig is a class you wrote that subclasses _pytest.Config and defines both __getattr__ and __setattr__. This is maybe a middle ground between the above two approaches. If adding ad-hoc attributes to Config and similar classes is a common kind of thing to do, maybe try convincing the pytest maintainers to include typing-only __getattr__ and __setattr__ definitions in their type hints -- or some other more dedicated way of letting users add these dynamic properties.
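A minimal sketch of the casting option mentioned above (MutableConfig is a class you would define yourself purely for typing purposes; it is never instantiated, since cast is a no-op at runtime):

from typing import Any, cast

from _pytest.config import Config


class MutableConfig(Config):
    """Typing-only subclass that allows arbitrary ad-hoc attributes."""

    def __getattr__(self, name: str) -> Any: ...
    def __setattr__(self, name: str, value: Any) -> None: ...


def pytest_configure(config: Config) -> None:
    mconfig = cast(MutableConfig, config)
    mconfig.fizz = "buzz"  # type checks; at runtime this still sets the attribute on the real Config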
14
7
61,125,925
2020-4-9
https://stackoverflow.com/questions/61125925/optimization-help-involving-matrix-operations-and-constraints
I'm so far out of my league on this one, so I'm hoping someone can point me in the right direction. I think this is an optimization problem, but I have been confused by scipy.optimize and how it fits with pulp. Also, matrix math boggles my mind. Therefore this problem has really been slowing me down without to ask. Problem Statement: I have a dataset of customers. For each customer, I can make a choice of 3 options, or choose none. So 4 options. Also for each customer, I have a numeric score that says how "good" each choice is. You can imagine this value as the probability of the Choice to create a future sale. # fake data for the internet data = {'customerid':[101,102,103,104,105,106,107,108,109,110], 'prob_CHOICEA':[0.00317,0.00629,0.00242,0.00253,0.00421,0.00414,0.00739,0.00549,0.00658,0.00852], 'prob_CHOICEB':[0.061,0.087,0.055,0.027,0.022,0.094,0.099,0.072,0.018,0.052], 'prob_CHOICEC':[0.024,0.013,0.091,0.047,0.071,0.077,0.067,0.046,0.077,0.044] } # Creates pandas DataFrame df = pd.DataFrame(data) df = df.reset_index(drop=True).set_index(['customerid']) +------------+--------------+--------------+--------------+ | customerid | prob_CHOICEA | prob_CHOICEB | prob_CHOICEC | +------------+--------------+--------------+--------------+ | 101 | 0.00317 | 0.061 | 0.024 | | 102 | 0.00629 | 0.087 | 0.013 | | 103 | 0.00242 | 0.055 | 0.091 | | 104 | 0.00253 | 0.027 | 0.047 | | 105 | 0.00421 | 0.022 | 0.071 | | 106 | 0.00414 | 0.094 | 0.077 | | 107 | 0.00739 | 0.099 | 0.067 | | 108 | 0.00549 | 0.072 | 0.046 | | 109 | 0.00658 | 0.018 | 0.077 | | 110 | 0.00852 | 0.052 | 0.044 | +------------+--------------+--------------+--------------+ I started by combining these elements into a single array for each customer: # combine all values into 1 array list_to_combine = ['prob_CHOICEA', 'prob_CHOICEB','prob_CHOICEC'] df['probs_A_B_C']= df[list_to_combine].values.tolist() df.drop(list_to_combine, axis=1, inplace=True) +------------+-------------------------+ | customerid | probs_A_B_C | +------------+-------------------------+ | 101 | [0.00317, 0.061, 0.024] | | 102 | [0.00629, 0.087, 0.013] | | 103 | [0.00242, 0.055, 0.091] | | 104 | [0.00253, 0.027, 0.047] | | 105 | [0.00421, 0.022, 0.071] | | 106 | [0.00414, 0.094, 0.077] | | 107 | [0.00739, 0.099, 0.067] | | 108 | [0.00549, 0.072, 0.046] | | 109 | [0.00658, 0.018, 0.077] | | 110 | [0.00852, 0.052, 0.044] | +------------+-------------------------+ For each customer, I only have 4 choices of what to do: choices = [ [0,0,0], [1,0,0], [0,1,0], [0,0,1] ] For each customer, I want to choose the best choice for each customer. At first glance this was easy - just pick the highest number. However it starts to blow my mind once I start adding constraints. 
For example, what if I want to choose the best choice for each customer, but with the constraint that the sum of selected choices is = 5 +------------+-------------------------+-------------+ | customerid | probs_A_B_C | best_choice | +------------+-------------------------+-------------+ | 101 | [0.00317, 0.061, 0.024] | [0,0,0] | | 102 | [0.00629, 0.087, 0.013] | [0,1,0] | | 103 | [0.00242, 0.055, 0.091] | [0,0,1] | | 104 | [0.00253, 0.027, 0.047] | [0,0,0] | | 105 | [0.00421, 0.022, 0.071] | [0,0,0] | | 106 | [0.00414, 0.094, 0.077] | [0,1,0] | | 107 | [0.00739, 0.099, 0.067] | [0,1,0] | | 108 | [0.00549, 0.072, 0.046] | [0,0,0] | | 109 | [0.00658, 0.018, 0.077] | [0,0,1] | | 110 | [0.00852, 0.052, 0.044] | [0,0,0] | +------------+-------------------------+-------------+ I did not even figure out how to do this, I just eye-balled it manually for illustrative purposes. Ideally, I'd like to add multiple constraints simultaneously: Total sum of best_choice = N Total sum of CHOICEA (first element of best_choice) >= M Total sum of CHOICEB (second element of best_choice) <= 10 Any ideas of where to start?
You can use scipy.optimize.linprog to solve this linear optimization problem. It requires to setup the boundary conditions as matrix products, as outlined in the docs. There are two types of boundary conditions, inequalities of the form A @ x <= b and equality A @ x == b. The problem can be modeled as follows: The resulting vector x has length N*C where N is the number of customers and C is the number of options; it represents the choices per custom in a linear layout: [c1_A, c1_B, c1_C, c2_A, c2_B, c2_C, ..., cN_A, cN_B, cN_C]. Since each customer can make at most one choice we have an inequality for each customer that sums all the corresponding choices, i.e. a matrix where the rows represent the customers and columns represent all choices. The matrix has entries 1 if a choice corresponds to the customer and zero otherwise (illustration see below). Option A must be selected at minimum M times; since we only have inequalities of the form A @ x <= b we can invert the values and use -1 entries in A that correspond to option A and -M in b. Option B must be selected no more that 10 times; this can be modeled similar to the previous constraint by using entries of 1 and positive 10 (since it is already of the form <=). The sum of all choices must be N. This can be modeled by an equality constraint where the matrix sums over all choices in x and the result must be equal to N. This is an illustration of the above constraints: # Max. one choice per customer. # A = # b = [[1, 1, 1, 0, 0, 0, ..., 0, 0, 0], [1, [0, 0, 0, 1, 1, 1, ..., 0, 0, 0], 1, ... ... [0, 0, 0, 0, 0, 0, ..., 1, 1, 1]] 1] # Min. M choices for option A. # A = # b = [[-1, 0, 0, -1, 0, 0, ..., -1, 0, 0]] [[-M]] # Max. 10 choices for option B. # A = # b = [[0, 1, 0, 0, 1, 0, ..., 0, 1, 0]] [[10]] # Total number of choices equals N. # A = # b = [[1, 1, 1, 1, 1, 1, ..., 1, 1, 1]] [[N]] Here's some sample code to setup the constraints and run the optimization: import numpy as np import pandas as pd from scipy.optimize import linprog data = {'customerid':[101,102,103,104,105,106,107,108,109,110], 'prob_CHOICEA':[0.00317,0.00629,0.00242,0.00253,0.00421,0.00414,0.00739,0.00549,0.00658,0.00852], 'prob_CHOICEB':[0.061,0.087,0.055,0.027,0.022,0.094,0.099,0.072,0.018,0.052], 'prob_CHOICEC':[0.024,0.013,0.091,0.047,0.071,0.077,0.067,0.046,0.077,0.044] } # Creates pandas DataFrame df = pd.DataFrame(data) df = df.reset_index(drop=True).set_index(['customerid']) print(df, end='\n\n') nc = df.shape[1] # number of options data = df.to_numpy().ravel() # Max. choices per customer is 1. A_ub_1 = np.zeros((len(df), len(data))) for i in range(len(A_ub_1)): A_ub_1[i, nc*i:nc*(i+1)] = 1 b_ub_1 = np.ones(len(df)) # Min. choices for option A is 3. A_ub_2 = np.zeros((1, len(data))) A_ub_2[0, ::nc] = -1 # invert, since this defines an upper boundary b_ub_2 = np.array([-3]) # Max. choices for option B is 2. A_ub_3 = np.zeros((1, len(data))) A_ub_3[0, 1::nc] = 1 b_ub_3 = np.array([2]) # Total sum of choices is 7. 
A_eq = np.ones((1, len(data))) b_eq = np.array([7]) result = linprog( -1 * data, # linprog aims to minimize the value A_eq=A_eq, b_eq=b_eq, A_ub=np.concatenate((A_ub_1, A_ub_2, A_ub_3), axis=0), b_ub=np.concatenate((b_ub_1, b_ub_2, b_ub_3), axis=0), bounds=(0, 1) ) print(result, end='\n\n') choices = (result.x.reshape(-1, 3) > 1e-6).astype(int) print('Choices:', choices, sep='\n') It produces the following results: prob_CHOICEA prob_CHOICEB prob_CHOICEC customerid 101 0.00317 0.061 0.024 102 0.00629 0.087 0.013 103 0.00242 0.055 0.091 104 0.00253 0.027 0.047 105 0.00421 0.022 0.071 106 0.00414 0.094 0.077 107 0.00739 0.099 0.067 108 0.00549 0.072 0.046 109 0.00658 0.018 0.077 110 0.00852 0.052 0.044 con: array([-1.30002675e-11]) fun: -0.3812999999903971 message: 'Optimization terminated successfully.' nit: 7 slack: array([1.00000000e+00, 7.99305067e-11, 1.47325485e-11, 1.00000000e+00, 1.00000000e+00, 2.49527066e-11, 2.42738052e-11, 5.84235438e-10, 4.23596713e-11, 5.77714543e-11, 8.80984175e-12, 1.46305190e-11]) status: 0 success: True x: array([2.89971936e-10, 1.32732722e-11, 6.97732845e-12, 1.00000000e+00, 3.28055311e-10, 5.72702383e-12, 1.80418885e-11, 4.61391860e-12, 1.00000000e+00, 2.01674011e-10, 4.58311340e-12, 1.29599793e-11, 2.95298295e-10, 4.34109315e-12, 1.21776975e-11, 3.39951283e-11, 1.00000000e+00, 2.55262044e-10, 4.94703751e-11, 1.00000000e+00, 1.57932544e-11, 9.99999999e-01, 2.21487598e-11, 1.33679145e-11, 2.30514296e-10, 3.91129933e-12, 1.00000000e+00, 1.00000000e+00, 8.19015577e-12, 1.07293976e-11]) Choices: [[0 0 0] [1 0 0] [0 0 1] [0 0 0] [0 0 0] [0 1 0] [0 1 0] [1 0 0] [0 0 1] [1 0 0]]
7
5
61,104,747
2020-4-8
https://stackoverflow.com/questions/61104747/jupyter-notebook-to-html-notebook-json-is-invalid-outputprepend
I am trying to convert my Jupyter Notebook file (.ipynb) into an HTML file for easier reading. Every time I try to save the notebook I get a "Notebook validation failed" error: Notebook validation failed: ['outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend'] has non-unique elements: [ "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend" ] message when file is saved When I try to download as .html using File > Download as, I get a further more detailed error but I still cannot make out what it means or what needs to be done to solve the problem and finally download as HTML: nbconvert failed: ['outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend'] has non-unique elements Failed validating 'uniqueItems' in code_cell['properties']['metadata']['properties']['tags']: On instance['cells'][70]['metadata']['tags']: ['outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend', 'outputPrepend'] message when trying to download as HTML This is only happening on the specific notebook I am working on. I have tried to save and export other notebooks as HTML successfully. Any help is appreciated. Thanks
I recently experienced this from using the VSCode Notebook editor. I solved it by opening the notebook in a regular text editor and deleting all the extra outputPrepend-items, leaving only a single one in each array.
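If you prefer to fix it programmatically rather than by hand, a small sketch along these lines (using only the json module; the filename is a placeholder) de-duplicates the tag list of every cell:

import json

path = "notebook.ipynb"  # placeholder filename

with open(path, encoding="utf-8") as fp:
    nb = json.load(fp)

for cell in nb.get("cells", []):
    tags = cell.get("metadata", {}).get("tags")
    if tags:
        # Keep one copy of each tag, preserving order
        cell["metadata"]["tags"] = list(dict.fromkeys(tags))

with open(path, "w", encoding="utf-8") as fp:
    json.dump(nb, fp, indent=1)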
8
14
61,190,321
2020-4-13
https://stackoverflow.com/questions/61190321/calling-invoking-a-javascript-function-from-python-a-flask-function-within-html
I was creating a flask application and tried to call a python function in which I wanted to invoke some javascript function/code regarding the HTML template that I returned on the initial app.route('/') If the user did something, then I called another function that should invoke or call a js function I have tried looking everywhere but I cannot make any sense of the solutions. Here is the structure of my code: @app.route('/', methods=['GET', 'POST']) def upload_file(): if request.method == 'POST': #verify if the file is valid #here invoke js to do something (for example flash("test")) return ''' <!doctype html> <title>Upload new File</title> <h1>Upload new File</h1> <form method=post enctype=multipart/form-data> <input type=file name=file> <input type=submit value=Upload> </form> '''
You could execute a JavaScript function on load and have the function check for the condition. You can influence the outcome of this check by changing the condition with Python. If you use the render_template function of Flask, you do not have to write your HTML code within your Python file. For better readability I am using this functionality, but you can always put the HTML code into your Python code, as you have done before. Your HTML template, e.g. named upload.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Upload new File</title> </head> <body onload="flashMessage()"> <script> function flashMessage() { if ("{{ flash_message }}" == "True") { alert("[YOUR_MESSAGE_HERE]"); } } </script> <h1>Upload new File</h1> <form method=post enctype=multipart/form-data> <input type=file name=file> <input type=submit value=Upload> </form> </body> </html> Your Python code: from flask import Flask, render_template app = Flask(__name__) @app.route('/', methods=['GET', 'POST']) def upload_file(): if request.method == 'POST': #verify if the file is valid #here invoke js to do something (for example flash("test")) return render_template('upload.html', flash_message="True") return render_template('upload.html', flash_message="False") So, the condition line in your HTML file will render to if ("True" == "True") or if ("False" == "True") depending on if you want the flash message to show or not.
8
9
61,126,284
2020-4-9
https://stackoverflow.com/questions/61126284/zipped-python-generators-with-2nd-one-being-shorter-how-to-retrieve-element-tha
I want to parse 2 generators of (potentially) different length with zip: for el1, el2 in zip(gen1, gen2): print(el1, el2) However, if gen2 has less elements, one extra element of gen1 is "consumed". For example, def my_gen(n:int): for i in range(n): yield i gen1 = my_gen(10) gen2 = my_gen(8) list(zip(gen1, gen2)) # Last tuple is (7, 7) print(next(gen1)) # printed value is "9" => 8 is missing gen1 = my_gen(8) gen2 = my_gen(10) list(zip(gen1, gen2)) # Last tuple is (7, 7) print(next(gen2)) # printed value is "8" => OK Apparently, a value is missing (8 in my previous example) because gen1 is read (thus generating the value 8) before it realizes gen2 has no more elements. But this value disappears in the universe. When gen2 is "longer", there is no such "problem". QUESTION: Is there a way to retrieve this missing value (i.e. 8 in my previous example)? ... ideally with a variable number of arguments (like zip does). NOTE: I have currently implemented in another way by using itertools.zip_longest but I really wonder how to get this missing value using zip or equivalent. NOTE 2: I have created some tests of the different implementations in this REPL in case you want to submit and try a new implementation :) https://repl.it/@jfthuong/MadPhysicistChester
If you want to reuse code, the easiest solution is: from more_itertools import peekable a = peekable(a) b = peekable(b) while True: try: a.peek() b.peek() except StopIteration: break x = next(a) y = next(b) print(x, y) print(list(a), list(b)) # Misses nothing. You can test this code out using your setup: def my_gen(n: int): yield from range(n) a = my_gen(10) b = my_gen(8) It will print: 0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 [8, 9] []
66
2
61,194,028
2020-4-13
https://stackoverflow.com/questions/61194028/adding-labels-at-end-of-line-chart-in-altair
So I have been trying to get it so there is a label at the end of each line giving the name of the country, then I can remove the legend. Have tried playing with transform_filter but no luck. I used data from here https://ourworldindata.org/coronavirus-source-data I cleaned and reshaped the data so it looks like this:- index days date country value 0 1219 0 2020-03-26 Australia 11.0 1 1220 1 2020-03-27 Australia 13.0 2 1221 2 2020-03-28 Australia 13.0 3 1222 3 2020-03-29 Australia 14.0 4 1223 4 2020-03-30 Australia 16.0 5 1224 5 2020-03-31 Australia 19.0 6 1225 6 2020-04-01 Australia 20.0 7 1226 7 2020-04-02 Australia 21.0 8 1227 8 2020-04-03 Australia 23.0 9 1228 9 2020-04-04 Australia 30.0 import altair as alt countries_list = ['Australia', 'China', 'France', 'Germany', 'Iran', 'Italy','Japan', 'South Korea', 'Spain', 'United Kingdom', 'United States'] chart = alt.Chart(data_core_sub).mark_line().encode( alt.X('days:Q'), alt.Y('value:Q', scale=alt.Scale(type='log')), alt.Color('country:N', scale=alt.Scale(domain=countries_list,type='ordinal')), ) labels = alt.Chart(data_core_sub).mark_text().encode( alt.X('days:Q'), alt.Y('value:Q', scale=alt.Scale(type='log')), alt.Text('country'), alt.Color('country:N', legend=None, scale=alt.Scale(domain=countries_list,type='ordinal')), ).properties(title='COVID-19 total deaths', width=600) alt.layer(chart, labels).resolve_scale(color='independent') This is the current mess that the chart is in. How would I go about just showing the last 'country' name? EDIT Here is the result. I might look at adjusting some of the countries separately as adjusting as a group means that some of the labels are always badly positioned no matter what I do with the dx and dy alignment.
You can do this by aggregating the x and y encodings. You want the text to be at the maximum x value, so you can use a 'max' aggregate in x. For the y-value, you want the y value associated with the max x-value, so you can use an {"argmax": "x"} aggregate. With a bit of adjustment of text alignment, the result looks like this: labels = alt.Chart(data_core_sub).mark_text(align='left', dx=3).encode( alt.X('days:Q', aggregate='max'), alt.Y('value:Q', aggregate={'argmax': 'days'}, scale=alt.Scale(type='log')), alt.Text('country'), alt.Color('country:N', legend=None, scale=alt.Scale(domain=countries_list,type='ordinal')), ).properties(title='COVID-19 total deaths', width=600)
7
12
61,194,881
2020-4-13
https://stackoverflow.com/questions/61194881/docker-container-run-locally-didnt-send-any-data
I am trying to run a basic flask app inside a docker container. The docker build works fine but when i try to test locally i get 127.0.0.1 didn't send any data error. Dockerfile FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7 ENV LISTEN_PORT=5000 EXPOSE 5000 RUN pip install --upgrade pip WORKDIR /app ADD . /app CMD ["python3","main.py","--host=0.0.0.0"] main.py import flask from flask import Flask, request import os app = Flask(__name__) @app.route('/') def this_works(): return "This works..." if __name__ == '__main__': app.run(debug=True) The command to run the container i am using is : docker run -it --name dockertestapp1 --rm -p 5000:5000 dockertestapp1 Also command to build is : docker build --tag dockertestapp1 . Could someone help please.
The issue is that you are passing --host parameter while not using the flask binary to bring up the application. Thus, you need to just take the parameter out of CMD in Dockerfile to your code. Working setup: Dockerfile: FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7 ENV LISTEN_PORT=5000 EXPOSE 5000 RUN pip install --upgrade pip WORKDIR /app ADD . /app CMD ["python3","main.py"] and main.py import flask from flask import Flask, request import os app = Flask(__name__) @app.route('/') def this_works(): return "This works..." if __name__ == '__main__': app.run(host="0.0.0.0", debug=True) The way you are building the image and bringing up the container is correct. I am adding the steps again for the answer to be whole: # build the image docker build --tag dockertestapp1 . # run the container docker run -it --name dockertestapp1 --rm -p 5000:5000 dockertestapp1
7
9
61,159,469
2020-4-11
https://stackoverflow.com/questions/61159469/importerror-cannot-import-name-dnn-superres-for-python-example-of-super-resol
I am trying to run an example for upscaling images from the following website: https://towardsdatascience.com/deep-learning-based-super-resolution-with-opencv-4fd736678066 This is the code I am using: import cv2 from cv2 import dnn_superres # Create an SR object sr = dnn_superres.DnnSuperResImpl_create() # Read image image = cv2.imread('butterfly.png') # Read the desired model path = "EDSR_x3.pb" sr.readModel(path) # Set the desired model and scale to get correct pre- and post-processing sr.setModel("edsr", 3) # Upscale the image result = sr.upsample(image) # Save the image cv2.imwrite("./upscaled.png", result) I have downloaded the already trained model from the website, called "EDSR_x3.pb" and when I run the code I get the following error: Traceback (most recent call last): File "upscale.py", line 2, in <module> from cv2 import dnn_superres ImportError: cannot import name 'dnn_superres' I now it seems like there is no such method or class, but I have already installed opencv and the contrib modules. Why do I get this error?
I had the same problem with Python 3.6.9 and opencv 4.2.0, but after the upgrade to 4.3.0, the problem disappeared. If you have no problem upgrading the version, try 4.3.0.
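A quick way to check whether your installed build is new enough (a small sketch; the attribute check simply verifies that the module is exposed by your cv2 build):

import cv2

print(cv2.__version__)               # should be 4.3.0 or newer
print(hasattr(cv2, "dnn_superres"))  # True once the module is available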
18
4
61,172,400
2020-4-12
https://stackoverflow.com/questions/61172400/what-does-padding-idx-do-in-nn-embeddings
I'm learning pytorch and I'm wondering what does the padding_idx attribute do in torch.nn.Embedding(n1, d1, padding_idx=0)? I have looked everywhere and couldn't find something I can get. Can you show example to illustrate this?
As per the docs, padding_idx pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index. What this means is that wherever you have an item equal to padding_idx, the output of the embedding layer at that index will be all zeros. Here is an example: Let us say you have word embeddings of 1000 words, each 50-dimensional, i.e. num_embeddings=1000, embedding_dim=50. Then torch.nn.Embedding works like a lookup table (the lookup table is trainable though): emb_layer = torch.nn.Embedding(1000,50) x = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) y = emb_layer(x) y will be a tensor of shape 2x4x50. I hope this part is clear to you. Now if I specify padding_idx=2, i.e. emb_layer = torch.nn.Embedding(1000,50, padding_idx=2) x = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) y = emb_layer(x) then the output will still be 2x4x50, but the 50-dim vectors at positions (1,2) and (2,3) (using 1-based row/column numbering) will be all zeros, since the values of x at those positions are 2, which is equal to the padding_idx. You can think of it as the 3rd word in the lookup table (since the lookup table is 0-indexed) not being used for training.
18
16
61,168,140
2020-4-12
https://stackoverflow.com/questions/61168140/opencv-removing-the-background-with-a-mask-image
Given two inputs, i. an original image and ii. a mask image, what's the best way to remove the background from the original image? Original Image Mask Image The final output would contain just the dog without the background and look transparent. I have seen that the mask images are also created with OpenCV. Is there a way to just use the existing mask image and generate the output image? Update I tried this import cv2 # opencv loads the image in BGR, convert it to RGB img = cv2.imread("originalImage.png") mask = cv2.imread("maskImage.png") final = cv2.bitwise_and(img, mask) cv2.imwrite("final.png", final) Final Image Is there a way to set the background to be transparent?
You can create a transparent image by creating a 4-channel BGRA image, copying the first 3 channels from the original image and setting the alpha channel from the mask image (note that the mask has to be a single-channel image for the alpha assignment, e.g. loaded with cv2.IMREAD_GRAYSCALE). transparent = np.zeros((img.shape[0], img.shape[1], 4), dtype=np.uint8) transparent[:,:,0:3] = img transparent[:, :, 3] = mask
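Put together with the code from the question, a minimal end-to-end sketch might look like this (file names are the ones used in the question; the output must be saved as PNG, since JPEG cannot store an alpha channel):

import cv2
import numpy as np

img = cv2.imread("originalImage.png")                     # BGR, 3 channels
mask = cv2.imread("maskImage.png", cv2.IMREAD_GRAYSCALE)  # single channel

transparent = np.zeros((img.shape[0], img.shape[1], 4), dtype=np.uint8)
transparent[:, :, 0:3] = img   # copy the colour channels
transparent[:, :, 3] = mask    # white (255) = opaque, black (0) = transparent

cv2.imwrite("final_transparent.png", transparent)  # PNG keeps the alpha channel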
8
7
61,163,024
2020-4-11
https://stackoverflow.com/questions/61163024/return-multiple-files-from-fastapi
Using fastapi, I can't figure out how to send multiple files as a response. For example, to send a single file, I'll use something like this from fastapi import FastAPI, Response app = FastAPI() @app.get("/image_from_id/") async def image_from_id(image_id: int): # Get image from the database img = ... return Response(content=img, media_type="application/png") However, I'm not sure what it looks like to send a list of images. Ideally, I'd like to do something like this: @app.get("/images_from_ids/") async def image_from_id(image_ids: List[int]): # Get a list of images from the database images = ... return Response(content=images, media_type="multipart/form-data") However, this returns the error def render(self, content: typing.Any) -> bytes: if content is None: return b"" if isinstance(content, bytes): return content > return content.encode(self.charset) E AttributeError: 'list' object has no attribute 'encode'
Zipping is the best option and will behave the same in all browsers; you can zip the files dynamically. import os import zipfile import io from fastapi import Response def zipfiles(filenames): zip_subdir = "archive" zip_filename = "%s.zip" % zip_subdir # Open an in-memory buffer to grab the ZIP contents s = io.BytesIO() # The zip compressor zf = zipfile.ZipFile(s, "w") for fpath in filenames: # Calculate path for file in zip fdir, fname = os.path.split(fpath) zip_path = os.path.join(zip_subdir, fname) # Add file, at correct path zf.write(fpath, zip_path) # Must close zip for all contents to be written zf.close() # Grab ZIP file from in-memory buffer, make response with correct MIME-type and content-disposition resp = Response(s.getvalue(), media_type="application/x-zip-compressed", headers={'Content-Disposition': 'attachment; filename=%s' % zip_filename}) return resp @app.get("/images_from_ids/") async def images_from_ids(image_ids: List[int]): # Get the image file paths from the database filenames = ... return zipfiles(filenames) As an alternative, you can use base64 encoding to embed a (very small) image into a JSON response, but I don't recommend it. You could also use MIME/multipart, but keep in mind that it was created for email messages and/or POST transmission to the HTTP server; it was never intended to be received and parsed on the client side of an HTTP transaction. Some browsers support it, others don't, so I wouldn't use that either.
11
10
61,159,437
2020-4-11
https://stackoverflow.com/questions/61159437/what-is-the-equivalent-of-decorators-with-arguments-without-the-syntactical-suga
I'm learning about decorators and came across an example where the decorator took an argument. This was a little confusing for me though, because I learned that (note: the examples from this question are mostly from this article): def my_decorator(func): def inner(*args, **kwargs): print('Before function runs') func(*args, **kwargs) print('After function ran') return inner @my_decorator def foo(thing_to_print): print(thing_to_print) foo('Hello') # Returns: # Before function runs # Hello # After function ran was equivalent to foo = my_decorator(foo) So it doesn't make sense to me how a decorator could take an argument. To better explain, here is a decorator example that takes an argument: def repeat(num_times): def decorator_repeat(func): @functools.wraps(func) def wrapper_repeat(*args, **kwargs): for _ in range(num_times): value = func(*args, **kwargs) return value return wrapper_repeat return decorator_repeat @repeat(num_times=4) def greet(name): print(f"Hello {name}") greet('Bob') # Returns: # Hello Bob # Hello Bob # Hello Bob # Hello Bob So when I see this I am thinking: greet = repeat(greet, num_times=4) I know that that can't be right because num_times is the only argument that should be getting passed. So what is the correct equivalent to @repeat(num_times=4) without the "@-symbol-syntax"? Thanks!
That article is the same one that taught me everything I know about decorators! It's brilliant. As for what the non-@-symbol syntax looks like: you can imagine the actual decorator function is decorator_repeat(func), the function defined inside repeat(num_times=4). @repeat(num_times=4) returns a decorator which is essentially @decorator_repeat, except that decorator_repeat now has access to the variable num_times. Further down the page the article shows how to make these arguments optional, which may help clarify it further for you.
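To make the equivalence concrete, here is the example from the question written without the @ syntax (just a sketch; repeat is the decorator factory defined in the question):

def greet(name):
    print(f"Hello {name}")

# repeat(num_times=4) builds and returns the actual decorator (decorator_repeat),
# which is then called with greet - exactly what @repeat(num_times=4) does
greet = repeat(num_times=4)(greet)

greet('Bob')  # prints "Hello Bob" four times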
8
3
61,153,872
2020-4-11
https://stackoverflow.com/questions/61153872/renumbering-line-by-line
I have an input text which looks like this: word77 text text bla66 word78 text bla67 text bla68 word79 text bla69 word80 text bla77 word81 text bla78 word92 text bla79 word99 I have to renumber word and bla from 1, in each line. I can renumber the whole input which looks like this: word1 text text bla1 word2 text bla2 text bla3 word3 text bla4 word4 text bla5 word5 text bla6 word6 text bla7 word7 The code for the above: import re def replace(m): global i; i+=1; return str(i); fp = open('input.txt', 'r').read() i = 0 fp = re.sub(r'(?<=word)(\d+)', replace, fp) i = 0 fp = re.sub(r'(?<=bla)(\d+)', replace, fp) #open('sample.txt', 'wb').write(fp) print fp Ideally, the result should look like this: word1 text text bla1 word2 text bla2 text bla1 word1 text bla2 word2 text bla1 word2 text bla3 word3 text bla4 word4
You operate on the whole file at once (fp.read()) - you need to do it line-wise: with open("input.txt","w") as f: f.write("""word77 text text bla66 word78 text bla67 text bla68 word79 text bla69 word80 text bla77 word81 text bla78 word92 text bla79 word99""") import re i = 0 def replace(m): global i i+=1 return str(i) with open('input.txt') as fp, open("output.txt","w") as out: # read only one line of the file and apply the transformations for line in fp: i = 0 l = re.sub(r'(?<=word)(\d+)', replace, line) i = 0 l = re.sub(r'(?<=bla)(\d+)', replace, l) out.write(l) with open("output.txt") as f: print(f.read()) Output: word1 text text bla1 word2 text bla2 text bla1 word1 text bla2 word2 text bla1 word1 text bla2 word2 text bla3 word3
7
8
61,144,232
2020-4-10
https://stackoverflow.com/questions/61144232/updated-to-python-3-8-terminal-wont-open
I updated my system (Ubuntu 18.04) from Python 3.6 to Python 3.8, and reset the defaults so that python3 now points to Python 3.8 (and not 3.6). However, since then, the terminal has refused to open using Ctrl + Alt + T, and other obvious methods such as clicking on the icon itself. When I run gnome-terminal - I get the following: usernew@HP:/usr/lib/python3/dist-packages/gi$ gnome-terminal Traceback (most recent call last): File "/usr/bin/gnome-terminal", line 9, in <module> from gi.repository import GLib, Gio File "/usr/lib/python3/dist-packages/gi/__init__.py", line 42, in <module> from . import _gi ImportError: cannot import name '_gi' from partially initialized module 'gi' (most likely due to a circular import) (/usr/lib/python3/dist-packages/gi/__init__.py) I don't know what this means but I guess it definitely points to the fact that something went wrong during the update. I understand that there are other existing threads on similar issues, but most of them were about updating from Python2 to Python3, so I'm not sure if they're relevant. Could someone help, please? Important Update: So, after reading this answer - I changed the gnome-terminal script's first line to #!/usr/bin/python3.6 instead of #!/usr/bin/python3.8 - and that solves the problem. Also, when I type python3 in the terminal, I'm greeted with Python 3.8.2, as desired. The question remains - Why did this work? What was the actual problem? An explanation would help, so I really know what I'm doing. Thanks!
You shouldn't change the symlink /usr/bin/python3 since a bunch of Ubuntu components depend on it, and Ubuntu-specific Python libraries like gi are built only for the Python build shipped with Ubuntu, which is version 3.6 on 18.04. See Gnome terminal will not start on Ask Ubuntu (though note that it's about Ubuntu 16.04 which uses Python 3.5). So the best way to fix it is to revert the symlink: sudo ln -sf python3.6 /usr/bin/python3 As for setting Python 3.8 as the default, you could put an alias in your bashrc: alias python3=python3.8 But this will only affect the shell for your user. In scripts for example if you want to use Python 3.8 you'll have to write it, i.e. #!/usr/bin/env python3.8
12
14
61,143,812
2020-4-10
https://stackoverflow.com/questions/61143812/unable-to-install-metatrader5
I could not install MetaTrader5 with: pip install MetaTrader5 I got the following error: ERROR: Could not find a version that satisfies the requirement MetaTrader5 (from versions: none) ERROR: No matching distribution found for MetaTrader5 Note that I am on a Mac laptop and I have Python 3.7.6. Thanks in advance for any solution.
MetaTrader5 provides a lot of binary wheels but only for w32 and w64. No Linux, no MacOS and no source code. It seems the software is Windows-only. Their site recommends using one of the w32/w64 emulators on MacOS.
15
10
61,141,025
2020-4-10
https://stackoverflow.com/questions/61141025/not-understanding-a-trick-on-get-method-in-python
While learning Python I came across a piece of code which counts the number of occurrences of each letter. dummy='lorem ipsum dolor emet...' letternum={} for each_letter in dummy: letternum[each_letter.lower()]=letternum.get(each_letter,0)+1 print(letternum) Now, my question is: in the 4th line of code, in letternum.get(each_letter,0)+1, why is there ,0)+1 and what is it used for? Please explain.
The get method on a dictionary is documented here: https://docs.python.org/3/library/stdtypes.html#dict.get get(key[, default]) Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError. So this explains the 0 - it's a default value to use when letternum doesn't contain the given letter. So we have letternum.get(each_letter, 0) - this expression finds the value stored in the letternum dictionary for the currently considered letter. If there is no value stored, it evaluates to 0 instead. Then we add one to this number: letternum.get(each_letter, 0) + 1 Finally we store it back into the letternum dictionary, although this time converting the letter to lowercase: letternum[each_letter.lower()] = letternum.get(each_letter, 0) + 1 It seems this might be a mistake. We probably want to update the same item we just looked up, but if each_letter is upper-case that's not true.
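To tie it together, here is a small sketch of the loop with the lookup and the store both using the lowercased key, so upper- and lower-case occurrences end up in the same count (this is just the fix suggested above, not the original author's code):

dummy = 'Lorem ipsum dolor emet...'
letternum = {}

for each_letter in dummy:
    key = each_letter.lower()
    # current count for this letter (0 if not seen yet), plus one
    letternum[key] = letternum.get(key, 0) + 1

print(letternum)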
8
9
61,132,574
2020-4-10
https://stackoverflow.com/questions/61132574/can-i-convert-spectrograms-generated-with-librosa-back-to-audio
I converted some audio files to spectrograms and saved them to files using the following code: import os from matplotlib import pyplot as plt import librosa import librosa.display import IPython.display as ipd audio_fpath = "./audios/" spectrograms_path = "./spectrograms/" audio_clips = os.listdir(audio_fpath) def generate_spectrogram(x, sr, save_name): X = librosa.stft(x) Xdb = librosa.amplitude_to_db(abs(X)) fig = plt.figure(figsize=(20, 20), dpi=1000, frameon=False) ax = fig.add_axes([0, 0, 1, 1], frameon=False) ax.axis('off') librosa.display.specshow(Xdb, sr=sr, cmap='gray', x_axis='time', y_axis='hz') plt.savefig(save_name, quality=100, bbox_inches=0, pad_inches=0) librosa.cache.clear() for i in audio_clips: audio_fpath = "./audios/" spectrograms_path = "./spectrograms/" audio_length = librosa.get_duration(filename=audio_fpath + i) j=60 while j < audio_length: x, sr = librosa.load(audio_fpath + i, offset=j-60, duration=60) save_name = spectrograms_path + i + str(j) + ".jpg" generate_spectrogram(x, sr, save_name) j += 60 if j >= audio_length: j = audio_length x, sr = librosa.load(audio_fpath + i, offset=j-60, duration=60) save_name = spectrograms_path + i + str(j) + ".jpg" generate_spectrogram(x, sr, save_name) I wanted to keep the most detail and quality from the audios, so that I could turn them back to audio without too much loss (they are 80MB each). Is it possible to turn them back to audio files? How can I do it? I tried using librosa.feature.inverse.mel_to_audio, but it didn't work, and I don't think it applies. I now have 1300 spectrogram files and want to train a Generative Adversarial Network with them, so that I can generate new audios, but I don't want to do it if I won't be able to listen to the results later.
Yes, it is possible to recover most of the signal and estimate the phase with e.g. the Griffin-Lim Algorithm (GLA). Its "fast" implementation for Python can be found in librosa. Here's how you can use it: import numpy as np import librosa y, sr = librosa.load(librosa.util.example_audio_file(), duration=10) S = np.abs(librosa.stft(y)) y_inv = librosa.griffinlim(S) And this is what the original and the reconstruction look like: The algorithm by default randomly initialises the phases and then iterates forward and inverse STFT operations to estimate the phases. Looking at your code, to reconstruct the signal, you'd just need to do: import numpy as np X_inv = librosa.griffinlim(np.abs(X)) It's just an example of course. As pointed out by @PaulR, in your case you'd need to load the data from jpeg (which is lossy!) and then apply the inverse of amplitude_to_db first. The algorithm, especially the phase estimation, can be further improved thanks to advances in artificial neural networks. Here is one paper that discusses some enhancements.
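For the JPEG spectrograms saved by the code in the question, a rough sketch of that path could look like the following (heavily hedged: the file name, the assumed [-80, 0] dB range and the use of Pillow and soundfile are my assumptions, and the image would also need to match the STFT orientation and size, so expect a noticeably degraded result):

import numpy as np
import librosa
import soundfile as sf
from PIL import Image

# load one saved grayscale spectrogram image (hypothetical file name)
img = np.array(Image.open("spectrogram.jpg").convert("L")).astype(np.float32)

S_db = img / 255.0 * 80.0 - 80.0   # map pixel values back to an assumed [-80, 0] dB range
S_db = np.flipud(S_db)             # specshow draws low frequencies at the bottom
S = librosa.db_to_amplitude(S_db)  # undo amplitude_to_db

y_inv = librosa.griffinlim(S)      # estimate the phase and invert the STFT
sf.write("reconstructed.wav", y_inv, 22050)  # 22050 Hz is librosa's default sampling rate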
7
13
61,116,006
2020-4-9
https://stackoverflow.com/questions/61116006/how-can-i-get-the-current-userid-in-flask-jwt-extended
I am new to python/flask and I am working on one to many relationships. I tried some solutions but didn't work. My problem here is in the "/add_about" route I want to get the user_id that created this post and be able to see that reflected in the database. Here is my code: from flask import Blueprint, jsonify, request from flask_jwt_extended import jwt_required, current_user, get_current_user, get_jwt_identity from app import db from models.about import About, AboutSchema from models.users import User about = Blueprint('about', __name__) about_schema = AboutSchema() abouts_schema = AboutSchema(many=True) @about.route('/') def hello(): return "Oh LA LA LA LA !!!!!!" # Get all abouts: @about.route('/all', methods=['GET']) def abouts(): abouts_list = About.query.all() result = abouts_schema.dump(abouts_list) return jsonify(result) @about.route('/add_about', methods=['POST']) @jwt_required def add_about(): description = request.form['description'] user_id = get_jwt_identity() #I tried using "current_user" an "get_current_user" new_about = About(description=description, user=user_id) db.session.add(new_about) db.session.commit() return jsonify(message="You added a bio"), 201 Here are the db models: from sqlalchemy import Column, Integer, String, FLOAT from app import db, ma database models: class User(db.Model): __tablename__ = 'users' id = Column(Integer, primary_key=True) first_name = Column(String) last_name = Column(String) email = Column(String, unique=True) password = Column(String) about = db.relationship('About', backref='user', lazy='dynamic') class About(db.Model): __tablename__ = 'abouts' about_id = Column(Integer, primary_key=True) description = Column(String(1000)) user_id = db.Column(db.Integer, db.ForeignKey('users.id')) class UserSchema(ma.Schema): class Meta: fields = ('id', 'first_name', 'last_name', 'email', 'password') class AboutSchema(ma.Schema): class Meta: fields = ('about_id', 'description') Here are the user routes: from flask import Blueprint, jsonify, request from app import db # from models.users import User, UserSchema from models.users import User, UserSchema from flask_jwt_extended import JWTManager, jwt_required, create_access_token users = Blueprint('user', __name__) user_schema = UserSchema() users_schema = UserSchema(many=True) @users.route('/register', methods=['POST']) def register(): email = request.form['email'] test = User.query.filter_by(email=email).first() if test: return jsonify(messgae="that email already exists") else: first_name = request.form['first_name'] last_name = request.form['last_name'] password = request.form['password'] user = User(first_name=first_name, last_name=last_name, email=email, password=password) db.session.add(user) db.session.commit() return jsonify(messgae="User created successfully"), 201 @users.route('/login', methods=['POST']) def login(): if request.is_json: email = request.json['email'] password = request.json['password'] else: email = request.form['email'] password = request.form['password'] test = User.query.filter_by(email=email, password=password).first() if test: access_token = create_access_token(identity=email) return jsonify(message="Login succeeded!", access_token=access_token), 200 else: return jsonify(message="Bad email or password"), 401 here is a screenshot from my db admin:
When you register your jwt token on login, you register the token with the user's email. user route test = User.query.filter_by(email=email, password=password).first() if test: access_token = create_access_token(identity=email) # identity = email return jsonify(message="Login succeeded!", access_token=access_token), 200 else: return jsonify(message="Bad email or password"), 401 So there are 2 possible solutions: You can do a db lookup in the user table on email address and return the userId. @about.route('/add_about', methods=['POST']) @jwt_required def add_about(): description = request.form['description'] user = User.query.filter_by(email=get_jwt_identity()).first() # Filter DB by token (email) new_about = About(description=description, user=user) Or you can register the user.id as the jwt identity access_token = create_access_token(identity=test.id) Then user_id = get_jwt_identity() will return the user.id from the token
9
13
61,133,916
2020-4-10
https://stackoverflow.com/questions/61133916/is-there-in-python-a-single-function-that-shows-the-full-structure-of-a-hdf5-fi
When opening a .hdf5 file, one can explore the levels, keys and names of the file in different ways. I wonder if there is a way or a function that displays all the available paths to explore in the .hdf5 file, ultimately showing the whole tree.
Try using the nexusformat package to list the structure of the hdf5 file. Install it with pip install nexusformat Code import nexusformat.nexus as nx f = nx.nxload('myhdf5file.hdf5') print(f.tree) This should print the entire structure of the file. For more on that see this thread. Examples can be found here
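If you would rather stay with h5py alone, a small sketch that walks the whole tree with visititems (just one possible approach, using an assumed file name) would be:

import h5py

def print_item(name, obj):
    # name is the full path of the group or dataset inside the file
    kind = "Dataset" if isinstance(obj, h5py.Dataset) else "Group"
    print(f"{name} ({kind})")

with h5py.File("myhdf5file.hdf5", "r") as f:
    f.visititems(print_item)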
7
4
61,132,936
2020-4-10
https://stackoverflow.com/questions/61132936/drawing-plotting-a-circle-with-some-radius-around-a-point-matplotlib
I use scatter to plot some points. For example: import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig, ax = plt.subplots() ax.scatter([1,2, 1.5], [2, 1, 1.5]) plt.show() Now I also want a circle with radius 0.5 around point [1.5, 1.5] in the plot. How do I do that? I know that there are edgecolors, so that I could just set them to 'none' and then to some color. But those circles then have no radius of 0.5.
To draw a circle around a point, you can use plt.Circle as shown below (the second argument is the radius in data units, so pass 0.5 there to get the radius from the question): import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig, ax = plt.subplots() ax.scatter([1,2, 1.5], [2, 1, 1.5]) cir = plt.Circle((1.5, 1.5), 0.07, color='r',fill=False) ax.set_aspect('equal', adjustable='datalim') ax.add_patch(cir) plt.show() I hope this solves your problem. Please let me know if anything is unclear.
7
6
61,131,768
2020-4-9
https://stackoverflow.com/questions/61131768/how-to-count-consecutive-repetitions-of-a-substring-in-a-string
I need to find consecutive (non-overlapping) repetitions of a substring in a string. I can count them overall, but not consecutively. For instance: string = "AASDASDDAAAAAAAAERQREQREQRAAAAREWQRWERAAA" substring = "AA" Here, "AA" is repeated one time at the beginning of the string, then 4 times, then 2 times, etc. I should select the biggest count, in this example - 4 times. How can I do that?
Regular expressions shine when searching through strings. Here you can find all groups of one or more AA with (?:AA)+; the (?: simply tells the engine to interpret the parentheses for grouping only. Once you have the groups you can use max() to find the longest one based on length (len()). import re s = "AASDASDDAAAAAAAAERQREQREQRAAAAREWQRWERAAA" groups = re.findall(r'(?:AA)+', s) print(groups) # ['AA', 'AAAAAAAA', 'AAAA', 'AA'] largest = max(groups, key=len) print(len(largest) // 2) # 4
8
16
61,130,890
2020-4-9
https://stackoverflow.com/questions/61130890/best-way-to-overwrite-azure-blob-in-python
If I try to overwrite an existing blob: blob_client = BlobClient.from_connection_string(connection_string, container_name, blob_name) blob_client.upload_blob('Some text') I get a ResourceExistsError. I can check if the blob exists, delete it, and then upload it: try: blob_client.get_blob_properties() blob_client.delete_blob() except ResourceNotFoundError: pass blob_client.upload_blob('Some text') Taking into account both what the python azure blob storage API has available as well as idiomatic python style, is there a better way to overwrite the contents of an existing blob? I was expecting there to be some sort of overwrite parameter that could be optionally set to true in the upload_blob method, but it doesn't appear to exist.
From this issue it seems that you can add overwrite=True to upload_blob and it will work.
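Applied to the snippet from the question, that is just a one-argument change:

blob_client = BlobClient.from_connection_string(connection_string, container_name, blob_name)
blob_client.upload_blob('Some text', overwrite=True)  # replaces the blob if it already exists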
16
37
61,128,143
2020-4-9
https://stackoverflow.com/questions/61128143/plots-not-showing-in-jupyter-notebook
I am trying to create a 2x2 grid of plots for the Anscombe data-set Loading the data-set and separating each class in the data-set import seaborn as sns import matplotlib.pyplot as plt anscombe = sns.load_dataset('anscombe') dataset_1 = anscombe[anscombe['dataset'] == 'I'] dataset_2 = anscombe[anscombe['dataset'] == 'II'] dataset_3 = anscombe[anscombe['dataset'] == 'III'] dataset_4 = anscombe[anscombe['dataset'] == 'IV'] Creating a figure and dividing into 4 parts fig = plt.figure() axes_1 = fig.add_subplot(2,2,1) axes_2 = fig.add_subplot(2,2,2) axes_3 = fig.add_subplot(2,2,3) axes_4 = fig.add_subplot(2,2,4) axes_1.plot(dataset_1['x'], dataset_1['y'], 'o') axes_2.plot(dataset_2['x'], dataset_2['y'], 'o') axes_3.plot(dataset_3['x'], dataset_3['y'], 'o') axes_4.plot(dataset_4['x'], dataset_4['y'], 'o') axes_1.set_title('dataset_1') axes_2.set_title('dataset_2') axes_3.set_title('dataset_3') axes_4.set_title('dataset_4') fig.suptitle('Anscombe Data') fig.tight_layout() The only output which I'm getting at each plot is [<matplotlib.lines.Line2D at 0x24592c94bc8>] What am I doing wrong?
If you are working with a Jupyter Notebook then you can add the following line to the top cell where you do all your imports; it tells matplotlib to render your graphs inline in the notebook: %matplotlib inline
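For example, the imports cell could start like this (calling plt.show() at the end of the plotting cell is an alternative that also forces the figure to render):

%matplotlib inline

import seaborn as sns
import matplotlib.pyplot as plt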
11
19
61,128,227
2020-4-9
https://stackoverflow.com/questions/61128227/what-is-the-proper-way-to-override-threading-excepthook-in-python
I am trying to handle uncaught exceptions that occur when I run a thread. The python documentation at docs.python.org states that "threading.excepthook() can be overridden to control how uncaught exceptions raised by Thread.run() are handled." However, I can't seem to do it properly. It doesn't appear that my excepthook function is ever executed. What is the correct way to do this? import threading import time class MyThread(threading.Thread): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def excepthook(self, *args, **kwargs): print("In excepthook") def error_soon(timeout): time.sleep(timeout) raise Exception("Time is up!") my_thread = MyThread(target=error_soon, args=(3,)) my_thread.start() time.sleep(7)
threading.excepthook is a function that belongs to the threading module, not a method of the threading.Thread class, so you should override threading.excepthook instead with your own function: import threading import time def excepthook(args): print("In excepthook") threading.excepthook = excepthook class MyThread(threading.Thread): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def error_soon(timeout): time.sleep(timeout) raise Exception("Time is up!") my_thread = MyThread(target=error_soon, args=(3,)) my_thread.start() time.sleep(7)
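The hook receives a single args namespace; if useful, here is a slightly fuller sketch that logs the details of the failing thread (attribute names as documented for threading.excepthook):

import threading
import time

def excepthook(args):
    # args carries exc_type, exc_value, exc_traceback and the thread object
    print(f"Uncaught exception in {args.thread.name}: {args.exc_type.__name__}: {args.exc_value}")

threading.excepthook = excepthook

def error_soon(timeout):
    time.sleep(timeout)
    raise Exception("Time is up!")

t = threading.Thread(target=error_soon, args=(1,))
t.start()
t.join()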
13
13
61,123,685
2020-4-9
https://stackoverflow.com/questions/61123685/receiving-failed-to-query-code-13-access-is-denied-when-using-virtualenv-p-o
I have two versions of Python installed on my Windows system: 3.7 is installed in C:\Python37 and 3.8 is installed in C:\Python38. My PATH variables include the Python 3.7 executable. When I try to run 'virtualenv -p C:\Python38 ProjectFolder' I get the following error: RuntimeError: failed to query C:\Python38 with code 13 err: 'Access is denied' This is true if I specify C:\Python37 as well. Isn't this supposed to create a virtualenv using the specified Python binaries? What am I doing wrong? Thanks in advance!
virtualenv -p C:\Python38\python.exe ProjectFolder I.e. point -p to python executable, not to a directory.
25
47