Dataset columns:
question_id: int64 (range 59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (range 1 to 410)
answer_vote: int64 (range -9 to 482)
61,968,521
2020-5-23
https://stackoverflow.com/questions/61968521/python-web-scraping-request-errormod-security
I am new and I am trying to grab the source code of a web page for a tutorial. I have beautifulsoup installed and requests installed. As a first step I want to grab the source. I am doing this scraping exercise on "https://pythonhow.com/example.html"; I am not doing anything illegal, and I think this site was established for exactly this purpose. Here's my code: import requests from bs4 import BeautifulSoup r=requests.get("http://pythonhow.com/example.html") c=r.content c And I got the mod security error: b'<head><title>Not Acceptable!</title></head><body><h1>Not Acceptable!</h1><p>An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.</p></body></html>' Thanks to everyone dealing with me. Respectfully
You can easily fix this issue by providing a user agent to the request. By doing so, the website will think that someone is actually visiting the site using a web browser. Here is the code that you want to use: import requests from bs4 import BeautifulSoup headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:55.0) Gecko/20100101 Firefox/55.0', } r = requests.get("http://pythonhow.com/example.html", headers=headers) c = r.content print(c) Which gives you the expected output b'<!DOCTYPE html>\n<html>\n<head>\n<style>\ndiv.cities {\n background-color:black;\n color:white;\n margin:20px;\n padding:20px;\n} \n</style>\n</head>\n<body>\n<h1 align="center"> Here are three big cities </h1>\n<div class="cities">\n<h2>London</h2>\n<p>London is the capital of England and it\'s been a British settlement since 2000 years ago. </p>\n</div>\n<div class="cities">\n<h2>Paris</h2>\n<p>Paris is the capital city of France. It was declared capital since 508.</p>\n</div>\n<div class="cities">\n<h2>Tokyo</h2>\n<p>Tokyo is the capital of Japan and one of the most populated cities in the world.</p>\n</div>\n</body>\n</html>'
9
32
61,944,815
2020-5-21
https://stackoverflow.com/questions/61944815/how-to-set-seed-for-jitter-in-seaborn-stripplot
I am trying to reproduce stripplots exactly so that I can draw lines and write on them reliably. However, when I produce a stripplot with jitter the jitter is random and prevents me from achieving my goal. I have blindly tried some rcParams I found in other Stack Overflow posts, such as mpl.rcParams['svg.hashsalt'] which hasn't worked. I also tried setting a seed for random.seed() without success. The code I am running looks like the following. import seaborn as sns import matplotlib.pyplot as plt import random plt.figure(figsize=(14,9)) random.seed(123) catagories = [] values = [] for i in range(0,200): n = random.randint(1,3) catagories.append(n) for i in range(0,200): n = random.randint(1,100) values.append(n) sns.stripplot(catagories, values, size=5) plt.title('Random Jitter') plt.xticks([0,1,2],[1,2,3]) plt.show() This code generates a stripplot just like I want. But if you run the code twice you will get different placements for the points, due to the jitter. The graph I am making requires jitter to not look ridiculous, but I want to write on the graph. However there is no way to know the exact positions of the points before running the code, which then changes every time the code is run. Is there any way to set a seed for the jitter in seaborn stripplots to make them perfectly reproduceable?
jitter is determined by scipy.stats.uniform uniform is class uniform_gen(scipy.stats._distn_infrastructure.rv_continuous) Which is a subclass of class rv_continuous(rv_generic) Which has a seed parameter, and uses np.random Therefore, use np.random.seed() It needs to be called before each plot. In the case of the example, np.random.seed(123) must be inside the loop. from the Stripplot docstring jitter : float, ``True``/``1`` is special-cased, optional Amount of jitter (only along the categorical axis) to apply. This can be useful when you have many points and they overlap, so that it is easier to see the distribution. You can specify the amount of jitter (half the width of the uniform random variable support), or just use ``True`` for a good default. From class _StripPlotter in categorical.py jitter is calculated with scipy.stats.uniform from scipy import stats class _StripPlotter(_CategoricalScatterPlotter): """1-d scatterplot with categorical organization.""" def __init__(self, x, y, hue, data, order, hue_order, jitter, dodge, orient, color, palette): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, 1) # Set object attributes self.dodge = dodge self.width = .8 if jitter == 1: # Use a good default for `jitter = True` jlim = 0.1 else: jlim = float(jitter) if self.hue_names is not None and dodge: jlim /= len(self.hue_names) self.jitterer = stats.uniform(-jlim, jlim * 2).rvs from the rv_continuous docstring seed : {None, int, `~np.random.RandomState`, `~np.random.Generator`}, optional This parameter defines the object to use for drawing random variates. If `seed` is `None` the `~np.random.RandomState` singleton is used. If `seed` is an int, a new ``RandomState`` instance is used, seeded with seed. If `seed` is already a ``RandomState`` or ``Generator`` instance, then that object is used. Default is None. Using your code with np.random.seed All the plot points are the same import seaborn as sns import matplotlib.pyplot as plt import numpy as np fig, axes = plt.subplots(2, 3, figsize=(12, 12)) for x in range(6): np.random.seed(123) catagories = [] values = [] for i in range(0,200): n = np.random.randint(1,3) catagories.append(n) for i in range(0,200): n = np.random.randint(1,100) values.append(n) row = x // 3 col = x % 3 axcurr = axes[row, col] sns.stripplot(catagories, values, size=5, ax=axcurr) axcurr.set_title(f'np.random jitter {x+1}') plt.show() using just random The plot points move around import seaborn as sns import matplotlib.pyplot as plt import random fig, axes = plt.subplots(2, 3, figsize=(12, 12)) for x in range(6): random.seed(123) catagories = [] values = [] for i in range(0,200): n = random.randint(1,3) catagories.append(n) for i in range(0,200): n = random.randint(1,100) values.append(n) row = x // 3 col = x % 3 axcurr = axes[row, col] sns.stripplot(catagories, values, size=5, ax=axcurr) axcurr.set_title(f'random jitter {x+1}') plt.show() Using random for the data and np.random.seed for the plot fig, axes = plt.subplots(2, 3, figsize=(12, 12)) for x in range(6): random.seed(123) catagories = [] values = [] for i in range(0,200): n = random.randint(1,3) catagories.append(n) for i in range(0,200): n = random.randint(1,100) values.append(n) row = x // 3 col = x % 3 axcurr = axes[row, col] np.random.seed(123) sns.stripplot(catagories, values, size=5, ax=axcurr) axcurr.set_title(f'np.random jitter {x+1}') plt.show()
7
9
61,948,867
2020-5-22
https://stackoverflow.com/questions/61948867/add-file-extension-in-timedrotatingfilehandler
I am trying to implement Python logging using TimedRotatingFileHandler, but I am having trouble adding the file extension to the log filename. Here is my code: Path(".\\Log").mkdir(parents=True, exist_ok=True) LOGGING_MSG_FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s' LOGGING_DATE_FORMAT = '%m-%d %H:%M:%S' formatter = logging.Formatter(LOGGING_MSG_FORMAT, LOGGING_DATE_FORMAT) handler = TimedRotatingFileHandler(".\\Log\\info.log",'midnight',1) handler.setFormatter(formatter) handler.setLevel(logging.INFO) root_logger = logging.getLogger() root_logger.addHandler(handler) Using this code, the very first time I get the filename "info.log" as expected, but when it rolls over at midnight the filename I get is "info.log.2020-05-22", whereas what I expect is "info.2020-05-22.log". How can I insert the handler's date suffix before the file extension (.log)?
You should use a custom namer: handler.namer = lambda name: name + ".log" Unfortunately, the namer function gets the processed name. The name param would be like "info.log.2020-05-22", so you'll end up with "info.log.2020-05-22.log". If double .log is not acceptable just remove the initial one: handler.namer = lambda name: name.replace(".log", "") + ".log"
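For completeness, a minimal runnable sketch of the second namer wired into the question's handler (the Log directory, format strings and midnight rollover are taken from the question; the root logger level line is added so the INFO record is actually emitted):

import logging
from logging.handlers import TimedRotatingFileHandler
from pathlib import Path

Path("Log").mkdir(parents=True, exist_ok=True)

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                              '%m-%d %H:%M:%S')
handler = TimedRotatingFileHandler("Log/info.log", when='midnight', interval=1)
# Move ".log" after the date suffix: "info.log.2020-05-22" -> "info.2020-05-22.log"
handler.namer = lambda name: name.replace(".log", "") + ".log"
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)

root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(handler)
root_logger.info("rotation with a custom namer")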
7
8
61,923,188
2020-5-20
https://stackoverflow.com/questions/61923188/how-to-stop-autopep8-not-installed-messages-in-code
I'm a new Python programmer using the Mac version of VS Code 1.45.1 to create a Django project. I have the Python and Django extensions installed. Every time I save a Django file, Code pops up this window: Formatter autopep8 is not installed. Install? Source: Python (Extension) [Yes] [Use black] [Use yapf] I keep clicking the "Yes" button to install the autopep8 extension but this message keeps popping up nevertheless. Is there some trick to configuring VS Code so that this extension will be installed permanently and I stop getting this error?
You will receive this prompt if You have "formatOnSave" turned on as a setting You selected autopep8 as your formatter The Python extension can't find autopep8 So the options are: Turn off formatting on save Make sure you successfully installed autopep8 into your environment or you specified the path to autopep8 in your settings My guess is there's an installation failure because you are using a globally installed interpreter and you're not allowed to install where pip wants to put autopep8.
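As a quick sanity check (a hedged sketch, not part of the original answer), you can run the following with the interpreter VS Code has selected to see whether, and where, autopep8 is actually installed:

import sys

print("Interpreter:", sys.executable)
try:
    import autopep8
    print("autopep8 found at:", autopep8.__file__)
except ImportError:
    print("autopep8 is not installed in this environment")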
20
10
61,942,138
2020-5-21
https://stackoverflow.com/questions/61942138/apply-function-row-wise-to-pandas-dataframe
I have to calculate the distance on a hilbert-curve from 2D-Coordinates. With the hilbertcurve-package i built my own "hilbert"-function, to do so. The coordinates are stored in a dataframe (col_1 and col_2). As you see, my function works when applied to two values (test). However it just does not work when applied row wise via apply-function! Why is this? what am I doing wrong here? I need an additional column "hilbert" with the hilbert-distances from the x- and y-coordinate given in columns "col_1" and "col_2". import pandas as pd from hilbertcurve.hilbertcurve import HilbertCurve df = pd.DataFrame({'ID': ['1', '2', '3'], 'col_1': [0, 2, 3], 'col_2': [1, 4, 5]}) def hilbert(x, y): n = 2 p = 7 hilcur = HilbertCurve(p, n) dist = hilcur.distance_from_coordinates([x, y]) return dist test = hilbert(df.col_1[2], df.col_2[2]) df["hilbert"] = df.apply(hilbert(df.col_1, df.col_2), axis=0) The last command ends in error: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Thank you for your help!
Since you have hilbert(df.col_1, df.col_2) in the apply, that's immediately trying to call your function with the full pd.Serieses for those two columns, triggering that error. What you should be doing is: df.apply(lambda x: hilbert(x['col_1'], x['col_2']), axis=1) so that the lambda function given will be applied to each row.
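Putting that together with the question's DataFrame, a small self-contained sketch (it reuses the hilbert function and the HilbertCurve calls exactly as written in the question):

import pandas as pd
from hilbertcurve.hilbertcurve import HilbertCurve

df = pd.DataFrame({'ID': ['1', '2', '3'],
                   'col_1': [0, 2, 3],
                   'col_2': [1, 4, 5]})

def hilbert(x, y):
    n = 2   # dimensions, as in the question
    p = 7   # iterations, as in the question
    hilcur = HilbertCurve(p, n)
    return hilcur.distance_from_coordinates([x, y])

# axis=1 hands the function one row at a time, so it receives scalars, not whole columns
df["hilbert"] = df.apply(lambda row: hilbert(row['col_1'], row['col_2']), axis=1)
print(df)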
9
19
61,921,940
2020-5-20
https://stackoverflow.com/questions/61921940/running-poetry-fails-with-usr-bin-env-python-no-such-file-or-directory
I just installed poetry with the following install script curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3 However, when I execute poetry it fails with the following error $ poetry /usr/bin/env: ‘python’: No such file or directory I recently upgraded to ubuntu 20.04, is this an issue with the upgrade or with poetry?
poetry depends on whatever python resolves to and doesn't attempt to use a specific version of Python unless otherwise specified. This issue will exist on Ubuntu systems from 20.04 onwards, as Python 2.7 is deprecated and the python command no longer maps to python3.x. You'll find that aliasing python to python3 won't work (unless, perhaps, you specify it in your bashrc instead of another shell run-command file), as poetry spins up its own shell to execute commands. Install the following package instead: sudo apt install python-is-python3 It should be noted that you can still install Python 2.7 if you want to, and poetry should run fine.
19
42
61,930,060
2020-5-21
https://stackoverflow.com/questions/61930060/how-to-use-shapely-for-subtracting-two-polygons
I am not really sure how to explain this, but I have 2 polygons, Polygon1 and Polygon2. These polygons overlap with each other. How do I get Polygon2 using Shapely without the P from Polygon1?
You are looking for a difference. In Shapely you can calculate it either by using a difference method or by simply subtracting* one polygon from another: from shapely.geometry import Polygon polygon1 = Polygon([(0.5, -0.866025), (1, 0), (0.5, 0.866025), (-0.5, 0.866025), (-1, 0), (-0.5, -0.866025)]) polygon2 = Polygon([(1, -0.866025), (1.866025, 0), (1, 0.866025), (0.133975, 0)]) difference = polygon2.difference(polygon1) # or difference = polygon2 - polygon1 See docs for more set-theoretic methods. *This feature is not documented. See issue on GitHub: Document set-like properties.
12
26
61,933,021
2020-5-21
https://stackoverflow.com/questions/61933021/how-to-overwrite-data-on-an-existing-excel-sheet-while-preserving-all-other-shee
I have a pandas dataframe df which I want to overwrite to a sheet Data of an excel file while preserving all the other sheets since other sheets have formulas linked to sheet Data I used the following code but it does not overwrite an existing sheet, it just creates a new sheet with the name Data 1 with pd.ExcelWriter(filename, engine="openpyxl", mode="a") as writer: df.to_excel(writer, sheet_name="Data") Is there a way to overwrite on an existing sheet?
You can do it using openpyxl: import pandas as pd from openpyxl import load_workbook book = load_workbook(filename) writer = pd.ExcelWriter(filename, engine='openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) df.to_excel(writer, "Data") writer.save() You need to initialize writer.sheets in order to let ExcelWriter know about the sheets. Otherwise, it will create a new sheet with the name that you pass.
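As a side note not covered in the original answer: newer pandas releases (1.3 and later, if I recall the version correctly) accept an if_sheet_exists argument in append mode, which replaces just that one sheet without touching writer.book; treat this as a hedged alternative and check your pandas version first:

import pandas as pd

# filename and df as defined in the question
# pandas >= 1.3: overwrite the "Data" sheet in place, all other sheets are preserved
with pd.ExcelWriter(filename, engine="openpyxl", mode="a", if_sheet_exists="replace") as writer:
    df.to_excel(writer, sheet_name="Data")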
10
14
61,923,379
2020-5-20
https://stackoverflow.com/questions/61923379/simple-keras-network-in-gradienttape-lookuperror-no-gradient-defined-for-opera
I've built a very simple TensorFlow Keras model with a single dense layer. It works perfectly fine outside a GradientTape block, but inside a GradientTape block it raises LookupError: No gradient defined for operation 'IteratorGetNext' (op type: IteratorGetNext) Code to reproduce: from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense import tensorflow as tf import numpy as np print(tf.__version__) model = Sequential() model.add(Dense(1, input_shape=(16,))) fake_data = np.random.random((1, 16)) print(model.predict(fake_data).shape) # works with tf.GradientTape() as tape: print(model.predict(fake_data).shape) # LookupError: No gradient defined for operation 'IteratorGetNext' (op type: IteratorGetNext) This appears to work in TensorFlow 2.0.0, however it fails in TensorFlow 2.1.0 and 2.2.0 Here is a notebook that replicates the issue.
try to redefine the predict operation in GradientTape in this way with tf.GradientTape() as tape: print(model(fake_data).shape)
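To expand slightly on why this works (an illustrative sketch, not from the original answer): model.predict runs an inference loop that returns NumPy arrays, which the tape cannot trace, whereas calling the model directly returns tensors that gradients can flow through:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(1, input_shape=(16,))])
fake_data = np.random.random((1, 16)).astype(np.float32)

with tf.GradientTape() as tape:
    out = model(fake_data)            # a traced tf.Tensor, unlike model.predict(...)
    loss = tf.reduce_mean(out ** 2)   # an arbitrary scalar loss for illustration

grads = tape.gradient(loss, model.trainable_variables)
print([g.shape for g in grads])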
12
15
61,918,827
2020-5-20
https://stackoverflow.com/questions/61918827/ansible-no-longer-works
I have been learning Ansible on Windows 10 through WSL (using Pengwin, a Debian-based Linux) and it's been working fine up until last night. This morning, it's as though it doesn't exist any more: ❯ ansible Traceback (most recent call last): File "/usr/bin/ansible", line 34, in <module> from ansible import context ModuleNotFoundError: No module named 'ansible' Literally nothing has changed since last night. Even my computer has remained on. The only difference is that I had logged out of my terminal program. I tried running pengwin-setup to re-install Ansible, but the issue persists. Finally, I tried installing it via the instructions on Ansible's own site. However, things got even worse: ❯ sudo apt install software-properties-common [sudo] password for sturm: Reading package lists... Done Building dependency tree Reading state information... Done software-properties-common is already the newest version (0.96.20.2-2.1). 0 upgraded, 0 newly installed, 0 to remove and 85 not upgraded. ❯ sudo apt-add-repository --yes --update ppa:ansible/ansible gpg: keybox '/tmp/tmpg2r1t8x7/pubring.gpg' created gpg: /tmp/tmpg2r1t8x7/trustdb.gpg: trustdb created gpg: key 93C4A3FD7BB9C367: public key "Launchpad PPA for Ansible, Inc." imported gpg: Total number processed: 1 gpg: imported: 1 gpg: no valid OpenPGP data found. Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/usr/lib/python3.8/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 688, in addkey_func func(**kwargs) File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 386, in add_key return apsk.add_ppa_signing_key() File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 273, in add_ppa_signing_key cleanup(tmp_keyring_dir) File "/usr/lib/python3/dist-packages/softwareproperties/ppa.py", line 234, in cleanup shutil.rmtree(tmp_keyring_dir) File "/usr/lib/python3.8/shutil.py", line 715, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/usr/lib/python3.8/shutil.py", line 672, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/usr/lib/python3.8/shutil.py", line 670, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) FileNotFoundError: [Errno 2] No such file or directory: 'S.gpg-agent.extra' Traceback (most recent call last): File "/usr/lib/python3/dist-packages/apt/cache.py", line 570, in update res = self._cache.update(fetch_progress, slist, apt_pkg.Error: E:The repository 'http://ppa.launchpad.net/ansible/ansible/ubuntu groovy Release' does not have a Release file. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/bin/apt-add-repository", line 168, in <module> if not sp.add_source_from_shortcut(shortcut, options.enable_source): File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 759, in add_source_from_shortcut cache.update(sources_list=new_debsrc_entry.file) File "/usr/lib/python3/dist-packages/apt/cache.py", line 573, in update raise FetchFailedException(e) apt.cache.FetchFailedException: E:The repository 'http://ppa.launchpad.net/ansible/ansible/ubuntu groovy Release' does not have a Release file. Now I'm out of options. How can I get Ansible running again?
Your issue is coming from the fact that you are using the instructions to install Ansible on an Ubuntu distribution, when, as you stated it, Pengwin is a Debian based one. So you should use the chapter on how to install Ansible on Debian and not how to install Ansible on Ubuntu. Better, still, because Pengwin is a very particular distribution, since it is a WSL one, you might want to try the installation via pip: Ansible can be installed with pip, the Python package manager. If pip isn’t already available on your system of Python, run the following commands to install it: $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py $ python get-pip.py --user Then install Ansible: $ pip install --user ansible https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-with-pip
9
5
61,917,521
2020-5-20
https://stackoverflow.com/questions/61917521/computing-ab%e2%81%bb%c2%b9-with-np-linalg-solve
I need to compute AB⁻¹ in Python / Numpy for two matrices A and B (B being square, of course). I know that np.linalg.inv() would allow me to compute B⁻¹, which I can then multiply with A. I also know that B⁻¹A is actually better computed with np.linalg.solve(). Inspired by that, I decided to rewrite AB⁻¹ in terms of np.linalg.solve(). I got to a formula, based on the identity (AB)ᵀ = BᵀAᵀ, which uses np.linalg.solve() and .transpose(): np.linalg.solve(a.transpose(), b.transpose()).transpose() that seems to be doing the job: import numpy as np n, m = 4, 2 np.random.seed(0) a = np.random.random((n, n)) b = np.random.random((m, n)) print(np.matmul(b, np.linalg.inv(a))) # [[ 2.87169378 -0.04207382 -1.10553758 -0.83200471] # [-1.08733434 1.00110176 0.79683577 0.67487591]] print(np.linalg.solve(a.transpose(), b.transpose()).transpose()) # [[ 2.87169378 -0.04207382 -1.10553758 -0.83200471] # [-1.08733434 1.00110176 0.79683577 0.67487591]] print(np.all(np.isclose(np.matmul(b, np.linalg.inv(a)), np.linalg.solve(a.transpose(), b.transpose()).transpose()))) # True and also comes up much faster for sufficiently large inputs: n, m = 400, 200 np.random.seed(0) a = np.random.random((n, n)) b = np.random.random((m, n)) print(np.all(np.isclose(np.matmul(b, np.linalg.inv(a)), np.linalg.solve(a.transpose(), b.transpose()).transpose()))) # True %timeit np.matmul(b, np.linalg.inv(a)) # 100 loops, best of 3: 13.3 ms per loop %timeit np.linalg.solve(a.transpose(), b.transpose()).transpose() # 100 loops, best of 3: 7.71 ms per loop My question is: does this identity always stand correct or there are some corner cases I am overlooking?
In general, np.linalg.solve(B, A) is equivalent to B-1A. The rest is just math. In all cases, (AB)T = BTAT: https://math.stackexchange.com/q/1440305/295281. Not necessary for this case, but for invertible matrices, (AB)-1 = B-1A-1: https://math.stackexchange.com/q/688339/295281. For an invertible matrix, it is also the case that (A-1)T = (AT)-1: https://math.stackexchange.com/q/340233/295281. From that it follows that (AB-1)T = (B-1)TAT = (BT)-1AT. As long as B is invertible, you should have no issues with the transformation you propose in any case.
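A quick numerical sanity check of that identity, reusing the setup from the question:

import numpy as np

np.random.seed(0)
n, m = 4, 2
a = np.random.random((n, n))
b = np.random.random((m, n))

via_inv = b @ np.linalg.inv(a)
# solve(Aᵀ, Bᵀ) gives X with AᵀX = Bᵀ, i.e. XᵀA = B, so Xᵀ = B A⁻¹
via_solve = np.linalg.solve(a.T, b.T).T
print(np.allclose(via_inv, via_solve))   # True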
7
8
61,916,096
2020-5-20
https://stackoverflow.com/questions/61916096/word-cloud-built-out-of-tf-idf-vectorizer-function
I have a list called corpus that I am attempting TF-IDF on, using the sklearn in-built function. The list has 5 items. Each of these items comes from text files. I have generated a toy list called corpus for this example. corpus = ['Hi what are you accepting here do you accept me', 'What are you thinking about getting today', 'Give me your password to get accepted into this school', 'The man went to the tree to get his sword back', 'go away to a far away place in a foreign land'] vectorizer = TfidfVectorizer(stop_words='english') vecs = vectorizer.fit_transform(corpus) feature_names = vectorizer.get_feature_names() dense = vecs.todense() lst1 = dense.tolist() df = pd.DataFrame(lst1, columns=feature_names) df Using the above code, I was able to get a dataframe with 5 rows (for each item in the list) and n-columns with the tf-idf for each term in this corpus. As a next step, I want to build the word cloud with largest tf-idf terms across the 5 items in the corpus getting the highest weight. I tried the following: x = vectorizer.vocabulary_ Cloud = WordCloud(background_color="white", max_words=50).generate_from_frequencies(x) This clearly does not work. The dictionary is a list of words with an index attached to it, not a word scoring. Hence, I need a dictionary that assigns the TF-IDF score to each word across the corpus. Then, the word cloud generated has the highest scored words as the largest size.
You're almost there. You need to transpose to get the frequencies per term rather than term frequencies per document, then sum them, then pass that series directly to your wordcloud: df.T.sum(axis=1) accept 0.577350 accepted 0.577350 accepting 0.577350 away 0.707107 far 0.353553 foreign 0.353553 getting 0.577350 hi 0.577350 land 0.353553 man 0.500000 password 0.577350 place 0.353553 school 0.577350 sword 0.500000 thinking 0.577350 today 0.577350 tree 0.500000 went 0.500000 Cloud = WordCloud(background_color="white", max_words=50).generate_from_frequencies(df.T.sum(axis=1))
7
12
61,917,043
2020-5-20
https://stackoverflow.com/questions/61917043/python-enum-meta-making-typing-module-crash
I've been breaking my head on this and I can't seem to find a solution to the problem. I use an enum to manage my access in a flask server. Short story I need the enum to return a default value if a non-existent enum value is queried. First I created a meta class for the enum: class AuthAccessMeta(enum.EnumMeta): def __getattr__(self, item): try: return super().__getattr__(item) except Exception as _: if self == AuthAccess and item not in ['_subs_tree']: Loggers.SYS.warn('Access {} doesn\'t exist, substituting with MISSING.'.format(item)) return AuthAccess.MISSING @unique class AuthAccess(str, AutoName, metaclass=AuthAccessMeta): ... You can see I exclude the _subs_tree attribute since neither EnumMeta or Enum has it. Only place I found this method is in the typing module. Then I type an argument with AuthAcess elsewhere and it gives me this weird error: C:\Users\[USER]\AppData\Local\Programs\Python\Python36\python.exe -m src.main [SYS][INFO][11:18:54]: Instance 76cb0042196d4a75b3794ce0b9c1590c is running on project 'local/project1' Traceback (most recent call last): File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\[USER]\Documents\code\sollumcloudplatform\src\main.py", line 19, in <module> from src.procedures import create_app File "C:\Users\[USER]\Documents\code\sollumcloudplatform\src\procedures.py", line 191, in <module> def satisfy_role(role: {}, access_need: Tuple[List[AuthAccess]]) -> bool: File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\typing.py", line 626, in inner return func(*args, **kwds) File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\typing.py", line 1062, in __getitem__ orig_bases=self.__orig_bases__) File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\typing.py", line 965, in __new__ self.__tree_hash__ = hash(self._subs_tree()) if origin else hash((self.__name__,)) File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\typing.py", line 1007, in _subs_tree tree_args = _subs_tree(self, tvars, args) File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\typing.py", line 548, in _subs_tree tree_args.append(_replace_arg(arg, tvars, args)) File "C:\Users\[USER]\AppData\Local\Programs\Python\Python36\lib\typing.py", line 517, in _replace_arg return arg._subs_tree(tvars, args) TypeError: 'NoneType' object is not callable I've tried returning the method from the typing module but Python tells me it doesn't exists either. Am I using the meta class wrong ? Should I just remove the typing on the argument ?
Returning a default value can be done with the right version of enum. The problem you are having now, I suspect, is because in your except branch you do not return a value, nor raise an exception, if the if fails -- so None is returned instead. class AuthAccessMeta(enum.EnumMeta): def __getattr__(self, item): try: return super().__getattr__(item) except Exception as _: if self == AuthAccess and item not in ['_subs_tree']: Loggers.SYS.warn('Access {} doesn\'t exist, substituting with MISSING.'.format(item)) return AuthAccess.MISSING # need something here, like simply reraising the exception raise
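For illustration, here is a stripped-down, self-contained version of the same idea (the member names and values are made up for the example; the real AuthAccess in the question has more members plus the AutoName and logging machinery):

import enum

class DefaultEnumMeta(enum.EnumMeta):
    def __getattr__(cls, item):
        try:
            return super().__getattr__(item)
        except AttributeError:
            # Only substitute the default for "public" names; re-raise for dunder or
            # private lookups (such as typing's _subs_tree) so introspection still works.
            if not item.startswith('_'):
                return cls.MISSING
            raise

class AuthAccess(str, enum.Enum, metaclass=DefaultEnumMeta):
    MISSING = 'missing'
    READ = 'read'
    WRITE = 'write'

print(AuthAccess.READ.value)      # 'read'
print(AuthAccess.DELETE.value)    # 'missing', substituted with the default member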
8
8
61,913,010
2020-5-20
https://stackoverflow.com/questions/61913010/can-not-import-pipeline-from-transformers
I have installed pytorch with conda and transformers with pip. I can import transformers without a problem but when I try to import pipeline from transformers I get an exception: from transformers import pipeline --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-4-69a9fd07ccac> in <module> ----> 1 from transformers import pipeline ImportError: cannot import name 'pipeline' from 'transformers' (C:\Users\Alienware\Anaconda3\envs\tf2\lib\site-packages\transformers\__init__.py) This is a view of the directory where it searches for the init.py file: What is causing the problem and how can I resolve it?
Check your transformers version and make sure you are on the latest release. Pipelines were introduced quite recently, so you may have an older version.
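For example (a small hedged check; pipelines appeared around transformers 2.3, if I remember the release correctly):

import transformers

print(transformers.__version__)
# If this prints something older than ~2.3, upgrade in the same environment, e.g.:
#   pip install --upgrade transformers
from transformers import pipeline   # should work once the version is recent enough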
10
8
61,909,732
2020-5-20
https://stackoverflow.com/questions/61909732/how-to-catch-concurrent-futures-base-timeouterror-correctly-when-using-asyncio
I warn you right away: I'm new to asyncio, and I can hardly imagine what is in the library under the hood. Here is my code: import asyncio semaphore = asyncio.Semaphore(50) async def work(value): async with semaphore: print(value) await asyncio.sleep(10) async def main(): tasks = [] for i in range(0, 10000): tasks.append(asyncio.wait_for(work(i), timeout=3)) await asyncio.gather(*tasks) loop = asyncio.get_event_loop() future = asyncio.ensure_future(main()) loop.run_until_complete(future) What I need: coroutine work() should run for no more than 3 seconds, with no more than 50 of them running at the same time. After 3 seconds (timeout), the coroutine work() must stop executing and a new batch of 50 tasks must start working. But in my case, after 3 seconds it crashes: Traceback (most recent call last): File "C:/Users/root/PycharmProjects/LogParser/ssh/async/asyn_test.py", line 19, in <module> loop.run_until_complete(future) File "C:\Code\Python3\lib\asyncio\base_events.py", line 579, in run_until_complete return future.result() File "C:/Users/root/PycharmProjects/LogParser/ssh/async/asyn_test.py", line 15, in main await asyncio.gather(*tasks) File "C:\Code\Python3\lib\asyncio\tasks.py", line 449, in wait_for raise futures.TimeoutError() concurrent.futures._base.TimeoutError No matter how I tried to catch this exception, and no matter how many tasks remained, the program breaks down. I need the program to continue with the next tasks when the timeout is reached. Please teach me how to implement this properly. Python 3.7, asyncio 3.4.3
You need to handle the exception. If you just pass it to gather, it will re-raise it. For example, you can create a new coroutine with the appropriate try/except: semaphore = asyncio.Semaphore(50) async def work(value): print(value) await asyncio.sleep(10) async def work_with_timeout(value): async with semaphore: try: return await asyncio.wait_for(work(value), timeout=3) except asyncio.TimeoutError: return None async def main(): tasks = [] for i in range(0, 10000): tasks.append(work_with_timeout(i)) await asyncio.gather(*tasks)
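A compact, self-contained variant of the same pattern that you can run directly (smaller task count than the question; asyncio.run requires Python 3.7+, which matches the question):

import asyncio

async def work(value):
    await asyncio.sleep(10)                    # deliberately longer than the timeout

async def work_with_timeout(value, semaphore, timeout=3):
    async with semaphore:
        try:
            return await asyncio.wait_for(work(value), timeout=timeout)
        except asyncio.TimeoutError:
            return None                        # swallow the timeout so gather() keeps going

async def main():
    semaphore = asyncio.Semaphore(5)           # created inside the running event loop
    tasks = [work_with_timeout(i, semaphore) for i in range(20)]
    results = await asyncio.gather(*tasks)
    print(results)                             # a list of None values, no TimeoutError raised

asyncio.run(main())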
8
6
61,908,021
2020-5-20
https://stackoverflow.com/questions/61908021/how-to-get-n-easily-distinguishable-colors-with-matplotlib
I need to make different amounts of line plots with Matplotlib, but I have not been able to find a colormap that makes it easy to distinguish between the line plots. I have used the brg colormap like this: colors=brg(np.linspace(0,1,num_plots)) with for i in range(num_plots): ax.step(x,y,c=colors[i]) With four plots, this could look like this: Notice how hard it is to distinguish the colors of the top and bottom plots, which is especially bad if a legend is used. I've tried a lot of different colormaps like rainbow and jet, but with this setup, brg seems to give the best result for num_plots between 1 and 12. I did find this How to get 10 different colors that are easily recognizable and this Wiki page Help:Distinguishable colors, but I don't know if this can be used in any way.. Is there an easy fix to this, or will I have to make do with this?
I would use the tab10 or tab20 colormaps. See Colormap reference However, I believe you will always have trouble distinguishing hues when the number of lines becomes large (I would say >5 and certainly >10). In this case, you should combine hues with other distinguishing features like different markers or linestyles. colors = matplotlib.cm.tab20(range(20)) markers = matplotlib.lines.Line2D.markers.keys() x = np.linspace(0,1,100) fig, axs = plt.subplots(2,4, figsize=(4*4,4*2)) for nlines,ax0 in zip(np.arange(5,21,5), axs.T): ax0[0].set_title('{:d} lines'.format(nlines)) for n,c,m in zip(range(nlines),colors,markers): y = x*np.random.random()+np.random.random() ax0[0].plot(x,y) ax0[1].plot(x,y, marker=m, markevery=10) axs[0,0].set_ylabel('only hues', fontsize=16, fontweight='bold') axs[1,0].set_ylabel('hues+markers', fontsize=16, fontweight='bold') fig.tight_layout()
8
7
61,908,745
2020-5-20
https://stackoverflow.com/questions/61908745/error-astype-got-an-unexpected-keyword-argument-categories
df = pd.DataFrame(['A+', 'A', 'A-', 'B+', 'B', 'B-', 'C+', 'C', 'C-', 'D+', 'D'], index=['excellent', 'excellent', 'excellent', 'good', 'good', 'good', 'ok', 'ok', 'ok', 'poor', 'poor']) df.rename(columns={0: 'Grades'}, inplace=True) df I am trying to create an ordered category from the above dataframe using the following code - df = df['Grades'].astype('category',categories=['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+'],ordered=True) However it gives the error : astype() got an unexpected keyword argument 'categories'.
From pandas 0.25+ are removed these arguments: Removed the previously deprecated ordered and categories keyword arguments in astype (GH17742) In newer pandas versions is necesary use CategoricalDtype and pass to astype: from pandas.api.types import CategoricalDtype cats = ['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+'] cat_type = CategoricalDtype(categories=cats, ordered=True) df['Grades'] = df['Grades'].astype(cat_type) print (df) Grades excellent A+ excellent A excellent A- good B+ good B good B- ok C+ ok C ok C- poor D+ poor D Or use Categorical: df['Grades'] = pd.Categorical(df['Grades'], categories=cats, ordered=True) print (df) Grades excellent A+ excellent A excellent A- good B+ good B good B- ok C+ ok C ok C- poor D+ poor D
9
16
61,815,883
2020-5-15
https://stackoverflow.com/questions/61815883/how-to-export-pptx-to-image-png-jpeg-in-python
I have developed a small code in Python to generate a PPTX file. But I would like also to generate a picture in PNG or JPEG of the slide. from pptx import Presentation from pptx.util import Inches img_path = 'monty-truth.png' prs = Presentation() blank_slide_layout = prs.slide_layouts[6] slide = prs.slides.add_slide(blank_slide_layout) left = top = Inches(1) pic = slide.shapes.add_picture(img_path, left, top) left = Inches(5) height = Inches(5.5) pic = slide.shapes.add_picture(img_path, left, top, height=height) prs.save('test.pptx') Is there a way to transform a PPTX file (including just one slide) into a PNG or JPEG picture ?
On Windows once you have installed pywin32 pip install pywin32 You can then use the following code : import win32com.client Application = win32com.client.Dispatch("PowerPoint.Application") Presentation = Application.Presentations.Open(r"your_path") Presentation.Slides[0].Export(r"the_new_path-file.jpg", "JPG") Application.Quit() Presentation = None Application = None With this line, you can change the number of the slide you want to export (for example below, for slide number 6) : Presentation.Slides[5].Export(r"the_new_path-file.jpg", "JPG")
7
6
61,865,481
2020-5-18
https://stackoverflow.com/questions/61865481/is-there-an-idiomatic-way-to-install-systemd-units-with-setuptools
I'm distributing a module which can be imported and used as a library. It also comes with an executable—installed via console_scripts—for people to use. That executable can also be started as a systemd service (Type=simple) to provide a daemon to the system. systemd services need to refer to absolute paths in their ExecStart lines, so I need to figure out where the console_script got installed to. With other build systemd like CMake, autotools or Meson, I can directly access the prefix and then use something like configure_file (meson) to substitute @PREFIX@ in foo.service.in, and then install the generated unit to $prefix/lib/systemd/system/. I just can't figure out how to do this with setuptools. Inside setup.py, sys.prefix is set to python's prefix, which isn't the same thing. I have tried overriding install_data and install, but these approaches all have various problems like breaking when building a wheel via pip, or leaving the file around when uninstalling. (I'd be OK just not installing the service when we're not being installed system-wide; systemd wouldn't be able to find it if the prefix is not /usr or /usr/local anyway.) Someone pointed me at a website which says that you're not supposed to do this with setuptools: Note, by the way, that this encapsulation of data files means that you can't actually install data files to some arbitrary location on a user's machine; this is a feature, not a bug. Which sounds like I'm just going against the grain here. I'm really looking for pointers or examples to tell me what I'm supposed to be doing here. If that's "don't use setuptools", fine - but then advice on what the preferred thing is would be welcome.
I would say don't use setuptools for this, it's not what it's made for. Instead use the package manager for the target distribution (apt, yum, dnf, pacman, etc.). I believe systemd and such things that are specific to the operating system (Linux distribution, Linux init system) are out of scope for Python packaging, pip, setuptools, etc. Keep as much as reasonably possible under setuptools's responsibility, so that the project stays pip-installable, and so that the project itself provides easy way to get access to systemd specific files but don't install them (maybe with a command my-thing-generate-systemd-files --target path/to/output for example, that would simplify the task for the users of the project). And on the other side provide Linux-distribution-specific packages (deb, rpm, etc.), if I am not mistaken there are tools that allow to build such packages starting from the Python project (a sdist or the source code repository). Some ideas (quick search without testing, not recommendations): https://pypi.org/project/pyp2rpm/ (from Fedora) https://src.fedoraproject.org/rpms/pyproject-rpm-macros (from Fedora) https://pypi.org/project/stdeb/ (unmaintained?) https://pypi.org/project/py2deb/ (unmaintained?) https://snapcraft.io/docs/python-apps (from Ubuntu, new standard?) https://fpm.readthedocs.io/ https://briefcase.readthedocs.io/en/stable/reference/platforms/linux/appimage.html etc.
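To illustrate the "generate the unit file instead of installing it" idea, here is a hypothetical sketch: the project name, entry point name and unit template below are invented for the example. A console_script can resolve the installed executable at runtime and render a unit file that the user, or a distro package, then copies into place:

# Hypothetical "my-thing-generate-systemd-files" entry point (sketch only).
import argparse
import shutil
import sys
from pathlib import Path

UNIT_TEMPLATE = """\
[Unit]
Description=My Thing daemon

[Service]
Type=simple
ExecStart={exec_path}

[Install]
WantedBy=multi-user.target
"""

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--target", type=Path, required=True,
                        help="directory to write my-thing.service into")
    args = parser.parse_args()

    # Resolve the absolute path of the installed console script at runtime,
    # instead of guessing the installation prefix at build time.
    exec_path = shutil.which("my-thing")
    if exec_path is None:
        sys.exit("my-thing is not on PATH; is the package installed?")

    args.target.mkdir(parents=True, exist_ok=True)
    (args.target / "my-thing.service").write_text(UNIT_TEMPLATE.format(exec_path=exec_path))

if __name__ == "__main__":
    main()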
10
8
61,881,175
2020-5-19
https://stackoverflow.com/questions/61881175/normed-histogram-y-axis-larger-than-1
Sometimes when I create a histogram, using say seaborn's displot function, with norm_hist = True, the y-axis is less than 1 as expected for a PDF. Other times it takes on values greater than one. For example if I run sns.set(); x = np.random.randn(10000) ax = sns.distplot(x) Then the y-axis on the histogram goes from 0.0 to 0.4 as expected, but if the data is not normal the y-axis can be as large as 30 even if norm_hist = True. What am I missing about the normalization arguments for histogram functions, e.g. norm_hist for sns.distplot? Even if I normalize the data myself by creating a new variable thus: new_var = data/sum(data) so that the data sums to 1, the y-axis will still show values far larger than 1 (like 30 for example) whether the norm_hist argument is True or not. What interpretation can I give when the y-axis has such a large range? I think what is happening is my data is concentrated closely around zero so in order for the data to have an area equal to 1 (under the kde for example) the height of the histogram has to be larger than 1...but since probabilities can't be above 1 what does the result mean? Also, how can I get these functions to show probability on the y-axis?
The rule isn't that all the bars should sum to one. The rule is that all the areas of all the bars should sum to one. When the bars are very narrow, their sum can be quite large although their areas sum to one. The height of a bar times its width is the probability that a value would all in that range. To have the height being equal to the probability, you need bars of width one. Here is an example to illustrate what's going on. import numpy as np from matplotlib import pyplot as plt import seaborn as sns fig, axs = plt.subplots(ncols=2, figsize=(14, 3)) np.random.seed(2023) a = np.random.normal(0, 0.01, 100000) sns.histplot(a, bins=np.arange(-0.04, 0.04, 0.001), stat='density', ax=axs[0]) axs[0].set_title('Measuring in meters') axs[0].containers[1][40].set_color('r') a *= 1000 sns.histplot(a, bins=np.arange(-40, 40, 1), stat='density', ax=axs[1]) axs[1].set_title('Measuring in milimeters') axs[1].containers[1][40].set_color('r') plt.show() The plot at the left uses bins of 0.001 meter wide. The highest bin (in red) is about 40 high. The probability that a value falls into that bin is 40*0.001 = 0.04. The plot at the right uses exactly the same data, but measures in milimeter. Now the bins are 1 mm wide. The highest bin is about 0.04 high. The probability that a value falls into that bin is also 0.04, because of the bin width of 1. As an example of a distribution for which the probability density function has zones larger than 1, see the Pareto distribution with α = 3. By directly using plt.hist, which returns the bin edges and heights, the area can easily be calculated. np.random.seed(2023) a = np.random.normal(0, 0.01, 100000) v = plt.hist(a, bins=np.arange(-0.04, 0.04, 0.001), density=True, ec='k') left = v[1][:-1] right = v[1][1:] area = (v[0] * (right-left)).sum() print(f'Area: {area}') sns.distplot is deprecated import numpy as np from matplotlib import pyplot as plt import seaborn as sns fig, axs = plt.subplots(ncols=2, figsize=(14, 3)) a = np.random.normal(0, 0.01, 100000) sns.distplot(a, bins=np.arange(-0.04, 0.04, 0.001), ax=axs[0]) axs[0].set_title('Measuring in meters') axs[0].containers[0][40].set_color('r') a *= 1000 sns.distplot(a, bins=np.arange(-40, 40, 1), ax=axs[1]) axs[1].set_title('Measuring in milimeters') axs[1].containers[0][40].set_color('r') plt.show()
7
21
61,852,402
2020-5-17
https://stackoverflow.com/questions/61852402/how-can-i-plot-a-simple-plot-with-seaborn-from-a-python-dictionary
I have a dictionary like this: my_dict = {'Southampton': '33.7%', 'Cherbourg': '55.36%', 'Queenstown': '38.96%'} How can I have a simple plot with 3 bars showing the values of each key in a dictionary? I've tried: sns.barplot(x=my_dict.keys(), y = int(my_dict.values())) But I get : TypeError: int() argument must be a string, a bytes-like object or a number, not 'dict_values'
There are several issues in your code: You are trying to convert each value (eg "xx.xx%") into a number. my_dict.values() returns all values as a dict_values object. int(my_dict.values())) means converting the set of all values to a single integer, not converting each of the values to an integer. The former, naturally, makes no sense. Python can't interpret something like "12.34%" as an integer or a float. You need to remove the percentage sign, ie float("12.34%"[:-1]). Dictionaries are not ordered. Therefore, my_dict.keys() and my_dict.values() are not guaranteed to return keys and values in the key-value pairs in the same order, for example, the keys you get may be ['Southampton', 'Cherbourg', 'Queenstown'] and the values you get may be "55.36%", "33.7", "38.96%". This is no longer a problem in Python >= 3.7 and CPython 3.6; see @AmphotericLewisAcid's comment below. With all these issues fixed: keys = list(my_dict.keys()) # get values in the same order as keys, and parse percentage values vals = [float(my_dict[k][:-1]) for k in keys] sns.barplot(x=keys, y=vals) You get:
10
16
61,787,127
2020-5-14
https://stackoverflow.com/questions/61787127/how-to-not-print-the-index-url-in-generated-requirements-txt-when-using-piptools
I am using piptools to compile requirements.in to generate requirements.txt. I also have some index url written in my .pip/pip.conf file which I store my credentials to our python artifactory repo. So whenever I do pip-compile requirements.in the generated requirements.txt will contain a line reflecting that index url such as the following. I don't want this line to be there, is there a configuration where we can configure pip-tools to not generate this line to requirements.txt? --extra-index-url https://pli:[email protected]/api/pypi/pypi-virtual/simple
add the --no-emit-index-url flag to the pip-compile command, or --no-index for pre-5.2.0 versions. EDIT: Official docs for this flag are here.
9
10
61,883,438
2020-5-19
https://stackoverflow.com/questions/61883438/is-there-a-way-to-programmatically-confirm-that-a-python-package-version-satisfi
I am trying to find whether there is a way to take an installed package and version and check whether it satisfies a requirements spec. For example, if I have the package pip==20.0.2, I want the program to do the following: CheckReqSpec("pip==20.0.2", "pip>=19.0.0") -> True CheckReqSpec("pip==20.0.2", "pip<=20.1") -> True CheckReqSpec("pip==20.0.2", "pip~=20.0.0") -> True CheckReqSpec("pip==20.0.2", "pip>20.0.2") -> False I found that pkg_resources.extern.packaging has version.parse, which is useful for comparing different versions greater than or less than, but requirement specs can be very complex, and there are operators like ~= that are not standard mathematical operators. The setuptools docs has this example: PickyThing<1.6,>1.9,!=1.9.6,<2.0a0,==2.4c1 Is there an existing way to do this check, or an easy way to make my own? edit: The ~= in particular is difficult, especially if the specs are input as a variable. * in the version requirement is also hard to figure out, since version.parse("20.0.*") == version.parse("20.0.1") # False version.parse("20.0.*") < version.parse("20.0.0") # True version.parse("20.0.*") < version.parse("20.1.1") # True version.parse("20.0.*") >= version.parse("20.0.0") # False
Using pkg_resources (from setuptools) as an API is now deprecated, and will cause warnings at import time: $ python3 -W always -c 'from pkg_resources import Requirement' <string>:1: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html Instead, we can parse a requirement using packaging (which pkg_resources was using internally anyway). Check requirement name is matching with an equality comparison, and check requirement version is within a specifier set using in: >>> from packaging.requirements import Requirement >>> req = Requirement("pip~=20.0.0") >>> pin = "pip==20.0.2" >>> name, version = pin.split("==") >>> name == req.name and version in req.specifier True Post releases work. Pre-releases have to be opted in for explicitly. >>> "20.0.0post1" in req.specifier True >>> req.specifier.contains("20.0.1b3") False >>> req.specifier.contains("20.0.1b3", prereleases=True) True Note: a top-level packaging installation may be at a different version than the packaging version which pip vendors and uses internally. If you need to guarantee the packaging APIs are matching pip's behavior exactly, you could import the Requirement type from pip's vendored subpackage directly: from pip._vendor.packaging.requirements import Requirement Or, if importing from a private submodule scares you, then install packaging at top-level to the exact same version which your pip version is currently vendoring. Check your pip version (with pip --version) and then check the corresponding packaging version which pip vendors. For example, if your pip version is 23.2.1 you may check in: https://github.com/pypa/pip/blob/23.2.1/src/pip/_vendor/vendor.txt Here you will see that pip==23.2.1 vendors an older version at packaging==21.3.
11
10
61,859,356
2020-5-17
https://stackoverflow.com/questions/61859356/how-to-click-the-ok-button-within-an-alert-using-python-selenium
I want to click the "OK" button in this pop up dialog I tried: driver.switchTo().alert().accept(); but it doesn't work
To click on the OK button within the alert you need to induce WebDriverWait for the desired alert_is_present() and you can use the following solution: WebDriverWait(driver, 10).until(EC.alert_is_present()) driver.switch_to.alert.accept() Note : You have to add the following imports : from selenium.webdriver.common.alert import Alert from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC Reference You can find a couple of relevant discussions in: Python click button on alert How to read the text from the alert box using Python + Selenium Why switching to alert through selenium is not stable? Would like to understand why switch_to_alert() is receiving a strikethrough and how to fix
7
16
61,814,614
2020-5-15
https://stackoverflow.com/questions/61814614/unknown-layer-keraslayer-when-i-try-to-load-model
When I try to save my model as hdf5 path = 'path.h5' model.save(path) and then load the model again my_reloaded_model = tf.keras.models.load_model(path) I get the following error: ValueError: Unknown layer: KerasLayer Any help? I'm using tensorflow version: 2.2.0 keras version: 2.3.0-tf
I just found a solution that worked for me my_reloaded_model = tf.keras.models.load_model( (path), custom_objects={'KerasLayer':hub.KerasLayer} )
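Note that hub here refers to tensorflow_hub, which has to be imported before calling load_model; spelled out (with the placeholder path from the question):

import tensorflow as tf
import tensorflow_hub as hub

path = 'path.h5'
my_reloaded_model = tf.keras.models.load_model(
    path,
    custom_objects={'KerasLayer': hub.KerasLayer}
)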
18
42
61,859,098
2020-5-17
https://stackoverflow.com/questions/61859098/maximum-volume-inscribed-ellipsoid-in-a-polytope-set-of-points
Later Edit: I uploaded here a sample of my original data. It's actually a segmentation image in the DICOM format. The volume of this structure as it is it's ~ 16 mL, so I assume the inner ellipsoid volume should be smaller than that. to extract the points from the DICOM image I used the following code: import os import numpy as np import SimpleITK as sitk def get_volume_ml(image): x_spacing, y_spacing, z_spacing = image.GetSpacing() image_nda = sitk.GetArrayFromImage(image) imageSegm_nda_NonZero = image_nda.nonzero() num_voxels = len(list(zip(imageSegm_nda_NonZero[0], imageSegm_nda_NonZero[1], imageSegm_nda_NonZero[2]))) if 0 >= num_voxels: print('The mask image does not seem to contain an object.') return None volume_object_ml = (num_voxels * x_spacing * y_spacing * z_spacing) / 1000 return volume_object_ml def get_surface_points(folder_path): """ :param folder_path: path to folder where DICOM images are stored :return: surface points of the DICOM object """ # DICOM Series reader = sitk.ImageSeriesReader() dicom_names = reader.GetGDCMSeriesFileNames(os.path.normpath(folder_path)) reader.SetFileNames(dicom_names) reader.MetaDataDictionaryArrayUpdateOn() reader.LoadPrivateTagsOn() try: dcm_img = reader.Execute() except Exception: print('Non-readable DICOM Data: ', folder_path) return None volume_obj = get_volume_ml(dcm_img) print('The volume of the object in mL:', volume_obj) contour = sitk.LabelContour(dcm_img, fullyConnected=False) contours = sitk.GetArrayFromImage(contour) vertices_locations = contours.nonzero() vertices_unravel = list(zip(vertices_locations[0], vertices_locations[1], vertices_locations[2])) vertices_list = [list(vertices_unravel[i]) for i in range(0, len(vertices_unravel))] surface_points = np.array(vertices_list) return surface_points folder_path = r"C:\Users\etc\TTT [13]\20160415 114441\Series 052 [CT - Abdomen WT 1 0 I31f 3]" points = get_surface_points(folder_path) I have a set of points (n > 1000) in 3D space that describe a hollow ovoid like shape. What I would like is to fit an ellipsoid (3D) that is inside all of the points. I am looking for the maximum volume ellipsoid fitting inside the points. I tried to adapt the code from Minimum Enclosing Ellipsoid (aka outer bounding ellipsoid) by modifying the threshold err > tol, with my logic begin that all points should be smaller than < 1 given the ellipsoid equation. But no success. I also tried the Loewner-John adaptation on mosek, but I didn't figure how to describe the intersection of a hyperplane with 3D polytope (the Ax <= b representation) so I can use it for the 3D case. So no success again. The code from the outer ellipsoid: import numpy as np import numpy.linalg as la import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D pi = np.pi sin = np.sin cos = np.cos def plot_ellipsoid(A, centroid, color, ax): """ :param A: matrix :param centroid: center :param color: color :param ax: axis :return: """ centroid = np.asarray(centroid) A = np.asarray(A) U, D, V = la.svd(A) rx, ry, rz = 1. 
/ np.sqrt(D) u, v = np.mgrid[0:2 * np.pi:20j, -np.pi / 2:np.pi / 2:10j] x = rx * np.cos(u) * np.cos(v) y = ry * np.sin(u) * np.cos(v) z = rz * np.sin(v) E = np.dstack((x, y, z)) E = np.dot(E, V) + centroid x, y, z = np.rollaxis(E, axis=-1) ax.plot_wireframe(x, y, z, cstride=1, rstride=1, color=color, alpha=0.2) ax.set_zlabel('Z-Axis') ax.set_ylabel('Y-Axis') ax.set_xlabel('X-Axis') def mvee(points, tol = 0.001): """ Finds the ellipse equation in "center form" (x-c).T * A * (x-c) = 1 """ N, d = points.shape Q = np.column_stack((points, np.ones(N))).T err = tol+1.0 u = np.ones(N)/N while err > tol: # assert u.sum() == 1 # invariant X = np.dot(np.dot(Q, np.diag(u)), Q.T) M = np.diag(np.dot(np.dot(Q.T, la.inv(X)), Q)) jdx = np.argmax(M) step_size = (M[jdx]-d-1.0)/((d+1)*(M[jdx]-1.0)) new_u = (1-step_size)*u new_u[jdx] += step_size err = la.norm(new_u-u) u = new_u c = np.dot(u,points) A = la.inv(np.dot(np.dot(points.T, np.diag(u)), points) - np.multiply.outer(c,c))/d return A, c folder_path = r"" # path to a DICOM img folder points = get_surface_points(folder_path) # or some random pts A, centroid = mvee(points) U, D, V = la.svd(A) rx_outer, ry_outer, rz_outer = 1./np.sqrt(D) # PLOT fig = plt.figure() ax1 = fig.add_subplot(111, projection='3d') ax1.scatter(points[:, 0], points[:, 1], points[:, 2], c='blue') plot_ellipsoid(A, centroid, 'green', ax1) Which gives me this result for the outer ellipsoid on my sample points: The main question: How do I fit an ellipsoid (3D) inside a cloud of 3D points using Python? Is it possible to modify the algorithm for the outer ellipsoid to get the maximum inscribed (inner) ellipsoid? I am looking for code in Python ideally.
Problem statement Given a number of points v₁, v₂, ..., vₙ, find a large ellipsoid satisfying two constraints: The ellipsoid is in the convex hull ℋ = ConvexHull(v₁, v₂, ..., vₙ). None of the points v₁, v₂, ..., vₙ is within the ellipsoid. I propose an iterative procedure to find a large ellipsoid satisfying these two constraints. In each iteration we need to solve a semidefinite programming problem. This iterative procedure is guaranteed to converge, however it not guaranteed to converge to the globally largest ellipsoid. Approach Find a single ellipsoid The core of our iterative procedure is that in each iteration, we find an ellipsoid satisfying 3 conditions: The ellipsoid is contained within ConvexHull(v₁, v₂, ..., vₙ) = {x | Ax<=b}. For a set of points u₁, ... uₘ (where {v₁, v₂, ..., vₙ} ⊂ {u₁, ... uₘ}, namely the given point in the point clouds belongs to this set of points u₁, ... uₘ), the ellipsoid doesn't contain any point in u₁, ... uₘ. We call this set u₁, ... uₘ as "outside points". For a set of points w₁,..., wₖ (where {w₁,..., wₖ} ∩ {v₁, v₂, ..., vₙ} = ∅, namely none of the point in v₁, v₂, ..., vₙ belongs to {w₁,..., wₖ}), the ellipsoid contains all of the points w₁,..., wₖ. We call this set w₁,..., wₖ as "inside points". The intuitive idea is that the "inside points" w₁,..., wₖ indicate the volume of the ellipsoid. We will append new point to "inside points" so as to increase the ellipsoid volume. To find such an ellipsoid through convex optimization, we parameterize the ellipsoid as {x | xᵀPx + 2qᵀx ≤ r} and we will search for P, q, r. The condition that the "outside points" u₁, ... uₘ are all outside of the ellipsoid is formulated as uᵢᵀPuᵢ + 2qᵀuᵢ >= r ∀ i=1, ..., m this is a linear constraint on P, q, r. The condition that the "inside points" w₁,..., wₖ are all inside the ellipsoid is formulated as wᵢᵀPwᵢ + 2qᵀwᵢ <= r ∀ i=1, ..., k This is also a linear constraint on P, q, r. We also impose the constraint P is positive definite P being positive definite, together with the constraint that there exists points wᵢ satisfying wᵢᵀPwᵢ + 2qᵀwᵢ <= r guarantees that the set {x | xᵀPx + 2qᵀx ≤ r} is an ellipsoid. We also have the constraint that the ellipsoid is inside the convex hull ℋ={x | aᵢᵀx≤ bᵢ, i=1,...,l} (namely there are l halfspaces as the H-representation of ℋ). Using s-lemma, we know that a necessary and sufficient condition for the halfspace {x|aᵢᵀx≤ bᵢ} containing the ellipsoid is that ∃ λᵢ >= 0, s.t [P q -λᵢaᵢ/2] is positive semidefinite. [(q-λᵢaᵢ/2)ᵀ λᵢbᵢ-r] Hence we can solve the following semidefinite programming problem to find the ellipsoid that contains all the "inside points", doesn't contain any "outside points", and is within the convex hull ℋ find P, q, r, λ s.t uᵢᵀPuᵢ + 2qᵀuᵢ >= r ∀ i=1, ..., m wᵢᵀPwᵢ + 2qᵀwᵢ <= r ∀ i=1, ..., k P is positive definite. λ >= 0, [P q -λᵢaᵢ/2] is positive semidefinite. [(q-λᵢaᵢ/2)ᵀ λᵢbᵢ-r] We call this P, q, r = find_ellipsoid(outside_points, inside_points, A, b). The volume of this ellipsoid is proportional to (r + qᵀP⁻¹q)/power(det(P), 1/3). Iterative procedure. We initialize "outside points" as all the points v₁, v₂, ..., vₙ in the point cloud, and "inside points" as a single point w₁ in the convex hull ℋ. In each iteration, we use find_ellipsoid function in the previous sub-section to find the ellipsoid within ℋ that contains all "inside points" but doesn't contain any "outside points". Depending on the result of the SDP in find_ellipsoid, we do the following If the SDP is feasible. 
We then compare the newly found ellipsoid with the largest ellipsoid found so far. If this new ellipsoid is larger, then accept it as the largest ellipsoid found so far. If the SDP is infeasible, then we remove the last point in "inside points", add this point to "outside point". In both cases, we then take a new sample point in the convex hull ℋ, add that sample point to "inside points", and then solve the SDP again. The complete algorithm is as follows Initialize "outside points" to v₁, v₂, ..., vₙ, initialize "inside points" to a single random point in the convex hull ℋ. while iter < max_iterations: Solve the SDP P, q, r = find_ellipsoid(outside_points, inside_points, A, b). If SDP is feasible and volume(Ellipsoid(P, q, r)) > largest_volume, set P_best = P, q_best=q, r_best = r. If SDP is infeasible, pt = inside_points.pop_last(), outside_points.push_back(pt). Randomly sample a new point in ℋ, append the point to "inside points", iter += 1. Go to step 3. Code from scipy.spatial import ConvexHull, Delaunay import scipy import cvxpy as cp import matplotlib.pyplot as plt import numpy as np from scipy.stats import dirichlet from mpl_toolkits.mplot3d import Axes3D # noqa def get_hull(pts): dim = pts.shape[1] hull = ConvexHull(pts) A = hull.equations[:, 0:dim] b = hull.equations[:, dim] return A, -b, hull def compute_ellipsoid_volume(P, q, r): """ The volume of the ellipsoid xᵀPx + 2qᵀx ≤ r is proportional to power(r + qᵀP⁻¹q, dim/2)/sqrt(det(P)) We return this number. """ dim = P.shape[0] return np.power(r + q @ np.linalg.solve(P, q)), dim/2) / \ np.sqrt(np.linalg.det(P)) def uniform_sample_from_convex_hull(deln, dim, n): """ Uniformly sample n points in the convex hull Ax<=b This is copied from https://stackoverflow.com/questions/59073952/how-to-get-uniformly-distributed-points-in-convex-hull @param deln Delaunay of the convex hull. """ vols = np.abs(np.linalg.det(deln[:, :dim, :] - deln[:, dim:, :]))\ / np.math.factorial(dim) sample = np.random.choice(len(vols), size=n, p=vols / vols.sum()) return np.einsum('ijk, ij -> ik', deln[sample], dirichlet.rvs([1]*(dim + 1), size=n)) def centered_sample_from_convex_hull(pts): """ Sample a random point z that is in the convex hull of the points v₁, ..., vₙ. z = (w₁v₁ + ... + wₙvₙ) / (w₁ + ... + wₙ) where wᵢ are all uniformly sampled from [0, 1]. Notice that by central limit theorem, the distribution of this sample is centered around the convex hull center, and also with small variance when the number of points are large. """ num_pts = pts.shape[0] pts_weights = np.random.uniform(0, 1, num_pts) z = (pts_weights @ pts) / np.sum(pts_weights) return z def find_ellipsoid(outside_pts, inside_pts, A, b): """ For a given sets of points v₁, ..., vₙ, find the ellipsoid satisfying three constraints: 1. The ellipsoid is within the convex hull of these points. 2. The ellipsoid doesn't contain any of the points. 3. The ellipsoid contains all the points in @p inside_pts This ellipsoid is parameterized as {x | xᵀPx + 2qᵀx ≤ r }. We find this ellipsoid by solving a semidefinite programming problem. @param outside_pts outside_pts[i, :] is the i'th point vᵢ. The point vᵢ must be outside of the ellipsoid. @param inside_pts inside_pts[i, :] is the i'th point that must be inside the ellipsoid. @param A, b The convex hull of v₁, ..., vₙ is Ax<=b @return (P, q, r, λ) P, q, r are the parameterization of this ellipsoid. λ is the slack variable used in constraining the ellipsoid inside the convex hull Ax <= b. 
If the problem is infeasible, then returns None, None, None, None """ assert(isinstance(outside_pts, np.ndarray)) (num_outside_pts, dim) = outside_pts.shape assert(isinstance(inside_pts, np.ndarray)) assert(inside_pts.shape[1] == dim) num_inside_pts = inside_pts.shape[0] constraints = [] P = cp.Variable((dim, dim), symmetric=True) q = cp.Variable(dim) r = cp.Variable() # Impose the constraint that v₁, ..., vₙ are all outside of the ellipsoid. for i in range(num_outside_pts): constraints.append( outside_pts[i, :] @ (P @ outside_pts[i, :]) + 2 * q @ outside_pts[i, :] >= r) # P is strictly positive definite. epsilon = 1e-6 constraints.append(P - epsilon * np.eye(dim) >> 0) # Add the constraint that the ellipsoid contains @p inside_pts. for i in range(num_inside_pts): constraints.append( inside_pts[i, :] @ (P @ inside_pts[i, :]) + 2 * q @ inside_pts[i, :] <= r) # Now add the constraint that the ellipsoid is in the convex hull Ax<=b. # Using s-lemma, we know that the constraint is # ∃ λᵢ > 0, # s.t [P q -λᵢaᵢ/2] is positive semidefinite. # [(q-λᵢaᵢ/2)ᵀ λᵢbᵢ-r] num_faces = A.shape[0] lambda_var = cp.Variable(num_faces) constraints.append(lambda_var >= 0) Q = [None] * num_faces for i in range(num_faces): Q[i] = cp.Variable((dim+1, dim+1), PSD=True) constraints.append(Q[i][:dim, :dim] == P) constraints.append(Q[i][:dim, dim] == q - lambda_var[i] * A[i, :]/2) constraints.append(Q[i][-1, -1] == lambda_var[i] * b[i] - r) prob = cp.Problem(cp.Minimize(0), constraints) try: prob.solve(verbose=False) except cp.error.SolverError: return None, None, None, None if prob.status == 'optimal': P_val = P.value q_val = q.value r_val = r.value lambda_val = lambda_var.value return P_val, q_val, r_val, lambda_val else: return None, None, None, None def draw_ellipsoid(P, q, r, outside_pts, inside_pts): """ Draw an ellipsoid defined as {x | xᵀPx + 2qᵀx ≤ r } This ellipsoid is equivalent to |Lx + L⁻¹q| ≤ √(r + qᵀP⁻¹q) where L is the symmetric matrix satisfying L * L = P """ fig = plt.figure() dim = P.shape[0] L = scipy.linalg.sqrtm(P) radius = np.sqrt(r + q@(np.linalg.solve(P, q))) if dim == 2: # first compute the points on the unit sphere theta = np.linspace(0, 2 * np.pi, 200) sphere_pts = np.vstack((np.cos(theta), np.sin(theta))) ellipsoid_pts = np.linalg.solve( L, radius * sphere_pts - (np.linalg.solve(L, q)).reshape((2, -1))) ax = fig.add_subplot(111) ax.plot(ellipsoid_pts[0, :], ellipsoid_pts[1, :], c='blue') ax.scatter(outside_pts[:, 0], outside_pts[:, 1], c='red') ax.scatter(inside_pts[:, 0], inside_pts[:, 1], s=20, c='green') ax.axis('equal') plt.show() if dim == 3: u = np.linspace(0, np.pi, 30) v = np.linspace(0, 2*np.pi, 30) sphere_pts_x = np.outer(np.sin(u), np.sin(v)) sphere_pts_y = np.outer(np.sin(u), np.cos(v)) sphere_pts_z = np.outer(np.cos(u), np.ones_like(v)) sphere_pts = np.vstack(( sphere_pts_x.reshape((1, -1)), sphere_pts_y.reshape((1, -1)), sphere_pts_z.reshape((1, -1)))) ellipsoid_pts = np.linalg.solve( L, radius * sphere_pts - (np.linalg.solve(L, q)).reshape((3, -1))) ax = plt.axes(projection='3d') ellipsoid_pts_x = ellipsoid_pts[0, :].reshape(sphere_pts_x.shape) ellipsoid_pts_y = ellipsoid_pts[1, :].reshape(sphere_pts_y.shape) ellipsoid_pts_z = ellipsoid_pts[2, :].reshape(sphere_pts_z.shape) ax.plot_wireframe(ellipsoid_pts_x, ellipsoid_pts_y, ellipsoid_pts_z) ax.scatter(outside_pts[:, 0], outside_pts[:, 1], outside_pts[:, 2], c='red') ax.scatter(inside_pts[:, 0], inside_pts[:, 1], inside_pts[:, 2], s=20, c='green') ax.axis('equal') plt.show() def find_large_ellipsoid(pts, max_iterations): """ 
We find a large ellipsoid within the convex hull of @p pts but not containing any point in @p pts. The algorithm proceeds iteratively 1. Start with outside_pts = pts, inside_pts = z where z is a random point in the convex hull of @p outside_pts. 2. while num_iter < max_iterations 3. Solve an SDP to find an ellipsoid that is within the convex hull of @p pts, not containing any outside_pts, but contains all inside_pts. 4. If the SDP in the previous step is infeasible, then remove z from inside_pts, and append it to the outside_pts. 5. Randomly sample a point in the convex hull of @p pts, if this point is outside of the current ellipsoid, then append it to inside_pts. 6. num_iter += 1 When the iterations limit is reached, we report the ellipsoid with the maximal volume. @param pts pts[i, :] is the i'th points that has to be outside of the ellipsoid. @param max_iterations The iterations limit. @return (P, q, r) The largest ellipsoid is parameterized as {x | xᵀPx + 2qᵀx ≤ r } """ dim = pts.shape[1] A, b, hull = get_hull(pts) hull_vertices = pts[hull.vertices] deln = hull_vertices[Delaunay(hull_vertices).simplices] outside_pts = pts z = centered_sample_from_convex_hull(pts) inside_pts = z.reshape((1, -1)) num_iter = 0 max_ellipsoid_volume = -np.inf while num_iter < max_iterations: (P, q, r, lambda_val) = find_ellipsoid(outside_pts, inside_pts, A, b) if P is not None: volume = compute_ellipsoid_volume(P, q, r) if volume > max_ellipsoid_volume: max_ellipsoid_volume = volume P_best = P q_best = q r_best = r else: # Adding the last inside_pts doesn't increase the ellipsoid # volume, so remove it. inside_pts = inside_pts[:-1, :] else: outside_pts = np.vstack((outside_pts, inside_pts[-1, :])) inside_pts = inside_pts[:-1, :] # Now take a new sample that is outside of the ellipsoid. sample_pts = uniform_sample_from_convex_hull(deln, dim, 20) is_in_ellipsoid = np.sum(sample_pts.T*(P_best @ sample_pts.T), axis=0)\ + 2 * sample_pts @ q_best <= r_best if np.all(is_in_ellipsoid): # All the sampled points are in the ellipsoid, the ellipsoid is # already large enough. return P_best, q_best, r_best else: inside_pts = np.vstack(( inside_pts, sample_pts[np.where(~is_in_ellipsoid)[0][0], :])) num_iter += 1 return P_best, q_best, r_best if __name__ == "__main__": pts = np.array([[0., 0.], [0., 1.], [1., 1.], [1., 0.], [0.2, 0.4]]) max_iterations = 10 P, q, r = find_large_ellipsoid(pts, max_iterations) I also put the code in the github repo Results Here is the result on a simple 2D example, say we want to find a large ellipsoid that doesn't contain the five red points in the figure below. Here is the result after the first iteration . The red points are the "outside points" (the initial outside points are v₁, v₂, ..., vₙ), the green point is the initial "inside points". In the second iteration, the ellipsoid becomes . By adding one more "inside point" (green dot), the ellipsoid gets larger. This gif shows the animation of the 10 iteations.
12
9
61,827,165
2020-5-15
https://stackoverflow.com/questions/61827165/plotly-how-to-handle-overlapping-colorbar-and-legends
I have a simple graph and I am using the Plotly Express library to draw it. In the resulting image the two legends, 'Rank' and 'Genre', overlap each other.
px.scatter_ternary(data_frame=data, a='Length.', b='Beats.Per.Minute', c='Popularity',
                   color='Rank', symbol='Genre',
                   labels={'Length.': 'Len', 'Beats.Per.Minute': 'Beats'},
                   color_continuous_midpoint=15,
                   symbol_sequence=['circle-open-dot', 'cross-open', 'triangle-ne'])
What can be done to avoid the overlap?
Short answer: You can move the colorbar with: fig.update_layout(coloraxis_colorbar=dict(yanchor="top", y=1, x=0, ticks="outside")) The details: Since you haven't provided a fully executable code snippet with a sample of your data, I'm going to have to base a suggestion on a dataset and an example that's at least able to reproduce a similar problem. Take a look: This seems to be the exact same problem that you're facing. To make the plot readable, I would simply move the colorbar using fig.update_layout(coloraxis_colorbar() like this: Complete code: # imports import plotly.express as px # data df = px.data.election() # figure setup fig = px.scatter_ternary(df, a="Joly", b="Coderre", c="Bergeron", hover_name="district", color="total", size="total", size_max=15, symbol ='Coderre', color_discrete_map = {"Joly": "blue", "Bergeron": "green", "Coderre":"red"}, ) # move colorbar fig.update_layout(coloraxis_colorbar=dict(yanchor="top", y=1, x=0, ticks="outside", ticksuffix=" bills")) fig.show() I hope this solves your real-world problem. Don't hesitate to let me know if not!
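For the scatter_ternary call in the question itself, the same fix can be applied directly. The sketch below is not part of the accepted answer: the x/y offsets are guesses you would tune, and moving the discrete 'Genre' legend with update_layout(legend=...) is an extra step of mine.
import plotly.express as px

# `data` is the asker's DataFrame with the columns used in the question.
fig = px.scatter_ternary(
    data_frame=data, a='Length.', b='Beats.Per.Minute', c='Popularity',
    color='Rank', symbol='Genre',
    labels={'Length.': 'Len', 'Beats.Per.Minute': 'Beats'},
    color_continuous_midpoint=15,
    symbol_sequence=['circle-open-dot', 'cross-open', 'triangle-ne'])

# Park the continuous 'Rank' colorbar on the left and the 'Genre' symbol
# legend on the right so the two no longer overlap.
fig.update_layout(
    coloraxis_colorbar=dict(x=-0.15, y=0.5, len=0.8, title='Rank'),
    legend=dict(x=1.05, y=0.5, xanchor='left'))

fig.show()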
11
15
61,783,925
2020-5-13
https://stackoverflow.com/questions/61783925/running-a-package-pytest-with-poetry
I am new to poetry and want to get it set-up with pytest. I have a package mylib in the following set-up
├── dist
│   ├── mylib-0.0.1-py3-none-any.whl
│   └── mylib-0.0.1.tar.gz
├── poetry.lock
├── mylib
│   ├── functions.py
│   ├── __init__.py
│   └── utils.py
├── pyproject.toml
├── README.md
└── tests
    └── test_functions.py
in test_functions I have
import mylib
However, when I run poetry run pytest it complains about mylib not being included. I can run pip install dist/mylib-0.0.1-py3-none-any.whl but that clutters my python environment with mylib. I want to use that environment as well for other packages. My question is: What is the proper way to work with poetry and pytest? My underlying python environment is a clean pyenv python 3.8. Using pyproject.toml I create a project based virtual environment for mylib.
You need to run poetry install to set up your dev environment. It will install all package and development requirements, and once that is done it will do a dev-install of your source code. You only need to run it once, code changes will propagate directly and do not require running the install again. If you have set up the virtual env that you want already, take care that it is activated when you run the install command. If you don't, poetry will try to create a new virtual env and use that, which is probably not what you want.
41
25
61,860,800
2020-5-18
https://stackoverflow.com/questions/61860800/running-a-processpoolexecutor-in-ipython
I was running a simple multiprocessing example in my IPython interpreter (IPython 7.9.0, Python 3.8.0) on my MacBook and ran into a strange error. Here's what I typed: [In [1]: from concurrent.futures import ProcessPoolExecutor [In [2]: executor=ProcessPoolExecutor(max_workers=1) [In [3]: def func(): print('Hello') [In [4]: future=executor.submit(func) However, I received the following error: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 313, in _bootstrap self.run() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/process.py", line 233, in _process_worker call_item = call_queue.get(block=True) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 116, in get return _ForkingPickler.loads(res) AttributeError: Can't get attribute 'func' on <module '__main__' (built-in)> Furthermore, trying to submit the job again gave me a different error: [In [5]: future=executor.submit(func) --------------------------------------------------------------------------- BrokenProcessPool Traceback (most recent call last) <ipython-input-5-42bad1a6fe80> in <module> ----> 1 future=executor.submit(func) /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/process.py in submit(*args, **kwargs) 627 with self._shutdown_lock: 628 if self._broken: --> 629 raise BrokenProcessPool(self._broken) 630 if self._shutdown_thread: 631 raise RuntimeError('cannot schedule new futures after shutdown') BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore As a sanity check, I typed the same (almost) code into a Python file and ran it from the command line (python3 test.py). It worked fine. Why does IPython have an issue with my test? EDIT: Here's the Python file that worked fine. from concurrent.futures import ProcessPoolExecutor as Executor def func(): print('Hello') if __name__ == '__main__': with Executor(1) as executor: future=executor.submit(func) print(future.result())
TLDR; import multiprocessing as mp from concurrent.futures import ProcessPoolExecutor # create child processes using 'fork' context executor = ProcessPoolExecutor(max_workers=1, mp_context=mp.get_context('fork')) This is in-fact caused by python 3.8 on MacOS switching to "spawn" method for creating a child process; as opposed to "fork" which was the default prior to 3.8. Here are some essential differences: Fork: Clones data and code of the parent process therefore inheriting the state of the parent program. Any modifications made to the inherited variables by the child process does not reflect back on the state of those variables in the parent process. The states are essentially forked from this point (copy-on-write). All the libraries imported in the parent process are available for use in the child processes' context. This also makes this method fast since child processes don't have to re-import libraries (code) and variables (data). This comes with some downsides especially with respect to forking multithreaded programs. Some libraries with C backends like Tensorflow, OpenCV etc are not fork-safe and causes the child process to hang in a non-deterministic manner. Spawn: Creates a fresh interpreter for the child process without inheriting code or data. Only the necessary data/arguments are sent to the child process. Which means variables, thread-locks, file descriptors etc are not automatically available to the child process -- this avoids hard to catch bugs. This method also comes with some drawbacks — since data/arguments need to be sent to the child process, they must also be pickle-able. Some objects with internal locks/mutex like Queues are not pickle-able and pickling heavier objects like data frames and large numpy arrays are expensive. Unpickling objects on the child process will cause re-import of associated libraries if any. This again adds to time. Since parent code is not cloned into the child process, you will need to use the if __name__ == '__main__' guard while creating a child process. Not doing this will make the child process unable to import code from the parent process (now running as main). This is also why your program works when used with the guard. If you're mindful of the fact that fork comes with some unpredictable effects caused by either your program or by an imported non-fork-safe library, you can either: (a) globally set the context for multiprocessing to use 'fork' method: import multiprocessing as mp mp.set_start_method("fork") note that this will set the context globally and you or any other imported library will not be able to change this context once it is set. (b) locally set context by using multiprocessing's get_context method: import multiprocessing as mp mp_fork = mp.get_context('fork') # mp_fork has all the attributes of mp so you can do: mp_fork.Process(...) mp_fork.Pool(...) # using local context will not change global behaviour: # create child process using global context # default is fork in < 3.8; spawn otherwise mp.Process(...) # most multiprocessing based functionality like ProcessPoolExecutor # also take context as an argument: executor=ProcessPoolExecutor(max_workers=1, mp_context=mp_fork)
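Putting the two fixes together for the example in the question, here is a minimal script-style sketch (not from the original answer): it keeps the __main__ guard so it also works under the default spawn method, and passes a fork context explicitly for the macOS/IPython case described above.
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def func():
    return 'Hello'

if __name__ == '__main__':
    # Use the pre-3.8 default start method; note the fork caveats described above.
    ctx = mp.get_context('fork')
    with ProcessPoolExecutor(max_workers=1, mp_context=ctx) as executor:
        future = executor.submit(func)
        print(future.result())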
10
18
61,891,181
2020-5-19
https://stackoverflow.com/questions/61891181/how-to-use-multiple-inputs-in-tensorflow-2-x-keras-custom-layer
I'm trying to use multiple inputs in custom layers in TensorFlow-Keras. The usage can be anything; right now it is defined as multiplying a mask with an image. I've searched SO and the only answer I could find was for TF 1.x, so it didn't do any good.
class mul(layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # I've added pass because this is the simplest form I can come up with.
        pass

    def call(self, inputs):
        # magic happens here and multiplications occur
        return(Z)
EDIT: Since TensorFlow v2.3/2.4, the contract is to use a list of inputs to the call method. For keras (not tf.keras) I think the answer below still applies. Implementing multiple inputs is done in the call method of your class, there are two alternatives: List input, here the inputs parameter is expected to be a list containing all the inputs, the advantage here is that it can be variable size. You can index the list, or unpack arguments using the = operator: def call(self, inputs): Z = inputs[0] * inputs[1] #Alternate input1, input2 = inputs Z = input1 * input2 return Z Multiple input parameters in the call method, works but then the number of parameters is fixed when the layer is defined: def call(self, input1, input2): Z = input1 * input2 return Z Whatever method you choose to implement this depends if you need fixed size or variable sized number of arguments. Of course each method changes how the layer has to be called, either by passing a list of arguments, or by passing arguments one by one in the function call. You can also use *args in the first method to allow for a call method with a variable number of arguments, but overall keras' own layers that take multiple inputs (like Concatenate and Add) are implemented using lists.
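To make the list-input variant concrete for the mask-times-image case mentioned in the question, here is a small self-contained sketch; the shapes and names are made up for the demo and are not from the original post.
import tensorflow as tf
from tensorflow.keras import layers

class Mul(layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, inputs):
        image, mask = inputs          # a list/tuple of two tensors
        return image * mask           # broadcasts the (H, W, 1) mask over the (H, W, 3) image

image_in = tf.keras.Input(shape=(32, 32, 3))
mask_in = tf.keras.Input(shape=(32, 32, 1))
masked = Mul()([image_in, mask_in])   # the layer is called with a list of inputs
model = tf.keras.Model(inputs=[image_in, mask_in], outputs=masked)
model.summary()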
12
15
61,888,521
2020-5-19
https://stackoverflow.com/questions/61888521/python-sphinx-warning-definition-list-ends-without-a-blank-line-unexpected-uni
My doc is like this: def segments(self, start_time=1, end_time=9223372036854775806, offset=0, size=20): """Get segments of the model :parameter offset: - optional int size: - optional int start_time: - optional string,Segments end_time: - optional string,Segments :return: Segments Object """ When I make html, it turns out: WARNING: Definition list ends without a blank line; unexpected unindent. I have no idea where I should add a blank line? I searched SO, but can not find any similar case as mine.
The question was already answered by jonrsharpe in a comment, but I want to complete it here. You are using the default Sphinx style, reStructuredText, so you have to repeat ":parameter" on every line:
"""Get segments of the model

:parameter offset: optional int
:parameter size: optional int
:parameter start_time: optional string, Segments
:parameter end_time: optional string, Segments
:return: Segments Object
"""
If you had created a dedicated definition list, you would need an empty line after the last definition. In this case you can add one, but you don't always have to.
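If you prefer the more common Sphinx field names (:param:/:type:), the same docstring can be written as below; the parameter descriptions are placeholder wording of mine, not from the original question.
def segments(self, start_time=1, end_time=9223372036854775806, offset=0, size=20):
    """Get segments of the model.

    :param offset: pagination offset, optional
    :type offset: int
    :param size: page size, optional
    :type size: int
    :param start_time: segment start, optional
    :type start_time: str
    :param end_time: segment end, optional
    :type end_time: str
    :return: Segments object
    """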
17
8
61,852,225
2020-5-17
https://stackoverflow.com/questions/61852225/align-button-to-the-center-of-the-window-using-pysimplegui
In my application am trying to place my button,text and input at the center of the window.I am using PySimpleGUI for designing buttons.For aligning to the center i used justification='center' attribute on my code.But still it is not fitting to the center of the window. The code am working with is import PySimpleGUI as sg from tkinter import * sg.theme('DarkAmber') layout = [ [sg.Text('Enter the value',justification='center')], [sg.Input(justification='center')], [sg.Button('Enter','center')] ] window = sg.Window('My new window', layout, size=(500,300), grab_anywhere=True) while True: event, values = window.read() # Read the event that happened and the values dictionary print(event, values) if event == sg.WIN_CLOSED or event == 'Exit': # If user closed window with X or if user clicked "Exit" button then exit break if event == 'Button': print('You pressed the button') window.close() The above code outputs How can i make the text,button and input to the center of the window?
This code centers the text, button and input in the window:
import PySimpleGUI as sg

sg.theme('DarkAmber')
layout = [
    [sg.Text('Enter the value', justification='center', size=(100, 1))],
    [sg.Input(justification='center', size=(100, 1))],
    [sg.Button('Enter', 'center', size=(100, 1))]
]
window = sg.Window('My new window', layout, size=(500, 300), grab_anywhere=True)

while True:
    event, values = window.read()  # Read the event that happened and the values dictionary
    print(event, values)
    if event == None or event == 'Exit':  # If user closed window with X or clicked "Exit" then exit
        break
    if event == 'Button':
        print('You pressed the button')

window.close()
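Recent PySimpleGUI versions also accept an element_justification argument on sg.Window, which centers the elements without padding each one to a fixed width. This is a sketch under that assumption, not part of the accepted answer:
import PySimpleGUI as sg

sg.theme('DarkAmber')
layout = [
    [sg.Text('Enter the value')],
    [sg.Input(key='-IN-', size=(30, 1))],
    [sg.Button('Enter')]
]

# element_justification='center' centers every element in the window.
window = sg.Window('My new window', layout, size=(500, 300),
                   element_justification='center', grab_anywhere=True)

while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, 'Exit'):
        break
    if event == 'Enter':
        print('You pressed the button, value:', values['-IN-'])

window.close()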
14
3
61,870,688
2020-5-18
https://stackoverflow.com/questions/61870688/cant-run-idle-with-pyenv-installation-python-may-not-be-configured-for-tk-m
I recently spent a couple of hours making tkinter and IDLE work on my pyenv Python installation (macOS).
Why you are here? You manage Python versions with pyenv on macOS, and you want IDLE - the development environment for Python - to work on your macOS, or you want the tkinter module to work.
What's wrong? You get one of the following errors:
"Python may not be configured for Tk" on import tkinter:
import _tkinter # If this fails your Python may not be configured for Tk
RuntimeError: tk.h version (8.6) doesn't match libtk.a version (8.5)
ModuleNotFoundError: No module named '_tkinter'
Here is step by step guide to make IDLE and tkinter work: install tcl-tk with Homebrew. In shell run brew install tcl-tk in shell run echo 'export PATH="/usr/local/opt/tcl-tk/bin:$PATH"' >> ~/.zshrc reload shell by quitting Terminal app or run source ~/.zshrc after reloaded check that tck-tk is in $PATH. Run echo $PATH | grep --color=auto tcl-tk. As the result you should see your $PATH contents with tcl-tk highlighted now we run three commands from Homebrew's output from step #1 in shell run export LDFLAGS="-L/usr/local/opt/tcl-tk/lib" in shell run export CPPFLAGS="-I/usr/local/opt/tcl-tk/include" in shell run export PKG_CONFIG_PATH="/usr/local/opt/tcl-tk/lib/pkgconfig" if you have your Python version already installed with pyenv then uninstall it with pyenv uninstall <your python version>. E.g. pyenv uninstall 3.8.2 set environment variable that will be used by python-build. In shell run export PYTHON_CONFIGURE_OPTS="--with-tcltk-includes='-I/usr/local/opt/tcl-tk/include' --with-tcltk-libs='-L/usr/local/opt/tcl-tk/lib -ltcl8.6 -ltk8.6'" Note: in future use tck-tk version that actually installed with Homebrew. At the moment of posting 8.6 was the actual finally install Python with pyenv with pyenv install <version>. E.g. pyenv install 3.8.2 Test in shell run pyenv global <verion that you've just installed> now check IDLE. In shell run idle. You should see IDLE window without any warnings and "text printed in red". now check tkinter. In shell run python -m tkinter -c "tkinter._test()". You should see test window like on the image: That's it! My environment: check this is something went wrong executing steps above: macOS Catalina zsh (included in macOS Catalina) = "shell" above Homebrew (installed with instructions from Homebrew official website) pyenv (installed with Homebrew and PATH updated according to pyenv official readme from GitHub) Python 3.8.x - 3.9.x (installed with pyenv install <version> command)
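To double-check the result from inside Python rather than from the shell, this quick check (run with the freshly installed pyenv interpreter) should report Tk 8.6 and open the test window:
import tkinter

print(tkinter.TkVersion)  # expect 8.6 if the Homebrew tcl-tk was picked up
tkinter._test()           # opens the small Tk test window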
13
19
61,791,651
2020-5-14
https://stackoverflow.com/questions/61791651/how-to-run-python-3-function-even-after-user-has-closed-web-browser-tab
I am having an issue at work with a python project I am working on (I normally use PHP/Java so am lacking a bit of knowledge). Basically I have a python program that I have built using Flask that connects an inventory management system to Shopify using the Shopify Python API. When the user triggers a function via an AJAX request, I need to start a function/process that updates products in the client's Shopify store via the Shopify API. This takes about 2 hours (~7000 products, which also have to be pulled first from the inventory management system). The issue is that I need a way to trigger this function/process so that, even if the client closes their browser, the function/process will continue running. If there is any way I could update the front end with the current progress of this background function/process as well, that would be swell. If anyone knows of any library or resources for accomplishing this it would be much appreciated. I have had a google, but all the solutions I can find seem to be for CLI scripts, not web scripts. Thanks heaps, Corey :)
You need to handle this task asynchronously because it's a long-running job that would dramatically slow down the HTTP response if you waited for it to finish. Also, note that you need to run this task in a process separate from the one that serves your HTTP request, because web servers (Gunicorn, uWSGI, etc.) manage the worker processes they create and free system resources whenever they need to. You can easily end up in a situation where the async process launched via Ajax is interrupted and killed by the web server because you closed the browser (request closed). So threading and coroutines are not the best tools for this task.
This is why there are some good task queue projects that solve your problem. Among them:
Celery: (production-ready solution) a task queue with a focus on real-time processing, while also supporting task scheduling. Works well with Redis and RabbitMQ as message brokers.
RQ (Redis Queue): a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis, is designed to have a low barrier to entry, and can be integrated into your web stack easily.
Taskmaster: a simple distributed queue designed for handling large numbers of one-off tasks.
Huey: a Redis-based task queue that aims to provide a simple yet flexible framework for executing tasks. Huey supports task scheduling, crontab-like repeating tasks, result storage and automatic retry in the event of failure.
Dramatiq: a fast and reliable alternative to Celery. It supports RabbitMQ and Redis as message brokers.
APScheduler: Advanced Python Scheduler (APScheduler) is a Python library that lets you schedule your Python code to be executed later, either just once or periodically.
And many more!
With the rise of microservices you can combine the power of task queues and containers: you can build a separate container (or containers) that handles your long-running tasks and updates your database(s), as in your case. If you can't use a microservices architecture yet, you can build a separate server that handles those tasks and keep the web server that handles the user requests free from running long-running tasks.
Finally, you can combine these solutions in your current website with a scenario like this (a minimal sketch follows after the list):
The user clicks a button.
The Ajax request triggers your backend (via an API or whatever).
You schedule a task in your message broker to run now or later (in a separate container/VPS...).
In your backend you retrieve the task ID of the task.
You return the task ID via the API (or whatever) and store it in the session cookies or in a separate table that tracks the user who launched the process.
With some JS you keep polling your backend for the status of the task, using the task ID you have (in the user session cookies or in your database).
Even if the user closes their browser, the task continues until it finishes or raises an exception. And with the task ID you already have, you can easily check the status of this task and send that information to the user (in the view when they log in again, by email, etc.).
And of course you can improve this scenario!
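As a concrete illustration of the enqueue-and-poll scenario above, here is a minimal sketch using Celery with a Redis broker; the endpoint names, the shop_id argument and the task body are placeholders of mine, not anything from the original question.
from celery import Celery
from flask import Flask, jsonify

# Celery wired to a local Redis broker/result backend (an assumption of this sketch).
celery_app = Celery('tasks',
                    broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')

@celery_app.task
def update_products_task(shop_id):
    # Stand-in for the ~2 hour Shopify sync described in the question.
    return 'done'

app = Flask(__name__)

@app.route('/start/<shop_id>', methods=['POST'])
def start(shop_id):
    result = update_products_task.delay(shop_id)  # enqueue and return immediately
    return jsonify({'task_id': result.id})

@app.route('/status/<task_id>')
def status(task_id):
    result = celery_app.AsyncResult(task_id)
    return jsonify({'state': result.state})  # PENDING / STARTED / SUCCESS / FAILURE
The worker runs in its own process (for example with celery -A tasks worker, assuming the file is called tasks.py), so it keeps going even after the browser tab is closed.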
8
7
61,890,687
2020-5-19
https://stackoverflow.com/questions/61890687/dash-app-refusing-to-start-127-0-0-1-refused-to-connect
I am trying to run the example dash application but upon trying to run, the browser says it is refusing to connect. I have checked and Google Chrome has access through the firewall. The example code is: import dash import dash_core_components as dcc import dash_html_components as html external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) app.layout = html.Div(children=[ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='example-graph', figure={ 'data': [ {'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'SF'}, {'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'}, ], 'layout': { 'title': 'Dash Data Visualization' } } ) ]) if __name__ == '__main__': app.run_server(debug=True) Here is a picture of my browser: Does anyone understand this?
First check that you are accessing the right port; the default one is usually 8050: http://localhost:8050/
Also check whether another Dash app is already running, as it might be occupying the port.
If that does not work, try specifying the host as an argument to app.run_server(), like this:
app.run_server(host='0.0.0.0', debug=True)
You might also want to specify the port as an argument, like this:
app.run_server(host='0.0.0.0', port=8050, debug=True)
13
11
61,895,282
2020-5-19
https://stackoverflow.com/questions/61895282/plotly-how-to-remove-empty-dates-from-x-axis
I have a Dataframe Date Category Sum 0 2019-06-03 "25M" 34 1 2019-06-03 "25M" 60 2 2019-06-03 "50M" 23 3 2019-06-04 "25M" 67 4 2019-06-05 "50M" -90 5 2019-06-05 "50M" 100 6 2019-06-06 "100M" 6 7 2019-06-07 "25M" -100 8 2019-06-08 "100M" 67 9 2019-06-09 "25M" 450 10 2019-06-10 "50M" 600 11 2019-06-11 "25M" -9 12 2019-07-12 "50M" 45 13 2019-07-13 "50M" 67 14 2019-07-14 "100M" 130 15 2019-07-14 "50M" 45 16 2019-07-15 "100M" 100 17 2019-07-16 "25M" -90 18 2019-07-17 "25M" 700 19 2019-07-18 "25M" -9 I want to create a plotly graph showing the addition of "Sum" for different "Category" on Every described date, but want to remove dates, if they don't have any data. Code df["Date"]=pd.to_datetime(df["Date"], format=("%Y%m%d")) df=df.sort_values(["Date","Category","Sum"],ascending=False) df=round(df.groupby(["Date","Category"]).agg({"Sum":"sum"}).reset_index(),1) fig = px.bar(df, x=df["Date"] , y='Sum',barmode="group",color="Category") fig.update_xaxes( rangeslider_visible=True, rangeselector=dict( buttons=list([ dict(count=1, label="day", step="day", stepmode="todate"), dict(count=24, label="montly", step="month", stepmode="todate"), dict(count=1, label="year", step="year", stepmode="todate"), dict(step="all") ]) )) fig.show() I am getting graph like this but I want to remove the empty Dates from the plotly graph
I had the same problem with my graph. Just add the following to the layout code:
xaxis=dict(type="category")
Note: I have used import plotly.graph_objs as go and NOT import plotly.express as px. This worked for me. Hope it helps you too.
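Since the question builds the figure with plotly.express, the same idea can be applied there through update_xaxes. A sketch assuming df and the bar chart are built exactly as in the question; note that a category axis makes the date-based rangeselector buttons largely meaningless:
import plotly.express as px

# df grouped by Date and Category with a Sum column, as in the question.
fig = px.bar(df, x='Date', y='Sum', barmode='group', color='Category')

# Treat the x-axis as categorical so dates with no rows are simply not drawn.
fig.update_xaxes(type='category')

fig.show()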
12
19
61,802,080
2020-5-14
https://stackoverflow.com/questions/61802080/excelwriter-valueerror-excel-does-not-support-datetime-with-timezone-when-savin
I'm running on this issue for quite a while now. I set the writer as follows: writer = pd.ExcelWriter(arquivo+'.xlsx', engine = 'xlsxwriter', options = {'remove_timezone': True}) df.to_excel(writer, header = True, index = True) This code is inside s function. The problem is every time I run the code, it gets information from the database, which contains two columns datetime64[ns, UTC] object with time zone info. But when the code to save to Excel runs I receive: ValueError: Excel does not support datetimes with timezones. Please ensure that datetimes are timezone unaware before writing to Excel. I have already tried several things like 'dt.tz_convert', replace(tzinfo=None) and other solutions I have found here and around. The code runs without problem in my personal computer, my colleague at work with the same machine specs can run the code. Only in my machine it doesn't. I already reinstalled python and all the packages, including formatting the machine and nothing, the error persists. xlrd v1.1.0 xlsxwriter v1.0.4 python 3.7.4 pandas v0.25.1 If someone could bring some light into this issue I would much appreciate it. Thanks
What format are your timestamps in? I just had a similar problem: I was trying to save a data frame to Excel and was getting the same error. I checked my date format, which looked like '2019-09-01T00:00:00.000Z'. This is a timestamp (pandas._libs.tslibs.timestamps.Timestamp) from pandas.to_datetime, which includes a date() method that converts the value into a "%Y-%m-%d" format that Excel accepts. So my code was something like:
# Pseudo
df['date'] = old_dates
df['date'] = df['date'].apply(lambda a: pd.to_datetime(a).date())  # .date() removes the timezone
# ...df.to_excel etc.
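If you need to keep the full timestamps rather than truncating them to dates, another approach is to strip the timezone with tz_localize(None) before writing. This is a sketch assuming the columns are datetime64[ns, UTC] as in the question; the output file name is arbitrary:
import pandas as pd

# Find the timezone-aware columns and drop their timezone info.
tz_cols = df.select_dtypes(include=['datetimetz']).columns
for col in tz_cols:
    df[col] = df[col].dt.tz_convert('UTC').dt.tz_localize(None)

with pd.ExcelWriter('output.xlsx', engine='xlsxwriter') as writer:
    df.to_excel(writer, header=True, index=True)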
39
23
61,819,842
2020-5-15
https://stackoverflow.com/questions/61819842/how-can-i-login-in-instagram-with-python-requests
Hello i am trying to login instagram with python requests library but when i try, instagram turns me "bad requests". İs anyone know how can i solve this problem? i searched to find a solve for this problem but i didnt find anything. Please help, thanks! it was working but after some time, it started to turn "bad request" this is full of my code: import os import requests import getpass import json import io import time X_SECOND = 60 BASE_URL = "https://www.instagram.com/" LOGIN_URL = BASE_URL + "accounts/login/ajax/" USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; ) Gecko/20100101 Firefox/65.0" CHANGE_URL = "https://www.instagram.com/accounts/web_change_profile_picture/" CHNAGE_DATA = {"Content-Disposition": "form-data", "name": "profile_pic", "filename": "profilepic.jpg", "Content-Type": "image/jpeg"} headers = { "Host": "www.instagram.com", "Accept": "*/*", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate, br", "Referer": "https://www.instagram.com/accounts/edit/", "X-IG-App-ID": "936619743392459", "X-Requested-With": "XMLHttpRequest", "DNT": "1", "Connection": "keep-alive", } session = requests.Session() session.headers = {'user-agent': USER_AGENT, 'Referer': BASE_URL} def login(): USERNAME = str(input('Username > ')) PASSWD = getpass.getpass('Password > ') resp = session.get(BASE_URL) session.headers.update({'X-CSRFToken': resp.cookies['csrftoken']}) login_data = {'username': USERNAME, 'password': PASSWD} login_resp = session.post(LOGIN_URL, data=login_data, allow_redirects=True) print(login_resp.text) # it turns "bad request" time.sleep(100) if login_resp.json()['authenticated']: print("Login successful") else: print("Login failed!") login() # print(login.json()) session.headers.update({'X-CSRFToken': login_resp.cookies['csrftoken']}) def save(): with open('cookies.txt', 'w+') as f: json.dump(session.cookies.get_dict(), f) with open('headers.txt', 'w+') as f: json.dump(session.headers, f) def load(): with open('cookies.txt', 'r') as f: session.cookies.update(json.load(f)) with open('headers.txt', 'r') as f: session.headers = json.load(f) def change(): session.headers.update(headers) try: print("wow"+str(i)) with open("./data/wow"+str(i)+".png", "rb") as resp: f = resp.read() print("1") p_pic = bytes(f) print("2") p_pic_s = len(f) print("3") session.headers.update({'Content-Length': str(p_pic_s)}) print("4") files = {'profile_pic': p_pic} print("5") r = session.post(CHANGE_URL, files=files, data=CHNAGE_DATA) print("6") if r.json()['changed_profile']: print("Profile picture changed!") else: print("Something went wrong") time.sleep(X_SECOND) except Exception as e: print(e) pass time.sleep(10) if __name__ == "__main__": i = 0 try: load() except: login() save() while True: if i == 12: i = 0 i += 1 change() AN UPDATE also the instaloader users are gettin same problem now https://github.com/instaloader/instaloader/issues/615
link = 'https://www.instagram.com/accounts/login/' login_url = 'https://www.instagram.com/accounts/login/ajax/' time = int(datetime.now().timestamp()) response = requests.get(link) csrf = response.cookies['csrftoken'] payload = { 'username': username, 'enc_password': f'#PWD_INSTAGRAM_BROWSER:0:{time}:{password}', 'queryParams': {}, 'optIntoOneTap': 'false' } login_header = { "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36", "X-Requested-With": "XMLHttpRequest", "Referer": "https://www.instagram.com/accounts/login/", "x-csrftoken": csrf } login_response = requests.post(login_url, data=payload, headers=login_header) json_data = json.loads(login_response.text) if json_data["authenticated"]: print("login successful") cookies = login_response.cookies cookie_jar = cookies.get_dict() csrf_token = cookie_jar['csrftoken'] print("csrf_token: ", csrf_token) session_id = cookie_jar['sessionid'] print("session_id: ", session_id) else: print("login failed ", login_response.text) You can find a complete guide here: Share a post into your Instagram account using the requests library.
8
12
61,867,945
2020-5-18
https://stackoverflow.com/questions/61867945/python-import-error-cannot-import-name-six-from-sklearn-externals
I'm using numpy and mlrose, and all i have written so far is: import numpy as np import mlrose However, when i run it, it comes up with an error message: File "C:\Users\<my username>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\mlrose\neural.py", line 12, in <module> from sklearn.externals import six ImportError: cannot import name 'six' from 'sklearn.externals' (C:\Users\<my username>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\externals\__init__.py) Any help on sorting this problem will be greatly appreciated.
Solution: The real answer is that the dependency needs to be changed by the mlrose maintainers. A workaround is: import six import sys sys.modules['sklearn.externals.six'] = six import mlrose
26
76
61,875,869
2020-5-18
https://stackoverflow.com/questions/61875869/ubuntu-20-04-upgrade-python-missing-libffi-so-6
I recently upgraded my OS to Ubuntu 20.04 LTS. Now when I try to import a library like Numpy in Python, I get the following error: ImportError: libffi.so.6: cannot open shared object file: No such file or directory I tried installing the libffi package, but apt can't locate it: sudo apt-get install libffi Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libffi
It seems like I fixed it. I could be wrong, but here is what I think happened:
Ubuntu 20.04 upgraded libffi6 to libffi7, while Python is still looking for libffi6.
What I did to fix it:
Locate libffi.so.7 on your system:
$ find /usr/lib -name "libffi.so*"
Create a symlink named libffi.so.6 that points to libffi.so.7:
sudo ln -s /usr/path/to/libffi.so.7 /usr/lib/path/to/libffi.so.6
UPDATE: As noted by many users, this fix could have unintended consequences. The better way is to reinstall Python, as @amichaud explained. The symlink should be used as a last resort if you're not using pyenv/virtualenv/etc., in which case removing Python would cause a lot of dependencies to be removed as well.
119
113
61,863,806
2020-5-18
https://stackoverflow.com/questions/61863806/stuck-in-watching-for-file-changes-with-statreloader
I made my project fine, and when I run my server through a normal shell it works. Now I am trying to run my project through Git Bash. All the commands seem to work fine, but when I do python manage.py runserver, it gets stuck on:
Watching for file changes with StatReloader
After that I go to localhost:8000, but neither it nor 127.0.0.1:8000 responds; both show that there's nothing there. What is going on?
You may be missing the port binding. Try running python manage.py runserver 0.0.0.0:8000 to be sure that the app is running on localhost:8000.
14
5
61,778,794
2020-5-13
https://stackoverflow.com/questions/61778794/python-ctypes-how-to-check-memory-management
So I'm using Python as a front end GUI that interacts with some C files for storage and memory management as a backend. Whenever the GUI's window is closed or exited, I call all the destructor methods for my allocated variables. Is there anyway to check memory leaks or availability, like a C Valgrind check, right before exiting the whole program to make sure there wasn't any memory leaks? Example exit: from tkinter import * root = Tk() # New GUI # some code here def destructorMethods: myFunctions.destructorLinkedList() # Destructor method of my allocated memory in my C file # Here is where I would want to run a Valgrind/Memory management check before closing root.destroy() # close the program root.protocol("WM_DELETE_WINDOW", destructorMethods) # When the close window option is pressed call destructorMethods function
If you want to use Valgrind, then this readme might be helpful. Probably, this could be another good resource to make Valgrind friendly python and use it in your program. But if you consider something else like tracemalloc, then you can easily get some example usage of it here. The examples are pretty easy to interpret. For example according to their doc, import tracemalloc tracemalloc.start() # ... run your application ... snapshot = tracemalloc.take_snapshot() top_stats = snapshot.statistics('lineno') print("[ Top 10 ]") for stat in top_stats[:10]: print(stat) This will output something like. <frozen importlib._bootstrap>:716: size=4855 KiB, count=39328, average=126 B <frozen importlib._bootstrap>:284: size=521 KiB, count=3199, average=167 > You can either parse this to plot memory usage for your investigation or you may use the reference doc to get a more concrete idea. In this case your program could be something like the following: from tkinter import * import tracemalloc root = Tk() # New GUI # some code here def destructorMethods: tracemalloc.start() myFunctions.destructorLinkedList() # Destructor method of my allocated memory in my C file # Here is where I would want to run a Valgrind/Memory management check before closing snapshot = tracemalloc.take_snapshot() top_stats = snapshot.statistics('lineno') print("[ Top 10 ]") for stat in top_stats[:10]: print(stat) root.destroy() # close the program root.protocol("WM_DELETE_WINDOW", destructorMethods) Another option is, you can use a memory profiler to see memory usage at a variable time. The package is available here. After the installation of this package, you can probably use the following command in your script to get the memory usage over time in a png file. mprof run --include-children python your_filename.py mprof plot --output timelyplot.png or you may use different functions available on memory_profiler package according to your need. Maybe this tutorial can be an interesting one for you.
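For the specific "did my destructors leak anything" question, comparing two tracemalloc snapshots taken around the cleanup call can be more telling than a single top-10 listing. A sketch follows; the destructor call is the asker's and is left commented out, and keep in mind that tracemalloc only sees allocations made through Python's allocator, not raw malloc calls inside the C extension:
import tracemalloc

tracemalloc.start()

before = tracemalloc.take_snapshot()
# myFunctions.destructorLinkedList()  # the asker's C-backed cleanup
after = tracemalloc.take_snapshot()

# Lines that gained (or lost) the most memory between the two snapshots.
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)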
9
4
61,850,321
2020-5-17
https://stackoverflow.com/questions/61850321/django-channels-vs-django-3-0-3-1
Can someone clarify the differences or complementarities between Django Channels Project and new Django native async support? From what I understood, Django-Channels is a project that have been started outside of Django, and then, started to be integrated in the core Django. But the current state of this work remains confusing to me. For example, today I'm using Django 2.2, and I'd like to add WebSocket support to my project. Should I: Upgrade to the latest Django version? Use Django Channels package? Do both actions?
today I'm using Django 2.2, and I'd like to add WebSocket support to my project. If you want to add websocket support to your app, at the moment you don't need to upgrade to django 3.0. Django 2.2 plus channels can do that - and for the time being is the best way forward. (Although there's absolutely no harm in upgrading to django 3.0 if you don't have any good reason not to). I will try and further explain why in this answer. From what I understood, Django-Channels is a project that have been started outside of Django, and then, started to be integrated in the core Django. But the current state of this work remains confusing to me. Yes, my understanding is that channels started out as a project from one of the core Django developers (Andrew Godwin - who has also been instrumental in bringing about the async changes brought in Django 3.0). It is not included automatically if you just install Django, but it is officially part of the django project, and has been since september 2016 (see here). It's now on version 2.4 and so is an established and stable project that can be used to add websockets to your django app. So What's going on with Django 3.x and async? Whilst channels adds a way to add some async functionality to your django app, Django at it's core is still synchonous. The 'async' project that is being gradually introduced addresses this. The key thing to note here is that it's being introduced gradually. Django is made up of several layers: WSGI server (not actually part of django): deals with the protocol of actually accepting an HTTP request Base Handler: This takes the request passed to it from the server and makes sure it's sent through the middleware, and the url config, so that we end up with a django request object, and a view to pass it to. The view layer (which does whatever you tell it to) The ORM, and all the other lovely stuff you get with Django, that we can call from the view. Now to fully benefit from async, we really need all of these layers to be async, otherwise there won't really be any performance benefit. This is a fairly big project, hence why it is being rolled out gradually: With the release of django 3.0, all that was really added was the ability to talk to an ASGI sever, (rather than just a WSGI one). When Django 3.1 is released (expected august 2020) it is anticipated that there will be capabilities for asynchronous middleware and views. Then finally in django 3.2, or maybe even 4.0 we will get async capabilities up and down the whole of Django. Once we get to that final point, it may be worth considering using the async features of Django for stuff like web-sockets, but at the moment we can't even take advantage of the fact we can now deal with ASGI as well as WSGI servers. You can use Django with an ASGI server, but there would be no point as the base handler is still synchronous. TLDR Django channels adds a way to deal with protocols other than HTTP, and adds integrations into things such as django's session framework and authentication framework, so it's easy to add things like websockets to your django project. It is complete and you can start working with it today!!. Native async support is a fundemental re-write of the core of Django. This is a work in progress. It's very exciting, but won't be ready to really benefit from for a little while yet. There was a good talk given at last years djangoCon outlining the plans for async django. You can view it here.
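For concreteness, here is a minimal sketch of what adding a websocket to a Django 2.2 project with channels 2.x looks like; the module and route names are placeholders, and you still need 'channels' in INSTALLED_APPS plus ASGI_APPLICATION pointing at this application object:
# consumers.py
from channels.generic.websocket import AsyncWebsocketConsumer

class EchoConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        await self.send(text_data=text_data)  # echo the message back

# routing.py
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from .consumers import EchoConsumer

application = ProtocolTypeRouter({
    # plain http keeps going through the normal Django views
    'websocket': URLRouter([
        path('ws/echo/', EchoConsumer),  # in channels 3.x this becomes EchoConsumer.as_asgi()
    ]),
})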
18
30
61,822,379
2020-5-15
https://stackoverflow.com/questions/61822379/with-django-csrf-exempt-request-session-is-always-empty
I am stuck in django and would really appreciate it if someone could help me. I need to have an entry point for a 3rd party API. So I created a view and decorated it with @csrf_exempt Now the problem is I am not able to access any session variables I set before. edit - I set multiple session variables like user email to know if a user is already logged in. I was able to use the session before calling the 3rd party API. When the 3rd party API sends a response, they don't send CSRF token hence I have exempt that view from csrf. Once I receive a valid response I want to update my database. To do that, I need to know the email id of the user which I lost since I don't have session variables anymore. ppConfirmPaymentProcess is another function that processes the POST data sent by this 3rd party API. Everything is working fine, csrf_exempt is also working fine but I can't do request.session["foo"] with this request. Can someone please help? @csrf_exempt def ppConfirmPayment(request): print(request.session, "=======================================") for key, value in request.session.items(): print('{} => {}'.format(key, value)) return ppConfirmPaymentProcess(request)
I solved it using Django itself. No manipulation of session-id or interaction with the database. Step1: call 3rd party api @login_required def thirdPartyAPICall(request): #do some stuff and send a request to 3rd party Step2: receive a call-back from 3rd party in the view. Note how I put csrf_exempt and not login_required so that 3rd party can send a request to my application without CSRF token and session. It's like an entry point to my APP for them. In this callBackView do some action and check if this indeed is a valid response from the 3rd party or someone is trying to hack your system. E.g. check for CHECKSUM or TXNID etc and then create a response dictionary and sent another HTTP response to another resource using HttpResponseRedirect with-in my app and then I passed relevant GET parameter to it. This particular step restores my previous session and now I have the relevant data from the 3rd party to process the request I sent to them and I also got my session back in the request. @csrf_exempt def callBackView(request): if request.POST["CHECKSUM"] == myCalCulatedCheckSum: foo = True else: foo = False return HttpResponseRedirect("TEST.HTML" +"/" + str(foo)) I like this method the most because as I mentioned before, we don't need to store session, Django does it for us.
9
0
61,890,366
2020-5-19
https://stackoverflow.com/questions/61890366/flask-session-log-out-and-redirect-to-login-page
I am using Flask,Python for my web application . The user will login and if the session time is more than 5 minutes then the app should come out and it should land on the login page. I tried some of the methods and I can see the session time out is happening but redirect to login page is not happening. @app.before_request def before_request(): "Session time out method" flask.session.permanent = True app.permanent_session_lifetime = datetime.timedelta(minutes=2) flask.session.modified = True flask.g.user = flask_login.current_user #return redirect(url_for('login')) I have used before_request for seesion time out. I have referrred this link Flask logout if sessions expires if no activity and redirect for login page but I dont see any changes from what I have tried before and this code. I can see lot of stackoverflow questions over there for this topic and I couldnt find the solution. I have tried this link als Expire session in flask in ajax context But I am not sure what should I pass as a session and what qualifier I should return here? @mod.before_request def make_session_permanent(): if session_is_invalid(session): return redirect(url_for('logout')) def session_is_invalid(ses): # return your qualifier if the previous method is correct can some one tell me what is the session and what qualifier I should return here? What I need is after session log out the page should automatically land on the login screen What is happening is session log out is happening but it's not redirecting to login page could some one help me in this?
When I read the documentation of the Flask-Login package, I saw a few things. When creating your Flask application, you also need to create a login manager:
login_manager = LoginManager()
The login_view attribute of the LoginManager class caught my attention. The docs include the following explanation: "The name of the view to redirect to when the user needs to log in. (This can be an absolute URL as well, if your authentication machinery is external to your application.)" So after you create a LoginManager object, you should point it at your login page:
login_manager.login_view = 'your login view'
Finally, once the actual application object has been created, you can configure it for login with:
login_manager.init_app(app)
After doing this, any unauthorized call to a view decorated with @login_required will be redirected to the page you pointed to with login_view. I also developed a simple application; you can review it from here. I tested it and it works without any problems. I hope it helps you.
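Pulling the pieces together with the 5-minute timeout from the question, a compressed sketch might look like the following; the user_loader is left as a stub and the view bodies are placeholders, so treat it as wiring rather than a working app:
from datetime import timedelta
from flask import Flask, session
from flask_login import LoginManager, login_required

app = Flask(__name__)
app.secret_key = 'change-me'  # placeholder
app.permanent_session_lifetime = timedelta(minutes=5)

login_manager = LoginManager()
login_manager.login_view = 'login'  # endpoint name of the login view below
login_manager.init_app(app)

@login_manager.user_loader
def load_user(user_id):
    return None  # stub: look the user up in your own user store

@app.before_request
def make_session_permanent():
    session.permanent = True  # so the 5-minute lifetime is actually applied

@app.route('/login')
def login():
    return 'login page'

@app.route('/dashboard')
@login_required
def dashboard():
    # once the session expires, @login_required redirects back to 'login'
    return 'dashboard'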
7
9
61,840,060
2020-5-16
https://stackoverflow.com/questions/61840060/how-to-detect-subscript-numbers-in-an-image-using-ocr
I am using tesseract for OCR, via the pytesseract bindings. Unfortunately, I encounter difficulties when trying to extract text including subscript-style numbers - the subscript number is interpreted as a letter instead. For example, in the basic image: I want to extract the text as "CH3", i.e. I am not concerned about knowing that the number 3 was a subscript in the image. My attempt at this using tesseract is: import cv2 import pytesseract img = cv2.imread('test.jpeg') # Note that I have reduced the region of interest to the known # text portion of the image text = pytesseract.image_to_string( img[200:300, 200:320], config='-l eng --oem 1 --psm 13' ) print(text) Unfortunately, this will incorrectly output 'CHs' It's also possible to get 'CHa', depending on the psm parameter. I suspect that this issue is related to the "baseline" of the text being inconsistent across the line, but I'm not certain. How can I accurately extract the text from this type of image? Update - 19th May 2020 After seeing Achintha Ihalage's answer, which doesn't provide any configuration options to tesseract, I explored the psm options. Since the region of interest is known (in this case, I am using EAST detection to locate the bounding box of the text), the psm config option for tesseract, which in my original code treats the text as a single line, may not be necessary. Running image_to_string against the region of interest given by the bounding box above gives the output CH 3 which can, of course, be easily processed to get CH3.
You want to do apply pre-processing to your image before feeding it into tesseract to increase the accuracy of the OCR. I use a combination of PIL and cv2 to do this here because cv2 has good filters for blur/noise removal (dilation, erosion, threshold) and PIL makes it easy to enhance the contrast (distinguish the text from the background) and I wanted to show how pre-processing could be done using either... (use of both together is not 100% necessary though, as shown below). You can write this more elegantly- it's just the general idea. import cv2 import pytesseract import numpy as np from PIL import Image, ImageEnhance img = cv2.imread('test.jpg') def cv2_preprocess(image_path): img = cv2.imread(image_path) # convert to black and white if not already img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # remove noise kernel = np.ones((1, 1), np.uint8) img = cv2.dilate(img, kernel, iterations=1) img = cv2.erode(img, kernel, iterations=1) # apply a blur # gaussian noise img = cv2.threshold(cv2.GaussianBlur(img, (9, 9), 0), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] # this can be used for salt and pepper noise (not necessary here) #img = cv2.adaptiveThreshold(cv2.medianBlur(img, 7), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2) cv2.imwrite('new.jpg', img) return 'new.jpg' def pil_enhance(image_path): image = Image.open(image_path) contrast = ImageEnhance.Contrast(image) contrast.enhance(2).save('new2.jpg') return 'new2.jpg' img = cv2.imread(pil_enhance(cv2_preprocess('test.jpg'))) text = pytesseract.image_to_string(img) print(text) Output: CH3 The cv2 pre-process produces an image that looks like this: The enhancement with PIL gives you: In this specific example, you can actually stop after the cv2_preprocess step because that is clear enough for the reader: img = cv2.imread(cv2_preprocess('test.jpg')) text = pytesseract.image_to_string(img) print(text) output: CH3 But if you are working with things that don't necessarily start with a white background (i.e. grey scaling converts to light grey instead of white)- I have found the PIL step really helps there. Main point is the methods to increase accuracy of the tesseract typically are: fix DPI (rescaling) fix brightness/noise of image fix tex size/lines (skewing/warping text) Doing one of these or all three of them will help... but the brightness/noise can be more generalizable than the other two (at least from my experience).
11
4
61,863,309
2020-5-18
https://stackoverflow.com/questions/61863309/package-requires-a-different-python-2-7-17-not-in-3-6-1-while-setting-up-pr
I cloned a repository, installed pre-commit and was committing for the first time. This is the time when pre-commit packages actually get installed and setup. I faced the following issue. [INFO] Installing environment for https://github.com/asottile/seed-isort-config. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... An unexpected error has occurred: CalledProcessError: command: ('/home/roopak/.cache/pre-commit/repokb2ckm/py_env-python2.7/bin/python', u'/home/roopak/.cache/pre-commit/repokb2ckm/py_env-python2.7/bin/pip', 'install', '.') return code: 1 expected return code: 0 stdout: Processing /home/roopak/.cache/pre-commit/repokb2ckm stderr: DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support ERROR: Package 'seed-isort-config' requires a different Python: 2.7.17 not in '>=3.6.1'
The issue was that I have both Python 2.7 and 3 installed, and my pre-commit was using Python 2.7 as the default. Solution 1: remove pre-commit from Python 2.7 and add it to Python 3. As per the creator of pre-commit - @anthony-sottile - it is better to use pre-commit with Python 3. To do that we have to uninstall pre-commit from Python 2.7 and install it via Python 3. $ pip uninstall pre-commit # uninstall from Python2.7 $ pip3 install pre-commit # install with Python3 Solution 2: keeping pre-commit with Python 2.7 (not recommended) To solve this I used default_language_version from the pre-commit documentation. Refer: https://pre-commit.com/#overriding-language-version By setting default_language_version, all hooks will use that particular version. If any particular hook needs to override it, the language_version: property may be set on that hook. Eg:- default_language_version: # force all unspecified python hooks to run python3 python: python3 repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v2.5.0 hooks: - id: trailing-whitespace name: trim trailing whitespace description: This hook trims trailing whitespace on files entry: trailing-whitespace-fixer - id: check-merge-conflict name: check for merge conflict description: Prevent accidentally committing files with merge conflicts. language_version: python: python2.7 This example .pre-commit-config.yaml file sets the default Python version to Python 3. For the hook check-merge-conflict it will use Python 2.7.
8
9
61,880,977
2020-5-19
https://stackoverflow.com/questions/61880977/how-to-create-a-color-bar-in-an-osmnx-plot
Currently I have created a color map based on the distance of the nodes in the network to a specific target. The one thing I am not being able to do is a color bar. I would like the color bar to show me how much time the color indicates. The time data is in data['time']. Each color will indicate how long it will take to go from the node to the target. I have defined the speed of the car. For example, a color bar that ranges from 0 to 60 min. But in this case it would go to the maximum value of data['time']. Here is what I have tried: import networkx as nx import matplotlib.pyplot as plt import osmnx as ox import pandas as pd from shapely.wkt import loads as load_wkt import numpy as np import matplotlib.cm as cm ox.config(log_console=True, use_cache=True) place = {'city': 'Lisbon', 'country': 'Portugal'} G = ox.graph_from_place(place, network_type='drive') hospitals = ox.pois_from_place(place, amenities=['hospital']) hosp_1 = hospitals.iloc[21]['geometry'] # Hospital Santa Maria coord_1 = (38.74817825481225, -9.160815118526642) # Coordinate Hospital Santa Maria target_1 = ox.get_nearest_node(G, coord_1) nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True) # Transforms nodes and edges into Geodataframes travel_speed = 20 # km/h meters_per_minute = travel_speed * 1000 / 60 nodes['shortest_route_length_to_target'] = 0 route_lengths = [] i = 0 # print(G.edges(data=True)) for u, v, k, data in G.edges(data=True, keys=True): data['time'] = data['length'] / meters_per_minute for node in G.nodes: try: route_length = nx.shortest_path_length(G, node, target_1, weight='time') route_lengths.append(route_length) nodes['shortest_route_length_to_target'][node] = route_length except nx.exception.NetworkXNoPath: continue def get_colors(n, cmap='viridis', start=0., stop=1., alpha=1.): colors = [cm.get_cmap(cmap)(x) for x in np.linspace(start, stop, n)] colors = [(r, g, b, alpha) for r, g, b, _ in colors] return colors def get_node_colors_by_attr(G, attr, num_bins=None, cmap='viridis', start=0, stop=1, na_color='none'): if num_bins is None: num_bins = len(G.nodes()) bin_labels = range(num_bins) # attr_values = pd.Series([data[attr] for node, data in G.nodes(data=True)]) attr_values = pd.Series(nodes[attr].values) # Cretaes a dataframe ith the attribute of each node # print(attr_values) cats = pd.qcut(x=attr_values, q=num_bins, labels=bin_labels) # Puts the values in bins # print(cats) colors = get_colors(num_bins, cmap, start, stop) #List of colors of each bin node_colors = [colors[int(cat)] if pd.notnull(cat) else na_color for cat in cats] return node_colors nc = get_node_colors_by_attr(G, attr='shortest_route_length_to_target', num_bins=10) ns = [80 if node == target_1 else 20 for node in G.nodes()] k = 0 for node in G.nodes(): if node == target_1: nc[k] = str('red') k += 1 else: k += 1 G = ox.project_graph(G) cmap = plt.cm.get_cmap('viridis') norm=plt.Normalize(vmin=0, vmax=1) sm = mpl.cm.ScalarMappable(norm=norm, cmap=cmap) sm.set_array([]) fig, ax = ox.plot_graph(G, node_color=nc, node_size=ns, edge_linewidth=0.5, fig_height = 13, fig_width =13, bgcolor = 'white') plt.colorbar(sm) The graph I obtain is the following:
I had faced this same issue earlier without having enough motivation to solve it, but I have managed to do it this time (and your attempt helped a lot as well). Note that I have changed the normalised values so that the colorbar means something on the figure instead of just ranging from 0 to 1. import matplotlib as mpl G = ox.project_graph(G) cmap = plt.cm.get_cmap('viridis') norm=plt.Normalize(vmin=nodes['shortest_route_length_to_target'].min(), vmax=nodes['shortest_route_length_to_target'].max()) sm = mpl.cm.ScalarMappable(norm=norm, cmap=cmap) sm.set_array([]) fig, ax = ox.plot_graph(G, node_color=nc, node_size=ns, edge_linewidth=0.5, fig_height = 13, fig_width =13, bgcolor = 'white', show=False) cb = fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax, orientation='horizontal') cb.set_label('shortest_route_length_to_target', fontsize = 20) fig.savefig('demo.png')
11
10
61,842,649
2020-5-16
https://stackoverflow.com/questions/61842649/renaming-months-from-number-to-name-in-pandas
i have the following dataframe: High Low Open Close Volume Adj Close year pct_day month day 1 1 NaN NaN NaN NaN NaN NaN 2010.0 0.000000 2 7869.853149 7718.482498 7779.655014 7818.089966 7.471689e+07 7818.089966 2010.0 0.007826 3 7839.965652 7719.758224 7775.396255 7777.940002 8.185879e+07 7777.940002 2010.0 0.002582 4 7747.175260 7624.540007 7691.152083 7686.288672 1.018877e+08 7686.288672 2010.0 -0.000744 5 7348.487095 7236.742135 7317.313616 7287.688546 1.035424e+08 7287.688546 2010.0 -0.002499 ... ... ... ... ... ... ... ... ... ... 12 27 7849.846680 7760.222526 7810.902051 7798.639258 4.678145e+07 7798.639258 2009.5 -0.000833 28 7746.209996 7678.152204 7713.497907 7710.449358 4.187133e+07 7710.449358 2009.5 0.000578 29 7357.001540 7291.827806 7319.393874 7338.938345 4.554891e+07 7338.938345 2009.5 0.003321 30 7343.726938 7276.871507 7322.123779 7302.545316 3.967812e+07 7302.545316 2009.5 -0.000312 31 NaN NaN NaN NaN NaN NaN 2009.5 0.000000 Since it is not clear from the above pasted dataframe, below is a snapshot: The months are in 1,2 3 ... Is it possible to rename the month index to Jan Feb Mar format? Edit : I am having a hard time implementing the example by @ChihebNexus My code is as follows since it is a datetime : full_dates = pd.date_range(start, end) data = data.reindex(full_dates) data['year'] = data.index.year data['month'] = data.index.month data['week'] = data.index.week data['day'] = data.index.day data.set_index('month',append=True,inplace=True) data.set_index('week',append=True,inplace=True) data.set_index('day',append=True,inplace=True) df = data.groupby(['month', 'day']).mean()
I would do it using calendar and pd.CategoricalDtype to ensure sorting works correctly. import pandas as pd import numpy as np import calendar #Create dummy dataframe dateindx = pd.date_range('2019-01-01', '2019-12-31', freq='D') df = pd.DataFrame(np.random.randint(0,1000, (len(dateindx), 5)), index=pd.MultiIndex.from_arrays([dateindx.month, dateindx.day]), columns=['High', 'Low','Open', 'Close','Volume']) #Use calendar library for abbreviations and order dd=dict((enumerate(calendar.month_abbr))) #rename level zero of multiindex df = df.rename(index=dd,level=0) #Create calendar month data type with order for sorting cal_dtype = pd.CategoricalDtype(list(calendar.month_abbr), ordered=True) #Change the dtype of the level zero index df.index = df.index.set_levels(df.index.levels[0].astype(cal_dtype), level=0) df Output: High Low Open Close Volume Jan 1 501 720 671 943 586 2 410 67 207 945 284 3 473 481 527 415 852 4 157 809 484 592 894 5 294 38 458 62 945 ... ... ... ... ... ... Dec 27 305 354 347 0 726 28 764 987 564 260 72 29 730 151 846 137 118 30 999 399 634 674 81 31 347 980 441 600 676 [365 rows x 5 columns]
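To connect this back to the frame from the question's edit, here is a sketch (my addition; it assumes `df` is the result of `data.groupby(['month', 'day']).mean()` with an integer month level):

```python
import calendar

# Map the numeric month level (1-12) to 'Jan'..'Dec' on the question's own result.
df = df.rename(index=dict(enumerate(calendar.month_abbr)), level=0)
```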
7
6
61,899,474
2020-5-19
https://stackoverflow.com/questions/61899474/polynomial-regression-using-statsmodels-formula-api
Please forgive my ignorance. All I'm trying to do is add a squared term to my regression without going through the trouble of defining a new column in my dataframe. I'm using statsmodels.formula.api (as stats) because the format is similar to R, which I am more familiar with. hours_model = stats.ols(formula='act_hours ~ h_hours + C(month) + trend', data = df).fit() The above works as expected. hours_model = stats.ols(formula='act_hours ~ h_hours + h_hours**2 + C(month) + trend', data = df).fit() This omits h_hours**2 and returns the same output as the line above. I've also tried: h_hours^2, math.pow(h_hours,2), and poly(h_hours,2) All throw errors. Any help would be appreciated.
You can try using I() like in R: import numpy as np import pandas as pd import statsmodels.formula.api as smf np.random.seed(0) df = pd.DataFrame({'act_hours':np.random.uniform(1,4,100),'h_hours':np.random.uniform(1,4,100), 'month':np.random.randint(0,3,100),'trend':np.random.uniform(0,2,100)}) model = 'act_hours ~ h_hours + I(h_hours**2)' hours_model = smf.ols(formula = model, data = df) hours_model.exog[:5,] array([[ 1. , 3.03344961, 9.20181654], [ 1. , 1.81002392, 3.27618659], [ 1. , 3.20558207, 10.27575638], [ 1. , 3.88656564, 15.10539244], [ 1. , 1.74625943, 3.049422 ]])
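As a follow-up (an addition, not part of the original answer), the model with the squared term is fitted and inspected the usual way:

```python
# Continuing the snippet above: fit and look at the estimated coefficients.
results = hours_model.fit()
print(results.params)    # includes a coefficient for I(h_hours ** 2)
print(results.summary())
```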
8
27
61,794,582
2020-5-14
https://stackoverflow.com/questions/61794582/plotly-how-to-only-show-vertical-and-horizontal-line-crosshair-as-hoverinfo
I want to plot a chart with two subplots in plotly dash. My entire chart looks like this: import pandas as pd import numpy as np import dash import dash_core_components as dcc import dash_html_components as html import plotly.graph_objs as go from plotly.subplots import make_subplots df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv').iloc[:60] fig = make_subplots(rows=2, cols=1, row_heights=[0.8, 0.2], vertical_spacing=0) fig.add_trace(go.Candlestick(open=df['AAPL.Open'], high=df['AAPL.High'], low=df['AAPL.Low'], close=df['AAPL.Close'], increasing_line_color='#0384fc', decreasing_line_color='#e8482c', name='AAPL'), row=1, col=1) fig.add_trace(go.Scatter(y=np.random.randint(20, 40, len(df)), marker_color='#fae823', name='VO', hovertemplate=[]), row=2, col=1) fig.update_layout({'plot_bgcolor': "#21201f", 'paper_bgcolor': "#21201f", 'legend_orientation': "h"}, legend=dict(y=1, x=0), font=dict(color='#dedddc'), dragmode='pan', hovermode='x unified', margin=dict(b=20, t=0, l=0, r=40)) fig.update_xaxes(showgrid=False, zeroline=False, rangeslider_visible=False, showticklabels=False, showspikes=True, spikemode='across', spikesnap='data', showline=False, spikedash='solid') fig.update_yaxes(showgrid=False, zeroline=False) fig.update_traces(xaxis='x', hoverinfo='none') app = dash.Dash(__name__) app.layout = html.Div(children=[ html.Div(dcc.Graph(id='chart', figure=fig, config={'displayModeBar': False}))]) if __name__ == '__main__': app.run_server(debug=True, dev_tools_ui=False, dev_tools_props_check=False) What I need is a so called crosshair that is common in trading charts. Basically it consists of two lines that are connected to x and y axes and moves with cursor. This is a screenshot from tradingview.com charts: However in my chart there is a little icon that appears when the cursor is on candlesticks: What I have found out so far is that when the cursor is on the scatter plot, the icon disappears and it works fine. I think that is because I set hovertemplate=[] in the scatterplot. I cannot do that in the candlestick plot because there is no such parameter for it. Moreover, this icon only appears if I set hovermode='x unified'. If I set it to x, the little icon doesn't appear. But I need it to be exactly like the tradingview.com example that I showed. Is there any way to replicate that crosshair? UPDATE 1: I tried fig.update_layout(hoverdistance=0). But the problem is that when the cursor is not on the candlesticks, the crosshair is just not right. I took two screenshots: the first one is from tradingview.com charts and the second one is from my code with hoverdistance set to 0. As can be seen, when the cursor is not on the candlesticks, in the first screenshot the crosshair is still correct. However, in the second screenshot it is just not working correctly. It only works if the cursor is on the candlesticks ONLY. I just want to copy tradingview.com crosshair. Nothing less and nothing more. UPDATE 2: I think the answer could be on these plotly docs. I am working on it currently. Please share your comments about this update.
This should do it: fig.update_layout(hoverdistance=0) And setting spikesnap='cursor' for xaxes and yaxes. These little adjustments will keep the crosshair intact and remove the little icon that has been bothering you. From the docs: Plot: hoverdistance Sets the default distance (in pixels) to look for data to add hover labels (-1 means no cutoff, 0 means no looking for data). This is only a real distance for hovering on point-like objects, like scatter points. For area-like objects (bars, scatter fills, etc) hovering is on inside the area and off outside, but these objects will not supersede hover on point-like objects in case of conflict. Complete code: (but with no dash elements) import pandas as pd import numpy as np import plotly.graph_objs as go from plotly.subplots import make_subplots df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv').iloc[:60] fig = make_subplots(rows=2, cols=1, row_heights=[0.8, 0.2], vertical_spacing=0) fig.add_trace(go.Candlestick(open=df['AAPL.Open'], high=df['AAPL.High'], low=df['AAPL.Low'], close=df['AAPL.Close'], increasing_line_color='#0384fc', decreasing_line_color='#e8482c', name='AAPL'), row=1, col=1) fig.add_trace(go.Scatter(y=np.random.randint(20, 40, len(df)), marker_color='#fae823', name='VO', hovertemplate=[]), row=2, col=1) fig.update_layout({'plot_bgcolor': "#21201f", 'paper_bgcolor': "#21201f", 'legend_orientation': "h"}, legend=dict(y=1, x=0), font=dict(color='#dedddc'), dragmode='pan', hovermode='x unified', margin=dict(b=20, t=0, l=0, r=40)) fig.update_yaxes(showgrid=False, zeroline=False, showticklabels=False, showspikes=True, spikemode='across', spikesnap='cursor', showline=False, spikedash='solid') fig.update_xaxes(showgrid=False, zeroline=False, rangeslider_visible=False, showticklabels=False, showspikes=True, spikemode='across', spikesnap='cursor', showline=False, spikedash='solid') fig.update_layout(hoverdistance=0) fig.update_traces(xaxis='x', hoverinfo='none') fig.show()
19
21
61,902,426
2020-5-19
https://stackoverflow.com/questions/61902426/cased-vs-uncased-bert-models-in-spacy-and-train-data
I want to use spacy's pretrained BERT model for text classification but I'm a little confused about cased/uncased models. I read somewhere that cased models should only be used when there is a chance that letter casing will be helpful for the task. In my specific case: I am working with German texts. And in German all nouns start with the capital letter. So, I think, (correct me if I'm wrong) that this is the exact situation where cased model must be used. (There is also no uncased model available for German in spacy). But what must be done with data in this situation? Should I (while preprocessing train data) leave it as it is (by that I mean not using the .lower() function) or it doesn't make any difference?
As a non-German-speaker, your comment about nouns being uppercase does make it seem like case is more relevant for German than it might be for English, but that doesn't obviously mean that a cased model will give better performance on all tasks. For something like part-of-speech detection, case would probably be enormously helpful for the reason you describe, but for something like sentiment analysis, it's less clear whether the added complexity of having a much larger vocabulary is worth the benefits. (As a human, you could probably imagine doing sentiment analysis with all lowercase text just as easily.) Given that the only model available is the cased version, I would just go with that - I'm sure it will still be one of the best pretrained German models you can get your hands on. Cased models have separate vocab entries for differently-cased words (e.g. in english the and The will be different tokens). So yes, during preprocessing you wouldn't want to remove that information by calling .lower(), just leave the casing as-is.
25
21
61,900,138
2020-5-19
https://stackoverflow.com/questions/61900138/pytorch-caught-indexerror-in-dataloader-worker-process-0-indexerror-too-man
I am trying to implement a detection model based on "finetuning object detection" official tutorial of PyTorch. It seemed to have worked with minimal data, (for 10 of images). However I uploaded my whole dataset to Drive and checked the index-data-label correspondences. There are not unmatching items in my setup, I have all the errors in that part solved. (I deleted extra items from the labels on GDrive) class SomeDataset(torch.utils.data.Dataset): def __init__(self, root_path, transforms): self.root_path = root_path self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root_path, "images")))) self.labels = list(sorted(os.listdir(os.path.join(root_path, "labels")))) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root_path, "images", self.imgs[idx]) label_path = os.path.join(self.root_path, "labels", self.labels[idx]) img = Image.open(img_path).convert("RGB") # get labels and boxes label_data = np.loadtxt(label_path, dtype=str, delimiter=' '); print(f"{len(label_data)} is the length of label data") num_objs = label_data.shape[0]; if num_objs != 0: print(f"number of objects {num_objs}") # label values should start from 1 for i,label_name in enumerate(classnames): label_data[np.where(label_name==label_data)] = i; label_data = label_data.astype(np.float); print(f"label data {label_data}") xs = label_data[:,0:8:2]; ys = label_data[:,1:8:2]; x_min = np.min(xs, axis=1)[...,np.newaxis]; x_max = np.max(xs, axis=1)[...,np.newaxis]; y_min = np.min(ys, axis=1)[...,np.newaxis]; y_max = np.max(ys, axis=1)[...,np.newaxis]; boxes = np.hstack((x_min,y_min,x_max,y_max)); labels = label_data[:,8]; else: # if there is no label add background whose label is 0 boxes = [[0,0,1,1]]; labels = [0]; num_objs = 1; boxes = torch.as_tensor(boxes, dtype=torch.float32) labels = torch.as_tensor(labels, dtype=torch.int64) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["image_id"] = image_id target["area"] = area target["iscrowd"] = iscrowd if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs) My main method is like the following, def main(): # train on the GPU or on the CPU, if a GPU is not available device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # our dataset has 16 classes - background and others num_classes = 16 # use our dataset and defined transformations dataset = SomeDataset('trainImages', get_transform(train=True)) print(f"{len(dataset)} number of images in training dataset") dataset_validation = SomeDataset('valImages', get_transform(train=True)) print(f"{len(dataset_validation)} number of images in validation dataset") # define training and validation data loaders data_loader = torch.utils.data.DataLoader( dataset, batch_size=20, shuffle=True, num_workers=4, collate_fn=utils.collate_fn) data_loader_val = torch.utils.data.DataLoader( dataset_validation, batch_size=10, shuffle=False, num_workers=4, collate_fn=utils.collate_fn) # get the model using our helper function #model = get_model_instance_segmentation(num_classes) model = get_rcnn(num_classes); # move model to the right device model.to(device) # construct an optimizer params = [p for p in model.parameters() if 
p.requires_grad] #optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005); optimizer = torch.optim.Adam(params, lr=0.0005); # and a learning rate scheduler lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1) # let's train it for 10 epochs num_epochs = 5 for epoch in range(num_epochs): # train for one epoch, printing every 10 iterations train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=100) # update the learning rate lr_scheduler.step() # evaluate on the test dataset #evaluate(model, data_loader_test, device=device) print("That's it!") return model; When I run my code, it runs for a few number of data (for example 10 of them) and then stops and gives out this error. IndexError: Caught IndexError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "<ipython-input-114-e0ccd94603fd>", line 31, in __getitem__ xs = label_data[:,0:8:2]; IndexError: too many indices for array The error goes from model = main() to > train_one_epoch() and goes on. I do not understand why this is happening. Also, this is an example from one instance of dataset, (<PIL.Image.Image image mode=RGB size=1024x1024 at 0x7F46FC0A94A8>, {'boxes': tensor([[ 628., 6., 644., 26.], [ 633., 50., 650., 65.], [ 620., 27., 637., 44.], [ 424., 193., 442., 207.], [ 474., 188., 496., 204.], [ 383., 226., 398., 236.], [ 399., 218., 418., 231.], [ 42., 189., 63., 203.], [ 106., 159., 129., 169.], [ 273., 17., 287., 34.], [ 225., 961., 234., 980.], [ 220., 1004., 230., 1024.]]), 'labels': tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]), 'image_id': tensor([0]), 'area': tensor([320., 255., 289., 252., 352., 150., 247., 294., 230., 238., 171., 200.]), 'iscrowd': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])})
When using the np.loadtxt() method, make sure to add ndmin=2 as a parameter (the parameter name is ndmin, not ndims). When a label file contains only one object, np.loadtxt() returns a flat 1-D vector, so num_objs = label_data.shape[0] ends up counting the columns (10 in this case) instead of the objects, and 2-D slicing such as label_data[:,0:8:2] then fails with the IndexError above. ndmin=2 makes sure that the output of np.loadtxt() never collapses to a row or column vector and is always a 2-dimensional array.
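A minimal sketch of the corrected call inside the question's `__getitem__` (names taken from the question; the rest of the method stays the same):

```python
import numpy as np

# ndmin=2 keeps a single-object label file as a (1, n_columns) array instead of
# a flat vector, so shape[0] really counts objects and 2-D slicing keeps working.
label_data = np.loadtxt(label_path, dtype=str, delimiter=' ', ndmin=2)
num_objs = label_data.shape[0]
xs = label_data[:, 0:8:2]   # no more "too many indices for array"
```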
7
1
61,890,674
2020-5-19
https://stackoverflow.com/questions/61890674/run-python-script-in-jenkins
I want to run a python script from Jenkins using Jenkinsfile. Is there any way to run it directly from Jenkinsfile. I found python plugin(Click Here) in Jenkins to run a script, but there is no proper documentation for this plugin. It would be very helpful if anyone explains how to integrate this plugin with Jenkinsfile.
"Adds the ability to execute python scripts as build steps. Other than that, this plugin works pretty much like the standard shell script support" - per the docs of the plugin. Though I have not used this plugin through a pipeline, from a job perspective you just provide the .py script (filename and path), the same way you provide a shell/PowerShell script. Similarly, even for Python, you'll be executing the script on a node, which will be either Linux or Windows. So, it would work as below : stage('build') { steps { sh 'python abc.py' } } References : https://www.jenkins.io/doc/pipeline/tour/hello-world/ Look out for the "Python" block.
12
16
61,888,674
2020-5-19
https://stackoverflow.com/questions/61888674/can-you-plot-interquartile-range-as-the-error-band-on-a-seaborn-lineplot
I'm plotting time series data using seaborn lineplot (https://seaborn.pydata.org/generated/seaborn.lineplot.html), and plotting the median instead of mean. Example code: import seaborn as sns; sns.set() import matplotlib.pyplot as plt fmri = sns.load_dataset("fmri") ax = sns.lineplot(x="timepoint", y="signal", estimator = np.median, data=fmri) I want the error bands to show the interquartile range as opposed to the confidence interval. I know I can use ci = "sd" for standard deviation, but is there a simple way to add the IQR instead? I cannot figure it out. Thank you!
I don't know if this can be done with seaborn alone, but here's one way to do it with matplotlib, keeping the seaborn style. The describe() method conveniently provides summary statistics for a DataFrame, among them the quartiles, which we can use to plot the medians with inter-quartile-ranges. import seaborn as sns; sns.set() import matplotlib.pyplot as plt fmri = sns.load_dataset("fmri") fmri_stats = fmri.groupby(['timepoint']).describe() x = fmri_stats.index medians = fmri_stats[('signal', '50%')] medians.name = 'signal' quartiles1 = fmri_stats[('signal', '25%')] quartiles3 = fmri_stats[('signal', '75%')] ax = sns.lineplot(x, medians) ax.fill_between(x, quartiles1, quartiles3, alpha=0.3);
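Worth noting as an addition (this postdates the original answer, so treat it as a version-dependent suggestion): seaborn 0.12+ exposes an errorbar parameter, and a 50% percentile interval is exactly the IQR:

```python
import numpy as np
import seaborn as sns

fmri = sns.load_dataset("fmri")
# ("pi", 50) = percentile interval covering the middle 50% of the data,
# i.e. the band runs from the 25th to the 75th percentile.
ax = sns.lineplot(x="timepoint", y="signal", estimator=np.median,
                  errorbar=("pi", 50), data=fmri)
```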
8
13
61,787,520
2020-5-14
https://stackoverflow.com/questions/61787520/i-want-to-make-a-multi-page-help-command-using-discord-py
I am using discord.py to make a bot, and there are more commands than can fit on one page for my custom help command. I want the bot to add 2 reactions, back and forward, then the user that sent the help message can pick one, and go onto different pages of the help command. I want the bot to be able to edit the message to show the second page, and if they go back, then edit back to the original first page. Could anyone help with this? This is similar to the owobot definitions, where you can scroll back and forth between definitions.
This method would be using Client.wait_for(), and can be easily adapted if you have any other ideas for it. Example @bot.command() async def pages(ctx): contents = ["This is page 1!", "This is page 2!", "This is page 3!", "This is page 4!"] pages = 4 cur_page = 1 message = await ctx.send(f"Page {cur_page}/{pages}:\n{contents[cur_page-1]}") # getting the message object for editing and reacting await message.add_reaction("◀️") await message.add_reaction("▶️") def check(reaction, user): return user == ctx.author and str(reaction.emoji) in ["◀️", "▶️"] # This makes sure nobody except the command sender can interact with the "menu" while True: try: reaction, user = await bot.wait_for("reaction_add", timeout=60, check=check) # waiting for a reaction to be added - times out after x seconds, 60 in this # example if str(reaction.emoji) == "▶️" and cur_page != pages: cur_page += 1 await message.edit(content=f"Page {cur_page}/{pages}:\n{contents[cur_page-1]}") await message.remove_reaction(reaction, user) elif str(reaction.emoji) == "◀️" and cur_page > 1: cur_page -= 1 await message.edit(content=f"Page {cur_page}/{pages}:\n{contents[cur_page-1]}") await message.remove_reaction(reaction, user) else: await message.remove_reaction(reaction, user) # removes reactions if the user tries to go forward on the last page or # backwards on the first page except asyncio.TimeoutError: await message.delete() break # ending the loop if user doesn't react after x seconds If your editor doesn't support pasting in the emojis directly, you can use a website such as this one to find the unicodes of the emojis instead. In this case, the backwards arrow was \u25c0 and the forwards arrow was \u25b6. Other than that, you should be good to go! The message will delete itself after 60 seconds of inactivity in that message (i.e. nobody reacting with the arrows), but just change the number if you want a longer period before deletion. Alternatively, you can add in a third emoji, such as a cross, which will delete the message on demand. References: Message.add_reaction() Message.remove_reaction() Client.wait_for() Message.edit() Message.delete() asyncio.TimeoutError - Exception for when user doesn't react in time
7
24
61,861,739
2020-5-18
https://stackoverflow.com/questions/61861739/plotly-how-to-set-custom-xticks
From plotly doc: layout > xaxis > tickvals: Sets the values at which ticks on this axis appear. Only has an effect if tickmode is set to "array". Used with ticktext. layout > xaxis > ticktext: Sets the text displayed at the ticks position via tickvals. Only has an effect if tickmode is set to "array". Used with tickvals. Example: import pandas as pd import numpy as np np.random.seed(42) feature = pd.DataFrame({'ds': pd.date_range('20200101', periods=100*24, freq='H'), 'y': np.random.randint(0,20, 100*24) , 'yhat': np.random.randint(0,20, 100*24) , 'price': np.random.choice([6600, 7000, 5500, 7800], 100*24)}) import plotly.graph_objects as go import plotly.offline as py import plotly.express as px from plotly.offline import init_notebook_mode init_notebook_mode(connected=True) y = feature.set_index('ds').resample('D')['y'].sum() fig = go.Figure() fig.add_trace(go.Scatter(x=y.index, y=y)) x_dates = y.index.to_series().dt.strftime('%Y-%m-%d').sort_values().unique() layout = dict( xaxis=dict( tickmode="array", tickvals=np.arange(0, x_dates.shape[0],2).astype(int), ticktext=x_dates[::2], tickformat='%Y-%m-%d', tickangle=45, ) ) fig.update_layout(layout) fig.show() Result: Since length of x_dates[::2] is 50 , the ticknumber doesn't match at all . How do I sovle it ??
I normally use the approach below. You should know that tickvals is to be regarded as a positional argument and works best (perhaps only) with numerical values and not dates. Use ticktext to display the dates in your preferred format. Snippet 1: fig.update_xaxes(tickangle=45, tickmode = 'array', tickvals = df_tips['date'][0::40], ticktext= [d.strftime('%Y-%m-%d') for d in datelist]) Plot 1: Now you can change tickvals=np.arange(0, y.shape[0]).astype(int)[0::40] to tickvals=np.arange(0, y.shape[0]).astype(int)[0::80] and get: Plot 2: So why didn't this work for you the first time? A number of reasons: Your resampled pandas series y had dates as indexes, so y.index were set as x-axis values. y.index returns dates When you set the tickmark positions through fig.update_xaxes(tickvals), this works better with integer values. And what did I do to fix it? I reset the index after resampling so that y.index does not return dates. Changed y to a dataframe using .to_frame. Changed to fig.add_trace(go.Scatter(x=y.index, y=y.y)) which otherwise would have failed since this is now a dataframe and not a series. Your dates are now retrieved as x_dates = y.ds I know y=y.y looks really weird, but I just left it like that as a friendly reminder to give your pandas series or dataframes more sensible names than a single letter like y that's more likely to be confused with a single, index-free, array or list. Complete code: import pandas as pd import numpy as np import plotly.graph_objects as go import plotly.offline as py import plotly.express as px from plotly.offline import init_notebook_mode # data np.random.seed(42) feature = pd.DataFrame({'ds': pd.date_range('20200101', periods=100*24, freq='H'), 'y': np.random.randint(0,20, 100*24) , 'yhat': np.random.randint(0,20, 100*24) , 'price': np.random.choice([6600, 7000, 5500, 7800], 100*24)}) # resampling y = feature.set_index('ds').resample('D')['y'].sum()#.to_frame() y=y.to_frame() y.reset_index(inplace=True) # plotly setup fig = go.Figure() fig.add_trace(go.Scatter(x=y.index, y=y.y)) # x-ticks preparations x_dates = y.ds tickvals=np.arange(0, y.shape[0]).astype(int)[0::40] ticktext=x_dates # update tickmarks fig.update_xaxes(tickangle=45, tickmode = 'array', tickvals = tickvals, ticktext=[d.strftime('%Y-%m-%d') for d in ticktext]) fig.show()
12
22
61,878,019
2020-5-18
https://stackoverflow.com/questions/61878019/install-optional-dependencies-with-tox
I use tox to test a python project with the following basic config (tox.ini): [tox] envlist = py3 isolated_build = True [testenv] deps = pytest pytest-cov commands = pytest --cov {envsitepackagesdir}/foobar --cov-report xml --cov-report term Unfortunately, the package's optional dependencies (as specified in setup.cfg) don't get installed; the corresponding line in raw pip would be pip install .[all] How to make tox install all optional dependencies?
The supported way to do this is to use the extras key in your testenv for example: [testenv] deps = -rrequirements-dev.txt extras = typed this will install .[typed] or -e .[typed] if usedevelop = true disclaimer: I'm one of the tox maintainers
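For the question's pip install .[all] case, a minimal sketch of the resulting tox.ini (assuming the extra really is named all in setup.cfg) would be:

```ini
[testenv]
deps =
    pytest
    pytest-cov
extras =
    all
commands =
    pytest --cov {envsitepackagesdir}/foobar --cov-report xml --cov-report term
```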
9
14
61,872,923
2020-5-18
https://stackoverflow.com/questions/61872923/supporting-both-form-and-json-encoded-bodys-with-fastapi
I've been using FastAPI to create an HTTP based API. It currently supports JSON encoded parameters, but I'd also like to support form-urlencoded (and ideally even form-data) parameters at the same URL. Following on Nikita's answer I can get separate urls working with: from typing import Optional from fastapi import FastAPI, Body, Form, Depends from pydantic import BaseModel class MyItem(BaseModel): id: Optional[int] = None txt: str @classmethod def as_form(cls, id: Optional[int] = Form(None), txt: str = Form(...)) -> 'MyItem': return cls(id=id, txt=txt) app = FastAPI() @app.post("/form") async def form_endpoint(item: MyItem = Depends(MyItem.as_form)): print("got item =", repr(item)) return "ok" @app.post("/json") async def json_endpoint(item: MyItem = Body(...)): print("got item =", repr(item)) return "ok" and I can test these using curl by doing: curl -X POST "http://localhost:8000/form" -d 'txt=test' and curl -sS -X POST "http://localhost:8000/json" -H "Content-Type: application/json" -d '{"txt":"test"}' It seems like it would be nicer to have a single URL that accepts both content-types and have the model parsed out appropriately. But the above code currently fails with either: {"detail":[{"loc":["body","txt"],"msg":"field required","type":"value_error.missing"}]} or {"detail":"There was an error parsing the body"} if I post to the "wrong" endpoint, e.g. form encoding posted to /json. For bonus points; I'd also like to support form-data encoded parameters as it seems related (my txt can get rather long in practice), but might need to turn it into another question if it's sufficiently different.
FastAPI can't route based on Content Type, you'd have to check that in the request and parse appropriately: @app.post('/') async def route(req: Request) -> Response: if req.headers['Content-Type'] == 'application/json': item = MyItem(** await req.json()) elif req.headers['Content-Type'] == 'multipart/form-data': item = MyItem(** await req.form()) elif req.headers['Content-Type'] == 'application/x-www-form-urlencoded': item = MyItem(** await req.form()) return Response(content=item.json()) There seems to be an open issue on GitHub regarding this functionality
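One practical wrinkle (my addition, not part of the original answer): real multipart requests send the boundary as part of the header (e.g. multipart/form-data; boundary=...), so an exact string comparison can miss them; checking the prefix is slightly more robust. This sketch reuses req, MyItem and Response from the snippet above:

```python
content_type = req.headers.get('Content-Type', '')
if content_type.startswith('application/json'):
    item = MyItem(**await req.json())
elif content_type.startswith(('multipart/form-data',
                              'application/x-www-form-urlencoded')):
    item = MyItem(**await req.form())
else:
    return Response(status_code=415)  # unsupported media type
```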
10
14
61,878,026
2020-5-18
https://stackoverflow.com/questions/61878026/eigenvectors-are-complex-but-only-for-large-matrices
I'm trying to calculate the eigenvectors and eigenvalues of this matrix import numpy as np la = 0.02 mi = 0.08 n = 500 d1 = np.full(n, -(la+mi), np.double) d1[0] = -la d1[-1] = -mi d2 = np.full(n-1, la, np.double) d3 = np.full(n-1, mi, np.double) A = np.diagflat(d1) + np.diagflat(d2, -1) + np.diag(d3, 1) e_values, e_vectors = np.linalg.eig(A) If I set the dimensions of the matrix to n < 110 the output is fine. However, if I set it to n >= 110 both the eigenvalues and the eigenvector components become complex numbers with significant imaginary parts. Why does this happen? Is it supposed to happen? It is very strange behavior and frankly I'm kind of stuck.
What you are seeing appears to be fairly normal roundoff error. This is an unfortunate result of storing floating point numbers with a finite precision. It naturally gets relatively worse for large problems. Here is a plot of the real vs. imaginary components of the eigenvalues: You can see that the imaginary numbers are effectively noise. This is not to say that they are not important. Here is a plot of the imaginary vs. real part, showing that the ratio can get as large as 0.06 in the worst case: This ratio changes with respect to the absolute and relative quantities la and mi. If you multiply both by 10, you get If you keep la = 0.02 and set mi = 0.8, you get a smaller imaginary part: Things get really weird when you do the opposite, and increase la by a factor of 10, keeping mi as-is: The relative precision of the calculation decreases for smaller eigenvalues, so this is not too surprising. Given the relatively small magnitudes of the imaginary parts (at least for the important eigenvalues), you can either take the magnitude or the real part of the result since you know that all the eigenvalues are real.
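Following the answer's last suggestion, here is a small sketch (my addition) of how you might discard the numerical noise once you know the spectrum is real:

```python
import numpy as np

e_values, e_vectors = np.linalg.eig(A)   # A as built in the question

# Sanity-check how large the spurious imaginary parts actually are,
# then keep only the real parts.
print(np.max(np.abs(e_values.imag)))
e_values = e_values.real
e_vectors = e_vectors.real
```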
7
7
61,879,166
2020-5-18
https://stackoverflow.com/questions/61879166/pandas-groupby-month-and-year-date-as-datetime64ns-and-summarized-by-count
I have a data frame, which I created in pandas, grouping by date and summarizing by rides. date rides 0 2019-01-01 247279 1 2019-01-02 585996 2 2019-01-03 660631 3 2019-01-04 662011 4 2019-01-05 440848 .. ... ... 451 2020-03-27 218499 452 2020-03-28 143305 453 2020-03-29 110833 454 2020-03-30 207743 455 2020-03-31 199623 [456 rows x 2 columns] My date column is in datetime64[ns]. date datetime64[ns] rides int64 dtype: object Now I would like to create another data frame, grouping by month and year (I have data form 2019 and 2020) and summarize by rides. Ideal output: Year Month Rides 2019 January 2000000 2020 March 1000000
you can groupby and get the dt.year and the dt.month_name from the column date. print (df.groupby([df['date'].dt.year.rename('year'), df['date'].dt.month_name().rename('month')]) ['rides'].sum().reset_index()) year month rides 0 2019 January 2596765 1 2020 March 880003
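One caveat worth adding (my note, not from the original answer): month names sort alphabetically, so if you need calendar order you can also carry the month number and sort on it:

```python
out = (df.groupby([df['date'].dt.year.rename('year'),
                   df['date'].dt.month.rename('month_num'),
                   df['date'].dt.month_name().rename('month')])['rides']
         .sum()
         .reset_index()
         .sort_values(['year', 'month_num'])
         .drop(columns='month_num'))
print(out)
```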
7
11
61,875,963
2020-5-18
https://stackoverflow.com/questions/61875963/pytorch-row-wise-dot-product
Suppose I have two tensors: a = torch.randn(10, 1000, 1, 4) b = torch.randn(10, 1000, 6, 4) Where the third index is the index of a vector. I want to take the dot product between each vector in b with respect to the vector in a. To illustrate, this is what I mean: dots = torch.Tensor(10, 1000, 6, 1) for i in range(10): for j in range(1000): for v in range(6): dots[i,j,v] = torch.dot(b[i,j,v], a[i,j,0]) How would I achieve this using torch functions?
a = torch.randn(10, 1000, 1, 4) b = torch.randn(10, 1000, 6, 4) c = torch.sum(a * b, dim=-1) print(c.shape) torch.Size([10, 1000, 6]) c = c.unsqueeze(-1) print(c.shape) torch.Size([10, 1000, 6, 1])
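An equivalent alternative (my addition, not from the original answer) that keeps the trailing singleton dimension directly is a batched matrix multiply:

```python
import torch

a = torch.randn(10, 1000, 1, 4)
b = torch.randn(10, 1000, 6, 4)

# (10, 1000, 6, 4) @ (10, 1000, 4, 1) -> (10, 1000, 6, 1)
dots = torch.matmul(b, a.transpose(-1, -2))
print(dots.shape)  # torch.Size([10, 1000, 6, 1])

# Same values as the elementwise-multiply-and-sum approach above
assert torch.allclose(dots, (a * b).sum(dim=-1, keepdim=True))
```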
8
13
61,842,432
2020-5-16
https://stackoverflow.com/questions/61842432/pyqt5-and-asyncio
Is it possible to keep a UDP server running as an asynchronous function receiving data and then passing it to an (PyQt5) widget which is also running as an asynchronous function?? The idea is that when the data coming into the server is updated, it also updates the widget. I have got a simple UDP server and a (PyQt5) widget already which are working fine independently but I am struggling trying to combine them and keep them both running asynchronously and exchanging data(Server transmitting data to widget) [UPDATE] Below is a widget that I am trying out import sys from PyQt5 import QtWidgets, QtCore, QtGui from PyQt5.QtWidgets import QApplication, QMainWindow import asyncio class Speedometer(QMainWindow): angleChanged = QtCore.pyqtSignal(float) def __init__(self, parent = None): QtWidgets.QWidget.__init__(self, parent) self._angle = 0.0 self._margins = 20 self._pointText = {0: "40", 30: "50", 60: "60", 90: "70", 120: "80", 150:"" , 180: "", 210: "", 240: "0", 270: "10", 300: "20", 330: "30", 360: ""} def paintEvent(self, event): painter = QtGui.QPainter() painter.begin(self) painter.setRenderHint(QtGui.QPainter.Antialiasing) painter.fillRect(event.rect(), self.palette().brush(QtGui.QPalette.Window)) self.drawMarkings(painter) self.drawNeedle(painter) painter.end() def drawMarkings(self, painter): painter.save() painter.translate(self.width()/2, self.height()/2) scale = min((self.width() - self._margins)/120.0, (self.height() - self._margins)/60.0) painter.scale(scale, scale) font = QtGui.QFont(self.font()) font.setPixelSize(10) metrics = QtGui.QFontMetricsF(font) painter.setFont(font) painter.setPen(self.palette().color(QtGui.QPalette.Shadow)) i = 0 while i < 360: if i % 30 == 0 and (i <150 or i > 210): painter.drawLine(0, -40, 0, -50) painter.drawText(-metrics.width(self._pointText[i])/2.0, -52, self._pointText[i]) elif i <135 or i > 225: painter.drawLine(0, -45, 0, -50) painter.rotate(15) i += 15 painter.restore() def drawNeedle(self, painter): painter.save() painter.translate(self.width()/2, self.height()/1.5) painter.rotate(self._angle) scale = min((self.width() - self._margins)/120.0, (self.height() - self._margins)/120.0) painter.scale(scale, scale) painter.setPen(QtCore.Qt.NoPen) painter.setBrush(self.palette().brush(QtGui.QPalette.Shadow)) painter.drawPolygon( QtGui.QPolygon([QtCore.QPoint(-10, 0), QtCore.QPoint(0, -45), QtCore.QPoint(10, 0), QtCore.QPoint(0, 5), QtCore.QPoint(-10, 0)]) ) painter.setBrush(self.palette().brush(QtGui.QPalette.Highlight)) painter.drawPolygon( QtGui.QPolygon([QtCore.QPoint(-5, -25), QtCore.QPoint(0, -45), QtCore.QPoint(5, -25), QtCore.QPoint(0, -30), QtCore.QPoint(-5, -25)]) ) painter.restore() def sizeHint(self): return QtCore.QSize(150, 150) def angle(self): return self._angle # @pyqtSlot(float) def setAngle(self, angle): if angle != self._angle: self._angle = angle self.angleChanged.emit(angle) self.update() angle = QtCore.pyqtProperty(float, angle, setAngle) @staticmethod def mainLoopSpd(): while True: app = QApplication(sys.argv) window = QtWidgets.QWidget() spd = Speedometer() spinBox = QtWidgets.QSpinBox() #spd.setAngle(100) spinBox.setRange(0, 359) spinBox.valueChanged.connect(spd.setAngle) layout = QtWidgets.QVBoxLayout() layout.addWidget(spd) layout.addWidget(spinBox) window.setLayout(layout) window.show() app.exec_() #await asyncio.sleep(1) sys.exit(app.exec_()) Below is an implementation of a UDP socket end which also is printing the values in the console import socket class UDPserver: def __init__(self, parent= None): self.localIP = 
"127.0.0.1" self.localPort = 20002 self.bufferSize = 1024 self.UDPServerSocket = socket.socket(family= socket.AF_INET, type=socket.SOCK_DGRAM) # Create a socket object self.UDPServerSocket.bind((self.localIP, self.localPort)) print("UDP server up and listening") self.counter= 1 @staticmethod def mainLoopUDPserver(): serv= UDPserver() #while(True): bytesAddressPair = serv.UDPServerSocket.recvfrom(serv.bufferSize) # Receive data from the socket message = bytesAddressPair[0] # The output of the recvfrom() function is a 2-element array # First element is the message address = bytesAddressPair[1] # Second element is the address of the sender newMsg= "{}".format(message) serv.counter=serv.counter+1 NumMssgReceived = "#Num of Mssg Received:{}".format(serv.counter) newMsg= newMsg.replace("'","") newMsg= newMsg.replace("b","") newMsg= newMsg.split("/") eastCoord= float(newMsg[0]) northCoord= float(newMsg[1]) vehSpeed= float(newMsg[2]) agYaw= float(newMsg[3]) eastCoordStr="East Coordinate:{}".format(newMsg[0]) northCoordStr="North Coordinate:{}".format(newMsg[1]) vehSpeedStr= "Vehicle Speed:{}".format(newMsg[2]) agYawStr="Yaw Angle:{}".format(newMsg[3]) print(NumMssgReceived) print(vehSpeedStr) and below is the main function which is calling them both from speedometer import Speedometer import asyncio from pyServer import UDPserver class mainApp: #vel = 0 def __init__(self): self.velo = 0 self.queue= asyncio.Queue(0) async def server(self): while True: self.velo= UDPserver.mainLoopUDPserver() print("THIS IS VELO{}",self.velo) #await self.queue.put(self.velo) #vel= await self.queue.get() #return vel #print("ASSDASDSADSD{}",vel) await asyncio.sleep(0) #print("HI, vel Received={}",self.veloc) #return velo async def widget(self): while True: #vel = await self.queue.get() #print("Hola xDDDDDDD", vel) print(">>>>>>>>>>>>>>>NextIteration>>>>>>>>>>>>>>") await Speedometer.mainLoopSpd() await asyncio.sleep(0) loop= asyncio.get_event_loop() mApp= mainApp() loop.create_task(mApp.server()) loop.create_task(mApp.widget()) loop.run_forever() So, when I run it, it listens to the server and once I start sending data over UDP, it receives the first piece of data and open the widget which is runs just fine but it makes the server to stop, it does not receive any data anymore. As you can see in the comments, I have also been playing around with Asyncio queues, but I haven't got anything really. My ideal scenario would be the server receiving the data and passing it to the widget so it gets updated with the incoming data but for now I just want them both working independently. Thanks
It should be clear that your UDP server does not run asynchronously. The logic of asyncio is that everything uses an eventloop as a base, and by default Qt does not support it, so you must use libraries such as qasync(python -m pip install qasync) and asyncqt(python -m pip install asyncqt) Considering the above, the solution is: speedometer.py from PyQt5 import QtCore, QtGui, QtWidgets class Speedometer(QtWidgets.QWidget): angleChanged = QtCore.pyqtSignal(float) def __init__(self, parent=None): super().__init__(parent) self._angle = 0.0 self._margins = 20 self._pointText = { 0: "40", 30: "50", 60: "60", 90: "70", 120: "80", 150: "", 180: "", 210: "", 240: "0", 270: "10", 300: "20", 330: "30", 360: "", } def paintEvent(self, event): painter = QtGui.QPainter(self) painter.setRenderHint(QtGui.QPainter.Antialiasing) painter.fillRect(event.rect(), self.palette().brush(QtGui.QPalette.Window)) self.drawMarkings(painter) self.drawNeedle(painter) def drawMarkings(self, painter): painter.save() painter.translate(self.width() / 2, self.height() / 2) scale = min( (self.width() - self._margins) / 120.0, (self.height() - self._margins) / 60.0, ) painter.scale(scale, scale) font = QtGui.QFont(self.font()) font.setPixelSize(10) metrics = QtGui.QFontMetricsF(font) painter.setFont(font) painter.setPen(self.palette().color(QtGui.QPalette.Shadow)) i = 0 while i < 360: if i % 30 == 0 and (i < 150 or i > 210): painter.drawLine(0, -40, 0, -50) painter.drawText( -metrics.width(self._pointText[i]) / 2.0, -52, self._pointText[i] ) elif i < 135 or i > 225: painter.drawLine(0, -45, 0, -50) painter.rotate(15) i += 15 painter.restore() def drawNeedle(self, painter): painter.save() painter.translate(self.width() / 2, self.height() / 1.5) painter.rotate(self._angle) scale = min( (self.width() - self._margins) / 120.0, (self.height() - self._margins) / 120.0, ) painter.scale(scale, scale) painter.setPen(QtCore.Qt.NoPen) painter.setBrush(self.palette().brush(QtGui.QPalette.Shadow)) painter.drawPolygon( QtGui.QPolygon( [ QtCore.QPoint(-10, 0), QtCore.QPoint(0, -45), QtCore.QPoint(10, 0), QtCore.QPoint(0, 5), QtCore.QPoint(-10, 0), ] ) ) painter.setBrush(self.palette().brush(QtGui.QPalette.Highlight)) painter.drawPolygon( QtGui.QPolygon( [ QtCore.QPoint(-5, -25), QtCore.QPoint(0, -45), QtCore.QPoint(5, -25), QtCore.QPoint(0, -30), QtCore.QPoint(-5, -25), ] ) ) painter.restore() def sizeHint(self): return QtCore.QSize(150, 150) def angle(self): return self._angle @QtCore.pyqtSlot(float) def setAngle(self, angle): if angle != self._angle: self._angle = angle self.angleChanged.emit(angle) self.update() angle = QtCore.pyqtProperty(float, angle, setAngle) if __name__ == "__main__": import sys import asyncio from asyncqt import QEventLoop app = QtWidgets.QApplication(sys.argv) loop = QEventLoop(app) asyncio.set_event_loop(loop) with loop: w = Speedometer() w.angle = 10 w.show() loop.run_forever() server.py import asyncio from PyQt5 import QtCore class UDPserver(QtCore.QObject): dataChanged = QtCore.pyqtSignal(float, float, float, float) def __init__(self, parent=None): super().__init__(parent) self._transport = None self._counter_message = 0 @property def transport(self): return self._transport def connection_made(self, transport): self._transport = transport def datagram_received(self, data, addr): self._counter_message += 1 print("#Num of Mssg Received: {}".format(self._counter_message)) message = data.decode() east_coord_str, north_coord_str, veh_speed_str, ag_yaw_str, *_ = message.split( "/" ) try: east_coord = 
float(east_coord_str) north_coord = float(north_coord_str) veh_speed = float(veh_speed_str) ag_yaw = float(ag_yaw_str) self.dataChanged.emit(east_coord, north_coord, veh_speed, ag_yaw) except ValueError as e: print(e) main.py import sys import asyncio from PyQt5 import QtCore, QtWidgets from asyncqt import QEventLoop from speedometer import Speedometer from server import UDPserver class Widget(QtWidgets.QWidget): def __init__(self, parent=None): super().__init__(parent) self.spd = Speedometer() self.spinBox = QtWidgets.QSpinBox() self.spinBox.setRange(0, 359) self.spinBox.valueChanged.connect(lambda value: self.spd.setAngle(value)) layout = QtWidgets.QVBoxLayout(self) layout.addWidget(self.spd) layout.addWidget(self.spinBox) @QtCore.pyqtSlot(float, float, float, float) def set_data(self, east_coord, north_coord, veh_speed, ag_yaw): print(east_coord, north_coord, veh_speed, ag_yaw) self.spd.setAngle(veh_speed) async def create_server(loop): return await loop.create_datagram_endpoint( lambda: UDPserver(), local_addr=("127.0.0.1", 20002) ) def main(): app = QtWidgets.QApplication(sys.argv) loop = QEventLoop(app) asyncio.set_event_loop(loop) w = Widget() w.resize(640, 480) w.show() with loop: _, protocol = loop.run_until_complete(create_server(loop)) protocol.dataChanged.connect(w.set_data) loop.run_forever() if __name__ == "__main__": main()
9
7
61,855,161
2020-5-17
https://stackoverflow.com/questions/61855161/any-workaround-to-add-token-authorization-decorator-to-endpoint-at-swagger-pytho
I know how to secure endpoint in flask, and I want to do the same thing to swagger generated python server stub. I am wondering how I can integrate flask token authentication works for the swagger python server, so the endpoint will be secured. I could easily add token authentication decorator to endpoint in flask. This is how things works in flask-restplus and this one below is totally working: from flask import Flask, request, jsonify from flask_restplus import Api, Resource app = Flask(__name__) authorizations = { 'apikey' : { 'type' : 'apiKey', 'in' : 'header', 'name' : 'X-API-KEY' }, } api = Api(app, security = 'apikey',authorizations=authorizations) def token_required(f): @wraps(f) def decorated(*args, **kwargs): token = None if 'X-API-KEY' in request.headers: token = request.headers['X-API-KEY'] if not token: return {'message' : 'Token is missing.'}, 401 if token != 'mytoken': return {'message' : 'Your token is wrong, wrong, wrong!!!'}, 401 print('TOKEN: {}'.format(token)) return f(*args, **kwargs) return decorated class classResource(Resource): @api.doc(security='apikey') @token_required def get(self): return "this is test" how to make Bearer Authentication at swagger generated server stub: I am wondering how am I gonna integrate this authentication to swagger generated python server stub. Here is how spec file begins: openapi: 3.0.2 info: title: test api version: 1.0.0 servers: - url: /api/v1/ description: Example API Service paths: /about: get: summary: general summary description: get current version responses: '200': description: About information content: application/json: schema: $ref: '#/components/schemas/version' '401': description: Authorization information is missing or invalid. components: securitySchemes: BearerAuth: scheme: bearer type: http security: - BearerAuth: [] controller at swagger python server stub: update: my new attempt: here is default_controller that generated by swagger python server stub and I tried as follow: import connexion import six @api.doc(security='apikey') @token_required def about_get(): # noqa: E501 return 'do some magic!' but authorize button is missing. why? in swagger python server stub, I have also authorization_controller which has following code logic: from typing import List def check_BearerAuth(token): return {'test_key': 'test_value'} update: here in swagger python server stub. about_get() is one endpoint and it is not secured right now. How can we secured that like what we did in flask? any thought? how can I add above flask token authentication to about_get() in swagger python server stub? Is there any way of doing this? any idea?
Update Here is a example yaml to use JWT as bearer format: https://github.com/zalando/connexion/blob/master/examples/openapi3/jwt/openapi.yaml After you generate the flask server, on the swagger-ui you can find the 'Authorize' button. And if you execute /secret before 'Authorize' you will get a 401 error. So for your situation, you have to change it into: openapi: 3.0.2 info: title: test api version: 1.0.0 servers: - url: /api/v1/ description: Example API Service paths: /about: get: summary: general summary description: get current version security: - jwt: ['secret'] responses: '200': description: About information content: application/json: schema: type: string components: securitySchemes: jwt: type: http scheme: bearer bearerFormat: JWT x-bearerInfoFunc: app.decode_token Hence, after you have installed connexion[swagger-ui] and start the server by python -m swagger_server. Then, navigate to http://0.0.0.0:8080/api/v1/ui/, you can test the auth works properly. If you call the /about before authorize, it will hit a 401 error. To add auth from code: from flask_restx import Api authorizations = { 'Bearer Auth': { 'type': 'apiKey', 'in': 'header', 'name': 'Authorization' }, } api = Api(app, security='Bearer Auth', authorizations=authorizations) Btw, better migrate the flask_restplus into flask_restx, as flask_restplus is no longer be maintained. Source https://github.com/noirbizarre/flask-restplus/issues/398#issuecomment-444336893
8
2
61,865,793
2020-5-18
https://stackoverflow.com/questions/61865793/python-typeerror-invalid-comparison-between-dtype-datetime64ns-and-date
For a current project, I am planning to filter a JSON file by timeranges by running several loops, each time with a slightly shifted range. The code below however yields the error TypeError: Invalid comparison between dtype=datetime64[ns] and date for line after_start_date = df["Date"] >= start_date. I have already tried to modify the formatting of the dates both within the Python code as well as the corresponding JSON file. Is there any smart tweak to align the date types/formats? The JSON file has the following format: [ {"No":"121","Stock Symbol":"A","Date":"05/11/2017","Text Main":"Sample text"} ] And the corresponding code looks like this: import string import json import pandas as pd import datetime from dateutil.relativedelta import * # Loading and reading dataset file = open("Glassdoor_A.json", "r") data = json.load(file) df = pd.json_normalize(data) df['Date'] = pd.to_datetime(df['Date']) # Create an empty dictionary d = dict() # Filtering by date start_date = datetime.date.fromisoformat('2017-01-01') end_date = datetime.date.fromisoformat('2017-01-31') for i in df.iterrows(): start_date += relativedelta(months=+3) end_date += relativedelta(months=+3) print(start_date) print(end_date) after_start_date = df["Date"] >= start_date before_end_date = df["Date"] <= end_date between_two_dates = after_start_date & before_end_date filtered_dates = df.loc[between_two_dates] print(filtered_dates)
You can use pd.to_datetime('2017-01-31') instead of datetime.date.fromisoformat('2017-01-31'). pd.to_datetime returns a pandas Timestamp, which can be compared directly against a datetime64[ns] column, whereas a plain datetime.date cannot. I hope this helps!
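Applied to the loop from the question, the fix is only in how the two bounds are built (everything else stays as posted); a quick sketch:

start_date = pd.to_datetime('2017-01-01')
end_date = pd.to_datetime('2017-01-31')

for _ in df.iterrows():
    start_date += relativedelta(months=+3)
    end_date += relativedelta(months=+3)
    mask = (df["Date"] >= start_date) & (df["Date"] <= end_date)
    print(df.loc[mask])

relativedelta still works here because Timestamp is a subclass of datetime.datetime.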
13
14
61,861,172
2020-5-18
https://stackoverflow.com/questions/61861172/what-does-the-argument-newline-do-in-the-open-function
I was learning Python in Codecademy and they were talking about using the open() function for CSV files. I couldn't really understand what the argument newline='' meant for the code. import csv with open('addresses.csv', newline='') as addresses_csv: address_reader = csv.DictReader(addresses_csv, delimiter=';') for row in address_reader: print(row['Address'])
In your csv.DictReader call, you are iterating over lines in addresses.csv and mapping each row to a dict. Passing newline='' turns off Python's universal-newline translation for that file object, so the csv module handles the '\r\n' line endings itself. That matters in two situations: when a quoted field contains a line break, the translation would otherwise split or corrupt that field, and when writing on platforms that use '\r\n', the translation would add an extra '\r' and you would get blank lines between rows. This is why the csv documentation recommends always opening csv files with newline=''.
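A small self-contained demonstration of the quoted-field case (the file name and field values are made up for the example):

import csv

# write a row whose Address field itself contains a line break
with open('addresses_demo.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Address', 'City'])
    writer.writerow(['12 Main St\r\nApt 4', 'Springfield'])

# reading it back with newline='' keeps the embedded break inside the field
with open('addresses_demo.csv', newline='') as f:
    for row in csv.DictReader(f):
        print(repr(row['Address']))   # '12 Main St\r\nApt 4'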
20
14
61,854,891
2020-5-17
https://stackoverflow.com/questions/61854891/tee-function-from-itertools-library
Here is a simple example that gets min, max, and avg values from a list. The two functions below have the same result. I want to know the difference between these two functions. And why use itertools.tee()? What advantage does it provide? from statistics import median from itertools import tee purchases = [1, 2, 3, 4, 5] def process_purchases(purchases): min_, max_, avg = tee(purchases, 3) return min(min_), max(max_), median(avg) def _process_purchases(purchases): return min(purchases), max(purchases), median(purchases) def main(): stats = process_purchases(purchases=purchases) print("Result:", stats) stats = _process_purchases(purchases=purchases) print("Result:", stats) if __name__ == '__main__': main()
Iterators can only be iterated once in python. After that they are "exhausted" and don't return more values. You can see this in functions like map(), zip(), filter() and many others: purchases = [1, 2, 3, 4, 5] double = map(lambda n: n*2, purchases) print(list(double)) # [2, 4, 6, 8, 10] print(list(double)) # [] <-- can't use it twice You can see the difference between your two functions if you pass them an iterator, such as the return value from map(). In this case _process_purchases() fails because min() exhausts the iterator and leaves no values for max() and median(). tee() takes an iterator and gives you two or more, allowing you to use the iterator passed into the function more than once: from itertools import tee from statistics import median purchases = [1, 2, 3, 4, 5] def process_purchases(purchases): min_, max_, avg = tee(purchases, 3) return min(min_), max(max_), median(avg) def _process_purchases(purchases): return min(purchases), max(purchases), median(purchases) double = map(lambda n: n*2, purchases) _process_purchases(double) # ValueError: max() arg is an empty sequence double = map(lambda n: n*2, purchases) process_purchases(double) # (2, 10, 6)
7
9
61,851,174
2020-5-17
https://stackoverflow.com/questions/61851174/how-to-get-message-by-id-discord-py
I'm wondering how to get a message by its message id. I have tried discord.fetch_message(id) and discord.get_message(id), but both raise: Command raised an exception: AttributeError: module 'discord' has no attribute 'fetch_message'/'get_message'
When getting a message, you're going to need an abc.Messageable object - essentially an object where you can send a message in, for example a text channel, a DM etc. Example: @bot.command() async def getmsg(ctx, msgID: int): # yes, you can do msg: discord.Message # but for the purposes of this, i'm using an int msg = await ctx.fetch_message(msgID) # you now have the message object from the id # ctx.fetch_message gets it from the channel # the command was executed in ################################################### @bot.command() async def getmsg(ctx, channel: discord.TextChannel, member: discord.Member): msg = discord.utils.get(await channel.history(limit=100).flatten(), author=member) # this gets the most recent message from a specified member in the past 100 messages # in a certain text channel - just an idea of how to use its versatility References: abc.Messageable TextChannel.history() TextChannel.fetch_message() Context.fetch_message()
8
12
61,819,120
2020-5-15
https://stackoverflow.com/questions/61819120/how-to-get-the-endpoint-of-a-linestring-in-shapely
Linestring1 = LINESTRING (51.2176008 4.4177154, 51.21758 4.4178548, **51.2175729 4.4179023**, *51.21745162000732 4.41871738126533*) Linestring2 = LINESTRING (*51.21745162000732 4.41871738126533*, **51.2174025 4.4190475**, 51.217338 4.4194807, 51.2172511 4.4200562, 51.2172411 4.4201077, 51.2172246 4.4201654, 51.2172067 4.420205, 51.2171806 4.4202355, 51.2171074 4.4202929, 51.2170063 4.4203409, 51.2169564 4.4203641, 51.2168076 4.4204243, 51.2166588 4.4204833, 51.2159018 4.420431, 51.2154117 4.4203843) Considering these two linestrings were cut from a bigger linestring, how to get the endpoint of a LineString? - Point(51.21745162000732 4.41871738126533) removed - The new last element of linestring 1 = “ 51.2175729 4.4179023 - The new first element of linestring 2 = “ 51.2174025 4.4190475 In short, I want to get the new last value of the first part (linestring1) and the new first value of the second part (linestring2), but without the point where I cut them. How can I make this work?
To get endpoints of a LineString, you just need to access its boundary property: from shapely.geometry import LineString line = LineString([(0, 0), (1, 1), (2, 2)]) endpoints = line.boundary print(endpoints) # MULTIPOINT (0 0, 2 2) first, last = line.boundary print(first, last) # POINT (0 0) POINT (2 2) Alternatively, you can get the first and the last points from the coords cordinate sequence: from shapely.geometry import Point first = Point(line.coords[0]) last = Point(line.coords[-1]) print(first, last) # POINT (0 0) POINT (2 2) In your specific case, though, as you want to remove the last point of the first line, and the first point of the second line, and only after that get the endpoints, you should construct new LineString objects first using the same coords property: from shapely.wkt import loads first_line = loads("LINESTRING (51.2176008 4.4177154, 51.21758 4.4178548, 51.2175729 4.4179023, 51.21745162000732 4.41871738126533)") second_line = loads("LINESTRING (51.21745162000732 4.41871738126533, 51.2174025 4.4190475, 51.217338 4.4194807, 51.2172511 4.4200562, 51.2172411 4.4201077, 51.2172246 4.4201654, 51.2172067 4.420205, 51.2171806 4.4202355, 51.2171074 4.4202929, 51.2170063 4.4203409, 51.2169564 4.4203641, 51.2168076 4.4204243, 51.2166588 4.4204833, 51.2159018 4.420431, 51.2154117 4.4203843)") first_line = LineString(first_line.coords[:-1]) second_line = LineString(second_line.coords[1:]) print(first_line.boundary[1], second_line.boundary[0]) # POINT (51.2175729 4.4179023) POINT (51.2174025 4.4190475)
16
34
61,833,301
2020-5-16
https://stackoverflow.com/questions/61833301/error-on-tensorflow-cannot-import-name-export-saved-model
I keep getting this error when importing tensorflow as tf with the below error text: ImportError: cannot import name 'export_saved_model' from 'tensorflow.python.keras.saving.saved_model' Code used is simply: import tensorflow as tf I have done: uninstalled and installed tensorflow through pip and condo via anaconda cmd prompt Restarted and cleared outputs from kernel, closing the jupyternotebook and restarting my computer Code used for tensorflow unstallation: pip uninstall tensorflow or conda uninstall -y tensorflow Code used for tensorflow installation: pip install tensorflow or conda install tensorflow I can't seem to figure why it keeps saying: "ImportError: cannot import name 'export_saved_model' from 'tensorflow.python.keras.saving.saved_model'"
Uninstalling and installing again worked for me. conda activate tf pip uninstall -y tensorflow-gpu pip install tensorflow-gpu However, I am still looking for the cause of this error. It was working just a few minutes ago, but suddenly I faced this error.
10
2
61,841,672
2020-5-16
https://stackoverflow.com/questions/61841672/no-matching-distribution-found-for-torch-1-5-0cpu-on-heroku
I am trying to deploy my Django app which uses a machine learning model. And the machine learning model requires pytorch to execute. When i am trying to deploy it is giving me this error ERROR: Could not find a version that satisfies the requirement torch==1.5.0+cpu (from -r /tmp/build_4518392d43f43bc52f067241a9661c92/requirements.txt (line 23)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.4.1, 0.4.1.post2, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0) ERROR: No matching distribution found for torch==1.5.0+cpu (from -r /tmp/build_4518392d43f43bc52f067241a9661c92/requirements.txt (line 23)) ! Push rejected, failed to compile Python app. ! Push failed My requirements.txt is asgiref==3.2.7 certifi==2020.4.5.1 chardet==3.0.4 cycler==0.10.0 dj-database-url==0.5.0 Django==3.0.6 django-heroku==0.3.1 future==0.18.2 gunicorn==20.0.4 idna==2.9 imageio==2.8.0 kiwisolver==1.2.0 matplotlib==3.2.1 numpy==1.18.4 Pillow==7.1.2 psycopg2==2.8.5 pyparsing==2.4.7 python-dateutil==2.8.1 pytz==2020.1 requests==2.23.0 six==1.14.0 sqlparse==0.3.1 torch==1.5.0+cpu torchvision==0.6.0+cpu urllib3==1.25.9 whitenoise==5.0.1 And runtime.txt is python-3.7.5 However installing it on my computer is not giving any type of error when i use command pip install torch==1.5.0+cpu I am using python 3.7.5 and pip 20.0.2. Complete code is here. How to solve this issue i really need to deploy my app. Thanks
PyTorch does not distribute the CPU only versions over PyPI. They are only available through their custom registry. If you select the CPU only version on PyTorch - Get Started Locally you get the following instructions: pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html Since you're not manually executing the pip install, you cannot simply add the -f https://download.pytorch.org/whl/torch_stable.html. As an alternative, you can put it into your requirements.txt as a standalone line. It shouldn't really matter where exactly you put it, but it is commonly put at the very top. -f https://download.pytorch.org/whl/torch_stable.html asgiref==3.2.7 certifi==2020.4.5.1 chardet==3.0.4 cycler==0.10.0 dj-database-url==0.5.0 Django==3.0.6 django-heroku==0.3.1 future==0.18.2 gunicorn==20.0.4 idna==2.9 imageio==2.8.0 kiwisolver==1.2.0 matplotlib==3.2.1 numpy==1.18.4 Pillow==7.1.2 psycopg2==2.8.5 pyparsing==2.4.7 python-dateutil==2.8.1 pytz==2020.1 requests==2.23.0 six==1.14.0 sqlparse==0.3.1 torch==1.5.0+cpu torchvision==0.6.0+cpu urllib3==1.25.9 whitenoise==5.0.1
11
23
61,799,363
2020-5-14
https://stackoverflow.com/questions/61799363/read-tsv-file-in-pyspark
What is the best way to read .tsv file with header in pyspark and store it in a spark data frame. I am trying to use "spark.read.options" and "spark.read.csv" commands however no luck. Thanks. Regards, Jit
You can read the tsv file directly, without providing an external schema, as long as a header row is available: df = spark.read.csv(path, sep=r'\t', header=True).select('col1','col2') Since Spark is lazily evaluated, it will only read the selected columns. Hope it helps.
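An equivalent option-style spelling, with inferSchema so the columns come back typed rather than all strings; the path and column names below are placeholders:

df = (spark.read
      .option("sep", "\t")
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/my_file.tsv"))
df.printSchema()
df.select("col1", "col2").show(5)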
7
12
61,801,260
2020-5-14
https://stackoverflow.com/questions/61801260/vscode-sort-imports-vs-organize-imports
When I enter option + shift + o on my Mac in a Python file in VSCode, I am given two options - "Sort imports" and "Organize Imports". They both organize the inputs nicely but in a different way, so I can keep flipping back and forth between the two of them. Why are there two different commands for this and is one preferred (i.e. more PEP8 compliant) than the other?
There are two because the Python extension created the Sort Imports command before VS Code introduced the Organize Imports command. So the Sort Imports command is the more "official" one for now. There is an open issue, though, to transition over to Organize Imports at some point. Feel free to 👍 the issue if you would like to help bump its priority.
14
28
61,826,300
2020-5-15
https://stackoverflow.com/questions/61826300/how-to-switch-from-hmset-to-hset-in-redis
I get the deprication warning, that Redis.hmset() is deprecated. Use Redis.hset() instead. However hset() takes a third parameter and I can't figure out what name is supposed to be. info = {'users': 10, "timestamp": datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')} r.hmset("myKey", info) The above works, but this requires a first parameter called name. r.hset(name, "myKey", info) Comparing the hset vs hmset in docs isn't clear to me.
You can execute one hset call per field/value pair that you previously passed to hmset. r.hset('myKey', 'users', 10) r.hset('myKey', 'timestamp', datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')) r.hset('myKey', 'yet-another-field', 'yet-another-value') The first parameter is the key name, the second parameter is the field name, and the third parameter is the value of the field.
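If you would rather keep the single round trip from the question, newer redis-py versions accept a mapping keyword on hset (added around redis-py 3.5; treat the exact version as an assumption to check against your install):

info = {'users': 10,
        'timestamp': datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')}
r.hset('myKey', mapping=info)   # behaves like hmset, without the deprecation warning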
14
1
61,816,236
2020-5-15
https://stackoverflow.com/questions/61816236/does-pyspark-code-run-in-jvm-or-python-subprocess
I want to understand what is happening under the hood when I run the following script named t1.py with python3 t1.py. Specifically, I have the following questions: What kind of code is submitted to the spark worker node? Is it the python code or a translated equivalent Java code submitted to the spark worker node? Is the add operation in the reduce treated as UDF and thus run in a python subprocess on the worker node? If the add operation run in a python subprocess on the worker node, does the worker JVM communicates with the python subprocess for each number in a partition being added? If this is the case, it means a lot of overhead. #!/home/python3/venv/bin/python3 #this file is named t1.py import pyspark from pyspark.sql import SparkSession from pyspark.sql.types import DecimalType, IntegerType import pyspark.sql.functions as F from operator import add import pandas as pd from datetime import datetime len = int(100000000/1) print("len=", len) spark = SparkSession.builder.appName('ai_project').getOrCreate() start = datetime.now() t=spark.sparkContext.parallelize(range(len)) a = t.reduce(add) print(a) end= datetime.now() print("end for spark rdd sum:", end, end-start)
In PySpark, Python and JVM code live in separate OS processes. PySpark uses Py4J, which is a framework that facilitates interoperation between the two languages, to exchange data between the Python and the JVM processes. When you launch a PySpark job, it starts as a Python process, which then spawns a JVM instance and runs some PySpark-specific code in it. It then instantiates a Spark session in that JVM, which becomes the driver program that Spark sees. That driver program connects to the Spark master or spawns an in-proc one, depending on how the session is configured. When you create RDDs or Dataframes, those are stored in the memory of the Spark cluster just as RDDs and Dataframes created by Scala or Java applications. Transformations and actions on them work just as they do in JVM, with one notable difference: anything which involves passing the data through Python code runs outside the JVM. So, if you create a Dataframe, and do something like: df.select("foo", "bar").where(df["foo"] > 100).count() this runs entirely in the JVM as there is no Python code that the data must pass through. On the other hand, if you do: a = t.reduce(add) since the add operator is a Python one, the RDD gets serialised, then sent to one or more Python processes where the reduction is performed, then the result is serialised again and returned to the JVM, and finally transferred over to the Python driver process for the final reduction. The way this works (which covers your Q1) is like this: (1) each Spark JVM executor spawns a new Python subprocess running a special PySpark script; (2) the Python driver serialises the bytecode that has to be executed by each Spark task (e.g., the add operator) and pickles it together with some additional data; (3) the JVM executor serialises its RDD partitions and sends them over to its Python subprocess together with the serialised Python bytecode, which it received from the driver; (4) the Python code runs over the RDD data; (5) the result is serialised back and sent to the JVM executor. The JVM executors use network sockets to talk to the Python subprocesses they spawn, and the special PySpark scripts they launch run a loop whose task is to sit there and expect serialised data and bytecode to run. Regarding Q3, the JVM executors transfer whole RDD partitions to the Python subprocess, not single items. You should strive to use Pandas UDFs since those can be vectorised. If you are interested in the details, start with the source code of python/pyspark/rdd.py and take a look at the RDD class.
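Since the answer recommends Pandas UDFs, here is a small sketch of what one looks like. It assumes Spark 3.x with PyArrow installed (Spark 2.3/2.4 needs the older PandasUDFType.SCALAR decorator form), and the function and column names are made up for illustration:

import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf("long")
def times_two(s: pd.Series) -> pd.Series:
    # the executor ships a whole Arrow batch of the column to Python,
    # so this runs vectorised instead of once per row
    return s * 2

df = spark.range(1_000_000)
df.select(times_two(df["id"]).alias("doubled")).show(5)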
17
33
61,813,503
2020-5-15
https://stackoverflow.com/questions/61813503/is-there-a-go-equivalent-to-pythons-virtualenv
The question is in the title: Is there a GO equivalent to python's virtualenv? What is the prefered work flow to start a new project?
Go modules, which are built into the tooling since Go 1.12 (or 1.11 with a special flag turned on). Create a directory outside of your GOPATH (i.e. basically anywhere), create a go.mod using go mod init (which gives your module a declared importpath), and start working. There's no need to "activate" an environment like with venv; the Go 1.12+ tooling will automatically work within the current module when one is detected, e.g. any go get will happen within module scope. Although the Go blog entry I linked to mostly focuses on creating a library within a module, which you may want to publish to allow using from elsehwere, this isn't a requirement. You can create a module to hold a program (package main) as well, and there is no need to publish it (although if you do, it becomes possible to use go get to install the program).
26
26
61,809,897
2020-5-15
https://stackoverflow.com/questions/61809897/why-does-time-sleep-not-get-affected-by-the-gil
From what I understood when doing research on the Python GIL, is that only one thread can be executed at the once (Whoever holds the lock). However, if that is true, then why would this code only take 3 seconds to execute, rather than 15 seconds? import threading import time def worker(): """thread worker function""" time.sleep(3) print 'Worker' for i in range(5): t = threading.Thread(target=worker) t.start() By intuition about threads, I would have thought this would take 3 seconds, which it does. However after learning about the GIL and that one thread can be executing at once, now I look at this code and think, why does it not take 15 seconds?
Mario's answer is a good high level answer. If you're interested in some details of how this is implemented: in CPython, the implementation of time.sleep wraps its select system call with Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS: https://github.com/python/cpython/blob/7ba1f75f3f02b4b50ac6d7e17d15e467afa36aac/Modules/timemodule.c#L1880-L1882 These are macros that save and restore the thread state, while releasing/acquiring the GIL: https://github.com/python/cpython/blob/7c59d7c9860cdbaf4a9c26c9142aebd3259d046e/Include/ceval.h#L86-L94 https://github.com/python/cpython/blob/4c9ea093cd752a6687864674d34250653653f743/Python/ceval.c#L498
13
14
61,767,723
2020-5-13
https://stackoverflow.com/questions/61767723/get-config-missing-while-loading-previously-saved-model-without-custom-layers
I have a problem with loading the previously saved model. This is my save: def build_rnn_lstm_model(tokenizer, layers): model = tf.keras.Sequential([ tf.keras.layers.Embedding(len(tokenizer.word_index) + 1, layers,input_length=843), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(layers, kernel_regularizer=l2(0.01), recurrent_regularizer=l2(0.01), bias_regularizer=l2(0.01))), tf.keras.layers.Dense(layers, activation='relu', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)), tf.keras.layers.Dense(layers/2, activation='relu', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01)), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.summary() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy',f1,precision, recall]) print("Layers: ", len(model.layers)) return model model_path = str(Path(__file__).parents[2]) + os.path.sep + 'model' data_train_sequence, data_test_sequence, labels_train, labels_test, tokenizer = get_training_test_data_local() model = build_rnn_lstm_model(tokenizer, 32) model.fit(data_train_sequence, labels_train, epochs=num_epochs, validation_data=(data_test_sequence, labels_test)) model.save(model_path + os.path.sep + 'auditor_model', save_format='tf') After this I can see that auditor_model is saved in model directory. now I would like to load this model with: model = tf.keras.models.load_model(model_path + os.path.sep + 'auditor_model') but I get: ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_configand from_config when saving. In addition, please use the custom_objects arg when calling load_model(). I have read about custom_objects in TensorFlow docs but I don't understand how to implement it while I use no custom layers but the predefined ones. Could anyone give me a hint how to make it work? I use TensorFlow 2.2 and Python3
Your example is missing the definition of f1, precision and recall functions. If the builtin metrics e.g. 'f1' (note it is a string) do not fit your usecase you can pass the custom_objects as follows: def f1(y_true, y_pred): return 1 model = tf.keras.models.load_model(path_to_model, custom_objects={'f1':f1})
19
27
61,788,158
2020-5-14
https://stackoverflow.com/questions/61788158/elasticsearch-search-api-not-returning-all-the-results
I have three indexes, all three of them share a particular key-value pair. When I do a blanket search with the api "http://localhost:9200/_search" using the request body {"query":{ "query_string": { "query":"city*" } } } It is only returning results from two of the indexes. I tried using the same request body by altering the url to search only in that missed index "http://localhost:9200/index_name/_search" and that's working. Am I missing anything here? The code for inserting all three indexes follow the same procedure and I used elasticsearch-py to ingest the data. I'm using the GET HTTP method and also tried the POST HTTP method. Both returns the same results. Elasticsearch version is 7.6.0. Results for specific index search is like the one below { "took": 1, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 }, "hits": { "total": { "value": 1, "relation": "eq" }, "max_score": 1.0, "hits": [ { "_index": "index_name", "_type": "meta", "_id": "LMRqDnIBh5wU6Ax_YsOD", "_score": 1.0, "_source": { "table_schema": "test_table", "table_name": "citymaster_old" } } ] } }
The reason might be that you haven't provided the size parameter in the query, which limits the result count to 10 by default. Out of all the results, the top 10 might come from the two indexes even though matches are present in the third index as well, which in turn gives the impression that results from the third index are not being returned. Try adding the size parameter. { "query": { "query_string": { "query": "city*" } }, "size": 20 } You can figure out the number of documents that matched the query from the total key in the response: "total": { "value": 1, "relation": "eq" }
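If you actually need every match rather than a bigger page, elasticsearch-py (which the question already uses for ingestion) ships a scan helper that pages through all hits via the scroll API. A sketch, with the host and index names as placeholders:

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(["http://localhost:9200"])
query = {"query": {"query_string": {"query": "city*"}}}

for hit in scan(es, query=query, index="index1,index2,index3"):
    print(hit["_index"], hit["_source"])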
7
10
61,784,255
2020-5-13
https://stackoverflow.com/questions/61784255/split-a-pandas-dataframe-into-two-dataframes-efficiently-based-on-some-condition
So, I want to split a given dataframe into two dataframes based on an if condition for a particular column. I am currently achieving this by iterating over the whole dataframe two times. Please suggest some ways to improve this. player score dan 10 dmitri 45 darren 15 xae12 40 Like in the above dataframe, I want to split the df into two such that one df contains rows with the players scoring less than 15 and the other df contains the remaining rows. I want to this with one iteration only. (Also, if the answer can be generic for n dfs, it will help me a lot) Thanks in advance.
IIUC, use boolean selection with a single mask (note the question asks for scores below 15, so the comparison should be < 15): m = df.score < 15 LessThan15 = df[m] TheRest = df[~m] LessThan15 TheRest
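For the generic n-way split asked about at the end of the question, groupby can hand back every piece in one pass; a hedged sketch where the bin edges are arbitrary assumptions:

import pandas as pd

bins = [0, 15, 30, 100]                       # assumed cut points
labels = pd.cut(df['score'], bins=bins)
parts = {interval: sub for interval, sub in df.groupby(labels)}
# parts maps each score interval to its own dataframe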
7
3
61,776,830
2020-5-13
https://stackoverflow.com/questions/61776830/python-asyncio-runtimeerror-await-wasnt-used-with-future
I want to use a semaphore with a gather() to limit api calls. I think I have to use create_task() but I obtain a runtime error: "RuntimeError: await wasn't used with future". How can I fix it? Here is the code: import asyncio # pip install git+https://github.com/sammchardy/python-binance.git@00dc9a978590e79d4aa02e6c75106a3632990e8d from binance import AsyncClient async def catch_up_aggtrades(client, symbols): tasks = asyncio.create_task(get_historical_aggtrades(client, symbol) for symbol in symbols) sem = asyncio.Semaphore(1) async with sem: await asyncio.gather(*tasks) async def get_historical_aggtrades(client, symbol): async for trade in client.aggregate_trade_iter(symbol, '1 day ago UTC'): print(f"symbol {symbol}") async def main(): client = await AsyncClient.create() symbols = ['BTCUSDT', 'ETHUSDT', 'BNBUSDT'] await catch_up_aggtrades(client, symbols) if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main())
A semaphore limiting resource usage is actually a very simple concept. It is similar to counting free parking lots (-1 when a car enters, +1 when it leaves). When the counter drops to zero, a queue of waiting cars starts to build. That means: one semaphore per resource, the initial value = upper limit of concurrent resource users, and each resource usage is guarded by async with sem. The existing code: sem = asyncio.Semaphore(1) async with sem: await asyncio.gather(*tasks) limits the use of asyncio.gather to 1 task gathering at a time. It does not limit the tasks, just their gathering. Since the gather is called just once anyway, the semaphore does not change anything. Your program might be changed to (including the issue resolved in comments): LIMIT = 1 async def catch_up_aggtrades(client, symbols): sem = asyncio.Semaphore(LIMIT) tasks = [asyncio.create_task(get_historical_aggtrades(client, symbol, sem)) for symbol in symbols] await asyncio.gather(*tasks) async def get_historical_aggtrades(client, symbol, sem): async with sem: async for trade in client.aggregate_trade_iter(symbol, '1 day ago UTC'): print(f"symbol {symbol}")
9
4
61,776,207
2020-5-13
https://stackoverflow.com/questions/61776207/where-does-win32-come-from-when-im-using-windows-64bit
I'm using windows10 64-bit, I downloaded Python 3.8.1 for windows x86-64. But when I type "python" in cmd, the output says "win32". Where does that come from? Or is that normal? C:\>python Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> New in Python, please could anyone help?
tl;dr: "Win32" is still the common name of the Windows API, regardless of whether you're using it on a 32-bit or 64-bit machine. Background The Windows API was once called WinAPI, and the earliest versions ran on 16-bit computers. When they started to make versions for 32-bit computers, they had to modify a bunch of the API's functions and how some parameters were passed in window messages. This was largely due to Win16 having used some tricks to save memory. But the names of these functions and messages were mostly unchanged, so it was often necessary to distinguish between the 16- and 32-bit versions when, for example, you're trying to figure out what information the WPARAM of a window message is carrying. The term "WinAPI" gave way to "Win16" and "Win32", and eventually Win16 was left behind. When Windows built versions for 64-bit computers, the API did not have to undergo the same kinds of changes (at the source level). So there was no need to change the term. Python's version line simply means that this version is made to run on Windows.
10
12
61,770,551
2020-5-13
https://stackoverflow.com/questions/61770551/how-to-run-django-with-uvicorn-webserver
I have a Django project running on my local machine with dev server manage.py runserver and I'm trying to run it with Uvicorn before I deploy it in a virtual machine. So in my virtual environment I installed uvicorn and started the server, but as you can see below it fails to find Django static css files. (envdev) user@lenovo:~/python/myproject$ uvicorn myproject.asgi:application --port 8001 Started server process [17426] Waiting for application startup. ASGI 'lifespan' protocol appears unsupported. Application startup complete. Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) INFO: 127.0.0.1:45720 - "GET /admin/ HTTP/1.1" 200 OK Not Found: /static/admin/css/base.css Not Found: /static/admin/css/base.css INFO: 127.0.0.1:45720 - "GET /static/admin/css/base.css HTTP/1.1" 404 Not Found Not Found: /static/admin/css/dashboard.css Not Found: /static/admin/css/dashboard.css INFO: 127.0.0.1:45724 - "GET /static/admin/css/dashboard.css HTTP/1.1" 404 Not Found Not Found: /static/admin/css/responsive.css Not Found: /static/admin/css/responsive.css INFO: 127.0.0.1:45726 - "GET /static/admin/css/responsive.css HTTP/1.1" 404 Not Found Uvicorn has an option --root-path so I tried to specify the directory where these files are located but there is still the same error (path is correct). How can I solve this issue?
When not running with the built-in development server, you'll need to do one of the following: (1) use whitenoise, which does this as a Django/WSGI middleware (my recommendation; a sketch of its settings follows below); (2) use the classic staticfile deployment procedure, which collects all static files into some root that a static file server is expected to serve (Uvicorn doesn't seem to support static file serving, so you might need something else too, see e.g. https://www.uvicorn.org/deployment/#running-behind-nginx); or (3) (very, very unpreferably!) have Django serve static files like it does in dev.
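A rough sketch of the whitenoise option in settings.py, following whitenoise's documented setup; the STATIC_ROOT location is an assumption for illustration, in the os.path style of Django 3.0's default settings:

# settings.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',   # placed right after SecurityMiddleware
    # ... the rest of your middleware ...
]

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')   # assumed location
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

Then run python manage.py collectstatic once before starting uvicorn so the files exist under STATIC_ROOT.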
18
25
61,765,502
2020-5-13
https://stackoverflow.com/questions/61765502/pip-freeze-doesnt-show-package-version
Over the weekends I have upgraded my Ubuntu to 20.04, and I tried creating virtualenvironment with python 3.8.2, and pip install requirements.txt. In requirement.txt, I am installing some code from private gitlab repositories. Previously, if I do pip freeze, I was able to see all packages name and version (formatted as package_name == version. However, if I do pip freeze, now I see something like this... pkg1 @ file:///tmp/tmp44ir_jik pkg2 @ file:///tmp/tmp5pijtzbq ... (pkg1 and pkg2 are both from pip installing private git repo) I would like to somehow display the version, but don't know how to. I mean, pip list does show the version, but I am writing a script and would like to use pip freeze for it. How can I get pip freeze to show how it use to before (with the format as pkg_name==pkg_version)? Thanks in advance.
You can use pip list --format=freeze instead.
28
40
61,764,107
2020-5-13
https://stackoverflow.com/questions/61764107/detect-sign-changes-in-pandas-dataframe
I have a pandas dataframe that is datetime indexed and it looks like this: Datetime 2020-05-11 14:00:00-03:00 0.097538 2020-05-11 14:30:00-03:00 -0.083788 2020-05-11 15:00:00-03:00 -0.074128 2020-05-11 15:30:00-03:00 0.059725 2020-05-11 16:00:00-03:00 0.041369 2020-05-11 16:30:00-03:00 0.034388 2020-05-12 10:00:00-03:00 0.006814 2020-05-12 10:30:00-03:00 -0.005308 2020-05-12 11:00:00-03:00 -0.036952 2020-05-12 11:30:00-03:00 -0.070307 2020-05-12 12:00:00-03:00 0.102004 2020-05-12 12:30:00-03:00 -0.139317 2020-05-12 13:00:00-03:00 -0.167589 2020-05-12 13:30:00-03:00 -0.179942 2020-05-12 14:00:00-03:00 0.182351 2020-05-12 14:30:00-03:00 -0.160736 2020-05-12 15:00:00-03:00 -0.150033 2020-05-12 15:30:00-03:00 -0.141862 2020-05-12 16:00:00-03:00 -0.121372 2020-05-12 16:30:00-03:00 -0.095990 Name: result_col, dtype: float64 My need is to mark the rows where it changes signal, from negative to positive and vice-versa. Any thoughts on how to achieve it? Edit: I need +1 on the cross up and -1 on the cross down.
Let us try import numpy as np np.sign(data).diff().ne(0) This is True at every row whose sign differs from the previous row's (the very first row compares against NaN, so it also comes out True and can be dropped or ignored).
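Since the edit asks for +1 on an upward cross and -1 on a downward cross, one way to extend this (assuming data is the Series shown in the question):

import numpy as np

sign = np.sign(data)                      # +1 / -1 per row
change = sign.diff().fillna(0)            # +2 on a cross up, -2 on a cross down, 0 otherwise
signal = np.sign(change).astype(int)      # +1 cross up, -1 cross down, 0 no change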
9
15
61,693,014
2020-5-9
https://stackoverflow.com/questions/61693014/how-to-hide-plotly-yaxis-title-in-python
Editing: The following example from Plotly for reference: import plotly.express as px df = px.data.gapminder().query("continent == 'Europe' and year == 2007 and pop > 2.e6") fig = px.bar(df, y='pop', x='country', text='pop') fig.update_traces(texttemplate='%{text:.2s}', textposition='outside') fig.update_layout(uniformtext_minsize=8, uniformtext_mode='hide') fig.show() How to remove the word 'pop'. What I want to hide the y-axis title of'value'. The following syntax doesn't work. fig.update_yaxes(showticklabels=False) Thanks.
Solution You need to use visible=False inside fig.update_yaxes() or fig.update_layout() as follows. For more details see the documentation for plotly.graph_objects.Figure. # Option-1: using fig.update_yaxes() fig.update_yaxes(visible=False, showticklabels=False) # Option-2: using fig.update_layout() fig.update_layout(yaxis={'visible': False, 'showticklabels': False}) # Option-3: using fig.update_layout() + dict-flattening shorthand fig.update_layout(yaxis_visible=False, yaxis_showticklabels=False) Try using the following to test this: # Set the visibility ON fig.update_yaxes(title='y', visible=True, showticklabels=False) # Set the visibility OFF fig.update_yaxes(title='y', visible=False, showticklabels=False) A. How to create the figure directly with hidden-yaxis label and tickmarks You can do this directly by using the layout keyword and supplying a dict to go.Figure() constructor. import plotly.graph_objects as go fig = go.Figure( data=[go.Bar(y=[2, 1, 3])], layout_title_text="A Figure Displaying Itself", layout = {'xaxis': {'title': 'x-label', 'visible': True, 'showticklabels': True}, 'yaxis': {'title': 'y-label', 'visible': False, 'showticklabels': False} } ) fig B. How to create the figure without the margin space around Say, you suppressed the titles for both the axes. By default plotly would still leave a default amount of space all around the figure: this is known as the margin in Plotly's documention. What if you want to reduce or even completely remove the margin? This can be done using fig.update_layout(margin=dict(l = ..., r = ..., t = ..., b = ...)) as mentioned in the documentation: https://plotly.com/python/reference/#layout-margin. In the following example, I have reduced the left, right and bottom margins to 10 px and set the top margin to 50 px. import plotly.graph_objects as go fig = go.Figure( data=[go.Bar(y=[2, 1, 3])], layout_title_text="A Figure with no axis-title and modified margins", layout = { 'xaxis': {'title': 'x-label', 'visible': False, 'showticklabels': True}, 'yaxis': {'title': 'y-label', 'visible': False, 'showticklabels': False}, # specify margins in px 'margin': dict( l = 10, # left r = 10, # right t = 50, # top b = 10, # bottom ), }, ) fig C. An Interesting Feature of Plotly: A hidden shorthand It turns out that Plotly has a convenient shorthand notation allowing dict-flattening available for input arguments such as this: ## ALL THREE METHODS BELOW ARE EQUIVALENT # No dict-flattening # layout = dict with yaxis as key layout = {'yaxis': {'title': 'y-label', 'visible': False, 'showticklabels': False} } # Partial dict-flattening # layout_yaxis = dict with key-names # title, visible, showticklabels layout_yaxis = {'title': 'y-label', 'visible': False, 'showticklabels': False} # Complete dict-flattening # layout_yaxis_key-name for each of the key-names layout_yaxis_title = 'y-label' layout_yaxis_visible = False layout_yaxis_showticklabels = False Now try running all three of the following and compare the outputs. 
import plotly.graph_objects as go # Method-1: Shortest (less detailed) fig = go.Figure( data=[go.Bar(y=[2, 1, 3])], layout_title_text="A Figure Displaying Itself", layout_yaxis_visible = False, layout_xaxis_title = 'x-label' ) fig.show() # Method-2: A hybrid of dicts and underscore-separated syntax fig = go.Figure( data=[go.Bar(y=[2, 1, 3])], layout_title_text="A Figure Displaying Itself", layout_xaxis_title = 'x-label', layout_yaxis = {'title': 'y-label', 'visible': False, 'showticklabels': False} ) fig.show() # Method-3: A complete dict syntax fig = go.Figure( data=[go.Bar(y=[2, 1, 3])], layout_title_text="A Figure Displaying Itself", layout = {'xaxis': {'title': 'x-label', 'visible': True, 'showticklabels': True}, 'yaxis': {'title': 'y-label', 'visible': False, 'showticklabels': False} } ) fig.show()
43
73
61,644,396
2020-5-6
https://stackoverflow.com/questions/61644396/flask-how-to-make-validation-on-request-json-and-json-schema
In flask-restplus API , I need to make validation on request JSON data where I already defined request body schema with api.model. Basically I want to pass input JSON data to API function where I have to validate input JSON data before using API function. To do so, I used RequestParser for doing this task, but the API function was expecting proper JSON data as parameters after request JSON is validated and parsed. To do request JSON validation, first I have to parse received input JSON data, parse its JSON body, validate each then reconstructs it as JSON object, and pass to the API function. Is there any easier way to do this? input JSON data { "body": { "gender": "M", "PCT": { "value": 12, "device": "roche_cobas" }, "IL6": { "value": 12, "device": "roche_cobas" }, "CRP": { "value": 12, "device": "roche_cobas" } } } my current attempt in flask from flask_restplus import Api, Namespace, Resource, fields, reqparse, inputs from flask import Flask, request, make_response, Response, jsonify app = Flask(__name__) api = Api(app) ns = Namespace('') feature = api.model('feature', { 'value': fields.Integer(required=True), 'time': fields.Date(required=True) }) features = api.model('featureList', { 'age': fields.String, 'gender': fields.String(required=True), 'time': fields.Date, 'features': fields.List(fields.Nested(feature, required=True)) }) @ns.route('/hello') class helloResource(Resource): @ns.expect(features) def post(self): json_dict = request.json ## get input JSON data ## need to parse json_dict to validate expected argument in JSON body root_parser = reqparse.RequestParser() root_parser.add_argument('body', type=dict) root_args = root_parser.parse_args() jsbody_parser = reqparse.RequestParser() jsbody_parser.add_argument('age', type=dict, location = ('body',)) jsbody_parser.add_argument('gender', type=dict, location=('body',)) ## IL6, CRP could be something else, how to use **kwargs here jsbody_parser.add_argument('IL6', type=dict, location=('body',)) jsbody_parser.add_argument('PCT', type=dict, location=('body',)) jsbody_parser.add_argument('CRP', type=dict, location=('body',)) jsbody_parser = jsbody_parser.parse_args(req=root_args) ## after validate each argument on input JSON request body, all needs to be constructed as JSON data json_data = json.dumps(jsonify(jsbody_parser)) ## how can I get JSON again from jsbody_parser func_output = my_funcs(json_data) rest = make_response(jsonify(str(func_output)), 200) return rest if __name__ == '__main__': api.add_namespace(ns) app.run(debug=True) update: dummy api function Here is dummy function that expecting json data after validation: import json def my_funcs(json_data): a =json.loads(json_data) for k,v in a.iteritems(): print k,v return jsonify(a) current output of above attempt: I have this on response body: { "errors": { "gender": "'gender' is a required property" }, "message": "Input payload validation failed" } obviously, request JSON input is not handled and not validated in my attempt. I think I have to pass json_dict to RequestParser object, but still can't validate request JSON here. how to make this happen? I have to validate expected arguments from JSON body, after validation, I want to construct JSON body that gonna be used as a parameter for API function. How can I make this happen? any workaround to achieve this? 
The parsed JSON must be passed to my_funcs. In my post, the request JSON data should be parsed: arguments such as age and gender should be validated as expected arguments in the request JSON, then the validated arguments jsonified and passed to my_funcs. How can I make this happen easily in flask? I want to make sure flask parses the JSON body and checks the arguments it expects, otherwise throws an error. For example: { "body": { "car": "M", "PCT": { "value": 12, "device": "roche_cobas" }, "IL6": { "device": "roche_cobas" }, "CRP": { "value": 12 } } } If I give JSON data like the above when making a POST request to a server endpoint, it should give an error. How to make this happen? How to validate POST request JSON for a flask API call?
As I tried to convey in our conversation it appears you are after a serialization and deserialization tool. I have found Marshmallow to be an exceptional tool for this (it is not the only one). Here's a working example of using Marshmallow to validate a request body, converting the validated data back to a JSON string and passing it to a function for manipulation, and returning a response with JSON data: from json import dumps, loads from flask import Flask, jsonify, request from marshmallow import Schema, fields, ValidationError class BaseSchema(Schema): age = fields.Integer(required=True) gender = fields.String(required=True) class ExtendedSchema(BaseSchema): # have a look at the examples in the Marshmallow docs for more complex data structures, such as nested fields. IL6 = fields.String() PCT = fields.String() CRP = fields.String() def my_func(json_str:str): """ Your Function that Requires JSON string""" a_dict = loads(json_str) return a_dict app = Flask(__name__) @app.route('/base', methods=["POST"]) def base(): # Get Request body from JSON request_data = request.json schema = BaseSchema() try: # Validate request body against schema data types result = schema.load(request_data) except ValidationError as err: # Return a nice message if validation fails return jsonify(err.messages), 400 # Convert request body back to JSON str data_now_json_str = dumps(result) response_data = my_func(data_now_json_str) # Send data back as JSON return jsonify(response_data), 200 @app.route('/extended', methods=["POST"]) def extended(): """ Same as base function but with more fields in Schema """ request_data = request.json schema = ExtendedSchema() try: result = schema.load(request_data) except ValidationError as err: return jsonify(err.messages), 400 data_now_json_str = dumps(result) response_data = my_func(data_now_json_str) return jsonify(response_data), 200 Here's some quick tests to show validation, as well as extending the fields in your request body: import requests # Request fails validation base_data = { 'age': 42, } r1 = requests.post('http://127.0.0.1:5000/base', json=base_data) print(r1.content) # Request passes validation base_data = { 'age': 42, 'gender': 'hobbit' } r2 = requests.post('http://127.0.0.1:5000/base', json=base_data) print(r2.content) # Request with extended properties extended_data = { 'age': 42, 'gender': 'hobbit', 'IL6': 'Five', 'PCT': 'Four', 'CRP': 'Three'} r3 = requests.post('http://127.0.0.1:5000/extended', json=extended_data) print(r3.content) Hope this help gets you where you're going.
11
21
61,710,787
2020-5-10
https://stackoverflow.com/questions/61710787/how-to-run-a-python-script-from-deno
I have a python script with the following code: print("Hello Deno") I want to run this python script (test.py) from test.ts using Deno. This is the code in test.ts so far: const cmd = Deno.run({cmd: ["python3", "test.py"]}); How can I get the output, of the python script in Deno?
To execute a python script from Deno you need to use Deno.Command const command = new Deno.Command('python3', { args: [ "test.py" ], }); const { code, stdout, stderr } = await command.output(); console.log(new TextDecoder().decode(stdout)); console.log(new TextDecoder().decode(stderr)); Old answer: Deno.run is now deprecated in favor of Deno.Command Deno.run returns an instance of Deno.Process. In order to get the output use .output(). Don't forget to pass stdout/stderr options if you want to read the contents. // --allow-run const cmd = Deno.run({ cmd: ["python3", "test.py"], stdout: "piped", stderr: "piped" }); const output = await cmd.output() // "piped" must be set const outStr = new TextDecoder().decode(output); const error = await cmd.stderrOutput(); const errorStr = new TextDecoder().decode(error); cmd.close(); // Don't forget to close it console.log(outStr, errorStr); If you don't pass stdout property you'll get the output directly to stdout const p = Deno.run({ cmd: ["python3", "test.py"] }); await p.status(); // output to stdout "Hello Deno" // calling p.output() will result in an Error p.close() You can also send the output to a File // --allow-run --allow-read --allow-write const filepath = "/tmp/output"; const file = await Deno.open(filepath, { create: true, write: true }); const p = Deno.run({ cmd: ["python3", "test.py"], stdout: file.rid, stderr: file.rid // you can use different file for stderr }); await p.status(); p.close(); file.close(); const fileContents = await Deno.readFile(filepath); const text = new TextDecoder().decode(fileContents); console.log(text) In order to check status code of the process you need to use .status() const status = await cmd.status() // { success: true, code: 0, signal: undefined } // { success: false, code: number, signal: number } In case you need to write data to stdin you can do it like this: const p = Deno.run({ cmd: ["python", "-c", "import sys; assert 'foo' == sys.stdin.read();"], stdin: "piped", }); // send other value for different status code const msg = new TextEncoder().encode("foo"); const n = await p.stdin.write(msg); p.stdin.close() const status = await p.status(); p.close() console.log(status) You'll need to run Deno with: --allow-run flag in order to use Deno.run
16
18
61,696,180
2020-5-9
https://stackoverflow.com/questions/61696180/pytest-exec-code-in-self-locals-syntaxerror-missing-parentheses-in-call-to-exe
Trying to debug a pytest unit test gives me exec code in self.locals SyntaxError: Missing parentheses in call to 'exec' on very simple code. What could be causing it?
Don't have a package/directory/file/module named code in your project, because it shadows the code module that the debugger imports, which is what breaks debugging pytest tests here. Changing the name to src solved this. I found the answer here: it turned out to be a conflict between my own python module called 'code' and the one in use by the debugger. I changed my module name and the debugger began working. This article pointed me to the solution: https://superuser.com/questions/1385995/my-pycharm-run-is-working-but-debugging-is-failing This took me a while to find, so I thought I'd post it here for easy googling.
11
29
61,754,797
2020-5-12
https://stackoverflow.com/questions/61754797/how-to-change-the-color-of-the-median-line-in-boxplot
More generally, how to change the color values for a subset of the boxes properties in a seaborn boxplot? Be that the median, the whiskers, or such. I'm particularly interested in how to change the median value, as I have to create plots which have a dark colour and the median line can't be seen against it. Here's some example code: import matplotlib.pyplot as plt import seaborn as sns fig, ax1 = plt.subplots(figsize=(15,5) colours = ["#000184", "#834177"] sns.set_palette(sns.color_palette(colours)) tips = sns.load_dataset("tips") sns.boxplot(x="day", y="total_bill", hue="smoker", data=tips, ax=ax1) Which creates: As can be seen, it's quite hard to see the median line against the blue here. Note - I don't want to change the entire boxplot colour, just a subsection of the lines (in this case the median). Also - the option of changing it for a particular group (in this case the Yes smokers) is needed, as the colour might not apply to both groups.
Update: the old version of this answer used the index in the generated lines to guess which line of the plot corresponds to a median line. The new version attaches a special label to each of the means, that can be retrieved later to filter out only those lines. You can use medianprops to change the color of the median line. However, there only will be one color for all boxplots. If you need to assign individual colors, you can loop through the generated lines. This makes the code more robust towards different options and possible future changes. ax1.get_lines() gives a list of all Line2D objects created by the boxplot. There will be a Line2D object for each of the elements that make up a box: 4 for the whiskers (2 vertical and 2 horizontal lines), 1 for the median and also 1 for the outliers (it is a Line2D that shows markers without connecting lines). In case showmeans=True there will also be a Line2D with a marker for the mean. So in that case there would be 7 Line2Ds per box, in this case 6. Note that a boxplot also accepts a parameter notch=True which shows a confidence interval around the median. In the case of the current plot, most of the confidence intervals are larger than the quartiles, which create some flipped notches. Depending on your real application, such notches can help to accentuate the median. import matplotlib.pyplot as plt import seaborn as sns fig, ax1 = plt.subplots(figsize=(15,5)) colours = ["#000184", "#834177"] sns.set_palette(sns.color_palette(colours)) tips = sns.load_dataset("tips") sns.boxplot(x="day", y="total_bill", hue="smoker", data=tips, ax=ax1, medianprops={'color': 'red', 'label': '_median_'}) median_colors = ['orange', 'yellow'] median_lines = [line for line in ax1.get_lines() if line.get_label() == '_median_'] for i, line in enumerate(median_lines): line.set_color(median_colors[i % len(median_colors)]) plt.show()
9
4
61,678,226
2020-5-8
https://stackoverflow.com/questions/61678226/executing-the-assembly-generated-by-numba
In a bizarre turn of events, I've ended up in the following predicament where I'm using the following Python code to write the assembly generated by Numba to a file: @jit(nopython=True, nogil=True) def six(): return 6 with open("six.asm", "w") as f: for k, v in six.inspect_asm().items(): f.write(v) The assembly code is successfully written to the file but I can't figure out how to execute it. I've tried the following: $ as -o six.o six.asm $ ld six.o -o six.bin $ chmod +x six.bin $ ./six.bin However, the linking step fails with the following: ld: warning: cannot find entry symbol _start; defaulting to 00000000004000f0 six.o: In function `cpython::__main__::six$241': <string>:(.text+0x20): undefined reference to `PyArg_UnpackTuple' <string>:(.text+0x47): undefined reference to `PyEval_SaveThread' <string>:(.text+0x53): undefined reference to `PyEval_RestoreThread' <string>:(.text+0x62): undefined reference to `PyLong_FromLongLong' <string>:(.text+0x74): undefined reference to `PyExc_RuntimeError' <string>:(.text+0x88): undefined reference to `PyErr_SetString' I'm suspecting that the Numba and/or the Python standard library need to be dynamically linked against the generated object file for this to run successfully but I'm not sure how it can be done (if it can even be done in the first place). I've also tried the following where I write the intermediate LLVM code to the file instead of the assembly: with open("six.ll", "w") as f: for k, v in six.inspect_llvm().items(): f.write(v) And then $ lli six.ll But this fails as well with the following error: 'main' function not found in module. UPDATE: It turns out that there exists a utility to find the relevant flags to pass to the ld command to dynamically link the Python standard library. $ python3-config --ldflags Returns -L/Users/rayan/anaconda3/lib/python3.7/config-3.7m-darwin -lpython3.7m -ldl -framework CoreFoundation Running the following again, this time with the correct flags: $ as -o six.o six.asm $ ld six.o -o six.bin -L/Users/rayan/anaconda3/lib/python3.7/config-3.7m-darwin -lpython3.7m -ldl -framework CoreFoundation $ chmod +x six.bin $ ./six.bin I am now getting ld: warning: No version-min specified on command line ld: entry point (_main) undefined. for inferred architecture x86_64 I have tried adding a _main label in the assembly file but that doesn't seem to do anything. Any ideas on how to define the entry point? 
UPDATE 2: Here's the assembly code in case that's useful, it seems like the target function is the one with label _ZN8__main__7six$241E: .text .file "<string>" .globl _ZN8__main__7six$241E .p2align 4, 0x90 .type _ZN8__main__7six$241E,@function _ZN8__main__7six$241E: movq $6, (%rdi) xorl %eax, %eax retq .Lfunc_end0: .size _ZN8__main__7six$241E, .Lfunc_end0-_ZN8__main__7six$241E .globl _ZN7cpython8__main__7six$241E .p2align 4, 0x90 .type _ZN7cpython8__main__7six$241E,@function _ZN7cpython8__main__7six$241E: .cfi_startproc pushq %rax .cfi_def_cfa_offset 16 movq %rsi, %rdi movabsq $.const.six, %rsi movabsq $PyArg_UnpackTuple, %r8 xorl %edx, %edx xorl %ecx, %ecx xorl %eax, %eax callq *%r8 testl %eax, %eax je .LBB1_3 movabsq $_ZN08NumbaEnv8__main__7six$241E, %rax cmpq $0, (%rax) je .LBB1_2 movabsq $PyEval_SaveThread, %rax callq *%rax movabsq $PyEval_RestoreThread, %rcx movq %rax, %rdi callq *%rcx movabsq $PyLong_FromLongLong, %rax movl $6, %edi popq %rcx .cfi_def_cfa_offset 8 jmpq *%rax .LBB1_2: .cfi_def_cfa_offset 16 movabsq $PyExc_RuntimeError, %rdi movabsq $".const.missing Environment", %rsi movabsq $PyErr_SetString, %rax callq *%rax .LBB1_3: xorl %eax, %eax popq %rcx .cfi_def_cfa_offset 8 retq .Lfunc_end1: .size _ZN7cpython8__main__7six$241E, .Lfunc_end1-_ZN7cpython8__main__7six$241E .cfi_endproc .globl cfunc._ZN8__main__7six$241E .p2align 4, 0x90 .type cfunc._ZN8__main__7six$241E,@function cfunc._ZN8__main__7six$241E: movl $6, %eax retq .Lfunc_end2: .size cfunc._ZN8__main__7six$241E, .Lfunc_end2-cfunc._ZN8__main__7six$241E .type _ZN08NumbaEnv8__main__7six$241E,@object .comm _ZN08NumbaEnv8__main__7six$241E,8,8 .type .const.six,@object .section .rodata,"a",@progbits .const.six: .asciz "six" .size .const.six, 4 .type ".const.missing Environment",@object .p2align 4 .const.missing Environment: .asciz "missing Environment" .size ".const.missing Environment", 20 .section ".note.GNU-stack","",@progbits
After browsing [PyData.Numba]: Numba docs, and some debugging, trial and error, I reached to a conclusion: it seems you're off the path to your quest (as was also pointed out in comments). Numba converts Python code (functions) to machine code (for the obvious reason: speed). It does everything (convert, build, insert in the running process) on the fly, the programmer only needs to decorate the function as e.g. @numba.jit ([PyData.Numba]: Just-in-Time compilation). The behavior that you're experiencing is correct. The Dispatcher object (used by decorating the six function) only generates (assembly) code for the function itself (it's no main there, as the code is executing in the current process (Python interpreter's main function)). So, it's normal for the linker to complain there's no main symbol. It's like writing a C file that only contains: int six() { return 6; } In order for things to work properly, you have to: Build the .asm file into an .o (object) file (done) Include the .o file from #1. into a library which can be Static Dynamic The library is to be linked in the (final) executable. This step is optional as you could use the .o file directly Build another file that defines main (and calls six - which I assume it's the whole purpose) into an .o file. As I'm not very comfortable with assembly, I wrote it in C Link the 2 entities (from #2. (#1.) and #3.) together As an alternative, you could take a look at [PyData.Numba]: Compiling code ahead of time, but bear in mind that it would generate a Python (extension) module. Back to the current problem. Did the test on Ubuntu 18.04 064bit. code00.py: #!/usr/bin/env python import math import sys import numba @numba.jit(nopython=True, nogil=True) def six(): return 6 def main(*argv): six() # Call the function(s), otherwise `inspect_asm()` would return empty dict speed_funcs = [ (six, numba.int32()), ] for func, _ in speed_funcs: file_name_asm = "numba_{:s}_{:s}_{:03d}_{:02d}{:02d}{:02d}.asm".format(func.__name__, sys.platform, int(round(math.log2(sys.maxsize))) + 1, *sys.version_info[:3]) asm = func.inspect_asm() print("Writing to {:s}:".format(file_name_asm)) with open(file_name_asm, "wb") as fout: for k, v in asm.items(): print(" {:}".format(k)) fout.write(v.encode()) if __name__ == "__main__": print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform)) rc = main(*sys.argv[1:]) print("\nDone.\n") sys.exit(rc) main00.c: #include <dlfcn.h> #include <stdio.h> //#define SYMBOL_SIX "_ZN8__main__7six$241E" #define SYMBOL_SIX "cfunc._ZN8__main__7six$241E" typedef int (*SixFuncPtr)(); int main() { void *pMod = dlopen("./libnumba_six_linux.so", RTLD_LAZY); if (!pMod) { printf("Error (%s) loading module\n", dlerror()); return -1; } SixFuncPtr pSixFunc = dlsym(pMod, SYMBOL_SIX); if (!pSixFunc) { printf("Error (%s) loading function\n", dlerror()); dlclose(pMod); return -2; } printf("six() returned: %d\n", (*pSixFunc)()); dlclose(pMod); return 0; } build.sh: #!/usr/bin/env bash CC=gcc LIB_BASE_NAME=numba_six_linux FLAG_LD_LIB_NUMBALINUX="-Wl,-L. 
-Wl,-l${LIB_BASE_NAME}" FLAG_LD_LIB_PYTHON="-Wl,-L/usr/lib/python3.7/config-3.7m-x86_64-linux-gnu -Wl,-lpython3.7m" rm -f *.asm *.o *.a *.so *.exe echo Generate .asm python3 code00.py echo Assemble as -o ${LIB_BASE_NAME}.o ${LIB_BASE_NAME}_064_030705.asm echo Link library LIB_NUMBA="./lib${LIB_BASE_NAME}.so" #ar -scr ${LIB_NUMBA} ${LIB_BASE_NAME}.o ${CC} -o ${LIB_NUMBA} -shared ${LIB_BASE_NAME}.o ${FLAG_LD_LIB_PYTHON} echo Dump library contents nm -S ${LIB_NUMBA} #objdump -t ${LIB_NUMBA} echo Compile and link executable ${CC} -o main00.exe main00.c -ldl echo Exit script Output: (py_venv_pc064_03.07.05_test0) [cfati@cfati-ubtu-18-064-00:~/Work/Dev/StackOverflow/q061678226]> ~/sopr.sh ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [064bit prompt]> [064bit prompt]> ls build.sh code00.py main00.c [064bit prompt]> [064bit prompt]> ./build.sh Generate .asm Python 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] 064bit on linux Writing to numba_six_linux_064_030705.asm: () Done. Assemble Link library Dump library contents 0000000000201020 B __bss_start 00000000000008b0 0000000000000006 T cfunc._ZN8__main__7six$241E 0000000000201020 0000000000000001 b completed.7698 00000000000008e0 0000000000000014 r .const.missing Environment 00000000000008d0 0000000000000004 r .const.six w __cxa_finalize 0000000000000730 t deregister_tm_clones 00000000000007c0 t __do_global_dtors_aux 0000000000200e58 t __do_global_dtors_aux_fini_array_entry 0000000000201018 d __dso_handle 0000000000200e60 d _DYNAMIC 0000000000201020 D _edata 0000000000201030 B _end 00000000000008b8 T _fini 0000000000000800 t frame_dummy 0000000000200e50 t __frame_dummy_init_array_entry 0000000000000990 r __FRAME_END__ 0000000000201000 d _GLOBAL_OFFSET_TABLE_ w __gmon_start__ 00000000000008f4 r __GNU_EH_FRAME_HDR 00000000000006f0 T _init w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U PyArg_UnpackTuple U PyErr_SetString U PyEval_RestoreThread U PyEval_SaveThread U PyExc_RuntimeError U PyLong_FromLongLong 0000000000000770 t register_tm_clones 0000000000201020 d __TMC_END__ 0000000000201028 0000000000000008 B _ZN08NumbaEnv8__main__7six$241E 0000000000000820 0000000000000086 T _ZN7cpython8__main__7six$241E 0000000000000810 000000000000000a T _ZN8__main__7six$241E Compile and link executable Exit script [064bit prompt]> [064bit prompt]> ls build.sh code00.py libnumba_six_linux.so main00.c main00.exe numba_six_linux_064_030705.asm numba_six_linux.o [064bit prompt]> [064bit prompt]> # Run the executable [064bit prompt]> [064bit prompt]> ./main00.exe six() returned: 6 [064bit prompt]> Also posting (since it's important) numba_six_linux_064_030705.asm: .text .file "<string>" .globl _ZN8__main__7six$241E .p2align 4, 0x90 .type _ZN8__main__7six$241E,@function _ZN8__main__7six$241E: movq $6, (%rdi) xorl %eax, %eax retq .Lfunc_end0: .size _ZN8__main__7six$241E, .Lfunc_end0-_ZN8__main__7six$241E .globl _ZN7cpython8__main__7six$241E .p2align 4, 0x90 .type _ZN7cpython8__main__7six$241E,@function _ZN7cpython8__main__7six$241E: .cfi_startproc pushq %rax .cfi_def_cfa_offset 16 movq %rsi, %rdi movabsq $.const.six, %rsi movabsq $PyArg_UnpackTuple, %r8 xorl %edx, %edx xorl %ecx, %ecx xorl %eax, %eax callq *%r8 testl %eax, %eax je .LBB1_3 movabsq $_ZN08NumbaEnv8__main__7six$241E, %rax cmpq $0, (%rax) je .LBB1_2 movabsq $PyEval_SaveThread, %rax callq *%rax movabsq $PyEval_RestoreThread, %rcx movq %rax, %rdi callq *%rcx movabsq $PyLong_FromLongLong, %rax movl $6, %edi popq %rcx .cfi_def_cfa_offset 8 jmpq *%rax 
.LBB1_2: .cfi_def_cfa_offset 16 movabsq $PyExc_RuntimeError, %rdi movabsq $".const.missing Environment", %rsi movabsq $PyErr_SetString, %rax callq *%rax .LBB1_3: xorl %eax, %eax popq %rcx .cfi_def_cfa_offset 8 retq .Lfunc_end1: .size _ZN7cpython8__main__7six$241E, .Lfunc_end1-_ZN7cpython8__main__7six$241E .cfi_endproc .globl cfunc._ZN8__main__7six$241E .p2align 4, 0x90 .type cfunc._ZN8__main__7six$241E,@function cfunc._ZN8__main__7six$241E: movl $6, %eax retq .Lfunc_end2: .size cfunc._ZN8__main__7six$241E, .Lfunc_end2-cfunc._ZN8__main__7six$241E .type _ZN08NumbaEnv8__main__7six$241E,@object .comm _ZN08NumbaEnv8__main__7six$241E,8,8 .type .const.six,@object .section .rodata,"a",@progbits .const.six: .asciz "six" .size .const.six, 4 .type ".const.missing Environment",@object .p2align 4 ".const.missing Environment": .asciz "missing Environment" .size ".const.missing Environment", 20 .section ".note.GNU-stack","",@progbits Notes: numba_six_linux_064_030705.asm (and everything that derives from it) contain the code for the six function. Actually, there are a bunch of symbols (on OSX, you can also use the native otool -T) like: cfunc._ZN8__main__7six$241E - the (C) function itself _ZN7cpython8__main__7six$241E - the Python wrapper: 2.1. Performs the C <=> Python conversions (via Python API functions like PyArg_UnpackTuple) 2.2. Due to #1. it needs (depends on) libpython3.7m 2.3. As a consequence, nopython=True has no effect in this case Also, the main part from these symbols doesn't refer to an executable entry point (main function), but to a Python module's top level namespace (__main__). After all, this code is supposed to be run from Python Due to the fact that the C plain function contains a dot (.) in the name, I couldn't call it directly from C (as it's an invalid identifier name), so I had to load (the .so and) the function manually (DlOpen / DlSym), resulting in more code than simply calling the function. I didn't try it, but I think it would make sense that the following (manual) changes to the generated .asm file would simplify the work: Renaming the plain C function name (to something like __six, or any other valid C identifier that also doesn't clash with another (explicit or internal) name) in the .asm file before assembling it, would make the function directly callable from C Removing the Python wrapper (#2.) would also get rid of #2.2. Update #0 Thanks to @PeterCordes, who shared that exact piece of info ([GNU.GCC]: Controlling Names Used in Assembler Code) that I was missing, here's a much simpler version. main01.c: #include <stdio.h> extern int six() asm ("cfunc._ZN8__main__7six$241E"); int main() { printf("six() returned: %d\n", six()); } Output: [064bit prompt]> # Resume from previous point + main01.c [064bit prompt]> [064bit prompt]> ls build.sh code00.py libnumba_six_linux.so main00.c main00.exe main01.c numba_six_linux_064_030705.asm numba_six_linux.o [064bit prompt]> [064bit prompt]> ar -scr libnumba_six_linux.a numba_six_linux.o [064bit prompt]> [064bit prompt]> gcc -o main01.exe main01.c ./libnumba_six_linux.a -Wl,-L/usr/lib/python3.7/config-3.7m-x86_64-linux-gnu -Wl,-lpython3.7m [064bit prompt]> [064bit prompt]> ls build.sh code00.py libnumba_six_linux.a libnumba_six_linux.so main00.c main00.exe main01.c main01.exe numba_six_linux_064_030705.asm numba_six_linux.o [064bit prompt]> [064bit prompt]> ./main01.exe six() returned: 6 [064bit prompt]>
9
10
61,681,097
2020-5-8
https://stackoverflow.com/questions/61681097/python-and-selenium-mobile-emulation
I'm trying to emulate Chrome for iPhone X with Selenium emulation and Python, as follow: from selenium import webdriver mobile_emulation = { "deviceName": "iphone X" } chrome_options = webdriver.ChromeOptions() chrome_options.add_experimental_option("mobileEmulation", mobile_emulation) driver = webdriver.Chrome(r'C:\Users\Alex\PythonDev\chromedriver') driver.get('https://www.google.com') However, nothing happens: my page is still a normal browser page, and I don't see it as a mobile page. What is missing or wrong in my code?
You might have found an answer by now, but here's a general one: In your code example, your driver has no chance to know that you want it to emulate another device. Here's full working code: from selenium import webdriver mobile_emulation = { "deviceName": "your device" } chrome_options = webdriver.ChromeOptions() chrome_options.add_experimental_option("mobileEmulation", mobile_emulation) driver = webdriver.Chrome(options=chrome_options) #sometimes you have to insert your execution path driver.get('https://www.google.com') Make sure that Chrome supports your device and your device name is spelled correctly.
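To double-check that the emulation is actually active, you can query the browser itself. A minimal sketch, assuming the same chrome_options setup as above and a device name such as "iPhone X" that your local Chrome/chromedriver actually supports:

    from selenium import webdriver

    mobile_emulation = {"deviceName": "iPhone X"}  # assumed device name; must match Chrome's device list exactly
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_experimental_option("mobileEmulation", mobile_emulation)
    driver = webdriver.Chrome(options=chrome_options)  # add the executable path here if chromedriver is not on PATH
    driver.get("https://www.google.com")

    # Standard Selenium calls; the exact values depend on the emulated device.
    print(driver.execute_script("return navigator.userAgent"))
    print(driver.execute_script("return [window.innerWidth, window.innerHeight]"))
    driver.quit()

If the reported user agent still looks like a desktop browser, the options were not passed to the driver.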
14
14
61,668,501
2020-5-7
https://stackoverflow.com/questions/61668501/duplicate-layers-when-reusing-pytorch-model
I am trying to reuse some of the resnet layers for a custom architecture and ran into an issue I can't figure out. Here is a simplified example; when I run: import torch import torch.nn as nn from torchvision import models from torchsummary import summary def convrelu(in_channels, out_channels, kernel, padding): return nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel, padding=padding), nn.ReLU(inplace=True), ) class ResNetUNet(nn.Module): def __init__(self): super().__init__() self.base_model = models.resnet18(pretrained=False) self.base_layers = list(self.base_model.children()) self.layer0 = nn.Sequential(*self.base_layers[:3]) def forward(self, x): print(x.shape) output = self.layer0(x) return output base_model = ResNetUNet().cuda() summary(base_model,(3,224,224)) it gives me: ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 112, 112] 9,408 Conv2d-2 [-1, 64, 112, 112] 9,408 BatchNorm2d-3 [-1, 64, 112, 112] 128 BatchNorm2d-4 [-1, 64, 112, 112] 128 ReLU-5 [-1, 64, 112, 112] 0 ReLU-6 [-1, 64, 112, 112] 0 ================================================================ Total params: 19,072 Trainable params: 19,072 Non-trainable params: 0 ---------------------------------------------------------------- Input size (MB): 0.57 Forward/backward pass size (MB): 36.75 Params size (MB): 0.07 Estimated Total Size (MB): 37.40 ---------------------------------------------------------------- This is duplicating each layer (there are 2 convs, 2 batchnorms, 2 relu's) as opposed to giving one layer each. If I print out self.base_layers[:3] I get: [Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), ReLU(inplace=True)] which shows just three layers without duplicates. Why is it duplicating my layers? I am using PyTorch version 1.4.0.
Your layers aren't actually being invoked twice. This is an artifact of how summary is implemented. The reason is simply that summary recursively iterates over all the children of your module and registers forward hooks for each of them. Since you have repeated children (in base_model and layer0), those repeated modules get multiple hooks registered. When summary calls forward, both hooks for each module are invoked, which causes each layer to be reported twice. For your toy example a solution would be to simply not assign base_model as an attribute, since it's not being used during forward anyway. This avoids base_model ever being added as a child. class ResNetUNet(nn.Module): def __init__(self): super().__init__() base_model = models.resnet18(pretrained=False) base_layers = list(base_model.children()) self.layer0 = nn.Sequential(*base_layers[:3]) Another solution is to create a modified version of summary which doesn't register hooks for the same module multiple times. Below is an augmented summary where I use a set named already_registered to keep track of modules which already have hooks registered, to avoid registering multiple hooks. from collections import OrderedDict import torch import torch.nn as nn import numpy as np def summary(model, input_size, batch_size=-1, device="cuda"): # keep track of registered modules so that we don't add multiple hooks already_registered = set() def register_hook(module): def hook(module, input, output): class_name = str(module.__class__).split(".")[-1].split("'")[0] module_idx = len(summary) m_key = "%s-%i" % (class_name, module_idx + 1) summary[m_key] = OrderedDict() summary[m_key]["input_shape"] = list(input[0].size()) summary[m_key]["input_shape"][0] = batch_size if isinstance(output, (list, tuple)): summary[m_key]["output_shape"] = [ [-1] + list(o.size())[1:] for o in output ] else: summary[m_key]["output_shape"] = list(output.size()) summary[m_key]["output_shape"][0] = batch_size params = 0 if hasattr(module, "weight") and hasattr(module.weight, "size"): params += torch.prod(torch.LongTensor(list(module.weight.size()))) summary[m_key]["trainable"] = module.weight.requires_grad if hasattr(module, "bias") and hasattr(module.bias, "size"): params += torch.prod(torch.LongTensor(list(module.bias.size()))) summary[m_key]["nb_params"] = params if ( not isinstance(module, nn.Sequential) and not isinstance(module, nn.ModuleList) and not (module == model) and module not in already_registered ): already_registered.add(module) hooks.append(module.register_forward_hook(hook)) device = device.lower() assert device in [ "cuda", "cpu", ], "Input device is not valid, please specify 'cuda' or 'cpu'" if device == "cuda" and torch.cuda.is_available(): dtype = torch.cuda.FloatTensor else: dtype = torch.FloatTensor # multiple inputs to the network if isinstance(input_size, tuple): input_size = [input_size] # batch_size of 2 for batchnorm x = [torch.rand(2, *in_size).type(dtype) for in_size in input_size] # print(type(x[0])) # create properties summary = OrderedDict() hooks = [] # register hook model.apply(register_hook) # make a forward pass # print(x.shape) model(*x) # remove these hooks for h in hooks: h.remove() print("----------------------------------------------------------------") line_new = "{:>20} {:>25} {:>15}".format("Layer (type)", "Output Shape", "Param #") print(line_new) print("================================================================") total_params = 0 total_output = 0 trainable_params = 0 for layer in
summary: # input_shape, output_shape, trainable, nb_params line_new = "{:>20} {:>25} {:>15}".format( layer, str(summary[layer]["output_shape"]), "{0:,}".format(summary[layer]["nb_params"]), ) total_params += summary[layer]["nb_params"] total_output += np.prod(summary[layer]["output_shape"]) if "trainable" in summary[layer]: if summary[layer]["trainable"] == True: trainable_params += summary[layer]["nb_params"] print(line_new) # assume 4 bytes/number (float on cuda). total_input_size = abs(np.prod(input_size) * batch_size * 4. / (1024 ** 2.)) total_output_size = abs(2. * total_output * 4. / (1024 ** 2.)) # x2 for gradients total_params_size = abs(total_params.numpy() * 4. / (1024 ** 2.)) total_size = total_params_size + total_output_size + total_input_size print("================================================================") print("Total params: {0:,}".format(total_params)) print("Trainable params: {0:,}".format(trainable_params)) print("Non-trainable params: {0:,}".format(total_params - trainable_params)) print("----------------------------------------------------------------") print("Input size (MB): %0.2f" % total_input_size) print("Forward/backward pass size (MB): %0.2f" % total_output_size) print("Params size (MB): %0.2f" % total_params_size) print("Estimated Total Size (MB): %0.2f" % total_size) print("----------------------------------------------------------------") # return summary
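As a quick sanity check (a sketch reusing the ResNetUNet class from the question together with the patched summary above), each layer should now be reported only once:

    model = ResNetUNet().cuda()
    summary(model, (3, 224, 224))
    # Expected: one Conv2d, one BatchNorm2d and one ReLU row instead of two of each.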
10
7
61,741,997
2020-5-12
https://stackoverflow.com/questions/61741997/how-to-format-requirements-txt-when-package-source-is-from-specific-websites
I am trying to convert the following pip installation commands, which download from another website, into requirements.txt format, but I just can't figure out how. Can anyone assist? pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
The structure of the contents of a requirements.txt file is defined as follows: [[--option]...] <requirement specifier> [; markers] [[--option]...] <archive url/path> [-e] <local project path> [-e] <vcs project url> The <requirement specifier> defines the package and an optional version. SomeProject SomeProject == 1.3 SomeProject >=1.2,<2.0 SomeProject[foo, bar] SomeProject~=1.4.2 The --option (such as the -f/--find-links) is the same as the pip install options you would use if you were doing pip install from the command line. The following options are supported: -i, --index-url --extra-index-url --no-index -c, --constraint -r, --requirement -e, --editable -f, --find-links --no-binary --only-binary --require-hashes --pre --trusted-host So, for your install commands, the requirements.txt would look like this: # Torch --find-links https://download.pytorch.org/whl/torch_stable.html torch==1.5.0+cu101 torchvision==0.6.0+cu101 # Detectron --find-links https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html detectron2 Make sure to verify that the links are correctly used: $ pip install -r requirements.txt Looking in links: https://download.pytorch.org/whl/torch_stable.html, https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html Collecting torch==1.5.0+cu101 (from -r requirements.txt (line 3)) Using cached https://download.pytorch.org/whl/cu101/torch-1.5.0%2Bcu101-cp38-cp38-linux_x86_64.whl Collecting torchvision==0.6.0+cu101 (from -r requirements.txt (line 4)) Using cached https://download.pytorch.org/whl/cu101/torchvision-0.6.0%2Bcu101-cp38-cp38-linux_x86_64.whl Collecting detectron2 (from -r requirements.txt (line 8)) Using cached https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/detectron2-0.1.2%2Bcu101-cp38-cp38-linux_x86_64.whl ... As a side note, you originally said "(not github)" in your title. The default source of packages installed using pip is hosted on PyPi: https://files.pythonhosted.org/. You can see the actual links when going to the Download Files section of a package in PyPi (example for Torch).
18
33
61,734,206
2020-5-11
https://stackoverflow.com/questions/61734206/how-can-i-use-prefer-binary-with-pip-in-python-3
In Python 2 I can install a set of packages via pip, preferring binary packages over source packages (meaning: fall back to source if no binary is found), with: (1) pip install --prefer-binary -r requirements.txt In Python 3 I can do this with: (2) pip3 install --only-binary=:all: -r requirements.txt But (1) is not exactly equal to (2), since the former says: Prefer binaries when installing; but if I don't find a binary option, then I'll go with source. The latter says: I will fail if no binaries are found; don't even try from source. So, from the docs it seems that one solution could be to just manually enter each package which should be considered for source installation - meaning: the "only-binary" flag can be provided multiple times on the command line and can thus handle special cases like that (by emptying it out, or giving specific package names to the binary packages). This answer details, to some extent, that approach: Make pip download prefer to download source-distributions (not wheels). However, I have a large number of both types of packages, so I need an automated way like the (1) approach. Question: How can I get similar automated behavior to (1) but in Python/pip 3? Solution: Pip is not Python - upgrade pip to version 20.x and use --prefer-binary.
Solution: upgrade pip to version 20.x and use --prefer-binary
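For example, assuming a requirements.txt in the current directory, something along these lines should do it once pip itself is recent enough:

    python3 -m pip install --upgrade pip                        # get a recent pip (>= 20.x)
    python3 -m pip install --prefer-binary -r requirements.txt  # wheels when available, sdists otherwise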
18
11
61,678,338
2020-5-8
https://stackoverflow.com/questions/61678338/why-is-pycharm-not-highlighting-todos
In my settings, I have the TODO bound to highlight in yellow, yet in the actual code it does not highlight. Here is a screenshot of my settings: Editor -> TODO Does anyone know how to fix this? EDIT: I even tried re-installing Pycharm and I still have the issue. EDIT 2: In the TODO Window, it is saying "0 TODO items found in 0 files". I believe this means it is looking in the wrong files to check for TODO items. However, when I try to find TODO items in "this file" it still doesn't work. Does anyone know why this is?
Go to Preferences (or Settings), Project Structure, and make sure the folder with your files is not in the "Excluded" tab's list. Click the folder you want to include and click on the "Sources" tab. Click Apply, then OK! It should work.
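Also worth double-checking, in case the tool window still reports 0 items: if I remember the defaults correctly, PyCharm only counts matches of the patterns from your screenshot (something like \btodo\b.*, case-insensitive) and only inside comments, so lines like these in an indexed file should both be highlighted and listed:

    # TODO: refactor this function
    x = 1  # todo lowercase is matched as well with the default case-insensitive pattern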
9
9
61,667,967
2020-5-7
https://stackoverflow.com/questions/61667967/how-can-i-swap-axis-in-a-torch-tensor
I have a torch tensor of size torch.Size([1, 128, 56, 128]). The 1 is the channel dimension, the two 128s are the width and height, and 56 is the number of stacked images. How can I rearrange the axes to get torch.Size([1, 56, 128, 128])?
You could simply use permute or transpose.
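For example, a minimal sketch with the shape from the question:

    import torch

    x = torch.randn(1, 128, 56, 128)

    y = x.permute(0, 2, 1, 3)  # explicitly reorder all axes
    z = x.transpose(1, 2)      # or just swap axes 1 and 2

    print(y.shape)  # torch.Size([1, 56, 128, 128])
    print(z.shape)  # torch.Size([1, 56, 128, 128])

Both return a non-contiguous view of the original data; call .contiguous() afterwards if a later operation requires it.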
8
4
61,664,673
2020-5-7
https://stackoverflow.com/questions/61664673/should-i-use-pip-or-pip3
Every single time I install a new Linux distribution, I do sudo apt-get install python3. However, once installed I always get confused. python is Python 2.7 and python3 is Python 3.x. But it also appears that pip is for Python 2 and pip3 for Python 3. That said, most tutorials I see on the Internet always use the traditional pip install even though they are about Python 3. How should I deal with this? Should I simply continue to put this annoying 3 every time I use Python (pip3, ipython3, python3...)? In most of what I've read, creating a symlink python->python3 is considered bad practice. Is that correct?
Use python3 -m pip or python -m pip. That will use the correct pip for the python version you want. This method is mentioned in the pip documentation: python -m pip executes pip using the Python interpreter you specified as python. So /usr/bin/python3.7 -m pip means you are executing pip for your interpreter located at /usr/bin/python3.7. Symlinking python->python3 is a bad idea because some programs might rely on python being python 2. Though, I have seen some Dockerfiles symlink python->python3, like TensorFlow's CPU dockerfile (it's less of an issue in a Docker image). Coincidentally, that same Dockerfile uses the python3 -m pip install syntax that I recommend.
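For example (requests is just a placeholder package name here):

    python3 -m pip --version      # shows the pip version and which Python it belongs to
    python3 -m pip install requests
    python2 -m pip --version      # same idea for Python 2, if it is installed

This way there is never any doubt about which interpreter the package lands in.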
9
18
61,689,391
2020-5-8
https://stackoverflow.com/questions/61689391/error-with-simple-subclassing-of-pathlib-path-no-flavour-attribute
I'm trying to subclass Path from pathlib, but I fail with the following error at instantiation: from pathlib import Path class Pl(Path): def __init__(self, *pathsegments: str): super().__init__(*pathsegments) Error at instantiation: AttributeError: type object 'Pl' has no attribute '_flavour' Update: I tried inheriting from WindowsPath and it still doesn't work: TypeError: object.__init__() takes exactly one argument (the instance to initialize)
I solved it. Monkey patching is the way to go. Define functions just like this: def method1(self, other): blah Path.method1 = method1 The fastest, easiest, most convenient solution, zero downsides. Autosuggest in PyCharm works well. UPDATE: I got THE solution (works with the linter and autosuggestion): class MyPath(type(Path()), Path): pass
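A minimal sketch of that last approach (the helper method and the example path are made up purely for illustration): type(Path()) evaluates to PosixPath or WindowsPath at runtime, which is what provides the missing _flavour:

    from pathlib import Path

    class MyPath(type(Path()), Path):
        def lowercase_name(self):  # hypothetical helper, just to show added behaviour
            return self.name.lower()

    p = MyPath("some/dir/File.TXT")
    print(p.lowercase_name())  # file.txt
    print(p.parent)            # some/dir

The usual Path machinery (joining with /, .exists(), .parent, ...) keeps working on MyPath instances.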
11
3
61,717,006
2020-5-10
https://stackoverflow.com/questions/61717006/pip-for-python-3-8
How do I install pip for Python 3.8? I made 3.8 my default Python version. sudo apt install python3.8-pip gives "unable to locate package python3.8-pip", and running python3.8 -m pip install [package] gives "no module named pip". I can't run sudo apt install python3-pip because it installs pip for Python 3.6.
Install pip the official way: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py python3.8 get-pip.py As for "I made 3.8 my default Python version": it depends on how you did that, but it might break something in your OS. For example, some packages on Ubuntu 18.04 might depend on python being python2.7 or python3 being python3.6 with some pip packages preinstalled.
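A quick way to verify the result, assuming get-pip.py ran without errors (requests is only an example package):

    python3.8 -m pip --version                 # should report pip running under Python 3.8
    python3.8 -m pip install --user requests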
49
67