question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
63,054,541 | 2020-7-23 | https://stackoverflow.com/questions/63054541/how-to-type-the-new-method-in-a-python-metaclass-so-that-mypy-is-happy | I am trying to type the __new__ method in a metaclass in Python so that it pleases mypy. The code would be something like this (taken from pep-3115 - "Metaclasses in Python 3000" and stripped down a bit): from __future__ import annotations from typing import Type # The metaclass class MetaClass(type): # The metaclass invocation def __new__(cls: Type[type], name: str, bases: tuple, classdict: dict) -> type: result = type.__new__(cls, name, bases, classdict) print('in __new__') return result class MyClass(metaclass=MetaClass): pass With this, mypy complains, Incompatible return type for "__new__" (returns "type", but must return a subtype of "MetaClass"), pointing at the line def __new__. I have also tried with: def __new__(cls: Type[MetaClass], name: str, bases: tuple, classdict: dict) -> MetaClass: Then mypy complains (about the return result line): Incompatible return value type (got "type", expected "MetaClass"). I have also tried with a type var (TSubMetaclass = TypeVar('TSubMetaclass', bound='MetaClass')) and the result is the same as using MetaClass. Using super().__new__ instead of type.__new__ gave similar results. What would be the correct way to do it? | First, the return type is MetaClass, not type. Second, you need to explicitly cast the return value, since type.__new__ doesn't know it is returning an instance of MetaClass. (Its specific return type is determined by its first argument, which isn't known statically.) from __future__ import annotations from typing import Type, cast # The metaclass class MetaClass(type): # The metaclass invocation def __new__(cls: Type[type], name: str, bases: tuple, classdict: dict) -> MetaClass: result = type.__new__(cls, name, bases, classdict) print('in __new__') return cast(MetaClass, result) class MyClass(metaclass=MetaClass): pass To use super, you need to adjust the static type of the cls parameter, and still pass cls explicitly because __new__ is a static method. class MetaClass(type): # The metaclass invocation def __new__(cls: Type[MetaClass], name: str, bases: tuple, classdict: dict) -> MetaClass: result = super().__new__(cls, name, bases, classdict) print('in __new__') return cast(MetaClass, result) | 11 | 10 |
63,051,253 | 2020-7-23 | https://stackoverflow.com/questions/63051253/using-class-or-static-method-as-default-factory-in-dataclasses | I want to populate an attribute of a dataclass using the default_factory method. However, since the factory method is only meaningful in the context of this specific class, I want to keep it inside the class (e.g. as a static or class method). For example: from dataclasses import dataclass, field from typing import List @dataclass class Deck: cards: List[str] = field(default_factory=self.create_cards) @staticmethod def create_cards(): return ['King', 'Queen'] However, I get this error (as expected) on line 6: NameError: name 'self' is not defined How can I overcome this issue? I don't want to move the create_cards() method out of the class. | One possible solution is to move it to __post_init__(self). For example: @dataclass class Deck: cards: List[str] = field(default_factory=list) def __post_init__(self): if not self.cards: self.cards = self.create_cards() def create_cards(self): return ['King', 'Queen'] Output: d1 = Deck() print(d1) # prints Deck(cards=['King', 'Queen']) d2 = Deck(["Captain"]) print(d2) # prints Deck(cards=['Captain']) | 11 | 8 |
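A related workaround that keeps the factory inside the class is to defer the name lookup with a lambda: the lambda body only runs when `Deck()` is instantiated, by which point the class name exists. This is a sketch of an alternative, not part of the accepted answer above:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Deck:
    # The lambda is only called at instantiation time, so "Deck" is
    # already defined when create_cards() gets looked up.
    cards: List[str] = field(default_factory=lambda: Deck.create_cards())

    @staticmethod
    def create_cards():
        return ['King', 'Queen']


print(Deck())  # Deck(cards=['King', 'Queen'])
```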
63,047,762 | 2020-7-23 | https://stackoverflow.com/questions/63047762/correct-way-to-register-a-parameter-for-model-in-pytorch | I tried to define a simple model in Pytorch. The model computes negative log prob for a gaussian distribution: import torch import torch.nn as nn class GaussianModel(nn.Module): def __init__(self): super(GaussianModel, self).__init__() self.register_parameter('mean', nn.Parameter(torch.zeros(1), requires_grad=True)) self.pdf = torch.distributions.Normal(self.state_dict()['mean'], torch.tensor([1.0])) def forward(self, x): return -self.pdf.log_prob(x) model = GaussianModel() Then I tried to optimize the mean parameter: optimizer = torch.optim.SGD(model.parameters(), lr=0.002) for _ in range(5): optimizer.zero_grad() nll = model(torch.tensor([3.0], requires_grad=True)) nll.backward() optimizer.step() print('mean : ', model.state_dict()['mean'], ' - Negative Loglikelihood : ', nll.item()) But it seems the gradient is zero and mean does not change: mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785 mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785 mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785 mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785 mean : tensor([0.]) - Negative Loglikelihood : 5.418938636779785 Did I register and use the mean parameter correctly? can autograd compute the gradient for torch.distributions.Normal.log_prob or I should implement the backward() for the model? | You're over complicating registering your parameter. You can just assign a new self.mean attribute to be an nn.Parameter then use it like a tensor for the most part. nn.Module overrides the __setattr__ method which is called every time you assign a new class attribute. One of the things it does is check to see if you assigned an nn.Parameter type, and if so, it adds it to the modules dictionary of registered parameters. Because of this, the easiest way to register your parameter is as follows: import torch import torch.nn as nn class GaussianModel(nn.Module): def __init__(self): super(GaussianModel, self).__init__() self.mean = nn.Parameter(torch.zeros(1)) self.pdf = torch.distributions.Normal(self.mean, torch.tensor([1.0])) def forward(self, x): return -self.pdf.log_prob(x) | 8 | 18 |
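To see the registration working end to end, a minimal training loop against the `GaussianModel` defined in the answer above (assumed to be in scope) should now show `mean` moving toward the observation instead of staying at zero:

```python
import torch

model = GaussianModel()
print(dict(model.named_parameters()))  # the 'mean' parameter is now registered

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(5):
    optimizer.zero_grad()
    nll = model(torch.tensor([3.0]))  # negative log-likelihood of the observation
    nll.backward()
    optimizer.step()
    print('mean:', model.mean.item(), '- NLL:', nll.item())
```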
63,047,555 | 2020-7-23 | https://stackoverflow.com/questions/63047555/no-chrome-binary-at-the-given-path-macos-selenium-python | I just started to work with selenium web driver with chromedrivers. I am using MacOS and When I try to set the path for the chrome browser as a binary path I always face the same error saying no chrome binary at so and so path given. import os from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys chrome_options = Options() chrome_options.binary_location = "/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome" driver = webdriver.Chrome(executable_path = os.path.abspath("drivers/chromedriver") , chrome_options = chrome_options) The given path in chrome_options.binary_location is correct and that is where I find my chrome browser. I also have included my chromedriver inside the project folder itself /Applications/Codes/Selenium/seleniumproject/ChromeBinary.py:11: DeprecationWarning: use options instead of chrome_options driver = webdriver.Chrome(executable_path = os.path.abspath("drivers/chromedriver") , chrome_options = chrome_options) Traceback (most recent call last): File "/Applications/Codes/Selenium/seleniumproject/ChromeBinary.py", line 11, in <module> driver = webdriver.Chrome(executable_path = os.path.abspath("drivers/chromedriver") , chrome_options = chrome_options) File "/Users/apple/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__ desired_capabilities=desired_capabilities) File "/Users/apple/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__ self.start_session(capabilities, browser_profile) File "/Users/apple/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/Users/apple/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute self.error_handler.check_response(response) File "/Users/apple/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: no chrome binary at /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome | This resolved the issue chrome_options.binary_location = "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" | 9 | 12 |
63,026,713 | 2020-7-22 | https://stackoverflow.com/questions/63026713/leading-zeros-are-not-allowed-in-python | I have a code for finding the possible combinations of a given string in the list. But facing an issue with leading zero getting error like SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers . How to overcome this issue as I wanted to pass values with leading zeros(Cannot edit the values manually all the time). Below is my code def permute_string(str): if len(str) == 0: return [''] prev_list = permute_string(str[1:len(str)]) next_list = [] for i in range(0,len(prev_list)): for j in range(0,len(str)): new_str = prev_list[i][0:j]+str[0]+prev_list[i][j:len(str)-1] if new_str not in next_list: next_list.append(new_str) return next_list list = [129, 831 ,014] length = len(list) i = 0 # Iterating using while loop while i < length: a = list[i] print(permute_string(str(a))) i += 1; | Finally, I got my answer. Below is the working code. def permute_string(str): if len(str) == 0: return [''] prev_list = permute_string(str[1:len(str)]) next_list = [] for i in range(0,len(prev_list)): for j in range(0,len(str)): new_str = prev_list[i][0:j]+str[0]+prev_list[i][j:len(str)-1] if new_str not in next_list: next_list.append(new_str) return next_list #Number should not be with leading Zero actual_list = '129, 831 ,054, 845,376,970,074,345,175,965,068,287,164,230,250,983,064' list = actual_list.split(',') length = len(list) i = 0 # Iterating using while loop while i < length: a = list[i] print(permute_string(str(a))) i += 1; | 14 | -4 |
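The underlying issue is that `014` is an invalid integer literal in Python 3; leading zeros only survive if the values are kept as strings. A small sketch, reusing the `permute_string` function from the question (assumed to be in scope):

```python
codes = ["129", "831", "014"]      # strings, so "014" keeps its leading zero
for code in codes:
    print(permute_string(code))

print(str(14).zfill(3))            # '014' -- re-pad if you only have the int
```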
63,038,345 | 2020-7-22 | https://stackoverflow.com/questions/63038345/how-to-make-fastapi-pickup-changes-in-an-api-routing-file-automatically-while-ru | I am running FastApi via docker by creating a service called ingestion-data in docker-compose. My Dockerfile : FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 # Environment variable for directory containing our app ENV APP /var/www/app ENV PYTHONUNBUFFERED 1 # Define working directory RUN mkdir -p $APP WORKDIR $APP COPY . $APP # Install missing dependencies RUN pip install -r requirements.txt AND my docker-compose.yml file version: '3.8' services: ingestion-service: build: context: ./app dockerfile: Dockerfile ports: - "80:80" volumes: - .:/app restart: always I am not sure why this is not picking up any change automatically when I make any change in any endpoint of my application. I have to rebuild my images and container every time. | Quick answer: Yes :) In the Dockerfile, you are copying your app into /var/www/app. The instructions from the Dockerfile are executed when you build your image (docker build -t <imgName>:<tag>). If you change the code later on, how could the image be aware of that? However, you can mount a volume (a directory) from your host machine into the container when you execute the docker run / docker-compose up command, right under /var/www/app. You'll then be able to change the code in your local directory and the changes will automatically be seen in the container as well. Perhaps you want to mount the current working directory (the one containing your app) at /var/www/app? volumes: - .:/var/www/app | 7 | 5 |
63,039,065 | 2020-7-22 | https://stackoverflow.com/questions/63039065/fig-ax-plt-subplots-meaning | I've been using matplotlib for a while and I don't actually understand what this line does. fig, ax = plt.subplots() Could someone explain? | plt.subplots() is basically a (very nice) shortcut for initializing a figure and subplot axes. See the docs here. In particular, >>> fig, ax = plt.subplots(1, 1) is essentially equivalent to >>> fig = plt.figure() >>> ax = fig.add_subplot(1, 1, 1) But plt.subplots() is most useful for constructing several axes at once, for example, >>> fig, axes = plt.subplots(2, 3) makes a figure with 2 rows and 3 columns of subplots, essentially equivalent to >>> import numpy as np >>> fig = plt.figure() >>> axes = np.empty((2, 3), dtype=object) >>> for i in range(2): ... for j in range(3): ... axes[i, j] = fig.add_subplot(2, 3, i*3 + j + 1) I say "essentially" because plt.subplots() also has some nice features, like sharex=True, which forces each of the subplots to share the same x axis (i.e., same axis limits / scales, etc.). This is my favorite way to initialize a figure because it gives you the figure and all of the axes handles in one smooth line. | 6 | 9 |
63,029,186 | 2020-7-22 | https://stackoverflow.com/questions/63029186/set-axis-in-altair-bar-chart-as-a-integer | I am trying to visualization a bar plot of many statistic data, and wanna set y-axis as a integer (there is no float type data in my dataset) This is one of the charts, which I want to change the axis. Image Link This is my python source code to visualization this chart def plot_3(data,x,y,width): selector = alt.selection_single(encodings=['x', 'color']) bars = alt.Chart(data).mark_bar(opacity=0.8).encode( alt.X('Tahun:O', title=''), alt.Y('N:Q', title=x, axis=alt.Axis(format='.0f')), # this format axis has no effect alt.Column('Keterangan:N', title=y),color=alt.condition(selector, 'Tahun:O', alt.value('lightgray')), tooltip = ['Tahun','Keterangan','N','Satuan'] ).add_selection(selector ).interactive( ).resolve_scale(x='independent') return bars.properties(width=width ) waiting for the solution, thank you :) | You could set tickMinStep=1. import altair as alt import pandas as pd source = pd.DataFrame({ 'a': ['A', 'B', 'C', 'D'], 'b': [2.0, 1.0, 1.0, 3.0] }) alt.Chart(source).mark_bar().encode( alt.X('a:N'), alt.Y('b:Q', axis=alt.Axis(tickMinStep=1)) ) | 6 | 9 |
63,038,379 | 2020-7-22 | https://stackoverflow.com/questions/63038379/add-title-to-networkx-plot | I want my code to create a plot with a title. With the code below the plot gets created but no title. Can someone clue me in on what I am doing wrong? import pandas as pd import networkx as nx from networkx.algorithms import community import matplotlib.pyplot as plt from datetime import datetime ... G = nx.from_pandas_edgelist((df), 'AXIS1', 'AXIS2'); nx.draw(G,with_labels=True) plt.title('TITLE') plt.axis('off') plt.savefig('test.png'); | I can only think of some intermediate step triggering a call to plt.show before your call to plt.title (though it doesn't look like that should be the case with the shared code). Try setting the title beforehand, and setting an ax, here's an example: plt.figure(figsize=(10,5)) ax = plt.gca() ax.set_title('Random graph') G = nx.fast_gnp_random_graph(10,0.2) nx.draw(G,with_labels=True, node_color='lightgreen', ax=ax) _ = ax.axis('off') | 10 | 10 |
63,026,648 | 2020-7-22 | https://stackoverflow.com/questions/63026648/errormessage-class-decimal-inexact-class-decimal-rounded-while | Code is below import json from decimal import Decimal from pprint import pprint import boto3 def update_movie(title, year, rating=None, plot=None, actors=None, dynamodb=None): if not dynamodb: dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Movies') response = table.update_item( Key={ 'year': year, 'title': title }, UpdateExpression="set info.rating=:r, info.plot=:p, info.actors=:a", ExpressionAttributeValues={ ':r': Decimal(rating), ':p': plot, ':a': actors }, ReturnValues="UPDATED_NEW" ) return response def lambda_handler(event, context): update_response = update_movie( "Rush", 2013, 8.3, "Car show", ["Daniel", "Chris", "Olivia"]) print("Update movie succeeded:") pprint(update_response, sort_dicts=False) While updating a key in the dynamodb i got the error below "errorMessage": "[<class 'decimal.Inexact'>, <class 'decimal.Rounded'>]", "errorType": "Inexact", If i am changing 8.3 to 8 My code is working fine update_response = update_movie( "Rush", 2013, 8.3, "Car show", ["Daniel", "Chris", "Olivia"]) print("Update movie succeeded:")``` | The problem is that DynamoDB's representation of floating-point numbers is different from Python's: DynamoDB represents floating point numbers with a decimal representation. So "8.3" can be represented exactly - with no rounding or inexactness. Python uses, as traditional, base-2 representation, so it can't represent 8.3 exactly. 8.3 is actually represented as 8.3000000000000007105 and is known to be inexact (python doesn't know which digits you intended at the very end). The SDK knows the floating-point 8.3 is inexact, and refuses to use it. The solution is to use the Decimal class as intended: It should be constructed with a string parameter, not a floating-point one. I.e., use Decimal("8.3") (note the quotes), not Decimal(8.3). In your code above fixing this is as trivial as changing 8.3 to "8.3", with quotes. That's the best approach. The other, not as good, approach is to do Decimal(str(8.3))), but be prepared for the potential of inexact representation of the numbers. Moreover, creating a Decimal with a string allows you to create numbers which are simply not supported in Python. For example, Decimal("3.1415926535897932384626433832795028841") will get you 38 decimal digits of precision (the maximum supported by DynamoDB) - something you cannot do in Python floating point. | 16 | 33 |
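The difference is easy to see with the standard decimal module alone; the 38-digit precision below mirrors the DynamoDB limit the answer mentions, and the Inexact/Rounded traps reproduce the same error classes from the original message. A sketch, independent of boto3:

```python
from decimal import Decimal, Context, Inexact, Rounded

print(Decimal("8.3"))   # 8.3 -- exact, built from a string
print(Decimal(8.3))     # 8.3000000000000007105... (the exact binary value of the float)

ctx = Context(prec=38, traps=[Inexact, Rounded])
print(ctx.create_decimal("8.3"))    # fits exactly, no signal raised
ctx.create_decimal_from_float(8.3)  # raises: the float cannot be represented exactly in 38 digits
```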
63,026,749 | 2020-7-22 | https://stackoverflow.com/questions/63026749/atom-cant-search-for-packages-or-themes-in-the-install-packages-section-of-set | I'm new to Atom (and relatively new to programming) and I just installed it about a few hours ago. I was trying to set it up by installing some new packages and themes in the Install Packages section of Settings. It was working fine for a while but now I'm getting errors when I try to search. A red box appears below the search field with this error: Searching for “pre” failed.Hide output… i.filter is not a function [object Object] I'm on Windows 10 with Atom 1.49.0 x64 installed. Python 3.8 is also installed with the path set. It seems to run code fine. I even found a theme on the Atom website and was able to install from there, I just can't search. I feel like I messed something up. I've searched Google, Stack Overflow, and the discussion section for Atom and I'm finding nothing that fixes this issue. I've restarted my computer and uninstalled/reinstalled Atom. I will say that after I reinstalled Atom, it still had all my setting changes I had made and it had all the packages that I had already installed still there. So maybe the uninstall didn't remove those with the program. But I wouldn't know where to go to clear that. Any help would be appreciated. Thanks! | Atom Server seems to have a problem today. Packages that were installed well a few days ago are not available today. | 7 | 3 |
63,027,848 | 2020-7-22 | https://stackoverflow.com/questions/63027848/discord-py-error-typeerror-new-got-an-unexpected-keyword-argument-deny | Yesterday, my code was perfectly fine. Everything was running... and it was going great. All of a sudden, this error: TypeError: __new__() got an unexpected keyword argument 'deny_new' pops up in my PyCharm console. I've looked it up on the internet but I've only found a similiar questions with zero answers to it. I hope the stackoverflow community will be able to help me. I did not change my code, all I did was, I tried to host my bot on heroku, and it did not go well. And after my first few attempts, I gave up. But, I found out my bot started going crazy and I couldn't run it anymore :<. Has anyone else experienced this and know how to fix it? UPDATE I just found out that for some reason, it only works on my test server but not any other servers. Traceback (most recent call last): File "C:/Users/danie/PyCharmProjects/skybot/skybotgaming.py", line 21, in <module> client.run('TOKEN') File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\client.py", line 640, in run return future.result() File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\client.py", line 621, in runner await self.start(*args, **kwargs) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\client.py", line 585, in start await self.connect(reconnect=reconnect) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\client.py", line 499, in connect await self._connect() File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\client.py", line 463, in _connect await self.ws.poll_event() File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\gateway.py", line 471, in poll_event await self.received_message(msg) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\gateway.py", line 425, in received_message func(data) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\state.py", line 750, in parse_guild_create guild = self._get_create_guild(data) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\state.py", line 725, in _get_create_guild guild._from_data(data) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\guild.py", line 297, in _from_data self._sync(guild) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\guild.py", line 328, in _sync self._add_channel(CategoryChannel(guild=self, data=c, state=self._state)) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\channel.py", line 726, in __init__ self._update(guild, data) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\channel.py", line 737, in _update self._fill_overwrites(data) File "C:\Users\danie\anaconda3\envs\discordbottest\lib\site-packages\discord\abc.py", line 294, in _fill_overwrites self._overwrites.append(_Overwrites(id=overridden_id, **overridden)) TypeError: __new__() got an unexpected keyword argument 'deny_new' I tried it with a different file and bot and I got the same results, this is like a problem with discord.py. 
This is literally my entire code import discord import random from discord.ext import commands import asyncio client = commands.Bot(command_prefix='{') client.remove_command('help') @client.event async def on_ready(): print("Signed in") @client.command() async def dm(ctx): await ctx.author.send("What up chump?") client.run('TOKEN') | Discord pushed a new change that changes the overwrites object. Just reinstall the latest version of Discord.py python3 -m pip install -U discord.py That's it. | 58 | 71 |
62,983,756 | 2020-7-19 | https://stackoverflow.com/questions/62983756/what-is-pyproject-toml-file-for | Background I was about to try Python package downloaded from GitHub, and realized that it did not have a setup.py, so I could not install it with pip install -e <folder> Instead, the package had a pyproject.toml file which seems to have very similar entries as the setup.py usually has. What I found Googling lead me into PEP-518 and it gives some critique to setup.py in Rationale section. However, it does not clearly tell that usage of setup.py should be avoided, or that pyproject.toml would as such completely replace setup.py. Questions Is the pyproject.toml something that is used to replace setup.py? Or should a package come with both, a pyproject.toml and a setup.py? How would one install a project with pyproject.toml in an editable state? | What is it for? Currently there are multiple packaging tools being popular in Python community and while setuptools still seems to be prevalent it's not a de facto standard anymore. This situation creates a number of hassles for both end users and developers: For setuptools-based packages installation from source / build of a distribution can fail if one doesn't have setuptools installed; pip doesn't support the installation of packages based on other packaging tools from source, so these tools had to generate a setup.py file to produce a compatible package. To build a distribution package one has to install the packaging tool first and then use tool-specific commands; If package author decides to change the packaging tool, workflows must be changed as well to use different tool-specific commands. pyproject.toml is a new configuration file introduced by PEP 517 and PEP 518 to solve these problems: ... think of the (rough) steps required to produce a built artifact for a project: The source checkout of the project. Installation of the build system. Execute the build system. This PEP [518] covers step #2. PEP 517 covers step #3 ... Any tool can also extend this file with its own section (table) to accept tool-specific options, but it's up to them and not required. PEP 621 suggests using pyproject.toml to specify package core metadata in static, tool-agnostic way. Which backends currently support this is shown in the following table: enscons flit_core hatchling pdm-backend poetry-core setuptools 0.26.0+ 3.2+ 0.3+ 0.3.0+ 2.0.0+ 61.0.0+ Does it replace setup.py? For setuptools-based packages pyproject.toml is not strictly meant to replace setup.py, but rather to ensure its correct execution if it's still needed. For other packaging tools – yes, it is: Where the build-backend key exists, this takes precedence and the source tree follows the format and conventions of the specified backend (as such no setup.py is needed unless the backend requires it). Projects may still wish to include a setup.py for compatibility with tools that do not use this spec. How to install a package in editable mode? Originally "editable install" was a setuptools-specific feature and as such it was not supported by PEP 517. Later on PEP 660 extended this concept to packages using pyproject.toml. There are two possible conditions for installing a package in editable mode using pip: Modern: Both the frontend (pip) and a backend must support PEP 660. pip supports it since version 21.3; Legacy: Packaging tool must provide a setup.py file which supports the develop command. Since version 21.1 pip can also install packages using only setup.cfg file in editable mode. 
The following table describes the support of editable installs by various backends: enscons flit_core hatchling pdm-backend poetry-core setuptools 0.28.0+ 3.4+ 0.3+ 0.8.0+ 1.0.8+ 64.0.0+ | 410 | 154 |
62,941,378 | 2020-7-16 | https://stackoverflow.com/questions/62941378/how-to-sort-glob-glob-numerically | I have a bunch of files sorted numerically on a folder, when I try to sort glob.glob I never get the files in the right order. file examples and expected output sorting folder ------ C:\Users\user\Desktop\folder\1 sample.mp3 C:\Users\user\Desktop\folder\2 sample.mp3 C:\Users\user\Desktop\folder\3 sample.mp3 C:\Users\user\Desktop\folder\4 sample.mp3 C:\Users\user\Desktop\folder\5 sample.mp3 ... over 800 files... What I tried but the output seems random files = sorted(glob.glob(f'{os.getcwd()}/*.mp3'), key=lambda x: (os.path.splitext(os.path.basename(x))[0])) C:\Users\user\Desktop\folder\1 speech.mp3 C:\Users\user\Desktop\folder\10 speech.mp3 C:\Users\user\Desktop\folder\100 speech.mp3 C:\Users\user\Desktop\folder\101 speech.mp3 C:\Users\user\Desktop\folder\102 speech.mp3 C:\Users\user\Desktop\folder\103 speech.mp3 C:\Users\user\Desktop\folder\104 speech.mp3 C:\Users\user\Desktop\folder\105 speech.mp3 C:\Users\user\Desktop\folder\106 speech.mp3 C:\Users\user\Desktop\folder\107 speech.mp3 C:\Users\user\Desktop\folder\108 speech.mp3 C:\Users\user\Desktop\folder\109 speech.mp3 C:\Users\user\Desktop\folder\11 speech.mp3 Is not a solution try to sorting by date or size. UPDATE all the previous answer worked great: l = sorted(glob.glob(f'{os.getcwd()}/*.mp3'), key=len) l = sorted(glob.glob(f'{os.getcwd()}/*.mp3'), key=lambda x: int(os.path.basename(x).split(' ')[0])) def get_key(fp): filename = os.path.splitext(os.path.basename(fp))[0] int_part = filename.split()[0] return int(int_part) l = sorted(glob.glob(f'{os.getcwd()}/*.mp3'), key=get_key) | The general answer would catch the number with re.match() and to convert that number (string) to integer with int(). Use these numbers to sort the files with sorted() Code import re import math from pathlib import Path file_pattern = re.compile(r'.*?(\d+).*?') def get_order(file): match = file_pattern.match(Path(file).name) if not match: return math.inf return int(match.groups()[0]) sorted_files = sorted(files, key=get_order) Example input Consider random files with one integer number in any part of the filename: ├── 012 some file.mp3 ├── 1 file.txt ├── 13 file.mp3 ├── 2 another file.txt ├── 3 file.csv ├── 4 file.mp3 ├── 6 yet another file.txt ├── 88 name of file.mp3 ├── and final 999.txt ├── and some another file7.txt ├── some 5 file.mp3 └── test.py Example output The get_order() could be used to sort the files, when passed to the sorted() builtin function in the key argument In [1]: sorted(files, key=get_order) Out[1]: ['C:\\tmp\\file_sort\\1 file.txt', 'C:\\tmp\\file_sort\\2 another file.txt', 'C:\\tmp\\file_sort\\3 file.csv', 'C:\\tmp\\file_sort\\4 file.mp3', 'C:\\tmp\\file_sort\\some 5 file.mp3', 'C:\\tmp\\file_sort\\6 yet another file.txt', 'C:\\tmp\\file_sort\\and some another file7.txt', 'C:\\tmp\\file_sort\\012 some file.mp3', 'C:\\tmp\\file_sort\\13 file.mp3', 'C:\\tmp\\file_sort\\88 name of file.mp3', 'C:\\tmp\\file_sort\\and final 999.txt', 'C:\\tmp\\file_sort\\test.py'] Short explanation The re.compile is used to give a small speed boost (if matching multiple times same pattern) The re.match is used to match the regular expression pattern. In the regex pattern, .*? means any character (.), zero or more times (*) non-greedily (?). \d+ matches any digit number one or more times, and the parenthesis just captures that match to the groups() list. 
In case of no match (no digits in the file), the match will be None, and the get_order gives infinity; these files are sorted arbitrarily, but one could add logic for these (was not asked in this question). The sorted() function takes key argument, which should be callable which takes one argument: The item in the list. In this case, it will be one of those file strings (full file path) The Path(file).name just takes the filename part (without suffix) from full file path. | 9 | 6 |
62,994,795 | 2020-7-20 | https://stackoverflow.com/questions/62994795/how-to-secure-fastapi-api-endpoint-with-jwt-token-based-authorization | I am a little new to FastAPI in python. I am building an API backend framework that needs to have JWT token based authorization. Now, I know how to generate JWT tokens, but not sure how to integrate that with API methods in fast api in Python. Any pointers will be really appreciated. | With some help from my friend and colleague, I was able to solve this problem, and wanted to share this solution with the community. This is how it looks like now: Python Code ---- import json import os import datetime from fastapi import HTTPException, Header from urllib.request import urlopen from jose import jwt from jose import exceptions as JoseExceptions from utils import logger AUTH0_DOMAIN = os.environ.get( 'AUTH0_DOMAIN', 'https://<domain>/<tenant-id>/') AUTH0_ISSUER = os.environ.get( 'AUTO0_ISSUER', 'https://sts.windows.net/<tenant>/') AUTH0_API_AUDIENCE = os.environ.get( 'AUTH0_API_AUDIENCE', '<audience url>') AZURE_OPENID_CONFIG = os.environ.get( 'AZURE_OPENID_CONFIG', 'https://login.microsoftonline.com/common/.well-known/openid-configuration') def get_token_auth_header(authorization): parts = authorization.split() if parts[0].lower() != "bearer": raise HTTPException( status_code=401, detail='Authorization header must start with Bearer') elif len(parts) == 1: raise HTTPException( status_code=401, detail='Authorization token not found') elif len(parts) > 2: raise HTTPException( status_code=401, detail='Authorization header be Bearer token') token = parts[1] return token def get_payload(unverified_header, token, jwks_properties): try: payload = jwt.decode( token, key=jwks_properties["jwks"], algorithms=jwks_properties["algorithms"], # ["RS256"] typically audience=AUTH0_API_AUDIENCE, issuer=AUTH0_ISSUER ) except jwt.ExpiredSignatureError: raise HTTPException( status_code=401, detail='Authorization token expired') except jwt.JWTClaimsError: raise HTTPException( status_code=401, detail='Incorrect claims, check the audience and issuer.') except Exception: raise HTTPException( status_code=401, detail='Unable to parse authentication token') return payload class AzureJWKS: def __init__(self, openid_config: str=AZURE_OPENID_CONFIG): self.openid_url = openid_config self._jwks = None self._signing_algorithms = [] self._last_updated = datetime.datetime(2000, 1, 1, 12, 0, 0) def _refresh_cache(self): openid_reader = urlopen(self.openid_url) azure_config = json.loads(openid_reader.read()) self._signing_algorithms = azure_config["id_token_signing_alg_values_supported"] jwks_url = azure_config["jwks_uri"] jwks_reader = urlopen(jwks_url) self._jwks = json.loads(jwks_reader.read()) logger.info(f"Refreshed jwks config from {jwks_url}.") logger.info("Supported token signing algorithms: {}".format(str(self._signing_algorithms))) self._last_updated = datetime.datetime.now() def get_jwks(self, cache_hours: int=24): logger.info("jwks config is out of date (last updated at {})".format(str(self._last_updated))) self._refresh_cache() return {'jwks': self._jwks, 'algorithms': self._signing_algorithms} jwks_config = AzureJWKS() async def require_auth(token: str = Header(...)): token = get_token_auth_header(token) try: unverified_header = jwt.get_unverified_header(token) except JoseExceptions.JWTError: raise HTTPException( status_code=401, detail='Unable to decode authorization token headers') payload = get_payload(unverified_header, token, jwks_config.get_jwks()) if not payload: raise 
HTTPException( status_code=401, detail='Invalid authorization token') return payload I hope the community gets benefited from this! | 25 | 13 |
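To wire this into an endpoint, the require_auth coroutine above can be used as an ordinary FastAPI dependency. A minimal sketch, assuming require_auth from the answer is importable; the route name and payload fields are illustrative only:

```python
from fastapi import Depends, FastAPI

app = FastAPI()


@app.get("/secure-data")
async def secure_data(payload: dict = Depends(require_auth)):
    # payload is the decoded JWT claims dict returned by require_auth
    return {"caller": payload.get("sub")}
```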
63,001,988 | 2020-7-20 | https://stackoverflow.com/questions/63001988/how-to-remove-background-of-images-in-python | I have a dataset that contains full width human images I want to remove all the backgrounds in those Images and just leave the full width person, my questions: is there any python code that does that ? and do I need to specify each time the coordinate of the person object? | Here is one way to use Python/OpenCV. Read the input Convert to gray Threshold and invert as a mask Optionally apply morphology to clean up any extraneous spots Anti-alias the edges Convert a copy of the input to BGRA and insert the mask as the alpha channel Save the results Input: import cv2 import numpy as np # load image img = cv2.imread('person.png') # convert to graky gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # threshold input image as mask mask = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)[1] # negate mask mask = 255 - mask # apply morphology to remove isolated extraneous noise # use borderconstant of black since foreground touches the edges kernel = np.ones((3,3), np.uint8) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # anti-alias the mask -- blur then stretch # blur alpha channel mask = cv2.GaussianBlur(mask, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT) # linear stretch so that 127.5 goes to 0, but 255 stays 255 mask = (2*(mask.astype(np.float32))-255.0).clip(0,255).astype(np.uint8) # put mask into alpha channel result = img.copy() result = cv2.cvtColor(result, cv2.COLOR_BGR2BGRA) result[:, :, 3] = mask # save resulting masked image cv2.imwrite('person_transp_bckgrnd.png', result) # display result, though it won't show transparency cv2.imshow("INPUT", img) cv2.imshow("GRAY", gray) cv2.imshow("MASK", mask) cv2.imshow("RESULT", result) cv2.waitKey(0) cv2.destroyAllWindows() Transparent result: | 16 | 42 |
62,980,464 | 2020-7-19 | https://stackoverflow.com/questions/62980464/cant-install-pyqt5-on-python-3-with-spyder-ide | So I'm trying to install the PyQt package so I just did this on my Anaconda Prompt: C:\Users\USER>pip install PyQt5 Collecting PyQt5 Using cached PyQt5-5.15.0-5.15.0-cp35.cp36.cp37.cp38-none-win_amd64.whl (64.5 MB) Collecting PyQt5-sip<13,>=12.8 Using cached PyQt5_sip-12.8.0-cp37-cp37m-win_amd64.whl (62 kB) ERROR: spyder 4.1.4 requires pyqtwebengine<5.13; python_version >= "3", which is not installed. ERROR: spyder 4.1.4 has requirement pyqt5<5.13; python_version >= "3", but you'll have pyqt5 5.15.0 which is incompatible. So I tried a different version with: pip install --upgrade --user pyqt5==5.12 And then this happened: Collecting pyqt5==5.12 Downloading PyQt5-5.12-5.12.1_a-cp35.cp36.cp37.cp38-none-win_amd64.whl (49.4 MB) |████████████████████████████████| 49.4 MB 43 kB/s Collecting PyQt5_sip<4.20,>=4.19.14 Downloading PyQt5_sip-4.19.19-cp37-none-win_amd64.whl (52 kB) |████████████████████████████████| 52 kB 3.8 MB/s ERROR: spyder 4.1.4 requires pyqtwebengine<5.13; python_version >= "3", which is not installed. | To install PyQt5 without errors try this. First install pyqtwebengine version 5.12 and then install pyqt5 version 5.12, using the following commands: pip install --upgrade --user pyqtwebengine==5.12 pip install --upgrade --user pyqt5==5.12 By this, I have successfully installed pyqt5 | 11 | 33 |
62,956,690 | 2020-7-17 | https://stackoverflow.com/questions/62956690/install-local-wheel-file-with-requirements-txt | Have a local package ABC-0.0.2-py3-none-any.whl. I want to install it in the different project through requrements.txt. e.g. requirements.txt ABC==0.0.2 Flask==1.1.2 flask-restplus==0.13.0 gunicorn==20.0.4 Is it possible to install the ABC package this way. ABC-0.0.2-py3-none-any.whl is included in source code. I had to pip install ABC-0.0.2-py3-none-any.whl separately. | This is called a direct reference. Since version 19.3, pip support this in both command line and requirement files. Check out an example from the official documentation. As to OP's question, simply put the local wheel's relative path, i.e., ./<my_wheel_dir>/<my_wheel.whl>, in requirement.txt, e.g., ./local_wheels/ABC-0.0.2-py3-none-any.whl Flask==1.1.2 flask-restplus==0.13.0 gunicorn==20.0.4 | 44 | 65 |
62,919,271 | 2020-7-15 | https://stackoverflow.com/questions/62919271/how-do-i-define-a-typing-union-dynamically | I am using Typeguard in a couple if projects for type checking at run time in Python. It works pretty well. I have encountered a situation where the type of a function parameter is a typing.Union made up of a few dynamically collected data types. E.g. def find_datatypes(): # some stuff ... return (str, int) # dynamically generated list / tuple datatypes = find_datatypes() Now I want to generate a typing.Union from datatypes for eventual use in a function. I expected unpacking syntax to work: my_union = typing.Union[*datatypes] @typeguard.typechecked def some_function(param: my_union): pass However, it did not: my_union = typing.Union[*datatypes] ^ SyntaxError: invalid syntax How would I achieve what I want? | You can kind of do it: my_union = typing.Union[datatypes] At runtime, thing[x, y] is already equivalent to thing[(x, y)]. That said, there are limitations to keep in mind. Particularly, when using string annotations, my_union will have to be available in some_function's global namespace for typeguard or anything else to be able to resolve the annotation at runtime. That restricts a lot of closure use cases, and a lot of attempts to add annotations dynamically. (String annotations may become the default eventually, but the devs are considering other options, and the plans are currently unclear.) Also, as you might expect, mypy will not consider any of this valid. | 24 | 18 |
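Putting this together with typeguard gives a small runnable sketch; the exact exception type raised on a mismatch depends on the typeguard version, so it is simply caught and printed here:

```python
import typing

import typeguard

datatypes = (str, int)              # collected dynamically elsewhere
my_union = typing.Union[datatypes]  # same as typing.Union[str, int]


@typeguard.typechecked
def some_function(param: my_union):
    return param


print(some_function(3), some_function("hi"))  # both pass the runtime check
try:
    some_function(1.5)
except Exception as exc:            # runtime type-check failure
    print(type(exc).__name__, exc)
```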
62,997,313 | 2020-7-20 | https://stackoverflow.com/questions/62997313/remove-first-item-from-python-dict | Good afternoon. I'm sorry if my question may seem dumb or if it has already been posted (I looked for it but didn't seem to find anything. If I'm wrong, please let me know: I'm new here and I may not be the best at searching for the correct questions). I was wondering if it was possible to remove (pop) a generic item from a dictionary in python. The idea came from the following exercise: Write a function to find the sum of the VALUES in a given dictionary. Obviously there are many ways to do it: summing dictionary.values(), creating a variable for the sum and iterate through the dict and updating it, etc.. But I was trying to solve it with recursion, with something like: def total_sum(dictionary): if dictionary == {}: return 0 return dictionary.pop() + total_sum(dictionary) The problem with this idea is that we don't know a priori which could be the "first" key of a dict since it's unordered: if it was a list, the index 0 would have been used and it all would have worked. Since I don't care about the order in which the items are popped, it would be enough to have a way to delete any of the items (a "generic" item). Do you think something like this is possible or should I necessarily make use of some auxiliary variable, losing the whole point of the use of recursion, whose advantage would be a very concise and simple code? I actually found the following solution, which though, as you can see, makes the code more complex and harder to read: I reckon it could still be interesting and useful if there was some built-in, simple and direct solution to that particular problem of removing the "first" item of a dict, although many "artificious", alternative solutions could be found. def total_sum(dictionary): if dictionary == {}: return 0 return dictionary.pop(list(dictionary.keys())[0]) + total_sum(dictionary) I will let you here a simple example dictionary on which the function could be applied, if you want to make some simple tests. ex_dict = {"milk":5, "eggs":2, "flour": 3} | ex_dict.popitem() it removes the last (most recently added) element from the dictionary | 8 | 3 |
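Applied to the original exercise, popitem() provides exactly the "remove some item" primitive the recursion needs. A small sketch; note it consumes the dict, so pass a copy if you still need the original:

```python
def total_sum(d):
    if not d:
        return 0
    _key, value = d.popitem()      # removes and returns the most recently added pair
    return value + total_sum(d)


ex_dict = {"milk": 5, "eggs": 2, "flour": 3}
print(total_sum(dict(ex_dict)))    # 10, and ex_dict itself is left untouched
```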
62,986,778 | 2020-7-19 | https://stackoverflow.com/questions/62986778/fastapi-handling-and-redirecting-404 | How can i redirect a request with FastAPI if there is a HTTPException? In Flask we can achieve that like this: @app.errorhandler(404) def handle_404(e): if request.path.startswith('/api'): return render_template('my_api_404.html'), 404 else: return redirect(url_for('index')) Or in Django we can use django.shortcuts: from django.shortcuts import redirect def view_404(request, exception=None): return redirect('/') How we can achieve that with FastAPI? | I know it's too late but this is the shortest approach to handle 404 exceptions in your personal way. Redirect from fastapi.responses import RedirectResponse @app.exception_handler(404) async def custom_404_handler(_, __): return RedirectResponse("/") Custom Jinja Template from fastapi.templating import Jinja2Templates from fastapi.staticfiles import StaticFiles templates = Jinja2Templates(directory="templates") app.mount("/static", StaticFiles(directory="static"), name="static") @app.exception_handler(404) async def custom_404_handler(request, __): return templates.TemplateResponse("404.html", {"request": request}) Serve HTML from file @app.exception_handler(404) async def custom_404_handler(_, __): return FileResponse('./path/to/404.html') Serve HTML directly from fastapi.responses import HTMLResponse response_404 = """ <!DOCTYPE html> <html lang="en"> <head> <title>Not Found</title> </head> <body> <p>The file you requested was not found.</p> </body> </html> """ @app.exception_handler(404) async def custom_404_handler(_, __): return HTMLResponse(response_404) Note: exception_handler decorator passes the current request and exception as arguments to the function. I've used _ and __ where the variables are unnecessary. | 19 | 20 |
63,006,575 | 2020-7-21 | https://stackoverflow.com/questions/63006575/what-is-the-difference-between-maxpool-and-maxpooling-layers-in-keras | I just started working with keras and noticed that there are two layers with very similar names for max-pooling: MaxPool and MaxPooling. I was surprised that I couldn't find the difference between these two on Google; so I am wondering what the difference is between the two if any. | They are the same... You can test it on your own import numpy as np import tensorflow as tf from tensorflow.keras.layers import * # create dummy data X = np.random.uniform(0,1, (32,5,3)).astype(np.float32) pool1 = MaxPool1D()(X) pool2 = MaxPooling1D()(X) tf.reduce_all(pool1 == pool2) # True I used 1D max-pooling but the same is valid for all the pooling operations (2D, 3D, avg, global pooling) | 29 | 21 |
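In recent TensorFlow/Keras releases the short and long names appear to be plain aliases of the same class, which you can check directly. A quick sanity check, assuming a TF 2.x install:

```python
import tensorflow as tf

# If these are aliases of one class, both identity checks should print True.
print(tf.keras.layers.MaxPool1D is tf.keras.layers.MaxPooling1D)
print(tf.keras.layers.MaxPool2D is tf.keras.layers.MaxPooling2D)
```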
62,956,054 | 2020-7-17 | https://stackoverflow.com/questions/62956054/how-to-install-pillow-on-termux | I am using Termux for quite a while now and would like to install "Pillow" library on it. Whenever I try to install Pillow using "pip" it shows me the below errors. At first I thought I need to upgrade pip, but it did not help. I have also cleared caches, to no avail. Error $ python3 -m pip install Pillow==7.2.0 Collecting Pillow==7.2.0 Using cached Pillow-7.2.0.tar.gz (39.1 MB) Using legacy setup.py install for Pillow, since package 'wheel' is not installed. Installing collected packages: Pillow Running setup.py install for Pillow ... error ERROR: Command errored out with exit status 1: command: /data/data/com.termux/files/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py'"'"'; __file__='"'"'/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /data/data/com.termux/files/usr/tmp/pip-record-rstq_guv/install-record.txt --single-version-externally-managed --compile --install-headers /data/data/com.termux/files/usr/include/python3.8/Pillow cwd: /data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/ Complete output (172 lines): running install running build running build_py creating build creating build/lib.linux-aarch64-3.8 creating build/lib.linux-aarch64-3.8/PIL copying src/PIL/BdfFontFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/BlpImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/BmpImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/BufrStubImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ContainerIO.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/CurImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/DcxImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/DdsImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/EpsImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ExifTags.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/FitsStubImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/FliImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/FontFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/FpxImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/FtexImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/GbrImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/GdImageFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/GifImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/GimpGradientFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/GimpPaletteFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/GribStubImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/Hdf5StubImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/IcnsImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/IcoImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/Image.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageChops.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageCms.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageColor.py -> 
build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageDraw.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageDraw2.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageEnhance.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageFilter.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageFont.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageGrab.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageMath.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageMode.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageMorph.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageOps.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImagePalette.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImagePath.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageQt.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageSequence.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageShow.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageStat.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageTk.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageTransform.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImageWin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/ImtImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/IptcImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/Jpeg2KImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/JpegImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/JpegPresets.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/McIdasImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/MicImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/MpegImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/MpoImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/MspImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PSDraw.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PaletteFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PalmImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PcdImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PcfFontFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PcxImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PdfImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PdfParser.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PixarImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PngImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PpmImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PsdImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/PyAccess.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/SgiImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/SpiderImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/SunImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/TarIO.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/TgaImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/TiffImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/TiffTags.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/WalImageFile.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/WebPImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/WmfImagePlugin.py -> 
build/lib.linux-aarch64-3.8/PIL copying src/PIL/XVThumbImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/XbmImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/XpmImagePlugin.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/__init__.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/__main__.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/_binary.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/_tkinter_finder.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/_util.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/_version.py -> build/lib.linux-aarch64-3.8/PIL copying src/PIL/features.py -> build/lib.linux-aarch64-3.8/PIL running egg_info writing src/Pillow.egg-info/PKG-INFO writing dependency_links to src/Pillow.egg-info/dependency_links.txt writing top-level names to src/Pillow.egg-info/top_level.txt reading manifest file 'src/Pillow.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.c' warning: no files found matching '*.h' warning: no files found matching '*.sh' warning: no previously-included files found matching '.appveyor.yml' warning: no previously-included files found matching '.coveragerc' warning: no previously-included files found matching '.editorconfig' warning: no previously-included files found matching '.readthedocs.yml' warning: no previously-included files found matching 'codecov.yml' warning: no previously-included files matching '.git*' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution no previously-included directories found matching '.ci' writing manifest file 'src/Pillow.egg-info/SOURCES.txt' running build_ext The headers or library files could not be found for jpeg, a required dependency when compiling Pillow from source. 
Please see the install instructions at: https://pillow.readthedocs.io/en/latest/installation.html Traceback (most recent call last): File "/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py", line 864, in <module> setup( File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/setuptools/__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "/data/data/com.termux/files/usr/lib/python3.8/distutils/core.py", line 148, in setup dist.run_commands() File "/data/data/com.termux/files/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/data/data/com.termux/files/usr/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/setuptools/command/install.py", line 61, in run return orig.install.run(self) File "/data/data/com.termux/files/usr/lib/python3.8/distutils/command/install.py", line 545, in run self.run_command('build') File "/data/data/com.termux/files/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/data/data/com.termux/files/usr/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/data/data/com.termux/files/usr/lib/python3.8/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/data/data/com.termux/files/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/data/data/com.termux/files/usr/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/data/data/com.termux/files/usr/lib/python3.8/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py", line 694, in build_extensions raise RequiredDependencyException(f) __main__.RequiredDependencyException: jpeg During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py", line 918, in <module> raise RequiredDependencyException(msg) __main__.RequiredDependencyException: The headers or library files could not be found for jpeg, a required dependency when compiling Pillow from source. Please see the install instructions at: https://pillow.readthedocs.io/en/latest/installation.html ---------------------------------------- ERROR: Command errored out with exit status 1: /data/data/com.termux/files/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py'"'"'; __file__='"'"'/data/data/com.termux/files/usr/tmp/pip-install-48geu0sg/Pillow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /data/data/com.termux/files/usr/tmp/pip-record-rstq_guv/install-record.txt --single-version-externally-managed --compile --install-headers /data/data/com.termux/files/usr/include/python3.8/Pillow Check the logs for full command output. How do I install Pillow on termux? | First download the wheel module from pypi pip install wheel Then install the libjpeg-turbo package. 
pkg install libjpeg-turbo And now install Pillow with: LDFLAGS="-L/system/lib/" CFLAGS="-I/data/data/com.termux/files/usr/include/" pip install Pillow Note: If you are using an aarch64 device, set the LDFLAGS flag to "-L/system/lib64/" related: Install Pillow in termux - Termux Wiki | 10 | 27 |
62,954,167 | 2020-7-17 | https://stackoverflow.com/questions/62954167/get-a-list-of-all-pytest-node-ids-using-python | Do you know if there is a way to collect all pytest node ids (as presented here) using the pytest python API ? I have found the --collect-only parameter of pytest, but I can't figure out how to get the output using python ? Thanks in advance ! | If you want to access nodeids programmatically, best is to write a small plugin that will store them on test collection. Example: import pytest class NodeidsCollector: def pytest_collection_modifyitems(self, items): self.nodeids = [item.nodeid for item in items] def main(): collector = NodeidsCollector() pytest.main(['--collect-only'], plugins=[collector]) # use collector.nodeids now If you want to avoid pytest output to the terminal, disable the terminal plugin in addition: pytest.main(['--collect-only', '-pno:terminal'], plugins=[collector]) | 8 | 6 |
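A small usage sketch of the collector plugin from the accepted answer above, wrapping it in a helper that returns the collected node ids. The `tests` path is a placeholder, and quieting output with `-q` is an assumption; the plugin hook itself is exactly the one shown in the answer.

```python
# Usage sketch of the NodeidsCollector plugin from the answer; "tests" is a
# placeholder path and -q simply quiets pytest's own output.
import pytest

class NodeidsCollector:
    def pytest_collection_modifyitems(self, items):
        # pytest calls this hook after collection; keep the node ids around.
        self.nodeids = [item.nodeid for item in items]

def list_node_ids(test_path="tests"):
    collector = NodeidsCollector()
    # --collect-only avoids actually running the tests.
    pytest.main(["--collect-only", "-q", test_path], plugins=[collector])
    return getattr(collector, "nodeids", [])

if __name__ == "__main__":
    for nodeid in list_node_ids():
        print(nodeid)
```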
62,953,704 | 2020-7-17 | https://stackoverflow.com/questions/62953704/valueerror-the-number-of-fixedlocator-locations-5-usually-from-a-call-to-set | this piece of code was working before, however, after creating a new environment , it stopped working for the line plt.xticks(x, months, rotation=25,fontsize=8) if i comment this line then no error, after putting this line error is thrown ValueError: The number of FixedLocator locations (5), usually from a call to set_ticks, does not match the number of ticklabels (12). import numpy as np import matplotlib.pyplot as plt dataset = df dfsize = dataset[df.columns[0]].size x = [] for i in range(dfsize): x.append(i) dataset.shape # dataset.dropna(inplace=True) dataset.columns.values var = "" for i in range(dataset.shape[1]): ## 1 is for column, dataset.shape[1] calculate length of col y = dataset[dataset.columns[i]].values y = y.astype(float) y = y.reshape(-1, 1) y.shape from sklearn.impute import SimpleImputer missingvalues = SimpleImputer(missing_values=np.nan, strategy='mean', verbose=0) missingvalues = missingvalues.fit(y) y = missingvalues.transform(y[:, :]) from sklearn.preprocessing import LabelEncoder, OneHotEncoder from sklearn.compose import ColumnTransformer labelencoder_x = LabelEncoder() x = labelencoder_x.fit_transform(x) from scipy.interpolate import * p1 = np.polyfit(x, y, 1) # from matplotlib.pyplot import * import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt plt.figure() plt.xticks(x, months, rotation=25,fontsize=8) #print("-->"+dataset.columns[i]) plt.suptitle(dataset.columns[i] + ' (xyz)', fontsize=10) plt.xlabel('month', fontsize=8) plt.ylabel('Age', fontsize=10) plt.plot(x, y, y, 'r-', linestyle='-', marker='o') plt.plot(x, np.polyval(p1, x), 'b-') y = y.round(decimals=2) for a, b in zip(x, y): plt.text(a, b, str(b), bbox=dict(facecolor='yellow', alpha=0.9)) plt.grid() # plt.pause(2) # plt.grid() var = var + "," + dataset.columns[i] plt.savefig(path3 + dataset.columns[i] + '_1.png') plt.close(path3 + dataset.columns[i] + '_1.png') plt.close('all') | I also stumbled across the error and found that making both your xtick_labels and xticks a list of equal length works. So in your case something like : def month(num): # returns month name based on month number num_elements = len(x) X_Tick_List = [] X_Tick_Label_List=[] for item in range (0,num_elements): X_Tick_List.append(x[item]) X_Tick_Label_List.append(month(item+1)) plt.xticks(ticks=X_Tick_List,labels=X_Tick_LabeL_List, rotation=25,fontsize=8) | 35 | 17 |
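The answer's snippet is only a sketch (it leaves `month()` undefined and mixes `X_Tick_Label_List` with `X_Tick_LabeL_List`), so here is a hedged, runnable version of the same idea: give `plt.xticks()` tick positions and labels of equal length. The use of `calendar.month_abbr` and the placeholder data are assumptions standing in for the question's own month labels and values.

```python
# Runnable sketch of the fix: pass plt.xticks() a ticks list and a labels list
# of the same length. calendar month names and the y values are placeholders.
import calendar
import matplotlib.pyplot as plt

x = list(range(12))                      # one tick position per month
y = [i ** 2 for i in x]                  # placeholder data, not the OP's values

tick_positions = x
tick_labels = [calendar.month_abbr[m + 1] for m in x]  # 'Jan', 'Feb', ...

plt.plot(x, y, "r-", marker="o")
plt.xticks(ticks=tick_positions, labels=tick_labels, rotation=25, fontsize=8)
plt.show()
```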
62,917,910 | 2020-7-15 | https://stackoverflow.com/questions/62917910/how-can-i-export-pandas-dataframe-to-google-sheets-using-python | I managed to read data from a Google Sheet file using this method: # ACCES GOOGLE SHEET googleSheetId = 'myGoogleSheetId' workSheetName = 'mySheetName' URL = 'https://docs.google.com/spreadsheets/d/{0}/gviz/tq?tqx=out:csv&sheet={1}'.format( googleSheetId, workSheetName ) df = pd.read_csv(URL) However, after generating a pd.DataFrame that fetches info from the web using selenium, I need to append that data to the Google Sheet. Question: Do you know a way to export that DataFrame to Google Sheets? | Yes, there is a module called "gspread". Just install it with pip and import it into your script. Here you can find the documentation: https://gspread.readthedocs.io/en/latest/ In particular their section on Examples of gspread with pandas. worksheet.update([dataframe.columns.values.tolist()] + dataframe.values.tolist()) | 28 | 22 |
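To put the `worksheet.update(...)` line quoted in the answer into context, here is a hedged end-to-end sketch. The credentials file and spreadsheet name are placeholders, and `gspread.service_account()` assumes a reasonably recent gspread release (3.6 or later).

```python
# Sketch of exporting a DataFrame with gspread, built around the update() call
# quoted in the answer. File name and spreadsheet title are placeholders.
import gspread
import pandas as pd

dataframe = pd.DataFrame({"num1": [5], "num2": [10]})

gc = gspread.service_account(filename="service_account.json")  # hypothetical path
worksheet = gc.open("my-spreadsheet").sheet1                    # hypothetical title

# First row becomes the header, the rest are the DataFrame values.
worksheet.update([dataframe.columns.values.tolist()] + dataframe.values.tolist())
```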
62,951,520 | 2020-7-17 | https://stackoverflow.com/questions/62951520/pythons-lru-cache-on-inner-function-doesnt-seem-to-work | I'm trying to use functools.lru_cache to cache the result of an inner function, but the cache doesn't seem to work as expected. I have a function that performs some logic and then calls a rather expensive function. I'd like to cache the result of the expensive function call and though I'd just apply the lru_cache to an inner function. Unfortunately the behavior isn't as expected - the expensive function gets called every time even though the arguments to the inner function are identical. I've created a (simplified) test case to show the behavior: import unittest from functools import lru_cache from unittest.mock import patch def expensive_function(parameter: str) -> str: return parameter def partially_cached(some_parameter: str) -> str: @lru_cache def inner_function(parameter: str): return expensive_function(parameter) result = inner_function(some_parameter) print(inner_function.cache_info()) return result class CacheTestCase(unittest.TestCase): def test_partially_cached(self): with patch(self.__module__ + ".expensive_function") as expensive_mock: expensive_mock.return_value = "a" self.assertEqual(partially_cached("a"), "a") self.assertEqual(partially_cached("a"), "a") # If the cache works, I expect the expensive function # to be called just once for the same parameter expensive_mock.assert_called_once() if __name__ == "__main__": unittest.main() (In this case I wouldn't need the inner function, but as I said - it's simplified) Unfortunately the test fails python3 /scratch/test_cache.py CacheInfo(hits=0, misses=1, maxsize=128, currsize=1) CacheInfo(hits=0, misses=1, maxsize=128, currsize=1) F ====================================================================== FAIL: test_partially_cached (__main__.CacheTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): [...] AssertionError: Expected 'expensive_function' to have been called once. Called 2 times. Calls: [call('a'), call('a')]. ---------------------------------------------------------------------- Ran 1 test in 0.004s FAILED (failures=1) I probably have a misunderstanding concerning either inner functions or the lru_cache, but I'm not sure which it is - I appreciate all the help I can get. | It does not work as intended because the inner_function gets redefined each time partially_cached is called, and so is the cached version. So each cached version gets called only once. See memoizing-decorator-keeping-stored-values Additionally, if you mock a decorated function you need to apply the decorator again. See how-to-mock-a-decorated-function If you decorate at the outer level, or decorate another extracted to outer level function, it will work, if you don't break it with mocking. | 12 | 9 |
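A minimal sketch of the fix the answer describes: move the cached function out to module level so the same `lru_cache` instance survives across calls instead of being re-created inside `partially_cached` every time.

```python
# Sketch of the suggested fix: decorate once at module level so the cache is
# shared by every call to partially_cached().
from functools import lru_cache

def expensive_function(parameter: str) -> str:
    return parameter

@lru_cache(maxsize=None)
def cached_expensive(parameter: str) -> str:
    return expensive_function(parameter)

def partially_cached(some_parameter: str) -> str:
    # The same lru_cache instance now persists between calls.
    return cached_expensive(some_parameter)
```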
63,011,748 | 2020-7-21 | https://stackoverflow.com/questions/63011748/contour-iso-z-or-threshold-lines-in-seaborn-heatmap | Is there a way to automatically add contour (iso-z) lines to a heatmap with concrete x and y values? Please consider the official seaborn flights dataset: import seaborn as sns flights = sns.load_dataset("flights") flights = flights.pivot("month", "year", "passengers") sns.heatmap(flights, annot=True, fmt='d') I imagine the step-like lines to look something like shown below (lhs), indicating thresholds (here 200 and 400). They do not need to be interpolated or smoothed in any way, although that would do as well, if easier to realize. If the horizontal lines complicate the solution further, they too could be omitted (rhs). So far, I have tried to add hlines and vlines manually, to overlay a kdeplot etc. without the desired result. Could somebody hint me into the right direction? | You can use aLineCollection: import seaborn as sns import numpy as np from matplotlib.collections import LineCollection flights = sns.load_dataset("flights") flights = flights.pivot("month", "year", "passengers") ax = sns.heatmap(flights, annot=True, fmt='d') def add_iso_line(ax, value, color): v = flights.gt(value).diff(axis=1).fillna(False).to_numpy() h = flights.gt(value).diff(axis=0).fillna(False).to_numpy() try: l = np.argwhere(v.T) vlines = np.array(list(zip(l, np.stack((l[:,0], l[:,1]+1)).T))) l = np.argwhere(h.T) hlines = np.array(list(zip(l, np.stack((l[:,0]+1, l[:,1])).T))) lines = np.vstack((vlines, hlines)) ax.add_collection(LineCollection(lines, lw=3, colors=color )) except: pass add_iso_line(ax, 200, 'b') add_iso_line(ax, 400, 'y') | 8 | 8 |
62,961,627 | 2020-7-17 | https://stackoverflow.com/questions/62961627/oserror-error-no-file-named-pytorch-model-bin-tf-model-h5-model-ckpt-in | When I load the BERT pretrained model online I get this error OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory uncased_L-12_H-768_A-12 or 'from_tf' set to False what should I do? | Here is what I found. Go to the following link, and click the circled to download, rename it to pytorch_model.bin, and drop it to the directory of biobert-nli, then the issue is resolved. Didn't figure out how to clone from the link. https://huggingface.co/gsarti/biobert-nli/tree/main | 12 | 7 |
63,001,429 | 2020-7-20 | https://stackoverflow.com/questions/63001429/what-is-the-difference-between-pathlib-glob-and-iterdir | Suppose I'm writing code using pathlib and I want to iter over all the files in the same level of a directory. I can do this in two ways: p = pathlib.Path('/some/path') for f in p.iterdir(): print(f) p = pathlib.Path('/some/path') for f in p.glob('*'): print(f) Is one of the options better in any way? | Expansion of my comment: Why put the API to extra work parsing and testing against a filter pattern when you could just... not? glob is better when you need to make use of the filtering feature and the filter is simple and string-based, as it simplifies the work. Sure, hand-writing simple matches (filtering iterdir via if path.endswith('.txt'): instead of glob('*.txt')) might be more efficient than the regex based pattern matching glob hides, but it's generally not worth the trouble of reinventing the wheel given that disk I/O is orders of magnitude slower. But if you don't need the filtering functionality at all, don't use it. glob is gaining you nothing in terms of code simplicity or functionality, and hurting performance, so just use iterdir. | 28 | 32 |
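A tiny sketch contrasting the two calls discussed above; `/some/path` is a placeholder directory.

```python
# iterdir() when no filtering is needed, glob() when a pattern does the work.
import pathlib

p = pathlib.Path("/some/path")  # placeholder

# No filter: iterdir() yields every direct child of the directory.
for child in p.iterdir():
    print(child)

# Pattern filter: glob() handles the matching for you.
for txt_file in p.glob("*.txt"):
    print(txt_file)
```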
62,912,397 | 2020-7-15 | https://stackoverflow.com/questions/62912397/open3d-visualizing-multiple-point-clouds-as-a-video-animation | I have generated multiple point clouds using a RGB+depth video, and would like to visualize the multiple point clouds as a video or animation. Currently I am using Python, part of my code is as follows: for i in range(1,10) pcd = Track.create_pcd(i) o3d.visualization.draw_geometries([pcd]) pcd_list.append(pcd) When I use draw_geometries or draw_geometries_with_animation_callback, it seems they could not display a list of point clouds: o3d.visualization.draw_geometries([pcd_list]) or def rotate_view(vis): ctr = vis.get_view_control() ctr.rotate(10.0, 0.0) return False o3d.visualization.draw_geometries_with_animation_callback([pcd_list],rotate_view) It gave the following error: TypeError: draw_geometries(): incompatible function arguments. The following argument types are supported: (geometry_list: List[open3d.open3d_pybind.geometry.Geometry], window_name: str = ‘Open3D’, width: int = 1920, height: int = 1080, left: int = 50, top: int = 50, point_show_normal: bool = False, mesh_show_wireframe: bool = False, mesh_show_back_face: bool = False) -> None Is there any example of how to export list of point cloud into a video, like setting a viewer, and displaying each point cloud with a waitkey of 0.5 seconds, and then save as a video file (.mp4/.avi)? And also to get and then set a fixed viewpoint of the point clouds in the video? Thank you very much! | You can use Open3D Non-blocking visualization. It'll be like this vis = o3d.visualization.Visualizer() vis.create_window() # geometry is the point cloud used in your animaiton geometry = o3d.geometry.PointCloud() vis.add_geometry(geometry) for i in range(icp_iteration): # now modify the points of your geometry # you can use whatever method suits you best, this is just an example geometry.points = pcd_list[i].points vis.update_geometry(geometry) vis.poll_events() vis.update_renderer() | 8 | 6 |
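The answer's loop iterates over an undefined `icp_iteration`; below is a hedged restatement that walks the `pcd_list` built in the question instead, pausing roughly half a second per frame as the question suggests. `pcd_list` is assumed to exist from the question's own code.

```python
# Non-blocking playback of a list of point clouds, following the answer's
# pattern but looping over pcd_list (assumed built earlier) with a 0.5 s pause.
import time
import open3d as o3d

vis = o3d.visualization.Visualizer()
vis.create_window()

geometry = o3d.geometry.PointCloud()
vis.add_geometry(geometry)

for pcd in pcd_list:                 # pcd_list comes from the question's loop
    geometry.points = pcd.points
    vis.update_geometry(geometry)
    vis.poll_events()
    vis.update_renderer()
    time.sleep(0.5)

vis.destroy_window()
```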
62,976,648 | 2020-7-19 | https://stackoverflow.com/questions/62976648/architecture-flask-vs-fastapi | I have been tinkering around Flask and FastAPI to see how it acts as a server. One of the main things that I would like to know is how Flask and FastAPI deal with multiple requests from multiple clients. Especially when the code has efficiency issues (long database query time). So, I tried making a simple code to understand this problem. The code is simple, when the client access the route, the application sleeps for 10 seconds before it returns results. It looks something like this: FastAPI import uvicorn from fastapi import FastAPI from time import sleep app = FastAPI() @app.get('/') async def root(): print('Sleeping for 10') sleep(10) print('Awake') return {'message': 'hello'} if __name__ == "__main__": uvicorn.run(app, host="127.0.0.1", port=8000) Flask from flask import Flask from flask_restful import Resource, Api from time import sleep app = Flask(__name__) api = Api(app) class Root(Resource): def get(self): print('Sleeping for 10') sleep(10) print('Awake') return {'message': 'hello'} api.add_resource(Root, '/') if __name__ == "__main__": app.run() Once the applications are up, I tried accessing them at the same time through 2 different chrome clients. The below are the results: FastAPI Flask As you can see, for FastAPI, the code first waits 10 seconds before processing the next request. Whereas for Flask, the code processes the next request while the 10-second sleep is still happening. Despite doing a bit of googling, there is not really a straight answer on this topic. If anyone has any comments that can shed some light on this, please drop them in the comments. Your opinions are all appreciated. Thank you all very much for your time. EDIT An update on this, I am exploring a bit more and found this concept of Process manager. For example, we can run uvicorn using a process manager (gunicorn). By adding more workers, I am able to achieve something like Flask. Still testing the limits of this, however. https://www.uvicorn.org/deployment/ Thanks to everyone who left comments! Appreciate it. 
| This seemed a little interesting, so i ran a little tests with ApacheBench: Flask from flask import Flask from flask_restful import Resource, Api app = Flask(__name__) api = Api(app) class Root(Resource): def get(self): return {"message": "hello"} api.add_resource(Root, "/") FastAPI from fastapi import FastAPI app = FastAPI(debug=False) @app.get("/") async def root(): return {"message": "hello"} I ran 2 tests for FastAPI, there was a huge difference: gunicorn -w 4 -k uvicorn.workers.UvicornWorker fast_api:app uvicorn fast_api:app --reload So here is the benchmarking results for 5000 requests with a concurrency of 500: FastAPI with Uvicorn Workers Concurrency Level: 500 Time taken for tests: 0.577 seconds Complete requests: 5000 Failed requests: 0 Total transferred: 720000 bytes HTML transferred: 95000 bytes Requests per second: 8665.48 [#/sec] (mean) Time per request: 57.700 [ms] (mean) Time per request: 0.115 [ms] (mean, across all concurrent requests) Transfer rate: 1218.58 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 6 4.5 6 30 Processing: 6 49 21.7 45 126 Waiting: 1 42 19.0 39 124 Total: 12 56 21.8 53 127 Percentage of the requests served within a certain time (ms) 50% 53 66% 64 75% 69 80% 73 90% 81 95% 98 98% 112 99% 116 100% 127 (longest request) FastAPI - Pure Uvicorn Concurrency Level: 500 Time taken for tests: 1.562 seconds Complete requests: 5000 Failed requests: 0 Total transferred: 720000 bytes HTML transferred: 95000 bytes Requests per second: 3200.62 [#/sec] (mean) Time per request: 156.220 [ms] (mean) Time per request: 0.312 [ms] (mean, across all concurrent requests) Transfer rate: 450.09 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 8 4.8 7 24 Processing: 26 144 13.1 143 195 Waiting: 2 132 13.1 130 181 Total: 26 152 12.6 150 203 Percentage of the requests served within a certain time (ms) 50% 150 66% 155 75% 158 80% 160 90% 166 95% 171 98% 195 99% 199 100% 203 (longest request) For Flask: Concurrency Level: 500 Time taken for tests: 27.827 seconds Complete requests: 5000 Failed requests: 0 Total transferred: 830000 bytes HTML transferred: 105000 bytes Requests per second: 179.68 [#/sec] (mean) Time per request: 2782.653 [ms] (mean) Time per request: 5.565 [ms] (mean, across all concurrent requests) Transfer rate: 29.13 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 87 293.2 0 3047 Processing: 14 1140 4131.5 136 26794 Waiting: 1 1140 4131.5 135 26794 Total: 14 1227 4359.9 136 27819 Percentage of the requests served within a certain time (ms) 50% 136 66% 148 75% 179 80% 198 90% 295 95% 7839 98% 14518 99% 27765 100% 27819 (longest request) Total results Flask: Time taken for tests: 27.827 seconds FastAPI - Uvicorn: Time taken for tests: 1.562 seconds FastAPI - Uvicorn Workers: Time taken for tests: 0.577 seconds With Uvicorn Workers FastAPI is nearly 48x faster than Flask, which is very understandable. ASGI vs WSGI, so i ran with 1 concurreny: FastAPI - UvicornWorkers: Time taken for tests: 1.615 seconds FastAPI - Pure Uvicorn: Time taken for tests: 2.681 seconds Flask: Time taken for tests: 5.541 seconds I ran more tests to test out Flask with a production server. 
5000 Request 1000 Concurrency Flask with Waitress Server Software: waitress Server Hostname: 127.0.0.1 Server Port: 8000 Document Path: / Document Length: 21 bytes Concurrency Level: 1000 Time taken for tests: 3.403 seconds Complete requests: 5000 Failed requests: 0 Total transferred: 830000 bytes HTML transferred: 105000 bytes Requests per second: 1469.47 [#/sec] (mean) Time per request: 680.516 [ms] (mean) Time per request: 0.681 [ms] (mean, across all concurrent requests) Transfer rate: 238.22 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 4 8.6 0 30 Processing: 31 607 156.3 659 754 Waiting: 1 607 156.3 658 753 Total: 31 611 148.4 660 754 Percentage of the requests served within a certain time (ms) 50% 660 66% 678 75% 685 80% 691 90% 702 95% 728 98% 743 99% 750 100% 754 (longest request) Gunicorn with Uvicorn Workers Server Software: uvicorn Server Hostname: 127.0.0.1 Server Port: 8000 Document Path: / Document Length: 19 bytes Concurrency Level: 1000 Time taken for tests: 0.634 seconds Complete requests: 5000 Failed requests: 0 Total transferred: 720000 bytes HTML transferred: 95000 bytes Requests per second: 7891.28 [#/sec] (mean) Time per request: 126.722 [ms] (mean) Time per request: 0.127 [ms] (mean, across all concurrent requests) Transfer rate: 1109.71 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 28 13.8 30 62 Processing: 18 89 35.6 86 203 Waiting: 1 75 33.3 70 171 Total: 20 118 34.4 116 243 Percentage of the requests served within a certain time (ms) 50% 116 66% 126 75% 133 80% 137 90% 161 95% 189 98% 217 99% 230 100% 243 (longest request) Pure Uvicorn, but this time 4 workers uvicorn fastapi:app --workers 4 Server Software: uvicorn Server Hostname: 127.0.0.1 Server Port: 8000 Document Path: / Document Length: 19 bytes Concurrency Level: 1000 Time taken for tests: 1.147 seconds Complete requests: 5000 Failed requests: 0 Total transferred: 720000 bytes HTML transferred: 95000 bytes Requests per second: 4359.68 [#/sec] (mean) Time per request: 229.375 [ms] (mean) Time per request: 0.229 [ms] (mean, across all concurrent requests) Transfer rate: 613.08 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 20 16.3 17 70 Processing: 17 190 96.8 171 501 Waiting: 3 173 93.0 151 448 Total: 51 210 96.4 184 533 Percentage of the requests served within a certain time (ms) 50% 184 66% 209 75% 241 80% 260 90% 324 95% 476 98% 504 99% 514 100% 533 (longest request) | 51 | 59 |
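One detail worth illustrating alongside these numbers: the question's FastAPI endpoint used a plain `time.sleep()` inside an `async def`, which blocks the event loop and serializes requests. The sketch below shows the two usual remedies; it is an illustration of that behavior, not part of the benchmark above.

```python
# Why the original example serialized requests, and two common remedies:
# await an async sleep, or declare the route with plain `def` so FastAPI
# runs it in its thread pool.
import asyncio
from time import sleep
from fastapi import FastAPI

app = FastAPI()

@app.get("/async-friendly")
async def async_friendly():
    await asyncio.sleep(10)      # yields control; other requests keep flowing
    return {"message": "hello"}

@app.get("/threadpool")
def threadpool_route():
    sleep(10)                    # plain `def` routes run in a thread pool
    return {"message": "hello"}
```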
62,934,384 | 2020-7-16 | https://stackoverflow.com/questions/62934384/how-to-add-timestamp-to-each-request-in-uvicorn-logs | When I run my FastAPI server using uvicorn: uvicorn main:app --host 0.0.0.0 --port 8000 --log-level info The log I get after running the server: INFO: Started server process [405098] INFO: Waiting for application startup. INFO: Connect to database... INFO: Successfully connected to the database! INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) INFO: 122.179.31.158:54604 - "GET /api/hello_world?num1=5&num2=10 HTTP/1.1" 200 OK How do I get the time stamp along with the request logging? Like: INFO: "2020-07-16:23:34:78" - 122.179.31.158:54604 - "GET /api/hello_world?num1=5&num2=10 HTTP/1.1" 200 OK | You can use Uvicorn's LOGGING_CONFIG import uvicorn from uvicorn.config import LOGGING_CONFIG from fastapi import FastAPI app = FastAPI() def run(): LOGGING_CONFIG["formatters"]["default"]["fmt"] = "%(asctime)s [%(name)s] %(levelprefix)s %(message)s" uvicorn.run(app) if __name__ == '__main__': run() Which will return uvicorn log with the timestamp 2020-08-20 02:33:53,765 [uvicorn.error] INFO: Started server process [107131] 2020-08-20 02:33:53,765 [uvicorn.error] INFO: Waiting for application startup. 2020-08-20 02:33:53,765 [uvicorn.error] INFO: Application startup complete. 2020-08-20 02:33:53,767 [uvicorn.error] INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) | 22 | 18 |
62,979,389 | 2020-7-19 | https://stackoverflow.com/questions/62979389/is-there-a-best-practice-to-make-a-package-pep-561-compliant | I'm writing a Python project which is published as a package to a pypi-like repository (using setuptools and twine). I use type hints in my code. The issue is, when importing the package from a different project and running mypy, I get the following error: error: Skipping analyzing 'XXX': found module but no type hints or library stubs As I understand, I got this error because my package was not compliant with https://www.python.org/dev/peps/pep-0561/ . After some searching online, I didn't find a non-manual way to add the required files to the package. I resorted to writing my own code to: Run stubgen to create stub files. Create py.typed files in every directory. Collect all the created files in a dict in the package_data field in the setup.py file. This code solved the issue and mypy runs without errors. But this feels very wrong to me. Is there a standard tool for making a package PEP-561 compliant? Am I missing something else? | As mentioned before, you need to add the py.typed file in the package folder of the module. You also need to add that file to the setup.py package_data - otherwise the file would not be part of the package when you deploy it. I personally put the type annotations in the code and don't create extra stub files - but that is only possible from Python 3.4 upwards. If you want to make Python 2.7 compatible code, you cannot use inline type annotations - in that case you can use stub files. If you want to type annotate a third-party library, you can write a *.pyi file for the functions you use from that library. That can be a bit tricky, because mypy must find that *.pyi file only ONCE in the mypy path. So I handle it this way: for local testing, the mypy path is set to a directory where I collect all the third-party stubs; for testing on Travis, I have a subdirectory in the package with the stubs needed for that module, and set the mypy path accordingly. | 43 | 30 |
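A minimal sketch of the setup.py wiring the answer describes: ship an empty `py.typed` marker inside the package via `package_data`. The package name is a placeholder, and the `zip_safe=False` line follows PEP 561's recommendation for packages that carry inline type information.

```python
# Minimal setup.py sketch for shipping inline annotations per PEP 561.
# "mypackage" is a placeholder; an empty py.typed file must live inside
# the package directory itself.
from setuptools import setup, find_packages

setup(
    name="mypackage",
    version="0.1.0",
    packages=find_packages(),
    package_data={"mypackage": ["py.typed"]},
    zip_safe=False,  # PEP 561 recommends non-zipped installs for type info
)
```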
62,986,053 | 2020-7-19 | https://stackoverflow.com/questions/62986053/breaking-cycles-in-a-digraph-with-the-condition-of-preserving-connectivity-for-c | I have a digraph consisting of a strongly connected component (blue) and a set of nodes (orange) that are the inputs to it. The challenge is to break as many cycles as possible with a minimum of removed edges. In addition, there must be a path from each orange node to each blue node. I solve the problem with a brute force: Removing the random edge Check for a path from every orange node to every blue one. If everything is ok, I add an edge to the list and count the number of cycles. I return the edge to the graph and go to step 1 until I iterate over all the edges Next, from the resulting list (of length n) I generate combinations C (n, k) where k = {2 ... n} I perform operations 1, 2, 3 for all combinations of edges The core of the code looks like this: for level in range(2, len(edges)): stop = True edges2 = combinations(edges,level) for i, e in enumerate(edges2): g.remove_edges_from(e) test = True for node in orange_nodes: d = nx.algorithms.descendants(g, node) test = blue_nodes == d if not test: break if test: stop = False cycles_count = len(list(nx.simple_cycles(g))) print(f'{i}\t{level}\t{cycles_count}\t{e}') g.add_edges_from(e) if stop: break Questions: Is it possible to somehow optimize the code (nx.algorithms.descendants() and nx.simple_cycles() are dramatically slow)? Is it possible to rewrite code using Spanning tree or Feedback arc set? Maybe there is a fast search algorithm for not the best solution, but a good one? Additionally: I rewrote the code as it is using the graph-tool, which gave a ~20x...50x speed boost. But this still does not allow us to approach the set practical task =( | The problem as stated is NP-Hard. Not sure if it is in NP either. In order to verify NP-hardness of the problem, consider graphs such that every blue node has an incoming edge from an orange node. For such graphs, what we need is that the graph after removing edges continues to be strongly connected. We also assume that maximum number of cycles need to be removed. Now, in order to break as many cycles as possible with a minimum of removed edges, assume that the maximum number of cycles that can be removed for a graph G while continuing to be strongly connected be removable(G) = k. This is a well-defined quantity for any graph G. Thus we need a graph G' that is a subgraph of G with number of cycles being cycles(G)-k. Now maximizing k is equivalent to minimizing the number of cycles that survive in G'. This is what makes the problem hard. Consider the Hamiltonian Cycle problem that is known to be NP-hard. Assume we have a program breakCycles(G) that computes a graph G' as a subgraph of G with maximum number of cycles removed (with minimal number of edges removed) or cycles(G') = cycles(G) - k. Then, it is straightforward to see that the Hamiltonian cycle problem can also be solved using breakCycles(G) by just providing input graph G to breakCycles to obtain the graph G' and return true iff G' is a simple cycle involving all vertices (of G). Update : In order to obtain a practical solution, let's look at obtaining a graph with minimal cycles, that is a subgraph of the blue nodes such that removing any edge will result in loss of connectivity for those nodes that have an orange node incident to it. 
Solving the above problem is much easier and should work well in practice def getRemovableEdges(G, edgeLst, initNodes): import networkx as nx removableEdgeLst = [] for (u,v) in edgeLst: G.remove_edge(u, v) f = nx.floyd_warshall(G) addEdge = True for s in initNodes: if 'inf' in list(map(str, f[s].values())): G.add_edge(u,v) addEdge = False break if addEdge: removableEdgeLst.append((u,v)) return removableEdgeLst To try it on the example provided, we need to first initialize the graph DG = nx.DiGraph() DG.add_nodes_from(range(1,8)) DG.add_edges_from([(1,2), (2,3), (3,4), (3,5), (4,5), (5,1), (5,4), (5,7), (6,4), (7,6)]); With our graph initialized above, we execute the function as below... In [5]: %time eL = getRemovableEdges(DG, list(DG.edges()), [2, 5]) CPU times: user 791 µs, sys: 141 µs, total: 932 µs Wall time: 936 µs In [6]: DG.remove_edges_from(eL); In [7]: plt.subplot(121) ...: nx.draw(DG, with_labels=True, font_weight='bold'); ...: plt.show(); We get the graph as below, | 10 | 3 |
63,001,954 | 2020-7-20 | https://stackoverflow.com/questions/63001954/python-apscheduler-how-does-asyncioscheduler-work | I'm having a hard time understanding how the AsyncIOScheduler works and how it is non-blocking. If my job is executing a blocking function, will the AsyncIOScheduler be blocking? And what if I use AsyncIOScheduler with ThreadPoolExecutor? How does that work? Can I await the job execution? | So, in APScheduler there are 3 important components: The Scheduler, The Executor(s), The Datastore(s). For this question, only 1 and 2 are relevant. The Scheduler is simply what decides when to call the jobs based on their interval settings; in the case of AsyncIOScheduler it uses asyncio to make the waiting period non-blocking. It runs in the same process and thread as the main program. It is very useful if your application is already running on an asyncio loop since it saves the overhead of running a new process/thread. Now, when a job needs to be executed, it is the Executor that is called. In the case of AsyncIOScheduler, by default it uses AsyncIOExecutor, which runs in the same thread and process as the Scheduler IF the job function is declared async, else it uses asyncio's run_in_executor which runs it in a thread pool. Which brings us to the last question: what happens if we use AsyncIOScheduler with ThreadPoolExecutor? Well, technically it is the same as using the default Executor with a non-async function: it will run in a thread pool, but the scheduler will remain in the main thread. | 15 | 10 |
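A hedged sketch of the behavior described above: an `async def` job added to AsyncIOScheduler runs on the event loop itself, while a plain function would be pushed to a thread pool via `run_in_executor`. The interval and sleep durations are arbitrary placeholders.

```python
# Sketch of AsyncIOScheduler driving an async job on a running asyncio loop.
# The 3 s interval and the 10 s lifetime are placeholder values.
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler

async def tick():
    print("tick")           # runs on the event loop because it is a coroutine

async def main():
    scheduler = AsyncIOScheduler()
    scheduler.add_job(tick, "interval", seconds=3)
    scheduler.start()
    await asyncio.sleep(10)  # keep the loop alive long enough to see a few ticks
    scheduler.shutdown()

asyncio.run(main())
```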
63,012,515 | 2020-7-21 | https://stackoverflow.com/questions/63012515/how-to-detect-whether-zlib-is-available-and-whether-zip-deflated-is-available | The zipfile.ZipFile documentation says that ZIP_DEFLATED can be used as compression method only if zlib is available, but neither zipfile module specification nor zlib module specification says anything about when zlib might not be available, or how to check for its availability. I work on Windows and when I install any version of Python, zlib module is available. Is this different in Linux? Does zlib need to be installed separately? Also, what is the proper way to check for zlib availability? Is import zlib going to raise an ImportError if it is not available? In oher words, is this the correct way to use zipfile? try: import zlib except ImportError: zlib = None compression = zipfile.ZIP_STORED if zlib is None else zipfile.ZIP_DEFLATED with zipfile.ZipFile(file, mode, compression) as zf: ... | On Ubuntu if you install Python 3 using apt, e.g. sudo apt install python3.8, zlib will be installed as a dependency. Another way is to install Python 3 from source code. In this case, you need to install all prerequisites, including zlib1g-dev, (and this action is sometimes forgotten to do) and then compile and install python as sudo make install. Full instructions here Yes, if zlib is not available import zlib will raise exception, such as >>> import zlib Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named 'zlib There are examples of code like this in the Python standard library. e.g. in zipfile.py: try: import zlib # We may need its compression method crc32 = zlib.crc32 except ImportError: zlib = None crc32 = binascii.crc32 | 11 | 12 |
62,960,983 | 2020-7-17 | https://stackoverflow.com/questions/62960983/simple-captcha-solving | I'm trying to solve some simple captcha using OpenCV and pytesseract. Some of captcha samples are: I tried to the remove the noisy dots with some filters: import cv2 import numpy as np import pytesseract img = cv2.imread(image_path) _, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY) img = cv2.morphologyEx(img, cv2.MORPH_OPEN, np.ones((4, 4), np.uint8), iterations=1) img = cv2.medianBlur(img, 3) img = cv2.medianBlur(img, 3) img = cv2.medianBlur(img, 3) img = cv2.medianBlur(img, 3) img = cv2.GaussianBlur(img, (5, 5), 0) cv2.imwrite('res.png', img) print(pytesseract.image_to_string('res.png')) Resulting tranformed images are: Unfortunately pytesseract just recognizes first captcha correctly. Any other better transformation? Final Update: As @Neil suggested, I tried to remove noise by detecting connected pixels. To find connected pixels, I found a function named connectedComponentsWithStats, whichs detect connected pixels and assigns group (component) a label. By finding connected components and removing the ones with small number of pixels, I managed to get better overall detection accuracy with pytesseract. And here are the new resulting images: | I've taken a much more direct approach to filtering ink splotches from pdf documents. I won't share the whole thing it's a lot of code, but here is the general strategy I adopted: Use Python Pillow library to get an image object where you can manipulate pixels directly. Binarize the image. Find all connected pixels and how many pixels are in each group of connected pixels. You can do this using the minesweeper algorithm. Which is easy to search for. Set some threshold value of pixels that all legitimate letters are expected to have. This will be dependent on your image resolution. replace all black pixels in groups below the threshold with white pixels. Convert back to image. | 11 | 2 |
62,992,595 | 2020-7-20 | https://stackoverflow.com/questions/62992595/pandas-drop-consecutive-duplicate-rows-only-ignoring-specific-columns | I have a dataframe below df = pd.DataFrame({ 'ID': ['James', 'James', 'James', 'James', 'Max', 'Max', 'Max', 'Max', 'Max', 'Park', 'Park','Park', 'Park', 'Tom', 'Tom', 'Tom', 'Tom'], 'From_num': [578, 420, 420, 'Started', 298, 78, 36, 298, 'Started', 28, 28, 311, 'Started', 60, 520, 99, 'Started'], 'To_num': [96, 578, 578, 420, 36, 298, 78, 36, 298, 112, 112, 28, 311, 150, 60, 520, 99], 'Date': ['2020-05-12', '2020-02-02', '2020-02-01', '2019-06-18', '2019-08-26', '2019-06-20', '2019-01-30', '2018-10-23', '2018-08-29', '2020-05-21', '2020-05-20', '2019-11-22', '2019-04-12', '2019-10-16', '2019-08-26', '2018-12-11', '2018-10-09']}) and it is like this: ID From_num To_num Date 0 James 578 96 2020-05-12 1 James 420 578 2020-02-02 2 James 420 578 2020-02-01 # Drop the this duplicated row (ignore date) 3 James Started 420 2019-06-18 4 Max 298 36 2019-08-26 5 Max 78 298 2019-06-20 6 Max 36 78 2019-01-30 7 Max 298 36 2018-10-23 8 Max Started 298 2018-08-29 9 Park 28 112 2020-05-21 10 Park 28 112 2020-05-20 # Drop this duplicate row (ignore date) 11 Park 311 28 2019-11-22 12 Park Started 311 2019-04-12 13 Tom 60 150 2019-10-16 14 Tom 520 60 2019-08-26 15 Tom 99 520 2018-12-11 16 Tom Started 99 2018-10-09 There are some consecutive duplicated values (ignore the Date value) within each 'ID'(Name), e.g. line 1 and 2 for James, the From_num are both 420, same as line 9 and 10, I wish to drop the 2nd duplicated row and keep the first. I wrote loop conditions, but it is very redundant and slow, I assume there might be easier way to do this, so please help if you have ideas. Great thanks. The expected result is like this: ID From_num To_num Date 0 James 578 96 2020-05-12 1 James 420 578 2020-02-02 2 James Started 420 2019-06-18 3 Max 298 36 2019-08-26 4 Max 78 298 2019-06-20 5 Max 36 78 2019-01-30 6 Max 298 36 2018-10-23 7 Max Started 298 2018-08-29 8 Park 28 112 2020-05-21 9 Park 311 28 2019-11-22 10 Park Started 311 2019-04-12 11 Tom 60 150 2019-10-16 12 Tom 520 60 2019-08-26 13 Tom 99 520 2018-12-11 14 Tom Started 99 2018-10-09 | It's a bit late, but does this do what you wanted? This drops consecutive duplicates ignoring "Date". t = df[['ID', 'From_num', 'To_num']] df[(t.ne(t.shift())).any(axis=1)] ID From_num To_num Date 0 James 578 96 2020-05-12 1 James 420 578 2020-02-02 3 James Started 420 2019-06-18 4 Max 298 36 2019-08-26 5 Max 78 298 2019-06-20 6 Max 36 78 2019-01-30 7 Max 298 36 2018-10-23 8 Max Started 298 2018-08-29 9 Park 28 112 2020-05-21 11 Park 311 28 2019-11-22 12 Park Started 311 2019-04-12 13 Tom 60 150 2019-10-16 14 Tom 520 60 2019-08-26 15 Tom 99 520 2018-12-11 16 Tom Started 99 2018-10-09 This drops rows with index values 2 and 10. | 8 | 7 |
62,960,775 | 2020-7-17 | https://stackoverflow.com/questions/62960775/discord-py-make-a-bot-react-to-its-own-messages | I am trying to make my discord bot react to its own message, pretty much. The system works like this: A person uses the command !!bug - And gets a message in DM', she/she is supposed to answer those questions. And then whatever he/she answered, it will be transferred an embedded message to an admin text-channel. But I need to add 3 emojis, or react with three different emojis. And depending on what the admin chooses, it will transfer the message once more. So if an admin reacts to an emoji that equals to "fixed", it will be moved to a "fixed" text-channel (the entire message). I have done a lot of research about this, but only found threads about the old discord.py, meaning await bot.add_react(emoji) - But as I have understood it, that no longer works! Here is my code: import discord from discord.ext import commands import asyncio TOKEN = '---' bot = commands.Bot(command_prefix='!!') reactions = [":white_check_mark:", ":stop_sign:", ":no_entry_sign:"] @bot.event async def on_ready(): print('Bot is ready.') @bot.command() async def bug(ctx, desc=None, rep=None): user = ctx.author await ctx.author.send('```Please explain the bug```') responseDesc = await bot.wait_for('message', check=lambda message: message.author == ctx.author, timeout=300) description = responseDesc.content await ctx.author.send('````Please provide pictures/videos of this bug```') responseRep = await bot.wait_for('message', check=lambda message: message.author == ctx.author, timeout=300) replicate = responseRep.content embed = discord.Embed(title='Bug Report', color=0x00ff00) embed.add_field(name='Description', value=description, inline=False) embed.add_field(name='Replicate', value=replicate, inline=True) embed.add_field(name='Reported By', value=user, inline=True) adminBug = bot.get_channel(733721953134837861) await adminBug.send(embed=embed) # Add 3 reaction (different emojis) here bot.run(TOKEN) | In discord.py@rewrite, you have to use discord.Message.add_reaction: emojis = ['emoji 1', 'emoji_2', 'emoji 3'] adminBug = bot.get_channel(733721953134837861) message = await adminBug.send(embed=embed) for emoji in emojis: await message.add_reaction(emoji) Then, to exploit reactions, you'll have to use the discord.on_reaction_add event. This event will be triggered when someone reacts to a message and will return a Reaction object and a User object: @bot.event async def on_reaction_add(reaction, user): embed = reaction.embeds[0] emoji = reaction.emoji if user.bot: return if emoji == "emoji 1": fixed_channel = bot.get_channel(channel_id) await fixed_channel.send(embed=embed) elif emoji == "emoji 2": #do stuff elif emoji == "emoji 3": #do stuff else: return NB: You'll have to replace "emoji 1", "emoji 2" and "emoji 3" with your emojis. add_reactions accepts: Global emojis (eg. 😀) that you can copy on emojipedia Raw unicode (as you tried) Discord emojis: \N{EMOJI NAME} Custom discord emojis (There might be some better ways, I've came up with this after a small amount a research) async def get_emoji(guild: discord.Guild, arg): return get(ctx.guild.emojis, name=arg) | 9 | 17 |
63,019,348 | 2020-7-21 | https://stackoverflow.com/questions/63019348/how-to-set-a-title-above-each-marker-which-represents-a-same-label | I have a first version of legend in the following plot : with the following code : # Plot and save : kmax = 0.3 p11, = plt.plot([0], marker='None', linestyle='None', label='$k_{max} = 0.3$') p1, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,1], '-b', label = '$GC_{sp}$') p2, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,2], '-r', label = '$GC_{ph}$') p3, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,3], '-y', label = '$WL$') p4, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,4], '-g', label = '$GC_{ph} + WL + XC$') p5, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,5], '-m', label = \ '$GC_{sp} + (GC_{ph} + WL + XC)$') # Plot and save : kmax = 1.0 p12, = plt.plot([0], marker='None', linestyle='None', label='$k_{max} = 1.0$') p6, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,1], '--b', label = '$GC_{sp}$') p7, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,2], '--r', label = '$GC_{ph}$') p8, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,3], '--y', label = '$WL$') p9, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,4], '--g', label = '$GC_{ph} + WL + XC$') p10, =plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,5], '--m', label = \ '$GC_{sp} + (GC_{ph} + WL + XC)$') plt.legend(fontsize=14, loc='best', ncol=2, handleheight=1.4, labelspacing=0.05) As you can see, I put a title (k_max = 0.3 and k_max = 1.0) for each column of markers and columns. Now, to avoid this redundancy, I am trying to merge all duplicated labels while keeping the title for each marker by doing : from matplotlib.legend_handler import HandlerTuple # Plot and save : kmax = 0.3 p11, = plt.plot([0], marker='None', linestyle='None') p1, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,1], '-b') p2, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,2], '-r') p3, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,3], '-y') p4, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,4], '-g') p5, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,5], '-m') # Plot and save : kmax = 1.0 p12, = plt.plot([0], marker='None', linestyle='None') p6, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,1], '--b') p7, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,2], '--r') p8, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,3], '--y') p9, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,4], '--g') p10, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,5], '--m') l = plt.legend([(p1,p6), (p2,p7), (p3,p8), (p4,p9), (p5,p10)], ['$GC_{sp}$', \ '$GC_{ph}$', '$WL$', '$GC_{ph} + WL + XC$', '$GC_{sp} + (GC_{ph} + WL + XC)$'], \ fontsize=14, loc='best', handlelength=2.5, handleheight=1.4, labelspacing=0.05, \ handler_map={tuple: HandlerTuple(ndivide=None)}) This way, I get the following figure : Then, 2 issues occurs : 1) The space between 2 markers is too small compared to the first figure above : how to insert a bigger space between markers and more length for the markers themselves (for example, having 4 dash-lines for the dash-line marker, like for the 4 dash-lines marker on the legend of the first figure above at the beginning of my post) 2) How to put the titles k_max = 0.3 and k_max = 1.0 above each 
column of markers ? : this way, I could identify quickly the case I consider on the plot (like I did on the first figure above but there was redundancy by repeating twice the displaying of all labels). | To tackle your issues you can try the following: 1.1 To increase the space between the markers you can provide the additional parameter pad to HandlerTuple() (from here). It will adjust the spacing between the different marker sections. This will look like: l = plt.legend(..., handler_map={tuple: HandlerTuple(ndivide=None, pad=2)}) 1.2 To add more width to the markers you could increase the value for the handlelength parameter like this: l = plt.legend(..., handlelength=6.5, ...) The resulting with these values will look like this: To specify a description above the marker columns you can modify your p11, = plt.plot([0], marker='None', linestyle='None') and p12, = plt.plot([0], marker='None', linestyle='None') lines to the following: # your code: # p11, = plt.plot([0], marker='None', linestyle='None') # p12, = plt.plot([0], marker='None', linestyle='None') p11, = plt.plot([], marker='$0.3$', linestyle='None', color='k', markersize=18, mec='None') p12, = plt.plot([], marker='$1.0$', linestyle='None', color='k', markersize=18, mec='None') Replacing [0] with an empty list [] will result in really nothing is plotted. mec='None' will remove the markers edge color. Without it, the markers look like written in bold. To show these "lines" in the legend add the following to legend: l = plt.legend([(p11,p12), ....], ['$k_{max}$', ...], ...) With these adjustments the legend should look like: The complete code should then look like this: # Plot and save : kmax = 0.3 p11, = plt.plot([], marker=r'$0.3$', linestyle='None', color='k', markersize=18, mec='None') p1, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,1], '-b') p2, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,2], '-r') p3, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,3], '-y') p4, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,4], '-g') p5, = plt.plot(FoM_vs_Density_array_1[:,0],FoM_vs_Density_array_1[:,5], '-m') # Plot and save : kmax = 1.0 p12, = plt.plot([], marker='$1.0$', linestyle='None', color='k', markersize=18, mec='None') p6, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,1], '--b') p7, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,2], '--r') p8, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,3], '--y') p9, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,4], '--g') p10, = plt.plot(FoM_vs_Density_array_2[:,0],FoM_vs_Density_array_2[:,5], '--m') l = plt.legend([(p11,p12),(p1,p6), (p2,p7), (p3,p8), (p4,p9), (p5,p10)], ['$k_{max}$','$GC_{sp}$', \ '$GC_{ph}$', '$WL$', '$GC_{ph} + WL + XC$', '$GC_{sp} + (GC_{ph} + WL + XC)$'], \ fontsize=14, loc='best', handlelength=6.5, handleheight=1.4, labelspacing=0.05, \ handler_map={tuple: HandlerTuple(ndivide=None, pad=2)}) To format the column names like you show in your first picture use: # change marker and markersize p11, = plt.plot([], marker=r'$k_{max}=0.3$', linestyle='None', color='k', markersize=45, mec='None') p12, = plt.plot([], marker='$k_{max}=1.0$', linestyle='None', color='k', markersize=45, mec='None') # change plt.legend, use 'borderpad' parameter to increase the legend box, # otherwise part of the column name will be written out of bounds # | change here l = plt.legend([(p11,p12),(p1,p6), (p2,p7), (p3,p8), (p4,p9), (p5,p10)], ['','$GC_{sp}$', \ 
'$GC_{ph}$', '$WL$', '$GC_{ph} + WL + XC$', '$GC_{sp} + (GC_{ph} + WL + XC)$'], \ fontsize=14, loc='best', handlelength=6.5, handleheight=1.4, labelspacing=0.05, \ borderpad=0.6, # change here handler_map={tuple: HandlerTuple(ndivide=None, pad=2)}) This will result in: | 9 | 4 |
62,993,366 | 2020-7-20 | https://stackoverflow.com/questions/62993366/color-calibration-with-color-checker-using-using-root-polynomial-regression-not | For a quantification project, I am in need of colour corrected images which produce the same result over and over again irrespective of lighting conditions. Every image includes a X-Rite color-checker of which the colors are known in matrix format: Reference=[[170, 189, 103],[46, 163, 224],[161, 133, 8],[52, 52, 52],[177, 128, 133],[64, 188, 157],[149, 86, 187],[85, 85, 85],[67, 108, 87],[108, 60, 94],[31, 199, 231],[121, 122, 122], [157, 122, 98],[99, 90, 193],[60, 54, 175],[160, 160, 160],[130, 150, 194],[166, 91, 80],[70, 148, 70],[200, 200, 200],[68, 82, 115],[44, 126, 214],[150, 61, 56],[242, 243, 243]] For every image I calculate the same matrix for the color card present as an example: Actual_colors=[[114, 184, 137], [2, 151, 237], [118, 131, 55], [12, 25, 41], [111, 113, 177], [33, 178, 188], [88, 78, 227], [36, 64, 85], [30, 99, 110], [45, 36, 116], [6, 169, 222], [53, 104, 138], [98, 114, 123], [48, 72, 229], [29, 39, 211], [85, 149, 184], [66, 136, 233], [110, 79, 90], [41, 142, 91], [110, 180, 214], [7, 55, 137], [0, 111, 238], [82, 44, 48], [139, 206, 242]] Then I calibrate the entire image using a color correction matrix which was derived from the coefficient from the input and output matrices: for im in calibrated_img: im[:]=colour.colour_correction(im[:], Actual_colors, Reference, "Finlayson 2015") The results are as follows: Where the top image represents the input and the down image the output. Lighting plays a key role in the final result for the color correction, but the first two images on the left should generate the same output. Once the images become too dark, white is somehow converted to red.. I am not able to understand why. I have tried to apply a gamma correction before processing with no success. The other two models Cheung 2004 and Vandermonde gave worse results, as did partial least squares. The images are pretty well corrected from the yellow radiating lamps, but the final result is not clean white, instead they have a blueish haze over the image. White should be white.. What can I do to further improve these results? Edit 23-08-2020: Based on @Kel Solaar his comments I have made changes to my script to include the steps mentioned by him as follows #Convert image from int to float Float_image=skimage.img_as_float(img) #Normalise image to have pixel values from 0 to 1 Normalised_image = (Float_image - np.min(Float_image))/np.ptp(Float_image) #Decoded the image with sRGB EOTF Decoded_img=colour.models.eotf_sRGB(Normalised_image) #Performed Finlayson 2015 color correction to linear data: for im in Decoded_img: im[:]=colour.colour_correction(im[:], Image_list, Reference, "Finlayson 2015") #Encoded image back to sRGB Encoded_img=colour.models.eotf_inverse_sRGB(Decoded_img) #Denormalized image to fit 255 pixel values Denormalized_image=Encoded_img*255 #Converted floats back to integers Integer_image=Denormalised_image.astype(int) This greatly improved image quality as can be seen below: However, lighting/color differences between corrected images are unfortunately still present. Raw images can be found here but due note that they are upside down. 
Measured values of color cards in images: IMG_4244.JPG [[180, 251, 208], [62, 235, 255], [204, 216, 126], [30, 62, 97], [189, 194, 255], [86, 250, 255], [168, 151, 255], [68, 127, 167], [52, 173, 193], [111, 87, 211], [70, 244, 255], [116, 185, 228], [182, 199, 212], [102, 145, 254], [70, 102, 255], [153, 225, 255], [134, 214, 255], [200, 156, 169], [87, 224, 170], [186, 245, 255], [44, 126, 235], [45, 197, 254], [166, 101, 110], [224, 255, 252]] IMG_4243.JPG [[140, 219, 168], [24, 187, 255], [148, 166, 73], [17, 31, 53], [141, 146, 215], [42, 211, 219], [115, 101, 255], [33, 78, 111], [24, 118, 137], [63, 46, 151], [31, 203, 255], [67, 131, 172], [128, 147, 155], [61, 98, 255], [42, 59, 252], [111, 181, 221], [88, 168, 255], [139, 101, 113], [47, 176, 117], [139, 211, 253], [19, 78, 178], [12, 146, 254], [110, 60, 64], [164, 232, 255]] IMG_4241.JPG [[66, 129, 87], [0, 90, 195], [65, 73, 26], [9, 13, 18], [60, 64, 117], [20, 127, 135], [51, 38, 176], [15, 27, 39], [14, 51, 55], [21, 15, 62], [1, 112, 180], [29, 63, 87], [54, 67, 69], [20, 33, 179], [10, 12, 154], [38, 92, 123], [26, 81, 178], [58, 44, 46], [23, 86, 54], [67, 127, 173], [5, 26, 77], [2, 64, 194], [43, 22, 25], [84, 161, 207]] IMG_4246.JPG [[43, 87, 56], [2, 56, 141], [38, 40, 20], [3, 5, 6], [31, 31, 71], [17, 85, 90], [19, 13, 108], [7, 13, 20], [4, 24, 29], [8, 7, 33], [1, 68, 123], [14, 28, 46], [28, 34, 41], [6, 11, 113], [0, 1, 91], [27, 53, 83], [11, 44, 123], [32, 21, 23], [11, 46, 26], [32, 77, 115], [2, 12, 42], [0, 29, 128], [20, 9, 11], [49, 111, 152]] Actual colors of color card (or reference) are given in the top of this post and are in the same order as values given for images. Edit 30-08-2020, I have applied @nicdall his comments: #Remove color chips which are outside of RGB range New_reference=[] New_Actual_colors=[] for L,K in zip(Actual_colors, range(len(Actual_colors))): if any(m in L for m in [0, 255]): print(L, "value outside of range") else: New_reference.append(Reference[K]) New_Actual_colors.append(Actual_colors[K]) In addition to this, I realized I was using a single pixel from the color card, so I started to take 15 pixels per color chip and averaged them to make sure it is a good balance. 
The code is too long to post here completely but something in this direction (don't judge my bad coding here): for i in Chip_list: R=round(sum([rotated_img[globals()[i][1],globals()[i][0],][0], rotated_img[globals()[i][1]+5,globals()[i][0],][0], rotated_img[globals()[i][1]+10,globals()[i][0],][0], rotated_img[globals()[i][1],(globals()[i][0]+5)][0], rotated_img[globals()[i][1],(globals()[i][0]+10)][0], rotated_img[globals()[i][1]+5,(globals()[i][0]+5)][0], rotated_img[globals()[i][1]+10,(globals()[i][0]+10)][0]])/(number of pixels which are summed up)) The result was dissapointing, as the correction seemed to have gotten worse but it is shown below: New_reference = [[170, 189, 103], [161, 133, 8], [52, 52, 52], [177, 128, 133], [64, 188, 157], [85, 85, 85], [67, 108, 87], [108, 60, 94], [121, 122, 122], [157, 122, 98], [60, 54, 175], [160, 160, 160], [166, 91, 80], [70, 148, 70], [200, 200, 200], [68, 82, 115], [44, 126, 214], [150, 61, 56]] #For Image: IMG_4243.JPG: New_Actual_colors= [[139, 218, 168], [151, 166, 74], [16, 31, 52], [140, 146, 215], [44, 212, 220], [35, 78, 111], [25, 120, 137], [63, 47, 150], [68, 132, 173], [128, 147, 156], [40, 59, 250], [110, 182, 222], [141, 102, 115], [48, 176, 118], [140, 211, 253], [18, 77, 178], [12, 146, 254], [108, 59, 62]] #The following values were omitted in IMG_4243: [23, 187, 255] value outside of range [115, 102, 255] value outside of range [30, 203, 255] value outside of range [61, 98, 255] value outside of range [88, 168, 255] value outside of range [163, 233, 255] value outside of range I have started to approach the core of the problem but I am not a mathematician, however the correction itself seems to be the problem.. This is the color correction matrix for IMG4243.jpg generated and utilized by the colour package: CCM=colour.characterisation.colour_correction_matrix_Finlayson2015(New_Actual_colors, New_reference, degree=1 ,root_polynomial_expansion=True) print(CCM) [[ 1.10079803 -0.03754644 0.18525637] [ 0.01519612 0.79700086 0.07502735] [-0.11301282 -0.05022718 0.78838144]] Based on what I understand from the colour package code the New_Actual_colors is converted with the CCM as follows: Converted_colors=np.reshape(np.transpose(np.dot(CCM, np.transpose(New_Actual_colors))), shape) When we compare the Converted_colors with the New_reference, we can see that the correction is getting a long way, but differences are still present (so the endgoal is to convert New_Actual_colors with the color correction matrix (CCM) to Converted_colors which should exactly match the New_reference): print("New_reference =",New_reference) print("Converted_colors =",Converted_colors) New_reference = [[170, 189, 103],[161, 133, 8],[52, 52, 52],[177, 128, 133],[64, 188, 157],[85, 85, 85],[67, 108, 87],[108, 60, 94],[121, 122, 122],[157, 122, 98],[60, 54, 175],[160, 160, 160],[166, 91, 80],[70, 148, 70],[200, 200, 200],[68, 82, 115],[44, 126, 214],[150, 61, 56]] Converted_colors = [[176, 188, 106],[174, 140, 33],[26, 29, 38],[188, 135, 146],[81, 186, 158],[56, 71, 80],[48, 106, 99],[95, 50, 109],[102, 119, 122],[164, 131, 101],[88, 66, 190],[155, 163, 153],[173, 92, 70],[68, 150, 79],[193, 189, 173],[50, 75, 134],[55, 136, 192],[128, 53, 34]] When substracted the differences become clear, and the question is how to overcome these differences?: list(np.array(New_reference) - np.array(Converted_colors)) [array([-6, 1, -3]), array([-13, -7, -25]), array([26, 23, 14]), array([-11, -7, -13]), array([-17, 2, -1]), array([29, 14, 5]), array([ 19, 2, -12]), array([ 
13, 10, -15]), array([19, 3, 0]), array([-7, -9, -3]), array([-28, -12, -15]), array([ 5, -3, 7]), array([-7, -1, 10]), array([ 2, -2, -9]), array([ 7, 11, 27]), array([ 18, 7, -19]), array([-11, -10, 22]), array([22, 8, 22])] | Here are a few recommendations: As stated in my comment above we had an implementation issue with the Root-Polynomial variant from Finlayson (2015) which should be fixed in the develop branch. You are passing integer and encoded values to the colour.colour_correction definition. I would strongly recommend that you: Convert the datasets to floating-point representation. Scale it from range [0, 255] to range [0, 1]. Decode it with the sRGB EOTF. Perform the colour correction onto that linear data. Encode back and scale back to integer representation. Your images seem to be an exposure wedge, ideally, you would compute a single matrix for the appropriate reference exposure, normalise the other images exposure to it and apply the matrix on it. | 10 | 5 |
63,017,653 | 2020-7-21 | https://stackoverflow.com/questions/63017653/download-file-using-s3fs | I am trying to download a csv file from an s3 bucket using the s3fs library. I have noticed that writing a new csv using pandas has altered data in some way. So I want to download the file directly in its raw state. The documentation has a download function but I do not understand how to use it: download(self, rpath, lpath[, recursive]): Alias of FilesystemSpec.get. Here's what I tried: import pandas as pd import datetime import os import s3fs import numpy as np #Creds for s3 fs = s3fs.S3FileSystem(key=mykey, secret=mysecretkey) bucket = "s3://mys3bucket/mys3bucket" files = fs.ls(bucket)[-3:] #download files: for file in files: with fs.open(file) as f: fs.download(f,"test.csv") AttributeError: 'S3File' object has no attribute 'rstrip' | for file in files: fs.download(file,'test.csv') Modified to download all files in the directory: import pandas as pd import datetime import os import s3fs import numpy as np #Creds for s3 fs = s3fs.S3FileSystem(key=mykey, secret=mysecretkey) bucket = "s3://mys3bucket/mys3bucket" #files references the entire bucket. files = fs.ls(bucket) for file in files: fs.download(file,'test.csv') | 10 | 13 |
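One caveat with the loop in the answer above: every object is written to the same local path, so only the last download survives. A small variation that keeps each object's own file name (the credentials and bucket name are placeholders), using the fs.download alias of get quoted in the question:

```python
import os
import s3fs

fs = s3fs.S3FileSystem(key="<access key>", secret="<secret key>")  # placeholder credentials
bucket = "s3://mys3bucket/mys3bucket"

for remote_path in fs.ls(bucket):
    local_name = os.path.basename(remote_path)  # keep the original file name
    fs.download(remote_path, local_name)        # raw bytes, no pandas round-trip
```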
63,019,506 | 2020-7-21 | https://stackoverflow.com/questions/63019506/python-get-value-of-env-variable-from-a-specific-env-file | In Python, is there a way to retrieve the value of an env variable from a specific .env file? For example, I have multiple .env files as follows: .env.a .env.b ... And I have a variable in .env.b called INDEX=4. I tried retrieving the value of INDEX by doing the following: import os os.getenv('INDEX') But this value returns None. Any suggestions? | This is a job for ConfigParser or ConfigObj. ConfigParser is built into the Python standard library, but has the drawback that it REALLY wants section names. Below is for Python 3; if you're still using Python 2, then use import ConfigParser instead. import configparser config = configparser.ConfigParser() config.read('env.b') index = config['mysection']['INDEX'] where env.b for ConfigParser is [mysection] INDEX=4 And using ConfigObj: import configobj config = configobj.ConfigObj('env.b') index = config['INDEX'] where env.b for ConfigObj is INDEX=4 | 8 | 9 |
63,012,346 | 2020-7-21 | https://stackoverflow.com/questions/63012346/how-to-activate-virtual-environment-in-vscode-when-running-scripts-are-disabled | I created a virtual environment in vscode in a folder called server by typing: python -m venv env And I opened the server folder, select interpreter Python 3.8.1 64-bit('env':venv) then I got following error: I can't find any solution to this and I am stuck for hours. | It seems that it is going to activate the environment through a powershell script. And running such scripts is turned off by default. Also, usually a virtual environment is activated through cmd and .bat script. You could either turn on running powershell script or make VS Code activate an environment through cmd and .bat file. The first way - using cmd instead of Powershell I just checked it in my PC and VS Code doesn't use Powershell at all. It activate an environment with cmd instead of Powershell. Probably it is worth to check VS Code settings, set cmd as a default terminal. It is probably such an option in the main settings.json (you can open it through ctrl+shift+p and type 'open settings (JSON)'): "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe",. The second way - changing Powershell execution policy In order to change Powershell execution policy you can add "terminal.integrated.shellArgs.windows": ["-ExecutionPolicy", "Bypass"] to your main VS Code settings. Also you can open a Powershell window as administrator and type the following: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser Then respond y to any questions. | 10 | 17 |
63,002,350 | 2020-7-20 | https://stackoverflow.com/questions/63002350/ignore-missing-columns-in-usecol-parameter | I'm reading a table from csv and only want a subset of the columns. The list I'm using to subset contains field names that may not exist in the table I'm reading. For example: # contents of sample.csv: #a,b,c #1,2,3 #4,5,6 subset = ['a', 'c', 'd'] I'd like to return the following, using pandas.read_csv and the subset, but this raises an error: pd.read_csv(sample.csv, usecols=subset) a c 1 3 4 6 ValueError: Usecols do not match columns, columns expected but not found: ['d'] I think I might be able to do with a callable value for usecols, but am not sure how to implement. | Use a callable checking if the column is in the subset subset = ['a', 'c', 'd'] df = pd.read_csv('sample.csv', usecols=lambda x: x in subset) a c 0 1 3 1 4 6 | 7 | 17 |
62,982,784 | 2020-7-19 | https://stackoverflow.com/questions/62982784/plotly-bar-chart-change-color-based-on-positive-negative-value-python | I have the following code which plots a bar chart (1 series), but I need the bars to be coloured blue if the 'Net' value is positive, and red if its negative: import pandas as pd import plotly.graph_objects as go df = pd.DataFrame({ 'Net':[15,20,-10,-15], 'Date':['07/14/2020','07/15/2020','07/16/2020','07/17/2020'] }) df['Date'] = pd.to_datetime(df['Date']) fig = go.Figure(data=[go.Bar(name='Net', x=df['Date'], y=df['Net'])]) fig.update_layout(barmode='stack') fig.show() | You can check documentation here. Full code as following import pandas as pd import plotly.graph_objects as go import numpy as np # Data df = pd.DataFrame({ 'Net':[15,20,-10,-15], 'Date':['07/14/2020','07/15/2020','07/16/2020','07/17/2020'] }) df['Date'] = pd.to_datetime(df['Date']) ## here I'm adding a column with colors df["Color"] = np.where(df["Net"]<0, 'red', 'green') # Plot fig = go.Figure() fig.add_trace( go.Bar(name='Net', x=df['Date'], y=df['Net'], marker_color=df['Color'])) fig.update_layout(barmode='stack') fig.show() | 11 | 19 |
63,000,388 | 2020-7-20 | https://stackoverflow.com/questions/63000388/how-to-include-simpleimputer-before-countvectorizer-in-a-scikit-learn-pipeline | I have a pandas DataFrame that includes a column of text, and I would like to vectorize the text using scikit-learn's CountVectorizer. However, the text includes missing values, and so I would like to impute a constant value before vectorizing. My initial idea was to create a Pipeline of SimpleImputer and CountVectorizer: import pandas as pd import numpy as np df = pd.DataFrame({'text':['abc def', 'abc ghi', np.nan]}) from sklearn.impute import SimpleImputer imp = SimpleImputer(strategy='constant') from sklearn.feature_extraction.text import CountVectorizer vect = CountVectorizer() from sklearn.pipeline import make_pipeline pipe = make_pipeline(imp, vect) pipe.fit_transform(df[['text']]).toarray() However, the fit_transform errors because SimpleImputer outputs a 2D array and CountVectorizer requires 1D input. Here's the error message: AttributeError: 'numpy.ndarray' object has no attribute 'lower' QUESTION: How can I modify this Pipeline so that it will work? NOTE: I'm aware that I can impute missing values in pandas. However, I would like to accomplish all preprocessing in scikit-learn so that the same preprocessing can be applied to new data using Pipeline. | The best solution I have found is to insert a custom transformer into the Pipeline that reshapes the output of SimpleImputer from 2D to 1D before it is passed to CountVectorizer. Here's the complete code: import pandas as pd import numpy as np df = pd.DataFrame({'text':['abc def', 'abc ghi', np.nan]}) from sklearn.impute import SimpleImputer imp = SimpleImputer(strategy='constant') from sklearn.feature_extraction.text import CountVectorizer vect = CountVectorizer() # CREATE TRANSFORMER from sklearn.preprocessing import FunctionTransformer one_dim = FunctionTransformer(np.reshape, kw_args={'newshape':-1}) # INCLUDE TRANSFORMER IN PIPELINE from sklearn.pipeline import make_pipeline pipe = make_pipeline(imp, one_dim, vect) pipe.fit_transform(df[['text']]).toarray() It has been proposed on GitHub that CountVectorizer should allow 2D input as long as the second dimension is 1 (meaning: a single column of data). That modification to CountVectorizer would be a great solution to this problem! | 17 | 15 |
62,992,438 | 2020-7-20 | https://stackoverflow.com/questions/62992438/how-to-collapse-the-code-section-in-google-colab-notebook-but-keeping-the-result | As the title says, I'm currently trying to collapse the code section without also collapsing the results section. For example, without the collapse, the title/code/result sections all stay visible (first screenshot), and when I collapse the section the results disappear along with the code (second screenshot). I'm looking for a solution where I can collapse the code but not the results section. | You can start the cell with #@title Then, you can double-click the title. It will hide the code part, but keep the output still visible. For example #@title My title greet = "Hello" print(greet) Double-clicking "My title" will still show the output "Hello". | 16 | 21 |
62,966,480 | 2020-7-18 | https://stackoverflow.com/questions/62966480/how-to-disable-pylint-inspections-for-anything-that-uses-my-function | I've made a classproperty descriptor and whenever I use a function decorated with it, I get multiple pylint inspection errors. Here is a sample class with a sample decorated function: class Bar: """ Bar documentation. """ # pylint: disable=no-method-argument @classproperty def foo(): """ Retrieve foo. """ return "foo" Thanks to the descriptor, I can call Bar.foo and get the string foo returned. Unfortunately, whenever I use functions like this with slightly more complex items (e.g. functions which return instances of objects), pylint starts complaining about things such as no-member or unexpected-keyword-arg, simply because it thinks Bar.foo is a method, rather than a wrapped classproperty object. I would like to disable warnings for any code that uses my function - I definitely can't allow having to write # pylint: disable every single time I use the classproperty-wrapped methods. How can I do it with pylint? Or maybe I should switch to use a different linter instead? Here is an example of a warning generated because of the reasons above: class Bar: """ Bar documentation. """ # pylint: disable=no-method-argument @classproperty def foo(): """ Retrieve an object. """ return NotImplementedError("Argument") print(Bar.foo.args) pylint complains that E1101: Method 'foo' has no 'args' member (no-member) (even though I know it definitely has), and I would like to completely disable some warnings for any module/class/function that uses Bar.foo.args or similar. For anyone interested, here is a minimal implementation of a classproperty descriptor: class classproperty: """ Minimal descriptor. """ # pylint: disable=invalid-name def __init__(self, func): self._func = func def __get__(self, _obj, _type): return self._func() | I have managed to create a dirty hack by type-hinting the items as None: class Bar: """ Bar documentation. """ # pylint: disable=no-method-argument,function-redefined,too-few-public-methods foo: None @classproperty def foo(): """ Retrieve an object. """ return NotImplementedError("Argument") I would rather avoid having code like this because I can't actually import the items which should be type-hinted due to the circular imports issue (hence None), but it tricks pylint well. | 9 | 4 |
62,984,477 | 2020-7-19 | https://stackoverflow.com/questions/62984477/running-python-scripts-in-anaconda-environment-through-windows-cmd | I have the following goal: I have a python script, which should be running in my custom Anaconda environment. And this process needs to be automatizated. The first thing I've tried was to create an .exe file of my script using pyinstaller in the Anaconda command prompt, opened in my environment. And put the .exe into Windows Task Scheduler. But I did not succeeded cause my script seems to be too complex, contain too many imports so pyinstaller didn't create the .exe. The next thing I thought of was an attempt to run my script using Windows CMD with appropriate attributes, and also put it into Windows Task Scheduler. Now my question is if there is a way to set up Task Scheduler so it could run CMD with attributes, which would activate my environment and with this environment run my script right away? I need this to be done automatically once a day at a given time. Update 3: am I blind or what? I mean, here it is: | You could Create a .bat file (e.g. run_python_script.bat) with contents shown below. Create task in "Task Scheduler" to run the .bat file. 1.a. The .bat file contents with conda environments Check your <condapath>. Your conda.exe is located at <condapath>/Scripts. Put into your .bat file call "<condapath>\Scripts\activate.bat" <env_name> & cd "<folder_for_your_py_script>" & python <scriptname.py> [<arguments>] <env_name> is the name of the conda environment. <folder_for_your_py_script> is the folder that contains <scriptname.py> <scriptname.py> is the script you want to start. [<arguments>] represent the optional arguments (if you need to give arguments to your script) 1.b. The .bat file contents with venv "<path_to_python_exe>" "<path_to_python_script>" [<arguments>] where <path_to_python_exe> is the path to your python executable. If you are using a virtual environment (venv), then use the python.exe found in the /venv/Scripts folder <path_to_python_script> is the path to your python script. [<arguments>] represent the optional arguments (if you need to give arguments to your script) 2. Creating task in Task Scheduler Go to "Task Scheduler" -> "Create Basic Task" Give the name & timing info Add to the "Program/Script" the path to your run_python_script.bat. Appendix: Creating venv with Anaconda It seems that conda create command does not create similar virtual environments as python -m venv command. To create normal python virtual environment with the venv Check your <condapath>. Your conda.exe is located at <condapath>/Scripts. Create virtual environment to folder you want (let's call it venv_folder), by running following command in <venv_folder> <condapath>\python.exe -m venv venv Now, your <path_to_python_exe> will be <venv_folder>\venv\Scripts.python.exe. If you need to install packages to this virtual environment, you use <venv_folder>\venv\Scripts.python.exe -m pip install <package_name> | 7 | 13 |
62,990,029 | 2020-7-20 | https://stackoverflow.com/questions/62990029/how-to-get-equally-spaced-points-on-a-line-in-shapely | I'm trying to (roughly) equally space the points of a line to a predefined distance. It's ok to have some tolerance between the distances but as close as possible would be desirable. I know I could manually iterate through each point in my line and check the p1 distance vs p2 and add more points if needed. But I wondered if anyone knows if there is a way to achieve this with shapely as I already have the coords in a LineString. | One way to do that is to use interpolate method that returns points at specified distances along the line. You just have to generate a list of the distances somehow first. Taking the input line example from Roy2012's answer: import numpy as np from shapely.geometry import LineString from shapely.ops import unary_union line = LineString(([0, 0], [2, 1], [3, 2], [3.5, 1], [5, 2])) Splitting at a specified distance: distance_delta = 0.9 distances = np.arange(0, line.length, distance_delta) # or alternatively without NumPy: # points_count = int(line.length // distance_delta) + 1 # distances = (distance_delta * i for i in range(points_count)) points = [line.interpolate(distance) for distance in distances] + [line.boundary[1]] multipoint = unary_union(points) # or new_line = LineString(points) Note that since the distance is fixed you can have problems at the end of the line as shown in the image. Depending on what you want you can include/exclude the [line.boundary[1]] part which adds the line's endpoint or use distances = np.arange(0, line.length, distance_delta)[:-1] to exclude the penultimate point. Also, note that the unary_union I'm using should be more efficient than calling object.union(other) inside a loop, as shown in another answer. Splitting to a fixed number of points: n = 7 # or to get the distances closest to the desired one: # n = round(line.length / desired_distance_delta) distances = np.linspace(0, line.length, n) # or alternatively without NumPy: # distances = (line.length * i / (n - 1) for i in range(n)) points = [line.interpolate(distance) for distance in distances] multipoint = unary_union(points) # or new_line = LineString(points) | 12 | 24 |
62,990,553 | 2020-7-20 | https://stackoverflow.com/questions/62990553/how-to-run-a-python-function-in-kotlin | I am making an app in kotlin. But I know python a lot and made the logic in python. The kotlin is only used for the display. Is there a way to call a python function in kotlin?. A python script can call python scripts but can a Kotlin script call one? | I think there are two normal solutions. Using Polyglot API of GraalVM (https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/package-summary.html). Creating a C interface implemented with Python (https://www.linuxjournal.com/article/8497) and calling it with JNI (https://docs.oracle.com/javase/8/docs/technotes/guides/jni/). However, this option needs you to manage your objects and memory very carefully (dispose all C pointers properly etc.) | 12 | 0 |
62,989,923 | 2020-7-20 | https://stackoverflow.com/questions/62989923/pandas-dataframe-replace-part-of-string-with-value-from-another-column | I having replace issue while I try to replace a string with value from another column. I want to replace 'Length' with df['Length']. df["Length"]= df["Length"].replace('Length', df['Length'], regex = True) Below is my data Input: **Formula** **Length** Length 5 Length+1.5 6 Length-2.5 5 Length 4 5 5 Expected Output: **Formula** **Length** 5 5 6+1.5 6 5-2.5 5 4 4 5 5 However, with the code I used above, it will replace my entire cell instead of Length only. I getting below output: I found it was due to df['column'] is used, if I used any other string the behind offset (-1.5) will not get replaced. **Formula** **Length** 5 5 6 6 5 5 4 4 5 5 May I know is there any replace method for values from other columns? Thank you. | If want replace by another column is necessary use DataFrame.apply: df["Formula"]= df.apply(lambda x: x['Formula'].replace('Length', str(x['Length'])), axis=1) print (df) Formula Length 0 5 5 1 6+1.5 6 2 5-2.5 5 3 4 4 4 5 5 Or list comprehension: df["Formula"]= [x.replace('Length', str(y)) for x, y in df[['Formula','Length']].to_numpy()] | 13 | 18 |
62,948,421 | 2020-7-17 | https://stackoverflow.com/questions/62948421/how-to-create-point-cloud-file-ply-from-vertices-stored-as-numpy-array | I have some vertices whose coordinates were stored as NumPy array. xyz_np: array([[ 7, 53, 31], [ 61, 130, 116], [ 89, 65, 120], ..., [ 28, 72, 88], [ 77, 65, 82], [117, 90, 72]], dtype=int32) I want to save these vertices as a point cloud file(such as .ply) and visualize it in Blender. I don't have face information. | You can use Open3D to do this. # Pass numpy array to Open3D.o3d.geometry.PointCloud and visualize xyz = np.random.rand(100, 3) pcd = o3d.geometry.PointCloud() pcd.points = o3d.utility.Vector3dVector(xyz) o3d.io.write_point_cloud("./data.ply", pcd) You can also visualize the point cloud using Open3D. o3d.visualization.draw_geometries([pcd]) | 13 | 19 |
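A self-contained version of the answer above with the imports spelled out, using a few of the integer vertices from the question (cast to float64, since Vector3dVector is backed by double-precision vectors):

```python
import numpy as np
import open3d as o3d

# A few of the int32 vertices from the question
xyz_np = np.array([[7, 53, 31], [61, 130, 116], [89, 65, 120]], dtype=np.int32)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz_np.astype(np.float64))
o3d.io.write_point_cloud("data.ply", pcd)

# Optional preview before importing data.ply into Blender
o3d.visualization.draw_geometries([pcd])
```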
62,988,494 | 2020-7-20 | https://stackoverflow.com/questions/62988494/adjust-width-of-dropdown-menu-option-in-dash-plotly | I am trying to build an app using Dash in Python based on Plotly. I am having hard time in adjusting the width of Dropdown menu options. I have attached code and image below. I would like the width of Dropdown options to be same as the menu width. app.layout = html.Div(children=[ html.H1(children='Welcome to Portfolio Construction Engine!'), html.Div(children='What would you like to do?', style={ 'font-style': 'italic', 'font-weight': 'bold' }), html.Div([ dcc.Dropdown( id='demo-dropdown', options=[ {'label': 'Upload Scores', 'value': 'upload_scores'}, {'label': 'Analyze Portfolios', 'value': 'analyze_portfoliio'}, {'label': 'Generate Data for IC Engine', 'value': 'gen_data_ic_engine'} ], placeholder='Select a task...', style={ 'width': '50%' } ), html.Div(id='dd-output-container') ]) ]) | While its correct to place width: 50% to change the width of the dropdown component, you've placed it in the inner component rather than the parent Div. app.layout = html.Div( children=[ html.H1(children="Welcome to Portfolio Construction Engine!"), html.Div( children="What would you like to do?", style={"font-style": "italic", "font-weight": "bold"}, ), html.Div( [ dcc.Dropdown( id="demo-dropdown", options=[ {"label": "Upload Scores", "value": "upload_scores"}, {"label": "Analyze Portfolios", "value": "analyze_portfoliio"}, { "label": "Generate Data for IC Engine", "value": "gen_data_ic_engine", }, ], placeholder="Select a task...", # style={"width": "50%"}, NOT HERE ), html.Div(id="dd-output-container"), ], style={"width": "50%"}, ), ] ) | 16 | 24 |
62,985,961 | 2020-7-19 | https://stackoverflow.com/questions/62985961/how-to-use-requests-session-so-that-headers-are-presevred-and-reused-in-subseque | I might have misunderstood the requests.session object. headers ={'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.28 Safari/537.36'} s = requests.Session() r = s.get('https://www.barchart.com/', headers = headers) print(r.status_code) This works fine and return 200 as expected. However this following return 403 and shows that the headers from the first request have not been saved in the session like it would manually using a browser: headers ={'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.28 Safari/537.36'} s = requests.Session() r = s.get('https://www.barchart.com/', headers = headers) r = s.get('https://www.barchart.com/futures/quotes/CLQ20') print(r.status_code) print(s.headers) I thought there would be a way to compound headers, cookies etc from 1 requests to the other using the session object... am i wrong ? | You can use session.headers (doc) property to specify headers that are sent with each request: import requests headers ={'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.28 Safari/537.36'} s = requests.session() s.headers = headers # <-- set default headers here r = s.get('https://www.barchart.com/') print(r.status_code) print(s.headers) print('-' * 80) r = s.get('https://www.barchart.com/futures/quotes/CLQ20') print(r.status_code) print(s.headers) s.close() Prints: 200 {'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.28 Safari/537.36'} -------------------------------------------------------------------------------- 200 {'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.28 Safari/537.36'} | 15 | 20 |
62,983,674 | 2020-7-19 | https://stackoverflow.com/questions/62983674/the-absolute-value-of-a-complex-number-with-numpy | I have the following script in Python. I am calculating the Fourier Transform of an array. When I want to plot the results (Fourier transform) I am using the absolute value of that calculation. However, I do not know how the absolute value of complex numbers is being produced. Does anyone know how it calculates? I need this to reproduce in Java. import numpy as np import matplotlib.pyplot as plt from numpy import fft inp = [1,2,3,4] res = fft.fft(inp) print(res[1]) # returns (-2+2j) complex number print(np.abs(res[1])) # returns 2.8284271247461903 | sqrt(Re(z)**2 + Im(z)**2) for z = a + ib this becomes: sqrt(a*a + b*b) It's just the euclidean norm. You have to sum the square of real part and imaginary part (without the i) and do the sqrt of it. https://www.varsitytutors.com/hotmath/hotmath_help/topics/absolute-value-complex-number | 10 | 13 |
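Worked out for the value in the script above, res[1] = -2+2j, without NumPy (which is essentially what a Java port would do):

```python
import math

z = complex(-2, 2)                        # res[1] from the script above
abs_z = math.sqrt(z.real**2 + z.imag**2)  # sqrt((-2)^2 + 2^2) = sqrt(8)
print(abs_z)                              # 2.8284271247461903, same as np.abs(res[1])
```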
62,978,955 | 2020-7-19 | https://stackoverflow.com/questions/62978955/poetry-ignore-dependency-in-pyproject-toml | I currently have a Python3 project set up with Poetry as the main package manager. Next to that I also have set up a build and some automated testing via Github workflows. My package depends on Tensorflow, although the automated tests can run without it. Unfortunately Tensorflow (which is quite big) is installed every time the Github workflow runs these tests. As Tensorflow is not needed for these tests and as I would like to speed up my build, I would like to ignore the Tensorflow dependency when poetry install is called from the build pipeline. Does anybody know a way to exclude a dependency when using Poetry? | The only other approach that comes to mind would be to move the tensorflow dependency to an extra category, which in poetry would look like this: $ poetry add --extras tensorflow This means that it won't be installed when you run poetry install, unless it is part of a named group that you install explicitly. This can be achieved by adding this to your pyproject.toml: [tool.poetry.extras] runtime = ["tensorflow"] # any name goes, I chose "runtime" because it sounded like it'd make sense The list can be extended with any other package that you only need during runtime, and not during tests. If you want to install your code to actually run it, you'll have to execute this before: $ poetry install --extras runtime This will cleanly separate your dependencies, you'll have to evaluate whether it makes sense in your case. As a rule of thumb, it's usually better to run hacks to make tests work rather than worsen the client experience, so your current workflow has a good chance of being better than what I just wrote. | 14 | 15 |
62,975,325 | 2020-7-19 | https://stackoverflow.com/questions/62975325/why-is-summing-list-comprehension-faster-than-generator-expression | Not sure if title is correct terminology. If you have to compare the characters in 2 strings (A,B) and count the number of matches of chars in B against A: sum([ch in A for ch in B]) is faster on %timeit than sum(ch in A for ch in B) I understand that the first one will create a list of bool, and then sum the values of 1. The second one is a generator. I'm not clear on what it is doing internally and why it is slower? Thanks. Edit with %timeit results: 10 characters generator expression list 10000 loops, best of 3: 112 µs per loop 10000 loops, best of 3: 94.6 µs per loop 1000 characters generator expression list 100 loops, best of 3: 8.5 ms per loop 100 loops, best of 3: 6.9 ms per loop 10,000 characters generator expression list 10 loops, best of 3: 87.5 ms per loop 10 loops, best of 3: 76.1 ms per loop 100,000 characters generator expression list 1 loop, best of 3: 908 ms per loop 1 loop, best of 3: 840 ms per loop | I took a look at the disassembly of each construct (using dis). I did this by declaring these two functions: def list_comprehension(): return sum([ch in A for ch in B]) def generation_expression(): return sum(ch in A for ch in B) and then calling dis.dis with each function. For the list comprehension: 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) 4 FOR_ITER 12 (to 18) 6 STORE_FAST 1 (ch) 8 LOAD_FAST 1 (ch) 10 LOAD_GLOBAL 0 (A) 12 COMPARE_OP 6 (in) 14 LIST_APPEND 2 16 JUMP_ABSOLUTE 4 18 RETURN_VALUE and for the generator expression: 0 LOAD_FAST 0 (.0) 2 FOR_ITER 14 (to 18) 4 STORE_FAST 1 (ch) 6 LOAD_FAST 1 (ch) 8 LOAD_GLOBAL 0 (A) 10 COMPARE_OP 6 (in) 12 YIELD_VALUE 14 POP_TOP 16 JUMP_ABSOLUTE 2 18 LOAD_CONST 0 (None) 20 RETURN_VALUE The disassembly for the actual summation is: 0 LOAD_GLOBAL 0 (sum) 2 LOAD_CONST 1 (<code object <genexpr> at 0x7f49dc395240, file "/home/mishac/dev/python/kintsugi/KintsugiModels/automated_tests/a.py", line 12>) 4 LOAD_CONST 2 ('generation_expression.<locals>.<genexpr>') 6 MAKE_FUNCTION 0 8 LOAD_GLOBAL 1 (B) 10 GET_ITER 12 CALL_FUNCTION 1 14 CALL_FUNCTION 1 16 RETURN_VALUE but this sum disassembly was constant between both your examples, with the only difference being the loading of generation_expression.<locals>.<genexpr> vs list_comprehension.<locals>.<listcomp> (so just loading a different local variable). The differing bytecode instructions between the first two disassemblies are LIST_APPEND for the list comprehension vs. the conjunction of YIELD_VALUE and POP_TOP for the generator expression. I won't pretend I know the intrinsics of Python bytecode, but what I gather from this is that the generator expression is implemented as a queue, where the value is generated and then popped. This popping doesn't have to happen in a list comprehension, leading me to believe there'll be a slight amount of overhead in using generators. Now this doesn't mean that generators are always going to be slower. Generators excel at being memory-efficient, so there will be a threshold N such that list comprehensions will perform slightly better before this threshold (because memory use won't be a problem), but after this threshold, generators will significantly perform better. | 16 | 15 |
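A sketch for reproducing the comparison outside IPython with the standard timeit module (the random 1000-character strings are arbitrary stand-ins for A and B):

```python
import random
import string
import timeit

A = ''.join(random.choices(string.ascii_lowercase, k=1000))
B = ''.join(random.choices(string.ascii_lowercase, k=1000))

list_version = lambda: sum([ch in A for ch in B])
gen_version = lambda: sum(ch in A for ch in B)

print("list comprehension  :", timeit.timeit(list_version, number=200))
print("generator expression:", timeit.timeit(gen_version, number=200))
```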
62,959,412 | 2020-7-17 | https://stackoverflow.com/questions/62959412/how-to-properly-uninstall-pyenv-on-linux | I have installed pyenv on a Raspberry Pi but now I want to uninstall it. I already ran the command rm -rf $(pyenv root) but now it says to delete lines from my "shell startup configuration". What does that mean? I found these lines in my .bash_profile file: if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi And at the end of my .bashrc file there is: export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" Should I only delete (or comment out with #) the ones from .bash_profile? Or maybe from both files? | Delete all 5 lines from both files. Then make sure that your .bash_profile still tells .bashrc to load. | 11 | 8 |
62,962,703 | 2020-7-17 | https://stackoverflow.com/questions/62962703/how-to-get-numpy-working-properly-in-anaconda-python-3-7-6 | I am trying to use NumPy in Python. I have just installed Anaconda Python 3.7, and that all seemed to go smoothly. However, I cannot import numpy(using the line import numpy). When I do, I get the following error: C:\Users\jsmith\anaconda3\lib\site-packages\numpy\__init__.py:140: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service from . import _distributor_init Traceback (most recent call last): File "C:\Users\jsmith\anaconda3\lib\site-packages\numpy\core\__init__.py", line 24, in <module> from . import multiarray File "C:\Users\jsmith\anaconda3\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "C:\Users\jsmith\anaconda3\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\jsmith\anaconda3\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "C:\Users\jsmith\anaconda3\lib\site-packages\numpy\core\__init__.py", line 54, in <module> raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy c-extensions failed. - Try uninstalling and reinstalling numpy. - If you have already done that, then: 1. Check that you expected to use Python3.7 from "C:\Users\jsmith\anaconda3\python.exe", and that you have no directories in your PATH or PYTHONPATH that can interfere with the Python and numpy version "1.18.1" you're trying to use. 2. If (1) looks fine, you can open a new issue at https://github.com/numpy/numpy/issues. Please include details on: - how you installed Python - how you installed numpy - your operating system - whether or not you have multiple versions of Python installed - if you built from source, your compiler versions and ideally a build log - If you're working with a numpy git repository, try `git clean -xdf` (removes all files not under version control) and rebuild numpy. Note: this error has many possible causes, so please don't comment on an existing issue about this - open a new one instead. Original error was: DLL load failed: The specified module could not be found. I can see it in the Enviorments tab of Anaconda Navigator, and when I try to use it in Eclipse(Pydev) it shows up under forced builtins. I took a look at my PYTHONPATH, and both my enviorment in Eclipse and my base python directory (jsmith/anaconda3) are in it. I have tried importing other libraries I see under forced builtins,and those work fine, yet numpy seems to be the only one with issues. Calling pip install numpy tells me it is already installed with version 1.18.1. I looked at this stack overflow page, and ran the first command in the answer(conda create -n test numpy python=3.7 --no-default-packages) in anaconda prompt. This worked, and then I realized the test was specific to the question, and tried base instead, and got this error: CondaValueError: The target prefix is the base prefix. Aborting. 
However, calling conda activate base did nothing. | As mentioned in the comments by @cel, uninstalling and reinstalling numpy with pip uninstall numpy followed by pip install numpy made it work. | 11 | 22 |
62,967,062 | 2020-7-18 | https://stackoverflow.com/questions/62967062/disable-publishing-to-pypi-with-poetry | I am setting up Poetry in combination with Tox to automate builds and testing. The project I am working on, however, is private, and I want to avoid anyone working on it accidentally publishing it to PyPI. I have initialized a project using poetry init and my assumption is that the resulting setup does not result in a viable package that can be published without any further setup to begin with. Is this correct? How could I further configure Poetry so that even if someone accidentally runs poetry publish in the future the package will not actually be published? | As far as I know, Poetry does not support such an option directly yet, but a workaround is possible: [tool.poetry] exclude = ["**"] In TOML format, * denotes a single-level wildcard, and ** denotes all files in the given directory hierarchy. The exclude = ["**"] option prevents project files from getting into the package when poetry build is executed. It will show: [ModuleOrPackageNotFound] No file/folder found for package package_name Nevertheless, poetry will still create a tar.gz file containing just three files: pyproject.toml, setup.py, and PKG-INFO. That minimal archive can still be published. | 7 | 5 |
62,962,623 | 2020-7-17 | https://stackoverflow.com/questions/62962623/how-to-set-background-color-of-the-plot-and-color-for-gridlines | Here is my code: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar( name='Group 1', x=['Var 1', 'Var 2', 'Var 3'], y=[3, 6, 4], error_y=dict(type='data', array=[1, 0.5, 1.5]), width=0.15 )) fig.add_trace(go.Bar( name='Group 2', x=['Var 1', 'Var 2', 'Var 3'], y=[4, 7, 3], error_y=dict(type='data', array=[0.5, 1, 2]), width=0.15 )) fig.update_layout(barmode='group') fig.show() output: I read this but code from seems depreciated and does not works. Question: How to set background color of the plot and color for grid lines? | This should help import plotly.graph_objects as go from plotly.graph_objects import Layout # Set layout with background color you want (rgba values) # This one is for white background layout = Layout(plot_bgcolor='rgba(0,0,0,0)') # Use that layout here fig = go.Figure(layout=layout) fig.add_trace(go.Bar( name='Group 1', x=['Var 1', 'Var 2', 'Var 3'], y=[3, 6, 4], error_y=dict(type='data', array=[1, 0.5, 1.5]), width=0.15 )) fig.add_trace(go.Bar( name='Group 2', x=['Var 1', 'Var 2', 'Var 3'], y=[4, 7, 3], error_y=dict(type='data', array=[0.5, 1, 2]), width=0.15 )) fig.update_layout(barmode='group') # Change grid color and axis colors fig.update_xaxes(showline=True, linewidth=2, linecolor='black', gridcolor='Red') fig.update_yaxes(showline=True, linewidth=2, linecolor='black', gridcolor='Red') fig.show() Sources: Grid line customization Background Customization | 8 | 19 |
62,937,310 | 2020-7-16 | https://stackoverflow.com/questions/62937310/python-3-6-type-hinting-for-a-function-accepting-generic-class-type-and-instance | I have a function with the following signature: def wait_for_namespaced_objects_condition( obj_type: Type[NamespacedAPIObject], obj_condition_fun: Callable[[NamespacedAPIObject], bool], ) -> List[NamespacedAPIObject]: ... Important part here are NamespacedAPIObject parameters. This function takes an obj_type as type spec, then creates an object(instance) of that type(class). Then some other objects of that type are added to a list, which is then filtered with obj_condition_fun and returned as a result of type List[NamespacedAPIObject]. This works fine and also evaluates OK with mypy`. Now, I want to make this function generic, so that in the place of NamespacedAPIObject any subtype of it can be used. My attempt was to do it like this: T = TypeVar("T", bound=NamespacedAPIObject) def wait_for_namespaced_objects_condition( obj_type: Type[T], obj_condition_fun: Callable[[T], bool], ) -> List[T]: But Type[T] is TypeVar, so it's not the way to go. The question is: what should be the type of obj_type parameter to make this work? I tried Generic[T], which seemed the most reasonable to me, but it doesn't work. If i put there just obj_type: T, mypy evaluates this OK, but it seems to wrong to me. In my opinion this means that obj_type is an instance of a class that is a subtype of NamespacedAPIObject, while what I want to say is that "T is a class variable that represents subtype of NamespacedAPIObject. How to solve this? [Edit] To explain a little bit better (see comment below) why Type[T] doesn't work for me: in my case all subtype of T implement class method objects(), but when I try to write my code like this: obj_type.objects() mypy returns: pytest_helm_charts/utils.py:36: error: "Type[T]" has no attribute "objects" | But Type[T] is TypeVar, so it's not the way to go. No, you are on the right track - TypeVar is definitely the way to go. The problem here is rather in pykube.objects.APIObject class being wrapped in a decorator that mypy cannot deal with yet. Adding type stubs for pykube.objects will resolve the issue. Create a directory _typeshed/pykube and add minimal type stubs for pykube: _typeshed/pykube/__init__.pyi: from typing import Any def __getattr__(name: str) -> Any: ... # incomplete _typeshed/pykube/objects.pyi: from typing import Any, ClassVar, Optional from pykube.query import Query def __getattr__(name: str) -> Any: ... # incomplete class ObjectManager: def __getattr__(self, name: str) -> Any: ... # incomplete def __call__(self, api: Any, namespace: Optional[Any] = None) -> Query: ... class APIObject: objects: ClassVar[ObjectManager] def __getattr__(self, name: str) -> Any: ... # incomplete class NamespacedAPIObject(APIObject): ... Now running $ MYPYPATH=_typeshed mypy pytest_helm_charts/ resolves obj_type.objects correctly: T = TypeVar('T', bound=NamespacedAPIObject) def wait_for_namespaced_objects_condition(obj_type: Type[T]) -> List[T]: reveal_type(obj_type.objects) Output: pytest_helm_charts/utils.py:29: note: Revealed type is 'pykube.objects.ObjectManager' | 8 | 4 |
62,952,273 | 2020-7-17 | https://stackoverflow.com/questions/62952273/catch-python-exception-and-save-traceback-text-as-string | I'm trying to write a nice error handler for my code, so that when it fails, the logs, traceback and other relevant info get emailed to me. I can't figure out how to take an exception object and extract the traceback. I find the traceback module pretty confusing, mostly because it doesn't deal with exceptions at all. It just fetches some global variables from somewhere, assuming that I want the most recent exception. But what if I don't? What if I want to ignore some exception in my error handler? (e.g. if I fail to send me email and want to retry.) What I want import traceback as tb # some function that will fail after a few recursions def myfunc(x): assert x > 0, "oh no" return myfunc(x-1) try: myfunc(3) except Exception as e: traceback_str = tb.something(e) Note that tb.something takes e as an argument. There's lots of questions on Stack Overflow about using the traceback module to get a traceback string. The unique thing about this question is how to get it from the caught exception, instead of global variables. Result: traceback_str contains the string: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in myfunc File "<stdin>", line 3, in myfunc File "<stdin>", line 3, in myfunc File "<stdin>", line 2, in myfunc AssertionError: oh no Note that it contains not just the most recent function call, but the whole stack, and includes the "AssertionError" at the end | The correct function for tb.something(e) is ''.join(tb.format_exception(None, e, e.__traceback__)) | 16 | 23 |
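Putting the answer's one-liner back into the question's setup, a runnable sketch:

```python
import traceback as tb

def myfunc(x):
    """Fails after a few recursions."""
    assert x > 0, "oh no"
    return myfunc(x - 1)

try:
    myfunc(3)
except Exception as e:
    # Join the formatted traceback lines of the caught exception into one string
    traceback_str = ''.join(tb.format_exception(None, e, e.__traceback__))

print(traceback_str)
```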
62,939,781 | 2020-7-16 | https://stackoverflow.com/questions/62939781/adding-files-to-gitignore-in-visual-studio-code | In Visual Studio Code, with git extensions installed, how do you add files or complete folders to the .gitignore file so the files do not show up in untracked changes? Specifically, for Python projects, how do you add the __pycache__ folder and its contents to the .gitignore? I have tried right-clicking on the folder in the Explorer panel but the pop-up menu has no git-ignore menu option. Thanks in advance. I know how to do it from the command line. Yes, just edit the .gitignore file. I was just asking how it can be done from within the VS Code IDE using the git extension for VS Code. | So after further investigation, it is possible to add files from the __pycache__ folder to the .gitignore file from within VS Code by using the list of untracked changed files in the 'Source Control' panel. You right-click a file and select "Add to .gitignore" from the pop-up menu. You can't add folders, only individual files. | 18 | 14 |
62,947,285 | 2020-7-17 | https://stackoverflow.com/questions/62947285/is-there-a-difference-between-series-replace-and-series-map-in-pandas | Both pandas.Series.map and pandas.Series.replace seem to give the same result. Is there a reason for using one over the other? For example: import pandas as pd df = pd.Series(['Yes', 'No']) df 0 Yes 1 No dtype: object df.replace(to_replace=['Yes', 'No'], value=[True, False]) 0 True 1 False dtype: bool df.map({'Yes':True, 'No':False}) 0 True 1 False dtype: bool df.replace(to_replace=['Yes', 'No'], value=[True, False]).equals(df.map({'Yes':True, 'No':False})) True | Both of these methods are used for substituting values. From Series.replace docs: Replace values given in to_replace with value. From Series.map docs: Used for substituting each value in a Series with another value, that may be derived from a function, a dict or a Series. They differ in the following: replace accepts str, regex, list, dict, Series, int, float, or None. map accepts a dict or a Series. They differ in handling null values. replace uses re.sub under the hood.The rules for substitution for re.sub are the same. Take below example: In [124]: s = pd.Series([0, 1, 2, 3, 4]) In [125]: s Out[125]: 0 0 1 1 2 2 3 3 4 4 dtype: int64 In [126]: s.replace({0: 5}) Out[126]: 0 5 1 1 2 2 3 3 4 4 dtype: int64 In [129]: s.map({0: 'kitten', 1: 'puppy'}) Out[129]: 0 kitten 1 puppy 2 NaN 3 NaN 4 NaN dtype: object As you can see for s.map method, values that are not found in the dict are converted to NaN, unless the dict has a default value (e.g. defaultdict) For s.replace, it just replaces the value to be replaced keeping the rest as it is. | 17 | 38 |
62,937,312 | 2020-7-16 | https://stackoverflow.com/questions/62937312/fastest-and-most-compact-way-to-get-the-smallest-number-that-is-divisible-by-num | I tried my attempt at finding the smallest number that is divisible by numbers from 1 to n, and now I'm looking for advice on ways to further compact/make my solution more efficient. It would be pretty cool if there was an O(1) solution too. def get_smallest_number(n): """ returns the smallest number that is divisible by numbers from 1 to n """ divisors = range(1, n+1) check_divisible = lambda x: all([x % y == 0 for y in divisors]) i = 1 while True: if check_divisible(i): return i i += 1 | Mathematically, you are computing the least common multiple of 1, 2, ..., n. lcm is easily derived from gcd, and lcm is an associative operation. reduce is useful for applying an associative operation to an interable. We can combine these ideas (as well as improvements due to Mark Dickinson and Eric Postpischil in the comments) to get a very fast solution: from math import gcd from functools import reduce def lcm(a,b): return a // gcd(a,b) * b def get_smallest_number2(n): return reduce(lcm,range(1 + n//2,n+1),1) Some quick %timeit results in IPython: %timeit get_smallest_number2(15) 2.07 µs ± 26.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) %timeit get_smallest_number(15) 443 ms ± 5.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) For n = 15 it is thus over 200,000 times faster. Your function fails to produce any output long before n = 100, but get_smallest_number2(100) evaluates to 69720375229712477164533808935312303556800 almost instantly. | 9 | 10 |
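As a side note, on Python 3.9 and newer the standard library covers this directly, since math.lcm accepts multiple arguments (this is an addition beyond the answer above and needs 3.9+):

```python
import math

def get_smallest_number3(n):
    # LCM of 1..n in one call; requires Python 3.9+
    return math.lcm(*range(1, n + 1))

print(get_smallest_number3(15))  # 360360
```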
62,935,983 | 2020-7-16 | https://stackoverflow.com/questions/62935983/vary-thickness-of-edges-based-on-weight-in-networkx | I'm trying to draw a network diagram using Python Networkx package. I would like to vary the thickness of the edges based on the weights given to the edges. I am using the following code which draws the diagram, but I cannot get the edge to vary its thickness based on the weight. Can someone help me with this problem? Thanks in advance. df = pd.DataFrame({ 'from':['D', 'A', 'B', 'C','A'], 'to':['A', 'D', 'A', 'E','C'], 'weight':['1', '5', '8', '3','20']}) G=nx.from_pandas_edgelist(df, 'from', 'to', edge_attr='weight', create_using=nx.DiGraph() ) nx.draw_shell(G, with_labels=True, node_size=1500, node_color='skyblue', alpha=0.3, arrows=True, weight=nx.get_edge_attributes(G,'weight').values()) | In order to set the widths for each edge, i.e with an array-like of edges, you'll have to use nx.draw_networkx_edges through the width parameter, since nx.draw only accepts a single float. And the weights can be obtaind with nx.get_edge_attributes. Also you can draw with a shell layout using nx.shell_layout and using it to position the nodes instead of nx.draw_shell: import networkx as nx from matplotlib import pyplot as plt widths = nx.get_edge_attributes(G, 'weight') nodelist = G.nodes() plt.figure(figsize=(12,8)) pos = nx.shell_layout(G) nx.draw_networkx_nodes(G,pos, nodelist=nodelist, node_size=1500, node_color='black', alpha=0.7) nx.draw_networkx_edges(G,pos, edgelist = widths.keys(), width=list(widths.values()), edge_color='lightblue', alpha=0.6) nx.draw_networkx_labels(G, pos=pos, labels=dict(zip(nodelist,nodelist)), font_color='white') plt.box(False) plt.show() | 8 | 15 |
62,935,406 | 2020-7-16 | https://stackoverflow.com/questions/62935406/how-to-make-a-signup-view-using-class-based-views-in-django | When I started to use Django, I was using FBVs ( Function Based Views ) for pretty much everything including signing up for new users. But as I delved deep more into projects, I realized that Class-Based Views are usually better for large projects as they are more clean and maintainable but this is not to say that FBVs aren't. Anyway, I migrated most of my whole project's views to Class-Based Views except for one that was a little confusing, the SignUpView. | In order to make SignUpView in Django, you need to utilize CreateView and SuccessMessageMixin for creating new users as well as displaying a success message that confirms the account was created successfully. Here's the code : views.py: from .forms import UserRegisterForm from django.views.generic.edit import CreateView class SignUpView(SuccessMessageMixin, CreateView): template_name = 'users/register.html' success_url = reverse_lazy('login') form_class = UserRegisterForm success_message = "Your profile was created successfully" and, the forms.py: from django import forms from django.contrib.auth.models import User from django.contrib.auth.forms import UserCreationForm class UserRegisterForm(UserCreationForm): email = forms.EmailField() class Meta: model = User fields = ['username', 'email', 'first_name'] | 8 | 19 |
62,925,100 | 2020-7-15 | https://stackoverflow.com/questions/62925100/how-to-control-distance-between-bars-in-bar-chart | I have two peaces of code that produce sameme result, so any could be used for answer. First: import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar( name='Group 1', x=['Var 1', 'Var 2', 'Var 3'], y=[3, 6, 4], error_y=dict(type='data', array=[1, 0.5, 1.5]), width=0.15 )) fig.add_trace(go.Bar( name='Group 2', x=['Var 1', 'Var 2', 'Var 3'], y=[4, 7, 3], error_y=dict(type='data', array=[0.5, 1, 2]), width=0.15 )) fig.update_layout(barmode='group') fig.show() Second: import plotly.graph_objects as go fig = go.Figure(data=[ go.Bar( name='Group 1', x=['Var 1', 'Var 2', 'Var 3'], y=[3, 6, 4], error_y=dict(type='data', array=[1, 0.5, 1.5]), width=0.15), go.Bar( name='Group 2', x=['Var 1', 'Var 2', 'Var 3'], y=[4, 7, 3], error_y=dict(type='data', array=[0.5, 1, 2]), width=0.15) ]) # Change the bar mode fig.update_layout(barmode='group') fig.show() They both look like: Question: How can I control the distance between bars of different groups and bars of different Vars? | There are 2 properties that you are asking for. They are as follows: bargap - gap between bars of adjacent location coordinates. bargroupgap - gap between bars of the same location coordinate. But there is a catch here, if you set width then both the properties are ignored. So, remove the width value from the code and setting the above properties should work. import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar( name='Group 1', x=['Var 1', 'Var 2', 'Var 3'], y=[3, 6, 4], error_y=dict(type='data', array=[1, 0.5, 1.5]) )) fig.add_trace(go.Bar( name='Group 2', x=['Var 1', 'Var 2', 'Var 3'], y=[4, 7, 3], error_y=dict(type='data', array=[0.5, 1, 2]) )) fig.update_layout(barmode='group', bargap=0.30,bargroupgap=0.0) fig.show() | 14 | 25 |
62,922,491 | 2020-7-15 | https://stackoverflow.com/questions/62922491/change-linewidth-in-python-plotly-px-figure | I'm unclear how to style a line in a plotly express figure, altering color and width. The plotly documentation offers suggestions to style lines using go, but I do not see information for px. Example import plotly.express as px df = px.data.gapminder().query("continent=='Oceania'") fig = px.line(df, x="year", y="lifeExp", color='country') fig.show() I tried putting line=dict(color='firebrick', width=4) as an argument of px.line but that throws the error line() got an unexpected keyword argument 'line' as it is code for go rather than px. | Plotly px line styles can be updated with update_traces function see documentation for further information. The following example modifies the prior figure making all of the lines black and thin. import plotly.express as px df = px.data.gapminder().query("continent=='Oceania'") fig = px.line(df, x="year", y="lifeExp", color='country') # This styles the line fig.update_traces(line=dict(color="Black", width=0.5)) fig.show() | 9 | 27 |
62,918,389 | 2020-7-15 | https://stackoverflow.com/questions/62918389/ceating-dynamodb-table-says-invalid-one-or-more-parameter-values-were-invalid | I am trying to create dynamo db table with python. Below is the script i have. I am trying to create one partition key and sort key and bunch of columns. What I tried: import boto3 dynamodb = boto3.resource('dynamodb') table = dynamodb.create_table( TableName='g_view_data', KeySchema=[ { 'AttributeName': 'customer_id', 'KeyType': 'HASH' #Partition key }, { 'AttributeName': 'key_id', 'KeyType': 'RANGE' #Sort key } ], AttributeDefinitions=[ { 'AttributeName': 'cusotmer_id', 'AttributeType': 'N' }, { 'AttributeName': 'key_id', 'AttributeType': 'N' }, { 'AttributeName': 'dashboard_name', 'AttributeType': 'S' }, { 'AttributeName': 'tsm', 'AttributeType': 'S' }, { 'AttributeName': 'security_block', 'AttributeType': 'S' }, { 'AttributeName': 'core_block', 'AttributeType': 'S' }, { 'AttributeName': 'type', 'AttributeType': 'S' }, { 'AttributeName': 'subscription', 'AttributeType': 'S' }, { 'AttributeName': 'account_id', 'AttributeType': 'S' }, { 'AttributeName': 'region', 'AttributeType': 'S' }, { 'AttributeName': 'NAT', 'AttributeType': 'S' }, { 'AttributeName': 'jb', 'AttributeType': 'S' }, { 'AttributeName': 'dc', 'AttributeType': 'S' }, { 'AttributeName': 'av', 'AttributeType': 'S' }, { 'AttributeName': 'gl', 'AttributeType': 'S' }, { 'AttributeName': 'backup', 'AttributeType': 'S' }, { 'AttributeName': 'cpm', 'AttributeType': 'S' }, { 'AttributeName': 'zb', 'AttributeType': 'S' }, ], ProvisionedThroughput={ 'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10 } ) print("Table status:", table.table_status) Output i am getting: invalid One or more parameter values were invalid: Some index key attributes are not defined in AttributeDefinitions. can some one suggest what wrong with my code..just trying to create a simple table. why it complains parameters are missing. not sure..can some one suggest pls Edit1: after adding customer_id,key_id in attributes i am getting different error botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the CreateTable operation: One or more parameter values were invalid: Number of attributes in KeySchema does not exactly match number of attributes defined in AttributeDefinitions | As stated in the AWS documentation, the attributes in KeySchema must also be defined in the AttributeDefinitions. Please try adding customer_id and key_id to your AttributeDefinitions as well. As for the AttributeDefinitions, they are used for the keys only (primary key and indexes). So here is an example that worked for me: import boto3 dynamodb = boto3.resource('dynamodb') table = dynamodb.create_table( TableName='g_view_data', KeySchema=[ { 'AttributeName': 'customer_id', 'KeyType': 'HASH' }, { 'AttributeName': 'key_id', 'KeyType': 'RANGE' } ], AttributeDefinitions=[ { 'AttributeName': 'customer_id', 'AttributeType': 'N' }, { 'AttributeName': 'key_id', 'AttributeType': 'N' }, ], ProvisionedThroughput={ 'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10 } ) print("Table status:", table.table_status) Then after creating that table you can add items with any attributes that you want. For example add this code to your scripts: table = dynamodb.Table('g_view_data') item = table.put_item( Item={ 'customer_id': 234, 'key_id': 123, 'dashboard_name': 'test', } ) | 11 | 18 |
62,918,481 | 2020-7-15 | https://stackoverflow.com/questions/62918481/how-to-create-a-unique-identifier-based-on-multiple-columns | I have a pandas dataframe that looks something like this: brand description former_price discounted_price 0 A icecream 1099.0 855.0 1 A cheese 469.0 375.0 2 B catfood 179.0 119.0 3 C NaN 699.0 399.0 4 NaN icecream 769.0 549.0 5 A icecream 769.0 669.0 I want to create a column that will assign a unique value for each brand & description combination. Note that either the brand or the description can be missing from the dataset (notified by NaN value). Also, note that if the brand and the description is the same (duplicated) I still want the unique value to be the same for the row. The output should look like this: product_key brand description former_price discounted_price 0 1 A icecream 1099.0 855.0 1 2 A cheese 469.0 375.0 2 3 B catfood 179.0 119.0 3 4 C NaN 699.0 399.0 4 5 NaN icecream 769.0 549.0 5 1 A icecream 769.0 669.0 The values in product_key can be anything, I just want them to be unique based on brand and description columns. Any help is immensely appreciated! Thanks a lot! | You could try with pd.Series.factorize: df.set_index(['brand','description']).index.factorize()[0]+1 Output: 0 1 1 2 2 3 3 4 4 5 5 1 So you could try this, to assign it to be the first column: df.insert(loc=0, column='product_key', value=df.set_index(['brand','description']).index.factorize()[0]+1) Output: df product_key brand description former_price discounted_price 0 1 A icecream 1099.0 855.0 1 2 A cheese 469.0 375.0 2 3 B catfood 179.0 119.0 3 4 C NaN 699.0 399.0 4 5 NaN icecream 769.0 549.0 5 1 A icecream 769.0 669.0 | 8 | 10 |
62,917,882 | 2020-7-15 | https://stackoverflow.com/questions/62917882/convert-datetime64ns-utc-pandas-column-to-datetime | I have a dataframe which has timestamp and its datatype is object. 0 2020-07-09T04:23:50.267Z 1 2020-07-09T11:21:55.536Z 2 2020-07-09T11:23:18.015Z 3 2020-07-09T04:03:28.581Z 4 2020-07-09T04:03:33.874Z Name: timestamp, dtype: object I am not aware of the format of the datetime in the above dataframe. I applied pd.to_datetime to the above column where the datatype is changed as datetime64[ns, UTC]. df['timestamp'] = pd.to_datetime(df.timestamp) Now the dataframe looks in this way, 0 2020-07-09 04:23:50.267000+00:00 1 2020-07-09 11:21:55.536000+00:00 2 2020-07-09 11:23:18.015000+00:00 3 2020-07-09 04:03:28.581000+00:00 4 2020-07-09 04:03:33.874000+00:00 Name: timestamp, dtype: datetime64[ns, UTC] I want to convert the above datetime64[ns, UTC] format to normal datetime. For example, 2020-07-09 04:23:50.267000+00:00 to 2020-07-09 04:23:50 Can anyone explain me what is the meaning of this 2020-07-09T04:23:50.267Z representation and also how to convert this into datetime object? | To remove timezone, use tz_localize: df['timestamp'] = pd.to_datetime(df.timestamp).dt.tz_localize(None) Output: timestamp 0 2020-07-09 04:23:50.267 1 2020-07-09 11:21:55.536 2 2020-07-09 11:23:18.015 3 2020-07-09 04:03:28.581 4 2020-07-09 04:03:33.874 | 36 | 65 |
62,914,335 | 2020-7-15 | https://stackoverflow.com/questions/62914335/python-pandas-query-for-values-in-list | I want to use query() to filter rows in a panda dataframe that appear in a given list. Similar to this question, but I really would prefer to use query() import pandas as pd df = pd.DataFrame({'A' : [5,6,3,4], 'B' : [1,2,3, 5]}) mylist =[5,3] I tried: df.query('A.isin(mylist)') | You could try this, using @, that allows us to refer a variable in the environment: df.query('A in @mylist') Or this: df.query('A.isin(@mylist)',engine='python') | 12 | 17 |
62,910,635 | 2020-7-15 | https://stackoverflow.com/questions/62910635/create-sub-cell-in-spyder | Is there any workaround to create sub-cells in Spyder? E.g. I know that with #%% Cell 1 I can create a new cell. But is there a way to create a sub-cell which is grouped under the cell as in # Cell 1.1 ? I have found this discussion which didn't look encouraging. But I wanted to give it a try and ask here. | As per the post linked, the feature you are asking has been implemented. I just tried it on my Spyder IDE version 4.1.3 and it works by using an increasing number of %. For instance #%% Section 1 some code #%%% Sub-Section 1.1 some more code #%% Section 2 and so on | 16 | 21 |
62,907,802 | 2020-7-15 | https://stackoverflow.com/questions/62907802/best-way-to-detect-if-checkbox-is-ticked | My work: Scan the paper Check horizontal and vertical line Detect checkbox How to know checkbox is ticked or not At this point, I thought I could find it by using Hierarchical and Contours: Below is my work for i in range (len( contours_region)): #I already have X,Y,W,H of the checkbox through #print(i) #cv2.connectedComponentsWithStats x = contours_region[i][0][1] #when detecting checkbox x_1 = contours_region[i][2][1] y = contours_region[i][0][0] y_1 = contours_region[i][2][0] image_copy= image.copy() X,Y,W,H = contours_info[i] cv2.drawContours(image_copy, [numpy.array([[[X,Y]],[[X+W,Y]],[[X+W,Y+H]],[[X,Y+H]]])], 0, (0,0,255),2) gray = cv2.cvtColor(image_copy, cv2.COLOR_BGR2GRAY) ret,bw = cv2.threshold(gray,220,255,cv2.THRESH_BINARY_INV) contours,hierarchy = cv2.findContours(bw[x:x_1, y:y_1], cv2.RETR_CCOMP,1) print('-----Hierarchy-----') print(hierarchy) print('-----Number of Contours : '+ str(len(contours))) cv2.imshow('a', image_copy) cv2.waitKey(0) I got this result (some high contours, some high hierarchy) -----Hierarchy----- [[[-1 -1 1 -1] [ 2 -1 -1 0] [ 3 1 -1 0] [ 4 2 -1 0] [ 5 3 -1 0] [ 6 4 -1 0] [ 7 5 -1 0] [-1 6 -1 0]]] -----Number of Contours : 8 Another result: Low Contours, Low Hierarchy -----Hierarchy----- [[[-1 -1 1 -1] [ 2 -1 -1 0] [-1 1 -1 0]]] -----Number of Contours : 3 However, it's not perfect some case where it's not ticked but still got a really high result [[[-1 -1 1 -1] [ 2 -1 -1 0] [ 3 1 -1 0] [ 4 2 -1 0] [ 5 3 -1 0] [-1 4 -1 0]]] -----Number of Contours : 6 In general, After review the whole data, the gap is not convincing between ticked and not ticked. Around 30% of boxes, giving the wrong result. Therefore, really wish to have a better method. | I think erode function help you. Use erosion to make the ticks bigger then count the non zero pixels. Here You can find the basics: import cv2 import numpy as np from google.colab.patches import cv2_imshow img = cv2.imread("image.png"); cv2_imshow(img) kernel = np.ones((3, 3), np.uint8) better_image = cv2.erode(img,kernel) cv2_imshow(better_image) | 8 | 2 |
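A minimal sketch of the "count the non zero pixels" step described in the accepted answer above, assuming the box coordinates X, Y, W, H from the question are already known; the function name, the margin, and the 0.05 fill-ratio threshold are illustrative assumptions, and it thickens the ink by dilating the inverted threshold rather than eroding the grayscale image, which has the same effect on dark strokes:

import cv2
import numpy as np

def tick_ratio(page_bgr, x, y, w, h, margin=3):
    # Crop the inside of the box, skipping the border so the box outline
    # itself is not counted as ink.
    roi = page_bgr[y + margin:y + h - margin, x + margin:x + w - margin]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Dark ink becomes white (non-zero) pixels.
    _, bw = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
    # Thicken faint tick strokes before counting.
    bw = cv2.dilate(bw, np.ones((3, 3), np.uint8), iterations=1)
    return cv2.countNonZero(bw) / float(bw.size)

# Hypothetical decision rule: treat the box as ticked when more than ~5% of
# its interior is covered by ink; the cutoff needs tuning on real scans.
# ticked = tick_ratio(image, X, Y, W, H) > 0.05

Comparing this fill ratio against a tuned cutoff tends to separate ticked from empty boxes more reliably than counting contours or hierarchy levels.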
62,907,815 | 2020-7-15 | https://stackoverflow.com/questions/62907815/pytorch-what-is-the-difference-between-tensor-cuda-and-tensor-totorch-device | In PyTorch, what is the difference between the following two methods in sending a tensor (or model) to GPU: Setup: X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) # X = model() X = torch.DoubleTensor(X) Method 1 Method 2 X.cuda() device = torch.device("cuda:0")X = X.to(device) (I don't really need a detailed explanation of what is happening in the backend, just want to know if they are both essentially doing the same thing) | There is no difference between the two. Early versions of pytorch had .cuda() and .cpu() methods to move tensors and models from cpu to gpu and back. However, this made code writing a bit cumbersome: if cuda_available: x = x.cuda() model.cuda() else: x = x.cpu() model.cpu() Later versions introduced .to() that basically takes care of everything in an elegant way: device = torch.device('cuda') if cuda_available else torch.device('cpu') x = x.to(device) model = model.to(device) | 23 | 31 |
62,798,421 | 2020-7-8 | https://stackoverflow.com/questions/62798421/how-to-send-file-to-fastapi-endpoint-using-postman | I faced the difficulty of testing api using postman. Through swagger file upload functionality works correctly, I get a saved file on my hard disk. I would like to understand how to do this with Postman. I use the standard way to work with files which I use when working with Django and Flask: Body -> form-data: key=file, value=image.jpeg But with FastAPI, I get an error: 127.0.0.1:54294 - "POST /uploadfile/ HTTP/1.1" 422 Unprocessable Entity main.py @app.post("/uploadfile/") async def create_upload_file(file: UploadFile = File(...)): img = await file.read() if file.content_type not in ['image/jpeg', 'image/png']: raise HTTPException(status_code=406, detail="Please upload only .jpeg files") async with aiofiles.open(f"{file.filename}", "wb") as f: await f.write(img) return {"filename": file.filename} I also tried body -> binary: image.jpeg, but got the same result: | My code: from fastapi import FastAPI, UploadFile, File app = FastAPI() @app.post("/file/") async def create_upload_file(file: UploadFile = File(...)): return {"filename": file.filename} Setup in Postman: As stated in https://github.com/tiangolo/fastapi/issues/1653, the parameter name for the file is the key value that you have to use. Before you were using key=file and value=image.png (or whatever). Instead, FastAPI accepts file=image.png. Thus the error, since the file is necessary, but it is not present (at least, the key with that name is not present). P.S. I tested it with Postman v7.16.1 | 8 | 19 |
62,841,000 | 2020-7-10 | https://stackoverflow.com/questions/62841000/how-is-egg-used-in-pip-install-e | Trying to test editable installs out and I'm not sure how to interpret the results. I intentionally made a typo in the egg= portion but it was still able to locate the egg without any help from me: root@6be8ee41b6c9:/# pip3 install -e git+https://gitlab.com/jame/clientapp.git Could not detect requirement name for 'git+https://gitlab.com/jame/clientapp.git', please specify one with #egg=your_package_name root@6be8ee41b6c9:/# pip3 install -e git+https://gitlab.com/jame/clientapp.git#egg= Could not detect requirement name for 'git+https://gitlab.com/jame/clientapp.git#egg=', please specify one with #egg=your_package_name root@6be8ee41b6c9:/# pip3 install -e git+https://gitlab.com/jame/clientapp.git#egg=e Obtaining e from git+https://gitlab.com/jame/clientapp.git#egg=e Cloning https://gitlab.com/jame/clientapp.git to /src/e Running setup.py (path:/src/e/setup.py) egg_info for package e produced metadata for project name clientapp. Fix your #egg=e fragments. Installing collected packages: clientapp Found existing installation: ClientApp 0.7 Can't uninstall 'ClientApp'. No files were found to uninstall. Running setup.py develop for clientapp Successfully installed clientapp root@6be8ee41b6c9:/# pip3 freeze asn1crypto==0.24.0 -e git+https://gitlab.com/jame/clientapp.git@5158712c426ce74613215e61cab8c21c7064105c#egg=ClientApp cryptography==2.6.1 entrypoints==0.3 keyring==17.1.1 keyrings.alt==3.1.1 pycrypto==2.6.1 PyGObject==3.30.4 pyxdg==0.25 SecretStorage==2.3.1 six==1.12.0 So if I could mess the egg name up so badly, why is it considered an error to either leave it blank or set it to something empty? | This is outdated notation. Nowadays one should use the following notation whenever possible: python -m pip install 'ProjectName @ git+https://example.local/[email protected]' My guess is that the name matters if the project is a dependency of another project. For example, in a case where one wants to install A from PyPI and Z from git, but Z is a dependency of A: python -m pip install 'A' 'git+https://example.local/repository.git#egg=Z' or with the new notation: python -m pip install 'A' 'Z @ git+https://example.local/repository.git' | 9 | 5 |
62,889,093 | 2020-7-14 | https://stackoverflow.com/questions/62889093/what-does-no-build-isolation-do | I am trying to edit a python library and build it from source. Can someone explain what the following instruction does and how this method differs from a normal pip install package-name? pip install --verbose --no-build-isolation --editable | You can read all the usage options here: https://pip.pypa.io/en/stable/cli/pip_install/ -v, --verbose Give more output. Option is additive, and can be used up to 3 times. --no-build-isolation Disable isolation when building a modern source distribution. Build dependencies specified by PEP 518 must be already installed if this option is used. It means pip won't install the build dependencies for you, so any build dependencies (those declared in the project's pyproject.toml) have to be installed by yourself first, or the command will fail. -e, --editable <path/url> Install a project in editable mode (i.e. setuptools “develop mode”) from a local project path or a VCS url. Here you have to supply a path/URL argument to install from an external source. | 33 | 22 |
62,822,956 | 2020-7-9 | https://stackoverflow.com/questions/62822956/how-to-make-pip-install-to-path-on-linux | I installed PyInstaller via pip, but when I try to run it I get pyinstaller: command not found After installation of the package the following warning was displayed: WARNING: The scripts pyi-archive_viewer, pyi-bindepend, pyi-grab_version, pyi-makespec, pyi-set_version and pyinstaller are installed in '/home/kevinapetrei/.local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Do you know how can I make it so pip installs everything straight to PATH? | Rather than messing with existing entires in PATH, consider appending to it the location from pip. It usually is ~/.local/bin (consistent with systemd's file-hierarchy). Good place to add/modify environmental variables is ~/.profile file. You do it by adding the following line: export PATH="$HOME/.local/bin:$PATH" You may also consider ~/.pam_environment (but it has slightly different syntax). For those wondering about the actual answer to the question: It used to be somewhat possible in old versions of Python 3, but the option got patched away. | 11 | 28 |
62,884,503 | 2020-7-13 | https://stackoverflow.com/questions/62884503/what-are-the-best-practices-for-repr-with-collection-class-python | I have a custom Python class which essentially encapsulates a list of some kind of object, and I'm wondering how I should implement its __repr__ function. I'm tempted to go with the following: class MyCollection: def __init__(self, objects = []): self._objects = [] self._objects.extend(objects) def __repr__(self): return f"MyCollection({self._objects})" This has the advantage of producing a valid Python output which fully describes the class instance. However, in my real-world case, the object list can be rather large and each object may have a large repr by itself (they are arrays themselves). What are the best practices in such situations? Accept that the repr might often be a very long string? Are there potential issues related to this (debugger UI, etc.)? Should I implement some kind of shortening scheme using semicolon? If so, is there a good/standard way to achieve this? Or should I skip listing the collection's content altogether? | The official documentation outlines this as how you should handle __repr__: Called by the repr() built-in function to compute the “official” string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned. The return value must be a string object. If a class defines __repr__() but not __str__(), then __repr__() is also used when an “informal” string representation of instances of that class is required. This is typically used for debugging, so it is important that the representation is information-rich and unambiguous. Python 3 __repr__ Docs Lists, strings, sets, tuples and dictionaries all print out the entirety of their collection in their __repr__ method. Your current code looks to perfectly follow the example of what the documentation suggests. Though I would suggest changing your __init__ method so it looks more like this: class MyCollection: def __init__(self, objects=None): if objects is None: objects = [] self._objects = objects def __repr__(self): return f"MyCollection({self._objects})" You generally want to avoid using mutable objects as default arguments. Technically because of the way your method is implemented using extend (which makes a copy of the list), it will still work perfectly fine, but Python's documentation still suggests you avoid this. It is good programming practice to not use mutable objects as default values. Instead, use None as the default value and inside the function, check if the parameter is None and create a new list/dictionary/whatever if it is. https://docs.python.org/3/faq/programming.html#why-are-default-values-shared-between-objects If you're interested in how another library handles it differently, the repr for Numpy arrays only shows the first three items and the last three items when the array length is greater than 1,000. It also formats the items so they all use the same amount of space (In the example below, 1000 takes up four spaces so 0 has to be padded with three more spaces to match). >>> repr(np.array([i for i in range(1001)])) 'array([ 0, 1, 2, ..., 998, 999, 1000])' To mimic this numpy array style you could implement a __repr__ method like this in your class: class MyCollection: def __init__(self, objects=None): if objects is None: objects = [] self._objects = objects def __repr__(self): # If length is less than 1,000 return the full list. if len(self._objects) < 1000: return f"MyCollection({self._objects})" else: # Get the first and last three items items_to_display = self._objects[:3] + self._objects[-3:] # Find which item has the longest repr max_length_repr = max(items_to_display, key=lambda x: len(repr(x))) # Get the length of the item with the longest repr padding = len(repr(max_length_repr)) # Create a list of the reprs of each item and apply the padding values = [repr(item).rjust(padding) for item in items_to_display] # Insert the '...' in between the 3rd and 4th item values.insert(3, '...') # Convert the list to a string joined by commas array_as_string = ', '.join(values) return f"MyCollection([{array_as_string}])" >>> repr(MyCollection([1,2,3,4])) 'MyCollection([1, 2, 3, 4])' >>> repr(MyCollection([i for i in range(1001)])) 'MyCollection([ 0, 1, 2, ..., 998, 999, 1000])' | 14 | 39 |
62,897,548 | 2020-7-14 | https://stackoverflow.com/questions/62897548/why-am-i-getting-a-line-shadow-in-a-seaborn-line-plot | Here is the code: fig=plt.figure(figsize=(14,8)) sns.lineplot(x='season', y='team_strikerate', hue='batting_team', data=overall_batseason) plt.legend(title = 'Teams', loc = 1, fontsize = 12) plt.xlim([2008,2022]) And here is the image Just to let you know, I've already drawn another similar lineplot above this one. | There is a line shadow showing the confidence interval, because the dataset contains multiple y (team_strikerate) values for each x (season) value. By default, sns.lineplot() will estimate the mean by aggregating over the multiple y values at each x value. After aggregation, the mean of the y values at each x value will be plotted as a line. The line shadow represents the 95% confidence interval of that estimate. To remove the line shadow, you can pass the argument ci=None to sns.lineplot(). (credit to @JohanC for providing this idea in this question's comment) To change the confidence interval, you can pass the argument errorbar=('ci', <int>) to sns.lineplot(). | 10 | 14 |
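A short, self-contained sketch of the behaviour described in the answer above, using seaborn's built-in flights dataset (which also has several y values per x value); the exact keyword depends on the seaborn version, as noted in the comments:

import seaborn as sns
import matplotlib.pyplot as plt

flights = sns.load_dataset("flights")  # several passenger counts per year -> seaborn aggregates

# Default: mean line plus a shaded 95% confidence band
sns.lineplot(x="year", y="passengers", data=flights)

# Band removed (older seaborn accepts ci=None; newer versions use errorbar=None instead)
sns.lineplot(x="year", y="passengers", data=flights, ci=None)
plt.show()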
62,898,911 | 2020-7-14 | https://stackoverflow.com/questions/62898911/how-to-downgrade-python-version-from-3-8-to-3-7-mac | I'm using Python & okta-aws tools and in order to fetch correct credentials on aws I need to run okta-aws init. But got an error message of Could not read roles from Okta and the system prompted that"Your Pipfile requires python_version 3.7, but you are using 3.8.3 (/usr/local/Cellar/o/1.1.4/l/.venv/bin/python). I've tried to search all the Pipfiles on the mac and it seems that the Pipflie under my ~/Pipfile and /usr/local/Cellar/[email protected]/3.8.3_2/libexec/bin/Pipfile all have the same python version of 3.8, while the Pipfile under my /usr/local/Cellar/okta-aws-tools/1.1.4/libexec/Pipfile has required python_version = 3.7. I've been struggling with this for a while and really not sure how I can fix this. | Consider installing pyenv with Homebrew on macOS brew update brew install pyenv OR Clone the repository to get the latest version of pyenv git clone https://github.com/pyenv/pyenv.git ~/.pyenv Define your environment variables (For a recent MacOS you may want to replace ~/.bash_profile with ~/.zshrc as that is the default shell) echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile echo 'eval "$(pyenv init -)"' >> ~/.bash_profile source ~/.bash_profile Restart your shell so the path changes take effect exec "$SHELL" Verify the installation and check the available python versions pyenv install --list Install the required python version pyenv install 3.7 Set it as your global version after installation pyenv global 3.7 eval pyenv path eval "$(pyenv init --path)" Verify your current python version the system is using python3 --version | 58 | 204 |
62,884,183 | 2020-7-13 | https://stackoverflow.com/questions/62884183/trying-to-add-a-colorbar-to-a-seaborn-scatterplot | I'm a geology master's student working on my dissertation with a focus on the Sulfur Dioxide output of a number of volcanoes in the South Pacific. I have a little experience with R but my supervisor recommended python (JupyterLab specifically) for generating figures and data manipulation so I'm pretty new to programming and essentially teaching myself as I go. I'm trying to use earthquake data to generate some scatterplots using seaborn but I can't seem to get a color bar to show up in the legend for the earthquake magnitude. The code I'm using is below and I'll do my best to format it in a clear way. import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import matplotlib as mpl from scipy import stats import cartopy.crs as ccrs import cartopy.io.img_tiles as cimgt then the data sets I'm working with. These are the sets for Earthquake data. df = pd.read_csv('Vanuatu Earthquakes May18-May19.csv') df = pd.read_csv('Vanuatu Earthquakes May17-May18.csv') df = pd.read_csv('Vanuatu Earthquakes May19-Jul20.csv') and locations of the volcanoes, purely there for spatial reference. dg = pd.read_csv('Volcano coordinates.csv') Here's the main plot I'm trying to work with as it stands at the moment. So far I've been able to classify the earthquakes' magnitudes using the hue function but I don't like how it looks in the legend and want to convert it to a colorbar (or use a colorbar instead of hue, either/or), except I can't quite figure out how to do that. Alternatively, if there's a different function that would give me the results I'm looking for, I'm definitely open to that instead of a scatterplot. Also the black triangles are the volcanoes so those can be ignored for now. plt.figure(figsize=(5.5,9)) sns.scatterplot(x='longitude', y='latitude', data=df, marker='D', hue='mag', palette='colorblind', cmap='RdBu') sns.scatterplot(x='longitude', y='latitude', data=dg, marker='^', legend='brief', color='k', s=100) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title='Magnitude (Mw)') plt.xlabel('Longitude (degrees)') plt.ylabel('Latitude (degrees)') plt.title('Earthquake and Volcano Locations', size=15) plt.show() Hopefully that's clear enough but let me know if more info is needed! | The same method employed in this answer regarding Seaborn barplots can be applied to a scatterplot as well. With your code that would look something like this: # ... norm = plt.Normalize(df['mag'].min(), df['mag'].max()) sm = plt.cm.ScalarMappable(cmap="RdBu", norm=norm) sm.set_array([]) ax = sns.scatterplot(x='longitude', y='latitude', data=df, marker='D', palette='RdBu', hue='mag') sns.scatterplot(x='longitude', y='latitude', data=dg, marker='^', legend='brief', color='k', s=100, ax=ax) # Remove the legend and add a colorbar (optional) # ax.get_legend().remove() # ax.figure.colorbar(sm) # ... See this question and its answers for information on manipulating the labels and ticks of the color bar. For a complete example using the tips dataset: import seaborn as sns import matplotlib.pyplot as plt sns.set() tips = sns.load_dataset("tips") ax = sns.scatterplot(x="total_bill", y="tip", hue="size", palette='RdBu', data=tips) norm = plt.Normalize(tips['size'].min(), tips['size'].max()) sm = plt.cm.ScalarMappable(cmap="RdBu", norm=norm) sm.set_array([]) # Remove the legend and add a colorbar ax.get_legend().remove() ax.figure.colorbar(sm) plt.show() | 10 | 27 |
62,861,810 | 2020-7-12 | https://stackoverflow.com/questions/62861810/mypy-how-should-i-type-a-dict-that-has-strings-as-keys-and-the-values-can-be-ei | I am using Python 3.8.1 and mypy 0.782. I don't understand why mypy complains about the following code: from typing import Union, List, Dict Mytype = Union[Dict[str, str], Dict[str, List[str]]] s: Mytype = {"x": "y", "a": ["b"]} Mypy gives the following error on line 3: Incompatible types in assignment (expression has type "Dict[str, Sequence[str]]", variable has type "Union[Dict[str, str], Dict[str, List[str]]]") If I change the last line to s: Mytype = {"a": ["b"]} mypy doesn't complain. However, when adding yet one more line s["a"].append("c") leads to an error: error: Item "str" of "Union[str, List[str]]" has no attribute "append" How can the above mentioned be explained? How should I type a dict that has strings as keys and the values can be either strings or lists of strings? Found this: https://github.com/python/mypy/issues/2984#issuecomment-285716826 but still not completely sure why the above mentioned happens and how should I fix it. EDIT: Although it's still not clear why the suggested modification Mytype = Dict[str, Union[str, List[str]]] does not resolve the error with s['a'].append('c') I think the TypeDict approach suggested in the comments as well as in https://stackoverflow.com/a/62862029/692695 is the way to go, so marking that approach as a solution. See similar question at: Indicating multiple value in a Dict[] for type hints, suggested in the comments by Georgy. | Because s: Mytype cannot have type Dict[str, str] and type Dict[str, List[str]] at the same time. You could do what you want like this: Mytype = Dict[str, Union[str, List[str]]] But maybe problems, because Dict is invariant Also you could use TypedDict, but only a fixed set of string keys is expected: from typing import List, TypedDict MyType = TypedDict('MyType', {'x': str, 'a': List[str]}) s: MyType = {"x": "y", "a": ["b"]} s['a'].append('c') NOTE: Unless you are on Python 3.8 or newer (where TypedDict is available in standard library typing module) you need to install typing_extensions using pip to use TypedDict And, of course, you can use Any: Mytype = Dict[str, Any] | 15 | 17 |
62,798,739 | 2020-7-8 | https://stackoverflow.com/questions/62798739/how-to-update-the-elasticsearch-document-with-python | I am using the code below to add data to Elasticsearch: from elasticsearch import Elasticsearch es = Elasticsearch() es.cluster.health() records = [ {'Name': 'Dr. Christopher DeSimone', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Tajwar Aamir (Aamir)', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'} ] es.indices.create(index='my-index_1', ignore=400) for record in records: # es.indices.update(index="my-index_1", body=record) es.index(index="my-index_1", body=record) # Retrieve the data es.search(index='my-index_1')['hits']['hits'] But how do I update the document? records = [ {'Name': 'Dr. Messi', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Christiano', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'} ] Here Dr. Messi and Dr. Christiano have to be added to the index, and Dr. Bernard M. Aaron should not be updated as it is already present in the index. | In Elasticsearch, when data is indexed without providing a custom ID, then a new ID will be created by Elasticsearch for every document you index. Hence, since you are not providing an ID, Elasticsearch generates it automatically. But you also want to check if Name already exists. There are two approaches: Index the data without passing an _id for every document. After this you will have to search using the Name field to see if the document exists. Index the data with your own _id for each document. Then search with _id. I'm going to demonstrate the second approach of creating our own IDs. Since you are searching on the Name field, I'll hash it using MD5 to generate the _id. (Any hash function could work.) First Indexing Data: import hashlib from elasticsearch import Elasticsearch es = Elasticsearch() es.cluster.health() records = [ {'Name': 'Dr. Christopher DeSimone', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Tajwar Aamir (Aamir)', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'} ] index_name="my-index_1" es.indices.create(index=index_name, ignore=400) for record in records: #es.indices.update(index="my-index_1", body=record) es.index(index=index_name, body=record,id=hashlib.md5(record['Name'].encode()).hexdigest()) Output: [{'_index': 'my-index_1', '_type': '_doc', '_id': '1164c423bc4e2fcb75697c3031af9ef1', '_score': 1.0, '_source': {'Name': 'Dr. Christopher DeSimone', 'Specialised and Location': 'Health'}}, {'_index': 'my-index_1', '_type': '_doc', '_id': '672ae14197a135c39eab759be8b0597f', '_score': 1.0, '_source': {'Name': 'Dr. Tajwar Aamir (Aamir)', 'Specialised and Location': 'Health'}}, {'_index': 'my-index_1', '_type': '_doc', '_id': '85702447f9e9ea010054eaf0555ce79c', '_score': 1.0, '_source': {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'}}] Next Step: Indexing new data records = [ {'Name': 'Dr. Messi', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Christiano', 'Specialised and Location': 'Health'}, {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'}] from elasticsearch.exceptions import NotFoundError for record in records: try: es.get(index=index_name, id=hashlib.md5(record['Name'].encode()).hexdigest()) except NotFoundError: print("Record Not found") es.index(index=index_name, body=record,id=hashlib.md5(record['Name'].encode()).hexdigest()) Output: [{'_index': 'my-index_1', '_type': '_doc', '_id': '1164c423bc4e2fcb75697c3031af9ef1', '_score': 1.0, '_source': {'Name': 'Dr. Christopher DeSimone', 'Specialised and Location': 'Health'}}, {'_index': 'my-index_1', '_type': '_doc', '_id': '672ae14197a135c39eab759be8b0597f', '_score': 1.0, '_source': {'Name': 'Dr. Tajwar Aamir (Aamir)', 'Specialised and Location': 'Health'}}, {'_index': 'my-index_1', '_type': '_doc', '_id': '85702447f9e9ea010054eaf0555ce79c', '_score': 1.0, '_source': {'Name': 'Dr. Bernard M. Aaron', 'Specialised and Location': 'Health'}}, {'_index': 'my-index_1', '_type': '_doc', '_id': 'e2e0f463145568471097ff027b18b40d', '_score': 1.0, '_source': {'Name': 'Dr. Messi', 'Specialised and Location': 'Health'}}, {'_index': 'my-index_1', '_type': '_doc', '_id': '23bb4f1a3a41efe7f4cab8a80d766708', '_score': 1.0, '_source': {'Name': 'Dr. Christiano', 'Specialised and Location': 'Health'}}] As you can see, the Dr. Bernard M. Aaron record is not re-indexed as it's already present | 9 | 5 |
62,830,862 | 2020-7-10 | https://stackoverflow.com/questions/62830862/how-to-install-python3-8-on-debian-10 | i've installed debian 10.0.4 yesterday on my pc. it had python version 3.7.3 installed on it , so i tried to update it to version 3.8.3 and now i have version 3.8.3 installed but when i try to install pip using the official get-pip.py it throws an exception . the details is : Traceback (most recent call last): File "<frozen zipimport>", line 520, in _get_decompress_func ModuleNotFoundError: No module named 'zlib' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<frozen zipimport>", line 520, in _get_decompress_func ModuleNotFoundError: No module named 'zlib' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<frozen zipimport>", line 568, in _get_data File "<frozen zipimport>", line 523, in _get_decompress_func zipimport.ZipImportError: can't decompress data; zlib not available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "get-pip.py", line 23484, in <module> main() File "get-pip.py", line 198, in main bootstrap(tmpdir=tmpdir) File "get-pip.py", line 82, in bootstrap from pip._internal.cli.main import main as pip_entry_point File "<frozen zipimport>", line 241, in load_module File "<frozen zipimport>", line 709, in _get_module_code File "<frozen zipimport>", line 570, in _get_data zipimport.ZipImportError: can't decompress data; zlib not available i must mention that the python (python2.7) and pip for python 2.7 is working , and i tried to reinstall python using source compilation and i got another error while installing it (zlib error) | Installing Python 3.8 on Debian 10 Building Python 3.8 on Debian is a relatively straightforward process and will only take a few minutes. Start by installing the packages necessary to build Python source: sudo apt update sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libsqlite3-dev libreadline-dev libffi-dev curl libbz2-dev liblzma-dev Download the latest release’s source code from the Python download page with wget or curl. At the time of writing this article, the latest release is 3.8.2: curl -O https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tar.xz When the download is complete, extract the tarball: tar -xf Python-3.8.2.tar.xz Navigate to the Python source directory and run the configure script: cd Python-3.8.2 ./configure --enable-optimizations --enable-loadable-sqlite-extensions The script performs a number of checks to make sure all of the dependencies on your system are present. The --enable-optimizations option will optimize the Python binary by running multiple tests, which will make the build process slower. Run make to start the build process: make -j 4 Modify the -j to correspond to the number of cores in your processor. You can find the number by typing nproc. Once the build is done, install the Python binaries by running the following command as a user with sudo access: sudo make altinstall Do not use the standard make install as it will overwrite the default system python3 binary. At this point, Python 3.8 is installed on your Debian system and ready to be used. You can verify it by typing: python3.8 --version Python 3.8.2 source: https://linuxize.com/post/how-to-install-python-3-8-on-debian-10/ | 16 | 36 |
62,809,562 | 2020-7-9 | https://stackoverflow.com/questions/62809562/how-do-i-annotate-a-callable-with-args-and-kwargs | I have a function which returns a function. I would like to find a proper type annotation. However, the returned function has *args and *kwargs. How is that annotated within Callable[[Parameters???], ReturnType]? Example: from typing import Callable import io import pandas as pd def get_conversion_function(file_type: str) -> Callable[[io.BytesIO, TODO], pd.DataFrame]: def to_csv(bytes_, *args, **kwargs): return pd.read_csv(bytes_, **kwargs) if file_type == "csv": return to_csv | As I know, python's typing does not allow do that straightforwardly as stated in the docs of typing.Callable: There is no syntax to indicate optional or keyword arguments; such function types are rarely used as callback types. Callable[..., ReturnType] (literal ellipsis) can be used to type hint a callable taking any number of arguments and returning ReturnType. But you could use mypy extensions like this: from typing import Callable from mypy_extensions import Arg, VarArg, KwArg def foo(a: str, *args: int, **kwargs: float) -> str: return 'Hello, {}'.format(a) def bar() -> Callable[[Arg(str, 'a'), VarArg(int), KwArg(float)], str]: return foo | 26 | 13 |
62,821,480 | 2020-7-9 | https://stackoverflow.com/questions/62821480/add-a-trace-to-every-facet-of-a-plotly-figure | I'd like to add a trace to all facets of a plotly plot. For example, I'd like to add a reference line to each daily facet of a scatterplot of the "tips" dataset showing a 15% tip. However, my attempt below only adds the line to the first facet. import plotly.express as px import plotly.graph_objects as go import numpy as np df = px.data.tips() ref_line_slope = 0.15 # 15% tip for reference ref_line_x_range = np.array([df.total_bill.min(), df.total_bill.max()]) fig = px.scatter(df, x="total_bill", y="tip",facet_col="day", trendline='ols') fig = fig.add_trace(go.Scatter(x=reference_line_x_range,y=ref_line_slope*reference_line_x_range,name='15%')) fig.show() | According to an example from plotly you can pass 'all' as the row and col arguments and even skip empty subplots: fig.add_trace(go.Scatter(...), row='all', col='all', exclude_empty_subplots=True) | 7 | 9 |
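A sketch of how the row='all' pattern from the answer would apply to the question's 15% reference line, assuming a recent plotly version that supports these arguments; variable names are taken from the question and the trendline is omitted to keep the example dependency-free:

import numpy as np
import plotly.express as px
import plotly.graph_objects as go

df = px.data.tips()
ref_line_slope = 0.15  # 15% tip for reference
ref_line_x_range = np.array([df.total_bill.min(), df.total_bill.max()])

fig = px.scatter(df, x="total_bill", y="tip", facet_col="day")
# One call adds the same reference line to every facet.
fig.add_trace(
    go.Scatter(x=ref_line_x_range, y=ref_line_slope * ref_line_x_range,
               mode="lines", name="15%"),
    row="all", col="all", exclude_empty_subplots=True,
)
fig.show()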
62,856,818 | 2020-7-12 | https://stackoverflow.com/questions/62856818/how-can-i-run-the-fastapi-server-using-pycharm | I have a simple API function as below, from fastapi import FastAPI app = FastAPI() @app.get("/") async def read_root(): return {"Hello": "World"} I am starting the server using uvicorn command as, uvicorn main:app Since we are not calling any python file directly, it is not possible to call uvicorn command from Pycharm. So, How can I run the fast-api server using Pycharm? | Method-1: Run FastAPI by calling uvicorn.run(...) In this case, your minimal code will be as follows, # main.py import uvicorn from fastapi import FastAPI app = FastAPI() @app.get("/") async def read_root(): return {"Hello": "World"} if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000) Normally, you'll start the server by running the following command, python main.py Pycharm Setup For this setup, and now, you can set the script path in Pycharm's config Notes Script Path: path to the FastAPI script Python Interpreter: Choose your interpreter/virtual environment Working Directory: Your FastAPI project root Method-2: Run FastAPI by calling uvicorn command In this case, your minimal code will be as follows, # main.py from fastapi import FastAPI app = FastAPI() @app.get("/") async def read_root(): return {"Hello": "World"} Normally, you'll start the server by running the following command, uvicorn main:app --reload Pycharm Setup For this setup, and now, you can set the script path in Pycharm's config Notes Module name: set to uvicorn [Optional] Script: Path to uvicorn binary. You will get the path by executing the command, which uvicorn , inside your environment. (See this image) Parameters: The actual parameters of uvicorn command Python Interpreter: Choose your interpreter/virtual environment Working Directory: Your FastAPI project root | 103 | 219 |
62,794,219 | 2020-7-8 | https://stackoverflow.com/questions/62794219/tensorflow-gpu-not-showing-in-jupyter-notebook | In the terminal on Windows 10, using CUDA 10.1, Python 3.7.7, GPU GeForce GTX 1050 4GB: >>> import tensorflow as tf 2020-07-08 17:10:50.005569: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll >>> tf.config.list_physical_devices('GPU') 2020-07-08 17:10:55.657489: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1 coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s 2020-07-08 17:10:55.701387: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] In jupyter notebook in [1]: import tensorflow as tf tf.config.list_physical_devices('GPU') out[1]: [] in [1]: tf.__version__ out[1]: '2.2.0' | You need to create a new kernel for this environment and then select that kernel from Jupyter Notebook: $ conda activate env_name $ pip install ipykernel --user $ python -m ipykernel install --user --name env_name --display-name env_name | 8 | 1 |
62,801,562 | 2020-7-8 | https://stackoverflow.com/questions/62801562/pandas-explode-multiple-columns | I have DF that has multiple columns. Two of the columns are list of the same len.( col2 and col3 are list. the len of the list is the same). My goal is to list each element on it's own row. I can use the df.explode(). but it only accepts one column. However, I want the pair of the two columns to be 'exploded'. If I do df.explode('col2') and then df.explode('col3'), it results it 9 rows instead of 3. Original DF col0 col1 col2 col3 1 aa [1,2,3] [1.1,2.2,3.3] 2 bb [4,5,6] [4.4,5.5,6.6] 3 cc [7,8,9] [7.7,8.8,9.9] 3 cc [7,8,9] [7.7,8.8,9.9] End DataFrame id col1 col2 col3 1 aa 1 1.1 1 aa 2 2.2 1 aa 3 3.3 2 bb 4 4.4 2 bb 5 5.5 2 bb 6 6.6 3 cc ... ... Update None of the column have unique values, so can't be used as index. | You could set col1 as index and apply pd.Series.explode across the columns: df.set_index('col1').apply(pd.Series.explode).reset_index() Or: df.apply(pd.Series.explode) col1 col2 col3 0 aa 1 1.1 1 aa 2 2.2 2 aa 3 3.3 3 bb 4 4.4 4 bb 5 5.5 5 bb 6 6.6 6 cc 7 7.7 7 cc 8 8.8 8 cc 9 9.9 9 cc 7 7.7 10 cc 8 8.8 11 cc 9 9.9 | 21 | 20 |
62,870,656 | 2020-7-13 | https://stackoverflow.com/questions/62870656/file-system-scheme-local-not-implemented-in-google-colab-tpu | I am using TPU runtime in Google Colab, but having problems in reading files (not sure). I initialized TPU using: import tensorflow as tf import os import tensorflow_datasets as tfds resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR']) tf.config.experimental_connect_to_cluster(resolver) # This is the TPU initialization code that has to be at the beginning. tf.tpu.experimental.initialize_tpu_system(resolver) print("All devices: ", tf.config.list_logical_devices('TPU')) I have many images in a folder in Google Colab storage ( e.g. '/content/train2017/000000000009.jpg'). I run the following code: import tensorflow as tf def load_image(image_path): img = tf.io.read_file(image_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize(img, (299, 299)) img = tf.keras.applications.inception_v3.preprocess_input(img) return img, image_path load_image('/content/train2017/000000000009.jpg') But, I am getting the following error: --------------------------------------------------------------------------- UnimplementedError Traceback (most recent call last) <ipython-input-33-a7fbb45f3b76> in <module>() ----> 1 load_image('/content/train2017/000000000009.jpg') 5 frames <ipython-input-7-862c73d29b96> in load_image(image_path) 2 img = tf.io.read_file(image_path) 3 img = tf.image.decode_jpeg(img, channels=3) ----> 4 img = tf.image.resize(img, (299, 299)) 5 img = tf.keras.applications.inception_v3.preprocess_input(img) 6 return img, image_path /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/image_ops_impl.py in resize_images_v2(images, size, method, preserve_aspect_ratio, antialias, name) 1515 preserve_aspect_ratio=preserve_aspect_ratio, 1516 name=name, -> 1517 skip_resize_if_same=False) 1518 1519 /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/image_ops_impl.py in _resize_images_common(images, resizer_fn, size, preserve_aspect_ratio, name, skip_resize_if_same) 1183 with ops.name_scope(name, 'resize', [images, size]): 1184 images = ops.convert_to_tensor(images, name='images') -> 1185 if images.get_shape().ndims is None: 1186 raise ValueError('\'images\' contains no shape.') 1187 # TODO(shlens): Migrate this functionality to the underlying Op's. /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in get_shape(self) 1071 def get_shape(self): 1072 """Alias of Tensor.shape.""" -> 1073 return self.shape 1074 1075 def _shape_as_list(self): /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in shape(self) 1065 self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple()) 1066 except core._NotOkStatusException as e: -> 1067 six.raise_from(core._status_to_exception(e.code, e.message), None) 1068 1069 return self._tensor_shape /usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value) UnimplementedError: File system scheme '[local]' not implemented (file: '/content/train2017/000000000009.jpg') How should I solve it? I found something like a gs bucket, but it is paid. Is there any other way to solve this? | For loading files from the local file system when using a TPU, read them with a normal Python file.read() (not tf.io). In your case: def load_image(image_path): with open(image_path, "rb") as local_file: # <= change here img = local_file.read() img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize(img, (299, 299)) img = tf.keras.applications.inception_v3.preprocess_input(img) return img, image_path load_image('/content/train2017/000000000009.jpg') | 15 | 4 |
62,815,318 | 2020-7-9 | https://stackoverflow.com/questions/62815318/get-current-jupyter-lab-notebook-name-for-jupyter-lab-version-2-1-and-3-0-1-and | Problem Hi all, As my title suggested it, I would like to get access to the notebook name (in jupyter-lab) as a variable. So I could reuse it in the notebook itself (for example to name some figure files generated in the notebook). I saw that a similar issue was opened years ago [see here]. However I didnt find a satisfactory answer. I like the simplicity of the answer suggested by @bill: import ipyparams currentNotebook = ipyparams.notebook_name However, it doesn't work for me. I got this warning the first time I execute the first cell: import ipyparams Javascript Error: Jupyter is not defined currentNotebook = ipyparams.notebook_name currentNotebook '' Then if I rerun the cell again, I don't have the warning message anymore but the variable currentNotebook is still empty. (I run the cell sequentially, I didn't do a 'Run All Cells'). Configuration details My Jupyter version is jupyter notebook --version 6.0.3] jupyter-lab --version 2.1.1 I am using my notebook mostly for python code. Edit 27/01/2021 @juan solution [here], using ipynbname is working for jupyter-notebook : 6.1.6 jupyter lab : 2.2.6 but this solution is still not working for jupyter lab : 3.0.1 Edit 28/01/2021 ipynbname is now working for jupyter 3 More details about it [here] | As an alternative you can use the following library: ipynbname #! pip install ipynbname import ipynbname nb_fname = ipynbname.name() nb_path = ipynbname.path() This worked for me and the solution is quite straightforward. | 8 | 21 |
62,814,861 | 2020-7-9 | https://stackoverflow.com/questions/62814861/difference-between-time-and-time-in-jupyter-notebook | What is the difference between %time and %%time in a Jupyter Notebook Cell? | %time measures execution time of the next line. %%time measures execution time of the whole cell. For instance: %time a = 1 time.sleep(5) CPU times: user 8 µs, sys: 0 ns, total: 8 µs Wall time: 16.9 µs %%time a = 1 time.sleep(5) CPU times: user 1.13 ms, sys: 2.11 ms, total: 3.24 ms Wall time: 5 s | 12 | 23 |
62,839,068 | 2020-7-10 | https://stackoverflow.com/questions/62839068/memoryerror-unable-to-allocate-mib-for-an-array-with-shape-and-data-type-when | Getting this memory error. But the book/link I am following doesn't get this error. A part of the code: from sklearn.linear_model import SGDClassifier sgd_clf = SGDClassifier() sgd_clf.fit(x_train, y_train) Error: MemoryError: Unable to allocate 359. MiB for an array with shape (60000, 784) and data type float64 I also get this error when I try to scale the data using StandardScaler's fit_transform. But both work fine if I decrease the size of the training set (something like: x_train[:1000], y_train[:1000]) Link for the code in the book here. The error I get is in Line 60 and 63 (In [60] and In [63]) The book : Aurélien Géron - Hands-On Machine Learning with Scikit-Learn Keras and Tensorflow 2nd Ed (Page : 149 / 1130) So here's my question: Does this have anything to do with my RAM? And what does "Unable to allocate 359" mean? Is it the memory size? Just in case, my specs: CPU - Ryzen 2400G, RAM - 8 GB (3.1 GB is free when using Jupyter Notebook) | Upgrading to 64-bit Python seems to have solved the "MemoryError" problem. | 16 | 2 |
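A quick check of whether the interpreter is the 32-bit build that the answer above points to (a 32-bit process can typically address only about 2–4 GB, so a 359 MiB allocation can fail once memory is fragmented); this uses only the standard library:

import platform
import struct

print(platform.architecture()[0])   # e.g. '32bit' or '64bit'
print(struct.calcsize("P") * 8)     # pointer size in bits: 32 or 64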