question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
63,430,672 | 2020-8-15 | https://stackoverflow.com/questions/63430672/how-to-make-pytest-ignore-a-class-named-testsomething | I am working on a kind of test framework that happens to have classes named TestSomething, of course. And I realized my tests are failing because pytest sees these classes as "something I need to instantiate and run!" as soon as they are imported. And this absolutely won't work. import pytest from package import TestSomethingClass Is it possible to import such a class from a pytest test file directly? Or should I indirectly use those via a fixture maybe? The same problem applies to exceptions, as I would need to do something like with pytest.raises(TestSomethingError): | Explicitly Disable You can make pytest ignore this specific class, even though its name starts with the word Test, by setting the __test__ flag to False within your conflicting class: class TestSomethingClass(object): __test__ = False def test_class_something(self, object): pass feature: https://github.com/pytest-dev/pytest/pull/1561 Configuration File We could also change the convention completely in your pytest.ini file and ignore all class names which start with the word Example. But I don't advise this, as we could potentially encounter unintended consequences down the road. #pytest.ini file [pytest] python_classes = !Example Pytest Conventions test prefixed test functions or methods outside of class test prefixed test functions or methods inside Test prefixed test classes (without an __init__ method) Source: https://docs.pytest.org/en/stable/customize.html | 9 | 12 |
63,427,771 | 2020-8-15 | https://stackoverflow.com/questions/63427771/extracting-intermediate-layer-outputs-of-a-cnn-in-pytorch | I am using a Resnet18 model. ResNet( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer2): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer3): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer4): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) (fc): Linear(in_features=512, out_features=1000, bias=True) ) I want to extract the outputs only from layer2, layer3, layer4 & I don't want the avgpool and fc outputs. How do I achieve this ? class BasicBlock(nn.Module): def __init__(self, in_channels, out_channels, stride=1, padding=1) -> None: super(BasicBlock, self).__init__() self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride, padding=padding, bias=False) self.bn1 = nn.BatchNorm2d(out_channels) self.relu = nn.ReLU(inplace=True) self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(out_channels) if in_channels != out_channels: l1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False) l2 = nn.BatchNorm2d(out_channels) self.downsample = nn.Sequential(l1, l2) else: self.downsample = None def forward(self, xb): prev = xb x = self.relu(self.bn1(self.conv1(xb))) x = self.bn2(self.conv2(x)) if self.downsample is not None: prev = self.downsample(xb) x = x + prev return self.relu(x) class CustomResnet(nn.Module): def __init__(self, pretrained:bool=True) -> None: super(CustomResnet, self).__init__() self.conv1 = nn.Conv2d(3, 64, kernel_size=7,stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = nn.Sequential(BasicBlock( 64, 64, stride=1), BasicBlock(64, 64)) self.layer2 = nn.Sequential(BasicBlock(64, 128, stride=2), BasicBlock(128, 128)) self.layer3 = nn.Sequential(BasicBlock(128, 256, stride=2), BasicBlock(256, 256)) self.layer4 = nn.Sequential(BasicBlock(256, 512, stride=2), BasicBlock(512, 512)) def forward(self, xb): x = self.maxpool(self.relu(self.bn1(self.conv1(xb)))) x = self.layer1(x) x2 = x = self.layer2(x) x3 = x = self.layer3(x) x4 = x = self.layer4(x) return [x2, x3, x4] I guess one solution would be this .. But is there any other way without writing this while lot of code? Also is it possible to load in the pre-trained weights given by torchvision in the above modified ResNet model. | If you know how the forward method is implemented, then you can subclass the model, and override the forward method only. If you are using the pre-trained weights of a model in PyTorch, then you already have access to the code of the model. So, find where the code of the model is, import it, subclass the model, and override the forward method. For example: class MyResNet18(Resnet): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def forward(self, xb): x = self.maxpool(self.relu(self.bn1(self.conv1(xb)))) x = self.layer1(x) x2 = x = self.layer2(x) x3 = x = self.layer3(x) x4 = x = self.layer4(x) return [x2, x3, x4] and you are done. | 8 | 13 |
63,426,545 | 2020-8-15 | https://stackoverflow.com/questions/63426545/best-way-of-tqdm-for-data-loader | How do I use tqdm for data_loader? Is this the correct way? for i,j in enumerate(data_loader,total = 100): pass | You need to wrap the iterable with tqdm, as their documentation clearly says: Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you’re done! If you're enumerating over an iterable, you can do something like the following. Sleep is only for visualizing it. from tqdm import tqdm from time import sleep data_loader = list(range(1000)) for i, j in enumerate(tqdm(data_loader)): sleep(0.01) | 27 | 34 |
63,420,889 | 2020-8-14 | https://stackoverflow.com/questions/63420889/fastapi-pydantic-circular-references-in-separate-files | I would love to use a schema that looks something like the following in FastAPI: from __future__ import annotations from typing import List from pydantic import BaseModel class Project(BaseModel): members: List[User] class User(BaseModel): projects: List[Project] Project.update_forward_refs() but in order to keep my project structure clean, I would ofc. like to define these in separate files. How could I do this without creating a circular reference? With the code above the schema generation in FastAPI works fine, I just dont know how to separate it out into separate files. In a later step I would then instead of using attributes use @propertys to define the getters for these objects in subclasses of them. But for the OpenAPI doc generation, I need this combined - I think. | There are three cases when circular dependency may work in Python: Top of module: import package.module Bottom of module: from package.module import attribute Top of function: works both In your situation, the second case "bottom of module" will help. Because you need to use update_forward_refs function to resolve pydantic postponed annotations like this: # project.py from typing import List from pydantic import BaseModel class Project(BaseModel): members: "List[User]" from user import User Project.update_forward_refs() # user.py from typing import List from pydantic import BaseModel class User(BaseModel): projects: "List[Project]" from project import Project User.update_forward_refs() Nonetheless, I would strongly discourage you from intentionally introducing circular dependencies | 28 | 25 |
63,424,180 | 2020-8-15 | https://stackoverflow.com/questions/63424180/bs4-replace-with-result-is-no-longer-in-tree | I need to replace multiple words in a html document. Atm I am doing this by calling replace_with once for each replacement. Calling replace_with twice on a NavigableString leads to a ValueError (see example below) cause the replaced element is no longer in the tree. Minimal example #!/usr/bin/env python3 from bs4 import BeautifulSoup import re def test1(): html = \ ''' Identify ''' soup = BeautifulSoup(html,features="html.parser") for txt in soup.findAll(text=True): if re.search('identify',txt,re.I) and txt.parent.name != 'a': newtext = re.sub('identify', '<a href="test.html"> test </a>', txt.lower()) txt.replace_with(BeautifulSoup(newtext, features="html.parser")) txt.replace_with(BeautifulSoup(newtext, features="html.parser")) # I called it twice here to make the code as small as possible. # Usually it would be a different newtext .. # which was created using the replaced txt looking for a different word to replace. return soup print(test1()) Expected Result: The txt is == newstring Result: ValueError: Cannot replace one element with another when the element to be replaced is not part of the tree. An easy solution would be just to tinker around with the newstring and only replacing all at once in the end, but I would like to understand the current phenomenon. | The first txt.replace_with(...) removes NavigableString (here stored in variable txt) from the document tree (doc). This effectively sets txt.parent to None The second txt.replace_with(...) looks at parent property, finds None (because txt is already removed from tree) and throws an ValueError. As you said at the end of your question, one the solution can be to use .replace_with() only once: import re from bs4 import BeautifulSoup def test1(): html = \ ''' word1 word2 word3 word4 ''' soup = BeautifulSoup(html,features="html.parser") to_delete = [] for txt in soup.findAll(text=True): if re.search('word1', txt, flags=re.I) and txt.parent.name != 'a': newtext = re.sub('word1', '<a href="test.html"> test1 </a>', txt.lower()) # ...some computations newtext = re.sub('word3', '<a href="test.html"> test2 </a>', newtext) # ...some more computations # and at the end, replce txt only once: txt.replace_with(BeautifulSoup(newtext, features="html.parser")) return soup print(test1()) Prints: <a href="test.html"> test1 </a> word2 <a href="test.html"> test2 </a> word4 | 7 | 5 |
63,422,389 | 2020-8-15 | https://stackoverflow.com/questions/63422389/is-there-a-way-to-download-video-while-keeping-their-chapters-metadata | I've used many video downloaders before: atube catcher, 4k downloader, jDownloader, and currently using youtube-dl. I can't download videos, this for example, while still keeping their online chapters intact, like part1 is "intro" lasting from 00:00 to 00:45 and so on. So far I tried these parameters with youtube-dl Filesystem --write-annotations --write-description --write-info-json Thumbnail images --write-all-thumbnails Video format -f 'bestvideo[height<=720]+bestaudio/best[height<=720]/worst' --merge-output-format mp4 Post-processing --add-metadata --embed-subs --embed-thumbnail Also tried requesting the mkv video format (thought maybe it was built into it) didn't help tho. I know these options don't really say anything about sections but I'm trying to get as much metadata as I can | The information you want is called chapters in the youtube-dl info JSON. There is a recent open pull request for youtube-dl that fixes a problem with this information. In the current release of youtube-dl, if you use the ---write-info-json or --dump-json you will see that the chapters information is null ("chapters": null). You can use the code in the fork repository to be able to obtain the information you want. Follow these steps: Clone this repository: git clone https://github.com/gschizas/youtube-dl.git Change to the repository directory: cd youtube-dl/ Checkout the pull request branch: git checkout bugfix/youtube/chapters-fix-extractor Run youtube-dl from the current location: python -m youtube_dl --write-info-json https://youtu.be/LnO42jxJaC4 You will see information like this in the info JSON: "chapters": [ { "start_time": 0.0, "end_time": 46.0, "title": "Intro" }, { "start_time": 46.0, "end_time": 72.0, "title": "QOTD" }, ... ] Hopefully the fix will be accepted into the youtube-dl repository and included in future releases, so there will be no need to clone any repository. | 7 | 8 |
63,370,701 | 2020-8-12 | https://stackoverflow.com/questions/63370701/snowflake-pandas-pd-writer-writes-out-tables-with-nulls | I have a Pandas dataframe that I'm writing out to Snowflake using SQLAlchemy engine and the to_sql function. It works fine, but I have to use the chunksize option because of some Snowflake limit. This is also fine for smaller dataframes. However, some dataframes are 500k+ rows, and at a 15k records per chunk, it takes forever to complete writing to Snowflake. I did some research and came across the pd_writer method provided by Snowflake, which apparently loads the dataframe much faster. My Python script does complete faster and I see it creates a table with all the right columns and the right row count, but every single column's value in every single row is NULL. I thought it was a NaN to NULL issue and tried everything possible to replace the NaNs with None, and while it does the replacement within the dataframe, by the time it gets to the table, everything becomes NULL. How can I use pd_writer to get these huge dataframes written properly into Snowflake? Are there any viable alternatives? EDIT: Following Chris' answer, I decided to try with the official example. Here's my code and the result set: import os import pandas as pd from snowflake.sqlalchemy import URL from sqlalchemy import create_engine from snowflake.connector.pandas_tools import write_pandas, pd_writer def create_db_engine(db_name, schema_name): return create_engine( URL( account=os.environ.get("DB_ACCOUNT"), user=os.environ.get("DB_USERNAME"), password=os.environ.get("DB_PASSWORD"), database=db_name, schema=schema_name, warehouse=os.environ.get("DB_WAREHOUSE"), role=os.environ.get("DB_ROLE"), ) ) def create_table(out_df, table_name, idx=False): engine = create_db_engine("dummy_db", "dummy_schema") connection = engine.connect() try: out_df.to_sql( table_name, connection, if_exists="append", index=idx, method=pd_writer ) except ConnectionError: print("Unable to connect to database!") finally: connection.close() engine.dispose() return True df = pd.DataFrame([("Mark", 10), ("Luke", 20)], columns=["name", "balance"]) print(df.head) create_table(df, "dummy_demo_table") The code works fine with no hitches, but when I look at the table, which gets created, it's all NULLs. Again. | Turns out, the documentation (arguably, Snowflake's weakest point) is out of sync with reality. This is the real issue: https://github.com/snowflakedb/snowflake-connector-python/issues/329. All it needs is a single character in the column name to be upper case and it works perfectly. My workaround is to simply do: df.columns = map(str.upper, df.columns) before invoking to_sql. | 9 | 21 |
63,412,583 | 2020-8-14 | https://stackoverflow.com/questions/63412583/nswindow-drag-regions-should-only-be-invalidated-on-the-main-thread-this-will-t | I am writing a Python program with two threads. One displays a GUI and the other gets input from a scanner and saves data in an online database. The code works fine on my raspberry pi but if I try it on my MacBook Pro (Catalina 10.15.2), I get the above mentioned warning followed by my code crashing. Does anyone have an idea how to get it working or what causes the problem? | You likely use different Python versions. Your Python on your Raspberry PI still allows invalidating NSWindow drag regions outside the Main thread, while your Python in your MacBook Pro already stopped supporting this. You will likely need to refactor your code so that NSWindow drag regions will only be invalidated on the Main thread. You need to localize where NSWindow drag regions are invalidated and make sure that those happen in the Main thread. EDIT The asker explained that according to his/her findings, NSWindow drag regions only apply to Mac. | 13 | 5 |
63,412,782 | 2020-8-14 | https://stackoverflow.com/questions/63412782/pandas-dataframe-filling-missing-values-in-a-column | I have a large DataFrame with the following columns: import pandas as pd x = pd.read_csv('age_year.csv') x.head() ID Year age 22445 1991 29925 1991 76165 1991 223725 1991 16.0 280165 1991 The Year column has values ranging from 1991 to 2017. Most ID have an age value in each Year, for example: x.loc[x['ID'] == 280165].to_clipboard(index = False) ID Year age 280165 1991 280165 1992 280165 1993 280165 1994 280165 1995 16.0 280165 1996 17.0 280165 1997 18.0 280165 1998 19.0 280165 1999 20.0 280165 2000 21.0 280165 2001 280165 2002 280165 2003 280165 2004 25.0 280165 2005 26.0 280165 2006 27.0 280165 2007 280165 2008 280165 2010 31.0 280165 2011 32.0 280165 2012 33.0 280165 2013 34.0 280165 2014 35.0 280165 2015 36.0 280165 2016 37.0 280165 2017 38.0 I want to fill the missing values in the age column for each unique ID based on their existing values. For example, for ID 280165 above, we know they are 29 in 2008, given that they are 31 in 2010 (28 in 2007, 24 in 2003 and so on). How should one fill in these missing age values for many unique ID for every year? I'm not sure how to do this in a uniform way across the entire DataFrame. The data used as the example in this question can be found here. | I think instead of trying to fill the values, find the year of birth instead. df["age"] = df["Year"] - (df["Year"]-df["age"]).mean() Or general solution with more than 1 id: s = df.loc[df["age"].notnull()].groupby("ID").first() df["age"] = df["Year"]-df["ID"].map(s["Year"]-s["age"]) print (df) ID Year age 0 280165 1991 12.0 1 280165 1992 13.0 2 280165 1993 14.0 3 280165 1994 15.0 4 280165 1995 16.0 5 280165 1996 17.0 6 280165 1997 18.0 7 280165 1998 19.0 8 280165 1999 20.0 9 280165 2000 21.0 10 280165 2001 22.0 11 280165 2002 23.0 12 280165 2003 24.0 13 280165 2004 25.0 14 280165 2005 26.0 15 280165 2006 27.0 16 280165 2007 28.0 17 280165 2008 29.0 18 280165 2010 31.0 19 280165 2011 32.0 20 280165 2012 33.0 21 280165 2013 34.0 22 280165 2014 35.0 23 280165 2015 36.0 24 280165 2016 37.0 25 280165 2017 38.0 | 7 | 3 |
63,413,928 | 2020-8-14 | https://stackoverflow.com/questions/63413928/how-to-manually-set-the-color-of-points-in-plotly-express-scatter-plots | https://plotly.com/python/line-and-scatter/ has many scatter plot examples, but not a single one showing you how to set all the points' colours within px.scatter: # x and y given as DataFrame columns import plotly.express as px df = px.data.iris() # iris is a pandas DataFrame fig = px.scatter(df, x="sepal_width", y="sepal_length") fig.show() I've tried adding colour = 'red' etc doesn't work. These examples only show you how to colour by some other variable. In principle I could add another feature and set it all the same but that seems a bizzare way of accomplishing the task.... | For that you may use the color_discrete_sequence argument. fig = px.scatter(df, x="sepal_width", y="sepal_length", color_discrete_sequence=['red']) This argument is to use a custom color paletter for discrete color factors, but if you are not using any factor for color it will use the first element for all the points in the plot. More about discrete color palletes: https://plotly.com/python/discrete-color/ | 37 | 27 |
63,410,588 | 2020-8-14 | https://stackoverflow.com/questions/63410588/cant-install-opencv-python3-8 | When I execute this command: pip3 install opencv-python I get the following error: Installing build dependencies ... error ERROR: Command errored out with exit status 1: command: /usr/bin/python3 /usr/lib/python3/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-z4c_sn6u/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel scikit-build cmake pip 'numpy==1.11.3; python_version=='"'"'3.5'"'"'' 'numpy==1.13.3; python_version=='"'"'3.6'"'"'' 'numpy==1.14.5; python_version=='"'"'3.7'"'"'' 'numpy==1.17.3; python_version>='"'"'3.8'"'"'' cwd: None Complete output (22 lines): Ignoring numpy: markers 'python_version == "3.5"' don't match your environment Ignoring numpy: markers 'python_version == "3.6"' don't match your environment Ignoring numpy: markers 'python_version == "3.7"' don't match your environment Collecting setuptools Downloading setuptools-49.6.0-py3-none-any.whl (803 kB) Collecting wheel Downloading wheel-0.35.0-py2.py3-none-any.whl (24 kB) Collecting scikit-build Using cached scikit_build-0.11.1-py2.py3-none-any.whl (72 kB) Collecting cmake Using cached cmake-3.18.0.tar.gz (28 kB) ERROR: Command errored out with exit status 1: command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-95tsmt_u/cmake/setup.py'"'"'; __file__='"'"'/tmp/pip-install-95tsmt_u/cmake/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-95tsmt_u/cmake/pip-egg-info cwd: /tmp/pip-install-95tsmt_u/cmake/ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-95tsmt_u/cmake/setup.py", line 7, in <module> from skbuild import setup ModuleNotFoundError: No module named 'skbuild' ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/bin/python3 /usr/lib/python3/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-z4c_sn6u/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel scikit-build cmake pip 'numpy==1.11.3; python_version=='"'"'3.5'"'"'' 'numpy==1.13.3; python_version=='"'"'3.6'"'"'' 'numpy==1.14.5; python_version=='"'"'3.7'"'"'' 'numpy==1.17.3; python_version>='"'"'3.8'"'"'' Check the logs for full command output. Same when I tried to install ecapture, I am using the latest python version | Try to upgrade your pip with pip install --upgrade pip and then run the pip install opencv-python | 9 | 9 |
63,396,950 | 2020-8-13 | https://stackoverflow.com/questions/63396950/will-conda-clean-erase-my-favorite-packages | I have to do some cleanup of my (Mini)conda python packages to free some disk space, and I see people usually resort to the conda clean command to get this job done. Conda documentation says that it's safe to do that, as it will only erase packages that "have never been used in any environment". I've never used conda's environments feature, though, and I don't know if I should be doing so. I just install my packages with conda install commands, run Jupyter Notebook and do all my work inside Jupyter. (I'm not a software engineer, just a regular dude using python and pandas to manage data.) Do I risk erasing my favorite packages if I run conda clean? If I don't do any cleanup, will conda eventually engulf all of my disk space? Is there any command I can use to check how much disk space my python packages are taking? | At the very least, tarballs can be removed with no risk. Cleaning packages is done based on counting the number of hardlinks for the package. If there is only one hardlink, this implies the package is not referenced by any environment, and therefore can be removed. This will be the case for all packages that were previously in use but were superseded by other versions. The warning mainly applies to people who have environments across different disks but are using softlinks to limit redundancy. Unlike hardlinks, the file system does not keep track of softlink references, so there is no simple way to count the number of softlinks. Hence, when cleaning, those packages that are only connected to envs through softlinks (i.e., only report one hardlink) will be removed and thereby break the softlinking envs. | 8 | 10 |
63,400,417 | 2020-8-13 | https://stackoverflow.com/questions/63400417/groupby-names-replace-values-with-there-max-value-in-all-columns-pandas | I have this DataFrame lst = [['AAA',15,'BBB',20],['BBB',16,'AAA',12],['BBB',22,'CCC',15],['CCC',11,'AAA',31],['DDD',25,'EEE',35]] df = pd.DataFrame(lst,columns = ['name1','val1','name2','val2']) which looks like this name1 val1 name2 val2 0 AAA 15 BBB 20 1 BBB 16 AAA 12 2 BBB 22 CCC 15 3 CCC 11 AAA 31 4 DDD 25 EEE 35 I want this name1 val1 name2 val2 0 AAA 31 BBB 22 1 BBB 22 AAA 31 2 BBB 22 CCC 15 3 CCC 15 AAA 31 4 DDD 25 EEE 35 with all values replaced with the maximum value. We choose the maximum value from both val1 and val2. If I do this, I will get the maximum from only val1: df["val1"] = df.groupby("name1")["val1"].transform("max") | Try using pd.wide_to_long to melt that dataframe into a long form, then use groupby with transform to find the max value. Map that max value to 'name' and reshape back to a four-column (wide) dataframe: df_long = pd.wide_to_long(df.reset_index(), ['name','val'], 'index', j='num',sep='',suffix='\d+') mapper= df_long.groupby('name')['val'].max() df_long['val'] = df_long['name'].map(mapper) df_new = df_long.unstack() df_new.columns = [f'{i}{j}' for i,j in df_new.columns] df_new Output: name1 name2 val1 val2 index 0 AAA BBB 31 22 1 BBB AAA 22 31 2 BBB CCC 22 15 3 CCC AAA 15 31 4 DDD EEE 25 35 | 7 | 8 |
63,396,570 | 2020-8-13 | https://stackoverflow.com/questions/63396570/pip-is-selecting-wrong-path | I'm using windows 10 and I got rid of python 3.8 and installed 3.7 as the only python version on my system. When trying to install libraries using pip I now get the error: Fatal error in launcher: Unable to create process using '"c:\users\user\appdata\local\programs\python\python38-32\python.exe" "C:\Users\User\AppData\Local\Programs\Python\Python38-32\Scripts\pip.exe" install pygame_menu': The system cannot find the file specified. when I checked in the console which -a pip I got: C:\Users\User>which -a pip /cygdrive/c/Users/User/AppData/Local/Programs/Python/Python38-32/Scripts/pip /cygdrive/c/Users/User/AppData/Local/Programs/Python/Python37/Scripts/pip Now when I look for Python in my variable path it is alright... Anyways I can't figure out how to change the path of pip so the right one is selected... besides its pretty weird that ive uninstalled python and pip multiple times and it still gets it wrong every time during installation. Thanks | to fix do this : check if you still have the python38-32 folder in your local variable list Delete the "%userprofile%\AppData\Local\Programs\Python\Python38" folder run pip from command line if the problem still persists then type "environment variables" in the windows search box and add "%userprofile%\AppData\Local\Programs\Python\Python37" to your system variable named "Path" This should completely fix your problem if not uninstall all the python files including py launcher reinstall python # when installing you must select ADD PYTHON to system environment variable | 7 | 2 |
63,386,812 | 2020-8-13 | https://stackoverflow.com/questions/63386812/plotly-how-to-hide-axis-titles-in-a-plotly-express-figure-with-facets | Is there a simple way to hide the repeated axis titles in a faceted chart using plotly express? I tried setting visible=True In the code below, but that also hid the y axis tick labels (the values). Ideally, I would like to set hiding the repeated axes titles a default for faceted plots in general (or even better, just defaulting to showing a single x and y axis title for the entire faceted figure. Here is the test code: import pandas as pd import numpy as np import plotly.express as px import string # create a dataframe cols = list(string.ascii_letters) n = 50 df = pd.DataFrame({'Date': pd.date_range('2021-01-01', periods=n)}) # create data with vastly different ranges for col in cols: start = np.random.choice([1, 10, 100, 1000, 100000]) s = np.random.normal(loc=0, scale=0.01*start, size=n) df[col] = start + s.cumsum() # melt data columns from wide to long dfm = df.melt("Date") fig = px.line( data_frame=dfm, x = 'Date', y = 'value', facet_col = 'variable', facet_col_wrap=6, facet_col_spacing=0.05, facet_row_spacing=0.035, height = 1000, width = 1000, title = 'Value vs. Date' ) fig.update_yaxes(matches=None, showticklabels=True, visible=True) fig.update_annotations(font=dict(size=16)) fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) Final Code (accepted answer). Note plotly >= 4.9 import pandas as pd import numpy as np import plotly.express as px import string import plotly.graph_objects as go # create a dataframe cols = list(string.ascii_letters) n = 50 df = pd.DataFrame({'Date': pd.date_range('2021-01-01', periods=n)}) # create data with vastly different ranges for col in cols: start = np.random.choice([1, 10, 100, 1000, 100000]) s = np.random.normal(loc=0, scale=0.01*start, size=n) df[col] = start + s.cumsum() # melt data columns from wide to long dfm = df.melt("Date") fig = px.line( data_frame=dfm, x = 'Date', y = 'value', facet_col = 'variable', facet_col_wrap=6, facet_col_spacing=0.05, facet_row_spacing=0.035, height = 1000, width = 1000, title = 'Value vs. Date' ) fig.update_yaxes(matches=None, showticklabels=True, visible=True) fig.update_annotations(font=dict(size=16)) fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) # hide subplot y-axis titles and x-axis titles for axis in fig.layout: if type(fig.layout[axis]) == go.layout.YAxis: fig.layout[axis].title.text = '' if type(fig.layout[axis]) == go.layout.XAxis: fig.layout[axis].title.text = '' # keep all other annotations and add single y-axis and x-axis title: fig.update_layout( # keep the original annotations and add a list of new annotations: annotations = list(fig.layout.annotations) + [go.layout.Annotation( x=-0.07, y=0.5, font=dict( size=16, color = 'blue' ), showarrow=False, text="single y-axis title", textangle=-90, xref="paper", yref="paper" ) ] + [go.layout.Annotation( x=0.5, y=-0.08, font=dict( size=16, color = 'blue' ), showarrow=False, text="Dates", textangle=-0, xref="paper", yref="paper" ) ] ) fig.show() | This answer has five parts: Hide subplot titles (not 100% sure you wanted to do that though...) 
Hide y-axis tick values using fig.layout[axis].tickfont = dict(color = 'rgba(0,0,0,0)') Set single axis labels using go.layout.Annotation(xref="paper", yref="paper") the plotly figure Complete code snippet at the end One very important take-away here is that you can edit any element produced with a px function using plotly.graph_object references, like go.layout.XAxis. 1. Hide subplot titles If you're otherwise happy with the way you've set up your fig, you can just include for anno in fig['layout']['annotations']: anno['text']='' fig.show() 2. Hide yaxis text You can set the yaxis tickfont to transparent using the following in a loop fig.layout[axis].tickfont = dict(color = 'rgba(0,0,0,0)') That exact line is included in the snippet below that also removes y-axis title for every subplot. 3. Single axis labels The removal of axis labels and inclusion of a single label requires a bit more work, but here's a very flexible setup that does exactly what you need and more if you'd like to edit your new labels in any way: # hide subplot y-axis titles and x-axis titles for axis in fig.layout: if type(fig.layout[axis]) == go.layout.YAxis: fig.layout[axis].title.text = '' if type(fig.layout[axis]) == go.layout.XAxis: fig.layout[axis].title.text = '' # keep all other annotations and add single y-axis and x-axis title: fig.update_layout( # keep the original annotations and add a list of new annotations: annotations = list(fig.layout.annotations) + [go.layout.Annotation( x=-0.07, y=0.5, font=dict( size=16, color = 'blue' ), showarrow=False, text="single y-axis title", textangle=-90, xref="paper", yref="paper" ) ] + [go.layout.Annotation( x=0.5, y=-0.08, font=dict( size=16, color = 'blue' ), showarrow=False, text="Dates", textangle=-0, xref="paper", yref="paper" ) ] ) fig.show() 4. Plot 5. Complete code: import pandas as pd import numpy as np import plotly.express as px import string import plotly.graph_objects as go # create a dataframe cols = list(string.ascii_letters) cols[0]='zzz' n = 50 df = pd.DataFrame({'Date': pd.date_range('2021-01-01', periods=n)}) # create data with vastly different ranges for col in cols: start = np.random.choice([1, 10, 100, 1000, 100000]) s = np.random.normal(loc=0, scale=0.01*start, size=n) df[col] = start + s.cumsum() # melt data columns from wide to long dfm = df.melt("Date") fig = px.line( data_frame=dfm, x = 'Date', y = 'value', facet_col = 'variable', facet_col_wrap=6, #facet_col_spacing=0.05, #facet_row_spacing=0.035, height = 1000, width = 1000, title = 'Value vs. 
Date' ) fig.update_yaxes(matches=None, showticklabels=True, visible=True) fig.update_annotations(font=dict(size=16)) fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) # subplot titles for anno in fig['layout']['annotations']: anno['text']='' # hide subplot y-axis titles and x-axis titles for axis in fig.layout: if type(fig.layout[axis]) == go.layout.YAxis: fig.layout[axis].title.text = '' if type(fig.layout[axis]) == go.layout.XAxis: fig.layout[axis].title.text = '' # keep all other annotations and add single y-axis and x-axis title: fig.update_layout( # keep the original annotations and add a list of new annotations: annotations = list(fig.layout.annotations) + [go.layout.Annotation( x=-0.07, y=0.5, font=dict( size=16, color = 'blue' ), showarrow=False, text="single y-axis title", textangle=-90, xref="paper", yref="paper" ) ] + [go.layout.Annotation( x=0.5, y=-0.08, font=dict( size=16, color = 'blue' ), showarrow=False, text="Dates", textangle=-0, xref="paper", yref="paper" ) ] ) fig.show() | 15 | 10 |
63,373,911 | 2020-8-12 | https://stackoverflow.com/questions/63373911/how-do-you-make-pylint-in-vscode-know-that-its-in-a-package-so-that-relative-i | Layout: workspace/ .vscode/launch.json main.py foo.py: def harr(): pass launch.json: { "version": "0.2.0", "configurations": [ { "name": "Python: Module", "type": "python", "request": "launch", "cwd": "${workspaceFolder}/..", "module": "${workspaceFolderBasename}" } ] } main.py and pylint error: from .foo import harr ^ Attempted relative import beyond top-level package - pylint(relative-beyond-top-level) The program runs fine if not for the linter error. I have tried modifying sys.path in __init__.py and through pylintrc but the result is the same. Is there a way VSCode can run linting for a file and know at the same time that the file is part of a package? I've scanned through vscode linting and the pylint config docs but found nothing like what I'd expect. This should be so standard, what am I not getting? How is pylint supposed to work out relative imports if it has no idea of the top-level package? | Currently pylint cannot resolve modules accurately through relative imports; it will mess up the path, although the code can run. You could try the following two ways to solve it: 1. Add the following settings in the settings.json file. "python.linting.pylintArgs": ["--disable=all", "--enable=F,E,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode", "--disable=E0402", ], (Since there is no issue with the code, we can turn off this type of pylint prompt.) 2. Since relative imports confuse pylint, we can avoid using them: use 'from foo import harr' instead of 'from .foo import harr'. Reference: Default Pylint rules. | 7 | 3 |
63,388,135 | 2020-8-13 | https://stackoverflow.com/questions/63388135/vs-code-modulenotfounderror-no-module-named-pandas | Tried to import pandas in VS Code with import pandas and got Traceback (most recent call last): File "c:\Users\xxxx\hello\sqltest.py", line 2, in <module> import pandas ModuleNotFoundError: No module named 'pandas' Tried to install pandas with pip install pandas pip3 install pandas python -m pip install pandas separately which returned (.venv) PS C:\Users\xxxx\hello> pip3 install pandas Requirement already satisfied: pandas in c:\users\xxxx\hello\.venv\lib\site-packages (1.1.0) Requirement already satisfied: pytz>=2017.2 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (2020.1) Requirement already satisfied: numpy>=1.15.4 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (1.19.1) Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\xxxx\hello\.venv\lib\site-packages (from pandas) (2.8.1) Requirement already satisfied: six>=1.5 in c:\users\xxxx\hello\.venv\lib\site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0) Tried: sudo pip install pandas and got (.venv) PS C:\Users\xxxx\hello> sudo pip install pandas sudo : The term 'sudo' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + sudo pip install pandas + ~~~~ + CategoryInfo : ObjectNotFound: (sudo:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException I also tried to change the python path under workspace settings following this answer. with C:\Users\xxxx\AppData\Local\Microsoft\WindowsApps\python.exe which is the python path I found in Command Prompt using where python but didn't work. Then I tried python -m venv .venv which returned (.venv) PS C:\Users\xxxx\hello> python -m venv .venv Error: [Errno 13] Permission denied: 'C:\\Users\\xxxx\\hello\\.venv\\Scripts\\python.exe' Update: Tried python3.8.5 -m pip install pandas and returned (.venv) PS C:\Users\xxxx\hello> python3.8.5 -m pip install pandas python3.8.5 : The term 'python3.8.5' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + python3.8.5 -m pip install pandas + ~~~~~~~~~ + CategoryInfo : ObjectNotFound: (python3.8.5:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException | The solution seems fairly simple! First things first though! From looking at your post, you seem to have followed a guide into installing Pandas. Nothing is wrong about that but I must point out first based on your information that you provided to us, you seem to run Windows Powershell PS C:\Users\xxxx\hello> and the error format matches Powershell. Therefore, sudo isn't recognized because sudo is the admin command for Unix-based systems like Debian, Ubuntu, and so on which is why it's not a valid command! But here's how to properly install: (I assume you're running Windows but if that's not the case, correct me and Ill give you the Unix version!) 1 - Windows key, search up CMD and run it as administrator this is important to avoid permissions issues! 2 - Run pip3 install pandas OR python3 -m pip3 install pandas | 24 | 1 |
63,388,372 | 2020-8-13 | https://stackoverflow.com/questions/63388372/why-there-is-an-unbound-variable-error-warning-by-ide-in-this-simple-python-func | Very simple question, but I can't find the answer to it. My IDE, VS Code (Pylance), gives me the warning/hint that a is possibly unbound. Why is this? How do I fix it? def f(): for i in range(4): a = 1 print(a) return a | Because range(4) might be something empty (if you overwrite the built-in range), in which case the loop body will never run and a will not get assigned. That is a problem when it's supposed to get returned. Maybe you can tell your IDE to ignore this and not show the warning. Or assign some meaningful default to a before the loop. | 15 | 26 |
63,387,031 | 2020-8-13 | https://stackoverflow.com/questions/63387031/how-to-clear-existing-flash-messages-in-flask | I have a warning flash message in Flask that appears before the user tries to submit a form, based on background information about the user. If the user goes ahead and submits the form the way they were warned not to, they are prevented and see a second flash message. I'd like to clear the first flash message before the user sees the second. I've read the Flask documentation on flash messages and tried to google for the answer. I also read some of the Flask source code. No solution jumps out at me. Can anyone help me figure out how to clear a flash message? | There is no predefined method to clear flash messages in Flask's flash helpers, but you can clear them by popping them out of the session. Try the code below; it works for me and may be useful to you. session.pop('_flashes', None) | 10 | 12 |
63,383,594 | 2020-8-12 | https://stackoverflow.com/questions/63383594/how-does-tensorflow-build-work-from-tf-keras-layers-layer | I was wondering if anyone knew how the build() function works from the tf.keras.layers.Layer class under the hood. According to the documentation: build is called when you know the shapes of the input tensors and can do the rest of the initialization So to me it seems like the class is behaving similarly to this: class MyDenseLayer: def __init__(self, num_outputs): self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_outputs]) def __call__(self, input): self.build(input.shape) ## build is called here when input shape is known return tf.matmul(input, self.kernel) I can't imagine build() would be called for every __call__, but it is the only place where the input is passed in. Does anyone know how exactly this works under the hood? | The Layer.build() method is typically used to instantiate the weights of the layer. See the source code for tf.keras.layers.Dense for an example, and note that the weight and bias tensors are created in that function. The Layer.build() method takes an input_shape argument, and the shape of the weights and biases often depends on the shape of the input. The Layer.call() method, on the other hand, implements the forward-pass of the layer. You do not want to overwrite __call__, because that is implemented in the base class tf.keras.layers.Layer. In a custom layer, you should implement call(). Layer.call() does not call Layer.build(). However, Layer().__call__() does call it if the layer has not been built yet (source), and that will set an attribute self.built = True to prevent Layer.build() from being called again. In other words, Layer.__call__() only calls Layer.build() the first time it is called. | 14 | 15 |
63,383,347 | 2020-8-12 | https://stackoverflow.com/questions/63383347/runtimeerror-expected-object-of-scalar-type-long-but-got-scalar-type-float-for | I'm running into an issue while calculating the loss for my Neural Net. I'm not sure why the program expects a long object because all my Tensors are in float form. I looked at threads with similar errors and the solution was to cast Tensors as floats instead of longs, but that wouldn't work in my case because all my data is already in float form when passed to the network. Here's my code: # Dataloader from torch.utils.data import Dataset, DataLoader class LoadInfo(Dataset): def __init__(self, prediction, indicator): self.prediction = prediction self.indicator = indicator def __len__(self): return len(self.prediction) def __getitem__(self, idx): data = torch.tensor(self.indicator.iloc[idx, :],dtype=torch.float) data = torch.unsqueeze(data, 0) label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.float) sample = {'data': data, 'label': label} return sample # Trainloader test_train = LoadInfo(train_label, train_indicators) trainloader = DataLoader(test_train, batch_size=64,shuffle=True, num_workers=1,pin_memory=True) # The Network class NetDense2(nn.Module): def __init__(self): super(NetDense2, self).__init__() self.rnn1 = nn.RNN(11, 100, 3) self.rnn2 = nn.RNN(100, 500, 3) self.fc1 = nn.Linear(500, 100) self.fc2 = nn.Linear(100, 20) self.fc3 = nn.Linear(20, 3) def forward(self, x): x1, h1 = self.rnn1(x) x2, h2 = self.rnn2(x1) x = F.relu(self.fc1(x2)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x # Allocate / Transfer to GPU dense2 = NetDense2() dense2.cuda() # Optimizer import torch.optim as optim criterion = nn.CrossEntropyLoss() # specify the loss function optimizer = optim.SGD(dense2.parameters(), lr=0.001, momentum=0.9,weight_decay=0.001) # Training dense2.train() loss_memory = [] for epoch in range(50): # loop over the dataset multiple times running_loss = 0.0 for i, samp in enumerate(trainloader): # get the inputs ins = samp['data'] targets = samp['label'] tmp = [] tmp = torch.squeeze(targets.float()) ins, targets = ins.cuda(), tmp.cuda() # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = dense2(ins) loss = criterion(outputs, targets) # The loss loss.backward() optimizer.step() # keep track of loss running_loss += loss.data.item() I get the error from above in the line " loss = criterion(outputs, targets) " | As per the documentation and official example at pytorch webpage, The targets passed to nn.CrossEntropyLoss() should be in torch.long format # official example import torch import torch.nn as nn loss = nn.CrossEntropyLoss() input = torch.randn(3, 5, requires_grad=True) target = torch.empty(3, dtype=torch.long).random_(5) # if you will replace the dtype=torch.float, you will get error output = loss(input, target) output.backward() update this line in your code as label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.long) #updated torch.float to torch.long | 13 | 20 |
63,377,150 | 2020-8-12 | https://stackoverflow.com/questions/63377150/what-became-available-attrs-on-django-3 | First of all, I'm new to Django, so please be nice with me :D I'm currently adapting .py files for Django 3 because the files I have are compatible for Django 2. So, some changes have been made for the new version and in a file, it's written : @wraps(view_func, assigned=available_attrs(view_func)) With the import : from django.utils.decorators import available_attrs I searched for an adaptation of available_attrs, and I quickly found that it has been removed for the new version. And when I launch the code, I have this : ImportError : cannot import name 'available_attrs' from 'django.utils.decorators' So I was wondering what should I write instead of available_attrs to make it work ? PS : Sorry for my bad english | available_attrs() only ever existed to help bridge between Python 2 and Python 3. This is documented in the Django 3.0 release notes: Removed private Python 2 compatibility APIs While Python 2 support was removed in Django 2.0, some private APIs weren’t removed from Django so that third party apps could continue using them until the Python 2 end-of-life. Since we expect apps to drop Python 2 compatibility when adding support for Django 3.0, we’re removing these APIs at this time. [...] django.utils.decorators.available_attrs() - This function returns functools.WRAPPER_ASSIGNMENTS If the @wraps() in your sample line is the standard functools.wraps() decorator, then you can just entirely remove assigned=available_attrs(...), because functools.WRAPPER_ASSIGNMENTS is the default value for assigned: @wraps(view_func) otherwise, just use functools.WRAPPER_ASSIGNMENTS directly. | 14 | 21 |
63,372,039 | 2020-8-12 | https://stackoverflow.com/questions/63372039/how-can-you-read-a-gzipped-parquet-file-in-python | I need to open a gzipped file that has a parquet file inside with some data. I am having so much trouble trying to print/read what is inside the file. I tried the following: with gzip.open("myFile.parquet.gzip", "rb") as f: data = f.read() This does not seem to work, as I get an error that my file is not a gz file. Thanks! | You can use the read_parquet function from the pandas module: Install pandas and pyarrow: pip install pandas pyarrow Use read_parquet, which returns a DataFrame: import pandas as pd data = pd.read_parquet("myFile.parquet.gzip") print(data.count()) # example of an operation on the returned DataFrame | 10 | 15 |
63,307,440 | 2020-8-7 | https://stackoverflow.com/questions/63307440/how-to-plot-a-mean-line-on-a-kdeplot-between-0-and-the-y-value-of-the-mean | I have a distplot and I would like to plot a mean line that goes from 0 to the y value of the mean frequency. I want to do this, but have the line stop at when the distplot does. Why isn't there a simple parameter that does this? It would be very useful. I have some code that gets me almost there: plt.plot([x.mean(),x.mean()], [0, *what here?*]) This code plots a line just as I'd like except for my desired y-value. What would the correct math be to get the y max to stop at the frequency of the mean in the distplot? An example of one of my distplots is below using 0.6 as the y-max. It would be awesome if there was some math to make it stop at the y-value of the mean. I have tried dividing the mean by the count etc. | Update for the latest versions of matplotlib (3.3.4) and seaborn (0.13.3): the kdeplot with shade=True now doesn't create a line object anymore. To get the same outcome as before, setting fill=False will still create the line object. The curve can then be filled with ax.fill_between(). The code below is changed accordingly. (Use the revision history to see the older versions.) ax.lines[0] gets the curve of the kde, of which you can extract the x and y data. np.interp then can find the height of the curve for a given x-value: import numpy as np import matplotlib.pyplot as plt import seaborn as sns x = np.random.normal(np.tile(np.random.uniform(10, 30, 5), 50), 3) ax = sns.kdeplot(x, fill=False, color='crimson') kdeline = ax.lines[0] mean = x.mean() xs = kdeline.get_xdata() ys = kdeline.get_ydata() height = np.interp(mean, xs, ys) ax.vlines(mean, 0, height, color='crimson', ls=':') ax.fill_between(xs, 0, ys, facecolor='crimson', alpha=0.2) plt.show() The same approach can be extended to show the mean together with the standard deviation, or the median and the quartiles: import matplotlib.pyplot as plt import seaborn as sns import numpy as np x = np.random.normal(np.tile(np.random.uniform(10, 30, 5), 50), 3) fig, axes = plt.subplots(ncols=2, figsize=(12, 4)) for ax in axes: sns.kdeplot(x, fill=False, color='crimson', ax=ax) kdeline = ax.lines[0] xs = kdeline.get_xdata() ys = kdeline.get_ydata() if ax == axes[0]: middle = x.mean() sdev = x.std() left = middle - sdev right = middle + sdev ax.set_title('Showing mean and sdev') else: left, middle, right = np.percentile(x, [25, 50, 75]) ax.set_title('Showing median and quartiles') ax.vlines(middle, 0, np.interp(middle, xs, ys), color='crimson', ls=':') ax.fill_between(xs, 0, ys, facecolor='crimson', alpha=0.2) ax.fill_between(xs, 0, ys, where=(left <= xs) & (xs <= right), interpolate=True, facecolor='crimson', alpha=0.2) # ax.set_ylim(ymin=0) plt.show() PS: for the mode of the kde: mode_idx = np.argmax(ys) ax.vlines(xs[mode_idx], 0, ys[mode_idx], color='lime', ls='--') | 7 | 25 |
63,367,594 | 2020-8-11 | https://stackoverflow.com/questions/63367594/the-url-function-in-django-has-been-deprecated-do-i-have-to-change-my-source | The url() function in django has been deprecated since version 3.1. Here's how backwards compatibility is being handled: def url(regex, view, kwargs=None, name=None): warnings.warn( 'django.conf.urls.url() is deprecated in favor of ' 'django.urls.re_path().', RemovedInDjango40Warning, stacklevel=2, ) return re_path(regex, view, kwargs, name) For now, re_path() is returned when the url() function is called. When the function is completely removed, will the projects that use it have to change their source code? | will the projects that use it have to change their source code? Yes, if they upgrade to django-4.0, url will no longer be available. Typically if something is marked deprecated, it is removed two versions later, hence in django-4.0, since django-4.0 will be released after django-3.2. If you thus have an active project, you eventually will upgrade to Django-4.0 or further, and thus should make use of re_path(…) [Django-doc] instead. The idea is thus to give users time to adapt the code accordingly, and keep the application running. But one should eventually fix the deprecation warnings, since after ~16 months, it is removed in the newest Django versions. | 6 | 9 |
63,264,888 | 2020-8-5 | https://stackoverflow.com/questions/63264888/pydantic-using-property-getter-decorator-for-a-field-with-an-alias | scroll all the way down for a tl;dr, I provide context which I think is important but is not directly relevant to the question asked A bit of context I'm in the making of an API for a webapp and some values are computed based on the values of others in a pydantic BaseModel. These are used for user validation, data serialization and definition of database (NoSQL) documents. Specifically, I have nearly all resources inheriting from a OwnedResource class, which defines, amongst irrelevant other properties like creation/last-update dates: object_key -- The key of the object using a nanoid of length 6 with a custom alphabet owner_key -- This key references the user that owns that object -- a nanoid of length 10. _key -- this one is where I'm bumping into some problems, and I'll explain why. So arangodb -- the database I'm using -- imposes _key as the name of the property by which resources are identified. Since, in my webapp, all resources are only accessed by the users who created them, they can be identified in URLs with just the object's key (eg. /subject/{object_key}). However, as _key must be unique, I intend to construct the value of this field using f"{owner_key}/{object_key}", to store the objects of every user in the database and potentially allow for cross-user resource sharing in the future. The goal is to have the shortest per-user unique identifier, since the owner_key part of the full _key used to actually access and act upon the document stored in the database is always the same: the currently-logged-in user's _key. My attempt My thought was then to define the _key field as a @property-decorated function in the class. However, Pydantic does not seem to register those as model fields. Moreover, the attribute must actually be named key and use an alias (with Field(... alias="_key"), as pydantic treats underscore-prefixed fields as internal and does not expose them. Here is the definition of OwnedResource: class OwnedResource(BaseModel): """ Base model for resources owned by users """ object_key: ObjectBareKey = nanoid.generate(ID_CHARSET, OBJECT_KEY_LEN) owner_key: UserKey updated_at: Optional[datetime] = None created_at: datetime = datetime.now() @property def key(self) -> ObjectKey: return objectkey(self.owner_key) class Config: fields = {"key": "_key"} # [1] [1] Since Field(..., alias="...") cannot be used, I use this property of the Config subclass (see pydantic's documentation) However, this does not work, as shown in the following example: @router.post("/subjects/") def create_a_subject(subject: InSubject): print(subject.dict(by_alias=True)) with InSubject defining properties proper to Subject, and Subject being an empty class inheriting from both InSubject and OwnedResource: class InSubject(BaseModel): name: str color: Color weight: Union[PositiveFloat, Literal[0]] = 1.0 goal: Primantissa # This is just a float constrained in a [0, 1] range room: str class Subject(InSubject, OwnedResource): pass When I perform a POST /subjects/, the following is printed in the console: {'name': 'string', 'color': Color('cyan', rgb=(0, 255, 255)), 'weight': 0, 'goal': 0.0, 'room': 'string'} As you can see, _key or key are nowhere to be seen. Please ask for details and clarification, I tried to make this as easy to understand as possible, but I'm not sure if this is clear enough. 
tl;dr A context-less and more generic example without insightful context: With the following class: from pydantic import BaseModel class SomeClass(BaseModel): spam: str @property def eggs(self) -> str: return self.spam + " bacon" class Config: fields = {"eggs": "_eggs"} I would like the following to be true: a = SomeClass(spam="I like") d = a.dict(by_alias=True) d.get("_eggs") == "I like bacon" | If you can update to the newest version of Pydantic 2, which may be a bit of an ordeal honestly, there are some really nice new feature including support for properties like you are referring to. I recently updated and after some refactoring I have been happy with the newer version. from pydantic import BaseModel, computed_field class SomeClass(BaseModel): spam: str @computed_field @property def eggs(self) -> str: return self.spam + " bacon" a = SomeClass(spam="I like") a.model_dump() # -> {'spam': 'I like', 'eggs': 'I like bacon'} | 17 | 10 |
63,351,189 | 2020-8-11 | https://stackoverflow.com/questions/63351189/pyarrow-add-column-to-pyarrow-table | I have a pyarrow table name final_table of shape 6132,7 I want to add column to this table list_ = ['IT'] * 6132 final_table.append_column('COUNTRY_ID', list_) but I am getting following error ArrowInvalid: Added column's length must match table's length. Expected length 6132 but got length 12264 | According to the documentation: Append column at end of columns. Parameters field (str or Field) – If a string is passed then the type is deduced from the column data. column (Array, list of Array, or values coercible to arrays) – Column data. Returns pyarrow.Table – New table with the passed column added. I think pyarrow is assuming that you're providing a list of Array. To avoid the confusion you should pass an arrow array instead col_a = pa.array([1, 2, 3], pa.int32()) col_b = pa.array(["X", "Y", "Z"], pa.string()) table = pa.Table.from_arrays( [col_a, col_b], schema=pa.schema([ pa.field('a', col_a.type), pa.field('b', col_b.type), ]) ) table = table.append_column('COUNTRY_ID', pa.array(['IT'] * len(table), pa.string())) | 7 | 12 |
63,363,044 | 2020-8-11 | https://stackoverflow.com/questions/63363044/why-does-np-inf-2-result-in-nan-and-not-infinity | I’m slightly disappointed that np.inf // 2 evaluates to np.nan and not to np.inf, as is the case for normal division. Is there a reason I’m missing why nan is a better choice than inf? | I'm going to be the person who just points at the C level implementation without any attempt to explain intent or justification: *mod = fmod(vx, wx); div = (vx - *mod) / wx; It looks like in order to calculate divmod for floats (which is called when you just do floor division) it first calculates the modulus and float('inf') %2 only makes sense to be NaN, so when it calculates vx - mod it ends up with NaN so everything propagates nan the rest of the way. So in short, since the implementation of floor division uses modulus in the calculation and that is NaN, the result for floor division also ends up NaN | 41 | 33 |
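A quick interactive check (not part of the original answer) makes the propagation visible from Python; np.inf is just the ordinary float infinity, so these operations go through the same C routine quoted above:

```python
import numpy as np

print(np.inf / 2)         # inf  -- true division keeps the infinity
print(np.inf % 2)         # nan  -- fmod(inf, 2) has no meaningful value
print(np.inf // 2)        # nan  -- floor division is (inf - mod) / 2, and inf - nan is nan
print(divmod(np.inf, 2))  # (nan, nan) -- both results come from the same code path
```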
63,316,840 | 2020-8-8 | https://stackoverflow.com/questions/63316840/django-3-1-streaminghttpresponse-with-an-async-generator | Documentation for Django 3.1 says this about async views: The main benefits are the ability to service hundreds of connections without using Python threads. This allows you to use slow streaming, long-polling, and other exciting response types. I believe that "slow streaming" means we could implement an SSE view without monopolizing a thread per client, so I tried to sketch a simple view, like so: async def stream(request): async def event_stream(): while True: yield 'data: The server time is: %s\n\n' % datetime.datetime.now() await asyncio.sleep(1) return StreamingHttpResponse(event_stream(), content_type='text/event-stream') (note: I adapted the code from this response) Unfortunately, when this view is invoked, it raises the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/asgiref/sync.py", line 330, in thread_handler raise exc_info[1] File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 38, in inner response = await get_response(request) File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 231, in _get_response_async response = await wrapped_callback(request, *callback_args, **callback_kwargs) File "./chat/views.py", line 144, in watch return StreamingHttpResponse(event_stream(), content_type='text/event-stream') File "/usr/local/lib/python3.7/site-packages/django/http/response.py", line 367, in __init__ self.streaming_content = streaming_content File "/usr/local/lib/python3.7/site-packages/django/http/response.py", line 382, in streaming_content self._set_streaming_content(value) File "/usr/local/lib/python3.7/site-packages/django/http/response.py", line 386, in _set_streaming_content self._iterator = iter(value) TypeError: 'async_generator' object is not iterable To me, this shows that StreamingHttpResponse doesn't currently support async generators. I tried to modify StreamingHttpResponse to use async for but I wasn't able to do much. Any idea how I could do that? | This is an old question but it came up on a Google result since I was looking for a solution to the same issue. In the end I found this repo https://github.com/valberg/django-sse - which uses async views in Django 4.2 to stream via SSE (specifically see here). I understand this is a recent addition to Django so I hope it helps anyone else looking for an answer. | 21 | 4 |
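For readers on a newer Django: since Django 4.2, StreamingHttpResponse accepts an async iterator directly when the project is served under ASGI, so the original sketch works essentially unchanged (a hedged example assuming Django >= 4.2 and an ASGI server such as Daphne or Uvicorn):

```python
import asyncio
import datetime

from django.http import StreamingHttpResponse


async def stream(request):
    async def event_stream():
        while True:
            # Each yielded chunk is sent to the client as a server-sent event.
            yield f"data: The server time is: {datetime.datetime.now()}\n\n"
            await asyncio.sleep(1)

    return StreamingHttpResponse(event_stream(), content_type="text/event-stream")
```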
63,304,163 | 2020-8-7 | https://stackoverflow.com/questions/63304163/how-to-create-a-deb-package-for-a-python-project-without-setup-py | Any documentation I've found about this topic mentions that the "only" requirement to build a deb package is to have a correct setup.py (and requirements.txt). For instance in dh-virtualenv tutorial, stdeb documentation and the Debian's library style guide for python. But nowadays new (amazing) tools like poetry allow to develop (and upload to PyPI) python projects without any setup.py (this file and several others including requirements.txt are all replaced by pyproject.toml). I believe flit allows this too. I have developed a python project managed by poetry and would like to package it for Ubuntu/Debian. I guess, as a workaround I can still write a setup.py file that would take its values from pyproject.toml and a requirements.txt file (written by hand using values from poetry.lock). But, is there a way to do this without any setup.py file? | setuptools, and the setup.py file that it requires, has been the de-facto packaging standard in python for the longest time. The new package managers you mention were enabled by the introduction of PEP 517 and PEP 518 (or read this for a high-level description on the topic), which provide a standardized way of specifying the build backend without the need of a setup.py (and the ensuing hen-egg problem where you already need setuptools to correctly parse it, but might also need to specify which version of setuptools you want for the build). Anyway, it's all still very fresh, and the linux packaging community hasn't fully caught up yet. I found this discussion on the debian bug tracker, and the rpm side sums it up neatly over here. The short answer is to just wait a while until the toolchain catches up to the new standards, and google debian packaging pep517 support every now and then. As a workaround for poetry-build packages specifically, you can use dephell to generate the setup.py, and poetry to generate the requirements.txt for you to keep using legacy tools: dephell deps convert --from=poetry --to=setuppy poetry export -f requirements.txt -o requirements.txt And, during the build, tell your pyproject.tom that you plan to use setuptools for the build instead of poetry: [build-system] requires = ["setuptools >= 40.6.0", "wheel"] build-backend = "setuptools.build_meta" | 17 | 12 |
63,312,692 | 2020-8-8 | https://stackoverflow.com/questions/63312692/importerror-attempted-relative-import-with-no-known-parent-package | I'm attempting to import a script from my Items folder, but I keep getting an error. from .Items.Quest1_items import * gives from .Items.Quest1_items import * # ImportError: attempted relative import with no known parent package # Process finished with exit code 1 Here is my project tree; I'm running the script from the main.py file Quest1/ | |- main.py | |- Items/ | |- __init__.py | |- Quest1_items.py | Remove the dot from the beginning. Relative paths with respect to main.py are found automatically. from Items.Quest1_items import * | 25 | 28 |
63,335,753 | 2020-8-10 | https://stackoverflow.com/questions/63335753/how-to-check-if-string-exists-in-enum-of-strings | I have created the following Enum: from enum import Enum class Action(str, Enum): NEW_CUSTOMER = "new_customer" LOGIN = "login" BLOCK = "block" I have inherited from str, too, so that I can do things such as: action = "new_customer" ... if action == Action.NEW_CUSTOMER: ... I would now like to be able to check if a string is in this Enum, such as: if "new_customer" in Action: .... I have tried adding the following method to the class: def __contains__(self, item): return item in [i for i in self] However, when I run this code: print("new_customer" in [i for i in Action]) print("new_customer" in Action) I get this exception: True Traceback (most recent call last): File "/Users/kevinobrien/Documents/Projects/crazywall/utils.py", line 24, in <module> print("new_customer" in Action) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/enum.py", line 310, in __contains__ raise TypeError( TypeError: unsupported operand type(s) for 'in': 'str' and 'EnumMeta' | I just bumped into this problem today (2020-12-09); I had to change a number of subpackages for Python 3.8. Perhaps an alternative to the other solutions here is the following, inspired by the excellent answer here to a similar question, as well as @MadPhysicist's answer on this page: from enum import Enum, EnumMeta class MetaEnum(EnumMeta): def __contains__(cls, item): try: cls(item) except ValueError: return False return True class BaseEnum(Enum, metaclass=MetaEnum): pass class Stuff(BaseEnum): foo = 1 bar = 5 Tests (python >= 3.7; tested up to 3.10): >>> 1 in Stuff True >>> Stuff.foo in Stuff True >>> 2 in Stuff False >>> 2.3 in Stuff False >>> 'zero' in Stuff False | 39 | 52 |
63,257,839 | 2020-8-5 | https://stackoverflow.com/questions/63257839/best-way-to-specify-nested-dict-with-pydantic | Context I'm trying to validate/parse some data with pydantic. I want to specify that the dict can have a key daytime, or not. If it does, I want the value of daytime to include both sunrise and sunset. e.g. These should be allowed: { 'type': 'solar', 'daytime': { 'sunrise': 4, # 4am 'sunset': 18 # 6pm } } And { 'type': 'wind' # daytime key is omitted } And { 'type': 'wind', 'daytime': None } But I want to fail validation for { 'type': 'solar', 'daytime': { 'sunrise': 4 } } Because this has a daytime value, but no sunset value. MWE I've got some code that does this. If I run this script, it executes successfully. from pydantic import BaseModel, ValidationError from typing import List, Optional, Dict class DayTime(BaseModel): sunrise: int sunset: int class Plant(BaseModel): daytime: Optional[DayTime] = None type: str p = Plant.parse_obj({'type': 'wind'}) p = Plant.parse_obj({'type': 'wind', 'daytime': None}) p = Plant.parse_obj({ 'type': 'solar', 'daytime': { 'sunrise': 5, 'sunset': 18 }}) try: p = Plant.parse_obj({ 'type': 'solar', 'daytime': { 'sunrise': 5 }}) except ValidationError: pass else: raise AssertionError("Should have failed") Question What I'm wondering is, is this how you're supposed to use pydantic for nested data? I have lots of layers of nesting, and this seems a bit verbose. Is there any way to do something more concise, like: class Plant(BaseModel): daytime: Optional[Dict[('sunrise', 'sunset'), int]] = None type: str | Pydantic create_model function is what you need: from pydantic import BaseModel, create_model class Plant(BaseModel): daytime: Optional[create_model('DayTime', sunrise=(int, ...), sunset=(int, ...))] = None type: str | 20 | 23 |
63,266,504 | 2020-8-5 | https://stackoverflow.com/questions/63266504/python-pytest-mock-fails-with-assert-none-for-function-call-assertions | I am trying to mock some calls to boto3 and it looks like the mocked function is returning the correct value, and it look like if I change the assertion so it no longer matches what was passed in the assertion fails because the input parameters do not match, however if I make them match then the assertion fails with: E AssertionError: assert None E + where None = <bound method wrap_assert_called_with of <MagicMock name='get_item' id='139668501580240'>>(TableName='TEST_TABLE', Key={'ServiceName': {'S': 'Site'}}) E + where <bound method wrap_assert_called_with of <MagicMock name='get_item' id='139668501580240'>> = <MagicMock name='get_item' id='139668501580240'>.assert_called_with E + where <MagicMock name='get_item' id='139668501580240'> = <botocore.client.DynamoDB object at 0x7f071b8251f0>.get_item E + where <botocore.client.DynamoDB object at 0x7f071b8251f0> = site_dao.ddb_client The dynamo DB object is a global variable. ddb_client = boto3.client("dynamodb") def query_ddb(): """Query dynamo DB for the current strategy. Returns: strategy (str): The latest strategy from dynamo DB. """ response = None try: ddb_table = os.environ["DDB_DA_STRATEGY"] response = ddb_client.get_item( TableName=ddb_table, Key={"ServiceName": {"S": "Site"}} ) except Exception as exception: LOGGER.error(exception) raise ServerException("The server was unable to process your request.", []) return response.get("Item").get("Strategy").get("S") And my unit test looks like: def test_data_access_context_strategy_ddb(mocker): object_key = { "ServiceName": {"S": "Site"} } table_name = "TEST_TABLE" expected_response = "SqlServer" os_env = { "DDB_DA_STRATEGY": table_name, } ddb_response = { "Item": { "Strategy": { "S": expected_response } } } mocker.patch.dict(os.environ, os_env) mocker.patch.object(site_dao.ddb_client, "get_item") site_dao.ddb_client.get_item.return_value = ddb_response data_access_context = site_dao.DataAccessContext() response = data_access_context.query_ddb() assert response == expected_response assert site_dao.ddb_client.get_item.assert_called_with(TableName=table_name, Key=object_key) I cant work out what is going wrong, if I change the expected value for assert_called_with, so for example: assert site_dao.ddb_client.get_item.assert_called_with(TableName="TT", Key=object_key) The test fails with: E AssertionError: expected call not found. E Expected: get_item(TableName='TT', Key={'ServiceName': {'S': 'Site'}}) E Actual: get_item(TableName='TEST_TABLE', Key={'ServiceName': {'S': 'Site'}}) E E pytest introspection follows: E E Kwargs: E assert {'Key': {'Ser... 'TEST_TABLE'} == {'Key': {'Ser...leName': 'TT'} E Omitting 1 identical items, use -vv to show E Differing items: E {'TableName': 'TEST_TABLE'} != {'TableName': 'TT'} E Use -v to get the full diff So when the expected and actual inputs differ then it fails because of this, however when they are the same and the test should pass it fails because it is as if the function were never even called. | You have two assertions on this line: assert site_dao.ddb_client.get_item.assert_called_with(TableName=...) The first assertion is assert_called_with which sounds like it is what you want. Then you have another assertion at the beginning of the line: assert ... which asserts on the return value of the assert_called_with function. That return value is None when the assertion passes. 
So then the whole line evaluates to assert None, and None is false-y, so you end up with assert False. tl;dr Don't use assert twice. | 21 | 86 |
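In other words, using the names from the question, the fix is to drop the outer assert and let the mock's own assertion raise on a mismatch (a small illustrative rewrite):

```python
# Wrong: assert_called_with() returns None on success, so the outer
# `assert` always sees None and fails even when the call matched.
# assert site_dao.ddb_client.get_item.assert_called_with(TableName=table_name, Key=object_key)

# Right: call the mock assertion on its own; it raises AssertionError itself on a mismatch.
site_dao.ddb_client.get_item.assert_called_with(TableName=table_name, Key=object_key)
```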
63,280,876 | 2020-8-6 | https://stackoverflow.com/questions/63280876/what-is-the-difference-between-config-and-configure-in-tkinter | I'm a beginner in Python. Recently, I finished the basics, and now I'm trying to make some GUI applications. I have found many cases where we use config() and configure(). But what is the difference between config() and configure()? I mean, in what cases should config() be used, and in what cases should configure() be used? I have a small portion of my code here. Code where Configure is used: fontStyle = tkFont.Font(family="comic sans ms", size=15) def zoomin_1(): fontsize=fontStyle['size'] fontStyle.configure(size=fontsize+5) def zoomout_1(): fontsize = fontStyle['size'] fontStyle.configure(size=fontsize - 5) def default_1(): fontStyle.configure(size=10) Code where config is used: root.config(menu=mainmenu) mainmenu.add_cascade(label="Edit", menu=Edit_menu) Format_menu = Menu(mainmenu, tearoff = False) Format_menu.add_command(label="Word Wrap", command= test_fun) Format_menu.add_command(label="Font", command= test_fun) root.config(menu=mainmenu) mainmenu.add_cascade(label="Format", menu=Format_menu) It would be of great help if someone could clear this doubt. | Both are exactly the same; the only difference is the name. I would just recommend using .config() to save a few typing characters ;-) | 7 | 7 |
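A quick way to convince yourself of this (not from the original answer): tkinter defines config as an alias of configure on the base widget class, so both names resolve to the same method:

```python
import tkinter as tk

print(tk.Label.config is tk.Label.configure)  # True -- the same underlying function

root = tk.Tk()
label = tk.Label(root, text="hello")
label.configure(text="set via configure")  # identical effect...
label.config(text="set via config")        # ...to this shorter spelling
root.destroy()
```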
63,318,719 | 2020-8-8 | https://stackoverflow.com/questions/63318719/difference-between-type-alias-and-newtype | What is the difference between this: INPUT_FORMAT_TYPE = NewType('INPUT_FORMAT_TYPE', Tuple[str, str, str]) and this INPUT_FORMAT_TYPE = Tuple[str, str, str] Functionally, both work but IDEs like PyCharm flag code like this: return cast(INPUT_FORMAT_TYPE, ("*", "*", "All")) | InputFormat (renamed it to keep type notation consistent) can be a subtype or alias of Tuple[str, str, str]. Having it be a subtype (your first example) instead of an alias (your second example) is useful for a situation where you want to statically verify (through something like mypy) that all InputFormats were made in a certain way. For example: def new_input_format(a: str) -> InputFormat: return InputFormat((a, a * 2, a * 4)) def print_input_format(input_format: InputFormat): print(input_format) print_input_format(new_input_format("a")) # Statement 1 print_input_format(("a", "aa", "aaa")) # Statement 2 If InputFormat is declared as an alias (through InputFormat = Tuple[str, str, str]), both statements will statically verify. If InputFormat is declared as a subtype (through InputFormat = NewType('InputFormat', Tuple[str, str, str])), only the first statement will statically verify. Now this isn't foolproof. A third statement such as: print_input_format(InputFormat(("a", "aa", "aaa"))) will statically verify, yet it bypasses our careful InputFormat creator called new_input_format. However, by making InputFormat a subtype here we were forced to explicitly acknowledge that we're creating an input format through having to wrap the tuple in an InputFormat, which makes it easier to maintain this type of code and spot potential bugs in input format constructions. Another example where NewType is beneficial over a type alias: Let's say you had a database which we expose two functions for: def read_user_id_from_session_id(session_id: str) -> Optional[str]: ... def read_user(user_id: str) -> User: ... intended to be called like this (exhibit A): user_id = read_user_id_by_session_id(session_id) if user_id: user = read_user(user_id) # Do something with `user`. else: print("User not found!") Forget about the fact that we can use a join here to make this only one query instead of two. Anyways, we want to only allow a return value of read_user_id_from_session_id to be used in read_user (since in our system, a user ID can only come from a session). We don't want to allow any value, reason being that it's probably a mistake. Imagine we did this (exhibit B): user = read_user(session_id) To a quick reader, it may appear correct. They'd probably think a select * from users where session_id = $1 is happening. However, this is actually treating a session_id as a user_id, and with our current type hints it passes despite causing unintended behavior at runtime. Instead, we can change the type hints to this: UserID = NewType("UserID", str) def read_user_id_from_session_id(session_id: str) -> Optional[UserID]: ... def read_user(user_id: UserID) -> User: ... Exhibit A expressed above would still work, because the flow of data is correct. But we'd have to turn Exhibit B into read_user(UserID(session_id)) which quickly points out the problem of converting a session_id to a user_id without going through the required function. In other programming languages with better type systems, this can be taken a step further. You can actually prohibit explicit construction like UserID(...) 
in all but one place, causing everyone to have to go through that one place in order to obtain a piece of data of that type. In Python, you can bypass the intended flow of data by explicitly doing YourNewType(...) anywhere. While NewType is beneficial over simply type aliases, it leaves this feature to be desired. | 15 | 21 |
63,290,336 | 2020-8-6 | https://stackoverflow.com/questions/63290336/python-mock-check-if-methods-are-called-in-mocked-object | I have a certain piece of code that looks like this: # file1.py from module import Object def method(): o = Object("param1") o.do_something("param2") I have unittests that look like this: @patch("file1.Object") class TestFile(unittest.TestCase): def test_call(self, obj): ... I can do obj.assert_called_with() in the unittest to verify that the constructor was called with certain parameters. Is it possible to verify that obj.do_something was called with certain parameters? My instinct is no, as the Mock is fully encapsulated within Object, but I was hoping there might be some other way. | You can do this, because the arguments are passed to the mock object. This should work: @patch("file1.Object") class TestFile: def test_call(self, obj): method() obj.assert_called_once_with("param1") obj.return_value.do_something.assert_called_once_with("param2") obj.return_value is the Object instance (which is a MagickMock object with the Object spec), and do_something is another mock in that object that is called with the given parameter. As long as you are just passing arguments to mock objects, the mock will record this and you can check it. What you don't have is any side effects from the real function calls - so if the original do_something would call another function, this cannot be checked. | 7 | 10 |
63,347,818 | 2020-8-10 | https://stackoverflow.com/questions/63347818/aiohttp-client-exceptions-clientconnectorerror-cannot-connect-to-host-stackover | here is my code: import asyncio from aiohttp import ClientSession async def main(): url = "https://stackoverflow.com/" async with ClientSession() as session: async with session.get(url) as resp: print(resp.status) asyncio.run(main()) if I run it on my computer, everything works, but if I run it on pythonanywhere, I get this error: Traceback (most recent call last): File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 936, in _wrap_create_connection return await self._loop.create_connection(*args, **kwargs) # type: ignore # noqa File "/usr/lib/python3.8/asyncio/base_events.py", line 1017, in create_connection raise exceptions[0] File "/usr/lib/python3.8/asyncio/base_events.py", line 1002, in create_connection sock = await self._connect_sock( File "/usr/lib/python3.8/asyncio/base_events.py", line 916, in _connect_sock await self.sock_connect(sock, address) File "/usr/lib/python3.8/asyncio/selector_events.py", line 485, in sock_connect return await fut File "/usr/lib/python3.8/asyncio/selector_events.py", line 517, in _sock_connect_cb raise OSError(err, f'Connect call failed {address}') ConnectionRefusedError: [Errno 111] Connect call failed ('151.101.193.69', 443) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test_c.py", line 39, in <module> asyncio.run(main()) File "/usr/lib/python3.8/asyncio/runners.py", line 43, in run return loop.run_until_complete(main) File "/usr/lib/python3.8/asyncio/base_events.py", line 608, in run_until_complete return future.result() File "test_c.py", line 28, in main async with session.get(url, timeout=30) as resp: # , headers=headers File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/client.py", line 1012, in __aenter__ self._resp = await self._coro File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/client.py", line 480, in _request conn = await self._connector.connect( File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 523, in connect proto = await self._create_connection(req, traces, timeout) File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 858, in _create_connection _, proto = await self._create_direct_connection( File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 1004, in _create_direct_connection raise last_exc File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 980, in _create_direct_connection transp, proto = await self._wrap_create_connection( File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 943, in _wrap_create_connection raise client_error(req.connection_key, exc) from exc aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host stackoverflow.com:443 ssl:default [Connect call failed ('151.101.193.69', 443)] Unclosed client session client_session: <aiohttp.client.ClientSession object at 0x7f25a71d1a90> aiohttp on hosting: Name: aiohttp Version: 3.6.2 Summary: Async http client/server framework (asyncio) Home-page: https://github.com/aio-libs/aiohttp Author: Nikolay Kim Author-email: [email protected] License: Apache 2 Location: /home/0dminnimda/.local/lib/python3.8/site-packages Requires: chardet, async-timeout, multidict, yarl, attrs Required-by: aiohttp on my PC: Name: aiohttp Version: 
3.6.2 Summary: Async http client/server framework (asyncio) Home-page: https://github.com/aio-libs/aiohttp Author: Nikolay Kim Author-email: [email protected] License: Apache 2 Location: c:\users\asus\appdata\roaming\python\python38\site-packages Requires: async-timeout, attrs, chardet, yarl, multidict Required-by: I am at a loss that it is not so? I am running both files using python3.8. I also tried other urls, they have the same problem Do I need to add any more details? | first solution Referring to the help from the forum, I added trust_env = True when creating the client and now everything works. Explanation: Free accounts on PythonAnywhere must use a proxy to connect to the public internet, but aiohttp, by default, does not connect to a proxy accessible from an environment variable. Link to aiohttp documentation (look for a parameter called "trust_env") Here is the new code: import asyncio from aiohttp import ClientSession async def main(): url = "https://stackoverflow.com/" async with ClientSession(trust_env=True) as session: async with session.get(url) as resp: print(resp.status) asyncio.run(main()) solution if the first didn't help you The domain you are trying to access must be in whitelist, otherwise you may also get this error. In this case you need to post a new topic on the pythonanywhere forum asking to add the domain to the whitelist. If this is an api, then you will need to provide a link to the documentation for this api. | 29 | 32 |
63,308,383 | 2020-8-7 | https://stackoverflow.com/questions/63308383/typeerrorkeyword-argument-not-understood-groups-in-keras-models-load-mod | After training a model using Google Colab, I downloaded it using the following command (inside Google Colab): model.save('model.h5') from google.colab import files files.download('model.h5') My problem is that when I try to load the downloaded model.h5 using my local machine (outside Google Colab), I get the following error: [input] from keras.models import load_model model = load_model(model.h5) [output] Traceback (most recent call last): File "test.py", line 2, in <module> model = load_model(filepath = 'saved_model/model2.h5',custom_objects=None,compile=True, ) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 184, in load_model return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 177, in load_model_from_hdf5 model = model_config_lib.model_from_config(model_config, File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config return deserialize(config, custom_objects=custom_objects) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/layers/serialization.py", line 105, in deserialize return deserialize_keras_object( File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 369, in deserialize_keras_object return cls.from_config( File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py", line 397, in from_config layer = layer_module.deserialize(layer_config, File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/layers/serialization.py", line 105, in deserialize return deserialize_keras_object( File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 375, in deserialize_keras_object return cls.from_config(cls_config) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 655, in from_config return cls(**config) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py", line 582, in __init__ super(Conv2D, self).__init__( File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py", line 121, in __init__ super(Conv, self).__init__( File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper result = method(self, *args, **kwargs) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 294, in __init__ generic_utils.validate_kwargs(kwargs, allowed_kwargs) File "/home/lucasmirachi/anaconda3/envs/myenviron/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 792, in validate_kwargs raise TypeError(error_message, kwarg) TypeError: ('Keyword argument not understood:', 'groups') Does anyone know what is this 'groups' keyword argument 
not understood? Instead of using from keras.models I have tried using from tensorflow.keras.models but I had no success, I got the same error. In both Google Colab and on my local machine I'm running Keras '2.4.3' Thank you all in advance! | I commented earlier saying I had the same exact error from doing the same exact thing. I just solved it by upgrading both tensorflow and keras on my local machine pip install --upgrade tensorflow pip install --upgrade keras The error was probably due to differing versions of the packages between Colab and local machine. Hope this works for you, too. | 11 | 14 |
63,272,417 | 2020-8-5 | https://stackoverflow.com/questions/63272417/pandas-groupby-drops-group-columns-after-fillna-in-1-1-0 | I have a piece of pandas code which used to work in version 1.0.5. Here's a simplified, self-contained example of my problem: import pandas as pd df = pd.DataFrame(data=[ ('bk1', 10), ('bk1', None), ('bk1', 13), ('bk1', None), ('bk2', None), ('bk2', 14), ('bk3', 12), ('bk3', None), ], columns=('book', 'price')) grouped = df.groupby(['book'], as_index=False, sort=False) df = grouped.fillna(method='ffill') print(df) In this example, we have a list of book sales, where some of the prices are missing. We're trying to fill in the missing data by using the previous row, where that row is the same book. In Pandas 1.0.5, this produces a dataframe with two columns: book price 0 bk1 10.0 1 bk1 10.0 2 bk1 13.0 3 bk1 13.0 4 bk2 NaN 5 bk2 14.0 6 bk3 12.0 7 bk3 12.0 In Pandas 1.1.0, this removes the book column, which makes the output unusable. price 0 10.0 1 10.0 2 13.0 3 13.0 4 NaN 5 14.0 6 12.0 7 12.0 I have read the patch notes for version 1.1.0, and I can't find any remarks about this change. Questions: Is this a bug in Pandas, or am I relying on undefined behavior? Is there a more natural way to express this? Questions you might ask: Why not use fillna without a groupby? In this example, the first row with bk2 has no price, but it wouldn't make any sense to fill it in with the previous row, which is the price of bk1. Why use ffill instead of dropping NA values? My real code is working with timeseries data, and ffill is the most natural way to express carrying forward the last known observation. | You could take a different approach to get around this issue (different from the solution proposed by Nick ODell) by using the update function: df.update(df.groupby(['book']).ffill()) print(df) Out[1]: book price 0 bk1 10.0 1 bk1 10.0 2 bk1 13.0 3 bk1 13.0 4 bk2 NaN 5 bk2 14.0 6 bk3 12.0 7 bk3 12.0 This also works in both versions. | 7 | 5 |
63,365,434 | 2020-8-11 | https://stackoverflow.com/questions/63365434/how-to-create-any-aws-lambda-python-layer-usage-example-with-xgboost | I am having trouble creating a lambda layer for the xgboost library. Im running: Im grabbing a zip of xgboost and it's dependencies from here (https://github.com/alexeybutyrev/aws_lambda_xgboost) and loading it into a layer. When I try to test my lambda, I get this error: Unable to import module 'lambda_function': No module named 'xgboost.core' It looks like __init__.py is trying to reference core.py via from .core import <stuff> Has anyone encountered this error with AWS Lambda before? | EDIT: As @Marcin has remark, the first answer provided works for packages under 262 MB large. A. Python Packages within Lambda Layer size limit You can also do it with AWS sam cli and Docker (see this link to install the SAM cli), to build the packages inside a container. Basically you initialize a default template with Python as runtime and then you specify the packages under the requirements.txt file. I found it more easy than the article you mentioned. I let you steps if you want to consider them for future use. 1. Initialize a default SAM template Under any folder that you want to keep the project, you can type sam init this will prompt a series of questions, for a quick set up we will be choosing the Quick Start Templates as follows 1 - AWS Quick Start Templates 2 - Python 3.8 Project name [sam-app]: your_project_name 1 - Hello World Example By choosing the Hello World Example it generates a default lambda function with a requirements.txt file. Now, we're going to edit with the name of the package that you want, in this case xgboost 2. Specify packages to install cd your_project_name code hello_world/requirements.txt as I have Visual Studio Code as editor, this will open the file on it. Now, I can specify the xgboost package your_python_package Here comes the reason to have Docker installed. Some packages relied on C++. Thus, it is recommended to build inside a container (case on Windows). Now, move to the folder where the template.yaml file is located. Then, type sam build -u 3. Zip packages there are some files that you do not want to be included in your lambda layer, because we only want to keep the python libraries. Thus, you could remove the following files rm .aws-sam/build/HelloWorldFunction/app.py rm .aws-sam/build/HelloWorldFunction/__init__.py rm .aws-sam/build/HelloWorldFunction/requirements.txt and then zip the remaining content of the folder. cp -r .aws-sam/build/HelloWorldFunction/ python/ zip -r my_layer.zip python/ where we place the layer in the python/ folder according to the docs On Windows system the zip command should be replaced with Compress-Archive my_layer/ my_layer.zip. 4. Upload your Layer to AWS On AWS go to Lambda, then choose Layers and Create Layer. Now, you can upload your .zip file as the image below shows Notice that for zip files over 50 MB, you should upload the .zip file to an s3 bucket and provide the path, for exampl, https://s3:amazonaws.com//mybucket/my_layer.zip. B. Python packages that exceeds Lambda Layer limits The xgboost package on its own is more than 300 MB and will throw the following error As @Marcin has kindly pointed out, the prior approach with SAM cli would not directly work for Python layers that exceed the limit. There's an open issue on github to specify a custom docker image when running sam build -u and a possible solution retagging the default lambda/lambci image. So, how could we pass through this?. 
There are already some useful resources that I would just point to. First, the Medium article that @Alex took as solution that follow this repo code. Second, alexeybutyrev approach that works by applying the strip command to reduce the libraries sizes. One can find this approach under a github repo, the instructions are provided. Edit (December 2020) This month AWS releases container Image support for AWS Lambda. Following the next tree structure for your project Project/ |-- app/ | |-- app.py | |-- requirements.txt | |-- xgb_trained.bin |-- Dockerfile You can deploy an XGBoost model with the following Docker image. Follow this repo instructions for a detailed explanation. # Dockerfile based on https://docs.aws.amazon.com/lambda/latest/dg/images-create.html # Define global args ARG FUNCTION_DIR="/function" ARG RUNTIME_VERSION="3.6" # Choose buster image FROM python:${RUNTIME_VERSION}-buster as base-image # Install aws-lambda-cpp build dependencies RUN apt-get update && \ apt-get install -y \ g++ \ make \ cmake \ unzip \ libcurl4-openssl-dev \ git # Include global arg in this stage of the build ARG FUNCTION_DIR # Create function directory RUN mkdir -p ${FUNCTION_DIR} # Copy function code COPY app/* ${FUNCTION_DIR}/ # Install python dependencies and runtime interface client RUN python${RUNTIME_VERSION} -m pip install \ --target ${FUNCTION_DIR} \ --no-cache-dir \ awslambdaric \ -r ${FUNCTION_DIR}/requirements.txt # Install xgboost from source RUN git clone --recursive https://github.com/dmlc/xgboost RUN cd xgboost; make -j4; cd python-package; python${RUNTIME_VERSION} setup.py install; cd; # Multi-stage build: grab a fresh copy of the base image FROM base-image # Include global arg in this stage of the build ARG FUNCTION_DIR # Set working directory to function root directory WORKDIR ${FUNCTION_DIR} # Copy in the build image dependencies COPY --from=base-image ${FUNCTION_DIR} ${FUNCTION_DIR} ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ] CMD [ "app.handler" ] | 8 | 8 |
63,342,767 | 2020-8-10 | https://stackoverflow.com/questions/63342767/command-errored-out-with-exit-status-1-python-setup-py-egg-info-check-the-logs | I am trying to install mysqlclient for python, but I always get this error when I try: $ pip3 install mysqlclient How can I resolve this issue? | Looking at the error message, it seems that there is an OS Error, and judging by the terminal layout, you're using Linux. It seems that to install this package on Linux, there are extra instructions to follow, mentioned on the module's github page: You may need to install the Python 3 and MySQL development headers and libraries like so: $ sudo apt-get install python3-dev default-libmysqlclient-dev build-essential # Debian / Ubuntu % sudo yum install python3-devel mysql-devel # Red Hat / CentOS Then you can install mysqlclient via pip now: $ pip install mysqlclient | 6 | 24 |
63,279,168 | 2020-8-6 | https://stackoverflow.com/questions/63279168/valueerror-input-0-of-layer-sequential-is-incompatible-with-the-layer-expect | I keep on getting this error related to input shape. Any help would be highly appreciated. Thanks! import tensorflow as tf (xtrain, ytrain), (xtest, ytest) = tf.keras.datasets.mnist.load_data() model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16, kernel_size=3, activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=2), tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics='accuracy') history = model.fit(xtrain, ytrain, validation_data=(xtest, ytest), epochs=10, batch_size=8) ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [8, 28, 28] | The input layer of the model you created needs a 4-dimensional tensor to work with, but the x_train tensor you are passing to it has only 3 dimensions. This means that you have to reshape your training set with .reshape(n_images, 28, 28, 1). Now you have added an extra dimension without changing the data, and your model is ready to run. You need to reshape your x_train tensor to 4 dimensions before training your model, for example: x_train = x_train.reshape(-1, 28, 28, 1) For more info on Keras inputs, check this answer | 23 | 20 |
63,277,123 | 2020-8-6 | https://stackoverflow.com/questions/63277123/what-does-the-error-message-about-pip-use-feature-2020-resolver-mean | I'm trying to install jupyter on Ubuntu 16.04.6 x64 on DigitalOcean droplet. It is giving me the following error message, and I can't understand what this means. ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default. jsonschema 3.2.0 requires six>=1.11.0, but you'll have six 1.10.0 which is incompatible Any help would be greatly appreciated! | According to this announcement, pip will introduce a new dependency resolver in October 2020, which will be more robust but might break some existing setups. Therefore they are suggesting users to try running their pip install scripts at least once (in dev mode) with this option: --use-feature=2020-resolver to anticipate any potential issue before the new resolver becomes the default in October 2020 with pip version 20.3. On behalf of the PyPA, I am pleased to announce that we have just released pip 20.2, a new version of pip. You can install it by running python -m pip install --upgrade pip. The highlights for this release are: The beta of the next-generation dependency resolver is available Faster installations from wheel files Improved handling of wheels containing non-ASCII file contents Faster pip list using parallelized network operations Installed packages now contain metadata about whether they were directly requested by the user (PEP 376’s REQUESTED file) The new dependency resolver is off by default because it is not yet ready for everyday use. The new dependency resolver is significantly stricter and more consistent when it receives incompatible instructions, and reduces support for certain kinds of constraints files, so some workarounds and workflows may break. Please test it with the --use-feature=2020-resolver flag. Please see our guide on how to test and migrate, and how to report issues . We are preparing to change the default dependency resolution behavior and make the new resolver the default in pip 20.3 (in October 2020). | 87 | 64 |
63,326,840 | 2020-8-9 | https://stackoverflow.com/questions/63326840/specifying-command-line-scripts-in-pyproject-toml | I'm trying to add a pyproject.toml to a project that's been using setup.py in order to enable support by pipx. I'd like to specify the command line scripts the project includes in pyproject.toml, but all the guides I can find give instructions for use with poetry, which I am not using. I also don't want to specify entry points to modules - I already have working command line scripts and just want to specify those. Is there a proper place in pyproject.toml to specify command line scripts? Not sure it matters, but the package in question is cutlet. | Is there a proper place in pyproject.toml to specify command line scripts? PEP 566 (Metadata 2.1) only defines the core metadata specifications. Thus, the answer depends on your build system (note: PEP 518 defines the build-system concept). If you use an existing build tool such as setuptools, poetry, or flit, you can only add such options in pyproject.toml if that tool supports command line scripts (console_scripts) in pyproject.toml. Of course, if you have your own build tool, you need to implement a parser for the command line scripts in pyproject.toml. Lastly, you can check the list below to see which major build systems support command line scripts (console_scripts) in pyproject.toml (as of Oct 2020): setuptools: not implemented yet according to PEP 621 poetry: yes, here's part of the implementation. flit: yes. | 61 | 14 |
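For illustration only, here is roughly what a console-script declaration looks like in pyproject.toml for backends that support it; the cutlet.cli:main entry point is an assumed placeholder, and the exact table names should be checked against the version of the tool you use:

```toml
# With poetry as the build backend:
[tool.poetry.scripts]
cutlet = "cutlet.cli:main"

# With a PEP 621-style backend (e.g. recent flit), the equivalent would be:
# [project.scripts]
# cutlet = "cutlet.cli:main"
```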
63,272,437 | 2020-8-5 | https://stackoverflow.com/questions/63272437/how-can-i-send-a-message-to-someone-with-telegram-api-using-my-own-account | It's awesome how google something can be annoying when you can't find the right words. I found a million answers on how about to create a Telegram Bot to send and receive messages, and it's easy as write maybe five code lines. But how about managing my own account? I want to know if it it's posible, using Python (telepot or other library), to retrieve my personal messages and send messages from my PERSONAL account, not using a bot. If it's possible, where can I find more information about that | Telegram has a thorough and documented public API. Following some links from there, here is the summary of the relevant parts: the API is not restricted to bots, they are just a (special) kind of users; the API has methods called getMessages and sendMessage, that should be what you need; to call the API, Telegram recommends to use the dedicated library TDLib available for multiple programming languages. There are several examples available on GitHub Among the examples, if you go the the Python part, they recommend: If you use modern Python >= 3.6, take a look at python-telegram. You'll find instructions to use the library, and in the examples folder you can find a script to send a message. I'll copy it here for the sake of completeness: import logging import argparse from utils import setup_logging from telegram.client import Telegram """ Sends a message to a chat Usage: python examples/send_message.py api_id api_hash phone chat_id text """ if __name__ == '__main__': setup_logging(level=logging.INFO) parser = argparse.ArgumentParser() parser.add_argument('api_id', help='API id') # https://my.telegram.org/apps parser.add_argument('api_hash', help='API hash') parser.add_argument('phone', help='Phone') parser.add_argument('chat_id', help='Chat id', type=int) parser.add_argument('text', help='Message text') args = parser.parse_args() tg = Telegram( api_id=args.api_id, api_hash=args.api_hash, phone=args.phone, database_encryption_key='changeme1234', ) # you must call login method before others tg.login() # if this is the first run, library needs to preload all chats # otherwise the message will not be sent result = tg.get_chats() # `tdlib` is asynchronous, so `python-telegram` always returns you an `AsyncResult` object. # You can wait for a result with the blocking `wait` method. result.wait() if result.error: print(f'get chats error: {result.error_info}') else: print(f'chats: {result.update}') result = tg.send_message( chat_id=args.chat_id, text=args.text, ) result.wait() if result.error: print(f'send message error: {result.error_info}') else: print(f'message has been sent: {result.update}') Of course you'll need to explore the documentation to get what are all those variables / ids in your case, but it will get you started! | 30 | 30 |
63,341,773 | 2020-8-10 | https://stackoverflow.com/questions/63341773/pipenv-install-giving-failed-to-load-paths-errors | I am running pipenv install --dev which is giving me the following errors Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning. Installing dependencies from Pipfile.lock (2df4c1)… Failed to load paths: /bin/sh: /Users/XXXX/.local/share/virtualenvs/my-service-enGYxXYk/bin/python: No such file or directory Output: Failed to load paths: /bin/sh: /Users/XXXX/.local/share/virtualenvs/my-service-enGYxXYk/bin/python: No such file or directory Output: Failed to load paths: /bin/sh: /Users/XXXX/.local/share/virtualenvs/my-service-enGYxXYk/bin/python: No such file or directory I don't really want to change the command around I would rather solve the underlying issue as it is part of a package.json file in a project others are using rather than something i am just trying to run on my own machine.. Thanks | Remove your Pipfile.lock and try rerunning pipenv install to rebuild your dependencies from your Pipfile. It is looking for a virtual environment that does not exist. By removing your Pipfile.lock, you force pipenv to create a new environment. | 10 | 7 |
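Put as commands (these simply restate the answer; run them from the project root where the Pipfile lives):

```bash
rm Pipfile.lock        # drop the lock that still points at the missing virtualenv
pipenv install --dev   # rebuild the environment and regenerate Pipfile.lock from the Pipfile
```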
63,361,579 | 2020-8-11 | https://stackoverflow.com/questions/63361579/aot-compiler-for-python | I want to get my Python script working on a bare metal device like a microcontroller WITHOUT the need for an interpreter. I know there are already JIT compilers for Python like PyPy, and interpreters like CPython. However, existing interpreters I've seen (such as CPython) take up large memory (in the MB range). Is there an AOT compiler for Python (i.e. compiling directly to native hardware through an intermediary like LLVM)? I assume such a compiler would enable Python to run much faster compared to existing implementations AND with a lower memory footprint. If there is, I wonder why that solution hasn't been popularized. | As you already mentioned, Cython is an option (however, it is true that the result is big, since the C runtime needs to implement the Python functionality together with your program). With regards to LLVM there was a project by Google named Unladen Swallow. However, that project is mostly abandoned. You can find some information about it here. Basically it was an attempt to bring LLVM optimizations into the runtime of CPython, i.e. JIT-compiling Python code. Another old alternative was Shed Skin, which compiles Python to C++. Some information about it can be found here. Yet another option, similar to Shed Skin, is to restrict yourself to a subset of the Python language and use MicroPython. | 7 | 7 |
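As a concrete illustration of the Cython route (the module name is hypothetical, and note that the produced extension still links against the CPython runtime, which is the size caveat mentioned in the answer):

```python
# setup.py -- ahead-of-time compiles firmware_logic.pyx (hypothetical module) to C,
# which a C compiler then turns into a native extension.
# Build with:  python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="firmware_logic",
    ext_modules=cythonize("firmware_logic.pyx"),
)
```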
63,338,424 | 2020-8-10 | https://stackoverflow.com/questions/63338424/how-to-append-new-argument-after-executing-parse-args-for-argparse-in-python | I want to use the argparse module in the following way, from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument('-b', dest='binKey', type=str) args = parser.parse_args() # I will make use of args.binKey option in this space print args.binKey # After that I want to add -d option to the arguments parser.add_argument('-d', dest='dir', type=str) args = parser.parse_args() With this example, I can only provide the -b option as shown below in the help text. $$ python test.py -h usage: test.py [-h] [-b BINKEY] optional arguments: -h, --help show this help message and exit -b BINKEY What I want is to be able to provide both options while running this code, and those options should also be visible in the --help output. NOTE: I know that I can provide both options before executing parse_args() once, but that is not the way I want to use my parser. | With the help from @hpaulj's comment, here is the resolution: Turn off the help option while creating the ArgumentParser object. You can call parse_known_args multiple times in between. Add the help option at the end. Below is a snippet of the code from argparse import ArgumentParser parser = ArgumentParser(add_help=False) parser.add_argument('-b', dest='binKey', type=str) args = parser.parse_known_args()[0] # Here you can make use of args.binKey option in this space print args.binKey # You can add more options after this, as parser.add_argument('-d', dest='dir', type=str) parser.add_argument('-h', '--help', action='help', default='==SUPPRESS==', help=('show this help message and exit')) args = parser.parse_args() | 7 | 6 |
63,261,658 | 2020-8-5 | https://stackoverflow.com/questions/63261658/get-environment-variables-in-a-cloud-function | I have a Cloud Function in GCP that queries BigQuery in a specific project/environment. As I have multiple environments, I would like to get the current project/environment of the cloud function. This is so that I can access BigQuery in the corresponding environment. Of course I could just hardcode the project_id, but I would like to do this programmatically. According to Google, environment variables are set automatically. But when I try to access those I cannot find any of them. For instance, I have tried the following, which gives me none of the env vars listed by Google. print(os.environ) Has anyone managed to access these environment variables at runtime? | Those environment variables you are referring to only apply to Python 3.7; the second section on that page (https://cloud.google.com/functions/docs/env-var#nodejs_10_and_subsequent_runtimes) states: All languages and runtimes other than those mentioned in the previous section will use this more limited set of predefined environment variables. This means GCP_PROJECT is no longer set with 3.8 (at least for now). | 11 | 6 |
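A common workaround (an addition, not part of the accepted answer) is to ask the google-auth library for the default project at runtime instead of relying on a predefined environment variable; this is a sketch assuming google-auth is available, which it is whenever google-cloud-bigquery is a dependency:

```python
import google.auth
from google.cloud import bigquery

def handler(request):
    # default() returns (credentials, project_id) for the service account the
    # Cloud Function runs as, i.e. the project it is deployed in.
    _, project_id = google.auth.default()

    # The BigQuery client can be pointed at that same project explicitly.
    client = bigquery.Client(project=project_id)
    return f"Querying BigQuery in project {project_id}"
```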
63,300,859 | 2020-8-7 | https://stackoverflow.com/questions/63300859/python-location-show-distance-from-closest-other-location | I am a location in a dataframe, underneath lat lon column names. I want to show how far that is from the lat lon of the nearest train station in a separate dataframe. So for example, I have a lat lon of (37.814563 144.970267), and i have a list as below of other geospatial points. I want to find the point that is closest and then find the distance between those points, as an extra column in the dataframe in suburbs. This is the example of the train dataset <bound method NDFrame.to_clipboard of STOP_ID STOP_NAME LATITUDE \ 0 19970 Royal Park Railway Station (Parkville) -37.781193 1 19971 Flemington Bridge Railway Station (North Melbo... -37.788140 2 19972 Macaulay Railway Station (North Melbourne) -37.794267 3 19973 North Melbourne Railway Station (West Melbourne) -37.807419 4 19974 Clifton Hill Railway Station (Clifton Hill) -37.788657 LONGITUDE TICKETZONE ROUTEUSSP \ 0 144.952301 1 Upfield 1 144.939323 1 Upfield 2 144.936166 1 Upfield 3 144.942570 1 Flemington,Sunbury,Upfield,Werribee,Williamsto... 4 144.995417 1 Mernda,Hurstbridge geometry 0 POINT (144.95230 -37.78119) 1 POINT (144.93932 -37.78814) 2 POINT (144.93617 -37.79427) 3 POINT (144.94257 -37.80742) 4 POINT (144.99542 -37.78866) > and this is an example of the suburbs <bound method NDFrame.to_clipboard of postcode suburb state lat lon 4901 3000 MELBOURNE VIC -37.814563 144.970267 4902 3002 EAST MELBOURNE VIC -37.816640 144.987811 4903 3003 WEST MELBOURNE VIC -37.806255 144.941123 4904 3005 WORLD TRADE CENTRE VIC -37.822262 144.954856 4905 3006 SOUTHBANK VIC -37.823258 144.965926> Which I am trying to show, is the distance from the lat lon to the closet train station in a new column for the suburb list. Using a solution get a weird output, wondering if it's correct? With both solutions shown, from sklearn.neighbors import NearestNeighbors from haversine import haversine NN = NearestNeighbors(n_neighbors=1, metric='haversine') NN.fit(trains_shape[['LATITUDE', 'LONGITUDE']]) indices = NN.kneighbors(df_complete[['lat', 'lon']])[1] indices = [index[0] for index in indices] distances = NN.kneighbors(df_complete[['lat', 'lon']])[0] df_complete['closest_station'] = trains_shape.iloc[indices]['STOP_NAME'].reset_index(drop=True) df_complete['closest_station_distances'] = distances print(df_complete) The output here, <bound method NDFrame.to_clipboard of postcode suburb state lat lon Venues Cluster \ 1 3040 aberfeldie VIC -37.756690 144.896259 4.0 2 3042 airport west VIC -37.711698 144.887037 1.0 4 3206 albert park VIC -37.840705 144.955710 0.0 5 3020 albion VIC -37.775954 144.819395 2.0 6 3078 alphington VIC -37.780767 145.031160 4.0 #1 #2 #3 \ 1 Café Electronics Store Grocery Store 2 Fast Food Restaurant Café Supermarket 4 Café Pub Coffee Shop 5 Café Fast Food Restaurant Grocery Store 6 Café Park Bar #4 ... #6 \ 1 Coffee Shop ... Bakery 2 Grocery Store ... Italian Restaurant 4 Breakfast Spot ... Burger Joint 5 Vietnamese Restaurant ... Pub 6 Pizza Place ... 
Vegetarian / Vegan Restaurant #7 #8 #9 \ 1 Shopping Mall Japanese Restaurant Indian Restaurant 2 Portuguese Restaurant Electronics Store Middle Eastern Restaurant 4 Bar Bakery Gastropub 5 Chinese Restaurant Gym Bakery 6 Italian Restaurant Gastropub Bakery #10 Ancestry Cluster ClosestStopId \ 1 Greek Restaurant 8.0 20037 2 Convenience Store 5.0 20032 4 Beach 6.0 22180 5 Convenience Store 5.0 20004 6 Coffee Shop 5.0 19931 ClosestStopName \ 1 Essendon Railway Station (Essendon) 2 Glenroy Railway Station (Glenroy) 4 Southern Cross Railway Station (Melbourne City) 5 Albion Railway Station (Sunshine North) 6 Alphington Railway Station (Alphington) closest_station closest_station_distances 1 Glenroy Railway Station (Glenroy) 0.019918 2 Southern Cross Railway Station (Melbourne City) 0.031020 4 Alphington Railway Station (Alphington) 0.023165 5 Altona Railway Station (Altona) 0.005559 6 Newport Railway Station (Newport) 0.002375 And the second function. def ClosestStop(r): # Cartesin Distance: square root of (x2-x2)^2 + (y2-y1)^2 distances = ((r['lat']-StationDf['LATITUDE'])**2 + (r['lon']-StationDf['LONGITUDE'])**2)**0.5 # Stop with minimum Distance from the Suburb closestStationId = distances[distances == distances.min()].index.to_list()[0] return StationDf.loc[closestStationId, ['STOP_ID', 'STOP_NAME']] df_complete[['ClosestStopId', 'ClosestStopName']] = df_complete.apply(ClosestStop, axis=1) This is giving different answers oddly enough, and leads me to think that there is an issue with this code. the KM's seem wrong as well. Completely unsure how to approach this problem - would love some guidance here, thanks! | A few key concepts do a Cartesian product between two data frames to get all combinations (joining on identical value between two data frames is approach to this foo=1) once both sets of data is together, have both sets of lat/lon to calculate distance) geopy has been used for this cleanup the columns, use sort_values() to find smallest distance finally a groupby() and agg() to get first values for shortest distance There are two data frames for use dfdist contains all the combinations and distances dfnearest which contains result dfstat = pd.DataFrame({'STOP_ID': ['19970', '19971', '19972', '19973', '19974'], 'STOP_NAME': ['Royal Park Railway Station (Parkville)', 'Flemington Bridge Railway Station (North Melbo...', 'Macaulay Railway Station (North Melbourne)', 'North Melbourne Railway Station (West Melbourne)', 'Clifton Hill Railway Station (Clifton Hill)'], 'LATITUDE': ['-37.781193', '-37.788140', '-37.794267', '-37.807419', '-37.788657'], 'LONGITUDE': ['144.952301', '144.939323', '144.936166', '144.942570', '144.995417'], 'TICKETZONE': ['1', '1', '1', '1', '1'], 'ROUTEUSSP': ['Upfield', 'Upfield', 'Upfield', 'Flemington,Sunbury,Upfield,Werribee,Williamsto...', 'Mernda,Hurstbridge'], 'geometry': ['POINT (144.95230 -37.78119)', 'POINT (144.93932 -37.78814)', 'POINT (144.93617 -37.79427)', 'POINT (144.94257 -37.80742)', 'POINT (144.99542 -37.78866)']}) dfsub = pd.DataFrame({'id': ['4901', '4902', '4903', '4904', '4905'], 'postcode': ['3000', '3002', '3003', '3005', '3006'], 'suburb': ['MELBOURNE', 'EAST MELBOURNE', 'WEST MELBOURNE', 'WORLD TRADE CENTRE', 'SOUTHBANK'], 'state': ['VIC', 'VIC', 'VIC', 'VIC', 'VIC'], 'lat': ['-37.814563', '-37.816640', '-37.806255', '-37.822262', '-37.823258'], 'lon': ['144.970267', '144.987811', '144.941123', '144.954856', '144.965926']}) import geopy.distance # cartesian product so we get all combinations dfdist = 
(dfsub.assign(foo=1).merge(dfstat.assign(foo=1), on="foo") # calc distance in km between each suburb and each train station .assign(km=lambda dfa: dfa.apply(lambda r: geopy.distance.geodesic( (r["LATITUDE"],r["LONGITUDE"]), (r["lat"],r["lon"])).km, axis=1)) # reduce number of columns to make it more digestable .loc[:,["postcode","suburb","STOP_ID","STOP_NAME","km"]] # sort so shortest distance station from a suburb is first .sort_values(["postcode","suburb","km"]) # good practice .reset_index(drop=True) ) # finally pick out stations nearest to suburb # this can easily be joined back to source data frames as postcode and STOP_ID have been maintained dfnearest = dfdist.groupby(["postcode","suburb"])\ .agg({"STOP_ID":"first","STOP_NAME":"first","km":"first"}).reset_index() print(dfnearest.to_string(index=False)) dfnearest output postcode suburb STOP_ID STOP_NAME km 3000 MELBOURNE 19973 North Melbourne Railway Station (West Melbourne) 2.564586 3002 EAST MELBOURNE 19974 Clifton Hill Railway Station (Clifton Hill) 3.177320 3003 WEST MELBOURNE 19973 North Melbourne Railway Station (West Melbourne) 0.181463 3005 WORLD TRADE CENTRE 19973 North Melbourne Railway Station (West Melbourne) 1.970909 3006 SOUTHBANK 19973 North Melbourne Railway Station (West Melbourne) 2.705553 an approach to reducing size of tested combinations # pick nearer places, based on lon/lat then all combinations dfdist = (dfsub.assign(foo=1, latr=dfsub["lat"].round(1), lonr=dfsub["lon"].round(1)) .merge(dfstat.assign(foo=1, latr=dfstat["LATITUDE"].round(1), lonr=dfstat["LONGITUDE"].round(1)), on=["foo","latr","lonr"]) # calc distance in km between each suburb and each train station .assign(km=lambda dfa: dfa.apply(lambda r: geopy.distance.geodesic( (r["LATITUDE"],r["LONGITUDE"]), (r["lat"],r["lon"])).km, axis=1)) # reduce number of columns to make it more digestable .loc[:,["postcode","suburb","STOP_ID","STOP_NAME","km"]] # sort so shortest distance station from a suburb is first .sort_values(["postcode","suburb","km"]) # good practice .reset_index(drop=True) ) | 10 | 7 |
63,314,452 | 2020-8-8 | https://stackoverflow.com/questions/63314452/python-autopep8-formatting-not-working-with-max-line-length-parameter | I noticed one strange thing that autopep8 autoformatting in VSCode doesn't work when we set "python.formatting.autopep8Args": [ "--line-length 119" ], But if this setting is in a default mode that is line length 79 then it works well. Is there some issue with autopep8 to work only with line length 79 not more than that, or I am making any mistake in VSCode. Major feature that I need is when my python program line goes too long it should be able to break it in multiple lines. I don't want to go ahead with 79 characters' approach. My preferred approach is 119 characters. Currently, I have to indent big lines manually. Is there any other format apart from pep8 which supports 119 characters and indent lines with characters more than 119 characters automatically. I am attaching my settings.json file data { "window.zoomLevel": 1, "python.dataScience.sendSelectionToInteractiveWindow": true, "diffEditor.ignoreTrimWhitespace": false, "editor.fontSize": 16, "python.formatting.provider": "autopep8", "editor.formatOnPaste": true, "editor.formatOnSave": true, "editor.formatOnType": true, "python.autoComplete.addBrackets": true, "python.formatting.autopep8Args": [ "--line-length 119" ], // "python.linting.flake8Args": [ // "--max-line-length=120" // ], "files.autoSaveDelay": 10000, "editor.defaultFormatter": "ms-python.python", "files.autoSave": "afterDelay", "files.trimTrailingWhitespace": true, "files.trimFinalNewlines": true, "editor.quickSuggestions": true, "editor.codeActionsOnSave": null, } | experimental worked for me "python.formatting.autopep8Args": ["--max-line-length", "120", "--experimental"] check out this link for proper format specifier settings | 29 | 66 |
63,320,653 | 2020-8-8 | https://stackoverflow.com/questions/63320653/preventing-namespace-collisions-between-private-and-pypi-based-python-packages | We have 100+ private packages and so far we've been using s3pypi to set up a private pypi in an s3 bucket. Our private packages have dependencies on each other (and on public packages), and it is (of course) important that our GitLab pipelines find the latest functional version of packages it relies on. I.e. we're not interested in the latest checked in code. We create new wheels only after tests and qa has run against a push to master (which is a long-winded way of explaining that -e <vcs> requirements will not work). Our setup works really well until someone creates a new public package on the official pypi that shadows one of our package names. We can force our private package to be chosen by increasing the version number so it is higher than the new package on pypi.org - or by renaming our package to something that haven't yet been taken on pypi.org. This is obviously a hacky and fragile solution, but apparently the functionality is this way by-design. After the initial bucket setup s3pypi has required no maintenance or administration. The above ticket suggests using devpi but that seems like a very heavy solution that requires administration/monitoring/etc. GitLab's pypi solution seems to be at individual package level (meaning we'd have to list up to 100+ urls - one for each package). This doesn't seem practical, but maybe I'm not understanding something (I can see the package registry menu under our group as well, but the docs point to the "package-pypi" docs). We can't be the first small company that has faced this issue..? Is there a better way than to register dummy versions of all our packages on pypi.org (with version=0.0.1, so the s3pypi version will be preferred)? | It might not be the solution for you, but I tell what we do. Prefix the package names, and using namespaces (eg. company.product.tool). When we install our packages (including their in-house dependencies), we use a requirements.txt file including our PyPI URL. We run everything in container(s) and we install all public dependencies in them when we are building the images. | 37 | 15 |
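To make the answer above concrete, a hedged sketch of what a requirements file pointed at a private index can look like — the index URL and package names are placeholders, not values from the original post:

```text
# requirements.txt — private index first, public PyPI as a fallback
--index-url https://pypi.internal.example.com/simple/
--extra-index-url https://pypi.org/simple/

company.product.tool==1.4.2   # in-house, namespaced package
requests>=2.24                # public dependency
```

Note that pip still considers every configured index when resolving versions, which is exactly where the shadowing problem comes from; the prefixed/namespaced names are what make a public collision unlikely.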
63,325,727 | 2020-8-9 | https://stackoverflow.com/questions/63325727/pandas-resample-a-dataframe-to-match-a-datetimeindex-of-a-different-dataframe | I have a two time series in separate pandas.dataframe, the first one - series1 has less entries and different start datatime from the second - series2: index1 = pd.date_range(start='2020-06-16 23:16:00', end='2020-06-16 23:40:30', freq='1T') series1 = pd.Series(range(len(index1)), index=index1) index2 = pd.date_range('2020-06-16 23:15:00', end='2020-06-16 23:50:30', freq='30S') series2 = pd.Series(range(len(index2)), index=index2) How can I resample series2 to match the DatetimeIndex of series1? | Use reindex: series2.reindex(series1.index) Output: 2020-06-16 23:16:00 2 2020-06-16 23:17:00 4 2020-06-16 23:18:00 6 2020-06-16 23:19:00 8 2020-06-16 23:20:00 10 2020-06-16 23:21:00 12 2020-06-16 23:22:00 14 2020-06-16 23:23:00 16 2020-06-16 23:24:00 18 2020-06-16 23:25:00 20 2020-06-16 23:26:00 22 2020-06-16 23:27:00 24 2020-06-16 23:28:00 26 2020-06-16 23:29:00 28 2020-06-16 23:30:00 30 2020-06-16 23:31:00 32 2020-06-16 23:32:00 34 2020-06-16 23:33:00 36 2020-06-16 23:34:00 38 2020-06-16 23:35:00 40 2020-06-16 23:36:00 42 2020-06-16 23:37:00 44 2020-06-16 23:38:00 46 2020-06-16 23:39:00 48 2020-06-16 23:40:00 50 Freq: T, dtype: int64 | 9 | 8 |
63,367,559 | 2020-8-11 | https://stackoverflow.com/questions/63367559/how-to-fix-usr-local-bin-virtualenv-usr-bin-python-bad-interpreter-no-such | When I tried to use virtualenv on Ubuntu 18.04, I got this error: bash: /usr/local/bin/virtualenv: /usr/bin/python: bad interpreter: No such file or directory Python 2 and 3 are working fine: josir@desenv16:~/bin$ which python3 /usr/bin/python3 josir@desenv16:~/bin$ python3 Python 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0] on linux I've already tried to uninstall virtualenv: sudo apt-get purge --auto-remove virtualenv sudo apt-get purge --auto-remove python-virtualenv sudo apt-get purge --auto-remove python3-virtualenv But when I installed it again, the error remained. | bash: /usr/local/bin/virtualenv: /usr/bin/python: bad interpreter: No such file or directory The error is in '/usr/local/bin/virtualenv' — its first line (shebang) is #!/usr/bin/python and there is no such file on your system. I believe the stream of events that led to this situation is: you've installed virtualenv with pip (not apt) long ago and put /usr/local/bin at the front of your $PATH. Then you upgraded your system; the upgrade removed /usr/bin/python, so now you only have /usr/bin/python3. Now you have to decide which route to take: apt or pip. If you choose apt — remove /usr/local/bin/virtualenv. If you choose pip: my advice is to uninstall as many Python packages installed with apt as possible; reinstall virtualenv; that should be the only additional package installed with apt. For every project/task create a virtual environment and install packages with pip. PS. Personal experience: I switched from the apt way to pip a few years ago. PPS. Avoid using sudo pip — do not clobber the system installation. Either install into virtual environments or use pip install --user. | 7 | 4 |
63,361,267 | 2020-8-11 | https://stackoverflow.com/questions/63361267/plotly-how-to-update-plotly-data-using-dropdown-list-for-line-graph | I am trying to add a dropdown menu to a plotly line graph that updates the graph data source when selected. My data has 3 columns and looks as such: 1 Country Average House Price (£) Date 0 Northern Ireland 47101.0 1992-04-01 1 Northern Ireland 49911.0 1992-07-01 2 Northern Ireland 50174.0 1992-10-01 3 Northern Ireland 46664.0 1993-01-01 4 Northern Ireland 48247.0 1993-04-01 The Country column contains the 4 countries in the United Kingdom and is used for creating the separate lines for each using the color parameter. I have 4 different dataframes for different housing types such as all_dwellings, first_timebuyers and when trying to specify the updatemenus args it seems I cannot use the dataframe format. Here is the code to create the entire graph. lineplt = px.line(data_frame = all_dwellings, x='Date', y='Average House Price (£)', color= 'Country', hover_name='Country', color_discrete_sequence=['rgb(23, 153, 59)','rgb(214, 163, 21)','rgb(40, 48, 165)', 'rgb(210, 0, 38)'] ) updatemenus = [ {'buttons': [ { 'method': 'restyle', 'label': 'All Dwellings', 'args': [{'data_frame': all_dwellings}] }, { 'method': 'restyle', 'label': 'First Time Buyers', 'args': [{'data_frame': first_buyers}] } ], 'direction': 'down', 'showactive': True, } ] lineplt = lineplt.update_layout( title_text='Average House Price in UK (£)', title_x=0.5, #plot_bgcolor= 'rgb(194, 208, 209)', xaxis_showgrid=False, yaxis_showgrid=False, hoverlabel=dict(font_size=10, bgcolor='rgb(69, 95, 154)', bordercolor= 'whitesmoke'), legend=dict(title='Please click legend item to remove <br>or add to plot', x=0, y=1, traceorder='normal', bgcolor='LightSteelBlue', xanchor = 'auto'), updatemenus=updatemenus ) lineplt = lineplt.update_traces(mode="lines", hovertemplate= 'Date = %{x} <br>' + 'Price = £%{y:.2f}') lineplt.show() However I get the following error: TypeError: Object of type DataFrame is not JSON serializable All the examples seem to convert items to lists but this doesn't seem to work with dataframe format. Could anybody help? If question not clear then please let me know. 
EDIT - output of all_dwellings.head(20).to_dict() {'Country': {0: 'Northern Ireland ', 1: 'Northern Ireland ', 2: 'Northern Ireland ', 3: 'Northern Ireland ', 4: 'Northern Ireland ', 5: 'Northern Ireland ', 6: 'Northern Ireland ', 7: 'Northern Ireland ', 8: 'Northern Ireland ', 9: 'Northern Ireland ', 10: 'Northern Ireland ', 11: 'Northern Ireland ', 12: 'Northern Ireland ', 13: 'Northern Ireland ', 14: 'Northern Ireland ', 15: 'Northern Ireland ', 16: 'Northern Ireland ', 17: 'Northern Ireland ', 18: 'Northern Ireland ', 19: 'Northern Ireland '}, 'Average House Price (£)': {0: 47101.0, 1: 49911.0, 2: 50174.0, 3: 46664.0, 4: 48247.0, 5: 54891.0, 6: 53773.0, 7: 57594.0, 8: 49804.0, 9: 58586.0, 10: 55154.0, 11: 55413.0, 12: 60239.0, 13: 59094.0, 14: 57131.0, 15: 61849.0, 16: 61951.0, 17: 61595.0, 18: 68705.0, 19: 74869.0}, 'Date': {0: Timestamp('1992-04-01 00:00:00'), 1: Timestamp('1992-07-01 00:00:00'), 2: Timestamp('1992-10-01 00:00:00'), 3: Timestamp('1993-01-01 00:00:00'), 4: Timestamp('1993-04-01 00:00:00'), 5: Timestamp('1993-07-01 00:00:00'), 6: Timestamp('1993-10-01 00:00:00'), 7: Timestamp('1994-01-01 00:00:00'), 8: Timestamp('1994-04-01 00:00:00'), 9: Timestamp('1994-07-01 00:00:00'), 10: Timestamp('1994-10-01 00:00:00'), 11: Timestamp('1995-01-01 00:00:00'), 12: Timestamp('1995-04-01 00:00:00'), 13: Timestamp('1995-07-01 00:00:00'), 14: Timestamp('1995-10-01 00:00:00'), 15: Timestamp('1996-01-01 00:00:00'), 16: Timestamp('1996-04-01 00:00:00'), 17: Timestamp('1996-07-01 00:00:00'), 18: Timestamp('1996-10-01 00:00:00'), 19: Timestamp('1997-01-01 00:00:00')}} Output of first_buyers {'Country': {0: 'Northern Ireland ', 1: 'Northern Ireland ', 2: 'Northern Ireland ', 3: 'Northern Ireland ', 4: 'Northern Ireland ', 5: 'Northern Ireland ', 6: 'Northern Ireland ', 7: 'Northern Ireland ', 8: 'Northern Ireland ', 9: 'Northern Ireland ', 10: 'Northern Ireland ', 11: 'Northern Ireland ', 12: 'Northern Ireland ', 13: 'Northern Ireland ', 14: 'Northern Ireland ', 15: 'Northern Ireland ', 16: 'Northern Ireland ', 17: 'Northern Ireland ', 18: 'Northern Ireland ', 19: 'Northern Ireland '}, 'Average House Price (£)': {0: 29280.0, 1: 32690.0, 2: 29053.0, 3: 30241.0, 4: 31032.0, 5: 31409.0, 6: 31299.0, 7: 28922.0, 8: 28621.0, 9: 31519.0, 10: 33497.0, 11: 35861.0, 12: 32472.0, 13: 34493.0, 14: 33662.0, 15: 32630.0, 16: 33426.0, 17: 37154.0, 18: 36555.0, 19: 36406.0}, 'Date': {0: Timestamp('1992-04-01 00:00:00'), 1: Timestamp('1992-07-01 00:00:00'), 2: Timestamp('1992-10-01 00:00:00'), 3: Timestamp('1993-01-01 00:00:00'), 4: Timestamp('1993-04-01 00:00:00'), 5: Timestamp('1993-07-01 00:00:00'), 6: Timestamp('1993-10-01 00:00:00'), 7: Timestamp('1994-01-01 00:00:00'), 8: Timestamp('1994-04-01 00:00:00'), 9: Timestamp('1994-07-01 00:00:00'), 10: Timestamp('1994-10-01 00:00:00'), 11: Timestamp('1995-01-01 00:00:00'), 12: Timestamp('1995-04-01 00:00:00'), 13: Timestamp('1995-07-01 00:00:00'), 14: Timestamp('1995-10-01 00:00:00'), 15: Timestamp('1996-01-01 00:00:00'), 16: Timestamp('1996-04-01 00:00:00'), 17: Timestamp('1996-07-01 00:00:00'), 18: Timestamp('1996-10-01 00:00:00'), 19: Timestamp('1997-01-01 00:00:00')}} | I've made a preliminary setup using your full datasample, and I think I've got it figured out. The challenge here is that px.line will group your data by the color argument. And that makes it a bit harder to edit the data displayed using a dropdownmenu with a direct reference to the source of your px.line plot. 
But you can actually build multiple px.line figures for the different datasets, and "steal" the data with the correct structure for your figure there. This will give you these figures for the different dropdown options: I'm a bit worried that the second plot might be a bit off, but I'm using the date you've provided which looks like this for first_timebuyers: So maybe it makes sense after all? Below is the complete code without your data. We can talk about the details and further tweaking tomorrow. Bye for now. import numpy as np import pandas as pd import plotly.express as px from pandas import Timestamp all_dwellings=pd.DataFrame(<yourData>) first_timebuyers = pd.DataFrame(<yourData>) # datagrab 1 lineplt_all = px.line(data_frame = all_dwellings, x='Date', y='Average House Price (£)', color= 'Country', hover_name='Country', color_discrete_sequence=['rgb(23, 153, 59)','rgb(214, 163, 21)','rgb(40, 48, 165)', 'rgb(210, 0, 38)'] ) # datagrab 2 lineplt_first = px.line(data_frame = first_timebuyers, x='Date', y='Average House Price (£)', color= 'Country', hover_name='Country', color_discrete_sequence=['rgb(23, 153, 59)','rgb(214, 163, 21)','rgb(40, 48, 165)', 'rgb(210, 0, 38)'] ) ### Your original setup lineplt = px.line(data_frame = all_dwellings, x='Date', y='Average House Price (£)', color= 'Country', hover_name='Country', color_discrete_sequence=['rgb(23, 153, 59)','rgb(214, 163, 21)','rgb(40, 48, 165)', 'rgb(210, 0, 38)'] ) updatemenus = [ {'buttons': [ { 'method': 'restyle', 'label': 'All Dwellings', 'args': [{'y': [dat.y for dat in lineplt_all.data]}] }, { 'method': 'restyle', 'label': 'First Time Buyers', 'args': [{'y': [dat.y for dat in lineplt_first.data]}] } ], 'direction': 'down', 'showactive': True, } ] lineplt = lineplt.update_layout( title_text='Average House Price in UK (£)', title_x=0.5, #plot_bgcolor= 'rgb(194, 208, 209)', xaxis_showgrid=False, yaxis_showgrid=False, hoverlabel=dict(font_size=10, bgcolor='rgb(69, 95, 154)', bordercolor= 'whitesmoke'), legend=dict(title='Please click legend item to remove <br>or add to plot', x=0, y=1, traceorder='normal', bgcolor='LightSteelBlue', xanchor = 'auto'), updatemenus=updatemenus ) lineplt = lineplt.update_traces(mode="lines", hovertemplate= 'Date = %{x} <br>' + 'Price = £%{y:.2f}') lineplt.show() | 9 | 5 |
63,366,843 | 2020-8-11 | https://stackoverflow.com/questions/63366843/how-to-find-the-minimal-numpy-dtype-to-store-a-maximum-integer-value | I need to create a very large numpy array that will hold non-negative integer values. I know in advance what the largest integer will be, so I want to try to use the smallest datatype possible. So far I have the following: >>> import numpy as np >>> def minimal_type(max_val, types=[np.uint8,np.uint16,np.uint32,np.uint64]): ''' finds the minimal data type needed to correctly store the given max_val returns None if none of the provided types are sufficient ''' for t in types: if max_val <= np.iinfo(t).max: return t return None >>> print(minimal_type(42)) <class 'numpy.uint8'> >>> print(minimal_type(255)) <class 'numpy.uint8'> >>> print(minimal_type(256)) <class 'numpy.uint16'> >>> print(minimal_type(4200000000)) <class 'numpy.uint32'> >>> Is there a numpy builtin way to achieve this functionality? | It's numpy.min_scalar_type. Examples from the docs: >>> np.min_scalar_type(10) dtype('uint8') >>> np.min_scalar_type(-260) dtype('int16') >>> np.min_scalar_type(3.1) dtype('float16') >>> np.min_scalar_type(1e50) dtype('float64') >>> np.min_scalar_type(np.arange(4,dtype='f8')) dtype('float64') You might not be interested in the behavior for floats, but I'm including it anyway for other people who come across the question, particularly since the use of float16 and the lack of float->int demotion might be surprising. | 6 | 11 |
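A short usage note tying the accepted answer back to the original goal of allocating a large array with the smallest sufficient dtype — the array shape below is made up for illustration:

```python
import numpy as np

max_val = 4_200_000_000                        # known upper bound, as in the question
dtype = np.min_scalar_type(max_val)            # -> dtype('uint32')
big = np.zeros((10_000, 10_000), dtype=dtype)  # ~381 MiB instead of ~763 MiB with uint64
```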
63,366,430 | 2020-8-11 | https://stackoverflow.com/questions/63366430/pass-a-dictionary-in-try-except-clause | I have a use case that requires passing a dictionary in a try/except clause in Python 3.x. The error message can be accessed as a string using the str() function, but I can't figure out how to get it as a dictionary. try: raise RuntimeError({'a':2}) except Exception as e: error = e print(error['a']) e is a RuntimeError object and I can't find any method that returns the message in its original format. | Exceptions store their init args in an "args" attribute: try: raise RuntimeError({'a':2}) except Exception as e: (the_dict,) = e.args print(the_dict["a"]) That being said, if you want an exception type which has a structured key/value context associated, it would be best to define your own custom exception subclass for this purpose rather than re-use the standard library's RuntimeError directly. That's because if you catch such an exception and attempt to unpack the dictionary context, you would need to detect and handle your RuntimeError instances differently from RuntimeError instances that the standard library may have raised. Using a different type entirely will make it much cleaner and easier to distinguish these two cases in your code. | 7 | 14 |
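The answer above recommends a custom exception subclass but only describes it in prose; here is a minimal sketch of that idea. The class name `StructuredError` is illustrative, not from the original post.

```python
class StructuredError(RuntimeError):
    def __init__(self, context: dict):
        super().__init__(context)   # keeps str(e) output informative
        self.context = context      # structured access without unpacking e.args

try:
    raise StructuredError({"a": 2})
except StructuredError as e:
    print(e.context["a"])           # -> 2
```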
63,361,807 | 2020-8-11 | https://stackoverflow.com/questions/63361807/how-can-i-get-the-arguments-i-sent-to-threadpoolexecutor-when-iterating-through | I use a ThreadPoolExecutor to quickly check a list of proxies to see which ones are dead or alive. with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: futures = [] for proxy in proxies: future = executor.submit(is_proxy_alive, proxy) futures.append(future) for future in futures: print(future.result()) # prints true or false depending on if proxy is alive. # how do I get the specific proxy I passed in the arguments # so that I can make a dictionary here? My goal is to get the argument (the proxy) I passed to the executor when iterating through the results to know which exact proxies are dead or alive, so I could make a dictionary that might look like this: {"IP1": False, "IP2": True, "IP3": True} One method I can think of is returning the proxy I sent on top of returning true/false, but is there a better way to do it externally so the function doesn't have to return more than just a bool? | While submitting the task, you could create a mapping from future to its proxy. with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: future_proxy_mapping = {} futures = [] for proxy in proxies: future = executor.submit(is_proxy_alive, proxy) future_proxy_mapping[future] = proxy futures.append(future) for future in futures: proxy = future_proxy_mapping[future] print(proxy) print(future.result()) | 8 | 18 |
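The same idea as the accepted answer, written as a dict comprehension plus `as_completed` so results arrive as soon as each check finishes; `is_proxy_alive` and `proxies` are the names from the question.

```python
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_proxy = {executor.submit(is_proxy_alive, p): p for p in proxies}
    results = {}
    for future in concurrent.futures.as_completed(future_to_proxy):
        results[future_to_proxy[future]] = future.result()
# results -> {"IP1": False, "IP2": True, "IP3": True, ...}
```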
63,339,707 | 2020-8-10 | https://stackoverflow.com/questions/63339707/is-it-possible-to-include-c-code-in-python-in-pycharm | I'm trying to speed up some code that I have written in Python and have thought about writing some code in C to do so. However, I am using PyCharm and its supported languages section, https://www.jetbrains.com/help/pycharm/supported-languages.html doesn't mention C, meaning no option to just create a C file and then import it into python. Is there any other way of doing it? | Unfortunately PyCharm does not support any C/C++ coding and there are no existing plugins for PyCharm that will support this. With that said, there is an IDE for C and C++ called CLion which is released by JetBrains just like PyCharm. CLion supports many features varying from Python debugger to Python Console for working with python and also supports all features that Pycharm offers. This should satisfy your needs towards using C code with Python in the most healthy way. Here are some useful links for you: https://www.jetbrains.com/clion/ https://www.jetbrains.com/help/clion/python.html#features Good luck! | 7 | 9 |
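Independent of the IDE question above, a minimal way to actually call C from Python is the standard-library ctypes module. This sketch assumes a shared library compiled separately (for example with `gcc -shared -fPIC -o libfast.so fast.c`) that exposes `int add(int, int)`; the file and function names are made up.

```python
import ctypes

lib = ctypes.CDLL("./libfast.so")                # load the compiled C library
lib.add.argtypes = (ctypes.c_int, ctypes.c_int)  # declare the C signature
lib.add.restype = ctypes.c_int
print(lib.add(2, 3))                             # -> 5
```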
63,350,459 | 2020-8-11 | https://stackoverflow.com/questions/63350459/getting-the-frequencies-associated-with-stft-in-librosa | When using librosa.stft() to calculate a spectrogram, how does one get back the associated frequency values? I am not interested in generating an image as in librosa.display.specshow, but rather I want to have those values in hand. y, sr = librosa.load('../recordings/high_pitch.m4a') stft = librosa.stft(y, n_fft=256, window=sig.windows.hamming) spec = np.abs(stft) spec gives me the 'amplitude' or 'power' of each frequency, but not the frequencies bins themselves. I have seen that there is a display.specshow function that will display these frequency values on the vertical axis of a heatmap, but not return the values themselves. I'm looking for something similar to nn.fft.fttfreq() for a single FFT, but cannot find its equivalent in the librosa documentation. | I would like to point out this question and answer in particular: How do I obtain the frequencies of each value in an FFT?. In addition to consulting the documentation for the STFT from librosa, we know that the horizontal axis is the time axis while the vertical axis are the frequencies. Each column in the spectrogram is the FFT of a slice in time where the centre at this time point has a window placed with n_fft=256 components. We also know that there is a hop length which tells us how many audio samples we need to skip over before we calculate the next FFT. This by default is n_fft / 4, so every 256 / 4 = 64 points in your audio, we calculate a new FFT centered at this time point of n_fft=256 points long. If you want to know the exact time point each window is centered at, that is simply i / Fs with i being the index of the audio signal which would be a multiple of 64. Now, for each FFT window, for real signals the spectrum is symmetric so we only consider the positive side of the FFT. This is verified by the documentation where the number of rows and hence the number of frequency components is 1 + n_fft / 2 with 1 being the DC component. Since we have this now, consulting the post above the relationship from bin number to the corresponding frequency is i * Fs / n_fft with i being the bin number, Fs being the sampling frequency and n_fft=256 as the number of points in the FFT window. Since we are only looking at the half spectrum, instead of i spanning from 0 to n_fft, this spans from 0 up to 1 + n_fft / 2 instead as the bins beyond 1 + n_fft / 2 would simply be the reflected version of the half spectrum and so we do not consider the frequency components beyond Fs / 2 Hz. If you wanted to generate a NumPy array of these frequencies, you could just do: import numpy as np freqs = np.arange(0, 1 + n_fft / 2) * Fs / n_fft freqs would be an array that maps the bin number in the FFT to the corresponding frequency. As an illustrative example, suppose our sampling frequency is 16384 Hz, and n_fft = 256. 
Therefore: In [1]: import numpy as np In [2]: Fs = 16384 In [3]: n_fft = 256 In [4]: np.arange(0, 1 + n_fft / 2) * Fs / n_fft Out[4]: array([ 0., 64., 128., 192., 256., 320., 384., 448., 512., 576., 640., 704., 768., 832., 896., 960., 1024., 1088., 1152., 1216., 1280., 1344., 1408., 1472., 1536., 1600., 1664., 1728., 1792., 1856., 1920., 1984., 2048., 2112., 2176., 2240., 2304., 2368., 2432., 2496., 2560., 2624., 2688., 2752., 2816., 2880., 2944., 3008., 3072., 3136., 3200., 3264., 3328., 3392., 3456., 3520., 3584., 3648., 3712., 3776., 3840., 3904., 3968., 4032., 4096., 4160., 4224., 4288., 4352., 4416., 4480., 4544., 4608., 4672., 4736., 4800., 4864., 4928., 4992., 5056., 5120., 5184., 5248., 5312., 5376., 5440., 5504., 5568., 5632., 5696., 5760., 5824., 5888., 5952., 6016., 6080., 6144., 6208., 6272., 6336., 6400., 6464., 6528., 6592., 6656., 6720., 6784., 6848., 6912., 6976., 7040., 7104., 7168., 7232., 7296., 7360., 7424., 7488., 7552., 7616., 7680., 7744., 7808., 7872., 7936., 8000., 8064., 8128., 8192.]) In [5]: freqs = _; len(freqs) Out[5]: 129 We can see that we have generated a 1 + n_fft / 2 = 129 element array which tells us the frequencies for each corresponding bin number. A word of caution Take note that librosa.display.specshow has a default sampling rate of 22050 Hz, so if you don't set the sampling rate (sr) to the same sampling frequency as your audio signal, the vertical and horizontal axes will not be correct. Make sure you specify the sr input flag to match your sampling frequency of the incoming audio. | 9 | 10 |
63,353,438 | 2020-8-11 | https://stackoverflow.com/questions/63353438/plotly-how-to-set-a-varying-marker-opacity-but-keep-the-same-outline-color-for | I'm trying to plot markers where the marker opacity is changed by some vector, but the marker edge color opacity stays constant. fig.add_trace(go.Scatter(x=real.index, y=real['some_value'], mode='markers', marker={'opacity': real['another value'], 'color':'green', 'size':10, 'line':dict(width=1, color='rgba(165,42,42,1)')} )) It can be seen in the plot below that the marker edge color opacity is changed together with the color opacity of the marker filling. My purpose is to keep the line (marker edge) opacity constant. NOTE: the following question doesn't answer my question: plotly.py: change line opacity, leave markers opaque | You can easily rescale a pandas series between 0 and 1 and use that as an argument in rgba(red,green,blue,opacity) like color='rgba(100,0,255,'+opac+')' where opac is some opacity between 0 and 1 for a certain marker in your figure. The color property of the markers is unique for any go.Scatter(), so you'll have to add a unique trace for each point. Then, at the same time, you can set color (and opacity if you should feel so inclined) for the outline of the marker using marker=dict(line=dict(color='rgba(100,0,255,1)')) In the figure below, I've set the outline color as 'rgba(100,0,255,1)', and the opacity of the marker fill as varying according to the logic above. This way, the highest values will appear as a completely "filled" marker: But you can also set a more clearly defined line using, for example, line=dict(color='rgba(0,0,0,1)', width = 2) to get something like this: Now you can play around with all the rgba arguments to find a color to your liking. Complete code: # imports import plotly.graph_objects as go import pandas as pd import numpy as np # sample data in the form of an hourly time series np.random.seed(1234) tseries = pd.date_range("01.01.2020", "01.04.2020", freq="H") data = np.random.randint(-100, 100, size=(len(tseries), 3)) df = pd.DataFrame(data=data) df.columns=list('ABC') df['C_scaled'] = df['C'].max()/df['C'] df['C_scaled'] = (df['C']-df['C'].min())/(df['C'].max()-df['C'].min()) df = df.sort_values(by=['C_scaled'], ascending=False) fig=go.Figure() for ix in df.index: d = df.iloc[ix] opac = str(d['C_scaled']) fig.add_trace(go.Scatter(x=[d['A']], y=[d['B']], showlegend=False, marker=dict(size = 14, color='rgba(100,0,255,'+opac+')', line=dict(color='rgba(0,0,0,1)', width = 2))) ) fig.show() Edit: Hoverinfo side-effects Just include the following to edit the hoverinfo so that x and y values are always shown on hover for the closest values: fig.update_layout(hovermode="x") fig.update_traces(hoverinfo = 'x+y') | 7 | 6 |
63,347,977 | 2020-8-10 | https://stackoverflow.com/questions/63347977/what-is-the-conceptual-purpose-of-librosa-amplitude-to-db | I'm using the librosa library to get and filter spectrograms from audio data. I mostly understand the math behind generating a spectrogram: Get signal window signal for each window compute Fourier transform Create matrix whose columns are the transforms Plot heat map of this matrix So that's really easy with librosa: spec = np.abs(librosa.stft(signal, n_fft=len(window), window=window) Yay! I've got my matrix of FFTs. Now I see this function librosa.amplitude_to_db and I think this is where my ignorance of signal processing starts to show. Here is a snippet I found on Medium: spec = np.abs(librosa.stft(y, hop_length=512)) spec = librosa.amplitude_to_db(spec, ref=np.max) Why does the author use this amplitude_to_db function? Why not just plot the output of the STFT directly? | The range of perceivable sound pressure is very wide, from around 20 μPa (micro Pascal) to 20 Pa, a ratio of 1 million. Furthermore the human perception of sound levels is not linear, but better approximated by a logarithm. By converting to decibels (dB) the scale becomes logarithmic. This limits the numerical range, to something like 0-120 dB instead. The intensity of colors when this is plotted corresponds more closely to what we hear than if one used a linear scale. Note that the reference (0 dB) point in decibels can be chosen freely. The default for librosa.amplitude_to_db is to compute numpy.max, meaning that the max value of the input will be mapped to 0 dB. All other values will then be negative. The function also applies a threshold on the range of sounds, by default 80 dB. So anything lower than -80 dB will be clipped -80 dB. | 7 | 14 |
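Since the answer above explains the decibel conversion only in prose, here is a small numerical sketch of what `librosa.amplitude_to_db(spec, ref=np.max)` computes under its default parameters (amin=1e-5, top_db=80); the test signal is just a synthetic sine, so nothing here comes from the original post.

```python
import numpy as np
import librosa

sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # 1 second of a 440 Hz tone

spec = np.abs(librosa.stft(y))                     # linear amplitude
db = librosa.amplitude_to_db(spec, ref=np.max)

# Approximate manual equivalent: log scale relative to the maximum, clipped to 80 dB of range
manual = 20.0 * np.log10(np.maximum(spec, 1e-5) / spec.max())
manual = np.maximum(manual, manual.max() - 80.0)
print(np.allclose(db, manual))                     # should print True (up to float error)
```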
63,341,547 | 2020-8-10 | https://stackoverflow.com/questions/63341547/how-to-inject-pygame-events-from-pytest | How can one inject events into a running pygame from a pytest test module? The following is a minimal example of a pygame which draws a white rectangle when J is pressed and quits the game when Ctrl-Q is pressed. #!/usr/bin/env python """minimal_pygame.py""" import pygame def minimal_pygame(testing: bool=False): pygame.init() game_window_sf = pygame.display.set_mode( size=(400, 300), ) pygame.display.flip() game_running = True while game_running: # Main game loop: # the following hook to inject events from pytest does not work: # if testing: # test_input = (yield) # pygame.event.post(test_input) for event in pygame.event.get(): # React to closing the pygame window: if event.type == pygame.QUIT: game_running = False break # React to keypresses: if event.type == pygame.KEYDOWN: if event.key == pygame.K_q: # distinguish between Q and Ctrl-Q mods = pygame.key.get_mods() # End main loop if Ctrl-Q was pressed if mods & pygame.KMOD_CTRL: game_running = False break # Draw a white square when key J is pressed: if event.key == pygame.K_j: filled_rect = game_window_sf.fill(pygame.Color("white"), pygame.Rect(50, 50, 50, 50)) pygame.display.update([filled_rect]) pygame.quit() if __name__ == "__main__": minimal_pygame() I want to write a pytest module which would automatically test it. I have read that one can inject events into running pygame. Here I read that yield from allows a bidirectional communication, so I thought I must implement some sort of a hook for pygame.events being injected from the pytest module, but it is not as simple as I thought, so I commented it out. If I uncomment the test hook under while game_running, pygame does not even wait for any input. Here is the test module for pytest: #!/usr/bin/env python """test_minimal_pygame.py""" import pygame import minimal_pygame def pygame_wrapper(coro): yield from coro def test_minimal_pygame(): wrap = pygame_wrapper(minimal_pygame.minimal_pygame(testing=True)) wrap.send(None) # prime the coroutine test_press_j = pygame.event.Event(pygame.KEYDOWN, {"key": pygame.K_j}) for e in [test_press_j]: wrap.send(e) | Pygame can react to custom user events, not keypress or mouse events. 
Here is a working code where pytest sends a userevent to pygame, pygame reacts to it and sends a response back to pytest for evaluation: #!/usr/bin/env python """minimal_pygame.py""" import pygame TESTEVENT = pygame.event.custom_type() def minimal_pygame(testing: bool=False): pygame.init() game_window_sf = pygame.display.set_mode( size=(400, 300), ) pygame.display.flip() game_running = True while game_running: # Hook for testing if testing: attr_dict = (yield) test_event = pygame.event.Event(TESTEVENT, attr_dict) pygame.event.post(test_event) # Main game loop: pygame.time.wait(1000) for event in pygame.event.get(): # React to closing the pygame window: if event.type == pygame.QUIT: game_running = False break # React to keypresses: if event.type == pygame.KEYDOWN: if event.key == pygame.K_q: # distinguish between Q and Ctrl-Q mods = pygame.key.get_mods() # End main loop if Ctrl-Q was pressed if mods & pygame.KMOD_CTRL: game_running = False break # React to TESTEVENTS: if event.type == TESTEVENT: if event.instruction == "draw_rectangle": filled_rect = game_window_sf.fill(pygame.Color("white"), pygame.Rect(50, 50, 50, 50)) pygame.display.update([filled_rect]) pygame.time.wait(1000) if testing: # Yield the color value of the pixel at (50, 50) back to pytest yield game_window_sf.get_at((50, 50)) pygame.quit() if __name__ == "__main__": minimal_pygame() Here's the test code: #!/usr/bin/env python """test_minimal_pygame.py""" import minimal_pygame import pygame def pygame_wrapper(coro): yield from coro def test_minimal_pygame(): wrap = pygame_wrapper(minimal_pygame.minimal_pygame(testing=True)) wrap.send(None) # prime the coroutine # Create a dictionary of attributes for the future TESTEVENT attr_dict = {"instruction": "draw_rectangle"} response = wrap.send(attr_dict) assert response == pygame.Color("white") It works, However, pytest, being a tool for stateless unit tests rather than integration tests, makes the pygame quit after it gets the first response (teardown test). It is not possible to continue and do more tests and assertions in the current pygame session. (Just try to duplicate the last two lines of the test code to resend the event, it will fail.) Pytest is not the right tool to inject a series of instructions into pygame to bring it to a precondition and then perform a series of tests. That's at least what I heard from people on the pygame discord channel. For automated integration tests they suggest a BDD tool like Cucumber (or behave for python). | 7 | 3 |
63,343,230 | 2020-8-10 | https://stackoverflow.com/questions/63343230/get-rankings-of-column-names-in-pandas-dataframe | I have pivoted the Customer ID against their most frequently purchased genres of performances: Genre Jazz Dance Music Theatre Customer 100000000001 0 3 1 2 100000000002 0 1 6 2 100000000003 0 3 13 4 100000000004 0 5 4 1 100000000005 1 10 16 14 My desired result is to append the column names according to the rankings: Genre Jazz Dance Music Theatre Rank1 Rank2 Rank3 Rank4 Customer 100000000001 0 3 1 2 Dance Theatre Music Jazz 100000000002 0 1 6 2 Music Theatre Dance Jazz 100000000003 0 3 13 4 Music Theatre Dance Jazz 100000000004 0 5 4 1 Dance Music Theatre Jazz 100000000005 1 10 16 14 Music Theatre Dance Jazz I have looked up some threads but the closest thing I can find is idxmax. However that only gives me Rank1. Could anyone help me to get the result I need? Thanks a lot! Dennis | Use: i = np.argsort(df.to_numpy() * -1, axis=1) r = pd.DataFrame(df.columns[i], index=df.index, columns=range(1, i.shape[1] + 1)) df = df.join(r.add_prefix('Rank')) Details: Use np.argsort along axis=1 to get the indices i that would sort the genres in descending order. print(i) array([[1, 3, 2, 0], [2, 3, 1, 0], [2, 3, 1, 0], [1, 2, 3, 0], [2, 3, 1, 0]]) Create a new dataframe r from the columns of dataframe df taken along the indices i (i.e df.columns[i]), then use DataFrame.join to join the dataframe r with df: print(df) Jazz Dance Music Theatre Rank1 Rank2 Rank3 Rank4 Customer 100000000001 0 3 1 2 Dance Theatre Music Jazz 100000000002 0 1 6 2 Music Theatre Dance Jazz 100000000003 0 3 13 4 Music Theatre Dance Jazz 100000000004 0 5 4 1 Dance Music Theatre Jazz 100000000005 1 10 16 14 Music Theatre Dance Jazz | 6 | 5 |
63,269,750 | 2020-8-5 | https://stackoverflow.com/questions/63269750/can-i-store-a-parquet-file-with-a-dictionary-column-having-mixed-types-in-their | I am trying to store a Python Pandas DataFrame as a Parquet file, but I am experiencing some issues. One of the columns of my Pandas DF contains dictionaries as such: import pandas as pandas df = pd.DataFrame({ "ColA": [1, 2, 3], "ColB": ["X", "Y", "Z"], "ColC": [ { "Field": "Value" }, { "Field": "Value2" }, { "Field": "Value3" } ] }) df.to_parquet("test.parquet") Now, that works perfectly fine, the problem is when one of the nested values of the dictionary has a different type than the rest. For instance: import pandas as pandas df = pd.DataFrame({ "ColA": [1, 2, 3], "ColB": ["X", "Y", "Z"], "ColC": [ { "Field": "Value" }, { "Field": "Value2" }, { "Field": ["Value3"] } ] }) df.to_parquet("test.parquet") This throws the following error: ArrowInvalid: ('cannot mix list and non-list, non-null values', 'Conversion failed for column ColC with type object') Notice how, for the last row of the DF, the Field property of the ColC dictionary is a list instead of a string. Is there any workaround to be able to store this DF as a Parquet file? | ColC is a UDT (user defined type) with one field called Field of type Union of String, List of String. In theory arrow supports it, but in practice it has a hard time figuring out what the type of ColC is. Even if you were providing the schema of your data frame explicitly, it wouldn't work because this type of conversion (converting unions from pandas to arrow/parquet) isn't supported yet. union_type = pa.union( [pa.field("0",pa.string()), pa.field("1", pa.list_(pa.string()))], 'dense' ) col_c_type = pa.struct( [ pa.field('Field', union_type) ] ) schema=pa.schema( [ pa.field('ColA', pa.int32()), pa.field('ColB', pa.string()), pa.field('ColC', col_c_type), ] ) df = pd.DataFrame({ "ColA": [1, 2, 3], "ColB": ["X", "Y", "Z"], "ColC": [ { "Field": "Value" }, { "Field": "Value2" }, { "Field": ["Value3"] } ] }) pa.Table.from_pandas(df, schema) This gives you this error: ('Sequence converter for type union[dense]<0: string=0, 1: list<item: string>=1> not implemented', 'Conversion failed for column ColC with type object' Even if you create the arrow table manually it won't be able to convert it to parquet (again, union are not supported). import io import pyarrow.parquet as pq col_a = pa.array([1, 2, 3], pa.int32()) col_b = pa.array(["X", "Y", "Z"], pa.string()) xs = pa.array(["Value", "Value2", None], type=pa.string()) ys = pa.array([None, None, ["value3"]], type=pa.list_(pa.string())) types = pa.array([0, 0, 1], type=pa.int8()) col_c = pa.UnionArray.from_sparse(types, [xs, ys]) table = pa.Table.from_arrays( [col_a, col_b, col_c], schema=pa.schema([ pa.field('ColA', col_a.type), pa.field('ColB', col_b.type), pa.field('ColC', col_c.type), ]) ) with io.BytesIO() as buffer: pq.write_table(table, buffer) Unhandled type for Arrow to Parquet schema conversion: sparse_union<0: string=0, 1: list<item: string>=1> I think your only option for now it to use a struct where fields have got different names for string value and list of string values. df = pd.DataFrame({ "ColA": [1, 2, 3], "ColB": ["X", "Y", "Z"], "ColC": [ { "Field1": "Value" }, { "Field1": "Value2" }, { "Field2": ["Value3"] } ] }) df.to_parquet('/tmp/hello') | 11 | 9 |
63,324,327 | 2020-8-9 | https://stackoverflow.com/questions/63324327/write-a-csv-file-asynchronously-in-python | I am writing a CSV file with the following function: import csv import os import aiofiles async def write_extract_file(output_filename: str, csv_list: list): """ Write the extracted content into the file """ try: async with aiofiles.open(output_filename, "w+") as csv_file: writer = csv.DictWriter(csv_file, fieldnames=columns.keys()) writer.writeheader() writer.writerows(csv_list) except FileNotFoundError: print("Output file not present", output_filename) print("Current dir: ", os.getcwd()) raise FileNotFoundError However, as there is no await allowed over writerows method, there are no rows being written into the CSV file. How to resolve this issue? Is there any workaround available? Thank you. Entire code can be found here. | In my opinion it’s better not to try to use the aiofiles with the csv module and run the synchronous code using loop.run_in_executor and wait it asynchronously like below: def write_extract_file(output_filename: str, csv_list: list): """ Write the extracted content into the file """ try: with open(output_filename, "w+") as csv_file: writer = csv.DictWriter(csv_file, fieldnames=columns.keys()) writer.writeheader() writer.writerows(csv_list) except FileNotFoundError: print("Output file not present", output_filename) print("Current dir: ", os.getcwd()) raise FileNotFoundError async def main(): loop = asyncio.get_running_loop() await loop.run_in_executor(None, write_extract_file, 'test.csv', csv_list) | 11 | 8 |
63,294,040 | 2020-8-7 | https://stackoverflow.com/questions/63294040/pandas-check-if-dataframe-has-negative-value-in-any-column | I wonder how to check if a pandas dataframe has negative value in 1 or more columns and return only boolean value (True or False). Can you please help? In[1]: df = pd.DataFrame(np.random.randn(10, 3)) In[2]: df Out[2]: 0 1 2 0 -1.783811 0.736010 0.865427 1 -1.243160 0.255592 1.670268 2 0.820835 0.246249 0.288464 3 -0.923907 -0.199402 0.090250 4 -1.575614 -1.141441 0.689282 5 -1.051722 0.513397 1.471071 6 2.549089 0.977407 0.686614 7 -1.417064 0.181957 0.351824 8 0.643760 0.867286 1.166715 9 -0.316672 -0.647559 1.331545 Expected output:- Out[3]: True | Actually, if speed is important, I did a few tests: df = pd.DataFrame(np.random.randn(10000, 30000)) Test 1, slowest: pure pandas (df < 0).any().any() # 303 ms ± 1.28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Test 2, faster: switching over to numpy with .values for testing the presence of a True entry (df < 0).values.any() # 269 ms ± 8.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Test 3, maybe even faster, though not significant: switching over to numpy for the whole thing (df.values < 0).any() # 267 ms ± 1.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | 12 | 18 |
63,329,657 | 2020-8-9 | https://stackoverflow.com/questions/63329657/python-3-7-error-unsupported-pickle-protocol-5 | I'm trying to restore a pickled config file from RLLib (json didn't work as shown in this post), and getting the following error: config = pickle.load(open(f"{path}/params.pkl", "rb")) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-28-c964561b863c> in <module> ----> 1 config = pickle.load(open(f"{path}/params.pkl", "rb")) ValueError: unsupported pickle protocol: 5 Python Version = 3.7.0 How can I open this file in 3.7? | Use pickle5 or load it into python 3.8+ and then serialize it to a lower version of it using the protocol parameter. | 95 | 69 |
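The answer above names two options in prose; a hedged sketch of what each looks like (the `path` variable is the one from the question, the new file name in option B is hypothetical):

```python
# Option A — stay on Python 3.7 and use the pickle5 backport (pip install pickle5):
import pickle5 as pickle
with open(f"{path}/params.pkl", "rb") as fh:
    config = pickle.load(fh)

# Option B — run once under Python 3.8+ to re-serialize with an older protocol:
# import pickle
# with open(f"{path}/params.pkl", "rb") as fh:
#     config = pickle.load(fh)
# with open(f"{path}/params_v4.pkl", "wb") as fh:
#     pickle.dump(config, fh, protocol=4)
```

Option B is commented out only so the two imports don't clash in a single snippet.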
63,297,763 | 2020-8-7 | https://stackoverflow.com/questions/63297763/why-is-this-regular-expression-so-slow-in-java | I recently had a SonarQube rule (https://rules.sonarsource.com/java/RSPEC-4784) bring to my attention some performance issues which could be used as a denial of service against a Java regular expression implementation. Indeed, the following Java test shows how slow the wrong regular expression can be: import org.junit.Test; public class RegexTest { @Test public void fastRegex1() { "aaaaaaaaaaaaaaaaaaaaaaaaaaaabs".matches("(a+)b"); } @Test public void fastRegex2() { "aaaaaaaaaaaaaaaaaaaaaaaaaaaab".matches("(a+)+b"); } @Test public void slowRegex() { "aaaaaaaaaaaaaaaaaaaaaaaaaaaabs".matches("(a+)+b"); } } As you can see, the first two tests are fast, the third one is incredibly slow (in Java 8) The same data and regex in Perl or Python, however, is not at all slow, which leads me to wonder why it is that this regular expression is so slow to evaluate in Java. $ time perl -e '"aaaaaaaaaaaaaaaaaaaaaaaaaaaabs" =~ /(a+)+b/ && print "$1\n"' aaaaaaaaaaaaaaaaaaaaaaaaaaaa real 0m0.004s user 0m0.000s sys 0m0.004s $ time python3 -c 'import re; m=re.search("(a+)+b","aaaaaaaaaaaaaaaaaaaaaaaaaaaabs"); print(m.group(0))' aaaaaaaaaaaaaaaaaaaaaaaaaaaab real 0m0.018s user 0m0.015s sys 0m0.004s What is it about the extra matching modifier + or trailing character s in the data which makes this regular expression so slow, and why is it only specific to Java? | Caveat: I don't really know much about regex internals, and this is really conjecture. And I can't answer why Java suffers from this, but not the others (also, it is substantially faster than your 12 seconds in jshell 11 when I run it, so it perhaps only affects certain versions). "aaaaaaaaaaaaaaaaaaaaaaaaaaaabs".matches("(a+)+b") There are lots of ways that lots of as could match: (a)(a)(a)(a) (aa)(a)(a) (a)(aa)(a) (aa)(aa) (a)(aaa) etc. For the input string "aaaaaaaaaaaaaaaaaaaaaaaaaaaab", it will greedily match all of those as in a single pass, match the b, job done. For "aaaaaaaaaaaaaaaaaaaaaaaaaaaabs", when it gets to the end and finds that the string doesn't match (because of the s), it's not correctly recognizing that the s means it can never match. So, having gone through and likely matched as (aaaaaaaaaaaaaaaaaaaaaaaaaaaa)bs it thinks "Oh, maybe it failed because of the way I grouped the as - and goes back and tries all the other combinations of the as. (aaaaaaaaaaaaaaaaaaaaaaaaaaa)(a)bs // Nope, still no match (aaaaaaaaaaaaaaaaaaaaaaaaaa)(aa)bs // ... (aaaaaaaaaaaaaaaaaaaaaaaaa)(aaa)bs // ... ... (a)(aaaaaaaaaaaaaaaaaaaaaaaaaaa)bs // ... (aaaaaaaaaaaaaaaaaaaaaaaaaa(a)(a)bs // ... (aaaaaaaaaaaaaaaaaaaaaaaaa(aa)(a)bs // ... (aaaaaaaaaaaaaaaaaaaaaaaa(aaa)(a)bs // ... ... There are lots of these (I think there are something like 2^27 - that's 134,217,728 - combinations for 28 as, because each a can either be part of the previous group, or start its own group), so it takes a long time. | 51 | 54 |
63,318,567 | 2020-8-8 | https://stackoverflow.com/questions/63318567/azure-function-exception-oserror-errno-30-read-only-file-system | I'm trying to copy the value from the Excel file, but it returns this error message: Exception while executing function: Functions.extract Result: Failure Exception: OSError: [Errno 30] Read-only file system: './data_download/xxxxx.xlsx' Stack: File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py", line 343, in handle_invocation_request call_result = await self._loop.run_in_executor( File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run result = self.fn(*self.args, **self.kwargs) File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py", line 480, in __run_sync_func return func(**params) File "/home/site/wwwroot/extract/__init__.py", line 62, in main with open(download_file_path, "wb") as download_file: My code: with open(download_file_path, "wb") as download_file: | This is not specific to Azure Functions; in general, only /tmp is writable. Try adding /tmp to the file path: filepath = '/tmp/' + key | 9 | 13 |
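A slightly fuller sketch of the suggested fix, writing downloads under the temp directory; the bytes being written (`blob_bytes`) are a placeholder for whatever the function actually downloads.

```python
import os
import tempfile

download_file_path = os.path.join(tempfile.gettempdir(), "xxxxx.xlsx")  # resolves to /tmp/xxxxx.xlsx on Linux
with open(download_file_path, "wb") as download_file:
    download_file.write(blob_bytes)
```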
63,265,707 | 2020-8-5 | https://stackoverflow.com/questions/63265707/plotly-how-to-plot-multiple-lines-with-shared-x-axis | I would like to have a multiple line plot within same canvas tied with the same x-axis as shown something in the figure: Using subplots does not achieve the intended desire. import plotly.express as px from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots(rows=2, shared_xaxes=True,vertical_spacing=0.1) fig.add_scatter(y=[2, 1, 3], row=1, col=1) fig.add_scatter(y=[1, 3, 2], row=2, col=1) fig.show() May I know how this can be done, appreciate if someone can point to good reading material | With a dataset such as this you can select any number of columns, set up a figure using fig = make_subplots() with shared_xaxes set to True and then add your series with a shared x-axis using fig.add_trace(go.Scatter(x=df[col].index, y=df[col].values), row=i, col=1) in a loop to get this: Let me know if this is a setup you can use but need a little tweaking. Complete code: import plotly.graph_objects as go import plotly.io as pio from plotly.subplots import make_subplots import pandas as pd # data pio.templates.default = "plotly_white" df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv') df = df.set_index('Date') df.tail() cols = df.columns[:-4] ncols = len(cols) # subplot setup fig = make_subplots(rows=ncols, cols=1, shared_xaxes=True) for i, col in enumerate(cols, start=1): fig.add_trace(go.Scatter(x=df[col].index, y=df[col].values), row=i, col=1) fig.show() | 10 | 15 |
63,310,735 | 2020-8-8 | https://stackoverflow.com/questions/63310735/do-pyspark-dataframes-have-a-pipe-function-like-in-pandas | For example in Pandas I would do data_df = ( pd.DataFrame(dict(col1=['a', 'b', 'c'], col2=['1', '2', '3'])) .pipe(lambda df: df[df.col1 != 'a']) ) This is similar to R's pipe %>% Is there something similar in PySpark? | I think, in pyspark, you can easily achieve this pipe functionality with help of pipeline. convert each of the pipe function into the transformer. There are some predefined transformers that spark provides, we can make use of that also Create pipeline using the transformers Run the pipeline to transform provided dataframe Example: Let's take the example you provided Input Dataframe to transform val df = Seq(("a", 1), ("b", 2), ("c", 3)).toDF("col1", "col2") df.show(false) df.printSchema() /** * +----+----+ * |col1|col2| * +----+----+ * |a |1 | * |b |2 | * |c |3 | * +----+----+ * * root * |-- col1: string (nullable = true) * |-- col2: integer (nullable = false) */ 1. Convert each of the pipe function into the transformer for .pipe(lambda df: df[df.col1 != 'a']), we can easily use spark SQLTransformer. so no need to create custom transformer 2. Create pipeline using the transformers val transform1 = new SQLTransformer() .setStatement("select * from __THIS__ where col1 != 'a'") val transform2 = new SQLTransformer() .setStatement("select col1, col2, SQRT(col2) as col3 from __THIS__") val pipeline = new Pipeline() .setStages(Array(transform1, transform2)) 3. Run the pipeline to transform provided dataframe pipeline.fit(df).transform(df) .show(false) /** * +----+----+------------------+ * |col1|col2|col3 | * +----+----+------------------+ * |b |2 |1.4142135623730951| * |c |3 |1.7320508075688772| * +----+----+------------------+ */ | 7 | 2 |
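The accepted answer shows the ML Pipeline/SQLTransformer route (in Scala); as an additional note, and assuming PySpark 3.0 or newer, DataFrame.transform is the closest direct analogue to pandas' .pipe for chaining custom steps:

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1), ("b", 2), ("c", 3)], ["col1", "col2"])

result = (df
          .transform(lambda d: d.filter(F.col("col1") != "a"))       # drop rows where col1 == 'a'
          .transform(lambda d: d.withColumn("col3", F.sqrt("col2"))))
result.show()
```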
63,302,027 | 2020-8-7 | https://stackoverflow.com/questions/63302027/how-to-avoid-double-extracting-of-overlapping-patterns-in-spacy-with-matcher | I need to extract item combination from 2 lists by means of python Spacy Matcher. The problem is following: Let us have 2 lists: colors=['red','bright red','black','brown','dark brown'] animals=['fox','bear','hare','squirrel','wolf'] I match the sequences by the following code: first_color=[] last_color=[] only_first_color=[] for color in colors: if ' ' in color: first_color.append(color.split(' ')[0]) last_color.append(color.split(' ')[1]) else: only_first_color.append(color) matcher = Matcher(nlp.vocab) pattern1 = [{"TEXT": {"IN": only_first_color}},{"TEXT":{"IN": animals}}] pattern2 = [{"TEXT": {"IN": first_color}},{"TEXT": {"IN": last_color}},{"TEXT":{"IN": animals}}] matcher.add("ANIMALS", None, pattern1,pattern2) doc = nlp('bright red fox met black wolf') matches = matcher(doc) for match_id, start, end in matches: string_id = nlp.vocab.strings[match_id] # Get string representation span = doc[start:end] # The matched span print(start, end, span.text) It gives the output: 0 3 bright red fox 1 3 red fox 4 6 black wolf How can i extract only 'bright red fox' and 'black wolf'? Should i change the patterns rules or post-process the matches? Any thoughts appreciate! | You may use spacy.util.filter_spans: Filter a sequence of Span objects and remove duplicates or overlaps. Useful for creating named entities (where one token can only be part of one entity) or when merging spans with Retokenizer.merge. When spans overlap, the (first) longest span is preferred over shorter spans. Python code: matches = matcher(doc) spans = [doc[start:end] for _, start, end in matches] for span in spacy.util.filter_spans(spans): print(span.start, span.end, span.text) Output: 0 3 bright red fox 4 6 black wolf | 15 | 20 |
63,302,534 | 2020-8-7 | https://stackoverflow.com/questions/63302534/how-to-write-torch-devicecuda-if-torch-cuda-is-available-else-cpu-as-a-f | I'm a beginner to Pytorch and wanted to type this statement as a whole if else statement:- torch.device('cuda' if torch.cuda.is_available() else 'cpu') Can somebody help me? | Here is the code as a whole if-else statement: torch.device('cuda' if torch.cuda.is_available() else 'cpu') if torch.cuda.is_available(): torch.device('cuda') else: torch.device('cpu') Since you probably want to store the device for later, you might want something like this instead: device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if torch.cuda.is_available(): device = torch.device('cuda') else: device = torch.device('cpu') Here is a post and discussion about the ternary operator in Python: https://stackoverflow.com/a/2802748/13985765 From that post: value_when_true if condition else value_when_false | 11 | 30 |
63,302,082 | 2020-8-7 | https://stackoverflow.com/questions/63302082/syntaxerror-f-string-expecting | I have a problem here. I don't know why this code does not work. newline = '\n' tasks_choosen = ['markup', 'media', 'python_api', 'script', 'style', 'vue'] print(f'{ newline }### Initializing project with the following tasks: { ' '.join(tasks_choosen) }.{ newline }') Error: File "new-gulp-project.py", line 85 print(f'{ newline }### Initializing project with the following tasks: { ' '.join(tasks_choosen) }.{ newline }') SyntaxError: f-string: expecting '}' Can anyone help me? Thanks | Because you use single quotes twice you get: print(f'{ newline }### Initializing project with the following tasks: { ' instead of print(f'{ newline }### Initializing project with the following tasks: { ' '.join(tasks_choosen) }.{ newline }') Use double quotes inside: print(f'{ newline }### Initializing project with the following tasks: { " ".join(tasks_choosen) }.{ newline }') | 10 | 24 |
63,286,757 | 2020-8-6 | https://stackoverflow.com/questions/63286757/sqlalchemy-mysql-pass-table-name-as-a-parameter-in-a-raw-query | In my app I use SQLAlchemy and mysql-connector-python. I would like to perform a query such as SELECT * FROM :table LIMIT 10 on my MySQL database. However, my code doesn't work: table_name = "tmp1" QUERY = "SELECT * FROM :table LIMIT 10" conn = create_sql_connection() res = conn.execute( QUERY, {'table': table_name} ).fetchall() print(res) I've read that you cannot use a table name as a parameter and I should just use Python string formatting. However, I'm really scared that it's absolutely not safe against SQL injection. How to solve it? Is there any utility that would escape my table name variable? Postgres has a solution - Passing table name as a parameter in psycopg2 - Do you know how to solve it while using MySQL? | You can pass the user-provided string to a Table object and build queries from it (here I assume you get the user data from a POST request's JSON): table_name_string = request.get_json().get('table') selected_table = db.Table(table_name_string, metadata, autoload=True) query = selected_table.select() You can also go on and use this query as a raw SQL string at this point if you really want to: query_string = str(query) Some input validation or restrictions on which tables are valid are definitely recommended, but this is SQL-injection safe, since the table must exist in the metadata, or it will throw an exception (sqlalchemy.exc.NoSuchTableError) | 7 | 6 |
63,298,721 | 2020-8-7 | https://stackoverflow.com/questions/63298721/how-to-update-imagefield-in-django | I am new to Django. I am having an issue updating an ImageField. I have the following code in models.py class ImageModel(models.Model): image_name = models.CharField(max_length=50) image_color = models.CharField(max_length=50) image_document = models.ImageField(upload_to='product/') -This is my forms.py class ImageForm(forms.ModelForm): class Meta: model = ImageModel fields = ['image_name', 'image_color' , 'image_document'] In the HTML file (editproduct.html) <form method="POST" action="/myapp/updateimage/{{ singleimagedata.id }}"> {% csrf_token %} <input class="form-control" type="text" name="image_name" value="{{ singleimagedata.image_name}}"> <input class="form-control" type="file" name="image_document"> <button type="submit" class="btn btn-primary">UPDATE PRODUCT</button> </form> -myapp is my application name. {{singleimagedata}} is a variable containing all fetched data -urls.py urlpatterns = [ path('productlist', views.productlist, name='productlist'), path('addproduct', views.addproduct, name='addproduct'), path('editimage/<int:id>', views.editimage, name='editimage'), path('updateimage/<int:id>', views.updateimage, name='updateimage'), ] and here is my views.py def productlist(request): if request.method == 'GET': imagedata = ImageModel.objects.all() return render(request,"product/productlist.html",{'imagedata':imagedata}) def addproduct(request): if request.method == 'POST': form = ImageForm(request.POST, request.FILES) if form.is_valid(): form.save() messages.add_message(request, messages.SUCCESS, 'Image Uploaded') return redirect('/myapp/productlist') else: imageform = ImageForm() return render(request, "product/addproduct.html", {'imageform': imageform}) def editimage(request, id): singleimagedata = ImageModel.objects.get(id=id) return render(request, 'product/editproduct.html', {'singleimagedata': singleimagedata}) def updateimage(request, id): #this function is called when update data data = ImageModel.objects.get(id=id) form = ImageForm(request.POST,request.FILES,instance = data) if form.is_valid(): form.save() return redirect("/myapp/productlist") else: return render(request, 'demo/editproduct.html', {'singleimagedata': data}) My image upload is working fine. I cannot update the image while updating data; the rest of the data is updated. I don't know how to update the image, nor how to remove the old image and put the new image into the directory. | I think you missed the enctype="multipart/form-data", try to change: <form method="POST" action="/myapp/updateimage/{{ singleimagedata.id }}"> into: <form method="POST" enctype="multipart/form-data" action="{% url 'updateimage' id=singleimagedata.id %}"> Also don't forget to add the image_color field to your HTML inputs, because in your case the image_color model field is defined as a required field. To remove & update the old image file from the directory: import os from django.conf import settings # your imported module... def updateimage(request, id): #this function is called when update data old_image = ImageModel.objects.get(id=id) form = ImageForm(request.POST, request.FILES, instance=old_image) if form.is_valid(): # deleting old uploaded image. image_path = old_image.image_document.path if os.path.exists(image_path): os.remove(image_path) # the `form.save` will also update your newest image & path. form.save() return redirect("/myapp/productlist") else: context = {'singleimagedata': old_image, 'form': form} return render(request, 'demo/editproduct.html', context) | 8 | 8 |
63,276,033 | 2020-8-6 | https://stackoverflow.com/questions/63276033/what-is-the-difference-between-using-mock-mock-vs-mock-patch-and-when-to-us | What is the difference between using mock.Mock() vs mock.patch()? When to use mock.Mock() and when to use mock.patch()? I've read that Mock is used to replace something that is used in the current scope, vs. patch is used to replace something that is imported and/or created in another scope. Can someone explain what that means? If we are testing in a separate test file, wouldn't every classmethod, staticmethod, and instance method being tested be imported from the dev/production files? Does that mean only patch should be used here? And if I were to test in the same file as the code being tested, is Mock preferred? Is that correct? | I'm not completely sure if I understood your question, but I'll give it a try. As described in the documentation, Mock objects (actually MagicMock instances) are created by using the patch decorator: from unittest.mock import patch @patch('some_module.some_object') def test_something(mocked_object): print(mocked_object) This gives something like: <MagicMock name='some_object' id='1870192381512'> This is equivalent to: def test_something(): with patch('some_module.some_object') as mocked_object: print(mocked_object) This gives you the possibility to replace any object by a mock object to avoid calling the actual production code and/or to check how the original object is called (if the object is a function). The reason why using patch (or some similar methods) is preferred is that this ensures that the patch is reverted after the test (or after the context manager scope in the second case), so there are no side effects on other tests or other code. To cite the documentation: The patch decorators are used for patching objects only within the scope of the function they decorate. They automatically handle the unpatching for you, even if exceptions are raised. All of these functions can also be used in with statements or as class decorators. You are also able to create a Mock object manually and assign it to an object - I assume this is what you mean in your question. If you do this instead of using patch, you are responsible for resetting the previous state yourself. As this is more error-prone, I would advise using the dedicated methods for patching if possible. Where this does not matter is in local objects and other mocks. Mocking local objects is seldom needed, but Mock instances are often created in conjunction with patching an object to preserve the instance of a mock object for a later check: @mock.patch('my_functions.MyClass') def test_object(mock_class): arg1 = Mock() arg2 = Mock() do_something(arg1, arg2) # check that do_something creates MyClass with the given arguments mock_class.assert_called_with(arg1, arg2) In this case the manually created mocks are only used as arguments to a mocked object, so no reset is needed. To summarize: patch is a convenience decorator/context manager function to replace objects with mock objects (or other objects) and reset the previous state after finishing or in case of an exception Mock or derived objects are created by mock.patch, and can also be created manually. Manually created mocks are usually only used to patch local functions or other mocks where no reset is needed. | 15 | 22 |
63,289,981 | 2020-8-6 | https://stackoverflow.com/questions/63289981/pyspark-insertinto-overwrite | I am trying to insert data from a data frame into a Hive table. I have been able to do so successfully using df.write.insertInto("db1.table1", overwrite = True). I am just a little confused about the overwrite = True part -- I tried running it multiple times and it seemed to append, not overwrite. There wasn't too much in the docs, but when should I set overwrite to False vs. True? | df.write.insertInto works only if the table already exists in Hive. df.write.insertInto("db.table1",overwrite=False) will append the data to the existing hive table. df.write.insertInto("db.table1",overwrite=True) will overwrite the data in the hive table. Example: df.show() #+----+---+ #|name| id| #+----+---+ #| a| 1| #| b| 2| #+----+---+ #save the table to hive df.write.saveAsTable("default.table1") #from hive #hive> select * from table1; #OK #a 1 #b 2 df.write.insertInto("default.table1",overwrite=True) #from hive #hive> select * from table1; #OK #a 1 #b 2 #appending data to hive df.write.insertInto("default.table1",overwrite=False) #from hive #hive> select * from table1; #OK #a 1 #b 2 #a 1 #b 2 | 6 | 13 |
63,289,494 | 2020-8-6 | https://stackoverflow.com/questions/63289494/what-is-the-correct-syntax-for-walrus-operator-with-ternary-operator | Looking at Python-Dev and StackOverflow, Python's ternary operator equivalent is: a if condition else b Looking at PEP-572 and StackOverflow, I understand what Walrus operator is: := Now I'm trying to to combine the "walrus operator's assignment" and "ternary operator's conditional check" into a single statement, something like: other_func(a) if (a := some_func(some_input)) else b For example, please consider the below snippet: do_something(list_of_roles) if list_of_roles := get_role_list(username) else "Role list is [] empty" I'm failing to wrap my mind around the syntax. Having tried various combinations, every time the interpreter throws SyntaxError: invalid syntax. My python version is 3.8.3. My question is What is the correct syntax to embed walrus operator within ternary operator? | Syntactically, you are just missing a pair of parenthesis. do_something(list_of_roles) if (list_of_roles := get_role_list(username)) else "Role list is [] empty" If you look at the grammar, := is defined as part of a high-level namedexpr_test construct: namedexpr_test: test [':=' test] while a conditional expression is a kind of test: test: or_test ['if' or_test 'else' test] | lambdef This means that := cannot be used in a conditional expression unless it occurs inside a nested expression. | 18 | 27 |
63,278,444 | 2020-8-6 | https://stackoverflow.com/questions/63278444/google-cloud-storage-python-client-attributeerror-clientoptions-object-has-no | I am using cloud storage with App Engine Flex. Out of the blue i start getting this error message after deploy succeeds The error is happening from these lines in my flask app. from google.cloud import storage, datastore client = storage.Client() File "/home/vmagent/app/main.py", line 104, in _load_db client = storage.Client() File "/env/lib/python3.6/site-packages/google/cloud/storage/client.py", line 110, in __init__ project=project, credentials=credentials, _http=_http File "/env/lib/python3.6/site-packages/google/cloud/client.py", line 250, in __init__ Client.__init__(self, credentials=credentials, client_options=client_options, _http=_http) File "/env/lib/python3.6/site-packages/google/cloud/client.py", line 143, in __init__ scopes = client_options.scopes or self.SCOPE AttributeError: 'ClientOptions' object has no attribute 'scopes' This is something to do with breaking upgrades made to grpcio and google-api-core and google-cloud-storage packages based on numerous SO threads. However, I cant figure out where this is happening. My requirements.txt is as follows: setuptools>=40.3 grpcio<=1.27.2 google-api-core<1.17.0 Flask gevent>=0.13 gunicorn>=19.7.1 numpy>=1.18.0 numpy-financial scipy>=1.4 pvlib>=0.7 google-cloud-storage==1.28.0 google-cloud-datastore==1.12.0 google-cloud-pubsub pandas==1.0.5 my app.yaml is as follows: service: app-preprod runtime: custom env: flex entrypoint: gunicorn -t 600 -c gunicorn.conf.py -b :$PORT main:app runtime_config: python_version: 3.6 manual_scaling: instances: 1 resources: cpu: 1 memory_gb: 4 beta_settings: cloud_sql_instances: xxxx:europe-west6:component-cost endpoints_api_service: name: apipreprod-dot-xxxx.appspot.com rollout_strategy: managed Looking at the release histories, some new versions of google-cloud-storage etc were released a few days ago, but i have tried to maintain the same older version number. The ridiculous thing is that with these exact same requirements.txt, i have an identical prod app engine that is working fine --- but that i had not redeployed for a week. Obviously, no problems at all with exactly the same versions of storage and datastore to run the client from my local machine. --EDIT-- Apparently according to https://github.com/googleapis/google-cloud-python/issues/10471 i should just add google-cloud-core==1.3.0 to requirements.txt This seems a workaround --- any better permanent way of ensuring this break doesnt catch me unawares? | This is due to https://github.com/googleapis/google-cloud-python/issues/10471. I'd recommend upgrading google-cloud-core and google-api-core to the latest versions with the bugfix. | 12 | 9 |
63,286,750 | 2020-8-6 | https://stackoverflow.com/questions/63286750/how-to-apply-kernel-regularization-in-a-custom-layer-in-keras-tensorflow | Consider the following custom layer code from a TensorFlow tutorial: class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_outputs]) def call(self, input): return tf.matmul(input, self.kernel) How do I apply any pre-defined regularization (say tf.keras.regularizers.L1) or custom regularization on the parameters of the custom layer? | The add_weight method takes a regularizer argument which you can use to apply regularization on the weight. For example: self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_outputs], regularizer=tf.keras.regularizers.l1_l2()) Alternatively, to have more control like other built-in layers, you can modify the definition of custom layer and add a kernel_regularizer argument to __init__ method: from tensorflow.keras import regularizers class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs, kernel_regularizer=None): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs self.kernel_regularizer = regularizers.get(kernel_regularizer) def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_outputs], regularizer=self.kernel_regularizer) With that you can even pass a string like 'l1' or 'l2' to kernel_regularizer argument when constructing the layer, and it would be resolved properly. | 6 | 13 |
63,278,737 | 2020-8-6 | https://stackoverflow.com/questions/63278737/object-of-type-decimal-is-not-json-serializable-aws-lambda-dynamodb | Lambda execution failed with status 200 due to customer function error: Object of type 'Decimal' is not JSON serializable I went through all the existing solutions in the following link but nothing worked for me. What am I doing wrong?: Python JSON serialize a Decimal object import json import boto3 import decimal client = boto3.resource('dynamodb') table = client.Table('table') def lambda_handler(event, context): method = event["httpMethod"] print(event) if method=="POST": return POST(event) elif method=="DELETE": return DELETE(event) elif method=="GET": return GET(event) #the response format def send_respons(responseBody, statusCode): response = { "statusCode": statusCode, "headers": { "my_header": "my_value" }, "body": json.dumps(responseBody), "isBase64Encoded": 'false' } return response def GET(event): tab = table.scan()['Items'] ids = [] for item in tab: ids.append({"id":item["id"], "decimalOBJ":decimal.Decimal(item["decimalOBJ"])}) return send_respons(ids, 201) | It seems you have two options: Probably easiest, you can serialize the int/float value of a Decimal object: """ assume d is your decimal object """ serializable_d = int(d) # or float(d) d_json = json.dumps(serializable_d) You can add simplejson to your requirements.txt, which now has support for serializing Decimals. It's a drop-in replacement for the included json module. import simplejson as json # instead of import json The rest of your code will work the same. If you need further assistance, kindly leave a comment. | 15 | 18 |
63,265,669 | 2020-8-5 | https://stackoverflow.com/questions/63265669/is-possible-to-save-a-temporaly-file-in-a-azure-function-linux-consuption-plan-i | first of all sorry for my English. I have an Azure Function Linux Consuption Plan using Python and I need to generate an html, transform to pdf using wkhtmltopdf and send it by email. #generate temporally pdf config = pdfkit.configuration(wkhtmltopdf="binary/wkhtmltopdf") pdfkit.from_string(pdf_content, 'report.pdf',configuration=config, options={}) #read pdf and transform to Bytes with open('report.pdf', 'rb') as f: data = f.read() #encode bytes encoded = base64.b64encode(data).decode() #Send Email EmailSendData.sendEmail(html_content,encoded,spanish_month) Code is running ok in my local development but when I deploy the function and execute the code I am getting an error saying: Result: Failure Exception: OSError: wkhtmltopdf reported an error: Loading pages (1/6) [> ] 0% [======> ] 10% [==============================> ] 50% [============================================================] 100% QPainter::begin(): Returned false Error: Unable to write to destination I think that error is reported because for any reason write permission is not available. Can you help me to solve this problem? Thanks in advance. | The tempfile.gettempdir() method returns a temporary folder, which on Linux is /tmp. Your application can use this directory to store temporary files generated and used by your functions during execution. So use /tmp/report.pdf as the file directory to save temporary file. with open('/tmp/report.pdf', 'rb') as f: data = f.read() For more details, you could refer to this article. | 8 | 10 |
63,273,028 | 2020-8-5 | https://stackoverflow.com/questions/63273028/fastapi-get-user-id-from-api-key | In FastAPI one can simply write a security dependency at the router level and secure an entire part of the URLs. router.include_router( my_router, prefix="/mypath", dependencies=[Depends(auth.oauth2_scheme)] ) This avoids repeating a lot of code. The only problem is that I would like to protect a part of the URLs with a router-level dependency that checks the validity of the user token and retrieves the user id for that token. The only way I found is to add another dependency to all the functions, but this leads to repeating the code that I just saved. Long story short, is there a way to add the dependency at the router level, retrieve and return the user id, and pass the returned value to the handling function? Something like router.py router.include_router( my_router, prefix="/mypath", dependencies=[user_id = Depends(auth.oauth2_scheme)] ) my_router.py my_router = APIRouter() @my_router.get("/my_path") async def get_my_path(**kwargs): user_id = kwargs["user_id"] # Do stuff with the user_id return {} | Once the user is authenticated in the dependency function, add the user_id to request.state; then in your route you can access it from the request object. async def oauth2_scheme(request: Request): request.state.user_id = "foo" my_router = APIRouter() @my_router.get("/") async def hello(request: Request): print(request.state.user_id) app.include_router( my_router, dependencies=[Depends(oauth2_scheme)] ) | 7 | 4 |
63,259,362 | 2020-8-5 | https://stackoverflow.com/questions/63259362/type-hints-for-lxml | New to Python and come from a statically typed language background. I want type hints for https://lxml.de just for ease of development (mypy flagging issues and suggesting methods would be nice!) To my knowledge, this is a python 2.0 module and doesn’t have types. Currently I’ve used https://mypy.readthedocs.io/en/stable/stubgen.html to create stub type definitions and filling in “any”-types I’m using with more information, but it’s really hacky. Are there any safer ways to get type hints? | There is an official stubs package for lxml now called lxml-stubs: $ pip install lxml-stubs Note, however, that the stubs are still in development and are not 100% complete yet (although very much usable from my experience). These stubs were once part of typeshed, then curated by Jelle Zijlstra after removal and now are developed as part of the lxml project. If you want the development version of the stubs, install via $ pip install git+https://github.com/lxml/lxml-stubs.git (the project's readme installation command is missing the git+ prefix in URL's scheme and won't work). | 17 | 16 |
63,258,749 | 2020-8-5 | https://stackoverflow.com/questions/63258749/how-to-extract-density-function-probabilities-in-python-pandas-kde | The pandas.plot.kde() function is handy for plotting the estimated density function of a continuous random variable. It will take data x as input, and display the probabilities p(x) of the binned input as its output. How can I extract the values of probabilities it computes? Instead of just plotting the probabilities of bandwidthed samples, I would like an array or pandas series that contains the probability values it internally computed. If this can't be done with pandas kde, let me know of any equivalent in scipy or other | There are several ways to do that. You can either compute it yourself or get it from the plot. As pointed out in the comment by @RichieV following this post, you can extract the data from the plot using data.plot.kde().get_lines()[0].get_xydata() Use seaborn and then the same as in 1): You can use seaborn to estimate the kernel density and then matplotlib to extract the values (as in this post). You can either use distplot or kdeplot: import seaborn as sns # kde plot x,y = sns.kdeplot(data).get_lines()[0].get_data() # distplot x,y = sns.distplot(data, hist=False).get_lines()[0].get_data() You can use the underlying methods of scipy.stats.gaussian_kde to estimate the kernel density which is used by pandas: import scipy.stats density = scipy.stats.gaussian_kde(data) and then you can use this to evaluate it on a set of points: x = np.linspace(0,80,200) y = density(x) | 14 | 19 |
63,168,043 | 2020-7-30 | https://stackoverflow.com/questions/63168043/python-matplotlib-3d-plot-with-two-axes | I am trying to create a plot similar to the one below taken from this paper, essentially a 3d plot with two distinct y-axes. Following guidance in this blog, I created a minimal example. Modules from mpl_toolkits import mplot3d import numpy as np %matplotlib inline import numpy as np import matplotlib.pyplot as plt Create some data def f(x, y): return np.sin(np.sqrt(x ** 2 + y ** 2)) x = np.linspace(-6, 6, 30) y = np.linspace(-6, 6, 30) X, Y = np.meshgrid(x, y) Z = f(X, Y) Z2 = Z*100+100 Plotting This creates a nice 3d plot, but obviously with only one y-axis. I could not find any advice online on how to get there for python, albeit some for matlab. fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_surface(X, Y, Z2, rstride=1, cstride=1, cmap='viridis', edgecolor='none') ax.set_title('surface'); ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z'); Code gives: Reference graph: | This isn't easy. One possible workaround approach is as follows: Based on your shared reference figure, I think you mean actually that you are looking to implement a second z-axis, not y-axis. The axes object for the 3d plot remains singular and shared (out of necessity / apparent matplotlib 3d plot limitations), but the data is plotted as though on the same continuous scale of values yet the axis ticks and labels are custom overwritten to reflect different scales of values. E.g., import numpy as np import matplotlib.pyplot as plt def f(x, y): return np.sin(np.sqrt(x ** 2 + y ** 2)) def g(x, y): return -np.cos(np.sqrt(x ** 2 + y ** 2)) x = np.linspace(-6, 6, 30) y = np.linspace(-6, 6, 30) X, Y = np.meshgrid(x, y) Z = f(X, Y) Z_new = g(X, Y) offset = 5 Z_new_offset = Z_new + Z.max() + offset fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111, projection="3d") surf1 = ax.plot_surface( X, Y, Z, rstride=1, cstride=1, cmap="viridis", edgecolor="none", alpha=0.7 ) surf2 = ax.plot_surface( X, Y, Z_new_offset, rstride=1, cstride=1, cmap="plasma", edgecolor="none", alpha=0.7, ) z_ticks_original = np.linspace(Z.min(), Z.max(), 5) # Add custom tick labels and tick marks for the new plot on the left z_ticks_new = np.linspace(Z_new_offset.min(), Z_new_offset.max(), 5) for z_tick in z_ticks_new: ax.text( X.min() - 0.5, Y.min() - 2.5, z_tick + 0.25, f"{z_tick - (offset+1):.1f}", color="k", verticalalignment="center", ) ax.plot( [X.min() - 0.5, X.min()], [Y.min() - 0.5, Y.min()], [z_tick, z_tick], color="k", ) ax.set_zticks(np.block([z_ticks_original, z_ticks_new])) fig.canvas.draw() labels = [] for lab, tick in zip(ax.get_zticklabels(), ax.get_zticks()): if float(tick) >= 1.0: lab.set_text("") labels += [lab] ax.set_zticklabels(labels) # Draw the left Z-axis line ax.plot( [X.min() - 0.5] * 2, [Y.min() - 0.5] * 2, [z_ticks_new.min(), z_ticks_new.max()], color="k", ) ax.set_xlabel("X") ax.set_ylabel("Y") ax.set_zlabel("Z") cbar1 = fig.colorbar( surf1, ax=ax, pad=-0.075, orientation="vertical", shrink=0.5 ) cbar1.set_label("Z Values (primary z-axis)") cbar2 = fig.colorbar( surf2, ax=ax, pad=0.12, orientation="vertical", shrink=0.5, ticks=z_ticks_original, ) cbar2.set_label("Z Values (secondary z-axis)") plt.show() produces: | 7 | 3 |
63,199,763 | 2020-7-31 | https://stackoverflow.com/questions/63199763/maintained-alternatives-to-pypdf2 | I'm using the PyPDF2 library for extracting text, images, page width and heights, annotations, and other attributes from pdf documents. However, the library has many bugs and issues and seems not to be maintained for a long time already. (edit: PyPDF2 is maintained again) Is there a more vivid fork that is being maintained and developed? Is there a good alternative? From what I know, reportlab is more suitable for creating brand new pdf's (or maybe I'm just not experienced enough with reportlab). | Update: pypdf (pypi) is maintained again - and I am the maintainer (of pypdf and PyPDF2) :-) I've just released a new version with several bugfixes. Looking at the top PyPI packages, PyPDF2 is also the most used one (and pypdf==3.1.0 is almost the same as PyPDF2==3.0.0, the community just needs a bit of time to switch to pypdf) Three potential alternatives which are maintained (just like pypdf): pymupdf: uses mupdf (only free for open source due to mypdf license) pikepdf: Uses qpdf pdfminer.six: A pure Python project. I would not use: PyPDF2: I am the maintainer. In December 2022, I made the last release. I want the community to switch to pypdf (where I'm also the maintainer) PyPDF3 (pypi): Has less activity and probably less features than PyPDF2. PyPDF4 (pypi): Last release on PyPI in 2018 | 24 | 48 |
63,178,721 | 2020-7-30 | https://stackoverflow.com/questions/63178721/how-do-decode-b-x95-xc3-x8a-xb0-x8ds-x86-x89-x94-x82-x8a-xba | [Summary]: The data grabbed from the file is b"\x95\xc3\x8a\xb0\x8ds\x86\x89\x94\x82\x8a\xba" How to decode these bytes into readable Chinese characters please? ====== I extracted some game scripts from an exe file. The file is packed with Enigma Virtual Box and I unpacked it. Then I'm able to see the scripts' names just right, in English, as it supposed to be. In analyzing these scripts, I get an error looks like this: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x95 in position 0: invalid start byte I changed the decoding to GBK, and the error disappeared. But the output file is not readable. It includes readable English characters and non-readable content which supposed to be in Chinese. Example: chT0002>pDIӘIʆ I tried different encodings for saving the file and they show the same result, so the problem might be on the decoding part. The data grabbed from the file is b"\x95\xc3\x8a\xb0\x8ds\x86\x89\x94\x82\x8a\xba" I tried many ways but I just can't decode these bytes into readable Chinese characters. Is there anything wrong with the file itself? Or somewhere else? I really need help, please. One of the scripts are attached here. | In order to reliably decode bytes, you must know how the bytes were encoded. I will borrow the quote from the python codecs docs: Without external information it’s impossible to reliably determine which encoding was used for encoding a string. Without this information, there are ways to try and detect the encoding (chardet seems to be the most widely-used). Here's how you could approach that. import chardet data = b"\x95\xc3\x8a\xb0\x8ds\x86\x89\x94\x82\x8a\xba" detected = chardet.detect(data) decoded = data.decode(detected["encoding"]) The above example, however, does not work in this case because chardet isn't able to detect the encoding of these bytes. At that point, you'll have to either use trial-and-error or try other libraries. One method you could use is to simply try every standard encoding, print out the result, and see which encoding makes sense. 
codecs = [ "ascii", "big5", "big5hkscs", "cp037", "cp273", "cp424", "cp437", "cp500", "cp720", "cp737", "cp775", "cp850", "cp852", "cp855", "cp856", "cp857", "cp858", "cp860", "cp861", "cp862", "cp863", "cp864", "cp865", "cp866", "cp869", "cp874", "cp875", "cp932", "cp949", "cp950", "cp1006", "cp1026", "cp1125", "cp1140", "cp1250", "cp1251", "cp1252", "cp1253", "cp1254", "cp1255", "cp1256", "cp1257", "cp1258", "cp65001", "euc_jp", "euc_jis_2004", "euc_jisx0213", "euc_kr", "gb2312", "gbk", "gb18030", "hz", "iso2022_jp", "iso2022_jp_1", "iso2022_jp_2", "iso2022_jp_2004", "iso2022_jp_3", "iso2022_jp_ext", "iso2022_kr", "latin_1", "iso8859_2", "iso8859_3", "iso8859_4", "iso8859_5", "iso8859_6", "iso8859_7", "iso8859_8", "iso8859_9", "iso8859_10", "iso8859_11", "iso8859_13", "iso8859_14", "iso8859_15", "iso8859_16", "johab", "koi8_r", "koi8_t", "koi8_u", "kz1048", "mac_cyrillic", "mac_greek", "mac_iceland", "mac_latin2", "mac_roman", "mac_turkish", "ptcp154", "shift_jis", "shift_jis_2004", "shift_jisx0213", "utf_32", "utf_32_be", "utf_32_le", "utf_16", "utf_16_be", "utf_16_le", "utf_7", "utf_8", "utf_8_sig", ] data = b"\x95\xc3\x8a\xb0\x8ds\x86\x89\x94\x82\x8a\xba" for codec in codecs: try: print(f"{codec}, {data.decode(codec)}") except UnicodeDecodeError: continue Output cp037, nC«^ýËfimb«[ cp273, nC«¢ýËfimb«¬ cp437, ò├è░ìsåëöéè║ cp500, nC«¢ýËfimb«¬ cp720, ـ├è░së¤éè║ cp737, Χ├Λ░ΞsΗΚΦΓΛ║ cp775, Ģ├Ŗ░ŹsåēöéŖ║ cp850, ò├è░ìsåëöéè║ cp852, Ľ├Ő░ŹsćëöéŐ║ cp855, Ћ├і░ЇsєЅћѓі║ cp856, ץ├ך░םsזיפגך║ cp857, ò├è░ısåëöéè║ cp858, ò├è░ìsåëöéè║ cp860, ò├è░ìsÁÊõéè║ cp861, þ├è░Þsåëöéè║ cp862, ץ├ך░םsזיפגך║ cp863, Ï├è░‗s¶ëËéè║ cp864, ¼ﺃ├٠┌s│┬½∙├ﻑ cp865, ò├è░ìsåëöéè║ cp866, Х├К░НsЖЙФВК║ cp875, nCα£δΉfimbας cp949, 빩뒺뛱냹봻듆 cp1006, ﺣﺍsﭦ cp1026, nC«¢`Ëfimb«¬ cp1125, Х├К░НsЖЙФВК║ cp1140, nC«^ýËfimb«[ cp1250, •ĂŠ°Ťs†‰”‚Šş cp1251, •ГЉ°Ќs†‰”‚Љє cp1256, •أٹ°چs†‰”‚ٹ؛ gbk, 暶姲峴唹攤姾 gb18030, 暶姲峴唹攤姾 latin_1, ðsº iso8859_2, ðsş iso8859_4, ðsē iso8859_5, УАsК iso8859_7, Γ°sΊ iso8859_9, ðsº iso8859_10, ðsš iso8859_11, รฐsบ iso8859_13, ưsŗ iso8859_14, ÃḞsẃ iso8859_15, ðsº iso8859_16, ðsș koi8_r, ∙ц┼╟█s├┴■┌┼╨ koi8_u, ∙ц┼╟█s├┴■┌┼╨ kz1048, •ГЉ°Қs†‰”‚Љғ mac_cyrillic, Х√К∞НsЖЙФВКЇ mac_greek, ïΟäΑçsÜâî²äΚ mac_iceland, ï√ä∞çsÜâîÇä∫ mac_latin2, ē√äįćsÜČĒāäļ mac_roman, ï√ä∞çsÜâîÇä∫ mac_turkish, ï√ä∞çsÜâîÇä∫ ptcp154, •ГҠ°ҚsҶү”ӮҠә shift_jis_2004, 陛寛行̹狽桓 shift_jisx0213, 陛寛行̹狽桓 utf_16, 쎕낊玍覆芔몊 utf_16_be, 闃誰赳蚉钂誺 utf_16_le, 쎕낊玍覆芔몊 Edit: After running all of the seemingly legible results through Google Translate, I suspect this encoding is UTF-16 big-endian. Here's the results: Encoding Decoded Language Detected English Translation gbk 暶姲峴唹攤姾 Chinese Jian Xian JiaoTanJiao gb18030 暶姲峴唹攤姾 Chinese Jian Xian Jiao Tan Jiao utf_16 쎕낊玍覆芔몊 Korean None utf_16_be 闃誰赳蚉钂誺 Chinese Who is the epiphysis? utf_16_le 쎕낊玍覆芔몊 Korean None | 7 | 7 |
63,221,321 | 2020-8-2 | https://stackoverflow.com/questions/63221321/discord-py-how-to-get-the-user-who-invited-added-the-bot-to-his-server-soluti | I want to send a DM to the user, who invited/added the bot to his server. I noticed that it's displayed in the audit log. Can I fetch that and get the user or is there a easier way to achieve that? Example: bot = commands.Bot() @bot.event async def on_guild(guild, inviter): await inviter.send("Thanks for adding the bot to your server!") | With discord.py 2.0 you can get the BotIntegration of a server and with that the user who invited the bot. Example from discord.ext import commands bot = commands.Bot() @bot.event async def on_guild_join(guild): # get all server integrations integrations = await guild.integrations() for integration in integrations: if isinstance(integration, discord.BotIntegration): if integration.application.user.name == bot.user.name: bot_inviter = integration.user # returns a discord.User object # send message to the inviter to say thank you await bot_inviter.send("Thank you for inviting my bot!!") break Note: guild.integrations() requires the Manage Server (manage_guild) permission. References: Guild.integrations discord.BotIntegration | 7 | 4 |
63,169,865 | 2020-7-30 | https://stackoverflow.com/questions/63169865/how-to-do-multiprocessing-in-fastapi | While serving a FastAPI request, I have a CPU-bound task to do on every element of a list. I'd like to do this processing on multiple CPU cores. What's the proper way to do this within FastAPI? Can I use the standard multiprocessing module? All the tutorials/questions I found so far only cover I/O-bound tasks like web requests. | async def endpoint You could use loop.run_in_executor with ProcessPoolExecutor to start function at a separate process. @app.post("/async-endpoint") async def test_endpoint(): loop = asyncio.get_event_loop() with concurrent.futures.ProcessPoolExecutor() as pool: result = await loop.run_in_executor(pool, cpu_bound_func) # wait result def endpoint Since def endpoints are run implicitly in a separate thread, you can use the full power of modules multiprocessing and concurrent.futures. Note that inside def function, await may not be used. Samples: @app.post("/def-endpoint") def test_endpoint(): ... with multiprocessing.Pool(3) as p: result = p.map(f, [1, 2, 3]) @app.post("/def-endpoint/") def test_endpoint(): ... with concurrent.futures.ProcessPoolExecutor(max_workers=3) as executor: results = executor.map(f, [1, 2, 3]) Note: It should be remembered that creating a pool of processes in an endpoint, as well as creating a large number of threads, can lead to a slowdown in response as the number of requests increases. Executing on the fly The easiest and most native way to execute a function in a separate process and immediately wait for the results is to use the loop.run_in_executor with ProcessPoolExecutor. A pool, as in the example below, can be created when the application starts and do not forget to shutdown on application exit. The number of processes used in the pool can be set using the max_workers ProcessPoolExecutor constructor parameter. If max_workers is None or not given, it will default to the number of processors on the machine. The disadvantage of this approach is that the request handler (path operation) waits for the computation to complete in a separate process, while the client connection remains open. And if for some reason the connection is lost, then the results will have nowhere to return. import asyncio from concurrent.futures.process import ProcessPoolExecutor from contextlib import asynccontextmanager from fastapi import FastAPI from calc import cpu_bound_func @asynccontextmanager async def lifespan(app: FastAPI): app.state.executor = ProcessPoolExecutor() yield app.state.executor.shutdown() app = FastAPI(lifespan=lifespan) async def run_in_process(fn, *args): loop = asyncio.get_event_loop() return await loop.run_in_executor(app.state.executor, fn, *args) # wait and return result @app.get("/{param}") async def handler(param: int): res = await run_in_process(cpu_bound_func, param) return {"result": res} Move to background Usually, CPU bound tasks are executed in the background. FastAPI offers the ability to run background tasks to be run after returning a response, inside which you can start and asynchronously wait for the result of your CPU bound task. In this case, for example, you can immediately return a response of "Accepted" (HTTP code 202) and a unique task ID, continue calculations in the background, and the client can later request the status of the task using this ID. BackgroundTasks provide some features, in particular, you can run several of them (including in dependencies). 
And in them you can use the resources obtained in the dependencies, which will be cleaned only when all tasks are completed, while in case of exceptions it will be possible to handle them correctly. This can be seen more clearly in this diagram. Below is an example that performs minimal task tracking. One instance of the application running is assumed. import asyncio from concurrent.futures.process import ProcessPoolExecutor from contextlib import asynccontextmanager from http import HTTPStatus from fastapi import BackgroundTasks from typing import Dict from uuid import UUID, uuid4 from fastapi import FastAPI from pydantic import BaseModel, Field from calc import cpu_bound_func class Job(BaseModel): uid: UUID = Field(default_factory=uuid4) status: str = "in_progress" result: int = None app = FastAPI() jobs: Dict[UUID, Job] = {} async def run_in_process(fn, *args): loop = asyncio.get_event_loop() return await loop.run_in_executor(app.state.executor, fn, *args) # wait and return result async def start_cpu_bound_task(uid: UUID, param: int) -> None: jobs[uid].result = await run_in_process(cpu_bound_func, param) jobs[uid].status = "complete" @app.post("/new_cpu_bound_task/{param}", status_code=HTTPStatus.ACCEPTED) async def task_handler(param: int, background_tasks: BackgroundTasks): new_task = Job() jobs[new_task.uid] = new_task background_tasks.add_task(start_cpu_bound_task, new_task.uid, param) return new_task @app.get("/status/{uid}") async def status_handler(uid: UUID): return jobs[uid] @asynccontextmanager async def lifespan(app: FastAPI): app.state.executor = ProcessPoolExecutor() yield app.state.executor.shutdown() More powerful solutions All of the above examples were pretty simple, but if you need some more powerful system for heavy distributed computing, then you can look aside message brokers RabbitMQ, Kafka, NATS and etc. And libraries using them like Celery. | 58 | 123 |
63,216,201 | 2020-8-2 | https://stackoverflow.com/questions/63216201/how-to-install-python-with-conda | I'm trying to install python 3.9 in a conda enviroment. I tried creating a new conda env using the following command, conda create --name myenv python=3.9 But I got an error saying package not found because python 3.9 is not yet released So, I manually created a folder in envs folder and tried to list all envs. But I couldn't get the manually created new environment. So, how do I install python 3.9 in a conda env with all functionalities like pip working? | To create python 3.11 conda environment use the following command conda create -n py311 python=3.11 py311 - environment name Update 3 To create python 3.10 conda environment use the following command conda create -n py310 python=3.10 py310 - environment name Update 2 You can now directly create python 3.9 environment using the following command conda create -n py39 python=3.9 py39 - environment name Update 1 Python 3.9 is now available in conda-forge. To download the tar file - https://anaconda.org/conda-forge/python/3.9.0/download/linux-64/python-3.9.0-h852b56e_0_cpython.tar.bz2 Anaconda Page - https://anaconda.org/conda-forge/python As pointed out in the comments, python 3.9 is not yet there on any channels. So, it cannot be install yet via conda. Instead, you can download the python 3.9 executable and install it. Once the installation is done, a new executable will be created for python 3.9 and pip 3.9 will be created. Python: python3.7 python3.7-config python3.7m python3.7m-config python3.9 python3.9-config pip pip pip3 pip3.7 pip3.8 pip3.9 pipreqs In order to install ipython for python 3.9, pip3.9 install ipython | 106 | 122 |
63,177,681 | 2020-7-30 | https://stackoverflow.com/questions/63177681/is-there-a-difference-between-running-fastapi-from-uvicorn-command-in-dockerfile | I am running a fast api and when i was developing i had the following piece of code in my app.py file code in app.py: import uvicorn if __name__=="__main__": uvicorn.run("app.app:app",host='0.0.0.0', port=4557, reload=True, debug=True, workers=3) so i was about to run CMD ["python3","app.py"] in my Dockerfile. on the fastapi example they did something like this : CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"] I want to know what is the difference between these two methods as i think both of them will work. | Update (on 2022-12-31) As an update from @Marcelo Trylesinski, from uvicorn v 0.19.0, the --debug flag was removed (Ref #1640). No, there is no difference. The commadline run method (uvicorn app.main:app) and executing the app.py using python command (python app.py) are the same. Both methods are calling the uvicorn.main.run(...) function under the hood. In other words, the uvicorn command is a shortcut to the uvicorn.run(...) function. So, in your case the function call uvicorn.run("app.app:app",host='0.0.0.0', port=4557, reload=True, debug=True, workers=3) can be done by uvicorn commandline as, uvicorn app.app:app --host 0.0.0.0 --port 4557 --reload --debug --workers 3 Side Notes The --debug option is hidden from the command line options help page, but it can be found in the source code. Thus, it feels running the app using uvicorn command can be considered as a production thing. | 42 | 39 |
63,174,561 | 2020-7-30 | https://stackoverflow.com/questions/63174561/pip-install-package-from-private-github-repo-with-deploy-key-in-docker | I'm trying to build a Docker container that should install a series of python packages from a requirements.txt file. One of the entries is a python package hosted on a private GitHub repository. To install it, I've created a pair of SSH keys and added the public one as a Deploy Key to the GitHub repository. However, when I'm building the container I'm getting this error: ERROR: Command errored out with exit status 128: git clone -q 'ssh://****@github.com:organization/my-package' /tmp/pip-install-e81w4wri/my-package Check the logs for full command output. I've tried to debug the error by changing the pip install command of the docker file with RUN git clone [email protected]:organization/my-package.git and it did work fine. What does this error mean and how can I solve it? I could clone it and install it with a dedicated command, but if possible I'd like to keep all requirements in a single place. Thanks! This is the Dockerfile that I'm using: FROM joyzoursky/python-chromedriver:3.7-alpine3.8 as base FROM base as builder RUN echo "http://dl-8.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories RUN apk --no-cache --update-cache add bash gcc gfortran build-base git wget freetype-dev libpng-dev openblas-dev openssh-client RUN ln -s /usr/include/locale.h /usr/include/xlocale.h # copy requirements RUN mkdir /install WORKDIR /install COPY ./requirements.txt /var/www/requirements.txt ### GITHUB SSH KEY ### COPY ./keys/deploy_key_private . RUN mkdir /root/.ssh && mv deploy_key_private /root/.ssh/id_rsa RUN eval $(ssh-agent) && \ ssh-add /root/.ssh/id_rsa && \ ssh-keyscan -H github.com >> /etc/ssh/ssh_known_hosts RUN pip install --upgrade pip && pip install --prefix=/install -r /var/www/requirements.txt --log logs.txt FROM base COPY --from=builder /install /usr/local # KEEP ON BUILDING THE CONTAINER The package is listed in the requirements.txt as git+ssh://[email protected]:organization/my-package@master#egg=my_package If relevant, here is the traceback from pip: Exception information: 2020-07-30T11:56:55,329 Traceback (most recent call last): 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 216, in _main 2020-07-30T11:56:55,329 status = self.run(options, args) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/cli/req_command.py", line 182, in wrapper 2020-07-30T11:56:55,329 return func(self, options, args) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 325, in run 2020-07-30T11:56:55,329 reqs, check_supported_wheels=not options.target_dir 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/resolution/legacy/resolver.py", line 183, in resolve 2020-07-30T11:56:55,329 discovered_reqs.extend(self._resolve_one(requirement_set, req)) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/resolution/legacy/resolver.py", line 388, in _resolve_one 2020-07-30T11:56:55,329 abstract_dist = self._get_abstract_dist_for(req_to_install) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/resolution/legacy/resolver.py", line 340, in _get_abstract_dist_for 2020-07-30T11:56:55,329 abstract_dist = self.preparer.prepare_linked_requirement(req) 2020-07-30T11:56:55,329 File 
"/usr/local/lib/python3.7/site-packages/pip/_internal/operations/prepare.py", line 469, in prepare_linked_requirement 2020-07-30T11:56:55,329 hashes=self._get_linked_req_hashes(req) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/operations/prepare.py", line 239, in unpack_url 2020-07-30T11:56:55,329 unpack_vcs_link(link, location) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/operations/prepare.py", line 99, in unpack_vcs_link 2020-07-30T11:56:55,329 vcs_backend.unpack(location, url=hide_url(link.url)) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/vcs/versioncontrol.py", line 733, in unpack 2020-07-30T11:56:55,329 self.obtain(location, url=url) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/vcs/versioncontrol.py", line 641, in obtain 2020-07-30T11:56:55,329 self.fetch_new(dest, url, rev_options) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/vcs/git.py", line 230, in fetch_new 2020-07-30T11:56:55,329 self.run_command(make_command('clone', '-q', url, dest)) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/vcs/versioncontrol.py", line 774, in run_command 2020-07-30T11:56:55,329 log_failed_cmd=log_failed_cmd) 2020-07-30T11:56:55,329 File "/usr/local/lib/python3.7/site-packages/pip/_internal/vcs/versioncontrol.py", line 166, in call_subprocess 2020-07-30T11:56:55,329 raise SubProcessError(exc_msg) 2020-07-30T11:56:55,329 pip._internal.exceptions.SubProcessError: Command errored out with exit status 128: git clone -q 'ssh://****@github.com:organization/my-package' /tmp/pip-install-e81w4wri/my-package Check the logs for full command output. | [email protected]:organization/my-package.git is a valid SSH URL. ssh://[email protected]:organization/my-package.git is not. ssh://[email protected]/organization/my-package.git would be. As in here, you can add GIT_SSH_COMMAND='ssh -v' pip install ... to see exactly what is going on. You might need: git config --global url."ssh://[email protected]/".insteadOf ssh://[email protected]: The OP arabinelli reports in the comments having to use the following line in requirements.txt: git+ssh://[email protected]/my-organization/my-repo-name@master#egg=my_package_dir Aug. 2022, Jako adds in the comments: This worked for me with a private BitBucket repository: git+ssh://[email protected]/my-organization/my-repo-name@master#egg=my_project&subdirectory=subdir1 ^^^^^^^ I had to specify the subdirectory 'subdir1' | 7 | 6 |
63,187,644 | 2020-7-31 | https://stackoverflow.com/questions/63187644/import-error-cannot-import-name-ft2font-from-partially-initialized-module-ma | import matplotlib.pyplot as plt output ImportError Traceback (most recent call last) <ipython-input-7-a0d2faabd9e9> in <module> ----> 1 import matplotlib.pyplot as plt ~\AppData\Roaming\Python\Python38\site-packages\matplotlib\__init__.py in <module> 172 173 --> 174 _check_versions() 175 176 ~\AppData\Roaming\Python\Python38\site-packages\matplotlib\__init__.py in _check_versions() 157 # Quickfix to ensure Microsoft Visual C++ redistributable 158 # DLLs are loaded before importing kiwisolver --> 159 from . import ft2font 160 161 for modname, minver in [ ImportError: cannot import name 'ft2font' from partially initialized module 'matplotlib' (most likely due to a circular import) (C:\Users\p****\AppData\Roaming\Python\Python38\site-packages\matplotlib\__init__.py) | As you are on a windows machine, there is a possible duplicate. Navigate by clicking here. This could be an issue regarding matplotlib. A force reinstall over pip would solve the issue. pip install matplotlib --force-reinstall If you are working on Anaconda, launch Anaconda as Administrator, conda install freetype --force-reinstall This solved the same issue for me. | 16 | 19 |
63,220,597 | 2020-8-2 | https://stackoverflow.com/questions/63220597/python-in-r-error-could-not-find-a-python-environment-for-usr-bin-python | I don't understand how R handles the Python environment and Python version and keep getting the error Error: could not find a Python environment for /usr/bin/python. I installed Miniconda and created a conda environment in the shell: conda activate r-reticulate Then, in R, I try to install keras (same problem with package tensorflow): library(keras) reticulate::use_condaenv() install_keras(method = "conda", conda = reticulate::conda_binary()) ... and get the following error: Error: could not find a Python environment for /usr/bin/python I tried to figure out what Python R should be using by reticulate::py_config() and get python: /usr/bin/python libpython: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config/libpython2.7.dylib pythonhome: /System/Library/Frameworks/Python.framework/Versions/2.7:/System/Library/Frameworks/Python.framework/Versions/2.7 version: 2.7.16 (default, Jul 5 2020, 02:24:03) [GCC 4.2.1 Compatible Apple LLVM 11.0.3 (clang-1103.0.29.21) (-macos10.15-objc- numpy: /Users/bestocke/Library/Python/2.7/lib/python/site-packages/numpy numpy_version: 1.16.6 tensorflow: [NOT FOUND] python versions found: /usr/bin/python3 /usr/local/bin/python3 /usr/bin/python I don't understand this. This seems to be using Python 2.7. When trying to figure out which Python is being used in the shell, I get: > which python /opt/miniconda3/envs/r-reticulate/bin/python and > ls -l /opt/miniconda3/envs/r-reticulate/bin/python lrwxr-xr-x 1 username wheel 9 Aug 2 15:21 /opt/miniconda3/envs/r-reticulate/bin/python -> python3.6 Suggesting Python 3.6 should be used. What am I getting wrong here? | Try to follow the guide at https://tensorflow.rstudio.com/installation/: In your R-studio console : install.packages("tensorflow") library(tensorflow) install_tensorflow() If you have not installed Anaconda / Miniconda manually, then at step no. 3, a prompt will ask your permission to install Miniconda. If you already have conda installed, then : Create new environment r-reticulate in conda : conda create -n r-reticulate Install tensorflow from R-studio console with parameters : install_tensorflow(method = 'conda', envname = 'r-reticulate') Load the reticulate package library(reticulate) Activate the conda environment in R-studio use_condaenv('r-reticulate') Load the tensorflow libray library(tensorflow) Check if tensorflow is active tf$constant("Hellow Tensorflow") References : https://tensorflow.rstudio.com/installation/ https://rstudio.github.io/reticulate/ | 14 | 26 |
63,158,424 | 2020-7-29 | https://stackoverflow.com/questions/63158424/why-does-keras-model-fit-with-sample-weight-have-long-initialization-time | I am using keras with a tensorflow (version 2.2.0) backend to train a classifier to distinguish between two datasets, A and B, which I have mixed into a pandas DataFrame object x_train (with two columns), and with labels in a numpy array y_train. I would like to perform sample weighting in order to account for the fact that A has far more samples than B. In addition, A is comprised of two datasets A1 and A2, with A1 much larger than A2; I would like to account for this fact as well using my sample weights. I have the sample weights in a numpy array called w_train. There are ~10 million training samples. Here is example code: model = Sequential() model.add(Dense(64, input_dim=x_train.shape[1], activation='relu')) model.add(Dropout(0.1)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.1)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(x_train.iloc, y_train, sample_weight=w_train) When I use the sample_weight argument in model.fit(), I find that the model fitting initialization (i.e. whatever happens before keras starts to display the training progress) takes forever, too long to wait for. The problem goes away when I limit the dataset to 1000 samples, but as I increase to 100000 or 1000000 samples I notice that there is a significant difference in initialization and fitting time, so I suspect it has something to do with the way the data is being loaded. Nevertheless, it seems weird that merely adding the sample_weights argument would cause such a large timing difference. Other information: I am running on CPU using a Jupyter notebook. What is the problem here? Is there a way for me to modify the training setup or something else in order to speed up the initialization (or training) time? | The issue is caused by how TensorFlow validates some types of input objects. Such validations, when the data are surely correct, are just a waste of time (I hope this will be handled better in the future). In order to force TensorFlow to skip such validation procedures, you can trivially wrap the weights in a Pandas Series, as follows: model.fit(x_train.iloc, y_train, sample_weight=pd.Series(w_train)) Do note that in your code you are using the metrics keyword. If you want the accuracy to actually be weighted by the provided weights, use the weighted_metrics argument instead. | 7 | 9 |
63,218,645 | 2020-8-2 | https://stackoverflow.com/questions/63218645/lowering-the-xtick-label-density-for-a-datetime-axis | Pretty new to python and programming in general so bear with me please. I have a data set imported from a .csv file and I'm trying to plot a column of values (y axis) by date (x axis) over a 1 year period but the problem is that the dates are way too dense and I can't for the life of me figure out how to space them out or modify how they're defined. Here's the code I'm working with: import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import matplotlib as mpl from scipy import stats import cartopy.crs as ccrs import cartopy.io.img_tiles as cimgt df = pd.read_csv('Vanuatu Earthquakes 2018-2019.csv') and here's the line plot code: plt.figure(figsize=(15, 7)) ax = sns.lineplot(x='date', y='mag', data=df).set_title("Earthquake magnitude May 2018-2019") plt.xlabel('Date') plt.ylabel('Magnitude (Mw)') plt.savefig('EQ mag time') This currently gives me this line plot: Currently I want to do it by something like a small tick for each day and a larger tick + label for the beginning of each week. Doesn't have to be exactly that but I'm mostly looking to just decrease the density. I've looked at loads of posts on here but none of them seem to work for my situation so any help would be greatly appreciated. Got the dates working as per Konqui's advice and my code now looks like this: time = pd.date_range(start = '01-05-2018', end = '01-05-2019', freq = 'D') df = pd.DataFrame({'date': list(map(lambda x: str(x), time)), 'mag': np.random.random(len(time))}) plt.figure(figsize=(15, 7)) df['date'] = pd.to_datetime(df['date'], format = '%Y-%m') ax = sns.lineplot(x='date', y='mag', data=df).set_title("Earthquake magnitude May 2018-2019") ax.xaxis.set_major_locator(md.WeekdayLocator(byweekday = 1)) ax.xaxis.set_major_formatter(md.DateFormatter('%Y-%m-%d')) plt.setp(ax.xaxis.get_majorticklabels(), rotation = 90) ax.xaxis.set_minor_locator(md.DayLocator(interval = 1)) plt.xlabel('Date') plt.ylabel('Magnitude (Mw)') which gives me an error message: AttributeError: 'Text' object has no attribute 'xaxis'. Any thoughts? | Assumption I suppose you start from a dataframe similar to this one saved in a Vanuatu Earthquakes 2018-2019.csv file : import pandas as pd import numpy as np time = pd.date_range(start = '01-01-2020', end = '31-03-2020', freq = 'D') df = pd.DataFrame({'date': list(map(lambda x: str(x), time)), 'mag': np.random.random(len(time))}) output: date mag 0 2020-01-01 00:00:00 0.940040 1 2020-01-02 00:00:00 0.765570 2 2020-01-03 00:00:00 0.951839 3 2020-01-04 00:00:00 0.708172 4 2020-01-05 00:00:00 0.705032 5 2020-01-06 00:00:00 0.857500 6 2020-01-07 00:00:00 0.866418 7 2020-01-08 00:00:00 0.363287 8 2020-01-09 00:00:00 0.289615 9 2020-01-10 00:00:00 0.741499 plotting: import seaborn as sns import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize = (15, 7)) sns.lineplot(ax = ax, x='date', y='mag', data=df).set_title('Earthquake magnitude May 2018-2019') plt.xlabel('Date') plt.ylabel('Magnitude (Mw)') plt.show() Answer You should do a series of things: First of all, you get that density of labels because your 'date' values are str type, you need to convert them to datetime by df['date'] = pd.to_datetime(df['date'], format = '%Y-%m-%d') in this way your x axis is a datetime type and the above plot will become this: Then you have to adjust ticks; for the major ticks you should set: import matplotlib.dates as md # specify the position of the major ticks at the beginning of the week ax.xaxis.set_major_locator(md.WeekdayLocator(byweekday = 1)) # specify the format of the labels as 'year-month-day' ax.xaxis.set_major_formatter(md.DateFormatter('%Y-%m-%d')) # (optional) rotate by 90° the labels in order to improve their spacing plt.setp(ax.xaxis.get_majorticklabels(), rotation = 90) and for the minor ticks: # specify the position of the minor ticks at each day ax.xaxis.set_minor_locator(md.DayLocator(interval = 1)) optionally, you can edit the length of the ticks with: ax.tick_params(axis = 'x', which = 'major', length = 10) ax.tick_params(axis = 'x', which = 'minor', length = 5) so the final plot will become: Complete Code # import required packages import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import matplotlib.dates as md # read the dataframe df = pd.read_csv('Vanuatu Earthquakes 2018-2019.csv') # convert 'date' column type from str to datetime df['date'] = pd.to_datetime(df['date'], format = '%Y-%m-%d') # prepare the figure fig, ax = plt.subplots(figsize = (15, 7)) # set up the plot sns.lineplot(ax = ax, x='date', y='mag', data=df).set_title('Earthquake magnitude May 2018-2019') # specify the position of the major ticks at the beginning of the week ax.xaxis.set_major_locator(md.WeekdayLocator(byweekday = 1)) # specify the format of the labels as 'year-month-day' ax.xaxis.set_major_formatter(md.DateFormatter('%Y-%m-%d')) # (optional) rotate by 90° the labels in order to improve their spacing plt.setp(ax.xaxis.get_majorticklabels(), rotation = 90) # specify the position of the minor ticks at each day ax.xaxis.set_minor_locator(md.DayLocator(interval = 1)) # set ticks length ax.tick_params(axis = 'x', which = 'major', length = 10) ax.tick_params(axis = 'x', which = 'minor', length = 5) # set axes labels plt.xlabel('Date') plt.ylabel('Magnitude (Mw)') # show the plot plt.show() Notes If you pay attention to the y axis in my plots, you see that 'mag' values fall in the range (0-1). This is due to the fact that I generate this fake data with 'mag': np.random.random(len(time)). If you read your data from the file Vanuatu Earthquakes 2018-2019.csv, you will get the correct values on the y axis. Try to simply copy the code in the Complete code section. | 7 | 12 |
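A short sketch of the likely cause of the AttributeError reported in the question above: sns.lineplot returns a matplotlib Axes, but chaining .set_title() makes the assigned name point at a Text object instead, so ax.xaxis fails. The DataFrame below is a toy assumption standing in for the real CSV:

```python
# Why the follow-up code raised "AttributeError: 'Text' object has no attribute 'xaxis'":
# sns.lineplot(...) returns an Axes, but chaining .set_title() on it returns a Text
# object, so `ax` no longer holds the Axes. Keeping the Axes from plt.subplots()
# (as in the accepted answer) avoids this. The DataFrame below is toy data standing
# in for the real 'Vanuatu Earthquakes 2018-2019.csv'.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as md

time = pd.date_range(start='2018-05-01', end='2019-05-01', freq='D')
df = pd.DataFrame({'date': time, 'mag': np.random.random(len(time))})

fig, ax = plt.subplots(figsize=(15, 7))
sns.lineplot(ax=ax, x='date', y='mag', data=df)      # ax stays an Axes
ax.set_title('Earthquake magnitude May 2018-2019')   # set the title separately

ax.xaxis.set_major_locator(md.WeekdayLocator(byweekday=1))   # weekly major ticks
ax.xaxis.set_major_formatter(md.DateFormatter('%Y-%m-%d'))
ax.xaxis.set_minor_locator(md.DayLocator(interval=1))        # daily minor ticks
plt.setp(ax.xaxis.get_majorticklabels(), rotation=90)
plt.show()
```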