Dataset columns:
question_id: int64, values from 59.5M to 79.4M
creation_date: string, lengths 8 to 10
link: string, lengths 60 to 163
question: string, lengths 53 to 28.9k
accepted_answer: string, lengths 26 to 29.3k
question_vote: int64, values from 1 to 410
answer_vote: int64, values from -9 to 482
64,084,786
2020-9-27
https://stackoverflow.com/questions/64084786/read-data-from-a-pandas-dataframe-and-create-a-tree-using-anytree-in-python
Is there a way to read data from a pandas DataFrame and construct a tree using anytree? Parent Child A A1 A A2 A2 A21 I can do it with static values as follows. However, I want to automate this by reading the data from a pandas DataFrame with anytree. >>> from anytree import Node, RenderTree >>> A = Node("A") >>> A1 = Node("A1", parent=A) >>> A2 = Node("A2", parent=A) >>> A21 = Node("A21", parent=A2) Output is A β”œβ”€β”€ A1 └── A2 └── A21 This question AND especially the ANSWER has been adopted, copied really, from: Read data from a file and create a tree using anytree in python Many thanks to @Fabien N
Create nodes first if they do not exist, and store their references in a dictionary nodes for further usage. Change parent when necessary for children. We can derive the roots of the forest of trees by seeing which Parent values are not in Child values; since a parent is not a child of any node, it won't appear in the Child column. def add_nodes(nodes, parent, child): if parent not in nodes: nodes[parent] = Node(parent) if child not in nodes: nodes[child] = Node(child) nodes[child].parent = nodes[parent] data = pd.DataFrame(columns=["Parent","Child"], data=[["A","A1"],["A","A2"],["A2","A21"],["B","B1"]]) nodes = {} # store references to created nodes # data.apply(lambda x: add_nodes(nodes, x["Parent"], x["Child"]), axis=1) # 1-liner for parent, child in zip(data["Parent"],data["Child"]): add_nodes(nodes, parent, child) roots = list(data[~data["Parent"].isin(data["Child"])]["Parent"].unique()) for root in roots: # you can skip this for roots[0], if there is no forest and just 1 tree for pre, _, node in RenderTree(nodes[root]): print("%s%s" % (pre, node.name)) Result: A β”œβ”€β”€ A1 └── A2 └── A21 B └── B1 Update printing a specific root: root = 'A' # change according to usecase for pre, _, node in RenderTree(nodes[root]): print("%s%s" % (pre, node.name))
13
9
64,082,036
2020-9-26
https://stackoverflow.com/questions/64082036/arrays-into-pandas-dataframe-columns
I have a program that outputs arrays. For example: [[0, 1, 0], [0, 0, 0], [1, 3, 3], [2, 4, 4]] I would like to turn these arrays into a dataframe using pandas. However, when I do the values become row values like this: As you can see each array within the overall array becomes its own row. I would like each array within the overall array to become its own column with a column name. Furthermore, in my use case, the number of arrays within the array is variable. There could be 4 arrays or 70 which means there could be 4 columns or 70. This is problematic when it comes to column names and I was wondering if there was anyway to auto increment column names in python. Check out my attempt below and let me know how I can solve this. My desired outcome is simply to make each array within the overall array into its own column instead of row and to have titles for the column that increment with each additional array/column. Thank you so much. Need help. Please respond! frame = [[0, 1, 0], [0, 0, 0], [1, 3, 3], [2, 4, 4]] numpy_data= np.array(frame) df = pd.DataFrame(data=numpy_data, columns=["column1", "column2", "column3"]) print(frame) print(df)
A possible solution could be transposing and renaming the columns after transforming the numpy array into a dataframe. Here is the code: import numpy as np import pandas as pd frame = [[0, 1, 0], [0, 0, 0], [1, 3, 3], [2, 4, 4]] numpy_data= np.array(frame) #transposing later df = pd.DataFrame(data=numpy_data).T #creating a list of columns using list comprehension without specifying number of columns df.columns = [f'mycol{i}' for i in range(0,len(df.T))] print(df) Output: mycol0 mycol1 mycol2 mycol3 0 0 0 1 2 1 1 0 3 4 2 0 0 3 4 Same code for 11 columns: import numpy as np import pandas as pd frame = [[0, 1, 0], [0, 0, 0], [1, 3, 3], [2, 4, 4], [5, 2, 2], [6,7,8], [8,9,19] , [10,2,4], [2,6,5], [10,2,5], [11,2,9]] numpy_data= np.array(frame) df = pd.DataFrame(data=numpy_data).T df.columns = [f'mycol{i}' for i in range(0,len(df.T))] print(df) mycol0 mycol1 mycol2 mycol3 mycol4 mycol5 mycol6 mycol7 mycol8 mycol9 mycol10 0 0 0 1 2 5 6 8 10 2 10 11 1 1 0 3 4 2 7 9 2 6 2 2 2 0 0 3 4 2 8 19 4 5 5 9
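A slightly shorter variant of the same idea, if you prefer letting pandas generate the incrementing column names itself, is add_prefix on the transposed frame; a minimal sketch:

```python
import numpy as np
import pandas as pd

frame = [[0, 1, 0], [0, 0, 0], [1, 3, 3], [2, 4, 4]]

# Transpose so each inner list becomes a column, then let add_prefix
# turn the default integer column labels 0..n-1 into mycol0..mycol(n-1).
df = pd.DataFrame(np.array(frame).T).add_prefix('mycol')
print(df)
```

This works for any number of inner arrays without computing the column count by hand.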
8
5
64,076,149
2020-9-26
https://stackoverflow.com/questions/64076149/plotly-how-to-create-an-odd-number-of-subplots
I want the 5th subplot to be in the centre of the two columns in the third row. (I have tried doing that by adding the domain argument). Here is the code to reproduce it- import pandas as pd import plotly.graph_objects as go from plotly.subplots import make_subplots continent_df = pd.read_csv('https://raw.githubusercontent.com/vyaduvanshi/helper-files/master/continent.csv') temp_cont_df = pd.pivot_table(continent_df, index='continent', aggfunc='last').reset_index() fig = make_subplots(rows=3, cols=2, specs=[[{'type':'pie'},{'type':'pie'}],[{'type':'pie'},{'type':'pie'}], [{'type':'pie'},{'type':'pie'}]]) fig.add_pie(labels=continent_df.continent, values=continent_df.new_cases, row=1,col=1) fig.add_pie(labels=continent_df.continent, values=continent_df.new_deaths, row=1,col=2) fig.add_pie(labels=continent_df.continent, values=continent_df.new_recovered, row=2,col=1) fig.add_pie(labels=continent_df.continent, values=continent_df.new_tests, row=2,col=2) fig.add_pie(labels=temp_cont_df.continent, values=temp_cont_df.active_cases, row=3,col=1,domain={'x':[0.25,0.75],'y':[0,0.33]}) If I do not include the 6th plot in the specs argument, it throws an error.
You can achieve this through a correct setup of domain. Here's an example that will have a figure in each of the four corners and one figure in the middle. Plot Complete code: import plotly import plotly.offline as py import plotly.graph_objs as go labels = ['Oxygen','Hydrogen','Carbon_Dioxide','Nitrogen'] values = [4500,2500,1053,500] domains = [ {'x': [0.0, 0.33], 'y': [0.0, 0.33]}, {'x': [0.33, 0.66], 'y': [0.33, 0.66]}, {'x': [0.0, 0.33], 'y': [0.66, 1.0]}, {'x': [0.66, 1.00], 'y': [0.0, 0.33]}, {'x': [0.66, 1.0], 'y': [0.66, 1.00]}, ] traces = [] for domain in domains: trace = go.Pie(labels = labels, values = values, domain = domain, hoverinfo = 'label+percent+name') traces.append(trace) layout = go.Layout(height = 600, width = 600, autosize = False, title = 'Main title') fig = go.Figure(data = traces, layout = layout) #py.iplot(fig, show_link = False) fig.show()
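Depending on your plotly version, it may also be possible to keep make_subplots and let the last pie span both columns of the third row via a colspan spec, which centres it without computing domains by hand; a sketch under that assumption:

```python
import plotly.graph_objects as go
from plotly.subplots import make_subplots

fig = make_subplots(
    rows=3, cols=2,
    specs=[[{'type': 'pie'}, {'type': 'pie'}],
           [{'type': 'pie'}, {'type': 'pie'}],
           # the single cell in row 3 spans both columns, so its trace sits centred
           [{'type': 'pie', 'colspan': 2}, None]])

labels = ['a', 'b', 'c']   # illustrative data
values = [1, 2, 3]
for row, col in [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1)]:
    fig.add_pie(labels=labels, values=values, row=row, col=col)
fig.show()
```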
7
6
64,079,000
2020-9-26
https://stackoverflow.com/questions/64079000/groupby-in-reverse
I have a pandas dataframe with the names of variables, the values for each, and a count (which shows the frequency of that row): df = pd.DataFrame({'var':['A', 'B', 'C'], 'value':[10, 20, 30], 'count':[1,2,3]}) var value count A 10 1 B 20 2 C 30 3 I want to use count to get an output like this: var value A 10 B 20 B 20 C 30 C 30 C 30 What is the best way to do that?
You can use index.repeat: i = df.index.repeat(df['count']) d = df.loc[i, :'value'].reset_index(drop=True) var value 0 A 10 1 B 20 2 B 20 3 C 30 4 C 30 5 C 30
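The same idea can be written as a single chain that also drops the count column, so the result matches the desired output exactly; a small sketch:

```python
import pandas as pd

df = pd.DataFrame({'var': ['A', 'B', 'C'], 'value': [10, 20, 30], 'count': [1, 2, 3]})

# Repeat each row 'count' times, then drop the helper column.
out = (df.loc[df.index.repeat(df['count'])]
         .drop(columns='count')
         .reset_index(drop=True))
print(out)
```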
8
7
64,074,217
2020-9-26
https://stackoverflow.com/questions/64074217/finplot-as-a-widget-in-layout
I am trying to add finplot, https://pypi.org/project/finplot/, as a widget to one of the layouts in my UI. I created a widget for finplot and added it to the widgets in the layout but I get the following error: self.tab1.layout.addWidget(self.tab1.fplt_widget) TypeError: addWidget(self, QWidget, stretch: int = 0, alignment: Union[Qt.Alignment, Qt.AlignmentFlag] = Qt.Alignment()): argument 1 has unexpected type 'PlotItem' Here is the code that I used. import sys from PyQt5.QtWidgets import * import finplot as fplt import yfinance # Creating the main window class App(QMainWindow): def __init__(self): super().__init__() self.title = 'PyQt5 - QTabWidget' self.left = 0 self.top = 0 self.width = 600 self.height = 400 self.setWindowTitle(self.title) self.setGeometry(self.left, self.top, self.width, self.height) self.tab_widget = MyTabWidget(self) self.setCentralWidget(self.tab_widget) self.show() # Creating tab widgets class MyTabWidget(QWidget): def __init__(self, parent): super(QWidget, self).__init__(parent) self.layout = QVBoxLayout(self) # Initialize tab screen self.tabs = QTabWidget() self.tabs.setTabPosition(QTabWidget.East) self.tabs.setMovable(True) self.tab1 = QWidget() self.tabs.resize(600, 400) # Add tabs self.tabs.addTab(self.tab1, "tab1") self.tab1.layout = QVBoxLayout(self) self.tab1.label = QLabel("AAPL") self.tab1.df = yfinance.download('AAPL') self.tab1.fplt_widget = fplt.create_plot_widget(self.window()) fplt.candlestick_ochl(self.tab1.df[['Open', 'Close', 'High', 'Low']]) self.tab1.layout.addWidget(self.tab1.label) self.tab1.layout.addWidget(self.tab1.fplt_widget) self.tab1.setLayout(self.tab1.layout) # Add tabs to widget self.layout.addWidget(self.tabs) self.setLayout(self.layout) if __name__ == '__main__': app = QApplication(sys.argv) ex = App() sys.exit(app.exec_())
The create_plot_widget() function creates a PlotItem, which cannot be added to a layout; the solution is to wrap it in a QWidget that can display the PlotItem's content, such as a PlotWidget: import pyqtgraph as pg # ... self.tab1.df = yfinance.download("AAPL") self.tab1.fplt_widget = pg.PlotWidget( plotItem=fplt.create_plot_widget(self.window()) ) fplt.candlestick_ochl(self.tab1.df[["Open", "Close", "High", "Low"]])
9
6
64,067,519
2020-9-25
https://stackoverflow.com/questions/64067519/how-to-create-a-min-max-lineplot-by-month
I have retail beef ad counts time series data, and I intend to make stacked line chart aim to show On a three-week average basis, quantity of average ads that grocers posted per store last week. To do so, I managed to aggregate data for plotting and tried to make line chart that I want. The main motivation is based on context of the problem and desired plot. In my attempt, I couldn't get very nice line chart because it is not informative to understand. I am wondering how can I achieve this goal in matplotlib. Can anyone suggest me what should I do from my current attempt? Any thoughts? reproducible data and current attempt Here is minimal reproducible data that I used in my current attempt: import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates import seaborn as sns from datetime import timedelta, datetime url = 'https://gist.githubusercontent.com/adamFlyn/96e68902d8f71ad62a4d3cda135507ad/raw/4761264cbd55c81cf003a4219fea6a24740d7ce9/df.csv' df = pd.read_csv(url, parse_dates=['date']) df.drop(columns=['Unnamed: 0'], inplace=True) df_grp = df.groupby(['date', 'retail_item']).agg({'number_of_ads': 'sum'}) df_grp["percentage"] = df_grp.groupby(level=0).apply(lambda x:100 * x / float(x.sum())) df_grp = df_grp.reset_index(level=[0,1]) for item in df_grp['retail_item'].unique(): dd = df_grp[df_grp['retail_item'] == item].groupby(['date', 'percentage'])[['number_of_ads']].sum().reset_index(level=[0,1]) dd['weakly_change'] = dd[['percentage']].rolling(7).mean() fig, ax = plt.subplots(figsize=(8, 6), dpi=144) sns.lineplot(dd.index, 'weakly_change', data=dd, ax=ax) ax.set_xlim(dd.index.min(), dd.index.max()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y')) plt.gcf().autofmt_xdate() plt.style.use('ggplot') plt.xticks(rotation=90) plt.show() Current Result but I couldn't get correct line chart that I expected, I want to reproduce the plot from this site. Is that doable to achieve this? Any idea? desired plot here is the example desired plot that I want to make from this minimal reproducible data: I don't know how should make changes for my current attempt to get my desired plot above. Can anyone know any possible way of doing this in matplotlib? what else should I do? Any possible help would be appreciated. Thanks
Also see How to create a min-max plot by month with fill_between? See in-line comments for details import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import calendar ################################################################# # setup from question url = 'https://gist.githubusercontent.com/adamFlyn/96e68902d8f71ad62a4d3cda135507ad/raw/4761264cbd55c81cf003a4219fea6a24740d7ce9/df.csv' df = pd.read_csv(url, parse_dates=['date']) df.drop(columns=['Unnamed: 0'], inplace=True) df_grp = df.groupby(['date', 'retail_item']).agg({'number_of_ads': 'sum'}) df_grp["percentage"] = df_grp.groupby(level=0).apply(lambda x:100 * x / float(x.sum())) df_grp = df_grp.reset_index(level=[0,1]) ################################################################# # create a month map from long to abbreviated calendar names month_map = dict(zip(calendar.month_name[1:], calendar.month_abbr[1:])) # update the month column name df_grp['month'] = df_grp.date.dt.month_name().map(month_map) # set month as categorical so they are plotted in the correct order df_grp.month = pd.Categorical(df_grp.month, categories=month_map.values(), ordered=True) # use groupby to aggregate min mean and max dfmm = df_grp.groupby(['retail_item', 'month'])['percentage'].agg([max, min, 'mean']).stack().reset_index(level=[2]).rename(columns={'level_2': 'mm', 0: 'vals'}).reset_index() # create a palette map for line colors cmap = {'min': 'k', 'max': 'k', 'mean': 'b'} # iterate through each retail item and plot the corresponding data for g, d in dfmm.groupby('retail_item'): plt.figure(figsize=(7, 4)) sns.lineplot(x='month', y='vals', hue='mm', data=d, palette=cmap) # select only min or max data for fill_between y1 = d[d.mm == 'max'] y2 = d[d.mm == 'min'] plt.fill_between(x=y1.month, y1=y1.vals, y2=y2.vals, color='gainsboro') # add lines for specific years for year in [2016, 2018, 2020]: data = df_grp[(df_grp.date.dt.year == year) & (df_grp.retail_item == g)] sns.lineplot(x='month', y='percentage', ci=None, data=data, label=year) plt.ylim(0, 100) plt.margins(0, 0) plt.legend(bbox_to_anchor=(1., 1), loc='upper left') plt.ylabel('Percentage of Ads') plt.title(g) plt.show()
6
6
64,070,651
2020-9-25
https://stackoverflow.com/questions/64070651/argparse-optional-argument-between-positional-arguments
I want to emulate the behavior of most command-line utilities, where optional arguments can be put anywhere in the command line, including between positional arguments, such as in this mkdir example: mkdir before --mode 077 after In this case, we know that --mode takes exactly 1 argument, so before and after are both considered positional arguments. The optional part, --mode 077, can really be put anywhere in the command line. However, with argparse, the following code does not work with this example: # mkdir.py import argparse parser = argparse.ArgumentParser() parser.add_argument('--mode', nargs=1) parser.add_argument('dirs', nargs='*') args = parser.parse_args() Running ./mkdir.py before --mode 077 after results in: mkdir.py: error: unrecognized arguments: after How can I get argparse to accept an optional argument (with a fixed, known number of items) between positional ones?
Starting from Python 3.7, it seems argparse now supports this kind of Unix-style parsing: Intermixed parsing ArgumentParser.parse_intermixed_args(args=None, namespace=None) A number of Unix commands allow the user to intermix optional arguments with positional arguments. The parse_intermixed_args() and parse_known_intermixed_args() methods support this parsing style. There is a caveat, but for "simple" options, it does not affect them: These parsers do not support all the argparse features, and will raise exceptions if unsupported features are used. In particular, subparsers, argparse.REMAINDER, and mutually exclusive groups that include both optionals and positionals are not supported. (I posted this FAQ-style question after spending 1 hour trying to understand why the examples in the Python argparse documentation didn't seem to include it, and only by chance found a somewhat unrelated question which contained the mention to this "intermixed" function in a comment, which I am unable to find again to cite it properly.)
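Applied to the mkdir-style parser from the question, a minimal sketch (Python 3.7+) looks like this:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--mode', nargs=1)
parser.add_argument('dirs', nargs='*')

# parse_args() rejects the intermixed form; parse_intermixed_args() accepts it.
args = parser.parse_intermixed_args(['before', '--mode', '077', 'after'])
print(args.mode)  # ['077']
print(args.dirs)  # ['before', 'after']
```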
7
6
64,070,128
2020-9-25
https://stackoverflow.com/questions/64070128/zen-of-python-explicit-is-better-than-implicit
I'm trying to understand what 'implicit' and 'explicit' really means in the context of Python. a = [] # my understanding is that this is implicit if not a: print("list is empty") # my understanding is that this is explicit if len(a) == 0: print("list is empty") I'm trying to follow the Zen of Python rules, but I'm curious to know if this applies in this situation or if I am over-thinking it?
The two statements have very different semantics. Remember that Python is dynamically typed. For the case where a = [], both not a and len(a) == 0 are equivalent. A valid alternative might be to check not len(a). In some cases, you may even want to check for both emptiness and listness by doing a == []. But a can be anything. For example, a = None. The check not a is fine, and will return True. But len(a) == 0 will not be fine at all. Instead you will get TypeError: object of type 'NoneType' has no len(). This is a totally valid option, but the if statements do very different things and you have to pick which one you want. (Almost) everything has a __bool__ method in Python, but not everything has __len__. You have to decide which one to use based on the situation. Things to consider are: Have you already verified whether a is a sequence? Do you need to? Do you mind if your if statement crashed on non-sequences? Do you want to handle other falsy objects as if they were empty lists? Remember that making the code look pretty takes second place to getting the job done correctly.
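A short demonstration of the difference described above, using the None case from the answer:

```python
a = None

if not a:           # fine: None is falsy, so this branch runs
    print("treated as empty")

if len(a) == 0:     # raises TypeError: object of type 'NoneType' has no len()
    print("never reached")
```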
14
15
64,068,659
2020-9-25
https://stackoverflow.com/questions/64068659/bar-chart-in-matplotlib-using-a-colormap
I have a df with two columns: y: different numeric values for the y axis days: the names of four different days (Monday, Tuesday, Wednesday, Thursday) I also have a colormap with four different colors that I made myself and it's a ListedColorMap object. I want to create a bar chart with the four categories (days of the week) in the x axis and their corresponding values in the y axis. At the same time, I want each bar to have a different color using my colormap. This is the code I used to build my bar chart: def my_barchart(my_df, my_cmap): fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.bar(my_df['days'], my_df['y'], color=my_cmap) return fig However, I get the following error: "object of type 'ListedColormap' has no len()", so it seems that I'm not using my_cmap correctly. If I remove that from the function and run it, my bar chart looks ok, except that all bars have the same color. So my question is: what is the right way to use a colormap with a bar chart?
Okay, I found a way to do this without having to scale my values: def my_barchart(my_df, my_cmap): fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.bar(my_df['days'], my_df['y'], color=my_cmap.colors) return fig Simply adding .colors after my_cmap works!
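A more general variant, in case you ever switch away from a ListedColormap: every matplotlib colormap is a callable that maps floats in [0, 1] to RGBA values, so you can sample it once per bar. A sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

def my_barchart(my_df, my_cmap):
    fig, ax = plt.subplots()
    # Sample the colormap once per bar; works for a custom ListedColormap
    # and for built-in maps such as plt.cm.viridis alike.
    bar_colors = my_cmap(np.linspace(0, 1, len(my_df)))
    ax.bar(my_df['days'], my_df['y'], color=bar_colors)
    return fig
```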
22
15
64,061,721
2020-9-25
https://stackoverflow.com/questions/64061721/opencv-to-close-the-window-on-a-specific-key
It seems really simple, but I can't get it to work and I couldn't find any questions regarding this particular issue (if there are, please point them out in the comments). I am showing an image and want the window to close on a specific key, but strangely, any key causes it to close. This is my simple code for testing: img = cv2.imread("MyImage.png") cv2.imshow('My Image', img) k = cv2.waitKey(0) & 0xFF print(k) if k == 27: # close on ESC key cv2.destroyAllWindows() (based on what is said here) No matter what key I press, the key code is shown (27 for ESC, 32 for SPACE, ...) and the window closes. The main problem: the if clause is never reached (I checked by putting the print(k) inside it, and nothing is printed). After the key press, the program simply stops running and it doesn't get to checking the key code. (I am on macOS Catalina, with Python 3.8) So, how do I actually make it wait for a specific key?
From my point of view, your program just terminates, and thus all windows are implicitly closed, regardless of which key you press. One idea might be to put a while True loop around the reading and checking of the pressed key: import cv2 img = cv2.imread('path/to/your/image.png') cv2.imshow('My Image', img) while True: k = cv2.waitKey(0) & 0xFF print(k) if k == 27: cv2.destroyAllWindows() break Running this, pressing some keys, and finally ESC, I get the following output: 103 100 102 27 Also, all windows are closed, and the program is terminated. ---------------------------------------- System information ---------------------------------------- Platform: Windows-10-10.0.16299-SP0 Python: 3.8.5 OpenCV: 4.4.0 ----------------------------------------
9
8
64,061,426
2020-9-25
https://stackoverflow.com/questions/64061426/is-there-a-command-to-exit-a-module-when-imported-like-return-for-a-function
When you import a module in python, the module code is "run". Sometimes it is useful to have branching logic in the module such as checking package versions or platform or whatever. Is there a way to exit the entire module execution before hitting the end of the file, something equivalent to early return in a function? If the module is run as a script you can exit() but that actively raises an exception and kills the whole thing. I just want to say, that's it you are done now, don't run any more code below here. Basically can I transform this if not <condition>: MY_CONSTANT = 3.14 class blah(): ... def foo(x): ... # rest of module.... into if <condition>: return from module MY_CONSTANT = 3.14 class blah(): ... def foo(x): ... # rest of module.... mostly so that I don't have to have lots of code that looks strangely one extra indent level in.
You can create a custom Loader that special-cases e.g. ImportError (1) as a shortcut to stop module execution. This can be registered via a custom Finder at sys.meta_path. So if you have the following module to be imported: # foo.py x = 1 raise ImportError # stop module execution here y = 2 You can use the following finder/loader to import that module. It will be executed until the point where it hits the raise ImportError. import importlib class Loader(importlib.machinery.SourceFileLoader): def exec_module(self, module): try: super().exec_module(module) except ImportError: # the module chose to stop executing pass class Finder(importlib.machinery.PathFinder): @classmethod def find_spec(cls, fullname, path=None, target=None): spec = super().find_spec(fullname, path, target) if spec is not None: spec.loader = Loader(spec.name, spec.origin) # register the custom loader return spec import sys sys.meta_path.insert(2, Finder()) # from now on the custom finder will be queried for imports import foo print(foo.x) # prints 1 print(foo.y) # raises AttributeError (1) Using ImportError to indicate the shortcut obviously has its downsides, such as if your module tries to import something else which doesn't exist, this won't be reported as an error but the module just stops executing. So it's better to use some custom exception instead. I'm just using ImportError for the sake of the example.
9
2
64,057,445
2020-9-25
https://stackoverflow.com/questions/64057445/fastapi-post-does-not-recognize-my-parameter
I am usually using Tornado, and trying to migrate to FastAPI. Let's say, I have a very basic API as follows: @app.post("/add_data") async def add_data(data): return data When I am running the following Curl request: curl http://127.0.0.1:8000/add_data -d 'data=Hello' I am getting the following error: {"detail":[{"loc":["query","data"],"msg":"field required","type":"value_error.missing"}]} So I am sure I am missing something very basic, but I do not know what that might be.
Since you are sending string data, you have to specify that in the router function with a typed model: from pydantic import BaseModel class Payload(BaseModel): data: str = "" @app.post("/add_data") async def add_data(payload: Payload = None): return payload An example cURL request would then be: curl -X POST "http://0.0.0.0:6022/add_data" -d '{"data":"Hello"}'
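If you would rather keep the original curl call (-d 'data=Hello', which sends a form-encoded body) unchanged, a minimal sketch using FastAPI's Form parameters could look like the following; it assumes the python-multipart package is installed, which FastAPI needs for form parsing:

```python
from fastapi import FastAPI, Form

app = FastAPI()

@app.post("/add_data")
async def add_data(data: str = Form(...)):
    # 'data' is read from the form field of the request body, so
    # `curl http://127.0.0.1:8000/add_data -d 'data=Hello'` works as-is.
    return {"data": data}
```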
13
11
64,055,314
2020-9-24
https://stackoverflow.com/questions/64055314/why-cant-pythons-walrus-operator-be-used-to-set-instance-attributes
I just learned that the new walrus operator (:=) can't be used to set instance attributes, it's supposedly invalid syntax (raises a SyntaxError). Why is this? (And can you provide a link to official docs mentioning this?) I looked through PEP 572, and couldn't find if/where this is documented. Research This answer mentions this limitation without an explanation or source: you can't use the walrus operator on object attributes Sample Code class Foo: def __init__(self): self.foo: int = 0 def bar(self, value: int) -> None: self.spam(self.foo := value) # Invalid syntax def baz(self, value: int) -> None: self.spam(temp := value) self.foo = temp def spam(self, value: int) -> None: """Do something with value.""" Trying to import Foo results in a SyntaxError: self.spam(self.foo := value) ^ SyntaxError: cannot use assignment expressions with attribute
PEP 572 describes the purpose of this (emphasis mine): This is a proposal for creating a way to assign to variables within an expression using the notation NAME := expr. self.foo isn't a variable, it's an attribute of an object. The Syntax and semantics section specifies it further: NAME is an identifier. self.foo isn't an identifier, it's two identifiers separated by the . operator. While we often use variables and attributes similarly, and sometimes will sloppily refer to self.foo as a variable, they aren't the same. Assigning to self.foo is actually just a shorthand for setattr(self, 'foo', temp) This is what allows you to define getters and setters for attributes. It would complicate the specification and implementation of the assignment expression if it had to work with attributes that have customized setters. For instance, if the setter transforms the value that's being assigned, should the value of the assignment expression be the original value or the transformed value? Variables, on the other hand, cannot be customized. Assigning to a variable always has the same, simple semantics, and it's easy for the expression to evaluate to the value that was assigned. Similarly, you can't use the walrus operator with slice assignment. This isn't valid: foo1[2:4] := foo2[1:3]
17
23
64,053,954
2020-9-24
https://stackoverflow.com/questions/64053954/pad-rows-on-a-pandas-dataframe-with-zeros-till-n-count
Iam loading data via pandas read_csv like so: data = pd.read_csv(file_name_item, sep=" ", header=None, usecols=[0,1,2]) which looks like so: 0 1 2 0 257 503 48 1 167 258 39 2 172 242 39 3 172 403 81 4 180 228 39 5 183 394 255 6 192 179 15 7 192 347 234 8 192 380 243 9 192 437 135 10 211 358 234 I would like to pad this data with zeros till a row count of 256, meaning: 0 1 2 0 157 303 48 1 167 258 39 2 172 242 39 3 172 403 81 4 180 228 39 5 183 394 255 6 192 179 15 7 192 347 234 8 192 380 243 9 192 437 135 10 211 358 234 11 0 0 0 .. .. .. .. 256 0 0 0 How do I go about doing this? The file could have anything from 1 row to 200 odd rows and I am looking for something generic which pads this dataframe with 0's till 256 rows. I am quite new to pandas and could not find any function to do this.
reindex with fill_value df_final = data.reindex(range(257), fill_value=0) Out[1845]: 0 1 2 0 257 503 48 1 167 258 39 2 172 242 39 3 172 403 81 4 180 228 39 .. ... ... .. 252 0 0 0 253 0 0 0 254 0 0 0 255 0 0 0 256 0 0 0 [257 rows x 3 columns]
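One caveat worth noting: reindex pads like this only when the existing index is already 0..n-1 (which it is straight after read_csv); if the frame has been filtered first, reset the index before padding. A sketch:

```python
# Reset the index first so reindex appends zero-filled rows instead of
# reordering or dropping rows whose labels are no longer 0..n-1.
df_final = data.reset_index(drop=True).reindex(range(257), fill_value=0)
```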
6
8
64,032,271
2020-9-23
https://stackoverflow.com/questions/64032271/handling-accept-cookies-popup-with-selenium-in-python
I've been trying to scrape some information of this real estate website with selenium. However when I access the website I need to accept cookies to continue. This only happens when the bot accesses the website, not when I do it manually. When I try to find the corresponding element either by xpath or id, as I find it when I inspect the page manually I get following error. selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="uc-btn-accept-banner"]"} from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC PATH = "/usr/bin/chromedriver" driver = webdriver.Chrome(PATH) driver.get("https://www.immoweb.be/en/search/house/for-sale?countries=BE&page=1&orderBy=relevance") driver.find_element_by_xpath('//*[@id="uc-btn-accept-banner"]').click() Does anyone know how to get around this? Why can't I find the element? Below is an image of the accept cookies popup. This is the HTML corresponding to the button "Keep browsing". The XPATH is as above. <button aria-label="Keep browsing" id="uc-btn-accept-banner" class="uc-btn-new uc-btn-accept">Keep browsing <span id="uc-optin-timer-display"></span></button>
You were very close! If you open your page in a new browser you'll note the page fully loads, then, a moment later your popup appears. The default wait strategy in selenium is just that the page is loaded. That draw delay between page loaded and display appearing is causing your scripts to fail. You have two good synchronisation options. 1/ Include an implicit wait for your driver. This is done once per script and affects all objects. This waits 10 seconds before throwing any error, or continues when it's ready: PATH = "/usr/bin/chromedriver" driver = webdriver.Chrome(PATH) driver.implicitly_wait(10) driver.get("https://www.immoweb.be/en/search/house/for-sale?countries=BE&page=1&orderBy=relevance") driver.find_element_by_xpath('//*[@id="uc-btn-accept-banner"]').click() 2/ Do a explicit wait on your object only: PATH = "/usr/bin/chromedriver" driver = webdriver.Chrome(PATH) driver.get("https://www.immoweb.be/en/search/house/for-sale?countries=BE&page=1&orderBy=relevance") WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH,'//*[@id="uc-btn-accept-banner"]'))).click() More info on the wait strategies is here
30
21
64,048,813
2020-9-24
https://stackoverflow.com/questions/64048813/what-shebang-should-i-use-to-consistently-point-to-python3
I have a script which uses the shebang #!/usr/bin/env python. It works great on machines where Python 3 is the only version available, but on the machines which have both Python 2 and Python 3, it runs the script with Python 2. If I modify the shebang to be #!/usr/bin/env python3, it would work on the machines with Python 2 and Python 3, but on the machines which have only Python 3, it would fail with β€œNo such file or directory” error. One solution is to create an alias alias python=python3. Are there other solutions to have a same shebang working uniformly on every machine?
Unfortunately, there is no universal way of doing this that would work across any and all up-front-unknown Linux hosts, and you are largely left at the mercy of distro maintainers and local host configuration. alias won't help, because the interpreter specified by #! is handled by the kernel, and the /usr/bin/env it execs in this case does not know about your shell's aliases. When using env, you could make sure that the name following env is first found and means what you want it to mean by: making sure all hosts are set up the same way in that respect (expected packages are installed or at least symlinks are created) having a user-specific construct for executing your script, such as: mkdir /tmp/bin ln -s /usr/bin/python /tmp/bin/python3 PATH="/tmp/bin:${PATH}" ./myscript.py But none of this is really great, nor ultimately what you've asked for. Your interpreter could also be a simple shell script, packed with your Python code, that tries to figure it out (this is harder than it may sound, though: the kernel's interpreter resolution is quite simple, so you have to decide where to put such a script and how to reference it for the kernel to find and use it), but any option you look at isn't really great, I am afraid. There is PEP 394 for that, which suggested/expected that on a U*X-like system: you get python for Python 2 and python3 for Python 3. But it recognizes this has never been applied entirely consistently... and is also not as useful in 2020: However, these recommendations implicitly assumed that Python 2 would always be available. As Python 2 is nearing its end of life in 2020 (PEP 373, PEP 404), distributions are making Python 2 optional or removing it entirely. This means either removing the python command or switching it to invoke Python 3. Some distributors also decided that their users were better served by ignoring the PEP's original recommendations, and provided system administrators with the freedom to configure their systems based on the needs of their particular environment. TL;DR: unfortunately there is no way that universally works and compensates for the decisions of various distro and even individual host maintainers. :( I would most likely opt to stick with #!/usr/bin/env python3 (which has so far been the recommended naming) and add a README that explains the prerequisites and how to set up the host just to be sure. For the sake of completeness I should add that the PEP does make a recommendation in this regard: set up and use a virtual environment, or use a (third-party) environment manager. However, the way I read the question, "a portable interpreter specification that does not make any assumptions about, nor pose any (additional) requirements on, the target host configuration", this would not fit the bill and would not be a substantial improvement over saying: make sure you have a python3 executable in the search path and create a symlink if not.
10
6
64,038,673
2020-9-24
https://stackoverflow.com/questions/64038673/could-not-build-wheels-for-which-use-pep-517-and-cannot-be-installed-directly
I am trying to install a package which uses PEP 517. The newest version of Pip won't allow me to install it due to an error involving building wheels for PEP 517. In the past, I've solved this issue by downgrading Pip, installing the package and upgrading Pip back to the latest version. However, after I downgrade pip in my virtualenv, if I try to run pip install black I get the error No module named 'pip._internal.cli.main' How can I solve this?
The easiest solution to deal with the error "Could not build wheels for ____ which use PEP 517 and cannot be installed directly" is the following: sudo pip3 install _____ --no-binary :all: Where ____ is obviously the name of the library you want to install.
67
26
64,020,570
2020-9-23
https://stackoverflow.com/questions/64020570/why-redis-zset-means-sorted-set
When I was studying Redis for my database, I learned that 'Zset' means 'Sorted Set'. What does 'Zset' actually stand for? I couldn't figure out why it also means 'Sorted Set'. It could be a simple or too-broad question, but I want to understand exactly what I learned.
A similar question was asked before on Redis's GitHub page, and the creator of Redis answered it: Hello. Z is as in XYZ, so the idea is, sets with another dimension: the order. It's a far association... I know :) Set commands start with s Hash commands start with h List commands start with l Sorted set commands start with z Stream commands start with x Hyperloglog commands start with pf
14
27
64,019,287
2020-9-23
https://stackoverflow.com/questions/64019287/why-doesnt-small-integer-caching-seem-to-work-with-int-objects-from-the-round
Can you please explain why this happens in Python v3.8? a=round(2.3) b=round(2.4) print(a,b) print(type(a),type(b)) print(a is b) print(id(a)) print(id(b)) Output: 2 2 <class 'int'> <class 'int'> False 2406701496848 2406701496656 >>> 2 is within the range of the small integer caching. So why are there different objects with the same value?
Looks like in 3.8, PyLong_FromDouble (which is what float.__round__ ultimately delegates to) explicitly allocates a new PyLong object and fills it in manually, without normalizing it (via the IS_SMALL_INT check and get_small_int cache lookup function), so it doesn't check the small int cache to resolve to the canonical value. This will change in 3.9 as a result of issue 37986: Improve perfomance of PyLong_FromDouble(), which now has it delegate to PyLong_FromLong when the double is small enough to be losslessly represented as a C long. By side-effect, this will use the small int cache, as PyLong_FromLong reliably normalizes for small values.
8
9
63,988,804
2020-9-21
https://stackoverflow.com/questions/63988804/how-to-infer-frequency-from-an-index-where-a-few-observations-are-missing
Using pd.date_range like dr = pd.date_range('2020', freq='15min', periods=n_obs) will produce this DateTimeIndex with a 15 minute interval or frequency: DatetimeIndex(['2020-01-01 00:00:00', '2020-01-01 00:15:00', '2020-01-01 00:30:00', '2020-01-01 00:45:00', '2020-01-01 01:00:00'], dtype='datetime64[ns]', freq='15T') You can use this to set up a pandas dataframe like: import pandas as pd import numpy as np # data np.random.seed(10) n_obs = 10 daterange = pd.date_range('2020', freq='15min', periods=n_obs) values = np.random.uniform(low=-1, high=1, size=n_obs).tolist() df = pd.DataFrame({'time':daterange, 'value':values}) df = df.set_index('time') And now you can use pd.infer_freq(df.index) to retreive the frequency '15T' again for further calculations. Taking a closer look at help(pd.infer_freq()) lets us know that pd.infer_freq will: Infer the most likely frequency given the input index. If the frequency is uncertain, a warning will be printed. My understanding of this would be that it would be possible to retrieve '15T' if a few observations were missing, leading to an irregular time index. But when I remove a few of the observations using: dropped = df.index[[1,3]] df = df.drop(dropped) Then pd.infer_freq(df.index) returns None. This also happens if we set n_obs = 100. So it would seem that I was hoping for a bit too much when I thought that [...] infer the most likely frequency [...] meant that pd.infer_freq() could infer that this was in fact an index with a 15 minute frequency with only a few missing values. Is there any other approach I could use to programmatically infer an index frequency from a somewhat irregular time series using pandas?
You could compute the minimum time difference of values in the index (here min_delta), try to find 3 consecutive values in the index, each with this minimum time difference between them, and then call infer_freq on these consecutive values of the index: diffs = (df.index[1:] - df.index[:-1]) min_delta = diffs.min() mask = (diffs == min_delta)[:-1] & (diffs[:-1] == diffs[1:]) pos = np.where(mask)[0][0] idx = df.index print(pd.infer_freq(idx[pos: pos + 3])) This retrieves "15T".
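A rougher heuristic, assuming the true frequency is simply the most common spacing between consecutive timestamps, could also work; converting the resulting Timedelta to a frequency string via to_offset is an assumption about your pandas version accepting Timedelta input:

```python
import pandas as pd

# Most frequent gap between consecutive index values.
diffs = df.index.to_series().diff().dropna()
step = diffs.value_counts().idxmax()

# Convert the Timedelta into an offset string such as '15T'.
print(pd.tseries.frequencies.to_offset(step).freqstr)
```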
8
4
63,996,623
2020-9-21
https://stackoverflow.com/questions/63996623/no-module-named-ctypes
I'm trying to install pyautogui, but pip keeps throwing errors. How to fix it? I've tried installing libffi library. Here is some code: python3 -m pip install pyautogui Defaulting to user installation because normal site-packages is not writeable Collecting pyautogui Using cached PyAutoGUI-0.9.50.tar.gz (57 kB) ERROR: Command errored out with exit status 1: command: /usr/local/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-sxm4ewnq/pyautogui/setup.py'"'"'; __file__='"'"'/tmp/pip-install-sxm4ewnq/pyautogui/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-85ugzov6 cwd: /tmp/pip-install-sxm4ewnq/pyautogui/ Complete output (11 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 19, in <module> from setuptools.dist import Distribution File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 34, in <module> from setuptools import windows_support File "/usr/local/lib/python3.8/site-packages/setuptools/windows_support.py", line 2, in <module> import ctypes File "/usr/local/lib/python3.8/ctypes/__init__.py", line 7, in <module> from _ctypes import Union, Structure, Array ModuleNotFoundError: No module named '_ctypes' ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. That's from python REPL >>> sys.path ['', '/home/walenty/apps/Python-3.8.5/Modules/_ctypes', '/usr/local/lib/python38.zip', '/usr/local/lib/python3.8', '/usr/local/lib/python3.8/lib-dynload', '/home/walenty/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/site-packages'] >>> import _ctypes Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named '_ctypes'
Required Install foreign function interface headers sudo apt install libffi-dev Reinstall Python Substitute desired python version Ubuntu sudo add-apt-repository ppa:deadsnakes/ppa -y && sudo apt install --reinstall python3.9-distutils MacOS Use brew install python3.9 or port install python3.9 (I recommend port) Windows Use Microsoft Store Specify project python version Poetry poetry env use 3.9 Virtual envs virtualenv -p python3.9 myproject etc...
8
6
63,989,328
2020-9-21
https://stackoverflow.com/questions/63989328/can-i-combine-conv2d-and-leakyrelu-into-a-single-layer
The keras Conv2D layer does not come with an activation function itself. I am currently rebuilding the YOLOv1 model for practice. In the YOLOv1 model, there are several Conv2D layers followed by activations using the leaky relu function. Is there a way to combine from keras.layers import Conv2D, LeakyReLU ... def model(input): ... X = Conv2D(filters, kernel_size)(X) X = LeakyReLU(X) ... into a single line of code, like X = conv_with_leaky_relu(X)? I think it should be similar to def conv_with_leaky_relu(*args, **kwargs): X = Conv2D(*args, **kwargs)(X) X = LeakyReLU(X) return X but this of course doesn't work because it is undefined what X is. Any ideas?
You can just pass it as an activation: X = Conv2D(filters, kernel_size, activation=LeakyReLU())(X)
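If you prefer the helper-function form from the question, a small sketch (the function name and the alpha default are illustrative, not from the original answer):

```python
from tensorflow.keras.layers import Conv2D, LeakyReLU

def conv_with_leaky_relu(x, filters, kernel_size, alpha=0.1, **kwargs):
    """Apply a Conv2D layer followed by LeakyReLU to tensor x."""
    x = Conv2D(filters, kernel_size, **kwargs)(x)
    x = LeakyReLU(alpha=alpha)(x)
    return x

# usage inside the model definition:
# X = conv_with_leaky_relu(X, 64, (3, 3), padding='same')
```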
6
11
64,002,627
2020-9-22
https://stackoverflow.com/questions/64002627/python-tenacity-log-exception-on-retry
I'm using the tenacity package to retry a function. My retry decorator looks like this: @retry(wait=wait_exponential(multiplier=1/(2**5), max=60), after=after_log(logger, logging.INFO)) On exception I get a logging message like this: INFO:mymodule:Finished call to 'mymodule.MyClass.myfunction' after 0.001(s), this was the 1st time calling it. I want to log the actual exception (1-line format, not stack trace preferably) in addition to what is already logged. Can this be done with tenacity? Or do I just have to catch the exception, print, and re-raise?
You can set the before_sleep parameter. This callable receives the exc_info thrown by your code. Ref.: https://tenacity.readthedocs.io/en/latest/api.html#module-tenacity.before_sleep Example import logging from typing import Final from tenacity import ( after_log, before_sleep_log, retry, retry_if_exception_type, stop_after_attempt, wait_exponential, ) logger: Final = logging.getLogger(__name__) logging.basicConfig(level=logging.INFO) @retry( reraise=True, before_sleep=before_sleep_log(logger, logging.INFO), after=after_log(logger, logging.INFO), wait=wait_exponential(multiplier=1, min=1, max=8), retry=retry_if_exception_type(ZeroDivisionError), stop=stop_after_attempt(4), ) def never_gonna_give_you_up() -> None: 1 / 0 if __name__ == "__main__": never_gonna_give_you_up() Output INFO:__main__:Finished call to '__main__.never_gonna_give_you_up' after 0.000(s), this was the 1st time calling it. INFO:__main__:Retrying __main__.never_gonna_give_you_up in 1.0 seconds as it raised ZeroDivisionError: division by zero. INFO:__main__:Finished call to '__main__.never_gonna_give_you_up' after 1.015(s), this was the 2nd time calling it. INFO:__main__:Retrying __main__.never_gonna_give_you_up in 2.0 seconds as it raised ZeroDivisionError: division by zero. INFO:__main__:Finished call to '__main__.never_gonna_give_you_up' after 3.031(s), this was the 3rd time calling it. INFO:__main__:Retrying __main__.never_gonna_give_you_up in 4.0 seconds as it raised ZeroDivisionError: division by zero. INFO:__main__:Finished call to '__main__.never_gonna_give_you_up' after 7.031(s), this was the 4th time calling it. Traceback (most recent call last): File "C:\***.py", line 33, in <module> never_gonna_give_you_up() File "C:\Users\***\lib\site-packages\tenacity\__init__.py", line 324, in wrapped_f return self(f, *args, **kw) File "C:\Users\***\lib\site-packages\tenacity\__init__.py", line 404, in __call__ do = self.iter(retry_state=retry_state) File "C:\Users\***\lib\site-packages\tenacity\__init__.py", line 360, in iter raise retry_exc.reraise() File "C:\Users\***\lib\site-packages\tenacity\__init__.py", line 193, in reraise raise self.last_attempt.result() File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 439, in result return self.__get_result() File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 391, in __get_result raise self._exception File "C:\Users\***\lib\site-packages\tenacity\__init__.py", line 407, in __call__ result = fn(*args, **kwargs) File "C:\***.py", line 29, in never_gonna_give_you_up 1 / 0 ZeroDivisionError: division by zero These lines are printed by before_sleep_log: Retrying __main__.never_gonna_give_you_up in 1.0 seconds as it raised ZeroDivisionError: division by zero.
6
7
63,963,532
2020-9-18
https://stackoverflow.com/questions/63963532/how-to-copy-gym-environment
Info: I am using OpenAI Gym to create RL environments but need multiple copies of an environment for something I am doing. I do not want to do anything like [gym.make(...) for i in range(2)] to make a new environment. Question: Given one gym env what is the best way to make a copy of it so that you have 2 duplicate but disconnected envs? Here is an example: import gym env = gym.make("CartPole-v0") new_env = # NEED COPY OF ENV HERE env.reset() # Should not alter new_env
Astariul has an updated answer, which states: import copy env_2 = copy.deepcopy(env) For more info about copy.deepcopy, see the copy library documentation.
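A small usage sketch of that approach; it assumes the environment's internals are deep-copyable (pure-Python envs like CartPole generally are, while envs wrapping external simulators or open renderers may not be):

```python
import copy
import gym

env = gym.make("CartPole-v0")
env.reset()

new_env = copy.deepcopy(env)         # independent snapshot of the current state

env.step(env.action_space.sample())  # stepping env ...
# ... leaves new_env untouched; it still holds the state copied above.
```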
8
3
63,979,540
2020-9-20
https://stackoverflow.com/questions/63979540/python-how-to-filter-specific-warning
How can I filter a specific warning for a specific module in Python? MWE ERROR: cross_val_score(model, df_Xtrain,ytrain,cv=2,scoring='r2') /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_ridge.py:148: LinAlgWarning: Ill-conditioned matrix (rcond=3.275e-20): result may not be accurate. overwrite_a=True).T My attempt import warnings warnings.filterwarnings(action='ignore',module='sklearn') cross_val_score(model, df_Xtrain,ytrain,cv=2,scoring='r2') This works, but I assume it ignores all warnings, such as deprecation warnings. I would like to exclude only LinAlgWarning. Required Like import warnings warnings.filterwarnings(action='ignore', category='LinAlgWarning', module='sklearn') Is there a way like this to filter a specific warning?
You have to pass category as a WarningClass not in a String: from scipy.linalg import LinAlgWarning warnings.filterwarnings(action='ignore', category=LinAlgWarning, module='sklearn')
12
15
63,968,710
2020-9-19
https://stackoverflow.com/questions/63968710/python-ctypes-and-mutability
I noticed that passing Python objects to native code with ctypes can break mutability expectations. For example, if I have a C function like: int print_and_mutate(char *str) { str[0] = 'X'; return printf("%s\n", str); } and I call it like this: from ctypes import * lib = cdll.LoadLibrary("foo.so") s = b"asdf" lib.print_and_mutate(s) The value of s changed, and is now b"Xsdf". The Python docs say "You should be careful, however, not to pass them to functions expecting pointers to mutable memory.". Is this only because it breaks expectations of which types are immutable, or can something else break as a result? In other words, if I go in with the clear understanding that my original bytes object will change, even though normally bytes are immutable, is that OK or will I get some kind of nasty surprise later if I don't use create_string_buffer like I'm supposed to?
Python makes assumptions about immutable objects, so mutating them will definitely break things. Here's a concrete example: >>> import ctypes as c >>> x = b'abc' # immutable string >>> d = {x:123} # Used as key in dictionary (keys must be hashable/immutable) >>> d {b'abc': 123} Now build a ctypes mutable buffer to the immutable object. id(x) in CPython is the memory address of the Python object and sys.getsizeof() returns the size of that object. PyBytes objects have some overhead, but the end of the object has the bytes of the string. >>> sys.getsizeof(x) 36 >>> px=(c.c_char*36).from_address(id(x)) >>> px.raw b'\x02\x00\x00\x00\x00\x00\x00\x000\x8fq\x0b\xfc\x7f\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\xf0\x06\xe61\xeb\x00\x1b\xa9abc\x00' >>> px.raw[-4:] # last bytes of the object b'abc\x00' >>> px[-4] b'a' >>> px[-4] = b'y' # Mutate the ctypes buffer, mutating the "immutable" string >>> x # Now it has a modified value. b'ybc' Now try to access the key in the dictionary. Keys are located in O(1) time using its hash, but the hash was on the original, "immutable" value so it is incorrect. The key can no longer be found by old or new value: >>> d # Note that dictionary key changed, too. {b'ybc': 123} >>> d[b'ybc'] # Try to access the key Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: b'ybc' >>> d[b'abc'] # Maybe original key will work? It hashes same as the original... Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: b'abc'
7
5
63,995,578
2020-9-21
https://stackoverflow.com/questions/63995578/change-colour-of-colorbar-in-python-matplotlib
I have a code that gives me a scatter plot of predicted vs actual values as a function of concentration. The data is pulled from an excel csv spreadsheet. This is the code: import matplotlib.pyplot as plt from numpy import loadtxt dataset = loadtxt("ColorPlot.csv", delimiter=',') x = dataset[:,0] y = dataset[:,1] z = dataset[:,2] scaled_z = (z - z.min()) / z.ptp() colors = plt.cm.viridis(scaled_z) sc=plt.scatter(x, y, c=colors) plt.clim(0, 100) plt.colorbar() plt.xlabel("Actual") plt.ylabel("Predicted") plt.show() And with this I get a nice graph: However if I change the color to something like colors = plt.cm.plasma(scaled_z) I get the graph below but the colorbar remains unchanged. I've tried lots of different things like cmap or edgecolors but I don't know how to change it. And I want to keep the code as simple as it currently is because I want to readily change the third variable of z based on my excel spreadsheet data. Is there also a way for the scale of the colorbar to pick up what the scale is from the excel spreadsheet without me manually specifying 0-100?
To get the right color bar, use the following code: colormap = plt.cm.get_cmap('plasma') # 'plasma' or 'viridis' colors = colormap(scaled_z) sc = plt.scatter(x, y, c=colors) sm = plt.cm.ScalarMappable(cmap=colormap) sm.set_clim(vmin=0, vmax=100) plt.colorbar(sm) plt.xlabel("Actual") plt.ylabel("Predicted") plt.show() For my random generated data I got the following plot: Now replace 'plasma' with 'viridis' and check the other variant.
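Regarding the last part of the question (not specifying 0-100 by hand): if the colour scale should simply follow the data, it is usually enough to pass the raw z values and the colormap name to scatter and let matplotlib normalise them; a sketch:

```python
sc = plt.scatter(x, y, c=z, cmap='plasma')  # raw z values, not pre-computed colors
plt.colorbar(sc)                            # colorbar scale is taken from z automatically
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.show()
```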
9
11
63,964,011
2020-9-18
https://stackoverflow.com/questions/63964011/precise-specification-of-await
The Python Language Reference specifies object.__await__ as follows: object.__await__(self) Must return an iterator. Should be used to implement awaitable objects. For instance, asyncio.Future implements this method to be compatible with the await expression. That's it. I find this specification very vague and not very specific (ironically). Ok, it should return an iterator, but can it be an arbitrary iterator? Obviously not: import asyncio class Spam: def __await__(self): yield from range(10) async def main(): await Spam() asyncio.run(main()) RuntimeError: Task got bad yield: 0 I'm assuming the asyncio event loop expects a specific kind of object being yielded by the iterator. Then what exactly should it yield? (And why isn't this documented?) Edit: as far as I can see, this isn't documented anywhere. But I've been investigating on my own, and I think that the key to understanding what objects the asyncio expects its coroutines to yield lies in task_step_impl in _asynciomodule.c. Update: I've made a PR to the cpython repository with the aim of clarifying this: "Clarify the vague specification of object.__await__". The PR has now been merged and should be available in the docs for Python 3.10+.
The language doesn't care which iterator you return. The error comes from a library, asyncio, which has specific ideas about the kind of values that must be produced by the iterator. Asyncio requires __await__ to produce asyncio futures (including their subtypes such as tasks) or None. Other libraries, like curio and trio, will expect different kinds of values. Async libraries by and large don't document their expectations from __await__ because they consider it an implementation detail. As far as asyncio is concerned, you're supposed to be using higher-level constructs, such as futures and tasks, and await those, in addition to coroutines. There is rarely a need to implement __await__ manually, and even then you should use it to delegate the signals of another awaitable. Writing an __await__ that creates and yields a fresh suspend-value of its own requires it to be coupled with the event loop and have knowledge of its internals. You can think of __await__ as a tool to write a library similar to asyncio. If you are the author of such a library, the current specification is sufficient because you can yield whatever you like from the iterator, only the code in your event loop will observe the yielded values. If you're not in that position, you probably have no need to implement __await__.
22
24
63,949,240
2020-9-18
https://stackoverflow.com/questions/63949240/python-global-variable-in-fastapi-not-working-as-normal
I have a simple FastAPI demo app which does one thing: it returns a different response JSON after calling a POST API named changeResponse. The changeResponse API just changes a global variable; another API returns a different response based on the same global variable. In my local environment it works correctly, but when I build this on Docker the response keeps changing even though I call changeResponse only once. The code is as follows: from typing import Optional from fastapi import FastAPI from util import read_json import enum app = FastAPI() type = "00" @app.post("/changeResponse") async def handle_change_download_response(param:Optional[str]): global type type = param print("type is "+type) return {"success":"true"} @app.post("/download") async def handle_download(param:Optional[str]): print("get download param: "+param) if legalDownload(param): print("type is "+type) return read_json.readDownloadSuccessRes(type) else: return read_json.readDownloadFailRes() def legalDownload(data:str)->bool: return True the dockerfile is as follows: FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 COPY ./app /app What I expect: call changeResponse with param 7, get the response for 7; call changeResponse with param 8, get the response for 8. What I get: call changeResponse with param 7, get the response for 7; call changeResponse with 8, and sometimes the response is 7, sometimes 8; impossible to predict.
tiangolo/uvicorn-gunicorn-fastapi is based on uvicorn-gunicorn-docker image, which by defaults creates multiple workers. Excerpt from gunicorn_conf.py: default_web_concurrency = workers_per_core * cores Thus, the described situation arises because the request is processed by different workers (processes). Each of which has its own copy of the global variable Update: If you want to change the count of workers, use the following environment variables: WORKERS_PER_CORE: It will set the number of workers to the number of CPU cores multiplied by this value. MAX_WORKERS: You can use it to let the image compute the number of workers automatically but making sure it's limited to a maximum. WEB_CONCURRENCY Override the automatic definition of number of workers. You can set it like: docker run -d -p 80:80 -e WEB_CONCURRENCY="2" myimage A more detailed description of these variables and examples here If you want to share data between workers, pay attention to this topic.
12
11
63,975,284
2020-9-20
https://stackoverflow.com/questions/63975284/mt5-metatrader-5-connect-to-different-mt5-terminals-using-python
I've got multiple python programs that connect to the MT5 terminal using the following code. # Establish connection to the MetaTrader 5 terminal if not mt5.initialize("C:\\Program Files\\ICMarkets - MetaTrader 5 - 01\\terminal64.exe"): print("initialize() failed, error code =", mt5.last_error()) quit() The python module for MT5 is the one here - https://www.mql5.com/en/docs/integration/python_metatrader5 The problem I have is that, when multiple programs connect to the same MT5 terminal.exe, the performance degrades and one or more python programs exit with errors. To overcome this, I have installed multiple copies of MT5 and have updated the python code so that different python programs use different installations of MT5. However, the first installation of MT5 is the only one that can be invoked by the python programs. Trying to use any other terminal.exe from the other installations raises an exception and the connection fails. There isn't much on the internet to troubleshoot this either. If anyone has ideas to tackle this or has solved the problem, I'd be keen to hear from you. The error is - initialize() failed, error code = (-10003, "IPC initialize failed, Process create failed 'C:\\Program Files\\ICMarkets - MetaTrader 5 - 02\terminal64.exe'") This could be something to do with Windows defaulting to the first installation, or something like that that you wouldn't even think about. Just thinking aloud here.
From my experience, imho, the MT5 python API was not designed to handle multiple simultaneous connections from the same machine. I have overcome this by creating virtual machines and running everything through them. I used Oracle VM because it's free and I had past experience with it, but it's not very good at sharing resources. If your machine is not very strong, you might want to look into some other solution. I heard Docker is good at sharing the host's resources.
6
3
64,010,263
2020-9-22
https://stackoverflow.com/questions/64010263/attributeerror-module-importlib-has-no-attribute-util
I've just upgraded from Fedora 32 to Fedora 33 (which comes with Python 3.9). Since then gcloud command stopped working: [guy@Gandalf32 ~]$ gcloud Error processing line 3 of /home/guy/.local/lib/python3.9/site-packages/XStatic-1.0.2-py3.9-nspkg.pth: Traceback (most recent call last): File "/usr/lib64/python3.9/site.py", line 169, in addpackage exec(line) File "<string>", line 1, in <module> File "<frozen importlib._bootstrap>", line 562, in module_from_spec AttributeError: 'NoneType' object has no attribute 'loader' Remainder of file ignored Traceback (most recent call last): File "/usr/lib64/google-cloud-sdk/lib/gcloud.py", line 104, in <module> main() File "/usr/lib64/google-cloud-sdk/lib/gcloud.py", line 62, in main from googlecloudsdk.core.util import encoding File "/usr/lib64/google-cloud-sdk/lib/googlecloudsdk/__init__.py", line 23, in <module> from googlecloudsdk.core.util import importing File "/usr/lib64/google-cloud-sdk/lib/googlecloudsdk/core/util/importing.py", line 23, in <module> import imp File "/usr/lib64/python3.9/imp.py", line 23, in <module> from importlib import util File "/usr/lib64/python3.9/importlib/util.py", line 2, in <module> from . import abc File "/usr/lib64/python3.9/importlib/abc.py", line 17, in <module> from typing import Protocol, runtime_checkable File "/usr/lib64/python3.9/typing.py", line 26, in <module> import re as stdlib_re # Avoid confusion with the re we export. File "/usr/lib64/python3.9/re.py", line 124, in <module> import enum File "/usr/lib64/google-cloud-sdk/lib/third_party/enum/__init__.py", line 26, in <module> spec = importlib.util.find_spec('enum') AttributeError: module 'importlib' has no attribute 'util'
Update from GCP support: GCP support mentioned that the new version 318.0.0, released on 2020.11.10, should support python 3.9. I updated my gcloud SDK to 318.0.0 and now it looks like python 3.9.0 is supported. To fix this issue run gcloud components update Fedora 33 includes python 2.7, and to force the GCloud SDK to use it, set this environment variable: export CLOUDSDK_PYTHON=python2 You can add this export command to your ~/.bash_profile Python 3.9 is very new, and it is expected that the Gcloud SDK does not support 3.9; it is written to be compatible with 2.7.x & 3.6 - 3.8 (3.8 can cause some compatibility issues, so I recommend 3.7). As a workaround, configure Python 3.8 or 3.7 (these versions work well for Gcloud and most linux distros) as the system-wide interpreter and try the gcloud commands again.
98
118
63,939,138
2020-9-17
https://stackoverflow.com/questions/63939138/is-there-a-way-to-use-python-3-9-type-hinting-in-its-previous-versions
In Python 3.9 we can use type hinting in a lowercase built-in fashion (without having to import type signatures from the typing module) as described here: def greet_all(names: list[str]) -> None: for name in names: print("Hello", name) I like this idea very much and I would like to know if it is possible to use this way of type hinting in previous versions of Python, such as Python 3.7, where we have to write type hints like this: from typing import List def greet_all(names: List[str]) -> None: for name in names: print("Hello", name)
Simply, import annotations from __future__ and you should be good to go. from __future__ import annotations import sys !$sys.executable -V #this is valid in iPython/Jupyter Notebook def greet_all(names: list[str]) -> None: for name in names: print("Hello", name) greet_all(['Adam','Eve']) Python 3.7.6 Hello Adam Hello Eve
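One caveat worth adding (my note, not part of the answer above): the future import only makes annotations lazy strings (PEP 563), so the built-in generics work in annotation position on 3.7/3.8, but not in ordinary runtime expressions:

from __future__ import annotations

def greet_all(names: list[str]) -> None:  # fine on 3.7+: the annotation is never evaluated
    for name in names:
        print("Hello", name)

greet_all(["Adam", "Eve"])

# Outside annotations the future import does not help; on 3.7/3.8 the next line
# would still raise TypeError: 'type' object is not subscriptable.
# StrList = list[str]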
15
15
63,972,113
2020-9-19
https://stackoverflow.com/questions/63972113/big-sur-clang-invalid-version-error-due-to-macosx-deployment-target
I assume that because Big Sur is brand new, hotfixes for the new OS have not yet landed. When attempting to install modules that use clang for compilation, the following error is thrown: clang: error: invalid version number in 'MACOSX_DEPLOYMENT_TARGET=11.0' Currently running: Mac OS Big Sur, 11.0 Beta Intel CPU (i386) Python 3.8.0 installed via pyenv Multiple modules have clang dependencies, so it seems this error is quite common. An example: pip install multidict Installing older versions of the Command Line Tools (e.g. 11.5) does not work either.
Figure out the issue on my end. Previously I had installed XCode from the App Store (11.7) and set its SDKs as my default: sudo xcode-select --switch /Applications/Xcode.app/ However, it seems this come with an unsupported version of clang: Ξ» clang --version Apple clang version 11.0.3 (clang-1103.0.32.62) Target: x86_64-apple-darwin20.1.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin Setting the xcode-select to the latest version via: sudo xcode-select --switch /Library/Developer/CommandLineTools EDIT (11/15/2020) You might receive an error when attempting the above change: xcode-select: error: invalid developer directory '/Library/Developer/CommandLineTools' To fix this, you must install the latest Command Line Tools from the official Apple website here. At the time of writting this edit, I installed the Command Line Tools for Xcode 12.3 beta. Changes clang to a working version: Ξ» clang --version Apple clang version 12.0.0 (clang-1200.0.32.2) Target: x86_64-apple-darwin20.1.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin The built-in Big Sur SDK is version 10.15, which seems to work without an issue: Ξ» ls /Library/Developer/CommandLineTools/SDKs MacOSX.sdk MacOSX10.15.sdk After the switch, multidict was installed successfully. Ξ» pip install multidict Collecting multidict Downloading multidict-4.7.6-cp38-cp38-macosx_10_14_x86_64.whl (48 kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 48 kB 589 kB/s Installing collected packages: multidict Successfully installed multidict-4.7.6 Further investigation seems to indicate this is a design choice by Apple (source): Therefore, ensuring your SDK is the default out-of-the-box as opposed to XCode's new SDK should be enough for the system to switch context when needed (and seems to work fine with pip+clang).
60
64
63,925,403
2020-9-16
https://stackoverflow.com/questions/63925403/custom-criterion-for-decisiontreeregressor-in-sklearn
I want to use a DecisionTreeRegressor for multi-output regression, but I want to use a different "importance" weight for each output (e.g. predicting y1 accurately is twice as important as predicting y2). Is there a way of including these weights directly in the DecisionTreeRegressor of sklearn? If not, how can I create a custom MSE criterion with different weights for each output in sklearn?
I am afraid you can only provide one weight-set when you fit: https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html#sklearn.tree.DecisionTreeRegressor.fit And the more disappointing thing is that, since only one weight-set is allowed, the algorithms in sklearn are all built around one weight-set. As for a custom criterion: there is a similar issue in scikit-learn: https://github.com/scikit-learn/scikit-learn/issues/17436 A potential solution is to create a criterion class mimicking the existing one (e.g. MAE) in https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_criterion.pyx#L976 However, if you look at the code in detail, you will find that all the variables about weights are "one weight-set", not specific to the individual outputs (tasks). So to customize, you may need to hack a lot of code, including: hacking the fit function to accept a 2D array of weights https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_classes.py#L142 and bypassing the checking (otherwise continue to hack...); modifying the tree builder to allow the weights https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_tree.pyx#L111 (this one is terrible: there are a lot of related variables, and you would have to change double to double*); and modifying the Criterion class to accept a 2-D array of weights https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_criterion.pyx#L976 (in init, reset and update, you have to keep attributes such as self.weighted_n_node_samples specific to the outputs/tasks). TBH, I think it is really difficult to implement. Maybe we need to raise an issue for the scikit-learn group.
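Not from the answer above, but a workaround sketch that avoids touching the Cython entirely: for the default variance-reduction (MSE) criterion, multiplying an output column by sqrt(w) scales its squared-error contribution by w, so per-output importance can be emulated by rescaling the targets before fitting and undoing the scaling on the predictions (the data below is purely illustrative):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = np.column_stack([X[:, 0], X[:, 1]]) + rng.normal(scale=0.1, size=(200, 2))

w = np.array([2.0, 1.0])                 # y1 twice as important as y2
tree = DecisionTreeRegressor(max_depth=4)
tree.fit(X, Y * np.sqrt(w))              # splits now weight y1 errors twice as much
Y_pred = tree.predict(X) / np.sqrt(w)    # map predictions back to the original scale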
6
5
63,933,790
2020-9-17
https://stackoverflow.com/questions/63933790/robust-algorithm-to-detect-uneven-illumination-in-images-detection-only-needed
One of the biggest challenges in tesseract OCR text recognition is the uneven illumination of images. I need an algorithm that can decide the image is containing uneven illuminations or not. Test Images I Attached the images of no illumination image, glare image( white-spotted image) and shadow containing image. If we give an image to the algorithm, the algorithm should divide into two class like No uneven illumination - our no illumination image will fall into this category. Uneven illumination - Our glare image( white-spotted image), shadow containing image will fall in this category. No Illumination Image - Category A UnEven Illumination Image (glare image( white-spotted image)) Category B Uneven Illumination Image (shadow containing an image) Category B Initial Approach Change colour space to HSV Histogram analysis of the value channel of HSV to identify the uneven illumination. Instead of the first two steps, we can use the perceived brightness channel instead of the value channel of HSV Set a low threshold value to get the number of pixels which are less than the low threshold Set a high threshold value to get the number of pixels which are higher than the high threshold percentage of low pixels values and percentage of high pixel values to detect uneven lightning condition (The setting threshold for percentage as well ) But I could not find big similarities between uneven illumination images. I just found there are some pixels that have low value and some pixels have high value with histogram analysis. Basically what I feel is if setting some threshold values in the low and to find how many pixels are less than the low threshold and setting some high threshold value to find how many pixels are greater than that threshold. with the pixels counts can we come to a conclusion to detect uneven lightning conditions in images? Here we need to finalize two threshold values and the percentage of the number of pixels to come to the conclusion. def show_hist_v(img_path): img = cv2.imread(img_path) hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) h,s,v = cv2.split(hsv_img) histr =cv2.calcHist(v, [0], None, [255],[0,255]) plt.plot(histr) plt.show() low_threshold =np.count_nonzero(v < 50) high_threshold =np.count_nonzero(v >200) total_pixels = img.shape[0]* img.shape[1] percenet_low =low_threshold/total_pixels*100 percenet_high =high_threshold/total_pixels*100 print("Total Pixels - {}\n Pixels More than 200 - {} \n Pixels Less than 50 - {} \n Pixels percentage more than 200 - {} \n Pixel spercentage less than 50 - {} \n".format(total_pixels,high_threshold,low_threshold,percenet_low,percenet_high)) return total_pixels,high_threshold,low_threshold,percenet_low,percenet_high So can someone improve my initial approach or give better than this approach to detect uneven illumination in images for general cases? 
Also, I tried perceived brightness instead of the value channel since the value channel takes the maximum of (b,g,r) values the perceive brightness is a good choice as I think def get_perceive_brightness( float_img): float_img = np.float64(float_img) # unit8 will make overflow b, g, r = cv2.split(float_img) float_brightness = np.sqrt( (0.241 * (r ** 2)) + (0.691 * (g ** 2)) + (0.068 * (b ** 2))) brightness_channel = np.uint8(np.absolute(float_brightness)) return brightness_channel def show_hist_v(img_path): img = cv2.imread(img_path) v = get_perceive_brightness(img) histr =cv2.calcHist(v, [0], None, [255],[0,255]) plt.plot(histr) plt.show() low_threshold =np.count_nonzero(v < 50) high_threshold =np.count_nonzero(v >200) total_pixels = img.shape[0]* img.shape[1] percenet_low =low_threshold/total_pixels*100 percenet_high =high_threshold/total_pixels*100 print("Total Pixels - {}\n Pixels More than 200 - {} \n Pixels Less than 50 - {} \n Pixels percentage more than 200 - {} \n Pixel spercentage less than 50 - {} \n".format(total_pixels,high_threshold,low_threshold,percenet_low,percenet_high)) return total_pixels,high_threshold,low_threshold,percenet_low,percenet_high Histogram analysis of perceived brightness channel As Ahmet suggested. def get_percentage_of_binary_pixels(img=None, img_path=None): if img is None: if img_path is not None: gray_img = cv2.imread(img_path, 0) else: return "No img or img_path" else: print(img.shape) if len(img.shape) > 2: gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) else: gray_img = img h, w = gray_img.shape guassian_blur = cv2.GaussianBlur(gray_img, (5, 5), 0) thresh_value, otsu_img = cv2.threshold(guassian_blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) cv2.imwrite("binary/{}".format(img_path.split('/')[-1]), otsu_img) black_pixels = np.count_nonzero(otsu_img == 0) # white_pixels = np.count_nonzero(otsu_img == 255) black_pixels_percentage = black_pixels / (h * w) * 100 # white_pixels_percentage = white_pixels / (h * w) * 100 return black_pixels_percentage when we get more than 35% of black_ pixels percentage with otsu binarization, we can detect the uneven illumination images around 80 percentage. When the illumination occurred in a small region of the image, the detection fails. Thanks in advance
I suggest using the division trick to separate text from the background, and then calculate statistics on the background only. After setting some reasonable thresholds it is easy to create classifier for the illumination. def get_image_stats(img_path, lbl): img = cv2.imread(img_path) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) blurred = cv2.GaussianBlur(gray, (25, 25), 0) no_text = gray * ((gray/blurred)>0.99) # select background only no_text[no_text<10] = no_text[no_text>20].mean() # convert black pixels to mean value no_bright = no_text.copy() no_bright[no_bright>220] = no_bright[no_bright<220].mean() # disregard bright pixels print(lbl) std = no_bright.std() print('STD:', std) bright = (no_text>220).sum() print('Brigth pixels:', bright) plt.figure() plt.hist(no_text.reshape(-1,1), 25) plt.title(lbl) if std>25: print("!!! Detected uneven illumination") if no_text.mean()<200 and bright>8000: print("!!! Detected glare") This results in: good_img STD: 11.264569863071165 Brigth pixels: 58 glare_img STD: 15.00149131296984 Brigth pixels: 15122 !!! Detected glare uneven_img STD: 57.99510339944441 Brigth pixels: 688 !!! Detected uneven illumination Now let's analyze the histograms and apply some common sense. We expect background to be even and have low variance, like it is the case in "good_img". If it has high variance, then its standard deviation would be high and it is the case of uneven brightness. On the lower image you can see 3 (smaller) peaks that are responsible for the 3 different illuminated areas. The largest peak in the middle is the result of setting all black pixels to the mean value. I believe it is safe to call images with STD above 25 as "uneven illumination" case. It is easy to spot a high amount of bright pixels when there is glare (see image on right). Glared image looks like a good image, besided the hot spot. Setting threshold of bright pixels to something like 8000 (1.5% of total image size) should be good to detect such images. There is a possibility that the background is very bright everywhere, so if the mean of no_text pixels is above 200, then it is the case and there is no need to detect hot spots.
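For completeness, a small usage sketch of the function above (the file names are hypothetical placeholders, and cv2, numpy and matplotlib.pyplot are assumed to be imported as in the question's code):

for path, lbl in [("good.jpg", "good_img"),
                  ("glare.jpg", "glare_img"),
                  ("uneven.jpg", "uneven_img")]:
    get_image_stats(path, lbl)
plt.show()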
21
5
63,950,888
2020-9-18
https://stackoverflow.com/questions/63950888/typeerror-failed-to-convert-object-of-type-sparsetensor-to-tensor
I am building a text classification model for imdb sentiment analysis dataset. I downloaded the dataset and followed the tutorial given here - https://developers.google.com/machine-learning/guides/text-classification/step-4 The error I get is TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("DeserializeSparse:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type. the type of x_train and x_val are scipy.sparse.csr.csr_matrix. This give an error when passed to sequential model. How to solve? import tensorflow as tf import numpy as np from tensorflow.python.keras.preprocessing import sequence from tensorflow.python.keras.preprocessing import text from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import f_classif # Vectorization parameters # Range (inclusive) of n-gram sizes for tokenizing text. NGRAM_RANGE = (1, 2) # Limit on the number of features. We use the top 20K features. TOP_K = 20000 # Whether text should be split into word or character n-grams. # One of 'word', 'char'. TOKEN_MODE = 'word' # Minimum document/corpus frequency below which a token will be discarded. MIN_DOCUMENT_FREQUENCY = 2 # Limit on the length of text sequences. Sequences longer than this # will be truncated. MAX_SEQUENCE_LENGTH = 500 def ngram_vectorize(train_texts, train_labels, val_texts): """Vectorizes texts as ngram vectors. 1 text = 1 tf-idf vector the length of vocabulary of uni-grams + bi-grams. # Arguments train_texts: list, training text strings. train_labels: np.ndarray, training labels. val_texts: list, validation text strings. # Returns x_train, x_val: vectorized training and validation texts """ # Create keyword arguments to pass to the 'tf-idf' vectorizer. kwargs = { 'ngram_range': NGRAM_RANGE, # Use 1-grams + 2-grams. 'dtype': 'int32', 'strip_accents': 'unicode', 'decode_error': 'replace', 'analyzer': TOKEN_MODE, # Split text into word tokens. 'min_df': MIN_DOCUMENT_FREQUENCY, } vectorizer = TfidfVectorizer(**kwargs) # Learn vocabulary from training texts and vectorize training texts. x_train = vectorizer.fit_transform(train_texts) # Vectorize validation texts. x_val = vectorizer.transform(val_texts) # Select top 'k' of the vectorized features. selector = SelectKBest(f_classif, k=min(TOP_K, x_train.shape[1])) selector.fit(x_train, train_labels) x_train = selector.transform(x_train) x_val = selector.transform(x_val) x_train = x_train.astype('float32') x_val = x_val.astype('float32') return x_train, x_val
There's a similar open issue that you can find here. The proposed solution is to use Tensorflow version 2.1.0 and Keras version 2.3.1.
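Besides pinning the versions, a workaround I have seen used (treat this as a sketch; it is not from the linked issue, and it assumes the model and label arrays from the rest of the guide) is to densify the SciPy sparse matrices before handing them to the Sequential model, which sidesteps the SparseTensor conversion at the cost of memory for the 20k features:

x_train, x_val = ngram_vectorize(train_texts, train_labels, val_texts)

# scipy.sparse.csr_matrix -> dense float32 numpy arrays that Keras accepts directly
x_train = x_train.toarray()
x_val = x_val.toarray()

model.fit(x_train, train_labels, epochs=10, batch_size=128,
          validation_data=(x_val, val_labels))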
6
-1
63,989,813
2020-9-21
https://stackoverflow.com/questions/63989813/setting-a-custom-directory-for-confuse-yaml-configuration-files
I'm trying to use this library to set up a YAML config file for a python project, but I don't want to use the suggested configuration directories, e.g. ~/.config/app or /etc/app on linux. I've tried setting the path using an environment variable as outlined in the documentation here. Does anybody have experience getting this to work using the environment variables? I can't really understand why the API doesn't just let you pass a file path; this approach seems unnecessarily complex to me. I suspect there is a good reason I just don't understand! I would have thought that in most cases the config file would be in your python project directory?
I'm experimenting with the library, and so far, in order to put a config.yaml file in the root folder of my script, I just did this: import confuse class MyConfiguration(confuse.Configuration): def config_dir(self): return './' config = MyConfiguration('SplitwiseToBuckets') print(config) Quite crude, I know, but for what I want so far it works! :D
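If subclassing feels too blunt, confuse also lets you layer an explicit file on top of its defaults with set_file; this is a sketch of mine rather than part of the answer, so check the behaviour against the version you have installed:

import confuse

config = confuse.Configuration('SplitwiseToBuckets', __name__)
config.set_file('./config.yaml')  # read the project-local file as the highest-priority source
print(config)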
6
2
64,014,568
2020-9-22
https://stackoverflow.com/questions/64014568/kivy-sounds-do-not-play-on-android-device-even-though-they-play-fine-on-laptop
I am trying to play a sound using Kivy. The sound plays perfectly and everything works perfectly on my laptop, but when I load the APK on my Android device, the sound does not play. I have manually allowed "storage permissions" on my android device, and in my buildozer.spec file I have included permissions to write and read external storage. I created a test file to try and debug to see what was going wrong but to no avail. I will share the details of this test file below. To start, here is the .py file: import kivy from kivy.app import App from kivy.uix.widget import Widget from kivy.graphics import Color from kivy.core.audio import SoundLoader __version__ = '0.1' class SoundTestApp(App): def build(self): self.load_kv('soundtest.kv') return SoundTestWidget() class SoundTestWidget(Widget): def playsound(self): sound = SoundLoader.load('testmusic.wav') sound.play() SoundTestApp().run() Next, here is the .kv file: <SoundTestWidget>: soundbutton: sound_button Button: id: sound_button size: (root.width,root.height) background_color: (1, 0, 0, 1) text: "press to play music" pos: self.pos on_release: self.text = "music is playing" root.playsound() Next, here is the Buildozer.spec file: [app] # (str) Title of your application title = SoundTest # (str) Package name package.name = soundtest # (str) Package domain (needed for android/ios packaging) package.domain = org.soundtest # (str) Source code where the main.py live source.dir = . # (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas # (list) List of inclusions using pattern matching #source.include_patterns = assets/*,images/*.png # (list) Source files to exclude (let empty to not exclude anything) #source.exclude_exts = spec # (list) List of directory to exclude (let empty to not exclude anything) #source.exclude_dirs = tests, bin # (list) List of exclusions using pattern matching #source.exclude_patterns = license,images/*/*.jpg # (str) Application versioning (method 1) version = 0.1 # (str) Application versioning (method 2) # version.regex = __version__ = ['"](.*)['"] # version.filename = %(source.dir)s/main.py # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy # (str) Custom source folders for requirements # Sets custom source for any requirements with recipes # requirements.source.kivy = ../../kivy # (list) Garden requirements #garden_requirements = # (str) Presplash of the application #presplash.filename = %(source.dir)s/data/presplash.png # (str) Icon of the application #icon.filename = %(source.dir)s/data/icon.png # (str) Supported orientation (one of landscape, sensorLandscape, portrait or all) orientation = landscape # (list) List of service to declare #services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY # # OSX Specific # # # author = Β© Copyright Info # change the major version of python used by the app osx.python_version = 3 # Kivy version to use osx.kivy_version = 1.9.1 # # Android specific # # (bool) Indicate if the application should be fullscreen or not fullscreen = 0 # (string) Presplash background color (for new android toolchain) # Supported formats are: #RRGGBB #AARRGGBB or one of the following names: # red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray, # darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy, # olive, purple, silver, teal. 
#android.presplash_color = #FFFFFF # (list) Permissions android.permissions = INTERNET, WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE # (int) Target Android API, should be as high as possible. #android.api = 27 # (int) Minimum API your APK will support. #android.minapi = 21 # (int) Android SDK version to use #android.sdk = 20 # (str) Android NDK version to use #android.ndk = 19b # (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi. #android.ndk_api = 21 # (bool) Use --private data storage (True) or --dir public storage (False) #android.private_storage = True # (str) Android NDK directory (if empty, it will be automatically downloaded.) #android.ndk_path = # (str) Android SDK directory (if empty, it will be automatically downloaded.) #android.sdk_path = # (str) ANT directory (if empty, it will be automatically downloaded.) #android.ant_path = # (bool) If True, then skip trying to update the Android sdk # This can be useful to avoid excess Internet downloads or save time # when an update is due and you just want to test/build your package # android.skip_update = False # (bool) If True, then automatically accept SDK license # agreements. This is intended for automation only. If set to False, # the default, you will be shown the license when first running # buildozer. # android.accept_sdk_license = False # (str) Android entry point, default is ok for Kivy-based app #android.entrypoint = org.renpy.android.PythonActivity # (str) Android app theme, default is ok for Kivy-based app # android.apptheme = "@android:style/Theme.NoTitleBar" # (list) Pattern to whitelist for the whole project #android.whitelist = # (str) Path to a custom whitelist file #android.whitelist_src = # (str) Path to a custom blacklist file #android.blacklist_src = # (list) List of Java .jar files to add to the libs so that pyjnius can access # their classes. Don't add jars that you do not need, since extra jars can slow # down the build process. Allows wildcards matching, for example: # OUYA-ODK/libs/*.jar #android.add_jars = foo.jar,bar.jar,path/to/more/*.jar # (list) List of Java files to add to the android project (can be java or a # directory containing the files) #android.add_src = # (list) Android AAR archives to add (currently works only with sdl2_gradle # bootstrap) #android.add_aars = # (list) Gradle dependencies to add (currently works only with sdl2_gradle # bootstrap) #android.gradle_dependencies = # (list) add java compile options # this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option # see https://developer.android.com/studio/write/java8-support for further information # android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8" # (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies} # please enclose in double quotes # e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }" #android.add_gradle_repositories = # (list) packaging options to add # see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html # can be necessary to solve conflicts in gradle_dependencies # please enclose in double quotes # e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'" #android.add_gradle_repositories = # (list) Java classes to add as activities to the manifest. 
#android.add_activities = com.example.ExampleActivity # (str) OUYA Console category. Should be one of GAME or APP # If you leave this blank, OUYA support will not be enabled #android.ouya.category = GAME # (str) Filename of OUYA Console icon. It must be a 732x412 png image. #android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png # (str) XML file to include as an intent filters in <activity> tag #android.manifest.intent_filters = # (str) launchMode to set for the main activity #android.manifest.launch_mode = standard # (list) Android additional libraries to copy into libs/armeabi #android.add_libs_armeabi = libs/android/*.so #android.add_libs_armeabi_v7a = libs/android-v7/*.so #android.add_libs_arm64_v8a = libs/android-v8/*.so #android.add_libs_x86 = libs/android-x86/*.so #android.add_libs_mips = libs/android-mips/*.so # (bool) Indicate whether the screen should stay on # Don't forget to add the WAKE_LOCK permission if you set this to True #android.wakelock = False # (list) Android application meta-data to set (key=value format) #android.meta_data = # (list) Android library project to add (will be added in the # project.properties automatically.) #android.library_references = # (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag #android.uses_library = # (str) Android logcat filters to use #android.logcat_filters = *:S python:D # (bool) Copy library instead of making a libpymodules.so #android.copy_libs = 1 # (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64 android.arch = armeabi-v7a # (int) overrides automatic versionCode computation (used in build.gradle) # this is not the same as app version and should only be edited if you know what you're doing # android.numeric_version = 1 # # Python for android (p4a) specific # # (str) python-for-android fork to use, defaults to upstream (kivy) #p4a.fork = kivy # (str) python-for-android branch to use, defaults to master #p4a.branch = master # (str) python-for-android git clone directory (if empty, it will be automatically cloned from github) #p4a.source_dir = # (str) The directory in which python-for-android should look for your own build recipes (if any) #p4a.local_recipes = # (str) Filename to the hook for p4a #p4a.hook = # (str) Bootstrap to use for android builds # p4a.bootstrap = sdl2 # (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask) #p4a.port = # # iOS specific # # (str) Path to a custom kivy-ios folder #ios.kivy_ios_dir = ../kivy-ios # Alternately, specify the URL and branch of a git checkout: ios.kivy_ios_url = https://github.com/kivy/kivy-ios ios.kivy_ios_branch = master # Another platform dependency: ios-deploy # Uncomment to use a custom checkout #ios.ios_deploy_dir = ../ios_deploy # Or specify URL and branch ios.ios_deploy_url = https://github.com/phonegap/ios-deploy ios.ios_deploy_branch = 1.7.0 # (str) Name of the certificate to use for signing the debug version # Get a list of available identities: buildozer ios list_identities #ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)" # (str) Name of the certificate to use for signing the release version #ios.codesign.release = %(ios.codesign.debug)s [buildozer] # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) log_level = 2 # (int) Display warning if buildozer is run as root (0 = False, 1 = True) warn_on_root = 1 # (str) Path to build artifact storage, absolute or relative to spec file # build_dir = 
./.buildozer # (str) Path to build output (i.e. .apk, .ipa) storage # bin_dir = ./bin # ----------------------------------------------------------------------------- # List as sections # # You can define all the "list" as [section:key]. # Each line will be considered as a option to the list. # Let's take [app] / source.exclude_patterns. # Instead of doing: # #[app] #source.exclude_patterns = license,data/audio/*.wav,data/images/original/* # # This can be translated into: # #[app:source.exclude_patterns] #license #data/audio/*.wav #data/images/original/* # # ----------------------------------------------------------------------------- # Profiles # # You can extend section / key with a profile # For example, you want to deploy a demo version of your application without # HD content. You could first change the title to add "(demo)" in the name # and extend the excluded directories to remove the HD content. # #[app@demo] #title = My Application (demo) # #[app:source.exclude_patterns@demo] #images/hd/* # # Then, invoke the command line with the "demo" profile: # #buildozer --profile demo android debug Finally, here are some extracts from the logcat, which may or may not be relevant (I do not know yet how to properly read a logcat): 09-22 22:24:52.558 6029 1471 E pageboostd: orgsoundtestsoundtest, amt 0 scnt 0 fcnt 0 09-22 22:24:52.559 4865 5176 E ActivityTaskManager: TouchDown intent received, starting ActiveLaunch 09-22 22:24:52.560 4865 5176 I ApplicationPolicy: isApplicationExternalStorageWhitelisted:org.soundtest.soundtest user:0 09-22 22:24:52.560 4865 5176 D ApplicationPolicy: isApplicationExternalStorageWhitelisted: DO is not enabled on user 0. Allowed. 09-22 22:24:52.560 4865 5176 D ActivityManager: package org.soundtest.soundtest, user - 0 is SDcard whitelisted 09-22 22:24:52.560 4865 5176 I ApplicationPolicy: isApplicationExternalStorageBlacklisted:org.soundtest.soundtest user:0 09-22 22:24:52.560 4865 5176 D ApplicationPolicy: isApplicationExternalStorageBlacklisted: DO is not enabled on user 0. Allowed. 09-22 22:24:52.561 4865 5176 I ApplicationPolicy: isApplicationExternalStorageBlacklisted:org.soundtest.soundtest user:0 09-22 22:24:52.561 4865 5176 D ApplicationPolicy: isApplicationExternalStorageBlacklisted: DO is not enabled on user 0. Allowed. 
09-22 22:24:52.561 4865 5176 D ActivityTaskManager: starting Active launch 09-22 22:19:24.770 4865 5191 I Pageboost: active launch for : org.soundtest.soundtest , 1 09-22 22:19:24.772 4865 5176 E ActivityTaskManager: TouchDown intent received, starting ActiveLaunch 09-22 22:19:24.772 4865 5176 D ActivityTaskManager: starting Active launch 09-22 22:19:24.837 4865 6989 D CustomFrequencyManagerService: acquireDVFSLockLocked : type : DVFS_MIN_LIMIT frequency : 2314000 uid : 1000 pid : 4865 pkgName : APP_LAUNCH@CPU_MIN@7 09-22 22:19:24.837 5574 5574 D ActivityOptions: makeRemoteAnimation, adapter=android.view.RemoteAnimationAdapter@5d434e1, caller=com.android.systemui.shared.system.ActivityOptionsCompat.makeRemoteAnimation:68 com.android.launcher3.QuickstepAppTransitionManagerImpl.getActivityLaunchOptions:350 com.android.launcher3.Launcher.getActivityLaunchOptions:3071 09-22 22:19:24.840 4865 6997 I ActivityTaskManager: START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=org.soundtest.soundtest/org.kivy.android.PythonActivity bnds=[975,120][1156,451]} from uid 10073 09-22 22:19:24.841 6953 6953 E SDHMS:ib.qa: e = /sys/class/lcd/panel/vrr: open failed: ENOENT (No such file or directory) 09-22 22:19:24.846 4865 6997 D CustomFrequencyManagerService: acquireDVFSLockLocked : type : DVFS_MIN_LIMIT frequency : 2314000 uid : 1000 pid : 4865 pkgName : AMS_APP_SWITCH@CPU_MIN@38 09-22 22:19:24.846 4865 6997 D ActivityManagerPerformance: AMP_acquire() APP_SWITCH 09-22 22:19:24.846 4865 6997 D ActivityTaskManager: MultiTaskingTaskLaunchParamsModifier:task=null display-from-source=0 display-id=0 display-windowing-mode=1 09-22 22:19:24.847 4865 6997 D ActivityTaskManager: MultiTaskingTaskLaunchParamsModifier:task=null display-from-source=0 display-id=0 display-windowing-mode=1 activity-options-fullscreen=Rect(0, 0 - 0, 0) non-freeform-display maximized-bounds 09-22 22:19:24.849 4865 6997 D ActivityTaskManager: MultiTaskingTaskLaunchParamsModifier:tid=340 display-from-task=0 display-id=0 display-windowing-mode=1 activity-options-fullscreen=Rect(0, 0 - 0, 0) non-freeform-display maximized-bounds 09-22 22:19:24.850 4865 6997 D ActivityTaskManager: updateMinimizedState: unknown notifyReason=2 09-22 22:19:24.855 4865 4906 D GameManagerService: MultiWindowEventListener.onFocusStackChanged(), state=0, top=ComponentInfo{org.soundtest.soundtest/org.kivy.android.PythonActivity} 09-22 22:19:24.855 4865 5173 D GameManagerService: handleForegroundChange(). pkgName: org.soundtest.soundtest, clsName: org.kivy.android.PythonActivity,FgActivityName:org.soundtest.soundtest/org.kivy.android.PythonActivity,userID:0 09-22 22:19:24.855 4865 5173 D GameManagerService: handleForegroundChange(). set mFgApp: org.soundtest.soundtest 09-22 22:19:24.855 4865 5173 D GameManagerService: notifyResumePause(). pkg: org.soundtest.soundtest, type: 4, isMinimized: false, isTunableApp: false 09-22 22:19:24.856 4865 5173 D GameManagerService: notifyResumePause(). do nothing. 
mKillNotiCount: 1 09-22 22:19:24.856 4865 4906 D GameSDKService: MultiWindowEventListener.onFocusStackChanged(), state=0, top=ComponentInfo{org.soundtest.soundtest/org.kivy.android.PythonActivity} 09-22 22:19:24.857 4865 4906 D GameSDKService: MultiWindowEventListener.onFocusStackChanged(): org.soundtest.soundtest 09-22 22:19:24.857 4865 4906 D MdnieScenarioControlService: MultiWindowState : false , mode : 0 09-22 22:19:24.858 4865 4865 I Pageboost: package org.soundtest.soundtest 09-22 22:19:24.859 4865 4865 I Pageboost: stop active launch The app does not crash on Android. But the sound does not play. What is the problem? EDIT: Thanks to the answers below, we realized that the problem is likely that I needed to add the requirement ffpyplayer. However, there is an issue that buildozer refuses to build when I add this requirement. Here is the log: [INFO]: -> running configure --disable-everything --enable-openssl --enable-nonfree --enable-protocol=https,tls_op...(and 891 more) Exception in thread background thread for pid 33867: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner self.run() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 1662, in wrap fn(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 2606, in background_thread handle_exit_code(exit_code) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 2304, in fn return self.command.handle_command_exit_code(exit_code) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 877, in handle_command_exit_code raise exc sh.ErrorReturnCode_1: RAN: /Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/ffmpeg/armeabi-v7a__ndk_target_21/ffmpeg/configure --disable-everything --enable-openssl --enable-nonfree --enable-protocol=https,tls_openssl --enable-parser=aac,ac3,h261,h264,mpegaudio,mpeg4video,mpegvideo,vc1 --enable-decoder=aac,h264,mpeg4,mpegvideo --enable-muxer=h264,mov,mp4,mpeg2video --enable-demuxer=aac,h264,m4v,mov,mpegvideo,vc1 --disable-symver --disable-programs --disable-doc --enable-filter=aresample,resample,crop,adelay,volume,scale --enable-protocol=file,http,hls --enable-small --enable-hwaccels --enable-gpl --enable-pic --disable-static --disable-debug --enable-shared --target-os=android --enable-cross-compile --cross-prefix=armv7a-linux-androideabi21- --arch=arm --strip=arm-linux-androideabi-strip --sysroot=/Users/maithreyasitaraman/.buildozer/android/platform/android-ndk-r19c/toolchains/llvm/prebuilt/linux-x86_64/sysroot --enable-neon --prefix=/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/ffmpeg/armeabi-v7a__ndk_target_21/ffmpeg STDOUT: tput: No value for $TERM and no -T specified tput: No value for $TERM and no -T specified armv7a-linux-androideabi21-clang is unable to create an executable file. C compiler test failed. If you think configure made a mistake, make sure you are using the latest version from Git. If the latest version fails, report the problem to the [email protected] mailing list or IRC #ffmpeg on irc.freenode.net. 
Include the log file "ffbuild/config.log" produced by configure as this will help solve the problem. STDERR: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1260, in <module> main() File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main ToolchainCL() File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 709, in __init__ getattr(self, command)(args) File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 154, in wrapper_func build_dist_from_args(ctx, dist, args) File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 216, in build_dist_from_args args, "ignore_setup_py", False File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 577, in build_recipes recipe.build_arch(arch) File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/ffmpeg/__init__.py", line 135, in build_arch shprint(configure, *flags, _env=env) File "/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint for line in output: File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 925, in next self.wait() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 849, in wait self.handle_command_exit_code(exit_code) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sh.py", line 877, in handle_command_exit_code raise exc sh.ErrorReturnCode_1: RAN: /Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/ffmpeg/armeabi-v7a__ndk_target_21/ffmpeg/configure --disable-everything --enable-openssl --enable-nonfree --enable-protocol=https,tls_openssl --enable-parser=aac,ac3,h261,h264,mpegaudio,mpeg4video,mpegvideo,vc1 --enable-decoder=aac,h264,mpeg4,mpegvideo --enable-muxer=h264,mov,mp4,mpeg2video --enable-demuxer=aac,h264,m4v,mov,mpegvideo,vc1 --disable-symver --disable-programs --disable-doc --enable-filter=aresample,resample,crop,adelay,volume,scale --enable-protocol=file,http,hls --enable-small --enable-hwaccels --enable-gpl --enable-pic --disable-static --disable-debug --enable-shared --target-os=android --enable-cross-compile --cross-prefix=armv7a-linux-androideabi21- --arch=arm --strip=arm-linux-androideabi-strip --sysroot=/Users/maithreyasitaraman/.buildozer/android/platform/android-ndk-r19c/toolchains/llvm/prebuilt/linux-x86_64/sysroot --enable-neon --prefix=/Users/maithreyasitaraman/Downloads/2_Sound_Test/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/ffmpeg/armeabi-v7a__ndk_target_21/ffmpeg STDOUT: tput: No value for $TERM and no -T specified tput: No value for $TERM and no -T 
specified armv7a-linux-androideabi21-clang is unable to create an executable file. C compiler test failed. If you think configure made a mistake, make sure you are using the latest version from Git. If the latest version fails, report the problem to the [email protected] mailing list or IRC #ffmpeg on irc.freenode.net. Include the log file "ffbuild/config.log" produced by configure as this will help solve the problem. STDERR:
I tried to build a project in Kivy for Android and faced a similar issue where the sound played fine on my laptop but not on the Android device. As I can see from your buildozer.spec file, it does not include the ffpyplayer requirement shown below. Try including it and doing a clean rebuild; hopefully that resolves your issue. requirements = python3,kivy,ffpyplayer
6
2
63,978,903
2020-9-20
https://stackoverflow.com/questions/63978903/python-import-path-for-sub-modules-if-put-in-namespace-package
I have a python modules written in C, it has a main module and a submodule(name with a dot, not sure this can be called real submodule): PyMODINIT_FUNC initsysipc(void) { PyObject *module = Py_InitModule3("sysipc", ...); ... init_sysipc_light(); } static PyTypeObject FooType = { ... }; PyMODINIT_FUNC init_sysipc_light(void) { PyObject *module = Py_InitModule3("sysipc.light", ...); ... PyType_Ready(&FooType); PyModule_AddObject(module, "FooType", &FooType); } The module is compiled as sysipc.so, and when I put it in current directory, following import works without problem: import sysipc import sysipc.light from sysipc.light import FooType The problem is I want to put this module inside a namespace package, the folder structure is like this: company/ company/__init__.py company/dept/ company/dept/__init__.py company/dept/sys/ company/dept/sys/__init__.py company/dept/sys/sysipc.so all the three __init__.py just includes the standard setuptool import line: __path__ = __import__('pkgutil').extend_path(__path__, __name__) in current directory, following imports does not work: from company.dept.sys import sysipc; from company.dept.sys.sysipc.light import FooType; How should I import the types and methods defined in module sysipc.light in this case? =================================== Update with the actual error: I have sysipc.so built, if I run python in current directory as this module, import will work as expected: [root@08649fea17ef 2]# python2 Python 2.7.18 (default, Jul 20 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sysipc >>> import sysipc.light >>> If however if I put it into a namespace folder, like this: company/ company/__init__.py company/dept company/dept/__init__.py company/dept/sys company/dept/sys/sysipc.so company/dept/sys/__init__.py import the submodule will not work: >>> from company.dept.sys import sysipc >>> from company.dept.sys import sysipc.light File "<stdin>", line 1 from company.dept.sys import sysipc.light ^ SyntaxError: invalid syntax >>> from company.dept.sys.sysipc import light Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name light >>> The module is built with this simple code, it is for python2. I also have same example for python3.
Quoting from https://www.python.org/dev/peps/pep-0489/#multiple-modules-in-one-library : To support multiple Python modules in one shared library, the library can export additional PyInit* symbols besides the one that corresponds to the library's filename. Note that this mechanism can currently only be used to load extra modules, but not to find them. (This is a limitation of the loader mechanism, which this PEP does not try to modify.) ... In other words, you need to restructure the project as follows for importlib to be able to find the submodule light in the sysipc package: company/__init__.py company/dept/__init__.py company/dept/sys/__init__.py company/dept/sys/sysipc/__init__.py company/dept/sys/sysipc/sysipc.so company/dept/sys/sysipc/light.so -> sysipc.so # hardlink The hardlink between light.so and sysipc.so can be created with: ln company/dept/sys/sysipc/sysipc.so company/dept/sys/sysipc/light.so Then in company/dept/sys/sysipc/__init__.py you import all symbols from sysipc.so using: from .sysipc import * In addition, you need to change the name of the submodule C extension init function from init_sysipc_light to init_light for Python2, or from PyInit_sysipc_light to PyInit_light for Python3, since importlib loads modules by looking for an exported PyInit_<module name> from the dynamic module and the module name here is only light, i.e., the parent package prefix is not part of the (sub)module name. Here is the extension code (Python3) and a couple of functions for testing: #include <Python.h> PyObject *sysipc_light_foo(PyObject *self, PyObject *args) { printf("[*] sysipc.light.foo\n"); return PyLong_FromLong(0); } static PyMethodDef sysipc_light_methods[] = { {"foo", (PyCFunction)sysipc_light_foo, METH_VARARGS, "sysipc.light.foo function"}, {NULL, NULL, 0, NULL} }; static struct PyModuleDef sysipc_light_module = { PyModuleDef_HEAD_INIT, "sysipc.light", "sysipc child module", -1, sysipc_light_methods }; PyMODINIT_FUNC PyInit_light(void) { PyObject *module = NULL; module = PyModule_Create(&sysipc_light_module); return module; } PyObject *sysipc_bar(PyObject *self, PyObject *args) { printf("[*] sysipc.bar\n"); return PyLong_FromLong(0); } static PyMethodDef sysipc_methods[] = { {"bar", (PyCFunction)sysipc_bar, METH_VARARGS, "sysipc.bar function"}, {NULL, NULL, 0, NULL} }; static struct PyModuleDef sysipc_module = { PyModuleDef_HEAD_INIT, "sysipc", "sysipc parent module", -1, sysipc_methods }; PyMODINIT_FUNC PyInit_sysipc(void) { PyObject *module = NULL; module = PyModule_Create(&sysipc_module); PyInit_light(); return module; } test.py: #!/usr/bin/env python3 from company.dept.sys import sysipc from company.dept.sys.sysipc import light sysipc.bar() light.foo() Output: [*] sysipc.bar [*] sysipc.light.foo
6
5
63,930,235
2020-9-17
https://stackoverflow.com/questions/63930235/how-to-find-multi-mode-of-an-array-column-in-pyspark
I want to find the mode of the task column in this dataframe: +-----+-----------------------------------------+ | id | task | +-----+-----------------------------------------+ | 101 | [person1, person1, person3] | | 102 | [person1, person2, person3] | | 103 | null | | 104 | [person1, person2] | | 105 | [person1, person1, person2, person2] | | 106 | null | +-----+-----------------------------------------+ If there are multiple modes, I want to display all of them. Can someone please help me get this output: +-----+-----------------------------------------+---------------------------+ | id | task | mode | +-----+-----------------------------------------+---------------------------+ | 101 | [person1, person1, person3] |[person1] | | 102 | [person1, person2, person3] |[person1, person2, person3]| | 103 | null |[] | | 104 | [person1, person2] |[person1, person2] | | 105 | [person1, person1, person2, person2] |[person1, person2] | | 106 | null |[] | +-----+-----------------------------------------+---------------------------+ This is my first question here. Any help or hint is greatly appreciated. Thank you.
Using Spark 2.3: You can solve this using a custom UDF. For the purposes of getting multiple mode values, I'm using a Counter. I use the except block in the UDF for the null cases in your task column. (For Python 3.8+ users, there is a statistics.multimode() in-built function you can make use of) Your dataframe: from pyspark.sql.types import * from pyspark.sql import functions as F from pyspark.sql.functions import * schema = StructType([StructField("id", IntegerType()), StructField("task", ArrayType(StringType()))]) data = [[101, ["person1", "person1", "person3"]], [102, ["person1", "person2", "person3"]], [103, None], [104, ["person1", "person2"]], [105, ["person1", "person1", "person2", "person2"]], [106, None]] df = spark.createDataFrame(data,schema=schema) Operation: from collections import Counter def get_multi_mode_list(input_array): multi_mode = [] counter_var = Counter(input_array) try: temp = counter_var.most_common(1)[0][1] except: temp = counter_var.most_common(1) for i in counter_var: if input_array.count(i) == temp: multi_mode.append(i) return(list(set(multi_mode))) get_multi_mode_list_udf = F.udf(get_multi_mode_list, ArrayType(StringType())) df.withColumn("multi_mode", get_multi_mode_list_udf(col("task"))).show(truncate=False) Output: +---+------------------------------------+---------------------------+ |id |task |multi_mode | +---+------------------------------------+---------------------------+ |101|[person1, person1, person3] |[person1] | |102|[person1, person2, person3] |[person2, person3, person1]| |103|null |[] | |104|[person1, person2] |[person2, person1] | |105|[person1, person1, person2, person2]|[person2, person1] | |106|null |[] | +---+------------------------------------+---------------------------+
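Since the answer mentions statistics.multimode() for Python 3.8+, here is a sketch of the same UDF built on it (the None/empty handling is my own assumption, mirroring the try/except above, and it reuses the imports and df from the answer's code):

from statistics import multimode

def get_multi_mode_list_38(input_array):
    if not input_array:              # covers null columns and empty arrays
        return []
    return list(set(multimode(input_array)))

get_multi_mode_list_38_udf = F.udf(get_multi_mode_list_38, ArrayType(StringType()))
df.withColumn("multi_mode", get_multi_mode_list_38_udf(col("task"))).show(truncate=False)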
6
0
63,927,188
2020-9-16
https://stackoverflow.com/questions/63927188/keras-custom-loss-function-per-tensor-group
I am writing a custom loss function that requires calculating ratios of predicted values per group. As a simplified example, here is what my Data and model code looks like: def main(): df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"], data=[[5, 10, "a", "1", 0], [30, 20, "a", "1", 1], [50, 40, "a", "1", 0], [15, 20, "a", "2", 0], [25, 30, "b", "2", 1], [35, 40, "b", "1", 0], [10, 80, "b", "1", 1]]) features = ["feature_1", "feature_2"] conds_and_label = ["condition_1", "condition_2", "label"] X = df[features] Y = df[conds_and_label] model = my_model(input_shape=len(features)) model.fit(X, Y, epochs=10, batch_size=128) model.evaluate(X, Y) def custom_loss(conditions, y_pred): # this is what I need help with conds = ["condition_1", "condition_2"] conditions["label_pred"] = y_pred g = conditions.groupby(by=conds, as_index=False).apply(lambda x: x["label_pred"].sum() / len(x)).reset_index(name="pred_ratio") # true_ratios will be a constant, external DataFrame. Simplified example here: true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"], data=[["a", "1", 0.1], ["a", "2", 0.2], ["b", "1", 0.8], ["b", "2", 0.9]]) merged = pd.merge(g, true_ratios, on=conds) merged["diff"] = merged["pred_ratio"] - merged["true_ratio"] return K.mean(K.abs(merged["diff"])) def joint_loss(conds_and_label, y_pred): y_true = conds_and_label[:, 2] conditions = tf.gather(conds_and_label, [0, 1], axis=1) loss_1 = standard_loss(y_true=y_true, y_pred=y_pred) # not shown loss_2 = custom_loss(conditions=conditions, y_pred=y_pred) return 0.5 * loss_1 + 0.5 * loss_2 def my_model(input_shape=None): model = Sequential() model.add(Dense(units=2, activation="relu"), input_shape=(input_shape,)) model.add(Dense(units=1, activation='sigmoid')) model.add(Flatten()) model.compile(loss=joint_loss, optimizer="Adam", metrics=[joint_loss, custom_loss, "accuracy"]) return model What I need help with is the custom_loss function. As you can see, it is currently written as if the inputs are Pandas DataFrames. However, the inputs will be Keras Tensors (with tensorflow backend), so I am trying to figure out how to convert the current code in custom_loss to use Keras/TF backend functions. For example, I searched online and couldn't find out a way to do a groupby in Keras/TF to get the ratios I need... Some context/explanation that might be helpful to you: My main loss function is joint_loss, which consists of standard_loss (not shown) and custom_loss. But I only need help converting custom_loss. What custom_loss does is: Groupby on two condition columns (these two columns represent the groups of the data). Get the ratio of predicted 1s to total number of batch samples per each group. Compare the "pred_ratio" to a set of "true_ratio" and get the difference. Calculate mean absolute error from the differences.
I ended up figuring out a solution to this, though I would like some feedback on it (specifically some parts). Here is the solution: import pandas as pd import tensorflow as tf import keras.backend as K from keras.models import Sequential from keras.layers import Dense, Flatten, Dropout from tensorflow.python.ops import gen_array_ops def main(): df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"], data=[[5, 10, "a", "1", 0], [30, 20, "a", "1", 1], [50, 40, "a", "1", 0], [15, 20, "a", "2", 0], [25, 30, "b", "2", 1], [35, 40, "b", "1", 0], [10, 80, "b", "1", 1]]) df = pd.concat([df] * 500) # making data artificially larger true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"], data=[["a", "1", 0.1], ["a", "2", 0.2], ["b", "1", 0.8], ["b", "2", 0.9]]) features = ["feature_1", "feature_2"] conditions = ["condition_1", "condition_2"] conds_ratios_label = conditions + ["true_ratio", "label"] df = pd.merge(df, true_ratios, on=conditions, how="left") X = df[features] Y = df[conds_ratios_label] # need to convert strings to ints because tensors can't mix strings with floats/ints mapping_1 = {"a": 1, "b": 2} mapping_2 = {"1": 1, "2": 2} Y.replace({"condition_1": mapping_1}, inplace=True) Y.replace({"condition_2": mapping_2}, inplace=True) X = tf.convert_to_tensor(X) Y = tf.convert_to_tensor(Y) model = my_model(input_shape=len(features)) model.fit(X, Y, epochs=1, batch_size=64) print() print(model.evaluate(X, Y)) def custom_loss(conditions, true_ratios, y_pred): y_pred = tf.sigmoid((y_pred - 0.5) * 1000) uniques, idx, count = gen_array_ops.unique_with_counts_v2(conditions, [0]) num_unique = tf.size(count) sums = tf.math.unsorted_segment_sum(data=y_pred, segment_ids=idx, num_segments=num_unique) lengths = tf.cast(count, tf.float32) pred_ratios = tf.divide(sums, lengths) mean_pred_ratios = tf.math.reduce_mean(pred_ratios) mean_true_ratios = tf.math.reduce_mean(true_ratios) diff = mean_pred_ratios - mean_true_ratios return K.mean(K.abs(diff)) def standard_loss(y_true, y_pred): return tf.losses.binary_crossentropy(y_true=y_true, y_pred=y_pred) def joint_loss(conds_ratios_label, y_pred): y_true = conds_ratios_label[:, 3] true_ratios = conds_ratios_label[:, 2] conditions = tf.gather(conds_ratios_label, [0, 1], axis=1) loss_1 = standard_loss(y_true=y_true, y_pred=y_pred) loss_2 = custom_loss(conditions=conditions, true_ratios=true_ratios, y_pred=y_pred) return 0.5 * loss_1 + 0.5 * loss_2 def my_model(input_shape=None): model = Sequential() model.add(Dropout(0, input_shape=(input_shape,))) model.add(Dense(units=2, activation="relu")) model.add(Dense(units=1, activation='sigmoid')) model.add(Flatten()) model.compile(loss=joint_loss, optimizer="Adam", metrics=[joint_loss, "accuracy"], # had to remove custom_loss because it takes 3 args now run_eagerly=True) return model if __name__ == '__main__': main() The main updates are to custom_loss. I removed creating the true_ratios DataFrame from custom_loss and instead appended it to my Y in main. Now custom_loss takes 3 arguments, one of which is the true_ratios tensor. I had to use gen_array_ops.unique_with_counts_v2 and unsorted_segment_sum to get sums per group of conditions. And then I got the lengths of each group in order to create pred_ratios (calculated ratios per group based on y_pred). Finally I get the mean predicted ratios and mean true ratios, and take the absolute difference to get my custom loss. 
Some things of note: Because the last layer of my model is a sigmoid, my y_pred values are probabilities between 0 and 1. So I needed to convert them to 0s and 1s in order to calculate the ratios I need in my custom loss. At first I tried using tf.round, but I realized that is not differentiable. So instead I replaced it with y_pred = tf.sigmoid((y_pred - 0.5) * 1000) inside of custom_loss. This essentially takes all the y_pred values to 0 and 1, but in a differentiable way. It seems like a bit of a "hack" though, so please let me know if you have any feedback on this. I noticed that my model only works if I use run_eagerly=True in model.compile(). Otherwise I get this error: "ValueError: Dimensions must be equal, but are 1 and 2 for ...". I'm not sure why this is the case, but the error originates from the line where I use tf.unsorted_segment_sum. unique_with_counts_v2 does not actually exist in tensorflow API yet, but it exists in the source code. I needed this to be able to group by multiple conditions (not just a single one). Feel free to comment if you have any feedback on this, in general, or in response to the bullets above.
6
4
63,955,581
2020-9-18
https://stackoverflow.com/questions/63955581/building-object-models-around-external-data
I want to integrate external data into a Django app. Let's say, for example, I want to work with GitHub issues as if they were formulated as normal models within Django. So underneath these objects, I use the GitHub API to retrieve and store data. In particular, I also want to be able to reference the GitHub issues from models but not the other way around. I.e., I don't intend to modify or extend the external data directly. The views would use this abstraction to fetch data, but also to follow the references from "normal objects" to properties of the external data. Simple joins would also be nice to have, but clearly there would be limitations. Are there any examples of how to achieve this in an idiomatic way? Ideally, this would also be split into a general part that describes the API, and a descriptive part for the classes, similar to how normal ORM classes are described.
The Django way in this case would be to write a custom "db" backend. This repo looks abandoned, but it can still give you some ideas.
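For illustration only, here is a rough sketch of a lighter-weight alternative (not a full custom database backend); all class, field and method names below are made up for the example, and only the GitHub issues endpoint is assumed:
import requests

class GitHubIssue:
    # read-only object standing in for a model instance
    def __init__(self, data):
        self.number = data["number"]
        self.title = data["title"]
        self.state = data["state"]

class GitHubIssueManager:
    # manager-style accessor that hits the GitHub API instead of the ORM
    def __init__(self, repo):  # repo given as "owner/name"
        self.repo = repo

    def all(self):
        resp = requests.get("https://api.github.com/repos/%s/issues" % self.repo)
        resp.raise_for_status()
        return [GitHubIssue(item) for item in resp.json()]
A real custom backend, by contrast, would implement Django's database-backend interface (DatabaseWrapper and friends) so that querysets keep working, which is considerably more involved.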
9
2
64,005,822
2020-9-22
https://stackoverflow.com/questions/64005822/how-to-specify-external-system-dependencies-to-a-python-package
When writing a Python package, I know how to specify other required Python packages in the setup.py file thanks to the field install_requires from setuptools.setup. However, I do not know how to specify external system dependencies that are NOT Python packages, i.e. commands such as git or cmake (examples) that my package could call via subprocess.call or subprocess.Popen. Do I have to manually check the availability of the commands in my setup.py file, or is there a fancy way to specify system requirements? Edit: I just want to be able to check if the external tools are available, and if not, invite the user to install them (by themselves). I do not want to manage the installation of external tools when installing the package. Summary of contributions: it seems that setuptools has no support for this, and it would be safer to do the check at runtime (cf. comments and answers).
My recommendation would be to check for the presence of those external dependencies not at install-time but at run-time. Either at the start of each run, or maybe at the first run. It's true that you could add this to your setup.py, but the setup.py is not always executed at install-time: for example if your project is packaged as a wheel then it doesn't even contain the setup.py file at all. And even if you do not distribute your project as a wheel, if I am not mistaken pip tends to build a wheel locally anyway and reuse it for the subsequent installations. So although it would be possible to do such checks as part of the setup script at install time (provided that you can guarantee the presence and execution of setup.py), I would say run-time is a safer bet.
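For illustration, a minimal run-time check could look like the sketch below; the tool names and the error message are placeholders for whatever your package actually shells out to, not something prescribed by setuptools:
import shutil
import sys

REQUIRED_TOOLS = ["git", "cmake"]  # hypothetical list of external commands your package calls

missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
if missing:
    sys.exit("Missing required external tools: " + ", ".join(missing) + ". Please install them and try again.")
This could run at import time or at the start of each invocation, as suggested above.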
8
6
64,014,746
2020-9-22
https://stackoverflow.com/questions/64014746/how-do-you-create-a-legend-for-kde-plot-in-seaborn
I have a kdeplot but I'm struggling to figure out how to create the legend. import matplotlib.patches as mpatches # see the tutorial for how we use mpatches to generate this figure! # Set 'is_workingday' to a boolean array that is true for all working_days is_workingday = daily_counts["workingday"] == "yes" is_not_workingday = daily_counts['workingday'] == "no" # Bivariate KDEs require two data inputs. # In this case, we will need the daily counts for casual and registered riders on workdays casual_workday = daily_counts.loc[is_workingday, 'casual'] registered_workday = daily_counts.loc[is_workingday, 'registered'] # Use sns.kdeplot on the two variables above to plot the bivariate KDE for weekday rides sns.kdeplot(casual_workday, registered_workday, color = "red", cmap = "Reds", hue = "workingday", legend = True) # Repeat the same steps above but for rows corresponding to non-workingdays casual_non_workday = daily_counts.loc[is_not_workingday, 'casual'] registered_non_workday = daily_counts.loc[is_not_workingday, 'registered'] # Use sns.kdeplot on the two variables above to plot the bivariate KDE for non-workingday rides sns.kdeplot(casual_non_workday, registered_non_workday, color = 'blue', cmap = "Blues", legend = True, shade = False) Gets me this: I'm trying to get this:
The other answer works nice when one single color is used per kdeplot. In case a colormap such as 'Reds' is used, this would show a very light red. A custom colormap can show a color from the middle of the range: from matplotlib import pyplot as plt import matplotlib.patches as mpatches import seaborn as sns import numpy as np casual_workday = np.random.randn(100) * 1.2 registered_workday = 0.8 * np.random.randn(100) + casual_workday * 0.2 + 1 sns.kdeplot(x=casual_workday, y=registered_workday, color="red", cmap="Reds", shade=False) casual_non_workday = np.random.randn(100) * 1.6 registered_non_workday = 0.5 * np.random.randn(100) + casual_non_workday * 0.5 - 1 sns.kdeplot(x=casual_non_workday, y=registered_non_workday, cmap="Blues", shade=False) handles = [mpatches.Patch(facecolor=plt.cm.Reds(100), label="Workday"), mpatches.Patch(facecolor=plt.cm.Blues(100), label="Non-workday")] plt.legend(handles=handles) plt.show()
10
10
64,016,590
2020-9-22
https://stackoverflow.com/questions/64016590/numpy-fortran-like-reshape
Let's say I have an array X of shape (6, 2) like this: import numpy as np X = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]]) I want to reshape it to an array of shape (3, 2, 2), so I did this: X.reshape(3, 2, 2) And got: array([[[ 1, 2], [ 3, 4]], [[ 5, 6], [ 7, 8]], [[ 9, 10], [11, 12]]]) However, I need my data in a different format. To be precise, I want to end up wth: array([[[ 1, 2], [ 7, 8]], [[ 3, 4], [ 9, 10]], [[ 5, 6], [11, 12]]]) Should I be using reshape for this or something else? What's the best way to do this in Numpy?
You have to set the order option: >>> X.reshape(3, 2, 2, order='F') array([[[ 1, 2], [ 7, 8]], [[ 3, 4], [ 9, 10]], [[ 5, 6], [11, 12]]]) β€˜F’ means to read / write the elements using Fortran-like index order, with the first index changing fastest, and the last index changing slowest. see: https://numpy.org/doc/stable/reference/generated/numpy.reshape.html
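For readers who prefer to stay in the default C order, an equivalent way to get the same result for this example is a reshape followed by a transpose:
X.reshape(2, 3, 2).transpose(1, 0, 2)  # same output as X.reshape(3, 2, 2, order='F')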
6
5
64,014,291
2020-9-22
https://stackoverflow.com/questions/64014291/pandas-dataframe-round-not-accepting-pd-na-or-pd-nan
pandas version: 1.2 I have a dataframe whose columns are 'float64' with null values represented as pd.NA. Is there a way to round without converting to string then decimal: df = pd.DataFrame([(.21, .3212), (.01, .61237), (.66123, .03), (.21, .18),(pd.NA, .18)], columns=['dogs', 'cats']) df dogs cats 0 0.21 0.32120 1 0.01 0.61237 2 0.66123 0.03000 3 0.21 0.18000 4 <NA> 0.18000 Here is what I wanted to do, but it is erroring: df['dogs'] = df['dogs'].round(2) TypeError: float() argument must be a string or a number, not 'NAType' Here is another way I tried but this silently fails and no conversion occurs: tn.round({'dogs': 1}) dogs cats 0 0.21 0.32120 1 0.01 0.61237 2 0.66123 0.03000 3 0.21 0.18000 4 <NA> 0.18000
df['dogs'] = df['dogs'].apply(lambda x: round(x,2) if str(x) != '<NA>' else x)
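A slightly more robust variant of the same approach (a sketch, assuming pandas is imported as pd) avoids comparing against the string representation and uses pd.isna instead, which recognises pd.NA, np.nan and None alike:
df['dogs'] = df['dogs'].apply(lambda x: x if pd.isna(x) else round(x, 2))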
6
2
63,920,237
2020-9-16
https://stackoverflow.com/questions/63920237/why-does-client-recv1024-return-an-empty-byte-literal-in-this-bare-bones-webso
I need a web socket client server exchange between Python and JavaScript on an air-gapped network, so I'm limited to what I can read and type up (believe me I'd love to be able to run pip install websockets). Here's a bare-bones RFC 6455 WebSocket client-server relationship between Python and JavaScript. Below the code, I'll pinpoint a specific issue with client.recv(1024) returning an empty byte literal, causing the WebSocket Server implementation to abort the connection. Client: <script> const message = { name: "ping", data: 0 } const socket = new WebSocket("ws://localhost:8000") socket.addEventListener("open", (event) => { console.log("socket connected to server") socket.send(JSON.stringify(message)) }) socket.addEventListener("message", (event) => { console.log("message from socket server:", JSON.parse(event)) }) </script> Server, found here (minimal implementation of RFC 6455): import array import time import socket import hashlib import sys from select import select import re import logging from threading import Thread import signal from base64 import b64encode class WebSocket(object): handshake = ( "HTTP/1.1 101 Web Socket Protocol Handshake\r\n" "Upgrade: WebSocket\r\n" "Connection: Upgrade\r\n" "WebSocket-Origin: %(origin)s\r\n" "WebSocket-Location: ws://%(bind)s:%(port)s/\r\n" "Sec-Websocket-Accept: %(accept)s\r\n" "Sec-Websocket-Origin: %(origin)s\r\n" "Sec-Websocket-Location: ws://%(bind)s:%(port)s/\r\n" "\r\n" ) def __init__(self, client, server): self.client = client self.server = server self.handshaken = False self.header = "" self.data = "" def feed(self, data): if not self.handshaken: self.header += str(data) if self.header.find('\\r\\n\\r\\n') != -1: parts = self.header.split('\\r\\n\\r\\n', 1) self.header = parts[0] if self.dohandshake(self.header, parts[1]): logging.info("Handshake successful") self.handshaken = True else: self.data += data.decode("utf-8", "ignore") playloadData = data[6:] mask = data[2:6] unmasked = array.array("B", playloadData) for i in range(len(playloadData)): unmasked[i] = unmasked[i] ^ mask[i % 4] self.onmessage(bytes(unmasked).decode("utf-8", "ignore")) def dohandshake(self, header, key=None): logging.debug("Begin handshake: %s" % header) digitRe = re.compile(r'[^0-9]') spacesRe = re.compile(r'\s') part = part_1 = part_2 = origin = None for line in header.split('\\r\\n')[1:]: name, value = line.split(': ', 1) if name.lower() == "sec-websocket-key1": key_number_1 = int(digitRe.sub('', value)) spaces_1 = len(spacesRe.findall(value)) if spaces_1 == 0: return False if key_number_1 % spaces_1 != 0: return False part_1 = key_number_1 / spaces_1 elif name.lower() == "sec-websocket-key2": key_number_2 = int(digitRe.sub('', value)) spaces_2 = len(spacesRe.findall(value)) if spaces_2 == 0: return False if key_number_2 % spaces_2 != 0: return False part_2 = key_number_2 / spaces_2 elif name.lower() == "sec-websocket-key": part = bytes(value, 'UTF-8') elif name.lower() == "origin": origin = value if part: sha1 = hashlib.sha1() sha1.update(part) sha1.update("258EAFA5-E914-47DA-95CA-C5AB0DC85B11".encode('utf-8')) accept = (b64encode(sha1.digest())).decode("utf-8", "ignore") handshake = WebSocket.handshake % { 'accept': accept, 'origin': origin, 'port': self.server.port, 'bind': self.server.bind } #handshake += response else: logging.warning("Not using challenge + response") handshake = WebSocket.handshake % { 'origin': origin, 'port': self.server.port, 'bind': self.server.bind } logging.debug("Sending handshake %s" % handshake) 
self.client.send(bytes(handshake, 'UTF-8')) return True def onmessage(self, data): logging.info("Got message: %s" % data) def send(self, data): logging.info("Sent message: %s" % data) self.client.send("\x00%s\xff" % data) def close(self): self.client.close() class WebSocketServer(object): def __init__(self, bind, port, cls): self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.socket.bind((bind, port)) self.bind = bind self.port = port self.cls = cls self.connections = {} self.listeners = [self.socket] def listen(self, backlog=5): self.socket.listen(backlog) logging.info("Listening on %s" % self.port) self.running = True while self.running: # upon first connection rList = [784] and the other two are empty rList, wList, xList = select(self.listeners, [], self.listeners, 1) for ready in rList: if ready == self.socket: logging.debug("New client connection") client, address = self.socket.accept() fileno = client.fileno() self.listeners.append(fileno) self.connections[fileno] = self.cls(client, self) else: logging.debug("Client ready for reading %s" % ready) client = self.connections[ready].client data = client.recv(1024) # currently, this results in: b'' fileno = client.fileno() if data: # data = b'' self.connections[fileno].feed(data) else: logging.debug("Closing client %s" % ready) self.connections[fileno].close() del self.connections[fileno] self.listeners.remove(ready) for failed in xList: if failed == self.socket: logging.error("Socket broke") for fileno, conn in self.connections: conn.close() self.running = False if __name__ == "__main__": logging.basicConfig(level=logging.DEBUG, format="%(asctime)s - %(levelname)s - %(message)s") server = WebSocketServer("localhost", 8000, WebSocket) server_thread = Thread(target=server.listen, args=[5]) server_thread.start() # Add SIGINT handler for killing the threads def signal_handler(signal, frame): logging.info("Caught Ctrl+C, shutting down...") server.running = False sys.exit() signal.signal(signal.SIGINT, signal_handler) while True: time.sleep(100) server side logs: INFO - Hanshake successful DEBUG - Client ready for reading 664 DEBUG - Closing client 664 and on the client side I get WebSocket connection to 'ws://localhost:8000' failed: Unknown Reason The problem is traced here: if data: self.connections[fileno].feed(data) else: # this is being triggered on the server side logging.debug("Closing client %s" % ready) So researching this I found a potential problem in the Python documentation for select used to retrieve rlist, wlist, xlist select.select(rlist, wlist, xlist[, timeout]) This is a straightforward interface to the Unix select() system call. The first three arguments are iterables of β€˜waitable objects’: either integers representing file descriptors or objects with a parameterless method named fileno() returning such an integer: rlist: wait until ready for reading wlist: wait until ready for writing xlist: wait for an β€œexceptional condition” (see the manual page for what your system considers such a condition) Seeing that the feature is based on the Unix system call, I realized this code might not support Windows, which is my environment. I checked the values of rlist, wlist, xlist and found they're all empty lists on the first iteration rList = [784] (or another number, such as 664) and the other two are empty, after which the connection is closed. The documentation goes on to note: Note: File objects on Windows are not acceptable, but sockets are. 
On Windows, the underlying select() function is provided by the WinSock library, and does not handle file descriptors that don’t originate from WinSock. But I'm not clear on the exact meaning of this. So in the code logic, I did some logging and traced the issue here: rList, wList, xList = select(self.listeners, [], self.listeners, 1) for ready in rList: # rList = [836] or some other number # and then we check if ready (so the 836 int) == self.socket # but if we log self.socket we get this: # <socket.socket fd=772, family=AddressFamily.AF_INET, # type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 8000)> # so of course an integer isn't going to be equivalent to that if ready == self.socket: logging.debug("New client connection") #so lets skip this code and see what the other condition does else: logging.debug("Client ready for reading %s" % ready) client = self.connections[ready].client data = client.recv(1024) # currently, this results in: b'' fileno = client.fileno() if data: # data = b'', so this is handled as falsy self.connections[fileno].feed(data) else: logging.debug("Closing client %s" % ready) And as to why client.recv(1024) returns an empty binary string, I have no idea. I don't know if rList was supposed to contain more than an integer, or if the protocol is working as intended up until recv Can anyone explain what's causing the broken .recv call here? Is the client side JavaScript WebSocket protocol not sending whatever data should be expected? Or is the WebSocket Server at fault, and what's wrong with it?
I tried running your example and it seem to be working as expected. At least server logs end with the following line: INFO - Got message: {"name":"ping","data":0} My environment: OS: Arch Linux; WebSocket client: Chromium/85.0.4183.121 running the JS-code you provided; WebSocket server: Python/3.8.5 running the Python code you provided; select.select docstring indeed states that On Windows, only sockets are supported but most likely the OS is irrelevant since the server code uses only sockets as select.select arguments. recv returns an empty byte string when the reading end of a socket is closed. From recv(3) man: If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0. An interesting thing is a message about a successful handshake in server logs you got: INFO - Hanshake successful It means that in your case the connection between the client and the server has been established and some data has flown in both directions. After that the socket got closed. Looking at the server code I see no reason for the server to stop the connection. So I assume that the client you are using is to blame. To find out exactly what is going wrong, try intercepting the network traffic using tcpdump or wireshark and running the following Python WebSocket client script that reproduces the actions my browser did when I was testing: import socket SERVER = ("localhost", 8000) HANDSHAKE = ( b"GET /chat HTTP/1.1\r\n" b"Host: server.example.com\r\n" b"Upgrade: websocket\r\n" b"Connection: Upgrade\r\n" b"Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==\r\n" b"Sec-WebSocket-Protocol: chat, superchat\r\n" b"Sec-WebSocket-Version: 13\r\n" b"Origin: http://example.com\r\n" b"\r\n\r\n" ) # a frame with `{"name":"ping","data":0}` payload MESSAGE = b"\x81\x983\x81\xde\x04H\xa3\xb0e^\xe4\xfc>\x11\xf1\xb7jT\xa3\xf2&W\xe0\xaae\x11\xbb\xeey" with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.connect(SERVER) n = s.send(HANDSHAKE) assert n != 0 data = s.recv(1024) print(data.decode()) n = s.send(MESSAGE) assert n != 0
6
1
64,004,193
2020-9-22
https://stackoverflow.com/questions/64004193/how-to-split-dataset-to-train-test-and-valid-in-python
I have a dataset like this my_data= [['Manchester', '23', '80', 'CM', 'Manchester', '22', '79', 'RM', 'Manchester', '19', '76', 'LB'], ['Benfica', '26', '77', 'CF', 'Benfica', '22', '74', 'CDM', 'Benfica', '17', '70', 'RB'], ['Dortmund', '24', '75', 'CM', 'Dortmund', '18', '74', 'AM', 'Dortmund', '16', '69', 'LM'] ] I know about train_test_split from sklearn.cross_validation, and I've tried this: from sklearn.model_selection import train_test_split train, test = train_test_split(my_data, test_size = 0.2) The result only splits into test and train. I wish to divide it into 3 separate sets with randomized data. Expected: Test, Train, Valid
You can simply use train_test_split twice: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1) The second call uses test_size=0.25 because 0.25 of the remaining 80% is 20% of the original data, giving a 60/20/20 train/validation/test split. Also, the answer can be found here
7
11
63,998,612
2020-9-21
https://stackoverflow.com/questions/63998612/what-is-the-best-way-to-append-custom-message-to-the-output-when-pytest-raises-f
What I'm aiming at is - if I ran the following test: def test_func(): with pytest.raises(APIError): func() and the func() did not raise APIError - I want to get custom message to the output, e.g. "No APIError caught" Pytest had a feature specifically for this: with raises(SomeError, message="Custom message here"): pass https://docs.pytest.org/en/4.2.1/assert.html#assertions-about-expected-exceptions but for some reason it didn't make it to further versions (current is 6.0.2)
You can replicate this behavior using the pytest.fail function after the function you expect to raise an exception, since that will only run if an exception is not raised. The deprecation notice explains the reasoning for its removal, and offers this alternate approach: import pytest def func(): return def test_func(): with pytest.raises(ValueError): func() pytest.fail("Oh no!") $ pytest test.py collected 1 item test.py F [100%] ==================================================================================================== FAILURES ===================================================================================================== ____________________________________________________________________________________________________ test_func ____________________________________________________________________________________________________ def test_func(): with pytest.raises(ValueError): func() > pytest.fail("Oh no!") E Failed: Oh no! test.py:9: Failed ============================================================================================= short test summary info ============================================================================================= FAILED test.py::test_func - Failed: Oh no! ================================================================================================ 1 failed in 0.16s ================================================================================================
6
8
63,998,196
2020-9-21
https://stackoverflow.com/questions/63998196/python-pyspark-correct-method-chaining-order-rules
Coming from a SQL development background, and currently learning pyspark / python I am a bit confused with querying data / chaining methods, using python. for instance the query below (taken from 'Learning Spark 2nd Edition'): fire_ts_df. select("CallType") .where(col("CallType").isNotNull()) .groupBy("CallType") .count() .orderBy("count", ascending=False) .show(n=10, truncate=False) will execute just fine. What i don't understand though, is that if i had written the code like: (moved the call to 'count()' higher) fire_ts_df. select("CallType") .count() .where(col("CallType").isNotNull()) .groupBy("CallType") .orderBy("count", ascending=False) .show(n=10, truncate=False) this wouldn't work. The problem is that i don't want to memorize the order, but i want to understand it. I feel it has something to do with proper method chaining in Python / Pyspark but I am not sure how to justify it. In other words, in a case like this, where multiple methods should be invoked and chained using (.), what is the right order and is there any specific rule to follow? Thanks a lot in advance
The important thing to note here is that chained methods necessarily do not occur in random order. The operations represented by these method calls are not some associative transformations applied flatly on the data from left to right. Each method call could be written as a separate statement, where each statement produces a result that makes the input to the next operation, and so on until the result. fire_ts_df. select("CallType") # selects column CallType into a 1-col DF .where(col("CallType").isNotNull()) # Filters rows on the 1-column DF from select() .groupBy("CallType") # Group filtered DF by the one column into a pyspark.sql.group.GroupedData object .count() # Creates a new DF off the GroupedData with counts .orderBy("count", ascending=False) # Sorts the aggregated DF, as a new DF .show(n=10, truncate=False) # Prints the last DF Just to use your example to explain why this doesn't work, calling count() on a pyspark.sql.group.GroupedData creates a new data frame with aggregation results. But count() called on a DataFrame object returns just the number of records, which means that the following call, .where(col("CallType").isNotNull()), is made on a long, which simply doesn't make sense. Longs don't have that filter method. As said above, you may visualize it differently by rewriting the code in separate statements: call_type_df = fire_ts_df.select("CallType") non_null_call_type = call_type_df.where(col("CallType").isNotNull()) groupings = non_null_call_type.groupBy("CallType") counts_by_call_type_df = groupings.count() ordered_counts = counts_by_call_type_df.orderBy("count", ascending=False) ordered_counts.show(n=10, truncate=False) As you can see, the ordering is meaningful as the succession of operations is consistent with their respective output. Chained calls make what is referred to as fluent APIs, which minimize verbosity. But this does not remove the fact that a chained method must be applicable to the type of the output of the preceding call (and in fact that the next operation is intended to be applied on the value produced by the one preceding it).
6
7
63,954,102
2020-9-18
https://stackoverflow.com/questions/63954102/numpy-vectorized-way-to-count-non-zero-bits-in-array-of-integers
I have an array of integers: [int1, int2, ..., intn] I want to count how many non-zero bits are in the binary representation of these integers. For example: bin(123) -> 0b1111011, there are 6 non-zero bits Of course I can loop over list of integers, use bin() and count('1') functions, but I'm looking for vectorized way to do it.
Assuming your array is a, you can simply do: np.unpackbits(a.view('uint8')).sum() example: a = np.array([123, 44], dtype=np.uint8) #bin(a) is [0b1111011, 0b101100] np.unpackbits(a.view('uint8')).sum() #9 Comparison using benchit: #@Ehsan's solution def m1(a): return np.unpackbits(a.view('uint8')).sum() #@Valdi_Bo's solution def m2(a): return sum([ bin(n).count('1') for n in a ]) in_ = [np.random.randint(100000,size=(n)) for n in [10,100,1000,10000,100000]] m1 is significantly faster.
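The same trick also works for wider integer dtypes, because .view('uint8') just reinterprets the underlying bytes; note this assumes non-negative values (the two's-complement bytes of negative numbers would add extra set bits). A small sketch:
a = np.array([123, 44], dtype=np.int64)
np.unpackbits(a.view('uint8')).sum()  # still 9, the extra high-order bytes are all zero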
7
7
63,992,444
2020-9-21
https://stackoverflow.com/questions/63992444/how-to-convert-bytes-object-to-io-bytesio-python
I am making a simple flask API for uploading an image and do some progresses then store it in the data base as binary, then i want to download it by using send_file() function but, when i am passing an image like a bytes it gives me an error: return send_file(BytesIO.read(image.data), attachment_filename='f.jpg', as_attachment=True) TypeError: descriptor 'read' requires a '_io.BytesIO' object but received a 'bytes' and My code for upload an image as follow: @app.route('/upload', methods=['POST']) def upload(): images = request.files.getlist('uploadImages') n = 0 for image in images: fileName = image.filename.split('.')[0] fileFormat = image.filename.split('.')[1] imageRead = image.read() img = BytesIO(imageRead) with graph.as_default(): caption = generate_caption_from_file(img) newImage = imageDescription(name=fileName, format=fileFormat, description=caption, data=imageRead) db.session.add(newImage) db.session.commit() n = n + 1 return str(n) + ' Image has been saved successfully' And my code for downloading an image: @app.route('/download/<int:id>') def download(id): image = imageDescription.query.get_or_404(id) return send_file(BytesIO.read(image.data), attachment_filename='f.jpg', as_attachment=True) any one can help please???
It seems you are confused io.BytesIO. Let's look at some examples of using BytesIO. >>> from io import BytesIO >>> inp_b = BytesIO(b'Hello World', ) >>> inp_b <_io.BytesIO object at 0x7ff2a71ecb30> >>> inp.read() # read the bytes stream for first time b'Hello World' >>> inp.read() # now it is positioned at the end so doesn't give anything. b'' >>> inp.seek(0) # position it back to begin >>> BytesIO.read(inp) # This is same as above and prints bytes stream b'Hello World' >>> inp.seek(0) >>> inp.read(4) # Just read upto four bytes of stream. >>> b'Hell' This should give you an idea of how read on BytesIO works. I think what you need to do is this. return send_file( BytesIO(image.data), mimetype='image/jpg', as_attachment=True, attachment_filename='f.jpg' )
7
13
63,993,139
2020-9-21
https://stackoverflow.com/questions/63993139/how-to-split-a-list-into-two-random-parts
I have 12 people who I need to divide into 2 different teams. What I need to do is pick 6 random numbers between 0 and 11 for the first team and do the same for the second one, with no overlap. What is the most efficient way to do this? import random A = random.choice([x for x in range(12)]) B = random.choice([x for x in range(12) if x != A]) C = random.choice([x for x in range(12) if (x != A) and (x != B)]) team1 = random.sample(range(0, 12), 6) team2 = random.sample(range(0, 12), 6) This is what I wrote so far. Any help is appreciated.
You can use sets and set difference, like this: import random all_players = set(range(12)) team1 = set(random.sample(all_players, 6)) team2 = all_players - team1 print(team1) print(team2) Example Output: {1, 5, 8, 9, 10, 11} {0, 2, 3, 4, 6, 7}
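Two small notes on this approach: on Python 3.9+, random.sample() of a set is deprecated (and raises a TypeError on 3.11+), so pass a sequence such as sorted(all_players) instead; and an equally common pattern is to shuffle once and slice. A sketch of both:
team1 = set(random.sample(sorted(all_players), 6))  # works on newer Python versions

players = list(range(12))
random.shuffle(players)
team1, team2 = set(players[:6]), set(players[6:])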
7
14
63,988,597
2020-9-21
https://stackoverflow.com/questions/63988597/i-need-to-change-the-type-of-few-columns-in-a-pandas-dataframe-cant-do-so-usin
In a dataframe with around 40+ columns I am trying to change the dtype of the first 27 columns from float to int by using iloc: df1.iloc[:,0:27]=df1.iloc[:,0:27].astype('int') However, it's not working. I'm not getting any error, but the dtype is not changing either. It still remains float. Now the strangest part: if I first change the dtype of only the 1st column (like below): df1.iloc[:,0]=df1.iloc[:,0].astype('int') and then run the earlier line of code: df1.iloc[:,0:27]=df1.iloc[:,0:27].astype('int') it works as required. Any help in understanding this, and a solution, would be appreciated. Thanks!
I guess it is a bug in pandas 1.0.5. I tested on 1.0.5 and have the same issue as yours. .loc has the same issue, so I guess the pandas devs broke something in iloc/loc. You need to update to the latest pandas or use a workaround. If you need a workaround, use assignment as follows: df1[df1.columns[0:27]] = df1.iloc[:, 0:27].astype('int') I tested it; the above works around this bug. It will turn the first 27 columns to dtype int32.
12
8
63,987,965
2020-9-21
https://stackoverflow.com/questions/63987965/typeerror-argument-of-type-windowspath-is-not-iterable-in-django-python
whenever I run the server or executing any commands in the terminal this error is showing in the terminal. The server is running and the webpage is working fine but when I quit the server or run any commands(like python manage.py migrate) this error is showing. `Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). September 21, 2020 - 12:42:24 Django version 3.0, using settings 'djangoblog.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CTRL-BREAK. Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "C:\Python37\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line utility.execute() File "C:\Python37\lib\site-packages\django\core\management\__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Python37\lib\site-packages\django\core\management\base.py", line 341, in run_from_argv connections.close_all() File "C:\Python37\lib\site-packages\django\db\utils.py", line 230, in close_all connection.close() File "C:\Python37\lib\site-packages\django\utils\asyncio.py", line 24, in inner return func(*args, **kwargs) File "C:\Python37\lib\site-packages\django\db\backends\sqlite3\base.py", line 261, in close if not self.is_in_memory_db(): File "C:\Python37\lib\site-packages\django\db\backends\sqlite3\base.py", line 380, in is_in_memory_db return self.creation.is_in_memory_db(self.settings_dict['NAME']) File "C:\Python37\lib\site-packages\django\db\backends\sqlite3\creation.py", line 12, in is_in_memory_db return database_name == ':memory:' or 'mode=memory' in database_name TypeError: argument of type 'WindowsPath' is not iterable `
I got this cleared by changing DATABASES in settings.py file: change 'NAME': BASE_DIR / 'db.sqlite3', to 'NAME': str(os.path.join(BASE_DIR, "db.sqlite3")) this works DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': str(os.path.join(BASE_DIR, "db.sqlite3")) } }
14
45
63,979,186
2020-9-20
https://stackoverflow.com/questions/63979186/jupyter-notebook-does-not-launch-importerror-dll-load-failed-while-importing
Recently my jupyter notebook stopped launching. When I try the command jupyter notebook from anaconda prompt but it gives error Traceback (most recent call last): File "C:\Users\Dell\anaconda3\Scripts\jupyter-notebook-script.py", line 6, in from notebook.notebookapp import main File "C:\Users\Dell\anaconda3\lib\site-packages\notebook\notebookapp.py", line 51, in from zmq.eventloop import ioloop File "C:\Users\Dell\anaconda3\lib\site-packages\zmq_init_.py", line 50, in from zmq import backend File "C:\Users\Dell\anaconda3\lib\site-packages\zmq\backend_init_.py", line 40, in reraise(*exc_info) File "C:\Users\Dell\anaconda3\lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise raise value File "C:\Users\Dell\anaconda3\lib\site-packages\zmq\backend_init_.py", line 27, in ns = select_backend(first) File "C:\Users\Dell\anaconda3\lib\site-packages\zmq\backend\select.py", line 28, in select_backend mod = import(name, fromlist=public_api) File "C:\Users\Dell\anaconda3\lib\site-packages\zmq\backend\cython_init.py", line 6, in from . import (constants, error, message, context, ImportError: DLL load failed while importing error: The specified module could not be found. I even tried reinstalling anaconda and upgraded to python 3.8.3 on windows 10 but still get the same error. When I tried to check jupyter notebook's version it said that ipykernel and some other things were not installed. jupyter --version gives me this: jupyter core : 4.6.3 jupyter-notebook : 6.1.1 qtconsole : 4.7.6 ipython : 7.18.1 ipykernel : not installed jupyter client : not installed jupyter lab : not installed nbconvert : 5.6.1 But installing ipykernel with conda install ipykernel says All requested packages already installed. I also tried ipython kernel install --name <env_name> --user but this gives another dll error. Reading some other problems in stackoverflow I went and checked my enviroment variables. Is there a problem with this environment variable. Please help. Screenshot of the anaconda prompt with error. Edit: The anaconda navigator does not launch either. anaconda-navigator on the anaconda prompt gives another error: I also tried: conda install qt --force conda install pyqt --force But that did'nt help. Does previously installed anaconda cause such error?
I found what I did wrong (silly me). Microsoft Visual C++ 2015-2019 was somehow removed when I tried to install OpenCV manually. I didn't think that such an install would make such a big impact (something to keep in mind), but installing the latest version solved all the problems. P.S.: This solution might not work for someone else with a similar problem, but it's worth noting.
7
4
63,986,466
2020-9-21
https://stackoverflow.com/questions/63986466/how-can-i-check-the-sparsity-of-a-pandas-dataframe
In Pandas, how can I check how sparse a DataFrame is? Is there any function available, or will I need to write my own? For now, I have this: df = pd.DataFrame({'a':[1,0,1,1,3], 'b':[0,0,0,0,1], 'c':[4,0,0,0,0], 'd':[0,0,3,0,0]}) a b c d 0 1 0 4 0 1 0 0 0 0 2 1 0 0 3 3 1 0 0 0 4 3 1 0 0 sparsity = sum((df == 0).astype(int).sum())/df.size which divides the number of zeros by the total number of elements; in this example it's 0.65. I wanted to know if there is any better way to do this, and if there is any function which gives more information about the sparsity (like NaNs, or any other prominent number like -1).
One idea for your solution is to convert to a numpy array, compare and use mean: a = (df.to_numpy() == 0).mean() print (a) 0.65 If you want to use Sparse dtypes, it is possible to use: #convert each column to SparseArray sparr = df.apply(pd.arrays.SparseArray) print (sparr) a b c d 0 1 0 4 0 1 0 0 0 0 2 1 0 0 3 3 1 0 0 0 4 3 1 0 0 print (sparr.dtypes) a Sparse[int64, 0] b Sparse[int64, 0] c Sparse[int64, 0] d Sparse[int64, 0] dtype: object print (sparr.sparse.density) 0.35
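If you also want the share of NaNs or of another sentinel value such as -1, the same comparison-plus-mean pattern applies; a quick sketch using the df above:
nan_share = df.isna().to_numpy().mean()         # fraction of NaN cells
minus_one_share = (df.to_numpy() == -1).mean()  # fraction of cells equal to -1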
7
7
63,982,499
2020-9-20
https://stackoverflow.com/questions/63982499/keyerror-on-if-condition-in-dictionary-python
I have this problem: I have this code that is trying to count bigrams in a text file. An if statement checks whether the tuple is in a dictionary. If it is, the value (counter) is incremented. If it doesn't exist, the code should create a key-value pair with the tuple as key and the value 1. for i in range(len(temp_list)-1): temp_tuple=(temp_list[i], temp_list[i+1]) if bigramdict[temp_tuple] in bigramdict: bigramdict[temp_tuple] = bigramdict[temp_tuple]+1 else: bigramdict[temp_tuple] = 1 However, whenever I run the code, it throws a KeyError on the very first tuple. As far as I understand, a KeyError gets thrown when a key in a dict doesn't exist, which is the case here. That's why I have the if statement to see if there is a key. Normally, the program should see that there is no key and go to the else to create one. However, it gets stuck on the if and complains about the missing key. Why does it not recognize that this is a conditional statement? Please help.
What you were trying to do was if temp_tuple in bigramdict: instead of if bigramdict[temp_tuple] in bigramdict:
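For reference, the same counting logic can also be written without the membership test at all, by using dict.get with a default (a sketch reusing the variable names from the question):
for i in range(len(temp_list) - 1):
    temp_tuple = (temp_list[i], temp_list[i + 1])
    bigramdict[temp_tuple] = bigramdict.get(temp_tuple, 0) + 1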
7
9
63,980,647
2020-9-20
https://stackoverflow.com/questions/63980647/how-can-i-stop-the-log-output-of-lightgbm
I would like to know how to stop lightgbm logging. What kind of settings should I use to stop the log? Also, is there a way to output only my own log with the lightgbm log stopped?
I think you can disable lightgbm logging using verbose=-1 in both the Dataset constructor and the train function, as mentioned here
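A minimal sketch of what that can look like (the objective and the random training data are placeholders, not part of the original answer, and the exact parameter spelling may vary between lightgbm versions):
import numpy as np
import lightgbm as lgb

X_train = np.random.rand(100, 5)          # placeholder data
y_train = np.random.randint(0, 2, 100)

params = {'objective': 'binary', 'verbose': -1}                           # silences training output
train_set = lgb.Dataset(X_train, label=y_train, params={'verbose': -1})  # silences Dataset construction output
booster = lgb.train(params, train_set)
Your own logging (for example via the logging module or print) is unaffected, so it remains visible once lightgbm's output is silenced.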
13
12
63,978,820
2020-9-20
https://stackoverflow.com/questions/63978820/type-hints-for-dataclass-defined-inside-a-class-with-generic-types
I know that the title is very confusing, so let me take the Binary Search Tree as an example: Using ordinary class definition # This code passed mypy test from typing import Generic, TypeVar T = TypeVar('T') class BST(Generic[T]): class Node: def __init__( self, val: T, left: 'BST.Node', right: 'BST.Node' ) -> None: self.val = val self.left = left self.right = right The above code passed mypy test. Using dataclass However, when I tried to use dataclass to simplify the definition of Node, the code failed in mypy test. # This code failed to pass mypy test from dataclasses import dataclass from typing import Generic, TypeVar T = TypeVar('T') class BST(Generic[T]): @dataclass class Node: val: T left: 'BST.Node' right: 'BST.Node' mypy gave me this error message: (test_typing.py:8 is the line val: T) test_typing.py:8: error: Type variable "test_typing.T" is unbound test_typing.py:8: note: (Hint: Use "Generic[T]" or "Protocol[T]" base class to bind "T" inside a class) test_typing.py:8: note: (Hint: Use "T" in function signature to bind "T" inside a function) Pinpoint the problem # This code passed mypy test, suggest the problem is the reference to `T` in the dataclass definition from dataclasses import dataclass from typing import Generic, TypeVar T = TypeVar('T') class BST(Generic[T]): @dataclass class Node: val: int # chose `int` just for testing left: 'BST.Node' right: 'BST.Node' The above code agained passed the test, so I think the problem is the reference to T in the dataclass definition. Does anyone know how to future fix this to meet my original goal?
Let's start with what is written in PEP 484 about scoping rules for type variables: A generic class nested in another generic class cannot use same type variables. The scope of the type variables of the outer class doesn't cover the inner one: T = TypeVar('T') S = TypeVar('S') class Outer(Generic[T]): class Bad(Iterable[T]): # Error ... class AlsoBad: x = None # type: List[T] # Also an error class Inner(Iterable[S]): # OK ... attr = None # type: Inner[T] # Also OK This is why your example with a nested decorated class does not work. Now let's answer the question of why the example works with an __init__ function that takes a TypeVar variable. This is because the method __init__ is treated by mypy as a generic method with an independent TypeVar variable. For example reveal_type(BST[int].Node.__init__) shows Revealed type is 'def [T, T] (self: main.BST.Node, val: T'-1, left: main.BST.Node, right: main.BST.Node)'. i.e. T is not bound to int here.
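To meet the original goal, one option (an illustrative sketch only; Optional is added here so leaf nodes can be represented, and there are other ways to arrange it) is to move Node to module level and make it a generic dataclass bound to its own type parameter, then refer to Node[T] from BST:
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar('T')

@dataclass
class Node(Generic[T]):
    val: T
    left: 'Optional[Node[T]]'
    right: 'Optional[Node[T]]'

class BST(Generic[T]):
    def __init__(self, root: Optional[Node[T]] = None) -> None:
        self.root = root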
6
5
63,980,292
2020-9-20
https://stackoverflow.com/questions/63980292/how-to-delete-all-instances-of-a-repeated-number-in-a-list
I want a code that deletes all instances of any number that has been repeated from a list. E.g.: Inputlist = [2, 3, 6, 6, 8, 9, 12, 12, 14] Outputlist = [2,3,8,9,14] I have tried to remove the duplicated elements in the list already (by using the "unique" function), but it leaves a single instance of the element in the list nevertheless! seen = set() uniq = [] for x in Outputlist: if x not in seen: uniq.append(x) seen.add(x) seen I went through a lot of StackOverflow articles too, but all of them differ in the idea that they are searching for removing common elements from two different lists, or that they want just one instance of each element to still be kept. I want to simply remove all common elements.
You can use a Counter >>> from collections import Counter >>> l = [2, 3, 6, 6, 8, 9, 12, 12, 14] >>> res = [el for el, cnt in Counter(l).items() if cnt==1] >>> res [2, 3, 8, 9, 14]
9
10
63,979,315
2020-9-20
https://stackoverflow.com/questions/63979315/python-difference-with-previous-row-by-group
i am trying to take diff value from previous row in a dataframe by grouping column "group", there are several similar questions but i can't get this working. date group value 0 2020-01-01 A 808 1 2020-01-01 B 331 2 2020-01-02 A 612 3 2020-01-02 B 1391 4 2020-01-03 A 234 5 2020-01-04 A 828 6 2020-01-04 B 820 6 2020-01-05 A 1075 8 2020-01-07 B 572 9 2020-01-10 B 736 10 2020-01-10 A 1436 df.sort_values(['group','date'], inplace=True) df['diff'] = df['value'].diff() print(df) date value group diff 1 2020-01-03 234 A NaN 8 2020-01-01 331 B 97.0 2 2020-01-07 572 B 241.0 9 2020-01-02 612 A 40.0 5 2020-01-10 736 B 124.0 17 2020-01-01 808 A 72.0 14 2020-01-04 820 B 12.0 4 2020-01-04 828 A 8.0 18 2020-01-05 1075 A 247.0 7 2020-01-02 1391 B 316.0 10 2020-01-10 1436 A 45.0 This is the result that i need date group value diff 0 2020-01-01 A 808 Na 2 2020-01-02 A 612 -196 4 2020-01-03 A 234 -378 5 2020-01-04 A 828 594 6 2020-01-05 A 1075 247 10 2020-01-10 A 1436 361 1 2020-01-01 B 331 Na 3 2020-01-02 B 1391 1060 6 2020-01-04 B 820 -571 8 2020-01-07 B 572 -248 9 2020-01-10 B 736 164
Shift within each group to create a calculated column, then subtract that column from the original value column to create the difference column. df.sort_values(['group','date'], ascending=[True,True], inplace=True) df['shift'] = df.groupby('group')['value'].shift() df['diff'] = df['value'] - df['shift'] df = df[['date','group','value','diff']] df date group value diff 0 2020-01-01 A 808 NaN 2 2020-01-02 A 612 -196.0 4 2020-01-03 A 234 -378.0 5 2020-01-04 A 828 594.0 6 2020-01-05 A 1075 247.0 10 2020-01-10 A 1436 361.0 1 2020-01-01 B 331 NaN 3 2020-01-02 B 1391 1060.0 6 2020-01-04 B 820 -571.0 8 2020-01-07 B 572 -248.0 9 2020-01-10 B 736 164.0
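As a side note, the intermediate shift column can be skipped entirely, since a grouped diff computes the same per-group difference in one step:
df.sort_values(['group', 'date'], inplace=True)
df['diff'] = df.groupby('group')['value'].diff()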
6
7
63,977,422
2020-9-20
https://stackoverflow.com/questions/63977422/error-trying-to-import-cv2opencv-python-package
I am trying to access my webcam with cv2(opencv-python) package. When I try to import it I get this error: Traceback (most recent call last): File "server.py", line 6, in <module> import cv2 File "/usr/local/lib/python3.8/dist-packages/cv2/__init__.py", line 5, in <module> from .cv2 import * ImportError: libGL.so.1: cannot open shared object file: No such file or directory Note: I am trying to import this package on putty, on Linode server - that might be useful information. If anyone can explain to me what is happening and maybe solve the problem I will highly appreciate it!
Install opencv-python-headless instead of opencv-python. Server (headless) environments do not have GUI packages installed which is why you are seeing the error. opencv-python depends on Qt which in turn depends on X11 related libraries. Other alternative is to run sudo apt-get install -y libgl1-mesa-dev which will provide the missing libGL.so.1 if you want to use opencv-python. The libgl1-mesa-dev package might be named differently depending on your GNU/Linux distribution. Full installation guide for opencv-python can be found from the package documentation: https://github.com/skvark/opencv-python#installation-and-usage
12
46
63,975,678
2020-9-20
https://stackoverflow.com/questions/63975678/how-to-convert-a-dataframe-from-long-to-wide-with-values-grouped-by-year-in-the
The code below worked with the previous csv that I used, both csv's have the same amount of columns, and the columns have the same name. Data for the csv that worked here Data for csv that didnt here What does this error mean? Why am I getting this error? from pandas import read_csv from pandas import DataFrame from pandas import Grouper from matplotlib import pyplot series = read_csv('carringtonairtemp.csv', header=0, index_col=0, parse_dates=True, squeeze=True) groups = series.groupby(Grouper(freq='A')) years = DataFrame() for name, group in groups: years[name.year] = group.values years = years.T pyplot.matshow(years, interpolation=None, aspect='auto') pyplot.show() Error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-7173fcbe8c08> in <module> 6 # display(group.head()) 7 # print(group.values[:10]) ----> 8 years[name.year] = group.values e:\Anaconda3\lib\site-packages\pandas\core\frame.py in __setitem__(self, key, value) 3038 else: 3039 # set column -> 3040 self._set_item(key, value) 3041 3042 def _setitem_slice(self, key: slice, value): e:\Anaconda3\lib\site-packages\pandas\core\frame.py in _set_item(self, key, value) 3114 """ 3115 self._ensure_valid_index(value) -> 3116 value = self._sanitize_column(key, value) 3117 NDFrame._set_item(self, key, value) 3118 e:\Anaconda3\lib\site-packages\pandas\core\frame.py in _sanitize_column(self, key, value, broadcast) 3759 3760 # turn me into an ndarray -> 3761 value = sanitize_index(value, self.index) 3762 if not isinstance(value, (np.ndarray, Index)): 3763 if isinstance(value, list) and len(value) > 0: e:\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in sanitize_index(data, index) 745 """ 746 if len(data) != len(index): --> 747 raise ValueError( 748 "Length of values " 749 f"({len(data)}) " ValueError: Length of values (365) does not match length of index (252)
The issue with iteratively creating the dataframe in the manner shown, is it requires the new column to match the length of the existing dataframe, year, index. In the smaller dataset, all the years are 365 days without missing days. The larger dataset has mixed length years of 365 and 366 days and there is missing data from 1990 and 2020, which is causing ValueError: Length of values (365) does not match length of index (252). Following is a more succinct script, which achieves the desired dataframe shape, and plot. This implementation doesn't have issues with the unequal data lengths. import pandas as pd import matplotlib.pyplot as plt # links to data url1 = 'https://raw.githubusercontent.com/trenton3983/stack_overflow/master/data/so_data/2020-09-19%20%2063975678/daily-min-temperatures.csv' url2 = 'https://raw.githubusercontent.com/trenton3983/stack_overflow/master/data/so_data/2020-09-19%20%2063975678/carringtonairtemp.csv' # load the data into a DataFrame, not a Series # parse the dates, and set them as the index df1 = pd.read_csv(url1, parse_dates=['Date'], index_col=['Date']) df2 = pd.read_csv(url2, parse_dates=['Date'], index_col=['Date']) # groupby year and aggregate Temp into a list dfg1 = df1.groupby(df1.index.year).agg({'Temp': list}) dfg2 = df2.groupby(df2.index.year).agg({'Temp': list}) # create a wide format dataframe with all the temp data expanded df1_wide = pd.DataFrame(dfg1.Temp.tolist(), index=dfg1.index) df2_wide = pd.DataFrame(dfg2.Temp.tolist(), index=dfg2.index) # plot fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 10)) ax1.matshow(df1_wide, interpolation=None, aspect='auto') ax2.matshow(df2_wide, interpolation=None, aspect='auto')
6
3
63,975,914
2020-9-20
https://stackoverflow.com/questions/63975914/python-asyncio-typeerror-a-coroutine-was-expected
I'm trying python coroutine programming using asyncio. This is my code. import asyncio async def coro_function(): return 2 + 2 async def get(): return await coro_function() print(asyncio.iscoroutinefunction(get)) loop = asyncio.get_event_loop() a1 = loop.create_task(get) loop.run_until_complete(a1) But when I execute it, it gives me error True Traceback (most recent call last): File "a.py", line 13, in <module> a1 = loop.create_task(get) File "/home/alie/anaconda3/lib/python3.7/asyncio/base_events.py", line 405, in create_task task = tasks.Task(coro, loop=self) TypeError: a coroutine was expected, got <function get at 0x7fe1280b6c80> How to solve it?
You're passing in the function get. In order to pass in a coroutine, pass in get(). a1 = loop.create_task(get()) loop.run_until_complete(a1) Take a look at the types: >>> type(get) <class 'function'> >>> print(type(get())) <class 'coroutine'> get is a coroutine function, i.e. a function that returns a coroutine object, get(). For more information and a better understanding of the fundamentals, take a look at the docs.
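On Python 3.7+ the same result can also be obtained without managing the event loop by hand; a small sketch using the question's coroutines:
result = asyncio.run(get())
print(result)  # 4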
17
28
63,975,130
2020-9-20
https://stackoverflow.com/questions/63975130/how-to-get-only-specific-classes-from-pytorchs-fashionmnist-dataset
The FashionMNIST dataset has 10 different output classes. How can I get a subset of this dataset with only specific classes? In my case, I only want images of sneaker, pullover, sandal and shirt classes (their classes are 7,2,5 and 6 respectively). This is how I load my dataset. train_dataset_full = torchvision.datasets.FashionMNIST(data_folder, train = True, download = True, transform = transforms.ToTensor()) The approach I’ve followed is below. Iterate through the dataset, one by one, then compare the 1st element (i.e. class) in the returned tuple to my required class. I’m stuck here. If the value returned is true, how can I append/add this observation to an empty dataset? sneaker = 0 pullover = 0 sandal = 0 shirt = 0 for i in range(60000): if train_dataset_full[i][1] == 7: sneaker += 1 elif train_dataset_full[i][1] == 2: pullover += 1 elif train_dataset_full[i][1] == 5: sandal += 1 elif train_dataset_full[i][1] == 6: shirt += 1 Now, in place of sneaker += 1, pullover += 1, sandal += 1 and shirt += 1 I want to do something like this empty_dataset.append(train_dataset_full[i]) or something similar. If the above approach is incorrect, please suggest another method.
Finally found the answer. dataset_full = torchvision.datasets.FashionMNIST(data_folder, train = True, download = True, transform = transforms.ToTensor()) # Selecting classes 7, 2, 5 and 6 idx = (dataset_full.targets==7) | (dataset_full.targets==2) | (dataset_full.targets==5) | (dataset_full.targets==6) dataset_full.targets = dataset_full.targets[idx] dataset_full.data = dataset_full.data[idx]
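A sketch of the same idea generalized to an arbitrary list of classes (assuming torch is imported), so the chain of | comparisons doesn't have to be written out by hand:
import torch

classes_to_keep = [7, 2, 5, 6]
idx = torch.zeros_like(dataset_full.targets, dtype=torch.bool)
for c in classes_to_keep:
    idx |= dataset_full.targets == c
dataset_full.targets = dataset_full.targets[idx]
dataset_full.data = dataset_full.data[idx]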
10
12
63,936,759
2020-9-17
https://stackoverflow.com/questions/63936759/scrapy-hidden-memory-leak
Background - TLDR: I have a memory leak in my project Spent a few days looking through the memory leak docs with scrapy and can't find the problem. I'm developing a medium size scrapy project, ~40k requests per day. I am hosting this using scrapinghub's scheduled runs. On scrapinghub, for $9 per month, you are essentially given 1 VM, with 1GB of RAM, to run your crawlers. I've developed a crawler locally and uploaded to scrapinghub, the only problem is that towards the end of the run, I exceed the memory. Localling setting CONCURRENT_REQUESTS=16 works fine, but leads to exceeding the memory on scrapinghub at the 50% point. When I set CONCURRENT_REQUESTS=4, I exceed the memory at the 95% point, so reducing to 2 should fix the problem, but then my crawler becomes too slow. The alternative solution, is paying for 2 VM's, to increase the RAM, but I have a feeling that the way I've set up my crawler is causing memory leaks. For this example, the project will scrape an online retailer. When run locally, my memusage/max is 2.7gb with CONCURRENT_REQUESTS=16. I will now run through my scrapy structure Get the total number of pages to scrape Loop through all these pages using: www.example.com/page={page_num} On each page, gather information on 48 products For each of these products, go to their page and get some information Using that info, call an API directly, for each product Save these using an item pipeline (locally I write to csv, but not on scrapinghub) Pipeline class Pipeline(object): def process_item(self, item, spider): item['stock_jsons'] = json.loads(item['stock_jsons'])['subProducts'] return item Items class mainItem(scrapy.Item): date = scrapy.Field() url = scrapy.Field() active_col_num = scrapy.Field() all_col_nums = scrapy.Field() old_price = scrapy.Field() current_price = scrapy.Field() image_urls_full = scrapy.Field() stock_jsons = scrapy.Field() class URLItem(scrapy.Item): urls = scrapy.Field() Main spider class ProductSpider(scrapy.Spider): name = 'product' def __init__(self, **kwargs): page = requests.get('www.example.com', headers=headers) self.num_pages = # gets the number of pages to search def start_requests(self): for page in tqdm(range(1, self.num_pages+1)): url = 'www.example.com/page={page}' yield scrapy.Request(url = url, headers=headers, callback = self.prod_url) def prod_url(self, response): urls_item = URLItem() extracted_urls = response.xpath(####).extract() # Gets URLs to follow urls_item['urls'] = [# Get a list of urls] for url in urls_item['urls']: yield scrapy.Request(url = url, headers=headers, callback = self.parse) def parse(self, response) # Parse the main product page item = mainItem() item['date'] = DATETIME_VAR item['url'] = response.url item['active_col_num'] = XXX item['all_col_nums'] = XXX item['old_price'] = XXX item['current_price'] = XXX item['image_urls_full'] = XXX try: new_url = 'www.exampleAPI.com/' + item['active_col_num'] except TypeError: new_url = 'www.exampleAPI.com/{dummy_number}' yield scrapy.Request(new_url, callback=self.parse_attr, meta={'item': item}) def parse_attr(self, response): ## This calls an API Step 5 item = response.meta['item'] item['stock_jsons'] = response.text yield item What I've tried so far? psutils, haven't helped too much. 
trackref.print_live_refs() returns the following at the end: HtmlResponse 31 oldest: 3s ago mainItem 18 oldest: 5s ago ProductSpider 1 oldest: 3321s ago Request 43 oldest: 105s ago Selector 16 oldest: 3s ago printing the top 10 global variables, over time printing the top 10 item types, over time QUESTIONS How can I find the memory leak? Can anyone see where I may be leaking memory? Is there a fundamental problem with my scrapy structure? Please let me know if there is any more information required Additional Information Requested Note, the following output is from my local machine, where I have plenty of RAM, so the website I am scraping becomes the bottleneck. When using scrapinghub, due to the 1GB limit, the suspected memory leak becomes the problem. Please let me know if the output from scrapinghub is required, I think it should be the same, but the message for finish reason, is memory exceeded. 1.Log lines from start(from INFO: Scrapy xxx started to spider opened). 2020-09-17 11:54:11 [scrapy.utils.log] INFO: Scrapy 2.3.0 started (bot: PLT) 2020-09-17 11:54:11 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.7.4 (v3.7.4:e09359112e, Jul 8 2019, 14:54:52) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g 21 Apr 2020), cryptography 3.1, Platform Darwin-18.7.0-x86_64-i386-64bit 2020-09-17 11:54:11 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'PLT', 'CONCURRENT_REQUESTS': 14, 'CONCURRENT_REQUESTS_PER_DOMAIN': 14, 'DOWNLOAD_DELAY': 0.05, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'PLT.spiders', 'SPIDER_MODULES': ['PLT.spiders']} 2020-09-17 11:54:11 [scrapy.extensions.telnet] INFO: Telnet Password: # blocked 2020-09-17 11:54:11 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats'] 2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] ======= 17_Sep_2020_11_54_12 ======= 2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled item pipelines: ['PLT.pipelines.PltPipeline'] 2020-09-17 11:54:12 [scrapy.core.engine] INFO: Spider opened 2.Ending log lines (INFO: Dumping Scrapy stats to end). 
2020-09-17 11:16:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 15842233, 'downloader/request_count': 42031, 'downloader/request_method_count/GET': 42031, 'downloader/response_bytes': 1108804016, 'downloader/response_count': 42031, 'downloader/response_status_count/200': 41999, 'downloader/response_status_count/403': 9, 'downloader/response_status_count/404': 1, 'downloader/response_status_count/504': 22, 'dupefilter/filtered': 110, 'elapsed_time_seconds': 3325.171148, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2020, 9, 17, 10, 16, 43, 258108), 'httperror/response_ignored_count': 10, 'httperror/response_ignored_status_count/403': 9, 'httperror/response_ignored_status_count/404': 1, 'item_scraped_count': 20769, 'log_count/INFO': 75, 'memusage/max': 2707484672, 'memusage/startup': 100196352, 'request_depth_max': 2, 'response_received_count': 42009, 'retry/count': 22, 'retry/reason_count/504 Gateway Time-out': 22, 'scheduler/dequeued': 42031, 'scheduler/dequeued/memory': 42031, 'scheduler/enqueued': 42031, 'scheduler/enqueued/memory': 42031, 'start_time': datetime.datetime(2020, 9, 17, 9, 21, 18, 86960)} 2020-09-17 11:16:43 [scrapy.core.engine] INFO: Spider closed (finished) what value used for self.num_pages variable? The site I am scraping has around 20k products, and shows 48 per page. So it goes to the site, see's 20103 products, then divides by 48 (then math.ceil) to get the number of pages. Adding the output from scraping hub after updating the middleware downloader/request_bytes 2945159 downloader/request_count 16518 downloader/request_method_count/GET 16518 downloader/response_bytes 3366280619 downloader/response_count 16516 downloader/response_status_count/200 16513 downloader/response_status_count/404 3 dupefilter/filtered 7 elapsed_time_seconds 4805.867308 finish_reason memusage_exceeded finish_time 1600567332341 httperror/response_ignored_count 3 httperror/response_ignored_status_count/404 3 item_scraped_count 8156 log_count/ERROR 1 log_count/INFO 94 memusage/limit_reached 1 memusage/max 1074937856 memusage/startup 109555712 request_depth_max 2 response_received_count 16516 retry/count 2 retry/reason_count/504 Gateway Time-out 2 scheduler/dequeued 16518 scheduler/dequeued/disk 16518 scheduler/enqueued 17280 scheduler/enqueued/disk 17280 start_time 1600562526474
1.Scheruler queue/Active requests with self.numpages = 418. this code lines will create 418 request objects (including -to ask OS to delegate memory to hold 418 objects) and put them into scheduler queue : for page in tqdm(range(1, self.num_pages+1)): url = 'www.example.com/page={page}' yield scrapy.Request(url = url, headers=headers, callback = self.prod_url) each "page" request generate 48 new requests. each "product page" request generate 1 "api_call" request each "api_call" request returns item object. As all requests have equal priority - on the worst case application will require memory to hold ~20000 request/response objects in RAM at once. In order to exclude this cases priority parameter can be added to scrapy.Request. And probably You will need to change spider configuration to something like this: def start_requests(self): yield scrapy.Request(url = 'www.example.com/page=1', headers=headers, callback = self.prod_url) def prod_url(self, response): #get number of page next_page_number = int(response.url.split("/page=")[-1] + 1 #... for url in urls_item['urls']: yield scrapy.Request(url = url, headers=headers, callback = self.parse, priority = 1) if next_page_number < self.num_pages: yield scrapy.Request(url = f"www.example.com/page={str(next_page_number)}" def parse(self, response) # Parse the main product page #.... try: new_url = 'www.exampleAPI.com/' + item['active_col_num'] except TypeError: new_url = 'www.exampleAPI.com/{dummy_number}' yield scrapy.Request(new_url, callback=self.parse_attr, meta={'item': item}, priority = 2) With this spider configuration - spider will process product pages of next page only when it finish processing products from previous pages and your application will not receive long queue of requests/response. 2.Http compression A lot websites compress html code to reduce traffic load. For example Amazon website compress it's product pages using gzip. Average size of compressed html of amazon product page ~250Kb Size of uncompressed html can exceed ~1.5Mb. In case if Your website use compression and response sizes of uncompressed html is similar to size of amazon product pages - app will require to spend a lot of memory to hold both compressed and uncompressed response bodies. And DownloaderStats middleware that populates downloader/response_bytes stats parameter will not count size of uncompresses responses as it's process_response method called before process_response method of HttpCompressionMiddleware middleware. In order to check it you will need to change priority of Downloader stats middleware by adding this to settings: DOWNLOADER_MIDDLEWARES = { 'scrapy.downloadermiddlewares.stats.DownloaderStats':50 } In this case: downloader/request_bytes stats parameter - will be reduced as it will not count size of some headers populated by middlewares. downloader/response_bytes stats parameter - will be greatly increased in case if website uses compression.
6
4
63,972,580
2020-9-19
https://stackoverflow.com/questions/63972580/how-to-download-a-nested-json-into-a-pandas-dataframe
Looking to sharpen my data science skills. I am practicing url data pulls from a sports site and the json file has multiple nested dictionaries. I would like to be able to pull this data to map my own custom form of the leaderboard in matplotlib, etc., but am having a hard time getting the json to a workable df. The main website is: https://www.usopen.com/scoring.html Looking at the background I believe the live info is being pulled from the link listed in the short code below. I'm working in Jupyter notebooks. I can get the data successfully pulled. But as you can see, it is pulling multiple nested dictionaries which is making it very difficult in getting a simple dataframe pulled. Was just looking to get player, score to par, total, and round pulled. Any help would be greatly appreciated, thank you! import pandas as pd import urllib as ul import json url = "https://gripapi-static-pd.usopen.com/gripapi/leaderboard.json" response = ul.request.urlopen(url) data = json.loads(response.read()) print(data)
Simple and Quick Solution. A better solution might exist with JSON normalize from pandas but this is fairly good for your use case. def func(x): if not any(x.isnull()): return (x['round'], x['player']['firstName'], x['player']['identifier'], x['toParToday']['value'], x['totalScore']['value']) df = pd.DataFrame(data['standings']) df['round'] = data['currentRound']['name'] df = df[['player', 'toPar', 'toParToday', 'totalScore', 'round']] info = df.apply(func, axis=1) info_df = pd.DataFrame(list(info.values), columns=['Round', 'player_name', 'pid', 'to_par_today', 'totalScore']) info_df.head()
10
2
63,971,973
2020-9-19
https://stackoverflow.com/questions/63971973/celery-in-docker-container-error-mainprocess-consumer-cannot-connect-to-redis
A lot of frustration on this, been trying to make it work for days. I beg for help. It's a Django project with Postgres, Celery and Docker. First I tried with RabbitMQ, and I had the same error than now with Redis, then I changed to redis after multiple tries and the error is still the same, so I think the problem is around Celery, not RabbitMQ/Redis. Dockerfile: FROM python:3.8.5-alpine ENV PYTHONUNBUFFERED 1 RUN apk update \ # psycopg2 dependencies && apk add --virtual build-deps gcc python3-dev musl-dev \ && apk add postgresql-dev \ # Pillow dependencies && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \ # Translation dependencies && apk add gettext \ # CFFI dependencies && apk add libffi-dev py-cffi \ && apk add --no-cache openssl-dev libffi-dev \ && apk add --no-cache --virtual .pynacl_deps build-base python3-dev libffi-dev RUN mkdir /app WORKDIR /app COPY requirements.txt /app/ RUN pip install -r requirements.txt COPY . /app/ docker-compose.yml: version: '3' volumes: local_postgres_data: {} services: postgres: image: postgres environment: - POSTGRES_DB=postgres - POSTGRES_USER=postgres - POSTGRES_PASSWORD=postgres volumes: - local_postgres_data:/var/lib/postgresql/data env_file: - ./.envs/.postgres django: &django build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/app/ ports: - "8000:8000" depends_on: - postgres redis: image: redis:6.0.8 celeryworker: <<: *django image: pyrty_celeryworker depends_on: - redis - postgres ports: [] command: celery -A pyrty worker -l INFO celerybeat: <<: *django image: pyrty_celerybeat depends_on: - redis - postgres ports: [] command: celery -A pyrty beat -l INFO pyrty/pyrty/celery.py: from __future__ import absolute_import, unicode_literals import os from celery import Celery os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'pyrty.settings') app = Celery('pyrty') app.config_from_object('django.conf:settings', namespace='CELERY') app.autodiscover_tasks() @app.task(bind=True) def debug_task(self): print('Request: {0!r}'.format(self.request)) pyrty/pyrty/settings.py: # Celery conf CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0' #also tried localhost and CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/0' #also tried without the '/0' CELERY_ACCEPT_CONTENT = ['json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_TIMEZONE = 'America/Argentina/Buenos_Aires' pyrty/pyrty/init.py: from __future__ import absolute_import, unicode_literals from .celery import app as celery_app __all__ = ('celery_app',) requirements.txt: Django==3.1 psycopg2==2.8.3 djangorestframework==3.11.0 celery==4.4.7 redis==3.5.3 Pillow==7.1.2 django-extensions==2.2.9 amqp==2.6.1 billiard==3.6.3 kombu==4.6.11 vine==1.3.0 pytz==2020.1 That's all the configuration, then when I do docker-compose up I get the following (regarding Celery and Redis) in the terminal: redis_1 | 1:M 19 Sep 2020 18:09:08.117 # Server initialized redis_1 | 1:M 19 Sep 2020 18:09:08.117 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 
redis_1 | 1:M 19 Sep 2020 18:09:08.118 * Loading RDB produced by version 6.0.8 redis_1 | 1:M 19 Sep 2020 18:09:08.118 * RDB age 16 seconds redis_1 | 1:M 19 Sep 2020 18:09:08.118 * RDB memory usage when created 0.77 Mb redis_1 | 1:M 19 Sep 2020 18:09:08.118 * DB loaded from disk: 0.000 seconds redis_1 | 1:M 19 Sep 2020 18:09:08.118 * Ready to accept connections celeryworker_1 | celeryworker_1 | -------------- celery@f334b468b079 v4.4.7 (cliffs) celeryworker_1 | --- ***** ----- celeryworker_1 | -- ******* ---- Linux-5.4.0-47-generic-x86_64-with 2020-09-19 18:09:16 celeryworker_1 | - *** --- * --- celeryworker_1 | - ** ---------- [config] celeryworker_1 | - ** ---------- .> app: pyrty:0x7fd280ac7640 celeryworker_1 | - ** ---------- .> transport: redis://127.0.0.1:6379/0 celeryworker_1 | - ** ---------- .> results: redis://127.0.0.1:6379/0 celeryworker_1 | - *** --- * --- .> concurrency: 6 (prefork) celeryworker_1 | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) celeryworker_1 | --- ***** ----- celeryworker_1 | -------------- [queues] celeryworker_1 | .> celery exchange=celery(direct) key=celery celeryworker_1 | celeryworker_1 | celeryworker_1 | [tasks] celeryworker_1 | . pyrty.celery.debug_task celeryworker_1 | celeryworker_1 | [2020-09-19 18:09:16,865: ERROR/MainProcess] consumer: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused.. celeryworker_1 | Trying again in 2.00 seconds... (1/100) celeryworker_1 | celeryworker_1 | [2020-09-19 18:09:18,871: ERROR/MainProcess] consumer: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused.. celeryworker_1 | Trying again in 4.00 seconds... (2/100) I really don't get what I am missing, I've been reading documentation but I cannot solve this. Please help!
Try updating your app settings to use the Compose service name redis as the hostname instead of 127.0.0.1:
# Celery conf
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
Reference: Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database. https://docs.docker.com/compose/networking/
10
7
63,969,194
2020-9-19
https://stackoverflow.com/questions/63969194/how-to-calculate-pairwise-mutual-information-for-entire-pandas-dataset
I have 50 variables in my dataframe. 46 are dependent variables and 4 are independent variables (precipitation, temperature, dew, snow). I want to calculate the mutual information of my dependent variables against my independent ones. So in the end I want a dataframe with one row per dependent variable and one column per independent variable. Right now I am calculating it using the following, but it's taking so long because I have to change my y each time: X = df[['Temperature', 'Precipitation','Dew','Snow']] # Features y = df[['N0037']] #target from sklearn.feature_selection import mutual_info_regression mi = mutual_info_regression(X, y) mi /= np.max(mi) mi = pd.Series(mi) mi.index = X.columns mi.sort_values(ascending=False) print(mi) How can I calculate pairwise mutual information for the entire dataset?
Using list comprehension: indep_vars = ['Temperature', 'Precipitation', 'Dew', 'Snow'] # set independent vars dep_vars = df.columns.difference(indep_vars).tolist() # set dependent vars from sklearn.feature_selection import mutual_info_regression as mi_reg df_mi = pd.DataFrame([mi_reg(df[indep_vars], df[dep_var]) for dep_var in dep_vars], index = dep_vars, columns = indep_vars).apply(lambda x: x / x.max(), axis = 1)
9
4
63,967,363
2020-9-19
https://stackoverflow.com/questions/63967363/how-to-speed-up-numpy-all-and-numpy-nonzero
I need to check if a point lies inside a bounding cuboid. The number of cuboids is very large (~4M). The code I come up with is: import numpy as np # set the numbers of points and cuboids n_points = 64 n_cuboid = 4000000 # generate the test data points = np.random.rand(1, 3, n_points)*512 cuboid_min = np.random.rand(n_cuboid, 3, 1)*512 cuboid_max = cuboid_min + np.random.rand(n_cuboid, 3, 1)*8 # main body: check if the points are inside the cuboids inside_cuboid = np.all((points > cuboid_min) & (points < cuboid_max), axis=1) indices = np.nonzero(inside_cuboid) It takes 8 seconds to run np.all and 3 seconds to run np.nonzero on my computer. Any idea to speed up the code?
We can reduce memory congestion for all-reduction with slicing along the smallest axis length of 3 to get inside_cuboid - out = (points[0,0,:] > cuboid_min[:,0]) & (points[0,0,:] < cuboid_max[:,0]) & \ (points[0,1,:] > cuboid_min[:,1]) & (points[0,1,:] < cuboid_max[:,1]) & \ (points[0,2,:] > cuboid_min[:,2]) & (points[0,2,:] < cuboid_max[:,2]) Timings - In [43]: %timeit np.all((points > cuboid_min) & (points < cuboid_max), axis=1) 2.49 s Β± 20 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) In [51]: %%timeit ...: out = (points[0,0,:] > cuboid_min[:,0]) & (points[0,0,:] < cuboid_max[:,0]) & \ ...: (points[0,1,:] > cuboid_min[:,1]) & (points[0,1,:] < cuboid_max[:,1]) & \ ...: (points[0,2,:] > cuboid_min[:,2]) & (points[0,2,:] < cuboid_max[:,2]) 1.95 s Β± 10.6 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
6
3
63,965,503
2020-9-19
https://stackoverflow.com/questions/63965503/return-value-from-list-according-to-index-number
I have been struggling to find the pandas solution to this without looping: input: df = pd.DataFrame({'A' : [[6,1,1,1], [1,5,1,1], [1,1,11,1], [1,1,1,20]]}) A 0 [6, 1, 1, 1] 1 [1, 5, 1, 1] 2 [1, 1, 11, 1] 3 [1, 1, 1, 20] output: A B 0 [6, 1, 1, 1] 6 1 [1, 5, 1, 1] 5 2 [1, 1, 11, 1] 11 3 [1, 1, 1, 20] 20 I have tried so many different things over the past hour or so, and I know the solution will be an embarrassingly simple one-liner. Thanks for your help -- not my python day today!
You can do a simple list comprehension: df['B'] = [s[i] for i, s in zip(df.index, df['A'])] Or if you want only diagonal values: df['B'] = np.diagonal([*df['A']]) A B 0 [6, 1, 1, 1] 6 1 [1, 5, 1, 1] 5 2 [1, 1, 11, 1] 11 3 [1, 1, 1, 20] 20
6
5
63,955,752
2020-9-18
https://stackoverflow.com/questions/63955752/topologicalerror-the-operation-geosintersection-r-could-not-be-performed
Hi Guys, I am trying to map the district shapefile into assembly constituencies. I have shape files for [Both].Basically I have to map all the variables given at district level in census data to assembly constituency level. So I am following a pycon talk. Everything is working fine but I am getting error in get_intersection function.Error for that is TopologicalError: The operation 'GEOSIntersection_r' could not be performed. Likely cause is invalidity of the geometry <shapely.geometry.polygon.Polygon object at 0x7f460250ce10>. I have tried to use both pygeos and rtree. There were some links which said that problem is in pygeos. So,I used rtree.But of no avail.Please help Thanks in advance. Code tried by me is constituencies=gpd.GeoDataFrame.from_file('/content/AC_All_Final.shp') districts=gpd.GeoDataFrame.from_file('/content/2001_Dist.shp') districts['AREA'] = districts.geometry.area constituencies['AREA'] = constituencies.geometry.area merged = gpd.sjoin(districts, constituencies).reset_index().rename( columns={'index': 'index_left'}) def get_intersection(row): left_geom = districts['geometry'][row['index_left']] right_geom = constituencies['geometry'][row['index_right']] return left_geom.intersection(right_geom) ***Error is at this point*** merged['geometry'] = merged.apply(get_intersection, axis=1) merged['AREA'] = merged.geometry.area Error trace is given below: TopologyException: Input geom 1 is invalid: Ring Self-intersection at or near point 77.852561819157373 14.546596140487276 at 77.852561819157373 14.546596140487276 --------------------------------------------------------------------------- TopologicalError Traceback (most recent call last) <ipython-input-17-8123669e025c> in <module>() 4 return left_geom.intersection(right_geom) 5 ----> 6 merged['geometry'] = merged.apply(get_intersection, axis=1) 7 merged['AREA'] = merged.geometry.area 7 frames /usr/local/lib/python3.6/dist-packages/shapely/topology.py in _check_topology(self, err, *geoms) 36 "The operation '%s' could not be performed. " 37 "Likely cause is invalidity of the geometry %s" % ( ---> 38 self.fn.__name__, repr(geom))) 39 raise err 40 TopologicalError: The operation 'GEOSIntersection_r' could not be performed. Likely cause is invalidity of the geometry <shapely.geometry.polygon.Polygon object at 0x7f460250ce10>
The error message tells you exactly what is going on. Some of your geometries are not valid, so you have to make them valid before doing your apply. The simple trick, which works in most of the cases is using buffer(0). merged['geometry'] = merged.buffer(0) Since the issue is with geometry validity and is raised by GEOS, it does not matter if you use shapely/rtree backend or pygeos.
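As a side note, a minimal sketch (using the districts and constituencies GeoDataFrames from the question) of how to see which geometries are actually invalid before repairing them with buffer(0):
# flag the offending polygons first (purely illustrative check)
print(districts[~districts.is_valid].index)
print(constituencies[~constituencies.is_valid].index)

# repair in place, then rebuild the sjoin/intersection as in the question
districts['geometry'] = districts.buffer(0)
constituencies['geometry'] = constituencies.buffer(0)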
8
13
63,962,454
2020-9-18
https://stackoverflow.com/questions/63962454/numpy-find-index-of-second-highest-value-in-each-row-of-an-ndarray
I have a [10,10] numpy.ndarray. I am trying to get the index the second highest number in each row. So for the array: [101 0 1 0 0 0 1 1 2 0] [ 0 116 1 0 0 0 0 0 1 0] [ 1 4 84 2 2 0 2 4 6 1] [ 0 2 0 84 0 6 0 2 3 0] [ 0 0 1 0 78 0 0 2 0 11] [ 2 0 0 1 1 77 5 0 2 0] [ 1 2 1 0 1 2 94 0 1 0] [ 0 1 1 0 0 0 0 96 0 4] [ 1 5 4 3 1 3 0 1 72 4] [ 0 1 0 0 3 2 0 7 0 82] Expected result: [8, 2, 8, 5, 9, ...] Any suggestions?
The amazing numpy.argsort() function makes this task really simple. Once the sorted indices are found, get the second to last column. m = np.array([[101, 0, 1, 0, 0, 0, 1, 1, 2, 0], [ 0, 116, 1, 0, 0, 0, 0, 0, 1, 0], [ 1, 4, 84, 2, 2, 0, 2, 4, 6, 1], [ 0, 2, 0, 84, 0, 6, 0, 2, 3, 0], [ 0, 0, 1, 0, 78, 0, 0, 2, 0, 11], [ 2, 0, 0, 1, 1, 77, 5, 0, 2, 0], [ 1, 2, 1, 0, 1, 2, 94, 0, 1, 0], [ 0, 1, 1, 0, 0, 0, 0, 96, 0, 4], [ 1, 5, 4, 3, 1, 3, 0, 1, 72, 4], [ 0, 1, 0, 0, 3, 2, 0, 7, 0, 82]]) # Get index for the second highest value. m.argsort()[:,-2] Output: array([8, 8, 8, 5, 9, 6, 5, 9, 1, 7], dtype=int32)
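For very large arrays, a hedged alternative sketch (reusing m and np from above): np.argpartition does a partial sort per row instead of a full argsort and returns indices of the second-highest values (with ties it may pick a different, equal-valued index than argsort).
# only guarantees the element at position -2 is in its sorted place
second_highest_idx = np.argpartition(m, -2, axis=1)[:, -2]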
7
15
63,945,330
2020-9-17
https://stackoverflow.com/questions/63945330/plotly-how-to-add-text-labels-to-a-histogram
Is there a way to display the counted value of the histogram aggregate in the Plotly Express histogram? px.histogram(pd.DataFrame({"A":[1,1,1,2,2,3,3,3,4,4,4,5]}),x="A") If I use a regular bar chart, I can specify the text parameter, which points to the column containing the value to display. px.bar(pd.DataFrame({"val":[1,2,3,4,5], "height": [3,2,3,3,1]}), x="val", y="height", text="height") But with histograms, this value is calculated and it's not even part of the fig.to_dict(). Is there a way to add text labels to a histogram? Using the answers below, I've summarized the findings in an article - https://towardsdatascience.com/histograms-with-plotly-express-complete-guide-d483656c5ad7
As far as I know, plotly histograms do not have a text attribute. It also turns out that it's complicated if at all possible to retrieve the applied x and y values and just throw them into appropriate annotations. Your best option seems to be to take care of the binning using numpy.histogram and the set up your figure using go.Bar. The code snippet below will produce the following plot: Complete code: import numpy as np import plotly.express as px import plotly.graph_objects as go # sample data df = px.data.tips() # create bins bins = [0, 10, 20, 30, 40, 50] counts, bins = np.histogram(df.total_bill, bins=bins) #bins2 = 0.5 * (bins1[:-1] + bins2[1:]) fig = go.Figure(go.Bar(x=bins, y=counts)) fig.data[0].text = counts fig.update_traces(textposition='inside', textfont_size=8) fig.update_layout(bargap=0) fig.update_traces(marker_color='blue', marker_line_color='blue', marker_line_width=1, opacity=0.4) fig.show()
7
3
63,954,442
2020-9-18
https://stackoverflow.com/questions/63954442/how-to-parse-unix-timestamp-into-datetime-without-timezone-in-fast-api
Assume I have a pydantic model class EventEditRequest(BaseModel): uid: UUID name: str start_dt: datetime end_dt: datetime I send request with body b'{"uid":"a38a7543-20ca-4a50-ab4e-e6a3ae379d3c","name":"test event2222","start_dt":1600414328,"end_dt":1600450327}' So both start_dt and end_dt are unix timestamps. But in endpoint they become datetimes with timezones. @app.put('...') def edit_event(event_data: EventEditRequest): event_data.start_dt.tzinfo is not None # True I don't want to manually edit start_dt and end_dt in the endpoint function to get rid of timezones. How can I set up my pydantic model so it will make datetime without timezones?
You can use own @validator to parse datetime manually: from datetime import datetime from pydantic import BaseModel, validator class Model(BaseModel): dt: datetime = None class ModelNaiveDt(BaseModel): dt: datetime = None @validator("dt", pre=True) def dt_validate(cls, dt): return datetime.fromtimestamp(dt) print(Model(dt=1600414328)) print(ModelNaiveDt(dt=1600414328)) Output: dt=datetime.datetime(2020, 9, 18, 7, 32, 8, tzinfo=datetime.timezone.utc) dt=datetime.datetime(2020, 9, 18, 10, 32, 8)
6
8
63,954,751
2020-9-18
https://stackoverflow.com/questions/63954751/does-it-make-a-difference-if-you-iterate-over-a-list-or-a-tuple-in-python
I'm currently trying the wemake-python-styleguide and found WPS335: Using lists, dicts, and sets do not make much sense. You can use tuples instead. Using comprehensions implicitly create a two level loops, that are hard to read and deal with. It gives this example: # Correct: for person in ('Kim', 'Nick'): ... # Wrong: for person in ['Kim', 'Nick']: ... Is this purely personal preference or is there more that speaks for using a tuple? I can only think about speed, but I cannot imagine that this makes a difference. I think I have seen more people using lists and I wonder if there is a reason to change it.
Using lists instead of tuples as constants makes no difference in CPython. As of some versions, both are compiled to tuples. >>> dis.dis(""" ... for person in ["Kim", "Nick"]: ... ... ... """) 2 0 SETUP_LOOP 12 (to 14) 2 LOAD_CONST 0 (('Kim', 'Nick')) 4 GET_ITER >> 6 FOR_ITER 4 (to 12) 8 STORE_NAME 0 (person) 3 10 JUMP_ABSOLUTE 6 >> 12 POP_BLOCK >> 14 LOAD_CONST 1 (None) 16 RETURN_VALUE Note how the list literal was transformed to a LOAD_CONST (('Kim', 'Nick')) instruction of a tuple. As for preference, CPython prefers tuple. If you have the choice, you should do so as well.
7
6
63,953,605
2020-9-18
https://stackoverflow.com/questions/63953605/should-i-commit-static-files-into-git-repo-with-django-project
Should I commit and push my Django project's static files into my git repo? I know there is a collectstatic command, but that's just for prod deployment, right? I work on the same project from 2 different computers, and I have static files on one that I don't have on the other. Am I supposed to run collectstatic from one to the other? I understand neither how to use this command nor what it does. Thank you for helping. Best regards.
Yes, normally we commit static files. The command collectstatic just copies the static files from the individual app folders into one general folder (normally used only on the PROD server). But the static files should already be present (and committed) in the individual app folders, so that each development PC and also the PROD server have the same static files.
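A minimal sketch of the settings this workflow assumes (values are illustrative, not taken from the question): the per-app static folders stay in the repo, while the collectstatic target is usually git-ignored.
# settings.py
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'  # collectstatic copies everything here; typically listed in .gitignore
# committed sources stay under each app, e.g. myapp/static/myapp/style.css

# run on the PROD server (or in the deploy script), not on every dev machine:
#   python manage.py collectstatic --noinput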
6
8
63,949,141
2020-9-18
https://stackoverflow.com/questions/63949141/get-indices-of-items-in-numpy-array-where-values-is-in-list
Is there a numpy way (and without for loop) to extract all the indices in a numpy array list_of_numbers, where values are in a list values_of_interest? This is my current solution: list_of_numbers = np.array([11,0,37,0,8,1,39,38,1,0,1,0]) values_of_interest = [0,1,38] indices = [] for value in values_of_interest: this_indices = np.where(list_of_numbers == value)[0] indices = np.concatenate((indices, this_indices)) print(indices) # this shows [ 1. 3. 9. 11. 5. 8. 10. 7.]
Use numpy.argwhere with numpy.isin: np.argwhere(np.isin(list_of_numbers, values_of_interest)).ravel() Output: array([ 1, 3, 5, 7, 8, 9, 10, 11])
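An equivalent one-liner sketch, if you prefer to skip the ravel step:
np.flatnonzero(np.isin(list_of_numbers, values_of_interest))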
7
12
63,936,578
2020-9-17
https://stackoverflow.com/questions/63936578/docker-how-to-make-python-3-8-as-default
I'm trying to update an existing Dockerfile to switch from python3.5 to python3.8, previously it was creating a symlink for python3.5 and pip3 like this: RUN ln -s /usr/bin/pip3 /usr/bin/pip RUN ln -s /usr/bin/python3 /usr/bin/python I've updated the Dockerfile to install python3.8 from deadsnakes:ppa apt-get install python3-pip python3.8-dev python3.8-distutils python3.8-venv if I remove python3-pip, it complains about gcc C compiler or Python headers are not installed on this system. Try to run: sudo apt-get install gcc python3-dev with these installations in place I'm trying to update existing symlink creation something like this: RUN ln -s /usr/bin/pip3 /usr/local/lib/python3.8/dist-packages/pip RUN ln -s /usr/bin/pip /usr/local/lib/python3.8/dist-packages/pip RUN ln -s /usr/bin/python3.8 /usr/bin/python3 it fails, saying ln: failed to create symbolic link '/usr/bin/python3': File exists which I assume fails because python3 points to python3.6. if I try: RUN ln -s /usr/bin/python3.8 /usr/bin/python it doesn't complain about symlink and image gets build successfully, but fails while installing requirements later (we use Makefile targets to install dependencies inside the container using pip and pip-sync): ERROR: Cannot uninstall 'python-apt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. which I assume because python-apt gets installed as part of the default python3.6 installation and python3.8 pip can't uninstall it. PS: my Dockerfile image is based on Ubunut 18.04 which comes with python3.6 as default. How can I properly switch Dockerfile / image from python3.5 to python3.8? so I can later use pip directly and it points to python3.8's pip
Replacing the system python in this way is usually not a good idea (as it can break operating-system-level programs which depend on those executables) -- I go over that a little bit in this video I made "why not global pip / virtualenv?" A better way is to create a prefix and put that on the PATH earlier (this allows system executables to continue to work, but bare python / python3 / etc. will use your other executable) in the case of deadsnakes which it seems like you're using, something like this should work: FROM ubuntu:bionic RUN : \ && apt-get update \ && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ software-properties-common \ && add-apt-repository -y ppa:deadsnakes \ && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ python3.8-venv \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* \ && : RUN python3.8 -m venv /venv ENV PATH=/venv/bin:$PATH the ENV line is the key here, that puts the virtualenv on the beginning of the path $ docker build -t test . ... $ docker run --rm -ti test bash -c 'which python && python --version && which pip && pip --version' /venv/bin/python Python 3.8.5 /venv/bin/pip pip 20.1.1 from /venv/lib/python3.8/site-packages/pip (python 3.8) disclaimer: I'm the maintainer of deadsnakes
12
18
63,941,547
2020-9-17
https://stackoverflow.com/questions/63941547/what-is-the-difference-between-a-variable-and-a-parameter
I am learning Python 3 and programming in general for the first time, but I can't seem to distinguish between a parameter and a variable. What is the difference?
A variable is just something that refers/points to some data you have. x = 5 Here x is a variable. Variables can point to more kinds of data than just numbers, though. They can point to strings, functions, etc. A parameter is something that is passed into a function def my_function(y): print(y) Here y is a parameter. It doesn't contain a value yet. But if I want to call the function, I need to provide an argument to the function. An argument is the actual value you provide to the function that replaces the parameter. my_function(5) Here, 5 is the argument. Of course, since x points to the value "5", I can do this too: my_function(x) which also prints 5
6
6
63,940,952
2020-9-17
https://stackoverflow.com/questions/63940952/py-works-but-not-python-in-command-prompt-for-windows-10
I installed Python on my computer. When I type python in the command prompt I get the following message: 'python' is not recognized as an internal or external command, operable program or batch file. But when I type py it seems to be working and I get the following: Python 3.7.0 (v3.7.0, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Why is this happening? FYI: I checked the path variable in the environment variables and I don't see any path to the Python installation. But then how is VS Code able to find the path to python.exe and able to run Python code? I am confused.
py is itself located in C:\Windows (which is always part of the PATH), which is why you find it. When you installed Python, you didn't check the box to add it to your PATH, which is why it isn't there. In general, it's best to use the Windows Python Launcher, py.exe anyway, so this is no big deal. Just use py for launching consistently, and stuff will just work. Similarly, if py.exe was associated with the .py extension at installation time, a standard shebang line (details in PEP linked above) will let you run the script without even typing py. I don't know precisely what VSCode uses to find Python (using py.exe directly, using a copy of Python that ships with the editor, performing registry lookup, a config file that just says where to find it, etc.), but that's not really relevant to running your scripts yourself.
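A tiny illustrative sketch (the file name is made up): with the launcher associated with .py, a shebang line lets the script pick its interpreter even though python is not on PATH.
#!/usr/bin/env python3
# hello.py
import sys
print(sys.version)
Run it as py hello.py (or just hello.py if the file association is set up); py -3.7 hello.py forces a specific installed version.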
14
8
63,936,321
2020-9-17
https://stackoverflow.com/questions/63936321/sqlalchemy-how-can-i-execute-a-raw-insert-sql-query-in-a-postgres-database
I’m building an app using Python and the clean architecture principles, with TDD. Some unit tests require executing some raw SQL queries against an in-memory database. I am trying to switch from sqlite to postgresql inmemory data, using pytest-postgres. Problem When using sqlite inmemory database, I can both insert and select data. When using Postgresql inmemory database, I can only SELECT (raw INSERT fails). Insert work in sqlite… s_tb_name = "tb_customer" ls_cols = ["first_name", "last_name", "email"] ls_vals = ['("John", "Doe", "[email protected]")', '("Jane", "Doe", "[email protected]")', '("Eric", "Dal", "[email protected]")'] s_cols = ', '.join(ls_cols) s_vals = ', '.join(ls_vals) session.execute(f"INSERT INTO {s_tb_name} ({s_cols}) VALUES ({s_vals})") …but fail in Postgres: E sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column "John" does not exist E LINE 1: ..., email) VALUES (("John".... From this psycopg documentation page, I understand this is due to pyscopg2. It prevents injecting raw dynamic SQL, and it seems I should add this : tb_sql_id = sql.Identifier(s_tb_name) cols_sql_id = sql.SQL(' ,').join(map(sql.Identifier, ls_cols)) vals_sql_id = sql.SQL(' ,').join(map(sql.Literal, ls_vals)) psycopg2_query = sql.SQL(f"INSERT INTO {tb_sql_id} ({cols_sql_id}) VALUES ({vals_sql_id})") but logically, sqlalchemy refuses to execute the psycopg2_query : sqlalchemy.exc.ArgumentError: SQL expression object expected, got object of type <class 'psycopg2.sql.SQL'> instead Question Is there a way to execute raw dynamic insert queries in Postgres using SQL Alchemy?
As pointed by others, injecting SQL like this is to be avoided in most cases. Here, the SQL is written in the unit test itself. There is no external input leaking to the SQL injection, which alleviates the security risk. Mike Organek’s solution did not fully work for me, but it pointed me to the right direction : I just had to also remove the parens from ls_vals. s_tb_name = "tb_customer" ls_cols = ["first_name", "last_name", "email"] ls_vals = ["'John', 'Doe', '[email protected]'", "'Jane', 'Doe', '[email protected]'", "'Eric', 'Dal', '[email protected]'"] s_cols = ', '.join(ls_cols) s_vals = '(' + '), ('.join(ls_vals) + ')' session.execute(f"INSERT INTO {s_tb_name} ({s_cols}) VALUES {s_vals}") This made the insert test pass, both when using the sqlite engine and the postgres engine.
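For completeness, a hedged sketch of the fully parameterized alternative (the driver binds the values, so no hand-quoting; the table name itself still has to be trusted or hard-coded), reusing the session from the question:
from sqlalchemy import text

rows = [
    {"first_name": "John", "last_name": "Doe", "email": "[email protected]"},
    {"first_name": "Jane", "last_name": "Doe", "email": "[email protected]"},
    {"first_name": "Eric", "last_name": "Dal", "email": "[email protected]"},
]
stmt = text(
    "INSERT INTO tb_customer (first_name, last_name, email) "
    "VALUES (:first_name, :last_name, :email)"
)
session.execute(stmt, rows)  # executemany-style: one dict per row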
9
5
63,930,512
2020-9-17
https://stackoverflow.com/questions/63930512/how-to-combine-three-string-columns-to-one-which-have-nan-values-in-pandas
I have the following dataframe: A B C 0 NaN NaN cat 1 dog NaN NaN 2 NaN cat NaN 3 NaN NaN dog I would like to add a column with the value that doesn't have the NaN value. So that: A B C D 0 NaN NaN cat cat 1 dog NaN NaN dog 2 NaN cat NaN cat 3 NaN NaN dog dog Would it be done using a lambda function, or fillna? Any help would be appreciated! Thanks!
use combine_first chained df['D'] = df.A.combine_first(df.B).combine_first(df.C) alternatively, forward fill and pick the last column df['D'] = df.ffill(axis=1).iloc[:,-1] # specifying the columns explicitly: df['D'] = df[['A', 'B', 'C']].ffill(1).iloc[:, -1]
8
6
63,929,902
2020-9-17
https://stackoverflow.com/questions/63929902/how-to-drop-row-at-certain-index-in-every-group-in-groupby-object
I'm trying to drop a row at a certain index in every group inside a GroupBy object. The best I have been able to manage is: import pandas as pd x_train = x_train.groupby('ID') x_train.apply(lambda x: x.drop([0], axis=0)) However, this doesn't work. I have spent a whole day on this with no solution, so I have turned to Stack Overflow. Edit: A solution for any index value is needed as well.
You can do it with cumcount idx= x_train.groupby('ID').cumcount() x_train = x_train[idx!=0]
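A sketch generalizing the same idea to any within-group position n (per the question's edit):
n = 2  # zero-based position to drop inside each group
pos = x_train.groupby('ID').cumcount()
x_train = x_train[pos != n]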
6
7
63,925,623
2020-9-16
https://stackoverflow.com/questions/63925623/where-should-i-put-abstract-classes-in-a-python-package
I am adding abstract classes to my python package like this: class AbstractClass(ABC): @abstractmethod def do_something(self): pass There will be multiple subclasses that inherit from AbstractClass like this: class SubClass(AbstractClass): def do_something(self): pass I am wondering if there are any conventions for python packages regarding abstract classes that I am unaware of. should the abstract classes be separated from the subclasses, or should they all be in the same directory? what about naming of abstract functions, any conventions there? I realize this is fairly subjective. I am asking for opinions, not what is 'right' or 'wrong' Thanks!
No convention comes to mind. If this is all for just one project, I'd be looking to put the abstract class in with the concrete subclasses at whatever level they're at. If all the subclasses were in one file, then I'd put the abstract class in that file too. If all the subclasses were in individual files in a dir, I'd put the abstract class right next to them in its own file. The reason I'd put the abstract class somewhere else is if I'm separating off utility code that can be reused with other projects. In short, an abstract class is just a piece of code. Just another class. Treat it like any other. As far as naming, the natural naming of the class given what it is/does is usually enough. If your abstract class were Animal, and your subclasses were Goat, Horse, etc., I'd see no reason to call your abstract class AbstractAnimal, as it's pretty clear already what's going on...that you wouldn't instantiate Animal directly. Also, if you're looking at a class thinking of reusing it, and you see an abstract method in it, then you know it's abstract.
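A tiny sketch of that naming point (names are purely illustrative):
from abc import ABC, abstractmethod

class Animal(ABC):  # no "Abstract" prefix needed; @abstractmethod already signals intent
    @abstractmethod
    def speak(self) -> str:
        ...

class Goat(Animal):
    def speak(self) -> str:
        return "baa"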
6
7
63,922,309
2020-9-16
https://stackoverflow.com/questions/63922309/could-not-open-requirements-file-errno-2-no-such-file-or-directory-requirem
I'm trying to build a docker image on my ubuntu 18.04 machine and I have located requirements.txt in the same building directory but still its showing up this error. Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt' The command '/bin/sh -c pip3 install -r requirements.txt' returned a non-zero code: 1 requirements.txt is to install python modules using pip3. requirement.txt: numpy opencv-contrib-python opencv-python scikit-image pillow imutils scikit-learn matplotlib progressbar2 beautifulsoup4 pandas matplotlib re2 regex json argparse pickle DockerFile: FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 COPY requirements.txt /home/usr/app/requirements.txt WORKDIR /home/usr/app/ RUN apt-get update && apt-get install -y python3 python3-pip sudo RUN pip3 install -r requirements.txt FROM tensorflow/tensorflow:latest-gpu-jupyter
I suspect that you haven't copied over your requirements.txt file to your Docker image. Typically you add the following lines to your Dockerfile to copy your requirements.txt file and install it using pip: COPY requirements.txt /tmp/requirements.txt RUN python3 -m pip install -r /tmp/requirements.txt If you don't explicitly copy over anything to your Docker image your image has no data save for what is on the base image.
15
38
63,911,610
2020-9-16
https://stackoverflow.com/questions/63911610/python-difference-between-yaml-load-and-yaml-safe-load
I am seeing that PyYAML truncates trailing zeros while loading from a YAML file if one uses yaml.safe_load(stream). It can be fixed if one uses yaml.load(stream, Loader=yaml.BaseLoader), but is that advisable? It works with yaml.load and the zeros are not truncated. I want to understand whether it would be safe to switch to yaml.load instead of yaml.safe_load. Example: Test yaml content: $cat test.yml number: 5.10 Code: $python -c 'import yaml, sys; content = yaml.safe_load(sys.stdin); print(content) ' < test.yml {'number': 5.1} << It truncates the 0 at the end. But that is due to the floating point value >> whereas what I want is the exact number as is. $python -c 'import yaml, sys; content = yaml.load(sys.stdin, Loader=yaml.BaseLoader); print(content) ' < test.yml {u'number': u'5.10'} Is changing it to yaml.load the correct approach?
yaml.safe_load(sys.stdin) just does yaml.load(sys.stdin, Loader=yaml.SafeLoader). The facilities to execute arbitrary Python code (which makes loading unsafe) are implemented in yaml.Loader which is used by default. yaml.BaseLoader does not contain them. Therefore, if you use yaml.BaseLoader, loading will not execute arbitrary Python code (that is, unless you yourself register custom constructors with yaml.BaseLoader).
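A quick check of that equivalence, using the test.yml from the question:
import yaml

with open("test.yml") as f:
    print(yaml.load(f, Loader=yaml.SafeLoader))  # {'number': 5.1} -- identical to yaml.safe_load(f)
with open("test.yml") as f:
    print(yaml.load(f, Loader=yaml.BaseLoader))  # {'number': '5.10'} -- BaseLoader leaves every scalar as a string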
25
24
63,909,351
2020-9-15
https://stackoverflow.com/questions/63909351/how-to-rotate-xticklabels-in-a-seaborn-catplot
I'm not able to rotate my xlabels in Seaborn/Matplotlib. I have tried many different solutions but not able to fix it. I have seen many related questions here on stackoverflow, but they have not worked for me. My current plot looks like this, but I want the xlabels to rotate 90. @staticmethod def plotPrestasjon(plot): sns.set(style="darkgrid") ax = sns.catplot(x="COURSE", y="FINISH", hue="COURSE", col="BIB#", data=plot, s=9, palette="Set2") ax.set(xlabel='COURSES', ylabel='FINISH (sec)') plt.show() I have tried: ax.set_xticklabels(ax.get_xticklabels(), rotation=90) But that fails to generate the plot. Any ideas how I can fix it?
The correct way to set the xticklabels for sns.catplot, according to the documentation, is with the .set_xticklabels method (.e.g. g.set_xticklabels(rotation=30)). Using a loop to iterate through the Axes, should be used if changes need to be made on a plot by plot basis, within the FacetGrid. Building structured multi-plot grids seaborn.FacetGrid g, or in the case of the OP, ax is a seaborn.axisgrid.FacetGrid When iterating through ax.axes.flat, axes is a <class 'matplotlib.axes._subplots.AxesSubplot'>, which has a wide array of class methods, including .get_xticklabels(). In cases with many columns, and col_wrap= is used, g.set_xticklabels(rotation=30) may result in removing all the xticklabels. Instead, labels= must be specified, g.set_xticklabels(g.axes.flat[-1].get_xticklabels(), rotation=30), where g.axes.flat[-1] should be the last facet axes with xticklabels. Alternatively, use g.tick_params(axis='x', rotation=30) Tested in python 3.12, matplotlib 3.8.1, seaborn 0.13.0 One Row & Multiple Columns import seaborn as sns # load data exercise = sns.load_dataset("exercise") # plot catplot g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise) # set rotation g.set_xticklabels(rotation=30) Multiple Rows & Columns # plot catplot g = sns.catplot(x="time", y="pulse", hue="diet", col="kind", col_wrap=2, data=exercise) # set rotation using one of the following options # g.set_xticklabels(labels=g.axes.flat[-1].get_xticklabels(), rotation=30) g.tick_params(axis='x', rotation=30) Using g.set_xticklabels(g.get_xticklabels(), rotation=30) results in an AttributeError. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-442-d1d39d8cc4f0> in <module> 1 g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise) ----> 2 g.set_xticklabels(g.get_xticklabels(), rotation=30) AttributeError: 'FacetGrid' object has no attribute 'get_xticklabels'
11
21
63,866,180
2020-9-13
https://stackoverflow.com/questions/63866180/how-to-convert-from-heic-to-jpg-in-python-on-windows
I'm trying to convert HEIC to JPG using Python. The only other answers about this topic used pyheif. I am on Windows and pyheif doesn't support Windows. Any suggestions? I am currently trying to use Pillow.
The following code will convert an HEIC file format to a PNG file format from PIL import Image import pillow_heif heif_file = pillow_heif.read_heif("HEIC_file.HEIC") image = Image.frombytes( heif_file.mode, heif_file.size, heif_file.data, "raw", ) image.save("./picture_name.png", format("png"))
15
25
63,838,471
2020-9-10
https://stackoverflow.com/questions/63838471/possible-to-enforce-type-hints
Is there any advantage to using the 'type hint' notation in python? import sys def parse(arg_line: int) -> str: print (arg_line) # passing a string, returning None if __name__ == '__main__': parse(' '.join(sys.argv[1:])) To me it seems like it complicates the syntax without providing any actual benefit (outside of perhaps within a development environment). Based on this: Are there any plans for python to contain type constraints within the language itself? What is the advantage of having a "type hint" ? Couldn't I just as easily throw that into the docstring or something? I also don't see this much in the python codebase itself as far as I can tell -- most types are enforced manually, for example: argparse.py and any other files I've glanced at in https://github.com/python/cpython/blob/3.7/Lib/.
Are there any plans for python to contain type constraints within the language itself? Almost certainly not, and definitely not before the next major version (4.x). What is the advantage of having a "type hint" ? Couldn't I just as easily throw that into the docstring or something? Off the top of my head, consider the following: Type hints can be verified with tooling like mypy. Type hints can be used by IDEs and other tooling to give hints and tips. E.g., when you're calling a function and you've just written foo(, the IDE can pick up on the type hints and display a box nearby that shows foo(x: int, y: List[int]). The advantage to you as a developer is that you have exactly the information you need exposed to you and don't have to munge an entire docstring. Type hints can be used by modules like functools.singledispatch or external libraries like multipledispatch to add additional type-related features (in this case, dispatching function calls based on name and type, not just name).
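A small sketch of that last point: functools.singledispatch picks an implementation from the type hint on the first argument (Python 3.7+).
from functools import singledispatch

@singledispatch
def describe(x) -> str:
    return f"something else: {x!r}"

@describe.register
def _(x: int) -> str:  # dispatch inferred from the `x: int` annotation
    return f"an int: {x}"

@describe.register
def _(x: list) -> str:
    return f"a list of {len(x)} items"

print(describe(3), describe([1, 2]), describe("hi"))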
27
21
63,847,850
2020-9-11
https://stackoverflow.com/questions/63847850/find-which-python-package-provides-a-specific-import-module
Without getting confused, there are tons of questions about installing Python packages, how to import the resulting modules, and listing what packages are available. But there doesn't seem to be the equivalent of a --what-provides option for pip, if you don't have a pip-style requirements.txt file or a Pipenv Pipfile. This question is similar to a previous question, but asks for the parent package, and not additional metadata. That said, these other questions did not get a lot of attention or many accepted answers - eg. How do you find python package metadata information given a module. So forging ahead... By way of example, there are two packages (to name a few) that will install a module called serial - namely pyserial and serial. So assuming that one of the packages was installed, we might find it by using pip list: python3 -m pip list | grep serial However, the problem comes in if the name of the package does not match the name of the module, or if you just want to find out what package to install, working on a legacy server or development machine. You can check the path of the imported module - which can give you a clue. But continuing the example... >>> import serial >>> print(serial.__file__) /usr/lib/python3.6/site-packages/serial/__init__.py It is in a serial directory, but only pyserial is in fact installed, not serial: > python3 -m pip list | grep serial pyserial 3.4 The closest I can come is to generate a requirements.txt via pipreqs ./ which may fail on a dependent child file (as it does with me), or to reverse check dependencies via Pipenv (which brings a whole set of new issues along to get it all setup): > pipenv graph --reverse cymysql==0.9.15 ftptool==0.7.1 netifaces==0.10.9 pip==20.2.2 PyQt5-sip==12.8.1 - PyQt5==5.15.0 [requires: PyQt5-sip>=12.8,<13] setuptools==50.3.0 wheel==0.35.1 Does anyone know of a command that I have missed for a simple solution to finding what pip package provides a particular module?
Use the packages_distributions() function from importlib.metadata (or importlib-metadata). So for example, in your case where serial is the name of the "import package": import importlib.metadata # or: `import importlib_metadata` importlib.metadata.packages_distributions()['serial'] This should return a list containing pyserial, which is the name of the "distribution package" (the name that should be used to pip-install). References https://importlib-metadata.readthedocs.io/en/stable/using.html#package-distributions https://github.com/python/importlib_metadata/pull/287/files For older Python versions and/or older versions of importlib-metadata... I believe something like the following should work: #!/usr/bin/env python3 import importlib.util import pathlib import importlib_metadata def get_distribution(file_name): result = None for distribution in importlib_metadata.distributions(): try: relative = ( pathlib.Path(file_name) .relative_to(distribution.locate_file('')) ) except ValueError: pass else: if distribution.files and relative in distribution.files: result = distribution break return result def alpha(): file_name = importlib.util.find_spec('serial').origin distribution = get_distribution(file_name) print("alpha", distribution.metadata['Name']) def bravo(): import serial file_name = serial.__file__ distribution = get_distribution(file_name) print("bravo", distribution.metadata['Name']) if __name__ == '__main__': alpha() bravo() This is just an example of code showing how to get the metadata of the installed project a specific module belongs to. The important bit is the get_distribution function, it takes a file name as an argument. It could be the file name of a module or package data. If that file name belongs to a project installed in the environment (via pip install for example) then the importlib.metadata.Distribution object is returned.
11
9
63,894,169
2020-9-15
https://stackoverflow.com/questions/63894169/pandas-datareader-importerror-cannot-import-name-urlencode
I am working fine with pandas_datareader, then today I installed below both yahoo finance from the below link trying to solve another issue. No data fetched Web.DataReader Panda pip install yfinance pip install fix_yahoo_finance After the above installtion, pandas_datareader cannot be used anymore. I googled it and I did add the below import, and pandas_datareader is still not working. from urllib.parse import urlencode Here is the error: from pandas_datareader import data File "C:\Users\yongn\Anaconda3\lib\site-packages\pandas_datareader\__init__.py", line 2, in <module> from .data import ( File "C:\Users\yongn\Anaconda3\lib\site-packages\pandas_datareader\data.py", line 11, in <module> from pandas_datareader.av.forex import AVForexReader File "C:\Users\yongn\Anaconda3\lib\site-packages\pandas_datareader\av\__init__.py", line 6, in <module> from pandas_datareader.base import _BaseReader File "C:\Users\yongn\Anaconda3\lib\site-packages\pandas_datareader\base.py", line 7, in <module> from pandas.io.common import urlencode ImportError: cannot import name 'urlencode'
I encountered exactly the same error. I am using python anaconda 2020_07 version. The solution is to use the latest pandas-datareader v0.9 from anaconda channel. If you use the pandas-datareader package from conda-forge which is using older version v0.8.1, you will encounter the error. This is the status as of 20Dec2020. I ran the command below to install the latest pandas-datareader package. conda install -c anaconda pandas-datareader The error message disappeared and the problem has been fixed. EDIT: If conda later downgrades pandas-datareader back to conda-forge older version, there's a fix. See https://stackoverflow.com/a/65386464/1709088
8
5
63,895,392
2020-9-15
https://stackoverflow.com/questions/63895392/seaborn-is-not-plotting-within-defined-subplots
I am trying to plot two displots side by side with this code fig,(ax1,ax2) = plt.subplots(1,2) sns.displot(x =X_train['Age'], hue=y_train, ax=ax1) sns.displot(x =X_train['Fare'], hue=y_train, ax=ax2) It returns the following result (two empty subplots followed by one displot each on two lines)- If I try the same code with violinplot, it returns result as expected fig,(ax1,ax2) = plt.subplots(1,2) sns.violinplot(y_train, X_train['Age'], ax=ax1) sns.violinplot(y_train, X_train['Fare'], ax=ax2) Why is displot returning a different kind of output and what can I do to output two plots on the same line?
seaborn.distplot has been DEPRECATED in seaborn 0.11 and is replaced with the following: displot(), a figure-level function with a similar flexibility over the kind of plot to draw. This is a FacetGrid, and does not have the ax parameter, so it will not work with matplotlib.pyplot.subplots. histplot(), an axes-level function for plotting histograms, including with kernel density smoothing. This does have the ax parameter, so it will work with matplotlib.pyplot.subplots. It is applicable to any of the seaborn FacetGrid plots that there is no ax parameter. Use the equivalent axes-level plot. Look at the documentation for the figure-level plot to find the appropriate axes-level plot function for your needs. See Figure-level vs. axes-level functions Figure-Level: relplot, displot, catplot Because the histogram of two different columns is desired, it's easier to use histplot. See How to plot in multiple subplots for a number of different ways to plot into maplotlib.pyplot.subplots Also review seaborn histplot and displot output doesn't match Tested in seaborn 0.11.1 & matplotlib 3.4.2 fig, (ax1, ax2) = plt.subplots(1, 2) sns.histplot(x=X_train['Age'], hue=y_train, ax=ax1) sns.histplot(x=X_train['Fare'], hue=y_train, ax=ax2) Imports and DataFrame Sample import seaborn as sns import matplotlib.pyplot as plt # load data penguins = sns.load_dataset("penguins", cache=False) # display(penguins.head()) species island bill_length_mm bill_depth_mm flipper_length_mm body_mass_g sex 0 Adelie Torgersen 39.1 18.7 181.0 3750.0 MALE 1 Adelie Torgersen 39.5 17.4 186.0 3800.0 FEMALE 2 Adelie Torgersen 40.3 18.0 195.0 3250.0 FEMALE 3 Adelie Torgersen NaN NaN NaN NaN NaN 4 Adelie Torgersen 36.7 19.3 193.0 3450.0 FEMALE Axes Level Plot With the data in a wide format, use sns.histplot .ravel, .flatten, and .flat all convert the axes array to 1-D. What is the difference between flatten and ravel functions in numpy? numpy difference between flat and ravel() How to fix 'numpy.ndarray' object has no attribute 'get_figure' when plotting subplots # select the columns to be plotted cols = ['bill_length_mm', 'bill_depth_mm'] # create the figure and axes fig, axes = plt.subplots(1, 2) axes = axes.ravel() # flattening the array makes indexing easier for col, ax in zip(cols, axes): sns.histplot(data=penguins[col], kde=True, stat='density', ax=ax) fig.tight_layout() plt.show() Figure Level Plot With the dataframe in a long format, use displot # create a long dataframe dfl = penguins.melt(id_vars='species', value_vars=['bill_length_mm', 'bill_depth_mm'], var_name='bill_size', value_name='vals') # display(dfl.head()) species bill_size vals 0 Adelie bill_length_mm 39.1 1 Adelie bill_depth_mm 18.7 2 Adelie bill_length_mm 39.5 3 Adelie bill_depth_mm 17.4 4 Adelie bill_length_mm 40.3 # plot sns.displot(data=dfl, x='vals', col='bill_size', kde=True, stat='density', common_bins=False, common_norm=False, height=4, facet_kws={'sharey': False, 'sharex': False}) Multiple DataFrames If there are multiple dataframes, they can be combined with pd.concat, and use .assign to create an identifying 'source' column, which can be used for row=, col=, or hue= # list of dataframe lod = [df1, df2, df3] # create one dataframe with a new 'source' column to use for row, col, or hue df = pd.concat((d.assign(source=f'df{i}') for i, d in enumerate(lod, 1)), ignore_index=True) See Import multiple csv files into pandas and concatenate into one DataFrame to read multiple files into a single dataframe with an identifying column.
31
44
63,830,284
2020-9-10
https://stackoverflow.com/questions/63830284/fastapi-and-pydantic-recursionerror-causing-exception-in-asgi-application
Description I've seen similar issues about self-referencing Pydantic models causing RecursionError: maximum recursion depth exceeded in comparison but as far as I can tell there are no self-referencing models included in the code. I'm just just using Pydantic's BaseModel class. The code runs successfully until the function in audit.py below tries to return the output from the model. I've included the full traceback as I'm not sure where to begin with this error. I've run the code with PyCharm and without an IDE and it always produces the traceback below but doesn't crash the app but returns a http status code of 500 to the front end. Any advice would be much appreciated. As suggested I have also tried sys.setrecursionlimit(1500) to increase the recursion limit. Environment OS: Windows 10 FastAPI Version: 0.61.1 Pydantic Version: 1.6.1 Uvicorn Version: 0.11.8 Python Version: 3.7.1 Pycharm Version: 2020.2 App main.py import uvicorn from fastapi import FastAPI from starlette.middleware.cors import CORSMiddleware from app.api.routes.router import api_router from app.core.logging import init_logging from app.core.config import settings init_logging() def get_app() -> FastAPI: application = FastAPI(title=settings.APP_NAME, version=settings.APP_VERSION, debug=settings.DEBUG) if settings.BACKEND_CORS_ORIGINS: # middleware support for cors application.add_middleware( CORSMiddleware, allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS], allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) application.include_router(api_router, prefix=settings.API_V1_STR) return application app = get_app() if __name__ == "__main__": uvicorn.run("main:app", host="127.0.0.1", port=80) router.py from fastapi import APIRouter from app.api.routes import audit api_router = APIRouter() api_router.include_router(audit.router, tags=["audit"], prefix="/audit") audit.py import validators from fastapi import APIRouter, HTTPException from loguru import logger from app.api.dependencies.audit import analyzer from app.schemas.audit import AuditPayload, AuditResult router = APIRouter() @router.post("/", response_model=AuditResult, name="audit", status_code=200) async def post_audit(payload: AuditPayload) -> AuditResult: logger.info("Audit request received") # validate URL try: logger.info("Validating URL") validators.url(payload.url) except HTTPException: HTTPException(status_code=404, detail="Invalid URL.") logger.exception("HTTPException - Invalid URL") # generate output from route audit.py logger.info("Running audit analysis. This could take up to 10 minutes. 
Maybe grab a coffee...") analyzed_output = analyzer.analyze(url=payload.url, brand=payload.brand, twitter_screen_name=payload.twitter_screen_name, facebook_page_name=payload.facebook_page_name, instagram_screen_name=payload.instagram_screen_name, youtube_user_name=payload.youtube_user_name, ignore_robots=payload.ignore_robots, ignore_sitemap=payload.ignore_sitemap, google_analytics_view_id=payload.google_analytics_view_id) output = AuditResult(**analyzed_output) return output audit_models.py from pydantic import BaseModel class AuditPayload(BaseModel): url: str brand: str twitter_screen_name: str facebook_page_name: str instagram_screen_name: str youtube_user_name: str ignore_robots: bool ignore_sitemap: bool google_analytics_view_id: str class AuditResult(BaseModel): base_url: str run_time: float website_404: dict website_302: dict website_h1_tags: dict website_duplicate_h1: dict website_h2_tags: dict website_page_duplications: dict website_page_similarities: dict website_page_desc_duplications: dict website_page_title_duplications: dict pages: list pages_out_links_404: dict = None pages_canonicals: dict seo_phrases: dict social: dict google_analytics_report: dict google_psi_desktop: dict google_psi_mobile: dict google_algo_updates: dict google_sb: list robots_txt: list This line throws the error in the logs: 2020-09-10 10:02:31.483 | ERROR | uvicorn.protocols.http.h11_impl:run_asgi:391 - Exception in ASGI application I believe this bit is the most relevant to understanding why this error is occuring: File "pydantic\main.py", line 623, in pydantic.main.BaseModel._get_value [Previous line repeated 722 more times] Full traceback: Traceback (most recent call last): File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\pydevconsole.py", line 483, in <module> pydevconsole.start_client(host, port) β”‚ β”‚ β”‚ β”” 50488 β”‚ β”‚ β”” '127.0.0.1' β”‚ β”” <function start_client at 0x000001BCEDC19D08> β”” <module 'pydevconsole' from 'C:\\Users\\<user>\\AppData\\Local\\JetBrains\\Toolbox\\apps\\PyCharm-P\\ch-0\\202.6948.78\\pl... 
File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\pydevconsole.py", line 411, in start_client process_exec_queue(interpreter) β”‚ β”” <_pydev_bundle.pydev_ipython_console.InterpreterInterface object at 0x000001BCEDC1BF98> β”” <function process_exec_queue at 0x000001BCEDC19A60> File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\pydevconsole.py", line 258, in process_exec_queue more = interpreter.add_exec(code_fragment) β”‚ β”‚ β”” <_pydev_bundle.pydev_console_types.CodeFragment object at 0x000001BCEDCFE748> β”‚ β”” <function BaseCodeExecutor.add_exec at 0x000001BCECF38488> β”” <_pydev_bundle.pydev_ipython_console.InterpreterInterface object at 0x000001BCEDC1BF98> File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\_pydev_bundle\pydev_code_executor.py", line 106, in add_exec more = self.do_add_exec(code_fragment) β”‚ β”‚ β”” <_pydev_bundle.pydev_console_types.CodeFragment object at 0x000001BCEDCFE748> β”‚ β”” <function InterpreterInterface.do_add_exec at 0x000001BCEDC15D90> β”” <_pydev_bundle.pydev_ipython_console.InterpreterInterface object at 0x000001BCEDC1BF98> File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\_pydev_bundle\pydev_ipython_console.py", line 36, in do_add_exec res = bool(self.interpreter.add_exec(code_fragment.text)) β”‚ β”‚ β”‚ β”‚ β”” "runfile('E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app/main.py', wdir='E:/Users/<user>/Docume... β”‚ β”‚ β”‚ β”” <_pydev_bundle.pydev_console_types.CodeFragment object at 0x000001BCEDCFE748> β”‚ β”‚ β”” <function _PyDevFrontEnd.add_exec at 0x000001BCEDC15A60> β”‚ β”” <_pydev_bundle.pydev_ipython_console_011._PyDevFrontEnd object at 0x000001BCEDC350B8> β”” <_pydev_bundle.pydev_ipython_console.InterpreterInterface object at 0x000001BCEDC1BF98> File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\_pydev_bundle\pydev_ipython_console_011.py", line 483, in add_exec self.ipython.run_cell(line, store_history=True) β”‚ β”‚ β”‚ β”” "runfile('E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app/main.py', wdir='E:/Users/<user>/Docume... β”‚ β”‚ β”” <function InteractiveShell.run_cell at 0x000001BCED5E7268> β”‚ β”” <_pydev_bundle.pydev_ipython_console_011.PyDevTerminalInteractiveShell object at 0x000001BCEDC350F0> β”” <_pydev_bundle.pydev_ipython_console_011._PyDevFrontEnd object at 0x000001BCEDC350B8> File "C:\Program Files\Python37\lib\site-packages\IPython\core\interactiveshell.py", line 2843, in run_cell raw_cell, store_history, silent, shell_futures) β”‚ β”‚ β”‚ β”” True β”‚ β”‚ β”” False β”‚ β”” True β”” "runfile('E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app/main.py', wdir='E:/Users/<user>/Docume... 
File "C:\Program Files\Python37\lib\site-packages\IPython\core\interactiveshell.py", line 2869, in _run_cell return runner(coro) β”‚ β”” <generator object InteractiveShell.run_cell_async at 0x000001BCEDC49C78> β”” <function _pseudo_sync_runner at 0x000001BCED5D0C80> File "C:\Program Files\Python37\lib\site-packages\IPython\core\async_helpers.py", line 67, in _pseudo_sync_runner coro.send(None) β”‚ β”” <method 'send' of 'generator' objects> β”” <generator object InteractiveShell.run_cell_async at 0x000001BCEDC49C78> File "C:\Program Files\Python37\lib\site-packages\IPython\core\interactiveshell.py", line 3044, in run_cell_async interactivity=interactivity, compiler=compiler, result=result) β”‚ β”‚ β”” <ExecutionResult object at 1bcedcd3470, execution_count=2 error_before_exec=None error_in_exec=None info=<ExecutionInfo objec... β”‚ β”” <IPython.core.compilerop.CachingCompiler object at 0x000001BCEDC356D8> β”” 'last_expr' File "C:\Program Files\Python37\lib\site-packages\IPython\core\interactiveshell.py", line 3215, in run_ast_nodes if (yield from self.run_code(code, result)): β”‚ β”‚ β”‚ β”” <ExecutionResult object at 1bcedcd3470, execution_count=2 error_before_exec=None error_in_exec=None info=<ExecutionInfo objec... β”‚ β”‚ β”” <code object <module> at 0x000001BCEDCDADB0, file "<ipython-input-2-086756a0f1dd>", line 1> β”‚ β”” <function InteractiveShell.run_code at 0x000001BCED5E76A8> β”” <_pydev_bundle.pydev_ipython_console_011.PyDevTerminalInteractiveShell object at 0x000001BCEDC350F0> File "C:\Program Files\Python37\lib\site-packages\IPython\core\interactiveshell.py", line 3291, in run_code exec(code_obj, self.user_global_ns, self.user_ns) β”‚ β”‚ β”‚ β”‚ β”” {'__name__': 'pydev_umd', '__doc__': 'Automatically created module for IPython interactive environment', '__package__': None,... β”‚ β”‚ β”‚ β”” <_pydev_bundle.pydev_ipython_console_011.PyDevTerminalInteractiveShell object at 0x000001BCEDC350F0> β”‚ β”‚ β”” <property object at 0x000001BCED5D8958> β”‚ β”” <_pydev_bundle.pydev_ipython_console_011.PyDevTerminalInteractiveShell object at 0x000001BCEDC350F0> β”” <code object <module> at 0x000001BCEDCDADB0, file "<ipython-input-2-086756a0f1dd>", line 1> File "<ipython-input-2-086756a0f1dd>", line 1, in <module> runfile('E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app/main.py', wdir='E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app') File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script β”‚ β”‚ β”‚ β”‚ β”” {'__name__': '__main__', '__doc__': "\nMain entry point into API for endpoints related to HawkSense's main functionality.\nto... β”‚ β”‚ β”‚ β”” {'__name__': '__main__', '__doc__': "\nMain entry point into API for endpoints related to HawkSense's main functionality.\nto... β”‚ β”‚ β”” 'E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app/main.py' β”‚ β”” <function execfile at 0x000001BCECC521E0> β”” <module '_pydev_bundle.pydev_imports' from 'C:\\Users\\<user>\\AppData\\Local\\JetBrains\\Toolbox\\apps\\PyCharm-P\\ch-0\\... 
File "C:\Users\<user>\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\202.6948.78\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) β”‚ β”‚ β”‚ β”” {'__name__': '__main__', '__doc__': "\nMain entry point into API for endpoints related to HawkSense's main functionality.\nto... β”‚ β”‚ β”” {'__name__': '__main__', '__doc__': "\nMain entry point into API for endpoints related to HawkSense's main functionality.\nto... β”‚ β”” 'E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app/main.py' β”” '#!/usr/bin/env python\n\n"""\nMain entry point into API for endpoints related to HawkSense\'s main functionality.\ntodo: htt... File "E:/Users/<user>/Documents/GitHub/HawkSense/backend/app/app\main.py", line 47, in <module> uvicorn.run("main:app", host="127.0.0.1", port=80) # for debug only β”‚ β”” <function run at 0x000001BCEDE041E0> β”” <module 'uvicorn' from 'C:\\Program Files\\Python37\\lib\\site-packages\\uvicorn\\__init__.py'> File "C:\Program Files\Python37\lib\site-packages\uvicorn\main.py", line 362, in run server.run() β”‚ β”” <function Server.run at 0x000001BCEDE4B510> β”” <uvicorn.main.Server object at 0x000001BCFC722198> File "C:\Program Files\Python37\lib\site-packages\uvicorn\main.py", line 390, in run loop.run_until_complete(self.serve(sockets=sockets)) β”‚ β”‚ β”‚ β”‚ β”” None β”‚ β”‚ β”‚ β”” <function Server.serve at 0x000001BCEDE4B598> β”‚ β”‚ β”” <uvicorn.main.Server object at 0x000001BCFC722198> β”‚ β”” <function BaseEventLoop.run_until_complete at 0x000001BCED49FE18> β”” <_WindowsSelectorEventLoop running=True closed=False debug=False> File "C:\Program Files\Python37\lib\asyncio\base_events.py", line 560, in run_until_complete self.run_forever() β”‚ β”” <function BaseEventLoop.run_forever at 0x000001BCED49FD90> β”” <_WindowsSelectorEventLoop running=True closed=False debug=False> File "C:\Program Files\Python37\lib\asyncio\base_events.py", line 528, in run_forever self._run_once() β”‚ β”” <function BaseEventLoop._run_once at 0x000001BCED4A27B8> β”” <_WindowsSelectorEventLoop running=True closed=False debug=False> File "C:\Program Files\Python37\lib\asyncio\base_events.py", line 1764, in _run_once handle._run() β”‚ β”” <function Handle._run at 0x000001BCED43AB70> β”” <Handle <TaskStepMethWrapper object at 0x000001BCFC7D4B00>()> File "C:\Program Files\Python37\lib\asyncio\events.py", line 88, in _run self._context.run(self._callback, *self._args) β”‚ β”‚ β”‚ β”‚ β”‚ β”” <member '_args' of 'Handle' objects> β”‚ β”‚ β”‚ β”‚ β”” <Handle <TaskStepMethWrapper object at 0x000001BCFC7D4B00>()> β”‚ β”‚ β”‚ β”” <member '_callback' of 'Handle' objects> β”‚ β”‚ β”” <Handle <TaskStepMethWrapper object at 0x000001BCFC7D4B00>()> β”‚ β”” <member '_context' of 'Handle' objects> β”” <Handle <TaskStepMethWrapper object at 0x000001BCFC7D4B00>()> > File "C:\Program Files\Python37\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 388, in run_asgi result = await app(self.scope, self.receive, self.send) β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”” <function RequestResponseCycle.send at 0x000001BCFC757840> β”‚ β”‚ β”‚ β”‚ β”‚ β”” <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4A90> β”‚ β”‚ β”‚ β”‚ β”” <function RequestResponseCycle.receive at 0x000001BCFC7578C8> β”‚ β”‚ β”‚ β”” <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4A90> β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 
'clie... β”‚ β”” <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4A90> β”” <uvicorn.middleware.proxy_headers.ProxyHeadersMiddleware object at 0x000001BCFC722BA8> File "C:\Program Files\Python37\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) β”‚ β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.send of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4A90>> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <fastapi.applications.FastAPI object at 0x000001BCFC722710> β”” <uvicorn.middleware.proxy_headers.ProxyHeadersMiddleware object at 0x000001BCFC722BA8> File "C:\Program Files\Python37\lib\site-packages\fastapi\applications.py", line 149, in __call__ await super().__call__(scope, receive, send) β”‚ β”‚ β”” <bound method RequestResponseCycle.send of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4A90>> β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... File "C:\Program Files\Python37\lib\site-packages\starlette\applications.py", line 102, in __call__ await self.middleware_stack(scope, receive, send) β”‚ β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.send of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4A90>> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <starlette.middleware.errors.ServerErrorMiddleware object at 0x000001BCFC7B8FD0> β”” <fastapi.applications.FastAPI object at 0x000001BCFC722710> File "C:\Program Files\Python37\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__ raise exc from None File "C:\Program Files\Python37\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__ await self.app(scope, receive, _send) β”‚ β”‚ β”‚ β”‚ β”” <function ServerErrorMiddleware.__call__.<locals>._send at 0x000001BCFC72AE18> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <starlette.middleware.cors.CORSMiddleware object at 0x000001BCFC7B8F60> β”” <starlette.middleware.errors.ServerErrorMiddleware object at 0x000001BCFC7B8FD0> File "C:\Program Files\Python37\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__ await self.simple_response(scope, receive, send, request_headers=headers) β”‚ β”‚ β”‚ β”‚ β”‚ β”” Headers({'host': '127.0.0.1', 'connection': 'keep-alive', 'content-length': '295', 'accept': 'application/json', 'user-agent'... 
β”‚ β”‚ β”‚ β”‚ β”” <function ServerErrorMiddleware.__call__.<locals>._send at 0x000001BCFC72AE18> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <function CORSMiddleware.simple_response at 0x000001BCEE53DC80> β”” <starlette.middleware.cors.CORSMiddleware object at 0x000001BCFC7B8F60> File "C:\Program Files\Python37\lib\site-packages\starlette\middleware\cors.py", line 140, in simple_response await self.app(scope, receive, send) β”‚ β”‚ β”‚ β”‚ β”” functools.partial(<bound method CORSMiddleware.send of <starlette.middleware.cors.CORSMiddleware object at 0x000001BCFC7B8F60... β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <starlette.exceptions.ExceptionMiddleware object at 0x000001BCFC7B8E48> β”” <starlette.middleware.cors.CORSMiddleware object at 0x000001BCFC7B8F60> File "C:\Program Files\Python37\lib\site-packages\starlette\exceptions.py", line 82, in __call__ raise exc from None File "C:\Program Files\Python37\lib\site-packages\starlette\exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) β”‚ β”‚ β”‚ β”‚ β”” <function ExceptionMiddleware.__call__.<locals>.sender at 0x000001BCFC7C18C8> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <fastapi.routing.APIRouter object at 0x000001BCFC7220F0> β”” <starlette.exceptions.ExceptionMiddleware object at 0x000001BCFC7B8E48> File "C:\Program Files\Python37\lib\site-packages\starlette\routing.py", line 550, in __call__ await route.handle(scope, receive, send) β”‚ β”‚ β”‚ β”‚ β”” <function ExceptionMiddleware.__call__.<locals>.sender at 0x000001BCFC7C18C8> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... β”‚ β”” <function Route.handle at 0x000001BCEE4FF6A8> β”” <fastapi.routing.APIRoute object at 0x000001BCFC7B8E80> File "C:\Program Files\Python37\lib\site-packages\starlette\routing.py", line 227, in handle await self.app(scope, receive, send) β”‚ β”‚ β”‚ β”‚ β”” <function ExceptionMiddleware.__call__.<locals>.sender at 0x000001BCFC7C18C8> β”‚ β”‚ β”‚ β”” <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.h11_impl.RequestResponseCycle object at 0x000001BCFC7D4... β”‚ β”‚ β”” {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.1'}, 'http_version': '1.1', 'server': ('127.0.0.1', 80), 'clie... 
β”‚ β”” <function request_response.<locals>.app at 0x000001BCFC7C1A60> β”” <fastapi.routing.APIRoute object at 0x000001BCFC7B8E80> File "C:\Program Files\Python37\lib\site-packages\starlette\routing.py", line 41, in app response = await func(request) β”‚ β”” <starlette.requests.Request object at 0x000001BCFC7D4588> β”” <function get_request_handler.<locals>.app at 0x000001BCFC7C19D8> File "C:\Program Files\Python37\lib\site-packages\fastapi\routing.py", line 213, in app is_coroutine=is_coroutine, β”” True File "C:\Program Files\Python37\lib\site-packages\fastapi\routing.py", line 113, in serialize_response exclude_none=exclude_none, β”” False File "C:\Program Files\Python37\lib\site-packages\fastapi\routing.py", line 65, in _prepare_response_content exclude_none=exclude_none, β”” False File "pydantic\main.py", line 386, in pydantic.main.BaseModel.dict File "pydantic\main.py", line 706, in _iter File "pydantic\main.py", line 623, in pydantic.main.BaseModel._get_value File "pydantic\main.py", line 623, in pydantic.main.BaseModel._get_value File "pydantic\main.py", line 623, in pydantic.main.BaseModel._get_value [Previous line repeated 722 more times] File "pydantic\main.py", line 605, in pydantic.main.BaseModel._get_value File "C:\Program Files\Python37\lib\abc.py", line 139, in __instancecheck__ return _abc_instancecheck(cls, instance) β”‚ β”‚ β”” 8 β”‚ β”” <class 'pydantic.main.BaseModel'> β”” <built-in function _abc_instancecheck> RecursionError: maximum recursion depth exceeded in comparison```
This was a simple issue that was resolved by amending the output response to match the Pydantic model. In practice, that meant ensuring the output built in audit.py (the dictionary returned by analyzer.analyze() and unpacked into AuditResult) had the same structure and data types as the fields declared on the AuditResult class in the Pydantic model.
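To make that concrete, the snippet below is an illustrative debugging aid (my sketch, not part of the original fix, assuming pydantic v1 as used in the question): inside post_audit, after analyzed_output is produced, compare what analyzer.analyze() actually returned with what AuditResult declares, and construct the model explicitly so a mismatch surfaces as a readable ValidationError.
from pydantic import ValidationError

# print the runtime type of each value next to the declared field type
for name, field in AuditResult.__fields__.items():
    value = analyzed_output.get(name)
    print(f"{name}: got {type(value).__name__}, declared {field.outer_type_}")

try:
    output = AuditResult(**analyzed_output)
except ValidationError as exc:
    logger.error(exc)
    raise HTTPException(status_code=500, detail="Audit output did not match AuditResult")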
9
3
63,910,610
2020-9-15
https://stackoverflow.com/questions/63910610/generate-typeddict-from-functions-keyword-arguments
foo.py: kwargs = {"a": 1, "b": "c"} def consume(*, a: int, b: str) -> None: pass consume(**kwargs) mypy foo.py: error: Argument 1 to "consume" has incompatible type "**Dict[str, object]"; expected "int" error: Argument 1 to "consume" has incompatible type "**Dict[str, object]"; expected "str" This is because object is a supertype of int and str, and is therefore inferred. If I declare: from typing import TypedDict class KWArgs(TypedDict): a: int b: str and then annotate kwargs as KWArgs, the mypy check passes. This achieves type safety, but requires me to duplicate the keyword argument names and types for consume in KWArgs. Is there a way to generate this TypedDict from the function signature at type checking time, such that I can minimize the duplication in maintenance?
This will be available in Python 3.12 via PEP 692: from typing import TypedDict, Unpack, Required, NotRequired class KWArgs(TypedDict): a: Required[int] b: NotRequired[str] def consume(**kwargs: Unpack[KWArgs]) -> None: a = kwargs["a"] b = kwargs.get("b", ...) consume() # Not allowed. consume(a=1) # Allowed. consume(a=1, b="abc") # Allowed.
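On interpreters older than 3.12, the same spelling should be available through the typing_extensions backport, and type checkers that already implement PEP 692 (recent mypy and pyright) will check it statically; a minimal sketch, assuming typing_extensions is installed:
from typing_extensions import TypedDict, Unpack, Required, NotRequired

class KWArgs(TypedDict):
    a: Required[int]
    b: NotRequired[str]

def consume(**kwargs: Unpack[KWArgs]) -> None:
    pass

consume(a=1, b="abc")  # checked against KWArgs, no duplication of the signature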
13
6
63,822,152
2020-9-10
https://stackoverflow.com/questions/63822152/pytorch-rnn-is-more-efficient-with-batch-first-false
In machine translation, we always need to slice out the first timestep (the SOS token) in the annotation and prediction. When using batch_first=False, slicing out the first timestep still keeps the tensor contiguous. import torch batch_size = 128 seq_len = 12 embedding = 50 # Making a dummy output that is `batch_first=False` batch_not_first = torch.randn((seq_len,batch_size,embedding)) batch_not_first = batch_not_first[1:].view(-1, embedding) # slicing out the first time step However, if we use batch_first=True, after slicing, the tensor is no longer contiguous. We need to make it contiguous before we can do different operations such as view. batch_first = torch.randn((batch_size,seq_len,embedding)) batch_first[:,1:].view(-1, embedding) # slicing out the first time step output>>> """ --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-8-a9bd590a1679> in <module> ----> 1 batch_first[:,1:].view(-1, embedding) # slicing out the first time step RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead. """ Does that mean batch_first=False is better, at least, in the context of machine translation, since it saves us from doing the contiguous() step? Are there any cases where batch_first=True works better?
Performance There doesn't seem to be a considerable difference between batch_first=True and batch_first=False. Please see the script below: import time import torch def time_measure(batch_first: bool): torch.cuda.synchronize() layer = torch.nn.RNN(10, 20, batch_first=batch_first).cuda() if batch_first: inputs = torch.randn(100000, 7, 10).cuda() else: inputs = torch.randn(7, 100000, 10).cuda() torch.cuda.synchronize() start = time.perf_counter() for chunk in torch.chunk(inputs, 100000 // 64, dim=0 if batch_first else 1): _, last = layer(chunk) torch.cuda.synchronize() return time.perf_counter() - start print(f"Time taken for batch_first=False: {time_measure(False)}") print(f"Time taken for batch_first=True: {time_measure(True)}") On my device (GTX 1050 Ti), PyTorch 1.6.0 and CUDA 11.0 here are the results: Time taken for batch_first=False: 0.3275816479999776 Time taken for batch_first=True: 0.3159054920001836 (and it varies either way so nothing conclusive). Code readability batch_first=True is simpler when you want to use other PyTorch layers which require batch as 0th dimension (which is the case for almost all torch.nn layers like torch.nn.Linear). In this case you would have to permute returned tensor anyway if batch_first=False was specified. Machine translation It should be better as the tensor is contiguous all the time and no copy of data has to be done. It also looks cleaner to slice using [1:] instead of [:,1:].
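If batch_first=True is otherwise more convenient (for example when feeding the output into torch.nn.Linear, which expects batch as the 0th dimension), the slicing issue from the question can be sidestepped with .reshape, which only copies when it has to. A small sketch reusing the question's shapes:
import torch

batch_size, seq_len, embedding = 128, 12, 50
batch_first = torch.randn(batch_size, seq_len, embedding)

# .reshape accepts the non-contiguous slice and falls back to a copy if needed
flat = batch_first[:, 1:].reshape(-1, embedding)

# equivalent: make the slice contiguous explicitly, then view
flat_alt = batch_first[:, 1:].contiguous().view(-1, embedding)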
8
5
63,823,395
2020-9-10
https://stackoverflow.com/questions/63823395/how-can-i-get-the-number-of-cuda-cores-in-my-gpu-using-python-and-numba
I would like to know how to obtain the total number of CUDA Cores in my GPU using Python, Numba and cudatoolkit.
Most of what you need can be found by combining the information in this answer along with the information in this answer. We'll use the first answer to indicate how to get the device compute capability and also the number of streaming multiprocessors. We'll use the second answer (converted to python) to use the compute capability to get the "core" count per SM, then multiply that by the number of SMs. Here is a full example: $ cat t36.py from numba import cuda cc_cores_per_SM_dict = { (2,0) : 32, (2,1) : 48, (3,0) : 192, (3,5) : 192, (3,7) : 192, (5,0) : 128, (5,2) : 128, (6,0) : 64, (6,1) : 128, (7,0) : 64, (7,5) : 64, (8,0) : 64, (8,6) : 128, (8,9) : 128, (9,0) : 128 } # the above dictionary should result in a value of "None" if a cc match # is not found. The dictionary needs to be extended as new devices become # available, and currently does not account for all Jetson devices device = cuda.get_current_device() my_sms = getattr(device, 'MULTIPROCESSOR_COUNT') my_cc = device.compute_capability cores_per_sm = cc_cores_per_SM_dict.get(my_cc) total_cores = cores_per_sm*my_sms print("GPU compute capability: " , my_cc) print("GPU total number of SMs: " , my_sms) print("total cores: " , total_cores) $ python t36.py GPU compute capability: (5, 2) GPU total number of SMs: 8 total cores: 1024 $
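As an optional convenience (my addition, not part of the answer above), the same lookup can be wrapped in a helper that fails loudly when the device's compute capability is missing from the table:
def cuda_core_count():
    device = cuda.get_current_device()
    cores_per_sm = cc_cores_per_SM_dict.get(device.compute_capability)
    if cores_per_sm is None:
        raise RuntimeError(
            f"Unknown compute capability {device.compute_capability}; "
            "please extend cc_cores_per_SM_dict")
    return cores_per_sm * device.MULTIPROCESSOR_COUNT

print(cuda_core_count())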
10
20
63,873,066
2020-9-13
https://stackoverflow.com/questions/63873066/do-python-dict-literals-and-dictlist-of-pairs-keep-their-key-order
Do dict literals keep the order of their keys in Python 3.7+? For example, is it guaranteed that {1: "one", 2: "two"} will always have its keys ordered this way (1, then 2) when iterating over it? (There is a thread in the Python mailing list with a similar subject, but it goes in all directions and I couldn't find an answer.) Similarly, is a dictionary like dict([('sape', 4139), ('guido', 4127), ('jack', 4098)]) ordered like the list? The same question applies to other naturally ordered constructions, like dict comprehensions and dict(sape=4139, guido=4127, jack=4098). PS: it is documented that dictionaries preserve insertion order. This question thus essentially asks: is it guaranteed that data is inserted in the order of a dict literal, of the list given to dict(), etc.?
Yes, any method of constructing a dict preserves insertion order, in Python 3.7+. For literal key-value pairs, see the documentation: If a comma-separated sequence of key/datum pairs is given, they are evaluated from left to right to define the entries of the dictionary See also: Martijn's answer on How to keep keys/values in same order as declared? For comprehensions, from the same source: When the comprehension is run, the resulting key and value elements are inserted in the new dictionary in the order they are produced. Lastly, the dict initializer works by iterating over its argument and keyword arguments, and inserting each in order, similar to this: def __init__(self, mapping_or_iterable, **kwargs): if hasattr(mapping_or_iterable, "items"): # It's a mapping for k, v in mapping_or_iterable.items(): self[k] = v else: # It's an iterable of key-value pairs for k, v in mapping_or_iterable: self[k] = v for k, v in kwargs.items(): self[k] = v [This is based on the source code, but glossing over a lot of unimportant details, e.g. that dict_init is just a wrapper on dict_update_common.] This, combined with the fact that keyword arguments pass a dictionary in the same order since Python 3.6, makes dict(x=…, y=…) preserve the order of the variables.
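A quick check using the names from the question (any Python 3.7+ interpreter):
d_literal = {'sape': 4139, 'guido': 4127, 'jack': 4098}
d_pairs = dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
d_kwargs = dict(sape=4139, guido=4127, jack=4098)
d_comp = {k: v for k, v in [('sape', 4139), ('guido', 4127), ('jack', 4098)]}

# all four construction routes iterate in the declared order
assert list(d_literal) == list(d_pairs) == list(d_kwargs) == list(d_comp) == ['sape', 'guido', 'jack']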
7
7
63,893,783
2020-9-15
https://stackoverflow.com/questions/63893783/how-to-get-a-typeddict-corresponding-to-a-function-signature
Say I've got a function signature like this: def any_foo( bar: Bar, with_baz: Optional[Baz] = None, with_datetime: Optional[datetime] = None, effective: Optional[bool] = False, ) -> Foo I could of course just copy its declaration and fiddle with it enough to create the following TypedDict: AnyFooParameters = TypedDict( "AnyFooParameters", { bar: Bar, with_baz: Optional[Baz], with_datetime: Optional[datetime], effective: Optional[bool] } ) But this seems like such a straight-forward transformation that I wonder whether there's some easy way to create this TypedDict (or at least the name: type pairs) straight from the function.
The result of any_foo.__annotations__ is exactly what you want. For example: from typing import Optional def any_foo( req_int: int, opt_float: Optional[float] = None, opt_str: Optional[str] = None, opt_bool: Optional[bool] = False, ) -> int: pass And with any_foo.__annotations__, you can get this: {'req_int': int, 'opt_float': typing.Optional[float], 'opt_str': typing.Optional[str], 'opt_bool': typing.Optional[bool], 'return': int} Note that you can access the type of the return value via the 'return' key. BTW, since return is a reserved keyword, you cannot name an argument return, so there's no need to worry about a duplicate key :)
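If you want an actual TypedDict rather than just the mapping, it can be assembled at runtime with the functional syntax by dropping the 'return' key; note this is only a sketch for runtime/introspection use, since a static checker such as mypy will not see through a dynamically built TypedDict:
from typing import TypedDict, get_type_hints

hints = get_type_hints(any_foo)   # like __annotations__, but also resolves string annotations
hints.pop('return', None)
AnyFooParameters = TypedDict('AnyFooParameters', hints)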
11
3