question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
64,406,942 | 2020-10-17 | https://stackoverflow.com/questions/64406942/in-a-pytest-coverage-report-what-does-mean-for-missing-lines | I'm running pytest with the coverage plugin (pytest --cov) and in the report I got the following line: Name Stmts Miss Branch BrPart Cover Missing --------------------------------------------------------- foo.py 5 1 2 1 71% 3->5, 5 I know that 3-5 would mean it missed lines 3 to 5, but I don't know what -> means. From the test logic, I'd expect only 5 to be reported. For reference, this is the code I used: # foo.py class Test: def __lt__(self, other): if type(self) == type(other): return False return NotImplemented # test_foo.py def test_lt(): test = Test() assert not (test < test) | Coverage collects pairs of transition in your code that goes from one line (the source) to another (the destination). In some cases, some transitions could be jumped, like in conditional statements or a break statememt, then it would be measured as a missing branch (or missing transition). For example, in your code there is one transition that can be jumped. if type(self) == type(other): return False return NotImplemented see that from line 3 to line 5 is a transition that can not necessarily happen as there could be the case that the if statement don't evaluate to False. Thus, branch coverage will flag this code as not fully covered because of the missing jump from line 3 to line 5. References How does branch coverage works. https://coverage.readthedocs.io/en/latest/branch.html#how-it-works | 8 | 7 |
64,406,954 | 2020-10-17 | https://stackoverflow.com/questions/64406954/how-can-i-drop-a-column-if-the-last-row-is-nan | I have found examples of how to remove a column based on all or a threshold but I have not been able to find a solution to my particular problem which is dropping the column if the last row is nan. The reason for this is im using time series data in which the collection of data doesnt all start at the same time which is fine but if I used one of the previous solutions it would remove 95% of the dataset. I do however not want data whose most recent column is nan as it means its defunct. A B C nan t x 1 2 3 x y z 4 nan 6 Returns A C nan x 1 3 x z 4 6 | You can also do something like this df.loc[:, ~df.iloc[-1].isna()] A C 0 NaN x 1 1 3 2 x z 3 4 6 | 9 | 5 |
64,401,916 | 2020-10-17 | https://stackoverflow.com/questions/64401916/python-cannot-import-token-from-token | I'm trying to make a lexer in python but when I try to import a class from file token.py I get this error ImportError: cannot import name 'Token' from 'token' the code for token.py is as follows from enum import Enum class Token(): def __init__(self, ttype, value=None): self.type = ttype self.value = value def __repr__(self): return {'type':self.type, 'value':self.value} class TokenType(Enum): NUMBER = 0 PLUS = 1 MINUS = 2 MULTIPLY = 3 DIVIDE = 4 LPAREN = 5 RPAREN = 6 and the import statement is from token import Token, TokenType | There is a library in python called token, so your interpreter might be confusing it with the inbuilt python library. Try to rename the library. Name it token_2.py or something | 7 | 13 |
64,385,747 | 2020-10-16 | https://stackoverflow.com/questions/64385747/valueerror-you-are-trying-to-merge-on-object-and-int64-columns-when-use-pandas | The test.csv data likes this: device_id,upload_time,latitude,longitude,mileage,other_vals,speed,upload_time_1 11115304371,2020-08-05 05:10:05+00:00,23.140366,114.18685,0,,0,202008 1234,2020-08-05 05:10:33+00:00,22.994716,114.2998,0,,0,202008 11115304371,2020-08-05 05:20:55+00:00,22.994716,114.2998,0,,3.8,202008 11115304371,2020-08-05 05:24:02+00:00,22.994916,114.299683,0,,2.1,202008 11115304371,2020-08-05 05:24:30+00:00,22.99545,114.2998,0,,6.5,202008 11115304371,2020-08-05 05:29:30+00:00,22.995433,114.299766,0,,3.4,202008 11115304371,2020-08-05 05:34:30+00:00,22.995433,114.299766,0,,3.4,202008 11115304371,2020-08-05 05:39:30+00:00,22.995433,114.299766,0,,3.4,202008 822649e2d142a486,2020-08-05 05:44:30+00:00,22.995433,114.299766,0,,3.4,202008 11115304371,2020-08-05 05:44:53+00:00,22.995433,114.299766,0,,3.4,202008 11115304371,2020-08-05 05:45:40+00:00,22.995433,114.299766,0,,5.8,202008 And the info.csv data likes this: car_id,device_id,car_type,car_num,marketer_name 1,11110110037,1,AAA,T1 2,11115304371,1,BBB,T2 3,11111100345,1,CCC,T3 4,11111100242,1,DDD,T4 5,12221100034,1,EEE,T5 6,12221100230,1,FFF,T6 7,14465301234,1,GGG,T7 When I use this code to merge 2 dataframe. import pandas as pd df_device_data = pd.read_csv(r'E:/test.csv', encoding='utf-8', parse_dates=[1], low_memory=False) df_common_car_info = pd.read_csv(r'E:/info.csv', encoding='utf-8', low_memory=False) result = pd.merge(df_device_data, df_common_car_info, how='left', on='device_id') result.to_csv(r'E:/result.csv', index=False, mode='w', header=True) This error occurred: ValueError: You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat How to fix it? | When I use this code df.astype(str): import pandas as pd df_device_data = pd.read_csv(r'E:/test.csv', encoding='utf-8', parse_dates=[1], low_memory=False) df_device_data['device_id'] = df_device_data['device_id'].astype(str) df_common_car_info = pd.read_csv(r'E:/info.csv', encoding='utf-8', low_memory=False) df_common_car_info['device_id'] = df_common_car_info['device_id'].astype(str) result = pd.merge(df_device_data, df_common_car_info, how='left', on='device_id') result.to_csv(r'E:/result.csv', index=False, mode='w', header=True) The result is right. | 12 | 7 |
64,395,136 | 2020-10-16 | https://stackoverflow.com/questions/64395136/typeerror-object-of-type-uuid-is-not-json-serializable | I'm building a fairly large JSON dictionary where I specify a few uuids like this: import uuid game['uuid'] = uuid.uuid1() I'm getting a type error with the following traceback. I'm not sure what the issue is because we can have UUIDs within json objects Traceback (most recent call last): File "/Users/claycrosby/Desktop/coding/projects/gambling/scraper/sbtesting.py", line 182, in <module> game_json = json.dumps(game) File "/opt/miniconda3/envs/ds383/lib/python3.8/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/opt/miniconda3/envs/ds383/lib/python3.8/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/opt/miniconda3/envs/ds383/lib/python3.8/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/opt/miniconda3/envs/ds383/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type UUID is not JSON serializable [Finished in 0.5s with exit code 1] [cmd: ['/opt/miniconda3/envs/ds383/bin/python3', '/Users/claycrosby/Desktop/coding/projects/gambling/scraper/sbtesting.py']] [dir: /Users/claycrosby/Desktop/coding/projects/gambling/scraper] [path: /opt/miniconda3/bin:/opt/miniconda3/condabin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/Users/claycrosby/Desktop/coding/programs/pbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin] | The uuid.UUID class itself can't be JSON serialized, but the object can be expressed in several JSON compatible formats. From help(uuid.UUID) we can see these options (though bytes would need some more work because they aren't json either). | bytes the UUID as a 16-byte string (containing the six | integer fields in big-endian byte order) | | bytes_le the UUID as a 16-byte string (with time_low, time_mid, | and time_hi_version in little-endian byte order) | | fields a tuple of the six integer fields of the UUID, | which are also available as six individual attributes | and two derived attributes: | | time_low the first 32 bits of the UUID | time_mid the next 16 bits of the UUID | time_hi_version the next 16 bits of the UUID | clock_seq_hi_variant the next 8 bits of the UUID | clock_seq_low the next 8 bits of the UUID | node the last 48 bits of the UUID | | time the 60-bit timestamp | clock_seq the 14-bit sequence number | | hex the UUID as a 32-character hexadecimal string | | int the UUID as a 128-bit integer | | urn the UUID as a URN as specified in RFC 4122 If your API wants a URN for example, you'd >>> game = {'uuid': uuid.uuid1().urn} >>> game {'uuid': 'urn:uuid:56fabaca-0fe6-11eb-9910-c770eddca9e7'} >>> json.dumps(game) '{"uuid": "urn:uuid:56fabaca-0fe6-11eb-9910-c770eddca9e7"}' | 5 | 7 |
64,390,560 | 2020-10-16 | https://stackoverflow.com/questions/64390560/error-could-not-find-a-version-that-satisfies-the-requirement-csv-from-version | I'm trying to install the csv module in Python 3. I have pip install, and I'm using Pycharm. I've tried downloading it in the terminal using both pip install csv and pip3 install csv, but neither of those worked. I get the following error: ERROR: Could not find a version that satisfies the requirement csv (from versions: none) ERROR: No matching distribution found for csv I also tried downloading csv by going to Pycharm's setting and adding csv as a project interpreter, but it gives me the same error message. Hopefully, you can help me, thanks! | You can't install the csv module because it is already part of Python, so you can just start to use it by including in your file: import csv . | 6 | 10 |
64,382,010 | 2020-10-16 | https://stackoverflow.com/questions/64382010/how-to-create-a-seaborn-heatmap-by-hour-day-from-timestamp-with-multiple-data-po | I have a data frame with a date column which is a timestamp. There are multiple data points per hour of a day eg 2014-1-1 13:10, 2014-1-1 13:20 etc. I want to group the data points from the same hour of a specific day and then create a heatmap using seaborn and plot a different column. I have tried to use groupby but I'm not sure how to specify i want the hour and day date data 2014-1-1 13:10 50 2014-1-1 13:20 51 2014-1-1 13:30 51 2014-1-1 13:40 56 2014-1-1 13:50 67 2014-1-1 14:00 43 2014-1-1 14:10 78 2014-1-1 14:20 45 2014-1-1 14:30 58 I want to combine the data by its mean value | You can use dt.strftime('%H') to get the hours, and dt.strftime('%Y-%m-%D') or dt.normalize() for the days sns.heatmap(df.groupby([df.date.dt.normalize(), df.date.dt.strftime('%H:00')]) ['data'].mean() .rename_axis(index=['day','hour']) .unstack(level=0) ) Output: Update: for the weeks, we can use a similar approach s = (df.groupby([df.date.dt.isocalendar().week, df.date.dt.strftime('%Y-%m-%d'), df.date.dt.strftime('%H:00')]) ['data'].mean() .rename_axis(index=['week','day','hour']) ) fig, axes = plt.subplots(2,2, figsize=(10,10)) for w, ax in zip(s.index.unique('week'), axes.ravel()): sns.heatmap(s.loc[w].unstack(level='day'), ax=ax) ax.set_title(f'Week {w}') Output: | 6 | 5 |
64,374,482 | 2020-10-15 | https://stackoverflow.com/questions/64374482/how-to-calculate-distance-for-every-row-in-a-pandas-dataframe-from-a-single-poin | I have a point point = np.array([0.07852388, 0.60007135, 0.92925712, 0.62700219, 0.16943809, 0.34235233]) And a pandas dataframe a b c d e f 0 0.025641 0.554686 0.988809 0.176905 0.050028 0.333333 1 0.027151 0.520914 0.985590 0.409572 0.163980 0.424242 2 0.028788 0.478810 0.970480 0.288557 0.095053 0.939394 3 0.018692 0.450573 0.985910 0.178048 0.118399 0.484848 4 0.023256 0.787253 0.865287 0.217591 0.205670 0.303030 I would like to calculate the distance of every row in the pandas dataframe, to that specific point I tried import numpy as np d_all = list() for index, row in df_scaled[cols_list].iterrows(): d = np.linalg.norm(centroid-np.array(list(row[cols_list]))) d_all += [d] df_scaled['distance_cluster'] = d_all My solution is really slow though (taking into account that I want to calculate the distance from other points as well. Is there a way to do my calculations more efficiently ? | You can compute vectorized Euclidean distance (L2 norm) using the formula sqrt((a1 - b1)2 + (a2 - b2)2 + ...) df.sub(point, axis=1).pow(2).sum(axis=1).pow(.5) 0 0.474690 1 0.257080 2 0.703857 3 0.503596 4 0.461151 dtype: float64 Which gives the same output as your current code. Or, using linalg.norm: np.linalg.norm(df.to_numpy() - point, axis=1) # array([0.47468985, 0.25707985, 0.70385676, 0.5035961 , 0.46115096]) | 18 | 11 |
64,370,230 | 2020-10-15 | https://stackoverflow.com/questions/64370230/deleting-rows-which-sum-to-zero-in-1-column-but-are-otherwise-duplicates-in-pand | I have a pandas dataframe of the following structure: df = pd.DataFrame({'ID':['A001', 'A001', 'A001', 'A002', 'A002', 'A003', 'A003', 'A004', 'A004', 'A004', 'A005', 'A005'], 'Val1':[2, 2, 2, 5, 6, 8, 8, 3, 3, 3, 7, 7], 'Val2':[100, -100, 50, -40, 40, 60, -50, 10, -10, 10, 15, 15]}) ID Val1 Val2 0 A001 2 100 1 A001 2 -100 2 A001 2 50 3 A002 5 -40 4 A002 6 40 5 A003 8 60 6 A003 8 -50 7 A004 3 10 8 A004 3 -10 9 A004 3 10 10 A005 7 15 11 A005 7 15 I want to remove duplicate rows where ID and Val1 are duplicates, and where Val2 sums to zero across two rows. The positive/negative Val2 rows may not be consecutive either, even under a groupby In the above sample data, rows 0 and 1, as well as 7, 8, 9 fulfill these criteria. I'd want to remove [0, 1], and either [7, 8] or [8, 9]. Another constraint here is that there could be entirely duplicate rows ([10, 11]). In this case, I want to keep both rows. The desired output is thus: ID Val1 Val2 2 A001 2 50 3 A002 5 -40 4 A002 6 40 5 A003 8 60 6 A003 8 -50 9 A004 3 10 10 A005 7 15 11 A005 7 15 Short of iterating over each row and looking for other rows which fit the criteria, I'm out of ideas for a more "pythonic" way to do this. Any help is much appreciated. | I put some comments in the code, so hopefully, my line of thought should be clear : cond = df.assign(temp=df.Val2.abs()) # a way to get the same values (differentiated by their sign) # to follow each other cond = cond.sort_values(["ID", "Val1", "temp"]) # cumsum should yield a zero for numbers that are different # only by their sign cond["check"] = cond.groupby(["ID", "temp"]).Val2.cumsum() cond["check"] = np.where(cond.check != 0, np.nan, cond.check) # the backward fill here allows us to assign an identifier # to the two values that summed to zero cond["check"] = cond["check"].bfill(limit=1) # this is where we implement your other condition # essentially, it looks for rows that are duplicates # and rows that any two rows sum to zero cond.loc[ ~(cond.duplicated(["ID", "Val1"], keep=False) & (cond.check == 0)), ["ID", "Val1", "Val2"], ] ID Val1 Val2 2 A001 2 50 3 A002 5 -40 4 A002 6 40 6 A003 8 -50 5 A003 8 60 9 A004 3 10 | 8 | 2 |
64,369,710 | 2020-10-15 | https://stackoverflow.com/questions/64369710/what-are-the-hex-codes-of-matplotlib-tab10-palette | Do you know what are the hex codes or RGB values of the "tab" palette (the default 10 colors: tab:blue, tab:orange, etc...) of matplotlib ? And possibly do you know how if there's a way to obtain the hex code for any named color in matplotlib ? | Turns out this piece of code from the matplotlib examples gave me the answer I was after. The hex codes of the "tableau" palette are as follows: tab:blue : #1f77b4 tab:orange : #ff7f0e tab:green : #2ca02c tab:red : #d62728 tab:purple : #9467bd tab:brown : #8c564b tab:pink : #e377c2 tab:gray : #7f7f7f tab:olive : #bcbd22 tab:cyan : #17becf Using the following code I made a dictionnary containing all the named colors and their respective hex code : import matplotlib.colors as mcolors mcolors.TABLEAU_COLORS mcolors.XKCD_COLORS mcolors.CSS4_COLORS #Base colors are in RGB so they need to be converted to HEX BASE_COLORS_hex = {name:mcolors.rgb2hex(color) for name,color in mcolors.BASE_COLORS.items()} all_named_colors = {} all_named_colors.update(mcolors.TABLEAU_COLORS) all_named_colors.update(BASE_COLORS_hex) all_named_colors.update(mcolors.CSS4_COLORS) all_named_colors.update(mcolors.XKCD_COLORS) #print(all_named_colors) print(all_named_colors["tab:blue"]) >>> #1f77b4 | 31 | 57 |
64,364,499 | 2020-10-15 | https://stackoverflow.com/questions/64364499/set-description-for-query-parameter-in-swagger-doc-using-pydantic-model-fastapi | This is continue to this question. I have added a model to get query params to pydantic model class QueryParams(BaseModel): x: str = Field(description="query x") y: str = Field(description="query y") z: str = Field(description="query z") @app.get("/test-query-url/{test_id}") async def get_by_query(test_id: int, query_params: QueryParams = Depends()): print(test_id) print(query_params.dict(by_alias=True)) return True it is working as expected but description(added in model) is not reflecting in swagger ui But if same model is used for request body, then description is shown in swagger Am I missing anything to get the description for QueryParams(model) in swagger ui? | This is not possible with Pydantic models The workaround to get the desired result is to have a custom dependency class (or function) rather than the Pydantic model from fastapi import Depends, FastAPI, Query app = FastAPI() class CustomQueryParams: def __init__( self, foo: str = Query(..., description="Cool Description for foo"), bar: str = Query(..., description="Cool Description for bar"), ): self.foo = foo self.bar = bar @app.get("/test-query/") async def get_by_query(params: CustomQueryParams = Depends()): return params Thus, you will have the doc as, References Validate GET parameters in FastAPI--(FastAPI GitHub) It seems like there is less interest in extending the Pydantic model to validate the GET parameters Classes as Dependencies--(FastAPI Doc) | 28 | 25 |
64,362,032 | 2020-10-14 | https://stackoverflow.com/questions/64362032/how-to-melt-a-dataframe-while-doing-some-operation | Let's say that I have the following dataframe: index K1 K2 D1 D2 D3 N1 0 1 12 4 6 N2 1 1 10 2 7 N3 0 0 3 5 8 Basically, I want to transform this dataframe into the following: index COL1 COL2 K1 D1 = 0*12+1*10+0*3 K1 D2 = 0*4+1*2+0*5 K1 D3 = 0*6+1*7+0*8 K2 D1 = 1*12+1*10+0*3 K2 D2 = 1*4+1*2+0*5 K2 D3 = 1*6+1*7+0*8 The content of COL2 is basically the dot product (aka the scalar product) between the vector in index and the one in COL1. For example, let's take the first line of the resulting df. Under index, we have K1 and, under COL1 we have D1. Looking at the first table, we know that K1 = [0,1,0] and D1 = [12,10,3]. The scalar product of these two "vectors" is the value inside COL2 (first line). I'm trying to find a way of doing this without using a nested loop (because the idea is to make something efficient), however, I don't exactly know how. I tried using the pd.melt() function and, although it gets me closer to what I want, it doesn't exactly get me to where I want. Could you give me a hint? | This is matrix multiplication: (df[['D1','D2','D3']].T@df[['K1','K2']]).unstack().reset_index() Output: level_0 level_1 0 0 K1 D1 10 1 K1 D2 2 2 K1 D3 7 3 K2 D1 22 4 K2 D2 6 5 K2 D3 13 | 5 | 7 |
64,362,290 | 2020-10-14 | https://stackoverflow.com/questions/64362290/python-struct-pack-and-unpack | I want to reverse the packing of the following code: struct.pack("<"+"I"*elements, *self.buf[:elements]) I know "<" means little endian and "I" is unsigned int. How can I use struct.unpack to reverse the packing? | struct.pack takes non-byte values (e.g. integers, strings, etc.) and converts them to bytes. And conversely, struct.unpack takes bytes and converts them to their 'higher-order' equivalents. For example: >>> from struct import pack, unpack >>> packed = pack('hhl', 1, 2, 3) >>> packed b'\x00\x01\x00\x02\x00\x00\x00\x03' >>> unpacked = unpack('hhl', packed) >>> unpacked (1, 2, 3) So in your instance, you have little-endian unsigned integers (elements many of them). You can unpack them using the same structure string (the '<' + 'I' * elements part) - e.g. struct.unpack('<' + 'I' * elements, value). Example from: https://docs.python.org/3/library/struct.html | 9 | 15 |
64,362,044 | 2020-10-14 | https://stackoverflow.com/questions/64362044/what-does-levels-mean-in-seaborn-kde-plot | I am trying to make a contour plot of my 2d data. However, I would like to input the contours manually. I found the "levels" option in seaborn.kde documentation, where I can define the levels for contours manually. However, I have no idea what these levels mean. The documentation gives this definition - Levels correspond to iso-proportions of the density. What does iso-proportions of density mean? Are there any references that I could read up on this? | Basically, the contour line for the level corresponding to 0.05 is drawn such that 5% of the distribution lies "below" it. Alternately, because the integral over the full density equals 1 (that's what makes it a PDF), the integral over the area outside of the contour line will be 0.05. | 10 | 3 |
64,359,016 | 2020-10-14 | https://stackoverflow.com/questions/64359016/how-can-i-remove-numbers-and-words-with-length-below-2-from-a-sentence | I am trying to remove words that have length below 2 and any word that is numbers. For example s = " This is a test 1212 test2" Output desired is " This is test test2" I tried \w{2,} this removes all the word whose length is below 2. When I added \D+ this removes all numbers when I didn't want to get rid of 2 from test2. | You may use: s = re.sub(r'\b(?:\d+|\w)\b\s*', '', s) RegEx Demo Pattern Details: \b: Match word boundary (?:\d+|\w): Match a single word character or 1+ digits \b: Match word boundary \s*: Match 0 or more whitespaces | 6 | 3 |
64,345,790 | 2020-10-14 | https://stackoverflow.com/questions/64345790/sort-a-pandas-dataframe-by-multiple-columns-using-key-argument | I have a dataframe a pandas dataframe with the following columns: df = pd.DataFrame([ ['A2', 2], ['B1', 1], ['A1', 2], ['A2', 1], ['B1', 2], ['A1', 1]], columns=['one','two']) Which I am hoping to sort primarily by column 'two', then by column 'one'. For the secondary sort, I would like to use a custom sorting rule that will sort column 'one' by the alphabetic character [A-Z] and then the trailing numeric number [0-100]. So, the outcome of the sort would be: one two A1 1 B1 1 A2 1 A1 2 B1 2 A2 2 I have sorted a list of strings similar to column 'one' before using a sorting rule like so: def custom_sort(value): return (value[0], int(value[1:])) my_list.sort(key=custom_sort) If I try to apply this rule via a pandas sort, I run into a number of issues including: The pandas DataFrame.sort_values() function accepts a key for sorting like the sort() function, but the key function should be vectorized (per the pandas documentation). If I try to apply the sorting key to only column 'one', I get the error "TypeError: cannot convert the series to <class 'int'>" When you use the pandas DataFrame.sort_values() method, it applies the sort key to all columns you pass in. This will not work since I want to sort first by the column 'two' using a native numerical sort. How would I go about sorting the DataFrame as mentioned above? | You can split column one into its constituent parts, add them as columns to the dataframe and then sort on them with column two. Finally, remove the temporary columns. >>> (df.assign(lhs=df['one'].str[0], rhs=df['one'].str[1:].astype(int)) .sort_values(['two', 'rhs', 'lhs']) .drop(columns=['lhs', 'rhs'])) one two 5 A1 1 1 B1 1 3 A2 1 2 A1 2 4 B1 2 0 A2 2 | 10 | 3 |
64,348,644 | 2020-10-14 | https://stackoverflow.com/questions/64348644/set-default-logging-extra-for-pythons-logger-object | In built-in python's logging module you create a logger and log messages: import logging log = logging.getLogger('mylogger') log.info('starting!') You can also pass extras to the log message that will be used later by the formatter: user = 'john doe' log.info('starting!', extra={'user': 'john doe'}) However it's quite tedious to do it explicitly with every message. Is it possible to set extras for the whole logger, so they would be passed with every log? Something like: log = logging.getLogger('mylogger') log.extras = {'user': 'john doe'} log.info('starting!') Currently the only way it seems is to monkey patch log() method: def patch_default_extras(logger, extras): original_log = logger.log def log_with_extras(level, msg, *args, **kwargs): kwargs['extra'] = {**extras, **kwargs.get('extra', {})} return original_log(level, msg, *args, **kwargs) logger.log = log_with_extras mylog = logging.getLogger('mylogger') patch_default_extras(mylog, {'user': 'john doe'}) mylog = logging.getLogger('mylogger') mylog.log(logging.ERROR, 'hello') Which is rather ugly, but unless I'm missing something ‒ it's the only way? | The builtin way to do this is using a logging adapter. A filter would also work but for your described use-case adapters are perfect. import logging logging.basicConfig(level=logging.DEBUG, format='user: %(user)s - message: %(msg)s') logger = logging.getLogger() logger_with_user = logging.LoggerAdapter(logger, {'user': 'jane doe'}) logger_with_user.info('test') # user: jane doe - message: test | 10 | 20 |
64,347,217 | 2020-10-14 | https://stackoverflow.com/questions/64347217/error-pickle-picklingerror-cant-pickle-function-lambda-at-0x0000002f2175b | I am trying to run following code that reported running well with other users, but I found this error. -- coding: utf-8 -- Import the Stuff import torch import torch.nn as nn import torch.optim as optim from torch.utils import data from torch.utils.data import DataLoader import torchvision.transforms as transforms import cv2 import numpy as np import csv Step1: Read from the log file samples = [] with open('data/driving_log.csv') as csvfile: reader = csv.reader(csvfile) next(reader, None) for line in reader: samples.append(line) Step2: Divide the data into training set and validation set train_len = int(0.8*len(samples)) valid_len = len(samples) - train_len train_samples, validation_samples = data.random_split(samples, lengths=[train_len, valid_len]) Step3a: Define the augmentation, transformation processes, parameters and dataset for dataloader def augment(imgName, angle): name = 'data/IMG/' + imgName.split('/')[-1] current_image = cv2.imread(name) current_image = current_image[65:-25, :, :] if np.random.rand() < 0.5: current_image = cv2.flip(current_image, 1) angle = angle * -1.0 return current_image, angle class Dataset(data.Dataset): def __init__(self, samples, transform=None): self.samples = samples self.transform = transform def __getitem__(self, index): batch_samples = self.samples[index] steering_angle = float(batch_samples[3]) center_img, steering_angle_center = augment(batch_samples[0], steering_angle) left_img, steering_angle_left = augment(batch_samples[1], steering_angle + 0.4) right_img, steering_angle_right = augment(batch_samples[2], steering_angle - 0.4) center_img = self.transform(center_img) left_img = self.transform(left_img) right_img = self.transform(right_img) return (center_img, steering_angle_center), (left_img, steering_angle_left), (right_img, steering_angle_right) def __len__(self): return len(self.samples) Step3b: Creating generator using the dataloader to parallasize the process transformations = transforms.Compose([transforms.Lambda(lambda x: (x / 255.0) - 0.5)]) params = {'batch_size': 32, 'shuffle': True, 'num_workers': 4} training_set = Dataset(train_samples, transformations) training_generator = data.DataLoader(training_set, **params) validation_set = Dataset(validation_samples, transformations) validation_generator = data.DataLoader(validation_set, **params) Step4: Define the network class NetworkDense(nn.Module): def __init__(self): super(NetworkDense, self).__init__() self.conv_layers = nn.Sequential( nn.Conv2d(3, 24, 5, stride=2), nn.ELU(), nn.Conv2d(24, 36, 5, stride=2), nn.ELU(), nn.Conv2d(36, 48, 5, stride=2), nn.ELU(), nn.Conv2d(48, 64, 3), nn.ELU(), nn.Conv2d(64, 64, 3), nn.Dropout(0.25) ) self.linear_layers = nn.Sequential( nn.Linear(in_features=64 * 2 * 33, out_features=100), nn.ELU(), nn.Linear(in_features=100, out_features=50), nn.ELU(), nn.Linear(in_features=50, out_features=10), nn.Linear(in_features=10, out_features=1) ) def forward(self, input): input = input.view(input.size(0), 3, 70, 320) output = self.conv_layers(input) output = output.view(output.size(0), -1) output = self.linear_layers(output) return output class NetworkLight(nn.Module): def __init__(self): super(NetworkLight, self).__init__() self.conv_layers = nn.Sequential( nn.Conv2d(3, 24, 3, stride=2), nn.ELU(), nn.Conv2d(24, 48, 3, stride=2), nn.MaxPool2d(4, stride=4), nn.Dropout(p=0.25) ) self.linear_layers = nn.Sequential( nn.Linear(in_features=48*4*19, out_features=50), nn.ELU(), nn.Linear(in_features=50, out_features=10), nn.Linear(in_features=10, out_features=1) ) def forward(self, input): input = input.view(input.size(0), 3, 70, 320) output = self.conv_layers(input) output = output.view(output.size(0), -1) output = self.linear_layers(output) return output Step5: Define optimizer model = NetworkLight() optimizer = optim.Adam(model.parameters(), lr=0.0001) criterion = nn.MSELoss() Step6: Check the device and define function to move tensors to that device device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('device is: ', device) def toDevice(datas, device): imgs, angles = datas return imgs.float().to(device), angles.float().to(device) Step7: Train and validate network based on maximum epochs defined max_epochs = 22 for epoch in range(max_epochs): model.to(device) # Training train_loss = 0 model.train() for local_batch, (centers, lefts, rights) in enumerate(training_generator): # Transfer to GPU centers, lefts, rights = toDevice(centers, device), toDevice(lefts, device), toDevice(rights, device) # Model computations optimizer.zero_grad() datas = [centers, lefts, rights] for data in datas: imgs, angles = data # print("training image: ", imgs.shape) outputs = model(imgs) loss = criterion(outputs, angles.unsqueeze(1)) loss.backward() optimizer.step() train_loss += loss.data[0].item() if local_batch % 100 == 0: print('Loss: %.3f ' % (train_loss/(local_batch+1))) # Validation model.eval() valid_loss = 0 with torch.set_grad_enabled(False): for local_batch, (centers, lefts, rights) in enumerate(validation_generator): # Transfer to GPU centers, lefts, rights = toDevice(centers, device), toDevice(lefts, device), toDevice(rights, device) # Model computations optimizer.zero_grad() datas = [centers, lefts, rights] for data in datas: imgs, angles = data # print("Validation image: ", imgs.shape) outputs = model(imgs) loss = criterion(outputs, angles.unsqueeze(1)) valid_loss += loss.data[0].item() if local_batch % 100 == 0: print('Valid Loss: %.3f ' % (valid_loss/(local_batch+1))) Step8: Define state and save the model wrt to state state = { 'model': model.module if device == 'cuda' else model, } torch.save(state, 'model.h5') this is the error message: "D:\VICO\Back up\venv\Scripts\python.exe" "D:/VICO/Back up/venv/Scripts/self_driving_car.py" device is: cpu Traceback (most recent call last): File "D:/VICO/Back up/venv/Scripts/self_driving_car.py", line 163, in <module> for local_batch, (centers, lefts, rights) in enumerate(training_generator): File "D:\VICO\Back up\venv\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__ return _MultiProcessingDataLoaderIter(self) File "D:\VICO\Back up\venv\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__ w.start() File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__ reduction.dump(process_obj, to_child) File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) _pickle.PicklingError: Can't pickle <function <lambda> at 0x0000002F2175B048>: attribute lookup <lambda> on __main__ failed Process finished with exit code 1 I am not sure the next step to resolve the problem. | pickle doesn't pickle function objects. It expects to find the function object by importing its module and looking up its name. lambdas are anonymous functions (no name) so that doesn't work. The solution is to name the function at module level. The only lambda I found in your code is transformations = transforms.Compose([transforms.Lambda(lambda x: (x / 255.0) - 0.5)]) Assuming that's the troublesome function, you can def _my_normalization(x): return x/255.0 - 0.5 transformations = transforms.Compose([transforms.Lambda(_my_normalization]) You may have other problems because it looks like you are doing work at module level. If this is a multiprocessing thing and you are running on windows, the new process will import the file and run all of that module level code again. This isn't a problem on linux/mac where a forked process already has the modules loaded from the parent. | 9 | 8 |
64,291,076 | 2020-10-10 | https://stackoverflow.com/questions/64291076/generating-all-permutations-efficiently | I need to generate as fast as possible all permutations of integers 0, 1, 2, ..., n - 1 and have result as a NumPy array of shape (factorial(n), n), or to iterate through large portions of such an array to conserve memory. Is there some built-in function in NumPy for doing this? Or some combination of functions. Using itertools.permutations(...) is too slow, I need a faster method. | Here's a NumPy solution that builds the permutations of size m by modifying the permutations of size m-1 (see more explanation further down): from math import factorial def permutations(n): a = np.zeros((factorial(n), n), np.uint8) f = 1 for m in range(2, n+1): b = a[:f, n-m+1:] # the block of permutations of range(m-1) for i in range(1, m): a[i*f:(i+1)*f, n-m] = i a[i*f:(i+1)*f, n-m+1:] = b + (b >= i) b += 1 f *= m return a Demo: >>> permutations(3) array([[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]], dtype=uint8) For n=10, the itertools solution takes 5.5 seconds for me while this NumPy solution takes 0.2 seconds. How it proceeds: It starts with a zero-array of the goal size, which already contains the permutations for range(1) at the top-right (I "dotted out" the other parts of the array): [[. . 0] [. . .] [. . .] [. . .] [. . .] [. . .]] Then it turns that into the permutations of range(2): [[. 0 1] [. 1 0] [. . .] [. . .] [. . .] [. . .]] And then into the permutations of range(3): [[0 1 2] [0 2 1] [1 0 2] [1 2 0] [2 0 1] [2 1 0]] It does so by filling the next-left column and by copying/modifying the previous block of permutations downwards. | 13 | 12 |
64,314,141 | 2020-10-12 | https://stackoverflow.com/questions/64314141/psycopg2-importerror-dll-load-failed-while-importing-psycopg-the-operating | I installed psycopg2 using conda on Windows 10. https://anaconda.org/anaconda/psycopg2 I did it in a clean new conda environment (named wr). I then tried to run this sample app but I am getting this error (see below). I have no idea what I might be doing wrong because it was all straightforward and I did it in a clean way. Any ideas how to solve this? import psycopg2 try: connection = psycopg2.connect(user = "***", password = "***", host = "***", port = "5432", database = "***") cursor = connection.cursor() # Print PostgreSQL Connection properties print ( connection.get_dsn_parameters(),"\n") # Print PostgreSQL version cursor.execute("SELECT version();") record = cursor.fetchone() print("You are connected to - ", record,"\n") except (Exception, psycopg2.Error) as error : print ("Error while connecting to PostgreSQL", error) finally: #closing database connection. if(connection): cursor.close() connection.close() print("PostgreSQL connection is closed") Error in VS code: PS C:\Work\WRR\git\tools\JTunnelTestApp> cd 'c:\Work\WRR\git\tools\JTunnelTestApp'; & 'C:\Programs\Anaconda3\envs\wr\python.exe' 'c:\Users\petrop01\.vscode\extensions\ms-python.python-2020.9.114305\pythonFiles\lib\python\debugpy\launcher' '56143' '--' 'c:\Work\WRR\git\tools\JTunnelTestApp\main.py' Traceback (most recent call last): File "c:\Work\WRR\git\tools\JTunnelTestApp\main.py", line 1, in <module> import psycopg2 File "C:\Programs\Anaconda3\envs\wr\lib\site-packages\psycopg2\__init__.py", line 51, in <module> from psycopg2._psycopg import ( # noqa ImportError: DLL load failed while importing _psycopg: The operating system cannot run %1. PS C:\Work\WRR\git\tools\JTunnelTestApp> EDIT: Seems they had a bug open for this 2 years ago and they just closed it, ignoring it completely. https://github.com/psycopg/psycopg2/issues/734 | You can use psycopg2-binary library instead of psycopg2. After installation the usage is the same. psycopg2 requires some dll file to be installed on your operating system. You can either install it manually or you can install psycopg2-binary instead which packs both the library and dll files together. The documentation resides in psycopg2 library docs: https://psycopg.org/docs/install.html | 20 | 18 |
64,282,629 | 2020-10-9 | https://stackoverflow.com/questions/64282629/why-does-module-file-return-none | I've added a local directory to my system path. When I import it, it's fine but when I do import local_repo print(local_repo.__file__) it returns None. How do I get it to return the path... P.S. When I try this with other modules works fine - returns the path - for example import pathlib print(pathlib.__file__) >>>> "C:\Python38\lib\pathlib.py" | __file__ is None because there is no file for your package. You've made a namespace package, and those aren't even supposed to correspond to a specific directory, let alone a file - they're designed for a very specialized use case that doesn't match what you're doing. If you want a regular package, with a non-None __file__ and all, you should give it an __init__.py file. __file__ will then be the path to the package's __init__.py. | 7 | 7 |
64,255,154 | 2020-10-8 | https://stackoverflow.com/questions/64255154/change-tf-contrib-layers-xavier-initializer-to-2-0-0 | how can I change tf.contrib.layers.xavier_initializer() to tf version >= 2.0.0 ?? all codes: W1 = tf.get_variable("W1", shape=[self.input_size, h_size], initializer=tf.contrib.layers.xavier_initializer()) | the TF2 replacement for tf.contrib.layers.xavier_initializer() is tf.keras.initializers.glorot_normal(Xavier and Glorot are 2 names for the same initializer algorithm, referring to one researcher named Xavier Glorot) documentation link. if dtype is important for some compatibility reasons - use tf.compat.v1.keras.initializers.glorot_normal | 9 | 9 |
64,252,764 | 2020-10-7 | https://stackoverflow.com/questions/64252764/sql-case-when-x-like-t-in-python-script-resulting-in-typeerror-dict-is-not | I am attempting to run a SQL query against a Redshift DB in a Python script. The following results in an error "TypeError: dict is not a sequence" and I cannot figure out why. test1 = """ WITH A AS ( SELECT *, CASE WHEN substring(text_value, 0, 4) LIKE ' ' THEN substring(substring(text_value, 0, 4), 3) ELSE substring(text_value, 0, 4) END AS cost_center_id FROM job_custom_fields WHERE key = 'cost_center' ) SELECT job_id, display_value, CASE WHEN cost_center_id LIKE '%T%' THEN SUBSTRING(cost_center_id, 1,1) WHEN cost_center_id LIKE '%D%' THEN SUBSTRING(cost_center_id, 1,1) WHEN cost_center_id LIKE '%P%' THEN SUBSTRING(cost_center_id, 1,1) ELSE cost_center_id END AS cost_center_id FROM A """ red_engine = create_engine('postgresql+psycopg2://org_525:[email protected]:5439/greenhouse') test = pd.read_sql_query(test1,red_engine) test Any assistance is much appreciated | you need to escape % character using %% in Python. % is used for string formatting in Python. Not a solution to this problem but it's worth mentioning here that in Python 3.6 and later, there is a more recommended way of string formatting using f-strings. For example: name = 'eshirvana' message = f"I am {name}" print(message) | 17 | 41 |
64,303,607 | 2020-10-11 | https://stackoverflow.com/questions/64303607/python-asyncio-how-to-read-stdin-and-write-to-stdout | I have a need to async read StdIn in order to get messages (json terminated by \r\n) and after processing async write updated message to StdOut. At the moment I am doing it synchronous like: class SyncIOStdInOut(): def write(self, payload: str): sys.stdout.write(payload) sys.stdout.write('\r\n') sys.stdout.flush() def read(self) -> str: payload=sys.stdin.readline() return payload How to do the same but asynchronously? | Here's an example of echo stdin to stdout using asyncio streams (for Unix). import asyncio import sys async def connect_stdin_stdout(): loop = asyncio.get_event_loop() reader = asyncio.StreamReader() protocol = asyncio.StreamReaderProtocol(reader) await loop.connect_read_pipe(lambda: protocol, sys.stdin) w_transport, w_protocol = await loop.connect_write_pipe(asyncio.streams.FlowControlMixin, sys.stdout) writer = asyncio.StreamWriter(w_transport, w_protocol, reader, loop) return reader, writer async def main(): reader, writer = await connect_stdin_stdout() while True: res = await reader.read(100) if not res: break writer.write(res) await writer.drain() if __name__ == "__main__": asyncio.run(main()) As a ready-to-use solution, you could use aioconsole library. It implements a similar approach, but also provide additional useful asynchronous equivalents to input, print, exec and code.interact: from aioconsole import get_standard_streams async def main(): reader, writer = await get_standard_streams() Update: Let's try to figure out how the function connect_stdin_stdout works. Get the current event loop: loop = asyncio.get_event_loop() Create StreamReader instance. reader = asyncio.StreamReader() Usually, StreamReader/StreamWriter classes are not intended to be directly instantiated and should only be used as a result of functions such as open_connection() and start_server(). StreamReader provides a buffered asynchronous interface to some data stream. Some source(library code) calls its functions such as feed_data, feed_eof, the data is buffered and can be read using the documented interface coroutine read(), readline(), and etc. Create StreamReaderProtocol instance. protocol = asyncio.StreamReaderProtocol(reader) This class is derived from asyncio.Protocol and FlowControlMixin and helps to adapt between Protocol and StreamReader. It overrides such Protocol methods as data_received, eof_received and calls StreamReader methods feed_data. Register standard input stream stdin in the event loop. await loop.connect_read_pipe(lambda: protocol, sys.stdin) The connect_read_pipe function takes as a pipe parameter a file-like object. stdin is a file-like object. From now, all data read from the stdin will fall into the StreamReaderProtocol and then pass into StreamReader Register standard output stream stdout in the event loop. w_transport, w_protocol = await loop.connect_write_pipe(FlowControlMixin, sys.stdout) In connect_write_pipe you need to pass a protocol factory that creates protocol instances that implement flow control logic for StreamWriter.drain(). This logic is implemented in the class FlowControlMixin. Also StreamReaderProtocol inherited from it. Create StreamWriter instance. writer = asyncio.StreamWriter(w_transport, w_protocol, reader, loop) This class forwards the data passed to it using functions write(), writelines() and etc. to the underlying transport. protocol is used to support the drain() function to wait for the moment that the underlying transport has flushed its internal buffer and is available for writing again. reader is an optional parameter and can be None, it is also used to support the drain() function, at the start of this function it is checked if an exception was set for the reader, for example, due to a connection lost (relevant for sockets and bidirectional connections), then drain() will also throw an exception. You can read more about StreamWriter and drain() function in this great answer. Update 2: To read lines with \r\n separator readuntil can be used | 20 | 29 |
64,280,161 | 2020-10-9 | https://stackoverflow.com/questions/64280161/importing-python-files-in-docker-container | When running my docker image, I get an import error: File "./app/main.py", line 8, in <module> import wekinator ModuleNotFoundError: No module named 'wekinator'` How do I import local python modules in Docker? Wouldn't the COPY command copy the entire "app" folder (including both files), hence preserving the correct import location? . ├── Dockerfile ├── README.md └── app ├── main.py └── wekinator.py FROM python:3.7 RUN pip install fastapi uvicorn python-osc EXPOSE 80 COPY ./app /app CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"] | Setting the PYTHONPATH as such works but feels clumsy: ENV PYTHONPATH "${PYTHONPATH}:/app/" Using Docker's WORKDIR allows for a much cleaner solution: FROM python:3.7 WORKDIR /code RUN pip install fastapi uvicorn python-osc EXPOSE 80 COPY app ./app CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"] Note how the COPY statement changed. | 7 | 3 |
64,236,463 | 2020-10-7 | https://stackoverflow.com/questions/64236463/jupyter-notebook-installation-error-building-wheel-for-argon2-cffi-pep-517 | Building wheels for collected packages: argon2-cffi Building wheel for argon2-cffi (PEP 517) ... error ERROR: Command errored out with exit status 1: command: 'c:\users\prasa\appdata\local\programs\python\python39\python.exe' 'c:\users\prasa\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' build_wheel 'C:\Users\prasa\AppData\Local\Temp\tmpcczeigwt' cwd: C:\Users\prasa\AppData\Local\Temp\pip-install-iele2h25\argon2-cffi Complete output (17 lines): running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.9 creating build\lib.win-amd64-3.9\argon2 copying src\argon2\exceptions.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\low_level.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\_ffi_build.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\_legacy.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\_password_hasher.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\_utils.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\__init__.py -> build\lib.win-amd64-3.9\argon2 copying src\argon2\__main__.py -> build\lib.win-amd64-3.9\argon2 running build_clib building 'argon2' library error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for argon2-cffi Failed to build argon2-cffi ERROR: Could not build wheels for argon2-cffi which use PEP 517 and cannot be installed directly I installed python 3.9 on my computer. When I try to install jupyter notebook I got this error. SO how to solve this issue ? | In case, you are a mac user on Intel CPUs, just check your pip version, if you are installing through the command : pip install notebook Upgrade your PIP, the command that worked for me: /Library/Developer/CommandLineTools/usr/bin/python3 -m pip install --upgrade pip Check if its the latest version of PIP , by using command : pip --version After this again, try : pip install notebook This time you should not see any error. Then include ~/Library/Python/3.8/bin in your path variable. Check if its there by : echo $PATH And then launch the jupyter notebook by command : jupyter notebook Pls upvote, if you found this answer useful :) | 21 | 18 |
64,241,264 | 2020-10-7 | https://stackoverflow.com/questions/64241264/i-have-a-high-performant-function-written-in-julia-how-can-i-use-it-from-python | I have a found a Julia function that nicely does the job I need. How can I quickly integrate it to be able to call it from Python? Suppose the function is f(x,y) = 2x.+y What is the best and most elegent way to use it from Python? | Assuming your Python and Julia are installed you need to take the following steps. Run Julia and install PyCall using Pkg pkg"add PyCall" Put your code into a Julia package using Pkg Pkg.generate("MyPackage") In the folder src you will find MyPackage.jl, edit it to look like this: module MyPackage f(x,y) = 2x.+y export f end Install pyjulia python -m pip install julia (On Linux systems you might want to use python3 instead of python command) For this step note that while an external Python can be used with Julia. However, for a convenience it might be worth to consider using a Python that got installed together with Julia as PyCall. In that case for installation use a command such this one: %HOMEPATH%\.julia\conda\3\python -m pip install julia or on Linux ~/.julia/conda/3/python -m pip install julia Note that if you have JULIA_DEPOT_PATH variable defined you can replace %HOMEPATH%\.julia or ~/.julia/ with its value. Run the appropiate Python and tell it to configure the Python-Julia integration: import julia julia.install() Now you are ready to call your Julia code: >>> from julia import Pkg >>> Pkg.activate(".\\MyPackage") #use the correct path Activating environment at `MyPackage\Project.toml` >>> from julia import MyPackage >>> MyPackage.f([1,2],5) [7,9] It is worth noting that the proposed approach in this answer has several advantages over a standalone Julia file which would be possible, although is less recommended. The advantages include: Packages get precompiled (so they are faster in subsequent runs) and can be loaded as a package in Python. Packages come with their own virtual environment via 1Project.toml` which makes production deployments much comfortable. A Julia package can be statically compiled into Julia's system image which can slash itsloading time --- see https://github.com/JuliaLang/PackageCompiler.jl . EDIT In February 2022 juliacall was announced, as of December 2022 juliacall might be an easier option for some users - have a look at: How to load a custom Julia package in Python using Python's juliacall | 38 | 46 |
64,337,087 | 2020-10-13 | https://stackoverflow.com/questions/64337087/typeerror-init-got-an-unexpected-keyword-argument-name-when-loading-a-m | I made a custom layer in keras for reshaping the outputs of a CNN before feeding to ConvLSTM2D layer class TemporalReshape(Layer): def __init__(self,batch_size,num_patches): super(TemporalReshape,self).__init__() self.batch_size = batch_size self.num_patches = num_patches def call(self,inputs): nshape = (self.batch_size,self.num_patches)+inputs.shape[1:] return tf.reshape(inputs, nshape) def get_config(self): config = super().get_config().copy() config.update({'batch_size':self.batch_size,'num_patches':self.num_patches}) return config When I try to load the best model using model = tf.keras.models.load_model('/content/saved_models/model_best.h5',custom_objects={'TemporalReshape':TemporalReshape}) I get the error TypeError Traceback (most recent call last) <ipython-input-83-40b46da33e91> in <module>() ----> 1 model = tf.keras.models.load_model('/content/saved_models/model_best.h5',custom_objects={'TemporalReshape':TemporalReshape}) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile, options) 180 if (h5py is not None and ( 181 isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))): --> 182 return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) 183 184 filepath = path_to_string(filepath) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile) 176 model_config = json.loads(model_config.decode('utf-8')) 177 model = model_config_lib.model_from_config(model_config, --> 178 custom_objects=custom_objects) 179 180 # set weights /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects) 53 '`Sequential.from_config(config)`?') 54 from tensorflow.python.keras.layers import deserialize # pylint: disable=g-import-not-at-top ---> 55 return deserialize(config, custom_objects=custom_objects) 56 57 /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects) 173 module_objects=LOCAL.ALL_OBJECTS, 174 custom_objects=custom_objects, --> 175 printable_module_name='layer') /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 356 custom_objects=dict( 357 list(_GLOBAL_CUSTOM_OBJECTS.items()) + --> 358 list(custom_objects.items()))) 359 with CustomObjectScope(custom_objects): 360 return cls.from_config(cls_config) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py in from_config(cls, config, custom_objects) 615 """ 616 input_tensors, output_tensors, created_layers = reconstruct_from_config( --> 617 config, custom_objects) 618 model = cls(inputs=input_tensors, outputs=output_tensors, 619 name=config.get('name')) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py in reconstruct_from_config(config, custom_objects, created_layers) 1202 # First, we create all layers and enqueue nodes to be processed 1203 for layer_data in config['layers']: -> 1204 process_layer(layer_data) 1205 # Then we process nodes in order of layer depth. 1206 # Nodes that cannot yet be processed (if the inbound node /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py in process_layer(layer_data) 1184 from tensorflow.python.keras.layers import deserialize as deserialize_layer # pylint: disable=g-import-not-at-top 1185 -> 1186 layer = deserialize_layer(layer_data, custom_objects=custom_objects) 1187 created_layers[layer_name] = layer 1188 /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects) 173 module_objects=LOCAL.ALL_OBJECTS, 174 custom_objects=custom_objects, --> 175 printable_module_name='layer') /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 358 list(custom_objects.items()))) 359 with CustomObjectScope(custom_objects): --> 360 return cls.from_config(cls_config) 361 else: 362 # Then `cls` may be a function returning a class. /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in from_config(cls, config) 695 A layer instance. 696 """ --> 697 return cls(**config) 698 699 def compute_output_shape(self, input_shape): TypeError: __init__() got an unexpected keyword argument 'name' When building the model, I used the custom layer like the following : x = TemporalReshape(batch_size = 8, num_patches = 16)(x) What is causing the error and how to load the model without this error? | Based on the error message only, I would suggest putting **kwargs in __init__. This object will then accept any other keyword argument that you haven't included. def __init__(self, batch_size, num_patches, **kwargs): super(TemporalReshape, self).__init__(**kwargs) # <--- must, thanks https://stackoverflow.com/users/349130/dr-snoopy self.batch_size = batch_size self.num_patches = num_patches | 14 | 21 |
64,306,147 | 2020-10-11 | https://stackoverflow.com/questions/64306147/using-playwright-for-python-how-do-i-select-an-option-from-a-drop-down-list | This is a followup to this question on the basic functionality of Playwright for Python. How do I select an option from a drop down list? This example remote controls a vuejs-webseite that has a drop down list of fruits like "Apple", "Banana", "Carrot", "Orange" Here I want to select the option "Banana" from playwright import sync_playwright import time URL = '<my url>' with sync_playwright() as p: browser = p.chromium.launch(headless=False) page = browser.newPage() page.goto(URL) # identify this element by ID. Wait for it first new_selector = 'id=name-fruit' page.waitForSelector(new_selector) handle = page.querySelector(new_selector) # at this point I have the element and can print the content print(handle.innerHTML()) The drop down list HTML like this <select data-v-c2cef47a="" id="name-fruit" autocomplete="true" class="form-select__select"> <option data-v-c2cef47a="" disabled="disabled" value=""><!----></option> <option data-v-c2cef47a="" value="[object Object]"> Apple </option> <option data-v-c2cef47a="" value="[object Object]"> Banana </option> <option data-v-c2cef47a="" value="[object Object]"> Carrot </option> <option data-v-c2cef47a="" value="[object Object]"> Orange </option> </select> in Selenium, I would select an option like this from selenium.webdriver.support.ui import Select Select(handle).select_by_visible_text('Banana') # Note: no spaces needed! The Javascript docs for Playwright have this, which is not exactly the same, because it seems to identify the object at the same time. // Single selection matching the label await page.selectOption('select#colors', { label: 'Blue' }); How do I do the selection in Playwright for Python? I tried both of these and nothing happens. handle.selectOption("text=Banana") handle.selectOption("text= Banana ") | After trying many different variants, I guessed a working syntax handle.selectOption({"label": "Banana"}) | 9 | 6 |
64,268,575 | 2020-10-8 | https://stackoverflow.com/questions/64268575/how-can-i-import-the-first-and-only-dict-out-of-a-top-level-array-in-a-json-file | I'm working with a json file in Python and I wanted to convert it into a dict. This is what my file looks like:
[
  {
    "label": "Label",
    "path": "/label-path",
    "image": "icon.svg",
    "subcategories": [
      {
        "title": "Main Title",
        "categories": {
          "column1": [
            {
              "label": "Label Title",
              "path": "/Desktop/Folder"
            }
          ]
        }
      }
    ]
  }
]

So this is what I did:
import json

# Opening JSON file
f = open('file.json')

# returns JSON object as a list
data = json.load(f)

However, data is now a list, not a dict. I tried to think about how I can convert it to a dict, but (I) I am not sure how to do this; (II) isn't there a way of importing the file already as a dict? | Your data gets imported as a list because in your JSON file the main structure is an array (square brackets), which is comparable to a list in Python. If you want just the inner dict you can do
data = json.load(f)[0] | 18 | 18 |
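A small variation on the accepted answer that also closes the file and only unwraps the outer list when it is actually there:
import json

# file.json is the example file from the question
with open('file.json') as f:
    data = json.load(f)

# the top-level JSON array becomes a Python list; unwrap it if needed
if isinstance(data, list):
    data = data[0]

print(data['label'])   # -> 'Label'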
64,324,688 | 2020-10-12 | https://stackoverflow.com/questions/64324688/how-can-i-get-the-tag-name-in-selenium-python | Is there a way to get the name of the tag of a Selenium web element? I found there is .getTagName() in Selenium for Java, in How do I get a parent HTML Tag with Selenium WebDriver using Java?.
Example: In this HTML, if I iterate through class='some_class_name', how can I get the tag_name (h2, p, ul or li)?
<div class='some_class_name'>
    <h2>Some Random Heading 1</h2>
    <p>Lorem Ipsum is simply dummy text of the printing and typesetting industry</p>
    <h2>Some Random Heading 2</h2>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
    <p>Random list are:</p>
    <ul>
        <li>Lorem</li>
        <li>Ipsum</li>
        <li>Dolor</li>
    </ul>
</div>

The Python code may look like this...
content = driver.find_elements_by_xpath("//div[@class='some_class_name']//child::*")
for c in content:
    print(c.getTagName())  # Something like this to get the
                           # tag of inner content... | In Python, you have tag_name to get the tag name.
content = driver.find_elements_by_xpath("//div[@class='some_class_name']//child::*")
for c in content:
    print(c.tag_name) | 9 | 18 |
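For readers on Selenium 4+, where the find_elements_by_* helpers are deprecated, the equivalent pattern uses the By locator; the URL below is a placeholder for the page containing the div above:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # any driver works; Chrome is just an example
driver.get("<page with the div above>")

content = driver.find_elements(By.XPATH, "//div[@class='some_class_name']//child::*")
for c in content:
    print(c.tag_name)                # h2, p, h2, p, p, ul, li, li, li

driver.quit()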
64,268,081 | 2020-10-8 | https://stackoverflow.com/questions/64268081/creating-a-subplot-of-images-with-plotly | I wanted to display the first 10 images from the mnist dataset with plotly. This is turning out to be more complicated than I thought. This does not work: import numpy as np np.random.seed(123) import plotly.express as px from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() fig = subplots.make_subplots(rows=1, cols=10) fig.add_trace(px.imshow(X_train[0]), row=1, col=1) as it results in ValueError: Invalid element(s) received for the 'data' property of Invalid elements include: [Figure({ 'data': [{'coloraxis': 'coloraxis', 'type': 'heatmap', 'z': array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], dtype=uint8)}], 'layout': {'coloraxis': {'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']]}, 'margin': {'t': 60}, 'template': '...', 'xaxis': {'constrain': 'domain', 'scaleanchor': 'y'}, 'yaxis': {'autorange': 'reversed', 'constrain': 'domain'}} })] The 'data' property is a tuple of trace instances that may be specified as: - A list or tuple of trace instances (e.g. [Scatter(...), Bar(...)]) - A single trace instance (e.g. Scatter(...), Bar(...), etc.) - A list or tuple of dicts of string/value properties where: - The 'type' property specifies the trace type One of: ['area', 'bar', 'barpolar', 'box', 'candlestick', 'carpet', 'choropleth', 'choroplethmapbox', 'cone', 'contour', 'contourcarpet', 'densitymapbox', 'funnel', 'funnelarea', 'heatmap', 'heatmapgl', 'histogram', 'histogram2d', 'histogram2dcontour', 'image', 'indicator', 'isosurface', 'mesh3d', 'ohlc', 'parcats', 'parcoords', 'pie', 'pointcloud', 'sankey', 'scatter', 'scatter3d', 'scattercarpet', 'scattergeo', 'scattergl', 'scattermapbox', 'scatterpolar', 'scatterpolargl', 'scatterternary', 'splom', 'streamtube', 'sunburst', 'surface', 'table', 'treemap', 'violin', 'volume', 'waterfall'] - All remaining properties are passed to the constructor of the specified trace type (e.g. [{'type': 'scatter', ...}, {'type': 'bar, ...}]) neither does fig.add_trace(go.Image(X_train[0]), row=1, col=1) or fig.add_trace(go.Figure(go.Heatmap(z=X_train[0])), 1,1) I am starting to run out of ideas. It should be possible to have a row of images as a header. | Using facets Please consider the following answer, which is much simpler: import plotly.express as px from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() fig = px.imshow(X_train[:10, :, :], binary_string=True, facet_col=0, facet_col_wrap=5) It produces the following output: Using animation For exploration purposes and although this is not exactly what you are asking for, you can also make an animation and visualize one digit at a time: fig = px.imshow(X_train[:10, :, :], binary_string=True, animation_frame=0) Source Imshow documentation | 9 | 8 |
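If the original make_subplots layout is still preferred over facets, one approach (sketched here, not taken from the answer) is to add each grayscale digit as a go.Heatmap trace, since go.Image expects RGB(A) data:
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from keras.datasets import mnist

(X_train, y_train), _ = mnist.load_data()

fig = make_subplots(rows=1, cols=10)
for i in range(10):
    fig.add_trace(go.Heatmap(z=X_train[i], colorscale='Greys', showscale=False),
                  row=1, col=i + 1)
fig.update_yaxes(autorange='reversed')   # draw images top-down, like imshow
fig.update_layout(height=250)
fig.show()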
64,334,033 | 2020-10-13 | https://stackoverflow.com/questions/64334033/how-to-solve-runtimeerror-cuda-error-invalid-device-ordinal | I'm trying to run this code. I don't know what is wrong with it, but this code is not running. and I don't know how to solve this problem. import cv2 from facial_emotion_recognition import EmotionRecognition emotion_detector = EmotionRecognition(device='gpu', gpu_id=1) camera = cv2.VideoCapture(0) while True: image = camera.read()[1] image = emotion_detector.recognise_emotion(image, return_type='BGR') cv2.imshow('Camera', image) key = cv2.waitKey(1) if key == 27: break camera.release() cv2.destroyAllWindows() but I'm getting this error: Traceback (most recent call last): File "/home/fahim/Documents/Python_projects/Python tutorials/pantech AI Master/Computer_Vision/Day 8 Face emotion recognition/emotion.py", line 4, in <module> emotion_detector = EmotionRecognition(device='gpu', gpu_id=1) File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/facial_emotion_recognition/facial_emotion_recognition.py", line 25, in __init__ self.network = NetworkV2(in_c=1, nl=32, out_f=7).to(self.device) File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 607, in to return self._apply(convert) File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 354, in _apply module._apply(fn) File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 354, in _apply module._apply(fn) File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 376, in _apply param_applied = fn(param) File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 605, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) RuntimeError: CUDA error: invalid device ordinal Process finished with exit code 1 This is my the configuration of my computer: GPU: NVIDIA GeForce MX130 CPU: Intel i5-10210U (8) @ 4.200GHz Help me to solve this please. | Try changing: emotion_detector = EmotionRecognition(device='gpu', gpu_id=1) To: emotion_detector = EmotionRecognition(device='gpu', gpu_id=0) gpu_id is only effective when more than one GPU is detected, you only seem to have one GPU, so it throws an error since you tell the function to get GPU 2 (since we count from 0). | 24 | 25 |
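A quick sanity check before picking gpu_id is to ask PyTorch what it can actually see; with a single MX130 the device count is 1, so only index 0 is valid:
import torch

print(torch.cuda.is_available())        # True if a usable CUDA device exists
print(torch.cuda.device_count())        # number of visible GPUs
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))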
64,261,546 | 2020-10-8 | https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst | I'm trying to install a package on Python, but Python is throwing an error on installing packages. I'm getting an error every time I tried to install pip install google-search-api. Here is the error how can I successfully install it? error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ I already updated that and have the latest version of 14.27 but the problem is throwing the same error. | Go to this link and download Microsoft C++ Build Tools: https://visualstudio.microsoft.com/visual-cpp-build-tools/ Open the installer, then follow the steps. You might have something like this, just download it or resume. If updating above doesn't work then you need to configure or make some updates here. You can make some updates here too by clicking "Modify". Check that and download what you need there or you might find that you just need to update Microsoft Visual C++ as stated on the error, but I also suggest updating everything there because you might still need it on your future programs. I think those with the C++ as I've done that before and had a similar problem just like that when installing a python package for creating WorldCloud visualization. UPDATE: December 28, 2020 You can also follow these steps here: Select: Workloads → Desktop development with C++ Then for Individual Components, select only: Windows 10 SDK C++ x64/x86 build tools You can also achieve the same automatically using the following command: vs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools Reference: https://www.scivision.dev/python-windows-visual-c-14-required | 222 | 330 |
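The compiler is only needed when pip has to build a dependency from source, so before installing the multi-gigabyte build tools it is sometimes enough to upgrade the packaging tooling so that pip picks up a prebuilt wheel instead; this does not always apply, but it is a cheap first check:
python -m pip install --upgrade pip setuptools wheel
pip install google-search-api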
64,312,153 | 2020-10-12 | https://stackoverflow.com/questions/64312153/tf-newaxis-operation-in-tensorflow | x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")

Can someone please explain how tf.newaxis works? I found a brief mention in the documentation https://www.tensorflow.org/api_docs/python/tf/strided_slice but I could not properly understand it. | Check this example:
a = tf.constant([100])
print(a.shape)  ## (1)

expanded_1 = tf.expand_dims(a, axis=1)
print(expanded_1.shape)  ## (1,1)

expanded_2 = a[:, tf.newaxis]
print(expanded_2.shape)  ## (1,1)

It is similar to expand_dims(), which adds a new axis. If you want to add a new axis at the beginning of the tensor, use
expanded_2 = a[tf.newaxis, :]
otherwise (at the end)
expanded_2 = a[:, tf.newaxis] | 16 | 19 |
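One detail worth knowing is that tf.newaxis is simply an alias for None, so the same indexing also works on plain NumPy arrays; the line from the question adds a trailing channel axis:
import numpy as np
import tensorflow as tf

print(tf.newaxis is None)                      # True

x_train = np.zeros((60000, 28, 28))
x_train = x_train[..., tf.newaxis].astype("float32")
print(x_train.shape)                           # (60000, 28, 28, 1)

# equivalent forms: x_train[..., None] or tf.expand_dims(x_train, axis=-1)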
64,253,599 | 2020-10-7 | https://stackoverflow.com/questions/64253599/spacy-confusion-about-word-vectors-and-tok2vec | it would be really helpful for me if you would help me understand some underlying concepts about Spacy. I understand some spacy models have some predefined static vectors, for example, for the Spanish models these are the vectors generated by FastText. I also understand that there is a tok2vec layer that generates vectors from tokens, and this is used for example as the input of the NER components of the model. If the above is correct, then I have some questions: Does the NER component also use the static vectors? If yes, then where does the tok2vec layer comes into play? If no, then is there any advantage on using the lg or md models if you only intend to use the model for e.g. the NER component? Is the tok2vec layer already trained for pretrained downloaded models, e.g. Spanish? If I replace the NER component of a pretrained model, does it keep the tok2vec layer untouched i.e. with the learned weights? Is the tok2vec layer also trained when I train a NER model? Would the pretrain command help the tok2vec layer learn some domain-specific words that may be OOV? Thanks a lot! | Does the NER component also use the static vectors? This is addressed in point 2 and 3 of my answer here. Is the tok2vec layer already trained for pretrained downloaded models, e.g. Spanish? Yes, the full model is trained, and the tok2vec layer is a part of it. If I replace the NER component of a pretrained model, does it keep the tok2vec layer untouched i.e. with the learned weights? No, not in the current spaCy v2. The tok2vec layer is part of the model, if you remove the model, you also remove the tok2vec layer. In the upcoming v3, you'll be able to separate these so you can in fact keep the tok2vec model separately, and share it between components. Is the tok2vec layer also trained when I train a NER model? Yes - see above Would the pretrain command help the tok2vec layer learn some domain-specific words that may be OOV? See also my answer at https://stackoverflow.com/a/63520262/7961860 If you have further questions - happy to discuss in the comments! | 6 | 6 |
64,296,359 | 2020-10-10 | https://stackoverflow.com/questions/64296359/how-can-i-install-pip-for-python2-7-in-ubuntu-20-04 | Is there any way that I can install "pip" for "Python2.7"? I could install python2.7 with
sudo apt install python2-minimal
I tried installing pip for this:
sudo apt install python-pip / python2-pip / python2.7-pip
but none worked. Does anybody have a solution for this? | Pip for Python 2 is not included in the Ubuntu 20.04 repositories. Try this guide, which suggests fetching a Python 2.7 compatible get-pip.py and using that to bootstrap pip.
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py | 35 | 23 |
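After downloading the script, it still has to be executed with the Python 2 interpreter, and invoking pip as a module afterwards keeps it separate from Python 3's pip; roughly:
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py
python2.7 get-pip.py
python2.7 -m pip --version
python2.7 -m pip install <some-package>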
64,284,698 | 2020-10-9 | https://stackoverflow.com/questions/64284698/export-conda-environment-with-minimized-requirements | The typical command to export a Anaconda environment to a YAML file is: conda env export --name my_env > myenv.yml However, one huge issue is the readbility of this file as it includes hard specifications for all of the libraries and all of their dependencies. Is there a way for Anaconda to export a list of the optimally smallest subset of commands that would subsume these dependencies to make the YAML more readable? For example, if all you installed in a conda environment was pip and scipy, is there a way for Anaconda to realize that the file should just read: name: my_env channels: - defaults dependencies: - scipy=1.3.1 - pip=19.2.3 That way, the anaconda environment will still have the exact same specification, if not an improved on (if an upstream bug is fixed) and anyone who looks at the yml file will understand what is "required" to run the code, in the sense that if they did want to/couldn't use the conda environment they would know what packages they needed to install? | Options from the Conda CLI This is sort of what the --from-history flag is for, but not exactly. Instead of including exact build info for each package, it will include only what are called explicit specifications, i.e., the specifications that a user has explicitly requested via the CLI (e.g., conda install scipy=1.3.1). Have a try: conda env export --from-history --name my_env > myenv.yml This will only include versions if the user originally included versions during installation. Hence, creating a new environment is very likely not going to use the exact same versions and builds. On the other hand, if the user originally included additional constraints beyond version and build they will also be included (e.g., a channel specification conda install conda-forge::numpy will lead to conda-forge::numpy). Another option worth noting is the --no-builds flag, which will export every package in the YAML, but leave out the build specifiers. These flags work in a mutually exclusive manner. conda-minify If this is not sufficient, then there is an external utility called conda-minify that offers some functionality to export an environment that is minimized based on a dependency tree rather than through the user's explicit specifications. | 32 | 43 |
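For reference, a short summary of the commands discussed above, side by side with recreating an environment from the exported file:
# only what was explicitly requested (most readable, least pinned)
conda env export --from-history --name my_env > myenv.yml

# every package, but without build strings (a middle ground)
conda env export --no-builds --name my_env > myenv.full.yml

# recreate an environment from either file
conda env create --file myenv.yml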
64,297,272 | 2020-10-10 | https://stackoverflow.com/questions/64297272/best-way-to-convert-ipynb-to-py-in-vscode | I'm looking for a good way to convert .ipynb to .py files in VSCode. So far I've tried:
- The "Export As" option built into VSCode. Not ideal, as it produces the following at the start of the script, as well as "Run Cell", "Run Below", etc. buttons/links: "To add a new cell, type '# %%' To add a new markdown cell, type '# %% [markdown]' %% from IPython import get_ipython"
- nbconvert. I usually insert this as a command in the script itself (https://stackoverflow.com/a/19779226/14198216) so it's automatically converted and saved once run. But this also leaves "Run Cell" etc., as well as execution markings (ex: "In [1]")
- jupytext. I also usually insert this as a command. At the start of the .py, it produces: -- coding: utf-8 -- --- jupyter: jupytext: text_representation:

Is there a nice, minimalist solution that doesn't insert a bunch of gunk (which needs to be manually edited out) in the .py version of my notebooks and can be easily/automatically executed from the notebook itself? This could also be setting tweaks that I'm currently unaware of that can make one of the things I mentioned work better. Thanks in advance. | We can add the following setting in "settings.json"; the generated Python file will then not have "Run Cell", "Run Above", "Debug Cell":
"jupyter.codeLenses": " ", | 23 | 5 |
64,281,002 | 2020-10-9 | https://stackoverflow.com/questions/64281002/pyinstaller-compiled-uvicorn-server-does-not-start-correctly | When I start the server.exe and it is trying to perform uvicorn.run(), the exception is being thrown: Traceback (most recent call last): File "logging\config.py", line 390, in resolve ModuleNotFoundError: No module named 'uvicorn.logging' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "logging\config.py", line 542, in configure File "logging\config.py", line 654, in configure_formatter File "logging\config.py", line 469, in configure_custom File "logging\config.py", line 397, in resolve File "logging\config.py", line 390, in resolve ValueError: Cannot resolve 'uvicorn.logging.DefaultFormatter': No module named 'uvicorn.logging' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "server.py", line 82, in <module> File "server.py", line 21, in run File "uvicorn\main.py", line 343, in run File "uvicorn\config.py", line 180, in __init__ File "uvicorn\config.py", line 223, in configure_logging File "logging\config.py", line 808, in dictConfig File "logging\config.py", line 545, in configure ValueError: Unable to configure formatter 'default' [7932] Failed to execute script server Note that the uvicorn.logging module does exist and when I perform the server`s code in Python, it operates correctly. | I encountered the same problem. And I found it's a job of hiddenimports,It's useful to modify the following lines in xxx.spec: a = Analysis(['xxx.py'], hiddenimports=['uvicorn.logging'], <everything else>) however, there will still be other similar problems. So, I try to add all files of uvicorn,and it works with: hiddenimports=['uvicorn.lifespan.off','uvicorn.lifespan.on','uvicorn.lifespan', 'uvicorn.protocols.websockets.auto','uvicorn.protocols.websockets.wsproto_impl', 'uvicorn.protocols.websockets_impl','uvicorn.protocols.http.auto', 'uvicorn.protocols.http.h11_impl','uvicorn.protocols.http.httptools_impl', 'uvicorn.protocols.websockets','uvicorn.protocols.http','uvicorn.protocols', 'uvicorn.loops.auto','uvicorn.loops.asyncio','uvicorn.loops.uvloop','uvicorn.loops', 'uvicorn.logging'], then, run: pyinstaller xxx.spec | 6 | 8 |
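For reference, the same fix can be expressed on the PyInstaller command line instead of editing the .spec file; the --collect-submodules option only exists on newer PyInstaller releases (4.x and later), so treat that part as version-dependent:
# command-line equivalent of a single hiddenimports entry
pyinstaller --hidden-import uvicorn.logging server.py

# on newer PyInstaller releases, pull in every uvicorn submodule at once
pyinstaller --collect-submodules uvicorn server.py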
64,331,384 | 2020-10-13 | https://stackoverflow.com/questions/64331384/tuple-object-has-no-attribute-committed-error-while-updating-image-objects | Here I am trying to update each product image of a particular product. But it is not working properly. Here only the image of first object is updating. There is a template where we can update product and product images at once. ProductImage has a ManyToOne relation with Product Model so in the template there can be multiple images objects of a single product. Updating product model works fine but while updating ProductImage Objects it is not working. Instead of zipping is there any other way to update multiple image objects at once ? EDIT: If I unzip the images list then updating doesn't works properly. For example if I change the image of one object then the another object images changes. BUT when I change all the image objects images then the update works fine. It doesn't work properly when I change only some of the objects. When I Zip then images list then this is the error traceback. Traceback (most recent call last): File "venv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner response = get_response(request) File "venv\lib\site-packages\django\core\handlers\base.py", line 179, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "\venv\lib\site-packages\django\views\generic\base.py", line 73, in view return self.dispatch(request, *args, **kwargs) File "\venv\lib\site-packages\django\views\generic\base.py", line 101, in dispatch return handler(request, *args, **kwargs) File "dashboard\product\views\views.py", line 293, in post p.save() File "venv\lib\site-packages\django\db\models\base.py", line 751, in save force_update=force_update, update_fields=update_fields) File "venv\lib\site-packages\django\db\models\base.py", line 789, in save_base force_update, using, update_fields, File "venv\lib\site-packages\django\db\models\base.py", line 867, in _save_table for f in non_pks] File "lib\site-packages\django\db\models\base.py", line 867, in <listcomp> for f in non_pks] File "\venv\lib\site-packages\django\db\models\fields\files.py", line 303, in pre_save if file and not file._committed: AttributeError: 'tuple' object has no attribute '_committed' models class ProductImage(models.Model): image = models.ImageField(upload_to='imgs',blank=True, null=True) product = models.ForeignKey(Product, on_delete=models.CASCADE) template <form method="post" enctype="multipart/form-data"> {% csrf_token %} {{product_form}} <th>Current Image</th> <th>Change</th> {% for image in p_images %} <tr><td>{{image.pk}}</td> <td><img src="{{image.image.url}}" width="50" height="50"></td> <td><input type="file" name="image"></td> </tr> {% endfor %} <input type="submit"> views def post(self, request, *args, **kwargs): product = Product.objects.get(pk=kwargs['pk']) product_form = ProductForm(request.POST, instance=product) images = zip(request.FILES.getlist('image')) p_images = ProductImage.objects.filter(product=product).order_by('pk') if product_form.is_valid(): product_form.save() # updating product images for p, img in zip(p_images, images): # error is here p.image = img p.save() # Tried this way too: for img in images: ProductImage.objects.filter(product=product).update(image=img) | First, change input name to be able to identify which ProductImage was updated. 
<!-- <td><input type="file" name="image"></td> -->
<td><input type="file" name="image-{{image.pk}}"></td>

Next, iterate over the input names in request.FILES and extract the ProductImage PK from each one. Then look up the ProductImage p, update the image field and save the model.
def post(self, request, *args, **kwargs):
    product = Product.objects.get(pk=kwargs['pk'])
    product_form = ProductForm(request.POST, instance=product)

    if product_form.is_valid():
        product_form.save()

    # Updating product images
    if request.FILES:
        p_images = ProductImage.objects.filter(product=product).order_by('pk')
        p_images_lookup = {p_image.pk: p_image for p_image in p_images}
        for input_name in request.FILES:
            p = p_images_lookup[int(input_name[len('image-'):])]
            p.image = request.FILES[input_name]
            p.save() | 6 | 3 |
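An alternative worth considering is Django's built-in inline formsets, which are designed exactly for editing a parent object together with its related rows; a hedged sketch (only Product, ProductImage and ProductForm are taken from the question, the rest is illustrative):
from django.forms import inlineformset_factory

# one form per existing ProductImage, no blank extras
ProductImageFormSet = inlineformset_factory(Product, ProductImage,
                                            fields=['image'], extra=0)

def post(self, request, *args, **kwargs):
    product = Product.objects.get(pk=kwargs['pk'])
    product_form = ProductForm(request.POST, instance=product)
    formset = ProductImageFormSet(request.POST, request.FILES, instance=product)
    if product_form.is_valid() and formset.is_valid():
        product_form.save()
        formset.save()     # only the changed ProductImage rows are written

The template would then render product_form plus formset.management_form and the formset's forms in place of the hand-written file inputs.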
64,324,685 | 2020-10-12 | https://stackoverflow.com/questions/64324685/why-my-pca-is-not-invariant-to-rotation-and-axis-swap | I have a voxel (np.array) with size 3x3x3, filled with some values, this setup is essential for me. I want to have rotation-invariant representation of it. For this case, I decided to try PCA representation which is believed to be invariant to orthogonal transformations. another For simplicity, I took some axes swap, but in case I'm mistaken there can be np.rot90. I have interpereted my 3d voxels as a set of weighted 3d cube point vectors which I incorrectly called "basis", total 27 (so that is some set of 3d point in space, represented by the vectors, obtained from cube points, scaled by voxel values). import numpy as np voxel1 = np.random.normal(size=(3,3,3)) voxel2 = np.transpose(voxel1, (1,0,2)) #np.rot90(voxel1) # basis = [] for i in range(3): for j in range(3): for k in range(3): basis.append([i+1, j+1, k+1]) # avoid 0 basis = np.array(basis) voxel1 = voxel1.reshape((27,1)) voxel2 = voxel2.reshape((27,1)) voxel1 = voxel1*basis # weighted basis vectors voxel2 = voxel2*basis print(voxel1.shape) (27, 3) Then I did PCA to those 27 3-dimensional vectors: def pca(x): center = np.mean(x, 0) x = x - center cov = np.cov(x.T) / x.shape[0] e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values) v = e_vectors[:, order].transpose() return x.dot(v) vp1 = pca(voxel1) vp2 = pca(voxel2) But the results in vp1 and vp2 are different. Perhaps, I have a mistake (though I beleive this is the right formula), and the proper code must be x.dot(v.T) But in this case the results are very strange. The upper and bottom blocks of the transofrmed data are the same up to the sign: >>> np.abs(np.abs(vp1)-np.abs(vp2)) > 0.01 array([[False, False, False], [False, False, False], [False, False, False], [ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True], [ True, False, True], [ True, True, True], [ True, True, True], [ True, True, True], [False, False, False], [False, False, False], [False, False, False], [ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True], [ True, False, True], [ True, True, True], [ True, True, True], [ True, True, True], [False, False, False], [False, False, False], [False, False, False]]) What I'm doing wrong? What I want to do is to find some invariant representation of my weighted voxel, something like positioning according to the axes of inertia or principal axes. I would really appreciate if someone helps me. 
UPD: Found the question similar to mine, but code is unavailable EDIT2: Found the code InertiaRotate and managed to monkey-do the following: import numpy as np # https://github.com/smparker/orient-molecule/blob/master/orient.py voxel1 = np.random.normal(size=(3,3,3)) voxel2 = np.transpose(voxel1, (1,0,2)) voxel1 = voxel1.reshape((27,)) voxel2 = voxel2.reshape((27,)) basis = [] for i in range(3): for j in range(3): for k in range(3): basis.append([i+1, j+1, k+1]) # avoid 0 basis = np.array(basis) basis = basis - np.mean(basis, axis=0) def rotate_func(data, mass): #mass = [ masses[n.lower()] for n in geom.names ] inertial_tensor = -np.einsum("ax,a,ay->xy", data, mass, data) # negate sign to reverse the sorting of the tensor eig, axes = np.linalg.eigh(-inertial_tensor) axes = axes.T # adjust sign of axes so third moment moment is positive new in X, and Y axes testcoords = np.dot(data, axes.T) # a little wasteful, but fine for now thirdmoment = np.einsum("ax,a->x", testcoords**3, mass) for i in range(2): if thirdmoment[i] < 1.0e-6: axes[i,:] *= -1.0 # rotation matrix must have determinant of 1 if np.linalg.det(axes) < 0.0: axes[2,:] *= -1.0 return axes axes1 = rotate_func(basis, voxel1) v1 = np.dot(basis, axes1.T) axes2 = rotate_func(basis, voxel2) v2 = np.dot(basis, axes2.T) print(v1) print(v2) It seems to use basis (coordinates) and mass separately. The results are quite similar to my problem above: some parts of the transformed data match up to the sign, I believe those are some cube sides print(np.abs(np.abs(v1)-np.abs(v2)) > 0.01) [[False False False] [False False False] [False False False] [ True True True] [ True True True] [ True True True] [ True True True] [False False False] [ True True True] [ True True True] [ True True True] [ True True True] [False False False] [False False False] [False False False] [ True True True] [ True True True] [ True True True] [ True True True] [False False False] [ True True True] [ True True True] [ True True True] [ True True True] [False False False] [False False False] [False False False]] Looking for some explanation. This code is designed for molecules, and must work... UPD: Tried to choose 3 vectors as a new basis from those 24 - the one with biggest norm, the one with the smallest and their cross product. Combined them into the matrix V, then used the formula V^(-1)*X to transform coordinates, and got the same problem - the resulting sets of vectors are not equal for rotated voxels. UPD2: I agree with meTchaikovsky that my idea of multiplying voxel vectors by weights and thus creating some non-cubic point cloud was incorrect. Probably, we indeed need to take the solution for rotated "basis"(yes, this is not a basis, but rather a way to determine point cloud) which will work later when "basis" is the same, but the weights are rotated according to the 3D rotation. Based on the answer and the reference provided by meTchaikovsky, and finding other answers we together with my friend came to conclusion that rotate_func from molecular package mentioned above tries to invent some convention for computing the signs of the components. Their solution tries to use 3rd moment for the first 2 axes and determinant for the last axis (?). 
We tried a bit another approach and succeeded to have half of the representations matching: # -*- coding: utf-8 -*- """ Created on Fri Oct 16 11:40:30 2020 @author: Dima """ import numpy as np from numpy.random import randn from numpy import linalg as la from scipy.spatial.transform import Rotation as R np.random.seed(10) rotate_mat = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]]) def pca(feat, x): # pca with attemt to create convention on sign changes x_c =x- np.mean(x,axis=0) x_f= feat*x x_f-= np.mean(x_f, axis=0) cov = np.cov(x_f.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] #print(order) v = e_vectors[:,order] v= v/np.sign(v[0,:]) if(la.det(v)<0): v= -v return x_c @ v def standardize(x): # take vector with biggest norm, with smallest and thir cross product as basis x -= np.mean(x,axis=0) nrms= la.norm(x, axis=1) imin= argmin(nrms) imax= argmax(nrms) vec1= x[imin, :] vec2= x[imax, :] vec3= np.cross(vec1, vec2) Smat= np.stack([vec1, vec2, vec3], axis=0) if(la.det(Smat)<0): Smat= -Smat return(la.inv(Smat)@x.T) angles = np.linspace(0.0,90.0,91) voxel1 = np.random.normal(size=(3,3,3)) res = [] for angle in angles: voxel2 = voxel1.copy() voxel1 = voxel1.reshape(27,1) voxel2 = voxel2.reshape(27,1) basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) basis1 = basis1+1e-4*randn(27,3) # perturbation basis2 = basis1 @rotate_mat(np.deg2rad(angle)) #voxel1 = voxel1*basis1 #voxel2 = voxel2*basis2 #print(angle,(np.abs(pca(voxel1) - pca(voxel2) ))) #gg= np.abs(standardize(basis1) - standardize(basis2) ) gg= np.abs(pca(voxel1, basis1) - pca(voxel1, basis2) ) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print(angle,ss, bl) #res.append(np.all(np.abs(pca(voxel1) - pca(voxel2) < 1e-6))) del basis1, basis2 The results are good up to 58 degree angle (yet we're still experimenting with rotation of x, y axes). After that we have constant difference which indicates some uncounted sign reverse. This is better than the less consistent result of rotate_func: 0.0 0.0 True 1.0 1.1103280567106161e-13 True 2.0 5.150139890290964e-14 True 3.0 8.977126225544196e-14 True 4.0 5.57341699240722e-14 True 5.0 4.205149954378956e-14 True 6.0 3.7435437643664957e-14 True 7.0 1.2943967187158123e-13 True 8.0 5.400185371573149e-14 True 9.0 8.006410204958181e-14 True 10.0 7.777189536904011e-14 True 11.0 5.992073021576436e-14 True 12.0 6.3716122222085e-14 True 13.0 1.0120048110065158e-13 True 14.0 1.4193029076233626e-13 True 15.0 5.32774440341853e-14 True 16.0 4.056702432878251e-14 True 17.0 6.52062429116855e-14 True 18.0 1.3237663595853556e-13 True 19.0 8.950259695710006e-14 True 20.0 1.3795067925438317e-13 True 21.0 7.498727794307339e-14 True 22.0 8.570866862371226e-14 True 23.0 8.961510590826412e-14 True 24.0 1.1839169916779899e-13 True 25.0 1.422193407555868e-13 True 26.0 6.578778015788652e-14 True 27.0 1.0042963537887101e-13 True 28.0 8.438153062569065e-14 True 29.0 1.1299103064863272e-13 True 30.0 8.192453876745831e-14 True 31.0 1.2618492405483406e-13 True 32.0 4.9237819394886296e-14 True 33.0 1.0971028569666842e-13 True 34.0 1.332138304559801e-13 True 35.0 5.280024600049296e-14 True From the code above, you can see that we tried to use another basis: vector with the biggest norm, vector with the smallest and their cross product. Here we should have only two variants (direction of the cross product) which could be later fixed, but I couldn't manage this alternative solution to work. 
I hope that someone can help me finish this and obtain rotation-invariant representation for voxels. EDIT 3. Thank you very much meTchaikovsky, but the situation is still unclear. My problem initially lies in processing 3d voxels which are (3,3,3) numpy arrays. We reached the conclusion that for finding invariant representation, we just need to fix 3d voxel as weights used for calculating cov matrix, and apply rotations on the centered "basis" (some vectors used for describing point cloud). Therefore, when we achieved invariance to "basis" rotations, the problem should have been solved: now, when we fix "basis" and use rotated voxel, the result must be invariant. Surprisingly, this is not so. Here I check 24 rotations of the cube with basis2=basis1 (except small perturbation): import scipy.ndimage def pca(feat, x): # pca with attemt to create convention on sign changes x_c = x - np.mean(x,axis=0) x_f = feat * x x_f -= np.mean(x_f,axis=0) cov = np.cov(x_f.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] # here is the solution, we switch the sign of the projections # so that the projection with the largest absolute value along a principal axis is positive proj = x_c @ v asign = np.sign(proj) max_ind = np.argmax(np.abs(proj),axis=0)[None,:] sign = np.take_along_axis(asign,max_ind,axis=0) proj = proj * sign return proj def rotate_3d(image1, alpha, beta, gamma): # z # The rotation angle in degrees. image2 = scipy.ndimage.rotate(image1, alpha, mode='nearest', axes=(0, 1), reshape=False) # rotate along y-axis image3 = scipy.ndimage.rotate(image2, beta, mode='nearest', axes=(0, 2), reshape=False) # rotate along x-axis image4 = scipy.ndimage.rotate(image3, gamma, mode='nearest', axes=(1, 2), reshape=False) return image4 voxel10 = np.random.normal(size=(3,3,3)) angles = [[x,y,z] for x in [-90,0,90] for y in [-90,0,90] for z in [-90,0,90]] res = [] for angle in angles: voxel2 = rotate_3d(voxel10, angle[0], angle[1], angle[2]) voxel1 = voxel10.reshape(27,1) voxel2 = voxel2.reshape(27,1) basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) basis1 += 1e-4*np.random.normal(size=(27, 1)) # perturbation basis2 = basis1 original_diff = np.sum(np.abs(basis1-basis2)) gg= np.abs(pca(voxel1, basis1) - pca(voxel2, basis2)) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) difference before pca 0.000, difference after pca 45.738 False difference before pca 0.000, difference after pca 12.157 False difference before pca 0.000, difference after pca 26.257 False difference before pca 0.000, difference after pca 37.128 False difference before pca 0.000, difference after pca 52.131 False difference before pca 0.000, difference after pca 45.436 False difference before pca 0.000, difference after pca 42.226 False difference before pca 0.000, difference after pca 18.959 False difference before pca 0.000, difference after pca 38.888 False difference before pca 0.000, difference after pca 12.157 False difference before pca 0.000, difference after pca 26.257 False difference before pca 0.000, difference after pca 50.613 False difference before pca 0.000, difference after pca 52.132 False difference before pca 0.000, difference after pca 0.000 True difference before pca 0.000, difference after pca 52.299 False Here basis1=basis2 
(hence basis difference before pca=0), and you can see 0 for (0,0,0) rotation. But rotated voxels give different result. In case scipy does something wrong, I've checked the approach with numpy.rot90 with the same result: rot90 = np.rot90 def rotations24(polycube): # imagine shape is pointing in axis 0 (up) # 4 rotations about axis 0 yield from rotations4(polycube, 0) # rotate 180 about axis 1, now shape is pointing down in axis 0 # 4 rotations about axis 0 yield from rotations4(rot90(polycube, 2, axis=1), 0) # rotate 90 or 270 about axis 1, now shape is pointing in axis 2 # 8 rotations about axis 2 yield from rotations4(rot90(polycube, axis=1), 2) yield from rotations4(rot90(polycube, -1, axis=1), 2) # rotate about axis 2, now shape is pointing in axis 1 # 8 rotations about axis 1 yield from rotations4(rot90(polycube, axis=2), 1) yield from rotations4(rot90(polycube, -1, axis=2), 1) def rotations4(polycube, axis): """List the four rotations of the given cube about the given axis.""" for i in range(4): yield rot90(polycube, i, axis) def rot90(m, k=1, axis=2): """Rotate an array k*90 degrees in the counter-clockwise direction around the given axis""" m = np.swapaxes(m, 2, axis) m = np.rot90(m, k) m = np.swapaxes(m, 2, axis) return m voxel10 = np.random.normal(size=(3,3,3)) gen = rotations24(voxel10) res = [] for voxel2 in gen: #voxel2 = rotate_3d(voxel10, angle[0], angle[1], angle[2]) voxel1 = voxel10.reshape(27,1) voxel2 = voxel2.reshape(27,1) basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) basis1 += 1e-4*np.random.normal(size=(27, 1)) # perturbation basis2 = basis1 original_diff = np.sum(np.abs(basis1-basis2)) gg= np.abs(pca(voxel1, basis1) - pca(voxel2, basis2)) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) I tried to investigate this case, and the only perhaps irrelevant thing I found the following: voxel1 = np.ones((3,3,3)) voxel1[0,0,0] = 0 # if I change 0 to 0.5 it stops working at all # mirrored around diagonal voxel2 = np.ones((3,3,3)) voxel2[2,2,2] = 0 for angle in range(1): voxel1 = voxel1.reshape(27,1) voxel2 = voxel2.reshape(27,1) basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) basis1 = basis1 + 1e-4 * randn(27,3) # perturbation basis2 = basis1 # If perturbation is used we have # difference before pca 0.000, difference after pca 0.000 True # correct for 100.0 percent of time # eigenvalues for both voxels # [1.03417495 0.69231107 0.69235402] # [0.99995368 0.69231107 0.69235402] # If no perturbation applied for basis, difference is present # difference before pca 0.000, difference after pca 55.218 False # correct for 0.0 percent of time # eignevalues for both voxels (always have 1.): # [0.69230769 1.03418803 0.69230769] # [1. 0.69230769 0.69230769] Currently don't know how to proceed from there. 
EDIT4: I'm currently thinking that there is some problem with voxel rotations transformed into basis coefficients via voxel.reshape() Simple experiment with creating array of indices indices = np.arange(27) indices3d = indices.reshape((3,3,3)) voxel10 = np.random.normal(size=(3,3,3)) assert voxel10[0,1,2] == voxel10.ravel()[indices3d[0,1,2]] And then using it for rotations gen = rotations24(indices3d) res = [] for ind2 in gen: basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) voxel1 = voxel10.copy().reshape(27,1) #np.array([voxel10[i,j,k] for k in range(3) for j in range(3) for i in range(3)])[...,np.newaxis] voxel2 = voxel1[ind2.reshape(27,)] basis1 += 1e-4*np.random.normal(size=(27, 1)) # perturbation basis2 = basis1[ind2.reshape(27,)] original_diff = np.sum(np.abs(basis1-basis2)) gg= np.abs(pca(voxel1, basis1) - pca(voxel2, basis2)) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) Shows that those rotations are not correct, because on my opinion rotated voxel and basis should match: difference before pca 0.000, difference after pca 0.000 True difference before pca 48.006, difference after pca 87.459 False difference before pca 72.004, difference after pca 70.644 False difference before pca 48.003, difference after pca 71.930 False difference before pca 72.004, difference after pca 79.409 False difference before pca 84.005, difference after pca 36.177 False EDIT 5: Okaaay, so here we go at least for 24 rotations. At first, we had a slight change of logic lurked into our pca function. Here we center x_c (basis) and forget about it, further centering x_f (features*basis) and transforming it with pca. This does not work perhaps because our basis is not centered and multiplication by features further increased the bias. If we center x_c first, and multiply it by features, everything will be Ok. Also, previously we had proj = x_c @ v with v computed from x_f which was totally wrong in this case, as x_f and x_c were centered around different centers. def pca(feat, x): # pca with attemt to create convention on sign changes x_c = x - np.mean(x,axis=0) x_f = feat * x x_f -= np.mean(x_f,axis=0) cov = np.cov(x_f.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] # here is the solution, we switch the sign of the projections # so that the projection with the largest absolute value along a principal axis is positive proj = x_f @ v return proj Secondly, as we already found, we need to sort vectors obtained by pca, for example by the first column: basis2 = basis1 original_diff = np.sum(np.abs(basis1-basis2)) a = pca(voxel1, basis1) t1 = a[a[:,0].argsort()] a = pca(voxel2, basis2) t2 = a[a[:,0].argsort()] gg= np.abs(t1-t2) And the last thing we also discovered already, is that simple reshape is wrong for voxel, it must correspond to rotation: voxel2 = voxel1[ind2.reshape(27,)] #np.take(voxel10, ind2).reshape(27,1). One more important comment to understand the solution. When we perform PCA on the 3d vectors (point cloud, defined by our basis) with weights assigned (analogously to the inertia of the rigid body), the actual assignment of the weights to the points is sort of external information, which becomes hard-defined for the algorithm. 
When we rotated basis by applying rotation matrices, we did not change the order of the vectors in the array, hence the order of the mass assignments wasn't changed too. When we start to rotate voxel, we change the order of the masses, so in general PCA algorithm will not work without the same transformation applied to the basis. So, only if we have some array of 3d vectors, transformed by some rotation AND the list of masses re-arranged accordingly, we can detect the rotation of the rigid body using PCA. Otherwise, if we detach masses from points, that would be another body in general. So how does it work for us then? It works because our points are fully symmetric around the center after centering basis. In this case reassignment of the masses does not change "the body" because vector norms are the same. In this case we can use the same (numerically) basis2=basis1 for testing 24 rotations and rotated voxel2 (rotated point cloud cubes match, just masses migrate). This correspond to the rotation of the point cloud with mass points around the center of the cube. PCA will transform vectors with the same lengths and different masses in the same way according to the body's "inertia" then (after we reached convention on the signs of the components). The only thing left is to sort the pca transformed vectors in the end, because they have different position in the array (because our body was rotated, mass points changed their positions). This makes us lose some information related to the order of the vectors but it looks inevitable. Here is the code which checks the solution for 24 rotations. If should theoretically work in the general case as well, giving some closer values for more complicated objects rotated inside a bigger voxel: import numpy as np from numpy.random import randn #np.random.seed(20) def pca(feat, x): # pca with attemt to create convention on sign changes x_c = x - np.mean(x,axis=0) x_f = feat * x_c cov = np.cov(x_f.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] # here is the solution, we switch the sign of the projections # so that the projection with the largest absolute value along a principal axis is positive proj = x_f @ v asign = np.sign(proj) max_ind = np.argmax(np.abs(proj),axis=0)[None,:] sign = np.take_along_axis(asign,max_ind,axis=0) proj = proj * sign return proj # must be correct https://stackoverflow.com/questions/15230179/how-to-get-the-linear-index-for-a-numpy-array-sub2ind indices = np.arange(27) indices3d = indices.reshape((3,3,3)) voxel10 = np.random.normal(size=(3,3,3)) assert voxel10[0,1,2] == voxel10.ravel()[indices3d[0,1,2]] rot90 = np.rot90 def rotations24(polycube): # imagine shape is pointing in axis 0 (up) # 4 rotations about axis 0 yield from rotations4(polycube, 0) # rotate 180 about axis 1, now shape is pointing down in axis 0 # 4 rotations about axis 0 yield from rotations4(rot90(polycube, 2, axis=1), 0) # rotate 90 or 270 about axis 1, now shape is pointing in axis 2 # 8 rotations about axis 2 yield from rotations4(rot90(polycube, axis=1), 2) yield from rotations4(rot90(polycube, -1, axis=1), 2) # rotate about axis 2, now shape is pointing in axis 1 # 8 rotations about axis 1 yield from rotations4(rot90(polycube, axis=2), 1) yield from rotations4(rot90(polycube, -1, axis=2), 1) def rotations4(polycube, axis): """List the four rotations of the given cube about the given axis.""" for i in range(4): yield rot90(polycube, i, axis) def rot90(m, k=1, axis=2): """Rotate an array k*90 degrees in the 
counter-clockwise direction around the given axis""" m = np.swapaxes(m, 2, axis) m = np.rot90(m, k) m = np.swapaxes(m, 2, axis) return m gen = rotations24(indices3d) res = [] for ind2 in gen: basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) voxel1 = voxel10.copy().reshape(27,1) voxel2 = voxel1[ind2.reshape(27,)] #np.take(voxel10, ind2).reshape(27,1) basis1 += 1e-6*np.random.normal(size=(27, 1)) # perturbation basis2 = basis1 original_diff = np.sum(np.abs(basis1-basis2)) a = pca(voxel1, basis1) t1 = a[a[:,0].argsort()] a = pca(voxel2, basis2) t2 = a[a[:,0].argsort()] gg= np.abs(t1-t2) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) difference before pca 0.000, difference after pca 0.000 True difference before pca 0.000, difference after pca 0.000 True difference before pca 0.000, difference after pca 0.000 True difference before pca 0.000, difference after pca 0.000 True PS. I want to propose better ordering theme to take into account zero values in the voxel which might confuse previous approach when entire first column of PCA vectors is zero, etc. I propose to sort by vector norms, multiplied by the sign of the sum of elements. Here is tensorflow 2 code: def infer_shape(x): x = tf.convert_to_tensor(x) # If unknown rank, return dynamic shape if x.shape.dims is None: return tf.shape(x) static_shape = x.shape.as_list() dynamic_shape = tf.shape(x) ret = [] for i in range(len(static_shape)): dim = static_shape[i] if dim is None: dim = dynamic_shape[i] ret.append(dim) return ret def merge_last_two_dims(tensor): shape = infer_shape(tensor) shape[-2] *= shape[-1] #shape.pop(1) shape = shape[:-1] return tf.reshape(tensor, shape) def pca(inpt_voxel): patches = tf.extract_volume_patches(inpt_voxel, ksizes=[1,3,3,3,1], strides=[1, 1,1,1, 1], padding="VALID") features0 = patches[...,tf.newaxis]*basis # centered basises basis1_ = tf.ones(shape=tf.shape(patches[...,tf.newaxis]), dtype=tf.float32)*basis basis1 = basis1_ - tf.math.divide_no_nan(tf.reduce_sum(features0, axis=-2), tf.reduce_sum(patches, axis=-1)[...,None])[:,:,:,:,None,:] features = patches[...,tf.newaxis]*basis1 features_centered_basis = features - tf.reduce_mean(features, axis=-2)[:,:,:,:,None,:] x = features_centered_basis m = tf.cast(x.get_shape()[-2], tf.float32) cov = tf.matmul(x,x,transpose_a=True)/(m - 1) e,v = tf.linalg.eigh(cov,name="eigh") proj = tf.matmul(x,v,transpose_b=False) asign = tf.sign(proj) max_ind = tf.argmax(tf.abs(proj),axis=-2)[:,:,:,:,None,:] sign = tf.gather(asign,indices=max_ind, batch_dims=4, axis=-2) sign = tf.linalg.diag_part(sign) proj = proj * sign # But we can have 1st coordinate zero. 
In this case, # other coordinates become ambiguous #s = tf.argsort(proj[...,0], axis=-1) # sort by l2 vector norms, multiplied by signs of sums sum_signs = tf.sign(tf.reduce_sum(proj, axis=-1)) norms = tf.norm(proj, axis=-1) s = tf.argsort(sum_signs*norms, axis=-1) proj = tf.gather(proj, s, batch_dims=4, axis=-2) return merge_last_two_dims(proj) | Firstly, your pca function is not correct, it should be def pca(x): x -= np.mean(x,axis=0) cov = np.cov(x.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] return x @ v You shouldn't transpose the e_vectors[:,order] because we want each column of the v array is an eigenvector, therefore, x @ v will be projections of x on those eigenvectors. Secondly, I think you misunderstand the meaning of rotation. It is not voxel1 that should be rotated, but the basis1. If you rotate (by taking transposition) voxel1, what you really do is to rearrange the indices of grid points, while the coordinates of the points basis1 are not changed. In order to rotate the points (around the z axis for example), you can first define a function to calculate the rotation matrix given an angle rotate_mat = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]]) with the rotation matrix generated by this function, you can rotate the array basis1 to create another array basis2 basis2 = basis1 @ rotate_mat(np.deg2rad(angle)) Now it comes to the title of your question "Why my PCA is not invariant to rotation and axis swap?", from this post, the PCA result is not unique, you can actually run a test to see this import numpy as np np.random.seed(10) rotate_mat = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]]) def pca(x): x -= np.mean(x,axis=0) cov = np.cov(x.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] return x @ v angles = np.linspace(0,90,91) res = [] for angle in angles: voxel1 = np.random.normal(size=(3,3,3)) voxel2 = voxel1.copy() voxel1 = voxel1.reshape(27,1) voxel2 = voxel2.reshape(27,1) basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]) # basis2 = np.hstack((-basis1[:,1][:,None],basis1[:,0][:,None],-basis1[:,2][:,None])) basis2 = basis1 @ rotate_mat(np.deg2rad(angle)) voxel1 = voxel1*basis1 voxel2 = voxel2*basis2 print(angle,np.all(np.abs(pca(voxel1) - pca(voxel2) < 1e-6))) res.append(np.all(np.abs(pca(voxel1) - pca(voxel2) < 1e-6))) print() print(np.sum(res) / len(angles)) After you run this script, you will see that in only 21% of times the two PCA results are the same. UPDATE I think instead of focusing on the eigenvectors of the principal components, you can instead focus on the projections. For two clouds of points, even though they are essentially the same, the eigenvectors can be drastically different. Therefore, hardcoding in order to somehow let the two sets of eigenvectors to be the same is a very difficult task. However, based on this post, for the same cloud of points, two sets of eigenvectors can be different only up to a minus sign. Therefore, the projections upon the two sets of eigenvectors are also different only up to a minus sign. This actually offers us an elegant solution, for the projections along an eigenvector (principal axis), all we need to do is to switch the sign of the projections so that the projection with the largest absolute value along that principal axis is positive. 
import numpy as np from numpy.random import randn #np.random.seed(20) rotmat_z = lambda theta: np.array([[np.cos(theta),-np.sin(theta),0.],[np.sin(theta),np.cos(theta),0.],[0.,0.,1.]]) rotmat_y = lambda theta: np.array([[np.cos(theta),0.,np.sin(theta)],[0.,1.,0.],[-np.sin(theta),0.,np.cos(theta)]]) rotmat_x = lambda theta: np.array([[1.,0.,0.],[0.,np.cos(theta),-np.sin(theta)],[0.,np.sin(theta),np.cos(theta)]]) # based on https://en.wikipedia.org/wiki/Rotation_matrix rot_mat = lambda alpha,beta,gamma: rotmat_z(alpha) @ rotmat_y(beta) @ rotmat_x(gamma) deg2rad = lambda alpha,beta,gamma: [np.deg2rad(alpha),np.deg2rad(beta),np.deg2rad(gamma)] def pca(feat, x): # pca with attemt to create convention on sign changes x_c = x - np.mean(x,axis=0) x_f = feat * x x_f -= np.mean(x_f,axis=0) cov = np.cov(x_f.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] # here is the solution, we switch the sign of the projections # so that the projection with the largest absolute value along a principal axis is positive proj = x_f @ v asign = np.sign(proj) max_ind = np.argmax(np.abs(proj),axis=0)[None,:] sign = np.take_along_axis(asign,max_ind,axis=0) proj = proj * sign return proj ref_angles = np.linspace(0.0,90.0,10) angles = [[alpha,beta,gamma] for alpha in ref_angles for beta in ref_angles for gamma in ref_angles] voxel1 = np.random.normal(size=(3,3,3)) res = [] for angle in angles: voxel2 = voxel1.copy() voxel1 = voxel1.reshape(27,1) voxel2 = voxel2.reshape(27,1) basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) basis1 = basis1 + 1e-4 * randn(27,3) # perturbation basis2 = basis1 @ rot_mat(*deg2rad(*angle)) original_diff = np.sum(np.abs(basis1-basis2)) gg= np.abs(pca(voxel1, basis1) - pca(voxel1, basis2)) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference before pca %.3f,' % original_diff, 'difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) As you can see by running this script, the projections on the principal axis are the same, this means we have resolved the issue of PCA results being not unique. Reply to EDIT 3 As for the new issue you raised, I think you missed an important point, it is the projections of the cloud of points onto the principal axes that are invariant, not anything else. Therefore, if you rotate voxel1 and obtain voxel2, they are the same in the sense that their own respective projections onto the principal axes of the cloud of points are the same, it actually does not make too much sense to compare pca(voxel1,basis1) with pca(voxel2,basis1). Furthermore, the method rotate of scipy.ndimage actually changes information, as you can see by running this script image1 = np.linspace(1,100,100).reshape(10,10) image2 = scipy.ndimage.rotate(image1, 45, mode='nearest', axes=(0, 1), reshape=False) image3 = scipy.ndimage.rotate(image2, -45, mode='nearest', axes=(0, 1), reshape=False) fig,ax = plt.subplots(nrows=1,ncols=3,figsize=(12,4)) ax[0].imshow(image1) ax[1].imshow(image2) ax[2].imshow(image3) The output image is As you can see the matrix after rotation is not the same as the original one, some information of the original matrix is changed. Reply to EDIT 4 Actually, we are almost there, the two pca results are different because we are comparing pca components for different points. 
indices = np.arange(27) indices3d = indices.reshape((3,3,3)) # apply rotations to the indices, it is not weights yet gen = rotations24(indices3d) # construct the weights voxel10 = np.random.normal(size=(3,3,3)) res = [] count = 0 for ind2 in gen: count += 1 # ind2 is the array of indices after rotation # reindex the weights with the indices after rotation voxel1 = voxel10.copy().reshape(27,1) voxel2 = voxel1[ind2.reshape(27,)] # basis1 is the array of coordinates where the points are basis1 = np.array([[i+1,j+1,k+1] for k in range(3) for j in range(3) for i in range(3)]).astype(np.double) basis1 += 1e-4*np.random.normal(size=(27, 1)) # reindex the coordinates with the indices after rotation basis2 = basis1[ind2.reshape(27,)] # add a slight modification to pca, return the axes also pca1,v1 = pca(voxel1,basis1) pca2,v2 = pca(voxel2,basis2) # sort the principal components before comparing them pca1 = np.sort(pca1,axis=0) pca2 = np.sort(pca2,axis=0) gg= np.abs(pca1 - pca2) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) Running this script, you will find, for each rotation, the two sets of principal axes are different only up to a minus sign. The two sets of pca results are different because the indices of the cloud of points before and after rotation are different (since you apply rotation to the indices). If you sort the pca results before comparing them, you will find the two pca results are exactly the same. Summary The answer to this question can be divided into two parts. In the first part, the rotation is applied to the basis (the coordinates of points), while the indices and the corresponding weights are unchanged. In the second part, the rotation is applied to the indices, then the weights and the basis are rearranged with the new indices. For both of the two parts, the solution pca function is the same def pca(feat, x): # pca with attemt to create convention on sign changes x_c = x - np.mean(x,axis=0) x_f = feat * x x_f -= np.mean(x_f,axis=0) cov = np.cov(x_f.T) e_values, e_vectors = np.linalg.eig(cov) order = np.argsort(e_values)[::-1] v = e_vectors[:,order] # here is the solution, we switch the sign of the projections # so that the projection with the largest absolute value along a principal axis is positive proj = x_f @ v asign = np.sign(proj) max_ind = np.argmax(np.abs(proj),axis=0)[None,:] sign = np.take_along_axis(asign,max_ind,axis=0) proj = proj * sign return proj The idea of this function is, instead of matching the principal axes, we can match the principal components since it is the principal components that are rotationally invariant after all. Based on this function pca, the first part of this answer is easy to understand, since the indices of the points are unchanged while we only rotate the basis. In order to understand the second part of this answer (Reply to EDIT 5), we must first understand the function rotations24. This function rotates the indices rather than the coordinates of the points, therefore, if we stay at the same position observing the points, we will feel that the positions of the points are changed. With this in mind, it is not hard to understand Reply to EDIT 5. 
Actually, the function pca in this answer can be applied to more general cases, for example (we rotate the indices) num_of_points_per_dim = 10 num_of_points = num_of_points_per_dim ** 3 indices = np.arange(num_of_points) indices3d = indices.reshape((num_of_points_per_dim,num_of_points_per_dim,num_of_points_per_dim)) voxel10 = 100*np.random.normal(size=(num_of_points_per_dim,num_of_points_per_dim,num_of_points_per_dim)) gen = rotations24(indices3d) res = [] for ind2 in gen: voxel1 = voxel10.copy().reshape(num_of_points,1) voxel2 = voxel1[ind2.reshape(num_of_points,)] basis1 = 100*np.random.rand(num_of_points,3) basis2 = basis1[ind2.reshape(num_of_points,)] pc1 = np.sort(pca(voxel1, basis1),axis=0) pc2 = np.sort(pca(voxel2, basis2),axis=0) gg= np.abs(pc1-pc2) ss= np.sum(np.ravel(gg)) bl= np.all(gg<1e-4) print('difference after pca %.3f' % ss,bl) res.append(bl) del basis1, basis2 print('correct for %.1f percent of time' % (100*(np.sum(res) / len(res)))) | 6 | 5 |
64,327,534 | 2020-10-13 | https://stackoverflow.com/questions/64327534/unable-to-connect-to-websocket-server-with-nginx-reverse-proxy | I want to set up a websocket server with a reverse proxy. To do so I create a docker-compose with a simple websocket server in python and a nginx reverse proxy. SETUP: docker-compose.yml: version: '2.4' services: wsserver: restart: always ports: - 8765:8765 build: context: ./server dockerfile: Dockerfile ngproxy: image: nginx ports: - 8020:80 - 5000:5000 restart: always depends_on: - wsserver volumes: - ./nginx/nginx.conf:/etc/nginx/conf.conf nginx.conf: http { map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream websocket { server wsserver:8765; } server { listen 5000; location / { proxy_pass http://websocket; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; } } } Websocket server: Dockerfile: FROM python:3 RUN pip install websockets RUN pip install asyncio COPY server.py / CMD [ "python", "./server.py" ] server.py: import asyncio import websockets import os async def echo(websocket, path): async for message in websocket: print(message) await websocket.send(message) asyncio.get_event_loop().run_until_complete( websockets.serve(echo, '0.0.0.0', 8765)) asyncio.get_event_loop().run_forever() Simple ws client that helps to run tests (I also used a Chrome extension): client.py: import asyncio import websockets async def hello(uri): async with websockets.connect(uri) as websocket: await websocket.send("Hello world!") resp = await websocket.recv() print(resp) asyncio.get_event_loop().run_until_complete( hello('ws://localhost:5000')) # Without reverse proxy -> ws://localhost:8765 PROBLEM: When I try to connect (with the client or using the Chrome extension) the following error appeared: WebSocket connection to 'ws://localhost:5000/' failed: Connection closed before receiving a handshake response Is there anything wrong in my steps? More info: When I try connect to the websocket without the reverse proxy it works. No messages appears in my nginx error logs. All the configuration were based on nginx docs: nginx websocket I have already checked this threads: Docker NGINX Proxy not Forwarding Websockets, Configure NGINX reverse proxy with browser WebSocket and docker-compose and Docker nginx websocket proxy - client closed connection while waiting for request Thanks! | After some research I finally got what was wrong: I mapped my local nginx configuration to the wrong file on the container. So to fix it a changed the volume in my docker-compose.yml From: volumes: - ./nginx/nginx.conf:/etc/nginx/conf.conf To: volumes: - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro And also removed de http from the nginx.conf: nginx.conf map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream websocket { server wsserver:8765; } server { listen 5000; location / { proxy_pass http://websocket; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; } } | 7 | 5 |
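A minimal diagnostic sketch for this kind of proxy misconfiguration, assuming the compose setup above with the proxy listening on localhost:5000; it is not part of the accepted answer and simply surfaces handshake failures with a timeout instead of hanging:

```python
import asyncio
import websockets

async def check(uri):
    try:
        async with websockets.connect(uri) as ws:
            await ws.send("ping")
            reply = await asyncio.wait_for(ws.recv(), timeout=5)
            print("echo through proxy:", reply)
    except Exception as exc:  # refused connection, failed handshake, timeout, ...
        print(f"connection to {uri} failed: {exc!r}")

asyncio.get_event_loop().run_until_complete(check("ws://localhost:5000"))
```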
64,344,515 | 2020-10-13 | https://stackoverflow.com/questions/64344515/python-consistent-hash-replacement | As noted by many, Python's hash is not consistent anymore (as of version 3.3), as a random PYTHONHASHSEED is now used by default (to address security concerns, as explained in this excellent answer). However, I have noticed that the hash of some objects are still consistent (as of Python 3.7 anyway): that includes int, float, tuple(x), frozenset(x) (as long as x yields consistent hash). For example: assert hash(10) == 10 assert hash((10, 11)) == 3713074054246420356 assert hash(frozenset([(0, 1, 2), 3, (4, 5, 6)])) == -8046488914261726427 Is that always true and guaranteed? If so, is that expected to stay that way? Is the PYTHONHASHSEED only applied to salt the hash of strings and byte arrays? Why am I asking? I have a system that relies on hashing to remember whether or not we have seen a given dict (in any order): {key: tuple(ints)}. In that system, the keys are a collection of filenames and the tuples a subset of os.stat_result, e.g. (size, mtime) associated with them. This system is used to make update/sync decisions based on detecting differences. In my application, I have on the order of 100K such dicts, and each can represent several thousands of files and their state, so the compactness of the cache is important. I can tolerate the small false positive rate (< 10^-19 for 64-bit hashes) coming from possible hash collisions (see also birthday paradox). One compact representation is the following for each such dict "fsd": def fsd_hash(fsd: dict): return hash(frozenset(fsd.items())) It is very fast and yields a single int to represent an entire dict (with order-invariance). If anything in the fsd dict changes, with high probability the hash will be different. Unfortunately, hash is only consistent within a single Python instance, rendering it useless for hosts to compare their respective hashes. Persisting the full cache ({location_name: fsd_hash}) to disk to be reloaded on restart is also useless. I cannot expect the larger system that uses that module to have been invoked with PYTHONHASHSEED=0, and, to my knowledge, there is no way to change this once the Python instance has started. Things I have tried I may use hashlib.sha1 or similar to calculate consistent hashes. This is slower and I can't directly use the frozenset trick: I have to iterate through the dict in a consistent order (e.g. by sorting on keys, slow) while updating the hasher. In my tests on real data, I see over 50x slow-down. I could try applying an order-invariant hashing algorithm on consistent hashes obtained for each item (also slow, as starting a fresh hasher for each item is time-consuming). I can try transforming everything into ints or tuples of ints and then frozensets of such tuples. At the moment, it seems that all int, tuple(int) and frozenset(tuple(int)) yield consistent hashes, but: is that guaranteed, and if so, how long can I expect this to be the case? Additional question: more generally, what would be a good way to write a consistent hash replacement for hash(frozenset(some_dict.items())) when the dict contains various types and classes? I can implement a custom __hash__ (a consistent one) for the classes I own, but I cannot override str's hash for example. 
One thing I came up with is: def const_hash(x): if isinstance(x, (int, float, bool)): pass elif isinstance(x, frozenset): x = frozenset([const_hash(v) for v in x]) elif isinstance(x, str): x = tuple([ord(e) for e in x]) elif isinstance(x, bytes): x = tuple(x) elif isinstance(x, dict): x = tuple([(const_hash(k), const_hash(v)) for k, v in x.items()]) elif isinstance(x, (list, tuple)): x = tuple([const_hash(e) for e in x]) else: try: return x.const_hash() except AttributeError: raise TypeError(f'no known const_hash implementation for {type(x)}') return hash(x) | Short answer to broad question: There are no explicit guarantees made about hashing stability aside from the overall guarantee that x == y requires that hash(x) == hash(y). There is an implication that x and y are both defined in the same run of the program (you can't perform x == y where one of them doesn't exist in that program obviously, so no guarantees are needed about the hash across runs). Longer answers to specific questions: Is [your belief that int, float, tuple(x), frozenset(x) (for x with consistent hash) have consistent hashes across separate runs] always true and guaranteed? It's true of numeric types, with the mechanism being officially documented, but the mechanism is only guaranteed for a particular interpreter for a particular build. sys.hash_info provides the various constants, and they'll be consistent on that interpreter, but on a different interpreter (CPython vs. PyPy, 64 bit build vs. 32 bit build, even 3.n vs. 3.n+1) they can differ (documented to differ in the case of 64 vs. 32 bit CPython), so the hashes won't be portable across machines with different interpreters. No guarantees on algorithm are made for tuple and frozenset; I can't think of any reason they'd change it between runs (if the underlying types are seeded, the tuple and frozenset benefit from it without needing any changes), but they can and do change the implementation between releases of CPython (e.g. in late 2018 they made a change to reduce the number of hash collisions in short tuples of ints and floats), so if you store off the hashes of tuples from say, 3.7, and then compute hashes of the same tuples in 3.8+, they won't match (even though they'd match between runs on 3.7 or between runs on 3.8). If so, is that expected to stay that way? Expected to, yes. Guaranteed, no. I could easily see seeded hashes for ints (and by extension, for all numeric types to preserve the numeric hash/equality guarantees) for the same reason they seeded hashes for str/bytes, etc. The main hurdles would be: It would almost certainly be slower than the current, very simple algorithm. By documenting the numeric hashing algorithm explicitly, they'd need a long period of deprecation before they could change it. It's not strictly necessary (if web apps need seeded hashes for DoS protection, they can always just convert ints to str before using them as keys). Is the PYTHONHASHSEED only applied to salt the hash of strings and byte arrays? Beyond str and bytes, it applies to a number of random things that implement their own hashing in terms of the hash of str or bytes, often because they're already naturally convertable to raw bytes and are commonly used as keys in dicts populated by web-facing frontends. 
The ones I know of off-hand include the various classes of the datetime module (datetime, date, time, though this isn't actually documented in the module itself), and read-only memoryviews of with byte-sized formats (which hash equivalently to hashing the result of the view's .tobytes() method). What would be a good way to write a consistent hash replacement for hash(frozenset(some_dict.items())) when the dict contains various types and classes? The simplest/most composable solution would probably be to define your const_hash as a single dispatch function, using it the same way you do hash itself. This avoids having one single function defined in a single place that must handle all types; you can have the const_hash default implementation (which just relies on hash for those things with known consistent hashes) in a central location, and provide additional definitions for the built-in types you know aren't consistent (or which might contain inconsistent stuff) there, while still allowing people to extend the set of things it covers seamlessly by registering their own single-dispatch functions by importing your const_hash and decorating the implementation for their type with @const_hash.register. It's not significantly different in effect from your proposed const_hash, but it's a lot more manageable. | 8 | 3 |
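A minimal sketch of the single-dispatch structure suggested in the answer above — my own illustration, not the answerer's code; the registrations shown (str, bytes, dict, frozenset) are examples, and the default path assumes the built-in hash is already consistent for the remaining types:

```python
from functools import singledispatch

@singledispatch
def const_hash(x):
    # Default: rely on the built-in hash (ints, floats, and tuples/frozensets
    # of such values currently hash consistently across runs).
    return hash(x)

@const_hash.register
def _(x: str):
    # str hashing is salted per process, so hash a stable transformation instead.
    return hash(tuple(ord(c) for c in x))

@const_hash.register
def _(x: bytes):
    return hash(tuple(x))

@const_hash.register
def _(x: dict):
    return hash(frozenset((const_hash(k), const_hash(v)) for k, v in x.items()))

@const_hash.register
def _(x: frozenset):
    return hash(frozenset(const_hash(v) for v in x))

# Callers can extend it for their own classes without touching this module:
#   @const_hash.register
#   def _(x: MyClass): return x.const_hash()
print(const_hash({"a.txt": (1024, 1600000000)}))
```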
64,337,550 | 2020-10-13 | https://stackoverflow.com/questions/64337550/neither-pytorch-nor-tensorflow-2-0-have-been-found-models-wont-be-available | I am trying to install transformers using pip (pip install transformers). After import transformers this error shows: Neither PyTorch nor TensorFlow >= 2.0 have been found. Models won't be available and only tokenizers, configuration, and file/data utilities can be used. This happens although I installed TensorFlow-GPU 2.3.1 using conda. System info: Windows 10, python 3.6, cuda 10.1, tensorflow-gpu 2.3.1 | I found the problem after investigating for 10 hours: I had installed tensorflow using conda install tensorflow-gpu and transformers using pip. After removing tensorflow-gpu and reinstalling it with pip, it works fine. | 27 | 8 |
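A small standard-library check of my own that one can run before import transformers to see which backend is actually importable in the active environment:

```python
import importlib.util

for backend in ("torch", "tensorflow"):
    spec = importlib.util.find_spec(backend)
    print(f"{backend}: {'found' if spec is not None else 'NOT found'}")
# If both print "NOT found", transformers falls back to the
# tokenizers/configuration-only mode described by the warning above.
```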
64,297,237 | 2020-10-10 | https://stackoverflow.com/questions/64297237/how-can-i-prevent-or-trap-stopiteration-exception-in-the-yield-calling-function | A generator-returning function (i.e. one with a yield statement in it) in one of our libraries fails some tests due to an unhandled StopIteration exception. For convenience, in this post I'll refer to this function as buggy. I have not been able to find a way for buggy to prevent the exception (without affecting the function's normal operation). Similarly, I have not found a way to trap the exception (with a try/except) within buggy. (Client code using buggy can trap this exception, but this happens too late, because the code that has the information necessary to properly handle the condition leading to this exception is the buggy function.) The actual code and test case I am working with are far too complicated to post here, so I have created a very simple, but also extremely artificial toy example that illustrates the problem. First, the module with the buggy function: # mymod.py import csv # essential! def buggy(csvfile): with open(csvfile) as stream: reader = csv.reader(stream) # how to test *here* if either stream is at its end? for row in reader: yield row As indicated by the comment, the use of the csv module (from the Python 3.x standard library) is an essential feature of this problem1. The next file for the example is a script that is meant to stand in for "client code". In other word, this script's "real purpose" beyond this example is largely irrelevant. Its role in the example is to provide a simple, reliable way to elicit the problem with the buggy function. (Some of its code could be repurposed for a test case in a test suite, for example.) #!/usr/bin/env python3 # myscript.py import sys import mymod def print_row(row): print(*row, sep='\t') def main(csvfile, mode=None): if mode == 'first': print_row(next(mymod.buggy(csvfile))) else: for row in mymod.buggy(csvfile): print_row(row) if __name__ == '__main__': main(*sys.argv[1:]) The script takes the path to a CSV file as a mandatory argument, and an optional second argument. If the second argument is ommitted, or it is anything other than the string "first", the script will print to stdout the information in the CSV file, but in TSV format. If the second argument is the string "first", only the information in the first row will be so printed. The StopIteration exception I am trying to trap arises when myscript.py script is invoked with an empty file and the string "first" as arguments2. Here is an example of this code in action: % cat ok_input.csv 1,2,3 4,5,6 7,8,9 % ./myscript.py ok_input.csv 1 2 3 4 5 6 7 8 9 % ./myscript.py ok_input.csv first 1 2 3 % cat empty_input.csv # no output (of course) % ./myscript.py empty_input.csv # no output (as desired) % ./myscript.py empty_input.csv first Traceback (most recent call last): File "./myscript.py", line 19, in <module> main(*sys.argv[1:]) File "./myscript.py", line 13, in main print_row(next(mymod.buggy(csvfile))) StopIteration Q: How can I prevent or trap this StopIteration exception in the lexical scope of the buggy function? IMPORTANT: Please keep in mind that, in the example given above, the myscript.py script is stand-in for "client code", and is therefore outside of our control. This means that any approach that would require changing the myscript.py script would not solve the actual real-world problem, and therefore it would not be an acceptable answer to this question. 
One important difference between the simple example shown above and our actual situation is that in our case, the problematic input stream does not come from an empty file. The problem arises in cases where buggy (or, rather, its real-world counterpart) reaches the end of this stream "too early", so to speak. I think it may be enough if I could test whether either stream is at its end, before the for row in reader: line, but I have not figured a way to do this either. Testing whether the value returned by stream.read(1) is 0 or 1 will tell me if stream is at its end, but in the latter case stream's internal pointer will be left pointing one byte too far into csvfile's content. (Neither stream.seek(-1, 1) nor stream.tell() work at this point.) Lastly, to anyone who would like post an answer to this question: it would be most efficient if you were to take advantage of the example code I have provided above to test your proposal before posting it. EDIT: One variation of mymod.py that I tried was this: import csv # essential! def buggy(csvfile): with open(csvfile) as stream: reader = csv.reader(stream) try: firstrow = next(reader) except StopIteration: firstrow = None if firstrow != None: yield firstrow for row in reader: yield row This variation fails with pretty much the same error message as does the original version. When I first read @mcernak's proposal, I thought that it was pretty similar to the variation above, and therefore expected it to fail too. Then I was pleasantly surprised to discover that this is not the case! Therefore, as of now, there is one definite candidate to get bounty. That said, I would love to understand why the variation above fails to trap the exception, while @mcernak's succeeds. 1 The actual case I'm dealing with is legacy code; switching from the csv module to some alternative is not an option for us in the short term. 2 Please, disregard entirely the question of what this demonstration script's "right response should be" when it gets invoked with an empty file and the string "first" as arguments. The particular combination of inputs that elicits the StopIteration exception in this post's demonstration does not represent the real-world condition that causes our code to emit the problematic StopIteration exception. Therefore, the "correct response", whatever that may be, of the demonstration script to the empty file plus "first" string combination would be irrelevant to the real-world problem I am dealing with. | You can trap the StopIteration exception in the lexical scope of the buggy function this way: import csv # essential! def buggy(csvfile): with open(csvfile) as stream: reader = csv.reader(stream) try: yield next(reader) except StopIteration: yield 'dummy value' for row in reader: yield row You basically manually request the first value from the reader iterator and if this succeeds, the first line is read from the csv file and is yielded to the caller of buggy function if this fails, as is the case for empty csv files, some string e.g. dummy value is yielded to prevent the caller of the buggy function from crashing Afterwards, if the csv file was not empty, the remaining rows will be read (and yielded) in the for cycle. EDIT: to illustrate why the other variation of mymod.py mentioned in the question does not work, I've added some print statements to it: import csv # essential! 
def buggy(csvfile): with open(csvfile) as stream: reader = csv.reader(stream) try: print('reading first row') firstrow = next(reader) except StopIteration: print('no first row exists') firstrow = None if firstrow != None: print('yielding first row: ' + firstrow) yield firstrow for row in reader: print('yielding next row: ' + row) yield row print('exiting function open') Running it gives the following output: % ./myscript.py empty_input.csv first reading first row no first row exists exiting function open Traceback (most recent call last): File "myscript.py", line 15, in <module> main(*sys.argv[1:]) File "myscript.py", line 9, in main print_row(next(mymod.buggy(csvfile))) That shows, that in case that the input file is empty, the first try..except block correctly handles the StopIteration exception and that the buggy function continues on normally. The exception that the caller of the buggy gets in this case is due to the fact that the buggy function does not yield any value before completing. | 6 | 5 |
64,329,049 | 2020-10-13 | https://stackoverflow.com/questions/64329049/converting-smiles-to-chemical-name-or-iupac-name-using-rdkit-or-other-python-mod | Is there a way to convert SMILES to either chemical name or IUPAC name using RDKit or other python modules? I couldn't find something very helpful in other posts. Thank you very much! | As far as I am aware this is not possible using rdkit, and I do not know of any python modules with this ability. If you are ok with using a web service you could use the NCI resolver. Here is a naive implementation of a function to retrieve an IUPAC identifier from a SMILES string: import requests CACTUS = "https://cactus.nci.nih.gov/chemical/structure/{0}/{1}" def smiles_to_iupac(smiles): rep = "iupac_name" url = CACTUS.format(smiles, rep) response = requests.get(url) response.raise_for_status() return response.text print(smiles_to_iupac('c1ccccc1')) print(smiles_to_iupac('CC(=O)OC1=CC=CC=C1C(=O)O')) [Out]: BENZENE 2-acetyloxybenzoic acid You could easily extend it to convert multiple different formats, although the function isn't exactly fast... Another solution is to use PubChem. You can use the API with the python package pubchempy. Bear in mind this may return multiple compounds. import pubchempy # Use the SMILES you provided smiles = 'O=C(NCc1ccc(C(F)(F)F)cc1)[C@@H]1Cc2[nH]cnc2CN1Cc1ccc([N+](=O)[O-])cc1' compounds = pubchempy.get_compounds(smiles, namespace='smiles') match = compounds[0] print(match.iupac_name) [Out]: (6S)-5-[(4-nitrophenyl)methyl]-N-[[4-(trifluoromethyl)phenyl]methyl]-3,4,6,7-tetrahydroimidazo[4,5-c]pyridine-6-carboxamide | 7 | 5 |
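A hedged sketch combining the two approaches from the answer — try PubChem first and fall back to the NCI CACTUS resolver when PubChem returns no IUPAC name; it reuses only the calls shown above (pubchempy.get_compounds and the CACTUS URL):

```python
import requests
import pubchempy

CACTUS = "https://cactus.nci.nih.gov/chemical/structure/{0}/{1}"

def smiles_to_iupac(smiles):
    # 1) Try PubChem first.
    compounds = pubchempy.get_compounds(smiles, namespace="smiles")
    if compounds and compounds[0].iupac_name:
        return compounds[0].iupac_name
    # 2) Fall back to the NCI CACTUS resolver.
    response = requests.get(CACTUS.format(smiles, "iupac_name"))
    response.raise_for_status()
    return response.text

print(smiles_to_iupac("CC(=O)OC1=CC=CC=C1C(=O)O"))  # aspirin
```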
64,336,575 | 2020-10-13 | https://stackoverflow.com/questions/64336575/select-a-file-or-a-folder-in-qfiledialog-pyqt5 | My scrip ist currently using QtWidgets.QFileDialog.getOpenFileNames() to let the user select files within Windows explorer. Now I´m wondering if there is a way to let them select also folders, not just files. There are some similar posts, but none of them provides a working solution. I really dont want to use the QFileDialog file explorer to get around this. | QFileDialog doesn't allow that natively. The only solution is to create your own instance, do some small "patching". Note that in order to achieve this, you cannot use the native dialogs of your OS, as Qt has almost no control over them; that's the reason of the dialog.DontUseNativeDialog flag, which is mandatory. The following code works as much as static methods do, and returns the selected items (or none, if the dialog is cancelled). def getOpenFilesAndDirs(parent=None, caption='', directory='', filter='', initialFilter='', options=None): def updateText(): # update the contents of the line edit widget with the selected files selected = [] for index in view.selectionModel().selectedRows(): selected.append('"{}"'.format(index.data())) lineEdit.setText(' '.join(selected)) dialog = QtWidgets.QFileDialog(parent, windowTitle=caption) dialog.setFileMode(dialog.ExistingFiles) if options: dialog.setOptions(options) dialog.setOption(dialog.DontUseNativeDialog, True) if directory: dialog.setDirectory(directory) if filter: dialog.setNameFilter(filter) if initialFilter: dialog.selectNameFilter(initialFilter) # by default, if a directory is opened in file listing mode, # QFileDialog.accept() shows the contents of that directory, but we # need to be able to "open" directories as we can do with files, so we # just override accept() with the default QDialog implementation which # will just return exec_() dialog.accept = lambda: QtWidgets.QDialog.accept(dialog) # there are many item views in a non-native dialog, but the ones displaying # the actual contents are created inside a QStackedWidget; they are a # QTreeView and a QListView, and the tree is only used when the # viewMode is set to QFileDialog.Details, which is not this case stackedWidget = dialog.findChild(QtWidgets.QStackedWidget) view = stackedWidget.findChild(QtWidgets.QListView) view.selectionModel().selectionChanged.connect(updateText) lineEdit = dialog.findChild(QtWidgets.QLineEdit) # clear the line edit contents whenever the current directory changes dialog.directoryEntered.connect(lambda: lineEdit.setText('')) dialog.exec_() return dialog.selectedFiles() | 7 | 8 |
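A short usage sketch, assuming the getOpenFilesAndDirs helper from the answer above is defined or imported in the same module; the caption and filter values are arbitrary examples:

```python
import sys
from PyQt5 import QtWidgets

# getOpenFilesAndDirs is the helper defined in the answer above
# (assumed to be defined earlier in this file or imported).
if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    paths = getOpenFilesAndDirs(
        caption="Pick files and/or folders",  # example caption
        filter="All files (*)",               # example filter
    )
    print("selected:", paths)
```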
64,338,294 | 2020-10-13 | https://stackoverflow.com/questions/64338294/how-to-disable-scientific-notation-in-hvplot-plots | I have just started using hvPlot today, as part of Panel. I am having a difficult time figuring out how to disable scientific notation in my plots. For example here is a simple bar plot. The axis and the tootltip are in scientific notation. How can I change the format to a simple int? I am showing this to non numerical and non techy management. They would rather see just basic integers and I don’t want to have to explain to them what scientific notation is. I could not find anything in the docs to help me: https://hvplot.holoviz.org/user_guide/Customization.html I’ve also tried to cobble together suggestions from Bokeh docs. I can’t figure it out. Please help! Thanks My simple df: local_date amount 0 Jan 19 506124.98 1 Feb 19 536687.28 2 Mar 19 652279.31 3 Apr 19 629440.06 4 May 19 703527.00 5 Jun 19 724234.08 6 Jul 19 733413.32 7 Aug 19 758647.44 8 Sep 19 782676.16 9 Oct 19 833674.28 10 Nov 19 864649.74 11 Dec 19 849920.47 12 Jan 20 857732.52 13 Feb 20 927399.50 14 Mar 20 1152440.49 15 Apr 20 1285779.35 16 May 20 1431744.76 17 Jun 20 1351893.95 18 Jul 20 1325507.38 19 Aug 20 1299528.81 And code: df.hvplot.bar(height=500,width=1000) | You can specify the formatter you would like to use in either x- or y-axis ticks, as such: df.hvplot.bar(height=500,width=1000, yformatter='%.0f') According to the Customization page you also referenced, the xformatter and yformatter arguments can accept "printf formatter, e.g. '%.3f', and bokeh TickFormatter". So, another way to do it is by passing a custom formatter from boken.models.formatters and customize it as such (Note: there's many other formatters you can also explore). For example: from bokeh.models.formatters import BasicTickFormatter df.hvplot.bar(height=500,width=1000, yformatter=BasicTickFormatter(use_scientific=False)) Both should give you a result like this: Now, editing the hover tooltip format is a bit tricker. One way to do it is to assign the figure object as custom HoverTool, in the following fashion: from bokeh.models import HoverTool hover = HoverTool(tooltips=[("amount", "@amount{0,0}"), ("local_date", "@local_date")]) df.hvplot.bar(height=500, width=1000, yformatter='%.0f', use_index=False).opts(tools=[hover]) You can find more details on how to configure a custom HoverTool here. | 9 | 14 |
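Another formatter that works through the same yformatter mechanism is Bokeh's NumeralTickFormatter; the sketch below is my own addition and uses a small made-up frame in place of the full data:

```python
import pandas as pd
import hvplot.pandas  # noqa: F401  (registers the .hvplot accessor)
from bokeh.models.formatters import NumeralTickFormatter

df = pd.DataFrame({
    "local_date": ["Jan 19", "Feb 19", "Mar 19"],
    "amount": [506124.98, 536687.28, 652279.31],
})

# '0,0' renders 1285779 as 1,285,779 instead of 1.286e+6.
df.hvplot.bar(x="local_date", y="amount", height=500, width=1000,
              yformatter=NumeralTickFormatter(format="0,0"))
```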
64,259,054 | 2020-10-8 | https://stackoverflow.com/questions/64259054/django-duplicated-logic-between-properties-and-queryset-annotations | When I want to define my business logic, I'm struggling finding the right way to do this, because I often both need a property AND a custom queryset to get the same info. In the end, the logic is duplicated. Let me explain... First, after defining my class, I naturally start writing a simple property for data I need: class PickupTimeSlot(models.Model): @property def nb_bookings(self) -> int: """ How many times this time slot is booked? """ return self.order_set.validated().count() Then, I quickly realise that calling this property while dealing with many objects in a queryset will lead to duplicated queries and will kill performance (even if I use prefetching, because filtering is called again). So I solve the problem writing a custom queryset with annotation: class PickupTimeSlotQuerySet(query.QuerySet): def add_nb_bookings_data(self): return self.annotate(db_nb_bookings=Count('order', filter=Q(order__status=Order.VALIDATED))) The issue And then, I end up with 2 problems: I have the same business logic ("how to find the number of bookings") written twice, that could lead to functional errors. I need to find two different attribute names to avoid conflicts, because obviously, setting nb_bookings for both the property and the annotation don't work. This forces me, when using my object, to think about how the data is generated, to call the right attribute name (let's say pickup_slot.nb_bookings (property) or pickup_slot.db_nb_bookings (annotation) ) This seems poorly designed to me, and I'm pretty sure there is a way to do better. I'd need a way to always write pickup_slot.nb_bookings and having a performant answer, always using the same business logic. I have an idea, but I'm not sure... I was thinking of completely removing the property and keeking custom queryset only. Then, for single objects, wrapping them in querysets just to be able to call add annotation data on it. Something like: pickup_slot = PickupTimeSlot.objects.add_nb_bookings_data().get(pk=pickup_slot.pk) Seems pretty hacky and unnatural to me. What do you think? | Based on your different good answers, I decided to stick with annotations and properties. I created a cache mechanism to make it transparent about the naming. The main advantage is to keep the business logic in one place only. The only drawback I see is that an object could be called from database a second time to be annotated. Performance impact stays minor IMO. Here is a full example with 3 different attributes I need in my model. Feel free to comment to improve this. 
models.py class PickupTimeSlotQuerySet(query.QuerySet): def add_booking_data(self): return self \ .prefetch_related('order_set') \ .annotate(_nb_bookings=Count('order', filter=Q(order__status=Order.VALIDATED))) \ .annotate(_nb_available_bookings=F('nb_max_bookings') - F('_nb_bookings')) \ .annotate(_is_bookable=Case(When(_nb_bookings__lt=F('nb_max_bookings'), then=Value(True)), default=Value(False), output_field=BooleanField()) ) \ .order_by('start') class PickupTimeSlot(models.Model): objects = SafeDeleteManager.from_queryset(PickupTimeSlotQuerySet)() nb_max_bookings = models.PositiveSmallIntegerField() @annotate_to_property('add_booking_data', 'nb_bookings') def nb_bookings(self): pass @annotate_to_property('add_booking_data', 'nb_available_bookings') def nb_available_bookings(self): pass @annotate_to_property('add_booking_data', 'is_bookable') def is_bookable(self): pass decorators.py def annotate_to_property(queryset_method_name, key_name): """ allow an annotated attribute to be used as property. """ from django.apps import apps def decorator(func): def inner(self): attr = "_" + key_name if not hasattr(self, attr): klass = apps.get_model(self._meta.app_label, self._meta.object_name) to_eval = f"klass.objects.{queryset_method_name}().get(pk={self.pk}).{attr}" value = eval(to_eval, {'klass': klass}) setattr(self, attr, value) return getattr(self, attr) return property(inner) return decorator | 19 | 1 |
64,319,173 | 2020-10-12 | https://stackoverflow.com/questions/64319173/pre-release-versions-are-not-matched-by-pip-when-using-the-pre-option | Imagine you have published two pre-releases: package 0.0.1.dev0 package 0.0.2.dev0 My install_requires section in setup.py states: [ 'package>=0.0.2,<1.0.0' ] Now, when i run pip install . --upgrade --pre I get an error: ERROR: Could not find a version that satisfies the requirement package<1.0.0,>=0.0.2 (from versions: 0.0.1.dev0, 0.0.2.dev0) ERROR: No matching distribution found for package<1.0.0,>=0.0.2 What am I doing wrong? Isn't the --pre flag supposed to tell pip to match pre-release versions? | Summary The pip --pre option directs pip to include potential matching pre-release and development versions, but it does not change the semantics of version matching. Since pre-release 0.0.2.dev0 is older than stable release 0.0.2, pip correctly reports an error when searching for a package that is at least as new as stable release 0.0.2. Explanation The key point of confusion is around the pip --pre option, which is documented as: --pre Include pre-release and development versions. By default, pip only finds stable versions. The premise of the question is that the --pre option should change the package-version-matching semantics such that pre-release version suffixes would be ignored when matching against stable versions. To further clarify, consider the compatible release operator ~=. PEP 440 section Compatible release, states in part: For a given release identifier V.N, the compatible release clause is approximately equivalent to the pair of comparison clauses: >= V.N, == V.* ... If a pre-release, post-release or developmental release is named in a compatible release clause as V.N.suffix, then the suffix is ignored when determining the required prefix match: ~= 2.2.post3 = 2.2.post3, == 2.* ~= 1.4.5a4 = 1.4.5a4, == 1.4.* This example makes it clear that the suffix is ignored. The following requirement does not match 0.0.2.dev0: install_requires=['package~=0.0.2'] # ERROR: ResolutionImpossible Whereas this example does match stable release 0.0.2: install_requires=['package~=0.0.2.dev0'] # OK - suffix ignored | 9 | 7 |
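The ordering rule can be checked directly with the packaging library (the same version/specifier machinery pip builds on); a small sketch of my own:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=0.0.2,<1.0.0")

print(Version("0.0.2.dev0") < Version("0.0.2"))       # True: the dev release predates 0.0.2
print(spec.contains("0.0.2.dev0", prereleases=True))  # False: still older than 0.0.2
print(spec.contains("0.0.3.dev0", prereleases=True))  # True: a pre-release that is new enough
print(spec.contains("0.0.3.dev0"))                    # False: pre-releases excluded by default
```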
64,323,261 | 2020-10-12 | https://stackoverflow.com/questions/64323261/how-does-fastapis-application-mounting-works | For certain reasons, we have chosen the FastAPI, in order to use it as back-end tier of our multi-module production. One of its attractive features is sub application, that helps us to separate different modules with intention of making it more modular. But we are concerned about some possible deficiencies which are missing in the official documentation. There are a considerable amount of common things -- e.g data, services, etc -- that we need to share them between main module and submodule through plugins, middle-wares and dependency-injection. The questions are: Is this feature good enough for separate modules? and so: Do sub applications inherit middle-ware, plugins and dependency injection from parent app? thanks for sharing your experiences. the sample code in the official docs from fastapi import FastAPI app = FastAPI() @app.get("/app") def read_main(): return {"message": "Hello World from main app"} subapi = FastAPI() @subapi.get("/sub") def read_sub(): return {"message": "Hello World from sub API"} app.mount("/subapi", subapi) | I think documentation is pretty clear about it. "Mounting" means adding a completely "independent" application. Whatsoever, let's keep going from your example. This is what we got for our subapi's routes. [{"path":route.path} for route in subapi.routes] = [ {'path': '/openapi.json'}, {'path': '/docs'}, {'path': '/docs/oauth2-redirect'}, {'path': '/redoc'}, {'path': '/sub'} ] This is what we got for app's routes. [{"path":route.path} for route in app.routes] = [{'path': '/openapi.json'}, {'path': '/docs'}, {'path': '/docs/oauth2-redirect'}, {'path': '/redoc'}, {'path': '/app'}, {'path': '/subapi'} ] That's quite interesting because our subapi did not inherited /app, let's keep going and make things more interesting, let us run our app with a single command uvicorn my_app_name:app As expected we have our app's documentation in /docs Also we have subapi's documentation in /subapi/docs, there is not interesting here. So what we should expect, when we add this? subapi.mount("/app", app) Let's run it again, but this time let's call subapi. uvicorn my_app_name:subapi What do we expect to see? By default we should have subapi's documentation in /docs The app's documentation in the /app/docs Yes, we are right, but things goes interesting from here. Now we have an application like Matryoshka dolls When we send a request to /app/subapi/sub (Remind we ran our app with uvicorn my_app_name:subapi) curl http://127.0.0.1:8000/app/subapi/sub Out: {"message":"Hello World from sub API"} Seems like it's working fine but let's try more. What about /app/subapi/app/subapi/app/subapi/app/subapi/app/subapi/app/app curl http://127.0.0.1:8000/app/subapi/app/subapi/app/subapi/app/subapi/app/subapi/app/app Out: {"message":"Hello World from main app"} Are you confused? Don't be, let me explain. When you mount a sub-application, FastAPI takes care of the mounted app, using a mechanism from the ASGI specification called a root_path What root_path does and why the example above worked? Straight-forward root_path says, you can reach all the routes that you defined in your app.routes from your root_path, let's visualize this. Now our root_path is /app /app/ Let's add subapi, and it became our root_path. /app/subapi/ Let's add app again, and it became our root_path /app/subapi/app Note: The example above worked because, we mounted two apps together. 
Are you not satisfied and you are saying what if I add a middleware, what is going to happen? Easy to answer, it will not inherit. Let me explain this with a simple example, I am going to add a middleware for my subapi. from fastapi.middleware.cors import CORSMiddleware subapi.add_middleware(CORSMiddleware) All the data for your application is inside of __dict__ So we can find out the difference easily by checking the 'user_middleware' key. subapi.__dict__['user_middleware'] = [Middleware(CORSMiddleware)] app.__dict__['user_middleware'] = [] All the other things you add etc will work independently because they are totally different applications underneath, so you will use mounting safely. Conclusion Yes, they will work independently. | 8 | 13 |
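A condensed sketch of the independence check described above — inspecting each app's routes and user_middleware after mounting; the CORS settings are placeholder values:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
subapi = FastAPI()
app.mount("/subapi", subapi)

# Middleware added to the parent only; the mounted app does not inherit it.
app.add_middleware(CORSMiddleware, allow_origins=["*"])

print([route.path for route in app.routes])     # includes '/subapi'
print([route.path for route in subapi.routes])  # only its own routes
print(app.user_middleware)                      # [Middleware(CORSMiddleware, ...)]
print(subapi.user_middleware)                   # []
```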
64,323,745 | 2020-10-12 | https://stackoverflow.com/questions/64323745/how-to-find-the-version-of-jupyter-notebook-from-within-the-notebook | I wish to return the version of Jupyter Notebook from within a cell of a notebook. For example, to get the python version, I run: from platform import python_version python_version() or to get the pandas version: pd.__version__ I have tried: notebook.version() ipython.version() jupyter.version() and several other, related forms (including capitalizing the first letters), but get errors that (for example): NameError: name 'jupyter' is not defined I am aware of other ways (e.g. clicking on Help>About in the GUI menu; using the conda command line) but I want to automate documentation of all package versions. If it matters, I am running v6.1.1 of Notebook in a Python 3.7.3 environment. | Paste the following command into your jupyter cell(exclamation symbol means that you need to run shell command, not python) !jupyter --version example output: jupyter core : 4.6.0 jupyter-notebook : 6.0.1 qtconsole : 4.7.5 ipython : 7.8.0 ipykernel : 5.1.3 jupyter client : 5.3.4 jupyter lab : not installed nbconvert : 5.6.0 ipywidgets : 7.5.1 nbformat : 4.4.0 traitlets : 4.3.3 To get the python version use the python --version command: !python --version example output: Python 3.6.8 UPDATE: to get values as dict you can use the following script(not perfect, written in 3 minutes) import subprocess versions = subprocess.check_output(["jupyter", "--version"]).decode().split('\n') parsed_versions = {} for component in versions: if component == "": continue comps = list(map(str.strip, component.split(': '))) parsed_versions[comps[0]] = comps[1] Value of parsed_versions variable { "jupyter core": "4.6.0", "jupyter-notebook": "6.0.1", "qtconsole": "4.7.5", "ipython": "7.8.0", "ipykernel": "5.1.3", "jupyter client": "5.3.4", "jupyter lab": "not installed", "nbconvert": "5.6.0", "ipywidgets": "7.5.1", "nbformat": "4.4.0", "traitlets": "4.3.3" } UPDATE 2: Thanks to @TrentonMcKinney for suggestions on how to make this script better | 12 | 22 |
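On Python 3.8+ the same information can also be read in-process via importlib.metadata, without shelling out; a sketch of my own (the package names listed are just common Jupyter components):

```python
from importlib.metadata import version, PackageNotFoundError

for pkg in ("notebook", "jupyter_core", "jupyter_client", "ipykernel", "jupyterlab"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```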
64,241,837 | 2020-10-7 | https://stackoverflow.com/questions/64241837/use-python-open-cv-for-segmenting-newspaper-article | I'm using the code below for segmenting the articles from an image of newspaper. def segmenter(image_received): # Process 1: Lines Detection img = image_received gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert to binary gray image edges = cv2.Canny(gray, 75, 150) # determine contours lines = cv2.HoughLinesP(edges, 0.017, np.pi / 180, 60, minLineLength=100, maxLineGap=0.1) # houghlines generation # drawing houghlines for line in lines: x1, y1, x2, y2 = line[0] cv2.line(img, (x1, y1), (x2, y2), (0, 0, 128), 12) # the houghlines of color (0,0,128) is drawn # Drawing brown border bold = cv2.copyMakeBorder( img, # image source 5, # top width 5, # bottomm width 5, # left width 5, # right width cv2.BORDER_CONSTANT, value=(0, 0, 128) # brown color value ) image = bold gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 1)) detected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2) cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: if int(len(c) >= 10): cv2.drawContours(image, [c], 0, (0, 17, 255), 1) vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 1)) detected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2) cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: if int(len(c) >= 10): cv2.drawContours(image, [c], 0, (0, 17, 255), 1) cv2.imwrite(f'tmp/{str(str(uuid.uuid4()))}.jpg', image) for instance the input image is and the output image is : There are three problems: the output rectangles aren't complete in all cases. Images also are segmented inside articles as part of articles. But what I need is to segment only the text of the newspaper and crop all the other things out. Something like this one: Consider the following image: The article indicated by borders is not rectangular and is much more complicated. How can I achieve the correct borders using python open-cv or other image processing libraries? (the question has an answer here for matlab. But I need a python code. | here my pipeline. I think can be optimized. Initialization %matplotlib inline import numpy as np import cv2 from matplotlib import pyplot as plt Load image image_file_name = 'paper.jpg' image = cv2.imread(image_file_name) # gray convertion gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) The first important thing is to remove the lines. So I search the lines. 
grad_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3) grad_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3) abs_grad_x = cv2.convertScaleAbs(grad_x) abs_grad_y = cv2.convertScaleAbs(grad_y) # threshold thresh_x = cv2.threshold(abs_grad_x, 0, 255, cv2.THRESH_OTSU)[1] thresh_y = cv2.threshold(abs_grad_y, 0, 255, cv2.THRESH_OTSU)[1] # bluring kernel_size = 3 blur_thresh_x = cv2.GaussianBlur(thresh_x,(kernel_size, kernel_size),0) blur_thresh_y = cv2.GaussianBlur(thresh_y,(kernel_size, kernel_size),0) # Run Hough on edge detected image rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi / 180 # angular resolution in radians of the Hough grid threshold = 15 # minimum number of votes (intersections in Hough grid cell) min_line_length = 200 # minimum number of pixels making up a line max_line_gap = 1 # maximum gap in pixels between connectable line segments line_image = np.copy(gray) * 0 # creating a blank to draw lines on # Vertical lines vertical_lines = cv2.HoughLinesP(blur_thresh_x, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) if vertical_lines is not None: for line in vertical_lines: for x1,y1,x2,y2 in line: # here it's possible to add a selection of only vertical lines if np.abs(y1-y2)> 0.1 * np.abs(x1-x2): cv2.line(line_image,(x1,y1),(x2,y2),255,5) # Horizontal lines horizontal_lines = cv2.HoughLinesP(blur_thresh_y, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) if horizontal_lines is not None: for line in horizontal_lines: for x1,y1,x2,y2 in line: # here it's possible to add a selection of only horizontal lines if np.abs(x1-x2)> 0.1 * np.abs(y1-y2): cv2.line(line_image,(x1,y1),(x2,y2),255,5) After I remove the lines from the threshold # threshold thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # remove lines clean_thresh = cv2.subtract(thresh, line_image) Then I search the phrases # search the phrases dilatation_type = cv2.MORPH_RECT horizontal_dilatation = 20 #This is the gap. 
20 for the first image, 10 for the second image vertical_dilatation = 1 element = cv2.getStructuringElement(dilatation_type, (2*horizontal_dilatation + 1, 2*vertical_dilatation+1), (horizontal_dilatation, vertical_dilatation)) dilatation_thresh = cv2.dilate(clean_thresh, element) # Fill filled_tresh = dilatation_thresh.copy() contours, hierarchy = cv2.findContours(dilatation_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) for cnt in contours: cv2.drawContours(filled_tresh, [cnt], -1, 255, cv2.FILLED) Now I detect the bounding boxes # Draw bounding boxes bounding_box1 = filled_tresh.copy() contours, hierarchy = cv2.findContours(bounding_box1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) cv2.rectangle(bounding_box1,(x,y),(x+w,y+h),255,cv2.FILLED) # REPEAT Draw bounding boxes and Find the mean text width mean_bb_width = 0 # mean bounding box width bounding_box2 = bounding_box1.copy() contours, hierarchy = cv2.findContours(bounding_box2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) num_cnt=0 for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) cv2.rectangle(bounding_box2,(x,y),(x+w,y+h),255,cv2.FILLED) mean_bb_width = mean_bb_width+w num_cnt=num_cnt+1 mean_bb_width=mean_bb_width/num_cnt Now I separate the titles from the text # define title what has width bigger than 1.5* mean_width min_title_width = 1.5 * mean_bb_width raw_title = np.copy(gray) * 0 raw_text = np.copy(gray) * 0 # separate titles from phrases contours, hierarchy = cv2.findContours(bounding_box2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) if w >=min_title_width : cv2.drawContours(raw_title, [cnt], -1, 255, cv2.FILLED) else : cv2.drawContours(raw_text, [cnt], -1, 255, cv2.FILLED) and then the final processing image_out = image.copy() # Closing parameters horizontal_closing = 1 vertical_closing = 20 kernel = cv2.getStructuringElement(cv2.MORPH_RECT,(horizontal_closing,vertical_closing)) # Processing titles # Closing closing_title = cv2.morphologyEx(raw_title, cv2.MORPH_CLOSE, kernel) # Find contours contours, hierarchy = cv2.findContours(closing_title, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) # Draw bounding boxes bounding_title = closing_title.copy() for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) cv2.rectangle(image_out,(x,y),(x+w,y+h),(255,0,0),2) # Processing text # Closing closing_text = cv2.morphologyEx(raw_text, cv2.MORPH_CLOSE, kernel) # Find contours contours, hierarchy = cv2.findContours(closing_text , cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) # Draw bounding boxes bounding_text = closing_text.copy() for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) cv2.rectangle(image_out,(x,y),(x+w,y+h),(0,255,0),2) The result is Changing the parameter horizontal_dilatation from 20 to 10, I obtain for the second image (where I remove the red border that you added) the following result | 12 | 21 |
64,317,360 | 2020-10-12 | https://stackoverflow.com/questions/64317360/how-to-use-memcached-in-django | I've seen problems all over with using Memcached in Django projects, which is considered to be "the fastest, most efficient type of cache supported natively by Django". For instance: Why doesn't memcache work in my Django? How to configure Memcache for Django on Google cloud AppEngine? Django doesn't use memcached framework memcache on django is not working How to use memcached in django project? How do you confirm django is using memcached? Configuring memcached with django What steps are needed to implement memcached in a Django application? How to use memcached in django project? So, how can we then use it? | This answer explains how to install Memcached on Windows 10 and how to integrate it with Django through a specific client. It was validated using Memcached 1.4.4, Python 2.7 and Django 1.11. In your Django project, under settings.py, add the following code at the bottom of the file SESSION_ENGINE='django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': '127.0.0.1:11211', } } Install the memcached client for Python with your virtual environment active (python-memcached) pip install python-memcached Download Memcached using one of the following download links and extract it to a particular folder http://downloads.northscale.com/memcached-win32-1.4.4-14.zip http://downloads.northscale.com/memcached-win64-1.4.4-14.zip Open that folder location in the Terminal or PowerShell and run .\memcached.exe -h you should get something like this Run the following command (-m is the amount of memory you want to dedicate and -vvv is for extremely verbose output) .\memcached.exe -m 512 -vvv For the view you want to cache, specify it in urls.py like from django.conf.urls import include, url from django.views.decorators.cache import cache_page from .views import IndexView urlpatterns = [ url(r'^$', cache_page(60*60)(IndexView.as_view()), name="index"), ] Go to the Django project, start the server and you should get much better load times. | 9 | 11 |
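Beyond per-view caching with cache_page, the same CACHES backend is available through Django's low-level cache API; a minimal sketch of my own, with a made-up cache key and lookup function:

```python
from django.core.cache import cache  # uses the 'default' entry from CACHES

def expensive_lookup():
    return {"answer": 42}  # stand-in for a slow query or computation

def get_data():
    data = cache.get("expensive-data")  # made-up key
    if data is None:
        data = expensive_lookup()
        cache.set("expensive-data", data, timeout=60 * 60)  # cache for one hour
    return data
```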
64,306,938 | 2020-10-11 | https://stackoverflow.com/questions/64306938/how-can-i-generate-three-random-integers-that-satisfy-some-condition | I'm a beginner in programming and I'm looking for a nice idea of how to generate three integers that satisfy a condition. Example: We are given n = 30, and we've been asked to generate three integers a, b and c, so that 7*a + 5*b + 3*c = n. I tried to use for loops, but it takes too much time and I have a maximum testing time of 1000 ms. I'm using Python 3. My attempt: x = int(input()) c = [] k = [] w = [] for i in range(x): for j in range(x): for h in range(x): if 7*i + 5*j + 3*h == x: c.append(i) k.append(j) w.append(h) if len(c) == len(k) == len(w) == 0: print(-1) else: print(str(k[0]) + ' ' + str(c[0]) + ' ' + str(w[0])) | import numpy as np def generate_answer(n: int, low_limit:int, high_limit: int): while True: a = np.random.randint(low_limit, high_limit + 1, 1)[0] b = np.random.randint(low_limit, high_limit + 1, 1)[0] c = (n - 7 * a - 5 * b) / 3.0 if int(c) == c and low_limit <= c <= high_limit: break return a, b, int(c) if __name__ == "__main__": n = 30 ans = generate_answer(low_limit=-5, high_limit=50, n=n) assert ans[0] * 7 + ans[1] * 5 + ans[2] * 3 == n print(ans) If you select two of the numbers a, b, c, you know the third. In this case, I randomize ints for a, b, and I find c by c = (n - 7 * a - 5 * b) / 3.0. Make sure c is an integer, and in the allowed limits, and we are done. If it is not, randomize again. If you want to generate all possibilities, def generate_all_answers(n: int, low_limit:int, high_limit: int): results = [] for a in range(low_limit, high_limit + 1): for b in range(low_limit, high_limit + 1): c = (n - 7 * a - 5 * b) / 3.0 if int(c) == c and low_limit <= c <= high_limit: results.append((a, b, int(c))) return results | 40 | 36 |
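The same "fix two values, solve for the third" idea also works deterministically and without NumPy; a sketch of my own that only scans non-negative a and b, so it stays far below the 1000 ms limit:

```python
def find_triple(n):
    # Fix a and b; then c is forced, so only O(n^2) combinations are scanned.
    for a in range(n // 7 + 1):
        for b in range((n - 7 * a) // 5 + 1):
            rest = n - 7 * a - 5 * b
            if rest % 3 == 0:
                return a, b, rest // 3
    return None  # no non-negative solution exists

print(find_triple(30))  # e.g. (0, 0, 10): 7*0 + 5*0 + 3*10 == 30
print(find_triple(2))   # None
```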
64,309,821 | 2020-10-11 | https://stackoverflow.com/questions/64309821/difference-between-the-lte-and-gte-in-django | I am trying to figure out the difference between the __lte and __gte lookups in Django. The reason is that I am trying to create a function with dates that works only within a time frame, so I've been researching field lookup comparisons. I've looked at several documentation pages https://docs.djangoproject.com/en/3.0/ref/models/querysets/#exclude but didn't reach a conclusive answer. Edited: I learned that lte is less than or equal and gte is greater than or equal. Here is some documentation link | The __lte lookup [Django-doc] means that you constrain the field to be less than or equal to the given value, whereas the __gte lookup [Django-doc] means that the field is greater than or equal to the given value. So for example: MyModel.objects.filter(field__gte=5) # field ≥ 5 MyModel.objects.filter(field__lte=5) # field ≤ 5 | 10 | 7 |
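Since the question is about a date time frame, the same lookups apply directly to date fields; a sketch of my own using the answer's hypothetical MyModel with an assumed created_on DateField:

```python
import datetime

# Assumes a hypothetical model with a DateField named "created_on".
start = datetime.date(2020, 1, 1)
end = datetime.date(2020, 12, 31)

# created_on >= start AND created_on <= end
in_frame = MyModel.objects.filter(created_on__gte=start, created_on__lte=end)

# Equivalent shortcut for an inclusive interval:
in_frame = MyModel.objects.filter(created_on__range=(start, end))
```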
64,298,298 | 2020-10-10 | https://stackoverflow.com/questions/64298298/type-hinting-callable-with-no-parameters | I want to use type hinting for a function with no parameters from typing import Callable def no_parameters_returns_int() -> int: return 7 def get_int_returns_int(a: int) -> int: return a def call_function(next_method: Callable[[], int]): print(next_method()) call_function(no_parameters_returns_int) # no indication of error from IDE expected call_function(get_int_returns_int) # expected an indication of error from IDE I expected PyCharm to mark the line when I pass a function that does take parameters. Also tried Callable[[None], int] and Callable[[...], int]. However the first one hinting the passed function to receive a None type argument, second one hinting the passed function to receive at least one argument. Is it possible to hint that the passed function receives no arguments? | Is it possible to hint that the passed function receives no arguments? The correct way to type hint a Callable without arguments is stated in: "Fundamental building blocks", PEP 483 Callable[[t1, t2, ..., tn], tr]. A function with positional argument types t1 etc., and return type tr. The argument list may be empty n==0. An explicit example is given in: "Covariance and Contravariance", PEP 483 - Callable[[], int] is a subtype of Callable[[], float]. - Callable[[], Manager] is a subtype of Callable[[], Employee]. And also in: "Callable", PEP 484 from typing import Callable def feeder(get_next_item: Callable[[], str]) -> None: # Body The built-in name None should be distinguished from the type None (the first is used to access the second): 3.2. The standard type hierarchy, Data Model None This type has a single value. There is a single object with this value. This object is accessed through the built-in name None. The syntax and meaning of the built-in name None used as a type hint is a special case: "Using None", PEP 484 When used in a type hint, the expression None is considered equivalent to type(None). Considering the above, it's less of a surprise the following two ways -of trying to write a Callable type hint of a function without arguments- are wrong: Callable[[None], tr] Callable[[type(None)], tr] The Ellipsis in a Callable type hint simply means: "Callable", PEP 484 Note that there are no square brackets around the ellipsis. The arguments of the callback are completely unconstrained in this case (and keyword arguments are acceptable). Since it is "unconstrained" the following is unlikely to cause the static type checker to issue any warnings because of arguments: Callable[..., tr] Worth noting, the relation between Callable, Any and ... (Ellipsis). "The Any type", PEP 484 As well, a bare Callable in an annotation is equivalent to Callable[..., Any] Finally, if you run your code through MyPy the expected warning is in fact issued: main.py:13: error: Argument 1 to "call_function" has incompatible type "Callable[[int], int]"; expected "Callable[[], int]" Found 1 error in 1 file (checked 1 source file) I checked your example in PyCharm 2020.2 Pro and the IDE does not issue the above warning. Notice that PyCharm uses its own implementation of PEP 484, and their static type checker has been know to have bugs. I think you found a bug... Final Note: Running type(None) gives NoneType. In Python 3 NoneType isn't exposed for import although in Python 2 it was importable. EDIT: For some reason Python 3.10 is reintroducing types.NoneType. | 12 | 9 |
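A minimal file to reproduce the MyPy behaviour described above (running mypy on it should flag the second call, even if PyCharm does not):

```python
from typing import Callable

def no_args() -> int:
    return 7

def one_arg(a: int) -> int:
    return a

def call_function(next_method: Callable[[], int]) -> None:
    print(next_method())

call_function(no_args)  # OK: matches Callable[[], int]
call_function(one_arg)  # mypy: Argument 1 has incompatible type "Callable[[int], int]"
```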
64,277,506 | 2020-10-9 | https://stackoverflow.com/questions/64277506/what-does-the-qq-mean-as-a-pip-install-option | I saw this in a jupyter notebook: !pip install -Uqq fastbook ! runs commands on the shell. U stands for upgrade. What does the option qq mean? q stands for quiet. Why are there two q's? Looked up pip install --help. Looked up the User Guide to no avail. | The option -q of pip gives less output. The option is additive. In other words, you can use it up to 3 times (corresponding to the WARNING, ERROR, and CRITICAL logging levels). So: -q means display only the messages with WARNING, ERROR, CRITICAL log levels -qq means display only the messages with ERROR, CRITICAL log levels -qqq means display only the messages with CRITICAL log level | 15 | 24 |
64,303,839 | 2020-10-11 | https://stackoverflow.com/questions/64303839/how-to-calculate-ndcg-with-binary-relevances-using-sklearn | I'm trying to calculate the NDCG score for binary relevances: from sklearn.metrics import ndcg_score y_true = [0, 1, 0] y_pred = [0, 1, 0] ndcg_score(y_true, y_pred) And getting: ValueError: Only ('multilabel-indicator', 'continuous-multioutput', 'multiclass-multioutput') formats are supported. Got binary instead Is there a way to make this work? | Please try: from sklearn.metrics import ndcg_score y_true = [[0, 1, 0]] y_pred = [[0, 1, 0]] ndcg_score(y_true, y_pred) 1.0 Note the expected shapes in the docs: y_true: ndarray, shape (n_samples, n_labels) y_score: ndarray, shape (n_samples, n_labels) | 9 | 12 |
64,303,326 | 2020-10-11 | https://stackoverflow.com/questions/64303326/using-playwright-for-python-how-do-i-select-or-find-an-element | I'm trying to learn the Python version of Playwright. See here. I would like to learn how to locate an element, so that I can do things with it, like printing the inner HTML, clicking on it and such. The example below loads a page and prints the HTML from playwright import sync_playwright with sync_playwright() as p: browser = p.chromium.launch(headless=False) page = browser.newPage() page.goto('http://whatsmyuseragent.org/') print(page.innerHTML("*")) browser.close() This page contains an element <div class="user-agent"> <p class="intro-text">Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4238.0 Safari/537.36</p> </div> Using Selenium, I could locate the element and print its content like this elem = driver.find_element_by_class_name("user-agent") print(elem) print(elem.get_attribute("innerHTML")) How can I do the same in Playwright? #UPDATE# - Note: if you want to run this in 2021+, current versions of Playwright have changed the syntax from camelCase to snake_case. | You can use the querySelector function, and then call the innerHTML function: handle = page.querySelector(".user-agent") print(handle.innerHTML()) | 10 | 7 |
64,295,425 | 2020-10-10 | https://stackoverflow.com/questions/64295425/how-to-set-different-colors-for-bars-in-a-plotly-waterfall-chart | I have a waterfall chart and I want to set each bar's color separately (blue for the first one, red for the 2nd, 3rd, and 4th one, green for 5th one, and blue for 6th one). All the relative bars in the chart are increasing, and the plotly only allows you to set three colors for increasing, decreasing, and total ones. Is there any way to do what I want? import plotly.graph_objects as go fig = go.Figure(go.Waterfall( name = "20", orientation = "v", measure = ["relative", "relative", "relative", "relative", "relative", "total"], x = ["Buy", "Transaction Cost", "Remodeling Cost", "Ownership Cost", "Gain", "Sell"], textposition = "outside", text = ["$200", "$14", "$45", "$5", "$86", "$350"], y = [200, 14, 45, 5, 86, 350], connector = {"visible": False} )) fig.show() Result: As I said, I want the color of the bar to be: blue for the first one, red for the 2nd, 3rd, and 4th one, green for 5th one, and blue for 6th one | Problem Ploty waterfall chart bar color customization. As OP mentioned, currently plotly supports customizing bar colors for decreasing, increasing, and totals. Solution In OP's example, to make color of bars (blue, red, red, red, green, blue): set marker color red in increasing attribute set marker color blue in totals attribute add shapes of blue and green to the 1st and 4th bar via .add_shape() import plotly.graph_objects as go fig = go.Figure(go.Waterfall( name = "20", orientation = "v", measure = ["relative", "relative", "relative", "relative", "relative", "total"], x = ["Buy", "Transaction Cost", "Remodeling Cost", "Ownership Cost", "Gain", "Sell"], textposition = "outside", text = ["$200", "$14", "$45", "$5", "$86", "$350"], y = [200, 14, 45, 5, 86, 350], increasing = {"marker":{"color":"red"}}, totals = {"marker":{"color":"blue"}}, connector = {"visible": False} )) fig.add_shape( type="rect", fillcolor="blue", line=dict(color="blue"), opacity=1, x0=-0.4, x1=0.4, xref="x", y0=0.0, y1=fig.data[0].y[0], yref="y" ) fig.add_shape( type="rect", fillcolor="green", line=dict(color="green"), opacity=1, x0=3.6, x1=4.4, xref="x", y0=fig.data[0].y[-1] - fig.data[0].y[-2], y1=fig.data[0].y[-1], yref="y" ) fig.show() Which would yield the result OP wanted Reference https://plotly.com/python/waterfall-charts/ R Plotly: How to set the color of Individual Bars of a Waterfall Chart in R Plot.ly? https://plotly.com/python/styling-plotly-express/#updating-or-modifying-figures-made-with-plotly-express | 6 | 10 |
64,246,437 | 2020-10-7 | https://stackoverflow.com/questions/64246437/how-to-increase-aws-sagemaker-invocation-time-out-while-waiting-for-a-response | I deployed a large 3D model to AWS SageMaker. Inference will take 2 minutes or more. I get the following error while calling the predictor from Python: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (0) from model with message "Your invocation timed out while waiting for a response from container model. Review the latency metrics for each container in Amazon CloudWatch, resolve the issue, and try again." In CloudWatch I also see some PING timeouts while the container is processing: 2020-10-07T16:02:39.718+02:00 2020/10/07 14:02:39 https://forums.aws.amazon.com/ 106#106: *251 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.32.0.2, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock/ping", host: "model.aws.local:8080" How do I increase the invocation timeout? Or is there a way to make async invocations to a SageMaker endpoint? | It’s currently not possible to increase the timeout; this is an open issue on GitHub. Looking through the issue and similar questions on SO, it seems like you may be able to use batch transforms in conjunction with inference. References https://stackoverflow.com/a/55642675/806876 Sagemaker Python SDK timeout issue: https://github.com/aws/sagemaker-python-sdk/issues/1119 | 12 | 8 |
64,292,938 | 2020-10-10 | https://stackoverflow.com/questions/64292938/how-to-find-indices-of-first-two-elements-in-a-list-that-are-any-of-the-elements | How do I find the indices of the first two elements in a list that are any of the elements in another list? For example: story = ['a', 'b', 'c', 'd', 'b', 'c', 'c'] elementsToCheck = ['a', 'c', 'f', 'h'] In this case, the desired output is a list indices = [0,2] for strings 'a' and 'c'. | story = ['a', 'b', 'c', 'd', 'b', 'c', 'c'] elementsToCheck = ['a', 'c', 'f', 'h'] out = [] for i, v in enumerate(story): if v in elementsToCheck: out.append(i) if len(out) == 2: break print(out) Prints: [0, 2] | 9 | 7 |
64,237,904 | 2020-10-7 | https://stackoverflow.com/questions/64237904/numpy-installation-for-python-ver-3-9 | I'm trying to install NumPy but I'm facing an issue. The python ver I'm using is 3.9 and Windows version is 10. The error is as follows: C:\>pip3 install numpy Collecting numpy Using cached numpy-1.19.2.zip (7.3 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... error ERROR: Command errored out with exit status 1: command: 'c:\users\arr48\appdata\local\programs\python\python39\python.exe' 'c:\users\arr48\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\arr48\AppData\Local\Temp\tmp0g3__7ax' cwd: C:\Users\arr48\AppData\Local\Temp\pip-install-fincp1mj\numpy Complete output (200 lines): Running from numpy source directory. setup.py:470: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE blis_info: libraries blis not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE openblas_info: libraries openblas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 
'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE accelerate_info: NOT AVAILABLE C:\Users\arr48\AppData\Local\Temp\pip-install-fincp1mj\numpy\numpy\distutils\system_info.py:1914: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): blas_info: libraries blas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE C:\Users\arr48\AppData\Local\Temp\pip-install-fincp1mj\numpy\numpy\distutils\system_info.py:1914: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\arr48\AppData\Local\Temp\pip-install-fincp1mj\numpy\numpy\distutils\system_info.py:1914: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE flame_info: libraries flame not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries tatlas,tatlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs libraries tatlas,tatlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries satlas,satlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs libraries satlas,satlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries ptf77blas,ptcblas,atlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs libraries ptf77blas,ptcblas,atlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries f77blas,cblas,atlas not found in c:\users\arr48\appdata\local\programs\python\python39\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs libraries f77blas,cblas,atlas not found in c:\users\arr48\appdata\local\programs\python\python39\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: libraries lapack not found in ['c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\lib', 'C:\\', 'c:\\users\\arr48\\appdata\\local\\programs\\python\\python39\\libs'] NOT AVAILABLE C:\Users\arr48\AppData\Local\Temp\pip-install-fincp1mj\numpy\numpy\distutils\system_info.py:1748: 
UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\arr48\AppData\Local\Temp\pip-install-fincp1mj\numpy\numpy\distutils\system_info.py:1748: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] c:\users\arr48\appdata\local\programs\python\python39\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running dist_info running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.9 creating build\src.win-amd64-3.9\numpy creating build\src.win-amd64-3.9\numpy\distutils building library "npymath" sources error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\arr48\appdata\local\programs\python\python39\python.exe' 'c:\users\arr48\appdata\local\programs\python\python39\lib\site- packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\arr48\AppData\Local\Temp\tmp0g3__7ax' Check the logs for full command output. | The numpy package does not yet include binaries for Python 3.9, so pip tries to compile from source. This (of course) requires you to have the appropriate C compiler, as the error message says. That is not straightforward. pip wants Visual C++ 14.2. The only version readily available from Microsoft is Visual C++ 2019, a.k.a. version 16, which won't work with pip. You can get Visual C++ 2015, a.k.a version 14, from https://visualstudio.microsoft.com/vs/older-downloads/. But it appears that only the latest version is free. You may have to pay for an older version. Even then, having the compiler isn't enough. You have to configure it, and that is not for the faint-hearted. Instead, download a prepackaged binary wheel from Christoph Gohlke's site at https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy. The official scipy.org docs recommend this site. Obviously you must choose the .whl file that matches your system. Install the wheel using pip like this: pip install D:\Users\<user>\Downloads\numpy-1.19.2+mkl-cp39-cp39-win_amd64.whl | 7 | 5 |
64,269,453 | 2020-10-8 | https://stackoverflow.com/questions/64269453/pandas-replace-duplicates-with-nan-and-keep-row | How do I replace duplicates for each group with NaNs while keeping the rows? I need to keep rows without removing and perhaps keeping the first original value where it shows up first. import pandas as pd from datetime import timedelta df = pd.DataFrame({ 'date': ['2019-01-01 00:00:00','2019-01-01 01:00:00','2019-01-01 02:00:00', '2019-01-01 03:00:00', '2019-09-01 02:00:00','2019-09-01 03:00:00','2019-09-01 04:00:00', '2019-09-01 05:00:00'], 'value': [10,10,10,10,12,12,12,12], 'ID': ['Jackie','Jackie','Jackie','Jackie','Zoop','Zoop','Zoop','Zoop',] }) df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True) date value ID 0 2019-01-01 00:00:00 10 Jackie 1 2019-01-01 01:00:00 10 Jackie 2 2019-01-01 02:00:00 10 Jackie 3 2019-01-01 03:00:00 10 Jackie 4 2019-09-01 02:00:00 12 Zoop 5 2019-09-01 03:00:00 12 Zoop 6 2019-09-01 04:00:00 12 Zoop 7 2019-09-01 05:00:00 12 Zoop Desired Dataframe: date value ID 0 2019-01-01 00:00:00 10 Jackie 1 2019-01-01 01:00:00 NaN Jackie 2 2019-01-01 02:00:00 NaN Jackie 3 2019-01-01 03:00:00 NaN Jackie 4 2019-09-01 02:00:00 12 Zoop 5 2019-09-01 03:00:00 NaN Zoop 6 2019-09-01 04:00:00 NaN Zoop 7 2019-09-01 05:00:00 NaN Zoop Edit: Duplicated values should only be dropped on the same date indifferent of the frequency. So if value 10 shows up on twice on Jan-1 and three times on Jan-2, the value 10 should only show up once on Jan-1 and once on Jan-2. | I assume you check duplicates on columns value and ID and further check on date of column date df.loc[df.assign(d=df.date.dt.date).duplicated(['value','ID', 'd']), 'value'] = np.nan Out[269]: date value ID 0 2019-01-01 00:00:00 10.0 Jackie 1 2019-01-01 01:00:00 NaN Jackie 2 2019-01-01 02:00:00 NaN Jackie 3 2019-01-01 03:00:00 NaN Jackie 4 2019-09-01 02:00:00 12.0 Zoop 5 2019-09-01 03:00:00 NaN Zoop 6 2019-09-01 04:00:00 NaN Zoop 7 2019-09-01 05:00:00 NaN Zoop As @Trenton suggest, you may use pd.NA to avoid import numpy (Note: as @rafaelc sugguest: here is the link explain detail differences between pd.NA and np.nan https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.0.0.html#experimental-na-scalar-to-denote-missing-values) df.loc[df.assign(d=df.date.dt.date).duplicated(['value','ID', 'd']), 'value'] = pd.NA Out[273]: date value ID 0 2019-01-01 00:00:00 10 Jackie 1 2019-01-01 01:00:00 <NA> Jackie 2 2019-01-01 02:00:00 <NA> Jackie 3 2019-01-01 03:00:00 <NA> Jackie 4 2019-09-01 02:00:00 12 Zoop 5 2019-09-01 03:00:00 <NA> Zoop 6 2019-09-01 04:00:00 <NA> Zoop 7 2019-09-01 05:00:00 <NA> Zoop | 7 | 10 |
64,266,229 | 2020-10-8 | https://stackoverflow.com/questions/64266229/fast-way-to-find-length-and-start-index-of-repeated-elements-in-array | I have an array A: import numpy as np A = np.array( [0, 0, 1, 1, 1, 0, 1, 1, 0 ,0, 1, 0] ) The length of consecutive '1s' would be: output: [3, 2, 1] with the corresponding starting indices: idx = [2, 6, 10] The original arrays are huge and I prefer a solution with less for-loop. Edit (Run time): import numpy as np import time A = np.array( [0, 0, 1, 1, 1, 0, 1, 1, 0 ,0, 1, 0] ) def LoopVersion(A): l_A = len(A) size = [] idx = [] temp_idx = [] temp_size = [] for i in range(l_A): if A[i] == 1: temp_size.append(1) if not temp_idx: temp_idx = i idx.append(temp_idx) else: size.append( len(temp_size) ) size = [i for i in size if i != 0] temp_size = [] temp_idx = [] return size, idx Quang's solution: def UniqueVersion(A): _, idx, counts = np.unique(np.cumsum(1-A)*A, return_index=True, return_counts=True) return idx, counts Jacco's solution: def ConcatVersion(A): A = np.concatenate(([0], A, [0])) # get rid of some edge cases starts = np.argwhere((A[:-1] + A[1:]) == 1).ravel()[::2] ends = np.argwhere((A[:-1] + A[1:]) == 1).ravel()[1::2] len_of_repeats = ends - starts return starts, len_of_repeats Dan's solution (works with special cases as well): def structure(A): ZA = np.concatenate(([0], A, [0])) indices = np.flatnonzero( ZA[1:] != ZA[:-1] ) counts = indices[1:] - indices[:-1] return indices[::2], counts[::2] Run time analysis with 10000 elements: np.random.seed(1234) B = np.random.randint(2, size=10000) start = time.time() size, idx = LoopVersion(B) end = time.time() print ( (end - start) ) # 0.32489800453186035 seconds start = time.time() idx, counts = UniqueVersion(B) end = time.time() print ( (end - start) ) # 0.008305072784423828 seconds start = time.time() idx, counts = ConcatVersion(B) end = time.time() print ( (end - start) ) # 0.0009801387786865234 seconds start = time.time() idx, counts = structure(B) end = time.time() print ( (end - start) ) # 0.000347137451171875 seconds | Here is a pedestrian try, solving the problem by programming the problem. We prepend and also append a zero to A, getting a vector ZA, then detect the 1 islands, and the 0 islands coming in alternating manner in the ZA by comparing the shifted versions ZA[1:] and ZA[-1]. (In the constructed arrays we take the even places, corresponding to the ones in A.) import numpy as np def structure(A): ZA = np.concatenate(([0], A, [0])) indices = np.flatnonzero( ZA[1:] != ZA[:-1] ) counts = indices[1:] - indices[:-1] return indices[::2], counts[::2] Some sample runs: In [71]: structure(np.array( [0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0] )) Out[71]: (array([ 2, 6, 10]), array([3, 2, 1])) In [72]: structure(np.array( [1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1] )) Out[72]: (array([ 0, 5, 9, 13, 15]), array([3, 3, 2, 1, 1])) In [73]: structure(np.array( [1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0] )) Out[73]: (array([0, 5, 9]), array([3, 3, 2])) In [74]: structure(np.array( [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1] )) Out[74]: (array([ 0, 2, 5, 7, 11, 14]), array([1, 2, 1, 3, 2, 3])) | 11 | 3 |
64,239,799 | 2020-10-7 | https://stackoverflow.com/questions/64239799/werkzeug-disable-bash-colors-when-logging-to-file | In a Flask application, I use a RotatingFileLogger to log werkzeug access logs to a file like shown in this question: file_handler_access_log = RotatingFileHandler("access.log", backupCount=5, encoding='utf-8') formatter = logging.Formatter('%(asctime)s %(module)s %(levelname)s: %(message)s', datefmt='%Y-%m-%d %H:%M:%S') file_handler_access_log.setFormatter(formatter) werkzeug_logger.addHandler(file_handler_access_log) werkzeug_logger.setLevel(logging.DEBUG) In the access.log file, the request looks like this: 2020-10-07 09:43:51 _internal INFO: 127.0.0.1 - - [07/Oct/2020 09:43:51] "[37mGET /api/foo HTTP/1.1[0m" 200 - I want to get rid of the color codes like [37m in the log file. The werkzeug documentation states: The development server can optionally highlight the request logs in different colors based on the status code. Install Click to enable this feature. Click is a Flask dependency, so I cannot uninstall it. How can I disable the colored logging? | OK, so what you are hitting is if click: color = click.style if code[0] == "1": # 1xx - Informational msg = color(msg, bold=True) ... self.log("info", '"%s" %s %s', msg, code, size) Source: https://github.com/pallets/werkzeug/blob/ef545f0d0bf28cbad02066b4cb7471bea50a93ee/src/werkzeug/serving.py Not easy to prevent the behavior. The second option is to remove color codes from messages. I would try to use log Filter to update the message, something like import logging import click class RemoveColorFilter(logging.Filter): def filter(self, record): if record and record.msg and isinstance(record.msg, str): record.msg = click.unstyle(record.msg) return True remove_color_filter = RemoveColorFilter() file_handler_access_log.addFilter(remove_color_filter) The above suggestion was inspired by the following answer https://stackoverflow.com/a/60692906/4183498. I didn't test the proposed solution. | 9 | 4 |
64,261,360 | 2020-10-8 | https://stackoverflow.com/questions/64261360/how-are-pythons-c-and-c-libraries-cross-platform | Many of Python's libraries, e.g. Pandas and Numpy, are actually C or C++ with Python wrappers round them. I have no experience with compiled languages and don't understand how these libraries are cross platform (i.e. run on Mac, Windows, Linux), since my understanding is that C and C++ need to be compiled for a specific operating system. How does this work? Edits: How do you compile Python C/C++ extensions for different OS/versions of Python? does not answer my question and therefore this is not a duplicate. This question is about understanding how it works, that question presumes this understanding and is about implementation. | As has been pointed out in comments, Python package using C/C++ compiled code require compilation on the target architecture for them to be cross-platform. Under the hood, when you use pip install pandas for example, pip will look for the requested package on PyPI and, if available, it will install the wheel corresponding to your specific system. A wheel is a distribution mechanism that helps installation of python packages on specific python distribution and/or target architecture. Taking the example of pandas again, here is what an upgrade of pandas returned this morning: applepie:~ applepie$ pip install pandas --upgrade Collecting pandas Downloading pandas-1.1.3-cp38-cp38-macosx_10_9_x86_64.whl (10.1 MB) |████████████████████████████████| 10.1 MB 7.2 MB/s Requirement already satisfied, skipping upgrade: numpy>=1.15.4 in ./.pyenv/versions/3.8.5/lib/python3.8/site-packages (from pandas) (1.19.2) Requirement already satisfied, skipping upgrade: pytz>=2017.2 in ./.pyenv/versions/3.8.5/lib/python3.8/site-packages (from pandas) (2020.1) Requirement already satisfied, skipping upgrade: python-dateutil>=2.7.3 in ./.pyenv/versions/3.8.5/lib/python3.8/site-packages (from pandas) (2.8.1) Requirement already satisfied, skipping upgrade: six>=1.5 in ./.pyenv/versions/3.8.5/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0) Installing collected packages: pandas Attempting uninstall: pandas Found existing installation: pandas 1.1.2 Uninstalling pandas-1.1.2: Successfully uninstalled pandas-1.1.2 Successfully installed pandas-1.1.3 Note that the first step executed was to download a .whl file that matches my specific architecture (Mac OSX, x86_64). The file name has further information, for example it is pandas v 1.1.3 and is compatible with CPython 3.8. Running this command on a different machine would have yielded a different output. You can view the list of available files for pip to look for directly on PyPI. Again, looking for pandas on PyPI shows that the most up-to-date wheel for Mac OSX on CPython 3.8 is named pandas-1.1.3-cp38-cp38-macosx_10_9_x86_64.whl, which unsurprisingly is what pip install pandas --upgrade downloaded and installed. I am no expert in python distribution, in fact I was only very recently introduced to python wheels, have never distributed python code and had to do some reading prior to answering this question, however it is my understanding that Python packages with C/C++ components would first require compilation on each architecture and then to build a specific wheel for that combination of python version and computer architecture. If a compatible wheel is not found, installing a Python package with C/C++ may require compilation. | 8 | 8 |
64,252,434 | 2020-10-7 | https://stackoverflow.com/questions/64252434/architecture-not-supported-error-when-installing-nltk-with-pip-on-mac | New MacBookPro running Catalina. I have a virtualenv with no additional libraries installed yet. When I try to install nltk with pip3 install nltk, I get the following long error. The gist of it being "Architecture Not Supported". I tried installing with pip3 install -U but got a similar failure. Below is the all of the terminal text beginning with the first error message. ERROR: Command errored out with exit status 1: command: /Users/Jayco/projects/passgen/venv/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/setup.py'"'"'; __file__='"'"'/private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-wheel-vixqiu0k cwd: /private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/ Complete output (114 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.14.6-x86_64-3.8 creating build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/_regex_core.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/test_regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex running build_ext building 'regex._regex' extension creating build/temp.macosx-10.14.6-x86_64-3.8 creating build/temp.macosx-10.14.6-x86_64-3.8/regex_3 xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -I/Users/Jayco/projects/passgen/venv/include -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c regex_3/_regex.c -o build/temp.macosx-10.14.6-x86_64-3.8/regex_3/_regex.o In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:63: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:64: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file 
included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:33: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_dev_t; /* dev_t */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'? typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_pid_t; /* [???] 
process and group IDs */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_wctype_t; ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:75: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_va_list.h:31: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/types.h:37:2: error: architecture not supported #error architecture not supported ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. error: command 'xcrun' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for regex Running setup.py clean for regex Failed to build regex Installing collected packages: tqdm, regex, nltk Running setup.py install for regex ... 
error ERROR: Command errored out with exit status 1: command: /Users/Jayco/projects/passgen/venv/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/setup.py'"'"'; __file__='"'"'/private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-record-2kbtvdvx/install-record.txt --single-version-externally-managed --compile --install-headers /Users/Jayco/projects/passgen/venv/include/site/python3.8/regex cwd: /private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/ Complete output (114 lines): running install running build running build_py creating build creating build/lib.macosx-10.14.6-x86_64-3.8 creating build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/_regex_core.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/test_regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex running build_ext building 'regex._regex' extension creating build/temp.macosx-10.14.6-x86_64-3.8 creating build/temp.macosx-10.14.6-x86_64-3.8/regex_3 xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -I/Users/Jayco/projects/passgen/venv/include -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c regex_3/_regex.c -o build/temp.macosx-10.14.6-x86_64-3.8/regex_3/_regex.o In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:63: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:64: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: In file included from 
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:33: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_dev_t; /* dev_t */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'? typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_suseconds_t; /* [???] 
microseconds */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_wctype_t; ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:75: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_va_list.h:31: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/types.h:37:2: error: architecture not supported #error architecture not supported ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. error: command 'xcrun' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /Users/Jayco/projects/passgen/venv/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/setup.py'"'"'; __file__='"'"'/private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-install-3izld5bk/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/t3/ccw8mfyj24s1cdsyrwtpb32r0000gp/T/pip-record-2kbtvdvx/install-record.txt --single-version-externally-managed --compile --install-headers /Users/Jayco/projects/passgen/venv/include/site/python3.8/regex Check the logs for full command output. | I had the same problem with the default installed python. (pip3 install regex) When using python from brew it worked for me. Try this: brew install python3 /usr/local/bin/pip3 install nltk or with a virtualenv: brew install python3 /usr/local/bin/python3 -m venv venv . venv/bin/activate pip install ntlk | 18 | 13 |
64,258,570 | 2020-10-8 | https://stackoverflow.com/questions/64258570/seaborn-title-error-attributeerror-facetgrid-object-has-no-attribute-set-t | I created a lineplot graph to begin with using the following code: plot = sns.lineplot(data=tips, x="sex", y="tip", ci=50, hue="day", palette="Accent") plot.set_title("Value of Tips Given to Waiters, by Days of the Week and Sex", fontsize=24, pad=30, fontdict={"weight": "bold"}) plot.legend("") I have realised that its actually a catplot chart that I need so I amended the code to the following: plot = sns.catplot (data=tips, x="day", y="tip", kind='bar', ci=50, hue="sex", palette="Accent") plot.set_title("Value of Tips Given to Waiters, by Days of the Week and Sex", fontsize=24, pad=30, fontdict={"weight": "bold"}) plot.legend("") However I am getting the following error message with the title: 'AttributeError: 'FacetGrid' object has no attribute 'set_title''. Why is my title not working for the catplot chart? | When you call catplot, it returns a FacetGrid object, so to change the the title and remove legend, you have to use the legend= option inside the function, and also use plot.fig.suptitle() : import seaborn as sns tips = sns.load_dataset("tips") plot = sns.catplot (data=tips, x="day", y="tip", kind='bar', ci=50, hue="sex", palette="Accent", legend=False) plot.fig.suptitle("Value of Tips Given to Waiters, by Days of the Week and Sex", fontsize=24, fontdict={"weight": "bold"}) | 12 | 11 |
64,255,834 | 2020-10-8 | https://stackoverflow.com/questions/64255834/no-definition-found-for-function-vscode-python | I am using VSCode for Python along with the Microsoft Python extension enabled in VSCode. For Python v3.9.0 I am getting "No definition found" when I try to go to a function definition. However, I do not get the error if I use my Conda virtual environment for Python 3.7.0. What might be the problem? | When I used the code you provided and disabled the Python extension, I encountered the same problem as you. Since "Go to Definition" is supported by the corresponding language service extension, it is recommended that you check that the current Python extension is available and confirm that the selected Python interpreter is also available. In addition, please try to reload VSCode. | 35 | 18 |
64,248,955 | 2020-10-7 | https://stackoverflow.com/questions/64248955/how-to-register-typing-callable-with-python-singledispatch | Background Suppose I am to implement a simple decorator @notifyme that prints a message when the decorated function is invoked. I would like the decorator to accept one argument to print a customized message; the argument (along with the parentheses surrounding the argument) may be omitted, in which case the default message is printed: @notifyme('Foo is invoked!') def foo(): pass @notifyme # instead of @notifyme() def bar(): pass To allow the parentheses to be omitted, I have to provide two implementations of @notifyme: The first implementation allows the user to customize the message, so it accepts a string as argument and returns a decorator: def notifyme_customized(message: str) -> Callable[[Callable], Callable]: def decorator(func: Callable) -> Callable: def decorated_func(*args, **kwargs): print(str) return func(*args, **kwargs) return decorated_func return decorator The second implementation is a decorator itself and uses the first implementation to print a default message: def notifyme_default(func: Callable) -> Callable: return notifyme_customized('The function is invoked.')(func) To make the two implementations above use the same name notifyme, I used functools.singledispatch to dynamically dispatch the call to notifyme to one of the two implementations: # This is a complete minimal reproducible example from functools import singledispatch from typing import Callable @singledispatch def notifyme(arg): return NotImplemented @notifyme.register def notifyme_customized(message: str) -> Callable[[Callable], Callable]: def decorator(func: Callable) -> Callable: def decorated_func(*args, **kwargs): print(str) return func(*args, **kwargs) return decorated_func return decorator @notifyme.register def notifyme_default(func: Callable) -> Callable: return notifyme_customized('The function is invoked.')(func) Problem However, as the code is interpreted by the Python interpreter, it complains that typing.Callable is an invalid type: Traceback (most recent call last): File "demo.py", line 20, in <module> def notifyme_default(func: Callable) -> Callable: File "C:\Program Files\Python38\lib\functools.py", line 860, in register raise TypeError( TypeError: Invalid annotation for 'func'. typing.Callable is not a class. I have found this issue on Python bug tracker, according to which it seems to be expected behavior since Python 3.7. Does a solution or workaround exist in Python 3.8 I use currently (or Python 3.9 that has been released recently)? Thanks in advance. | I was unable to use typing.Callable with functools.singledispatch, but I did find a workaround by using a function class reference instead: from functools import singledispatch from typing import Callable function = type(lambda: ()) @singledispatch def notifyme(arg): return NotImplemented @notifyme.register def notifyme_customized(message: str) -> Callable[[Callable], Callable]: def decorator(func: Callable) -> Callable: def decorated_func(*args, **kwargs): print(str) return func(*args, **kwargs) return decorated_func return decorator @notifyme.register def notifyme_default(func: function) -> Callable: return notifyme_customized('The function is invoked.')(func) | 9 | 3 |
64,247,450 | 2020-10-7 | https://stackoverflow.com/questions/64247450/runtime-marshalerror-in-python | I am getting this error. I am executing AWS Lambda function code using Python 3.7 to get the QuickSight dashboard version. Thanks in advance! errorMessage: "Unable to marshal response: Object of type datetime is not JSON serializable", errorType: "Runtime.MarshalError" Code- import boto3 import time import sys client = boto3.client('quicksight') def lambda_handler(event, context): response = client.list_dashboard_versions(AwsAccountId='11111', DashboardId='2222',MaxResults=10) return response | A quick fix could be: import boto3 import time import sys import json client = boto3.client('quicksight') def lambda_handler(event, context): response = client.list_dashboard_versions(AwsAccountId='11111', DashboardId='2222',MaxResults=10) return json.dumps(response, default=str) | 10 | 15 |
64,250,017 | 2020-10-7 | https://stackoverflow.com/questions/64250017/how-to-aggregate-combining-dataframes-with-pandas-groupby | I have a dataframe df and a column df['table'] such that each item in df['table'] is another dataframe with the same headers/number of columns. I was wondering if there's a way to do a groupby like this: Original dataframe: name table Bob Pandas df1 Joe Pandas df2 Bob Pandas df3 Bob Pandas df4 Emily Pandas df5 After groupby: name table Bob Pandas df containing the appended df1, df3, and df4 Joe Pandas df2 Emily Pandas df5 I found this code snippet to do a groupby and lambda for strings in a dataframe, but haven't been able to figure out how to append entire dataframes in a groupby. df['table'] = df.groupby(['name'])['table'].transform(lambda x : ' '.join(x)) I've also tried df['table'] = df.groupby(['name'])['HTML'].apply(list), but that gives me a df['table'] of all NaN. Thanks for your help!! | Given 3 dataframes import pandas as pd dfa = pd.DataFrame({'a': [1, 2, 3]}) dfb = pd.DataFrame({'a': ['a', 'b', 'c']}) dfc = pd.DataFrame({'a': ['pie', 'steak', 'milk']}) Given another dataframe, with dataframes in the columns df = pd.DataFrame({'name': ['Bob', 'Joe', 'Bob', 'Bob', 'Emily'], 'table': [dfa, dfa, dfb, dfc, dfb]}) # print the type for the first value in the table column, to confirm it's a dataframe print(type(df.loc[0, 'table'])) [out]: <class 'pandas.core.frame.DataFrame'> Each group of dataframes, can be combined into a single dataframe, by using .groupby and aggregating a list for each group, and combining the dataframes in the list, with pd.concat # if there is only one column, or if there are multiple columns of dataframes to aggregate dfg = df.groupby('name').agg(lambda x: pd.concat(list(x)).reset_index(drop=True)) # display(dfg.loc['Bob', 'table']) a 0 1 1 2 2 3 3 a 4 b 5 c 6 pie 7 steak 8 milk # to specify a single column, or specify multiple columns, from many columns dfg = df.groupby('name')[['table']].agg(lambda x: pd.concat(list(x)).reset_index(drop=True)) Not a duplicate Originally, I had marked this question as a duplicate of How to group dataframe rows into list in pandas groupby, thinking the dataframes could be aggregated into a list, and then combined with pd.concat. df.groupby('name')['table'].apply(list) df.groupby('name').agg(list) df.groupby('name')['table'].agg(list) df.groupby('name').agg({'table': list}) df.groupby('name').agg(lambda x: list(x)) However, these all result in a StopIteration error, when there are dataframes to aggregate. | 7 | 1 |
64,246,528 | 2020-10-7 | https://stackoverflow.com/questions/64246528/add-missing-rows-based-on-column | I have given the following df df = pd.DataFrame(data = {'day': [1, 1, 1, 2, 2, 3], 'pos': 2*[1, 14, 18], 'value': 2*[1, 2, 3]} df day pos value 0 1 1 1 1 1 14 2 2 1 18 3 3 2 1 1 4 2 14 2 5 3 18 3 and i want to fill in rows such that every day has every possible value of column 'pos' desired result: day pos value 0 1 1 1.0 1 1 14 2.0 2 1 18 3.0 3 2 1 1.0 4 2 14 2.0 5 2 18 NaN 6 3 1 NaN 7 3 14 NaN 8 3 18 3.0 Proposition: df.set_index('pos').reindex(pd.Index(3*[1,14,18])).reset_index() yields: ValueError: cannot reindex from a duplicate axis | Let's try pivot then stack: df.pivot('day','pos','value').stack(dropna=False).reset_index(name='value') Output: day pos value 0 1 1 1.0 1 1 14 2.0 2 1 18 3.0 3 2 1 1.0 4 2 14 2.0 5 2 18 NaN 6 3 1 NaN 7 3 14 NaN 8 3 18 3.0 Option 2: merge with MultiIndex: df.merge(pd.DataFrame(index=pd.MultiIndex.from_product([df['day'].unique(), df['pos'].unique()])), left_on=['day','pos'], right_index=True, how='outer') Output: day pos value 0 1 1 1.0 1 1 14 2.0 2 1 18 3.0 3 2 1 1.0 4 2 14 2.0 5 3 18 3.0 5 2 18 NaN 5 3 1 NaN 5 3 14 NaN | 6 | 4 |
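The reindex idea from the question can also be made to work by indexing on both key columns first, so there is no duplicate axis to trip over. A sketch using the same sample data:

import pandas as pd

df = pd.DataFrame(data={'day': [1, 1, 1, 2, 2, 3],
                        'pos': 2 * [1, 14, 18],
                        'value': 2 * [1, 2, 3]})

# Build the full day x pos grid and reindex onto it; missing pairs become NaN.
full_idx = pd.MultiIndex.from_product(
    [df['day'].unique(), df['pos'].unique()], names=['day', 'pos'])
out = df.set_index(['day', 'pos']).reindex(full_idx).reset_index()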
64,246,026 | 2020-10-7 | https://stackoverflow.com/questions/64246026/switching-between-python-virtual-environments | I have some noob level virtual environment questions. I've been using virtual environments a little but still have a few questions. I have created and activated an env which is my main working environment as follows: virtualenv env source /path/to/environment/env/bin/activate Having activated this, I can now see I am in the environment as I have (env) visible on command line. My first question is, do I need to run the activate command each time I open a terminal session? And therefore each time I turn my laptop on, etc.? Further, I want to create another environment that runs on an earlier version of python for testing purposes. I was intending to do it as follows: virtualenv --python=python2.7 env-py2 source /path/to/new/environment/env-py2/bin/activate Can these virtualenvs be switched easily? So can I activate env-py2 and then easily jump back to activate env again? Or is there an inbetween step required? Apologies for the very basic questions but I was struggling to find high level information. | Yes you need to run activate command i.e. source each time you open a terminal session. Switching between two virtual environment is easy. You can run deactivate command and source the other virtual environment. | 9 | 10 |
64,246,136 | 2020-10-7 | https://stackoverflow.com/questions/64246136/how-to-access-the-top-element-in-heapq-without-deleting-popping-it-python | How to access the top element in heapq without deleting (popping) it python ? I need only to check the element at the top of my heapq without popping it. How can I do that. | From docs python, under heapq.heappop definition, it says: To access the smallest item without popping it, use heap[0]. It says smallest, because it is a min heap. So the item at the top will be the smallest one. Illustration: import heapq pq = [] heapq.heappush(pq,5) heapq.heappush(pq,3) heapq.heappush(pq,1) heapq.heappush(pq,2) heapq.heappush(pq,4) print("element at top = ",pq[0]) print("check the heapq : ", pq) Result: element at top = 1 check the heapq : [1, 2, 3, 5, 4] | 10 | 23 |
64,138,325 | 2020-9-30 | https://stackoverflow.com/questions/64138325/python-asyncio-queue-join-finishes-only-when-exception-are-not-raised-why | I've been trying to write an async version of the map function in Python for doing IO. To do that, I'm using a queue with a producer/consumer. At first it seems to be working well, but only without exceptions. In particular, if I use queue.join(), it works well when no exceptions, but blocks in case of exception. If I use gather(*tasks), it works well when there is an exception, but blocks if not. So it only finished sometimes, and I just don't understand why. Here is the code I've implemented: import asyncio from asyncio import Queue from typing import Iterable, Callable, TypeVar Input = TypeVar("Input") Output = TypeVar("Output") STOP = object() def parallel_map(func: Callable[[Input], Output], iterable: Iterable[Input]) -> Iterable[Output]: """ Parallel version of `map`, backed by asyncio. Only suitable to do IO in parallel (not for CPU intensive tasks, otherwise it will block). """ number_of_parallel_calls = 9 async def worker(input_queue: Queue, output_queue: Queue) -> None: while True: data = await input_queue.get() try: output = func(data) # Simulate an exception: # raise RuntimeError("") output_queue.put_nowait(output) finally: input_queue.task_done() async def group_results(output_queue: Queue) -> Iterable[Output]: output = [] while True: item = await output_queue.get() if item is not STOP: output.append(item) output_queue.task_done() if item is STOP: break return output async def procedure() -> Iterable[Output]: # First, produce a queue of inputs input_queue: Queue = asyncio.Queue() for i in iterable: input_queue.put_nowait(i) # Then, assign a pool of tasks to consume it (and also produce outputs in a new queue) output_queue: Queue = asyncio.Queue() tasks = [] for _ in range(number_of_parallel_calls): task = asyncio.create_task(worker(input_queue, output_queue)) tasks.append(task) # Wait for the input queue to be fully consumed (only works if no exception occurs in the tasks), blocks otherwise. await input_queue.join() # Gather tasks, only works when an exception is raised in a task, blocks otherwise # asyncio.gather(*tasks) for task in tasks: task.cancel() # Indicate that the output queue is complete, to stop the worker output_queue.put_nowait(STOP) # Consume the output_queue, and return its data as a list group_results_task = asyncio.create_task(group_results(output_queue)) await output_queue.join() output = await group_results_task return output return asyncio.run(procedure()) if __name__ == "__main__()": def my_function(x): return x * x data = [1, 2, 3, 4] print(parallel_map(my_function, data)) I think I'm misunderstanding a basic but important with Python asyncio, but not sure what. | Problem is, your are NOT catching exceptions. From Python doc The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks. So Queue is essentially counting number of calls on put(), and reducing counter by 1 on every task_done() call. If worker stops before processing all queue, you are blocked in Queue.join(). 
At your worker code: async def worker(input_queue: Queue, output_queue: Queue) -> None: while True: data = await input_queue.get() try: output = func(data) output_queue.put_nowait(output) finally: input_queue.task_done() Your worker stops when it encounters an exception, because try-finally only guarantees cleanup, not actual error handling. Therefore what is happening in your case is: Each Queue.put() call increases the internal counter; let's say we called it n times. Workers start calling Queue.task_done() to decrease the internal counter. When a worker encounters an error, it stops after executing the finally block. Now if all workers stop, the Queue.task_done() call count n' satisfies n' < n, so the internal counter is still positive. Queue.join() hangs indefinitely until the internal counter reaches 0, which never happens because all the workers have died. This is a design flaw. Additional helpful changes There are multiple design factors to change for the sake of easy implementation, robustness, and performance. Do note that this is from my experience with Python, so don't take it as concrete fact. For design factors, I've made the following changes:
output_queue: asyncio.Queue = asyncio.Queue() result_task = asyncio.create_task(group_results(input_size, output_queue)) tasks = [ asyncio.create_task(worker(input_queue, output_queue)) for _ in range(concurrent_limit) ] # Wait for tasks complete await asyncio.gather(*tasks) # Wait for result fetching results = await result_task # Re-raise errors at once if raise_error if raise_error and (errors := [err for err in results if isinstance(err, Exception)]): # noinspection PyUnboundLocalVariable raise Exception(errors) # It never runs before assignment, safe to ignore. return results return asyncio.run(procedure()) Test code: if __name__ == "__main__": import random import time data = [1, 2, 3] err_data = [1, 'yo', 3] def test_normal_function(data_, raise_=False): def my_function(x): t = random.uniform(1, 2) print(f"Sleep {t:.3} start") time.sleep(t) print(f"Awake after {t:.3}") return x * x print(f"Normal function: {parallel_map(my_function, data_, raise_error=raise_)}\n") def test_coroutine(data_, raise_=False): async def my_coro(x): t = random.uniform(1, 2) print(f"Coroutine sleep {t:.3} start") await asyncio.sleep(t) print(f"Coroutine awake after {t:.3}") return x * x print(f"Coroutine {parallel_map(my_coro, data_, raise_error=raise_)}\n") # Test starts print(f"Test for data {data}:") test_normal_function(data) test_coroutine(data) print(f"Test for data {err_data} without raise:") test_normal_function(err_data) test_coroutine(err_data) print(f"Test for data {err_data} with raise:") test_normal_function(err_data, True) test_coroutine(err_data, True) # this line will not run, but works same. Above will test for following conditions for both function and coroutine: Normal data Data with defect, don't raise error Data with defect, raise all errors it encountered Even with Exceptions this will not cancel task, rather it processes all queue. Output: Test for data [1, 2, 3]: Sleep 1.71 start Awake after 1.71 Sleep 1.74 start Awake after 1.74 Sleep 1.83 start Awake after 1.83 Normal function: [1, 4, 9] Coroutine sleep 1.32 start Coroutine sleep 1.01 start Coroutine awake after 1.01 Coroutine sleep 1.98 start Coroutine awake after 1.32 Coroutine awake after 1.98 Coroutine [1, 4, 9] Test for data [1, 'yo', 3] without raise: Sleep 1.57 start Awake after 1.57 Sleep 1.98 start Awake after 1.98 Sleep 1.39 start Awake after 1.39 Normal function: [1, TypeError("can't multiply sequence by non-int of type 'str'"), 9] Coroutine sleep 1.22 start Coroutine sleep 2.0 start Coroutine awake after 1.22 Coroutine sleep 1.96 start Coroutine awake after 2.0 Coroutine awake after 1.96 Coroutine [1, TypeError("can't multiply sequence by non-int of type 'str'"), 9] Test for data [1, 'yo', 3] with raise: Sleep 1.99 start Awake after 1.99 Sleep 1.74 start Awake after 1.74 Sleep 1.52 start Awake after 1.52 Traceback (most recent call last): ... line 52, in procedure raise Exception(errors) Exception: [TypeError("can't multiply sequence by non-int of type 'str'")] Do note that I've set concurrent_limit 2 to demonstrate coroutine waiting for available worker. That's why one coroutine task is not running immediately out of 3. From output you can also see some tasks are finished before others, but results are in order. P.S. 
If you're importing Queue separately because the type hints push the signature past the PEP-8 line-length limit, you can add the type hints as follows: async def worker(input_queue, output_queue) -> None: input_queue: asyncio.Queue output_queue: asyncio.Queue or async def worker( input_queue: asyncio.Queue, output_queue: asyncio.Queue ) -> None: Although it's not as clean as the original way, it will still help others read your code. | 8 | 9 |
64,180,511 | 2020-10-3 | https://stackoverflow.com/questions/64180511/pip-change-directory-of-pip-cache-on-linux | I heard changing XDG_CACHE_DIR or XDG_DATA_HOME fixes that but I did export XDG_CACHE_DIR=<new path> export XDG_DATA_HOME=<new path> I've also tried pip cache dir --cache-dir <new path> and pip cache --cache-dir <new path> and --cache-dir <new path> and python --cache-dir <new path> from https://pip.pypa.io/en/stable/reference/pip/#cmdoption-cache-dir and when I type pip cache dir It's still in the old location. How do I change the directory of pip cache? | TL;TR;: do not change XDG_CACHE_HOME globally unless you are sure you really want to do that. Changing XDG_CACHE_HOME globally, like some people suggested would not only affect pip but also other apps as well. You simply do not want to mess that deep, because it's simply not necessary in most of the cases. So what are your alternatives then? You could be using pip's --cache-dir <dir> command line option instead: pip --cache-dir=<dir> ... Or you could override value of XDG_CACHE_HOME variable for pip invocation only: XDG_CACHE_HOME=<path> pip ... which also can be made more permanent by i.e. using shell alias feature: alias pip="XDG_CACHE_HOME=<path> pip" BUT, but, but... there is not need to touch XDG_CACHE_HOME at all, as pip can have own configuration file, in which you can override all of the defaults to match your needs, including alternative location of cache directory. Moreover, all command line switches have accompanying environment variables that pip checks at runtime, which looks like the cleanest approach for your tweaks. In your particular case, --cache-dir can be provided via PIP_CACHE_DIR env variable. So you can either set it globally: export PIP_CACHE_DIR=<path> or per invocation: PIP_CACHE_DIR=<path> pip ... or, you create said pip's configuration file and set it there. See docs for more information about pip config file and variables. | 28 | 48 |
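If the installed pip is new enough to ship the pip config subcommand, the configuration-file route described above can be done without editing the file by hand, e.g. pip config set global.cache-dir <path>, which writes the same option into pip's user-level configuration.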
64,130,332 | 2020-9-30 | https://stackoverflow.com/questions/64130332/how-to-pass-seaborn-positional-and-keyword-arguments | I want to plot a seaborn regplot. my code: x=data['Healthy life expectancy'] y=data['max_dead'] sns.regplot(x,y) plt.show() However this gives me future warning error. How to fix this warning? FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be 'data', and passing other arguments without an explicit keyword will result in an error or misinterpretation. | Seaborn >= 0.12 With seaborn 0.12, the FutureWarning from seaborn 0.11 is now an TypeError. Only data may be specified as the first positional argument for seaborn plots. All other arguments must use keywords (e.g. x= and y=). This applies to all seaborn plotting functions. sns.*plot(data=penguins, x="bill_length_mm", y="bill_depth_mm") or sns.*plot(penguins, x="bill_length_mm", y="bill_depth_mm") sns.*plot(data=penguins.bill_length_mm) or sns.*plot(penguins.bill_length_mm) See Overview of seaborn plotting functions Some potential errors for incorrect use of positional and keyword arguments with seaborn: TypeError: *plot() takes from 0 to 1 positional arguments but 3 were given occurs when no keywords are passed. sns.*plot(penguins, "bill_length_mm", "bill_depth_mm") TypeError: *plot() got multiple values for argument 'data' occurs when data= is used after passing x and y as positional arguments. sns.*plot("bill_length_mm", "bill_depth_mm", data=penguins) TypeError: *plot() takes from 0 to 1 positional arguments but 2 positional arguments (and 1 keyword-only argument) were given occurs when positional arguments are passed for x and y, followed by a keyword argument other than data sns.*plot(penguins.bill_length_mm, penguins.bill_depth_mm, kind="reg") See TypeError: method() takes 1 positional argument but 2 were given for the general python explanation. Positional argument vs keyword argument Seaborn 0.11 Technically, it's a warning, not an error, and can be ignored for now, as shown in the bottom section of this answer. I recommend doing as the warning says, specify the x and y parameters for seaborn.regplot, or any of the other seaborn plot functions with this warning. sns.regplot(x=x, y=y), where x and y are parameters for regplot, to which you are passing x and y variables. Beginning in version 0.12, passing any positional arguments, except data, will result in an error or misinterpretation. For those concerned with backward compatibility, write a script to fix existing code, or don't update to 0.12 (once available). x and y are used as the data variable names because that is what is used in the OP. Data can be assigned to any variable name (e.g. a and b). This also applies to FutureWarning: Pass the following variable as a keyword arg: x, which can be generated by plots only requiring x or y, such as: sns.countplot(penguins['sex']), but should be sns.countplot(x=penguins['sex']) or sns.countplot(y=penguins['sex']) import seaborn as sns import pandas as pd penguins = sns.load_dataset('penguins') x = penguins.culmen_depth_mm # or bill_depth_mm y = penguins.culmen_length_mm # or bill_length_mm # plot without specifying the x, y parameters sns.regplot(x, y) # plot with specifying the x, y parameters sns.regplot(x=x, y=y) # or use sns.regplot(data=penguins, x='bill_depth_mm', y='bill_length_mm') Ignore the warnings I do not advise using this option. Once seaborn v0.12 is available, this option will not be viable. 
From version 0.12, the only valid positional argument will be data, and passing other arguments without an explicit keyword will result in an error or misinterpretation. import warnings warnings.simplefilter(action="ignore", category=FutureWarning) # plot without specifying the x, y parameters sns.regplot(x, y) | 15 | 36 |
64,199,103 | 2020-10-4 | https://stackoverflow.com/questions/64199103/how-to-display-two-figures-side-by-side-in-a-jupyter-cell | import pandas as pd import seaborn as sns # load data df = sns.load_dataset('penguins', cache=False) sns.scatterplot(data=df, x='bill_length_mm', y='bill_depth_mm', hue='sex') plt.show() sns.scatterplot(data=df, x='flipper_length_mm', y='body_mass_g', hue='sex') plt.show() When I draw two plots with seaborn, in one cell, in jupyter, I get this view: I want to draw the plots, side by side, like this: plot1 plot2 How I should do this? Updated: Not two plots on one figure, but two plots on two separate figures. This is not the solution being sought, because it's two plots on one figure. fig, ax = plt.subplots(1,2) sns.plotType(someData, ax=ax[0]) # plot1 sns.plotType(someData, ax=ax[1]) # plot2 fig.show() The solutions from the proposed duplicate ipython notebook arrange plots horizontally, do not work The option with %html causes the figures to plot on top of each other Additionally, other options were for ipython, not Jupyter, or recommended creating subplots.. | Markdown seems to be the easiest option because it does not require loading any additional packages, nor does it require multiple lines of code. This question is about displaying two figures, side by side. Separate figures, side by side, executed from a code cell, does not work. You will need to create separate figures, and then use plt.savefig('file.jpg') to save each figure to a file. Tested with jupyterlab v4.1.4, ipython v8.2.0, ipywidgets v8.1.2 import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # load data df = sns.load_dataset('penguins', cache=False) # create and save figure sns.scatterplot(data=df, x='bill_length_mm', y='bill_depth_mm', hue='sex') plt.savefig('bill.jpg') plt.close() # prevents figure from being displayed when code cell is executed # create and save new figure sns.scatterplot(data=df, x='flipper_length_mm', y='body_mass_g', hue='sex') plt.savefig('flipper.jpg') plt.close() # prevents figure from being displayed when code cell is executed Markdown Once the figures are saved to a file, they can be displayed side by side, by loading them in a markdown cell. If the images are to large, the second figure will go to a new line. Like this answer without being a table. **Bill**:  **Flipper**:  Then execute the cell Other Options How do I make 2 images appear side by side in Jupyter notebook (iPython)? How to include two pictures side by side in Markdown for IPython Notebook (Jupyter)? HTML and IPython.display from IPython.display import display, HTML display(HTML(f"<table><tr><td><img src='bill.jpg'></td><td><img src='flipper.jpg'></td></tr></table>")) ipywidgets and IPython.display import ipywidgets as widgets import IPython.display as display # read the image files img1 = open('bill.jpg', 'rb').read() img2 = open('flipper.jpg', 'rb').read() # create the image widgets widget1 = widgets.Image(value=img1, format='jpeg') widget2 = widgets.Image(value=img2, format='jpeg') # create a box widget for the image widgets box = widgets.Box([widget1, widget2]) # display box display(box) imshow Read the images directly with matplotlib and display them with imshow, but the plot axis must also be set to off. 
import matplotlib.image as mpimg # read images img_A = mpimg.imread('bill.jpg') img_B = mpimg.imread('flipper.jpg') # create the subplot axis fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 8)) # plot the images ax1.imshow(img_A) ax2.imshow(img_B) # turn off the axes ax1.axis('off') _ = ax2.axis('off') (example output figures: axes on vs. axes off) | 13 | 10 |
64,193,171 | 2020-10-4 | https://stackoverflow.com/questions/64193171/how-to-generate-presigned-s3-urls-using-django-storages | I have a Django form which saves a file to s3 through the django-storages library and works fine. How can I generate and return a pre-signed url so the user can access the file temporarily after it is uploaded ? Is this abstracted by django-storages or do I have to use the boto3 api? I have spent hours going through the Django-storages documentation however it is not very clear how to do this .. form.py class DocumentForm(forms.Form): docfile = forms.FileField( label='Select a file', help_text='max. 42 megabytes' ) name = models.CharField(max_length=20) uploaded_at = models.DateTimeField(auto_now_add=True) views.py def upload_file(request): if request.method == 'POST': form = DocumentForm(request.POST) if form.is_valid(): form.save() url = ... # generate pre-signed url of uploaded file here return render(request, 'upload_response.html', url) | Turns out you do not need to use boto3 to generate a presigned url. Django-storages abstracts the entire process. You can simply access the url attribute on the FileField, like in this example: document_form = DocumentForm.objects.get(pk=1) url = document_form.docfile.url --- Edit ---- For reference, here is the S3 storage class method that generates the pre-signed url for you https://github.com/jschneier/django-storages/blob/770332b598712da27ecdba75c9e202ad6a1a8722/storages/backends/s3boto3.py#L554 | 17 | 17 |
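Whether that url attribute comes back pre-signed is governed by the storage settings, so it may help to spell them out. A sketch of the relevant django-storages options with example values:

# settings.py (example values)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_QUERYSTRING_AUTH = True      # sign generated URLs (this is the default)
AWS_QUERYSTRING_EXPIRE = 3600    # seconds the pre-signed URL remains valid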
64,226,700 | 2020-10-6 | https://stackoverflow.com/questions/64226700/download-entire-content-of-a-subfolder-in-a-s3-bucket | I have a bucket in s3 called "sample-data". Inside the Bucket I have folders labelled "A" to "Z". Inside each alphabetical folder there are more files and folders. What is the fastest way to download the alphabetical folder and all it's content? For example --> sample-data/a/foo.txt,more_files/foo1.txt In the above example the bucket sample-data contains an folder called a which contains foo.txt and a folder called more_files which contains foo1.txt I know how to download a single file. For instance if i wanted foo.txt I would do the following. s3 = boto3.client('s3') s3.download_file("sample-data", "a/foo.txt", "foo.txt") However i am wondering if i can download the folder called a and all it's contents entirely? Any help would be appreciated. | I think your best bet would be the awscli aws s3 cp --recursive s3://mybucket/your_folder_named_a path/to/your/destination From the docs: --recursive (boolean) Command is performed on all files or objects under the specified directory or prefix. EDIT: However, to do this with boto3 try this: import os import errno import boto3 client = boto3.client('s3') def assert_dir_exists(path): try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise def download_dir(bucket, path, target): # Handle missing / at end of prefix if not path.endswith('/'): path += '/' paginator = client.get_paginator('list_objects_v2') for result in paginator.paginate(Bucket=bucket, Prefix=path): # Download each file individually for key in result['Contents']: # Calculate relative path rel_path = key['Key'][len(path):] # Skip paths ending in / if not key['Key'].endswith('/'): local_file_path = os.path.join(target, rel_path) # Make sure directories exist local_file_dir = os.path.dirname(local_file_path) assert_dir_exists(local_file_dir) client.download_file(bucket, key['Key'], local_file_path) download_dir('your_bucket', 'your_folder', 'destination') | 8 | 19 |
64,156,202 | 2020-10-1 | https://stackoverflow.com/questions/64156202/add-dense-layer-on-top-of-huggingface-bert-model | I want to add a dense layer on top of the bare BERT Model transformer outputting raw hidden-states, and then fine tune the resulting model. Specifically, I am using this base model. This is what the model should do: Encode the sentence (a vector with 768 elements for each token of the sentence) Keep only the first vector (related to the first token) Add a dense layer on top of this vector, to get the desired transformation So far, I have successfully encoded the sentences: from sklearn.neural_network import MLPRegressor import torch from transformers import AutoModel, AutoTokenizer # List of strings sentences = [...] # List of numbers labels = [...] tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased") # 2D array, one line per sentence containing the embedding of the first token encoded_sentences = torch.stack([model(**tokenizer(s, return_tensors='pt'))[0][0][0] for s in sentences]).detach().numpy() regr = MLPRegressor() regr.fit(encoded_sentences, labels) In this way I can train a neural network by feeding it with the encoded sentences. However, this approach clearly does not fine tune the base BERT model. Can anybody help me? How can I build a model (possibly in pytorch or using the Huggingface library) that can be entirely fine tuned? | There are two ways to do it: Since you are looking to fine-tune the model for a downstream task similar to classification, you can directly use: BertForSequenceClassification class. Performs fine-tuning of logistic regression layer on the output dimension of 768. Alternatively, you can define a custom module, that created a bert model based on the pre-trained weights and adds layers on top of it. from transformers import BertModel class CustomBERTModel(nn.Module): def __init__(self): super(CustomBERTModel, self).__init__() self.bert = BertModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased") ### New layers: self.linear1 = nn.Linear(768, 256) self.linear2 = nn.Linear(256, 3) ## 3 is the number of classes in this example def forward(self, ids, mask): sequence_output, pooled_output = self.bert( ids, attention_mask=mask) # sequence_output has the following shape: (batch_size, sequence_length, 768) linear1_output = self.linear1(sequence_output[:,0,:].view(-1,768)) ## extract the 1st token's embeddings linear2_output = self.linear2(linear1_output) return linear2_output tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased") model = CustomBERTModel() # You can pass the parameters if required to have more flexible model model.to(torch.device("cpu")) ## can be gpu criterion = nn.CrossEntropyLoss() ## If required define your own criterion optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters())) for epoch in epochs: for batch in data_loader: ## If you have a DataLoader() object to get the data. 
data = batch[0] targets = batch[1] ## assuming that data loader returns a tuple of data and its targets optimizer.zero_grad() encoding = tokenizer.batch_encode_plus(data, return_tensors='pt', padding=True, truncation=True, max_length=50, add_special_tokens=True) input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] ## unpack the encodings before the forward pass outputs = model(input_ids, mask=attention_mask) outputs = F.log_softmax(outputs, dim=1) loss = criterion(outputs, targets) loss.backward() optimizer.step() | 25 | 38 |
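One assumption to flag about the answer above: the tuple unpacking of self.bert(...) matches the transformers 3.x API, while 4.x releases return a ModelOutput object by default. If a newer version is installed, a forward pass along these lines keeps the unpacking valid:

def forward(self, ids, mask):
    # return_dict=False restores the tuple return shape used above
    sequence_output, pooled_output = self.bert(
        ids, attention_mask=mask, return_dict=False)
    linear1_output = self.linear1(sequence_output[:, 0, :].view(-1, 768))
    return self.linear2(linear1_output)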
64,229,894 | 2020-10-6 | https://stackoverflow.com/questions/64229894/how-to-fix-numpy-ndarray-object-has-no-attribute-get-figure-when-plotting-su | I have written the following code to plot 6 pie charts in different subplots, but I get an error. This code works correctly if I use it to plot only 2 charts, but produces an an error for anything more than that. I have 6 categorical variables in my dataset, the names of which are stored in the list cat_cols. The charts are to be plotted from the training data train. CODE fig, axes = plt.subplots(2, 3, figsize=(24, 10)) for i, c in enumerate(cat_cols): train[c].value_counts()[::-1].plot(kind = 'pie', ax=axes[i], title=c, autopct='%.0f', fontsize=18) axes[i].set_ylabel('') plt.tight_layout() ERROR AttributeError: 'numpy.ndarray' object has no attribute 'get_figure' How do we rectify this? | The issue is plt.subplots(2, 3, figsize=(24, 10)) creates two groups of 3 subplots, not one group of six subplots. array([[<AxesSubplot:xlabel='radians'>, <AxesSubplot:xlabel='radians'>, <AxesSubplot:xlabel='radians'>], [<AxesSubplot:xlabel='radians'>, <AxesSubplot:xlabel='radians'>, <AxesSubplot:xlabel='radians'>]], dtype=object) Unpack all of the subplot arrays from axes, using axes.ravel(). numpy.ravel, which returns a flattened array. A list comprehension will also work, axe = [sub for x in axes for sub in x] In practical terms, axes.ravel(), axes.flat, and axes.flatten(), can be used similarly. See What is the difference between flatten and ravel functions in numpy? & numpy difference between flat and ravel(). Assign each plot to one of the subplots in axe. How to resolve AttributeError: 'numpy.ndarray' object has no attribute 'get_figure' when plotting subplots is a similar issue. import pandas as pd import numpy as np # sinusoidal sample data sample_length = range(1, 6+1) rads = np.arange(0, 2*np.pi, 0.01) data = np.array([np.sin(t*rads) for t in sample_length]) df = pd.DataFrame(data.T, index=pd.Series(rads.tolist(), name='radians'), columns=[f'freq: {i}x' for i in sample_length]) # crate the figure and axes fig, axes = plt.subplots(2, 3, figsize=(24, 10)) # unpack all the axes subplots axe = axes.ravel() # assign the plot to each subplot in axe for i, c in enumerate(df.columns): df[c].plot(ax=axe[i]) | 9 | 20 |
64,234,182 | 2020-10-6 | https://stackoverflow.com/questions/64234182/error-when-creating-a-pipenv-virtual-environment-with-python-3-7 | My OS is ubuntu 20.04 and my default python is 3.8.2. I'm trying to create a virtual environment with pipenv and python 3.7. The following error occurs when I run pipenv install --python 3.7: Creating a virtualenv for this project… Using /usr/bin/python3.7m (3.7.0) to create virtualenv… ⠋RuntimeError: failed to query /usr/bin/python3.7m with code 1 err: 'Traceback (most recent call last):\n File "/usr/lib/python3/dist-packages/virtualenv/discovery/py_info.py", line 16, in <module>\n from distutils.command.install import SCHEME_KEYS\nModuleNotFoundError: No module named \'distutils.command\'\n' Error while trying to remove the /home/yuhao/.local/share/virtualenvs/electrode-mimic-j_E-dTLW env: No such file or directory Virtualenv location: Warning: Your Pipfile requires python_version 3.7, but you are using None (/bin/python). $ pipenv check will surely fail. Creating a virtualenv for this project… Using /usr/bin/python3 (3.8.2) to create virtualenv… ⠙created virtual environment CPython3.8.2.final.0-64 in 162ms creator CPython3Posix(dest=/home/yuhao/.local/share/virtualenvs/electrode-mimic-j_E-dTLW, clear=False, global=False) seeder FromAppData(download=False, progress=latest, msgpack=latest, pytoml=latest, packaging=latest, setuptools=latest, contextlib2=latest, retrying=latest, pip=latest, pep517=latest, idna=latest, CacheControl=latest, appdirs=latest, requests=latest, pkg_resources=latest, webencodings=latest, distlib=latest, certifi=latest, distro=latest, ipaddr=latest, wheel=latest, six=latest, pyparsing=latest, urllib3=latest, chardet=latest, colorama=latest, lockfile=latest, html5lib=latest, via=copy, app_data_dir=/home/yuhao/.local/share/virtualenv/seed-app-data/v1.0.1.debian) activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator Virtualenv location: /home/yuhao/.local/share/virtualenvs/electrode-mimic-j_E-dTLW Pipfile.lock not found, creating… Locking [dev-packages] dependencies… ^C Aborted! Why is this happening? | Did you install the python3-distutils package ? If not you can install it with : sudo apt-get install python3-distutils If you need it for a python3 version that is not the system default, specify the python 3 version : sudo apt-get install python3.X-distutils example: python3.8-distutils | 14 | 9 |
64,208,678 | 2020-10-5 | https://stackoverflow.com/questions/64208678/hiding-secret-key-in-django-project-on-github-after-uploading-project | I uploaded my django project on github and I have a lot of commits on my project. I don't want to delete my project and reupload it again. what is the easiest way to hide secret key after uploading project to github and after a lot of commits? | In the same directory where manage.py is, create a file whose name is .env, and put inside it: SECRET_KEY = '....your secret key ....' # --- the one indicated in your settings.py, cut an paste it here where SECRET_KEY = '....your secret key ....' is the one indicated in your settings.py. So cut this line from your settings.py and paste it in the .env file. In the same directory, create a file whose name is .gitignore, and put inside it: .env Then in your settings.py, where previously you had SECRET_KEY = '....your secret key ....', put: from decouple import config SECRET_KEY = config("SECRET_KEY") # this is to replace the secret key you cut away before then in your command prompts run: pip install python-decouple pip freeze > requirements.txt then add, commit and push on Github. Here you can find out more information on how .gitignore works. | 16 | 43 |
64,197,754 | 2020-10-4 | https://stackoverflow.com/questions/64197754/how-do-i-rotate-a-pytorch-image-tensor-around-its-center-in-a-way-that-supports | I'd like to randomly rotate an image tensor (B, C, H, W) around it's center (2d rotation I think?). I would like to avoid using NumPy and Kornia, so that I basically only need to import from the torch module. I'm also not using torchvision.transforms, because I need it to be autograd compatible. Essentially I'm trying to create an autograd compatible version of torchvision.transforms.RandomRotation() for visualization techniques like DeepDream (so I need to avoid artifacts as much as possible). import torch import math import random import torchvision.transforms as transforms from PIL import Image # Load image def preprocess_simple(image_name, image_size): Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()]) image = Image.open(image_name).convert('RGB') return Loader(image).unsqueeze(0) # Save image def deprocess_simple(output_tensor, output_name): output_tensor.clamp_(0, 1) Image2PIL = transforms.ToPILImage() image = Image2PIL(output_tensor.squeeze(0)) image.save(output_name) # Somehow rotate tensor around it's center def rotate_tensor(tensor, radians): ... return rotated_tensor # Get a random angle within a specified range r_degrees = 5 angle_range = list(range(-r_degrees, r_degrees)) n = random.randint(angle_range[0], angle_range[len(angle_range)-1]) # Convert angle from degrees to radians ang_rad = angle * math.pi / 180 # test_tensor = preprocess_simple('path/to/file', (512,512)) test_tensor = torch.randn(1,3,512,512) # Rotate input tensor somehow output_tensor = rotate_tensor(test_tensor, ang_rad) # Optionally use this to check rotated image # deprocess_simple(output_tensor, 'rotated_image.jpg') Some example outputs of what I'm trying to accomplish: | So the grid generator and the sampler are sub-modules of the Spatial Transformer (JADERBERG, Max, et al.). These sub-modules are not trainable, they let you apply a learnable, as well as non-learnable, spatial transformation. Here I take these two submodules and use them to rotate an image by theta using PyTorch's functions torch.nn.functional.affine_grid and torch.nn.functional.affine_sample (these functions are implementations of the generator and the sampler, respectively): import torch import torch.nn.functional as F import numpy as np import matplotlib.pyplot as plt def get_rot_mat(theta): theta = torch.tensor(theta) return torch.tensor([[torch.cos(theta), -torch.sin(theta), 0], [torch.sin(theta), torch.cos(theta), 0]]) def rot_img(x, theta, dtype): rot_mat = get_rot_mat(theta)[None, ...].type(dtype).repeat(x.shape[0],1,1) grid = F.affine_grid(rot_mat, x.size()).type(dtype) x = F.grid_sample(x, grid) return x #Test: dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor #im should be a 4D tensor of shape B x C x H x W with type dtype, range [0,255]: plt.imshow(im.squeeze(0).permute(1,2,0)/255) #To plot it im should be 1 x C x H x W plt.figure() #Rotation by np.pi/2 with autograd support: rotated_im = rot_img(im, np.pi/2, dtype) # Rotate image by 90 degrees. plt.imshow(rotated_im.squeeze(0).permute(1,2,0)/255) In the example above, assume we take our image, im, to be a dancing cat in a skirt: rotated_im will be a 90-degrees CCW rotated dancing cat in a skirt: And this is what we get if we call rot_img with theta eqauls to np.pi/4: And the best part that it's differentiable w.r.t the input and has autograd support! Hooray! 
| 8 | 13 |
64,170,759 | 2020-10-2 | https://stackoverflow.com/questions/64170759/pyathena-is-super-slow-compared-to-querying-from-athena | I run a query from AWS Athena console and takes 10s. The same query run from Sagemaker using PyAthena takes 155s. Is PyAthena slowing it down or is the data transfer from Athena to sagemaker so time consuming? What could I do to speed this up? | Just figure out a way of boosting the queries: Before I was trying: import pandas as pd from pyathena import connect conn = connect(s3_staging_dir=STAGIN_DIR, region_name=REGION) pd.read_sql(QUERY, conn) # takes 160s Figured out that using a PandasCursor instead of a connection is way faster import pandas as pd pyathena import connect from pyathena.pandas.cursor import PandasCursor cursor = connect(s3_staging_dir=STAGIN_DIR, region_name=REGION, cursor_class=PandasCursor).cursor() df = cursor.execute(QUERY).as_pandas() # takes 12s Ref: https://github.com/laughingman7743/PyAthena/issues/46 | 9 | 20 |
64,187,581 | 2020-10-3 | https://stackoverflow.com/questions/64187581/e-package-python-pip-has-no-installation-candidate | $ sudo apt-get install python-pip Reading package lists... Done Building dependency tree Reading state information... Done Package python-pip is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: python3-pip both my pip and pip3 are installed in python 3 pip -V pip 20.1.1 from /usr/lib/python3/dist-packages/pip (python 3.8) pip3 -V pip 20.1.1 from /usr/lib/python3/dist-packages/pip (python 3.8) Now i cant install pip ...it shows above error | If you have python(python2) installed you then you can use following command to install pip(for python2). curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py python get-pip.py Now you can check for pip2 pip2 --version I hope these will help you | 15 | 45 |
64,235,312 | 2020-10-6 | https://stackoverflow.com/questions/64235312/how-to-implodereverse-of-pandas-explode-based-on-a-column | I have a dataframe df like below NETWORK config_id APPLICABLE_DAYS Case Delivery 0 Grocery 5399 SUN 10 1 1 Grocery 5399 MON 20 2 2 Grocery 5399 TUE 30 3 3 Grocery 5399 WED 40 4 I want to implode( combine Applicable_days from multiple rows into single row like below) and get the average case and delivery per config_id NETWORK config_id APPLICABLE_DAYS Avg_Cases Avg_Delivery 0 Grocery 5399 SUN,MON,TUE,WED 90 10 using the groupby on network,config_id i can get the avg_cases and avg_delivery like below. df.groupby(['network','config_id']).agg({'case':'mean','delivery':'mean'}) But How do i be able to join APPLICABLE_DAYS while performing this aggregation? | If you want the "opposite" of explode, then that means bringing it into a list in Solution #1. You can also join as a string in Solution #2: Use lambda x: x.tolist() for the 'APPLICABLE_DAYS' column within your .agg groupby function: df = (df.groupby(['NETWORK','config_id']) .agg({'APPLICABLE_DAYS': lambda x: x.tolist(),'Case':'mean','Delivery':'mean'}) .rename({'Case' : 'Avg_Cases','Delivery' : 'Avg_Delivery'},axis=1) .reset_index()) df Out[1]: NETWORK config_id APPLICABLE_DAYS Avg_Cases Avg_Delivery 0 Grocery 5399 [SUN, MON, TUE, WED] 25 2.5 Use lambda x: ",".join(x) for the 'APPLICABLE_DAYS' column within your .agg groupby function: df = (df.groupby(['NETWORK','config_id']) .agg({'APPLICABLE_DAYS': lambda x: ",".join(x),'Case':'mean','Delivery':'mean'}) .rename({'Case' : 'Avg_Cases','Delivery' : 'Avg_Delivery'},axis=1) .reset_index()) df Out[1]: NETWORK config_id APPLICABLE_DAYS Avg_Cases Avg_Delivery 0 Grocery 5399 SUN,MON,TUE,WED 25 2.5 If you are looking for the sum, then you can just change mean to sum for the Cases and Delivery columns. | 46 | 54 |
64,218,755 | 2020-10-6 | https://stackoverflow.com/questions/64218755/getting-error-403-in-google-colab-with-tensorboard-with-firefox | I have a tfevent file already present on my Drive and I have successfully connected it to Google Colab. After searching within the issues of Tensorboard Github, I found that I had to set dom.serviceWorkers.enabled to True which I have done. But on Google Colab after performing the two steps: %load_ext tensorboard %tensorboard --logdir path/to/logs I am getting a Error 403 on the second step cell: I am using Firefox version 81.0.1 (64-bit) and my default mode is Private Window, so history and cache are cleared after I close all browser window. Can someone help me with this? | It seems that Tensorboard needs you to enable third party cookies to run without returning a HTTP 403 (Forbidden) error. I had the same issue using Chrome and fixed it by just allowing everything: You can do the same on Firefox like so: You could also find out which exact cookie is needed and then just allow that one. | 27 | 28 |
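If allowing every third-party cookie feels too broad, the exception that usually suffices is the googleusercontent.com domain (for example adding [*.]googleusercontent.com to the browser's third-party-cookie allow list), since that is where Colab serves the output frame hosting TensorBoard.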
64,189,006 | 2020-10-3 | https://stackoverflow.com/questions/64189006/why-is-pipenv-not-picking-up-my-pyenv-versions | My system Python version is 3.8.5, however I use pyenv to manage an additional version, 3.6.0, to mirror the server version my project is deployed to. I previously used virtualenv + virtualenvwrapper to manage my virtual environments, but I've heard great things on pipenv and thought I would give it a go. It's all great until I try using Python 3.6.0. Flow goes something like this: $ mkdir test_project && cd test_project $ pyenv shell 3.6.0 $ pipenv install django Creating a virtualenv for this project… Pipfile: /home/user/projects/test_project/Pipfile Using /home/user/.pyenv/shims/python (3.6.0) to create virtualenv… ⠸ Creating virtual environment...created virtual environment CPython3.8.5.final.0-64 in 130ms creator CPython3Posix(dest=/home/user/.local/share/virtualenvs/test_project-eAvoynKo-/home/user, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/user/.local/share/virtualenv) added seed packages: pip==20.2.3, setuptools==50.3.0, wheel==0.35.1 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator ✔ Successfully created virtual environment! Traceback (most recent call last): File "/home/user/.pyenv/versions/3.6.0/bin/pipenv", line 11, in <module> sys.exit(cli()) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 782, in main rv = self.invoke(ctx) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/vendor/click/decorators.py", line 21, in new_func return f(get_current_context(), *args, **kwargs) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/cli/command.py", line 252, in install site_packages=state.site_packages File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/core.py", line 1928, in do_install site_packages=site_packages, File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/core.py", line 580, in ensure_project pypi_mirror=pypi_mirror, File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/core.py", line 512, in ensure_virtualenv python=python, site_packages=site_packages, pypi_mirror=pypi_mirror File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pipenv/core.py", line 986, in do_create_virtualenv with open(project_file_name, "w") as f: FileNotFoundError: [Errno 2] No such file or directory: 
'/home/user/.local/share/virtualenvs/test_project-eAvoynKo-/home/user/.pyenv/shims/python/.project' I came across this previous question Pipenv not recognizing Pyenv version? and set the environment variable PIPENV_PYTHON="$PYENV_ROOT/shims/python in my .bashrc file. to no avail. Using the system Python version 3.8.5 works flawlessly: $ pyenv install django Creating a virtualenv for this project… Pipfile: /home/user/projects/test_project/Pipfile Using /home/user/.pyenv/shims/python (3.8.5) to create virtualenv… ⠹ Creating virtual environment...created virtual environment CPython3.8.5.final.0-64 in 114ms creator CPython3Posix(dest=/home/user/.local/share/virtualenvs/test_project-eAvoynKo-/home/user/.pyenv/shims/python, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/user/.local/share/virtualenv) added seed packages: pip==20.2.2, setuptools==50.3.0, wheel==0.35.1 activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator ✔ Successfully created virtual environment! Virtualenv location: /home/user/.local/share/virtualenvs/test_project-eAvoynKo-/home/user/.pyenv/shims/python Creating a Pipfile for this project… Installing django… Adding django to Pipfile's [packages]… ✔ Installation Succeeded Pipfile.lock not found, creating… Locking [dev-packages] dependencies… Locking [packages] dependencies… Building requirements... Resolving dependencies... ✔ Success! Updated Pipfile.lock (a6086c)! Installing dependencies from Pipfile.lock (a6086c)… 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0/0 — 00:00:00 To activate this project's virtualenv, run pipenv shell. Alternatively, run a command inside the virtualenv with pipenv run. Update While I still can't get it to recognize the python version activated with pyenv shell x.x.x, by removing the PIPENV_PYTHON environment variable, and creating a new virtualenv with pipenv install --python 3.6 pipenv does recognize the pyenv version installed. | pipenv doesn't respect pyenv local and pyenv global (reference) maybe it also doesn't respect pyenv shell I usually do what you did, specify the python like pipenv install --python 3.7 | 10 | 12 |
64,200,512 | 2020-10-4 | https://stackoverflow.com/questions/64200512/tensorflow-evalutaion-and-earlystopping-gives-infinity-overflow-error | I a model as seen in the code below, but when trying to evaluate it or using earlystopping on it it gives me the following error: numdigits = int(np.log10(self.target)) + 1 OverflowError: cannot convert float infinity to integer I must state that without using .EarlyStopping or model.evaluate everything works well. I know that np.log10(0) gives -inf so that could be a potential cause, but why is there a 0 there in the first place and how can it be prevented? How can this problem be fixed? NOTES this is the code I use: import tensorflow as tf from tensorflow import keras TRAIN_PERCENT = 0.9 model = keras.Sequential([ keras.layers.Dense(128, input_shape=(100,), activation='relu'), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(100) ]) earlystop_callback = keras.callbacks.EarlyStopping(min_delta=0.0001, patience=1 , monitor='accuracy' ) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(optimizer=optimizer, loss="mse", metrics=['accuracy']) X_set, Y_set = some_get_data_function() sep = int(len(X_set)/TRAIN_PERCENT) X_train, Y_train = X_set[:sep], Y_set[:sep] X_test, Y_test = X_set[sep:], Y_set[sep:] model.fit(X_train, Y_train, batch_size=16, epochs=5, callbacks=[earlystop_callback]) ev = model.evaluate(X_test, Y_test) print(ev) X,Y sets are np arrays. X is an array of arrays of 100 integers between 0 and 10. Y is an array of arrays of 100 integers, all of them are either 0 or 1. | Well it's hard to tell exactly as I can't run code without some_get_data_function() realization but recently I've got same error when mistakenly passed EMPTY array to model.evaluate. Taking into account that @meTchaikovsky comment solved your issue it's certainly due to messed up input arrays. | 8 | 9 |
64,132,842 | 2020-9-30 | https://stackoverflow.com/questions/64132842/resnet50-produces-different-prediction-when-image-loading-and-resizing-is-done-w | I want to use Keras Resnet50 model using OpenCV for reading and resizing the input image. I'm using the same preprocessing code from Keras (with OpenCV I need to convert to RGB since this is the format expected by preprocess_input()). I get slightly different predictions using OpenCV and Keras image loading. I don't understand why the predictions are not the same. Here is my code: import numpy as np import json from tensorflow.keras.applications.resnet50 import ResNet50 from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions import cv2 model = ResNet50(weights='imagenet') img_path = '/home/me/squirle.jpg' # Keras prediction img = image.load_img(img_path, target_size=(224, 224)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) preds = model.predict(x) print('Predicted Keras:', decode_predictions(preds, top=3)[0]) # OpenCV prediction imgcv = cv2.imread(img_path) dim = (224, 224) imgcv_resized = cv2.resize(imgcv, dim, interpolation=cv2.INTER_LINEAR) x = cv2.cvtColor(imgcv_resized , cv2.COLOR_BGR2RGB) x = np.expand_dims(x, axis=0) x = preprocess_input(x) preds = model.predict(x) print('Predicted OpenCV:', decode_predictions(preds, top=3)[0]) Predicted Keras: [('n02490219', 'marmoset', 0.28250763), ('n02356798', 'fox_squirrel', 0.25657368), ('n02494079', 'squirrel_monkey', 0.19992349)] Predicted OpenCV: [('n02356798', 'fox_squirrel', 0.5161952), ('n02490219', 'marmoset', 0.21953616), ('n02494079', 'squirrel_monkey', 0.1160824)] How can I use OpenCV imread() and resize() to get the same prediction as Keras image loading? | # Keras prediction img = image.load_img(img_path, target_size=(224, 224)) # OpenCV prediction imgcv = cv2.imread(img_path) dim = (224, 224) imgcv_resized = cv2.resize(imgcv, dim, interpolation=cv2.INTER_LINEAR) If you look attentively, the interpolation you specify in the case of cv2 is cv2.INTER_LINEAR (bilinear interpolation); however, by default, image.load_img() uses an INTER_NEAREST interpolation method. img_to_array(img). The dtype argument here is: None Default to None, in which case the global setting tf.keras.backend.floatx() is used (unless you changed it, it defaults to "float32") Therefore, in img_to_array(img) you have an image that consists of float32 values, while the cv2.imread(img) returns a numpy array of uint8 values. Ensure you convert to RGB from BGR, as OpenCV loads directly into BGR format. You can use image = image[:,:,::-1] or image = cv2.cvtColor(image,cv2.COLOR_BGR2RGB); otherwise you will have the R and B channels reversed resulting in an incorrect comparison. Since the preprocessing that you apply is the same in both cases, the only differences are the ones that I mentioned above; adapting those changes should ensure reproducibility. There is one observation I would like to make: provided that one uses a library (cv2 in this case) which automatically (and arguably only loads ints) instead of floats, the only correct way is to cast the first prediction array (Keras) to uint8 because by casting the latter to float32, the possible difference in information is lost. For example, with cv2 you load to uint8, and by casting instead of 233 you get 233.0. However, maybe the initial pixel value was 233,3 but this was lost due to the first conversion. | 8 | 6 |
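Putting the three differences together, an OpenCV pipeline that should reproduce the Keras loader (reusing the imports and model from the question) looks roughly like this: nearest-neighbour resizing to match load_img's default, a BGR to RGB conversion, and a float32 cast.

imgcv = cv2.imread(img_path)                            # uint8, BGR order
imgcv = cv2.cvtColor(imgcv, cv2.COLOR_BGR2RGB)          # match Keras' RGB order
imgcv = cv2.resize(imgcv, (224, 224),
                   interpolation=cv2.INTER_NEAREST)     # load_img defaults to 'nearest'
x = imgcv.astype(np.float32)                            # img_to_array yields float32
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)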
64,227,835 | 2020-10-6 | https://stackoverflow.com/questions/64227835/how-to-build-an-mac-os-app-from-a-python-script-having-a-pyside2-gui | Context: I am developping a simple Python application using a PySide2 GUI. It currently works fine in Windows, Linux and Mac. On Windows, I could use PyInstaller and InnoSetup to build a simple installer. Then I tried to do the same thing on Mac. It soon broke, because the system refused to start the command or the app generated by PyInstaller because it was not correctly signed. And as I am not an apple developper, I cannot sign anything... After some research, I tried py2app. I can go one step further here. With python setup.py py2app -A I can create a runnable app. Which obviously cannot be ported to a different system because it uses my development folders. And if I use python setup.py py2app the generated program cannot start because py2app did not copy all the required Qt stuff. I tried to add one by one the missing libraries, but on the end the system could not find the plugins and I gave up... Question: Can someone help me with a recipe to convert a python script or package using a Qt GUI into a portable app on Mac? Ideally, the recipe should say how to use a custom application icon, but this is not required. References: Python 3.8.5 macOS 10.15.7 Catalina PySide2 5.15.1 PyInstaller 4.0 py2app 0.22 As my real package is too large for a SO question I trimmed it down to a minimal reproducible example: from PySide3.QtWidgets import * import sys class MainWindow(QMainWindow): def __init__(self): super().__init__() hello = QLabel('Hello', self) hello.move(50, 50) def run(args): app = QApplication(args) main = MainWindow() main.show() sys.exit(app.exec_()) if __name__ == '__main__': run(sys.argv) And here is the setup.py file used for py2app: from setuptools import setup APP = ['app.py'] DATA_FILES = [] OPTIONS = {} setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) | Requirements works with Python 3.8.5 macOS 10.15.7 Catalina uses PySide2 and py2app Problems PySide2 must be added under OPTIONS to the packages list when running the app then still an error occurs: Library not loaded: @rpath/libshiboken2.abi3.5.15.dylib, Reason: image not found Solution The slightly modified setup.py could look like this: from setuptools import setup APP = ['app.py'] DATA_FILES = [] OPTIONS = { 'packages': ['PySide2'], 'iconfile': 'some_icon.icns', 'plist': { 'CFBundleDevelopmentRegion': 'English', 'CFBundleIdentifier': "com.ballesta.xxx", 'CFBundleVersion': "1.0.0", 'NSHumanReadableCopyright': u"Copyright © 2020, Serge Ballesta, All Rights Reserved" } } setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) Additionally, an icon definition and a few plist entries for some basic information have been added. The whole build is best triggered with a script that could look like this: #!/bin/sh python3 -m venv venv . venv/bin/activate pip install PySide2 pip install py2app python setup.py py2app cp ./venv/lib/python3.8/site-packages/shiboken2/libshiboken2.abi3.5.15.dylib ./dist/app.app/Contents/Resources/lib/python3.8/lib-dynload/shiboken2 Test Here the screenshot of a test run: | 18 | 14 |
64,159,631 | 2020-10-1 | https://stackoverflow.com/questions/64159631/removing-collected-static-files | I ran collectstatic a few weeks back and I would like to remove most (99.5%) of the collected files so that I do not have to store them when deploying to production. I tried collectstatic --clear but this removed them and then placed the deleted files back afterwards (not sure what the practical point of the command is. The docs state it is used "to remove stale static files", but it certainly doesn't do that). Is there a way to erase the collected files? I've been using this page of the Django docs to try and find a solution but haven't had any luck. | It took an interesting combination of commands and actions to get the job done. After replacing my static folder with a new folder containing only my desired contents, the base command of
collectstatic --noinput --clear --no-post-process
got most of the files cleared and not re-copied. However, the JS and CSS files were still being copied. To get the system to ignore these files and not copy them, I used the --ignore flag for each JS- and CSS-filled folder that was being copied over. For me personally, this additional part of the command looked like this:
--ignore facebook --ignore gis
In the end, the full command pattern for another user would look something like this:
collectstatic --noinput --clear --no-post-process --ignore [JSfolder0] --ignore [JSfolder1] --ignore [JSfolder2] | 8 | 9 |
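For instance, run from the project root, the full invocation might look like this (a hypothetical example using the folder names mentioned in the answer; substitute your own JS/CSS sub-folders):

python manage.py collectstatic --noinput --clear --no-post-process --ignore facebook --ignore gis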
64,158,887 | 2020-10-1 | https://stackoverflow.com/questions/64158887/how-to-see-python-print-statements-from-running-fargate-ecs-task | I have a Fargate ECS container that I use to run a Docker container through tasks in ECS. When the task starts, an sh script is called, runner.sh, #!/bin/sh echo "this line will get logged to ECS..." python3 src/my_python_script.py # however print statements from this Python script are not logged to ECS This in turn starts a long-running Python script, my_python_script.py. I know the Python script is running fine because it does what it needs to do, but I can't see output from the Python script. Inside of my_python_script.py there are several print() statements. In the CloudWatch logs for my ECS Fargate task, I see output from the sh script ("this line will get logged to ECS..."), but not output from print() statements that are made within the Python script. This is the logs configuration from inside my task definition: { "ipcMode": null, "executionRoleArn": "myecsTaskExecutionRolearn", "containerDefinitions": [ { "dnsSearchDomains": null, "environmentFiles": null, "logConfiguration": { "logDriver": "awslogs", "secretOptions": null, "options": { "awslogs-group": "/ecs/mylogsgroup", "awslogs-region": "eu-west-1", "awslogs-stream-prefix": "ecs" } }, "entryPoint": null, "portMappings": [], "command": null, "linuxParameters": null, "cpu": 0, "environment": [], "resourceRequirements": null, "ulimits": null, "dnsServers": null, "mountPoints": [], "workingDirectory": null, "secrets": null, "dockerSecurityOptions": null, "memory": null, "memoryReservation": null, "volumesFrom": [], "stopTimeout": null, "image": "1234567.dck.aws.com/mydockerimage", "startTimeout": null, "firelensConfiguration": null, "dependsOn": null, "disableNetworking": null, "interactive": null, "healthCheck": null, "essential": true, "links": null, "hostname": null, "extraHosts": null, "pseudoTerminal": null, "user": null, "readonlyRootFilesystem": null, "dockerLabels": null, "systemControls": null, "privileged": null, "name": "my-task-definition-name" } ], "memory": "4096", "taskRoleArn": "myecsTaskRolearn", "family": "my-task-definition-name", "pidMode": null, "requiresCompatibilities": [ "FARGATE" ], "networkMode": "awsvpc", "cpu": "2048", "inferenceAccelerators": [], "proxyConfiguration": null, "volumes": [], "tags": [] } Dockerfile: FROM rocker/verse:3.6.0 ENV DEBIAN_FRONTEND noninteractive RUN install2.r --error \ jsonlite RUN echo "deb http://ftp.de.debian.org/debian testing main" >> /etc/apt/sources.list RUN echo 'APT::Default-Release "stable";' | tee -a /etc/apt/apt.conf.d/00local RUN apt-get update && apt-get -t testing install -y --force-yes python3.6 RUN apt-get update && apt-get -t testing install -y libmagick++-dev python3-pip python-setuptools RUN mkdir /app WORKDIR /app COPY ./src /app/src RUN pip3 install --trusted-host pypi.python.org -r /app/requirements.txt CMD /app/runner.sh I think I am following the awslogs instructions from https://docs.aws.amazon.com/AmazonECS/latest/userguide/using_awslogs.html but maybe not? Is there something obvious I need to do to make sure that print() statements from within a Python script are captured in my ECS task's CloudWatch logs? | Seems to me that there are a couple of things you could be dealing with here. The first is the default buffering behaviour of Python, which could stop the output from showing up. You will need to stop this. 
You can set the PYTHONUNBUFFERED env var correctly by inserting the following before CMD: ENV PYTHONUNBUFFERED=1 Secondly, quoting from the Using the awslogs driver doc that you linked: The type of information that is logged by the containers in your task depends mostly on their ENTRYPOINT command. By default, the logs that are captured show the command output that you would normally see in an interactive terminal if you ran the container locally, which are the STDOUT and STDERR I/O streams. The awslogs log driver simply passes these logs from Docker to CloudWatch Logs. For more information on how Docker logs are processed, including alternative ways to capture different file data or streams, see View logs for a container or service in the Docker documentation. So going by that, I would replace the CMD line with the following as per the Exec form of ENTRYPOINT: ENTRYPOINT ["/app/runner.sh"] This should serve to hook up the STDOUT and STDERR I/O streams for your shell script and hopefully your Python script to the container logging. | 13 | 7 |
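Putting both suggestions together, the tail of the Dockerfile from the question would look roughly like this (a sketch; the earlier Dockerfile lines stay unchanged):

COPY ./src /app/src
RUN pip3 install --trusted-host pypi.python.org -r /app/requirements.txt

# Disable Python's stdout/stderr buffering so print() output is flushed immediately
ENV PYTHONUNBUFFERED=1

# Exec-form ENTRYPOINT instead of the shell-form CMD, so the script's STDOUT/STDERR
# are attached to the container and forwarded by the awslogs driver
ENTRYPOINT ["/app/runner.sh"]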
64,214,011 | 2020-10-5 | https://stackoverflow.com/questions/64214011/algorithm-what-set-of-tiles-of-length-n-can-be-used-to-generate-the-most-amount | I'm trying to create a function best_tiles which takes in the number of tiles in your hand and returns the set of tiles that allows you to produce the most number of unique English-valid words, assuming that you can only use each tile once.
For example, with the set of tiles in your hand (A, B, C) you can produce the words CAB, BAC, AB, and BA (all of these are English words), so you can spell 4 unique words with that set. With (B, A, A), you can spell 5 words: ABA, BAA, AA, AB, and BA.
The goal is to find the set of letters which allows you to spell the most number of English-valid words (without replacement). So if 5 was the maximum number of words that could be spelled with any combination of letters for N = 3, running best_tiles( n = 3 ) would return B, A, A.
I'm wondering how to implement this efficiently. My current approach doesn't scale well at all with the number of letters.
I read in a wordlist. In this case, I'm using enable.txt here: https://www.wordgamedictionary.com/enable/
import os
import collections
path = "enable.txt"
words = []
with open(path, encoding='utf8') as f:
    for values in f:
        words.append(list(values.strip().upper()))
I create a function word_in_tiles h/t smack89 which returns whether it is possible to construct a word given a tile set:
def word_in_tiles(word, tiles):
  tiles_counter = collections.Counter(tiles)
  return all(tiles_counter.get(ch, 0) >= cnt for ch,cnt in collections.Counter(word).items())
I then create a function get_all_words which produces a list of all the possible words one can spell from a word list and a tile set.
def get_all_words(tile_set, words):
  # words is a word list
  return [i for i in words if word_in_tiles(i, tile_set)]
The extremely naive approach for identifying which tileset is the "best" for three letters is the following:
I first create a list of every possible combination for a given length. So for length 3, I'd do:
import string
import itertools

letters = string.ascii_lowercase
all_three_letter_combinations = list(itertools.combinations_with_replacement(letters, 3))

# Create a list of only words that are three letters or less
three_letter = [i for i in words if len(i) <= 3]

sequence_dict = dict()
for i in all_three_letter_combinations:
    string_test = "".join(i).upper()
    sequence_dict[i] = get_all_words(string_test, three_letter)
Then remove the values with no length and sort by the length of the result:
res = {k: v for k, v in sequence_dict.items() if len(v) >= 1}

def GetMaxLength(res):
    return max((len(v), v, k) for k, v in res.items())[1:]

GetMaxLength(res)
You get that, for three letters, the tile-set that produces the most English-valid words is T A E, which can produce the following words ['AE', 'AT', 'ATE', 'EAT', 'ET', 'ETA', 'TA', 'TAE', 'TEA']
I'd like to be able to scale this up to as big as N = 15. What is the best procedure for doing this? | I think this is good enough!
Here is a log of my code running under PyPy: 0:00:00.000232 E 0:00:00.001251 ER 0:00:00.048733 EAT 0:00:00.208744 ESAT 0:00:00.087425 ESATL 0:00:00.132049 ESARTP 0:00:00.380296 ESARTOP 0:00:01.409129 ESIARTLP 0:00:03.433526 ESIARNTLP 0:00:10.391252 ESIARNTOLP 0:00:25.651012 ESIARNTOLDP 0:00:56.642405 ESIARNTOLCDP 0:01:57.257293 ESIARNTOLCDUP 0:03:55.933906 ESIARNTOLCDUPM 0:07:17.146036 ESIARNTOLCDUPMG 0:10:14.844347 ESIARNTOLCDUPMGH 0:13:34.722600 ESIARNTOLCDEUPMGH 0:18:14.215019 ESIARNTOLCDEUPMGSH 0:22:47.129284 ESIARNTOLCDEUPMGSHB 0:27:56.859511 ESIARNTOLCDEUPMGSHBYK 0:46:20.448502 ESIARNTOLCDEUPMGSHBYAK 0:57:15.213635 ESIARNTOLCDEUPMGSHIBYAT 1:09:55.530180 ESIARNTOLCDEUPMGSHIBYATF 1:18:35.209599 ESIARNTOLCDEUPMGSHIBYATRF 1:21:54.095119 ESIARNTOLCDEUPMGSHIBYATRFV 1:20:16.978411 ESIARNTOLCDEUPMGSHIBYAOTRFV 1:14:24.253660 ESIARNTOLCDEUPMGSHIBYAONTRFV 1:00:37.405571 The key improvements are these. I distinguish not only between letters, but how many times the letter has been seen. Therefore every letter I can accept or move on. That was an idea I got while commenting on David Eisenstat's solution. From him I also got the idea that pruning trees out that can't lead to an answer controls the growth of the problem surprisingly well. The very first solution that I look at is simply all the top letters. This starts as a pretty good solution so despite it being depth first, we will prune pretty well. I am careful to consolidate "exhausted tries" into a single record. This reduces how much data we have to throw around. And here is the code. import os import datetime path = "enable.txt" words = [] with open(path) as f: for values in f: words.append(values.strip().upper()) key_count = {} for word in words: seen = {} for letter in word: if letter not in seen: seen[letter] = 0 key = (letter, seen[letter]) if key not in key_count: key_count[key] = 1 else: key_count[key] += 1 seen[letter] += 1 KEYS = sorted(key_count.keys(), key=lambda key: -key_count[key]) #print(KEYS) #print(len(KEYS)) KEY_POS = {} for i in range(len(KEYS)): KEY_POS[KEYS[i]] = i # Now we will build a trie. Every node has a list of words, and a dictionary # from the next letter farther in the trie. # BUT TRICK:, we will map each word to a sequence of numbers, and those numbers # will be indexes into KEYS. This allows us to use the fact that a second 'e' is # unlikely, so we can deal with that efficiently. class Trie: def __init__(self, path): self.words = [] self.dict = {} self.min_pos = -1 self.max_pos = -1 self.words = [] self.count_words = 0 self.path = path def add_word (self, word): trie = self poses = [] seen = {} for letter in word: if letter not in seen: seen[letter] = 0 key = (letter, seen[letter]) poses.append(KEY_POS[(key)]) seen[letter] += 1 sorted_poses = sorted(poses); for i in range(len(sorted_poses)): trie.count_words += 1 pos = sorted_poses[i] if pos not in trie.dict: trie.dict[pos] = Trie(trie.path + KEYS[pos][0]) if trie.max_pos < pos: trie.max_pos = pos trie = trie.dict[pos] trie.count_words += 1 trie.words.append(word) base_trie = Trie('') for word in words: base_trie.add_word(word); def best_solution (size): def solve (subset, pos, best, partial): found = sum(x[0] for x in partial) upper_bound = sum(x[1] for x in partial) if size <= len(subset) or upper_bound < best or len(KEYS) <= pos: return (found, subset) if best < found: best = found # Figure out our next calculations. 
partial_include = [] partial_exclude = [] finalized_found = 0 for this_found, this_bound, this_trie in partial: if this_trie is None: # This is a generic record of already emptied tries finalized_found += this_found elif pos in this_trie.dict: include_trie = this_trie.dict[pos] partial_include.append(( this_found + len(include_trie.words), include_trie.count_words + this_found, include_trie )) # We included the tally of found words in the previous partial. # So do not double-count by including it again partial_include.append(( 0, this_bound - include_trie.count_words - this_found, this_trie )) partial_exclude.append(( this_found, this_bound - include_trie.count_words, this_trie )) elif this_found == this_bound: finalized_found += this_found else: partial_include.append(( this_found, this_bound, this_trie )) partial_exclude.append(( this_found, this_bound, this_trie )) if 0 < finalized_found: partial_include.append( (finalized_found, finalized_found, None) ) partial_exclude.append( (finalized_found, finalized_found, None) ) found_include, subset_include = solve(subset + [pos], pos+1, best, partial_include) if best < found_include: best = found_include found_exclude, subset_exclude = solve(subset, pos+1, best, partial_exclude) if found_include < found_exclude: return (found_exclude, subset_exclude) else: return (found_include, subset_include) count, subset = solve([], 0, 0, [(len(base_trie.words), base_trie.count_words, base_trie)]) return ''.join([KEYS[x][0] for x in subset]) for i in range(20): start = datetime.datetime.now() print(best_solution(i)) print(datetime.datetime.now() - start) | 7 | 4 |
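As a quick sanity check on the result (a sketch; it assumes the corrected word_in_tiles helper from the question is also defined, and reuses the words list built in the answer's code), you can count how many dictionary words a returned tile set actually covers:

tiles = best_solution(3)          # the log above shows e.g. 'EAT' for size 3
playable = [w for w in words if word_in_tiles(w, tiles)]
print(tiles, len(playable), sorted(playable))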