Columns:
question_id: int64 (59.5M to 79.4M)
creation_date: string (8 to 10 chars)
link: string (60 to 163 chars)
question: string (53 to 28.9k chars)
accepted_answer: string (26 to 29.3k chars)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
65,085,956
2020-12-1
https://stackoverflow.com/questions/65085956/pycharm-venv-failed-no-such-option-build-dir
I'm doing a fresh install on a new Windows 10 laptop. I installed Python 3.9 and PyCharm Community 2020.2, then started a new project. In the project settings, I created a new project interpreter in a venv, inside the /venv folder. Everything looks to be set up correctly, but I can't install anything to the project interpreter. When I try to do so, e.g. when I try to install pandas or anything else, I get "Non-zero exit code (2)" with the following message:

```
Usage:
  D:\MyProject\project\venv\Scripts\python.exe -m pip install [options] [package-index-options] ...
  D:\MyProject\project\venv\Scripts\python.exe -m pip install [options] -r [package-index-options] ...
  D:\MyProject\project\venv\Scripts\python.exe -m pip install [options] [-e] ...
  D:\MyProject\project\venv\Scripts\python.exe -m pip install [options] [-e] ...
  D:\MyProject\project\venv\Scripts\python.exe -m pip install [options] <archive url/path> ...

no such option: --build-dir
```

When I go to the Terminal and just run `pip install pandas` per PyCharm's "proposed solution", it installs fine, and pandas and its dependencies appear as usual in the list of installed modules in the interpreter. I've not encountered this before, and I don't see anywhere in the settings where I can specify how exactly PyCharm will invoke pip in this situation.
PyCharm relies on `--build-dir` to install packages, and that flag was removed in the latest pip, 20.3. The fix for PyCharm is ready and will be released this week in the 2020.3 release (and backported to 2020.2.5 and 2020.1.5).

The workaround is to downgrade pip to the previous version: close PyCharm and run the following in the terminal, using the corresponding virtual environment:

```
python -m pip install pip==20.2.4
```

Update 1: 2020.1.5 and 2020.2.5 with the fix were released - please update.
49
61
65,098,912
2020-12-1
https://stackoverflow.com/questions/65098912/how-to-calculate-the-difference-between-rows-in-pyspark
This is my DataFrame in PySpark:

```
utc_timestamp              data  feed
2015-10-13 11:00:00+00:00     1     A
2015-10-13 12:00:00+00:00     5     A
2015-10-13 13:00:00+00:00     6     A
2015-10-13 14:00:00+00:00    10     B
2015-10-13 15:00:00+00:00    11     B
```

The values of `data` are cumulative. I want to get this result (differences between consecutive rows, grouped by `feed`):

```
utc_timestamp              data  feed
2015-10-13 11:00:00+00:00     1     A
2015-10-13 12:00:00+00:00     4     A
2015-10-13 13:00:00+00:00     1     A
2015-10-13 14:00:00+00:00    10     B
2015-10-13 15:00:00+00:00     1     B
```

In pandas I would do it this way:

```python
df["data"] -= (df.groupby("feed")["data"].shift(fill_value=0))
```

How can I do the same thing in PySpark?
You can use `lag` as a substitute for `shift`, and `coalesce(..., F.lit(0))` as a substitute for `fill_value=0`:

```python
from pyspark.sql.window import Window
import pyspark.sql.functions as F

window = Window.partitionBy("feed").orderBy("utc_timestamp")
data = F.col("data") - F.coalesce(F.lag(F.col("data")).over(window), F.lit(0))
df.withColumn("data", data)
```
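Not as a replacement for the PySpark code, but as a quick sanity check: the pandas one-liner from the question reproduces the expected `data` column on the sample frame (assuming pandas >= 0.24 for `shift(fill_value=...)`):

```python
import pandas as pd

# Sample frame from the question (timestamps shortened to plain strings)
df = pd.DataFrame({
    "utc_timestamp": ["2015-10-13 11:00", "2015-10-13 12:00", "2015-10-13 13:00",
                      "2015-10-13 14:00", "2015-10-13 15:00"],
    "data": [1, 5, 6, 10, 11],
    "feed": ["A", "A", "A", "B", "B"],
})

# Difference between consecutive rows within each feed
df["data"] -= df.groupby("feed")["data"].shift(fill_value=0)
print(df["data"].tolist())  # [1, 4, 1, 10, 1]
```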
13
9
65,088,076
2020-12-1
https://stackoverflow.com/questions/65088076/how-to-find-the-current-project-id-of-the-deployed-python-function-in-google-clo
I have deployed a Python 3.7 function in Google Cloud. I need to get the project ID through code to find out where it is deployed. I wrote a small Python 3.7 script and tested it through the Google Cloud Shell command line:

```python
import urllib
import urllib.request

url = "http://metadata.google.internal/computeMetadata/v1/project/project-id"
x = urllib.request.urlopen(url)
with x as response:
    x.read()
```

Unfortunately this gives me only `b''` as the response. I am not getting the project ID even though I have set it using `gcloud config set project my-project`. I am new to Google Cloud and Python.

Issue 2

This is an additional issue: on my local system I have installed gcloud, and if I run the above Python 3.7 script from there, I get this exception from the line `x = urllib.request.urlopen(url)`:

```
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 1350, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1240, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1286, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1235, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1006, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 946, in send
    self.connect()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 917, in connect
    self.sock = self._create_connection(
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socket.py", line 787, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socket.py", line 918, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
```

The changes below, as suggested by the answer, also give me an empty string:
In the Python 3.7 runtime, you can get the project ID via an environment variable:

```python
import os

project_id = os.environ['GCP_PROJECT']
```

In future runtimes, this environment variable will be unavailable, and you'll need to get the project ID from the metadata server:

```python
import urllib.request

url = "http://metadata.google.internal/computeMetadata/v1/project/project-id"
req = urllib.request.Request(url)
req.add_header("Metadata-Flavor", "Google")
project_id = urllib.request.urlopen(req).read().decode()
```

Running this locally will produce an error because your local machine can't resolve the http://metadata.google.internal URL -- this is only available to deployed functions.
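Combining the two, a sketch of a helper (the function name is mine, not part of any Google API) that prefers the environment variable and falls back to the metadata server only when the variable is unset:

```python
import os
import urllib.request

def get_project_id():
    """Return the GCP project ID, trying the env var before the metadata server."""
    # The Python 3.7 runtime exposes GCP_PROJECT; later runtimes do not.
    project_id = os.environ.get("GCP_PROJECT")
    if project_id:
        return project_id
    # Fall back to the metadata server (only reachable from deployed code).
    req = urllib.request.Request(
        "http://metadata.google.internal/computeMetadata/v1/project/project-id"
    )
    req.add_header("Metadata-Flavor", "Google")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

Locally, only the env-var branch is reachable, which also makes it easy to test.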
11
17
65,097,687
2020-12-1
https://stackoverflow.com/questions/65097687/what-is-a-default-transaction-isolation-level-in-sqlalchemy-for-postgres
I use SQLAlchemy for a Postgres DB:

```python
engine = create_engine(
    "postgresql://postgres:postgres@localhost/test"
...
```

I create the engine without specifying a Postgres dialect (psycopg2, pg8000, or other). So my questions are: what is the default transaction isolation level, and what is the default Postgres dialect?
Per the docs, the default Postgres driver used by SQLAlchemy is psycopg2. The default transaction isolation level is configured on the DB side, not by the client; out of the box, it is READ COMMITTED.
5
8
65,093,644
2020-12-1
https://stackoverflow.com/questions/65093644/pandas-group-by-one-column-and-aggregate-other-column-to-list
I have a dataframe that has multiple entries for users. These users can also be assigned to multiple IDs. I would like to group by the users and then store a list of these IDs in another column, as shown below.

I'd like to go from this:

```python
df1 = pd.DataFrame({'USER': ['BOB', 'STEVE', 'PAUL', 'KEITH', 'STEVE', 'STEVE', 'BOB'],
                    'ID': [1, 2, 3, 4, 5, 6, 7]})
```

to this, only showing values if that user is attached to multiple IDs.
`groupby` + `map`:

```python
u = df1.groupby("USER")["ID"].agg(list)
df1["MULTI_IDS"] = df1["USER"].map(u[u.str.len().ge(2)])
```

```
    USER  ID  MULTI_IDS
0    BOB   1     [1, 7]
1  STEVE   2  [2, 5, 6]
2   PAUL   3        NaN
3  KEITH   4        NaN
4  STEVE   5  [2, 5, 6]
5  STEVE   6  [2, 5, 6]
6    BOB   7     [1, 7]
```
10
10
65,046,975
2020-11-28
https://stackoverflow.com/questions/65046975/finding-relationships-between-values-based-on-their-name-in-python-with-panda
I want to make relationships between values by their Name, based on the rules below:

1. I have a CSV file (with more than 100,000 rows) that consists of lots of values. Here are some examples:

```
Name:
A02-father
A03-father
A04-father
A05-father
A07-father
A08-father
A09-father
A17-father
A18-father
A20-father
A02-SA-A03-SA
A02-SA-A04-SA
A03-SA-A02-SA
A03-SA-A05-SA
A03-SA-A17-SA
A04-SA-A02-SA
A04-SA-A09-SA
A05-SA-A03-SA
A09-SA-A04-SA
A09-SA-A20-SA
A17-SA-A03-SA
A17-SA-A18-SA
A18-SA-A17-SA
A20-SA-A09-SA
A05-NA
B02-Father
B04-Father
B06-Father
B02-SA-B04-SA
B04-SA-BO2-SA
B04-SA-B06-SA
B06-SA-B04-SA
B06-NA
```

2. I have another CSV file which tells me from which values I should start. In this case the values are A03-father & B02-father & ...; they don't have any influence on each other and each has a separate path to follow, so for each path we start from the mentioned start point. father.csv:

```
A03-father
B02-father
....
```

3. Based on the naming I want to make the relationships. As A03-father has been determined as a father, I should check for any value which starts with A03 (all of them are A03's babies). Also, as B02 is a father, we will check for any value which starts with B02 (B02-SA-B04-SA).

4. Now if I find A03-SA-A02-SA, this is A03's baby. If I find A03-SA-A05-SA, this is A03's baby. If I find A03-SA-A17-SA, this is A03's baby. After that I must check any node which starts with A02 & A05 & A17. As you see, A02-father exists, so it is a father, and now we will search for any string which starts with A02 and doesn't contain A03, which has already been detected as a father (it must be ignored). This must be checked until the end of the values which exist in the CSV file. As you see, I should check the path based on the name (regex) and go forward until the end of the path.
The expected result:

```
Father      Baby
A03-father  A03-SA-A02-SA
A03-father  A03-SA-A05-SA
A03-father  A03-SA-A17-SA
A02-father  A02-SA-A04-SA
A05-father  A05-NA
A17-father  A17-SA-A18-SA
A04-father  A04-SA-A09-SA
A02-father  A02-SA-A04-SA
A09-father  A09-SA-A20-SA
B02-father  B02-SA-B04-SA
B04-father  B04-SA-B06-SA
B06-father  B06-NA
```

I have coded it as below with pandas:

```python
import pandas as pd
import numpy as np
import re

# Read the file which consists of all values
df = pd.read_csv("C:\\total.csv")
# Read the file which tells me who is a father
Fa = pd.read_csv("C:\\Father.csv")

# Get the first part of the father name, which is A0
Fa['sub'] = Fa['Name'].str.extract(r'(\w+\s*)', expand=False)

r2 = []
# Check the whole CSV file and find anything which starts with A0 and is not a father
for f in Fa['sub']:
    baby = (df[df['Name'].str.startswith(f) & ~df['Name'].str.contains('Father')])
    baby['sub'] = baby['Name'].str.extract(r'(\w+\s*)', expand=False)
    r1 = pd.merge(Fa, baby, left_on='sub', right_on='sub', suffixes=('_f', '_c'))
    r2.append(r1)

out_df = pd.concat(r2)
out_df = out_df.replace(np.nan, '', regex=True)
# Find A0-N-A2-M and A0-N-A4-M
out_df.to_csv('C:\\child1.csv')

# Check the whole CSV file and find anything which starts with
# the second part of child1, which is A2 and A4
out_df["baby2"] = out_df['Name_baby'].str.extract(r'^(?:[^-]*-){2}\s*([^-]+)', expand=False)
baby3 = out_df["baby2"]

r4 = []
for f in out_df["baby2"]:
    # I want to exclude A0 which has already been detected.
    l = ['A0']
    regstr = '|'.join(l)
    baby1 = (df[df['Name'].str.startswith(f) & ~df['Name'].str.contains(regstr)])
    baby1['sub'] = baby1['Name'].str.extract(r'(\w+\s*)', expand=False)
    r3 = pd.merge(baby3, baby1, left_on='baby2', right_on='sub', suffixes=('_f', '_c'))
    r4.append(r3)

out2_df = pd.concat(r4)
out2_df.to_csv('C:\\child2.csv')
```

I want to put this code in a loop that goes through the file, checks it based on the naming process, and detects the other fathers and babies until it is finished. However, this code is not generalized and doesn't produce the exact result I expected. My question is about how to make that loop: I should go through the path and also consider the `regstr` value for any string.
Start with `import collections` (it will be needed soon). I assume that you have already read the `df` and `Fa` DataFrames.

The first part of my code creates a `children` Series (index - parent, value - child):

```python
isFather = df.Name.str.contains('-father', case=False)
dfChildren = df[~isFather]
key = []; val = []
for fath in df[isFather].Name:
    prefix = fath.split('-')[0]
    for child in dfChildren[dfChildren.Name.str.startswith(prefix)].Name:
        key.append(prefix)
        val.append(child)
children = pd.Series(val, index=key)
```

Print `children` to see the result.

The second part creates the actual result, starting from each starting point in `Fa`:

```python
nodes = collections.deque()
father = []; baby = []  # Containers for source data
# Loop over each starting point
for startNode in Fa.Name.str.split('-', expand=True)[0]:
    nodes.append(startNode)
    while nodes:
        node = nodes.popleft()  # Take a node name from the queue
        # Children of this node
        myChildren = children[children.index == node]
        # Process children (ind - father, val - child)
        for ind, val in myChildren.items():
            parts = val.split('-')  # Parts of the child name
            # Child "actual" name (if it exists)
            val_2 = parts[2] if len(parts) >= 3 else ''
            if val_2 not in father:  # val_2 not "visited" before
                # Add father / child names to the containers
                father.append(ind)
                baby.append(val)
                if len(val_2) > 0:
                    nodes.append(val_2)  # Add to the queue, to be processed later
        # Drop rows for "node" from "children" (if any exist)
        if (children.index == node).sum() > 0:
            children.drop(node, inplace=True)

# Convert to a DataFrame
result = pd.DataFrame({'Father': father, 'Baby': baby})
result.Father += '-father'  # Add "-father" to "bare" names
```

I added -father with a lower-case "f", but I think this is not a significant detail.
The result, for your data sample, is:

```
        Father           Baby
0   A03-father  A03-SA-A02-SA
1   A03-father  A03-SA-A05-SA
2   A03-father  A03-SA-A17-SA
3   A02-father  A02-SA-A04-SA
4   A05-father         A05-NA
5   A17-father  A17-SA-A18-SA
6   A04-father  A04-SA-A09-SA
7   A09-father  A09-SA-A20-SA
8   B02-father  B02-SA-B04-SA
9   B04-father  B04-SA-B06-SA
10  B06-father         B06-NA
```

And two remarks concerning your data sample:

- You wrote B04-SA-BO2-SA with a capital O (a letter) instead of 0 (zero). I corrected it in my source data.
- The row "A02-father A02-SA-A04-SA" in your expected result is doubled. I assume it should occur only once.
6
3
65,085,780
2020-12-1
https://stackoverflow.com/questions/65085780/what-is-difference-between-keras-backend-tensorflow-and-keras-from-tensorfl
I want to limit CPU cores and threads, and I found three ways to do this.

1) Keras backend + TensorFlow:

```python
from keras import backend as K
import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=2,
                        inter_op_parallelism_threads=4,
                        allow_soft_placement=True,
                        device_count={'CPU': 1})
session = tf.Session(config=config)
K.set_session(session)
```

2) Keras from TensorFlow:

```python
import tensorflow as tf
from tensorflow import keras

tf.config.threading.set_intra_op_parallelism_threads(2)
tf.config.threading.set_inter_op_parallelism_threads(4)
```

3) Environment variables:

```python
import os

os.environ['TF_NUM_INTRAOP_THREADS'] = '2'
os.environ['TF_NUM_INTEROP_THREADS'] = '4'
```

Do these three ways have the same effect? Lastly, I understood the parameters like this:

- intra_op_parallelism_threads: "number of CPU cores"
- inter_op_parallelism_threads: "number of threads"

Is this right? If I have misunderstood, please let me know. Thank you.
Not exactly - it's not as simple as that. As per the official documentation:

- `intra_op_parallelism_threads`: Certain operations like matrix multiplication and reductions can utilize parallel threads for speedups. A value of 0 means the system picks an appropriate number. (Refer this.)
- `inter_op_parallelism_threads`: Determines the number of parallel threads used by independent non-blocking operations. 0 means the system picks an appropriate number. (Refer this.)

So technically you cannot limit the number of CPUs, only the number of parallel threads, which, for the sake of limiting resource consumption, is sufficient.

Regarding the methods you are using:

The third approach lets you directly set the environment variables using the `os` library:

```python
import os

os.environ['TF_NUM_INTRAOP_THREADS'] = '2'
os.environ['TF_NUM_INTEROP_THREADS'] = '4'
```

The second approach is a method in TF2 that does exactly the same thing (sets the environment variables); the difference is that Keras is packaged into TF2 now:

```python
import tensorflow as tf
from tensorflow import keras

tf.config.threading.set_intra_op_parallelism_threads(2)
tf.config.threading.set_inter_op_parallelism_threads(4)
```

The first approach is for standalone Keras. It will work if Keras is set to the TensorFlow backend. Again, it does the same thing, which is to set the environment variables indirectly:

```python
from keras import backend as K
import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=2,
                        inter_op_parallelism_threads=4,
                        allow_soft_placement=True,
                        device_count={'CPU': 1})
session = tf.Session(config=config)
K.set_session(session)
```

If you still have doubts, you can check what happens to the environment variables after running all three independently, and then check the specific variable using `os`:

```python
print(os.environ.get('KEY_THAT_MIGHT_EXIST'))
```

For a better understanding of the topic, you can check this link, which details it quite well.

TL;DR: Use the second or third approach if you are working with TF2; use the first or third approach if you are using standalone Keras with the TensorFlow backend.
6
2
65,085,181
2020-12-1
https://stackoverflow.com/questions/65085181/adding-df-column-finding-matching-values-in-another-df-for-both-indexed-values-a
Simplified dfs:

```python
df = pd.DataFrame(
    {
        "ID": [6, 2, 4],
        "to ignore": ["foo", "whatever", "idk"],
        "value": ["A", "B", "A"],
    }
)

df2 = pd.DataFrame(
    {
        "ID_number": [1, 2, 3, 4, 5, 6],
        "A": [0.91, 0.42, 0.85, 0.84, 0.81, 0.88],
        "B": [0.11, 0.22, 0.45, 0.38, 0.01, 0.18],
    }
)
```

```
   ID to ignore value
0   6       foo     A
1   2  whatever     B
2   4       idk     A

   ID_number     A     B
0          1  0.91  0.11
1          2  0.42  0.22
2          3  0.85  0.45
3          4  0.84  0.38
4          5  0.81  0.01
5          6  0.88  0.18
```

I want to add a column to df that, for each row, matches df['ID'] to df2['ID_number'] and takes the value from the df2 column named by df['value'] (either 'A' or 'B').

We can add a column of matching values when the lookup column name in df2 is hard-coded, e.g. 'A':

```python
df["NewCol"] = df["ID"].map(
    df2.drop_duplicates("ID_number").set_index("ID_number")["A"]
)
```

Which gives:

```
   ID to ignore value  NewCol
0   6       foo     A    0.88
1   2  whatever     B    0.42
2   4       idk     A    0.84
```

But this doesn't give values for B, so the value 0.42 above, when looking for 'B', should instead be 0.22.

```python
df["NewCol"] = df["ID"].map(
    df2.drop_duplicates("ID_number").set_index("ID_number")[df["value"]]
)
```

obviously doesn't work. How do I do this?
You can set ID_number as the index in df2, then use `pd.Index.get_indexer` here:

```python
df2 = df2.set_index('ID_number')
r = df2.index.get_indexer(df['ID'])
c = df2.columns.get_indexer(df['value'])
df['new_col'] = df2.values[r, c]
```

```
   ID to ignore value  new_col
0   6       foo     A     0.88
1   2  whatever     B     0.22
2   4       idk     A     0.84
```

Timeits

Benchmarked using the setup below. Tested on Ubuntu 20.04.1 LTS (focal), CPython 3.8.5, IPython shell (7.18.1), pandas (1.1.4), numpy (1.19.2).

Setup:

```python
df2 = pd.DataFrame(
    {
        "ID_number": np.arange(1, 1_000_000 + 1),
        "A": np.random.rand(1_000_000),
        "B": np.random.rand(1_000_000),
    }
)

df = pd.DataFrame(
    {
        "ID": np.random.randint(1, 1_000_000, 50_000),
        "to ignore": ["anything"] * 50_000,
        "value": np.random.choice(["A", "B"], 50_000),
    }
)
```

Results:

@Vaishali

```
In [57]: %%timeit
    ...: mapper = df2.set_index('ID_number').to_dict('index')
    ...: df['NewCol'] = df.apply(lambda x: mapper[x['ID']][x['value']], axis=1)
2.09 s ± 68.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

@Ch3steR

```
In [58]: %%timeit
    ...: t = df2.set_index('ID_number')
    ...: r = t.index.get_indexer(df['ID'])
    ...: c = t.columns.get_indexer(df['value'])
    ...: df['new_col'] = df2.values[r, c]
49.7 ms ± 2.69 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

@Mayank

```
In [59]: %%timeit
    ...: x = df2.set_index('ID_number').stack()
    ...: y = df.set_index(['ID', 'value'])
    ...: y['NewCol'] = y.index.to_series().map(x.to_dict())
    ...: y.reset_index(inplace=True)
3.41 s ± 226 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

@Jezrael

```
In [60]: %%timeit
    ...: df11 = (df2.melt('ID_number', value_name='NewCol', var_name='value')
    ...:            .drop_duplicates(['ID_number','value'])
    ...:            .rename(columns={'ID_number':'ID'}))
    ...: df.merge(df11, on=['ID','value'], how='left')
693 ms ± 16.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
9
4
65,084,389
2020-12-1
https://stackoverflow.com/questions/65084389/why-am-i-blocked-from-using-the-discord-api
I'm coding a Discord bot and I got this error seemingly at random. From what I can interpret, I am temporarily blocked from the Discord API, but what does the "exceeding rate limits" part mean?

```
discord.errors.HTTPException: 429 Too Many Requests (error code: 0): You are being blocked from accessing our API temporarily due to exceeding our rate limits frequently. Please read our docs at https://discord.com/developers/docs/topics/rate-limits to prevent this moving forward.
```
Exceeding the rate limit means that the Discord API is explicitly telling you that you cannot read any more data from their API for a given amount of time. Looking at their rate limit docs, the rate limit varies depending on the endpoint you're talking to:

"The HTTP API implements a process for limiting and preventing excessive requests in accordance with RFC 6585. API users that regularly hit and ignore rate limits will have their API keys revoked, and be blocked from the platform. For more information on rate limiting of requests, please see the Rate Limits section."

To help, they conveniently return some information on where you stand with respect to their rate limiting:

```
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1470173023
X-RateLimit-Bucket: abcd1234
...
```

If you're using the requests library, you can easily check how close you are to exceeding the rate limit:

```python
req = requests.get("https://discord.com/api/path/to/the/endpoint")
# How many more requests you can make before X-RateLimit-Reset
req.headers["X-RateLimit-Remaining"]
```
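As an illustration (the helper name and defaults are mine, not Discord's), those headers can be turned into a "how long should I wait" computation before making the next request:

```python
import time

def seconds_until_reset(headers, now=None):
    """How long to wait before the next request, based on rate-limit headers.

    `headers` is a dict-like object of response headers; `now` is a Unix
    timestamp (defaults to the current time).
    """
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0  # Budget left; no need to wait
    # No requests left: wait until the reported reset timestamp
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    return max(0.0, reset_at - now)
```

Sleeping for this duration when the remaining budget hits zero is the usual way to avoid the temporary block described in the error.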
6
8
65,083,494
2020-12-1
https://stackoverflow.com/questions/65083494/unpack-python-tuple-with-s
I know the canonical way to unpack a tuple is like this:

```python
a, b, c = (1, 2, 3)
# or
(a, b, c) = (1, 2, 3)
```

but I noticed that you can also unpack a tuple like this:

```python
[a, b, c] = (1, 2, 3)
```

Does the second method incur any extra cost due to some sort of cast or list construction? Is there a way to inspect how the Python interpreter is dealing with this, akin to looking at the assembly from a compiler?
No, those are all exactly equivalent. One way to look at this empirically is to use the `dis` disassembler:

```python
>>> import dis
>>> dis.dis("a, b, c = (1, 2, 3)")
  1           0 LOAD_CONST               0 ((1, 2, 3))
              2 UNPACK_SEQUENCE          3
              4 STORE_NAME               0 (a)
              6 STORE_NAME               1 (b)
              8 STORE_NAME               2 (c)
             10 LOAD_CONST               1 (None)
             12 RETURN_VALUE
>>> dis.dis("(a, b, c) = (1, 2, 3)")
  1           0 LOAD_CONST               0 ((1, 2, 3))
              2 UNPACK_SEQUENCE          3
              4 STORE_NAME               0 (a)
              6 STORE_NAME               1 (b)
              8 STORE_NAME               2 (c)
             10 LOAD_CONST               1 (None)
             12 RETURN_VALUE
>>> dis.dis("[a, b, c] = (1, 2, 3)")
  1           0 LOAD_CONST               0 ((1, 2, 3))
              2 UNPACK_SEQUENCE          3
              4 STORE_NAME               0 (a)
              6 STORE_NAME               1 (b)
              8 STORE_NAME               2 (c)
             10 LOAD_CONST               1 (None)
             12 RETURN_VALUE
```

This is detailed in the formal language specification as part of the "target list". A relevant quote:

"Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows...."
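Another quick empirical check: compile all three forms and compare the raw bytecode of the resulting code objects, which turns out to be byte-for-byte identical:

```python
# Compile each assignment form to a code object
c1 = compile("a, b, c = (1, 2, 3)", "<demo>", "exec")
c2 = compile("(a, b, c) = (1, 2, 3)", "<demo>", "exec")
c3 = compile("[a, b, c] = (1, 2, 3)", "<demo>", "exec")

# co_code holds the raw bytecode; all three statements compile identically
print(c1.co_code == c2.co_code == c3.co_code)  # True
```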
14
18
65,051,581
2020-11-28
https://stackoverflow.com/questions/65051581/how-to-trigger-lifespan-startup-and-shutdown-while-testing-fastapi-app
Being very new to FastAPI, I am struggling to test slightly more difficult code than I saw in the tutorial. I use the fastapi_cache module and Redis like this:

```python
from fastapi import Depends, FastAPI, Query, Request
from fastapi_cache.backends.redis import CACHE_KEY, RedisCacheBackend
from fastapi_cache import caches, close_caches

app = FastAPI()

def redis_cache():
    return caches.get(CACHE_KEY)

@app.get('/cache')
async def test(
    cache: RedisCacheBackend = Depends(redis_cache),
    n: int = Query(..., gt=-1)
):
    # code that uses redis cache

@app.on_event('startup')
async def on_startup() -> None:
    rc = RedisCacheBackend('redis://redis')
    caches.set(CACHE_KEY, rc)

@app.on_event('shutdown')
async def on_shutdown() -> None:
    await close_caches()
```

test_main.py looks like this:

```python
import pytest
from httpx import AsyncClient
from .main import app

@pytest.mark.asyncio
async def test_cache():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.get("/cache?n=150")
```

When I run pytest, it sets the cache variable to None and the test fails. I think I understand why the code is not working, but how do I fix it so I can test my caching properly?
The point is that httpx does not implement the lifespan protocol, so it does not trigger the startup event handlers. For this, you need to use LifespanManager.

Install:

```
pip install asgi_lifespan
```

The code would then look like this:

```python
import pytest
from asgi_lifespan import LifespanManager
from httpx import AsyncClient
from .main import app

@pytest.mark.asyncio
async def test_cache():
    async with LifespanManager(app):
        async with AsyncClient(app=app, base_url="http://localhost") as ac:
            response = await ac.get("/cache")
```

More info here: https://github.com/encode/httpx/issues/350
5
10
65,076,264
2020-11-30
https://stackoverflow.com/questions/65076264/python-library-for-parsing-code-of-any-language-into-an-ast
I'm looking for a Python library for parsing code into its abstract syntax tree representation. There exists a built-in module, named ast, however, it is only designed for parsing Python code, to my understanding. I'm wondering if there is a similar Python library that suits the same purpose, but works with other programming languages. In particular, I'm looking for one that can parse JavaScript code. If one does not exist, any direction on how I could get started designing my own?
In general, when you need to parse code written in some language, it's almost always better to use a parser built for that language itself. For parsing JavaScript from Python, you may want to check out this module, which can be installed using pip and should work well enough.
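For comparison, this is what the built-in `ast` workflow mentioned in the question looks like for Python source; a JavaScript parser library typically exposes a similar parse-then-walk API:

```python
import ast

# Parse a snippet of Python source into an abstract syntax tree
tree = ast.parse("total = price * quantity")

# The module body holds one Assign node whose target is the name "total"
assign = tree.body[0]
print(type(assign).__name__)   # Assign
print(assign.targets[0].id)    # total
print(ast.dump(assign.value.op))  # Mult()
```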
6
4
65,075,158
2020-11-30
https://stackoverflow.com/questions/65075158/converting-pil-image-to-skimage
I have two modules in my project: the first works with an image in bytes format, the second requires a skimage object. I need to combine them. I have this code:

```python
import io
from PIL import Image
import skimage.io

area = (...)
image = Image.open(io.BytesIO(image_bytes))
image = image.crop(area)
image = skimage.io.imread(image)
```

But I get this error: How can I convert an image (object/variable) to skimage? I don't necessarily need PIL Image; this is just one way to work with a bytes image, since I need to crop my image. Thanks!
Scikit-image works with images stored as Numpy arrays - the same as OpenCV and wand. So, if you have a PIL Image, you can make a Numpy array for scikit-image like this:

```python
import numpy as np

# Make Numpy array for scikit-image from "PIL Image"
na = np.array(YourPILImage)
```

Just in case you want to go the other way, and make a PIL Image from a Numpy array, you can do:

```python
# Make "PIL Image" from Numpy array
pi = Image.fromarray(na)
```
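A minimal round trip, assuming Pillow and NumPy are installed (note that PIL reports size as (width, height), while the array shape is (height, width, channels)):

```python
import numpy as np
from PIL import Image

# Build a small red RGB image with PIL
pi = Image.new("RGB", (4, 3), color=(255, 0, 0))

# PIL Image -> Numpy array (what scikit-image expects)
na = np.array(pi)
print(na.shape)  # (3, 4, 3) -- height, width, channels

# Numpy array -> PIL Image
back = Image.fromarray(na)
```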
5
12
65,072,296
2020-11-30
https://stackoverflow.com/questions/65072296/django-execute-code-only-for-manage-py-runserver-not-for-migrate-help-e
We are using Django as the backend for a website that provides various things, among others answering certain requests using a neural network built with TensorFlow. For that, we created an AppConfig and added it to INSTALLED_APPS in Django's settings.py. This AppConfig then loads the neural network as soon as it is initialized.

settings.py:

```python
INSTALLED_APPS = [
    ...
    'bert_app.apps.BertAppConfig',
]
```

.../bert_apps/app.py:

```python
class BertAppConfig(AppConfig):
    name = 'bert_app'

    if 'bert_app.apps.BertAppConfig' in settings.INSTALLED_APPS:
        predictor = BertPredictor()  # loads the ANN
```

Now while that works and does what it should, the ANN is loaded for every single command run through manage.py. While we of course want it to be executed if you call `manage.py runserver`, we don't want it to be run for `manage.py migrate`, `manage.py help`, or any other command.

I am generally not sure if this is the proper way to load an ANN for a Django backend, so does anybody have tips on how to do this properly? I can imagine that loading the model on startup is not best practice, and I am very open to suggestions on how to do it properly instead.

However, there is also some other code besides the actual model loading that takes a few seconds and that is definitely supposed to be executed as soon as the server starts up (so on `manage.py runserver`), but not on `manage.py help` (as it takes a few seconds as well). Is there some quick fix to tell Django to execute it only on runserver and not for its other commands?
I had a similar problem and solved it by checking `sys.argv`:

```python
import sys

class SomeAppConfig(AppConfig):
    def ready(self, *args, **kwargs):
        is_manage_py = any(arg.casefold().endswith("manage.py") for arg in sys.argv)
        is_runserver = any(arg.casefold() == "runserver" for arg in sys.argv)

        if (is_manage_py and is_runserver) or (not is_manage_py):
            init_your_thing_here()
```

Now a bit closer to the `not is_manage_py` part: in production you run your web server with uwsgi/uvicorn/..., which is still a web server, except it's not run with manage.py. Most likely, it's the only thing that you will ever run without manage.py.

Use AppConfig.ready() - it's intended for this:

"Subclasses can override this method to perform initialization tasks such as registering signals. It is called as soon as the registry is fully populated." - [Django documentation]

To get your AppConfig back, use:

```python
from django.apps import apps

apps.get_app_config(app_name)
# apps.get_app_configs()  # all of them
```
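The same check can be factored into a small pure function (the helper name is mine) that is easy to unit-test outside of Django:

```python
import sys

def should_initialize(argv=None):
    """Decide whether heavy startup work should run for this process."""
    argv = sys.argv if argv is None else argv
    is_manage_py = any(arg.casefold().endswith("manage.py") for arg in argv)
    is_runserver = any(arg.casefold() == "runserver" for arg in argv)
    # Run for `manage.py runserver`, and for non-manage.py servers (uwsgi/uvicorn/...)
    return (is_manage_py and is_runserver) or not is_manage_py
```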
6
8
65,071,206
2020-11-30
https://stackoverflow.com/questions/65071206/override-all-python-comparison-methods-in-one-declaration
Suppose you have a simple class like A below. The comparison methods are all virtually the same except for the comparison itself. Is there a shortcut around declaring the six methods, one method that supports all comparisons, something like B? I ask mainly because B seems more Pythonic to me, and I am surprised that my searching didn't find such a route.

```python
class A:
    def __init__(self, number: float, metadata: str):
        self.number = number
        self.metadata = metadata

    def __lt__(self, other):
        return self.number < other.number

    def __le__(self, other):
        return self.number <= other.number

    def __gt__(self, other):
        return self.number > other.number

    def __ge__(self, other):
        return self.number >= other.number

    def __eq__(self, other):
        return self.number == other.number

    def __ne__(self, other):
        return self.number != other.number


class B:
    def __init__(self, number: float, metadata: str):
        self.number = number
        self.metadata = metadata

    def __compare__(self, other, comparison):
        return self.number.__compare__(other.number, comparison)
```
The functools module provides the `total_ordering` decorator, which is meant to supply all the comparison methods, given that you define `__eq__()` and at least one of `__lt__()`, `__le__()`, `__gt__()`, or `__ge__()`. See this answer from Martijn Pieters.
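A sketch of class B using `total_ordering`: you supply `__eq__` and one ordering method, and the decorator fills in the rest:

```python
from functools import total_ordering

@total_ordering
class B:
    def __init__(self, number: float, metadata: str):
        self.number = number
        self.metadata = metadata

    def __eq__(self, other):
        return self.number == other.number

    def __lt__(self, other):
        return self.number < other.number

# __le__, __gt__, __ge__ (and __ne__ via __eq__) now work automatically
print(B(1, "a") <= B(2, "b"))  # True
print(B(3, "a") > B(2, "b"))   # True
```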
8
7
65,044,430
2020-11-27
https://stackoverflow.com/questions/65044430/plotly-create-a-scatter-with-categorical-x-axis-jitter-and-multi-level-axis
I would like to make a graph with a multi-level x axis like in the following picture: import plotly.graph_objects as go fig = go.Figure() fig.add_trace( go.Scatter( x = [df['x'], df['x1']], y = df['y'], mode='markers' ) ) But also I would like to put jitter on the x-axis like in the next picture: So far I can make each graph independently using the next code: import plotly.express as px fig = px.strip(df, x=[df["x"], df['x1']], y="y", stripmode='overlay') Is it possible to combine the jitter and the multi-level axis in one plot? Here is a code to reproduce the dataset: import numpy as np import pandas as pd import random '''Create DataFrame''' price = np.append( np.random.normal(20, 5, size=(1, 50)), np.random.normal(40, 2, size=(1, 10)) ) quantity = np.append( np.random.randint(1, 5, size=(50)), np.random.randint(8, 12, size=(10)) ) firstLayerList = ['15 in', '16 in'] secondLayerList = ['1/2', '3/8'] vendorList = ['Vendor1','Vendor2','Vendor3'] data = { 'Width': [random.choice(firstLayerList) for i in range(len(price))], 'Length': [random.choice(secondLayerList) for i in range(len(price))], 'Vendor': [random.choice(vendorList) for i in range(len(price))], 'Quantity': quantity, 'Price': price } df = pd.DataFrame.from_dict(data)
Firstly - thanks for the challenge! There aren't many challenging Plotly questions these days. The key elements to creating a scatter graph with jitter are: Using mode: 'box' - to create a box-plot, not a scatter plot. Setting 'boxpoints': 'all' - so all points are plotted. Using 'pointpos': 0 - to center the points on the x-axis. Removing (hiding!) the whisker boxes using: 'fillcolor': 'rgba(255,255,255,0)' 'line': {'color': 'rgba(255,255,255,0)'} DataFrame preparation: This code simply splits the main DataFrame into a frame for each vendor, thus allowing a trace to be created for each, with their own colour. df1 = df[df['Vendor'] == 'Vendor1'] df2 = df[df['Vendor'] == 'Vendor2'] df3 = df[df['Vendor'] == 'Vendor3'] Plotting code: The plotting code could use a for-loop if you like. However, I've intentionally kept it more verbose, so as to increase clarity. import plotly.io as pio layout = {'title': 'Categorical X-Axis, with Jitter'} traces = [] traces.append({'x': [df1['Width'], df1['Length']], 'y': df1['Price'], 'name': 'Vendor1', 'marker': {'color': 'green'}}) traces.append({'x': [df2['Width'], df2['Length']], 'y': df2['Price'], 'name': 'Vendor2', 'marker': {'color': 'blue'}}) traces.append({'x': [df3['Width'], df3['Length']], 'y': df3['Price'], 'name': 'Vendor3', 'marker': {'color': 'orange'}}) # Update (add) trace elements common to all traces. for t in traces: t.update({'type': 'box', 'boxpoints': 'all', 'fillcolor': 'rgba(255,255,255,0)', 'hoveron': 'points', 'hovertemplate': 'value=%{x}<br>Price=%{y}<extra></extra>', 'line': {'color': 'rgba(255,255,255,0)'}, 'pointpos': 0, 'showlegend': True}) pio.show({'data': traces, 'layout': layout}) Graph: The data behind this graph was generated using np.random.seed(73), against the dataset creation code posted in the question. Comments (TL;DR): The example code shown here uses the lower-level Plotly API, rather than a convenience wrapper such as graph_objects or express. 
The reason is that I (personally) feel it's helpful to users to show what is occurring 'under the hood', rather than masking the underlying code logic with a convenience wrapper. This way, when the user needs to modify a finer detail of the graph, they will have a better understanding of the lists and dicts which Plotly is constructing for the underlying graphing engine (orca). And this use-case is a prime example of this reasoning, as it’s edging Plotly past its (current) design point.
15
20
65,061,121
2020-11-29
https://stackoverflow.com/questions/65061121/git-repository-within-docker-python-image
I have a Dockerfile where my image is python:3.7-alpine. In my project, I use a git repository I need to download. Is there any way to do that? My Dockerfile: FROM python:3.7-alpine ENV DOCKER_APP True COPY requirements.txt . RUN pip install -r requirements.txt COPY . app/ WORKDIR app/ ENTRYPOINT ["python3", "main.py"] My requirements: certifi==2020.6.20 requests==2.24.0 urllib3==1.25.10 git+https://github.com/XXX/YYY Thanks
Add the following to your Dockerfile (the Alpine base image doesn't ship with git, which pip needs in order to install the git+https requirement): RUN apk update RUN apk add git
8
13
65,059,995
2020-11-29
https://stackoverflow.com/questions/65059995/convert-pyspark-dataframe-into-list-of-python-dictionaries
Hi I'm new to pyspark and I'm trying to convert pyspark.sql.dataframe into list of dictionaries. Below is my dataframe, the type is <class 'pyspark.sql.dataframe.DataFrame'>: +------------------+----------+------------------------+ | title|imdb_score|Worldwide_Gross(dollars)| +------------------+----------+------------------------+ | The Eight Hundred| 7.2| 460699653| | Bad Boys for Life| 6.6| 426505244| | Tenet| 7.8| 334000000| |Sonic the Hedgehog| 6.5| 308439401| | Dolittle| 5.6| 245229088| +------------------+----------+------------------------+ I would like to convert it into: [{"title":"The Eight Hundred", "imdb_score":7.2, "Worldwide_Gross(dollars)":460699653}, {"title":"Bad Boys for Life", "imdb_score":6.6, "Worldwide_Gross(dollars)":426505244}, {"title":"Tenet", "imdb_score":7.8, "Worldwide_Gross(dollars)":334000000}, {"title":"Sonic the Hedgehog", "imdb_score":6.5, "Worldwide_Gross(dollars)":308439401}, {"title":"Dolittle", "imdb_score":5.6, "Worldwide_Gross(dollars)":245229088}] How should I do this? Thanks in advance!
You can map each row into a dictionary and collect the results: df.rdd.map(lambda row: row.asDict()).collect()
8
9
65,041,691
2020-11-27
https://stackoverflow.com/questions/65041691/is-python-dictionary-async-safe
I have created a dictionary in my Python application where I save the data and I have two tasks that run concurrently and get data from external APIs. Once they get the data, they update the dictionary - each with a different key in the dictionary. I want to understand if the dictionary is async safe or do I need to put a lock when the dictionary is read/updated? The tasks also read the last saved value each time. my_data = {} asyncio.create_task(call_func_one_coroutine) asyncio.create_task(call_func_two_coroutine) async def call_func_one_coroutine(): data = await goto_api_get_data() my_data['one'] = data + my_data['one'] async def call_func_two_coroutine(): data = await goto_api_another_get_data() my_data['two'] = data + my_data['two']
I want to understand if the dictionary is async safe or do I need to put a lock when the dictionary is read/updated? Asyncio is based on cooperative multitasking, and can only switch tasks at an explicit await expression or at the async with and async for statements. Since an update of a single dictionary can never involve awaiting (the await must complete before the update begins), it is effectively atomic as far as async multitasking is concerned, and you don't need to lock it. This applies to all data structures accessed from async code. To take another example where there is no problem: # correct - there are no awaits between two accesses to the dict d key = await key_queue.get() if key in d: d[key] = calc_value(key) An example where a dict modification would not be async-safe would involve multiple accesses to the dict separated by awaits. For example: # incorrect, d[key] could appear while we're reading the value, # in which case we'd clobber the existing key if key not in d: d[key] = await read_value() To correct it, you can either add another check after the await, or use an explicit lock: # correct (1), using double check if key not in d: value = await read_value() # Check again whether the key is vacant. Since there are no awaits # between this check and the update, the operation is atomic. if key not in d: d[key] = value # correct (2), using a shared asyncio.Lock: async with d_lock: # Single check is sufficient because the lock ensures that # no one can modify the dict while we're reading the value. if key not in d: d[key] = await read_value()
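The double-check pattern can be exercised end to end. This standalone sketch (names like read_value are illustrative) races two tasks for the same key and shows the dict stays consistent:

```python
import asyncio

d = {}

async def read_value(key):
    await asyncio.sleep(0.01)        # simulated I/O; other tasks may run here
    return f"value-for-{key}"

async def ensure(key):
    if key not in d:
        value = await read_value(key)
        # re-check after the await: another task may have filled the key
        if key not in d:
            d[key] = value

async def main():
    # two tasks race for the same key; the double check keeps it consistent
    await asyncio.gather(ensure("a"), ensure("a"), ensure("b"))

asyncio.run(main())
print(d)
```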
17
27
65,023,526
2020-11-26
https://stackoverflow.com/questions/65023526/runtimeerror-the-size-of-tensor-a-4000-must-match-the-size-of-tensor-b-512
I'm trying to build a model for document classification. I'm using BERT with PyTorch. I got the bert model with below code. bert = AutoModel.from_pretrained('bert-base-uncased') This is the code for training. for epoch in range(epochs): print('\n Epoch {:} / {:}'.format(epoch + 1, epochs)) #train model train_loss, _ = modhelper.train(proc.train_dataloader) #evaluate model valid_loss, _ = modhelper.evaluate() #save the best model if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(modhelper.model.state_dict(), 'saved_weights.pt') # append training and validation loss train_losses.append(train_loss) valid_losses.append(valid_loss) print(f'\nTraining Loss: {train_loss:.3f}') print(f'Validation Loss: {valid_loss:.3f}') this is my train method, accessible with the object modhelper. def train(self, train_dataloader): self.model.train() total_loss, total_accuracy = 0, 0 # empty list to save model predictions total_preds=[] # iterate over batches for step, batch in enumerate(train_dataloader): # progress update after every 50 batches. if step % 50 == 0 and not step == 0: print(' Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader))) # push the batch to gpu #batch = [r.to(device) for r in batch] sent_id, mask, labels = batch # clear previously calculated gradients self.model.zero_grad() print(sent_id.size(), mask.size()) # get model predictions for the current batch preds = self.model(sent_id, mask) #This line throws the error # compute the loss between actual and predicted values self.loss = self.cross_entropy(preds, labels) # add on to the total loss total_loss = total_loss + self.loss.item() # backward pass to calculate the gradients self.loss.backward() # clip the the gradients to 1.0. It helps in preventing the exploding gradient problem torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0) # update parameters self.optimizer.step() # model predictions are stored on GPU. 
So, push it to CPU #preds=preds.detach().cpu().numpy() # append the model predictions total_preds.append(preds) # compute the training loss of the epoch avg_loss = total_loss / len(train_dataloader) # predictions are in the form of (no. of batches, size of batch, no. of classes). # reshape the predictions in form of (number of samples, no. of classes) total_preds = np.concatenate(total_preds, axis=0) #returns the loss and predictions return avg_loss, total_preds preds = self.model(sent_id, mask) this line throws the following error(including full traceback). Epoch 1 / 1 torch.Size([32, 4000]) torch.Size([32, 4000]) Traceback (most recent call last): File "<ipython-input-39-17211d5a107c>", line 8, in <module> train_loss, _ = modhelper.train(proc.train_dataloader) File "E:\BertTorch\model.py", line 71, in train preds = self.model(sent_id, mask) File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "E:\BertTorch\model.py", line 181, in forward #pass the inputs to the model File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "E:\BertTorch\venv\lib\site-packages\transformers\modeling_bert.py", line 837, in forward embedding_output = self.embeddings( File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "E:\BertTorch\venv\lib\site-packages\transformers\modeling_bert.py", line 201, in forward embeddings = inputs_embeds + position_embeddings + token_type_embeddings RuntimeError: The size of tensor a (4000) must match the size of tensor b (512) at non-singleton dimension 1 If you observe I've printed the torch size in the code. print(sent_id.size(), mask.size()) The output of that line of code is torch.Size([32, 4000]) torch.Size([32, 4000]). as we can see that size is the same but it throws the error. 
Please put your thoughts. Really appreciate it. please comment if you need further information. I'll be quick to add whatever is required.
The issue is BERT's limit on sequence length. I passed 4000 tokens where the maximum supported is 512 (you have to give up 2 more for '[CLS]' and '[SEP]' at the beginning and the end of the string, so it is really 510). Reduce the token count or use some other model for your problem, something like Longformer as suggested by @cronoik in the comments above. Thanks.
9
20
65,031,973
2020-11-27
https://stackoverflow.com/questions/65031973/how-to-select-specific-data-variables-from-xarray-dataset
BACKGROUND I am trying to download GFS weather data netcdf4 files via xarray & OPeNDAP. Big thanks to Vorticity0123 for their prior post, which allowed me to get the bones of the python script sorted (as below). PROBLEM Thing is, the GFS dataset has 195 data variables, But I don't require the majority, I only need ten of them. ugrd100m, vgrd100m, dswrfsfc, tcdcclm, tcdcblcll, tcdclcll, tcdcmcll, tcdchcll, tmp2m, gustsfc HELP REQUESTED I've gone through the xarray readthedocs page and elsewhere, but I couldn't figure out a way to narrow down my dataset to only the ten data variables. Does anyone know how to narrow down the list of variables in a dataset? PYTHON SCRIPT import numpy as np import xarray as xr # File Details dt = '20201124' res = 25 step = '1hr' run = '{:02}'.format(18) # URL URL = f'http://nomads.ncep.noaa.gov:80/dods/gfs_0p{res}_{step}/gfs{dt}/gfs_0p{res}_{step}_{run}z' # Load data dataset = xr.open_dataset(URL) time = dataset.variables['time'] lat = dataset.variables['lat'][:] lon = dataset.variables['lon'][:] lev = dataset.variables['lev'][:] # Narrow Down Selection time_toplot = time lat_toplot = np.arange(-43, -17, 0.5) lon_toplot = np.arange(135, 152, 0.5) lev_toplot = np.array([1000]) # Select required data via xarray dataset = dataset.sel(time=time_toplot, lon=lon_toplot, lat=lat_toplot) print(dataset)
You can use the dict-like syntax of xarray. variables = [ 'ugrd100m', 'vgrd100m', 'dswrfsfc', 'tcdcclm', 'tcdcblcll', 'tcdclcll', 'tcdcmcll', 'tcdchcll', 'tmp2m', 'gustsfc' ] dataset[variables] Gives you: <xarray.Dataset> Dimensions: (lat: 721, lon: 1440, time: 121) Coordinates: * time (time) datetime64[ns] 2020-11-24T18:00:00 ... 2020-11-29T18:00:00 * lat (lat) float64 -90.0 -89.75 -89.5 -89.25 ... 89.25 89.5 89.75 90.0 * lon (lon) float64 0.0 0.25 0.5 0.75 1.0 ... 359.0 359.2 359.5 359.8 Data variables: ugrd100m (time, lat, lon) float32 ... vgrd100m (time, lat, lon) float32 ... dswrfsfc (time, lat, lon) float32 ... tcdcclm (time, lat, lon) float32 ... tcdcblcll (time, lat, lon) float32 ... tcdclcll (time, lat, lon) float32 ... tcdcmcll (time, lat, lon) float32 ... tcdchcll (time, lat, lon) float32 ... tmp2m (time, lat, lon) float32 ... gustsfc (time, lat, lon) float32 ... Attributes: title: GFS 0.25 deg starting from 18Z24nov2020, downloaded Nov 24 ... Conventions: COARDS\nGrADS dataType: Grid history: Sat Nov 28 05:52:44 GMT 2020 : imported by GrADS Data Serve...
8
15
65,048,547
2020-11-28
https://stackoverflow.com/questions/65048547/how-to-get-number-of-days-between-two-dates-using-pandas
I'm trying to get number of days between two dates using below function df['date'] = pd.to_datetime(df.date) # Creating a function that returns the number of days def calculate_days(date): today = pd.Timestamp('today') return today - date # Apply the function to the column date df['days'] = df['date'].apply(lambda x: calculate_days(x)) The results looks like this 153 days 10:16:46.294037 but I want it to say 153. How do I handle this?
For performance, subtract the values without apply to avoid loops; use Series.rsub to subtract from the right side: df['date'] = pd.to_datetime(df.date) df['days'] = df['date'].rsub(pd.Timestamp('today')).dt.days which works like: df['days'] = (pd.Timestamp('today') - df['date']).dt.days If you want to keep your solution: df['date'] = pd.to_datetime(df.date) def calculate_days(date): today = pd.Timestamp('today') return (today - date).days df['days'] = df['date'].apply(lambda x: calculate_days(x)) Or: df['date'] = pd.to_datetime(df.date) def calculate_days(date): today = pd.Timestamp('today') return (today - date) df['days'] = df['date'].apply(lambda x: calculate_days(x)).dt.days
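Under the hood, .dt.days is just the days component of a timedelta; the same attribute exists on plain datetime subtraction, which you can check with the standard library alone (dates are fixed here so the result is reproducible):

```python
from datetime import datetime

today = datetime(2020, 11, 28)   # a fixed stand-in for pd.Timestamp('today')
date = datetime(2020, 6, 28)

delta = today - date             # a timedelta, e.g. '153 days, 0:00:00'
print(delta.days)                # -> 153, the integer the question wants
```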
5
8
65,045,565
2020-11-28
https://stackoverflow.com/questions/65045565/why-is-random-shuffle-so-much-slower-than-using-sorted-function
When using pythons random.shuffle function, I noticed it went significantly faster to use sorted(l, key=lambda _: random.random()) than random.shuffle(l). As far as I understand, both ways produce completely random lists, so why does shuffle take so much longer? Below are the times using timeit module. from timeit import timeit setup = 'import random\nl = list(range(1000))' # 5.542 seconds print(timeit('random.shuffle(l)', setup=setup, number=10000)) # 1.878 seconds print(timeit('sorted(l, key=lambda _: random.random())', setup=setup, number=10000))
On CPython (the reference interpreter) random.shuffle is implemented in Python (and implemented in terms of _randbelow, itself a Python wrapper around getrandbits, the C level function that ultimately implements it, and which can end up being called nearly twice as often as strictly necessary in an effort to ensure the outputs are unbiased); sorted (and random.random) are implemented in C. The overhead of performing work in Python is higher than performing similar work in C.
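For reference, the algorithm shuffle runs at Python speed is a Fisher-Yates loop; a simplified sketch using random.randrange in place of the internal _randbelow helper:

```python
import random

def fisher_yates(seq):
    # walk backwards, swapping each slot with a random earlier (or same) slot
    for i in range(len(seq) - 1, 0, -1):
        j = random.randrange(i + 1)
        seq[i], seq[j] = seq[j], seq[i]

l = list(range(1000))
fisher_yates(l)   # in-place shuffle: same elements, new order
```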
6
5
65,044,048
2020-11-27
https://stackoverflow.com/questions/65044048/how-to-check-if-na-type-variable-is-na-or-not-from-a-pandas-dataframe-np-na
I have a dataframe with a column whose values look something like: YEAR_TORONTO 0 <NA> 1 2016 2 <NA> 3 1999 I need to check each element of this dataframe individually via a for loop for other reasons outside this segment of code, so I'm looking for solutions that comply with my implementation. Essentially the code I have at the moment to check the presence of is: if np.isnan(df.get("YEAR_TORONTO")[row]): This leads to me getting the following error for the <NA> values: boolean value of NA is ambiguous Any idea of what I can do to fix this error? Help much appreciated
As sammywemmy said, pd.isna() should work. >>> d = pd.Series([1,2,pd.NA,3]) >>> d 0 1 1 2 2 <NA> 3 3 dtype: object >>> d.isna() 0 False 1 False 2 True 3 False dtype: bool
7
7
65,041,605
2020-11-27
https://stackoverflow.com/questions/65041605/why-does-python-detect-the-symbol-%c2%b2-as-a-digit
Can someone say if "²" is a symbol or a digit? (alt+1277, power of two) print("²".isdigit()) # True print("²".isnumeric()) # True Because Python says it's a digit, but it's not actually a digit. Am I wrong? Or it's a bug?
It is explicitly documented as a digit: str.isdigit() Return True if all characters in the string are digits and there is at least one character, False otherwise. Digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Formally, a digit is a character that has the property value Numeric_Type=Digit or Numeric_Type=Decimal. Regarding Numeric_Type, this is defined by Unicode: Numeric_Type=Digit Variants of positional decimal characters (Numeric_Type=Decimal) or sequences thereof. These include super/subscripts, enclosed, or decorated by the addition of characters such as parentheses, dots, or commas.
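The difference is visible when comparing str.isdigit with its stricter sibling str.isdecimal, which is the predicate that matches what int() accepts:

```python
s = "\u00b2"  # SUPERSCRIPT TWO

print(s.isdigit())    # True  - Numeric_Type=Digit
print(s.isnumeric())  # True  - superset: any Numeric_Type
print(s.isdecimal())  # False - not usable to build base-10 numbers

try:
    int(s)
except ValueError as e:
    print("int() rejects it:", e)
```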
12
17
65,034,771
2020-11-27
https://stackoverflow.com/questions/65034771/how-to-truncate-a-bert-tokenizer-in-transformers-library
I am using the Scibert pretrained model to get embeddings for various texts. The code is as follows: from transformers import * tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased', model_max_length=512, truncation=True) model = AutoModel.from_pretrained('allenai/scibert_scivocab_uncased') I have added both the max length and truncation parameters to tokenizers, but unfortunately, they don't truncate the results.If I run a longer text through the tokenizer: inputs = tokenizer("""long text""") I get the following error: Token indices sequence length is longer than the specified maximum sequence length for this model (605 > 512). Running this sequence through the model will result in indexing errors Now obviously I can't run this through the model due to having too long sequences of tensors. What is the easiest way to truncate the input to fit the maximum sequence length of 512?
truncation is not a parameter of the class constructor (class reference), but a parameter of the __call__ method. Therefore you should use: tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased', model_max_length=512) len(tokenizer(text, truncation=True).input_ids) Output: 512
9
19
65,021,157
2020-11-26
https://stackoverflow.com/questions/65021157/increase-pythons-stdout-buffer-size
Is there a way to increase the stdout buffer size from 8182 in Python or to delay the flush until I actually call flush? Things I've tried that don't work: I can get around this issue on Windows because I can access the buffer directly (e.g. see my answer to this post). But this doesn't work for Unix. I can increase the buffer size for a file by passing buffer to the constructor, however stdout is already constructed. Turning off buffering (python -u) obviously makes things worse! Using a temporary buffer encounters the same problems - stdout is flushed after every 8192nd byte is copied from the temporary buffer. Rationale: The aim here is to reduce console flickering. Buffering everything, as per this question indeed works, for instance when I try in C or by using the Windows API in Python, but the 8182 limit in Python seems to be causing problems that I can't get around on Unix.
Found the answer, actually very simple: my_stdout = open( 1, "w", buffering = 100000 ) 1 is the fileno for stdout. sys.stdout = my_stdout can be used to make the change to the default print target. I've only tested this on Unix.
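The effect is observable by pointing such a file object at a pipe instead of the terminal: writes smaller than the buffer don't reach the reader until an explicit flush (a Unix-only sketch, since it relies on a non-blocking pipe read):

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)                 # so an empty pipe reads as 'no data'
buffered = os.fdopen(w, "w", buffering=100_000)

buffered.write("hello")                   # sits in the 100 kB buffer
try:
    before = os.read(r, 100)              # nothing has crossed the pipe yet
except BlockingIOError:
    before = b""

buffered.flush()                          # now the bytes actually move
after = os.read(r, 100)
print(before, after)
```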
6
6
65,014,768
2020-11-26
https://stackoverflow.com/questions/65014768/cant-find-my-python-module-after-installing-on-github-actions
When I install my example module in the local environment, python is able to find it when the module is imported. Whereas, when executed by Github Actions, the workflow fails and the reported error is that my module (ci-test) is not installed. main.yaml: - name: Install ci-test package run: | python setup.py build python setup.py install python -c "import ci_test" The full yaml file is located here. And the error output of Github Actions is: Installed /home/runner/.local/lib/python3.7/site-packages/ci_test-0.0.1-py3.7.egg Processing dependencies for ci-test==0.0.1 Finished processing dependencies for ci-test==0.0.1 Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'ci_test' Error: Process completed with exit code 1.
The issue is not related to GitHub Actions. Looking at your repository, it is organized this way. ci-test/ |-- requirements.txt |-- setup.py |-- src/ | |-- ci_test/ | | |-- app.py | | |-- __init__.py | | |-- main.py |-- tests/ | |-- app_test.py | |-- __init__.py | |-- main_test.py And in your setup.py you describe your packages as: packages=['src/ci_test', 'tests'] Your ci_test package can be imported with the following path: import src.ci_test This question has some best practices about python project structure: What is the best project structure for a Python application?
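If the goal is for import ci_test to work (rather than import src.ci_test), the usual src-layout fix is to declare src as the package root in setup.py with package_dir={"": "src"} and packages=find_packages(where="src"). A sketch that recreates the layout and shows what find_packages discovers (the temp-dir scaffolding is only for illustration):

```python
import os
import tempfile
from setuptools import find_packages

# recreate the question's src/ layout in a scratch directory
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src", "ci_test"))
open(os.path.join(root, "src", "ci_test", "__init__.py"), "w").close()

# this is what setup.py should pass as packages=...,
# together with package_dir={"": "src"}
found = find_packages(where=os.path.join(root, "src"))
print(found)   # ['ci_test'] - tests/ is outside src/ and is not picked up
```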
9
6
65,024,477
2020-11-26
https://stackoverflow.com/questions/65024477/walrus-operator-in-list-comprehensions-python
When coding I really like to use list comprehensions to transform data and I try to avoid for loops. Now I discovered that the walrus operator can be really handy for this, but when I try to use it in my code it doesn't seem to work. I've got the following code and want to transform the strings containing data about the timestamps into datetime objects in one easy line, but I get an syntax error and I am not sure what the right syntax would be, does anyone know what I did wrong? from datetime import datetime timestamps = ['30:02:17:36', '26:07:44:25','25:19:30:38','25:07:40:47','24:18:29:05','24:06:13:15','23:17:36:39', '23:00:14:52','22:07:04:33','21:15:42:20','21:04:27:53', '20:12:09:22','19:21:46:25'] timestamps_dt = [ datetime(days=day,hours=hour,minutes=mins,seconds=sec) for i in timestamps day,hour,mins,sec := i.split(':') ]
Since the walrus operator does not support unpacking, the expression day,hour,mins,sec := i.split(':') is invalid. The walrus operator is recommended mostly for comparisons, especially when you need to reuse a value from the comparison. Therefore, I would argue that for this case a simple datetime.strptime() would be better. If you must use the walrus operator in your list comprehension, you may do from datetime import datetime timestamps = ['30:02:17:36', '26:07:44:25','25:19:30:38','25:07:40:47','24:18:29:05','24:06:13:15','23:17:36:39', '23:00:14:52','22:07:04:33','21:15:42:20','21:04:27:53', '20:12:09:22','19:21:46:25'] timestamps_dt = [ datetime(2020,11, *map(int, time)) # map them from str to int for i in timestamps if (time := i.split(':')) # assign the list to the variable time ] print(timestamps_dt) But then it raises the question of why not just timestamps_dt = [ datetime(2020,11, *map(int, i.split(':'))) for i in timestamps ] Reference PEP-572
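For completeness, where the walrus operator does pay off in a comprehension is when a computed value is needed both in the filter and in the result, so the computation runs once per element; a small self-contained illustration (expensive is a stand-in name):

```python
def expensive(x):
    # stands in for any costly transformation
    return x * x - 10

data = [1, 2, 3, 4, 5]

# without :=, expensive(x) would have to run twice per kept element
results = [y for x in data if (y := expensive(x)) > 0]
print(results)   # [6, 15]
```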
23
30
65,018,676
2020-11-26
https://stackoverflow.com/questions/65018676/how-to-sum-even-and-odd-values-with-one-for-loop-and-no-if-condition
I am taking a programming class in college and one of the exercises in the problem sheet was to write this code: number = int(input()) x = 0 y = 0 for n in range(number): if n % 2 == 0: x += n else: y += n print(x) print(y) using only one "for" loop, and no "while" or "if". The purpose of the code is to find the sum of the even and the sum of the odd numbers from zero to the number inputted and print it to the screen. Be reminded that at this time we aren't supposed to know about functions. I've been trying for a long time now and can't seem to find a way of doing it without using "if" statements to know if the loop variable is even or odd.
for n in range(number): x += (1 - n % 2) * n y += (n % 2) * n
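The trick works because n % 2 is 0 for even n and 1 for odd n, so in each iteration exactly one of the two terms is zeroed out. A quick standalone check against the straightforward sums:

```python
number = 10

x = y = 0
for n in range(number):
    x += (1 - n % 2) * n   # adds n only when n is even
    y += (n % 2) * n       # adds n only when n is odd

# the same sums computed the obvious way, for comparison
even_sum = sum(n for n in range(number) if n % 2 == 0)
odd_sum = sum(n for n in range(number) if n % 2 == 1)
print(x, y)   # 20 25
```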
24
13
65,014,519
2020-11-26
https://stackoverflow.com/questions/65014519/docker-log-dont-show-python-print-output
I have a Django Proj running in Docker Container My Debug=True but docker up logging doesn't show any print('xxxx') output. Is there a way to fix it? thanks!
After a long search I found this https://serverfault.com/a/940357 Add flush=True print(datetime.now(), flush=True) Or add PYTHONUNBUFFERED: 1 to docker-compose.yml which is added by PyCharm by default version: '3.6' services: test: .... environment: PYTHONUNBUFFERED: 1 # <--- ....
7
15
65,013,956
2020-11-25
https://stackoverflow.com/questions/65013956/a-function-like-index-get-loc-but-for-multiple-values
I have the following data frame: df = pd.DataFrame({'color': ['Yellow', 'Green', 'Red', 'Orange'], 'weight': [0.5, 4, 1, 10]}, index = ['Banana','Melon','Apple','Pumpkin']) which looks like this: Banana Yellow 0.5 Melon Green 4.0 Apple Red 1.0 Pumpkin Orange 10.0 What I'm trying to do is access the integer locations of a subset of 'Banana','Melon','Apple','Pumpkin'. I know that pandas.index.get_loc is a way of doing this for a single element of index. For example: df.index.get_loc('Apple') returns 2. However, I would like to do this for multiple elements at once. I tried, among other things, df.index.get_loc('Apple','Pumpkin'), hoping to get the list [2,3], but this gave the following error: ValueError: Invalid fill method. Expecting pad (ffill), backfill (bfill) or nearest. Got pumpkin Is there a function that will allow me to get multiple integer locations at once?
Try with get_indexer which accept list like input df.index.get_indexer(['Apple','Pumpkin']) Out[104]: array([2, 3], dtype=int32)
6
11
65,012,603
2020-11-25
https://stackoverflow.com/questions/65012603/removing-rows-contains-non-english-words-in-pandas-dataframe
I have a pandas data frame that consists of 4 rows; the English rows contain news titles, but some rows contain non-English words, like this one: **She’s the Hollywood Power Behind Those ...** I want to remove all rows like this one, i.e. all rows that contain at least one non-English character, from the Pandas data frame.
If using Python >= 3.7: df[df['col'].map(lambda x: x.isascii())] where col is your target column. Data: df = pd.DataFrame({ 'colA': ['**She’s the Hollywood Power Behind Those ...**', 'Hello, world!', 'Cainã', 'another value', 'test123*', 'âbc'] }) print(df.to_markdown()) | | colA | |---:|:------------------------------------------------------| | 0 | **She’s the Hollywood Power Behind Those ...** | | 1 | Hello, world! | | 2 | Cainã | | 3 | another value | | 4 | test123* | | 5 | âbc | Identifying and filtering strings with non-English characters (see the ASCII printable characters): df[df.colA.map(lambda x: x.isascii())] Output: colA 1 Hello, world! 3 another value 4 test123* Original approach was to use a user-defined function like this: def is_ascii(s): try: s.encode(encoding='utf-8').decode('ascii') except UnicodeDecodeError: return False else: return True
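The two checks agree on plain strings; this standalone sketch compares str.isascii (3.7+) with the encode-based fallback on sample values similar to the question's:

```python
def is_ascii(s):
    # pre-3.7 fallback: pure-ASCII text round-trips through the ascii codec
    try:
        s.encode("utf-8").decode("ascii")
    except UnicodeDecodeError:
        return False
    return True

samples = ["Hello, world!", "Cain\u00e3", "test123*", "She\u2019s here"]
flags = [(s.isascii(), is_ascii(s)) for s in samples]
print(flags)   # [(True, True), (False, False), (True, True), (False, False)]
```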
6
9
65,011,428
2020-11-25
https://stackoverflow.com/questions/65011428/play-is-not-defined-pylance-reportundefinedvariable
I'm just starting to learn Python and am having trouble calling classes. I'm using Visual Studio Code. I've looked up the error but couldn't find anything helpful. All of my experience so far has been with Java and think I might be getting some stuff mixed up. Any help would be greatly appreciated! print("Lets play a game!") dotheywantotplay = input("Do you want to play Rock-Paper-Scissors? (y or n) ") if dotheywantotplay == "y": player1 = input('Enter first players name: ') player2 = input('Enter second players name: ') personchoice1 = input("What move will you do? [\"Rock\", \"Paper\", \"Scissors\"]") personchoice2 = input("What move will you do? [\"Rock\", \"Paper\", \"Scissors\"]") Play(personchoice1,personchoice2) // The error looks like its this line else: print("I am so sad now :-(") exit() class Play: def __init__(player1, player2): if player1 == player2: print("Tie!") elif player1 == "Rock": if player2 == "Paper": print("You lose!", player2, "covers", player1) else: print("You win!", player1, "smashes", player2) elif player1 == "Paper": if player2 == "Scissors": print("You lose!", player2, "cut", player1) else: print("You win!", player1, "covers", player2) elif player1 == "Scissors": if player2 == "Rock": print("You lose...", player2, "smashes", player1) else: print("You win!", player1, "cut", player2) else: print("That's not a valid play. Check your spelling!") Above was the original question for future reference. Below is the solution. Python is different than Java in that, one has to define classes before they are referenced. This means the classes must be at the top of your python file. Hope this helps some other Java nerd out there trying to write programs in Python... print("Lets play a game!") dotheywantotplay = input("Do you want to play Rock-Paper-Scissors? (y or n) ") print("Lets play a game!") dotheywantotplay = input("Do you want to play Rock-Paper-Scissors? 
(y or n) ") class Play: def __init__(self, player1, player2, name1, name2 ): self.player1 = player1 self.player2 = player2 self.name1 = name1 self.name2 = name2 if player1 == player2: print("Tie!") elif player1 == "Rock": if player2 == "Paper": print("You Win ", name2,"!") print(player2, "covers", player1) else: print("You Win ", name1,"!") print(player1, "smashes", player2) elif player1 == "Paper": if player2 == "Scissors": print("You Win ", name2,"!") print(player2, "cut", player1) else: print("You Win ", name1,"!") print(player1, "covers", player2) elif player1 == "Scissors": if player2 == "Rock": print("You Win ", name2,"!") print(player2, "smashes", player1) else: print("You Win ", name1,"!") print(player1, "cut", player2) else: print("That's not a valid play. Check your spelling!") if dotheywantotplay == "y": player1 = input('Enter first players name: ') player2 = input('Enter second players name: ') personchoice1 = input("What move will you do? \"Rock\", \"Paper\", \"Scissors\" ") personchoice2 = input("What move will you do? \"Rock\", \"Paper\", \"Scissors\" ") Play(personchoice1,personchoice2, player1, player2) else: print("I am sad now :-(") exit()
Python is executed top to bottom, so all your classes and functions should be defined before they are called (i.e. placed at the top). Also:

class Play:
    def __init__(self, player1, player2):
        self.player1 = player1
        self.player2 = player2

you should define your attributes inside your class like this before anything else; self refers to the current instance of your class.
8
10
65,011,159
2020-11-25
https://stackoverflow.com/questions/65011159/importerror-cannot-import-name-celery
I'm trying to learn Celery i'm using Django 2.0 and celery 5.0.2 and my os is Ubuntu. This is my structure My project structure is: celery/ manage.py celery/ __init__.py cerely_app.py settings.py urls.py wsgi.py apps/ main/ __init__.py admin.py apps.py models.py task.py views.py test.py My configuration for cerely_app, based on documentation: import os from celery import Celery os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'celery.settings') app = Celery('celery') app.config_from_object('django.conf:settings', namespace='CELERY') app.autodiscover_tasks() @app.task(bind=True) def debug_task(self): print(f'Request: {self.request!r}') And my init.py: from .celery_app import app as celery_app __all__ = ('celery_app',) But when django give a error of import when i use command python3 manage.py runserver: $python3 manage.py runserver Traceback (most recent call last): File "manage.py", line 15, in <module> execute_from_command_line(sys.argv) File "/home/brayan/Envs/celery/lib/python3.8/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line utility.execute() File "/home/brayan/Envs/celery/lib/python3.8/site-packages/django/core/management/__init__.py", line 317, in execute settings.INSTALLED_APPS File "/home/brayan/Envs/celery/lib/python3.8/site-packages/django/conf/__init__.py", line 56, in __getattr__ self._setup(name) File "/home/brayan/Envs/celery/lib/python3.8/site-packages/django/conf/__init__.py", line 43, in _setup self._wrapped = Settings(settings_module) File "/home/brayan/Envs/celery/lib/python3.8/site-packages/django/conf/__init__.py", line 106, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 961, in 
_find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/brayan/dev/python/celery/celery/celery/__init__.py", line 3, in <module> from .celery_app import app as celery_app File "/home/brayan/dev/python/celery/celery/celery/celery_app.py", line 2, in <module> from celery import Celery ImportError: cannot import name 'Celery' from partially initialized module 'celery' (most likely due to a circular import) (/home/brayan/dev/python/celery/celery/celery/__init__.py)
Do not give your package the same name as an installed package, as it confuses Python when it hits the import statement. In your case you named your package celery, which is also the name of the original celery package, so the import resolves to your own folder instead of the library. In short, simply rename your celery folder to something else.
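To see the circular import happen in isolation, here is a minimal, self-contained sketch (the temp directory is illustrative) that shadows any real celery package with a local one whose __init__.py does "from celery import Celery", reproducing the same "partially initialized module" error as in the traceback:

```python
import importlib
import os
import sys
import tempfile

# Create a local package named "celery" that, like the project in the
# question, imports from "celery" inside its own __init__.py.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "celery")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from celery import Celery\n")

# Make sure the local folder shadows any installed celery library.
sys.modules.pop("celery", None)
sys.path.insert(0, tmp)

err = None
try:
    importlib.import_module("celery")
except ImportError as e:
    # "from celery import Celery" resolves back to the half-initialized
    # local package, so the name Celery cannot be found.
    err = e

print(err)
```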
6
4
64,998,847
2020-11-25
https://stackoverflow.com/questions/64998847/only-remove-entirely-empty-rows-in-pandas
If I have this data frame: d = {'col1': [1, np.nan, np.nan], 'col2': [1, np.nan, 1]} df = pd.DataFrame(data=d) col1 col2 0 1.0 1.0 1 NaN NaN 2 NaN 1.0 and want to drop only rows that are empty to produce the following: d = {'col1': [1, np.nan], 'col2': [1, 1]} df = pd.DataFrame(data=d) col1 col2 0 1.0 1 1 NaN 1 What is the best way to do this?
Check the docs page: df.dropna(how='all') drops a row only when every value in it is NaN.
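For reference, a quick sketch of the difference on the frame from the question (the default how='any' would also drop row 2, which has only one NaN):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [1, np.nan, np.nan], "col2": [1, np.nan, 1]})

# how='all' drops a row only when every value in it is missing
kept = df.dropna(how="all")
print(kept)          # rows 0 and 2 survive

# the default how='any' drops every row that has at least one NaN
print(df.dropna())   # only row 0 survives
```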
17
32
64,998,533
2020-11-25
https://stackoverflow.com/questions/64998533/how-to-import-a-class-from-another-file-in-python
Im new to python and have looked at various stack overflow posts. i feel like this should work but it doesnt. How do you import a class from another file in python? This folder structure src/example/ClassExample src/test/ClassExampleTest I have this class class ClassExample: def __init__(self): pass def helloWorld(self): return "Hello World!" I have this test class import unittest from example import ClassExample class ClassExampleTest(unittest.TestCase): def test_HelloWorld(self): hello = ClassExample() self.assertEqual("Hello World!", hello.helloWorld()) if __name__ == '__main__': unittest.main() When the unit test runs the object is None: AttributeError: 'NoneType' object has no attribute 'helloWorld' What's wrong with this? How do you import a class in python?
If you're using Python 3, then imports are absolute by default. This means that import example will look for an absolute package named example, somewhere in the module search path. So instead, you probably want a relative import. This is useful when you want to import a module that is relative to the module doing the importing. In this case:

from ..example.ClassExample import ClassExample

I'm assuming that your folders are Python packages, meaning that they contain __init__.py files. So your directory structure would look like this:

src
|-- __init__.py
|-- example
|   |-- __init__.py
|   |-- ClassExample.py
|-- test
|   |-- __init__.py
|   |-- ClassExampleTest.py
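As a runnable sanity check, the sketch below materialises the example/ClassExample.py layout from the question in a temporary directory (the temp dir stands in for src/ being on the import path) and imports the class the absolute way - note that the class lives in the module example.ClassExample, so it must be imported from that module, not from the bare package name:

```python
import importlib
import os
import sys
import tempfile

# Recreate example/__init__.py and example/ClassExample.py on disk.
tmp = tempfile.mkdtemp()
ex = os.path.join(tmp, "example")
os.makedirs(ex)
open(os.path.join(ex, "__init__.py"), "w").close()
with open(os.path.join(ex, "ClassExample.py"), "w") as f:
    f.write(
        "class ClassExample:\n"
        "    def helloWorld(self):\n"
        "        return 'Hello World!'\n"
    )

sys.modules.pop("example", None)
sys.path.insert(0, tmp)  # stand-in for having src/ on sys.path

# Import the module, then instantiate the class it defines.
mod = importlib.import_module("example.ClassExample")
hello = mod.ClassExample()
print(hello.helloWorld())
```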
7
9
64,998,199
2020-11-25
https://stackoverflow.com/questions/64998199/cannot-import-name-imaging-from-pil
I'm trying to run this code: import pyautogui import time from PIL import _imaging from PIL import Image import pytesseract time.sleep(5) captura = pyautogui.screenshot() codigo = captura.crop((872, 292, 983, 337)) codigo.save(r'C:\autobot_wwe_supercard\imagenes\codigo.png') time.sleep(2) pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract' print(pytesseract.image_to_string(r'D:\codigo.png')) And this error pops up: ImportError: cannot import name 'imaging' from 'PIL' (C:\Users\Usuario\AppData\Roaming\Python\Python38\site-packages\PIL_init.py) I installed pillow in console with pip install pillow I installed pytesseract in console with pip install pytesseract
It appears as if a lot of PIL ImportErrors can simply be fixed by uninstalling and reinstalling Pillow, according to this source; your specific problem can be found here. Try these three commands:

pip uninstall PIL
pip uninstall Pillow
pip install Pillow
8
13
64,974,078
2020-11-23
https://stackoverflow.com/questions/64974078/how-do-the-scoping-rules-work-with-classes
Consider the following snippet of python code: x = 1 class Foo: x = 2 def foo(): x = 3 class Foo: print(x) # prints 3 Foo.foo() As expected, this prints 3. But, if we add a single line to the above snippet, the behavior changes: x = 1 class Foo: x = 2 def foo(): x = 3 class Foo: x += 10 print(x) # prints 11 Foo.foo() And, if we switch the order of the two lines in the above example, the result changes yet again: x = 1 class Foo: x = 2 def foo(): x = 3 class Foo: print(x) # prints 1 x += 10 Foo.foo() I'd like to understand why this occurs, and more generally, understand the scoping rules that cause this behavior. From the LEGB scoping rule, I would expect that both snippets print 3, 13, and 3, since there is an x defined in the enclosing function foo().
Class block scope is special. It is documented here:

A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods — this includes comprehensions and generator expressions since they are implemented using a function scope.

Basically, class blocks do not "participate" in creating/using enclosing scopes. So, it is actually the first example that isn't working as documented. I think this is an actual bug.

EDIT: OK, so actually, here's some more relevant documentation from the data model. I think it all is actually consistent with the documentation:

The class body is executed (approximately) as exec(body, globals(), namespace). The key difference from a normal call to exec() is that lexical scoping allows the class body (including any methods) to reference names from the current and outer scopes when the class definition occurs inside a function.

So class blocks do participate in using enclosing scopes, but for free variables (as is normal anyway). In the first piece of documentation that I'm quoting, the part about "unbound local variables are looked up in the global namespace" applies to variables that would normally be marked local by the compiler. So, consider this notorious error, for example:

x = 1
def foo():
    x += 1
    print(x)

foo()

This would throw an unbound local error, but an equivalent class definition would print 2:

x = 1
class Foo:
    x += 1
    print(x)

Basically, if there is an assignment statement anywhere in a class block, it is "local", but it will check in the global scope if there is an unbound local instead of throwing the UnboundLocal error.
Hence, in your first example, it isn't a local variable, it is simply a free variable, and the resolution goes through the normal rules. In your next two examples, you use an assignment statement, marking x as "local", and thus, it will be looked up in the global namespace in case it is unbound in the local one.
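Putting the three behaviours side by side as one runnable script (results are collected in a list instead of printed):

```python
results = []
x = 1

def case1():
    x = 3
    class C:
        # no assignment to x in the class body:
        # x is a free variable -> enclosing x == 3
        results.append(x)

def case2():
    x = 3
    class C:
        x += 10            # assignment marks x "local" to the class block;
        results.append(x)  # unbound -> global x (1) is used, so 11

def case3():
    x = 3
    class C:
        # x is assigned later in the block, so it is already "local":
        # the unbound lookup falls back to the global x (1)
        results.append(x)
        x += 10

case1(); case2(); case3()
print(results)  # [3, 11, 1]
```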
51
33
64,993,130
2020-11-24
https://stackoverflow.com/questions/64993130/how-to-get-hmm-working-with-real-valued-data-in-tensorflow
I'm working with a dataset that contains data from IoT devices and I have found that Hidden Markov Models work pretty well for my use case. As such, I'm trying to alter some code from a Tensorflow tutorial I've found here. The dataset contains real-values for the observed variable compared to the count data shown in the tutorial. In particular, I believe the following needs to be changed so that the HMM has Normally distributed emissions. Unfortunately, I can't find any code on how to alter the model to have a different emission other than Poisson. How should I change the code to emit normally distributed values? # Define variable to represent the unknown log rates. trainable_log_rates = tf.Variable( np.log(np.mean(observed_counts)) + tf.random.normal([num_states]), name='log_rates') hmm = tfd.HiddenMarkovModel( initial_distribution=tfd.Categorical( logits=initial_state_logits), transition_distribution=tfd.Categorical(probs=transition_probs), observation_distribution=tfd.Poisson(log_rate=trainable_log_rates), num_steps=len(observed_counts)) rate_prior = tfd.LogNormal(5, 5) def log_prob(): return (tf.reduce_sum(rate_prior.log_prob(tf.math.exp(trainable_log_rates))) + hmm.log_prob(observed_counts)) optimizer = tf.keras.optimizers.Adam(learning_rate=0.1) @tf.function(autograph=False) def train_op(): with tf.GradientTape() as tape: neg_log_prob = -log_prob() grads = tape.gradient(neg_log_prob, [trainable_log_rates])[0] optimizer.apply_gradients([(grads, trainable_log_rates)]) return neg_log_prob, tf.math.exp(trainable_log_rates)
@mCoding's answer is right: in the example posted by TensorFlow, you have a Hidden Markov model with a uniform initial state distribution (logits [0., 0., 0., 0.]), a heavy diagonal transition matrix, and Poisson-distributed emission probabilities. In order to adapt it to your "Normal" example, you only have to change those probabilities to Normal ones. As an example, consider as a starting point that your emission probabilities are Normally distributed with parameters:

training_loc = tf.Variable([0., 0., 0., 0.])
training_scale = tf.Variable([1., 1., 1., 1.])

then your observation_distribution will be:

observation_distribution = tfp.distributions.Normal(loc=training_loc, scale=training_scale)

Finally, you also have to change your prior knowledge about these parameters, setting a prior_loc and prior_scale. You might want to consider uninformative/weakly informative priors, as I see that you are fitting the model afterwards. So your code should be similar to:

# Define the emission probabilities.
training_loc = tf.Variable([0., 0., 0.])
training_scale = tf.Variable([1., 1., 1.])
# Change this to your desired distribution
observation_distribution = tfp.distributions.Normal(loc=training_loc, scale=training_scale)

hmm = tfd.HiddenMarkovModel(
    initial_distribution=tfd.Categorical(logits=initial_state_logits),
    transition_distribution=tfd.Categorical(probs=transition_probs),
    observation_distribution=observation_distribution,
    num_steps=len(observed_counts))

# Prior distributions
prior_loc = tfd.Normal(loc=0., scale=1.)
prior_scale = tfd.HalfNormal(scale=1.)
def log_prob():
    log_probability = hmm.log_prob(data)  # use your training data right here
    # Add the log probability of the priors on the mean and standard
    # deviation of the observation distribution
    log_probability += tf.reduce_sum(prior_loc.log_prob(observation_distribution.loc))
    log_probability += tf.reduce_sum(prior_scale.log_prob(observation_distribution.scale))
    # Return the log probability; it is negated below, since we want to
    # minimize the negative log probability
    return log_probability

# Finally train the model like in the example
losses = tfp.math.minimize(
    lambda: -log_prob(),
    optimizer=tf.optimizers.Adam(learning_rate=0.1),
    num_steps=100)

So now if you look at your params training_loc and training_scale, they should have fitted values.
6
1
64,967,847
2020-11-23
https://stackoverflow.com/questions/64967847/pandas-representative-sampling-across-multiple-columns
I have a dataframe which represents a population, with each column denoting a different quality/ characteristic of that person. How can I get a sample of that dataframe/ population, which is representative of the population as a whole across all characteristics. Suppose I have a dataframe which represents a workforce of 650 people as follows: import pandas as pd import numpy as np c = np.random.choice colours = ['blue', 'yellow', 'green', 'green... no, blue'] knights = ['Bedevere', 'Galahad', 'Arthur', 'Robin', 'Lancelot'] qualities = ['wise', 'brave', 'pure', 'not quite so brave'] df = pd.DataFrame({'name_id':c(range(3000), 650, replace=False), 'favourite_colour':c(colours, 650), 'favourite_knight':c(knights, 650), 'favourite_quality':c(qualities, 650)}) I can get a sample of the above that reflects the distribution of a single column as follows: # Find the distribution of a particular column using value_counts and normalize: knight_weight = df['favourite_knight'].value_counts(normalize=True) # Add this to my dataframe as a weights column: df['knight_weight'] = df['favourite_knight'].apply(lambda x: knight_weight[x]) # Then sample my dataframe using the weights column I just added as the 'weights' argument: df_sample = df.sample(140, weights=df['knight_weight']) This will return a sample dataframe (df_sample) such that: df_sample['favourite_knight'].value_counts(normalize=True) is approximately equal to df['favourite_knight'].value_counts(normalize=True) My question is this: How can I generate a sample dataframe (df_sample) such that the above i.e.: df_sample[column].value_counts(normalize=True) is approximately equal to df[column].value_counts(normalize=True) is true for all columns (except 'name_id') instead of just one of them? population of 650 with a sample size of 140 is approximately the sizes I'm working with so performance isn't too much of an issue. 
I'll happily accept solutions that take a couple of minutes to run as this will still be considerably faster than producing the above sample manually. Thank you for any help.
You can create a combined feature column, weight it and draw with it as weights:

df["combined"] = list(zip(df["favourite_colour"], df["favourite_knight"], df["favourite_quality"]))
combined_weight = df['combined'].value_counts(normalize=True)
df['combined_weight'] = df['combined'].apply(lambda x: combined_weight[x])
df_sample = df.sample(140, weights=df['combined_weight'])

This needs an additional step of dividing each row's weight by the count of its combination so that the weights sum to 1 - see Ehsan Fathi's post.
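A sketch of the full recipe, including the normalisation step mentioned above (data and column names follow the question; the exact composition of the sample still depends on the random draw):

```python
import numpy as np
import pandas as pd

# Rebuild the workforce frame from the question.
c = np.random.choice
colours = ['blue', 'yellow', 'green', 'green... no, blue']
knights = ['Bedevere', 'Galahad', 'Arthur', 'Robin', 'Lancelot']
qualities = ['wise', 'brave', 'pure', 'not quite so brave']
df = pd.DataFrame({'name_id': c(range(3000), 650, replace=False),
                   'favourite_colour': c(colours, 650),
                   'favourite_knight': c(knights, 650),
                   'favourite_quality': c(qualities, 650)})

df['combined'] = list(zip(df['favourite_colour'],
                          df['favourite_knight'],
                          df['favourite_quality']))
freq = df['combined'].value_counts(normalize=True)
size = df['combined'].value_counts()

# Dividing each row's weight by its group size makes every combination's
# total weight equal its overall frequency, so the weights sum to 1.
df['combined_weight'] = df['combined'].apply(lambda t: freq[t] / size[t])

df_sample = df.sample(140, weights=df['combined_weight'])
print(len(df_sample))
```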
8
7
64,936,440
2020-11-20
https://stackoverflow.com/questions/64936440/python-uvicorn-the-term-uvicorn-is-not-recognized-as-the-name-of-a-cmdlet-f
Good evening, I am using python 3.9 and try to run a new FastAPI service on Windows 10 Pro based on the documentation on internet https://www.uvicorn.org/ i executed the following statements pip install uvicorn pip install uvicorn[standard] create the sample file app.py from fastapi import FastAPI app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} But when i run the code below : uvicorn main:app --reload uvicorn : The term 'uvicorn' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify t hat the path is correct and try again. At line:1 char:1 + uvicorn + ~~~~~~~ + CategoryInfo : ObjectNotFound: (uvicorn:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException I also add the path of Python in the envroment settings I also re-install Python 3.9 and make the default path for installation to c:\ProgramFiles\Python39 this path is also include now in the system enviroment and user enviroment settings. if i run pip install uvicorn again it shows the following statement: λ pip install uvicorn Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: uvicorn in c:\users\username\appdata\roaming\python\python39\site-packages (0.12.2) Requirement already satisfied: h11>=0.8 in c:\users\username\appdata\roaming\python\python39\site-packages (from uvicorn) (0.11.0) Requirement already satisfied: click==7.* in c:\users\username\appdata\roaming\python\python39\site-packages (from uvicorn) (7.1.2) WARNING: You are using pip version 20.2.3; however, version 20.2.4 is available. You should consider upgrading via the 'c:\program files\python39\python.exe -m pip install --upgrade pip' command. Many thanks Erik
Python installs its scripts in the Scripts folder at the following path:

c:\users\username\appdata\roaming\python\python39\scripts

Add that path to the PATH environment variable (system or user). This will solve the problem.
29
11
64,992,044
2020-11-24
https://stackoverflow.com/questions/64992044/pytest-mock-multiple-calls-of-same-method-with-different-side-effect
I have a unit test like so below: # utilities.py def get_side_effects(): def side_effect_func3(self): # Need the "self" to do some stuff at run time. return {"final":"some3"} def side_effect_func2(self): # Need the "self" to do some stuff at run time. return {"status":"some2"} def side_effect_func1(self): # Need the "self" to do some stuff at run time. return {"name":"some1"} return side_effect_func1, side_effect_func2, side_effect_func2 ################# # test_a.py def test_endtoend(): s1, s2, s3 = utilities.get_side_effects() m1 = mock.MagicMock() m1.side_effect = s1 m2 = mock.MagicMock() m2.side_effect = s2 m3 = mock.MagicMock() m3.side_effect = s3 with mock.patch("a.get_request", m3): with mock.patch("a.get_request", m2): with mock.patch("a.get_request", m1): foo = a() # Class to test result = foo.run() As part of the foo.run() code run, get_request is called multiple times. I want to have a different side_effect function for each call of get_request method, in this case it is side_effect_func1, side_effect_func2, side_effect_func3. But what I'm noticing is that only m1 mock object is active, i.e only side_effect_func1 is invoked but not the other 2. How do I achieve this? I have also tried the below, but the actual side_effect functions don't get invoked, they always return the function object, but don't actually execute the side_effect functions. # utilities.py def get_side_effects(): def side_effect_func3(self): # Need the "self" to do some stuff at run time. return {"final":"some3"} def side_effect_func2(self): # Need the "self" to do some stuff at run time. return {"status":"some2"} def side_effect_func1(self): # Need the "self" to do some stuff at run time. 
return {"name":"some1"} all_get_side_effects = [] all_get_side_effects.append(side_effect_func1) all_get_side_effects.append(side_effect_func2) all_get_side_effects.append(side_effect_func3) return all_get_side_effects ######################### # test_a.py def test_endtoend(): all_side_effects = utilities.get_side_effects() m = mock.MagicMock() m.side_effect = all_side_effects with mock.patch("a.get_request", m): foo = a() # Class to test result = foo.run()
Your first attempt doesn't work because each mock just replaced the previous one (the outer two mocks don't do anything). Your second attempt doesn't work because side_effect is overloaded to serve a different purpose for iterables (docs):

If side_effect is an iterable then each call to the mock will return the next value from the iterable.

Instead you could use a callable class for the side-effect, which maintains some state about which underlying function to actually call, consecutively. Basic example with two functions:

>>> class SideEffect:
...     def __init__(self, *fns):
...         self.fs = iter(fns)
...     def __call__(self, *args, **kwargs):
...         f = next(self.fs)
...         return f(*args, **kwargs)
...
>>> def sf1():
...     print("called sf1")
...     return 1
...
>>> def sf2():
...     print("called sf2")
...     return 2
...
>>> def foo():
...     print("called actual func")
...     return "f"
...
>>> with mock.patch("__main__.foo", side_effect=SideEffect(sf1, sf2)) as m:
...     first = foo()
...     second = foo()
...
called sf1
called sf2
>>> assert first == 1
>>> assert second == 2
>>> assert m.call_count == 2
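A tiny demonstration of the overloaded iterable behaviour that broke the second attempt - items taken from the iterable are returned as-is, not called, even when they are functions:

```python
from unittest import mock

def sf():
    return "side effect ran"

# With an iterable side_effect, each call yields the NEXT ITEM itself.
m = mock.Mock(side_effect=[sf, 42])
first = m()   # the function object comes back uninvoked
second = m()  # plain values from the iterable are returned directly
print(first is sf, second)
```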
8
12
64,955,230
2020-11-22
https://stackoverflow.com/questions/64955230/how-can-i-check-by-type-if-an-object-is-instance-of-pytz-timezone
I want something like this: from datetime import datetime, timezone import pytz def convert_datetime_by_timezone(timestamp_dt, to_timezone): if isinstance(to_timezone, str): return timestamp_dt.astimezone(pytz.timezone(to_timezone)) elif isinstance(to_timezone, pytz.tzinfo.??????): return timestamp_dt.astimezone(to_timezone) else: raise Exception("Invalid timezone: '%s'" % str(to_timezone)) But each time I create a pytz timezone a new type of object gets created: >>> type(pytz.timezone("UTC")) <class 'pytz.UTC'> >>> type(pytz.timezone("Europe/Budapest")) <class 'pytz.tzfile.Europe/Budapest'> What is the correct way to check this?
isinstance(x, pytz.BaseTzInfo) works for both cases
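pytz.BaseTzInfo is the common base of both pytz.UTC and the generated tzfile classes. If you also want to accept non-pytz timezone objects (stdlib timezone, zoneinfo, dateutil), a broader check against the stdlib base class datetime.tzinfo works too, since every concrete timezone class derives from it. A sketch using only stdlib objects (the str branch from the question is omitted here to stay self-contained):

```python
from datetime import datetime, timedelta, timezone, tzinfo

def convert_datetime_by_timezone(timestamp_dt, to_timezone):
    # pytz, zoneinfo and stdlib timezones all subclass datetime.tzinfo,
    # so one isinstance check covers them all
    if isinstance(to_timezone, tzinfo):
        return timestamp_dt.astimezone(to_timezone)
    raise TypeError("Invalid timezone: %r" % (to_timezone,))

dt = datetime(2020, 11, 22, 12, 0, tzinfo=timezone.utc)
converted = convert_datetime_by_timezone(dt, timezone(timedelta(hours=1)))
print(converted.hour)  # 13
```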
9
15
64,950,340
2020-11-22
https://stackoverflow.com/questions/64950340/cv2-imshow-is-not-working-properly-in-pycharm-macos
Environment: interpreter: Python 3.9.0 OS: macOS Big Sur This simple code is running fine with no errors; however, no image is produced and nothing is displayed, and I'm forced to manually stop the code and interrupt it to exit, otherwise it just seems to run forever? This exact same code used to work fine on my windows laptop. Any clue? The code: import cv2 CV_cat = cv2.imread('Cat.jpg') cv2.imshow('displaymywindows', CV_cat) cv2.waitKey(1500)
With Ubuntu 18.04 and an Anaconda environment I was getting the error in PyCharm: process finished with exit code 139 (interrupted by signal 11: SIGSEGV). Solved it by pip install opencv-python-headless in addition to pip install opencv-python.
10
3
64,900,801
2020-11-18
https://stackoverflow.com/questions/64900801/implementing-knn-imputation-on-categorical-variables-in-an-sklearn-pipeline
I am implementing a pre-processing pipeline using sklearn's pipeline transformers. My pipeline includes sklearn's KNNImputer estimator that I want to use to impute categorical features in my dataset. (My question is similar to this thread but it doesn't contain the answer to my question: How to implement KNN to impute categorical features in a sklearn pipeline) I know that the categorical features have to be encoded before imputation and this is where I am having trouble. With standard label/ordinal/onehot encoders, when trying to encode categorical features with missing values (np.nan) you get the following error: ValueError: Input contains NaN I've managed to "by-pass" that by creating a custom encoder where I replace the np.nan with 'Missing': class CustomEncoder(BaseEstimator, TransformerMixin): def __init__(self): self.encoder = None def fit(self, X, y=None): self.encoder = OrdinalEncoder() return self.encoder.fit(X.fillna('Missing')) def transform(self, X, y=None): return self.encoder.transform(X.fillna('Missing')) def fit_transform(self, X, y=None, **fit_params): self.encoder = OrdinalEncoder() return self.encoder.fit_transform(X.fillna('Missing')) preprocessor = ColumnTransformer([ ('categoricals', CustomEncoder(), cat_features), ('numericals', StandardScaler(), num_features)], remainder='passthrough' ) pipeline = Pipeline([ ('preprocessing', preprocessor), ('imputing', KNNImputer(n_neighbors=5)) ]) In this scenario however I cannot find a reasonable way to then set the encoded 'Missing' values back to np.nan before imputing with the KNNImputer. I've read that I could do this manually using the OneHotEncoder transformer on this thread: Cyclical Loop Between OneHotEncoder and KNNImpute in Scikit-learn, but again, I'd like to implement all of this in a pipeline to automate the entire pre-processing phase. Has anyone managed to do this? Would anyone recommend an alternative solution? 
Is imputing with a KNN algorithm maybe not worth the trouble and should I use a simple imputer instead? Thanks in advance for your feedback!
I am afraid that this cannot work. If you one-hot encode your categorical data, your missing values will be encoded into a new binary variable and KNNImputer will fail to deal with them because:

it works on one column at a time, not on the full set of one-hot encoded columns
there won't be any missing values left to be dealt with anymore

Anyway, you have a couple of options for imputing missing categorical variables using scikit-learn:

you can use sklearn.impute.SimpleImputer with strategy="most_frequent": this will replace missing values using the most frequent value along each column, no matter if they are strings or numeric data

use sklearn.impute.KNNImputer with some limitations: you first have to transform your categorical features into numeric ones while preserving the NaN values (see: LabelEncoder that keeps missing values as 'NaN'), then you can use the KNNImputer using only the nearest neighbour as replacement (if you use more than one neighbour it will render some meaningless average). For example:

import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.impute import KNNImputer

df = pd.DataFrame({'A': ['x', np.NaN, 'z'], 'B': [1, 6, 9], 'C': [2, 1, np.NaN]})

df = df.apply(lambda series: pd.Series(
    LabelEncoder().fit_transform(series[series.notnull()]),
    index=series[series.notnull()].index
))

imputer = KNNImputer(n_neighbors=1)
imputer.fit_transform(df)

In:

     A  B    C
0    x  1  2.0
1  NaN  6  1.0
2    z  9  NaN

Out:

array([[0., 0., 1.],
       [0., 1., 0.],
       [1., 2., 0.]])

Use sklearn.impute.IterativeImputer and replicate a MissForest imputer for mixed data (but you will have to process numeric and categorical features separately).
For example:

import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

df = pd.DataFrame({'A': ['x', np.NaN, 'z'], 'B': [1, 6, 9], 'C': [2, 1, np.NaN]})

categorical = ['A']
numerical = ['B', 'C']

df[categorical] = df[categorical].apply(lambda series: pd.Series(
    LabelEncoder().fit_transform(series[series.notnull()]),
    index=series[series.notnull()].index
))
print(df)

imp_num = IterativeImputer(estimator=RandomForestRegressor(),
                           initial_strategy='mean',
                           max_iter=10, random_state=0)
imp_cat = IterativeImputer(estimator=RandomForestClassifier(),
                           initial_strategy='most_frequent',
                           max_iter=10, random_state=0)

df[numerical] = imp_num.fit_transform(df[numerical])
df[categorical] = imp_cat.fit_transform(df[categorical])
print(df)
12
24
64,902,852
2020-11-18
https://stackoverflow.com/questions/64902852/the-difference-between-opencv-python-and-opencv-contrib-python
I was looking at the Python Package Index (PyPi) and noticed 2 very similar packages: opencv-contrib-python and opencv-python and wondering what the difference was. I looked at them and they had the exact same description and version numbers.
As per the PyPI documentation, there are four different packages (see options 1, 2, 3 and 4 below):

Packages for standard desktop environments:
Option 1 - Main modules package: pip install opencv-python
Option 2 - Full package (contains both main modules and contrib/extra modules): pip install opencv-contrib-python (check the contrib/extra modules listing in the OpenCV documentation)

Packages for server (headless) environments:
Option 3 - Headless main modules package: pip install opencv-python-headless
Option 4 - Headless full package (contains both main modules and contrib/extra modules): pip install opencv-contrib-python-headless

Do not install multiple different packages in the same environment.
39
57
64,971,281
2020-11-23
https://stackoverflow.com/questions/64971281/airflow-webserver-gettins-valueerrorsamesite
I installed Airflow 1.10.12 using Anaconda in one of my environments. But when I tried to follow the Quick Start Guide (at https://airflow.apache.org/docs/stable/start.html) I got the following error after accessing http://localhost:8080/admin/ Traceback (most recent call last): File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/flask/app.py", line 1953, in full_dispatch_request return self.finalize_request(rv) File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/flask/app.py", line 1970, in finalize_request response = self.process_response(response) File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/flask/app.py", line 2269, in process_response self.session_interface.save_session(self, ctx.session, response) File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/flask/sessions.py", line 379, in save_session response.set_cookie( File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/werkzeug/wrappers/base_response.py", line 468, in set_cookie dump_cookie( File "/home/guilherme/anaconda3/envs/engdados/lib/python3.8/site-packages/werkzeug/http.py", line 1217, in dump_cookie raise ValueError("SameSite must be 'Strict', 'Lax', or 'None'.") ValueError: SameSite must be 'Strict', 'Lax', or 'None'. I tried to set the cookie_samesite variable at the default_airflow.cfg file to either None, Lax or Strict but the error persists. Some details of my airflow environment: I installed Apache airflow in my workspace using the conda install -c anaconda airflow After that I create an environment on Anaconda and installed the airflow package within this environment. Then I opened the terminal and run airflow initdb and airflow webserver -p 8080. Then I went to webpage and saw the error. Thanks in advance for any help.
Pin werkzeug to a version below 1.0.0:

pip install 'werkzeug<1.0.0'

For Airflow >=2.0.0, change the config (airflow.cfg) [webserver] cookie_samesite to use Lax (https://github.com/apache/airflow/blob/2.0.1/UPDATING.md#the-default-value-for-webserver-cookie_samesite-has-been-changed-to-lax).
7
10
64,988,557
2020-11-24
https://stackoverflow.com/questions/64988557/how-to-run-a-coroutine-inside-a-context
In the Python docs about Context Vars a Context::run method is described that enables executing a callable inside a context, so that changes the callable makes to the context are contained inside the copied Context. But what if you need to execute a coroutine? What are you supposed to do in order to achieve the same behavior? In my case, what I wanted was something like this to handle a transactional context with possible nested transactions: my_ctxvar = ContextVar("my_ctxvar") async def coro(func, transaction): token = my_ctxvar.set(transaction) r = await func() my_ctxvar.reset(token) # no real need for this, but why not either return r async def foo(): ctx = copy_context() # simplification to one case here: let's use the current transaction if there is one if tx_owner := my_ctxvar not in ctx: tx = await create_transaction() else: tx = my_ctxvar.get() try: r = await ctx.run(coro) # not actually possible if tx_owner: await tx.commit() except Exception: if tx_owner: await tx.rollback() raise return r
As I already pointed out here, context variables are natively supported by asyncio and are ready to be used without any extra configuration. It should be noted that: Coroutines executed by the current task by means of await share the same context Newly spawned tasks created by create_task are executed in a copy of the parent task's context. Therefore, in order to execute a coroutine in a copy of the current context, you can execute it as a task: await asyncio.create_task(coro()) Small example: import asyncio from contextvars import ContextVar var = ContextVar('var') async def foo(): await asyncio.sleep(1) print(f"var inside foo {var.get()}") var.set("ham") # change copy async def main(): var.set('spam') await asyncio.create_task(foo()) print(f"var after foo {var.get()}") asyncio.run(main()) var inside foo spam var after foo spam
7
10
64,909,861
2020-11-19
https://stackoverflow.com/questions/64909861/how-to-return-401-from-aws-lambda-authorizer-without-raising-an-exception
I have a lambda authorizer that is written in Python. I know that with the following access policy I can return 200/403 : { "principalId": "yyyyyyyy", "policyDocument": { "Version": "2012-10-17", "Statement": [ { "Action": "execute-api:Invoke", "Effect": "Deny", "Resource": "*" } ] }, "context": { "stringKey": "value", "numberKey": "1", "booleanKey": "true" }, "usageIdentifierKey": "{api-key}" } I'm trying to return 401 error if the customer didn't send any token, therefore I'm raising an exception : raise Exception("Unauthorized") The problem with this solution is that the AWS lambda fails and then the execution is marked as a failed execution and not as a successful execution of the lambda. Is there any way to return 401 without failing the lambda ? Also tried the following like in lambda integration but didn't work: return {"statusCode": 401, "body" : "Unauthorized"}
It really is ugly, but that's the only way to truly signal a 401, which means "I can't find your Authorization header or cookie or nothing, you have to authenticate to do that". A 403 is an explicit 👎 saying "I know who you are, you're Forbidden from doing that". It's an odd, ternary response that API Gateway needs here: 👍/👎/🤷, and this "throw a very specific exception" is one way to do it. So you can't customize the response with the authorizer lambda; you can only give a response document that says yay/nay, or throw your hands up and signal "I can't find any authentication material here". To customize the shape of that response to a client, you would use Gateway Responses. With this, you can customize the shape of the json (or whatever content-type, really) of your 401/403 responses. Now, with respect to raise Exception("Unauthorized") polluting your metrics, making ambiguous real errors vs this expected error, I agree, it kinda stinks. My only recommendation would be to log something ERROR level that you set up a Metric Filter to watch out for, and use that as your true "something's gone wrong" metric.
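To make the ternary concrete, here is a minimal sketch of the authorizer in Python. The names `generate_policy` and the placeholder token check are my own, and I am assuming a REQUEST-type authorizer where the token arrives in the request headers; the only parts API Gateway actually contracts on are the policy document shape and the literal "Unauthorized" exception message:

```python
def generate_policy(principal_id, effect, resource="*"):
    """Build the IAM policy document shape API Gateway expects (Allow passes through, Deny gives 403)."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {"Action": "execute-api:Invoke", "Effect": effect, "Resource": resource}
            ],
        },
    }

def lambda_handler(event, context):
    # REQUEST-type authorizers receive the headers in the event payload.
    token = (event.get("headers") or {}).get("Authorization")
    if not token:
        # No credentials at all: API Gateway maps this exact message to a 401.
        raise Exception("Unauthorized")
    if token == "valid-token":  # placeholder for real token validation
        return generate_policy("user", "Allow")
    return generate_policy("user", "Deny")  # identified but forbidden: 403
```

Missing material raises (401), bad material returns Deny (403), good material returns Allow and the request passes through.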
14
12
64,908,080
2020-11-19
https://stackoverflow.com/questions/64908080/python-deepcopy-uses-more-memory-than-needed
Recently I came across strange memory usage while using copy.deepcopy. I have the following code example: import copy import gc import os import psutil from pympler.asizeof import asizeof from humanize import filesize class Foo(object): __slots__ = ["name", "foos", "bars"] def __init__(self, name): self.name = name self.foos = {} self.bars = {} def add_foo(self, foo): self.foos[foo.name] = foo def add_bar(self, bar): self.bars[bar.name] = bar def __getstate__(self): return {k: getattr(self, k) for k in self.__slots__} def __setstate__(self, state): for k, v in state.items(): setattr(self, k, v) class Bar(object): __slots__ = ["name", "description"] def __init__(self, name, description): self.name = name self.description = description def __getstate__(self): return {k: getattr(self, k) for k in self.__slots__} def __setstate__(self, state): for k, v in state.items(): setattr(self, k, v) def get_ram(): return psutil.Process(os.getpid()).memory_info()[0] def get_foo(): sub_foo = Foo("SubFoo1") for i in range(5000): sub_foo.add_bar(Bar("BarInSubFoo{}".format(i), "BarInSubFoo{}".format(i))) foo = Foo("Foo") foo.add_foo(sub_foo) for i in range(5000): foo.add_bar(Bar("BarInFoo{}".format(i), "BarInFoo{}".format(i))) return foo def main(): foo = get_foo() foo_size = asizeof(foo) gc.collect() ram1 = get_ram() foo_copy = copy.deepcopy(foo) gc.collect() ram2 = get_ram() foo_copy_size = asizeof(foo_copy) print("Original object size: {}, Ram before: {}\nCopied object size: {}, Ram after: {}\nDiff in ram: {}".format( filesize.naturalsize(foo_size), filesize.naturalsize(ram1), filesize.naturalsize(foo_copy_size), filesize.naturalsize(ram2), filesize.naturalsize(ram2-ram1) )) if __name__ == "__main__": main() What I tried to do, is to test the amount of memory used by the program before and after the copy.deepcopy. For this purpose, I created two classes. I expected my memory usage to rise after the call to deepcopy in an amount equal to the size of the original object. 
Strangely I got these results: Original object size: 2.1 MB, Ram before: 18.6 MB Copied object size: 2.1 MB, Ram after: 24.7 MB Diff in ram: 6.1 MB As you can see the difference in memory usage is approx. 300% the size of the copied object. ** These results have been obtained using Python 3.8.5 on Windows 10 64 bit What I tried: Running this example of code using Python 2.7, the results were even stranger (more than 500% of the copied object size): Original object size: 2.3 MB, Ram before: 34.3 MB Copied object size: 2.3 MB, Ram after: 46.2 MB Diff in ram: 11.9 MB Running on Linux using both Python 3.8 and Python 2.7 got the same results (respectively). Returning a list of tuples instead of a dict in __getstate__ got better results, but far from what I was expecting. Using lists instead of dicts in the Foo object also got better results, but also far from what I was expecting. Using pickle.dumps & pickle.loads in order to copy the object produced the same results. Any thoughts?
Some of that is probably accounted for because deepcopy keeps a cache of all the objects it has visited to avoid getting stuck in an infinite loop (a set I'm pretty sure). For this sort of thing, you should probably write your own efficient copy function. deepcopy is written to be able to handle arbitrary inputs, not necessarily to be efficient. If you want an efficient copying function, you can just write it yourself. This is sufficient for a deep copy, something to the effect of: import copy import gc import os import psutil from pympler.asizeof import asizeof from humanize import filesize class Foo(object): __slots__ = ["name", "foos", "bars"] def __init__(self, name): self.name = name self.foos = {} self.bars = {} def add_foo(self, foo): self.foos[foo.name] = foo def add_bar(self, bar): self.bars[bar.name] = bar def copy(self): new = Foo(self.name) new.foos = {k:foo.copy() for k, foo in self.foos.items()} new.bars = {k:bar.copy() for k, bar in self.bars.items()} return new class Bar(object): __slots__ = ["name", "description"] def __init__(self, name, description): self.name = name self.description = description def copy(self): return Bar(self.name, self.description) def get_ram(): return psutil.Process(os.getpid()).memory_info()[0] def get_foo(): sub_foo = Foo("SubFoo1") for i in range(5000): sub_foo.add_bar(Bar("BarInSubFoo{}".format(i), "BarInSubFoo{}".format(i))) foo = Foo("Foo") foo.add_foo(sub_foo) for i in range(5000): foo.add_bar(Bar("BarInFoo{}".format(i), "BarInFoo{}".format(i))) return foo def main(): foo = get_foo() foo_size = asizeof(foo) gc.collect() ram1 = get_ram() foo_copy = foo.copy() gc.collect() ram2 = get_ram() foo_copy_size = asizeof(foo_copy) print("Original object size: {}, Ram before: {}\nCopied object size: {}, Ram after: {}\nDiff in ram: {}".format( filesize.naturalsize(foo_size), filesize.naturalsize(ram1), filesize.naturalsize(foo_copy_size), filesize.naturalsize(ram2), filesize.naturalsize(ram2-ram1) )) if __name__ == "__main__": 
main()
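The cache mentioned above is observable directly. It is actually a memo dict keyed by id(), not a set, and deepcopy accepts one as its second argument, so you can pass your own and watch it grow: every container visited costs a bookkeeping entry on top of the copy itself.

```python
import copy

nested = {"bars": [{"name": f"Bar{i}"} for i in range(3)]}

memo = {}  # deepcopy's bookkeeping dict, normally created internally per call
clone = copy.deepcopy(nested, memo)

# At least one memo entry per distinct container visited:
# the outer dict, the list, and the three inner dicts.
print(len(memo) >= 5)                        # True
print(clone == nested, clone is not nested)  # True True
```

That bookkeeping (plus the keep-alive references deepcopy holds while copying) is part of why the transient memory cost exceeds the size of the final copy.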
6
7
64,894,694
2020-11-18
https://stackoverflow.com/questions/64894694/which-python-static-checker-can-catch-forgotten-await-problems
Code: from typing import AsyncIterable import asyncio async def agen() -> AsyncIterable[str]: print('agen start') yield '1' yield '2' async def agenmaker() -> AsyncIterable[str]: print('agenmaker start') return agen() async def amain(): print('amain') async for item in agen(): pass async for item in await agenmaker(): pass # Error: async for item in agenmaker(): pass def main(): asyncio.get_event_loop().run_until_complete(amain()) if __name__ == '__main__': main() As you can see, it is type-annotated, and contains an easy-to-miss error. However, neither pylint nor mypy find that error. Aside from unit tests, what options are there for catching such errors?
MyPy is perfectly capable of finding this issue. The problem is that unannotated functions are not inspected. Annotate the offending function as -> None and it is correctly inspected and rejected. # annotated with return type async def amain() -> None: print('amain') async for item in agen(): pass async for item in await agenmaker(): pass # Error: async for item in agenmaker(): # error: "Coroutine[Any, Any, AsyncIterable[str]]" has no attribute "__aiter__" (not async iterable) pass If you want to eliminate such issues slipping through, use the flags --disallow-untyped-defs or --check-untyped-defs. MyPy: Function signatures and dynamic vs static typing A function without type annotations is considered to be dynamically typed by mypy: def greeting(name): return 'Hello ' + name By default, mypy will not type check dynamically typed functions. This means that with a few exceptions, mypy will not report any errors with regular unannotated Python.
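What mypy predicts statically is exactly what blows up at runtime: the un-awaited `agenmaker()` is a coroutine, not an async iterable, so `async for` rejects it with a TypeError complaining about the missing `__aiter__`. A minimal reproduction, stripped down from the question's code:

```python
import asyncio

async def agen():
    yield "1"

async def agenmaker():
    return agen()

async def main():
    try:
        # Missing `await` before agenmaker(): we hand a coroutine to `async for`
        async for _ in agenmaker():
            pass
    except TypeError as exc:
        return str(exc)

print(asyncio.run(main()))  # the message names the missing __aiter__ method
```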
6
5
64,994,872
2020-11-24
https://stackoverflow.com/questions/64994872/how-can-i-properly-run-2-threads-that-await-things-at-the-same-time
Basically, I have 2 threads, receive and send. I want to be able to type a message, and whenever I get a new message it just gets 'printed above the line I am typing in'. First, what I thought would work (you can just paste this and it will run): import multiprocessing import time from reprint import output import sys def receiveThread(queue): i = 0 while True: queue.put(i) i+=1 time.sleep(0.5) def sendThread(queue): while True: a = sys.stdin.read(1) if (a != ""): queue.put(a) if __name__ == "__main__": send_queue = multiprocessing.Queue() receive_queue = multiprocessing.Queue() send_thread = multiprocessing.Process(target=sendThread, args=[send_queue],) receive_thread = multiprocessing.Process(target=receiveThread, args=[receive_queue],) receive_thread.start() send_thread.start() with output(initial_len=2, interval=0) as output_lines: while True: output_lines[0] = "Received: {}".format(str(receive_queue.get())) output_lines[1] = "Last Sent: {}".format(str(send_queue.get())) But what happens here is that I cannot send data. The input doesn't give me an EOF unlike when I put a = input(), but it overwrites whatever I put in that line, so how can I wait for the input in one thread while the other one works? EXPECTED BEHAVIOR: first line goes Received: 0, 1, 2, 3, 4... second line goes [my input until I press enter, then my input] ACTUAL BEHAVIOR if I don't check if input != "" first line as expected, only that the input overwrites the first couple of letters until it resets to Received second line is always empty, maybe because stdin only is filled for that one time I press enter and then always returns empty? ACTUAL BEHAVIOR if I check if input != "" first line stays: received = 0 second line is just like whatever I enter, if I press enter it goes into a new line where I then enter stuff
Don't use the same socket for communicating with... itself. That may be possible to do, I'm not sure, but it certainly isn't normal. Instead make a socket pair: one end for the sending thread, and one for the receiving thread. For example, this works for me: import socket; import multiprocessing; def receiveThread(sock): while True: msg = sock.recv(1024) print(msg.decode("utf-8")) def sendThread(sock): while True: # msg=input("client: ") # input() is broken on my system :( msg="foo" sock.send(bytes(msg,"utf8")) send_sock, recv_sock = socket.socketpair() send_thread = multiprocessing.Process(target=sendThread, args=[send_sock]) receive_thread = multiprocessing.Process(target=receiveThread, args=[recv_sock]) send_thread.start() receive_thread.start()
7
8
64,973,695
2020-11-23
https://stackoverflow.com/questions/64973695/automatically-register-new-prefect-flows
Is there a mechanism to automatically register flows/new flows if a local agent is running, without having to manually run e.g. flow.register(...) on each one? In Airflow, I believe they have a process that regularly scans for any files with dag in the name in the specified airflow home folder, then searches them for DAG objects. And if it finds them it loads them so they are accessible through the UI without having to manually 'register' them. Does something similar exist for Prefect? So for example if I just created the following file test_flow.py, without necessarily running it or adding flow.run_agent(), is there a way for it to just be magically registered and accessible through the UI :) - just by it simply existing in the proper place? # prefect_home_folder/test_flow.py import prefect from prefect import task, Flow @task def hello_task(): logger = prefect.context.get("logger") logger.info("Hello, Cloud!") flow = Flow("hello-flow", tasks=[hello_task]) flow.register(project_name='main') I could write a script that behaves like the Airflow process, scanning a folder and registering flows at regular intervals, but I wonder if that's a bit hacky, or if there is a better solution and I'm just thinking too much in terms of Airflow?
Great question (and awesome username!) - in short, I suggest you are thinking too much in terms of Airflow. There are a few reasons this is not currently available in Prefect: explicit is better than implicit Prefect flows are not constrained to live in one place and are not constrained to have the same runtime environments; this makes both the automatic discovery of a flow + re-serializing it complicated from a single agent process (which is not required to share the same runtime environment as the flows it submits) agents are better thought of as being parametrized by deployment infrastructure, not flow storage Ideally for production workflows you'd use a CI/CD process so that anytime you make a code change an automatic job is triggered that re-registers the flow. A few comments that may be helpful: you don't actually need to re-register the flow for every possible code change; for example, if you changed the message that your hello_task logs in your example, you could simply re-save the flow to its original location (what this looks like depends on the type of storage you use). Ultimately you only need to re-register if any of the metadata about your flow changes (retry settings, task names, dependency relationships, etc.) you can use flow.register("My Project", idempotency_key=flow.serialized_hash()) to automatically capture this; this pattern will only register a new version if the flow's backend representation changes in some way
6
6
64,996,339
2020-11-24
https://stackoverflow.com/questions/64996339/get-cpu-and-gpu-temp-using-python-without-admin-access-windows
I posted this question, asking how to get the CPU and GPU temp on Windows 10: Get CPU and GPU Temp using Python Windows. For that question, I didn't include the restriction (at least when I first posted the question, and for quite a bit after that) for no admin access. I then modified my question to invalidate answers that need admin access (which was the only working answer then). A mod rolled back to a previous version of my question, and asked me to post a new question, so I have done that. I was wondering if there was a way to get the CPU and the GPU temperature in python. I have already found a way for Linux (using psutil.sensors_temperature), and I wanted to find a way for Windows. Info: OS: Windows 10 Python: Python 3.8.3 64-bit (So no 32 bit DLLs) Below are some of the things I tried: When I try doing the below, I get None (from here - https://stackoverflow.com/a/3264262/13710015): import wmi w = wmi.WMI() print(w.Win32_TemperatureProbe()[0].CurrentReading) When I try doing the below, I get an error (from here - https://stackoverflow.com/a/3264262/13710015): import wmi w = wmi.WMI(namespace="root\wmi") temperature_info = w.MSAcpi_ThermalZoneTemperature()[0] print(temperature_info.CurrentTemperature) Error: wmi.x_wmi: <x_wmi: Unexpected COM Error (-2147217396, 'OLE error 0x8004100c', None, None)> When I tried doing the below (from here - https://stackoverflow.com/a/58924992/13710015), I got the output shown after the code: import ctypes import ctypes.wintypes as wintypes from ctypes import windll LPDWORD = ctypes.POINTER(wintypes.DWORD) LPOVERLAPPED = wintypes.LPVOID LPSECURITY_ATTRIBUTES = wintypes.LPVOID GENERIC_READ = 0x80000000 GENERIC_WRITE = 0x40000000 GENERIC_EXECUTE = 0x20000000 GENERIC_ALL = 0x10000000 FILE_SHARE_WRITE=0x00000004 ZERO=0x00000000 CREATE_NEW = 1 CREATE_ALWAYS = 2 OPEN_EXISTING = 3 OPEN_ALWAYS = 4 TRUNCATE_EXISTING = 5 FILE_ATTRIBUTE_NORMAL = 0x00000080 INVALID_HANDLE_VALUE = -1 FILE_DEVICE_UNKNOWN=0x00000022 METHOD_BUFFERED=0 FUNC=0x900 FILE_WRITE_ACCESS=0x002 NULL = 0
FALSE = wintypes.BOOL(0) TRUE = wintypes.BOOL(1) def CTL_CODE(DeviceType, Function, Method, Access): return (DeviceType << 16) | (Access << 14) | (Function <<2) | Method def _CreateFile(filename, access, mode, creation, flags): """See: CreateFile function http://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).asp """ CreateFile_Fn = windll.kernel32.CreateFileW CreateFile_Fn.argtypes = [ wintypes.LPWSTR, # _In_ LPCTSTR lpFileName wintypes.DWORD, # _In_ DWORD dwDesiredAccess wintypes.DWORD, # _In_ DWORD dwShareMode LPSECURITY_ATTRIBUTES, # _In_opt_ LPSECURITY_ATTRIBUTES lpSecurityAttributes wintypes.DWORD, # _In_ DWORD dwCreationDisposition wintypes.DWORD, # _In_ DWORD dwFlagsAndAttributes wintypes.HANDLE] # _In_opt_ HANDLE hTemplateFile CreateFile_Fn.restype = wintypes.HANDLE return wintypes.HANDLE(CreateFile_Fn(filename, access, mode, NULL, creation, flags, NULL)) handle=_CreateFile('\\\\\.\PhysicalDrive0',GENERIC_WRITE,FILE_SHARE_WRITE,OPEN_EXISTING,ZERO) def _DeviceIoControl(devhandle, ioctl, inbuf, inbufsiz, outbuf, outbufsiz): """See: DeviceIoControl function http://msdn.microsoft.com/en-us/library/aa363216(v=vs.85).aspx """ DeviceIoControl_Fn = windll.kernel32.DeviceIoControl DeviceIoControl_Fn.argtypes = [ wintypes.HANDLE, # _In_ HANDLE hDevice wintypes.DWORD, # _In_ DWORD dwIoControlCode wintypes.LPVOID, # _In_opt_ LPVOID lpInBuffer wintypes.DWORD, # _In_ DWORD nInBufferSize wintypes.LPVOID, # _Out_opt_ LPVOID lpOutBuffer wintypes.DWORD, # _In_ DWORD nOutBufferSize LPDWORD, # _Out_opt_ LPDWORD lpBytesReturned LPOVERLAPPED] # _Inout_opt_ LPOVERLAPPED lpOverlapped DeviceIoControl_Fn.restype = wintypes.BOOL # allocate a DWORD, and take its reference dwBytesReturned = wintypes.DWORD(0) lpBytesReturned = ctypes.byref(dwBytesReturned) status = DeviceIoControl_Fn(devhandle, ioctl, inbuf, inbufsiz, outbuf, outbufsiz, lpBytesReturned, NULL) return status, dwBytesReturned class OUTPUT_temp(ctypes.Structure): """See: 
http://msdn.microsoft.com/en-us/library/aa363972(v=vs.85).aspx""" _fields_ = [ ('Board Temp', wintypes.DWORD), ('CPU Temp', wintypes.DWORD), ('Board Temp2', wintypes.DWORD), ('temp4', wintypes.DWORD), ('temp5', wintypes.DWORD) ] class OUTPUT_volt(ctypes.Structure): """See: http://msdn.microsoft.com/en-us/library/aa363972(v=vs.85).aspx""" _fields_ = [ ('VCore', wintypes.DWORD), ('V(in2)', wintypes.DWORD), ('3.3V', wintypes.DWORD), ('5.0V', wintypes.DWORD), ('temp5', wintypes.DWORD) ] def get_temperature(): FUNC=0x900 outDict={} ioclt=CTL_CODE(FILE_DEVICE_UNKNOWN, FUNC, METHOD_BUFFERED, FILE_WRITE_ACCESS) handle=_CreateFile('\\\\\.\PhysicalDrive0',GENERIC_WRITE,FILE_SHARE_WRITE,OPEN_EXISTING,ZERO) win_list = OUTPUT_temp() p_win_list = ctypes.pointer(win_list) SIZE=ctypes.sizeof(OUTPUT_temp) status, output = _DeviceIoControl(handle, ioclt , NULL, ZERO, p_win_list, SIZE) for field, typ in win_list._fields_: #print ('%s=%d' % (field, getattr(disk_geometry, field))) outDict[field]=getattr(win_list,field) return outDict def get_voltages(): FUNC=0x901 outDict={} ioclt=CTL_CODE(FILE_DEVICE_UNKNOWN, FUNC, METHOD_BUFFERED, FILE_WRITE_ACCESS) handle=_CreateFile('\\\\\.\PhysicalDrive0',GENERIC_WRITE,FILE_SHARE_WRITE,OPEN_EXISTING,ZERO) win_list = OUTPUT_volt() p_win_list = ctypes.pointer(win_list) SIZE=ctypes.sizeof(OUTPUT_volt) status, output = _DeviceIoControl(handle, ioclt , NULL, ZERO, p_win_list, SIZE) for field, typ in win_list._fields_: #print ('%s=%d' % (field, getattr(disk_geometry, field))) outDict[field]=getattr(win_list,field) return outDict print(OUTPUT_temp._fields_) Output: [('Board Temp', <class 'ctypes.c_ulong'>), ('CPU Temp', <class 'ctypes.c_ulong'>), ('Board Temp2', <class 'ctypes.c_ulong'>), ('temp4', <class 'ctypes.c_ulong'>), ('temp5', <class 'ctypes.c_ulong'>)] I tried this code, and it worked, but it needs admin (from here - https://stackoverflow.com/a/62936850/13710015): import clr # the pythonnet module. 
clr.AddReference(r'YourdllPath') from OpenHardwareMonitor.Hardware import Computer c = Computer() c.CPUEnabled = True # get the Info about CPU c.GPUEnabled = True # get the Info about GPU c.Open() while True: for a in range(0, len(c.Hardware[0].Sensors)): # print(c.Hardware[0].Sensors[a].Identifier) if "/intelcpu/0/temperature" in str(c.Hardware[0].Sensors[a].Identifier): print(c.Hardware[0].Sensors[a].get_Value()) c.Hardware[0].Update() I tried this code, but it also needed admin (from here - https://stackoverflow.com/a/49909330/13710015): import clr #package pythonnet, not clr openhardwaremonitor_hwtypes = ['Mainboard','SuperIO','CPU','RAM','GpuNvidia','GpuAti','TBalancer','Heatmaster','HDD'] cputhermometer_hwtypes = ['Mainboard','SuperIO','CPU','GpuNvidia','GpuAti','TBalancer','Heatmaster','HDD'] openhardwaremonitor_sensortypes = ['Voltage','Clock','Temperature','Load','Fan','Flow','Control','Level','Factor','Power','Data','SmallData'] cputhermometer_sensortypes = ['Voltage','Clock','Temperature','Load','Fan','Flow','Control','Level'] def initialize_openhardwaremonitor(): file = 'OpenHardwareMonitorLib.dll' clr.AddReference(file) from OpenHardwareMonitor import Hardware handle = Hardware.Computer() handle.MainboardEnabled = True handle.CPUEnabled = True handle.RAMEnabled = True handle.GPUEnabled = True handle.HDDEnabled = True handle.Open() return handle def initialize_cputhermometer(): file = 'CPUThermometerLib.dll' clr.AddReference(file) from CPUThermometer import Hardware handle = Hardware.Computer() handle.CPUEnabled = True handle.Open() return handle def fetch_stats(handle): for i in handle.Hardware: i.Update() for sensor in i.Sensors: parse_sensor(sensor) for j in i.SubHardware: j.Update() for subsensor in j.Sensors: parse_sensor(subsensor) def parse_sensor(sensor): if sensor.Value is not None: if type(sensor).__module__ == 'CPUThermometer.Hardware': sensortypes = cputhermometer_sensortypes hardwaretypes = cputhermometer_hwtypes elif 
type(sensor).__module__ == 'OpenHardwareMonitor.Hardware': sensortypes = openhardwaremonitor_sensortypes hardwaretypes = openhardwaremonitor_hwtypes else: return if sensor.SensorType == sensortypes.index('Temperature'): print(u"%s %s Temperature Sensor #%i %s - %s\u00B0C" % (hardwaretypes[sensor.Hardware.HardwareType], sensor.Hardware.Name, sensor.Index, sensor.Name, sensor.Value)) if __name__ == "__main__": print("OpenHardwareMonitor:") HardwareHandle = initialize_openhardwaremonitor() fetch_stats(HardwareHandle) print("\nCPUMonitor:") CPUHandle = initialize_cputhermometer() fetch_stats(CPUHandle) I am also fine with using C/C++ extensions with Python, portable command-line apps (which will be run with subprocess.Popen), DLLs, and commands (which will be run with subprocess.Popen). Non-portable apps are not allowed.
Problem An unprivileged user needs access, in a secure manner, to functionality only available to a privileged user. Solution Create a server-client interface where functionality is decoupled from the actual system, so as to prevent security issues (i.e. don't just pipe commands or options directly from the client for execution by the server). Consider using gRPC for this server-client interface. If you haven't used gRPC before, here's an example of what this entails: Create a temperature.proto: syntax = "proto3"; option java_multiple_files = true; option java_package = "temperature"; option java_outer_classname = "TemperatureProto"; option objc_class_prefix = "TEMP"; package temperature; service SystemTemperature { rpc GetTemperature (TemperatureRequest) returns (TemperatureReply) {} } message TemperatureRequest { string name = 1; } message TemperatureReply { string message = 1; } Compile the aforementioned with protoc (provided by the grpcio-tools package). python -m grpc_tools.protoc --proto_path=. temperature.proto --python_out=. --grpc_python_out=. This will generate a file named temperature_pb2_grpc.py containing the SystemTemperature servicer base class; subclass it (typically in your server module) to define the functionality and response for GetTemperature. Note that you can implement logic branches based on the TemperatureRequest options passed from the client. Once complete, simply write and run a temperature_server.py from your privileged user, and temperature_client.py from your unprivileged user. References gRPC: https://grpc.io gRPC QuickStart guide: https://grpc.io/docs/languages/ruby/quickstart/ protobuf: https://developers.google.com/protocol-buffers/
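Independent of the transport, the security-critical part of the answer is the decoupling: the privileged server maps a closed set of request names onto its own functions and never executes anything the client sends. A transport-agnostic sketch of that dispatch (all names here are mine, and the temperature read is a placeholder for the real privileged call):

```python
# Privileged side: a closed table of permitted operations.
def read_cpu_temperature():
    return 42.0  # placeholder for the real privileged sensor read

HANDLERS = {
    "GetTemperature": read_cpu_temperature,
}

def handle_request(name):
    """Dispatch by exact name only; unknown requests are rejected, never executed."""
    handler = HANDLERS.get(name)
    if handler is None:
        return {"error": f"unknown request: {name!r}"}
    return {"message": handler()}

print(handle_request("GetTemperature"))  # → {'message': 42.0}
print(handle_request("rm -rf /"))        # rejected with an error, never run
```

In the gRPC version this table is effectively the set of rpc methods declared in the .proto file, which is what makes the interface safe to expose across the privilege boundary.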
6
2
64,910,582
2020-11-19
https://stackoverflow.com/questions/64910582/can-we-make-the-ml-model-pickle-file-more-robust-by-accepting-or-ignoring-n
I have trained a ML model and stored it in a pickle file. In my new script, I am reading new 'real world data', on which I want to do a prediction. However, I am struggling. I have a column (containing string values), like: Sex Male Female # This is just an example; in reality it has many more unique values Now comes the issue. I received a new (unique) value, and now I cannot make predictions anymore (e.g. 'Neutral' was added). Since I am transforming the 'Sex' column into Dummies, I do have the issue that my model is not accepting the input anymore: Number of features of the model must match the input. Model n_features is 2 and input n_features is 3 Therefore my question: is there a way I can make my model robust, and just ignore this class? But do a prediction, without the specific info? What I have tried: df = pd.read_csv('dataset_that_i_want_to_predict.csv') model = pickle.load(open("model_trained.sav", 'rb')) # I have an 'example_df' containing just 1 row of training data (this is exactly what the model needs) example_df = pd.read_csv('reading_one_row_of_trainings_data.csv') # Checking for missing columns, and adding that to the new dataset missing_cols = set(example_df.columns) - set(df.columns) for column in missing_cols: df[column] = 0 #adding the missing columns, with 0 values (Which is ok. since everything is dummy) # make sure that we have the same order df = df[example_df.columns] # The prediction will lead to an error! results = model.predict(df) # ValueError: Number of features of the model must match the input. Model n_features is X and n_features is Y Note, I searched, but could not find any helpful solution (not here, here, or here). UPDATE Also found this article. But same issue here... we can make the test set with the same columns as the training set... but what about new real world data (e.g. the new value 'Neutral')?
You can't add a new category or feature to the model after the training part is done; the model itself would need retraining. But you can make the preprocessing robust: OneHotEncoder with handle_unknown='ignore' handles the problem of new categories appearing inside some feature in test data. It will take care of keeping the columns consistent in your training and test data with respect to categorical variables. from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import OneHotEncoder import numpy as np import pandas as pd from sklearn import set_config set_config(print_changed_only=True) df = pd.DataFrame({'feature_1': np.random.rand(20), 'feature_2': np.random.choice(['male', 'female'], (20,))}) target = pd.Series(np.random.choice(['yes', 'no'], (20,))) model = Pipeline([('preprocess', ColumnTransformer([('ohe', OneHotEncoder(handle_unknown='ignore'), [1])], remainder='passthrough')), ('lr', LogisticRegression())]) model.fit(df, target) # let us introduce new categories in feature_2 in test data test_df = pd.DataFrame({'feature_1': np.random.rand(20), 'feature_2': np.random.choice(['male', 'female', 'neutral', 'unknown'], (20,))}) model.predict(test_df) # array(['yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', # 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', # 'yes', 'yes'], dtype=object)
7
7
64,995,369
2020-11-24
https://stackoverflow.com/questions/64995369/geopandas-warning-on-read-file
I'm getting the following warning when reading a GeoJSON with geopandas' read_file(): ...geodataframe.py:422: RuntimeWarning: Sequential read of iterator was interrupted. Resetting iterator. This can negatively impact the performance. for feature in features_lst: Here's the code sample I used: crime_gdf = gpd.read_file('datasets/crimes.geojson', bbox=bbox) crimes.geojson is a file containing a large number of points, each with a 'Crime type' bbox defines the boundaries The code runs as expected, but I don't understand that warning. EDIT I converted the GeoJSON to feather, and I get the same warning.
See my comment in Fiona's issue tracker: https://github.com/Toblerity/Fiona/issues/986 GDAL (the library Fiona uses to access the geodata) maintains an iterator over the features that are currently read. There are some operations that, for some drivers, can influence this iterator. Thus, after such operations we have to ensure that the iterator is set back to the correct position, so that a continuous read of the data is ensured. Such operations include counting all features in a dataset or calculating its extent. There are different types of drivers in GDAL. Some drivers support random access, while some do not. For the drivers that do not support random access, resetting the iterator involves reading all features again up to the iterator position. As this is a potentially costly operation, this RuntimeWarning is emitted, so that users are aware of this behavior.
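If the slower sequential re-read is acceptable for your workload, the warning itself can be silenced with the stdlib warnings machinery, with no change to Fiona needed. A sketch (the message pattern is copied from the warning text in the question; in a real script you would install the filter once, before calling gpd.read_file):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # The filter geopandas users would put at the top of their script:
    warnings.filterwarnings(
        "ignore",
        message="Sequential read of iterator was interrupted",
        category=RuntimeWarning,
    )
    # Simulate the warning Fiona emits during the iterator reset:
    warnings.warn(
        "Sequential read of iterator was interrupted. Resetting iterator. "
        "This can negatively impact the performance.",
        RuntimeWarning,
    )

print(len(caught))  # → 0: the filter swallowed the warning
```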
10
11
64,927,909
2020-11-20
https://stackoverflow.com/questions/64927909/failed-to-read-descriptor-from-node-connection-a-device-attached-to-the-system
I got this while running a Selenium webdriver script in Python. I set the path in the system environment, and I tried downloading the webdriver that matches my Chrome version, and also the latest version, but I still get this error:

[8552:6856:1120/155118.770:ERROR:device_event_log_impl.cc(211)] [15:51:18.771] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
[8552:6856:1120/155118.774:ERROR:device_event_log_impl.cc(211)] [15:51:18.774] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
[8552:6856:1120/155118.821:ERROR:device_event_log_impl.cc(211)] [15:51:18.821] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)

I used this in my code:

driver = webdriver.Chrome(resource_path("C:\\webdriver\\chromedriver.exe"))  # to open the chromebrowser
driver.get("https://web.whatsapp.com")
After a week of looking for an answer to this error, I ended up with a solution: you just need to install the pywin32 library and it will no longer give you the error. Open cmd and type:

pip install pywin32

and you are good to go!
46
10
64,901,945
2020-11-18
https://stackoverflow.com/questions/64901945/how-to-send-a-progress-of-operation-in-a-fastapi-app
I have deployed a fastapi endpoint:

from fastapi import FastAPI, UploadFile
from typing import List

app = FastAPI()


@app.post('/work/test')
async def testing(files: List[UploadFile]):
    for i in files:
        .......
        # do a lot of operations on each file

        # after that I am just writing the processed data into a mysql database
        # cur.execute(...)
        # cur.commit()
        .......

    # just returning "OK" to confirm data is written into mysql
    return {"response": "OK"}

I can request output from the API endpoint and it's working perfectly for me. Now, the biggest challenge for me is to know how much time each iteration is taking, because in the UI part (for those who are accessing my API endpoint) I want to help them show a progress bar (TIME TAKEN) for each iteration/file being processed. Is there any possible way for me to achieve it? If so, please help me out on how I can proceed further? Thank you.
Below is a solution which uses unique identifiers and a globally available dictionary which holds information about the jobs:

NOTE: The code below is safe to use as long as you use dynamic key values (the sample uses uuid) and keep the application within a single process.

To start the app, create a file main.py
Run uvicorn main:app --reload
Create a job entry by accessing http://127.0.0.1:8000/
Repeat step 3 to create multiple jobs
Go to the http://127.0.0.1/status page to see page statuses.
Go to http://127.0.0.1/status/{identifier} to see progression of a job by its job id.

Code of the app:

from fastapi import FastAPI, UploadFile
import uuid
from typing import List
import asyncio

context = {'jobs': {}}

app = FastAPI()


async def do_work(job_key, files=None):
    iter_over = files if files else range(100)
    for file_number, file in enumerate(iter_over):
        job_info = context['jobs'][job_key]
        job_info['iteration'] = file_number
        job_info['status'] = 'inprogress'
        await asyncio.sleep(1)
    context['jobs'][job_key]['status'] = 'done'


@app.post('/work/test')
async def testing(files: List[UploadFile]):
    identifier = str(uuid.uuid4())
    context['jobs'][identifier] = {}
    asyncio.run_coroutine_threadsafe(do_work(identifier, files),
                                     loop=asyncio.get_running_loop())
    return {"identifier": identifier}


@app.get('/')
async def get_testing():
    identifier = str(uuid.uuid4())
    context['jobs'][identifier] = {}
    asyncio.run_coroutine_threadsafe(do_work(identifier),
                                     loop=asyncio.get_running_loop())
    return {"identifier": identifier}


@app.get('/status')
def status():
    return {'all': list(context['jobs'].values())}


@app.get('/status/{identifier}')
async def status_by_id(identifier):
    return {"status": context['jobs'].get(identifier,
                                          'job with that identifier is undefined')}
24
10
64,983,112
2020-11-24
https://stackoverflow.com/questions/64983112/keras-vertical-ensemble-model-with-condition-in-between
I have trained two separate models:

ModelA: checks if the input text is related to my work (binary classifier [related/not-related])
ModelB: classifier of related texts (classifier [good/normal/bad]). Only the related texts are relayed to this model from ModelA.

I want ModelC: an ensemble classifier that outputs [good/normal/bad/not-related].

I'll be training in batches, and there can be a mix of not-related and good/normal/bad in one batch. I need them separated. Some pseudo code of what I need:

# Output of modelA will be a vector, I presume `(1, None)` where `None` is batch
def ModelC.predict(input):
    outputA = ModelA(input)
    if outputA == 'not-related':
        return outputA
    return ModelB(outputA)

I don't know how to include the if logic inside the model's inference. How can I achieve this?
Just define your own model. I'm surprised your other models are outputting strings instead of numbers, but without more info this is about all I can give you, so I will assume the output of model A is a string.

import tensorflow as tf

class ModelC(tf.keras.Model):
    def __init__(self, A, B):
        super(ModelC, self).__init__()
        self.A = A
        self.B = B

    def call(self, inputs, training=False):
        x = self.A(inputs, training)
        if x == 'not-related':
            return x
        return self.B(inputs, training)
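The control flow itself does not depend on Keras: the ensemble is just a gate model followed by a second classifier. A framework-free sketch of the same dispatch logic (the three functions below are illustrative stand-ins, not the question's trained models):

```python
def model_a(text):
    # toy gate: anything mentioning 'work' counts as related
    return 'related' if 'work' in text else 'not-related'


def model_b(text):
    # toy second-stage head, only ever called on related inputs
    return 'good' if 'great' in text else 'normal'


def model_c(text):
    # the ensemble: four possible outputs -> good / normal / bad / not-related
    if model_a(text) == 'not-related':
        return 'not-related'
    return model_b(text)


print(model_c('great work'))   # good
print(model_c('the weather'))  # not-related
```

Wrapping the two trained models the same way (as in the ModelC class above) keeps the short-circuit behavior: model B never sees inputs that model A rejected.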
7
2
64,993,222
2020-11-24
https://stackoverflow.com/questions/64993222/python-neat-not-learning-further-after-a-certain-point
It seems that my program is trying to learn until a certain point, and then it's satisfied and stops improving and changing at all. With my testing it usually goes to a value of -5 at most, and then it remains there no matter how long I keep it running. The result set does not change either.

Just to keep track of it I made my own kind of logging thing to see which did best. The array of ones and zeroes refers to how often the AI made a right choice (1), and how often the AI made a wrong choice (0).

My goal is to get the AI to repeat a pattern of going above 0.5 and then going below 0.5, not necessarily find the odd number. This was meant as just a little test to see if I could get an AI working properly with some basic data, before doing something a bit more advanced. But unfortunately it's not working and I am not certain why.

The code:

import os
import neat


def main(genomes, config):
    networks = []
    ge = []
    choices = []

    for _, genome in genomes:
        network = neat.nn.FeedForwardNetwork.create(genome, config)
        networks.append(network)

        genome.fitness = 0
        ge.append(genome)

        choices.append([])

    for x in range(25):
        for i, genome in enumerate(ge):
            output = networks[i].activate([x])
            # print(str(x) + " - " + str(i) + " chose " + str(output[0]))
            if output[0] > 0.5:
                if x % 2 == 0:
                    ge[i].fitness += 1
                    choices[i].append(1)
                else:
                    ge[i].fitness -= 5
                    choices[i].append(0)
            else:
                if not x % 2 == 0:
                    ge[i].fitness += 1
                    choices[i].append(1)
                else:
                    ge[i].fitness -= 5
                    choices[i].append(0)
            pass

            # Optional death function, if I use this there are no winners at any point.
            # if ge[i].fitness <= 20:
            #     ge[i].fitness -= 100
            #     ge.pop(i)
            #     choices.pop(i)
            #     networks.pop(i)

    if len(ge) > 0:
        fittest = -1
        fitness = -999999
        for i, genome in enumerate(ge):
            if ge[i].fitness > fitness:
                fittest = i
                fitness = ge[i].fitness

        print("Best: " + str(fittest) + " with fitness " + str(fitness))
        print(str(choices[fittest]))
    else:
        print("Done with no best.")


def run(config_path):
    config = neat.config.Config(neat.DefaultGenome, neat.DefaultReproduction,
                                neat.DefaultSpeciesSet, neat.DefaultStagnation,
                                config_path)

    pop = neat.Population(config)

    #pop.add_reporter(neat.StdOutReporter(True))
    #stats = neat.StatisticsReporter()
    #pop.add_reporter(stats)

    winner = pop.run(main, 100)


if __name__ == "__main__":
    local_dir = os.path.dirname(__file__)
    config_path = os.path.join(local_dir, "config-feedforward.txt")
    run(config_path)

The NEAT config:

[NEAT]
fitness_criterion     = max
fitness_threshold     = 100000
pop_size              = 5000
reset_on_extinction   = False

[DefaultGenome]
# node activation options
activation_default      = tanh
activation_mutate_rate  = 0.0
activation_options      = tanh

# node aggregation options
aggregation_default     = sum
aggregation_mutate_rate = 0.0
aggregation_options     = sum

# node bias options
bias_init_mean          = 0.0
bias_init_stdev         = 1.0
bias_max_value          = 30.0
bias_min_value          = -30.0
bias_mutate_power       = 0.5
bias_mutate_rate        = 0.7
bias_replace_rate       = 0.1

# genome compatibility options
compatibility_disjoint_coefficient = 1.0
compatibility_weight_coefficient   = 0.5

# connection add/remove rates
conn_add_prob           = 0.5
conn_delete_prob        = 0.5

# connection enable options
enabled_default         = True
enabled_mutate_rate     = 0.1

feed_forward            = True
initial_connection      = full

# node add/remove rates
node_add_prob           = 0.2
node_delete_prob        = 0.2

# network parameters
num_hidden              = 0
num_inputs              = 1
num_outputs             = 1

# node response options
response_init_mean      = 1.0
response_init_stdev     = 0.0
response_max_value      = 30.0
response_min_value      = -30.0
response_mutate_power   = 0.0
response_mutate_rate    = 0.0
response_replace_rate   = 0.0

# connection weight options
weight_init_mean        = 0.0
weight_init_stdev       = 1.0
weight_max_value        = 30
weight_min_value        = -30
weight_mutate_power     = 0.5
weight_mutate_rate      = 0.8
weight_replace_rate     = 0.1

[DefaultSpeciesSet]
compatibility_threshold = 3.0

[DefaultStagnation]
species_fitness_func = max
max_stagnation       = 20
species_elitism      = 2

[DefaultReproduction]
elitism            = 2
survival_threshold = 0.2
Sorry to tell you that this approach just isn't going to work. Remember that neural networks are typically built by doing a matrix multiply and then taking the max with 0 (this is called ReLU), so each layer is basically linear with a cutoff (and no, picking a different activation like sigmoid is not going to help). You want the network to produce >.5, <.5, >.5, <.5, ... 25 times. Imagine what it would take to build that out of ReLU pieces: you would need a network at least ~25 layers deep, and NEAT is just not going to produce a network that large without consistent incremental progress in the evolution.

You are in good company though: what you are doing is equivalent to learning the modulo operator, which has been studied for many years. Here is one post that succeeds, though not using NEAT: Keras: Making a neural network to find a number's modulus

The only real progress you could make with NEAT is to give the network more features as inputs, e.g. give it x % 2 as input and it will learn quickly, though this is obviously 'cheating'.
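The 'cheating' suggestion works because with x % 2 as an input the task becomes linearly separable: a single threshold on that one feature already makes the right choice for every x, so no deep network is needed. A quick sketch of how that plays out against the question's own scoring rule (pure Python, no NEAT):

```python
def choose(x):
    # a one-feature "network": output > 0.5 exactly when x is even
    parity = x % 2
    return 1.0 - parity  # 1.0 for even x, 0.0 for odd x


# replay the fitness function from the question
fitness = 0
for x in range(25):
    output = choose(x)
    if output > 0.5:
        fitness += 1 if x % 2 == 0 else -5
    else:
        fitness += 1 if x % 2 != 0 else -5

print(fitness)  # 25, a perfect score on the question's scoring rule
```

With the raw integer x as the only input, no such single threshold exists, which is why the evolution stalls.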
6
7
64,899,579
2020-11-18
https://stackoverflow.com/questions/64899579/how-to-debug-a-python-script-launched-by-a-third-party-app
I'm using Linux Eclipse (pydev) as IDE to develop python scripts that are launched by an application written in C++. I can debug the python script without problems in the IDE, but the environment is not real (the C++ program sends and receives messages through stdin/stdout and it's a complex communication channel that I can't fully reproduce by writing the messages by hand).

Until now I was using log messages to debug (poor man's debug) but it's getting too complex. When I do something similar in PHP I can just leave xdebug listening and add breakpoints in Netbeans. Very neat and easy. Is it possible to do something like that in Python 3.X (with Eclipse or another IDE)?

NOTE: I know there is a Pydev / Attach to Process functionality, but it doesn't work; it always fails to attach.

NOTE2: There is also a built-in breakpoint() in Python 3.7, but it links to a debugger and it also fails; the IDE never gets the control.
After some research, this is the best option I have found. Since no other solution was provided, I post it just in case anyone has the same problem.

Python has an integrated debugger: pdb. It works as a module, but you can't use it if you don't control the window (i.e. if you don't launch the script yourself). To solve this, some developers have created modules that add a layer on top of pdb. I have tried a few, and the easiest and most visual is rpudb (but have a look also at this).

To install it:

pip3 install https://github.com/msbrogli/rpudb/archive/master.zip

(if you install it using the pip3 install rpudb command, it will install an old version only valid for Python 2)

Then, you use it just by adding an import and a function call:

import rpudb
.....
rpudb.set_trace('127.0.0.1', 4444)
.....

Launch the program and it will stop at the set_trace call. To debug it (and continue), open a terminal and launch a telnet like this:

telnet 127.0.0.1 4444

You will have a visual debugger in front of you, with the advantage that you can debug not only local programs but also remote ones (just change the IP).
6
4
64,916,693
2020-11-19
https://stackoverflow.com/questions/64916693/jupyter-notebook-error-dyld-library-not-loaded-corefoundation-after-macos-big
After updating to macOS Big Sur I get the following error when I run jupyter notebook:

dyld: Library not loaded: /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
  Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/Resources/Python.app/Contents/MacOS/Python
  Reason: image not found
[1]    2971 abort      jupyter notebook

Any idea as to how to solve this? I tried re-installing jupyter-notebook using brew (I also updated brew) but I keep getting the same error.
I had the same problem after I upgraded to macOS Big Sur. I updated my Python version from 3.6.4 to 3.9.0, and after that I just uninstalled the notebook and reinstalled it. Now it works.
9
3
64,952,027
2020-11-22
https://stackoverflow.com/questions/64952027/compute-l2-distance-with-numpy-using-matrix-multiplication
I'm trying to do the assignments from the Stanford CS231n 2017 CNN course by myself. I'm trying to compute the L2 distance using only matrix multiplication and sum broadcasting with NumPy. The L2 distance is:

d(p, q) = sqrt(sum_i (p_i - q_i)^2)

And I think I can do it if I use this formula:

||p - q||^2 = ||p||^2 - 2 * p . q + ||q||^2

The following code shows three methods to compute L2 distance. If I compare the output from the method compute_distances_two_loops with the output from the method compute_distances_one_loop, both are equal. But when I compare the output from the method compute_distances_two_loops with the output from the method compute_distances_no_loops, where I have implemented the L2 distance using only matrix multiplication and sum broadcasting, they are different.

def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
        for j in xrange(num_train):
            # Compute the l2 distance between the ith test point and the jth
            # training point, and store the result in dists[i, j]. You should
            # not use a loop over dimension.
            #dists[i, j] = np.sqrt(np.sum((X[i, :] - self.X_train[j, :]) ** 2))
            dists[i, j] = np.sqrt(np.sum(np.square(X[i, :] - self.X_train[j, :])))
    return dists

def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
        # Compute the l2 distance between the ith test point and all training
        # points, and store the result in dists[i, :].
        dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
    print(dists.shape)
    return dists

def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    # Compute the l2 distance between all test points and all training points
    # without using any explicit loops, and store the result in dists.
    #
    # You should implement this function using only basic array operations;
    # in particular you should not use functions from scipy.
    #
    # HINT: Try to formulate the l2 distance using matrix multiplication
    #       and two broadcast sums.
    dists = np.sqrt(-2 * np.dot(X, self.X_train.T) +
                    np.sum(np.square(self.X_train), axis=1) +
                    np.sum(np.square(X), axis=1)[:, np.newaxis])
    print(dists.shape)
    return dists

You can find a full working testable code here.

Do you know what I am doing wrong in compute_distances_no_loops, or elsewhere?

UPDATE:

The code that throws the error message is:

dists_two = classifier.compute_distances_no_loops(X_test)

# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')

And the error message:

Difference was: 372100.327569
Uh-oh! The distance matrices are different
Here is how you can compute pairwise distances between rows of X and Y without creating any 3-dimensional matrices:

def dist(X, Y):
    sx = np.sum(X**2, axis=1, keepdims=True)
    sy = np.sum(Y**2, axis=1, keepdims=True)
    return np.sqrt(-2 * X.dot(Y.T) + sx + sy.T)
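As a quick sanity check, the vectorized version agrees with the naive nested-loop definition up to floating-point error (a standalone NumPy sketch; the random data is illustrative):

```python
import numpy as np


def dist(X, Y):
    # ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, broadcast over all row pairs
    sx = np.sum(X**2, axis=1, keepdims=True)  # shape (n, 1)
    sy = np.sum(Y**2, axis=1, keepdims=True)  # shape (m, 1)
    return np.sqrt(-2 * X.dot(Y.T) + sx + sy.T)


rng = np.random.default_rng(0)
X = rng.random((5, 3))
Y = rng.random((4, 3))

# naive definition: one Euclidean distance per (row of X, row of Y) pair
naive = np.array([[np.sqrt(np.sum((x - y) ** 2)) for y in Y] for x in X])

print(np.allclose(dist(X, Y), naive))  # True
```

The keepdims=True is what makes the two squared-norm terms broadcast into the full (n, m) grid, which is exactly where the question's version needed the [:, np.newaxis].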
10
13
64,994,341
2020-11-24
https://stackoverflow.com/questions/64994341/gauge-needle-for-plotly-indicator-graph
I currently have an indicator chart (gauge) from plotly where the value is shown by how far a dark blue center reaches. However, that looks a bit odd to me, so I would like to change it to have a needle/pointer from the center to the value, like a speedometer. Here is my current code: import plotly.graph_objects as go fig = go.Figure(go.Indicator( mode = "gauge+number", number = {'suffix': "% match", 'font': {'size': 50}}, value = 80, domain = {'x': [0,1], 'y': [0,1]}, gauge = { 'axis': {'range': [None, 100], 'tickwidth': 1, 'tickcolor': "darkblue"}, 'bar': {'color': "darkblue"}, 'bgcolor': "white", 'borderwidth': 2, 'bordercolor': "gray", 'steps': [ {'range': [0, 33], 'color': 'red'}, {'range': [33, 66], 'color': 'yellow'}, {'range': [66,100], 'color': 'green'}], })) fig.update_layout(font = {'color': "black", 'family': "Arial"}) fig.show()
My suggestion would be to add an arrow annotation that overlays the indicator chart. By setting the range of the chart to [-1,1] x [0,1] we are basically creating a new coordinate system that the arrow will live on, and we can approximate where the arrow should point in order to correspond to the value on your indicator chart. This also ensures that the point (0,0) is at the center of your chart, which is convenient since that will be one of the arrow's endpoints.

When adding an arrow annotation, ax and ay are the coordinates of the tail of your arrow, so we want that at the middle of our chart, which would be ax=0 and ay=0. I placed the arrow straight up to show that the radius of the indicator chart is approximately 0.9 units for my browser window. This may be different for yours.

import plotly.graph_objects as go

fig = go.Figure(go.Indicator(
    mode = "gauge+number",
    number = {'suffix': "% match", 'font': {'size': 50}},
    value = 80,
    domain = {'x': [0,1], 'y': [0,1]},
    gauge = {
        'axis': {'range': [None, 100], 'tickwidth': 1, 'tickcolor': "darkblue"},
        'bar': {'color': "darkblue"},
        'bgcolor': "white",
        'borderwidth': 2,
        'bordercolor': "gray",
        'steps': [
            {'range': [0, 33], 'color': 'red'},
            {'range': [33, 66], 'color': 'yellow'},
            {'range': [66, 100], 'color': 'green'}],
    }))

fig.update_layout(
    font={'color': "black", 'family': "Arial"},
    xaxis={'showgrid': False, 'range': [-1, 1]},
    yaxis={'showgrid': False, 'range': [0, 1]},
    # plot_bgcolor='rgba(0,0,0,0)'
)

## by setting the range of the layout, we are effectively adding a grid in the background
## and the radius of the gauge diagram is roughly 0.9 when the grid has a range of [-1,1]x[0,1]
fig.add_annotation(
    ax=0, ay=0, axref='x', ayref='y',
    x=0, y=0.9, xref='x', yref='y',
    showarrow=True, arrowhead=3, arrowsize=1, arrowwidth=4
)

fig.show()

Now while we could use trial and error to find where the arrow should end, that's a truly hacky solution which isn't generalizable at all.
For the next steps, I would recommend you choose an aspect ratio for your browser window that keeps the indicator chart as close to a circle as possible (an extreme aspect ratio will make your indicator chart more elliptical, and I am making the simple assumption that the indicator chart is a perfect circle). So, under the assumption that the indicator chart is roughly a circle with radius ≈ 0.9 (in my case; your radius might be different), we can find the x and y coordinates on the circle using polar coordinates: x = r*cos(θ) and y = r*sin(θ). Note that these formulas are only valid for a circle centered at (0,0), which is why we centered your chart at this point.

Since the value on the indicator is 80 on a scale of 0-100, we are 80/100 of the way through a 180-degree rotation, which comes out to 180 degrees * (80/100) = 144 degrees. So you are rotating 144 degrees clockwise from the lower left corner, or 36 degrees counterclockwise from the lower right corner. Plugging in, we get x = 0.9*cos(36 degrees) = 0.72811529493 and y = 0.9*sin(36 degrees) = 0.52900672706. Updating the annotation:

fig.add_annotation(
    ax=0, ay=0, axref='x', ayref='y',
    x=0.72811529493, y=0.52900672706, xref='x', yref='y',
    showarrow=True, arrowhead=3, arrowsize=1, arrowwidth=4
)

We get the following image:

So this is pretty close but not an exact science. For my browser window, let's adjust the angle slightly higher to 40 degrees. Repeating the same process, x = 0.9*cos(40 degrees) = 0.6894399988 and y = 0.9*sin(40 degrees) = 0.57850884871, and updating the annotation coordinates, I get the following chart:

To make the chart prettier, we can now remove the tick labels of the axes we added for the arrow annotation, and also make the background transparent. And to make this method easier to adjust, I have made theta and r variables.
from numpy import radians, cos, sin
import plotly.graph_objects as go

fig = go.Figure(go.Indicator(
    mode = "gauge+number",
    number = {'suffix': "% match", 'font': {'size': 50}},
    value = 80,
    domain = {'x': [0,1], 'y': [0,1]},
    gauge = {
        'axis': {'range': [None, 100], 'tickwidth': 1, 'tickcolor': "darkblue"},
        'bar': {'color': "darkblue"},
        'bgcolor': "white",
        'borderwidth': 2,
        'bordercolor': "gray",
        'steps': [
            {'range': [0, 33], 'color': 'red'},
            {'range': [33, 66], 'color': 'yellow'},
            {'range': [66, 100], 'color': 'green'}],
    }))

fig.update_layout(
    font={'color': "black", 'family': "Arial"},
    xaxis={'showgrid': False, 'showticklabels': False, 'range': [-1, 1]},
    yaxis={'showgrid': False, 'showticklabels': False, 'range': [0, 1]},
    plot_bgcolor='rgba(0,0,0,0)'
)

## by setting the range of the layout, we are effectively adding a grid in the background
## and the radius of the gauge diagram is roughly 0.9 when the grid has a range of [-1,1]x[0,1]
theta = 40
r = 0.9
x_head = r * cos(radians(theta))
y_head = r * sin(radians(theta))

fig.add_annotation(
    ax=0, ay=0, axref='x', ayref='y',
    x=x_head, y=y_head, xref='x', yref='y',
    showarrow=True, arrowhead=3, arrowsize=1, arrowwidth=4
)

fig.show()
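The angle can also be derived instead of eyeballed: a value v on a 0-100 half-circle gauge sits at θ = 180° · (1 − v/100), measured counterclockwise from the positive x-axis. A small helper (pure Python; the function name needle_tip is my own) that returns the arrow-head coordinates for any value:

```python
from math import radians, cos, sin


def needle_tip(value, r=0.9, vmin=0, vmax=100):
    # map value -> angle: vmin sits at 180 deg (left end), vmax at 0 deg (right end)
    frac = (value - vmin) / (vmax - vmin)
    theta = radians(180 * (1 - frac))
    return r * cos(theta), r * sin(theta)


x, y = needle_tip(80)
print(round(x, 5), round(y, 5))  # 0.72812 0.52901, matching the hand calculation
```

Feeding x and y into fig.add_annotation then places the needle for any gauge value, not just 80.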
7
7
64,995,178
2020-11-24
https://stackoverflow.com/questions/64995178/decryption-failed-or-bad-record-mac-in-multiprocessing
I am trying to get all the PC cores to work simultaneously while filling a PostgreSQL database. I have edited the code to make a reproducible example of the error I am getting:

Traceback (most recent call last):
  File "test2.py", line 50, in <module>
    download_all_sites(sites)
  File "test2.py", line 36, in download_all_sites
    pool.map(download_site, sites)
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The full code which makes the error:

import requests
import multiprocessing
import time
import os
import psycopg2

session = None
conn = psycopg2.connect(user="user",
                        password="pass123",
                        host="127.0.0.1",
                        port="5432",
                        database="my_db")
cursor = conn.cursor()


def set_global_session():
    global session
    if not session:
        session = requests.Session()


def download_site(domain):
    url = "http://" + domain
    with session.get(url) as response:
        temp = response.text.lower()
        found = [i for i in keywords if i in temp]
        query = """INSERT INTO test (domain, keyword) VALUES (%s, %s)"""
        cursor.execute(query, (domain, found))


def download_all_sites(sites):
    with multiprocessing.Pool(processes=os.cpu_count(), initializer=set_global_session) as pool:
        pool.map(download_site, sites)


if __name__ == "__main__":
    sites = ['google.com'] * 10
    keywords = ['google', 'success']
    start_time = time.time()
    download_all_sites(sites)
    duration = time.time() - start_time
    conn.commit()
    print(f"Finished {len(sites)} in {duration} seconds")
Create a new postgres connection in each process. Libpq connections shouldn't be used across forked processes (which is what multiprocessing does); this is mentioned in the second warning box in the postgres docs.

import requests
import multiprocessing
import time
import os
import psycopg2

session = None


def set_global_session():
    global session
    if not session:
        session = requests.Session()


def download_site(domain):
    url = "http://" + domain
    with session.get(url) as response:
        # temp = response.text.lower()
        # found = [i for i in keywords if i in temp]
        # query = """INSERT INTO test (domain, keyword) VALUES (%s, %s)"""
        conn = psycopg2.connect(
            "dbname=mf port=5959 host=localhost user=mf_usr"
        )
        cursor = conn.cursor()
        query = """INSERT INTO mytable (name) VALUES (%s)"""
        cursor.execute(query, (domain, ))
        conn.commit()
        conn.close()


def download_all_sites(sites):
    with multiprocessing.Pool(
        processes=os.cpu_count(), initializer=set_global_session
    ) as pool:
        pool.map(download_site, sites)


if __name__ == "__main__":
    sites = ['google.com'] * 10
    keywords = ['google', 'success']
    start_time = time.time()
    download_all_sites(sites)
    duration = time.time() - start_time
    print(f"Finished {len(sites)} in {duration} seconds")

    # make sure it worked!
    conn = psycopg2.connect("dbname=mf port=5959 host=localhost user=mf_usr")
    cursor = conn.cursor()
    cursor.execute('select count(name) from mytable')
    print(cursor.fetchall())  # verify 10 downloads == 10 records in database

Out:

Finished 10 in 0.9922008514404297 seconds
[(10,)]
8
9
64,987,304
2020-11-24
https://stackoverflow.com/questions/64987304/qtmediaplayer-wont-work-on-frameless-and-translucent-background-pyqt5
I am making a videoplayer with QMediaPlayer, but it won't work on a frameless window with a translucent background. I want to make a round-corner window, so I need a frameless and translucent window. Here is my code:

from PyQt5.QtCore import Qt, QUrl
from PyQt5.QtMultimedia import QMediaContent, QMediaPlayer
from PyQt5.QtMultimediaWidgets import QVideoWidget
from PyQt5.QtWidgets import QApplication, QMainWindow, QFrame
import sys


class Player(QMainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setWindowTitle("PyQt Video Player Widget Example")
        self.resize(600, 400)
        self.frame = QFrame(self)
        self.frame.setStyleSheet('background:grey;border-radius:20px;')
        self.setCentralWidget(self.frame)
        #self.setWindowFlag(Qt.FramelessWindowHint)
        #self.setAttribute(Qt.WA_TranslucentBackground)
        self.mediaPlayer = QMediaPlayer(None, QMediaPlayer.VideoSurface)
        videoWidget = QVideoWidget(self.frame)
        videoWidget.setGeometry(10, 10, 580, 380)
        self.resize(600, 400)
        self.mediaPlayer.error.connect(self.handleError)
        self.mediaPlayer.setVideoOutput(videoWidget)
        self.mediaPlayer.setMedia(
            QMediaContent(QUrl.fromLocalFile("C:/Users/mishra/Desktop/HiddenfilesWindow/10000000_1874628825927192_6229658593205944320_n(1).mp4")))
        self.mediaPlayer.play()

    def handleError(self):
        print("Error: " + self.mediaPlayer.errorString())


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = Player()
    window.show()
    sys.exit(app.exec_())

After setting a translucent background it only plays audio, not video. Does anybody know how to fix it?
Try it:

import sys
from PyQt5.QtCore import Qt, QUrl, QRectF
from PyQt5.QtGui import QPainterPath, QRegion
from PyQt5.QtMultimedia import QMediaContent, QMediaPlayer
from PyQt5.QtMultimediaWidgets import QVideoWidget
from PyQt5.QtWidgets import QApplication, QMainWindow, QFrame, QWidget, QHBoxLayout


class Player(QMainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setWindowTitle("PyQt Video Player Widget Example")
        self.resize(600, 400)
        self.frame = QFrame(self)
#        self.frame.setStyleSheet('background:grey; border-radius: 20px;')
        self.setStyleSheet("Player {background: #000;}")                     # +++
        self.setCentralWidget(self.frame)
#        self.setWindowFlag(Qt.FramelessWindowHint)
#        self.setAttribute(Qt.WA_TranslucentBackground)

        layout = QHBoxLayout(self.frame)                                     # +++
        videoWidget = QVideoWidget()                                         # +++
        layout.addWidget(videoWidget)                                        # +++

        self.mediaPlayer = QMediaPlayer(None, QMediaPlayer.VideoSurface)
#        videoWidget = QVideoWidget(self.frame)
#        videoWidget.setGeometry(10, 10, 580, 380)
#        self.resize(600, 400)
        self.mediaPlayer.error.connect(self.handleError)
        self.mediaPlayer.setVideoOutput(videoWidget)
        self.mediaPlayer.setMedia(
            QMediaContent(QUrl.fromLocalFile("Samonastrojka.avi")))
        self.mediaPlayer.play()

    # +++ vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
    def resizeEvent(self, event):
        path = QPainterPath()
        path.addRoundedRect(QRectF(self.rect()), 20, 20)
        reg = QRegion(path.toFillPolygon().toPolygon())
        self.setMask(reg)
    # +++ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    def handleError(self):
        print("Error: " + self.mediaPlayer.errorString())


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = Player()
    window.show()
    sys.exit(app.exec_())
6
0
64,990,689
2020-11-24
https://stackoverflow.com/questions/64990689/control-the-power-of-a-usb-port-in-python
I was wondering if it could be possible to control the power of usb ports in Python, using vendor ids and product ids. It should be controlling powers instead of just enabling and disabling the ports. It would be appreciated if you could provide some examples.
Look into the subprocess module in the standard library; which commands you need will depend on the OS.

Windows

For Windows you will want to look into devcon. This has been answered in previous posts:

import subprocess

# Fetches the list of all usb devices:
result = subprocess.run(['devcon', 'hwids', '=usb'], capture_output=True, text=True)
# ... add code to parse the result and get the hwid of the device you want ...
subprocess.run(['devcon', 'disable', parsed_hwid])  # to disable
subprocess.run(['devcon', 'enable', parsed_hwid])   # to enable

Linux

See posts on shell commands. Note that the > redirection is a shell feature, so the commands have to run through a shell (shell=True with a string command), otherwise the '>' is passed to echo as a literal argument:

import subprocess

# determine the desired usb device and substitute it for usbX

# to disable
subprocess.run('echo 0 > /sys/bus/usb/devices/usbX/power/autosuspend_delay_ms', shell=True)
subprocess.run('echo auto > /sys/bus/usb/devices/usbX/power/control', shell=True)

# to enable
subprocess.run('echo on > /sys/bus/usb/devices/usbX/power/control', shell=True)
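Since those sysfs entries are plain files, the shell round-trip can be skipped entirely by writing them from Python. A sketch (the demo path is a temporary file standing in for a real sysfs attribute, which would need root to write):

```python
import os
import tempfile


def set_power_control(path, value):
    # sysfs attributes behave like small text files: write the value and close
    with open(path, 'w') as f:
        f.write(value)


# demonstrate against a temp file standing in for
# /sys/bus/usb/devices/usbX/power/control
demo = os.path.join(tempfile.mkdtemp(), 'control')
set_power_control(demo, 'auto')
print(open(demo).read())  # auto
```

With a real device you would pass the actual /sys/bus/usb/devices/... path instead, running with sufficient privileges.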
6
4
64,987,430
2020-11-24
https://stackoverflow.com/questions/64987430/what-exactly-does-the-forward-function-output-in-pytorch
This example is taken verbatim from the PyTorch documentation. Now, I do have some background on deep learning in general, and I know that it should be obvious that the forward call represents a forward pass, passing through different layers and finally reaching the end, with 10 outputs in this case; then you take the output of the forward pass and compute the loss using the loss function you defined. Now, I forgot what exactly the output from the forward() pass yields me in this scenario. I thought that the last layer in a neural network should be some sort of activation function like sigmoid() or softmax(), but I did not see these being defined anywhere; furthermore, when I was doing a project now, I found out that softmax() is called later on. So I just want to clarify what exactly outputs = net(inputs) gives me. From this link, it seems to me that by default the output of a PyTorch model's forward pass is logits?

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        print(outputs)
        break
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
it seems to me by default the output of a PyTorch model's forward pass is logits

As I can see from the forward pass, yes, your function is passing the raw output:

def forward(self, x):
    x = self.pool(F.relu(self.conv1(x)))
    x = self.pool(F.relu(self.conv2(x)))
    x = x.view(-1, 16 * 5 * 5)
    x = F.relu(self.fc1(x))
    x = F.relu(self.fc2(x))
    x = self.fc3(x)
    return x

So, where is softmax? Right here:

criterion = nn.CrossEntropyLoss()

It's a bit masked, but inside this function the softmax computation is handled, which, of course, works with the raw output of your last layer.

This is the softmax calculation:

softmax(z_i) = exp(z_i) / sum_j exp(z_j)

where z_i are the raw outputs of the neural network.

So, in conclusion, there is no activation function after your last layer because it's handled by the nn.CrossEntropyLoss class.

Answering what's the raw output that comes from nn.Linear: the raw output of a neural network layer is the linear combination of the values that come from the neurons of the previous layer.
12
11
64,943,693
2020-11-21
https://stackoverflow.com/questions/64943693/what-are-the-best-practices-for-structuring-a-fastapi-project
The problem that I want to solve is related to the project setup:

- Good names of directories so that their purpose is clear.
- Keeping all project files (including virtualenv) in one place, so I can easily copy, move, archive, or remove the whole project, or estimate disk space usage.
- Creating multiple copies of some selected file sets, such as the entire application, repository, or virtualenv, while keeping a single copy of other files that I don't want to clone.
- Deploying the right set of files to the server simply by rsyncing one selected dir.
- Handling both frontend and backend nicely.
Harsha already mentioned my project generator, but I think it can be helpful for future readers to explain the ideas behind it.

If you are going to serve your frontend with something like yarn or npm, you should not worry about the structure between frontend and backend. With something like axios or JavaScript's fetch you can easily talk with your backend from anywhere.

When it comes to structuring the backend, if you want to render templates with Jinja, you can have something that is close to the MVC pattern:

your_project
├── __init__.py
├── main.py
├── core
│   ├── models
│   │   ├── database.py
│   │   └── __init__.py
│   ├── schemas
│   │   ├── __init__.py
│   │   └── schema.py
│   └── settings.py
├── tests
│   ├── __init__.py
│   └── v1
│       ├── __init__.py
│       └── test_v1.py
└── v1
    ├── api.py
    ├── endpoints
    │   ├── endpoint.py
    │   └── __init__.py
    └── __init__.py

By using __init__ everywhere, we can access the variables from all over the app, just like Django.

Let's break the folders into parts:

- core
  - models
    - database.py
  - schemas
    - users.py
    - something.py
  - settings.py
  - views (add this if you are going to render templates)
    - v1_views.py
    - v2_views.py
- tests
  - v1
  - v2

Models: This is for your database models; by doing this you can import the same database session or object from v1 and v2.

Schemas: Schemas are your Pydantic models. We call them schemas because they are actually used for creating OpenAPI schemas; since FastAPI is based on the OpenAPI specification, we use schemas everywhere, from Swagger generation to the endpoint's expected request body.

settings.py: This is for Pydantic's settings management, which is extremely useful; you can use the same variables without redeclaring them. To see how it could be useful for you, check out our documentation for Settings and Environment Variables.

Views: This is optional. If you are going to render your frontend with Jinja, you can have something close to the MVC pattern. It would look something like this if you want to add views:

core
└── views
    ├── v1_views.py
    └── v2_views.py

Tests: It is good to have your tests inside your backend folder.

APIs: Create them independently with APIRouter, instead of gathering all your APIs inside one file.

Notes: You can use absolute imports for all your importing since we are using __init__ everywhere; see Python's packaging docs. So assume you are trying to import v1's endpoint.py from v2; you can simply do:

from my_project.v1.endpoints.endpoint import something
85
129
64,985,488
2020-11-24
https://stackoverflow.com/questions/64985488/how-do-you-list-local-profiles-with-boto3-from-aws-credentials-and-aws-c
I would like to list all of my local profiles using boto3, as I think boto3 is not picking up my credentials correctly. I have tried the following:

import boto3
boto3.Session.available_profiles

which doesn't give me a list, but a property object.
You might want to use awscli instead of boto3 to list your profiles:

aws configure list

This should output something like this:

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************ABCD      config_file    ~/.aws/config
secret_key     ****************ABCD      config_file    ~/.aws/config
    region                us-west-2              env    AWS_DEFAULT_REGION

As for boto3, try this:

for profile in boto3.session.Session().available_profiles:
    print(profile)
13
17
64,979,440
2020-11-24
https://stackoverflow.com/questions/64979440/in-python-is-it-possible-to-restrict-the-type-of-a-function-parameter-to-two-po
I am trying to restrict the 'parameter' type to be int or list, as in the function f below. However, PyCharm does not show a warning at the line f("weewfwef") about a wrong parameter type, which means this (parameter: [int, list]) is not correct. In Python, is it possible to restrict the type of a function parameter to two possible types?

def f(parameter: [int, list]):
    if len(str(parameter)) <= 3:
        return 3
    else:
        return [1]

if __name__ == '__main__':
    f("weewfwef")
The term you're looking for is a union type.

from typing import Union

def f(parameter: Union[int, list]):
    ...

Union is not limited to two types. If you ever have a value which is one of several known types, but you can't necessarily know which one, you can use Union[...] to encapsulate that information.
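Since Python 3.10 (PEP 604), the same union can also be written with the | operator; with a `from __future__ import annotations` import, the spelling is accepted as an annotation on earlier 3.x versions too, because annotations are then not evaluated at definition time. A minimal sketch reusing the question's function:

```python
from __future__ import annotations  # lets `int | list` appear in annotations pre-3.10

def f(parameter: int | list):
    # Same body as the question's example
    if len(str(parameter)) <= 3:
        return 3
    return [1]

print(f(123))        # 3   ("123" has length 3)
print(f([1, 2, 3]))  # [1] (str([1, 2, 3]) is longer than 3 characters)
```

Static checkers such as mypy and recent PyCharm versions understand both spellings.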
7
8
64,897,689
2020-11-18
https://stackoverflow.com/questions/64897689/how-to-have-pandas-perform-a-rolling-average-on-a-non-uniform-x-grid
I would like to perform a rolling average, but with a window that only has a finite 'vision' in x. I would like something similar to what I have below, but with a window range based on the x value rather than the position index. While doing this within pandas is preferred, numpy/scipy equivalents are also OK.

import numpy as np
import pandas as pd

x_val = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
y_val = [x + np.random.random() * 200 for x in x_val]

df = pd.DataFrame(data={'x': x_val, 'y': y_val})
df.set_index('x', inplace=True)
df.plot()
df.rolling(1, win_type='gaussian').mean(std=2).plot()

So I would expect the first 5 values to be averaged together because they are within 10 x-units of each other, but the last values to be unchanged.
According to the pandas documentation on rolling:

Size of the moving window. This is the number of observations used for calculating the statistic. Each window will be a fixed size.

Therefore, maybe you need to fake a rolling operation with various window sizes, like this:

test_df = pd.DataFrame({'x': np.linspace(1, 10, 10), 'y': np.linspace(1, 10, 10)})
test_df['win_locs'] = np.linspace(1, 10, 10).astype('object')
for ind in range(10):
    test_df.at[ind, 'win_locs'] = np.random.randint(0, 10, np.random.randint(5)).tolist()

# rolling operation with various window sizes
def worker(idx_list):
    x_slice = test_df.loc[idx_list, 'x']
    return np.sum(x_slice)

test_df['rolling'] = test_df['win_locs'].apply(worker)

As you can see, test_df is

      x     y      win_locs  rolling
0   1.0   1.0        [5, 2]      9.0
1   2.0   2.0  [4, 8, 7, 1]     24.0
2   3.0   3.0            []      0.0
3   4.0   4.0           [9]     10.0
4   5.0   5.0     [6, 2, 9]     20.0
5   6.0   6.0            []      0.0
6   7.0   7.0     [5, 7, 9]     24.0
7   8.0   8.0            []      0.0
8   9.0   9.0            []      0.0
9  10.0  10.0  [9, 4, 7, 1]     25.0

where the rolling operation is achieved with the apply method.

However, this approach is significantly slower than the native rolling. For example, with

test_df = pd.DataFrame({'x': np.linspace(1, 10, 10), 'y': np.linspace(1, 10, 10)})
test_df['win_locs'] = np.linspace(1, 10, 10).astype('object')
for ind in range(10):
    test_df.at[ind, 'win_locs'] = np.arange(ind - 1, ind + 1).tolist() if ind >= 1 else []

using the approach above

%%timeit
# rolling operation with various window sizes
def worker(idx_list):
    x_slice = test_df.loc[idx_list, 'x']
    return np.sum(x_slice)

test_df['rolling_apply'] = test_df['win_locs'].apply(worker)

the result is

41.4 ms ± 4.44 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

while using the native rolling is ~50x faster:

%%timeit
test_df['rolling_native'] = test_df['x'].rolling(window=2).sum()

863 µs ± 118 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
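As an alternative sketch (not part of the answer above): pandas 1.0+ exposes pandas.api.indexers.BaseIndexer, which lets you compute per-row window bounds from the x values themselves, so the fast native rolling machinery can still be used. The class name XDistanceIndexer and the delta parameter are invented here for illustration, and x is assumed to be sorted ascending:

```python
import numpy as np
import pandas as pd
from pandas.api.indexers import BaseIndexer

class XDistanceIndexer(BaseIndexer):
    """Hypothetical indexer: each row's window covers all rows whose x lies
    within `delta` of that row's x (x_values must be sorted ascending)."""

    def get_window_bounds(self, num_values=0, min_periods=None,
                          center=None, closed=None, step=None):
        x = np.asarray(self.x_values)
        start = np.empty(num_values, dtype=np.int64)
        end = np.empty(num_values, dtype=np.int64)
        for i in range(num_values):
            # indices of the first/last rows within +/- delta of x[i]
            start[i] = np.searchsorted(x, x[i] - self.delta, side="left")
            end[i] = np.searchsorted(x, x[i] + self.delta, side="right")
        return start, end

x_val = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=float)
df = pd.DataFrame({"y": x_val})  # y == x here, just to make the means easy to check

indexer = XDistanceIndexer(x_values=x_val, delta=10)
rolled = df["y"].rolling(indexer, min_periods=1).mean()
print(rolled.tolist())
```

Rows far apart in x (for example 256 and 512) end up alone in their windows and stay unchanged, which matches the behavior the question asks for.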
7
3
64,973,215
2020-11-23
https://stackoverflow.com/questions/64973215/specifying-any-instance-of-class-foo-for-mock-assert-called-once-with
In assert_called_once_with, how can I specify a parameter that is "any instance of class Foo"?

For example:

class Foo():
    pass

def f(x):
    pass

def g():
    f(Foo())

import __main__
from unittest import mock

mock.ANY of course passes:

with mock.patch.object(__main__, 'f') as mock_f:
    g()
    mock_f.assert_called_once_with(mock.ANY)

and of course, another instance of Foo doesn't pass:

with mock.patch.object(__main__, 'f') as mock_f:
    g()
    mock_f.assert_called_once_with(Foo())

AssertionError: Expected call: f(<__main__.Foo object at 0x7fd38411d0b8>)
Actual call: f(<__main__.Foo object at 0x7fd384111f98>)

What can I put as my expected parameter such that any instance of Foo will make the assertion pass?
One simple solution is to do this in two steps:

with mock.patch.object(__main__, 'f') as mock_f:
    g()
    mock_f.assert_called_once()
    self.assertIsInstance(mock_f.mock_calls[0].args[0], Foo)

However, if you look at the implementation of ANY:

class _ANY(object):
    "A helper object that compares equal to everything."

    def __eq__(self, other):
        return True

    def __ne__(self, other):
        return False

    def __repr__(self):
        return '<ANY>'

ANY = _ANY()

you can see it's just an object that's equal to anything. So you could define your own equivalent that's equal to any instance of Foo:

class AnyFoo:
    "A helper object that compares equal to every instance of Foo."

    def __eq__(self, other):
        return isinstance(other, Foo)

    def __ne__(self, other):
        return not isinstance(other, Foo)

    def __repr__(self):
        return '<ANY Foo>'

ANY_FOO = AnyFoo()

Or more generically:

class AnyInstanceOf:
    "A helper object that compares equal to every instance of the specified class."

    def __init__(self, cls):
        self.cls = cls

    def __eq__(self, other):
        return isinstance(other, self.cls)

    def __ne__(self, other):
        return not isinstance(other, self.cls)

    def __repr__(self):
        return f"<ANY {self.cls.__name__}>"

ANY_FOO = AnyInstanceOf(Foo)

Either way, you can use it as you would ANY:

with mock.patch.object(__main__, 'f') as mock_f:
    g()
    mock_f.assert_called_once_with(ANY_FOO)
7
6
64,968,646
2020-11-23
https://stackoverflow.com/questions/64968646/pandas-convert-integer-zeroes-and-ones-to-boolean
I have a dataframe that contains one-hot encoded columns of 0s and 1s, which are of dtype int32:

  a   b  h1  h2  h3
 xy  za   0   0   1
 ab  cd   1   0   0
 pq  rs   0   1   0

I want to convert the columns h1, h2 and h3 to boolean, so here is what I did:

df[df.columns[2:]].astype(bool)

But this changed all values of h1-h3 to TRUE. I also tried

df[df.columns[2:]].map({0: False, 1: True})

but that does not work either (AttributeError: 'DataFrame' object has no attribute 'map').

What is the best way to convert specific columns of the dataframe from int32 0s and 1s to boolean (True/False)?
You can select all columns by position after the first 2 with DataFrame.iloc, convert to boolean, and assign back:

df.iloc[:, 2:] = df.iloc[:, 2:].astype(bool)
print(df)

    a   b     h1     h2     h3
0  xy  za  False  False   True
1  ab  cd   True  False  False
2  pq  rs  False   True  False

Or create a dictionary to convert the column names without the first 2:

df = df.astype(dict.fromkeys(df.columns[2:], bool))
print(df)

    a   b     h1     h2     h3
0  xy  za  False  False   True
1  ab  cd   True  False  False
2  pq  rs  False   True  False
6
7
64,964,259
2020-11-23
https://stackoverflow.com/questions/64964259/retain-original-bar-order-in-plotly-python-when-also-passing-color
Using Plotly for a bar plot preserves the dataset's order when not using color:

import pandas as pd
import plotly.express as px

df = pd.DataFrame({'val': [1, 2, 3], 'type': ['b', 'a', 'b']}, index=['obs1', 'obs2', 'obs3'])
px.bar(df, 'val')

But color reorders the data:

px.bar(df, 'val', color='type')

How can I preserve the original ordering while using the color arg? This is similar to "How can I retain desired order in R Plotly bar chart with color variable", but I'm using Python rather than R.
You could use the category_orders parameter:

import pandas as pd
import plotly.express as px

df = pd.DataFrame({'val': [1, 2, 3], 'type': ['b', 'a', 'b']}, index=['obs1', 'obs2', 'obs3'])

fig = px.bar(df, 'val', color='type', category_orders={'index': df.index[::-1]})
fig.show()

From the documentation:

This parameter is used to force a specific ordering of values per column. The keys of this dict should correspond to column names, and the values should be lists of strings corresponding to the specific display order desired.

By looking at the code, it seems that color is being used as a grouping attribute, so that's probably why the reordering is happening.
6
8
64,960,430
2020-11-22
https://stackoverflow.com/questions/64960430/python-requests-with-proxy-results-in-sslerror-wrong-version-number
I can't use the different proxy in Python. My code:

import requests

proxies = {
    "https": 'https://154.16.202.22:3128',
    "http": 'http://154.16.202.22:3128'
}

r = requests.get('https://httpbin.org/ip', proxies=proxies)
print(r.json())

The error I'm getting is:

...
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1122)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
...
requests.exceptions.SSLError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1122)')))

I executed pip install requests. I executed pip uninstall pyopenssl, then tried to pip install an old version of pyopenssl, but it didn't work. Why is this not working?
The proxy you use simply does not support proxying https:// URLs:

$ https_proxy=http://154.16.202.22:3128 curl -v https://httpbin.org/ip
* Trying 154.16.202.22...
* TCP_NODELAY set
* Connected to (nil) (154.16.202.22) port 3128 (#0)
* Establish HTTP proxy tunnel to httpbin.org:443
> CONNECT httpbin.org:443 HTTP/1.1
> Host: httpbin.org:443
> User-Agent: curl/7.52.1
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 400 Bad Request

Apart from that, the URL for the proxy itself is wrong: it should be http://.. and not https://.., even if you proxy HTTPS traffic. But requests actually ignores the given protocol completely, so this error is not the reason for the problem. Just to demonstrate that it would not work either if the proxy itself were accessed with HTTPS (as the URL suggests):

$ https_proxy=https://154.16.202.22:3128 curl -v https://httpbin.org/ip
* Trying 154.16.202.22...
* TCP_NODELAY set
* Connected to (nil) (154.16.202.22) port 3128 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Curl_http_done: called premature == 0
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

So the fix here would be to use a different proxy, one which actually supports proxying https:// URLs.
14
9
64,954,213
2020-11-22
https://stackoverflow.com/questions/64954213/python-how-to-recieve-sigint-in-docker-to-stop-service
I'm writing a monitor service in Python that monitors another service, and while the monitor & scheduling part works fine, I have a hard time figuring out how to do a proper shutdown of the service using a SIGINT signal sent to the Docker container. Specifically, the service should catch the SIGINT from either a docker stop or a Kubernetes stop signal, but so far it doesn't. I have reduced the issue to a minimal test case which is easy to replicate in Docker:

import signal
import sys
import time


class MainApp:

    def __init__(self):
        self.shutdown = False
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, signum, frame):
        print('Received:', signum)
        self.shutdown = True

    def start(self):
        print("Start app")

    def run(self):
        print("Running app")
        time.sleep(1)

    def stop(self):
        print("Stop app")


if __name__ == '__main__':
    app = MainApp()
    app.start()
    # This boolean flag should flip to false when a SIGINT or SIGTERM comes in...
    while not app.shutdown:
        app.run()
    else:
        # However, this code never gets executed ...
        app.stop()
        sys.exit(0)

And the corresponding Dockerfile, again minimalistic:

FROM python:3.8-slim-buster
COPY test/TestGS.py .
STOPSIGNAL SIGINT
CMD [ "python", "TestGS.py" ]

I opted for Docker because the docker stop command is documented to issue a SIGINT, wait a bit, and then issue a SIGKILL. This should be an ideal test case. However, when starting the Docker container with an interactive shell attached, and stopping the container from a second shell, the stop() code never gets executed. Verifying the issue, a simple

$ docker inspect -f '{{.State.ExitCode}}' 64d39c3b

shows exit code 137 instead of exit code 0.

Apparently, one of two things is happening. Either the SIGTERM signal isn't propagated into the container or into the Python runtime; this might be the case because the exit_gracefully function apparently isn't called, otherwise we would see the printout of the signal. Or the Python code I wrote isn't catching any signal at all.

I know that you have to be careful about how to start your code from within Docker to actually receive a SIGINT, but when adding the stop-signal line to the Dockerfile, a global SIGINT should be issued to the container, at least to my humble understanding of the docs. Either way, I simply cannot figure out why the stop code never gets called. I spent a fair amount of time researching the web, but at this point I feel I'm running in circles. Any idea how to solve the issue of correctly ending a Python script running inside Docker using a SIGINT signal?

Thank you,
Marvin
Solution: The app must run as PID 1 inside Docker to receive a SIGINT. To do so, one must use ENTRYPOINT instead of CMD.

The fixed Dockerfile:

FROM python:3.8-slim-buster
COPY test/TestGS.py .
ENTRYPOINT ["python", "TestGS.py"]

Build the image:

docker build . -t python-signals

Run the image:

docker run -it --rm --name="python-signals" python-signals

And from a second terminal, stop the container:

docker stop python-signals

Then you get the expected output:

Received SIGTERM signal
Stop app

It seems a bit odd to me that Docker only delivers SIGTERMs to PID 1, but thankfully that's relatively easy to fix. The article below was most helpful in solving this issue:

https://itnext.io/containers-terminating-with-grace-d19e0ce34290
16
18
64,959,714
2020-11-22
https://stackoverflow.com/questions/64959714/await-vs-asyncio-run-in-python
In Python, what is the actual difference between awaiting a coroutine and using asyncio.run()? They both seem to run a coroutine, the only difference that I can see being that await can only be used in a coroutine.
That is the exact difference. There should be exactly one call to asyncio.run() in your code, which will block until all coroutines have finished. Inside any coroutine, you can use await to suspend the current function, and asyncio will resume that function at some future time. All of this happens inside the asyncio.run() call, which schedules what functions can run when.
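To make the relationship concrete, here is a minimal sketch (the function names are invented for illustration): await suspends one coroutine inside the already-running event loop, while the single asyncio.run() call at the top level starts and tears down that loop.

```python
import asyncio

async def double(n):
    await asyncio.sleep(0)  # suspend; the loop may switch to another coroutine here
    return n * 2

async def main():
    # Inside a coroutine: use await. Calling asyncio.run() here would fail,
    # because an event loop is already running.
    results = await asyncio.gather(double(1), double(2), double(3))
    print(results)  # [2, 4, 6]

# At the top level: exactly one asyncio.run(), which blocks until main() finishes.
asyncio.run(main())
```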
21
19
64,952,572
2020-11-22
https://stackoverflow.com/questions/64952572/output-directories-for-python-setup-py-sdist-bdist-wheel
When doing python setup.py sdist bdist_wheel, it creates the build, dist, and packagename.egg-info directories. I'd like to have them out of the current folder. I tried:

- --dist-dir=../dist: works with sdist, but packagename.egg-info is still there.
- --bdist-dir=../dist: for example, python setup.py sdist bdist_wheel --dist-dir=../dist --bdist-dir=../dist2 works and the final bdist package is in ../dist. But the current folder still gets the new directories build, dist, and packagename.egg-info, which I don't want.

Question: how to have everything (the output of sdist and bdist_wheel) outside of the current folder? Of course I can write a script with mv, rm -r, etc., but I wanted to know if there exists a built-in solution.
I spent some more time trying with -d, --dist-dir, and --bdist-dir, but found no way to do it in one line. I'm afraid the shortest we could find (on Windows) is:

python setup.py sdist bdist_wheel
rmdir /s /q packagename.egg-info build ..\dist
move dist ..
11
0
64,943,656
2020-11-21
https://stackoverflow.com/questions/64943656/plot-3-graphs-2-on-top-and-one-on-bottom-axis-in-python
I am trying to plot 3 dendrograms, 2 on top and one on the bottom. But the only way I figured out how to do this:

fig, axes = plt.subplots(2, 2, figsize=(22, 14))

dn1 = hc.dendrogram(wardLink, ax=axes[0, 0])
dn2 = hc.dendrogram(singleLink, ax=axes[0, 1])
dn3 = hc.dendrogram(completeLink, ax=axes[1, 0])

gives me a fourth blank graph on the bottom right. Is there a way to plot only 3 graphs?
You can redivide the canvas area as you desire and use the 3rd argument to subplot to tell it which cell to plot to:

plt.subplot(2, 2, 1)  # divide as 2x2, plot top left
plt.plot(...)

plt.subplot(2, 2, 2)  # divide as 2x2, plot top right
plt.plot(...)

plt.subplot(2, 1, 2)  # divide as 2x1, plot bottom
plt.plot(...)

You can also use a gridspec as follows:

gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
ax3 = fig.add_subplot(gs[1, :])
...
7
10
64,940,181
2020-11-21
https://stackoverflow.com/questions/64940181/cannot-display-emojis-in-windows-powershell-or-wsl-linux-terminal-using-python
I am trying to print emojis in both Windows PowerShell and a WSL Linux terminal using Python 3. I have tried using Unicode escapes and CLDR names, and I also installed the emoji library:

print("\U0001F44D")
print(emoji.emojize(':thumbs_up:'))

But the terminal shows only a question mark inside a box; no emoji is displayed. What is the issue here? If I copy the output here, it shows properly, like this: 👍
I don't think any of the Windows shells have proper support for Unicode characters, or they may not support emojis. If you're using Windows, you may want to try Windows Terminal. It has complete emoji support and should work with PowerShell and WSL.
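Independent of the terminal choice, it can be worth ruling out an encoding problem on the Python side. This is a sketch, not a guaranteed fix: sys.stdout.reconfigure (Python 3.7+) forces UTF-8 output even when the console code page is something else, but if the terminal font lacks the glyph you will still see a placeholder box.

```python
import sys

# Force UTF-8 on the output stream (a no-op if it is already UTF-8).
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8")

thumbs_up = "\U0001F44D"
print(thumbs_up)  # 👍 (if the terminal font has the glyph)
```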
8
8
64,905,873
2020-11-19
https://stackoverflow.com/questions/64905873/sibling-package-import-and-mypy-has-no-attribute-error
I am trying to import a module from a sibling package in Python, following the instructions in this answer. My problem is that the import works... but mypy is saying that it's a bad import. I'm seeking to understand why mypy is reporting an error, and how to fix it.

Directory structure/Code

This is a module that I have installed successfully with python -m pip install -e .. I know it is installed because it is listed when I run pip freeze, and the project root is listed in sys.path when I print that out.

mypackage
├── mypackage
│   ├── foo
│   │   ├── __init__.py
│   │   └── db.py
│   ├── bar
│   │   ├── __init__.py
│   │   └── model.py
│   └── py.typed
└── setup.py

In db.py:

from mypackage.bar import model

In model.py:

class MyClass:
    # implementation irrelevant

Error Message

When I run mypy (mypy mypackage from the project base directory), I get the following error:

mypackage/foo/db.py:7: error: Module 'mypackage.bar' has no attribute 'model'

What confuses me is that, when I open IDLE, the following imports/runs just fine:

>>> from mypackage.bar import model
>>> model.MyClass
<class 'mypackage.bar.model.MyClass'>

My Question

Why is mypy showing an error here when the import actually works? How can I get mypy to recognize that the import works?
Running mypy with the --namespace-packages flag made the check run without error, which pointed me to the actual problem: ./mypackage/mypackage/__init__.py did not exist, causing mypy to not pursue the import correctly. Python was working because in 3.3+, namespace packages are supported, but mypy requires a flag to check for those specifically. Thus, my overall solution was to add the needed __init__.py file.
8
7
64,935,522
2020-11-20
https://stackoverflow.com/questions/64935522/how-to-know-torch-version-that-installed-locally-in-your-device
I want to check the torch version on my device using Jupyter Notebook. I used

import torch
print(torch.__version__)

but it didn't work, and Jupyter Notebook raised the error below:

AttributeError                            Traceback (most recent call last)
<ipython-input-8-beb55f24d5ec> in <module>
      1 import torch
----> 2 print(torch.__version__)

AttributeError: module 'torch' has no attribute '__version__'

Is there any command to check the torch version using Jupyter Notebook?
I tried to install a new PyTorch version, but it didn't work, so I deleted the PyTorch files manually as suggested on my command line. Finally, I installed the new PyTorch version using

conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch

and everything works fine. This code works well after that:

import torch
print(torch.__version__)
6
6
64,938,027
2020-11-20
https://stackoverflow.com/questions/64938027/type-annotation-for-dict-arguments
Can I indicate a specific dictionary shape/form for an argument to a function in Python? In TypeScript, I'd indicate that the info argument should be an object with a string name and a number age:

function parseInfo(info: {name: string, age: number}) { /* ... */ }

Is there a way to do this with a Python function that's otherwise:

def parseInfo(info: dict):
    # function body

Or is that perhaps not Pythonic and I should use named keywords or something like that?
In Python 3.8+ you could use the alternative syntax to create a TypedDict:

from typing import TypedDict

Info = TypedDict('Info', {'name': str, 'age': int})

def parse_info(info: Info):
    pass

From the documentation on TypedDict:

TypedDict declares a dictionary type that expects all of its instances to have a certain set of keys, where each key is associated with a value of a consistent type. This expectation is not checked at runtime but is only enforced by type checkers.
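The same TypedDict can also be declared with the class-based syntax, which reads quite close to the TypeScript original in the question. A small sketch (the parse_info body is invented for illustration):

```python
from typing import TypedDict

class Info(TypedDict):
    name: str
    age: int

def parse_info(info: Info) -> str:
    # Type checkers verify the keys and value types; at runtime it's a plain dict.
    return f"{info['name']} is {info['age']}"

print(parse_info({"name": "Alice", "age": 30}))  # Alice is 30
```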
10
23
64,933,298
2020-11-20
https://stackoverflow.com/questions/64933298/why-should-we-use-in-def-init-self-n-none
Why should we use -> in def __init__(self, n) -> None:? I read the following excerpt from PEP 484, but I am unable to understand what it means. (Note that the return type of __init__ ought to be annotated with -> None. The reason for this is subtle. If __init__ assumed a return annotation of -> None, would that mean that an argument-less, un-annotated __init__ method should still be type-checked? Rather than leaving this ambiguous or introducing an exception to the exception, we simply say that __init__ ought to have a return annotation; the default behavior is thus the same as for other methods.) What's the subtle difference between using def __init__(self, n) -> None: and def __init__(self, n):? Can someone explain the quoted excerpt in simple words?
The main reason is to allow static type checking. By default, mypy will ignore unannotated functions and methods. Consider the following definition:

class Foo:
    def __init__(self):
        return 3

f = Foo()

mypy, a static type analysis tool, sees nothing wrong with this by default:

$ mypy tmp.py
Success: no issues found in 1 source file

but it produces a runtime TypeError (note that python here is Python 3.8.6):

$ python tmp.py
Traceback (most recent call last):
  File "tmp.py", line 5, in <module>
    f = Foo()
TypeError: __init__() should return None, not 'int'

If you add the annotation -> None, then mypy will type-check the method and raise an error:

$ mypy tmp.py
tmp.py:3: error: No return value expected
Found 1 error in 1 file (checked 1 source file)

mypy will even complain if you try to circumvent the check by declaring def __init__(self) -> int: instead:

$ mypy tmp.py
tmp.py:2: error: The return type of "__init__" must be None
Found 1 error in 1 file (checked 1 source file)

It's also worth noting that any annotation will make mypy pay attention; the lack of a return type is the same as -> None if you have at least one annotated argument:

def __init__(self, x: int):
    return x

will produce the same "No return value expected" error as an explicit -> None. The explicit return type, though, is often easier to provide than any artificial argument type hints, and is arguably clearer than trying to type self.
27
29
64,917,285
2020-11-19
https://stackoverflow.com/questions/64917285/difference-in-python-thread-join-between-python-3-7-and-3-8
I have a small Python program that behaves differently in Python 3.7 and Python 3.8. I'm struggling to understand why. The threading changelog for Python 3.8 does not explain this. Here's the code:

import time
from threading import Event, Thread


class StoppableWorker(Thread):
    def __init__(self):
        super(StoppableWorker, self).__init__()
        self.daemon = False
        self._stop_event = Event()

    def join(self, *args, **kwargs):
        self._stop_event.set()
        print("join called")
        super(StoppableWorker, self).join(*args, **kwargs)

    def run(self):
        while not self._stop_event.is_set():
            time.sleep(1)
            print("hi")


if __name__ == "__main__":
    t = StoppableWorker()
    t.start()
    print("main done.")

When I run this in Python 3.7.3 (Debian Buster), I see the following output:

python test.py
main done.
join called
hi

The program exits on its own. I don't know why join() is called. From the daemon documentation of 3.7:

The entire Python program exits when no alive non-daemon threads are left.

But clearly the thread should still be alive.

When I run this in Python 3.8.6 (Arch), I get the expected behavior. That is, the program keeps running:

python test.py
main done.
hi
hi
hi
hi
...

The daemon documentation for 3.8 states the same as 3.7: the program should not exit unless all non-daemon threads have joined.

Can someone help me understand what's going on, please?
There is an undocumented change in the behavior of threading _shutdown() from Python version 3.7.3 to 3.7.4. Here's how I found it: To trace the issue, I first used the inspect package to find out who join()s the thread in the Python 3.7.3 runtime. I modified the join() function to get some output: ... def join(self, *args, **kwargs): self._stop_event.set() c = threading.current_thread() print(f"join called from thread {c}") print(f"calling function: {inspect.stack()[1][3]}") super(StoppableWorker, self).join(*args, **kwargs) ... When executing with Python 3.7.3, this prints: main done. join called from thread <_MainThread(MainThread, stopped 139660844881728)> calling function: _shutdown hi So the MainThread, which is already stopped, invokes the join() method. The function responsible in the MainThread is _shutdown(). From the CPython source for Python 3.7.3 for _shutdown(), lines 1279-1282: t = _pickSomeNonDaemonThread() while t: t.join() t = _pickSomeNonDaemonThread() That code invokes join() on all non-daemon threads when the MainThread exits! That implementation was changed in Python 3.7.4. To verify these findings I built Python 3.7.4 from source. It indeed behaves differently. It keeps the thread running as expected and the join() function is not invoked. This is apparently not documented in the release notes of Python 3.7.4 nor in the changelog of Python 3.8. -- EDIT: As pointed out in the comments by MisterMiyagi, one might argue that extending the join() function and using it for signaling termination is not a proper use of join(). IMHO that is up to taste. It should, however, be documented that in Python 3.7.3 and before, join() is invoked by the Python runtime on system exit, while with the change to 3.7.4 this is no longer the case. If properly documented, it would explain this behavior from the get-go.
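If the goal is behavior that is stable across these interpreter versions, one option (a sketch, not the only possible design) is to leave join() alone and signal termination through a separate, explicitly named method:

```python
import time
from threading import Event, Thread

class StoppableWorker(Thread):
    """Worker stopped via an explicit stop(), so the changed
    interpreter-shutdown join() semantics never come into play."""

    def __init__(self):
        super().__init__()
        self._stop_event = Event()

    def stop(self):
        self._stop_event.set()

    def run(self):
        # Event.wait doubles as an interruptible sleep
        while not self._stop_event.wait(0.05):
            pass  # do one unit of work here

t = StoppableWorker()
t.start()
time.sleep(0.2)   # main thread does its own work
t.stop()
t.join(timeout=2)
print(t.is_alive())  # False: the worker exited cleanly
```

With this shape, the thread keeps running after the main thread finishes (it is non-daemon) and stops only when asked, on 3.7.3 and 3.8 alike.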
7
6
64,919,868
2020-11-19
https://stackoverflow.com/questions/64919868/fastapi-module-app-routers-test-has-no-attribute-routes
I am trying to set up an app using FastAPI but keep getting this error which I can't make sense of. My main.py file is as follows: from fastapi import FastAPI from app.routers import test app = FastAPI() app.include_router(test, prefix="/api/v1/test") And in my routers/test.py file I have: from fastapi import APIRouter, File, UploadFile import app.schemas.myschema as my_schema router = APIRouter() Response = my_schema.Response @router.get("/", response_model=Response) def process(file: UploadFile = File(...)): # Do work But I keep getting the following error: File "/Users/Desktop/test-service/venv/lib/python3.8/site-packages/fastapi/routing.py", line 566, in include_router for route in router.routes: AttributeError: module 'app.routers.test' has no attribute 'routes' python-BaseException I can't make sense of this as I can see something similar being done in the sample app here.
I think you want: app.include_router(test.router, prefix="/api/v1/test") rather than: app.include_router(test, prefix="/api/v1/test")
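The AttributeError itself is not FastAPI-specific: include_router iterates over router.routes, and a plain module object has no such attribute. A minimal sketch of the failure mode, with made-up module and attribute names:

```python
import types

# stand-in for "from app.routers import test" -- this binds the *module*
test = types.ModuleType("test")
test.router = object()  # the APIRouter instance lives *on* the module

# passing the module where the router is expected fails exactly like this:
try:
    test.routes
except AttributeError as exc:
    print(exc)  # module 'test' has no attribute 'routes'
```

Passing test.router instead hands include_router the object that actually carries the routes.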
6
15
64,909,849
2020-11-19
https://stackoverflow.com/questions/64909849/syntax-error-with-flake8-and-pydantic-constrained-types-constrregex
I use the pydantic package in Python together with the linter Flake8. I want to use constr from pydantic with a regular expression: only certain characters should be allowed (a-z, A-Z, 0-9 and _). The regular expression "^[a-zA-Z0-9_]*$" works, but flake8 shows me the following error: syntax error in forward annotation '^[a-zA-Z0-9_]*$' flake8(F722) class RedisSettings(BaseModel): keyInput: constr(regex="^[a-zA-Z0-9_]*$") = "" keyOutput: constr(regex="^[a-zA-Z0-9_]*$") = "" Can you help me avoid the error message?
the error here comes from pyflakes which attempts to interpret type annotations as type annotations according to PEP 484 the annotations used by pydantic are incompatible with PEP 484 and result in that error. you can read more about this in this pyflakes issue I'd suggest either (1) finding a way to use pydantic which doesn't involve violating PEP 484 or (2) ignoring the errors from pyflakes using flake8's extend-ignore / # noqa: ... / per-file-ignores disclaimer: I am one of the pyflakes maintainers and I am the current flake8 maintainer
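For option (2), a sketch of what a per-file ignore could look like in flake8 configuration (the file path here is an assumption, substitute your own):

```ini
# setup.cfg (or tox.ini / .flake8)
[flake8]
per-file-ignores =
    myapp/models.py: F722
```

This silences F722 only for the listed file, leaving the check active everywhere else.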
18
24
64,915,548
2020-11-19
https://stackoverflow.com/questions/64915548/python-sharedmemory-persistence-between-processes
Is there any way to make SharedMemory object created in Python persist between processes? If the following code is invoked in interactive python session: >>> from multiprocessing import shared_memory >>> shm = shared_memory.SharedMemory(name='test_smm', size=1000000, create=True) it creates a file in /dev/shm/ on a Linux machine. ls /dev/shm/test_smm /dev/shm/test_smm But when the python session ends I get the following: /usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d and the test_smm is gone: ls /dev/shm/test_smm ls: cannot access '/dev/shm/test_smm': No such file or directory So is there any way to make the shared memory object created in python persist across process runs? Running with Python 3.8
You can unregister a shared memory object from the resource cleanup process without unlinking it: $ python3 Python 3.8.6 (default, Sep 25 2020, 09:36:53) [GCC 10.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from multiprocessing import shared_memory, resource_tracker >>> shm = shared_memory.SharedMemory(name='test_smm', size=1000000, create=True) >>> resource_tracker.unregister(shm._name, 'shared_memory') >>> $ ls /dev/shm/test_smm /dev/shm/test_smm I don't know whether this is portable, and it doesn't look like a supported way of using the multiprocessing module, but it works on Linux at least.
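A self-contained sketch of the pattern (the second attach is shown in the same process for brevity; it could equally be another process). Note it leans on the private _name attribute, as in the answer above, so treat it as unsupported API:

```python
from multiprocessing import shared_memory, resource_tracker

shm = shared_memory.SharedMemory(create=True, size=16)
name = shm.name
# stop the resource tracker from destroying the segment at interpreter exit
resource_tracker.unregister(shm._name, 'shared_memory')
shm.buf[:5] = b'hello'
shm.close()

# "later", possibly from another process: reattach purely by name
shm2 = shared_memory.SharedMemory(name=name)
data = bytes(shm2.buf[:5])
print(data)  # b'hello'
shm2.close()
shm2.unlink()  # remove the segment explicitly once it is truly done
```

Without the final unlink() the file under /dev/shm/ survives across runs, which is exactly the persistence asked about.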
8
10
64,908,770
2020-11-19
https://stackoverflow.com/questions/64908770/replace-values-in-pandas-dataframe-column-with-different-replacement-dict-based
I have a dataframe where I want to replace values in a column, but the dict describing the replacement is based on values in another column. A sample dataframe would look like this: Map me strings date 0 1 test1 2020-01-01 1 2 test2 2020-02-10 2 3 test3 2020-01-01 3 4 test2 2020-03-15 I have a dictionary that looks like this: map_dict = {'2020-01-01': {1: 4, 2: 3, 3: 1, 4: 2}, '2020-02-10': {1: 3, 2: 4, 3: 1, 4: 2}, '2020-03-15': {1: 3, 2: 2, 3: 1, 4: 4}} Where I want the mapping logic to be different based on the date. In this example, the expected output would be: Map me strings date 0 4 test1 2020-01-01 1 4 test2 2020-02-10 2 1 test3 2020-01-01 3 4 test2 2020-03-15 I have a massive dataframe (100M+ rows) so I really want to avoid any looping solutions if at all possible. I have tried to think of a way to use either map or replace but have been unsuccessful
Use DataFrame.join with a MultiIndex Series created by the DataFrame constructor and DataFrame.stack: df = df.join(pd.DataFrame(map_dict).stack().rename('new'), on=['Map me','date']) print (df) Map me strings date new 0 1 test1 2020-01-01 4 1 2 test2 2020-02-10 4 2 3 test3 2020-01-01 1 3 4 test2 2020-03-15 4
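For reference, a fully runnable version of the above:

```python
import pandas as pd

df = pd.DataFrame({
    'Map me': [1, 2, 3, 4],
    'strings': ['test1', 'test2', 'test3', 'test2'],
    'date': ['2020-01-01', '2020-02-10', '2020-01-01', '2020-03-15'],
})
map_dict = {'2020-01-01': {1: 4, 2: 3, 3: 1, 4: 2},
            '2020-02-10': {1: 3, 2: 4, 3: 1, 4: 2},
            '2020-03-15': {1: 3, 2: 2, 3: 1, 4: 4}}

# dates become columns and the 'Map me' values the index;
# stack() turns that into a Series with a (value, date) MultiIndex
mapping = pd.DataFrame(map_dict).stack().rename('new')
df = df.join(mapping, on=['Map me', 'date'])
print(df['new'].tolist())  # [4, 4, 1, 4]
```

If you want to overwrite the original column rather than add a new one, follow up with df['Map me'] = df.pop('new').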
6
7
64,900,812
2020-11-18
https://stackoverflow.com/questions/64900812/how-to-use-sqlites-upsert-or-on-conflict-with-flask-sqlalchemy
I want to update my SQLite DB with data provided by an external API and I don't want to check every time for any conflicts by myself, instead, I want to take advantage of the UPSERT statement. SQLAlchemy documentation for version 1.4 (still in beta but that's ok) shows that this is possible. But I don't know how to get this to work with flask_sqlalchemy. I've also tried to include sqlite_on_conflict='IGNORE' in constraint definitions (as mentioned in SQLAlchemy docs for version 1.3 here): class SomeModel(db.Model): __table_args__ = ( db.ForeignKeyConstraint( ('id', 'other_id'), ('other_table.id', 'other_table.other_id'), sqlite_on_conflict='IGNORE' ), ) # ... I then verified SQL output with SQLALCHEMY_ECHO set to True and it didn't work at all... I tried with both 1.3 and 1.4 SQLAlchemy versions.
I found the solution thanks to this gist (credits for droustchev) which I slightly edited. It's a bit messy but it works: from sqlalchemy.ext.compiler import compiles from sqlalchemy.sql import Insert @compiles(Insert, 'sqlite') def suffix_insert(insert, compiler, **kwargs): stmt = compiler.visit_insert(insert, **kwargs) if insert.dialect_kwargs.get('sqlite_on_conflict_do_nothing'): stmt += ' ON CONFLICT DO NOTHING' return stmt Insert.argument_for('sqlite', 'on_conflict_do_nothing', False) Just include this code in your models.py file (or whatever filename you have) and then you can write: some_random_values = [ {'id': 1, 'some_column': 'asadsadad'}, {'id': 2, 'some_column': 'dsawaefds'} ] stmt = db.insert(MyModel, sqlite_on_conflict_do_nothing=True).values(some_random_values) db.session.execute(stmt) This should do the trick.
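Independent of SQLAlchemy, the clause itself is easy to sanity-check against SQLite (ON CONFLICT needs SQLite >= 3.24) with the stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)')
conn.execute("INSERT INTO t VALUES (1, 'first')")
# the duplicate key is silently skipped instead of raising IntegrityError
conn.execute("INSERT INTO t VALUES (1, 'second') ON CONFLICT DO NOTHING")
row = conn.execute('SELECT val FROM t WHERE id = 1').fetchone()
print(row[0])  # first
```

This is exactly the SQL the compiled statement above should emit, so comparing it with your SQLALCHEMY_ECHO output is a quick way to confirm the hook fired.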
8
8
64,896,838
2020-11-18
https://stackoverflow.com/questions/64896838/how-do-i-avoid-type-errors-when-internal-function-returns-union-that-could-be
I've been running into a bit of weirdness with Unions (and Optionals, of course) in Python - namely it seems that the static type checker tests properties against all member of a union, and not a member of the union (i.e. it seems overly strict?). As an example, consider the following: import pandas as pd def test_dummy() -> pd.DataFrame: df = pd.DataFrame() df = df.fillna(df) return df This creates a type warning, as pd.fillna(..., inplace: Bool = False, ...) -> Optional[pd.DataFrame] (it is a None return if inplace=True). I suspect that in theory the static type checker should realize the return of the function changes depending on the arguments (as that should be known when code is written), but that's a bit beyond the point. I have the following questions: What is the best way to resolve this? I can think of two solutions: i) do nothing -- which creates ugly squiggles in my code ii) cast the return of fillna to a pd.DataFrame; my understanding is this is a informative step to the static type checker so should not cause any concerns or issues? Let us consider that I'm writing a function f which, similarly to this, has its return types vary depending on the function call inputs, and this should be determinable before runtime. In order to avoid such errors in the future; what is the best way to go about writing this function? Would it be better to do something like a @typing.overload?
The underlying function should really be defined as an overload -- I'd suggest a patch to pandas probably Here's what the type looks like right now: def fillna( self: FrameOrSeries, value=None, method=None, axis=None, inplace: bool_t = False, limit=None, downcast=None, ) -> Optional[FrameOrSeries]: ... in reality, a better way to represent this is to use an @overload -- the function returns None when inplace = True: @overload def fillna( self: FrameOrSeries, value=None, method=None, axis=None, inplace: Literal[True] = False, limit=None, downcast=None, ) -> None: ... @overload def fillna( self: FrameOrSeries, value=None, method=None, axis=None, inplace: Literal[False] = False, limit=None, downcast=None, ) -> FrameOrSeries: ... def fillna( self: FrameOrSeries, value=None, method=None, axis=None, inplace: bool_t = False, limit=None, downcast=None, ) -> Optional[FrameOrSeries]: # actual implementation but assuming you can't change the underlying library you have several approaches to unpacking the union. I made a video about this specifically for re.match but I'll reiterate here since it's basically the same problem (Optional[T]) option 1: an assert indicating the expected return type the assert tells the type checker something it doesn't know: that the type is narrower than it knows about. 
mypy will trust this assertion and the type will be assumed to be pd.DataFrame def test_dummy() -> pd.DataFrame: df = pd.DataFrame() ret = df.fillna(df) assert ret is not None return ret option 2: cast explicitly tell the type checker that the type is what you expect, "cast"ing away the None-ness from typing import cast def test_dummy() -> pd.DataFrame: df = pd.DataFrame() ret = cast(pd.DataFrame, df.fillna(df)) return ret type: ignore the (imo) hacky solution is to tell the type checker to ignore the incompatibility, I would not suggest this approach but it can be helpful as a quick fix def test_dummy() -> pd.DataFrame: df = pd.DataFrame() ret = df.fillna(df) return ret # type: ignore
7
6
64,890,117
2020-11-18
https://stackoverflow.com/questions/64890117/what-is-the-best-way-to-generate-all-binary-strings-of-the-given-length-in-pytho
I'm now studying recursion and trying to write some code to generate all binary strings of a given length 'n'. I found code that uses a for loop: n = 5 for i in range(2**n, 2**(n+1)): print(bin(i)[3:]) But is there any other way to solve this problem using recursion? Thank you!
It's hard to determine what way is "the best" ;) We have to add zero or one to the current string and go to the next recursion level. The stop condition is reaching the needed length (here n-1, because we have to provide the leading one corresponding to your example): def genbin(n, bs = ''): if n-1: genbin(n-1, bs + '0') genbin(n-1, bs + '1') else: print('1' + bs) genbin(4) 1000 1001 1010 1011 1100 1101 1110 1111
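If you would rather get all 2**n strings back as a list (no leading-one trick, no printing), a small recursive variant:

```python
def all_bin(n):
    # base case: exactly one string of length 0
    if n == 0:
        return ['']
    # prepend '0' and '1' to every (n-1)-bit string
    return [bit + rest for bit in '01' for rest in all_bin(n - 1)]

print(all_bin(3))
# ['000', '001', '010', '011', '100', '101', '110', '111']
```

Returning a list instead of printing makes the function easier to test and reuse.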
10
5
64,795,367
2020-11-11
https://stackoverflow.com/questions/64795367/method-to-determine-lowest-required-versions-of-python-packages-for-a-project-pa
This question concerns any package, not just the Python version itself. To give some context: we are planning to build an internal package at work, which naturally will have many dependencies. To give freedom to our developers and avoid messy version conflicts, I want to specify broader constraints in the package requirements(.txt), for example, pandas>=1.0 or pyspark>=1.0.0, <2.0. Is there a way to efficiently determine/test which are the lowest required versions for a given code base? I could install pandas==0.2.4 and see if the code runs, and so on, but that approach seems to get out of hand pretty fast. It's the first time I have worked on package building, so I am kinda lost on that. Looking at other packages' source code (on GitHub) didn't help me, because I have no idea what methodology developers use to specify dependency constraints.
Installing a project using its "lower bounds" dependencies is useful to ensure that (a) you have those lower bounds specified properly and (b) the tests pass. This is a popular feature request for pip, and is being tracked in Add a resolver option to use the specified minimum version for a dependency #8085. The issue has been open for years and there doesn't appear to be sufficient volunteer time to implement it, despite the support and interest evident on the issue tracker. Pip's solver only supports targeting the latest satisfactory versions of dependencies. Fortunately, alternate resolver strategies have been implemented in a different frontend uv. uv pip install can be used as a (mostly) drop-in replacement for pip install, but you'll have a new resolver option available: --resolution resolution The strategy to use when selecting between the different compatible versions for a given package requirement. By default, uv will use the latest compatible version of each package (highest). Possible values: highest: Resolve the highest compatible version of each package lowest: Resolve the lowest compatible version of each package lowest-direct: Resolve the lowest compatible version of any direct dependencies, and the highest compatible version of any transitive dependencies Helpful links: https://docs.astral.sh/uv/concepts/resolution/#resolution-strategy https://docs.astral.sh/uv/concepts/resolution/#lower-bounds https://docs.astral.sh/uv/reference/settings/#pip_resolution Let's see what this looks like using a popular Python package with a few dependencies: requests. For this example, I'll create parallel venvs based on a Python 3.9 interpreter, and manage them externally, creating a uv installation in the second env. Note: Usually I'd recommend a global uv installation not tied to any particular env, and let it autodetect the env (see the uv installation docs about that), but for the purposes of this example it's clearer if the uv installation is tied to .venv2. 
$ python3.9 -m venv .venv1 $ python3.9 -m venv .venv2 $ .venv2/bin/python3 -m pip install -q uv Doing a Python pip install in the fresh .venv1: $ .venv1/bin/python -m pip install -q requests $ .venv1/bin/python -m pip freeze certifi==2024.7.4 charset-normalizer==3.3.2 idna==3.8 requests==2.32.3 urllib3==2.2.2 It selected the latest versions of dependencies. By default uv would select those same versions: $ .venv2/bin/python -m uv pip install requests Resolved 5 packages in 1ms Installed 5 packages in 1ms + certifi==2024.7.4 + charset-normalizer==3.3.2 + idna==3.8 + requests==2.32.3 + urllib3==2.2.2 However, with a lowest resolution strategy we will get older urllib3 1.x, amongst other things: $ .venv2/bin/python -m uv pip install --resolution=lowest --force-reinstall requests==2.32.3 Resolved 5 packages in 21ms Prepared 5 packages in 0.69ms Uninstalled 5 packages in 1ms Installed 5 packages in 2ms - certifi==2024.7.4 + certifi==2017.4.17 - charset-normalizer==3.3.2 + charset-normalizer==2.0.0 - idna==3.8 + idna==2.5 ~ requests==2.32.3 - urllib3==2.2.2 + urllib3==1.21.1 This matches the lower bounds advertised by requests v2.32.3 here. Does it still work? Yes! $ .venv2/bin/python -c 'import requests; print(requests.get("https://exmaple.org"))' <Response [200]> Does it still work if we install even lower deps? Probably not! $ .venv2/bin/python -m uv pip install 'urllib3<1.21' Resolved 1 package in 41ms Prepared 1 package in 31ms Uninstalled 1 package in 0.59ms Installed 1 package in 0.62ms - urllib3==1.21 + urllib3==1.20 $ .venv2/bin/python -c 'import requests; print(requests.get("https://exmaple.org"))' .venv2/lib/python3.9/site-packages/requests/__init__.py:113: RequestsDependencyWarning: urllib3 (1.20) or chardet (None)/charset_normalizer (2.0.0) doesn't match a supported version! 
warnings.warn( Traceback (most recent call last): File "<string>", line 1, in <module> File ".venv2/lib/python3.9/site-packages/requests/api.py", line 73, in get return request("get", url, params=params, **kwargs) File ".venv2/lib/python3.9/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File ".venv2/lib/python3.9/site-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File ".venv2/lib/python3.9/site-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File ".venv2/lib/python3.9/site-packages/requests/adapters.py", line 633, in send conn = self.get_connection_with_tls_context( File ".venv2/lib/python3.9/site-packages/requests/adapters.py", line 489, in get_connection_with_tls_context conn = self.poolmanager.connection_from_host( TypeError: connection_from_host() got an unexpected keyword argument 'pool_kwargs' This technique of using --resolution=lowest-direct and/or --resolution=lowest in combination with a good test suite can be used to determine accurate lower bounds for your project metadata directly and/or recursively. Additionally, you can use the same feature to verify adequacy of the specified lower bounds in your project's CI. This later point is very important, because if you only test your changes using a pip install, you'll be getting the latest available versions of packages by default and will have no visibility of whether a change is breaking for older versions of dependencies. Also testing in your CI using a lowest resolution strategy you can gain visibility about code changes you've made which may necessitate setting a stricter lower bound in the package metadata. You can decide then whether to increase the lower bound, or rework the changes in a way that preserves compatibility with older dependencies. Another useful tool tangentially related to this question is pypi-timemachine. 
It filters the index so that only packages older than a given timestamp are available. This allows adding time-based version bounds to an installation, to simulate installation into a user environment which was created at some time in the past.
10
4
64,872,401
2020-11-17
https://stackoverflow.com/questions/64872401/os-system-calls-run-like-serial-execution-multi-threads
My co-worker asked me why his code does not run concurrently across multiple threads. I discovered that the os.system function behaves strangely in multi-threaded code, unlike other functions. I wrote a small demo to reproduce the problem. I'd appreciate it if somebody could answer my question. Run like serial execution import os import time from concurrent.futures import ThreadPoolExecutor, as_completed executor = ThreadPoolExecutor() begin = time.time() for _ in as_completed([ executor.submit(os.system, cmd) for cmd in ['sleep 10', 'sleep 3', 'sleep 2'] ]): print("completed after", time.time() - begin, "seconds") output completed after 10.005892276763916 seconds completed after 13.017478942871094 seconds completed after 15.034425258636475 seconds Run normally Replace os.system with subprocess.call. import subprocess import time from concurrent.futures import ThreadPoolExecutor, as_completed executor = ThreadPoolExecutor() begin = time.time() for _ in as_completed([ executor.submit(subprocess.call, cmd, shell=True) for cmd in ['sleep 10', 'sleep 3', 'sleep 2'] ]): print("completed after", time.time() - begin, "seconds") output completed after 2.019604206085205 seconds completed after 3.017033338546753 seconds completed after 10.009897232055664 seconds
os.system is implemented by calling the standard C function system: This is implemented by calling the Standard C function system(), and has the same limitations. And according to the man page of the system function: Blocking SIGCHLD while waiting for the child to terminate prevents the application from catching the signal and obtaining status from system()'s child process before system() can get the status itself. and: Using the system() function in more than one thread in a process or when the SIGCHLD signal is being manipulated by more than one thread in a process may produce unexpected results. Since SIGCHLD must be blocked during the execution, calling system concurrently would not work. subprocess.Popen (the underlying class that the subprocess.call function uses) on the other hand handles fork/exec/wait without blocking SIGCHLD and can therefore be executed concurrently.
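A quick POSIX-only check (it assumes a sleep binary on PATH) that the subprocess variant really overlaps the waits:

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def call(cmd):
    # Popen-based: no SIGCHLD blocking, safe to run concurrently
    return subprocess.call(cmd, shell=True)

start = time.time()
with ThreadPoolExecutor(max_workers=3) as ex:
    results = list(ex.map(call, ['sleep 1'] * 3))
elapsed = time.time() - start
print(elapsed < 2.5)  # True: the three one-second sleeps overlap
```

The same loop with os.system in place of call would take roughly the sum of the sleep times, as shown in the question.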
6
2
64,799,827
2020-11-12
https://stackoverflow.com/questions/64799827/python-json-dumps-outputs-all-my-data-into-one-line-but-i-want-to-have-a-new-l
I am working with Python and some json data. I am looping through my data (which are all dictionaries) and when I print the loop values to my console, I get 1 dictionary per line. However, when I do the same line of code with json.dumps() to convert my object into a string to be able to be output, I get multiple lines within the dictionary versus wanting the new line outside the dictionary. How do you add new lines after each dictionary value when looping? Code example: def test(values, filename): with open(filename, 'w') as f: for value in values: print(json.dumps(value, sort_keys=True)) # gives me each dictionary in a new line f.write(json.dumps(value, sort_keys=True, indent=0) #gives me a new line for each key/value pair instead of after each dictionary. Output in the console: {"first_name": "John", "last_name": "Smith", "food": "corn"} {"first_name": "Jane", "last_name": "Doe", "food": "soup"} Output in my output file: {"first_name": "John", "last_name": "Smith", "food": "corn"}{"first_name": "Jane", "last_name": "Doe", "food": "soup"} What code am I missing to get a new line for each dictionary value so that my output file looks the same as my console?
You can write a newline after each f.write(json.dumps(value, sort_keys=True, indent=0)) like this: f.write('\n')
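Putting it together (one JSON object per line is the common "JSON Lines" layout):

```python
import json
import tempfile

values = [
    {"first_name": "John", "last_name": "Smith", "food": "corn"},
    {"first_name": "Jane", "last_name": "Doe", "food": "soup"},
]

# write one compact JSON dictionary per line
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for value in values:
        f.write(json.dumps(value, sort_keys=True) + "\n")
    path = f.name

with open(path) as f:
    lines = f.read().splitlines()
print(len(lines))  # 2: one dictionary per line
```

Note that indent=0 is what was splitting each dictionary across lines; dropping it keeps every object on a single line.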
9
8
64,815,227
2020-11-13
https://stackoverflow.com/questions/64815227/attributeerror-module-seaborn-has-no-attribute-histplot
I'm trying to plot using sns.histplot on the Titanic Dataset in Kaggle's Jupyter Notebook. This is my code: sns.histplot(train, x = "Age", hue="Sex") But it's throwing me this error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-11-d14c3b5a0670> in <module> ----> 1 sns.histplot(train, x = "Age", hue="Sex") AttributeError: module 'seaborn' has no attribute 'histplot' I have made sure to import seaborn (previous plots using sns.barplot worked fine). I'm running on Mac OS X 10.15.6, and Seaborn version 0.11.0. Could somebody point out what went wrong? Thanks in advance.
If you have a standard python installation, update with pip: pip install -U seaborn If you are using an Anaconda distribution, at the anaconda prompt (base) environment, or activate the appropriate environment: # update all the packages in the environment conda update --all # or conda update seaborn See Anaconda: Managing Packages
40
51
64,807,163
2020-11-12
https://stackoverflow.com/questions/64807163/importerror-cannot-import-name-from-partially-initialized-module-m
I'm upgrading an application from Django 1.11.25 (Python 2.6) to Django 3.1.3 (Python 3.8.5) and, when I run manage.py makemigrations, I receive this message: File "/home/eduardo/projdevs/upgrade-intra/corporate/models/section.py", line 9, in <module> from authentication.models import get_sentinel ImportError: cannot import name 'get_sentinel' from partially initialized module 'authentication.models' (most likely due to a circular import) (/home/eduardo/projdevs/upgrade-intra/authentication/models.py) My models are: authentication / models.py from django.conf import settings from django.contrib.auth.models import AbstractUser, UserManager from django.db import models from django.db.models.signals import post_save from django.utils import timezone from corporate.constants import GROUP_SUPPORT from corporate.models import Phone, Room, Section from library.exceptions import ErrorMessage from library.model import update_through_dict from .constants import INTERNAL_USER, EXTERNAL_USER, SENTINEL_USERNAME, SPECIAL_USER, USER_TYPES_DICT class UserProfile(models.Model): user = models.OneToOneField( 'User', on_delete=models.CASCADE, unique=True, db_index=True ) ... phone = models.ForeignKey('corporate.Phone', on_delete=models.SET_NULL, ...) room = models.ForeignKey('corporate.Room', on_delete=models.SET_NULL, ...) section = models.ForeignKey('corporate.Section', on_delete=models.SET_NULL, ...) objects = models.Manager() ... 
class CustomUserManager(UserManager): def __init__(self, type=None): super(CustomUserManager, self).__init__() self.type = type def get_queryset(self): qs = super(CustomUserManager, self).get_queryset() if self.type: qs = qs.filter(type=self.type).order_by('first_name', 'last_name') return qs def get_this_types(self, types): qs = super(CustomUserManager, self).get_queryset() qs = qs.filter(type__in=types).order_by('first_name', 'last_name') return qs def get_all_excluding(self, types): qs = super(CustomUserManager, self).get_queryset() qs = qs.filter(~models.Q(type__in=types)).order_by('first_name', 'last_name') return qs class User(AbstractUser): type = models.PositiveIntegerField('...', default=SPECIAL_USER) username = models.CharField('...', max_length=256, unique=True) first_name = models.CharField('...', max_length=40, blank=True) last_name = models.CharField('...', max_length=80, blank=True) date_joined = models.DateTimeField('...', default=timezone.now) previous_login = models.DateTimeField('...', default=timezone.now) objects = CustomUserManager() ... def get_profile(self): if self.type == INTERNAL_USER: ... return None def get_or_create_profile(self): profile = self.get_profile() if not profile and self.type == INTERNAL_USER: ... return profile def update(self, changes): ... class ExternalUserProxy(User): objects = CustomUserManager(type=EXTERNAL_USER) class Meta: proxy = True verbose_name = '...' verbose_name_plural = '...' class InternalUserProxy(User): objects = CustomUserManager(type=INTERNAL_USER) class Meta: proxy = True verbose_name = '...' verbose_name_plural = '...' 
def create_profile(sender, instance, created, **kwargs): if created and instance.type == INTERNAL_USER: try: profile = UserProfile() profile.user = instance profile.save() except: pass post_save.connect(create_profile, sender=User) def get_sentinel(): try: sentinel = User.objects.get(username__exact=SENTINEL_USERNAME) except User.DoesNotExist: settings.LOGGER.error("...") from django.contrib.auth.models import Group sentinel = User() sentinel.username = SENTINEL_USERNAME sentinel.first_name = "..." sentinel.last_name = "..." sentinel.set_unusable_password() sentinel.save() technical = Group.objects.get(name=GROUP_SUPPORT) sentinel = User.objects.get(username__exact=SENTINEL_USERNAME) sentinel.groups.add(technical) sentinel.save() return sentinel corporate / models / __init__.py ... from .section import Section ... corporate / models / section.py from django.conf import settings from authentication.models import get_sentinel from .room import Room class Section(models.Model): ... boss = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.SET(get_sentinel), ...) surrogate = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.SET(get_sentinel), ...) room = models.ForeignKey(Room, on_delete=models.SET_NULL, ...) is_subordinate_to = models.ForeignKey('self', on_delete=models.SET_NULL, ...) ... What am I doing wrong?
You have a circular import. authentication/models imports corporate/models, which imports corporate/models/section, which imports authentication/models. You can't do that. Rewrite and/or rearrange your modules so that circular imports aren't needed. One strategy to do this is to organize your modules into a hierarchy, and make sure that a module only imports other modules that are lower in the hierarchy. (This hierarchy can be an actual directory structure, but it doesn't have to be; it can just be a mental note in the programmer's mind.)
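The failure mode is easy to reproduce without Django. This sketch builds two tiny modules on the fly (the names are made up) and triggers the exact same "partially initialized module" error; the usual mechanical fixes are moving one of the imports inside the function that needs it, or factoring the shared symbol into a third, lower-level module:

```python
import os
import sys
import tempfile

pkg = tempfile.mkdtemp()
with open(os.path.join(pkg, "alpha.py"), "w") as f:
    f.write("import beta\nX = 1\n")       # alpha imports beta...
with open(os.path.join(pkg, "beta.py"), "w") as f:
    f.write("from alpha import X\n")      # ...which imports back from alpha

sys.path.insert(0, pkg)
err = None
try:
    import alpha  # alpha -> beta -> alpha, but alpha.X doesn't exist yet
except ImportError as exc:
    err = str(exc)
print(err)
```

The printed message has the same "cannot import name ... from partially initialized module ... (most likely due to a circular import)" shape as the traceback in the question.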
151
65
64,809,370
2020-11-12
https://stackoverflow.com/questions/64809370/how-can-i-invert-a-melspectrogram-with-torchaudio-and-get-an-audio-waveform
I have a MelSpectrogram generated from: eval_seq_specgram = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_fft=256)(eval_audio_data).transpose(1, 2) So eval_seq_specgram now has a size of torch.Size([1, 128, 499]), where 499 is the number of timesteps and 128 is the n_mels. I'm trying to invert it, so I'm trying to use GriffinLim, but before doing that, I think I need to invert the melscale, so I have: inverse_mel_pred = torchaudio.transforms.InverseMelScale(sample_rate=sample_rate, n_stft=256)(eval_seq_specgram) inverse_mel_pred has a size of torch.Size([1, 256, 499]) Then I'm trying to use GriffinLim: pred_audio = torchaudio.transforms.GriffinLim(n_fft=256)(inverse_mel_pred) but I get an error: Traceback (most recent call last): File "evaluate_spect.py", line 63, in <module> main() File "evaluate_spect.py", line 51, in main pred_audio = torchaudio.transforms.GriffinLim(n_fft=256)(inverse_mel_pred) File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torchaudio/transforms.py", line 169, in forward return F.griffinlim(specgram, self.window, self.n_fft, self.hop_length, self.win_length, self.power, File "/home/shamoon/.local/share/virtualenvs/speech-reconstruction-7HMT9fTW/lib/python3.8/site-packages/torchaudio/functional.py", line 179, in griffinlim inverse = torch.istft(specgram * angles, RuntimeError: The size of tensor a (256) must match the size of tensor b (129) at non-singleton dimension 1 Not sure what I'm doing wrong or how to resolve this.
Just for history, full code: import torch import torchaudio import IPython waveform, sample_rate = torchaudio.load("wavs/LJ030-0196.wav", normalize=True) n_fft = 256 n_stft = int((n_fft//2) + 1) transform = torchaudio.transforms.MelSpectrogram(sample_rate, n_fft=n_fft) inverse_transform = torchaudio.transforms.InverseMelScale(sample_rate=sample_rate, n_stft=n_stft) griffinlim_transform = torchaudio.transforms.GriffinLim(n_fft=n_fft) mel_specgram = transform(waveform) inverse_waveform = inverse_transform(mel_specgram) pseudo_waveform = griffinlim_transform(inverse_waveform) And IPython.display.Audio(waveform.numpy(), rate=sample_rate) IPython.display.Audio(pseudo_waveform.numpy(), rate=sample_rate)
15
5